
Yann Neuhaus

All blog entries from http://www.dbi-services.com/

Documentum Multiple ADTS - Ratio of rendition creations between instances (Part 2)

Thu, 2015-01-29 02:15

This is the second part of my previous blog about the rendition creation ratio between two ADTS servers. In this part I will describe another way of getting this ratio. This method does not require auditing to be enabled, which makes it preferable in terms of database space footprint.

DMR_CONTENT

This method uses objects that already exist in the docbase and are populated each time a rendition is created: the dmr_content object is the content of the rendition itself. Fortunately, the server where the content was created is recorded in the object.

You just have to adapt the following query and execute it:

select set_client,count(*) as val from dmr_content where full_format = 'pdf' and set_client like '%chbsmv_dmrsp%' group by set_client;

ADTS_dmr_content_ratio_blur.PNG

And now you have all your ADTS servers listed with the total number of renditions each of them created.
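If you want to compare the ratio over a specific period rather than since the beginning of the docbase, you can also restrict on set_time. This is just a sketch to adapt: the date literal and the set_client filter are examples.

select set_client,count(*) as val from dmr_content where full_format = 'pdf' and set_client like '%chbsmv_dmrsp%' and set_time > DATE('01/01/2015','mm/dd/yyyy') group by set_client;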

Flush one SQL statement to hard parse it again

Wed, 2015-01-28 22:12

If you want a statement to be hard parsed on its next execution, you can flush the shared pool, but you don't want all the cursors to be hard parsed. Here is how to flush only one statement, illustrated with a case where it can be useful.
During our performance training, this is how I introduce Adaptive Cursor Sharing: I show the bind variable peeking problem that is well known to everyone who was a DBA at the time of 9iR2 upgrades.

I have a customer table with very few customers born before the 30's and a lot of them born in the 70's. The optimal plan is an index access for those born in 1913 and a full table scan for those born in 1971.

I have a histogram on that column so the optimizer can choose the optimal plan, whatever the value is. But I'm a good developer and I'm using bind variables in order to avoid parsing and wasting shared pool memory for each value.

Here is my first user that executes the query for the value 1913

SQL> execute :YEAR:=1913;
PL/SQL procedure successfully completed.

SQL> select cust_gender,count(*) from CUSTOMERS where cust_year_of_birth=:YEAR group by cust_gender;

C   COUNT(*)
- ----------
M          4
F          1

Here is the execution plan:
SQL> select * from table(dbms_xplan.display_cursor(format=>'iostats last +peeked_binds'));

PLAN_TABLE_OUTPUT
-------------------------------------------------------------------------------------------------
SQL_ID  dpxj8c5y81bdr, child number 0
-------------------------------------
select cust_gender,count(*) from CUSTOMERS where
cust_year_of_birth=:YEAR group by cust_gender

Plan hash value: 790974867

------------------------------- ------------ --------------------------------------
| Id  | Operation               | Name       | Starts | E-Rows | A-Rows | Buffers |
------------------------------- ------------ --------------------------------------
|   0 | SELECT STATEMENT        |            |      1 |        |      2 |       7 |
|   1 |  HASH GROUP BY          |            |      1 |      2 |      2 |       7 |
|   2 |   TABLE ACCESS BY INDEX | CUSTOMERS  |      1 |      5 |      5 |       7 |
|*  3 |    INDEX RANGE SCAN     | DEMO_CUST_ |      1 |      5 |      5 |       2 |
------------------------------- ------------ --------------------------------------
And thanks to the '+peeked_binds' format option, I know that it has been optimized for 1913:
Peeked Binds (identified by position):
--------------------------------------
   1 - :1 (NUMBER): 1913

Predicate Information (identified by operation id):
---------------------------------------------------
   3 - access("CUST_YEAR_OF_BIRTH"=:YEAR)
I have the right plan, optimal for my value.

But I've used bind variables in order to share my cursor. Others will execute the same query with other values. They will only soft parse and share my cursor. Look at it:

SQL> execute :YEAR:=1971;
PL/SQL procedure successfully completed.

SQL> select cust_gender,count(*) from CUSTOMERS where cust_year_of_birth=:YEAR group by cust_gender;

C   COUNT(*)
- ----------
M        613
F        312
Look at the plan, it's the same:
SQL> select * from table(dbms_xplan.display_cursor(format=>'iostats last +peeked_binds'));

PLAN_TABLE_OUTPUT
--------------------------------------------------------------------------------------
SQL_ID  dpxj8c5y81bdr, child number 0
-------------------------------------
select cust_gender,count(*) from CUSTOMERS where
cust_year_of_birth=:YEAR group by cust_gender

Plan hash value: 790974867

------------------------------- --------------- --------------------------------------
| Id  | Operation               | Name          | Starts | E-Rows | A-Rows | Buffers |
------------------------------- --------------- --------------------------------------
|   0 | SELECT STATEMENT        |               |      1 |        |      2 |     228 |
|   1 |  HASH GROUP BY          |               |      1 |      2 |      2 |     228 |
|   2 |   TABLE ACCESS BY INDEX | CUSTOMERS     |      1 |      5 |    925 |     228 |
|*  3 |    INDEX RANGE SCAN     | DEMO_CUST_YEA |      1 |      5 |    925 |       4 |
------------------------------- --------------- --------------------------------------

Peeked Binds (identified by position):
--------------------------------------
   1 - :1 (NUMBER): 1913

Predicate Information (identified by operation id):
---------------------------------------------------
   3 - access("CUST_YEAR_OF_BIRTH"=:YEAR)
The plan is optimized for 1913, estimating 5 rows (E-Rows) but now returning 925 rows (A-Rows). That may be bad. Imagine a nested loop planned for a few rows but finally running on millions of rows...

The goal of this post is not to show Adaptive Cursor Sharing, which may solve the issue once the problem has occurred. And Adaptive Cursor Sharing does not work in all contexts (see Bug 8357294: ADAPTIVE CURSOR SHARING DOESN'T WORK FOR STATIC SQL CURSORS FROM PL/SQL).

The goal is to answer a question I had during the workshop: can we flush one cursor in order to have it hard parsed again? It's a good question, and it's a good idea to avoid flushing the whole shared pool!

This is not new (see here, here, here, here, here,...). But here is the query I use to quickly flush a statement with its sql_id.

I have the following cursor in memory:

SQL> select child_number,address,hash_value,last_load_time from v$sql where sql_id='dpxj8c5y81bdr';

CHILD_NUMBER ADDRESS          HASH_VALUE LAST_LOAD_TIME
------------ ---------------- ---------- -------------------
           0 00000000862A0E08 2088807863 2015-01-29/14:56:46
and I flush it with dbms_shared_pool.purge:
SQL> exec for c in (select address,hash_value,users_executing,sql_text from v$sqlarea where sql_id='dpxj8c5y81bdr') loop sys.dbms_shared_pool.purge(c.address||','||c.hash_value,'...'); end loop;

PL/SQL procedure successfully completed.
I have 3 remarks about it:

1. If the cursor is currently running, the procedure will wait.

2. In 10g you have to set the following event for your session:

alter session set events '5614566 trace name context forever';

3. The '...' is anything you want that is not P, Q, R, or T, which are used for Procedures, seQuences, tRiggers, and Types. Anything else stands for cursors. Don't worry, this is in the doc.

Ok, the cursor is not there anymore:

SQL> select child_number,address,hash_value,last_load_time from v$sql where sql_id='dpxj8c5y81bdr';
no rows selected
And the next execution will optimize it for its peeked bind value:
SQL> execute :YEAR:=1971;
PL/SQL procedure successfully completed.

SQL> select cust_gender,count(*) from CUSTOMERS where cust_year_of_birth=:YEAR group by cust_gender;

C   COUNT(*)
- ----------
M          4
F          1

SQL> select * from table(dbms_xplan.display_cursor(format=>'iostats last +peeked_binds'));

PLAN_TABLE_OUTPUT
------------------------------------------------------------------------------------------
SQL_ID  dpxj8c5y81bdr, child number 0
-------------------------------------
select cust_gender,count(*) from CUSTOMERS where
cust_year_of_birth=:YEAR group by cust_gender

Plan hash value: 1577413243

-----------------------------------------------------------------------------
| Id  | Operation          | Name      | Starts | E-Rows | A-Rows | Buffers |
-----------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |           |      1 |        |      2 |     233 |
|   1 |  HASH GROUP BY     |           |      1 |      2 |      2 |     233 |
|*  2 |   TABLE ACCESS FULL| CUSTOMERS |      1 |    925 |      5 |     233 |
-----------------------------------------------------------------------------

Peeked Binds (identified by position):
--------------------------------------
   1 - :1 (NUMBER): 1971

Predicate Information (identified by operation id):
---------------------------------------------------
   2 - filter("CUST_YEAR_OF_BIRTH"=:YEAR)

Here is the cursor that has been re-loaded, re-parsed, and re-optimized:
SQL> select child_number,address,hash_value,last_load_time from v$sql where sql_id='dpxj8c5y81bdr';

CHILD_NUMBER ADDRESS          HASH_VALUE LAST_LOAD_TIME
------------ ---------------- ---------- -------------------
           0 00000000862A0E08 2088807863 2015-01-29/14:56:49
That's the right plan: a full table scan when I want to read a lot of rows.

Don't take it wrong: this is not a solution. It's just a quick fix when a plan has gone wrong because the first execution was done with a special value. We flush the plan and expect that the following execution is done with a regular value.
You probably have the sql_id, as you have seen a long running query with a bad plan. Here is the way to flush all its children - ready to copy/paste in case of emergency:

set serveroutput on
begin
 for c in (select address,hash_value,users_executing,sql_text from v$sqlarea where sql_id='&sql_id') 
 loop 
  dbms_output.put_line(c.users_executing||' users executing '||c.sql_text);
  sys.dbms_shared_pool.purge(c.address||','||c.hash_value,'...'); 
  dbms_output.put_line('flushed.');
 end loop;
end;
/

Stay with non-CDB or go to CDB?

Wed, 2015-01-28 10:19

This is a question that is starting to be raised quite frequently. Oracle released the multitenant architecture 1.5 years ago and now says that the previous architecture - known as non-CDB - is deprecated. What does that mean? Do we have to go to the CDB architecture even if we don't want to use/pay for the multitenant option?

When to gather workload system statistics?

Fri, 2015-01-23 08:40

This month we started to give our Oracle Tuning Workshop, and with a new workshop come new questions. We advise giving the optimizer the most accurate statistics that we can have. That suggests that WORKLOAD statistics are better than NOWORKLOAD ones, because they gather the average number of blocks read per multiblock read rather than using default values. But then the question is: which time period do you choose to gather workload statistics, and with which interval duration?
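As a reminder, here is a minimal sketch of how workload statistics can be gathered with DBMS_STATS, either over a fixed interval or by starting and stopping the capture manually around a representative period (the 120-minute interval is just an example):

SQL> exec dbms_stats.gather_system_stats(gathering_mode=>'INTERVAL', interval=>120);

or

SQL> exec dbms_stats.gather_system_stats('START');
SQL> -- ... let the representative workload run ...
SQL> exec dbms_stats.gather_system_stats('STOP');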

Memory management, OOM issues with SQL Server 2014 In-memory OLTP

Thu, 2015-01-22 08:47

Last week I gave a workshop about SQL Server 2014 and its new features. The first day we worked on the new In-Memory OLTP engine and different topics such as the new internal storage, the new transaction processing behavior, and the new checkpointing process. During this day, one of the attendees asked me about memory management with the In-Memory OLTP feature. It was a very interesting question, but unfortunately I didn't have the time to discuss it with him, so I decided to publish something on the topic. The subject can be extensive and time consuming, so I will only try to give a good overview of how memory management works for memory-optimized objects and how important monitoring is in this particular context.

First of all, keep in mind that memory-optimized tables are a memory-oriented feature: memory-optimized structures (indexes and data rows) reside exclusively in memory. This is by design and this point is very important. I will explain why later in this blog post.

For the moment, let’s focus on memory aspects of this new feature. In-memory OLTP is not different from other memory consumers on SQL Server. Indeed, In-memory OLTP objects have their own memory clerk MEMORYCLERK_XTP. Let’s have a look at the sys.dm_os_memory_clerks DMV to show information concerning In-Memory OLTP allocated memory.


select
    [type],
    name,
    memory_node_id,
    page_size_in_bytes,
    pages_kb / 1024 as size_MB
from sys.dm_os_memory_clerks
where type like '%xtp%';
go

 

blog_27_1_memory_clerk


In my case we may notice that the database dbi_hk (DB_ID = 24) contains memory-optimized objects with a dedicated memory clerk. The other xtp memory clerks are dedicated to system threads (first line) and the DAC (last line), but let's focus on my user database memory clerk, which has 2336MB of page memory allocated.

On my lab environment, I have only one memory-optimized table named bigTransactionHistory_xtp inside the dbi_hk database. Let’s have a look at the new DMV sys.dm_db_xtp_table_memory_stats to show memory information for this table:

 

SELECT
    object_name(object_id) AS table_name,
    memory_allocated_for_indexes_kb / 1024 as mem_alloc_index_mb,
    memory_allocated_for_table_kb / 1024 as mem_alloc_table_mb,
    memory_used_by_indexes_kb / 1024 as mem_used_index_mb,
    memory_used_by_table_kb / 1024 as mem_used_table_mb,
    (memory_allocated_for_table_kb + memory_allocated_for_indexes_kb) / 1024 as mem_alloc_total_mb,
    (memory_used_by_table_kb + memory_used_by_indexes_kb) / 1024 as mem_used_total_mb
FROM sys.dm_db_xtp_table_memory_stats
WHERE object_id = object_id('bigTransactionHistory_xtp');
go

 

 

blog_27_2_xtp_table_memory_stats


We may expect to retrieve the same amount of allocated page memory here as in the dedicated memory clerk of the dbi_hk database. This is approximately the case. The difference probably concerns memory allocated for internal system structures. We could have a look at the related DMV sys.dm_db_xtp_memory_consumers, but I will focus on it in a future blog post.

At this point we know where to find information about the memory consumption of memory-optimized objects, but I still have one question in mind: how does the SQL Server memory manager deal with concurrent memory activity between memory-optimized tables and their disk-based counterparts? Like any other memory consumer, the In-Memory OLTP engine responds to memory pressure, but only to a limited degree, because the memory consumed by data and indexes can't be released even under memory pressure.

To deal correctly with the In-Memory OLTP engine and the other consumers, we have to turn to the Resource Governor (RG) side. Indeed, by default all databases are mapped to the default resource pool, regardless of whether RG is enabled. Likewise, workloads issued against both disk-based tables and memory-optimized tables will run concurrently in the default resource pool if no special configuration is performed. In that case, RG uses an internal threshold for In-Memory OLTP to avoid conflicts over pool usage. The threshold depends on the memory size configured for SQL Server, and especially on the target commit memory of the SQL Server instance. You can refer to the Microsoft documentation here for more details.

So, in my case the max memory setting value is configured to 6144MB and the target committed memory is as follows:

 

select        committed_target_kb / 1024 as committed_target_mb from sys.dm_os_sys_info;

 

blog_27_3_target_commit_memory

 

According to the Microsoft documentation (cf. link above), the percentage available for in-memory tables will be 70%, or 0.7 * 4898 = 3429MB. I can retrieve this information by using the DMVs related to RG. You can find an original version of this script on the MSSQLTips.com website.

 

;with cte as
(
    select
        RP.pool_id,
        RP.Name,
        RP.min_memory_percent,
        RP.max_memory_percent,
        cast(RP.max_memory_kb / 1024. as numeric(12, 2)) AS max_memory_mb,
        cast(RP.used_memory_kb / 1024. as numeric(12, 2)) AS used_memory_mb,
        cast(RP.target_memory_kb / 1024. as numeric(12, 2)) AS target_memory_mb,
        cast(SI.committed_target_kb / 1024. as numeric(12, 2)) AS committed_target_mb
    from sys.dm_resource_governor_resource_pools RP
    cross join sys.dm_os_sys_info SI
)
select
    c.pool_id,
    c.Name,
    c.min_memory_percent,
    c.max_memory_percent,
    c.max_memory_mb,
    c.used_memory_mb,
    c.target_memory_mb,
    c.committed_target_mb,
    CAST(c.committed_target_mb * case when c.committed_target_mb <= 8192 then 0.7
                                      when c.committed_target_mb < 16384 then 0.75
                                      when c.committed_target_mb < 32768 then 0.8
                                      when c.committed_target_mb <= 98304 then 0.85
                                      when c.committed_target_mb > 98304 then 0.9
                                 end * c.max_memory_percent / 100 as numeric(12,2)) as [Max_for_InMemory_Objects_mb],
    CAST(c.committed_target_mb * case when c.committed_target_mb <= 8192 then 0.7
                                      when c.committed_target_mb < 16384 then 0.75
                                      when c.committed_target_mb < 32768 then 0.8
                                      when c.committed_target_mb <= 98304 then 0.85
                                      when c.committed_target_mb > 98304 then 0.9
                                 end * c.max_memory_percent / 100 as numeric(12,2)) - c.used_memory_mb as Free_for_InMemory_Objects_mb
FROM cte c;
go

 

blog_27_4_RG_memory_available_for_xtp_tables


OK, I retrieve (approximately) this value by looking at the Max_for_InMemory_Objects_mb column on the default pool row. Notice that we already have 2008MB used in the default resource pool.

At this point the In-Memory OLTP and the disk-based OLTP workloads run concurrently in the same resource pool and, of course, this is not a recommended situation. Indeed, we may end up in a situation where In-Memory OLTP consumes all the available memory of this pool. In such a situation, SQL Server will be forced to flush data pages of disk-based tables, and you know the performance impact of this process.

Let's go ahead and create an issue you can face with In-Memory OLTP in a misconfigured environment. First we decrease the max memory setting to 4096MB, and then we load another bunch of data into the bigTransactionHistory_xtp table to consume an important part of the memory available for memory-optimized objects in the default resource pool. Finally, let's have another look at the RG memory configuration by using the previous script.
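The max memory change itself is a one-liner; here is a sketch of that step (the 4096MB value is the one described above):

-- decrease the instance memory limit to 4096MB (sketch of the step described above)
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
GO
EXEC sp_configure 'max server memory (MB)', 4096;
RECONFIGURE;
GO

After this reconfiguration and the additional data load, we have a good picture of the changes applied: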

 

blog_27_5_RG_memory_available_for_xtp_tables 

As expected, several values have changed: the target memory, the memory available for memory-optimized tables and the memory used in the default resource pool. The memory now available for in-memory objects in the pool is 1605MB (3891MB - 2286MB). Now think about a bad situation where your memory-optimized table consumes all the available memory inside the default resource pool... the consequences are obvious (even if they depend on the context): probably a lot of memory pressure between the buffer pool consumer and the In-Memory OLTP consumer, and in the worst case a potential OOM issue, as follows:

 

blog_27_6_RG_default_pool_oom_condition

 

After loading data into bigTransactionHistory_xtp, we can notice that we have consumed all the memory available for in-memory objects in the default resource pool. However, as said earlier, RG guarantees a certain amount of memory for disk-based tables.

 

blog_27_6_RG_memory_available_oom_condition


OK, now let's simulate a crash recovery scenario by restarting the SQL Server instance. In my case the SQL Server engine service didn't restart correctly... ouch... What's going on? Of course my first thought was to take a look directly at the error log of my SQL Server instance. The first error message I encountered was the following:

 

blog_27_7_xtp_start_instance_error_fail_page_allocation

 

OK... it seems there is an issue during the dbi_hk database recovery process. In fact, during recovery one step consists of rebuilding the index structures and linking the data rows to them. But you can see that this step fails with an OOM (Out Of Memory) issue.

 

blog_27_8_xtp_start_instance_oom_1


blog_27_8_xtp_start_instance_oom_2


In this second part, we have interesting information concerning our OOM issue. First of all, in the "process / system counts" section we may notice that SQL Server had to deal with internal memory pressure (process physical memory low = 1), so we can exclude external memory pressure. Then, in the "memory manager" section we have two additional entries: Last OOM Factor and Page Alloc Potential. The former confirms an OOM (out of memory) issue in the memory manager. The latter shows a negative value, which indicates that the buffer pool does not have any free memory, so our assumption of internal memory pressure is correct. As a reminder, Page Alloc Potential is similar to Stolen Potential in previous versions of SQL Server.

Let's continue and point out the memory clerks responsible for the memory pressure. By digging further into the log file, I found two relevant memory clerks with a lot of pages allocated, as shown below:

 

blog_27_9_xtp_clerk_status


blog_27_9_xtp_clerk_status_2


As expected, the first memory clerk concerns In-Memory OLTP (XTP as eXtreme Transaction Processing) and the second is related to the log pool manager, which is heavily used during recovery processing. At the time of the OOM issue, both memory clerks have a total size of 3.7GB. This does not leave much room for the other caches in the default resource pool. Finally, the end of the error log contains the following error messages, which confirm that SQL Server is missing memory for its default resource pool.

 

blog_27_9_xtp_clerk_status_3


According to the Microsoft documentation about resolving OOM issues in In-Memory OLTP scenarios, the number of solutions is very limited. In my case, I started the SQL Server engine with the -f parameter to load a minimal configuration, and then I increased the amount of memory available to In-Memory OLTP by increasing the max server memory option on the server side. This fix avoids facing the same issue on the next restart of my SQL Server engine service.

Is it possible to fix this OOM condition definitively? The answer is yes: we have to configure a resource pool with memory limitations and bind it to our memory-optimized database. This is another story and I let you check the Microsoft documentation! My intention in this blog post is only to create awareness of the importance of good memory management with the new In-Memory OLTP feature.
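For completeness, here is a hedged sketch of what such a resource pool binding could look like (the pool name and the 50% memory limits are arbitrary examples, and the binding only takes effect after the database has been taken offline and brought back online):

-- create a resource pool with a memory cap and bind the memory-optimized database to it
CREATE RESOURCE POOL pool_xtp_dbi_hk WITH (MIN_MEMORY_PERCENT = 50, MAX_MEMORY_PERCENT = 50);
GO
ALTER RESOURCE GOVERNOR RECONFIGURE;
GO
EXEC sp_xtp_bind_db_resource_pool @database_name = N'dbi_hk', @pool_name = N'pool_xtp_dbi_hk';
GO
-- take the database offline and back online so that the binding takes effect
ALTER DATABASE dbi_hk SET OFFLINE WITH ROLLBACK IMMEDIATE;
ALTER DATABASE dbi_hk SET ONLINE;
GO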

Happy configuration!

Oracle OPatch and "Archive not applied" error

Mon, 2015-01-19 18:10

Recently, I installed an Oracle 11.2.0.4 RDBMS on a server and wanted to apply the latest Patch Set Update (PSU), 11.2.0.4.4 from October 2014. During the execution, I ran into the error "Archive not applied" and the job failed. Here is how to quickly fix this issue.

While installing the PSU 11.2.0.4.4 from October 2014 on my Oracle 11.2.0.4 database server, I experienced the following error:

 

The following actions have failed:
Archive not applied /oracle/soft/11.2.0.4/psu 11.2.0.4.4/19791364/19121551/17478514/files/lib/libserver11.a/kjb.o to /u00/app/oracle/product/11.2.0/db_4_0/lib/libserver11.a... ''
Archive not applied /oracle/soft/11.2.0.4/psu 11.2.0.4.4/19791364/19121551/17478514/files/lib/libpls11.a/pevmexe.o to /u00/app/oracle/product/11.2.0/db_4_0/lib/libpls11.a... ''
Archive not applied /oracle/soft/11.2.0.4/psu 11.2.0.4.4/19791364/19121551/17478514/files/lib/libclient11.a/knoggcap.o to /u00/app/oracle/product/11.2.0/db_4_0/lib/libclient11.a... ''
Archive not applied /oracle/soft/11.2.0.4/psu 11.2.0.4.4/19791364/19121551/17478514/files/lib/libgeneric11.a/qcd.o to /u00/app/oracle/product/11.2.0/db_4_0/lib/libgeneric11.a... ''
Archive not applied /oracle/soft/11.2.0.4/psu 11.2.0.4.4/19791364/19121551/17478514/files/lib/libpls11_pic.a/pevmexe_pic.o to /u00/app/oracle/product/11.2.0/db_4_0/lib/libpls11_pic.a... ''
Archive not applied /oracle/soft/11.2.0.4/psu 11.2.0.4.4/19791364/19121551/17478514/files/lib/libxml11.a/lpxm.o to /u00/app/oracle/product/11.2.0/db_4_0/lib/libxml11.a... ''

 

Do you want to proceed? [y|n]
n

 

Typically, this error appears when there are permission issues in the Oracle directories. Here, the /u00/app/oracle/product/11.2.0/db_4_0/lib directory seemed to be the cause of my error. So I checked my environment: invoking OPatch with the oracle user should give access to that directory.

 

[oracle@srvora01 ~]$ id
uid=54321(oracle) gid=54321(oinstall) groups=54321(oinstall),54322(dba)

 

I am connected as "oracle" user, member of the "oinstall" group.

 

[oracle@srvora01 ~]$ ls -ld /u00/app/oracle/product/11.2.0/db_4_0/lib
drwxr-xr-x 4 oracle oinstall 12288 Jan 16 13:10 /u00/app/oracle/product/11.2.0/db_4_0/lib

 

As you can see, my "lib" directory is owned by "oracle" with the "oinstall" group. So there are definitely no issues with permissions here :-(

I was very surprised for a few minutes! But then I discovered an issue in the path of the patch I wanted to deploy:

 

[oracle@srvora01 19121551]$ pwd
/oracle/soft/11.2.0.4/psu 11.2.0.4.4/19791364/19121551/

 

Do you see my mistake? I used a space character in the name of the directory containing the patch to apply. And OPatch does not like space characters, definitely...

 

Once my mistake was discovered, I just had to rename the directory and run OPatch again:

 

[oracle@srvora01 ~]$ mv "/oracle/soft/11.2.0.4/psu 11.2.0.4.4/" "/oracle/soft/11.2.0.4/psu_11.2.0.4.4/"

 

[oracle@vmreforadg01 ~]$ cd /oracle/soft/11.2.0.4/psu_11.2.0.4.4/19791364/19121551
[oracle@vmreforadg01 19121551]$ opatch apply
Oracle Interim Patch Installer version 11.2.0.3.6
Copyright (c) 2013, Oracle Corporation.  All rights reserved.
...

 

The patch was installed successfully:

 

[oracle@vmreforadg01 19121551]$ opatch lsinventory
Oracle Interim Patch Installer version 11.2.0.3.6
Copyright (c) 2013, Oracle Corporation.  All rights reserved.
Oracle Home       : /u00/app/oracle/product/11.2.0/db_4_0
Central Inventory : /u00/app/oraInventory
   from           : /u00/app/oracle/product/11.2.0/db_4_0/oraInst.loc
OPatch version    : 11.2.0.3.6
OUI version       : 11.2.0.4.0
Log file location : /u00/app/oracle/product/11.2.0/db_4_0/cfgtoollogs/opatch/opatch2015-01-16_13-33-02PM_1.log
Lsinventory Output file location : /u00/app/oracle/product/11.2.0/db_4_0/cfgtoollogs/opatch/lsinv/lsinventory2015-01-16_13-33-02PM.txt
--------------------------------------------------------------------------------
Installed Top-level Products (1):
Oracle Database 11g                                                  11.2.0.4.0
There are 1 product(s) installed in this Oracle Home.
Interim patches (1) :
Patch  19121551     : applied on Fri Jan 16 13:10:20 CET 2015
Unique Patch ID:  17949166
Patch description:  "Database Patch Set Update : 11.2.0.4.4 (19121551)"
   Created on 6 Oct 2014, 10:07:57 hrs PST8PDT
Sub-patch  18522509; "Database Patch Set Update : 11.2.0.4.3 (18522509)"
Sub-patch  18031668; "Database Patch Set Update : 11.2.0.4.2 (18031668)"
Sub-patch  17478514; "Database Patch Set Update : 11.2.0.4.1 (17478514)"
   Bugs fixed:
     17288409, 17205719, 17811429, 17754782, 17726838, 13364795, 17311728
     17284817, 17441661, 13645875, 18199537, 16992075, 16542886, 17446237
     14565184, 17071721, 17610798, 17375354, 17449815, 17397545, 19463897
     18230522, 17235750, 16360112, 13866822, 17982555, 17478514, 12905058
     14338435, 13944971, 16929165, 12747740, 17546973, 14054676, 17088068
     18264060, 17343514, 17016369, 17042658, 14602788, 14657740, 17332800
     19211724, 13951456, 16315398, 17186905, 18744139, 16850630, 17437634
     19049453, 18673304, 17883081, 18641419, 17296856, 18262334, 17006183
     18277454, 17232014, 16855292, 10136473, 17705023, 17865671, 18554871
     19121551, 17588480, 17551709, 17344412, 17842825, 18681862, 17390160
     13955826, 13609098, 18139690, 17501491, 17239687, 17752121, 17299889
     17602269, 18673325, 17313525, 17242746, 19544839, 17600719, 18191164
     17571306, 19466309, 17951233, 18094246, 17165204, 17011832, 17040527
     16785708, 16180763, 17477958, 17174582, 17465741, 18522509, 17323222
     19463893, 16875449, 16524926, 17237521, 17596908, 17811438, 17811447
     18031668, 16912439, 16494615, 18061914, 17545847, 17082359, 19554106
     17614134, 17341326, 17891946, 19458377, 17716305, 17752995, 16392068
     19271443, 17767676, 17614227, 17040764, 17381384, 18973907, 18673342
     14084247, 17389192, 17006570, 17612828, 17721717, 13853126, 18203837
     17390431, 17570240, 14245531, 16043574, 16863422, 19727057, 17468141
     17786518, 17037130, 17267114, 18203838, 16198143, 16956380, 17478145
     14829250, 17394950, 17027426, 16268425, 18247991, 19584068, 14458214
     18436307, 17265217, 13498382, 16692232, 17786278, 17227277, 16042673
     16314254, 17443671, 16228604, 16837842, 17393683, 17787259, 18009564
     15861775, 16399083, 18018515, 16472716, 17050888, 14010183, 17325413
     16613964, 17080436, 17036973, 17761775, 16721594, 18280813, 15979965
     18203835, 17297939, 16731148, 17811456, 14133975, 17385178, 17586955
     16450169, 17655634, 9756271, 17892268, 17648596, 16220077, 16069901
     11733603, 16285691, 17587063, 18180390, 17393915, 18096714, 17238511
     17824637, 14285317, 19289642, 14764829, 18328509, 17622427, 16943711
     17346671, 18996843, 14852021, 17783588, 16618694, 17672719, 17546761
--------------------------------------------------------------------------------
OPatch succeeded.

NLS_LANG on Windows in Europe: WE8MSWIN1252 or WE8PC850?

Wed, 2015-01-14 23:57

In Europe we have accents and non-US7ASCII characters, so we need a special character set. I'm not talking about Unicode here, which solves all the problems: if you have a Java application, you have no problem, it's Unicode and you can store all characters in one multi-byte character set. But for other applications on Windows, you have 2 possible character sets for Western Europe: WE8MSWIN1252 and WE8PC850. WE8MSWIN1252 is the one that is set by default in the registry, but is it the right one?

SQL Server 2014: FCIs, availability groups, and TCP port conflict issues

Mon, 2015-01-12 23:26

After giving my session about SQL Server AlwaysOn and availability groups at the last French event “Les journées SQL Server 2014”, I had several questions concerning the port conflict issues, particularly the differences that exist between FCIs and availability groups (AAGs) on this subject.

In fact, in both cases, we may have port conflicts depending on which components are installed on each cluster node. Fundamentally, FCIs and AAGs are both cluster-based features, but each of them uses the WSFC differently: SQL Server FCIs are "cluster-aware" services, while AAGs use standalone instances by default (using clustered instances with AAGs is possible, but this scenario is relatively uncommon and it doesn't change the story in any way).

First of all, my thinking is based on the following question: why does having an availability group listener on the same TCP port as a SQL Server instance (but in a different process) cause a conflict, whereas having two SQL Server FCIs with the same port works fine?

Let's begin with the SQL Server FCIs. When you install two SQL Server FCIs (on the same WSFC), you can configure the same listen port for both instances and it works perfectly, right? Why? The main reason is that each SQL Server FCI has its own dedicated virtual IP address and, as you know, a process can open a socket to a particular IP address on a specific port. However, two or more processes that attempt to open a socket on the same port and the same IP address will result in a conflict. For instance, in my case I have two SQL Server FCIs - SQLCLUST-01\SQL01 and SQLCLUST-02\SQL02 - that both listen on the same TCP port number: 1490. Here is the picture of the netstat -ano command output:

 

blog_26_-_netstat_ano_-_1

 

Notice that each SQL Server process listens on its own IP address and only on that one. We can confirm this by taking a look at each SQL Server error log.
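As a side note, the listening addresses and ports can also be checked from inside each instance rather than with netstat; here is a small sketch (sys.dm_tcp_listener_states exists since SQL Server 2012):

-- which IP addresses / ports this instance is listening on
SELECT listener_id, ip_address, port, type_desc, state_desc
FROM sys.dm_tcp_listener_states
WHERE type_desc = 'TSQL';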

 

blog_26_-_sqlclust01_sql01_error_log_-_2

 

...

 

blog_26_-_sqlclust02_sql02_error_log_-_3

 

Now let's continue with the availability groups. The story is not the same, because in most scenarios we use standalone instances, and by default they listen on all available IP addresses. In my case, this time I have two standalone instances - MSSQLSERVER (default) and APP - that listen respectively on TCP ports 1433 and 1438. By looking at the netstat -ano output, we can notice that each process listens on all available IP addresses (LocalAddress = 0.0.0.0):

 

blog_26_-_netstat_ano_-_4

 

We can also verify the SQL Server error log of each standalone instance (default and APP)

 

blog_26_-_sql141_error_log_-_5

 

...

 

blog_26_-_sql141_app_error_log_-_6

 

At this point I am sure you are beginning to understand the issue you may have with availability groups and listeners. Let's try to create a listener for an availability group hosted by the default instances (MSSQLSERVER). My default instances on each cluster node listen on port 1433 whereas the APP instances listen on port 1438, as shown in the picture above. If I attempt to create my listener LST-DUMMY on port 1433, it will succeed because my availability group and my default instance are in the same process.
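For reference, here is a sketch of the listener creation described above (the availability group name [dummy] and the listener IP address 192.168.0.35 appear later in this post; the subnet mask is an assumption):

USE [master]
GO
ALTER AVAILABILITY GROUP [dummy]
ADD LISTENER N'LST-DUMMY' (WITH IP ((N'192.168.0.35', N'255.255.255.0')), PORT = 1433);
GO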

 

blog_26_-_netstat_ano_-_7

 

Notice that the listener LST-DUMMY listens on the same port as the default instance, and both are in the same process (PID = 1416). Of course, if I try to change the TCP port number of my listener to 1438, SQL Server will raise the well-known error message with id 19486.

 

USE [master]
GO
ALTER AVAILABILITY GROUP [dummy] MODIFY LISTENER N'LST-DUMMY' (PORT = 1438);
GO

 

Msg 19486, Level 16, State 1, Line 3 The configuration changes to the availability group listener were completed, but the TCP provider of the instance of SQL Server failed to listen on the specified port [LST-DUMMY:1438]. This TCP port is already in use. Reconfigure the availability group listener, specifying an available TCP port. For information about altering an availability group listener, see the "ALTER AVAILABILITY GROUP (Transact-SQL)" topic in SQL Server Books Online.

 

The reason becomes obvious now: the SQL Server instance APP listens on TCP port 1438 for all available IP addresses (including the IP address of the listener LST-DUMMY).

 

blog_26_-_netstat_ano_-_8

 

You don't trust me? Well, I can prove it by connecting directly to the SQL Server named instance APP using the IP address of the listener LST-DUMMY - 192.168.0.35 - and the TCP port of the named instance - 1438:

 

blog_26_-_sqlcmd_-_9

 

To summarize:

  • Having several SQL Server FCIs that listen on the same port is not a problem, because they can each open a socket on their own distinct IP address. However, you can face port conflicts if you also have a standalone instance installed on one of the cluster nodes.
  • Having an availability group listener that listens on the same TCP port as a standalone instance in the same process will not result in a TCP port conflict.
  • Having an availability group listener that listens on the same TCP port as a standalone instance in a different process will result in a TCP port conflict. In this case, each SQL Server process will attempt to open a socket on the same TCP port and the same IP address.

Hope it helps!

SOUG-Romand: Journée performance le 21 mai

Mon, 2015-01-12 10:19

Good news for French speakers: the SOUG-R is more and more active.

On May 21, 2015, a performance day is organized in Lausanne.

Flashback logging overhead: 'db file sequential read' on UNDO tablespace

Fri, 2015-01-09 13:09

In my previous post I reproduced an issue where a bulk insert into a GTT was generating too much undo. But the real case I encountered was worse: the insert was reading a lot of UNDO, and a lot of that undo was read from disk.

Insert into GTT: bulk with APPEND_VALUES

Fri, 2015-01-09 13:04

This post is about a case I had where a GTT was generating too much undo. The effects were amplified by the fact that the database was in flashback logging mode (see Jonathan Lewis' answer to my question on the OTN forum about that), but that's for the next post.
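To give an idea of the kind of statement involved, here is a minimal sketch of a bulk insert with the APPEND_VALUES hint into a GTT (the DEMO_GTT table and the row count are hypothetical, just for illustration):

create global temporary table DEMO_GTT (id number, pad varchar2(100)) on commit delete rows;

declare
  type t_rows is table of DEMO_GTT%rowtype;
  l_rows t_rows := t_rows();
begin
  -- fill a collection in memory...
  for i in 1 .. 100000 loop
    l_rows.extend;
    l_rows(l_rows.last).id  := i;
    l_rows(l_rows.last).pad := rpad('x', 100, 'x');
  end loop;
  -- ...and bulk insert it with the APPEND_VALUES hint
  forall i in 1 .. l_rows.count
    insert /*+ APPEND_VALUES */ into DEMO_GTT values l_rows(i);
  commit;
end;
/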

Don't forget to configure Power Management settings on Hyper-V

Tue, 2015-01-06 02:51

Recently I had the opportunity to audit a SQL Server database hosted on a Hyper-V 2012 cluster. I noticed that the guest operating system had the power plan configured to High performance. This is a great thing, but when I talked to the system administrator to verify whether the same option was turned on on the Hyper-V host operating system, this was unfortunately not the case.

As a reminder, the power policy setting of the guest operating system has no effect in virtual environments, and we always have to verify that this option is configured correctly at the right level.

I performed a quick demonstration for my customer by using the SuperPI benchmark tool, which is pretty simple: it calculates pi to a specific number of digits using one thread, and for my purpose that is sufficient.

Let's start with the situation where Power saver is enabled on the Hyper-V side and High performance is turned on on the guest side, and run the SuperPI tool with 512K digits to compute:

 

blog_25_-_superpi_calculation_-_power_saving

 

Here is the time taken by the guest to calculate pi:

 

blog_25_-_superpi_calculation_-_power_saving_-_result

 

 

Now let's reverse the power settings: High performance on the Hyper-V side and Power saver on the guest side. Then we can run the same benchmark test:

 

blog_25_-_superpi_calculation_-_high_perf_-_result

 

5,688 seconds for this test against 13,375 seconds for the first one - a 57% improvement... not so bad :-) But let's look at a more realistic situation. Indeed, in most configurations the power management setting is configured to Balanced by default, and my customer asked me if there is a noticeable difference if we leave the default configuration. In order to justify my recommendation, we performed the same test, but this time I decided to change the number of digits to compute in order to simulate a more realistic OLTP transaction (short, and requiring all CPU resources for a short time). The table below lists and compares both results:

 

Settings                      Duration (s)
Hyper-V: Load balancing       0.219
Hyper-V: High performance     0.141

 

We can notice a roughly 36% CPU time improvement (0.141s vs. 0.219s) in the context of my customer! After that, my customer was convinced to change this setting, and I hope it is the same for you! Of course, with long running queries that consume a lot of CPU resources over a long time, the difference may be less noticeable, because the processor wake-up time is very small compared to the total worker time they consume.

Keep in mind that changing the power management state from the guest has no effect in a virtualized environment. You must take care of this setting directly on the hypervisor.

Happy virtualization !!

Managed Backup with SQL Server 2014

Mon, 2015-01-05 21:37

In a previous blog post called Backup a SQL Server database from On-Premise to Azure, I presented the different tools to back up your on-premise databases to Azure Storage. SQL Server Managed Backup to Windows Azure was one of these tools.

In my opinion, Managed Backup is a great tool. That is why I decided to dedicate an entire blog to this feature.

 

Understanding Managed Backup

Managed Backup is a new feature introduced in SQL Server 2014 that works with Windows Azure. This feature allows you to manage and automate SQL Server backups (from your on-premise or Azure SQL Server instance), configurable by... script only (T-SQL or PowerShell)!

Microsoft recommends using Managed Backup for Windows Azure virtual machines.

Managed backup only works with user databases in Full or Bulk-logged Recovery Model, and can only perform Full and Log backups.

SQL backups support point-in-time restore and are kept according to a retention period. This setting indicates the desired lifespan of a backup stored in Azure Storage; once the period is reached, the backup is deleted.

SQL backups are scheduled according to the transaction workload of the database.

A full database backup is scheduled when:

  • The Managed backup feature is enabled for the first time
  • The log growth is 1 GB or larger
  • The last full database backup is older than 1 week
  • The log chain is broken

A transaction log backup is scheduled when:

  • No log backup history is available
  • The log space is 5 MB or larger
  • The last log backup is older than 2 hours
  • A full database backup has been performed

 

Configuring Managed Backup

First, you need to activate the SQL Server Agent service in order to use the feature.

In this example, I have 3 user databases as follows:

 

Database Name        Recovery Model   Data Files Location
AdventureWorks2012   Simple           On-premise
AdventureWorks2014   Full             On-premise
Hybriddb             Bulk-logged      Azure Storage

 

Managed Backup can be enabled at the instance level or database level.

If you decide to activate the feature at the instance level, the configuration will be set for all user databases of your instance (even for databases added after the configuration).

On the other hand, you can activate the feature for specific user databases. If the feature is also configured at the instance level, it will be overridden by the configuration at the database level.

To configure the feature, you must provide a set of parameters:

  • The URL of the Azure Storage
  • The retention period in days
  • The credential name
  • The encryption algorithm

If the encryption algorithm is not set to ‘NO_ENCRYPTION’, you also need to provide these parameters:

  • The encryptor type
  • The encryptor name

Moreover, when you configure your Managed Backup, you need to specify whether you want to enable it.

 

You can also perform a database backup with COPY_ONLY. To do this, you need to use the 'smart_admin.sp_backup_on_demand' stored procedure, specifying the database name.

However, this stored procedure will use the configuration of the Managed Backup at the database level. That means you must configure and enable the Managed Backup for your database.
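As a hedged sketch (assuming the parameter names below are correct; the database name comes from the example above):

USE msdb;
GO
EXEC smart_admin.sp_backup_on_demand
    @database_name = 'AdventureWorks2014',
    @type = 'Database';
GO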

 

We need to create a credential in order to be able to connect to Azure Storage:

 

CREATE CREDENTIAL dbiservices
WITH IDENTITY = 'dbiservices',
SECRET = 'password';

 

Let’s configure our Managed Backup at the instance level:

 

USE msdb;
GO
EXEC smart_admin.sp_set_db_backup
    @enable_backup = 0,
    @storage_url = 'https://dbiservices.blob.core.windows.net',
    @retention_days = 15,
    @credential_name = 'dbiservices',
    @encryption_algorithm = 'NO_ENCRYPTION';

If I want to display the instance configuration:

 

USE msdb;

GO

SELECT * FROM smart_admin.fn_backup_instance_config();

 

Here is the result:

 

display-configuration.png

 

We will override the Managed Backup configuration for ‘hybriddb’ database:

 

USE msdb;
GO
EXEC smart_admin.sp_set_db_backup
    @database_name = 'hybriddb',
    @enable_backup = 0,
    @credential_name = 'dbiservices',
    @storage_url = 'https://dbiservices.blob.core.windows.net',
    @retention_days = 25,
    @encryption_algorithm = 'NO_ENCRYPTION';

 

If I want to display the database configuration of all databases of the instance:

 

USE msdb;
SELECT db_name, is_managed_backup_enabled, retention_days, storage_url, encryption_algorithm
FROM smart_admin.fn_backup_db_config(NULL);

 

Here is the result:

 

diplay-databases-configuration.png

 

Notice that the 'AdventureWorks2012' database has 'is_managed_backup_enabled' set to 'NULL'. Indeed, this database is not eligible for Managed Backup because its Recovery Model is set to Simple.

 

Now, I activate the Managed Backup at the instance level:

 

USE msdb;
GO
EXEC smart_admin.sp_set_db_backup
    @enable_backup = 1;
GO

 

Now, I activate the Managed Backup for ‘hybriddb’ database:

 

USE msdb;
GO
EXEC smart_admin.sp_set_db_backup
    @database_name = 'hybriddb',
    @enable_backup = 1;

 

If I explore Azure Storage, I can find my backups:

 

backups.png

 

 

Conclusion

As I said in the introduction, Managed Backup is a great feature. You can easily and quickly configure and enable backups for your user databases.

However, it has some serious limitations... We can expect Managed Backup to be extended to system databases. Moreover, we can also expect Managed Backup to allow backups from user databases in Simple Recovery Model.

Furthermore, this feature is only available with Azure Storage. I would like to be able to choose my storage destination; I do not understand why we cannot back up to local disks, for example.

Oracle multitenant dictionary: upgrade

Mon, 2015-01-05 16:10

This is the second part of the previous post about metadata links. I've shown how a sharing=metadata function becomes a sharing=none function when it is changed in the PDB - i.e. when it no longer has the same DDL and signature.

Here is another experiment doing the opposite: change the function in the root and see what happens in the PDB. Again, I'm playing with internals in order to understand the 'upgrade by unplug/plug' feature available in 12c multitenant (and single-tenant).

Oracle multitenant dictionary: metadata links

Mon, 2015-01-05 16:02

As in previous posts (1 and 2), I'm still playing with the multitenant dictionary. You may wonder why I'm spending time doing unsupported things that we will never be able to use in real life.

Of course, playing with internals is fun ;)

But it's not only that. We will implement CDB databases (multitenant, and single-tenant, as I think non-CDB will be deprecated in future versions). We will operate pluggable databases (clone, upgrade, downgrade, ...), encounter issues, etc. Before doing that on critical environments, I need to be confident about the multitenant implementation. It's new, it's undocumented, and - at first sight - it looks like a 'hack' of the way the dictionary has been working for years.

Happy New Year 27, 104, 2015 and 2558

Wed, 2014-12-31 06:25

calendar          today  tomorrow  message
----------------- ------ --------- --------------
Arabic Hijrah     1436   1436
English Hijrah    1436   1436
Gregorian         2014   2015      Happy New Year
Japanese Imperial 0026   0027      Happy New Year
Persian           1393   1393
ROC Official      0103   0104      Happy New Year
Thai Buddha       2557   2558      Happy New Year
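For the curious, here is one way such a comparison could be produced, using TO_CHAR with the NLS_CALENDAR parameter (the calendar names are passed as an example collection; the message column is left out):

select calendar,
       to_char(sysdate    ,'YYYY','nls_calendar='''||calendar||'''') as today,
       to_char(sysdate + 1,'YYYY','nls_calendar='''||calendar||'''') as tomorrow
from (
  select column_value as calendar from table(sys.odcivarchar2list(
    'Arabic Hijrah','English Hijrah','Gregorian','Japanese Imperial','Persian','ROC Official','Thai Buddha'))
);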

Oracle multitenant dictionary: object links

Mon, 2014-12-29 15:28

I've described the internals of Oracle 12c metadata and object links in a previous post. But before that, the first time I investigated them, I made a wrong assumption because I was looking at AUDIT_ACTIONS, which is not correctly implemented. That investigation came from a question on dba-village. Recently, Ivica Arsov (@IvicaArsov) made an interesting comment about the AUDIT_ACTIONS object link table, so I'll explain here what is special about it.

AUDIT_ACTIONS

Here is how AUDIT_ACTIONS is defined:

SQL> select object_name,object_type,sharing from dba_objects where object_name in ('DBA_AUDIT_TRAIL','AUDIT_ACTIONS') order by object_name,object_type;

OBJECT_NAME          OBJECT_TYPE     SHARING
-------------------- --------------- -------------
AUDIT_ACTIONS        SYNONYM         METADATA LINK
AUDIT_ACTIONS        TABLE           OBJECT LINK
DBA_AUDIT_TRAIL      SYNONYM         METADATA LINK
DBA_AUDIT_TRAIL      VIEW            METADATA LINK
It's a sharing=object table so you expect that the data is common to all containers. And we will also query a view that reads that table - DBA_AUDIT_TRAIL.

Then let's query the table from CDB$ROOT and from a PDB and check from ROWID if we read the same rows:

SQL> alter session set container=CDB$ROOT;
Session altered.

SQL> select rowid,action,name,dbms_rowid.rowid_to_absolute_fno(rowid,'SYS','AUDIT_ACTIONS') file_id from AUDIT_ACTIONS where action=3;

ROWID                  ACTION NAME       FILE_ID
------------------ ---------- ------- ----------
AAABG7AABAAACo5AAD          3 SELECT           1

SQL> alter session set container=PDB1;
Session altered.

SQL> select rowid,action,name,dbms_rowid.rowid_to_absolute_fno(rowid,'SYS','AUDIT_ACTIONS') file_id from AUDIT_ACTIONS where action=3;

ROWID                  ACTION NAME       FILE_ID
------------------ ---------- ------- ----------
AAABG5AABAAAA3pAAD          3 SELECT           8

The rows are not coming from the same file but from the local SYSTEM tablespace of each container. This is proof that this OBJECT LINK table is not common at all.

DBA_AUDIT_TRAIL

Now I want to check what happens when we query through the view. I don't have the ROWID so let's update the table in the PDB so that we can distinguish rows coming from CDB$ROOT and from PDB1:

SQL> update AUDIT_ACTIONS set name='select' where action=3;

1 row updated.

SQL> select rowid,action,name from AUDIT_ACTIONS where action=3;

ROWID                  ACTION NAME
------------------ ---------- -------
AAABG5AABAAAA3pAAD          3 select

SQL> select distinct dbid,action,action_name from DBA_AUDIT_TRAIL;

      DBID     ACTION ACTION_NAME
---------- ---------- ----------------------------
 314687597          3 select

OK. I've changed one 'ACTION_NAME' to lowercase - only in PDB1. And when I query through the view, I see the local row. This definitely proves that the implementation of AUDIT_ACTIONS is not achieving the goal of the multitenant dictionary: store common Oracle objects only in CDB$ROOT to avoid duplication and facilitate upgrades. Note that it is not a big problem anyway, as it is just a 200-row table.

DBA_CPOOL_INFO

In order to show the normal behaviour of object links, I'll do the same with DBA_CPOOL_INFO, which is a view over SYS.CPOOL$. I've described this behaviour previously by creating my own objects, but here I'll show how it is used to store the DRCP information, which is at CDB level. Here are the involved table and views:

SQL> select object_name,object_type,sharing from dba_objects where object_name in ('CPOOL$','INT$DBA_CPOOL_INFO','DBA_CPOOL_INFO') order by object_name,object_type;

OBJECT_NAME          OBJECT_TYPE     SHARING
-------------------- --------------- -------------
CPOOL$               TABLE           OBJECT LINK
DBA_CPOOL_INFO       SYNONYM         METADATA LINK
DBA_CPOOL_INFO       VIEW            METADATA LINK
INT$DBA_CPOOL_INFO   VIEW            OBJECT LINK
CPOOL$ is defined with sharing=object. An internal view INT$DBA_CPOOL_INFO is defined on it with sharing=object as well. And finally that view is exposed through DBA_CPOOL_INFO.

As before, I check the ROWID of CPOOL$ row from CDB$ROOT and PDB1:
SQL> alter session set container=CDB$ROOT;
Session altered.

SQL> select rowid,minsize,dbms_rowid.rowid_to_absolute_fno(rowid,'SYS','CPOOL$') file_id from SYS.CPOOL$;

ROWID                 MINSIZE    FILE_ID
------------------ ---------- ----------
AAABz5AABAAADb5AAA          4          1

SQL> alter session set container=PDB1;
Session altered.

SQL> select rowid,minsize,dbms_rowid.rowid_to_absolute_fno(rowid,'SYS','CPOOL$') file_id from SYS.CPOOL$;

ROWID                 MINSIZE    FILE_ID
------------------ ---------- ----------
AAABz3AABAAABQJAAA          4          8

So this is the same as we have seen before: an OBJECT LINK has its data in each PDB.

But what is different here is the view sharing, which is sharing=object. Let's query that view after changing the value in PDB1:

SQL> update SYS.CPOOL$ set minsize=0;
1 row updated.

SQL> select rowid,minsize,dbms_rowid.rowid_to_absolute_fno(rowid,'SYS','CPOOL$') file_id from SYS.CPOOL$;

ROWID                 MINSIZE    FILE_ID
------------------ ---------- ----------
AAABz3AABAAABQJAAA          0          8

SQL> select minsize from INT$DBA_CPOOL_INFO;

   MINSIZE
----------
         4

SQL> select minsize from DBA_CPOOL_INFO;

   MINSIZE
----------
         4

Now we have a view which will always show the CDB$ROOT rows, even when we are in a PDB container. We still have rows in the PDB containers, but they will not be used. Once again, this defeats the goal of deduplication, but this is a very small table.

AWR tables

The main advantage of the multitenant dictionary architecture is for the big tables storing data that is common to the whole CDB, such as the AWR data:

SQL> alter session set container=CDB$ROOT;
Session altered.

SQL> select con_id,count(*) from containers(WRH$_SQLTEXT) group by con_id;

    CON_ID   COUNT(*)
---------- ----------
         1       5549

SQL> alter session set container=PDB1;

Session altered.

SQL> select count(*) from WRH$_SQLTEXT;

  COUNT(*)
----------
         0

This information - stored only in CDB$ROOT - is shared with all PDBs through the OBJECT LINK view.

Creating a schema synonym in Oracle - an unsupported feature

Sun, 2014-12-28 14:59

Ivica Arsov (@IvicaArsov) has made an interesting comment about the AUDIT_ACTIONS object link table. I'll blog about it soon, but in the meantime, when checking its definition in cataudit.sql, I came upon the following:

/* SCHEMA SYNONYMS will be added in 12g */
-- insert into audit_actions values (222, 'CREATE SCHEMA SYNONYM');
-- insert into audit_actions values (224, 'DROP SCHEMA SYNONYM');

which caught my attention.

 

MySQL versions performance comparison

Fri, 2014-12-26 03:18

This blog aims to make a performance comparison between the different MySQL versions/editions, and also to compare the different MySQL forks such as Percona Server and MariaDB. Indeed, a number of improvements have been made to the InnoDB storage engine in the latest MySQL versions. You can find below some of the performance improvements applied to InnoDB over the last years (non-exhaustive list):


MySQL 5.0

1. New compact storage format which can save up to 20% of the disk space required in previous MySQL/InnoDB versions.
2. Faster recovery from a failed or aborted ALTER TABLE.
3. Faster implementation of TRUNCATE TABLE.


MySQL 5.5

1. MySQL Enterprise Thread Pool, As of MySQL 5.5.16, MySQL Enterprise Edition distributions include a thread pool plugin that provides an alternative thread-handling model designed to reduce overhead and improve performance.
2. Changes to the InnoDB I/O subsystem enable more effective use of available I/O capacity. The changes also provide more control over configuration of the I/O subsystem.


MySQL 5.6

1. Improvements to the algorithms for adaptive flushing make I/O operations more efficient and consistent under a variety of workloads. The new algorithm and default configuration values are expected to improve performance and concurrency for most users. Advanced users can fine-tune their I/O responsiveness through several configuration options.
2. InnoDB has several internal performance enhancements, including reducing contention by splitting the kernel mutex, moving flushing operations from the main thread to a separate thread, enabling multiple purge threads, and reducing contention for the buffer pool on large-memory systems.
3. You can now set the InnoDB page size for uncompressed tables to 8KB or 4KB, as an alternative to the default 16KB. This setting is controlled by the innodb_page_size configuration option. You specify the size when creating the MySQL instance. All InnoDB tablespaces within an instance share the same page size. Smaller page sizes can help to avoid redundant or inefficient I/O for certain combinations of workload and storage devices, particularly SSD devices with small block sizes.


MySQL 5.7

1. In MySQL 5.7.2, InnoDB buffer pool dump and load operations are enhanced. A new system variable, innodb_buffer_pool_dump_pct, allows you to specify the percentage of most recently used pages in each buffer pool to read out and dump. When there is other I/O activity being performed by InnoDB background tasks, InnoDB attempts to limit the number of buffer pool load operations per second using the innodb_io_capacity setting.

2. As of MySQL 5.7.4, InnoDB supports multiple page cleaner threads for flushing dirty pages from buffer pool instances. A new system variable, innodb_page_cleaners, is used to specify the number of page cleaner threads. The default value of 1 maintains the pre-MySQL 5.7.4 configuration in which there is a single page cleaner thread. This enhancement builds on work completed in MySQL 5.6, which introduced a single page cleaner thread to offload buffer pool flushing work from the InnoDB master thread.
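
As an illustration only (a sketch assuming a MySQL 5.7.4+ instance; these variables are left at their defaults in the benchmark below), the two settings mentioned above can be checked or, for the dynamic one, adjusted at runtime:

-- innodb_buffer_pool_dump_pct is dynamic and can be adjusted online
SET GLOBAL innodb_buffer_pool_dump_pct = 25;
-- innodb_page_cleaners can only be set at server startup, so just inspect it
SHOW GLOBAL VARIABLES LIKE 'innodb_page_cleaners';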

 

You can find an exhaustive performance improvement list on:

http://dev.mysql.com/doc/refman/5.1/en/mysql-nutshell.html

http://dev.mysql.com/doc/refman/5.5/en/mysql-nutshell.html

http://dev.mysql.com/doc/refman/5.6/en/mysql-nutshell.html

http://dev.mysql.com/doc/refman/5.7/en/mysql-nutshell.html

Test limitations

This test doesn't take into consideration all the new optimizations made possible through new variables and functionalities. Its aim is simply to demonstrate the performance improvement with a non-optimized but consistent configuration. In this context, only a limited set of variables available in all MySQL versions (since version 5.0) has been set up.

This test is obviously not representative of your own environment (hardware, queries, database schema, storage engine, data types, etc.). Therefore you probably won't observe the same performance behavior.

 

MySQL performance test Hardware configuration

This test has been done with sysbench 0.5. It has been run on a laptop equipped with an Intel(R) Core(TM) i7-4700MQ CPU @ 2.40GHz and 16 GB of RAM. The data is stored on a Samsung SSD 840 PRO Series.

 

First step: Installation

The first step consists of installing several different MySQL versions. Thanks to mysqld_multi I've been able to run the following versions in parallel:

 

Editor/Provider   MySQL Server   Version   Port    Edition
Oracle            mysqld1        5.0.15    33001   Community Edition
Oracle            mysqld2        5.1.73    33002   Community Edition
Oracle            mysqld3        5.5.39    33003   Community Edition
Oracle            mysqld4        5.6.20    33004   Community Edition
Oracle            mysqld5        5.7.4     33005   Community Edition
Oracle            mysqld6        5.6.21    33006   Enterprise Edition
Percona           mysqld7        5.6.20    33007   N/A
MariaDB           mysqld8        10.0.15   33008   N/A

 

These servers have been set up with the same settings. However, depending on the MySQL version, the default settings differ. For instance, on MySQL 5.0.15 the default value of the global variable innodb_buffer_pool_size is 8388608, whereas on MySQL 5.1.73 the default value is 134217728. The version-specific default settings have not been changed.

The only variables which have been set up are the following:

  • max_connections = 8000
  • table_open_cache=8000
  • open_files_limit = 8192


max_connections: The maximum permitted number of simultaneous client connections.
table_open_cache (or table_cache): The number of open tables for all threads.
open_files_limit: The number of files that the operating system permits mysqld to open. The value of this variable at runtime is the real value permitted by the system and might be different from the value you specify at server startup.
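
Expressed as an option-file snippet (a minimal sketch; the values are simply those listed above and were applied identically to every instance), this is what each server's option group contains:

[mysqld]
# identical on every instance; everything else keeps the version's default value
max_connections  = 8000
table_open_cache = 8000
open_files_limit = 8192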

 

The OFA (Optimal Flexible Architecture) directory structure has been used to install the MySQL Servers.

 

You can find below an example of this structure:

port           = 33001
mysqladmin     = /u00/app/mysql/product/mysql-5.0.15/bin/mysqladmin
mysqld         = /u00/app/mysql/product/mysql-5.0.15/bin/mysqld
socket         = /u00/app/mysql/admin/mysqld1/socket/mysqld1.sock
pid-file       = /u00/app/mysql/admin/mysqld1/socket/mysqld1.pid
log-error      = /u00/app/mysql/admin/mysqld1/log/mysqld1.err
datadir        = /u01/mysqldata/mysqld1
basedir        = /u00/app/mysql/product/mysql-5.0.15
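
Such an entry is typically driven by mysqld_multi; here is a sketch (hypothetical option-file path; assuming the lines above live in a [mysqld1] group of a shared option file):

# check the status of instance 1, then start it
mysqld_multi --defaults-file=/u00/app/mysql/admin/my.cnf report 1
mysqld_multi --defaults-file=/u00/app/mysql/admin/my.cnf start 1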


Second step: Test preparation

Once all MySQL Servers are installed and running, the second step is to prepare the table containing the records against which the queries will be performed. In this test I decided to create only one table, which sysbench automatically names sbtest1. Notice that it is possible to create several tables by using the “oltp-table-count” parameter.

The number of rows in this table is specified by the “oltp-table-size” parameter. This test table contains 20'000'000 rows. The test mode is OLTP; according to the sysbench documentation, this test mode was written to benchmark real database performance.

At the prepare stage the following table is created:


mysql> desc sbtest1;
+-------+------------------+------+-----+---------+----------------+
| Field | Type             | Null | Key | Default | Extra          |
+-------+------------------+------+-----+---------+----------------+
| id    | int(10) unsigned | NO   | PRI | NULL    | auto_increment |
| k     | int(10) unsigned | NO   | MUL | 0       |                |
| c     | char(120)        | NO   |     |         |                |
| pad   | char(60)         | NO   |     |         |                |
+-------+------------------+------+-----+---------+----------------+


Each record contains random strings in the fields c and pad, and random integers between 1 and oltp-table-size in the field k, as presented in the following picture:

 sbtest.png
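
If the screenshot does not render, a quick look at a few rows (a hypothetical check against the prepared table) shows the kind of data sysbench generates:

SELECT id, k, c, pad FROM sbtest1 LIMIT 3;  -- id: sequential key, k: random integer, c/pad: random strings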

 

Sysbench prepare script:

sysbench \
--db-driver=mysql \
--mysql-table-engine=innodb \
--oltp-table-size=20000000 \
--mysql-socket=/u00/app/mysql/admin/mysqld1/socket/mysqld1.sock \
--mysql-port=33301 \
--mysql-db=sysbench \
--mysql-user=sbtest \
--mysql-password=sbtest \
--test=/home/mysql/sysbench/oltp.lua \
prepare

In order to be sure to have the same set of data on each server, a MySQL dump was taken after the first load and then imported into each server.
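
A sketch of that dump/restore step (hypothetical dump file name; the socket paths follow the OFA layout shown above and the credentials are the ones used in the sysbench scripts):

# dump the freshly prepared sysbench schema from the first instance...
mysqldump --socket=/u00/app/mysql/admin/mysqld1/socket/mysqld1.sock \
          -usbtest -psbtest sysbench > /tmp/sysbench_initial_load.sql
# ...and load exactly the same data set into each of the other instances
mysql --socket=/u00/app/mysql/admin/mysqld2/socket/mysqld2.sock \
      -usbtest -psbtest sysbench < /tmp/sysbench_initial_load.sql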

 

Third step: Running the test

The test has been run with different numbers of threads in order to understand how the different versions, editions and forks of MySQL scale with the number of threads. The max-requests parameter limits the total number of requests. The OLTP test mode (oltp.lua) was written to benchmark database server performance with a realistic OLTP scenario.

 

sysbench \
--db-driver=mysql \
--test=oltp \
--num-threads=1 \
--mysql-user=sbtest \
--mysql-password=sbtest \
--mysql-db=sysbench \
--max-requests=10000 \
--oltp-test-mode=complex \
--test=/home/mysql/sysbench/oltp.lua \
--mysql-socket=/u00/app/mysql/admin/mysqld1/socket/mysqld1.sock \
--oltp-table-name=sbtest1 \
run

 

In order to ensure correct results, avoid any side effects due to external processes and obtain consistent results over time, the benchmark has been run twice.
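
As a sketch, the whole campaign can be wrapped in a simple loop over the thread counts, repeating each measurement twice (hypothetical wrapper and log file names; apart from 1, 64 and 4096, the exact list of thread counts is an assumption):

for threads in 1 8 16 32 64 128 512 4096; do
  for run in 1 2; do
    sysbench \
      --db-driver=mysql \
      --num-threads=$threads \
      --mysql-user=sbtest \
      --mysql-password=sbtest \
      --mysql-db=sysbench \
      --max-requests=10000 \
      --oltp-test-mode=complex \
      --test=/home/mysql/sysbench/oltp.lua \
      --mysql-socket=/u00/app/mysql/admin/mysqld1/socket/mysqld1.sock \
      --oltp-table-name=sbtest1 \
      run > /tmp/sysbench_mysqld1_${threads}threads_run${run}.log
  done
done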

 

Fourth step: Collecting results

All the results have been collected in an excel sheet and the following graph directly comes from these results:

 MySQLPerformanceComparison_20141226-105949_1.png

Fifth step: Results analysis

1. InnoDB has been improved over time with regard to scalability and the test results tend to prove it. The performance with 64 threads is radically different depending on the MySQL version:

MySQL 5.0.15 – 1237 tps
MySQL 5.1.73 – 1818 tps
MySQL 5.5.39 – 2978 tps
MySQL 5.6.20 – 2801 tps
MySQL 5.6.21 – 2830 tps
MySQL 5.7.4 – 2725 tps
Percona 5.6.21 – 2853 tps
MariaDB 10.0.15 – 2941 tps

 

2. For applications using only one thread, the performance across MySQL versions (with default settings) is more or less equivalent (+/-10%):

MySQL 5.0.15 – 163 tps
MySQL 5.1.73 – 158 tps
MySQL 5.5.39 – 150 tps
MySQL 5.6.20 – 145 tps
MySQL 5.6.21 – 149 tps
MySQL 5.7.4 – 145 tps
Percona 5.6.21 – 145 tps
MariaDB 10.0.15 – 143 tps

 

3. For a large number of threads, it is definitely worth using the thread pool plugin from Percona: during these tests an improvement factor of x30 has been observed. Unfortunately I didn't see any performance improvement with MySQL 5.6.21 with the thread_pool plugin and the thread_pool_size parameter set to 36 (best performance with sysbench according to http://dev.mysql.com/doc/refman/5.6/en/thread-pool-tuning.html). Regarding Percona, I set the thread_pool_high_prio_mode parameter to transactions; the corresponding configuration is sketched after these results. You can find below the results with 4096 threads:

MySQL 5.0.15 – error
MySQL 5.1.73 – 3.97 tps
MySQL 5.5.39 – 9.05 tps
MySQL 5.6.20 – 9.29 tps
MySQL 5.6.21 – 9.07 tps
MySQL 5.6.21 pool of thread plugin – 8.75 tps
MySQL 5.7.4 – 5.64 tps
Percona 5.6.21 – 9.83 tps
Percona 5.6.21 pool of thread plugin – 295.4 tps
MariaDB 10.0.15 – 8.04 tps

It is interesting to notice that performance degradation can occur with the thread pool plugin activated, both for MySQL and for Percona. This degradation has been observed for thread counts between 16 and 128 for Percona and between 32 and 512 for MySQL.
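
For reference, here is a sketch of the thread pool settings used for those runs, in option-file form (assuming Percona Server's pool-of-threads scheduler on one side and the MySQL Enterprise thread pool plugin on the other, as described above):

# Percona Server 5.6.21
[mysqld]
thread_handling            = pool-of-threads
thread_pool_high_prio_mode = transactions

# MySQL 5.6.21 Enterprise Edition
[mysqld]
plugin-load-add  = thread_pool.so
thread_pool_size = 36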

 

Conclusion

These results tend to prove that the latest MySQL releases perform better than older ones, especially with many threads (64 threads in this case). The only exception is MySQL 5.7.4, which is a development release.


Applications using only one thread won't benefit from a huge performance improvement with the latest MySQL versions. However, enhancements provided in recent versions, such as online DDL, faster deadlock detection, the dynamic innodb_buffer_pool_size parameter, etc., will certainly save you a lot of time.


MySQL forks such as Percona Server and MariaDB perform like MySQL Server. In addition, I didn't observe any performance difference between MySQL Enterprise Edition and MySQL Community Edition. It is interesting to notice that the thread pool plugin provided by Percona provides a huge performance improvement with a large number of threads compared to the standard behavior.


Regarding MySQL Enterprise Edition, I haven't been able to see any performance improvement with the MySQL Thread Pool plugin activated, even with a large number of threads. This is perhaps due to a misconfiguration on my side... however, I presented these results to an Oracle MySQL specialist at the Oracle UKOUG booth and he wasn't able to find any error in my configuration.

dbi services wishes you a Merry Christmas with this SQL Server script

Thu, 2014-12-18 07:57

The end of the year approaches and soon it will be time to celebrate Christmas with your family and friends. At dbi services, we wish you a Merry Christmas via SQL Server with the following script, to execute in your SQL Server Management Studio for example (supported only by SQL Server 2012 and SQL Server 2014). You will find a surprise in the result pane :-)

 

SET NOCOUNT ON;   IF EXISTS(SELECT * FROM tempdb.sys.objects WHERE name LIKE N'#SURPRISE%')        DROP TABLE #SURPRISE; GO   CREATE TABLE #SURPRISE ( col1 CHAR(290) ); GO   ------------------------------   INSERT #SURPRISE VALUES (REPLICATE(CHAR(111),16)); UPDATE #SURPRISE SET col1 = STUFF(col1, LEN(col1), 50, SPACE(1) + CHAR(165)); UPDATE #SURPRISE SET col1 = STUFF(col1, LEN(col1) + 1, 50, SPACE(1) + CHAR(77) + CHAR(101) + CHAR(114) + CHAR(114) + CHAR(121)); UPDATE #SURPRISE SET col1 = STUFF(col1, LEN(col1) + 1, 50, SPACE(1) + CHAR(165)); UPDATE #SURPRISE SET col1 = STUFF(col1, LEN(col1) + 1, 50, SPACE(1) + CHAR(246)); UPDATE #SURPRISE SET col1 = STUFF(col1, LEN(col1) + 1, 50, SPACE(1) + CHAR(46)); UPDATE #SURPRISE SET col1 = STUFF(col1, LEN(col1) + 1, 50, SPACE(1) + CHAR(176)); UPDATE #SURPRISE SET col1 = STUFF(col1, LEN(col1) + 1, 50, SPACE(1) + CHAR(176)); UPDATE #SURPRISE SET col1 = STUFF(col1, LEN(col1) + 1, 50, CHAR(46)); UPDATE #SURPRISE SET col1 = STUFF(col1, LEN(col1) + 1, 50, SPACE(1) + CHAR(176)); UPDATE #SURPRISE SET col1 = STUFF(col1, LEN(col1) + 1, 50, SPACE(1) + CHAR(149)); UPDATE #SURPRISE SET col1 = STUFF(col1, LEN(col1) + 1, 50, CHAR(46)); UPDATE #SURPRISE SET col1 = STUFF(col1, LEN(col1) + 1, 50, SPACE(1) + CHAR(165)); UPDATE #SURPRISE SET col1 = STUFF(col1, LEN(col1) + 1, 50, SPACE(1) + CHAR(67) + CHAR(104) + CHAR(114) + CHAR(105) + CHAR(115) + CHAR(116) + CHAR(109) + CHAR(97) + CHAR(115) ); UPDATE #SURPRISE SET col1 = STUFF(col1, LEN(col1) + 1, 50, SPACE(1) + CHAR(165)); UPDATE #SURPRISE SET col1 = STUFF(col1, LEN(col1) + 1, 50, SPACE(1) + CHAR(46)); UPDATE #SURPRISE SET col1 = STUFF(col1, LEN(col1) + 1, 50, CHAR(42)); UPDATE #SURPRISE SET col1 = STUFF(col1, LEN(col1) + 1, 50, CHAR(176)); UPDATE #SURPRISE SET col1 = STUFF(col1, LEN(col1) + 1, 50, SPACE(1) + CHAR(46)); UPDATE #SURPRISE SET col1 = STUFF(col1, LEN(col1) + 1, 50, SPACE(1) + CHAR(176)); UPDATE #SURPRISE SET col1 = STUFF(col1, LEN(col1) + 1, 50, SPACE(1) + CHAR(176)); UPDATE #SURPRISE SET col1 = STUFF(col1, LEN(col1) + 1, 50, CHAR(42)); UPDATE #SURPRISE SET col1 = STUFF(col1, LEN(col1) + 1, 50, SPACE(1) + CHAR(95)); UPDATE #SURPRISE SET col1 = STUFF(col1, LEN(col1) + 1, 50, CHAR(124)); UPDATE #SURPRISE SET col1 = STUFF(col1, LEN(col1) + 1, 50, CHAR(124)); UPDATE #SURPRISE SET col1 = STUFF(col1, LEN(col1) + 1, 50, CHAR(95)); UPDATE #SURPRISE SET col1 = STUFF(col1, LEN(col1) + 1, 50, CHAR(95)); UPDATE #SURPRISE SET col1 = STUFF(col1, LEN(col1) + 1, 50, CHAR(95)); UPDATE #SURPRISE SET col1 = STUFF(col1, LEN(col1) + 1, 50, CHAR(95)); UPDATE #SURPRISE SET col1 = STUFF(col1, LEN(col1) + 1, 50, CHAR(95)); UPDATE #SURPRISE SET col1 = STUFF(col1, LEN(col1) + 1, 50, CHAR(42)); UPDATE #SURPRISE SET col1 = STUFF(col1, LEN(col1) + 1, 50, CHAR(46)); UPDATE #SURPRISE SET col1 = STUFF(col1, LEN(col1) + 1, 50, CHAR(42)); UPDATE #SURPRISE SET col1 = STUFF(col1, LEN(col1) + 1, 50, CHAR(176)); UPDATE #SURPRISE SET col1 = STUFF(col1, LEN(col1) + 1, 50, SPACE(1) + CHAR(46)); UPDATE #SURPRISE SET col1 = STUFF(col1, LEN(col1) + 1, 50, SPACE(1) + CHAR(186)); UPDATE #SURPRISE SET col1 = STUFF(col1, LEN(col1) + 1, 50, CHAR(186)); UPDATE #SURPRISE SET col1 = STUFF(col1, LEN(col1) + 1, 50, CHAR(42)); UPDATE #SURPRISE SET col1 = STUFF(col1, LEN(col1) + 1, 50, SPACE(1) + CHAR(47)); UPDATE #SURPRISE SET col1 = STUFF(col1, LEN(col1) + 1, 50, CHAR(126)); UPDATE #SURPRISE SET col1 = STUFF(col1, LEN(col1) + 1, 50, CHAR(126)); UPDATE #SURPRISE SET col1 = STUFF(col1, LEN(col1) + 1, 50, CHAR(126)); UPDATE #SURPRISE SET col1 = STUFF(col1, 
LEN(col1) + 1, 50, CHAR(126)); UPDATE #SURPRISE SET col1 = STUFF(col1, LEN(col1) + 1, 50, CHAR(126)); UPDATE #SURPRISE SET col1 = STUFF(col1, LEN(col1) + 1, 50, CHAR(47)); UPDATE #SURPRISE SET col1 = STUFF(col1, LEN(col1) + 1, 50, CHAR(126)); UPDATE #SURPRISE SET col1 = STUFF(col1, LEN(col1) + 1, 50, CHAR(126)); UPDATE #SURPRISE SET col1 = STUFF(col1, LEN(col1) + 1, 50, + CHAR(92)); UPDATE #SURPRISE SET col1 = STUFF(col1, LEN(col1) + 1, 50, CHAR(46)); UPDATE #SURPRISE SET col1 = STUFF(col1, LEN(col1) + 1, 50, CHAR(176)); UPDATE #SURPRISE SET col1 = STUFF(col1, LEN(col1) + 1, 50, CHAR(46)); UPDATE #SURPRISE SET col1 = STUFF(col1, LEN(col1) + 1, 50, CHAR(176)); UPDATE #SURPRISE SET col1 = STUFF(col1, LEN(col1) + 1, 50, SPACE(1) + CHAR(46)); UPDATE #SURPRISE SET col1 = STUFF(col1, LEN(col1) + 1, 50, SPACE(1) + CHAR(186)); UPDATE #SURPRISE SET col1 = STUFF(col1, LEN(col1) + 1, 50, CHAR(182)); UPDATE #SURPRISE SET col1 = STUFF(col1, LEN(col1) + 1, 50, SPACE(1) + CHAR(124)); UPDATE #SURPRISE SET col1 = STUFF(col1, LEN(col1) + 1, 50, SPACE(1) + CHAR(164)); UPDATE #SURPRISE SET col1 = STUFF(col1, LEN(col1) + 1, 50, CHAR(164)); UPDATE #SURPRISE SET col1 = STUFF(col1, LEN(col1) + 1, 50, SPACE(2) + CHAR(124)); UPDATE #SURPRISE SET col1 = STUFF(col1, LEN(col1) + 1, 50, CHAR(91)); UPDATE #SURPRISE SET col1 = STUFF(col1, LEN(col1) + 1, 50, CHAR(93)); UPDATE #SURPRISE SET col1 = STUFF(col1, LEN(col1) + 1, 50, CHAR(124)); UPDATE #SURPRISE SET col1 = STUFF(col1, LEN(col1) + 1, 50, SPACE(1) + CHAR(176)); UPDATE #SURPRISE SET col1 = STUFF(col1, LEN(col1) + 1, 50, SPACE(1) + REPLICATE(CHAR(42), 19)); UPDATE #SURPRISE SET col1 = STUFF(col1, LEN(col1) + 1, 50, SPACE(3) + CHAR(100) + CHAR(98) + CHAR(105)); UPDATE #SURPRISE SET col1 = STUFF(col1, LEN(col1) + 1, 50, SPACE(1) + CHAR(115) + CHAR(101) + CHAR(114) + CHAR(118) + CHAR(105) + CHAR(99) + CHAR(101) + CHAR(115)); UPDATE #SURPRISE SET col1 = STUFF(col1, LEN(col1) + 1, 50, SPACE(5) + CHAR(46)); UPDATE #SURPRISE SET col1 = STUFF(col1, LEN(col1) + 1, 50, CHAR(46)); UPDATE #SURPRISE SET col1 = STUFF(col1, LEN(col1) + 1, 50, REPLICATE(CHAR(111), 17));   ------------------------------   DECLARE @SQL VARCHAR(100);   SELECT        @SQL = 'DBCC TRACEON(3604); DBCC PAGE (''tempdb'', 1, ' + CAST(allocated_page_page_id AS VARCHAR(20)) + ', 1)' FROM sys.dm_db_database_page_allocations(2, OBJECT_ID('tempdb..#SURPRISE'), 0, NULL, DEFAULT) WHERE is_iam_page = 0 AND is_allocated = 1;   EXEC(@SQL);

 

Merry Christmas to all of you!