Yann Neuhaus

dbi services technical blog

Rolling Invalidate Window Exceeded

Tue, 2016-08-30 16:05

Today I was doing a hard parse storm post-mortem analysis. One hypothesis was rolling invalidation causing the invalidations, but the figures didn’t match. I often reproduce the hypothesis to check the numbers and be sure I interpret them correctly, especially the timestamps in V$SQL_SHARED_CURSOR.REASON. And as it may help others (including myself in the future), I share the test case.

I create a table with one row (12c online statistics gathering, so num_rows is 1) and then insert one more row.

21:31:26 SQL> create table DEMO as select * from dual;
Table created.
21:31:26 SQL> insert into DEMO select * from dual;
1 row created.
21:31:26 SQL> commit;
Commit complete.

I run a query on the table. I don’t care about the result, so let’s make it something that will be useful later: the UTC time as the number of seconds since Jan 1st, 1970 (aka Epoch).

21:32:52 SQL> select (cast(sys_extract_utc(current_timestamp) as date)-date'1970-01-01')*24*3600 from DEMO;
 
(CAST(SYS_EXTRACT_UTC(CURRENT_TIMESTAMP)ASDATE)-DATE'1970-01-01')*24*3600
-------------------------------------------------------------------------
1472585572
1472585572

The execution plan cardinality estimation is 1 row as this is what is in object statistics.

21:32:52 SQL> select * from table(dbms_xplan.display_cursor(null,null));
 
PLAN_TABLE_OUTPUT
------------------------------------------------------------------------------------------------------------------------
SQL_ID 61x2h0y9zv0r6, child number 0
-------------------------------------
select (cast(sys_extract_utc(current_timestamp) as
date)-date'1970-01-01')*24*3600 from DEMO
 
Plan hash value: 4000794843
 
------------------------------------------------------------------
| Id | Operation | Name | Rows | Cost (%CPU)| Time |
------------------------------------------------------------------
| 0 | SELECT STATEMENT | | | 2 (100)| |
| 1 | TABLE ACCESS FULL| DEMO | 1 | 2 (0)| 00:00:01 |
------------------------------------------------------------------

I gather statistics with all default attributes, so rolling invalidation occurs.

21:32:52 SQL> exec dbms_stats.gather_table_stats(user,'DEMO');
PL/SQL procedure successfully completed.
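The default invalidation behavior comes from the NO_INVALIDATE statistics preference, which is DBMS_STATS.AUTO_INVALIDATE (rolling invalidation). A minimal sketch to check it, not part of the original test case:

select dbms_stats.get_prefs('NO_INVALIDATE') from dual;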

At this time, the cursor has been parsed only once:

21:32:52 SQL> select invalidations,loads,parse_calls,executions,first_load_time,last_load_time,last_active_time from v$sql where sql_id='61x2h0y9zv0r6';
 
INVALIDATIONS LOADS PARSE_CALLS EXECUTIONS FIRST_LOAD_TIME LAST_LOAD_TIME LAST_ACTIVE_TIME
------------- ---------- ----------- ---------- ------------------- ------------------- ------------------
0 1 1 1 2016-08-30/21:32:51 2016-08-30/21:32:51 30-AUG-16 21:32:51

By default the invalidation window is 5 hours. I don’t want to wait that long, so I set it to something shorter: 15 seconds.

21:32:54 SQL> alter system set "_optimizer_invalidation_period"=15;
System altered.

There will not be any invalidation until the next execution. To prove it I wait 20 seconds, run the query again and check the execution plan:

21:33:12 SQL> select (sysdate-date'1970-01-01')*24*3600 from DEMO;
 
(SYSDATE-DATE'1970-01-01')*24*3600
----------------------------------
1472592792
1472592792
 
21:33:12 SQL> select * from table(dbms_xplan.display_cursor('61x2h0y9zv0r6',null));
 
PLAN_TABLE_OUTPUT
------------------------------------------------------------------------------------------------------------------------
SQL_ID 61x2h0y9zv0r6, child number 0
-------------------------------------
select (cast(sys_extract_utc(current_timestamp) as
date)-date'1970-01-01')*24*3600 from DEMO
 
Plan hash value: 4000794843
 
------------------------------------------------------------------
| Id | Operation | Name | Rows | Cost (%CPU)| Time |
------------------------------------------------------------------
| 0 | SELECT STATEMENT | | | 2 (100)| |
| 1 | TABLE ACCESS FULL| DEMO | 1 | 2 (0)| 00:00:01 |
------------------------------------------------------------------

This is still the old cursor (child number 0) with the old statistics (num_rows=1).

However, from this point rolling invalidation occurs: a random timestamp is generated within the rolling window (15 seconds here, 5 hours in a default database).

I don’t know how to see this timestamp at that point (comments welcome), so I run the query several times within this 15-second window to see when it occurs:

21:33:16 SQL> select (cast(sys_extract_utc(current_timestamp) as date)-date'1970-01-01')*24*3600 from DEMO;
 
(CAST(SYS_EXTRACT_UTC(CURRENT_TIMESTAMP)ASDATE)-DATE'1970-01-01')*24*3600
-------------------------------------------------------------------------
1472585596
1472585596
 
21:33:19 SQL> select (cast(sys_extract_utc(current_timestamp) as date)-date'1970-01-01')*24*3600 from DEMO;
 
(CAST(SYS_EXTRACT_UTC(CURRENT_TIMESTAMP)ASDATE)-DATE'1970-01-01')*24*3600
-------------------------------------------------------------------------
1472585599
1472585599
 
21:33:22 SQL> select (cast(sys_extract_utc(current_timestamp) as date)-date'1970-01-01')*24*3600 from DEMO;
 
(CAST(SYS_EXTRACT_UTC(CURRENT_TIMESTAMP)ASDATE)-DATE'1970-01-01')*24*3600
-------------------------------------------------------------------------
1472585602
1472585602
 
21:33:25 SQL> select (cast(sys_extract_utc(current_timestamp) as date)-date'1970-01-01')*24*3600 from DEMO;
 
(CAST(SYS_EXTRACT_UTC(CURRENT_TIMESTAMP)ASDATE)-DATE'1970-01-01')*24*3600
-------------------------------------------------------------------------
1472585605
1472585605
 
21:33:28 SQL> select (cast(sys_extract_utc(current_timestamp) as date)-date'1970-01-01')*24*3600 from DEMO;
 
(CAST(SYS_EXTRACT_UTC(CURRENT_TIMESTAMP)ASDATE)-DATE'1970-01-01')*24*3600
-------------------------------------------------------------------------
1472585608
1472585608
 
21:33:31 SQL> select (cast(sys_extract_utc(current_timestamp) as date)-date'1970-01-01')*24*3600 from DEMO;
 
(CAST(SYS_EXTRACT_UTC(CURRENT_TIMESTAMP)ASDATE)-DATE'1970-01-01')*24*3600
-------------------------------------------------------------------------
1472585611
1472585611

After those runs, I check that I have a new execution plan with a new estimation from the new statistics (num_rows=2):

21:33:31 SQL> select * from table(dbms_xplan.display_cursor('61x2h0y9zv0r6',null));
 
PLAN_TABLE_OUTPUT
------------------------------------------------------------------------------------------------------------------------
SQL_ID 61x2h0y9zv0r6, child number 0
-------------------------------------
select (cast(sys_extract_utc(current_timestamp) as
date)-date'1970-01-01')*24*3600 from DEMO
 
Plan hash value: 4000794843
 
------------------------------------------------------------------
| Id | Operation | Name | Rows | Cost (%CPU)| Time |
------------------------------------------------------------------
| 0 | SELECT STATEMENT | | | 2 (100)| |
| 1 | TABLE ACCESS FULL| DEMO | 1 | 2 (0)| 00:00:01 |
------------------------------------------------------------------
 
SQL_ID 61x2h0y9zv0r6, child number 1
-------------------------------------
select (cast(sys_extract_utc(current_timestamp) as
date)-date'1970-01-01')*24*3600 from DEMO
 
Plan hash value: 4000794843
 
------------------------------------------------------------------
| Id | Operation | Name | Rows | Cost (%CPU)| Time |
------------------------------------------------------------------
| 0 | SELECT STATEMENT | | | 2 (100)| |
| 1 | TABLE ACCESS FULL| DEMO | 2 | 2 (0)| 00:00:01 |
------------------------------------------------------------------

Yes, I have a new child cursor, child number 1. A new cursor means that I have a reason in V$SQL_SHARED_CURSOR:

21:33:31 SQL> select child_number,reason from v$sql_shared_cursor where sql_id='61x2h0y9zv0r6';
 
CHILD_NUMBER REASON
------------ --------------------------------------------------------------------------------
0 <ChildNode><ChildNumber>0</ChildNumber><ID>33</ID><reason>Rolling Invalidate Win
dow Exceeded(2)</reason><size>0x0</size><details>already_processed</details></Ch
ildNode><ChildNode><ChildNumber>0</ChildNumber><ID>33</ID><reason>Rolling Invali
date Window Exceeded(3)</reason><size>2x4</size><invalidation_window>1472585604<
/invalidation_window><ksugctm>1472585607</ksugctm></ChildNode>
 
1

Child cursor number 0 has not been shared because of rolling invalidation. The invalidation_window value, 1472585604, is the timestamp set by rolling invalidation: it is set at the first parse call after statistics gathering and falls within the rolling window that follows. After this timestamp (1472585604 is 21:33:24 in my GMT+2 timezone) the cursor is no longer shared and a new hard parse occurs. I think that ksugctm is the timestamp when the new cursor is created: 1472585607 is 21:33:27 here in Switzerland. You see the corresponding timestamps in V$SQL:

21:33:31 SQL> select invalidations,loads,parse_calls,executions,first_load_time,last_load_time,last_active_time from v$sql where sql_id='61x2h0y9zv0r6';
 
INVALIDATIONS LOADS PARSE_CALLS EXECUTIONS FIRST_LOAD_TIME LAST_LOAD_TIME LAST_ACTIVE_TIME
------------- ---------- ----------- ---------- ------------------- ------------------- ------------------
0 1 5 5 2016-08-30/21:32:51 2016-08-30/21:32:51 30-AUG-16 21:33:24
0 1 2 2 2016-08-30/21:32:51 2016-08-30/21:33:27 30-AUG-16 21:33:30
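To double-check those epoch values, you can convert them back to a local timestamp directly in SQL. A minimal sketch, using the invalidation_window value seen above:

select from_tz(cast(date '1970-01-01' + 1472585604/86400 as timestamp),'UTC') at local as invalidation_window from dual;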

Ok. The important thing is that ‘rolling invalidation’ is not an invalidation of the cursor (V$SQL.INVALIDATIONS=0) but just non-sharing of the child.

If we gather statistics with immediate invalidation, it’s different:

21:33:31 SQL> exec dbms_stats.gather_table_stats(user,'DEMO',no_invalidate=>false);
PL/SQL procedure successfully completed.
 
21:33:34 SQL> select (cast(sys_extract_utc(current_timestamp) as date)-date'1970-01-01')*24*3600 from DEMO;
 
(CAST(SYS_EXTRACT_UTC(CURRENT_TIMESTAMP)ASDATE)-DATE'1970-01-01')*24*3600
-------------------------------------------------------------------------
1472585614
1472585614
21:33:34 SQL> select child_number,reason from v$sql_shared_cursor where sql_id='61x2h0y9zv0r6';
 
CHILD_NUMBER REASON
------------ --------------------------------------------------------------------------------
0 <ChildNode><ChildNumber>0</ChildNumber><ID>33</ID><reason>Rolling Invalidate Win
dow Exceeded(3)</reason><size>2x4</size><invalidation_window>1472585604</invalid
ation_window><ksugctm>1472585607</ksugctm></ChildNode><ChildNode><ChildNumber>0<
/ChildNumber><ID>33</ID><reason>Rolling Invalidate Window Exceeded(2)</reason><s
ize>0x0</size><details>already_processed</details></ChildNode>

I have only one child cursor here, a new one, and I’m not sure the reason still has a meaning.

21:33:34 SQL> select invalidations,loads,parse_calls,executions,first_load_time,last_load_time,last_active_time from v$sql where sql_id='61x2h0y9zv0r6';
 
INVALIDATIONS LOADS PARSE_CALLS EXECUTIONS FIRST_LOAD_TIME LAST_LOAD_TIME LAST_ACTIVE_TIME
------------- ---------- ----------- ---------- ------------------- ------------------- ------------------
1 2 1 1 2016-08-30/21:32:51 2016-08-30/21:33:33 30-AUG-16 21:33:33

This is an invalidation of the cursor. The old child cursors are removed and the proud parent is marked as invalidated one time.

 


The fastest way to get the Oracle sample schemas

Tue, 2016-08-30 00:52

Do you need the Oracle sample schemas for a quick test or demonstration? And, as always, you did not install the sample schemas when you set up your environment? Probably the fastest way to get them is to download them from GitHub. Installation instructions are there as well. Have fun …

 


Filenames in AWR reports

Mon, 2016-08-29 14:33

If you have read my latest blog posts, you know I’ve measured IOPS with SLOB to estimate ACFS overhead on a fast storage. This blog is about something I learned after wasting one hour on the result.

Here is how I did my tests:

  1. Create a SLOB database in ACFS
  2. Run SLOB PIO tests and tag the AWR report as ‘ACFS’
  3. Move datafile to +DATA
  4. Run SLOB PIO tests and tag the AWR report as ‘ASM’

Of course, I scripted this to run several tests, varying the number of sessions, work unit, etc., while I was doing something more productive.

Once done, I got a set of AWR reports and the first task was to check that they were consistent. But they were not: the datafile in the ‘File IO Stats’ section did not match the tag I had put in the file name. First I suspected a bug in my script, with bad tagging or a failed datafile move. I had to read the alert.log to see that my tagging was good but the filename in the AWR reports was wrong. I finally looked at the AWR views to understand why the filename was wrong, and understood the problem:

SQL> desc DBA_HIST_DATAFILE;
Name Null? Type
----------------------------------------- -------- ----------------------------
DBID NOT NULL NUMBER
FILE# NOT NULL NUMBER
CREATION_CHANGE# NOT NULL NUMBER
FILENAME NOT NULL VARCHAR2(513)
TS# NOT NULL NUMBER
TSNAME VARCHAR2(30)
BLOCK_SIZE NUMBER
CON_DBID NUMBER
CON_ID NUMBER

There’s no SNAP_ID. AWR does not store the history of file names. We could suppose that it stores only the latest filename, but then my reports would have been correct as they were generated immediately after the snapshot. Or that the first name stays, but I had some reports with ‘+DATA’.

Then, I grepped for ‘WRH$_HISTORY’ in ORACLE_HOME/rdbms/admin and came upon this:

dbmsawr.sql: -- This routine updates WRH$_DATAFILE rows for the datafile name and
dbmsawr.sql: -- WRH$_DATAFILE with the current information in database.

There is an update_datafile_info procedure here in the dbms_workload_repository and the comment says something like:
-- This change will be captured at max after some
-- (generally 50) snapshots. So the AWR and AWR report may be wrong with
-- respect to data file name or tablespace name for that duration.

I love to work with Oracle. All the information is there if you know where to look.

So if you want to rely on the filename in an AWR report after a move, you should run this procedure before taking the report. And you should take the report before the next datafile move.

Here is the example:

SQL> exec dbms_workload_repository.create_snapshot;
PL/SQL procedure successfully completed.
 
SQL> select file#,filename from DBA_HIST_DATAFILE where file#=6;
 
FILE# FILENAME
---------- --------------------------------------
6 /u01/DEMO/oradata/DEMO14/users01.dbf
 
SQL> select snap_id,file#,filename from DBA_HIST_FILESTATXS where file#=6 order by snap_id fetch first 10 rows only;
 
SNAP_ID FILE# FILENAME
---------- ---------- --------------------------------------
1244 6 /u01/DEMO/oradata/DEMO14/users01.dbf
1245 6 /u01/DEMO/oradata/DEMO14/users01.dbf
1246 6 /u01/DEMO/oradata/DEMO14/users01.dbf
1247 6 /u01/DEMO/oradata/DEMO14/users01.dbf
1248 6 /u01/DEMO/oradata/DEMO14/users01.dbf
1249 6 /u01/DEMO/oradata/DEMO14/users01.dbf
6 rows selected.

My file is users01 and this is what is stored in AWR.

I rename it to users02 (thanks to 12c online datafile move):

SQL> alter database move datafile '/u01/DEMO/oradata/DEMO14/users01.dbf' to '/u01/DEMO/oradata/DEMO14/users02.dbf';
Database altered.

but AWR is not aware of the change even after a snapshot:

SQL> exec dbms_workload_repository.create_snapshot;
PL/SQL procedure successfully completed.
 
SQL> select file#,filename from DBA_HIST_DATAFILE where file#=6;
 
FILE# FILENAME
---------- --------------------------------------
6 /u01/DEMO/oradata/DEMO14/users01.dbf
 
SQL> select snap_id,file#,filename from DBA_HIST_FILESTATXS where file#=6 order by snap_id fetch first 10 rows only;
 
SNAP_ID FILE# FILENAME
---------- ---------- --------------------------------------
1244 6 /u01/DEMO/oradata/DEMO14/users01.dbf
1245 6 /u01/DEMO/oradata/DEMO14/users01.dbf
1246 6 /u01/DEMO/oradata/DEMO14/users01.dbf
1247 6 /u01/DEMO/oradata/DEMO14/users01.dbf
1248 6 /u01/DEMO/oradata/DEMO14/users01.dbf
1249 6 /u01/DEMO/oradata/DEMO14/users01.dbf
1250 6 /u01/DEMO/oradata/DEMO14/users01.dbf

You have to wait for those 50 snapshots or run the update:

SQL> exec dbms_workload_repository.update_datafile_info;
PL/SQL procedure successfully completed.

SQL> select file#,filename from DBA_HIST_DATAFILE where file#=6;
 
FILE# FILENAME
---------- --------------------------------------
6 /u01/DEMO/oradata/DEMO14/users02.dbf
 
SQL> select snap_id,file#,filename from DBA_HIST_FILESTATXS where file#=6 order by snap_id fetch first 10 rows only;
 
SNAP_ID FILE# FILENAME
---------- ---------- --------------------------------------
1244 6 /u01/DEMO/oradata/DEMO14/users02.dbf
1245 6 /u01/DEMO/oradata/DEMO14/users02.dbf
1246 6 /u01/DEMO/oradata/DEMO14/users02.dbf
1247 6 /u01/DEMO/oradata/DEMO14/users02.dbf
1248 6 /u01/DEMO/oradata/DEMO14/users02.dbf
1249 6 /u01/DEMO/oradata/DEMO14/users02.dbf
1250 6 /u01/DEMO/oradata/DEMO14/users02.dbf

But as you can see, there is no history of the previous names.

Note that if you look at the table behind the view, there’s a SNAP_ID but it’s not part of the primary key. It is used by the purge procedures.
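To verify that, here is a quick sketch querying the dictionary for the index behind WRH$_DATAFILE (assuming you have access to the DBA views):

select index_name, column_name, column_position
from dba_ind_columns
where table_name = 'WRH$_DATAFILE'
order by index_name, column_position;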

 


Letting GoldenGate automatically maintain the insert and update timestamps on the target

Mon, 2016-08-29 11:18

Today at one of our GoldenGate customers we were faced with the following requirement: for one of the tables on the target we needed two additional columns which hold the timestamps for when the row was inserted or updated. Just to be clear: it was not about getting the timestamps of the inserts or updates as they happened on the source, but on the target. In this post we’ll look at how GoldenGate can be configured to do the work requested.

All the examples below work with the well-known scott/tiger schema. There is one extract capturing on the source and one replicat re-playing the changes on the target (over sqlnet, no integrated mode here). This is the current status of the streams:

GGSCI (oelogg1) 1> info all

Program     Status      Group       Lag at Chkpt  Time Since Chkpt

MANAGER     RUNNING                                           
EXTRACT     RUNNING     EXTRSCO     00:00:00      00:00:02    
REPLICAT    RUNNING     REPLSCO     00:00:00      00:00:01    


GGSCI (oelogg1) 2> view params EXTRSCO

EXTRACT extrsco
USERIDALIAS DB1 DOMAIN admin
GETUPDATEBEFORES
REPORT AT 23:40
DDL INCLUDE OPTYPE TRUNCATE OBJNAME ARBOR.*, &
    INCLUDE OPTYPE ALTER OBJNAME ARBOR.*
EXTTRAIL /u01/app/ogg/product/12.1.2.1.9/dirdat/es
TABLE SCOTT.*;

GGSCI (oelogg1) 3> view params REPLSCO

REPLICAT REPLSCO
USERIDALIAS DB2 DOMAIN admin
GETUPDATEBEFORES
GETTRUNCATES
APPLYNOOPUPDATES
DDL INCLUDE OPTYPE TRUNCATE OBJNAME SCOTT.*, &
    INCLUDE OPTYPE ALTER OBJNAME SCOTT.* &
    EXCLUDE ALL INSTR 'CONSTRAINT' &
    EXCLUDE ALL INSTR 'TRIGGER' 
ASSUMETARGETDEFS
DBOPTIONS DEFERREFCONST
MAP SCOTT.*, TARGET SCOTT.*;

Pretty basic, no unusual stuff here. The table we’ll use for the scope of this post is the “project” table which has the following contents in a fresh scott/tiger installation:

SQL> col DESCRIPTION for a50
SQL> r
  1* select * from project

 PROJECTNO DESCRIPTION					      START_DATE   END_DATE
---------- -------------------------------------------------- ------------ ------------
      1001 Development of Novel Magnetic Suspension System    2006-01-01   2007-08-13
      1002 Research on thermofluid dynamics in Microdroplets  2006-08-22   2007-03-20
      1003 Foundation of Quantum Technology		      2007-02-24   2008-07-31
      1004 High capacity optical network		      2008-01-01

Of course the table looks the same on the target:

SQL> col DESCRIPTION for a50
SQL> r
  1* select * from project

 PROJECTNO DESCRIPTION					      START_DATE   END_DATE
---------- -------------------------------------------------- ------------ ------------
      1001 Development of Novel Magnetic Suspension System    2006-01-01   2007-08-13
      1002 Research on thermofluid dynamics in Microdroplets  2006-08-22   2007-03-20
      1003 Foundation of Quantum Technology		      2007-02-24   2008-07-31
      1004 High capacity optical network		      2008-01-01

To prove that the streams are really working, let’s add an additional row to the source:

SQL> insert into project values (1005, 'my fun project 1', sysdate, sysdate + 30 );

1 row created.

SQL> commit;

Commit complete.

SQL> 

… and then check that the row was indeed replicated to the target:

SQL> r
  1* select * from project

 PROJECTNO DESCRIPTION					      START_DATE   END_DATE
---------- -------------------------------------------------- ------------ ------------
      1005 my fun project 1				      29-AUG-16    28-SEP-16
      1001 Development of Novel Magnetic Suspension System    2006-01-01   2007-08-13
      1002 Research on thermofluid dynamics in Microdroplets  2006-08-22   2007-03-20
      1003 Foundation of Quantum Technology		      2007-02-24   2008-07-31
      1004 High capacity optical network		      2008-01-01

Ok, looks fine, so back to the initial requirement. We need two additional columns on the target:

alter table SCOTT.PROJECT add create_dt timestamp with time zone;
alter table SCOTT.PROJECT add update_dt timestamp with time zone;
alter table SCOTT.PROJECT modify create_dt default to_date('01.01.2000','DD.MM.YYYY');
alter table SCOTT.PROJECT modify update_dt default to_date('01.01.2000','DD.MM.YYYY');

In our case we needed to set a default value as both columns are not allowed to contain NULL values. For the moment the content of the table on the target is:

SQL> col CREATE_DT for a20
SQL> col UPDATE_DT for a20
SQL> select * from project;

 PROJECTNO DESCRIPTION					      START_DATE   END_DATE	CREATE_DT	     UPDATE_DT
---------- -------------------------------------------------- ------------ ------------ -------------------- --------------------
      1005 my fun project 1				      29-AUG-16    31-AUG-16
      1001 Development of Novel Magnetic Suspension System    2006-01-01   2007-08-13
      1002 Research on thermofluid dynamics in Microdroplets  2006-08-22   2007-03-20
      1003 Foundation of Quantum Technology		      2007-02-24   2008-07-31
      1004 High capacity optical network		      2008-01-01

In real life, when the table to be extended holds millions of rows, the following will probably be too simple and you’ll need to spend some time thinking about how to organize the updates (see the batching sketch after the statements below). For the scope of this post this is fine:

update SCOTT.PROJECT set create_dt = to_date('01.01.2000','DD.MM.YYYY') where create_dt is null;
update SCOTT.PROJECT set update_dt = to_date('01.01.2000','DD.MM.YYYY') where update_dt is null;
commit;
alter table SCOTT.PROJECT modify create_dt not null;
alter table SCOTT.PROJECT modify update_dt not null;
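For larger tables, one option is to backfill the new columns in chunks to limit undo usage and lock duration. A minimal sketch of that idea (the batch size is an arbitrary assumption, not the procedure used at the customer):

begin
  loop
    -- backfill at most 10000 rows per pass, then release the locks
    update SCOTT.PROJECT
       set create_dt = to_date('01.01.2000','DD.MM.YYYY'),
           update_dt = to_date('01.01.2000','DD.MM.YYYY')
     where create_dt is null
       and rownum <= 10000;
    exit when sql%rowcount = 0;
    commit;
  end loop;
  commit;
end;
/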

From now on we have identical insert and update timestamps for all of the rows on the target:

SQL> r
  1* select * from project

 PROJECTNO DESCRIPTION					      START_DATE   END_DATE	CREATE_DT			    UPDATE_DT
---------- -------------------------------------------------- ------------ ------------ ----------------------------------- -----------------------------------
      1005 my fun project 1				      29-AUG-16    31-AUG-16	01-JAN-00 12.00.00.000000 AM +02:00 01-JAN-00 12.00.00.000000 AM +02:00
      1001 Development of Novel Magnetic Suspension System    2006-01-01   2007-08-13	01-JAN-00 12.00.00.000000 AM +02:00 01-JAN-00 12.00.00.000000 AM +02:00
      1002 Research on thermofluid dynamics in Microdroplets  2006-08-22   2007-03-20	01-JAN-00 12.00.00.000000 AM +02:00 01-JAN-00 12.00.00.000000 AM +02:00
      1003 Foundation of Quantum Technology		      2007-02-24   2008-07-31	01-JAN-00 12.00.00.000000 AM +02:00 01-JAN-00 12.00.00.000000 AM +02:00
      1004 High capacity optical network		      2008-01-01                01-JAN-00 12.00.00.000000 AM +02:00 01-JAN-00 12.00.00.000000 AM +02:00

A final check on the source to be sure that the default values work:

SQL> insert into project values (1006, 'my fun project 2', sysdate, sysdate + 30 );

1 row created.

SQL> commit;

Commit complete.

On the target:

SQL> r
  1* select * from project

 PROJECTNO DESCRIPTION					      START_DATE   END_DATE	CREATE_DT			    UPDATE_DT
---------- -------------------------------------------------- ------------ ------------ ----------------------------------- -----------------------------------
      1005 my fun project 1				      29-AUG-16    31-AUG-16	01-JAN-00 12.00.00.000000 AM +02:00 01-JAN-00 12.00.00.000000 AM +02:00
      1006 my fun project 2				      29-AUG-16    28-SEP-16	01-JAN-00 12.00.00.000000 AM +02:00 01-JAN-00 12.00.00.000000 AM +02:00
      1001 Development of Novel Magnetic Suspension System    2006-01-01   2007-08-13	01-JAN-00 12.00.00.000000 AM +02:00 01-JAN-00 12.00.00.000000 AM +02:00
      1002 Research on thermofluid dynamics in Microdroplets  2006-08-22   2007-03-20	01-JAN-00 12.00.00.000000 AM +02:00 01-JAN-00 12.00.00.000000 AM +02:00
      1003 Foundation of Quantum Technology		      2007-02-24   2008-07-31	01-JAN-00 12.00.00.000000 AM +02:00 01-JAN-00 12.00.00.000000 AM +02:00
      1004 High capacity optical network		      2008-01-01		01-JAN-00 12.00.00.000000 AM +02:00 01-JAN-00 12.00.00.000000 AM +02:00

All fine. Time to do the GoldenGate work. Obviously the first step is to stop the replicat:

GGSCI (oelogg1) 2> stop REPLSCO

Sending STOP request to REPLICAT REPLSCO ...
Request processed.

GGSCI (oelogg1) 3> info all

Program     Status      Group       Lag at Chkpt  Time Since Chkpt

MANAGER     RUNNING                                           
EXTRACT     RUNNING     EXTRSCO     00:00:00      00:00:09    
REPLICAT    STOPPED     REPLSCO     00:00:00      00:00:01    

Let’s do the insert case first by adding the following COLMAP clause to the replicat configuration:

REPLICAT REPLSCO
USERIDALIAS DB2 DOMAIN admin
GETUPDATEBEFORES
GETTRUNCATES
APPLYNOOPUPDATES
DISCARDFILE /u01/app/ogg/product/12.1.2.1.9/dirdsc/dscsco.dsc, APPEND, MEGABYTES 10
DDL INCLUDE OPTYPE TRUNCATE OBJNAME SCOTT.*, &
    INCLUDE OPTYPE ALTER OBJNAME SCOTT.* &
    EXCLUDE ALL INSTR 'CONSTRAINT' &
    EXCLUDE ALL INSTR 'TRIGGER' 
ASSUMETARGETDEFS
DBOPTIONS DEFERREFCONST
MAP SCOTT.PROJECT, TARGET SCOTT.PROJECT, COLMAP (usedefaults,
    create_dt = @IF (@STREQ (@GETENV ('GGHEADER', 'OPTYPE'), 'INSERT'), @DATENOW(), @COLSTAT (MISSING)));
MAP SCOTT.*, TARGET SCOTT.*;

Start again:

GGSCI (oelogg1) 6> start REPLSCO

Sending START request to MANAGER ...
REPLICAT REPLSCO starting


GGSCI (oelogg1) 7> info all

Program     Status      Group       Lag at Chkpt  Time Since Chkpt

MANAGER     RUNNING                                           
EXTRACT     RUNNING     EXTRSCO     00:00:00      00:00:01    
REPLICAT    RUNNING     REPLSCO     00:00:00      00:00:00    

Looks good from a configuration perspective. Time to start:

SQL> insert into project values (1007, 'my fun project 3', sysdate, sysdate + 30 );

1 row created.

SQL> commit;

Commit complete.

On the target we should now see the exact insert date of the record instead of the default value of the column:

SQL> r
  1* select * from project

 PROJECTNO DESCRIPTION					      START_DATE   END_DATE	CREATE_DT			    UPDATE_DT
---------- -------------------------------------------------- ------------ ------------ ----------------------------------- -----------------------------------
      1005 my fun project 1				      29-AUG-16    31-AUG-16	01-JAN-00 12.00.00.000000 AM +02:00 01-JAN-00 12.00.00.000000 AM +02:00
      1006 my fun project 2				      29-AUG-16    28-SEP-16	01-JAN-00 12.00.00.000000 AM +02:00 01-JAN-00 12.00.00.000000 AM +02:00
      1001 Development of Novel Magnetic Suspension System    2006-01-01   2007-08-13	01-JAN-00 12.00.00.000000 AM +02:00 01-JAN-00 12.00.00.000000 AM +02:00
      1002 Research on thermofluid dynamics in Microdroplets  2006-08-22   2007-03-20	01-JAN-00 12.00.00.000000 AM +02:00 01-JAN-00 12.00.00.000000 AM +02:00
      1003 Foundation of Quantum Technology		      2007-02-24   2008-07-31	01-JAN-00 12.00.00.000000 AM +02:00 01-JAN-00 12.00.00.000000 AM +02:00
      1004 High capacity optical network		      2008-01-01		01-JAN-00 12.00.00.000000 AM +02:00 01-JAN-00 12.00.00.000000 AM +02:00
      1007 my fun project 3				      29-AUG-16    28-SEP-16	29-AUG-16 12.52.56.000000 PM +02:00 01-JAN-00 12.00.00.000000 AM +02:00

Perfect, this works. Let’s go on with the update case: all we need is to add the mapping for the update_dt column and populate it:

REPLICAT REPLSCO
USERIDALIAS DB2 DOMAIN admin
GETUPDATEBEFORES
GETTRUNCATES
APPLYNOOPUPDATES
DISCARDFILE /u01/app/ogg/product/12.1.2.1.9/dirdsc/dscsco.dsc, APPEND, MEGABYTES 10
DDL INCLUDE OPTYPE TRUNCATE OBJNAME SCOTT.*, &
    INCLUDE OPTYPE ALTER OBJNAME SCOTT.* &
    EXCLUDE ALL INSTR 'CONSTRAINT' &
    EXCLUDE ALL INSTR 'TRIGGER' 
ASSUMETARGETDEFS
DBOPTIONS DEFERREFCONST
MAP SCOTT.PROJECT, TARGET SCOTT.PROJECT, COLMAP (usedefaults,
    create_dt = @IF (@STREQ (@GETENV ('GGHEADER', 'OPTYPE'), 'INSERT'), @DATENOW(), @COLSTAT (MISSING)),
    update_dt = @IF (@VALONEOF (@GETENV ('GGHEADER', 'OPTYPE'), 'UPDATE', 'SQL COMPUPDATE', 'PK UPDATE' ), @DATENOW(), @COLSTAT 
(MISSING)));
MAP SCOTT.*, TARGET SCOTT.*;

Then stop and start the replicat to bring the changes into effect. Let’s create a new row just to see that this does not have any effect on the insert case:

SQL> insert into project values (1008, 'my fun project 4', sysdate, sysdate + 30 );

1 row created.

SQL> commit;

Commit complete.

What we should see on the target are two rows with an exact insert date but a default update date:

SQL> r
  1* select * from project

 PROJECTNO DESCRIPTION					      START_DATE   END_DATE	CREATE_DT			    UPDATE_DT
---------- -------------------------------------------------- ------------ ------------ ----------------------------------- -----------------------------------
      1005 my fun project 1				      29-AUG-16    31-AUG-16	01-JAN-00 12.00.00.000000 AM +02:00 01-JAN-00 12.00.00.000000 AM +02:00
      1006 my fun project 2				      29-AUG-16    28-SEP-16	01-JAN-00 12.00.00.000000 AM +02:00 01-JAN-00 12.00.00.000000 AM +02:00
      1001 Development of Novel Magnetic Suspension System    2006-01-01   2007-08-13	01-JAN-00 12.00.00.000000 AM +02:00 01-JAN-00 12.00.00.000000 AM +02:00
      1002 Research on thermofluid dynamics in Microdroplets  2006-08-22   2007-03-20	01-JAN-00 12.00.00.000000 AM +02:00 01-JAN-00 12.00.00.000000 AM +02:00
      1003 Foundation of Quantum Technology		      2007-02-24   2008-07-31	01-JAN-00 12.00.00.000000 AM +02:00 01-JAN-00 12.00.00.000000 AM +02:00
      1004 High capacity optical network		      2008-01-01		01-JAN-00 12.00.00.000000 AM +02:00 01-JAN-00 12.00.00.000000 AM +02:00
      1008 my fun project 4				      29-AUG-16    28-SEP-16	29-AUG-16 12.58.40.000000 PM +02:00 01-JAN-00 12.00.00.000000 AM +02:00
      1007 my fun project 3				      29-AUG-16    28-SEP-16	29-AUG-16 12.52.56.000000 PM +02:00 01-JAN-00 12.00.00.000000 AM +02:00

Perfect. What about the update itself?

SQL> update project set DESCRIPTION = upper(description) where PROJECTNO = 1008;

1 row updated.

SQL> commit;

Commit complete.

This should result in one exact update date for my fun project 4:

SQL> r
  1* select * from project

 PROJECTNO DESCRIPTION					      START_DATE   END_DATE	CREATE_DT			    UPDATE_DT
---------- -------------------------------------------------- ------------ ------------ ----------------------------------- -----------------------------------
      1005 my fun project 1				      29-AUG-16    31-AUG-16	01-JAN-00 12.00.00.000000 AM +02:00 01-JAN-00 12.00.00.000000 AM +02:00
      1006 my fun project 2				      29-AUG-16    28-SEP-16	01-JAN-00 12.00.00.000000 AM +02:00 01-JAN-00 12.00.00.000000 AM +02:00
      1001 Development of Novel Magnetic Suspension System    2006-01-01   2007-08-13	01-JAN-00 12.00.00.000000 AM +02:00 01-JAN-00 12.00.00.000000 AM +02:00
      1002 Research on thermofluid dynamics in Microdroplets  2006-08-22   2007-03-20	01-JAN-00 12.00.00.000000 AM +02:00 01-JAN-00 12.00.00.000000 AM +02:00
      1003 Foundation of Quantum Technology		      2007-02-24   2008-07-31	01-JAN-00 12.00.00.000000 AM +02:00 01-JAN-00 12.00.00.000000 AM +02:00
      1004 High capacity optical network		      2008-01-01		01-JAN-00 12.00.00.000000 AM +02:00 01-JAN-00 12.00.00.000000 AM +02:00
      1008 MY FUN PROJECT 4				      29-AUG-16    28-SEP-16	29-AUG-16 12.58.40.000000 PM +02:00 29-AUG-16 01.04.49.000000 PM +02:00
      1007 my fun project 3				      29-AUG-16    28-SEP-16	29-AUG-16 12.52.56.000000 PM +02:00 01-JAN-00 12.00.00.000000 AM +02:00

8 rows selected.

Perfect, this works fine, too. Just to be sure that deletes still work, let’s do a final test:

SQL> delete from project where PROJECTNO = 1008;

1 row deleted.

SQL> commit;

Commit complete.

SQL> 

The two additional columns should not prevent GoldenGate from deleting the rows, let’s see:

SQL> r
  1* select * from project

 PROJECTNO DESCRIPTION					      START_DATE   END_DATE	CREATE_DT			    UPDATE_DT
---------- -------------------------------------------------- ------------ ------------ ----------------------------------- -----------------------------------
      1005 my fun project 1				      29-AUG-16    31-AUG-16	01-JAN-00 12.00.00.000000 AM +02:00 01-JAN-00 12.00.00.000000 AM +02:00
      1006 my fun project 2				      29-AUG-16    28-SEP-16	01-JAN-00 12.00.00.000000 AM +02:00 01-JAN-00 12.00.00.000000 AM +02:00
      1001 Development of Novel Magnetic Suspension System    2006-01-01   2007-08-13	01-JAN-00 12.00.00.000000 AM +02:00 01-JAN-00 12.00.00.000000 AM +02:00
      1002 Research on thermofluid dynamics in Microdroplets  2006-08-22   2007-03-20	01-JAN-00 12.00.00.000000 AM +02:00 01-JAN-00 12.00.00.000000 AM +02:00
      1003 Foundation of Quantum Technology		      2007-02-24   2008-07-31	01-JAN-00 12.00.00.000000 AM +02:00 01-JAN-00 12.00.00.000000 AM +02:00
      1004 High capacity optical network		      2008-01-01		01-JAN-00 12.00.00.000000 AM +02:00 01-JAN-00 12.00.00.000000 AM +02:00
      1007 my fun project 3				      29-AUG-16    28-SEP-16	29-AUG-16 12.52.56.000000 PM +02:00 01-JAN-00 12.00.00.000000 AM +02:00

The ‘my fun project 4’ row is gone and all works as expected. Hope this helps…

 


ODA X6 database classes and shapes

Mon, 2016-08-29 08:47

On the Oracle Database Appliance, like on the Oracle public Cloud, you define the CPU capacity with ‘shapes’. On the latest ODA, the X6, we have a new interface to provision a database. Let’s look at the different shapes available.

ODACLI

You can provision a new database with the command line ODACLI which replaces the OAKCLI you used in ODA X5:

[root@odax6m ~]# odacli create-database
Usage: create-database [options] Options:
* --adminpassword, -m
Password for SYS,SYSTEM and PDB Admin
--cdb, -c
Create Container Database
Default: false
--dbclass, -cl
Database Class OLTP/DSS/IMDB
Default: OLTP
--dbconsole, -co
Enable Database Console
Default: false
--dbhomeid, -dh
Database Home ID (Use Existing DB Home)
* --dbname, -n
Database Name
--dbshape, -s
Database Shape{odb1,odb2,odb3 etc.}
Default: odb1
--dbstorage, -r
Database Storage {ACFS|ASM}
Default: ACFS
--dbtype, -y
Database Type {SI|RAC}
Default: SI
--help, -h
get help
Default: false
--instanceonly, -io
Create Instance Only (For Standby)
Default: false
--json, -j
json output
Default: false
--pdbadmin, -d
Pluggable Database Admin User
Default: pdbadmin
--pdbname, -p
Pluggable Database Name
Default: pdb1
--targetnode, -g
Node Number (for single-instance databases)
Default: 0
--version, -v
Database Version
Default: 12.1.0.2

ODA WebConsole

But the ODA X6 also has a small graphical interface, the web console.

[Screenshot: CaptureODACreate]

12c multitenant is the default, but you can choose.

Edition

You don’t have a choice when you create the database: you install the ODA with Standard or Enterprise Edition and then you cannot change it.

Versions

Two database versions are available: 11.2.0.4 and 12.1.0.2

[Screenshot: CaptureODAVersions]

You choose ODA to get a stable, certified and supported system, so it makes sense to run only supported versions with the latest PSU. If you have older versions, you must upgrade. Set optimizer_features_enable to the previous release if your application was not tuned for newer versions. Very often, when an ISV does not certify its software, it’s because of optimizer regressions. With proper testing and optimizer settings you should be able to upgrade any application without the risk of regression.
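As an illustration, a hedged sketch of that setting, here pinning the optimizer to 11.2.0.4 behavior after an upgrade to 12.1.0.2 (do this only after testing, and preferably at session or application level):

alter system set optimizer_features_enable='11.2.0.4' scope=both;
-- or, for a single application session:
alter session set optimizer_features_enable='11.2.0.4';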

Templates

There are four DBCA templates available, one for each combination of:

  • Standard Edition or Enterprise Edition
  • Multitenant or non-CDB

The main differences between Enterprise Edition and Standard Edition are:

  • Options OMS, SPATIAL, CWMLITE, DV are installed in Enterprise Edition but not in Standard Edition
  • fast_start_mttr_target=300 in Enterprise Edition (the feature is not supported in Standard Edition)

The main differences between multitenant and non-CDB are:

  • Options JSERVER, IMEDIA, ORACLE_TEXT, APEX are installed in a CDB and not in a non-CDB
  • maxdatafiles=1024 in a CDB (100 in a non-CDB)

All templates are configured with filesystem_io_options=setall and use_large_pages=only

The following underscore parameters are set for all ODA templates:
*._datafile_write_errors_crash_instance=FALSE
*._disable_interface_checking=TRUE
*._enable_NUMA_support=FALSE
*._file_size_increase_increment=2143289344
*._gc_policy_time=0
*._gc_undo_affinity=FALSE

Note that both 12c and 11g are available in Enterprise Edition as well as Standard Edition (can even be Standard Edition One for 11g).
Of course, CDB is only for 12c.

Shapes

As in the Oracle Public Cloud, the CPU and Memory comes in shapes:

[Screenshot: CaptureODACShape]

The choice is the number of cores. The cores are hyper-threaded, which means that odb1 will have cpu_count=2, and it is set in the spfile. Note that at install time no resource manager plan is active, so instance caging will not occur except during the automatic maintenance window… My recommendation is to set a plan. In 12.1.0.2 Standard Edition the resource manager is implicitly activated.
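A minimal sketch of that recommendation, assuming the built-in DEFAULT_PLAN is enough for your needs (instance caging then limits the instance to its cpu_count):

alter system set resource_manager_plan='DEFAULT_PLAN' scope=both;
-- cpu_count is already set by the shape (e.g. 2 for odb1) and is what instance caging uses
show parameter cpu_count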

ODA X6-2 processors are Intel(R) Xeon(R) CPU E5-2630 v4 @ 2.20GHz. Here is an example of the LIOPS you can reach when running on all the 40 threads of a X6-2M:

Load Profile Per Second Per Transaction Per Exec Per Call
~~~~~~~~~~~~~~~ --------------- --------------- --------- ---------
DB Time(s): 39.9 545.6 0.00 25.43
DB CPU(s): 38.8 530.9 0.00 24.75
Logical read (blocks): 18,494,690.4 252,862,769.1

This is 18 million logical reads per second on this 2-socket (2s10c20t) appliance. Expect half of that on the X6-2S, which has one socket (1s10c20t).

The core factor for those processors is 0.5, which means that you can run an Enterprise Edition ‘odb2’ with a single processor license (public price $47,500) and run 4 sessions on CPU, which means you can do nearly 3 million logical reads per second, as here:

Load Profile Per Second Per Transaction Per Exec Per Call
~~~~~~~~~~~~~~~ --------------- --------------- --------- ---------
DB Time(s): 4.0 54.6 0.00 13.80
DB CPU(s): 4.0 54.5 0.00 13.78
Logical read (blocks): 2,901,991.5 39,660,331.1

Those shapes are defined as:

[Screenshot: CaptureODAShapes]

Given the high LIOPS and the available memory, this entry-level appliance can be sufficient for most medium OLTP workloads.

Classes

Three classes are defined to derive the database parameters from the shape.

[Screenshot: CaptureODAClasses]

The SGA/PGA is calculated from the shape memory and a class factor:

  • OLTP gives 50% to SGA and 25% to PGA, which means for example that an odb4 has sga_target=16G and pga_aggregate_target=8G
  • DSS gives 25% to SGA and 50% to PGA

Note that OLTP is the only class available in Standard Edition. This does not mean that you can run only OLTP: you can change the memory settings later (DSS usually needs more PGA than SGA) and you have very good storage bandwidth and IOPS (NVMe access to SSD). This setting is more an indication that most datawarehouses need features available only in Enterprise Edition, such as parallel query, partitioning and bitmap indexes.
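For example, a sketch of flipping an odb4 from the OLTP split to a DSS-like split later (the values are illustrative and assume the odb4 memory allocation; a restart is needed for the new SGA size):

alter system set sga_target=8G scope=spfile;
alter system set pga_aggregate_target=16G scope=spfile;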

ASM or ACFS?

[Screenshot: CaptureODAStorage]

The template shapes above define a 100GB database. When you create a new database you have the choice to put it directly on +DATA and +RECO, or create a 100GB ADVM volume and ACFS filesystem that will be mounted under /u02/app/oracle/oradata. If you choose ACFS the FRA and REDO will be created under the /u03/app/oracle mount point which is a common ACFS.

The default is ACFS but you should think about it. For production, the best performance is with ASM. You have SSD to avoid disk latency. You have NVMe to reduce CPU latency. You don’t want to add the ACFS layer. The maximum IOPS we observe is 10 times higher with datafiles directly on ASM:

@FranckPachot Don't know? We get 75k IOPS on ACFS and 700k IOPS on pure ASM. /cc @kevinclosson pic.twitter.com/TcLzUsOh0d

— Marco Mischke (@DBAMarco) August 29, 2016

For test databases, where you use snapshot features, especially with multitenant, you may choose ACFS. However, why not create the CDB in ASM and use ACFS for the PDBs you will want to snapshot? No need for that additional layer for the CDB files. Better to isolate the master and clones for a specific environment into its own ACFS.

And anyway, ODA X6-2S and X6-2M are very interesting for Standard Edition, and you cannot use snapshots nor any ACFS features for a database in Standard Edition.

Storage performance is truly amazing. At 100000 IOPS we have 99% of single block reads faster than 256 microseconds and 97% faster than 128 microseconds. At 800000 IOPS here are the figures:

% of Waits
-----------------------------------------------
Total
Event Waits <1ms <2ms <4ms <8ms <16ms <32ms 1s
------------------------- ------ ----- ----- ----- ----- ----- ----- ----- -----
db file parallel read 6.9M 4.0 74.1 21.7 .2 .0 .0 .0
db file sequential read 18.4M 83.1 16.3 .7 .0 .0 .0

So what?

It’s nice to have an easy GUI to provision a database on ODA. However there are some limits with it:

  • Be careful with the defaults. They may not fit what you want. Do you want your databases on ACFS?
  • Not all operations can be done through the GUI: you can create but not delete a database.

But there’s more. Performance is there. You can run applications that need high performance.

Do you know any other solution which gives you a fully certified system installed in a few hours with databases ready? With very good hardware and managed software costs (NUP, Standard Edition in socket metric, or Enterprise Edition capacity-on-demand by multiples of 1 processor license)?
You need high availability? In Standard Edition you cannot use Data Guard. In Standard Edition you can buy Dbvisit standby, which gives you switchover and failover (Recovery Point Objective of a few minutes) to a second ODA or to a cloud service. Of course, you can build or buy custom scripts to manage a manual standby. However, if you go for ODA you probably appreciate easy and robust software.

 


ODA X6 command line and Web Console

Sat, 2016-08-27 14:33

The ODA X6 comes with a new command line (odacli) which replaces oakcli, and with a small web console which can display information about the appliance, the databases and the provisioning jobs. It also has the possibility to create a database, but that is for the next blog post. In this post I’ll show which information is displayed once the ODA is installed.

The examples here come from ODA X6 version: 12.1.2.7.0

Appliance

The first screen shows the appliance information, the values that you defined when installing the ODA:

[Screenshot: CaptureODAWEB001]

The same information can be displayed from command line with odacli describe-appliance:

[root@odax6m ~]# odacli describe-appliance
 
Appliance Information
----------------------------------------------------------------
ID: bb8f0eec-0f5c-4319-bade-75770848b923
Platform: OdaliteM
Data Disk Count: 2
CPU Core Count: 20
Created: Aug 26, 2016 2:51:26 PM
 
System Information
----------------------------------------------------------------
Name: odax6m
Domain Name: oracle.democenter.arrowcs.ch
Time Zone: Europe/Zurich
DB Edition: EE
DNS Servers: 172.22.1.9
NTP Servers: 172.22.1.9
 
Disk Group Information
----------------------------------------------------------------
DG Name Redundancy Percentage
------------------------- ------------------------- ------------
Data Normal 75
Reco Normal 25

An important thing to note here is that the choice between Standard Edition and Enterprise Edition is at appliance level: you cannot mix.
There’s also no mention of virtualization because ODA X6 2S and 2M are only bare-metal.

odacli list-networks

[root@odax6m ~]# odacli list-networks
 
ID Name NIC IP Address Subnet Mask Gateway
---------------------------------------- -------------------- ---------- ------------------ ------------------ ------------------
ffcf7d89-8074-4342-9f19-5e72ed695ce7 Private-network priv0 192.168.16.24 255.255.255.240
71a422bc-39d3-483c-b79b-ffe25129dfd2 Public-network btbond1 172.22.1.23 255.255.255.224 172.22.1.2

I’ve no Auto Service Request configured here:
[root@odax6m ~]# odacli describe-asr
Aug 27, 2016 8:56:33 PM com.oracle.oda.dcscli.commands.AsrCommand$getAsr run
SEVERE: No asr found

Databases

The second screen is about the databases:

[Screenshot: CaptureODAWEB002]

From the command line you get information about the ORACLE_HOMEs and the databases.

[root@odax6m ~]# odacli list-dbhomes
 
ID Name DB Version Home Location
---------------------------------------- -------------------- ---------- ---------------------------------------------
67419075-d1f9-4c2e-85b1-c74430e35120 OraDB12102_home1 12.1.0.2 /u01/app/oracle/product/12.1.0.2/dbhome_1
cf76a90b-f9e3-44b2-9b43-56111c1785e4 OraDB12102_home2 12.1.0.2 /u01/app/oracle/product/12.1.0.2/dbhome_2
adcbe8bf-f26f-4ab7-98a1-0abcd4412305 OraDB11204_home1 11.2.0.4 /u01/app/oracle/product/11.2.0.4/dbhome_1

[root@odax6m ~]# odacli list-databases
 
ID DB Name DB Version CDB Class Shape Storage Status
---------------------------------------- ---------- ---------- ---------- -------- -------- ---------- ----------
4c182ffb-3e4a-45c0-a6c6-15d5e9b7b2b9 dbee1 12.1.0.2 false OLTP odb4 ACFS Configured
5564ea51-fc93-46f2-9188-c13c23caba94 odb1s 12.1.0.2 true OLTP odb1s ACFS Configured
26c2213d-5992-4b2b-94b0-2d0f4d0f9c2d dbee11g1 11.2.0.4 false OLTP odb2 ACFS Configured

You can get more detail about one database:

[Screenshot: CaptureODAWEB0022]

[root@odax6m ~]# odacli describe-dbhome -i 67419075-d1f9-4c2e-85b1-c74430e35120
 
DB Home details
----------------------------------------------------------------
ID: 67419075-d1f9-4c2e-85b1-c74430e35120
Name: OraDB12102_home1
Version: 12.1.0.2
Home Location: /u01/app/oracle/product/12.1.0.2/dbhome_1
Created: Aug 26, 2016 2:51:26 PM

[root@odax6m ~]# odacli describe-database -i 4c182ffb-3e4a-45c0-a6c6-15d5e9b7b2b9
 
Database details
----------------------------------------------------------------
ID: 4c182ffb-3e4a-45c0-a6c6-15d5e9b7b2b9
Description: dbee1
DB Name: dbee1
DB Version: 12.1.0.2
DBID: 2933563624
CDB: false
PDB Name:
PDB Admin User Name:
Class: OLTP
Shape: odb4
Storage: ACFS
CharacterSet: DbCharacterSet(characterSet=AL32UTF8, nlsCharacterset=AL16UTF16, dbTerritory=AMERICA, dbLanguage=AMERICAN)
Home ID: 67419075-d1f9-4c2e-85b1-c74430e35120
Console Enabled: false
Created: Aug 26, 2016 2:51:26 PM

Activity

Here is the log of what has been done on the ODA:

[Screenshot: CaptureODAWEB003]

[root@odax6m ~]# odacli list-jobs
 
ID Description Created Status
---------------------------------------- ------------------------------ ------------------------- ----------
1b99d278-6ab4-4ead-a5f8-f112c74a8f97 Provisioning service creation Aug 26, 2016 2:51:26 PM Success
f0ac9a2c-ba37-412c-8a81-9cc7cb301417 Database service creation with db name: odb1s Aug 26, 2016 4:03:39 PM Success
dec37817-feb7-46e5-b991-b23362268cb1 Database service creation with db name: dbee11g1 Aug 26, 2016 5:09:33 PM Success

And we have more info about the steps executed for one job:

[Screenshot: CaptureODAWEB004]

Same in command line:

[root@odax6m ~]# odacli describe-job -i 1b99d278-6ab4-4ead-a5f8-f112c74a8f97
 
Job details
----------------------------------------------------------------
ID: 1b99d278-6ab4-4ead-a5f8-f112c74a8f97
Description: Provisioning service creation
Status: Success
Created: 26.08.2016 14:51:26
Message:
 
Task Name Start Time End Time Status
---------------------------------------- ------------------------- ------------------------- ----------
Setting up Network Aug 26, 2016 2:51:27 PM Aug 26, 2016 2:51:27 PM Success
Creating group :asmdba Aug 26, 2016 2:51:38 PM Aug 26, 2016 2:51:38 PM Success
Creating group :asmoper Aug 26, 2016 2:51:38 PM Aug 26, 2016 2:51:38 PM Success
Creating group :asmadmin Aug 26, 2016 2:51:38 PM Aug 26, 2016 2:51:38 PM Success
Creating group :dba Aug 26, 2016 2:51:38 PM Aug 26, 2016 2:51:38 PM Success
Creating group :dbaoper Aug 26, 2016 2:51:38 PM Aug 26, 2016 2:51:38 PM Success
Creating group :oinstall Aug 26, 2016 2:51:38 PM Aug 26, 2016 2:51:38 PM Success
Creating user :grid Aug 26, 2016 2:51:38 PM Aug 26, 2016 2:51:38 PM Success
Creating user :oracle Aug 26, 2016 2:51:38 PM Aug 26, 2016 2:51:38 PM Success
Setting up ssh equivalance Aug 26, 2016 2:51:39 PM Aug 26, 2016 2:51:39 PM Success
Gi Home creation Aug 26, 2016 2:54:49 PM Aug 26, 2016 2:57:54 PM Success
Creating GI home directories Aug 26, 2016 2:54:49 PM Aug 26, 2016 2:54:49 PM Success
Cloning Gi home Aug 26, 2016 2:54:49 PM Aug 26, 2016 2:57:54 PM Success
GI stack creation Aug 26, 2016 2:57:54 PM Aug 26, 2016 3:08:44 PM Success
Configuring GI Aug 26, 2016 2:57:54 PM Aug 26, 2016 2:58:21 PM Success
Running GI root scripts Aug 26, 2016 2:58:21 PM Aug 26, 2016 3:05:03 PM Success
Running GI config assistants Aug 26, 2016 3:05:03 PM Aug 26, 2016 3:05:34 PM Success
Creating RECO disk group Aug 26, 2016 3:10:02 PM Aug 26, 2016 3:10:16 PM Success
Creating volume reco Aug 26, 2016 3:10:16 PM Aug 26, 2016 3:10:26 PM Success
Creating volume datdbee1 Aug 26, 2016 3:10:26 PM Aug 26, 2016 3:10:35 PM Success
Creating ACFS filesystem for RECO Aug 26, 2016 3:10:35 PM Aug 26, 2016 3:10:42 PM Success
Creating ACFS filesystem for DATA Aug 26, 2016 3:10:42 PM Aug 26, 2016 3:10:49 PM Success
Db Home creation Aug 26, 2016 3:10:49 PM Aug 26, 2016 3:13:40 PM Success
Creating DbHome Directory Aug 26, 2016 3:10:49 PM Aug 26, 2016 3:10:49 PM Success
Extract DB clones Aug 26, 2016 3:10:49 PM Aug 26, 2016 3:12:29 PM Success
Enable DB options Aug 26, 2016 3:12:29 PM Aug 26, 2016 3:12:38 PM Success
Clone Db home Aug 26, 2016 3:12:38 PM Aug 26, 2016 3:13:37 PM Success
Run Root DB scripts Aug 26, 2016 3:13:37 PM Aug 26, 2016 3:13:37 PM Success
Database Service creation Aug 26, 2016 3:13:40 PM Aug 26, 2016 3:19:43 PM Success
Database Creation Aug 26, 2016 3:13:40 PM Aug 26, 2016 3:17:58 PM Success
Running DataPatch Aug 26, 2016 3:18:33 PM Aug 26, 2016 3:19:43 PM Success
create Users tablespace Aug 26, 2016 3:19:43 PM Aug 26, 2016 3:19:46 PM Success

Yes, this is the ODA installation: half an hour to set up the OS, install Grid Infrastructure, set up the storage and create a first database.

Refresh

Note that the screens are not refreshed automatically, even when you navigate through them. Don’t forget the ‘Refresh’ button (circular arrow) on the top-right.

JSON

You can also build your own interface from the JSON output:

[root@odax6m ~]# odacli list-jobs -j
[ {
"jobId" : "1b99d278-6ab4-4ead-a5f8-f112c74a8f97",
"status" : "Success",
"message" : null,
"createTimestamp" : 1472215886601,
"description" : "Provisioning service creation"
}, {
"jobId" : "f0ac9a2c-ba37-412c-8a81-9cc7cb301417",
"status" : "Success",
"message" : null,
"createTimestamp" : 1472220219016,
"description" : "Database service creation with db name: odb1s"
}, {
"jobId" : "dec37817-feb7-46e5-b991-b23362268cb1",
"status" : "Success",
"message" : null,
"createTimestamp" : 1472224173747,
"description" : "Database service creation with db name: dbee11g1"
} ]

So what?

ODA is for easy and fast provisioning and the GUI that was missing is finally there. Of course, it looks very simple, but that’s the goal of the appliance: quickly set up a standardized environment. The ODA X6-2S is cheap and has good performance for small databases. You may find equivalent hardware, but can you build and install stable hardware, OS and databases in 30 minutes?

 


AUDIT_SYS_OPERATIONS and top-level operation

Tue, 2016-08-23 04:28

SYSDBA is powerful. You should use it only when you need to. And you should audit what is done when connected as SYSDBA. This is achieved by AUDIT_SYS_OPERATIONS. But do you have an idea how much audit log will be generated? Note that in 12c, AUDIT_SYS_OPERATIONS=true is the default.

The documentation says:
AUDIT_SYS_OPERATIONS enables or disables the auditing of top-level operations, which are SQL statements directly issued by users when connecting with the SYSASM, SYSBACKUP, SYSDBA, SYSDG, SYSKM, or SYSOPER privileges. (SQL statements run from within PL/SQL procedures or functions are not considered top-level.)

From the documentation, you could expect that only the statements issued directly by a SYSDBA user are logged.
Not the recursive statements.
Not the scheduler jobs.

However, this is not what happens. Look at the following:

[Screenshot: CaptureEventLog001]

On Windows, the SYS audit goes to the Event Log and it is full of those kinds of statements. Those are not statements that I’ve issued when connected as SYSDBA. Those are recursive statements from the automatic statistics gathering job that runs every day. Do I want to audit that? Probably not. But it fills my system log.

On Unix, this is not better. Here is a database created a few days ago with nearly no activity since then:

[oracle@CDB adump]$ ls -alrt | head
total 2341264
drwxr-xr-x 5 oracle oinstall 4096 Jul 29 16:12 ..
-rw-r----- 1 oracle oinstall 2699 Aug 19 03:15 CDB_m000_10396_20160819031532925620143795.aud
-rw-r----- 1 oracle oinstall 1150 Aug 19 03:15 CDB_ora_10402_20160819031534799868143795.aud
-rw-r----- 1 oracle oinstall 1134 Aug 19 03:15 CDB_ora_10404_20160819031535178281143795.aud
-rw-r----- 1 oracle oinstall 1131 Aug 19 03:15 CDB_ora_10406_20160819031535558089143795.aud
-rw-r----- 1 oracle oinstall 1139 Aug 19 03:15 CDB_ora_10408_20160819031535936355143795.aud
-rw-r----- 1 oracle oinstall 1156 Aug 19 03:15 CDB_ora_10410_20160819031536306359143795.aud
-rw-r----- 1 oracle oinstall 1165 Aug 19 03:15 CDB_ora_10412_20160819031536679750143795.aud
[oracle@CDB adump]$ ls -alrt | tail
-rw-r----- 1 oracle oinstall 1243 Aug 23 08:31 CDB_ora_9044_20160823083113966954143795.aud
-rw-r----- 1 oracle oinstall 1243 Aug 23 08:31 CDB_ora_9047_20160823083114497136143795.aud
-rw-r----- 1 oracle oinstall 1243 Aug 23 08:31 CDB_ora_9048_20160823083114567197143795.aud
-rw-r----- 1 oracle oinstall 1243 Aug 23 08:31 CDB_ora_9049_20160823083115041317143795.aud
-rw-r----- 1 oracle oinstall 1243 Aug 23 08:31 CDB_ora_9050_20160823083115311603143795.aud
-rw-r----- 1 oracle oinstall 54568 Aug 23 08:31 CDB_ora_9036_20160823083113459749143795.aud
-rw-r----- 1 oracle oinstall 1243 Aug 23 08:31 CDB_ora_9052_20160823083115922822143795.aud
-rw-r----- 1 oracle oinstall 54568 Aug 23 08:31 CDB_ora_9045_20160823083114267635143795.aud
drwxr-xr-x 5 oracle oinstall 64274432 Aug 23 08:31 .
[oracle@CDB adump]$ du -hs
8.5G .

Yes, this is two million files, 8GB of text. What will you do with that?
That’s the problem with auditing: it’s activated by default to comply with security policies, but there’s nothing to purge or archive it by default. The size grows and you cannot do anything relevant with it.

If I look at a subset of files:

[oracle@CDB adump]$ ls CDB_ora_*_20160823*aud | wc -l
8184

Only two of them are from user connections. The others have no terminal:

[oracle@CDB adump]$ grep -l "^CLIENT TERMINAL:\[0\]" CDB_ora_*_20160823*aud | wc -l
8182

Expected feature

It’s not a bug. Oracle has decided to log everything in the SYS audit, because if they didn’t, a SYSDBA connection could run a procedure or a job that looks harmless when only the top-level call is logged. That procedure may be created by any user and so would not appear in the audit trail.

So what?

Whatever the default is, the audit trail is something to manage. On Windows, the event log has a maximum size. On Linux, it fills the adump directory (or syslog if you enabled it). So you should have housekeeping jobs. You can also disable the audit so only connections are logged, but not the statements. In this case you should control who has access to SYSDBA.
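For the housekeeping part, a minimal sketch using DBMS_AUDIT_MGMT to purge OS audit files older than 30 days (the retention is an assumption, adapt it to your policy):

begin
  -- consider everything older than 30 days as archived
  dbms_audit_mgmt.set_last_archive_timestamp(
    audit_trail_type  => dbms_audit_mgmt.audit_trail_os,
    last_archive_time => systimestamp - interval '30' day);
  -- remove the OS audit files up to that timestamp
  dbms_audit_mgmt.clean_audit_trail(
    audit_trail_type        => dbms_audit_mgmt.audit_trail_os,
    use_last_arch_timestamp => true);
end;
/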

 

Cet article AUDIT_SYS_OPERATIONS and top-level operation est apparu en premier sur Blog dbi services.

Documentum Administrator UCF Troubleshooting

Fri, 2016-08-19 04:23

Maybe you have had some issues with UCF in DA, like me. I had been getting this for a few days at a customer, for no obvious reason. The problem was that we use SSL with DA and the Unified Content Facilities (UCF) wasn't happy about it.
Thus, in this short blog I’ll speak about troubleshooting UCF.

The error happened when trying to edit, view or create documents; I got a popup saying that an error occurred with UCF.

First, we must know our enemy in order to fight it!

UCF stands for Unified Content Facilities. It's a Java applet made by EMC and used by wdk applications in order to manage and optimize content transfer between the application and your workstation. Thanks to UCF you can transfer large files with compression and reconnect if the network dropped some packets. The applet is downloaded to your workstation at runtime when you connect to a wdk application.
You can find the UCF configuration in your user folder like follow:
C:\Users\<USER>\Documentum\ucf

Refresh UCF Cache

Before going deeper in the debugging, maybe try to clear the ucf cache first and re-download the latest one from the server. In order to do so you’ll have to perform the following steps:
Clear your browser cache. If you have IE, go to Tools -> Delete Browsing History (or press CTRL+SHIFT+DEL).
Then check each checkbox and click Delete.

Capture1

Close the browser afterwards.

Now be sure that you don’t have any browser pointing to a wdk application and go to C:\Users\<USER>\Documentum and try deleting the ucf folder.
If you have an error telling you it is already used, open the task manager and search for javaw.exe processes, and then close them down.
You should be able to delete the ucf folder now.

Also clear the cached ucf jar files by opening the java control panel. Go to Control Panel -> search for Java -> General Tab -> Temporary Internet Files -> Settings -> Delete Files

Now test again by opening Documentum Administrator and creating/editing a document. You shouldn’t have a popup error about UCF.

If you have reached this point in the blog, it means you didn't get rid of the problem, and neither did I. At this point we made some corrections, but we still don't know what the real UCF error is about; we only get a stack trace saying that UCF failed. We can now enable UCF tracing in order to see if something more interesting is written in the logs.
You can enable the tracing on both sides, the server and your workstation. The easiest is to begin with your workstation so go back to the ucf folder C:\Users\<USER>\Documentum\ucf
Then navigate to <PCNAME>\shared\config and edit ucf.client.config.xml
Add the following options inside the <configuration> element:

<option name="tracing.enabled">
    <value>true</value>
</option>
<option name="debug.mode">
    <value>true</value>
</option>

Also edit the file: ucf.client.logging.properties by changing .level=WARNING to .level=ALL

Now reproduce the error and check what has been written in C:\Users\<USER>\Documentum\Logs

If you can’t see what the problem is you can also activate the tracing on the webserver by editing the same way: ../WEB-INF/classes/ucf.server.config.xml but note that you need to restart the webserver for it to take effect.

The errors in the generated log should allow you to find the real cause of the ucf error. In my case it was the SSL handshake that was failing.

 

Cet article Documentum Administrator UCF Troubleshooting est apparu en premier sur Blog dbi services.

New Database Console Command (DBCC): CLONEDATABASE

Fri, 2016-08-19 02:52

You may not have seen it yet, but with Service Pack 2 of SQL Server 2014 there is a new DBCC command: CLONEDATABASE.

"DBCC CLONEDATABASE should create copies of a schema and the statistics of a production database in order to investigate queries when performance problems occur." The MSDN source can be found here.
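For reference, the basic call is a simple two-argument DBCC command (a sketch with the database names used later in this post):

DBCC CLONEDATABASE (AdventureWorks2014, AdventureWorks2014_clone);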

This command can only clone user databases.

Clonedatabase01

As you can see, it is not possible for the system databases; error messages are raised:
Msg 12603, Level 16, State 1, Line 5
DBCC CLONEDATABASE does not support cloning system databases.

DBCC CLONEDATABASE creates a new database. It is an internal snapshot that copies the system metadata, all schemas and all statistics for all indexes. That is why the cloned database is empty and in read-only mode.
Clonedatabase02

The schemas are copied… OK, I will now run a schema compare with SQL Server Data Tools (SSDT):
Clonedatabase03

All schemas are present in the cloned database. The differences concern the fulltext and the data in the XML schema, for example the MileRangeType with its meaning.
Clonedatabase04

I talked about this feature with my friend Michel and he asked me how it behaves with the data.
My first step is to look at the file structure after the clone:
Clonedatabase05
You can see that my cloned database has inherited the file structure neither from the user database nor from the model database.
I create a new filegroup [FG_Employees] with a new file AdventureWorks2014_Employees.ndf.
I move my clustered index PK_Employee_BusinessEntityID to this new filegroup:

USE [master]
GO
ALTER DATABASE [AdventureWorks2014] ADD FILEGROUP [FG_Employees]
GO
ALTER DATABASE [AdventureWorks2014]
ADD FILE ( NAME = N'AdventureWorks2014_Employees',
FILENAME = N'C:\Program Files\Microsoft SQL Server\MSSQL12.TEST\MSSQL\DATA\AdventureWorks2014_Employees.ndf'
, SIZE = 16384KB , FILEGROWTH = 8192KB ) TO FILEGROUP [FG_Employees]
GO
USE [AdventureWorks2014];
GO
CREATE UNIQUE CLUSTERED INDEX PK_Employee_BusinessEntityID
    ON HumanResources.Employee(BusinessEntityID)
WITH (DROP_EXISTING =  ON )
ON [FG_Employees]

And now I clone my database again:
Clonedatabase06
The file structure is copied, cool!
Can I switch the database out of read-only mode and import data?

USE [master]
GO
ALTER DATABASE [AdventureWorks2014_clone] SET  READ_WRITE WITH NO_WAIT
GO
INSERT INTO [AdventureWorks2014_clone].[Sales].[Currency]
SELECT *  FROM [AdventureWorks2014].[Sales].[Currency]

No problem, the data is copied directly.
Clonedatabase07

This new feature is really easy to use.
I hope that the CLONEDATABASE function will also be integrated in the next service packs of SQL Server 2012 and SQL Server 2016. For more information, see the MSDN link here.

 

Cet article Neue Datenbank Konsole Befehle (DBCC) CLONEDATABASE est apparu en premier sur Blog dbi services.

Java reflection: Get classes and packages names from a root package within a custom URLClassloader

Thu, 2016-08-18 08:21

I had a case where I needed to use a custom classloader in order to find the list of classes and packages available across several software plugins materialized by a set of jar files. These jar files are intended to be located in a particular place.

After googling, some solutions suggested deploying external libraries, but I was not interested in managing another library in my software just for that purpose.
So the solution was to recover all jars from the specific classloader and loop over them in order to find the classes and packages I was looking for.

This blog post is an extension of a previous one, that only looked for classes within the JVM context Classloader:

http://blog.dbi-services.com/java-get-class-names-from-package-string-in-classpath/

 

This example now shows how to initialize a new child classloader to work with, and how to extract package names in addition to the list of classes available inside a root package name.
Below is a complete Java class resolving this issue:

 

import java.io.File;
import java.io.FileFilter;
import java.net.URISyntaxException;
import java.net.URL;
import java.net.URLClassLoader;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.ArrayList;
import java.util.Enumeration;
import java.util.TreeMap;
import java.util.TreeSet;
import java.util.jar.JarEntry;
import java.util.jar.JarFile;

/**
 * @author Philippe Schweitzer, dbi services Switzerland
 *
 * It is a Class that demonstrates how to list Classes and Packages from a given
 * root package name.
 *
 * This class uses a specific Classloader in order to lookup into jar files
 * located at a dedicated place (and not in all JavaSE library ).
 *
 */
public class RessourceFinderExample {

    /* Root package name */
    private static final String packageName = "YOUR PACKAGE NAME";

    /* Paths list to jar files */
    private static final String[] classPaths = {"plugins", ".", "dist"};

    /**
     * Main method, it creates the Classloader and displays a consolidated
     * result.
     *
     * @param args no argument required
     */
    public static void main(String args[]) {

        URLClassLoader cl = new URLClassLoader(findJarURLsInClasspath(), Thread.currentThread().getContextClassLoader());

        System.out.println("");

        TreeSet<String> packages = RessourceFinderExample.getPackageFromPackage(packageName, cl);

        System.out.println("");
        System.out.println("-- CONSOLIDATED  RESULT --");
        System.out.println("");

        for (String packageString : packages) {
            System.out.println("   PACKAGE : " + packageString);
        }

        System.out.println("");
        System.out.println("--------------------------");
        System.out.println("");

        TreeMap<String, Class> classes = RessourceFinderExample.getClassesFromPackage(packageName, cl);

        System.out.println("");
        System.out.println("-- CONSOLIDATED  RESULT --");
        System.out.println("");

        for (Class c : classes.values()) {
            System.out.println("   CLASS   : " + packageName + "." + c.getCanonicalName());
        }

        System.out.println("");
        System.out.println("--------------------------");
        System.out.println("");
    }

    /**
     * Method that finds all jar files available in given dedicated classpath
     * places. It serves for an URLClassloader initialization.
     *
     * @return List of jar files URLs
     */
    private static URL[] findJarURLsInClasspath() {
        URL url;

        ArrayList<URL> jarURLs = new ArrayList();

        for (String path : classPaths) {

            File[] jars = new File(path).listFiles(new FileFilter() {
                public boolean accept(File pathname) {

                    return pathname.getName().toLowerCase().endsWith(".jar");
                }
            });

            if (jars != null) {
                for (int i = 0; i < jars.length; i++) {
                    try {
                        System.out.println("JAR Path: " + jars[i].getAbsolutePath());
                        url = jars[i].toURI().toURL();

                        jarURLs.add(url);

                    } catch (Exception e) {

                    }
                }
            }
        }

        URL[] urls = jarURLs.toArray(new URL[0]);
        return urls;
    }

    /**
     * Method that returns all jar files registered in the given URLClassloader
     * and which are present in dedicated classpath places.
     *
     * @return List of jar files URLs
     */
    private static URL[] getJarURLs(URLClassLoader cl) {
        URL[] result = cl.getURLs();
        ArrayList<URL> urls = new ArrayList();

        for (URL url : result) {

            try {
                Path jarPath = Paths.get(url.toURI());

                for (String classPathString : classPaths) {

                    Path classPath = Paths.get(classPathString).toAbsolutePath();

                    if (jarPath.startsWith(classPath)) {
                        urls.add(url);
                    }
                }

            } catch (URISyntaxException ex) {
            }
        }

        result = new URL[urls.size()];
        result = urls.toArray(result);

        return result;
    }

    /**
     * Method that returns all classes available underneath a given package
     * name.
     *
     * @return Set of Classes
     */
    private static TreeMap<String, Class> getClassesFromPackage(String pckgname, URLClassLoader cl) {
        TreeMap<String, Class> result = new TreeMap();

        for (URL jarURL : getJarURLs(cl)) {
            getClassesInSamePackageFromJar(result, pckgname, jarURL.getPath(), cl);
        }

        return result;
    }

    /**
     * Method that fills TreeMap with all classes available in a particular jar
     * file, underneath a given package name.
     *
     */
    private static void getClassesInSamePackageFromJar(TreeMap<String, Class> result, String packageName, String jarPath, URLClassLoader cl) {
        JarFile jarFile = null;
        try {

            System.out.println("");
            System.out.println("** IN JAR : " + jarPath);
            jarFile = new JarFile(jarPath);

            Enumeration<JarEntry> en = jarFile.entries();
            while (en.hasMoreElements()) {
                JarEntry entry = en.nextElement();
                String entryName = entry.getName();

                packageName = packageName.replace('.', '/');

                if (entryName != null && entryName.endsWith(".class") && entryName.startsWith(packageName) && !entryName.substring(packageName.length() + 1).contains("/")) {

                    try {
                        Class<?> entryClass = cl.loadClass(entryName.substring(0, entryName.length() - 6).replace('/', '.'));
                        if (entryClass != null) {
                            result.put(entryClass.getCanonicalName(), entryClass);
                            System.out.println("   CLASS   : " + packageName + "." + entryClass.getCanonicalName());
                        }
                    } catch (Throwable e) {
//                        System.err.println("Error instanciating: " + entryName + " " + e.toString());
                    }
                }
            }
        } catch (Exception e) {
        } finally {
            try {
                if (jarFile != null) {
                    jarFile.close();
                }
            } catch (Exception e) {
            }
        }
    }

    /**
     * Method that returns all package names underneath a given root package
     * name.
     *
     * @return Set of Classes
     */
    private static TreeSet<String> getPackageFromPackage(String pckgname, URLClassLoader cl) {
        TreeSet<String> result = new TreeSet();

        for (URL jarURL : getJarURLs(cl)) {
            getPackageInPackageFromJar(result, pckgname, jarURL.getPath(), cl);
        }

        return result;
    }

    /**
     * Method that fills TreeMap with all package names in a particular jar
     * file, underneath a given root package name.
     *
     */
    private static void getPackageInPackageFromJar(TreeSet<String> result, String packageName, String jarPath, URLClassLoader cl) {
        JarFile jarFile = null;
        try {
            System.out.println("");
            System.out.println("** IN JAR : " + jarPath);

            jarFile = new JarFile(jarPath);

            Enumeration<JarEntry> en = jarFile.entries();
            while (en.hasMoreElements()) {
                JarEntry entry = en.nextElement();
                String entryName = entry.getName();

                packageName = packageName.replace('.', '/');

                if (entryName != null && entryName.endsWith("/") && entryName.startsWith(packageName + "/")) {

                    try {
                        String packageEntryName = entryName.substring(packageName.length() + 1);
                        packageEntryName = packageEntryName.substring(0, packageEntryName.indexOf("/"));

                        result.add(packageName.replace('/', '.') + "." + packageEntryName);

                        System.out.println("   PACKAGE : " + packageName.replace('/', '.') + "." + packageEntryName);
                    } catch (Throwable e) {
                    }
                }
            }
        } catch (Exception e) {
        } finally {
            try {
                if (jarFile != null) {
                    jarFile.close();
                }
            } catch (Exception e) {
            }
        }
    }
}

 

Cet article Java reflection: Get classes and packages names from a root package within a custom URLClassloader est apparu en premier sur Blog dbi services.

Generate Azure VM with Resource Manager deployment in PowerShell

Wed, 2016-08-17 09:04

Recently, a new way to manage the Azure infrastructure appeared: Resource Manager. It brings many advantages compared to the classic deployment.
The differences between these two deployment models will not be covered in this blog because that is not the goal here, and a very good Microsoft article already exists on this subject.

In this blog, we will generate a new Windows Azure Virtual Machine using Resource Manager deployment with PowerShell from On-Premise.

Remember, only RM objects can be listed with RM cmdlets! Conversely, only Classic objects can be listed with Classic cmdlets!

We can connect automatically to Azure Account with this command:
Select-AzureRmProfile -Path "C:\temp\AzureCert.json"

But to download this certificate, we need to connect manually to Azure Account at least once as follows:
Add-AzureRmAccount -SubscriptionId "<YourSubscriptionID>"

Enter your personal credentials and then run the following command:
Save-AzureRmProfile -Path "C:\temp\AzureCert.json"

If you want to navigate through your different attached Azure subscriptions, use the cmdlets Get-AzureRmSubscription/Set-AzureRmSubscription.

To obtain the different existing Azure Locations:
Get-AzureRmLocation | Select DisplayName

For the end of this blog, we will work in this specific Azure Location:
$location = "West Europe"

Hardware Profile

To list all different available Resource Group:
Get-AzureRmResourceGroup | Select ResourceGroupName, Location

And select your specific Azure Resource Group:
$resourceGroupName = (Get-AzureRmResourceGroup).ResourceGroupName[0]

To choose the correct VM size, list all available Azure formats:
Get-AzureRmVMSize -location $location | Select Name, NumberOfCores, MemoryInMB
$vmSize = "Standard_A3"

And initialize the VM object to build (the VM name is reused later for the computer name and the RDP file):
$vmName = "Artanis"
$vm = New-AzureRmVMConfig -VMName $vmName -VMSize $vmSize

Image Profile

Now we want to select a specific image available from a publisher in Azure. In this case, we will choose the last SQL Server 2016 Enterprise edition ISO.
The different steps will describe the method to find out all the elements to select the correct available image.

Select all publishers from a specific Azure Location:
Get-AzureRmVMImagePublisher -Location $location | Select PublisherName
$publisher = "MicrosoftSQLServer"

Now select all offers from a specific Azure Publisher:
Get-AzureRmVMImageOffer -Location $location -PublisherName $publisher | Select Offer
$offer = "SQL2016-WS2012R2"

Then select all Skus from a specific Azure Offer:
Get-AzureRmVMImageSku -Location $location -PublisherName $publisher -Offer $offer | Select Skus
$skus = "Enterprise"

Finally choose your version:
(Get-AzureRmVMImage -Location $location -PublisherName $publisher -Offer $offer -Skus $skus).version

To obtain the last version of the image:
$version = (Get-AzureRmVMImage -Location $location -PublisherName $publisher -Offer $offer -Skus $skus | Sort-Object -Property Version -Descending).Version[0]

Add the image profile to the existing VM object:
$vm = Set-AzureRmVMSourceImage -VM $vm -PublisherName $publisher -Offer $offer -Skus $skus -Version $version

OS Profile

According to the Image Profile, the Virtual Machine will be a Windows Server. So enter the specifications as follows:
$username = "dbi"
$password = ConvertTo-SecureString 'B3stPa$$w0rd3v3r' -AsPlainText -Force
$cred = New-Object System.Management.Automation.PSCredential ($username, $password)
$vm = Set-AzureRmVMOperatingSystem -VM $VM -ComputerName "Artanis" -Windows -Credential $cred -ProvisionVMAgent

Disk Profile

As the VM will be created from an Azure Image, we need to specify a location and a name for the OS disk.

To list all your available Azure Storage Accounts, run this command:
Get-AzureRmStorageAccount | Select StorageAccountName, Location

To list the different containers available in your Azure Storage:
(Get-AzureRmStorageAccount | Get-AzureStorageContainer).CloudBlobContainer

And now add a disk profile to the existing VM:
$diskLocation = "https://<accountStorageName>.blob.core.windows.net/vhds/"
$vm = Set-AzureRmVMOSDisk -VM $vm -Name "artanisVHDOS.vhd" -VhdUri ($diskLocation+"artanisVHDOS.vhd") -CreateOption FromImage

IP Profile

Here is an example of Network configuration:
$subnet = New-AzureRmVirtualNetworkSubnetConfig -Name "CloudSubnet" -AddressPrefix "10.0.64.0/24"
$ain = New-AzureRmVirtualNetwork -Name "VirtualNetwork" -ResourceGroupName $resourceGroupName -Location $location -AddressPrefix "10.0.0.0/16" -Subnet $subnet
$pip = New-AzureRmPublicIpAddress -Name "AzurePublicIP" -ResourceGroupName $resourceGroupName -AllocationMethod Dynamic -Location $location
$nic = New-AzureRmNetworkInterface -Name "AzureNetInterface" -ResourceGroupName $resourceGroupName -Location $location -SubnetId $ain.Subnets[0].Id -PublicIpAddressId $pip.Id

Conclusion: VM generation

Now we have entered all different profiles required to generate a new Windows Azure VM:
$azurevm = New-AzureRmVM -ResourceGroupName $resourceGroupName -Location $location -VM $vm

Use “Get-AzureRmVM” cmdlet to list all available VMs.

To download the remote desktop file to connect to this new virtual machine, use the following command:
Get-AzureRmRemoteDesktopFile -ResourceGroupName $resourceGroupName -Name $vmName -LocalPath "C:\Temp\Artanis.rdp"

With all these commands, you can see how simple it is to automate the generation of a new virtual machine in Azure. Moreover, you have probably noticed that the construction of the VM object (with the different profiles) is similar to the Hyper-V structure.

I hope it helps you ;-)

 

Cet article Generate Azure VM with Resource Manager deployment in PowerShell est apparu en premier sur Blog dbi services.

Unplugged pluggable databases

Mon, 2016-08-15 08:35

When Oracle Multitenant came out in 12c, with pluggable databases, it was easy to draw them as USB sticks that you can plug and unplug to/from your Container Database (CDB). I don’t like this because it gives the false idea that an unplugged database is detached from the container.

Containers

In the Oracle documentation, the Concept book, the description of the multitenant architecture starts with an illustration of a CDB.
CDB with two PDBs where the text description starts like:
This graphic depicts a CDB as a cylinder. Inside the CDB is a box labeled “Root (CDB$ROOT).” Plugged into the box is the Seed (CDB$SEED), hrpdb, and salespdb.

Let me list what I don’t like with this description:

  1. There are 5 containers here but 3 ways to draw them. The CDB itself is a container (CON_ID=0) and is a cylinder. The CDB$ROOT (CON_ID=1) is a container and is a box. The PDB$SEED and the user PDBs are cylinders with a USB plug.
  2. The CDB$ROOT does not look like a database (cylinder). However, physically it's the same: SYSTEM, SYSAUX, UNDO and TEMP tablespaces.
  3. The PDB$SEED (CON_ID=2) looks like it is pluggable (USB stick) but you never unplug the PDB$SEED.
  4. The USB plug is plugged inside the CDB$ROOT. That's wrong. All containers inside a CDB are at the same level and are 'plugged' into the CDB (CON_ID=0), not into the CDB$ROOT (CON_ID=1). They are contained by the CDB, and if they are plugged anywhere, it's in the CDB controlfile. The root is a root for metadata and object links, not for the whole PDBs.

If I had to show pluggable databases as USB sticks it would be like that:
CaptureUSBStick

Here CDB$ROOT is a container like the pluggable databases, except that you cannot unplug it. PDB$SEED is a pluggable database, but one that you don't unplug. The CDB is a container but does not look like a database: it's the controlfile and the instance, but there are no datafiles directly attached to the CDB.

Unplugged

However with this illustration, we can think that an unplugged pluggable database is detached from the CDB, which is wrong.

SQL> show pdbs
 
CON_ID CON_NAME OPEN MODE RESTRICTED
---------- ------------------------------ ---------- ----------
2 PDB$SEED READ ONLY NO
4 PDB READ WRITE NO
SQL> alter pluggable database PDB close;
Pluggable database altered.
 
SQL> alter pluggable database PDB unplug into '/tmp/PDB.xml';
Pluggable database altered.
 
SQL> show pdbs
 
CON_ID CON_NAME OPEN MODE RESTRICTED
---------- ------------------------------ ---------- ----------
2 PDB$SEED READ ONLY NO
4 PDB MOUNTED

Here PDB is unplugged, but still pertains to the CDB.

The CDB controlfile still addresses all the PDB datafiles:

RMAN> report schema;
 
using target database control file instead of recovery catalog
Report of database schema for database with db_unique_name CDB
 
List of Permanent Datafiles
===========================
File Size(MB) Tablespace RB segs Datafile Name
---- -------- -------------------- ------- ------------------------
1 829 SYSTEM YES /u02/app/oracle/oradata/CDB/system01.dbf
3 1390 SYSAUX NO /u02/app/oracle/oradata/CDB/sysaux01.dbf
4 4771 UNDOTBS1 YES /u02/app/oracle/oradata/CDB/undotbs01.dbf
5 270 PDB$SEED:SYSTEM NO /u02/app/oracle/oradata/CDB/pdbseed/system01.dbf
6 2 USERS NO /u02/app/oracle/oradata/CDB/users01.dbf
7 540 PDB$SEED:SYSAUX NO /u02/app/oracle/oradata/CDB/pdbseed/sysaux01.dbf
12 280 PDB:SYSTEM NO /u02/app/oracle/oradata/CDB/3969397A986337DCE053B6CDC40AC61C/datafile/o1_mf_system_ctcxz29m_.dbf
13 570 PDB:SYSAUX NO /u02/app/oracle/oradata/CDB/3969397A986337DCE053B6CDC40AC61C/datafile/o1_mf_sysaux_ctcxz2bb_.dbf

The datafiles 12 and 13 are the ones from my unplugged PDB, still known and managed by the CDB.

Backup

An unplugged PDB has data, and data should have backups. Who is responsible for the unplugged PDB backups? It’s still the CDB:

RMAN> backup database;
 
...
channel ORA_DISK_1: starting full datafile backup set
channel ORA_DISK_1: specifying datafile(s) in backup set
input datafile file number=00013 name=/u02/app/oracle/oradata/CDB/3969397A986337DCE053B6CDC40AC61C/datafile/o1_mf_sysaux_ctcxz2bb_.dbf
input datafile file number=00012 name=/u02/app/oracle/oradata/CDB/3969397A986337DCE053B6CDC40AC61C/datafile/o1_mf_system_ctcxz29m_.dbf
...

The unplugged PDB is not detached at all and the CDB is still referencing its files and is responsible for them. This is very different from an unplugged USB stick, which has no link anymore with the host it was plugged into before.

Backup optimization

If you wonder whether it's good to back up an unplugged PDB with each CDB backup, don't worry. RMAN knows that it is in a state where it cannot be modified (like read-only tablespaces) and does not back it up each time. Of course, you need BACKUP OPTIMIZATION configured to ON.
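If it is not already the case, it is a one-line RMAN configuration (a simple sketch):

RMAN> CONFIGURE BACKUP OPTIMIZATION ON;

With that setting, the next backup simply skips the datafiles of the unplugged PDB: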

RMAN> backup database;
 
Starting backup at 15-AUG-16
using channel ORA_DISK_1
skipping datafile 12; already backed up 2 time(s)
skipping datafile 13; already backed up 2 time(s)
channel ORA_DISK_1: starting full datafile backup set
channel ORA_DISK_1: specifying datafile(s) in backup set

Unplug and DROP

From what we have seen, an unplugged PDB is like a closed PDB. There's a difference though: an unplugged PDB is closed forever. You cannot open it again:

SQL> show pdbs
 
CON_ID CON_NAME OPEN MODE RESTRICTED
---------- ------------------------------ ---------- ----------
2 PDB$SEED READ ONLY NO
4 PDB MOUNTED
SQL> alter pluggable database PDB open;
alter pluggable database PDB open
*
ERROR at line 1:
ORA-65086: cannot open/close the pluggable database
 
SQL> host oerr ora 65086
65086, 00000, "cannot open/close the pluggable database"
// *Cause: The pluggable database has been unplugged.
// *Action: The pluggable database can only be dropped.

So, if you want to keep the USB stick analogy, unplugged does not mean 'physically unplugged' but rather something like what you should do before removing a USB stick:
Screenshot 2016-08-15 10.44.46

In summary:

ALTER PLUGGABLE DATABASE … UNPLUG is like the logical ‘eject’ you do to be sure that what you will remove physically was closed forever.
ALTER PLUGGABLE DATABASE … DROP … KEEP DATAFILES is the physical removal from the CDB
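Put together with the commands used earlier in this post, the full 'eject then remove' sequence is simply:

SQL> alter pluggable database PDB close;
SQL> alter pluggable database PDB unplug into '/tmp/PDB.xml';
SQL> drop pluggable database PDB keep datafiles;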

Because DROP is the only thing that can be done on an unplugged PDB, SQL Developer does both when you click on 'unplug':
CaptureUNPLUGDROP

The idea of dropping it just after the unplug is probably there to prevent the risk of dropping it 'including datafiles' after it has been plugged into another CDB, because then it would be lost.
However, keep in mind that once unplugged and dropped, nobody will back up the PDB datafiles until it is plugged into a new CDB.

Read-Only

There's one last difference. A USB stick can be read-only. A freshly plugged PDB cannot be opened read-only right away. You may want to share a database from a read-only filesystem, like you can do with transportable tablespaces, but you can't:

SQL> drop pluggable database PDB keep datafiles;
Pluggable database dropped.
 
SQL> create pluggable database PDB using '/tmp/PDB.xml';
Pluggable database created.
 
SQL> alter pluggable database PDB open read only;
alter pluggable database PDB open read only
*
ERROR at line 1:
ORA-65085: cannot open pluggable database in read-only mode

The plugged PDB must be opened in read/write mode at least once:
SQL> host oerr ora 65085
65085, 00000, "cannot open pluggable database in read-only mode"
// *Cause: The pluggable database has been created and not opened.
// *Action: The pluggable database needs to be opened in read/write or
// restricted mode first.

And only then, it can be opened read-only:

SQL> alter pluggable database PDB open;
Pluggable database altered.
 
SQL> alter pluggable database PDB close;
Pluggable database altered.
 
SQL> alter pluggable database PDB open read only;
Pluggable database altered.

So what…

Here is the way I visualize pluggable databases:

CaptureCON_ID

Just a bunch of tablespaces, referenced by the CDB controlfile and grouped by self-contained containers. CDB$ROOT cannot be cloned nor unplugged. PDB$SEED can be cloned but not unplugged (but it’s a PDB). Other PDBs can be cloned and unplugged.

I’ll talk about multitenant at Oracle Open World, DOAG Conference and UKOUG TECH16.
There’s also a book coming, probably early 2017 (depends on 12.2 availability)

 

Cet article Unplugged pluggable databases est apparu en premier sur Blog dbi services.

Windows: RMAN backup to a shared disk

Mon, 2016-08-08 11:09

In this blog, I will explain how to export RMAN backups to a shared disk belonging to a domain.

Ensuring data safety is one of the main tasks of the administrator:

  • Implementing protection for the sensitive files of the database:
    • the control file
    • the redo log files
  • Putting in place a backup/recovery strategy:
    • adapted to the constraints of the company
    • tested and documented.

To learn about the various backup and restore techniques, I suggest you take a look at our Workshop Oracle Backup Recovery page.

Many of you certainly use Windows servers to administer Oracle databases; however, administering them on a Windows environment is not always as easy as on Linux.
That is why I propose a backup solution that exports your backups to a shared disk or a storage server, on which the backups are then copied daily to disk or tape.

Here are the steps to follow:

  • Check the (read/write) permissions on the shared disk
  • Configure the Oracle service and the listener in the "services.msc" tool with the service user
  • Make sure the password of the service account never expires and that the account is never locked or deleted
  • Restart the services (Oracle and listener)
  • Test the backups with RMAN

Go to "services.msc" and change the settings of the "OracleService_[instance_name]" service as well as the "Listener" service to use the service account that runs your databases.

Right-click, select "Properties", go to the "Log On" tab and select "This account".
Then click "Browse", type the name of the service user, and finally click "Check Names" to find the user in the Active Directory.

Capture-13
Capture-8Capture-9

Restart the Oracle and listener services.
WARNING: do not restart the services if the database is a production one!
Capture-15

Test the RMAN backups on the shared drive:
Capture-17
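The screenshot is not reproduced here, but such a test could look like the following sketch (the UNC path \\storage\orabackup is only an example; adapt it to your share):

rman target /
RMAN> backup database format '\\storage\orabackup\%d_%T_%U.bkp';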

Of course, it is better to script the backups via the Task Scheduler in order to run them automatically. I will cover this next step in a second blog post.

 

Cet article Windows: Sauvegarde RMAN sur un disque partagé est apparu en premier sur Blog dbi services.

Multitenant internals: object links on fixed tables

Sun, 2016-08-07 11:13

The previous post partly answered the original question (why an object link to V$SESSION is refreshed only every 30 seconds): recursive queries on sharing=object views. Now let's see what is different with fixed tables.

Disclaimer: this is research only. Don’t do that anywhere else than a lab. This is implicit when the title is ‘internals’.

Result cache

When a query on a sharing=object view is executed from a PDB, the session switches to CDB$ROOT to run a recursive query that gets the rows. This query uses the result cache by adding the following hint:
/*+ RESULT_CACHE (SYSOBJ=TRUE) */

This enables result cache for the rows fetched by this query, and even for system object. The ‘SYSOBJ=TRUE’ is there because the “_rc_sys_obj_enabled” defaults to true.

Here is the result cache from the previous post (I flushed the result cache just before the second run because in 12c hard parsing also uses the result cache a lot):


16:34:00 SQL> select con_id,sql_id,rows_processed,plan_hash_value,executions,parse_calls,fetches,buffer_gets,sql_text from v$sql where plan_hash_value in (3598352655,3551671056) order by last_active_time;
 
CON_ID SQL_ID ROWS_PROCESSED PLAN_HASH_VALUE EXECUTIONS PARSE_CALLS FETCHES BUFFER_GETS SQL_TEXT
---------- ------------- -------------- --------------- ---------- ----------- ---------- ----------- --------------------------------------------------------------------------------
1 350gg6247sfa6 200 3598352655 2 2 2 26 SELECT /*+ RESULT_CACHE (SYSOBJ=TRUE) */ ID,NUM FROM NO_OBJECT_LINK("SYS"."DEMOV
3 bynmh7xm4bf54 0 3598352655 0 5 0 51 SELECT * FROM NO_OBJECT_LINK("SYS"."DEMOV")
3 duy45bn72jr35 200 3551671056 2 2 16 269 select id from DEMOV where num<=100
16:34:00 SQL> select type,status,name,row_count from v$result_cache_objects order by row_count desc fetch first 10 rows only;
 
TYPE STATUS NAME ROW_COUNT
---------- --------- ------------------------------------------------------------------------------------------------------------------------ ----------
Result Published SELECT /*+ RESULT_CACHE (SYSOBJ=TRUE) */ ID,NUM FROM NO_OBJECT_LINK("SYS"."DEMOV") "DEMOV" WHERE "DEMOV"."NUM"<=100 100
Dependency Published SYS.DEMOT 0
Dependency Published SYS.DEMOV 0

As with regular result cache, there is dependency tracking: as soon as the underlying table has some modification, the cache will be invalidated. So this query is always guaranteed to get fresh results.

Invalidation

I did the same, but deleted half of the rows before the second execution in order to invalidate the result cache:


16:43:46 SQL> select con_id,sql_id,rows_processed,plan_hash_value,executions,parse_calls,fetches,buffer_gets,sql_text from v$sql where plan_hash_value in (3598352655,3551671056) order by last_active_time;
 
CON_ID SQL_ID ROWS_PROCESSED PLAN_HASH_VALUE EXECUTIONS PARSE_CALLS FETCHES BUFFER_GETS SQL_TEXT
---------- ------------- -------------- --------------- ---------- ----------- ---------- ----------- --------------------------------------------------------------------------------
1 350gg6247sfa6 150 3598352655 2 2 2 26 SELECT /*+ RESULT_CACHE (SYSOBJ=TRUE) */ ID,NUM FROM NO_OBJECT_LINK("SYS"."DEMOV
3 bynmh7xm4bf54 0 3598352655 0 5 0 51 SELECT * FROM NO_OBJECT_LINK("SYS"."DEMOV")
3 duy45bn72jr35 150 3551671056 2 2 13 269 select id from DEMOV where num<=100
16:43:46 SQL> select type,status,name,row_count from v$result_cache_objects order by row_count desc fetch first 10 rows only;
 
TYPE STATUS NAME ROW_COUNT
---------- --------- ------------------------------------------------------------------------------------------------------------------------ ----------
Result Invalid SELECT /*+ RESULT_CACHE (SYSOBJ=TRUE) */ ID,NUM FROM NO_OBJECT_LINK("SYS"."DEMOV") "DEMOV" WHERE "DEMOV"."NUM"<=100 100
Result Published SELECT /*+ RESULT_CACHE (SYSOBJ=TRUE) */ ID,NUM FROM NO_OBJECT_LINK("SYS"."DEMOV") "DEMOV" WHERE "DEMOV"."NUM"<=100 50
Dependency Published SYS.DEMOT 0
Dependency Published SYS.DEMOV 0

I have the 100 rows from the first run, invalidated, and then the 50 rows from the second one.

Note that I get the same result when I set "_disable_cdb_view_rc_invalidation"=true. Sometimes the behavior of undocumented parameters cannot be guessed from their name alone.

Fixed tables

I’ve run the same testcase but with the following definition of DEMOV:

create view DEMOV sharing=object as select saddr id, rownum num from V$SESSION;

Here is the trace of the recursive query run in CDB$ROOT, at first execution:

PARSING IN CURSOR #140436732146672 len=112 dep=1 uid=0 oct=3 lid=0 tim=769208810641 hv=3298783355 ad='108ee92f0' sqlid=' 10qf9kb29yw3v'
SELECT /*+ RESULT_CACHE (SYSOBJ=TRUE SHELFLIFE=30) */ ID,NUM FROM "SYS"."DEMOV" "DEMOV" WHERE "DEMOV"."NUM"<=100
END OF STMT
PARSE #140436732146672:c=4000,e=10614,p=0,cr=6,cu=0,mis=1,r=0,dep=1,og=4,plh=350654732,tim=769208810640
EXEC #140436732146672:c=0,e=17,p=0,cr=0,cu=0,mis=0,r=0,dep=1,og=4,plh=350654732,tim=769208810701
FETCH #140436732146672:c=1000,e=961,p=0,cr=0,cu=0,mis=0,r=53,dep=1,og=4,plh=350654732,tim=769208811687
STAT #140436732146672 id=1 cnt=53 pid=0 pos=1 obj=0 op='RESULT CACHE byq3fbkawmkm34gtfk1csvwv52 (cr=0 pr=0 pw=0 time=877 us)'
STAT #140436732146672 id=2 cnt=53 pid=1 pos=1 obj=98258 op='VIEW DEMOV (cr=0 pr=0 pw=0 time=655 us cost=0 size=171 card=9)'
STAT #140436732146672 id=3 cnt=53 pid=2 pos=1 obj=0 op='COUNT (cr=0 pr=0 pw=0 time=652 us)'
STAT #140436732146672 id=4 cnt=53 pid=3 pos=1 obj=0 op='NESTED LOOPS (cr=0 pr=0 pw=0 time=597 us cost=0 size=306 card=9)'
STAT #140436732146672 id=5 cnt=53 pid=4 pos=1 obj=0 op='NESTED LOOPS (cr=0 pr=0 pw=0 time=436 us cost=0 size=270 card=9)'
STAT #140436732146672 id=6 cnt=53 pid=5 pos=1 obj=0 op='FIXED TABLE FULL X$KSLWT (cr=0 pr=0 pw=0 time=219 us cost=0 size=352 card=44)'
STAT #140436732146672 id=7 cnt=53 pid=5 pos=2 obj=0 op='FIXED TABLE FIXED INDEX X$KSUSE (ind:1) (cr=0 pr=0 pw=0 time=101 us cost=0 size=22 card=1)'
STAT #140436732146672 id=8 cnt=53 pid=4 pos=2 obj=0 op='FIXED TABLE FIXED INDEX X$KSLED (ind:2) (cr=0 pr=0 pw=0 time=70 us cost=0 size=4 card=1)'

The difference here is that SHELFLIFE=30 has been added to the generated result cache hint.

The second run has a very short parse time (c=0) because it's a soft parse, but you also see a very short fetch time (c=0) because it's a result cache hit:

PARSING IN CURSOR #140436733602136 len=112 dep=1 uid=0 oct=3 lid=0 tim=769208821904 hv=3298783355 ad='108ee92f0' sqlid=' 10qf9kb29yw3v'
SELECT /*+ RESULT_CACHE (SYSOBJ=TRUE SHELFLIFE=30) */ ID,NUM FROM "SYS"."DEMOV" "DEMOV" WHERE "DEMOV"."NUM"<=100
END OF STMT
PARSE #140436733602136:c=0,e=29,p=0,cr=0,cu=0,mis=0,r=0,dep=1,og=4,plh=350654732,tim=769208821904
EXEC #140436733602136:c=0,e=13,p=0,cr=0,cu=0,mis=0,r=0,dep=1,og=4,plh=350654732,tim=769208821955
FETCH #140436733602136:c=0,e=18,p=0,cr=0,cu=0,mis=0,r=53,dep=1,og=4,plh=350654732,tim=769208821990

When I look at the result cache, there were no invalidations:
TYPE STATUS NAME ROW_COUNT
---------- --------- ------------------------------------------------------------------------------------------------------------------------ ----------
Result Published SELECT /*+ RESULT_CACHE (SYSOBJ=TRUE SHELFLIFE=30) */ ID,NUM FROM "SYS"."DEMOV" "DEMOV" WHERE "DEMOV"."NUM"<=100 55
Dependency Published SYS.DEMOV 0

When SHELFLIFE is set in a result cache hint, there is no dependency tracking. I’ve described RESULT_CACHE hint expiration options in a previous post.
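For reference, the same expiration mechanism can be requested explicitly on a regular query. This is only a sketch on an ordinary user table T, and SHELFLIFE is an undocumented option of the RESULT_CACHE hint, so keep it for research: the cached result simply becomes stale after 30 seconds instead of being invalidated by dependency tracking.

select /*+ RESULT_CACHE (SHELFLIFE=30) */ count(*) from T;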

The V$ views are built on fixed tables, which are in-memory structures where dependency tracking is not possible. This is probably why the recursive query for sharing=object views uses a SHELFLIFE instead.

This means that if you create an object link view on a fixed table, the query will show the same result for subsequent executions during 30 seconds.

“_cdb_view_rc_shelflife”

I've tested a sharing=object view on V$SESSION as an answer to a previous blog comment. My query selects MAX(LAST_CALL_ET), which is supposed to increase every second for inactive sessions. And we see that the result changes only every 30 seconds.

Those 30 seconds are controlled by "_cdb_view_rc_shelflife". Here is the same test where I set "_cdb_view_rc_shelflife" to 5 seconds:


15:31:48 SQL> alter session set "_cdb_view_rc_shelflife"=5;
Session altered.
 
15:31:48 SQL> set serveroutput on
15:31:48 SQL> declare
15:31:48 2 x varchar2(100);
15:31:48 3 begin
15:31:48 4 for i in 1..60 loop
15:31:48 5 dbms_lock.sleep(1);
15:31:48 6 select to_char(current_timestamp)||' --> '||max(last_call_et) into x from DEMOV;
15:31:48 7 dbms_output.put_line(x);
15:31:48 8 end loop;
15:31:48 9 end;
15:31:48 10 /
 
07-AUG-16 03.31.49.852081 PM +00:00 --> 775144
07-AUG-16 03.31.50.863742 PM +00:00 --> 775144
07-AUG-16 03.31.51.863753 PM +00:00 --> 775144
07-AUG-16 03.31.52.864697 PM +00:00 --> 775144
07-AUG-16 03.31.53.864706 PM +00:00 --> 775144
07-AUG-16 03.31.54.864726 PM +00:00 --> 775144
07-AUG-16 03.31.55.864669 PM +00:00 --> 775150
07-AUG-16 03.31.56.864711 PM +00:00 --> 775150
07-AUG-16 03.31.57.864754 PM +00:00 --> 775150
07-AUG-16 03.31.58.864702 PM +00:00 --> 775150
07-AUG-16 03.31.59.864711 PM +00:00 --> 775150
07-AUG-16 03.32.00.864779 PM +00:00 --> 775150
07-AUG-16 03.32.01.865710 PM +00:00 --> 775156
07-AUG-16 03.32.02.866738 PM +00:00 --> 775156
07-AUG-16 03.32.03.866719 PM +00:00 --> 775156
07-AUG-16 03.32.04.866787 PM +00:00 --> 775156
07-AUG-16 03.32.05.866758 PM +00:00 --> 775156
07-AUG-16 03.32.06.866805 PM +00:00 --> 775156
07-AUG-16 03.32.07.867738 PM +00:00 --> 775162
07-AUG-16 03.32.08.868743 PM +00:00 --> 775162
07-AUG-16 03.32.09.868727 PM +00:00 --> 775162
07-AUG-16 03.32.10.868724 PM +00:00 --> 775162
07-AUG-16 03.32.11.868758 PM +00:00 --> 775162
07-AUG-16 03.32.12.869763 PM +00:00 --> 775167
07-AUG-16 03.32.13.870741 PM +00:00 --> 775167
07-AUG-16 03.32.14.870742 PM +00:00 --> 775167
07-AUG-16 03.32.15.870721 PM +00:00 --> 775167
07-AUG-16 03.32.16.870734 PM +00:00 --> 775167
07-AUG-16 03.32.17.870883 PM +00:00 --> 775167
07-AUG-16 03.32.18.872741 PM +00:00 --> 775173
07-AUG-16 03.32.19.873837 PM +00:00 --> 775173

And here is the same test after setting:

SQL> exec dbms_result_cache.bypass(true);

I've not tested it, but I expect the same behavior in Standard Edition, where the result cache is disabled:


07-AUG-16 03.43.32.158741 PM +00:00 --> 775846
07-AUG-16 03.43.33.185793 PM +00:00 --> 775847
07-AUG-16 03.43.34.186633 PM +00:00 --> 775848
07-AUG-16 03.43.35.186738 PM +00:00 --> 775849
07-AUG-16 03.43.36.187696 PM +00:00 --> 775850
07-AUG-16 03.43.37.188684 PM +00:00 --> 775851
07-AUG-16 03.43.38.188692 PM +00:00 --> 775852
07-AUG-16 03.43.39.189755 PM +00:00 --> 775853
07-AUG-16 03.43.40.190697 PM +00:00 --> 775854
07-AUG-16 03.43.41.191763 PM +00:00 --> 775855
07-AUG-16 03.43.42.192706 PM +00:00 --> 775856
07-AUG-16 03.43.43.193736 PM +00:00 --> 775857

Conclusion

Don't be afraid. There are very few sharing=object views in the dictionary, and only a few of them have dependencies on fixed tables:

SQL> select owner,name,referenced_name from dba_dependencies
where (referenced_owner,referenced_name) in (select 'SYS',view_name from v$fixed_view_definition union select 'SYS',name from v$fixed_table)
and (owner,name,type) in (select owner,object_name,object_type from dba_objects where sharing='OBJECT LINK')
;
 
OWNER NAME REFERENCED_NAME
------------------------------ ------------------------------ ------------------------------
SYS INT$DBA_HIST_SQLSTAT X$MODACT_LENGTH
SYS INT$DBA_HIST_ACT_SESS_HISTORY X$MODACT_LENGTH
SYS INT$DBA_OUTSTANDING_ALERTS X$KELTGSD
SYS INT$DBA_OUTSTANDING_ALERTS X$KELTSD
SYS INT$DBA_OUTSTANDING_ALERTS X$KELTOSD
SYS INT$DBA_ALERT_HISTORY X$KELTGSD
SYS INT$DBA_ALERT_HISTORY X$KELTSD
SYS INT$DBA_ALERT_HISTORY X$KELTOSD
SYS INT$DBA_ALERT_HISTORY_DETAIL X$KELTGSD
SYS INT$DBA_ALERT_HISTORY_DETAIL X$KELTSD
SYS INT$DBA_ALERT_HISTORY_DETAIL X$KELTOSD
SYS DEMOV GV$SESSION
 
6 rows selected.

I’ve described how AWR views are stacked onto each other in a previous post.

And don’t worry, you don’t need to have a fresh view of those X$ tables. As an example, behind DBA_HIST_ACTIVE_SES_HISTORY the fixed table X$MODACT_LENGTH holds only the length of module and action strings:
SQL> select * from X$MODACT_LENGTH;
 
ADDR INDX INST_ID CON_ID KSUMODLEN KSUACTLEN
---------------- ---------- ---------- ---------- ---------- ----------
00007FF2EF280920 0 1 0 48 32

And the others (X$KELTSD, X$KELTGSD, X$KELTOSD) are the structures behind V$ALERT_TYPES that are not supposed to change.

So don’t panic. The multitenant architecture has some strange implementation stuff, but mostly harmless…

 

Cet article Multitenant internals: object links on fixed tables est apparu en premier sur Blog dbi services.

Multitenant internals: how object links are parsed/executed

Sun, 2016-08-07 05:33

I had a comment about object link internals when creating a sharing=object view on GV$SESSION. Before posting about that specific case, I realized that I had never explained how a query on an object link is run in the CDB$ROOT container.

Data link testcase

Here is how I create the DEMOT table and DEMOV view as object link.

14:48:58 SQL> connect / as sysdba
Connected.
14:48:59 SQL> alter session set container=CDB$ROOT;
Session altered.
14:48:59 SQL> alter session set "_oracle_script"=true;
Session altered.
14:48:59 SQL> create table DEMOT as select rownum id, rownum num from xmltable('10 to 1000000');
Table created.
SQL> exec dbms_stats.gather_table_stats('','DEMOT');
14:49:00 SQL> create view DEMOV sharing=object as select * from DEMOT;
View created.
14:49:00 SQL> alter session set container=PDB;
Session altered.
14:49:00 SQL> create table DEMOT as select rownum id, rownum num from xmltable('1 to 1');
Table created.
SQL> exec dbms_stats.gather_table_stats('','DEMOT');
14:49:01 SQL> create view DEMOV sharing=object as select * from DEMOT;
View created.
14:49:01 SQL> alter session set "_oracle_script"=false;
Session altered.

And I ran the following query two times (it's easier to look at the trace without hard parsing):

14:49:02 SQL> select id from DEMOV where num<=100;
...
100 rows selected.

SQL_TRACE

I’ll detail the sql_trace of the last run.

First, the query is parsed in our PDB:

PARSING IN CURSOR #140360238365672 len=35 dep=0 uid=0 oct=3 lid=0 tim=687080512770 hv=237558885 ad='10cf55ae8' sqlid=' duy45bn72jr35'
select id from DEMOV where num<=100
END OF STMT
PARSE #140360238365672:c=0,e=86,p=0,cr=0,cu=0,mis=0,r=0,dep=0,og=1,plh=3055399777,tim=687080512769
EXEC #140360238365672:c=0,e=61,p=0,cr=0,cu=0,mis=0,r=0,dep=0,og=1,plh=3055399777,tim=687080512872

There was no hard parse for this second parse call (mis=0 means no ‘Misses in library cache during parse’). Execution occurred but no fetch yet.

At that point, the session switches to CDB$ROOT container (you have to trust me as there is no information about it in the trace file in 12.1)

PARSING IN CURSOR #140360238643592 len=99 dep=1 uid=0 oct=3 lid=0 tim=687080513015 hv=2967959178 ad='107be5590' sqlid=' 3b9x1rasffxna'
SELECT /*+ RESULT_CACHE (SYSOBJ=TRUE) */ ID,NUM FROM "SYS"."DEMOV" "DEMOV" WHERE "DEMOV"."NUM"<=100
END OF STMT
PARSE #140360238643592:c=0,e=33,p=0,cr=0,cu=0,mis=0,r=0,dep=1,og=4,plh=3598352655,tim=687080513014
EXEC #140360238643592:c=0,e=16,p=0,cr=0,cu=0,mis=0,r=0,dep=1,og=4,plh=3598352655,tim=687080513084
FETCH #140360238643592:c=0,e=32,p=0,cr=0,cu=0,mis=0,r=100,dep=1,og=4,plh=3598352655,tim=687080513137
STAT #140360238643592 id=1 cnt=100 pid=0 pos=1 obj=0 op='RESULT CACHE cgn1rxw6ycznac8fyzfursq2z6 (cr=0 pr=0 pw=0 time=12 us)'
STAT #140360238643592 id=2 cnt=0 pid=1 pos=1 obj=98422 op='TABLE ACCESS FULL DEMOT (cr=0 pr=0 pw=0 time=0 us)'

We have here a recursive query (dep=1) that is run on the view in CDB$ROOT. It's not the same query as ours: the FROM clause is our sharing=object view, the WHERE clause contains the predicates that apply to it, and the SELECT clause lists the columns that we need (ID was in my SELECT clause and NUM was in my WHERE clause). The query is parsed, executed, the 100 rows are fetched and, interestingly, the result goes to the result cache. Yes, if you query V$RESULT_CACHE_OBJECTS in a CDB you will see a lot of objects:

#multitenant object links uses result cache internally. AWR views are object links. pic.twitter.com/V5IMW2qQx2

— Franck Pachot (@FranckPachot) August 7, 2016

If you look at the FETCH line above, you see that the second execution was a result cache hit (cr=0)

So, the rows we require from the object link are fetched, then the execution of our query continues in our PDB:


FETCH #140360238365672:c=0,e=235,p=0,cr=0,cu=0,mis=0,r=1,dep=0,og=1,plh=3055399777,tim=687080513194
FETCH #140360238365672:c=0,e=11,p=0,cr=0,cu=0,mis=0,r=15,dep=0,og=1,plh=3055399777,tim=687080513375
FETCH #140360238365672:c=0,e=10,p=0,cr=0,cu=0,mis=0,r=15,dep=0,og=1,plh=3055399777,tim=687080513586
FETCH #140360238365672:c=0,e=9,p=0,cr=0,cu=0,mis=0,r=15,dep=0,og=1,plh=3055399777,tim=687080513776
FETCH #140360238365672:c=0,e=9,p=0,cr=0,cu=0,mis=0,r=15,dep=0,og=1,plh=3055399777,tim=687080513983
FETCH #140360238365672:c=0,e=9,p=0,cr=0,cu=0,mis=0,r=15,dep=0,og=1,plh=3055399777,tim=687080514188
FETCH #140360238365672:c=0,e=8,p=0,cr=0,cu=0,mis=0,r=15,dep=0,og=1,plh=3055399777,tim=687080514375

Up there 91 rows were fetched. We can see in the trace that the recursive cursor is closed there (session switches to CDB$ROOT for that):

CLOSE #140360238643592:c=0,e=3,dep=1,type=0,tim=687080514584

And our session is back on PDB container where the remaining rows are fetched and our cursor closed:

FETCH #140360238365672:c=0,e=40,p=0,cr=0,cu=0,mis=0,r=9,dep=0,og=1,plh=3055399777,tim=687080514610
STAT #140360238365672 id=1 cnt=100 pid=0 pos=1 obj=0 op='FIXED TABLE FULL X$OBLNK$ (cr=0 pr=0 pw=0 time=263 us cost=0 size=13000 card=500)'
CLOSE #140360238365672:c=0,e=3,dep=0,type=0,tim=687080516173

As I’ve explained at DOAGDB16 and SOUG Day (next session is planned for UKOUG TECH16), following metadata and object links is done by the session switching from PDB to CDB$ROOT.

TKPROF

Here is the tkprof of the full trace with two executions

Our query on PDB


SQL ID: duy45bn72jr35 Plan Hash: 3055399777
 
select id
from
DEMOV where num<=100
 
call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 2 0.00 0.00 0 0 0 0
Execute 2 0.00 0.00 0 0 0 0
Fetch 16 0.00 0.00 0 0 0 200
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 20 0.00 0.00 0 0 0 200
 
Misses in library cache during parse: 1
Optimizer mode: ALL_ROWS
Parsing user id: SYS
Number of plan statistics captured: 2
 
Rows (1st) Rows (avg) Rows (max) Row Source Operation
---------- ---------- ---------- ---------------------------------------------------
100 100 100 FIXED TABLE FULL X$OBLNK$ (cr=1040 pr=1034 pw=0 time=42636 us cost=0 size=13000 card=500)

The number of executions and the row count are correct, but there are no logical reads here because all block reads occurred through the recursive query. The execution plan shows a full table scan on X$OBLNK$, which is how object link access paths are displayed in 12.1.

Query on CDB$ROOT


SQL ID: 3b9x1rasffxna Plan Hash: 3598352655
 
SELECT /*+ RESULT_CACHE (SYSOBJ=TRUE) */ ID,NUM
FROM
"SYS"."DEMOV" "DEMOV" WHERE "DEMOV"."NUM"<=100
 
call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 2 0.00 0.00 0 0 0 0
Execute 2 0.00 0.00 0 0 0 0
Fetch 2 0.03 0.08 2068 2074 0 200
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 6 0.03 0.08 2068 2074 0 200
 
Misses in library cache during parse: 1
Optimizer mode: CHOOSE
Parsing user id: SYS (recursive depth: 1)
Number of plan statistics captured: 2
 
Rows (1st) Rows (avg) Rows (max) Row Source Operation
---------- ---------- ---------- ---------------------------------------------------
100 100 100 RESULT CACHE cgn1rxw6ycznac8fyzfursq2z6 (cr=1037 pr=1034 pw=0 time=662 us)
100 50 100 TABLE ACCESS FULL DEMOT (cr=1037 pr=1034 pw=0 time=522 us)

Here is where the work to get the rows from the view is done, in CDB$ROOT. Only two fetches there (one per execution). As we have seen in the raw trace, all rows from the object link were fetched before we issued any fetch call from our query. I did the same test with more rows selected and it seems that the fetch size is 200 rows: when 200 rows are fetched from CDB$ROOT, the session switches back to the PDB to fetch those rows (15 by 15 with the default sqlplus arraysize) and then goes back to CDB$ROOT for the next 200 rows. This means that they are probably buffered.

Actually there’s a hidden parameter to define that: “_cdb_view_prefetch_batch_size” is set to 200 by default.
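Hidden parameters are not visible in V$PARAMETER, but as SYS you can check the current value with a query on the underlying X$ tables (a sketch, for research only):

select ksppinm name, ksppstvl value
from x$ksppi i, x$ksppcv v
where i.indx = v.indx
and ksppinm = '_cdb_view_prefetch_batch_size';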

Note that the 2000 logical reads are from the first execution only because the second one found the result in result cache.

V$SQL

From the SQL_TRACE point of view, the work done in the other container is not included in the statement statistics. This makes tuning more difficult, as we are used to seeing recursive work cumulated in the top-level statement.

From CDB$ROOT here is what we can see from the shared pool (V$SQL) about the queries I’ve seen in the SQL_TRACE. I query V$SQL with the PLAN_HASH_VALUE (‘phv’ in the SQL_TRACE dump).


14:49:02 SQL> select con_id,sql_id,rows_processed,plan_hash_value,executions,parse_calls,fetches,buffer_gets,sql_text from v$sql where plan_hash_value in (3598352655,3055399777) order by last_active_time;
 
CON_ID SQL_ID ROWS_PROCESSED PLAN_HASH_VALUE EXECUTIONS PARSE_CALLS FETCHES BUFFER_GETS SQL_TEXT
---------- ------------- -------------- --------------- ---------- ----------- ---------- ----------- --------------------------------------------------------------------------------
4 duy45bn72jr35 200 3055399777 2 2 16 2721 select id from DEMOV where num<=100
4 bynmh7xm4bf54 0 3598352655 0 1 0 61 SELECT * FROM NO_OBJECT_LINK("SYS"."DEMOV")
1 3b9x1rasffxna 200 3598352655 2 2 2 2080 SELECT /*+ RESULT_CACHE (SYSOBJ=TRUE) */ ID,NUM FROM "SYS"."DEMOV" "DEMOV" WHERE

For the two queries we have seen above, the number of executions and fetches matches what we have seen in the trace. However, buffer_gets from the user query includes the logical reads from the recursive query.

But there's more here. A statement is there with the same PLAN_HASH_VALUE as the internal query. It's the query on the sharing=object view, with the undocumented NO_OBJECT_LINK() function. It is parsed but not executed. This parse occurred in the PDB just before switching to CDB$ROOT, and it occurred only once, when our query was hard parsed. It has the same plan hash value as the internal query because the plan is the same: a full table scan on the table.

My understanding of that is that when hard parsing our query and executing the recursive query on CDB$ROOT, the optimizer checks the view definition in the current container (the PDB) by parsing it without following object links (reason for the NO_OBJECT_LINK).

NO_OBJECT_LINK

Here is the parsing of that query with NO_OBJECT_LINK that occurs in the PDB:


PARSING IN CURSOR #140360238422472 len=43 dep=1 uid=0 oct=3 lid=0 tim=687080413554 hv=1715845284 ad='108fc0230' sqlid=' bynmh7xm4bf54'
SELECT * FROM NO_OBJECT_LINK("SYS"."DEMOV")
END OF STMT
PARSE #140360238422472:c=3000,e=2948,p=0,cr=61,cu=0,mis=1,r=0,dep=1,og=4,plh=3598352655,tim=687080413553
CLOSE #140360238422472:c=0,e=3,dep=1,type=0,tim=687080413614

There is no where clause here. I guess that the goal is just to validate the view in the PDB before executing the full query on CDB$ROOT.

Note that the query in CDB$ROOT does not use NO_OBJECT_LINK here in 12.1, but it could have used it to ensure that there are no further links.

Execution plans

With all those recursive queries, how are the cardinalities estimated? DEMOT has no rows in the PDB and 1,000,000 rows in CDB$ROOT. Statistics are gathered and I query only 100 rows (the values are evenly distributed between the low and high value).

The query that is only parsed in PDB:

PLAN_TABLE_OUTPUT
------------------------------------------------------------------------------------------------------------------------
SQL_ID bynmh7xm4bf54, child number 0
-------------------------------------
SELECT * FROM NO_OBJECT_LINK("SYS"."DEMOV")
Plan hash value: 3598352655
---------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
---------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | | | 2 (100)| |
| 1 | TABLE ACCESS FULL| DEMOT | 1 | 6 | 2 (0)| 00:00:01 |
---------------------------------------------------------------------------

The query that is run in CDB$ROOT:

PLAN_TABLE_OUTPUT
------------------------------------------------------------------------------------------------------------------------
SQL_ID 3b9x1rasffxna, child number 0
-------------------------------------
SELECT /*+ RESULT_CACHE (SYSOBJ=TRUE) */ ID,NUM FROM "SYS"."DEMOV"
"DEMOV" WHERE "DEMOV"."NUM"<=100
Plan hash value: 3598352655
-------------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
-------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | | | 568 (100)| |
| 1 | RESULT CACHE | 9cv1sbwyz16651fgh17234v67g | | | | |
|* 2 | TABLE ACCESS FULL| DEMOT | 100 | 1000 | 568 (2)| 00:00:01 |
-------------------------------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
2 - filter("NUM"<=100)

Estimation is ok here.

And my user query:

PLAN_TABLE_OUTPUT
------------------------------------------------------------------------------------------------------------------------
SQL_ID duy45bn72jr35, child number 0
-------------------------------------
select id from DEMOV where num<=100
Plan hash value: 3055399777
------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)|
------------------------------------------------------------------
| 0 | SELECT STATEMENT | | | | 1 (100)|
|* 1 | FIXED TABLE FULL| X$OBLNK$ | 500 | 13000 | 0 (0)|
------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
1 - filter("NUM"<=100)

Here the estimation is not accurate at all; it seems to be a hardcoded value of 500.

Conclusion

Lots of interesting things here. Object links (which you can also call data links) are processed in a very special way. But don’t worry: remember that you are not allowed to create them yourself, and there are only very few Oracle-maintained object links: some PDB information that must be available when the PDBs are not opened, AWR information that is consolidated in the root, some audit reference tables, …

It is interesting to see (and to think about all the consequences) that the result cache is used here for internal optimization. Even when you don’t use the result cache for your application, you should have a look at it and maybe size it differently than the default. In a future post I’ll create a sharing=object view on V$ fixed views and the result cache will be even more fun.
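If you want to have that look, here is the kind of quick check I mean (a sketch in SQL*Plus; RESULT_CACHE_MAX_SIZE is the parameter that sizes it and DBMS_RESULT_CACHE.MEMORY_REPORT prints the details):

show parameter result_cache_max_size
 
set serveroutput on
-- detailed memory report of the result cache
exec dbms_result_cache.memory_report(detailed=>true)
 
-- hit/miss and invalidation counters
select name,value from v$result_cache_statistics;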

 

Cet article Multitenant internals: how object links are parsed/executed est apparu en premier sur Blog dbi services.

List usernames instead of uids with the ps command for long usernames

Fri, 2016-08-05 08:27

Have you ever faced such a situation? You have usernames in your /etc/passwd file with more than 8 characters. This is no problem for Linux at all (usernames may be up to 32 characters long), but your ps output might look a little scrambled.

It shows you the uid instead of the username like in the following example:

$ id

uid=20001(longuser01) gid=10002(oinstall) groups=10002(oinstall)

$ sleep 1000000 &

$ ps -ef | grep sleep | grep -v grep

20001    14069 11739  0 14:11 pts/0    00:00:00 sleep 1000000

 

But you want to see the username instead of the uid. The workaround is:

  • Don’t use more than eight characters for your usernames  :-)
  • Or… format your ps output the right way

You could use the following alias to get the job done.

$ alias psx='export PS_FORMAT="user:12,pid,%cpu,%mem,vsz,rss,tty,stat,start,time,command"; ps ax'

$ psx | grep sleep | grep -v grep

longuser01 14069  0.0  58940 520 pts/0 S 14:11:50 sleep 1000000

 

Now it looks better.

Cheers, William

 

Cet article List usernames instead of uids with the ps command for long usernames est apparu en premier sur Blog dbi services.

How to do a Filesystem Resize (ext3/ext4) on Redhat running on VMware

Fri, 2016-08-05 07:43

A filesystem resize can be done in several ways: online, offline, with LVM2 or without LVM2. However, this blog will describe how to do an online resize of ext3/ext4 filesystems where a virtual disk (vmdk) is added online to a VMware Redhat guest OS.

So let’s start with the online filesystem resize of ext3/4 filesystems on the Redhat guest OS. A new virtual disk (preferably eager zeroed thick on a VM running Oracle) was added as a prerequisite. Adding a new virtual disk is an online operation and no downtime is required to do it.

The whole procedure in this document is described using the command line only. There is also a graphical user interface, `system-config-lvm`, that can perform the job, but that tool is out of scope in this document.

Online resize of an ext3/4 filesystem

There are several steps that have to be done. These are in general:

  1. Scan for new LUNs
  2. Partition the new LUNs and run partprobe
  3. Create the physical volume
  4. Extend the volume group and the logical volume
  5. Extend the filesystem online
Rescan for new LUNs

Depending on the number of virtual SCSI controllers, you have to scan for your new LUNs on each of them. If you know on which controller the disk was added, then of course you only need to scan the appropriate one.

Rescan for new LUNs on the first SCSI controller (LSI Logic Parallel)
# echo "- - -"  > /sys/class/scsi_host/host0/scan
Rescan for new LUNs on the second SCSI controller (Paravirtualized)
# echo "- - -"  > /sys/class/scsi_host/host1/scan
Create a Primary Partition on the new devices
# fdisk /dev/sdx??

# fdisk /dev/sdy??
Partprobe the new devices

Partprobe is a program that informs the operating system kernel of partition table changes, by requesting that the operating system re-read the partition table.

# partprobe /dev/sdx??

# partprobe /dev/sdy??
Create the Physical Volumes
# pvcreate /dev/sdx??

Physical volume "/dev/sdx??" successfully created
# pvcreate /dev/sdy??

Physical volume "/dev/sdy??" successfully created
Extend the Volume Group
# vgextend VGOracle /dev/sdx??

Volume group "VGOracle" successfully extended
# vgextend VGOracle /dev/sdy??

Volume group "VGOracle" successfully extended
Extend the Logical Volume
# lvextend -L 72G /dev/VGOracle/LVOracleu??

Extending logical volume LVOracleu?? to 72.00 GB

Logical volume LVOracleu01 successfully resized
Online Resize the ext3/ext4 Filesystem

After the logical volume is resized successfully, you can resize any filesystem that is online-resizable. The following are examples for the ext3/ext4 filesystems. The syntax for ext3 and ext4 differs only slightly. For ext3 you use `resize2fs` (even if it’s ext3 and not ext2), and for ext4 you use `resize4fs`, where the command name is more logical.

ext3
# resize2fs /dev/VGOracle/LVOracleu??
ext4
# resize4fs /dev/VGOracle/LVOracleu??

 

That’s it. Now have fun with the bigger filesystem.

Cheers,

William

 

 

Cet article How to do a Filesystem Resize (ext3/ext4) on Redhat running on VMware est apparu en premier sur Blog dbi services.

SQL Server AlwaysOn: troubleshooting improvements with new service packs

Tue, 2016-08-02 09:42

As promised in my latest blog, I will talk about improvements in the detection of availability group replication latency. As a reminder, replication latency between highly available replicas may so far be monitored in different manners.

Firstly, in my experience I had the opportunity to use different performance counters such as “Database Replica:Transaction Delay” and “Database Replica:Mirrored Write Transactions/sec” in order to get a good picture of the latency over time after implementing an availability group. I have also experienced performance issues at customer sites where referring to the HADR_SYNC_COMMIT wait type was also very valuable. However, in both cases a high transaction delay only means that the primary is waiting on a response from the remote replica confirming that the log block has been hardened. This is relevant information for sure, but it is only the symptom. Indeed, at this point we are not aware of where exactly the issue is occurring and we have to figure out what the root cause of this latency is. Is the network involved, or maybe the log flushing activity on the secondary, or both? It is certain that further investigations will be necessary to locate and get rid of the root cause.
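For reference, here is the kind of quick check I use for this wait type (a minimal sketch against sys.dm_os_wait_stats; the counters are cumulative since the last restart or since they were cleared, so the average is only an overall picture):

SELECT wait_type,
       waiting_tasks_count,
       wait_time_ms,
       wait_time_ms / NULLIF(waiting_tasks_count, 0) AS avg_wait_ms
FROM sys.dm_os_wait_stats
WHERE wait_type = N'HADR_SYNC_COMMIT';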

When I discovered the new diagnostic features from the recent service packs of SQL Server, I was pleased to see that new extended events were added that make it easier to answer the previous questions. Before going further into the new extended events, let’s say that I learned a lot by attending the very interesting web meeting about AlwaysOn troubleshooting improvements given by Amit Banerjee (Senior PM at Microsoft for the Tiger Team). During this web meeting, he provided a lot of useful information about new ways to diagnose AlwaysOn architectures, including the new extended events.

You may find an exhaustive list of new / enhanced extended events and their corresponding performance counters by referring to Microsoft KB3173156, but let me summarize the main benefit we get: we now have a complete picture of the replication stack and of the processing time at each level of the architecture, as you may see below:

blog 102 - AG 2014 SP2 - extended events new columns

For an easier understanding, I tried to draw a picture of the AlwaysOn replication process by placing the extended events at their correct location in the replication stack. Please feel free to comment if you find any mistakes.

 

blog 102 - AG 2014 SP2 - replication and xe

As said earlier, these new extended events will probably make some troubleshooting tasks easier, and in this blog post I would like to focus on two latency issues I faced at customer sites where I noticed high HADR_SYNC_COMMIT wait values but where the root cause was not the same.

I decided to simulate these two issues on my lab environment (one simple architecture that includes one availability group and two replicas involved in synchronous replication, running SQL Server 2014 SP2). My main goal was to check whether the new extended events were able to quickly locate the root cause in each case.
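To double-check this lab setup, a query like the following (just a sketch) confirms that both replicas are in synchronous-commit mode and healthy:

SELECT ar.replica_server_name,
       ar.availability_mode_desc,
       ars.role_desc,
       ars.synchronization_health_desc
FROM sys.availability_replicas AS ar
JOIN sys.dm_hadr_availability_replica_states AS ars
  ON ars.replica_id = ar.replica_id;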

So the first thing to do is to implement and enable, on each side (both primary and secondary), a customized AlwaysOn extended event session that includes all the events displayed in the above picture. I directly used the definition of the extended event session shared by Amit Banerjee on GitHub.

CREATE EVENT SESSION [AlwaysOn_Data_Movement_Tracing] 
ON SERVER 
ADD EVENT sqlserver.hadr_apply_log_block,
ADD EVENT sqlserver.hadr_capture_filestream_wait,
ADD EVENT sqlserver.hadr_capture_log_block,
ADD EVENT sqlserver.hadr_capture_vlfheader,
ADD EVENT sqlserver.hadr_database_flow_control_action,
ADD EVENT sqlserver.hadr_db_commit_mgr_harden,
ADD EVENT sqlserver.hadr_log_block_compression,
ADD EVENT sqlserver.hadr_log_block_decompression,
ADD EVENT sqlserver.hadr_log_block_group_commit,
ADD EVENT sqlserver.hadr_log_block_send_complete,
ADD EVENT sqlserver.hadr_lsn_send_complete,
ADD EVENT sqlserver.hadr_receive_harden_lsn_message,
ADD EVENT sqlserver.hadr_send_harden_lsn_message,
ADD EVENT sqlserver.hadr_transport_flow_control_action,
ADD EVENT sqlserver.hadr_transport_receive_log_block_message,
ADD EVENT sqlserver.log_block_pushed_to_logpool,
ADD EVENT sqlserver.log_flush_complete,
ADD EVENT sqlserver.log_flush_start,
ADD EVENT sqlserver.recovery_unit_harden_log_timestamps,
ADD EVENT ucs.ucs_connection_send_msg
(
    WHERE ([correlation_id]<>(0))
) 
ADD TARGET package0.event_file
(
	SET filename=N'AlwaysOn_Data_Movement_Tracing.xel',max_rollover_files=(100)
)
WITH 
(
	MAX_MEMORY=4096 KB,
	EVENT_RETENTION_MODE=ALLOW_SINGLE_EVENT_LOSS,
	MAX_DISPATCH_LATENCY=30 SECONDS,
	MAX_EVENT_SIZE=0 KB,
	MEMORY_PARTITION_MODE=NONE,
	TRACK_CAUSALITY=OFF,
	STARTUP_STATE=ON
)
GO
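Once the session is created on both replicas, it still has to be started for the current instance (STARTUP_STATE=ON only makes it start automatically at the next instance startup):

ALTER EVENT SESSION [AlwaysOn_Data_Movement_Tracing] ON SERVER STATE = START;
GO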

Firstly, I have to warn about the use of such a trace because it may generate a large amount of data, depending mainly on the workload activity. In my test scenario, the extended event session generated up to 4GB of events in a time interval of 20 minutes. So we will probably have to think about some customizations, depending on the context, in order to minimize the overall performance impact and the disk usage as well.
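One possible customization (a sketch only, to be adapted and validated on your environment) is to keep only the events you really need and to filter them on the availability database you want to trace, assuming the events expose a database_id field, which is the case for the events read by the extraction script below. The session name and the database_id value here are mine:

-- hypothetical cut-down session, filtered on one availability database (database_id 5)
CREATE EVENT SESSION [AlwaysOn_Data_Movement_Tracing_OneDB]
ON SERVER
ADD EVENT sqlserver.hadr_apply_log_block
(
    WHERE ([database_id]=(5))
),
ADD EVENT sqlserver.log_flush_complete
(
    WHERE ([database_id]=(5))
)
ADD TARGET package0.event_file
(
    SET filename=N'AlwaysOn_Data_Movement_Tracing_OneDB.xel',max_rollover_files=(10)
)
WITH (MAX_DISPATCH_LATENCY=30 SECONDS, STARTUP_STATE=OFF)
GO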

Then, after collecting the extended events data, I experienced a very long execution time and a high tempdb usage with the initial extraction script provided by Amit Banerjee on GitHub. In my case, the script took approximately 9 hours to finish on my lab environment with 4 vCPUs and 4GB of RAM. This is definitely not a good way to quickly perform further analysis on the collected data. My first feeling was that the script was designed for demonstration purposes with little data. So, in order to handle larger collected data sets, I decided to modify the initial version of the script to reduce the extraction time (it took less than 2 hours for the same amount of data in my case). You will also notice that I reduced the amount of extracted data to the minimum in order to meet my own requirements (I only have two replicas in my case). But let’s say that I’m still in the process of improving the performance of the extraction script in order to handle more realistic production scenarios. So here is my own version of the script used in this blog post:

SET NOCOUNT ON

IF OBJECT_ID('tempdb..#xe_AlwaysOn_Data_Movement_Tracing', 'U') IS NOT NULL
	DROP TABLE #xe_AlwaysOn_Data_Movement_Tracing;


SELECT
	object_name as event_name,
	CONVERT(XML,Event_data) AS event_data
INTO #xe_AlwaysOn_Data_Movement_Tracing
FROM sys.fn_xe_file_target_read_file(
			'C:\Program Files\Microsoft SQL Server\MSSQL12.SQL14\MSSQL\Log\AlwaysOn_Data_Movement_Tracing_0_*.xel', 
			null, null, null) as xe
where object_name in ('hadr_log_block_group_commit',
						'log_block_pushed_to_logpool',
						'log_flush_start',
						'log_flush_complete',
						'hadr_log_block_compression',
						'hadr_capture_log_block',
						'hadr_capture_filestream_wait',
						'hadr_log_block_send_complete',
						'hadr_receive_harden_lsn_message',
						'hadr_db_commit_mgr_harden',
						'recovery_unit_harden_log_timestamps',
						'hadr_capture_vlfheader',
						'hadr_log_block_decompression',
						'hadr_apply_log_block',
						'hadr_send_harden_lsn_message',
						'hadr_log_block_decompression',
						'hadr_lsn_send_complete',
						'hadr_transport_receive_log_block_message');


IF OBJECT_ID('dbo.DMReplicaEvents', 'U') IS NOT NULL
	DROP TABLE dbo.DMReplicaEvents_primary;

CREATE TABLE [dbo].[DMReplicaEvents_primary](
	[server_name] [nvarchar](128) NULL,
	[event_name] [nvarchar](60) NOT NULL,
	[log_block_id] [bigint] NULL,
	[database_id] [int] NULL,
	[processing_time] [bigint] NULL,
	[start_timestamp] [bigint] NULL,
	[publish_timestamp] [datetimeoffset](7) NULL,
	[log_block_size] [int] NULL,
	[target_availability_replica_id] [uniqueidentifier] NULL,
	[mode] [bigint] NULL
) ON [PRIMARY]


INSERT INTO DMReplicaEvents_Primary
SELECT 
	@@SERVERNAME as server_name,
	event_name,
	xe.event_data.value('(/event/data[@name="log_block_id"]/value)[1]','bigint') AS log_block_id,
	xe.event_data.value('(/event/data[@name="database_id"]/value)[1]','int') AS database_id,
	CASE event_name 
		WHEN 'hadr_db_commit_mgr_harden' THEN xe.event_data.value('(/event/data[@name="time_to_commit"]/value)[1]','bigint')
		WHEN 'hadr_apply_log_block' THEN xe.event_data.value('(/event/data[@name="total_processing_time"]/value)[1]','bigint')
		WHEN 'hadr_log_block_send_complete' THEN xe.event_data.value('(/event/data[@name="total_processing_time"]/value)[1]','bigint')
		WHEN 'hadr_lsn_send_complete' THEN xe.event_data.value('(/event/data[@name="total_processing_time"]/value)[1]','bigint')
		WHEN 'log_flush_complete' THEN xe.event_data.value('(/event/data[@name="duration"]/value)[1]','bigint') * 1000
		ELSE xe.event_data.value('(/event/data[@name="processing_time"]/value)[1]','bigint') 
	END AS processing_time,
	xe.event_data.value('(/event/data[@name="start_timestamp"]/value)[1]','bigint') AS start_timestamp,
	xe.event_data.value('(/event/@timestamp)[1]','DATETIMEOFFSET') AS publish_timestamp,
	CASE event_name
		WHEN 'hadr_log_block_compression' THEN xe.event_data.value('(/event/data[@name="uncompressed_size"]/value)[1]','int')
		WHEN 'hadr_log_block_decompression' THEN xe.event_data.value('(/event/data[@name="uncompressed_size"]/value)[1]','int')
		WHEN 'hadr_capture_log_block' THEN xe.event_data.value('(/event/data[@name="log_block_size"]/value)[1]','int')
		ELSE NULL 
	END AS log_block_size,
	CASE event_name
		WHEN 'hadr_db_commit_mgr_harden' THEN xe.event_data.value('(/event/data[@name="replica_id"]/value)[1]','uniqueidentifier')
		WHEN 'hadr_log_block_compression' THEN xe.event_data.value('(/event/data[@name="availability_replica_id"]/value)[1]','uniqueidentifier')
		WHEN 'hadr_log_block_decompression' THEN xe.event_data.value('(/event/data[@name="availability_replica_id"]/value)[1]','uniqueidentifier')
		WHEN 'hadr_capture_log_block' THEN xe.event_data.value('(/event/data[@name="availability_replica_id"]/value)[1]','uniqueidentifier')
		WHEN 'hadr_capture_filestream_wait' THEN xe.event_data.value('(/event/data[@name="availability_replica_id"]/value)[1]','uniqueidentifier')
		WHEN 'hadr_receive_harden_lsn_message' THEN xe.event_data.value('(/event/data[@name="target_availability_replica_id"]/value)[1]','uniqueidentifier')
		WHEN 'hadr_transport_receive_log_block_message' THEN xe.event_data.value('(/event/data[@name="target_availability_replica_id"]/value)[1]','uniqueidentifier')
		WHEN 'hadr_capture_vlfheader' THEN xe.event_data.value('(/event/data[@name="availability_replica_id"]/value)[1]','uniqueidentifier')
		WHEN 'hadr_send_harden_lsn_message' THEN xe.event_data.value('(/event/data[@name="availability_replica_id"]/value)[1]','uniqueidentifier')
		ELSE NULL 
	END AS target_availability_replica_id,
	xe.event_data.value('(/event/data[@name="mode"]/value)[1]','bigint') AS mode
--INTO DMReplicaEvents
FROM #xe_AlwaysOn_Data_Movement_Tracing xe

Having talked about the data collector aspect, let’s continue by using it against my two latency issue scenarios:

 

First scenario (network bandwidth issue)

My first test consisted of simulating a network bandwidth issue by limiting the maximum bandwidth of the public network card to as low a value as possible, in order to trigger a noticeable increase of the availability group replication latency. This is what you may see in the following picture between 19:37 and 19:42. The average transaction delay (or latency) increases accordingly, up to 17ms against 2ms during normal processing.

 

blog 102 - AG 2014 SP2 - first scenario - perfmon

If we take a look at the wait statistics, we find the HADR_SYNC_COMMIT wait type with an average value of 20ms.

blog 102 - AG 2014 SP2 - first scenario - waitstats

But at this point we only know that we are experiencing latency issues; we don’t have any additional information proving that the network is the culprit. This is where the new extended events come into play. I used Excel to consume the collected data and to produce charts for easier analysis. Here is the picture of the first test scenario (total processing time per event):

blog 102 - AG 2014 SP2 - first scenario - xe primary

blog 102 - AG 2014 SP2 - first scenario - xe secondary

From the above charts, we may easily notice, between 19:37 and 19:43, an increase of the hadr_db_commit_mgr_harden processing time on the primary. Other events stay constant over time. Referring to the first replication diagram, we know this event type represents the remote response from the secondary, which increases during the concerned time interval. Furthermore, from the secondary, we see constant processing time values for all events, meaning that the root cause doesn’t seem to concern the secondary itself but rather an external component between the primary and the secondary, in this case probably the network stack. By using the new extended events, we were able to point out the issue quickly and perform further investigations on the network side.

 

Second scenario (storage performance issue)

Let’s continue with the second scenario. At first glance we see the same kind of wait type after the second test, meaning that we’re facing the same latency issue as in the previous scenario.

blog 102 - AG 2014 SP2 - second scenario - waitstats

But this time the root cause is not the same, as we will see by looking at the following extended event charts.

 

blog 102 - AG 2014 SP2 - second scenario - xe primary

blog 102 - AG 2014 SP2 - second scenario - xe secondary

This time, we may notice a pretty obvious correlation between the two charts (primary and secondary replicas). Indeed, each time the remote response time (in other words the latency) increases on the primary, we see the same increase of the log_flush_complete processing time on the secondary, and with the same order of magnitude. In this case, we may quickly focus on the database file (and storage) response time on the secondary replica. In my case I voluntarily used very slow storage to host the killerdb transaction log file, as you may see below in the database file I/O statistics.

blog 102 - AG 2014 SP2 - second scenario - file io stats
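For reference, the write latency of the database files on the secondary can be checked with something like this (a sketch; ‘killerdb’ is the database used in this test):

SELECT DB_NAME(vfs.database_id) AS database_name,
       mf.name                  AS file_name,
       mf.type_desc,
       vfs.num_of_writes,
       vfs.io_stall_write_ms,
       vfs.io_stall_write_ms / NULLIF(vfs.num_of_writes, 0) AS avg_write_latency_ms
FROM sys.dm_io_virtual_file_stats(DB_ID(N'killerdb'), NULL) AS vfs
JOIN sys.master_files AS mf
  ON mf.database_id = vfs.database_id
 AND mf.file_id     = vfs.file_id
ORDER BY mf.type_desc;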

Bottom line

In this blog post, we have seen how useful the new extended events shipped with the latest service packs of SQL Server may be, and we have probably just scratched the surface. But I think we have to keep in mind that using these new extended events has a cost, and for the moment I must admit that I am not confident enough to use the initial version of the collection script blindly in production, for the reasons cited above. I still have to investigate a better way to collect and extract the data in a real situation.

Happy troubleshooting!

 

 

 

 

 

Cet article SQL Server AlwaysOn: troubleshooting improvements with new service packs est apparu en premier sur Blog dbi services.

Exadata X-5 Bare Metal vs. OVM load testing

Sun, 2016-07-31 02:49

In a previous post I tried to compare a single-thread workload between Exadata X5 Bare Metal and Virtualized. The conclusions were that there are no huge differences, and that this kind of comparison is not easy.

Regarding why the comparison is not easy, some reasons have been nicely detailed by Lonny Niederstadt in this Twitter thread.

Besides the single-thread tests, I did a test with 50 sessions doing updates on a small data set. It’s a 50-session SLOB run with SCALE=100M WORK_UNIT=64 UPDATE_PCT=100 and a small buffer cache.

Here are the load profiles, side by side:

Bare Metal Virtualized
 
Load Profile Per Second Per Transaction Per Second Per Transaction
~~~~~~~~~~~~~~~ --------------- --------------- --------------- --------------- -
DB Time(s): 42.9 0.0 48.6 0.1
DB CPU(s): 5.6 0.0 6.8 0.0
Background CPU(s): 3.4 0.0 3.8 0.0
Redo size (bytes): 58,364,739.1 52,693.1 52,350,760.2 52,555.7
Logical read (blocks): 83,306.4 75.2 73,826.5 74.1
Block changes: 145,360.7 131.2 130,416.6 130.9
Physical read (blocks): 66,038.6 59.6 60,512.7 60.8
Physical write (blocks): 69,121.8 62.4 62,962.0 63.2
Read IO requests: 65,944.8 59.5 60,448.0 60.7
Write IO requests: 59,618.4 53.8 55,883.5 56.1
Read IO (MB): 515.9 0.5 472.8 0.5
Write IO (MB): 540.0 0.5 491.9 0.5
Executes (SQL): 1,170.7 1.1 1,045.6 1.1
Rollbacks: 0.0 0.0 0.0 0.0
Transactions: 1,107.6 996.1

and I/O profile:
Bare Metal Virtualized
 
IO Profile Read+Write/Second Read/Second Write/Second Read+Write/Second Read/Second Write/Second
~~~~~~~~~~ ----------------- --------------- --------------- ----------------- --------------- ---------------
Total Requests: 126,471.0 65,950.0 60,521.0 117,014.4 60,452.8 56,561.6
Database Requests: 125,563.2 65,944.8 59,618.4 116,331.5 60,448.0 55,883.5
Optimized Requests: 125,543.0 65,941.1 59,601.9 116,130.7 60,439.9 55,690.7
Redo Requests: 902.2 0.1 902.1 677.1 0.1 677.0
Total (MB): 1,114.0 516.0 598.0 1,016.5 472.8 543.6
Database (MB): 1,055.9 515.9 540.0 964.7 472.8 491.9
Optimized Total (MB): 1,043.1 515.9 527.2 942.6 472.7 469.8
Redo (MB): 57.7 0.0 57.7 51.7 0.0 51.7
Database (blocks): 135,160.4 66,038.6 69,121.8 123,474.7 60,512.7 62,962.0
Via Buffer Cache (blocks): 135,159.8 66,038.5 69,121.3 123,474.0 60,512.6 62,961.4
Direct (blocks): 0.6 0.2 0.5 0.7 0.1 0.5

This is roughly what you can expect from an OLTP workload: a small data set that fits in the flash cache, and a high redo rate. Of course in OLTP you will have a larger buffer cache, but this is not what I wanted to measure here. It seems that the I/O performance is slightly better in bare metal. This is what we also see in the averages:

Top 10 Foreground Events by Total Wait Time
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Total Wait Wait % DB Wait
Event Waits Time (sec) Avg(ms) time Class
------------------------------ ----------- ---------- ---------- ------ --------
cell single block physical rea 9,272,548 4182.4 0.45 69.3 User I/O
free buffer waits 152,043 1511.5 9.94 25.1 Configur
DB CPU 791.4 13.1
 
Top 10 Foreground Events by Total Wait Time
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Total Wait Wait % DB Wait
Event Waits Time (sec) Avg(ms) time Class
------------------------------ ----------- ---------- ---------- ------ --------
cell single block physical rea 7,486,727 3836.2 0.51 63.7 User I/O
free buffer waits 208,658 1840.9 8.82 30.6 Configur
DB CPU 845.9 14.1

It’s interesting to see that even on an I/O bound system there are no significant waits on log file sync.

I’ll focus on ‘log file parallel write':

Bare Metal
EVENT WAIT_TIME_MICRO WAIT_COUNT WAIT_TIME_FORMAT
---------------------------------------- --------------- ---------- ------------------------------
log file parallel write 1 0 1 microsecond
log file parallel write 2 0 2 microseconds
log file parallel write 4 0 4 microseconds
log file parallel write 8 0 8 microseconds
log file parallel write 16 0 16 microseconds
log file parallel write 32 0 32 microseconds
log file parallel write 64 0 64 microseconds
log file parallel write 128 0 128 microseconds
log file parallel write 256 8244 256 microseconds
log file parallel write 512 102771 512 microseconds
log file parallel write 1024 14812 1 millisecond
log file parallel write 2048 444 2 milliseconds
log file parallel write 4096 42 4 milliseconds
log file parallel write 8192 11 8 milliseconds
log file parallel write 16384 3 16 milliseconds
log file parallel write 32768 1 32 milliseconds

Virtualized
EVENT WAIT_TIME_MICRO WAIT_COUNT WAIT_TIME_FORMAT
---------------------------------------- --------------- ---------- ------------------------------
log file parallel write 1 0 1 microsecond
log file parallel write 2 0 2 microseconds
log file parallel write 4 0 4 microseconds
log file parallel write 8 0 8 microseconds
log file parallel write 16 0 16 microseconds
log file parallel write 32 0 32 microseconds
log file parallel write 64 0 64 microseconds
log file parallel write 128 0 128 microseconds
log file parallel write 256 723 256 microseconds
log file parallel write 512 33847 512 microseconds
log file parallel write 1024 41262 1 millisecond
log file parallel write 2048 6483 2 milliseconds
log file parallel write 4096 805 4 milliseconds
log file parallel write 8192 341 8 milliseconds
log file parallel write 16384 70 16 milliseconds
log file parallel write 32768 10 32 milliseconds

As I’ve seen in previous tests, most of the writes were below 512 microseconds in bare metal, and above that in virtualized.
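For reference, these histograms come from the microsecond-level buckets available in 12c; a query like this (just a sketch) returns the columns shown above:

select event, wait_time_micro, wait_count, wait_time_format
  from v$event_histogram_micro
 where event = 'log file parallel write'
 order by wait_time_micro;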

And here are the histograms for the single block reads:

Bare Metal
EVENT WAIT_TIME_MICRO WAIT_COUNT WAIT_TIME_FORMAT
---------------------------------------- --------------- ---------- ------------------------------
cell single block physical read 1 0 1 microsecond
cell single block physical read 2 0 2 microseconds
cell single block physical read 4 0 4 microseconds
cell single block physical read 8 0 8 microseconds
cell single block physical read 16 0 16 microseconds
cell single block physical read 32 0 32 microseconds
cell single block physical read 64 0 64 microseconds
cell single block physical read 128 432 128 microseconds
cell single block physical read 256 2569835 256 microseconds
cell single block physical read 512 5275814 512 microseconds
cell single block physical read 1024 837402 1 millisecond
cell single block physical read 2048 275112 2 milliseconds
cell single block physical read 4096 297320 4 milliseconds
cell single block physical read 8192 4550 8 milliseconds
cell single block physical read 16384 1485 16 milliseconds
cell single block physical read 32768 99 32 milliseconds
cell single block physical read 65536 24 65 milliseconds
cell single block physical read 131072 11 131 milliseconds
cell single block physical read 262144 14 262 milliseconds
cell single block physical read 524288 7 524 milliseconds
cell single block physical read 1048576 4 1 second
cell single block physical read 2097152 1 2 seconds

Virtualized

EVENT WAIT_TIME_MICRO WAIT_COUNT WAIT_TIME_FORMAT
---------------------------------------- --------------- ---------- ------------------------------
cell single block physical read 1 0 1 microsecond
cell single block physical read 2 0 2 microseconds
cell single block physical read 4 0 4 microseconds
cell single block physical read 8 0 8 microseconds
cell single block physical read 16 0 16 microseconds
cell single block physical read 32 0 32 microseconds
cell single block physical read 64 0 64 microseconds
cell single block physical read 128 0 128 microseconds
cell single block physical read 256 518447 256 microseconds
cell single block physical read 512 5371496 512 microseconds
cell single block physical read 1024 1063689 1 millisecond
cell single block physical read 2048 284640 2 milliseconds
cell single block physical read 4096 226581 4 milliseconds
cell single block physical read 8192 16292 8 milliseconds
cell single block physical read 16384 3191 16 milliseconds
cell single block physical read 32768 474 32 milliseconds
cell single block physical read 65536 62 65 milliseconds
cell single block physical read 131072 2 131 milliseconds

Same conclusion here: the ‘less than 256 microseconds’ bucket occurs more frequently in bare metal than in virtualized.

For reference, those tests were done on similar configurations (except for virtualization): X5-2L High Capacity with 3 storage cells, version cell-12.1.2.3.1_LINUX.X64_160411-1.x86_64, flash cache in writeback. The whole system was started before the test. This test was the only thing running on the whole database machine.

 

Cet article Exadata X-5 Bare Metal vs. OVM load testing est apparu en premier sur Blog dbi services.

Oracle serializable is not serializable

Sat, 2016-07-30 17:17

Did you know that when you set the isolation level to SERIALIZABLE, it is not serializable but SNAPSHOT? This isolation level is lower than serializable. I had never thought about it until I read Markus Winand’s slides about transactions. I recommend every developer or DBA to read those slides. This post is here to illustrate write skew in Oracle.

Let’s show an example on the SCOTT.EMP table. Let’s say there’s an HR directive to increase the salary of one of the department 10 employees so that the total of the salaries for the department is 9000.
Now let’s imagine that two HR users received the directive at the same time.

User A checks the salaries:

23:18:33 SID=365> select ename,sal from EMP where deptno=10;
 
ENAME SAL
---------- ----------
CLARK 2450
KING 5000
MILLER 1300

The sum is 8750, so User A decides to increase MILLER’s salary by an additional 250.

However, to be sure that he is the only one to do that, he starts a transaction in SERIALIZABLE isolation level, checks the sum again, and does the update:

23:18:40 SID=365> set transaction isolation level serializable;
Transaction set.
 
23:18:41 SID=365> select sum(sal) from EMP where deptno=10;
 
SUM(SAL)
----------
8750
 
23:18:44 SID=365> update EMP set sal=sal+250 where ename='MILLER';
1 row updated.

Now, at the same time, User B is doing the same but chooses to increase CLARK’s salary:


23:18:30 SID=12> set transaction isolation level serializable;
Transaction set.
 
23:18:51 SID=12> select sum(sal) from EMP where deptno=10;
 
SUM(SAL)
----------
8750
 
23:18:53 SID=12> update EMP set sal=sal+250 where ename='CLARK';
1 row updated.

Note that there is no “ORA-08177: can’t serialize access for this transaction” there because the updates occur on different rows.

User A checks the sum again and then commits his transaction:


23:18:46 SID=365> select sum(sal) from EMP where deptno=10;
 
SUM(SAL)
----------
9000
 
23:19:04 SID=365> commit;
Commit complete.

And so does User B:


23:18:55 SID=12> select sum(sal) from EMP where deptno=10;
 
SUM(SAL)
----------
9000
 
23:19:08 SID=12> commit;
Commit complete.

However, once both transactions are committed, the result is different:


23:19:09 SID=12> select sum(sal) from EMP where deptno=10;
 
SUM(SAL)
----------
9250

Actually, what Oracle calls SERIALIZABLE here is only the SNAPSHOT isolation level. You see the data without the concurrent changes that have been committed after the beginning of your transaction. And you cannot modify a row that has been modified by another session. However, nothing prevents what you have read from being modified by another session. You don’t see those modifications, but they can be committed.

The definition of serializability requires that the result is the same as when the transactions occur one after the other. Here, if User A had committed before the start of User B’s transaction, the latter would have seen that the total was already at 9000.

In this example, if you want to prevent write skew you need to lock the table in SHARE mode. Locking the rows (with SELECT FOR UPDATE) is sufficient to prevent concurrent updates, but then another user could insert a new employee, which would bring the total salary higher. In addition to that, row locks are exclusive and you don’t want readers to block readers. Locking a range (DEPTNO=10) is not possible in Oracle. So the solution is to lock the table, as in the sketch below.
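Here is what that could look like for User A (just a sketch: if User B runs the same statements at the same time, the two sessions can no longer both commit silently; one update will block and one session will typically end up with an error, such as a deadlock, which prevents the write skew):

set transaction isolation level serializable;
-- SHARE mode does not block queries, but it blocks concurrent DML on the table
lock table EMP in share mode;
select sum(sal) from EMP where deptno=10;
update EMP set sal=sal+250 where ename='MILLER';
commit;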

It seems that only PostgreSQL (version >= 9.1) is able to guarantee true serializability without locking.

 

Cet article Oracle serializable is not serializable est apparu en premier sur Blog dbi services.

Pages