Yann Neuhaus

dbi services technical blog

12cR2 real-time materialized view (on query computation)

Fri, 2017-02-17 15:49

Materialized views are a very old feature (you may remember that they were called snapshots a long time ago). A materialized view has all the advantages of a view, as you can define any select statement that joins, filters, aggregates, and see it as one table. It has all the advantages of a table, as it is stored in one segment, can be indexed, partitioned, have constraints, be compressed, etc. It looks like an index because it stores data redundantly, in a different physical way, more focused on how it will be queried than on how data is entered. Like indexes, materialized views can be used transparently (with query rewrite), but unlike indexes they are not maintained synchronously and have to be refreshed. They also have some advantages of replication because the changes done on the source tables can be captured into materialized view logs, so that the refresh can be incremental (fast refresh).
Oracle Database 12.2 goes a step further, being able to deliver fresh results even when the materialized view is stale. This amazing feature, called real-time materialized view, does on-query computation of the fresh result from the stale one, joined with the materialized view log.

I create my DEMO table on the Oracle Exadata Express Cloud Service:

SQL> create table DEMO (id primary key,a,b) as select rownum,round(log(10,rownum)) a, rownum b from xmltable('1 to 100000');
Table created.

I plan to create a materialized view that aggregates the count and sum of B grouped by A, and DBMS_MVIEW can tell me what I need to be able to fast refresh it.

Explain Materialized View

The goal is to have a real-time materialized view with frequent refreshes, which means that fast refresh must be possible after any kind of modification.
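As a side note, DBMS_MVIEW.EXPLAIN_MVIEW writes its findings into a table named MV_CAPABILITIES_TABLE. If it does not exist yet in your schema, a minimal way to create it (assuming a standard installation where the script sits under $ORACLE_HOME/rdbms/admin) is to run the utlxmv.sql script:

-- creates MV_CAPABILITIES_TABLE in the current schema
SQL> @?/rdbms/admin/utlxmv.sql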


SQL> exec dbms_mview.explain_mview('select a,count(b),sum(b),count(*) from DEMO group by a');
PL/SQL procedure successfully completed.
 
SQL> select distinct capability_name||' '||msgtxt||' '||related_text from mv_capabilities_table where capability_name like 'REFRESH_FAST%' and possible='N';
 
CAPABILITY_NAME||''||MSGTXT||''||RELATED_TEXT
----------------------------------------------------------------------------------------------------------------------------------------------------------------
REFRESH_FAST_AFTER_ONETAB_DML COUNT(*) is not present in the select list
REFRESH_FAST
REFRESH_FAST_AFTER_INSERT the detail table does not have a materialized view log PDB_ADMIN.DEMO
REFRESH_FAST_AFTER_ANY_DML see the reason why REFRESH_FAST_AFTER_ONETAB_DML is disabled
REFRESH_FAST_PCT PCT is not possible on any of the detail tables in the materialized view
REFRESH_FAST_AFTER_ONETAB_DML see the reason why REFRESH_FAST_AFTER_INSERT is disabled

Here is what I have to do in order to have a materialized view that can be fast refreshed: add COUNT(*) to the select list, and create a materialized view log.

Materialized view log


SQL> create materialized view log on DEMO;
Materialized view log created.

Let’s check if it is ok now, with the additional count(*):

SQL> delete from mv_capabilities_table;
15 rows deleted.
 
SQL> exec dbms_mview.explain_mview('select a,count(b),sum(b),count(*) from DEMO group by a');
PL/SQL procedure successfully completed.
 
SQL> select distinct capability_name||' '||msgtxt||' '||related_text from mv_capabilities_table where capability_name like 'REFRESH_FAST%' and possible='N';
 
CAPABILITY_NAME||''||MSGTXT||''||RELATED_TEXT
----------------------------------------------------------------------------------------------------------------------------------------------------------------
REFRESH_FAST
REFRESH_FAST_AFTER_ANY_DML see the reason why REFRESH_FAST_AFTER_ONETAB_DML is disabled
REFRESH_FAST_AFTER_INSERT mv log must have ROWID PDB_ADMIN.DEMO
REFRESH_FAST_AFTER_INSERT mv log must have new values PDB_ADMIN.DEMO
REFRESH_FAST_AFTER_INSERT mv log does not have all necessary columns PDB_ADMIN.DEMO
REFRESH_FAST_PCT PCT is not possible on any of the detail tables in the materialized view
REFRESH_FAST_AFTER_ONETAB_DML see the reason why REFRESH_FAST_AFTER_INSERT is disabled

I must add ROWID, the used columns, and NEW VALUES:


SQL> drop materialized view log on DEMO;
Materialized view log dropped.
 
SQL> create materialized view log on DEMO with sequence, rowid (a,b) including new values;
Materialized view log created.

You can see that I’ve added the sequence, which was not mentioned by explain_mview. I’ll come back to that later, probably in another post.


SQL> delete from mv_capabilities_table;
16 rows deleted.
SQL> exec dbms_mview.explain_mview('select a,count(b),sum(b),count(*) from DEMO group by a');
PL/SQL procedure successfully completed.
SQL> select distinct capability_name||' '||msgtxt||' '||related_text from mv_capabilities_table where capability_name like 'REFRESH_FAST%' and possible='N';
 
CAPABILITY_NAME||''||MSGTXT||''||RELATED_TEXT
----------------------------------------------------------------------------------------------------------------------------------------------------------------
REFRESH_FAST_PCT PCT is not possible on any of the detail tables in the materialized view

Ok, now I’m ready to create the materialized view. The only remaining message is for partitioned tables.


SQL> create materialized view DEMO_MV refresh fast on demand as select a,count(b),sum(b),count(*) from DEMO group by a;
Materialized view created.
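Before querying it, it can be handy to check the materialized view attributes and staleness from the dictionary. Here is a small sketch of such a check (the ON_QUERY_COMPUTATION column should be present in the 12.2 dictionary views):

SQL> select mview_name, refresh_mode, refresh_method, staleness, rewrite_enabled, on_query_computation from user_mviews where mview_name='DEMO_MV';

Right after creation it is expected to show FRESH, with query rewrite and on-query computation not yet enabled.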

Aggregate query on the source table

I’m running a simple query that can get its result from the source table or from the materialized view


SQL> select sum(b) from DEMO where a=3;
 
SUM(B)
----------
4950617
 
SQL> select * from table(dbms_xplan.display_cursor(format=>'allstats last'));
 
PLAN_TABLE_OUTPUT
----------------------------------------------------------------------------------------------------------------------------------------------------------------
SQL_ID brdc1qcbc2npk, child number 0
-------------------------------------
select sum(b) from DEMO where a=3
 
Plan hash value: 2180342005
 
---------------------------------------------------------------------------------------------
| Id | Operation | Name | Starts | E-Rows | A-Rows | A-Time | Buffers |
---------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | | 1 |00:00:00.01 | 262 |
| 1 | SORT AGGREGATE | | 1 | 1 | 1 |00:00:00.01 | 262 |
|* 2 | TABLE ACCESS STORAGE FULL| DEMO | 1 | 16667 | 2846 |00:00:00.01 | 262 |
---------------------------------------------------------------------------------------------
 
Predicate Information (identified by operation id):
---------------------------------------------------
 
2 - storage("A"=3)
filter("A"=3)

The query has read the source table. I need to enable query rewrite to get the CBO to transparently transform it into a query on the materialized view.

Query Rewrite


SQL> alter materialized view DEMO_MV enable query rewrite;
Materialized view altered.

I also need the query_rewrite_integrity to be set. It is by default:

SQL> show parameter query_rewrite
 
NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
query_rewrite_enabled string TRUE
query_rewrite_integrity string enforced

Now, the rewrite can occur:

SQL> select sum(b) from DEMO where a=3;
 
SUM(B)
----------
4950617
 
SQL> select * from table(dbms_xplan.display_cursor(format=>'allstats last'));
 
PLAN_TABLE_OUTPUT
----------------------------------------------------------------------------------------------------------------------------------------------------------------
SQL_ID brdc1qcbc2npk, child number 0
-------------------------------------
select sum(b) from DEMO where a=3
 
Plan hash value: 2792196921
 
-----------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Starts | E-Rows | A-Rows | A-Time | Buffers |
-----------------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | | 1 |00:00:00.01 | 9 |
| 1 | SORT AGGREGATE | | 1 | 1 | 1 |00:00:00.01 | 9 |
|* 2 | MAT_VIEW REWRITE ACCESS STORAGE FULL| DEMO_MV | 1 | 1 | 1 |00:00:00.01 | 9 |
-----------------------------------------------------------------------------------------------------------
 
Predicate Information (identified by operation id):
---------------------------------------------------
 
2 - storage("DEMO_MV"."A"=3)
filter("DEMO_MV"."A"=3)

This query is optimized: 9 blocks read from the materialized view instead of 262 from the source table.

You can note that it’s not a new child cursor: the previous cursor has been invalidated when I altered the materialized view.

This rewrite can occur only because the materialized view has been refreshed and the source table has had no modifications since then.

Stale MVIEW

Let’s do some DML on the source table.


SQL> insert into DEMO values(0,0,0);
1 row created.

and query again


SQL> select sum(b) from DEMO where a=3;
 
SUM(B)
----------
4950617
 
SQL> select * from table(dbms_xplan.display_cursor(format=>'allstats last'));
 
PLAN_TABLE_OUTPUT
----------------------------------------------------------------------------------------------------------------------------------------------------------------
SQL_ID brdc1qcbc2npk, child number 1
-------------------------------------
select sum(b) from DEMO where a=3
 
Plan hash value: 2180342005
 
---------------------------------------------------------------------------------------------
| Id | Operation | Name | Starts | E-Rows | A-Rows | A-Time | Buffers |
---------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | | 1 |00:00:00.01 | 270 |
| 1 | SORT AGGREGATE | | 1 | 1 | 1 |00:00:00.01 | 270 |
|* 2 | TABLE ACCESS STORAGE FULL| DEMO | 1 | 16667 | 2846 |00:00:00.01 | 270 |
---------------------------------------------------------------------------------------------
 
Predicate Information (identified by operation id):
---------------------------------------------------
 
2 - storage("A"=3)
filter("A"=3)

Now, the materialized view is stale. We cannot get the same result from it, so the rewrite didn’t happen.

You can see that I have a new child cursor. The previous one cannot be shared because it was valid only for a non-stale materialized view.
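If you want to see those child cursors side by side, a quick look at V$SQL for this sql_id does the job (a small sketch; the plan hash values are the ones shown above):

SQL> select sql_id, child_number, plan_hash_value, executions from v$sql where sql_id='brdc1qcbc2npk';

You should see child 0 with the rewrite plan (2792196921) and child 1 with the full scan plan (2180342005).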

Stale tolerated

If I want to keep using the materialized view, I have the option to accept stale results:


SQL> alter session set query_rewrite_integrity=stale_tolerated;
Session altered.

Now, the rewrite can occur even when the source table has changed since the last refresh.


SQL> select sum(b) from DEMO where a=3;
 
SUM(B)
----------
4950617
 
SQL> select * from table(dbms_xplan.display_cursor(format=>'allstats last'));
 
PLAN_TABLE_OUTPUT
----------------------------------------------------------------------------------------------------------------------------------------------------------------
SQL_ID brdc1qcbc2npk, child number 2
-------------------------------------
select sum(b) from DEMO where a=3
 
Plan hash value: 2792196921
 
-----------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Starts | E-Rows | A-Rows | A-Time | Buffers |
-----------------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | | 1 |00:00:00.01 | 9 |
| 1 | SORT AGGREGATE | | 1 | 1 | 1 |00:00:00.01 | 9 |
|* 2 | MAT_VIEW REWRITE ACCESS STORAGE FULL| DEMO_MV | 1 | 1 | 1 |00:00:00.01 | 9 |
-----------------------------------------------------------------------------------------------------------
 
Predicate Information (identified by operation id):
---------------------------------------------------
 
2 - storage("DEMO_MV"."A"=3)
filter("DEMO_MV"."A"=3)

Of course, here you can’t see that the result is stale, because I inserted a row with value 0, which does not change the sum. Let’s count the rows instead, which is also aggregated in my materialized view. I have the option to disable the rewrite and query the source table:


SQL> select /*+ no_rewrite */ count(b) from DEMO;
 
COUNT(B)
----------
100001

This is the accurate result, but with a full access to the source table.

The rewrite can also be forced with a hint (because rewrite is a cost-based decision):


SQL> select /*+ rewrite */ count(b) from DEMO;
 
COUNT(B)
----------
100000

Stale result here: I don’t see the latest modifications.

Frequent refresh

In order to limit the gap between fresh data and stale result, you can refresh the materialized view frequently. It’s not too expensive thanks to the materialized view log: fast refresh is incremental.
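How you schedule those refreshes is up to you. As a minimal sketch, assuming a 10-minute interval is acceptable for your application, a DBMS_SCHEDULER job can run the fast refresh (the job name and the interval are arbitrary choices here):

SQL> exec dbms_scheduler.create_job(job_name=>'DEMO_MV_REFRESH', job_type=>'PLSQL_BLOCK', job_action=>'begin dbms_mview.refresh(''DEMO_MV'',''f''); end;', repeat_interval=>'FREQ=MINUTELY;INTERVAL=10', enabled=>true);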

Here I don’t want stale result:

SQL> alter session set query_rewrite_integrity=enforced;
Session altered.

and I refresh the materialized view


SQL> exec dbms_mview.refresh('DEMO_MV','f');
PL/SQL procedure successfully completed.

Then I can expect, until the next updates, to get results from the materialized view.


SQL> select sum(b) from DEMO where a=3;
 
SUM(B)
----------
4950617
 
SQL> select * from table(dbms_xplan.display_cursor(format=>'allstats last'));
 
PLAN_TABLE_OUTPUT
----------------------------------------------------------------------------------------------------------------------------------------------------------------
SQL_ID brdc1qcbc2npk, child number 1
-------------------------------------
select sum(b) from DEMO where a=3
 
Plan hash value: 2180342005
 
---------------------------------------------------------------------------------------------
| Id | Operation | Name | Starts | E-Rows | A-Rows | A-Time | Buffers |
---------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | | 1 |00:00:00.01 | 270 |
| 1 | SORT AGGREGATE | | 1 | 1 | 1 |00:00:00.01 | 270 |
|* 2 | TABLE ACCESS STORAGE FULL| DEMO | 1 | 16667 | 2846 |00:00:00.01 | 270 |
---------------------------------------------------------------------------------------------
 
Predicate Information (identified by operation id):
---------------------------------------------------
 
2 - storage("A"=3)
filter("A"=3)

Unfortunately I re-used the same cursor here. When you refresh, the cursors are not invalidated.

I’m running another statement now to get it parsed again:

SQL> select sum(b) this_is_another_cursor from DEMO where a=3;
 
THIS_IS_ANOTHER_CURSOR
----------------------
4950617
 
SQL> select * from table(dbms_xplan.display_cursor(format=>'allstats last'));
 
PLAN_TABLE_OUTPUT
----------------------------------------------------------------------------------------------------------------------------------------------------------------
SQL_ID 27xfg0qjcf7ff, child number 0
-------------------------------------
select sum(b) this_is_another_cursor from DEMO where a=3
 
Plan hash value: 2792196921
 
-----------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Starts | E-Rows | A-Rows | A-Time | Buffers |
-----------------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | | 1 |00:00:00.01 | 9 |
| 1 | SORT AGGREGATE | | 1 | 1 | 1 |00:00:00.01 | 9 |
|* 2 | MAT_VIEW REWRITE ACCESS STORAGE FULL| DEMO_MV | 1 | 1 | 1 |00:00:00.01 | 9 |
-----------------------------------------------------------------------------------------------------------
 
Predicate Information (identified by operation id):
---------------------------------------------------
 
2 - storage("DEMO_MV"."A"=3)
filter("DEMO_MV"."A"=3)

So we now read the materialized view, but this will last only while there are no updates on the table. The idea, then, is to trigger a refresh as soon as there are modifications. Ideally it should be like indexes, which are maintained automatically. But indexes are much simpler: there is only a value-to-rowid mapping entry to maintain, and rowids do not change. Materialized views have joins and aggregates, and can contain any columns.

Refresh on commit

So the idea is to defer the maintenance of the materialized view to commit time. This is the latest point where we are required to do it, as we want other sessions to never see stale results. The materialized view logs are there to store the incremental changes, even if the transaction is very long. Of course, we need to be aware of the cost, because in general a commit is an immediate and simple operation.

Let’s define the materialized view to refresh on commit instead of on demand:


SQL> alter materialized view DEMO_MV refresh on commit;
Materialized view altered.

I do some modifications


SQL> delete from DEMO where id=0;
1 row deleted.

And I run my query


SQL> select sum(b) this_is_a_third_cursor from DEMO where a=3;
 
THIS_IS_A_THIRD_CURSOR
----------------------
4950617
 
SQL> select * from table(dbms_xplan.display_cursor(format=>'allstats last'));
 
PLAN_TABLE_OUTPUT
----------------------------------------------------------------------------------------------------------------------------------------------------------------
SQL_ID 5dfs068dgbwvd, child number 0
-------------------------------------
select sum(b) this_is_a_third_cursor from DEMO where a=3
 
Plan hash value: 2180342005
 
---------------------------------------------------------------------------------------------
| Id | Operation | Name | Starts | E-Rows | A-Rows | A-Time | Buffers |
---------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | | 1 |00:00:00.01 | 270 |
| 1 | SORT AGGREGATE | | 1 | 1 | 1 |00:00:00.01 | 270 |
|* 2 | TABLE ACCESS STORAGE FULL| DEMO | 1 | 16667 | 2846 |00:00:00.01 | 270 |
---------------------------------------------------------------------------------------------
 
Predicate Information (identified by operation id):
---------------------------------------------------
 
2 - storage("A"=3)
filter("A"=3)

Rewrite cannot happen here because the materialized view is stale: I haven’t committed yet. Of course, other sessions can still query from the materialized view because they must not see my modification.


SQL> commit;
Commit complete.

The commit has triggered the fast refresh of the materialized view


SQL> select sum(b) this_is_a_fourth_cursor from DEMO where a=3;
 
THIS_IS_A_FOURTH_CURSOR
-----------------------
4950617
 
SQL> select * from table(dbms_xplan.display_cursor(format=>'allstats last'));
 
PLAN_TABLE_OUTPUT
----------------------------------------------------------------------------------------------------------------------------------------------------------------
SQL_ID 0075r0yzqt90a, child number 0
-------------------------------------
select sum(b) this_is_a_fourth_cursor from DEMO where a=3
 
Plan hash value: 2792196921
 
-----------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Starts | E-Rows | A-Rows | A-Time | Buffers |
-----------------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | | 1 |00:00:00.01 | 9 |
| 1 | SORT AGGREGATE | | 1 | 1 | 1 |00:00:00.01 | 9 |
|* 2 | MAT_VIEW REWRITE ACCESS STORAGE FULL| DEMO_MV | 1 | 1 | 1 |00:00:00.01 | 9 |
-----------------------------------------------------------------------------------------------------------
 
Predicate Information (identified by operation id):
---------------------------------------------------
 
2 - storage("DEMO_MV"."A"=3)
filter("DEMO_MV"."A"=3)

With on commit refresh, the materialized view is never stale. The problem is that it can slow down the transactions: in addition to filling the materialized view logs, the commit has the overhead of applying them. In 12.1 this is the only way to have a query on the materialized view that is always fresh. But there’s something new in 12.2.

Real-time materialized views

Even when the materialized view is stale, we can get a fresh result without querying the source tables. We have the stale values in the materialized view and we have all the changes logged in the materialized view log. Easy or not, merging the two can be computed on the fly to get a fresh result. We still need fast refresh, but we don’t need refresh on commit anymore:


SQL> alter materialized view DEMO_MV refresh on demand;
Materialized view altered.

And in order to use this new feature, we have to enable it at materialized view level:


SQL> alter materialized view DEMO_MV enable on query computation;
Materialized view altered.
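Note that when starting from scratch, the same attributes can be declared directly at creation time. Here is a sketch with a hypothetical DEMO_MV2 (I am following my reading of the 12.2 CREATE MATERIALIZED VIEW syntax, so check the clause order in the documentation):

SQL> create materialized view DEMO_MV2 refresh fast on demand enable on query computation enable query rewrite as select a,count(b),sum(b),count(*) from DEMO group by a;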

Then let the magic happen:


SQL> select sum(b) from DEMO where a=3;
 
SUM(B)
----------
4950617
 
SQL> select * from table(dbms_xplan.display_cursor(format=>'allstats last'));
 
PLAN_TABLE_OUTPUT
----------------------------------------------------------------------------------------------------------------------------------------------------------------
SQL_ID brdc1qcbc2npk, child number 0
-------------------------------------
select sum(b) from DEMO where a=3
 
Plan hash value: 2792196921
 
-----------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Starts | E-Rows | A-Rows | A-Time | Buffers |
-----------------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | | 1 |00:00:00.01 | 9 |
| 1 | SORT AGGREGATE | | 1 | 1 | 1 |00:00:00.01 | 9 |
|* 2 | MAT_VIEW REWRITE ACCESS STORAGE FULL| DEMO_MV | 1 | 1 | 1 |00:00:00.01 | 9 |
-----------------------------------------------------------------------------------------------------------
 
Predicate Information (identified by operation id):
---------------------------------------------------
 
2 - storage("DEMO_MV"."A"=3)
filter("DEMO_MV"."A"=3)

Here my materialized view is not stale, so nothing special happened. Now let’s do some modification:

SQL> insert into DEMO values(0,0,0);
1 row created.

and…

SQL> select sum(b) try_again from DEMO where a=3;
TRY_AGAIN
----------
4950617
SQL> select * from table(dbms_xplan.display_cursor(format=>'allstats last'));
PLAN_TABLE_OUTPUT
----------------------------------------------------------------------------------------------------------------------------------------------------------------
SQL_ID dtmhccwr0v7r5, child number 0
-------------------------------------
select sum(b) try_again from DEMO where a=3
Plan hash value: 2180342005
---------------------------------------------------------------------------------------------
| Id | Operation | Name | Starts | E-Rows | A-Rows | A-Time | Buffers |
---------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | | 1 |00:00:00.01 | 270 |
| 1 | SORT AGGREGATE | | 1 | 1 | 1 |00:00:00.01 | 270 |
|* 2 | TABLE ACCESS STORAGE FULL| DEMO | 1 | 16667 | 2846 |00:00:00.01 | 270 |
---------------------------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
2 - storage("A"=3)
filter("A"=3)

Still no magic here. For the session that did the modifications, it seems that query rewrite cannot happen. All changes are in the materialized view log, but applying the uncommitted ones for my session seems to be impossible here. Well, let’s commit my changes.


SQL> commit;
Commit complete.

and see the magic:


SQL> select sum(b) try_again from DEMO where a=3;
SUM(B)
----------
4950617
SQL> select * from table(dbms_xplan.display_cursor(format=>'allstats last'));
PLAN_TABLE_OUTPUT
----------------------------------------------------------------------------------------------------------------------------------------------------------------
SQL_ID dtmhccwr0v7r5, child number 0
-------------------------------------
select sum(b) try_again from DEMO where a=3
Plan hash value: 2180342005
---------------------------------------------------------------------------------------------
| Id | Operation | Name | Starts | E-Rows | A-Rows | A-Time | Buffers |
---------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | | 1 |00:00:00.01 | 270 |
| 1 | SORT AGGREGATE | | 1 | 1 | 1 |00:00:00.01 | 270 |
|* 2 | TABLE ACCESS STORAGE FULL| DEMO | 1 | 16667 | 2846 |00:00:00.01 | 270 |
---------------------------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
2 - storage("A"=3)
filter("A"=3)

Oh… that’s my previous cursor. No invalidation occurs. I have to parse a different statement.


SQL> select sum(b) here_I_am from DEMO where a=3;
 
HERE_I_AM
----------
4950617
 
SQL> select * from table(dbms_xplan.display_cursor(format=>'allstats last'));
 
PLAN_TABLE_OUTPUT
----------------------------------------------------------------------------------------------------------------------------------------------------------------
SQL_ID 34fqrktpthuk7, child number 1
-------------------------------------
select sum(b) here_I_am from DEMO where a=3
 
Plan hash value: 1240257898
 
-----------------------------------------------------------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Starts | E-Rows | A-Rows | A-Time | Buffers | OMem | 1Mem | Used-Mem |
-----------------------------------------------------------------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | | 1 |00:00:00.01 | 25 | | | |
| 1 | SORT AGGREGATE | | 1 | 1 | 1 |00:00:00.01 | 25 | | | |
| 2 | VIEW | | 1 | 705 | 1 |00:00:00.01 | 25 | | | |
| 3 | UNION-ALL | | 1 | | 1 |00:00:00.01 | 25 | | | |
|* 4 | FILTER | | 1 | | 1 |00:00:00.01 | 16 | | | |
|* 5 | HASH JOIN OUTER | | 1 | 100 | 1 |00:00:00.01 | 16 | 3843K| 3843K| 1699K (0)|
|* 6 | MAT_VIEW ACCESS STORAGE FULL | DEMO_MV | 1 | 1 | 1 |00:00:00.01 | 9 | 1025K| 1025K| |
| 7 | VIEW | | 1 | 100 | 1 |00:00:00.01 | 7 | | | |
| 8 | HASH GROUP BY | | 1 | | 1 |00:00:00.01 | 7 | 1956K| 1956K| 2324K (0)|
| 9 | VIEW | | 1 | 1 | 1 |00:00:00.01 | 7 | | | |
| 10 | RESULT CACHE | 6jf9k1y2wt8xc5b00gv9px6ww0 | 1 | | 1 |00:00:00.01 | 7 | | | |
|* 11 | VIEW | | 1 | 1 | 1 |00:00:00.01 | 7 | | | |
| 12 | WINDOW SORT | | 1 | 1 | 1 |00:00:00.01 | 7 | 2048 | 2048 | 2048 (0)|
|* 13 | TABLE ACCESS STORAGE FULL | MLOG$_DEMO | 1 | 1 | 1 |00:00:00.01 | 7 | 1025K| 1025K| |
| 14 | VIEW | | 1 | 605 | 0 |00:00:00.01 | 9 | | | |
| 15 | UNION-ALL | | 1 | | 0 |00:00:00.01 | 9 | | | |
|* 16 | FILTER | | 1 | | 0 |00:00:00.01 | 0 | | | |
| 17 | NESTED LOOPS OUTER | | 1 | 600 | 0 |00:00:00.01 | 0 | | | |
| 18 | VIEW | | 1 | 100 | 0 |00:00:00.01 | 0 | | | |
|* 19 | FILTER | | 1 | | 0 |00:00:00.01 | 0 | | | |
| 20 | HASH GROUP BY | | 1 | | 0 |00:00:00.01 | 0 | 2982K| 2982K| |
|* 21 | VIEW | | 1 | 1 | 0 |00:00:00.01 | 0 | | | |
| 22 | RESULT CACHE | 6jf9k1y2wt8xc5b00gv9px6ww0 | 1 | | 1 |00:00:00.01 | 0 | | | |
|* 23 | VIEW | | 0 | 1 | 0 |00:00:00.01 | 0 | | | |
| 24 | WINDOW SORT | | 0 | 1 | 0 |00:00:00.01 | 0 | 73728 | 73728 | |
|* 25 | TABLE ACCESS STORAGE FULL| MLOG$_DEMO | 0 | 1 | 0 |00:00:00.01 | 0 | 1025K| 1025K| |
|* 26 | INDEX UNIQUE SCAN | I_SNAP$_DEMO_MV | 0 | 6 | 0 |00:00:00.01 | 0 | 1025K| 1025K| |
|* 27 | HASH JOIN | | 1 | 5 | 0 |00:00:00.01 | 9 | 3043K| 3043K| 1702K (0)|
|* 28 | MAT_VIEW ACCESS STORAGE FULL | DEMO_MV | 1 | 1 | 1 |00:00:00.01 | 9 | 1025K| 1025K| |
| 29 | VIEW | | 1 | 100 | 1 |00:00:00.01 | 0 | | | |
| 30 | HASH GROUP BY | | 1 | | 1 |00:00:00.01 | 0 | 1956K| 1956K| 2319K (0)|
| 31 | VIEW | | 1 | 1 | 1 |00:00:00.01 | 0 | | | |
| 32 | RESULT CACHE | 6jf9k1y2wt8xc5b00gv9px6ww0 | 1 | | 1 |00:00:00.01 | 0 | | | |
|* 33 | VIEW | | 0 | 1 | 0 |00:00:00.01 | 0 | | | |
| 34 | WINDOW SORT | | 0 | 1 | 0 |00:00:00.01 | 0 | 73728 | 73728 | |
|* 35 | TABLE ACCESS STORAGE FULL | MLOG$_DEMO | 0 | 1 | 0 |00:00:00.01 | 0 | 1025K| 1025K| |
-----------------------------------------------------------------------------------------------------------------------------------------------------------

We got it. All the magic. The materialized view is read. The materialized view log is read. But we don’t need the source tables. All this is merged by outer join and union all. The plan is harder to read, but it requires only 25 logical reads to get fresh results instead of 270 from the source table. The bigger the tables are and the more complex the query is, the more benefit you get, as long as you don’t have too many changes since the last refresh. And this without any overhead on other transactions’ commits. That’s the beauty of 12cR2 Enterprise Edition. Can you imagine having to code this yourself? For any query? For any modification on the source tables?

FRESH_MV

This was query rewrite: you query the source table and the CBO transparently transforms the query to read the materialized view (given that its costing estimates it is cheaper). But you can also query the materialized view directly and ask for a fresh result by joining the materialized view log to the stale result. This can also be used in Standard Edition (only query rewrite is limited to Enterprise Edition). On-query computation when querying the materialized view is enabled by the FRESH_MV hint:


SQL> select /*+ fresh_mv */ * from DEMO_MV;
 
A COUNT(B) SUM(B) COUNT(*)
---------- ---------- ---------- ----------
5 68378 4500058747 68378
2 285 49590 285
3 2846 4950617 2846
1 28 490 28
4 28460 494990550 28460
0 4 6 4
 
6 rows selected.
 
SQL> select * from table(dbms_xplan.display_cursor(format=>'allstats last +alias'));
 
PLAN_TABLE_OUTPUT
----------------------------------------------------------------------------------------------------------------------------------------------------------------
SQL_ID gyar0v20qcksu, child number 0
-------------------------------------
select /*+ fresh_mv */ * from DEMO_MV
 
Plan hash value: 2169890143
 
----------------------------------------------------------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Starts | E-Rows | A-Rows | A-Time | Buffers | OMem | 1Mem | Used-Mem |
----------------------------------------------------------------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | | 6 |00:00:00.01 | 12 | | | |
| 1 | VIEW | | 1 | 730 | 6 |00:00:00.01 | 12 | | | |
| 2 | UNION-ALL | | 1 | | 6 |00:00:00.01 | 12 | | | |
|* 3 | VIEW | VW_FOJ_0 | 1 | 100 | 5 |00:00:00.01 | 9 | | | |
|* 4 | HASH JOIN FULL OUTER | | 1 | 100 | 6 |00:00:00.01 | 9 | 2897K| 2897K| 3217K (0)|
| 5 | VIEW | | 1 | 6 | 6 |00:00:00.01 | 9 | | | |
| 6 | MAT_VIEW ACCESS STORAGE FULL | DEMO_MV | 1 | 6 | 6 |00:00:00.01 | 9 | 1025K| 1025K| |
| 7 | VIEW | | 1 | 100 | 1 |00:00:00.01 | 0 | | | |
| 8 | HASH GROUP BY | | 1 | | 1 |00:00:00.01 | 0 | 1956K| 1956K| 2268K (0)|
| 9 | VIEW | | 1 | 1 | 1 |00:00:00.01 | 0 | | | |
| 10 | RESULT CACHE | 6jf9k1y2wt8xc5b00gv9px6ww0 | 1 | | 1 |00:00:00.01 | 0 | | | |
|* 11 | VIEW | | 0 | 1 | 0 |00:00:00.01 | 0 | | | |
| 12 | WINDOW SORT | | 0 | 1 | 0 |00:00:00.01 | 0 | 73728 | 73728 | |
|* 13 | TABLE ACCESS STORAGE FULL | MLOG$_DEMO | 0 | 1 | 0 |00:00:00.01 | 0 | 1025K| 1025K| |
| 14 | VIEW | | 1 | 630 | 1 |00:00:00.01 | 3 | | | |
| 15 | UNION-ALL | | 1 | | 1 |00:00:00.01 | 3 | | | |
|* 16 | FILTER | | 1 | | 0 |00:00:00.01 | 1 | | | |
| 17 | NESTED LOOPS OUTER | | 1 | 600 | 1 |00:00:00.01 | 1 | | | |
| 18 | VIEW | | 1 | 100 | 1 |00:00:00.01 | 0 | | | |
|* 19 | FILTER | | 1 | | 1 |00:00:00.01 | 0 | | | |
| 20 | HASH GROUP BY | | 1 | | 1 |00:00:00.01 | 0 | 1956K| 1956K| 2304K (0)|
| 21 | VIEW | | 1 | 1 | 1 |00:00:00.01 | 0 | | | |
| 22 | RESULT CACHE | 6jf9k1y2wt8xc5b00gv9px6ww0 | 1 | | 1 |00:00:00.01 | 0 | | | |
|* 23 | VIEW | | 0 | 1 | 0 |00:00:00.01 | 0 | | | |
| 24 | WINDOW SORT | | 0 | 1 | 0 |00:00:00.01 | 0 | 73728 | 73728 | |
|* 25 | TABLE ACCESS STORAGE FULL| MLOG$_DEMO | 0 | 1 | 0 |00:00:00.01 | 0 | 1025K| 1025K| |
|* 26 | INDEX UNIQUE SCAN | I_SNAP$_DEMO_MV | 1 | 6 | 1 |00:00:00.01 | 1 | 1025K| 1025K| |
| 27 | MERGE JOIN | | 1 | 30 | 1 |00:00:00.01 | 2 | | | |
| 28 | MAT_VIEW ACCESS BY INDEX ROWID | DEMO_MV | 1 | 6 | 6 |00:00:00.01 | 2 | | | |
| 29 | INDEX FULL SCAN | I_SNAP$_DEMO_MV | 1 | 6 | 6 |00:00:00.01 | 1 | 1025K| 1025K| |
|* 30 | FILTER | | 6 | | 1 |00:00:00.01 | 0 | | | |
|* 31 | SORT JOIN | | 6 | 100 | 1 |00:00:00.01 | 0 | 2048 | 2048 | 2048 (0)|
| 32 | VIEW | | 1 | 100 | 1 |00:00:00.01 | 0 | | | |
| 33 | SORT GROUP BY | | 1 | | 1 |00:00:00.01 | 0 | 2048 | 2048 | 2048 (0)|
| 34 | VIEW | | 1 | 1 | 1 |00:00:00.01 | 0 | | | |
| 35 | RESULT CACHE | 6jf9k1y2wt8xc5b00gv9px6ww0 | 1 | | 1 |00:00:00.01 | 0 | | | |
|* 36 | VIEW | | 0 | 1 | 0 |00:00:00.01 | 0 | | | |
| 37 | WINDOW SORT | | 0 | 1 | 0 |00:00:00.01 | 0 | 73728 | 73728 | |
|* 38 | TABLE ACCESS STORAGE FULL| MLOG$_DEMO | 0 | 1 | 0 |00:00:00.01 | 0 | 1025K| 1025K| |
----------------------------------------------------------------------------------------------------------------------------------------------------------

Have you seen that we need even fewer logical reads (12) than before (25)? There is an optimization here with RESULT CACHE. You get it when you have the sequence in the materialized view log, and you can see that the sequence is used in the predicates:


Predicate Information (identified by operation id):
---------------------------------------------------
 
3 - filter("AV$0"."OJ_MARK" IS NULL)
4 - access(SYS_OP_MAP_NONNULL("SNA$0"."A")=SYS_OP_MAP_NONNULL("AV$0"."GB0"))
11 - filter((("MAS$"."OLD_NEW$$"='N' AND "MAS$"."SEQ$$"="MAS$"."MAXSEQ$$") OR (INTERNAL_FUNCTION("MAS$"."OLD_NEW$$") AND
"MAS$"."SEQ$$"="MAS$"."MINSEQ$$")))
13 - storage("MAS$"."SNAPTIME$$">TO_DATE(' 2017-02-16 20:31:08', 'syyyy-mm-dd hh24:mi:ss'))
filter("MAS$"."SNAPTIME$$">TO_DATE(' 2017-02-16 20:31:08', 'syyyy-mm-dd hh24:mi:ss'))
16 - filter(CASE WHEN ROWID IS NOT NULL THEN 1 ELSE NULL END IS NULL)
19 - filter(SUM(1)>0)
23 - filter((("MAS$"."OLD_NEW$$"='N' AND "MAS$"."SEQ$$"="MAS$"."MAXSEQ$$") OR (INTERNAL_FUNCTION("MAS$"."OLD_NEW$$") AND
"MAS$"."SEQ$$"="MAS$"."MINSEQ$$")))
25 - storage("MAS$"."SNAPTIME$$">TO_DATE(' 2017-02-16 20:31:08', 'syyyy-mm-dd hh24:mi:ss'))
filter("MAS$"."SNAPTIME$$">TO_DATE(' 2017-02-16 20:31:08', 'syyyy-mm-dd hh24:mi:ss'))
26 - access("DEMO_MV"."SYS_NC00005$"=SYS_OP_MAP_NONNULL("AV$0"."GB0"))
30 - filter("DEMO_MV"."COUNT(*)"+"AV$0"."D0">0)
31 - access("DEMO_MV"."SYS_NC00005$"=SYS_OP_MAP_NONNULL("AV$0"."GB0"))
filter("DEMO_MV"."SYS_NC00005$"=SYS_OP_MAP_NONNULL("AV$0"."GB0"))
36 - filter((("MAS$"."OLD_NEW$$"='N' AND "MAS$"."SEQ$$"="MAS$"."MAXSEQ$$") OR (INTERNAL_FUNCTION("MAS$"."OLD_NEW$$") AND
"MAS$"."SEQ$$"="MAS$"."MINSEQ$$")))
38 - storage("MAS$"."SNAPTIME$$">TO_DATE(' 2017-02-16 20:31:08', 'syyyy-mm-dd hh24:mi:ss'))
filter("MAS$"."SNAPTIME$$">TO_DATE(' 2017-02-16 20:31:08', 'syyyy-mm-dd hh24:mi:ss'))

Of course, you also see a predicate with the staleness timestamp (here 2017-02-16 20:31:08) of the materialized view.
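If you want to relate that timestamp to the materialized view metadata, the last refresh time is exposed in the dictionary, and the pending changes are simply the rows of the log. A quick sketch:

SQL> select mview_name, last_refresh_date, staleness from user_mviews where mview_name='DEMO_MV';
SQL> select count(*) pending_changes from MLOG$_DEMO;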

This result cache is interesting because the materialized view log is read several times in the execution plan and this is a way to actually read it only once.

SQL> select type,column_count,row_count,cache_id,name from v$result_cache_objects;
 
TYPE COLUMN_COUNT ROW_COUNT CACHE_ID NAME
---------- ------------ ---------- ------------------------------ ------------------------------
Dependency 0 0 PDB_ADMIN.MLOG$_DEMO PDB_ADMIN.MLOG$_DEMO
Result 7 1 6jf9k1y2wt8xc5b00gv9px6ww0 DMLTYPES:MLOG$_DEMO

The result cache has a dependency on the materialized view log, to be aware of additional changes, and when tracing the transformed query we can see a session lifetime for this result cache: /*+ RESULT_CACHE(LIFETIME=SESSION, NAME="DMLTYPES:MLOG$_DEMO") */. Note that I included the sequence in the materialized view log, but this is not required. I’ll show in a future post that the execution plan is different then, and does not use the result cache.

So what?

This is an amazing feature. You can optimize your queries transparently by creating materialized views, get fresh results, and minimize the refresh overhead. Depending on the size of the tables and the rate of modifications, you can choose the right refresh frequency with the goal of limiting the materialized view log entries to apply on each query. You get real-time results and bulk refresh at the same time. Oracle Database has always been a database for mixed workloads, where readers don’t block writers. And once again, we have a feature to optimize queries by pre-calculating them, with minimal impact on the source.

It is transparent, but after this first test, a few questions arise that I’ll try to answer in future posts: Is it always better to have the sequence in the materialized view log? Is the default result cache size still sufficient? How can it use a timestamp with only one-second precision and not an SCN? What happens at the winter Daylight Saving Time clock change? Can we get query rewrite when our own transaction has made the modifications? Do we need to invalidate cursors that read the source table? How accurate are the cardinality estimations on the very volatile materialized view log? When the full materialized view log is read, can it trigger a complete refresh?

 


Oracle 12c – Combining Flashback Drop and Flashback Query

Fri, 2017-02-17 09:09

If you think that the Flashback Drop feature just brings back your table, then that is only half of the story. It does much more than that. Besides undropping the table, it also brings back your constraints, your indexes, your triggers, your grants and the statistics as well.

The ugly part is that Flashback Drop brings back some strange object names: your indexes and constraints come back with names like “BIN$…” or alike. Maybe something you don’t want. So why not combine Flashback Drop with a Flashback Query on the dictionary to get the old constraint and index names back.

Let’s set up a few objects in the SCOTT schema. But before we do that, we need to grant the user SCOTT some extra privileges.

SQL> grant execute on dbms_flashback to scott;

Grant succeeded.

SQL> grant flashback on user_indexes to scott;

Grant succeeded.

SQL> grant flashback on user_constraints to scott;

Grant succeeded.

SQL> grant flashback on user_triggers to scott;

Grant succeeded.

Now we can set up our objects for this test. I will create 2 tables, a few grants, a trigger and statistics. The goal is to have, after the flashback to before drop, exactly the same object names as before for the table, the index, the constraints and the trigger.

SQL> connect scott/tiger
Connected.

SQL> create table dbi_t
  2  ( x int, constraint t_pk primary key(x),
  3   y int, constraint check_x check(x>0)
  4  );

Table created.

SQL> insert into dbi_t values (1,1);

1 row created.

SQL> insert into dbi_t values (2,2);

1 row created.

SQL> insert into dbi_t values (3,3);

1 row created.

SQL> COMMIT;

Commit complete.

SQL> create table dbi_audit
  2  (x int, x_before int, y int, y_before int, z varchar2(10));

Table created.


SQL> CREATE OR REPLACE TRIGGER dbi_after_update
  2  AFTER INSERT OR UPDATE
  3     ON DBI_T
  4     FOR EACH ROW
  5  DECLARE
  6     v_z varchar2(10);
  7  BEGIN
  8     SELECT user INTO v_z FROM dual;
  9     -- Insert record into audit table
 10     INSERT INTO dbi_audit
 11     ( x,
 12       x_before,
 13       y,
 14       y_before,
 15       z)
 16     VALUES
 17     ( :new.x,
 18       :old.x,
 19       :new.y,
 20       :old.y,
 21       v_z );
 22* END;
 /

Trigger created.


SQL> insert into dbi_t values (4,4);

1 row created.

SQL> commit;

Commit complete.

SQL> insert into dbi_t values (5,5);

1 row created.

SQL> commit;

Commit complete.

SQL> update dbi_t set x=6 where y=5;

1 row updated.

SQL> commit;

Commit complete.


SQL> select * from dbi_t;

         X          Y
---------- ----------
         1          1
         2          2
         3          3
         4          4
         6          5

SQL> select * from dbi_audit;

         X   X_BEFORE          Y   Y_BEFORE Z
---------- ---------- ---------- ---------- ----------
         4                     4            SCOTT
         5                     5            SCOTT
         6          5          5          5 SCOTT

		 


SQL> begin
  2  DBMS_STATS.GATHER_TABLE_STATS (
  3  ownname => '"SCOTT"',
  4  tabname => '"DBI_T"',
  5  estimate_percent => 100
  6  );
  7  end;
  8  /

PL/SQL procedure successfully completed.

SQL> begin
  2  DBMS_STATS.GATHER_TABLE_STATS (
  3  ownname => '"SCOTT"',
  4  tabname => '"DBI_AUDIT"',
  5  estimate_percent => 100
  6  );
  7  end;
  8  /

PL/SQL procedure successfully completed.


SQL> grant select on dbi_t to hr;

Grant succeeded.

SQL> grant select on dbi_audit to hr;

Grant succeeded.

Ok. So let’s take a look at the current situation right now.

SQL> select TABLE_NAME, LAST_ANALYZED
  2  from user_tables
  3  where TABLE_NAME in ('DBI_T','DBI_AUDIT');

TABLE_NAME   LAST_ANALYZED
------------ --------------------
DBI_AUDIT    17-FEB-17
DBI_T        17-FEB-17

SQL> select CONSTRAINT_NAME, CONSTRAINT_TYPE from user_constraints where table_name = 'DBI_T';

CONSTRAINT_NAME                      C
------------------------------------ -
CHECK_X                              C
T_PK                                 P

SQL> select index_name from user_indexes where table_name = 'DBI_T';

INDEX_NAME
------------------------------------
T_PK

SQL> select GRANTEE, OWNER, TABLE_NAME, GRANTOR, PRIVILEGE from user_tab_privs
  2  where table_name in ('DBI_T','DBI_AUDIT');

GRANTEE        OWNER          TABLE_NAME           GRANTOR        PRIVILEGE
-------------- -------------- -------------------- -------------- --------------------
HR             SCOTT          DBI_AUDIT            SCOTT          SELECT
HR             SCOTT          DBI_T                SCOTT          SELECT

SQL> select TRIGGER_NAME, TABLE_NAME, STATUS from user_triggers;

TRIGGER_NAME             TABLE_NA STATUS
------------------------ -------- --------
DBI_AFTER_UPDATE         DBI_T    ENABLED

Everything looks good: up-to-date statistics, the trigger is enabled, and no objects with “BIN$xx” names or alike. The next step is quite an important one for this demo. I am just saving the SCN number before the “drop table” into a variable. In the real world, you need to find the SCN number yourself, e.g. with the TIMESTAMP_TO_SCN function.
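As a quick sketch of that alternative, TIMESTAMP_TO_SCN converts an approximate point in time into an SCN (the timestamp below is just an illustration):

SQL> select timestamp_to_scn(to_timestamp('17-FEB-2017 08:00:00','DD-MON-YYYY HH24:MI:SS')) SCN from dual;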

SQL> column SCN new_val S
SQL> select dbms_flashback.get_system_change_number SCN from dual;

       SCN
----------
   1056212

Once we have the SCN, we can drop the table and undrop it afterwards.

SQL> drop table dbi_t;

Table dropped.

SQL> flashback table dbi_t to before drop;

Flashback complete.

Let’s take a look at how our constraint and index names look right now. Exactly as expected: they have these ugly “BIN$xxx” names, but we want the old names back.

SQL> select CONSTRAINT_NAME, CONSTRAINT_TYPE from user_constraints where table_name = 'DBI_T';

CONSTRAINT_NAME                      C
------------------------------------ -
BIN$SLt7vMNFZNbgU8k4qMDm6g==$0       C
BIN$SLt7vMNGZNbgU8k4qMDm6g==$0       P

SQL> select index_name from user_indexes where table_name = 'DBI_T';

INDEX_NAME
------------------------------------
BIN$SLt7vMNHZNbgU8k4qMDm6g==$0

The trick is now to invoke a Flashback Query on the dictionary. Flashback query on the dictionary is not 100% supported, but it works. I just save the current index name into the variable “I” and the old name into variable “OI”.

SQL> column index_name new_val I
SQL> select index_name from user_indexes where table_name = 'DBI_T';

INDEX_NAME
------------------------------------
BIN$SLt7vMNHZNbgU8k4qMDm6g==$0

SQL> column index_name new_val OI
SQL> select index_name from user_indexes as of scn &S
  2  where table_name = 'DBI_T';
old   1: select index_name from user_indexes as of scn &S
new   1: select index_name from user_indexes as of scn    1056212

INDEX_NAME
------------------------------------
T_PK

After I have the current and the old name in place, I can do an alter index and get my old name back.

SQL> alter index "&I" rename to "&OI";
old   1: alter index "&I" rename to "&OI"
new   1: alter index "BIN$SLt7vMNHZNbgU8k4qMDm6g==$0" rename to "T_PK"

Index altered.

SQL> select index_name from user_indexes where table_name = 'DBI_T';

INDEX_NAME
------------------------------------
T_PK

 

I will now do exactly the same for the constraints and the trigger.

SQL> column constraint_name new_val CC
SQL> select constraint_name from user_constraints where table_name = 'DBI_T' and CONSTRAINT_TYPE = 'C';

CONSTRAINT_NAME
------------------------------------
BIN$SLt7vMNFZNbgU8k4qMDm6g==$0

SQL> column constraint_name new_val OCC
SQL> select constraint_name from user_constraints as of scn &S where table_name = 'DBI_T' and CONSTRAINT_TYPE = 'C';
old   1: select constraint_name from user_constraints as of scn &S where table_name = 'DBI_T' and CONSTRAINT_TYPE = 'C'
new   1: select constraint_name from user_constraints as of scn    1056212 where table_name = 'DBI_T' and CONSTRAINT_TYPE = 'C'

CONSTRAINT_NAME
------------------------------------
CHECK_X


SQL> alter table DBI_T RENAME CONSTRAINT "&CC" TO "&OCC";
old   1: alter table DBI_T RENAME CONSTRAINT "&CC" TO "&OCC"
new   1: alter table DBI_T RENAME CONSTRAINT "BIN$SLt7vMNFZNbgU8k4qMDm6g==$0" TO "CHECK_X"

Table altered.

SQL> column constraint_name new_val PC
SQL> select constraint_name from user_constraints where table_name = 'DBI_T' and CONSTRAINT_TYPE = 'P';

CONSTRAINT_NAME
------------------------------------
BIN$SLt7vMNGZNbgU8k4qMDm6g==$0

SQL> column constraint_name new_val OPC
SQL> select constraint_name from user_constraints as of scn &S where table_name = 'DBI_T' and CONSTRAINT_TYPE = 'P';
old   1: select constraint_name from user_constraints as of scn &S where table_name = 'DBI_T' and CONSTRAINT_TYPE = 'P'
new   1: select constraint_name from user_constraints as of scn    1056212 where table_name = 'DBI_T' and CONSTRAINT_TYPE = 'P'

CONSTRAINT_NAME
------------------------------------
T_PK


SQL> alter table DBI_T RENAME CONSTRAINT "&PC" TO "&OPC";
old   1: alter table DBI_T RENAME CONSTRAINT "&PC" TO "&OPC"
new   1: alter table DBI_T RENAME CONSTRAINT "BIN$SLt7vMNGZNbgU8k4qMDm6g==$0" TO "T_PK"

Table altered.

SQL> col CONSTRAINT_NAME format a36
SQL> select CONSTRAINT_NAME, CONSTRAINT_TYPE from user_constraints where table_name = 'DBI_T';

CONSTRAINT_NAME                      C
------------------------------------ -
CHECK_X                              C
T_PK                                 P

SQL> col INDEX_NAME format a36
SQL> select index_name from user_indexes where table_name = 'DBI_T';

INDEX_NAME
------------------------------------
T_PK


SQL> select TRIGGER_NAME, TABLE_NAME,STATUS from user_triggers;

TRIGGER_NAME                     TABLE_NAME                       STATUS
-------------------------------- -------------------------------- --------
BIN$SLt7vMNIZNbgU8k4qMDm6g==$0   DBI_T                            ENABLED

SQL> column trigger_name new_val T
SQL> select trigger_name from user_triggers where table_name = 'DBI_T';

TRIGGER_NAME
--------------------------------
BIN$SLt7vMNIZNbgU8k4qMDm6g==$0

SQL> column trigger_name new_val OT
SQL> select trigger_name from user_triggers as of scn &S where table_name = 'DBI_T';
old   1: select trigger_name from user_triggers as of scn &S where table_name = 'DBI_T'
new   1: select trigger_name from user_triggers as of scn    1056212 where table_name = 'DBI_T'

TRIGGER_NAME
--------------------------------
DBI_AFTER_UPDATE

SQL> alter trigger "&T" RENAME TO "&OT";
old   1: alter trigger "&T" RENAME TO "&OT"
new   1: alter trigger "BIN$SLt7vMNIZNbgU8k4qMDm6g==$0" RENAME TO "DBI_AFTER_UPDATE"

Trigger altered.


SQL> select TRIGGER_NAME, TABLE_NAME, STATUS from user_triggers;

TRIGGER_NAME             TABLE_NAME             STATUS
------------------------ ---------------------- --------
DBI_AFTER_UPDATE         DBI_T                  ENABLED

The stats and the grants do come back automatically.

SQL> select TABLE_NAME, LAST_ANALYZED
  2  from user_tables
  3  where TABLE_NAME in ('DBI_T','DBI_AUDIT');

TABLE_NAME   LAST_ANALYZED
------------ --------------------
DBI_AUDIT    17-FEB-17
DBI_T        17-FEB-17


SQL> select GRANTEE, OWNER, TABLE_NAME, GRANTOR, PRIVILEGE from user_tab_privs
  2  where table_name in ('DBI_T','DBI_AUDIT');

GRANTEE        OWNER          TABLE_NAME           GRANTOR        PRIVILEGE
-------------- -------------- -------------------- -------------- --------------------
HR             SCOTT          DBI_AUDIT            SCOTT          SELECT
HR             SCOTT          DBI_T                SCOTT          SELECT

 

Conclusion

The Flashback Drop feature does not just bring back your table. It does much more: it brings back your grants, the trigger, the statistics, the indexes and the constraints as well. If you are lucky, you can even combine it with Flashback Query to retrieve the old names of your indexes, constraints and triggers.

 


OEL 7 – How to disable IPv6 on Oracle Linux 7 – Follow Up

Fri, 2017-02-17 02:56

This is a follow-up to the blog where I explained how to disable IPv6 on Oracle Linux 7.

If you have done all the steps that I explained here http://blog.dbi-services.com/oel-7-how-to-disable-ipv6-on-oracle-linux-7/ then you have already successfully disabled IPv6. However, some tools require special attention afterwards if you want to avoid ugly warning or error messages. There are many tools that can use IPv4 and IPv6, so it is impossible to mention all of them. I will just dig a little deeper into the following four:

  • Postfix
  • Oracle
  • NFS
  • rsyslogd
Postfix

Let’s start with Postfix. This might be one of the first warnings you see after you have disabled IPv6 on your system. If you receive the following warning message when you try to send an email, then you need to adjust your /etc/postfix/main.cf file.

$ mailx -s "Test" xxx.xxx@xxx.com
Test
.
EOT
$ send-mail: warning: inet_protocols: IPv6 support is disabled: Address family not supported by protocol
send-mail: warning: inet_protocols: configuring for IPv4 support only
postdrop: warning: inet_protocols: IPv6 support is disabled: Address family not supported by protocol
postdrop: warning: inet_protocols: configuring for IPv4 support only

The solution is to configure your /etc/postfix/main.cf file to allow only the ipv4 protocol.

[root@SVPCHODAC01 sbin]# /usr/sbin/postconf | grep inet_protocols
inet_protocols = all
/usr/sbin/postconf: warning: inet_protocols: IPv6 support is disabled: Address family not supported by protocol
/usr/sbin/postconf: warning: inet_protocols: configuring for IPv4 support only

[root@SVPCHODAC01 sbin]# cd /etc/postfix/
[root@SVPCHODAC01 postfix]# cp main.cf main.cf.20170203a
[root@SVPCHODAC01 postfix]# vi main.cf

Change "inet_protocols = all" to "inet_protocols = ipv4" and then restart Postfix.

[root@SVPCHODAC01 postfix]# /etc/init.d/postfix restart
Shutting down postfix: [ OK ]
Starting postfix: [ OK ]

[root@SVPCHODAC01 postfix]# /usr/sbin/postconf | grep inet_protocols
inet_protocols = ipv4

That’s it. Now the ugly Postfix warning messages disappear.

Oracle

The next candidate is the Oracle Listener. In some situations, you might see the following error message in your listener.log file when working with Cloud Control 12c.

TNS-01189: The listener could not authenticate the user

This is related to an Oracle bug; to be more precise, it is "BUG 16054202 – TNLIN EXTRACTS WRONG SUBNETMASK FOR IPV6 ADDRESSES". The bug can be worked around by configuring the Oracle Listener to work with IPv4 only. This is done via the listener.ora IP parameter, which knows the following options.

IP=FIRST

Listen on the first IP address returned by the DNS resolution of the host name.
If the user wants the listener to listen on the first IP to which the specified host name resolves,
then the address must be qualified with (IP=first).

IP=V4_ONLY

Listen only on IPv4 addresses.

IP=V6_ONLY

Listen only on IPv6 addresses.

Simply put (IP=V4_ONLY) after your PORT setting and then restart the listener, as shown in the following example.

-- listener.ora
LISTENER =
  (DESCRIPTION_LIST =
    (DESCRIPTION =
      (ADDRESS = (PROTOCOL = TCP)(HOST = dbidg03)(PORT = 1521)(IP=V4_ONLY))
    )
    (DESCRIPTION =
      (ADDRESS = (PROTOCOL = IPC)(KEY = EXTPROC1521))
    )
  )

-- restart

$ lsnrctl stop LISTENER; lsnrctl start LISTENER

Now the messages “TNS-01189: The listener could not authenticate the user” in the listener.log should disappear.

 

NFS

Under normal circumstances, no changes should be required for NFS unless you had proto=tcp6 configured for your mount options. If so, then your mount will not work anymore.

[root@dbidg02 etc]# mount /u99
mount.nfs: an incorrect mount option was specified

And you will see the following error in the /var/log/messages file.

Feb 14 10:26:48 dbidg02 kernel: NFS: server address does not match proto= option

Now you could either remove the proto option or change it to proto=tcp.

For NFS version 4 you have the following options:

proto=netid: The netid determines the transport that is used to communicate with the NFS server. Supported options are tcp, tcp6, and rdma. tcp6 uses IPv6 addresses and is only available if support for TI-RPC is built in. The other two use IPv4 addresses.

In my case, I have added the proto=tcp option to my NFS mount entry in /etc/fstab:

#-- NFS mounts
dbidg03:/u99   /u99  nfs  vers=4.1,proto=tcp,rw,bg,hard,nointr,rsize=32768,wsize=32768,tcp,timeo=600    

-- And now the mount works perfectly again.

[root@dbidg02 etc]# mount /u99
[root@dbidg02 etc]#
[root@dbidg02 etc]# mount | grep nfs
sunrpc on /var/lib/nfs/rpc_pipefs type rpc_pipefs (rw,relatime)
nfsd on /proc/fs/nfsd type nfsd (rw,relatime)
dbidg03:/u99 on /u99 type nfs4 (rw,relatime,vers=4.1,rsize=32768,wsize=32768,namlen=255,hard,proto=tcp,port=0,timeo=600,retrans=2,sec=sys,clientaddr=192.168.56.202,local_lock=none,addr=192.168.56.203)

Now the NFS mount works again.

rsyslogd

Almost the same applies to rsyslogd. In case you have not specified "-6" in your syslogd options, you are fine. If not, you need to either remove the option or replace it with "-4".

oracle@dbidg03:/etc/sysconfig/ [oms13c] rpm -qa  | grep rsyslog
rsyslog-7.4.7-16.0.1.el7.x86_64

-- from the doc
 -4  Causes rsyslogd to listen to IPv4 addresses only.  If neither -4 nor -6 is given, rsyslogd listens to all configured addresses of the system.
[root@dbidg03 sysconfig]# cat rsyslog
# Options for rsyslogd
# Syslogd options are deprecated since rsyslog v3.
# If you want to use them, switch to compatibility mode 2 by "-c 2"
# See rsyslogd(8) for more details
SYSLOGD_OPTIONS="-4"

[root@dbidg03 sysconfig]# systemctl restart rsyslog
[root@dbidg03 sysconfig]#
Conclusion

There might be some tools on your system that require special attention after you have disabled IPv6.

 

 


SQL Server Failover Cluster on Linux and synchronizing uids/gids across nodes

Thu, 2017-02-16 13:13

In my previous blog post, I talked about SQL Server on Linux and high availability. During my tests, I used an NFS server to share disk resources between my cluster nodes, as described in the Microsoft documentation. A couple of days ago, I decided to add a fourth node (LINUX04) to my cluster infrastructure and expected this to be an easy job. But no luck: I faced a problem I had never had before on this infrastructure.

(Figure: SQL Server FCI on Linux architecture)

Switching over to this last node led to a failed SQL Server FCI resource. After digging into the problem, I found the root cause in the SQL Server error log, as shown below:

[mikedavem@linux04 ~]$ sudo cat /var/opt/mssql/log/errorlog
2017-02-12 18:55:15.89 Server      Microsoft SQL Server vNext (CTP1.2) - 14.0.200.24 (X64)
        Jan 10 2017 19:15:28
        Copyright (C) 2016 Microsoft Corporation. All rights reserved.
        on Linux (CentOS Linux 7 (Core))
2017-02-12 18:55:15.89 Server      UTC adjustment: 0:00
2017-02-12 18:55:15.89 Server      (c) Microsoft Corporation.
2017-02-12 18:55:15.89 Server      All rights reserved.
2017-02-12 18:55:15.89 Server      Server process ID is 4116.
2017-02-12 18:55:15.89 Server      Logging SQL Server messages in file 'C:\var\opt\mssql\log\errorlog'.
2017-02-12 18:55:15.89 Server      Registry startup parameters:
         -d C:\var\opt\mssql\data\master.mdf
         -l C:\var\opt\mssql\data\mastlog.ldf
         -e C:\var\opt\mssql\log\errorlog
2017-02-12 18:55:15.91 Server      Error: 17113, Severity: 16, State: 1.
2017-02-12 18:55:15.91 Server      Error 2(The system cannot find the file specified.) occurred while opening file 'C:\var\opt\mssql\data\master.mdf' to obtain configuration information at startup. An invalid startup option might have caused the error. Verify your startup options, and correct or remove them if necessary.
2017-02-12 18:55:15.91 Server      SQL Server shutdown has been initiated

 

Well, the error speaks for itself and it seems I'm facing a file access permission issue. My first reflex was to check the permissions on the corresponding NFS folder.

[mikedavem@linux04 ~]$ sudo ls -lu /var/opt/mssql/data
total 53320
drwxr-----. 2 995 993     4096 Feb 14 23:12 lost+found
-rwxr-----. 1 995 993  4194304 Feb 14 23:19 master.mdf
-rwxr-----. 1 995 993  2097152 Feb 14 23:19 mastlog.ldf
-rwxr-----. 1 995 993  8388608 Feb 14 23:19 modellog.ldf
-rwxr-----. 1 995 993  8388608 Feb 14 23:19 model.mdf
-rwxr-----. 1 995 993 13959168 Feb 14 23:19 msdbdata.mdf
-rwxr-----. 1 995 993   786432 Feb 14 23:19 msdblog.ldf
drwxr-----. 2 995 993     4096 Feb 14 23:08 sqllinuxfci
-rwxr-----. 1 995 993  8388608 Feb 14 23:19 tempdb.mdf
-rwxr-----. 1 995 993  8388608 Feb 14 23:19 templog.ldf

 

According to the output above, this looks like a mismatch between the uids/gids of the mssql user across the cluster nodes. At this stage, I remembered performing some tests, including creating some Linux users before adding my fourth node, which led to a mismatch for the mssql user's uid/gid. Just keep in mind that the SQL Server installation creates a mssql user by default with the next available uid/gid, which in my case differed from the other nodes.

Let’s compare mssql user uid/gid from other existing nodes LINUX01 / LINUX02 and LINUX03:

[mikedavem@linux04 ~]$ id mssql
uid=997(mssql) gid=995(mssql) groups=995(mssql)
[mikedavem@linux04 ~]$ ssh linux01 id mssql
…
[root@linux04 ~]# ssh linux01 id mssql
uid=995(mssql) gid=993(mssql) groups=993(mssql)
…
[root@linux04 ~]# ssh linux02 id mssql
uid=995(mssql) gid=993(mssql) groups=993(mssql)
…
[root@linux04 ~]# ssh linux03 id mssql
uid=995(mssql) gid=993(mssql) groups=993(mssql)

 

Ok, this explains why I faced this permission issue. After investing some time to figure out how to get rid of it without changing the mssql user's uid/gid, I read some discussions about using NFSv4, which is intended to fix this uid/gid mapping issue. It seemed perfect for my case! But first, let's confirm which NFS version I'm currently using:

[mikedavem@linux04 ~]$ mount -v | grep nfs
nfsd on /proc/fs/nfsd type nfsd (rw,relatime)
192.168.5.14:/mnt/sql_data_nfs on /var/opt/mssql/data type nfs (rw,relatime,vers=3,rsize=65536,wsize=65536,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,mountaddr=192.168.5.14,mountvers=3,mountport=20048,mountproto=udp,local_lock=none,addr=192.168.5.14)

 

Well, my current configuration is not ready to leverage NFSv4 yet, and some configuration changes are required to address it.

Firstly, let's change the fstype parameter of my FS resource to nfs4 so that the NFS share gets mounted with NFSv4.

[mikedavem@linux04 ~]$ sudo pcs resource show FS
 Resource: FS (class=ocf provider=heartbeat type=Filesystem)
  Attributes: device=192.168.5.14:/mnt/sql_data_nfs directory=/var/opt/mssql/data fstype=nfs
  Operations: start interval=0s timeout=60 (FS-start-interval-0s)
              stop interval=0s timeout=60 (FS-stop-interval-0s)
              monitor interval=20 timeout=40 (FS-monitor-interval-20)

[mikedavem@linux04 ~]$ sudo pcs resource update FS fstype=nfs4
[mikedavem@linux04 ~]$ sudo pcs resource restart FS
FS successfully restarted

 

Then I had to perform some updates to my idmap configuration on both sides (NFS server and client) to make the mapping work correctly. The main steps were as follows:

  • Enabling idmapping with NFSv4 (disabled by default in my case)
  • Changing some parameters inside /etc/idmapd.conf
  • Verifying idmapd is running correctly.
[root@nfs sql_data_nfs]# echo N >  /sys/module/nfsd/parameters/nfs4_disable_idmapping
…
[root@nfs sql_data_nfs]# grep ^[^#\;] /etc/idmapd.conf
[General]
Domain = dbi-services.test
[Mapping]
Nobody-User = nobody
Nobody-Group = nobody
[Translation]
Method = static

[Static]
mssql@dbi-services.test = mssql
testp@dbi-services.test = testp
…
[root@nfs sql_data_nfs]# systemctl status nfs-idmap
. nfs-idmapd.service - NFSv4 ID-name mapping service
   Loaded: loaded (/usr/lib/systemd/system/nfs-idmapd.service; static; vendor preset: disabled)
   Active: active (running) since Wed 2017-02-15 20:29:57 CET; 1h 39min ago
  Process: 3362 ExecStart=/usr/sbin/rpc.idmapd $RPCIDMAPDARGS (code=exited, status=0/SUCCESS)
 Main PID: 3364 (rpc.idmapd)
   CGroup: /system.slice/nfs-idmapd.service
           └─3364 /usr/sbin/rpc.idmapd

 

At this point, listing the file permissions shows nobody/nobody, meaning the translation is not performed yet.

[root@linux04 ~]# ls -lu /var/opt/mssql
total 16
drwxr-----. 2 nobody nobody 4096 Feb 15 19:00 data
…

 

I forgot to create a corresponding mssql user on the NFS server side. Let's do it:

[root@nfs sql_data_nfs]# groupadd mssql -g 993
[root@nfs sql_data_nfs]# useradd -u 995 -g 993 mssql -M

 

After remounting the NFS share, I finally got the expected output as shown below:

[root@linux04 ~]# mount -o remount -t nfs4 192.168.5.14:/mnt/sql_data_nfs/sqllinuxfci /mnt/testp/
…
[root@linux04 ~]# ls -lu /var/opt/mssql
total 16
drwxr-----. 2 mssql mssql 4096 Feb 15 19:00 data
…
[root@linux04 ~]# ls -lu /var/opt/mssql/data/*
-rwxr-----. 1 mssql mssql  4194304 Feb 15 19:53 /var/opt/mssql/data/master.mdf
-rwxr-----. 1 mssql mssql  2097152 Feb 15 19:53 /var/opt/mssql/data/mastlog.ldf
-rwxr-----. 1 mssql mssql  8388608 Feb 15 19:53 /var/opt/mssql/data/modellog.ldf
-rwxr-----. 1 mssql mssql  8388608 Feb 15 19:53 /var/opt/mssql/data/model.mdf
-rwxr-----. 1 mssql mssql 13959168 Feb 15 19:53 /var/opt/mssql/data/msdbdata.mdf
-rwxr-----. 1 mssql mssql   786432 Feb 15 19:53 /var/opt/mssql/data/msdblog.ldf
-rwxr-----. 1 mssql mssql  8388608 Feb 15 19:53 /var/opt/mssql/data/tempdb.mdf
-rwxr-----. 1 mssql mssql  8388608 Feb 15 19:53 /var/opt/mssql/data/templog.ldf

 

This time the translation is effective but let’s perform another test by running the previous command as the mssql user

[root@linux04 ~]# runuser -l mssql -c 'ls -lu /var/opt/mssql/data/*'
ls: cannot access /var/opt/mssql/data/*: Permission denied

The problem remains when I try to access the database files, despite the correct mapping configuration. I spent some time understanding that some misconceptions persist about how NFSv4 magically fixes uid/gid mismatches. I admit the main documentation is not clear about it, but please feel free to comment if you think otherwise. After digging into further pointers, I understood that NFS itself doesn't perform authentication but delegates it to the RPC mechanism. If we take a look at RPC security, we notice it hasn't been updated to support such a mapping. Basically, it continues to use the historic authentication flavor called AUTH_SYS, meaning numeric uids/gids are still sent over the network. The name translation only comes later, through the idmapd service. The only way to get rid of this issue would be to use another security flavor like RPCSEC_GSS, which includes authentication based on Kerberos for example.

The bottom line here is that SQL Server on Linux is of course no exception. If we want to continue using basic Unix authentication (AUTH_SYS), keeping the uids and gids synchronized across the cluster nodes seems to be the way to go. Using Kerberos authentication instead? That is another story that I will try to tell in another blog post!
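For the Unix authentication route, a quick consistency check of the mssql uid/gid across the nodes can easily be scripted. Here is a minimal sketch, using the host names and ids from this post (the realignment commands in the comments are examples to adapt to your environment):

# Sketch: verify the mssql uid/gid on every cluster node
for host in linux01 linux02 linux03 linux04; do
  echo -n "$host: "
  ssh "$host" id mssql
done

# If a node differs (like LINUX04 above), realign it as root and fix the ownership afterwards:
#   groupmod -g 993 mssql
#   usermod  -u 995 -g 993 mssql
#   chown -R mssql:mssql /var/opt/mssql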

Happy clustering on Linux!

Cet article SQL Server Failover Cluster on Linux and synchronizing uids/gids across nodes est apparu en premier sur Blog dbi services.

Windows Server 2016 RTM and Docker

Thu, 2017-02-16 08:56

I wrote some months ago several blog posts about the new Container feature of Windows Server 2016. Here is the list:

Windows Server 2016: Containers
Windows Containers: Migration
Windows Container: installation and utilization

Today, I will install the Container feature, install Docker and deploy a container.
First of all, I need to enable the Containers and the Hyper-V features. Take care if you use VirtualBox, because after having enabled Hyper-V this won't work anymore.
Windows Server 2016 supports Hyper-V containers in addition to Windows Server containers, and Hyper-V containers are what I will use here. So I will check whether those features are already enabled on my server and, if not, enable them. Don't forget to map the Windows Server 2016 ISO file to your virtual machine.
To do it, just run this PowerShell cmdlet:
[Screenshots: enabling the Containers and Hyper-V features with PowerShell]

Now, both features are installed in my VM and I can install Docker.
To do so I will download the Docker Engine and Client from the Docker project library here to the folder c:\Temp and unzip the file into c:\ProgramFiles:
[Screenshots: downloading and extracting the Docker engine and client]

I have my Docker folder with the Docker executable files and binaries: dockerd.exe for the Docker engine and docker.exe for the client:
[Screenshot]
I also add the Docker path to the PATH environment variable:
[Screenshot]
Optionally, we could make that change permanent:
[Screenshot]
I will install Docker as a service and start it:
[Screenshot]
Docker is installed and started, and I'm now able to use it. I don't have any images for the moment, but the microsoft/nanoserver image is available on the Docker Hub:
[Screenshot]
Let's download this Nano Server base OS image from the Hub:
[Screenshot]
Now that I have my Docker OS image, I will start an interactive session with this image:
[Screenshot]
The container starts and I get a command prompt, where I can check the processes running in my container, like PowerShell, cmd, etc.:
[Screenshot]
I will create a PowerShell script in my container to write a Welcome Container message, and then exit from my container:
[Screenshot]
I can now see my new container with the following command:
[Screenshot]
I will now create a new image from my container's changes, named mywelcomecontainer (it's not possible to use upper case for the image name, otherwise you will receive the error "repository name must be lowercase"):
[Screenshot]
I can finally run my container: a Hyper-V container will be created from my new image named mywelcomecontainer and my PowerShell script will be executed inside it:
[Screenshot]
As I omitted the docker run option --rm, my container is still available when I run a docker ps -a command. In order to delete my container I can run docker rm <containerid>; docker rmi will delete images if needed.
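For reference, the docker client calls behind those screenshots look roughly like the following sketch (the image name mywelcomecontainer comes from above; the container id and the script path C:\welcome.ps1 are placeholders):

# Pull the Nano Server base OS image from the Docker Hub
docker pull microsoft/nanoserver

# Start an interactive session (Hyper-V isolation) in a new container
docker run -it --isolation=hyperv microsoft/nanoserver cmd

# After exiting the container, list all containers, including the stopped ones
docker ps -a

# Create a new image from the container's changes (the name must be lowercase)
docker commit <containerid> mywelcomecontainer

# Run a container from the new image and execute the PowerShell script
docker run --rm mywelcomecontainer powershell -File C:\welcome.ps1

# Clean up: remove containers and images that are no longer needed
docker rm <containerid>
docker rmi mywelcomecontainer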

To conclude, it looks very easy to create images and Hyper-V containers with Docker on Windows Server 2016. The power of Docker is now available in the Windows world and will surely be used more and more.

 

Cet article Windows Server 2016 RTM and Docker est apparu en premier sur Blog dbi services.

Oracle 12c – Issues with the HEATMAP Segment even if the heat map feature is not used

Mon, 2017-02-13 04:55

When I don't need a feature, I don't turn it on or use it, because that reduces the possibility of running into issues. Most of the time this is true, however, during the preparation for an RMAN workshop, the RMAN list failure command showed me the following dictionary issue.

RMAN> list failure;

using target database control file instead of recovery catalog
Database Role: PRIMARY

List of Database Failures
=========================

Failure ID Priority Status    Time Detected        Summary
---------- -------- --------- -------------------- -------
2          CRITICAL OPEN      13-FEB-2017 10:12:26 SQL dictionary health check: seg$.type# 31 on object SEG$ failed

I first thought that it might be related to some incorrect errors shown by the health check (DBMS_HM), because there used to be some issues with that tool. But even after applying the following patch, nothing changed and the error still appeared.

19543595: INCORRECT HEALTHCHECK ERRORS FROM DBMS_HM – FALSE ERRORS ON TS$ , FILE$ OR USER
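By the way, to double-check that such a one-off patch is really installed in the Oracle Home before digging further, a quick look at the inventory is enough (generic check, not part of the original output):

$ORACLE_HOME/OPatch/opatch lspatches | grep 19543595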

So I started a manual health check again to get some more details.

SQL> BEGIN
  2  DBMS_HM.RUN_CHECK (check_name => 'Dictionary Integrity Check',
  3  run_name => 'WilliamsDICTrun002',
  4  input_params => 'CHECK_MASK=ALL');
  5  END;
  6  /

PL/SQL procedure successfully completed.

SQL> SELECT DBMS_HM.GET_RUN_REPORT('WilliamsDICTrun002') from dual;

DBMS_HM.GET_RUN_REPORT('WILLIAMSDICTRUN002')
---------------------------------------------------------------------
Basic Run Information
 Run Name                     : WilliamsDICTrun002
 Run Id                       : 61
 Check Name                   : Dictionary Integrity Check
 Mode                         : MANUAL
 Status                       : COMPLETED
 Start Time                   : 2017-02-13 10:56:58.250100 +01:00
 End Time                     : 2017-02-13 10:56:58.689301 +01:00
 Error Encountered            : 0
 Source Incident Id           : 0
 Number of Incidents Created  : 0

Input Paramters for the Run
 TABLE_NAME=ALL_CORE_TABLES
 CHECK_MASK=ALL

Run Findings And Recommendations
 Finding
 Finding Name  : Dictionary Inconsistency
 Finding ID    : 62
 Type          : FAILURE
 Status        : OPEN
 Priority      : CRITICAL
 Message       : SQL dictionary health check: seg$.type# 31 on object SEG$
               failed
 Message       : Damaged rowid is AAAAAIAABAAAK+RAAc - description: Ts# 1
               File# 2 Block# 28032 is referenced

Now I do have the ROWID, the file number and the block number of the affected object. Let's see what it is.

SQL> select FILE#, BLOCK#, TYPE#, TS#, BLOCKS from seg$ where rowid='AAAAAIAABAAAK+RAAc';

     FILE#     BLOCK#      TYPE#        TS#     BLOCKS
---------- ---------- ---------- ---------- ----------
         2      28032         11          1       1024
		 

SQL> SELECT segment_name, segment_type, block_id, blocks
  2  FROM   dba_extents
  3  WHERE
  4  file_id = 2
  5  AND
  6  ( 28032 BETWEEN block_id AND ( block_id + blocks ) );

SEGMENT_NAME               SEGMENT_TYPE               BLOCK_ID     BLOCKS
-------------------------- ------------------------ ---------- ----------
HEATMAP                    SYSTEM STATISTICS             28032       1024

Really strange. It is related to the HEATMAP segment, but I am not using the heat map feature, nor have I ever used it in the past.

SQL> show parameter heat

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
heat_map                             string      OFF

SQL> select name, DETECTED_USAGES from DBA_FEATURE_USAGE_STATISTICS where name like 'Heat%';

NAME                     DETECTED_USAGES
------------------------ ---------------
Heat Map                               0

But how can I get this fixed now? You could either ignore this issue, open an SR with Oracle, or drop the statistics segment, in case you are not using the heat map feature.

In my case, I decided to drop the statistics segment by issuing the following command. Dropping the statistics segment works by setting the underscore parameter "_drop_stat_segment" to 1.

SQL> select SEGMENT_NAME, SEGMENT_TYPE from dba_extents where SEGMENT_TYPE = 'SYSTEM STATISTICS';

SEGMENT_NAME               SEGMENT_TYPE
-------------------------- ------------------------
HEATMAP                    SYSTEM STATISTICS

SQL> ALTER SYSTEM SET "_drop_stat_segment"=1 scope=memory;

System altered.

SQL> select SEGMENT_NAME, SEGMENT_TYPE from dba_extents where SEGMENT_TYPE = 'SYSTEM STATISTICS';

no rows selected

The heat map table is gone now. Let’s run the dictionary check again.

SQL> BEGIN
  2  DBMS_HM.RUN_CHECK (check_name => 'Dictionary Integrity Check',
  3  run_name => 'WilliamsDICTrun003',
  4  input_params => 'CHECK_MASK=ALL');
  5  END;
  6  /

PL/SQL procedure successfully completed.

SQL> SELECT DBMS_HM.GET_RUN_REPORT('WilliamsDICTrun003') from dual;

DBMS_HM.GET_RUN_REPORT('WILLIAMSDICTRUN003')
---------------------------------------------------------------------
Basic Run Information
 Run Name                     : WilliamsDICTrun003
 Run Id                       : 81
 Check Name                   : Dictionary Integrity Check
 Mode                         : MANUAL
 Status                       : COMPLETED
 Start Time                   : 2017-02-13 11:17:15.190873 +01:00
 End Time                     : 2017-02-13 11:17:15.642501 +01:00
 Error Encountered            : 0
 Source Incident Id           : 0
 Number of Incidents Created  : 0

Input Paramters for the Run
 TABLE_NAME=ALL_CORE_TABLES
 CHECK_MASK=ALL

Run Findings And Recommendations


RMAN> list failure;

using target database control file instead of recovery catalog
Database Role: PRIMARY

no failures found that match specification

 

Looks much better now.

Conclusion

Even if you are not using some features, you can still have trouble with them. :-)

 

Cet article Oracle 12c – Issues with the HEATMAP Segment even if the heat map feature is not used est apparu en premier sur Blog dbi services.

12cR2 DBCA can create a standby database

Sun, 2017-02-12 15:33

Do you like using DBCA to create a database from the command line, with -silent -createDatabase? With a simple command line you can provision a database, with oratab, tnsnames.ora, directory creation and any setting you want. And you can even call a custom script to customize it further. But if you want to put it in Data Guard, you have to do the duplicate manually with RMAN. This evolves in 12.2 with a new option in DBCA to do that: dbca -silent -createDuplicateDB -createAsStandby

Limitations

I’ve tried in the Oracle Public Cloud where I just created a RAC database. But unfortunately, this new feature is only for Single Instance:

[FATAL] [DBT-16056] Specified primary database is not a Single Instance (SI) database.
CAUSE: Duplicate database operation is supported only for SI databases.

Ok. RAC is complex enough anyway, so you don’t need that quick command line to create the standby. So I tried with a single instance database:

[FATAL] [DBT-16057] Specified primary database is a container database (CDB).
CAUSE: Duplicate database operation is supported only for non container databases.

Ok. It is a bit surprising to have a new feature in 12.2 that works only on the architecture that was deprecated in 12.1, but if we think about it, DBCA is for fast provisioning. In multitenant you create a CDB once, put it in Data Guard, and the fast provisioning comes with 'create pluggable database'. And deprecated doesn't mean that we do not use it; it is good to have a simple command line tool for easy provisioning of a non-CDB.

Then, I tried on a non-CDB that I’ve created in 12.2

I'm a big fan of EZCONNECT but I had a few problems with it. What's worth knowing is that there is no 'impossible to connect' message. When it cannot connect, the following message is raised:

[FATAL] [DBT-16051] Archive log mode is not enabled in the primary database.
ACTION: Primary database should be configured with archive log mode for creating a duplicate or standby database.

just because this is the first thing that DBCA checks, and this is where it fails when the connection is not ok.

But you can also use a tnsnames.ora network service name. This is what I’ll use for -primaryDBConnectionString

$ tnsping ORCLA
TNS Ping Utility for Linux: Version 12.2.0.1.0 - Production on 11-FEB-2017 22:28:35
Copyright (c) 1997, 2016, Oracle. All rights reserved.
Used parameter files:
/u01/app/oracle/product/12.2.0/dbhome_1/network/admin/sqlnet.ora
Used TNSNAMES adapter to resolve the alias
Attempting to contact (DESCRIPTION = (ADDRESS = (PROTOCOL = TCP)(HOST = MAA.compute-usslash.oraclecloud.internal)(PORT = 1521)) (CONNECT_DATA = (SERVER = DEDICATED) (SERVICE_NAME = orcla.compute-usslash.oraclecloud.internal)))
OK (0 msec)
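If you have to add that alias manually, the corresponding tnsnames.ora entry simply resolves ORCLA to the connect descriptor shown by tnsping above, for example:

ORCLA =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = MAA.compute-usslash.oraclecloud.internal)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = orcla.compute-usslash.oraclecloud.internal)
    )
  )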

-createDuplicateDB -createAsStandby

Here is an example:

dbca -silent -createDuplicateDB -gdbName ORCLB.compute-usslash.oraclecloud.internal -sid ORCLB -sysPassword "Ach1z0#d" -primaryDBConnectionString ORCLA -createAsStandby -dbUniquename ORCLB

This will connect RMAN to the target (here called ‘primary’), with the connect string ORCLA and run a duplicate to create ORCLB as specified.

It starts by creating a temporary listener (which is still there in listener.ora even after completion), creating the auxiliary instance and running RMAN:
Listener config step
33% complete
Auxiliary instance creation
66% complete
RMAN duplicate
100% complete
Look at the log file "/u01/app/oracle/cfgtoollogs/dbca/ORCLB/orcla.log" for further details.

Through RMAN API, the file names are set:


run {
set newname for datafile 1 to '/u01/app/oracle/oradata/orclb/system01.dbf' ;
set newname for datafile 3 to '/u01/app/oracle/oradata/orclb/sysaux01.dbf' ;
set newname for datafile 4 to '/u01/app/oracle/oradata/orclb/undotbs01.dbf' ;
set newname for datafile 7 to '/u01/app/oracle/oradata/orclb/users01.dbf' ;
set newname for tempfile 1 to '/u01/app/oracle/oradata/orclb/temp01.dbf' ;

and the DUPLICATE FOR STANDBY FROM ACTIVE is run:

duplicate target database
for standby
from active database
dorecover
spfile
set 'db_recovery_file_dest_size'='8405385216'
set 'compatible'='12.2.0'
set 'undo_tablespace'='UNDOTBS1'
set 'dispatchers'='(PROTOCOL=TCP) (SERVICE=ORCLAXDB)'
set 'db_name'='orcla'
set 'db_unique_name'='ORCLB'
set 'sga_target'='2281701376'
set 'diagnostic_dest'='/u01/app/oracle'
set 'audit_file_dest'='/u01/app/oracle/audit'
set 'open_cursors'='300'
set 'processes'='300'
set 'nls_language'='AMERICAN'
set 'pga_aggregate_target'='757071872'
set 'db_recovery_file_dest'='/u01/app/oracle/fast_recovery_area/orcla'
set 'db_block_size'='8192'
set 'log_archive_format'='%t_%s_%r.dbf'
set 'nls_territory'='AMERICA'
set 'control_files'="/u01/app/oracle/oradata/orclb/control01.ctl", "/u01/app/oracle/fast_recovery_area/orcla/ORCLB/control02.ctl"
set 'audit_trail'='DB'
set 'db_domain'='compute-usslash.oraclecloud.internal'
set 'remote_login_passwordfile'='EXCLUSIVE'
reset 'local_listener'
reset 'db_file_name_convert'
set 'log_archive_dest_1'='location=/u01/app/oracle/fast_recovery_area/orcla'
reset 'event'
reset 'remote_listener'
nofilenamecheck;
}

The parameters come from the 'primary' and are adapted for the new database. Be careful: this is where I prefer to review the parameters beforehand. For example, when you duplicate to clone the primary (without -createAsStandby), you probably don't want to keep the same log_archive_dest that was set in a Data Guard configuration. I'll have to post a blog about that.

At the end, the standby database is opened read-only, so be careful to close it before starting the apply of redo if you don’t have the Active Data Guard option.

Data Guard

DBCA doesn't go beyond the DUPLICATE. And you can also use it in Standard Edition to set up a manual standby.

I hope that one day we will have an option to create the Data Guard configuration in the same process, but here you have to do it yourself:

  • No tnsnames.ora entry is added for the standby
  • The static listener entries are not added in listener.ora
  • No Data Guard configuration is there
  • The Data Guard Broker is not started except if it was set in advance to true on primary
  • No standby redo logs are created (except when they were present on primary)

You can set dg_broker_start=true and create the standby redo logs in a post-script that you call with the -customScripts argument. However, the best way is to do it in advance on the primary, and then the duplicate will do the same on the standby.
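If you go for the post-script option, the statements boil down to something like the following sketch (here wrapped in a call to sqlplus; the standby redo log names, number and size are only examples and must match your online redo logs):

# Example post-duplicate script: start the broker and add standby redo logs
sqlplus -s / as sysdba <<'EOF'
alter system set dg_broker_start=true scope=both;
alter database add standby logfile '/u01/app/oracle/oradata/orclb/srl01.log' size 200M;
alter database add standby logfile '/u01/app/oracle/oradata/orclb/srl02.log' size 200M;
alter database add standby logfile '/u01/app/oracle/oradata/orclb/srl03.log' size 200M;
exit
EOF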

So what?

You don't really need this new feature because it is easy to automate yourself: it's just a copy of the spfile parameters, with a few changes, and an RMAN duplicate command. But your scripts will be specialized for your environment, and generic scripts are more complex to maintain. The big advantage of having this integrated in DBCA is that it is designed for all configurations and is maintained through versions.

 

Cet article 12cR2 DBCA can create a standby database est apparu en premier sur Blog dbi services.

Oracle Public Cloud: LIOPS with 4 OCPU in PaaS

Fri, 2017-02-10 13:44

In the latest post I've run a cached SLOB workload on Oracle Cloud IaaS to measure logical reads per second on a system covered by 2 processor licences (so 4 OCPUs). Just as a comparison, here is the same on the Oracle PaaS Database as a Service.

PaaS

The CPUs in PaaS are not exactly the same: E5-2690 v2 (3.00GHz) – it was E5-2699 v3 (2.30GHz) for my IaaS test.

[oracle@DBI122 ~]$ lscpu
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
CPU(s): 2
On-line CPU(s) list: 0,1
Thread(s) per core: 1
Core(s) per socket: 2
Socket(s): 1
NUMA node(s): 1
Vendor ID: GenuineIntel
CPU family: 6
Model: 62
Stepping: 4
CPU MHz: 2992.874
BogoMIPS: 5985.74
Hypervisor vendor: Xen
Virtualization type: full
L1d cache: 32K
L1i cache: 32K
L2 cache: 256K
L3 cache: 25600K
NUMA node0 CPU(s): 0,1
[oracle@DBI122 ~]$ cat /proc/cpuinfo | tail -26
processor : 1
vendor_id : GenuineIntel
cpu family : 6
model : 62
model name : Intel(R) Xeon(R) CPU E5-2690 v2 @ 3.00GHz
stepping : 4
microcode : 0x428
cpu MHz : 2992.874
cache size : 25600 KB
physical id : 0
siblings : 2
core id : 1
cpu cores : 2
apicid : 2
initial apicid : 2
fpu : yes
fpu_exception : yes
cpuid level : 13
wp : yes
flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx rdtscp lm constant_tsc rep_good nopl eagerfpu pni pclmulqdq ssse3 cx16 pcid sse4_1 sse4_2 x2apic popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm xsaveopt fsgsbase smep erms
bogomips : 5985.74
clflush size : 64
cache_alignment : 64
address sizes : 46 bits physical, 48 bits virtual
power management:

So it seems that PaaS has a faster CPU (see the frequency and bogomips), but nothing beats a real test:

SLOB

Here I’ve run 1 to 8 SLOB sessions as I did in the previous post and here is the result:


Load Profile Per Second Per Transaction Per Exec Per Call
~~~~~~~~~~~~~~~ --------------- --------------- --------- ---------
DB Time(s): 1.0 25.1 0.00 2.19
DB CPU(s): 1.0 25.1 0.00 2.18
Logical read (blocks): 611,210.2 15,357,878.4
 
Load Profile Per Second Per Transaction Per Exec Per Call
~~~~~~~~~~~~~~~ --------------- --------------- --------- ---------
DB Time(s): 2.0 40.1 0.00 7.71
DB CPU(s): 2.0 40.1 0.00 7.70
Logical read (blocks): 1,195,863.3 24,031,350.5
 
Load Profile Per Second Per Transaction Per Exec Per Call
~~~~~~~~~~~~~~~ --------------- --------------- --------- ---------
DB Time(s): 3.0 75.1 0.00 11.86
DB CPU(s): 3.0 75.0 0.00 11.84
Logical read (blocks): 1,720,446.4 43,208,149.8
 
Load Profile Per Second Per Transaction Per Exec Per Call
~~~~~~~~~~~~~~~ --------------- --------------- --------- ---------
DB Time(s): 4.0 70.7 0.00 11.78
DB CPU(s): 4.0 70.6 0.00 11.76
Logical read (blocks): 2,266,196.4 40,174,995.7
 
Load Profile Per Second Per Transaction Per Exec Per Call
~~~~~~~~~~~~~~~ --------------- --------------- --------- ---------
DB Time(s): 5.0 125.1 0.00 13.17
DB CPU(s): 5.0 124.9 0.00 13.15
Logical read (blocks): 2,802,916.0 70,385,892.6
 
Load Profile Per Second Per Transaction Per Exec Per Call
~~~~~~~~~~~~~~~ --------------- --------------- --------- ---------
DB Time(s): 6.0 90.1 0.00 15.80
DB CPU(s): 6.0 90.0 0.00 15.78
Logical read (blocks): 3,312,050.8 49,898,529.6
 
Load Profile Per Second Per Transaction Per Exec Per Call
~~~~~~~~~~~~~~~ --------------- --------------- --------- ---------
DB Time(s): 7.0 95.5 0.00 17.22
DB CPU(s): 7.0 95.3 0.00 17.18
Logical read (blocks): 3,812,912.2 52,225,112.1
 
Load Profile Per Second Per Transaction Per Exec Per Call
~~~~~~~~~~~~~~~ --------------- --------------- --------- ---------
DB Time(s): 8.0 141.3 0.00 16.45
DB CPU(s): 7.9 140.2 0.00 16.33
Logical read (blocks): 4,237,433.6 75,154,623.7

A faster CPU but fewer logical reads processed per second… Don't look only at the specs when choosing an instance type. Test it with your workload…

Besides performance, I really like the Oracle Cloud PaaS for Database. You have easy provisioning (a few clicks) but still full access (root, grid, oracle, sysdba). There is no competitor on that. In other clouds, either you go IaaS and you have to install and configure everything yourself, or you go PaaS and you have very limited admin access. Here you have both.

 

Cet article Oracle Public Cloud: LIOPS with 4 OCPU in PaaS est apparu en premier sur Blog dbi services.

Linux – Securing your important files with XFS extended attributes

Thu, 2017-02-09 09:19

Let's say the tnsnames.ora is a quite important file on your system, and you want to make sure that you notice when someone changes the file. Taking a look at the modification time of that file would be a good idea, or not?

Per default, the ls -l command shows only the modification time (mtime). In my case, I know that the tnsnames.ora was changed on "Feb 9 11:24".

oracle@dbidg03:/u01/app/oracle/network/admin/ [rdbms112] ls -l tnsnames.ora
-rw-r--r-- 1 oracle oinstall 1791 Feb  9 11:24 tnsnames.ora

But in reality, more time stamps are stored. The atime, the ctime and the mtime.

  • atime is the access time (only maintained if the filesystem is not mounted with the noatime option)
  • ctime is the change time, meaning the inode was change, e.g. with the chmod command
  • mtime is the modification time, meaning the content changed

The ctime is often misinterpreted as “creation time”, but this is not the case. The creation time of a file is not recorded with XFS. There are other file systems that can do it, like ZFS, but XFS does not support “creation time”. You can use the stat command to see all time stamps in one shot.

oracle@dbidg03:/u01/app/oracle/network/admin/ [rdbms112] stat tnsnames.ora
  File: ‘tnsnames.ora’
  Size: 2137            Blocks: 8          IO Block: 4096   regular file
Device: fb02h/64258d    Inode: 163094097   Links: 1
Access: (0644/-rw-r--r--)  Uid: (54321/  oracle)   Gid: (54321/oinstall)
Access: 2017-02-09 11:24:00.243281419 +0100
Modify: 2017-02-09 11:24:00.243281419 +0100
Change: 2017-02-09 11:24:00.254281404 +0100
 Birth: -

Ok. Now someone comes along and changes the tnsnames.ora

oracle@dbidg03:/u01/app/oracle/network/admin/ [rdbms112] vi tnsnames.ora

A change was done, and the modification time of that file changed immediately.

oracle@dbidg03:/u01/app/oracle/network/admin/ [rdbms112] ls -l tnsnames.ora
-rw-r--r-- 1 oracle oinstall 2136 Feb  9 11:31 tnsnames.ora

And also other timestamps might have changed like the atime and ctime.

oracle@dbidg03:/u01/app/oracle/network/admin/ [rdbms112] stat tnsnames.ora
  File: ‘tnsnames.ora’
  Size: 2136            Blocks: 8          IO Block: 4096   regular file
Device: fb02h/64258d    Inode: 161521017   Links: 1
Access: (0644/-rw-r--r--)  Uid: (54321/  oracle)   Gid: (54321/oinstall)
Access: 2017-02-09 11:31:06.733673663 +0100
Modify: 2017-02-09 11:31:06.733673663 +0100
Change: 2017-02-09 11:31:06.738673656 +0100
 Birth: -

Cool, now I know that the file was changed at "Feb 9 11:31". But how reliable is that information? With the touch command, I can easily change the modification time to any value I like, e.g. I can set it to the same date as beforehand.

oracle@dbidg03:/u01/app/oracle/network/admin/ [rdbms112] touch -m --date="Feb  9 11:24" tnsnames.ora

oracle@dbidg03:/u01/app/oracle/network/admin/ [rdbms112] ls -l tnsnames.ora
-rw-r--r-- 1 oracle oinstall 2136 Feb  9 11:24 tnsnames.ora

Now I have set the modification time to almost the same value as it was beforehand (almost, because the fractional seconds are different). Besides that, the access and the change times are different.

oracle@dbidg03:/u01/app/oracle/network/admin/ [rdbms112] stat tnsnames.ora
  File: ‘tnsnames.ora’
  Size: 2136            Blocks: 8          IO Block: 4096   regular file
Device: fb02h/64258d    Inode: 161521017   Links: 1
Access: (0644/-rw-r--r--)  Uid: (54321/  oracle)   Gid: (54321/oinstall)
Access: 2017-02-09 11:31:06.733673663 +0100
Modify: 2017-02-09 11:24:00.000000000 +0100
Change: 2017-02-09 11:36:51.631671612 +0100
 Birth: -

No problem, I can make it even more precise by specifying  the whole date format including microseconds and time zone.

oracle@dbidg03:/u01/app/oracle/network/admin/ [rdbms112] touch -m --date="2017-02-09 11:24:00.243281419 +0100" tnsnames.ora

oracle@dbidg03:/u01/app/oracle/network/admin/ [rdbms112] stat tnsnames.ora
  File: ‘tnsnames.ora’
  Size: 2136            Blocks: 8          IO Block: 4096   regular file
Device: fb02h/64258d    Inode: 161521017   Links: 1
Access: (0644/-rw-r--r--)  Uid: (54321/  oracle)   Gid: (54321/oinstall)
Access: 2017-02-09 11:31:06.733673663 +0100
Modify: 2017-02-09 11:24:00.243281419 +0100
Change: 2017-02-09 11:39:41.775993054 +0100
 Birth: -

And if I want to, I can even change the access time.

oracle@dbidg03:/u01/app/oracle/network/admin/ [rdbms112] touch -a --date="2017-02-09 11:24:00.243281419 +0100" tnsnames.ora

oracle@dbidg03:/u01/app/oracle/network/admin/ [rdbms112] stat tnsnames.ora
  File: ‘tnsnames.ora’
  Size: 2136            Blocks: 8          IO Block: 4096   regular file
Device: fb02h/64258d    Inode: 161521017   Links: 1
Access: (0644/-rw-r--r--)  Uid: (54321/  oracle)   Gid: (54321/oinstall)
Access: 2017-02-09 11:24:00.243281419 +0100
Modify: 2017-02-09 11:24:00.243281419 +0100
Change: 2017-02-09 11:42:22.935350329 +0100
 Birth: -

Only the ctime (change time) is not so easy to change, at least not with the touch command. For changing the ctime you need to invoke the filesystem debugger or something similar. In the end, monitoring my tnsnames.ora file changes by timestamp is not reliable. So why not use the XFS extended attribute feature to help me? For example, I could store an md5 checksum, and when the checksum differs, I know that the content was changed. Let's do it with the root user.

As root:

[root@dbidg03 admin]# getfattr -d tnsnames.ora
[root@dbidg03 admin]#

[root@dbidg03 admin]# md5sum tnsnames.ora
d135c0ebf51f68feda895dac8631a999  tnsnames.ora

[root@dbidg03 admin]# setfattr -n user.md5sum -v d135c0ebf51f68feda895dac8631a999 tnsnames.ora
[root@dbidg03 admin]#
[root@dbidg03 admin]# getfattr -d tnsnames.ora
# file: tnsnames.ora
user.md5sum="d135c0ebf51f68feda895dac8631a999"

But this is also not so secure. Even if done with root, it can easily be removed by the oracle user.

oracle@dbidg03:/u01/app/oracle/network/admin/ [rdbms112] getfattr -d tnsnames.ora
# file: tnsnames.ora
user.md5sum="d135c0ebf51f68feda895dac8631a999"

oracle@dbidg03:/u01/app/oracle/network/admin/ [rdbms112] setfattr -x user.md5sum tnsnames.ora
oracle@dbidg03:/u01/app/oracle/network/admin/ [rdbms112] getfattr -d tnsnames.ora

To overcome this issue, XFS uses 2 disjoint attribute name spaces associated with every filesystem object. They are the root (or trusted) and user address spaces. The root address space is accessible only to the superuser, and then only by specifying a flag argument to the function call. Other users (like the oracle user in my case) will not see or be able to modify attributes in the root address space. The user address space is protected by the normal file permissions mechanism, so the owner of the file can decide who is able to see and/or modify the value of attributes on any particular file.

Ok. So let’s do it again by using the root (trusted) address space.

[root@dbidg03 admin]# setfattr -n trusted.md5sum -v "d135c0ebf51f68feda895dac8631a999" tnsnames.ora
[root@dbidg03 admin]# getfattr -n trusted.md5sum tnsnames.ora
# file: tnsnames.ora
trusted.md5sum="d135c0ebf51f68feda895dac8631a999"

However, from the oracle user point of view, no attributes exist, even if you know the attribute you are looking for.

oracle@dbidg03:/u01/app/oracle/network/admin/ [rdbms112] getfattr -d tnsnames.ora
oracle@dbidg03:/u01/app/oracle/network/admin/ [rdbms112] getfattr -n trusted.md5sum tnsnames.ora
tnsnames.ora: trusted.md5sum: No such attribute

You can take it even further by adding another root attribute, e.g. the time when you created the md5 checksum.

[root@dbidg03 admin]# setfattr -n trusted.md5sumtime -v "09.02.2018 13:00:00" tnsnames.ora
[root@dbidg03 admin]# getfattr -n trusted.md5sumtime tnsnames.ora
# file: tnsnames.ora
trusted.md5sumtime="09.02.2018 13:00:00"

[root@dbidg03 admin]# getfattr -n trusted.md5sum tnsnames.ora
# file: tnsnames.ora
trusted.md5sum="d135c0ebf51f68feda895dac8631a999"

Now you have a good chance to find out if the file content was changed or not, by simply checking if the file has a different check sum.
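A small verification routine could then look like this sketch (run as root, because the attribute lives in the trusted namespace; the file path is the one used in this post):

f=/u01/app/oracle/network/admin/tnsnames.ora
stored=$(getfattr --only-values --absolute-names -n trusted.md5sum "$f")
current=$(md5sum "$f" | awk '{print $1}')
if [ "$stored" = "$current" ]; then
  echo "$f: checksum unchanged"
else
  echo "$f: content has changed (stored=$stored, current=$current)"
fi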

Conclusion

XFS extended attributes are quite powerful features and you can use them in a lot of scenarios. Take care that you have a backup solution that supports extended attributes, otherwise you will lose all that information once you restore your data.
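With GNU tar, for example, extended attributes are only preserved when requested explicitly; something along these lines (run as root so that the trusted namespace is included; the archive path is just an example):

tar --create --xattrs --xattrs-include='*' -f /backup/admin.tar /u01/app/oracle/network/admin
tar --extract --xattrs --xattrs-include='*' -f /backup/admin.tar -C /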

 

Cet article Linux – Securing your important files with XFS extended attributes est apparu en premier sur Blog dbi services.

SQL Server AlwaysOn – Distributed availability groups, read-only with round-robin capabilities

Wed, 2017-02-08 14:04

 

This blog post comes from a very interesting discussion with one of my friends about the read-only capabilities of secondary replicas in the context of distributed availability groups. Initially, distributed availability groups are designed to address D/R scenarios and some types of migration scenarios as well. I already discussed one possible migration scenario here. However, we may also take advantage of read-only secondary replicas in reporting scenarios (obviously after assessing whether the cost is worth it). In addition, if you plan to introduce scale-out with secondary replicas (even with asynchronous replication), you may consider using distributed availability groups and the cascading feature, which addresses the network bandwidth overhead, especially if your cross-datacenter link is not designed to handle a heavy replication workload. Considering this last scenario, the motivation of my friend (Sarah Bessard) was to assess distributed availability groups as a replacement for SQL Server replication.

As a reminder, SQL Server 2016 provides a new round-robin feature for read-only secondary replicas, and extending it by including additional replicas from another availability group seems to be a good idea. But here things become more complicated: transparent redirection and round-robin sound promising, but let's see if they still work when a distributed availability group comes into play.

Let's have a demo on my lab environment. For the moment, there are two separate availability groups, each running on top of its own Windows Failover Cluster: AdvGrp and AdvGrpDR respectively.

 

[Figure: two availability groups, AdvGrp and AdvGrpDR, each on its own WSFC]

At this stage, we will focus only on my second availability group, AdvGrpDR. Firstly, I configured read-only routes for my 4 replicas and here is the result:

SELECT 
	r.replica_server_name,
	r.read_only_routing_url,
	g.name AS group_name
FROM 
	sys.availability_replicas AS r
JOIN 
	sys.availability_groups AS g ON r.group_id = g.group_id
WHERE 
	g.name = N'AdvGrpDR'
ORDER BY 
	r.replica_server_name;

select 
	r.replica_server_name AS primary_replica,
	r.read_only_routing_url,
	rl.routing_priority,
	r2.replica_server_name AS read_only_secondary_replica,
	r2.secondary_role_allow_connections_desc,
	g.name AS availability_group
FROM 
	sys.availability_read_only_routing_lists AS rl
JOIN 
	sys.availability_replicas AS r ON rl.replica_id = r.replica_id
JOIN 
	sys.availability_replicas AS r2 ON rl.read_only_replica_id = r2.replica_id
JOIN 
	sys.availability_groups AS g ON g.group_id =  r.group_id
WHERE 
	g.name = N'AdvGrpDR'
ORDER BY 
	primary_replica, availability_group, routing_priority;
GO

 

[Screenshot: read-only routing configuration of the AdvGrpDR replicas]

Read-only routing URLs and preferred replicas are defined for all the replicas. I defined a round-robin configuration for replicas WIN20161SQL16\SQL16 to WIN20163SQL16\SQL16, whereas the last one is configured with a preference order (WIN20163SQL16\SQL16 first and WIN20164SQL16\SQL16 if the previous one is not available).

After configuring the read-only routes, I decided to check whether round-robin comes into play before implementing my distributed availability group. Before running my test, I also created an extended events session which includes the read-only routing events, as follows:

CREATE EVENT SESSION [alwayson_ro] 
ON SERVER 
ADD EVENT sqlserver.hadr_evaluate_readonly_routing_info,
ADD EVENT sqlserver.read_only_route_complete,
ADD EVENT sqlserver.read_only_route_fail
ADD TARGET package0.event_file ( SET filename=N'alwayson_ro' ),
ADD TARGET package0.ring_buffer;

 

My test included a basic SQLCMD command with the -K READONLY parameter, as follows:

[Screenshot: sqlcmd read-only connection test output]
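The command was along these lines, a sketch since the screenshot is not reproduced here (lst-advdrgrp is the AdvGrpDR listener also referenced in the scripts below; the database name AdvDb is a placeholder):

sqlcmd -S lst-advdrgrp.dbi-services.test -d AdvDb -K ReadOnly -Q "SELECT @@SERVERNAME AS replica_name;"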

According to the above output, we may say that my configuration works as expected. We may also double-check by looking at the extended event output:

[Screenshot: extended event output for the read-only route evaluation]

But now let’s perform the same test after implementing my distributed availability group. The script I used was as follows:

:CONNECT WIN20161SQL16\SQL16
 
USE [master];
GO
 
-- Primary cluster 
CREATE AVAILABILITY GROUP [AdvDistGrp]  
WITH (DISTRIBUTED)   
AVAILABILITY GROUP ON 
'AdvGrp'
WITH   
(   
    LISTENER_URL = 'tcp://lst-advgrp.dbi-services.test:5022',    
    AVAILABILITY_MODE = ASYNCHRONOUS_COMMIT,   
    FAILOVER_MODE = MANUAL,   
	SEEDING_MODE = AUTOMATIC   
),   
'AdvGrpDR'
WITH   
(   
    LISTENER_URL = 'tcp://lst-advdrgrp.dbi-services.test:5022',   
	AVAILABILITY_MODE = ASYNCHRONOUS_COMMIT,   
	FAILOVER_MODE = MANUAL,   
	SEEDING_MODE = AUTOMATIC   
);    
GO   
 

:CONNECT WIN20163SQL16\SQL16
 
USE [master];
GO
 
-- secondary cluster
ALTER AVAILABILITY GROUP [AdvDistGrp]   
JOIN  
AVAILABILITY GROUP ON 
'AdvGrp'
WITH   
(   
    LISTENER_URL = 'tcp://lst-advgrp.dbi-services.test:5022',    
    AVAILABILITY_MODE = ASYNCHRONOUS_COMMIT,   
    FAILOVER_MODE = MANUAL,   
	SEEDING_MODE = AUTOMATIC   
),   
'AdvGrpDR'
WITH   
(   
    LISTENER_URL = 'tcp://lst-advdrgrp.dbi-services.test:5022',   
	AVAILABILITY_MODE = ASYNCHRONOUS_COMMIT,   
	FAILOVER_MODE = MANUAL,   
	SEEDING_MODE = AUTOMATIC   
);    
GO

 

[Figure: architecture after adding the distributed availability group AdvDistGrp]

Performing the previous test after applying the new configuration gives me a different result this time.

[Screenshot: sqlcmd read-only test output after creating the distributed availability group]

It seems that round-robin is no longer performed, although I used the same read-only routing configuration. Likewise, looking at the extended event output gave me no results. It seems that the listener's transparent redirection and round-robin features did not come into play this time.

Let's perform a last test, which consists in moving the AdvGrpDR availability group to another replica, to confirm that transparent redirection does not work as we might expect:

:CONNECT WIN20164SQL16\SQL16

ALTER AVAILABILITY GROUP AdvGrpDR FAILOVER;

 

[Screenshot: sqlcmd read-only test output after the failover]

Same output as previously. The AdvGrpDR availability group has moved from the WIN20163SQL16\SQL16 replica to the WIN20164SQL16\SQL16 replica, and the connection reached the newly defined primary of the second availability group (which has the secondary role from the distributed availability group perspective), meaning we are not redirected to one of the defined secondaries.

At this stage, it seems that we will have to implement our own load balancing component, whatever it is, in order to benefit from all the secondary replicas and read-only features of the second availability group. Maybe this is something Microsoft may consider as an improvement in the future.

Happy high availability moment!

Cet article SQL Server AlwaysOn – Distributed availability groups, read-only with round-robin capabilities est apparu en premier sur Blog dbi services.

Oracle Public Cloud: 2 OCPU for 1 proc. license

Wed, 2017-02-08 11:40

I've blogged recently about the Oracle core factor in the clouds. In order to optimize your Oracle licences, you need to choose the instance type that can run faster on fewer cores. In a previous blog post, I tried to show how complex this can be, comparing the same workload (cached SLOB) on different instances of the same cloud provider (Amazon). I did that on instances with 2 virtual cores, covered by 2 Oracle Database processor licences. Here I'm doing the same on the Oracle Public Cloud where, with the same number of licences, you can run on 4 hyper-threaded cores.

Trial IaaS

I'm running with the 30-month trial subscription. I did several tests because they were not consistent at first: I had some runs where it seemed that I was not running at full CPU. What I know is that your CPU resources are guaranteed on the Oracle Public Cloud, but maybe that's not the case on the trial, or I was working during a maintenance window, or…

Well, I finally got consistent results and I've run the following test on the IaaS (Cloud Compute Service) to do something similar to what I did on AWS, with the Bring Your Own License idea.

In the Oracle Public Cloud, you can run 2 cores per Oracle processor licence. This means that with 2 processor licences, I can run on an instance shape with 4 OCPUs. This shape is called 'OC5'. Here it is:

[oracle@a9f97f ~]$ lscpu
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
CPU(s): 8
On-line CPU(s) list: 0-7
Thread(s) per core: 1
Core(s) per socket: 8
Socket(s): 1
NUMA node(s): 1
Vendor ID: GenuineIntel
CPU family: 6
Model: 63
Model name: Intel(R) Xeon(R) CPU E5-2699 v3 @ 2.30GHz
Stepping: 2
CPU MHz: 2294.938
BogoMIPS: 4589.87
Hypervisor vendor: Xen
Virtualization type: full
L1d cache: 32K
L1i cache: 32K
L2 cache: 256K
L3 cache: 46080K
NUMA node0 CPU(s): 0-7
 
[oracle@a9f97f ~]$ cat /proc/cpuinfo | tail -26
processor : 7
vendor_id : GenuineIntel
cpu family : 6
model : 63
model name : Intel(R) Xeon(R) CPU E5-2699 v3 @ 2.30GHz
stepping : 2
microcode : 0x36
cpu MHz : 2294.938
cache size : 46080 KB
physical id : 0
siblings : 8
core id : 7
cpu cores : 8
apicid : 14
initial apicid : 14
fpu : yes
fpu_exception : yes
cpuid level : 13
wp : yes
flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx rdtscp lm constant_tsc rep_good nopl eagerfpu pni pclmulqdq ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm abm xsaveopt fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid
bogomips : 4589.87
clflush size : 64
cache_alignment : 64
address sizes : 46 bits physical, 48 bits virtual
power management:

And here are the results:


Load Profile Per Second Per Transaction Per Exec Per Call
~~~~~~~~~~~~~~~ --------------- --------------- --------- ---------
DB Time(s): 1.0 30.2 0.00 5.48
DB CPU(s): 1.0 30.1 0.00 5.47
Logical read (blocks): 884,286.7 26,660,977.4
 
Load Profile Per Second Per Transaction Per Exec Per Call
~~~~~~~~~~~~~~~ --------------- --------------- --------- ---------
DB Time(s): 2.0 25.0 0.00 9.53
DB CPU(s): 2.0 25.0 0.00 9.53
Logical read (blocks): 1,598,987.2 20,034,377.0
 
Load Profile Per Second Per Transaction Per Exec Per Call
~~~~~~~~~~~~~~~ --------------- --------------- --------- ---------
DB Time(s): 3.0 40.9 0.00 9.29
DB CPU(s): 3.0 40.9 0.00 9.28
Logical read (blocks): 2,195,570.8 29,999,381.1
 
Load Profile Per Second Per Transaction Per Exec Per Call
~~~~~~~~~~~~~~~ --------------- --------------- --------- ---------
DB Time(s): 4.0 42.9 0.00 14.46
DB CPU(s): 4.0 42.8 0.00 14.45
Logical read (blocks): 2,873,420.5 30,846,373.9
 
Load Profile Per Second Per Transaction Per Exec Per Call
~~~~~~~~~~~~~~~ --------------- --------------- --------- ---------
DB Time(s): 5.0 51.7 0.00 15.16
DB CPU(s): 5.0 51.7 0.00 15.15
Logical read (blocks): 3,520,059.0 36,487,232.0
 
Load Profile Per Second Per Transaction Per Exec Per Call
~~~~~~~~~~~~~~~ --------------- --------------- --------- ---------
DB Time(s): 6.0 81.8 0.00 17.15
DB CPU(s): 6.0 81.8 0.00 17.14
Logical read (blocks): 4,155,985.6 56,787,765.6
 
Load Profile Per Second Per Transaction Per Exec Per Call
~~~~~~~~~~~~~~~ --------------- --------------- --------- ---------
DB Time(s): 7.0 65.6 0.00 17.65
DB CPU(s): 7.0 65.5 0.00 17.62
Logical read (blocks): 4,638,929.5 43,572,740.0
 
Load Profile Per Second Per Transaction Per Exec Per Call
~~~~~~~~~~~~~~~ --------------- --------------- --------- ---------
DB Time(s): 8.0 92.3 0.00 19.20
DB CPU(s): 8.0 92.1 0.00 19.16
Logical read (blocks): 5,153,440.6 59,631,848.6
 

This is really good. This is x2.8 more LIOPS than the maximum I had on AWS EC2. A x2 factor is expected because I have x2 vCPUs here, but the CPU is also faster. So, two conclusions here:

  • There is no technical reason behind the rejection of the core factor on Amazon EC2. It is only a marketing decision.
  • For sure, for the same Oracle Database cost, Oracle Cloud outperforms Amazon EC2 because it is cheaper (not to mention the discounts you will get if you go to the Oracle Cloud).
So what?

This is not a benchmark. The LIOPS may depend a lot on your application behaviour, and CPU is not the only resource to take care of. But for sure, the Oracle Public Cloud IaaS is fast and costs less when used for Oracle products, because of the rules on the core factor. But those rules are for information only. Check your contract for the legal stuff.

 

Cet article Oracle Public Cloud: 2 OCPU for 1 proc. license est apparu en premier sur Blog dbi services.

Oracle 12c – RMAN list failure does not show any failure even if there is one

Wed, 2017-02-08 04:11

Relying too much on the RMAN Data Recovery Advisor is not always the best idea. In a lot of situations it tells you the right things; however, sometimes it does not tell you the optimal things, and sometimes RMAN list failure does not show any failure at all, even if there is one.

So … let's quickly simulate the loss of a datafile during the normal runtime of the database. The result is a clear error message which says that datafile 5 is missing.
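One simple way to provoke this is to remove the datafile at OS level while the instance is up (on a test database only, of course), for example:

rm /u01/oradata/DBTEST1/hrDBTEST01.dbf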

SQL> select count(*) from hr.employees;
select count(*) from hr.employees
                        *
ERROR at line 1:
ORA-01116: error in opening database file 5
ORA-01110: data file 5: '/u01/oradata/DBTEST1/hrDBTEST01.dbf'
ORA-27041: unable to open file
Linux-x86_64 Error: 2: No such file or directory
Additional information: 3

Of course, the error message is immediately reflected in the alert.log as well, where it clearly says that Oracle is unable to open file number 5.

Errors in file /u00/app/oracle/diag/rdbms/dbtest1/DBTEST1/trace/DBTEST1_smon_17115.trc:
ORA-01116: error in opening database file 5
ORA-01110: data file 5: '/u01/oradata/DBTEST1/hrDBTEST01.dbf'
ORA-27041: unable to open file
Linux-x86_64 Error: 2: No such file or directory

Only the RMAN Data Recovery Advisor does not know what is going on.

RMAN> list failure;

using target database control file instead of recovery catalog
Database Role: PRIMARY

no failures found that match specification

Of course, I could shut down the DB and then start it up again, which would trigger a health check, but shutting down an instance is not always so easy on production systems. Especially when only one datafile is missing, all the others are available, and only a part of the application is affected.

The solution to that issue is to run a manual health check. Quite a lot of health checks can be run manually, as shown in the following documentation.

https://docs.oracle.com/database/121/ADMIN/diag.htm#ADMIN11269

I start with the DB Structure Integrity Check. This check verifies the integrity of database files and reports failures if these files are inaccessible, corrupt or inconsistent.

SQL> begin
  2  dbms_hm.run_check ('DB Structure Integrity Check','Williams Check 00000001');
  3  end;
  4  /

PL/SQL procedure successfully completed.

After running the health check, Oracle finds the failure, and in the alert.log you will see an entry like the following:

Checker run found 1 new persistent data failures

If you want to take a look at what exactly the health check found, you can invoke ADRCI and execute the "show hm_run" command.

oracle@vmoratest1:/oracle/workshop/bombs/ [DBTEST1] adrci

ADRCI: Release 12.1.0.2.0 - Production on Tue Feb 7 16:02:21 2017

Copyright (c) 1982, 2014, Oracle and/or its affiliates.  All rights reserved.

ADR base = "/u00/app/oracle"
adrci> show homes
ADR Homes:
diag/clients/user_oracle/host_1833655127_82
diag/tnslsnr/vmoratest1/listener
diag/rdbms/cdb1p/CDB1P
diag/rdbms/dbtest1/DBTEST1
diag/rdbms/rcat/RCAT

adrci> set home diag/rdbms/dbtest1/DBTEST1

adrci> show hm_run

ADR Home = /u00/app/oracle/diag/rdbms/dbtest1/DBTEST1:
*************************************************************************

...
...

**********************************************************
HM RUN RECORD 9
**********************************************************
   RUN_ID                        206
   RUN_NAME                      Williams Check 00000001
   CHECK_NAME                    DB Structure Integrity Check
   NAME_ID                       2
   MODE                          0
   START_TIME                    2017-02-07 16:03:44.431601 +01:00
   RESUME_TIME                   <NULL>
   END_TIME                      2017-02-07 16:03:44.478127 +01:00
   MODIFIED_TIME                 2017-02-07 16:03:44.478127 +01:00
   TIMEOUT                       0
   FLAGS                         0
   STATUS                        5
   SRC_INCIDENT_ID               0
   NUM_INCIDENTS                 0
   ERR_NUMBER                    0
   REPORT_FILE                   <NULL>
9 rows fetched

adrci>

However, if you take a look at the HM RUN report, it gives you an error.

adrci> show report hm_run 'Williams Check 00000001'
DIA-48415: Syntax error found in string [show report hm_run 'Williams Check 00000001'] at column [44]

This is not a bug. The HM run name must contain only alphanumeric characters and underscores. So … better not use spaces in your run name. The following would have been better:

SQL> begin
  2  dbms_hm.run_check ('DB Structure Integrity Check','WilliamsCheck');
  3  end;
  4  /

PL/SQL procedure successfully completed.
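With a compliant run name, the report can then be retrieved from adrci, for example:

adrci exec="set home diag/rdbms/dbtest1/DBTEST1; show report hm_run WilliamsCheck"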

In case the "adrci show report hm_run" command does not work for you, it is not the end of the story. We can still look up the v$hm_finding view.

SQL> select RUN_ID, TIME_DETECTED, STATUS, DESCRIPTION, DAMAGE_DESCRIPTION from v$hm_finding where run_id = '206';

RUN_ID TIME_DETECTED                STATUS       DESCRIPTION                                  DAMAGE_DESCRIPTION
------ ---------------------------- ------------ -------------------------------------------- --------------------------------------------
   206 07-FEB-17 04.03.44.475000 PM OPEN         Datafile 5: '/u01/oradata/DBTEST1/hrDBTEST01 Some objects in tablespace HR might be unava
                                                 .dbf' is missing                             ilable

Now let’s check the RMAN “list failure” again.

RMAN> list failure;

Database Role: PRIMARY

List of Database Failures
=========================

Failure ID Priority Status    Time Detected        Summary
---------- -------- --------- -------------------- -------
2          HIGH     OPEN      07-FEB-2017 15:39:38 One or more non-system datafiles are missing


RMAN> advise failure;
...
Automated Repair Options
========================
Option Repair Description
------ ------------------
1      Restore and recover datafile 5
  Strategy: The repair includes complete media recovery with no data loss
  Repair script: /u00/app/oracle/diag/rdbms/dbtest1/DBTEST1/hm/reco_668410907.hm

  
RMAN> repair failure preview;

Strategy: The repair includes complete media recovery with no data loss
Repair script: /u00/app/oracle/diag/rdbms/dbtest1/DBTEST1/hm/reco_668410907.hm

contents of repair script:
   # restore and recover datafile
   sql 'alter database datafile 5 offline';
   restore ( datafile 5 );
   recover datafile 5;
   sql 'alter database datafile 5 online';
Conclusion

The Oracle Data Recovery Advisor is quite good, but sometimes you need to push it in the right direction. Besides that, take care of the naming convention that you use for your health check runs. ;-)

 

Cet article Oracle 12c – RMAN list failure does not show any failure even if there is one est apparu en premier sur Blog dbi services.

Exadata Express Cloud Service: SQL and Optimizer trace

Mon, 2017-02-06 15:51

The Oracle PDBaaS is for database developers. And database developers may need to trace what happens with their queries: SQL trace and Optimizer trace. Let’s see what we can do on the Exadata Express Cloud Service.

V$DIAG_TRACE_FILE_CONTENTS

On the managed PDBaaS you don’t have access to the filesystem. But in 12.2 you have a view that can display the content of the trace files: V$DIAG_TRACE_FILE lists the files you can access (that pertain to your container) and V$DIAG_TRACE_FILE_CONTENTS can display the content.
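
V$DIAG_TRACE_FILE can be queried directly to list the files that are visible to you; here is a minimal sketch (ADR_HOME, TRACE_FILENAME and CHANGE_TIME are the useful columns, as far as I can tell):

SQL> select adr_home, trace_filename, change_time from V$DIAG_TRACE_FILE order by change_time;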

Here is an example of how to read it.
I get my tracefile name from v$process:

SQL> column tracefile new_value tracefile
SQL> select tracefile from v$process where addr=(select paddr from v$session where sid=sys_context('userenv','sid'));
 
TRACEFILE
--------------------------------------------------------------------------------
/u02/app/oracle/diag/rdbms/cfcdba1/cfcdba1_1/trace/cfcdba1_1_ora_24389.trc

And read the ‘payload’ from the view:

SQL> set linesize 4000 pagesize 0 trimspool on
SQL> spool last.trc
SQL> select payload from V$DIAG_TRACE_FILE_CONTENTS where trace_filename='&tracefile';
old 1: select payload from V$DIAG_TRACE_FILE_CONTENTS where trace_filename='&tracefile'
new 1: select payload from V$DIAG_TRACE_FILE_CONTENTS where trace_filename='/u02/app/oracle/diag/rdbms/cfcdba1/cfcdba1_1/trace/cfcdba1_1_ora_24389.trc'
 
no rows selected

Of course, I didn’t enable any trace, so the file is empty.

10046

You can’t enable SQL Trace in the Exadata Express Cloud Service. This is disabled by the lockdown profile and by not granting execute on the packages that can do it.
Here is what I tried. Please tell me if you have other ideas:
SQL> alter session set sql_trace=true;
ERROR:
ORA-01031: insufficient privileges
 
SQL> exec sys.dbms_support.start_trace;
BEGIN sys.dbms_support.start_trace; END;
 
*
ERROR at line 1:
ORA-06550: line 1, column 7:
PLS-00201: identifier 'SYS.DBMS_SUPPORT' must be declared
ORA-06550: line 1, column 7:
PL/SQL: Statement ignored
 
SQL> exec dbms_session.set_sql_trace(true);
BEGIN dbms_session.set_sql_trace(true); END;
 
*
ERROR at line 1:
ORA-01031: insufficient privileges
ORA-06512: at "SYS.DBMS_SESSION", line 178
ORA-06512: at line 1
 
SQL> exec for i in (select sid,serial# from v$session where sid=sys_context('userenv','sid')) loop sys.dbms_system.set_sql_trace_in_session(i.sid,i.serial#,true); end loop;
BEGIN for i in (select sid,serial# from v$session where sid=sys_context('userenv','sid')) loop sys.dbms_system.set_sql_trace_in_session(i.sid,i.serial#,true); end loop; END;
 
*
ERROR at line 1:
ORA-06550: line 1, column 96:
PLS-00201: identifier 'SYS.DBMS_SYSTEM' must be declared
ORA-06550: line 1, column 96:
PL/SQL: Statement ignored
 
SQL> exec for i in (select sid,serial# from v$session where sid=sys_context('userenv','sid')) loop sys.dbms_support.start_trace_in_session(i.sid,i.serial#,true); end loop;
BEGIN for i in (select sid,serial# from v$session where sid=sys_context('userenv','sid')) loop sys.dbms_support.start_trace_in_session(i.sid,i.serial#,true); end loop; END;
 
*
ERROR at line 1:
ORA-06550: line 1, column 96:
PLS-00201: identifier 'SYS.DBMS_SUPPORT' must be declared
ORA-06550: line 1, column 96:
PL/SQL: Statement ignored
 
SQL> alter session set events 'sql_trace';
ERROR:
ORA-01031: insufficient privileges
 
SQL> exec execute immediate q'{alter session set events 'sql_trace'}';
BEGIN execute immediate q'{alter session set events 'sql_trace'}'; END;
 
*
ERROR at line 1:
ORA-01031: insufficient privileges
ORA-06512: at line 1
 
SQL> create or replace procedure sys.my_sql_trace as begin execute immediate q'{alter session set events 'sql_trace'}'; end;
2 /
create or replace procedure sys.my_sql_trace as begin execute immediate q'{alter session set events 'sql_trace'}'; end;
*
ERROR at line 1:
ORA-01031: insufficient privileges

No doubt, we can’t sql_trace. I’m not sure it is a big problem because we have all options in the Exadata Express Cloud Service, so we have ASH, SQL Monitor, etc. I’m not saying that sql_trace is not useful anymore – there are cases where you need to see the wait events one by one – but from a development point of view, a plan with execution statistics should be sufficient.
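
Just to illustrate what I mean by a plan with execution statistics, here is a minimal sketch (run against DUAL; any of your own queries works the same way):

SQL> select /*+ gather_plan_statistics */ * from dual;
SQL> select * from table(dbms_xplan.display_cursor(null,null,'ALLSTATS LAST'));

The ALLSTATS LAST format shows actual rows, buffers and time per plan operation for the last execution, which covers most development tuning needs without a 10046 trace.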

10053

Tracing the CBO is a different thing. There is no alternative when you want to understand the choices of the optimizer. It is not something I use every day, but there are some cases where it is the only tool to troubleshoot.

SQL> alter session set events 'trace [SQL_Optimizer.*]';
ERROR:
ORA-01031: insufficient privileges

The ALTER SESSION SET EVENTS statement is blocked by the S20 lockdown profile. But there is another way.

I’ve run the following query:

SQL> select * from dual;
X
 
SQL> select * from table(dbms_xplan.display_cursor);
SQL_ID a5ks9fhw2v9s1, child number 0
-------------------------------------
select * from dual
 
Plan hash value: 272002086
 
----------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
----------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | | | 2 (100)| |
| 1 | TABLE ACCESS STORAGE FULL| DUAL | 1 | 2 | 2 (0)| 00:00:01 |
----------------------------------------------------------------------------------
 
13 rows selected.

Now I want more information about the execution plan.


SQL> exec dbms_sqldiag.dump_trace('a5ks9fhw2v9s1', 0, 'Compiler','Compiler');
PL/SQL procedure successfully completed.

Cool. It seems I’m allowed to do that. This procedure sets the SQL Compiler event, but it seems that this is allowed when done from the procedure.

I can now get the trace:


SQL> select tracefile from v$process where addr=(select paddr from v$session where sid=sys_context('userenv','sid'));
/u02/app/oracle/diag/rdbms/cfcdba1/cfcdba1_1/trace/cfcdba1_1_ora_24389_Compiler.trc
 
SQL> spool last.trc
SQL> select payload from V$DIAG_TRACE_FILE_CONTENTS
2 where adr_home=regexp_replace('&tracefile','^(.*)/trace/([^/]*)','\1')
3 and trace_filename=regexp_replace('&tracefile','^(.*)/trace/([^/]*)','\2')
4 ;
old 2: where adr_home=regexp_replace('&tracefile','^(.*)/trace/([^/]*)','\1')
new 2: where adr_home=regexp_replace('/u02/app/oracle/diag/rdbms/cfcdba1/cfcdba1_1/trace/cfcdba1_1_ora_24389_Compiler.trc','^(.*)/trace/([^/]*)','\1')
old 3: and trace_filename=regexp_replace('&tracefile','^(.*)/trace/([^/]*)','\2')
new 3: and trace_filename=regexp_replace('/u02/app/oracle/diag/rdbms/cfcdba1/cfcdba1_1/trace/cfcdba1_1_ora_24389_Compiler.trc','^(.*)/trace/([^/]*)','\2')
Trace file /u02/app/oracle/diag/rdbms/cfcdba1/cfcdba1_1/trace/cfcdba1_1_ora_24389_Compiler.trc

This outputs the full 10053 trace:


Oracle Database 12c Enterprise Edition Release 12.2.0.0.3 - 64bit Production
Build label: RDBMS_12.2.0.0.3_LINUX.X64_160720
ORACLE_HOME: /u01/app/oracle/product/12.2.0.0.3/dbhome_1
System name: Linux
Node name: cfcldx0171.usdc2.oraclecloud.com
Release: 2.6.39-400.264.1.el6uek.x86_64
Version: #1 SMP Wed Aug 26 16:42:25 PDT 2015
Machine: x86_64
Storage: Exadata
Instance name: cfcdba1_1
Redo thread mounted by this instance: 1
Oracle process number: 109
Unix process pid: 24389, image: oracle@cfcldx0171.usdc2.oraclecloud.com
 
*** 2017-02-05T20:42:38.580323+00:00 (EPTDOJVM1KG(47))
 
*** SESSION ID:(2343.25237) 2017-02-05T20:42:38.580349+00:00
*** CLIENT ID:() 2017-02-05T20:42:38.580354+00:00
*** SERVICE NAME:(eptdojvm1kg.usdc2.oraclecloud.com) 2017-02-05T20:42:38.580358+00:00
*** MODULE NAME:(SQL*Plus) 2017-02-05T20:42:38.580363+00:00
*** ACTION NAME:() 2017-02-05T20:42:38.580368+00:00
*** CLIENT DRIVER:(SQL*PLUS) 2017-02-05T20:42:38.580372+00:00
*** CONTAINER ID:(47) 2017-02-05T20:42:38.580377+00:00
 
Enabling tracing for cur#=6 sqlid=bqf9h9bhb6c88 recursive
Parsing cur#=6 sqlid=bqf9h9bhb6c88 len=45
sql=/* SQL Analyze(2343,0) */ select * from dual
 
End parsing of cur#=6 sqlid=bqf9h9bhb6c88
Semantic Analysis cur#=6 sqlid=bqf9h9bhb6c88
OPTIMIZER INFORMATION
 
******************************************
 
----- Current SQL Statement for this session (sql_id=3u37gu3fhx3q1) -----
/* SQL Analyze(2343,0) */ select * from dual
----- PL/SQL Stack -----
----- PL/SQL Call Stack -----
object line object
handle number name
0x3c5cdebc38 161 package body SYS.DBMS_SQLTUNE_INTERNAL.I_PROCESS_SQL_CALLOUT
0x3c5cdebc38 14080 package body SYS.DBMS_SQLTUNE_INTERNAL.I_PROCESS_SQL
0x3e17d89590 1599 package body SYS.DBMS_SQLDIAG.DUMP_TRACE
0x25dda27db0 1 anonymous block
...

Unified Trace Service views

Those views are new in 12.2 and come in several variants that expose only a subset of the trace files, so that you can grant read access only to 10046 (SQL Trace) or 10053 (Optimizer Trace) content, for all sessions or only for your own session.

GV$DIAG_TRACE_FILE (from x$dbgtflist) and GV$DIAG_TRACE_FILE_CONTENTS (from x$dbgtfview) show all files from the ADR traces
GV$DIAG_APP_TRACE_FILE (from x$dbgatflist) is a subset showing all files containing SQL Trace or Optimizer Trace
GV$DIAG_SQL_TRACE_RECORDS (from x$dbgtfsqlt) is a subset showing all files containing SQL Trace
GV$DIAG_OPT_TRACE_RECORDS (from x$dbgtfoptt) is a subset showing all files containing Optimizer Trace
V$DIAG_SESS_SQL_TRACE_RECORDS (from x$dbgtfssqlt) and V$DIAG_SESS_OPT_TRACE_RECORDS (from x$dbgtfsoptt) are similar, but for your session (and then no GV$ as your session is on one instance only).

A new role, APPLICATION_TRACE_VIEWER, grants SELECT on the views that show only SQL and Optimizer traces.
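
Here is a hedged sketch of how this can be used (APP_DEV is a hypothetical grantee, and I assume the session-level views expose the same PAYLOAD column as V$DIAG_TRACE_FILE_CONTENTS):

SQL> grant APPLICATION_TRACE_VIEWER to APP_DEV;
SQL> select payload from V$DIAG_SESS_OPT_TRACE_RECORDS;

This way a developer can read his own optimizer traces without any access to the server filesystem.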

Alert.log

This is not new in 12.2: you can query V$DIAG_ALERT_EXT to see the alert.log content. Of course, from the PDB you will see only PDB related content, as shown below.
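
A minimal sketch of the kind of query used (ORIGINATING_TIMESTAMP and MESSAGE_TEXT are the relevant columns):

SQL> select originating_timestamp, message_text from V$DIAG_ALERT_EXT order by originating_timestamp;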

Endian type of dictionary set to little
Autotune of undo retention is turned on.
This instance was first to open pluggable database EPTDOJVM1KG (container=47)
attach called for domid 47 (domuid: 0x7fa9e59a, options: 0x0, pid: 111297)
queued attach broadcast request 0x3e26185d68
[111297] Successfully onlined Undo Tablespace 2.
Undo initialization finished serial:0 start:2570370849 end:2570371107 diff:258 ms (0.3 seconds)
Database Characterset for EPTDOJVM1KG is AL32UTF8
JIT: pid 111297 requesting full stop
detach called for domid 47 (domuid: 0x7fa9e59a, options: 0x0, pid: 111297)
queued detach broadcast request 0x3e26185d18
Autotune of undo retention is turned on.
This instance was first to open pluggable database EPTDOJVM1KG (container=47)
attach called for domid 47 (domuid: 0x7fa9e59a, options: 0x0, pid: 111297)
queued attach broadcast request 0x3e26185cc8
Endian type of dictionary set to little
[111297] Successfully onlined Undo Tablespace 2.
Undo initialization finished serial:0 start:2570382111 end:2570382464 diff:353 ms (0.4 seconds)
Deleting old file#8 from file$
Deleting old file#9 from file$
Deleting old file#10 from file$
Deleting old file#11 from file$
Deleting old file#12 from file$
Deleting old file#14 from file$
Deleting old file#17 from file$
Adding new file#486 to file$(old file#8)
Adding new file#487 to file$(old file#9)
Adding new file#488 to file$(old file#10)
Adding new file#489 to file$(old file#11)
Adding new file#490 to file$(old file#12)
Adding new file#491 to file$(old file#14)
Adding new file#492 to file$(old file#17)
Successfully created internal service eptdojvm1kg.usdc2.oraclecloud.com at open
Database Characterset for EPTDOJVM1KG is AL32UTF8
Opatch validation is skipped for PDB EPTDOJVM1KG (con_id=0)
Opening pdb with no Resource Manager plan active
Creating new database key for new master key and wallet
Creating new database key with the new master key
Retiring: ena 2 flag 6 mkloc 0
encrypted key 32ce589b50b3105f9419d0a86ce8e7f400000000000000000000000000000000
mkid cbcd33d929f64fd0bf1367f6e69f2dd5
Creating: ena 2 flag e mkloc 1
encrypted key baf27c1e11623fe9837f678df2e48f8a00000000000000000000000000000000
mkid 2d99ba8694054fd5bf41f998513e1d4c
New database key and new master key created successfully
create temporary tablespace "EPTDOJVM1KG_TEMP" tempfile size 20M extent management local uniform size 16M
Force tablespace EPTDOJVM1KG_TEMP to be encrypted with AES128
Completed: create temporary tablespace "EPTDOJVM1KG_TEMP" tempfile size 20M extent management local uniform size 16M
create BIGFILE tablespace "EPTDOJVM1KG_DATA" datafile size 10485760 autoextend on next 1M maxsize UNLIMITED extent management local autoallocate segment space management auto
Force tablespace EPTDOJVM1KG_DATA to be encrypted with AES128
Completed: create BIGFILE tablespace "EPTDOJVM1KG_DATA" datafile size 10485760 autoextend on next 1M maxsize UNLIMITED extent management local autoallocate segment space management auto
Resize operation completed for file# 486, old size 307200K, new size 317440K
ALTER SYSTEM SET pdb_lockdown='S20' SCOPE=BOTH PDB='EPTDOJVM1KG';
...

So what?

We can access SQL Trace and Optimizer trace files from the database, and this is very nice. Access to traces has always been a problem because the files are on the server, may contain sensitive data, etc. Having views that give access easily and in a controlled way is a very nice 12.2 feature.
I don’t know if enabling Optimizer trace on Exadata Express Cloud Service is expected, or just something that was forgotten by the lockdown profile. I hope the first hypothesis is the right one and I hope that we will be allowed to enable SQL Trace as well. This service is for developers, and I’m a big advocate of giving all tools to developers so that performance is addressed before production.

 

Cet article Exadata Express Cloud Service: SQL and Optimizer trace est apparu en premier sur Blog dbi services.

Oracle – RMAN Backups to CIFS

Mon, 2017-02-06 08:26

Not too often, but sometimes you need to offload your RMAN backups and your DataPump exports to CIFS. You might run out of space on your NFS server, or you are a Windows shop that has only CIFS shares, and then you might need to put your RMAN backups on CIFS.

One very important MOS Note regarding RMAN backups to CIFS is the following:

Is use of CIFS Protocol for RMAN backups supported? (Doc ID 444809.1)

Running an RDBMS on CIFS is clearly not supported because CIFS does not guarantee atomic writes of 512-byte blocks. RMAN, on the other hand, does not rely on 512-byte atomic writes, so it is fine to use RMAN with CIFS.

However, CIFS is not certified by Oracle and the RDBMS development team cannot be involved in case there are any issues. The MOS note was last updated in November 2013, so it might not be a bad idea to double-check it with Oracle again.

In earlier OS and DB versions, there used to be issues with filesystemio_options set to SETALL, DIRECTIO or ASYNCH, so I wanted to double-check whether this is still the case with more recent releases.

I have done the tests with Oracle 11.2.0.4 and 12.1.0.2 on OEL 6.8 with Samba 3.6.

[root@oel001 ~]# cat /etc/oracle-release
Oracle Linux Server release 6.8
[root@oel001 ~]# uname -a
Linux oel001 4.1.12-61.1.25.el6uek.x86_64 #2 SMP Wed Jan 11 19:13:03 PST 2017 x86_64 x86_64 x86_64 GNU/Linux
[root@oel001 ~]# rpm -qa | grep samba-3.6.23
samba-3.6.23-36.0.1.el6_8.x86_64

I start with filesystemio_options SETALL and run RMAN and DataPump with 16 parallel sessions, to make sure that I hit the CIFS share quite hard. Before we start the test, we have to mount the CIFS share. All mount options are documented at the following link.

https://www.samba.org/samba/docs/man/manpages-3/mount.cifs.8.html

For my case, the most important CIFS mount options are:

user = the Windows/CIFS user to connect to the share
rw = mount read-write
uid = sets the uid that will own all files and directories on the mounted filesystem
gid = sets the gid that will own all files and directories on the mounted filesystem
rsize = default network read size
wsize = default network write size

Regarding rsize/wsize, I have used the values I usually take for NFS mounts.

[root@oel001 ~]# mount -t cifs //10.0.2.2/rman /rman -o rsize=32768,wsize=32768,rw,user=wrs,uid=oracle,gid=dba --verbose
Password:
mount.cifs kernel mount options: ip=10.0.2.2,unc=\\10.0.2.2\rman,rsize=32768,wsize=32768,uid=54321,gid=54322,ver=1,user=wrs,pass=***                          *****

[root@oel001 ~]# mount | grep rman
//10.0.2.2/rman on /rman type cifs (rw)

Before running any RMAN backups to it, make sure that you can create files with the oracle user. If it is failing at this step already, then you can stop here. This has to work before continuing any further.

oracle@oel001:/home/oracle/ [OCM121] cd /rman/backup/
oracle@oel001:/rman/backup/ [OCM121] touch new_file
oracle@oel001:/rman/backup/ [OCM121] ls -l new_file
-rwxr-xr-x 1 oracle dba 0 Feb  6 09:22 new_file
oracle@oel001:/rman/backup/ [OCM121] rm new_file

OK. Let’s start with 11.2.0.4 first. The database has filesystemio_options=SETALL, meaning that the database uses asynchronous I/O and direct I/O at the same time.
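A quick sketch of how to check and, if needed, change this parameter (it is a static parameter, so changing it requires an instance restart):

SQL> show parameter filesystemio_options
SQL> alter system set filesystemio_options=SETALL scope=spfile;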

My RMAN backup tests will be a regular backup with 16 parallel sessions to that share, and a backup of the Fast Recovery Area to the same share as well.

CONFIGURE DEVICE TYPE DISK PARALLELISM 16 BACKUP TYPE TO COMPRESSED BACKUPSET;
BACKUP DATABASE FORMAT '/rman/backup/%U';
BACKUP RECOVERY AREA TO DESTINATION '/rman/backup';

My DataPump test will be a full export with 16 parallel sessions.

expdp system/manager directory=rman dumpfile=rman%U.dmp parallel=16 full=y

OK. Let’s start with the first backup.

oracle@oel001:/home/oracle/ [OCM112] rman target /

Recovery Manager: Release 11.2.0.4.0 - Production on Mon Feb 6 13:31:37 2017

Copyright (c) 1982, 2011, Oracle and/or its affiliates.  All rights reserved.

connected to target database: OCM112 (DBID=177109848)

RMAN> backup database format '/rman/backup/%U';

Starting backup at 06-FEB-2017 13:31:45
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=128 device type=DISK
allocated channel: ORA_DISK_2
channel ORA_DISK_2: SID=249 device type=DISK
allocated channel: ORA_DISK_3
channel ORA_DISK_3: SID=366 device type=DISK
allocated channel: ORA_DISK_4
channel ORA_DISK_4: SID=12 device type=DISK
allocated channel: ORA_DISK_5
channel ORA_DISK_5: SID=129 device type=DISK
allocated channel: ORA_DISK_6
channel ORA_DISK_6: SID=250 device type=DISK
allocated channel: ORA_DISK_7
channel ORA_DISK_7: SID=360 device type=DISK
allocated channel: ORA_DISK_8
channel ORA_DISK_8: SID=14 device type=DISK
allocated channel: ORA_DISK_9
channel ORA_DISK_9: SID=130 device type=DISK
allocated channel: ORA_DISK_10
channel ORA_DISK_10: SID=251 device type=DISK
allocated channel: ORA_DISK_11
channel ORA_DISK_11: SID=367 device type=DISK
allocated channel: ORA_DISK_12
channel ORA_DISK_12: SID=15 device type=DISK
allocated channel: ORA_DISK_13
channel ORA_DISK_13: SID=131 device type=DISK
allocated channel: ORA_DISK_14
channel ORA_DISK_14: SID=252 device type=DISK
allocated channel: ORA_DISK_15
channel ORA_DISK_15: SID=368 device type=DISK
allocated channel: ORA_DISK_16
channel ORA_DISK_16: SID=16 device type=DISK
channel ORA_DISK_1: starting compressed full datafile backup set
channel ORA_DISK_1: specifying datafile(s) in backup set
input datafile file number=00001 name=+DATA11G/ocm112/datafile/system.261.927382121
channel ORA_DISK_1: starting piece 1 at 06-FEB-2017 13:31:48
channel ORA_DISK_2: starting compressed full datafile backup set
channel ORA_DISK_2: specifying datafile(s) in backup set
input datafile file number=00003 name=+DATA11G/ocm112/datafile/undotbs1.263.927382145
channel ORA_DISK_2: starting piece 1 at 06-FEB-2017 13:31:48
channel ORA_DISK_3: starting compressed full datafile backup set
channel ORA_DISK_3: specifying datafile(s) in backup set
input datafile file number=00002 name=+DATA11G/ocm112/datafile/sysaux.262.927382135
channel ORA_DISK_3: starting piece 1 at 06-FEB-2017 13:31:48
channel ORA_DISK_4: starting compressed full datafile backup set
channel ORA_DISK_4: specifying datafile(s) in backup set
input datafile file number=00004 name=+DATA11G/ocm112/datafile/example.265.927382167
channel ORA_DISK_4: starting piece 1 at 06-FEB-2017 13:31:48
channel ORA_DISK_5: starting compressed full datafile backup set
channel ORA_DISK_5: specifying datafile(s) in backup set
input datafile file number=00005 name=+DATA11G/ocm112/datafile/users.266.927382177
channel ORA_DISK_5: starting piece 1 at 06-FEB-2017 13:31:48
channel ORA_DISK_5: finished piece 1 at 06-FEB-2017 13:32:03
piece handle=/rman/backup/0krrtai4_1_1 tag=TAG20170206T133147 comment=NONE
channel ORA_DISK_5: backup set complete, elapsed time: 00:00:15
channel ORA_DISK_3: finished piece 1 at 06-FEB-2017 13:32:13
piece handle=/rman/backup/0irrtai4_1_1 tag=TAG20170206T133147 comment=NONE
channel ORA_DISK_3: backup set complete, elapsed time: 00:00:25
channel ORA_DISK_4: finished piece 1 at 06-FEB-2017 13:32:13
piece handle=/rman/backup/0jrrtai4_1_1 tag=TAG20170206T133147 comment=NONE
channel ORA_DISK_4: backup set complete, elapsed time: 00:00:25
channel ORA_DISK_1: finished piece 1 at 06-FEB-2017 13:32:23
piece handle=/rman/backup/0grrtai4_1_1 tag=TAG20170206T133147 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:35
channel ORA_DISK_2: finished piece 1 at 06-FEB-2017 13:32:23
piece handle=/rman/backup/0hrrtai4_1_1 tag=TAG20170206T133147 comment=NONE
channel ORA_DISK_2: backup set complete, elapsed time: 00:00:35
Finished backup at 06-FEB-2017 13:32:23

Starting Control File and SPFILE Autobackup at 06-FEB-2017 13:32:23
piece handle=+FRA11G/ocm112/autobackup/2017_02_06/s_935242343.266.935242347 comment=NONE
Finished Control File and SPFILE Autobackup at 06-FEB-2017 13:32:30

RMAN>

Cool… it seems to work perfectly. OK, let’s do the backup of the FRA to CIFS as well.

RMAN> BACKUP RECOVERY AREA TO DESTINATION '/rman/backup';

Starting backup at 06-FEB-2017 14:48:35
using channel ORA_DISK_1
using channel ORA_DISK_2
using channel ORA_DISK_3
using channel ORA_DISK_4
using channel ORA_DISK_5
using channel ORA_DISK_6
using channel ORA_DISK_7
using channel ORA_DISK_8
using channel ORA_DISK_9
using channel ORA_DISK_10
using channel ORA_DISK_11
using channel ORA_DISK_12
using channel ORA_DISK_13
using channel ORA_DISK_14
using channel ORA_DISK_15
using channel ORA_DISK_16
specification does not match any datafile copy in the repository
specification does not match any backup set in the repository
channel ORA_DISK_1: starting compressed archived log backup set
channel ORA_DISK_1: specifying archived log(s) in backup set
input archived log thread=1 sequence=77 RECID=3 STAMP=935241746
channel ORA_DISK_1: starting piece 1 at 06-FEB-2017 14:48:36
channel ORA_DISK_2: starting compressed archived log backup set
channel ORA_DISK_2: specifying archived log(s) in backup set
input archived log thread=1 sequence=78 RECID=4 STAMP=935241790
channel ORA_DISK_2: starting piece 1 at 06-FEB-2017 14:48:36
channel ORA_DISK_3: starting compressed archived log backup set
channel ORA_DISK_3: specifying archived log(s) in backup set
input archived log thread=1 sequence=79 RECID=5 STAMP=935242521
channel ORA_DISK_3: starting piece 1 at 06-FEB-2017 14:48:36
channel ORA_DISK_4: starting compressed archived log backup set
channel ORA_DISK_4: specifying archived log(s) in backup set
input archived log thread=1 sequence=80 RECID=6 STAMP=935242549
channel ORA_DISK_4: starting piece 1 at 06-FEB-2017 14:48:37
channel ORA_DISK_5: starting compressed archived log backup set
channel ORA_DISK_5: specifying archived log(s) in backup set
input archived log thread=1 sequence=81 RECID=7 STAMP=935242639
channel ORA_DISK_5: starting piece 1 at 06-FEB-2017 14:48:37
channel ORA_DISK_6: starting compressed archived log backup set
channel ORA_DISK_6: specifying archived log(s) in backup set
input archived log thread=1 sequence=82 RECID=8 STAMP=935242724
channel ORA_DISK_6: starting piece 1 at 06-FEB-2017 14:48:38
channel ORA_DISK_1: finished piece 1 at 06-FEB-2017 14:48:38
piece handle=/rman/backup/OCM112/backupset/2017_02_06/o1_mf_annnn_TAG20170206T144835_d9jzond8_.bkp tag=TAG20170206T144835 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:02
channel ORA_DISK_2: finished piece 1 at 06-FEB-2017 14:48:39
piece handle=/rman/backup/OCM112/backupset/2017_02_06/o1_mf_annnn_TAG20170206T144835_d9jzondy_.bkp tag=TAG20170206T144835 comment=NONE
channel ORA_DISK_2: backup set complete, elapsed time: 00:00:03
channel ORA_DISK_3: finished piece 1 at 06-FEB-2017 14:48:39
piece handle=/rman/backup/OCM112/backupset/2017_02_06/o1_mf_annnn_TAG20170206T144835_d9jzonqf_.bkp tag=TAG20170206T144835 comment=NONE
channel ORA_DISK_3: backup set complete, elapsed time: 00:00:03
channel ORA_DISK_4: finished piece 1 at 06-FEB-2017 14:48:39
piece handle=/rman/backup/OCM112/backupset/2017_02_06/o1_mf_annnn_TAG20170206T144835_d9jzoop8_.bkp tag=TAG20170206T144835 comment=NONE
channel ORA_DISK_4: backup set complete, elapsed time: 00:00:02
channel ORA_DISK_5: finished piece 1 at 06-FEB-2017 14:48:39
piece handle=/rman/backup/OCM112/backupset/2017_02_06/o1_mf_annnn_TAG20170206T144835_d9jzooxq_.bkp tag=TAG20170206T144835 comment=NONE
channel ORA_DISK_5: backup set complete, elapsed time: 00:00:02
channel ORA_DISK_6: finished piece 1 at 06-FEB-2017 14:48:39
piece handle=/rman/backup/OCM112/backupset/2017_02_06/o1_mf_annnn_TAG20170206T144835_d9jzopqx_.bkp tag=TAG20170206T144835 comment=NONE
channel ORA_DISK_6: backup set complete, elapsed time: 00:00:01
Finished backup at 06-FEB-2017 14:48:39

Starting Control File and SPFILE Autobackup at 06-FEB-2017 14:48:39
piece handle=+FRA11G/ocm112/autobackup/2017_02_06/s_935246919.286.935246921 comment=NONE
Finished Control File and SPFILE Autobackup at 06-FEB-2017 14:48:46

RMAN>

That worked very well too. Now let’s do the DataPump test.

SQL> connect system/manager
Connected.

SQL> create directory rman as '/rman/backup';

Directory created.

oracle@oel001:/home/oracle/ [OCM112] expdp system/manager directory=rman dumpfile=rman%U.dmp parallel=16 full=y

Export: Release 11.2.0.4.0 - Production on Mon Feb 6 13:33:41 2017

Copyright (c) 1982, 2011, Oracle and/or its affiliates.  All rights reserved.

Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production
With the Partitioning, Automatic Storage Management, Oracle Label Security, OLAP,
Data Mining and Real Application Testing options
Starting "SYSTEM"."SYS_EXPORT_FULL_01":  system/******** directory=rman dumpfile=rman%U.dmp parallel=16 full=y
Estimate in progress using BLOCKS method...
Processing object type DATABASE_EXPORT/SCHEMA/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 351.8 MB
. . exported "SH"."COSTS":"COSTS_Q1_1999"                183.5 KB    5884 rows
. . exported "SH"."COSTS":"COSTS_Q2_2000"                119.0 KB    3715 rows
. . exported "SH"."COSTS":"COSTS_Q1_2001"                227.8 KB    7328 rows
. . exported "SH"."COSTS":"COSTS_Q1_2000"                120.6 KB    3772 rows
. . exported "SH"."CUSTOMERS"                            9.853 MB   55500 rows
. . exported "SH"."COSTS":"COSTS_Q3_1998"                131.1 KB    4129 rows
. . exported "SH"."COSTS":"COSTS_Q2_1998"                79.52 KB    2397 rows
. . exported "SH"."COSTS":"COSTS_Q2_1999"                132.5 KB    4179 rows
. . exported "SH"."COSTS":"COSTS_Q1_1998"                139.5 KB    4411 rows
. . exported "SH"."COSTS":"COSTS_Q3_2000"                151.4 KB    4798 rows
. . exported "SH"."COSTS":"COSTS_Q3_2001"                234.4 KB    7545 rows
...
...
...
Processing object type DATABASE_EXPORT/SCHEMA/PUBLIC_SYNONYM/SYNONYM
Processing object type DATABASE_EXPORT/SCHEMA/SYNONYM
Processing object type DATABASE_EXPORT/SCHEMA/TYPE/INC_TYPE
Processing object type DATABASE_EXPORT/SCHEMA/TYPE/TYPE_SPEC
Processing object type DATABASE_EXPORT/SCHEMA/TYPE/GRANT/OWNER_GRANT/OBJECT_GRANT
Processing object type DATABASE_EXPORT/SYSTEM_PROCOBJACT/PRE_SYSTEM_ACTIONS/PROCACT_SYSTEM
. . exported "ORDDATA"."ORDDCM_MAPPING_DOCS"             7.890 KB       1 rows
Processing object type DATABASE_EXPORT/SYSTEM_PROCOBJACT/PROCOBJ
Processing object type DATABASE_EXPORT/SYSTEM_PROCOBJACT/POST_SYSTEM_ACTIONS/PROCACT_SYSTEM
Processing object type DATABASE_EXPORT/SCHEMA/PROCACT_SCHEMA
Processing object type DATABASE_EXPORT/SCHEMA/XMLSCHEMA/XMLSCHEMA
Processing object type DATABASE_EXPORT/SCHEMA/TABLE/TABLE
Processing object type DATABASE_EXPORT/SCHEMA/TABLE/PRE_TABLE_ACTION
Processing object type DATABASE_EXPORT/SCHEMA/TABLE/GRANT/OWNER_GRANT/OBJECT_GRANT
Processing object type DATABASE_EXPORT/SCHEMA/TABLE/COMMENT
Processing object type DATABASE_EXPORT/SCHEMA/PACKAGE/PACKAGE_SPEC
Processing object type DATABASE_EXPORT/SCHEMA/PACKAGE/GRANT/OWNER_GRANT/OBJECT_GRANT
Processing object type DATABASE_EXPORT/SCHEMA/FUNCTION/FUNCTION
Processing object type DATABASE_EXPORT/SCHEMA/FUNCTION/GRANT/OWNER_GRANT/OBJECT_GRANT
Processing object type DATABASE_EXPORT/SCHEMA/PROCEDURE/PROCEDURE
Processing object type DATABASE_EXPORT/SCHEMA/PROCEDURE/GRANT/OWNER_GRANT/OBJECT_GRANT
Processing object type DATABASE_EXPORT/SCHEMA/PACKAGE/COMPILE_PACKAGE/PACKAGE_SPEC/ALTER_PACKAGE_SPEC
Processing object type DATABASE_EXPORT/SCHEMA/FUNCTION/ALTER_FUNCTION
Processing object type DATABASE_EXPORT/SCHEMA/PROCEDURE/ALTER_PROCEDURE
Processing object type DATABASE_EXPORT/SCHEMA/TABLE/INDEX/INDEX
Processing object type DATABASE_EXPORT/SCHEMA/TABLE/INDEX/FUNCTIONAL_INDEX/INDEX
Processing object type DATABASE_EXPORT/SCHEMA/TABLE/CONSTRAINT/CONSTRAINT
Processing object type DATABASE_EXPORT/SCHEMA/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
Processing object type DATABASE_EXPORT/SCHEMA/TABLE/INDEX/STATISTICS/FUNCTIONAL_INDEX/INDEX_STATISTICS
Processing object type DATABASE_EXPORT/SCHEMA/VIEW/VIEW
Processing object type DATABASE_EXPORT/SCHEMA/VIEW/GRANT/OWNER_GRANT/OBJECT_GRANT
Processing object type DATABASE_EXPORT/SCHEMA/VIEW/COMMENT
Processing object type DATABASE_EXPORT/SCHEMA/PACKAGE_BODIES/PACKAGE/PACKAGE_BODY
Processing object type DATABASE_EXPORT/SCHEMA/TYPE/TYPE_BODY
Processing object type DATABASE_EXPORT/SCHEMA/TABLE/CONSTRAINT/REF_CONSTRAINT
Processing object type DATABASE_EXPORT/SCHEMA/TABLE/INDEX/BITMAP_INDEX/INDEX
Processing object type DATABASE_EXPORT/SCHEMA/TABLE/INDEX/STATISTICS/BITMAP_INDEX/INDEX_STATISTICS
Processing object type DATABASE_EXPORT/SCHEMA/TABLE/STATISTICS/TABLE_STATISTICS
Processing object type DATABASE_EXPORT/SCHEMA/TABLE/INDEX/DOMAIN_INDEX/INDEX
Processing object type DATABASE_EXPORT/SCHEMA/TABLE/POST_TABLE_ACTION
Processing object type DATABASE_EXPORT/SCHEMA/TABLE/TRIGGER
Processing object type DATABASE_EXPORT/SCHEMA/VIEW/TRIGGER
Processing object type DATABASE_EXPORT/SCHEMA/EVENT/TRIGGER
Processing object type DATABASE_EXPORT/SCHEMA/MATERIALIZED_VIEW
Processing object type DATABASE_EXPORT/SCHEMA/JOB
Processing object type DATABASE_EXPORT/SCHEMA/DIMENSION
Processing object type DATABASE_EXPORT/SCHEMA/TABLE/POST_INSTANCE/PROCACT_INSTANCE
Processing object type DATABASE_EXPORT/SCHEMA/TABLE/POST_INSTANCE/PROCDEPOBJ
Processing object type DATABASE_EXPORT/SCHEMA/POST_SCHEMA/PROCOBJ
Processing object type DATABASE_EXPORT/SCHEMA/POST_SCHEMA/PROCACT_SCHEMA
Processing object type DATABASE_EXPORT/AUDIT
Master table "SYSTEM"."SYS_EXPORT_FULL_01" successfully loaded/unloaded
******************************************************************************
Dump file set for SYSTEM.SYS_EXPORT_FULL_01 is:
  /rman/backup/rman01.dmp
  /rman/backup/rman02.dmp
  /rman/backup/rman03.dmp
  /rman/backup/rman04.dmp
  /rman/backup/rman05.dmp
  /rman/backup/rman06.dmp
  /rman/backup/rman07.dmp
  /rman/backup/rman08.dmp
  /rman/backup/rman09.dmp
  /rman/backup/rman10.dmp
  /rman/backup/rman11.dmp
  /rman/backup/rman12.dmp
  /rman/backup/rman13.dmp
  /rman/backup/rman14.dmp
  /rman/backup/rman15.dmp
  /rman/backup/rman16.dmp

Job "SYSTEM"."SYS_EXPORT_FULL_01" completed at Mon Feb 6 13:37:32 2017 elapsed 0 00:03:44

 

Cool. That also worked perfectly.

I have repeated the same test with 12.1.0.2, also with no issues. In earlier releases I had issues with filesystemio_options=SETALL, where I had to disable direct I/O, and sometimes both direct and asynchronous I/O. But this is not the case anymore with more recent OEL and Samba releases.

Conclusion

I would not use RMAN backups to CIFS as a permanent solution, but if you are running out of space on your NFS share, or maybe for migrations, it is a good alternative.

 

 

Cet article Oracle – RMAN Backups to CIFS est apparu en premier sur Blog dbi services.

Oracle Public Cloud 12cR2: TDE is not an option

Sun, 2017-02-05 12:52

In the Oracle Public Cloud, Transparent Data Encryption is not an option. You can use it because it is included in all database services for all editions. You have to use it because the database won’t allow you to create, or import, non-encrypted tablespaces. This is controlled by a new parameter, encrypt_new_tablespaces, which defaults to CLOUD_ONLY.

encrypt_new_tablespaces

In previous versions, we had to explicitly ENCRYPT the tablespace in the CREATE TABLESPACE statement.
Here we don’t need to add this clause in the DDL when we are on the Oracle Public Cloud because the default is:
SQL> show parameter encrypt
 
NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
encrypt_new_tablespaces string CLOUD_ONLY

This parameter can take the following values:

SQL> alter system set encrypt_new_tablespaces=tictactoe;
alter system set encrypt_new_tablespaces=tictactoe
*
ERROR at line 1:
ORA-00096: invalid value TICTACTOE for parameter encrypt_new_tablespaces, must be from among DDL, ALWAYS, CLOUD_ONLY

CLOUD_ONLY automatically encrypts new tablespaces when we are on the Oracle Public Cloud only.
ALWAYS always encrypts new tablespaces, which is good if you have the Advanced Security Option and want to ensure that all data is encrypted.
DDL never encrypts new tablespaces, except when it is explicitly requested in the CREATE TABLESPACE statement. This is similar to the behavior before this parameter was introduced, and the explicit clause is sketched below.
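
For reference, a minimal sketch of the explicit encryption clause that DDL mode relies on (the tablespace name is just an example, and a keystore/wallet must of course be open):

SQL> create tablespace SECURE_TS datafile size 100M
  2  encryption using 'AES256' default storage (encrypt);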

encrypt_new_tablespaces=DDL

If you are not on the Oracle Public Cloud, CLOUD_ONLY, the default, behaves as DDL. If you are on the Oracle Public Cloud, CLOUD_ONLY behaves like ALWAYS.

You may think that you can bypass the obligation to encrypt, just by setting encrypt_new_tablespaces=DDL but it’s not a good idea. Let’s test it.

I’ve set encrypt_new_tablespaces=ddl in an init.ora and I’ve created a database manually (CREATE DATABASE, catalog, catproc) and it works.

In 12.2 you can encrypt SYSTEM, SYSAUX and UNDO but this is not mandatory. However, I have created a user tablespace, USERS, without any problem thanks to encrypt_new_tablespaces=DDL:


SQL> show parameter encrypt
 
NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
encrypt_new_tablespaces string ddl
 
SQL> select TABLESPACE_NAME,ENCRYPTED from dba_tablespaces;
 
TABLESPACE_NAME ENC
------------------------------ ---
SYSTEM NO
SYSAUX NO
SYS_UNDOTS NO
UNDOTBS1 NO
USERS NO

Here I’m able to work without any problem… except when I have to restart the instance.

Startup


SQL> startup
ORACLE instance started.
 
Total System Global Area 2214592512 bytes
Fixed Size 8622968 bytes
Variable Size 1090522248 bytes
Database Buffers 1107296256 bytes
Redo Buffers 8151040 bytes
Database mounted.
ORA-00603: ORACLE server session terminated by fatal error
ORA-01092: ORACLE instance terminated. Disconnection forced
ORA-28427: cannot create, import or restore unencrypted tablespace: USERS in Oracle Cloud
Process ID: 11378
Session ID: 4 Serial number: 14394

It is impossible to open a database with unencrypted user tablespaces in the Oracle Public Cloud, whatever encrypt_new_tablespaces is set to. I can only startup mount, and there’s nothing to do at that point.

Here are the alert.log related entries:

Database Characterset is US7ASCII
No Resource Manager plan active
Verifying all user tablespaces in pdb 0 are encrypted in Oracle Cloud..
Found unencrypted tablespace USERS (pdb 0). Encrypted tablespace is mandatory in Oracle Cloud.
Errors in file /u01/app/oracle/diag/rdbms/test/TEST/trace/TEST_ora_11378.trc:
ORA-28427: cannot create, import or restore unencrypted tablespace: USERS in Oracle Cloud
Errors in file /u01/app/oracle/diag/rdbms/test/TEST/trace/TEST_ora_11378.trc:
ORA-28427: cannot create, import or restore unencrypted tablespace: USERS in Oracle Cloud
Error 28427 happened during db open, shutting down database
Errors in file /u01/app/oracle/diag/rdbms/test/TEST/trace/TEST_ora_11378.trc (incident=128065):
ORA-00603: ORACLE server session terminated by fatal error
ORA-01092: ORACLE instance terminated. Disconnection forced
ORA-28427: cannot create, import or restore unencrypted tablespace: USERS in Oracle Cloud
Incident details in: /u01/app/oracle/diag/rdbms/test/TEST/incident/incdir_128065/TEST_ora_11378_i128065.trc
2017-02-04 22:25:26.612000 +00:00
opiodr aborting process unknown ospid (11378) as a result of ORA-603
ORA-603 : opitsk aborting process
License high water mark = 1
USER (ospid: 11378): terminating the instance due to error 28427
2017-02-04 22:25:27.615000 +00:00
PSP0 (ospid: 11293): terminating the instance due to error 28427
PMON (ospid: 11289): terminating the instance due to error 28427
2017-02-04 22:25:28.621000 +00:00
Instance terminated by PMON, pid = 11289

You don’t see the “Verifying all user tablespaces in pdb 0 are encrypted in Oracle Cloud..” messages when you are not in the Oracle Public Cloud.
But in the Oracle Public Cloud, the instance is forced to stop as soon as an unencrypted tablespace is found.

Fake it

What I’ll do now is only for academic purposes, to understand what happens and, maybe, troubleshoot if you have created unencrypted tablespaces. But it is not documented, and not supported.

From a cloud instance, you can get metadata about your instance by getting attributes from http://192.0.0.192 (Oracle Cloud uses Nimbula).
When the Oracle Database instance starts, it reads the DNS domain from http://192.0.0.192/latest/attributes/dns/domain:

[oracle@DBI122 ~]$ curl http://192.0.0.192/latest/attributes/dns/domain
compute-franck.oraclecloud.internal.[oracle@DBI122 ~]$

I suppose that the detection of Oracle Cloud is done from that. If you are not in Oracle Cloud, you will have no answer from http://192.0.0.192, so let’s simulate that by blocking this IP address:

[root@DBI122 opc]# iptables -A OUTPUT -d 192.0.0.192 -j REJECT
 
[root@DBI122 opc]# iptables -L
Chain INPUT (policy ACCEPT)
target prot opt source destination
 
Chain FORWARD (policy ACCEPT)
target prot opt source destination
 
Chain OUTPUT (policy ACCEPT)
target prot opt source destination
REJECT all -- anywhere 192.0.0.192 reject-with icmp-port-unreachable

Now back to my database, I can start it without any problem:

SQL> startup
ORACLE instance started.
 
Total System Global Area 2214592512 bytes
Fixed Size 8622968 bytes
Variable Size 1090522248 bytes
Database Buffers 1107296256 bytes
Redo Buffers 8151040 bytes
Database mounted.
Database opened.
 
SQL> select TABLESPACE_NAME,ENCRYPTED from dba_tablespaces;
 
TABLESPACE_NAME ENC
------------------------------ ---
SYSTEM NO
SYSAUX NO
SYS_UNDOTS NO
UNDOTBS1 NO
IOPS NO

In the alert.log I have nothing about checking encrypted tablespaces:

No Resource Manager plan active
2017-02-04 22:31:34.510000 +00:00
replication_dependency_tracking turned off (no async multimaster replication found)
Starting background process AQPC

So what?

encrypt_new_tablespaces=DDL is not a way to avoid encryption in the Oracle Public Cloud, except temporarily for a test on a database that you create for that and never re-start once you have created user tablespaces. There are still some bugs and issues with TDE (when importing from non-TDE, when using local undo,…) so it is good to know that there can be a workaround. But remember this is not supported. If you need this, please contact My Oracle Support.

 

Cet article Oracle Public Cloud 12cR2: TDE is not an option est apparu en premier sur Blog dbi services.

CBO parse time – matching time with transformation

Sat, 2017-02-04 14:36

By a strange coincidence, I’ve encountered several cases of long parse time recently. That was the occasion to take a closer look at a CBO trace enhancement that came with the April 2016 PSU: Bug 16923858 Diagnostic enhancement to annotate 10053 trace with accumulated parse time

I have a query that takes several seconds to parse and here is the new parse time information when you trace SQL_Compiler (the 10053 trace):
$ grep TIMER db003_ora_22231_orig.trc
TIMER: costing additional plans cpu: 1.807 sec elapsed: 2.019 sec
TIMER: costing general plans cpu: 26.131 sec elapsed: 27.747 sec
TIMER: costing additional plans cpu: 8.995 sec elapsed: 9.329 sec
TIMER: SU: costing SEL$A2089C65 cpu: 38.313 sec elapsed: 40.506 sec
TIMER: costing additional plans cpu: 1.523 sec elapsed: 1.850 sec
TIMER: SU: costing Interleaved CVM SEL$488B5694 cpu: 4.255 sec elapsed: 4.807 sec
TIMER: SU: Interleaved CVM SEL$A2089C65 cpu: 4.293 sec elapsed: 4.846 sec
TIMER: i costing additional plans cpu: 1.742 sec elapsed: 1.884 sec
TIMER: i costing general plans cpu: 21.343 sec elapsed: 21.843 sec
TIMER: i costing additional plans cpu: 8.072 sec elapsed: 8.375 sec
TIMER: i JPPD: costing SEL$A2089C65 cpu: 32.798 sec elapsed: 33.838 sec
TIMER: i JPPD: iteration (#1) SEL$A2089C65 cpu: 32.822 sec elapsed: 33.863 sec
TIMER: i costing general plans cpu: 22.088 sec elapsed: 22.751 sec
TIMER: i costing additional plans cpu: 5.484 sec elapsed: 5.745 sec
TIMER: i JPPD: costing SEL$A2089C65 cpu: 30.181 sec elapsed: 31.113 sec
TIMER: i JPPD: iteration (#2) SEL$A2089C65 cpu: 30.197 sec elapsed: 31.129 sec
TIMER: SU: Interleaved JPPD SEL$A2089C65 cpu: 63.026 sec elapsed: 64.999 sec
TIMER: SU: iteration (#1) SEL$A2089C65 cpu: 105.656 sec elapsed: 110.375 sec
TIMER: SU: costing SEL$EB7B6E47 cpu: 3.749 sec elapsed: 4.025 sec
TIMER: SU: iteration (#2) SEL$EB7B6E47 cpu: 3.762 sec elapsed: 4.038 sec
TIMER: CBQT SU and CVM SEL$1 cpu: 109.512 sec elapsed: 114.507 sec
TIMER: Cost-Based Transformations (Overall) SEL$62E0E936 cpu: 110.118 sec elapsed: 115.112 sec
TIMER: costing additional plans cpu: 1.247 sec elapsed: 1.485 sec
TIMER: Access Path Analysis (Final) SEL$62E0E936 cpu: 2.227 sec elapsed: 2.227 sec
TIMER: SQL Optimization (Overall) SEL$62E0E936 cpu: 114.674 sec elapsed: 119.906 sec

The default is to have a line when an operation takes more than 1 second. You can change that with the fix control:

This will trace all times > 10 microseconds:
alter session set "_fix_control"='16923858:1';

This will trace all times > 10 seconds:
alter session set "_fix_control"='16923858:7';

You get it: each level increases the threshold by one order of magnitude; the default is 6 (1 second).

I’ve a small awk script to display the time in a better format:


cpu(ms) ela(ms)
42138.000 42159.000 costing general plans 12.1.0.2 640468
6416.000 6418.000 costing additional plans 12.1.0.2 719893
1509.000 1509.000 costing general plans 12.1.0.2 753749
2377.000 2377.000 or expansion check 12.1.0.2 763107
52698.000 52721.000 SU: costing SEL$5061678E (SUBQUERY UNNEST SEL$93416A64; SEL$10) 12.1.0.2 763120
2035.000 2035.000 costing general plans 12.1.0.2 842348
2642.000 2642.000 or expansion check 12.1.0.2 849927
4920.000 4921.000 SU: costing Interleaved CVM SEL$959A6A10 (VIEW MERGE SEL$5061678E; SEL$CAA3F13B) 12.1.0.2 849940
4941.000 4942.000 SU: Interleaved CVM SEL$5061678E (SUBQUERY UNNEST SEL$93416A64; SEL$10) 12.1.0.2 849942
42556.000 42568.000 i costing general plans 12.1.0.2 1482291
6348.000 6348.000 i costing additional plans 12.1.0.2 1561716
1653.000 1653.000 i costing general plans 12.1.0.2 1595574
2487.000 2488.000 i or expansion check 12.1.0.2 1604932
52924.000 52938.000 i JPPD: costing SEL$5061678E (SUBQUERY UNNEST SEL$93416A64; SEL$10)(COPY SEL$5061678E) 12.1.0.2 1604945
52933.000 52947.000 i JPPD: iteration (#1) SEL$5061678E (SUBQUERY UNNEST SEL$93416A64; SEL$10)(COPY SEL$5061678E) 12.1.0.2 1604956
1116.000 1116.000 i costing additional plans 12.1.0.2 1630694
33570.000 33576.000 i costing general plans 12.1.0.2 2098660
10614.000 10622.000 i costing additional plans 12.1.0.2 2211397
2092.000 2093.000 i costing general plans 12.1.0.2 2246743
1396.000 1397.000 i costing additional plans 12.1.0.2 2264252
3778.000 3780.000 i or expansion check 12.1.0.2 2265816
50741.000 50759.000 i JPPD: costing SEL$5061678E (SUBQUERY UNNEST SEL$93416A64; SEL$10)(COPY SEL$5061678E) 12.1.0.2 2265829
50752.000 50769.000 i JPPD: iteration (#2) SEL$5061678E (SUBQUERY UNNEST SEL$93416A64; SEL$10)(COPY SEL$5061678E) 12.1.0.2 2265839
103688.000 103719.000 SU: Interleaved JPPD SEL$5061678E (SUBQUERY UNNEST SEL$93416A64; SEL$10)(COPY SEL$5061678E) 12.1.0.2 2265844
161339.000 161394.000 SU: iteration (#1) SEL$5061678E (SUBQUERY UNNEST SEL$93416A64; SEL$10)(COPY SEL$5061678E) 12.1.0.2 2265846
3250.000 3252.000 SU: costing SEL$1 (PARSER)(COPY SEL$1) 12.1.0.2 2307338
3262.000 3264.000 SU: iteration (#2) SEL$1 (PARSER)(COPY SEL$1) 12.1.0.2 2307349
164632.000 164689.000 CBQT SU and CVM SEL$1 (PARSER)(COPY SEL$1) 12.1.0.2 2307498
164827.000 164884.000 Cost-Based Transformations (Overall) SEL$1 (PARSER)(COPY SEL$1) 12.1.0.2 2307946
2577.000 2578.000 costing general plans 12.1.0.2 2387060
3204.000 3204.000 or expansion check 12.1.0.2 2393273
4597.000 4598.000 Access Path Analysis (Final) SEL$1 (PARSER)(COPY SEL$1) 12.1.0.2 2393703
170872.000 170931.000 SQL Optimization (Overall) SEL$1 (PARSER)(COPY SEL$1) 12.1.0.2 2393730

Note that I’m more interested in the CPU time because the elapsed time includes the time spent writing to the trace.
I’m reading a 2 million line 10053 trace file here. The CBO takes time, but it is doing a lot of work here.

My awk script also records the maximum time spent on each query block and then displays the ‘Query Block Registry’ with those times. This is a good way to understand which CBO transformation is responsible for most of the parse time:


 cpu(ms)    ela(ms) Query Block Registry:

     0.000      0.000 SEL$10 0x897e9b8 (PARSER) [FINAL]
161339.000 161394.000 SEL$5061678E 0x0 (SUBQUERY UNNEST SEL$93416A64; SEL$10)
     0.000      0.000 SEL$6F0423A9 0x0 (PUSHED PREDICATE SEL$CAA3F13B; SEL$5061678E; "VW_SQ_3"@"SEL$93416A64" 28)
     0.000      0.000 SEL$959A6A10 0x0 (VIEW MERGE SEL$5061678E; SEL$CAA3F13B)
     0.000      0.000 SEL$1DF60DFD 0x0 (QUERY BLOCK TABLES CHANGED SEL$10)
     0.000      0.000 SEL$CAA3F13B 0x0 (SUBQ INTO VIEW FOR COMPLEX UNNEST SEL$1DF60DFD)
     0.000      0.000 SEL$6F0423A9 0x0 (PUSHED PREDICATE SEL$CAA3F13B; SEL$5061678E; "VW_SQ_3"@"SEL$93416A64" 28)
     0.000      0.000 SEL$959A6A10 0x0 (VIEW MERGE SEL$5061678E; SEL$CAA3F13B)
     0.000      0.000 SEL$9 0x8987df0 (PARSER) [FINAL]
     0.000      0.000 SEL$8 0x8988ee8 (PARSER) [FINAL]
     0.000      0.000 SEL$7 0x898a4f8 (PARSER) [FINAL]
     0.000      0.000 SEL$6 0x898c028 (PARSER) [FINAL]
     0.000      0.000 SEL$5 0x898d268 (PARSER) [FINAL]
     0.000      0.000 SEL$4 0x898eaf8 (PARSER) [FINAL]
     0.000      0.000 SEL$3 0x89912b8 (PARSER) [FINAL]
     0.000      0.000 SEL$2 0x89923c0 (PARSER) [FINAL]
170872.000 170931.000 SEL$1 0x89a3ef0 (PARSER) [FINAL]
     0.000      0.000 SEL$93416A64 0x0 (VIEW ADDED SEL$1)
161339.000 161394.000 SEL$5061678E 0x0 (SUBQUERY UNNEST SEL$93416A64; SEL$10)
     0.000      0.000 ...

The interesting point here is that the [FINAL] marker flags the transformation that is chosen by the optimizer, so we know that we have spent time on a query block that was finally chosen for the best plan.

Before going further, here is my ugly awk script that gives the above output from a 10053 trace file:


BEGIN{
print ; print " cpu(ms) ela(ms)"
}
/^ *optimizer_features_enable += +/{
ofe=$3 ; sub(/^[8-9][.]/,"0&",ofe)
delete qb
}
/TIMER/{
l=$0 ; sub(/ cpu: .*$/,"",l) ; sub(/TIMER: /,"",l)
sub(/^.* cpu: /,"") # --> $1 is cpu $4 is ela
q=gensub(/^.* /,"","g",l);
if ( qb[q] == "" ) { q="" } else { sub(/ [^ ]*$/,"",l) }
if ( q!="" && $1*1e3 > cpu[q] ) cpu[q]=$1*1e3
if ( q!="" && $4*1e3 > ela[q] ) ela[q]=$4*1e3
printf "%10.3f %10.3f\t%-40s %20s %-60s\t%6s %s\n",$1*1e3,$4*1e3,l,q,qb[q],ofe,NR
if ( q != "") versions[l" "q]=versions[l" "q] ofe"="($1*1e3)"ms\n"
}
/Query Block Registry/,/^:/{
if ( $0 ~/Query Block Registry/ ) { ; print "" ; print " cpu(ms) ela(ms) "$0 ; print "" }
else { printf "%10.3f %10.3f\t%s\n",cpu[$1],ela[$1],$0 }
}

So, I know that the unnesting of subquery SEL$10 is responsible for my long parse time. The ‘+alias’ format of the explain plan is an easy way to find which subquery it comes from:

Query Block Name / Object Alias (identified by operation id):
-------------------------------------------------------------
 
64 - SEL$10
65 - SEL$10 / TABLE_WITH_DATES@SEL$10
66 - SEL$10 / TABLE_WITH_DATES@SEL$10

Here is the subselect (I redacted the table names).

AND begin_date =
(SELECT MAX (begin_date)
FROM table_with_dates
WHERE some_id = some_id
AND begin_date <= ...
)

So what can I do now? I can avoid the subquery unnesting by adding the NO_UNNEST hint in the subquery, or by adding it to the main query as /*+ NO_UNNEST(@SEL$10) */, as sketched below.
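
A minimal sketch of both options (the table and column names are the redacted placeholders used above, not the real ones):

SELECT /*+ NO_UNNEST(@"SEL$10") */ ...
-- or, directly inside the subquery:
AND begin_date =
   (SELECT /*+ NO_UNNEST */ MAX(begin_date)
    FROM table_with_dates
    WHERE some_id = some_id
    AND begin_date <= ...
   )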

This really reduces the parse time, but I have to check that the plan is still acceptable. Actually, we cannot blame the CBO for the time it takes here, because the goal of subquery unnesting (SU) is exactly that: open the way to a lot of new access paths that can be costed and may give a better execution plan. And an application should not parse too often, so spending a few seconds on parsing is not bad if it helps to execute the query quickly all over the day.

So don’t forget to have the latest PSU and trace with:

alter session set events 'trace [SQL_Compiler.*]';
explain plan for ...
alter session set events 'trace [SQL_Compiler.*] off';

get timer information with:

column value new_value tracefile
select value from v$diag_info where name='Default Trace File';
column value new_value clear
host grep TIMER &tracefile

And find which transformation is responsible for most of the parse time.

 

Cet article CBO parse time – matching time with transformation est apparu en premier sur Blog dbi services.

Exadata Express Cloud Service: MAX_PDB_STORAGE

Fri, 2017-02-03 16:06

The separation of roles between system DBA and application DBA is the way to go for agile development, and the PDBaaS managed service is an excellent way to play with this concept: you are the PDB administrator but not the CDB administrator.
Here is an example about tablespace creation/deletion, and the kind of problems that may arise with this architecture.

The PDB administrator manages all the PDB objects. He must be able to create users and tablespaces in order to deploy an application.

Create tablespace

So yes, in Exadata Express Cloud Service we can create a tablespace and this is what I did:

SQL> create tablespace IOPS datafile size 10G;
Tablespace created.

and you don’t need to specify the datafile location because db_create_file_dest is defined:

SQL> show parameter db_create_file_dest
 
NAME TYPE VALUE
------------------- ------ -----
db_create_file_dest string +DATA

Actually, this is something that I cannot change because it is set by the CDB administrator when creating the PDB:

SQL> alter session set db_create_file_dest='/var/tmp'
 
Error report -
ORA-02097: parameter cannot be modified because specified value is invalid
ORA-01031: insufficient privileges

This is a 12.2 feature. Impossible to create a file outside:

create tablespace IOPS datafile '/var/tmp/franck.dbf' size 10G
Error report -
ORA-65250: invalid path specified for file - /var/tmp/franck.dbf

I’ve already written about that 12cR2 feature: http://blog.dbi-services.com/12cr2-create_file_dest-for-pdb-isolation/ and this is a rather nice feature.

MAX_PDB_STORAGE

There’s another limit in PDBaaS: you have a limit on the storage you can use. And in this X20 service, the limit is 20G.

When you have reached the maximum, you get the following error:

SQL> create tablespace ANOTHERONE datafile size 30G;
 
create tablespace ANOTHERONE datafile size 30G
Error report -
ORA-65114: space usage in container is too high
65114. 00000 - "space usage in container is too high"
*Cause: Space usage in the current container exceeded the value of MAX_PDB_STORAGE for the container.
*Action: Specify a higher value for MAX_PDB_STORAGE using the ALTER PLUGGABLE DATABASE statement.

Ok. This limit is visible as a property of my PDB:
SQL> select property_name, property_value from database_properties where property_name ='MAX_PDB_STORAGE';
 
PROPERTY_NAME PROPERTY_VALUE
------------- --------------
MAX_PDB_STORAGE 24771223880

And the current size is visible from V$PDBS:
SQL> select con_id,total_size from v$pdbs;
 
CON_ID TOTAL_SIZE
------ -----------
47 24766742528

Nothing hidden here, this is the sum of my PDB files, datafiles and tempfiles:

SQL> select sum(bytes) from dba_data_files;
SUM(BYTES)
----------
21761523712
 
SQL> select sum(bytes) from dba_temp_files;
SUM(BYTES)
----------
3005218816

Of course, I’m not authorized to increase my limit:

SQL> alter pluggable database storage(maxsize 40G);
 
alter pluggable database storage(maxsize 40G)
Error report -
ORA-01031: insufficient privileges
01031. 00000 - "insufficient privileges"
*Cause: An attempt was made to perform a database operation without the necessary privileges.
*Action: Ask your database administrator or designated security administrator to grant you the necessary privileges

And this is once again a very nice feature coming in 12cR2 multitenant.

So… I’ve filled my X20 service. There is a need for some housekeeping. But…

Drop tablespace

I’m allowed to create and drop tablespaces. Let’s drop that IOPS tablespace:


SQL> drop tablespace iops;
 
drop tablespace iops
Error report -
ORA-01031: insufficient privileges
01031. 00000 - "insufficient privileges"
*Cause: An attempt was made to perform a database operation without the necessary privileges.
*Action: Ask your database administrator or designated security administrator to grant you the necessary privileges

With lockdown profiles, you always get the same message: no clue about what is not authorized. I know that there is a DROP_TABLESPACE_KEEP_DATAFILES feature that you can disable with lockdown profiles, and this makes sense when the CDB administrator does not want PDB administrators to leave dead datafiles on the system. Here we are on ASM, with OMF, so the datafiles are automatically dropped. But the lockdown works at the statement syntax level, so we have to specify the clause:

SQL> drop tablespace iops including contents and datafiles;
 
drop tablespace iops including contents and datafiles
Error report -
ORA-38881: Cannot drop tablespace IOPS on primary database due to guaranteed restore points.
38881. 00000 - "Cannot drop tablespace %s on primary database due to guaranteed restore points."
*Cause: An attempt was made to drop a tablespace on a primary database while there are guaranteed restore points. You cannot do this because Flashback database cannot undo dropping of a tablespace.
*Action: Drop all guaranteed restore points first and retry, or delay dropping the tablespace until all guaranteed restore points are removed.

So the syntax is accepted, but here I have another problem: I have a guaranteed restore point, and this prevents the drop of the tablespace.

Let’s have a look at restore points, because I didn’t create one (this is something we are not allowed to do on the Exadata Express Cloud Service, which would be a nice feature as this service is aimed at developers).

SQL> select * from v$restore_point;
 
SCN DATABASE_INCARNATION# GUARANTEE_FLASHBACK_DATABASE STORAGE_SIZE TIME RESTORE_POINT_TIME PRESERVED NAME PDB_RESTORE_POINT CLEAN_PDB_RESTORE_POINT PDB_INCARNATION# CON_ID
--- --------------------- ---------------------------- ------------ ---- ------------------ --------- ---- ----------------- ----------------------- ---------------- ------
84602869122 2 YES 18253611008 28-JAN-17 03.23.08.000000000 AM YES PRE_17_1_2 NO NO 0 0

The PDB was created when I subscribed to the service, on 22-JAN-17, and we are now 03-FEB-17.

So it seems that the CDB administrator (it is a managed service, the CDB DBA is Oracle) created a restore point last Saturday.
The name, PRE_17_1_2, looks like something we create before a maintenance, in case something goes wrong. I had no notification about any maintenance. And anyway, we usually remove such a restore point as soon as possible because it fills the FRA.

I can see some CDB structures, such as the FRA:


SQL> select * from v$recovery_area_usage;
 
FILE_TYPE PERCENT_SPACE_USED PERCENT_SPACE_RECLAIMABLE NUMBER_OF_FILES CON_ID
--------- ------------------ ------------------------- --------------- ------
CONTROL FILE 0 0 1 0
REDO LOG 0 0 0 0
ARCHIVED LOG 0.81 0.81 193 0
BACKUP PIECE 0.2 0 26 0
IMAGE COPY 0.33 0 10 0
FLASHBACK LOG 1.3 0.98 41 0
FOREIGN ARCHIVED LOG 0 0 0 0
AUXILIARY DATAFILE COPY 0 0 0 0

Ok, that’s not too much. There has not been a lot of activity during this week. And anyway, the FRA is big:

SQL> show parameter recovery
 
NAME TYPE VALUE
-------------------------- ----------- -----
db_recovery_file_dest string +RECO
db_recovery_file_dest_size big integer 6T
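
For absolute figures rather than percentages, V$RECOVERY_FILE_DEST gives the same picture (a small sketch, assuming this view is exposed in the PDB like V$RECOVERY_AREA_USAGE is):

select name,
       round(space_used/1024/1024/1024,1)        used_gb,
       round(space_reclaimable/1024/1024/1024,1) reclaimable_gb,
       round(space_limit/1024/1024/1024/1024,1)  limit_tb,
       number_of_files
from v$recovery_file_dest;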

So far so good, but I’m stuck here. The restore point is at CDB level, so I cannot drop it:

SQL> drop restore point PRE_17_1_2;
 
drop restore point PRE_17_1_2
Error report -
ORA-38780: Restore point 'PRE_17_1_2' does not exist.
38780. 00000 - "Restore point '%s' does not exist."
*Cause: The restore point name of the DROP RESTORE POINT command does not exist.
*Action: No action required.

This means that I cannot drop my tablespace. And I cannot even resize the datafiles to their minimum:

SQL> alter database datafile '+DATA/CFCDBA1/46AB9829A44EB2C1E0538846220A6F7D/DATAFILE/iops.1455.934053351' resize 10M;
 
alter database datafile '+DATA/CFCDBA1/46AB9829A44EB2C1E0538846220A6F7D/DATAFILE/iops.1455.934053351' resize 10M
Error report -
ORA-38883: Cannot shrink data file +DATA/CFCDBA1/46AB9829A44EB2C1E0538846220A6F7D/DATAFILE/iops.1455.934053351 on primary database due to guaranteed restore points.
38883. 00000 - "Cannot shrink data file %s on primary database due to guaranteed restore points."
*Cause: An attempt was made to shrink a data file on a primary database while there are guaranteed restore points. You cannot do this because Flashback database cannot undo the shrinking of a data file.
*Action: Drop all guaranteed restore points first and retry, or delay the data file resize until all guaranteed restore points are removed.

I have an empty tablespace that takes all my allocated storage and I cannot remove it.

So what to do? Try to contact support? Or fill the FRA until it raises an alert?
I would try the first one, but the CSI I received with my order is not a valid one…

 

The article Exadata Express Cloud Service: MAX_PDB_STORAGE first appeared on Blog dbi services.

Database Diagram using SQL Developer

Thu, 2017-02-02 12:13

Recently a client asked me if I could generate a relational diagram for an Oracle schema. He was preparing a migration and wanted to see how the tables are organized in the schema.
In this article we will show how this is possible with SQL Developer. We are using SQL Developer 4.1.3.20.
sqldev1
Once SQL Developer is started, just proceed as follows:
Launch File ==> Data Modeler ==> Import ==> Data Dictionary
sqldev2
Choose your connection and click Next
sqldev3
Then we can choose the schema
sqldev4
In this example we check all tables and click Next
sqldev5
Review the summary
sqldev6
Then click Finish
logfile

After closing the log file, we have the diagram
sqldev7

We can also save the model as a PDF file or an image file
sqldev8
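
If you just need a quick textual overview of how the tables reference each other, before or instead of drawing the diagram, a simple data dictionary query can help. This is only a sketch; MY_SCHEMA is a placeholder for the schema you are interested in:

select c.table_name      child_table,
       c.constraint_name fk_name,
       p.table_name      parent_table
from   all_constraints c
join   all_constraints p on p.owner = c.r_owner and p.constraint_name = c.r_constraint_name
where  c.constraint_type = 'R'
and    c.owner = 'MY_SCHEMA'
order  by c.table_name;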

I hope this article will help.

 

The article Database Diagram using SQL Developer first appeared on Blog dbi services.

Running SLOB on Exadata Express Cloud Service

Wed, 2017-02-01 07:06

The Exadata Express Cloud Service is a managed PDBaaS: Oracle is the system administrator and the CDB database administrator, you are the PDB administrator. You connect with a local user, PDB_ADMIN, which has no SYSDBA privilege but has the PDB_DBA role, which grants nearly all DBA rights, with some features disabled by a lockdown profile.
I have no worry about the performance of this service: it is an Exadata X5 half rack, bare metal (my guess), with the CDB running as RAC One Node on 2 compute nodes, accessing the 7 storage cells. Smart Scan is disabled, but given the small size of the PDB and the low usage of the CDB, I/O is fast (most single block latency between 128 and 256us, coming from cell flash cache).
But I like to run SLOB on new platforms and I wanted to see if I can run it here, without SYSDBA role, and connecting to a CDB.

TNS_ADMIN

I’ll connect remotely because SLOB will be running on my computer (I have no access to the host for this PDBaaS managed service). In order to connect remotely, through SQL*Net over SSL, you have to download the client credentials kit.
CaptureEXCS017

It is a zip file that contains the encryption wallet and the sqlnet.ora and tnsnames.ora to use. I unzip it into a directory (/var/tmp/client_credentials in this example) and use it by setting the TNS_ADMIN environment variable to this directory.
Thus I change the wallet location in the sqlnet.ora:
$ cat sqlnet.ora
WALLET_LOCATION = (SOURCE = (METHOD = file)
(METHOD_DATA = (DIRECTORY="$TNS_ADMIN")))

and I use the provided tnsnames.ora which defines the ‘dbaccess’ service:
$ cat tnsnames.ora
dbaccess = (description=
(address=(protocol=tcps)(port=1522)(host=dbaccess.us2.oraclecloudapps.com))
(connect_data=(service_name=eptdojjm1ag.usdc2.oraclecloud.com))
(security=(ssl_server_cert_dn="CN=dbaccess.us2.oraclecloudapps.com,O=Oracle Corporation,L=Redwood Shores,ST=California,C=US"))
)
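
With TNS_ADMIN pointing to that directory, a quick sanity check that we really land in the right PDB through this service is to look at the session context (just a sketch, run as PDB_ADMIN once connected):

select sys_context('userenv','con_name')     pdb,
       sys_context('userenv','cdb_name')     cdb,
       sys_context('userenv','service_name') service
from dual;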

Changes in slob.conf

The default slob.conf connects with a bequeath connection. Here I need to connect through the network service described above. I uncommented:

#ADMIN_SQLNET_SERVICE=slob
#SQLNET_SERVICE_BASE=slob
#SYSDBA_PASSWD=change_on_install

and replaced it with:

ADMIN_SQLNET_SERVICE=dbaccess
SQLNET_SERVICE_BASE=dbaccess
SYSDBA_PASSWD=xxxxxxxxx

Where xxxxxxxxx is my PDB_ADMIN password that I set up in:
CaptureEXCS024

Of course we need an Oracle Client. Download the Instant Client if you don’t have one.

Changes in setup.sh

SLOB creates the users and grants them RESOURCE, DBA.
In Exadata Express Cloud Service, I don’t have the DBA role, but CREATE SESSION plus the PDB_DBA role is sufficient. In addition to that, I need to give a quota on the IOPS tablespace because PDB_ADMIN does not have UNLIMITED TABLESPACE. Finally, here is the create user part after my modifications:


msg NOTIFY "Creating user: $user "
 
sqlplus "$constring" <<EOF
WHENEVER SQLERROR EXIT 2;
SET TERMOUT ON
SET ECHO ON
PROMPT User grants for $user
WHENEVER SQLERROR EXIT 2;
GRANT CREATE SESSION TO $user IDENTIFIED BY $user;
GRANT PDB_DBA TO $user;
ALTER USER $user DEFAULT TABLESPACE $tablespace ;
ALTER USER $user QUOTA UNLIMITED ON $tablespace ;
EXIT;
EOF

The setup.sh starts by dropping the SLOB schemas up to MAX_SLOB_SCHEMAS, just in case they are there. The default is 4096, but that’s too much for me because of the latency to the server in the US (Exadata Express Cloud Service is planned for Europe in the future), so I reduced it to 8 (as I have only 1 CPU in this service, there’s no need for more users):

export MAX_SLOB_SCHEMAS=8

The last modification in setup.sh was to disable the multitenant check, because PDBaaS is by definition on a multitenant database:


if ( ! check_mto "$ADMIN_CONNECT_STRING")
then
msg FATAL ""
msg FATAL "This version of SLOB does not support Oracle Multitenant Option"
msg FATAL
#exit 1
fi

I’ve not seen any problem. Oracle designed multitenant so that everything you used to do on a database is possible to do on a PDB.

Changes in runit.sh

First, in order to connect as PDB_ADMIN instead of SYS as SYSDBA, you have to change the admin_connect_string:

export admin_connect_string="pdb_admin/${SYSDBA_PASSWD}@${ADMIN_SQLNET_SERVICE}"

Second, SLOB calls UTL_FILE, which is not allowed in Exadata Express Cloud Service. I’ve seen in the code that this is bypassed when the ‘nt’ argument is 1, so I’ve set it to this value in runit.sh:

arg9=1 #was: $nt

Finally, because we can’t create manual AWR snapshots in Exadata Express Cloud Service, I replaced the EXEC DBMS_WORKLOAD_REPOSITORY.CREATE_SNAPSHOT with the creation of a small view to get the basic statistics:

create or replace view FPAV as
select 'STAT' class,name,value from V$SYSSTAT
union all
select 'WAIT' class,event,total_waits from V$SYSTEM_EVENT
union all
select 'TIME' class,stat_name,value from V$SYS_TIME_MODEL
union all
select 'V$OS' class,stat_name,value from V$OSSTAT
union all
select 'MICRO' class,event||' - '||wait_time_format,wait_count from V$EVENT_HISTOGRAM_MICRO
;
create table FPAT as select sysdate time,FPAV.* from FPAV where 1=0;
insert into FPAT select sysdate time,FPAV.* from FPAV;
commit;

and I replaced the part that runs the AWR report with a simple query:


$admin_conn <<EOF   # output is spooled to awr.txt
column per_sec format 999G999G999G999G999
spool awr.txt append
select to_char(btim,'hh24:mi:ss') btime,to_char(time,'hh24:mi:ss')etime,seconds,class,name,round(delta/seconds) per_second , decode(class||' '||name
,'STAT session logical reads','LIO/s'
,'STAT physical read total IO requests','PIO/s'
,'TIME DB CPU','CPU us'
,'TIME DB time','DB time us'
,'STAT physical read total bytes','Read B/s'
,'STAT physical write total bytes','Write B/s'
,'V$OS BUSY_TIME','OS busy cs'
,'WAIT resmgr:cpu quantum','resmgr'
) main from (
select
FPAT.*
,max(time)over(order by time rows between unbounded preceding and unbounded following) last
,value-lag(value)over(partition by class,name order by time) delta
,(time-lag(time)over(partition by class,name order by time))*24*60*60 seconds
,lag(time)over(partition by class,name order by time) btim
from FPAT
) where time=last and delta>0 order by main,per_second desc;
spool off
exit
EOF

Statspack is not an alternative here because it requires some views and grants from SYS, which you cannot create on Exadata Express Cloud Service. I really don’t understand why we cannot use AWR locally, because in 12.2 you can have AWR snapshots at PDB level. But any use of dbms_workload_repository is disabled by the lockdown profile.

Result

So this is the kind of output I get when running 2 SLOB sessions for 10 minutes:


BTIME ETIME SECONDS CLAS NAME PER_SECOND MAIN
-------- -------- ---------- ---- ---------------------------------------------------------------- ---------- ----------
11:16:09 11:26:15 606 TIME DB CPU 1968151 CPU us
11:16:09 11:26:15 606 TIME DB time 1981479 DB time us
11:16:09 11:26:15 606 STAT session logical reads 1228557 LIO/s
11:16:09 11:26:15 606 STAT physical read total IO requests 0 PIO/s
11:16:09 11:26:15 606 STAT physical read total bytes 189 Read B/s
11:16:09 11:26:15 606 STAT physical write total bytes 189 Write B/s
11:16:09 11:26:15 606 WAIT resmgr:cpu quantum 0 resmgr
11:16:09 11:26:15 606 STAT logical read bytes from cache 1.0064E+10
11:16:09 11:26:15 606 TIME sql execute elapsed time 1980376
11:16:09 11:26:15 606 V$OS IDLE_TIME 6787
11:16:09 11:26:15 606 V$OS BUSY_TIME 391
11:16:09 11:26:15 606 V$OS USER_TIME 311

1.96 CPU seconds per second and 1.98 seconds of DB time per second: those are my 2 cached SLOB sessions.
About 1.2 million LIO per second, which is what I expect from this test on Exadata X5 (PCT_UPDATE=0 and WORK_UNIT=64).
Resource Manager is ready to kick in, as CPU_COUNT is set to 2 in my PDB (set by the lockdown profile, you cannot change it).
The overall system hosting the CDB has been mostly idle: 3.91 / (67.87 + 3.91) is 5.44% busy. I’m responsible for 2 sessions over 72 threads (36 cores): 2/72 = 2.78% CPU usage.
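
The busy percentage can also be derived directly from the FPAT snapshots captured by the modified runit.sh. This is just a sketch on top of the table created above (it assumes at least two snapshots have been inserted):

select round(100*busy/(busy+idle),2) host_busy_pct
from (
  select sum(case when name='BUSY_TIME' then delta end) busy,
         sum(case when name='IDLE_TIME' then delta end) idle
  from (
    select name, time,
           value-lag(value)over(partition by name order by time) delta,
           max(time)over() last
    from FPAT
    where class='V$OS' and name in ('BUSY_TIME','IDLE_TIME')
  )
  where time=last
);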

The Exadata Express Cloud Service allocates 1 core (2 threads) but, because the system is nearly idle, I didn’t use 2 threads of the same core. Let’s compare with a single-session run:


BTIME ETIME SECONDS CLAS NAME PER_SECOND MAIN
-------- -------- ---------- ---- ---------------------------------------------------------------- ---------- ----------
12:15:49 12:25:55 606 TIME DB CPU 989008 CPU us
12:15:49 12:25:55 606 TIME DB time 991198 DB time us
12:15:49 12:25:55 606 STAT session logical reads 668368 LIO/s
12:15:49 12:25:55 606 STAT physical read total IO requests 0 PIO/s
12:15:49 12:25:55 606 STAT physical read total bytes 203 Read B/s
12:15:49 12:25:55 606 STAT physical write total bytes 189 Write B/s
12:15:49 12:25:55 606 STAT logical read bytes from cache 5475272359
12:15:49 12:25:55 606 TIME sql execute elapsed time 990290
12:15:49 12:25:55 606 STAT consistent gets 668368

Comparing logical reads, 1228557/668368=1.8 so probably I was not running the two sessions on the same core.

So what?

SLOB (https://kevinclosson.net/slob/) doesn’t need to be installed on the database server because it is mostly PL/SQL, so there are no roundtrips. And this is a way to run it when you only have access to a PDB, without SYSDBA privileges.

 

The article Running SLOB on Exadata Express Cloud Service first appeared on Blog dbi services.

Can I do it with PostgreSQL? – 9 – Temporary tables

Wed, 2017-02-01 06:47

It has been quite a while since the last posts in this series, so here is what we looked at until now:

In this post we’ll look at temporary tables. Temporary tables hold data for the scope of a session or a transaction, and the data is lost afterwards. So what are they good for? Usually you will use them to store intermediate results that you need for further processing, without the need to store them permanently. A typical use case is a business report that requires some intermediate aggregations. Can we do this in PostgreSQL? Yes, of course. Let’s go.

When we look at the create table syntax the keywords TEMP or TEMPORARY are there:

(postgres@[local]:5439) [postgres] > \h create table
Command:     CREATE TABLE
Description: define a new table
Syntax:
CREATE [ [ GLOBAL | LOCAL ] { TEMPORARY | TEMP } | UNLOGGED ] TABLE [ IF NOT EXISTS ] table_name ( [
  { column_name data_type [ COLLATE collation ] [ column_constraint [ ... ] ]
    | table_constraint
    | LIKE source_table [ like_option ... ] }
    [, ... ]
] )
[ INHERITS ( parent_table [, ... ] ) ]
[ WITH ( storage_parameter [= value] [, ... ] ) | WITH OIDS | WITHOUT OIDS ]
[ ON COMMIT { PRESERVE ROWS | DELETE ROWS | DROP } ]
[ TABLESPACE tablespace_name ]

Then we should be able to create a temporary table, shouldn’t we?

(postgres@[local]:5439) [postgres] > create temporary table tmp1 ( a int, b varchar(10));
CREATE TABLE
(postgres@[local]:5439) [postgres] > \d tmp1
           Table "pg_temp_2.tmp1"
 Column |         Type          | Modifiers 
--------+-----------------------+-----------
 a      | integer               | 
 b      | character varying(10) | 

Easy. Have you looked at the schema it got created in? What is pg_temp_2? I do not have a schema named pg_temp_2:

(postgres@[local]:5439) [postgres] > \dn
  List of schemas
  Name  |  Owner   
--------+----------
 public | postgres
(1 row)

Temporary tables get created in a special schema. You can see it if you query pg_namespace directly:

(postgres@[local]:5439) [postgres] > select nspname from pg_namespace where nspname = 'pg_temp_2';
  nspname  
-----------
 pg_temp_2
(1 row)

Important to know: When you exit from your session the temporary table is gone:

(postgres@[local]:5439) [postgres] > \d tmp1
           Table "pg_temp_2.tmp1"
 Column |         Type          | Modifiers 
--------+-----------------------+-----------
 a      | integer               | 
 b      | character varying(10) | 

(postgres@[local]:5439) [postgres] > \q
postgres@pgbox:/home/postgres/ [PG961] psql postgres
psql (9.6.1 dbi services build)
Type "help" for help.

(postgres@[local]:5439) [postgres] > \d tmp1
Did not find any relation named "tmp1".

What about the visibility of the data in a temporary table? It depends on how you create the table. When you want the data to be visible for the entire lifetime of the session, you do it like above:

(postgres@[local]:5439) [postgres] > create temporary table tmp1 ( a int, b varchar(10));
CREATE TABLE
(postgres@[local]:5439) [postgres] > insert into tmp1 values (1,'1');
INSERT 0 1
(postgres@[local]:5439) [postgres] > select * from tmp1;
 a | b 
---+---
 1 | 1
(1 row)

When you want the data to be visible only for the duration of the current transaction you do it like this:

(postgres@[local]:5439) [postgres] > create temporary table tmp2 ( a int, b varchar(10)) on commit delete rows;
CREATE TABLE
(postgres@[local]:5439) [postgres] > begin;
BEGIN
(postgres@[local]:5439) [postgres] > insert into tmp2 values (1,'1');
INSERT 0 1
(postgres@[local]:5439) [postgres] > end;
COMMIT
(postgres@[local]:5439) [postgres] > select count(*) from tmp2;
 count 
-------
     0
(1 row)
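
The syntax shown at the beginning also allows ON COMMIT DROP, where the whole table, not only its rows, disappears at the end of the transaction. A minimal sketch (it only makes sense inside an explicit transaction, since the CREATE would otherwise be committed, and the table dropped, immediately):

begin;
create temporary table tmp5 ( a int, b varchar(10) ) on commit drop;
insert into tmp5 values (1,'1');
-- the table is usable for the rest of the transaction
select count(*) from tmp5;
commit;
-- after the commit the table itself is gone, not only the rows;
-- \d tmp5 now reports that the relation does not exist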

Another important point to know: when you create a temporary table with the same name as an existing table, you will not see the existing table anymore unless you prefix the normal table with the schema name:

(postgres@[local]:5439) [postgres] > create table tmp3 ( a int );
CREATE TABLE
(postgres@[local]:5439) [postgres] > create temporary table tmp3 ( a int, b varchar(10));
CREATE TABLE
(postgres@[local]:5439) [postgres] > \d tmp3
           Table "pg_temp_2.tmp3"
 Column |         Type          | Modifiers 
--------+-----------------------+-----------
 a      | integer               | 
 b      | character varying(10) | 

(postgres@[local]:5439) [postgres] > \d public.tmp3
     Table "public.tmp3"
 Column |  Type   | Modifiers 
--------+---------+-----------
 a      | integer | 

Be careful with this. What about performance? Is a temporary table faster to insert into than a normal table?

(postgres@[local]:5439) [postgres] > create table tmp1 ( a int, b varchar(10) );
CREATE TABLE
(postgres@[local]:5439) [postgres] > create temporary table tmp2 ( a int, b varchar(10) );
CREATE TABLE

This is the script:

(postgres@[local]:5439) [postgres] > \! cat a.sql
truncate tmp1;
truncate tmp2;
with aa as
( select generate_series ( 1, 1000000 ) a  )
insert into tmp1 (a,b)
       select a, a
         from aa;

with aa as
( select generate_series ( 1, 1000000 ) a  )
insert into tmp2 (a,b)
       select a, a
         from aa;

This is the result:

(postgres@[local]:5439) [postgres] > \i a.sql
TRUNCATE TABLE
Time: 19.626 ms
TRUNCATE TABLE
Time: 35.665 ms
INSERT 0 1000000
Time: 7777.034 ms
INSERT 0 1000000
Time: 3327.984 ms
(postgres@[local]:5439) [postgres] > \i a.sql
TRUNCATE TABLE
Time: 46.221 ms
TRUNCATE TABLE
Time: 33.286 ms
INSERT 0 1000000
Time: 7425.957 ms
INSERT 0 1000000
Time: 3241.140 ms
(postgres@[local]:5439) [postgres] > \i a.sql
TRUNCATE TABLE
Time: 44.365 ms
TRUNCATE TABLE
Time: 35.992 ms
INSERT 0 1000000
Time: 8732.566 ms
INSERT 0 1000000
Time: 3888.795 ms

The temporary table is almost twice as fast to write to as the normal table. Of course you can create indexes on temporary tables as well:

(postgres@[local]:5439) [postgres] > create temporary table tmp4 ( a int, b varchar );
CREATE TABLE
(postgres@[local]:5439) [postgres] > create index tmpi1 on tmp4(a);
CREATE INDEX

.. and of course the index is gone as well once you end your session.

Another point to remember: temporary tables are not visible to the autovacuum daemon. You might think this is not a problem as they disappear anyway, but remember that autovacuum is also responsible for gathering statistics (kicking off analyze). Depending on what you want to do with the data you loaded into the temporary table, it might be wise to issue a manual analyze on it:

(postgres@[local]:5439) [postgres] > analyze verbose tmp4;
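
To tie this back to the business report use case mentioned at the beginning: a typical pattern is to stage the intermediate aggregation in a temporary table and analyze it before the report queries hit it. The table and column names below are made up for illustration:

-- stage the intermediate aggregation for this session only (hypothetical sales table)
create temporary table tmp_sales_per_day as
  select date_trunc('day', sold_at) as day, sum(amount) as total
  from   sales
  group  by date_trunc('day', sold_at);

-- gather statistics manually, since autovacuum will not do it for us
analyze tmp_sales_per_day;

-- reuse the staged data several times in the report
select day, total from tmp_sales_per_day where total > 1000 order by day;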

I hope I do not need to say that temporary tables are not crash safe :) Have fun …

 

The article Can I do it with PostgreSQL? – 9 – Temporary tables first appeared on Blog dbi services.
