Yann Neuhaus

dbi services technical blog

12cR2: no cardinality feedback for small queries

Thu, 2017-01-05 09:52

This follows the comments on Clemens Bleile's post about SQL Plan Directives in Oracle 12.2, where he discovered a new behavior that is not documented. The feature is a good one: bypass the cardinality feedback overhead when the query is fast anyway.

Feature

In Clemens Bleile's post about SQL Plan Directives changes in 12cR2, he determined that the new feature coming with bug 23596611 is responsible for the bypass of automatic reoptimization (the 12c cardinality feedback) in some cases.

The description gives the idea that the reason for the bypass of cardinality feedback is 'small queries':

SQL> select BUGNO,SQL_FEATURE,DESCRIPTION,OPTIMIZER_FEATURE_ENABLE from v$system_fix_control where bugno=23596611;
 
     BUGNO SQL_FEATURE                    DESCRIPTION                            OPTIMIZER_FEATURE_ENABLE
---------- ------------------------------ -------------------------------------- -------------------------
  23596611 QKSFM_STATS_FEEDBACK_23596611  No feedback for small queries          12.2.0.1

This is a new concept, and in the traces there is nothing about 'small queries'. Here is how to get more information.

Function

First, I'm looking for the 'small query' text in the oracle executable:

$ strings $ORACLE_HOME/bin/oracle | grep "small query"
kkocfbCheckCardEst: No feedback for small query (BG %f, CPU %f (ms)
kkocfbCompareExecStats: skipping cost comparision forsmall query (BG %f, CPU %f (ms)

Good. This looks like there is a way to trace some information about that bypass decision, and there's a clue that 'small queries' are evaluated against a buffer gets (BG) and CPU time threshold. More than that, we have the names of the C functions that trace this: kkocfbCheckCardEst and kkocfbCompareExecStats.

kkocfb probably means:

@OracleSK @FranckPachot @johnnyq72 it’s kernel kompile costing cardinality feedback probably. and it seems there is a trace function…

— Frits Hoogland (@fritshoogland) January 4, 2017

Then I looked for a way to get this trace. I didn't know which event could trace that, but others knew:

@fritshoogland @OracleSK @FranckPachot @johnnyq72 are you talking about event 10507 level 512?

— Mauro Pagano (@Mautro) January 4, 2017

However, there is another way to get an idea of the events that are checked by a function.
The idea is described here: http://yong321.freeshell.org/computer/OracleFuncCalls.html

Event

We are not allowed to disassemble Oracle code (this is part of the licensing rules that you accept when you download Oracle software), so I'll not do it here.
If you think you have a special permission to do it, just run:
gdb oracle <<<"disas kkocfbCheckCardEst"

I can't show the disassembled code, so here is how to get the list of events that are checked by the kkocfbCheckCardEst function:
$ gdb oracle <<<"disas kkocfbCheckCardEst" | awk --non-decimal-data '/mov .*,%edi$/{gsub(/[$,]/," ");a=$4}/EventRdbmsErr/{printf "%d\n", a}' | sort -u
10507

The information we have about this event does not describe everything that can be traced:

$ oerr ora 10507
10507, 00000, "Trace bind equivalence logic"
// *Cause:
// *Action:

From that, I don't know at which level to set the event, but in case of doubt, and because levels are often bit flags, you can set it to the maximum, 0xFFFFFFFF:


alter session set events '10507 trace name context forever, level 4294967295';

Trace when cardinality feedback is bypassed

When I run a query whose execution does fewer than 100 buffer gets, I get the following trace with event 10507 level 512:


****** Begin CFB Hints (APA) [sql_id=a71qw8t17qpqq] xsc=0x7fca901f0e30 ctx=0x9415d4f8 ******
Dumping Hints
=============
*********** End CFB Hints (APA) ***********
kkocfbInitCardFdbkCompCtx [sql_id=a71qw8t17qpqq] monitor=y
kkocfbCheckCardEst: No feedback for small query (BG 0.000000, CPU 0.000000 (ms)

The message is clear: no feedback for small query. The buffer gets (BG) and CPU time show 0 here, but in my test case, by increasing the size of the table, I have seen that 100 buffer gets is the threshold:


bq4fc1rdx97av is_reoptimizable: N buffer_gets: 95 cpu_time: 0
bq4fc1rdx97av is_reoptimizable: N buffer_gets: 95 cpu_time: 1000
bq4fc1rdx97av is_reoptimizable: N buffer_gets: 96 cpu_time: 1000
bq4fc1rdx97av is_reoptimizable: N buffer_gets: 96 cpu_time: 1000
bq4fc1rdx97av is_reoptimizable: N buffer_gets: 97 cpu_time: 0
bq4fc1rdx97av is_reoptimizable: N buffer_gets: 97 cpu_time: 0
bq4fc1rdx97av is_reoptimizable: N buffer_gets: 98 cpu_time: 1000
bq4fc1rdx97av is_reoptimizable: N buffer_gets: 98 cpu_time: 0
bq4fc1rdx97av is_reoptimizable: N buffer_gets: 99 cpu_time: 1000
bq4fc1rdx97av is_reoptimizable: N buffer_gets: 99 cpu_time: 0
bq4fc1rdx97av is_reoptimizable: Y buffer_gets: 100 cpu_time: 1000
bq4fc1rdx97av is_reoptimizable: Y buffer_gets: 100 cpu_time: 1000
bq4fc1rdx97av is_reoptimizable: Y buffer_gets: 101 cpu_time: 1000
bq4fc1rdx97av is_reoptimizable: Y buffer_gets: 101 cpu_time: 0
bq4fc1rdx97av is_reoptimizable: Y buffer_gets: 102 cpu_time: 0
bq4fc1rdx97av is_reoptimizable: Y buffer_gets: 102 cpu_time: 1000
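
Those figures can also be checked from V$SQL, which exposes the IS_REOPTIMIZABLE flag; note that BUFFER_GETS and CPU_TIME there are cumulated for the child cursor, so per-execution values must be derived from EXECUTIONS. A minimal sketch (the concatenated output format above is my own):

select sql_id, child_number, is_reoptimizable, buffer_gets, cpu_time, executions
from v$sql
where sql_id='bq4fc1rdx97av';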

Trace when query becomes reoptimizable

When the query reads 100 buffer gets or more, the cursor is marked as reoptimizable, and here is the trace:


****** Begin CFB Hints (APA) [sql_id=a71qw8t17qpqq] xsc=0x7fed7dc2ca40 ctx=0x9418f4f8 ******
Dumping Hints
=============
*********** End CFB Hints (APA) ***********
kkocfbInitCardFdbkCompCtx [sql_id=a71qw8t17qpqq] monitor=y
Reparsing due to card est...
@=0x95426550 type=1 nodeid=1 monitor=Y halias="DEMO_TABLE" loc="SEL$1" act=500 min=0 est=63 next=(nil)
kkocfbCheckCardEst [sql_id=a71qw8t17qpqq] reparse=y ecs=n efb=n ost=n fbs=n
*********** Begin Dump Context (kkocfbCheckCardEst) [sql_id=a71qw8t17qpqq cpcnt=0] ***********
@=0x95426550 type=1 nodeid=1 monitor=Y halias="DEMO_TABLE" loc="SEL$1" act=500 min=0 est=63 next=(nil)
*********** End Dump Context ***********

act=500 is the actual number of rows (A-Rows) and est=63 is the estimated one (E-Rows); this is how the misestimate is detected.

Trace when reoptimized query is run again

At that point, we can see the feedback as OPT_ESTIMATE hints in V$SQL_REOPTIMIZATION_HINTS.
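
For example, a quick query on that view (a minimal sketch) shows the hint text per child cursor:

select sql_id, child_number, hint_text
from v$sql_reoptimization_hints
where sql_id='a71qw8t17qpqq';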

When we run the same query another time, it is parsed again with those hints, fixing the cardinality to the A-Rows of the previous execution.

Here is the trace for this second execution.


kkoarCopyCtx: [sql_id=a71qw8t17qpqq] origin=CFB old=0x63904140 new=0x7fcda3716d40 copyCnt=1 copyClient=y
**************************************************************
kkocfbCopyBestEst: Best Stats
Exec count: 1
CR gets: 279
CU gets: 4
Disk Reads: 0
Disk Writes: 0
IO Read Requests: 0
IO Write Requests: 0
Bytes Read: 0
Bytes Written: 0
Bytes Exchanged with Storage: 0
Bytes Exchanged with Disk: 0
Bytes Simulated Read: 0
Bytes Simulated Returned: 0
Elapsed Time: 6998 (us)
CPU Time: 2000 (us)
User I/O Time: 0 (us)
*********** Begin Dump Context (kkocfbCopyBestEst) **********
*********** End Dump Context ***********
kkocfbCopyCardCtx: oldCtx Dumping string mapping
----------------------
kkocfbCopyCardCtx: newCtx Dumping string mapping
----------------------
**************************************************************
kkocfbCopyCardCtx: Best stats
Exec count: 1
CR gets: 279
CU gets: 4
Disk Reads: 0
Disk Writes: 0
IO Read Requests: 0
IO Write Requests: 0
Bytes Read: 0
Bytes Written: 0
Bytes Exchanged with Storage: 0
Bytes Exchanged with Disk: 0
Bytes Simulated Read: 0
Bytes Simulated Returned: 0
Elapsed Time: 6998 (us)
CPU Time: 2000 (us)
User I/O Time: 0 (us)
**** Begin Dump Context (kkocfbCopyCardCtx - best est) ****
********************* End Dump Context ********************
*********** Begin Dump Context (Copy:old) [sql_id=a71qw8t17qpqq cpcnt=0] ***********
@=0x6446dc58 type=1 nodeid=1 monitor=Y halias="DEMO_TABLE" loc="SEL$1" act=500 min=0 est=63 next=(nil)
*********** End Dump Context ***********
*********** Begin Dump Context (Copy:new) [sql_id=a71qw8t17qpqq cpcnt=0] ***********
@=0x7fcda3716a78 type=1 nodeid=1 monitor=Y halias="DEMO_TABLE" loc="SEL$1" act=500 min=0 est=63 next=(nil)
*********** End Dump Context ***********
kkoarReparse: xsc=0x7fcda3672b98 kxsAutoReoptCtx=0x7fcda3716d40
kkocfbAddCardHints: Dumping string mapping
----------------------
******** Begin CFB Hints [sql_id=a71qw8t17qpqq] xsc=0x7fcda3672b98 ********
Dumping Hints
=============
atom_hint=(@=0x7fcda37831e8 err=0 resol=0 used=0 token=1018 org=6 lvl=3 txt=OPT_ESTIMATE (@"SEL$1" TABLE "DEMO_TABLE"@"SEL$1" ROWS=500.000000 ) )
********** End CFB Hints **********

You can see the OPT_ESTIMATE hints here.


****** Begin CFB Hints (APA) [sql_id=a71qw8t17qpqq] xsc=0x7fcda3672b98 ctx=0x8a274118 ******
Dumping Hints
=============
atom_hint=(@=0x7fcda37831e8 err=0 resol=1 used=0 token=1018 org=6 lvl=3 txt=OPT_ESTIMATE (@"SEL$1" TABLE "DEMO_TABLE"@"SEL$1" ROWS=500.000000 ) )
atom_hint=(@=0x7fcda3782d10 err=0 resol=1 used=0 token=1018 org=6 lvl=3 txt=OPT_ESTIMATE (@"SEL$1" TABLE "DEMO_TABLE"@"SEL$1" ROWS=500.000000 ) )
*********** End CFB Hints (APA) ***********
kkocfbInitCardFdbkCompCtx [sql_id=a71qw8t17qpqq] monitor=y
**************************************************************
kkocfbCopyBestEst: Best Stats
Exec count: 1
CR gets: 279
CU gets: 4
Disk Reads: 0
Disk Writes: 0
IO Read Requests: 0
IO Write Requests: 0
Bytes Read: 0
Bytes Written: 0
Bytes Exchanged with Storage: 0
Bytes Exchanged with Disk: 0
Bytes Simulated Read: 0
Bytes Simulated Returned: 0
Elapsed Time: 6998 (us)
CPU Time: 2000 (us)
User I/O Time: 0 (us)
*********** Begin Dump Context (kkocfbCopyBestEst) **********
*********** End Dump Context ***********
kkocfbCheckCardEst [sql_id=a71qw8t17qpqq] reparse=n ecs=n efb=y ost=n fbs=n

So what?

With those adaptive features, it is good to be able to trace the decisions in order to understand and reproduce the problems we encounter. Event 10507 is very useful: it is the execution-time counterpart of event 10053, which explains compile-time decisions. And in the latest versions, the optimizer is more and more present at execution time.
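
For reference, both events use the same session-level syntax; level 1 is the classic level for the 10053 compile-time trace, and 512 is the 10507 level mentioned above:

alter session set events '10053 trace name context forever, level 1';
alter session set events '10507 trace name context forever, level 512';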

We have always seen problems coming from cardinality feedback. Most of them come from bad statistics, or from a data model where there is no single optimal access path. The CBO keeps trying to find a better plan, and sometimes the best is the enemy of the good.
Now, in 12.2, you have a very good way to avoid cardinality feedback side effects: tune your queries and your data model so that the critical queries do fewer than 100 logical reads.
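
And if you want to check which cursors are currently flagged on your system, a simple query like this one lists them (a sketch; the ordering is my choice):

select sql_id, child_number, buffer_gets, executions
from v$sql
where is_reoptimizable='Y'
order by buffer_gets desc;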

 


NLS defaults for LANGUAGE and TERRITORY

Wed, 2017-01-04 07:09

When you set NLS_LANGUAGE then the values of NLS_DATE_LANGUAGE and NLS_SORT are set to their defaults for the LANGUAGE you set.
When you set NLS_TERRITORY then values of NLS_DATE_FORMAT, NLS_NUMERIC_CHARACTERS, NLS_CURRENCY and NLS_ISO_CURRENCY are set to the defaults for the TERRITORY you set. For reference, I’ve listed the defaults in this post.

Here, for each value found in V$NLS_VALID_VALUES, I set the session parameter and display the values that are derived, as well as an example date.

TERRITORY

The TERRITORY sets the decimal and thousand separators, the currency, and the date format. I display the short date ('DS' format) example for the last day of 2016.

SQL> with
2 function nls_territory_defaults(t varchar2) return varchar2 as
3 s varchar2(2017):='';
4 begin
5 execute immediate 'alter session set nls_territory='''||t||'''';
6 for i in (select * from nls_session_parameters where parameter in ('NLS_DATE_FORMAT','NLS_NUMERIC_CHARACTERS','NLS_CURRENCY','NLS_ISO_CURRENCY') order by parameter desc)
7 loop
8 s:=s||lpad(i.parameter,20)||'='||rpad(i.value,20);
9 end loop;
10 return s||' example: '||to_char(trunc(sysdate,'yy')-1,'ds');
11 end;
12 select rpad(value,20)||nls_territory_defaults(value) "TERRITORY default NLS settings" from v$nls_valid_values where parameter='TERRITORY' order by 1
13 /
 
TERRITORY default NLS settings
----------------------------------------------------------------------------------------------------------------------------------------------------------------------
AFGHANISTAN NLS_NUMERIC_CHARACTE=,. NLS_ISO_CURRENCY=AFGHANISTAN NLS_DATE_FORMAT=DD/MM/RR NLS_CURRENCY=؋ example: 31/12/16
ALBANIA NLS_NUMERIC_CHARACTE=,. NLS_ISO_CURRENCY=ALBANIA NLS_DATE_FORMAT=DD.MM.RRRR NLS_CURRENCY=Lek example: 31.12.2016
ALGERIA NLS_NUMERIC_CHARACTE=., NLS_ISO_CURRENCY=ALGERIA NLS_DATE_FORMAT=DD-MM-RR NLS_CURRENCY=د.ج. example: 31/12/2016
AMERICA NLS_NUMERIC_CHARACTE=., NLS_ISO_CURRENCY=AMERICA NLS_DATE_FORMAT=DD-MON-RR NLS_CURRENCY=$ example: 12/31/2016
ARGENTINA NLS_NUMERIC_CHARACTE=,. NLS_ISO_CURRENCY=ARGENTINA NLS_DATE_FORMAT=DD/MM/RRRR NLS_CURRENCY=$ example: 31/12/2016
ARMENIA NLS_NUMERIC_CHARACTE=., NLS_ISO_CURRENCY=ARMENIA NLS_DATE_FORMAT=DD.fmMM.RRRR NLS_CURRENCY=դր. example: 31.12.2016
AUSTRALIA NLS_NUMERIC_CHARACTE=., NLS_ISO_CURRENCY=AUSTRALIA NLS_DATE_FORMAT=DD/MON/RR NLS_CURRENCY=$ example: 31/12/2016
AUSTRIA NLS_NUMERIC_CHARACTE=,. NLS_ISO_CURRENCY=AUSTRIA NLS_DATE_FORMAT=DD.MM.RR NLS_CURRENCY=€ example: 31.12.2016
AZERBAIJAN NLS_NUMERIC_CHARACTE=, NLS_ISO_CURRENCY=AZERBAIJAN NLS_DATE_FORMAT=DD.MM.RRRR NLS_CURRENCY=₼ example: 31.12.2016
BAHAMAS NLS_NUMERIC_CHARACTE=., NLS_ISO_CURRENCY=BAHAMAS NLS_DATE_FORMAT=DD/MM/RRRR NLS_CURRENCY=B$ example: 31/12/2016
BAHRAIN NLS_NUMERIC_CHARACTE=., NLS_ISO_CURRENCY=BAHRAIN NLS_DATE_FORMAT=DD-MM-RR NLS_CURRENCY=د.ب. example: 31/12/2016
BANGLADESH NLS_NUMERIC_CHARACTE=., NLS_ISO_CURRENCY=BANGLADESH NLS_DATE_FORMAT=DD-MM-RRRR NLS_CURRENCY=৳ example: 31-12-2016
BELARUS NLS_NUMERIC_CHARACTE=, NLS_ISO_CURRENCY=BELARUS NLS_DATE_FORMAT=DD.MM.RR NLS_CURRENCY=р. example: 31.12.2016
BELGIUM NLS_NUMERIC_CHARACTE=,. NLS_ISO_CURRENCY=BELGIUM NLS_DATE_FORMAT=DD/MM/RR NLS_CURRENCY=€ example: 31/12/2016
BELIZE NLS_NUMERIC_CHARACTE=., NLS_ISO_CURRENCY=BELIZE NLS_DATE_FORMAT=fmDD/MM/RRRR NLS_CURRENCY=BZ$ example: 12/31/2016
BERMUDA NLS_NUMERIC_CHARACTE=., NLS_ISO_CURRENCY=BERMUDA NLS_DATE_FORMAT=DD/MM/RRRR NLS_CURRENCY=BD$ example: 31/12/2016
BOLIVIA NLS_NUMERIC_CHARACTE=,. NLS_ISO_CURRENCY=BOLIVIA NLS_DATE_FORMAT=fmDD-MM-RR NLS_CURRENCY=B$ example: 12-31-2016
BOSNIA AND HERZEGOVINLS_NUMERIC_CHARACTE=,. NLS_ISO_CURRENCY=BOSNIA AND HERZEGOVI NLS_DATE_FORMAT=fmMM.DD.RRRR NLS_CURRENCY=КМ example: 12.31.2016
BRAZIL NLS_NUMERIC_CHARACTE=,. NLS_ISO_CURRENCY=BRAZIL NLS_DATE_FORMAT=DD/MM/RR NLS_CURRENCY=R$ example: 31/12/2016
BULGARIA NLS_NUMERIC_CHARACTE=., NLS_ISO_CURRENCY=BULGARIA NLS_DATE_FORMAT=RRRR-MM-DD NLS_CURRENCY=лв example: 31.12.2016
CAMBODIA NLS_NUMERIC_CHARACTE=., NLS_ISO_CURRENCY=CAMBODIA NLS_DATE_FORMAT=RRRR-fmMM-DD NLS_CURRENCY=៛ example: 2016-12-31
CAMEROON NLS_NUMERIC_CHARACTE=, NLS_ISO_CURRENCY=CAMEROON NLS_DATE_FORMAT=DD/MM/RR NLS_CURRENCY=CFA example: 31/12/16
CANADA NLS_NUMERIC_CHARACTE=, NLS_ISO_CURRENCY=CANADA NLS_DATE_FORMAT=RR-MM-DD NLS_CURRENCY=$ example: 2016-12-31
CATALONIA NLS_NUMERIC_CHARACTE=,. NLS_ISO_CURRENCY=CATALONIA NLS_DATE_FORMAT=DD/MM/RR NLS_CURRENCY=€ example: 31-12-16
CHILE NLS_NUMERIC_CHARACTE=,. NLS_ISO_CURRENCY=CHILE NLS_DATE_FORMAT=DD/MM/RR NLS_CURRENCY=$ example: 31-12-2016
CHINA NLS_NUMERIC_CHARACTE=., NLS_ISO_CURRENCY=CHINA NLS_DATE_FORMAT=DD-MON-RR NLS_CURRENCY=¥ example: 2016-12-31
CIS NLS_NUMERIC_CHARACTE=, NLS_ISO_CURRENCY=CIS NLS_DATE_FORMAT=DD.MM.RR NLS_CURRENCY=р. example: 31.12.16
COLOMBIA NLS_NUMERIC_CHARACTE=,. NLS_ISO_CURRENCY=COLOMBIA NLS_DATE_FORMAT=DD/MM/RR NLS_CURRENCY=$ example: 31/12/2016
CONGO BRAZZAVILLE NLS_NUMERIC_CHARACTE=, NLS_ISO_CURRENCY=CONGO BRAZZAVILLE NLS_DATE_FORMAT=DD/MM/RR NLS_CURRENCY=CFA example: 31/12/16
CONGO KINSHASA NLS_NUMERIC_CHARACTE=, NLS_ISO_CURRENCY=CONGO KINSHASA NLS_DATE_FORMAT=DD/MM/RR NLS_CURRENCY=FrCD example: 31/12/16
COSTA RICA NLS_NUMERIC_CHARACTE=,. NLS_ISO_CURRENCY=COSTA RICA NLS_DATE_FORMAT=DD/MM/RR NLS_CURRENCY=C example: 31/12/2016
CROATIA NLS_NUMERIC_CHARACTE=,. NLS_ISO_CURRENCY=CROATIA NLS_DATE_FORMAT=DD.MM.RR NLS_CURRENCY=kn example: 31.12.16
CYPRUS NLS_NUMERIC_CHARACTE=,. NLS_ISO_CURRENCY=CYPRUS NLS_DATE_FORMAT=DD/MM/RR NLS_CURRENCY=£ example: 31/12/2016
CZECH REPUBLIC NLS_NUMERIC_CHARACTE=,. NLS_ISO_CURRENCY=CZECH REPUBLIC NLS_DATE_FORMAT=DD.MM.RR NLS_CURRENCY=Kč example: 31.12.2016
CZECHOSLOVAKIA NLS_NUMERIC_CHARACTE=,. NLS_ISO_CURRENCY=CZECHOSLOVAKIA NLS_DATE_FORMAT=DD.MM.RR NLS_CURRENCY=Kčs example: 31.12.16
DENMARK NLS_NUMERIC_CHARACTE=,. NLS_ISO_CURRENCY=DENMARK NLS_DATE_FORMAT=RR-MM-DD NLS_CURRENCY=kr example: 31-12-2016
DJIBOUTI NLS_NUMERIC_CHARACTE=., NLS_ISO_CURRENCY=DJIBOUTI NLS_DATE_FORMAT=DD/MM/RR NLS_CURRENCY=ف.ج. example: 31/12/2016
ECUADOR NLS_NUMERIC_CHARACTE=,. NLS_ISO_CURRENCY=ECUADOR NLS_DATE_FORMAT=DD/MM/RRRR NLS_CURRENCY=$ example: 31/12/2016
EGYPT NLS_NUMERIC_CHARACTE=., NLS_ISO_CURRENCY=EGYPT NLS_DATE_FORMAT=DD/MM/RR NLS_CURRENCY=ج.م. example: 31/12/2016
EL SALVADOR NLS_NUMERIC_CHARACTE=., NLS_ISO_CURRENCY=EL SALVADOR NLS_DATE_FORMAT=DD/MM/RR NLS_CURRENCY=C example: 31/12/2016
ESTONIA NLS_NUMERIC_CHARACTE=, NLS_ISO_CURRENCY=ESTONIA NLS_DATE_FORMAT=DD.MM.RRRR NLS_CURRENCY=kr example: 31.12.2016
ETHIOPIA NLS_NUMERIC_CHARACTE=., NLS_ISO_CURRENCY=ETHIOPIA NLS_DATE_FORMAT=fmDD/MM/RRRR NLS_CURRENCY=ብር example: 12/31/2016
FINLAND NLS_NUMERIC_CHARACTE=, NLS_ISO_CURRENCY=FINLAND NLS_DATE_FORMAT=DD.MM.RRRR NLS_CURRENCY=€ example: 31.12.2016
FRANCE NLS_NUMERIC_CHARACTE=, NLS_ISO_CURRENCY=FRANCE NLS_DATE_FORMAT=DD/MM/RR NLS_CURRENCY=€ example: 31/12/2016
FYR MACEDONIA NLS_NUMERIC_CHARACTE=,. NLS_ISO_CURRENCY=FYR MACEDONIA NLS_DATE_FORMAT=DD.MM.RRRR NLS_CURRENCY=ден. example: 31.12.2016
GABON NLS_NUMERIC_CHARACTE=, NLS_ISO_CURRENCY=GABON NLS_DATE_FORMAT=DD/MM/RR NLS_CURRENCY=CFA example: 31/12/16
GERMANY NLS_NUMERIC_CHARACTE=,. NLS_ISO_CURRENCY=GERMANY NLS_DATE_FORMAT=DD.MM.RR NLS_CURRENCY=€ example: 31.12.2016
GREECE NLS_NUMERIC_CHARACTE=,. NLS_ISO_CURRENCY=GREECE NLS_DATE_FORMAT=DD/MM/RR NLS_CURRENCY=€ example: 31/12/2016
GUATEMALA NLS_NUMERIC_CHARACTE=., NLS_ISO_CURRENCY=GUATEMALA NLS_DATE_FORMAT=DD/MM/RR NLS_CURRENCY=Q example: 31/12/2016
HONDURAS NLS_NUMERIC_CHARACTE=., NLS_ISO_CURRENCY=HONDURAS NLS_DATE_FORMAT=DD/MM/RR NLS_CURRENCY=L example: 31.12.2016
HONG KONG NLS_NUMERIC_CHARACTE=., NLS_ISO_CURRENCY=HONG KONG NLS_DATE_FORMAT=DD-MON-RR NLS_CURRENCY=$ example: 31/12/2016
HUNGARY NLS_NUMERIC_CHARACTE=, NLS_ISO_CURRENCY=HUNGARY NLS_DATE_FORMAT=RR-MON-DD NLS_CURRENCY=Ft example: 2016.12.31.
ICELAND NLS_NUMERIC_CHARACTE=,. NLS_ISO_CURRENCY=ICELAND NLS_DATE_FORMAT=DD.MM.RRRR NLS_CURRENCY=kr. example: 31.12.2016
INDIA NLS_NUMERIC_CHARACTE=., NLS_ISO_CURRENCY=INDIA NLS_DATE_FORMAT=DD-MM-RR NLS_CURRENCY=Rs example: 31/12/2016
INDONESIA NLS_NUMERIC_CHARACTE=,. NLS_ISO_CURRENCY=INDONESIA NLS_DATE_FORMAT=DD-MM-RRRR NLS_CURRENCY=Rp example: 31-12-2016
IRAN NLS_NUMERIC_CHARACTE=., NLS_ISO_CURRENCY=IRAN NLS_DATE_FORMAT=RRRR/fmMM/fmDD NLS_CURRENCY=ريال example: 2016/12/31
IRAQ NLS_NUMERIC_CHARACTE=., NLS_ISO_CURRENCY=IRAQ NLS_DATE_FORMAT=DD/MM/RR NLS_CURRENCY=د.ع. example: 31/12/2016
IRELAND NLS_NUMERIC_CHARACTE=., NLS_ISO_CURRENCY=IRELAND NLS_DATE_FORMAT=DD-MON-RR NLS_CURRENCY=€ example: 31/12/2016
ISRAEL NLS_NUMERIC_CHARACTE=., NLS_ISO_CURRENCY=ISRAEL NLS_DATE_FORMAT=DD-MON-RRRR NLS_CURRENCY=ש"ח example: 31/12/2016
ITALY NLS_NUMERIC_CHARACTE=,. NLS_ISO_CURRENCY=ITALY NLS_DATE_FORMAT=DD-MON-RR NLS_CURRENCY=€ example: 31/12/2016
IVORY COAST NLS_NUMERIC_CHARACTE=, NLS_ISO_CURRENCY=IVORY COAST NLS_DATE_FORMAT=DD/MM/RR NLS_CURRENCY=CFA example: 31/12/16
JAPAN NLS_NUMERIC_CHARACTE=., NLS_ISO_CURRENCY=JAPAN NLS_DATE_FORMAT=RR-MM-DD NLS_CURRENCY=¥ example: 2016/12/31
JORDAN NLS_NUMERIC_CHARACTE=., NLS_ISO_CURRENCY=JORDAN NLS_DATE_FORMAT=DD/MM/RR NLS_CURRENCY=د.ا.‏ example: 31/12/2016
KAZAKHSTAN NLS_NUMERIC_CHARACTE=, NLS_ISO_CURRENCY=KAZAKHSTAN NLS_DATE_FORMAT=DD.MM.RRRR NLS_CURRENCY=KZT example: 31.12.2016
KENYA NLS_NUMERIC_CHARACTE=., NLS_ISO_CURRENCY=KENYA NLS_DATE_FORMAT=fmMM/DD/RRRR NLS_CURRENCY=S example: 12/31/2016
KOREA NLS_NUMERIC_CHARACTE=., NLS_ISO_CURRENCY=KOREA NLS_DATE_FORMAT=RR/MM/DD NLS_CURRENCY=₩ example: 2016/12/31
KUWAIT NLS_NUMERIC_CHARACTE=., NLS_ISO_CURRENCY=KUWAIT NLS_DATE_FORMAT=DD/MM/RR NLS_CURRENCY=د.ك. example: 31/12/2016
LAOS NLS_NUMERIC_CHARACTE=., NLS_ISO_CURRENCY=LAOS NLS_DATE_FORMAT=DD/fmMM/RRRR NLS_CURRENCY=₭ example: 31/12/2016
LATVIA NLS_NUMERIC_CHARACTE=, NLS_ISO_CURRENCY=LATVIA NLS_DATE_FORMAT=RRRR.MM.DD NLS_CURRENCY=Ls example: 2016.12.31
LEBANON NLS_NUMERIC_CHARACTE=., NLS_ISO_CURRENCY=LEBANON NLS_DATE_FORMAT=DD/MM/RR NLS_CURRENCY=ل.ل. example: 31/12/2016
LIBYA NLS_NUMERIC_CHARACTE=., NLS_ISO_CURRENCY=LIBYA NLS_DATE_FORMAT=DD/MM/RR NLS_CURRENCY=د.ل. example: 31/12/2016
LITHUANIA NLS_NUMERIC_CHARACTE=, NLS_ISO_CURRENCY=LITHUANIA NLS_DATE_FORMAT=RRRR.MM.DD NLS_CURRENCY=€ example: 2016-12-31
LUXEMBOURG NLS_NUMERIC_CHARACTE=,. NLS_ISO_CURRENCY=LUXEMBOURG NLS_DATE_FORMAT=DD/MM/RR NLS_CURRENCY=€ example: 31.12.2016
MACEDONIA NLS_NUMERIC_CHARACTE=,. NLS_ISO_CURRENCY=MACEDONIA NLS_DATE_FORMAT=DD.MM.RRRR NLS_CURRENCY=den. example: 31.12.2016
MALAYSIA NLS_NUMERIC_CHARACTE=., NLS_ISO_CURRENCY=MALAYSIA NLS_DATE_FORMAT=DD/MM/RRRR NLS_CURRENCY=RM example: 31/12/2016
MALDIVES NLS_NUMERIC_CHARACTE=., NLS_ISO_CURRENCY=MALDIVES NLS_DATE_FORMAT=fmDD/fmMM/RR NLS_CURRENCY=ރ. example: 12/31/16
MALTA NLS_NUMERIC_CHARACTE=., NLS_ISO_CURRENCY=MALTA NLS_DATE_FORMAT=fmDD/MM/RRRR NLS_CURRENCY=€ example: 12/31/2016
MAURITANIA NLS_NUMERIC_CHARACTE=., NLS_ISO_CURRENCY=MAURITANIA NLS_DATE_FORMAT=DD/MM/RR NLS_CURRENCY=ا.م. example: 31/12/2016
MEXICO NLS_NUMERIC_CHARACTE=., NLS_ISO_CURRENCY=MEXICO NLS_DATE_FORMAT=DD/MM/RR NLS_CURRENCY=$ example: 31/12/2016
MONTENEGRO NLS_NUMERIC_CHARACTE=,. NLS_ISO_CURRENCY=MONTENEGRO NLS_DATE_FORMAT=DD.MM.RRRR. NLS_CURRENCY=€ example: 31.12.2016.
MOROCCO NLS_NUMERIC_CHARACTE=., NLS_ISO_CURRENCY=MOROCCO NLS_DATE_FORMAT=DD-MM-RR NLS_CURRENCY=د.م. example: 31/12/2016
NEPAL NLS_NUMERIC_CHARACTE=., NLS_ISO_CURRENCY=NEPAL NLS_DATE_FORMAT=fmDD/MM/RRRR NLS_CURRENCY=र example: 12/31/2016
NEW ZEALAND NLS_NUMERIC_CHARACTE=., NLS_ISO_CURRENCY=NEW ZEALAND NLS_DATE_FORMAT=DD/MM/RR NLS_CURRENCY=$ example: 31/12/2016
NICARAGUA NLS_NUMERIC_CHARACTE=., NLS_ISO_CURRENCY=NICARAGUA NLS_DATE_FORMAT=DD/MM/RR NLS_CURRENCY=C$ example: 31/12/2016
NIGERIA NLS_NUMERIC_CHARACTE=., NLS_ISO_CURRENCY=NIGERIA NLS_DATE_FORMAT=DD/fmMM/RRRR NLS_CURRENCY=₦ example: 31/12/2016
NORWAY NLS_NUMERIC_CHARACTE=, NLS_ISO_CURRENCY=NORWAY NLS_DATE_FORMAT=DD.MM.RRRR NLS_CURRENCY=kr example: 31.12.2016
OMAN NLS_NUMERIC_CHARACTE=., NLS_ISO_CURRENCY=OMAN NLS_DATE_FORMAT=DD/MM/RR NLS_CURRENCY=ر.ع. example: 31/12/2016
PAKISTAN NLS_NUMERIC_CHARACTE=., NLS_ISO_CURRENCY=PAKISTAN NLS_DATE_FORMAT=DD-MON-RR NLS_CURRENCY=PKR example: 31/12/2016
PANAMA NLS_NUMERIC_CHARACTE=., NLS_ISO_CURRENCY=PANAMA NLS_DATE_FORMAT=DD/MM/RR NLS_CURRENCY=B/ example: 12/31/2016
PARAGUAY NLS_NUMERIC_CHARACTE=,. NLS_ISO_CURRENCY=PARAGUAY NLS_DATE_FORMAT=fmDD/MM/RR NLS_CURRENCY=G example: 12/31/16
PERU NLS_NUMERIC_CHARACTE=,. NLS_ISO_CURRENCY=PERU NLS_DATE_FORMAT=DD/MM/RR NLS_CURRENCY=S/ example: 31/12/2016
PHILIPPINES NLS_NUMERIC_CHARACTE=., NLS_ISO_CURRENCY=PHILIPPINES NLS_DATE_FORMAT=MM/DD/RRRR NLS_CURRENCY=Php example: 12/31/2016
POLAND NLS_NUMERIC_CHARACTE=, NLS_ISO_CURRENCY=POLAND NLS_DATE_FORMAT=RR/MM/DD NLS_CURRENCY=zł example: 2016-12-31
PORTUGAL NLS_NUMERIC_CHARACTE=,. NLS_ISO_CURRENCY=PORTUGAL NLS_DATE_FORMAT=RR.MM.DD NLS_CURRENCY=€ example: 31-12-2016
PUERTO RICO NLS_NUMERIC_CHARACTE=., NLS_ISO_CURRENCY=PUERTO RICO NLS_DATE_FORMAT=DD/MM/RR NLS_CURRENCY=$ example: 31/12/2016
QATAR NLS_NUMERIC_CHARACTE=., NLS_ISO_CURRENCY=QATAR NLS_DATE_FORMAT=DD/MM/RR NLS_CURRENCY=ر.ق. example: 31/12/2016
ROMANIA NLS_NUMERIC_CHARACTE=,. NLS_ISO_CURRENCY=ROMANIA NLS_DATE_FORMAT=DD-MM-RRRR NLS_CURRENCY=LEI example: 31.12.2016
RUSSIA NLS_NUMERIC_CHARACTE=, NLS_ISO_CURRENCY=RUSSIA NLS_DATE_FORMAT=DD.MM.RR NLS_CURRENCY=₽ example: 31.12.2016
SAUDI ARABIA NLS_NUMERIC_CHARACTE=., NLS_ISO_CURRENCY=SAUDI ARABIA NLS_DATE_FORMAT=DD/MM/RR NLS_CURRENCY=ر.س. example: 31/12/2016
SENEGAL NLS_NUMERIC_CHARACTE=, NLS_ISO_CURRENCY=SENEGAL NLS_DATE_FORMAT=DD/MM/RRRR NLS_CURRENCY=CFA example: 31/12/2016
SERBIA NLS_NUMERIC_CHARACTE=,. NLS_ISO_CURRENCY=SERBIA NLS_DATE_FORMAT=DD.MM.RRRR. NLS_CURRENCY=RSD example: 31.12.2016.
SERBIA AND MONTENEGRNLS_NUMERIC_CHARACTE=,. NLS_ISO_CURRENCY=SERBIA AND MONTENEGR NLS_DATE_FORMAT=DD.MM.RRRR NLS_CURRENCY=din. example: 31.12.2016
SINGAPORE NLS_NUMERIC_CHARACTE=., NLS_ISO_CURRENCY=SINGAPORE NLS_DATE_FORMAT=DD/MM/RR NLS_CURRENCY=S$ example: 31/12/2016
SLOVAKIA NLS_NUMERIC_CHARACTE=,. NLS_ISO_CURRENCY=SLOVAKIA NLS_DATE_FORMAT=DD.MM.RR NLS_CURRENCY=Sk example: 31.12.16
SLOVENIA NLS_NUMERIC_CHARACTE=,. NLS_ISO_CURRENCY=SLOVENIA NLS_DATE_FORMAT=DD.MM.RR NLS_CURRENCY=SIT example: 31.12.16
SOMALIA NLS_NUMERIC_CHARACTE=., NLS_ISO_CURRENCY=SOMALIA NLS_DATE_FORMAT=DD/MM/RR NLS_CURRENCY=ش.ص. example: 31/12/2016
SOUTH AFRICA NLS_NUMERIC_CHARACTE=., NLS_ISO_CURRENCY=SOUTH AFRICA NLS_DATE_FORMAT=DD/MON/RR NLS_CURRENCY=R example: 2016/12/31
SPAIN NLS_NUMERIC_CHARACTE=,. NLS_ISO_CURRENCY=SPAIN NLS_DATE_FORMAT=DD/MM/RR NLS_CURRENCY=€ example: 31/12/2016
SRI LANKA NLS_NUMERIC_CHARACTE=., NLS_ISO_CURRENCY=SRI LANKA NLS_DATE_FORMAT=RRRR/fmMM/fmDD NLS_CURRENCY=ريال example: 2016/12/31
SUDAN NLS_NUMERIC_CHARACTE=., NLS_ISO_CURRENCY=SUDAN NLS_DATE_FORMAT=DD/MM/RR NLS_CURRENCY=ج.س. example: 31/12/2016
SWEDEN NLS_NUMERIC_CHARACTE=,. NLS_ISO_CURRENCY=SWEDEN NLS_DATE_FORMAT=RRRR-MM-DD NLS_CURRENCY=Kr example: 2016-12-31
SWITZERLAND NLS_NUMERIC_CHARACTE=.' NLS_ISO_CURRENCY=SWITZERLAND NLS_DATE_FORMAT=DD.MM.RR NLS_CURRENCY=SFr. example: 31.12.2016
SYRIA NLS_NUMERIC_CHARACTE=., NLS_ISO_CURRENCY=SYRIA NLS_DATE_FORMAT=DD/MM/RR NLS_CURRENCY=ل.س. example: 31/12/2016
TAIWAN NLS_NUMERIC_CHARACTE=., NLS_ISO_CURRENCY=TAIWAN NLS_DATE_FORMAT=DD-MON-RR NLS_CURRENCY=NT$ example: 2016/12/31
TANZANIA NLS_NUMERIC_CHARACTE=., NLS_ISO_CURRENCY=TANZANIA NLS_DATE_FORMAT=fmMM/DD/RRRR NLS_CURRENCY=TSh example: 12/31/2016
THAILAND NLS_NUMERIC_CHARACTE=., NLS_ISO_CURRENCY=THAILAND NLS_DATE_FORMAT=DD MON RRRR NLS_CURRENCY=฿ example: 31/12/2016
THE NETHERLANDS NLS_NUMERIC_CHARACTE=,. NLS_ISO_CURRENCY=THE NETHERLANDS NLS_DATE_FORMAT=DD-MM-RR NLS_CURRENCY=€ example: 31-12-2016
TUNISIA NLS_NUMERIC_CHARACTE=., NLS_ISO_CURRENCY=TUNISIA NLS_DATE_FORMAT=DD-MM-RR NLS_CURRENCY=د.ت. example: 31/12/2016
TURKEY NLS_NUMERIC_CHARACTE=,. NLS_ISO_CURRENCY=TURKEY NLS_DATE_FORMAT=DD/MM/RRRR NLS_CURRENCY=₺ example: 31.12.2016
UGANDA NLS_NUMERIC_CHARACTE=., NLS_ISO_CURRENCY=UGANDA NLS_DATE_FORMAT=DD/MM/RRRR NLS_CURRENCY=CFA example: 31/12/2016
UKRAINE NLS_NUMERIC_CHARACTE=, NLS_ISO_CURRENCY=UKRAINE NLS_DATE_FORMAT=DD.MM.RRRR NLS_CURRENCY=грн. example: 31.12.2016
UNITED ARAB EMIRATESNLS_NUMERIC_CHARACTE=., NLS_ISO_CURRENCY=UNITED ARAB EMIRATES NLS_DATE_FORMAT=DD/MM/RR NLS_CURRENCY=د.إ. example: 31/12/2016
UNITED KINGDOM NLS_NUMERIC_CHARACTE=., NLS_ISO_CURRENCY=UNITED KINGDOM NLS_DATE_FORMAT=DD-MON-RR NLS_CURRENCY=£ example: 31/12/2016
URUGUAY NLS_NUMERIC_CHARACTE=,. NLS_ISO_CURRENCY=URUGUAY NLS_DATE_FORMAT=DD/MM/RR NLS_CURRENCY=NU$ example: 31/12/2016
UZBEKISTAN NLS_NUMERIC_CHARACTE=,. NLS_ISO_CURRENCY=UZBEKISTAN NLS_DATE_FORMAT=DD/MON/RR NLS_CURRENCY=UZS example: 31/DEC/16
VENEZUELA NLS_NUMERIC_CHARACTE=,. NLS_ISO_CURRENCY=VENEZUELA NLS_DATE_FORMAT=DD/MM/RR NLS_CURRENCY=Bs example: 31/12/2016
VIETNAM NLS_NUMERIC_CHARACTE=,. NLS_ISO_CURRENCY=VIETNAM NLS_DATE_FORMAT=DD-MM-RRRR NLS_CURRENCY=₫ example: 31-12-2016
YEMEN NLS_NUMERIC_CHARACTE=., NLS_ISO_CURRENCY=YEMEN NLS_DATE_FORMAT=DD/MM/RR NLS_CURRENCY=ر.ي. example: 31/12/2016
YUGOSLAVIA NLS_NUMERIC_CHARACTE=,. NLS_ISO_CURRENCY=YUGOSLAVIA NLS_DATE_FORMAT=DD.MM.RRRR NLS_CURRENCY=din. example: 31.12.2016
ZAMBIA NLS_NUMERIC_CHARACTE=., NLS_ISO_CURRENCY=ZAMBIA NLS_DATE_FORMAT=DD/MM/RRRR NLS_CURRENCY=ZK example: 31/12/2016

LANGUAGE

The LANGUAGE sets the linguistic sorting and the language used for words in dates. I display the long date ('DL' format) example for the last day of 2016.

SQL> with
2 function nls_language_defaults(t varchar2) return varchar2 as
3 s varchar2(2017):='';
4 begin
5 execute immediate 'alter session set nls_language='''||t||'''';
6 for i in (select * from nls_session_parameters where parameter in ('NLS_DATE_LANGUAGE','NLS_SORT') order by parameter desc)
7 loop
8 s:=s||lpad(i.parameter,20)||'='||rpad(i.value,20);
9 end loop;
10 return s||' example: '||to_char(trunc(sysdate,'yy')-1,'dl');
11 end;
12 select rpad(value,20)||nls_language_defaults(value) "LANGUAGE default NLS settings" from v$nls_valid_values where parameter='LANGUAGE' order by 1
13 /
 
LANGUAGE default NLS settings
-----------------------------------------------------------------------------------------------------------------------------------------------
ALBANIAN NLS_SORT=GENERIC_M NLS_DATE_LANGUAGE=ALBANIAN example: E Shtunë, Dhjetor 31, 2016
AMERICAN NLS_SORT=BINARY NLS_DATE_LANGUAGE=AMERICAN example: Saturday, December 31, 2016
AMHARIC NLS_SORT=GENERIC_M NLS_DATE_LANGUAGE=AMHARIC example: ቅዳሜ, ዲሴምበር 31, 2016
ARABIC NLS_SORT=ARABIC NLS_DATE_LANGUAGE=ARABIC example: السبت, ديسمبر 31, 2016
ARMENIAN NLS_SORT=GENERIC_M NLS_DATE_LANGUAGE=ARMENIAN example: Շաբա, Դեկտեմբեր 31, 2016
ASSAMESE NLS_SORT=BINARY NLS_DATE_LANGUAGE=ASSAMESE example: শনিবাৰ, ডিচেম্বৰ 31, 2016
AZERBAIJANI NLS_SORT=AZERBAIJANI NLS_DATE_LANGUAGE=AZERBAIJANI example: Şənbə, Dekabr 31, 2016
BANGLA NLS_SORT=BINARY NLS_DATE_LANGUAGE=BANGLA example: শনিবার, ডিসেম্বর 31, 2016
BELARUSIAN NLS_SORT=RUSSIAN NLS_DATE_LANGUAGE=BELARUSIAN example: Субота, Сьнежань 31, 2016
BENGALI NLS_SORT=BENGALI NLS_DATE_LANGUAGE=BENGALI example: শনিবার, ডিসেমর 31, 2016
BRAZILIAN PORTUGUESE NLS_SORT=WEST_EUROPEAN NLS_DATE_LANGUAGE=BRAZILIAN PORTUGUESE example: Sábado, Dezembro 31, 2016
BULGARIAN NLS_SORT=BULGARIAN NLS_DATE_LANGUAGE=BULGARIAN example: Събота, Декември 31, 2016
CANADIAN FRENCH NLS_SORT=CANADIAN_M NLS_DATE_LANGUAGE=CANADIAN FRENCH example: Samedi, Décembre 31, 2016
CATALAN NLS_SORT=CATALAN NLS_DATE_LANGUAGE=CATALAN example: Dissabte, Desembre 31, 2016
CROATIAN NLS_SORT=CROATIAN NLS_DATE_LANGUAGE=CROATIAN example: Subota, Prosinac 31, 2016
CYRILLIC KAZAKH NLS_SORT=GENERIC_M NLS_DATE_LANGUAGE=CYRILLIC KAZAKH example: Сенбі, Желтоқсан 31, 2016
CYRILLIC SERBIAN NLS_SORT=GENERIC_M NLS_DATE_LANGUAGE=CYRILLIC SERBIAN example: Субота, Децембар 31, 2016
CYRILLIC UZBEK NLS_SORT=GENERIC_M NLS_DATE_LANGUAGE=CYRILLIC UZBEK example: Шанба, Декабр 31, 2016
CZECH NLS_SORT=CZECH NLS_DATE_LANGUAGE=CZECH example: Sobota, Prosinec 31, 2016
DANISH NLS_SORT=DANISH NLS_DATE_LANGUAGE=DANISH example: Lørdag, December 31, 2016
DARI NLS_SORT=GENERIC_M NLS_DATE_LANGUAGE=DARI example: شنبه, دسمبر 31, 2016
DIVEHI NLS_SORT=GENERIC_M NLS_DATE_LANGUAGE=DIVEHI example: ހޮނިހިރު, ޑިސެމްބަރ 31, 2016
DUTCH NLS_SORT=DUTCH NLS_DATE_LANGUAGE=DUTCH example: Zaterdag, December 31, 2016
EGYPTIAN NLS_SORT=ARABIC NLS_DATE_LANGUAGE=EGYPTIAN example: السبت, ديسمبر 31, 2016
ENGLISH NLS_SORT=BINARY NLS_DATE_LANGUAGE=ENGLISH example: Saturday, December 31, 2016
ESTONIAN NLS_SORT=ESTONIAN NLS_DATE_LANGUAGE=ESTONIAN example: Laupäev, Detsember 31, 2016
FINNISH NLS_SORT=FINNISH NLS_DATE_LANGUAGE=FINNISH example: Lauantai, Joulukuu 31, 2016
FRENCH NLS_SORT=FRENCH NLS_DATE_LANGUAGE=FRENCH example: Samedi, Décembre 31, 2016
GERMAN NLS_SORT=GERMAN NLS_DATE_LANGUAGE=GERMAN example: Samstag, Dezember 31, 2016
GERMAN DIN NLS_SORT=GERMAN NLS_DATE_LANGUAGE=GERMAN DIN example: Samstag, Dezember 31, 2016
GREEK NLS_SORT=GREEK NLS_DATE_LANGUAGE=GREEK example: Σάββατο, Δεκέμβριος 31, 2016
GUJARATI NLS_SORT=BINARY NLS_DATE_LANGUAGE=GUJARATI example: શનિવાર, ડિસેમ્બર 31, 2016
HEBREW NLS_SORT=HEBREW NLS_DATE_LANGUAGE=HEBREW example: שבת, דצמבר 31, 2016
HINDI NLS_SORT=BINARY NLS_DATE_LANGUAGE=HINDI example: शनिवार, दिसम्बर 31, 2016
HUNGARIAN NLS_SORT=HUNGARIAN NLS_DATE_LANGUAGE=HUNGARIAN example: Szombat, December 31, 2016
ICELANDIC NLS_SORT=ICELANDIC NLS_DATE_LANGUAGE=ICELANDIC example: Laugardagur, Desember 31, 2016
INDONESIAN NLS_SORT=INDONESIAN NLS_DATE_LANGUAGE=INDONESIAN example: Sabtu, Desember 31, 2016
IRISH NLS_SORT=BINARY NLS_DATE_LANGUAGE=IRISH example: Dé Sathairn, Nollaig 31, 2016
ITALIAN NLS_SORT=WEST_EUROPEAN NLS_DATE_LANGUAGE=ITALIAN example: Sabato, Dicembre 31, 2016
JAPANESE NLS_SORT=BINARY NLS_DATE_LANGUAGE=JAPANESE example: 土曜日, 12月 31, 2016
KANNADA NLS_SORT=BINARY NLS_DATE_LANGUAGE=KANNADA example: ಶನಿವಾರ, ಡಿಸೆಂಬರ್ 31, 2016
KHMER NLS_SORT=GENERIC_M NLS_DATE_LANGUAGE=KHMER example: ថ្ងៃសៅរ៍, ធ្នូ 31, 2016
KOREAN NLS_SORT=BINARY NLS_DATE_LANGUAGE=KOREAN example: 토요일, 12월 31, 2016
LAO NLS_SORT=GENERIC_M NLS_DATE_LANGUAGE=LAO example: ວັນເສົາ, ທັນວາ 31, 2016
LATIN AMERICAN SPANI NLS_SORT=SPANISH NLS_DATE_LANGUAGE=LATIN AMERICAN SPANI example: Sábado, Diciembre 31, 2016
LATIN BOSNIAN NLS_SORT=GENERIC_M NLS_DATE_LANGUAGE=LATIN BOSNIAN example: Subota, Decembar 31, 2016
LATIN SERBIAN NLS_SORT=BINARY NLS_DATE_LANGUAGE=LATIN SERBIAN example: Subota, Decembar 31, 2016
LATIN UZBEK NLS_SORT=GENERIC_M NLS_DATE_LANGUAGE=LATIN UZBEK example: Shanba, Dekabr 31, 2016
LATVIAN NLS_SORT=LATVIAN NLS_DATE_LANGUAGE=LATVIAN example: Sestdiena, Decembris 31, 2016
LITHUANIAN NLS_SORT=LITHUANIAN NLS_DATE_LANGUAGE=LITHUANIAN example: Šeštadienis, Gruodžio 31, 2016
MACEDONIAN NLS_SORT=BINARY NLS_DATE_LANGUAGE=MACEDONIAN example: Сабота, Декември 31, 2016
MALAY NLS_SORT=MALAY NLS_DATE_LANGUAGE=MALAY example: Sabtu, Disember 31, 2016
MALAYALAM NLS_SORT=BINARY NLS_DATE_LANGUAGE=MALAYALAM example: ശനിആഴ്ച, ഡിസംബര് 31, 2016
MALTESE NLS_SORT=GENERIC_M NLS_DATE_LANGUAGE=MALTESE example: Is-Sibt, Diċembru 31, 2016
MARATHI NLS_SORT=BINARY NLS_DATE_LANGUAGE=MARATHI example: शनिवार, डिसेंबर 31, 2016
MEXICAN SPANISH NLS_SORT=WEST_EUROPEAN NLS_DATE_LANGUAGE=MEXICAN SPANISH example: Sábado, Diciembre 31, 2016
NEPALI NLS_SORT=GENERIC_M NLS_DATE_LANGUAGE=NEPALI example: शनिबार, डिसेम्बर 31, 2016
NORWEGIAN NLS_SORT=NORWEGIAN NLS_DATE_LANGUAGE=NORWEGIAN example: Lørdag, Desember 31, 2016
ORIYA NLS_SORT=BINARY NLS_DATE_LANGUAGE=ORIYA example: ଶନିବାର, ଡିସେମ୍ବର 31, 2016
PERSIAN NLS_SORT=GENERIC_M NLS_DATE_LANGUAGE=PERSIAN example: شنبه, دسامبر 31, 2016
POLISH NLS_SORT=POLISH NLS_DATE_LANGUAGE=POLISH example: Sobota, Grudzień 31, 2016
PORTUGUESE NLS_SORT=WEST_EUROPEAN NLS_DATE_LANGUAGE=PORTUGUESE example: Sábado, Dezembro 31, 2016
PUNJABI NLS_SORT=BINARY NLS_DATE_LANGUAGE=PUNJABI example: ਸ਼ਨਿਚੱਰਵਾਰ, ਦਸੰਬਰ 31, 2016
ROMANIAN NLS_SORT=ROMANIAN NLS_DATE_LANGUAGE=ROMANIAN example: Sâmbătă, Decembrie 31, 2016
RUSSIAN NLS_SORT=RUSSIAN NLS_DATE_LANGUAGE=RUSSIAN example: Суббота, Декабрь 31, 2016
SIMPLIFIED CHINESE NLS_SORT=BINARY NLS_DATE_LANGUAGE=SIMPLIFIED CHINESE example: 星期六, 12月 31, 2016
SINHALA NLS_SORT=GENERIC_M NLS_DATE_LANGUAGE=SINHALA example: සෙනසුරාදා, දෙසැම්බර් 31, 2016
SLOVAK NLS_SORT=SLOVAK NLS_DATE_LANGUAGE=SLOVAK example: Sobota, December 31, 2016
SLOVENIAN NLS_SORT=SLOVENIAN NLS_DATE_LANGUAGE=SLOVENIAN example: Sobota, December 31, 2016
SPANISH NLS_SORT=SPANISH NLS_DATE_LANGUAGE=SPANISH example: Sábado, Diciembre 31, 2016
SWAHILI NLS_SORT=GENERIC_M NLS_DATE_LANGUAGE=SWAHILI example: Jumamosi, Desemba 31, 2016
SWEDISH NLS_SORT=SWEDISH NLS_DATE_LANGUAGE=SWEDISH example: Lördag, December 31, 2016
TAMIL NLS_SORT=BINARY NLS_DATE_LANGUAGE=TAMIL example: சனிக்கிழமை, டிசம்பர் 31, 2016
TELUGU NLS_SORT=BINARY NLS_DATE_LANGUAGE=TELUGU example: శనివారం, డిసెంబర్ 31, 2016
THAI NLS_SORT=THAI_DICTIONARY NLS_DATE_LANGUAGE=THAI example: เสาร์, ธันวาคม 31, 2016
TRADITIONAL CHINESE NLS_SORT=BINARY NLS_DATE_LANGUAGE=TRADITIONAL CHINESE example: 星期六, 12月 31, 2016
TURKISH NLS_SORT=TURKISH NLS_DATE_LANGUAGE=TURKISH example: Cumartesi, Aralık 31, 2016
UKRAINIAN NLS_SORT=UKRAINIAN NLS_DATE_LANGUAGE=UKRAINIAN example: Субота, Грудень 31, 2016
VIETNAMESE NLS_SORT=VIETNAMESE NLS_DATE_LANGUAGE=VIETNAMESE example: Thứ Bảy, Tháng MườI Hai 31, 2016

NLS_LANG

Note that you can also set the NLS settings with environment variables, but only if NLS_LANG is set. This means that you must set NLS_LANG first (which sets the LANGUAGE and TERRITORY), and then set, for example, NLS_DATE_FORMAT.

Here is an example.
I set NLS_DATE_FORMAT from the environment:

[oracle@VM102 data]$ export NLS_DATE_FORMAT=DL

but the date format is not the one used in my session:
[oracle@VM102 data]$ sqlplus -s / as sysdba <<< "select sysdate from dual;"
 
SYSDATE
---------
04-JAN-17

Now I set NLS_LANG (you don't need to specify a LANGUAGE, TERRITORY or CHARACTERSET; just put the separators '_' and '.'):

[oracle@VM102 data]$ export NLS_LANG="_."

And now my NLS_DATE_FORMAT is taken into account:

[oracle@VM102 data]$ sqlplus -s / as sysdba <<< "select sysdate from dual;"
 
SYSDATE
-----------------------------
Wednesday, January 04, 2017

In general, you should set NLS_LANG on any client, to the LANGUAGE and TERRITORY you want to display to the client (or store in a file, or have managed by the application server).
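
For example, a Unix client could set the full LANGUAGE_TERRITORY.CHARACTERSET value and then override individual settings; the values below are only illustrative assumptions:

export NLS_LANG=AMERICAN_AMERICA.AL32UTF8        # LANGUAGE_TERRITORY.CHARACTERSET
export NLS_DATE_FORMAT="DD.MM.YYYY HH24:MI:SS"   # overrides the territory default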

I wish you a Happy New Year 2017.

 


Oracle 12cR2 – DataGuard Switchover with Oracle Wallets

Wed, 2017-01-04 06:48

I would like to make my DataGuard environment more secure by eliminating the typing of "connect sys/Manager1" for my DGMGRL commands, especially the ones that I have in my scripts. For example:

oracle@dbidg01:/home/oracle/ [DBIT122] dgmgrl <<-EOF
> connect sys/Manager1
> show configuration verbose;
> EOF

or something like that:

oracle@dbidg01:/u01/app/oracle/local/dg/ [DBIT122] cat show_config.dg
connect sys/Manager1;
show configuration;

oracle@dbidg01:/u01/app/oracle/local/dg/ [DBIT122] dgmgrl @show_config.dg
DGMGRL for Linux: Release 12.2.0.1.0 - Production on Wed Jan 4 12:54:11 2017

Copyright (c) 1982, 2016, Oracle and/or its affiliates.  All rights reserved.

Welcome to DGMGRL, type "help" for information.
Connected to "DBIT122_SITE1"
Connected as SYSDG.

Configuration - DBIT122

  Protection Mode: MaxAvailability
  Members:
  DBIT122_SITE1 - Primary database
    DBIT122_SITE2 - Physical standby database

Fast-Start Failover: DISABLED

Configuration Status:
SUCCESS   (status updated 39 seconds ago)

HINT: Be aware that "dgmgrl [<options>] @script_file_name" is a new feature of the Broker in Oracle 12.2. It was not possible to use "dgmgrl @script" in earlier releases.

OK. So how can I make my scripts more secure? Of course, by using wallets, like we already did for the observer configuration. See http://blog.dbi-services.com/oracle-12cr2-how-to-setup-dataguard-observer-with-oracle-wallets/
However, I also want to do the switchover and other operations with wallets.

So, let's create the necessary wallets for the SYS user on the primary and the standby.

-- Primary

oracle@dbidg01:/u01/app/oracle/network/admin/ [DBIT122] mkstore -wrl /u01/app/oracle/admin/wallets -createCredential DBIT122_SITE1 SYS             
Oracle Secret Store Tool : Version 12.2.0.1.0
Copyright (c) 2004, 2016, Oracle and/or its affiliates. All rights reserved.

Your secret/Password is missing in the command line
Enter your secret/Password:
Re-enter your secret/Password:
Enter wallet password:

oracle@dbidg01:/u01/app/oracle/network/admin/ [DBIT122] mkstore -wrl /u01/app/oracle/admin/wallets -createCredential DBIT122_SITE2 SYS
Oracle Secret Store Tool : Version 12.2.0.1.0
Copyright (c) 2004, 2016, Oracle and/or its affiliates. All rights reserved.

Your secret/Password is missing in the command line
Enter your secret/Password:
Re-enter your secret/Password:
Enter wallet password:

-- Standby

oracle@dbidg02:/u01/app/oracle/network/admin/ [DBIT122] mkstore -wrl /u01/app/oracle/admin/wallets -createCredential DBIT122_SITE1 SYS
Oracle Secret Store Tool : Version 12.2.0.1.0
Copyright (c) 2004, 2016, Oracle and/or its affiliates. All rights reserved.

Your secret/Password is missing in the command line
Enter your secret/Password:
Re-enter your secret/Password:
Enter wallet password:

oracle@dbidg02:/u01/app/oracle/network/admin/ [DBIT122] mkstore -wrl /u01/app/oracle/admin/wallets -createCredential DBIT122_SITE2 SYS
Oracle Secret Store Tool : Version 12.2.0.1.0
Copyright (c) 2004, 2016, Oracle and/or its affiliates. All rights reserved.

Your secret/Password is missing in the command line
Enter your secret/Password:
Re-enter your secret/Password:
Enter wallet password:

And of course, we have to test the connections to see if everything is working correctly.

-- Primary
sqlplus /@DBIT122_SITE1 as sysdba

sqlplus /@DBIT122_SITE2 as sysdba

DGMGRL> connect /@DBIT122_SITE1

DGMGRL> connect /@DBIT122_SITE2


-- Standby

sqlplus /@DBIT122_SITE1 as sysdba

sqlplus /@DBIT122_SITE2 as sysdba

DGMGRL> connect /@DBIT122_SITE1

DGMGRL> connect /@DBIT122_SITE2

So far, so good. My connections with the wallets work from the primary to the standby and the other way around. Now, let's try to do a DataGuard switchover with wallets.

DGMGRL> connect /@DBIT122_SITE1
Connected to "DBIT122_SITE1"
Connected as SYSDBA.
DGMGRL>
DGMGRL>
DGMGRL> show configuration;

Configuration - DBIT122

  Protection Mode: MaxAvailability
  Members:
  DBIT122_SITE1 - Primary database
    DBIT122_SITE2 - (*) Physical standby database

Fast-Start Failover: ENABLED

Configuration Status:
SUCCESS   (status updated 58 seconds ago)

DGMGRL> SWITCHOVER TO 'DBIT122_SITE2';
Performing switchover NOW, please wait...
Operation requires a connection to database "DBIT122_SITE2"
Connecting ...
Connected to "DBIT122_SITE2"
Connected as SYSDBA.
New primary database "DBIT122_SITE2" is opening...
Operation requires start up of instance "DBIT122" on database "DBIT122_SITE1"
Starting instance "DBIT122"...
ORA-01017: invalid username/password; logon denied

Warning: You are no longer connected to ORACLE.

Please complete the following steps to finish switchover:
        start up instance "DBIT122" of database "DBIT122_SITE1"

DGMGRL>

Oops… that doesn't look good. It says "invalid username/password", but everything worked beforehand. OK, that output does not give me much information. Let's try the whole thing again in debug mode: dgmgrl -debug

oracle@dbidg01:/u01/app/oracle/network/admin/ [DBIT122] dgmgrl -debug
DGMGRL for Linux: Release 12.2.0.1.0 - Production on Wed Jan 4 11:04:21 2017

Copyright (c) 1982, 2016, Oracle and/or its affiliates.  All rights reserved.

Welcome to DGMGRL, type "help" for information.
DGMGRL> connect /@DBIT122_SITE2
[W000 01/04 11:04:34.32] Connecting to database using DBIT122_SITE2.
[W000 01/04 11:04:34.33] Attempt logon as SYSDG
[W000 01/04 11:04:35.42] Attempt logon as SYSDBA
[W000 01/04 11:04:35.47] Executing query [select db_unique_name from v$database].
[W000 01/04 11:04:35.47] Query result is 'DBIT122_SITE2'
Connected to "DBIT122_SITE2"
[W000 01/04 11:04:35.47] Checking broker version [BEGIN :version := dbms_drs.dg_broker_info('VERSION'); END;].
[W000 01/04 11:04:35.47] Oracle database version is '12.2.0.1.0'
Connected as SYSDBA.
DGMGRL> show configuration;

Configuration - DBIT122

  Protection Mode: MaxAvailability
  Members:
  DBIT122_SITE1 - Primary database
    DBIT122_SITE2 - Physical standby database

Fast-Start Failover: DISABLED

Configuration Status:
SUCCESS   (status updated 49 seconds ago)

DGMGRL> switchover to 'DBIT122_SITE2';
Performing switchover NOW, please wait...
New primary database "DBIT122_SITE2" is opening...
Operation requires start up of instance "DBIT122" on database "DBIT122_SITE1"
Starting instance "DBIT122"...
[W000 01/04 11:05:04.99] Connecting to database using (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=dbidg01)(PORT=1521))(CONNECT_DATA=(SERVICE_NAME=DBIT122_SITE1_DGMGRL)(UR=A)(INSTANCE_NAME=DBIT122)(SERVER=DEDICATED))).
[W000 01/04 11:05:04.99] Attempt logon as SYSDG
[W000 01/04 11:05:06.04] Attempt logon as SYSDBA
ORA-01017: invalid username/password; logon denied

Warning: You are no longer connected to ORACLE.

Please complete the following steps to finish switchover:
        start up and mount instance "DBIT122" of database "DBIT122_SITE1"

DGMGRL>

What is happening here? The Broker is not using the connect string "DBIT122_SITE1"; it is using the description list "(DESCRIPTION=(ADDRESS=…)". And when I look up my credentials in the wallet, I see only credentials for "DBIT122_SITE1 SYS" and "DBIT122_SITE2 SYS".

oracle@dbidg01:/home/oracle/ [DBIT122] mkstore -wrl /u01/app/oracle/admin/wallets -listCredential
Oracle Secret Store Tool : Version 12.2.0.1.0
Copyright (c) 2004, 2016, Oracle and/or its affiliates. All rights reserved.

Enter wallet password:
List credential (index: connect_string username)
3: DBIT122_SITE2 SYS
2: DBIT122_SITE1 SYS
1: rcat rcat

The solution here is to add credentials for the description list shown in the StaticConnectIdentifier property.

DGMGRL> show database 'DBIT122_SITE1' StaticConnectIdentifier;
  StaticConnectIdentifier = '(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=dbidg01)(PORT=1521))(CONNECT_DATA=(SERVICE_NAME=DBIT122_SITE1_DGMGRL)(UR=A)(INSTANCE_NAME=DBIT122)(SERVER=DEDICATED)))'

DGMGRL> show database 'DBIT122_SITE2' StaticConnectIdentifier;
  StaticConnectIdentifier = '(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=dbidg02)(PORT=1521))(CONNECT_DATA=(SERVICE_NAME=DBIT122_SITE2_DGMGRL)(UR=A)(INSTANCE_NAME=DBIT122)(SERVER=DEDICATED)))'

OK. Let's add the new credentials to our wallet. Be careful to specify them exactly as they show up in the StaticConnectIdentifier.

-- Primary

oracle@dbidg01:/home/oracle/ [DBIT122] mkstore -wrl /u01/app/oracle/admin/wallets -createCredential '(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=dbidg01)(PORT=1521))(CONNECT_DATA=(SERVICE_NAME=DBIT122_SITE1_DGMGRL)(UR=A)(INSTANCE_NAME=DBIT122)(SERVER=DEDICATED)))' SYS
Oracle Secret Store Tool : Version 12.2.0.1.0
Copyright (c) 2004, 2016, Oracle and/or its affiliates. All rights reserved.

Your secret/Password is missing in the command line
Enter your secret/Password:
Re-enter your secret/Password:
Enter wallet password:

oracle@dbidg01:/home/oracle/ [DBIT122] mkstore -wrl /u01/app/oracle/admin/wallets -createCredential '(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=dbidg02)(PORT=1521))(CONNECT_DATA=(SERVICE_NAME=DBIT122_SITE2_DGMGRL)(UR=A)(INSTANCE_NAME=DBIT122)(SERVER=DEDICATED)))' SYS
Oracle Secret Store Tool : Version 12.2.0.1.0
Copyright (c) 2004, 2016, Oracle and/or its affiliates. All rights reserved.

Your secret/Password is missing in the command line
Enter your secret/Password:
Re-enter your secret/Password:
Enter wallet password:

oracle@dbidg01:/home/oracle/ [DBIT122] mkstore -wrl /u01/app/oracle/admin/wallets -listCredential
Oracle Secret Store Tool : Version 12.2.0.1.0
Copyright (c) 2004, 2016, Oracle and/or its affiliates. All rights reserved.

Enter wallet password:
List credential (index: connect_string username)
5: (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=dbidg02)(PORT=1521))(CONNECT_DATA=(SERVICE_NAME=DBIT122_SITE2_DGMGRL)(UR=A)(INSTANCE_NAME=DBIT122)(SERVER=DEDICATED))) SYS
4: (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=dbidg01)(PORT=1521))(CONNECT_DATA=(SERVICE_NAME=DBIT122_SITE1_DGMGRL)(UR=A)(INSTANCE_NAME=DBIT122)(SERVER=DEDICATED))) SYS
3: DBIT122_SITE2 SYS
2: DBIT122_SITE1 SYS
1: rcat rcat


-- Standby

oracle@dbidg02:/home/oracle/ [DBIT122] mkstore -wrl /u01/app/oracle/admin/wallets -createCredential '(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=dbidg01)(PORT=1521))(CONNECT_DATA=(SERVICE_NAME=DBIT122_SITE1_DGMGRL)(UR=A)(INSTANCE_NAME=DBIT122)(SERVER=DEDICATED)))' SYS
Oracle Secret Store Tool : Version 12.2.0.1.0
Copyright (c) 2004, 2016, Oracle and/or its affiliates. All rights reserved.

Your secret/Password is missing in the command line
Enter your secret/Password:
Re-enter your secret/Password:
Enter wallet password:

oracle@dbidg02:/home/oracle/ [DBIT122] mkstore -wrl /u01/app/oracle/admin/wallets -createCredential '(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=dbidg02)(PORT=1521))(CONNECT_DATA=(SERVICE_NAME=DBIT122_SITE2_DGMGRL)(UR=A)(INSTANCE_NAME=DBIT122)(SERVER=DEDICATED)))' SYS
Oracle Secret Store Tool : Version 12.2.0.1.0
Copyright (c) 2004, 2016, Oracle and/or its affiliates. All rights reserved.

Your secret/Password is missing in the command line
Enter your secret/Password:
Re-enter your secret/Password:
Enter wallet password:

oracle@dbidg02:/home/oracle/ [DBIT122] mkstore -wrl /u01/app/oracle/admin/wallets -listCredential
Oracle Secret Store Tool : Version 12.2.0.1.0
Copyright (c) 2004, 2016, Oracle and/or its affiliates. All rights reserved.

Enter wallet password:
List credential (index: connect_string username)
5: (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=dbidg02)(PORT=1521))(CONNECT_DATA=(SERVICE_NAME=DBIT122_SITE2_DGMGRL)(UR=A)(INSTANCE_NAME=DBIT122)(SERVER=DEDICATED))) SYS
4: (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=dbidg01)(PORT=1521))(CONNECT_DATA=(SERVICE_NAME=DBIT122_SITE1_DGMGRL)(UR=A)(INSTANCE_NAME=DBIT122)(SERVER=DEDICATED))) SYS
3: DBIT122_SITE2 SYS
2: DBIT122_SITE1 SYS
1: rcat rcat

After everything is set up and done, let's try the switchover again in debug mode.

oracle@dbidg01:/home/oracle/ [DBIT122] dgmgrl -debug
DGMGRL for Linux: Release 12.2.0.1.0 - Production on Wed Jan 4 11:22:38 2017

Copyright (c) 1982, 2016, Oracle and/or its affiliates.  All rights reserved.

Welcome to DGMGRL, type "help" for information.
DGMGRL> connect /@DBIT122_SITE1
[W000 01/04 11:22:47.94] Connecting to database using DBIT122_SITE1.
[W000 01/04 11:22:47.94] Attempt logon as SYSDG
[W000 01/04 11:22:49.02] Attempt logon as SYSDBA
[W000 01/04 11:22:49.06] Executing query [select db_unique_name from v$database].
[W000 01/04 11:22:49.06] Query result is 'DBIT122_SITE1'
Connected to "DBIT122_SITE1"
[W000 01/04 11:22:49.06] Checking broker version [BEGIN :version := dbms_drs.dg_broker_info('VERSION'); END;].
[W000 01/04 11:22:49.06] Oracle database version is '12.2.0.1.0'
Connected as SYSDBA.
DGMGRL> switchover to 'DBIT122_SITE1';
Performing switchover NOW, please wait...
New primary database "DBIT122_SITE1" is opening...
Operation requires start up of instance "DBIT122" on database "DBIT122_SITE2"
Starting instance "DBIT122"...
[W000 01/04 11:23:18.07] Connecting to database using (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=dbidg02)(PORT=1521))(CONNECT_DATA=(SERVICE_NAME=DBIT122_SITE2_DGMGRL)(UR=A)(INSTANCE_NAME=DBIT122)(SERVER=DEDICATED))).
[W000 01/04 11:23:18.07] Attempt logon as SYSDG
[W000 01/04 11:23:19.15] Attempt logon as SYSDBA
[W000 01/04 11:23:20.23] Executing query [select db_unique_name from v$database].
ORA-01034: ORACLE not available
Process ID: 0
Session ID: 0 Serial number: 0

ORACLE instance started.
[W000 01/04 11:23:36.03] Connecting to database using (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=dbidg02)(PORT=1521))(CONNECT_DATA=(SERVICE_NAME=DBIT122_SITE2_DGMGRL)(UR=A)(INSTANCE_NAME=DBIT122)(SERVER=DEDICATED))).
[W000 01/04 11:23:36.03] Attempt logon as SYSDG
[W000 01/04 11:23:37.13] Attempt logon as SYSDBA
[W000 01/04 11:23:37.17] Executing query [select db_unique_name from v$database].
ORA-01507: database not mounted

[W000 01/04 11:23:37.20] Checking broker version [BEGIN :version := dbms_drs.dg_broker_info('VERSION'); END;].
[W000 01/04 11:23:37.20] Oracle database version is '12.2.0.1.0'
[W000 01/04 11:23:37.20] Executing statement [alter database mount].
[W000 01/04 11:23:42.66] Statement [alter database mount] executed successfully.
Database mounted.
[W000 01/04 11:23:42.66] Checking for bootstrap done...
[W000 01/04 11:23:42.67] Connecting to database using (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=dbidg02)(PORT=1521))(CONNECT_DATA=(SERVICE_NAME=DBIT122_SITE2_DGMGRL)(UR=A)(INSTANCE_NAME=DBIT122)(SERVER=DEDICATED))).
[W000 01/04 11:23:42.67] Attempt logon as SYSDG
[W000 01/04 11:23:43.77] Attempt logon as SYSDBA
[W000 01/04 11:23:43.82] Executing query [select db_unique_name from v$database].
[W000 01/04 11:23:43.83] Query result is 'DBIT122_SITE2'
Connected to "DBIT122_SITE2"
[W000 01/04 11:23:43.83] Checking broker version [BEGIN :version := dbms_drs.dg_broker_info('VERSION'); END;].
[W000 01/04 11:23:43.83] Oracle database version is '12.2.0.1.0'
[W000 01/04 11:23:55.85] Done waiting for bootstrap after 0 retries
Switchover succeeded, new primary is "DBIT122_SITE1"
DGMGRL>
DGMGRL> show configuration;

Configuration - DBIT122

  Protection Mode: MaxAvailability
  Members:
  DBIT122_SITE1 - Primary database
    DBIT122_SITE2 - Physical standby database

Fast-Start Failover: DISABLED

Configuration Status:
SUCCESS   (status updated 60 seconds ago)

DGMGRL>

Now it worked perfectly: I can run my switchover operations with wallets, and I can run my scripts without a password in clear text, like the following:

oracle@dbidg01:/home/oracle/ [DBIT122] dgmgrl <<-EOF
> connect /@DBIT122_SITE1
> show configuration verbose;
> EOF
Conclusion

Doing DataGuard switchovers with wallets works perfectly if the setup was done correctly, and besides that, you can eliminate a lot of clear-text passwords that you might have lying around.

Cheers,

William

 


Oracle 12cR2 – How to Setup DataGuard observer with Oracle Wallets

Tue, 2017-01-03 09:12

I am not a big fan of having passwords in clear text lying around. This applies not only to application servers, but also to my Data Guard observer.

I have a script for starting the observer that reads a config file, dgobserver.cfg, and this file contains the username, the passwords and the connect strings for my primary and standby databases.

#*************************************************************
# Connection string to the primary
ConnectStringPrim="sys/Manager1@DBIT122_SITE1"

#*************************************************************
# Connection string to the Standby
ConnectStringStdb="sys/Manager1@DBIT122_SITE2"

However, I don't want to have these passwords in clear text anymore, so I set up wallets for that purpose on the observer host.

To set up the wallet connection, we need to:

  • Create a wallet directory
  • Adjust the sqlnet.ora on the observer
  • Create the wallet and the credentials
  • Test the connections via wallets
  • Adjust the dgobserver.cfg file
  • Test a Fast Start Failover

Create a directory /u01/app/oracle/admin/wallets and add the following to your sqlnet.ora file:

WALLET_LOCATION =
   (SOURCE =
      (METHOD = FILE)
      (METHOD_DATA = (DIRECTORY = /u01/app/oracle/admin/wallets))
)

SQLNET.WALLET_OVERRIDE = TRUE

Now, create the wallet and the credentials:

oracle@dbidg03:/u01/app/oracle/network/admin/ [DBIT122] mkstore -wrl /u01/app/oracle/admin/wallets -create
Oracle Secret Store Tool : Version 12.2.0.1.0
Copyright (c) 2004, 2016, Oracle and/or its affiliates. All rights reserved.

Enter password:
Enter password again:


oracle@dbidg03:/u01/app/oracle/admin/wallets/ [DBIT122] mkstore -wrl /u01/app/oracle/admin/wallets -createCredential DBIT122_SITE1 SYS
Oracle Secret Store Tool : Version 12.2.0.1.0
Copyright (c) 2004, 2016, Oracle and/or its affiliates. All rights reserved.

Your secret/Password is missing in the command line
Enter your secret/Password:
Re-enter your secret/Password:
Enter wallet password:

oracle@dbidg03:/u01/app/oracle/admin/wallets/ [DBIT122] mkstore -wrl /u01/app/oracle/admin/wallets -createCredential DBIT122_SITE2 SYS
Oracle Secret Store Tool : Version 12.2.0.1.0
Copyright (c) 2004, 2016, Oracle and/or its affiliates. All rights reserved.

Your secret/Password is missing in the command line
Enter your secret/Password:
Re-enter your secret/Password:
Enter wallet password:


oracle@dbidg03:/u01/app/oracle/admin/wallets/ [DBIT122] mkstore -wrl /u01/app/oracle/admin/wallets -listCredential           
Oracle Secret Store Tool : Version 12.2.0.1.0
Copyright (c) 2004, 2016, Oracle and/or its affiliates. All rights reserved.

Enter wallet password:
List credential (index: connect_string username)
2: DBIT122_SITE2 SYS
1: DBIT122_SITE1 SYS

oracle@dbidg03:/u01/app/oracle/admin/wallets/ [DBIT122] ls -l
total 8
-rw------- 1 oracle oinstall 957 Jan  3 13:57 cwallet.sso
-rw------- 1 oracle oinstall   0 Jan  3 13:56 cwallet.sso.lck
-rw------- 1 oracle oinstall 912 Jan  3 13:57 ewallet.p12
-rw------- 1 oracle oinstall   0 Jan  3 13:56 ewallet.p12.lck

 

After everything was successfully set up, it is time to test the connections via wallets, with sqlplus and with dgmgrl.

oracle@dbidg03:/u01/app/oracle/admin/wallets/ [DBIT122] sqlplus /@DBIT122_SITE1 as sysdba

SQL*Plus: Release 12.2.0.1.0 Production on Tue Jan 3 13:59:07 2017

Copyright (c) 1982, 2016, Oracle.  All rights reserved.

Connected to:
Oracle Database 12c Enterprise Edition Release 12.2.0.1.0 - 64bit Production

SQL> exit

Disconnected from Oracle Database 12c Enterprise Edition Release 12.2.0.1.0 - 64bit Production
oracle@dbidg03:/u01/app/oracle/admin/wallets/ [DBIT122] sqlplus /@DBIT122_SITE2 as sysdba

SQL*Plus: Release 12.2.0.1.0 Production on Tue Jan 3 13:59:12 2017

Copyright (c) 1982, 2016, Oracle.  All rights reserved.

Connected to:
Oracle Database 12c Enterprise Edition Release 12.2.0.1.0 - 64bit Production

SQL> exit
Disconnected from Oracle Database 12c Enterprise Edition Release 12.2.0.1.0 - 64bit Production
oracle@dbidg03:/u01/app/oracle/admin/wallets/ [DBIT122]

oracle@dbidg03:/u01/app/oracle/admin/DBIT122/etc/ [DBIT122] dgh
DGMGRL for Linux: Release 12.2.0.1.0 - Production on Tue Jan 3 14:00:05 2017

Copyright (c) 1982, 2016, Oracle and/or its affiliates.  All rights reserved.

Welcome to DGMGRL, type "help" for information.
DGMGRL> connect /@DBIT122_SITE1
Connected to "DBIT122_SITE1"
Connected as SYSDBA.
DGMGRL> connect /@DBIT122_SITE2
Connected to "DBIT122_SITE2"
Connected as SYSDBA.
DGMGRL> exit

Looks good so far. Now let’s adjust the dgobserver.cfg file and start the observer.

-- adjust the dgobserver.cfg file

#*************************************************************
# Connection string to the primary
ConnectStringPrim="/@DBIT122_SITE1"

#*************************************************************
# Connection string to the Standby
ConnectStringStdb="/@DBIT122_SITE2"

-- start the observer

oracle@dbidg03:/u01/app/oracle/admin/DBIT122/etc/ [DBIT122] dgobserver.ksh start DBIT122
2017-01-03_14:01:02::dgobserver.ksh::SetOraEnv          ::INFO ==> Environment: DBIT122 (/u01/app/oracle/product/12.2.0/dbhome_1)
2017-01-03_14:01:03::dgobserver.ksh::StatusObserver     ::INFO ==> Observer Stopped
2017-01-03_14:01:04::dgobserver.ksh::StartObserver      ::INFO ==> Connection to the primary database
2017-01-03_14:01:04::dgobserver.ksh::DoCommand          ::INFO ==> Start observer file='/u01/app/oracle/admin/DBIT122/etc/fsfo_DBIT122.dat
2017-01-03_14:01:06::dgobserver.ksh::StatusObserver     ::INFO ==> Observer running
2017-01-03_14:01:07::dgobserver.ksh::CleanExit          ::INFO ==> Program exited with ExitCode : 0

oracle@dbidg03:/u01/app/oracle/admin/DBIT122/etc/ [DBIT122] ps -ef | grep dgmgrl | grep -v grep
oracle 9186 1 0 14:01 pts/0 00:00:00 dgmgrl -logfile /u01/app/oracle/admin/DBIT122/log/dgobserver.log -silent start observer file='/u01/app/oracle/admin/DBIT122/etc/fsfo_DBIT122.dat'

After everything is set up and done, it is time for the fun part. Let’s initiate a Fast-Start Failover by shutting down the primary with abort.

SQL> shutdown abort
ORACLE instance shut down.


-- observer log 

...
14:04:49.10  Tuesday, January 03, 2017
Initiating Fast-Start Failover to database "DBIT122_SITE2"...
Performing failover NOW, please wait...
Failover succeeded, new primary is "DBIT122_SITE2"
14:04:58.85  Tuesday, January 03, 2017
...

14:07:39.04  Tuesday, January 03, 2017
Initiating reinstatement for database "DBIT122_SITE1"...
Reinstating database "DBIT122_SITE1", please wait...
Reinstatement of database "DBIT122_SITE1" succeeded
14:08:33.19  Tuesday, January 03, 2017

...

Cool, the Fast-Start Failover and the reinstate worked as expected.

Conclusion

With Oracle wallets, I can make my Data Guard observer a little bit more secure by eliminating the passwords in clear text.

The post Oracle 12cR2 – How to Setup DataGuard observer with Oracle Wallets appeared first on Blog dbi services.

MS Cloud Summit Paris 2017

Mon, 2017-01-02 15:10

A new year begins, and 2017 should be a good vintage in the database field, especially in view of the announcements Microsoft made at the end of 2016. Among other things, there was talk of the next vNext of SQL Server, which will be ported to Linux, as well as promising new features. As usual, the dbi services team will try to cover the various topics over the course of the year. Feel free to come back from time to time and take a look at any new posts!

But before talking about the next version of SQL Server, let's dwell on the current version, SQL Server 2016, which already offers interesting improvements and new possibilities in several areas, be it performance, high availability, security and many others. Another important piece of news that will interest most of our customers is the recent release of SP1 for SQL Server 2016, which harmonizes the feature surface across the different editions. It will notably be possible to use compression, partitioning or columnstore indexes with a Standard edition, for example. I am no fortune teller, but I don't think I am wrong in saying that this year I will see a few 2016 infrastructures grow in our customers' ecosystems!

In any case, to get off on the right foot with this new version of SQL Server, a French-speaking event you should absolutely put in your calendar is the MSCloudSummit, which takes place in Paris from January 23. This event is meant to be much broader than the SQL Server days you certainly already know: the MS Cloud Summit expects 600 attendees and 7 tracks with 72 sessions around the cloud, hybrid and on-premises scenarios. For my part, I will be there with dbi services, and I will have the pleasure of presenting the new capabilities you will get in terms of high availability and recovery plans with the winning pair Windows 2016 and SQL Server 2016.

[Image: session]

I look forward to seeing you there. Until then, I wish you a very happy new year 2017!

The post MS Cloud Summit Paris 2017 appeared first on Blog dbi services.

IT-Tage 2016 Informatik aktuell: feedback

Fri, 2016-12-23 09:07

Today, to finish the year, I post a brief personal impression of the IT-Tage 2016 in Frankfurt at the Hotel Maritim, where I was also a speaker.

I presented 2 sessions on SQL Server: “SQL Server Errorlog Entmystifizierung” & “SQL Server 2016: Neue Sicherheitsfunktionen”.
I wasn’t the only one from dbi services who spoke at that conference:

  • David Barbarin with also 2 sessions: “SQL Server – Locks, latches and spinlocks” & “SQL Server 2016 Availability Group Enhancements”
  • Clemens Bleile with 1 session: “SQL Plan Directives: Neuigkeiten in 12.2. Produktions-Ausführungspläne in Testumgebungen reproduzieren”
  • Philippe Schweitzer with 1 session: “Feasibility study for building a software factory based on GIT repository”
  • Daniel Westermann with 1 session: “Breaking the deadlock: Migrating from proprietary databases to PostgreSQL”

You can already download all presentations on this link.

After my presentation day, I had the opportunity to attend a very interesting session by Oliver Hock: “Ein Prozess lernt laufen: LEGO-Mindstorms-Steuerung mit BPMN”. With a LEGO Mindstorms kit, he showed how to solve a magic cube.

This session is also on YouTube; watch the demo at the end (the last 60 seconds). It was very nice! ;-)

I would like to thank the entire team of Informatik Aktuell, who put together a smooth and interesting event.

I hope I can come back next year with new sessions and attend other interesting ones…

In the evening, you could also enjoy the Christmas Market, which is two metro stops from the hotel.

I wish you a merry Christmas and, as we say in Alsace: “A guetta rutsch ins neja Johr!”

The post IT-Tage 2016 Informatik aktuell: feedback appeared first on Blog dbi services.

Migrating your existing availability group infrastructure to 2016

Fri, 2016-12-23 04:57

Do you already have a plan to migrate your old SQL Server 2012 availability group infrastructure to Windows 2016 and SQL Server 2016? In a previous post I talked about distributed availability groups and cross-cluster migration scenarios, but this time the game is not the same, because we want to achieve an in-place upgrade of the existing AG infrastructure. This question will probably be a concern next year, and if we take a closer look at the improvements shipped with the new versions of Windows and SQL Server, we will certainly not be disappointed.

Indeed, cluster rolling upgrade is a new feature of Windows Server 2016 which allows us to migrate the WSFC side of the existing database infrastructure smoothly (and almost transparently). On the other side, upgrading highly available replicas from SQL Server 2012 to SQL Server 2016 is also possible without completely reinstalling the availability group infrastructure. Thus, we may benefit from a temporary, mixed infrastructure on both sides to reduce the outage timeframe of our applications. I can think of some customers where this could be helpful regarding their business and their corresponding SLAs.

So let’s demonstrate this kind of scenario with a classic customer availability group infrastructure. Most availability groups implemented in my area consist of two replicas, meaning a WSFC with 2 cluster nodes at the low level of the global architecture, as shown below:

[Image: initial infrastructure with a 2012 WSFC]

So the question is how to achieve the migration of the above infrastructure from the 2012 version to the 2016 version (both Windows and SQL Server) with low downtime? Well, one solution consists in preparing and adding two temporary extra nodes which, in a first step, run on Windows Server 2016 and SQL Server 2012. Unfortunately, we cannot directly mix two different versions of SQL Server in an existing availability group yet; we have to execute an extra step to upgrade, one by one, each replica we want to run on SQL Server 2016. Having two extra nodes allows us to prepare our migration smoothly without impacting the existing highly available infrastructure.

So let’s begin by adding the 2 nodes with the Windows Server 2016 version. Basically, we may use either the GUI or the PowerShell cmdlets for that.

The initial scenario (2012 version) is as follows:

[Image: initial cluster configuration]

Let’s add the two extra cluster nodes which run on Windows Server 2016. According to the Microsoft TechNet procedure, the key point is to perform this action exclusively from a Windows Server 2016 node.

The PowerShell cmdlet used is the same as in the previous version, and I executed it for the two additional nodes (WIN20168SQL16 and WIN20169SQL16) which both run on Windows Server 2016. Just remember to exclude the two extra nodes from the quorum vote to avoid impacting the existing configuration, as sketched below.
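
A minimal PowerShell sketch of these two actions (node names are the ones used in this demo; run it from one of the Windows Server 2016 nodes):

# Add the two Windows Server 2016 nodes without touching shared storage
Add-ClusterNode -Name WIN20168SQL16 -NoStorage
Add-ClusterNode -Name WIN20169SQL16 -NoStorage

# Remove their quorum vote so the existing configuration is not impacted
(Get-ClusterNode -Name WIN20168SQL16).NodeWeight = 0
(Get-ClusterNode -Name WIN20169SQL16).NodeWeight = 0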

[Image: adding the 2016 cluster nodes]

However, we may notice the new cluster functional level property, as shown below. The value is 8, meaning that the cluster has switched to a temporary/mixed mode, because at the moment we have cluster nodes on both the 2012 and 2016 versions.

[Image: cluster functional level]
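
A quick way to check this property from PowerShell (a one-line sketch):

# 8 = mixed 2012/2016 mode, 9 = native 2016 mode
(Get-Cluster).ClusterFunctionalLevel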

The transitioned infrastructure now includes 4 nodes: the first 2 nodes run on Windows Server 2012, whereas the last 2 nodes run on Windows Server 2016. An availability group runs on top of the first 2 nodes, and two additional replicas are ready to be enrolled into the existing infrastructure.

[Image: transitioned WSFC]

So now let’s move on to the SQL Server side and add the 2 additional replicas.

The initial AG scenario is as follows:

[Image: initial availability group on 2012]

After adding the two replicas in asynchronous mode, we get the following picture:

[Image: adding the 2016 replicas to the 2012 availability group]
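
Adding one of the new replicas could look like this from the current primary (a sketch; the endpoint URL and port are assumptions from my lab, they are not shown in this post):

USE [master]
GO
ALTER AVAILABILITY GROUP [dummygrp]
ADD REPLICA ON N'WIN20168SQL16\SQL12'
WITH (ENDPOINT_URL = N'TCP://WIN20168SQL16:5022',
      AVAILABILITY_MODE = ASYNCHRONOUS_COMMIT,
      FAILOVER_MODE = MANUAL);
GO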

At this step, we now have to upgrade the newly added replicas to SQL Server 2016. One important thing to keep in mind here is that we absolutely have to prevent a failover to an upgraded replica before all the new secondary replicas have been upgraded. Indeed, according to the Microsoft documentation, an upgraded primary replica can no longer ship logs to any secondary replica whose SQL Server instance has not yet been upgraded to the same version.

So in my case I first upgraded the WIN20169SQL16\SQL12 replica and then WIN20168SQL16\SQL12, as shown below:

[Image: AG configuration after upgrading the nodes]

The new transitioned infrastructure is shown in the picture below:

[Image: transitioned infrastructure]

The next part of the migration includes a short downtime; the previous steps did not imply any outage so far.

Basically the next part of the procedure will include the following steps:

  • Choose the next replica (2016 version) that will become the primary and change its replication mode to synchronous in order to prevent losing data
  • Fail over the availability group to this replica (at this step the old 2012 replicas will no longer be synchronized, because the new primary replica may no longer ship logs to them, as said previously)

[Image: AG health state after upgrading the node]

  • Change the replication mode of the second replica in 2016 to synchronous to meet the initial configuration
  • Remove old replicas in 2012 from the availability group

[Image: transitioned infrastructure]

The T-SQL script was as follows in my case:

:CONNECT WIN20121SQL16\SQL12

-- Change temporary replication to synchronous
-- for next SQL Server 2016 primary replica 
USE [master]
GO
ALTER AVAILABILITY GROUP [dummygrp]
MODIFY REPLICA ON N'WIN20168SQL16\SQL12' 
WITH (AVAILABILITY_MODE = SYNCHRONOUS_COMMIT)
GO

:CONNECT WIN20168SQL16\SQL12

-- Initiate failover to next SQL Server 2016 primary replica  
USE [master]
GO

ALTER AVAILABILITY GROUP [dummygrp]
FAILOVER;
GO


:CONNECT WIN20168SQL16\SQL12

-- Change the replication mode of the second 2016 replica
-- to synchronous / automatic failover (to meet the initial configuration)
USE [master]
GO

ALTER AVAILABILITY GROUP [dummygrp]
MODIFY REPLICA ON N'WIN20169SQL16\SQL12' 
WITH ( AVAILABILITY_MODE = SYNCHRONOUS_COMMIT );

ALTER AVAILABILITY GROUP [dummygrp]
MODIFY REPLICA ON N'WIN20169SQL16\SQL12' 
WITH ( FAILOVER_MODE = AUTOMATIC );


:CONNECT WIN20168SQL16\SQL12

-- Remove old replicas (2012)
USE [master]
GO

ALTER AVAILABILITY GROUP [dummygrp]
REMOVE REPLICA ON N'WIN20121SQL16\SQL12';
GO

ALTER AVAILABILITY GROUP [dummygrp]
REMOVE REPLICA ON N'WIN20122SQL16\SQL12';
GO

Let’s go back to the cluster side; the next step consists in removing the old cluster nodes from the WSFC by using the usual commands such as Remove-ClusterNode:

[Image: removing the old cluster nodes]
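
For reference, a sketch of the corresponding commands:

# Evict the old Windows Server 2012 nodes from the cluster
Remove-ClusterNode -Name WIN20121SQL16 -Force
Remove-ClusterNode -Name WIN20122SQL16 -Force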

… And we may finally update the cluster functional level to 9 (2016 version). Just be aware that upgrading the cluster functional level to 2016 makes the process irreversible: reverting to the initial configuration will simply not be possible, or at least it will require extra steps and longer downtime.

[Image: updating the cluster functional level to 2016]
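
The corresponding cmdlet is a one-liner (again, remember this is irreversible):

# Switches the cluster functional level from 8 (mixed) to 9 (2016)
Update-ClusterFunctionalLevel -Force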

And let’s get the final view of our upgraded availability group dashboard:

[Image: availability group 2016 dashboard]

The listener stays the same, so the change is almost transparent to applications.

Bottom line

In this blog post we had a glimpse of the new rolling upgrade capabilities of both Windows 2016 and SQL Server 2016. Of course, reality will probably be a little more complicated when other parameters come into play, such as the customer context, the number of availability groups, the performance impact of adding temporary replicas, external dependencies and so on. But these features seem promising and may be very helpful for future migration scenarios. I’m looking forward to experimenting with them at customer sites!

Happy upgrade!

The post Migrating your existing availability group infrastructure to 2016 appeared first on Blog dbi services.

Can I do it with PostgreSQL? – 8 – Transportable tablespaces

Fri, 2016-12-23 01:47

My colleague Franck posted a comment to one of my last “Can I do it with PostgreSQL” blog posts: “Here is an idea for a future ‘Can I do it with PostgreSQL?’. My favorite Oracle feature: transportable tablespaces.” When you’ve read my post about how PostgreSQL implements tablespaces, then you probably already know the answer: no, you cannot do this in PostgreSQL. Having thought about this for some time, I decided to check the Oracle documentation for what transportable tablespaces are good for. I know the basics and how to do it in Oracle, but better to check twice :)

According to the documentation, the main goal of transportable tablespaces is to transport data: you can transport one or more tablespaces (which then become a transportable tablespace set) from one host to another and then “plug” that set into an existing database (as long as the tablespaces are self-contained). One great thing you can do with it, depending on the use case, is that the target database does not need to have the same standard block size as the source database. This means you can transport a tablespace with a 16k block size to a database with a default block size of 8k. This is another thing you cannot do in PostgreSQL: in PostgreSQL the block size is set at configure/compile time, and once you have the compiled binaries you cannot change it afterwards.

Probably the greatest benefit of transportable tablespaces is that it saves you time in moving your data around. You just copy the data files that make up your tablespace(s) and then use expdp/impdp for the meta data, that’s it. When you go for the multi-tenant architecture you can use transportable tablespaces to make a non-CDB a pluggable database on the target, too. For a more detailed introduction you can check the documentation (linked above).

Back to PostgreSQL: What options do you have for transporting your data from one database to another?

The tool of choice for most cases probably is pg_dump. The big advantage of pg_dump is that you can use it over the network and directly write everything you want to export into the target database using a pipe:

pg_dump -C -h [SOURCE_HOST] -U [USER] [DATABASE] | psql -h [TARGETHOST] -U [TARGETUSER] [TARGETDATABASE]

You can even do that using parallel processes when you combine pg_dump with pg_restore although using a pipe is not supported in that case. To demonstrate this we’ll need some sample data (I am using PostgreSQL 9.5.4 as the source):

postgres@pgbox:/home/postgres/ [PG954] psql
psql (9.5.4 dbi services build)
Type "help" for help.

(postgres@[local]:5438) [postgres] > create database test;
CREATE DATABASE
(postgres@[local]:5438) [postgres] > \c test
You are now connected to database "test" as user "postgres".
(postgres@[local]:5438) [test] > create table t1 ( a int, b int );
CREATE TABLE
(postgres@[local]:5438) [test] > create table t2 ( a int, b int );
CREATE TABLE
(postgres@[local]:5438) [test] > insert into t1 values (generate_series(1,1000000), generate_series(1,1000000));
INSERT 0 1000000
(postgres@[local]:5438) [test] > insert into t2 select * from t1;
INSERT 0 1000000

Let’s say I want to copy the data to a PostgreSQL 9.6.1 instance on the same host. How can I do that? Quite easily:

postgres@pgbox:/home/postgres/ [PG961] pg_dump -V
pg_dump (PostgreSQL) 9.6.1 dbi services build
postgres@pgbox:/home/postgres/ [PG961] mkdir /var/tmp/exp
postgres@pgbox:/home/postgres/ [PG961] pg_dump -h localhost -p 5438 -C -c -F d -j 2 -f /var/tmp/exp/ test
postgres@pgbox:/home/postgres/ [PG961] pg_restore -h localhost -p 5439 -d postgres -F d -C /var/tmp/exp/
postgres@pgbox:/home/postgres/ [PG961] psql -c "\l" postgres
                                  List of databases
   Name    |  Owner   | Encoding |   Collate   |    Ctype    |   Access privileges   
-----------+----------+----------+-------------+-------------+-----------------------
 postgres  | postgres | UTF8     | en_US.UTF-8 | en_US.UTF-8 | 
 template0 | postgres | UTF8     | en_US.UTF-8 | en_US.UTF-8 | =c/postgres          +
           |          |          |             |             | postgres=CTc/postgres
 template1 | postgres | UTF8     | en_US.UTF-8 | en_US.UTF-8 | =c/postgres          +
           |          |          |             |             | postgres=CTc/postgres
 test      | postgres | UTF8     | en_US.UTF-8 | en_US.UTF-8 | 
(4 rows)

postgres@pgbox:/home/postgres/ [PG961] psql -c "\d" test
        List of relations
 Schema | Name | Type  |  Owner   
--------+------+-------+----------
 public | t1   | table | postgres
 public | t2   | table | postgres
(2 rows)

Not a big deal and possible over the network, too.

Another option is to use pg_basebackup to create a completely new instance from the source (source and target need to be of the same PostgreSQL major release in this case). Again, this is possible over the network.
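
A sketch of such a call (host, port and target directory are assumptions):

# Creates a new data directory from a running source instance,
# streaming the WAL needed for consistency
pg_basebackup -h sourcehost -p 5438 -D /u02/pgdata/clone -X stream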

You can even use rsync to copy the whole cluster to a new host and then do a second rsync while the source is down for a short time. The second pass copies only the files that changed since the first rsync and will probably be very fast, but you will need a downtime of the source for it.
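
A sketch of this approach (paths and the target host are assumptions):

# First pass while the source cluster is still running
rsync -a /u02/pgdata/PG1/ targethost:/u02/pgdata/PG1/
# Stop the source, then a short second pass copies only the changed files
pg_ctl -D /u02/pgdata/PG1 stop -m fast
rsync -a --delete /u02/pgdata/PG1/ targethost:/u02/pgdata/PG1/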

There are other methods for moving your data around in PostgreSQL but the above are the most popular.

Hope this helps …

The post Can I do it with PostgreSQL? – 8 – Transportable tablespaces appeared first on Blog dbi services.

SQL Plan Directives in 12cR2. Behavior is different than in 12cR1.

Wed, 2016-12-21 12:04

During the Frankfurter IT-Tage I gave a presentation about SQL Plan Directives and how to reproduce, in a test environment, a testcase for a plan that used SQL Plan Directives. In that context I did a couple of tests with 12cR2 (12.2.0.1) and want to blog about the change in behavior in that release.

Here is the testcase I initially ran in 12.1.0.2:

First I created a table with 4 columns, A, B, C and D, and loaded it with 1000 rows:


10:25:27 demo@GEN12102> create table DEMO_TABLE as
10:25:27 2 select mod(rownum,2) a ,mod(rownum,2) b ,mod(rownum,2) c ,mod(rownum,2) d
10:25:27 3 from dual connect by level <=1000;
 
10:25:30 demo@GEN12102> select * from DEMO_TABLE;
A B C D
---------- ---------- ---------- ----------
1 1 1 1
...
0 0 0 0
1 1 1 1
0 0 0 0

I.e. there is a correlation between the columns (all columns have a value 0 or all have a value 1).

Due to the new feature of stats gathering during CTAS and INSERT APPEND into empty tables in 12c the table has stats already:


10:28:29 demo@GEN12102> select column_name,num_distinct,density,last_analyzed from user_tab_col_statistics
10:28:29 2 where table_name='DEMO_TABLE' order by 1;
 
COLUMN_NAME NUM_DISTINCT DENSITY LAST_ANA
------------ ------------ ---------- --------
A 2 .5 10:25:27
B 2 .5 10:25:27
C 2 .5 10:25:27
D 2 .5 10:25:27

Let’s run a query:


10:29:40 demo@GEN12102> select /*+ gather_plan_statistics */ * from DEMO_TABLE
10:29:40 2 where a=0 and b=0 and c=0 and d=0;
 
A B C D
---------- ---------- ---------- ----------
0 0 0 0
...
0 0 0 0
0 0 0 0
 
10:29:40 demo@GEN12102> select * from table(dbms_xplan.display_cursor(null,null,'iostats last'));
 
------------------------------------------------------------------------------------------
| Id | Operation | Name | Starts | E-Rows | A-Rows | A-Time | Buffers |
------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | | 500 |00:00:00.01 | 44 |
|* 1 | TABLE ACCESS FULL| DEMO_TABLE | 1 | 63 | 500 |00:00:00.01 | 44 |
------------------------------------------------------------------------------------------

There is a misestimation of the calculated cardinality (E-Rows) versus actual cardinality (A-Rows) of a factor 8 (63:500).
Oracle detected that and put the cursor on IS_REOPTIMIZABLE=Y:


10:32:49 demo@GEN12102> select sql_id,child_number,is_reoptimizable from v$sql
10:32:49 2 where sql_id='gbwsdbt5usvt6';
 
SQL_ID CHILD_NUMBER IS_REOPTIMIZABLE
------------- ------------ ----------------
gbwsdbt5usvt6 0 Y

Executing the SQL again uses statistics feedback:


10:33:42 demo@GEN12102> select * from table(dbms_xplan.display_cursor(null,null,'iostats last'));
 
------------------------------------------------------------------------------------------
| Id | Operation | Name | Starts | E-Rows | A-Rows | A-Time | Buffers |
------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | | 500 |00:00:00.01 | 44 |
|* 1 | TABLE ACCESS FULL| DEMO_TABLE | 1 | 500 | 500 |00:00:00.01 | 44 |
------------------------------------------------------------------------------------------
 
Note
-----
- statistics feedback used for this statement

The computed cardinality is correct now.

After flushing the SQL Plan Directives to the SYSAUX-tablespace I can see them with the internal_state NEW:


10:34:37 demo@GEN12102> exec dbms_spd.flush_sql_plan_directive;
10:34:39 demo@GEN12102> select directive_id,type,state,reason,notes,created,last_modified,last_used
10:34:39 2 from dba_sql_plan_directives where directive_id in(
10:34:39 3 select directive_id from dba_sql_plan_dir_objects where owner='DEMO'
10:34:39 4 ) order by created;
 
DIRECTIVE_ID TYPE STATE REASON
----------------------- ---------------- ---------- ------------------------------------
NOTES CREATED
------------------------------------------------------------------------------------------ --------
LAST_MOD LAST_USE
-------- --------
615512554562037875 DYNAMIC_SAMPLING USABLE SINGLE TABLE CARDINALITY MISESTIMATE
<spd_note> 10:34:36
<internal_state>NEW</internal_state>
<redundant>NO</redundant>
<spd_text>{EC(DEMO.DEMO_TABLE)[A, B, C, D]}</spd_text>
</spd_note>

Executing a different SQL with the same type of predicate will use the SQL Plan Directive and dynamically gather stats:


10:35:54 demo@GEN12102> select /*+ gather_plan_statistics */ count(*) from DEMO_TABLE
10:35:54 2 where a=0 and b=0 and c=0 and d=0;
 
COUNT(*)
----------
500
 
10:35:54 demo@GEN12102> select * from table(dbms_xplan.display_cursor(null,null,'iostats last'));
 
-------------------------------------------------------------------------------------------
| Id | Operation | Name | Starts | E-Rows | A-Rows | A-Time | Buffers |
-------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | | 1 |00:00:00.01 | 11 |
| 1 | SORT AGGREGATE | | 1 | 1 | 1 |00:00:00.01 | 11 |
|* 2 | TABLE ACCESS FULL| DEMO_TABLE | 1 | 500 | 500 |00:00:00.01 | 11 |
-------------------------------------------------------------------------------------------
 
Note
-----
- dynamic statistics used: dynamic sampling (level=2)
- 1 Sql Plan Directive used for this statement

The internal_state of the directive changed to MISSING_STATS:


10:37:18 demo@GEN12102> exec dbms_spd.flush_sql_plan_directive;
10:37:20 demo@GEN12102> select directive_id,type,state,reason,notes,created,last_modified,last_used from dba_sql_plan_directives where directive_id in( select directive_id from dba_sql_plan_dir_objects where owner='DEMO' ) order by created;
 
DIRECTIVE_ID TYPE STATE REASON
----------------------- ---------------- ---------- ------------------------------------
NOTES CREATED
------------------------------------------------------------------------------------------ --------
LAST_MOD LAST_USE
-------- --------
615512554562037875 DYNAMIC_SAMPLING USABLE SINGLE TABLE CARDINALITY MISESTIMATE
<spd_note> 10:34:36
<internal_state>MISSING_STATS</internal_state>
<redundant>NO</redundant>
<spd_text>{EC(DEMO.DEMO_TABLE)[A, B, C, D]}</spd_text>
</spd_note>

The SPD is still being used until fresh stats are gathered; Oracle will then create extended (multi-column) statistics:


10:38:27 demo@GEN12102> select /*+ gather_plan_statistics */ distinct a,b,c,d from DEMO_TABLE
10:38:27 2 where a=0 and b=0 and c=0 and d=0;
 
A B C D
---------- ---------- ---------- ----------
0 0 0 0
 
10:38:27 demo@GEN12102> select * from table(dbms_xplan.display_cursor(null,null,'iostats last'));
 
-------------------------------------------------------------------------------------------
| Id | Operation | Name | Starts | E-Rows | A-Rows | A-Time | Buffers |
-------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | | 1 |00:00:00.01 | 11 |
| 1 | SORT UNIQUE NOSORT| | 1 | 1 | 1 |00:00:00.01 | 11 |
|* 2 | TABLE ACCESS FULL| DEMO_TABLE | 1 | 500 | 500 |00:00:00.01 | 11 |
-------------------------------------------------------------------------------------------
 
Note
-----
- dynamic statistics used: dynamic sampling (level=2)
- 1 Sql Plan Directive used for this statement
 
10:39:42 demo@GEN12102> exec dbms_spd.flush_sql_plan_directive;
10:40:01 demo@GEN12102> select column_name,num_distinct,density,last_analyzed from user_tab_col_statistics where table_name='DEMO_TABLE' order by 1;
 
COLUMN_NAME NUM_DISTINCT DENSITY LAST_ANA
------------ ------------ ---------- --------
A 2 .5 10:25:27
B 2 .5 10:25:27
C 2 .5 10:25:27
D 2 .5 10:25:27
 
10:40:03 demo@GEN12102> exec dbms_stats.gather_table_stats(user,'DEMO_TABLE',options=>'GATHER AUTO',no_invalidate=>false);
 
10:40:05 demo@GEN12102> select column_name,num_distinct,last_analyzed from user_tab_col_statistics where table_name='DEMO_TABLE' order by 1;
 
COLUMN_NAME NUM_DISTINCT LAST_ANA
------------ ------------ --------
A 2 10:40:04
B 2 10:40:04
C 2 10:40:04
D 2 10:40:04
SYS_STSPJNMI 2 10:40:04
 
10:40:05 demo@GEN12102> select extension_name,extension from user_stat_extensions where table_name='DEMO_TABLE';
 
EXTENSION_NAME
---------------
EXTENSION
----------------------------------------------------------------------------------------------------
SYS_STSPJNMIY_S
("A","B","C","D")

Running a SQL again still dynamically gathers stats, but will change the internal state to HAS_STATS:


10:40:37 demo@GEN12102> select /*+ gather_plan_statistics */ distinct b,c,d,a from DEMO_TABLE
10:40:37 2 where a=0 and b=0 and c=0 and d=0;
 
B C D A
---------- ---------- ---------- ----------
0 0 0 0
 
10:40:37 demo@GEN12102> select * from table(dbms_xplan.display_cursor(null,null,'iostats last'));
 
-------------------------------------------------------------------------------------------
| Id | Operation | Name | Starts | E-Rows | A-Rows | A-Time | Buffers |
-------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | | 1 |00:00:00.01 | 11 |
| 1 | SORT UNIQUE NOSORT| | 1 | 1 | 1 |00:00:00.01 | 11 |
|* 2 | TABLE ACCESS FULL| DEMO_TABLE | 1 | 500 | 500 |00:00:00.01 | 11 |
-------------------------------------------------------------------------------------------
 
Note
-----
- dynamic statistics used: dynamic sampling (level=2)
- 1 Sql Plan Directive used for this statement
 
demo@GEN12102> exec dbms_spd.flush_sql_plan_directive;
demo@GEN12102> select directive_id,type,state,reason,notes,created,last_modified,last_used from dba_sql_plan_directives where directive_id in(select directive_id from dba_sql_plan_dir_objects where owner='DEMO') order by created;
 
DIRECTIVE_ID TYPE STATE REASON
----------------------- ---------------- ---------- ------------------------------------
NOTES CREATED
------------------------------------------------------------------------------------------ --------
LAST_MOD LAST_USE
-------- --------
615512554562037875 DYNAMIC_SAMPLING SUPERSEDED SINGLE TABLE CARDINALITY MISESTIMATE
<spd_note> 10:34:36
<internal_state>HAS_STATS</internal_state>
<redundant>NO</redundant>
<spd_text>{EC(DEMO.DEMO_TABLE)[A, B, C, D]}</spd_text>
</spd_note>
10:40:56 10:40:56

The next execution will use the extended stats instead of the directives:


10:45:10 demo@GEN12102> select /*+ gather_plan_statistics */ distinct a,b,c,d from DEMO_TABLE
10:45:10 2 where a=1 and b=1 and c=1 and d=1;
 
A B C D
---------- ---------- ---------- ----------
1 1 1 1
 
10:45:10 demo@GEN12102> select * from table(dbms_xplan.display_cursor(null,null,'iostats last'));
 
-------------------------------------------------------------------------------------------
| Id | Operation | Name | Starts | E-Rows | A-Rows | A-Time | Buffers |
-------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | | 1 |00:00:00.01 | 11 |
| 1 | SORT UNIQUE NOSORT| | 1 | 1 | 1 |00:00:00.01 | 11 |
|* 2 | TABLE ACCESS FULL| DEMO_TABLE | 1 | 500 | 500 |00:00:00.01 | 11 |
-------------------------------------------------------------------------------------------
 
Predicate Information (identified by operation id):
---------------------------------------------------
 
2 - filter(("A"=1 AND "B"=1 AND "C"=1 AND "D"=1))

Dropping the extended stats will result in a cardinality misestimate and set the directive to PERMANENT:


10:45:58 demo@GEN12102> exec dbms_stats.drop_extended_stats(user,'DEMO_TABLE','("A","B","C","D")');
 
10:46:19 demo@GEN12102> select /*+ gather_plan_statistics */ distinct a from DEMO_TABLE
10:46:19 2 where a=1 and b=1 and c=1 and d=1;
 
A
----------
1
 
10:46:19 demo@GEN12102> select * from table(dbms_xplan.display_cursor(null,null,'iostats last'));
 
-------------------------------------------------------------------------------------------
| Id | Operation | Name | Starts | E-Rows | A-Rows | A-Time | Buffers |
-------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | | 1 |00:00:00.01 | 11 |
| 1 | SORT UNIQUE NOSORT| | 1 | 1 | 1 |00:00:00.01 | 11 |
|* 2 | TABLE ACCESS FULL| DEMO_TABLE | 1 | 62 | 500 |00:00:00.01 | 11 |
-------------------------------------------------------------------------------------------
 
Predicate Information (identified by operation id):
--------------------------------------------------
 
2 - filter(("A"=1 AND "B"=1 AND "C"=1 AND "D"=1))
 
10:47:10 demo@GEN12102> exec dbms_spd.flush_sql_plan_directive;
10:47:10 demo@GEN12102> select directive_id,type,state,reason,notes,created,last_modified,last_used from dba_sql_plan_directives where directive_id in(select directive_id from dba_sql_plan_dir_objects where owner='DEMO') order by created;
 
DIRECTIVE_ID TYPE STATE REASON
----------------------- ---------------- ---------- ------------------------------------
NOTES CREATED
------------------------------------------------------------------------------------------ --------
LAST_MOD LAST_USE
-------- --------
615512554562037875 DYNAMIC_SAMPLING USABLE SINGLE TABLE CARDINALITY MISESTIMATE
<spd_note> 10:34:36
<internal_state>PERMANENT</internal_state>
<redundant>NO</redundant>
<spd_text>{EC(DEMO.DEMO_TABLE)[A, B, C, D]}</spd_text>
</spd_note>

Future executions of such SQL will use the directive again:


10:47:46 demo@GEN12102> select * from table(dbms_xplan.display_cursor(null,null,'iostats last'));
 
-------------------------------------------------------------------------------------------
| Id | Operation | Name | Starts | E-Rows | A-Rows | A-Time | Buffers |
-------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | | 1 |00:00:00.01 | 11 |
| 1 | SORT UNIQUE NOSORT| | 1 | 1 | 1 |00:00:00.01 | 11 |
|* 2 | TABLE ACCESS FULL| DEMO_TABLE | 1 | 500 | 500 |00:00:00.01 | 11 |
-------------------------------------------------------------------------------------------
 
Predicate Information (identified by operation id):
---------------------------------------------------
 
2 - filter(("A"=1 AND "B"=1 AND "C"=1 AND "D"=1))
Note
-----
- dynamic statistics used: dynamic sampling (level=2)
- 1 Sql Plan Directive used for this statement

That’s how it worked in 12.1.0.2. Here’s the behavior of 12.2.0.1:

First of all I have to enable adaptive statistics and switch on the preference to automatically create extended statistics:


10:49:22 demo@GEN12201> alter system set optimizer_adaptive_statistics=true;
 
10:49:22 demo@GEN12201> exec DBMS_STATS.SET_PARAM ('AUTO_STAT_EXTENSIONS','ON');

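To verify both settings afterwards, something like this should do (a sketch):

show parameter optimizer_adaptive_statistics
select dbms_stats.get_prefs('AUTO_STAT_EXTENSIONS') from dual;
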
Creating the table with the appropriate stats is the same as in 12.1.:


10:50:23 demo@GEN12201> create table DEMO_TABLE as
10:50:23 2 select mod(rownum,2) a ,mod(rownum,2) b ,mod(rownum,2) c ,mod(rownum,2) d
10:50:23 3 from dual connect by level <=1000;
 
10:51:02 demo@GEN12201> select * from DEMO_TABLE;
 
A B C D
---------- ---------- ---------- ----------
1 1 1 1
...
1 1 1 1
0 0 0 0
 
10:51:31 demo@GEN12201> select column_name,num_distinct,density,last_analyzed from user_tab_col_statistics
10:51:31 2 where table_name='DEMO_TABLE' order by 1;
 
COLUMN_NAME NUM_DISTINCT DENSITY LAST_ANA
------------ ------------ ---------- --------
A 2 .5 10:50:23
B 2 .5 10:50:23
C 2 .5 10:50:23
D 2 .5 10:50:23

Let’s run the first query:


10:51:42 demo@GEN12201> select /*+ gather_plan_statistics */ * from DEMO_TABLE
10:51:42 2 where a=0 and b=0 and c=0 and d=0;
 
A B C D
---------- ---------- ---------- ----------
0 0 0 0
..
0 0 0 0
0 0 0 0
 
10:51:42 demo@GEN12201> select * from table(dbms_xplan.display_cursor(null,null,'iostats last'));
 
------------------------------------------------------------------------------------------
| Id | Operation | Name | Starts | E-Rows | A-Rows | A-Time | Buffers |
------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | | 500 |00:00:00.01 | 44 |
|* 1 | TABLE ACCESS FULL| DEMO_TABLE | 1 | 63 | 500 |00:00:00.01 | 44 |
------------------------------------------------------------------------------------------
 
Predicate Information (identified by operation id):
--------------------------------------------------
 
1 - filter(("A"=0 AND "B"=0 AND "C"=0 AND "D"=0))

As in 12.1. we have the misestimate in the computed cardinality (E-Rows) versus actual cardinality (A-Rows).

But here comes the first difference to 12.1.: The cursor is not marked as reoptimizable:


10:53:31 demo@GEN12201> select sql_id,child_number,is_reoptimizable from v$sql
10:53:31 2 where sql_id='gbwsdbt5usvt6';
 
SQL_ID CHILD_NUMBER IS_REOPTIMIZABLE
------------- ------------ ----------------
gbwsdbt5usvt6 0 N

I have to execute the SQL a second time to make the cursor IS_REOPTIMIZABLE=Y:


10:54:27 demo@GEN12201> select /*+ gather_plan_statistics */ * from DEMO_TABLE
10:54:27 2 where a=0 and b=0 and c=0 and d=0;
 
A B C D
---------- ---------- ---------- ----------
0 0 0 0
...
0 0 0 0

The computed cardinality is still wrong:


10:54:27 demo@GEN12201> select * from table(dbms_xplan.display_cursor(null,null,'iostats last'));
 
------------------------------------------------------------------------------------------
| Id | Operation | Name | Starts | E-Rows | A-Rows | A-Time | Buffers |
------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | | 500 |00:00:00.01 | 44 |
|* 1 | TABLE ACCESS FULL| DEMO_TABLE | 1 | 63 | 500 |00:00:00.01 | 44 |
------------------------------------------------------------------------------------------
 
Predicate Information (identified by operation id):
---------------------------------------------------
 
1 - filter(("A"=0 AND "B"=0 AND "C"=0 AND "D"=0))

But the cursor is on IS_REOPTIMIZABLE=Y now:


10:54:27 demo@GEN12201> select sql_id,child_number,is_reoptimizable from v$sql
10:54:27 2 where sql_id='gbwsdbt5usvt6';
 
SQL_ID CHILD_NUMBER IS_REOPTIMIZABLE
------------- ------------ ----------------
gbwsdbt5usvt6 0 Y

However, statistics feedback is NOT USED in 12.2.:


10:56:33 demo@GEN12201> select /*+ gather_plan_statistics */ * from DEMO_TABLE
10:56:33 2 where a=0 and b=0 and c=0 and d=0;
 
A B C D
---------- ---------- ---------- ----------
0 0 0 0
...
0 0 0 0
 
10:56:33 demo@GEN12201> select * from table(dbms_xplan.display_cursor(null,null,'iostats last'));
 
------------------------------------------------------------------------------------------
| Id | Operation | Name | Starts | E-Rows | A-Rows | A-Time | Buffers |
------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | | 500 |00:00:00.01 | 44 |
|* 1 | TABLE ACCESS FULL| DEMO_TABLE | 1 | 63 | 500 |00:00:00.01 | 44 |
------------------------------------------------------------------------------------------
 
Predicate Information (identified by operation id):
--------------------------------------------------
 
1 - filter(("A"=0 AND "B"=0 AND "C"=0 AND "D"=0))

I.e. the cardinality is still wrong and the Note saying “statistics feedback used for this statement” is not there.
But I can see a new SPD created:


10:58:37 demo@GEN12201> exec dbms_spd.flush_sql_plan_directive;
10:58:39 demo@GEN12201> select directive_id,type,state,reason,notes,created,last_modified,last_used
10:58:39 2 from dba_sql_plan_directives where directive_id in(
10:58:39 3 select directive_id from dba_sql_plan_dir_objects where owner='DEMO'
10:58:39 4 ) order by created;
 
DIRECTIVE_ID TYPE STATE REASON
----------------------- ----------------------- ---------- ------------------------------------
NOTES CREATED
------------------------------------------------------------------------------------------ --------
LAST_MOD LAST_USE
-------- --------
18321513813433659475 DYNAMIC_SAMPLING USABLE SINGLE TABLE CARDINALITY MISESTIMATE
<spd_note> 10:58:35
<internal_state>NEW</internal_state>
<redundant>NO</redundant>
<spd_text>{EC(DEMO.DEMO_TABLE)[A, B, C, D]}</spd_text>
</spd_note>

And it’s being used:


10:59:08 demo@GEN12201> select /*+ gather_plan_statistics */ count(*) from DEMO_TABLE
10:59:08 2 where a=0 and b=0 and c=0 and d=0;
 
COUNT(*)
----------
500
 
10:59:08 demo@GEN12201> select * from table(dbms_xplan.display_cursor(null,null,'iostats last'));
 
-------------------------------------------------------------------------------------------
| Id | Operation | Name | Starts | E-Rows | A-Rows | A-Time | Buffers |
-------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | | 1 |00:00:00.01 | 11 |
| 1 | SORT AGGREGATE | | 1 | 1 | 1 |00:00:00.01 | 11 |
|* 2 | TABLE ACCESS FULL| DEMO_TABLE | 1 | 500 | 500 |00:00:00.01 | 11 |
-------------------------------------------------------------------------------------------
 
Predicate Information (identified by operation id):
---------------------------------------------------
 
2 - filter(("A"=0 AND "B"=0 AND "C"=0 AND "D"=0))
 
Note
-----
- dynamic statistics used: dynamic sampling (level=2)
- 1 Sql Plan Directive used for this statement

The new feature in 12cR2 is the newly created directive of type DYNAMIC_SAMPLING_RESULT:


10:59:31 demo@GEN12201> exec dbms_spd.flush_sql_plan_directive;
10:59:31 demo@GEN12201> select directive_id,type,state,reason,notes,created,last_modified,last_used from dba_sql_plan_directives where directive_id in( select directive_id from dba_sql_plan_dir_objects where owner='DEMO' ) order by created;
 
DIRECTIVE_ID TYPE STATE REASON
----------------------- ----------------------- ---------- ------------------------------------
NOTES CREATED
------------------------------------------------------------------------------------------ --------
LAST_MOD LAST_USE
-------- --------
18321513813433659475 DYNAMIC_SAMPLING USABLE SINGLE TABLE CARDINALITY MISESTIMATE
<spd_note> 10:58:35
<internal_state>MISSING_STATS</internal_state>
<redundant>NO</redundant>
<spd_text>{EC(DEMO.DEMO_TABLE)[A, B, C, D]}</spd_text>
</spd_note>
 
14490288466401772154 DYNAMIC_SAMPLING_RESULT USABLE VERIFY CARDINALITY ESTIMATE
<spd_note> 10:59:08
<internal_state>NEW</internal_state>
<redundant>NO</redundant>
<spd_text>{(DEMO.DEMO_TABLE, num_rows=1000) - (SQL_ID:7fp7c6kcgmzux, T.CARD=500[-2 -2])}
</spd_text>
</spd_note>

I.e. the result of the dynamically sampled data is stored in the repository. The SQL_ID 7fp7c6kcgmzux of a stripped dynamic sampling query is stored, so that the result can be reused by other queries in the future. I.e. Oracle just has to generate the SQL_ID of the dynamic sampling query and can use a prior result if it finds one in the repository.

As the internal state of the directive is on MISSING_STATS and the DBMS_STATS-preference AUTO_STAT_EXTENSIONS is set to ON, Oracle will create extended stats when gathering stats next time:


11:02:17 demo@GEN12201> select column_name,num_distinct,density,last_analyzed from user_tab_col_statistics where table_name='DEMO_TABLE' order by 1;
 
COLUMN_NAME NUM_DISTINCT DENSITY LAST_ANA
------------ ------------ ---------- --------
A 2 .5 10:50:23
B 2 .5 10:50:23
C 2 .5 10:50:23
D 2 .5 10:50:23
 
11:04:10 demo@GEN12201> exec dbms_stats.gather_table_stats(user,'DEMO_TABLE',options=>'GATHER AUTO',no_invalidate=>false);
 
11:04:11 demo@GEN12201> select column_name,num_distinct,last_analyzed from user_tab_col_statistics where table_name='DEMO_TABLE' order by 1;
 
COLUMN_NAME NUM_DISTINCT LAST_ANA
------------ ------------ --------
A 2 11:04:10
B 2 11:04:10
C 2 11:04:10
D 2 11:04:10
SYS_STSPJNMI 2 11:04:10
 
11:04:11 demo@GEN12201> select extension_name,extension from user_stat_extensions where table_name='DEMO_TABLE';
 
EXTENSION_NAME
---------------
EXTENSION
----------------------------------------------------------------------------------------------------
SYS_STSPJNMIY_S
("A","B","C","D")

Once a query is executed again, the internal state changes to HAS_STATS (same as in 12.1):


11:04:33 demo@GEN12201> select /*+ gather_plan_statistics */ distinct b,c,d,a from DEMO_TABLE
11:04:33 2 where a=0 and b=0 and c=0 and d=0;
 
B C D A
---------- ---------- ---------- ----------
0 0 0 0
 
11:04:33 demo@GEN12201> select * from table(dbms_xplan.display_cursor(null,null,'iostats last'));
 
-------------------------------------------------------------------------------------------
| Id | Operation | Name | Starts | E-Rows | A-Rows | A-Time | Buffers |
-------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | | 1 |00:00:00.01 | 11 |
| 1 | SORT UNIQUE NOSORT| | 1 | 1 | 1 |00:00:00.01 | 11 |
|* 2 | TABLE ACCESS FULL| DEMO_TABLE | 1 | 500 | 500 |00:00:00.01 | 11 |
-------------------------------------------------------------------------------------------
 
Predicate Information (identified by operation id):
---------------------------------------------------
 
2 - filter(("A"=0 AND "B"=0 AND "C"=0 AND "D"=0))
 
Note
-----
- dynamic statistics used: dynamic sampling (level=2)
- 1 Sql Plan Directive used for this statement
 
11:04:35 demo@GEN12201> exec dbms_spd.flush_sql_plan_directive;
11:04:35 demo@GEN12201> select directive_id,type,state,reason,notes,created,last_modified,last_used from dba_sql_plan_directives where directive_id in(select directive_id from dba_sql_plan_dir_objects where owner='DEMO') order by created;
 
DIRECTIVE_ID TYPE STATE REASON
----------------------- ----------------------- ---------- ------------------------------------
NOTES CREATED
------------------------------------------------------------------------------------------ --------
LAST_MOD LAST_USE
-------- --------
18321513813433659475 DYNAMIC_SAMPLING SUPERSEDED SINGLE TABLE CARDINALITY MISESTIMATE
<spd_note> 10:58:35
<internal_state>HAS_STATS</internal_state>
<redundant>NO</redundant>
<spd_text>{EC(DEMO.DEMO_TABLE)[A, B, C, D]}</spd_text>
</spd_note>
 
14490288466401772154 DYNAMIC_SAMPLING_RESULT USABLE VERIFY CARDINALITY ESTIMATE
<spd_note> 10:59:08
<internal_state>NEW</internal_state>
<redundant>NO</redundant>
<spd_text>{(DEMO.DEMO_TABLE, num_rows=1000) - (SQL_ID:7fp7c6kcgmzux, T.CARD=500[-2 -2])}
</spd_text>
</spd_note>

The next query uses the stats instead of the SPD:


11:05:23 demo@GEN12201> select /*+ gather_plan_statistics */ distinct a,b,c,d from DEMO_TABLE
11:05:23 2 where a=1 and b=1 and c=1 and d=1;
 
A B C D
---------- ---------- ---------- ----------
1 1 1 1
 
11:05:23 demo@GEN12201> select * from table(dbms_xplan.display_cursor(null,null,'iostats last'));
 
-------------------------------------------------------------------------------------------
| Id | Operation | Name | Starts | E-Rows | A-Rows | A-Time | Buffers |
-------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | | 1 |00:00:00.01 | 11 |
| 1 | SORT UNIQUE NOSORT| | 1 | 1 | 1 |00:00:00.01 | 11 |
|* 2 | TABLE ACCESS FULL| DEMO_TABLE | 1 | 500 | 500 |00:00:00.01 | 11 |
-------------------------------------------------------------------------------------------
 
Predicate Information (identified by operation id):
---------------------------------------------------
 
2 - filter(("A"=1 AND "B"=1 AND "C"=1 AND "D"=1))

When dropping the extended stats the estimated cardinality is wrong again:


11:05:49 demo@GEN12201> exec dbms_stats.drop_extended_stats(user,'DEMO_TABLE','("A","B","C","D")');
 
11:05:57 demo@GEN12201> select /*+ gather_plan_statistics */ distinct a from DEMO_TABLE
11:05:57 2 where a=1 and b=1 and c=1 and d=1;
 
A
----------
1
 
11:05:57 demo@GEN12201> select * from table(dbms_xplan.display_cursor(null,null,'iostats last'));
 
-------------------------------------------------------------------------------------------
| Id | Operation | Name | Starts | E-Rows | A-Rows | A-Time | Buffers |
-------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | | 1 |00:00:00.01 | 11 |
| 1 | SORT UNIQUE NOSORT| | 1 | 1 | 1 |00:00:00.01 | 11 |
|* 2 | TABLE ACCESS FULL| DEMO_TABLE | 1 | 62 | 500 |00:00:00.01 | 11 |
-------------------------------------------------------------------------------------------
 
Predicate Information (identified by operation id):
---------------------------------------------------
 
2 - filter(("A"=1 AND "B"=1 AND "C"=1 AND "D"=1))

However, the SPD won’t change its state to PERMANENT as in 12.1:


11:06:38 demo@GEN12201> exec dbms_spd.flush_sql_plan_directive;
11:06:38 demo@GEN12201> select directive_id,type,state,reason,notes,created,last_modified,last_used from dba_sql_plan_directives where directive_id in(select directive_id from dba_sql_plan_dir_objects where owner='DEMO') order by created;
 
DIRECTIVE_ID TYPE STATE REASON
----------------------- ----------------------- ---------- ------------------------------------
NOTES CREATED
------------------------------------------------------------------------------------------ --------
LAST_MOD LAST_USE
-------- --------
18321513813433659475 DYNAMIC_SAMPLING SUPERSEDED SINGLE TABLE CARDINALITY MISESTIMATE
<spd_note> 10:58:35
<internal_state>HAS_STATS</internal_state>
<redundant>NO</redundant>
<spd_text>{EC(DEMO.DEMO_TABLE)[A, B, C, D]}</spd_text>
</spd_note>
 
14490288466401772154 DYNAMIC_SAMPLING_RESULT USABLE VERIFY CARDINALITY ESTIMATE
<spd_note> 10:59:08
<internal_state>NEW</internal_state>
<redundant>NO</redundant>
<spd_text>{(DEMO.DEMO_TABLE, num_rows=1000) - (SQL_ID:7fp7c6kcgmzux, T.CARD=500[-2 -2])}
</spd_text>
</spd_note>

I.e. future queries will have the misestimate, because the SPD remains on HAS_STATS:


11:07:16 demo@GEN12201> select /*+ gather_plan_statistics */ distinct b from DEMO_TABLE
11:07:16 2 where a=1 and b=1 and c=1 and d=1;
 
B
----------
1
 
11:07:16 demo@GEN12201> select * from table(dbms_xplan.display_cursor(null,null,'iostats last'));
 
-------------------------------------------------------------------------------------------
| Id | Operation | Name | Starts | E-Rows | A-Rows | A-Time | Buffers |
-------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | | 1 |00:00:00.01 | 11 |
| 1 | SORT UNIQUE NOSORT| | 1 | 1 | 1 |00:00:00.01 | 11 |
|* 2 | TABLE ACCESS FULL| DEMO_TABLE | 1 | 62 | 500 |00:00:00.01 | 11 |
-------------------------------------------------------------------------------------------
 
Predicate Information (identified by operation id):
---------------------------------------------------
 
2 - filter(("A"=1 AND "B"=1 AND "C"=1 AND "D"=1))

REMARK: Interestingly, the E-Rows value is now 62 instead of the earlier 63, but I haven’t checked the reason for that yet.


11:07:46 demo@GEN12201> exec dbms_spd.flush_sql_plan_directive;
11:07:46 demo@GEN12201> select directive_id,type,state,reason,notes,created,last_modified,last_used from dba_sql_plan_directives where directive_id in(select directive_id from dba_sql_plan_dir_objects where owner='DEMO') order by created;
 
DIRECTIVE_ID TYPE STATE REASON
----------------------- ----------------------- ---------- ------------------------------------
NOTES CREATED
------------------------------------------------------------------------------------------ --------
LAST_MOD LAST_USE
-------- --------
18321513813433659475 DYNAMIC_SAMPLING SUPERSEDED SINGLE TABLE CARDINALITY MISESTIMATE
<spd_note> 10:58:35
<internal_state>HAS_STATS</internal_state>
<redundant>NO</redundant>
<spd_text>{EC(DEMO.DEMO_TABLE)[A, B, C, D]}</spd_text>
</spd_note>
 
14490288466401772154 DYNAMIC_SAMPLING_RESULT USABLE VERIFY CARDINALITY ESTIMATE
<spd_note> 10:59:08
<internal_state>NEW</internal_state>
<redundant>NO</redundant>
<spd_text>{(DEMO.DEMO_TABLE, num_rows=1000) - (SQL_ID:7fp7c6kcgmzux, T.CARD=500[-2 -2])}
</spd_text>
</spd_note>

So in summary there are several changes in 12.2 compared to 12.1:

1.) Statistics feedback won’t mark the cursor for reoptimization after the first execution. It takes 2 executions with a cardinality misestimate to mark the cursor IS_REOPTIMIZABLE=Y.
2.) Even if the cursor is marked IS_REOPTIMIZABLE=Y, it won’t be parsed again with the additional information from cardinality feedback. In 12.2 only a SQL Plan Directive is created.
3.) When a query which can use a SQL Plan Directive is executed for the first time, a new SQL Plan Directive of type DYNAMIC_SAMPLING_RESULT is created. Future queries can reuse that result.
4.) Dropping automatically created extended stats won’t change the state of the SPD from HAS_STATS to PERMANENT with the next execution of a query (and flushing of the SPD).

The bottom line is that even if you enable SPDs in 12.2 like in 12.1 (with optimizer_adaptive_statistics = TRUE and DBMS_STATS.SET_PARAM(‘AUTO_STAT_EXTENSIONS’,’ON’)), the behavior in 12.2 is different. I would describe the change, using statistics feedback only on the 2nd execution of a query to then create an SPD, and not considering the feedback for the next parse, as a more conservative approach towards more stable plans, less “adaptive” than in 12.1.

The post SQL Plan Directives in 12cR2. Behavior is different than in 12cR1. appeared first on Blog dbi services.

Documentum story – Add the Java urandom inside the wlst.sh script to improve the performance of all WLST sessions

Tue, 2016-12-20 04:13

WebLogic Server takes a long time (e.g. over 15 minutes) to start up. Can the performance be improved?
Using the /dev/urandom option in the WebLogic Java Virtual Machine parameters of the startup script can be the solution.
In fact, /dev/random is a blocking device: during times of low entropy (when there are not enough random bits left in the pool), it blocks any process reading from it (which therefore hangs) until more random bits have been generated. Pointing the JVM to the non-blocking /dev/urandom avoids this wait.
So, if you are running custom WLST scripts to start up your WebLogic infrastructure, you should know that there are a few things you can try to improve performance.
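
To see whether low entropy is actually an issue on your host, you can check the kernel pool (Linux; the threshold in the comment is only a rough rule of thumb):

# Current entropy estimate in bits; very low values (a few hundred or less)
# mean reads from the blocking /dev/random may hang
cat /proc/sys/kernel/random/entropy_avail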

This change improves the performance of all WLST sessions. There is no functional impact in modifying the wlst.sh script.

1- Log in to the WebLogic Server.
2- Use the Unix (installation owner) account.
3- Go to the following directory:

cd $ORACLE_HOME/oracle_common/common/bin

4- Edit the wlst.sh script with the vi editor.
5- Add the following option to JVM_ARGS:

JVM_ARGS="-Dprod.props.file='${WL_HOME}'/.product.properties ${WLST_PROPERTIES} ${JVM_D64} ${UTILS_MEM_ARGS} ${COMMON_JVM_ARGS} ${CONFIG_JVM_ARGS} -Djava.security.egd=file:///dev/./urandom"

6- Please note that if you specifically want to reduce the startup time of your WLST scripts, you can do it by skipping WebLogic module scanning:

java weblogic.WLST -skipWLSModuleScanning <script>.py 

It can be an advantage in some cases.

Other ways to increase the performance of your WLS:

1- This blog deals especially with the urandom parameter, but be aware that there are other ways to tune your WebLogic Server(s) well.
To give you an overview of the possible changes, set Java parameters in the WebLogic Server start script or directly in JAVA_OPTS:
• urandom (see above)
• java heap size:
For higher throughput, set (if possible) the minimum Java heap size equal to the maximum heap size. For example:

"$JAVA_HOME/bin/java" -server -Xms512m -Xmx512m -classpath $CLASSPATH ...

2- Think about the number of Work Managers you really need:
A default Work Manager might not be adapted to all application’s needs.
In this case, you’ll have to create a custom Work Manager.

3- Remember to tune the stuck thread detection properly.
WLS detects when a thread becomes “stuck”, but a stuck thread can neither finish its current work nor accept new work, so it directly and quickly impacts the performance of the machine.
4- Don’t forget the basics: operating system tuning and/or network tuning.

The post Documentum story – Add the Java urandom inside the wlst.sh script to improve the performance of all WLST sessions appeared first on Blog dbi services.

Oracle 12cR2 – DataGuard and TCP.VALID_NODE_CHECKING

Tue, 2016-12-20 03:58

Security is a quite important topic, especially when running Oracle in the cloud. To make your DataGuard a little bit more secure, you can enable the TCP.VALIDNODE_CHECKING feature. I call it SQL*Net Firewall.

Enabling the SQL*Net Firewall is quite straight forward. There are only 3 sqlnet.ora parameters which are involved:

  • TCP.VALIDNODE_CHECKING
  • TCP.INVITED_NODES
  • TCP.EXCLUDED_NODES

The parameter TCP.VALIDNODE_CHECKING turns this feature on or off. If enabled, then the incoming connections are allowed only if they originate from a node that conforms to the list specified by TCP.INVITED_NODES or TCP.EXCLUDED_NODES parameters.

The parameter TCP.INVITED_NODES lists all clients that are allowed to access the database.

The parameter TCP.EXCLUDED_NODES specifies which clients are not allowed to access the database. In fact, they cannot even reach the listener.

Some important notes before you start configuring your valid node checking
  • All host names in the TCP.INVITED_NODES or TCP.EXCLUDED_NODES must be resolvable or the listener will not start. In case a host name is not resolvable, the start of the listener fails with the error message: TNS-00584: Valid node checking configuration error. In the listener trace file, you can identify which host name exactly is causing the issue. In my case it is the host dbidg04 which is not resolvable. Enabling listener tracing is very useful, especially if you have a long list of names and IP addresses.
  [20-DEC-2016 08:18:35:894] nlvlloadp: Adding host dbidg04
  [20-DEC-2016 08:18:35:894] snlinGetAddrInfo: entry
  [20-DEC-2016 08:18:35:937] snlinGetAddrInfo: getaddrinfo() failed with error -2
  [20-DEC-2016 08:18:35:938] snlinGetAddrInfo: exit
  [20-DEC-2016 08:18:35:938] nlvlloadp: Error 7001 while resolving host 17712412
  [20-DEC-2016 08:18:35:938] nlvlloadp: exit
  [20-DEC-2016 08:18:35:938] ntvllt: Problem in Loading tcp.invited_nodes
  • Any change of the values in TCP.VALIDNODE_CHECKING, TCP.INVITED_NODES or TCP.EXCLUDED_NODES requires the listener to be stopped and started. Just reloading the listener can be very dangerous in some situations. See also Bug 22194469 : TCP.INVITED_NODES IS DISABLED AFTER RELOADING LISTENER
  • The TCP.INVITED_NODES list takes precedence over the TCP.EXCLUDED_NODES if both lists are used. E.g. if the following entries are present, the host 192.168.56.203 is allowed to connect.
  TCP.INVITED_NODES  = (127.0.0.1, 192.168.56.201, 192.168.56.202, 192.168.56.203)
  TCP.EXCLUDED_NODES = (192.168.56.203)
  • All entries in TCP.INVITED_NODES or TCP.EXCLUDED_NODES have to be on one line. In case the entries are not on one line, the listener will not start and you will receive the following error: TNS-00583: Valid node checking: unable to parse configuration parameters. E.g. the following list is invalid:
  TCP.INVITED_NODES = (127.0.0.1,
  192.168.56.201,
  192.168.56.202)
  • Adding empty placeholder strings like (comma, space, comma) is possible; however, I would not recommend it.
TCP.INVITED_NODES = (127.0.0.1, 192.168.56.201, 192.168.56.202, , ,)
  •  Entries can be added via Net Manager as well. Local -> Profile -> General -> Access Rights. HINT: If you have some special formatting or commenting in your sqlnet.ora, the netmgr might reorganize your file. So better do a copy beforehand. ;-)
  • In case you work with SCAN and the GRID Infrastructure GRID_HOME/network/admin/sqlnet.ora file, then don’t forget to add your NODE VIP’s and SCAN VIP’s to your TCP.INVITED_NODES list

By knowing all that, we can start configuring our SQL*Net Firewall for our DataGuard environment. In a minimum configuration, we need to add the localhost, all hosts involved in the DataGuard configuration (Primary, Standby 1, standby 2, observer and so on) and of course the application server. Right now I do have my primary (192.168.56.201), my standby (192.168.56.202) and the localhost in the invited nodes list. The application server 192.168.56.203 is missing.

TCP.VALIDNODE_CHECKING = YES
TCP.INVITED_NODES = (127.0.0.1, 192.168.56.201, 192.168.56.202)

If I try to connect now from the application server, I do get the following error: TNS-12547: TNS:lost contact. As you can see, not even a tnsping is possible.

oracle@dbidg03:/home/oracle/ [DBIT122] tnsping DBIT122_SITE1

TNS Ping Utility for Linux: Version 12.2.0.1.0 - Production on 20-DEC-2016 09:11:51

Copyright (c) 1997, 2016, Oracle.  All rights reserved.

Used parameter files:
/u01/app/oracle/network/admin/sqlnet.ora

Used TNSNAMES adapter to resolve the alias
Attempting to contact (DESCRIPTION = (ADDRESS_LIST = (ADDRESS = (PROTOCOL = TCP)(HOST = dbidg01)(PORT = 1521))) (CONNECT_DATA = (SERVICE_NAME = DBIT122_SITE1_DGMGRL)))
TNS-12547: TNS:lost contact

A sqlplus connection gives you the same error:

oracle@dbidg03:/home/oracle/ [DBIT122] sqlplus system@DBIT122_SITE1

SQL*Plus: Release 12.2.0.1.0 Production on Tue Dec 20 09:16:00 2016

Copyright (c) 1982, 2016, Oracle.  All rights reserved.

Enter password:
ERROR:
ORA-12547: TNS:lost contact

The TNS:lost contact error can mean anything. Turning on listener logging gives you a much clearer error message, saying that an incoming connect from 192.168.56.203 is rejected.

Incoming connection from 192.168.56.203 rejected
20-DEC-2016 09:11:51 * 12546
TNS-12546: TNS:permission denied
 TNS-12560: TNS:protocol adapter error
  TNS-00516: Permission denied

Through listener logging, you can monitor all incoming connections that are rejected. That’s quite cool from my point of view. Now … let’s add the application server (192.168.56.203) to the list and restart the listener (not reload). Don’t forget to do it on the Primary and the Standby.

TCP.VALIDNODE_CHECKING = YES
TCP.INVITED_NODES = (127.0.0.1, 192.168.56.201, 192.168.56.202, 192.168.56.203)
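
As a minimal sketch of that restart (assuming the default listener name LISTENER, which lsnrctl uses when no name is given), the stop/start sequence on each host would be:

lsnrctl stop
lsnrctl start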

Now my application can tnsping and connect.

oracle@dbidg03:/home/oracle/ [DBIT122] tnsping DBIT122_SITE1

TNS Ping Utility for Linux: Version 12.2.0.1.0 - Production on 20-DEC-2016 09:21:01

Copyright (c) 1997, 2016, Oracle.  All rights reserved.

Used parameter files:
/u01/app/oracle/network/admin/sqlnet.ora


Used TNSNAMES adapter to resolve the alias
Attempting to contact (DESCRIPTION = (ADDRESS_LIST = (ADDRESS = (PROTOCOL = TCP)(HOST = dbidg01)(PORT = 1521))) (CONNECT_DATA = (SERVICE_NAME = DBIT122_SITE1_DGMGRL)))
OK (0 msec)
oracle@dbidg03:/home/oracle/ [DBIT122]
oracle@dbidg03:/home/oracle/ [DBIT122]
oracle@dbidg03:/home/oracle/ [DBIT122] sqlplus system@DBIT122_SITE1

SQL*Plus: Release 12.2.0.1.0 Production on Tue Dec 20 09:21:10 2016

Copyright (c) 1982, 2016, Oracle.  All rights reserved.

Enter password:
Last Successful login time: Mon Dec 19 2016 10:44:52 +01:00

Connected to:
Oracle Database 12c Enterprise Edition Release 12.2.0.1.0 - 64bit Production

SQL>

 

Conclusion

Setting up valid node checking is quite easy, and the listener log gives you clear error messages if there are any rejected hosts. Make sure that you have a DataGuard test environment where you can test this feature, including switchover, failover and so on. And, very important, verify your name resolution before you add any hostname to the invited node list. The only drawback I see is that the listener has to be stopped and started, meaning that new connections are not possible for a very short period of time.

 

 

 

The article Oracle 12cR2 – DataGuard and TCP.VALID_NODE_CHECKING appeared first on Blog dbi services.

Documentum story – How to avoid “DFC_FILE_LOCK_ACQUIRE_WARN” messages in Java Method Server (jms) LOG

Tue, 2016-12-20 02:00

Ref: EMC article number 000335987.
The last publication date is Sat Feb 20 21:39:14 GMT 2016. Here is the link: https://support.emc.com/kb/335987

After upgrading from 6.7.x to 7.2, the following warning message is logged in JMS log files: com.documentum.fc.common.DfNewInterprocessLockImpl – [DFC_FILE_LOCK_ACQUIRE_WARN] Failed to acquire lock proceeding ahead with no lock java.nio.channels.OverlappingFileLockException at sun.nio.ch.SharedFileLockTable.checkList FileLockTable.java:255)

In order to avoid this warning, EMC has provided a solution (SR #69856498) that will be described below:

By default ACS and ServerApp dfc.properties are pointing to $DOCUMENTUM_SHARED/config/dfc.properties.

The solution is to add a separate ‘dfc.data.dir’ cache folder location in the ACS and ServerApp dfc.properties.
After a Java Method Server restart, separate cache folders are created inside $DOCUMENTUM_SHARED/jboss7.1.1/server and the WARNING messages are gone from acs.log.

In fact, this is just a warning that someone else has acquired a lock on the physical file (in this case dfc.keystore). Since ServerApps (Method Server) and ACS invoke DFC simultaneously, both try to acquire a lock on the dfc.keystore file and Java throws an OverlappingFileLockException. DFC then warns that it could not lock the file and proceeds without a lock. Ideally this should be just an info message in this case, where the file lock is acquired for read-only access. But the same logic is used by other functionality like registry updates and BOF cache updates, where this failure should be treated as a genuine warning or error. Going forward, engineering will have to correct this code by taking appropriate actions for each functionality. There is no functional impact in using a different data directory folder.

Please proceed as below to solve it:

  • Login to the Content Server
  • Change the current user to dmadmin (the administrator account)
  • Create some folders using:
 mkdir $DOCUMENTUM_SHARED/acs
 mkdir $DOCUMENTUM_SHARED/ServerApps
 mkdir $DOCUMENTUM_SHARED/bpm

 

  • Update all necessary dfc.properties files (with vi editor):

===============================================================================================================================

$DOCUMENTUM_SHARED/jboss7.1.1/server/DctmServer_MethodServer/deployments/acs.ear/lib/configs.jar/dfc.properties

⇒ Add at the end of this file the following line:

dfc.data.dir=$DOCUMENTUM_SHARED/acs

===============================================================================================================================

$DOCUMENTUM_SHARED/jboss7.1.1/server/DctmServer_MethodServer/deployments/ServerApps.ear/APP-INF/classes/dfc.properties

⇒ Add at the end of this file the following line:

dfc.data.dir=$DOCUMENTUM_SHARED/ServerApps

===============================================================================================================================

$DOCUMENTUM_SHARED/jboss7.1.1/server/DctmServer_MethodServer/deployments/bpm.ear/APP-INF/classes/dfc.properties

⇒ Add at the end of this file the following line:

dfc.data.dir=$DOCUMENTUM_SHARED/bpm

===============================================================================================================================

  • Verify that the recently created folders are empty using:
cd $DOCUMENTUM_SHARED
ls -l acs/ ServerApps/ bpm/

 

  • Restart the JMS using:
sh -c "cd $DOCUMENTUM_SHARED/jboss7.1.1/server;./stopMethodServer.sh"
sh -c "$DOCUMENTUM_SHARED/jboss7.1.1/server/startMethodServer.sh"

 

Verification
  • Verify that the recently created folders are now populated with default files and folders using:
cd $DOCUMENTUM_SHARED
ls -l acs/ ServerApps/ bpm/

The folders should now contain files.

  • Disconnect from the Content Server.

 

Using this procedure, you won’t see this WARNING message anymore.
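
As a quick check (assuming acs.log is located under the MethodServer logs directory of your installation; adjust the path to your environment), you can grep for the warning after the restart and verify that no new occurrences appear:

grep "DFC_FILE_LOCK_ACQUIRE_WARN" $DOCUMENTUM_SHARED/jboss7.1.1/server/DctmServer_MethodServer/logs/acs.log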
Regards,

Source: EMC article number 000335987

 

The article Documentum story – How to avoid “DFC_FILE_LOCK_ACQUIRE_WARN” messages in Java Method Server (jms) LOG appeared first on Blog dbi services.

Oracle 12cR2: AWR views in multitenant

Mon, 2016-12-19 13:42

In a previous post I explained how the AWR views have changed between 12.1.0.1 and 12.1.0.2 and now in 12.2.0.1 they have changed again. This is a good illustration of multitenant object link usage.

What’s new in 12cR2 is the ability to run AWR snapshots at CDB or PDB level. I really think that it makes more sense to read an AWR report at CDB level because it’s about analysing the system (=instance) activity. But with PDBaaS I can understand the need to give a report to analyse PDB sessions, resources and statements.

I’ll start with the conclusion – a map of the AWR views showing which ones read from CDB-level snapshots, PDB snapshots, or both:

(Diagram: map of the AWR views, showing which ones read from CDB-level snapshots, PDB snapshots, or both)
I’ll explain AWR reports in a future post. Basically when you run awrrpt.sql from CDB$ROOT you get CDB snapshots and when you run it from PDB you have the choice.

In the diagram above, just follow the arrows to know which view reads from the PDB, the CDB, or both. You see two switches between the root and the PDB: a data link for one way and a common view for the other way. Note that all are metadata links, so the switches also occur at parse time.

WRM$_

Let’s start from the table where AWR snapshots are stored:


SQL> select owner,object_name,object_type,sharing from dba_objects where object_name='WRM$_SNAPSHOT';
 
OWNER OBJECT_NAME OBJECT_TYPE SHARING
----- ------------------------------ ----------------------- ------------------
SYS WRM$_SNAPSHOT TABLE METADATA LINK

This is a table. METADATA LINK means that the structure is the same in all containers, but data is different.

I have the following containers:

SQL> select con_id,dbid,name from v$containers;
 
CON_ID DBID NAME
---------- ---------- ------------------------------
1 904475458 CDB$ROOT
2 2066620152 PDB$SEED
3 2623271973 PDB1

From CDB$ROOT I see data for the CDB:

SQL> select dbid,count(*) from WRM$_SNAPSHOT group by dbid;
 
DBID COUNT(*)
---------- ----------
904475458 91

and from PDB I see snapshots taken from PDB:

SQL> alter session set container=PDB1;
Session altered.
 
SQL> select dbid,count(*) from WRM$_SNAPSHOT group by dbid;
 
DBID COUNT(*)
---------- ----------
2623271973 79

So remember: CDB$ROOT has 91 snapshots with DBID=904475458 and PDB1 has 79 snapshots with DBID=2623271973.

AWR_ROOT_ and AWR_PDB_

Views on WRM$_SNAPSHOT are referenced in DBA_DEPENDENCIES:


SQL> select owner,name,type from dba_dependencies where referenced_name='WRM$_SNAPSHOT' and type like 'VIEW';
 
OWNER NAME TYPE
----- ------------------------------ -------------------
SYS AWR_ROOT_SNAPSHOT VIEW
SYS AWR_ROOT_SYSSTAT VIEW
SYS AWR_ROOT_ACTIVE_SESS_HISTORY VIEW
SYS AWR_ROOT_ASH_SNAPSHOT VIEW
SYS AWR_PDB_SNAPSHOT VIEW
SYS AWR_PDB_ACTIVE_SESS_HISTORY VIEW
SYS AWR_PDB_ASH_SNAPSHOT VIEW

I’m interested in views that show snapshot information: AWR_ROOT_SNAPSHOT and AWR_PDB_SNAPSHOT


SQL> select owner,object_name,object_type,sharing from dba_objects where object_name in ('AWR_ROOT_SNAPSHOT','AWR_PDB_SNAPSHOT') order by 3;
 
OWNER OBJECT_NAME OBJECT_TYPE SHARING
------ ------------------------------ ----------------------- ------------------
PUBLIC AWR_ROOT_SNAPSHOT SYNONYM METADATA LINK
PUBLIC AWR_PDB_SNAPSHOT SYNONYM METADATA LINK
SYS AWR_ROOT_SNAPSHOT VIEW DATA LINK
SYS AWR_PDB_SNAPSHOT VIEW METADATA LINK

Besides the synonyms, we have a metadata link view AWR_PDB_SNAPSHOT and a data link view AWR_ROOT_SNAPSHOT. The data link one means that it switches to CDB$ROOT when queried from a PDB. Here is the definition:


SQL> select owner,view_name,container_data,text from dba_views where view_name in ('AWR_ROOT_SNAPSHOT','AWR_PDB_SNAPSHOT');
 
OWNER VIEW_NAME C TEXT
------ ------------------------------ - --------------------------------------------------------------------------------
SYS AWR_ROOT_SNAPSHOT Y select snap_id, dbid, instance_number, startup_time,
begin_interval_time, end_interval_time,
flush_elapsed, snap_level, error_count, snap_flag, snap_timezone,
decode(con_dbid_to_id(dbid), 1, 0, con_dbid_to_id(dbid)) con_id
from WRM$_SNAPSHOT
where status = 0
 
SYS AWR_PDB_SNAPSHOT N select snap_id, dbid, instance_number, startup_time,
begin_interval_time, end_interval_time,
flush_elapsed, snap_level, error_count, snap_flag, snap_timezone,
decode(con_dbid_to_id(dbid), 1, 0, con_dbid_to_id(dbid)) con_id
from WRM$_SNAPSHOT
where status = 0

Same definition. The difference is that AWR_PDB_SNAPSHOT reads from the current container but AWR_ROOT_SNAPSHOT, being a DATA LINK, always reads from CDB$ROOT.

This is what we can see:

SQL> alter session set container=CDB$ROOT;
Session altered.
 
SQL> select dbid,count(*) from AWR_ROOT_SNAPSHOT group by dbid;
 
DBID COUNT(*)
---------- ----------
904475458 91
 
SQL> select dbid,count(*) from AWR_PDB_SNAPSHOT group by dbid;
 
DBID COUNT(*)
---------- ----------
904475458 91
 
SQL> alter session set container=PDB1;
Session altered.
 
SQL> select dbid,count(*) from AWR_ROOT_SNAPSHOT group by dbid;
 
DBID COUNT(*)
---------- ----------
904475458 91

This query when run in PDB1 displays the 91 snapshots from the CDB.

SQL> select dbid,count(*) from AWR_PDB_SNAPSHOT group by dbid;
 
DBID COUNT(*)
---------- ----------
2623271973 79

This one shows what is in the current container.

Those are the views used by the AWR report, depending on the AWR location choice. But what about the DBA_HIST_ views that we know and use from previous versions?

DBA_HIST_ and CDB_HIST_

I continue to follow the dependencies:

SQL> select owner,name,type from dba_dependencies where referenced_name in ('AWR_ROOT_SNAPSHOT','AWR_PDB_SNAPSHOT') and name like '%SNAPSHOT' order by 3;
 
OWNER NAME TYPE
------ ------------------------------ -------------------
PUBLIC AWR_ROOT_SNAPSHOT SYNONYM
PUBLIC AWR_PDB_SNAPSHOT SYNONYM
SYS DBA_HIST_SNAPSHOT VIEW
SYS CDB_HIST_SNAPSHOT VIEW
 
SQL> select owner,object_name,object_type,sharing from dba_objects where object_name in ('CDB_HIST_SNAPSHOT','DBA_HIST_SNAPSHOT');
 
OWNER OBJECT_NAME OBJECT_TYPE SHARING
------ ------------------------------ ----------------------- ------------------
SYS DBA_HIST_SNAPSHOT VIEW METADATA LINK
SYS CDB_HIST_SNAPSHOT VIEW METADATA LINK
PUBLIC DBA_HIST_SNAPSHOT SYNONYM METADATA LINK
PUBLIC CDB_HIST_SNAPSHOT SYNONYM METADATA LINK

Here are the views I’m looking for. They are metadata link only. Not data link. This means that they do not switch to CDB$ROOT.

But there’s more in the view definition:

SQL> select owner,view_name,container_data,text from dba_views where view_name in ('CDB_HIST_SNAPSHOT','DBA_HIST_SNAPSHOT');
 
OWNER VIEW_NAME C TEXT
------ ------------------------------ - --------------------------------------------------------------------------------
SYS DBA_HIST_SNAPSHOT N select "SNAP_ID","DBID","INSTANCE_NUMBER","STARTUP_TIME","BEGIN_INTERVAL_TIME","
END_INTERVAL_TIME","FLUSH_ELAPSED","SNAP_LEVEL","ERROR_COUNT","SNAP_FLAG","SNAP_
TIMEZONE","CON_ID" from AWR_ROOT_SNAPSHOT
 
SYS CDB_HIST_SNAPSHOT Y SELECT k."SNAP_ID",k."DBID",k."INSTANCE_NUMBER",k."STARTUP_TIME",k."BEGIN_INTERV
AL_TIME",k."END_INTERVAL_TIME",k."FLUSH_ELAPSED",k."SNAP_LEVEL",k."ERROR_COUNT",
k."SNAP_FLAG",k."SNAP_TIMEZONE",k."CON_ID", k.CON$NAME, k.CDB$NAME FROM CONTAINE
RS("SYS"."AWR_PDB_SNAPSHOT") k

DBA_HIST_SNAPSHOT is a simple view on AWR_ROOT_SNAPSHOT which, as we have seen above, always shows snapshots from the CDB:


SQL> alter session set container=CDB$ROOT;
Session altered.
 
SQL> select dbid,count(*) from DBA_HIST_SNAPSHOT group by dbid;
 
DBID COUNT(*)
---------- ----------
904475458 91
 
SQL> alter session set container=PDB1;
Session altered.
 
SQL> select dbid,count(*) from DBA_HIST_SNAPSHOT group by dbid;
 
DBID COUNT(*)
---------- ----------
904475458 91

Then CDB_HIST_SNAPSHOT reads AWR_PDB_SNAPSHOT, which shows current container snapshots. But this view is a COMMON DATA one, using the CONTAINERS() function. This means that when it is executed from CDB$ROOT by a common user, data from all open containers is retrieved:


SQL> alter session set container=CDB$ROOT;
Session altered.
 
SQL> select dbid,count(*) from CDB_HIST_SNAPSHOT group by dbid;
 
DBID COUNT(*)
---------- ----------
2623271973 79
904475458 91

However, from a PDB you cannot see anything else:

SQL> alter session set container=PDB1;
Session altered.
 
SQL> select dbid,count(*) from CDB_HIST_SNAPSHOT group by dbid;
 
DBID COUNT(*)
---------- ----------
2623271973 79

So what?

Multitenant adds a new dimension to the dictionary views and we must be aware of that. However, compatibility is still there: the scripts we used to run to query the DBA_HIST views should still work. Don’t forget to always join on DBID and INSTANCE_NUMBER in addition to SNAP_ID so that your scripts keep working in RAC and across containers.
In 12.2 you can do the same for your application: use metadata links, data links, and common views for your tables. But remember to keep it simple…
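
As a hedged illustration of such a join (using DBA_HIST_SYSSTAT, which exposes the same DBID, INSTANCE_NUMBER and SNAP_ID key columns), it would look like this:

select s.snap_id, s.instance_number, st.value
from dba_hist_snapshot s
join dba_hist_sysstat st
  on st.dbid = s.dbid
 and st.instance_number = s.instance_number
 and st.snap_id = s.snap_id
where st.stat_name = 'user commits';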

 

The article Oracle 12cR2: AWR views in multitenant appeared first on Blog dbi services.

RMAN> TRANSPORT TABLESPACE

Sun, 2016-12-18 03:12

In a previous post I explained how to use transportable tablespaces from a standby database. Here I’m showing an alternative where you can transport from a backup instead of a standby database. RMAN has been able to do that since 10gR2.

Transportable Tablespace is a beautiful feature: the performance of a physical copy with the flexibility of a logical export/import. But it has one drawback: the source tablespace must be opened read only while you copy it and export the metadata. This means that you cannot use it from production, for example when moving data to a data warehouse ODS. There’s an alternative to that: restore the tablespace with TSPITR (tablespace point-in-time recovery) into a temporary instance and transport from there.
This is what is automated by RMAN with a simple command: RMAN> TRANSPORT TABLESPACE.

Multitenant

This blog post shows how to do that when you are in 12c multitenant architecture. Even if 12.2 comes with online PDB clone, you may want to transport a single tablespace.

You cannot run TRANSPORT TABLESPACE when connected to a PDB. Let’s test it:

RMAN> connect target sys/oracle@//localhost/PDB1
connected to target database: CDB1:PDB1 (DBID=1975603085)

Here are the datafiles:

RMAN> report schema;
using target database control file instead of recovery catalog
Report of database schema for database with db_unique_name CDB1A
 
List of Permanent Datafiles
===========================
File Size(MB) Tablespace RB segs Datafile Name
---- -------- -------------------- ------- ------------------------
9 250 SYSTEM NO /u02/oradata/CDB1A/PDB1/system01.dbf
10 350 SYSAUX NO /u02/oradata/CDB1A/PDB1/sysaux01.dbf
11 520 UNDOTBS1 NO /u02/oradata/CDB1A/PDB1/undotbs01.dbf
12 5 USERS NO /u02/oradata/CDB1A/PDB1/users01.dbf
 
List of Temporary Files
=======================
File Size(MB) Tablespace Maxsize(MB) Tempfile Name
---- -------- -------------------- ----------- --------------------
3 20 TEMP 32767 /u02/oradata/CDB1A/PDB1/temp01.dbf

Let’s run the TRANSPORT TABLESPACE command:

RMAN> transport tablespace USERS auxiliary destination '/var/tmp/AUX' tablespace destination '/var/tmp/TTS';
RMAN-05026: warning: presuming following set of tablespaces applies to specified point-in-time
 
List of tablespaces expected to have UNDO segments
Tablespace SYSTEM
Tablespace UNDOTBS1
 
Creating automatic instance, with SID='jlDa'
 
initialization parameters used for automatic instance:
db_name=CDB1
db_unique_name=jlDa_pitr_CDB1
compatible=12.2.0
db_block_size=8192
db_files=200
diagnostic_dest=/u01/app/oracle
_system_trig_enabled=FALSE
sga_target=768M
processes=200
db_create_file_dest=/var/tmp/AUX
log_archive_dest_1='location=/var/tmp/AUX'
enable_pluggable_database=true
_clone_one_pdb_recovery=true
#No auxiliary parameter file used
 
starting up automatic instance CDB1
 
Oracle instance started
 
Total System Global Area 805306368 bytes
 
Fixed Size 8793056 bytes
Variable Size 234882080 bytes
Database Buffers 553648128 bytes
Redo Buffers 7983104 bytes
Automatic instance created
 
Removing automatic instance
shutting down automatic instance
Oracle instance shut down
Automatic instance removed
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03002: failure of transport tablespace command at 12/17/2016 21:33:14
RMAN-07538: Pluggable Database qualifier not allowed when connected to a Pluggable Database

You got the idea: an auxiliary instance is automatically created, but then the command fails because an internal command cannot be run from a PDB.

Run from CDB

So let’s run it when connected to CDB$ROOT:

echo set on
 
RMAN> connect target sys/oracle
connected to target database: CDB1 (DBID=894360530)

We can see all pluggable databases and all datafiles:

RMAN> report schema;
using target database control file instead of recovery catalog
Report of database schema for database with db_unique_name CDB1A
 
List of Permanent Datafiles
===========================
File Size(MB) Tablespace RB segs Datafile Name
---- -------- -------------------- ------- ------------------------
1 800 SYSTEM YES /u02/oradata/CDB1A/system01.dbf
3 480 SYSAUX NO /u02/oradata/CDB1A/sysaux01.dbf
4 65 UNDOTBS1 YES /u02/oradata/CDB1A/undotbs01.dbf
5 250 PDB$SEED:SYSTEM NO /u02/oradata/CDB1A/pdbseed/system01.dbf
6 350 PDB$SEED:SYSAUX NO /u02/oradata/CDB1A/pdbseed/sysaux01.dbf
7 5 USERS NO /u02/oradata/CDB1A/users01.dbf
8 520 PDB$SEED:UNDOTBS1 NO /u02/oradata/CDB1A/pdbseed/undotbs01.dbf
9 250 PDB1:SYSTEM YES /u02/oradata/CDB1A/PDB1/system01.dbf
10 350 PDB1:SYSAUX NO /u02/oradata/CDB1A/PDB1/sysaux01.dbf
11 520 PDB1:UNDOTBS1 YES /u02/oradata/CDB1A/PDB1/undotbs01.dbf
12 5 PDB1:USERS NO /u02/oradata/CDB1A/PDB1/users01.dbf
 
List of Temporary Files
=======================
File Size(MB) Tablespace Maxsize(MB) Tempfile Name
---- -------- -------------------- ----------- --------------------
1 240 TEMP 32767 /u02/oradata/CDB1A/temp01.dbf
2 32 PDB$SEED:TEMP 32767 /u02/oradata/CDB1A/pdbseed/temp012016-08-23_14-12-45-799-PM.dbf
3 20 PDB1:TEMP 32767 /u02/oradata/CDB1A/PDB1/temp01.dbf

We can run the TRANSPORT TABLESPACE command from here, naming the tablespace prefixed with the PDB name PDB1:USERS

transport tablespace … auxiliary destination … tablespace destination

The TRANSPORT TABLESPACE command needs a destination where to put the datafiles and the dump file to transport (TABLESPACE DESTINATION) and also an auxiliary destination (AUXILIARY DESTINATION). The latter seems to be mandatory, which is different from PDB PITR where the FRA is used by default.


RMAN> transport tablespace PDB1:USERS auxiliary destination '/var/tmp/AUX' tablespace destination '/var/tmp/TTS';

And then you will see everything RMAN does for you. I’ll show most of the output.

UNDO

Restoring a tablespace needs to apply redo and then roll back the transactions that were open at that PIT. This is why RMAN has to restore all tablespaces that may contain UNDO:

RMAN-05026: warning: presuming following set of tablespaces applies to specified point-in-time
 
List of tablespaces expected to have UNDO segments
Tablespace SYSTEM
Tablespace PDB1:SYSTEM
Tablespace UNDOTBS1
Tablespace PDB1:UNDOTBS1

I suppose that when the UNDO_TABLESPACE has changed in the meantime, RMAN cannot guess which tablespace covered the transactions at the requested PIT, but I have seen nothing in the TRANSPORT TABLESPACE syntax to specify it. That’s probably for a future post and/or SR.

Auxiliary instance

So RMAN creates an auxiliary instance with some specific parameters to be sure there’s no side effect on the source database (the RMAN target one).

Creating automatic instance, with SID='qnDA'
 
initialization parameters used for automatic instance:
db_name=CDB1
db_unique_name=qnDA_pitr_PDB1_CDB1
compatible=12.2.0
db_block_size=8192
db_files=200
diagnostic_dest=/u01/app/oracle
_system_trig_enabled=FALSE
sga_target=768M
processes=200
db_create_file_dest=/var/tmp/AUX
log_archive_dest_1='location=/var/tmp/AUX'
enable_pluggable_database=true
_clone_one_pdb_recovery=true
#No auxiliary parameter file used
 
 
starting up automatic instance CDB1
 
Oracle instance started
 
Total System Global Area 805306368 bytes
 
Fixed Size 8793056 bytes
Variable Size 234882080 bytes
Database Buffers 553648128 bytes
Redo Buffers 7983104 bytes
Automatic instance created

Restore

The goal is to transport the tablespace, so RMAN checks that the tablespace set is self-contained:

Running TRANSPORT_SET_CHECK on recovery set tablespaces
TRANSPORT_SET_CHECK completed successfully

and starts the restore of controlfile and datafiles (the CDB SYSTEM, SYSAUX, UNDO and the PDB SYSTEM, SYSAUX, UNDO and the tablespaces to transport)

contents of Memory Script:
{
# set requested point in time
set until scn 1836277;
# restore the controlfile
restore clone controlfile;
 
# mount the controlfile
sql clone 'alter database mount clone database';
 
# archive current online log
sql 'alter system archive log current';
}
executing Memory Script
 
executing command: SET until clause
 
Starting restore at 17-DEC-16
allocated channel: ORA_AUX_DISK_1
channel ORA_AUX_DISK_1: SID=253 device type=DISK
 
channel ORA_AUX_DISK_1: starting datafile backup set restore
channel ORA_AUX_DISK_1: restoring control file
channel ORA_AUX_DISK_1: reading from backup piece /u02/fast_recovery_area/CDB1A/autobackup/2016_12_17/o1_mf_s_930864638_d5c83gxl_.bkp
channel ORA_AUX_DISK_1: piece handle=/u02/fast_recovery_area/CDB1A/autobackup/2016_12_17/o1_mf_s_930864638_d5c83gxl_.bkp tag=TAG20161217T213038
channel ORA_AUX_DISK_1: restored backup piece 1
channel ORA_AUX_DISK_1: restore complete, elapsed time: 00:00:03
output file name=/var/tmp/AUX/CDB1A/controlfile/o1_mf_d5c88zp3_.ctl
Finished restore at 17-DEC-16
 
sql statement: alter database mount clone database
 
sql statement: alter system archive log current
 
contents of Memory Script:
{
# set requested point in time
set until scn 1836277;
# set destinations for recovery set and auxiliary set datafiles
set newname for clone datafile 1 to new;
set newname for clone datafile 9 to new;
set newname for clone datafile 4 to new;
set newname for clone datafile 11 to new;
set newname for clone datafile 3 to new;
set newname for clone datafile 10 to new;
set newname for clone tempfile 1 to new;
set newname for clone tempfile 3 to new;
set newname for datafile 12 to
"/var/tmp/TTS/users01.dbf";
# switch all tempfiles
switch clone tempfile all;
# restore the tablespaces in the recovery set and the auxiliary set
restore clone datafile 1, 9, 4, 11, 3, 10, 12;
 
switch clone datafile all;
}
executing Memory Script
 
executing command: SET until clause
 
executing command: SET NEWNAME
 
executing command: SET NEWNAME
 
executing command: SET NEWNAME
 
executing command: SET NEWNAME
 
executing command: SET NEWNAME
 
executing command: SET NEWNAME
 
executing command: SET NEWNAME
 
executing command: SET NEWNAME
 
executing command: SET NEWNAME
 
renamed tempfile 1 to /var/tmp/AUX/CDB1A/datafile/o1_mf_temp_%u_.tmp in control file
renamed tempfile 3 to /var/tmp/AUX/CDB1A/3ABD2FF082A634B5E053754EA8C022A9/datafile/o1_mf_temp_%u_.tmp in control file
 
Starting restore at 17-DEC-16
using channel ORA_AUX_DISK_1
 
channel ORA_AUX_DISK_1: starting datafile backup set restore
channel ORA_AUX_DISK_1: specifying datafile(s) to restore from backup set
channel ORA_AUX_DISK_1: restoring datafile 00001 to /var/tmp/AUX/CDB1A/datafile/o1_mf_system_%u_.dbf
channel ORA_AUX_DISK_1: restoring datafile 00004 to /var/tmp/AUX/CDB1A/datafile/o1_mf_undotbs1_%u_.dbf
channel ORA_AUX_DISK_1: restoring datafile 00003 to /var/tmp/AUX/CDB1A/datafile/o1_mf_sysaux_%u_.dbf
channel ORA_AUX_DISK_1: reading from backup piece /u02/fast_recovery_area/CDB1A/backupset/2016_12_17/o1_mf_nnndf_TAG20161217T213044_d5c83n81_.bkp
channel ORA_AUX_DISK_1: piece handle=/u02/fast_recovery_area/CDB1A/backupset/2016_12_17/o1_mf_nnndf_TAG20161217T213044_d5c83n81_.bkp tag=TAG20161217T213044
channel ORA_AUX_DISK_1: restored backup piece 1
channel ORA_AUX_DISK_1: restore complete, elapsed time: 00:01:35
channel ORA_AUX_DISK_1: starting datafile backup set restore
channel ORA_AUX_DISK_1: specifying datafile(s) to restore from backup set
channel ORA_AUX_DISK_1: restoring datafile 00009 to /var/tmp/AUX/CDB1A/3ABD2FF082A634B5E053754EA8C022A9/datafile/o1_mf_system_%u_.dbf
channel ORA_AUX_DISK_1: restoring datafile 00011 to /var/tmp/AUX/CDB1A/3ABD2FF082A634B5E053754EA8C022A9/datafile/o1_mf_undotbs1_%u_.dbf
channel ORA_AUX_DISK_1: restoring datafile 00010 to /var/tmp/AUX/CDB1A/3ABD2FF082A634B5E053754EA8C022A9/datafile/o1_mf_sysaux_%u_.dbf
channel ORA_AUX_DISK_1: restoring datafile 00012 to /var/tmp/TTS/users01.dbf
channel ORA_AUX_DISK_1: reading from backup piece /u02/fast_recovery_area/CDB1A/3ABD2FF082A634B5E053754EA8C022A9/backupset/2016_12_17/o1_mf_nnndf_TAG20161217T213044_d5c851hh_.bkp
channel ORA_AUX_DISK_1: piece handle=/u02/fast_recovery_area/CDB1A/3ABD2FF082A634B5E053754EA8C022A9/backupset/2016_12_17/o1_mf_nnndf_TAG20161217T213044_d5c851hh_.bkp tag=TAG20161217T213044
channel ORA_AUX_DISK_1: restored backup piece 1
channel ORA_AUX_DISK_1: restore complete, elapsed time: 00:01:25
Finished restore at 17-DEC-16
 
datafile 1 switched to datafile copy
input datafile copy RECID=11 STAMP=930865006 file name=/var/tmp/AUX/CDB1A/datafile/o1_mf_system_d5c8993k_.dbf
datafile 9 switched to datafile copy
input datafile copy RECID=12 STAMP=930865006 file name=/var/tmp/AUX/CDB1A/3ABD2FF082A634B5E053754EA8C022A9/datafile/o1_mf_system_d5c8d8ow_.dbf
datafile 4 switched to datafile copy
input datafile copy RECID=13 STAMP=930865006 file name=/var/tmp/AUX/CDB1A/datafile/o1_mf_undotbs1_d5c8998b_.dbf
datafile 11 switched to datafile copy
input datafile copy RECID=14 STAMP=930865006 file name=/var/tmp/AUX/CDB1A/3ABD2FF082A634B5E053754EA8C022A9/datafile/o1_mf_undotbs1_d5c8d8g6_.dbf
datafile 3 switched to datafile copy
input datafile copy RECID=15 STAMP=930865006 file name=/var/tmp/AUX/CDB1A/datafile/o1_mf_sysaux_d5c8996o_.dbf
datafile 10 switched to datafile copy
input datafile copy RECID=16 STAMP=930865006 file name=/var/tmp/AUX/CDB1A/3ABD2FF082A634B5E053754EA8C022A9/datafile/o1_mf_sysaux_d5c8d8o7_.dbf
datafile 12 switched to datafile copy
input datafile copy RECID=17 STAMP=930865006 file name=/var/tmp/TTS/users01.dbf

You noticed that SYSTEM, SYSAUX and UNDO are restored in the auxiliary location, but the tablespace to transport (USERS here) goes directly to its destination. If you transport it on the same server, you can avoid any further copying of it.

Recover

The recovery continues automatically to the PIT (which you can also specify with an UNTIL clause or a restore point).
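
For reference, a hedged sketch of pinning the point in time explicitly (reusing the SCN that appears in the memory script below) would be:

RMAN> transport tablespace PDB1:USERS
        auxiliary destination '/var/tmp/AUX'
        tablespace destination '/var/tmp/TTS'
        until scn 1836277;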


contents of Memory Script:
{
# set requested point in time
set until scn 1836277;
# online the datafiles restored or switched
sql clone "alter database datafile 1 online";
sql clone 'PDB1' "alter database datafile
9 online";
sql clone "alter database datafile 4 online";
sql clone 'PDB1' "alter database datafile
11 online";
sql clone "alter database datafile 3 online";
sql clone 'PDB1' "alter database datafile
10 online";
sql clone 'PDB1' "alter database datafile
12 online";
# recover and open resetlogs
recover clone database tablespace "PDB1":"USERS", "SYSTEM", "PDB1":"SYSTEM", "UNDOTBS1", "PDB1":"UNDOTBS1", "SYSAUX", "PDB1":"SYSAUX" delete archivelog;
alter clone database open resetlogs;
}
executing Memory Script
 
executing command: SET until clause
 
sql statement: alter database datafile 1 online
 
sql statement: alter database datafile 9 online
 
sql statement: alter database datafile 4 online
 
sql statement: alter database datafile 11 online
 
sql statement: alter database datafile 3 online
 
sql statement: alter database datafile 10 online
 
sql statement: alter database datafile 12 online
 
Starting recover at 17-DEC-16
using channel ORA_AUX_DISK_1
 
starting media recovery
 
archived log for thread 1 with sequence 30 is already on disk as file /u02/fast_recovery_area/CDB1A/archivelog/2016_12_17/o1_mf_1_30_d5c83ll5_.arc
archived log for thread 1 with sequence 31 is already on disk as file /u02/fast_recovery_area/CDB1A/archivelog/2016_12_17/o1_mf_1_31_d5c8783v_.arc
archived log file name=/u02/fast_recovery_area/CDB1A/archivelog/2016_12_17/o1_mf_1_30_d5c83ll5_.arc thread=1 sequence=30
archived log file name=/u02/fast_recovery_area/CDB1A/archivelog/2016_12_17/o1_mf_1_31_d5c8783v_.arc thread=1 sequence=31
media recovery complete, elapsed time: 00:00:02
Finished recover at 17-DEC-16
 
database opened
 
contents of Memory Script:
{
sql clone 'alter pluggable database PDB1 open';
}
executing Memory Script
 
sql statement: alter pluggable database PDB1 open

Export TTS

The restored tablespaces can be set read only, which was the goal.

contents of Memory Script:
{
# make read only the tablespace that will be exported
sql clone 'PDB1' 'alter tablespace
USERS read only';
# create directory for datapump export
sql clone 'PDB1' "create or replace directory
STREAMS_DIROBJ_DPDIR as ''
/var/tmp/TTS''";
}
executing Memory Script
 
sql statement: alter tablespace USERS read only

Now the export of metadata runs (the equivalent of a Data Pump export with TRANSPORT_TABLESPACES):


sql statement: create or replace directory STREAMS_DIROBJ_DPDIR as ''/var/tmp/TTS''
 
Performing export of metadata...
EXPDP> Starting "SYS"."TSPITR_EXP_qnDA_urDb":
 
EXPDP> Processing object type TRANSPORTABLE_EXPORT/PLUGTS_BLK
EXPDP> Processing object type TRANSPORTABLE_EXPORT/POST_INSTANCE/PLUGTS_BLK
EXPDP> Master table "SYS"."TSPITR_EXP_qnDA_urDb" successfully loaded/unloaded
EXPDP> ******************************************************************************
EXPDP> Dump file set for SYS.TSPITR_EXP_qnDA_urDb is:
EXPDP> /var/tmp/TTS/dmpfile.dmp
EXPDP> ******************************************************************************
EXPDP> Datafiles required for transportable tablespace USERS:
EXPDP> /var/tmp/TTS/users01.dbf
EXPDP> Job "SYS"."TSPITR_EXP_qnDA_urDb" successfully completed at Sat Dec 17 21:41:06 2016 elapsed 0 00:00:47
Export completed
 
Not performing table import after point-in-time recovery

The last message lets me think that the RMAN code is shared with the code that manages RECOVER TABLE.

Then RMAN lists the commands to run the import (also available in a generated script) and removes the auxiliary instance.

Cleanup

Not everything has been removed:
[oracle@VM117 blogs]$ du -ha /var/tmp/AUX
0 /var/tmp/AUX/CDB1A/controlfile
201M /var/tmp/AUX/CDB1A/onlinelog/o1_mf_51_d5c8k0oo_.log
201M /var/tmp/AUX/CDB1A/onlinelog/o1_mf_52_d5c8kcjp_.log
201M /var/tmp/AUX/CDB1A/onlinelog/o1_mf_53_d5c8kskz_.log
601M /var/tmp/AUX/CDB1A/onlinelog
0 /var/tmp/AUX/CDB1A/datafile
521M /var/tmp/AUX/CDB1A/3ABD2FF082A634B5E053754EA8C022A9/datafile/o1_mf_undo_1_d5c8m1nx_.dbf
521M /var/tmp/AUX/CDB1A/3ABD2FF082A634B5E053754EA8C022A9/datafile
521M /var/tmp/AUX/CDB1A/3ABD2FF082A634B5E053754EA8C022A9
1.1G /var/tmp/AUX/CDB1A
1.1G /var/tmp/AUX

Import TTS

In the destination you find the tablespace datafiles, the dump of metadata and a script that can be run to import it to the destination:

[oracle@VM117 blogs]$ du -ha /var/tmp/TTS
5.1M /var/tmp/TTS/users01.dbf
132K /var/tmp/TTS/dmpfile.dmp
4.0K /var/tmp/TTS/impscrpt.sql
5.2M /var/tmp/TTS

For this example, I import it on the same server, in a different pluggable database:


SQL> connect / as sysdba
Connected.
SQL> alter session set container=PDB2;
Session altered.

and simply run the script provided:

SQL> set echo on
 
SQL> @/var/tmp/TTS/impscrpt.sql
 
SQL> /*
SQL> The following command may be used to import the tablespaces.
SQL> Substitute values for <logon> and <directory>.
SQL>
SQL> impdp <logon> directory=<directory> dumpfile= 'dmpfile.dmp' transport_datafiles= /var/tmp/TTS/users01.dbf
SQL> */
SQL>
SQL> --
SQL> --
SQL> --
SQL> --
SQL> CREATE DIRECTORY STREAMS$DIROBJ$1 AS '/var/tmp/TTS/';
Directory created.
 
SQL> CREATE DIRECTORY STREAMS$DIROBJ$DPDIR AS '/var/tmp/TTS';
Directory created.
 
SQL> /* PL/SQL Script to import the exported tablespaces */
SQL> DECLARE
2 --
3 tbs_files dbms_streams_tablespace_adm.file_set;
4 cvt_files dbms_streams_tablespace_adm.file_set;
5
6 --
7 dump_file dbms_streams_tablespace_adm.file;
8 dp_job_name VARCHAR2(30) := NULL;
9
10 --
11 ts_names dbms_streams_tablespace_adm.tablespace_set;
12 BEGIN
13 --
14 dump_file.file_name := 'dmpfile.dmp';
15 dump_file.directory_object := 'STREAMS$DIROBJ$DPDIR';
16
17 --
18 tbs_files( 1).file_name := 'users01.dbf';
19 tbs_files( 1).directory_object := 'STREAMS$DIROBJ$1';
20
21 --
22 dbms_streams_tablespace_adm.attach_tablespaces(
23 datapump_job_name => dp_job_name,
24 dump_file => dump_file,
25 tablespace_files => tbs_files,
26 converted_files => cvt_files,
27 tablespace_names => ts_names);
28
29 --
30 IF ts_names IS NOT NULL AND ts_names.first IS NOT NULL THEN
31 FOR i IN ts_names.first .. ts_names.last LOOP
32 dbms_output.put_line('imported tablespace '|| ts_names(i));
33 END LOOP;
34 END IF;
35 END;
36 /
PL/SQL procedure successfully completed.
 
SQL>
SQL> --
SQL> DROP DIRECTORY STREAMS$DIROBJ$1;
Directory dropped.
 
SQL> DROP DIRECTORY STREAMS$DIROBJ$DPDIR;
Directory dropped.
 
SQL> --------------------------------------------------------------
SQL> -- End of sample PL/SQL script
SQL> --------------------------------------------------------------

Of course, you don’t need to use the provided script; you can run the import directly with IMPDP:

SQL> alter session set container=pdb2;
Session altered.
SQL> create directory tts as '/var/tmp/TTS';
Directory created.
SQL> host impdp pdbadmin/oracle@//localhost/PDB2 directory=TTS dumpfile='dmpfile.dmp' transport_datafiles=/var/tmp/TTS/users01.dbf

You may also use convert to transport to a different endianness.
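
A minimal sketch of such a conversion (assuming a big-endian target such as AIX; valid platform names come from V$TRANSPORTABLE_PLATFORM, and the output file name here is just an example):

RMAN> convert datafile '/var/tmp/TTS/users01.dbf'
        to platform 'AIX-Based Systems (64-bit)'
        format '/var/tmp/TTS/users01_aix.dbf';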

Local Undo

Note that if you run it on the current 12.2.0.1.0 cloud-first DBaaS, you will get an error when RMAN opens the PDB in the auxiliary instance because there’s a bug with local undo. Here is the relevant alert.log part:

PDB1(3):Opening pdb with no Resource Manager plan active
PDB1(3):CREATE SMALLFILE UNDO TABLESPACE undo_1 DATAFILE SIZE 188743680 AUTOEXTEND ON NEXT 5242880 MAXSIZE 34359721984 ONLINE
PDB1(3):Force tablespace UNDO_1 to be encrypted with AES128
2016-12-17T18:05:14.759732+00:00
PDB1(3):ORA-00060: deadlock resolved; details in file /u01/app/oracle/diag/rdbms/fqkn_pitr_pdb1_cdb1/fqkn/trace/fqkn_ora_26146.trc
PDB1(3):ORA-60 signalled during: CREATE SMALLFILE UNDO TABLESPACE undo_1 DATAFILE SIZE 188743680 AUTOEXTEND ON NEXT 5242880 MAXSIZE 34359721984 ONLINE...
PDB1(3):Automatic creation of undo tablespace failed with error 604 60
ORA-604 signalled during: alter pluggable database PDB1 open...

I did this demo with LOCAL UNDO OFF.

So what?

You can use Transportable Tablespaces from a database where you cannot set the tablespace read-only. The additional cost is recovering it from a backup, along with SYSTEM, SYSAUX and UNDO. But it is fully automated with only one RMAN command.

 

The article RMAN> TRANSPORT TABLESPACE appeared first on Blog dbi services.

Oracle 12cR2 : Partitioning improvements – multi-column list partitioning & read-only partitions

Fri, 2016-12-16 10:10

In my last blog post I presented auto-list partitioning, a new partitioning functionality coming with 12cR2.
In this one I will introduce two others: multi-column list partitioning and read-only partitions.

Multi-column list partitioning

Image: “Oracle Partitioning in Oracle Database 12c Release 2″ – Hermann Bär

With the first release of 12c it wasn’t possible to create list partitioned tables based on a multi-column partition key:
ORA-14304: List partitioning method expects a single partitioning column
But now you can easily implement this functionality:
SQL> CREATE TABLE cars(
car_make VARCHAR2(30),
car_model VARCHAR2(30)
)
PARTITION BY LIST (car_make, car_model)
(PARTITION P1 VALUES ('Ford','Focus'),
PARTITION P_DEFAULT VALUES (DEFAULT));

Table created.

Check partitions :
SQL> SELECT partition_name, high_value FROM dba_tab_partitions WHERE table_name = 'CARS';


PARTITION_NAME HIGH_VALUE
-------------------- ------------------------------
P1 ( 'Ford', 'Focus' )
P_DEFAULT DEFAULT


SQL> SELECT partitioning_type, partition_count, partitioning_key_count FROM dba_part_tables WHERE table_name = 'CARS';


PARTITION PARTITION_COUNT PARTITIONING_KEY_COUNT
--------- --------------- ----------------------
LIST 2 2

This functionality allows you to use up to 16 columns for the partition key and, as you can see in this example, it’s also possible to define a DEFAULT partition.
Multi-column list partitioning at the subpartition level is also permitted, as sketched below.
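
A minimal sketch of that, reusing the car columns from above in a hypothetical CAR_SALES table (range-partitioned by date, list-subpartitioned by make and model):

SQL> CREATE TABLE car_sales(
sale_date DATE,
car_make VARCHAR2(30),
car_model VARCHAR2(30)
)
PARTITION BY RANGE (sale_date)
SUBPARTITION BY LIST (car_make, car_model)
SUBPARTITION TEMPLATE
(SUBPARTITION SP1 VALUES ('Ford','Focus'),
SUBPARTITION SP_DEFAULT VALUES (DEFAULT))
(PARTITION P_2016 VALUES LESS THAN (TO_DATE('01-JAN-2017','dd-MON-yyyy')));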

Read-only partitions

Image: “Oracle Partitioning in Oracle Database 12c Release 2″ – Hermann Bär

We all know that Oracle is able to define a tablespace or a table as read-only. But did you know that with 12cR2 it’s now possible to define this attribute at the partition level?
CREATE TABLE sales(
sales_product VARCHAR2(30),
sales_date DATE
) READ WRITE
PARTITION BY RANGE (sales_date)
(
PARTITION P_2013 VALUES LESS THAN (TO_DATE('01-JAN-2014','dd-MON-yyyy')) READ ONLY,
PARTITION P_2014 VALUES LESS THAN (TO_DATE('01-JAN-2015','dd-MON-yyyy')) READ ONLY,
PARTITION P_2015 VALUES LESS THAN (TO_DATE('01-JAN-2016','dd-MON-yyyy')) READ ONLY,
PARTITION P_2016 VALUES LESS THAN (TO_DATE('01-JAN-2017','dd-MON-yyyy')));

Data insertion:
SQL> INSERT INTO sales VALUES ('SHIRT', '02-MAY-2013');
INSERT INTO sales VALUES ('SHIRT', '02-MAY-2013')
*
ERROR at line 1:
ORA-14466: Data in a read-only partition or subpartition cannot be modified.

SQL> INSERT INTO sales VALUES ('SHOES', '29-MAR-2016');


1 row created.

An easy-to-implement and reliable way to protect the data inside a table from changes. And if a partition has to become writable again later, it can be switched back, as sketched below.
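
A minimal sketch of that switch, assuming the SALES table above:

SQL> ALTER TABLE sales MODIFY PARTITION P_2013 READ WRITE;

Table altered.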

 

The article Oracle 12cR2 : Partitioning improvements – multi-column list partitioning & read-only partitions appeared first on Blog dbi services.

Dataguard Oracle 12.2 : Support for Multiple Automatic Failover Targets

Thu, 2016-12-15 15:41

In a previous blog, we talked about Fast-Start Failover with the MaxProtection mode.
With Oracle 12.2 Oracle Data Guard now supports multiple failover targets in a fast-start failover configuration.
Previous functionality allowed for only a single fast-start failover target. If the failover target was unable to meet the requirements for fast-start failover at the time of primary failure, then an automatic failover would not occur.
Designating multiple failover targets significantly improves the likelihood that there will always be a standby suitable for automatic failover when needed.
Multiple failover targets increase high availability by making an automatic failover more likely to occur if there is a primary outage.
Indeed, FastStartFailoverTarget can now take a single site as its value:

DGMGRL> edit database 'ORCL_SITE' set property FastStartFailoverTarget='ORCL_SITE1';
Property "faststartfailovertarget" updated
DGMGRL>


DGMGRL> enable fast_start failover;
Enabled.
DGMGRL> show fast_start failover;
Fast-Start Failover: ENABLED
Threshold: 30 seconds
Target: ORCL_SITE1
Observer: standserver1.localdomain
Lag Limit: 30 seconds (not in use)
Shutdown Primary: TRUE
Auto-reinstate: TRUE
Observer Reconnect: (none)
Observer Override: FALSE

It can also take two sites as its value (priority follows the listed order):

DGMGRL> disable Fast_start Failover;
Disabled.
DGMGRL> edit database 'ORCL_SITE' set property FastStartFailoverTarget='ORCL_SITE1,ORCL_SITE2';
Property "faststartfailovertarget" updated
DGMGRL>


DGMGRL> enable fast_start failover;
Enabled.
DGMGRL> show fast_start failover;
Fast-Start Failover: ENABLED
Threshold: 30 seconds
Target: ORCL_SITE1
Candidate Targets: ORCL_SITE1,ORCL_SITE2
Observer: standserver1.localdomain
Lag Limit: 30 seconds (not in use)
Shutdown Primary: TRUE
Auto-reinstate: TRUE
Observer Reconnect: (none)
Observer Override: FALSE

It can also take the keyword ANY as its value (priority defined by Oracle):

DGMGRL> edit database 'ORCL_SITE' set property FastStartFailoverTarget='ANY';
Property "faststartfailovertarget" updated
DGMGRL> edit database 'ORCL_SITE1' set property FastStartFailoverTarget='ANY';
Property "faststartfailovertarget" updated
DGMGRL> edit database 'ORCL_SITE2' set property FastStartFailoverTarget='ANY';
Property "faststartfailovertarget" updated

With the value set to ANY, we can see that there is a failover target and also candidate targets. If a failover occurs, ORCL_SITE1 comes first and, if it is not available, ORCL_SITE2 will be the target.

DGMGRL> enable fast_start failover;
Enabled.
DGMGRL> show fast_start failover;
Fast-Start Failover: ENABLED
Threshold: 30 seconds
Target: ORCL_SITE1
Candidate Targets: ORCL_SITE1,ORCL_SITE2
Observer: standserver1.localdomain
Lag Limit: 30 seconds (not in use)
Shutdown Primary: TRUE
Auto-reinstate: TRUE
Observer Reconnect: (none)
Observer Override: FALSE

From the observer we can confirm that target is ORCL_SITE1

[W000 12/15 14:40:25.12] This observer is no longer registered with the configuration.
[W000 12/15 14:40:25.20] FSFO target standby is ORCL_SITE1
[W000 12/15 14:40:27.66] Observer trace level is set to USER
[W000 12/15 14:40:27.66] Try to connect to the primary.
[W000 12/15 14:40:27.66] Try to connect to the primary ORCL_SITE.
[W000 12/15 14:40:27.67] The standby ORCL_SITE1 is ready to be a FSFO target
[W000 12/15 14:40:29.67] Connection to the primary restored!

Now let’s simulate the case where ORCL_SITE1 is no longer available. For this, let’s simply shut down the ORCL_SITE1 database; the failover target should then change to ORCL_SITE2.

SQL> shutdown abort
ORACLE instance shut down.
SQL>


DGMGRL> show fast_start failover;
Fast-Start Failover: ENABLED
Threshold: 30 seconds
Target: ORCL_SITE2
Candidate Targets: ORCL_SITE1,ORCL_SITE2
Observer: standserver1.localdomain
Lag Limit: 30 seconds (not in use)
Shutdown Primary: TRUE
Auto-reinstate: TRUE
Observer Reconnect: (none)
Observer Override: FALSE

From the observer we can see following messages that confirm that the failover target was switched to ORCL_SITE2

[W000 12/15 14:54:52.00] Permission granted to the primary database for target switch.
[W000 12/15 14:54:54.03] Standby database has changed to ORCL_SITE2.
[W000 12/15 14:54:54.04] Try to connect to the primary.
[W000 12/15 14:54:54.04] Try to connect to the primary ORCL_SITE.
[W000 12/15 14:54:55.48] The primary database returned to SYNC/NOT LAGGING state with the standby database ORCL_SITE2.
[W000 12/15 14:54:55.48] Connection to the primary restored!
[W000 12/15 14:54:58.48] Disconnecting from database ORCL_SITE.
[W000 12/15 14:55:04.52] The standby ORCL_SITE2 is ready to be a FSFO target

Conclusion
We have seen that we can now have multiple targets for Fast-Start Failover and how to implement them.

 

The article Dataguard Oracle 12.2 : Support for Multiple Automatic Failover Targets appeared first on Blog dbi services.

Oracle 12cR2 – Howto setup Active DataGuard with Temporary Tablespace Groups

Thu, 2016-12-15 06:01

Temporary Tablespace Groups have existed for quite a while now (since 10gR2), but for whatever reason they are not used very often. Personally, I think they are quite cool: very easy to set up, and especially useful in big environments with a lot of parallel processing. But this blog will not be about Temporary Tablespace Groups themselves. They are already explained in the 12.2 Admin Guide.

https://docs.oracle.com/database/122/ADMIN/managing-tablespaces.htm#ADMIN-GUID-55A6AE10-6875-4B73-9A5C-CB4965AD5AFE

For my Active DataGuard environment, I would like to use Temporary Tablespace Groups in combination with Temporary Undo, which is enabled by default as soon as you activate the ADG feature. And of course, I would like to do it online, without shutting down either the primary or the ADG standby.
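
As a quick check of the parameter behind Temporary Undo (TEMP_UNDO_ENABLED; note that on an Active Data Guard standby temporary undo is used regardless of the parameter value):

SQL> show parameter temp_undo_enabled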

Is this possible? We will see. I start by creating a Temporary Tablespace Group called DBITEMP
with 3 bigfile temp tablespaces DBITEMP1/DBITEMP2/DBITEMP3.

-- On Primary

SQL> CREATE BIGFILE TEMPORARY TABLESPACE DBITEMP1 TEMPFILE SIZE 32M AUTOEXTEND ON MAXSIZE 12T TABLESPACE GROUP DBITEMP;

Tablespace created.

SQL> CREATE BIGFILE TEMPORARY TABLESPACE DBITEMP2 TEMPFILE SIZE 32M AUTOEXTEND ON MAXSIZE 12T TABLESPACE GROUP DBITEMP;

Tablespace created.

SQL> CREATE BIGFILE TEMPORARY TABLESPACE DBITEMP3 TEMPFILE SIZE 32M AUTOEXTEND ON MAXSIZE 12T TABLESPACE GROUP DBITEMP;

Tablespace created.

SQL> ALTER DATABASE DEFAULT TEMPORARY TABLESPACE DBITEMP;

Database altered.

SQL> alter system archive log current;

System altered.

SQL> select TABLESPACE_NAME, CONTENTS, BIGFILE from dba_tablespaces where CONTENTS = 'TEMPORARY' and TABLESPACE_NAME like '%DBI%';

TABLESPACE_NAME                CONTENTS              BIG
------------------------------ --------------------- ---
DBITEMP1                       TEMPORARY             YES
DBITEMP2                       TEMPORARY             YES
DBITEMP3                       TEMPORARY             YES

SQL> select * from DBA_TABLESPACE_GROUPS;

GROUP_NAME                     TABLESPACE_NAME
------------------------------ ------------------------------
DBITEMP                        DBITEMP1
DBITEMP                        DBITEMP2
DBITEMP                        DBITEMP3

SQL> select NAME, BYTES from v$tempfile where name like '%dbitemp%';

NAME                                                                         BYTES
-------------------------------------------------------------------- -------------
/u02/oradata/DBIT122_SITE1/datafile/o1_mf_dbitemp1_d52c66g5_.tmp          33554432
/u02/oradata/DBIT122_SITE1/datafile/o1_mf_dbitemp2_d52c6yqv_.tmp          33554432
/u02/oradata/DBIT122_SITE1/datafile/o1_mf_dbitemp3_d52c7954_.tmp          33554432

SQL> !ls -l /u02/oradata/DBIT122_SITE1/datafile/o1_mf_dbitemp1_d52c66g5_.tmp
-rw-r----- 1 oracle oinstall 33562624 Dec 14 12:28 /u02/oradata/DBIT122_SITE1/datafile/o1_mf_dbitemp1_d52c66g5_.tmp

SQL> !ls -l /u02/oradata/DBIT122_SITE1/datafile/o1_mf_dbitemp2_d52c6yqv_.tmp
-rw-r----- 1 oracle oinstall 33562624 Dec 14 12:29 /u02/oradata/DBIT122_SITE1/datafile/o1_mf_dbitemp2_d52c6yqv_.tmp

SQL> !ls -l /u02/oradata/DBIT122_SITE1/datafile/o1_mf_dbitemp3_d52c7954_.tmp
-rw-r----- 1 oracle oinstall 33562624 Dec 14 12:28 /u02/oradata/DBIT122_SITE1/datafile/o1_mf_dbitemp3_d52c7954_.tmp

SQL> SELECT PROPERTY_NAME, PROPERTY_VALUE FROM database_properties
  2  WHERE property_name='DEFAULT_TEMP_TABLESPACE';

PROPERTY_NAME            PROPERTY_VALUE
------------------------ ----------------------
DEFAULT_TEMP_TABLESPACE  DBITEMP


DGMGRL> show database 'DBIT122_SITE1';

Database - DBIT122_SITE1

  Role:               PRIMARY
  Intended State:     TRANSPORT-ON
  Instance(s):
    DBIT122

Database Status:
SUCCESS

DGMGRL> show database 'DBIT122_SITE2';

Database - DBIT122_SITE2

  Role:               PHYSICAL STANDBY
  Intended State:     APPLY-ON
  Transport Lag:      0 seconds (computed 1 second ago)
  Apply Lag:          0 seconds (computed 1 second ago)
  Average Apply Rate: 0 Byte/s
  Real Time Query:    ON
  Instance(s):
    DBIT122

Database Status:
SUCCESS

On the primary database, it looks good. Everything was set up correctly, and the Temporary Tablespace Group is ready to be used. Now let’s check what we have on the Standby … almost nothing.

My temporary tablespaces and the group are there; however, the tempfiles are missing. So … not really useful at this stage.

SQL> select TABLESPACE_NAME, CONTENTS, BIGFILE from dba_tablespaces where CONTENTS = 'TEMPORARY' and TABLESPACE_NAME like '%DBI%';

TABLESPACE_NAME                CONTENTS              BIG
------------------------------ --------------------- ---
DBITEMP1                       TEMPORARY             YES
DBITEMP2                       TEMPORARY             YES
DBITEMP3                       TEMPORARY             YES

SQL> select * from DBA_TABLESPACE_GROUPS;

GROUP_NAME                     TABLESPACE_NAME
------------------------------ ------------------------------
DBITEMP                        DBITEMP1
DBITEMP                        DBITEMP2
DBITEMP                        DBITEMP3


SQL> SELECT PROPERTY_NAME, PROPERTY_VALUE FROM database_properties
  2  WHERE property_name='DEFAULT_TEMP_TABLESPACE';

PROPERTY_NAME            PROPERTY_VALUE
------------------------ ----------------------
DEFAULT_TEMP_TABLESPACE  DBITEMP

SQL> select NAME, BYTES from v$tempfile where name like '%dbitemp%';

no rows selected

OK. Let’s take a look at the alert.log to see if Oracle tells us something. And yes, indeed, a big WARNING message is posted into the alert.log saying that the temporary tablespaces DBITEMP1/2/3 are empty.

*********************************************************************
WARNING: The following temporary tablespaces contain no files.
         This condition can occur when a backup controlfile has
         been restored.  It may be necessary to add files to these
         tablespaces.  That can be done using the SQL statement:

         ALTER TABLESPACE <tablespace_name> ADD TEMPFILE

         Alternatively, if these temporary tablespaces are no longer
         needed, then they can be dropped.
           Empty temporary tablespace: DBITEMP1
           Empty temporary tablespace: DBITEMP2
           Empty temporary tablespace: DBITEMP3
*********************************************************************

Ok. With 12.2 it is like it always was: you have to add the tempfiles manually.

SQL> ALTER TABLESPACE DBITEMP1 ADD TEMPFILE SIZE 32M AUTOEXTEND ON MAXSIZE 12T;

Tablespace altered.

SQL> ALTER TABLESPACE DBITEMP2 ADD TEMPFILE SIZE 32M AUTOEXTEND ON MAXSIZE 12T;

Tablespace altered.

SQL> ALTER TABLESPACE DBITEMP3 ADD TEMPFILE SIZE 32M AUTOEXTEND ON MAXSIZE 12T;

Tablespace altered.

Now I have everything in place. What about dropping a temporary tablespace out of the tablespace group? First, I take it out of the temporary tablespace group. This change is immediately reflected on the Standby.

-- Primary

SQL> alter tablespace DBITEMP3 TABLESPACE GROUP '';

Tablespace altered.

SQL> select * from DBA_TABLESPACE_GROUPS;

GROUP_NAME                     TABLESPACE_NAME
------------------------------ ------------------------------
DBITEMP                        DBITEMP1
DBITEMP                        DBITEMP2

-- Standby 

SQL> select * from DBA_TABLESPACE_GROUPS;

GROUP_NAME                     TABLESPACE_NAME
------------------------------ ------------------------------
DBITEMP                        DBITEMP1
DBITEMP                        DBITEMP2

And now you can drop it.

-- Primary

SQL> drop tablespace DBITEMP3 including contents and datafiles;

Tablespace dropped.

alert.log
...
drop tablespace DBITEMP3 including contents and datafiles
2016-12-14T16:35:25.567597+01:00
Deleted Oracle managed file /u02/oradata/DBIT122_SITE1/datafile/o1_mf_dbitemp3_d52c7954_.tmp
Completed: drop tablespace DBITEMP3 including contents and datafiles
...

-- Standby

alert.log
...
Deleted Oracle managed file /u02/oradata/DBIT122_SITE2/datafile/o1_mf_dbitemp3_d52dv3dv_.tmp
Recovery deleting tempfile #4:'/u02/oradata/DBIT122_SITE2/datafile/o1_mf_dbitemp3_d52dv3dv_.tmp'
Recovery dropped temporary tablespace 'DBITEMP3'
...

Ok. Dropping works fine, and this change is also immediately reflected. What about resizing a temporary file? Is this change also immediately reflected on the standby? Due to OMF, we have different names for the tempfiles, so should we use an “alter database tempfile <NAME> resize” or an “alter database tempfile <NUMBER> resize” to make it work? For regular datafiles it doesn't matter: you can use both ways and Oracle does it correctly.

-- Primary

SQL> select file#, name, bytes from v$tempfile where name like '%dbitemp%';

     FILE# NAME                                                                      BYTES
---------- -------------------------------------------------------------------- ----------
         2 /u02/oradata/DBIT122_SITE1/datafile/o1_mf_dbitemp1_d52c66g5_.tmp       33554432
         3 /u02/oradata/DBIT122_SITE1/datafile/o1_mf_dbitemp2_d52c6yqv_.tmp       33554432

		 
SQL> alter database tempfile '/u02/oradata/DBIT122_SITE1/datafile/o1_mf_dbitemp1_d52c66g5_.tmp' resize 512M;

Database altered.

SQL> select file#, name, bytes from v$tempfile where name like '%dbitemp%';

     FILE# NAME                                                                      BYTES
---------- -------------------------------------------------------------------- ----------
         2 /u02/oradata/DBIT122_SITE1/datafile/o1_mf_dbitemp1_d52c66g5_.tmp      536870912
         3 /u02/oradata/DBIT122_SITE1/datafile/o1_mf_dbitemp2_d52c6yqv_.tmp       33554432


-- Standby

SQL> select file#, name, bytes from v$tempfile where name like '%dbitemp%';

     FILE# NAME                                                                      BYTES
---------- -------------------------------------------------------------------- ----------
         2 /u02/oradata/DBIT122_SITE2/datafile/o1_mf_dbitemp1_d52dthwg_.tmp       33554432
         3 /u02/oradata/DBIT122_SITE2/datafile/o1_mf_dbitemp2_d52dtx7f_.tmp       33554432

Ok. Resizing by name is not reflected on the standby. But what about resizing the tempfile by using the file number instead of the name?

-- Primary
 
SQL> alter database tempfile 3 resize 768M;

Database altered.

SQL> select file#, name, bytes from v$tempfile where name like '%dbitemp%';

     FILE# NAME                                                                      BYTES
---------- -------------------------------------------------------------------- ----------
         2 /u02/oradata/DBIT122_SITE1/datafile/o1_mf_dbitemp1_d52c66g5_.tmp      536870912
         3 /u02/oradata/DBIT122_SITE1/datafile/o1_mf_dbitemp2_d52c6yqv_.tmp      805306368

		 
-- Standby

SQL> select file#, name, bytes from v$tempfile where name like '%dbitemp%';

     FILE# NAME                                                                      BYTES
---------- -------------------------------------------------------------------- ----------
         2 /u02/oradata/DBIT122_SITE2/datafile/o1_mf_dbitemp1_d52dthwg_.tmp       33554432
         3 /u02/oradata/DBIT122_SITE2/datafile/o1_mf_dbitemp2_d52dtx7f_.tmp       33554432

As you can see, it doesn't matter: resize operations on tempfiles are not replicated to the standby.
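
If you want the standby tempfiles to keep the same size, you therefore have to repeat the resize manually on the standby, using the standby's own OMF file names. A minimal sketch, based on the standby file names above (tempfile operations are possible on the standby because they do not generate any redo):

-- Standby

SQL> alter database tempfile '/u02/oradata/DBIT122_SITE2/datafile/o1_mf_dbitemp1_d52dthwg_.tmp' resize 512M;

Database altered.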

Conclusion

Temporary Tablespace Groups can be used with Active DataGuard, and of course, it can all be done online. However, when you initially create them, or when you resize them, some manual work has to be done on the standby. And yes, Temporary Tablespace Groups can be used in combination with the Temporary UNDO feature on the Active DataGuard Standby.
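
As a side note on that last point, temporary undo is controlled by the TEMP_UNDO_ENABLED parameter. On an Active DataGuard standby it is used regardless of the setting, while on the primary you can switch it on yourself. A minimal sketch (not part of the original test case):

-- Primary: enable temporary undo (the Active DataGuard standby uses it anyway)

SQL> alter system set temp_undo_enabled = true;

System altered.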

The article Oracle 12cR2 – Howto setup Active DataGuard with Temporary Tablespace Groups appeared first on Blog dbi services.

Dataguard Oracle 12.2 : Fast-Start Failover with Maximum Protection

Wed, 2016-12-14 17:04

With Oracle 12.1, the one requirement to configure Fast-Start Failover was to ensure the broker configuration is operating in either Maximum Availability mode or Maximum Performance mode.
With 12.2, Fast-Start Failover can now also be configured with Maximum Protection mode.
Below is our broker configuration:

DGMGRL> show configuration;

Configuration - ORCL_DR

  Protection Mode: MaxPerformance
  Members:
  ORCL_SITE  - Primary database
    ORCL_SITE1 - Physical standby database
    ORCL_SITE2 - Physical standby database

Fast-Start Failover: DISABLED

Configuration Status:
SUCCESS   (status updated 48 seconds ago)

Let's configure the Maximum Protection mode.
We first have to set the redo transport mode to SYNC on all members, since Maximum Protection requires synchronous redo transport.

DGMGRL> edit database 'ORCL_SITE1' set property LogXptMode='SYNC';
Property "logxptmode" updated
DGMGRL> edit database 'ORCL_SITE' set property LogXptMode='SYNC';
Property "logxptmode" updated
DGMGRL> edit database 'ORCL_SITE2' set property LogXptMode='SYNC';
Property "logxptmode" updated

Then we have to enable Maximum Availability first, before enabling Maximum Protection.

DGMGRL> edit configuration set protection mode as maxavailability;
Succeeded.
DGMGRL> edit configuration set protection mode as maxprotection;
Succeeded.
DGMGRL>

And now let's enable Fast-Start Failover.

DGMGRL> enable fast_start failover;
Error: ORA-16693: requirements not met for enabling fast-start failover
Failed.
DGMGRL>

Oh, what happened?
Remember: before enabling Fast-Start Failover, we have to enable flashback on the databases and also set the FastStartFailoverTarget database property.
Let's enable flashback on the databases.
For the primary:

SQL> alter database flashback on;
Database altered.

For the standby databases:

DGMGRL> edit database 'ORCL_SITE1' set state='APPLY-OFF';
Succeeded.
SQL> alter database flashback on;
Database altered.
SQL>
DGMGRL> edit database 'ORCL_SITE1' set state='APPLY-ON';
Succeeded.
DGMGRL>


DGMGRL> edit database 'ORCL_SITE2' set state='APPLY-OFF';
Succeeded.
DGMGRL>
SQL> alter database flashback on;
Database altered.
DGMGRL> edit database 'ORCL_SITE2' set state='APPLY-ON';
Succeeded.
DGMGRL>

Let's set the FastStartFailoverTarget property for the primary database:

DGMGRL> edit database 'ORCL_SITE' set property FastStartFailoverTarget='ORCL_SITE2';
Property "faststartfailovertarget" updated

And now we can enable Fast-Start Failover with Maximum Protection:

DGMGRL> enable fast_start failover;
Enabled.
DGMGRL>

The observer must be started, otherwise you will get a warning about the observer in the configuration status.
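
In case it is not running yet, the observer would be started from a third host with DGMGRL, for example like this (a sketch: the credentials and the TNS alias are placeholders, and the session stays attached to the observer unless you start it in the background):

$ dgmgrl sys/manager@ORCL_SITE
DGMGRL> start observer

With the observer running, let's check our configuration after the change: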

DGMGRL> show configuration;

Configuration - ORCL_DR

  Protection Mode: MaxProtection
  Members:
  ORCL_SITE  - Primary database
    ORCL_SITE2 - (*) Physical standby database
    ORCL_SITE1 - Physical standby database

Fast-Start Failover: ENABLED

Configuration Status:
SUCCESS   (status updated 1 second ago)
DGMGRL>

Just note that at least two standby databases must be available; otherwise, after a failover the protection mode will be downgraded to MAXPERFORMANCE.
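
To verify that the configuration stays in a state where a fast-start failover is actually possible, you can query V$DATABASE on the primary (a quick sketch; FS_FAILOVER_STATUS should report SYNCHRONIZED when everything is fine):

SQL> select fs_failover_status, fs_failover_current_target from v$database;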


The article Dataguard Oracle 12.2 : Fast-Start Failover with Maximum Protection appeared first on Blog dbi services.

Oracle Database 12c Release 2 Multitenant (Oracle Press)

Wed, 2016-12-14 16:00

Here it is. The multitenant book is out for sale…

One year ago, at DOAG2015, Vit Spinka came to me with this idea: with Anton Els they planned to write a book on multitenant and asked me to be a co-author. I was already quite busy at that time and my short-term plan was to prepare and pass the OCM 12c exam. But this book idea was something great that had to be started quickly. At that time, we expected 12cR2 to be out in June 2016, and then the book should be ready for Oracle Open World. So no time to waste: propose the idea to Oracle Press, find a reviewer and start as soon as possible.

For reviewers, I was very happy that Deiby Gomez accepted to do the technical review. And Mike Donovan volunteered to do the English review. I think he didn't imagine how hard it can be to take non-native English speakers' writing, with very limited vocabulary, and turn it into something that makes sense to read. It's an amazing chance to have the language review done by someone with deep technical knowledge. This ensures that the improved style does not change the meaning. Having that language review is also a good way to unify the style of what is written by three different authors. I bet you cannot guess who has written what. In addition to that, Oracle Press asked Arup Nanda to do an additional review, which was great because Arup has experience with book writing.

So we worked on the 12.2 beta, tested everything (there are a lot of code listings in the book), filed bugs, and clarified everything. We had good interactions with support engineers and product managers. The result is a book on multitenant which covers all the administration tasks you can do on a 12c database.

It was an amazing adventure from the get-go. You know people for their skills, blogs, presentations and discussions at events. And then you start to work with them on a common thing – the book – and remotely – we're all in different timezones. How can you be sure that you can work together? Actually, it was easy and went smoothly. We listed the chapters and each of us marked which chapters he preferred. And that was it: in one or two e-mail exchanges the distribution of tasks was done, with everybody happy. We had a very short schedule: one chapter to deliver every 2 or 3 weeks. I was happy with what I wrote and equally happy with what I read from Vit and Anton. The reviews from Deiby, Mike and Arup all added precision and clarity. Incredible teamwork without the need for long discussions. Besides the hard work and the delightful result, working with this team was an amazing human adventure.

Oracle Database 12c Release 2 Multitenant (Oracle Press)

Master the Powerful Multitenant Features of Oracle Database 12c
• Build high-performance multitenant Oracle databases
• Create single-tenant, multitenant, and application containers
• Establish network connections and manage services
• Handle security using authentication, authorization, and encryption
• Back up and restore your mission-critical data
• Work with point-in-time recovery and Oracle Flashback
• Move data and replicate and clone databases
• Work with Oracle’s Resource Manager and Data Guard


The article Oracle Database 12c Release 2 Multitenant (Oracle Press) appeared first on Blog dbi services.

Oracle 12cR2 – Is the SYSDG Administrative Privilege enough for doing Oracle Data Guard Operations?

Wed, 2016-12-14 01:33

For security reasons, you may want your DataGuard operations to be done with a different UNIX user and a different Oracle user that is not as highly privileged as SYSDBA. This is exactly where the SYSDG administrative privilege for Oracle Data Guard operations comes into play.

The SYSDG privilege is quite powerful. It allows you to work with the Broker (DGMGRL) command line interface and, besides that, it enables the following operations:

  • STARTUP
  • SHUTDOWN
  • ALTER DATABASE
  • ALTER SESSION
  • ALTER SYSTEM
  • CREATE RESTORE POINT (including GUARANTEED restore points)
  • CREATE SESSION
  • DROP RESTORE POINT (including GUARANTEED restore points)
  • FLASHBACK DATABASE
  • SELECT ANY DICTIONARY
  • SELECT
    • X$ tables (that is, the fixed tables)
    • V$ and GV$ views (that is, the dynamic performance views)
    • APPQOSSYS.WLM_CLASSIFIER_PLAN
  • DELETE
    • APPQOSSYS.WLM_CLASSIFIER_PLAN
  • EXECUTE
    • SYS.DBMS_DRS

In addition, the SYSDG privilege enables you to connect to the database even if it is not open.
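
For example, a connection with this privilege looks like the following (a sketch; scott/tiger is the user we will create below). Note that the session then runs as the SYSDG user, not as SYS:

$ sqlplus scott/tiger as sysdg

SQL> show user
USER is "SYSDG"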

Ok. Let’s give it a try. I want to give the user scott all the privileges he needs to do the DataGuard operational tasks. So … I create a UNIX user scott and a database user scott with the SYSDG privilege.

[root@dbidg02 ~]# useradd scott
[root@dbidg02 ~]# usermod -a -G sysdg scott
[root@dbidg02 ~]# cat /etc/group |grep sysdg
sysdg:x:54324:oracle,scott

SQL> create user scott identified by tiger;

User created.

SQL> grant sysdg to scott;

Grant succeeded.

SQL> col username format a22
SQL> select USERNAME, SYSDBA, SYSOPER, SYSBACKUP, SYSDG, SYSKM from V$PWFILE_USERS where USERNAME = 'SCOTT';

USERNAME               SYSDB SYSOP SYSBA SYSDG SYSKM
---------------------- ----- ----- ----- ----- -----
SCOTT                  FALSE FALSE FALSE TRUE  FALSE
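
Scott is now in the password file with the SYSDG privilege. As a quick illustration, here is a sketch of driving the broker with it (the TNS alias and the commands are examples, not taken from the original test):

$ dgmgrl
DGMGRL> connect scott/tiger@DBIT122_SITE1 as sysdg
DGMGRL> convert database 'DBIT122_SITE1' to snapshot standby;
DGMGRL> convert database 'DBIT122_SITE1' to physical standby;
DGMGRL> switchover to 'DBIT122_SITE1';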

So far so good. Everything works. Scott can do switchovers, convert the physical standby to a snapshot database, create restore points, and much more. But what happens when an error pops up? You need to take a look at the most important log files in a DataGuard environment, which are the alert log and the broker log file.

If you do a “show database verbose”, you will find at the end of the output the locations of the log files, which is quite useful from my point of view. This is new with Oracle 12cR2.

DGMGRL> show database verbose 'DBIT122_SITE1';

Database - DBIT122_SITE1

  Role:               PHYSICAL STANDBY
  Intended State:     APPLY-ON
  Transport Lag:      0 seconds (computed 1 second ago)
  Apply Lag:          0 seconds (computed 1 second ago)
  Average Apply Rate: 3.00 KByte/s
  Active Apply Rate:  152.00 KByte/s
  Maximum Apply Rate: 152.00 KByte/s
  Real Time Query:    OFF
  Instance(s):
    DBIT122
  ...
  ...
The Broker shows you the log file locations:

    Alert log               : /u01/app/oracle/diag/rdbms/dbit122_site1/DBIT122/trace/alert_DBIT122.log
    Data Guard Broker log   : /u01/app/oracle/diag/rdbms/dbit122_site1/DBIT122/trace/drcDBIT122.log

But unfortunately, the scott user can't read those files, because there are no read permissions for others and scott is not part of the oinstall group.

[scott@dbidg01 ~]$ tail -40f /u01/app/oracle/diag/rdbms/dbit122_site1/DBIT122/trace/alert_DBIT122.log
tail: cannot open ‘/u01/app/oracle/diag/rdbms/dbit122_site1/DBIT122/trace/alert_DBIT122.log’ for reading: Permission denied
tail: no files remaining

[scott@dbidg01 ~]$ tail -40f /u01/app/oracle/diag/rdbms/dbit122_site1/DBIT122/trace/drcDBIT122.log
tail: cannot open ‘/u01/app/oracle/diag/rdbms/dbit122_site1/DBIT122/trace/drcDBIT122.log’ for reading: Permission denied
tail: no files remaining

[scott@dbidg01 trace]$ ls -l drcDBIT122.log
-rw-r----- 1 oracle oinstall 37787 Dec 13 10:36 drcDBIT122.log
[scott@dbidg01 trace]$ ls -l alert_DBIT122.log
-rw-r----- 1 oracle oinstall 221096 Dec 13 12:04 alert_DBIT122.log

So what possibilities do we have to overcome this issue?

1. We can add user scott to the oinstall group, but then we haven't gained much security.
2. We can set the parameter "_trace_files_public"=true, but when this one is enabled, all Oracle trace files become world readable, not just the alert and broker logs.
3. We can configure XFS access control lists, so that user scott gets only the permissions he needs.

For security reasons, I decided to go for the last one.

oracle@dbidg01:/u01/app/oracle/diag/rdbms/dbit122_site1/DBIT122/trace/ [DBIT122] id
uid=54321(oracle) gid=54321(oinstall) groups=54321(oinstall),54322(dba),54323(sysbkp),54324(sysdg),54325(syskm),54326(oper)
oracle@dbidg01:/u01/app/oracle/diag/rdbms/dbit122_site1/DBIT122/trace/ [DBIT122] ls -l alert_DBIT122.log
-rw-r----- 1 oracle oinstall 312894 Dec 13 13:52 alert_DBIT122.log
oracle@dbidg01:/u01/app/oracle/diag/rdbms/dbit122_site1/DBIT122/trace/ [DBIT122] ls -l drcDBIT122.log
-rw-r----- 1 oracle oinstall 56145 Dec 13 13:47 drcDBIT122.log

oracle@dbidg01:/u01/app/oracle/diag/rdbms/dbit122_site1/DBIT122/trace/ [DBIT122] setfacl -m u:scott:r alert_DBIT122.log
oracle@dbidg01:/u01/app/oracle/diag/rdbms/dbit122_site1/DBIT122/trace/ [DBIT122] setfacl -m u:scott:r drcDBIT122.log


oracle@dbidg01:/u01/app/oracle/diag/rdbms/dbit122_site1/DBIT122/trace/ [DBIT122] ls -l alert_DBIT122.log
-rw-r-----+ 1 oracle oinstall 312894 Dec 13 13:52 alert_DBIT122.log
oracle@dbidg01:/u01/app/oracle/diag/rdbms/dbit122_site1/DBIT122/trace/ [DBIT122] ls -l drcDBIT122.log
-rw-r-----+ 1 oracle oinstall 56145 Dec 13 13:47 drcDBIT122.log

oracle@dbidg01:/u01/app/oracle/diag/rdbms/dbit122_site1/DBIT122/trace/ [DBIT122] getfacl alert_DBIT122.log
# file: alert_DBIT122.log
# owner: oracle
# group: oinstall
user::rw-
user:scott:r--
group::r--
mask::r--
other::---

oracle@dbidg01:/u01/app/oracle/diag/rdbms/dbit122_site1/DBIT122/trace/ [DBIT122] getfacl drcDBIT122.log
# file: drcDBIT122.log
# owner: oracle
# group: oinstall
user::rw-
user:scott:r--
group::r--
mask::r--
other::---

Cool. Now the scott user is really able to do a lot of DataGuard operation tasks, including some debugging.

[scott@dbidg01 ~]$ cat /u01/app/oracle/diag/rdbms/dbit122_site1/DBIT122/trace/alert_DBIT122.log | grep 'MAXIMUM AVAILABILITY mode' | tail -1
Primary database is in MAXIMUM AVAILABILITY mode

[scott@dbidg01 ~]$ cat /u01/app/oracle/diag/rdbms/dbit122_site1/DBIT122/trace/drcDBIT122.log |grep "Protection Mode" | tail -1
      Protection Mode:            Maximum Availability

Conclusion

Using XFS ACLs is quite cool if you want to give a specific user permissions on a file, but don't want to add him to a group or make all files world readable. But be careful to configure the same ACLs on all other standby nodes as well, and make sure that you use a backup solution which supports ACLs.

For example, using 'cp' or 'cp -p' makes a huge difference. In one case you lose your ACLs in the copy; in the other case you preserve them. The (+) sign at the end of the file permissions shows the difference.

oracle@dbidg01:/u01/app/oracle/diag/rdbms/dbit122_site1/DBIT122/trace/ [DBIT122] cp alert_DBIT122.log alert_DBIT122.log.a
oracle@dbidg01:/u01/app/oracle/diag/rdbms/dbit122_site1/DBIT122/trace/ [DBIT122] cp -p alert_DBIT122.log alert_DBIT122.log.b
oracle@dbidg01:/u01/app/oracle/diag/rdbms/dbit122_site1/DBIT122/trace/ [DBIT122] ls -l alert_DBIT122.log.a
-rw-r----- 1 oracle oinstall 312894 Dec 13 14:25 alert_DBIT122.log.a
oracle@dbidg01:/u01/app/oracle/diag/rdbms/dbit122_site1/DBIT122/trace/ [DBIT122] ls -l alert_DBIT122.log.b
-rw-r-----+ 1 oracle oinstall 312894 Dec 13 13:52 alert_DBIT122.log.b

The article Oracle 12cR2 – Is the SYSDG Administrative Privilege enough for doing Oracle Data Guard Operations? appeared first on Blog dbi services.
