
Hemant K Chitale

I am an Oracle Database Specialist in Singapore.
get an rss feed of this blog at http://hemantoracledba.blogspot.com/feeds/posts/default?alt=rss
follow me on twitter : @HemantKChitale

Statistics on this blog

Sun, 2014-12-07 08:40
I began this blog on 28-Dec-2006.  For the 8 years 2007 to 2014, I have averaged 56 posts per year.  Unfortunately, this year, 2014 has produced the fewest posts -- 40 including this one.  This includes the "series" on Grid / ASM / RAC and the series on StatsPack / AWR.

2011 was my most prodigious year -- 99 posts.

There were 8,176 page views in July 2007.  To date, there have been more than 930 thousand page views on this blog.  By month, the peak count has been March 2012 -- 24,346 page views.

My largest viewer counts are from USA, India, UK, Germany and France.  www.google.com has been the largest source of traffic to this blog.




Categories: DBA Blogs

StatsPack and AWR Reports -- Bits and Pieces -- 4

Tue, 2014-12-02 08:05
This is the fourth post in a series.

Post 1 is here.
Post 2 is here.
Post 3 is here.

Buffer Cache Hit Ratios

Many novice DBAs may use Hit Ratios as indicators of performance.  However, these can be misleading or incomplete.

Here are two examples :

Extract A: 9i StatsPack

Instance Efficiency Percentages (Target 100%)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Buffer  Hit   %:   99.06

It would seem that, with only 0.94% of reads being physical reads, the database is performing optimally.  So, the DBA doesn't need to look any further.
Or so it seems.
If he spends some time reading the report, he then comes across this :
Top 5 Timed Events
~~~~~~~~~~~~~~~~~~
                                                                   % Total
Event                                               Waits    Time (s) Ela Time
-------------------------------------------- ------------ ----------- --------
db file sequential read                           837,955       4,107    67.36
CPU time                                                        1,018    16.70
db file scattered read                             43,281         549     9.00


                                                              Avg
                                                Total Wait   wait    Waits
Event                          Waits  Timeouts    Time (s)   (ms)     /txn
------------------------- ---------- --------- ----------- ------ --------
db file sequential read      837,955         0       4,107      5    403.3
db file scattered read        43,281         0         549     13     20.8
Physical I/O is a significant proportion (76%) of total database time.  88% of the physical I/O is single-block  reads ("db file sequential read").  This is where the DBA must identify that tuning *is* required.
Considering the single block access pattern it is likely that a significant proportion are index blocks as well.  Increasing the buffer cache might help cache the index blocks.
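The two percentages quoted above can be checked with a few lines of arithmetic (a quick sketch in Python, using the figures from the extracts):

```python
# Times (in seconds) from the Top 5 Timed Events table above.
seq_read_s = 4107      # db file sequential read
scat_read_s = 549      # db file scattered read

# % of total elapsed DB time, straight from the "% Total Ela Time" column:
physical_io_pct = 67.36 + 9.00            # ~76% of database time is physical I/O

# Share of physical I/O time spent on single-block reads:
single_block_pct = 100 * seq_read_s / (seq_read_s + scat_read_s)   # ~88%

print(f"Physical I/O: {physical_io_pct:.1f}% of DB time; "
      f"single-block reads: {single_block_pct:.1f}% of physical I/O")
```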


Extract B : 10.2 AWR
Instance Efficiency Percentages (Target 100%)
Buffer Nowait %:              99.98    Redo NoWait %:      100.00
Buffer Hit %:                 96.43    In-memory Sort %:    99.99
Library Hit %:                97.16    Soft Parse %:        98.16
Execute to Parse %:           25.09    Latch Hit %:         99.85
Parse CPU to Parse Elapsd %:  89.96    % Non-Parse CPU:     96.00
The Buffer Hit Ratio is very good.  Does that mean that I/O is not an issue ?
Look again at the same report :
Top 5 Timed Events
Event                          Waits       Time(s)  Avg Wait(ms)  % Total Call Time  Wait Class
CPU time                                   147,593                     42.3
db file sequential read        31,776,678   87,659        3            25.1          User I/O
db file scattered read         19,568,220   79,142        4            22.7          User I/O
RMAN backup & recovery I/O      1,579,314   37,650       24            10.8          System I/O
read by other session           3,076,110   14,216        5             4.1          User I/O
User I/O is actually significant.  The SQLs with the highest logical I/O need to be reviewed for tuning.
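To see why a high hit ratio says nothing about absolute I/O volume, consider a small illustrative calculation (the logical-read count below is invented for illustration; only the 96.43% ratio comes from the report):

```python
# A minimal sketch: the buffer cache hit ratio hides absolute I/O volume.
logical_reads = 1_000_000_000      # illustrative figure, not from the report
hit_ratio = 0.9643                 # the "very good" 96.43% from Extract B

physical_reads = round(logical_reads * (1 - hit_ratio))
print(f"Hit ratio {hit_ratio:.2%} still means {physical_reads:,} physical reads")
# A 96%+ hit ratio can therefore coexist with massive "db file ... read" wait time.
```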


Categories: DBA Blogs

StatsPack and AWR Reports -- Bits and Pieces -- 3

Sat, 2014-11-08 09:48
This is the third post in this series.
Post 1 is here
Post 2 is here

Note : Some figures / details may be slightly changed / masked to hide the real source.

Identifying Unusual Events / Behaviours / Applications

Here I find a few "unusual" events and mark them out.

Extract A : 11.2 AWR

             Snap Id    Snap Time         Sessions  Cursors/Session
Begin Snap:    ....     .... 03:00:57        107          .7
End Snap:      ....     .... 07:00:07        114          .9
Elapsed:     239.17 (mins)
DB Time:      22.61 (mins)
Top 5 Timed Foreground Events

Event                          Waits    Time(s)  Avg wait (ms)  % DB time  Wait Class
DB CPU                                    1,332                   98.16
SQL*Net more data to client     49,701       20        0           1.50    Network
SQL*Net more data from client  213,915        5        0           0.34    Network
db file scattered read           1,159        1        1           0.08    User I/O
db file sequential read          7,547        1        0           0.07    User I/O
The two "SQL*Net more data" wait events are the unusual ones here.  The time on "SQL*Net more data to/from client" is negligible, isn't it ?  So, should I be concerned ?  Over a 4-hour period, only 20 seconds were spent on "SQL*Net more data to client".  Time on "SQL*Net more data from client" is much lower, at a total of only 5 seconds.  So "Time based" tuning would ignore these two waits.

Foreground Wait Events

Event                          Waits    %Time-outs  Total Wait Time (s)  Avg wait (ms)  Waits/txn  % DB time
SQL*Net more data to client     49,701       0                 20              0          37.78       1.50
SQL*Net more data from client  213,915       0                  5              0         162.62       0.34
db file scattered read           1,158       0                  1              1           0.88       0.08
db file sequential read          7,550       0                  1              0           5.74       0.07
SQL*Net message to client      652,102       0                  0              0         495.89       0.04
SQL*Net message from client    652,102       0            183,327            281         495.89

Note that Oracle treats "SQL*Net message from client" as an idle wait, so the 183,327 seconds of wait time do NOT appear in the Top 5 Timed Foreground Events list.

I would draw attention to the high number of "more data from client" waits and their correlation with the "message from client" waits.  Either extremely large SQL statements or PL/SQL blocks are being submitted very frequently, or row inserts with very large array sizes (number of rows per insert) are being received.  In this case, further investigation reveals an ETL loader that does bulk inserts of a number of rows per array.  If tuning is needed, tuning the SDU may help.
Similarly, the "more data to client" waits indicate that large data sets are being returned.  The number of columns and rows per "send" is high.
Instance Activity Stats (from the same AWR)

Statistic                                Total              per Second      per Trans
Requests to/from client                  654,738                 45.63         497.90
SQL*Net roundtrips to/from client        654,740                 45.63         497.91
bytes received via SQL*Net from client   1,793,072,463      124,950.54   1,363,553.20
bytes sent via SQL*Net to client         552,048,247         38,469.57     419,808.55
logical read bytes from cache            762,514,227,200  53,135,924.61 579,858,727.91
physical read total bytes                8,772,479,488       611,311.62   6,671,087.06
physical write total bytes               25,334,243,328    1,765,420.76  19,265,584.28
redo size                                6,373,204,848       444,117.79   4,846,543.61
1.793 billion bytes received in 654K SQL*Net trips is approximately 2,740 bytes per trip, received at 45 messages per second.  Given that it is still only ~2,740 bytes per trip, possibly the array size could also be tuned along with the SDU and TDU.
So, this is an AWR that doesn't call for tuning but reveals information about how the database is being used.  A large number of rows (with a large number of columns) is being inserted and retrieved in each call.  The Performance Analyst needs to be aware of the nature of the "application" --- here it is not OLTP users but an ETL job that is the "application".  Although the database had more than a hundred sessions, a very small number of sessions (possibly 4 ?) were active doing ETL and checking the status of ETL tables during this window.
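The per-trip arithmetic can be reproduced directly from the Instance Activity Stats (a quick sketch in Python):

```python
# Figures from the Instance Activity Stats and snapshot header above.
bytes_received = 1_793_072_463
roundtrips = 654_740
elapsed_seconds = 239.17 * 60      # Elapsed: 239.17 (mins)

bytes_per_trip = bytes_received / roundtrips        # ~2,740 bytes per trip
trips_per_second = roundtrips / elapsed_seconds     # ~45.6 trips per second

print(f"{bytes_per_trip:,.0f} bytes per trip at {trips_per_second:.1f} trips/second")
```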

    Would you care to analyze the other statistics I've listed -- bytes read and bytes written ?



    Extract B : 10.2 AWR
             Snap Id    Snap Time         Sessions  Cursors/Session
Begin Snap:    ....     .... 00:00:07        228          4.5
End Snap:      ....     .... 23:00:06        232          4.3
Elapsed:     1,379.97 (mins)
DB Time:     11,543.24 (mins)
I know.  It is really bad and most likely meaningless to get an AWR report for a 24-hour range.  (I'll not go into the details of why the AWR is for 24 hours -- save that I did NOT ask for a 24-hour AWR report.)
    Top 5 Timed Events
Event                          Waits       Time(s)  Avg Wait(ms)  % Total Call Time  Wait Class
CPU time                                   258,101                     37.3
db file sequential read        62,150,655  208,148        3            30.1          User I/O
db file scattered read         28,242,529  141,638        5            20.4          User I/O
RMAN backup & recovery I/O      1,597,421   37,137       23             5.4          System I/O
enq: TX - row lock contention      22,276   34,942    1,569             5.0          Application

The RMAN Backup load is expected.  (Why ?  Given a 24-hour report, I expect RMAN to have run at least once during the day.)
For performance tuning, I would look at the "db file ... read" events and identify and analyze SQL statements and the schema.
What is the "unusual" event here ?  It is "enq: TX - row lock contention".  Over a period of 1,380 minutes, there were 22,276 Row-Lock Contention waits.  Actually, this application does not carry the same load throughout the 23 hours.  Most likely, it had load for 15 hours only.  So, we had 22,276 Row-Lock Contention waits over 15 hours.  That translates to 1,485 waits per hour, or one Row-Lock Contention wait every 2.5 seconds.  Now, that is a very high frequency.  Either users are locking each other out for a very short while (1.569 seconds per wait on average), or there are one or more jobs that run at a very high frequency and update a common "reference" table.  I won't reveal my findings here, but analysis of the SQL statements indicates what the "problem" is.
Now, should "Time based performance tuning" be worried about the 5% of time lost on these waits ?  Probably not.  But they do indicate something peculiar in the design of this system.  There are fewer than 250 user sessions in this OLTP database, but there are one or more jobs locking themselves every 2.5 seconds -- so there is some point of serialisation occurring.  Are those jobs also accounting for CPU time or 'db file read' time ?  That needs further analysis.
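The frequency arithmetic above can be sketched as follows (the 15-hour load window is the assumption stated in the text):

```python
# Row-lock wait frequency, assuming the waits were concentrated in 15 hours.
waits = 22_276
total_wait_seconds = 34_942
load_hours = 15

waits_per_hour = waits / load_hours                # ~1,485 per hour
seconds_between = 3600 / waits_per_hour            # one wait every ~2.4 seconds
avg_wait = total_wait_seconds / waits              # ~1.57 s each (1,569 ms avg)

print(f"{waits_per_hour:,.0f} waits/hour; one every {seconds_between:.1f}s; "
      f"avg {avg_wait:.2f}s per wait")
```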
    Both these cases show how a Performance Analyst needs to know how the database is being used.  What sort of jobs are submitted, besides OLTP users ?


    Categories: DBA Blogs

    StatsPack and AWR Reports -- Bits and Pieces -- 2

    Sat, 2014-11-01 08:52
    This is the second post in a series on reading StatsPack and AWR reports.
    (The first is available here)


    Comparing Reports :

    Here are two 9.2 StatsPack extracts from one database:

    Extract A  : 9.2 StatsPack
    Load Profile
    ~~~~~~~~~~~~ Per Second Per Transaction
    --------------- ---------------
    Redo size: 56,031.63 3,084.68
    Logical reads: 68,286.24 3,759.32
    Block changes: 314.88 17.33
    Physical reads: 842.92 46.40
    Physical writes: 134.76 7.42
    User calls: 271.32 14.94
    Parses: 146.46 8.06
    Hard parses: 7.37 0.41
    Sorts: 93.83 5.17
    Logons: 0.33 0.02
    Executes: 296.70 16.33
    Transactions: 18.16

    Extract B : 9.2 StatsPack
    Load Profile
    ~~~~~~~~~~~~ Per Second Per Transaction
    --------------- ---------------
    Redo size: 89,615.76 2,960.48
    Logical reads: 210,302.81 6,947.42
    Block changes: 541.83 17.90
    Physical reads: 1,465.04 48.40
    Physical writes: 161.68 5.34
    User calls: 213.82 7.06
    Parses: 125.28 4.14
    Hard parses: 6.13 0.20
    Sorts: 104.31 3.45
    Logons: 0.35 0.01
    Executes: 664.81 21.96
    Transactions: 30.27

    Extract B shows a higher volume of activity -- Transactions per second, Redo Size per second, Logical Reads per second.
Actually, these are both from the same database, but for two different time-windows on different dates.  The fact that the time-windows aren't the same makes the comparison difficult.  Extract A is for an 11-hour window on day 1 and Extract B is for a 6-hour window on day 6.  You have to be very careful when comparing StatsPack or AWR reports, even from the same database, if they don't cover comparable time-windows with similar loads.  The first report includes non-peak hours; the second is for peak hours only.  The level of activity averaged over 11 hours that include non-peak hours isn't strictly comparable with a 6-hour peak window.

    Extract C : 10.2 AWR
    Load Profile
                      Per Second  Per Transaction
Redo size:            520,776.15        50,948.36
Logical reads:        353,525.71        34,585.98
Block changes:          1,854.93           181.47
Physical reads:        14,285.23         1,397.55
Physical writes:          295.84            28.94
User calls:             1,265.14           123.77
Parses:                   822.64            80.48
Hard parses:               15.40             1.51
Sorts:                    168.09            16.44
Logons:                     0.16             0.02
Executes:               1,040.31           101.78
Transactions:              10.22
Extract D : 10.2 AWR
Load Profile
                      Per Second  Per Transaction
Redo size:            517,862.01        54,681.39
Logical reads:        288,341.09        30,446.12
Block changes:          1,879.27           198.43
Physical reads:        12,820.96         1,353.77
Physical writes:          323.90            34.20
User calls:             1,115.78           117.82
Parses:                   719.39            75.96
Hard parses:               17.28             1.82
Sorts:                     95.74            10.11
Logons:                     0.15             0.02
Executes:                 935.33            98.76
Transactions:               9.47
    Extracts C and D are from the same database for the same time-window on 2 consecutive workdays.  They are comparable.  A difference of 10% for some statistic may not be significant.  Understand that every business, every activity, every system will have some "normal" fluctuations day-to-day.
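One way to compare two comparable reports systematically is to diff the per-second rates and flag only differences beyond the expected day-to-day fluctuation (a rough sketch; the 10% threshold is the rule of thumb from the text, and only a few statistics are included):

```python
# Per-second rates from Extracts C and D above.
extract_c = {"Redo size": 520_776.15, "Logical reads": 353_525.71,
             "Physical reads": 14_285.23, "Sorts": 168.09}
extract_d = {"Redo size": 517_862.01, "Logical reads": 288_341.09,
             "Physical reads": 12_820.96, "Sorts": 95.74}

flagged = []
for stat, c_val in extract_c.items():
    pct_diff = 100 * abs(c_val - extract_d[stat]) / c_val
    if pct_diff > 10:          # beyond normal day-to-day fluctuation
        flagged.append((stat, round(pct_diff, 1)))
    print(f"{stat:15s} {pct_diff:5.1f}%")

print("Worth a second look:", flagged)
```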




    Categories: DBA Blogs

    StatsPack and AWR Reports -- Bits and Pieces -- 1

    Wed, 2014-10-22 04:58
    I am planning to put up a few posts on snippets from StatsPack and AWR reports.  This is my first post.
    Note : Some figures / details may be slightly changed / masked to hide the real source.

    Logical I/O and Change rates :
    1.  From a 9.2 StatsPack Report:
    Cache Sizes (end)
    ~~~~~~~~~~~~~~~~~
    Buffer Cache: Std Block Size: 4K
    Shared Pool Size: Log Buffer:

    Load Profile
    ~~~~~~~~~~~~ Per Second
    ---------------
    Redo size: 56,031.63
    Logical reads: 68,286.24
    Block changes: 314.88
    Physical reads: 842.92
    Physical writes: 134.76

With a 4KB Block Size, 68,286.24 Logical Reads per second translates to slightly over 266 MB/second.  Logical I/O is CPU-bound.  Database activity is Read-Intensive, with a high rate of Reads relative to Writes.

    2.  From an 11.2 AWR Report :
Cache Sizes
                  Begin   End
Buffer Cache:                    Std Block Size:  16K
Shared Pool Size:                Log Buffer:

Load Profile
                  Per Second
DB Time(s):
DB CPU(s):
Redo size:       1,593,612.1
Logical reads:      51,872.5
Block changes:       4,212.4
Physical reads:         63.8
Physical writes:       133.1

With a 16KB Block Size, 51,872.5 Logical Reads per second translates to slightly over 810 MB/second.  This consumes CPU cycles.  However, the number of Block Changes is noticeably high in this environment.  This is also reflected in the high Redo rate -- slightly over 5,471 MB/hour (Note : "Redo size" is in Bytes).
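The MB/second and MB/hour conversions in the two examples above work out as follows (a quick sketch; 1 MB = 1,048,576 bytes):

```python
def logical_read_mb_per_sec(reads_per_sec: float, block_bytes: int) -> float:
    """Convert a logical-read rate to MB/second for a given block size."""
    return reads_per_sec * block_bytes / (1024 * 1024)

mb_4k  = logical_read_mb_per_sec(68_286.24, 4 * 1024)    # 9i  example: ~266.7 MB/s
mb_16k = logical_read_mb_per_sec(51_872.5, 16 * 1024)    # 11g example: ~810.5 MB/s

# Redo: 1,593,612.1 bytes/second -> MB per hour.
redo_mb_per_hour = 1_593_612.1 * 3600 / (1024 * 1024)    # ~5,471 MB/hour

print(f"{mb_4k:.1f} MB/s, {mb_16k:.1f} MB/s, redo {redo_mb_per_hour:,.0f} MB/hour")
```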


    CPU Consumption :
    1.  From a 9.2 StatsPack Report :
Statistic                                      Total     per Second    per Trans
--------------------------------- ------------------ -------------- ------------
CPU used by this session                                       37.5          2.1
CPU used when call started                                     37.6          2.1

This indicates 0.375 seconds of CPU usage per second -- i.e. approximately 37.5% of 1 CPU (let's take this as an older non-multicore architecture).  If the server has 4 CPUs, CPU consumption is 9.4%.

    2.  From an 11.2 AWR Report :
Instance Activity Stats
• Ordered by statistic name

Statistic                         Total     per Second    per Trans
... deleted rows ....
CPU used by this session                         46.85
CPU used when call started                       46.27
... deleted rows ....
This indicates 0.468 seconds of CPU usage per second -- i.e. approximately 46.8% of 1 core.  This is also reflected in the Load Profile section :
Load Profile
             Per Second  Per Transaction  Per Exec  Per Call
DB Time(s):
DB CPU(s):          0.5              0.1      0.00      0.00

How many CPUs does this machine have ?  AWR reports this :

Host Name  Platform          CPUs  Cores  Sockets  Memory (GB)
           Linux x86 64-bit    16     16        2

That means we are using less than half of 1 of 16 cores !  This translates to CPU consumption of 3.125%.  The server has too many CPU cores !
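The CPU arithmetic used in both examples can be sketched as follows (the "CPU used ..." statistics are reported in centiseconds):

```python
def host_cpu_pct(centisec_per_sec: float, cpu_count: int) -> float:
    """Fraction of the whole host's CPU capacity in use, as a percentage."""
    cores_used = centisec_per_sec / 100      # e.g. 37.5 cs/s -> 0.375 of one CPU
    return 100 * cores_used / cpu_count

print(host_cpu_pct(37.5, 4))     # 9i example: 9.375 (~9.4% of a 4-CPU server)
print(host_cpu_pct(50.0, 16))    # 11g example: DB CPU 0.5/s on 16 cores -> 3.125
```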
    Categories: DBA Blogs

    Bandwidth and Latency

    Sat, 2014-10-18 08:40
Here is, verbatim, an article I posted on LinkedIn yesterday [for other posts on LinkedIn, view my LinkedIn profile] :

Imagine an 8-lane highway.  Now imagine a 4-lane highway.  Which has the greater bandwidth ?

Imagine your organisation sends its employees on a weekend "retreat" by bus.  You have the choice of two locations, one that is 200 kilometres away and the other 80 kilometres away.  Assume that buses travel at a constant speed of 80kmph.  Which resort will your employees get to faster ?

The first question is about bandwidth.  The second is about latency.  (Why should I assume a fixed speed for the buses ?  Because I can assume a fixed speed at which electrons transfer over a wire or photons over a light channel.)

Expand the question further.  What if the organisation needs to send 32 employees in a 40-seater bus ?  Does it matter that the bus can travel on an 8-lane highway versus a 4-lane highway (assuming minimal other traffic on the highways at that time) ?

Too often, naive "architects" do not differentiate between the two.  If my organisation needs to configure a standby (DR) location for its key databases and has a choice of two locations with varying types of network services, it should consider *both* bandwidth and latency.  If the volume of redo is 1000 MBytes per minute and this, factoring overheads for packetizing the "data", translates to 167 Mbits per second, should I just go ahead and buy bandwidth of 200 Mbits per second ?  If the two sites have two different network services providers offering different bandwidths, should I simply locate at the site with the greater bandwidth ?  What if the time it takes to synchronously write my data to site "A" is 4ms and the time to site "B" is 8ms ?  Should I not factor in the latency ?  (I am assuming that the "write to disk" speed of hardware at either site is the same -- i.e. the hardware is the same.)  I can then add the complications of network routers and switches that add to the latency.

Software configurations, flow-control mechanisms, QoS definitions and hardware configuration can also impact bandwidth and latency in different ways.

Now, extend this to data transfers ("output" or "results") from a database server to an application server or end-user.  If the existing link is 100Mbps and is upgraded to 1Gbps, the time to "report" 100 rows is unlikely to change, as this time is a function of the latency.  However, if the number of concurrent users grows from 10 to 500, the bandwidth requirement may increase and yet each user may still have the same "wait" time to see his results (assuming that there are no server hardware constraints on returning results for 500 users).

On the flip side, consider ETL servers loading data into a database.  Latency is as important as bandwidth.  An ETL scheme that does "row-by-row" loads relies on latency, not bandwidth.  Increasing bandwidth doesn't help such a scheme.

Think about the two.
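The distinction can be put into a toy formula: total transfer time is roughly round trips × latency plus payload size over bandwidth.  A minimal sketch (all figures illustrative):

```python
def transfer_seconds(payload_bytes: float, bandwidth_bps: float,
                     latency_s: float, round_trips: int = 1) -> float:
    """Toy model: per-trip latency plus serialisation time on the link."""
    return round_trips * latency_s + payload_bytes * 8 / bandwidth_bps

payload = 10 * 1024   # ~100 small rows

# Upgrading 100 Mbps -> 1 Gbps barely helps a small, latency-dominated result set:
t_100m = transfer_seconds(payload, 100e6, 0.004)
t_1g   = transfer_seconds(payload, 1e9, 0.004)

# Row-by-row ETL: 10,000 single-row round trips is almost pure latency:
t_etl = transfer_seconds(payload, 1e9, 0.004, round_trips=10_000)

print(f"{t_100m*1000:.2f} ms vs {t_1g*1000:.2f} ms; row-by-row ETL: {t_etl:.1f} s")
```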
    Categories: DBA Blogs

    11g Adaptive Cursor Sharing --- does it work only for SELECT statements ? Using the BIND_AWARE Hint for DML

    Sat, 2014-10-04 08:52
    Test run in 11.2.0.2

    UPDATE 07-Oct-14 :  I have been able to get the DML statement also to demonstrate Adaptive Cursor Sharing with the "BIND_AWARE" hint as suggested by Stefan Koehler and Dominic Brooks.

    Some of you may be familiar with Adaptive Cursor Sharing.

This is an 11g improvement over the "bind peek once and execute repeatedly without evaluating the true cost of execution" behaviour that we see in 10g.  Thus, if the predicate column is skewed and the bind value is changed, 10g does not "re-peek" and re-evaluate the execution plan.  11g doesn't "re-peek" at the first execution with a new bind, but if it finds the true cardinality returned by the execution at significant variance, it decides to "re-peek" at a subsequent execution.  This behaviour is indicated by the new attributes "IS_BIND_SENSITIVE" and "IS_BIND_AWARE" for the SQL cursor.

If a column is highly skewed, as indicated by the presence of a Histogram, the Optimizer, when parsing an SQL with a bind against that column as a predicate, marks the SQL as BIND_SENSITIVE.  If two executions with two different bind values return very different counts of rows for the predicate, the SQL is marked BIND_AWARE.  The Optimizer "re-peeks" the bind and generates a new Child Cursor that is marked as BIND_AWARE.
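Conceptually, the transitions described above can be sketched as a small state machine (a Python analogy only, NOT Oracle's actual implementation; the 10x variance threshold is invented for illustration):

```python
class CursorACS:
    """Toy model of the BIND_SENSITIVE -> BIND_AWARE transition."""
    def __init__(self, column_has_histogram: bool):
        self.is_bind_sensitive = column_has_histogram   # skew known at parse time
        self.is_bind_aware = False
        self.last_rows = None

    def execute(self, actual_rows: int) -> str:
        if self.is_bind_aware:
            # A bind-aware cursor re-peeks and picks a plan for this cardinality.
            return "re-peek and choose plan"
        if (self.is_bind_sensitive and self.last_rows and
                max(actual_rows, self.last_rows) >
                10 * min(actual_rows, self.last_rows)):
            # Significant cardinality variance: mark for a new BIND_AWARE child.
            self.is_bind_aware = True
        self.last_rows = actual_rows
        return "reuse existing plan"

c = CursorACS(column_has_histogram=True)
print(c.execute(55))      # first execution: peeked plan
print(c.execute(31165))   # same plan reused, but the variance is noted
print(c.execute(31165))   # only now is a new bind-aware plan chosen
```

This mirrors the demo below: the second execution (with 'SYS') still reuses the 55-row plan, and only the third execution produces the new BIND_AWARE child cursor.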

    Here is a demo.


    SQL> -- create and populate table
    SQL> drop table demo_ACS purge;

    Table dropped.

    SQL>
    SQL> create table demo_ACS
    2 as
    3 select * from dba_objects
    4 where 1=2
    5 /

    Table created.

    SQL>
    SQL> -- populate the table
    SQL> insert /*+ APPEND */ into demo_ACS
    2 select * from dba_objects
    3 /

    75043 rows created.

    SQL>
    SQL> -- create index on single column
    SQL> create index demo_ACS_ndx
    2 on demo_ACS (owner) nologging
    3 /

    Index created.

    SQL>
    SQL> select count(distinct(owner))
    2 from demo_ACS
    3 /

    COUNT(DISTINCT(OWNER))
    ----------------------
    42

    SQL>
    SQL> select owner, count(*)
    2 from demo_ACS
    3 where owner in ('HEMANT','SYS')
    4 group by owner
    5 /

    OWNER COUNT(*)
    -------- ----------
    HEMANT 55
    SYS 31165

    SQL>
    SQL> -- create a histogram on the OWNER column
    SQL> exec dbms_stats.gather_table_stats('','DEMO_ACS',estimate_percent=>100,method_opt=>'FOR COLUMNS OWNER SIZE 250');

    PL/SQL procedure successfully completed.

    SQL> select column_name, histogram, num_distinct, num_buckets
    2 from user_tab_columns
    3 where table_name = 'DEMO_ACS'
    4 and column_name = 'OWNER'
    5 /

    COLUMN_NAME HISTOGRAM NUM_DISTINCT NUM_BUCKETS
    ------------------------------ --------------- ------------ -----------
    OWNER FREQUENCY 42 42

    SQL>

    So, I now have a table that has very different row counts for 'HEMANT' and 'SYS'. The data is skewed. The Execution Plan for queries on 'HEMANT' would not be optimal for queries on 'SYS'.

    Let's see a query executing for 'HEMANT'.

    SQL> -- define bind variable
    SQL> variable target_owner varchar2(30);
    SQL>
    SQL> -- setup first SQL for 'HEMANT'
    SQL> exec :target_owner := 'HEMANT';

    PL/SQL procedure successfully completed.

    SQL>
    SQL> -- run SQL
    SQL> select owner, object_name
    2 from demo_ACS
    3 where owner = :target_owner
    4 /

    OWNER OBJECT_NAME
    -------- ------------------------------
    .....
    .....

    55 rows selected.

    SQL>
    SQL> -- get execution plan
    SQL> select * from table(dbms_xplan.display_cursor);

    PLAN_TABLE_OUTPUT
    ------------------------------------------------------------------------------------------------------------------------------------
    SQL_ID 1820xq3ggh6p6, child number 0
    -------------------------------------
    select owner, object_name from demo_ACS where owner = :target_owner

    Plan hash value: 805812326

    --------------------------------------------------------------------------------------------
    | Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
    --------------------------------------------------------------------------------------------
    | 0 | SELECT STATEMENT | | | | 3 (100)| |
    | 1 | TABLE ACCESS BY INDEX ROWID| DEMO_ACS | 55 | 3960 | 3 (0)| 00:00:01 |
    |* 2 | INDEX RANGE SCAN | DEMO_ACS_NDX | 55 | | 1 (0)| 00:00:01 |
    --------------------------------------------------------------------------------------------

    Predicate Information (identified by operation id):
    ---------------------------------------------------

    2 - access("OWNER"=:TARGET_OWNER)


    19 rows selected.

    SQL>
    SQL> -- get SQL query info
    SQL> select sql_id, child_number, plan_hash_value, is_bind_sensitive, is_bind_aware, executions, rows_processed
    2 from v$SQL
    3 where sql_id = '1820xq3ggh6p6'
    4 order by child_number
    5 /

    SQL_ID CHILD_NUMBER PLAN_HASH_VALUE I I EXECUTIONS ROWS_PROCESSED
    ------------- ------------ --------------- - - ---------- --------------
    1820xq3ggh6p6 0 805812326 Y N 1 55

    SQL> commit;

    Commit complete.

    SQL>

    We see one execution of the SQL Cursor with an Index Range Scan and Plan_Hash_Value 805812326. The SQL is marked BIND_SENSITIVE because of the presence of a Histogram indicating skew.

    Now, let's change the bind value from 'HEMANT' to 'SYS' and re-execute exactly the same query.

    SQL> -- setup second SQL for 'SYS'
    SQL> exec :target_owner := 'SYS';

    PL/SQL procedure successfully completed.

    SQL>
    SQL> -- run SQL
    SQL> select owner, object_name
    2 from demo_ACS
    3 where owner = :target_owner
    4 /

    OWNER OBJECT_NAME
    -------- ------------------------------
    .....
    .....

    31165 rows selected.

    SQL>
    SQL> -- get execution plan
    SQL> select * from table(dbms_xplan.display_cursor);

    PLAN_TABLE_OUTPUT
    ------------------------------------------------------------------------------------------------------------------------------------
    SQL_ID 1820xq3ggh6p6, child number 0
    -------------------------------------
    select owner, object_name from demo_ACS where owner = :target_owner

    Plan hash value: 805812326

    --------------------------------------------------------------------------------------------
    | Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
    --------------------------------------------------------------------------------------------
    | 0 | SELECT STATEMENT | | | | 3 (100)| |
    | 1 | TABLE ACCESS BY INDEX ROWID| DEMO_ACS | 55 | 3960 | 3 (0)| 00:00:01 |
    |* 2 | INDEX RANGE SCAN | DEMO_ACS_NDX | 55 | | 1 (0)| 00:00:01 |
    --------------------------------------------------------------------------------------------

    Predicate Information (identified by operation id):
    ---------------------------------------------------

    2 - access("OWNER"=:TARGET_OWNER)


    19 rows selected.

    SQL>
    SQL> -- get SQL query info
    SQL> select sql_id, child_number, plan_hash_value, is_bind_sensitive, is_bind_aware, executions, rows_processed
    2 from v$SQL
    3 where sql_id = '1820xq3ggh6p6'
    4 order by child_number
    5 /

    SQL_ID CHILD_NUMBER PLAN_HASH_VALUE I I EXECUTIONS ROWS_PROCESSED
    ------------- ------------ --------------- - - ---------- --------------
    1820xq3ggh6p6 0 805812326 Y N 2 31220

    SQL> commit;

    Commit complete.

    SQL>

This time, for 31,165 rows (instead of 55 rows), Oracle has used the same Execution Plan -- the same Plan_Hash_Value and the same expected cardinality of 55 rows.  However, the Optimizer is now "aware" that the 55-row Execution Plan actually returned 31,165 rows.

    The next execution will see a re-parse because of this awareness.

    SQL> -- rerun second SQL
    SQL> select owner, object_name
    2 from demo_ACS
    3 where owner = :target_owner
    4 /

    OWNER OBJECT_NAME
    -------- ------------------------------
    .....
    .....

    31165 rows selected.

    SQL>
    SQL> -- get execution plan
    SQL> select * from table(dbms_xplan.display_cursor);

    PLAN_TABLE_OUTPUT
    ------------------------------------------------------------------------------------------------------------------------------------
    SQL_ID 1820xq3ggh6p6, child number 1
    -------------------------------------
    select owner, object_name from demo_ACS where owner = :target_owner

    Plan hash value: 1893049797

    ------------------------------------------------------------------------------
    | Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
    ------------------------------------------------------------------------------
    | 0 | SELECT STATEMENT | | | | 299 (100)| |
    |* 1 | TABLE ACCESS FULL| DEMO_ACS | 31165 | 2191K| 299 (1)| 00:00:04 |
    ------------------------------------------------------------------------------

    Predicate Information (identified by operation id):
    ---------------------------------------------------

    1 - filter("OWNER"=:TARGET_OWNER)


    18 rows selected.

    SQL>
    SQL> -- get SQL query info
    SQL> select sql_id, child_number, plan_hash_value, is_bind_sensitive, is_bind_aware, executions, rows_processed
    2 from v$SQL
    3 where sql_id = '1820xq3ggh6p6'
    4 order by child_number
    5 /

    SQL_ID CHILD_NUMBER PLAN_HASH_VALUE I I EXECUTIONS ROWS_PROCESSED
    ------------- ------------ --------------- - - ---------- --------------
    1820xq3ggh6p6 0 805812326 Y N 2 31220
    1820xq3ggh6p6 1 1893049797 Y Y 1 31165

    SQL> commit;

    Commit complete.

    SQL>

    Aha ! This time we have a new Plan_Hash_Value (1893049797) for a Full Table Scan, being represented as a new Child Cursor (Child 1) that is now BIND_AWARE.






    Now, here's the catch I see.  If I change the "SELECT ....." statement to an "INSERT .... SELECT ....", I do NOT see this behaviour.  I do NOT see the cursor becoming BIND_AWARE as a new Child Cursor.
    Thus, the 3rd pass of an "INSERT ..... SELECT ..... " being the second pass with the Bind Value 'SYS' is correctly BIND_SENSITIVE but not BIND_AWARE.  This is what it shows :


    SQL> -- rerun second SQL
    SQL> insert into target_tbl
    2 (
    3 select owner, object_name
    4 from demo_ACS
    5 where owner = :target_owner
    6 )
    7 /

    31165 rows created.

    SQL>
    SQL> -- get execution plan
    SQL> select * from table(dbms_xplan.display_cursor);

    PLAN_TABLE_OUTPUT
    ------------------------------------------------------------------------------------------------------------------------------------
    SQL_ID cqyhjz5a5xyu4, child number 0
    -------------------------------------
    insert into target_tbl ( select owner, object_name from demo_ACS where
    owner = :target_owner )

    Plan hash value: 805812326

    ---------------------------------------------------------------------------------------------
    | Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
    ---------------------------------------------------------------------------------------------
    | 0 | INSERT STATEMENT | | | | 3 (100)| |
    | 1 | LOAD TABLE CONVENTIONAL | | | | | |
    | 2 | TABLE ACCESS BY INDEX ROWID| DEMO_ACS | 55 | 3960 | 3 (0)| 00:00:01 |
    |* 3 | INDEX RANGE SCAN | DEMO_ACS_NDX | 55 | | 1 (0)| 00:00:01 |
    ---------------------------------------------------------------------------------------------

    Predicate Information (identified by operation id):
    ---------------------------------------------------

    3 - access("OWNER"=:TARGET_OWNER)


    21 rows selected.

    SQL>
    SQL> -- get SQL query info
    SQL> select sql_id, child_number, plan_hash_value, is_bind_sensitive, is_bind_aware, executions, rows_processed
    2 from v$SQL
    3 where sql_id = 'cqyhjz5a5xyu4'
    4 /

    SQL_ID CHILD_NUMBER PLAN_HASH_VALUE I I EXECUTIONS ROWS_PROCESSED
    ------------- ------------ --------------- - - ---------- --------------
    cqyhjz5a5xyu4 0 805812326 Y N 3 62385

    SQL> commit;

    Commit complete.

    SQL>

    Three executions -- one with 'HEMANT' and the second and third with 'SYS' as the Bind Value -- all use the *same* Execution Plan.

    So, does this mean that I cannot expect ACS for DML ?


    UPDATE 07-Oct-14 :  I have been able to get the DML statement also to demonstrate Adaptive Cursor Sharing with the "BIND_AWARE" hint as suggested by Stefan Koehler and Dominic Brooks.

    SQL> -- run SQL
    SQL> insert /*+ BIND_AWARE */ into target_tbl
    2 (
    3 select owner, object_name
    4 from demo_ACS
    5 where owner = :target_owner
    6 )
    7 /

    55 rows created.

    SQL>
    SQL> -- get execution plan
    SQL> select * from table(dbms_xplan.display_cursor);

    PLAN_TABLE_OUTPUT
    ------------------------------------------------------------------------------------------------------------------------------------
    SQL_ID 0cca9xusptauj, child number 0
    -------------------------------------
    insert /*+ BIND_AWARE */ into target_tbl ( select owner, object_name
    from demo_ACS where owner = :target_owner )

    Plan hash value: 805812326

---------------------------------------------------------------------------------------------
| Id  | Operation                    | Name         | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------------------------
|   0 | INSERT STATEMENT             |              |       |       |     3 (100)|          |
|   1 |  LOAD TABLE CONVENTIONAL     |              |       |       |            |          |
|   2 |   TABLE ACCESS BY INDEX ROWID| DEMO_ACS     |    55 |  3960 |     3   (0)| 00:00:01 |
|*  3 |    INDEX RANGE SCAN          | DEMO_ACS_NDX |    55 |       |     1   (0)| 00:00:01 |
---------------------------------------------------------------------------------------------

    Predicate Information (identified by operation id):
    ---------------------------------------------------

    3 - access("OWNER"=:TARGET_OWNER)


    21 rows selected.

    SQL>
    SQL> -- get SQL query info
    SQL> select sql_id, child_number, plan_hash_value, is_bind_sensitive, is_bind_aware, executions, rows_processed
    2 from v$SQL
    3 where sql_id = '0cca9xusptauj'
    4 /

    SQL_ID        CHILD_NUMBER PLAN_HASH_VALUE I I EXECUTIONS ROWS_PROCESSED
    ------------- ------------ --------------- - - ---------- --------------
    0cca9xusptauj            0       805812326 Y Y          1             55

    SQL> commit;

    Commit complete.

    SQL>
    SQL> -- setup second SQL for 'SYS'
    SQL> exec :target_owner := 'SYS';

    PL/SQL procedure successfully completed.

    SQL>
    SQL> -- run SQL
    SQL> insert /*+ BIND_AWARE */ into target_tbl
    2 (
    3 select owner, object_name
    4 from demo_ACS
    5 where owner = :target_owner
    6 )
    7 /

    31165 rows created.

    SQL>
    SQL> -- get execution plan
    SQL> select * from table(dbms_xplan.display_cursor);

    PLAN_TABLE_OUTPUT
    ------------------------------------------------------------------------------------------------------------------------------------
    SQL_ID 0cca9xusptauj, child number 1
    -------------------------------------
    insert /*+ BIND_AWARE */ into target_tbl ( select owner, object_name
    from demo_ACS where owner = :target_owner )

    Plan hash value: 1893049797

-------------------------------------------------------------------------------------
| Id  | Operation               | Name     | Rows  | Bytes | Cost (%CPU)| Time     |
-------------------------------------------------------------------------------------
|   0 | INSERT STATEMENT        |          |       |       |   299 (100)|          |
|   1 |  LOAD TABLE CONVENTIONAL|          |       |       |            |          |
|*  2 |   TABLE ACCESS FULL     | DEMO_ACS | 31165 |  2191K|   299   (1)| 00:00:04 |
-------------------------------------------------------------------------------------

    Predicate Information (identified by operation id):
    ---------------------------------------------------

    2 - filter("OWNER"=:TARGET_OWNER)


    20 rows selected.

    SQL>
    SQL> -- get SQL query info
    SQL> select sql_id, child_number, plan_hash_value, is_bind_sensitive, is_bind_aware, executions, rows_processed
    2 from v$SQL
    3 where sql_id = '0cca9xusptauj'
    4 /

    SQL_ID        CHILD_NUMBER PLAN_HASH_VALUE I I EXECUTIONS ROWS_PROCESSED
    ------------- ------------ --------------- - - ---------- --------------
    0cca9xusptauj            0       805812326 Y Y          1             55
    0cca9xusptauj            1      1893049797 Y Y          1          31165

    SQL> commit;

    Commit complete.

    SQL>
    SQL> -- rerun second SQL
    SQL> insert /*+ BIND_AWARE */ into target_tbl
    2 (
    3 select owner, object_name
    4 from demo_ACS
    5 where owner = :target_owner
    6 )
    7 /

    31165 rows created.

    SQL>
    SQL> -- get execution plan
    SQL> select * from table(dbms_xplan.display_cursor);

    PLAN_TABLE_OUTPUT
    ------------------------------------------------------------------------------------------------------------------------------------
    SQL_ID 0cca9xusptauj, child number 1
    -------------------------------------
    insert /*+ BIND_AWARE */ into target_tbl ( select owner, object_name
    from demo_ACS where owner = :target_owner )

    Plan hash value: 1893049797

-------------------------------------------------------------------------------------
| Id  | Operation               | Name     | Rows  | Bytes | Cost (%CPU)| Time     |
-------------------------------------------------------------------------------------
|   0 | INSERT STATEMENT        |          |       |       |   299 (100)|          |
|   1 |  LOAD TABLE CONVENTIONAL|          |       |       |            |          |
|*  2 |   TABLE ACCESS FULL     | DEMO_ACS | 31165 |  2191K|   299   (1)| 00:00:04 |
-------------------------------------------------------------------------------------

    Predicate Information (identified by operation id):
    ---------------------------------------------------

    2 - filter("OWNER"=:TARGET_OWNER)


    20 rows selected.

    SQL>
    SQL> -- get SQL query info
    SQL> select sql_id, child_number, plan_hash_value, is_bind_sensitive, is_bind_aware, executions, rows_processed
    2 from v$SQL
    3 where sql_id = '0cca9xusptauj'
    4 /

    SQL_ID        CHILD_NUMBER PLAN_HASH_VALUE I I EXECUTIONS ROWS_PROCESSED
    ------------- ------------ --------------- - - ---------- --------------
    0cca9xusptauj            0       805812326 Y Y          1             55
    0cca9xusptauj            1      1893049797 Y Y          2          62330

    SQL> commit;

    Commit complete.

    SQL>

    However, there is a noticeable difference.  With the BIND_AWARE Hint, the SQL is Bind Aware right from the first execution (for :target_owner='HEMANT').  So, even at the second execution (for the first run of :target_owner='SYS'), it re-peeks and generates a new Execution Plan (the Full Table Scan) for a new Child (Child 1).
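The behaviour described above can be modelled in a few lines of Python. This is a conceptual sketch only -- the class, the 0.05 selectivity cutoff, and the assumed table size of 31,800 rows are inventions for illustration, not Oracle internals:

```python
# Conceptual model of bind-aware cursor matching (not Oracle internals).
# A bind-aware cursor remembers, per child, the selectivity range it was
# compiled for; a peeked selectivity outside all known ranges triggers a
# fresh "hard parse" and a new child cursor with its own plan.

def peek_selectivity(bind_value, histogram, total_rows):
    """Fraction of rows the peeked bind value is estimated to match."""
    return histogram.get(bind_value, 0) / total_rows

class BindAwareCursor:
    def __init__(self):
        self.children = []   # (low, high, plan): selectivity range per child

    def execute(self, selectivity):
        for low, high, plan in self.children:
            if low <= selectivity <= high:
                return plan                        # reuse an existing child
        # "hard parse": choose a plan for this selectivity, remember its range
        plan = "FULL TABLE SCAN" if selectivity > 0.05 else "INDEX RANGE SCAN"
        self.children.append((selectivity * 0.5, selectivity * 2, plan))
        return plan

histogram = {"HEMANT": 55, "SYS": 31165}   # rows per OWNER, as in the demo
total = 31800                              # assumed table size (illustrative)
cursor = BindAwareCursor()
print(cursor.execute(peek_selectivity("HEMANT", histogram, total)))  # INDEX RANGE SCAN
print(cursor.execute(peek_selectivity("SYS", histogram, total)))     # FULL TABLE SCAN
print(len(cursor.children))                # 2 children, mirroring v$SQL above
```

The key point the model captures is that the second peeked selectivity (roughly 0.98 for 'SYS') falls outside the range recorded for the first child, so a second child cursor with a different plan is created.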
    .
    .
    .
    Categories: DBA Blogs

    The ADMINISTER SQL MANAGEMENT OBJECT Privilege

    Thu, 2014-09-25 08:26
    In 11.2.0.2

    Having seen in the previous post, "EXECUTE Privilege on DBMS_SPM not sufficient", let's see if there is a risk to the ADMINISTER SQL MANAGEMENT OBJECT privilege.

    First, recreating the SQL Plan

    SQL> connect spm_test/spm_test
    Connected.
    SQL> alter session set OPTIMIZER_CAPTURE_SQL_PLAN_BASELINES=TRUE;

    Session altered.

    SQL> variable qrn number ;
    SQL> exec :qrn := 5;

    PL/SQL procedure successfully completed.

    SQL> select * from spm_test_table where id_column=:qrn;

     ID_COLUMN DATA_COL
    ---------- ---------------
             5 5

    SQL>select * from spm_test_table where id_column=:qrn;

     ID_COLUMN DATA_COL
    ---------- ---------------
             5 5

    SQL> alter session set OPTIMIZER_CAPTURE_SQL_PLAN_BASELINES=FALSE;

    Session altered.

    SQL>
    SQL> connect hemant/hemant
    Connected.
    SQL> select sql_handle, plan_name, sql_text, enabled, accepted, fixed
    2 from dba_sql_plan_baselines
    3 where creator='SPM_TEST'
    4 /

    SQL_HANDLE PLAN_NAME
    ------------------------------ ------------------------------
    SQL_TEXT
    --------------------------------------------------------------------------------
    ENA ACC FIX
    --- --- ---
    SQL_6ceee9b24e9fd50a SQL_PLAN_6tvr9q979zp8a1e198e55
    select * from spm_test_table where id_column=:qrn
    YES YES NO


    SQL>

    Next, setup the BREAK !

    SQL> create user spm_break identified by spm_break;

    User created.

    SQL> grant create session, administer sql management object to spm_break;

    Grant succeeded.

    SQL> connect spm_break;
    Enter password:
    Connected.
    SQL>
    SQL> set serveroutput on
    SQL> declare
    2 ret_value pls_integer;
    3 begin
    4 ret_value := dbms_spm.drop_sql_plan_baseline(
    5 sql_handle=>'SQL_6ceee9b24e9fd50a',plan_name=>'SQL_PLAN_6tvr9q979zp8a1e198e55');
    6 dbms_output.put_line('Return : ' || ret_value);
    7 end;
    8 /
    Return : 1

    PL/SQL procedure successfully completed.

    SQL>
    SQL> connect hemant/hemant
    Connected.
    SQL> select sql_handle, plan_name, sql_text, enabled, accepted, fixed
    2 from dba_sql_plan_baselines
    3 where creator = 'SPM_TEST'
    4 /

    no rows selected

    SQL>

    I was able to use the SPM_BREAK account to *DROP* an SQL Plan Baseline that was created by the SPM_TEST account without identifying which account it belonged to -- i.e. which account was the creator ! See Oracle Support Doc 1469099.1 and reference to Bug 12932784.   Isn't that a bug, or a security loophole ?
    Apparently, this privilege is to be used only by Administrators.  But a non-Administrator cannot manage and evolve his own SQL Plan Baselines without this privilege.  So does that mean that only an Administrator should capture, evolve and manage SQL Plan Baselines ?

    If you have a shared environment with different development teams developing different applications in different schemas, how do you provide them the facility to manage their own SQL Plan Baselines ?  The EXECUTE privilege on DBMS_SPM is not sufficient.  Yet, the ADMINISTER SQL MANAGEMENT OBJECT is excessive as one development team could drop the SQL Plan Baselines of another development team (i.e. another application).


    Can anyone test whether the ADMINISTER SQL MANAGEMENT OBJECT privilege is required in addition to EXECUTE on DBMS_SPM in order to simply manage / evolve one's own SQL Plan Baselines in 11.2.0.4 / 12.1.0.1 / 12.1.0.2 ?
    .
    .
    .

    Categories: DBA Blogs

    EXECUTE Privilege on DBMS_SPM not sufficient

    Wed, 2014-09-24 07:57
    In 11.2.0.2

    Here is a quick demo to show that the "ADMINISTER SQL MANAGEMENT OBJECT"  privilege is required for a non-DBA user to use DBMS_SPM even if EXECUTE has been granted on DBMS_SPM.

    SQL> create user spm_test identified by spm_test quota unlimited on users;

    User created.

    SQL> alter user spm_test default tablespace users;

    User altered.

    SQL> grant create session, create table to spm_test;

    Grant succeeded.

    SQL> connect spm_test/spm_test
    Connected.
    SQL> create table spm_test_table (id_column number primary key, data_col varchar2(15));

    Table created.

    SQL> alter session set OPTIMIZER_CAPTURE_SQL_PLAN_BASELINES=TRUE;

    Session altered.

    SQL> insert into spm_test_table select rownum, to_char(rownum) from dual connect by level < 10001;

    10000 rows created.

    SQL> commit;

    Commit complete.

    SQL> variable qrn number;
    SQL> exec :qrn := 5;

    PL/SQL procedure successfully completed.

    SQL> select * from spm_test_table where id_column=:qrn;

     ID_COLUMN DATA_COL
    ---------- ---------------
             5 5

    SQL> select * from spm_test_table where id_column=:qrn;

     ID_COLUMN DATA_COL
    ---------- ---------------
             5 5

    SQL> alter session set OPTIMIZER_CAPTURE_SQL_PLAN_BASELINES=FALSE;

    Session altered.

    SQL>
    SQL> connect hemant/hemant
    Connected.
    SQL>
    SQL> select sql_handle, plan_name, sql_text, enabled, accepted, fixed
    2 from dba_sql_plan_baselines
    3 where creator='SPM_TEST'
    4 /

    SQL_HANDLE PLAN_NAME
    ------------------------------ ------------------------------
    SQL_TEXT
    --------------------------------------------------------------------------------
    ENA ACC FIX
    --- --- ---
    SQL_6ceee9b24e9fd50a SQL_PLAN_6tvr9q979zp8a1e198e55
    select * from spm_test_table where id_column=:qrn
    YES YES NO


    SQL>
    SQL> connect spm_test/spm_test
    Connected.
    SQL> declare
    2 ret_value pls_integer;
    3 begin
    4 ret_value := dbms_spm.drop_sql_plan_baseline(
    5 sql_handle=>'SQL_6ceee9b24e9fd50a',plan_name=>'SQL_PLAN_6tvr9q979zp8a1e198e55');
    6 end;
    7 /
    declare
    *
    ERROR at line 1:
    ORA-38171: Insufficient privileges for SQL management object operation
    ORA-06512: at "SYS.DBMS_SPM", line 2532
    ORA-06512: at line 4


    SQL> select table_name, grantee, privilege
    2 from all_tab_privs
    3 where table_name='DBMS_SPM'
    4 order by 2,3;

    TABLE_NAME GRANTEE
    ------------------------------ ------------------------------
    PRIVILEGE
    ----------------------------------------
    DBMS_SPM PUBLIC
    EXECUTE


    SQL>
    SQL> connect / as sysdba
    Connected.
    SQL> grant execute on dbms_spm to spm_test;

    Grant succeeded.

    SQL> connect spm_test/spm_test
    Connected.
    SQL> declare
    2 ret_value pls_integer;
    3 begin
    4 ret_value := dbms_spm.drop_sql_plan_baseline(
    5 sql_handle=>'SQL_6ceee9b24e9fd50a',plan_name=>'SQL_PLAN_6tvr9q979zp8a1e198e55');
    6 dbms_output.put_line(ret_value);
    7 end;
    8 /
    declare
    *
    ERROR at line 1:
    ORA-38171: Insufficient privileges for SQL management object operation
    ORA-06512: at "SYS.DBMS_SPM", line 2532
    ORA-06512: at line 4


    SQL>
    SQL> connect / as sysdba
    Connected.
    SQL>
    SQL> grant administer sql management object to spm_test;

    Grant succeeded.

    SQL>
    SQL> connect spm_test/spm_test
    Connected.
    SQL> declare
    2 ret_value pls_integer;
    3 begin
    4 ret_value := dbms_spm.drop_sql_plan_baseline(
    5 sql_handle=>'SQL_6ceee9b24e9fd50a',plan_name=>'SQL_PLAN_6tvr9q979zp8a1e198e55');
    6 dbms_output.put_line(ret_value);
    7 end;
    8 /

    PL/SQL procedure successfully completed.

    SQL>
    SQL> connect hemant/hemant
    Connected.
    SQL> select sql_handle, plan_name, sql_text, enabled, accepted, fixed
    2 from dba_sql_plan_baselines
    3 where creator = 'SPM_TEST'
    4 /

    no rows selected

    SQL>

    Thus, although EXECUTE on DBMS_SPM had been granted to PUBLIC and even explicitly to this ordinary user, the user could not execute DROP_SQL_PLAN_BASELINE.  The ADMINISTER SQL MANAGEMENT OBJECT privilege was required.
    .
    .
    .


    Categories: DBA Blogs

    Index Growing Larger Than The Table

    Wed, 2014-09-10 08:52
    Here is a very simple demonstration of a case where an Index can grow larger than its Table.  This happens when the pattern of INSERTs and DELETEs doesn't allow deleted index entries to be reused.  For every 10 rows that are inserted, 7 rows are subsequently deleted after their status is changed to "Processed" -- but the index space freed by those deletions cannot be reused.

    SQL>
    SQL>REM Demo Index growth larger than table !
    SQL>
    SQL>drop table hkc_process_list purge;

    Table dropped.

    SQL>
    SQL>create table hkc_process_list
    2 (transaction_id number,
    3 status_flag varchar2(1),
    4 last_update_date date,
    5 transaction_type number,
    6 details varchar2(25))
    7 /

    Table created.

    SQL>
    SQL>create index hkc_process_list_ndx
    2 on hkc_process_list
    3 (transaction_id, status_flag)
    4 /

    Index created.

    SQL>
    SQL>
    SQL>REM Cycle 1 -------------------------------------
    > -- create first 1000 transactions
    SQL>insert into hkc_process_list
    2 select rownum, 'N', sysdate, mod(rownum,4)+1, dbms_random.string('X',10)
    3 from dual
    4 connect by level < 1001
    5 /

    1000 rows created.

    SQL>commit;

    Commit complete.

    SQL>
    SQL>-- get sizes of table and index
    SQL>exec dbms_stats.gather_table_stats('','HKC_PROCESS_LIST',estimate_percent=>100,cascade=>TRUE);

    PL/SQL procedure successfully completed.

    SQL>select 'Table' Obj_Type, table_name, blocks Blocks
    2 from user_tables
    3 where table_name like 'HKC_PROCE%'
    4 union
    5 select 'Index', index_name, leaf_blocks
    6 from user_indexes
    7 where index_name like 'HKC_PROCE%'
    8 order by 1
    9 /

    OBJ_T TABLE_NAME                         BLOCKS
    ----- ------------------------------ ----------
    Index HKC_PROCESS_LIST_NDX                    3
    Table HKC_PROCESS_LIST                        5

    2 rows selected.

    SQL>
    SQL>-- change status flag for 70% of the transactions to 'P'
    SQL>update hkc_process_list
    2 set status_flag='P'
    3 where mod(transaction_id,10) < 7
    4 /

    700 rows updated.

    SQL>commit;

    Commit complete.

    SQL>
    SQL>-- delete processed rows
    SQL>delete hkc_process_list
    2 where status_flag='P'
    3 /

    700 rows deleted.

    SQL>commit;

    Commit complete.

    SQL>
    SQL>REM Cycle 2 -------------------------------------
    > -- insert another 1000 rows
    SQL>insert into hkc_process_list
    2 select rownum+1000, 'N', sysdate, mod(rownum,4)+1, dbms_random.string('X',10)
    3 from dual
    4 connect by level < 1001
    5 /

    1000 rows created.

    SQL>commit;

    Commit complete.

    SQL>
    SQL>-- get sizes of table and index
    SQL>exec dbms_stats.gather_table_stats('','HKC_PROCESS_LIST',estimate_percent=>100,cascade=>TRUE);

    PL/SQL procedure successfully completed.

    SQL>select 'Table' Obj_Type, table_name, blocks Blocks
    2 from user_tables
    3 where table_name like 'HKC_PROCE%'
    4 union
    5 select 'Index', index_name, leaf_blocks
    6 from user_indexes
    7 where index_name like 'HKC_PROCE%'
    8 order by 1
    9 /

    OBJ_T TABLE_NAME                         BLOCKS
    ----- ------------------------------ ----------
    Index HKC_PROCESS_LIST_NDX                    7
    Table HKC_PROCESS_LIST                       13

    2 rows selected.

    SQL>
    SQL>-- change status flag for 70% of the transactions to 'P'
    SQL>update hkc_process_list
    2 set status_flag='P'
    3 where mod(transaction_id,10) < 7
    4 /

    700 rows updated.

    SQL>commit;

    Commit complete.

    SQL>
    SQL>-- delete processed rows
    SQL>delete hkc_process_list
    2 where status_flag='P'
    3 /

    700 rows deleted.

    SQL>commit;

    Commit complete.

    SQL>
    SQL>
    SQL>REM Cycle 3 -------------------------------------
    > -- insert another 1000 rows
    SQL>insert into hkc_process_list
    2 select rownum+2000, 'N', sysdate, mod(rownum,4)+1, dbms_random.string('X',10)
    3 from dual
    4 connect by level < 1001
    5 /

    1000 rows created.

    SQL>commit;

    Commit complete.

    SQL>
    SQL>-- get sizes of table and index
    SQL>exec dbms_stats.gather_table_stats('','HKC_PROCESS_LIST',estimate_percent=>100,cascade=>TRUE);

    PL/SQL procedure successfully completed.

    SQL>select 'Table' Obj_Type, table_name, blocks Blocks
    2 from user_tables
    3 where table_name like 'HKC_PROCE%'
    4 union
    5 select 'Index', index_name, leaf_blocks
    6 from user_indexes
    7 where index_name like 'HKC_PROCE%'
    8 order by 1
    9 /

    OBJ_T TABLE_NAME                         BLOCKS
    ----- ------------------------------ ----------
    Index HKC_PROCESS_LIST_NDX                   11
    Table HKC_PROCESS_LIST                       13

    2 rows selected.

    SQL>
    SQL>-- change status flag for 70% of the transactions to 'P'
    SQL>update hkc_process_list
    2 set status_flag='P'
    3 where mod(transaction_id,10) < 7
    4 /

    700 rows updated.

    SQL>commit;

    Commit complete.

    SQL>
    SQL>-- delete processed rows
    SQL>delete hkc_process_list
    2 where status_flag='P'
    3 /

    700 rows deleted.

    SQL>commit;

    Commit complete.

    SQL>
    SQL>
    SQL>REM Cycle 4 -------------------------------------
    > -- insert another 1000 rows
    SQL>insert into hkc_process_list
    2 select rownum+3000, 'N', sysdate, mod(rownum,4)+1, dbms_random.string('X',10)
    3 from dual
    4 connect by level < 1001
    5 /

    1000 rows created.

    SQL>commit;

    Commit complete.

    SQL>
    SQL>-- get sizes of table and index
    SQL>exec dbms_stats.gather_table_stats('','HKC_PROCESS_LIST',estimate_percent=>100,cascade=>TRUE);

    PL/SQL procedure successfully completed.

    SQL>select 'Table' Obj_Type, table_name, blocks Blocks
    2 from user_tables
    3 where table_name like 'HKC_PROCE%'
    4 union
    5 select 'Index', index_name, leaf_blocks
    6 from user_indexes
    7 where index_name like 'HKC_PROCE%'
    8 order by 1
    9 /

    OBJ_T TABLE_NAME                         BLOCKS
    ----- ------------------------------ ----------
    Index HKC_PROCESS_LIST_NDX                   15
    Table HKC_PROCESS_LIST                       13

    2 rows selected.

    SQL>
    SQL>-- change status flag for 70% of the transactions to 'P'
    SQL>update hkc_process_list
    2 set status_flag='P'
    3 where mod(transaction_id,10) < 7
    4 /

    700 rows updated.

    SQL>commit;

    Commit complete.

    SQL>
    SQL>-- delete processed rows
    SQL>delete hkc_process_list
    2 where status_flag='P'
    3 /

    700 rows deleted.

    SQL>commit;

    Commit complete.

    SQL>
    SQL>
    SQL>REM Latest State size -------------------------
    > -- get sizes of table and index
    SQL>exec dbms_stats.gather_table_stats('','HKC_PROCESS_LIST',estimate_percent=>100,cascade=>TRUE);

    PL/SQL procedure successfully completed.

    SQL>select 'Table' Obj_Type, table_name, blocks Blocks
    2 from user_tables
    3 where table_name like 'HKC_PROCE%'
    4 union
    5 select 'Index', index_name, leaf_blocks
    6 from user_indexes
    7 where index_name like 'HKC_PROCE%'
    8 order by 1
    9 /

    OBJ_T TABLE_NAME                         BLOCKS
    ----- ------------------------------ ----------
    Index HKC_PROCESS_LIST_NDX                   17
    Table HKC_PROCESS_LIST                       13

    2 rows selected.

    SQL>
    SQL>

    Note how the Index grew from 3 blocks to 17 blocks -- larger than the Table, which seemed to have reached a "steady-state" at 13 blocks.

    The Index is built on only 2 of the 5 columns of the table and these two columns are also "narrow" in that they are a number and a single character.  Yet it grows faster through the INSERT - DELETE - INSERT cycles.

    Note the difference between the Index definition (built on TRANSACTION_ID as the leading column) and the pattern of DELETEs (which is on STATUS_FLAG).

    Deleted rows leave "holes" in the Index, but these entries cannot be reused by subsequent Inserts.  The Index is ordered on TRANSACTION_ID, so if an Index entry for TRANSACTION_ID = n is deleted, that entry's space can be reused only by the same (or a very close) TRANSACTION_ID.

    Assume that an Index Leaf Block contains entries for TRANSACTION_IDs 1, 2, 3, 4 and so on upto 10.  If rows for TRANSACTION_IDs 2,3,5,6,8 and 9 are deleted but 1,4,7 and 10  are not deleted then the Leaf Block has "free" space for new rows only with TRANSACTION_IDs 2,3,5,6,8 and 9.  New rows with TRANSACTION_IDs 11 and above will take a new Index Leaf Block and not re-use the "free" space in the first Index Leaf Block.  The first Leaf Block remains with deleted entries that are not reused.
    On the other hand, when rows are deleted from a Table Block, new rows can be re-inserted into the same Table Block.  The Table is Heap Organised, not ordered like the Index.  Therefore, new rows may be inserted into any Block(s) that have space for them -- e.g. Blocks from which rows have been deleted.  Thus, after deleting TRANSACTION_IDs 2,3,5,6 from a Table Block, new rows with TRANSACTION_IDs 11,12,13,14 can be re-inserted into the *same* Block.
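This reuse asymmetry can be illustrated with a toy simulation in Python.  It is purely illustrative -- the 250-entry capacities and the "rightmost leaf only" insertion rule are deliberate simplifications, not Oracle's actual space management:

```python
# Toy model of the demo above: index entries are key-ordered, so inserts of
# ever-increasing TRANSACTION_IDs only ever land in the rightmost leaf,
# while the heap table reuses any block with free space.
LEAF_CAP = 250    # entries per index leaf block (arbitrary)
BLOCK_CAP = 250   # rows per table block (arbitrary)

index_leaves = []   # each leaf holds an ordered slice of keys
table_blocks = []   # each block holds rows in no particular order

def insert(keys):
    for k in keys:
        if not index_leaves or len(index_leaves[-1]) >= LEAF_CAP:
            index_leaves.append([])            # index grows to the right only
        index_leaves[-1].append(k)
        for blk in table_blocks:               # heap: first block with space
            if len(blk) < BLOCK_CAP:
                blk.append(k)
                break
        else:
            table_blocks.append([k])

def delete(pred):
    for leaf in index_leaves:
        leaf[:] = [k for k in leaf if not pred(k)]   # holes never refilled
    for blk in table_blocks:
        blk[:] = [k for k in blk if not pred(k)]     # space reused next cycle

for cycle in range(4):                         # the four INSERT/DELETE cycles
    insert(range(cycle * 1000 + 1, cycle * 1000 + 1001))
    delete(lambda k: k % 10 < 7)               # "process" and delete 70%

print(len(index_leaves), len(table_blocks))    # index leaves outgrow table blocks
```

Each cycle adds a fresh set of leaf blocks to the right of the index while the deleted slots in older leaves sit unused, whereas the table's freed slots absorb most of each new batch -- the same divergence the block counts above show.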

    .
    .
    .
    Categories: DBA Blogs

    RAC Database Backups

    Sun, 2014-09-07 08:20
    In 11gR2 Grid Infrastructure and RAC


    UPDATE : 13-Sep-14 : How to run the RMAN Backup using server sessions concurrently on each node.  Please scroll down to the update.


    In a RAC environment, the database backups can be executed from any one node or distributed across multiple nodes of the cluster.

    In my two-node environment, I have backups configured to go to an FRA.  This is defined by the instance parameter "db_recovery_file_dest" (and "db_recovery_file_dest_size").  This can be a shared location -- e.g. an ASM DiskGroup or a ClusterFileSystem.  Therefore, the parameter should ideally be the same across all nodes so that backups may be executed from any or multiple nodes without changing the backup location.

    Running the RMAN commands from node1 :
    [root@node1 ~]# su - oracle
    -sh-3.2$ sqlplus '/ as sysdba'

    SQL*Plus: Release 11.2.0.2.0 Production on Sun Sep 7 21:56:46 2014

    Copyright (c) 1982, 2010, Oracle. All rights reserved.


    Connected to:
    Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - Production
    With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
    Data Mining and Real Application Testing options

    SQL> show parameter db_recovery_file

    NAME                                 TYPE        VALUE
    ------------------------------------ ----------- ------------------------------
    db_recovery_file_dest                string      +FRA
    db_recovery_file_dest_size           big integer 4000M
    SQL>
    SQL> exit
    Disconnected from Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - Production
    With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
    Data Mining and Real Application Testing options
    -sh-3.2$ rman target /

    Recovery Manager: Release 11.2.0.2.0 - Production on Sun Sep 7 21:57:49 2014

    Copyright (c) 1982, 2009, Oracle and/or its affiliates. All rights reserved.

    connected to target database: RACDB (DBID=762767011)

    RMAN> list backup summary;

    using target database control file instead of recovery catalog

    List of Backups
    ===============
    Key     TY LV S Device Type Completion Time #Pieces #Copies Compressed Tag
    ------- -- -- - ----------- --------------- ------- ------- ---------- ---
    12      B  F  A DISK        26-NOV-11       1       1       YES        TAG20111126T224849
    13      B  A  A DISK        26-NOV-11       1       1       YES        TAG20111126T230108
    16      B  A  A DISK        16-JUN-14       1       1       YES        TAG20140616T222340
    18      B  A  A DISK        16-JUN-14       1       1       YES        TAG20140616T222738
    19      B  F  A DISK        16-JUN-14       1       1       NO         TAG20140616T222742
    20      B  F  A DISK        05-JUL-14       1       1       NO         TAG20140705T173046
    21      B  F  A DISK        16-AUG-14       1       1       NO         TAG20140816T231412
    22      B  F  A DISK        17-AUG-14       1       1       NO         TAG20140817T002340

    RMAN>
    RMAN> backup as compressed backupset database plus archivelog delete input;


    Starting backup at 07-SEP-14
    current log archived
    using channel ORA_DISK_1
    channel ORA_DISK_1: starting compressed archived log backup set
    channel ORA_DISK_1: specifying archived log(s) in backup set
    input archived log thread=1 sequence=111 RECID=77 STAMP=857685630
    input archived log thread=2 sequence=37 RECID=76 STAMP=857685626
    input archived log thread=2 sequence=38 RECID=79 STAMP=857685684
    input archived log thread=1 sequence=112 RECID=78 STAMP=857685681
    channel ORA_DISK_1: starting piece 1 at 07-SEP-14
    channel ORA_DISK_1: finished piece 1 at 07-SEP-14
    piece handle=+FRA/racdb/backupset/2014_09_07/annnf0_tag20140907t220131_0.288.857685699 tag=TAG20140907T220131 comment=NONE
    channel ORA_DISK_1: backup set complete, elapsed time: 00:00:09
    channel ORA_DISK_1: deleting archived log(s)
    archived log file name=+FRA/racdb/archivelog/2014_09_07/thread_1_seq_111.307.857685623 RECID=77 STAMP=857685630
    archived log file name=+FRA/racdb/archivelog/2014_09_07/thread_2_seq_37.309.857685623 RECID=76 STAMP=857685626
    archived log file name=+FRA/racdb/archivelog/2014_09_07/thread_2_seq_38.277.857685685 RECID=79 STAMP=857685684
    archived log file name=+FRA/racdb/archivelog/2014_09_07/thread_1_seq_112.270.857685681 RECID=78 STAMP=857685681
    Finished backup at 07-SEP-14

    Starting backup at 07-SEP-14
    using channel ORA_DISK_1
    channel ORA_DISK_1: starting compressed full datafile backup set
    channel ORA_DISK_1: specifying datafile(s) in backup set
    input datafile file number=00001 name=+DATA1/racdb/datafile/system.257.765499365
    input datafile file number=00002 name=+DATA2/racdb/datafile/sysaux.256.765502307
    input datafile file number=00003 name=+DATA1/racdb/datafile/undotbs1.259.765500033
    input datafile file number=00004 name=+DATA2/racdb/datafile/undotbs2.257.765503281
    input datafile file number=00006 name=+DATA1/racdb/datafile/partition_test.265.809628399
    input datafile file number=00007 name=+DATA1/racdb/datafile/hemant_tbs.266.852139375
    input datafile file number=00008 name=+DATA3/racdb/datafile/new_tbs.256.855792859
    input datafile file number=00005 name=+DATA1/racdb/datafile/users.261.765500215
    channel ORA_DISK_1: starting piece 1 at 07-SEP-14
    channel ORA_DISK_1: finished piece 1 at 07-SEP-14
    piece handle=+FRA/racdb/backupset/2014_09_07/nnndf0_tag20140907t220145_0.270.857685709 tag=TAG20140907T220145 comment=NONE
    channel ORA_DISK_1: backup set complete, elapsed time: 00:06:15
    Finished backup at 07-SEP-14

    Starting backup at 07-SEP-14
    current log archived
    using channel ORA_DISK_1
    channel ORA_DISK_1: starting compressed archived log backup set
    channel ORA_DISK_1: specifying archived log(s) in backup set
    input archived log thread=1 sequence=113 RECID=81 STAMP=857686085
    input archived log thread=2 sequence=39 RECID=80 STAMP=857686083
    channel ORA_DISK_1: starting piece 1 at 07-SEP-14
    channel ORA_DISK_1: finished piece 1 at 07-SEP-14
    piece handle=+FRA/racdb/backupset/2014_09_07/annnf0_tag20140907t220807_0.307.857686087 tag=TAG20140907T220807 comment=NONE
    channel ORA_DISK_1: backup set complete, elapsed time: 00:00:01
    channel ORA_DISK_1: deleting archived log(s)
    archived log file name=+FRA/racdb/archivelog/2014_09_07/thread_1_seq_113.309.857686085 RECID=81 STAMP=857686085
    archived log file name=+FRA/racdb/archivelog/2014_09_07/thread_2_seq_39.277.857686083 RECID=80 STAMP=857686083
    Finished backup at 07-SEP-14

    Starting Control File and SPFILE Autobackup at 07-SEP-14
    piece handle=+FRA/racdb/autobackup/2014_09_07/s_857686089.277.857686097 comment=NONE
    Finished Control File and SPFILE Autobackup at 07-SEP-14

    RMAN>

    Note how the "PLUS ARCHIVELOG" specification also included archivelogs from both threads (instances) of the database.

    Let's verify these details from the instance on node2 :

    [root@node2 ~]# su - oracle
    -sh-3.2$ rman target /

    Recovery Manager: Release 11.2.0.2.0 - Production on Sun Sep 7 22:11:00 2014

    Copyright (c) 1982, 2009, Oracle and/or its affiliates. All rights reserved.

    connected to target database: RACDB (DBID=762767011)

    RMAN>

    RMAN> list backup of database completed after 'trunc(sysdate)-1';

    using target database control file instead of recovery catalog

    List of Backup Sets
    ===================


    BS Key  Type LV Size       Device Type Elapsed Time Completion Time
    ------- ---- -- ---------- ----------- ------------ ---------------
    24      Full    258.21M    DISK        00:06:12     07-SEP-14
            BP Key: 24   Status: AVAILABLE  Compressed: YES  Tag: TAG20140907T220145
            Piece Name: +FRA/racdb/backupset/2014_09_07/nnndf0_tag20140907t220145_0.270.857685709
      List of Datafiles in backup set 24
      File LV Type Ckp SCN    Ckp Time  Name
      ---- -- ---- ---------- --------- ----
      1       Full 1160228    07-SEP-14 +DATA1/racdb/datafile/system.257.765499365
      2       Full 1160228    07-SEP-14 +DATA2/racdb/datafile/sysaux.256.765502307
      3       Full 1160228    07-SEP-14 +DATA1/racdb/datafile/undotbs1.259.765500033
      4       Full 1160228    07-SEP-14 +DATA2/racdb/datafile/undotbs2.257.765503281
      5       Full 1160228    07-SEP-14 +DATA1/racdb/datafile/users.261.765500215
      6       Full 1160228    07-SEP-14 +DATA1/racdb/datafile/partition_test.265.809628399
      7       Full 1160228    07-SEP-14 +DATA1/racdb/datafile/hemant_tbs.266.852139375
      8       Full 1160228    07-SEP-14 +DATA3/racdb/datafile/new_tbs.256.855792859

    RMAN>

    Yes, today's backup is visible from node2 as it retrieves the information from the controlfile that is common across all the instances of the database.

    How are the archivelogs configured ?

    RMAN> exit


    Recovery Manager complete.
    -sh-3.2$
    -sh-3.2$ sqlplus '/ as sysdba'

    SQL*Plus: Release 11.2.0.2.0 Production on Sun Sep 7 22:15:51 2014

    Copyright (c) 1982, 2010, Oracle. All rights reserved.


    Connected to:
    Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - Production
    With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
    Data Mining and Real Application Testing options

    SQL> archive log list;
    Database log mode              Archive Mode
    Automatic archival             Enabled
    Archive destination            USE_DB_RECOVERY_FILE_DEST
    Oldest online log sequence     39
    Next log sequence to archive   40
    Current log sequence           40
    SQL>
    SQL> show parameter db_recovery_file_dest

    NAME TYPE VALUE
    ------------------------------------ ----------- ------------------------------
    db_recovery_file_dest string +FRA
    db_recovery_file_dest_size big integer 4000M
    SQL>

    Both instances have the same destination configured for archivelogs and backups.
    .
    .
    .
    =======================================================
    UPDATE : 13-Sep-14 :  Running the backup concurrently from both nodes 

    There are two ways to have the RMAN Backup run from both nodes.
    A.   Issue separate RMAN BACKUP DATAFILE or BACKUP TABLESPACE commands from each node, such that each node backs up an independent list of Datafiles / Tablespaces

    B.  Issue a BACKUP DATABASE command from one node but with two channels open, one against each node.

    Here, method A is easy to implement but difficult to maintain as you add Tablespaces and Datafiles over time.  So, I will demonstrate method B.
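    For comparison, method A might look like the following -- a minimal sketch only, splitting the tablespaces of this database between the two nodes by hand (the split shown is arbitrary and purely illustrative) :

    # command file run from node1 :
    run {
    backup as compressed backupset tablespace SYSTEM, UNDOTBS1, USERS, HEMANT_TBS;
    }

    # command file run from node2 :
    run {
    backup as compressed backupset tablespace SYSAUX, UNDOTBS2, PARTITION_TEST, NEW_TBS;
    }

    Every time a Tablespace is added to the database, one of the two scripts must be manually updated -- which is why this method is difficult to maintain.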

    I begin by ensuring that
    a.  I have REMOTE_LOGIN_PASSWORDFILE configured, so that I can make a SQL*Net connection from node1 to node2  (RMAN requires connecting AS SYSDBA in 11g)
    b.  I have a TNSNAMES.ORA entry configured for the instance on node2 (note that the service name is common across all [both] instances in the Cluster)

    -sh-3.2$ hostname
    node1.mydomain.com
    -sh-3.2$ id
    uid=800(oracle) gid=1001(oinstall) groups=1001(oinstall),1011(asmdba),1021(dba)
    -sh-3.2$ sqlplus '/ as sysdba'

    SQL*Plus: Release 11.2.0.2.0 Production on Sat Sep 13 23:22:09 2014

    Copyright (c) 1982, 2010, Oracle. All rights reserved.


    Connected to:
    Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - Production
    With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
    Data Mining and Real Application Testing options

    SQL> show parameter remote_login_passwordfile;

    NAME TYPE VALUE
    ------------------------------------ ----------- ------------------------------
    remote_login_passwordfile string EXCLUSIVE
    SQL> quit
    Disconnected from Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - Production
    With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
    Data Mining and Real Application Testing options
    -sh-3.2$ cat $ORACLE_HOME/network/admin/tnsnames.ora
    # tnsnames.ora.node1 Network Configuration File: /u01/app/oracle/rdbms/11.2.0/network/admin/tnsnames.ora.node1
    # Generated by Oracle configuration tools.

    RACDB_1 =
    (DESCRIPTION =
    (ADDRESS_LIST =
    (ADDRESS = (PROTOCOL = TCP)(HOST = node1)(PORT = 1521))
    )
    (CONNECT_DATA =
    (SERVICE_NAME = RACDB)
    )
    )

    RACDB_2 =
    (DESCRIPTION =
    (ADDRESS_LIST =
    (ADDRESS = (PROTOCOL = TCP)(HOST = node2)(PORT = 1521))
    )
    (CONNECT_DATA =
    (SERVICE_NAME = RACDB)
    )
    )

    -sh-3.2$

    Next, I start RMAN and allocate two Channels, one for each Instance (on each Node in the Cluster) and issue a BACKUP DATABASE that is automatically executed across both Channels.

    -sh-3.2$ rman target /

    Recovery Manager: Release 11.2.0.2.0 - Production on Sat Sep 13 23:23:24 2014

    Copyright (c) 1982, 2009, Oracle and/or its affiliates. All rights reserved.

    connected to target database: RACDB (DBID=762767011)

    RMAN> run
    2> {allocate channel ch1 device type disk connect 'sys/manager@RACDB_1';
    3> allocate channel ch2 device type disk connect 'sys/manager@RACDB_2';
    4> backup as compressed backupset database plus archivelog delete input;
    5> }

    using target database control file instead of recovery catalog
    allocated channel: ch1
    channel ch1: SID=61 instance=RACDB_1 device type=DISK

    allocated channel: ch2
    channel ch2: SID=61 instance=RACDB_2 device type=DISK


    Starting backup at 13-SEP-14
    current log archived
    channel ch1: starting compressed archived log backup set
    channel ch1: specifying archived log(s) in backup set
    input archived log thread=2 sequence=40 RECID=82 STAMP=857687640
    input archived log thread=1 sequence=114 RECID=84 STAMP=858204801
    input archived log thread=2 sequence=41 RECID=83 STAMP=857687641
    input archived log thread=1 sequence=115 RECID=86 STAMP=858208025
    channel ch1: starting piece 1 at 13-SEP-14
    channel ch2: starting compressed archived log backup set
    channel ch2: specifying archived log(s) in backup set
    input archived log thread=2 sequence=42 RECID=85 STAMP=858208000
    input archived log thread=1 sequence=116 RECID=87 STAMP=858209078
    input archived log thread=2 sequence=43 RECID=88 STAMP=858209079
    channel ch2: starting piece 1 at 13-SEP-14
    channel ch2: finished piece 1 at 13-SEP-14
    piece handle=+FRA/racdb/backupset/2014_09_13/annnf0_tag20140913t232445_0.279.858209109 tag=TAG20140913T232445 comment=NONE
    channel ch2: backup set complete, elapsed time: 00:00:26
    channel ch2: deleting archived log(s)
    archived log file name=+FRA/racdb/archivelog/2014_09_13/thread_2_seq_42.296.858207997 RECID=85 STAMP=858208000
    archived log file name=+FRA/racdb/archivelog/2014_09_13/thread_1_seq_116.263.858209079 RECID=87 STAMP=858209078
    archived log file name=+FRA/racdb/archivelog/2014_09_13/thread_2_seq_43.265.858209079 RECID=88 STAMP=858209079
    channel ch1: finished piece 1 at 13-SEP-14
    piece handle=+FRA/racdb/backupset/2014_09_13/annnf0_tag20140913t232445_0.275.858209099 tag=TAG20140913T232445 comment=NONE
    channel ch1: backup set complete, elapsed time: 00:00:56
    channel ch1: deleting archived log(s)
    archived log file name=+FRA/racdb/archivelog/2014_09_07/thread_2_seq_40.309.857687641 RECID=82 STAMP=857687640
    archived log file name=+FRA/racdb/archivelog/2014_09_13/thread_1_seq_114.295.858204777 RECID=84 STAMP=858204801
    archived log file name=+FRA/racdb/archivelog/2014_09_07/thread_2_seq_41.293.857687641 RECID=83 STAMP=857687641
    archived log file name=+FRA/racdb/archivelog/2014_09_13/thread_1_seq_115.305.858208001 RECID=86 STAMP=858208025
    Finished backup at 13-SEP-14

    Starting backup at 13-SEP-14
    channel ch1: starting compressed full datafile backup set
    channel ch1: specifying datafile(s) in backup set
    input datafile file number=00001 name=+DATA1/racdb/datafile/system.257.765499365
    input datafile file number=00004 name=+DATA2/racdb/datafile/undotbs2.257.765503281
    input datafile file number=00007 name=+DATA1/racdb/datafile/hemant_tbs.266.852139375
    input datafile file number=00008 name=+DATA3/racdb/datafile/new_tbs.256.855792859
    channel ch1: starting piece 1 at 13-SEP-14
    channel ch2: starting compressed full datafile backup set
    channel ch2: specifying datafile(s) in backup set
    input datafile file number=00002 name=+DATA2/racdb/datafile/sysaux.256.765502307
    input datafile file number=00003 name=+DATA1/racdb/datafile/undotbs1.259.765500033
    input datafile file number=00006 name=+DATA1/racdb/datafile/partition_test.265.809628399
    input datafile file number=00005 name=+DATA1/racdb/datafile/users.261.765500215
    channel ch2: starting piece 1 at 13-SEP-14
    channel ch2: finished piece 1 at 13-SEP-14
    piece handle=+FRA/racdb/backupset/2014_09_13/nnndf0_tag20140913t232557_0.293.858209175 tag=TAG20140913T232557 comment=NONE
    channel ch2: backup set complete, elapsed time: 00:12:02
    channel ch1: finished piece 1 at 13-SEP-14
    piece handle=+FRA/racdb/backupset/2014_09_13/nnndf0_tag20140913t232557_0.305.858209163 tag=TAG20140913T232557 comment=NONE
    channel ch1: backup set complete, elapsed time: 00:13:06
    Finished backup at 13-SEP-14

    Starting backup at 13-SEP-14
    current log archived
    channel ch1: starting compressed archived log backup set
    channel ch1: specifying archived log(s) in backup set
    input archived log thread=1 sequence=117 RECID=90 STAMP=858209954
    channel ch1: starting piece 1 at 13-SEP-14
    channel ch2: starting compressed archived log backup set
    channel ch2: specifying archived log(s) in backup set
    input archived log thread=2 sequence=44 RECID=89 STAMP=858209952
    channel ch2: starting piece 1 at 13-SEP-14
    channel ch1: finished piece 1 at 13-SEP-14
    piece handle=+FRA/racdb/backupset/2014_09_13/annnf0_tag20140913t233915_0.265.858209957 tag=TAG20140913T233915 comment=NONE
    channel ch1: backup set complete, elapsed time: 00:00:03
    channel ch1: deleting archived log(s)
    archived log file name=+FRA/racdb/archivelog/2014_09_13/thread_1_seq_117.309.858209953 RECID=90 STAMP=858209954
    channel ch2: finished piece 1 at 13-SEP-14
    piece handle=+FRA/racdb/backupset/2014_09_13/annnf0_tag20140913t233915_0.263.858209957 tag=TAG20140913T233915 comment=NONE
    channel ch2: backup set complete, elapsed time: 00:00:03
    channel ch2: deleting archived log(s)
    archived log file name=+FRA/racdb/archivelog/2014_09_13/thread_2_seq_44.295.858209951 RECID=89 STAMP=858209952
    Finished backup at 13-SEP-14

    Starting Control File and SPFILE Autobackup at 13-SEP-14
    piece handle=+FRA/racdb/autobackup/2014_09_13/s_858209961.295.858209967 comment=NONE
    Finished Control File and SPFILE Autobackup at 13-SEP-14
    released channel: ch1
    released channel: ch2

    RMAN>

    We can see that Channel ch1 was connected to Instance RACDB_1 and ch2 was connected to RACDB_2. Also, the messages indicate that both channels were running concurrently.
    I also verified that the Channels did connect to each instance :

    [root@node1 ~]# ps -ef |grep RACDB_1 |grep LOCAL=NO
    oracle 11205 1 1 23:24 ? 00:00:00 oracleRACDB_1 (LOCAL=NO)
    You have new mail in /var/spool/mail/root
    [root@node1 ~]# ps -ef |grep RACDB_1 |grep LOCAL=NO
    oracle 11205 1 3 23:24 ? 00:00:04 oracleRACDB_1 (LOCAL=NO)
    [root@node1 ~]# ps -ef |grep RACDB_1 |grep LOCAL=NO
    oracle 11205 1 4 23:24 ? 00:00:49 oracleRACDB_1 (LOCAL=NO)
    [root@node1 ~]#
    [root@node2 ~]# ps -ef |grep RACDB_2 | grep LOCAL=NO
    oracle 6233 1 0 23:24 ? 00:00:00 oracleRACDB_2 (LOCAL=NO)
    You have new mail in /var/spool/mail/root
    [root@node2 ~]# ps -ef |grep RACDB_2 |grep LOCAL=NO
    oracle 6233 1 0 23:24 ? 00:00:00 oracleRACDB_2 (LOCAL=NO)
    [root@node2 ~]# ps -ef |grep RACDB_2 |grep LOCAL=NO
    oracle 6233 1 2 23:24 ? 00:00:24 oracleRACDB_2 (LOCAL=NO)
    [root@node2 ~]#

    As soon as I closed the RMAN (client) session, the two server processes also terminated.

    This method (Method B) allows me to run an RMAN client session from any node in the Cluster and have RMAN server sessions running concurrently across all or some nodes of the Cluster, even when I have not designated a single, specific node as my RMAN Backups node.
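    Besides checking the server processes at the OS level with ps, the channel connections can also be verified from inside the database while the backup is running -- a sketch, assuming the RMAN sessions carry the usual "rman" program name :

    SQL> select inst_id, sid, program
      2  from gv$session
      3  where lower(program) like 'rman%';

    Each allocated channel should appear against its own INST_ID for the duration of the backup.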

    Edit : I have demonstrated using ALLOCATE CHANNEL to run an ad-hoc, interactive backup.  If you want to create a persistent script, you might want to use CONFIGURE CHANNEL and have the SYS password persisted in the configuration (saved in the controlfile), so that it does not appear in plain text in the script.
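    A sketch of such a persistent configuration (the password shown is, of course, only a placeholder) :

    RMAN> configure device type disk parallelism 2;
    RMAN> configure channel 1 device type disk connect 'sys/manager@RACDB_1';
    RMAN> configure channel 2 device type disk connect 'sys/manager@RACDB_2';

    Once this is in place, a plain BACKUP DATABASE command in a script automatically opens one channel against each instance, and SHOW ALL masks the connect string rather than echoing the password.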

    .
    .
    .

    Categories: DBA Blogs