Feed aggregator

APEX SIG in the UK - Finally!

Anthony Rayner - Thu, 2009-01-22 04:22
So after much discussion and a lot of hard work from Justin Hudd from e-DBA, the 1st APEX SIG event in the UK is now officially going ahead and booked for February 13th at the Oracle City Office in London. The agenda is as follows:
Then the usual post-event drinks at a nearby drinkery.

If you're interested in coming along, you'll need to register for the event via the website. Pricing varies as follows:
  • If you are already a member of UKOUG then it's free to attend (and you can buy additional places at the members rate of £80 plus VAT for colleagues).
  • If you are not a member then they have setup a non-member rate of £160 plus VAT.
There are only 55 places available, so attendance is limited.

Hopefully see you there!
Categories: Development

Oracle ACE Director No More

Mark A. Williams - Tue, 2009-01-20 16:16

Today marks my first day back at Oracle Corp. and I find myself in the same boat as Kevin Closson. I'm not saying I'm sitting in the same section, mind you, just that I am in the same boat as regards no longer being eligible for the Oracle ACE program.

I think I have removed the ACE and Director logos as well as text from the appropriate spots, but if I missed a place or two, well, I'll try to get them removed too!

As Kevin says, at least there is that vest thing...

SQL Plan Management (Part 2 of 4): SPM Aware Optimizer

Oracle Optimizer Team - Tue, 2009-01-20 11:59
(Keep sending your feedback and questions. We'll address them in Part 4.)

In Part 1, we saw how you can create SQL plan baselines. After you create a SQL plan baseline for a statement, subsequent executions of that statement will use the SQL plan baseline. From all the plans in the SQL plan baseline, the optimizer will select the one with the best cost in the current environment (including bind values, current statistics, parameters, etc.). The optimizer will also generate the best-cost plan that it would otherwise have used without a SQL plan baseline. However, this best-cost plan will not be used but instead added to the statement's plan history for later verification. In other words, the optimizer will use a known plan from the SQL plan baseline instead of a new and hitherto unknown plan. This guarantees no performance regression.
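
(As a quick reminder from Part 1: besides automatic capture, a baseline can also be seeded manually from the cursor cache. A minimal sketch, assuming the statement is still cached under the sql_id used later in this post; DBMS_SPM.LOAD_PLANS_FROM_CURSOR_CACHE returns the number of plans loaded, and manually loaded plans are marked as accepted:)

SQL> variable n number
SQL> exec :n := dbms_spm.load_plans_from_cursor_cache(sql_id => 'bfbr3zrg9d5cc');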

Let's see this plan selection process in action. First, we create a SQL plan baseline by enabling automatic plan capture and executing the query twice:

SQL> alter session set optimizer_capture_sql_plan_baselines = true;

Session altered.

SQL> var pid number
SQL> exec :pid := 100;

PL/SQL procedure successfully completed.

SQL> select p.prod_name, s.amount_sold, t.calendar_year
2    from sales s, products p, times t
3    where s.prod_id = p.prod_id
4      and s.time_id = t.time_id
5      and p.prod_id < :pid;

PROD_NAME AMOUNT_SOLD CALENDAR_YEAR
--------- ----------- -------------
...
9 rows selected.

SQL> select * from table(dbms_xplan.display_cursor('bfbr3zrg9d5cc', 0, 'basic note'));

PLAN_TABLE_OUTPUT
-------------------------------------------------------------------------------
EXPLAINED SQL STATEMENT:
------------------------
select p.prod_name, s.amount_sold, t.calendar_year from sales s,
products p, times t where s.prod_id = p.prod_id and s.time_id =
t.time_id and p.prod_id < :pid

Plan hash value: 2787970893

----------------------------------------------------------------
| Id  | Operation                             | Name           |
----------------------------------------------------------------
|   0 | SELECT STATEMENT                      |                |
|   1 |  NESTED LOOPS                         |                |
|   2 |   NESTED LOOPS                        |                |
|   3 |    HASH JOIN                          |                |
|   4 |     TABLE ACCESS BY INDEX ROWID       | PRODUCTS       |
|   5 |      INDEX RANGE SCAN                 | PRODUCTS_PK    |
|   6 |     PARTITION RANGE ALL               |                |
|   7 |      TABLE ACCESS BY LOCAL INDEX ROWID| SALES          |
|   8 |       BITMAP CONVERSION TO ROWIDS     |                |
|   9 |        BITMAP INDEX RANGE SCAN        | SALES_PROD_BIX |
|  10 |    INDEX UNIQUE SCAN                  | TIME_PK        |
|  11 |   TABLE ACCESS BY INDEX ROWID         | TIMES          |
----------------------------------------------------------------

25 rows selected.

SQL> select p.prod_name, s.amount_sold, t.calendar_year
2    from sales s, products p, times t
3    where s.prod_id = p.prod_id
4      and s.time_id = t.time_id
5      and p.prod_id < :pid;

PROD_NAME AMOUNT_SOLD CALENDAR_YEAR
--------- ----------- -------------
...
9 rows selected.

SQL> alter session set optimizer_capture_sql_plan_baselines = false;

Session altered.

SQL> select sql_text, plan_name, enabled, accepted from dba_sql_plan_baselines;

SQL_TEXT                                 PLAN_NAME                      ENA ACC
---------------------------------------- ------------------------------ --- ---
select p.prod_name, s.amount_sold, t.cal SYS_SQL_PLAN_fcc170b0a62d0f4d  YES YES
endar_year
from sales s, products p, times t
where s.prod_id = p.prod_id
and s.time_id = t.time_id
and p.prod_id < :pid



We can see that a SQL plan baseline was created for the statement. Suppose the statement is hard parsed again (we do it here by flushing the shared pool). Let's turn off SQL plan management and execute the query with a different bind value:


SQL> exec :pid := 100000;

PL/SQL procedure successfully completed.

SQL> alter system flush shared_pool;

System altered.

SQL> alter session set optimizer_use_sql_plan_baselines = false;

Session altered.

SQL> select p.prod_name, s.amount_sold, t.calendar_year
2    from sales s, products p, times t
3    where s.prod_id = p.prod_id
4      and s.time_id = t.time_id
5      and p.prod_id < :pid;

PROD_NAME AMOUNT_SOLD CALENDAR_YEAR
--------- ----------- -------------
...
960 rows selected.

SQL> select * from table(dbms_xplan.display_cursor('bfbr3zrg9d5cc', 0, 'basic note'));

PLAN_TABLE_OUTPUT
-------------------------------------------------------------------------------
EXPLAINED SQL STATEMENT:
------------------------
select p.prod_name, s.amount_sold, t.calendar_year from sales s,
products p, times t where s.prod_id = p.prod_id and s.time_id = t.time_id and
p.prod_id < :pid

Plan hash value: 2361178149

------------------------------------------
| Id  | Operation             | Name     |
------------------------------------------
|   0 | SELECT STATEMENT      |          |
|   1 |  HASH JOIN            |          |
|   2 |   HASH JOIN           |          |
|   3 |    PARTITION RANGE ALL|          |
|   4 |     TABLE ACCESS FULL | SALES    |
|   5 |    TABLE ACCESS FULL  | TIMES    |
|   6 |   TABLE ACCESS FULL   | PRODUCTS |
------------------------------------------

20 rows selected.



We can see that the optimizer selected a different plan because the new bind value makes the predicate less selective. Let's turn SQL plan management back on and re-execute the query with the same bind value:

SQL> alter session set optimizer_use_sql_plan_baselines = true;

Session altered.

SQL> select p.prod_name, s.amount_sold, t.calendar_year
2    from sales s, products p, times t
3    where s.prod_id = p.prod_id
4      and s.time_id = t.time_id
5      and p.prod_id < :pid;

PROD_NAME AMOUNT_SOLD CALENDAR_YEAR
--------- ----------- -------------
...
960 rows selected.

SQL> select * from table(dbms_xplan.display_cursor('bfbr3zrg9d5cc', 0, 'basic note'));

PLAN_TABLE_OUTPUT
-------------------------------------------------------------------------------
EXPLAINED SQL STATEMENT:
------------------------
select p.prod_name, s.amount_sold, t.calendar_year from sales s,
products p, times t where s.prod_id = p.prod_id and s.time_id =
t.time_id and p.prod_id < :pid

Plan hash value: 2787970893

----------------------------------------------------------------
| Id  | Operation                             | Name           |
----------------------------------------------------------------
|   0 | SELECT STATEMENT                      |                |
|   1 |  NESTED LOOPS                         |                |
|   2 |   NESTED LOOPS                        |                |
|   3 |    HASH JOIN                          |                |
|   4 |     TABLE ACCESS BY INDEX ROWID       | PRODUCTS       |
|   5 |      INDEX RANGE SCAN                 | PRODUCTS_PK    |
|   6 |     PARTITION RANGE ALL               |                |
|   7 |      TABLE ACCESS BY LOCAL INDEX ROWID| SALES          |
|   8 |       BITMAP CONVERSION TO ROWIDS     |                |
|   9 |        BITMAP INDEX RANGE SCAN        | SALES_PROD_BIX |
|  10 |    INDEX UNIQUE SCAN                  | TIME_PK        |
|  11 |   TABLE ACCESS BY INDEX ROWID         | TIMES          |
----------------------------------------------------------------

Note
-----
- SQL plan baseline SYS_SQL_PLAN_fcc170b0a62d0f4d used for this statement

29 rows selected.



The note at the bottom tells you that the optimizer is using the SQL plan baseline. In other words, we can see that the optimizer used an accepted plan in the SQL plan baseline in favor of a new plan. In fact, we can also check that the optimizer inserted the new plan into the statement's plan history:

SQL> select sql_text, plan_name, enabled, accepted from dba_sql_plan_baselines;


SQL_TEXT                                 PLAN_NAME                      ENA ACC
---------------------------------------- ------------------------------ --- ---
select p.prod_name, s.amount_sold, t.cal SYS_SQL_PLAN_fcc170b08cbcb825  YES NO
endar_year
from sales s, products p, times t
where s.prod_id = p.prod_id
  and s.time_id = t.time_id
  and p.prod_id < :pid

select p.prod_name, s.amount_sold, t.cal SYS_SQL_PLAN_fcc170b0a62d0f4d  YES YES
endar_year
from sales s, products p, times t
where s.prod_id = p.prod_id
  and s.time_id = t.time_id
  and p.prod_id < :pid



The 'NO' value for the accepted column implies that the new plan is in the plan history but is not available for use until it is verified to be a good plan. The optimizer will continue to use an accepted plan until new plans are verified and added to the SQL plan baseline. If there is more than one plan in the SQL plan baseline, the optimizer will use the one with the best cost under the then-current conditions (statistics, bind values, parameter settings and so on).
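
(A brief preview of Part 3, as a hedged sketch: an unaccepted plan such as SYS_SQL_PLAN_fcc170b08cbcb825 above can later be verified and, if it performs well, accepted by evolving the baseline. DBMS_SPM.EVOLVE_SQL_PLAN_BASELINE returns a verification report:)

SQL> var report clob
SQL> exec :report := dbms_spm.evolve_sql_plan_baseline(plan_name => 'SYS_SQL_PLAN_fcc170b08cbcb825');
SQL> print :report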

When you create a SQL plan baseline for a SQL statement, the SPM aware optimizer thus guarantees that no new plans will be used other than the ones in the SQL plan baseline. This prevents unexpected plan changes that sometimes lead to performance regressions.

Preventing new plans from being used is fine, but what if the new plans are in fact better? In Part 3, we will describe how new and improved plans are added to a SQL plan baseline.

Categories: DBA Blogs, Development

Silent upgrade troubles

Freek D’Hooge - Sun, 2009-01-18 17:08

Last week I was asked to write a little script to automate an upgrade of oracle client 9.2.0.1 to 9.2.0.8.
The reason for this was that we needed to update around 1,300 clients to enable them to connect to a 10g database (we couldn’t install 10g clients because other applications restricted the client version to 9i).

Ok, easy enough. Oracle allows you to automate installations and upgrades via response files and the response file for a client upgrade from 9.2.0.1 to 9.2.0.8 is very simple.
When I started testing the upgrade, I immediately spotted a first problem. The setup.exe (it was on Windows XP) started a new console and then returned directly to the prompt in the original console. This would make it impossible to check the return codes to know if an upgrade was successful or not.

The upgrade itself finished without a problem, but at the end the following message appeared in the newly started console: “Press enter to exit”.
Huh!? This was supposed to be a “silent” install, meaning no interaction needed. But here it was, asking to press enter to exit.
And the documentation was not telling anything about it.
After some searching, I found that you can specify the “-noconsole” flag when starting the setup, which suppresses the new console and avoids the question to press enter.
You would still see the question in the logfiles, but the installation presumes you responded to it and finishes the upgrade.

This still left me with the first problem: the prompt would return immediately while the upgrade was running in the background.
After some searching in the documentation I found a note stating that you need to modify the oraparam.ini file and change the BOOTSTRAP parameter from TRUE to FALSE.
Unfortunately this did not help. Yelling at it didn’t either.

Then I found that in 10g there was a “-waitforcompletion” flag you could set that would do exactly what I needed, so I tried it with the OUI shipped in the 9.2.0.8 patchset.
At first it didn’t work, but then I found MetaLink note 293044.1, which said that the setup.exe in Disk1 and Disk1/install were not the same and that the one in Disk1/install should be used with the “-waitforcompletion” flag.
At last it worked.

For those interested, here is the full command I used to start the silent upgrade:

start /wait C:\oracle\patches\9.2.0.8\Disk1\install\setup.exe -silent -noconsole -waitforcompletion -responsefile c:\oracle\patches\9.2.0.8\patchset.rsp -paramfile c:\oracle\patches\9.2.0.8\oraparam.ini
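
With -waitforcompletion in place, the installer's exit code finally becomes checkable from a script, which was the whole point. A minimal batch sketch (the log file path and the treatment of any non-zero exit code as failure are my own illustration, not from the patchset documentation):

@echo off
rem Run the silent upgrade and wait for it to finish.
start /wait C:\oracle\patches\9.2.0.8\Disk1\install\setup.exe -silent -noconsole -waitforcompletion -responsefile c:\oracle\patches\9.2.0.8\patchset.rsp -paramfile c:\oracle\patches\9.2.0.8\oraparam.ini
rem "if errorlevel 1" is true for any exit code of 1 or higher.
if errorlevel 1 (
    echo Upgrade failed with exit code %ERRORLEVEL% >> c:\oracle\patches\upgrade.log
) else (
    echo Upgrade completed successfully >> c:\oracle\patches\upgrade.log
)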

—————————-

Thanks to Geert for the yelling link :)


Categories: DBA Blogs

Save some disk space - One Reminder and a Cool Windows Command

Mihajlo Tekic - Sat, 2009-01-17 00:13
CPU Reminder

It’s the time of the year when the January CPU (Critical Patch Update) is released. I’d like to remind you to think about cleaning up your obsolete backups from the $ORACLE_HOME/.patch_storage directory (the ones that are no longer needed for rollback purposes).

Starting with 10g R2, Oracle backs up the affected libraries along with the rest of the affected files. Some of the libraries can be pretty big, and thus after several CPUs the amount of disk space consumed by these backups can be significant.

To avoid wasting disk space unnecessarily, you can use the OPatch utility’s util cleanup option to remove those backups that are no longer needed.

Below are the syntax and options available when using the OPatch tool to clean up old backups:



SYNTAX
opatch util cleanup [-invPtrLoc <path to oraInst.loc>]
                    [-jre <LOC>] [-oh <ORACLE_HOME>]
                    [-silent] [-report]
                    [-ps <patch ID with timestamp>, this will
                     be located under ORACLE_HOME/.patch_storage/]

OPTIONS
-invPtrLoc
Used to locate the oraInst.loc file. Needed when the
installation used the -invPtrLoc flag. This should be
the path to the oraInst.loc file.

-jre
This option tells OPatch to use JRE (java) from the
specified location instead of the default location
under Oracle Home. Both -jdk and -jre options cannot
be specified together. OPatch will display error in
that case.

-oh
The oracle home to work on. This takes precedence over
the environment variable ORACLE_HOME.

-ps
This option is used to specify the Patch ID with timestamp.
This Patch ID with timestamp should be the same as in
.patch_storage directory.

A directory by this name will be present under
ORACLE_HOME/.patch_storage. If this directory is specified
and is valid, then the contents specified in the description
will be cleaned up only for this patch. Otherwise, all patch
related directories will be acted upon by this utility.

-silent
In silent mode, the cleanup always takes place.

-report
Prints the operations without actually executing them.


Make sure you specify the patch ID (the -ps parameter) of the patch whose backups you want to remove.
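
For example (a hedged sketch; the patch ID with timestamp below is made up, so substitute a directory name you actually see under $ORACLE_HOME/.patch_storage, and consider running with -report first to preview what would be removed):

opatch util cleanup -ps 6810189_Jan_13_2009_21_30_47 -report
opatch util cleanup -ps 6810189_Jan_13_2009_21_30_47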

Cool Windows Command

A few days ago I learned about the forfiles command, which can be used to select a set of files using some criteria (like date modified) and run a command against each of them (like delete). (Something like find -exec in Unix.)

I found this very useful for cleaning up the trace and log files that are not needed anymore.

It is very handy, and I am really surprised that Microsoft finally came up with something like this. Before, I used to write batch scripts to implement the logic that forfiles provides.

It is available on Windows Vista, Windows 2003 Server and Windows 2008 Server.

You should really take a look if you didn’t know about this one already.
You can find more about forfiles (the syntax and a few examples) in this TechNet note.

Example:

Task: Delete all the trace files that are more than 30 days old


c:\app\oracle\diag\rdbms\db11g\db11g\trace>forfiles /s /m *.trc /d -30 /c "cmd /c
echo @FILE @FDATE"

"db11g_ora_13200.trc" 12/18/2008
"db11g_ora_18716.trc" 12/18/2008
"db11g_ora_18768.trc" 12/18/2008
"db11g_ora_18892.trc" 12/18/2008
"db11g_ora_3004.trc" 12/18/2008
"db11g_ora_4428.trc" 12/18/2008
"db11g_ora_6256.trc" 12/18/2008
"db11g_ora_6444.trc" 12/18/2008
"db11g_ora_6480.trc" 12/18/2008
"db11g_ora_6504.trc" 12/18/2008
"db11g_ora_6844.trc" 12/18/2008
"db11g_ora_6912.trc" 12/18/2008
"db11g_ora_6928.trc" 12/18/2008
"db11g_ora_7044.trc" 12/18/2008

c:\app\oracle\diag\rdbms\db11g\db11g\trace>forfiles /s /m *.trc /d -30 /c "cmd /c
del @FILE"


c:\app\oracle\diag\rdbms\db11g\db11g\trace>forfiles /s /m *.trc /d -30 /c "cmd /c
echo @FILE @FDATE"
ERROR: No files found with the specified search criteria.



Pretty handy, isn’t it?

Comic-Based Communication

Tahiti Views - Thu, 2009-01-15 00:33
These days, there are as many styles of documentation as there are of programming. Structured docs (waterfall model), topic-based writing (object-oriented development), less formal styles based around wikis (agile coding). Another one that I haven't seen given a name is what I think of as comic-based communication. If you grew up with comic books, fingers poised next to "continued on 3rd page",

ORA-12514 during switchover using Data Guard Broker

Mihajlo Tekic - Wed, 2009-01-14 22:45
I’ve seen, on several occasions, questions about the Data Guard broker being unable to automatically start the databases during a switchover, failing with ORA-12514: TNS:listener does not currently know of service requested in connect descriptor.

This is most likely because the special service db_unique_name_DGMGRL has not been registered properly with the listener.

This is one of the requirements when configuring Data Guard broker.

From the Oracle documentation (Reference Oracle® Data Guard Broker 10g Release 2 (10.2)):

To enable DGMGRL to restart instances during the course of broker operations, a service with a specific name must be statically registered with the local listener of each instance. The value for the GLOBAL_DBNAME attribute must be set to a concatenation of db_unique_name_DGMGRL.db_domain. For example, in the LISTENER.ORA file:

LISTENER = (DESCRIPTION =
(ADDRESS_LIST=(ADDRESS=(PROTOCOL=tcp)(HOST=host_name)
(PORT=port_num))))
SID_LIST_LISTENER=(SID_LIST=(SID_DESC=(SID_NAME=sid_name)
(GLOBAL_DBNAME=db_unique_name_DGMGRL.db_domain)
(ORACLE_HOME=oracle_home)))



I think the main reason this prerequisite gets overlooked is that nothing about it is mentioned in the Oracle 10g Data Guard Concepts and Administration Guide, in the chapters on standby database configuration.

So once the initial configuration is set up and works fine, the listener prerequisites for setting up Data Guard Broker are probably overlooked.

Of course, this applies only if Data Guard Broker is configured manually. If one uses OEM Grid Control to set up and configure the Data Guard Configuration, OEM will make the necessary changes.

Another point I want to make here is the importance of practicing various switchover and failover scenarios. Not only will you gain more experience and feel more comfortable doing the same thing in a real situation, but you will also learn about any hidden configuration holes that you may have overlooked or missed in your configuration steps.

Here is an example of this problem and its solution:

For this example I used two Oracle 10.2.0.1 databases, dg1db running as a primary database and dg2db running as a physical standby database.


[oracle@dg1 ~]$ dgmgrl
DGMGRL for Linux: Version 10.2.0.1.0 - Production

Copyright (c) 2000, 2005, Oracle. All rights reserved.

Welcome to DGMGRL, type "help" for information.
DGMGRL> connect sys/***
Connected.
DGMGRL> show configuration

Configuration
Name: dg-test
Enabled: YES
Protection Mode: MaxPerformance
Fast-Start Failover: DISABLED
Databases:
dg1db - Primary database
dg2db - Physical standby database

Current status for "dg-test":
SUCCESS


Now let's try the switchover. As you can see, the role transition was done successfully, but the database startup that should have happened at the end failed with ORA-12514: TNS:listener does not currently know of service requested in connect descriptor.


DGMGRL> switchover to dg2db;
Performing switchover NOW, please wait...
Operation requires shutdown of instance "dg1db" on database "dg1db"
Shutting down instance "dg1db"...
ORA-01109: database not open

Database dismounted.
ORACLE instance shut down.
Operation requires shutdown of instance "dg2db" on database "dg2db"
Shutting down instance "dg2db"...
ORA-01109: database not open

Database dismounted.
ORACLE instance shut down.
Operation requires startup of instance "dg1db" on database "dg1db"
Starting instance "dg1db"...
Unable to connect to database
ORA-12514: TNS:listener does not currently know of service requested in connect descriptor

Failed.
You are no longer connected to ORACLE
Please connect again.
Unable to start instance "dg1db"
You must start instance "dg1db" manually
Operation requires startup of instance "dg2db" on database "dg2db"
You must start instance "dg2db" manually
Switchover succeeded, new primary is "dg2db"
DGMGRL>


After starting the databases manually, I checked the status of the Data Guard configuration and it was SUCCESS.


DGMGRL> show configuration

Configuration
Name: dg-test
Enabled: YES
Protection Mode: MaxPerformance
Fast-Start Failover: DISABLED
Databases:
dg1db - Physical standby database
dg2db - Primary database

Current status for "dg-test":
SUCCESS

DGMGRL>


Now I will make the necessary changes to the listeners and try the switchover in the opposite direction. After the change, listener.ora should look like this (pay attention to GLOBAL_DBNAME = dg1db_DGMGRL.localdomain):


[oracle@dg1 admin]$ more listener.ora
# listener.ora Network Configuration File: /u01/app/oracle/product/10.2.0/ora10g/network/admin/listener.ora
# Generated by Oracle configuration tools.

SID_LIST_LISTENER =
(SID_LIST =
(SID_DESC =
(GLOBAL_DBNAME = dg1db_DGMGRL.localdomain)
(ORACLE_HOME = /u01/app/oracle/product/ora10g)
(SID_NAME= dg1db)
)
)

LISTENER =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = TCP)(HOST = dg1.localdomain)(PORT = 1521))
)

[oracle@dg1 admin]$ lsnrctl reload
[oracle@dg1 admin]$ lsnrctl status

LSNRCTL for Linux: Version 10.2.0.1.0 - Production on 13-JAN-2009 17:02:35

Copyright (c) 1991, 2005, Oracle. All rights reserved.

Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=dg1.localdomain)(PORT=1521)))
STATUS of the LISTENER
------------------------
Alias LISTENER
Version TNSLSNR for Linux: Version 10.2.0.1.0 - Production
Start Date 13-JAN-2009 08:08:17
Uptime 0 days 8 hr. 54 min. 18 sec
Trace Level off
Security ON: Local OS Authentication
SNMP OFF
Listener Parameter File /u01/app/oracle/product/10.2.0/ora10g/network/admin/listener.ora
Listener Log File /u01/app/oracle/product/10.2.0/ora10g/network/log/listener.log
Listening Endpoints Summary...
(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=dg1.localdomain)(PORT=1521)))
Services Summary...
Service "dg1db.localdomain" has 1 instance(s).
Instance "dg1db", status READY, has 1 handler(s) for this service...
Service "dg1db_DGB.localdomain" has 1 instance(s).
Instance "dg1db", status READY, has 1 handler(s) for this service...
Service "dg1db_DGMGRL.localdomain" has 1 instance(s).
Instance "dg1db", status UNKNOWN, has 1 handler(s) for this service...
Service "dg1db_XPT.localdomain" has 1 instance(s).
Instance "dg1db", status READY, has 1 handler(s) for this service...
The command completed successfully

I did the same thing with the second listener.
Now the switchover will complete without any problems.

DGMGRL> show configuration

Configuration
Name: dg-test
Enabled: YES
Protection Mode: MaxPerformance
Fast-Start Failover: DISABLED
Databases:
dg1db - Physical standby database
dg2db - Primary database

Current status for "dg-test":
SUCCESS

DGMGRL> switchover to dg1db
Performing switchover NOW, please wait...
Operation requires shutdown of instance "dg2db" on database "dg2db"
Shutting down instance "dg2db"...
ORA-01109: database not open

Database dismounted.
ORACLE instance shut down.
Operation requires shutdown of instance "dg1db" on database "dg1db"
Shutting down instance "dg1db"...
ORA-01109: database not open

Database dismounted.
ORACLE instance shut down.
Operation requires startup of instance "dg2db" on database "dg2db"
Starting instance "dg2db"...
ORACLE instance started.
Database mounted.
Operation requires startup of instance "dg1db" on database "dg1db"
Starting instance "dg1db"...
ORACLE instance started.
Database mounted.
Switchover succeeded, new primary is "dg1db"
DGMGRL>

DGMGRL> show configuration

Configuration
Name: dg-test
Enabled: YES
Protection Mode: MaxPerformance
Fast-Start Failover: DISABLED
Databases:
dg1db - Primary database
dg2db - Physical standby database

Current status for "dg-test":
SUCCESS

DGMGRL>


Cheers,
Mihajlo Tekic

Build an image gallery using APEX

Oracle Apex Notebook - Wed, 2009-01-14 18:39
This time, I'll show how to build a simple image gallery using Oracle Application Express (APEX) while taking advantage of the jQuery framework's capabilities. But first of all, follow the link to see the magic happening ;) There are several slideshow or gallery plugins for jQuery. You can choose the one you like best and re-adapt the code to fit the plugin's requirements. My choice was Pikachoose.
Categories: Development

Timepicker and APEX

Oracle Apex Notebook - Wed, 2009-01-14 18:39
ClockPick is a jQuery time picker plugin. As I showed in previous posts, it's fairly easy to integrate third-party JavaScript components with APEX. A demo of ClockPick working with APEX can be found here. How to do it? 1- Upload files to your APEX server. You can upload them directly to the server filesystem or use static files in Shared Components; jquery.clockpick.1.2.4.js clockpick.
Categories: Development

jQuery: 3rd Birthday and new release 1.3

Oracle Apex Notebook - Wed, 2009-01-14 18:37
jQuery is the way of the future when it comes to JavaScript framework integration with Oracle APEX. In the last months, several Oracle APEX bloggers have posted nice examples of taking APEX application interfaces to another level by taking advantage of jQuery functionality. Today, on the project's 3rd birthday, the jQuery development team has announced the release of jQuery v1.3, the latest and
Categories: Development

Using Oracle in a Windows VM

Edward Whalen - Tue, 2009-01-13 16:56
Microsoft recently introduced Hyper-V with Windows 2008 Server. In order to determine the overhead of running Oracle on Hyper-V, we performed a benchmark on identical hardware using both Hyper-V and a non-virtual server running Oracle 10g for Windows. The resulting whitepaper is available for download from our website: www.perftuning.com.

With Windows Server 2008, everything needed to support database server virtualization is available as an integral part of the operating system – Windows Server 2008 Hyper-V. This whitepaper demonstrates the performance capabilities of Oracle 10g on Hyper-V. It also provides several best practices and resources essential to virtualizing Oracle database workloads.

Please visit our website and take a look.

So, back to Oracle then?

Mark A. Williams - Sat, 2009-01-10 19:09

Next week will be my last week at my current location, as I have recently decided to return to Oracle Corp.

I will be going back to the world of C and UNIX derivatives having accepted a position on the non-Windows PLBDE team.

It's hard to say what that will mean for the blog here, which has been pretty quiet from my side as of late to be sure, but perhaps I will branch out from my typical topics of ODP.NET and/or Oracle on Windows. We'll see.

You've Got to Fight for Your Invoker's Rights

Tahiti Views - Fri, 2009-01-09 18:35
This post is about a PL/SQL feature that doesn't get enough respect, "invoker's rights". First off, what's its real name? Depending on the source, you'll see the feature name spelled "invoker's rights", "invokers' rights", or "invoker rights". That makes a difference -- you'll get different results in Google depending on what combination of singular, plural, and possessive you use. And to be

Using Quartz Scheduler on WebLogic Server

Debu Panda - Fri, 2009-01-09 12:47

I'm not a Quartz expert, but I keep getting emails with Quartz questions, as one of my blog posts was published as a featured article at TheServerSide.com 4-5 years back. I recently got some questions on using Quartz on WebLogic Server. I modified the original example to run on WebLogic Server.


You can download this sample from here.

The Example

1. This sample uses TestEJB, a stateless session bean with a method (yourMethod) that needs to be scheduled as a job/service.


2. This method execution is scheduled as a job from QuartzManagerBean, which is an EJB 3.0 session bean.

3. A generic servlet, QuartzServlet, uses dependency injection to invoke the QuartzManager from its init() method.

4. The QuartzServlet is loaded automatically using the load-on-startup mechanism. Hence, as soon as the application is deployed, the job will be submitted. A sketch of the core scheduling call follows.
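
For reference, the core scheduling logic looks roughly like this (a hedged sketch against the Quartz 1.x API, matching the scheduler version in the log below; the class and job names are illustrative, not the sample's actual code):

import org.quartz.CronTrigger;
import org.quartz.Job;
import org.quartz.JobDetail;
import org.quartz.JobExecutionContext;
import org.quartz.Scheduler;
import org.quartz.impl.StdSchedulerFactory;

// Hypothetical job class; the real sample delegates to TestEJB's yourMethod.
public class ReportJob implements Job {
    public void execute(JobExecutionContext ctx) {
        System.out.println("Generating report");
    }

    // Roughly what QuartzManagerBean does when QuartzServlet triggers it.
    public static void scheduleReportJob() throws Exception {
        Scheduler scheduler = new StdSchedulerFactory().getScheduler();
        JobDetail job = new JobDetail("reportJob", "reports", ReportJob.class);
        // Fire every five minutes: the "0 0/5 * * * ?" expression seen in the log below.
        CronTrigger trigger = new CronTrigger("reportTrigger", "reports", "0 0/5 * * * ?");
        scheduler.scheduleJob(job, trigger);
        scheduler.start();
    }
}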

How to Run the Example

1. You can download the code example from here.

2. This is designed to be deployed in the examplesServer. If you want to deploy in a different domain/server, you have to make appropriate changes to common.xml


3. Start your WebLogic Server. Unlike OC4J, which requires -userThreads, no special parameter or flag is required for WebLogic to run the example.


4. Set your environment variables. If you are using the examplesServer, you can use the %BEA_HOME%\wlserver_10.3\samples\domains\wl_server\setExamplesEnv.bat script to set the environment variables required to compile and deploy the application.

5. To compile and deploy, just run

ant

You will see the application being deployed in the console where WebLogic Server was started.

Jan 9, 2009 10:45:54 AM org.quartz.plugins.management.ShutdownHookPlugin initialize
INFO: Registering Quartz shutdown hook.
Jan 9, 2009 10:45:54 AM org.quartz.simpl.RAMJobStore initialize
INFO: RAMJobStore initialized.
Jan 9, 2009 10:45:54 AM org.quartz.impl.StdSchedulerFactory instantiate
INFO: Quartz scheduler 'WebLogicQuartzScheduler' initialized from default resource file in Quartz package: 'quartz.properties'
Jan 9, 2009 10:45:54 AM org.quartz.impl.StdSchedulerFactory instantiate
INFO: Quartz scheduler version: 1.3.0
Jan 9, 2009 10:45:54 AM org.quartz.core.QuartzScheduler start
INFO: Scheduler WebLogicQuartzScheduler_$_one started.
Invoking EJB to schedule Job ..
0 0/5 * * * ?
Scheduling Job ..
0 0/5 * * * ?
Timer scheduled now ..
Generating report
Report Generation Completed


Hope this helps!

School's Out

Mary Ann Davidson - Fri, 2009-01-09 08:09

When I look back on my career, particularly my educational background, I find it interesting to consider what has stuck with me, what I have forgotten, and what continues to be useful. I have sort of a strange background for a security weenie – an undergraduate mechanical engineering degree and an MBA. I don’t talk about the MBA all that much, partly because I know plenty of overblown egos who can’t wait to trot out the fact they have an MBA from Harvard, Wharton, Stanford, (insert name of prestigious MBA program here). Second, someone who worked for me awhile ago got an MBA and proceeded to treat his directs really poorly with newly-minted MBA hubris that was astonishing (I recognize the disease because I had a bad case of it once, myself). He thought that MBA stood for Me Before All and everyone who worked for him was there to advance his career. Third, but mostly first, Impressive Academic Credentials are increasingly unimportant the longer you are in the working world; what you actually accomplish is far more important than how well educated you are.

That said, I do think that the education I got in my MBA program gave me some “tricks of the trade” I still find inordinately useful. Now, not everything one learns at universities is timeless. Particularly in the liberal arts area, there are reinterpretations and revisions of widely held knowledge going on all the time. I guess that especially for academic areas that have been strip mined more than Appalachia, the way you create academic street creds for yourself is by regularly throwing out babies after drowning your colleagues in the bathwater. My father (who has a PhD in civil engineering and has been a university professor and administrator) liked to tell the joke about an engineering professor and an economics professor conversing on a campus. “I hear you give the same exam each semester in the economics department,” says the engineering guy. “Our students get hold of the old exams. So, how on earth do you get away with it in the econ department?” “Easy, “ says the economics professor. “We just change the answers.” Now, it’s not totally true that economics changes all that fast (although I hope we have driven a stake through John Maynard Keynes* and all his sycophants once and for all). But economic theories do change.

That said, among the most useful classes I took at business school, and one of the areas I refer to often in my work, are the economics classes. It’s just timeless, whether we are talking about micro- or macroeconomics. It’s particularly important in talking about public policy issues (yes, even around security). For example, I have talked about (and there are many who have) whether there is a market failure in information security. The discussion becomes (if there is a market failure) how to correct it? How do you make markets work efficiently (e.g., to remove externalities or “social costs?”) There are also a lot of offshoots of economics that have applications in other areas (game theory, for example, which was tremendously influential in Cold War strategy). I have thought about game theory in various security discussions (in particular, discussions in which industry can maximize our collective best interests or “payoff” by hanging together, like the prisoners of the prisoners’ dilemma, and yet one “defector” can achieve a higher individual payoff by selling other vendors down the game theory river. Alas, I have seen one vendor do just that and now everybody is worse off than if the vendor had not "defected.").

Financial theory is pretty useful, too. For example, the idea that you should disregard sunk costs in future decision making, a fancy way of saying ignore how much money you’ve already thrown at a problem in analyzing whether to go forward with it. I use that all the time, especially when someone starts in with “we’ve worked so long/hard/whatever on project X.” You have to look at the expectation of success going forward and what it will cost to get there (and other options for those resources) because sunk costs are by definition non-recoverable no matter what you do. You’ve spent the money, you can’t get it back: now what?

Some of those economic and financial market lessons have been reiterated in spades in looking at the current financial market meltdown. One of them is that “diversification reduces risk” in terms of a portfolio – the financial market theory of not having all your financial eggs in one basket. The corollary is that some kinds of risk are not diversifiable (e.g., if the entire market craps out, everything sinks together and diversification doesn’t help you, as many of us who’ve opened our 401K statements recently are all too painfully aware). You really wish, when reading about people who became stupidly overleveraged (fancy term for living way beyond your means and all on credit) that they had some basic comprehension of economics. Not merely for their own financial well-being, but so that people have realistic expectations of personal responsibility and the limits of government. (Hint: governments cannot create wealth, though they can print money. If they print enough money with no actual wealth behind it, it devalues the money you have. Getting someone else besides you to pay for your lifestyle is nice work if you can get it, but if you keep robbing Peter to give lazy bones Paul a free ride, Peter will find ways to work less or otherwise revolt.) None of this is rocket science, yet so many people never learned basic financial or economic principles. There really is no such thing as a free lunch.

Business law is another area I find incredibly useful. Granted, a lot of my pragmatic understanding of contract law in particular I learned as a contract administrator in the Navy (nothing like having a $200,000 claim over the meaning of the word “automatic” or the performance of a contract hinge on a misplaced comma, both of which I have experienced). One of the big lessons I learned is the legal equivalent of RTFM: let’s call it RTFC or “Read the Friggin’ Contract.” In particular, if you are negotiating a contract, realize that what you agree to with your counterpart has to survive the participants who drew up the contract (that’s why things are written down). The words “well, everybody knows what we meant by that” absolutely never enter into legal parlance. The legal equivalent of Murphy’s Law is that if something in a contract is ambiguous, it will be misinterpreted by at least one party to the contract. You need to read it (over and over and over) through all revisions to avoid expensive mistakes. (It took me a year to negotiate a licensing agreement with a partner, but at the end of the year, we had a deal and there were no disputes over the terms during the life of the contract. My counterpart at the other company is still a great gal pal to this day.)

The other lesson I learned is that contracts are not good vehicles for creating trust between parties. If someone is a slimy so-and-so, you cannot write a contract that will make them less of a slimy so-and-so. A contract will not create trust; it will tell you who has to do what under the terms of the contract, and possibly spell out remedies if parties do not perform under it. That assumes you can actually get a remedy out of someone that is timely or meaningful. I am always astonished at someone whose going-in position is "if it doesn't work, we can sue." (Kind of like people who get married with the expectation, "if it doesn't work out, we can get a divorce." Formulating an exit strategy on the day you ink a deal does not bode well for it, so why are you signing on the dotted line?) If someone rips off your intellectual property (for example, in a geographic area with not a whole lot of respect for intellectual property rights), good luck on a "remedy" that will make you whole. "Don't take that bet" is sometimes a better strategy than counting on a contract to make you whole if something goes wrong, especially if you have a high expectation that you are going to have a problem.

Another really useful construct I got from B-school was my quantitative methods class. Quantitative methods are a way of putting more numeric rigor around decision making, whether you are analyzing a business strategy or solving a problem. For example, you may be working on an optimization problem like "the truck routing problem": you have so many trucks, they need to make deliveries, and you want to find the most efficient routes without delaying deliveries too long. You have a number of constraints to work with – you only have so many trucks, so many deliveries, so much distance, and time limits within which you have to deliver things. If nothing else, you learn that while sometimes you can add resource (more trucks can make more deliveries in shorter time), you still have constraints you can't necessarily move (houses are not going to move closer together for your convenience, and you can't make time go faster).
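
Formally, this is a constrained optimization (linear programming) problem. A stylized version of the truck example (a transportation-problem simplification; real vehicle routing adds time windows and is much harder): choose shipment amounts $x_{ij}$ from truck i to delivery j to

$$\min_{x \ge 0} \sum_{i,j} c_{ij}\,x_{ij} \quad \text{subject to} \quad \sum_{j} x_{ij} \le K_i \;\;\text{(truck } i\text{'s capacity)}, \qquad \sum_{i} x_{ij} \ge d_j \;\;\text{(delivery } j \text{ fulfilled)}.$$

Adding trucks raises the $K_i$ you have to work with; nothing in the model moves the houses closer together.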

Particularly when I find myself dealing with (and they mean well) academics, think-tankers or government wonks who want to create public policy, I sometimes need to remind them of quantitative methods 101: resources are always constrained. Sometimes, people who don't actually have to earn a profit or make tradeoffs or implement their own public policies find lots of "really important" ways for other people to spend money in pursuit of A Noble Purpose. The reality is that sometimes you don't have any more resource, and even if you did, there might be a better way to use it that would lead to a public policy (or other) good, like better return for investors. In discussing public policy I try to talk about what the real-world constraints are, and inevitably I ask some of the government folks questions like, "Do you want absolute perfection, or do you want to give industry a way to say yes, where they can make progress and improve what they are doing in a cost-effective way that actually makes the problem better?" Something is almost always better than nothing. You can either listen, understand what people's constraints are and allow for them, or create a problem statement that says, "I want to have my cake, eat it too, have it be organic, sustainable, biodegradable, delicious, rich, and have zero calories: in fact, I want to lose weight while eating it." People who understand optimization problems know you'd do well to satisfy three of those conditions. You cannot satisfy all of them.

I even use the quantitative methods approach (finally!) in asking for more headcount. I am embarrassed to admit that it took me years at Oracle to figure out how to get more headcount. The method I finally hit on (more accurately, finally embraced after enough people told me the best way to do it, I ignored the advice, and I learned the hard way that they were right) is a simple exercise. I collate, in an organized way, what we are doing now, what we are not doing now, and what we can do with more resource. "What we get with more resource" is, in priority order, the additional work my team could take on and the value to the company of us doing that (sometimes the value is cost avoidance or something tangible we can roughly quantify). I am hardnosed about it: if the people who run various areas for me do not make a good enough case for their headcount requests, I draw the "wish list" line above those requests. In going to my boss, I will say, "here is what we get with more resource, in priority order: if I got one headcount, I'd add one person to do A; with a second, I would put them on B; then a third person to do C. Here is the value to the company for adding those bodies, and any headcount beyond that should go to someone else in your organization who can spend the resource better." Meaning, my goal is not empire building; it is adding enough resource to use it to add value (or reduce cost), and no more.

There is always a lot more to learn. In writing this blog entry, I realized I had forgotten a whole lot about economics and game theory (and I didn't relearn them in the time it took to write the blog). But one of my absolute favorite hangouts – the Ketchum Community Library (they say I am their best customer) – is right down the hill, and I will be trotting down there later today to see what books they have on economics and game theory.

The beginning of a new year is a bright and shiny opportunity to do things differently. While time is one of our most constrained resources (I may stay up late reading myself blind, but I still need some sleep), I always set a New Year’s resolution or two around learning a new skill, or beefing up a knowledge area. It doesn’t always have to be in computer security, either. Sometimes learning something new (or revisiting something old) can give a fresh perspective to your day job.

Hau’oli Makahiki Hou (Happy New Year!)

* FDR, in my opinion, did bubkes to get us out of the Great Depression and should not get credit for it; there was double-digit unemployment into 1942. On the contrary, the Second World War ended the Great Depression, and we can thank James Forrestal (who architected the unprecedented industrial gear shifting to war production) for helping end it far more than FDR did.


Book of the Week

A Roof Against the Rain by JoEllen Collins

This is a public service announcement: I know JoEllen – she is a friend of mine. She also happens to be a cracking good writer. I loved the book, which is about the loss of a loved one, and am recommending it broadly. I think it would be a particularly good "book group book" (I recommended it to my book group). Her characters are believable and there is a moral center (so few dreary modern works of fiction have one). It's also just a fun read, and it takes place in Sun Valley, Idaho, another reason I liked the book.

http://www.buy.com/prod/a-roof-against-the-rain/q/loc/106/208184025.html

Other good reads:

Why the Allies Won by Richard Overy

We forget that the victory of the Allies over the Axis in WWII was not a done deal at all but quite a close thing. This book describes the factors involved in that victory – leadership, economies, technology, among others.

http://www.amazon.com/s/ref=nb_ss_b_0_18?url=search-alias%3Dstripbooks&field-keywords=why+the+allies+won+by+richard+overy&x=0&y=0&sprefix=why+the+allies+won

The New Dealer’s War: FDR and the War Within World War II by Thomas Fleming

A very different "take" on FDR and an eye-opening one. From FDR's (likely) leaking of the Rainbow Plan to his ignoring of the Katyn Massacre, this book quite deftly shatters FDR hagiography.

http://www.amazon.com/New-Dealers-War-Within-World/dp/0465024653

For more information

I am completely stoked that my favorite Hawaiian music group Maunalua (together with Grammy winner John Cruz) is playing at an inaugural lu’au at the Hotel Monaco in DC on January 20. Nā mele no ka ‘oi!

http://inauguralluau.eventbrite.com/

The Prisoner’s Dilemma is an interesting book on the influence of game theory on the Cold War.

http://www.amazon.com/Prisoners-Dilemma-William-Poundstone/dp/038541580X/ref=sr_1_1?ie=UTF8&s=books&qid=1231442535&sr=1-1

Oracle's Linux Contributions

Sergio's Blog - Fri, 2009-01-09 06:40

This recently posted page summarizes some of the contributions Oracle's Linux developers make to Linux and the community in general, including:

  • OCFS2
  • BTRFS
  • Libstdc++
  • NFS on IPv6
  • RDS
  • T10-DIF Data Integrity
  • Etc.

I speak with prospects, customers, and partners on a regular basis, and when I tell them that we have people at Oracle whose sole job is to work on open source software, they are often very surprised. I'm glad this was posted.

Categories: DBA Blogs

Plan regressions got you down? SQL Plan Management to the rescue!

Oracle Optimizer Team - Thu, 2009-01-08 17:58
Part 1 of 4: Creating SQL plan baselines

Do you ever experience performance regressions because an execution plan has changed for the worse? If you have, then we have an elegant solution for you in 11g called SQL Plan Management (SPM). The next four posts on our blog will cover SPM in detail. Let's begin by reviewing the primary causes for plan changes.

Execution plan changes occur due to various system changes. For example, you might have (manually or automatically) updated statistics for some objects, or changed a few optimizer-related parameters. A more dramatic change is a database upgrade (say, from 10gR2 to 11g). All of these changes have the potential to cause new execution plans to be generated for many of your SQL statements. Most new plans are obviously improvements because they are tailored to the new system environment, but some might be worse, leading to performance regressions. It is the latter that cause sleepless nights for many DBAs.

DBAs have several options for addressing these regressions. However, what most DBAs want is simple: plans should only change when they will result in performance gains. In other words, the optimizer should not pick bad plans, period.

This first post in our series describes the concepts of SQL Plan Management and how to create SQL plan baselines. The second part will describe how and when these SQL plan baselines are used. The third part will discuss evolution, the process of adding new and improved plans to SQL plan baselines. Finally, the fourth part will describe user interfaces and interactions with other Oracle objects (like stored outlines).


Introduction

SQL Plan Management (SPM) allows database users to maintain stable yet optimal performance for a set of SQL statements. SPM incorporates the positive attributes of plan adaptability and plan stability, while simultaneously avoiding their shortcomings. It has two main objectives:
  1. prevent performance regressions in the face of database system changes
  2. offer performance improvements by gracefully adapting to database system changes
A managed SQL statement is one for which SPM has been enabled. SPM can be configured to work automatically, or it can be manually controlled either wholly or partially (described later). SPM helps prevent performance regressions by enabling the detection of plan changes for managed SQL statements. For this purpose, SPM maintains, on disk, a plan history consisting of the different execution plans generated for each managed SQL statement. An enhanced version of the Oracle optimizer, called the SPM aware optimizer, accesses, uses, and manages this information, which is stored in a repository called the SQL Management Base (SMB).

The plan history enables the SPM aware optimizer to determine whether the best-cost plan it has produced using the cost-based method is a brand new plan or not. A brand new plan represents a plan change that has potential to cause performance regression. For this reason, the SPM aware optimizer does not choose a brand new best-cost plan. Instead, it chooses from a set of accepted plans. An accepted plan is one that has been either verified to not cause performance regression or designated to have good performance. A set of accepted plans is called a SQL plan baseline, which represents a subset of the plan history.

A brand new plan is added to the plan history as a non-accepted plan. Later, an SPM utility verifies its performance, and keeps it as a non-accepted plan if it will cause a performance regression, or changes it to an accepted plan if it will provide a performance improvement. The plan performance verification process ensures both plan stability and plan adaptability.

The figure below shows the SMB containing the plan history for three SQL statements. Each plan history contains some accepted plans (the SQL plan baseline) and some non-accepted plans.


You can create a SQL plan baseline in several ways: using a SQL Tuning Set (STS); from the cursor cache; exporting from one database and importing into another; and automatically for every statement. Let's look at each in turn. The examples in this blog entry use the Oracle Database Sample Schemas so you can try them yourself.


Creating SQL plan baselines from STS

If you are upgrading from 10gR2 or have an 11g test system, you might already have an STS containing some or all of your SQL statements. This STS might contain plans that perform satisfactorily. Let's call this STS MY_STS. You can create a SQL plan baseline from this STS as follows:

SQL> variable pls number;
SQL> exec :pls := dbms_spm.load_plans_from_sqlset(sqlset_name => 'MY_STS', -
>        basic_filter => 'sql_text like ''select%p.prod_name%''');


This will create SQL plan baselines for all statements that match the specified filter.
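
To confirm what was loaded, you can query the DBA_SQL_PLAN_BASELINES view (a minimal sanity check; plans loaded through DBMS_SPM are recorded with ORIGIN = 'MANUAL-LOAD'):

SQL> select sql_handle, plan_name, origin, enabled, accepted
  2    from dba_sql_plan_baselines
  3    where sql_text like 'select%p.prod_name%';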

Creating SQL plan baselines from cursor cache

You can automatically create SQL plan baselines for any cursor that is currently in the cache as follows:

SQL> exec :pls := dbms_spm.load_plans_from_cursor_cache( -
>        attribute_name => 'SQL_TEXT', -
>        attribute_value => 'select%p.prod_name%');


This will create SQL plan baselines for all statements whose text matches the specified string. Several overloaded variations of this function allow you to filter on other cursor attributes.
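
For example, one overload takes a SQL_ID, letting you load the plans of a single statement straight from the cursor cache (a sketch; the SQL_ID below is a placeholder, so substitute one from V$SQL):

-- 'abcd1234efgh5' is a placeholder SQL_ID
SQL> exec :pls := dbms_spm.load_plans_from_cursor_cache( -
>        sql_id => 'abcd1234efgh5');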

Creating SQL plan baselines using a staging table

If you already have SQL plan baselines (say on an 11g test system), you can export them to another system (a production system for instance).

First, on the test system, create a staging table and pack the SQL plan baselines you want to export:

SQL> exec dbms_spm.create_stgtab_baseline(table_name => 'MY_STGTAB', -
>        table_owner => 'SH');

PL/SQL procedure successfully completed.

SQL> exec :pls := dbms_spm.pack_stgtab_baseline( -
>        table_name => 'MY_STGTAB', -
>        table_owner => 'SH', -
>        sql_text => 'select%p.prod_name%');


This will pack all SQL plan baselines for statements that match the specified filter. The staging table, MY_STGTAB, is a regular table that you should export to the production system using Data Pump Export.

On the production system, you can now unpack the staging table to create the SQL plan baselines:

SQL> exec :pls := dbms_spm.unpack_stgtab_baseline( -
>        table_name => 'MY_STGTAB', -
>        table_owner => 'SH', -
>        sql_text => 'select%p.prod_name%');


This will unpack the staging table and create SQL plan baselines. Note that the filter for unpacking the staging table is optional and may be different than the one used during packing. This means that you can pack several SQL plan baselines into a staging table and selectively unpack only a subset of them on the target system.
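
A quick way to verify the import on the target system (a sketch; baselines loaded via DBMS_SPM show ORIGIN = 'MANUAL-LOAD'):

SQL> select count(*)
  2    from dba_sql_plan_baselines
  3    where origin = 'MANUAL-LOAD';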

Creating SQL plan baselines automatically

You can create SQL plan baselines for all repeatable statements automatically by setting the parameter optimizer_capture_sql_plan_baselines to TRUE (default is FALSE). The first plan captured for any statement is automatically accepted and becomes part of the SQL plan baseline, so enable this parameter only when you are sure that the default plans are performing well.

You can use the automatic plan capture mode when you have upgraded from a previous database version. Set optimizer_features_enable to the earlier version and execute your workload. Every repeatable statement will have its plan captured thus creating SQL plan baselines. You can reset optimizer_features_enable to its default value after you are sure that all statements in your workload have had a chance to execute.
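
For instance, if you have just upgraded from 10.2.0.4 to 11.1, the sequence might look like this (the version strings are illustrative; use the versions that match your upgrade):

SQL> alter system set optimizer_capture_sql_plan_baselines = true;
SQL> alter system set optimizer_features_enable = '10.2.0.4';
-- run your workload here; every repeatable statement gets a baseline
SQL> alter system set optimizer_features_enable = '11.1.0.6';
SQL> alter system set optimizer_capture_sql_plan_baselines = false;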

Note that this automatic plan capture occurs only for repeatable statements, that is, statements that are executed at least twice. Statements that are only executed once will not benefit from SQL plan baselines since accepted plans are only used in subsequent hard parses.

The following example shows a plan being captured automatically when the same statement is executed twice:

SQL> alter session set optimizer_capture_sql_plan_baselines = true;

Session altered.

SQL> var pid number
SQL> exec :pid := 100;

PL/SQL procedure successfully completed.

SQL> select p.prod_name, s.amount_sold, t.calendar_year
  2    from sales s, products p, times t
  3    where s.prod_id = p.prod_id
  4      and s.time_id = t.time_id
  5      and p.prod_id < :pid;

PROD_NAME AMOUNT_SOLD CALENDAR_YEAR
--------- ----------- -------------
...
9 rows selected.

SQL> select p.prod_name, s.amount_sold, t.calendar_year
  2    from sales s, products p, times t
  3    where s.prod_id = p.prod_id
  4      and s.time_id = t.time_id
  5      and p.prod_id < :pid;

PROD_NAME AMOUNT_SOLD CALENDAR_YEAR
--------- ----------- -------------
...
9 rows selected.

SQL> alter session set optimizer_capture_sql_plan_baselines = false;

Session altered.
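
At this point the statement has a SQL plan baseline containing one accepted plan. A quick way to see it (a sketch; automatically captured plans are recorded with ORIGIN = 'AUTO-CAPTURE'):

SQL> select plan_name, origin, enabled, accepted
  2    from dba_sql_plan_baselines
  3    where sql_text like 'select p.prod_name%';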

Automatic plan capture will not occur for a statement if a stored outline exists for it and is enabled and the parameter use_stored_outlines is TRUE. In this case, turn on incremental capture of plans into an STS using the function capture_cursor_cache_sqlset() in the DBMS_SQLTUNE package. After you have collected the plans for your workload into the STS, manually create SQL plan baselines using the method described earlier. Then, disable the stored outlines or set use_stored_outlines to FALSE. From now on, SPM will manage your workload and stored outlines will not be used for those statements.
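
A sketch of that migration path (the STS name is arbitrary; capture_cursor_cache_sqlset polls the cursor cache for the duration you specify):

-- collect the plans currently in use (an hour of polling, sampling every 5 minutes)
SQL> exec dbms_sqltune.create_sqlset(sqlset_name => 'MY_OUTLINE_STS');
SQL> exec dbms_sqltune.capture_cursor_cache_sqlset( -
>        sqlset_name => 'MY_OUTLINE_STS', -
>        time_limit => 3600, -
>        repeat_interval => 300);
-- load the captured plans as SQL plan baselines
SQL> exec :pls := dbms_spm.load_plans_from_sqlset( -
>        sqlset_name => 'MY_OUTLINE_STS');
-- stop using the stored outlines
SQL> alter session set use_stored_outlines = false;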

In this article, we have seen how to create SQL plan baselines. In the next, we will describe the SPM aware optimizer and how it uses SQL plan baselines.

Categories: DBA Blogs, Development

What is Direct Data Placement

Padraig O'Sullivan - Tue, 2009-01-06 01:02
I'm currently studying Oracle's white paper on Exadata and came across the following paragraph: "Further, Oracle's interconnect protocol uses direct data placement (DMA - direct memory access) to ensure very low CPU overhead by directly moving data from the wire to database buffers with no extra data copies being made." This got me wondering what direct data placement is. First off, the interconnect...
