Feed aggregator

Shared Feeds

Nigel Thomas - Sun, 2009-02-01 10:07
My posting rate has been quite low recently, but I have been enjoying using Google Reader to keep up with what everyone else is saying.

I've been sharing items with some friends / colleagues, but it seems harmless to open this up to the world. The topics are eclectic, but mainly around product development, event stream processing, RDBMS and data warehouse. Maybe the very occasional Dilbert or xkcd. You're all very welcome to see what I've shared - the links are now on the right hand side of this blog. And if you too are using a reader, here is a direct link to my shared items and here is the atom feed for them. If Google could just pack up my daily shares as a single blog post, wouldn't that be great.

I'm still successfully avoiding Twitter... I've no idea how I would find the time, and having seen Philip Schofield mentioned as a user (on a low-rent BBC Sunday afternoon show) - apparently he is now London's #5 tw*t(terer) - it must be hopelessly uncool anyway.

SQLstream launches v2.0 of its Event Stream Processing engine

Nigel Thomas - Sun, 2009-02-01 08:55
As well as working as a freelance Oracle consultant, I have spent most of the last year working with SQLstream Inc on their Event Stream Processing engine, version 2.0 of which has now been launched.

My initial point of contact was Julian Hyde, with whom I worked in the Oracle CASE / Oracle Designer team in the early 90s. He went out to Oracle HQ and worked on bit-mapped indexes, then spent time at Broadbase before becoming best known as the founder-architect for the Mondrian OLAP server.

Event Stream Processing (ESP) is a development of what we might have called Active Database a few years ago. Rather than running queries against stored data as in an RDBMS, we register active queries which can be used to filter data arriving from any kind of stream. These active queries behave just like topic subscriptions in a message bus - except that unlike typical JMS implementations, the SQLstream query engine can do more than just filter. SQL queries against the data-in-flight can also:
  • join data between multiple streams, or between streams and tables
  • aggregate data within a stream to give rolling or periodic aggregates based on time or row counts
  • apply built-in and user-defined functions to columns within the stream's rows
  • build a network of streams and views to support complex transformation and routing requirements
Queries are defined using SQL:2003 with minimal extensions (there are some SQL examples on the web site); so developers used to working with an RDBMS will find it easy to grok the relational messaging approach. Unlike a typical message bus, most transformation and routing takes place entirely inside the SQLstream environment, rather than being delegated to external applications.
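To give a flavour of the relational messaging style, here is an illustrative streaming query (my own sketch, with invented stream, table and column names; the exact SQLstream syntax may differ, so treat it as indicative only):

-- Illustrative sketch: rolling one-minute total per product over a
-- hypothetical Orders stream, joined to a static Products reference table.
SELECT STREAM o.prod_id,
       p.prod_name,
       SUM(o.amount) OVER (ORDER BY o.ROWTIME
                           RANGE INTERVAL '1' MINUTE PRECEDING) AS amount_last_minute
FROM   Orders o
JOIN   Products p ON p.prod_id = o.prod_id;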

My experience working at JMS vendor SpiritSoft convinced me of the value of asynchronous message-based techniques, which in the last 10 years have spread out from high-end financial systems to ESB implementations all over the place. Since the early 80s we have seen how the RDBMS swept all other forms of structured data storage away. Now SQLstream's relational messaging approach removes the impedance mismatch between how we store data, and how we process it on the wire. In principle, this architecture can subsume both message-oriented (ESB style) and data-oriented (ETL style) integration architectures.

It should be said that "other ESP engines are available". Oracle itself has two projects on the go: its own CQL which I believe is still internal, and Oracle CEP (the rebranded BEA WebLogic Event Server - which itself is (or was) based on the open source Esper project). These two development threads will no doubt combine at some point (perhaps they already have?). IBM also has two or three independent CEP (complex event processing) projects on the go.

I think the same thing will happen to ESP / CEP as happened to ETL/EAI tools in the last ten or fifteen years. For sure, the database/application server vendors (especially Oracle and IBM) will sell plenty of this software within their respective client bases. An Oracle CEP system that was (or could be) tightly integrated with the RDBMS itself - maybe executing PL/SQL as well as Java functions - would be an easy sell. However multi-vendor sites will be interested in an agnostic / vendor-independent tool as a basis for their integration efforts. Just as Informatica has carved out a place for itself in competition with Oracle's ODI and OWB and IBM's DataStage, so SQLstream and other ESP vendors can fight for the common ground. It will be very interesting to see how it all turns out.

See the SQLstream product page for background, plus posts from Julian Hyde, CTO of SQLstream and Nicholas Goodman, Director of BI Solutions at Pentaho.

PS: here's another post from David Raab.

Java Embedding in BPEL Process

Peeyush Tugnawat - Fri, 2009-01-30 14:34

If you have a Java embedding in your BPEL process and have ever wondered where the output is written (if not to domain.log) when you do something like a

System.out.println statement from your code within the Java embedding,

the output gets written to a file under <soa_suite / oas_install_home>/opmn/logs/default*.log

How to enable trace for a CRM session

Aviad Elbaz - Thu, 2009-01-29 09:30

I was asked to examine a performance issue in one of our CRM application screens after some users complained about a specific action taking a long time.

First, I tried to enable trace for the CRM session, but it turned out that it's definitely not simple to identify a CRM session, especially in my case, where one application session opens two (sometimes more) database sessions. It's quite impossible, actually.

So how is it possible to trace those CRM sessions anyway?

Oracle provides an option to execute custom code for every session opened in the database through a system profile. This profile is called “Initialization SQL Statement - Custom” (the short name is 'FND_INIT_SQL') and allows custom SQL/PL*SQL code.

Once this profile is set at user level, each session opened for this user will first execute the code within the profile. No matter which type of activity the user does – Forms, CRM, concurrent request, or anything else that opens a database session – the content of this profile will be executed.

So, clearly, we can use this capability to enable trace for user sessions.

Steps to enable trace for specific user:

  1. Login with “Application Developer” responsibility
  2. Open the “Create Profile” form –> Query the profile “FND_INIT_SQL”
  3. Make sure that “visible” and “updateable” are checked at user level.

     
  4. Switch responsibility to “System Administrator”
  5. Navigate to Profile –> System –> Query the profile “Initialization SQL Statement - Custom” at user level for the user we would like to enable trace for.

     
  6. Update the profile option value at user level to the following:

    BEGIN FND_CTL.FND_SESS_CTL('','', '', 'TRUE','','ALTER SESSION SET TRACEFILE_IDENTIFIER='||''''||'AVIADE' ||''''||' EVENTS ='||''''||' 10046 TRACE NAME CONTEXT FOREVER, LEVEL 12 '||''''); END;    
    ** Just replace AVIADE with the user you enable trace for.

      
  7. Now, after the user you enabled trace for logs out of the application, they can log in again and reproduce the issue.
     
  8. When you have finished reproducing the issue, disable the trace by clearing the profile option value and updating it to NULL (the “Initialization SQL Statement – Custom” profile, of course..).
  9. The trace file(s) will be waiting for you in your udump directory (the user_dump_dest init parameter), which you can locate with the query shown below.
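As a quick aside (my addition, not part of the original post), a simple way to find that directory, assuming you have SELECT access on v$parameter, is:

-- locate the directory the 10046 trace files are written to
select value as trace_directory
  from v$parameter
 where name = 'user_dump_dest';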

Since I enabled and disabled the trace quite a few times while investigating my performance issue, I wrote these handy simple programs which enable and disable the trace for a user in a quick and easy manner.

Execute this program to enable trace for a specific user: (substitute step 6 above)

DECLARE
  l_ret     boolean;
  l_user_id number;
BEGIN

  select user_id
    into l_user_id
    from fnd_user
   where user_name = '&&USER_NAME';

  l_ret := fnd_profile.SAVE(X_NAME        => 'FND_INIT_SQL',
                            X_VALUE       => 'BEGIN FND_CTL.FND_SESS_CTL('''','''','''', ''TRUE'','''',''ALTER SESSION SET TRACEFILE_IDENTIFIER=''||''''''''||''&&USER_NAME'' ||''''''''||'' EVENTS =''||''''''''||'' 10046 TRACE NAME CONTEXT FOREVER, LEVEL 12 ''||''''''''); END;',
                            X_LEVEL_NAME  => 'USER',
                            X_LEVEL_VALUE => l_user_id);
  commit;

  dbms_output.put_line('Profile has been updated successfully');

EXCEPTION
  when others then
    dbms_output.put_line('Failed to update the profile: '||sqlerrm);
END;

 

Execute this program to disable trace for a specific user: (substitute step 8 above)

DECLARE
  l_ret     boolean;
  l_user_id number;
BEGIN

  select user_id
    into l_user_id
    from fnd_user
   where user_name = '&USER_NAME';

  l_ret := fnd_profile.DELETE(X_NAME        => 'FND_INIT_SQL',
                              X_LEVEL_NAME  => 'USER',
                              X_LEVEL_VALUE => l_user_id);
  commit;

  dbms_output.put_line('Profile has been erased successfully');

EXCEPTION
  when others then
    dbms_output.put_line('Failed to erase the profile: '||sqlerrm);
END;

Hope you find it helpful…
Feel free to leave a comment or share your thought about this issue.

Aviad

Categories: APPS Blogs

Drizzle: A Pretty Cool Project

Padraig O'Sullivan - Wed, 2009-01-28 20:38
Drizzle is a pretty cool project whose progress I've started following in the last few weeks. I'm trying to contribute in a tiny way if I can by confirming bug reports. If I had more time, I'd like to try resolving some bugs. Hopefully, I'll find some spare time to do that in the future. I think it's definitely a project worth keeping an eye on though. Check it out if you have the time.

Padraig

ORA-27123: unable to attach to shared memory segment

Sabdar Syed - Wed, 2009-01-28 06:27
Hello,

Of late, on one of our test Unix (Sun Solaris) boxes, a database user trying to connect to the database has been getting the error "ORA-27123: unable to attach to shared memory segment" with a permission denied issue.

Initially we thought that there might be an issue with the SGA memory area, or perhaps with the shared memory segments and semaphores allocated for the instance. But later we found that the permissions on the ORACLE_HOME directory tree had accidentally been changed to full permissions.

Here are our findings:

$ sqlplus testusr/password@testdb

SQL*Plus: Release 10.2.0.2.0 - Production on Tue Jan 27 20:53:44 2009

Copyright (c) 1982, 2005, Oracle. All Rights Reserved.

ERROR:
ORA-01034: ORACLE not available
ORA-27123: unable to attach to shared memory segment
SVR4 Error: 13: Permission denied
As per Oracle Error Messages:

Error: ORA-27123: unable to attach to shared memory segment
Cause: shmat() call failed
Action: check permissions on segment, contact Oracle support
Check the oracle executable file permission.

$ cd $ORACLE_HOME/bin
$ ls -l oracle
-rwxrwxr-x 1 oracle dba 119582976 Feb 3 2008 oracle
Here the oracle file has permissions -rwxrwxr-x, i.e. 775, but this file must have the permissions -rwsr-s--x, i.e. 6751.

Change the permissions for oracle file.

$ cd $ORACLE_HOME/bin
$ chmod 6751 oracle
$ ls -l oracle
-rwsr-s--x 1 oracle dba 119582976 Feb 3 2008 oracle
After changing the permissions on the oracle executable, all users are now able to connect to the database without any errors.

Note: For further information, refer to Oracle Metalink Note ID: 1011995.6, Subject: COMMON ORACLE PERMISSION PROBLEMS ON UNIX.

Regards,
Sabdar Syed

Outbound_connect_timeout

Fairlie Rego - Tue, 2009-01-27 17:34
Outbound_connect_timeout comes into play when nodes in a cluster are down and we cannot wait for the OS timeout, as this causes long delays in connect time. For example, on Solaris the value of tcp_ip_abort_interval = 180000, which is 180 seconds, i.e. 3 minutes.

In this post I will demonstrate how outbound_connect_timeout (OCT) can effectively avoid timeouts experienced by clients connecting to RAC nodes.

If we take an example of the following connect string

TEST =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (LOAD_BALANCE = OFF)
      (ADDRESS = (PROTOCOL = TCP)(HOST = sdby1-vip)(PORT = 1521))
      (ADDRESS = (PROTOCOL = TCP)(HOST = sdby2-vip)(PORT = 1521))
      (ADDRESS = (PROTOCOL = TCP)(HOST = sdby3-vip)(PORT = 1521))
      (ADDRESS = (PROTOCOL = TCP)(HOST = sdby4-vip)(PORT = 1521))
      (ADDRESS = (PROTOCOL = TCP)(HOST = prim1-vip)(PORT = 1521))
    )
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = TEST)
    )
  )


In the above alias the first 4 nodes are non-existent, and the service TEST runs on the node prim1, which is listed last in the alias.

I have deliberately set load_balance to OFF so that the client has to traverse through all nodes serially.

If we set an OUTBOUND_CONNECT_TIMEOUT of 3 seconds in the client's sqlnet.ora
(client is 10.2.0.3 Solaris) then the time to establish the connection is around 12 seconds.
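For reference, a minimal client-side sqlnet.ora sketch for this test (my addition, not quoted from the original trace; the value is in seconds) would be:

# sqlnet.ora on the client: give up on each address after 3 seconds
SQLNET.OUTBOUND_CONNECT_TIMEOUT = 3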

If we were to run a sqlnet trace on the connection we see that the connection starts at

Connection started at
[27-JAN-2009 22:52:33:741] --- TRACE CONFIGURATION INFORMATION FOLLOWS ---
[27-JAN-2009 22:52:33:741] New trace stream is /tmp/cli_262.trc

and the first address which is tried is

[27-JAN-2009 22:52:33:757] nsc2addr: (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=sdb1-vip)(PORT=1521))(CONNECT_DATA=(SERVER=DEDICATED)(SERVICE_NAME=TEST)(CID=(PROGRAM=sqlplus@bart)(HOST=bart)(USER=oracle))))

Moving on we can see that OCT is enabled due to the line

[27-JAN-2009 22:52:33:759] nstoSetupTimeout: entry
[27-JAN-2009 22:52:33:759] nstoSetupTimeout: ATO enabled for ctx=0x1001c9280, val=3000(millisecs)


After this, the following pattern is seen continuously

[27-JAN-2009 22:52:33:776] nsevwtsg: entry
[27-JAN-2009 22:52:33:776] nsevwtsg: cid=0
[27-JAN-2009 22:52:33:776] nsevwait: entry
[27-JAN-2009 22:52:33:776] nsevwait: 1 registered connection(s)
[27-JAN-2009 22:52:33:776] nsevwait: 0 pre-posted event(s)
[27-JAN-2009 22:52:33:776] nsevwait: waiting for transport event (0 thru 0)...
[27-JAN-2009 22:52:33:776] nsevwait: 0 newly-posted event(s)
[27-JAN-2009 22:52:33:776] nsevwait: 0 posted event(s)
[27-JAN-2009 22:52:33:776] nsevwait: exit (0)
[27-JAN-2009 22:52:33:776] nstoToqCheckSingle: entry
[27-JAN-2009 22:52:33:776] nstoToqCheckSingle: normal exit


and the timeout occurs in 3 seconds

[27-JAN-2009 22:52:36:771] nstoHandleEventTO: ATO occurred for ctx=0x1001c9280

Ergo the OCT seems to be working perfectly

The client then tries to establish a connection with sdby2-vip and experiences
the same timeout

The connection is finally established at

[27-JAN-2009 22:52:45:915] nscon: no connect data
[27-JAN-2009 22:52:45:915] nscon: connect handshake is complete
[27-JAN-2009 22:52:45:915] nscon: nsctxinf[0]=0x41, [1]=0x41
[27-JAN-2009 22:52:45:915] nscon: normal exit


which translates to around 12 seconds.

From my testing for clients on Windows:
  • OUTBOUND_CONNECT_TIMEOUT does not work with the 10.2.0.3 base Windows client (clients may get errors)
  • It works with patch 21 on top of 10.2.0.3
  • OUTBOUND_CONNECT_TIMEOUT does work with the 11g client

Without OCT, if we trace the connection again, we see that we wait for more than 3 minutes


[28-JAN-2009 14:18:35:299] nttcni: entry
[28-JAN-2009 14:18:35:299] nttcni: trying to connect to socket 10.
[28-JAN-2009 14:22:19:915] ntt2err: entry

[28-JAN-2009 14:22:19:915] ntt2err: soc 10 error - operation=1, ntresnt[0]=505, ntresnt[1]=145, ntresnt[2]=0
[28-JAN-2009 14:22:19:915] ntt2err: exit
[28-JAN-2009 14:22:19:915] nttcni: exit
[28-JAN-2009 14:22:19:915] nttcon: exit
[28-JAN-2009 14:22:19:921] nserror: entry
[28-JAN-2009 14:22:19:921] nserror: nsres: id=0, op=65, ns=12535, ns2=12560; nt[0]=505, nt[1]=145, nt[2]=0; ora[0]=0, ora[1]=0, ora[2]=0
[28-JAN-2009 14:22:19:921] nsopen: unable to open transport
[28-JAN-2009 14:22:19:921] nsiocancel: entry

Siebel Record Locking with Web Services

Peeyush Tugnawat - Tue, 2009-01-27 16:03

 

Error: "The selected record has been modified by another user since it was retrieved"

Reason: In our case the error was caused by custom scripts. Execution of custom scripts on the Business Components was slowing down the WriteRecord process, and this was resulting in the record locking issue.

One way to know if custom scripts are causing this error is to disable all custom scripts (if any) and then run the web service again to check whether you still hit the same error. If so, the following references might help.

 

Related References:

Troubleshooting Workflow Process Execution Problems
http://download.oracle.com/docs/cd/B40099_02/books/BPFWorkflow/BPFWorkflow_Admin20.html

Defining an Error Exception to Handle an Update Conflict
http://download.oracle.com/docs/cd/B40099_02/books/BPFWorkflow/BPFWorkflow_Design22.html#wp1223641

ORABPEL-10902

Peeyush Tugnawat - Tue, 2009-01-27 16:01

I was getting this error when trying to deploy or compile a BPEL project.

Error:
[Error ORABPEL-10902]: compilation failed
[Description]: in "bpel.xml", XML parsing failed because "undefined part element.
In WSDL at

Please make sure the spelling of the element QName is correct and the WSDL import is complete.
".
[Potential fix]: n/a.

 

The problem was that, within my WSDL file, the schema definition included an import statement whose schemaLocation attribute was not pointing to the correct path:

<xsd:schema xmlns:xsd="http://www.w3.org/2001/XMLSchema">
    <xsd:import namespace="something" schemaLocation="http://something:someport/MyXSD.xsd"/>
</xsd:schema>

Once this was changed to the correct URL, the error went away.

How to configure Standalone Fuego BEA Aqualogic Oracle BPM Studio to work with Secure Web Services?

Arvind Jain - Mon, 2009-01-26 18:00
While evaluating BEA BPM Studio I had to struggle a bit with how to configure the standalone Fuego / BEA AquaLogic / Oracle BPM Studio (I am trying to give credit to all the companies here :) ) to work with secure web services.
Now I have secured web services orchestrated and also using encryption in my BPM Processes. Here is the meat of the matter ...
In order to communicate with secured web services using SSL encryption (those with a WSDL endpoint starting with https://), you need to have the certificates from those servers installed in your keystore.
For BPM Standalone, these are the steps. Before you begin, set JAVA_HOME to C:\OraBPMStudioHome\eclipse\jre if you have not done so already.
1. Download the .cer file from the server. (One way is to use the IE browser to get the file and export it from the browser to a local directory.)
2. Put this file in %JAVA_HOME%\jre\lib\security (you can actually put it anywhere you want).
3. Run the following command at a command prompt:
C:\Program Files\Java\jre1.6.0_02\bin>keytool -import -trustcacerts -alias <CERT ALIAS NAME> -keystore ..\lib\security\cacerts -file ..\lib\security\gd_<cert file name>.cer
4. You will be prompted for a password. If you have not changed the password, it will be "changeit".
5. You will then get the following message if all is successful - "Certificate was added to keystore".
6. Restart Tomcat (inbuilt server in BPM Studio).
This should solve your problem.
Please note that if you have not configured your keystore, then first do so; you will find this document handy.
Quick tip: To see a list of keys in keystore
%JAVA_HOME%\bin\keytool -list -keystore ..\lib\security\cacerts
Arvind

SQL Plan Management (Part 3 of 4): Evolving SQL Plan Baselines

Oracle Optimizer Team - Mon, 2009-01-26 12:33
In the example in Part 2, we saw that the optimizer used an accepted plan instead of a brand new plan. The statement has two plans in its plan history, but only one is accepted and thus in the SQL plan baseline:

SQL> select sql_text, plan_name, enabled, accepted from dba_sql_plan_baselines;


SQL_TEXT                                 PLAN_NAME                      ENA ACC
---------------------------------------- ------------------------------ --- ---
select p.prod_name, s.amount_sold, t.cal SYS_SQL_PLAN_fcc170b08cbcb825  YES NO
endar_year
from sales s, products p, times t
where s.prod_id = p.prod_id
  and s.time_id = t.time_id
  and p.prod_id < :pid

select p.prod_name, s.amount_sold, t.cal SYS_SQL_PLAN_fcc170b0a62d0f4d  YES YES
endar_year
from sales s, products p, times t
where s.prod_id = p.prod_id
  and s.time_id = t.time_id
  and p.prod_id < :pid



Non-accepted plans can be verified by executing the evolve_sql_plan_baseline function. This function will execute the non-accepted plan and compare its performance to the best accepted plan. The execution is performed using the conditions (e.g., bind values, parameters, etc.) in effect at the time the non-accepted plan was added to the plan history. If the non-accepted plan's performance is better, the function will make it accepted, thus adding it to the SQL plan baseline. Let's see what happens when we execute this function:

SQL> var report clob;


SQL> exec :report := dbms_spm.evolve_sql_plan_baseline();

PL/SQL procedure successfully completed.

SQL> print :report

REPORT
-------------------------------------------------------------------------------

-------------------------------------------------------------------------------
                        Evolve SQL Plan Baseline Report
-------------------------------------------------------------------------------

Inputs:
-------
  SQL_HANDLE =
  PLAN_NAME  =
  TIME_LIMIT = DBMS_SPM.AUTO_LIMIT
  VERIFY     = YES
  COMMIT     = YES

Plan: SYS_SQL_PLAN_fcc170b08cbcb825
-----------------------------------
  Plan was verified: Time used .1 seconds.
  Passed performance criterion: Compound improvement ratio >= 10.13
  Plan was changed to an accepted plan.

                      Baseline Plan      Test Plan     Improv. Ratio
                      -------------      ---------     -------------
  Execution Status:        COMPLETE       COMPLETE
  Rows Processed:               960            960
  Elapsed Time(ms):              19             15              1.27
  CPU Time(ms):                  18             15               1.2
  Buffer Gets:                 1188            116             10.24
  Disk Reads:                     0              0
  Direct Writes:                  0              0
  Fetches:                        0              0
  Executions:                     1              1

-------------------------------------------------------------------------------
                                Report Summary
-------------------------------------------------------------------------------
Number of SQL plan baselines verified: 1.
Number of SQL plan baselines evolved: 1.



The plan verification report shows that the new plan's performance was better and so it was made accepted and became part of the SQL plan baseline. We can confirm it by looking in the dba_sql_plan_baselines view:

SQL> select sql_text, plan_name, enabled, accepted from dba_sql_plan_baselines;


SQL_TEXT                                 PLAN_NAME                      ENA ACC
---------------------------------------- ------------------------------ --- ---
select p.prod_name, s.amount_sold, t.cal SYS_SQL_PLAN_fcc170b08cbcb825  YES YES
endar_year
from sales s, products p, times t
where s.prod_id = p.prod_id
  and s.time_id = t.time_id
  and p.prod_id < :pid

select p.prod_name, s.amount_sold, t.cal SYS_SQL_PLAN_fcc170b0a62d0f4d  YES YES
endar_year
from sales s, products p, times t
where s.prod_id = p.prod_id
  and s.time_id = t.time_id
  and p.prod_id < :pid



The SQL plan baseline now has two accepted plans; SYS_SQL_PLAN_fcc170b08cbcb825 is now accepted as well.

You can either execute the evolve_sql_plan_baseline() function manually or schedule it to run automatically in a maintenance window.
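As a sketch of the scheduling option (my addition, not from the original article; the job name and schedule are arbitrary), something along these lines would work with DBMS_SCHEDULER:

BEGIN
  -- run plan evolution every night at 2am; the report CLOB is simply discarded here
  dbms_scheduler.create_job(
    job_name        => 'EVOLVE_SQL_PLAN_BASELINES',   -- arbitrary job name
    job_type        => 'PLSQL_BLOCK',
    job_action      => 'DECLARE r CLOB; BEGIN r := dbms_spm.evolve_sql_plan_baseline; END;',
    start_date      => SYSTIMESTAMP,
    repeat_interval => 'FREQ=DAILY;BYHOUR=2',
    enabled         => TRUE);
END;
/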

Another way of evolving a SQL plan baseline is to use the SQL Tuning Advisor. Instead of executing evolve_sql_plan_baseline, suppose we start from the original state where we have one accepted and one non-accepted plan:

SQL> select sql_text, plan_name, enabled, accepted from dba_sql_plan_baselines;


SQL_TEXT                                 PLAN_NAME                      ENA ACC
---------------------------------------- ------------------------------ --- ---
select p.prod_name, s.amount_sold, t.cal SYS_SQL_PLAN_fcc170b08cbcb825  YES NO
endar_year
from sales s, products p, times t
where s.prod_id = p.prod_id
  and s.time_id = t.time_id
  and p.prod_id < :pid

select p.prod_name, s.amount_sold, t.cal SYS_SQL_PLAN_fcc170b0a62d0f4d  YES YES
endar_year
from sales s, products p, times t
where s.prod_id = p.prod_id
  and s.time_id = t.time_id
  and p.prod_id < :pid



You can execute the SQL Tuning Advisor on the cursor in the cursor cache:

SQL> var tname varchar2(30);


SQL> exec :tname := dbms_sqltune.create_tuning_task(sql_id => 'bfbr3zrg9d5cc');

PL/SQL procedure successfully completed.

SQL> exec dbms_sqltune.execute_tuning_task(task_name => :tname);

PL/SQL procedure successfully completed.

SQL> select dbms_sqltune.report_tuning_task(:tname, 'TEXT', 'BASIC') FROM dual;

DBMS_SQLTUNE.REPORT_TUNING_TASK(:TNAME,'TEXT','BASIC')
-------------------------------------------------------------------------------
GENERAL INFORMATION SECTION
-------------------------------------------------------------------------------
Tuning Task Name   : TASK_505
Tuning Task Owner  : SH
Workload Type      : Single SQL Statement
Scope              : COMPREHENSIVE
Time Limit(seconds): 1800
Completion Status  : COMPLETED
Started at         : 11/11/2008 16:43:12
Completed at       : 11/11/2008 16:43:13

-------------------------------------------------------------------------------
Schema Name: SH
SQL ID     : bfbr3zrg9d5cc
SQL Text   : select p.prod_name, s.amount_sold, t.calendar_year
             from sales s, products p, times t
             where s.prod_id = p.prod_id
               and s.time_id = t.time_id
               and p.prod_id < :pid

-------------------------------------------------------------------------------
FINDINGS SECTION (1 finding)
-------------------------------------------------------------------------------
1- A potentially better execution plan was found for this statement.

-------------------------------------------------------------------------------
EXPLAIN PLANS SECTION
-------------------------------------------------------------------------------

1- Original With Adjusted Cost
------------------------------
Plan hash value: 2787970893


----------------------------------------------------------------
| Id  | Operation                            | Name           |
----------------------------------------------------------------
|   0 | SELECT STATEMENT                     |                |
|   1 |  NESTED LOOPS                        |                |
|   2 |   NESTED LOOPS                       |                |
|   3 |    HASH JOIN                         |                |
|   4 |     TABLE ACCESS BY INDEX ROWID      | PRODUCTS       |
|   5 |      INDEX RANGE SCAN                | PRODUCTS_PK    |
|   6 |     PARTITION RANGE ALL              |                |
|   7 |      TABLE ACCESS BY LOCAL INDEX ROWID| SALES         |
|   8 |       BITMAP CONVERSION TO ROWIDS    |                |
|   9 |        BITMAP INDEX RANGE SCAN       | SALES_PROD_BIX |
|  10 |    INDEX UNIQUE SCAN                 | TIME_PK        |
|  11 |   TABLE ACCESS BY INDEX ROWID        | TIMES          |
----------------------------------------------------------------

2- Original With Adjusted Cost
------------------------------
Plan hash value: 2787970893


----------------------------------------------------------------
| Id  | Operation                            | Name           |
----------------------------------------------------------------
|   0 | SELECT STATEMENT                     |                |
|   1 |  NESTED LOOPS                        |                |
|   2 |   NESTED LOOPS                       |                |
|   3 |    HASH JOIN                         |                |
|   4 |     TABLE ACCESS BY INDEX ROWID      | PRODUCTS       |
|   5 |      INDEX RANGE SCAN                | PRODUCTS_PK    |
|   6 |     PARTITION RANGE ALL              |                |
|   7 |      TABLE ACCESS BY LOCAL INDEX ROWID| SALES         |
|   8 |       BITMAP CONVERSION TO ROWIDS    |                |
|   9 |        BITMAP INDEX RANGE SCAN       | SALES_PROD_BIX |
|  10 |    INDEX UNIQUE SCAN                 | TIME_PK        |
|  11 |   TABLE ACCESS BY INDEX ROWID        | TIMES          |
----------------------------------------------------------------

3- Using SQL Profile
--------------------
Plan hash value: 2361178149


------------------------------------------
| Id  | Operation             | Name     |
------------------------------------------
|   0 | SELECT STATEMENT      |          |
|   1 |  HASH JOIN            |          |
|   2 |   HASH JOIN           |          |
|   3 |    PARTITION RANGE ALL|          |
|   4 |     TABLE ACCESS FULL | SALES    |
|   5 |    TABLE ACCESS FULL  | TIMES    |
|   6 |   TABLE ACCESS FULL   | PRODUCTS |
------------------------------------------

-------------------------------------------------------------------------------


SQL> exec dbms_sqltune.accept_sql_profile(task_name => :tname);

PL/SQL procedure successfully completed.

SQL> select sql_text, plan_name, enabled, accepted from dba_sql_plan_baselines;

SQL_TEXT                                 PLAN_NAME                      ENA ACC
---------------------------------------- ------------------------------ --- ---
select p.prod_name, s.amount_sold, t.cal SYS_SQL_PLAN_fcc170b08cbcb825  YES YES
endar_year
from sales s, products p, times t
where s.prod_id = p.prod_id
  and s.time_id = t.time_id
  and p.prod_id < :pid

select p.prod_name, s.amount_sold, t.cal SYS_SQL_PLAN_fcc170b0a62d0f4d  YES YES
endar_year
from sales s, products p, times t
where s.prod_id = p.prod_id
  and s.time_id = t.time_id
  and p.prod_id < :pid

SQL> select sql_text, type, status from dba_sql_profiles;

SQL_TEXT                                 TYPE    STATUS
---------------------------------------- ------- --------
select p.prod_name, s.amount_sold, t.cal MANUAL  ENABLED
endar_year
from sales s, products p, times t
where s.prod_id = p.prod_id
  and s.time_id = t.time_id
  and p.prod_id < :pid



What we see here is that SQL Tuning Advisor found a tuned plan (that coincidentally happened to be the non-accepted plan in our plan history). When we accepted the recommended SQL profile, the SQL Tuning Advisor created a SQL profile and also changed the non-accepted plan to accepted status, thus evolving the SQL plan baseline to two plans.

Note that the SQL Tuning Advisor may also find a completely new tuned plan, one that is not in the plan history. If you then accept the recommended SQL profile, the SQL Tuning Advisor will create a SQL profile and also add the tuned plan to the SQL plan baseline.

Thus, you can evolve a SQL plan baseline either by executing the evolve_sql_plan_baseline function or by using the SQL Tuning Advisor. New and provably better plans will be added by either of these methods to the SQL plan baseline.


Review: uCertify PrepKit 1Z0-047

Mihajlo Tekic - Sun, 2009-01-25 23:09

I was asked by uCertify to review their preparation kit product.

I am currently preparing for the Oracle 1Z0-047 Oracle Database: SQL Certified Expert exam, and I thought this could be a good chance for me to test my current knowledge, so I chose to review their preparation kit for exactly this exam. (Link)

To be honest, this is the first GUI-based prep kit I've ever tried. So far I have been using the official Oracle documentation as my primary source, plus some of the books (exam study guides) that were available.

My study strategy is to read the official documentation and practice, practice, practice.

In my experience, the toughest part of preparing for an exam is answering the question: “Am I ready enough?”

Usually I go through the questions available in some of the preparation books, plus I make up even more questions based on different scenarios. I also use OTN forums to find discussions with challenging topics where I can test my knowledge.

With the last in mind, the uCertify Prep Kit is just another handy resource that one can use to test his/her knowledge before he/she actually takes the exam.

It is a solid product that provides more than three hundred questions combined into one diagnostic test, one final test, four practice tests and a quiz. The user also has the ability to create custom tests using the questions he/she finds interesting.



There are also a number of questions that are pretty challenging. For all questions, users have the option to read explanations of the answers, including references to the Oracle documentation or other resources.

For each question, there is an option to start a discussion with other users, to send feedback to uCertify and to write personal notes and tags.

The feedback and discussion features are really nice. I found some answers that I did not agree with and used this feature to provide feedback to uCertify. I tested the discussion feature as well and it works nicely.

The exam objectives are well covered. There are also questions specific to the 11g version of the database.

The software has an update option so the user can download the latest updates and fixes.

The graphical user interface is very nice and the navigation is great. Each of the test results can be saved and used for later reference. There are also features like flash cards, a quiz, study notes and articles, and a number of notes and articles are available.



Finally there is a readiness report that can help the user to find the answer to the question: “Am I ready for the real thing?”

In general, this is a nice product that can be very helpful when preparing for an exam (they have various prep kits available (Link)). However, I don't want anybody to get the impression that I think this is the only resource one could use to prepare for and pass an exam. Whenever I am asked for advice on what resources one should (must) use to prepare for an exam, my answer is always straight: the official documentation, period. (The official trainings are also very valuable, but sometimes they are a must-do regardless of someone's recommendations :-))

This product, like all the others available out there (books, prep kits, study guides), is just another resource for learning and practicing that one may consider using when preparing for an exam.
