Feed aggregator

Plan regressions got you down? SQL Plan Management to the rescue!

Inside the Oracle Optimizer - Thu, 2009-01-08 17:58
Part 1 of 4: Creating SQL plan baselines

Do you ever experience performance regressions because an execution plan has changed for the worse? If you have, then we have an elegant solution for you in 11g called SQL Plan Management (SPM). The next four posts on our blog will cover SPM in detail. Let's begin by reviewing the primary causes for plan changes.

Execution plan changes occur due to various system changes. For example, you might have (manually or automatically) updated statistics for some objects, or changed a few optimizer-related parameters. A more dramatic change is a database upgrade (say from 10gR2 to 11g). All of these changes have the potential to cause new execution plans to be generated for many of your SQL statements. Most new plans are improvements because they are tailored to the new system environment, but some might be worse, leading to performance regressions. It is the latter that cause sleepless nights for many DBAs.

DBAs have several options for addressing these regressions. However, what most DBAs want is simple: plans should only change when they will result in performance gains. In other words, the optimizer should not pick bad plans, period.

This first post in our series describes the concepts of SQL Plan Management and how to create SQL plan baselines. The second part will describe how and when these SQL plan baselines are used. The third part will discuss evolution, the process of adding new and improved plans to SQL plan baselines. Finally, the fourth part will describe user interfaces and interactions with other Oracle objects (like stored outlines).


SQL Plan Management (SPM) allows database users to maintain stable yet optimal performance for a set of SQL statements. SPM incorporates the positive attributes of plan adaptability and plan stability, while simultaneously avoiding their shortcomings. It has two main objectives:
  1. prevent performance regressions in the face of database system changes
  2. offer performance improvements by gracefully adapting to database system changes
A managed SQL statement is one for which SPM has been enabled. SPM can be configured to work automatically or it can be manually controlled either wholly or partially (described later). SPM helps prevent performance regressions by enabling the detection of plan changes for managed SQL statements. For this purpose, SPM maintains, on disk, a plan history consisting of different execution plans generated for each managed SQL statement. An enhanced version of the Oracle optimizer, called SPM aware optimizer, accesses, uses, and manages this information which is stored in a repository called the SQL Management Base (SMB).

The plan history enables the SPM aware optimizer to determine whether the best-cost plan it has produced using the cost-based method is a brand new plan or not. A brand new plan represents a plan change that has potential to cause performance regression. For this reason, the SPM aware optimizer does not choose a brand new best-cost plan. Instead, it chooses from a set of accepted plans. An accepted plan is one that has been either verified to not cause performance regression or designated to have good performance. A set of accepted plans is called a SQL plan baseline, which represents a subset of the plan history.

A brand new plan is added to the plan history as a non-accepted plan. Later, an SPM utility verifies its performance, and keeps it as a non-accepted plan if it will cause a performance regression, or changes it to an accepted plan if it will provide a performance improvement. The plan performance verification process ensures both plan stability and plan adaptability.

The figure below shows the SMB containing the plan history for three SQL statements. Each plan history contains some accepted plans (the SQL plan baseline) and some non-accepted plans.


You can create a SQL plan baseline in several ways: using a SQL Tuning Set (STS); from the cursor cache; exporting from one database and importing into another; and automatically for every statement. Let's look at each in turn. The examples in this blog entry use the Oracle Database Sample Schemas so you can try them yourself.

Creating SQL plan baselines from STS

If you are upgrading from 10gR2 or have an 11g test system, you might already have an STS containing some or all of your SQL statements. This STS might contain plans that perform satisfactorily. Let's call this STS MY_STS. You can create a SQL plan baseline from this STS as follows:

SQL> variable pls number;
SQL> exec :pls := dbms_spm.load_plans_from_sqlset(sqlset_name => 'MY_STS', -
>        basic_filter => 'sql_text like ''select%p.prod_name%''');

This will create SQL plan baselines for all statements that match the specified filter.
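You can verify what was loaded by querying the DBA_SQL_PLAN_BASELINES view (a quick sanity check; columns as in the 11g data dictionary):

SQL> select sql_handle, plan_name, origin, enabled, accepted
  2    from dba_sql_plan_baselines
  3   where sql_text like 'select%p.prod_name%';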

Creating SQL plan baselines from cursor cache

You can automatically create SQL plan baselines for any cursor that is currently in the cache as follows:

SQL> exec :pls := dbms_spm.load_plans_from_cursor_cache( -
>        attribute_name => 'SQL_TEXT', -
>        attribute_value => 'select%p.prod_name%');

This will create SQL plan baselines for all statements whose text matches the specified string. Several overloaded variations of this function allow you to filter on other cursor attributes.
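For instance, if you know a statement's SQL_ID from V$SQL, one overload loads plans by that identifier instead (a sketch; the &sql_id substitution variable below is a placeholder for the identifier you look up):

SQL> exec :pls := dbms_spm.load_plans_from_cursor_cache( -
>        sql_id => '&sql_id');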

Creating SQL plan baselines using a staging table

If you already have SQL plan baselines (say on an 11g test system), you can export them to another system (a production system for instance).

First, on the test system, create a staging table and pack the SQL plan baselines you want to export:

SQL> exec dbms_spm.create_stgtab_baseline(table_name => 'MY_STGTAB', -
>        table_owner => 'SH');

PL/SQL procedure successfully completed.

SQL> exec :pls := dbms_spm.pack_stgtab_baseline( -
>        table_name => 'MY_STGTAB', -
>        table_owner => 'SH', -
>        sql_text => 'select%p.prod_name%');

This will pack all SQL plan baselines for statements that match the specified filter. The staging table, MY_STGTAB, is a regular table that you should export to the production system using Data Pump export.
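A minimal Data Pump invocation for this step might look as follows (the directory object, dump file name, and credentials are illustrative):

$ expdp sh DIRECTORY=dp_dir DUMPFILE=my_stgtab.dmp TABLES=MY_STGTAB

and, on the target:

$ impdp sh DIRECTORY=dp_dir DUMPFILE=my_stgtab.dmp TABLES=MY_STGTAB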

On the production system, you can now unpack the staging table to create the SQL plan baselines:

SQL> exec :pls := dbms_spm.unpack_stgtab_baseline( -
>        table_name => 'MY_STGTAB', -
>        table_owner => 'SH', -
>        sql_text => 'select%p.prod_name%');

This will unpack the staging table and create SQL plan baselines. Note that the filter for unpacking the staging table is optional and may be different from the one used during packing. This means that you can pack several SQL plan baselines into a staging table and selectively unpack only a subset of them on the target system.

Creating SQL plan baselines automatically

You can create SQL plan baselines for all repeatable statements automatically by setting the parameter optimizer_capture_sql_plan_baselines to TRUE (default is FALSE). The first plan captured for any statement is automatically accepted and becomes part of the SQL plan baseline, so enable this parameter only when you are sure that the default plans are performing well.

You can use the automatic plan capture mode when you have upgraded from a previous database version. Set optimizer_features_enable to the earlier version and execute your workload; every repeatable statement will have its plan captured, thus creating SQL plan baselines. You can reset optimizer_features_enable to its default value after you are sure that all statements in your workload have had a chance to execute, as sketched below.
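Put together, the upgrade scenario might look roughly like this (a sketch; the version strings are illustrative for a 10.2.0.4 to 11.1.0.6 upgrade):

SQL> alter system set optimizer_features_enable = '10.2.0.4';
SQL> alter system set optimizer_capture_sql_plan_baselines = true;

-- run the complete workload here, executing each statement at least twice

SQL> alter system set optimizer_capture_sql_plan_baselines = false;
SQL> alter system set optimizer_features_enable = '11.1.0.6';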

Note that this automatic plan capture occurs only for repeatable statements, that is, statements that are executed at least twice. Statements that are only executed once will not benefit from SQL plan baselines since accepted plans are only used in subsequent hard parses.

The following example shows a plan being captured automatically when the same statement is executed twice:

SQL> alter session set optimizer_capture_sql_plan_baselines = true;

Session altered.

SQL> var pid number
SQL> exec :pid := 100;

PL/SQL procedure successfully completed.

SQL> select p.prod_name, s.amount_sold, t.calendar_year
  2    from sales s, products p, times t
  3   where s.prod_id = p.prod_id
  4     and s.time_id = t.time_id
  5     and p.prod_id < :pid;

PROD_NAME AMOUNT_SOLD CALENDAR_YEAR
--------- ----------- -------------
...

9 rows selected.

SQL> select p.prod_name, s.amount_sold, t.calendar_year
  2    from sales s, products p, times t
  3   where s.prod_id = p.prod_id
  4     and s.time_id = t.time_id
  5     and p.prod_id < :pid;

PROD_NAME AMOUNT_SOLD CALENDAR_YEAR
--------- ----------- -------------
...

9 rows selected.

SQL> alter session set optimizer_capture_sql_plan_baselines = false;

Session altered.
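A plan captured this way shows up in the data dictionary with an origin of AUTO-CAPTURE, which you can confirm with a query like:

SQL> select sql_handle, plan_name, enabled, accepted
  2    from dba_sql_plan_baselines
  3   where origin = 'AUTO-CAPTURE';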

Automatic plan capture will not occur for a statement if a stored outline exists for it, is enabled, and the parameter use_stored_outlines is TRUE. In this case, turn on incremental capture of plans into an STS using the function CAPTURE_CURSOR_CACHE_SQLSET() in the DBMS_SQLTUNE package, as sketched below. After you have collected the plans for your workload into the STS, manually create SQL plan baselines using the method described earlier. Then disable the stored outlines or set use_stored_outlines to FALSE. From then on, SPM will manage your workload, and stored outlines will not be used for those statements.
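A sketch of that incremental capture (the STS name and durations are illustrative): CAPTURE_CURSOR_CACHE_SQLSET polls the cursor cache every repeat_interval seconds for a total of time_limit seconds.

SQL> exec dbms_sqltune.create_sqlset(sqlset_name => 'MY_WORKLOAD_STS');
SQL> exec dbms_sqltune.capture_cursor_cache_sqlset( -
>        sqlset_name     => 'MY_WORKLOAD_STS', -
>        time_limit      => 3600, -
>        repeat_interval => 300);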

In this article, we have seen how to create SQL plan baselines. In the next, we will describe the SPM aware optimizer and how it uses SQL plan baselines.

Categories: DBA Blogs, Development

What is Direct Data Placement

Padraig O'Sullivan - Tue, 2009-01-06 01:02
I'm currently studying Oracle's white paper on Exadata and came across the following paragraph: "Further, Oracle's interconnect protocol uses direct data placement (DMA - direct memory access) to ensure very low CPU overhead by directly moving data from the wire to database buffers with no extra data copies being made." This got me wondering what direct data placement is. First off, the interconnect…

Oracle launches Rich Enterprise Application (REA) site

JHeadstart - Sun, 2009-01-04 18:59

This site is targeted towards Developers that want to build highly interactive web applications that deliver desktop quality of user interface. Oracle's REA technologies consist of Oracle ADF Faces Rich Client and Oracle WebCenter. URL: http://rea.oracle.com.

When you click the "Explore Now" button, you see a very nice tree with topics, where you can flip each node to see a small explanation, or expand it to see new nodes with subtopics.

Categories: Development

How important is having Unix/Linux OS Knowledge for Oracle DBAs?

Sabdar Syed - Sun, 2009-01-04 07:37

I want to wish you all a very Happy New Year 2009 before discussing “How Important is Having UNIX OS Knowledge for Oracle DBAs?”

The main reason for discussing this topic is to encourage novice or junior DBAs to gain command of the Unix/Linux operating system as part of database administration. When they are asked how comfortable they are working as DBAs in a Unix/Linux environment, their responses run along these lines:

“They are pretty much comfortable working with and administrating databases on the Windows operating system, as they were trained on Windows during their DBA training.”

“They are not confident enough in using Unix/Linux OS commands to administer databases.”

“We haven't yet had the chance to maintain a database in a UNIX environment, including Solaris, HP-UX, IBM AIX, etc.”

“Working in a Windows environment is easier than working in a Unix/Linux environment, because on Windows you only need to select files, copy/cut & paste them from one location to another, delete files, or stop services by simply clicking, whereas in Unix/Linux one must type commands manually for the same tasks.”

…. and so on.

If anyone holds the above impressions about administrating databases on Unix/Linux, here are my suggestions.

  • First of all, it's not compulsory for DBAs to have operating system administration skills. But working knowledge of both Windows and Unix/Linux is always value added for Oracle DBAs.
  • The majority of production database administration around the globe happens on Unix/Linux based operating systems, i.e. very large and mission-critical databases are maintained on Unix/Linux.
  • Obviously, all big companies require database administrators who have good UNIX scripting knowledge and working knowledge of UNIX-based environments. This means that Oracle DBAs with good working knowledge of UNIX are in demand in the IT market.
  • Even though the DBA's intervention is manual in terms of creating directories and files, setting up environment variables, and starting/stopping services in a Unix environment, it is just as manageable as a Windows environment.
  • Administration and OS commands are more or less similar on any flavor of Unix/Linux operating system, i.e. Solaris, HP-UX, IBM AIX, Red Hat, Fedora, Ubuntu, etc. So start practicing Oracle database administration on Unix/Linux based systems.
  • It may be difficult to have a Unix OS installed on a personal computer for practice. So download the free trial version of Linux (Oracle Unbreakable Linux) from Oracle, install and configure it on your PC, and start practicing on Linux. To practice on different Unix/Linux flavors, install VMware, where you can set up multiple different flavors of Unix/Linux.
  • Note: Of course, this post is not about showing the difference between Windows and Unix/Linux operating systems, as every OS has its own significance, but about encouraging Oracle DBAs to learn the Unix/Linux commands required to administer databases.

Quick Links:

Download: Enterprise Linux and Oracle VM

Information: Oracle and Linux

Installation Guides: Oracle 10g Installation on Unix/Linux OS

Books: Oracle9i UNIX Administration Handbook

Linux Recipes for Oracle DBAs

Training: Oracle Database 10g: Managing Oracle on Linux for DBAs

Exam: Oracle Database 10g: Managing Oracle on Linux Certified Expert Oracle

** Your comments and inputs are welcomed on this post. **


Sabdar Syed

Employee Directory

RameshKumar Shanmugam - Fri, 2009-01-02 21:21
Most companies have a separate team or department to handle Organization Development or Organization Design (OD). These companies use software packages to design a new org chart or to re-organize an existing one. The packages get their data from the ERP system or HRIS, or are maintained as separate stand-alone packages accessed by employees via the intranet. Once the organization design is done, the changes are made in the ERP or HRIS to reflect the new org design.

In this blog I am trying to explain how we can use the Employee Directory function in Oracle, which allows employees to view the company's organization structure and browse employee details. Small companies that do not have an org charting tool can use this functionality in Oracle, enabling employees to view the organization structure, their subordinates, and the reporting relationships within the organization.

Attach the function "Proxy Page" to the Self Service Global Menu with a prompt

The data can be refreshed using the concurrent request "Refresh Employee Directory" with the following parameter:

Try it Out!
Categories: APPS Blogs

More information sources on ActiveRecord Oracle enhanced adapter

Raimonds Simanovskis - Fri, 2009-01-02 16:00

I'm glad to see that there are many users of the ActiveRecord Oracle enhanced adapter, and therefore comments on this blog are no longer the best way to communicate with them. So I have created several other information exchange places which I hope will be more effective.

The latest addition is the wiki pages on GitHub, where I have put usage descriptions and examples as well as some troubleshooting hints that previously were scattered across the README file and different blog posts. This is the first place to look for information about the Oracle enhanced adapter. And if you are a GitHub user, you can correct mistakes or add additional content yourself.

If you have a question or would like to discuss some feature, you can use the Google discussion group. I will use this discussion group for new release announcements as well, so subscribe to it if you would like to get Oracle enhanced adapter news.

If you would like to report a bug or request a new feature (a patch would be greatly appreciated), please use the Lighthouse issue tracker.

The source code of the Oracle enhanced adapter is still located in the GitHub repository. If you are a GitHub user, you can watch it or even fork it and experiment with new features.

And by the way, I just released Oracle enhanced adapter version 1.1.9 with some new features; it has also been tested with the latest Rails 2.2 release.

Categories: Development


Mihajlo Tekic - Thu, 2009-01-01 16:31


The picture above is of my daughter Monika.

She is four months old and is the best thing that happened to me in 2008.

Oracle PeopleSoft hosted docs

Tahiti Views - Wed, 2008-12-31 13:32
Just in time for the new year! The Oracle PeopleSoft group now has its docs on Oracle.com in HTML format, hooked up to a Tahiti search: Oracle PeopleSoft Enterprise Hosted PeopleBooks.

New option in the ADRCI purge command - UTSCDMP

Virag Sharma - Wed, 2008-12-31 11:40

There is a new option for the ADRCI purge command: UTSCDMP.

adrci> help purge

Usage: PURGE [[-i <id1> | <id1> <id2>] |
             [-age <mins> [-type <home_type>]]]

Purpose: Purge the diagnostic data in the current ADR home. If no
         option is specified, the default purging policy will be used.

Options:
  [-i id1 | id1 id2]: Users can input a single incident ID, or a range
  of incident IDs, to purge.

  [-age <mins>]: Users can specify the purging policy, either for all
  the diagnostic data or for the specified type. Data older than <mins>
  minutes will be purged.

  [-type <home_type>]: Users can specify what type of data is to be purged.

Examples:
  purge -i 123 456
  purge -age 60 -type incident

There were some issues where the cdmp_* directories in the repository ($diagnostic_dest/diag/rdbms/database_name/instance_name/ - the 11g counterpart of bdump) were not purged automatically. When you run the following command, it removes the cdmp_* directories that are older than 3600 minutes:

adrci> purge -age 3600 -type UTSCDMP

Categories: DBA Blogs

ADF 11 Course 12-16 January in The Netherlands

JHeadstart - Tue, 2008-12-23 23:06

Oracle University just released the ADF 11 course, covering many new ADF 11 features.
I will be teaching this course from 12 to 16 January 2009 in De Meern, the Netherlands.

There are still seats available. For a list of course topics and registration, use this link.

Categories: Development

Oracle Database Resource Manager 11g - Undocumented New Parameters

Aviad Elbaz - Mon, 2008-12-22 09:31

I've played around with Oracle Database Resource Manager in 10g, and it's quite nice and can be very useful for systems with high CPU usage, but I found the inability to limit I/O a drawback, since in most cases I've faced, the need to limit I/O is greater than the need to limit CPU.

When you have, let's say, 8 CPUs on your machine, all 8 need to be 100% utilized by Oracle sessions before Resource Manager starts limiting sessions. However, if your machine's I/O capability is 50 MB/s, it takes only one or two sessions performing intensive I/O (a batch job or heavy report) to make the database very slow.

In Oracle Database 11g Release 1, Resource Manager gained some new features related to I/O. So I installed 11g, ran some tests, and found some interesting issues.

I'm not going to write about Resource Manager basics or about 11g enhancements as some great articles have already been published about it. For example, you can read Tim's blog post - "Resource Manager Enhancements in Oracle Database 11g Release 1"

But... I'm going to discuss one missing capability (in my opinion) that will hopefully be available in Oracle Database 11g Release 2, along with 2 new parameters which are already available but inactive and undocumented.

For those who are not familiar with Oracle Database Resource Manager I'll try to give a short basic introduction:

Oracle Database Resource Manager helps us to prioritize sessions to optimize resource allocation within our database by:

  1. Consumer Groups - groups of sessions with similar resource requirements
  2. Resource Plan Directives - allocations of resources to those groups
  3. Resource Plans - containers of Resource Plan Directives that allocate resources to Consumer Groups

Only one Resource Plan is active at a time.

When Oracle Database 11g was introduced, some new I/O-related features for Resource Manager were revealed. Among them:

  1. I/O Calibration -
    A new procedure (CALIBRATE_IO) which helps to assess the I/O capabilities of the database storage. Its output is max_iops, max_mbps and actual_latency. A calling sketch follows this list.
  2. Per Session I/O Limits -
    Unlike previous versions, you can limit the I/O requests (or megabytes of I/O) a session issues before it is moved to a new consumer group (switch_io_megabytes and switch_io_reqs have been added to the CREATE_PLAN_DIRECTIVE procedure).
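As a sketch, the calibration call from item 1 looks roughly like this (the disk count and latency are illustrative values; the OUT arguments receive the measured capacity, and timed statistics plus asynchronous I/O typically need to be enabled first):

declare
  l_max_iops       pls_integer;
  l_max_mbps       pls_integer;
  l_actual_latency pls_integer;
begin
  dbms_resource_manager.calibrate_io(
    num_physical_disks => 8,    -- illustrative value for this storage
    max_latency        => 20,   -- maximum tolerated latency in milliseconds
    max_iops           => l_max_iops,
    max_mbps           => l_max_mbps,
    actual_latency     => l_actual_latency);
  dbms_output.put_line('max_iops=' || l_max_iops ||
                       ' max_mbps=' || l_max_mbps ||
                       ' actual_latency=' || l_actual_latency);
end;
/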

Oracle has added the option to "capture" Oracle sessions by the I/O requests, or the megabytes of I/O, they issue, in order to move them to a lower-priority consumer group, as sketched below.
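For example, a directive that demotes I/O-heavy sessions might be created like this (a sketch; the plan and consumer group names are hypothetical and must already exist):

begin
  dbms_resource_manager.create_pending_area();
  dbms_resource_manager.create_plan_directive(
    plan                => 'DAYTIME_PLAN',        -- hypothetical plan
    group_or_subplan    => 'BATCH_GROUP',         -- hypothetical consumer group
    comment             => 'demote sessions issuing heavy I/O',
    switch_group        => 'LOW_PRIORITY_GROUP',  -- hypothetical target group
    switch_io_reqs      => 10000,                 -- switch after 10,000 I/O requests
    switch_io_megabytes => 2048);                 -- or after 2 GB of I/O
  dbms_resource_manager.submit_pending_area();
end;
/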

I have a very fundamental doubt about this enhancement, as I don't get the point of "capturing" an intensive I/O session and moving it to a low-priority consumer group which can have only a CPU limit... ?! The reason we "capture" this session is the amount of I/O it performs, yet when we move it to a low-priority consumer group we can only limit its CPU resources. We can't limit the amount of I/O for a consumer group.

It would have been very useful if Oracle had added the ability to limit I/O for a Consumer Group, just as we can limit CPU (with mgmt_pN) for a Consumer Group. What is missing is the ability to cap a specific Consumer Group at a maximum number of I/O requests per second or a maximum number of megabytes per second.

Will Oracle enhance Resource Manager in 11g Release 2 to fulfill this capability?

I don't have a confident answer to this question, but I assume they will.

While playing around, I noticed two new parameters to the CREATE_PLAN procedure - MAX_IOPS and MAX_MBPS. At first sight they looked like the answer to my question - the ability to limit I/O for sessions within a plan -

but they're not...

These two parameters are undocumented, entirely absent from the Oracle 11g Release 1 documentation, yet available in the 11g Release 1 database:

-- create a new resource plan
-- Input arguments:
--   plan                       - name of resource plan
--   comment                    - user's comment
--   cpu_mth                    - allocation method for CPU resources
--                                (deprecated)
--   active_sess_pool_mth       - allocation method for max. active sessions
--   parallel_degree_limit_mth  - allocation method for degree of parallelism
--   queueing_mth               - type of queueing policy to use
--   mgmt_mth                   - allocation method for CPU and I/O resources
--   sub_plan                   - whether plan is sub_plan
--   max_iops                   - maximum I/O requests per second
--   max_mbps                   - maximum megabytes of I/O per second

PROCEDURE create_plan(plan IN VARCHAR2,
                      comment IN VARCHAR2,
                      cpu_mth IN VARCHAR2 DEFAULT NULL,
                      active_sess_pool_mth IN VARCHAR2
                      DEFAULT 'ACTIVE_SESS_POOL_ABSOLUTE',
                      parallel_degree_limit_mth IN VARCHAR2
                      DEFAULT 'PARALLEL_DEGREE_LIMIT_ABSOLUTE',
                      queueing_mth IN VARCHAR2 DEFAULT 'FIFO_TIMEOUT',
                      mgmt_mth IN VARCHAR2 DEFAULT 'EMPHASIS',
                      sub_plan IN BOOLEAN DEFAULT FALSE,
                      max_iops IN NUMBER DEFAULT NULL,
                      max_mbps IN NUMBER DEFAULT NULL);


I tried to create a new plan using one of these two parameters, but it returned an error for every value I tried. It turned out that the only valid value for MAX_IOPS and MAX_MBPS is null.

  dbms_resource_manager.create_plan(plan => 'PLAN_EX_1'
                                   ,comment => 'Plan Example 1'
                                   ,max_iops => 100);
ERROR at line 1:
ORA-29355: NULL or invalid MAX_IOPS argument specified
ORA-06512: at "SYS.DBMS_SYS_ERROR", line 86
ORA-06512: at "SYS.DBMS_RESOURCE_MANAGER", line 38
ORA-06512: at line 2

I've confirmed it with Oracle support and their answer was:

"This is currently an expected behaviour, we can not explicitly set either max_iops or max_mbps to any value other than null, that's why these parameters are currently not included in the documentation."

So here is my guess:
It looks like Oracle is going to enhance Resource Manager to allow I/O limits, but unlike the CPU limit set through a plan directive (mgmt_pN), it is going to be a limit at the plan level, so you can create a sub-plan with limited I/O resources allocated. Maybe the allocation method parameter (mgmt_mth) will get more valid options besides EMPHASIS and RATIO.

I'll keep track of this as soon as Oracle Database 11g Release 2 is announced, and I'll post an update.

You are welcome to leave a comment and/or share your opinion about this topic.


Categories: APPS Blogs

Coding Horror: Hardware is Cheap, Programmers are Expensive

Tahiti Views - Mon, 2008-12-22 00:17
You've probably heard both sides of this argument: throw hardware at performance problems; no, no, improve the code. Jeff Atwood at Coding Horror comes down on the "more hardware" side in this post: Coding Horror: Hardware is Cheap, Programmers are Expensive. Usually I agree with Jeff, but I…

I’m an Oracle ACE Award Winner now !

Sabdar Syed - Thu, 2008-12-18 03:35
Dear All,

I'm pleased to inform you all that I have received an email from the Oracle ACE Program Committee saying that I have been nominated for the prominent Oracle ACE Award. I'm honored to be the second person working in Saudi Arabia to become an Oracle ACE.

You may be curious to know who the first Oracle ACE working in Saudi Arabia is. Well, he is none other than my elder brother & mentor - Mr. Syed Jaffar Hussain. We are proud to be Oracle ACE brothers from one family, and I can say this kind of achievement is a unique combination, isn't it?

Take a look at my Oracle ACE Profile – Sabdar Syed.

I would like to convey my sincere thanks to my elder brother, colleagues, ex-colleagues, blog viewers, friends and well-wishers for their support in attaining this award.


Thanks to Mr. Mohan Dutt, OCP Advisor, for publishing news of my achievement, the Oracle ACE Award, on his blog - OCP Blog Community.

Take a look at - Featured Expert Wins Oracle ACE Award!

Your precious comments are welcomed!

Sabdar Syed.

Download Free Oracle Certification Dumps Here!!

Sabdar Syed - Wed, 2008-12-17 01:38

This blog post is for those who are desperately looking for free Oracle certification dumps.

Well, you might be surprised to see that the message of this post is irrelevant to its subject. My intention is to draw your attention to how bad it is to cheat on the Oracle certifications by simply reading exam dumps.

Mr. Paul Sorensen, Director of Oracle Certification, and other certification team members have launched the Oracle Certification Blog, where they blog about everything related to Oracle certification. Interestingly, there are a couple of blog posts on “Cheating Hurts the Oracle Certification Program” and related topics.

For a list of blog posts on cheating, take a look at Cheating Oracle Certifications. Do not forget to read the comments on every post in the above link.

Quick Links to Prepare for Oracle Certification


OCA, OCP, OCE, Dumps, Brain dumps, Practice Questions, Sample Questions, Cheat sheet, Test papers.

Sabdar Syed.

10g optimizer case study: Runtime Execution issues with View merging

Gaurav Verma - Tue, 2008-12-16 10:46


After the 10g upgrade of our client's 11i Applications system, we were a little gung-ho, but the self-congratulation was short-lived as we discovered that a large number of custom concurrent programs were taking longer.

Aswath Rao, who had solved many performance issues before using the HOTSOS tool, handled it smartly. In this article, we talk about how he approached the problem, and what else could have been looked into.

Essentially, after some SQL plan analysis, he found that the merging of some views was causing a poorly executing plan (which probably looked good to the 10g optimizer - the stats were recent) to be selected. He finally got around it by using a couple of time-tested and clever hints to avoid the merging. He had to try and test quite a bit, but avoiding the merging worked.

First thoughts..

When I first heard about it, my thoughts went to an initialization parameter called optimizer_secure_view_merging, and when I searched on Google I came across http://oracledb.wordpress.com/2007/04/10/execution-plans-differents-with-different-users/, which talks about something similar. Now, interestingly, if you run the bde_chk_cbo.sql script (obtainable from Metalink), it says that this parameter should be set to FALSE. Note 216205.1 - Mandatory parameters for 11i Apps - corroborates the same. It was set to its expected value, in which Oracle does not use view merging or predicate move-around. Still, the view merging was definitely happening.

Where’s the bottleneck?

So there was this custom concurrent program called XXMRPSTKTRANS whose runtime profile had suddenly changed (we had the history of previous concurrent requests). A SQL statement of the format INSERT INTO XXCI.XXMRP_STOCK_TRANSFER_RPT_ITEMS <followed by a select query> was taking approximately 97% of the total concurrent request's runtime. The query used 967 million logical I/Os.

Let's look at the Hotsos profiling output:

(Figures: Hotsos profiler output and elapsed-time skew graph)

Let's look deeper...

From the SQL traces, it could be seen that this query was dragging us down:

OR (:B5 IS NULL ) )

Playing with plans - Worst, Better, Best...

Having determined the SQL on which future tuning efforts must be focused, let's dive into the different SQL plans observed (with and without hints) and the accompanying logical I/O stats.


Let's talk about the out-of-the-box plan (no hints) first. Obviously, there is a lot of detail here that is difficult to absorb at a single glance, but the key thing to note is that the cost is actually very low (103), so on paper this looks like a great plan. Sure, there are a couple of full table scans, but those tables are really small. The caveat is that this plan's real-time performance is really bad due to high index scans, which further contribute to excessive CPU delays.

(Figure: default plan)


It got a little better with the ORDERED hint, although the plan cost went up a bit:

(Figure: ORDERED hint plan)


Better than the ORDERED hint was the NO_MERGE hint applied to each of the views, which brought the consistent gets down to ~27,000, all the way from 35 million! Whoa, that was a drastic change. Note that the cost of this plan is 380, compared to the ORDERED plan's 1634. Interesting observation.

(Figure: NO_MERGE hint plan)

Moral of the story?

Well, the NO_MERGE hint definitely made things better when used on all the views and inline views in the query. The 10g optimizer was definitely struggling with this part. Also, while the compile-time plan showed a lower cost, the runtime performance was very skewed.

I am also forced to think that if we had played with the optimizer_index_cost_adj parameter a bit at the session level, maybe we could have made some indexes look more expensive and hence pushed them down the optimizer's priority list. But since this idea was not tested, it's just a thought at this point (it wouldn't have been a sustainable solution anyway, but I've seen it be effective at the session level before, and it can be tried in unwieldy situations).
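Had we tried it, the experiment would have been a one-liner at the session level (the default is 100; a higher value, such as the illustrative 400 below, makes index access paths look proportionally more expensive):

SQL> alter session set optimizer_index_cost_adj = 400;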

In retrospect, note that MRP_PLANS_SC_V internally refers to three more views! This is probably what makes this SQL plan a little more special.



MFG_LOOKUPS               VIEW     HARD
MRP_DESIGNATORS_VIEW      VIEW     HARD
MRP_ORGANIZATIONS_V       VIEW     HARD

A digressive (?) discussion..

As this wasn't the only SQL tuning issue the customer encountered, it might well point to an innate issue with the 10g optimizer workload statistics stored in the sys.aux_stats$ table. Although the customer had gathered a representative workload sample of system stats using DBMS_STATS.GATHER_SYSTEM_STATS('START') and DBMS_STATS.GATHER_SYSTEM_STATS('STOP'), the 10g optimizer did not seem to be churning out SQL plans that execute realistically well.

An interesting fact is that the customer's production machine has 32 CPUs, 192G of RAM and a really fast I/O subsystem, so maybe the workload parameters estimated through the APIs were overly optimistic? Maybe during the sampling the optimizer API said: "Wow, this is cool. I can really do a lot." It sounds like a possible cause, but one would argue that the Oracle API should be smart enough to account for it. Interestingly, the values for CPUSPEEDNW (the parameter used without workload simulation) and CPUSPEED are not very far from each other.

Here is how the stats look right now:

SQL> select sname, pname, pval1 from sys.aux_stats$;

SNAME         PNAME      PVAL1
------------- ---------- ----------
...

13 rows selected.

One should be aware that these parameters can be set manually using the DBMS_STATS.SET_SYSTEM_STATS API. While you can find some general information on the web, unfortunately not a lot of specific case-study material is available on how manually changing each parameter affects optimizer behavior. I think this is also where most of us fail to understand the true potential of the 10g optimizer, both before and after an upgrade.
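For example, to overwrite the measured CPU speed with your own figure (the value below is purely illustrative):

SQL> exec dbms_stats.set_system_stats(pname => 'CPUSPEED', pvalue => 2000);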

I sincerely wish that someone authoritative from the product or server technologies support team comes up with informative material on them.

Semester Project Finally Finished

Padraig O'Sullivan - Tue, 2008-12-16 09:15
We just finished our semester project yesterday for the class I am taking on High Performance Computing. It was a pretty interesting project based on the topic of software fault injection. More details can be found in the project report here.

Teaching a horse not to eat

Moans Nogood - Mon, 2008-12-15 17:49
My friend Ole told me this story many moons ago - and many times since:

A man decided to teach his horse not to eat anymore. Gradually he reduced the amount of food the horse got each day, and the programme worked really well.

Unfortunately, just as he had finally taught the horse not to eat at all something unforeseen and tragic happened to the horse and it died suddenly.

I was reminded of the story the other day when I was studying this article in my beloved The Economist (I've subscribed non-stop since 1983):

Stopping in a hurry

Dec 11th 2008
From The Economist print edition

Cars are getting better at avoiding collisions. Before long they may be communicating with each other to make roads safer. And somewhere in the article this is stated:

"Jan Ivarsson, head of safety at Volvo, believes it should be possible to build a car in which people will not be killed or injured."

On the other hand I can read in various newspapers that Volvo is not doing too well, and may in fact soon be either sold or closed, just like Saab. Or maybe Sweden will try to put those two together and create a small (by international standards) entity that might survive with state funding and what have you.

So you have this carmaker - Volvo - who has been making cars safer and safer and safer over the last several decades, and JUST as they're sensing the possibility of making the perfectly safe car - in which people will not get killed - the carmaker Volvo unfortunately died. Like the horse.

In my own little world, I have also been witnessing how perfect databases are getting, how much they can do, how much stuff you can put into them in order to save on the application coding and development side - and how coders, developers and programmers have stopped using them. Just as databases were getting damn near perfect... people stopped using them.

I have for several years now claimed that any computer technology that reached a state of perfection, a plateau of predictability & stability and a high level of usefulness ... will be replaced with something more chaotic and hence much less productive. I have seen no exceptions.

I now realise it is all connected: technology reaching maturity, car safety reaching its logical conclusion - and the feeding of horses.

xTuple Postbooks GUI client on OpenSolaris

Siva Doe - Sun, 2008-12-14 19:16

Postbooks from xTuple is an open source edition of their ERP and accounting software. They have a GUI client available for Linux, Windows and Mac - no Solaris, though. Since the required components for Postbooks, namely Postgres and Trolltech Qt, are known to work on OpenSolaris, I took the plunge and got the GUI client working on my OpenSolaris 2008.11.

It was not very tough to get it going on OpenSolaris 2008.11. First, install SunStudio and family by installing the 'ss-dev' pkg.

pfexec pkg install ss-dev

Next in line is Postgres. I decided on installing version 8.3, even though Postbooks will work with 8.2 also. Here is the list of packages required for 8.3.

pfexec pkg install -v SUNWpostgr-83-server SUNWpostgr-83-libs SUNWpostgr-83-client \
SUNWpostgr-jdbc SUNWpostgr-83-docs SUNWpostgr-83-contrib SUNWpostgr-83-devel \
SUNWpostgr-83-pl SUNWpostgr-83-tcl SUNWpgbouncer-pg83

As for Qt version 4, I took the easy way out by using spec-files-extra (SFE) to build Qt for me. Please see Building JDS on OpenSolaris for instructions on setting up the build environment for building Qt out of SFE. The SFEqt4.spec file has to be tweaked to build the Postgres SQL driver plugin that Postbooks requires. In the spec files directory there is a subdirectory called 'ext-sources'. Within that, edit the file 'qt4-solaris-cc-ss12' and change the QMAKE_CXXFLAGS and QMAKE_LIBS variables as follows.

QMAKE_CXXFLAGS          = $$QMAKE_CFLAGS -library=stlport4 -I/usr/postgres/8.3/include
QMAKE_LIBS              = -L/usr/postgres/8.3/lib -R/usr/postgres/8.3/lib

Now, build and install SFEqt4.spec.

/opt/jdsbld/bin/pkgtool -v --download --autodeps build SFEqt4.spec

This will install two packages SFEqt4 and SFEqt4-devel in your system. Now on to the next step of building Postbooks.

The Postbooks GUI client requires the source code for OpenRPT, an open-sourced SQL report writer. Download it from the OpenRPT sourceforge site. Since Qt4 is built using SunStudio, a small change is required to get OpenRPT built with SunStudio. In the file OpenRPT/renderer/graph.cpp, change line 780 to look like these two lines.

QColor cMap(_colorMap[sd->color]);
graph.setSetColor(snum, &cMap);

When extracting the OpenRPT source code, you will get a directory named something like 'openrpt-3.0.0Beta-source'. Create a symbolic link 'openrpt' to it.

ln -s openrpt-3.0.0Beta-source openrpt

This is required because the GUI client looks for 'openrpt' at some level above its own source directory. Then build the client by running 'qmake' followed by 'gmake'.


Next is the GUI client itself. Download the source from the Postbooks sourceforge site and extract it at the same level as 'openrpt'. For building under SunStudio, one file needs to be changed: edit the file 'xtuple/guiclient/postPurchaseOrdersByAgent.cpp', line 90, and replace '__PRETTY_FUNCTION__' with the name of the method itself (hardcode it as ::sPost). The build procedure is the same as above: 'qmake' followed by 'gmake'.

Now the GUI client can be launched with 'bin/xtuple'. I encountered a missing 'libiconv.so.2' library, which I overcame by running

env LD_LIBRARY_PATH=/usr/gnu/lib bin/xtuple

I know, using LD_LIBRARY_PATH is unsafe. Maybe adding '/usr/gnu/lib' to the spec file would solve this problem properly.

Now you should see the login screen and you should be able to log in to the configured Postgres database. Here is the obligatory screenshot.

Hope this was helpful for you. Comments and feedback are most welcome.

Should I go for Oracle 10g OCA/OCP or 11g OCA/OCP Certification?

Sabdar Syed - Sat, 2008-12-13 03:39
Hello All,

Choosing whether to go for Oracle 10g OCA/OCP or 11g OCA/OCP certification is becoming a complex decision for novice or newbie DBAs. I have also seen a couple of posts asking similar certification questions quite frequently in the OTN Forums, from people who are not really sure or are confused.

Well, it is often said that going for the latest version's certification is good and ideal. But what I advise is: first go for Oracle 10g OCA and OCP, then upgrade to Oracle 11g OCP.

Following are my views on why to go for Oracle 10g OCA/OCP initially rather than 11g OCA/OCP directly.

  • As we all know, a newer version (11g) includes the older version's features plus new features and bug fixes for older version issues.
  • A retirement date for the Oracle 10g certification has not yet been announced by Oracle. Moreover, the Oracle Database 11g: Administration I (OCA) exam is in production (i.e. regular), while the Oracle Database 11g: Administration II (OCP) exam has not yet gone into production (i.e. it is still a beta exam).
  • Oracle Database 10g is still used in production in almost all organizations around the globe, whereas very few companies are using Oracle Database 11g for their business, as 11g is still a base release and the standard release (11.2.X.X) is yet to come. This means Oracle 11g is not yet widely deployed for production purposes.
  • Oracle Database 10g Release 2 (10.2) still has Oracle primary and extended (Metalink) support for a few more years; after that, Oracle 10g will also be de-supported by Oracle.
  • Both versions' certifications (10g and 11g) have two exams - Administration I (OCA) and Administration II (OCP). Each exam's fee is US$125, i.e. there is no difference.
  • It's mandatory for OCP candidates to undergo one approved course from the approved list of Oracle University courses to fulfill the hands-on course requirement. This applies to both Oracle 10g and 11g certification takers.
  • For Oracle 10g OCP certification holders, there is a single exam - 1Z0-050, Oracle Database 11g: New Features for Administrators - to upgrade directly to Oracle 11g OCP. No course or hands-on course requirement form needs to be submitted to achieve the Oracle 11g OCP (upgrade) certification.
  • In this way, you will have both Oracle 10g and 11g certifications in hand and can show both in your resume or CV. This also fulfills the requirement of companies looking for candidates with solid experience in Oracle 10g and 11g who hold multiple certifications.
One can go for direct Oracle 11g Certification under the following circumstances.

  • If you are required, by your company or manager, to take the Oracle 11g course and the Oracle 11g certification exams for a specific requirement on Oracle Database 11g related projects.
  • When there is no Oracle 10g course listed at the training institutes in your city, and only Oracle 11g courses are available.
  • When you cannot afford to take the Oracle 11g upgrade exam later.
  • If my above views are not okay with you :-)

Note: Folks, the above are only my views, and others' need not be the same. So it's left up to you to decide whether to go for Oracle 10g OCA/OCP or 11g OCA/OCP certification. For more information, refer to the Oracle Certification Program link and navigate to the options to learn more about beta exams, retirements, the list of exams, upgrading your certification, the hands-on course requirement, etc.

Your comments are welcome if this article's information helps you.

Sabdar Syed,

My UKOUG 2008

Oracle Apex Notebook - Fri, 2008-12-12 11:39
Last week I went to Birmingham, UK to attend the UKOUG 2008 conference. João came along with me, and I can speak for both of us when I say that we had a great time. We found Birmingham to be a very clean and organized city, and it also worked out well because we managed to stay in London for the weekend, where we had time to visit some of the most famous tourist attractions. The conference itself was…
Categories: Development


Subscribe to Oracle FAQ aggregator