Feed aggregator

Misleading adcfgclone.pl ERROR: OUI files not found at their expected location...please make sure that the pre-requisite OUI patch was applied correctly...

Gaurav Verma - Thu, 2008-02-14 21:04

If you ever get this ERROR while running adcfgclone.pl, it would do you good to check that the directories named in the context file actually exist.

As misleading as the error message below is, if you debug the perl
scripts and step through the code, what is actually happening is that
adcfgclone.pl is unable to construct a base path.

To make a long story short, if you just
double-check all the directory paths mentioned in the
context file, you can get around this problem easily.
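As an illustration, here is a minimal sketch of such a check (the context file is the one passed on the adclone.pl command line below; the sed expression is just one way of pulling path-like values out of the XML):

        CTX=/apsd01/apps/apsdappl/admin/APSD_apsd.xml    # target context file
        # list every absolute path mentioned in the context file and
        # flag the ones that do not exist on this node
        sed -n 's|.*>\(/[^<]*\)<.*|\1|p' "$CTX" | sort -u |
        while read p; do
          [ -e "$p" ] || echo "Not found on this node: $p"
        done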



This misleading error message is fixed in R12.

Here is an example of such an instance..

        perl /apsd01/apps/apsdcomn/clone/bin/adclone.pl
java=/apsd01/apps/apsdcomn/clone/bin/../jre mode=apply stage=/apsd01/apps/apsdcomn/clone/bin/.. component=appsTier method=CUSTOM appctxtg=/apsd01/apps/apsdappl/admin/APSD_apsd.xml  showProgress contextValidated=false

ERROR: OUI files not found at their expected location.
Setting OUI path to: /appsoui

ERROR: OUI files not found at their expected location. Please make sure that the pre-requisite OUI patch was applied correctly: patch should be unzipped directly at the ORACLE_HOME root level so that OraInstaller.jar exists in either
1/oui/jlib/
or
1/oui/lib/

Restart adclone.pl after performing required corrections.
ERROR: Failed to execute /apsd01/apps/apsdcomn/clone/bin/adclone.pl


Add new node to apps 2

Fadi Hasweh - Wed, 2008-02-13 09:30
Please read http://oracle-magic.blogspot.com/2008/02/add-new-node-to-apps.html before reading the steps below,

After finishing configuring the clone instance by running adcfgclone.pl I had to run the adaddnode.pl script according to note 464018.1.
While running adaddnode.pl I faced the following error
ERROR at line 1:
ORA-00001: unique constraint (APPLSYS.AD_APPL_TOPS_U2) violated
ORA-06512: at line 16

I followed note 394615.1 to solve it. After completing the steps in that note, I ran adaddnode.pl again and hit an error related to AD_TIMESTAMPS_U1. I checked on Metalink but could not find anything, then I searched on Google and found a post by Dave (http://newappsdba.blogspot.com/2006/09/dont-hit-ctrl-c-while-running.html). Following his post, I took a backup of AD_TIMESTAMPS and then truncated it (a sketch of this step is shown below); after that I ran adaddnode.pl and it completed successfully.
Then I updated the xml file so that only the forms and web servers run on this new node, following the same note 464018.1. I ran autoconfig, then started the services on both nodes, and I am now able to access both nodes.
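For reference, the AD_TIMESTAMPS backup/truncate step boils down to something like this (a sketch only, not taken verbatim from Dave's post; the backup table name is made up, and the usual caveats about truncating an AD table apply):

    -- run as the owner of AD_TIMESTAMPS (APPLSYS)
    CREATE TABLE ad_timestamps_bak AS SELECT * FROM ad_timestamps;

    -- clear the table that adaddnode.pl was tripping over, then re-run adaddnode.pl
    TRUNCATE TABLE ad_timestamps;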

What I want to do now is to configure Web Cache as a load balancer. I also want to implement step 2.4.1.2, Implement Forms Listener Servlet Support, from note 217368.1 (Advanced Configurations and Topologies for Enterprise Deployments of E-Business Suite 11i) in order to have Forms server load balancing, so that when the Forms server on one node goes down, users are connected to the other Forms node. With the current configuration, if I am connected to webnode1 and Forms on that node is down, I will not be able to connect to Forms and I will receive an error; with load balancing implemented, I would be taken to the other working Forms server node.


Fadi

I'm Dumping TOAD, Helloooo Oracle SQL Developer!

Andries Hanekom - Wed, 2008-02-13 07:46
It's already come around: out of nowhere the price of flowers and greeting cards increases tenfold and that special one starts leaving hints all around the house. Yes, Valentine's Day - love is in the air, although with a very fake and commercial smell to it. In keeping with the love theme I have decided to announce my separation from TOAD. I have found somebody new, a young new model, a little rough around the edges but with loads of possibilities, and best of all she's much cheaper (not that kind of cheap...).

I have been looking to move over from TOAD to Oracle's SQL Developer (OSD) since its initial incarnation as Raptor, but have always found some key feature missing from the product. Recently I decided to attempt the crossover again, removing TOAD from easy desktop access and focusing on doing all work previously handled by TOAD through OSD. At the start I had to make frequent trips back to TOAD, unable to find certain functions and tools or just getting frustrated with the unfamiliar interface. But as I started using OSD more and more I discovered those familiar TOAD tools in their OSD disguise, and began to appreciate the added functionality provided by OSD. It's not been easy, and at first getting the preferences right for your preferred setup may be a little tricky, but once familiar with the new interface and the location of those favourite tools and functions you'll forget about that first love pretty soon.

My Top tips for migrating from TOAD to Oracle SQL Developer
  1. Intentionally or not, Oracle has placed a lot of the functions and tools used in TOAD on the same navigation path in OSD, so if you're stuck, just remember how you accessed it in TOAD and 9 times out of 10 you will have the same functionality available.
  2. To save code for later use, highlight the code, right-click and select Save Snippet. Snippets can later be retrieved via the menu option View -> Snippets
  3. Search database objects using the menu option View -> Find DB Objects

Please feel free to contribute any tips you might have for making the transition a little easier. Happy Valentine's Day.

A Little Fun with Truth in Advertising

Ken Pulverman - Tue, 2008-02-12 22:31
Just for fun... here are a few great examples of just how far Photoshop can stretch the truth:

http://www.bspcn.com/2008/02/12/top-40-photoshopped-images/

Espousing Pithiness

Ken Pulverman - Tue, 2008-02-12 19:59
So you ask, what have you been doing? You haven’t written a blog entry in a long time? Well while living the dream of thinking big Marketing 2.0 thoughts and driving the corresponding actions, I started to question the value of a blog. Oh yes, it’s there, but I think the format of mine and others needs to change. We are all super time starved, and I think lengthy musings just don’t do it for folks. However, at the same time I was questioning the nature of business blogging, my small readership was coming out of the woodwork to question why I hadn’t written in a while. My most current opinion is that I need to keep it short for my sanity and yours. With that said, I am going to completely violate that today as I am going to show you right here a couple of the things I’ve been up to since the last post.

First of all, I’ve been working on a website that will be a companion piece to this blog (indeed the blog is embedded in it) to capture resource links and threads to the points we discuss here. In the next week or so, I’ll be transferring a lot of the Marketing 2.0 resources I assembled for an internal corporate audience to this public space.

Here’s what it looks like (built with the new beta release of Netvibes).





This next item is impossible to read, and Blogger's upload converted me to a member of the Blue Man Group. So dear reader(s), if you are up on the latest file posting techniques for blogs, let me know. Google Docs wanted this in HTML and it looked pretty bad, though not as bad as what you see below. It’s a paper I wrote that will appear in a publication called Perform, which is published by the same folks that run the magazine Revenue Today. Anyway – some thoughts on why I believe web 2.0 is actually supporting very old behaviors for how we buy and sell, not new ones. If you are interested, e-mail me, and I’ll send you a copy.



That's all for this edition. Shorter and pithier next time. Promise.






Sharing is Caring 4

Fadi Hasweh - Tue, 2008-02-12 13:54
Oracle Community
It's the new site for Oracle social networking - http://www.oraclecommunity.net/ - to make the Oracle community even smaller.

enjoy
Fadi

Lies, Damn Lies, and Statistics

Mary Ann Davidson - Tue, 2008-02-12 13:24

There is an aphorism famously attributed to Mark Twain (among others) to the effect that there are "lies, damn lies and statistics." The Mark Twain quotes on truth I was able to verify were almost as interesting though not quite so pithy:

 

A lie can travel halfway around the world while the truth is putting on its shoes. (Remember, this was before the Internet).

 

Get your facts first, and then you can distort them as much as you please.

 

I've had several reminders recently that what we think we know and take for granted is often not only wrong, but quite wrong. The ease with which we can search for things on the Internet leads us to forget that what we are finding is information, but not always expertise and almost never actual wisdom. To the extent we rely on "received wisdom" it is a good opportunity to remind ourselves that information and knowledge are two different and often diametrically opposed beasties, indeed.

 

For example, someone recently sent a resume of a former colleague. I use "resume" loosely, as the description of work experience (the portion I have direct knowledge of, which is the only section to which I address my comments) is better described as "fiction." Perhaps, "fiction based on actual events," if I am feeling generous, except that I am not. This was by far the worst example of resume embellishment I've seen in 20-some years.

 

In the interests of protecting the guilty, I will call the individual involved Fictional Resume Writer (FRW). The nature of FRW's sins were 1) claiming credit for work FRW never did 2) claiming origination of work done by others  - which I find especially reprehensible and 3) gross exaggeration of accomplishments.  I emailed FRW and said that I thought a resume rewrite was in order; especially given FRW was seeking business with Oracle. Business is personal, I said, and someone who is materially misleading in credentialing I'd be unlikely to trust or want to work with in a business setting. I also went point by point with the "issues" in the resume, just to be clear what I thought was inaccurate and why.

 

The response I got was the email equivalent of a shoulder shrug and a comment that the amount of hard work FRW expended "justified" claiming credit. (Is this the new world of Web 2.0, where "mashup" owners claim origination based on the "hard work" involved in taking others' work and creating something different from it?)

 

Perhaps I am old-fashioned, but there is a clear difference between a good idea, initiating that good idea, and carrying through on a good idea to effect positive and measurable change. And common sense if not a sense of honor should dictate how one expresses the difference among them.

 

For example, once upon a time, I got tired of explaining to developers for the umpteenth time what a buffer overflow was, so I wrote up a few pages - perhaps two or three - on what constituted a buffer overflow and how to avoid them. Though I did not know it at the time, this was the genesis of the Oracle Secure Coding Standards. I note at the outset, for reasons that will become all too obvious if you keep reading, that I do not claim "authorship" of these standards.

 

My prototype document grew over time, substantially. Someone else expanded the list to be a "top ten" list of secure coding dos and don'ts. "Top ten" then grew to be an extensive list of security vulnerabilities and how to avoid them. There are also examples of what happens if you don't write correct, secure code (drawn from actual coding errors development teams have made). All in, the document has grown to about 300 pages, to include "case law" (not just what not to do, but how to address specific technical issues the correct way). One individual (Oracle's Chief Hacking Officer) has written the bulk of the secure coding standards with input and review from others and he is clearly the author and redactor of this document. (Mahalo nui loa, Howard.)

 

There have been other enhancements and uses of the secure coding standards. Someone got the bright idea of tying the secure coding standards directly to our product release security checklists. A couple of people developed the secure coding class (online web-based training based on the Oracle Secure Coding Standards), while still others have watched over the rollout of this class to the development groups that need to take it (to include restructurings, new hires and acquisitions).

 

In theory, were I to write my resume the way FRW has, I would claim "originator," "author" or "founder" of the secure coding standards, since I wrote the first two - count them - two glorious pages. But what I wrote does not have the breadth, depth, examples, actual technical know-how, proactive guidance, and utility of what now exists. My claim to "authorship" - if I were vain enough to make it - is like the person who puts the front page and inside page (the one with the ISBN number) together for a book claiming to be the "author." It's simply ridiculous, and I'd deserve to get whacked with all 300 pages, hard bound, if I made such a statement.

 

There is a security lesson here. One of them is the age-old one of "trust, but verify." It is not my job - and I would not do it - to tell FRW's current employer that FRW's resume in some particulars is much closer to fiction than fact. "Caveat emptor" - let the buyer beware. If you are hiring someone on the basis of credentials, it's well worth checking them.

 

The second security lesson is an old one. Business is still personal, and personal attributes matter, like honor and trust. Contracts, for example, cannot create trust where there is none; just specify requirements for performance and remedies for non-performance.  A person who is untrustworthy in small things is likely to be untrustworthy in large things, and if there is anything more untrustworthy than taking credit for others' work, I do not know what it is.

 

The second reminder of the difference between what we think we know and the truth was occasioned by a recent op-ed piece in the Wall Street Journal called "The Lies of Tet" by historian Arthur Herman (I can personally recommend his book To Rule the Waves - How the British Navy Shaped the Modern World).

 

For many years, I've tried a little "knowledge experiment," by asking random people if they had heard of the Tet Offensive and, if so, who they thought "won." The response (if I exclude people who have served in the armed forces who know the truth) is astonishing. Most people, when asked, believe that the Tet Offensive was a resounding defeat for the forces of the United States and the Republic of South Vietnam. In particular, those who were alive at the time and recall the media coverage are shocked to find out that what they think they know is all wrong. One hundred percent wrong, in fact.

 

As Arthur Herman says:

"The Tet Offensive was Hanoi's desperate throw of the dice to seize South Vietnam's northern provinces using conventional armies, while simultaneously triggering a popular uprising in support of the Viet Cong. Both failed. Americans and South Vietnamese soon put down the attacks, which began under cover of a cease-fire to celebrate the Tet lunar new year. By March 2, when U.S. Marines crushed the last North Vietnamese pockets of resistance in the northern city of Hue, the VC had lost 80,000-100,000 killed or wounded without capturing a single province. Tet was a particularly crushing defeat for the VC (emphasis mine).  It had not only failed to trigger any uprising but also cost them "our best people," as former Viet Cong doctor Duong Quyunh Hoa later admitted to reporter Stanley Karnow. Yet the very fact of the U.S. military victory -- "The North Vietnamese," noted National Security official William Bundy at the time, "fought to the last Viet Cong" -- was spun otherwise by most of the U.S. press." ("The Lies of Tet," Wall Street Journal, February 6, 2008)

There are "truths" that are so embedded in the fabric of what we think we know that we don't even bother reading broadly, from a breadth of sources (and reputable sources) to reach our own conclusions about what is true vs. what is received wisdom. We simply must do so on issues that matter to us, instead of "outsourcing" wisdom to pundits. Otherwise, "collective" wisdom substitutes for actual facts and analysis. Of all the maxims wandering loose about the Internet (and on it), the one I find the most obnoxiously untrue is "the wisdom of the crowds." Sometimes, the crowds are dead wrong, because they've been massively misinformed. As with Tet.

 

It is an inescapable truth that the media got Tet wrong, spectacularly wrong, and "the lies of Tet," to use Arthur Herman's phrase, continue to shape people's opinions of not only Vietnam, but warfare in general and the veracity of the armed forces decades after the actual events.

 

As much as I have expressed concerns about every idiot with an opinion being able to express it on the Internet (as I am doing here!), the fact remains that in some cases, bloggers have spotted untruths, exaggerations and fabrications reported by the media (doctored pictures and doctored service records, to think of a couple of prominent examples). There is an important utility in keeping professional journalists and industry analysts honest and objective that is worth something to the millions of people who expect that from mainstream media. Score one for the blogosphere.

 

The corollary, and cautionary note to the blogosphere, is the realization that not all truths are apparent in nanoseconds. Technologists are used to rapidity of change, and the barrage of information and the rapidity of change often consume us as we try to keep up with the latest technology trend. Often, however, it is only with the passage of time, careful analysis, and hindsight, that we can correctly weigh events. There is a reason for the phrase rendered "timeless truths" instead of "nanosecond truths."

 

I was on vacation recently at a venue that couldn't be more removed from Silicon Valley: Colonial Williamsburg, Virginia, at The Williamsburg Antiques Forum. Looking at decorative objects that are between 300 and 400 years old and determining what they say to us now about the time at which they were made and the people who owned them could not be more different from what I do for a living. Yet even in the world of decorative arts, curators continue to uncover new facts that may lead them to reinterpret history. In short, even with a 350-year-old highboy, there is still much to learn, to the point that one's view of history may change.

 

The security issue in the above is still "trust, but verify," and I would add "from multiple sources, not merely one." Be especially wary of "received wisdom" on things that matter, and be willing to do your own research and develop your own expertise. Anything I read about military history - and history, in large part, is military history - I use at least two sources for if it is important to me, and occasionally more.

 

Thus far, I've talked about lies (FRW), damn lies (the media about the Tet offensive) but not about statistics. The statistics part comes with a presentation I have been doing recently (three times in Eastern Europe a couple of weeks ago) about security metrics.

 

I'm going to skip over a lot of what I talked about (I have already opined in a previous blog entry why "number of published vulnerabilities" is a metric very easy to "game" to achieve unintended results), to focus on a truth I stumbled upon by sheer accident. I suspect that metrics kahunas have known what I found for a long time, so I don't claim novelty, just a "eureka!" moment.

 

I talked in my presentation about what constitutes a good metric (objective, measurable, helps you answer basic questions like "are we doing better or worse," incents the right behavior, fosters additional questions, helps you identify positive or negative trends, and so on). I used as an example the various metrics we keep pertaining to the release of CPUs, which I wanted to discuss as a group, because there is no single metric that you could use to answer "goodness questions" related to how we are doing. In fact, picking a single metric and focusing on it to the exclusion of all others would lead to incorrect incentives.

 

For example, one of the metrics we keep is "number and percentage of patches that were published on the announced CPU date." That's a good one, except that you do not want people only hitting the date and ignoring quality. So, "number and percentage of patch reloads" is another one we keep, because while we want CPUs to come out on the target date, we do not want to have to reload patches because the quality was poor. Both quality and timeliness are important; hence, two metrics. We are also concerned that the issues we identify as worthy of going into a Critical Patch Update make it through the process (sometimes, issues drop out for regressions). Ideally, you'd want all critical issues you identify to actually make it into the target CPU (because there are no regressions). So, we look at number of issues that drop out through the CPU process because we are trying to make that number as low (over time) as is feasible.  I walked through all of the aforementioned metrics (and a few related to CPUs I did not discuss here) and slapped a heading on the slide: "combined metric."

 

My eureka moment was noting that, if security metrics are useful, and they are, the idea of a combined metric is even more useful. The goal of a metric is to be able to manage better, and just as in the pre-GPS days of navigation you needed to take multiple "fixes" to triangulate your position, you are often better served by triangulating how you are doing by measuring and weighing several different metrics. Rarely can you manage well by measuring just one thing.

 

The real goal of any metric, or "statistic," to round out my theme, is to manage better. Metrics can help you allocate resources to effect the most good for the most people and to spot trends (both positive and negative) quickly. Ultimately, a good metric needs to help you answer the question, "Are we doing better or worse?" You can do a lot with metrics, and some people lie with them, but above all, you have to be honest with yourself.

 

As Shakespeare put it so well:

 

This above all: to thine own self be true,

And it must follow, as the night the day,
Thou canst not then be false to any man.

 

For More Information:

 

Book of the week - War Made New: Technology, Warfare and the Course of History: 1500 to Today by Max Boot

 

A really great read about how technological changes influence warfare. If you have no idea how IT-centric warfare now is (in terms of command and control), the last 100 pages are really insightful.

 

http://www.amazon.com/War-Made-New-Technology-Warfare/dp/1592402224

 

One of the very best security metrics kahunas I know is Dan Geer. Anyone interested in this topic should Start With Dan:

 

http://en.wikipedia.org/wiki/Dan_Geer

 

More on books by Arthur Herman:

 

http://books.google.com/books?as_auth=Arthur+Herman&ots=rYQc2DLq3w&sa=X&oi=print&ct=title&cad=author-navigational&hl=en

 

An article by Arthur Herman on the Vietnam War:

 

http://www.commentarymagazine.com/viewarticle.cfm/Who-Owns-the-Vietnam-War--11006

 

One of the best books on Vietnam is still Lewis Sorley's A Better War:

 

http://www.amazon.com/Better-War-Unexamined-Victories-Americas/dp/0156013096

 

The “log file sync” wait event is not always spent waiting for an I/O

Christian Bilien - Tue, 2008-02-12 10:41
I went through a rather common problem some time ago, which shows how easily one can get fooled by the “log file sync” wait event. Here is a log file sync wait histogram which was interpreted as a poor storage array response time by the DBAs, often a good way to get off the hook […]
Categories: DBA Blogs

Trying to lose a datafile.

Claudia Zeiler - Tue, 2008-02-12 00:22
Chen Shapira wrote me a suggestion regarding losing a data file.

“Try moving a data file while the DB is up and running, and try to make Oracle "notice"
that it is gone. On Linux, Oracle does not notice anything! You can move a datafile
and all selects, updates, inserts, will work. The OS prevents the DB from seeing
the change. But I noticed that your test system is on windows, so I'm curious
whether it is the same.”


Once my company’s operator TARed the rest of the database into the system tablespace datafile. Oracle ‘noticed’! Are you sure that you really lost the file? Was there some sort of mirroring of the file?

Anyway, here is what happened when I tried to lose a data file while the DB is up on a Windows based DB…

SQL> select name from v$datafile;

NAME
-------------------------------------------------------------------------------
C:\ORACLE\ORA10\ORADATA\DB10\SYSTEM01.DBF
C:\ORACLE\ORA10\ORADATA\DB10\UNDOTBS01.DBF
C:\ORACLE\ORA10\ORADATA\DB10\SYSAUX01.DBF
C:\ORACLE\ORA10\ORADATA\DB10\O1_MF_USERS_35OQZZ4H_.DBF
C:\ORACLE\ORA10\ORADATA\DB10\EXAMPLE01.DBF

SQL> host ren C:\ORACLE\ORA10\ORADATA\DB10\O1_MF_USERS_35OQZZ4H_.DBF users.dbf
The process cannot access the file because it is being used by another process.

SQL> host del C:\ORACLE\ORA10\ORADATA\DB10\O1_MF_USERS_35OQZZ4H_.DBF
C:\ORACLE\ORA10\ORADATA\DB10\O1_MF_USERS_35OQZZ4H_.DBF

The process cannot access the file because it is being used by another process.

SQL> host ren C:\ORACLE\ORA10\ORADATA\DB10\EXAMPLE01.DBF example02.dbf
The process cannot access the file because it is being used by another process.

Also a straight O/S effort:
C:\Windows\System32>del C:\ORACLE\ORA10\ORADATA\DB10\O1_MF_USERS_35OQZZ4H_.DBF

C:\ORACLE\ORA10\ORADATA\DB10\O1_MF_USERS_35OQZZ4H_.DBF
The process cannot access the file because it is being used by another process.

CPUJan2008: One thing that every DBA should know about it.

Pawel Barut - Mon, 2008-02-11 15:54
Written by Paweł Barut
While reading the documentation for the latest Oracle security patch for the database, CPUJan2008, I noticed one new required step: Recompile Views. One would say that this is nothing strange, but to run this script the database has to be started in UPGRADE mode. For non-RAC installations it's not a big issue, as you have to plan downtime anyway. But Oracle suggests that this recompilation can take 30 minutes:
"In one internal Oracle test with approximately 2000 views and 4000 objects, the total execution time for view_recompile_jan2008cpu.sql and utlrp.sql was about 30 minutes."
In practice, in my case it took no more than 5 minutes, so it's not a big deal.
But it could be a problem for RAC installations, where the usual approach is to apply the CPU on one node, run catcpu.sql, and then patch the second node, so that the database normally stays available the whole time. If the database has to be started in UPGRADE mode, then downtime is required. According to the documentation it is not required to run the view recompilation script during the patching process; it can be run afterwards,
"however, the CPU installation will not be complete until the view recompilation is completed."
You can omit this step only if:
  • The database was created with Release 11.1.0.6 or later

  • The database was created with any release (for example, 10.2.0.3, 10.1.0.5, or 9.2.0.8) after CPUJan2008 or a later CPU had been applied

I hope this also means that after CPUJan2008 has been applied you will not have to run this script again when the next CPU comes out (CPUApr2008, etc.).
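For reference, the view recompilation step itself boils down to roughly the following sequence (a sketch only; the script locations are assumptions based on the CPU README, so check the README shipped with the patch for the exact steps):

    -- connect to the database as SYSDBA
    SHUTDOWN IMMEDIATE
    STARTUP UPGRADE
    -- CPUJan2008 view recompilation script (assumed location under ORACLE_HOME)
    @?/cpu/view_recompile/view_recompile_jan2008cpu.sql
    SHUTDOWN IMMEDIATE
    STARTUP
    -- recompile any remaining invalid objects
    @?/rdbms/admin/utlrp.sql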


Cheers Paweł

--
Related Articles on Paweł Barut blog:
Categories: Development

Personal Substitution Variables

Oracle EPM Smart Space - Mon, 2008-02-11 10:47
A cool feature of the Smart Space SDK is the ability to create a set of global Smart Space variables that can be used by any gadget. These are essentially name value pairs that are called LocalProperties in Smart Space.

I used this feature in my latest batch of Essbase gadgets. I wanted the ability to set the Time dimension member for all of my Essbase query gadgets in one place and not have to go to each gadget to change this member. I knew that one option would have been to use the Essbase Substitution Variable functionality found in the Essbase product. But I did not want to make this a variable for all Essbase users which is what the Essbase Substitution Variable would provide.

I created a simple MyVariables gadget that allows the user to create personal variables.




These variables can then be used by other gadgets. In the case of the Essbase gadgets, I implemented a special syntax whereby any member name that is preceded by two ampersands (&&TODAY) represents a local variable. When constructing the query, the Essbase gadget then makes a call to Smart Space whenever it encounters a member name with this syntax to get the value of that variable. It replaces the syntax (&&TODAY) with the variable value (“20080823”) and issues the query.

This is another example of the great new benefits of having Smart Space running on your client. It is the analytical footprint that will allow for powerful new capabilities such as creating your own personal substitution variables.


public String getPreference(String key)
{
    // Load a named personal variable from the Smart Space LocalProperties store
    return Services.PreferenceManager.LocalProperties.Load(key);
}

public void savePreference(String key, String value)
{
    // Create or update a personal variable
    Services.PreferenceManager.LocalProperties.Save(key, value);
}

public void deletePreference(String key)
{
    // Remove a personal variable
    Services.PreferenceManager.LocalProperties.Remove(key);
}

public void getPreferences()
{
    // Refresh the gadget's list view with all currently defined variables
    listViewVars.Items.Clear();
    try
    {
        String[] sPrefKeys = Services.PreferenceManager.LocalProperties.GetAllKeys();

        ListViewItem item = new ListViewItem();

        // one row per variable: key in the first column, value in the second
        foreach (String sPrefKey in sPrefKeys)
        {
            String sPrefValue = getPreference(sPrefKey);
            item = this.listViewVars.Items.Add(sPrefKey);
            item.SubItems.Add(sPrefValue);
        }
        this.Refresh();
    }
    catch (Exception e1) { }  // errors are silently swallowed here
}
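
Building on the helpers above, the &&-substitution described earlier might look roughly like the following (a sketch only; resolveMember is a hypothetical helper, not code from the original gadget):

// Replace a "&&NAME" member token with the value of the personal variable NAME;
// any other member name is passed through unchanged.
private String resolveMember(String member)
{
    if (member != null && member.StartsWith("&&"))
    {
        return getPreference(member.Substring(2));   // e.g. "&&TODAY" -> "20080823"
    }
    return member;
}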
Categories: Development

Anyone can post comments on this blog

Chris Grillone - Mon, 2008-02-11 10:33
To encourage questions and open discussion on this blog, I have changed the settings to allow anyone to post comments. I want to learn from you and allow you to share your Oracle Flow Manufacturing, lean manufacturing and general manufacturing experiences and insight.

Partners, share your industry insight and gain industry contacts here. According to Google Analytics, people from every continent except Africa (and Antarctica; penguins are not interested) are viewing this blog. (I will be harassing my African contacts after posting this blog.)

Oracle Sales and Sales Consultants and Industry Business Unit Consultants, share your insight and identify opportunities.

Everyone, leverage this community of Manufacturing experts whether you are considering Flow Manufacturing or another solution.

I want to hear from you to help us improve the product and customer experience.

I Just Want to Ride a Bike…

Oracle EPM Smart Space - Mon, 2008-02-11 09:13

I came across an article in Hemispheres Magazine (UAL publication) about this time last year. The article was about cycling, and last February I was just getting back into cycling after a long break from it, so I was intrigued. Aside from getting back into cycling I was also buried in everything Smart Space, attempting to build and launch this new product. I was shocked by how much this article related what I was doing in software to what Shimano was planning to do for cycling. The article started off talking about how the Lance Armstrong era of cycling was over and the wave of high-end bike sales was coming to an end in the U.S. So Shimano went off to figure out how they could get people to love the sport of cycling again and therefore buy more products. The conclusions they reached are where all the similarities came in. They found that people had very fond memories of cycling and really enjoyed the idea of it, but could not wrap their heads around the complexity of what cycling has become. Things like 10-speed cogs, carbon forks, STI shifters, and clipless pedals had started to overwhelm the average cyclist. The basic conclusion was that people just wanted to ride a bike; simplicity.

This is what I really related to; the goals of Smart Space are in line with this: end users want simplicity. Many end users don't want to worry about connection settings, indexes, caching, dimensionality, etc.; they just want to know how their travel expenses will impact the department's budget for this quarter, or get the status of their sales pipeline. I saw that the key for Shimano and Smart Space was in simplicity. So whenever I think about a feature or plan out a new gadget, I think about the idea that people just want to ride a bike.

Here is the article so you can decide for yourself if the concepts really relate:

http://www.ideo.com/pdf/Hemispheres_2007-02.pdf

Categories: Development

Dynamic Quick Picks - APEX style!

Anthony Rayner - Sun, 2008-02-10 13:06
Introduction...
Have you ever admired the quick pick links that appear on some of the APEX pages under select lists? For example, when editing the 'Display As' property of a page item you can quick pick common display types, as shown here:


So, I wanted to implement this same feature in my own applications, only dynamically generating the quick pick options based on data. For example, in the standard EMP table there is a column called Job. I wanted to display 2 quick picks for this field, based on the top 2 occurrences of values for jobs in the EMP table.


How...
Step 1) Firstly we need a generic database function to handle rendering the links. Here is the code:
create or replace
PROCEDURE QUICK_PICK
( p_query VARCHAR2
, p_item  VARCHAR2) AS
  TYPE cur_type IS REF CURSOR;
  cur       cur_type;
  v_display VARCHAR2(4000);
  v_return  VARCHAR2(4000);
  l_names   DBMS_SQL.VARCHAR2_TABLE;
  l_query   VARCHAR2(1000);
BEGIN
  l_query := p_query;
  -- get any binds and replace them with the value from session state
  l_names := WWV_FLOW_UTILITIES.GET_BINDS(l_query);
  FOR i IN 1..l_names.COUNT LOOP
    l_query := REPLACE( LOWER(l_query),
                        LOWER(l_names(i)),
                        '(select v(''' ||
                        LOWER( LTRIM(l_names(i), ':')) ||
                        ''') from dual)');
  END LOOP;
  HTP.P('<div>');
  -- render one quick pick anchor per row returned by the query
  OPEN cur FOR l_query;
  LOOP
    FETCH cur INTO v_display, v_return;
    EXIT WHEN cur%NOTFOUND;
    HTP.ANCHOR('javascript:setValue(''' ||
               p_item || ''',''' ||
               v_return || ''');',
               '[' || v_display || ']',
               null,
               'class=''itemLink''');
  END LOOP;
  CLOSE cur;
  HTP.P('</div>');
  HTP.P('<br/>');
END QUICK_PICK;
So this procedure takes a SQL query (p_query) and a page item ID (p_item) as parameters. The results from the query specified in p_query will be used as the display and setting values of each quick pick link, so the query must contain two columns in the select statement (the first selected column should be the display value to be used and the second the return value, see query below for an example). The item name specified in p_item will be the item that the quick pick link will set.

This procedure just opens a dynamic cursor based on the query specified in the p_query parameter, then calls the HTP.ANCHOR function to generate the anchor HTML tag. The procedure also wraps the lot in a div tag and adds a line break at the end for presentation (you may want to change this if you have another page item on the same line as the select list being set). The following is an example of the HTML generated by this procedure:
<div>
<a class="itemLink"
href="javascript:setValue('P2_JOB','SALESMAN');">
[SALESMAN]
</a>
<a class="itemLink"
href="javascript:setValue('P2_JOB','CLERK');">
[CLERK]
</a>
</div>

Step 2) Secondly, we need to call this function in APEX. You can do this by adding a page item with the following settings:
Name > Name: [whatever you want, P1_QUICK_PICK_JOB would be fine for this]
Name > Display As: Display As Text (based on PL/SQL, does not save state)
(Note: This display type is not available when creating a page item, so it must be set after creation.)
Displayed > Sequence: [just after the select list item you wish the quick pick link to set]
Label > Label: [blank]
Label > Template: No Label
Source > Source Type: PL/SQL Anonymous Block
BEGIN
QUICK_PICK('SELECT summed_rows.job display
, summed_rows.job return
FROM (SELECT job,
SUM(1) qty
FROM emp
WHERE job IS NOT NULL
GROUP BY job
ORDER BY SUM(1) DESC
) summed_rows
WHERE ROWNUM <= 2'
, 'P2_JOB');
END;
This example uses a query that selects the top 2 occurrences of job values in the emp table. The second parameter indicates that the page item 'P2_JOB' will be set by the quick pick link.

Step 3) So finally, there is one more thing to do, create the css used by the link. I have copied the inbuilt class used by APEX to render these links and re-specified it in its own 'custom.css' file. You can do this by creating a file with the following contents:
a.itemLink:link {
color:#811919;
font-size:11px;
margin:0pt 5px 5px 0pt;
text-decoration:none;
white-space:nowrap;
}
Then save this as 'custom.css' and upload it to APEX via 'Shared Components > Cascading Style Sheets'. Finally to make it available to the application, we need to reference this in the page template, so add the following line to the 'Definition > Header' code, after the default theme css declaration:
<link rel="stylesheet" href="#WORKSPACE_IMAGES#custom.css" 
type="text/css" />

The results can be seen here:




Conclusion...
So that's it. Then in order to add more quick pick links to other select items, just create other page items referencing the QUICK_PICK database procedure as detailed in step 2.

Limitation: Currently the css doesn't work in IE, only Firefox, so the links just display as normal text. When I find out why, I will post an update.

Hope it helps,
Anthony



Update...
Following a discussion on the APEX forum regarding this post, I have improved the QUICK_PICK procedure so that it can handle a query containing bind variables. This is making use of the in-built function WWV_FLOW_UTILITIES.GET_BINDS(), with a technique proposed in the forums here.


Another update...
Please note, if you are working in APEX 4.0 or above, the item type used to display the quick pick links has changed slightly, as part of a wider item consolidation that took place in the 4.0 release. You should use the 'Display Only' item type, with a 'Setting > Based On' attribute equal to 'Output of PL/SQL Code' and then define the call to the QUICK_PICK procedure in the 'PL/SQL Code which emits HTML' attribute. If you have written this in a release prior to 4.0, then when upgrading you are automatically migrated to use these new settings.
Categories: Development

WREAK_APPLICATION_HAVOC

Oracle WTF - Sun, 2008-02-10 06:35

Tom Kyte recently blogged about the senseless and yet strangely common practice of coding something like this:

WHEN OTHERS THEN
   RAISE_APPLICATION_ERROR(-20001,'Following Error Occured:' || SQLERRM);

which fairly obviously achieves nothing except take a standard message like this:

ORA-06501: PL/SQL: program error
ORA-06512: at line 6

and pointlessly scramble it into this:

ORA-20001: Following Error Occured:ORA-06501: PL/SQL: program error
ORA-06512: at line 11

which adds some meaningless text, hides the original line number, and misspells "occurred". Not bad for a day's work.

It turned out that some people had been doing this for years because they were simply too stupid to realise that they didn't have to.

Anyway you know all this because you read Tom Kyte's blog. But have a look at this helpful page of advice from Tech On The Net, under "Oracle/PLSQL: SQLERRM Function"

You could use the SQLERRM function to raise an error as follows:

EXCEPTION
   WHEN OTHERS THEN
      raise_application_error
      (-20001,'An error was encountered - '||SQLCODE||' -ERROR- '||SQLERRM);
END;

Or you could log the error to a table as follows:

EXCEPTION
   WHEN OTHERS THEN
      err_code := SQLCODE;
      err_msg := substr(SQLERRM, 1, 200);

      INSERT INTO audit_table (error_number, error_message)
      VALUES (err_code, err_msg);
END;

Now that is even better. The first example adds some meaningless text, hides the original line number, and duplicates the error code (unless it's a NO_DATA_FOUND exception, but let's not go there), to produce something like this:

ORA-20001: An error was encountered - -6501 -ERROR- ORA-06501: PL/SQL: program error
ORA-06512: at line 11

The error logging example pointlessly captures SQLCODE (nobody will ever use it), throws away all but the first 200 characters of the error stack, logs nothing about what happened, and fails to re-raise the exception so if you don't check the log you won't know anything went wrong until your customers start asking where their stuff is.
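
For contrast, here is a minimal sketch of a logging handler that keeps the whole error stack and still re-raises (it reuses the audit_table columns from the example above and assumes error_message is wide enough; in real code the insert would usually go through an autonomous-transaction logger so that a rollback doesn't take the log entry with it):

EXCEPTION
   WHEN OTHERS THEN
      INSERT INTO audit_table (error_number, error_message)
      VALUES ( SQLCODE
             , substr( dbms_utility.format_error_stack ||
                       dbms_utility.format_error_backtrace, 1, 4000) );
      RAISE;  -- re-raise so the caller still finds out something went wrong
END;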

Wouldn't it be great if there were, say, a Boolean third parameter to RAISE_APPLICATION_ERROR that would make it retain the existing error stack, freeing up the message line for you to put something intelligent and helpful, like, I don't know,

BEGIN
    RAISE program_error;
EXCEPTION
    WHEN OTHERS THEN
        RAISE_APPLICATION_ERROR
        ( -20001
        , 'Biscuits cannot be ordered on a ' || TO_CHAR(SYSDATE,'fmDay') ||
          ' without a hot beverage'
        , TRUE);
END;

to produce something like this:

ORA-20001: Biscuits cannot be ordered on a Sunday without a hot beverage
ORA-06512: at line 5
ORA-06501: PL/SQL: program error

We can but dream.

Last Blog Entry (sysdate-364)

Eric S. Emrick - Fri, 2008-02-08 23:20
Well, it has been nearly one year, to the day, since my last post (sorry for the confessional-like preamble). I was at a luncheon today with some former colleagues and some were asking me when I was going to start blogging again. I hope to start back up here pretty soon. So, if anyone is still dropping by, I hope to resume with some new material. However, I might try and keep it a bit less technical (fewer bits and more bytes); more light hearted, yet hopefully still informative and fun. Redo log dumps and SCN dribble probably send most into a coma. Heck, I read some of my prior posts and nearly fell asleep. I will continue the "Oracle Riddles" posts as they seem to generate interesting and fun dialogue. The key is to have FUN with it. If blogging becomes a chore then you are doing it for the wrong reason. I actually visited Tom Kyte's blog this evening and started reviewing some of his more recent entries - to get the juices flowing. BTW, who is the chap with the Jonathan Lewis-ian beard pictured on his blog? :-).

Evil Clones - The SCM nightmare that can stop you in your tracks

Susan Duncan - Fri, 2008-02-08 09:04
It can happen to anyone. First the feeling of disbelief, then the cold sweat; you try pinching yourself, hoping it's some bad dream and you'll wake up and they'll be gone . . .

There, that's my catchy first line of my first novel written, now just the other 150 pages to invent. But unfortunately it really can happen to you - if you are working with ClearCase you may have come across evil twins. But I want to talk about a different nightmare that can occur if you are using SVN or CVS.

Here's the scenario. You and a team member are both working on a file (let's call it MyClass1) checked out of SVN.







Your colleague, as part of her edits, renames MyClass1 to CloneClass1 and commits the working copy.







SVN uses copy/delete commands for a rename:
First MyClass1 is copied (so that the history is maintained) to CloneClass1.
Then MyClass1 is deleted from the repository.
The screenshot of the SVN repository (left) shows the new resulting file with CloneClass1.

CVS is not quite so clever: it uses add/delete. It adds CloneClass1 and deletes MyClass1, so none of the history of MyClass1 is copied to the new CloneClass1. (The command-line view of the SVN rename is sketched below.)
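
To make the copy/delete behaviour concrete, this is roughly what the rename looks like from the Subversion command line (a sketch; the .java extension is assumed):

$ svn move MyClass1.java CloneClass1.java
A         CloneClass1.java
D         MyClass1.java

$ svn status
A  +      CloneClass1.java
D         MyClass1.java

The "+" against CloneClass1 shows that it was added with history copied from MyClass1; once this is committed, MyClass1 simply no longer exists in the repository.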

But back to the SVN example. You have edited the checked-out MyClass1 and are oblivious to your team member's edits. Not a problem, you might think, as an update from the repository would absorb the changes, or at the very least show you a conflict over the rename. Unfortunately not!

Here is where the nightmare that I'm calling Evil Clones appears.

The repository rightly updates your working copy with CloneClass1. But wait, your class MyClass1 is not only still in your App Navigator, but is no longer under version control!







If you think about it, this is correct -
  • SVN has replaced MyClass1 with CloneClass1. It has no recollection of the object MyClass1 so there is no merge of MyClass1 and CloneClass1.
  • You wouldn't want to lose the additional changes you have made to MyClass1 in your working copy - and it is not under version control; well, it can't be, because it no longer exists in the SVN repository
  • You can easily recognize that MyClass1 has been replaced by CloneClass1, so it's a simple job to move your changes to CloneClass1 - or is it? . . .
This is expected behavior from Subversion, and it seems that they recognize it as a bug. The example above is very simple, but what about when you have hundreds of different files in your working copy?

I'd be interested in how you work around this shortcoming in SVN. Here are some of my thoughts:
  • Don't rename, if you think you must, think again!
  • If you must, then refactoring or plain renaming of files (this is not specifically a Java problem, could be XML metadata, diagram or any other file type) should always be taken seriously in team development. In our JDeveloper development teams it's usual for the developer thinking about renames to email colleagues, check for dependencies and most definitely comment both the code, the check-in and the build notes to minimize the possible problems.
  • How renaming is handled in your team should be part of your defined development process. It should take account of your source code system, your build system and provide workarounds as necessary
  • If you are using JDeveloper and find that one of your objects appears to 'lose' its versioning status after an update - think rename!
  • Always ensure that you fully comment a rename on commit so that other users can use the log to find out where and why their object was deleted from the repository
In a forthcoming post I want to look more closely at how you detect and work to correct a rename problem. Specifically I want to use JDEV's offline database objects to do this. In the Java world developers may be more aware of the power of refactoring. But, as I said above, this is not a problem restricted to Java. I want to highlight the effect on XML and database objects.

At what level should I version my source code? - Alternate Takes

Susan Duncan - Fri, 2008-02-08 05:04
So now you know that I'm a Blues fan. One of the Blues' most revered artists was Robert Johnson. There are only a very few recordings of him - made between 1937 and '39 - just 29 different songs, but there are alternate takes bringing his total recordings to around 40 songs. Maybe he was striving for perfection, maybe different styles suited different audiences or even his moods, but these alternate takes add to the legacy he left. Some blues fans have their own favorites, but I love them all!

Recently I've been speaking with a number of JDeveloper/SVN users concerning the level at which they version their code. While I still maintain that the best way is to create a working copy at the Application level there are times and development process requirements when this isn't practical or possible.

Take this example: A team development with one Application split into many Projects. The Application is under source control using SVN. Their development process mandates commits are done per Project. There are a number of ways to achieve this, here are my suggestions for a team member to work on one or more projects and be selective about what is committed:

Alternate Take 1: Check-out each project as a separate working copy
  1. Create a new empty Application with no projects
  2. Use the SVN Navigator to select the root node of one of your projects in the repository
  3. Check this project out of the repository
  4. For the check-out destination, select the root folder of your newly created application
  5. You will get a warning on the Confirm Check Out dialog that you have selected a directory that is not empty - this is OK
  6. Continue to check out projects under your application root folder

  7. Now you can use JDeveloper's Commit Working Copy command on any node in a specific project and that project's changes will be committed. (A command-line sketch of the same layout follows this list.)
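Outside the IDE, the same layout can be produced with plain SVN commands, roughly as follows (a sketch; the repository URL, application and project names are made up):

$ cd /home/susan/MyNewApp          # the root folder of the new, empty application
$ svn checkout http://svnhost/repos/MyApp/trunk/Project1 Project1
$ svn checkout http://svnhost/repos/MyApp/trunk/Project2 Project2

Each project directory is now its own working copy, so it can be updated and committed independently of the others.
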
Alternate Take 2: Filter the Outgoing Pending Changes Window

Using JDeveloper 11g you can filter the Pending Changes window by application or project. In the screen shot below I've filtered my outgoing changes to myProject1. I can select one or more of these files and, from the context menu, choose Commit Working Copy or Commit. I'd choose to select all the files and Commit if I had checked out my Application as the working copy. I'd choose Commit Working Copy if I had used Alternate Take 1 to check out each project as a working copy. Unfortunately JDEV 10g doesn't have this filtering ability, so this is an 11g-only Alternate Take.





I'm sure there are other Alternate Takes to both the level at which you version your code and how you manage checkin/out. I'd love to hear the good practices and the pitfalls.

Displaying and reading the execution plans for a SQL statement

Oracle Optimizer Team - Thu, 2008-02-07 14:37
Generating and displaying the execution plan of a SQL statement is a common task for most DBAs, SQL developers, and performance experts, as it provides them information on the performance characteristics of a SQL statement. An execution plan shows the detailed steps necessary to execute a SQL statement. These steps are expressed as a set of database operators that consume and produce rows. The order of the operators and their implementation is decided by the query optimizer using a combination of query transformations and physical optimization techniques.

While the display is commonly shown in a tabular format, the plan is in fact tree-shaped. For example, consider the following query based on the SH schema (Sales History):



select prod_category, avg(amount_sold)
from sales s, products p
where p.prod_id = s.prod_id
group by prod_category;


The tabular representation of this query's plan is:



------------------------------------------
Id Operation Name
------------------------------------------
0 SELECT STATEMENT
1 HASH GROUP BY
2 HASH JOIN
3 TABLE ACCESS FULL PRODUCTS
4 PARTITION RANGE ALL
5 TABLE ACCESS FULL SALES
------------------------------------------

While the tree-shaped representation of the plan is:

          GROUP BY
              |
             JOIN
        ______|______
       |             |
    ACCESS         ACCESS
  (PRODUCTS)       (SALES)




When you read a plan tree you should start from the bottom up. In the above example begin by looking at the access operators (the leaves of the tree). In this case the access operators are implemented using full table scans. The rows produced by these table scans will be consumed by the join operator. Here the join operator is a hash join (other alternatives include nested-loops or sort-merge join). Finally the group-by operator, implemented here using hash (the alternative would be sort), consumes the rows produced by the join operator.

The execution plan generated for a SQL statement is just one of the many alternative execution plans considered by the query optimizer. The query optimizer selects the execution plan with the lowest cost. Cost is a proxy for performance: the lower the cost, the better the expected performance. The cost model used by the query optimizer accounts for the IO, CPU, and network usage of the query.

There are two different methods you can use to look at the execution plan of a SQL statement:


  1. EXPLAIN PLAN command - This displays an execution plan for a SQL statement without actually executing the statement.

  2. V$SQL_PLAN - A dictionary view introduced in Oracle 9i that shows the execution plan for a SQL statement that has been compiled into a cursor in the cursor cache.


Under certain conditions the plan shown when using EXPLAIN PLAN can be different from the plan shown using V$SQL_PLAN. For example, when the SQL statement contains bind variables the plan shown from using EXPLAIN PLAN ignores the bind variable values while the plan shown in V$SQL_PLAN takes the bind variable values into account in the plan generation process.

Displaying an execution plan has been made easier since the introduction of the dbms_xplan package in Oracle 9i and the enhancements made to it in subsequent releases. This package provides several PL/SQL procedures to display the plan from different sources:


  1. EXPLAIN PLAN command

  2. V$SQL_PLAN

  3. Automatic Workload Repository (AWR)

  4. SQL Tuning Set (STS)

  5. SQL Plan Baseline (SPM)


The following examples illustrate how to generate and display an execution plan for our original SQL statement using the different functions provided in the dbms_xplan package.

Example 1 Uses the EXPLAIN PLAN command and the dbms_xplan.display function.


SQL> EXPLAIN PLAN FOR
2 select prod_category, avg(amount_sold)
3 from sales s, products p
4 where p.prod_id = s.prod_id
5 group by prod_category;

Explained.



SQL> select plan_table_output
2 from table(dbms_xplan.display('plan_table',null,'basic'));

------------------------------------------
Id Operation Name
------------------------------------------
0 SELECT STATEMENT
1 HASH GROUP BY
2 HASH JOIN
3 TABLE ACCESS FULL PRODUCTS
4 PARTITION RANGE ALL
5 TABLE ACCESS FULL SALES
------------------------------------------


The arguments for dbms_xplan.display are:


  • plan table name (default 'PLAN_TABLE'),

  • statement_id (default null),

  • format (default 'TYPICAL')


More details can be found in $ORACLE_HOME/rdbms/admin/dbmsxpln.sql.

Example 2 Generating and displaying the execution plan for the last SQL statement executed in a session:



SQL> select prod_category, avg(amount_sold)
2 from sales s, products p
3 where p.prod_id = s.prod_id
4 group by prod_category;

no rows selected


SQL> select plan_table_output
2 from table(dbms_xplan.display_cursor(null,null,'basic'));

------------------------------------------
Id Operation Name
------------------------------------------
0 SELECT STATEMENT
1 HASH GROUP BY
2 HASH JOIN
3 TABLE ACCESS FULL PRODUCTS
4 PARTITION RANGE ALL
5 TABLE ACCESS FULL SALES
------------------------------------------


The arguments used by dbms_xplan.display_cursor are:


  • SQL ID (default null, null means the last SQL statement executed in this session),

  • child number (default 0),

  • format (default 'TYPICAL')


The details are in $ORACLE_HOME/rdbms/admin/dbmsxpln.sql.


Example 3 Displaying the execution plan for any other statement requires the SQL ID to be provided, either directly or indirectly:

  1. Directly:

    SQL> select plan_table_output from
    2 table(dbms_xplan.display_cursor('fnrtqw9c233tt',null,'basic'));


  2. Indirectly:

    SQL> select plan_table_output
    2 from v$sql s,
    3 table(dbms_xplan.display_cursor(s.sql_id,
    4 s.child_number, 'basic')) t
    5 where s.sql_text like 'select PROD_CATEGORY%';


Example 4 - Displaying an execution plan corresponding to a SQL Plan Baseline. SQL Plan Baselines have been introduced in Oracle 11g to support the SQL Plan Management feature (SPM). In order to illustrate such a case we need to create a SQL Plan Baseline first.


SQL> alter session set optimizer_capture_sql_plan_baselines=true;

Session altered.

SQL> select prod_category, avg(amount_sold)
2 from sales s, products p
3 where p.prod_id = s.prod_id
4 group by prod_category;

no rows selected

If the above statement has been executed more than once, a SQL Plan Baseline will be created for it, and you can verify this using the following query:


SQL> select SQL_HANDLE, PLAN_NAME, ACCEPTED
2 from dba_sql_plan_baselines
3 where sql_text like 'select prod_category%';

SQL_HANDLE PLAN_NAME ACC
------------------------------ ------------------------------ ---
SYS_SQL_1899bb9331ed7772 SYS_SQL_PLAN_31ed7772f2c7a4c2 YES


The execution plan for the SQL Plan Baseline created above can be displayed either directly or indirectly:

  1. Directly
    select t.* from
    table(dbms_xplan.display_sql_plan_baseline('SYS_SQL_1899bb9331ed7772',
    format => 'basic')) t


  2. Indirectly
    select t.*
    from (select distinct sql_handle
    from dba_sql_plan_baselines
    where sql_text like 'select prod_category%') pb,
    table(dbms_xplan.display_sql_plan_baseline(pb.sql_handle,
    null,'basic')) t;



The output of either of these two statements is:



----------------------------------------------------------------------------
SQL handle: SYS_SQL_1899bb9331ed7772
SQL text: select prod_category, avg(amount_sold) from sales s, products p
where p.prod_id = s.prod_id group by prod_category
----------------------------------------------------------------------------

----------------------------------------------------------------------------
Plan name: SYS_SQL_PLAN_31ed7772f2c7a4c2
Enabled: YES Fixed: NO Accepted: YES Origin: AUTO-CAPTURE
----------------------------------------------------------------------------

Plan hash value: 4073170114

---------------------------------------------------------
Id Operation Name
---------------------------------------------------------
0 SELECT STATEMENT
1 HASH GROUP BY
2 HASH JOIN
3 VIEW index$_join$_002
4 HASH JOIN
5 INDEX FAST FULL SCAN PRODUCTS_PK
6 INDEX FAST FULL SCAN PRODUCTS_PROD_CAT_IX
7 PARTITION RANGE ALL
8 TABLE ACCESS FULL SALES
---------------------------------------------------------


Formatting


The format argument is highly customizable and allows you to see as little (high-level) or as much (low-level) detail as you need or want in the plan output. The high-level options are:

  1. Basic
    The plan includes the operation, options, and the object name (table, index, MV, etc)
  2. Typical
    It includes the information shown in BASIC plus additional optimizer-related internal information such as cost, size, cardinality, etc. This information is shown for every operation in the plan and represents what the optimizer estimates the operation cost, the number of rows produced, and so on to be. It also shows the predicates evaluated by each operation. There are two types of predicates: ACCESS and FILTER. The ACCESS predicates for an index are used to fetch the relevant blocks because they apply to the search columns. The FILTER predicates are evaluated after the blocks have been fetched.
  3. All
    It includes the information shown in TYPICAL plus the lists of expressions (columns) produced by every operation, and the alias and query block name to which the operation belongs. The last two pieces of information can be used as arguments to add hints to the statement.
The low-level options allow the inclusion or exclusion of finer details, such as predicates and cost.
For example,


select plan_table_output
from table(dbms_xplan.display('plan_table',null,'basic +predicate +cost'));

-------------------------------------------------------
Id Operation Name Cost (%CPU)
-------------------------------------------------------
0 SELECT STATEMENT 17 (18)
1 HASH GROUP BY 17 (18)
* 2 HASH JOIN 15 (7)
3 TABLE ACCESS FULL PRODUCTS 9 (0)
4 PARTITION RANGE ALL 5 (0)
5 TABLE ACCESS FULL SALES 5 (0)
-------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
2 - access("P"."PROD_ID"="S"."PROD_ID")



select plan_table_output from
table(dbms_xplan.display('plan_table',null,'typical -cost -bytes'));

----------------------------------------------------------------------------
Id Operation Name Rows Time Pstart Pstop
----------------------------------------------------------------------------
0 SELECT STATEMENT 4 00:00:01
1 HASH GROUP BY 4 00:00:01
* 2 HASH JOIN 960 00:00:01
3 TABLE ACCESS FULL PRODUCTS 766 00:00:01
4 PARTITION RANGE ALL 960 00:00:01 1 16
5 TABLE ACCESS FULL SALES 960 00:00:01 1 16
----------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
2 - access("P"."PROD_ID"="S"."PROD_ID")

Note Section


In addition to the plan, the package displays notes in the NOTE section, such as that dynamic sampling was used during query optimization or that star transformation was applied to the query.
For example, if the table SALES did not have statistics, the optimizer would use dynamic sampling, and the plan display would report it as follows (note the '+note' detail in the format argument of the query):


select plan_table_output
from table(dbms_xplan.display('plan_table',null,'basic +note'));

------------------------------------------
Id Operation Name
------------------------------------------
0 SELECT STATEMENT
1 HASH GROUP BY
2 HASH JOIN
3 TABLE ACCESS FULL PRODUCTS
4 PARTITION RANGE ALL
5 TABLE ACCESS FULL SALES
------------------------------------------

Note
-----
- dynamic sampling used for this statement


Bind peeking



The query optimizer takes into account the values of bind variables when generating an execution plan. This is what is generally called bind peeking. See the first post in this blog about the concept of bind peeking and its impact on the plans and the performance of SQL statements.
As stated earlier, the plan shown in V$SQL_PLAN takes into account the values of bind variables while the one shown by EXPLAIN PLAN does not. Starting with 10gR2, the dbms_xplan package allows the display of the bind variable values used to generate a particular cursor/plan. This is done by adding '+peeked_binds' to the format argument when using display_cursor().
This is illustrated with the following example:



variable pcat varchar2(50)
exec :pcat := 'Women'

select PROD_CATEGORY, avg(amount_sold)
from sales s, products p
where p.PROD_ID = s.PROD_ID
and prod_category != :pcat
group by PROD_CATEGORY;

select plan_table_output
from table(dbms_xplan.display_cursor(null,null,'basic +PEEKED_BINDS'));

------------------------------------------
Id Operation Name
------------------------------------------
0 SELECT STATEMENT
1 HASH GROUP BY
2 HASH JOIN
3 TABLE ACCESS FULL PRODUCTS
4 PARTITION RANGE ALL
5 TABLE ACCESS FULL SALES
------------------------------------------

Peeked Binds (identified by position):
--------------------------------------

1 - :PCAT (VARCHAR2(30), CSID=2): 'Women'
