Feed aggregator

jQuery: 3rd Birthday and new release 1.3

Oracle Apex Notebook - Wed, 2009-01-14 18:37
jQuery is the way of the future for JavaScript framework integration with Oracle Apex. In recent months, several Oracle Apex bloggers have posted nice examples that take Apex application interfaces to another level by taking advantage of jQuery functionality. Today, on the project’s 3rd birthday, the jQuery development team announced the release of jQuery v1.3, the latest and
Categories: Development

Using Oracle in a Windows VM

Edward Whalen - Tue, 2009-01-13 16:56
Microsoft recently introduced Hyper-V with Windows Server 2008. In order to determine the overhead of running Oracle on Hyper-V, we performed a benchmark on identical hardware using both Hyper-V and a non-virtualized server running Oracle 10g for Windows. The resulting whitepaper is available for download from our website: www.perftuning.com.

With Windows Server 2008, everything needed to support database server virtualization is available as an integral part of the operating system – Windows Server 2008 Hyper-V. This whitepaper demonstrates the performance capabilities of Oracle 10g on Hyper-V. It also provides several best practices and resources essential to the virtualization of Oracle database workloads.

Please visit our website and take a look.

So, back to Oracle then?

Mark A. Williams - Sat, 2009-01-10 19:09

Next week will be my last at my current employer, as I have recently decided to return to Oracle Corp.

I will be going back to the world of C and UNIX derivatives, having accepted a position on the non-Windows PLBDE team.

It's hard to say what that will mean for this blog, which has admittedly been pretty quiet on my end lately, but perhaps I will branch out from my typical topics of ODP.NET and Oracle on Windows. We'll see.

You've Got to Fight for Your Invoker's Rights

Tahiti Views - Fri, 2009-01-09 18:35
This post is about a PL/SQL feature that doesn't get enough respect: "invoker's rights". First off, what's its real name? Depending on the source, you'll see the feature name spelled "invoker's rights", "invokers' rights", or "invoker rights". That makes a difference -- you'll get different results in Google depending on what combination of singular, plural, and possessive you use. And to be

Using Quartz Scheduler on WebLogic Server

Debu Panda - Fri, 2009-01-09 12:47

I'm not a Quartz expert, but I keep getting emails with Quartz questions because one of my blog posts was published as a featured article at TheServerSide.com 4-5 years ago. I recently got some questions on using Quartz on WebLogic Server, so I modified the original example to run on WebLogic Server.


You can download this sample from here.

The Example

1. This sample uses TestEJB, a stateless session bean with a method (yourMethod) that needs to be scheduled as a job/service.

2. The execution of this method is scheduled as a job from QuartzManagerBean, which is an EJB 3.0 session bean.

3. A generic servlet, QuartzServlet, uses dependency injection to invoke the QuartzManager from its init() method.

4. The QuartzServlet is loaded automatically using the load-on-startup mechanism, so the job is submitted as soon as the application is deployed (see the sketch after this list).
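The real code is in the downloadable sample; purely for illustration, here is a minimal Java sketch of how these four pieces could fit together. The class and method names (QuartzServlet, QuartzManagerBean, TestEJB, yourMethod) come from the steps above, but the interface, the job class, and every signature are my assumptions, not the actual sample code.

// Hypothetical sketch only -- names taken from the post, bodies assumed.
import javax.ejb.EJB;
import javax.ejb.Stateless;
import javax.servlet.GenericServlet;
import javax.servlet.ServletException;
import javax.servlet.ServletRequest;
import javax.servlet.ServletResponse;
import org.quartz.CronTrigger;
import org.quartz.Job;
import org.quartz.JobDetail;
import org.quartz.JobExecutionContext;
import org.quartz.JobExecutionException;
import org.quartz.Scheduler;
import org.quartz.impl.StdSchedulerFactory;

interface QuartzManager {
    void scheduleJob(String cronExpression);
}

public class QuartzServlet extends GenericServlet {

    @EJB                                 // step 3: dependency injection of the session bean
    private QuartzManager quartzManager;

    @Override
    public void init() throws ServletException {
        // Step 4: load-on-startup loads this servlet at deployment time, so
        // the job is submitted as soon as the application is deployed. The
        // cron expression matches the one in the console output below.
        quartzManager.scheduleJob("0 0/5 * * * ?");
    }

    @Override
    public void service(ServletRequest req, ServletResponse res) {
        // Nothing to serve; this servlet exists only to bootstrap the job.
    }
}

@Stateless
class QuartzManagerBean implements QuartzManager {
    public void scheduleJob(String cronExpression) {
        try {
            // Step 2: schedule the job using the Quartz 1.x API.
            Scheduler scheduler = StdSchedulerFactory.getDefaultScheduler();
            scheduler.start();
            JobDetail job = new JobDetail("reportJob", "reportGroup", ReportJob.class);
            CronTrigger trigger = new CronTrigger("reportTrigger", "reportGroup", cronExpression);
            scheduler.scheduleJob(job, trigger);
        } catch (Exception e) {
            throw new RuntimeException("Could not schedule Quartz job", e);
        }
    }
}

class ReportJob implements Job {
    public void execute(JobExecutionContext context) throws JobExecutionException {
        // Step 1: the real sample would invoke TestEJB.yourMethod() here.
        System.out.println("Generating report");
    }
}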

How to Run the Example

1. You can download the code example from here.

2. The example is designed to be deployed to the examplesServer. If you want to deploy to a different domain/server, you have to make the appropriate changes to common.xml.


3. Start your WebLogic Server. Unlike OC4J, which requires -userThreads, WebLogic needs no special parameter or flag to run the example.


4. Set your environment variables. If you are using the examplesServer, you can run the %BEA_HOME%\wlserver_10.3\samples\domains\wl_server\setExamplesEnv.bat script to set the environment variables required to compile and deploy the application.

5. To compile and deploy, just run

ant

You will see the application deploy in the console where WebLogic Server was started:

Jan 9, 2009 10:45:54 AM org.quartz.plugins.management.ShutdownHookPlugin initialize

INFO: Registering Quartz shutdown hook.

Jan 9, 2009 10:45:54 AM org.quartz.simpl.RAMJobStore initialize

INFO: RAMJobStore initialized.

Jan 9, 2009 10:45:54 AM org.quartz.impl.StdSchedulerFactory instantiate

INFO: Quartz scheduler 'WebLogicQuartzScheduler' initialized from default resource file in Quartz package: 'quartz.properties'

Jan 9, 2009 10:45:54 AM org.quartz.impl.StdSchedulerFactory instantiate

INFO: Quartz scheduler version: 1.3.0

Jan 9, 2009 10:45:54 AM org.quartz.core.QuartzScheduler start

INFO: Scheduler WebLogicQuartzScheduler_$_one started.

Invoking EJB to schedule Job ..

0 0/5 * * * ?

Scheduling Job ..

0 0/5 * * * ?

Timer scheduled now ..

Generating report

Report Generation Completed


Hope this helps!

School's Out

Mary Ann Davidson - Fri, 2009-01-09 08:09

When I look back on my career, particularly my educational background, I find it interesting to consider what has stuck with me, what I have forgotten, and what continues to be useful. I have sort of a strange background for a security weenie – an undergraduate mechanical engineering degree and an MBA. I don’t talk about the MBA all that much, partly because I know plenty of overblown egos who can’t wait to trot out the fact that they have an MBA from Harvard, Wharton, Stanford, or (insert name of prestigious MBA program here). Second, someone who worked for me a while ago got an MBA and proceeded to treat his directs really poorly with newly-minted MBA hubris that was astonishing (I recognize the disease because I had a bad case of it once, myself). He thought that MBA stood for Me Before All and everyone who worked for him was there to advance his career. Third, but mostly first, Impressive Academic Credentials are increasingly unimportant the longer you are in the working world; what you actually accomplish is far more important than how well educated you are.

That said, I do think that the education I got in my MBA program gave me some “tricks of the trade” I still find inordinately useful. Now, not everything one learns at universities is timeless. Particularly in the liberal arts area, there are reinterpretations and revisions of widely held knowledge going on all the time. I guess that especially for academic areas that have been strip-mined more than Appalachia, the way you create academic street creds for yourself is by regularly throwing out babies after drowning your colleagues in the bathwater. My father (who has a PhD in civil engineering and has been a university professor and administrator) liked to tell the joke about an engineering professor and an economics professor conversing on a campus. “I hear you give the same exam each semester in the economics department,” says the engineering guy. “Our students get hold of the old exams. So, how on earth do you get away with it in the econ department?” “Easy,” says the economics professor. “We just change the answers.” Now, it’s not totally true that economics changes all that fast (although I hope we have driven a stake through John Maynard Keynes* and all his sycophants once and for all). But economic theories do change.

That said, among the most useful classes I took at business school, and one of the areas I refer to often in my work, are the economics classes. The material is timeless, whether we are talking about micro- or macroeconomics. It’s particularly important in talking about public policy issues (yes, even around security). For example, I have talked about (and there are many who have) whether there is a market failure in information security. The discussion then becomes: if there is a market failure, how do you correct it? How do you make markets work efficiently (e.g., to remove externalities or “social costs”)? There are also a lot of offshoots of economics that have applications in other areas (game theory, for example, which was tremendously influential in Cold War strategy). I have thought about game theory in various security discussions – in particular, discussions in which industry can maximize our collective best interests or “payoff” by hanging together, like the prisoners of the prisoners’ dilemma, and yet one “defector” can achieve a higher individual payoff by selling other vendors down the game theory river. Alas, I have seen one vendor do just that, and now everybody is worse off than if the vendor had not “defected.”

Financial theory is pretty useful, too. For example, the idea that you should disregard sunk costs in future decision making, a fancy way of saying ignore how much money you’ve already thrown at a problem in analyzing whether to go forward with it. I use that all the time, especially when someone starts in with “we’ve worked so long/hard/whatever on project X.” You have to look at the expectation of success going forward and what it will cost to get there (and other options for those resources) because sunk costs are by definition non-recoverable no matter what you do. You’ve spent the money, you can’t get it back: now what?

Some of those economic and financial market lessons have been reiterated in spades in looking at the current financial market meltdown. One of them is that “diversification reduces risk” in terms of a portfolio – the financial market theory of not having all your financial eggs in one basket. The corollary is that some kinds of risk are not diversifiable (e.g., if the entire market craps out, everything sinks together and diversification doesn’t help you, as many of us who’ve opened our 401K statements recently are all too painfully aware). You really wish, when reading about people who became stupidly overleveraged (a fancy term for living way beyond your means, and all on credit), that they had some basic comprehension of economics. Not merely for their own financial well-being, but so that people have realistic expectations of personal responsibility and the limits of government. (Hint: governments cannot create wealth, though they can print money. If they print enough money with no actual wealth behind it, they devalue the money you have. Getting someone else besides you to pay for your lifestyle is nice work if you can get it, but if you keep robbing Peter to give lazybones Paul a free ride, Peter will find ways to work less or otherwise revolt.) None of this is rocket science, yet so many people never learned basic financial or economic principles. There really is no such thing as a free lunch.

Business law is another area I find incredibly useful. Granted, a lot of my pragmatic understanding of contract law in particular I learned as a contract administrator in the Navy (nothing like having a $200,000 claim hinge on the meaning of the word “automatic,” or the performance of a contract hinge on a misplaced comma, both of which I have experienced). One of the big lessons I learned is the legal equivalent of RTFM: let’s call it RTFC or “Read the Friggin’ Contract.” In particular, if you are negotiating a contract, realize that what you agree to with your counterpart has to survive the participants who drew up the contract (that’s why things are written down). The words “well, everybody knows what we meant by that” absolutely never enter into legal parlance. The legal equivalent of Murphy’s Law is that if something in a contract is ambiguous, it will be misinterpreted by at least one party to the contract. You need to read it (over and over and over) through all revisions to avoid expensive mistakes. (It took me a year to negotiate a licensing agreement with a partner, but at the end of the year, we had a deal and there were no disputes over the terms during the life of the contract. My counterpart at the other company is still a great gal pal to this day.)

The other lesson I learned is that contracts also are not good vehicles for creating trust between parties. If someone is a slimy so-and-so, you cannot write a contract that will make them less of a slimy so-and-so. A contract will not create trust; it will tell you who has to do what under the terms of the contract, and possibly spell out remedies if parties do not perform under the contract. That assumes that you can actually get a remedy out of someone that is timely or meaningful. I am always astonished at someone whose going-in position is “if it doesn’t work, we can sue.” (Kind of like people who get married with the expectation, “if it doesn’t work out, we can get a divorce.” The formulation of an exit strategy on the day you ink a deal does not bode well for it, so why are you signing on the dotted line?) If someone rips off your intellectual property (for example, in a geographic area with not a whole lot of respect for intellectual property rights), good luck on a “remedy” that will make you whole. “Don’t take that bet” is sometimes a better strategy than counting on a contract to make you whole if something goes wrong, especially if you have a high expectation that you are going to have a problem.

Another of the really useful constructs I got from B-school was my quantitative methods class. Quantitative methods are a way of putting more numeric rigor around decision making, whether it is analyzing a business strategy or problem solving. For example, you may have an optimization problem you are working on, like “the truck routing problem.” You have so many trucks, they need to make deliveries, you want to find the most efficient route but without delaying deliveries too long, and so on. You have a number of constraints you have to work with – you only have so many trucks, so many deliveries, so much distance, and time limits within which you have to deliver things. If nothing else, you learn that while sometimes you can add resource (more trucks can make more deliveries in shorter time), you still have constraints you can’t necessarily move (houses are not going to move closer together for your convenience, and you can’t make time go faster).

Particularly when I find myself dealing with (and they mean well) academics, think-tankers or government wonks who want to create public policy, I sometimes need to remind them of quantitative methods 101: resources are always constrained. Sometimes, people who don’t actually have to earn a profit or make tradeoffs or implement their own public policies find lots of “really important” ways for other people to spend money in pursuit of A Noble Purpose. The reality is that sometimes you don’t have any more resource, and even if you did, there might be a better way to use the resource that would lead to a public policy (or other) good, like better return for investors. In discussing public policy I try to talk about what the real-world constraints are, and inevitably I ask some of the government folks questions like, “do you want absolute perfection, or do you want to give industry a way to say Yes, where they can make progress, improve what they are doing in a cost effective way that actually makes the problem better?” Something is almost always better than nothing. You can either listen, understand what people’s constraints are and allow for them, or create a problem statement that says, “I want to have my cake, eat it too, have it be organic, sustainable, biodegradable, delicious, rich, and have zero calories: in fact, I want to lose weight while eating it.” People who understand optimization problems know you’d do well if you got three of those conditions satisfied. You cannot satisfy all of them.

I even use the quantitative methods approach (finally!) in asking for more headcount. I am embarrassed to admit that it took me years at Oracle to figure out how to get more headcount, and the method I finally hit on (more accurately, the one I finally embraced after enough people told me the best way to do it – I ignored the advice and learned the hard way that they were right) is a simple exercise. I collate, in an organized way, “what we are doing now, what we are not doing now, and what we can do with more resource.” “What we get with more resource” is along the lines of (in priority order) additional work my team could take on and the value to the company of us doing that (sometimes the value is cost avoidance or something tangible we can (roughly) quantify). I am hardnosed in that, if people who work for me who run various areas do not make a good enough case to me for their headcount requests, I draw the “wish list” line above their headcount requests. In going to my boss, I will say, “here is what we get with more resource (in priority order), meaning if I got one headcount, I’d add one person to do A, if another person, I would put them on B, then a third person to do C, here is the value to the company for adding those bodies, and any headcount beyond that should go to someone else in your organization who can spend the resource better.” Meaning, my goal is not empire building, it is adding enough resource to use it to add value (or reduce cost), and no more.

There is always a lot more to learn. In writing this blog entry, I realized I had forgotten a whole lot about economics and game theory (and I didn’t relearn them in the time it took to write the blog). But, one of my absolutely favorite hangouts – the Ketchum Community Library (they say I am their best customer) is right down the hill, and I will be trotting down there later today to see what books they have on economics and game theory.

The beginning of a new year is a bright and shiny opportunity to do things differently. While time is one of our most constrained resources (I may stay up late reading myself blind, but I still need some sleep), I always set a New Year’s resolution or two around learning a new skill, or beefing up a knowledge area. It doesn’t always have to be in computer security, either. Sometimes learning something new (or revisiting something old) can give a fresh perspective to your day job.

Hau’oli Makahiki Hou (Happy New Year!)

* FDR, in my opinion, did bubkes to get us out of the Great Depression and should not get credit for it; there was double-digit unemployment into 1942. On the contrary, the Second World War ended the Great Depression, and we can thank James Forrestal (who architected the unprecedented industrial gear shift to war production) for helping end the Great Depression far more than FDR.


Book of the Week

A Roof Against the Rain by JoEllen Collins

This is a public service announcement: I know JoEllen – she is a friend of mine. She also happens to be a cracking good writer. I loved the book – which is about the loss of a loved one – and am recommending it broadly. I think it would be a particularly good “book group book” (I recommended it to my book group). Her characters are believable and there is a moral center (so few dreary modern works of fiction have one). It’s also just a fun read, and it takes place in Sun Valley, Idaho, another reason I liked the book.

http://www.buy.com/prod/a-roof-against-the-rain/q/loc/106/208184025.html

Other good reads:

Why the Allies Won by Richard Overy

We forget that the victory of the Allies over the Axis in WWII was not a done deal at all but quite a close thing. This book describes the factors involved in that victory – leadership, economies, technology, among others.

http://www.amazon.com/s/ref=nb_ss_b_0_18?url=search-alias%3Dstripbooks&field-keywords=why+the+allies+won+by+richard+overy&x=0&y=0&sprefix=why+the+allies+won

The New Dealer’s War: FDR and the War Within World War II by Thomas Fleming

A very different “take” on FDR, and an eye-opening one. From FDR’s (likely) leaking of the Rainbow Plan to his ignoring of the Katyn Massacre, FDR hagiography is quite deftly shattered by this book.

http://www.amazon.com/New-Dealers-War-Within-World/dp/0465024653

For more information

I am completely stoked that my favorite Hawaiian music group Maunalua (together with Grammy winner John Cruz) is playing at an inaugural lu’au at the Hotel Monaco in DC on January 20. Nā mele no ka ‘oi!

http://inauguralluau.eventbrite.com/

The Prisoner’s Dilemma is an interesting book on the influence of game theory on the Cold War.

http://www.amazon.com/Prisoners-Dilemma-William-Poundstone/dp/038541580X/ref=sr_1_1?ie=UTF8&s=books&qid=1231442535&sr=1-1

Oracle's Linux Contributions

Sergio's Blog - Fri, 2009-01-09 06:40

This recently posted page summarizes some of the contributions Oracle's Linux developers make to Linux and the community in general, including:

  • OCFS2
  • BTRFS
  • Libstdc++
  • NFS on IPv6
  • RDS
  • T10-DIF Data Integrity
  • Etc.

I speak with prospects, customers, and partners on a regular basis, and when I tell them that we have people at Oracle whose sole job is to work on open source software, they are often very surprised. I'm glad this page was posted.

Categories: DBA Blogs

Plan regressions got you down? SQL Plan Management to the rescue!

Oracle Optimizer Team - Thu, 2009-01-08 17:58
Part 1 of 4: Creating SQL plan baselines

Do you ever experience performance regressions because an execution plan has changed for the worse? If you have, then we have an elegant solution for you in 11g called SQL Plan Management (SPM). The next four posts on our blog will cover SPM in detail. Let's begin by reviewing the primary causes for plan changes.

Execution plan changes occur due to various system changes. For example, you might have (manually or automatically) updated statistics for some objects, or changed a few optimizer-related parameters. A more dramatic change is a database upgrade (say from 10gR2 to 11g). All of these changes have the potential to cause new execution plans to be generated for many of your SQL statements. Most new plans are obviously improvements because they are tailored to the new system environment, but some might be worse, leading to performance regressions. It is the latter that cause sleepless nights for many DBAs.

DBAs have several options for addressing these regressions. However, what most DBAs want is simple: plans should only change when they will result in performance gains. In other words, the optimizer should not pick bad plans, period.

This first post in our series describes the concepts of SQL Plan Management and how to create SQL plan baselines. The second part will describe how and when these SQL plan baselines are used. The third part will discuss evolution, the process of adding new and improved plans to SQL plan baselines. Finally, the fourth part will describe user interfaces and interactions with other Oracle objects (like stored outlines).


Introduction

SQL Plan Management (SPM) allows database users to maintain stable yet optimal performance for a set of SQL statements. SPM incorporates the positive attributes of plan adaptability and plan stability, while simultaneously avoiding their shortcomings. It has two main objectives:
  1. prevent performance regressions in the face of database system changes
  2. offer performance improvements by gracefully adapting to database system changes
A managed SQL statement is one for which SPM has been enabled. SPM can be configured to work automatically, or it can be manually controlled, either wholly or partially (described later). SPM helps prevent performance regressions by enabling the detection of plan changes for managed SQL statements. For this purpose, SPM maintains, on disk, a plan history consisting of the different execution plans generated for each managed SQL statement. An enhanced version of the Oracle optimizer, called the SPM aware optimizer, accesses, uses, and manages this information, which is stored in a repository called the SQL Management Base (SMB).

The plan history enables the SPM aware optimizer to determine whether the best-cost plan it has produced using the cost-based method is a brand new plan or not. A brand new plan represents a plan change that has potential to cause performance regression. For this reason, the SPM aware optimizer does not choose a brand new best-cost plan. Instead, it chooses from a set of accepted plans. An accepted plan is one that has been either verified to not cause performance regression or designated to have good performance. A set of accepted plans is called a SQL plan baseline, which represents a subset of the plan history.

A brand new plan is added to the plan history as a non-accepted plan. Later, an SPM utility verifies its performance, and keeps it as a non-accepted plan if it will cause a performance regression, or changes it to an accepted plan if it will provide a performance improvement. The plan performance verification process ensures both plan stability and plan adaptability.

The figure below shows the SMB containing the plan history for three SQL statements. Each plan history contains some accepted plans (the SQL plan baseline) and some non-accepted plans.





You can create a SQL plan baseline in several ways: using a SQL Tuning Set (STS); from the cursor cache; exporting from one database and importing into another; and automatically for every statement. Let's look at each in turn. The examples in this blog entry use the Oracle Database Sample Schemas so you can try them yourself.


Creating SQL plan baselines from STS

If you are upgrading from 10gR2 or have an 11g test system, you might already have an STS containing some or all of your SQL statements. This STS might contain plans that perform satisfactorily. Let's call this STS MY_STS. You can create a SQL plan baseline from this STS as follows:

SQL> variable pls number;
SQL> exec :pls := dbms_spm.load_plans_from_sqlset(sqlset_name => 'MY_STS', -
>        basic_filter => 'sql_text like ''select%p.prod_name%''');


This will create SQL plan baselines for all statements that match the specified filter.
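A quick way to verify what was created is to query the DBA_SQL_PLAN_BASELINES view. A sketch (the column list is trimmed, and the filter mirrors the one above):

SQL> select sql_handle, plan_name, origin, enabled, accepted
2    from dba_sql_plan_baselines
3    where sql_text like 'select%p.prod_name%';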

Creating SQL plan baselines from cursor cache

You can automatically create SQL plan baselines for any cursor that is currently in the cache as follows:

SQL> exec :pls := dbms_spm.load_plans_from_cursor_cache( -
>        attribute_name => 'SQL_TEXT', -
>        attribute_value => 'select%p.prod_name%');


This will create SQL plan baselines for all statements whose text matches the specified string. Several overloaded variations of this function allow you to filter on other cursor attributes.
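For instance, one overload accepts a SQL_ID, which is handy when you want to load the plan of a single specific cursor. A sketch (the SQL_ID value here is made up):

SQL> exec :pls := dbms_spm.load_plans_from_cursor_cache( -
>        sql_id => 'f23qnpn8d0gzw');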

Creating SQL plan baselines using a staging table

If you already have SQL plan baselines (say on an 11g test system), you can export them to another system (a production system for instance).

First, on the test system, create a staging table and pack the SQL plan baselines you want to export:

SQL> exec dbms_spm.create_stgtab_baseline(table_name => 'MY_STGTAB', -
>        table_owner => 'SH');

PL/SQL procedure successfully completed.

SQL> exec :pls := dbms_spm.pack_stgtab_baseline( -
>        table_name => 'MY_STGTAB', -
>        table_owner => 'SH', -
>        sql_text => 'select%p.prod_name%');


This will pack all SQL plan baselines for statements that match the specified filter. The staging table, MY_STGTAB, is a regular table that you should export to the production system using Data Pump export.
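Moving the table might look something like this (a sketch; the directory object, dump file name, and schema are placeholders):

$ expdp sh DIRECTORY=data_pump_dir DUMPFILE=spm_stgtab.dmp TABLES=my_stgtab

$ impdp sh DIRECTORY=data_pump_dir DUMPFILE=spm_stgtab.dmp TABLES=my_stgtab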

On the production system, you can now unpack the staging table to create the SQL plan baselines:

SQL> exec :pls := dbms_spm.unpack_stgtab_baseline( -
>        table_name => 'MY_STGTAB', -
>        table_owner => 'SH', -
>        sql_text => 'select%p.prod_name%');


This will unpack the staging table and create SQL plan baselines. Note that the filter for unpacking the staging table is optional and may be different than the one used during packing. This means that you can pack several SQL plan baselines into a staging table and selectively unpack only a subset of them on the target system.

Creating SQL plan baselines automatically

You can create SQL plan baselines for all repeatable statements automatically by setting the parameter optimizer_capture_sql_plan_baselines to TRUE (default is FALSE). The first plan captured for any statement is automatically accepted and becomes part of the SQL plan baseline, so enable this parameter only when you are sure that the default plans are performing well.

You can use the automatic plan capture mode when you have upgraded from a previous database version. Set optimizer_features_enable to the earlier version and execute your workload. Every repeatable statement will have its plan captured, thus creating SQL plan baselines. You can reset optimizer_features_enable to its default value after you are sure that all statements in your workload have had a chance to execute.
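Put together, the upgrade scenario looks roughly like this (the version strings are examples only):

SQL> alter system set optimizer_features_enable = '10.2.0.4';
SQL> alter system set optimizer_capture_sql_plan_baselines = true;

-- run the workload until every repeatable statement has executed at least twice

SQL> alter system set optimizer_capture_sql_plan_baselines = false;
SQL> alter system set optimizer_features_enable = '11.1.0.6';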

Note that this automatic plan capture occurs only for repeatable statements, that is, statements that are executed at least twice. Statements that are only executed once will not benefit from SQL plan baselines since accepted plans are only used in subsequent hard parses.

The following example shows a plan being captured automatically when the same statement is executed twice:

SQL> alter session set optimizer_capture_sql_plan_baselines = true;

Session altered.

SQL> var pid number
SQL> exec :pid := 100;

PL/SQL procedure successfully completed.

SQL> select p.prod_name, s.amount_sold, t.calendar_year
2    from sales s, products p, times t
3    where s.prod_id = p.prod_id
4      and s.time_id = t.time_id
5      and p.prod_id < :pid;

PROD_NAME AMOUNT_SOLD CALENDAR_YEAR
--------- ----------- -------------
...
9 rows selected.

SQL> select p.prod_name, s.amount_sold, t.calendar_year
2    from sales s, products p, times t
3    where s.prod_id = p.prod_id
4      and s.time_id = t.time_id
5      and p.prod_id < :pid;

PROD_NAME AMOUNT_SOLD CALENDAR_YEAR
--------- ----------- -------------
...
9 rows selected.

SQL> alter session set optimizer_capture_sql_plan_baselines = false;

Session altered.

Automatic plan capture will not occur for a statement if a stored outline exists for it and is enabled and the parameter use_stored_outlines is TRUE. In this case, turn on incremental capture of plans into an STS using the function capture_cursor_cache_sqlset() in the DBMS_SQLTUNE package. After you have collected the plans for your workload into the STS, manually create SQL plan baselines using the method described earlier. Then, disable the stored outlines or set use_stored_outlines to FALSE. From now on, SPM will manage your workload and stored outlines will not be used for those statements.
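A sketch of that workflow (the set name and time limits are arbitrary): create an STS, capture plans from the cursor cache incrementally, then load the captured plans as baselines with the STS method shown earlier.

SQL> exec dbms_sqltune.create_sqlset(sqlset_name => 'MY_STS');

SQL> exec dbms_sqltune.capture_cursor_cache_sqlset( -
>        sqlset_name     => 'MY_STS', -
>        time_limit      => 3600, -
>        repeat_interval => 300);

SQL> exec :pls := dbms_spm.load_plans_from_sqlset(sqlset_name => 'MY_STS');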

In this article, we have seen how to create SQL plan baselines. In the next, we will describe the SPM aware optimizer and how it uses SQL plan baselines.

Categories: DBA Blogs, Development

What is Direct Data Placement

Padraig O'Sullivan - Tue, 2009-01-06 01:02
I'm currently studying Oracle's white paper on Exadata and came across the following paragraph: "Further, Oracle's interconnect protocol uses direct data placement (DMA - direct memory access) to ensure very low CPU overhead by directly moving data from the wire to database buffers with no extra data copies being made." This got me wondering what direct data placement is. First off, the interconnect

Oracle launches Rich Enterprise Application (REA) site

JHeadstart - Sun, 2009-01-04 18:59

This site is targeted at developers who want to build highly interactive web applications that deliver a desktop-quality user interface. Oracle's REA technologies consist of Oracle ADF Faces Rich Client and Oracle WebCenter. URL: http://rea.oracle.com.

When you click the "Explore Now" button, you see a very nice tree with topics, where you can flip each node to see a small explanation, or expand it to see new nodes with subtopics.

Categories: Development

How important is having Unix/Linux OS Knowledge for Oracle DBAs?

Sabdar Syed - Sun, 2009-01-04 07:37
Hello,

I want to wish you all a very Happy New Year 2009 before discussing “How important is having Unix/Linux OS knowledge for Oracle DBAs?”

The main reason for discussing this topic is to encourage novice or junior DBAs to get command of the Unix/Linux operating system as part of database administration. When they are asked how comfortable they are working as a DBA in a Unix/Linux environment, their responses run along the following lines.

“They are pretty much comfortable working with and administering databases on the Windows operating system, as they were trained on Windows during their DBA training.”

“They are not confident enough in using Unix/Linux OS commands to administer databases.”

“We haven’t yet had the chance to maintain a database in a UNIX environment such as Solaris, HP-UX, IBM AIX, etc.”

“Working in a Windows environment is easier than working in a Unix/Linux environment, since on Windows you only need to select files, copy/cut & paste them from one location to another, delete files, or stop services with a simple click, whereas in Unix/Linux you must type commands manually for the same tasks.”

... and so on.


If anyone has the above impression about administering databases on Unix/Linux OS, then here are my suggestions.

  • First of all, it’s not compulsory for DBAs to have operating system administration skills. But working knowledge of both Windows and Unix/Linux is always a value-add for Oracle DBAs.
  • The majority of production database administration around the globe happens on Unix/Linux based operating systems, i.e. very large and mission-critical databases are maintained on Unix/Linux.
  • Obviously, all big companies want database administrators with good UNIX scripting skills and working knowledge of UNIX based environments, which means that Oracle DBAs with a good working knowledge of UNIX are in demand in the IT market.
  • Even though a DBA’s intervention is manual in a Unix environment - creating directories and files, setting up environment variables, and starting/stopping services - it is just as manageable as a Windows environment.
  • Administration and OS commands are more or less similar on any flavor of Unix/Linux, i.e. Solaris, HP-UX, IBM AIX, Red Hat, Fedora, Ubuntu, etc. So start practicing Oracle database administration on Unix/Linux based systems.
  • It may be difficult to install a commercial Unix OS on a personal computer for practice. So download the free Linux distribution (Oracle Unbreakable Linux) from Oracle, install and configure it on your PC, and start practicing on Linux. To practice on different Unix/Linux flavors, install VMware on your PC, where you can set up multiple flavors of Unix/Linux.
  • Note: Of course, this post is not about showing the differences between Windows and Unix/Linux operating systems (every OS has its own significance), but about encouraging Oracle DBAs to learn the Unix/Linux commands required to administer databases.

Quick Links:

Download: Enterprise Linux and Oracle VM

Information: Oracle and Linux

Installation Guides: Oracle 10g Installation on Unix/Linux OS

Books: Oracle9i UNIX Administration Handbook

Linux Recipes for Oracle DBAs

Training: Oracle Database 10g: Managing Oracle on Linux for DBAs

Exam: Oracle Database 10g: Managing Oracle on Linux Certified Expert Oracle

** Your comments and inputs on this post are welcome. **

Regards,

Sabdar Syed

Employee Directory

RameshKumar Shanmugam - Fri, 2009-01-02 21:21
Most companies have a separate team or department to handle Organization Development or Organization Design (OD). These companies use software packages to design a new org chart or to reorganize an existing one. The packages either pull their data from the ERP system or HRIS, or are maintained as separate standalone applications that employees access via the intranet. Once the organization design is done, the changes are made in the ERP or HRIS to reflect the new org design.

In this blog I try to explain how we can use the Employee Directory function in Oracle, which allows employees to view the company's organization structure and browse employee details. Small companies that do not have an org charting tool can use this functionality in Oracle to let employees view the organization structure, their subordinates, and the reporting relationships within the organization.

Setup:
Attach the function "Proxy Page" to the Self Service Global Menu with a prompt

The data can be refreshed using the concurrent request "Refresh Employee Directory" with the following parameters:


Try it Out!
Categories: APPS Blogs

More information sources on ActiveRecord Oracle enhanced adapter

Raimonds Simanovskis - Fri, 2009-01-02 16:00

I’m glad to see that there are many users of the ActiveRecord Oracle enhanced adapter, and therefore comments on this blog are no longer the best way to communicate with Oracle enhanced adapter users. So I have created several other information exchange places which I hope will be more effective.

The latest addition is the wiki pages on GitHub, where I have put usage descriptions and examples as well as some troubleshooting hints that were previously scattered across the README file and different blog posts. This is the first place to look for information about the Oracle enhanced adapter. And if you are a GitHub user, you can correct mistakes or add additional content yourself.

If you have a question or would like to discuss some feature, you can use the Google discussion group. I will use this discussion group for new release announcements as well, so subscribe to it if you would like to get Oracle enhanced adapter news.

If you would like to report a bug or request a new feature (a patch would be greatly appreciated), please use the Lighthouse issue tracker.

The source code of the Oracle enhanced adapter is still located in the GitHub repository. If you are a GitHub user, you can watch it or even fork it and experiment with new features.

And by the way, I just released Oracle enhanced adapter version 1.1.9 with some new features, and it has also been tested with the latest Rails 2.2 release.

Categories: Development

HAPPY NEW YEAR !!!

Mihajlo Tekic - Thu, 2009-01-01 16:31
HAPPY NEW YEAR TO EVERYONE !!!

I WISH YOU AND YOUR FAMILIES A HEALTHY, HAPPY AND SUCCESSFUL NEW YEAR !!!



The picture above is of my daughter Monika.

She is four months old and is the best thing that happened to me in 2008.

Oracle PeopleSoft hosted docs

Tahiti Views - Wed, 2008-12-31 13:32
Just in time for the new year! The Oracle PeopleSoft group now has its docs on Oracle.com in HTML format, hooked up to a Tahiti search: Oracle PeopleSoft Enterprise Hosted PeopleBooks

NEW option in ADRCI purge command - UTSCDMP

Virag Sharma - Wed, 2008-12-31 11:40

There is a new option for the ADRCI purge command: UTSCDMP.


adrci> help purge

Usage: PURGE [[-i <inc1> | <inc1> <inc2>] |
             [-age <mins> [-type ALERT|INCIDENT|TRACE|CDUMP|HM|UTSCDMP]]]

Purpose: Purge the diagnostic data in the current ADR home. If no
         option is specified, the default purging policy will be used.

Options:
[-i <inc1> | <inc1> <inc2>]: Users can input a single incident ID, or a
range of incidents to purge.

[-age <mins>]: Users can specify the purging policy either for all the
diagnostic data or for the specified type. Data older than <mins>
minutes ago will be purged.

[-type ALERT|INCIDENT|TRACE|CDUMP|HM|UTSCDMP]: Users can specify what
type of data is to be purged.

Examples:

purge
purge -i 123 456
purge -age 60 -type incident


There were some issues where the cdmp_* directories in the repository (=$diagnostic_dest/Diag/rdbms/database_name/instance_name/bdump) were not purged automatically. When you run the following command, it removes cdmp_* directories older than 3600 minutes:

adrci> purge -age 3600 -type UTSCDMP

Categories: DBA Blogs

ADF 11 Course 12-16 January in The Netherlands

JHeadstart - Tue, 2008-12-23 23:06

Oracle University just released the ADF 11 course, covering many new ADF 11 features.
I will be teaching this course from 12 to 16 January 2009 in De Meern, the Netherlands.

There are still seats available. For a list of course topics and registration, use this link.

Categories: Development

Oracle Database Resource Manager 11g - Undocumented New Parameters

Aviad Elbaz - Mon, 2008-12-22 09:31

I've played around with Oracle Database Resource Manager in 10g; it's quite nice and can be very useful for systems with high CPU usage. But I found the inability to limit I/O a drawback, since in most cases I've faced, the need to limit I/O is greater than the need to limit CPU.

When you have, let's say, 8 CPUs on your machine, all 8 need to be 100% utilized by Oracle sessions before Resource Manager starts limiting sessions. However, if your machine's I/O capability is 50 MB per second, it takes only one or two sessions performing intensive I/O (a batch job or a heavy report) to make the database very heavy.

In Oracle Database 11g Release 1, Resource Manager gained some new features related to I/O. So I installed 11g, ran some tests, and found some interesting issues.

I'm not going to write about Resource Manager basics or about the 11g enhancements, as some great articles have already been published on them. For example, you can read Tim's blog post - "Resource Manager Enhancements in Oracle Database 11g Release 1".

But... I'm going to discuss one missing capability (in my opinion) that will hopefully be available in Oracle Database 11g Release 2, together with two new parameters that are already present but inactive and undocumented.

For those who are not familiar with Oracle Database Resource Manager, here is a short, basic introduction:

Oracle Database Resource Manager helps us prioritize sessions to optimize resource allocation within our database by:

  1. Creating groups of sessions (consumer groups) based on similar resource requirements
  2. Allocating resources to those groups (resource plan directives)
  3. Grouping resource plan directives into a resource plan that allocates resources to consumer groups

Only one Resource Plan is active at a time.

When Oracle Database 11g was introduced, some new I/O-related features for Resource Manager were revealed. Among them:

  1. I/O Calibration -
    A new procedure (CALIBRATE_IO) helps assess the I/O capabilities of the database storage. Its output is max_iops, max_mbps, and actual_latency.

  2. Per-Session I/O Limits -
    Unlike previous versions, you can limit the I/O requests (or megabytes of I/O) a session issues before it is moved to a new consumer group (switch_io_reqs and switch_io_megabytes have been added to the CREATE_PLAN_DIRECTIVE procedure). A sketch of such a directive follows this list.
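For illustration, here is a minimal sketch of a plan that uses the new per-session switch. All the names are made up; LOW_GROUP and OTHER_GROUPS are groups that exist by default, and a plan must include an OTHER_GROUPS directive to validate:

begin
  dbms_resource_manager.create_pending_area();
  dbms_resource_manager.create_consumer_group(
    consumer_group => 'BATCH_GROUP',
    comment        => 'sessions expected to do heavy I/O');
  dbms_resource_manager.create_plan(
    plan    => 'IO_SWITCH_PLAN',
    comment => 'demote I/O-intensive sessions');
  -- New in 11g: switch a session to LOW_GROUP once it has issued 1000 MB of I/O
  dbms_resource_manager.create_plan_directive(
    plan                => 'IO_SWITCH_PLAN',
    group_or_subplan    => 'BATCH_GROUP',
    comment             => 'demote after 1000 MB of I/O',
    switch_group        => 'LOW_GROUP',
    switch_io_megabytes => 1000);
  dbms_resource_manager.create_plan_directive(
    plan             => 'IO_SWITCH_PLAN',
    group_or_subplan => 'OTHER_GROUPS',
    comment          => 'mandatory catch-all directive');
  dbms_resource_manager.validate_pending_area();
  dbms_resource_manager.submit_pending_area();
end;
/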

Oracle has added the option to "capture" Oracle sessions by the I/O requests or the megabytes of I/O they issue, in order to move them to a lower-priority consumer group.

I have a very fundamental doubt about this enhancement, as I don't get the point of "capturing" an I/O-intensive session and moving it to a low-priority consumer group that can only have a CPU limit...?! The reason we "capture" the session is the amount of I/O it generates, yet when we move it to a low-priority consumer group we can only limit its CPU resources. We can't limit the amount of I/O for a consumer group.

It would have been very useful if Oracle had added the ability to limit I/O for a consumer group, just as we can limit CPU (with mgmt_pN) for a consumer group. What is missing here is the ability to limit I/O for a specific consumer group in terms of maximum I/O requests per second or maximum megabytes per second.

Will Oracle enhance Resource Manager in 11g Release 2 to fulfill this capability?

I don't have a confident answer to this question, but I assume it will.

While playing around I noticed two new parameters to the CREATE_PLAN procedure - MAX_IOPS and MAX_MBPS. At first sight it looked like the answer to my question - the ability to limit I/O for sessions within a plan -

but it's not...

These two parameters are undocumented and completely ignored in the Oracle 11g Release 1 documentation, but they are available in the 11g Release 1 database:

-- create a new resource plan
--
-- Input arguments:
--   plan                       - name of resource plan
--   comment                    - user's comment
--   cpu_mth                    - allocation method for CPU resources
--                                (deprecated)
--   active_sess_pool_mth       - allocation method for max. active sessions
--   parallel_degree_limit_mth  - allocation method for degree of parallelism
--   queueing_mth               - type of queueing policy to use
--   mgmt_mth                   - allocation method for CPU and I/O resources
--   sub_plan                   - whether plan is sub_plan
--   max_iops                   - maximum I/O requests per second
--   max_mbps                   - maximum megabytes of I/O per second
--

PROCEDURE create_plan(plan IN VARCHAR2,
                      comment IN VARCHAR2,
                      cpu_mth IN VARCHAR2 DEFAULT NULL,
                      active_sess_pool_mth IN VARCHAR2
                      DEFAULT 'ACTIVE_SESS_POOL_ABSOLUTE',
                      parallel_degree_limit_mth IN VARCHAR2 DEFAULT
                      'PARALLEL_DEGREE_LIMIT_ABSOLUTE',
                      queueing_mth IN VARCHAR2 DEFAULT 'FIFO_TIMEOUT',
                      mgmt_mth IN VARCHAR2 DEFAULT 'EMPHASIS',
                      sub_plan IN BOOLEAN DEFAULT FALSE,
                      max_iops IN NUMBER DEFAULT NULL,
                      max_mbps IN NUMBER DEFAULT NULL

                      );

I tried to create a new plan using one of these two parameters, but it returned an error for every value I tried.
It turned out that the only valid value for MAX_IOPS and MAX_MBPS is null.

begin
  dbms_resource_manager.create_plan(plan => 'PLAN_EX_1'
                                   ,comment => 'Plan Example 1'
                                   ,max_iops => 100);
end;
/
begin
*
ERROR at line 1:
ORA-29355: NULL or invalid MAX_IOPS argument specified
ORA-06512: at "SYS.DBMS_SYS_ERROR", line 86
ORA-06512: at "SYS.DBMS_RESOURCE_MANAGER", line 38
ORA-06512: at line 2

I confirmed this with Oracle Support, and their answer was:

"This is currently an expected behaviour, we can not explicitly set either max_iops or max_mbps to any value other than null, that's why these parameters are currently not included in the documentation."

So here is my guess:
It looks like Oracle is going to enhance Resource Manager to allow I/O limits, but unlike the CPU limit set through a plan directive (mgmt_pN), it is going to be a limit at the plan level, so you can create a sub-plan with limited I/O resources allocated. Maybe the allocation method parameter (mgmt_mth) will get more valid options in addition to EMPHASIS and RATIO.

I'll keep track of this as soon as Oracle Database 11g Release 2 is announced, and I'll post an update.

You are welcome to leave a comment and/or share your opinion about this topic.

Aviad

Categories: APPS Blogs

Coding Horror: Hardware is Cheap, Programmers are Expensive

Tahiti Views - Mon, 2008-12-22 00:17
You've probably heard both sides of this argument: throw hardware at performance problems; no, no, improve the code. Jeff Atwood at Coding Horror comes down on the "more hardware" side in this post: Coding Horror: Hardware is Cheap, Programmers are Expensive. Usually I agree with Jeff, but I
