Feed aggregator

Is 10.7 Character extinct? Hardly...

Jo Davis - Thu, 2008-02-21 22:41
With all the talk about R12 and Lifetime Support, consultants can be forgiven for thinking that the old character interface is actually extinct... the truth is (for those of you out there still on it) that no, it isn't. There are actually Oracle E-Business Suite sites still on the 10.7 Character release (despite the fact that it's desupported), wondering where they go from here. Have they totally missed the boat? So for anyone in this position, here's my two cents worth....

Upgrading to Release 11.5.10 is not impossible.... it's actually quite straightforward.... the secret (and it's not much of a secret) is to use a DBA team who've done it before... and seriously, it's now 2008, we've been upgrading from 10.7 Character to 11i for eight years now, the DBAs are getting pretty good at it :) I've always said that an actual upgrade (not a point release, which is a really BIG patch, an actual upgrade which involves laying down a new $APPL_TOP and running AutoUpgrade) is not the time to break in your baby DBA. Trial by Fire may have its uses as a training technique (although I'm not a huge fan of that either), but a massive upgrade is not a candidate.

The only thing which can bite you in the backside is going to be those customisations, so it's time to gather every single piece of documentation you can find with the word 'Oracle' in it then see what you've got. If we're lucky we can kill 90% of them while we're at it (don't get me wrong - I like developers, I just don't like massive customisations which bugger up the standard stuff).

So, as usual, I'm still recommending Upgrade over ReImplementation provided you're basically happy with the Accounting Flexfield structure and calendar. (And seriously - if you want to change your AFF, don't re-implement, just use Flexfield Express - it's cheaper and you get to keep your history).

Okay, end of babbling for today... catch you next time..


How to delete an object with a special character in Oracle

Hampus Linden - Wed, 2008-02-20 11:52
There are some things in Oracle that are possible but shouldn't be possible.
One thing I love to hate is the fact that you can create tables with almost any name, just as long as you double quote it.
SQL> create table "My Fruit Table" (id number);

Table created.

SQL> select * from tab;

TNAME                          TABTYPE  CLUSTERID
------------------------------ ------- ----------
My Fruit Table                 TABLE
Horrible! And what's even more horrible is that people actually do this.

Now to the problem: a user created a table with a special character in the name. I'm not even sure which character, but I need a way to drop it. PL/SQL to the rescue.
SQL> set serveroutput on
-- Let's create a dummy table with a bogus character in the name
SQL> declare
  a varchar2(500);
begin
  a := 'create table "abctest'||chr(150)||'" (id number)';
  execute immediate a;
end;
/

PL/SQL procedure successfully completed.

SQL> select * from tab;

TNAME                          TABTYPE  CLUSTERID
------------------------------ ------- ----------
abctest?                       TABLE

SQL> desc "abctest?";
ORA-04043: object "abctest?" does not exist

-- And a bit of PL/SQL to drop it
-- change the abctest% string to something matching your single table.
SQL> declare
  a varchar2(500);
  tname varchar2(50);
begin
  a := 'select table_name from user_tables where table_name like ''abctest%''';
  execute immediate a into tname;
  dbms_output.put_line('Table name is: '||tname);
  execute immediate 'drop table "'||tname||'"';
end;
/

Table name is: abctest?

PL/SQL procedure successfully completed.

SQL> select * from tab;

no rows selected
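The same problem can also be spotted before it bites. As a hedged sketch of my own (not part of the original post), here is how one might flag which table names only exist because somebody double-quoted them, plus a helper that builds the quoted DROP statement safely:

```python
import re

# An unquoted Oracle identifier must start with a letter and contain only
# uppercase letters, digits, _, $ and # (and was limited to 30 bytes in this era).
UNQUOTED = re.compile(r'^[A-Z][A-Z0-9_$#]{0,29}$')

def needs_quoting(table_name):
    """Return True if the name can only exist as a quoted identifier."""
    return UNQUOTED.match(table_name) is None

def quoted_ddl(table_name):
    """Build a DROP statement, doubling any embedded double quotes."""
    return 'drop table "%s"' % table_name.replace('"', '""')

for name in ['EMP', 'My Fruit Table', 'abctest\x96']:
    print(name, needs_quoting(name))
```

Feeding this a list pulled from user_tables would give you every candidate for the PL/SQL cleanup shown above.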

IOUG Collaborate 2008

Edward Whalen - Tue, 2008-02-19 07:52
It's been a while since I've added anything to this blog. I guess I just got busy with other things. Well, I'm going to try to change that. First, I'm going to be at IOUG Collaborate 2008 and I am giving a session on "Advantages of Running Oracle 11g on Microsoft Windows". I hope to see you there. Second, I'm soliciting topics for future blogs. Please reply to this post with ideas and I will take them under consideration. Questions are also fine; I'll answer them if I can.

Useful links on Oracle XML DB

Pawel Barut - Mon, 2008-02-18 16:17
Written by Paweł Barut
Here is a bunch of useful links related to Oracle XML DB and handling XML data in Oracle PL/SQL. It's mostly for my own reference, but I hope it will be useful for others too.

Cheers Paweł

Related Articles on Paweł Barut blog:
Categories: Development

Reactive performance management By Craig Shallahamer

Virag Sharma - Sun, 2008-02-17 00:00

Last week Oracle University arranged a seminar by performance management guru Craig Shallahamer in India. Craig brings his unique experiences to many as a keynote speaker, a sought-after teacher, a researcher and publisher for improving Oracle performance management, and also the architect of Horizone, OraPub's service level management solution.

His way of teaching is very, very good, and the content of the seminar is different and very effective. He focused on accurate performance diagnosis of even the most complex Oracle systems, and on how Oracle interacts with the OS and applications. I have been reading Craig's papers and articles for a long time, but never got the full, big picture. After attending the seminar, his papers and articles look easy to understand.

The focus of the seminar was how to pinpoint the problem area, and the different methods to resolve it. He also covered mutexes, IMU (In-Memory Undo), performance forecasting, etc. The way he teaches latch internals, anybody can understand it, even small kids :). He creates an honest and understandable story explaining the problem, the solutions, and how best to implement your solutions in a complex and highly available Oracle environment.

On the second day of training we had a little tough time, as next to our conference hall in the Oberoi Hotel, New Delhi, Shahrukh Khan (one of India's most famous actors) was coming to inaugurate "Kya Aap Paanchvi Paas Se Tez Hain", a new serial on the Star Plus channel. But Craig enjoyed the music and continued with the training.

Categories: DBA Blogs

Misleading adcfgclone.pl ERROR: OUI files not found at their expected location...please make sure that the pre-requisite OUI patch was applied correctly...

Gaurav Verma - Thu, 2008-02-14 21:04

If you ever get this ERROR while running adcfgclone.pl, it would do you good to check whether the directories named in the context file actually exist.

As misleading as the error message below is, if you debug the Perl
scripts and check the code, you will find that adcfgclone.pl is
failing to construct a base path.

To make a long story short, if you just
double-check all the directory paths that are mentioned in the
context file, you can get around this problem easily.

This misleading error message is fixed in R12.

Here is an example of such an instance..

        perl /apsd01/apps/apsdcomn/clone/bin/adclone.pl
java=/apsd01/apps/apsdcomn/clone/bin/../jre mode=apply stage=/apsd01/apps/apsdcomn/clone/bin/.. component=appsTier method=CUSTOM appctxtg=/apsd01/apps/apsdappl/admin/APSD_apsd.xml  showProgress contextValidated=false

ERROR: OUI files not found at their expected location.
Setting OUI path to: /appsoui

ERROR: OUI files not found at their expected location. Please make sure that the pre-requisite OUI patch was applied correctly: patch should be unzipped directly at the ORACLE_HOME root level so that OraInstaller.jar exists in either

Restart adclone.pl after performing required corrections.
ERROR: Failed to execute /apsd01/apps/apsdcomn/clone/bin/adclone.pl
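The "double-check every directory path in the context file" advice above is easy to script. Here is a hedged sketch of my own (not from Oracle or the clone tooling; real context files use specific oa_* variable names, but a generic walk is enough for a sanity check) that parses the context XML and reports any value that looks like an absolute path but does not exist on disk:

```python
import os
import xml.etree.ElementTree as ET

def missing_paths(context_file):
    """Return (tag, value) pairs whose value looks like an absolute
    directory path but is absent from the filesystem."""
    tree = ET.parse(context_file)
    missing = []
    for elem in tree.iter():
        value = (elem.text or '').strip()
        if value.startswith('/') and not os.path.isdir(value):
            missing.append((elem.tag, value))
    return missing
```

Running this against the context file named on the adcfgclone.pl command line would surface the bad path before the misleading OUI error ever appears.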

Add new node to apps 2

Fadi Hasweh - Wed, 2008-02-13 09:30
Please read http://oracle-magic.blogspot.com/2008/02/add-new-node-to-apps.html before reading the steps below,

After finishing configuring the clone instance by running adcfgclone.pl I had to run the adaddnode.pl script according to note 464018.1.
While running adaddnode.pl I faced the following error
ERROR at line 1:
ORA-00001: unique constraint (APPLSYS.AD_APPL_TOPS_U2) violated
ORA-06512: at line 16

I followed note 394615.1 to solve it, and after I did the steps in the note I ran adaddnode.pl again and faced an error related to AD_TIMESTAMPS_U1. I checked on Metalink but could not find anything, so I searched on Google and found a post by Dave: http://newappsdba.blogspot.com/2006/09/dont-hit-ctrl-c-while-running.html. Following his post, I took a backup of ad_timestamps and then truncated it; after that I ran adaddnode.pl and it completed successfully.
Then I updated the xml file to have only the forms and web servers running on this new node, following the same note 464018.1. I ran AutoConfig, then started the services on both nodes, and am now able to access both nodes.

What I want to do now is configure Web Cache as a load balancer. I also want to implement the "Implement Forms Listener Servlet Support" step from Note 217368.1 (Advanced Configurations and Topologies for Enterprise Deployments of E-Business Suite 11i) in order to have Forms server load balancing, so that when one Forms server goes down on one node, users will be connected to the other Forms node. With the current configuration, if I am connected to webnode1 and Forms on that node is down, I will not be able to connect to Forms and will receive an error; with load balancing implemented, it will take me to the other working Forms server node.


I'm Dumping TOAD, Helloooo Oracle SQL Developer!

Andries Hanekom - Wed, 2008-02-13 07:46
It's already come around: out of nowhere the price of flowers and greeting cards increases tenfold and that special one starts leaving hints all around the house. Yes, Valentine's Day, love is in the air, although with a very fake and commercial smell to it. In keeping with the love theme I have decided to announce my separation from TOAD. I have found somebody new, a young new model, a little rough around the edges but with loads of possibilities and, best of all, she's much cheaper (not that kind of cheap...).

I have been looking to move over from TOAD to Oracle's SQL Developer (OSD) since its initial incarnation as Raptor, but have always found some key feature missing from the product. Recently I decided to attempt the crossover again, removing TOAD from easy desktop access and focusing on doing all work previously handled by TOAD through OSD. At the start I had to make frequent trips back to TOAD, unable to find certain functions and tools or just getting frustrated with the unfamiliar interface. But as I started using OSD more and more I discovered those familiar TOAD tools in their OSD disguise, and began to appreciate the added functionality provided by OSD. It's not been easy, and at first getting the preferences right for your preferred setup may be a little tricky, but once familiar with the new interface and the location of those favourite tools and functions you'll forget about that first love pretty soon.

My Top tips for migrating from TOAD to Oracle SQL Developer
  1. Intentionally or not, Oracle has placed a lot of the functions and tools used in TOAD in the same navigation path in OSD, so if you're stuck, just remember how you accessed it in TOAD and 9 times out of 10 you should have the same functionality available.
  2. Save code for later use, highlight the code and right click, select Save Snippet. They can later be retrieved by using menu option View-> Snippets
  3. Search database objects using menu option View -> Find DB Objects

Please feel free to contribute any tips you might have for making the transition a little easier. Happy Valentine's Day!

A Little Fun with Truth in Advertising

Ken Pulverman - Tue, 2008-02-12 22:31
Just for fun....here's a few great examples of just how far Photoshop can stretch the truth....


Espousing Pithiness

Ken Pulverman - Tue, 2008-02-12 19:59
So you ask, what have you been doing? You haven’t written a blog entry in a long time? Well while living the dream of thinking big Marketing 2.0 thoughts and driving the corresponding actions, I started to question the value of a blog. Oh yes, it’s there, but I think the format of mine and others needs to change. We are all super time starved, and I think lengthy musings just don’t do it for folks. However, at the same time I was questioning the nature of business blogging, my small readership was coming out of the woodwork to question why I hadn’t written in a while. My most current opinion is that I need to keep it short for my sanity and yours. With that said, I am going to completely violate that today as I am going to show you right here a couple of the things I’ve been up to since the last post.

First of all, I’ve been working on a website that will be a companion piece to this blog (indeed the blog is embedded in it) to capture resource links and threads to the points we discuss here. In the next week or so, I’ll be transferring a lot of the Marketing 2.0 resources I assembled for an internal corporate audience to this public space.

Here’s what it looks like (built with the new beta release of Netvibes).

This next item is impossible to read, and Blogger's upload converted me to a member of the Blue Man Group. So dear reader(s), if you are up on the latest file posting techniques for blogs, let me know. Google Docs wanted this in HTML and it looked pretty bad, though not as bad as what you see below. It's a paper I wrote that will appear in a publication called Perform, which is published by the same folks that run the magazine Revenue Today. Anyway - some thoughts on why I believe Web 2.0 is actually supporting very old behaviors for how we buy and sell, not new ones. If you are interested, e-mail me, and I'll send you a copy.

That's all for this edition. Shorter and pithier next time. Promise.

Sharing is Caring 4

Fadi Hasweh - Tue, 2008-02-12 13:54
Oracle Community
It's the new site for Oracle social networking, http://www.oraclecommunity.net/, to make the Oracle community even smaller.


Lies, Damn Lies, and Statistics

Mary Ann Davidson - Tue, 2008-02-12 13:24

There is an aphorism famously attributed to Mark Twain (among others) to the effect that there are "lies, damn lies and statistics." The Mark Twain quotes on truth I was able to verify were almost as interesting though not quite so pithy:


A lie can travel halfway around the world while the truth is putting on its shoes. (Remember, this was before the Internet).


Get your facts first, and then you can distort them as much as you please.


I've had several reminders recently that what we think we know and take for granted is often not only wrong, but quite wrong. The ease with which we can search for things on the Internet leads us to forget that what we are finding is information, but not always expertise and almost never actual wisdom. To the extent we rely on "received wisdom" it is a good opportunity to remind ourselves that information and knowledge are two different and often diametrically opposed beasties, indeed.


For example, someone recently sent a resume of a former colleague. I use "resume" loosely, as the description of work experience (the portion I have direct knowledge of, which is the only section to which I address my comments) is better described as "fiction." Perhaps, "fiction based on actual events," if I am feeling generous, except that I am not. This was by far the worst example of resume embellishment I've seen in 20-some years.


In the interests of protecting the guilty, I will call the individual involved Fictional Resume Writer (FRW). The nature of FRW's sins was 1) claiming credit for work FRW never did, 2) claiming origination of work done by others - which I find especially reprehensible - and 3) gross exaggeration of accomplishments. I emailed FRW and said that I thought a resume rewrite was in order, especially given FRW was seeking business with Oracle. Business is personal, I said, and someone who is materially misleading in credentialing I'd be unlikely to trust or want to work with in a business setting. I also went point by point through the "issues" in the resume, just to be clear about what I thought was inaccurate and why.


The response I got was the email equivalent of a shoulder shrug and a comment that the amount of hard work FRW expended "justified" claiming credit. (Is this the new world of Web 2.0, where "mashup" owners claim origination based on the "hard work" involved in taking others' work and creating something different from it?)


Perhaps I am old-fashioned, but there is a clear difference between a good idea, initiating that good idea, and carrying through on a good idea to effect positive and measurable change. And common sense if not a sense of honor should dictate how one expresses the difference among them.


For example, once upon a time, I got tired of explaining to developers for the umpteenth time what a buffer overflow was, so I wrote up a few pages - perhaps two or three - on what constituted a buffer overflow and how to avoid them. Though I did not know it at the time, this was the genesis of the Oracle Secure Coding Standards. I note at the outset, for reasons that will become all too obvious if you keep reading, that I do not claim "authorship" of these standards.


My prototype document grew over time, substantially. Someone else expanded the list to be a "top ten" list of secure coding dos and don'ts. "Top ten" then grew to be an extensive list of security vulnerabilities and how to avoid them. There are also examples of what happens if you don't write correct, secure code (drawn from actual coding errors development teams have made). All in, the document has grown to about 300 pages, to include "case law" (not just what not to do, but how to address specific technical issues the correct way). One individual (Oracle's Chief Hacking Officer) has written the bulk of the secure coding standards with input and review from others and he is clearly the author and redactor of this document. (Mahalo nui loa, Howard.)


There have been other enhancements and uses of the secure coding standards. Someone got the bright idea of tying the secure coding standards directly to our product release security checklists. A couple of people developed the secure coding class (online web-based training based on the Oracle Secure Coding Standards), while still others have watched over the rollout of this class to the development groups that need to take it (to include restructurings, new hires and acquisitions).


In theory, were I to write my resume the way FRW has, I would claim "originator," "author" or "founder" of the secure coding standards, since I wrote the first two - count them - two glorious pages. But what I wrote does not have the breadth, depth, examples, actual technical know-how, proactive guidance, and utility of what now exists. My claim to "authorship" - if I were vain enough to make it - is like the person who puts the front page and inside page (the one with the ISBN number) together for a book claiming to be the "author." It's simply ridiculous, and I'd deserve to get whacked with all 300 pages, hard bound, if I made such a statement.


There is a security lesson here. One of them is the age-old one of "trust, but verify." It is not my job - and I would not do it - to tell FRW's current employer that FRW's resume in some particulars is much closer to fiction than fact. "Caveat emptor" - let the buyer beware. If you are hiring someone on the basis of credentials, it's well worth checking them.


The second security lesson is an old one. Business is still personal, and personal attributes matter, like honor and trust. Contracts, for example, cannot create trust where there is none; just specify requirements for performance and remedies for non-performance.  A person who is untrustworthy in small things is likely to be untrustworthy in large things, and if there is anything more untrustworthy than taking credit for others' work, I do not know what it is.


The second reminder of the difference between what we think we know and the truth was occasioned by a recent op-ed piece in the Wall Street Journal called "The Lies of Tet" by historian Arthur Herman (I can personally recommend his book To Rule the Waves - How the British Navy Shaped the Modern World).


For many years, I've tried a little "knowledge experiment," by asking random people if they had heard of the Tet Offensive and, if so, who they thought "won." The response (if I exclude people who have served in the armed forces who know the truth) is astonishing. Most people, when asked, believe that the Tet Offensive was a resounding defeat for the forces of the United States and the Republic of South Vietnam. In particular, those who were alive at the time and recall the media coverage are shocked to find out that what they think they know is all wrong. One hundred percent wrong, in fact.


As Arthur Herman says:

"The Tet Offensive was Hanoi's desperate throw of the dice to seize South Vietnam's northern provinces using conventional armies, while simultaneously triggering a popular uprising in support of the Viet Cong. Both failed. Americans and South Vietnamese soon put down the attacks, which began under cover of a cease-fire to celebrate the Tet lunar new year. By March 2, when U.S. Marines crushed the last North Vietnamese pockets of resistance in the northern city of Hue, the VC had lost 80,000-100,000 killed or wounded without capturing a single province. Tet was a particularly crushing defeat for the VC (emphasis mine).  It had not only failed to trigger any uprising but also cost them "our best people," as former Viet Cong doctor Duong Quyunh Hoa later admitted to reporter Stanley Karnow. Yet the very fact of the U.S. military victory -- "The North Vietnamese," noted National Security official William Bundy at the time, "fought to the last Viet Cong" -- was spun otherwise by most of the U.S. press." ("The Lies of Tet," Wall Street Journal, February 6, 2008)

There are "truths" that are so embedded in the fabric of what we think we know that we don't even bother reading broadly, from a breadth of sources (and reputable sources) to reach our own conclusions about what is true vs. what is received wisdom. We simply must do so on issues that matter to us, instead of "outsourcing" wisdom to pundits. Otherwise, "collective" wisdom substitutes for actual facts and analysis. Of all the maxims wandering loose about the Internet (and on it), the one I find the most obnoxiously untrue is "the wisdom of the crowds." Sometimes, the crowds are dead wrong, because they've been massively misinformed. As with Tet.


It is an inescapable truth that the media got Tet wrong, spectacularly wrong, and "the lies of Tet," to use Arthur Herman's phrase, continue to shape people's opinions of not only Vietnam, but warfare in general and the veracity of the armed forces decades after the actual events.


As much as I have expressed concerns about every idiot with an opinion being able to express it on the Internet (as I am doing here!), the fact remains that in some cases, bloggers have spotted untruths, exaggerations and fabrications reported by the media (doctored pictures and doctored service records, to think of a couple of prominent examples). There is an important utility in keeping professional journalists and industry analysts honest and objective that is worth something to the millions of people who expect that from mainstream media. Score one for the blogosphere.


The corollary, and cautionary note to the blogosphere, is the realization that not all truths are apparent in nanoseconds. Technologists are used to rapidity of change, and the barrage of information and the rapidity of change often consume us as we try to keep up with the latest technology trend. Often, however, it is only with the passage of time, careful analysis, and hindsight, that we can correctly weigh events. There is a reason for the phrase rendered "timeless truths" instead of "nanosecond truths."


I was on vacation recently at a venue that couldn't be more removed from Silicon Valley: Colonial Williamsburg, Virginia, at the Williamsburg Antiques Forum. Looking at decorative objects that are between 300 and 400 years old and determining what they say to us now about the time at which they were made and the people who owned them could not be more different from what I do for a living. Yet even in the world of decorative arts, curators continue to uncover new facts that may lead them to reinterpret history. In short, even with a 350-year-old highboy, there is still much to learn, to the point that one's view of history may change.


The security issue in the above is still "trust, but verify," and I would add "from multiple sources, not merely one." Be especially wary of "received wisdom" on things that matter, and be willing to do your own research and develop your own expertise. Anything I read about military history - and history, in large part, is military history - I use at least two sources for if it is important to me, and occasionally more.


Thus far, I've talked about lies (FRW), damn lies (the media about the Tet offensive) but not about statistics. The statistics part comes with a presentation I have been doing recently (three times in Eastern Europe a couple of weeks ago) about security metrics.


I'm going to skip over a lot of what I talked about (I have already opined in a previous blog entry why "number of published vulnerabilities" is a metric very easy to "game" to achieve unintended results), to focus on a truth I stumbled upon by sheer accident. I suspect that metrics kahunas have known what I found for a long time, so I don't claim novelty, just a "eureka!" moment.


I talked in my presentation about what constitutes a good metric (objective, measurable, helps you answer basic questions like "are we doing better or worse," incents the right behavior, fosters additional questions, helps you identify positive or negative trends, and so on). I used as an example the various metrics we keep pertaining to the release of CPUs, which I wanted to discuss as a group, because there is no single metric that you could use to answer "goodness questions" related to how we are doing. In fact, picking a single metric and focusing on it to the exclusion of all others would lead to incorrect incentives.


For example, one of the metrics we keep is "number and percentage of patches that were published on the announced CPU date." That's a good one, except that you do not want people only hitting the date and ignoring quality. So, "number and percentage of patch reloads" is another one we keep, because while we want CPUs to come out on the target date, we do not want to have to reload patches because the quality was poor. Both quality and timeliness are important; hence, two metrics. We are also concerned that the issues we identify as worthy of going into a Critical Patch Update make it through the process (sometimes, issues drop out for regressions). Ideally, you'd want all critical issues you identify to actually make it into the target CPU (because there are no regressions). So, we look at number of issues that drop out through the CPU process because we are trying to make that number as low (over time) as is feasible.  I walked through all of the aforementioned metrics (and a few related to CPUs I did not discuss here) and slapped a heading on the slide: "combined metric."


My eureka moment was noting that, if security metrics are useful, and they are, the idea of a combined metric is even more useful. The goal of a metric is to be able to manage better, and just as in the pre-GPS days of navigation you needed to take multiple "fixes" to triangulate your position, you are often better served by triangulating how you are doing by measuring and weighing several different metrics. Rarely can you manage well by measuring just one thing.
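To make the "combined metric" idea concrete, here is an illustrative sketch of my own (the figures and field names are invented, not Oracle's actual data): several per-CPU measures kept side by side, none of which is meaningful alone:

```python
# Hypothetical per-CPU figures: patches planned, shipped on the announced
# date, reloaded for quality problems, and fixes dropped for regressions.
cpus = {
    'CPU-1': dict(planned=100, on_time=92, reloads=3, dropped=2),
    'CPU-2': dict(planned=110, on_time=104, reloads=6, dropped=1),
}

def scorecard(c):
    """One row of the combined metric: timeliness AND quality together."""
    return {
        'on_time_pct': round(100.0 * c['on_time'] / c['planned'], 1),
        'reload_pct': round(100.0 * c['reloads'] / c['planned'], 1),
        'dropped': c['dropped'],
    }

for name in sorted(cpus):
    print(name, scorecard(cpus[name]))
```

Optimizing on_time_pct alone rewards shipping poor patches on time; adding reload_pct and dropped fixes alongside it is the "triangulation" the text describes.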


The real goal of any metric, or "statistic," to round out my theme, is to manage better. Metrics can help you allocate resources to effect the most good for the most people and to spot trends (both positive and negative) quickly. Ultimately, a good metric needs to help you answer the question, "Are we doing better or worse?" You can do a lot with metrics, and some people lie with them, but above all, you have to be honest with yourself.


As Shakespeare put it so well:


This above all: to thine own self be true,

And it must follow, as the night the day,
Thou canst not then be false to any man.


For More Information:


Book of the week - War Made New: Technology, Warfare and the Course of History: 1500 to Today by Max Boot


A really great read about how technological changes influence warfare. If you have no idea how IT-centric warfare now is (in terms of command and control), the last 100 pages are really insightful.




One of the very best security metrics kahunas I know is Dan Geer. Anyone interested in this topic should start with Dan:




More on books by Arthur Herman:




An article by Arthur Herman on the Vietnam War:




One of the best books on Vietnam is still Lewis Sorley's A Better War:




The “log file sync” wait event is not always spent waiting for an I/O

Christian Bilien - Tue, 2008-02-12 10:41
I went through a rather common problem some time ago, which shows how one can easily get fooled by the "log file sync" wait event. Here is a log file sync wait histogram which was interpreted as a poor storage array response time by the DBAs, often a good way to get off the hook […]
Categories: DBA Blogs

Trying to lose a datafile.

Claudia Zeiler - Tue, 2008-02-12 00:22
Chen Shapira wrote me a suggestion regarding losing a data file.

“Try moving a data file while the DB is up and running, and try to make Oracle "notice"
that it is gone. On Linux, Oracle does not notice anything! You can move a datafile
and all selects, updates, inserts, will work. The OS prevents the DB from seeing
the change. But I noticed that your test system is on windows, so I'm curious
whether it is the same.”

Once my company’s operator TARed the rest of the database into the system tablespace datafile. Oracle ‘noticed’! Are you sure that you really lost the file? Was there some sort of mirroring of the file?

Anyway, here is what happened when I tried to lose a data file while the DB is up on a Windows based DB…

SQL> select name from v$datafile;


SQL> host ren C:\ORACLE\ORA10\ORADATA\DB10\O1_MF_USERS_35OQZZ4H_.DBF users.dbf
The process cannot access the file because it is being used by another process.


SQL> host ren C:\ORACLE\ORA10\ORADATA\DB10\EXAMPLE01.DBF example02.dbf
The process cannot access the file because it is being used by another process.

Also a straight O/S effort:
C:\Windows\System32>del C:\ORACLE\ORA10\ORADATA\DB10\O1_MF_USERS_35OQZZ4H_.DBF

The process cannot access the file because it is being used by another process.

CPUJan2008: One thing that every DBA should know about it.

Pawel Barut - Mon, 2008-02-11 15:54
Written by Paweł Barut
While reading the documentation for the latest Oracle security patch for the database, CPUJan2008, I noticed one new required step: Recompile Views. One would say that it's nothing strange, but to run this script the database has to be started in UPGRADE mode. For non-RAC installations it's not a big issue, as you have to plan downtime anyway. But Oracle suggests that this recompilation can take 30 minutes:
"In one internal Oracle test with approximately 2000 views and 4000 objects, the total execution time for view_recompile_jan2008cpu.sql and utlrp.sql was about 30 minutes."
Practically, in my case it took no more than 5 minutes, so it's not a big deal.
But it could be a problem for RAC installations, where usually the CPU is applied on one node, then catcpu.sql is run, then the second node is patched, so normally the DB is available the whole time. But if the database has to be started in UPGRADE mode, it means that downtime is required. According to the documentation it is not required to run the view recompilation script during the patching process. The script can be run afterwards;
"however, the CPU installation will not be complete until the view recompilation is completed."
You can omit this step only if:
  • The database was created with Release or later

  • The database was created with any release (for example,,, or after CPUJan2008 or a later CPU has been applied

I hope this also means that once CPUJan2008 has been applied, you will not have to run this script again when the next CPU comes out (CPUApr2008, etc.).

Cheers Paweł

Related Articles on Paweł Barut blog:
Categories: Development

Personal Substitution Variables

Oracle EPM Smart Space - Mon, 2008-02-11 10:47
A cool feature of the Smart Space SDK is the ability to create a set of global Smart Space variables that can be used by any gadget. These are essentially name value pairs that are called LocalProperties in Smart Space.

I used this feature in my latest batch of Essbase gadgets. I wanted the ability to set the Time dimension member for all of my Essbase query gadgets in one place and not have to go to each gadget to change this member. I knew that one option would have been to use the Essbase Substitution Variable functionality found in the Essbase product. But I did not want to make this a variable for all Essbase users which is what the Essbase Substitution Variable would provide.

I created a simple MyVariables gadget that allows the user to create personal variables.

These variables can then be used by other gadgets. In the case of the Essbase gadgets, I implemented a special syntax whereby any member name that is preceded by two ampersands (&&TODAY) represents a local variable. When constructing the query, the Essbase gadget then makes a call to Smart Space whenever it encounters a member name with this syntax to get the value of that variable. It replaces the syntax (&&TODAY) with the variable value (“20080823”) and issues the query.
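The && substitution the gadget performs can be sketched like this (my own approximation, not actual Smart Space code; the sample query and variable names are made up):

```python
import re

def expand_local_vars(query, local_props):
    """Replace &&NAME tokens with values from a name/value store,
    leaving unknown variables untouched."""
    def repl(match):
        name = match.group(1)
        try:
            return local_props[name]
        except KeyError:
            return match.group(0)
    return re.sub(r'&&([A-Za-z_][A-Za-z0-9_]*)', repl, query)

props = {'TODAY': '20080823'}
print(expand_local_vars('SELECT Sales FROM cube WHERE Time = &&TODAY', props))
```

The gadget presumably does the lookup against the LocalProperties store shown below instead of a plain dictionary, but the token-replacement step is the same idea.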

This is another example of the great new benefits of having Smart Space running on your client. It is the analytical footprint that will allow for powerful new capabilities such as creating your own personal substitution variables.

public String getPreference(String key)
{
    return Services.PreferenceManager.LocalProperties.Load(key);
}

public void savePreference(String key, String value)
{
    Services.PreferenceManager.LocalProperties.Save(key, value);
}

public void deletePreference(String key)
{
}

public void getPreferences()
{
    String[] sPrefKeys = Services.PreferenceManager.LocalProperties.GetAllKeys();

    ListViewItem item = new ListViewItem();

    foreach (String sPrefKey in sPrefKeys)
    {
        try
        {
            String sPrefValue = getPreference(sPrefKey);
            item = this.listViewVars.Items.Add(sPrefKey);
        }
        catch (Exception e1) { }
    }
}
Categories: Development

Anyone can post comments on this blog

Chris Grillone - Mon, 2008-02-11 10:33
To encourage questions and open discussion on this blog, I have changed the settings to allow anyone to post comments. I want to learn from you, and to let you share your Oracle Flow Manufacturing, lean manufacturing and general manufacturing experiences and insight.

Partners, share your industry insight and gain industry contacts here. According to Google Analytics, people from every continent except Africa (and Antarctica; penguins are not interested) are viewing this blog. (I will be harassing my African contacts after posting this blog.)

Oracle Sales and Sales Consultants and Industry Business Unit Consultants, share your insight and identify opportunities.

Everyone, leverage this community of Manufacturing experts whether you are considering Flow Manufacturing or another solution.

I want to hear from you to help us improve the product and customer experience.


Subscribe to Oracle FAQ aggregator