Feed aggregator

How to execute TKPROF on trace files larger than 2GB? --> Use a pipe

Aviad Elbaz - Tue, 2008-06-24 05:54

Here is a nice trick for working with files larger than 2 GB on Unix/Linux using a pipe.

First case - TKPROF

When trying to execute TKPROF on a trace file larger than 2 GB I got this error:

[udump]$ ll test_ora_21769.trc

-rw-r-----  1 oratest dba 2736108204 Jun 23 11:04 test_ora_21769.trc

[udump]$ tkprof test_ora_21769.trc test_ora_21769.out

TKPROF: Release - Production on Thu Jun 23 21:05:10 2008

Copyright (c) 1982, 2002, Oracle Corporation.  All rights reserved.

could not open trace file test_ora_21769.trc

In order to successfully execute TKPROF on this trace file, you can use the mkfifo command to create a named pipe as follows:

  • Open a new Unix/Linux session (1st), change directory to where the trace file exists and execute:

[udump]$ mkfifo mytracepipe
[udump]$ tkprof mytracepipe test_ora_21769.out

TKPROF: Release - Production on Thu Jun 23 21:07:35 2008

Copyright (c) 1982, 2002, Oracle Corporation.  All rights reserved.

  • Open another session (2nd), change directory to where the trace file exists and execute:

[udump]$ cat test_ora_21769.trc > mytracepipe

This way you'll successfully get the output file.
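
For reference, the two sessions can also be combined into one by backgrounding TKPROF as the reader on the pipe. A minimal sketch, assuming the same file names as above:

[udump]$ mkfifo mytracepipe                       # create the named pipe
[udump]$ tkprof mytracepipe test_ora_21769.out &  # TKPROF blocks, reading from the pipe
[udump]$ cat test_ora_21769.trc > mytracepipe     # feed the >2GB trace through the pipe
[udump]$ wait                                     # wait for TKPROF to finish the report
[udump]$ rm mytracepipe                           # remove the pipe when done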


Second case - spool

A similar issue with spooling to a file larger than 2 GB can be handled the same way.

$ mkfifo myspoolpipe.out

--> Create a new named pipe called 'myspoolpipe.out'

$ dd if=myspoolpipe.out of=aviad.out &

--> Whatever is read from 'myspoolpipe.out' is written to 'aviad.out'

$ sqlplus user/pwd@dbname

SQL*Plus: Release - Production on Tue Jun 24 12:05:37 2008

Copyright (c) 1982, 2002, Oracle Corporation.  All rights reserved.

Connected to:
Oracle9i Enterprise Edition Release - Production
With the Partitioning, OLAP and Oracle Data Mining options
JServer Release - Production

SQL> spool myspoolpipe.out

--> Spool to the pipe

SQL> select .....

SQL> spool off
SQL> 5225309+294082 records in
5367174+1 records out

SQL> exit
Disconnected from Oracle9i Enterprise Edition Release - Production
With the Partitioning, OLAP and Oracle Data Mining options
JServer Release - Production

[1]+  Done                    dd if=myspoolpipe.out of=aviad.out

$ ls -ltr

prw-r--r--  1 oratest dba          0 Jun 24 12:22 myspoolpipe.out
-rw-r--r--  1 oratest dba 2747993487 Jun 24 12:22 aviad.out

Related Notes:

Note 62427.1 - 2Gb or Not 2Gb - File limits in Oracle
Note 94486.1 - How to Create a SQL*Plus Spool File Larger Than 2 GB on UNIX


Categories: APPS Blogs

ODTUG 2008 (week in review)

Carl Backstrom - Mon, 2008-06-23 13:39
Well, a lot of people have been giving day-by-day reports about this year's ODTUG (where do they get the time?), so I figured I'd just post a weekly roundup of the proceedings.


The ODTUG 2008 event itself was a lot of fun and very well organized.
From Tom Kyte's opening keynote, which was very cool and non-technical, looking at why we always need to question how and why we do the things we do, because the way you used to do it might not be the right way anymore and it's our job to always make sure things are done in the right way.

To the closing night get-together, complete with band, fortune tellers and beads, the event was just very interesting, with the right amount of fun to keep everybody looking forward to what came next. And the infrastructure of the event was put together so that getting to the sessions or events you needed took minimal effort or fuss. I recommend this event to anyone interested in any of the tools that Oracle provides.


All I can say is "WOW"!! I knew people were using and interested in APEX, but the amount of interest and usage is amazing. Almost every APEX session had a full room and many were standing room only. Many times I would just end up standing outside the door looking in, just so I wasn't taking up a seat; that's the type of guy I am ;). I would say everybody at the event was surprised at the interest and energy of the APEX crowd, including the APEX crowd itself.

During the sessions I was amazed at the things people have done with our product, from the people that just use the stock out-of-the-box features, to applications that don't look or act like APEX at all until you look at the URL in the browser.

Both of these scenarios are perfect examples of the awesome cross-section of APEX developers and uses: from the business user just trying to solve a business problem in the quickest, cheapest, most reliable way around, to the advanced developer using custom JavaScript and/or third-party libraries to provide APEX-based RIA. APEX runs the gamut. Trying to be everything to everyone is hard, and from everything I saw at ODTUG I would say that APEX is doing a very good job of it.

Carl @ ODTUG

Well, here I am, and will continue to be, my own worst critic; if someone else wants to jump in, just try to make it constructive. I'll start with where I felt things were bad, and then mention the good. I wasn't as happy with my presentation as I should have been, and it was 100% my own fault; I will do better next time.

The Bad.

Two things I learned about giving presentations.

1. Never rewrite your whole presentation on the day of the presentation. This seems such an obvious statement, but this was my second presentation ever and I've made this same mistake twice now; I will not do it again.

2. Make sure that your mic is adjusted correctly before you start. Trying to adjust a mic in a very hot room 5 minutes into your presentation, while already a bit nervous, wastes too much time, makes the presenter even more nervous and compounds the problem. Next time the mic will be right on the collar and everything will be perfect.

The Good

Rewriting my presentation was the right thing to do, I just should have done it earlier. The presentation I had, which I will eventually show, was very flashy and whizbang and would have been useless to pretty much everybody, though it would have made me look great ;).

What I wanted to do, and did, was show people some nuts-and-bolts examples of how things can be done in APEX: not how to fix or build particular things, but to give people ideas of what can be done. Using an interactive report as the example, I went through the features in APEX that allowed us to build them.

It ended up being a short and sweet session (48 min), but the room was very hot and it was the end of the day, so I think the timing was near perfect in that respect. And enough people came up to me afterwards to ask questions and/or ask for the application that I think I did fairly well.

Thanks to everybody that attended, it does make one feel good to know that so many people are interested.

One thing I will do next time is follow Dietmar's suggestion, and how he did his session, which is to use screencasts of the functionality; live demos are great, but a screencast will show the same thing without the issue of having to jump back and forth between applications.

After a bit of cleanup I'll be putting both the slides and the application out for everybody to take a look at, plus sending them to ODTUG so they can host them on their site as well.

New Orleans

This was my first time in New Orleans and I had a really nice time. There were some very good restaurants and watering holes, and I only had time to sample a few of each. The French Quarter was very cool with some of its old-style architecture. Bourbon Street was an experience in itself, and considering how crowded it was on just a random week, I couldn't imagine being there during Mardi Gras; it must be insane.

Being from Vegas we have a joke: "Sure it's 120 F (50 C), but it's a dry heat!" If New Orleans taught me one thing, it's that that is not a joke! Vegas might be 120, but New Orleans at 82 felt a lot hotter. I will tell that joke again in the future, but with a cold shiver down my spine as I remember what a non-dry heat feels like ;).

Lakers @ ODTUG

Congratulations, Boston.

Game 6 just happened to be the same night as the APEX meetup, which made going to a place with a TV mandatory. I was the only Lakers fan in attendance at the APEX meetup, and I remember all the names and faces of you haters :D

Hand grenade

A horrible yet intriguing drink. The name should be enough to keep you away from it; you have been warned.


At events like this it's the people that make it worthwhile, both the people in attendance and the people giving the sessions, and this year's ODTUG was no exception. Some of the most interesting ideas, questions and comments come up outside the sessions, though the sessions are the catalyst. I learned quite a few new things about APEX and how people use it, along with some other Oracle technologies, and can't wait to start putting this information into action.

If you want more detailed accounts, try searching through the APEX Blog Aggregator or the Oracle News Aggregator; there are many postings with much more detail on specific sessions.

The Happiness Meter

Rob Baillie - Mon, 2008-06-23 04:01
As part of any iteration review / planning meeting there should be a section where everybody involved talks about how they felt the last iteration went, what they thought stood in the way, what they thought went particularly well, and suchlike.

We find that as the project goes on, and the team gets more and more used to each other, this tends to pretty much always dissolve into everyone going "alright I suppose", "yeah fine".

Obviously, this isn't ideal and will tend to mean that you only uncover problems in the project when they've got pretty serious and nerves are pretty frayed.

This is where "The Happiness Meter" comes in.

Instead of asking the team if they think things are going OK and having most people respond non-committally, ask people to put a value against how happy they are with the last iteration's progress. Any range of values is fine, just as long as it has enough levels in it to track subtle movements. I'd go with 1-10.

You don't need strict definitions for each level, it's enough to say '1 is completely unacceptable, 5 is kinda OK, 10 is absolute perfection'.

At some point in the meeting, everyone in the team declares their level of happiness. When I say everyone, I mean everyone: developers, customers, XP coaches, infrastructure guys, project managers, technical authors, absolutely everyone who is valuable enough to have at the iteration review meeting should get a say.

In order to ensure that everyone gets to give their own view, each person writes down their number and everyone presents it at the same time. The numbers are then recorded and a graph is drawn.

From the graph we should be able to see:
  1. The overall level of happiness at the progress of the project.

  2. If there are any splits / factions in the interpretation of the progress.

If the level of happiness is low, this should be investigated; if there are any splits, this should be investigated; and just as importantly - if there are any highs, this should be investigated. It's good to know why things go well so you can duplicate it over the next iteration.

Factions tend to indicate that one part of the team has more power than the rest and the project is skewed into their interests rather than those of the team as a whole.
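
If you want a quick numeric check for lows and splits rather than eyeballing the graph alone, the mean and spread of the scores do the job. A minimal sketch, assuming one score per line in a file called scores.txt and a spread threshold of 2, both of which are my own choices rather than part of the practice:

awk '{ sum += $1; sumsq += $1 * $1; n++ }
     END {
       mean = sum / n
       sd = sqrt(sumsq / n - mean * mean)   # spread of the scores
       printf "mean happiness %.1f, spread %.1f\n", mean, sd
       if (mean < 5) print "low happiness - investigate"
       if (sd > 2)   print "wide spread - possible faction, investigate"
     }' scores.txt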

You may want to split the graph into different teams (customer / developer) if you felt that was important, but I like to think of us all as one team on the same side...

All said and done, the graph isn't the important bit - the discussion that comes after the ballot is the crucial aspect. This should be a mechanism for getting people to talk openly about the progress of the project.

UPDATE: Someone at work suggested a new name that I thought I should share: The Happy-O-Meter.

Auxiliary Constructs Appeal

Oracle WTF - Sat, 2008-06-21 17:24

Will somebody give this guy some auxiliary constructs? He just needs to know what's the auxiliary constructs, and examples in the auxiliary constructs. So if you have any auxiliary constructs you don't need, now's the time to dig deep. The appeal starts here.

Ideas for improving innovation and creativity in an IS department

Rob Baillie - Sat, 2008-06-21 04:49
At our work we've set up a few 'action teams' to try to improve particular aspects of our working environment.

The team that I'm a member of is responsible for 'Innovation and Creativity'.

We're tasked with answering the question "How do we improve innovation and creativity in IS?" - How we can foster an environment that encourages innovation rather than stifles it.

As a bit of background, the company is a medium-sized one (2,500-plus employees) based mainly in the UK but recently spreading through the world, the vast majority of whom are not IS-based. The IS department is about 100 strong and includes a development team of 25 people. It's an SME at the point where it's starting to break into the big time, and it recognises that it needs to refine its working practices a little in order to keep up with the pace of expansion.

We met early last week and have put together a proposal to be taken to the senior management tier. I get the feeling it will be implemented, since our team included the IS Director (you don't get any more senior in our department), but you never know what'll happen.

I figured it might be interesting to record my understanding of the plan as it stands now, and then take another look in 6 months time to see what's happened to it...

We decided that in order to have an environment that fosters creativity and innovation you need:


Time for ideas to form, for you to explore them, and then to put them into practice.


Outside influences that can help to spark those ideas off - this may be from outside the organisation, or through cross-pollination within it.


The conviction to try things, to allow them to fail or succeed on their own merit - both on the part of the individual and the organisation as a whole.

Natural Selection:

The need to recognise success when it happens, to take it into the normal operation of the business and make it work in practice. Also, the need to recognise failure when it happens, and stop that from going into (or continuing to exist within) the team.


When we have a good idea, the people involved need to be celebrated. When we have a bad idea, the people involved DO NOT need to be ridiculed.


The initial ideas aren't always the ones that are successful; it's the 4th, 5th or 125th refinement of that idea that forms the breakthrough. We need to understand what we've tried, and recognise how and why each idea has failed or succeeded, so we can learn from it.

We put together some concrete ideas on how we're going to help put these in place - and bear in mind that this isn't just for the development team, this is for the whole of the IS department - development, project management, infrastructure, operations, service-desk, even the technology procurement...


A position will be set up that is responsible for defining / tracking a curriculum for each job role in the department.

Obviously this will be fed by those people that currently fulfil the roles, and will involve things ranging from ensuring the process documentation is up to scratch, through specifying reading lists (and organising the purchase of the books for the staff), to suggesting / collecting / booking conferences, training courses and the like that might be of use.

This takes the burden of responsibility away from the staff and managers - all you need is the idea and someone else will organise it and ensure it's on the curriculum for everyone else to follow up.

IdeaSpace (TM ;-) ):

A forum for the discussion of ideas, and collection of any documentation produced on those ideas and their investigation. This will (hopefully) form a library of past investigations as well as a stimulus for future ones. Everyone in the department will be subscribed to it.

Lab days:

Every employee is entitled to 2 days a month outside of their normal job to explore some idea they might have. That time can be sandbagged up to a point, although you can't take more than 4 days in one stint. Managers have to approve the time in the lab (so that it can be planned into existing projects) and can defer the time to some extent, but if requests are forthcoming they have to allow at least 5 days each rolling quarter, so the time can't be deferred indefinitely.

Whilst the exact format of the lab is yet to be decided, we're aiming to provide space away from the normal desks so that there is a clear separation between the day job and lab time. People will be encouraged to take time in the lab as a team as well as individually. Also, if we go into the lab for 3 days and find that an idea doesn't work, that idea should still be documented and the lab time regarded as a success (we learnt something).

Dragon's Den:

Gotta admit, I'm not sure about some of the connotations of this - but the basic idea is sound. Coming out of time in the Lab should be a discussion with peers about the conclusion of the investigation in a Dragon's Den format. This allows the wider community to discuss the suitability of the idea for future investigations, or even immediate applicability. One output of this meeting may be the formalisation of conclusions in the IdeaSpace.

Press Releases:

The company is already pretty good at this, but when something changes for the better we will ensure that we celebrate those changes and, even for a day, put some people up on pedestals.

None of the above should be seen as a replacement for just trying things in our day-to-day job - but the idea is that these things should help stress to the department that change and progress are important aspects of what we do, and that we value them enough to provide a structure in which big ideas can be allowed to gestate. Cross-pollination and communication should just form part of our normal day job anyway, and we should ensure that our project teams are cohesive and communicate freely amongst and between themselves.

Also, an important factor in the success of the above has to be the format of the Dragon's Den - if it is in any way imposing or nerve-racking then the idea is doomed to failure. As soon as people feel under pressure to justify themselves then the freedom disappears.

I'm quite excited by the prospect of putting these ideas into practice, and I wonder exactly where we'll end up.

I'll keep you all posted.


Claudia Zeiler - Thu, 2008-06-19 17:36
User on test DB, "Response time is terrible."

DBA, "You are the only user on the DB, but you have a dozen sessions open. Can you close some sessions?"

User, "I can't see my sessions through the application. Bounce the database"

DBA, " I can see the sessions just fine. I'll kill your excess sessions."

User, "No, don't kill my sessions. Bounce the database. I'm bouncing the database."

We don't need no stinking controls around here.
Everyone can do everything.

He bounced the database.

User, "I bounced the database. My [one] session is running just fine."

Q.E.D. - bouncing the database improves performance.


An update to the post above:

The same user has informed me that I should always shut down the database with "shutdown abort". "It works much better."

I have always assumed that everyone else knows more than I do.

Maybe that isn't true. Maybe I do understand more than some people...

Want to Add a Responsibility? How about Oracle User Management?

Solution Beacon - Thu, 2008-06-19 14:06
When you think of adding a new responsibility to an existing E-Business Suite user, does System Administrator come to mind? Visions of going to the Define User form, tabbing down and finding the new responsibility to add? Did you know that you can add responsibilities to a user through Oracle User Management? Let’s step through how this can happen together. Below I have set up a brand new

Lessons Learned after a Hell Weekend

Claudia Zeiler - Mon, 2008-06-16 22:55
  1. A necessary part of any database project plan is a fall-back plan: what is planned if there is a failure at any particular step?
  2. Backup - this includes enough space allocated for additional backups as needed.
  3. Any upgrade script should come with rollback scripts.

I should have known that I was in trouble when, on Friday afternoon, I was given a timeline which was called 'optimistic' with no what-ifs foreseen.

tapiGen in the wild (new opensource)

Carl Backstrom - Thu, 2008-06-12 18:43
Dan McGhan has started a SourceForge project, tapiGen, which generates PL/SQL APIs to access many table-based database features.

You can read more about it here. Try it out and drop him a line about all his work; I'm sure he'd appreciate hearing from you.

If you like what he's done you should consider signing up to help out, or at least provide some feature requests, bug reports, feedback and such.

APEX 3.1.1 Released

Duncan Mein - Tue, 2008-06-10 10:19
Just upgraded from APEX 3.1 to 3.1.1 on an Oracle Enterprise Linux 4 Update 4 platform.

Install took 5:39 and terminated without error.

The patch can be downloaded from Metalink (patch number 7032837).

All in all, a very simple upgrade, and now on to the task of regression testing our current 3.1 apps.

Oracle VM and multiple local disks

Geert De Paep - Mon, 2008-06-09 15:06

For my Oracle VM test environment I have a server available with multiple internal disks of different size and speed. So I was wondering if it is possible to have all these disks used together for my virtual machines in Oracle VM.

If all disks had been the same size and speed, I could easily have used the internal RAID controller to put them in mirror, stripe or RAID 5 and end up with one large volume, alias disk, for my Oracle VM. However, due to the differences in the characteristics of the disks (speed/size), this is not a good idea. So I started to look in Oracle VM Manager (the Java console) to see what is possible.

It soon became clear to me that Oracle VM is designed for a different architecture: in fact the desired setup is to have a (large) SAN box with shared storage that is available to multiple servers. All these servers can then be put in a server pool, sharing the same storage. This setup allows live migration of running machines to another physical server. Of course this makes sense because it fits nicely into the concept of grid computing: if any physical server fails, just restart your virtual machine on another one, and add machines according to your performance needs. But it doesn't help me: I don't have one storage array with multiple servers, I have one server with multiple disks.

So I started to browse a little through all the executables of the OVM installation, and under /usr/lib/ovs I found the ovs-makerepo script. As I understand it, the architecture is as follows (as far as I can find on the internet, because there is not much clear documentation on this): when installing OVM, you have a /boot, a / and a swap partition (just as in traditional Linux), and OVM requires one large partition to be used for virtual machines, which will be mounted under /OVS. In this partition you find a subdirectory "running_pool", which contains all the virtual machines that you have created and can start, and a subdirectory "seed_pool", which contains templates you can start from when creating new machines. There are also "local", "remote" and "publish_pool"; however, they were irrelevant for me at the moment and I didn't try to figure out what they are used for.

With this in mind I can install Oracle VM on my first disk and end up with 4 partitions on /dev/sda:

   Filesystem 1K-blocks     Used Available Use% Mounted on
   /dev/sda1     248895    25284    210761  11% /boot
   (sda2 is swap)
   /dev/sda3    4061572   743240   3108684  20% /
   /dev/sda4   24948864 22068864   2880000  89% /OVS

With this in mind I now want to add the space on my second disk (/dev/sdb) to this setup. So first I create one large partition on the disk using fdisk. Then I create an OCFS2 file system on it as follows:

[root@nithog ovs]# mkfs.ocfs2 /dev/sdb1
mkfs.ocfs2 1.2.7
Filesystem label=
Block size=4096 (bits=12)
Cluster size=4096 (bits=12)
Volume size=72793694208 (17771898 clusters) (17771898 blocks)
551 cluster groups (tail covers 31098 clusters, rest cover 32256 clusters)
Journal size=268435456
Initial number of node slots: 4
Creating bitmaps: done
Initializing superblock: done
Writing system files: done
Writing superblock: done
Writing backup superblock: 4 block(s)
Formatting Journals: done
Writing lost+found: done
mkfs.ocfs2 successful

Initially I created the file system as ext3, which worked well. However, there was one strange thing. This is what you get:

  • Create a new (paravirtualized) (Linux) virtual machine in this new (ext3-based) repository (see later how exactly)
  • Specify a disk of e.g. 2Gb
  • Complete the wizard
  • This prepares a machine where you can start using the Linux installer on the console to install the machine (do not start the install yet)
  • Now look in …/running_pool/machine_name and see a file of 2Gb
  • Now do du -sk on …/running_pool/machine_name and see that only 20Kb is used
  • From the moment you start to partition your disk inside the virtual machine, the output of “du -sk” grows by the same amount as the data you really put in it. So it behaves a bit like ‘dynamic provisioning’.
  • Note however that ls -l shows a file of 2Gb at all times

I don't know for the moment if this behaviour is caused by the fact that the file system is ext3, but anyway, I leave it up to you to judge whether this is an advantage or a disadvantage.
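
For what it's worth, this matches classic sparse-file behaviour, which you can reproduce on any file system that supports it. A small sketch (the file name is just an example):

dd if=/dev/zero of=sparse.img bs=1M seek=2048 count=1   # write 1MB at the 2GB mark
ls -l sparse.img    # apparent size: a bit over 2GB
du -sk sparse.img   # allocated size: only the blocks actually written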

Now when trying to add my new sdb1 partition as an extra repository, I got:


[root@nithog ~]# /usr/lib/ovs/ovs-makerepo
 usage: /usr/lib/ovs/ovs-makerepo <source> <shared> <description>
        source: block device or nfs path to filesystem
        shared: filesystem shared between hosts?  1 or 0
        description: descriptive text to be displayed in manager


   [root@nithog ovs]# /usr/lib/ovs/ovs-makerepo /dev/sdb1 0 "Repo on disk 2"
   ocfs2_hb_ctl: Unable to access cluster service while starting heartbeat
   mount.ocfs2: Error when attempting to run /sbin/ocfs2_hb_ctl: "Operation not permitted"
   Error mounting /dev/sdb1

It seems like the script expects something like a cluster, but I just have a standalone node… I think this script is intended to add a shared repository to a cluster of nodes. No problem, let's try to convert our standalone machine to a one-node cluster by creating the file /etc/ocfs2/cluster.conf:

cluster:
        node_count = 1
        name = ocfs2

node:
        ip_port = 7777
        ip_address =
        number = 1
        name = nithog
        cluster = ocfs2

Note that the indented lines MUST start with a <TAB> and then the parameter with its value. After creating this file I could do:

   [root@nithog ovs]# /etc/init.d/o2cb online ocfs2
   Starting O2CB cluster ocfs2: OK

and then
[root@nithog ovs]# /usr/lib/ovs/ovs-makerepo /dev/sdb1 0 "Repo on disk 2"
Initializing NEW repository /dev/sdb1
SUCCESS: Mounted /OVS/877DECC5B658433D9E0836AFC8843F1B
Updating local repository list.
ovs-makerepo complete

As you can see, an extra subdirectory is created in the /OVS file system, with a strange UUID as its name. Under this directory my new file system /dev/sdb1 is mounted. This file system is a real new repository, because under /OVS/877DECC5B658433D9E0836AFC8843F1B you also find the running_pool and seed_pool directories. It is also listed in /etc/ovs/repositories (but it is NOT recommended to edit this file manually).

Then I looked in Oracle VM Manager (the Java-based web GUI), but I didn't find any trace of this new repository. It looks as if this GUI is not (yet) designed to handle multiple repositories. However, I started to figure out whether my new disk could really be used for virtual machines, and my results are:

  • When creating a new virtual machine, you have no way of specifying which repository it has to go in
  • It seems to go in the repository with the largest amount of free space (but I should do more testing to get 100% certainty)
  • When adding a new disk to an existing virtual machine (an extra file at the Oracle VM level), the file will go in the same repository, even the same directory, as the initial files of your virtual machine. If there is NOT enough free space on that disk, Oracle VM will NOT put your file in another repository on another disk.
  • You can move the data files of your virtual machine to any other location while the machine is not running, provided you change the reference to the file in /etc/xen/<machine_name> (see the sketch after this list)
  • So actually it looks like at the Xen level you can put your VM data files in any directory; the concept of repositories seems to be Oracle VM specific.
  • So if you create a new virtual machine and Oracle puts it in the wrong repository, it is not difficult at all to move it afterwards to another filesystem/repository. It just requires a little manual intervention. However, it seems recommended to always keep your machines in an Oracle VM repository, in the running_pool, because only that way can they be managed by the Oracle VM GUI.
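
As an illustration of that manual move, here is a minimal sketch; the machine name, image file name and target repository are made up, and the machine must be shut down first:

xm shutdown myvm                                 # stop the virtual machine
mv /OVS/running_pool/myvm/system.img \
   /OVS/877DECC5B658433D9E0836AFC8843F1B/running_pool/myvm/system.img
vi /etc/xen/myvm                                 # repoint the disk = ['file:...'] entry to the new path
xm create myvm                                   # start the machine again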

I am sure that there are many things that have an obvious explanation, but I have to admit that I didn't read the manuals of OCFS and Oracle VM completely from start to end. Also I think that Oracle

Conclusion: Oracle VM seems to be capable of having multiple repositories on different disks, but the GUI is not ready to handle them. However, with a minimum of manual intervention, it is easy to do all desired tasks in command-line mode.

Checking who else has checked out?

Susan Duncan - Mon, 2008-06-09 06:08
A comment on an earlier post has prompted me to clarify the way that Subversion handles certain tasks. The commenter is using SQL Developer, which uses JDeveloper's SVN implementation, and wants:

1. The Subversion navigator to indicate if others are working on the same code

Subversion uses the copy-modify-merge paradigm. This means that any user with the correct authorization can check out a copy of the code from the repository to a local file system. This local copy can be manipulated (using JDeveloper, Tortoise, the command line, etc.) so that updates and commits can be carried out from it. However, the Subversion repository does not have any understanding of how many users are working on or have checked out the same code. Updates and commits are instigated from the local copy. This also means that any local copy checked out from Subversion may never be checked back in.
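
In command-line terms the copy-modify-merge cycle looks roughly like this (the repository URL and file name are placeholders):

svn checkout http://svnhost/repos/myproject/trunk work   # private working copy
cd work
# ... edit my_package.pks locally; the repository records nothing about this ...
svn update                         # merge in changes others have committed meanwhile
svn commit -m "Update my_package"  # publish the local changes back to the repository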

2. When double clicking on a package in the database or in the versioning tree, to have the option to load the local copy (linked to Subversion)

With respect to the request to double click on the versioning tree, this comes back to Subversion having no knowledge of the local copies. I think he is asking that the local copy be updated through the Subversion navigator - but updates are driven from the local copy, not the repository.

As for the database package, that would be another level of complexity. Presumably the single point of truth is the package definition held in the SVN repository. To ensure that the database holds the latest version, the user would have to check out the latest version as a local copy from the repository and update the DB. I'm not sure that somehow automating this process would be desirable - it would need links from the DB to the tool to the correct local copy location and through this to the SVN repository - sounds error-prone to me.

3. Be able to do a compile and see the log window

This is a SQL Developer question rather than an SVN-oriented one, so I'll leave that to my colleagues with SQL Developer to answer.

The CIO asked: How long has my production database been down?

Gaurav Verma - Sat, 2008-06-07 00:02

And I had no answer for him. I couldn't blame him; CIOs want to know this kind of information. Better still, he would have liked a pie chart depicting it.

I wish...

Well, for once, it would have been nice if Oracle 9i or 10g kept historical startup and shutdown information in the v$ or some dba_* tables. A simple query off the table would have got me the answer.

Anyways, it set me thinking. There are always other ways to get the same information. Statspack or AWR was one possibility, but I was not sure if they really gather detailed information about when the instance was shut down or started up (historically) -- they sure are aware if there was an instance restart between two snaps.
An Aha moment...

But wait, the database alert log has information about each startup and shutdown! So if we could mine the alert log for the right regular expressions, and then find the time differences between the time stamps, it could be done.

This method would not give you the overall downtime for the production environment, including the downtime for the middle tiers or Apache web server, but the same idea could probably be extended to those other services; in this article, the scope is just the database. There is an auxiliary script (get_epoch.sh) supplied here that is useful in this quest.
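
To give a flavour of the mining itself: the alert log prints a timestamp line immediately before messages such as "Starting ORACLE instance" and "Shutting down instance", so a rough first cut (the exact message text varies between versions, so treat this as a sketch) is:

# timestamps like "Thu Jun  5 10:36:58 2008" sit on the line above each event
grep -B1 "Starting ORACLE instance" alert_test.log | grep "^[A-Z][a-z][a-z] "
grep -B1 "Shutting down instance"   alert_test.log | grep "^[A-Z][a-z][a-z] "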

Auxiliary script: get_epoch.sh

Also available for download here.

# The format of input date is:  Thu Jun  5 21:15:48 2008
# NOTE: The format of `date` command is:  Thu Jun  5 21:15:48 EDT 2008
# -- it has the timezone in the output as well

# BUT this script does not assume a timezone, since the timestamps in
# the alert log don't have the timezone in them

# This script heavily uses this function to convert a timestamp
# into seconds after 1 Jan 1970:

# timelocal($sec,$min,$hour,$mday,$mon,$year,$wday,$yday,$isdst);

use Time::Local;

my $wday = $ARGV[0];

my $month = $ARGV[1];
# convert the month shortname into 0-11 number
if ( $month eq "Jan" ) { $mon = 0 }
elsif ( $month eq "Feb" ) { $mon = 1 }
elsif ( $month eq "Mar" ) { $mon = 2 }
elsif ( $month eq "Apr" ) { $mon = 3 }
elsif ( $month eq "May" ) { $mon = 4 }
elsif ( $month eq "Jun" ) { $mon = 5 }
elsif ( $month eq "Jul" ) { $mon = 6 }
elsif ( $month eq "Aug" ) { $mon = 7 }
elsif ( $month eq "Sep" ) { $mon = 8 }
elsif ( $month eq "Oct" ) { $mon = 9 }
elsif ( $month eq "Nov" ) { $mon = 10 }
elsif ( $month eq "Dec" ) { $mon = 11 };

my $mday = $ARGV[2];

# initialize time variable and split hours (24 hr format), minutes, seconds into an array
my $time = $ARGV[3];
@time = split /:/, $time;

# if the timezone is left out of the input, the position of year becomes 5th in ARGV
my $year = $ARGV[4];

# I found that by excluding $wday, the seconds result (epoch) is more
# accurate, so the $wday parameter has been omitted from the call to
# the timelocal() function.

$epoch= timelocal($time[2], $time[1], $time[0], $mday, $mon, $year);
print "$epoch\n";

The main script...

Due to formatting issues, the main script is available for download here.
Sample usage and output...

I realized that it would probably make more sense to have an optional cutoff date to calculate the downtime from, so that was added in version 2 of the script. Version 1, which calculates the downtime from the first database startup time, is uploaded here.

sandbox:sandbox> ./calculate_downtime.sh                             
Correct Usage: calculate_downtime.sh alertlogfilepath [cutoff_date in format Sat Jun 7 08:49:34 2008]

sandbox:sandbox> ./calculate_downtime.sh $DATA_DIR/admin/bdump/alert*.log Fri Mar 28 15:20:59 2008

Cutoff date is : Fri Mar 28 15:20:59 2008

Shutdown times:

Timestamp              --  epoch (seconds)

Wed Jan 9 17:53:08 2008 - 1199919188
Wed Jan 16 12:05:09 2008 - 1200503109
Fri Jan 18 11:19:42 2008 - 1200673182
Thu Jan 24 17:34:15 2008 - 1201214055
Fri Feb 15 09:00:44 2008 - 1203084044
Wed Feb 20 16:50:14 2008 - 1203544214
Wed Mar 12 12:43:26 2008 - 1205340206
Fri Mar 28 15:21:59 2008 - 1206732119
Thu Apr 3 11:03:52 2008 - 1207235032
Thu Apr 3 11:10:20 2008 - 1207235420
Thu Apr 3 11:15:44 2008 - 1207235744
Thu Apr 3 11:22:38 2008 - 1207236158
Thu Apr 3 11:27:36 2008 - 1207236456
Thu Apr 3 11:34:35 2008 - 1207236875
Thu Apr 3 11:41:36 2008 - 1207237296
Mon May 12 14:17:13 2008 - 1210616233
Thu Jun 5 10:36:58 2008 - 1212676618

Startup times:

Timestamp              --  epoch (seconds)

Wed Jan 9 17:50:42 2008 -- 1199919042
Thu Jan 10 09:43:18 2008 -- 1199976198
Thu Jan 17 12:00:03 2008 -- 1200589203
Fri Jan 18 11:26:13 2008 -- 1200673573
Wed Jan 30 12:19:21 2008 -- 1201713561
Tue Feb 19 22:57:38 2008 -- 1203479858
Wed Mar 12 12:39:03 2008 -- 1205339943
Mon Mar 24 13:44:20 2008 -- 1206380660
Thu Apr 3 11:00:33 2008 -- 1207234833
Thu Apr 3 11:07:12 2008 -- 1207235232
Thu Apr 3 11:14:01 2008 -- 1207235641
Thu Apr 3 11:20:54 2008 -- 1207236054
Thu Apr 3 11:25:25 2008 -- 1207236325
Thu Apr 3 11:31:53 2008 -- 1207236713
Thu Apr 3 11:40:18 2008 -- 1207237218
Tue Apr 29 16:50:49 2008 -- 1209502249
Mon Jun 2 14:20:38 2008 -- 1212430838
Thu Jun 5 10:38:39 2008 -- 1212676719
As per the alert log, the instance is currently up

Here are the downtime windows ...

Wed Jan 9 17:50:42 2008 -- Wed Jan 9 17:53:08 2008
Thu Jan 10 09:43:18 2008 -- Wed Jan 16 12:05:09 2008
Thu Jan 17 12:00:03 2008 -- Fri Jan 18 11:19:42 2008
Fri Jan 18 11:26:13 2008 -- Thu Jan 24 17:34:15 2008
Wed Jan 30 12:19:21 2008 -- Fri Feb 15 09:00:44 2008
Tue Feb 19 22:57:38 2008 -- Wed Feb 20 16:50:14 2008
Wed Mar 12 12:39:03 2008 -- Wed Mar 12 12:43:26 2008
Mon Mar 24 13:44:20 2008 -- Fri Mar 28 15:21:59 2008
Thu Apr 3 11:00:33 2008 -- Thu Apr 3 11:03:52 2008
Thu Apr 3 11:07:12 2008 -- Thu Apr 3 11:10:20 2008
Thu Apr 3 11:14:01 2008 -- Thu Apr 3 11:15:44 2008
Thu Apr 3 11:20:54 2008 -- Thu Apr 3 11:22:38 2008
Thu Apr 3 11:25:25 2008 -- Thu Apr 3 11:27:36 2008
Thu Apr 3 11:31:53 2008 -- Thu Apr 3 11:34:35 2008
Thu Apr 3 11:40:18 2008 -- Thu Apr 3 11:41:36 2008
Tue Apr 29 16:50:49 2008 -- Mon May 12 14:17:13 2008
Mon Jun 2 14:20:38 2008 -- Thu Jun 5 10:36:58 2008
Thu Jun 5 10:38:39 2008 --

Downtime 1 : Wed Jan  9 17:53:08 2008 (1199919188)            --> Thu Jan 10 09:43:18 2008 (1199976198) = 57010 seconds
the cutoff date Fri Mar 28 15:20:59 2008 is < than shutdown time Wed Jan  9 17:53:08 2008 - so not accruing
Running Cumulative downtime = 0 seconds

Downtime 2 : Wed Jan 16 12:05:09 2008 (1200503109)            --> Thu Jan 17 12:00:03 2008 (1200589203) = 86094 seconds
the cutoff date Fri Mar 28 15:20:59 2008 is < than shutdown time Wed Jan 16 12:05:09 2008 - so not accruing
Running Cumulative downtime = 0 seconds

Downtime 3 : Fri Jan 18 11:19:42 2008 (1200673182)            --> Fri Jan 18 11:26:13 2008 (1200673573) = 391 seconds
the cutoff date Fri Mar 28 15:20:59 2008 is < than shutdown time Fri Jan 18 11:19:42 2008 - so not accruing
Running Cumulative downtime = 0 seconds

Downtime 4 : Thu Jan 24 17:34:15 2008 (1201214055)            --> Wed Jan 30 12:19:21 2008 (1201713561) = 499506 seconds
the cutoff date Fri Mar 28 15:20:59 2008 is < than shutdown time Thu Jan 24 17:34:15 2008 - so not accruing
Running Cumulative downtime = 0 seconds

Downtime 5 : Fri Feb 15 09:00:44 2008 (1203084044)            --> Tue Feb 19 22:57:38 2008 (1203479858) = 395814 seconds
the cutoff date Fri Mar 28 15:20:59 2008 is < than shutdown time Fri Feb 15 09:00:44 2008 - so not accruing
Running Cumulative downtime = 0 seconds

Downtime 6 : Wed Feb 20 16:50:14 2008 (1203544214)            --> Wed Mar 12 12:39:03 2008 (1205339943) = 1795729 seconds
the cutoff date Fri Mar 28 15:20:59 2008 is < than shutdown time Wed Feb 20 16:50:14 2008 - so not accruing
Running Cumulative downtime = 0 seconds

Downtime 7 : Wed Mar 12 12:43:26 2008 (1205340206)            --> Mon Mar 24 13:44:20 2008 (1206380660) = 1040454 seconds
the cutoff date Fri Mar 28 15:20:59 2008 is < than shutdown time Wed Mar 12 12:43:26 2008 - so not accruing
Running Cumulative downtime = 0 seconds

Downtime 8 : Fri Mar 28 15:21:59 2008 (1206732119)            --> Thu Apr  3 11:00:33 2008 (1207234833) = 502714 seconds
the cutoff date Fri Mar 28 15:20:59 2008 is > shutdown time Fri Mar 28 15:21:59 2008 - greater than cutoff, so accruing
Running Cumulative downtime = 502714 seconds

Downtime 9 : Thu Apr  3 11:03:52 2008 (1207235032)            --> Thu Apr  3 11:07:12 2008 (1207235232) = 200 seconds
the cutoff date Fri Mar 28 15:20:59 2008 is > shutdown time Thu Apr  3 11:03:52 2008 - greater than cutoff, so accruing
Running Cumulative downtime = 502914 seconds

Downtime 10 : Thu Apr  3 11:10:20 2008 (1207235420)            --> Thu Apr  3 11:14:01 2008 (1207235641) = 221 seconds
the cutoff date Fri Mar 28 15:20:59 2008 is > shutdown time Thu Apr  3 11:10:20 2008 - greater than cutoff, so accruing
Running Cumulative downtime = 503135 seconds

Downtime 11 : Thu Apr  3 11:15:44 2008 (1207235744)            --> Thu Apr  3 11:20:54 2008 (1207236054) = 310 seconds
the cutoff date Fri Mar 28 15:20:59 2008 is > shutdown time Thu Apr  3 11:15:44 2008 - greater than cutoff, so accruing
Running Cumulative downtime = 503445 seconds

Downtime 12 : Thu Apr  3 11:22:38 2008 (1207236158)            --> Thu Apr  3 11:25:25 2008 (1207236325) = 167 seconds
the cutoff date Fri Mar 28 15:20:59 2008 is > shutdown time Thu Apr  3 11:22:38 2008 - greater than cutoff, so accruing
Running Cumulative downtime = 503612 seconds

Downtime 13 : Thu Apr  3 11:27:36 2008 (1207236456)            --> Thu Apr  3 11:31:53 2008 (1207236713) = 257 seconds
the cutoff date Fri Mar 28 15:20:59 2008 is > shutdown time Thu Apr  3 11:27:36 2008 - greater than cutoff, so accruing
Running Cumulative downtime = 503869 seconds

Downtime 14 : Thu Apr  3 11:34:35 2008 (1207236875)            --> Thu Apr  3 11:40:18 2008 (1207237218) = 343 seconds
the cutoff date Fri Mar 28 15:20:59 2008 is > shutdown time Thu Apr  3 11:34:35 2008 - greater than cutoff, so accruing
Running Cumulative downtime = 504212 seconds

Downtime 15 : Thu Apr  3 11:41:36 2008 (1207237296)            --> Tue Apr 29 16:50:49 2008 (1209502249) = 2264953 seconds
the cutoff date Fri Mar 28 15:20:59 2008 is > shutdown time Thu Apr  3 11:41:36 2008 - greater than cutoff, so accruing
Running Cumulative downtime = 2769165 seconds

Downtime 16 : Mon May 12 14:17:13 2008 (1210616233)            --> Mon Jun  2 14:20:38 2008 (1212430838) = 1814605 seconds
the cutoff date Fri Mar 28 15:20:59 2008 is > shutdown time Mon May 12 14:17:13 2008 - greater than cutoff, so accruing
Running Cumulative downtime = 4583770 seconds

Downtime 17 : Thu Jun  5 10:36:58 2008 (1212676618)            --> Thu Jun  5 10:38:39 2008 (1212676719) = 101 seconds
the cutoff date Fri Mar 28 15:20:59 2008 is > shutdown time Thu Jun  5 10:36:58 2008 - greater than cutoff, so accruing
Running Cumulative downtime = 4583871 seconds

Calculating lifetime of instance as per criteria specified..

Starting time being used = Fri Mar 28 15:20:59 2008 -- 1206732059
Ending time epoch = Sat Jun 7 11:49:49 2008 -- 1212853789
Total lifetime in seconds = 6121730

   Beginning Fri Mar 28 15:20:59 2008, The instance was down 74 % of the time

Application of this data...

Now, this data could be put in some kind of dashboard for CIO meetings. This would give them an approximate idea of how long their databases (and the dependent middle tier or admin tier services) remain down due to maintenance. Sure, this method cannot distinguish between unplanned and planned maintenance, but it's probably a good start.

Yet another External XML parse error: for BATCH CLOSE processing of IBYSCHEDULER module .. this time

Gaurav Verma - Fri, 2008-06-06 22:33

While I have written a previous article on an XML parsing error for online iPayment transactions in Oracle Applications 11i (For the want of XML parsing, iPayment was lost; for the want of not being able to take payment, business was lost), the customer had always suffered from another issue with the batch close processing carried out by the IBYSCHEDULER module: iPayment Scheduler program.

Unknown to me, there was another person from the customer's production support who was working diligently with Oracle Support and development to have this addressed. This is the story of Patrick Baker, who gets the credit for having this resolved over a period of one long year. Interestingly, the solution was to use another copy of the xmlparserv2 archive file on the concurrent manager tier.

The version of the Oracle Applications was (as per FND & ATG) and 11.5.9 for some other products.
The problem and error message

Almost every night, the customer would get this error in the BATCH CLOSE processing program (the details of the servers and domain name have been blurred out to protect the data integrity of the customer):

iPayment: Version : 11.5.0 - Development

Copyright (c) 1979, 1999, Oracle Corporation. All rights reserved.

IBYSCHEDULER module: iPayment Scheduler

Current system time is 28-MAR-2007 00:16:23


Processing BATCH CLOSE operations ..

empty batch for account (payee id=087295,account id=XXXXXXX Corp:087295:944599:XXXXXX:944599:CORPORAT,batch id=3343)
exception occured for (payee id=087295,account id=XXXXXXXX Corp:087295:944599:XXXXXXX:944599:CORPORAT,batch id=3344) External XML parse error.  Document passed to iPayment by external application http://ipayment.xxxxxxxx.com:8000/servlet/oramipp_ptk generated XML parse error Start of root element expected. .

The Stack Trace is -
oracle.apps.iby.exception.PSException: External XML parse error.  Document passed to iPayment by external application http://payment.xxxxxxxx.com:8000/servlet/oramipp_ptk generated XML parse error Start of root element expected. .
    at oracle.apps.iby.util.bpsUtil.raiseException(bpsUtil.java:159)
    at oracle.apps.iby.net.XMLMessenger.deliverDoc(XMLMessenger.java:138)
    at oracle.apps.iby.payment.proc.BatchCCPayment.closeBatch(BatchCCPayment.java:1147)
    at oracle.apps.iby.scheduler.SchedBatchClose.schedPmt(SchedBatchClose.java:124)
    at oracle.apps.iby.scheduler.Scheduler.doProcess(Scheduler.java:260)
    at oracle.apps.iby.scheduler.Scheduler.init(Scheduler.java:297)
    at oracle.apps.iby.scheduler.SchedInitiator.runProgram(SchedInitiator.java:200)
    at oracle.apps.fnd.cp.request.Run.main(Run.java:161)

Finished processing BATCH CLOSE
Start of log messages from FND_FILE
End of log messages from FND_FILE
Successfully resubmitted concurrent program IBYSCHEDULER with request ID 32185967 to start at 29-MAR-2007 00:15:56 (ROUTINE=AFPSRS)

Executing request completion options...

Finished executing request completion options.

Concurrent request completed
Current system time is 28-MAR-2007 00:22:09


Let's think a bit...

OK, let's try to make some sense out of it. Since this error was being received in the output of a concurrent manager, obviously the XML parser class file involved was on the concurrent manager tier and NOT on the iPayment tier (which was used by the online transactions).
A different solution...

On the dedicated iPayment tier, the same error message was resolved by using the $JAVA_TOP/xmlparserv2.zip file in $IAS_ORACLE_HOME/Apache/Jserv/etc/jserv.properties, but in this case the solution was to use $IAS_ORACLE_HOME/xdk/lib/xmlparserv2.jar instead, in different files:

To implement the solution, open the file $APPL_TOP/admin/adovars.env on the concurrent manager tier, and in the values of the CLASSPATH and AF_CLASSPATH variables, add $IAS_ORACLE_HOME/xdk/lib/xmlparserv2.jar: before $JAVA_TOP/appsborg2.zip:

Note: this must be done in both CLASSPATH and AF_CLASSPATH.

To prevent the entries in $APPL_TOP/admin/adovars.env from being overwritten, you can either add this at the end of the file, between the # Begin customizations and # End customizations tags, or you can create your own custom AutoConfig template file for adovars.env with the changes.


# Begin customizations
# End customizations
Note: make sure the content for CLASSPATH and AF_CLASSPATH is included in one line if you do a cut and paste. The entire value SHOULD BE ONE SINGLE LINE, otherwise the value will get corrupted.
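
For illustration, the finished customization block might look something like this (a sketch, not the literal file content; the existing entries in each variable are elided):

# Begin customizations
CLASSPATH=...existing entries...:$IAS_ORACLE_HOME/xdk/lib/xmlparserv2.jar:$JAVA_TOP/appsborg2.zip:...remaining entries...
AF_CLASSPATH=...existing entries...:$IAS_ORACLE_HOME/xdk/lib/xmlparserv2.jar:$JAVA_TOP/appsborg2.zip:...remaining entries...
export CLASSPATH AF_CLASSPATH
# End customizations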

After this, the concurrent managers do need to be bounced and then the IBYSCHEDULER module: iPayment Scheduler program gives expected results.

A new learning...

From this experience, it now seems that it is possible to make $IAS_ORACLE_HOME/xdk/lib/xmlparserv2.jar work for the parsing needs on the dedicated iPayment tier, servicing online transactions, too.

For that, you need to put the following entry in $IAS_ORACLE_HOME/Apache/Jserv/etc/jserv.properties, between the # BEGIN customizations and # END customizations tags (this should be done at the END of the file):

# BEGIN customizations
# END customizations
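
The entry itself would be a classpath directive; in jserv.properties, extra archives are added with wrapper.classpath lines, so the block presumably looked something like this (my reconstruction; jserv.properties needs the full literal path, as it does not expand environment variables):

# BEGIN customizations
# full path corresponding to $IAS_ORACLE_HOME/xdk/lib/xmlparserv2.jar:
wrapper.classpath=/path/to/ias_oracle_home/xdk/lib/xmlparserv2.jar
# END customizations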

Mix of Old & New style buttons in OA Framework pages

Aviad Elbaz - Fri, 2008-06-06 08:59

After some heavy patches were applied on our system, we noticed that some buttons in OAF pages look like the old-style gray buttons while the others are the fine new-style yellow buttons.

For example:


(The "Advanced" is the old style and all the others are the new style)

Trying to clear the cache ($COMMON_TOP/_pages) and bounce Apache didn't solve the problem.

The solution is hiding within jserv.properties:

  1. Edit $IAS_ORACLE_HOME/Apache/Jserv/etc/jserv.properties
  2. Change the java.awt.headless setting to TRUE (see the sketch after this list)
  3. (optional) Clear all content from $OA_HTML/cabo/images/cache (e.g rm -rf $OA_HTML/cabo/images/cache)
  4. (optional) Clear all content from $COMMON_TOP/_pages
  5. Bounce Apache
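
The setting in question appears to be java.awt.headless, matching the s_java_awt_headless context variable shown below; in jserv.properties it is passed to the JVM, presumably along these lines (a reconstruction, not the literal line):

# in $IAS_ORACLE_HOME/Apache/Jserv/etc/jserv.properties
wrapper.bin.parameters=-Djava.awt.headless=true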

And the problem will be resolved...


In order to make this change permanent, you should update the application context file as follows; otherwise the next run of AutoConfig will overwrite your change.

  1. Edit $APPL_TOP/admin/$CONTEXT_NAME.xml
  2. Change the following to:
    <java_awt_headless oa_var="s_java_awt_headless">true</java_awt_headless>
  3. Run AutoConfig on Apps Tier.
  4. Bounce Apache

Related Note: 368188.1 - Buttons Are Not Rendering Correctly In Self Service Framework Pages.


Categories: APPS Blogs

Going To New Orleans - ODTUG Kaleidoscope 2008

Susan Duncan - Fri, 2008-06-06 07:36
I've traveled and spoken on Oracle all around the world at many events, but this year will be my first at ODTUG (June 15th-19th). I'm looking forward to both the technical and the non-technical aspects of the conference. New Orleans is somewhere I haven't visited in 15 years, and I'm happy to be one of the nearly-75-strong ODTUG Brigade volunteering for a day of community service work to give back to the city that has given so much to music lovers and so many others like me.

The conference is packed with keynotes and sessions; I'm going to be presenting two:
Who Moved My Code? - Team Development in Oracle JDeveloper on Wednesday 8.00-9.00am
Seven Secrets (and more) of Successful JDeveloper Database Designers on Wednesday 2.45-3.45pm

Both of these will be predominantly demo-driven sessions. In the first, Lynn Munsinger will be joining me so we can demo multi-developer tips and tricks using Subversion. The Seven Secrets session will focus on existing and new features for database development and visualization for application developers. I'm hoping also to squeeze in a sneak preview of a project I'm working on around Application Lifecycle Management. Please join me if you are at ODTUG, as I would welcome your feedback.

Our Usability Research Team is running some feedback sessions that Lynn, Grant Ronald and I will be attending; be sure to sign up for one of those. Plus, if you want to talk to us about any aspect of JDeveloper, we will be in the exhibit halls ready and willing to demo and discuss.

Finally, I hear there is an ODTUG Jam Session and I'm pretty sure I won't be able to resist!

