Feed aggregator

Ingersoll Rand presenting Flow Manufacturing at Open World

Chris Grillone - Fri, 2007-11-09 13:28
Valerie Dubois and I will be co-presenting with Maggie Park from Ingersoll Rand at Open World. Maggie's presentation title is "Implementing Flow Manufacturing at Ingersoll Rand".

Session ID: S292704
Session Title: Leveraging Flow Manufacturing with Your Enterprisewide Lean Initiative
Track: Manufacturing; Automotive; High Tech; Industrial Manufacturing; Life Sciences
Room: Nob Hill CD
Date: 2007-11-14
Start Time: 11:15

Please visit us at the "Meet The Experts" Flow Manufacturing demo on Wednesday afternoon, starting at 2:30 PM in Space 2 in Moscone West. Barcode scanning, kanban, and RFID are featured in this demo.

Sharing is caring

Fadi Hasweh - Wed, 2007-11-07 05:08
It has been a while since my last post. Today I will post about some Oracle books that can be viewed online using http://books.google.com. Go there and search for 11i, or for "oracle e-business suite" + altasen for the workflow and sysadmin books. Of course, not all of the books are available online for free, only some parts, but that will give you a good hint about a book before you buy it.

Good for Google and good for us; sharing is caring.

My next post will be technical so keep visiting.

Understanding the Server Manager Downloads

Mark Vakoc - Tue, 2007-11-06 20:49
Server Manager is mastered along the standard tools release schedule. That means each tools release, maintenance release, and update will include downloads for Server Manager. For each tools release there will be two downloads.

If you are installing SM for the first time, I recommend obtaining the latest installer. This is a large (1 GB) download used to perform the initial installation.

If you have already installed SM, you may download a significantly smaller update (around 30-50 MB) and apply it to your existing SM installation (using SM itself to perform the update, of course).

Installing an earlier release and then applying an update is functionally identical to installing that release directly. You may also go backwards to an earlier release, if desired.

Remember you may always use the latest SM release even if you are managing earlier tools releases of the E1 components. Using the latest SM release ensures you have all the latest bug fixes and functionality available.

Visit us at OpenWorld '07

Mark Vakoc - Tue, 2007-11-06 20:47
Some of my colleagues and I will be at OpenWorld this year. Visit us at the EnterpriseOne Tools and Technology booths in the DEMOgrounds, and attend the Server Manager session on Thursday morning. See you there!

Here we go....

Mark Vakoc - Tue, 2007-11-06 20:16
If you can read this it means we have gone GA with the 8.97 tools release.  Download, install, and go crazy with Server Manager.  And stay tuned for more tips, tricks, and detailed information about Server Manager.

OCR Mirroring

Fairlie Rego - Tue, 2007-11-06 05:10
In my last blog I talked about adding mirrors to your voting disk. Here I will document how to add mirrors to your OCR on a clustered file system like vxfs.

In the existing configuration there is only one OCR. The location of the OCR is specified in /var/opt/oracle/ocr.loc (on Solaris); the file's location differs depending on your port.

You need CRS to be up when you add an OCR mirror; otherwise you get an error message like the one below:

root@bart # /u01/app/oracle/product/10.1.0/crs/bin/ocrconfig -replace ocrmirror /u02/oracrs/ocr_file_02
PROT-1: Failed to initialize ocrconfig

For more details on why you have received an error check the latest ocrconfig log file under $ORA_CRS_HOME/log/node/client

root@bart # cat ocrconfig_1742.log
Oracle Database 10g CRS Release Production Copyright 1996, 2005 Oracle. All rights reserved.
2007-10-27 12:32:19.130: [ OCRCONF][1]ocrconfig starts...
2007-10-27 12:32:20.933: [ OCRCONF][1]Failure in initializing ocr in DEFAULT level. error:[PROC-32: Cluster Ready Services on the local node i
s not running Messaging error [9]]
2007-10-27 12:32:20.933: [ OCRCONF][1]Exiting [status=failed]...

From the above error message it is clear that CRS needs to be up.

So after starting up CRS

root@bart # /u01/app/oracle/product/10.1.0/crs/bin/ocrconfig -replace ocrmirror /u02/oracrs/ocr_file_02
PROT-16: Internal Error

Even though I got an error, it appears to have created the mirror:

root@bart # cat /var/opt/oracle/ocr.loc
#Device/file getting replaced by device

# ocrcheck
Status of Oracle Cluster Registry is as follows :
Version : 2
Total space (kbytes) : 262120
Used space (kbytes) : 24932
Available space (kbytes) : 237188
ID : 1909893349
Device/File Name : /u02/oracrs/ocr.dbf
Device/File integrity check succeeded
Device/File Name : /u02/oracrs/ocr_file_02
Device/File integrity check succeeded

Now let us replace the existing OCR with a new file:

root@bart # /u01/app/oracle/product/10.1.0/crs/bin/ocrconfig -replace ocr /u02/oracrs/ocr_file_01
PROT-16: Internal Error

root@bart # cat /var/opt/oracle/ocr.loc
#Device/file /u02/oracrs/ocr.dbf getting replaced by device

# ocrcheck
Status of Oracle Cluster Registry is as follows :
Version : 2
Total space (kbytes) : 262120
Used space (kbytes) : 24932
Available space (kbytes) : 237188
ID : 1909893349
Device/File Name : /u02/oracrs/ocr_file_01
Device/File integrity check succeeded
Device/File Name : /u02/oracrs/ocr_file_02
Device/File integrity check succeeded

Cluster registry integrity check succeeded

At this point you can shut down and start up CRS and see if all is fine. One point to note: check that the ocr.loc files on all remote nodes have been updated with the new location of the OCR file. In my testing the CRS stack on the remote node was down, and in such cases the ocr.loc on the remote node does not get updated.
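A quick way to eyeball this is to compare the registered locations on each node. The snippet below is only a sketch: the sample ocr.loc content is an assumption based on the usual 10g key names (ocrconfig_loc, ocrmirrorconfig_loc) on Solaris, not output captured from this cluster.

```shell
# Hypothetical ocr.loc content after the mirror add (key names assumed)
cat > /tmp/ocr.loc <<'EOF'
ocrconfig_loc=/u02/oracrs/ocr_file_01
ocrmirrorconfig_loc=/u02/oracrs/ocr_file_02
local_only=FALSE
EOF

# List the registered OCR locations; run the same check on every node
grep -E '^(ocrconfig_loc|ocrmirrorconfig_loc)=' /tmp/ocr.loc | cut -d= -f2
```

If the mirror line is missing on a remote node, fix it there before restarting CRS.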

If the OCR mirror is removed or corrupted while the cluster is running, the cluster continues running, and all CRS commands including ocrcheck can still be run if you comment out the corresponding line in /var/opt/oracle/ocr.loc.

If you lose the primary OCR, the cluster keeps running, but ocrcheck fails and CRS cannot be stopped, even if you comment out the corresponding line and even though the mirror is available.

root@bart # cat /var/opt/oracle/ocr.loc
#Device/file /u02/oracrs/ocr_file_01 getting replaced by device /u02/oracrs/ocr_file_01

Things are fine if I replace the OCR using:
root@bart # /u01/app/oracle/product/10.1.0/crs/bin/ocrconfig -replace ocr /u02/oracrs/ocr_file_01
PROT-16: Internal Error

Changing DB 32-Bit to 64-Bit

Madan Mohan - Tue, 2007-11-06 02:56

1. Ensure that there is ample free space for the 64-bit release installation.
The recommended free space is 3 GB.

2. Start the Installer GUI.

3. On the File Locations screen, create a new name and path for the 64-bit
Oracle installation under Destination.

A typical entry would be

Name: orahome920_64b
Path: /u01/app/oracle/product/9.2.0-64b

4. Proceed with the installation. Stop at the configuration assistant screen.

5. Install the latest 64-bit patch set under the new Oracle installation.


The instructions in this section guide you through changing the word-size of
your current release (switching from 32-bit software to 64-bit software or vice versa).

Complete the following steps to change the word-size of your current release:

1. Start SQL*Plus.

2. Connect to the database instance AS SYSDBA.

3. Run SHUTDOWN IMMEDIATE on the database.

4. Perform a full offline backup of the data using the available backup
mechanism, e.g. BCV or a Unix file copy.

5. If the initialization parameter files, e.g. initSID.ora, spfileSID.ora, listener.ora, sqlnet.ora, reside within the old ORACLE_HOME, copy them to the corresponding location in the new 64-bit ORACLE_HOME. If the parameter files are symbolic links to another location, the symbolic links have to be recreated in the new ORACLE_HOME.

The same links have to be created in the new $ORACLE_HOME/dbs.

6. Change your active Oracle instance environment to point at the new 64-bit ORACLE_HOME.
Edit /var/opt/oracle/oratab if using dbhome/oraenv to set the environment, and set it to the new 64-bit Oracle Home path.

Change the essential environment settings, e.g. $ORACLE_HOME and $LD_LIBRARY_PATH, to use the new 64-bit Oracle Home path. For example, change

export ORACLE_HOME=/u01/app/oracle/product/9.2.0

to

export ORACLE_HOME=/u01/app/oracle/product/9.2.0-64b
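As a minimal sketch of the switch in a shell session (the ORACLE_HOME path is the one from the example above; extending LD_LIBRARY_PATH and PATH with the usual lib and bin subdirectories is my assumption, not part of the original note):

```shell
# Repoint the environment at the new 64-bit Oracle home
export ORACLE_HOME=/u01/app/oracle/product/9.2.0-64b
# Assumed standard layout: libraries and binaries live under the new home
export LD_LIBRARY_PATH=$ORACLE_HOME/lib${LD_LIBRARY_PATH:+:$LD_LIBRARY_PATH}
export PATH=$ORACLE_HOME/bin:$PATH

echo "ORACLE_HOME=$ORACLE_HOME"
```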

7. Set AQ_TM_PROCESSES=0 if it is not already.

          a) If using initSID.ora to start the instance, add it to the init file.

          b) If using spfileSID.ora to start the instance, the database can be started up and the parameter set by running the command below:

SQL> ALTER SYSTEM SET aq_tm_processes=0 SCOPE=SPFILE;

          c) Shut down the database again.
9. Set _system_trig_enabled = FALSE.

          a) If using initSID.ora to start the instance, add it to the init file.

          b) If using spfileSID.ora to start the instance, the database can be started up and the parameter set by running the command below:

SQL> ALTER SYSTEM SET "_system_trig_enabled"=FALSE SCOPE=SPFILE;

          c) Shut down the database again.

The parameter should be set to FALSE for scripts that perform dictionary operations, as the objects on which the triggers depend may become invalid or be dropped, causing the triggers to fail and thus preventing the scripts from running successfully.

10. When migrating from a 32-bit Oracle version to a 64-bit Oracle version, Oracle recommends doubling the size of parameters such as:


11. At a system prompt, change to the new 64-bit ORACLE_HOME/rdbms/admin directory.

12. Start SQL*Plus.

13. Connect to the database instance AS SYSDBA.



15. Run the following script:

SQL> @?/rdbms/admin/catalog.sql

16. Check DBMS_STANDARD for validity.

17. If it is invalid, run:

SQL> alter package dbms_standard compile;
18. Run the following script:

SQL> @?/rdbms/admin/catproc.sql

19. Set the system to spool results to a log file for later verification:

SQL> SPOOL /tmp/catoutw.log

20. Run utlirp.sql:

SQL> @?/rdbms/admin/utlirp.sql

The utlirp.sql script recompiles existing PL/SQL modules in the format required by the new database. This script first
alters certain dictionary tables. Then, it reloads package STANDARD and DBMS_STANDARD, which are necessary for using P
L/SQL. Finally, it triggers a recompile of all PL/SQL modules, such as packages, procedures, types, and so on.

21. Turn off the spooling of script results to the log file:

SQL> SPOOL OFF

Then check the spool file and verify that the packages and procedures compiled successfully. You named the spool file in Step 19; the suggested name was catoutw.log. Correct any problems you find in this file.



23. Remove the parameter aq_tm_processes or set it back to its original value.

          a) If using initSID.ora to start the instance, remove or edit it in the init file after shutting down the database.

          b) If using spfileSID.ora to start the instance, the parameter can be changed by running the command below.

To remove the parameter:

SQL> ALTER SYSTEM RESET aq_tm_processes SCOPE=SPFILE SID='*';

24. Remove the parameter _system_trig_enabled = FALSE.

          a) If using initSID.ora to start the instance, remove it from the init file after shutting down the database.

          b) If using spfileSID.ora to start the instance, the parameter can be removed by running the command below.

SQL> ALTER SYSTEM RESET "_system_trig_enabled" SCOPE=SPFILE SID='*';

25. The word-size of your database is now changed. The database can be shut down and reopened for normal use.

Openworld San Francisco

Jo Davis - Mon, 2007-11-05 20:57
Yes! The rumour is true! I will be in San Francisco, co-presenting about Internet Expenses! Come to the session.... or catch me for a coffee ... definitely the place to be next week!

For those of you who are preregistering here is the info:
Session ID: S291212
Date: Wednesday 14th November
Time: 3pm - 4pm (What are the chances everyone will fall asleep?)
Where: Marriott Salon 3

And for those attendees who feel the urge to contact me (can't think why - perhaps that coffee?):

Join Me at Oracle OpenWorld Connect!

At what level should I version my source code?

Susan Duncan - Mon, 2007-11-05 09:11
This is a question that comes up regularly when I talk to development teams. And I believe that the answer should always be - at the top level.

If you are using JDeveloper this means at the application level. We adopt this in our own development here and it's my top tip for best practice. Some might want to argue that if their application is broken into a number of different projects then why not version those individually?

One problem with this approach could be cross-project dependencies in your application. Imagine that you work in one project on a day-to-day basis, but there is a dependency on libraries held in another project (let's call it a 'library' project). You might check out both projects at the beginning of the development cycle but assume that the 'library' project was complete, and so just work on and update your 'working' project. At some later stage, you could run into problems when you want to check in your 'working' project and realise that updates have been made to the 'library' project.

On a practical level, if you are using JDeveloper, you should not only follow my advice and version at the application level, but always ensure that your .jws configuration file is included in source control. This isn't a problem if you are using JDeveloper's integrated SVN commands: when you select an application, you automatically get the .jws file included.

But what if not all your development team are using JDeveloper? Perhaps you prefer to use Tortoise for your SVN interactions. Or perhaps you have some team members that use Eclipse or another IDE as well as JDeveloper? In this case it is important to ensure that you still place both your 'jws' and your 'jpr' files under source control.

Why? Because JDeveloper's SVN code looks at the state of those files to decide what menu options it makes available. For instance, say an initial import of an application to SVN had been done using Tortoise and the .jws file had not been included. I then check out this into a working copy in JDeveloper and make some changes. When I try to commit or update this working copy using JDeveloper I would expect to get the menu options 'Commit Working Copy' or 'Update Working Copy'. In fact, I would only see 'Import Files' as the .jws files would not be part of the versioned application and so JDeveloper offers you the option to place the application under version control.

I've come across this a few times recently. The workaround is to go back to the repository and add the .jws file to version control. This brings the IDE back into kilter and allows you to continue using JDeveloper's integrated SVN support.
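To make the point concrete, here is a small sketch (with a made-up application layout; only the .jws/.jpr naming convention comes from the discussion above) that lists the control files that must go into version control along with the sources:

```shell
# Hypothetical JDeveloper application layout
mkdir -p /tmp/MyApp/Model /tmp/MyApp/ViewController
touch /tmp/MyApp/MyApp.jws                          # application control file
touch /tmp/MyApp/Model/Model.jpr                    # project control files
touch /tmp/MyApp/ViewController/ViewController.jpr

# These files belong in SVN alongside the sources
find /tmp/MyApp \( -name '*.jws' -o -name '*.jpr' \) | sort
```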

Recursive Calls

Vlad Sadilovskiy - Sun, 2007-11-04 23:26

Here is one interesting subject: recursive calls and performance-monitoring instrumentation. What follows is a few observations of how the session tracing facility, Statspack, AWR, and Tkprof view recursive calls. We want to observe and clarify how Tkprof, Statspack, and AWR calculate top-level and recursive call statistics.

The definition of a recursive call, as per Metalink Note 41634.1 and the 10g Tkprof manual:

“Recursive Calls

Sometimes to execute a SQL statement issued by a user, Oracle must issue additional statements. Such statements are called ‘recursive calls’ or ‘recursive SQL statements’. For example, if you insert a row into a table that does not have enough space to hold that row, Oracle makes recursive calls to allocate the space dynamically. Recursive calls are also generated when data dictionary information is not available in the data dictionary cache and must be retrieved from disk.

If recursive calls occur while the SQL trace facility is enabled, TKPROF produces statistics for the recursive SQL statements and clearly marks them as recursive SQL statements in the output file. Note that the statistics for a recursive SQL statement are included in the listing for that statement, not in the listing for the SQL statement that caused the recursive call. So when you are calculating the total resources required to process a SQL statement, you should consider the statistics for that statement as well as those for recursive calls caused by that statement.”

Recursive calls are triggered by top-level calls, which govern processing of statements directly issued by users (or, more precisely, issued by a program that communicates directly with Oracle on behalf of its end users or scripts). Tkprof is supposed to report base statistics (excluding the statistics of recursive calls) for a call at any level.

For the purpose of the following discussion, let me add these statistics definitions, taken from the Statistics Descriptions that can be viewed either in a Statspack or AWR report or directly using the v$sysstat view:

“CPU used by this session – Amount of CPU time (in 10s of milliseconds) used by a session from the time a user call starts until it ends. If a user call completes within 10 milliseconds, the start and end user-call time are the same for purposes of this statistic, and 0 milliseconds are added.

recursive cpu usage – Total CPU time used by non-user calls (recursive calls). Subtract this value from “CPU used by this session” to determine how much CPU time was used by the user calls.”

The documentation says that the “CPU used by this session” statistic includes CPU used by recursive calls.

Let’s run a test, look at this from a practical perspective, and see how well the DB conforms to its documentation. I’ve chosen the following code for this testing:

create or replace function recursive_function(leveln number, maxlevel number) return number
as
    l_level number := leveln;
    c number := 0;
begin
    loop
        exit when c > 4000000000;
        c := c + 1;
    end loop;

    if l_level+1 <= maxlevel then
        execute immediate 'select /*+ query at level '||(l_level+1)||' */ recursive_function('||(l_level+1)||','||maxlevel||') from dual' into c;
    end if;
    return c;
end;
/
exec perfstat.statspack.snap(i_ucomment=>'start at '||to_char(sysdate,'yy/mm/dd hh24:mi:ss'));

alter session set events '10046 trace name context forever, level 8';              

select /*+ top */ recursive_function(0,3) from dual;

exec perfstat.statspack.snap(i_ucomment=>'end at '||to_char(sysdate,'yy/mm/dd hh24:mi:ss'));

alter session set events '10046 trace name context off';

It triggers a chain of recursive calls with max level 3. This code generates a quasi-equal amount of pure CPU load at each level and should allow identifying resource consumption at any level quite easily.

Raw Trace:

PARSING IN CURSOR #53 len=51 dep=0 uid=57 oct=3 lid=57 tim=1166023881658166 hv=1409094977 ad='6cf1aba8'
select /*+ top */ recursive_function(0,3) from dual
PARSE #53:c=0,e=163,p=0,cr=0,cu=0,mis=0,r=0,dep=0,og=1,tim=1166023881658162
EXEC #53:c=0,e=37,p=0,cr=0,cu=0,mis=0,r=0,dep=0,og=1,tim=1166023881658276
WAIT #53: nam='SQL*Net message to client' ela= 2 driver id=1413697536 #bytes=1 p3=0 obj#=56115 tim=1166023881658319
WAIT #53: nam='SQL*Net message from client' ela= 44597 driver id=1413697536 #bytes=1 p3=0 obj#=56115 tim=1166023881702983
*** 2007-11-02 09:05:46.334
PARSING IN CURSOR #13 len=64 dep=1 uid=57 oct=3 lid=57 tim=1166024166342227 hv=1557986045 ad='72a95828'
select /*+ query at level 1 */ recursive_function(1,3) from dual
PARSE #13:c=2999,e=2789,p=0,cr=0,cu=0,mis=1,r=0,dep=1,og=1,tim=1166024166342221
EXEC #13:c=0,e=37,p=0,cr=0,cu=0,mis=0,r=0,dep=1,og=1,tim=1166024166342394
*** 2007-11-02 09:10:59.564
PARSING IN CURSOR #55 len=64 dep=2 uid=57 oct=3 lid=57 tim=1166024472231315 hv=4273745374 ad='6e06c2e8'
select /*+ query at level 2 */ recursive_function(2,3) from dual
PARSE #55:c=3000,e=2617,p=0,cr=0,cu=0,mis=1,r=0,dep=2,og=1,tim=1166024472231309
EXEC #55:c=0,e=38,p=0,cr=0,cu=0,mis=0,r=0,dep=2,og=1,tim=1166024472231491
*** 2007-11-02 09:15:50.950
PARSING IN CURSOR #14 len=64 dep=3 uid=57 oct=3 lid=57 tim=1166024756787955 hv=1817767248 ad='6e2b2ec8'
select /*+ query at level 3 */ recursive_function(3,3) from dual
PARSE #14:c=2000,e=2633,p=0,cr=0,cu=0,mis=1,r=0,dep=3,og=1,tim=1166024756787948
EXEC #14:c=0,e=65,p=0,cr=0,cu=0,mis=0,r=0,dep=3,og=1,tim=1166024756788155
*** 2007-11-02 09:20:42.250
FETCH #14:c=291241724,e=284472442,p=0,cr=0,cu=0,mis=0,r=1,dep=3,og=1,tim=1166025041260627
STAT #14 id=1 cnt=1 pid=0 pos=1 obj=0 op='FAST DUAL  (cr=0 pr=0 pw=0 time=5 us)'
FETCH #55:c=582567435,e=569029386,p=0,cr=0,cu=0,mis=0,r=1,dep=2,og=1,tim=1166025041260907
STAT #55 id=1 cnt=1 pid=0 pos=1 obj=0 op='FAST DUAL  (cr=0 pr=0 pw=0 time=3 us)'
FETCH #13:c=895731828,e=874918635,p=0,cr=0,cu=0,mis=0,r=1,dep=1,og=1,tim=1166025041261058
WAIT #53: nam='SQL*Net message to client' ela= 4 driver id=1413697536 #bytes=1 p3=0 obj#=56115 tim=1166025041261123
FETCH #53:c=1187120530,e=1159558108,p=0,cr=0,cu=0,mis=0,r=1,dep=0,og=1,tim=1166025041261160
WAIT #53: nam='SQL*Net message from client' ela= 54812 driver id=1413697536 #bytes=1 p3=0 obj#=56115 tim=1166025041316063
STAT #53 id=1 cnt=1 pid=0 pos=1 obj=0 op='FAST DUAL  (cr=0 pr=0 pw=0 time=5 us)'

In the 10g raw trace, times are given in microseconds. So you can see how CPU consumption and elapsed time are reported in the trace: the time at each level is reported as that level's own CPU load plus the CPU load of its call sub-tree (all descendants):

CPUL3 (291241724)  = CPUL3.base + CPUL4(0: there are no further descendants)
CPUL2 (582567435)  = CPUL2.base + CPUL3(291241724)
CPUL1 (895731828)  = CPUL1.base + CPUL2(582567435)
CPUL0 (1187120530) = CPUL0.base + CPUL1(895731828)


CPUL3.base = CPUL3 = 291241724 = 291 s
CPUL2.base = CPUL2 (582567435) - CPUL3(291241724) = 291325711 = 291 s
CPUL1.base = CPUL1 (895731828) - CPUL2(582567435) = 313164393 = 313 s
CPUL0.base = CPUL0 (1187120530) - CPUL1(895731828) = 291388702 = 291 s
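The same subtraction can be scripted straight from the c= values on the FETCH lines (a sketch; the constants are the cumulative fetch CPU times, in microseconds, copied from the raw trace above):

```shell
# Cumulative fetch CPU (c=) per recursion depth, from the raw trace
cpu_l0=1187120530; cpu_l1=895731828; cpu_l2=582567435; cpu_l3=291241724

# Base CPU of a level = its cumulative CPU minus that of its single child
echo "L3 base: $cpu_l3 us"
echo "L2 base: $(( cpu_l2 - cpu_l3 )) us"
echo "L1 base: $(( cpu_l1 - cpu_l2 )) us"
echo "L0 base: $(( cpu_l0 - cpu_l1 )) us"
```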


select /*+ top */ recursive_function(0,3)

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.00          0          0          0           0
Execute      1      0.00       0.00          0          0          0           0
Fetch        1   1187.12    1159.55          0          0          0           1
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        3   1187.12    1159.55          0          0          0           1

Parsing user id: 57

select /*+ query at level 1 */ recursive_function(1,3)

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.00          0          0          0           0
Execute      1      0.00       0.00          0          0          0           0
Fetch        1    313.16     305.88          0          0          0           1
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        3    313.16     305.88          0          0          0           1

Parsing user id: 57     (recursive depth: 1)

select /*+ query at level 2 */ recursive_function(2,3)

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.00          0          0          0           0
Execute      1      0.00       0.00          0          0          0           0
Fetch        1    291.32     284.55          0          0          0           1
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        3    291.32     284.55          0          0          0           1

Parsing user id: 57     (recursive depth: 2)

select /*+ query at level 3 */ recursive_function(3,3)

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.00          0          0          0           0
Execute      1      0.00       0.00          0          0          0           0
Fetch        1    291.24     284.47          0          0          0           1
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        3    291.24     284.47          0          0          0           1

Parsing user id: 57     (recursive depth: 3)

I always thought Tkprof didn’t report net statistics for calls, but now I see how wrong I was. From this test it is obvious that Tkprof reported base times for the recursive calls at all levels; only for the top-level call are the CPU and elapsed time reported as gross load including all levels: CPUL0 = CPUL0.base + CPUL1.base + CPUL2.base + CPUL3.base ≈ 291 + 313 + 291 + 291 ≈ 1187 s. My memory might be playing tricks on me, but I recall seeing a discussion on Jonathan Lewis’s blog about this Tkprof deficiency. Maybe someone will remind me where it was.


From the AWR report:

SQL ordered by Elapsed Time

  Elapsed      CPU                  Elap per  % Total
  Time (s)   Time (s)  Executions   Exec (s)  DB Time    SQL Id
---------- ---------- ------------ ---------- ------- -------------
     1,160      1,160            1     1159.6    99.8 1rcptud9zu5a1
Module: SQL Developer
select /*+ top */ recursive_function(0,3) from dual

       875        875            1      874.9    75.3 9pgc4u1fdtyrx
Module: SQL Developer
select /*+ query at level 1 */ recursive_function(1,3) from dual

       569        569            1      569.0    49.0 0v0qmk7zbsbfy
Module: SQL Developer
select /*+ query at level 2 */ recursive_function(2,3) from dual

       284        284            1      284.5    24.5 2zfr66tq5jvah
Module: SQL Developer
select /*+ query at level 3 */ recursive_function(3,3) from dual

Top 5 Timed Events                                         Avg %Total
~~~~~~~~~~~~~~~~~~                                        wait   Call
Event                                 Waits    Time (s)   (ms)   Time Wait Class
------------------------------ ------------ ----------- ------ ------ ----------
CPU time                                          1,162         100.0
control file parallel write             396           1      1    0.0 System I/O

Time Model Statistics

Statistic Name                                       Time (s) % of DB Time
------------------------------------------ ------------------ ------------
sql execute elapsed time                              1,162.0        100.0
DB CPU                                                1,161.9        100.0
PL/SQL execution elapsed time                         1,159.6         99.8
DB time                                               1,162.1          N/A
background elapsed time                                   2.9          N/A
background cpu time                                       1.5          N/A

Instance Activity Stats

Statistic                                     Total     per Second     per Trans
-------------------------------- ------------------ -------------- -------------
CPU used by this session                    118,971          100.0      13,219.0
CPU used when call started                  118,946           99.9      13,216.2
recursive cpu usage                          89,804           75.5       9,978.2

I rearranged it a little bit for a better view.

First of all, the top SQL section shows that the times reported at each level contain the times of its descendants (i.e., as in the raw trace). Well, no wonder; there is probably not enough information to track call dependency and subtract the statistics of children from a parent. Anyhow, it reports gross load for each recursive level as well as for top-level calls. Anyone analyzing AWR reports should note the statement “Resources reported for PL/SQL code includes the resources used by all SQL statements called by the code.” that precedes this section. Apparently it is actually true not only for PL/SQL code, but for any type of code. This can be seen in the Time Model Statistics, where the sql execute elapsed time is roughly equal to the PL/SQL execution elapsed time.

Also, comparing this section to the raw trace or Tkprof output shows that in the AWR report the CPU time exactly matches the elapsed time, although this was not the case in the raw trace. This would seem to be just a reporting “feature”, if not for the information in the further sections. The CPU time in Top 5 Timed Events and DB CPU in Time Model Statistics are both roughly 1,160 s; ignoring a small difference, the same time is spent by the top-level statement. Only the Instance Activity Stats section shows 1,189.7 s, which almost matches the raw-trace CPU time consumed by all the subject statements, with a small offset of 2.58 s. Further, the recursive cpu usage of 898 s matches the CPU time of the recursive calls starting at level 1, with a difference of 2.3 s. Both deltas are, for the most part, time spent in Statspack subroutines that was captured by AWR.


From the Statspack report:

Top 5 Timed Events                                                    Avg %Total
~~~~~~~~~~~~~~~~~~                                                   wait   Call
Event                                            Waits    Time (s)   (ms)   Time
----------------------------------------- ------------ ----------- ------ ------
CPU time                                                     1,190          99.9
control file parallel write                        397           1      1     .0

In Statspack, the Time Model Statistics and Instance Activity Stats are precisely equal to the corresponding sections in the AWR report. SQL statistics weren't captured for our statements because they didn't generate logical or physical reads, nor did they generate other activity that Statspack considers worth capturing. However, in the Top 5 Timed Events you can see that the CPU time is the figure of the CPU used by this session statistic from the Instance Activity Stats, so it comes from the v$sysstat view.

This behavior differs from the AWR algorithm, which instead seems to take the CPU time in Top 5 Timed Events from v$sys_time_model or directly from their common source. If v$sys_time_model and v$sysstat have different sources of information, AWR is not bound to the figures of the latter when reporting Top 5 Timed Events. This perhaps holds an explanation for the AWR oddity seen by Jonathan Lewis. It would be interesting to find the relevant clues in the reports or in this particular system.

Azul Systems Java appliance

Hampus Linden - Sun, 2007-11-04 15:03
So, I've been dragged more and more into managing Java application containers like WebLogic, WebSphere and JBoss. These have a tendency to be hugely complex beasts, almost as complex as our favorite database, and performance optimization is sometimes quite difficult, or it's simply too much effort to actually upgrade or replace the server.
Azul offers a quite neat (but a tad pricey) solution to this. They off-load Java computation to a separate appliance; the smallest model has 96 processing cores and 48 GB of RAM. The big daddy has a massive 768 cores and 768 GB of RAM. It's hardware engineered in-house by Azul, running custom software (I would guess the OS is semi-based on one of our open-source friends; have a look at the ifconfig output). The application server is still a normal server (Linux/Solaris etc.); the small JVM on the server pretty much acts as a proxy between external resources, such as JDBC sources, and the actual JVM on the appliance.
Their marketing crew calls it a "turn-key solution"; it's not really that easy, but it's pretty straightforward to use.
Azul Vega
The appliance itself takes about 15 minutes to install: set up the networking and the compute domain name and you are pretty much done with the appliance itself.
The application side is almost as easy. Azul provides sort of a "JDK wrapper": you unpack the wrapper and run a simple shell script to integrate it with an existing JDK; the script asks for the path to the JDK to "mimic". It works with the IBM and Sun JDKs, both 1.4 and 1.5 (I haven't tried it with JRockit).
Change your appserver init scripts to use the new JAVA_HOME, give it a couple of Azul-specific JVM options, give it 10 gigs or so of heap, and off you go. One thing to remember is that most garbage collection arguments are now obsolete; Azul uses its own custom "pauseless GC".
The first thing that hit me when starting the first app server was how incredibly slow it was to deploy; deploying EJBs etc. took ages, but hm, yes, that's an almost single-threaded operation. The application itself felt "ok" in use. Now the cool part: we really couldn't get the application to run slowly with our benchmarks, at least not if we discount the fact that it eventually thrashed the database server (just some random 8-core thing) behind it. The bottlenecks in the application tiers were all gone! It doesn't matter if 5 users are online or 250; the application performs exactly the same.
The simple conclusion now is "we need a bigger Oracle box!".

Azul provides quite a good web management console for the appliance. The GUI provides functionality to manage compute pools, if you need to throttle CPU and memory usage between applications or servers, and also provides views to monitor applications and utilization.

I guess one could call it a bit of a custom "throw hardware at the problem" solution. The box itself is quite expensive, but for a medium-sized company with loads of J2EE apps it makes sense.

Voting disk mirroring

Fairlie Rego - Sat, 2007-11-03 04:34
A while ago I blogged on how to add a voting disk to an existing RAC setup here.

To be safe, it is best to do this with the cluster down.
I did the same recently on a 2-node RAC running Veritas SFRAC 4.1.

In the setup I am working on there is only one voting disk, the location of which is:
root@bart # /u01/app/oracle/product/10.1.0/crs/bin/crsctl query css votedisk
0. 0 /u02/oravoting/voting

With CRS down on all nodes you have to use the force option, else you receive an error:

root@bart # /u01/app/oracle/product/10.1.0/crs/bin/crsctl add css votedisk /u02/oravoting/voting_disk_01
Cluster is not in a ready state for online disk addition

root@bart # /u01/app/oracle/product/10.1.0/crs/bin/crsctl add css votedisk /u02/oravoting/voting_disk_01 -force
Now formatting voting disk: /u02/oravoting/voting_disk_01
CLSFMT returned with error [4].
failed 9 to initailize votedisk /u02/oravoting/voting_disk_01

root@bart # cd /u02/oravoting/
root@bart # ls -ltr
total 99992
-rw-r--r-- 1 oracle oinstall 10238976 Oct 30 11:53 voting
-rw-r--r-- 1 root other 10240000 Oct 30 12:05 voting_disk_01

So although it has created a second voting disk, it has been created with the wrong permissions. When you do a fresh install of CRS, the rootconfig script called from root.sh sets the owner of the voting disk to the owner of the CRS software (in this case oracle). Hence this is a bug: since the permissions are incorrect, the startup of CSS fails and you will see errors in the Unix logfile.

Let's add a second mirror..

root@bart # /u01/app/oracle/product/10.1.0/crs/bin/crsctl add css votedisk /u02/oravoting/voting_disk_02 -force
Now formatting voting disk: /u02/oravoting/voting_disk_02
CLSFMT returned with error [4].
failed 9 to initailize votedisk /u02/oravoting/voting_disk_02

If we do not change the permissions of the second mirror, CSS will start up (since we have 2 voting disks online, and the algorithm requires that a majority of the disks be available), but the following messages will be logged in the ocssd.log file:

[ CSSD]2007-11-01 11:46:26.400 [1] >TRACE: clssnmDiskStateChange: state from 1 to 2 disk (0//u02/oravoting/voting)
[ CSSD]2007-11-01 11:46:26.401 [6] >TRACE: clssnmvDPT: spawned for disk 0
[ CSSD]2007-11-01 11:46:26.402 [1] >TRACE: clssnmDiskStateChange: state from 1 to 2 disk
[ CSSD]2007-11-01 11:46:26.402 [7] >TRACE: clssnmvDPT: spawned for disk 1 (/u02/oravoting/voting_disk_01)
[ CSSD]2007-11-01 11:46:26.403 [1] >TRACE: clssnmDiskStateChange: state from 1
to 2 disk (2//u02/oravoting/voting_disk_02
[ CSSD]2007-11-01 11:46:26.404 [8] >TRACE: clssnmvDPT: spawned for disk 2 (/u02/oravoting/voting_disk_02)
[ CSSD]2007-11-01 11:46:26.405 [8] >ERROR: Internal Error Information:
Category: 1234
Operation: scls_block_open
Location: open
Other: open failed /u02/oravoting/voting_disk_02
Dep: 9

[ CSSD]2007-11-01 11:46:26.405 [8] >TRACE: clssnmvDiskOpen: Unable to open voting device (2:

If you were to change the permissions of the 2nd mirror to oracle:dba, you would not see the above.

Remember that when you install CRS in 10.2 and use normal redundancy for your voting disk configuration it is mandatory to have 2 mirrors. The installer does not allow you to proceed with one mirror.

If you are adding mirrors manually using crsctl, it is recommended that you add 2 mirrors. With a 1+1 configuration, CSS will not come up if the mirror is unavailable or corrupt.
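The majority rule CSS applies is simple to sketch: the cluster stays up only while strictly more than half of the configured voting disks are readable. A minimal illustration in Python (the function name and structure are mine, not Oracle's):

```python
def css_has_quorum(online_disks: int, configured_disks: int) -> bool:
    """Return True if a strict majority of voting disks is available."""
    return online_disks > configured_disks // 2

# With 3 voting disks, losing one still leaves a majority:
assert css_has_quorum(2, 3)
# With a 1+1 configuration, losing one disk loses quorum:
assert not css_has_quorum(1, 2)
```

This is why a 1+1 voting disk configuration buys no protection: one failure drops you to exactly half, which is not a majority.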

If you are deleting a voting disk you have to update the following files in $ORA_CRS_HOME/install with the new voting disk location


This is required if you need to re-initialize your OCR by re-running root.sh assuming your OCR backups are stuffed and you cannot restore your OCR.

Data Warehouse Project Lifecycle

Dylan Wan - Fri, 2007-11-02 01:39

Here is the typical lifecycle for data warehouse deployment project:

0. Project Scoping and Planning

Project Triangle - Scope, Time and Resource.

  • Determine the scope of the project - what would you like to accomplish? This can be defined by the questions to be answered, the number of logical stars and the number of OLTP sources.
  • Time - What is the target date for the system to be available to the users?
  • Resource - What is our budget? What are the roles and profiles of the resources needed to make this happen?

1. Requirement

  • What are the business questions? How can the answers to these questions change business decisions or trigger actions?
  • What are the roles of the users? How often do they use the system? Do they do any interactive reporting, or just view the predefined reports in guided navigation?
  • How do you measure? What are the metrics?

2. Front-End Design

  • The front-end design needs to cover both interactive analysis and the designed analytics workflow.
  • How does the user interact with the system?
  • What is their analysis process?

3. Warehouse Schema Design

  • Dimensional modeling - define the dimensions and facts, and define the grain of each star schema.
  • Define the physical schema - depending on the technology decision. If you use relational technology, design the database tables.

4. OLTP to data warehouse mapping

  • Logical mapping - table-to-table and column-to-column mapping. Also define the transformation rules.
  • You may need to perform OLTP data profiling. How often does the data change? What is the data distribution?
  • ETL design - include data staging and the detailed ETL process flow.
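The logical mapping in step 4 can be captured in a plain data structure before any ETL tool work begins. A hypothetical sketch (the table, column and rule names below are invented for illustration):

```python
# One logical mapping entry: source table/column -> target table/column,
# plus an optional transformation rule applied in the ETL flow.
mappings = [
    {"src": ("OE_ORDERS", "ORDER_TOTAL"), "tgt": ("W_SALES_F", "REVENUE"),
     "transform": lambda v: round(v, 2)},
    {"src": ("OE_ORDERS", "CURRENCY"), "tgt": ("W_SALES_F", "CURRENCY_CODE"),
     "transform": str.upper},
]

def apply_mapping(row: dict, mappings: list) -> dict:
    """Project one source row into target columns, applying each rule."""
    out = {}
    for m in mappings:
        _, src_col = m["src"]
        _, tgt_col = m["tgt"]
        out[tgt_col] = m["transform"](row[src_col])
    return out

row = {"ORDER_TOTAL": 123.456, "CURRENCY": "usd"}
assert apply_mapping(row, mappings) == {"REVENUE": 123.46,
                                        "CURRENCY_CODE": "USD"}
```

Writing the mapping down in this declarative form first makes the later ETL design (staging, process flow) much easier to review.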

5. Implementation

  • Create the warehouse and ETL staging schema
  • Develop the ETL programs
  • Create the logical to physical mapping in the repository
  • Build the end user dashboard and reports

6. Deployment

  • Install the Analytics reporting and the ETL tools.
  • Specific Setup and Configuration for OLTP, ETL, and data warehouse.
  • Sizing of the system and database
  • Performance Tuning and Optimization

7. Management and Maintenance of the system

  • Ongoing support of the end-users, including security, training, and enhancing the system.
  • You need to monitor the growth of the data.

Categories: BI & Warehousing

Blocked Patch files cause OPatch to fail with error code = 170

Mihajlo Tekic - Thu, 2007-11-01 12:28
I had to apply a CPU patch to Oracle Home on Windows 2003 Server box.

The OPatch utility was failing with the following error:

The patch directory area must be a number.

ERROR: OPatch failed because of problems in patch area.
OPatch returns with error code = 170

Well, there are quite a few reasons why you may get this error when you try to apply a patch using OPatch.
Many of them are covered in the following Metalink notes:


But I couldn't find anything about the cause that I came across recently.

I tried everything described in the notes above. Nothing worked, until I realized that all of the patch files had somehow been blocked.

Although the patch files had all the necessary permissions, in particular read, OPatch couldn't read them because of the block set by the OS.

If you check the file properties you can see the message:
"This file came from another computer and might be blocked to help protect this computer".

The reason these files were blocked is that I had copied them from a different machine.

A blocked file can be unblocked by pressing the Unblock button.
The odd thing was that only one file could be unblocked at a time.

So I just deleted the patch directory, unblocked the patch zip file, and extracted it again.

I hadn't had a similar issue before. I believe this is a new "security" feature of Windows 2003 Server.
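Windows records this "blocked" state in an NTFS alternate data stream named Zone.Identifier attached to each downloaded file. Rather than clicking Unblock one file at a time, the streams can be removed in bulk. A hedged sketch (the function is mine; deleting the stream only works on NTFS volumes, and on other platforms it simply finds nothing to remove):

```python
import os

def unblock_tree(root: str) -> int:
    """Delete the Zone.Identifier alternate data stream from every file
    under root. Returns the number of files unblocked. NTFS only."""
    count = 0
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            ads = os.path.join(dirpath, name) + ":Zone.Identifier"
            try:
                os.remove(ads)   # removes only the stream, not the file
                count += 1
            except OSError:
                pass             # no stream present, or not NTFS
    return count
```

Running this over the extracted patch directory would have saved the delete-and-re-extract round trip.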

Oracle SQL to return an alphabetical subset

Hampus Linden - Thu, 2007-11-01 07:15
Yes, I know, I've been waaay bad at blogging lately. I've been busy working on new projects at work, upgrading applications to Java 1.5 containers (WebSphere 6.1, WebLogic 9.2 etc.). On the fun side we've got an Azul Java-acceleration box that really needs a couple of blog entries!

Anyway, I got asked if there was a way to return a result set of data based on the leading character; the use case was to ignore all strings starting with a, b, c or d and return e to z.

Fairly straightforward, but a good piece of SQL to have.
I simply grab the first character of the varchar and compare its ASCII value to the ASCII value of 'D'.
SQL> select * from t;

SQL> with data as (
  2    select ascii(upper(substr(a,1,1))) a_val, a from t
  3  )
  4  select * from data where a_val not between ascii('A') and ascii('D') order by a
  5  /

     A_VAL A
---------- ----------
        69 Etest
        72 Htest
        87 Wtest
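The same filter is easy to express outside SQL as well. A small Python equivalent of the ASCII comparison (the sample data is invented):

```python
def skip_a_to_d(strings):
    """Keep only strings whose first letter falls outside A..D,
    comparing case-insensitively like the SQL's upper(substr(...))."""
    return sorted(s for s in strings
                  if not ('A' <= s[:1].upper() <= 'D'))

assert skip_a_to_d(["Atest", "btest", "Etest", "Wtest", "Htest"]) == \
    ["Etest", "Htest", "Wtest"]
```

Doing it in the database is still preferable when the table is large, since only the matching rows cross the wire.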

Lean Manufacturing Pen Demo

Chris Grillone - Mon, 2007-10-29 18:42
World Famous “Hands-on” Pen Manufacturing Demo:
Featuring iStore, CTO, Flow and WMS

“Meet The Experts” Session
Wednesday, 2:30 to 5:30 PM
Moscone West – 2nd flr – Space 2

(Not on the Oracle demo grounds)

This hands-on demonstration highlights Flow Manufacturing and Configurator in a Lean Manufacturing Enterprise. The demonstration begins with a customer, an audience member, ordering a configured-to-order pen using iStore. The sales order is then sent to Order Management and released to manufacturing where a Flow schedule is created. The Flow schedule is executed with two line operations using the HTML Flow Execution Workstation, demonstrating both simple attached documents and video for assembly instructions. Kanban will be executed in the HTML Flow Execution Workstations, using Kanban bins and cards. Bar code scanning and RFID from Warehouse Management System are used in this complete manufacturing process demonstration. Although this demonstration is provided at an executive level, all manufacturing data is built to support the Flow schedule and will be available to allow the audience members to dive deeper into specific Flow Manufacturing features, such as line design and balance, mixed model map workbench and Kanban execution. Come marvel at our use of a forklift in the demo!

Mail from Application Express with Access Control Enabled

Jornica - Mon, 2007-10-29 17:28

One of the post-installation tasks after installing Oracle 11g is the creation of an Access Control List (ACL). Oracle provides a few scripts to allow flows_030000 to connect to any host. But what if you do not want to allow the database to connect to any host, only to one specific host? For instance, what if you want to send mail from Application Express (APEX)?

First, set up the email settings in the administrative interface (Home>Manage Service>Instance Settings). Secondly, run the following statements as

-- Create an access control list and assign it to the SMTP host;
-- wrapped in an anonymous PL/SQL block so it can be run as-is
BEGIN
  dbms_network_acl_admin.create_acl(acl => 'apexmail.xml',
    description => 'Mail from Apex', principal => 'FLOWS_030000',
    is_grant => TRUE, privilege => 'connect', start_date => NULL,
    end_date => NULL);

  -- Assign the list to your SMTP host, i.e. smtp.yyy.xx
  dbms_network_acl_admin.assign_acl(acl => 'apexmail.xml',
    host => 'smtp.yyy.xx', lower_port => 25, upper_port => 25);

  -- Uncomment to remove the access control list
  -- dbms_network_acl_admin.drop_acl(acl => 'apexmail.xml');

  -- Please, do commit work.
  COMMIT;
END;
/

You can find more information about the usage of dbms_network_acl_admin in the Database Security Guide. The same technique applies to all the remaining network services.

Oracle E-Business Suite Authentication in Ruby

Raimonds Simanovskis - Mon, 2007-10-29 17:00

I was developing a Ruby on Rails application that accesses data from an existing Oracle E-Business Suite application, and I wanted to give users of this application the possibility to authenticate themselves with their existing Oracle E-Business Suite user names and passwords.

Oracle does not publicly provide the algorithms used to encrypt user passwords (which are stored in the FND_USER table), and therefore some googling was necessary. I was lucky to find the Jira eBusiness Suite Authenticator with full source code in Java, which also included all the Oracle E-Business Suite password encryption / decryption functions. Actually it seems that the >1000 lines of source code were decompiled from Oracle Java classes, as there are no comments and the variable names don’t tell anything.

But as I did not have any better source material, I started a Ruby rewrite of this code and happily managed to finish it in a couple of days. As a result I created the Oracle EBS Authentication plugin, and you can find out in the README file how to install it.

Usage of this plugin is quite simple – here is an example:

database_name = ActiveRecord::Base.connection.current_database
authenticator = OracleEbsAuthentication::Authenticator.new(database_name)
if authenticator.validate_user_password(login, password)
  # user authenticated
  if authenticator.user_responsibilities.include? "System Administrator"
    # user has System Administrator role
  end
  # ...
else
  # authentication failed
end

BTW, the Oracle EBS password encryption approach has quite a lot of weaknesses. If you provide SQL*Net access to your Oracle EBS database, then anyone with a valid Oracle EBS user name and password can decrypt the APPS database user password, and can also decrypt the passwords of all other users. You can have a deeper look at the plugin source code to understand how it can be done :)

Categories: Development

Call-Less Cursor

Vlad Sadilovskiy - Mon, 2007-10-29 14:29

Many people know that Tkprof does not report base statistics and times for calls that have recursive calls. This can be handled by other, alternative tools. But there is one more unfortunate effect to watch out for when tuning a DB that does a fair amount of LOB operations. It seems to be more fundamental and much harder to cope with.

Someone was trying to track down a performance problem in a BLOB loading application. No method they tried gave a good clue to the problem, and they could have wandered around it for a long time had they not been hinted at certain LOB handling patterns known for buggy behavior.

But this was just the issue that made me post this other, related observation. Here are two excerpts from Tkprof and Statspack reports from the database where the case was reproduced.



call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        7      0.02       0.02          4         86          0           0
Execute      9      0.66       1.14         82       2956       5618           8
Fetch        1      0.00       0.00          0          2          0           1
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total       17      0.68       1.16         86       3044       5618           9

Elapsed times include waiting on following events:
  Event waited on                             Times   Max. Wait  Total Waited
  ----------------------------------------   Waited  ----------  ------------
  SQL*Net message from client                100014       54.07        112.26
  db file sequential read                         1        0.00          0.00
  SQL*Net message to client                  100013        0.00          0.19
  log file sync                                  10        0.00          0.02
  latch: shared pool                             11        0.02          0.02


call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse     4644      0.94       0.93          0          0        107           0
Execute  11059      6.06       9.85        570      45295       5516        8933
Fetch    23131      0.70       2.23        305      38484          3       28673
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total    38834      7.71      13.03        875      83779       5626       37606

Elapsed times include waiting on following events:
  Event waited on                             Times   Max. Wait  Total Waited
  ----------------------------------------   Waited  ----------  ------------
  db file sequential read                    113211        0.30        324.45
  control file sequential read                  855        0.03          0.06
  direct path write                          154743        0.00          0.70
  direct path read                            99973        0.00          0.64
  log file switch completion                    111        0.97         65.21
  log file switch (checkpoint incomplete)       255        0.97        119.11
  latch: shared pool                              4        0.00          0.00
  latch: checkpoint queue latch                   1        0.00          0.00
  latch: library cache                            1        0.00          0.00
  latch: cache buffers chains                     1        0.00          0.00
  db file scattered read                          1        0.00          0.00


  Top 5 Timed Events                                                    Avg %Total
~~~~~~~~~~~~~~~~~~                                                   wait   Call
Event                                            Waits    Time (s)   (ms)   Time
----------------------------------------- ------------ ----------- ------ ------
db file sequential read                        121,368         373      3   49.4
CPU time                                                       141          18.7
log file switch (checkpoint incomplete)            263         122    463   16.1
log file switch completion                         113          66    586    8.8
log file parallel write                          3,837          47     12    6.3

Time Model System Stats  DB/Inst: TST10203/tst10203  Snaps: 2625-2631
-> Ordered by % of DB time desc, Statistic name

Statistic                                       Time (s) % of DB time
----------------------------------- -------------------- ------------
DB CPU                                             292.3         40.1
sql execute elapsed time                            45.8          6.3
PL/SQL execution elapsed time                       32.8          4.5
parse time elapsed                                   5.4           .7
hard parse elapsed time                              5.3           .7
PL/SQL compilation elapsed time                      0.1           .0
hard parse (sharing criteria) elaps                  0.1           .0
connection management call elapsed                   0.0           .0
repeated bind elapsed time                           0.0           .0
hard parse (bind mismatch) elapsed                   0.0           .0
sequence load elapsed time                           0.0           .0
DB time                                            728.5
background elapsed time                            130.7
background cpu time                                 27.5

Both parts indicate that there was serious activity (waits on sequential disk reads and redo log synchronization, and CPU utilization) that was not reported in the categories "sql execute elapsed time" and "PL/SQL execution elapsed time". No statement reported in Tkprof or Statspack generated this activity, but there was a particular cursor seen all over the 10046 trace that was responsible for some of these missing components.

I was kind of lucky to see it, because the cursor has no associated calls whatsoever, only wait records. Unless there are waits associated with such a cursor, there will be no records about it in the Oracle trace. Another "funny" thing is that this call-less cursor can have recursive calls too. This cursor is created solely to write and read LOB content.

So, here are a few words of caution for those who do performance troubleshooting using raw trace files, Statspack or ASH/AWR reports. Tkprof, or any other tool that tries to calculate CPU, elapsed time or wait statistics and build the call hierarchy of sessions containing LOB operations, will likely produce incorrect results, from slightly wrong to outright misleading. This is because calls carry the key information that allows traversing the call hierarchy and calculating base statistics and timing figures. The "missing" calls might cause the whole call hierarchy tree to break. For the same reason, Statspack reports may show resource consumption without any visible consumer.

One more thing worth mentioning: if LOB operations are wrapped in code that uses calls to communicate with the database (i.e. PL/SQL anonymous blocks or procedures), then the resources consumed by the LOB operations inside that code will be attributed to those calls.

