Feed aggregator

Azul Systems Java appliance

Hampus Linden - Sun, 2007-11-04 15:03
So, I've been dragged more and more into managing Java application containers like WebLogic, WebSphere and JBoss. These have a tendency to be hugely complex beasts, almost as complex as our favorite database, and performance optimization is sometimes quite difficult; sometimes it's simply too much effort to actually upgrade or replace the server.
Azul offers a quite neat (but a tad pricey) solution to this. They off-load Java computation to a separate appliance; the smallest model has 96 processing cores and 48 GB of RAM, while the big daddy has a massive 768 cores and 768 GB of RAM. The hardware is engineered in-house by Azul and runs custom software (I would guess the OS is semi-based on one of our open-source friends; have a look at the ifconfig output). The application server is still a normal server (Linux/Solaris etc); the small JVM on the server pretty much acts as a proxy between external resources, such as JDBC sources, and the actual JVM on the appliance.
Their marketing crew calls it a "turnkey solution"; it's not really that easy, but it's pretty straightforward to use.
Azul Vega
The appliance takes about 15 minutes to install: set up the networking and the compute domain name, and you are pretty much done with the appliance side.
The application side is almost as easy. Azul provides a sort of "JDK wrapper": you unpack the wrapper and run a simple shell script to integrate it with an existing JDK; the script asks for the path of the JDK to "mimic". It works with the IBM and Sun JDKs, both 1.4 and 1.5 (I haven't tried it with JRockit).
Change your appserver init scripts to use the new JAVA_HOME, give it a couple of Azul-specific JVM options, give it 10 gigs or so of heap and off you go. One thing to remember is that most garbage collection arguments are now obsolete; Azul uses its own custom "pauseless GC".
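As an illustration, the init script change amounts to something like this (the wrapper path is hypothetical and only standard JVM options are shown; the Azul-specific switches depend on your release):

# Point the app server at the Azul JDK wrapper instead of the native JDK
JAVA_HOME=/opt/azul/jdk-wrapper   # illustrative path: wherever you unpacked the wrapper
export JAVA_HOME
# A big heap is the point of the exercise; most GC tuning flags are obsolete here
JAVA_OPTS="-Xms10g -Xmx10g"
export JAVA_OPTS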
The first thing that hit me when starting the first app-server was how incredibly slow it was to deploy; deploying EJBs etc. took ages, but then that's an almost single-threaded operation. The application itself felt "ok" in use. Now the cool part: we really couldn't get the application to run slowly with our benchmarks, at least not if we discount the fact that it eventually trashed the database server (just some random 8-core thing) behind it. The bottlenecks in the application tier were all gone! It doesn't matter if 5 users are online or 250, the application performs exactly the same.
The simple conclusion now is "we need a bigger Oracle box!".

Azul provides quite a good web management console for the appliance. The GUI provides functionality to manage compute pools, if you need to throttle CPU and memory usage between applications or servers, and also provides views to monitor applications and utilization.

I guess one could call it a bit of a custom "throw hardware at the problem" solution; the box itself is quite expensive, but for a medium-sized company with loads of J2EE apps it makes sense.

Voting disk mirroring

Fairlie Rego - Sat, 2007-11-03 04:34
A while ago I blogged about how to add a voting disk to an existing RAC setup here.


To be safe it is best to do this with the cluster down.
I did the same recently on a 2 node RAC on 10.2.0.3 running Veritas SFRAC 4.1.

In the setup I am working on there is only one voting disk, the location of which is:
root@bart # /u01/app/oracle/product/10.1.0/crs/bin/crsctl query css votedisk
0. 0 /u02/oravoting/voting

With CRS down on all nodes you have to use the force option, else you receive an error:

root@bart # /u01/app/oracle/product/10.1.0/crs/bin/crsctl add css votedisk /u02/oravoting/voting_disk_01
Cluster is not in a ready state for online disk addition

root@bart # /u01/app/oracle/product/10.1.0/crs/bin/crsctl add css votedisk /u02/oravoting/voting_disk_01 -force
Now formatting voting disk: /u02/oravoting/voting_disk_01
CLSFMT returned with error [4].
failed 9 to initailize votedisk /u02/oravoting/voting_disk_01

root@bart # cd /u02/oravoting/
root@bart # ls -ltr
total 99992
-rw-r--r-- 1 oracle oinstall 10238976 Oct 30 11:53 voting
-rw-r--r-- 1 root other 10240000 Oct 30 12:05 voting_disk_01

So although it has created a second voting disk, it has been created with the wrong permissions. When you do a fresh install of CRS, the rootconfig script called from root.sh sets the owner of the voting disk to the owner of the CRS software (in this case oracle). Hence this is a bug: the startup of CSS fails, and you will see errors in the Unix logfile, since the permissions are incorrect.

Let's add a second one...

root@bart # /u01/app/oracle/product/10.1.0/crs/bin/crsctl add css votedisk /u02/oravoting/voting_disk_02 -force
Now formatting voting disk: /u02/oravoting/voting_disk_02
CLSFMT returned with error [4].
failed 9 to initailize votedisk /u02/oravoting/voting_disk_02

If we do not change the permissions of the second mirror, CSS will still start up (since 2 of the 3 voting disks are online and the algorithm only requires a majority of disks to be available), but the following message will be logged in the ocssd.log file:

[ CSSD]2007-11-01 11:46:26.400 [1] >TRACE: clssnmDiskStateChange: state from 1 to 2 disk (0//u02/oravoting/voting)
[ CSSD]2007-11-01 11:46:26.401 [6] >TRACE: clssnmvDPT: spawned for disk 0 (/u02/oravoting/voting)
[ CSSD]2007-11-01 11:46:26.402 [1] >TRACE: clssnmDiskStateChange: state from 1 to 2 disk (1//u02/oravoting/voting_disk_01)
[ CSSD]2007-11-01 11:46:26.402 [7] >TRACE: clssnmvDPT: spawned for disk 1 (/u02/oravoting/voting_disk_01)
[ CSSD]2007-11-01 11:46:26.403 [1] >TRACE: clssnmDiskStateChange: state from 1 to 2 disk (2//u02/oravoting/voting_disk_02)
[ CSSD]2007-11-01 11:46:26.404 [8] >TRACE: clssnmvDPT: spawned for disk 2 (/u02/oravoting/voting_disk_02)
[ CSSD]2007-11-01 11:46:26.405 [8] >ERROR: Internal Error Information:
  Category: 1234
  Operation: scls_block_open
  Location: open
  Other: open failed /u02/oravoting/voting_disk_02
  Dep: 9

[ CSSD]2007-11-01 11:46:26.405 [8] >TRACE: clssnmvDiskOpen: Unable to open voting device (2: /u02/oravoting/voting_disk_02)

If you were to change the permissions of the 2nd mirror to oracle:dba, you would not see the above.
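For example, to fix both mirrors in one go (use the owner and group that match your CRS installation):

root@bart # chown oracle:dba /u02/oravoting/voting_disk_01 /u02/oravoting/voting_disk_02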

Remember that when you install CRS in 10.2 and use normal redundancy for your voting disk configuration it is mandatory to have 2 mirrors. The installer does not allow you to proceed with one mirror.

If you are adding mirrors manually using crsctl, it is recommended that you add 2 mirrors: with a 1+1 configuration CSS will not come up if the mirror is unavailable/corrupt.
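Once both mirrors are added and the permissions corrected, it is worth re-running the query from the top of this post to confirm that all three disks are registered; the output should look along these lines:

root@bart # /u01/app/oracle/product/10.1.0/crs/bin/crsctl query css votedisk
0. 0 /u02/oravoting/voting
1. 0 /u02/oravoting/voting_disk_01
2. 0 /u02/oravoting/voting_disk_02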

If you are deleting a voting disk you have to update the following files in $ORA_CRS_HOME/install with the new voting disk location

paramfile.crs
params.crs
rootconfig

This is required if you ever need to re-initialize your OCR by re-running root.sh, assuming your OCR backups are stuffed and you cannot restore your OCR.

Data Warehouse Project Lifecycle

Dylan Wan - Fri, 2007-11-02 01:39

Here is the typical lifecycle for a data warehouse deployment project:

0. Project Scoping and Planning

Project Triangle - Scope, Time and Resource.

  • Determine the scope of the project - what would you like to accomplish? This can be defined by the questions to be answered, the number of logical stars and the number of OLTP sources.
  • Time - What is the target date for the system to be available to the users?
  • Resource - What is our budget? What are the role and profile requirements of the resources needed to make this happen?

1. Requirement

  • What are the business questions? How can the answers to these questions change business decisions or trigger actions?
  • What are the roles of the users? How often do they use the system? Do they do any interactive reporting or just view the defined reports in guided navigation?
  • How do you measure? What are the metrics?

2. Front-End Design

  • The front-end design needs to cover both interactive analysis and the designed analytics workflow.
  • How does the user interact with the system?
  • What is their analysis process?

3. Warehouse Schema Design

  • Dimensional modeling - define the dimensions and facts, and define the grain of each star schema.
  • Define the physical schema, depending on the technology decision. If you use relational technology, design the database tables (a concrete sketch follows this list).
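To make the grain concrete, here is a minimal relational star sketch (all table and column names are hypothetical):

-- Grain of the star: one sales row per product per day
CREATE TABLE time_dim (
  time_key    NUMBER PRIMARY KEY,
  calendar_dt DATE
);

CREATE TABLE product_dim (
  product_key  NUMBER PRIMARY KEY,
  product_name VARCHAR2(100)
);

CREATE TABLE sales_fact (
  time_key    NUMBER REFERENCES time_dim,
  product_key NUMBER REFERENCES product_dim,
  units_sold  NUMBER,
  sales_amt   NUMBER(12,2)
);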

4. OLTP to data warehouse mapping

  • Logical mapping - table-to-table and column-to-column mapping. Also define the transformation rules.
  • You may need to perform OLTP data profiling. How often does the data change? What is the data distribution?
  • ETL Design - include data staging and the detailed ETL process flow (a sketch follows this list).
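As a flavor of what the logical mapping and transformation rules become at ETL time (source and target names are made up, reusing the star sketch above):

-- Incremental extract: OLTP order lines into the sales staging table,
-- applying the amount transformation rule on the way
INSERT INTO stg_sales (calendar_dt, product_src_id, units_sold, sales_amt)
SELECT ol.ordered_date,
       ol.inventory_item_id,
       ol.ordered_quantity,
       ol.ordered_quantity * ol.unit_selling_price
  FROM oltp_order_lines ol
 WHERE ol.last_update_date > :last_etl_run;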

5. Implementation

  • Create the warehouse and ETL staging schema
  • Develop the ETL programs
  • Create the logical to physical mapping in the repository
  • Build the end user dashboard and reports

6. Deployment

  • Install the Analytics reporting and the ETL tools.
  • Specific Setup and Configuration for OLTP, ETL, and data warehouse.
  • Sizing of the system and database
  • Performance Tuning and Optimization

7. Management and Maintenance of the system

  • Ongoing support of the end-users, including security, training, and enhancing the system.
  • You need to monitor the growth of the data.

Categories: BI & Warehousing

Blocked Patch files cause OPatch to fail with error code = 170

Mihajlo Tekic - Thu, 2007-11-01 12:28
I had to apply a CPU patch to a 9.2.0.7 Oracle Home on a Windows 2003 Server box.

The OPatch utility was failing with the following error:


The patch directory area must be a number.

ERROR: OPatch failed because of problems in patch area.
OPatch returns with error code = 170


Well, there are quite a few reasons why you may get this error when you try to apply a patch using OPatch.
Many of them are published in the following Metalink notes:

Note:369555.1
Note:294350.1
Note:418479.1


But, I couldn't find anything about the one that I came across recently.

I tried everything described in the notes above. Nothing worked, until I realized that all of the patch files had somehow been blocked.

Although the patch files had all the necessary permissions, especially read, OPatch couldn't read them because of the lock set by the OS.

If you check the file properties you can see the message:
"This file came from another computer and might be blocked to help protect this computer".



The reason these files were blocked was that I had copied them from a different machine.

A blocked file can be unblocked by pressing the Unblock button.
The odd thing was that only one file could be unblocked at a time.



So I just deleted the patch directory, unblocked the patch zip, and extracted it again.
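As an aside, the block flag is stored in an NTFS alternate data stream called Zone.Identifier, so if you happen to have the Sysinternals streams utility around, it can strip the flag from a whole directory tree in one go (the path below is just an illustration):

C:\> streams -s -d C:\patch_dir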

I hadn't had a similar issue before. I believe this is a new "security" feature of Windows 2003 Server.

Oracle SQL to return an alphabetical subset

Hampus Linden - Thu, 2007-11-01 07:15
Yes, I know, I've been waaay bad at blogging lately. I've been busy working on new projects at work, upgrading applications to Java 1.5 containers (WebSphere 6.1, WebLogic 9.2 etc). On the fun side we've got an Azul Java-acceleration box; that really needs a couple of blog entries!

Anyway, I got asked if there was a way to return a resultset of data based on the leading character; the use case was to ignore all strings starting with A through D and return E to Z.

Fairly straightforward, but a good SQL to have.
I simply grab the first character of the varchar and compare its ASCII value to the ASCII value of D.
SQL> select * from t;

A
----------
Atest
Btest
Etest
Htest
Wtest
Dtest
dtest
SQL> with data as (
2 select ascii(upper(substr(a,1,1))) a_val,a from t
3 )
4 select * from data where a_val not between ascii('A') and ascii('D') order by a
5 /

A_VAL A
---------- ----------
69 Etest
72 Htest
87 Wtest
SQL>

Lean Manufacturing Pen Demo

Chris Grillone - Mon, 2007-10-29 18:42
World Famous “Hands-on” Pen Manufacturing Demo:
Featuring iStore, CTO, Flow and WMS

“Meet The Experts” Session
Wednesday, 2:30 to 5:30 PM
Moscone West – 2nd flr – Space 2

(Not on the Oracle demo grounds)

This hands-on demonstration highlights Flow Manufacturing and Configurator in a Lean Manufacturing Enterprise. The demonstration begins with a customer, an audience member, ordering a configured-to-order pen using iStore. The sales order is then sent to Order Management and released to manufacturing where a Flow schedule is created. The Flow schedule is executed with two line operations using the HTML Flow Execution Workstation, demonstrating both simple attached documents and video for assembly instructions. Kanban will be executed in the HTML Flow Execution Workstations, using Kanban bins and cards. Bar code scanning and RFID from Warehouse Management System are used in this complete manufacturing process demonstration. Although this demonstration is provided at an executive level, all manufacturing data is built to support the Flow schedule and will be available to allow the audience members to dive deeper into specific Flow Manufacturing features, such as line design and balance, mixed model map workbench and Kanban execution. Come marvel at our use of a forklift in the demo!

Mail from Application Express with Access Control Enabled

Jornica - Mon, 2007-10-29 17:28

One of the post-installation tasks after installing Oracle 11g is the creation of an Access Control List (ACL). Oracle provides a few scripts that allow flows_030000 to connect to any host. But what if you do not want to allow the database to connect to just any host, only to a single one? For instance, when you want to send mail from Application Express (APEX)?

First, set up the email settings in the administrative interface (Home>Manage Service>Instance Settings). Secondly, run the following statements as system.

-- Create an access control list
BEGIN
dbms_network_acl_admin.create_acl(acl => 'apexmail.xml',
description => 'Mail from Apex', principal => 'FLOWS_030000',
is_grant => TRUE, privilege => 'connect', start_date => NULL,
end_date => NULL);
END;
/

-- Assign the list to your SMTP host, i.e. smtp.yyy.xx
BEGIN
dbms_network_acl_admin.assign_acl(acl => 'apexmail.xml',
host => 'smtp.yyy.xx', lower_port => 25, upper_port => 25);
END;
/

-- Uncomment to remove access control list
--BEGIN
-- dbms_network_acl_admin.drop_acl(acl => 'apexmail.xml');
--END;
--/

-- Please, do commit work.
COMMIT;

You can find more information about the usage of dbms_network_acl_admin in the Database Security Guide. The same technique applies to all the remaining network services.
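For instance, granting the same principal access to a hypothetical web host on port 80 is just another assign_acl call (in practice you would probably create a separate ACL for it):

BEGIN
dbms_network_acl_admin.assign_acl(acl => 'apexmail.xml',
host => 'www.yyy.xx', lower_port => 80, upper_port => 80);
END;
/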

Oracle E-Business Suite Authentication in Ruby

Raimonds Simanovskis - Mon, 2007-10-29 17:00

I was developing a Ruby on Rails application that accesses data from an existing Oracle E-Business Suite application, and I wanted to add the possibility for users to authenticate themselves with their existing Oracle E-Business Suite user names and passwords.

Oracle does not publicly provide the algorithms used for user password encryption (the passwords are stored in the FND_USER table), and therefore some googling was necessary. I was lucky to find the Jira eBusiness Suite Authenticator with full source code in Java, which also included all the Oracle E-Business Suite password encryption / decryption functions. Actually, it seems that the >1000 lines of source code were decompiled from Oracle Java classes, as there are no comments and the variable names don't tell anything.

But as I did not have any better source material I started a Ruby rewrite of this code and happily managed to finish it in a couple of days. As a result I created the Oracle EBS Authentication plugin, and you can find out in the README file how to install it.

Usage of this plugin is quite simple – here is an example:

database_name = ActiveRecord::Base.connection.current_database
authenticator = OracleEbsAuthentication::Authenticator.new(database_name)
if authenticator.validate_user_password(login, password)
  # user authenticated
  if authenticator.user_responsibilities.include? "System Administrator"
    # user has System Administrator role
  end
  # ...
else
  # authentication failed
end

BTW, the Oracle EBS password encryption approach has quite a lot of weaknesses. So if you provide SQL*Net access to your Oracle EBS database, then anyone with a valid Oracle EBS user name and password can decrypt the APPS database user password, and can also decrypt the passwords of all other users. You can have a deeper look at the plugin source code to understand how it can be done :)

Categories: Development

Call-Less Cursor

Vlad Sadilovskiy - Mon, 2007-10-29 14:29

Many people know that Tkprof does not report base statistics and times for calls that have recursive calls. This can be handled by alternative tools. There is one more unfortunate effect to watch out for when tuning a DB that does a fair amount of LOB operations, and it seems to be more fundamental and much harder to cope with.

Someone was trying to capture a performance problem in a BLOB loading application. No method they tried gave a good clue to the problem, and they could have wandered around it for a long time had they not been given a hint about certain LOB handling patterns that were known for buggy behavior.

But that was just the issue that prompted me to post this other, related observation. Here are two excerpts from Tkprof and Statspack reports from the database where the case was reproduced.
 

Tkprof


OVERALL TOTALS FOR ALL NON-RECURSIVE STATEMENTS

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        7      0.02       0.02          4         86          0           0
Execute      9      0.66       1.14         82       2956       5618           8
Fetch        1      0.00       0.00          0          2          0           1
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total       17      0.68       1.16         86       3044       5618           9

Elapsed times include waiting on following events:
  Event waited on                             Times   Max. Wait  Total Waited
  ----------------------------------------   Waited  ----------  ------------
  SQL*Net message from client                100014       54.07        112.26
  db file sequential read                         1        0.00          0.00
  SQL*Net message to client                  100013        0.00          0.19
  log file sync                                  10        0.00          0.02
  latch: shared pool                             11        0.02          0.02


OVERALL TOTALS FOR ALL RECURSIVE STATEMENTS

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse     4644      0.94       0.93          0          0        107           0
Execute  11059      6.06       9.85        570      45295       5516        8933
Fetch    23131      0.70       2.23        305      38484          3       28673
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total    38834      7.71      13.03        875      83779       5626       37606

Elapsed times include waiting on following events:
  Event waited on                             Times   Max. Wait  Total Waited
  ----------------------------------------   Waited  ----------  ------------
  db file sequential read                    113211        0.30        324.45
  control file sequential read                  855        0.03          0.06
  direct path write                          154743        0.00          0.70
  direct path read                            99973        0.00          0.64
  log file switch completion                    111        0.97         65.21
  log file switch (checkpoint incomplete)       255        0.97        119.11
  latch: shared pool                              4        0.00          0.00
  latch: checkpoint queue latch                   1        0.00          0.00
  latch: library cache                            1        0.00          0.00
  latch: cache buffers chains                     1        0.00          0.00
  db file scattered read                          1        0.00          0.00

Statspack


  Top 5 Timed Events                                                    Avg %Total
~~~~~~~~~~~~~~~~~~                                                   wait   Call
Event                                            Waits    Time (s)   (ms)   Time
----------------------------------------- ------------ ----------- ------ ------
db file sequential read                        121,368         373      3   49.4
CPU time                                                       141          18.7
log file switch (checkpoint incomplete)            263         122    463   16.1
log file switch completion                         113          66    586    8.8
log file parallel write                          3,837          47     12    6.3
          -------------------------------------------------------------

Time Model System Stats  DB/Inst: TST10203/tst10203  Snaps: 2625-2631
-> Ordered by % of DB time desc, Statistic name

Statistic                                       Time (s) % of DB time
----------------------------------- -------------------- ------------
DB CPU                                             292.3         40.1
sql execute elapsed time                            45.8          6.3
PL/SQL execution elapsed time                       32.8          4.5
parse time elapsed                                   5.4           .7
hard parse elapsed time                              5.3           .7
PL/SQL compilation elapsed time                      0.1           .0
hard parse (sharing criteria) elaps                  0.1           .0
connection management call elapsed                   0.0           .0
repeated bind elapsed time                           0.0           .0
hard parse (bind mismatch) elapsed                   0.0           .0
sequence load elapsed time                           0.0           .0
DB time                                            728.5
background elapsed time                            130.7
background cpu time                                 27.5
          -------------------------------------------------------------

Both parts indicate that there was serious activity (waits on sequential disk reads and redo log synchronization, plus CPU utilization) that was not reported in the categories "sql execute elapsed time" and "PL/SQL execution elapsed time". There was no statement reported in Tkprof or Statspack that generated this activity, but there was a particular cursor seen all over the 10046 trace that was responsible for some of these missing components.

I was kind of lucky to see it, because the cursor has no associated calls whatsoever, only wait records. Unless there are waits associated with such a cursor, there will be no records about it in the Oracle trace. Another "funny" thing is that this call-less cursor can have recursive calls too. This cursor is created solely to write and read LOB content.

So, here are a few words of caution for those who do performance troubleshooting using raw trace files, Statspack or ASH/AWR reports. Tkprof and other tools that try to calculate CPU, elapsed time or wait statistics and build a hierarchy of calls for sessions containing LOB operations will likely produce incorrect results, anywhere from slightly wrong to outright misleading. This is because calls carry the information that allows traversing the call hierarchy and calculating base statistics and timing figures; the "missing" calls might cause the whole call hierarchy tree to break. For the same reason Statspack reports may show resource consumption without any visible consumer.

One more thing worth mentioning: if LOB operations are wrapped into code that uses calls to communicate with the database (i.e. PL/SQL anonymous blocks or procedures), then the resources consumed by the LOB operations inside that code will be assigned to those calls.


Where is the Scott schema in APEX?

Jornica - Sun, 2007-10-28 16:39

I decided to install Oracle 11g, and with Oracle 11g Application Express (APEX) is installed by default. Before importing an application based on the emp table, I decided to create a workspace based on the existing schema Scott. However, the administrative interface did not allow me to select Scott. Bypassing the LOV and typing the schema name instead, APEX gave the following error: Schema is reserved or restricted. What is going on?

APEX uses the following query to populate the list of values:

SELECT name n,
       name r
  FROM sys.user$ u
 WHERE type# = 1
   AND name <> 'FLOWS_030000'
   AND name NOT IN ('HTMLDB_PUBLIC_USER', 'PUBLIC_USER', 'FLOWS_FILES',
                    'SCHEDULER', 'PEOPLE', 'APEX_PUBLIC_USER')
   AND name NOT IN
       (SELECT SCHEMA
          FROM wwv_flow_restricted_schemas
         WHERE NOT EXISTS
               (SELECT 1
                  FROM wwv_flow_rschema_exceptions
                 WHERE schema_id =
                       (SELECT id
                          FROM wwv_flow_restricted_schemas
                         WHERE SCHEMA = u.name)))
 ORDER BY 1
/

Besides some hard-coded schema names, there are two flows_030000 tables used in the query: wwv_flow_restricted_schemas and wwv_flow_rschema_exceptions.

SQL>SELECT * FROM flows_030000.wwv_flow_restricted_schemas t
/
ID SCHEMA CREATED_BY CREATED_ON LAST_UPDATED_BY LAST_UPDATED_ON
-- ------ ---------- ---------------- --------------- ---------------
...
38 SCOTT SYS 3-8-2007 2:12:53
...

The wwv_flow_rschema_exceptions table contains no rows.

Both tables are used by the package htmldb_site_admin_privs, which serves as an interface package. Now remove the restriction on Scott by executing the following statements (as Sys):

SQL>exec flows_030000.htmldb_site_admin_privs.unrestrict_schema(p_schema => 'SCOTT');
SQL>commit;
And of course, this topic is covered by the Application Express User's Guide.

I'm sailing off into the sunset

Fake Larry Ellison - Sun, 2007-10-28 16:28

Friends, I hope you're sitting down right now. I'm afraid I have some bad news. Recently there have been some complications with my lawyers and the public relations department, so I'm going to have to wrap things up here on this little blog. And in case you're wondering, yes, I know what you're thinking. I'm sorry it had to end this way, friends. But look on the bright side. Every time you make a phone call and purchase something with your credit cards, every time you make an appointment with your cosmetic surgeon and fire an employee at your corporation, chances are you'll be using an Oracle product. And every time you use an Oracle product, our paths will cross, if only briefly.

I'd like to thank you, dear readers, for all of the comments and email messages. And it would probably be a crime not to send shout-outs to the kind people at the San Jose Mercury News, MacSurfer, ZDNet, InfoWorld, and Oracle Apps Lab for all of the favorable press. And, of course, Zack Urlocker. I'll do my best to send free fake beta 12g databases out to everybody. Peace.

Frameworkia

Oracle WTF - Fri, 2007-10-26 02:33

We thought long and hard about possible titles for this new PL/SQL development standard proposed on OTN, but we couldn't improve on the one it came with.

I want share a new IDEA to create a new standard PL/SQL developing:


Function ( Standard Buffer) return number variable Number; variable1 Varchar2; begin variable := get from Standard Buffer; variable1 := get from Standard Buffer; { make your business } put in standard buffer your results end;

Give me feedback if you are interested at the new STANDARD called "FRAMEWORKIA".

A lot possibilities are ready.

Do you see the genius of it?

Er, no.

Sensing that there were people who still needed convincing, user601181 posted some sample code developed using the new Frameworkia:

CREATE OR REPLACE FUNCTION iacore
    ( eobufferia IN OUT typeeodata )
    RETURN NUMBER
IS
    CURSOR cur_getroutingcondition
        ( idc_workflow IN VARCHAR2
        , idc_operation_node IN VARCHAR2 ) IS
        SELECT *
        FROM   wf_condition
        WHERE  id_workflow = idc_workflow
        AND    id_operation_node = idc_operation_node;

    rec_getroutingcondition cur_getroutingcondition%ROWTYPE;

    CURSOR cur_dobufferiaassign
        ( idc_workflow IN VARCHAR2
        , idc_operation_node IN VARCHAR2 ) IS
        SELECT *
        FROM   wf_assignement
        WHERE  id_workflow = idc_workflow
        AND    id_operation_node = idc_operation_node;

    rec_dobufferiaassign cur_dobufferiaassign%ROWTYPE;

    next_node         NUMBER;
    next_node_ck      NUMBER;
    stop_node         NUMBER;
    operation         VARCHAR2(256);
    operation_call    VARCHAR2(256);
    type_node         VARCHAR2(32);
    workflow          VARCHAR2(32);
    line              VARCHAR2(256);
    status_wf_v       VARCHAR2(3);
    pid_chain_node    NUMBER;
    ia_tid            VARCHAR2(64);
    ia_tid_micro      VARCHAR2(64);
    ret_code_default  NUMBER;
    ret_code          NUMBER;
    retval1           NUMBER;
    statementexc      VARCHAR2(256);
    schema_function   VARCHAR2(32);
    package_function  VARCHAR2(32);
    dblink_function   VARCHAR2(32);
    first_node_flag   VARCHAR2(2) := 'NO';
    id_debug_source   NUMBER;
    mapin_keyp        VARCHAR2(1024);

    headerbufferia typebufferia;
    assignbufferia typebufferia;
    checkbufferia  typebufferia;

    rec_wfnode               wf_node%ROWTYPE;
    rec_wffunctionsourcecode wf_function_source_code%ROWTYPE;
    rec_wflogger             wf_logger%ROWTYPE;
    rec_wfbusiness           wf_business%ROWTYPE;
    rec_wffieldmapping       wf_fieldmapping%ROWTYPE;
BEGIN
    headerbufferia := eobufferia(1);

    workflow := frameworkia.getvalue(headerbufferia,'ID_WORKFLOW');

    ---- DETERMINO QUALE NODO INVOCARE
    pid_chain_node := frameworkia.getvalue(headerbufferia,'WF_NODE_ID');

    ----- SE IL NODO E' NULL ALLORA E' IL PRIMO NODO
    IF pid_chain_node IS NULL
    THEN
        -------DETERMINO HANDLER E FILENAME PER IL LOGGER
        SELECT *
        INTO   rec_wflogger
        FROM   wf_logger
        WHERE  id_workflow = workflow;
        -- rec_WfLogger.ID_WORKFLOW
        -- rec_WfLogger.ID_DEBUG_LEVEL
        -- rec_WfLogger.ID_DIRHANDLER
        -- rec_WfLogger.ID_FILENAME

        --------INSERISCO NELL'HEADER
        frameworkia.setvalue
        ( headerbufferia
        , 'ID_DEBUG_WF'
        , rec_wflogger.id_debug_level );

        frameworkia.setvalue
        ( headerbufferia
        , 'ID_DIRHANDLER'
        , rec_wflogger.id_dirhandler );

        frameworkia.setvalue
        ( headerbufferia
        , 'ID_FILENAME'
        , rec_wflogger.id_filename );

        frameworkia.setvalue
        ( headerbufferia
        , 'CHARACTER_EVIDENCE'
        , '§§§§§§§§§§§§§§§§§§§§' );

        -------DETERMINO L'ID NODE
        SELECT wf_node_id
        INTO   pid_chain_node
        FROM   wf_node
        WHERE  id_workflow = workflow
        AND    wf_first_node = 'YES';

        SELECT *
        INTO   rec_wfnode
        FROM   wf_node
        WHERE  id_workflow = workflow
        AND    wf_first_node = 'YES';

        frameworkia.setvalue
        ( headerbufferia
        , 'WF_NODE_ID'
        , rec_wfnode.wf_node_id );

        SELECT b.status
        INTO   status_wf_v
        FROM   wf_node a
             , wf_name b
        WHERE  a.id_workflow = workflow
        AND    a.wf_node_id = rec_wfnode.wf_node_id
        AND    a.id_workflow = b.id_workflow;

        IF status_wf_v = 'OFF'
        THEN
            RETURN -1;
        END IF;

        ia_tid := frameworkia.getvalue(headerbufferia,'IA_TID');
        ret_code_default := 0;
        ret_code         := 0;

        frameworkia.setvalue
        ( headerbufferia
        , 'RET_CODE_DEFAULT'
        , ret_code_default );

        frameworkia.setvalue
        ( headerbufferia
        , 'RET_CODE'
        , ret_code);

        IF ia_tid IS NULL
        THEN
            ia_tid := 'TIA' || dbms_random.STRING('U',1 * 1 + 6) ||
                      TO_CHAR(SYSTIMESTAMP,'YYYYMMDDHH24MISSFF6');

            frameworkia.setvalue
            ( headerbufferia
            , 'IA_TID'
            , ia_tid );
        END IF;

That's just the first hundred lines, and I've formatted it. The complete iacore function was well over 600 lines long.

I for one welcome the new standard.

HRMS FPK RUP2 Global Name Search

RameshKumar Shanmugam - Fri, 2007-10-26 01:33
Mainly in multi-BG environments we end up creating duplicate employee names; the current HRMS search functionality sometimes misses the duplicates because of the localization and the name format used in each legislation.

In Family Pack K RUP2, the system uses a global name format to search for duplicates across business groups.
The global name format is independent of the legislation, which helps us avoid the duplication.

To enable this functionality we need to set the profile HR:Cross BG duplication Person Check.

Within a single business group, HRMS still uses the same existing search functionality.

Try this out!!!

Regards
Ramesh Kumar S
Categories: APPS Blogs

I just saved 7,532 houses

Fake Larry Ellison - Thu, 2007-10-25 14:39

I think all of us have been touched by the fires burning outside Los Angeles and San Diego this week. In fact, I've lost a couple of my own properties down in Malibu. BTW, they're the same properties I wanted to tear down and rebuild, so it's no biggie. But listen. Just thinking about your entire home going up in smoke is enough to motivate anybody to help people in harm's way. I know if my Japanese gardens were on fire -- if all the azaleas and bamboo and Japanese maples and bonsai marijuana plants were going up in smoke -- I'd want somebody to come and help me. So I thought, what the heck? I told my aircraft guys to get in there and convert my C-130 into a tanker ASAP, and then I flew down to San Diego and doused some flames. I helped put out a fire or two, and then I went back and dropped a couple thousand Satanic Surfers CDs over Qualcomm Stadium.

I made a difference, which is more than most of you can say. I bet you lazy turds just sat in your leather armchairs and had the news about BEA Systems read to you by beautiful girls in bikinis while getting foot massages and facials. Am I right or am I right? Shame on you freaks.

Tuning Collections in Queries [1]

Vlad Sadilovskiy - Thu, 2007-10-25 08:40

[To Introduction]

I will continue the series on tuning collections in queries, starting with the simplest case – a collection of values joined to one table. Our subject SQL statements look similar to the following two forms.

1)
select table1.*
  from table1
 where table1.f1 in
       (select column_value from table(test_statement.getList1) nlist1);

and

2)
select table1.*
  from table1, table(test_statement.getList1) nlist1
 where table1.f1 = nlist1.column_value;

We will run them in several ways, adding hints and CBO parameters. The first test run is dedicated to showing the default CBO computations. Note the cryptic name KOKBF$ in the predicate section of the explain plan: KOKBF$ is a generic alias given to table expressions. COLLECTION ITERATOR PICKLER FETCH is the operation that retrieves values from collections (if the collection were instantiated in the query, the operation would read COLLECTION ITERATOR CONSTRUCTOR FETCH).

Test 1.1:

select /*+ gather_plan_statistics */ t1.object_name, t1.object_type
  from t1
 where t1.object_id in
       (select * from table(cast(test_statement.getList1 as table_of_number)) nlist1)

Plan:

----------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                           | Name     | Starts | E-Rows | A-Rows |   A-Time   | Buffers | Reads  | Writes |
----------------------------------------------------------------------------------------------------------------------------
|   1 |  NESTED LOOPS                       |          |      1 |    255 |   1252K|00:02:07.32 |    5091K|    177K|   2959 |
|   2 |   SORT UNIQUE                       |          |      1 |    255 |   1878K|00:00:31.30 |      31 |   2959 |   2959 |
|   3 |    COLLECTION ITERATOR PICKLER FETCH| GETLIST1 |      1 |        |   2000K|00:00:04.28 |       0 |      0 |      0 |
|   4 |   TABLE ACCESS BY INDEX ROWID       | T1       |   1878K|      1 |   1252K|00:01:31.73 |    5091K|    174K|      0 |
|*  5 |    INDEX UNIQUE SCAN                | T1_I     |   1878K|      1 |   1252K|00:00:37.04 |    3839K|  28328 |      0 |
----------------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   5 - access("T1"."OBJECT_ID"=VALUE(KOKBF$))

A runtime of about two minutes is not completely bad. But let's see what we are driving into here. Nested Loops with index access isn't a very good choice for getting > 10% of the data of the base table. Clearly, the estimated cardinality of 255 is the culprit here. The value in the row with Id 2 is not part of the actual output; it was taken from the CBO trace figures. The original 10053 trace file doesn't have a definite clue as to where this figure comes from. For the reason described below, it could be a function of the default block size.

BASE STATISTICAL INFORMATION
***********************
Table Stats::
  Table: KOKBF$  Alias: KOKBF$  (NOT ANALYZED)
    #Rows: 8168  #Blks:  100  AvgRowLen:  100.00

...
Join order[2]:  KOKBF$[KOKBF$]#1  T1[T1]#0
    SORT resource      Sort statistics
      Sort width:         459 Area size:      402432 Max Area size:    80530432
      Degree:               1
      Blocks to Sort:    2865 Row size:           13 Total Rows:        1800000
      Initial runs:         2 Merge passes:        1 IO Cost / pass:       1554
      Total IO sort cost: 4419      Total CPU sort cost: 1764307007
      Total Temp space used: 28894000
***************
Now joining: T1[T1]#0
***************
NL Join
  Outer table: Card: 255.25  Cost: 4653.33  Resp: 4653.33  Degree: 1  Bytes: 2

The outer table KOKBF$ was assigned the default cardinality num_of_blocks * (block_size – cache_layer) / avg_row_len. In all tests cache_layer is 24. Something tells me it is similar to the 24 bytes per INITRANS, but I still cannot get my head around where the block fixed header size has gone; it might be something else entirely. Anyway, given the figures, our default cardinality is 100 * (8192 – 24) / 100 = 8168. With a 16K block this would equal 16360, etc. When the collection is uniquely sorted for the "IN" clause, its computed cardinality is reduced to 255.25 and rounded to 255. With a 16K block this becomes 511.5, which looks persuasively like default_cardinality/32. One more reason to think the value comes from the default statistics assumptions shown in the CBO trace below, which is generated by this simple query.

select distinct column_value from table(test_statement.getList1);

The CBO trace in this case shows the following number of distinct values, taken from the default statistics of the collection. If this is so, it explains how the CBO arrives at the cardinality of the uniquely sorted collection values in the above calculation.

BASE STATISTICAL INFORMATION
***********************
Table Stats::
  Table: KOKBF$  Alias: KOKBF$  (NOT ANALYZED)
    #Rows: 8168  #Blks:  100  AvgRowLen:  100.00
  Column (#1): KOKBC$(VARCHAR2)  NO STATISTICS (using defaults)
    AvgLen: 1.00 NDV: 255 Nulls: 0 Density: 0.0039177
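
Putting the two default figures together (a throwaway sanity check that just restates the arithmetic above):

select 100 * (8192 - 24) / 100      as default_card,  -- 8168
       100 * (8192 - 24) / 100 / 32 as sorted_card    -- 255.25
  from dual;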

But I will shut up about it for now, until I find this out for certain. Now let's see what the second form of the testing queries produces.

Test 1.2:

select /*+ gather_plan_statistics */ t1.object_name, t1.object_type
  from t1, table(cast(test_statement.getList1 as table_of_number)) nlist1
 where t1.object_id = nlist1.column_value

Plan:

------------------------------------------------------------------------------------------------------------------
| Id  | Operation                          | Name     | Starts | E-Rows | A-Rows |   A-Time   | Buffers | Reads  |
------------------------------------------------------------------------------------------------------------------
|   1 |  NESTED LOOPS                      |          |      1 |   8168 |   1333K|01:27:04.70 |    5422K|   2692K|
|   2 |   COLLECTION ITERATOR PICKLER FETCH| GETLIST1 |      1 |   8168 |   2000K|00:00:06.30 |       0 |      0 |
|   3 |   TABLE ACCESS BY INDEX ROWID      | T1       |   2000K|      1 |   1333K|01:26:51.90 |    5422K|   2692K|
|*  4 |    INDEX UNIQUE SCAN               | T1_I     |   2000K|      1 |   1333K|00:05:39.05 |    4088K|   1382K|
------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   4 - access("T1"."OBJECT_ID"=VALUE(KOKBF$))

In this case the default cardinality of 8168 rows for the collection was left intact. The plan didn't change from Nested Loops, for obvious reasons, but the time and the number of reads skyrocketed. This happened because the collection values were randomly chosen and not sorted as they were in the previous example; access to the base table became more scattered, and subsequently more reads took place. If a key value distribution is known and clustered, it might be worth using co-located values in collections.

Now it's time to move on to different settings. To change the CBO's default cardinality estimate this query uses the officially unsupported CARDINALITY hint. You should ask Oracle for permission before using it in production.

Test 2.1 (adjusted cardinality):

select /*+ gather_plan_statistics */ t1.object_name, t1.object_type
  from t1
 where t1.object_id in
       (select /*+ cardinality(nlist1 1800000) */ *
          from table(cast(test_statement.getList1 as table_of_number)) nlist1)

Plan:

----------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                           | Name     | Starts | E-Rows | A-Rows |   A-Time   | Buffers | Reads  | Writes |
----------------------------------------------------------------------------------------------------------------------------
|   1 |  NESTED LOOPS                       |          |      1 |    255 |   1252K|00:01:37.31 |    5092K|    172K|   2959 |
|   2 |   SORT UNIQUE                       |          |      1 |    255 |   1878K|00:00:27.63 |     160 |   2970 |   2959 |
|   3 |    COLLECTION ITERATOR PICKLER FETCH| GETLIST1 |      1 |        |   2000K|00:00:04.32 |     129 |     11 |      0 |
|   4 |   TABLE ACCESS BY INDEX ROWID       | T1       |   1878K|      1 |   1252K|00:01:04.31 |    5091K|    169K|      0 |
|*  5 |    INDEX UNIQUE SCAN                | T1_I     |   1878K|      1 |   1252K|00:00:23.96 |    3839K|  23944 |      0 |
----------------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   5 - access("T1"."OBJECT_ID"=VALUE(KOKBF$))

No changes in the plan or the statistics. What happened?! Did the CBO even consider the change? Let's take a look at the CBO trace.

***********************
Table Stats::
  Table: KOKBF$  Alias: KOKBF$  (NOT ANALYZED)
    #Rows: 8168  #Blks:  100  AvgRowLen:  100.00
***************************************
SINGLE TABLE ACCESS PATH
  No statistics type defined for function TEST_STATEMENT
  Table: KOKBF$  Alias: KOKBF$
    Card: Original: 8168    >> Single Tab Card adjusted from: 8168.00  to: 1800000.00
  Rounded: 1800000  Computed: 1800000.00  Non Adjusted: 8168.00

It has obviously picked up the hint value, but the final cardinality got the same value as before, for the same reason: the collection is processed as part of the "IN" sub-query. I'll get back to this a little later.

Alright, let's see what the second form looks like.

Test 2.2 (adjusted cardinality):

select /*+ gather_plan_statistics cardinality(nlist1 1800000) */ t1.object_name, t1.object_type
  from t1, table(cast(test_statement.getList1 as table_of_number)) nlist1
 where t1.object_id = nlist1.column_value

Plan:

---------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                          | Name     | Starts | E-Rows | A-Rows |   A-Time   | Buffers | Reads  | Writes |
---------------------------------------------------------------------------------------------------------------------------
|*  1 |  HASH JOIN                         |          |      1 |   1800K|   1333K|00:01:37.68 |     145K|    189K|  45724 |
|   2 |   COLLECTION ITERATOR PICKLER FETCH| GETLIST1 |      1 |   1800K|   2000K|00:00:04.22 |       0 |      0 |      0 |
|   3 |   TABLE ACCESS FULL                | T1       |      1 |     10M|     10M|00:00:31.42 |     145K|    143K|      0 |
---------------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - access("T1"."OBJECT_ID"=VALUE(KOKBF$))

Now the hinted cardinality shows up in the final execution plan, and no wonder the CBO switched the plan from Nested Loops to Hash Join. Compare the logical and physical reads of this test with those of tests 1.1 and 2.1. If I had allowed a bigger hash area for this query, it could have run in-memory and shown even better results. The last test for this post is supposed to fix the query with the sub-query. For this we need one additional modification: this time the CBO parameter "_always_semi_join" has to be changed.
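For reference, the parameter change itself is a one-liner (an undocumented underscore parameter, so treat it as unsupported and clear it with Oracle Support first):

alter session set "_always_semi_join" = off;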

Test 3.1: (adjusted cardinality, “_always_semi_join”=off):

select /*+ gather_plan_statistics */ t1.object_name, t1.object_type
  from t1
 where t1.object_id in
       (select /*+ cardinality(nlist1 1800000) */ *
          from table(cast(test_statement.getList1 as table_of_number)) nlist1)

Plan:

-----------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                            | Name     | Starts | E-Rows | A-Rows |   A-Time   | Buffers | Reads  | Writes |
-----------------------------------------------------------------------------------------------------------------------------
|*  1 |  HASH JOIN                           |          |      1 |   1800K|   1252K|00:01:26.75 |     145K|    195K|  50757 |
|   2 |   VIEW                               | VW_NSO_1 |      1 |   1800K|   1878K|00:00:11.52 |       0 |   6825 |   6825 |
|   3 |    HASH UNIQUE                       |          |      1 |   1800K|   1878K|00:00:09.64 |       0 |   6825 |   6825 |
|   4 |     COLLECTION ITERATOR PICKLER FETCH| GETLIST1 |      1 |   1800K|   2000K|00:00:04.24 |       0 |      0 |      0 |
|   5 |   TABLE ACCESS FULL                  | T1       |      1 |     10M|     10M|00:00:31.40 |     145K|    144K|      0 |
-----------------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - access("T1"."OBJECT_ID"="COLUMN_VALUE")

Why, you might ask, does the Semi Join parameter have any effect on this? In the pre-evaluation phase the CBO tries to apply different query rewrite techniques in order to optimize the query. One of them is sub-query unnesting. In this case it can either do a Semi Join or unnest the sub-query as a set (a collection of distinct values). Since Semi Join is disabled, it used the latter. The sub-query took the following form, with the cardinality hint propagated to the outer query.

SELECT "T1"."OBJECT_NAME" "OBJECT_NAME","T1"."OBJECT_TYPE" "OBJECT_TYPE" FROM  (SELECT /*+ OPT_ESTIMATE (TABLE "KOKBF$" ROWS=1800000.000000 ) */ DISTINCT VALUE(KOKBF$) "COLUMN_VALUE" FROM TABLE(CAST("TEST"."TEST_STATEMENT"."GETLIST1"() AS "TABLE_OF_NUMBER") ) "KOKBF$") "VW_NSO_1","TEST"."T1" "T1" WHERE "T1"."OBJECT_ID"="VW_NSO_1"."COLUMN_VALUE"

Basically, the original query was rewritten into the following form. It is similar to the second original form, with the exception of the distinct clause. The cardinality hint was not ignored as it was in the original query, and the CBO generated the desired plan.

select /*+ cardinality(nlist1 1800000) */
       t1.object_name,t1.object_type
  from t1
     , (select distinct column_value from table(cast(test_statement.getList1 as table_of_number))) nlist1
 where t1.object_id = nlist1.column_value;

A few words in conclusion. When tuning these queries, make sure the CBO doesn't rely on default statistics; in most cases it will otherwise generate a sub-optimal plan, i.e. NL instead of the desired HJ or vice versa. If you cannot disable Semi Join, unnest the "IN" sub-query yourself. When tuning an "IN" sub-query, use a cardinality value close to the number of distinct elements, because that is what the CBO will use in its calculations. If the distribution of the table data is deterministic, co-locate the values of the joined collection whenever possible to increase the clustering of the access operation.


Multiple Versions of I.E on one machine

Duncan Mein - Thu, 2007-10-25 02:39
After the latest round of updates from Microsoft was installed, I foolishly forgot to uncheck the "Upgrade to I.E 7" box, and hence after a reboot a fresh new version of Internet Explorer was waiting for me. On the face of it this appeared OK, until I tried to access Mercury Test Director.

According to the error message, only I.E 6 was supported.

A quick search on Google and I happened upon this website: http://tredosoft.com/Multiple_IE
where I downloaded the installer, which contains multiple versions of I.E from 3.0 to 6.0 that run standalone.

Not only has this fixed my problem of accessing applications, it also allows me to test my applications against earlier versions of I.E.

Very useful indeed

Dear Shit for Brains...

Fake Larry Ellison - Tue, 2007-10-23 13:46

I know you laughed at the iPhone and everything, but now 1995 is calling and it wants its phones back. As in, your phones, Stevie. All of those devices running Windows are out, bro. The people spoke yesterday. They want technology that actually works. They want phones that can actually make phone calls without crashing. People would rather use rotary phones than your Winphones. Yes, I know you're shocked.

CNET cheerleaders go apeshit for MySQL

Fake Larry Ellison - Tue, 2007-10-23 13:15

This time they're citing a survey by the Independent Oracle Users Group (IOUG) and claiming that our users "go ga-ga for open source, including MySQL." Which I guess stands to reason if you only read the first two paragraphs of the survey and then ignore everything else. Like the part where people say that they would never use MySQL for anything other than testing because it's insecure, unsupported, and "typically not used for mission-critical applications."

Tell you what, CNET. Why don't you start using MySQL on your production webservers and stop repeating verbatim what you hear from MySQL's Vice President of Products? Then maybe you'll sound less like an infomercial and more like an actual news source. Maybe.

Oracle Database 11g for Windows now available

Christian Shay - Tue, 2007-10-23 12:36
Download Oracle Database 11g on Windows!

We'll be talking about Windows specific 11g new features at Oracle OpenWorld sessions so reserve your seat today!
