Feed aggregator

Boring life.

Moans Nogood - Thu, 2007-11-29 16:57
As my 18-year-old daughter Christine said yesterday when she arrived at the scene: "There's ALWAYS something going on in this house."

Here's what happened:

We have had insufficient pressure on the cold water in our house these days, which meant that something had to be done, so CarpenterTorben called on Grethe's brother, who can fix water things.

It required breaking up a bit of the floor in our entre/foyer/whatever it's called and digging a big hole outside in order to get to the water pipes. Torben and I joked about an idea: Why not tear down the whole house and build a new one where there's plenty of space for pipes, wires, and such?

Well, one should be careful about what one wishes for.

Grethe's brother arrived around 1400 hours yesterday and within minutes I had complaints from my wife via text messages and phone calls that he was leaving dust everywhere, and how on Earth were we supposed to get it cleaned in time for the weekend?

So when Anette called the third time I was tempted not to answer, but I did.

This time she wasn't worried about the dust. She was in our bedroom upstairs with Viktor and Melina (2 and 9 years old) and the staircase was blocked due to fire and thick smoke downstairs.

Grethe's brother had been cutting a water pipe when a spark ignited the styrofoam used for insulation under the floor. Since he had cut the water supply he acted quickly, took a towel, dipped it in the toilet, and tried to put out the fire that way. Didn't work. It kept creeping further and further under the floor through the styrofoam.

So he called for fire fighters while Anette shut doors upstairs and opened the windows in the bedroom where she was, ready to throw the kids out of the window into either blankets or the arms of the people gathered under the window. She even had the wherewithal to look for tape that could seal the door from the smoke. She then called me, CarpenterTorben and others. Then she made sure the kids were dressed warmly. Cool lady under pressure!

Turns out I can drive 180 km/h on a bike path and run lots of red lights if I have to (due to the rush hour traffic blocking the roads). I found out later that Torben had done the same in an attempt to get quickly from our new office to Kratvej.

When I arrived there were already seven fire trucks, ambulances and police cars on site. Anette and the kids had been saved out of the window by a big, strong fire fighter, and the fire had pretty much been put out.

Anette, Viktor and Melina were taken to the ER, were given oxygen and came home again in a taxi, still with no shoes. All our shoes are pretty much rubbish now. They slept in NabooPeter's house, Christine slept in the Garage and I slept (of course!) on the couch in the living room, just in case somebody wanted to sneak in and steal my laptop or other important things.

So nobody got hurt, and we didn't lose any dear possessions (bar all our shoes), which is very nice.

Oh, the Emergency Service company that took over when the fire fighters left wouldn't listen to CarpenterTorben, who kept saying that it was still burning somewhere underneath the floor. But suddenly they were convinced, too, and the fire fighters had to show up a second time. This time Torben had to break down the floor in the entre/foyer with a huge drill hammer (or whatever it's called) so they could get down to the styrofoam. Otherwise, the fire would have spread underneath the floors to the kitchen and living room (and possibly to the oak table!). Good man, this Torben. I think he saved my house yesterday.

We can't live in the house for some days - it has to be cleaned due to the smoke, particles and such. But Jytte, who runs the local restaurant, has made an apartment upstairs ready for us, and all the neighbours are ready to house us, too.

So we wished for a lot of cold water with high pressure. We got that, courtesy of the Ballerup fire brigade.

We also wished for easy access to the water pipes. We got that.

Anette wanted new shoes. Check.

I never liked the white colour of the entre/foyer walls. Man, it's black now.

Viktor always loved fire trucks. Check.

Women dream about being rescued out of a burning building through the window by a big, strong fire fighter. Check.

Tomorrow we'll have the traditional Danish Christmas lunch at Miracle. We have things to talk about now.

Mogens

PS: And I forgot to mention that it was CarpenterTorben's birthday that day!

After 9.2.0.8 patchset adgrants.sql fails "O/S Message: No such file or directory"

Madhu Thatamsetty - Wed, 2007-11-28 21:40
SYMPTOM:

sqlplus "/ as sysdba" @adgrants.sql

SQL*Plus: Release 9.2.0.8.0 - Production on Sun Nov 18 16:14:16 2007
Copyright (c) 1982, 2002, Oracle Corporation. All rights reserved.

Connected to:
Oracle9i Enterprise Edition Release 9.2.0.8.0 - 64bit Production
With the Partitioning, OLAP and Oracle Data Mining options
JServer Release 9.2.0.8.0 - Production

O/S Message: No such file or directory

Converting MySQL "on update current_timestamp" to Oracle

Hampus Linden - Wed, 2007-11-28 15:20
Another short, simple SQL tip for everyone out there in the process of converting old MySQL schemas to Oracle.

MySQL has a built-in feature to automatically update a column to the current timestamp whenever the row is updated, just by using the default clause. The "on update current_timestamp" feature can be quite handy if you have lazy developers who can't be bothered writing full insert statements. :)

The MySQL create table statement would be something like this:
create table p (
id int,
a varchar(10),
d timestamp DEFAULT CURRENT_TIMESTAMP
ON UPDATE CURRENT_TIMESTAMP,
constraint p_pk primary key (id)
);

Not difficult to do in Oracle either, but we need a trigger to assist.
Example:
SQL> alter session set nls_Date_format='HH24:MI:SS';

Session altered.

SQL> create table p (id number, a varchar2(10), d date default sysdate,
constraint p_pk primary key (id));

Table created.

SQL> insert into p(id,a) values(1,'test');

1 row created.

SQL> host cat p_test.sql
CREATE OR REPLACE TRIGGER p_d_trig
BEFORE UPDATE ON p
FOR EACH ROW
BEGIN
select sysdate into :new.d from dual;
END p_d_trig;
/

SQL> @p_test

Trigger created.

SQL> select * from p;

ID A D
---------- ---------- --------
1 test 21:15:05

SQL> update p set a='foo' where id=1;

1 row updated.

SQL> select * from p;

ID A D
---------- ---------- --------
1 foo 21:16:44

SQL>
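If you also want fractional seconds, as MySQL's TIMESTAMP column provides, the same pattern works with an Oracle TIMESTAMP column and systimestamp. A minimal variant sketch (not part of the original example; the table name p2 is made up):

create table p2 (id number, a varchar2(10), d timestamp default systimestamp,
constraint p2_pk primary key (id));

CREATE OR REPLACE TRIGGER p2_d_trig
BEFORE UPDATE ON p2
FOR EACH ROW
BEGIN
  -- direct assignment; no need to select from dual
  :new.d := systimestamp;
END p2_d_trig;
/

The direct assignment also works for the DATE version above and avoids the extra round trip through dual.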

Safe Harbor Statement

Chris Grillone - Wed, 2007-11-28 12:53
The postings on this blog are intended to outline general product direction. They are intended for information purposes only, and may not be incorporated into any contract. They are not a commitment to deliver any material, code, or functionality, and should not be relied upon in making purchasing decisions. The development, release, and timing of any features or functionality described for Oracle’s products remains at the sole discretion of Oracle.

Data Auditing in Oracle Applications - Audit Trail

Aviad Elbaz - Wed, 2007-11-28 02:01
Following my last post about Audit users in Oracle Applications, in this post I’m going to talk about the Audit Trail - the data auditing feature in Oracle Applications.
AuditTrail enables us to know who changed what, and when, on each table we would like to audit.
For example, some of the profiles in Oracle Applications are critical for the normal activity of the system (like MO: Operating Unit).
We might want to know who changed the value of such critical profiles, and Audit Trail can certainly help us.

Now I'll show a step-by-step demo of how to enable Audit Trail on a table, continuing my previous example with the profiles table.
The table we would like to audit is FND_PROFILE_OPTION_VALUES.

1) Logon to system with System Administrator responsibility.

2) Make sure the profile “AuditTrail:Activate” is set to Yes

3) Check the audited table's owner
select owner
from dba_tables
where table_name like 'FND_PROFILE_OPTION_VALUES';

The owner is APPLSYS.

4) Check that audit is enabled for the APPLSYS user
Go to: Security -> AuditTrail -> Install -> Query for APPLSYS




5) Define new Audit Group
Since auditing is enabled on groups of tables, we should define a new Audit Group that will contain the FND_PROFILE_OPTION_VALUES table.

Find the table’s application name by this query:
select fav.application_name
from fnd_application_vl fav, fnd_tables ft
where fav.application_id = ft.application_id
and ft.table_name = 'FND_PROFILE_OPTION_VALUES';

The query result: Application Object Library

Navigate to: Security -> AuditTrail -> Groups

Application: Application Object Library
Audit Group: AC FND Profile Values Audit
Group State: Enable Requested


At this level, the columns that will be audited are the columns of the primary key or the first unique index on the audited table.
You can add as many audited columns as you want.

6) Check/Add column to audit
Navigate to: Security -> AuditTrail -> Tables
Query for FND_PROFILE_OPTION_VALUES table.


Add column PROFILE_OPTION_VALUE to be audited.


7) Run the "AuditTrail Update Tables" concurrent request to enable the audit
Navigate to: Requests -> Run -> Single Request -> choose request name: “AuditTrail Update Tables” -> Submit


None of the definitions we have made so far will take effect until we run this concurrent request.
This request creates all the objects (triggers and procedures) required for auditing, plus views to retrieve the audited data.

This request should be executed each time we make changes in audit definitions, to generate new audit objects.

To make sure it works, we can check if all audit objects were created:
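The original screenshot is not reproduced here. One way to check, sketched as a query against DBA_OBJECTS (my assumption about how to verify it, not the author's exact screen), could be:

select object_name, object_type, status
from dba_objects
where object_name like 'FND_PROFILE_OPTION_VALUES_A%';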


All audit object names will be the first 26 characters of the audited table name plus a suffix (_A, _AC, _AD...).
We can also run the report "AuditTrail Report for Audit Group Validation" to make sure all audit objects were created successfully.


Now let's run a test to see how it works:
To test the audit on the profiles table, we will change the value of a profile (any profile) and check the shadow table (fnd_profile_option_values_a) for the audit information.

Navigate to: Profile -> System, and update the profile "FND: Diagnostics" (just for testing... you can choose any profile) to Yes.

To see the audited data of the last change we can run this query:
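The query in the original screenshot is not reproduced here; a minimal sketch against the shadow table, assuming the standard audit columns (AUDIT_TIMESTAMP, AUDIT_USER_NAME, AUDIT_TRANSACTION_TYPE), might look like:

select audit_timestamp,
       audit_user_name,
       audit_transaction_type,
       profile_option_value
from fnd_profile_option_values_a
order by audit_timestamp desc;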


The change was audited as expected...

As you can see, the “U” in the audit_transaction_type column indicates an update, and the profile_option_value contains the value before the update.

The audit_transaction_type could be:
1) U - update
2) D - delete
3) I - insert


How to disable Audit Trail?

AuditTrail can be stopped in three ways:
1) Disable Prepare for Archive
2) Disable Interrupt Audit
3) Disable Purge Table

I've only tried the third… It deletes the data from the shadow table and drops all the audit objects from the database.

1) Login to application and choose the System Administrator responsibility.
2) Security -> AuditTrail -> Groups
3) Query for your group
4) Update the Group State field to: “Disable – Purge Table”.
5) Run the "AuditTrail Update Tables" concurrent request to apply the changes.

That’s all about auditing…

You can read more at "Oracle Applications System Administrator’s Guide - Security Release 11i" - Chapter 5 - User and Data Auditing

To get my posts directly to your email, you can register for email subscription by using the box at the right side bar.

You are welcome to leave a comment.

Aviad

Categories: APPS Blogs

AMR Research: The Manufacturing Operations Software Application Market Sizing Report, 2006-2011

Chris Grillone - Tue, 2007-11-27 10:28
Published: October 29, 2007
Analyst: Alison Smith

"While SAP has placed its manufacturing operations bets on delivering performance visibility, an integration stack, and a strong network of manufacturing partners, Oracle is taking a different strategy, adding deep manufacturing operations management capabilities that are tightly integrated with core-ERP financial accounting functions, resource planning, and its Daily Business Intelligence. Oracle’s functionality for manufacturing execution rivals that offered by best-of-breed competitors and includes centralized creation of routings and workflows that accommodate local customization, part- and lot-level track and trace and genealogy, in-process quality tracking, work-in-process (WIP) tracking, marriage of as-built record with the as-planned product design, and of course, EAM functionality that can be linked directly to real-time asset status data via Oracle’s Sensor Edge server technology."

Starting UNIX Enterprise Servers

Mark Vakoc - Mon, 2007-11-26 21:34
Starting and stopping an enterprise server is an easy task. Start the appropriate service on Windows, run STRNET in the appropriate system library on iSeries, and execute the RunOneWorld.sh script on UNIX based platforms. Simple enough, right? While the first two examples are as easy as they sound, configuring the proper environment in order to start the UNIX based servers has become an art form of its own. This article will detail the steps, fixes, and other details of how Server Manager starts the enterprise server on UNIX based platforms.

The various install permutations of apps releases and the introduction of the platform pack have resulted in different environments for the UNIX servers. Prior to 8.11SP1 an installer was used to install the enterprise server code. This installer created a script, named .oneworld, in the operating system user's home directory and modified the user's .profile to call this script. The .oneworld script defined several environment variables used by RunOneWorld.sh and other scripts, including the EVRHOME and SYSTEM variables. These environment variables must be properly set in order for the startup scripts to operate properly.
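As a rough illustration only (the paths are hypothetical, not taken from an actual installer), a .oneworld-style script would define and export something along these lines:

# hypothetical sketch of a .oneworld environment script
EVRHOME=/u01/jdedwards/e810       # enterprise server installation path (example)
SYSTEM=$EVRHOME/system            # system directory containing the tools release
export EVRHOME SYSTEM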

8.11SP1 introduced the platform pack, an improved installation program used to install the enterprise server and database files. The platform pack did away with the .oneworld script, instead favoring a more robust script named enterpriseone.sh located in the $EVRHOME/SharedScripts directory.

The $EVRHOME environment variable refers to the installation path of the enterprise server. Underneath this directory would be, among other things, the pathcodes and system directory containing the tools release.

The user's .profile was modified to call the enterpriseone.sh script during login.
Since Server Manager supports all application releases from 8.9 through 8.12, the startup and shutdown logic must accommodate both of these conventions. In all cases the corresponding environment setup script (.oneworld or enterpriseone.sh) must be called prior to calling RunOneWorld.sh or EndOneWorld.sh.

8.97.0.0 Server Manager

The 8.97.0.0 release of Server Manager contained several issues around properly setting up the environment prior to invoking the start/stop scripts. Most of these issues have been addressed by 8.97.0.1 and as such all UNIX based installations should immediately upgrade server manager to 8.97.0.1.

8.97.0.1 Server Manager

Server Manager starts/stops the enterprise server by calling the scripts startEntServer.sh/stopEntServer.sh located in the $EVRHOME/SharedScripts location. The scripts are dynamically created each time the server is started or stopped. If that directory does not exist (as it may not in 8.9, 8.10, or 8.11) it will be created. The startEntServer.sh and stopEntServer.sh scripts do the following tasks:
  • Include the .oneworld or enterpriseone.sh script, depending on release
  • Call the RunOneWorld.sh or EndOneWorld.sh script
Including the appropriate environment scripts resolved most of the startup issues discovered.

There remains one pending issue for IBM UDB users: prior to 8.11SP1 the installation instructions required the user to add a call to the db2profile script used to set up the environment needed for the UDB binaries and environment variables. The documentation instructed the user to add this call to the operating system user's .profile. The startEntServer.sh/stopEntServer.sh scripts did not directly include the db2profile script call, resulting in an enterprise server started using Server Manager failing to connect to UDB. The jde.log files for the kernel processes that establish a database connection would include numerous errors indicating that libodbc.[sl or so, depending on platform] failed to load.

8.97.0.next Server Manager

The next tools release will include some fixes to address the UDB issue mentioned above. As the startEntServer.sh and stopEntServer.sh files are created, during start or shutdown, the management agent will parse the user's .profile looking for a call to the db2profile script. If found, it will add a call to this script, thus properly setting up the UDB environment.

If you are using 8.97.0.1 Server Manager in a UDB environment you can easily work around this issue by adding a call to db2profile to the .oneworld script rather than the .profile.
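For example (the db2profile path below is an assumption; use the path belonging to your UDB instance owner), the line added to .oneworld might look like:

# appended to .oneworld so the generated start/stop scripts pick up the UDB environment
. /home/db2inst1/sqllib/db2profile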

Depending on how the user is set up, how the db2profile script is called, and the virtually unlimited permutations, we realize that our .profile parsing logic may not always find the db2profile script. There may also be other environment setup that we did not anticipate. Since the startEntServer.sh and stopEntServer.sh scripts are dynamically created it is not possible to modify these files directly. To address this, a check for the scripts startExtras.sh and stopExtras.sh has been added to the dynamically created scripts. We do not deliver these scripts; however, if present they will be invoked. This permits the administrator to add any additional setup each time the server is started or stopped. These scripts are located in $EVRHOME/SharedScripts and are available for all apps releases.
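A minimal startExtras.sh sketch, assuming any site-specific setup is placed there (the sourced script below is a placeholder):

#!/usr/bin/ksh
# $EVRHOME/SharedScripts/startExtras.sh - invoked by the generated startEntServer.sh when present
# add any additional environment setup the parsing logic might miss, for example:
. /home/db2inst1/sqllib/db2profile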

The JVM Issue

The 8.96 tools release introduced Java based kernel processes using a bundled JVM for the metadata kernel. 8.97 makes further use of this capability for both the BI Publisher and Server Manager kernel processes. The required JVM is bundled with the tools release (except on iSeries, where it is built into the OS).

The Server Manager managed home agent also includes a bundled JVM (again, except for iSeries). The 8.12 platform pack (and the included enterpriseone.sh shell script) properly configures the LD_LIBRARY_PATH or SHLIB_PATH (depending on platform) to define the directories from which shared libraries should be loaded.

The scripts created by the installers/platform packs prior to 8.12 did not have these additional environment configuration parameters needed to properly start up the enterprise server. With Server Manager the startup scripts are invoked by a Java process (our bundled JVM, based on 1.5). The JVM will modify the LD_LIBRARY_PATH or SHLIB_PATH to include locations specific to the 1.5 based JDK we deliver. This will conflict with the 1.4 JDKs delivered with the enterprise server tools release.

If you installed your enterprise server using the 8.12 or later platform pack stop reading; this issue will not affect you. If you initially registered your enterprise server using the 8.97.0.2 release or later stop reading; we have fixed the issue.

If you installed your enterprise server using 8.11SP1 or earlier AND registered your enterprise server using 8.97.0.1 or earlier server manager you may still have an issue. We modified, during the registration process, the .oneworld or enterpriseone.sh script to properly setup the LD_LIBRARY_PATH/SHLIB_PATH for you. However, there was a flaw that you may have to manually correct. If you look in these files you will see a line that was added by server manager that defines the JVM_LIB environment variable, and the LD_LIBRARY_PATH/SHLIB_PATH are modified to include it. The flaw, however, is in the order added. We appended the JVM_LIB environment variable to the existing LD_LIBRARY_PATH/SHLIB_PATH. Instead it should have been prepended to it.

To resolve this issue edit the appropriate file and ensure that LD_LIBRARY_PATH is redefined to be something like:

LD_LIBRARY_PATH=$SYSTEM/lib:$JVM_LIB:...:$LD_LIBRARY_PATH

rather than

LD_LIBRARY_PATH=$SYSTEM/lib:...:$LD_LIBRARY_PATH:$JVM_LIB

Summary

If you registered your enterprise server using 8.97.0.2 or later server manager don't worry -- all these issues have been resolved.

If you registered your enterprise server using 8.97.0.1 or earlier server manager and your enterprise server was installed using 8.11SP1 or earlier platform pack/installer you may need to modify the LD_LIBRARY_PATH/SHLIB_PATH to move the JVM_LIB definition.

If you use UDB and you are using 8.97.0.1 or earlier server manager and your enterprise server was installed using the 8.11 or earlier installer you may need to add a call to db2profile to the .oneworld shell script.

Installing Server Manager for Non-English Machines

Mark Vakoc - Mon, 2007-11-26 18:01
An issue has been discovered that prevents the successful installation of Server Manager when the user that performs the installation is configured for something other than English. The problem relates to messages that are output by the embedded container. The installer is looking for particular words, in English, and the container is outputting that text localized. So even though the installation was successful, the installer thinks it failed and begins to uninstall.

A workaround is to change the current user's language, in Windows, back to English and perform the installation.  Also make sure the 'Language for non-Unicode programs' on the advanced tab is configured to English as well.  These settings can be found in the 'Regional and Language Options' control panel option in Windows.

Once the installation is complete you may return to using the desired language of choice.

The Management Kernel

Mark Vakoc - Mon, 2007-11-26 12:04
Tools release 8.97 adds two new kernel definitions to the enterprise server: the BI Publisher (XML Publisher) and Server Manager kernels. Like the metadata kernel introduced in 8.96 these new kernels are Java based kernels that start their own JVM instance to run Java code rather than the traditional C based code of kernels past. SM will automatically add and configure these kernels into the JDE.INI when changing the tools release to 8.97. This post will focus on the server manager kernel.


The server manager kernel (number 32 for those keeping count) operates on all platforms and loads a JVM upon startup. This JVM will in turn load the management agent. This is the same codebase as that used by the managed home agent and is used to provide runtime information about the enterprise server to the management console. It uses a couple of INI settings, configured automatically, to provide it with the instance name and managed home location associated with the enterprise server instance. From the managed home location it will read the agent.properties discussed in previous posts to obtain the connection details for the management console. It also follows the same connection logic used to establish communications as outlined in that post.


The management console uses this kernel to obtain all the runtime information about the server such as the active process list. It also uses this agent to expose and provide the log files that are active for the processes that appear in the process list.

This kernel definition is a singleton; that is, only a single process may be active, and it must be configured to start automatically (both of which are configured automatically). In fact the management console will not permit configuring more than one process for this kernel. The underlying network communication used by the management agents is different from the JDENET communication used by other kernels. As such the singleton can properly handle any amount of load; introducing additional processes is not necessary and will cause unexpected results.

The SAW Kernel

Although SAW has been replaced in 8.97, there are still portions of the SAW infrastructure that are used by Server Manager. On the enterprise server the SAW kernel is still used. The management kernel (Java) exposes the runtime information using the JMX standard. It obtains much of this information by sending JDENET messages to the SAW kernel.

It is advisable to continue to run multiple SAW processes. The information exposed by the management kernel is collected periodically, approximately every 30 seconds. After that elapsed time JDENET messages are sent to the SAW kernels to collect the information. Having multiple SAW kernels will ensure that the information update to the management kernel occurs quickly and reduces the likelihood of any of these messages timing out.

Troubleshooting

In reality there is very little that needs to be known about this kernel. That said, if the runtime information isn't available for a running enterprise server it may be desirable to investigate the log files for the kernel. There are two sets of log files of interest: the jde/jdedebug.log and the Java log files.

The first place to check is the server manager jde.log. Since the process list isn't available (that's what we're troubleshooting) we'll have to randomly go through the log files exposed on the management page for the enterprise server until we find the management kernel. A successful startup is shown below. If there are any errors about starting up the JVM, loading the SM classes, or any other content in this log file this is the first place to look.

If there are no problems indicated in the JDE.LOG you may look at the java based logs. Since this is standard E1 Java code, logging is configured in the jdelog.properties file contained within the system/classes directory. For each log defined in the jdelog.properties you will see a corresponding log file created for each java based kernel process. Since multiple processes can be defined for the other java kernel types (metadata and xml publisher) the process id is added to the filename, as shown below:


Summary

The management kernel provides Server Manager with powerful new monitoring capabilities. The operation of the management kernel can generally be ignored and should only be of concern should runtime information not be available for a particular enterprise server in the management console web application.

How to Import the PUTTY Settings from One Machine to Another Machine

Madan Mohan - Mon, 2007-11-26 00:15
Most DBAs work 24x7 and find it difficult and time consuming to configure the whole list of server settings in PuTTY. I went through some Google hits and found one workaround to import your PuTTY settings.


Work Around
************

1. Run the following command at the command prompt on the source machine: regedit /e "%userprofile%\desktop\putty.reg" HKEY_CURRENT_USER\Software\SimonTatham

2. Copy putty.exe and putty.reg onto the target machine.

3. Right-click putty.reg and choose "Merge"; this will import the settings into the target registry, after which you will see all the server details you defined earlier on the source machine.
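Alternatively, if you prefer the command prompt over the right-click menu, the same import can be done silently with regedit's standard /s switch (assuming putty.reg is in the current directory): regedit /s putty.reg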

Note: Simon Tatham is the person behind the PuTTY software.

Copy and Compress the Datafiles using multiple Processes.

Madan Mohan - Fri, 2007-11-23 00:35
Following are the scripts used for copying and compressing datafiles within the same server.

Scripts
*********

1. copy_process.sh ------> This file contains all the functions used for the copy and compress.

2. copy_file_process.sh ------> This file contains the commands used for copying, compressing and uncompressing. It is called by copy_process.sh.

3. worker_no ------> Defines the number of worker processes for the whole run. This value can be changed dynamically with a command such as: echo 4 > worker_no


copy_process.sh
****************

#! /usr/bin/ksh
##################################################################################
# bkp_dir_path is the destination (TO) directory
# src_path is the source (FROM) directory
# worker_pid is the worker pid file name prefix
# worker_no is the file holding the number of workers; it can be adjusted while the script is running
# example: to set 3 workers, perform the following before running the script: echo 3 > worker_no

function Gen_Env
{
bkp_dir_path="/tmp/to_data"
src_path="/tmp/from_data"
Log_Date=$(date +"%d%m%y")
worker_pid=/tmp/copy_process_
worker_no=/tmp/worker_no
worker_max_count=0
COPY_FILE_PROCESS=/tmp/copy_file_process.sh
File_List=$(cd $src_path; ls -l *.dbf |grep -v cntrl | awk '{print $9}')
}

function Copy_Phase
{
num_copy_workers=`cat $worker_no`
if [[ $worker_max_count -lt $num_copy_workers ]]; then
worker_max_count=$num_copy_workers
fi

worker=1
while [[ $worker -le $num_copy_workers ]]
do
worker_file=${worker_pid}${worker}.pid
if [[ ! -s $worker_file ]]; then
echo "Busy" > $worker_file
$COPY_FILE_PROCESS $src_path $datafile $worker $worker_file $bkp_dir_path &
echo "Copy Assigned to Worker pid file: $worker"
Copy_Assigned=Yes
return
fi
let worker=$worker+1
done
sleep 5
}

function Check_Final_Copy
{
worker=1
while [[ $worker -le $worker_max_count ]]
do
worker_file=${worker_pid}${worker}.pid
if [[ -s $worker_file ]]; then
echo "Background Copy is still Running... $(date)"
sleep 60
CheckFinal=No
return
else
rm -f $worker_file
fi
let worker=$worker+1
done
CheckFinal=Yes
}

function main
{
Gen_Env
echo "Total number of Datafiles in the Source Instance"
src_dbf_count=`ls $src_path|wc -l`
echo $src_dbf_count
date > $bkp_dir_path/time.log
for datafile in `echo $File_List`
do
echo "Copy $datafile : $(date)"
echo "Wait for the Next Worker ... $(date)"
Copy_Assigned=No
while [[ $Copy_Assigned == 'No' ]]
do
Copy_Phase
done
done

CheckFinal=No
while [[ $CheckFinal == 'No' ]]
do
Check_Final_Copy
done
date >> $bkp_dir_path/time.log
echo "**********************************************"
echo " Copy Process Completed Successfully"
echo "**********************************************"
echo "Number of Data files copied to the target "
target_dbf_count=`ls $bkp_dir_path|wc -l`
echo $target_dbf_count
}
main

################### End of Copy_process.sh ###########


copy_file_process.sh
**********************

#!/usr/bin/ksh
##############################################################
## Copy and compress script
## Phase - I = Copy the files from Source to Destination
## Phase - II = Compress the Destination Files.
## Phase - III = Uncompress the Destination File. (if required)
##
##
##############################################################
s_path=$1
filename=$2
worker_no=$3
statusfile=$4
d_path=$5
statuss=`cat $statusfile`
##############################################################
if [[ $statuss == 'Busy' ]]; then
echo "$$" >$statusfile
## Phase I
echo "Copying $filename by worker $worker_no"
cp $s_path/$filename $d_path
sleep 5

# Phase II
echo "Zipping $filename by worker $worker_no"
/usr/bin/gzip $d_path/$filename
sleep 5

#Phase III
#echo "Unzipping $filename.gz by worker $worker_no"
#/usr/bin/gunzip $s_path/$filename.gz

>$statusfile
else
echo "Process is not Busy"
fi

echo '*********************'


################### END of copy_file_process.sh #############


Execution Syntax
*****************

1. Update the directory paths for the variables below within the script copy_process.sh.

a) bkp_dir_path ---> Where to back up the datafiles (destination).
b) src_path ---> Location of the source data files.
c) COPY_FILE_PROCESS ---> Location of the copy_file_process.sh script.


Syntax
********

nohup <location of copy_process.sh>/copy_process.sh &

Things I learned at Oracle Open World in SFO 2007

Moans Nogood - Tue, 2007-11-20 15:53
I just came back from a few days at OOW, and it was fun. I shared a biggish apartment with Anjo Kolk, Krister (Sweden) and Oliver (Danish CSC), and it was beautiful to see the beer bottles (good beers, mind you!) gradually filling up the kitchen table allocated for that purpose.

On Sunday, November 11, I was invited to a seven-hour briefing for Oracle ACE Directors (I am such a thing). It was mostly about Fusion Middleware (MW) and in the end a bit about the 11g database.

I have three observations:

1. During the MW presentations I saw more acronyms than in my entire military career.

2. 'Oracle' was the only word with less than seven letters in all those slides.

3. The best thing that can happen to any product is to be bought by Oracle. Turns out that the purchase itself will transfer the product overnight from being worth-, use- and hopeless to being an absolutely state-of-the-art, best-of-beer product.

Interesting to learn that MW is database agnostic and Apps server agnostic. This obviously generates some interesting discussions inside Oracle.

And, man, do things change in the MW: Forget SOA, here comes SCA. Forget hub-and-spoke - it's just SO yesterday. Forget Portal - here comes WebCenter.

I am tempted to quote the standup comedian Billy Connolly, who said some years ago: "... and it will all change tomorrow, so f.... stay awake!"

Apart from that, it was a good day with knowledgeable presenters, and I learned a lot. Thanks to the Oracle ACE ladies (Emily & Victoria) for setting this up.

Monday, Tuesday and Wednesday I simply set up a virtual office at the tex-mex restaurant Chevy's and met a bunch of friends from inside and outside Oracle during those days. It was good, and it generated a lot of good ideas.

There were 1600 presentations in total. 100 of these were database-related. Interesting.

Upgrading the Recovery Catalog Database from 9i to 10g

Madan Mohan - Mon, 2007-11-19 19:18
The RMAN catalog upgrade is the same as a normal database upgrade and can be accomplished in two ways.

a) Upgrade the database from 9i to 10g
- Connect to the RMAN catalog database.
- Issue the RMAN command "upgrade catalog"; this upgrades the catalog schema from 09.02.00 to 10.02.00.03.

b) Fresh install of a 10g database / use an existing 10g database
- Create the rman user and grant create session, recovery_catalog_owner and create type to it.
- Export/import the RMAN schema.
- Issue the RMAN command "upgrade catalog".


Note: You will encounter the warning message below if you have not upgraded the catalog after the database version upgrade.
connected to target database: DSSPROD (DBID=1021024992)
connected to recovery catalog database
PL/SQL package RMAN.DBMS_RCVCAT version 09.02.00 in RCVCAT database is too old

Solution
**********
1. Connect to the recovery catalog database as the rman user.
2. Issue the RMAN command "upgrade catalog" twice (a sketch of the sequence follows).
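A minimal sketch of that sequence (the connect string and password are placeholders for your environment):

rman catalog rman/<password>@catdb

RMAN> upgrade catalog;
RMAN> upgrade catalog;

RMAN deliberately asks for the command a second time as confirmation, which is why step 2 says to issue it twice.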

Debugging with the APEX repository

Anthony Rayner - Mon, 2007-11-19 06:56
Introduction...
I was recently working on an APEX application on a familiar 'Form on a Table' wizard-built page and encountering an ORA-00957 Duplicate Column Name error, meaning that a column name was specified more than once in the INSERT.

So the problem had to be that I had more than one page item bound to the same database column. This page had got quite bloated with lots of other business logic, so I started by turning off regions (the old 'process of elimination') and debugging to see if there was anything obvious, but nothing sprang out. Then I thought: if only there was a way to view all the page items on my page bound to the same database column. Well there is, using the APEX repository.

For those of you who are not familiar with the repository, it is basically a set of views that expose all the APEX application metadata. For further information see Have a clean up, utilising the APEX repository which contains links to loads more information.


How...
In the repository there is a view called APEX_APPLICATION_PAGE_DB_ITEMS which is what we are interested in. It is described in the APEX_DICTIONARY view as...

'Identifies Page Items which are associated with Database Table Columns. This view represents a subset of the items in the APEX_APPLICATION_PAGE_ITEMS view.'

We can then run the following query to return all the items for a specific application / page bound to a db column more than once.
SELECT   db_column_name,
SUM(1) Duplicates
FROM apex_application_page_db_items
WHERE page_id = :page_id
AND application_id = :app_id
GROUP BY db_column_name
HAVING SUM(1) > 1
ORDER BY SUM(1) DESC
(Note: This view does not contain conditional rendering information, so if this was required you would need to join to APEX_APPLICATION_PAGE_ITEMS on ITEM_ID).


Conclusion...
So if ever you think, I wish I could see this information about this page / report or whatever, you probably can, just take a look into the repository. I wonder if there would be any scope for an APEX debugging framework that defines sets of processes linked to common 'ORA' errors. So for this example, it would simply be:

Error: ORA-00957 - Duplicate column name
  • Step 1: Run the following query, binding in your page and application id.
    SELECT   db_column_name,
    SUM(1) Duplicates
    FROM apex_application_page_db_items
    WHERE page_id = :page_id
    AND application_id = :app_id
    GROUP BY db_column_name
    HAVING SUM(1) > 1
    ORDER BY SUM(1) DESC

  • Step 2: Investigate all rows returned from the query and unbind items which should not be bound to the database column.

  • Step 3: Retest your page.


That would be nice.
Anthony.

Categories: Development

Survey results

Anthony Rayner - Mon, 2007-11-19 06:53
Following the recent survey I conducted on this blog, 'What would you like to read more about on my blog?', the results were:

Developer Tips - 54%
AJAX General - 45%
AJAX with JSON - 45%
BI Publisher Integration - 33%
Access Migration - 0%

I will thus be focusing my efforts on Developer Tips and AJAX related posts in the near future. I was quite surprised at the 0% interest in 'Access Migration' and would have thought it would be quite popular, as Access is one of the main platforms the systems that APEX applications replace are built on. Or maybe it's just that developers aren't using the migration functionality built into APEX / SQL Developer and are just doing it without looking at these. Interesting.

Thank you all for your feedback. More to come shortly.
Anthony.

Categories: Development

BLOB write size and CPU

Vlad Sadilovskiy - Mon, 2007-11-19 01:21

Someone asked me to help identify a strange problem. The problem turned out to be a combination of two issues, ASSM and the BLOB loading software, that manifested as excessive CPU utilization. In this post we will see how different write buffer sizes can affect write efficiency.

Unfortunately, Tkprof and Statspack often fail to point in the proper direction when it comes to profiling activity performed by a call-less cursor. In this case such a cursor was responsible for writing and reading BLOBs. This issue was not specific to the JDBC API; it was later reproduced with the DBMS_LOB API.

As an additional point to the topic mentioned above, here is an example of Tkprof and Statspack reports from the system where this issue was reproduced with the help of a single-threaded application in an idle environment.

Tkprof

OVERALL TOTALS FOR ALL NON-RECURSIVE STATEMENTS

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse       16      0.00       0.02          0          0          0           0
Execute     23      3.09       4.57          6      13894      18366          17
Fetch        3      0.00       0.00          0          6          0           3
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total       42      3.10       4.59          6      13900      18366          20


OVERALL TOTALS FOR ALL RECURSIVE STATEMENTS

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse   230212      5.39       5.14          0          0          0           0
Execute 230215    176.87     173.89         47     692034     175390      176321
Fetch   153422      4.22       3.96          5     230821          3       77048
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total   613849    186.49     183.01         52     922855     175393      253369

Statspack

Load Profile                            Per Second       Per Transaction
~~~~~~~~~~~~                       ---------------       ---------------
                  Redo size:          1,553,489.88        668,736,513.89
              Logical reads:            630,504.58        271,415,631.42
              Block changes:                782.47            336,833.37
             Physical reads:                171.33             73,752.42
            Physical writes:                181.92             78,311.00
                 User calls:                 42.80             18,424.58
                     Parses:                 44.40             19,114.63
                Hard parses:                  0.00                  1.00
                      Sorts:                  0.39                169.89
                     Logons:                  0.00                  0.84
                   Executes:                 44.94             19,344.89
               Transactions:                  0.00

Top 5 Timed Events                                                    Avg %Total
~~~~~~~~~~~~~~~~~~                                                   wait   Call
Event                                            Waits    Time (s)   (ms)   Time
----------------------------------------- ------------ ----------- ------ ------
CPU time                                                     7,764          95.9
log file parallel write                         85,190         147      2    1.8
log file switch (checkpoint incomplete)            246          88    357    1.1
log file switch completion                         502          76    151     .9
control file parallel write                      7,973           9      1     .1
          -------------------------------------------------------------

Instance Activity Stats
Statistic                                      Total     per Second    per Trans
--------------------------------- ------------------ -------------- ------------
...
db block gets                          5,116,321,961      625,543.7 ############
db block gets direct                       1,442,703          176.4     75,931.7
db block gets from cache               5,114,879,258      625,367.3 ############
...
physical writes                            1,487,909          181.9     78,311.0
physical writes direct                     1,444,083          176.6     76,004.4
physical writes direct (lob)               1,442,684          176.4     75,930.7
session logical reads                  5,156,896,997      630,504.6 ############
...

Segments by Logical Reads
                                           Subobject    Obj.       Logical   Pct
Owner      Tablespace Object Name          Name         Type         Reads Total
---------- ---------- -------------------- ------------ ----- ------------ -----
SYS        SYSTEM     SEG$                              TABLE      434,992  23.6
SYS        SYSTEM     TSQ$                              TABLE      432,816  23.5
TEST       TEST       SYS_LOB0000056448C00              LOB        289,808  15.7
SYS        SYSTEM     I_FILE#_BLOCK#                    INDEX      288,448  15.7
SYS        SYSTEM     FILE$                             TABLE      220,416  12.0
          -------------------------------------------------------------

As can be seen, the number of session logical reads in the test doesn't align well with the figures in the Segments by Logical Reads section. Tkprof doesn't even have a clue about the 7K CPU seconds.

But let's get back to business. What could be happening that triggered such a vast amount of reads when writing BLOBs? A call to the vendor of the software revealed that the BLOB data is written in chunks, and that for each chunk the BLOB was closed and reset with an updated position. So we duplicated this behavior and got the following statistics using "runstats" and DbmsOutput.java shared by Tom Kyte and the Java API shared by R. M. Menon on AskTom.

BLOB...chunk size                    32,768  32,768  32,768  32,768  32,768  32,768  32,768  32,768  32,768
BLOB...write buffer size             2,048   4,096   8,132   8,192   16,264  16,384  24,396  32,528  32,768
STAT...consistent gets               971     375     119     319     51      234     27      190     222
STAT...consistent gets direct        252     159     52      79      16      32      9       4       20
STAT...db block gets                 6,146   884     423     479     209     268     109     100     208
STAT...db block gets direct          268     140     64      76      32      44      32      16      36
STAT...physical reads direct (lob)   252     124     48      60      16      28      16      0       20
STAT...physical writes direct (lob)  268     140     64      76      32      44      32      16      36
STAT...session logical reads         7,117   1,259   542     798     260     502     136     290     430

BLOB...chunk size                    16,384  16,384  16,384  16,384  16,384  16,384  16,384  16,384  16,384
BLOB...write buffer size             2,048   4,096   8,132   8,192   16,264  16,384  24,396  32,528  32,768
STAT...consistent gets               449     222     99      213     47      74      25      35      63
STAT...consistent gets direct        126     80      24      53      8       22      3       8       19
STAT...db block gets                 3,904   733     397     813     182     295     113     143     250
STAT...db block gets direct          142     78      32      46      16      30      22      16      26
STAT...physical reads direct (lob)   126     62      16      30      0       14      6       0       10
STAT...physical writes direct (lob)  142     78      32      46      16      30      22      16      26
STAT...session logical reads         4,353   955     496     1,026   229     369     138     178     313

BLOB...chunk size                    8,192   8,192   8,192   8,192   8,192   8,192   8,192   8,192   8,192
BLOB...write buffer size             2,048   4,096   8,132   8,192   16,264  16,384  24,396  32,528  32,768
STAT...consistent gets               347     264     95      128     71      91      23      17      39
STAT...consistent gets direct        63      78      16      35      16      27      0       0       7
STAT...db block gets                 2,657   996     349     468     273     369     103     85      161
STAT...db block gets direct          79      47      16      31      16      23      16      16      21
STAT...physical reads direct (lob)   63      31      0       15      0       7       0       0       5
STAT...physical writes direct (lob)  79      47      16      31      16      23      16      16      21
STAT...session logical reads         3,004   1,260   444     596     344     460     126     102     200

By looking at the physical reads direct (lob) statistics we can see that for each BLOB chunk size there are certain write buffer sizes that do not produce any additional reading. For our tests with chunk sizes 8K, 16K and 32K it appears that these local minimum points can be derived from the BLOB chunk size and a certain per-chunk overhead, as N*(chunk size - K), where N is any natural number and K is an overhead that depends on the chunk size: 8K - 60 bytes, 16K - 120 bytes and 32K - 240 bytes. At these points the amount of physical writes direct (lob) is minimal and, interestingly, depends only on the size of the incoming data and the block size, as in BLOB size/block size. From this we can see that each BLOB chunk is written only once.

Here is one more interesting thing to mention, although it is not confirmed. The number of additional operations in the other cases exactly matches the following scenario, which can be described as rewriting BLOB chunks: the ongoing operation reads the entire BLOB chunk that was left incomplete by the previous write, appends the data from the current buffer, and stores all of the chunk's blocks back into the DB. This also triggers a significant degradation of the LIO per effective amount of written data ratio, which improves as the write buffer size increases. This behavior was one of the culprits of the original issue. Of course, not closing the BLOB stream on each write would be a better way. However, sometimes it isn't possible to rewrite the code. If an application can be configured to use a buffer of a certain length, the issue can be alleviated.
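As an illustration only (the table, column and sizes below are made up; the point is keeping the locator open and choosing a buffer size near one of the minimum points above), a DBMS_LOB-based write loop might look like:

declare
  l_lob blob;
  l_buf raw(32528) := utl_raw.copies(hextoraw('FF'), 32528);  -- one write buffer
begin
  select doc into l_lob
  from test_docs          -- hypothetical table and BLOB column
  where id = 1
  for update;

  dbms_lob.open(l_lob, dbms_lob.lob_readwrite);
  for i in 1 .. 100 loop
    -- append each buffer to the same open locator instead of closing
    -- and re-positioning the LOB between writes
    dbms_lob.writeappend(l_lob, utl_raw.length(l_buf), l_buf);
  end loop;
  dbms_lob.close(l_lob);
  commit;
end;
/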

ASSM played no small role in this situation. It has been known to cause process "spinning" behavior. In our case the session was sitting in the following stack.

#0  0x000000000159fa03 in kcbgcur ()
#1  0x0000000001007f3b in ktugusc ()
#2  0x0000000001019032 in ktugti () - KTU: Kernel Transaction Undo Get Transaction table entry Information
#3  0x0000000001047e66 in ktbIsItlEntryCommitted ()
#4  0x000000000104db76 in ktrIsItlEntryCommitted ()
#5  0x00000000027cfa2e in kdlxgs_init () - reclaim space from transaction freelist in index
#6  0x0000000000bfab81 in ktsplbfmb ()
#7  0x0000000000bfbd80 in ktsplbrecl ()
#8  0x0000000000bd36e9 in ktspgsp_cbk1 ()
#9  0x00000000027e740f in kdlgsp_init () - Space management batching
#10 0x00000000027e45cb in kdl_write1 ()
#11 0x0000000001d2ece3 in koklwrite () - KOK Lob WRITE
#12 0x00000000022f2094 in kpolob () - KPP Lob operations
#13 0x0000000000711ef8 in opiodr ()
#14 0x00000000039a973b in ttcpip ()
#15 0x000000000070df90 in opitsk ()
#16 0x0000000000710e36 in opiino ()
#17 0x0000000000711ef8 in opiodr ()
#18 0x000000000070bc23 in opidrv ()
#19 0x000000000070a0ce in sou2o ()
#20 0x00000000006d008b in opimai_real ()
#21 0x00000000006cffdc in main ()

Moving hot BLOBs to tablespaces with manual segment space management further improved the situation.

Here are a few notes on Metalink that could be helpful in resolving similar BLOB issues.

Note:162345.1 “LOBS – Storage, Read-consistency and Rollback”
Note:66431.1 “LOBS – Storage, Redo and Performance Issues”
Note:268476.1 “LOB Performance Guideline”
Bug: 5253061 “LOB INSERT PERFORMANCE DIFFERS AFTER INSERT DEPENDING ON COMMIT”
Bug: 6128525 “ELAPSED TIME IS TOO HIGHER THAN “WAITED TIME + CPU TIME” IN ASSM LOB ACCESS”
Bug: 5131464 “10.1.0.4 RDBMS 10.1.0.4 SPACE PRODID-5 PORTID-226”


Improving Database Connection Pools

Mark Vakoc - Sun, 2007-11-18 18:54
Although this isn't a Server Manager specific post, the SM tool may provide insight into database connection pooling behavior. The EnterpriseOne web based servers maintain a pool of connections to a database. When a database connection is required it will be obtained from the connection pool. When no longer needed it will be returned to the pool.

The maximum size, initial size, growth increment, and other parameters are configured in the JDBJ Database Configuration -> JDBj Connection Pools configuration parameters. Refer to the Server Manager Guide for information about the particular settings.

For each effective database connection, or connection URL, a pool of connections is created upon first request. What is a connection URL? The actual logic varies based on database type but can be summarized as a combination of the connection information (server, port, SID, etc) appended by the name of the proxy database user. A good way to think of it is evaluating the actual connection details defined by an EnterpriseOne datasource. For example a typical Oracle database based installation will use the same SID for each datasource. Thus regardless of whether the datasource is 'System - 812' or 'Central Objects - PD812' it all boils down to the same database -- the SID. In this case the connection URL will be the concatenation of the SID with the name of the proxy user.

Other databases may use more complex naming conventions for the connection URL. For example SQL Server based datasources will look something like 'jdbc:sqlserver://DENDSQWN01:1433_JDE_DEVELOPMENT_true_JDE'. This URL includes the server name, listening port, physical database name, unicode enabled, and proxy user.

The active database connections may be viewed using Server Manager. Navigate to the HTML server of interest and select 'JDBj Connection Caches' from the runtime metrics. You will see all the connection pools that have been established.


In the screen shot above you can see I actually have two sets of 10 database connections to two different connection URLs. The thing is, these are the same database, differing only by case. This didn't seem like a good thing, so I went through my database datasources (F98611) and found I used all upper case there. I then went through the configuration metrics in Server Manager for my JAS server and found both upper and lower case uses of the Oracle SID. Changing those all to uppercase and restarting my server resulted in a single pool to the database.

Multiple JVMs in OAS

Mark Vakoc - Sun, 2007-11-18 17:37
It may be advantageous, performance-wise, to configure multiple JVMs to run a single JAS instance rather than a single, larger JVM. Server Manager has made it easy to configure and utilize multiple JVMs and enhancements in the 8.97 tools release ensure there are no conflicts with log files.

A multi-JVM configuration allows a single URL to be load balanced onto two or more JVM processes for a single OC4J container. The Apache http server will automatically load balance users onto a particular JVM where their session will remain for the duration of their sign-in. A multi-JVM setup is not a failover clustering configuration; that is, if a JVM were to crash all the user sessions on that JVM will also terminate.

The number of simultaneous JVMs is configured at the OC4J container level. To view or modify the JVM count navigate to the management page for the Oracle Application Server within Server Manager. In the middle of the page you will see a section listing each J2EE container (OC4J instance) within the OAS instance.



Listed next to each OC4J instance is the JVM Processes edit and a list of active JVMs if the container is currently running. To modify the JVM process count, edit the value and save the change. Changes will take effect the next time the container is started. Starting or stopping a JAS instance within Server Manager starts/stops the corresponding container.

Runtime Metrics

Server Manager will automatically detect the multi-JVM configuration and display all the runtime metrics (user sessions, database caches, et. al) separately for each JVM. Each JVM is assigned, by OAS, a unique JVM identifier. This ID is in the format container_name.group_name.index, where container_name is the name of the OC4J instance, group_name is the name of the OAS defined group to which the container belongs, and index is an integer beginning with 1 and incremented for each JVM started.



Shown above is the runtime summary displayed within the management page for a JAS server. It shows the number of active JVMs (2) and the uptime, online users, and login status for each JVM. Selecting a runtime metric, in this case JDBJ Database Caches, will display the metrics separately for each active JVM:


Configuration

Each JVM will utilize the same configuration files. As such it is not necessary to configure items for each JVM. In fact, it is not possible to configure items separately for each JVM.

Log Files

Since each JVM utilizes the same configuration files, including the jdelog.properties file used to configure the E1 logging, creating and writing to the log files would historically conflict in a multi-JVM environment. Tools release 8.97 will now automatically detect a multi-JVM environment and append the JVM ID, described above, into the filename for JVMs with a process index of 2 or greater as shown below:


Compatibility

Multiple JVM configuration is supported only for the EnterpriseOne HTML (JAS) server. Configuring multiple JVMs for the other E1 web products, such as the Transaction Server, PIMSync Server, or Business Services Server is not supported and may cause unpredictable results.

UPDATE:
While playing around with this I discovered that making changes to the JVM count on a running container take effect immediately. That is, if you have a container configured for a single JVM and change the value to 3, you will see two more Java processes appear (after a few minutes). Changing this back to 1 will shut down the additional JVMs. Note: I do not know if it will shut down JVMs with active users on them.

This is pretty cool and can be used to aid a HTML server that is getting overloaded without having to restart it.

Joe's Blog: Happy First Anniversary SQL Snippets!

Joe Fuda - Sun, 2007-11-18 16:00

Well, it's been exactly one year since SQL Snippets first appeared on the web as a prototype site containing 33 pages viewed by a handful of visitors in its first month (a few close friends and the occasional searchbot). It now has over 250 pages, RSS feeds, site search, and HaloScan commenting. It gets thousands of visitors each month from around the globe (including the loyal searchbots, we've become steady friends this past year) and has been blogged about, StumbleUpon'd, and del.icio.us'd. (*ouch*) I'd like to say a big THANK-YOU to all of you who helped spread the word about SQL Snippets or provided feedback about the site in its first year. Your efforts are much appreciated.

The last year has seen a big change in my career as well. Some of you may have noticed the rate of new SQL Snippets topics started dwindling back in July. That's because I went back to work for Oracle that month and no longer have the luxury of working on this site full time. I'm doing pretty much the same job I had when I last worked there in 2006. In fact, I was even re-assigned some of the projects I worked on during my prior stint and they're exactly the way I left them. Some things never change, literally.

Fear not though, I still plan on adding new content when I can. The content management system I built for this site is effectively complete now so, unlike the past 12 months, I won't need to spend too much time on the mechanics of the site and can focus mostly on content going forwards. In fact, I even managed to put together a quick page on a new 11g feature this weekend. Check it out at SQL Snippets: Columns to Rows - UNPIVOT (11g).

Time for some cake ...


...

Columns to Rows: UNPIVOT (11g) (New SQL Snippets Tutorial)

Joe Fuda - Sat, 2007-11-17 18:00
The SQL Snippets "Columns to Rows" section has been expanded to include a topic on using Oracle 11g's new UNPIVOT clause, which makes all prior techniques for transforming columns into rows obsolete.
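For readers who have not seen the clause yet, a minimal sketch (the table and column names are invented for illustration):

select *
from quarterly_sales     -- hypothetical table with columns id, q1, q2, q3
unpivot (amount for quarter in (q1 as 'Q1', q2 as 'Q2', q3 as 'Q3'));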
...
