Feed aggregator

Installing Server Manager for Non-English Machines

Mark Vakoc - Mon, 2007-11-26 18:01
An issue has been discovered that prevents the successful installation of Server Manager when the user that performs the installation is configured for a language other than English.  The problem relates to messages that are output by the embedded container.  The installer is looking for particular words, in English, and the container is outputting that text localized.  So even though the installation was successful, the installer thinks it failed and begins to uninstall.

A workaround is to change the current user's language, in Windows, back to English and perform the installation.  Also make sure the 'Language for non-Unicode programs' on the advanced tab is configured to English as well.  These settings can be found in the 'Regional and Language Options' control panel option in Windows.

Once the installation is complete you may return to using the desired language of choice.

The Management Kernel

Mark Vakoc - Mon, 2007-11-26 12:04
Tools release 8.97 adds two new kernel definitions to the enterprise server: the BI Publisher (XML Publisher) and Server Manager kernels. Like the metadata kernel introduced in 8.96 these new kernels are Java based kernels that start their own JVM instance to run Java code rather than the traditional C based code of kernels past. SM will automatically add and configure these kernels into the JDE.INI when changing the tools release to 8.97. This post will focus on the server manager kernel.


The server manager kernel (number 32 for those keeping count) operates on all platforms and loads a JVM upon startup. This JVM will in turn load the management agent. This is the same codebase as that used by the managed home agent and is used to provide runtime information about the enterprise server to the management console. It uses a couple of INI settings, configured automatically, to provide it with the instance name and managed home location associated with the enterprise server instance. From the managed home location it will read the agent.properties discussed in previous posts to obtain the connection details for the management console. It also follows the same connection logic used to establish communications as outlined in that post.


The management console uses this kernel to obtain all the runtime information about the server such as the active process list. It also uses this agent to expose and provide the log files that are active for the processes that appear in the process list.

This kernel definition is a singleton; that is, there may only be a single active process, and it must be configured to start automatically (again, all of this is configured automatically). In fact the management console will not permit configuring more than one process for this kernel. The underlying network communication used by the management agents is different from the JDENET communications used by other kernels, so the singleton can properly handle any amount of load; introducing additional processes is not necessary and will cause unexpected results.

The SAW Kernel

Although SAW has been replaced with 8.97, there are still portions of the SAW infrastructure that are used by Server Manager. On the enterprise server the SAW kernel is still used. The management kernel (Java) exposes the runtime information using the JMX standard; it obtains much of this information by sending JDENET messages to the SAW kernel.

It is advisable to continue to run multiple SAW processes. The information exposed by the management kernel is collected periodically, approximately every 30 seconds. After that elapsed time JDENET messages are sent to the SAW kernels to collect the information. Having multiple SAW kernels will ensure that the information update to the management kernel occurs quickly and reduces the likelihood of any of these messages timing out.

Troubleshooting

In reality there is very little that needs to be known about this kernel. That said, if the runtime information isn't available for a running enterprise server, it may be desirable to investigate the kernel's log files. There are two sets of log files of interest: the jde.log/jdedebug.log files and the Java log files.

The first place to check is the server manager kernel's jde.log. Since the process list isn't available (that's what we're troubleshooting), we'll have to go through the log files exposed on the management page for the enterprise server until we find the management kernel. A successful startup is shown below. Any errors about starting the JVM, loading the SM classes, or similar problems will appear in this log file.

If there are no problems indicated in the JDE.LOG you may look at the java based logs. Since this is standard E1 Java code, logging is configured in the jdelog.properties file contained within the system/classes directory. For each log defined in the jdelog.properties you will see a corresponding log file created for each java based kernel process. Since multiple processes can be defined for the other java kernel types (metadata and xml publisher) the process id is added to the filename, as shown below:


Summary

The management kernel provides Server Manager with powerful new monitoring capabilities. Its operation can generally be ignored and should only be of concern if runtime information is not available for a particular enterprise server in the management console web application.

How to Import the PUTTY Settings from One Machine to Another Machine

Madan Mohan - Mon, 2007-11-26 00:15
Most DBAs work 24x7 and find it difficult and time consuming to configure the whole list of server settings in PuTTY. I went through some Google hits and found one workaround to import your PuTTY settings.


Work Around
************

1. On the source machine, run the command --> regedit /e "%userprofile%\desktop\putty.reg" HKEY_CURRENT_USER\Software\SimonTatham at the command prompt.

2. Copy the Putty.exe and putty.reg onto Target Machine.

3. Right-click putty.reg and choose the "Merge" option. This imports the settings into the target registry, after which you will see all the server details you defined earlier on the source machine.
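
As an alternative to step 3, the same import can be done from a command prompt (this assumes putty.reg was copied to the desktop of the target machine):

regedit /s "%userprofile%\desktop\putty.reg"

The /s switch simply suppresses the confirmation prompt that the "Merge" option would otherwise show.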

Note: Simon Tatham is the author of the PuTTY software, which is why the settings live under that registry key.

Copy and Compress the Datafiles using multiple Processes.

Madan Mohan - Fri, 2007-11-23 00:35
Following are the scripts used for copying and compressing the Datafiles within the Same Server.

Scripts
*********

1. copy_process.sh ------> This file contains the functions used for the copy and compress process.

2. copy_file_process.sh ------> This file contains the commands used for copying, compressing, and uncompressing. It is called by copy_process.sh.

3. worker_no ------> Defines the number of worker processes used for the whole run. The value can be changed dynamically while the script is running, e.g. echo 4 > worker_no.


copy_process.sh
****************

#! /usr/bin/ksh
##################################################################################
# bkp_dir_path is the target (TO) directory that receives the backup copies
# src_path is the source (FROM) directory containing the datafiles
# worker_pid is the worker pid file
# worker_no is the number of workers, can be adjusted while the script is running
# example, to set 3 workers, perform the following before running the script: echo 3 > worker_no
# file

function Gen_Env
{
bkp_dir_path="/tmp/to_data"
src_path="/tmp/from_data"
Log_Date=$(date +"%d%m%y")
worker_pid=/tmp/copy_process_
worker_no=/tmp/worker_no
worker_max_count=0
COPY_FILE_PROCESS=/tmp/copy_file_process.sh
File_List=$(cd $src_path; ls -l *.dbf |grep -v cntrl | awk '{print $9}')
}

function Copy_Phase
{
num_copy_workers=`cat $worker_no`
if [[ $worker_max_count -lt $num_copy_workers ]]; then
worker_max_count=$num_copy_workers
fi

worker=1
while [[ $worker -le $num_copy_workers ]]
do
worker_file=${worker_pid}${worker}.pid
if [[ ! -s $worker_file ]]; then
echo "Busy" > $worker_file
$COPY_FILE_PROCESS $src_path $datafile $worker $worker_file $bkp_dir_path &
echo "Copy Assigned to Worker pid file: $worker"
Copy_Assigned=Yes
return
fi
let worker=$worker+1
done
sleep 5
}

function Check_Final_Copy
{
worker=1
while [[ $worker -le $worker_max_count ]]
do
worker_file=${worker_pid}${worker}.pid
if [[ -s $worker_file ]]; then
echo "Background Copy is still Running... $(date)"
sleep 60
CheckFinal=No
return
else
rm -f $worker_file
fi
let worker=$worker+1
done
CheckFinal=Yes
}

function main
{
Gen_Env
echo "Total number of Datafiles in the Source Instance"
src_dbf_count=`ls $src_path|wc -l`
echo $src_dbf_count
date > $bkp_dir_path/time.log
for datafile in `echo $File_List`
do
echo "Copy $datafile : $(date)"
echo "Wait for the Next Worker ... $(date)"
Copy_Assigned=No
while [[ $Copy_Assigned == 'No' ]]
do
Copy_Phase
done
done

CheckFinal=No
while [[ $CheckFinal == 'No' ]]
do
Check_Final_Copy
done
date >> $bkp_dir_path/time.log
echo "**********************************************"
echo " Copy Process Completed Successfully"
echo "**********************************************"
echo "Number of Data files copied to the target "
target_dbf_count=`ls $bkp_dir_path|wc -l`
echo $target_dbf_count
}
main

################### End of Copy_process.sh ###########


copy_file_process.sh
**********************

#!/usr/bin/ksh
##############################################################
## Copy and compress script
## Phase - I = Copy the files from Source to Destination
## Phase - II = Compress the Destination Files.
## Phase - III = Uncompress the Destination File. (if required)
##
##
##############################################################
s_path=$1
filename=$2
worker_no=$3
statusfile=$4
d_path=$5
statuss=`cat $statusfile`
##############################################################
if [[ $statuss == 'Busy' ]]; then
echo "$$" >$statusfile
## Phase I
echo "Copying $filename by worker $worker_no"
cp $s_path/$filename $d_path
sleep 5

# Phase II
echo "Zipping $filename by worker $worker_no"
/usr/bin/gzip $d_path/$filename
sleep 5

#Phase III
#echo "Unzipping $filename.gz by worker $worker_no"
#/usr/bin/gunzip $d_path/$filename.gz

>$statusfile
else
echo "Process is not Busy"
fi

echo '*********************'


################### END of copy_file_process.sh #############


Execution Syntax
*****************

1. Update the directory paths for the following variables within the script "copy_process.sh".

a) bkp_dir_path ---> Where to back up the datafiles.
b) src_path ---> Location of the source datafiles.
c) COPY_FILE_PROCESS ---> Location of the "copy_file_process.sh" script.


Syntax
********

nohup /path/to/copy_process.sh &

Things I learned at Oracle Open World in SFO 2007

Moans Nogood - Tue, 2007-11-20 15:53
I just came back from a few days at OOW, and it was fun. I shared a biggish apartment with Anjo Kolk, Krister (Sweden) and Oliver (Danish CSC), and it was beautiful to see the beer bottles (good beers, mind you!) gradually filling up the kitchen table allocated for that purpose.

On Sunday, November 11, I was invited to a seven-hour briefing for Oracle ACE Directors (I am such a thing). It was mostly about the Fusion Middle Ware (MW) and in the end a bit about the 11g database.

I have three observations:

1. During the MW presentations I saw more acronyms than in my entire military career.

2. 'Oracle' was the only word with less than seven letters in all those slides.

3. The best thing that can happen to any product is to be bought by Oracle. It turns out that the purchase itself will transfer the product overnight from being worth-, use- and hopeless to being an absolutely state-of-the-art, best-of-beer product.

Interesting to learn that MW is database agnostic and Apps server agnostic. This obviously generates some interesting discussions inside Oracle.

And, man, do things change in the MW: Forget SOA, here comes SCA. Forget hub-and-spoke - it's just SO yesterday. Forget Portal - here comes WebCenter.

I am tempted to quote the standup comedian Billy Connolly, who said some years ago: "... and it will all change tomorrow, so f.... stay awake!"

Apart from that, it was a good day with knowledgeable presenters, and I learned a lot. Thanks to the Oracle ACE ladies (Emily & Victoria) for setting this up.

Monday, Tuesday and Wednesday I simply set up a virtual office at the tex-mex restaurant Chevy's and met a bunch of friends from inside and outside Oracle during those days. It was good, and it generated a lot of good ideas.

There were 1600 presentations in total. 100 of these were database-related. Interesting.

Upgrading the Recovery Catalog Database from 9i to 10g

Madan Mohan - Mon, 2007-11-19 19:18
The RMAN catalog upgrade is the same as a normal database upgrade and can be accomplished in two ways.

a) Upgrade the database from 9i to 10g
- Connect to the RMAN catalog database.
- Issue the RMAN command "upgrade catalog"; this upgrades the catalog schema from 09.02.00 to 10.02.00.03.

b) Fresh install of a 10g database / use an existing 10g database.
- Create the rman user and grant create session, recovery_catalog_owner, and create type to the rman user.
- Export/import the RMAN schema.
- Issue the RMAN command "upgrade catalog".


Note: You will encounter the warning below if you have not upgraded the catalog after the database version upgrade.
connected to target database: DSSPROD (DBID=1021024992)
connected to recovery catalog database
PL/SQL package RMAN.DBMS_RCVCAT version 09.02.00 in RCVCAT database is too old

Solution
**********
1. Connect to the recovery catalog database as the rman user.
2. Issue the RMAN command "upgrade catalog" twice; the second invocation confirms and performs the upgrade.
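
A minimal sketch of such a session (the catalog connect string and password are placeholders; adjust them for your environment):

$ rman catalog rman/rman_password@catdb
RMAN> upgrade catalog;     <-- RMAN prompts you to enter the command again to confirm
RMAN> upgrade catalog;     <-- the second invocation performs the upgrade
RMAN> exit;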

Debugging with the APEX repository

Anthony Rayner - Mon, 2007-11-19 06:56
Introduction...
I was recently working on an APEX application, on a familiar 'Form on a Table' wizard-built page, and encountering an ORA-00957 duplicate column name error, meaning that a column name was being specified more than once in the INSERT.

So the problem had to be that I had more than one page item bound to the same database column. This page had become quite bloated with lots of other business logic, so I started by turning off regions, the old 'process of elimination', debugging to see if there was anything obvious, but nothing sprang up. Then I thought, if only there was a way to view all the page items on my page which were bound to the same database column. Well there is, using the APEX repository.

For those of you who are not familiar with the repository, it is basically a set of views that expose all the APEX application metadata. For further information see Have a clean up, utilising the APEX repository which contains links to loads more information.


How...
In the repository there is a view called APEX_APPLICATION_PAGE_DB_ITEMS which is what we are interested in. It is described in the APEX_DICTIONARY view as...

'Identifies Page Items which are associated with Database Table Columns. This view represents a subset of the items in the APEX_APPLICATION_PAGE_ITEMS view.'

We can then run the following query to return all the items for a specific application / page bound to a db column more than once.
SELECT   db_column_name,
SUM(1) Duplicates
FROM apex_application_page_db_items
WHERE page_id = :page_id
AND application_id = :app_id
GROUP BY db_column_name
HAVING SUM(1) > 1
ORDER BY SUM(1) DESC
(Note: This view does not contain conditional rendering information, so if this was required you would need to join to APEX_APPLICATION_PAGE_ITEMS on ITEM_ID).
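
As a rough sketch of that join (the column names beyond ITEM_ID are assumptions and may need adjusting for your APEX version), something like the following would also pull in condition details for each bound item:

SELECT   d.db_column_name,
         i.item_name,
         i.condition_type
FROM     apex_application_page_db_items d,
         apex_application_page_items i
WHERE    i.item_id = d.item_id
AND      d.page_id = :page_id
AND      d.application_id = :app_id
ORDER BY d.db_column_name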


Conclusion...
So if ever you think, I wish I could see this information about this page / report or whatever, you probably can, just take a look into the repository. I wonder if there would be any scope for an APEX debugging framework that defines sets of processes linked to common 'ORA' errors. So for this example, it would simply be:

Error: ORA-00957 - Duplicate column name
  • Step 1: Run the following query, binding in your page and application id.
    SELECT   db_column_name,
    SUM(1) Duplicates
    FROM apex_application_page_db_items
    WHERE page_id = :page_id
    AND application_id = :app_id
    GROUP BY db_column_name
    HAVING SUM(1) > 1
    ORDER BY SUM(1) DESC

  • Step 2: Investigate all rows returned from the query and unbind items which should not be bound to the database column.

  • Step 3: Retest your page.


That would be nice.
Anthony.

Categories: Development

Survey results

Anthony Rayner - Mon, 2007-11-19 06:53
Following the recent survey I conducted on this blog, 'What would you like to read more about on my blog?', the results were:

Developer Tips - 54%
AJAX General - 45%
AJAX with JSON - 45%
BI Publisher Integration - 33%
Access Migration - 0%

I will thus be focusing my efforts on Developer Tips and AJAX related posts in the near future. I was quite surprised at the 0% interest in 'Access Migration' and would have thought this would be quite popular, as Access is one of the main platforms that the systems replaced by APEX applications are built on. Or maybe it's just that developers aren't using the migration functionality built into APEX / SQL Developer and are just doing it without looking at these tools. Interesting.

Thank you all for your feedback. More to come shortly.
Anthony.

Categories: Development

BLOB write size and CPU

Vlad Sadilovskiy - Mon, 2007-11-19 01:21

Someone asked for help in identifying a strange problem.  The problem was found to be a combination of two issues, ASSM and the BLOB loading software, that manifested as excessive CPU utilization. In this post we will see how different write buffer sizes can affect write efficiency.

Unfortunately, Tkprof and Statspack often fail to point in the proper direction when it comes to profiling activity performed by a call-less cursor. In this case such a cursor was responsible for writing and reading BLOBs. This issue was not specific to the JDBC API; it was later reproduced with the DBMS_LOB API.

As an additional illustration of the topic mentioned above, here is an example of Tkprof and Statspack reports from a system where the issue was reproduced with the help of a single-threaded application in an otherwise idle environment.

Tkprof

OVERALL TOTALS FOR ALL NON-RECURSIVE STATEMENTS

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse       16      0.00       0.02          0          0          0           0
Execute     23      3.09       4.57          6      13894      18366          17
Fetch        3      0.00       0.00          0          6          0           3
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total       42      3.10       4.59          6      13900      18366          20


OVERALL TOTALS FOR ALL RECURSIVE STATEMENTS

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse   230212      5.39       5.14          0          0          0           0
Execute 230215    176.87     173.89         47     692034     175390      176321
Fetch   153422      4.22       3.96          5     230821          3       77048
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total   613849    186.49     183.01         52     922855     175393      253369

Statspack

Load Profile                            Per Second       Per Transaction
~~~~~~~~~~~~                       ---------------       ---------------
                  Redo size:          1,553,489.88        668,736,513.89
              Logical reads:            630,504.58        271,415,631.42
              Block changes:                782.47            336,833.37
             Physical reads:                171.33             73,752.42
            Physical writes:                181.92             78,311.00
                 User calls:                 42.80             18,424.58
                     Parses:                 44.40             19,114.63
                Hard parses:                  0.00                  1.00
                      Sorts:                  0.39                169.89
                     Logons:                  0.00                  0.84
                   Executes:                 44.94             19,344.89
               Transactions:                  0.00

Top 5 Timed Events                                                    Avg %Total
~~~~~~~~~~~~~~~~~~                                                   wait   Call
Event                                            Waits    Time (s)   (ms)   Time
----------------------------------------- ------------ ----------- ------ ------
CPU time                                                     7,764          95.9
log file parallel write                         85,190         147      2    1.8
log file switch (checkpoint incomplete)            246          88    357    1.1
log file switch completion                         502          76    151     .9
control file parallel write                      7,973           9      1     .1
          -------------------------------------------------------------

Instance Activity Stats
Statistic                                      Total     per Second    per Trans
--------------------------------- ------------------ -------------- ------------
...
db block gets                          5,116,321,961      625,543.7 ############
db block gets direct                       1,442,703          176.4     75,931.7
db block gets from cache               5,114,879,258      625,367.3 ############
...
physical writes                            1,487,909          181.9     78,311.0
physical writes direct                     1,444,083          176.6     76,004.4
physical writes direct (lob)               1,442,684          176.4     75,930.7
session logical reads                  5,156,896,997      630,504.6 ############
...

Segments by Logical Reads
                                           Subobject    Obj.       Logical   Pct
Owner      Tablespace Object Name          Name         Type         Reads Total
---------- ---------- -------------------- ------------ ----- ------------ -----
SYS        SYSTEM     SEG$                              TABLE      434,992  23.6
SYS        SYSTEM     TSQ$                              TABLE      432,816  23.5
TEST       TEST       SYS_LOB0000056448C00              LOB        289,808  15.7
SYS        SYSTEM     I_FILE#_BLOCK#                    INDEX      288,448  15.7
SYS        SYSTEM     FILE$                             TABLE      220,416  12.0
          -------------------------------------------------------------

As can be seen, the number of session logical reads in the test doesn't align well with the figures in the Segments by Logical Reads section, and Tkprof doesn't account for the roughly 7,000 CPU seconds at all.

But let's get back to business. What could be triggering such a vast number of reads when writing BLOBs? A call to the software vendor revealed that the BLOB data is written in chunks, and for each chunk the BLOB was closed and reopened with an updated position. So we duplicated this behavior and obtained the following statistics using "runstats" and DbmsOutput.java shared by Tom Kyte and the Java API shared by R. M. Menon on AskTom.

BLOB...chunk size                    32,768  32,768  32,768  32,768  32,768  32,768  32,768  32,768  32,768
BLOB...write buffer size             2,048   4,096   8,132   8,192   16,264  16,384  24,396  32,528  32,768
STAT...consistent gets               971     375     119     319     51      234     27      190     222
STAT...consistent gets direct        252     159     52      79      16      32      9       4       20
STAT...db block gets                 6,146   884     423     479     209     268     109     100     208
STAT...db block gets direct          268     140     64      76      32      44      32      16      36
STAT...physical reads direct (lob)   252     124     48      60      16      28      16      0       20
STAT...physical writes direct (lob)  268     140     64      76      32      44      32      16      36
STAT...session logical reads         7,117   1,259   542     798     260     502     136     290     430

BLOB...chunk size                    16,384  16,384  16,384  16,384  16,384  16,384  16,384  16,384  16,384
BLOB...write buffer size             2,048   4,096   8,132   8,192   16,264  16,384  24,396  32,528  32,768
STAT...consistent gets               449     222     99      213     47      74      25      35      63
STAT...consistent gets direct        126     80      24      53      8       22      3       8       19
STAT...db block gets                 3,904   733     397     813     182     295     113     143     250
STAT...db block gets direct          142     78      32      46      16      30      22      16      26
STAT...physical reads direct (lob)   126     62      16      30      0       14      6       0       10
STAT...physical writes direct (lob)  142     78      32      46      16      30      22      16      26
STAT...session logical reads         4,353   955     496     1,026   229     369     138     178     313

BLOB...chunk size                    8,192   8,192   8,192   8,192   8,192   8,192   8,192   8,192   8,192
BLOB...write buffer size             2,048   4,096   8,132   8,192   16,264  16,384  24,396  32,528  32,768
STAT...consistent gets               347     264     95      128     71      91      23      17      39
STAT...consistent gets direct        63      78      16      35      16      27      0       0       7
STAT...db block gets                 2,657   996     349     468     273     369     103     85      161
STAT...db block gets direct          79      47      16      31      16      23      16      16      21
STAT...physical reads direct (lob)   63      31      0       15      0       7       0       0       5
STAT...physical writes direct (lob)  79      47      16      31      16      23      16      16      21
STAT...session logical reads         3,004   1,260   444     596     344     460     126     102     200

By looking at the physical reads direct (lob) statistic we can see that for each BLOB chunk size there are certain write buffer sizes that do not produce any additional reading. For our tests with chunk sizes of 8K, 16K and 32K, these local minimum points can be derived from the BLOB chunk size and a per-chunk overhead as N*(chunk size – K), where N is any natural number and K is an overhead that depends on the chunk size: 60 bytes for 8K, 120 bytes for 16K and 240 bytes for 32K chunks. At these points the amount of physical writes direct (lob) is minimal and, interestingly, depends only on the size of the incoming data and the block size (Blob Size/Block Size), from which we can see that each BLOB chunk is written only once.
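
To make that concrete: for the 32K chunk size the overhead K is 240 bytes, so the first two "good" buffer sizes work out to 1*(32768 - 240) = 32,528 bytes and 2*(32768 - 240) = 65,056 bytes; the 32,528-byte column in the tables above is indeed the one showing zero physical reads direct (lob). Likewise 8192 - 60 = 8,132 and 2*(8192 - 60) = 16,264 match the zero-read buffer sizes for the 8K chunk size.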

Here is one more interesting thing to mention, although it is not confirmed. The number of additional operations in the other cases exactly matches a scenario that can be described as rewriting BLOB chunks: the ongoing operation reads the entire BLOB chunk left incomplete by the previous write, appends the data from the current buffer, and stores all of the chunk's blocks back into the database. This also causes significant degradation of the ratio of logical I/O to the effective amount of written data, which improves as the write buffer size increases. This behavior was one of the culprits of the original issue. Of course, not closing the BLOB stream on each write would be a better approach; however, sometimes it isn't possible to rewrite the code. If the application can be configured to use a buffer of a suitable length, the issue can be alleviated.

ASSM played no small role in this situation. It has been known to cause process "spinning" behavior. In our case the session was sitting in the following stack.

#0  0x000000000159fa03 in kcbgcur ()
#1  0x0000000001007f3b in ktugusc ()
#2  0x0000000001019032 in ktugti () - KTU: Kernel Transaction Undo Get Transaction table entry Information
#3  0x0000000001047e66 in ktbIsItlEntryCommitted ()
#4  0x000000000104db76 in ktrIsItlEntryCommitted ()
#5  0x00000000027cfa2e in kdlxgs_init () - reclaim space from transaction freelist in index
#6  0x0000000000bfab81 in ktsplbfmb ()
#7  0x0000000000bfbd80 in ktsplbrecl ()
#8  0x0000000000bd36e9 in ktspgsp_cbk1 ()
#9  0x00000000027e740f in kdlgsp_init () - Space management batching
#10 0x00000000027e45cb in kdl_write1 ()
#11 0x0000000001d2ece3 in koklwrite () - KOK Lob WRITE
#12 0x00000000022f2094 in kpolob () - KPP Lob operations
#13 0x0000000000711ef8 in opiodr ()
#14 0x00000000039a973b in ttcpip ()
#15 0x000000000070df90 in opitsk ()
#16 0x0000000000710e36 in opiino ()
#17 0x0000000000711ef8 in opiodr ()
#18 0x000000000070bc23 in opidrv ()
#19 0x000000000070a0ce in sou2o ()
#20 0x00000000006d008b in opimai_real ()
#21 0x00000000006cffdc in main ()

Moving hot BLOBs to tablespaces with manual segment space management further improved the situation.

Here are a few papers on Metalink that could be helpful in resolving similar BLOB issues.

Note:162345.1 “LOBS – Storage, Read-consistency and Rollback”
Note:66431.1 “LOBS – Storage, Redo and Performance Issues”
Note:268476.1 “LOB Performance Guideline”
Bug: 5253061 “LOB INSERT PERFORMANCE DIFFERS AFTER INSERT DEPENDING ON COMMIT”
Bug: 6128525 “ELAPSED TIME IS TOO HIGHER THAN “WAITED TIME + CPU TIME” IN ASSM LOB ACCESS”
Bug: 5131464 “10.1.0.4 RDBMS 10.1.0.4 SPACE PRODID-5 PORTID-226”


Improving Database Connection Pools

Mark Vakoc - Sun, 2007-11-18 18:54
Although this isn't a Server Manager specific post, the SM tool may provide insight into database connection pooling behavior. The EnterpriseOne web based servers maintain a pool of connections to a database. When a database connection is required it is obtained from the connection pool; when no longer needed it is returned to the pool.

The maximum size, initial size, growth increment, and other parameters are configured in the JDBJ Database Configuration -> JDBj Connection Pools configuration parameters. Refer to the Server Manager Guide for information about the particular settings.

For each effective database connection, or connection URL, a pool of connections is created upon first request. What is a connection URL? The actual logic varies based on database type but can be summarized as a combination of the connection information (server, port, SID, etc) appended by the name of the proxy database user. A good way to think of it is evaluating the actual connection details defined by an EnterpriseOne datasource. For example a typical Oracle database based installation will use the same SID for each datasource. Thus regardless of whether the datasource is 'System - 812' or 'Central Objects - PD812' it all boils down to the same database -- the SID. In this case the connection URL will be the concatenation of the SID with the name of the proxy user.

Other databases may use more complex naming conventions for the connection URL. For example SQL Server based datasources will look something like 'jdbc:sqlserver://DENDSQWN01:1433_JDE_DEVELOPMENT_true_JDE'. This URL includes the server name, listening port, physical database name, unicode enabled, and proxy user.

The active database connections may be viewed using Server Manager. Navigate to the HTML server of interest and select 'JDBj Connection Caches' from the runtime metrics. You will see all the connection pools that have been established.


In the screen shot above you can see I actually have two sets of 10 database connections to two different connection URLs. The thing is, these point to the same database, differing only by case. This didn't seem like a good thing, so I went through my database datasources (F98611) and found I used all upper case there. I then went through the configuration metrics in Server Manager for my JAS server and found both upper and lower case uses of the Oracle SID. Changing those all to upper case and restarting my server resulted in a single pool to the database.

Multiple JVMs in OAS

Mark Vakoc - Sun, 2007-11-18 17:37
It may be advantageous, performance-wise, to configure multiple JVMs to run a single JAS instance rather than a single, larger JVM. Server Manager has made it easy to configure and utilize multiple JVMs and enhancements in the 8.97 tools release ensure there are no conflicts with log files.

A multi-JVM configuration allows a single URL to be load balanced onto two or more JVM processes for a single OC4J container. The Apache http server will automatically load balance users onto a particular JVM where their session will remain for the duration of their sign-in. A multi-JVM setup is not a failover clustering configuration; that is, if a JVM were to crash all the user sessions on that JVM will also terminate.

The number of simultaneous JVMs is configured at the OC4J container level. To view or modify the JVM count navigate to the management page for the Oracle Application Server within Server Manager. In the middle of the page you will see a section listing each J2EE container (OC4J instance) within the OAS instance.



Listed next to each OC4J instance is the JVM Processes edit field and, if the container is currently running, a list of active JVMs. To modify the JVM count, edit the value and save the change. Changes will take effect the next time the container is started. Starting or stopping a JAS instance within Server Manager starts/stops the corresponding container.

Runtime Metrics

Server Manager will automatically detect the multi-JVM configuration and display all the runtime metrics (user sessions, database caches, etc.) separately for each JVM. Each JVM is assigned, by OAS, a unique JVM identifier. This ID is in the format container_name.group_name.index, where container_name is the name of the OC4J instance, group_name is the name of the OAS defined group to which the container belongs, and index is an integer beginning with 1 and incremented for each JVM started.



Shown above is the runtime summary displayed within the management page for a JAS server. It shows the number of active JVMs (2) and the uptime, online users, and login status for each JVM. Selecting a runtime metric, in this case JDBJ Database Caches, will display the metrics separately for each active JVM:


Configuration

Each JVM will utilize the same configuration files. As such it is not necessary to configure items for each JVM. In fact, it is not possible to configure items separately for each JVM.

Log Files

Since each JVM utilizes the same configuration files, including the jdelog.properties file used to configure E1 logging, creating and writing to the log files would historically conflict in a multi-JVM environment. Tools release 8.97 will now automatically detect a multi-JVM environment and append the JVM ID, described above, to the filename for JVMs with a process index of 2 or greater as shown below:


Compatibility

Multiple JVM configuration is supported only for the EnterpriseOne HTML (JAS) server. Configuring multiple JVMs for the other E1 web products, such as the Transaction Server, PIMSync Server, or Business Services Server is not supported and may cause unpredictable results.

UPDATE:
While playing around with this I discovered that making changes to the JVM count on a running container will take effect immediately. That is, if you have a container that is configured for a single JVM and change the value to 3, you will see two more Java processes appear (after a few minutes). Changing this back to 1 will shut down the additional JVMs. Note: I do not know whether it will shut down JVMs with active users on them.

This is pretty cool and can be used to aid an HTML server that is getting overloaded without having to restart it.

Joe's Blog: Happy First Anniversary SQL Snippets!

Joe Fuda - Sun, 2007-11-18 16:00

Well, it's been exactly one year since SQL Snippets first appeared on the web as a prototype site containing 33 pages viewed by a handful of visitors in its first month (a few close friends and the occasional searchbot). It now has over 250 pages, RSS feeds, site search, and HaloScan commenting. It gets thousands of visitors each month from around the globe (including the loyal searchbots, we've become steady friends this past year) and has been blogged about, StumbleUpon'd, and del.icio.us'd. (*ouch*) I'd like to say a big THANK-YOU to all of you who helped spread the word about SQL Snippets or provided feedback about the site in its first year. Your efforts are much appreciated.

The last year has seen a big change in my career as well. Some of you may have noticed the rate of new SQL Snippets topics started dwindling back in July. That's because I went back to work for Oracle that month and no longer have the luxury of working on this site full time. I'm doing pretty much the same job I had when I last worked there in 2006. In fact, I was even re-assigned some of the projects I worked on during my prior stint and they're exactly the way I left them. Some things never change, literally.

Fear not though, I still plan on adding new content when I can. The content management system I built for this site is effectively complete now so, unlike the past 12 months, I won't need to spend too much time on the mechanics of the site and can focus mostly on content going forwards. In fact, I even managed to put together a quick page on a new 11g feature this weekend. Check it out at SQL Snippets: Columns to Rows - UNPIVOT (11g).

Time for some cake ...


...

Columns to Rows: UNPIVOT (11g) (New SQL Snippets Tutorial)

Joe Fuda - Sat, 2007-11-17 18:00
SQL Snippets "Columns to Rows" section has been expanded to include a topic on using Oracle 11g's new UNPIVOT clause, which makes all prior techniques for transforming columns into rows now obsolete.
...

Configuring the HTML Server (JAS) Template Configuration

Mark Vakoc - Sat, 2007-11-17 13:25
A powerful feature of Server Manager is the template configurations defined for each server type within a server group. Properly configuring the server group template greatly simplifies the creation of a new HTML server. When the server is created it will copy the configuration defined for the server group selected thus requiring minimal, if any, configuration during deployment.

I've often been asked what individual settings must be configured in order for a new HTML server to be functional immediately. First, navigate to the 'Server Groups' page.

Next, select the 'Configure' icon for the group you wish to modify, then navigate to the 'EnterpriseOne HTML Server' section.

Configure the following settings:
  • Network Settings -> JDENET Configuration -> Outgoing JDENET Port - Specify the JDENET port used by the enterprise server
  • Network Settings -> Security Server Configuration -> Primary Security Server - Specify the machine name of the enterprise server to use for security services
  • JDBJ Database Configuration -> JDBj Bootstrap Datasource - Configure this section to point to the location of the system tables (e.g. System - 812). You can find these values in the JDE.INI of a development client.
  • JDBJ Database Configuration -> JDBC Drivers - If you are using the SQL2005 or DB2/UDB on Itanium you may need to change the default JDBC driver
  • JDBJ Database Configuration -> JDBj Bootstrap Session -> Bootstrap Environment - Specify a valid environment that will be used to determine the location of various system tables through OCM
  • JDBJ Database Configuration -> Oracle Database Settings -> File Contents - If you are using an Oracle database, cut and paste the contents of the TNSNAMES.ORA that the server should use (a sample entry is shown after this list).
  • JDBJ Database Configuration -> JDBj Spec Datasource - If you define your serialized objects tables (F989998 and F989999) in a particular datasource rather than using OCM to determine their location complete this section
  • Web Runtime -> Web Runtime -> Pathcodes - Specify a single pathcode to use for this server. Note it must be surrounded by parentheses and single quotes, such as ('DV812')
  • Web Runtime -> Web Runtime -> Default Environment - Specify the default environment to display in the login form
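
For reference, here is a minimal sample TNSNAMES.ORA entry of the kind that would be pasted into the 'File Contents' setting above; the alias, host, port, and service name are placeholders to be replaced with your own values:

JDE812 =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = dbserver.example.com)(PORT = 1521))
    (CONNECT_DATA = (SERVICE_NAME = JDE812))
  )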
There are many other options that may be configured; however, these are the core values I always configure. Properly configuring the server group once significantly simplifies the creation of new servers later on.

One last note: when a new server group is created the template configuration is copied from the default server group. Configuring the default server group prior to creating additional server groups means you don't have to reconfigure these items for each new group unless a change to these particular settings is required.

Active Server Manager Users

Mark Vakoc - Sat, 2007-11-17 13:07

Here's a real quick one: I was asked today if it is possible to know who all is signed in to the management console. To see this simply navigate to the management console instance (Instance Name: home) and select the 'User Sessions' runtime metric on the left. You'll see a grid containing all the active management console users. Your current session will be highlighted in bold.

Troubleshooting Agent Communications

Mark Vakoc - Fri, 2007-11-16 11:54
The managed home agents communicate with the management console using secure JMX connections. Once started the agent will connect with the console, perform some registration tasks, and appear automatically in the management dashboard. This article provides insight into the communication process and steps to troubleshoot it.

Agent Communication
The management console is configured with a JMX port used to establish communication between the management agents (both the managed home agent and the embedded agent contained within the server products) and the console. This port is specified during the installation wizard and defaults to 14501.

When the agent is installed, a configuration file containing the name of the management console and the JMX port to use is created: install_location/config/agent.properties.

management.server.name=myserver.mydomain.com
management.server.port=14501

The name of the server may either be a short machine name, a fully qualified domain name, or an IP address depending on how the Java process running the management console was able to resolve the host name.

The agent will use this information to attempt to connect to the management console. If unsuccessful, for example if the console isn't running, the agent will continually re-attempt the connection. This activity is recorded in the agent's log files. Look in the log file e1agent_0.log (the zero will always be the most current log) located within the install_location/logs directory of the managed home. In the log file you will see something like the following; in this case the management server is denlcmwn5.mlab.jdedwards.com and the JMX port is 18501:

Nov 7, 2007 10:35:37 AM com.jdedwards.mgmt.agent.E1Agent$ManagementServerDaemonThread run
FINER: Attempting to connect to service:jmx:jmxmp://denlcmwn5.mlab.jdedwards.com:18501

If the connection could not be established a corresponding error will appear shortly in the log file.

Tip #1
On the machine where the agent was installed, attempt to ping the management console using the server name configured in the agent.properties file. If the ping is not successful you may either change the agent.properties file (for example to not use a fully qualified name) or modify the hosts file (e.g. /etc/hosts on Unix) as necessary. Either way, restart the agent after making any changes.

Tip #2
If the ping was successful you can attempt to telnet to the management console using the same port. For example 'telnet denlcmwn5.mlab.jdedwards.com 18501'. If the connection is successful you know the agent is able to establish a connection to the console. If this fails there may be firewall or other networking issues preventing the connection that need to be resolved. Note: On a successful connection you may ignore any content displayed in the connection; it will not be human readable.

Once that connection has been made the management console will assign a TCP/IP port that the agent should use to listen for incoming connections. The agent will pass in its machine name and install location, and the console will provide the next unused TCP/IP port starting at the 'Management Agent Starting Port', which was also configured during the installation wizard and defaults to 14502.

In the managed home agents log you will see the port that was assigned, in this case it was 18607:

Nov 7, 2007 10:35:42 AM com.jdedwards.mgmt.agent.Server startListener
INFO: Starting the management agent listener on port '18607'.
Nov 7, 2007 10:35:42 AM com.jdedwards.mgmt.agent.Server startListener
FINE: Attempting to start the local management agent listener on port 18607
Nov 7, 2007 10:35:42 AM com.jdedwards.mgmt.agent.Server startListener
FINE: Succesfully started the management agent listener on port 18607

If the operation was successful, as shown above, you may continue to the next step. If there are errors indicating the listener could not be started, make sure no other program is using that same port; if one is, you may change the 'Management Agent Starting Port' to something else in the management console (select the 'Management Agents' link in the Quick Links). Do not change the 'Management Server JMX Port' setting.

If everything has been successful so far we will now focus our attention on the logs for the management console. You may view these logs using the console application itself. Navigate to the managed home for the management console (the managed home that contains the 'home' instance). On the bottom of that page select the log file home_0.log. The log should contain an entry indicating the initial connection (thus a port of -1) from the managed agent:

FINER: Received heartbeat from the remote management agent on denlcmlx2 listening on port -1 of type 2 in managed home /home/oracleas/oasagent

Next you will see an entry about the calculated port discussed above.

FINER: Determining the port the remote agent 'denlcmlx2' should start listening on.
FINER: Assigning the port 18607 to the remote agent 'denlcmlx2'.

Followed by a "heartbeat" request from that agent:

FINER: Received heartbeat from the remote management agent on denlcmlx2 listening on port 18,607 of type 2 in managed home /home/oracleas/oasagent

Finally the console will attempt to connect to the remote agent on the port assigned. If successful you will see something like:

FINE: Attemping to establish a connection from the management console to the remote agent 'denlcmlx2' on port 18607.
FINE: Successfully established a connection from the management console to the remote agent 'denlcmlx2' on port 18607 with connection id 'jmxmp://10.139.163.53:3213 32330841.

This completes the communication negotiation process and the managed home will soon appear in the dashboard.

Tip #3
If there are errors indicating the connection was not successful follow similar steps as above to troubleshoot the issue:
  1. On the management console machine ping the server using the name reported in the log files (in this case denlcmlx2). If not successful ensure the network/dns configuration is correct. Add an entry to the hosts file (\windows\system32\drivers\etc\hosts) for the machine name if necessary.
  2. If the ping was successful telnet from the management console to the specified name and port, for example 'telnet denlcmlx2 18607'. If that connection was not successful ensure that there are no firewall or other networking issues preventing the connection.
Nearly all problems with managed agents not appearing in the management dashboard are caused by networking and host name resolution issues, or by firewalls in place between the server running the management console and the remote machine.

Tip #4
On the iSeries platforms if there are errors in the logs indicating crypto or encryption related problems (because the connection between agents is fully encrypted) this usually indicates the required JDK (1.5) is not present.



Sizing the UNDO TABLESPACE for Automatic Undo management

Madan Mohan - Thu, 2007-11-15 21:43
Sizing an UNDO tablespace requires three pieces of data.

- (UR) UNDO_RETENTION in seconds
- (UPS) Number of undo data blocks generated per second
- (DBS) The database block size (DB_BLOCK_SIZE); the final (DBS * 24) term in the formula adds a small overhead that varies with extent and file size

UndoSpace = [UR * (UPS * DBS)] + (DBS * 24)

Two can be obtained from the initialization file: UNDO_RETENTION and DB_BLOCK_SIZE.
The third piece of the formula requires a query against the database. The number of undo blocks generated per second can be acquired from V$UNDOSTAT.


The following formula calculates the total number of blocks generated and divides it by the amount of time monitored, in seconds:

SQL>SELECT (SUM(undoblks))/ SUM ((end_time - begin_time) * 86400) FROM v$undostat;

Column END_TIME and BEGIN_TIME are DATE data types. When DATE data types are subtracted, the result is in days. To convert days to seconds, you multiply by 86400, the number of seconds in a day.

The result of the query returns the number of undo blocks per second. This value needs to be multiplied by the size of an undo block, which is the same size as the database block defined in DB_BLOCK_SIZE.
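
As a worked example with purely illustrative figures, assume UNDO_RETENTION = 10800 seconds, an average of 100 undo blocks generated per second, and an 8192-byte block size:

UndoSpace = [10800 * (100 * 8192)] + (8192 * 24) = 8,847,360,000 + 196,608 bytes, or roughly 8.2 GB.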


The following query calculates the number of bytes needed:

SQL> SELECT (UR * (UPS * DBS)) + (DBS * 24) AS "Bytes"
FROM (SELECT value AS UR FROM v$parameter WHERE name = 'undo_retention'),
(SELECT (SUM(undoblks)/SUM(((end_time - begin_time)*86400))) AS UPS FROM
v$undostat), (select block_size as DBS from dba_tablespaces where
tablespace_name= (select value from v$parameter where name
= 'undo_tablespace'));

Some issues with Oracle views as ActiveRecord source

Raimonds Simanovskis - Thu, 2007-11-15 16:00

I am using Ruby on Rails to publish data from an existing "legacy" application on an Oracle database which already has a complex data model. I define additional database views on the existing legacy data and grant select rights on them to the Rails schema. I use Rails conventions when defining these views: view names are the pluralized / underscored version of the Rails model name, the ID column is the primary key, and %_ID columns are foreign keys.

Typically this works quite well and I can use Rails find methods to automatically generate SQL SELECTs from these views. But for some legacy data I ran into a problem with the Oracle NUMBER type mapping to the Ruby integer type.

Rails standard convention for database NUMBER type mapping is the following:

  • NUMBER with a specified precision and no scale (e.g. NUMBER(10)) is mapped to :integer
  • NUMBER with a specified precision and scale (e.g. NUMBER(10,2)) is mapped to :decimal
  • NUMBER without precision and scale (just NUMBER) is mapped to :decimal

If primary keys and foreign keys in legacy tables are defined with a precision and no scale (e.g. NUMBER(10)) then everything is OK and they will be mapped to :integer in Rails. But if primary keys or foreign keys in legacy tables are defined as plain NUMBER then they will be mapped to :decimal in Rails.
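
As a purely illustrative sketch (hypothetical table and column names), the difference between the two cases looks like this on the Oracle side:

-- id has a precision and no scale, so the Rails Oracle adapter maps it to :integer
CREATE TABLE customers (id NUMBER(10) PRIMARY KEY, name VARCHAR2(100));

-- id and customer_id are plain NUMBER, so they are mapped to :decimal
CREATE TABLE invoices (id NUMBER PRIMARY KEY, customer_id NUMBER);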

And what happens if, for example, a primary key is mapped to :decimal in Rails? Then customer.id comes back as “123.0” and you get ugly URLs like “/customers/123.0”.

One workaround is to use customer.id.to_i everywhere, but that is quite annoying. Therefore I created a patch for the Oracle adapter (tested with Rails 1.2.3) which always sets the data type to :integer for all primary keys (column name ID) and foreign keys (column names like %_ID). This also includes the date columns patch that I wrote about previously.

module ActiveRecord::ConnectionAdapters
  class OracleColumn
    def simplified_type(field_type)
      return :boolean if OracleAdapter.emulate_booleans && field_type == 'NUMBER(1)'
      case self.name
        # RSI: treat columns which end with 'date' as ruby date columns
        when /date$/i then :date
        # RSI: removed 'date' from regex
        when /time/i then :datetime
        # RSI: treat id columns (primary key) as integer
        when /^id$/i then :integer
        # RSI: treat _id columns (foreign key) as integer
        when /_id$/i then :integer
        else super
      end
    end
  end

  # RSI: patch to change selected results NUMBER to integer for primary and foreign keys
  class OracleAdapter
    def select(sql, name = nil)
      cursor = execute(sql, name)
      cols = cursor.get_col_names.map { |x| oracle_downcase(x) }
      rows = []

      while row = cursor.fetch
        hash = Hash.new

        cols.each_with_index do |col, i|
          hash[col] =
            case row[i]
            when OCI8::LOB
              name == 'Writable Large Object' ? row[i]: row[i].read
            when OraDate
              (row[i].hour == 0 and row[i].minute == 0 and row[i].second == 0) ?
              row[i].to_date : row[i].to_time
            else row[i]
            end unless col == 'raw_rnum_'
          # RSI: patch - convert to integer if column is ID or ends with _ID
          hash[col] = hash[col].to_i if (col =~ /^id$/i || col =~ /_id$/i) && hash[col]
        end

        rows << hash
      end

      rows
    ensure
      cursor.close if cursor
    end
  end  
end

I have not yet verified this with Rails 2.0. I will probably collect all my Oracle adapter patches and release them as a plugin. Is anybody interested in this?

Categories: Development

2nd day at sys-con conference

Rakesh Saha - Thu, 2007-11-15 03:36

Enabling, Disabling, or Changing the password protection on the RDBMS and Tools (8.0.6) listeners in Oracle Applications 11.5.x

Madan Mohan - Thu, 2007-11-15 01:38
Pre-requisites
The steps in this guide should be performed after applying the following AutoConfig patches:-

3453499 (11i.ADX.F)
5225940 (Post ADX.F Fixes)
5107107 - AUTOCONFIG ENGINE & CONFIG TOOLS ROLLUP PATCH N or higher



How to enable/disable/change the password

To enable, disable, or change the password, the script addlnctl.pl must be used.

DB Tier: $ORACLE_HOME/appsutil/bin/addlnctl.pl
Apps Tier: $AD_TOP/bin/addlnctl.pl

This has the following syntax:-

Valid arguments for addlnctl.pl:
help : get usage information
contextfile : provide Applications or DB Tier context file name

Source the Applications (APPSORA.env) or RDBMS ($CONTEXT_NAME.env) environment file, then run one of the following commands:

Example 1: To enable listener password

addlnctl.pl contextfile=$CONTEXT_FILE enablepassword

Example 2: To disable listener password

addlnctl.pl contextfile=$CONTEXT_FILE disablepassword

Example 3: To change existing listener password

addlnctl.pl contextfile=$CONTEXT_FILE changepassword

Further information:
This will change the context file variable s_enable_listener_password to "ON" or "OFF" and update/remove the PASSWORDS_PROD entry in the listener.ora with the encrypted password value.

Example

PASSWORDS_PROD=BC73ED1DD01AC862
If the listener is not running it will not start it.
If the listener is currently running it will stop and restart it.
Controlling a password protected listener
Once the password is enabled the following methods can be used to stop/start the listener:


Apps Tier:

Use script $COMMON_TOP/admin/scripts/$CONTEXT_NAME/adalnctl.sh

Examples:-
$COMMON_TOP/admin/scripts/$CONTEXT_NAME/adalnctl.sh start
$COMMON_TOP/admin/scripts/$CONTEXT_NAME/adalnctl.sh stop

This script will check the listener.ora for the encrypted password, and use this to stop/start the listener.

DB Tier:

Method A: Use script $ORACLE_HOME/appsutil/scripts/$CONTEXT_NAME/addlnctl.sh

Examples:-

$ORACLE_HOME/appsutil/scripts/$CONTEXT_NAME/addlnctl.sh start PROD
$ORACLE_HOME/appsutil/scripts/$CONTEXT_NAME/addlnctl.sh stop PROD

This script will check the listener.ora for the encrypted password, and use this to stop/start the listener.

Method B : Use command line options in the Listener Control Utility LSNRCTL
Start the Listener Control Utility

$ lsnrctl

Enter the listener name
LSNRCTL > set current_listener PROD

Enter password for the Listener
LSNRCTL > set password

Examples:

LSNRCTL > set password 654FA907952B15B

or:

LSNRCTL > set password
Password: apps

To determine the Status of the listener:
LSNRCTL > status

To Stop the listener:
LSNRCTL > stop

To start the listener:
LSNRCTL > start

To exit:
LSNRCTL > exit
