Require help to tune the process (Oracle 9.2.0.5.0, SunOS 5.9)
Require help to tune the process [message #471110] Thu, 12 August 2010 07:03
sathik
Messages: 196
Registered: August 2009
Location: INDIA
Senior Member
One of our processes is hitting an ORA-4030 error while it is executing. A description of the job is given below.

Basically, this job purges old data. It works by selecting the rows from the error_messages_1 and error_messages_2 tables
that are 2 months old, writing them to a file, and placing the file on the server. Once the file is placed on the server, the corresponding rows
are deleted from the tables.

But we are hitting the ORA-4030 error during the select phase itself (the queries are given below). This may be because the volume of data is huge: it requires more memory
to process than the OS has available.

      CURSOR arch_logs_1 IS
         SELECT TIMESTAMP,SOURCE,message,severity FROM error_messages_1 WHERE TIMESTAMP <= (SELECT MIN(em_start_date) + 1
         FROM   em_config_data);
      CURSOR arch_logs_2 IS 
         SELECT TIMESTAMP,SOURCE,message,severity FROM error_messages_2 WHERE TIMESTAMP <= (SELECT MIN(em_start_date) + 1
         FROM   em_config_data);



There is a separate plan in progress to add more RAM.

But before that, we are planning to tune the process.
As part of tuning the process, the data older than 2 months can be written to files through a manual process, and then we can delete the old data
from the tables manually. As a result, the volume of data will be reduced drastically, so the memory needed for the select phase will be lower and there should be no memory issues.


I need your help taking a backup of that 2-months-old data to files. Since the volume of data is very huge, I suspect the same ORA-4030 error
will occur when we take the backup, but I have not tested it yet. So can anybody help me take a backup of this data
without getting ORA-4030?

Error_messages_1 count = more than 204 million rows (gathered from num_rows)
Error_messages_2 count = more than 160 million rows

Note:
====

1. RAM size = 32GB
2. SGA size = 12 GB

3. error_messages_1 and error_messages_2 tables are hash partitioned as below.

   SQL> select TABLE_NAME,PARTITION_NAME,HIGH_VALUE,TABLESPACE_NAME,NUM_ROWS,CHAIN_CNT,to_char(LAST_ANALYZED,'dd/mm/yyyy hh24:mi:ss')
     2  from dba_tab_partitions where table_name in ('ERROR_MESSAGES_1','ERROR_MESSAGES_2');

TABLE_NAME                     PARTITION_NAME                 HIGH_VALUE              TABLESPACE   NUM_ROWS  CHAIN_CNT TO_CHAR(LAST_ANALYZ
------------------------------ ------------------------------ ----------------------- ---------- ---------- ---------- -------------------
ERROR_MESSAGES_1               SYS_P1                                                 DATA         51240020          0 07/08/2010 17:40:36
ERROR_MESSAGES_1               SYS_P2                                                 DATA         51352720          0 07/08/2010 18:06:22
ERROR_MESSAGES_1               SYS_P3                                                 DATA         51388780          0 07/08/2010 18:31:57
ERROR_MESSAGES_1               SYS_P4                                                 DATA         51078280          0 07/08/2010 18:53:41
ERROR_MESSAGES_2               SYS_P5                                                 DATA         40757520          0 07/08/2010 20:00:57
ERROR_MESSAGES_2               SYS_P6                                                 DATA         40981260          0 07/08/2010 20:19:26
ERROR_MESSAGES_2               SYS_P7                                                 DATA         40851160          0 07/08/2010 20:38:46
ERROR_MESSAGES_2               SYS_P8                                                 DATA         40870160          0 07/08/2010 20:53:10 


4. The package em_management is attached herewith for your reference.

Index details:
==============
These 2 tables have global indexes.

SQL> select index_name,column_name,table_name from user_ind_columns where table_name in
  2  ('ERROR_MESSAGES_1','ERROR_MESSAGES_2');

INDEX_NAME                     COLUMN_NAME                        TABLE_NAME
------------------------------ ---------------------------------- -------------------
EM1_SEVERITY_INDX              SEVERITY                           ERROR_MESSAGES_1
EM1_TIMESTAMP_INDX             TIMESTAMP                          ERROR_MESSAGES_1
EM2_SEVERITY_INDX              SEVERITY                           ERROR_MESSAGES_2
EM2_TIMESTAMP_INDX             TIMESTAMP                          ERROR_MESSAGES_2

[Updated on: Thu, 12 August 2010 07:06]


Re: Require help to tune the process [message #471114 is a reply to message #471110] Thu, 12 August 2010 07:14
Mahesh Rajendran
Messages: 10707
Registered: March 2002
Location: oracleDocoVille
Senior Member
Account Moderator
It would be useful if you could post the complete error message,
the relevant contents of the trace dumps, and the output of
 ulimit -a
If pga_aggregate_target is set, what is the value?
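For reference, one way to check it (a sketch; assumes you can query v$parameter):

   SELECT name, value
   FROM   v$parameter
   WHERE  name IN ('pga_aggregate_target', 'workarea_size_policy');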
Re: Require help to tune the process [message #471116 is a reply to message #471110] Thu, 12 August 2010 07:15
cookiemonster
Messages: 13921
Registered: September 2008
Location: Rainy Manchester
Senior Member
You appear to be using BULK COLLECT to load all the data you want to delete into an array.
I suspect it's the size that the array needs to grow to that's causing the error.
Load the data into the array in smaller chunks.

Re: Require help to tune the process [message #471120 is a reply to message #471114] Thu, 12 August 2010 07:24
sathik
Messages: 196
Registered: August 2009
Location: INDIA
Senior Member

Error:

ORA-04030: out of process memory when trying to allocate 8216 bytes (koh-kghu sessi,static frame of inst)
ORA-04030: out of process memory when trying to allocate 16408 bytes (koh-kghu call ,pmuccst: adt/record)
04030. 00000 - "out of process memory when trying to allocate %s bytes (%s,%s)"
*Cause: Operating system process private memory has been exhausted


#ulimit -a
time(seconds) unlimited
file(blocks) unlimited
data(kbytes) unlimited
stack(kbytes) 8192
coredump(blocks) unlimited
nofiles(descriptors) 10000
vmemory(kbytes) unlimited

And the value of PGA_AGGREGATE_TARGET is zero.

I haven't traced this job yet, so I am not in a position to provide the dump file now. I will provide it as soon as possible. Sorry for the inconvenience.

Re: Require help to tune the process [message #471124 is a reply to message #471116] Thu, 12 August 2010 07:34
sathik
Messages: 196
Registered: August 2009
Location: INDIA
Senior Member

Hi Cookiemonster,

I don't quite follow. Firstly, the package collects the records to create a file and place it on the server.

Only if the file is created successfully does the package go on to delete the records. But here the file itself is not created, so I suspect the problem exists during the select phase itself.

or

do you mean I should not select all the records that are 2 months old in one go to delete, but should instead select smaller chunks, like 1 week's or 1 day's data, to delete?

Can you please explain in detail?

And please forgive me if I am wrong about anything.
Re: Require help to tune the process [message #471132 is a reply to message #471124] Thu, 12 August 2010 07:55
cookiemonster
Messages: 13921
Registered: September 2008
Location: Rainy Manchester
Senior Member
Your procedure goes something like this (pseudo code):
BULK COLLECT everything to be archived from error_messages_1 into v_err_msg
LOOP over v_err_msg
Call archive_log for each record
  Write file
  Delete from error_messages_1 where timestamp = v_err_msg.timestamp
  COMMIT;

Repeat above for error_messages_2



So you are loading all the records that need to be archived into an array. Arrays are held in the session's private (PGA) memory and don't live in the SGA.
So most likely it is the select that populates the array that is giving the error. But it's not the fact that you are selecting lots of data that's causing the problem; it's the fact that you're putting it all into a single array in one go.
Loop over the cursor and use the LIMIT clause to get the data in smaller chunks (you should still use BULK COLLECT, of course), say 10,000 rows at a time max.
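Something like this, for instance (a sketch only, reusing the names from your posted package; error handling and the flip-control check are omitted for brevity):

   OPEN arch_logs_1;
   LOOP
      -- fetch at most 10000 rows per round trip instead of everything at once
      FETCH arch_logs_1 BULK COLLECT INTO v_err_msg LIMIT 10000;
      EXIT WHEN v_err_msg.COUNT = 0;  -- nothing left to archive
      FOR indx IN 1 .. v_err_msg.COUNT LOOP
         archive_log(v_err_msg(indx).TIMESTAMP
                    ,v_err_msg(indx).SOURCE
                    ,v_err_msg(indx).message
                    ,v_err_msg(indx).severity);
      END LOOP;
   END LOOP;
   CLOSE arch_logs_1;

That way the array never holds more than 10000 rows, so its memory footprint stays bounded however many rows the cursor returns.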

Also why are you doing this:
        FOR i IN 1 .. v_ndays LOOP

Nothing in the loop references i, so you're just repeating the same processing multiple times.
Though I imagine on the 2nd and subsequent iterations it does nothing, since you've already deleted the data.

You should also do the delete at the end instead of on a row-by-row basis.

Also:
               IF v_flipcontrol = 0 THEN
                  update_emconfig;
                  dbfl_error.raise_error('I'
                                        ,'Job Manually exited '
                                        ,pk_name ||
                                         proc_name);

                  RETURN;

Why are you waiting until you get inside the loop to do that check?
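It could be done once, before any rows are fetched, e.g. (a sketch using the package's own names):

   -- check once up front instead of on every row
   IF v_flipcontrol = 0 THEN
      update_emconfig;
      dbfl_error.raise_error('I'
                            ,'Job Manually exited '
                            ,pk_name || proc_name);
      RETURN;
   END IF;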
Re: Require help to tune the process [message #471134 is a reply to message #471124] Thu, 12 August 2010 07:57
Michel Cadot
Messages: 68647
Registered: March 2007
Location: Nanterre, France, http://...
Senior Member
Account Moderator
If you use BULK COLLECT to fetch from your cursor, then use the LIMIT clause to prevent this error.

Regards
Michel
Re: Require help to tune the process [message #471135 is a reply to message #471132] Thu, 12 August 2010 08:05
cookiemonster
Messages: 13921
Registered: September 2008
Location: Rainy Manchester
Senior Member
And I'd abandon the plan to get more RAM. If you're running out with 32G then you've got some serious design flaws that need fixing.
You should be able to run happily with less than half of that.
Adding more will (if you're lucky) just delay the inevitable point where you have to fix the code. And it might not fix anything at all.
Re: Require help to tune the process [message #471136 is a reply to message #471135] Thu, 12 August 2010 08:17
cookiemonster
Messages: 13921
Registered: September 2008
Location: Rainy Manchester
Senior Member
Working out the filename and location and then opening and closing the file for every row you want to archive isn't exactly efficient either.
Open it once, write all the data currently in the array, then close it again.
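Something along these lines (a sketch only; the directory object ARCH_DIR and the file name here are placeholders, your package derives the real location and name itself):

   DECLARE
      v_file UTL_FILE.FILE_TYPE;
   BEGIN
      -- open the file once per batch, not once per row
      v_file := UTL_FILE.FOPEN('ARCH_DIR', 'error_messages_1.dat', 'a');
      FOR indx IN 1 .. v_err_msg.COUNT LOOP
         UTL_FILE.PUT_LINE(v_file
                          ,TO_CHAR(v_err_msg(indx).TIMESTAMP, 'yyyy-mm-dd hh24:mi:ss')
                           || '|' || v_err_msg(indx).SOURCE
                           || '|' || v_err_msg(indx).message
                           || '|' || v_err_msg(indx).severity);
      END LOOP;
      UTL_FILE.FCLOSE(v_file);
   END;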
Re: Require help to tune the process [message #471144 is a reply to message #471136] Thu, 12 August 2010 08:44
cookiemonster
Messages: 13921
Registered: September 2008
Location: Rainy Manchester
Senior Member
And on closer inspection you're creating a file per day. So select the data per day.
Re: Require help to tune the process [message #471175 is a reply to message #471132] Thu, 12 August 2010 12:03
sathik
Messages: 196
Registered: August 2009
Location: INDIA
Senior Member
Thanks for giving your valuable time to this in-depth analysis.

Quote:
"Loop over the cursor and use the limit clause to get the data in smaller chunks (you should still use bulk collect of course) say 10000 rows at a time max."


I need one more favor: can you please show me, in the form of code, how I can use the LIMIT clause here?

[Updated on: Thu, 12 August 2010 12:16] by Moderator


Re: Require help to tune the process [message #471177 is a reply to message #471175] Thu, 12 August 2010 12:17
Michel Cadot
Messages: 68647
Registered: March 2007
Location: Nanterre, France, http://...
Senior Member
Account Moderator
PL/SQL User's Guide and Reference
Application Developer's Guide - Fundamentals

Go to the index and scroll down to BULK COLLECT. There are examples of code in these books.
(There are also examples on the web that you can easily find using Google, for instance.)

Regards
Michel

[Updated on: Thu, 12 August 2010 12:18]


Re: Require help to tune the process [message #471195 is a reply to message #471132] Thu, 12 August 2010 14:21
sathik
Messages: 196
Registered: August 2009
Location: INDIA
Senior Member
         FOR i IN 1 .. v_ndays LOOP
            OPEN arch_logs_1;
            FETCH arch_logs_1 BULK COLLECT
               INTO v_err_msg;
            CLOSE arch_logs_1;
            -- Archive oldest data into log files and delete in the main tables
            FOR indx IN 1 .. v_err_msg.COUNT LOOP
               IF v_flipcontrol = 0 THEN
                  update_emconfig;
                  dbfl_error.raise_error('I'
                                        ,'Job Manually exited '
                                        ,pk_name ||
                                         proc_name);

                  RETURN;
               END IF;
               archive_log(v_err_msg(indx)
                           .TIMESTAMP
                          , v_err_msg(indx)
                           .SOURCE
                          , v_err_msg(indx)
                           .message
                          , v_err_msg(indx)
                           .severity);
            END LOOP;
            OPEN arch_logs_2;
            FETCH arch_logs_2 BULK COLLECT
               INTO v_err_msg;
            CLOSE arch_logs_2;
            FOR indx IN 1 .. v_err_msg.COUNT LOOP
               archive_log(v_err_msg(indx)
                           .TIMESTAMP
                          , v_err_msg(indx)
                           .SOURCE
                          , v_err_msg(indx)
                           .message
                          , v_err_msg(indx)
                           .severity);
            END LOOP;
            update_emconfig;
         END LOOP;



I have a small doubt about the above-mentioned part; please clarify.

Doubt 1:
========
if v_days=50

the loop will first select 1 day's records by executing the arch_logs_1 cursor (say 10 million records),
then pass those 10 million records into the v_err_msg variable.

And then the archive_log procedure will process these 10 million records by putting the 1st record in the file and deleting it, then
taking the 2nd record to put in the file and delete it, and so on.

Please correct me if I am wrong.


Doubt 2:
========
Instead, are you saying to alter the loop as below?

          declare
            rows PLS_INTEGER := 10000;

           begin

            FOR i IN 1 .. v_ndays LOOP
            OPEN arch_logs_1;
            FETCH arch_logs_1 BULK COLLECT
               INTO v_err_msg LIMIT ROWS;
            CLOSE arch_logs_1;

          end loop;  


If I change it as above, then out of the 10 million rows, the 1st 10,000 rows will be processed by putting them in the file and deleting them.
Then the loop will fetch the 2nd 10,000 records, and so on.
But the cursor executes only one time; the fetch operation just limits the rows to 10,000 out of the 10 million records.

Please correct me if my understanding is wrong.


Doubt 3:
========
And I also need to change the procedure to put these 10,000 records in the file at a time. But in this case, will the delete operation
still go one by one, or do I need to change the delete query as well to delete the 10,000 records at a time?

I hope this will be the final clarification I need, so please clarify. Once again, I thank you very much for
your great help.

[Updated on: Thu, 12 August 2010 14:26]


Re: Require help to tune the process [message #471247 is a reply to message #471195] Fri, 13 August 2010 00:54
sathik
Messages: 196
Registered: August 2009
Location: INDIA
Senior Member

Can I have a reply, please?
Re: Require help to tune the process [message #471298 is a reply to message #471195] Fri, 13 August 2010 04:05
cookiemonster
Messages: 13921
Registered: September 2008
Location: Rainy Manchester
Senior Member
Have some patience. I'm in a completely different timezone from you.

sathik wrote on Thu, 12 August 2010 20:21

Doubt 1:
========
if v_days=50

the loop will first select 1 day's records by executing the arch_logs_1 cursor (say 10 million records),
then pass those 10 million records into the v_err_msg variable.

And then the archive_log procedure will process these 10 million records by putting the 1st record in the file and deleting it, then
taking the 2nd record to put in the file and delete it, and so on.

Please correct me if I am wrong.


I assume you mean v_ndays.
Are you describing the current functionality? Because the cursors make no reference to v_ndays; they just get all days at once at the moment.
You should change it to fetch a day at a time by making the cursors reference that variable.
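For example (a sketch; the p_day parameter is hypothetical):

   CURSOR arch_logs_1 (p_day IN DATE) IS
      SELECT TIMESTAMP, SOURCE, message, severity
      FROM   error_messages_1
      WHERE  TIMESTAMP >= p_day
      AND    TIMESTAMP <  p_day + 1;

Then open the cursor with the day being archived inside the FOR i IN 1 .. v_ndays loop, advancing the day on each iteration.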

sathik wrote on Thu, 12 August 2010 20:21

Doubt 2:
========
Instead, are you saying to alter the loop as below?

          declare
            rows PLS_INTEGER := 10000;

           begin

            FOR i IN 1 .. v_ndays LOOP
            OPEN arch_logs_1;
            FETCH arch_logs_1 BULK COLLECT
               INTO v_err_msg LIMIT ROWS;
            CLOSE arch_logs_1;

          end loop;  


If I change it as above, then out of the 10 million rows, the 1st 10,000 rows will be processed by putting them in the file and deleting them.
Then the loop will fetch the 2nd 10,000 records, and so on.
But the cursor executes only one time; the fetch operation just limits the rows to 10,000 out of the 10 million records.

Please correct me if my understanding is wrong.

You're right, which is why you need to write an additional loop to fetch the records from each cursor. You'll loop until the cursor
has fetched all the records; you can use %NOTFOUND to check. Have a search on this site and on AskTom for BULK COLLECT LIMIT - you'll see examples.
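The shape of that loop is roughly this (a sketch; note the batch is processed before the %NOTFOUND exit, so the final partial fetch isn't lost):

   OPEN arch_logs_1;
   LOOP
      FETCH arch_logs_1 BULK COLLECT INTO v_err_msg LIMIT 10000;
      FOR indx IN 1 .. v_err_msg.COUNT LOOP
         NULL;  -- archive this batch here
      END LOOP;
      EXIT WHEN arch_logs_1%NOTFOUND;  -- last (possibly partial) batch done
   END LOOP;
   CLOSE arch_logs_1;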

sathik wrote on Thu, 12 August 2010 20:21

Doubt 3:
========
And I also need to change the procedure to put these 10,000 records in the file at a time. But in this case, will the delete operation
still go one by one, or do I need to change the delete query as well to delete the 10,000 records at a time?



To be honest, if no new data can be inserted for the dates you are archiving while the process is running (I assume that is the case), then I would just delete all the rows at the very end.
Just write a delete statement with the same where clause as the cursor.
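For example (a sketch; this mirrors the arch_logs_1 predicate):

   DELETE FROM error_messages_1
   WHERE  TIMESTAMP <= (SELECT MIN(em_start_date) + 1
                        FROM   em_config_data);
   COMMIT;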
Re: Require help to tune the process [message #471349 is a reply to message #471298] Fri, 13 August 2010 10:11
sathik
Messages: 196
Registered: August 2009
Location: INDIA
Senior Member

Thank you very much. I will amend the package accordingly.