Purging master table and loading Aggregate tables
From: python <angani_at_gmail.com>
Date: Thu, 3 Mar 2011 16:21:41 -0800 (PST)
Message-ID: <128b5e2e-a55f-4879-bb80-4785ebeb7206_at_a11g2000pro.googlegroups.com>
[Quoted] I have a fairly large table (200 million rows) where data is inserted and deleted regularly. In a given hour, roughly 1-5 million rows are deleted, and the application inserts a similar volume in the same hour.
The procedure is as follows. This stored procedure runs every 10 minutes and completes in under 2 minutes, but a lot of redo/undo is generated. Is there a way to cut down the redo volume? I am noticing a BIG problem when bouncing the database: it takes almost 2-3 hours to shut it down cleanly.
<<Partial code>>
open c1;
loop
  fetch c1 bulk collect into rids limit batchsize;
  exit when rids.count = 0;
  forall i in 1 .. rids.last
    delete from DATA1 where rowid = rids(i);
  v_del := v_del + SQL%ROWCOUNT;
  commit;
  dbms_application_info.set_client_info('purge_stat1_proc processed '||v_del||' rows');
end loop;
close c1;
commit;
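
For context, a complete version of the fragment above might look like the following. This is only a sketch: the procedure name purge_stat1_proc is taken from the set_client_info call, but the cursor query, the purge predicate, and the batch size are assumptions, since the original post shows only the loop body.

create or replace procedure purge_stat1_proc as
  -- Hypothetical driving cursor: selects the rowids of rows to purge.
  -- The real predicate is not shown in the original post.
  cursor c1 is
    select rowid from DATA1 where purge_flag = 'Y';
  type rid_tab is table of rowid index by pls_integer;
  rids      rid_tab;
  batchsize constant pls_integer := 10000;  -- assumed batch size
  v_del     pls_integer := 0;               -- running count of deleted rows
begin
  open c1;
  loop
    fetch c1 bulk collect into rids limit batchsize;
    exit when rids.count = 0;
    forall i in 1 .. rids.last
      delete from DATA1 where rowid = rids(i);
    -- After a FORALL, SQL%ROWCOUNT is the total rows affected by all iterations.
    v_del := v_del + SQL%ROWCOUNT;
    commit;  -- committing each batch; each batch's delete still generates full redo/undo
    dbms_application_info.set_client_info(
      'purge_stat1_proc processed '||v_del||' rows');
  end loop;
  close c1;
  commit;
end purge_stat1_proc;
/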
Thank you
RA
Received on Fri Mar 04 2011 - 01:21:41 CET