Re: Purging master table and loading Aggregate tables

From: Carlos <miotromailcarlos_at_netscape.net>
Date: Fri, 4 Mar 2011 00:01:27 -0800 (PST)
Message-ID: <3e108de7-5835-4d1c-b124-ebe83687f8f1_at_z27g2000prz.googlegroups.com>


[Quoted] On Mar 4, 1:21 am, python <ang..._at_gmail.com> wrote:
> I have a fairly large table (200 million rows) where data is inserted
> and deleted regularly. In a given hour, almost 1-5 million rows are
> deleted, and the application inserts a similar load in the same hour.
> The procedure is as follows; this sp runs every 10 mins and completes
> in under 2 minutes, but a lot of redo/undo is being generated. Is
> there a way to cut down the redo log size? I am noticing a BIG
> problem when bouncing the database: it takes almost 2-3 hours to
> shut the database down cleanly.
>
> <<Partial code>>
> open c1;
>     loop
>       fetch c1 bulk collect into rids limit batchsize;
>       exit when rids.count = 0;
>       forall i in 1..rids.last
>        delete from  DATA1 where rowid=rids(i) ;
>       v_del := v_del + SQL%ROWCOUNT;
>       commit;
>       dbms_application_info.set_client_info(
>         'purge_stat1_proc processed '||v_del||' rows');
>     end loop;
>     close c1;
>     commit;
>
> Thank you
> RA

Two main reasons:

You DELETE rows one by one, and you COMMIT inside the LOOP...

Two no-no's in one procedure.
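For what it's worth, a minimal sketch of the set-based alternative: one
DELETE, one COMMIT at the end. The table name DATA1 comes from your post;
the purge predicate below is a hypothetical stand-in for whatever your
cursor c1 actually selects on.

```sql
DECLARE
  v_del PLS_INTEGER;
BEGIN
  -- One set-based DELETE instead of fetching ROWIDs and deleting in batches.
  -- The WHERE clause here is hypothetical; use the same condition as c1.
  DELETE FROM data1
   WHERE purge_date < SYSDATE - 1;

  v_del := SQL%ROWCOUNT;

  -- A single COMMIT when the whole purge is done, not one per batch.
  COMMIT;

  dbms_application_info.set_client_info(
    'purge_stat1_proc processed '||v_del||' rows');
END;
/
```

Committing once per transaction instead of once per batch also removes the
risk of ORA-01555 against your own cursor.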

Cheers.

Carlos.
