Re: Purging master table and loading Aggregate tables

From: onedbguru <onedbguru_at_yahoo.com>
Date: Sun, 13 Mar 2011 14:42:36 -0700 (PDT)
Message-ID: <da17def5-6aa8-4716-aaa9-5c3531cf7474_at_r4g2000prm.googlegroups.com>


On Mar 4, 4:01 am, Carlos <miotromailcar..._at_netscape.net> wrote:
> On Mar 4, 1:21 am, python <ang..._at_gmail.com> wrote:
>
> > I have a fairly large table (200 million rows) where data is
> > inserted and deleted regularly. In a given hour, almost 1-5 million
> > rows are deleted, and the application inserts a similar load in the
> > same hour.
> > The procedure is as follows; this sp runs every 10 mins and
> > completes in under 2 minutes, but a lot of redo/undo is being
> > generated. Is there a way to cut down the redo volume? I am noticing
> > a BIG problem when bouncing the database: it takes almost 2-3 hours
> > to cleanly shut down the database.
>
> > <<Partial code>>
> > open c1;
> > loop
> >   fetch c1 bulk collect into rids limit batchsize;
> >   exit when rids.count = 0;
> >   forall i in 1..rids.last
> >     delete from data1 where rowid = rids(i);
> >   -- SQL%ROWCOUNT after a FORALL is the total rows deleted across
> >   -- all iterations of the bulk statement
> >   v_del := v_del + SQL%ROWCOUNT;
> >   commit;
> >   dbms_application_info.set_client_info(
> >     'purge_stat1_proc processed ' || v_del || ' rows');
> > end loop;
> > close c1;
> > commit;
>
> > Thank you
> > RA
>
> Two main reasons for all that redo/undo:
>
> You DELETE rows one-by-one (by rowid), and you COMMIT inside the
> LOOP...
>
> Two no-no's in one procedure.
>
> Cheers.
>
> Carlos.
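
To make Carlos's point concrete, here is a minimal set-based sketch.
The definition of cursor c1 was never posted, so the date predicate
below is a hypothetical stand-in for the real purge criterion:

  -- One statement, one transaction, one commit at the end.
  -- No fetch/FORALL loop, no commit-per-batch.
  -- (created < sysdate - 1 is an assumed predicate, not the OP's.)
  delete from data1
   where created < sysdate - 1;
  commit;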

What version?

Sounds like you need to rework your app so you are not deleting millions of records every hour in the first place. It also sounds like an adaptation of a MySQL app.

Should you be fortunate enough to be using 11gR2 (11.2.0.x), look at DBMS_PARALLEL_EXECUTE. It works GREAT for this sort of thing: it splits the table into chunks and runs the purge statement against the chunks in parallel.
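
A hedged sketch of the DBMS_PARALLEL_EXECUTE approach. The table name
DATA1 comes from the posted code; the date predicate is again an assumed
stand-in for the real purge criterion, and chunk_size/parallel_level
need tuning for your system:

  declare
    l_task varchar2(30) := 'purge_data1';
  begin
    -- Register a task and carve DATA1 into rowid-range chunks.
    dbms_parallel_execute.create_task(task_name => l_task);
    dbms_parallel_execute.create_chunks_by_rowid(
      task_name   => l_task,
      table_owner => user,
      table_name  => 'DATA1',
      by_row      => true,
      chunk_size  => 10000);

    -- Each chunk executes the DELETE against its own rowid range;
    -- :start_id and :end_id are bound by the package, one commit
    -- per chunk instead of one giant transaction.
    dbms_parallel_execute.run_task(
      task_name      => l_task,
      sql_stmt       => 'delete from data1
                          where rowid between :start_id and :end_id
                            and created < sysdate - 1',  -- assumed predicate
      language_flag  => dbms_sql.native,
      parallel_level => 4);

    dbms_parallel_execute.drop_task(l_task);
  end;
  /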
