
Re: Massive deletes & performance

From: Vinay Joshi <vjoshi_at_pinnacle.co.uk>
Date: Sat, 26 Sep 1998 14:43:05 GMT
Message-ID: <01bde95c$006c4300$55c015ac@klangley.pinnacle.co.uk>


Try writing the rowids of the records you want to delete into a separate table and then deleting using those rowids. I think it will be quicker than adding a column, which would also invalidate your packaged procedures etc.
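Something along these lines might work (an untested sketch; the table name person_group_cell, the staging table bad_rowids, and the cleaning predicate are placeholders for whatever your procedures already use):

  -- Stage the rowids of the rows that fail a business rule
  CREATE TABLE bad_rowids (rid ROWID);

  INSERT INTO bad_rowids (rid)
  SELECT rowid
  FROM   person_group_cell
  WHERE  ... ;   -- your existing cleaning predicate goes here

  -- Then delete by rowid, committing in batches to keep rollback modest
  DECLARE
    v_count PLS_INTEGER := 0;
  BEGIN
    FOR r IN (SELECT rid FROM bad_rowids) LOOP
      DELETE FROM person_group_cell WHERE rowid = r.rid;
      v_count := v_count + 1;
      IF MOD(v_count, 500) = 0 THEN
        COMMIT;
      END IF;
    END LOOP;
    COMMIT;
  END;
  /

Deleting by rowid means the expensive selects run only once, when the staging table is populated, and the staging table can simply be dropped once the cleanup is done.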

tomscott_at_nospam.abac.com wrote in article <360ceec4.673470774_at_news1.abac.com>...
> We have a large table (up to 50 million rows) with a very simple
> structure:
> Person_Id Number(10),
> Group_Id Number(10),
> Cell_Id Number(10)
> Primary Key(Person_Id, Group_id, Cell_id)
>
> We are "cleaning" this table using packaged procedures. Part of the
> cleaning involves removing rows that don't conform to certain business
> rules. This may mean removing up to 25% of them, or around 10 to 15
> million rows at a time. To clean the entire table, we have to run
> through hundreds of procedures with hundreds of select statements.
>
> In my preliminary testing, the more deletes I do, the worse the
> performance gets. I am doing a commit from within the package every
> 500 records.
>
> Am I having a problem with the primary key index getting so out of
> balance that it can't function correctly?
>
> Would I be better off having a 4th field that operates as a
> delete_flag and just changing the value of that field to indicate
> whether or not we want to keep the row?
>
Received on Sat Sep 26 1998 - 09:43:05 CDT

