Massive deletes & performance
We have a large table (up to 50 million rows) with a very simple
structure:
Person_Id Number(10),
Group_Id Number(10),
Cell_Id Number(10)
Primary Key (Person_Id, Group_Id, Cell_Id)
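For reference, as DDL that structure would look roughly like this (the table name is a stand-in, since I haven't given the real one):

  CREATE TABLE person_group_cell (
    person_id NUMBER(10),
    group_id  NUMBER(10),
    cell_id   NUMBER(10),
    CONSTRAINT person_group_cell_pk
      PRIMARY KEY (person_id, group_id, cell_id)
  );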
We are "cleaning" this table using packaged procedures. Part of the cleaning involves removing rows that don't conform to certain business rules. This may mean removing up to 25% of them, or around 10 to 15 million rows at a time. To clean the entire table, we have to run through hundreds of procedures with hundreds of select statements.
In my preliminary testing, the more deletes I do, the worse performance gets. I commit from within the package every 500 records.
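In rough form, the delete loop looks like this; the table name and the business-rule predicate are stand-ins for the real cleaning logic:

  DECLARE
    v_count PLS_INTEGER := 0;
  BEGIN
    FOR r IN (SELECT person_id, group_id, cell_id
                FROM person_group_cell          -- stand-in table name
               WHERE cell_id < 0) LOOP          -- stand-in business rule
      DELETE FROM person_group_cell
       WHERE person_id = r.person_id
         AND group_id  = r.group_id
         AND cell_id   = r.cell_id;
      v_count := v_count + 1;
      IF MOD(v_count, 500) = 0 THEN
        COMMIT;                                 -- commit every 500 rows
      END IF;
    END LOOP;
    COMMIT;
  END;
  /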
Am I running into a problem with the primary key index becoming so unbalanced that it can no longer function efficiently?
Would I be better off adding a fourth column that acts as a delete_flag and simply changing its value to indicate whether or not we want to keep the row?
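In rough form, that alternative would be a one-time column addition plus an update in place of each delete (again, the table name and predicate are stand-ins):

  ALTER TABLE person_group_cell ADD (delete_flag CHAR(1) DEFAULT 'N');

  UPDATE person_group_cell
     SET delete_flag = 'Y'
   WHERE cell_id < 0;       -- stand-in business rule
  COMMIT;

  -- Subsequent queries would then exclude the flagged rows:
  -- SELECT ... FROM person_group_cell WHERE delete_flag = 'N';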
Received on Sat Sep 26 1998 - 08:46:14 CDT