Re: Best way to delete million records
Date: 30 Jun 2008 20:42:48 GMT
Message-ID: <48694548$0$1344$834e42db@reader.greatnowhere.com>
On Mon, 30 Jun 2008 10:00:51 +0200, fefe78 wrote:
> Hi all,
>
> I have to delete about 43 million rows from a table, but the following
> conditions exist:
>
> - The table is not partitioned and unfortunately I can't modify the
> structure
> - The delete procedure must be done without stopping the service, so I
> can't use CTAS, moving records and renaming the table.
> - I can't use truncate because the affected records are selected by a
> range of dates
>
> Awaiting your suggestions and any likely problems with this procedure.
>
> Thanks for help.
>
> Bye
You can also write a PL/SQL script that deletes the records a million at
a time, committing after each batch. That keeps you from running out of
UNDO space, but it will probably run for a week in the background.
Deleting 43 million records is a major intervention, and if you want to
do it in a non-invasive fashion, a background job running a PL/SQL delete
script is, in my opinion, the best way.

Of course, you should also watch the free space on your archive log
destination. Something like this is more than likely to create a major
log-switching storm. Expect a checkpoint every few seconds, with an
occasional "checkpoint not complete" message. Users are more than likely
to notice that kind of activity.

If it is possible, you can launch several delete scripts in parallel, one
deleting Mondays' rows, one Tuesdays', and so on. If your machine has
enough capacity it will survive, but to survive something like that you
should have at least 6 CPUs.
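A minimal sketch of such a batched delete, as an anonymous PL/SQL block.
The table name (big_table), the date column (dt), the date range, and the
batch size are all placeholders you would adapt to your schema:

```sql
-- Delete in bounded batches, committing after each one so UNDO
-- usage stays limited.  Table/column names and the range are
-- assumptions, not taken from the original post.
BEGIN
  LOOP
    DELETE FROM big_table
     WHERE dt >= DATE '2007-01-01'
       AND dt <  DATE '2008-01-01'
       AND ROWNUM <= 1000000;      -- one batch of up to a million rows
    EXIT WHEN SQL%ROWCOUNT = 0;    -- nothing left in the range
    COMMIT;                        -- release UNDO for this batch
  END LOOP;
  COMMIT;
END;
/
```

If you run several copies in parallel, give each one a disjoint predicate
(for example, TO_CHAR(dt, 'DY') = 'MON' in one job, 'TUE' in another) so
the jobs never contend for the same rows.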
Also, if there is a standby database, it needs careful monitoring too:
the redo shipping for all those deletes might saturate your 1 Gb network
link.
--
http://mgogala.freehostia.com
Received on Mon Jun 30 2008 - 15:42:48 CDT