Re: Deleting Large Numbers of Rows w/o Filling up my redo logs

From: Michel Cadot <micadot{at}altern{dot}org>
Date: Sat, 20 Dec 2008 17:37:55 +0100
Message-ID: <494d1f65$0$15221$426a74cc@news.free.fr>

"Pat" <pat.casey_at_service-now.com> a écrit dans le message de news: 23e92acf-7881-42df-af78-71cd2fb2fb53_at_z28g2000prd.googlegroups.com...
|
| I'm managing a server that spins about a million rows of logging
| information a day. This is up from about 200k rows a year ago, or
| about 400k rows last summer.
|
| Since we didn't want the log table to grow to absurd sizes, we like to
| keep only about 7 days' worth of logs. So we've got a nightly job that
| basically does:
|
| begin transaction
| delete from log_table where timestamp < 7 days ago
| commit transaction
|
| Historically, this worked fine, although, for obvious reasons, the
| disks spun like rabid badgers while that job was running.
|
| Recently, as the logs have gotten larger though, the cleanup job has
| been putting so much stuff in the redo logs that the rest of the
| activity on the server is grinding to a halt.
|
| I changed the job recently to do something along the lines of:
|
| begin
|   for i in reverse 7..30
|   loop
|     for j in reverse 1..24
|     loop
|       delete from log where "stamp" < sysdate - i - j/24;
|       commit;
|     end loop;
|     sys.dbms_lock.sleep(60);
|   end loop;
| end;
|
| So I'm deleting one hour's worth of log at a time, or maybe 50k rows
| instead of 1M.
|
| In testing, that seemed to work better, i.e. the redo logs seemed healthy
| and I got lots of little commits instead of one mega commit.
|
| My questions, though, are:
|
| 1) I'm putting lots of small transactions into the redo logs instead
| of one mega transaction, but I'm still generating the same absolute
| amount of redo, aren't I? Am I just moving the problem, so that the
| redo logs will be full of lots of little commits and the log writer
| won't be able to spool them off to the archive logs quickly enough?
|
| 2) Is there a "better" way to delete a big chunk of rows? If I could
| take the database out of production I know the usual tricks e.g. copy
| the rows I want to keep into a new table, then drop the old one, but
| this is a live table that I can't mess with that way.
|
| Basically, how are other people deleting large numbers of rows?

Partition your log table by week and drop the oldest partition each week, for example:
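
A minimal sketch (table, column and partition names are assumptions based on
your post; adjust the dates and names to your own schema):

create table log_table (
    id       number,
    stamp    date,
    message  varchar2(4000)
)
partition by range (stamp) (
    partition log_w2008_50 values less than (to_date('2008-12-15','YYYY-MM-DD')),
    partition log_w2008_51 values less than (to_date('2008-12-22','YYYY-MM-DD')),
    partition log_w2008_52 values less than (to_date('2008-12-29','YYYY-MM-DD'))
);

-- each week, add a partition for the coming week...
alter table log_table add partition log_w2009_01
    values less than (to_date('2009-01-05','YYYY-MM-DD'));

-- ...and drop the oldest one; this is a data dictionary operation, not a
-- delete, so it generates almost no redo or undo
alter table log_table drop partition log_w2008_50;

-- if you have global indexes on the table, keep them usable with:
-- alter table log_table drop partition log_w2008_50 update global indexes;

Local indexes are dropped along with their partition, so they need no extra
maintenance.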

Regards
Michel
