Re: Best way to clear stale data from very active audit tables

From: <bobdurie_at_gmail.com>
Date: Mon, 23 Jul 2007 12:36:21 -0700
Message-ID: <1185219381.513910.199350@g4g2000hsf.googlegroups.com>


On Jul 23, 2:20 pm, "Chris L." <diver..._at_uol.com.ar> wrote:
> On Jul 23, 12:37 pm, "bobdu..._at_gmail.com" <bobdu..._at_gmail.com> wrote:
>
>
>
> > Hi there, I have a Java application using Oracle's JDBC driver that
> > has a bunch of audit-style tables. We're experiencing significant
> > issues with our application when the data within these tables is
> > being purged, and I need to know a better way of designing the
> > schema (or application) to handle this situation.
>
> > Typically:
> > - our "audit" table is created with a bunch of fields, one being
> > "auditdate" which is a timestamp with a typical index on it.
> > - this table is continually insert'ed into
> > - select queries are run against this table on a regular basis
> > - once an hour our "purge" job runs that does:
> > DELETE FROM audits WHERE auditdate < (CURRENT_TIMESTAMP + INTERVAL
> > '-7' DAY(5))
>
> > When this table gets upwards of 20 million records and many inserts
> > are happening concurrently, the purge job hammers the system and
> > causes the concurrent inserts to slow to a crawl, not to mention
> > that the DELETE command above starts taking upwards of 10 minutes
> > to run.
>
> > I've debated doing a SELECT of the auditID values with the WHERE
> > clause above and deleting the records one by one, but that seems
> > horribly inefficient. Stopping the inserts altogether during the
> > purge is not really an option. Can someone suggest an alternative
> > approach or design for this scenario?
>
> > A few other points:
> > - our application has several different "audit" style tables, each of
> > them has similar requirements, some get purged every week, some every
> > 2 hours. I'm hoping this approach is universal for all because all
> > are experiencing similar problems.
> > - our application can also run against sql server, which while fast,
> > occasionally has deadlock issues doing the above purging, inserting,
> > and selecting concurrently... but it is fast. This also means we aim
> > to have db agnostic code but i realize this is not always possible.
>
> > Thank you for any advice you can offer!!
>
> > Bob
>
> Is the index on "auditdate" being used by Oracle?
> I tested your statement and got "table access full" in the explain
> plan. Using "DELETE FROM audits WHERE auditdate < sysdate-7"
> instead, I got "index range scan" in the plan.
>
> I'm assuming that the table contains 168 hours' (plus the current
> fraction) worth of logs, and that with the DELETE statement you
> posted (which runs hourly) you intend to delete the stalest one of
> those hours.
>
> Kind regards
> C.
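
For reference, the explain-plan comparison described above is easy to
reproduce. A minimal sketch, assuming a table named audits with a
normal index on auditdate (whether each predicate can use that index
also depends on the column's datatype and the optimizer statistics, so
the plans are worth checking on the real system):

    -- original purge predicate
    EXPLAIN PLAN FOR
      DELETE FROM audits
       WHERE auditdate < (CURRENT_TIMESTAMP + INTERVAL '-7' DAY(5));
    SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);

    -- rewritten against SYSDATE, as suggested above
    EXPLAIN PLAN FOR
      DELETE FROM audits
       WHERE auditdate < SYSDATE - 7;
    SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);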

Oracle has told me that because of the inequality (<) it cannot use the index, and suggested creating a "function-based index". The table doesn't always contain 168 (or whatever) hours of data; that is controlled by the application/administrator's purge settings. The retention window can be as short as a week or as long as they want: if they want to retain the audits for 4 months, they can, and we'll still run this delete command every hour (that's right, every hour we run it to remove the stale rows). As well, the "retain logs for X weeks" setting is configurable on the fly, so ideally changing it won't require redesigning the table (but if that's the case, then so be it).
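
One possibility that sidesteps the index question, sketched here purely
as an assumption rather than anything suggested in the thread: pass the
configurable retention in as a bind variable and delete in limited
batches, so each pass holds locks and undo for a shorter stretch while
inserts continue. The table name and the 10,000-row batch size are
illustrative:

    -- hypothetical batched purge, run hourly; :retention_days comes
    -- from the application's "retain logs for X weeks" setting
    BEGIN
      LOOP
        DELETE FROM audits
         WHERE auditdate < SYSDATE - :retention_days
           AND ROWNUM <= 10000;          -- cap the work done per pass
        EXIT WHEN SQL%ROWCOUNT = 0;      -- nothing old left to purge
        COMMIT;                          -- release locks/undo between passes
      END LOOP;
      COMMIT;
    END;
    /

This doesn't make the total work any smaller, it only spreads it out;
partitioning (sketched further down) is the approach that avoids the
big DELETE altogether.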

If a partitioned table is the answer, do you think I could set this up from the application, i.e. via SQL? Would the partitioning scheme need to be adjusted all the time, i.e. by creating new partitions as the days/weeks go by? If anyone has any sample SQL, I'd love to see it.
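
On the partitioning question: partitions can be created and dropped
with ordinary DDL, which an application can issue over JDBC like any
other statement (note that partitioning is a separately licensed option
of Oracle Enterprise Edition). A rough sketch with one partition per
day; all names, columns, and dates below are made up for illustration:

    -- range-partition the audit table on auditdate
    CREATE TABLE audits (
      auditid    NUMBER         NOT NULL,
      auditdate  TIMESTAMP      NOT NULL,
      detail     VARCHAR2(4000)
    )
    PARTITION BY RANGE (auditdate) (
      PARTITION p20070723 VALUES LESS THAN (TIMESTAMP '2007-07-24 00:00:00'),
      PARTITION p20070724 VALUES LESS THAN (TIMESTAMP '2007-07-25 00:00:00')
    );

    -- keep indexes LOCAL so they survive partition maintenance
    CREATE INDEX audits_auditdate_ix ON audits (auditdate) LOCAL;

    -- periodic job issued by the application:
    -- add a partition ahead of the incoming data...
    ALTER TABLE audits ADD PARTITION p20070725
      VALUES LESS THAN (TIMESTAMP '2007-07-26 00:00:00');
    -- ...and age out everything older than the retention window
    ALTER TABLE audits DROP PARTITION p20070723;

So the scheme does need ongoing adjustment (a partition added per day
or week, old ones dropped), but dropping a partition is a near-instant
dictionary operation rather than a row-by-row DELETE, and changing the
retention setting only changes which partitions get dropped, not the
table design. Depending on the Oracle version available, interval
partitioning can automate the partition creation step.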
