Re: optimal method for purging time based data

From: uberchump <>
Date: Wed, 23 Sep 2009 10:24:37 -0700 (PDT)
Message-ID: <>

On Sep 21, 2:56 pm, joel garry <> wrote:
> On Sep 21, 10:24 am, "" <> wrote:
> > Hi,
> > I'm a developer of a J2EE-based web application that supports multiple
> > databases (Oracle obviously included).  We access Oracle via their
> > JDBC driver.
> > We have several tables in our database that store time based events,
> > and periodically we have a routine in our code that "purges" the
> > data.  We run a query similar to the one below, this one will purge
> > all but the last 7 days of data:
> > DELETE FROM someeventtable WHERE eventdate < (LOCALTIMESTAMP +
> > INTERVAL '-10080' MINUTE);
> > This "someeventtable" is constantly being inserted to in our
> > application as well.  We've found that on busy systems this query can
> > take a VERY long time.  I did some experimentation in the past and
> > found that using "CURRENT_TIMESTAMP" is in fact far slower than
> > "LOCALTIMESTAMP", which is why we're using the latter - however, I'm
> > hoping we can do better.
> > Can anyone suggest a SQL based method for doing this more
> > efficiently?  It must be SQL based as we support other DB's and need
> > to remain agnostic as much as possible.
> > Thx,
> > Bob
> This is one of those things where using a database blind command can
> really bite you in places you don't want to be bitten.  I'm pretty
> sure Jonathan Lewis has written something recently about these kinds
> of windowed deletes, but I couldn't find it with a quick google.
> However, I did find
> , which, if you follow some links, gives some idea of the types of
> specific problems one can run into.  Of course, specific problems tend
> to have specific solutions, so in the Oracle world they tend to be
> things like coalescing after deleting, or using partitioning, or doing
> a create table as select for the rows you want to keep, then renaming
> table.
> For situations like these, you might as well fork code for database
> engines.  The alternative is to have people think badly of your work.
> Blaming Oracle for having Oracle specific solutions is pretty silly,
> too.
> You also might want to poke around on, I'm sure this
> subject has been covered there, somewhere.
> There is a tuning book by Cary Millsap (
> ) that has a methodology for figuring out what is slowing you down.
> You also should use the tools available to you to figure out what is
> actually slowing you down, rather than relying on mysterious
> incantations that may do nothing to solve your problem.  Look for
> tools that ogle tables with wait in the table name.  Enterprise
> Manager allows you to drill down and look at what a slow session is
> doing.
> jg
> --
> is bogus.
> Step aside!

Hi Joel,

Thanks for the comments! I'm definitely not blaming Oracle for anything here, and I certainly do have some flexibility to fork code and do DB-specific operations, but for complexity reasons we try to keep things as simple as possible.
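For reference, my reading of the create-table-as-select-then-rename approach you mentioned is something like the sketch below. This is Oracle-specific, the "_keep" table name is just a placeholder of mine, and I'd expect a real DBA to fill in the storage, constraint, and index details:

```sql
-- Sketch of the CTAS-and-rename purge (Oracle-specific).
-- "someeventtable" and the 7-day window come from my original post;
-- "someeventtable_keep" is a placeholder name.
CREATE TABLE someeventtable_keep AS
  SELECT * FROM someeventtable
   WHERE eventdate >= (LOCALTIMESTAMP - INTERVAL '7' DAY);

-- Swap the tables. Indexes, grants, triggers, and constraints would
-- have to be recreated on the new table, which is why this needs a
-- quiet period on inserts.
DROP TABLE someeventtable;
RENAME someeventtable_keep TO someeventtable;
```

Given that our app inserts into this table continuously, I realize this would only really work in a maintenance window, which is part of why I'm asking about post-delete cleanup instead.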

To that end, is there anything I can/should do after these deletes? For example (based on what you've pointed me to and some additional reading), I could run "ALTER INDEX ... COALESCE" after massive deletes. I could, for instance, say that if the delete operation took longer than 1 or 5 minutes, run this command to hopefully free up some of the space. Does this sound like a decent idea? I can run any other similar commands before and after the delete as long as the table itself is not affected (as it will continue to be inserted into during the delete operation).
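Concretely, the kind of post-delete cleanup I had in mind looks like this (Oracle-specific, and the index name is hypothetical - it would be whatever index covers the eventdate column):

```sql
-- After a large windowed delete, coalesce the index on the date column
-- to merge adjacent, mostly-empty leaf blocks back together.
-- "someeventtable_eventdate_ix" is a placeholder index name.
ALTER INDEX someeventtable_eventdate_ix COALESCE;
```

My understanding is that COALESCE is an online operation, unlike a full ALTER INDEX ... REBUILD, so it shouldn't block the inserts that are still going on - please correct me if that's wrong.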

It's tricky because we do our best to keep a hands-off approach to our customers' databases and try to manage things as much as we can. But when we run into issues like these, it's clear that a real DBA should come in and help determine what's going wrong (as you may have guessed, IANADBA :)).

Let me know if you have further thoughts, thanks!

Bob

Received on Wed Sep 23 2009 - 12:24:37 CDT
