Re: long running select min(timestamp) query

From: <bobdurie_at_gmail.com>
Date: Wed, 30 Jan 2008 09:26:48 -0800 (PST)
Message-ID: <46e21d58-11d6-479e-bc90-4e0b5a5e7cf2@l1g2000hsa.googlegroups.com>


On Jan 30, 5:02 am, Robert Klemme <shortcut..._at_googlemail.com> wrote:
> On Jan 29, 10:36 pm, "bobdu..._at_gmail.com" <bobdu..._at_gmail.com> wrote:
>
> > I definitely have done some large scale deletes. When I say
> > occasionally, there have been times when the query has been fast, but
> > it hasn't been fast in a long time... likely since the last large
> > scale delete operation actually.
>
> I doubt it makes a difference for the min/max types of queries but do
> you update statistics on a regular basis, or at least after such mass
> deletes? There *may* be an impact if concurrently running queries are
> less efficient because of stale statistics and your IO becomes slower
> just because of increased concurrent IO load.

I don't do anything regularly, which is bad, yes, but I'm not an Oracle DBA, so I don't even know what I should be doing; I try to let this intelligent system handle itself :) In any event, I have updated the statistics via the command described in an earlier post, there are still no active clients connected to the db, and the query is still extremely slow.
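For what it's worth, one quick sanity check (assuming the table really is named EVENTS, as in the command quoted below) is to look at LAST_ANALYZED to confirm the gather actually refreshed the statistics:

SQL> SELECT table_name, num_rows, blocks, last_analyzed
  2    FROM user_tables
  3   WHERE table_name = 'EVENTS';

If LAST_ANALYZED isn't recent, the stats the optimizer is working with may still describe the table as it looked before the mass delete.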

>
> > I just tried to get more stats via the following call, and then
> > checked again for chained rows (not sure if it would've affected, but
> > tried anyways):
>
> > SQL> exec
> > DBMS_STATS.GATHER_TABLE_STATS(OWNNAME=>USER,TABNAME=>'EVENTS',CASCADE=>TRUE);
>
> Maybe you can even update the whole schema's statistics if you haven't
> done so in a while or on a regular basis.
>
> Do you have system statistics set?
>
> Kind regards

Not a bad idea - perhaps I'll gather the whole schema's statistics now and see what happens. How do I know whether I have system statistics set?
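In case it helps anyone following along, a minimal sketch of both steps (assuming the objects all live in the current user's schema, and that you have SELECT privilege on the SYS view) might look like:

SQL> exec DBMS_STATS.GATHER_SCHEMA_STATS(OWNNAME=>USER, CASCADE=>TRUE);

-- System statistics are stored in SYS.AUX_STATS$. If only the
-- noworkload defaults (CPUSPEEDNW, IOSEEKTIM, IOTFRSPEED) have values
-- and the workload rows (SREADTIM, MREADTIM, CPUSPEED, MBRC) are null,
-- workload system statistics have not been gathered.
SQL> SELECT pname, pval1
  2    FROM sys.aux_stats$
  3   WHERE sname = 'SYSSTATS_MAIN';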

Thanks again!
