Re: Yes, this is probably a stupid question but...

From: hpuxrac <johnbhurley_at_sbcglobal.net>
Date: Sat, 12 Apr 2008 17:51:02 -0700 (PDT)
Message-ID: <76dbb999-e8cf-40df-8962-68bc61521640@m3g2000hsc.googlegroups.com>


On Apr 12, 8:01 pm, Pat <pat.ca..._at_service-now.com> wrote:
> On Apr 12, 4:44 pm, "Ana C. Dent" <anaced..._at_hotmail.com> wrote:
>
> > Pat <pat.ca..._at_service-now.com> wrote in news:0653963d-1d32-4e85-9bae-
> > 55e038b9e..._at_f63g2000hsf.googlegroups.com:
>
> > > Question I have is:
>
> > > Short of constructing a summary table (which is probably a non-starter
> > > in this environment), is there anything I could do to speed up the
> > > return time of a count query like the above?
>
> > Ready, Fire, Aim!
>
> > If you don't know specifically where the time is being spent,
> > you'd only be guessing what change is most optimal.
>
> > Run your query with SQL_TRACE=TRUE & run the results thru tkprof.
>
> > Post the results & EXPLAIN PLAN back here if you still need more assistance
>
> I'm working over a remote desktop at the moment so I can't cut and
> paste the plan, so I'll transcribe it:
>
> select count(*) as recordcount from (a inner join b on a.sys_id =
> b.sys_id) where a.active = 1
>
> Table has approx 180k rows, of which 151k have active = true
>
> SELECT STATEMENT
> _SORT AGGREGATE
> __NESTED LOOPS
> ___INDEX RANGE SCAN (index on a.active, a.sys_id)
> ___INDEX UNIQUE SCAN (index on b.sys_id)
>
> Total Query time: 2.39 seconds
>
> I've got a covering index on the primary table and an index on the
> child table's join condition, so there are no actual table reads going
> on here, i.e. it's just an index join. I have current stats on the
> table (as of last night). From what I can see this is not an absurdly
> bad optimizer plan, but then I suspect there's something I'm not
> seeing, hence my post here.

The nested loops technique works pretty well with a limited volume of matching records, but if you are really getting counts of 200k at times, it's a non-performer from the get-go.

A hash join might perform a (little) better, but still may not be adequate.
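For illustration only, something along these lines would push the optimizer toward a hash join ( table and column names are lifted from your posted query, so adjust to your real schema ):

-- Hedged sketch: use_hash(b) asks for one hash join into b instead of
-- one indexed probe per driving row. With covering indexes on both
-- sides the row sources can still be index fast full scans, so no
-- table blocks need to be touched.
select /*+ use_hash(b) */ count(*) as recordcount
from a inner join b on a.sys_id = b.sys_id
where a.active = 1;

Whether that actually beats your 2.39 seconds is something you'd have to test on your own system.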

What release level and patchset are you running on exactly? What kind of hardware platform?

Does the application really require an absolutely accurate count in these circumstances, and if so, how is this accurate count going to be used?

Re-design may be called for here, or at least some bypass-type techniques.

If absolutely correct counts are not required, one way of thinking about a bypass is limiting the count to some base number of rows via rownum ( select count(*) from whatever the tables are with join condition AND rownum <= 10, 100, or 1000 ).
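For example ( again borrowing your table names, and picking 1000 as an arbitrary cap ):

-- rownum is assigned as qualifying rows are produced, so this predicate
-- stops the join after 1000 matches; the count is bounded at 1000 no
-- matter how many rows actually qualify.
select count(*) as recordcount
from a inner join b on a.sys_id = b.sys_id
where a.active = 1
and rownum <= 1000;

If the capped count comes back equal to the limit, the application can just display something like "1000+".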

Another bypass technique might be to create a table that has the counts already calculated and refresh that table on a periodic basis. Then point the query against the pre-aggregated table.
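A minimal sketch of that idea, with made-up object names ( a materialized view with a scheduled refresh is the more conventional Oracle way to package the same thing ):

-- one-time setup: capture the expensive count once
create table active_join_count as
select count(*) as recordcount
from a inner join b on a.sys_id = b.sys_id
where a.active = 1;

-- periodic refresh, e.g. run from a dbms_scheduler job
delete from active_join_count;
insert into active_join_count
select count(*)
from a inner join b on a.sys_id = b.sys_id
where a.active = 1;
commit;

-- the application then reads the stale-but-instant count
select recordcount from active_join_count;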
