Re: Yes, this is probably a stupid question but...

From: Robert Klemme <shortcutter_at_googlemail.com>
Date: Mon, 14 Apr 2008 22:52:02 +0200
Message-ID: <66hufjF2juk9uU1@mid.individual.net>


On 14.04.2008 00:18, Pat wrote:

> On Apr 13, 2:52 pm, "Jonathan Lewis" <jonat..._at_jlcomp.demon.co.uk>
> wrote:

>> "Pat" <pat.ca..._at_service-now.com> wrote in message
>>
>> news:b1ed3a55-a7f3-4ba1-a6fd-53c0f0548bd5_at_b64g2000hsa.googlegroups.com...
>>
>>
>>
>>> On Apr 13, 11:20 am, hpuxrac <johnbhur..._at_sbcglobal.net> wrote:
>>>> On Apr 12, 5:55 pm, Pat <pat.ca..._at_service-now.com> wrote:
>>>> Another option to consider is creating a materialized view to
>>>> support the count query.
>>>> Going by the posts in the other part of this thread, if you force
>>>> the count query to use a hash join, that might help the queries that
>>>> return a high count (200k) but could (possibly severely) hurt the
>>>> queries that (currently running with a nested loop) return a small
>>>> count.
>>> I'm going to show my ignorance here, but would a materialized view
>>> help if there's a lot of transactional volume on the table, e.g. if
>>> there are, say, 500 or so updates per hour across that table? Wouldn't
>>> I need to refresh the view every few seconds to keep it current? Or is
>>> this the part I'm missing?
>> There is an option to "refresh on commit".
>>
>> It's not a good idea to use it on a very busy table as the overheads
>> are dramatic (a single row update resulted in about 45 statement
>> executions on the commit the last time I tested it). However, at
>> one update every 7 seconds, you may find that the overheads are
>> acceptable.
>>
>> There are a couple of oddities with read-consistency, though.
>>
>> Check what happens if you update the table, then (from the same session)
>> run the query BEFORE committing. Your version of the query has to go to
>> the base tables to see your update rather than using the materialized
>> view -
>> and you may then be caught in a trap where other users start sharing the
>> cursor that visited the base tables rather than the view. Unless you have
>> very good control over how the updates and queries synchronise, you
>> could get caught in the trap of randomly changing performance.
>>
>> A completely different thought - this may be a case where you want to
>> get literal values into your query and build histograms to support the
>> query. You had 150,000 out of 180,000 rows with status 'active'.
>> If you create a histogram on the column, and use literals in your SQL
>> you may find that the optimizer uses the 'damage-limiting' hash join on
>> the worst case, and the nested loop on the other cases. The cost of
>> the extra optimisation stages may be insignificant compared to the benefit
>> of doing the right thing every time.
>>
>> --
>> Regards
>>
>> Jonathan Lewis
>> http://jonathanlewis.wordpress.com
>>
>> Author: Cost Based Oracle: Fundamentals
>> http://www.jlcomp.demon.co.uk/cbo_book/ind_book.html
>>
>> The Co-operative Oracle Users' FAQ
>> http://www.jlcomp.demon.co.uk/faq/ind_faq.html
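
For what it's worth, the refresh-on-commit approach could look roughly
like this (a minimal sketch only; the table and column names are
invented, since the real schema never appeared in the thread):

    -- The MV log must capture the grouped column; SEQUENCE, ROWID and
    -- INCLUDING NEW VALUES are needed for fast refresh of an
    -- aggregate materialized view.
    CREATE MATERIALIZED VIEW LOG ON t
      WITH SEQUENCE, ROWID (active)
      INCLUDING NEW VALUES;

    -- Per-value counts, maintained at commit time.
    CREATE MATERIALIZED VIEW t_active_cnt
      REFRESH FAST ON COMMIT
      AS
      SELECT active, COUNT(*) AS cnt
      FROM   t
      GROUP  BY active;

The count query can then read cnt straight from t_active_cnt, or query
rewrite can redirect it automatically if the view is created with
ENABLE QUERY REWRITE.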
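
And the histogram-plus-literals idea would be along these lines (again
with hypothetical names; SIZE 254 asks for the maximum number of
buckets, which on a two-value column collapses to a two-bucket
frequency histogram):

    BEGIN
      DBMS_STATS.GATHER_TABLE_STATS(
        ownname    => USER,
        tabname    => 'T',
        method_opt => 'FOR COLUMNS active SIZE 254');
    END;
    /

    -- With a literal rather than a bind variable, the optimizer can
    -- read the histogram and pick hash join or nested loop per value:
    SELECT COUNT(*) FROM t WHERE active = 1;
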
> 
> In this case, it looks like having a histogram didn't help. The active
> column has two possible values (0 or 1), so it's essentially a
> boolean. Even with a two-bucket histogram on it, the optimizer is
> still going down the nested loops path.
> 
> I may take a crack at putting in a refreshing materialized view,
> though.

An alternative approach may be to use a trigger and a counter table.
You might face locking issues, though; that depends on how much
concurrency you have on that table, on how many original rows fold
down into one counter row, and on the duration of your transactions.
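
A rough sketch of what I mean (names are all invented, and it assumes
t_counts has been pre-seeded with one row per distinct value):

    CREATE TABLE t_counts (
      active NUMBER PRIMARY KEY,
      cnt    NUMBER NOT NULL
    );

    -- Row trigger keeps the counters in step with the base table.
    CREATE OR REPLACE TRIGGER t_counts_trg
    AFTER INSERT OR DELETE OR UPDATE OF active ON t
    FOR EACH ROW
    BEGIN
      IF INSERTING OR (UPDATING AND :NEW.active != :OLD.active) THEN
        UPDATE t_counts SET cnt = cnt + 1 WHERE active = :NEW.active;
      END IF;
      IF DELETING OR (UPDATING AND :NEW.active != :OLD.active) THEN
        UPDATE t_counts SET cnt = cnt - 1 WHERE active = :OLD.active;
      END IF;
    END;
    /

The locking shows up in those updates of t_counts: every transaction
that changes active serializes on the same counter row until it
commits.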

> Need to sit down and do a little math on relative transaction
> frequency though; want to make sure I end up saving cycles rather
> than burning them.

It's always good to do some math beforehand.
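
To make that concrete with invented numbers: at your 500 updates per
hour, the trigger costs roughly 500 extra single-row updates per hour.
If the count query runs, say, 50 times an hour and each run touches on
the order of 180,000 rows, you trade those 500 cheap updates for
50 x 180,000 = 9,000,000 rows of work per hour; the trigger only stops
paying off when the update rate dwarfs the query rate.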

Kind regards

        robert
