Re: Query performance: DivideAndConquer vs. BruteForce

From: Keith Boulton <kboulton_at_ntlworld.com>
Date: Thu, 17 Jan 2002 00:24:05 -0000
Message-ID: <IMo18.31292$Hg7.3178427_at_news11-gui.server.ntli.net>


Someone has already said that with a divide-and-conquer approach you can do a single table scan at the start of the session and use it to expand lookup codes from then on. That gets more efficient the longer people stay connected - however:
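A minimal sketch of that idea, assuming a hypothetical STATUS_CODES lookup table and an ORDERS table (both names are mine, not from the original post):

    -- Once per session: pull the lookup codes into a client-side cache.
    SELECT code, description
      FROM status_codes;

    -- Every later query returns the bare code and the client expands it
    -- from the cache, instead of repeating the join in the database:
    SELECT order_id, status_code
      FROM orders
     WHERE order_date > :cutoff;

The repeated join against the lookup table goes away; the price is one extra scan, amortised over the life of the session.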

> is good enough. Given the extra programming effort needed to
> implement "divide and conquer", and the extra maintenance effort that
Minimal, given adequate programmers (what is that pink thing that just flew by the window?)

> a mere brute force solution. The cost based optimizer will come up
> with an execution plan that reflects available indexes, various join
> algorithms, table cardinalities (size expressed in rows), and other
> considerations.
And then randomly select one.
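You can at least watch which plan it selected. A quick sketch using Oracle's EXPLAIN PLAN facility (it assumes a PLAN_TABLE created by utlxplan.sql, and the query itself is made up for illustration):

    EXPLAIN PLAN FOR
      SELECT o.order_id, s.description
        FROM orders o, status_codes s
       WHERE o.status_code = s.code
         AND o.order_date > SYSDATE - 7;

    -- Walk PLAN_TABLE to print the chosen steps in tree order
    -- (empty the table first if it already holds old plans).
    SELECT LPAD(' ', 2 * (LEVEL - 1)) || operation ||
           ' ' || options || ' ' || object_name AS step
      FROM plan_table
     START WITH id = 0
    CONNECT BY PRIOR id = parent_id;

Run it before and after a statistics refresh and you can see the "random selection" change under you.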

> If, for example, there is one huge table, with selection criteria
> that will narrow the search down to a few dozen rows, and the other
> tables can all be searched using an index for table lookup, the
> response will be blazingly fast, just using the optimizer.
Not true of queries using bind variables.
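Roughly why, with a made-up ORDERS table whose STATUS column is heavily skewed:

    -- With a literal, the CBO can consult the column histogram and
    -- choose an index scan when 'CANCELLED' turns out to be rare.
    SELECT * FROM orders WHERE status = 'CANCELLED';

    -- With a bind variable, the value is unknown when the plan is
    -- built, so the optimizer falls back on an average-selectivity
    -- guess and may well pick a full table scan instead.
    SELECT * FROM orders WHERE status = :status;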

> The database design might be illogical or clumsy. The database may be
> overpopulated, compared to what was originally planned for.
The point of the CBO is that it should allow for this (it doesn't).
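Allowing for it means keeping the statistics current. Standard Oracle syntax, though the table name here is illustrative:

    -- The old way:
    ANALYZE TABLE orders COMPUTE STATISTICS;

    -- Or, from 8i on, the supported package:
    BEGIN
      DBMS_STATS.GATHER_TABLE_STATS(ownname => USER, tabname => 'ORDERS');
    END;
    /

If the table has grown far past what was planned for and nobody has re-analysed it, the CBO is costing plans against a database that no longer exists.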

> The query may be poorly phrased
i.e. the CBO is crap and can't understand it.
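To be fair, "poorly phrased" usually means something concrete. The classic case (a generic illustration, not from this thread) is wrapping an indexed column in a function, which stops the CBO from using the index at all:

    -- Poorly phrased: the function on ORDER_DATE disables its index.
    SELECT * FROM orders WHERE TRUNC(order_date) = TRUNC(SYSDATE);

    -- Rephrased as a range, so the index stays usable:
    SELECT * FROM orders
     WHERE order_date >= TRUNC(SYSDATE)
       AND order_date <  TRUNC(SYSDATE) + 1;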

> possible, and to try it out. Often, this yields satisfactory
> performance, and I move on to the next issue. In cases where the
The rule of all rules: fast enough is fast enough.
