Re: Query Slow in 10g, runs very fast in 8i

From: EscVector <Junk_at_webthere.com>
Date: 7 Jan 2007 21:03:13 -0800
Message-ID: <1168232593.640388.30820@38g2000cwa.googlegroups.com>

EscVector wrote:
> Jonathan Lewis wrote:
> > "EscVector" <Junk_at_webthere.com> wrote in message
> > news:1168025465.514691.206890_at_42g2000cwt.googlegroups.com...
> > >
> > > Jonathan Lewis wrote:
>
> "You could at least have supplied an argument in favour of
> your suggestion so that DBdude had an option for deciding"
>
> Jonathan,
> Thank you for the response. I'm not out to impress anyone, but I did
> miss an opportunity here and your point is well taken. I admit I had
> ulterior motives when I made that suggestion, one of which was to get
> responses such as yours and the one Niall posted, but in fairness, I
> really should have included the following:
>
> The "guess" is that the query is using a large driving table in a
> nested loop.
> Using "ALTER SESSION SET optimizer_index_cost_adj = 10000;" should
> bias the optimizer to use hash joins. xplan should show this if it is
> true on dbdue's system. I am not saying to set it to some number in
> between. I'm saying to set it to 10000 to bias all the way with no
> gray area. The option is 10000 and nothing else. Any number short of
> 10000 introduces unwanted variance. Doing this could make the query
> run faster provided there is enough fast temp space if the hash can't
> fit in memory and is not on raid-5. This assumes it is a warehouse
> procedure and not OLTP think time app where first_rows would be
> necessary for display. The assumption is based on the original 11
> minute runtime. Most end uses don't let apps spin for 11 minutes. It
> also assumes that init settings are all default and automatic memory
> manager is in use. Also provided nested loop with large driving table
> is the issue.
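>
> Roughly what I have in mind, assuming SQL*Plus with DBMS_XPLAN
> available, and with big_tab and small_tab standing in for the real
> tables in the slow query:
>
> ALTER SESSION SET optimizer_index_cost_adj = 10000;
>
> EXPLAIN PLAN FOR
> SELECT b.col1, s.col2
>   FROM big_tab b
>   JOIN small_tab s ON s.id = b.id;   -- the slow query goes here
>
> SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);
> -- the plan should now show HASH JOIN rather than NESTED LOOPS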
>
> Why 10000? Because I have found that, fresh out of the box, 10g is
> naturally biased toward nested loops for many large join queries. The
> default setting is 100, so throw it to the other end: not 250, not 1,
> not 500. Turn the dial as far as it will go. It is not a guess; it is
> about deliberately forcing hash joins.
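>
> To see where the dial currently sits before touching it (assuming you
> can query v$parameter):
>
> SELECT name, value, isdefault
>   FROM v$parameter
>  WHERE name = 'optimizer_index_cost_adj';
> -- on an untouched 10g instance this should come back as 100 with
> -- ISDEFAULT = TRUE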
>
> Nested-loop bias, how so? Well, this is the gray area. There is no one
> answer that I've seen; each system has had different "reasons" why it
> happened. There are whole symposiums for those topics.
>
> "Possibly the only sensible "quick and dirty" option at this
> point is to set optimizer_features_enable to 8.1.7, and
> restart the database."
>
> Why is this sensible? I'm not asking as a critique; I really want to
> understand why you say it is sensible. Here's why I ask: while this
> removes some variables from the equation by eliminating newer
> optimizer options, it does not factor in that the SGA is probably set
> larger, that the build is for a different OS, etc., and it skews any
> new results. Also, why not just set it for that session, since that
> requires no downtime at all? Downtime = less money + unhappy people
> in my world.
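>
> Something like this is what I mean by setting it per session rather
> than bouncing the database, since optimizer_features_enable is
> session-modifiable and takes a version string:
>
> ALTER SESSION SET optimizer_features_enable = '8.1.7';
> -- run the problem query and check its plan under the old optimizer
> -- behaviour, then log off or set the parameter back to the
> -- instance's own 10g value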
>
> "How many "no effort, no-brainer" chances does DBdude
> have for modifying a production system with each of these
> changes in turn before he's wasted a lot of effort and time,
> annoyed a lot of people, and lost credibility."
>
> I wasn't suggesting modifying the system. I'm suggesting modifying the
> session, and in this case only for that query's run and only that one
> setting, nothing else. And I'm only saying to try this one guess while
> other solutions are pursued.
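>
> In other words, bracket the single run and put the dial straight back
> when it is done (100 being the default I mentioned above):
>
> ALTER SESSION SET optimizer_index_cost_adj = 10000;
> -- run the one slow query here
> ALTER SESSION SET optimizer_index_cost_adj = 100;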
>
> I think the real issue here is that DBDude's modus ponens assumes
> linear performance gains from system upgrades; that is, "I have a
> better machine on better software, therefore I will have a better,
> faster system." He's already not looking too good if that is the
> case, since the hardware and the upgrade time are already booked and
> some things are slower out of the gate. My question to DBdude is: why
> did you think the system would be faster? DBDude now has to justify
> why dollars must be spent to fix a slow query on a newer, faster
> system, a query that ran fine on a lesser machine running older
> software.
>
> "there is NO easy option."
>
> It may not be the smartest or most impressive, but I bet there is an
> easy option for this one query. We just don't know what it is without
> a little work or maybe a heavy bias.
>
> Any additional input is greatly appreciated.

Also, any test DBDude performs should take no longer than 12 minutes. If it runs longer, the test has failed: Ctrl-C and start again. Once again, this is focused on one query, not the entire system.

Received on Sun Jan 07 2007 - 23:03:13 CST
