
Re: Index management

From: Noons <wizofoz2k_at_yahoo.com.au>
Date: Tue, 4 May 2004 23:25:42 +1000
Message-ID: <40979aeb$0$4547$afc38c87@news.optusnet.com.au>


"Joel Garry" <joel-garry_at_home.com> wrote in message news:91884734.0405031221.3a68bcdf_at_posting.google.com...

> I think this extreme ignores the reality that so many "typical
> business systems" will still have times of batch processing. Even
> with multiple, er, streams, of batches, whatever scheduling algorithm
> is used will hopefully allow some chunks of time for each.

<brain-storming>
Yeah, but is that still a determining factor? I mean, maybe we should ignore fine-tuning for batch and go all out for even I/O performance across the board? Much simpler to keep going, no?
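
(Purely as a sketch of what I mean by "even I/O across the board": something like the query below, against the stock v$filestat and v$datafile views, shows whether physical reads and writes are spread reasonably evenly over the datafiles. Nothing clever, just the standard columns; adjust to taste.)

select d.name,
       f.phyrds   physical_reads,
       f.phywrts  physical_writes,
       f.readtim  read_time,
       f.writetim write_time
from   v$filestat f,
       v$datafile d
where  f.file# = d.file#
order  by f.phyrds + f.phywrts desc;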

> The idea is to account for common possibilities. SAME may be ok for
> high-transactional times, but I think the allocation optimisation this
> thread is talking about may make some difference during "other" times.

Well, I think the thread is saying very clearly that index-level clustering and other "defrag" techniques are not that important once multi-user access is involved, which is the vast majority of the time. Should we really bother with the odd "other" conditions? Does it really gain us much anymore? </brain-storming>
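
(For anyone wanting to check whether the clustering/"defrag" argument even applies to them, a rough sketch using the stock dba_indexes/dba_tables columns: compare an index's clustering_factor against the table's blocks and rows. A clustering_factor close to the number of table blocks means the index is already well clustered; close to the number of rows means it isn't, and in a busy multi-user system it will tend to drift back there anyway. The 'SCOTT' schema below is just a placeholder.)

select i.index_name,
       i.clustering_factor,
       t.blocks   table_blocks,
       t.num_rows table_rows
from   dba_indexes i,
       dba_tables  t
where  i.table_name  = t.table_name
and    i.table_owner = t.owner
and    i.table_owner = 'SCOTT'   -- placeholder schema, change as needed
order  by i.clustering_factor desc;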

> I'd rather have the tools to optimize multiple layers for different
> possible configurations, _and have the tools work together_. This
> "trust us, we're experts" stuff from Oracle and MS is condescending
> crap inevitably leading to stuff like NT defragmentation and
> http://metalink.oracle.com/metalink/plsql/ml2_documents.showDocument?p_database_id=BUG&p_id=3455402
> We've seen over and over again that increasing the number and depth of
> abstraction layers with the hope that Moore's law will bail us out
> leads to putting stuff in the wrong layers.

Amen!
The problem, of course, is how to find a middle ground...

> > a decade, when it first appeared in Sybase. I'm not sure it means anything
> > in an NTFS environment where physical allocation is all over the place unless
> > one takes precautions? Or with any modern SAN?
>
> I think predictive read-ahead could mean a lot in a modern SAN.

Used to think so, but I'm not so sure nowadays. The SAN knows remarkably little about the db structure and allocation strategy. With something like ASM (or any future SQL Server equivalent) it can all go horribly wrong. If "intelligence" is to be added in terms of predictive behaviour, I'd rather it were applied in one place only: either the SAN or the db. Mixing two "intelligences" is a recipe for disaster IME.

Not sure yet what Oracle's recommendation on ASM is gonna be, but I suspect they'll eventually ask anyone relying on it to turn off all the "intelligent" bits everywhere else.
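
(If ASM does become the norm, the sort of thing I'd expect to be reviewing on the db side is roughly this. Parameter names are the standard 9i/10g ones, and v$asm_diskgroup is 10g only, so treat it as a sketch rather than a recommendation:)

-- SQL*Plus: current db-side I/O settings
show parameter db_file_multiblock_read_count
show parameter filesystemio_options

-- 10g only: what ASM thinks it is managing
select name, type, total_mb, free_mb
from   v$asm_diskgroup;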

-- 
Cheers
Nuno Souto
wizofoz2k_at_yahoo.com.au.nospam
Received on Tue May 04 2004 - 08:25:42 CDT
