Oracle FAQ Your Portal to the Oracle Knowledge Grid
 

Newsgroup: comp.databases.oracle.server

Re: why are full table scans good??

From: Howard J. Rogers <dba_at_hjrdba.com>
Date: Thu, 6 Dec 2001 06:28:09 +1100
Message-ID: <3c0e7548$0$27198$afc38c87@news.optusnet.com.au>


It's actually more like 2%-5%.
Regards
HJR

--
Resources for Oracle: http://www.hjrdba.com
===============================


"koert54" <koert54_at_nospam.com> wrote in message
news:gmmP7.72231$XM4.6411_at_afrodite.telenet-ops.be...

> > Can anyone provide any insight for why a data warehouse would want more
> > full table scans and a higher multiblock read count?
>
> Typical for a warehouse are reports that run against large amounts of
> denormalized data/data marts.
> It's not like you issue a query and get one record back - no ... we're
> talking huge result sets!
> What the cost-based optimizer does is estimate the amount of data the
> query will be retrieving. I don't know the exact threshold, but I think
> in 8i, if the query returns more than 7% of the table's data, a full
> table scan is used.
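You can watch the CBO make that call with EXPLAIN PLAN - a sketch, with a made-up table and predicate; the exact threshold varies with version and statistics:

```sql
-- Hypothetical table and predicate; PLAN_TABLE is created by utlxplan.sql
EXPLAIN PLAN FOR
  SELECT *
    FROM sales
   WHERE sale_date > TO_DATE('2001-01-01', 'YYYY-MM-DD');

-- Look for TABLE ACCESS FULL vs. INDEX RANGE SCAN in the output
SELECT LPAD(' ', 2 * level) || operation || ' ' || options AS step,
       object_name
  FROM plan_table
 START WITH id = 0
CONNECT BY PRIOR id = parent_id;
```

Widen or narrow the predicate and watch the plan flip between the index and the full scan.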
> The cost-based optimizer will say: "hold on - if I have to walk through
> my indexes AND do table lookups for a 60GB table that has 10% chained
> rows, for a query returning 20% of the data from that table, then .....
> screw it ... I'll just go with a FTS, because otherwise I'll just
> generate even more I/O going through the indexes ..."
> This is especially true if you keep histograms on your tables (analyze
> table compute statistics for all indexed columns), because then the CBO
> has a good idea of the data distribution and will be more accurate in
> estimating the amount of data hit by the where clause. Check a
> performance/DWH book and read about 'histograms' - it should become
> clear ....
> At some DWH sites, the word 'index' is synonymous with I/O overhead.
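Concretely, histogram collection looks something like this - a sketch, where the table name and bucket count are made up, and on 8i the supplied DBMS_STATS package is an alternative to ANALYZE:

```sql
-- Old-style: compute column statistics (histograms) on indexed columns.
ANALYZE TABLE sales COMPUTE STATISTICS
  FOR ALL INDEXED COLUMNS SIZE 75;

-- 8i alternative via the supplied package:
BEGIN
  DBMS_STATS.GATHER_TABLE_STATS(
    ownname    => USER,
    tabname    => 'SALES',
    method_opt => 'FOR ALL INDEXED COLUMNS SIZE 75');
END;
/
```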
>
>
>
> > I assume they recommend DSS systems should be set at 128 for a reason,
> > but right now I can't seem to figure out what that reason is.
>
> If I'm not mistaken you should set
> db_file_multiblock_read_count = I/O buffer size / db_block_size. The I/O
> buffer size is platform specific, but typically 64K.
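Under that assumption, and with an 8K block size (an assumed but common value), the arithmetic would be:

```sql
-- 64K I/O buffer / 8K blocks = 8 blocks per multiblock read
-- db_file_multiblock_read_count = 65536 / 8192 = 8

-- Check your actual block size first:
SELECT value FROM v$parameter WHERE name = 'db_block_size';
```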
>
> > In my experience data warehouses are the most well-indexed schemas
> > I've seen.
>
> I've never seen a TB warehouse that was very well indexed ... we force
> HASH JOINs as much as we can and use a massive degree of parallelism.
> I don't know how you do the loading of your warehouse - but if you use
> direct parallel loads against 50GB tables, your indexes are pretty much
> screwed. For me this would mean rebuilding something like 20GB of
> indexes each month!
> This is also the very reason why people are using MPP servers like IBM
> RS6000 SP2 and OPS, striping all their data over the available nodes,
> using no indexes at all...
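Forcing the hash join plus parallel full scans usually comes down to hints like these - a sketch, where the table names, aliases, and degree are invented:

```sql
-- Hypothetical fact/dimension join: full-scan both sides in parallel
-- and hash-join them instead of driving through an index.
SELECT /*+ FULL(f) FULL(d) USE_HASH(f d) PARALLEL(f, 8) PARALLEL(d, 8) */
       d.region, SUM(f.amount)
  FROM sales_fact f, store_dim d
 WHERE f.store_id = d.store_id
 GROUP BY d.region;
```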
>
>
> <gdas1_at_yahoo.com> wrote in message news:9ukk89$354$1_at_news.netmar.com...
> > We're currently going through a performance evaluation of our
> > database. I installed the tuning pack that came with Oracle and ran
> > through it to see what it could do. We're running 8.1.6 on Windows
> > 2000 (yeah, I know).
> >
> > Our query performance was pretty good, a bit sluggish for certain
> > queries, but not all that slow.
> >
> > We're running a decision support system.
> >
> > The tuning pack does its thing and comes back and recommends that I
> > should increase db_file_multiblock_read_count from 16 to 128. It also
> > checks my indexes on all my tables, based on the queries coming in,
> > and tells me that all my indexes are the right ones for the queries
> > that are coming in. "No recommendations" it says...
> >
> > So when I see this recommendation about the multiblock read count, I
> > know that this is going to increase the favorability of full table
> > scans, but I thought I'd try it out anyway. So I set the parameter to
> > 128 and bounced the database. Now all our queries are noticeably
> > slower, and I'm here posting this message because I'm trying to
> > understand why Oracle made such a recommendation.
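Incidentally - and assuming 8.1.6 behaves like other releases here - this parameter is session-modifiable, so it can be tried out without bouncing the database:

```sql
-- Test the setting for one session only; no restart needed.
ALTER SESSION SET db_file_multiblock_read_count = 128;
-- ... rerun the slow queries, then put it back:
ALTER SESSION SET db_file_multiblock_read_count = 16;
```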
> >
> > I have current (full, not estimated) statistics generated on all
> > tables and indexes. I'll admit, I haven't generated histograms,
> > however.
> >
> > We have a couple of large tables in our database. They exceed a
> > million rows, and I can't see how the optimizer believes scanning a
> > million-row table is more efficient than scanning an index.
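One way to see why the optimizer might still prefer the scan: compare the blocks a full scan must read against the index blocks plus a table-block visit per fetched row - a sketch, where 'BIG_TABLE' is a stand-in name and the statistics must be current:

```sql
-- A full scan reads roughly table_blocks / multiblock_read_count I/Os;
-- an index range scan reads blevel + some leaf blocks, PLUS up to one
-- table block per row returned - which dominates for large result sets.
SELECT t.blocks        AS table_blocks,
       i.index_name,
       i.blevel,
       i.leaf_blocks
  FROM user_tables t, user_indexes i
 WHERE t.table_name = 'BIG_TABLE'
   AND i.table_name = 'BIG_TABLE';
```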
> >
> > So, I'm trying to understand what basis this Tuning Pack has for
> > suggesting db_file_multiblock_read_count should be set at 128. Why
> > would I want to increase the favorability of full table scans? Why
> > would anyone want that, especially since it's quite clearly proven to
> > degrade our query performance?
> >
> > I assume they recommend DSS systems should be set at 128 for a reason,
> > but right now I can't seem to figure out what that reason is. In my
> > experience, data warehouses are the most well-indexed schemas I've
> > seen. Why would I want Oracle to ignore the indexes and start doing
> > full table scans?
> >
> > Can anyone provide any insight for why a data warehouse would want
> > more full table scans and a higher multiblock read count?
> >
> > Thanks
> > Gavin
> >
> >
> >
> >
>
>
Received on Wed Dec 05 2001 - 13:28:09 CST

