Re: sane number of the table partitions in DWH
Date: Wed, 1 Apr 2009 08:59:26 -0500
So, in a nutshell, the row source from the bitmap index 'OR' operations is not selective enough: the 183 rows at the bitmap index step expand to 10 million rows once the table blocks are accessed. Most of the time is spent in single block reads, fetching table blocks by the rowids obtained from the bitmap segments. It looks like the hash join is not reducing the row count either.
I guess, to reduce that time you can:
a) keep the buffer cache huge and so reduce single block read I/O;
b) avoid single block reads altogether by reading one partition or sub-partition;
c) use some sort of b-tree index (covering the queried columns) so as to avoid table block reads.
I would probably prefer (b), given the operational nature of your database. Of course, bitmap indexes on operational, warehouse-type tables are a liability, unless bitmap index rebuilding is allowed as a scheduled activity (yuck!).
If you can design the partitioning scheme in such a way that your query accesses just one sub-partition and reads it with multiblock reads, then I think parsing time will be a very small fraction of total elapsed time, and the gain will be much larger.
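As a rough illustration of (b), here is what such a scheme could look like. All table and column names below are hypothetical (the real fact table in the thread is RB_FACT_PD_OUT_ITM, whose columns we don't know); this is a sketch, not the poster's actual DDL. The idea is a composite range-hash layout where the query predicate supplies both partitioning keys, so the optimizer prunes to a single sub-partition and scans it with multiblock I/O:

```sql
-- Hypothetical sketch: range partition by day, hash sub-partition by item,
-- so a query filtering on both columns touches a single sub-partition.
CREATE TABLE fact_pd_out_itm (
  snapshot_date  DATE    NOT NULL,
  item_id        NUMBER  NOT NULL,
  qty            NUMBER,
  amount         NUMBER
)
PARTITION BY RANGE (snapshot_date)
SUBPARTITION BY HASH (item_id) SUBPARTITIONS 16
(
  PARTITION p20090331 VALUES LESS THAN (DATE '2009-04-01'),
  PARTITION p20090401 VALUES LESS THAN (DATE '2009-04-02')
);

-- With both partitioning keys in the predicate, the optimizer can prune
-- down to one sub-partition and read it with full-scan multiblock reads
-- instead of rowid-by-rowid single block reads:
SELECT SUM(f.amount)
FROM   fact_pd_out_itm f
WHERE  f.snapshot_date = DATE '2009-04-01'
AND    f.item_id       = 12345;
```

Whether 16 hash sub-partitions (or hash at all) is the right choice depends on the data volume and the predicates your queries actually use; the point is only that both keys must appear in the WHERE clause for pruning to reach the sub-partition level.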
Lines 4) - 5): converting 87 bitmaps into rowids takes 105 - 99 = 6 seconds. That is pure CPU consumption (no additional LIO, PIO, etc.).
Lines 3) - 4): the TABLE ACCESS BY LOCAL INDEX ROWID RB_FACT_PD_OUT_ITM PARTITION operation takes 731 - 105 = 626 seconds at pr=240,653 PIOs. Not taking into account the CPU consumption of the LIOs, each PIO takes 626,000 ms / 240,653 PIOs = ~2.6 ms.
The SQL statement resource profile (Top 3 wait events), though: ...