Home -> Community -> Usenet -> c.d.o.server -> Index vs. table scans in statspack reports
I recently tuned an OLTP-type DB by changing the following init parameters:

DB_FILE_MULTIBLOCK_READ_COUNT=64 (formerly 16)
OPTIMIZER_INDEX_COST_ADJ=30 (formerly default=100)
OPTIMIZER_INDEX_CACHING=70 (formerly default=0)
As a result, some long-running operations (15-20 min) now run 20 times faster, while short operations (a few seconds) got slower (up to half a minute).
Observing disk activity (iostat on Unix), I found that the application now uses far more I/O resources: when a query starts, the data transfer rate rises dramatically, indicating massive I/O with request sizes around 450K and a sustained 30 MB/s.
My intention was to improve full table scans when they take place, but not to favor them more than before (this is a canned DB, so one should be careful). To compensate for Oracle's increased willingness to do full table scans, I set the optimizer parameters as shown above, so indexes should still be used to a healthy extent.
My suspicion is that Oracle is choosing too many full table scans now, so perhaps I should decrease OPTIMIZER_INDEX_COST_ADJ even further, and/or increase OPTIMIZER_INDEX_CACHING. Or should I change the optimizer_mode to first_rows?
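One thing I have been considering: both of these parameters are session-modifiable, so a candidate value could be tried against a known-slow statement with EXPLAIN PLAN before touching init.ora at all. A rough sketch of what I mean (the table, column, and bind names here are just made-up placeholders, not from the actual application):

```sql
-- Try a lower index cost adjustment in this session only
ALTER SESSION SET optimizer_index_cost_adj = 20;
ALTER SESSION SET optimizer_index_caching = 80;

-- See which plan the CBO now picks for a statement that got slower
EXPLAIN PLAN FOR
SELECT *
  FROM orders
 WHERE customer_id = :cust;

-- On 8.1.7 there is no DBMS_XPLAN yet; query PLAN_TABLE directly
-- (or run $ORACLE_HOME/rdbms/admin/utlxpls.sql)
SELECT operation, options, object_name
  FROM plan_table;
```

That would at least show whether a further decrease flips a given statement back to an index path, without disturbing the production instance.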
Now my question: how do I recognize a general shift from index-based execution paths toward full table scans (or the opposite) in STATSPACK reports? This is a 7x24 production DB and opportunities to change anything are rare. I can't afford to try blindly (statspack is set up and running hourly).
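What I have tried so far is to diff the scan-related counters in STATS$SYSSTAT between two snapshots directly, rather than reading the whole report. A sketch of the kind of query I mean, assuming the default PERFSTAT schema (:begin_snap and :end_snap are placeholders for two snapshot IDs; exact statistic names may vary by version):

```sql
-- Delta of scan-related counters between two statspack snapshots.
-- A rising 'table scans (long tables)' relative to 'table fetch by
-- rowid' would suggest a shift toward full table scans.
SELECT e.name,
       e.value - b.value AS delta
  FROM perfstat.stats$sysstat b,
       perfstat.stats$sysstat e
 WHERE b.snap_id    = :begin_snap
   AND e.snap_id    = :end_snap
   AND b.statistic# = e.statistic#
   AND e.name IN ('table scans (long tables)',
                  'table scans (short tables)',
                  'table scan blocks gotten',
                  'table fetch by rowid');
```

But I am not sure these are the right counters to watch, or what magnitudes of change are significant.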
You might say: "Why don't you look at the statspack reports and compare figures?" Well, there are a lot of figures. I lack the real experience to deal with the numbers and interpret them soundly (although I am trying). Some of the magnitudes seem meaningless to me.
Oracle 8.1.7.0/Solaris.
Thanks a lot
Rick Denoire
Received on Thu Nov 27 2003 - 17:33:00 CST