
RE: Storage guidelines in 9iR1 ??

From: Gaja Krishna Vaidyanatha <>
Date: Fri, 21 Mar 2003 10:12:58 -0800 (PST)
Message-Id: <>

Hi Kevin,

Long time no talk or see. Hope things are well with you. Going forward, it will be nice for us to "quantify" any performance differences that we observe, preferably with data supporting the claim. Don't get me wrong, I am not trying to beat you up on this, but trying to bring some clarity to the situation. Your original posting very strongly suggested that 1024 was some kind of magical number beyond which SQL performance took a "significant" dive. We need to find out (or at this stage, hypothesize) what caused the performance decrease.

If we were to look at this more objectively, it is very unlikely that full-table scans would perform worse with more extents, assuming that everything else remains constant. This is because Oracle would have issued the same number of "read system calls" to process the data below the high-water mark, regardless of the number of extents. This of course assumes that the value of db_file_multiblock_read_count did not change over time.
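The read-call argument above can be sketched as simple arithmetic (a minimal sketch; the function name is mine, and it makes the simplifying assumption that a multiblock read is never cut short at an extent boundary, which holds well when extents are at least db_file_multiblock_read_count blocks long):

```python
import math

def read_calls(blocks_below_hwm, db_file_multiblock_read_count):
    """Approximate number of multiblock read system calls for a full-table scan.

    A full-table scan reads every block below the high-water mark, up to
    db_file_multiblock_read_count blocks per call. Note that the extent
    count does not appear anywhere in this formula.
    """
    return math.ceil(blocks_below_hwm / db_file_multiblock_read_count)

# Same 100,000 blocks below the HWM -- whether the segment has 10 extents
# or 10,000, the scan issues the same number of read calls.
print(read_calls(100_000, 16))  # 6250
```

The only variables that matter are the block count below the high-water mark and db_file_multiblock_read_count, which is exactly the point being made.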

Index scans are also rarely affected by the number of extents, because an index-scan will still require reading of the same root, branch and leaf nodes to determine the ROWIDs for the search (regardless of the number of extents in the table).

One very plausible and probable cause for the performance decrease could be the onset of "block-level fragmentation" that happens over time. This can be measured by calculating the "data density" (defined as rows/block below the high-water mark) of the blocks in the segment over time. Data density issues are usually caused by suboptimal settings of PCTUSED and PCTFREE. Again, if PCTUSED is causing the data density issue, then even that can be overcome by using Automatic Segment Space Management in 9i.
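As a back-of-the-envelope sketch of that measurement (the helper name and the snapshot numbers are mine; in practice the row and block counts would come from the dictionary, e.g. NUM_ROWS and BLOCKS in DBA_TABLES after gathering statistics):

```python
def data_density(num_rows, blocks_below_hwm):
    """Average rows per block below the high-water mark."""
    return num_rows / blocks_below_hwm

# Hypothetical snapshots of the same table, taken some months apart:
density_then = data_density(1_000_000, 20_000)   # 50 rows/block
density_now = data_density(1_000_000, 100_000)   # 10 rows/block

# A falling density with a stable row count points at block-level
# fragmentation (PCTUSED/PCTFREE behavior), not at the extent count.
print(density_then, density_now)
```

Tracking this ratio over time is what separates a genuine fragmentation problem from a harmless growth in the number of extents.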

So, if the high-water mark of the table has over time inflated to 100000 blocks (due to frequent INSERT & DELETE operations), and the current data density is 10 rows per block, and if we assume that each block can in reality hold 100 rows, then when the table is re-organized, the number of blocks below the high-water mark will drop to 10000 blocks.
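Spelling out the arithmetic in that example (all numbers taken from the paragraph above):

```python
blocks_below_hwm = 100_000   # inflated high-water mark
rows_per_block = 10          # current data density
capacity_per_block = 100     # what a block can actually hold

total_rows = blocks_below_hwm * rows_per_block         # 1,000,000 rows
blocks_after_reorg = total_rows // capacity_per_block  # back down to 10,000
print(blocks_after_reorg)  # 10000
```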

This will most certainly cause SELECTs that are performing full-table scans to run faster, as Oracle will issue "fewer" read system calls to read the data. In all cases here, I am assuming that the data is NOT in the database buffer cache. The same issue of "data density" applies to indexes as well, as fewer and denser leaf blocks will result in less logical/physical I/O on the index blocks themselves.

So, the increase in the number of extents in a segment does not by itself cause the performance problem. It "may be" a symptom or a signal of something else occurring in the segment, obviously caused by your application. I am not in any way suggesting that everyone should start having 10000+ extents for all of their objects, but on the flip side I also don't want to see people inflicting pain on themselves by re-organizing their objects whenever they grow beyond "a small number of extents". I will leave the definition of "small" to the reader.

Best regards,


Received on Fri Mar 21 2003 - 12:12:58 CST
