I have a question regarding the db_block_size parameter for large
decision support type databases (> 100GB). We have found a number
of good reasons for increasing the block size to 8K on our system,
such as the larger number of extents allowed and the greater amount of
row data that fits in each block. It would also stand to reason that the
increased block size, coupled with a large db_file_multiblock_read_count
parameter, would improve performance for the large table-scan queries
which the typical DSS user submits.
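
For what it's worth, here is the back-of-the-envelope arithmetic I'm
basing that on. The table size and the db_file_multiblock_read_count
value are just assumed figures for illustration, and it ignores any OS
limit on the size of a single physical read:

# Rough scan I/O arithmetic -- illustrative numbers only, not measurements.
# Assumes a 1 GB table full-scanned with db_file_multiblock_read_count = 32,
# and ignores any OS cap on the size of a single physical read.

TABLE_BYTES = 1 * 1024**3          # assumed table size: 1 GB
MULTIBLOCK_READ_COUNT = 32         # assumed db_file_multiblock_read_count

for block_size in (2048, 8192):    # compare a 2K and an 8K db_block_size
    blocks = TABLE_BYTES // block_size
    bytes_per_read = block_size * MULTIBLOCK_READ_COUNT
    read_calls = blocks // MULTIBLOCK_READ_COUNT
    print(f"{block_size // 1024}K blocks: {blocks} blocks in the table, "
          f"{bytes_per_read // 1024}K per multiblock read, "
          f"~{read_calls} read calls for the full scan")

Fewer, larger read calls per scan is the whole attraction, assuming the
OS and disk subsystem can actually service the bigger reads.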
One argument against the larger block size has been that the cache hit
ratio would be reduced: since the SGA would hold fewer (but larger)
blocks at any given time, there would be a lower probability of another
query finding the same data already cached. This does not make sense to
me. In a large database, where many users submit queries that process
thousands of blocks, the cache hit ratio is basically meaningless for
anything but the dictionary and look-up tables; most of the data from
the large tables will not be retained in cache anyway.
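
To put rough numbers on that: the 200MB buffer cache below is an assumed
figure, and I'm taking the 100GB database size from above. Yes, there are
fewer buffers with 8K blocks, but either way the cache covers only a
fraction of a percent of the data in the big tables:

# Rough buffer cache arithmetic -- the 200 MB cache budget is assumed.
# With a fixed cache memory budget, larger blocks mean fewer distinct
# blocks held at once, but the fraction of the database covered is the
# same either way.

CACHE_BYTES = 200 * 1024**2        # assumed buffer cache memory budget
DB_BYTES = 100 * 1024**3           # database size from the post (100 GB)

print(f"cache covers {CACHE_BYTES / DB_BYTES:.2%} of the data, "
      f"regardless of block size")

for block_size in (2048, 8192):
    buffers = CACHE_BYTES // block_size
    print(f"{block_size // 1024}K blocks: {buffers} buffers in the cache")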
At any rate, is there anyone out there with 'real' experience in tuning
db_block_size for a 100GB (or larger) database? Any comments would be
appreciated.
Thanks in advance.