Home -> Community -> Usenet -> c.d.o.server -> DB_FILE_MULTIBLOCK_READ_CNT (Problem restated)
My apologies for a poorly written original post. Thanks to those who replied, but allow me to restate the problem.
The most critical aspect of this database is I/O to a single, large (3GB) table. Most queries against this table are SELECT statements involving perhaps 25% of the rows. The goal is to increase throughput to this table. For the time being, OPS and PQO are not options, but disk striping is.
The table will be striped at the file system level (Veritas); assume it is striped across three disks with a stripe width of 64K. This implies that a 192K-byte read request from Oracle will be split by the file system into three separate 64K-byte read requests, one directed to each disk in the logical volume.
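To make the split concrete, here is a small sketch of how a striped volume might map one large sequential read onto per-disk requests. The round-robin placement and the `split_read` helper are illustrative assumptions, not Veritas's actual algorithm; the 64K stripe width and three disks are the values from the post.

```python
STRIPE_WIDTH = 64 * 1024   # bytes per stripe unit (from the post)
DISKS = 3                  # disks in the logical volume (from the post)

def split_read(offset, length, stripe=STRIPE_WIDTH, ndisks=DISKS):
    """Yield (disk, disk_offset, nbytes) chunks for one logical read,
    assuming simple round-robin stripe placement."""
    end = offset + length
    while offset < end:
        unit = offset // stripe              # which stripe unit we are in
        disk = unit % ndisks                 # round-robin disk for that unit
        within = offset % stripe             # position inside the unit
        nbytes = min(stripe - within, end - offset)
        yield disk, (unit // ndisks) * stripe + within, nbytes
        offset += nbytes

# A 192K read starting at offset 0 lands as one 64K request on each disk.
chunks = list(split_read(0, 192 * 1024))
print(chunks)
```

With these numbers the three requests go to disks 0, 1, and 2 at the same per-disk offset, which is exactly why all three spindles can work in parallel on a single Oracle read.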
Existing parameters of note are as follows:
The reason I question the "standard" configuration of these values (DB_BLOCK_SIZE * DB_FILE_MULTIBLOCK_READ_COUNT = max I/O size for the O/S) is that the file system knows the logical volume on which the table (tablespace file) resides is not a single disk, but three disks.
Is it reasonable, therefore, to ask Oracle to read three times the "max I/O size for the O/S", so as to keep those babies humming?
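A back-of-envelope version of that arithmetic, under two assumptions not stated in the post: an 8K Oracle block size, and a max O/S I/O size equal to one 64K stripe unit.

```python
DB_BLOCK_SIZE = 8 * 1024    # assumed: 8K Oracle blocks
MAX_OS_IO     = 64 * 1024   # assumed: max O/S I/O = one 64K stripe unit
DISKS         = 3           # disks in the stripe (from the post)

# "Standard" setting: block size * multiblock read count = max O/S I/O.
standard_mbrc = MAX_OS_IO // DB_BLOCK_SIZE            # 8 blocks per read

# Proposed setting: three times that, so one Oracle read spans the
# full stripe and keeps all three disks busy at once.
proposed_mbrc = (MAX_OS_IO * DISKS) // DB_BLOCK_SIZE  # 24 blocks per read

print(standard_mbrc, proposed_mbrc)  # 8 24
```

Under those assumptions, the question becomes whether DB_FILE_MULTIBLOCK_READ_COUNT = 24 (a 192K read) beats the "standard" 8 (a 64K read) on a three-disk stripe.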
The question may also relate to Oracle read-ahead processing. Since the DBMS knows that a large, contiguous portion of the table needs to be scanned, it could simultaneously post multiple nowait reads against the table, hoping to "move the queue" to the disk. If so, that would argue for leaving DB_FILE_MULTIBLOCK_READ_COUNT as it is and letting Oracle "tune the disks" itself.
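The "post multiple nowait reads" idea can be sketched on a POSIX system with threads and os.pread against a scratch file. This only illustrates issuing several stripe-unit-sized reads concurrently instead of serially; it says nothing about how Oracle actually implements read-ahead, and the scratch file merely stands in for the table's data file.

```python
import os
import tempfile
from concurrent.futures import ThreadPoolExecutor

CHUNK = 64 * 1024  # one stripe-unit-sized read (64K, from the post)

# Scratch file standing in for the table's data file.
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(os.urandom(CHUNK * 3))
    path = f.name

fd = os.open(path, os.O_RDONLY)
offsets = [0, CHUNK, 2 * CHUNK]  # one read per underlying disk's region

# Post all three reads at once rather than one after another, so each
# disk in the stripe can be serving a request at the same time.
with ThreadPoolExecutor(max_workers=len(offsets)) as pool:
    chunks = list(pool.map(lambda off: os.pread(fd, CHUNK, off), offsets))

os.close(fd)
os.unlink(path)
print([len(c) for c in chunks])
```

Whether Oracle already does the equivalent internally is exactly the open question; if it does, a large DB_FILE_MULTIBLOCK_READ_COUNT buys little over the default.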