
Re: Block Reads per second

From: Paul Drake <>
Date: Tue, 8 Feb 2005 22:26:24 -0500
Message-ID: <>


Great job on the platform information, but you're asking for physical IOs here, without supplying any info as to the storage subsystem. A good upper bound would assume a buffer cache hit ratio of zero on a full table scan.

I think the first thing you'd want to find out is: "How many blocks are you looking to access?" Once you have the number of blocks, you can approximate a rate and an overall execution time. But we don't know anything about the storage subsystem, so I don't think we can approximate a rate in blocks/second either.

Are these servers connected to a SAN?

    How many paths?
    What is the speed of the fibre channel host bus adapters (e.g. 2 Gbps)?
    How many IOPs are the controllers rated for?
    Do you have any idea of the physical configuration of the SAN?

Do you expect this query to run in memory, or do you expect it to go to disk for sorts and hash joins (temp usage), depending on sort_area_size and hash_area_size (or pga_aggregate_target)?

What is the average row length of the tables (user_tables.avg_row_len)?
How many blocks are in the tables (and their indexes, if used)?
What is the degree of parallelism for the tables (and their indexes, if used)?
Do you have a sample execution plan for the statement?
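To put rough numbers on those block counts, here's a quick Python sketch. The 100 million rows figure comes from the original question; the avg_row_len, PCTFREE, and per-row overhead values are assumptions you'd replace with your own dictionary stats (user_tables.avg_row_len, num_rows):

```python
# Rough block-count estimate from row count and average row length.
# All figures except BLOCK_SIZE are illustrative assumptions.
BLOCK_SIZE = 8192          # bytes, per the original post
PCTFREE = 0.10             # assumed default free-space reserve per block
ROW_OVERHEAD = 11          # assumed rough per-row header overhead, bytes

def estimate_blocks(num_rows, avg_row_len):
    usable = BLOCK_SIZE * (1 - PCTFREE)
    rows_per_block = max(1, int(usable // (avg_row_len + ROW_OVERHEAD)))
    return -(-num_rows // rows_per_block)   # ceiling division

# Example: 100 million rows at an assumed avg_row_len of 200 bytes
print(estimate_blocks(100_000_000, 200))    # roughly 2.9 million blocks
```

Swap in your real avg_row_len and the estimate moves accordingly; the point is just to get an order of magnitude before worrying about the I/O rate.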

Without knowing any details of the storage subsystem, I would guess 37.5 MB/sec.
It's a nice round number on the low side. A single-channel 1 Gbps FC HBA ought to be able to hit that: figure 300 IOPs x 256 KB x 50% efficiency and you get 37.5 MB/sec.
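That back-of-envelope math, spelled out (the 300 IOPs, 256 KB I/O size, and 50% efficiency figures are the guesses from the paragraph above, not measured values):

```python
# Throughput guess for a single 1 Gbps FC HBA, per the figures above.
iops = 300                  # assumed I/Os per second
io_size = 256 * 1024        # assumed 256 KB per I/O
efficiency = 0.5            # assume half of theoretical throughput

bytes_per_sec = iops * io_size * efficiency
print(bytes_per_sec / (1024 * 1024))   # -> 37.5 MB/sec
```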

Without knowing the number of blocks being accessed, I would guess that it's a long enough time that it's worth investigating. You could use pk_id + one column (and rowid) as a lower bound, and 2 rows per block as an upper bound, and I think you'd be covered.

Let's try 2 rows per block, no indexes.
Hash join aside, you'd be looking at 50 million blocks for table A and 50 million blocks for table B: 100 million blocks (upper bound). (This would likely spill out of memory and onto disk.)

100,000,000 blocks * 8192 bytes/block
-------------------------------------  =  20833 seconds
       39,321,600 bytes/sec
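The same arithmetic in a couple of lines, so you can plug in your own block count and throughput:

```python
# Execution-time estimate: blocks x block size / throughput.
blocks = 100_000_000        # upper-bound block count from above
block_size = 8192           # bytes
throughput = 39_321_600     # bytes/sec, i.e. the 37.5 MB/sec guess

seconds = blocks * block_size / throughput
print(round(seconds))       # -> 20833 seconds, about 5.8 hours
```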

Hopefully you have more than 2 rows per block, and more than one 1 Gbps FC HBA.

Most SAN type environments support a max_IO_size of 1 MB. Solaris certainly can support this size - check Connor's site for a script to check your settings.
You might want to consider hiking the db_file_multiblock_read_count to 128 for this statement.
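A quick check that 128 blocks at 8K lines up with a 1 MB max_IO_size (the 1 MB limit itself is platform-dependent, as noted above; verify on your own system):

```python
# Multiblock read sizing: db_file_multiblock_read_count x block size
# should not exceed the platform's max I/O size.
block_size = 8192
for mbrc in (32, 128):      # current setting vs. suggested setting
    read_mb = mbrc * block_size / (1024 * 1024)
    print(f"db_file_multiblock_read_count={mbrc}: {read_mb} MB per read")
```

At 32 (the current setting) each multiblock read is only 256 KB, so raising it to 128 lets a full scan issue 1 MB reads and make full use of the SAN's I/O size.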

enough for now, I'm hungry.


On Tue, 8 Feb 2005 20:51:19 -0500, Sami Seerangan <> wrote:
> Hi All,
> This may be a silly question, but I am trying to estimate the time it
> would take to execute a query which joins two tables (range
> partitioned) and both of them have more than 100 million records.
> Software/Hardware Specification:
> 9i R2, 4 CPU, 2 Node RAC, Sun Solaris, BLOCK_SIZE=8K,
> db_file_multiblock_read_count=32
> Question:
> What would be the normal count of Block Reads per second ?
> Thanks in advance,
> --
> Sami Seerangan
> Oracle DBA
> --

Received on Tue Feb 08 2005 - 22:29:09 CST
