Oracle FAQ Your Portal to the Oracle Knowledge Grid
HOME | ASK QUESTION | ADD INFO | SEARCH | E-MAIL US
 

Home -> Community -> Usenet -> c.d.o.server -> Re: DB_BLOCK_SIZE and Digital UNIX with AdvFS

Re: DB_BLOCK_SIZE and Digital UNIX with AdvFS

From: Steve Adams <steveadams_at_acslink.net.au>
Date: 1997/05/14
Message-ID: <3379e9da.944874@nntp.peg.apc.org>

Hi Robert,

This topic has been under discussion on the oracle-l listserver lately. Here is the text of two recent postings of mine that answer your questions in part.

"There is a better reason for using a block size that is at least as large as the OS block size, and a multiple of it, and that is to avoid read-modify-write behaviour needed to satisfy DBWR writes.

Say your database block size is 4K, and your datafiles are on Unix filesystems with a block size of 8K (the normal default), then whenever DBWR does a single block write (which is what it normally does) the filesystem code first has to read the filesystem block (8K) into the Unix buffer cache, then modify that block by overwriting half (4K), then write it back. The problem with that is that you need to wait for a full rotation of the disk between the read and write. This typically extends service times by about 50%. Of course, if you have async I/O for DBWR, you may not notice this, but it still costs CPU time and reduces scalability.
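To put rough numbers on the penalty described above, here is a back-of-envelope model of a single DBWR write with and without the extra read-modify-write rotation. All of the timing figures (seek time, rotation speed, transfer time) are hypothetical assumptions chosen for illustration, not measurements of any particular disk:

```python
# Back-of-envelope model of the read-modify-write penalty when the
# database block (4K) is smaller than the filesystem block (8K).
# All timing figures below are illustrative assumptions.

AVG_SEEK_MS = 8.0                   # assumed average seek time
RPM = 7200                          # assumed rotation speed
ROTATION_MS = 60_000 / RPM          # one full rotation, ~8.33 ms
AVG_LATENCY_MS = ROTATION_MS / 2    # average rotational latency
TRANSFER_MS = 0.5                   # assumed per-block transfer time

def write_service_time(read_modify_write: bool) -> float:
    """Service time in ms for one DBWR single-block write."""
    t = AVG_SEEK_MS + AVG_LATENCY_MS + TRANSFER_MS
    if read_modify_write:
        # The filesystem must first read the 8K block, and the disk
        # then makes a full rotation before the modified block can be
        # written back to the same location.
        t += ROTATION_MS
    return t

plain = write_service_time(False)
rmw = write_service_time(True)
print(f"plain write:       {plain:.1f} ms")
print(f"read-modify-write: {rmw:.1f} ms ({(rmw / plain - 1) * 100:.0f}% longer)")
```

With these assumed figures the extra rotation adds on the order of half again to the service time, which is the effect described above; the exact percentage depends on the seek and rotation characteristics of the drive.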

If your datafiles are on raw partitions, and you are using a volume manager product to manage them, then even though there is no filesystem code layer, and no Unix buffering involved, the volume manager code (pseudo-driver) may well do the same to your writes. I happen to know that HP's LVM uses an 8K page size, to match their virtual memory page cluster size. It is most likely that other vendors use either 4K or 8K, but unless you do their internals courses it is jolly hard to find out! If any of you happen to be aware of this tidbit of information for anything other than HP-UX, please let me know."

AND "I'd like to take issue with your statement that, with a larger database block size one will probably need the same number of buffers. In my opinion this depends on the application.

When the application performs index range scans or full table scans (or fast full index scans in 7.3.3) the buffer space needed is independent of the block size, and commonly there is a significant I/O saving with larger block sizes. The same applies to rollback segment data blocks, if the undo generated per transaction is non-trivial.
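The scan case is easy to see with a quick calculation: the same number of megabytes must be read and buffered regardless of block size, but the count of block reads shrinks as the block grows. The 100 MB table size here is just an illustrative figure, and multiblock read batching is ignored for simplicity:

```python
# Block reads needed to scan a table of fixed size at different
# block sizes. The buffer space consumed is the same 100 MB in every
# case; only the number of reads changes. (Illustrative figures;
# multiblock read batching is ignored.)

TABLE_MB = 100

reads = {bs_kb: TABLE_MB * 1024 // bs_kb for bs_kb in (2, 4, 8)}

for bs_kb, blocks in reads.items():
    print(f"{bs_kb}K blocks: {blocks} block reads for the same {TABLE_MB} MB")
```

Doubling the block size halves the number of reads for the same buffer footprint, which is where the I/O saving on scans comes from.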

All active rollback segment header blocks are normally in the cache, and so we DO need the same number of blocks for these. However, the increased size of these blocks is in fact a good thing, because it allows more transaction slots, which reduces the work required for some consistent reads, and the risk of some ORA-1555 (snapshot too old) errors.

For index branch blocks, the greater size is also an advantage because, in general, the depth of the B-tree may be reduced, thus saving I/Os on indexed lookups.
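A rough estimate shows how B-tree depth falls as fan-out grows. The assumption here that a 2K block holds about 100 index entries, with fan-out scaling linearly with block size, is purely illustrative; real fan-out depends on key length and block overhead:

```python
import math

# Rough B-tree depth estimate for an index of num_keys entries.
# Assumption (illustrative only): a 2K block holds ~100 entries,
# and fan-out scales linearly with block size.

ENTRIES_PER_2K = 100

def btree_depth(num_keys: int, block_size_kb: int) -> int:
    """Approximate number of levels in the B-tree."""
    fanout = ENTRIES_PER_2K * block_size_kb // 2
    return max(1, math.ceil(math.log(num_keys, fanout)))

for bs in (2, 4, 8):
    print(f"{bs}K blocks: depth {btree_depth(10_000_000, bs)} for 10M keys")
```

Under these assumptions a 10-million-key index needs one fewer level at 8K than at 2K, and each level saved is one less block visit per indexed lookup.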

So the only cases in which one wastes memory by using a larger block size are segment header blocks, some data dictionary blocks, index leaf blocks read for unique scans, and table accesses by rowid. Even in the most classical OLTP instances, this would only represent about 75% of the cached blocks, but in many cases it is actually less than 50%.

A fairer statement then is that for a larger block size, if you want the same cache hit ratio, then you will need a larger database buffer cache, but the increase is far from directly proportional."

HTH, Steve Adams

On Mon, 12 May 1997 18:21:56 -0500, Robert Lowe <Robert.H.Lowe_at_lawrence.no.spam.edu> wrote:
>Hi!
>
>I'm not a DBA, so go easy on me! ;-) One of the tuning tips I've run
>across indicates that it is best to make DB_BLOCK_SIZE at least as large
>as the file system buffer size, if not a larger multiple of it. It's my
>understanding that AdvFS uses 8 KB "pages" which would seem to indicate
>that I should use at least an 8 KB DB_BLOCK_SIZE. Is this reasonable
>for a small-medium sized dB, and what might be the ramifications of
>using a block size this large? I understand that I'll have to
>correspondingly reduce DB_BLOCK_BUFFERS to reduce the size of the SGA to
>avoid memory problems. How much inefficiency is introduced by using a
>smaller DB_BLOCK_SIZE? Or can one do this at all?
>
>I realize I'm not providing a lot of detail (the application is OGF, if
>that helps a bit), so I don't expect a fully detailed response, but if
>there are others of you using AdvFS, I'd be interested in hearing how
>you approached this.
>
>TIA,
>Robert
Received on Wed May 14 1997 - 00:00:00 CDT

