Re: DB_BLOCK_SIZE set to the highest value

From: Nuno Souto <nsouto_at_nsw.bigpond.net.au.nospam>
Date: Fri, 25 May 2001 15:19:58 GMT
Message-ID: <3b0e6eef.19799819@news-server>

On Fri, 25 May 2001 11:13:33 +1000, "Howard J. Rogers" <howardjr_at_www.com> wrote:

>
>http://www.ixora.com.au/tips/creation/block_size.htm

The only one of Steve's points I've had a bad experience with is this:

" The size of the transaction tables in the rollback segment header blocks is increased. This reduces the risk of having to roll back to the transaction tables themselves during consistent read operations on other blocks. This in turn reduces the risk of snapshot too old errors due to an inability to roll back the transaction tables. "

and this:    

" Most transactions use fewer rollback segment data blocks. This means that they require fewer gets on the rollback segment header block, and thus the risk of contention for the header blocks is reduced.
"

(Steve, I hope you don't mind me quoting from your site? Full acknowledgement here as to where it's coming from.)

I'm sure that was down to the particular characteristics of the application in question rather than the advice being wrong. Bear in mind his points are somewhat general, which means they may not apply to EVERY situation. In my case, a 16K block size *consistently* generated snapshot too old errors, to the point where half the batch programs were affected by them. And I needed nearly 8 Gb of rollback segment space for a total database table size of 15 Gb, with the largest table at 300 Mb!

Changing to 8K fixed the whole thing like magic. I haven't had a single snapshot too old in months, and the rollback segments now take a mere 3 Gb.

>(Now, don the flameproof overalls, and retire to a safe distance).
>

Not at all. I'm a great believer in larger block sizes. Long before everyone started increasing them from the Unix default of 2K, I was happily winning benchmarks on V6 with db_block_size as high as 8K! First tried it on "hydra". (Was that system still around in your time at Oracle Australia? I named it) :-)

I must admit I have an anal thing against using 16K on NT. I reckon the most important thing on NT is to check at what size the fastest memory block transfer occurs and use that as the db block size.

This has to do with the way memory architecture works in the Intel "environment". There are memory benchmark programs available that test exactly this.

I've been able to verify (on my own systems and a few others) that on older Pentium IIs, a memory block transfer of 4K happens at nearly twice the speed of any other size. In that class of system, a 4K db block size (for NT!) consistently gets the best performance, particularly if I can match it with a disk sector size of 4K.
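For what it's worth, the shape of such a memory benchmark is simple enough to sketch. This is my own illustration, not one of the actual benchmark programs mentioned above; in a scripting language the timings mostly reflect interpreter overhead, so a serious test would time raw memcpy in something closer to the metal:

```python
# Minimal sketch of a memory-block-transfer benchmark: copy the same
# total number of bytes at each candidate block size and report the
# size that achieved the highest throughput. Illustrative only.
import time

def fastest_block_size(sizes=(2048, 4096, 8192, 16384),
                       total=16 * 1024 * 1024):
    src = bytes(max(sizes))
    best_size, best_rate = None, 0.0
    for size in sizes:
        reps = total // size
        start = time.perf_counter()
        for _ in range(reps):
            dst = src[:size]          # one block-sized memory copy
        elapsed = time.perf_counter() - start
        rate = total / elapsed        # bytes per second at this size
        if rate > best_rate:
            best_size, best_rate = size, rate
    return best_size
```

The idea is the same as with the real benchmark tools: find the size your particular hardware moves fastest, and let that inform the block size choice.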

Of course, as with anything else: YMMV, IMHO, no animals were hurt testing these theories, etc., etc.

Anybody blindly following rules of thumb without first thinking about whether they apply to their case deserves to get burned when the proverbial hits the fan. As Steve himself so often recommends, testing and measuring is everything.

Cheers
Nuno Souto
nsouto_at_bigpond.net.au.nospam
http://www.users.bigpond.net.au/the_Den/index.html

Received on Fri May 25 2001 - 10:19:58 CDT
