
Re: db block size, too big wasting buffer?

From: sg <s4v4g3_at_europe.com>
Date: Mon, 24 Jun 2002 23:12:55 +0200
Message-ID: <3D178B57.3010408@europe.com>


Hi

I read this from

http://www.dbasupport.com/forums/showthread.php?threadid=23841&pagenumber=2

There was an argument between some people; some said big is good and some said it is not. One said this:



Posted by Sweetie

In OLTP a small number of rows is fetched. If, let's say, a query has to return 10 rows from 10 different blocks, then instead of 80K Oracle has to read 320K into memory.

Has anyone benchmarked this? Or is this just theory you peeps are talking about?

This is not correct.
Take an example

8K BLOCK SIZE:
A table has 200 rows. Its block size is 8K. Assume that 20 rows can fit into one block, so the total number of blocks needed is 10. The total memory needed to read the data is 10 * 8K = 80K.

32K BLOCK SIZE:
With a 32K block size, the same table needs only 3 blocks, because each block stores 80 rows.
The total memory needed to read the data is 3 * 32K = 96K.
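
To put numbers on both cases, here is a quick back-of-the-envelope sketch in Python (an illustration only, assuming a fixed row size so that exactly 20 rows fit in an 8K block, and ignoring block overhead and PCTFREE):

# Buffer memory needed to read the whole 200-row table at two block sizes.
# Assumption: 20 rows per 8K block, scaling linearly with block size.
ROWS = 200
ROWS_PER_8K_BLOCK = 20

for block_kb in (8, 32):
    rows_per_block = ROWS_PER_8K_BLOCK * block_kb // 8
    blocks_needed = -(-ROWS // rows_per_block)   # ceiling division
    buffer_kb = blocks_needed * block_kb
    print(f"{block_kb}K blocks: {blocks_needed} blocks, {buffer_kb}K of buffer cache")

This reproduces the 80K vs. 96K figures above.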

I agree there is a small wastage of memory if the block size is set to 32K, but this wastage can be offset by the real benefits reaped in disk space management. Also, the Oracle SGA is designed to keep blocks in the buffer cache as long as possible, until they age out based on the LRU algorithm.

One can argue: what if the 10 rows come from 10 different blocks of 32K size? Then there is more memory wastage than there would be with an 8K block size.

It is very difficult to answer that type of question.
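
The raw arithmetic for that worst case is simple enough to write down, though; whether it matters depends on how often the fetched rows really do land in separate blocks. A sketch:

# Worst case for a 10-row OLTP fetch: every row lives in a different block,
# so the buffer cache has to hold one whole block per row.
ROWS_FETCHED = 10
for block_kb in (8, 32):
    print(f"{block_kb}K blocks: {ROWS_FETCHED * block_kb}K read into the buffer cache")

# 8K blocks  ->  80K
# 32K blocks -> 320K (the 320K figure from the post quoted at the top)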

In my experience, I have not seen any true OLTP system in the world. Even if you call it OLTP, many real-time reports run against the OLTP system itself, and OLTP performance takes a dive when those reports are running. So it is always a hybrid system.

That is why I recommend 32K block size.


The other said:



8K BLOCK SIZE:
A table has 200 rows. Its block size is 8K. Assume that 20 rows can fit into one block, so the total number of blocks needed is 10. The total memory needed to read the data is 10 * 8K = 80K.

That would be a perfect world if you assume your query is going to fetch rows from a single block, which is almost never true; so this is not an argument, it is just a fact. Plus, in OLTP, since almost all reads are indexed, we are reading even more blocks into memory!
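
As a rough illustration of that point, consider a single-row lookup through a B-tree index. The numbers below are purely illustrative (an assumed index height of 3, one table block per row, no caching), not measured from any real system:

# Blocks pulled into the cache by one indexed single-row fetch.
INDEX_BLOCKS = 3   # assumed: root + branch + leaf
TABLE_BLOCKS = 1   # one table block for the row itself
for block_kb in (8, 32):
    blocks = INDEX_BLOCKS + TABLE_BLOCKS
    print(f"{block_kb}K blocks: {blocks} blocks touched, {blocks * block_kb}K in the cache for one row")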

It is correct that many OLTP systems are hybrid, but that does not justify saying a big block size is good; big enough is enough.

Plus, even if we make the block big and say that it minimizes the I/O, we are only telling ourselves that I/O is minimized at the Oracle level. Is this true at the OS level? Are we minimizing at the OS level? If not, what is the use of minimizing at the Oracle level? At the end of the day Oracle runs on an OS and is limited by the OS's limitations.

Assuming most UNIX/NT I/O is done in 64K units, I do agree that at some point I/O is minimized, because we make Oracle do less work by reading just 2 blocks with a 32K block size instead of 8 blocks with an 8K block size. But can anyone tell me what the difference is between the system calls for 2 Oracle data block reads and for 8 Oracle data block reads? I would try it myself, but I don't have any 32K block size databases. Since the moderators are recommending this size, they must have vast experience with 32K block size databases, so perhaps they can give it a shot and post some benchmark results?

I think we should run strace at the OS level to benchmark the system calls with different block sizes, and use sar to watch the system I/O. Don't you think so?
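
As a crude stand-in for that experiment, something like the following could be run under strace -c. It is only a sketch of plain file reads from Python, not of what Oracle does internally, but it shows the system-call difference between 8K and 32K reads of the same data:

# Read the same 64K of data with 8K reads vs. 32K reads and count the calls.
import os, tempfile

data = os.urandom(64 * 1024)
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(data)
    path = f.name

for chunk_kb in (8, 32):
    fd = os.open(path, os.O_RDONLY)
    calls = 0
    offset = 0
    while offset < len(data):
        os.pread(fd, chunk_kb * 1024, offset)
        calls += 1
        offset += chunk_kb * 1024
    os.close(fd)
    print(f"{chunk_kb}K reads: {calls} pread() calls for 64K")

os.unlink(path)

Watching sar -d while a real workload runs would show the same thing at the disk level.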


So who's right?

Sybrand Bakker wrote:

> "sg" <s4v4g3_at_europe.com> wrote in message
> news:3D175E5F.6090202_at_europe.com...
>

>>Hi
>>
>>I am about to create an OLTP database with a 16KB block size in 8.1.7. I have
>>always read not to make the db block size too big or we would be wasting
>>data buffer. Is this true?
>>Some say that db block size is another Oracle myth and that it does not
>>really matter; is this a correct statement?
>>
>>

>
> Fact: in an OLTP system you mostly read individual records.
> Fact: Oracle will always read a complete block.
>
> Question: do you think a 16k block size will fit your OLTP scenario?
>
> Also: please define 'some'? Hearsay? Knowledge?
>
> Regards
>
>
> --
> Sybrand Bakker
> Senior Oracle DBA
>
> to reply remove '-verwijderdit' from my e-mail address
>
>
>
>
Received on Mon Jun 24 2002 - 16:12:55 CDT
