Re: Oracle block size - OS block size
Maybe 9i will help solve this dilemma, since from what I read you can
have more than one block size in your database. I haven't seen the
details, but if it is anything like the way Sybase implements this, you
can create cache pools in memory with different I/O sizes and then
assign specific segments/tables to them. The problem you may run into
is that you are robbing Peter to pay Paul, because you have to dedicate
some of your precious memory to a specific I/O size or two.
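From what I've seen of the 9i material, the idea is that you size
separate buffer caches for the non-default block sizes and then create
tablespaces that use them. A rough sketch of what I mean (untested 9i
syntax as I understand it; the names, sizes and file paths below are
made up purely for illustration):

    # init.ora: default cache plus a separate pool for 16K blocks
    db_block_size     = 8192
    db_cache_size     = 256M
    db_16k_cache_size = 64M    # must exist before any 16K tablespace

    -- a tablespace with a non-default block size uses that pool
    CREATE TABLESPACE big_scan_ts
      DATAFILE '/u01/oradata/big_scan01.dbf' SIZE 500M
      BLOCKSIZE 16K;

    -- put the segments you want read with 16K I/O in it
    ALTER TABLE stats_history MOVE TABLESPACE big_scan_ts;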
I've tried this in Sybase, and you really need to know your application
inside and out, and ideally have lots of memory, to get much of a
performance boost.
If allocation is dynamic it works better: for example, at night when
that big statistics report runs, you can grab 3/4 of your memory and
dedicate it to 16K I/O for faster table scans. When you are done, you
put it all back to 2K or 8K for normal daily processing. This is a neat
feature they are adding to Oracle 9i, but I personally don't have any
systems as clear cut as my example above, and it became a real headache
to manage when I tried it.
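If you wanted to try my night/day example in 9i, my understanding is
that those nK caches can be resized on the fly, so it would look
something like this (sizes are purely illustrative, and the total has
to stay under sga_max_size):

    -- before the overnight report: shrink the default cache,
    -- grow the 16K pool
    ALTER SYSTEM SET db_cache_size     = 64M;
    ALTER SYSTEM SET db_16k_cache_size = 448M;

    -- ... run the big statistics report against the 16K tablespace ...

    -- then back to normal daytime settings
    ALTER SYSTEM SET db_16k_cache_size = 64M;
    ALTER SYSTEM SET db_cache_size     = 448M;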
"Howard J. Rogers" <howardjr_at_www.com> wrote in message news:<3b2e6ca5_at_news.iprimus.com.au>...
> "Bass Chorng" <bchorng_at_yahoo.com> wrote in message
> news:bd9a9a76.0106181141.7c33d830_at_posting.google.com...
> > db_block_size should be determined based on your application type.
>
> No it shouldn't, though Oracle in its wisdom has been pushing that line for
> several versions and many years.
>
> > If you are 100% OLTP you should use 2K. If you are purely data warehouse
> > maybe you should use 32K. If you are in between, locate where you are
> > in this scale and determine db_block_size appropriately.
> >
>
> Why do you think OLTP=small, DSS=big applies?
>
> The standard reply goes that big blocks will contain lots of rows, so in an
> OLTP environment with lots of concurrent transactions, you are likely to get
> contention for blocks. In DSS, you tend to have one or two people doing
> huge reports requiring full table scans, so if there are lots of rows in a
> block, you don't need to read so many blocks to complete the scan. True
> enough, as far as it goes. But to suggest you cure block contention by
> reducing your block size is rather like suggesting you cure your headache by
> chopping off your head. It will work, but it is somewhat inelegant, rather
> drastic, and introduces one or two problems of its own. There are better
> techniques for dealing with the problem of the headache, just as there are
> for dealing with possible block contention issues.
>
> > It is not that bigger is better. I have databases running on 2K
> > and I also have databases running on 16K.
>
> Of course it's not 'bigger=better', otherwise we'd all be suggesting 32K
> blocks. Block size is entirely and absolutely determined by what your file
> system can cope with (again, I recommend Steve Adams' article 'Why a large
> database block size' at ixora.com.au, and in particular the first paragraph
> of that article in which he explains why most Unixes should be using 8K, and
> NT can go 16K).
>
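Worth remembering when weighing this up: db_block_size is set in the
init.ora before you run CREATE DATABASE and is fixed for the life of
the database, so it is not something you can tune later. For example:

    # init.ora, read at database creation and fixed afterwards
    db_block_size = 8192       # the 8K most Unix file systems want
    # db_block_size = 16384    # what the 16K-on-NT suggestion looks like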
> >
> > You will get more contention if the blocks are too large. Typical
> > waits are buffer busy waits and write complete waits. The theory is
> > simple: more transactions are after the same block because it contains
> > more rows.
> >
> > With larger blocks, you also waste more memory and more rollback
> > segment space, etc., since everything is stored in units of blocks in
> > the buffer cache and RBS.
> >
>
> The whole idea of a 'cache' is that stuff stays floating around in memory on
> the off chance that someone else will want it. Big blocks mean more stuff
> in memory. Which means more chance of it being useful to someone else. And
> if it proves to be not useful, it gets aged out. In any event, you can't
> have it both ways: either big blocks are hugely popular, and thus risk
> contention (in which case, the presence of that block in the cache cannot be
> considered a waste, since everyone is trying to bundle into it)... or they
> are deeply unpopular, in which case 8K of memory is being wasted housing it
> in the cache... but, since no-one wants that block, there won't be
> contention to get in it.
>
>
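On the contention question, you don't have to guess -- the wait
statistics will tell you whether buffer busy waits are actually
hurting. Something along these lines against the standard v$ views
(interpretation is left to you):

    -- how often and how long sessions have waited on block contention
    SELECT event, total_waits, time_waited
      FROM v$system_event
     WHERE event IN ('buffer busy waits', 'write complete waits');

    -- which class of block the waits are on (data block, segment
    -- header, undo header, ...) usually points at the real fix
    SELECT class, count, time
      FROM v$waitstat
     ORDER BY count DESC;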
> > You get major advantages in full table scans if you use large blocks.
> > Other advantages are reduced overhead (as each block has a header and
> > footer) and reduced B-tree level.
> >
>
> All straight out of the official Oracle handbook, true enough. And all
> rather dodgy advice. I guess that if you never do a full table scan, and
> never use indexes (and that also means that you never use primary and unique
> key constraints) it makes a sort of sense.... but it doesn't sound like any
> application I know of.
>
> > Some of the disadvantages can be offset by other parameters like
> > db_file_multiblock_read_count. As long as you do not exceed your
> > OS max physical read size, you can still make your full table scan
> > with 2K efficient.
> >
>
> Don't forget INITRANS and MAXTRANS.
>
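To put numbers on the db_file_multiblock_read_count point: the I/O size
of a multiblock read is roughly db_block_size *
db_file_multiblock_read_count, capped by the OS maximum I/O size.
Assuming for the sake of argument a 64K cap (it varies by platform),
either combination below gives the same full-scan read size:

    # alternative 1: small blocks, high multiblock count (2048 * 32 = 64K)
    db_block_size                 = 2048
    db_file_multiblock_read_count = 32

    # alternative 2: 8K blocks, lower multiblock count (8192 * 8 = 64K)
    db_block_size                 = 8192
    db_file_multiblock_read_count = 8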
> > I think most people choose 8K feeling it is safe. But this does not
> > mean 2K is unrealistic.
> >
>
> No, it's not unrealistic. It's just that from the word go, you will be
> fighting your file system. 8K is not 'safe' for Unix with a file system...
> it's 'correct'.
>
> Regards
> HJR
>
>
>
>
> > -Bass Chorng
Received on Thu Jun 21 2001 - 12:55:22 CDT