
Re: formulas for correct sizing of various parameters

From: Howard J. Rogers <howardjr_at_www.com>
Date: Sun, 2 Sep 2001 08:46:33 +1000
Message-ID: <3b916436@news.iprimus.com.au>


Hi Pete,

I'll back off sufficiently to say that, in extremis, block size might be worth looking at. The trouble I have is that the 'it all depends on your application' myth is doing the rounds, and beginners pick it up as if it were gospel. It doesn't help that Oracle documentation itself still touts it as the 'correct' answer -and you'll lose points on OCP unless you spout it as The Truth, too.

So, I'll agree to say that when everything else has been tried, it might be worth a look, but it certainly shouldn't be your first port of call.

Regarding your specific discussion of your OLTP system: what you are basically saying is that large blocks caused your buffer hit ratio to plummet, and that rebuilding with 2K allowed it to rise... that can only happen if there aren't enough buffers (ie, you are stinting on memory). Now, budgets not being infinite, it is certainly true that you might not be *able* to allocate sufficient memory to the buffer cache -but that's the real problem here, not the block size.
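
For what it's worth, that is easy to check before blaming the block size. A rough sketch of the classic hit-ratio query against V$SYSSTAT (the statistic names are standard; any threshold you apply to the result is only a rule of thumb, not a target):

  -- Buffer cache hit ratio: 1 - physical reads / logical reads
  SELECT 1 - (phy.value / (cur.value + con.value)) AS buffer_cache_hit_ratio
    FROM v$sysstat cur, v$sysstat con, v$sysstat phy
   WHERE cur.name = 'db block gets'
     AND con.name = 'consistent gets'
     AND phy.name = 'physical reads';

If that figure only recovers when you drop to 2K blocks, the buffer cache was almost certainly undersized in the first place.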

As for your second example, I can only note that you changed a lot of stuff, not just the block size, and therefore it's difficult to work out precisely what the effect of the block size change, in isolation, would have been. You mentioned 'placing the datafiles better on disk', for example -I would have expected a lousy datafile layout to contribute far more to awful batch performance than block size.
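
Similarly, the 'multiblock read count to match the OS read width' change in your second example is easy to quantify on its own. A minimal init.ora sketch, with assumed, illustrative figures rather than recommendations (a 16K block size and a 1MB maximum OS I/O size):

  # Illustrative only - both figures are assumptions for the example
  db_block_size                 = 16384    # 16K blocks
  db_file_multiblock_read_count = 64       # 64 x 16K = 1MB per multiblock read

With those settings each full-scan read request is sized to fill one maximum-sized OS I/O, which is the effect you describe.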

Regards
HJR "Pete Finnigan" <pete_at_peterfinnigan.demon.co.uk> wrote in message news:DwT1EgALKPk7EwHa_at_peterfinnigan.demon.co.uk...
> Hi
>
> But if you have an OLTP system where records are very small, there are
> tens or hundreds of millions of them, and the queries are generally
> well distributed - say a utility supplier's database or a bank, where
> hundreds of millions of records are spread across the name / address
> range - then why read 8K to get at a few bytes of index data? OK, you
> can read a lot of index records in one go, but unless your block
> buffers hold most of the database, after a while you will continually
> replace blocks in the block buffers in the SGA as they age out of the
> LRU, because the spread of data being queried is large and the data
> needed is not in RAM. I have experience of this: rebuilding a large
> OLTP database and changing the block size from 8K to 2K made a
> difference in performance, as we had fewer disk accesses once a larger
> spread of data resided in the SGA, strange as it may seem.
>
> I also have experience of a 3 terabyte database that was essentially
> batch processing only, ie tens of millions of records were loaded each
> night and few users queried it during the day. The batch performance
> was appalling before we started. We rebuilt the database, changed the
> block size from 8K to 16K, placed the datafiles better on the disks,
> and changed the multiblock read count to match the OS read width, ie
> we read as much as we could in one access. The batch run went from
> 9-18 hours (erratic) down to a consistent 2.5 hours. We ran trace on
> batches of records before and after the changes, and once a reasonable
> number of batches had been processed the number of reads had dropped a
> lot compared to before the change. Admittedly we did quite a lot of
> other tuning work as well. This was down to the sequential nature of
> the work and to reading as much as possible into the SGA in one read.
>
> These are two examples of why the block size is affected not just by
> the OS but also by the application. I have read Steve's book a number
> of times and looked at his paper on this subject on his site, and in
> "general" I agree with what you and he say: if the database is not
> especially large and that extra bit of performance is not critical,
> then as a rule the database block size can be based solely on the OS
> block size.
>
> BUT if it is a critical database, or a large database with performance
> issues, then I have to disagree with you: you have to consider what
> the applications do, where the data is located on disk, and,
> importantly, the spread of data on disk and how it is queried during
> normal use. If everyone generally queries a small set of data out of a
> large set, then it will quite quickly come to reside in memory and the
> block size doesn't matter; but if not, it does.
>
> I am not talking here about machines and databases where resources are
> not available, or where you shouldn't be considering Oracle: in the
> second example above the block buffers were set to 2 gig and the SGA
> to 500M, the whole of the SGA and block buffers was locked in core,
> and we had 3 terabytes of disk attached to a 6-CPU E4500 - not cheap!
>
> Anyway, sorry for going on. It's just my opinion that you have to take
> into account what the thing does when sizing, in those cases where the
> size and performance are critical; for a case where it's not so
> important, I agree with you and Steve.
>
> thanks for listening
>
> Pete
>
>
> In article <3b902213_at_news.iprimus.com.au>, Howard J. Rogers
> <howardjr_at_www.com> writes
> >
> >"Pete Finnigan" <pete_at_peterfinnigan.demon.co.uk> wrote in message
> >news:UrsAyTApR0j7Ewt3_at_peterfinnigan.demon.co.uk...
> >> Hi
> >>
> >> The block size you choose should be a multiple of the OS block size,
> >> otherwise reads and writes of Oracle blocks could / would cross OS
> >> block boundaries. The block size you choose also depends on the type
> >> of database and applications you have, ie OLTP or batch. In batch
> >> processing, for instance, where you have large amounts of data to
> >> process, a larger block size is better, along with a large
> >> db_block_buffers, as the data you need will tend to be in the SGA
> >> more, reducing disk reads. For OLTP, where random reads occur, a
> >> smaller block size is better, to reduce the read times and the
> >> amount of data stored in the SGA that is not likely to be needed; ie
> >> users could read data from anywhere in OLTP, whereas in batch
> >> processing the data is likely to be read in some sort of order and
> >> processed in some sort of order. Just picking a general size such as
> >> 8K, as Steve Adams suggests, might work on most occasions, but in
> >> larger systems choosing the correct block size can make quite a
> >> difference in performance.
> >>
> >
> >Wrong, wrong, wrong! Steve Adams doesn't suggest a 'general' size. He
> >suggests a specific size, and explains why it's the right one: it's to
> >do with an interaction with your file system. Anything other than
> >that, and you induce performance degradation.
> >
> >Of course, that's an ideal -and for sure, if you are so hard up for
> >resources that you can't allocate sufficient buffers to the Buffer
> >Cache, then the business of reading 8K into memory when you only want
> >to access 458 bytes of it is certainly a consideration.
> >
> >But as Steve also points out, a larger block size makes for huge
> >performance improvements on index reads -and what do OLTP systems use
> >to retrieve the small bits of data they want to access? Yup -indexes.
> >
> >Block size is file-system dependent, not application-dependent, unless
> >you are short on resources (in which case you probably shouldn't be
> >using Oracle in the first place).
> >
> >Regards
> >HJR
> >
> >
> >
> >
> >
> >
> >> In view of the above, I cannot easily see how one formula can be used.
> >>
> >> just my two penneth
> >>
> >> cheers
> >>
> >> Pete Finnigan
> >> www.pentest-limited.com
> >>
> >> In article <3B8D37BC.5B72B7FB_at_mitre.org>, David Wilburn
> >> <dwilburn_at_mitre.org> writes
> >> >I've got plenty of books and documents that all say that specifying
> >> >correct values for initial/next/max extents, db_block_size, etc., are
> >> >all critically important. However, I've yet to find any formulas
> >> >specifying how to come up with the correct values.
> >> >
> >> >I've read that (at least on Solaris) the block size should be some
> >> >multiple of the output of the command 'pagesize' (in this case, 8KB),
> >> >but I'm not sure whether it should be 8KB, 16KB, etc. Which should it
> >> >be?
> >> >
> >> >How do I calculate appropriate values for initial/next/max extents?
> >> >
> >> >We're getting a new system in shortly, and I'll have the opportunity
> >> >to rebuild our DB, so I'd like to get things right the first time.
> >> >
> >> >-Dave Wilburn
> >>
> >> --
> >> Pete Finnigan
> >> IT Security Consultant
> >> PenTest Limited
> >>
> >> Office 01565 830 990
> >> Fax 01565 830 889
> >> Mobile 07974 087 885
> >>
> >> pete.finnigan_at_pentest-limited.com
> >>
> >> www.pentest-limited.com
> >
> >
>
> --
> Pete Finnigan
> IT Security Consultant
> PenTest Limited
>
> Office 01565 830 990
> Fax 01565 830 889
> Mobile 07974 087 885
>
> pete.finnigan_at_pentest-limited.com
>
> www.pentest-limited.com
Received on Sat Sep 01 2001 - 17:46:33 CDT
