Oracle FAQ Your Portal to the Oracle Knowledge Grid
HOME | ASK QUESTION | ADD INFO | SEARCH | E-MAIL US
 


Re: Myth revisited ...

From: Holger Baer <holger.baer_at_science-computing.de>
Date: Tue, 18 Nov 2003 07:35:08 +0100
Message-ID: <bpceis$rn$1@news.BelWue.DE>


Jonathan Lewis wrote:
> Notes in-line.
>
>
> --
> Regards
>
> Jonathan Lewis
> http://www.jlcomp.demon.co.uk
>
> The educated person is not the person
> who can answer the questions, but the
> person who can question the answers -- T. Schick Jr
>
>
> One-day tutorials:
> http://www.jlcomp.demon.co.uk/tutorial.html
> ____Belgium__November (EOUG event - "Troubleshooting")
> ____UK_______December (UKOUG conference - "CBO")
>
>
> Three-day seminar:
> see http://www.jlcomp.demon.co.uk/seminar.html
> ____UK___November
>
>
> The Co-operative Oracle Users' FAQ
> http://www.jlcomp.demon.co.uk/faq/ind_faq.html
>
>
> "Holger Baer" <holger.baer_at_science-computing.de> wrote in message
> news:bpa1e8$2cp$1_at_news.BelWue.DE...
>

>>Well, if it means anything to you, Don Burleson promotes it:
>>
>>http://www.oracle-training.cc/oracle_tips_block_sizes.htm
>>
>>(I believe I once read a paper on dbazine.com, also by Don, you might
>>want to search there, too).
>
> You mean if he says it twice it must be true ?

Surely not. I mentioned dbazine.com because I seem to remember that article being a bit more elaborate; however, neither of them made much sense to me. Perhaps I should have added more of my personal opinion in my first post.

My impression was that if a large enough part of my database would profit from a larger block size, then the whole database should simply use that size. No need to fiddle with different sizes.
>>Anyway, even Oracle makes use of multiple blocksizes in TPC benchmarks,
>>exactly to eliminate the need for an overflow segment as Nuno suggested
>>in his post.
>
> There are always special cases where a little extra edge can
> be found by doing something that is irrelevant to the rest of
> the world. I suspect TPC benchmarks fall into the category
> of 'tune it properly, then see if we can squeeze a little more'.

Spot on. That's what I meant: in very, very special cases there might be a measurable effect, but normally we're dealing with real life.

>>I once asked Tom Kyte on asktom about this, however his answer never
>>got published, so I can't point you there. But he didn't see much point
>>in using multiple blocksizes.
>
> I had a quick look at the article - Don Burleson suggests that he
> has six GENERAL rules: I think you should read this to mean
> 'here's a point to consider if you think your requirements match',
> not 'this is a good idea, get the blinkers on'.
>
> In particular:
> 1) Why should large tables that experience tablescans go
> into the largest possible block size ? If I am doing such
> regular tablescans that it makes a difference, then (a) should
> I be tuning the code to reduce tablescans before I fiddle with
> block sizes, or (b) should I be using parallel query - which
> bypasses the buffer anyway and (c) how much single block
> I/O should I be doing before the suggestion should be ignored ?
>
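As an aside, point 1(b) can be sketched concretely: a parallel tablescan uses direct path reads and bypasses the buffer cache entirely, so the block size of the buffer pool doesn't come into play. A hypothetical example (the table name and parallel degree are invented for illustration):

```sql
-- Hypothetical sketch: a parallel tablescan uses direct path reads,
-- bypassing the buffer cache, so buffer-pool block size is irrelevant.
-- Table name and degree of parallelism are invented for illustration.
SELECT /*+ FULL(t) PARALLEL(t, 4) */ COUNT(*)
FROM   big_table t;
```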
> 2) Seems to be saying that any table that gets hit with
> large tablescans regularly should be in the recycle pool -
> but this means EVERY i/o to those tables is lost fairly
> rapidly, and you might want to keep some of them.
> I'll agree with the general principle, though, that if an
> object is damaging the general effectiveness of your
> cache then you should think about putting it into its
> own section of cache until you can fix the problem
> properly.
>
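The "own section of cache" idea above maps onto the RECYCLE buffer pool. A hypothetical sketch, assuming a scan-heavy table you want to keep from flushing the DEFAULT pool (the parameter value and table name are invented):

```sql
-- Hypothetical sketch: isolating a scan-heavy table in the RECYCLE pool
-- so its blocks age out quickly without disturbing the DEFAULT pool.
-- The cache size and table name are invented for illustration.
ALTER SYSTEM SET db_recycle_cache_size = 64M;
ALTER TABLE big_log STORAGE (BUFFER_POOL RECYCLE);
```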
> 3) In a well-tuned system, you do not want the
> data dictionary cached in the db_cache_size - that's
> what the v$rowcache and v$library_cache areas are
> for. If you're depending on the buffer pool to minimise
> the cost of acquiring object definition data, then you
> need to do something more important than adjust the
> block size.
>
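If you want to see whether the dictionary cache (rather than the buffer pool) is doing its job, the row cache statistics can be queried directly. A minimal sketch:

```sql
-- Hypothetical sketch: checking dictionary cache efficiency via
-- v$rowcache; a high getmisses/gets ratio suggests a problem there,
-- independent of any buffer-pool block size.
SELECT parameter, gets, getmisses
FROM   v$rowcache
ORDER  BY gets DESC;
```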
> 4) Indexes will NOT always favour large block sizes.
> If you double the size of an index block, then you double
> the activity on the block, which means you double the
> latching on the block. Remember what happened in
> 8.1.7.0 when a bug caused root block latching to go
> over the top. Even in the absence of updates, introducing
> a larger block size for indexes could produce a performance
> problem. (There are other reasons why this should not
> be considered a generally sound rule - but this is a good
> starter for 10).
>
> 5) The block size for a table should, indeed, be larger
> than the average row length - if the design is right. But
> if someone showed me a system with 7 CLOBs in the
> table definition, I wouldn't suggest that they create the
> table in a 32K block size - I'd ask if it was a suitable
> design, and then ask if the CLOBs should be stored
> out of line, and then worry about damage limitation.
>
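The out-of-line option mentioned above is a storage clause, not a block-size decision. A hypothetical sketch (table and column names are invented):

```sql
-- Hypothetical sketch: storing a CLOB out of line instead of moving
-- the whole table to a 32K block size. Names are invented.
CREATE TABLE doc_store (
  id   NUMBER PRIMARY KEY,
  body CLOB
)
LOB (body) STORE AS (DISABLE STORAGE IN ROW);
```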
>
> 6) (a) The TEMP tablespace has to have the same
> block size as the SYSTEM tablespace. (b) The sorting
> happens in memory as dictated by sort_area_size or
> pga_aggregate_target. (c) The larger the I/O, the more
> space from the sort_area_size you have to allocate
> to disk buffers for TEMP segment reads and writes,
> hence the less memory you have for sorting and the
> smaller the number of sort runs you can merge
> concurrently, hence the more likely you are to increase
> the sort costs if you choose a large block size without
> adjusting your memory allocation accordingly. And if
> you increase the memory to cater for very large block
> sizes, then you encourage the optimizer to use
> sort/merge joins and hash joins when they might not
> be the most appropriate thing.
>
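Point 6(c) can be put into rough numbers: if each concurrent sort run needs at least one read buffer of the TEMP block size, a fixed sort area merges fewer runs per pass as the block size grows. A back-of-envelope sketch in Python, with purely illustrative figures (this is arithmetic, not Oracle internals):

```python
# Back-of-envelope sketch of point 6(c): with a fixed sort area, each
# concurrent merge run needs at least one I/O buffer of the TEMP block
# size, so a larger block size means fewer runs merged per pass.
# All numbers are illustrative, not Oracle internals.

def max_merge_width(sort_area_bytes, temp_block_bytes, buffers_per_run=1):
    """How many sort runs can be merged concurrently if each run
    needs buffers_per_run read buffers of temp_block_bytes each."""
    return sort_area_bytes // (temp_block_bytes * buffers_per_run)

sort_area = 1 * 1024 * 1024                   # a 1 MB sort area
print(max_merge_width(sort_area, 8 * 1024))   # 8K blocks -> 128 runs/pass
print(max_merge_width(sort_area, 32 * 1024))  # 32K blocks -> 32 runs/pass
```

With the same memory, quadrupling the block size cuts the merge width by a factor of four, so a multi-pass sort becomes more likely unless the memory allocation grows too.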

Thank you for your detailed explanation.

Holger

Received on Tue Nov 18 2003 - 00:35:08 CST

