Re: An oldie on Blocksize and Performance?
Sorry about this, but another long reply,
pulling together about 5 of the postings.
What a lot of questions to answer.
1.5GB buffer
Did you realise this makes a 'small table' 30MB (2% of the buffer)?

400MB cache size threshold
How big are the tables declared as CACHE'd?
What method are you using to determine that only
400MB of the buffer is in use?
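For example (just a sketch - the CACHE column padding and the
availability of v$bh vary with version, so treat the details as
assumptions):

-- Which tables are declared CACHE, and how big are they ?
SELECT t.owner, t.table_name, s.bytes/1048576 AS size_mb
FROM   dba_tables t, dba_segments s
WHERE  LTRIM(t.cache) = 'Y'
AND    s.owner        = t.owner
AND    s.segment_name = t.table_name
AND    s.segment_type = 'TABLE';

-- Roughly how much of the buffer is actually holding blocks ?
-- (v$bh may need catparr.sql to have been run on older releases.)
SELECT status, COUNT(*) AS buffers
FROM   v$bh
GROUP  BY status;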
Very large numbers of inserts
FREELISTS=8 (which you suggest is not sufficient)
but INITRANS=1 (which you think is).
How many inserts per second?
Are they array inserts or single row inserts?
Do the inserts trigger any automatic referential
integrity checks (i.e. Foreign Key)? (See the sketch
after these questions.)
Are there really very few updates?
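One way to answer the RI question (a sketch only - the owner and
table name are made up):

-- List any enabled foreign key constraints on the inserted table;
-- each one means a lookup against the parent key on every insert.
SELECT constraint_name, r_constraint_name, status
FROM   dba_constraints
WHERE  owner           = 'SCOTT'
AND    table_name      = 'BIG_TAB'
AND    constraint_type = 'R';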
INITRANS is only a lower limit, and if there is free space in the block (ca. 21 bytes per transaction) then ITL contention should not be a problem. You could run into trouble, though, if you have very small rows and PCTFREE=0, or if you have enough update activity to fill the block, but this is more likely to show up as processes waiting for TX locks.
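If you did want to play safe, the sort of thing I mean (table name
and values are assumptions, not anything you have posted):

-- Reserve a few ITL slots and some free space so the ITL can grow.
CREATE TABLE t (
    id   NUMBER,
    val  VARCHAR2(30)
)
PCTFREE  10
INITRANS 4;

-- On an existing table this only affects blocks formatted from now on.
ALTER TABLE t INITRANS 4;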
The default value for CBC latches is the smallest prime number greater than or equal to 25% of the value of db_block_buffers. Statistically the aim is to have about 4 different DBAs (database block addresses) associated with any one hash chain, hence my throwaway comment equating CBC latches with Blocks. Of course, although only 4 DBAs on average will be associated with a CBC latch there may be multiple copies of each block in the buffer at any one time, so the chain could be much longer.
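You can check the arithmetic on your own system - a sketch (8.x
behaviour assumed):

-- Number of cache buffers chains child latches ...
SELECT COUNT(*) AS cbc_latches
FROM   v$latch_children
WHERE  name = 'cache buffers chains';

-- ... compared with 25% of db_block_buffers.
SELECT value AS db_block_buffers
FROM   v$parameter
WHERE  name = 'db_block_buffers';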
In answer to the question:
'Why do tablescans not affect the CBC latches?'
It sounds like you have ca. 100,000 buffer blocks,
and a largest scanned table of about 25,000 blocks.
My approximation of what happens in an FTS is:

Each time you do a multiblock read, the single
cache buffers LRU chain latch will be hit.

8 buffer blocks (or 4 or whatever) have to be
kicked out, so their CBC latches will be hit.

8 new blocks will be read in and attached to
hash chains - their latches (probably 8 different
ones) will be hit as Oracle checks whether the blocks
are already in the buffer, then they will be hit
again as the blocks which were not in the buffer
are loaded.
Bottom line: in a 25,000 block FTS you will get a lot
of CBC latch hits, BUT it will be lots of different
latches being hit a few times each, and so probably very
little contention and very few misses even if you
scanned the same table several times concurrently.
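A sketch of the check that would confirm this - if the gets are
spread fairly evenly across the children and the sleeps are not
concentrated on a handful of them, the tablescan is not your problem:

SELECT addr, child#, gets, misses, sleeps
FROM   v$latch_children
WHERE  name = 'cache buffers chains'
ORDER  BY sleeps DESC;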
You've told us that you can identify, through
v$latch_children and x$bh, a few blocks that are
probably the critical ones, but you say nothing
about them. Are the blocks:
segment header blocks?
root blocks of indexes?
blocks related to RI constraints?
Do the blocks change slowly over time?
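The sort of mapping I have in mind (a sketch - the x$bh column names
are from memory for 8.x, so check them, and you need to be connected
as SYS):

-- Take the ADDR of a hot cache buffers chains child latch and list
-- the segments that own the blocks hashed to it.
SELECT e.owner, e.segment_name, e.segment_type,
       b.file#, b.dbablk
FROM   x$bh b, dba_extents e
WHERE  b.hladdr = '&hot_latch_addr'
AND    e.file_id = b.file#
AND    b.dbablk BETWEEN e.block_id AND e.block_id + e.blocks - 1;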
Strategy:
If you have plenty of memory, set the hash chains
parameter (which I thought was hidden) to a value
much larger than db_block_buffers (e.g. 2x) to see
if you can get one block per chain, and identify
exactly which blocks are the problem.
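If memory serves the hidden parameter is _db_block_hash_buckets -
treat the name as an assumption and check it first (as SYS):

SELECT i.ksppinm  AS param_name,
       v.ksppstvl AS param_value
FROM   x$ksppi i, x$ksppcv v
WHERE  i.indx = v.indx
AND    i.ksppinm LIKE '%hash_buckets%';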
If the blocks relate to small referential integrity tables, then recreate the table with a large PCTFREE so that the data in the object is spread across more blocks. (DDL is sketched after this list.)
If the blocks are migrating and appear to be from the tail end of the inserted table, then increase the number of free lists.
If the blocks are root blocks of RI indexes, you may have a problem.
Anything else, give us a clue what it is.
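For the PCTFREE and freelists cases, something along these lines
(table names and values are invented, and depending on release
FREELISTS may only be settable at CREATE time, hence the rebuilds):

-- Spread the small RI lookup table over many more blocks
-- (and therefore many more hash chains).
CREATE TABLE ri_lookup_new
PCTFREE 90 PCTUSED 5
AS SELECT * FROM ri_lookup;

-- Give the heavily-inserted table more process freelists.
CREATE TABLE big_tab_new
PCTFREE 10
STORAGE (FREELISTS 16)
AS SELECT * FROM big_tab;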
P.S. You mention blocks in XCUR mode.
Is this v$bh.status = 'xcur'? I think this applies
to OPS only. Blocks 'gotten' in current mode
are indicated by a bit in the x$bh flag (see
the bitmap breakdown on my website).
--
Jonathan Lewis
Yet another Oracle-related web site: www.jlcomp.demon.co.uk
Received on Wed May 12 1999 - 16:43:35 CDT