Oracle FAQ Your Portal to the Oracle Knowledge Grid


RE: data block waits

From: Steve Adams <>
Date: Fri, 1 Sep 2000 15:26:17 +1000
Message-Id: <>

Hi Gaja,

Yes, I do still have that old thread, and in fact we dropped it without reaching any agreement.

On Wednesday, 21 October 1998 9:20, I disagreed with your claim about the cost of multiple process freelists and freelist groups as follows. "There is no CPU cost to having multiple process free lists, and a trivial cost to having multiple free list groups. If you have an inordinate number of process free lists the worst you get is more free space below the HWM that is not accessible to any particular process. You also get a lower limit on the number of possible transaction free lists you can have."

On Monday, 26 October 1998 14:49, I posted a detailed follow-up to the list in response to several requests for clarification. You responded on Tuesday, October 27, 1998 10:29 AM with a similar detailed explanation that was largely in agreement with mine but differed at several points. I emailed you off the list an hour later to point out where I thought your explanation was wrong. We exchanged emails again the next day in which you explained your understanding, and I again disagreed. We exchanged several other emails on other matters, but the disagreement over the cost of freelists was never resolved.

If you will not accept my "lab tests", and if I am sceptical about your interpretation of your production experience, then I don't see how we can agree this time either. Instead, let's just agree to differ! ;-)

@   Regards,
@   Steve Adams
@   Going to OpenWorld?
@   Catch the Ixora performance tuning seminar too!
@   See for details.

-----Original Message-----
From: Gaja Krishna Vaidyanatha []
Sent: Friday, 1 September 2000 8:19
To: Multiple recipients of list ORACLE-L
Subject: RE: data block waits

Hi Steve,

I think we had this discussion at great length (in a past thread) many moons ago in 1998. I am not sure whether you still have that thread; if you do, you might want to refer to it. In that thread, we had "mutually" agreed on everything that I mentioned in my posting (including the overhead of too many freelists, the incremental CPU contention that it poses, the overhead of the allocation mechanism, etc.). Unfortunately, I no longer have that thread with me, and I do not currently have the time to re-write what I did 2 years ago.

Having said that, I do wish to write something that is relevant to this and other discussions. Everything that I write about is based on findings from "real-life production systems" with substance. These are systems with many hundreds of gigabytes of data, supporting hundreds of users, running applications such as Oracle Apps, SAP and PeopleSoft, as well as a good variety of data warehouses. The above implementations have also been done across multiple storage vendors. What I write is based on my experience and findings on the aforementioned systems that I have dealt with over the past 8 1/2 years.

I do not and will not publish anything based solely on simulations or lab tests, as in my humble opinion they do not reflect "reality" accurately. I am not suggesting that findings from lab tests are worthless, but what I do want to say is that anything I write about is based purely on "real-life practical experience".

While there might be valid backing to corroborate the numbers that you have published, such as "2.5 blocks of overhead per freelist", I tend to disagree with your conclusion, as one cannot generalize such things across the board. I have numbers and reasons to believe otherwise (based on my prior experience with production systems). I have seen the overhead run much higher, and it has indeed had the effect of pushing the high-water mark to levels that are not needed.
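For anyone who wants to check this on their own system, one rough way to see how far the high-water mark sits above the space actually in use is to compare the analyzed block counts from the data dictionary (the table name here is purely illustrative):

```sql
-- Gather ANALYZE-based statistics for a hypothetical table.
ANALYZE TABLE orders COMPUTE STATISTICS;

-- BLOCKS       = blocks below the high-water mark
-- EMPTY_BLOCKS = allocated blocks above the high-water mark
SELECT blocks, empty_blocks, freelists
FROM   user_tables
WHERE  table_name = 'ORDERS';
```

A BLOCKS figure well above what the row data needs, on a table with many freelists, is the symptom being discussed here.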

To fix the problem, I have had to reorganize the table and re-create it with fewer freelists. Whether the I/O sub-system is stressed depends on the number of users performing full-table scans on the said table, the size of the given table, the bandwidth of the controller, etc.
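A sketch of that kind of rebuild (table names and the FREELISTS value are only illustrative) might look like:

```sql
-- Re-create the table with a smaller FREELISTS setting.
CREATE TABLE orders_new
  STORAGE (FREELISTS 4)
  AS SELECT * FROM orders;

DROP TABLE orders;
RENAME orders_new TO orders;
-- Indexes, grants, triggers and constraints must then be re-created as well.
```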

On the subject of block allocation to freelists, my point was not about the timing of the allocation, but rather about the fact that the blocks of an allocated extent do eventually get dispersed across multiple freelists, on an as-needed basis.

So in the final analysis, configuring any component of Oracle should be done within "reason", and in my experience, freelists for tables that encounter heavy concurrent inserts should be configured at (2 * # of CPUs).
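As a sketch of that rule of thumb (the table definition is hypothetical; the CPU count is read from the instance parameter):

```sql
-- Find the CPU count visible to the instance.
SELECT value FROM v$parameter WHERE name = 'cpu_count';

-- For an 8-CPU server, 2 * 8 = 16 process freelists.
CREATE TABLE hot_inserts (
  id    NUMBER,
  data  VARCHAR2(100)
) STORAGE (FREELISTS 16);
```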

Best Regards,

--- Steve Adams <> wrote:
> Hi Gaja (and list),
>
> There are a few points in your post with which I would like to disagree.
>
> There is no great overhead in having relatively large numbers of process
> freelists in a segment. However, 400 would not normally be possible because
> of space limits in the freelist header block. 400 would only just fit in a
> 16K dedicated freelist header block. If you need to support 400 concurrently
> inserting sessions, then I would still recommend 400 free lists, but would
> do it using hash partitioning and a modest number of process freelists per
> partition.
>
> By default, each process free list has the effect of raising the segment
> high-water mark by 2.5 blocks on average - worth knowing about as a
> disincentive to going overboard, sure, but hardly enough to stress the I/O
> subsystem during sequential I/O.
>
> Blocks are not allocated to freelists when an extent is allocated, nor yet
> when the high-water mark is raised and the blocks formatted. Even though the
> number of blocks formatted when the high-water mark is raised is a function
> of the number of process freelists, those blocks are placed on the master
> free list in the first instance and only allocated to individual process
> freelists as required.
> @ Regards,
> @ Steve Adams
> @ Going to OpenWorld?
> @ Catch the Ixora performance tuning seminar too!
> @ See for details.
> -----Original Message-----
> Sent: Thursday, 31 August 2000 11:51
> To: Multiple recipients of list ORACLE-L
> [snip]
> For e.g., if you have an environment with 400 concurrent sessions inserting
> data into a table, creating 400 freelists is unreasonably high. This is
> because the potential overhead incurred in tracking and accessing so many
> freelists when concurrent inserts occur on the system is very high.
>
> Also, configuring too many freelists will have the effect of "artificially
> increasing the high-water mark" of a table. An "artificially high"
> high-water mark will cause Oracle to perform more I/O on the table than
> required for full-table scans, thereby imposing an undesired stress on the
> I/O sub-system.
>
> Realize that if you have multiple freelists, when an extent gets allocated,
> the blocks of the allocated extent get dispersed across the freelists for
> the table. An optimal value of "2 x # of CPUs" is more than adequate on
> most systems.
> --
> Author: Steve Adams
> Fat City Network Services -- (858) 538-5051 FAX: (858) 538-5051
> San Diego, California -- Public Internet access / Mailing Lists

Gaja Krishna Vaidyanatha
Director, Storage Management Products
Quest Software Inc.

"Opinions and views expressed are my own and not of Quest"

Author: Gaja Krishna Vaidyanatha

Received on Fri Sep 01 2000 - 00:26:17 CDT
