
Re: DB_BLOCK_SIZE set to the highest value

From: Nuno Souto <nsouto_at_nsw.bigpond.net.au.nospam>
Date: Sat, 26 May 2001 02:35:46 GMT
Message-ID: <3b0f0a21.1635705@news-server>

On Fri, 25 May 2001 21:09:24 +0100, "Jonathan Lewis" <jonathan_at_jlcomp.demon.co.uk> wrote:

>
>There is a feature of rollback segment headers that is
>not often mentioned - the 'free block pool' that,
>loosely speaking, lists the last record position of
>the most recent five transactions to commit.
>

Why 5? I mean, it looks like a number that fell out of some internal strategy, but which one? Would this have anything to do with the old rule of thumb of four connections per rollback segment?
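As an aside, for anyone who wants to watch this on a live system: a rough sketch of a query (nothing more than a join of the standard v$transaction and v$rollname views) that shows how many active transactions are bound to each rollback segment at a given instant. Run it a few times at peak load and you can see whether any segment regularly carries more than 5:

  -- Active transactions per rollback segment, right now.
  select r.name   as rollback_segment,
         count(*) as active_txns
    from v$transaction t,
         v$rollname   r
   where t.xidusn = r.usn
   group by r.name
   order by active_txns desc;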

>Having this list allows Oracle to step backwards
>(to a very limited degree) in the rollback segment
>and reuse space that would otherwise be wasted.
>
>If you persistently run with more than 5 concurrent
>transactions per rollback segment, and the
>transactions are all very short, you can waste blocks
>very rapidly - and bigger blocks waste more space.
>
>Might this have something to do with your problem ?
>

I'm not sure. The app has a 3-tier architecture, but it also allows for 2-tier. I have a mixed load of users: about 50 in 2-tier and around 150 in 3-tier, with 12 backend connections for the app server. There's a permanent batch schedule of at most 6 jobs, 4 of them during the day.

The daily number of connections to ORACLE hardly exceeds 70 or thereabouts; the total number of users is around 200. No MTS: the 3-tier app server does the equivalent. The interactive user load is most definitely short-transaction stuff. It's highly variable within a typical day, and varies with the day of the month too, being financials stuff.
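A quick sanity check on that connection count is a one-liner against the standard v$session view:

  -- Current user connections, background processes excluded.
  select count(*) as user_sessions
    from v$session
   where type = 'USER';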

Batch is long transactions, characterized by loading extracted data into temp (shared!) tables, processing it, loading it back into other tables and then deleting the temp data. Up to 5 million rows get processed this way on each run. I don't need to explain the problems I have with these tables... ;-)
The blowout of rollback segment (rlbseg) size I was experiencing before happened with these batch executions. In one abnormal case a batch run blew 4GB of rlbseg to process a 100MB table.
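For what it's worth, watching where the undo goes while one of those batch runs is in flight is just a join of the standard v$session and v$transaction views; a sketch:

  -- Undo consumed by each active transaction.
  -- used_ublk * db_block_size gives an estimate in bytes.
  select s.sid,
         s.username,
         t.used_ublk,   -- undo blocks used so far
         t.used_urec    -- undo records written so far
    from v$session     s,
         v$transaction t
   where s.taddr = t.addr;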

The DB has 16 rlbsegs. I've run all the stats I could find, and with the 16K block size I was getting constant waits on them. With 8K I get an occasional wait here and there and not much else. If anything, 16 rlbsegs is over the top now, whereas before it wasn't enough. All rlbsegs are shrunk twice a day: once in the morning before the daily load starts, and once in the evening before the overnight batch load starts. We used to use "optimal", but that caused a lot of "snapshot too old" (ORA-01555) errors. Much better to do it like this.
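Roughly the sort of checks involved, for the record (a sketch against the standard v$waitstat, v$rollstat and v$rollname views; the shrink statement at the end is illustrative only, segment name and size made up):

  -- Contention on undo blocks and undo segment headers since startup.
  select class, count, time
    from v$waitstat
   where class like '%undo%';

  -- Per-segment gets/waits plus the shrink/wrap/extend counters.
  select n.name, s.gets, s.waits, s.shrinks, s.wraps, s.extends
    from v$rollstat s,
         v$rollname n
   where s.usn = n.usn;

  -- The twice-daily shrink is plain DDL, one statement per segment:
  alter rollback segment rbs01 shrink to 100m;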

Do you reckon this sort of load would have caused the problem at 16K? I'd have stayed there if I could, but the darn thing was constantly giving me headaches.

Cheers
Nuno Souto
nsouto_at_bigpond.net.au.nospam
http://www.users.bigpond.net.au/the_Den/index.html
