Oracle FAQ Your Portal to the Oracle Knowledge Grid
HOME | ASK QUESTION | ADD INFO | SEARCH | E-MAIL US
 


Re: Got the darn buffer busy waits under control, at last! (for R. Sanchez)

From: Zo(t) <wizofoz2k_at_yahoo.com.au>
Date: Mon, 17 Jun 2002 20:55:57 +1000
Message-ID: <3d0dc1b4$0$28005$afc38c87@news.optusnet.com.au>


"Jonathan Lewis" <jonathan_at_jlcomp.demon.co.uk> wrote in message news:1024230336.29549.0.nnrp-08.9e984b29_at_news.demon.co.uk...

> It's a pity that you've managed to chop out the redo size
> stats from the first report, as these stats don't really
> look as if they were really comparable - even though the
> commit count in the same ballpark.

Bummer! I'm very sorry, I don't understand how they got chopped. Here are some from another log taken more or less at the same time:

redo blocks written                 131192        29.76       411.26       14.58
redo buffer allocation retr             49          .01          .15         .01
redo entries                        301728        68.43       945.86       33.53
redo log space requests                  8            0          .03           0
redo log space wait time                79          .02          .25         .01
redo ordering marks                     14            0          .04           0
redo size                        127433064     28902.94    399476.69     14160.8
redo small copies                    17307         3.93        54.25        1.92
redo synch time                       4493         1.02        14.08          .5
redo synch writes                     4566         1.04        14.31         .51
redo wastage                       4799784      1088.63     15046.34      533.37
redo write time                       7014         1.59        21.99         .78
redo writer latching time               70          .02          .22         .01

and the corresponding bbw, from the same log:

latch free                            8523        11198         1.31
buffer busy waits                    60760         6327          .1
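For anyone wanting to pull the same figures straight from the instance rather than out of a bstat/estat log, something like the following against the v$ views should get close. Column names are from the 8.0 data dictionary as I remember them, so treat this as a sketch, not gospel:

```sql
-- Redo statistics, as in the report above (totals since instance startup)
SELECT name, value
FROM   v$sysstat
WHERE  name LIKE 'redo%';

-- The wait events in question, again cumulative since startup
SELECT event, total_waits, time_waited
FROM   v$system_event
WHERE  event IN ('buffer busy waits', 'latch free');
```

The bstat/estat "per second" and "per transaction" columns are just these raw values divided by the elapsed time and the commit count over the snapshot interval.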

>
> Personally I would have said that the buffer busy waits
> were a visual irritant, but not a problem. The changes
> that MAY (and I stress MAY) have caused the beneficial
> increase are:

The thing you've got to remember is that these two logs are nearly three months apart. There have been many trials in between; I can't possibly load them all here, so I just loaded some that are significant. If you see a parameter changed between the two, don't immediately assume that happened in one strike between the two log files. I change parameters ONE AT A TIME, not in a bunch, and examine the results for a while before deciding whether a change stays or not.

> Reduction in size of multiblock read count

That was tried before, with no impact whatsoever. I went back to it because a higher value was causing a lot of unwarranted full scans once I turned parallel query on, and this value seemed to stop that from happening. It never affected the bbw in any of the instances where I've changed it.
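For reference, this is the sort of init.ora line being talked about. The value is illustrative only, not a recommendation; the right number depends entirely on the instance and the I/O size of the platform:

```
# reduced from a higher value after parallel query made full scans too attractive
db_file_multiblock_read_count = 8
```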

> Disabling of fast full scans

That could have had an impact. However, they were disabled before; I enabled them to see if I could get better performance in some cases. I got it, but most of the dynamic SQL from one of their "AI" tools got completely stuffed. So I went back to disabled, which is the default IIRC.

> Increase in optimizer_index_caching

That was tried before. I used to have it at 90, then I found out that in some cases I was getting EXCESSIVE index usage. So I throttled it back to a balanced value, which turned out to be 50. But this could indeed have been another cause of the drop in the bbws. See later, the timing was possibly right.
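Again just for reference, the corresponding init.ora fragment would look something like this (the value is what worked here, nothing more):

```
# was 90; backed off after it caused excessive index usage.
# 0-100: higher values make the optimizer assume more of each
# index is already cached, favouring index access paths.
optimizer_index_caching = 50
```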

> Movement of files in file system

Hmmm, I doubt it. What happened was that a new file system was "made" on the same recoverable units as I had my SYSTEM tablespace on, to be used for NT file server stuff. I found out about this and asked to have it moved somewhere else, so my SYSTEM "disk" was back to normal. I say "disk" because this is an EMC and "disks" are not always what we think they are. But I'm essentially back to the same situation hardware-wise where I had high bbws before. Except now they're gone.

>
> A common cause of buffer busy waits is excessive
> table / index scanning, with one process simply waiting
> for another process to complete its multiblock read so
> that it can read the same blocks.

Or excessive index usage? As in a lot of index blocks being visited when a full scan might have resulted in less overall disk access? That could indeed have been the case. I'll have to try this again with the original 90% (and with nothing) to see if it really has such an impact.
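One way to narrow down which kind of block the busy waits are actually on (data blocks vs. segment headers vs. undo, etc.) is v$waitstat. From memory of the 8.0 dictionary, so take the column names with a grain of salt:

```sql
-- buffer busy waits broken down by block class, cumulative since startup
SELECT class, count, time
FROM   v$waitstat
ORDER BY count DESC;
```

If the "data block" class dominates, that points at contention on table/index blocks (scans, hot blocks); a big "segment header" count points more at freelist contention instead.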

>
> The various changes in access paths that reduce the
> amount of sorting, combined with the increase in
> sort_area_retained_size to reduce the I/O to the temp
> files would also have had a beneficial effect on the
> filesystem overload, again potentially having a side
> effect on the BBW issue.

Yeah, but would it be enough for such a dramatic reduction? We're talking, in the same daily period, a reduction from around 60000 on average (default spin count) to 14000 (spin count = 3000) to around 300 (spin count = 4000). That is a very large reduction to attribute to a simple change like optimizer_index_caching, no? The other parameters were introduced one at a time and caused no reduction whatsoever. Spin count dropped them dead in this instance.
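For completeness, the spin count progression above corresponds to init.ora settings along these lines. Caveat: this is a sketch for 8.0, where the parameter is still visible; from 8i onwards it becomes the hidden _spin_count, so don't carry it forward blindly:

```
# how many times a process spins on a busy latch before sleeping;
# tried 3000 first, then 4000 (platform default is commonly 2000,
# if memory serves - check v$parameter on your own box)
spin_count = 4000
```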

Of course, when and if we upgrade to 8i/9i, this init.ora will be reset to vanilla. The last thing I need is all this baggage in my brand new 8i instance! Besides, I've tried some of my "problem" SQL in 8i and I don't see anywhere near as many troubles even with default settings. Big change in that optimizer between 8.0 and 8i, I reckon!

OBTW, thanks a lot for taking the time to look into this. Much appreciated.

And to Ricky as well. Sorry, I forgot to mention it in my reply to him.

--
Cheers
Nuno Souto
wizofoz2k_at_yahoo.com.au
Received on Mon Jun 17 2002 - 05:55:57 CDT

