Oracle FAQ | Your Portal to the Oracle Knowledge Grid |
c.d.o.server -> Re: Got the darn buffer busy waits under control, at last! (for R. Sanchez)
In article <1024322671.24583.0.nnrp-12.9e984b29_at_news.demon.co.uk>, you
said (and I quote):
> effect I doubt if you are going to be able to
> resurrect a sufficiently large chain of stats
> outputs that prove that the _spin_count
> change was definitely the critical factor.
I'll try to capture another log from a similar load. That should be possible at the next month-end, which is when all heck breaks loose in that system.
> (And if you did, I'd only duck out of reading
> them all anyway).
Heaven forbid!!!! It's most definitely not THAT important! :-D
> One of the reasons I view statspack with
> extreme caution is that it is still only taking
> a large-scale average of statistics. It took me
> about 15 minutes yesterday to come up with
> a couple of bits of code which individually are
> catastrophic, but result in statspack reporting
> near-perfect database behaviour when they
> run in combination.
It's a problem with any tuning data based on a "sample-and-hold" methodology for data capture.
One thing I must grant the mainframe guys: they sorted all this rigmarole out ages ago.
They produce a stream of measurement data that can be trapped / diverted / summarised / whatever by a simple and efficient process. It's up to us to separate the wheat from the chaff in that constant stream of performance event data.
A much better solution IMHO. We can use whatever tools to process this raw data (even >> /dev/null, if we want!), rather than having to rely on sample-and-hold numbers that may or may not be relevant for what we have to trap.
Of course, what we have now in Oracle is miles ahead of what was available in 6 and even 7. But to my mind it still needs to get better. The "on-the-spot" event views are a case in point: they are virtually useless unless you have the luck of hitting the right time slice.
If I had that stream of data condensed into a flat file like I can get for DB2/mainframe, I could easily trap EVERY single one of the buggers! As is, it's a chance event. Usable, but far from perfect.
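To make the "stream" idea concrete: a rough sketch of trapping every wait event as it happens, assuming extended SQL trace (event 10046) has been enabled for the session so wait lines land in a raw trace file. The trace file path below is entirely made up, and `--line-buffered` assumes GNU grep:

```shell
#!/bin/sh
# Sketch only: follow a growing trace file and keep just the buffer
# busy wait lines, instead of relying on sampled snapshots.
# Assumptions: event 10046 tracing is on; this path is hypothetical.
TRACE=/u01/app/oracle/admin/ORCL/udump/ora_1234.trc

# The filtered stream could just as easily go to awk, a summary
# process, or >> /dev/null - the point is that nothing is lost.
tail -f "$TRACE" | grep --line-buffered "buffer busy"
```

The same filter works after the fact on a completed trace file by dropping the `tail -f`.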
>
> One of the nicer things about statspack, though
> is that if you run off the report for every adjacent
> pair of intervals, you can grep for (say) "buffer busy wait"
> and "redo size" from all of them, and check very
> quickly for any correlation between results.
Hmmm, must look into that. Good idea.
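For the record, a minimal sketch of that grep-across-reports idea, assuming each interval's statspack report has been spooled to a file matching `sp_*.lst` (the file naming is an assumption, not from the original post):

```shell
#!/bin/sh
# Sketch only: pull two statistics out of a series of statspack
# report files so adjacent intervals can be eyeballed for correlation.
# Assumption: reports were spooled to files named sp_*.lst.
for f in sp_*.lst; do
    echo "== $f =="
    grep -i "buffer busy wait" "$f"
    grep -i "redo size" "$f"
done
```

Piping the output through `paste` or a small awk script would line the two statistics up side by side for a quick visual correlation check.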
> one sluggish scanner causing the BBW, and a
> set of concurrent scanners could ALL end up
> reporting 'the same' BBW at the same time.
Bloody hard to detect if the events are not time stamped and streamed out. Anyways, enough ranting.
> if you try running the following script from just
> three concurrent processes:
> select * from source$ where length (source) = -1;
> select * from source$ where length (source) = -1;
> select * from source$ where length (source) = -1;
> select * from source$ where length (source) = -1;
> select * from source$ where length (source) = -1;
> select * from source$ where length (source) = -1;
> select * from source$ where length (source) = -1;
>
> I think you should find a reasonable number of BBWs
> appear quite readily. For some reason, the result is not
> so nasty in 8.1.7).
Will definitely look into this. Thanks a lot.
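One quick way to drive that test from a shell, assuming the seven identical selects are saved in a script called `bbw_test.sql` and `$CONNECT` holds a valid connect string - both names are made up here, not part of the original suggestion:

```shell
#!/bin/sh
# Sketch only: fire the same SQL script from three concurrent
# sqlplus sessions to try to provoke buffer busy waits.
# Assumptions: bbw_test.sql holds the selects; $CONNECT is set.
for i in 1 2 3; do
    sqlplus -s "$CONNECT" @bbw_test.sql > bbw_run_$i.log 2>&1 &
done
wait    # let all three sessions finish before reading the logs
```

A statspack snapshot taken just before and just after the `wait` would bracket the run for the correlation check mentioned earlier in the thread.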
--
Cheers
Nuno Souto
nsouto_at_optushome.com.au.nospam

Received on Tue Jun 18 2002 - 05:46:30 CDT