Re: disks for redo logs

From: joel garry <joel-garry_at_home.com>
Date: Thu, 12 Feb 2009 10:51:21 -0800 (PST)
Message-ID: <2d13a476-09c5-4072-bd1c-448544f495d0_at_u18g2000pro.googlegroups.com>



On Feb 11, 6:19 pm, Noons <wizofo..._at_gmail.com> wrote:
> On Feb 12, 6:26 am, joel garry <joel-ga..._at_home.com> wrote:
>
> > > I've got two databases - RAC and single - and two RAID groups - 4 disks
> > > in RAID 10 and two disks in RAID 1 on a storage array. At this time I'm
> > > using the RAID 10 for RAC database redo and the RAID 1 for single instance
> > > database redo.
>
> I hope you mean 4 mirrored pairs in RAID10, because if it is just 4
> disks in total, what you've got is a stripe of two mirrored devices,
> which achieves little in terms of speed-up.
>
> > > RAC is producing about 100% more redo than the single db.
> > > I am wondering if it would be more efficient to make one RAID 10 from
> > > all 6 disks and use it for both databases? I have no possibility to test
> > > this configuration, so that is why I am asking you. Thanks for any advice.
>
> What you need is a heap more disks, if you want to mess around with IO
> configurations.  6 disks is not enough for any decent amount of tuning
> in a SAN for 2 databases.
>
> I've now got two stripes of 8 mirrored pairs each for our main DW and
> I'm *starting* to see *some* IO throughput improvement.
>
> > In 10.2.0.4 (and I would guess above) check out the
> > ${ORACLE_SID}_lgwr*trc trace files; it spits out a message whenever
> > log writes take more than .5 seconds or something. Compare to your
> > performance monitoring.
>
> Interesting!  Thanks for that, Jgar!

There's a Metalink note on it; I guess enough people have asked about those warnings.
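
For anyone else who wants to look, something like this pulls them out
on a 10.2-style install (just a sketch - the bdump path and the SID are
examples, check where your background_dump_dest actually points):

    # example path only; query background_dump_dest on your box
    cd /u01/app/oracle/admin/${ORACLE_SID}/bdump
    grep -h "Warning: log write" ${ORACLE_SID}_lgwr*trc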

>
> > I was a bit surprised when I first noticed what's in
> > those traces; haven't quite figured out how to translate that for
> > management.  The surprising part was the size of the writes versus the
> > waits - apparently quite unrelated.  Which I guess makes sense if the
> > waits are caused by things other than the log writes.
>
> What span of sizes are you seeing?

Most of the time, nothing at all, but open an EM maintenance window and...

[...]
Warning: log write time 900ms, size 28KB
*** 2009-02-11 11:45:02.949

Warning: log write time 760ms, size 12KB
*** 2009-02-11 12:06:35.470

Warning: log write time 540ms, size 1KB
*** 2009-02-11 12:06:36.284

Warning: log write time 810ms, size 3KB
*** 2009-02-11 12:06:37.284

Warning: log write time 630ms, size 1KB
*** 2009-02-11 12:46:47.591

Warning: log write time 660ms, size 2KB
*** 2009-02-11 13:02:08.343

Warning: log write time 500ms, size 1KB
*** 2009-02-11 13:08:37.813

Warning: log write time 2720ms, size 1KB
*** 2009-02-11 13:23:04.347

Warning: log write time 2270ms, size 5KB
*** 2009-02-11 13:33:55.449

Warning: log write time 500ms, size 1024KB
*** 2009-02-11 13:52:43.694

Warning: log write time 740ms, size 5KB
*** 2009-02-11 14:10:38.786

Warning: log write time 3010ms, size 3KB
*** 2009-02-11 15:09:22.242

Warning: log write time 1710ms, size 1KB
*** 2009-02-11 21:05:54.817

Warning: log write time 1310ms, size 8KB
*** 2009-02-11 21:06:10.603

Warning: log write time 500ms, size 2861KB
*** 2009-02-11 21:06:12.259

Warning: log write time 510ms, size 3143KB
*** 2009-02-11 21:06:13.094

Warning: log write time 830ms, size 3916KB
*** 2009-02-11 21:06:13.723

Warning: log write time 520ms, size 3170KB
*** 2009-02-11 21:06:14.388

Warning: log write time 650ms, size 2186KB
*** 2009-02-11 21:06:15.247

Warning: log write time 850ms, size 3382KB
*** 2009-02-11 21:06:16.254

Warning: log write time 1000ms, size 3709KB
*** 2009-02-11 21:06:17.090

Warning: log write time 840ms, size 3409KB
*** 2009-02-11 21:06:18.030

[etc.]

I see that kind of ramp-up effect (at 21:05) several times through the window. Ah - those were a series of shrink space command scripts I submitted.
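
If you'd rather have a one-glance summary than scroll through all of
that, a quick awk hack does it (a sketch, assuming the exact
"Warning: log write time Nms, size NKB" format shown above):

    grep -h "Warning: log write" ${ORACLE_SID}_lgwr*trc |
    awk '{ gsub(/ms,/, "", $5); gsub(/KB/, "", $7)   # strip the units
           n++
           if ($5 + 0 > t) t = $5 + 0                # slowest write
           if ($7 + 0 > s) s = $7 + 0 }              # biggest write
         END { printf "%d warnings, worst %dms, biggest %dKB\n", n, t, s }'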

>
> > I used to think having different volume groups would help, but I've
> > been disabused of that notion.  Now I don't understand it at all.  SAN
> > cache makes it all strange, too.
>
> One thing I've found makes a huge difference: jack up the priority of
> the lgwr and arcn processes.  They are mainly IO-bound processes that
> hardly accumulate any CPU, so they are perfect candidates for higher
> priority so that IO gets initiated asap.

Guess I'll have to actually look at the thing in the middle of the night. (Haven't been able to justify extra-cost performance recording tools.)

I've seen this advice before, but haven't done anything with it, since, hey, if the thing is I/O bound, wouldn't initiating I/O more often make it worse? During normal production times I see no performance problems except for certain programs becoming cpu bound - some due to SGA thingies (and a few typical CBO issues, and one strange view/CBO issue), some due to a stupid app where the programmers decided that since memory is faster than disk, they should use lots of bespoke in-memory virtual tables with no random access, ignoring vaguely documented limits.

Needless to say, the latter has gotten much worse with scaling and with the move to fewer, faster Itanium processors. And the vendor decides that all performance problems must be the database, and so asks for Oracle trace files. Which of course don't say anything at all, because the program isn't asking the database for anything when it is cpu bound. I wouldn't want to make that worse by fighting the background processes.
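
If I ever do get around to trying it, I imagine it'd look something
like this on Linux (strictly a sketch - run as root, the ORCL SID and
the -10 increment are made-up examples, and other platforms spell
renice differently):

    # bump lgwr and the archivers; a negative increment = higher priority
    for p in $(pgrep -f "ora_(lgwr|arc[0-9])_ORCL"); do
        renice -n -10 -p "$p"
    done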

I do like the way I know immediately when someone runs one of those: big green bar in the EM performance window, and I can go right to see who is doing it. No way to know which app code is generating the SQL without asking the person, though (OCI generator).

>
> > But hey, if users aren't complaining, everything is peachy.  Right?
>
> Bingo!  ;)

Unfortunately, some end-of-month processing has been converted to use those misfeatures in the new version, so I get a lot of "arewethereyetarewethereyetarewethereyettheoldsystemwasfaster!" Users have forgotten that not so long ago it took days, which is why they called me in... 8 years already? Wore out a whole Chrysler...

jg

--
_at_home.com is bogus.
http://www3.signonsandiego.com/stories/2009/feb/11/1m11fraud233323-financial-abuse-case-set-trial/?uniontrib
Received on Thu Feb 12 2009 - 12:51:21 CST
