RE: Measuring the impact of Redo Log change

From: Mark W. Farnham <>
Date: Tue, 9 Sep 2008 03:53:06 -0400
Message-ID: <>

When you want to measure the effects of a change, it really helps to decide what metrics to track and to collect a "before the change" set of values before you make the change. It might turn out, though, that you have coincidentally accumulated some metrics that will be useful.  

If you have some recurring batch jobs with start and end times and volume of transactions recorded, that could be useful if those jobs were previously running against reasonably similar competitive loads. You can get an idea of the "before" throughput variability by charting #transactions/unit time. If the variability of "before" transactions is high, find some other metric, unless the variability is driven by outliers with a known cause. (For example, some job might run much slower against a full backup every Saturday night, but run within a small throughput range all other days.)  
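The idea above can be sketched in a few lines: chart transactions per unit time across runs and check how spread out they are. This is a minimal sketch with made-up job names and numbers (none of them are from the thread); the coefficient of variation is just one convenient way to summarize the spread.

```python
# Gauge "before" throughput variability from recorded batch-job history.
# All (job, transactions, elapsed_minutes) tuples below are hypothetical.
from statistics import mean, stdev

runs = [
    ("nightly_load", 120_000, 60),
    ("nightly_load", 118_000, 58),
    ("nightly_load", 121_500, 62),
    ("nightly_load",  60_000, 95),   # Saturday run vs. full backup (known cause)
]

def throughput(txns, minutes):
    """Transactions per minute for one run."""
    return txns / minutes

rates = [throughput(t, m) for _, t, m in runs]
cv = stdev(rates) / mean(rates)          # coefficient of variation
print("rates (txn/min):", [round(r) for r in rates])
print(f"coefficient of variation: {cv:.2f}")
```

If the variation is high only because of runs with a known cause (the backup clash here), either exclude those runs or pick a different metric, as the post suggests.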

If you have a particular job or application that was previously experiencing delays due to writing redo logs, is it faster now?  

If you were not previously experiencing delays due to writing and archiving redo logs, then you should not expect large differences in behavior under the same load. That comes under the heading of having fixed a problem you didn't have. It might not be all bad, though, because you probably did increase your peak throughput capacity by reducing the service time of flushing the log buffer and archiving. If you were previously experiencing delays due to writing and archiving redo logs, then performance should improve. Reductions in waits plus improvements in service times are part of the picture (and essentially the whole picture if you're trying to solve a particular problem), but if your redo logs were intermingled with other actual I/O on your disk farm and you were pushing your throughput capacity, you may discover that things are now faster across the board. Measuring that after the fact, without a "before" set of values, is not going to be easy.  
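The "reductions in waits plus improvements in service times" comparison is just a delta of average wait time per event between two equal windows. A minimal sketch, assuming the (total_waits, time_waited) pairs were pulled from AWR snapshots or V$SYSTEM_EVENT for the two periods; the numbers here are invented for illustration, not taken from the thread:

```python
# Compare average wait per event between a "before" and "after" window.
# Values are hypothetical: (total_waits, time_waited in centiseconds),
# i.e. the units V$SYSTEM_EVENT reports time in.
before = {"log file sync":           (500_000, 2_500_000),
          "log file parallel write": (480_000, 1_920_000)}
after  = {"log file sync":           (500_000,   500_000),
          "log file parallel write": (480_000,   240_000)}

def avg_wait_ms(snapshot, event):
    """Average wait in ms: time_waited (cs) * 10 / total_waits."""
    waits, time_cs = snapshot[event]
    return time_cs * 10 / waits

for event in before:
    b, a = avg_wait_ms(before, event), avg_wait_ms(after, event)
    print(f"{event}: {b:.1f} ms -> {a:.1f} ms")
```

The catch, as noted above, is that this only works if you captured the "before" numbers over a comparable load window; without them there is nothing to subtract.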

Likewise, if this change was a forward-looking attempt to increase maximum throughput, you might not notice any change right now. Upgrading from standard (US) 80-inch-high doors to 88-inch doors is not something I notice much. But if you expect Kareem Abdul-Jabbar to become a frequent visitor, they would be nice. (Hmm, maybe I should have updated that to Shaq for you youngsters.)  



From: [] On Behalf Of Deepak Sharma
Sent: Tuesday, September 09, 2008 12:33 AM
To:
Subject: Measuring the impact of Redo Log change  

We recently (today) moved the redo logs from RAID5 to RAID1+0.

The obvious reason for doing this is to make things better in terms of performance, as RAID 1+0 is better suited for reads/writes. Someone please correct me if that's not true.

What are different ways to measure the impact of this change?

The platform is AIX, DB is and DB size is 55TB, and 2TB redo is generated each day. There are plans underway to reduce the redo generation using direct path etc. (so let's not get diverted by that).
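The 2 TB/day figure can be turned into a sustained average write rate with quick arithmetic (actual peaks will be much higher than the average, so this is only a floor for sizing):

```python
# Back-of-envelope: 2 TB of redo per day as an average write rate.
TB = 1024 ** 4                      # bytes, binary terabyte
redo_per_day = 2 * TB
seconds_per_day = 86_400
mb_per_sec = redo_per_day / seconds_per_day / 1024 ** 2
print(f"average redo rate: {mb_per_sec:.1f} MB/s")
```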

We can try measuring the wait event stats using AWR, session stats, etc., but what exactly should we look for? 'log file switch completion' and 'log file sync' are some things that come to mind - what else?


Received on Tue Sep 09 2008 - 02:53:06 CDT
