Re: DBWR - How Many is Too Many?

From: Robyn <robyn.sands_at_gmail.com>
Date: Wed, 27 Feb 2008 14:28:13 -0500
Message-ID: <ece8554c0802271128x24c5e251k25c8c1e99ce50b66@mail.gmail.com>


Had a similar experience on SAP with a SAN. I set our db_writer_processes based on the number of CPUs - we had 8 actual CPUs, so 8 DB writers.
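For reference, a sketch of what that change looks like - db_writer_processes is a static parameter, so this assumes you're using an spfile and have a restart window:

```sql
-- db_writer_processes is static: set it in the spfile, then bounce the instance
ALTER SYSTEM SET db_writer_processes = 8 SCOPE = SPFILE;
SHUTDOWN IMMEDIATE
STARTUP
-- verify after restart
SHOW PARAMETER db_writer_processes
```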

If you are on a new SAN, it could be that the SAN is not configured properly. I've used system stats to convince the SAN people to investigate SAN performance issues in the past. In one case they found a channel that was configured incorrectly, and in another we ended up getting a new SAN.

If you haven't collected system stats yet, collect a set during peak processing and share the numbers with the SAN team. If the numbers are worse than they would expect, it might convince them to look into it for you.
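If it helps, here's a sketch of capturing a workload set of system stats over a peak window using the 9i DBMS_STATS interface (assumes you have the privileges to run it and to read the SYS table):

```sql
-- open a workload capture window as peak processing starts
EXEC DBMS_STATS.GATHER_SYSTEM_STATS('START');
-- ... let the peak workload run for a while, then:
EXEC DBMS_STATS.GATHER_SYSTEM_STATS('STOP');

-- inspect the captured figures; SREADTIM/MREADTIM are the average
-- single-block and multiblock read times in ms - those are the
-- numbers worth showing the SAN team
SELECT pname, pval1 FROM sys.aux_stats$;
```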

Any other changes since moving to the SAN?

Robyn

On Wed, Feb 27, 2008 at 1:53 PM, David Barbour <david.barbour1_at_gmail.com> wrote:

> We recently moved our database to a new SAN. Performance has just
> tanked. Here's the environment:
> AIX5.3L
> Oracle 9.2.0.7
> SAN - IBM DS4800
>
> We've got 8 filesystems for Oracle data files. Redo, Archive, Undo and
> Temp are all on separate disks/filesystems from the data files.
>
> All the Oracle datafiles are on RAID5 LUNs with 12 15K RPM 73 (68 usable)
> GB drives. SAN Read and Write Caching are both enabled.
>
> A statspack (generally for any given interval - this was for a period of
> "light" processing) shows me our biggest hit is:
> Buffer wait Statistics for DB: PR1 Instance: PR1 Snaps: 12609 -12615
> -> ordered by wait time desc, waits desc
>
> Tot Wait Avg
> Class Waits Time (s) Time (ms)
> ------------------ ----------- ---------- ---------
> data block 278,194 20,811 75
>
> sar is scary (just a small portion)
>
> AIX r3prdci1 3 5 00CE0B8A4C00 02/27/08
>
> System configuration: lcpu=8
>
> 00:00:00 %usr %sys %wio %idle physc
> 02:15:01 19 19 42 19 4.00
> 02:20:00 21 25 40 14 4.00
> 02:25:00 19 18 43 20 4.00
> 02:30:00 18 18 43 21 4.00
> 02:35:00 20 24 40 16 4.00
>
> We're running JFS2 filesystems with CIO enabled, 128k element size on the
> SAN and AIO Servers are set at minservers = 220 and maxservers = 440
> We've got 32GB of RAM on the server and 4 CPUs (which are dual core for
> all intents and purposes - they show up as eight). We're running SAP which
> has its own memory requirements. I've configured my SGA and PGA using
> Automatic Memory Management and the SGA currently looks like:
> SQL> show sga
>
> Total System Global Area 1.0739E+10 bytes
> Fixed Size 757152 bytes
> Variable Size 8589934592 bytes
> Database Buffers 2147483648 bytes
> Redo Buffers 1323008 bytes
>
> filesystemio_options = setall
>
> I'm thinking the data block waits are the result of too many modified
> blocks in the buffer cache. Solution would be to increase the number of
> db_writer_processes, but we've already got 4. Metalink, manuals, training
> guides, Google, etc. seem to suggest two answers.
>
> 1. One db writer for each database disk - in our case that would be 8
> 2. CPUs/8 adjusted for multiples of CPU groups - in our case that would
> be 4
>
> Any thoughts?
>
>
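P.S. A quick sanity check on the numbers quoted above, using only the figures from the post - the per-wait average really is ~75 ms, which is enormous for I/O that should be hitting a cached SAN, and note that only 2 GB of the ~10.7 GB SGA is Database Buffers:

```python
# Figures from the statspack excerpt quoted above
data_block_waits = 278_194      # waits on the "data block" buffer class
total_wait_s = 20_811           # total wait time, seconds

avg_wait_ms = total_wait_s / data_block_waits * 1000
print(f"avg data block wait: {avg_wait_ms:.1f} ms")  # ~74.8 ms, matching the reported 75 ms

# The SGA components from 'show sga' also sum to the reported total,
# with only 2 GB of it allocated as Database Buffers
fixed, variable, buffers, redo = 757_152, 8_589_934_592, 2_147_483_648, 1_323_008
total = fixed + variable + buffers + redo
print(f"SGA total: {total:,} bytes")  # 10,739,498,400 ~= 1.0739E+10
```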

-- 
I may not have gone where I intended to go, but I think I have ended up
where I needed to be.
Douglas Adams

--
http://www.freelists.org/webpage/oracle-l
Received on Wed Feb 27 2008 - 13:28:13 CST
