Re: gc buffer busy and rac testing

From: Greg Rahn <>
Date: Thu, 8 Jan 2009 23:35:51 -0800 (PST)
Message-ID: <>

On Jan 8, 5:05 am, Mladen Gogala <> wrote:
> The thing to try would be to restrict the Swingbench program to the 1st
> node and run queries and reports from the 2nd node. That would be a crude
> version of the functional partitioning.

Ideally you would want all access (write or read) for a given block to happen on the same node. This prevents CR copies from being sent over the interconnect.

> Also, please remember that RAC is a redundancy and availability option, not a performance option.

Not true. RAC can scale very well, but the application has to be scalable as well. OLTP workloads can be more challenging to scale and may require data-dependent routing, but data warehousing workloads are very easy to scale.

> One big blue P6 box will run circles around 20 Dell PC's with quad-CPU boards and
> 64GB each. On paper, RAC with Dell boxes will be much faster, but in
> reality, one small P-595 with 32 cores and 128 GB RAM will run circles
> around any Intel-based configuration for OLTP.

16 Power6 CPUs (32 cores) vs. 20 Intel Xeon quad-core processors (80 cores)? A P6 core is probably around twice as fast as a Xeon core, but at 32 cores vs. 80, I think the P6 box would lose. Now, I will say that it will take more engineering to get there (data routing), but it certainly could be done. Is the RAC "tax" plus the extra engineering effort worth it? It all depends...

The other thing you are not considering is that with 1280GB RAM (20 x 64GB) vs. 128GB, 10X as much data can be in the buffer cache. Given that RAM access is on the order of 1000x faster than disk access (microseconds vs. milliseconds), you would be giving up quite a bit. Even a remote buffer cache CR would be faster than a disk access.
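To make the buffer cache argument concrete, here is a back-of-envelope sketch of average block access time. All latencies and hit ratios below are hypothetical placeholders, not measured numbers; the point is only that a remote cache hit over the interconnect, while slower than a local hit, still beats a physical disk read by an order of magnitude.

```python
# Back-of-envelope average block access time (all numbers are assumed,
# illustrative values, not measurements of any real system).

LOCAL_CACHE_US = 1       # local buffer cache get, ~1 microsecond (assumed)
REMOTE_CACHE_US = 300    # CR block shipped over the interconnect (assumed)
DISK_US = 5000           # physical disk read, ~5 milliseconds (assumed)

def avg_access_us(local_hit, remote_hit):
    """Weighted average access time for given local/remote cache hit ratios."""
    miss = 1.0 - local_hit - remote_hit
    return (local_hit * LOCAL_CACHE_US
            + remote_hit * REMOTE_CACHE_US
            + miss * DISK_US)

# Single big SMP box: smaller cache, so more misses go to disk.
smp = avg_access_us(local_hit=0.60, remote_hit=0.0)

# RAC cluster: 10x aggregate cache, so many former disk reads become
# remote cache transfers instead.
rac = avg_access_us(local_hit=0.60, remote_hit=0.35)

print(f"SMP avg access: {smp:8.1f} us")   # 0.6 + 2000.0       = 2000.6 us
print(f"RAC avg access: {rac:8.1f} us")   # 0.6 + 105.0 + 250.0 =  355.6 us
```

With these illustrative numbers, turning disk reads into remote cache hits cuts the average access time by roughly 5x, which is the intuition behind the ERP/RAC point below.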

It is for this exact reason that ERP apps scale quite well with RAC - data access via buffer cache is faster than via disk access, even if it is a remote buffer cache.

Greg Rahn
Received on Fri Jan 09 2009 - 01:35:51 CST
