Re: gc buffer busy and rac testing

From: Mladen Gogala <gogala.mladen_at_gmail.com>
Date: Fri, 09 Jan 2009 09:49:57 +0100
Message-ID: <gk733m$k8r$1_at_news.motzarella.org>



Greg Rahn wrote:

> On Jan 8, 5:05 am, Mladen Gogala <gogala.mla..._at_gmail.com> wrote:

>> The thing to try would be to restrict the Swingbench program to the 1st
>> node and run queries and reports from the 2nd node. That would be a crude
>> version of the functional partitioning.

>
> Ideally you would want any access (write or read) for a given block to
> be on the same node. This will prevent the CR copies being sent over
> the interconnect.

Yes, of course, but that would also severely limit the usefulness of RAC. It would also require a much more serious effort in application partitioning. That is why I called it "a crude version of functional partitioning".
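For anyone who wants to try that kind of split without touching the application, RAC services are the usual tool: define one service per workload, pin each to a preferred instance, and point the connect strings at the services instead of the database. A rough sketch, assuming a two-node 10g-style cluster named ORCL with instances orcl1 and orcl2 (all names hypothetical):

    # OLTP work prefers node 1, reporting prefers node 2;
    # each service can fail over to the other node if its home node dies.
    srvctl add service -d ORCL -s oltp_svc   -r orcl1 -a orcl2
    srvctl add service -d ORCL -s report_svc -r orcl2 -a orcl1
    srvctl start service -d ORCL -s oltp_svc
    srvctl start service -d ORCL -s report_svc

Swingbench would then connect through oltp_svc and the reports through report_svc, which keeps most of the hot blocks on their "home" instance as long as nobody crosses over.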

>

>> Also, please remember that RAC is a redundancy and availability option,
>> not a performance option.

>
> Not true. RAC can scale very well, but the application has to be
> scalable also.

And that is the contention point. If you buy your friendly neighborhood general ledger or CRM application from your local vendor, it is highly unlikely that the application will scale well. Also, people try to save money by developing on a small non-clustered box while deploying on RAC.

> OLTP workloads can be more challenging to scale and may require
> data dependant routing

Yes. I had more than one consulting assignment doing precisely that. Now we're talking about my livelihood.

> but data warehousing workloads are
> very easy to scale.

No contention there. RAC, partitioning, bitmap indexes and direct load are a dream come true for DW developers.
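To make the combination concrete, here is a toy example (table, column and staging names are made up) of the kind of DW loading pattern I mean:

    -- Range-partitioned fact table; partition pruning keeps scans local.
    CREATE TABLE sales (
      sale_id   NUMBER,
      sale_date DATE,
      region_id NUMBER,
      amount    NUMBER
    )
    PARTITION BY RANGE (sale_date) (
      PARTITION p2008 VALUES LESS THAN (TO_DATE('2009-01-01','YYYY-MM-DD')),
      PARTITION p2009 VALUES LESS THAN (TO_DATE('2010-01-01','YYYY-MM-DD'))
    );

    -- Bitmap index on a low-cardinality column, local to each partition.
    CREATE BITMAP INDEX sales_region_bix ON sales (region_id) LOCAL;

    -- Direct-path ("direct load") insert: writes formatted blocks above
    -- the high-water mark and bypasses the buffer cache.
    INSERT /*+ APPEND */ INTO sales
    SELECT sale_id, sale_date, region_id, amount FROM sales_staging;
    COMMIT;

Spread the partitions across instances and the nodes barely have to talk to each other, which is exactly why DW workloads scale so easily on RAC.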

>

>> One big blue P6 box will run circles around 20 Dell PC's with quad-CPU
>> boards and 64GB each. On paper, RAC with Dell boxes will be much faster,
>> but in reality, one small P-595 with 32 cores and 128 GB RAM will run
>> circles around any Intel-based configuration for OLTP.

>
> 16 Power6 CPUs (32 cores) vs. 20 Intel Xeon Quad-core processors (80
> cores)? The P6 core is probably around twice as fast as the Xeon core
> but with 32 vs. 80, but I think the P6 would lose.

Again, it depends on the application. For a generic OLTP application which is mostly non-scalable, I am rather certain that the P-595 would run circles around RAC. I've seen it happen at Oxford Health Plans with 4 HP 9000/N machines running a homegrown claims processing application. Those 4 HP nodes were replaced by a single IBM P 595 (with POWER4's at the time), only to discover that the performance was better than it used to be. It wasn't an exact comparison, because the version change (8i -> 9i) was done simultaneously with the platform change, but the performance was better. The boxes it replaced were big HP PA-RISC boxes with 8 CPUs and 16GB RAM each.

I am fully aware that I am comparing apples to oranges, because 9i had very significant RAC improvements over the OPS technology in 8i. Cache fusion was in its infancy on 8i and was turned off for the most part; the DBA group was monitoring "false pings" and we were using the GC_FILES_TO_LOCK parameter to dedicate static PCM locks to files. I would very much have liked to benchmark 9i RAC against the big box, but I wasn't allowed to do that. Unfortunately, OXHP no longer exists; it was bought by United Health, a company that uses UDB and z/OS, so the question will remain unanswered. Arup Nanda was working with me on the same configuration.

Also, if there is one thing that a big box like the P-595 does well, it's I/O. That can be a big factor, since most DB servers are I/O bound rather than CPU bound. Oracle has complicated things a bit with the Exadata hardware, which can do "smart scans" on the disk array itself. That could indeed prove to be a very hard blow for traditional servers like the P-595.

Given that Exadata cannot be used for file serving and file systems, and that it cannot be used for anything other than Oracle, I am not sure how well that "brainy SAN" will sell. It all depends on the pricing, and I really do hope that the Oracle and HP marketing guys will not try to make too much money too fast, because Exadata might go the way of the "database machine" and the "network computer", the previous two pieces of hardware that Oracle Corp. tried to push. It's now up to the market forces.

> Now, I will say that it will take more engineering to do so (data
> routing) but it certainly could be done.
> Is the RAC "tax" plus the extra engineering effort worth it? It all
> depends...

The right answer is "it depends". Unfortunately, magazine readers in CIO positions sometimes think that RAC is the proverbial silver bullet which will rid them of expensive consultants and wizards while providing good service and availability. I'd rather say that a good consultant or DBA is the silver bullet. Unfortunately, a senior DBA is much more expensive than a Dell box.
In the world of "DBA 2.0", which can be described as a chimp with OEM, a traditional, cautious DBA who urges restraint and proper testing is usually seen as an obstacle to meeting deadlines and deploying the application in production, and is gladly replaced by a Dell box.

-- 
http://mgogala.freehostia.com