
Re: DB2 Crushes Oracle RAC on TPC-C benchmark

From: JEDIDIAH <jedi_at_nomad.mishnet>
Date: Thu, 03 Feb 2005 14:49:43 GMT
Message-ID: <1107442183.deda828781f4ae07bc28c6795dc7dbce@1usenet>

On 2005-02-02, Serge Rielau <> wrote:
> DA Morgan wrote:
>> Niall Litchfield wrote:
>>> DA Morgan wrote:
>>>> Maybe I'm showing my innocence here but if Mark gives away RAC
>>> licenses
>>>> for free it costs Oracle one piece of 8 1/2 x 11 paper. I think of
>>>> hardware as being different.
>>> One of the big ways in which RAC has been sold is that you can save big
>>> bucks by throwing out all that expensive hardware. Now if RAC is free
>>> or nearly so then that would be fair enough, but last time I looked the
>>> list price of RAC was $20k per processor. So instead of buying an 8 way
>>> Unix box (say the sun v880 at 86k list for 8 processors and 16gb RAM)
>>> you buy 2 4 way dell boxes with linux on them (I've used 2 4 way power
>>> edge 7250s with 8gb ram each) for 47k and save 40k on hardware. Great.
>>> You've also added 256k, including the standard discount, to the
>>> system price. So that is paying 200k more for a more complex and
>>> less widely adopted system.
>>> Incidentally even without RAC the software was the significant cost of
>>> the system anyway.
>>> Niall
>> The published list price for RAC is $20K/proc AFAIK but I have never
>> seen anyone actually pay that price. Here's the calculation I had
>> the procurement folks at a Boeing division do for me last year for
>> a project to the extent that I can divulge the numbers.
>> 2 x 4CPU H/P-Compaq 1U servers $11K
>> 8 x RAC licenses (using the published price) $160K (it was less)
>> total cost $175K with rounding up for miscellaneous items.
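The arithmetic in the quoted Boeing quote can be checked in a few lines. This is only a sketch using the figures stated in the post (hardware $11K for the pair, published RAC list price $20K per processor, 8 processors total):

```python
# Figures quoted in the post; the breakdown is a sketch, not an invoice.
hardware = 11_000                 # 2 x 4-CPU HP/Compaq 1U servers, total
rac_list_price = 20_000           # published RAC list price per processor
licenses = 8 * rac_list_price     # 8 processors across the two nodes
subtotal = hardware + licenses    # 171_000 before miscellaneous items

print(licenses, subtotal)         # the post rounds the total up to $175K
```

As the post notes, the actual license cost paid was below list, so the real subtotal was lower still.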
> But if one machine fails you risk thrashing the system because it will
> be 100% overloaded. How does Oracle react when you "exceed" 100% CPU?
>> 2 x 8CPU equivalent box from Sun because if one machine failed
>> we would need an identical machine available as a cold standby.
>> A lot more money.
>> ===============================================================
>> The actual apples-to-apples comparison is pretty much this:
>> generic hardware + RedHat Linux + RAC licenses = $250K US
>> Sun hardware + Solaris with no RAC licensing = $750K US
>> The purchase included 3 NetApp 920 but that was the same with
>> either configuration. We gladly gave Larry his money and gained
>> TAF without cold failover in the process.
>> Not bad for a day's work.
>> ===============================================================
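The apples-to-apples claim above reduces to a one-line difference. Only the two totals come from the post; no per-component breakdown is given:

```python
# Totals as quoted in the post; no finer breakdown is available.
rac_config = 250_000   # generic hardware + RedHat Linux + RAC licenses
sun_config = 750_000   # Sun hardware + Solaris, cold standby, no RAC

savings = sun_config - rac_config
print(savings)         # -> 500000, the gap the post attributes to RAC
```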
>> But the real advantage was that we knew we wouldn't need a
>> forklift in the future. Here's the most important part of the

        No, you just need a bigger forklift up front. That's not something to be trivialized: clustering is new expertise that a shop needs to have above and beyond simpler SMP implementations.

>> RAC savings.
> You think you don't need a forklift. Let's come back to this thread in 2
> years when Boeing is running on 5 nodes.
>> Development ... one cluster with 2 nodes.
>> Testing ... one cluster with 2 nodes.
>> Production ... one cluster with 2 nodes initially.
> See with the Sun approach you could have shared the idle standby for

        With the IBM approach you could keep most of the CPUs turned off and only pay for them when you actually need to turn them on. The big upfront cost with either the Sun or IBM approach is paying for that class of hardware. p690s and E15Ks are more expensive per quantum of machine.

        However, slamming another system board into an E15K domain is remarkably simpler and easier than adding to a cluster.

> development and testing (kick the developers out in case of fail over).
> That leaves 750k (3 * 250k) facing 750k. Earlier it was stated that
> Unix servers are more powerful per CPU...
>> as the need for more resources increases ... add more nodes
>> one at a time. Keeps the cost of hardware in line with need
>> and revenues. And as you add new nodes with faster CPUs the
>> load balancing improves performance.

        NUMA gives you the same benefit without the need to re-engineer the system and retrain your people.
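The scale-out-as-needed argument above can be sketched with a toy spending model. All of the prices here are hypothetical placeholders for illustration; none of them appear in the thread:

```python
# Hypothetical prices -- illustrative only, not figures from the thread.
node_cost = 25_000        # one commodity cluster node
big_box_cost = 400_000    # one SMP box sized for the load expected in 2+ years

# Scale-out: start with two nodes, add one per year as demand grows.
scale_out = [2 * node_cost, node_cost, node_cost]    # spend in years 0, 1, 2
# Scale-up: primary plus cold standby, both bought on day one.
scale_up = [2 * big_box_cost, 0, 0]

print(sum(scale_out), sum(scale_up))  # total spend over the three years
```

The point being argued is not just the totals but the timing: the scale-out column defers most of its spending, while the scale-up column pays everything in year zero for capacity that may sit idle.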

>> The SMP alternative is to either purchase, on day one, two boxes
>> big enough to handle the anticipated requirement 2+ years in the

        Not quite.

        The SMP alternative requires you to purchase two boxes big enough to be expanded to handle the anticipated requirement. Sophisticated SMP machines can be serviced while applications are still online.

>> future or expect to have to forklift out the current box after
>> a year or so (by which point it will be worth only a fraction of
>> its original cost as it will be obsolete) and replace it with two
>> brand new bigger boxes.
>> Try to sell a CFO on buying two very large computers, one of
>> which will hopefully never be utilized ... just sit there
>> idling in backup mode ... with all of the costs up front versus

        That's easy. Simply tell him that the box can be upgraded to meet future capacity as easily as putting Lego bricks together. This can be done with zero impact to running applications and with minimal staff intervention.

>> buying commodity hardware on an as-needed basis with the
>> changes over time in hardware performance benefiting the
>> overall ROI.

Commodity hardware also has a shorter support lifecycle, which tends to increase the diversity of hardware in the shop while limiting the lifetime of the nodes of your cluster.

>> I've yet to meet the CFO who, when shown the numbers,
>> didn't make that decision for the IT folks using a very large
>> hammer.

     If you think that an 80G disk can hold HUNDREDS of            |||
hours of DV video then you obviously haven't used iMovie either.  / | \

Received on Thu Feb 03 2005 - 08:49:43 CST
