Re: 4 Node RAC Cluster GigE vs Infiniband

From: Daniel Morgan <damorgan_at_x.washington.edu>
Date: Sat, 25 Sep 2004 17:59:00 -0700
Message-ID: <1096160417.745801@yasure>


Steffen Roegner wrote:
> Daniel Morgan wrote:
>

>> Rodrick Brown wrote:
>>
>>> Hello all, I've been getting mixed answers on how fast the
>>> interconnect should be between the RAC nodes: some say 100 Mb, GigE,
>>> or InfiniBand. I'm planning on using Oracle 10g for a geographical
>>> information system database, currently on Sun Fire V280Rs
>>> (2x 1.2 GHz, 8 GB memory) in a 4-way RAC cluster. Has anyone deployed
>>> something similar on GigE, and what was the response time like on
>>> big queries?
>>>
>>> I'm planning on using Veritas Cluster File System and have it under
>>> ODM, if it makes any difference.
>>>
>> Faster is better, so if you have the money for InfiniBand, use it. It
>> is quite probable that GigE would be sufficient, but without testing
>> that is impossible to predict. Forget anything slower than GigE.
>>
>> But why spend money on Veritas when it is not required? For far less
>> money than the V280s you could pick up some DL360s or equivalent and
>> use Red Hat EL AS and Oracle's clusterware at no additional charge.
>> It would save your organization tens of thousands of dollars and
>> give you equal or better performance. And if you want better
>> performance, look at IBM's P5s.
>>

>
> Rodrick,
>
> first of all, I'd like to agree completely with Daniel. We have a
> 4-node RAC running on Dell 2650s here, and we indeed use Red Hat EL 3.
> The thing runs very stably, and I would point your attention away from
> the interconnect to the shared storage. We saw some driver issues with
> the fibre channel HBAs (QLogic) in the beginning, which were solved
> in time by Red Hat. So depending on what your application does, you
> need fast storage and AFTER THAT a fast interconnect link. We are fine
> with GB Ethernet here - there are waits in the perfstat report for
> global cache cr requests (especially during batches), but the resulting
> wait time is low enough compared to disk reads and CPU. We still
> consider disk performance our biggest problem (besides the application
> itself, of course :-), so I would recommend putting as much effort as
> possible into storage design.
>
> This is from a snapshot taken during one of our batch jobs:
>
> Top 5 Timed Events
> ~~~~~~~~~~~~~~~~~~                                               %
> Event                           Waits     Time (s)        Ela Time
> ------------------------ ------------ ------------ ---------------
> db file sequential read       482,314        2,011           43.53
> CPU time                                      1,107           23.97
> db file scattered read         47,597          867           18.77
> db file parallel read           8,271          240            5.19
> global cache cr request     1,380,502          184            3.97
>
> If you look at this, using InfiniBand or whatever faster solution
> instead of el-cheapo Dell GB switches and Broadcom onboard chips
> would probably lower the latency, let's say by a factor of three -
> that would then bring global cache cr request down to roughly 1.3%
> instead of 3.97% of the elapsed time for the same number of requests.
>
> But well, maybe this is just a rough estimate.
>
> hth
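
To put rough numbers on that (and the factor of three is only a guess),
working straight from the snapshot above:

   184 s / 1,380,502 waits  ~= 0.13 ms per global cache cr request
   184 s / 3                ~= 61 s, or roughly 1.3% of elapsed time
                               instead of 3.97%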

We are using NetApp 920s ... 3 of them ... and are very happy with the
performance for what we are doing. One nice thing is that they come
cluster-aware.
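
If you want a quick look at the same sort of numbers outside of
statspack, something along these lines against gv$system_event should
do as a rough sketch (it is cumulative since instance startup rather
than a per-interval delta, and it includes idle events, so take it
with a grain of salt):

  -- top waits per instance, cumulative since startup (rough sketch only)
  select *
    from (select inst_id, event, total_waits,
                 round(time_waited / 100) as time_waited_s  -- centiseconds
            from gv$system_event
           order by time_waited desc)
   where rownum <= 15;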

-- 
Daniel A. Morgan
University of Washington
damorgan_at_x.washington.edu
(replace 'x' with 'u' to respond)
Received on Sat Sep 25 2004 - 19:59:00 CDT
