Re: 4 Node RAC Cluster GigE vs Infiniband
Daniel Morgan wrote:
> Rodrick Brown wrote:
>
>> Hello all, I've been getting mixed answers on how fast the interconnect
>> should be between the RAC clusters - some say 100mb, GigE and
>> Infiniband. I'm planning on using Oracle 10g for a Geographical
>> Information System database. Currently I'm planning on using Sun Fire
>> V280R's (2x 1.2GHz, 8GB mem) in a 4 way RAC cluster. Has anyone deployed
>> something similar on GigE, and what was the response time like on big
>> queries?
>>
>> I'm planning on using Veritas Cluster File System and have it under
>> ODM, if it makes any difference.
Rodrick,
first of all, I'd like to totally agree with Daniel. We have a 4 node RAC running on Dell 2650s here, and we do indeed use Red Hat EL 3. The thing runs very stably, and I would point your attention away from the interconnect and toward the shared storage. We saw some driver issues with the fibre channel HBAs (QLogic) in the beginning, which were solved in time by Red Hat.

So depending on what your application does, you need fast storage and AFTER THAT a fast interconnect link. We are fine with GB Ethernet here - there are waits in the perfstat report for global cache cr requests (especially during batches), but the resulting wait time is low compared to the time spent on disk reads and CPU. We still consider disk performance our biggest problem (besides the application itself, of course :-), so I would recommend putting as much effort as possible into storage design.
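If you want to see where the time goes on your own cluster without waiting for a perfstat snapshot, you can pull the same numbers from the wait-event views. A minimal sketch in Python, assuming the cx_Oracle driver; the login and DSN below are placeholders, and the exact event names vary by release (9i reports "global cache cr request", 10g renames it to "gc cr request"):

  # Rank interconnect- and disk-related wait events across all RAC
  # instances. Connection details here are placeholders - replace them.
  import cx_Oracle

  conn = cx_Oracle.connect("perfstat", "secret", "racdb")
  cur = conn.cursor()

  # GV$SYSTEM_EVENT holds cumulative waits per instance since startup;
  # TIME_WAITED is in centiseconds, hence the division by 100.
  cur.execute("""
      SELECT inst_id, event, total_waits, time_waited / 100 AS seconds
        FROM gv$system_event
       WHERE event IN ('global cache cr request', 'gc cr request')
          OR event LIKE 'db file%read'
       ORDER BY time_waited DESC
  """)

  for inst_id, event, waits, seconds in cur:
      print("inst %d  %-28s %12d waits %10.1f s"
            % (inst_id, event, waits, seconds))

  cur.close()
  conn.close()

Mind that these counters are cumulative since instance startup, so for a batch job you'd sample before and after and diff the numbers, which is essentially what perfstat does for you.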
This is from a snapshot taken during one of our batch jobs:
Top 5 Timed Events
~~~~~~~~~~~~~~~~~~                                           % Total
Event                                  Waits    Time (s)    Ela Time
------------------------------- ------------ ----------- -----------
db file sequential read              482,314       2,011       43.53
CPU time                                           1,107       23.97
db file scattered read                47,597         867       18.77
db file parallel read                  8,271         240        5.19
global cache cr request            1,380,502         184        3.97
If you look at this, using Infiniband or whatever faster solution instead of el-cheapo Dell GB switches and Broadcom onboard chips would probably lower the latency, let's say by a factor of three - that would bring global cache cr request down from 184 to roughly 61 seconds for the same number of requests, i.e. from about 3.97 to roughly 1.4 percent of the elapsed time.
But well, maybe this is just a rough estimate.
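For what it's worth, the arithmetic behind that guess is trivial. A minimal sketch (the factor of three is a pure assumption; the other numbers come from the report above):

  # Effect on the global cache cr request share if interconnect
  # latency drops by an assumed factor of three.
  gc_cr = 184.0            # seconds, from the Top 5 Timed Events
  total = gc_cr / 0.0397   # ~4,635 s total, derived from the 3.97% share
  speedup = 3.0            # assumed Infiniband latency improvement

  new_gc_cr = gc_cr / speedup              # ~61 s
  new_total = total - gc_cr + new_gc_cr    # total shrinks by the saving
  print("%.0f s, %.2f%% of elapsed time"
        % (new_gc_cr, 100 * new_gc_cr / new_total))
  # -> about 61 s and 1.36%, compared to 43.53% for db file sequential read

Which is the point: even a 3x faster interconnect only moves a 4% slice, while the disk reads dominate the profile.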
hth
--
Steffen Roegner
http://www.sroegner.de
http://www.gagabut.de

Received on Sat Sep 25 2004 - 04:59:16 CDT