RE: Oracle RAC interconnect method - 10g, Infiniband ?

From: Crisler, Jon <>
Date: Tue, 23 Feb 2010 18:02:27 -0500
Message-ID: <56211FD5795F8346A0719FEBC0DB0675060FEBE7_at_mds3aex08.USIEXCHANGE.COM>

This is going to be OLTP - thanks for the doc reference.

-----Original Message-----
From: Greg Rahn []
Sent: Tuesday, February 23, 2010 4:56 PM
To: Crisler, Jon
Subject: Re: Oracle RAC interconnect method - 10g, Infiniband ?

What kinds of workloads? OLTP? BI/DW? On Linux?

Personally, I'd recommend using RDS over IB. I've been using it for several years (4+), and it's also what the Sun Oracle Database Machine uses (if that matters at all).

The main benefit of using jumbo frames with IP is reduced CPU (sys time): with the default block size of 8 KB or smaller, a single data block fits within a single frame, so no splitting/reassembly is required.
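To make the arithmetic concrete, here is a small illustrative sketch (the 28-byte figure assumes a 20-byte IP header plus an 8-byte UDP header; real IP fragmentation handles headers slightly differently, but the comparison holds):

```python
import math

def frames_needed(block_size: int, mtu: int, ip_udp_overhead: int = 28) -> int:
    """Frames required to carry one database block over UDP/IP.

    Simplified model: every frame loses `ip_udp_overhead` bytes to
    headers (20-byte IP + 8-byte UDP), so usable payload per frame
    is mtu - ip_udp_overhead.
    """
    payload = mtu - ip_udp_overhead
    return math.ceil(block_size / payload)

# 8 KB block over a standard 1500-byte MTU: split across 6 frames
print(frames_needed(8192, 1500))   # -> 6
# Same block with a 9000-byte jumbo-frame MTU: fits in 1
print(frames_needed(8192, 9000))   # -> 1
```

Every extra fragment costs the kernel splitting work on send and reassembly work on receive, which is exactly the sys-time overhead jumbo frames avoid.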

10GbE may gain more of a following once iWARP (RDMA over Ethernet) gets more exposure.

Might be a useful reference:

On Tue, Feb 23, 2010 at 10:29 AM, Crisler, Jon <> wrote:
> I am looking for some input from DBAs who have worked on Oracle RAC using
> something faster than gigabit Ethernet.  We are getting ready to build some
> large Linux RAC clusters that may have as many as 10-14 nodes, so the
> interconnect performance has me concerned.  I noticed that 10 GbE
> switches are starting to become more mainstream, so if anybody could share
> their experiences using this equipment with RAC, I would appreciate
> it.  Did it help performance?  How troublesome was the install /
> configuration of the hardware and server NICs / drivers?
> On that note, if anybody has performance experience running RAC back
> to back comparing jumbo frames vs. standard frames, that would also be
> helpful.

Greg Rahn
Received on Tue Feb 23 2010 - 17:02:27 CST
