Re: Exadata V2 Compute Node 10GigE PCI card installation

From: Greg Rahn <greg_at_structureddata.org>
Date: Wed, 9 Nov 2011 07:09:11 -0800
Message-ID: <CAGXkmiu1zvxH5feYT8nER9FRVuFGJRRysVJ0cr6=FqBDUb+XAw_at_mail.gmail.com>



Options #1 and #2 are not allowed, so that leaves you with #3 and #4.

On Wed, Nov 9, 2011 at 2:47 AM, Vishal Gupta <vishal_at_vishalgupta.com> wrote:
> Hi,
>
> One of my clients currently has Exadata V2 racks, which do not have a 10GigE card in them. We are thinking of provisioning Tier-2 storage via dNFS to a NAS filer (a NetApp, in this case). Since the V2 compute nodes have only 40Gb/s InfiniBand and 1GigE cards, we can connect to the NAS filer at only 1Gb/s. Oracle is quoting a ridiculous price (even the thought of mentioning the range makes me a little unhappy) for their ZFS Storage Appliance, which has InfiniBand connectivity.
>
> Following are a few options to get decent enough performance for Tier-2 storage on the Exadata racks:
> 1. Oracle installs a 10GigE PCI card in the V2 compute nodes.
> 2. The customer installs a 10GigE PCI card in the V2 compute nodes. They do have an empty PCI slot, so this is technically possible.
> 3. The customer uses a Voltaire Grid Director 4036E bridge, a low-latency QDR InfiniBand-to-10GigE bridge.
> 4. The customer buys the Oracle/Sun ZFS Storage Appliance and connects to it over InfiniBand directly.
>
> My order of preference is 1, 2, 3, 4 as listed above.
>
> My question to the list and the Exadata community out there is: does Oracle allow you to install a 10GigE PCI card in V2 compute nodes?
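[Editor's note: since dNFS is central to the plan quoted above, here is a minimal sketch of enabling the Direct NFS client on 11g and a matching oranfstab entry. The filer name, IP addresses, and export/mount paths are hypothetical placeholders, not values from this thread.]

```
# Enable the Direct NFS client (11g; run as the Oracle software owner):
#   cd $ORACLE_HOME/rdbms/lib && make -f ins_rdbms.mk dnfs_on

# $ORACLE_HOME/dbs/oranfstab -- one entry per filer (all names/IPs are examples)
server: netapp-tier2             # hypothetical NAS filer name
path: 192.168.10.1               # filer storage IP (example)
local: 192.168.10.100            # database-server IP on the same network (example)
export: /vol/tier2  mount: /u02/oradata/tier2
```

While the nodes are limited to 1GigE, dNFS can also be given several `path`/`local` pairs in the same entry to load-balance across multiple links until faster connectivity is in place.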

-- 
Regards,
Greg Rahn
http://structureddata.org
--
http://www.freelists.org/webpage/oracle-l
Received on Wed Nov 09 2011 - 09:09:11 CST
