Re: Exadata V2 Compute Node 10GigE PCI card installation

From: Vishal Gupta <vishal_at_vishalgupta.com>
Date: Sat, 12 Nov 2011 00:39:20 +0000
Message-Id: <30901811-4A3F-4983-A481-A2889098E310_at_vishalgupta.com>



Andy,

Please share the results of your findings with us if you see any difference in the airflow paths. I would like to know how much truth there is in support's statement regarding airflow.
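
On the dNFS plan in the quoted thread below, in case it helps anyone: the database-side setup is just an oranfstab entry per filer plus relinking the Direct NFS ODM library. A minimal sketch for 11.2 (the filer name, IPs, and export/mount paths here are placeholders, not taken from this thread):

    # $ORACLE_HOME/dbs/oranfstab -- one block per filer
    server: netapp01                        # hypothetical filer name
    local: 192.0.2.10                       # IP of the compute node interface to use
    path: 192.0.2.1                         # filer IP reachable from that interface
    export: /vol/tier2 mount: /u02/tier2    # NFS export and its local mount point

    # Relink to enable Direct NFS (11.2):
    #   cd $ORACLE_HOME/rdbms/lib && make -f ins_rdbms.mk dnfs_on

Whichever link ends up carrying the traffic (1GigE, 10GigE via a bridge, or IPoIB), dNFS will use the local/path pairs that are actually reachable.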

Regards,
Vishal Gupta
Email: vishal_at_vishalgupta.com
Blog: http://blog.vishalgupta.com

On 10 Nov 2011, at 14:37, Andy Colvin wrote:

> Imagine that.  I would have expected them to tell you to buy a new Exadata.  In all seriousness, once you have an Exadata, you are not allowed to modify the physical hardware.  This covers everything from replacing unused 10GbE cards with fibre channel HBAs, to installing additional hard drives in the unused SAS slots on the front of compute nodes, to adding memory to the V2 systems (you can purchase a memory upgrade kit for the X2-2 systems), to installing non-Exadata servers in the physical Exadata rack.  I'll have to crack open our V2 and X2-2 compute nodes and compare the airflow paths today.
> Andy Colvin
> 
> Principal Consultant
> Enkitec
> andy.colvin_at_enkitec.com
> http://blog.oracle-ninja.com
> Office - 972-607-3744
> Mobile - 214-763-8140
> 
> On Nov 9, 2011, at 9:55 AM, Vishal Gupta wrote:
> 

>> I raised a support call with the Oracle Exadata support team to get their view on this. They said that 10GigE cards generate too much heat, and V2 compute nodes cannot handle that heat due to airflow problems; it results in surrounding components and servers becoming too hot. In the X2-2 and X2-8 compute nodes the airflow has been improved, which is why they have been able to put 10GigE cards in them.
>>
>> Without me even mentioning the Voltaire 4036E bridge, support straight away said I could use that as an option.
>>
>> I think I will have to go with option #3, as option #4 looks too expensive at the moment.
>>
>> Regards,
>> Vishal Gupta
>> Email: vishal_at_vishalgupta.com
>> Blog: http://blog.vishalgupta.com
>>
>>
>> On 9 Nov 2011, at 15:09, Greg Rahn wrote:
>>
>>> Options #1 and #2 are not allowed, so that leaves you with #3 and #4.
>>> 
>>> On Wed, Nov 9, 2011 at 2:47 AM, Vishal Gupta <vishal_at_vishalgupta.com> wrote:
>>>> Hi,
>>>> 
>>>> One of my clients currently has Exadata V2 racks, which do not have 10GigE cards in them. We were thinking of provisioning Tier-2 storage via dNFS to a NAS filer (read: NetApp). Since the V2 compute nodes currently have only 40Gb/s InfiniBand and 1GigE cards in them, we can only connect to the NAS filer at 1Gb/s. Oracle is quoting a ridiculous price (even the thought of mentioning the range makes me a little unhappy) for their ZFS Storage Appliance, which has InfiniBand connectivity.
>>>> 
>>>> The following are a few options for getting decent performance for Tier-2 storage on Exadata racks:
>>>> 1. Oracle installs a 10GigE PCI card in V2 compute nodes.
>>>> 2. Customer installs a 10GigE PCI card in V2 compute nodes. They do have an empty PCI slot, so this is technically possible.
>>>> 3. Customer uses the Voltaire Grid Director 4036E bridge, a low-latency QDR InfiniBand-to-10GigE bridge.
>>>> 4. Customer buys the Oracle/Sun ZFS Storage Appliance to connect to NAS directly over InfiniBand.
>>>> 
>>>> My order of preference is 1, 2, 3, 4, as listed above.
>>>> 
>>>> My question to the list and the Exadata community out there is: does Oracle allow you to install a 10GigE PCI card in V2 compute nodes?
>>> 
>>> -- 
>>> Regards,
>>> Greg Rahn
>>> http://structureddata.org


--
http://www.freelists.org/webpage/oracle-l
Received on Fri Nov 11 2011 - 18:39:20 CST
