
RE: Storage array advice anyone?

From: Barr, Stephen <Stephen.Barr_at_bskyb.com>
Date: Thu, 16 Dec 2004 14:51:50 -0000
Message-ID: <6B4A3CF190E26E4781E7F56CCDEE4083026EFFEB@sssl_exch_usr3.sssl.bskyb.com>


Yes, it's shared with two other large systems - both OLTP.  

The current striping at the OS level (vxfs) is 128k * 8 - which for our DWH is a bit disastrous.  

We have four dedicated paths to the disks.  
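
For reference, the size of a single direct path read works out to roughly db_block_size * db_file_multiblock_read_count - so, assuming for illustration a 16k block size and a multiblock read count of 64, you get the 1Mb reads described in the quoted mail below, and against a 128k stripe unit each of those reads spans 8 stripe columns. A minimal sketch for pulling the two parameters (standard v$parameter view; the illustrative values above are an assumption, not taken from this system):

    select name, value
      from v$parameter
     where name in ('db_block_size', 'db_file_multiblock_read_count');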



From: Loughmiller, Greg [mailto:greg.loughmiller_at_cingular.com]
Sent: 16 December 2004 14:09
To: 'bdbafh_at_gmail.com'; Stephen.Barr_at_bskyb.com
Cc: Amir.Hameed_at_xerox.com; oracle-l_at_freelists.org
Subject: RE: Storage array advice anyone?

In our experience, getting EMC to perform an analysis of the frame has usually provided some benefit. Is the frame shared with other hosts/ports? Getting EMC to do a deep dive on the frame may shed some light for you.
greg  

-----Original Message-----
From: Paul Drake [mailto:bdbafh_at_gmail.com]
Sent: Wednesday, December 15, 2004 3:52 PM
To: Stephen.Barr_at_bskyb.com
Cc: Amir.Hameed_at_xerox.com; oracle-l_at_freelists.org
Subject: Re: Storage array advice anyone?

Amir,
Obviously, you need more cache :).
(ducking and running)
It might be that you're saturated at the controller (FC HBAs - fibre channel host bus adapters) or in internal bandwidth - it depends upon the number of paths allocated to your mount points.
Paul  
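
A rough way to confirm where the time is actually going is to look at the system-wide wait events for direct path reads - a minimal sketch against the standard v$system_event view (nothing here is specific to this system):

    select event, total_waits, time_waited, average_wait
      from v$system_event
     where event like 'direct path read%'
     order by time_waited desc;

If the average wait per read climbs sharply when a second PQ query is added, that is consistent with the I/O paths themselves being saturated.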

On Wed, 15 Dec 2004 20:34:40 -0000, Barr, Stephen <Stephen.Barr_at_bskyb.com> wrote:
> Hi Amir,
> We also have a DMX 3000 box and have it striped 8 ways.
>
> We have 83 meta devices, each meta device is ~67Gb in size and is
> made of eight 8.43Gb volumes. Each volume is RAID 1; however, each meta
> volume is striped across its eight individual volumes with a stripe size of
> 0.94Mb.
>
> The issue we have at present is that we are a data warehouse doing
> lots of 1Mb direct path reads. Each read will hit 8 physical devices (with a
> 1Mb stripe unit size at OS). I'm assuming this is a bad thing - surely each of
> our reads should be hitting only a single device? i.e. we're waiting on 8
> devices instead of only one.
>
> I've performed a number of tests with PQ on the current setup, and
> it looks like the IO subsystem is saturated with a single PQ query (degree
> 4) to such an extent that two PQ queries running together BOTH take twice as
> long to complete... surely this isn't the pattern we should be seeing? It
> essentially means that the system is 100% non-scalable.
>
> 1 query PARALLEL 4 (FTS)
>
> 1Mb Stripe unit     5 mins 18 secs
> 512k Stripe unit    5 mins 18 secs
> 128k Stripe unit    5 mins 52 secs
> CONCAT              5 mins 10 secs
>
> 2 queries hitting same table PARALLEL 4 (FTS)
>
> 1Mb Stripe unit     8 mins 43 secs (each)
> 512k Stripe unit    10 mins 12 secs (each)
> 128k Stripe unit    8 mins 35 secs (each)
> CONCAT              8 mins 10 secs (each)
>
> Does anyone have any experience of setting up this type of storage solution
> for a data warehouse?
>
> Thanks,
>
> Steve.
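
The kind of test described in the figures above can be reproduced with something along these lines - a sketch only, with sales_fact standing in for whatever large table is being scanned:

    set timing on

    select /*+ full(f) parallel(f, 4) */ count(*)
      from sales_fact f;

Running two copies of this from separate sessions against the same table shows whether the elapsed time really does double under concurrency, as in the numbers above.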






--
http://www.freelists.org/webpage/oracle-l
Received on Thu Dec 16 2004 - 08:54:18 CST
