Re: SAME methodology opinions? anti-SAME, SAN rant.
bchorng_at_yahoo.com (Bass Chorng) wrote in message news:<bd9a9a76.0303101233.26904a1_at_posting.google.com>...
> yong321_at_yahoo.com (Yong Huang) wrote in message news:<b3cb12d6.0303080723.5f3e2713_at_posting.google.com>...
> > bchorng_at_yahoo.com (Bass Chorng) wrote in message news:<bd9a9a76.0303071653.41b5878a_at_posting.google.com>...
> > > However, my honest opinion is, if you do not have a real load
> > > challenge, go ahead and use SAME, even with RAID 5. For most sites
> > > this just works fine. Chances are that your NVRAM will cover all
> > > your wrongdoings.
> > >
> > > If you expect to have a real load challenge, then do all the
> > > homework and plan your IO. I have seen sites that can only
> > > survive this way.
> >
> > Could you elaborate on the NVRAM part? Thanks.
> >
> > Yong Huang
>
> Most SANs nowadays have very large NVRAM. I think a
> Hitachi 9980 can have 32 GB of NVRAM (that's mirrored).
> Everything you write goes to the cache. At that point, it
> really does not matter where you put your redo.
>
> Your IO is between OS kernel and the SAN cache. Whatever
> happens between the cache and the actual disks usually does
> not have anything to do with your performance.
I beg to differ.
If you have write-back caching enabled on a SCSI RAID controller, a write is seen as committed when it hits the controller's NVRAM cache (assuming the controller has battery-backed cache and the server is on a UPS). You don't have to buy a SAN, or even a point-to-point FibreChannel external storage device, to get the benefits of write-back caching. You can even purchase solid-state drives for such purposes. If you have a quad-channel PCI-X bus server (IA-32), you likely have bandwidth on the bus to spare.
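As a rough illustration of why the acknowledgement point matters, here is a toy model (both service times are assumptions for illustration, not measurements from any particular controller):

```python
# Toy model of redo-write commit latency under two cache policies.
# Both service times below are assumed values, not vendor specifications.

CACHE_ACK_MS = 0.2    # assumed: time to land a write in battery-backed controller NVRAM
DISK_WRITE_MS = 6.0   # assumed: seek + rotational delay + transfer for one physical write

def commit_latency_ms(write_back: bool) -> float:
    """Latency the log writer sees for one redo write under each policy."""
    return CACHE_ACK_MS if write_back else DISK_WRITE_MS

print(f"write-back    (ack from NVRAM): {commit_latency_ms(True):.1f} ms")
print(f"write-through (ack from disk):  {commit_latency_ms(False):.1f} ms")
```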
> However, it does make a difference if either you do not have
> a lot of cache (or cash) or/and your IO load is very high, as
> your NVRAM could be saturated. Now, once it does, you will
> have a very bad IO wait. This is especially prominent on an
> EMC. In this case, if you plan your redo strategically
> like everything Steve Adams suggested, then the saturation
> is less likely to happen.
>
> As to RAID, I think some SANs (such as the IBM Shark, I believe)
> only support RAID 5. They use large NVRAM and sophisticated
> caching algorithms to cover the shortcomings of using RAID 5
> (or 5+ with Hitachi) for OLTP.
In theory, if you are writing full stripes, there is little penalty to pay for RAID 5. But if you are writing 512 bytes of redo on a 1 MB stripe, each small write forces a read-modify-write of the old data and parity, so you're dealing with lots of overhead.
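To put rough numbers on that, here is a back-of-the-envelope sketch; the stripe geometry is assumed for illustration, and it uses the classic RAID 5 small-write cost of two reads plus two writes for the data/parity read-modify-write:

```python
# Back-of-the-envelope RAID 5 write cost. Geometry below is assumed for illustration.

STRIPE_UNIT_KB = 1024   # assumed 1 MB stripe unit per data disk
DATA_DISKS = 4          # assumed 4+1 RAID 5 set

def physical_ios(write_kb: float) -> int:
    """Physical disk I/Os needed to service one logical write."""
    if write_kb >= STRIPE_UNIT_KB * DATA_DISKS:
        # Full-stripe write: write every data disk plus the freshly computed parity.
        return DATA_DISKS + 1
    # Small (partial-stripe) write: read old data and old parity,
    # then write new data and new parity.
    return 4

print("512-byte redo write:  ", physical_ios(0.5), "physical I/Os for 0.5 KB of payload")
print("Full 4 MB stripe write:", physical_ios(4096), "physical I/Os for 4096 KB of payload")
```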
> My personal belief is the technology will eventually make
> IO tuning less and less important, unless you have a very
> extreme situation.
So - having dedicated I/O channels and dedicated drives for Oracle files such as redo, arch and rbs/undo is not the way to go? Just dump it all in the SAN cache? I'll bet that sounds great when you're teeing it up ...
Having low seek latency for Oracle activities, such as building consistent reads for non-cached blocks, seems like a better methodology to me, but actual mileage will vary ... I guess.
Did you know that you can get an external storage unit for direct-attached SCSI from vendors like Dell, containing 14 drives of 36 GB each (nominal), for under $7500.00 (USD)?
If you are paying Oracle $15K per CPU, you can keep a starving CPU well fed with lots of I/O from 28 disks over 4 I/O channels for the same cost as one Oracle Server CPU license. Sounds like money well invested to me. You, the DBA, benefit by having lots of extra room for trial recoveries, hot backups, RMAN backups, and exports on disk.
Use 1 GB off of a 36 GB drive for live files. Leave the rest alone.
Yes, it sounds like a waste, but these drives are $275 apiece. I think it's awesome to have an 8 GB database on what could be half a terabyte of RAID 1 storage. Low latency and high throughput.
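Working through that arithmetic with the figures quoted above (2003-era prices, purely illustrative):

```python
# Cost/capacity arithmetic using the figures quoted above (2003-era, illustrative).

UNIT_PRICE_USD = 7500            # one 14-drive direct-attached SCSI shelf
DRIVES_PER_UNIT = 14
DRIVE_GB = 36
ORACLE_CPU_LICENSE_USD = 15000   # one Oracle Server CPU license

shelves = ORACLE_CPU_LICENSE_USD // UNIT_PRICE_USD   # 2 shelves for one license's worth of money
drives = shelves * DRIVES_PER_UNIT                   # 28 spindles
raw_gb = drives * DRIVE_GB                           # 1008 GB raw
mirrored_gb = raw_gb // 2                            # ~504 GB usable under RAID 1

print(f"{shelves} shelves, {drives} drives, {raw_gb} GB raw, ~{mirrored_gb} GB after RAID 1")
```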
In discussing SAME with an external storage cabinet vendor at a presentation in NYC, the vendor conceded that segregating online redo and archived redo from the datafiles (as their read/write characteristics are quite different) was a good move. Salespeople will solve problems by selling you more cache, and possibly more mount points and FC-HBAs.
Statspack reports for a site with 2 such external storage units reveal an average of less than 1 ms wait for reads. Yes, I have write-back caching enabled, as there is memory on the RAID controller and the unit is on a UPS with shutdown alerts enabled. The waits on writes are negligible, rounded to 0.0 ms in the Statspack reports. Dedicated disks insulate you from outside disturbances that create periods of relatively high latency, which you may not even have the ability to monitor.
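For reference on where such averages come from: Statspack derives per-file read times from counters like PHYRDS and READTIM in V$FILESTAT, with READTIM expressed in centiseconds. A minimal sketch of the arithmetic, using made-up counter values:

```python
# Average read wait per datafile from V$FILESTAT-style counters.
# READTIM is recorded in centiseconds (hundredths of a second).

def avg_read_ms(phyrds: int, readtim_cs: int) -> float:
    """Average milliseconds per physical read."""
    return (readtim_cs * 10.0) / phyrds if phyrds else 0.0

# Hypothetical counter values, for illustration only.
print(f"{avg_read_ms(phyrds=1_250_000, readtim_cs=100_000):.2f} ms average read wait")  # 0.80 ms
```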
SANs may be great for (lazy) SysAdmins, but I still believe direct-attached SCSI is king. If you aren't clustering, there is no point in using a SAN, unless you get a charge out of blowing a large budget. I get a charge out of great performance while minimizing cost. Having lots of local storage for on-disk hot backups and archived redo makes me worry less about whether the SysAdmin is running the tapes through the bulk eraser.
One more thing: channels.
How many FC-HBAs (or ports) does it take to equal the I/O of a single quad-channel, PCI-X RAID controller? (Ideally, you would balance your I/O across multiple controllers on multiple PCI bus channels.)
Does it make sense to saturate a FibreChannel switch with I/O from Oracle databases in a non-clustered, high-I/O situation? I doubt it.
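A rough nominal-bandwidth comparison, assuming Ultra320 SCSI channels and 2 Gb/s Fibre Channel ports of that era (bus ratings, not sustained disk throughput):

```python
# Nominal, era-appropriate channel bandwidths; sustained throughput will be lower.
import math

ULTRA320_MB_S = 320     # one Ultra320 SCSI channel, nominal
SCSI_CHANNELS = 4       # quad-channel RAID controller
FC_2GBIT_MB_S = 200     # one 2 Gb/s Fibre Channel port, roughly 200 MB/s nominal

scsi_aggregate = ULTRA320_MB_S * SCSI_CHANNELS               # 1280 MB/s
fc_ports_needed = math.ceil(scsi_aggregate / FC_2GBIT_MB_S)  # ports to match it

print(f"Quad Ultra320 controller: {scsi_aggregate} MB/s nominal")
print(f"2 Gb/s FC ports needed to match: {fc_ports_needed}")
```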
SCSI sells itself. SANs require huge amounts of schmoozing.
Just my opinion.
Paul