Re: Do we need multiple REDOLOG member if it is already on SAN box?
Date: Tue, 3 May 2011 09:08:49 -0700 (PDT)
On May 2, 9:30 pm, Noons <wizofo..._at_gmail.com> wrote:
> On May 3, 2:54 am, joel garry <joel-ga..._at_home.com> wrote:
> > > YMMV based on how paranoid you are.
> > You may be over-estimating everyone else's ability because your ship
> > is so tight! I'm not running standby now because a particular array
> > works with two versions of hp-ux, but not the version in between, and
> > no one knows why. SANs are not magic; they fail when their
> > components get old or their environment changes. The failure may lack
> > grace.
> They are not magic but they are by far the best current compromise
> between reliability, performance and price. Because I can't get
> better bang for my buck, I'm happy with them and their demonstrated
> level of reliability so far.
I did not know, when I wrote that, that the reason I hadn't seen many cow-orkers that morning was that two disks had failed over the weekend in a SAN that was set up to tolerate one disk failure. Fortunately not one of the ones for the db's, just the one everybody else uses.
I haven't been keeping close track, but I think that works out, over 10 years at one (distributed) site, to a once-in-100-years event roughly every other year. And that's not counting the whole batch of IBM disks that didn't even get past initial burn-in.
Of course, the nuke industry has had a few once-in-23,000-year events lately, so I guess that's not the worst. (Can't get to the exact stats since net access is still degraded while the replicants catch up, but it was either 23K or 17K years, official US regulatory estimates, and it kind of caught my attention.)
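For what it's worth, none of this changes the answer to the subject line: multiplexing redo members is cheap insurance even on a SAN, since the array itself is a shared failure domain. A minimal sketch (the group numbers and paths below are made up for illustration; put the second member on a different array or controller if you can):

```sql
-- See how many members each redo log group currently has
-- (a group with one member depends entirely on the storage under it):
SELECT group#, COUNT(*) AS members
  FROM v$logfile
 GROUP BY group#;

-- Add a second member to each group, on separate storage:
ALTER DATABASE ADD LOGFILE MEMBER '/u02/oradata/orcl/redo01b.log' TO GROUP 1;
ALTER DATABASE ADD LOGFILE MEMBER '/u02/oradata/orcl/redo02b.log' TO GROUP 2;
ALTER DATABASE ADD LOGFILE MEMBER '/u02/oradata/orcl/redo03b.log' TO GROUP 3;
```

Oracle keeps writing as long as at least one member of the current group is usable, so the second member costs a little write bandwidth and buys you survival of exactly the kind of failure described above.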
--
_at_home.com is bogus.
http://www.newscientist.com/gallery/magical-mathematics/6

Received on Tue May 03 2011 - 11:08:49 CDT