Oracle FAQ Your Portal to the Oracle Knowledge Grid
HOME | ASK QUESTION | ADD INFO | SEARCH | E-MAIL US
 


Re: recovery strategies for multi-terabyte database

From: Joel Garry <joel-garry_at_home.com>
Date: 17 Aug 2004 14:03:34 -0700
Message-ID: <91884734.0408171220.5f39eb21@posting.google.com>


Daniel Morgan <damorgan_at_x.washington.edu> wrote in message news:<1092704454.739095_at_yasure>...
> Joel Garry wrote:
>
> > Daniel Morgan <damorgan_at_x.washington.edu> wrote in message news:<1092447562.913049_at_yasure>...
> >
> >>Prem K Mehrotra wrote:
> >>
> >>
> >>>Daniel Morgan <damorgan_at_x.washington.edu> wrote in message news:<1092406609.341901_at_yasure>...
> >>>
> >>>
> >>>>Rob De Langhe wrote:
> >>>>
> >>>>
> >>>>
> >>>>>Hi,
> >>>>>
> >>>>>we are interested to know what DBAs have selected as realistic recovery
> >>>>>(and corresponding backup) strategy for a database with multiple
> >>>>>terabytes of data.
> >>>>>
> >>>>>The Internet is full of talk about backup performance, but nowhere is
> >>>>>the actual recovery method discussed for such a large database. Even
> >>>>>when doing online backups, you still need a way to bring this huge
> >>>>>database back to a consistent state, or to restore a set of data into
> >>>>>the database.
> >>>>>
> >>>>>We are using Solaris-9, Oracle 9.2, SAN storage, Veritas Netbackup, and
> >>>>>LTO tape robot.
> >>>>>
> >>>>>TIA for any suggestions
> >>>>>
> >>>>>Rob
> >>>>
> >>>>Get a duplicate storage array (likely NetApp, EMC, Hitachi, or IBM) and
> >>>>use the snap-mirror capability to mirror changed blocks to the second
> >>>>array. Be sure the duplicate array is at least 500 miles away from
> >>>>the primary and connect them with a T3.
> >>>>
> >>>>Then don't waste your time backing up anything.
> >>>
> >>>Dan:
> >>>
> >>>Pardon my ignorance, but what happens if something gets corrupted, you
> >>>accidentally delete some data or a table, or for whatever reason you
> >>>have to do a point-in-time recovery? How will one accomplish that
> >>>using snap-mirror type backups?
> >>
> >>Corruption:
> >>Same thing that happens when you have a tape containing corrupt blocks.
> >>
> >>Deletion:
> >>Learn about how snap works ... learn about how table flashback works.
> >>Implement the appropriate solution.
> >>
> >>Point-in-time:
> >>Archive logs
> >>
> >>To be truthful, I was being a bit flippant. I do believe in backups,
> >>but not like I used to. I haven't had to run for a backup tape in
> >>more than 5 years. And I don't believe anyone is backing up today's
> >>multi-terabyte databases to tape anymore.
> >
> >
> > Is http://groups.google.com/groups?selm=3FA92229.F1ECFA%40remove_spam.peasland.com&output=gplain
> > out of date?
> >
> > jg
> > --
> > @home.com is bogus.
> > I think it is more important to be able to recover from mistakes than
> > to avoid them. You can't avoid them.
>
> Ok ... a few people are. Though I can't imagine why.

I can imagine several reasons right off the bat:

Budget cycles. Some projects that go on for years need to be budgeted up front, and can't or shouldn't be re-architected halfway.

Tape is still cheaper than disk. Petabytes of tape are still cheaper than terabytes of disk.

The next frontier of computing is stability over time. We don't know yet what will shake out as the replacement for mass storage. It may be k001 to have the latest and greatest, but some things will be laughed at in the future. Non-error-correcting laser-based storage that hardly lasts 15 years will perhaps be one of them. Ironically named inexpensive disks will perhaps be another. I have reel-to-reel tapes that are 50 years old and still work. I also have a 5 1/4" floppy drive that doesn't work because my wife stuck a Reader Rabbit CD in it and I haven't gotten around to taking the whole thing apart yet. Sigh.

You really think flashback is stable, proven, and dependable enough to use as a backup procedure??? And even if it were, wouldn't it use up so much disk you might as well take backups? I don't think it would even be economical to do the testing required to be sure, and practically speaking, I think it would require far too much manual monitoring (with the attendant possibility of mistakes) to use in a production system with varying future loads.
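[For readers who haven't met the feature under debate: flashback rewinds data using retained undo rather than restoring from backup, which is exactly why it depends on undo space and retention tuning. A minimal sketch of the era's syntax; the table name and retention value are illustrative, not from the thread:

```sql
-- Flashback reads undo data kept for roughly UNDO_RETENTION seconds;
-- once the undo has aged out, the flashback fails and you are back
-- to restoring from a real backup. (86400 = ~24h, an assumed value.)
ALTER SYSTEM SET UNDO_RETENTION = 86400;

-- Oracle 9i offers query-level flashback only:
SELECT * FROM scott.emp
  AS OF TIMESTAMP (SYSTIMESTAMP - INTERVAL '1' HOUR);

-- Oracle 10g adds FLASHBACK TABLE, which rewinds the table in place;
-- row movement must be enabled on the table first.
ALTER TABLE scott.emp ENABLE ROW MOVEMENT;
FLASHBACK TABLE scott.emp
  TO TIMESTAMP (SYSTIMESTAMP - INTERVAL '1' HOUR);
```

Note the dependence on undo retention: with varying future loads, undo can age out sooner than expected, which is the monitoring burden being objected to above.]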

jg

--
@home.com is bogus.
I know!  Let's humanize and democratize the net by making everything
like ebay!  http://www.signonsandiego.com/uniontrib/20040814/news_1b14ebay.html
Received on Tue Aug 17 2004 - 16:03:34 CDT

