
Re: Backup and Recovery strategy and DB design

From: <why317_at_my-deja.com>
Date: Tue, 14 Dec 1999 22:21:52 GMT
Message-ID: <836fth$prv$1@nnrp1.deja.com>


In article <834jqq$dp0$1_at_nnrp1.deja.com>, kachkar_at_my-deja.com wrote:
> Hi all,
>
> I need some help/ opinions/ suggestions for the following matter:
>
> We are expecting the size of our database to be about 200 GB.
>
> The current design is:
> 4 tablespaces, each with one datafile
>
> Tablespace no. 1:
> Cmacdata contains about 160 tables
> Size: 5 GB
>
> Tablespace no. 2:
> CmacIndex contains the indexes for those 160 tables
> Size: 2 GB
>
> Tablespace no. 3:
> CmacMeter contains 3 big tables, each partitioned by month
> Size: 10 GB
>
> Tablespace no. 4:
> CmacMeterIndex contains the indexes for the 3 big tables (they are
> partitioned too)
> Size: 4 GB
>
> Is this a good design, considering database performance and
> backup and recovery performance?
>
> Isn't it better to have as many datafiles as possible, especially
> for the partitioning?

I would say that the size of your tablespaces matters mainly for how large a window you need for on-line (hot) backups. The larger a tablespace is, the larger the backup will be, the longer it will take, and the more data is written to the redo logs (if I remember correctly) while the tablespace is in backup mode.
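For what it's worth, the per-tablespace hot-backup dance looks roughly like this (a sketch from memory, untested; the tablespace name, datafile path, and password below are placeholders, not from your system):

#!/bin/sh
# Hot backup of a single tablespace while the database stays open.

sqlplus -s system/manager <<EOF
ALTER TABLESPACE cmacdata BEGIN BACKUP;
EOF

# copy the datafile(s) while the tablespace is in backup mode
cp /u01/oradata/PROD/cmacdata01.dbf /backup/cmacdata01.dbf

sqlplus -s system/manager <<EOF
ALTER TABLESPACE cmacdata END BACKUP;
ALTER SYSTEM SWITCH LOGFILE;
EOF

The shorter the window between BEGIN BACKUP and END BACKUP, the less extra redo you generate, which is one argument for keeping tablespaces (and their datafiles) reasonably small.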

> Isn't it better to have a separate datafile for each month
> (partition)? And if so, how is that going to affect database
> performance, backup, and recovery?
>
> Also, for the backup and recovery strategy, the plan is:
> The database is in archive mode.
> Every day we will back up the transaction files (log files).
> Every week we will back up the whole database (I don't know how
> long that will take); we could do the backup for tablespace no. 1
> on Saturday and for tablespace no. 2 on Sunday, and so on...
> Now, is this a good plan? If yes or no, why?
> And how about incremental backups: incremental levels,
> non-cumulative and cumulative?

Based on my experience with the databases I've inherited, let me caution you on a few points.

First, if the database is transactional, or frequently loaded (even once per hour), it may be difficult to obtain a consistent export. Also, be aware that the 2GB or 4GB file-size limits will apply to your export *.dmp files; to get around that, I've had to implement compression using named pipes (mine are Unix systems, by the way). Another option is to break the export up into several smaller ones using the "tables" parameter of the "exp" command. That makes it harder to obtain a truly consistent logical backup, but it may improve your data loaders' availability and reduce the load on your rollback segments.
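In case it helps, the named-pipe trick goes roughly like this (a Unix sketch; the username/password and paths are placeholders):

#!/bin/sh
# Compress an export on the fly through a named pipe, so the
# dump never exists as a single >2GB flat file.

PIPE=/tmp/exp_pipe.$$
mkfifo $PIPE

# gzip reads from the pipe in the background
gzip < $PIPE > /backup/full_export.dmp.gz &

# exp writes its dump into the pipe instead of a regular file
exp system/manager FULL=Y CONSISTENT=Y FILE=$PIPE LOG=full_export.log

wait
rm -f $PIPE

For the break-it-up approach, you would swap FULL=Y for something like TABLES=(owner.table1,owner.table2) and run several of these.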

If you do rely on exports to recover the database after a catastrophic loss, be prepared for it to take a while, especially if you have to use the named-pipe compression trick on a database of that size. I would definitely concentrate my efforts on making sure there are good hot/cold backups, and be extremely careful to back up *every* archive log.
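The restore side is the same trick in reverse (again a sketch with placeholder names):

#!/bin/sh
# Feed the compressed dump back to imp through a named pipe.

PIPE=/tmp/imp_pipe.$$
mkfifo $PIPE

gunzip < /backup/full_export.dmp.gz > $PIPE &
imp system/manager FULL=Y FILE=$PIPE LOG=full_import.log

wait
rm -f $PIPE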

> One last thing:
>
> Our business requirements say that the data should be available
> for 7 years. Now, because I don't want the database and datafiles
> to explode, I think (I might be wrong) that I need to back up the
> data for each year and save it to tape, then remove that year's
> data from the database using DELETE FROM table WHERE ...;
> then, if people need the data, I have to restore it again.

This scenario, too, is a familiar one for me. What we did (in Oracle7, using partition views) was to create another, small database capable of holding one month of transactional data, as well as all of the "meta-data" (that's what the developers called Products, Workcenters, and the other reference tables that were almost, but not completely, static). The developer used a database link to copy one month's worth of data to the smaller "archive" database, and then I simply ran a full export on that archive database. Once I had spooled the export to tape, the archive database could be truncated. A prior month's data can be restored as necessary, but only one month at a time. Our on-line data retention was only 11-12 months, so we would, for example, truncate the previous November's table on or before October 30.
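The monthly flow looked something like this (a simplified sketch; the link, table, and column names here are made up for illustration, not our real schema):

#!/bin/sh
# Month-end archiving: copy a month across a database link,
# export the small archive database, then empty it.

sqlplus -s archuser/archpass@archdb <<EOF
INSERT INTO meter_archive
  SELECT * FROM meter_data@prod_link
   WHERE read_date >= TO_DATE('01-NOV-1998','DD-MON-YYYY')
     AND read_date <  TO_DATE('01-DEC-1998','DD-MON-YYYY');
COMMIT;
EOF

# full export of the archive database; spool this file to tape
exp system/manager@archdb FULL=Y FILE=arch_199811.dmp LOG=arch_199811.log

# once the dump is safely on tape, the archive tables can be emptied
sqlplus -s archuser/archpass@archdb <<EOF
TRUNCATE TABLE meter_archive;
EOF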

> Firstly, I don't know if this logic is good or bad.
> Secondly, if it is OK, how can I implement it?

It sounds like lots of fun. At least you're involved from the beginning. I've been playing catch-up for the last couple of years.

Good luck,
Bill
