Re: Database file configuration for backup/recovery

From: Edzard <edzard_at_volcanomail.com>
Date: 23 Feb 2002 14:58:06 -0800
Message-ID: <5d75e934.0202231458.6ba6e422@posting.google.com>


"Brian Dick" <bdick_at_cox.net> wrote in message news:<Bucd8.36389$nI1.181475_at_news1.wwck1.ri.home.com>...
> "Edzard" <edzard_at_volcanomail.com> wrote in message
> news:5d75e934.0202211041.4d304426_at_posting.google.com...
> > Why not multiplex the online backup of the redundancy set as well? Or
> > are you sure that you keep all the logs since you last burned the
> > backup on CD?
>
> I like the idea of multiplexing the online backup. I plan on running a CD
> burning service(daemon) on a second machine (my workstation). But, I may
> take it offline to use the CDRW for other purposes. The multiplexed online
> backup would reduce my recovery time in case of media failure.
>
> Put the second copy on disk2? How about the placement of my other files?

Hello Brian,

Thanks for taking my suggestion seriously. Below is how I worked it out in detail:

Disk1 (60GB IDE)
  Control file (1)
  Redo log members (member b of all groups)
  Archived redo logs (LOG_ARCHIVE_DEST_1)
  Online backup of redundancy set (see the backup sketch below)

Disk2 (9GB SCSI)
  Control file (2)
  TS_MYAPP_DATA data file
  TS_MYAPP_INDEX data file
  TS_MYAPP_TEMP data file

Disk3 (4GB SCSI)
  Control file (3)
  Redo log members (member a of all groups)

Network:
  Archived redo logs (LOG_ARCHIVE_DEST_2)
  Online backup of redundancy set (2)
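
In init.ora terms the control files and archive destinations would look roughly like this. Only a sketch: the drive letters (D: = Disk1, E: = Disk2, F: = Disk3), the directory names and the \\backuphost share are made-up examples.

  # Sketch only; substitute your own paths.
  control_files        = ("D:\oradata\myapp\control01.ctl", "E:\oradata\myapp\control02.ctl", "F:\oradata\myapp\control03.ctl")
  log_archive_start    = true
  log_archive_format   = "arch_%t_%s.arc"
  log_archive_dest_1   = "location=D:\oradata\myapp\archive"
  log_archive_dest_2   = "location=\\backuphost\oraarch\myapp"

The redo log members themselves are placed when the groups are created (see the ALTER DATABASE example further down).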

As you see, I added a network drive. This protects against total loss of the server. If that is not an option, I would leave the 2nd archive destination out, as it seems exaggerated to keep two archive copies on one machine. Note that if you use a network drive that requires a Windows domain login, you must set a user and password for that domain on the NT services that start up the database.
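
For the online backup of the redundancy set itself, a user-managed hot backup along these lines would do. Again only a sketch: the tablespace names come from the layout above, but the data file and backup paths are assumptions.

  REM Online (hot) backup of the application tablespaces; file names are examples.
  REM TS_MYAPP_TEMP only holds temporary segments and needs no backup.
  ALTER TABLESPACE ts_myapp_data BEGIN BACKUP;
  HOST copy E:\oradata\myapp\ts_myapp_data01.dbf D:\backup\myapp\
  ALTER TABLESPACE ts_myapp_data END BACKUP;

  ALTER TABLESPACE ts_myapp_index BEGIN BACKUP;
  HOST copy E:\oradata\myapp\ts_myapp_index01.dbf D:\backup\myapp\
  ALTER TABLESPACE ts_myapp_index END BACKUP;

  REM Keep a control file copy and make sure the current log gets archived.
  ALTER DATABASE BACKUP CONTROLFILE TO 'D:\backup\myapp\control.bkp' REUSE;
  ALTER SYSTEM ARCHIVE LOG CURRENT;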

Further, I put all the tablespaces on the same disk; caching data in RAM is more effective here. You need 512 MB of extra memory and must set db_block_buffers accordingly. Use large redo log files (128 MB each) and set log_checkpoint_interval to 1800 to avoid flushing temporary segments or intermediate data to disk.
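
In parameter terms that would be something like the fragment below. The buffer count assumes an 8 KB db_block_size (512 MB / 8 KB = 65536 buffers); adjust it to your block size, and check the unit of log_checkpoint_interval (OS blocks) for your release.

  # Extra buffer cache and relaxed checkpointing, as suggested above.
  db_block_buffers        = 65536      # about 512 MB at an 8 KB block size
  log_checkpoint_interval = 1800

The 128 MB groups with one member on Disk3 and one on Disk1 would be created along these lines (group number and file names are examples):

  ALTER DATABASE ADD LOGFILE GROUP 4
    ('F:\oradata\myapp\redo04a.log', 'D:\oradata\myapp\redo04b.log')
    SIZE 128M;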

Finally, I assumed that the archiver copies the redo logs from the member a files (and not from member b). I never tested this, though. Copying across disks is of course faster than copying within one disk.
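
I don't know of a view that shows which member the archiver actually reads, but a quick look at v$logfile and v$archive_dest at least confirms the member and destination placement. A sketch, with the column names as I remember them from 8i:

  SELECT group#, status, member FROM v$logfile ORDER BY group#, member;
  SELECT dest_id, status, destination FROM v$archive_dest;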

Nice puzzle for a Saturday evening, this was.

Edzard

Received on Sat Feb 23 2002 - 16:58:06 CST
