Hot Backup Q&A [Long]

From: Lee Parsons <lparsons_at_eskimo.com>
Date: 1995/06/08
Message-ID: <D9vG90.Fu9_at_eskimo.com>


The following is an email message I received the other day from Mark Vantzelfde <markv_at_hpwadec.wal.hp.com>, along with my replies.

If you disagree with me or have any additional comments, please post them. We'll both be looking for them.


> We want to implement 7x24 operation of a fair-sized (50GB+) Oracle
> database. Up to this point, cold backups have been taken, but going
> to 7x24 means online hot backups will be needed. I don't anticipate
> we'll ever want to do a cold backup, once converted.

I tend to be very conservative and like to do regular cold backups (weekly? monthly?). There is no real technical reason for it, but it makes me sleep better at night.

In a situation like yours, I might do colds weekly or bi-weekly in the very beginning. This may give you some breathing room until you're sure you have worked all the kinks out. Sorry to be so vague here. As I said, there is no real technical reason for this, but hot backups are so damn unforgiving, and I have been burned once or twice because somebody made a change and didn't understand the implications.

> Some questions:
>
> 1. should you take a "final" cold backup before switching to archivelog mode
> or should you take a full backup after the switch?

Since the change to the database is not that big a deal (one command), I wouldn't worry about a cold before the switch. You definitely need a backup right after the switch so that you have something to roll forward from in case the system goes down that afternoon. I would make that backup a cold, just because.
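For reference, the switch itself is roughly this (just a sketch assuming Oracle 7 Server Manager syntax and that you want automatic archiving; check the docs for your release):

    # init.ora entry, added before the restart, to turn on automatic archiving
    log_archive_start = true

    SVRMGR> CONNECT INTERNAL
    SVRMGR> SHUTDOWN IMMEDIATE
    SVRMGR> STARTUP MOUNT
    SVRMGR> ALTER DATABASE ARCHIVELOG;
    SVRMGR> ALTER DATABASE OPEN;
    SVRMGR> ARCHIVE LOG LIST
    -- should now report Archive Mode with automatic archival enabled

Then shut it back down and take that cold.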

> 2. once set up to run in archivelog mode, when do online redo log files get
>    archived to log_archive_dest? when each file fills, or when the log group
>    round-robins and an overwrite would occur?

They should be written to log_archive_dest after the log switch, once the database has finished writing to that log. If it waited until the log rolled around again, you would risk stalling the database until the write finished.
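If you want to watch it happen, something like this shows the state of each online group (a sketch; I'm going from the v$log columns I remember, so verify against your release):

    SQL> SELECT group#, sequence#, archived, status FROM v$log;
    SQL> ALTER SYSTEM SWITCH LOGFILE;
    SQL> SELECT group#, sequence#, archived, status FROM v$log;
    -- the group you just switched out of should flip to ARCHIVED = YES
    -- once the ARCH process has copied it to log_archive_dest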

> 3. any good suggestions on setting the log_archive_format parameter?

We set log_archive_dest to /"arch_directory"/redo"ORACLE_SID"_ and log_archive_format to %s.arc.

"arch_directory" and "ORACLE_SID" are of course site specific.

> 4. after you take a hot backup, are some of the archived redo logs no longer
>    needed?

Strictly speaking, none of the old logs are needed. But I like to leave them out there long enough so that each log ends up on two separate tapes.

> 5. how do you determine which archived redo log files are no longer needed?

If they are archived, then the database doesn't need them. You may need them later for a restore, but the database doesn't care anymore.

I think the term "need" is what's throwing me. If you restore at a later date, you must have a hot backup and all the archive logs written from that backup up to the point you wish to restore to. As long as you have those log files on at least one tape, you're OK. But having them on two tapes would be a good idea in case a tape snaps. It is also nice to have a week's worth online in case you have to restore a backup from 5 days ago and roll forward. If all the logs are on disk, you only have to read the one backup from tape and then zoom forward.

These are nice things to have, and often a good idea depending on your situation, but do you need them? ...
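To make the "hot backup plus archive logs" picture concrete, a per-tablespace hot backup is roughly this (a sketch only; the tablespace and file names here are made up, and the copy command depends on your platform):

    SQL> ALTER TABLESPACE users BEGIN BACKUP;
    SQL> HOST cp /u03/oradata/PROD/users01.dbf /backup/PROD/
    SQL> ALTER TABLESPACE users END BACKUP;
    SQL> ALTER SYSTEM SWITCH LOGFILE;
    SQL> ALTER DATABASE BACKUP CONTROLFILE TO '/backup/PROD/control.bkp';

Any restore from that copy needs every archived log from the BEGIN BACKUP on, up to the point you want to roll forward to. That's why I force the switch at the end: it pushes the redo covering the copy out to log_archive_dest right away. Logs older than your oldest usable backup are the ones that are only "needed" for insurance.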

> 6. should archived redo log files be temporarily placed on disk in
>    log_archive_dest and then moved to tape/optical? Or, if hot backups are
>    done before log_archive_dest fills, can the no-longer-needed
>    redo log files be removed and the rest kept online? Is this a risk?

Keeping archive logs online is a very reasonable thing to do, and I would certainly encourage it. But it really depends on your site. This may not be an option if you produce 5GB of logs daily. If you create 1GB weekly, then you may want to consider it.

> 7. do you need hot backups of tablespaces that contain only index data,
>    or do you take your chances and recreate the indexes if there is a problem?

It depends on where you're trying to save the time. If you don't back up your indexes, your backups will be shorter, but your restore will be longer. If you do, your restore will be quicker, but the backup will take more time and tape. You do many more backups than restores, but you want to restore as quickly as possible.

I personally think backing up the index tablespaces is the way to go.

> 8. during a restore when you've been using archivelog mode, do all archived
>    redo logs have to be available in log_archive_dest? Can they be copied
>    in as needed?

They CAN be copied in as needed if you are short on space. But since you're designing the system now, you should plan on having enough disk available to store all the archive logs you're likely to need.

If they are all there, you can just start the recovery and let it find them in the same directory. Otherwise you will be watching tapes spin while your users are waiting for the system to come back up.
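With the datafiles restored from the hot backup and the archived logs sitting in log_archive_dest, the roll forward itself looks roughly like this (a sketch of a complete recovery; incomplete and point-in-time recovery have their own wrinkles):

    SVRMGR> STARTUP MOUNT
    SVRMGR> RECOVER DATABASE;
    -- Oracle suggests each archived log it wants in turn; answering
    -- AUTO lets it apply everything it can find in log_archive_dest
    SVRMGR> ALTER DATABASE OPEN;

If a suggested log isn't on disk, that's the point where you sit and wait for a tape.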

> 9. What's the best method for estimating the amount of space the archived
>    redo logs will take? If I know the interval between hot backups and I
>    know the number and type of DB writes/updates during the interval,
>    how do I guesstimate the size of the redo logs?

The problem here is that you have to start with some kind of hard number to base your guesstimate on. If you say "Well, the average write will be XYZ bytes," then you can do something like (XYZ + overhead of the change) * updates per hour * hours between backups = ???. The problem is that two of these numbers, XYZ and "updates per hour," are just wild-ass guesses. It's like using a desk ruler to measure something to the nearest 100th of an inch. Why bother? If you feel confident that you do know these two numbers, then give Oracle a call for the overhead. I could give you a guess, but it really should come from the authoritative source.

Or, if you know that your current environment is producing half of the logs you'll have in your future environment, then you can look at v$log_history, see that you're producing ABC logs between backups, double that, and multiply by the size of the log files. Ta-da. You have another wild-ass guess.
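For the record, that counting exercise goes something like this (a sketch; I'm assuming v$log_history and v$log are available in your release, and the once-a-day interval is just an example):

    SQL> SELECT COUNT(*) FROM v$log_history;
    -- run at backup time and note the number; run it again at the next
    -- backup, and the difference is logs produced between backups

    SQL> SELECT group#, bytes FROM v$log;
    -- size of each online log; an archived copy is at most this big

    -- logs between backups * bytes per log * fudge factor (2x?) =
    --    archive space to plan for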

To be fair, I have always used the "screw it, let's start it up and see what happens" methodology. With this method, you allocate a REALLY big archive log area, clean it up often, and wait to see what happens when the database reaches a steady state. I like this because, without a doubt, the numbers you used to do your careful calculation will turn out to be wrong and throw everything off anyway.

-- 
Regards, 

Lee E. Parsons                  		
Systems Oracle DBA	 			lparsons_at_world.std.com
