Re: large database maintenance
Date: Fri, 19 Nov 1993 00:46:38 GMT
Message-ID: <CGprHr.M4p@freenet.carleton.ca>
In a previous article, petter@fdmetd.uucp (Petter Henrik Hansen) asks:
>
>When using online backup and archive:
> 1. How are the data accessed when the
>    tablespace is in "begin backup" mode?
The database files belonging to a tablespace are read and written normally during a backup. The only difference is that the timestamp in the file header block will not be updated should a checkpoint occur during the backup.
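In practice the sequence for one tablespace looks something like this (a
sketch; the tablespace name and file paths are made up):

    SQL> ALTER TABLESPACE users BEGIN BACKUP;

    $ cp /oradata/users01.dbf /backup/users01.dbf    <-- OS-level file copy

    SQL> ALTER TABLESPACE users END BACKUP;
    SQL> ALTER SYSTEM SWITCH LOGFILE;

The SWITCH LOGFILE at the end makes sure the redo covering the backup
window gets archived along with everything else.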
> 2. After "end backup" for a tablespace, the data in the redo log
> are applied to the tablespace. Will this cost me much CPU?
The data in the redo log are only applied to the tablespace during recovery, not at "end backup". What happens during a backup is that extra timestamps are written to the redo log in order to delimit the so-called "fuzzy" redo entries written while the backup was in progress.

In addition, there is an init.ora parameter, LOG_BLOCKS_DURING_BACKUP. During normal operation, when a change is made to a database block, only information on the changed bytes is written to the log. During a backup, when LOG_BLOCKS_DURING_BACKUP = TRUE (the default), the entire block is written to the log. This protects against the situation in which Oracle is writing changes to a database block at the same moment that block is being read by the backup. If you can guarantee that a database block is read atomically, you can set LOG_BLOCKS_DURING_BACKUP = FALSE and save this extra work. In my experience, though, if you run the backup during a period of low activity (which I realize not everyone has), the extra redo activity is not a problem.
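The relevant init.ora fragment, shown with the default value:

    # init.ora -- whole-block logging while a tablespace is in backup mode
    LOG_BLOCKS_DURING_BACKUP = TRUE   # set FALSE only if your platform
                                      # guarantees atomic block reads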
> 3. When restoring an online backup of the database, is this going
>    to take an extremely long time? (In short, what exactly is happening?)
It depends on how much work has been done on the database since the file was last backed up (i.e. how many redo logs you need to apply). Once you have restored the file from a backup, Oracle must re-apply all of the changes made to that file between the time it was backed up and the present. In my experience the time to perform a restore depends on, first, how fast you can locate the backup copy of the file and restore it to a working disk; second, how fast you can locate the required logs and get them onto disk; and third, how many logs you need to apply.
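Mechanically, restoring and recovering one datafile goes something like
this (a sketch using SQL*DBA; the file names are made up, and Oracle will
prompt you for each archived log it needs):

    $ cp /backup/users01.dbf /oradata/users01.dbf    <-- restore the file

    SQLDBA> CONNECT INTERNAL
    SQLDBA> STARTUP MOUNT
    SQLDBA> RECOVER DATAFILE '/oradata/users01.dbf'
    SQLDBA> ALTER DATABASE OPEN;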
> 4. What is the best disc configuration to do this? (Striping
> each big table and index, lots of disks etc.)
>Anybody out there that may give me any present and working solutions on
>a +4Gb database that must be up 24 hours?
A great rule of thumb for large databases is: the more disks the merrier (if you can afford them). Nothing is faster than backing up to and restoring from disk (with secondary and historical backups going to tape). But even if you have everything on tape, ORGANIZATION is the key to a good backup strategy: when a crisis occurs, if you know right away exactly which backups you need and where they are, the restore is already half done.
>
>What exactly is written to the redo log? We seem to have 200 Kb written
>to the redo log for each transaction, and this seems a bit much.
Every change made to the database is also written to the redo log. Bear in mind that "every change" includes changes to index blocks and rollback segments, not just the table rows you touch, so the redo for a transaction can easily exceed the size of the row data itself. And if a backup is running, the whole-block logging described above inflates it further.
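If you want to see where your 200 Kb is going, one way to measure the redo
generated by a transaction is the 'redo size' statistic (assuming you can
query V$SYSSTAT; read it before and after the transaction and take the
difference):

    SELECT name, value
      FROM v$sysstat
     WHERE name = 'redo size';

Bear in mind the counter is instance-wide and cumulative, so do this on a
quiet system.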
>
>Is there any way to look at the contents of the redo log?
>
>Is it better to split a table of +-30 000 000 rows in smaller tables or
>just stripe it on disk?
For run-time performance, striping is better. But several smaller tables are easier to restore using export/import if you ever need to (since, with luck, you will only have to restore one of them).
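As an illustration of the import argument: with separate tables you can
dump and reload just the one you need using the export/import utilities
(a sketch; the username, password, and table name are made up):

    $ exp scott/tiger tables=orders_1993 file=orders_1993.dmp
    $ imp scott/tiger tables=orders_1993 file=orders_1993.dmp ignore=y

With one big striped table you would be re-importing all 30 million rows
instead.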
--
Doug Harris
Database Administrator, System Development Division, Statistics Canada