comp.databases.oracle.server - Re: Backup Script
Use RMAN and forget this "Oracle 7 museum stuff".
"Charles J. Fisher" <cfisher_at_rhadmin.org> wrote in message
news:Pine.LNX.4.44.0207251047340.30545-100000_at_galt.rhadmin.org...
> Hi, Howard.
>
> I'm attaching my own backup script that you can forward to this guy if you
> want. I've used it successfully both in cloning and in "oh my god the disk
> died" situations.
>
> Advantages:
>
> 1. My script doesn't use a lot of shell functions; the logic is easier to
> follow.
> 2. My script can use compression utilities. I use both gzip and bzip2 in
> production, but a really big database should go with compress.
> I know that you don't like compression.
> 3. I use sqlplus /nolog - on most UNIX platforms, Oracle now hides passwords
> from ps -ef, but this wasn't always the case. Do examine hide.c on
> Metalink if this is a problem for you.
> 4. I do clean up the archived logs, and my script will deal with 10,000
> logs in the archive dest (where a bare "rm *" fails with "argument list too long").
> 5. I use a little more whiz-bang Korn shell functionality.
> 6. I get pretty bored at work sometimes, so it's documented up the wazoo.
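On point 4, a minimal sketch (directory and file counts are hypothetical) of why piping ls into xargs survives where a bare "rm *" can exceed the kernel's argument-length limit:

```shell
# "rm *" expands every name into one argv array and can exceed ARG_MAX;
# "ls | xargs rm" streams the names and lets xargs batch the rm calls.
dir=$(mktemp -d)                 # hypothetical scratch directory
i=0
while [ "$i" -lt 500 ]; do       # stand-in for 10,000 archived logs
    : > "$dir/log.$i"
    i=$((i + 1))
done
( cd "$dir" && ls | xargs rm )   # same idiom the script uses
remaining=$(ls "$dir" | wc -l)
rm -rf "$dir"
echo "$remaining"
```

Note that "ls | xargs rm" is only safe when filenames contain no whitespace or quote characters, which holds for Oracle archived logs.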
>
>
> Disadvantages:
>
> 1. I do take the tablespace in and out of backup mode for each datafile.
> Since I'm compressing, this seems like a good idea.
> 2. Most Linux systems don't come with the Korn shell; download it at
> http://kornshell.com.
> 3. I do make hot copies of online redo, which is *very* useful for
> cloning, but causes the heads of Oracle purists to spin (I hate the
> sound of snapping vertebrae in the morning). Using hot-copied redo will
> prevent you from rolling forward through logs (assuming that recovery
> works at all).
> 4. I don't skip anything.
>
> ----------------------------------------------------------------------
> # --------------------------------------------------------------------------
---Received on Sat Jul 27 2002 - 13:02:01 CDT
>
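The script references $DEST, $ARCH, $ARCH2, $UDUMP, $INITORA, $SQLPLUS, $CONNECT, $COMPRESS and $SUFFIX, whose definitions fall outside the excerpt. The values below are purely illustrative assumptions, not the poster's settings:

```shell
# Hypothetical settings - the original definitions are not in the excerpt.
ORACLE_SID=PROD                                  # assumed instance name
DEST=/backup/$ORACLE_SID                         # backup destination
ARCH=/u01/oradata/$ORACLE_SID/arch               # archive log destination
ARCH2=/u01/oradata/$ORACLE_SID/arch2             # temporary archive area
UDUMP=/u01/app/oracle/admin/$ORACLE_SID/udump    # user trace directory
INITORA=/u01/app/oracle/dbs/init$ORACLE_SID.ora  # parameter file(s)
SQLPLUS=sqlplus
CONNECT="connect / as sysdba"                    # or "connect internal" pre-9i
COMPRESS="gzip"                                  # gzip, bzip2, or compress
SUFFIX=.gz                                       # must match $COMPRESS
echo "$DEST"
```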
> integer NFILES=0 INDEX=0
>
>
> # Clean out any remnants of a previous backup.
> cd $DEST/arch
>
> if [ "$(pwd)" != "$DEST/arch" ]
> then
> exit
> fi
>
> ls | xargs rm > /dev/null 2>&1
> cd ..
> rm * > /dev/null 2>&1
>
>
> # Copy the init(sid).ora files.
> cp $INITORA $DEST
>
>
> # Start sqlplus as a Korn shell coprocess to generate a list of datafiles.
> $SQLPLUS /nolog |&
>
> # Stuff sqlplus' input - it is easy to deadlock a coprocess; "quit" is critical.
> print -p "$CONNECT
> set sqlprompt ''
> set sqlnumber off
> set trimout on
> set head on
> set pagesize 2048
> set linesize 2048
>
> ALTER DATABASE BACKUP CONTROLFILE TO '$DEST/CONTROL.BAK';
>
> SELECT
> '+ '||RTRIM(tablespace_name), RTRIM(file_name)
> FROM
> sys.dba_data_files
> ORDER BY
> file_name;
> quit"
>
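The |& / print -p / read -p trio is ksh's coprocess facility; bash (4+) spells it coproc. A toy sketch of why closing the child's input matters - the role "quit" plays above - with tr as a hypothetical stand-in for sqlplus:

```shell
#!/usr/bin/env bash
# bash analog of ksh's "|&" coprocess; tr stands in for sqlplus here.
coproc UP { tr 'a-z' 'A-Z'; }
exec 3<&"${UP[0]}"                       # keep our own copy of the read end
printf 'select 1;\nquit\n' >&"${UP[1]}"  # the "print -p" step
eval "exec ${UP[1]}>&-"   # close the child's stdin - the job "quit" does for
                          # sqlplus; omit this and the read below blocks forever
out=""
while IFS= read -r line <&3; do          # the "read -p" step
    out="${out}${line}|"
done
exec 3<&-
echo "$out"
```

In ksh itself none of the fd juggling is needed; the script's print -p / read -p pair is the native form.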
>
> # Process the output of sqlplus.
> while read -p line
> do
>
> # Any line not beginning with "+" is to be ignored.
> # Any line beginning with "+" is stripped of the "+" and appended to the array.
>
> case $line in
> +*)
> DFLIST[NFILES]=${line#+}
>
> NFILES=$((NFILES + 1))
>
> ;;
> esac
> done
>
> # The array of tablespaces and datafiles has now been constructed; begin backup.
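The read loop above can be exercised without a database; the here-document below is hypothetical sqlplus output, including a heading line that the "+" filter discards:

```shell
#!/usr/bin/env bash
# Feed the same case/append logic some canned sqlplus-style output.
NFILES=0
while IFS= read -r line; do
    case $line in
        +*) DFLIST[NFILES]=${line#+}     # strip the leading "+"
            NFILES=$((NFILES + 1)) ;;
    esac
done <<'EOF'
TABLESPACE_NAME  FILE_NAME
+ SYSTEM /u02/oradata/PROD/system01.dbf
+ USERS /u02/oradata/PROD/users01.dbf
EOF
echo "$NFILES"
```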
>
>
>
> $COMPRESS $DEST/CONTROL.BAK
>
>
> function backup_datafile
> {
> # This shell function will make a hot backup of a single Oracle data file.
> # It takes two arguments - the Oracle tablespace name, and the
> # fully-qualified path of the datafile.
>
> print "$CONNECT
> ALTER TABLESPACE $1 BEGIN BACKUP;
> host cp $2 $DEST
> ALTER TABLESPACE $1 END BACKUP;
> quit" \
> | $SQLPLUS /nolog > /dev/null 2>&1
>
> # If enough disk space is available, the compress operation can be moved
> # outside of the function, or eliminated altogether.
>
> $COMPRESS $DEST/$(basename $2)
> }
>
>
> while (( INDEX < NFILES ))
> do
>
> # Even though there is only one argument here, it becomes two in the function.
>
> backup_datafile ${DFLIST[INDEX]}
>
> INDEX=$((INDEX + 1))
>
> done
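Each array entry holds "TABLESPACE /path/file" in one string, so the deliberately unquoted expansion word-splits into the two arguments the function expects. A self-contained illustration (names and paths hypothetical):

```shell
#!/usr/bin/env bash
show_args() { echo "$# args: tablespace=$1 file=$2"; }
entry=" USERS /u02/oradata/PROD/users01.dbf"   # as stored by the read loop
result=$(show_args $entry)   # unquoted on purpose: splits into two words
echo "$result"
```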
>
>
> # Force a write of the current redo.
> #print "$CONNECT
> # ALTER SYSTEM ARCHIVE LOG CURRENT;
> # quit" | \
> # $SQLPLUS /nolog > /dev/null 2>&1
>
>
> # Copy the online redo logs. Oracle says that you're not supposed to do this.
> # The "Oracle8 Backup and Recovery Handbook" (ISBN 0-07-882389-7) has the
> # following warning on page 94: "You should never take a backup of the
> # online redo log files, as the online log file has the 'end of backup'
> # marker and would cause corruptions if used during recovery."
> $SQLPLUS /nolog |&
>
> # The ugly SQL below selects only a single member from each group.
> print -p "$CONNECT
> set sqlprompt ''
> set sqlnumber off
> set trimout on
> set head on
> set pagesize 2048
> set linesize 2048
>
> SELECT
> '+ '|| RTRIM(x.member)
> FROM
> v\$logfile x
> WHERE
> x.member IN
> (
> SELECT
> member
> FROM
> v\$logfile
> WHERE
> group# = x.group# AND rownum < 2
> );
> quit"
>
> # The copy must be done with all possible speed, compressing afterwards.
> CLIST=""
>
> # Process the output of sqlplus.
> while read -p line
> do
>
> # Any line not beginning with "+" is to be ignored.
> # Any line beginning with "+" is a redolog file to be copied and compressed.
>
> case $line in
> +*)
> cp ${line#+} $DEST/do-not-use-$(basename ${line#+})
> CLIST="$CLIST $DEST/do-not-use-$(basename ${line#+})";
> ;;
> esac
> done
>
>
> $COMPRESS $CLIST
>
> # Force a write of the current redo again, to ensure a good copy of the
> # hot backup. (I don't like to do this now - it seems to corrupt the
> # archived logs.)
> #print "$CONNECT
> # ALTER SYSTEM ARCHIVE LOG CURRENT;
> # quit" | \
> # $SQLPLUS /nolog > /dev/null 2>&1
>
> # Now attend to the archive logs.
> cd $ARCH
>
> # Move the archivelogs to the temporary area.
> # Note: "mv * $ARCH2" may not work, since the argument list might be too large.
> for x in *
> do
> mv $x $ARCH2
> done
>
>
> # If Oracle was in the act of writing an archivelog, give it time to finish.
> sleep 30
>
>
> cd $ARCH2
>
> # Compress all the archivelogs. Do this with cron on a regular basis as well.
> for x in *[!$SUFFIX]
> do
> $COMPRESS $x
> done
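A note on the glob: *[!$SUFFIX] is a character-class trick - with SUFFIX=.gz it really means "last character is none of . g z", which happens to skip already-compressed files. A case statement states the intent directly; a sketch with hypothetical log names:

```shell
#!/usr/bin/env bash
SUFFIX=.gz
to_compress=""
for x in arch_100.log arch_101.log.gz arch_102.log; do  # hypothetical names
    case $x in
        *"$SUFFIX") ;;                          # already compressed: skip
        *) to_compress="$to_compress $x" ;;     # $COMPRESS would run here
    esac
done
echo "$to_compress"
```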
>
>
> mkdir $DEST/arch
>
>
> # Move the archivelogs to the destination area.
> for x in *
> do
> mv $x $DEST/arch
> done
>
>
> # Now attend to the controlfile. Make a backup copy and a trace.
> cd $UDUMP
>
> # Locate and delete old outputs of 'ALTER DATABASE BACKUP CONTROLFILE TO TRACE'.
> CONTROL=$(grep -l 'CREATE CONTROLFILE' *.trc)
>
> if [[ ! -z "$CONTROL" ]]
> then
> rm -f $CONTROL
> fi
>
> print "$CONNECT
> ALTER DATABASE BACKUP CONTROLFILE TO TRACE;
> quit"| \
> $SQLPLUS /nolog > /dev/null 2>&1
>
> CONTROL=$(grep -l 'CREATE CONTROLFILE' *.trc)
>
> mv $CONTROL $DEST
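The grep -l trick can be tried without Oracle by planting fake trace files (names and contents hypothetical):

```shell
#!/usr/bin/env bash
dir=$(mktemp -d)
printf 'STARTUP NOMOUNT\nCREATE CONTROLFILE REUSE DATABASE "PROD" ...\n' \
    > "$dir/prod_ora_1234.trc"
printf 'ORA-00600: internal error\n' > "$dir/prod_ora_5678.trc"
# -l prints only the names of files containing the pattern.
CONTROL=$(cd "$dir" && grep -l 'CREATE CONTROLFILE' *.trc)
rm -rf "$dir"
echo "$CONTROL"
```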
>
> # Restore Instructions
>
> # If you have simply lost or corrupted a data file, restore the backup
> # data file, plus all archive logs produced after the timestamp on the
> # file. With this material in place, you can run
> # "RECOVER AUTOMATIC DATAFILE '/path/to/datafile.dbf';"
>
> # If you need to clone the entire database, load the *exact same* Oracle
> # software binary image, restore the contents of the backup, then edit the
> # file named "ora*.trc". Remove everything except for the "CREATE
> # CONTROLFILE", modify the paths if necessary, then enter svrmgrl (or
> # sqlplus on 8i+), do a "startup nomount", then run the "CREATE
> # CONTROLFILE". After a delay, do a "RECOVER AUTOMATIC DATABASE;" and
> # after another delay, enter "ALTER DATABASE OPEN;" and your database
> # should come online. Also make sure that you've renamed the
> # "do-not-use*" redologs to working paths.
>
> # For a real restore, you should be using "RECOVER DATABASE UNTIL CANCEL
> # USING BACKUP CONTROLFILE;" (and do not use any of the do-not-use* files).
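The clone sequence above, condensed into a hedged sqlplus sketch. The database name, clauses, and paths are assumptions: the real CREATE CONTROLFILE must be pasted from your own edited ora*.trc.

```sql
-- Hypothetical clone-recovery session; run in svrmgrl or sqlplus (8i+).
STARTUP NOMOUNT
-- Paste the edited CREATE CONTROLFILE from the backed-up ora*.trc here,
-- with paths adjusted for the clone host:
CREATE CONTROLFILE REUSE DATABASE "PROD" NORESETLOGS ARCHIVELOG
    -- ... LOGFILE / DATAFILE clauses from the trace ...
;
RECOVER AUTOMATIC DATABASE;
ALTER DATABASE OPEN;
-- For a real restore after loss, use instead:
-- RECOVER DATABASE UNTIL CANCEL USING BACKUP CONTROLFILE;
```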
>
>