Re: NT/2000 schedulable Process

From: Paul Drake <paled_at_home.com>
Date: Sat, 23 Feb 2002 16:48:15 GMT
Message-ID: <3C77C84E.2020408@home.com>


Sybrand Bakker wrote:

> On Fri, 22 Feb 2002 23:20:21 -0500, "Rom" <orom_at_systec.com> wrote:
>
>>Thanks for the answer.
>>Thing is that I am about to create the backup scripts, but I don't have any
>>samples. do you have one by any chance?
>>
>>I mean, I made many of them on Unix, but NT is kinda new to me.
>>
>>Thanks,
>>Ofer
>>
>
> You'll find that buying a backup product like Arcserve or BackupExec
> with a dedicated Oracle Agent is really *much* cheaper.
> The Oracle Agent is usually just around $1000. I don't know what your
> hourly rates are, but robust scripting takes real time: the
> off-the-shelf, tested solution is probably what you want, unless you
> really want to do it yourself.
>
> Regards
>
>
> Sybrand Bakker, Senior Oracle DBA

Do you want to maintain a copy of the hot backup set on disk (compressed or not), in addition to the backup sets on tape, to help reduce your time to recovery (if you have free space available)? Do you want to back up to a SAN/NAS instead of locally attached tape? If so, ARCServe/Veritas might not be your best bet. Have you looked into the use of RMAN yet?

If you're going to script it, I'd use the following outline (steps marked with * are optional); a sketch of the resulting script appears after the list:

clear the staging area
switch the logfile (spool out the current logfile # to file)
loop through each tablespace

    put the tablespace in backup mode
    loop through each datafile in the tablespace

       ocopy the datafile to the backup staging area
       *run dbv.exe against the backup copy to look for corrupt blocks
       *compress the datafile with a command line utility
       *delete the uncompressed staged copy of the datafile
    end loop
    take the tablespace out of backup mode
end loop
create the backup controlfile
switch the logfile again (spool out the current logfile # to file)
create a report of v$backup joined to v$datafile to summarize what was backed up and to confirm that no datafiles in v$backup have a status of 'ACTIVE'
mail the dbv logfile and report
rename the backup set with a unique date/time
create an entry in your catalog recording that the backup occurred (could be a log file that is appended to, or a table in the/a database)
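
To make the outline concrete, here is a minimal sketch of what the spooled backup script might look like for a database with just SYSTEM and USERS tablespaces. The SID, drive letters, file names and block size are all hypothetical; substitute your own layout:

    rem run_hot_backup.sql -- generated script; paths, SID and blocksize
    rem are assumptions for illustration only
    alter system switch logfile;

    alter tablespace SYSTEM begin backup;
    host ocopy d:\oradata\PROD\system01.dbf e:\backup\staging\system01.dbf
    host dbv file=e:\backup\staging\system01.dbf blocksize=8192 logfile=e:\backup\dbv_system01.log
    alter tablespace SYSTEM end backup;

    alter tablespace USERS begin backup;
    host ocopy d:\oradata\PROD\users01.dbf e:\backup\staging\users01.dbf
    host dbv file=e:\backup\staging\users01.dbf blocksize=8192 logfile=e:\backup\dbv_users01.log
    alter tablespace USERS end backup;

    alter system switch logfile;
    alter database backup controlfile to 'e:\backup\staging\control_PROD.bkp' reuse;

    rem confirm that no datafile was left in backup mode
    select d.name, b.status
    from   v$backup b, v$datafile d
    where  b.file# = d.file#
    and    b.status = 'ACTIVE';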

What I use is one script that queries dba_tablespaces and dba_data_files to include all datafiles in the backup set dynamically (nothing new here); it spools the actual hot backup script and then executes it. Something along these lines:
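
This is only a sketch of such a generator; the directory names are hypothetical, and restricting it to PERMANENT tablespaces (so temporary tablespaces are skipped) is an assumption you may want to adjust:

    rem gen_hot_backup.sql -- spools the hot backup script, then runs it.
    rem Staging/spool paths and the PERMANENT-only filter are assumptions.
    set heading off feedback off pagesize 0 linesize 250 trimspool on
    spool e:\backup\run_hot_backup.sql

    select cmd
    from  (select t.tablespace_name ts, 1 seq,
                  'alter tablespace ' || t.tablespace_name || ' begin backup;' cmd
           from   dba_tablespaces t
           where  t.contents = 'PERMANENT'
           union all
           select d.tablespace_name, 2,
                  'host ocopy ' || d.file_name || ' e:\backup\staging'
           from   dba_data_files d, dba_tablespaces t
           where  d.tablespace_name = t.tablespace_name
           and    t.contents = 'PERMANENT'
           union all
           select t.tablespace_name, 3,
                  'alter tablespace ' || t.tablespace_name || ' end backup;'
           from   dba_tablespaces t
           where  t.contents = 'PERMANENT')
    order by ts, seq, cmd;

    spool off
    set heading on feedback on pagesize 24
    @e:\backup\run_hot_backup.sql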

The reason I switch the logfile before and after the backup executes is that I then know exactly which archived redo logs are required to make this backup set consistent during recovery.
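
A sketch of how the required log range can be bracketed and spooled out (the spool file name is hypothetical):

    rem record the range of archived redo logs this backup set needs
    spool e:\backup\log_range.txt

    alter system switch logfile;
    select 'first sequence# needed: ' || sequence#
    from   v$log
    where  status = 'CURRENT';

    rem ... the hot backup itself runs here ...

    alter system switch logfile;
    select 'last sequence# needed:  ' || (sequence# - 1)
    from   v$log
    where  status = 'CURRENT';

    spool off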

hth.

Paul

Received on Sat Feb 23 2002 - 10:48:15 CST
