Re: Where is New DBA FAQs Part 1

From: Kevin Loney <"75663,327"_at_compuserve.com>
Date: 1996/11/25
Message-ID: <3299BC41.6F9B_at_compuserve.com>


stephanie yolayes wrote:
>
> I am looking for the New DBA FAQs Part 1.

The New DBA FAQs
(Frequently Asked Questions for/from Database Administrators) by Kevin Loney

First in a Series.



Modification history

24-SEP-92  Original version, by Kevin Loney.
07-OCT-92  Modified to include NOTE about O/S-specific conditions that
           affect whether or not an entire block image is written
           out per transaction during hot backup mode (see note 3
           of the 'Backup Strategy for Archivelogs' section).
           Note came from gpongrac_at_us.oracle.com.
07-FEB-96  Modified, added NOTES. - Kevin Loney.
19-NOV-96  Updated as part of the New DBA FAQ series. - Kevin Loney.


This article cannot be published.
References: ORACLE DBA Handbook, 7.3 Edition (Loney)
            ORACLE Backup and Recovery Handbook (Velpuri)
For scripts from these books, visit the Oracle Press web site at www.osborne.com.


Part 1. Backup Strategy

Q: What backup strategy should I use to best protect myself against media
   and node failures?  How should I structure my backup procedures to
   minimize the vulnerabilities they cause?  And how can I have them check
   for both physical and logical errors?

A:
I will assume throughout this discussion that the reader has at least a passing knowledge of the backup options available in Oracle. The recovery options and procedures are well documented, both in the DBA Guide and in the online RECOVERY.DOC file in the RDBMS directory. The procedures described here will allow the user to best exploit the tools available to him/her.

NOTE: Rama Velpuri, the Backup/Recovery guru at Oracle, has written an entire book on this subject (Oracle Backup and Recovery Handbook, Osborne/McGraw-Hill). It's from my publisher, but my honest appraisal: it's one of the best-written computer books I've ever read.

If you're really into the subject, then get Rama's book; this article only scratches the surface. The Backup and Recovery chapter of the ORACLE DBA Handbook is not as intensive as Rama's book, but is certainly more thorough than this article.

There are three valid ways of backing up an Oracle (V6-7) database. They are:

  1. Shut the database down and back up all tablespace, log, and control files.
  2. While the database is up, export the entire database.
  3. While the database is up, running in ARCHIVELOG mode, set the tablespaces into backup state and back up their files, then get the control files and the archived redo logs.

1. The Cold Backup.

Shut down the database and perform a cold backup by going disk by disk and backing up all files in the [oracle] directories. When complete, restart the database.

You may need to put an automated check into your backup program to use cold backups effectively, as several situations can cause the database shutdown to fail. For example, deadlocked users can prevent a SHUTDOWN IMMEDIATE from working. Also, "memory leaks" can keep the database in a suspended mode - in which case the only solution is to SHUTDOWN ABORT. So add a script to your shutdown commands that does an independent check to see if the database actually went down.

Some DBAs prefer to use the following command sequence:

shutdown abort;
startup;
shutdown immediate;

That way, they can be sure the database goes down via the SHUTDOWN ABORT command, and then shut it down cleanly via the SHUTDOWN IMMEDIATE command. The alternative is to use just a SHUTDOWN IMMEDIATE, followed by a separate command that checks to make sure the shutdown worked.
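As a minimal sketch of such a check (the script and spool file names here are mine, not part of any standard kit): after the SHUTDOWN IMMEDIATE, have the backup job run a trivial query through sqldba and then search the spooled file for the marker text. If the marker is present, the instance is still up and the cold copy should not proceed.

rem db_up_check.sql (hypothetical name) - run via sqldba after the shutdown.
rem If the instance is really down, the select fails ("ORACLE not available")
rem and the marker text never reaches the spool file.
CONNECT INTERNAL
spool db_up_check.lst
select 'INSTANCE STILL UP AT '||to_char(sysdate,'DD-MON-YY HH24:MI') marker
  from dual;
spool off
exit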

2. The Export.



A full system export picks up information that user exports do not; this includes third-party indexes and grants. Why perform an export at all? Well, regular backups check that the database files are physically acceptable; export checks that they are logically readable.

[Aside: This does not imply that they are logically importable; corrupt records may be exported into the dump file, preventing import. See my paper, #111, "Editing Corrupted Export Files (When Import Fails)" in the proceedings from the 1992 IOUW. Also, see Sam Youngman's article on the Export file structure, as presented at OOW 1996.]

For example, export will detect data structure errors in tables (these are particularly prevalent in tables containing LONG datatypes). Export is of limited use for full-system backups; in order to recreate from an export file, you'd have to re-create the database from scratch first. However, it is worthwhile to periodically validate every record in the database via export. The dump file can then be used to retrieve particular tables/users if needed.
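For reference, a full system export along these lines can be as simple as the following one-liner (the account and file names are examples; any account with the EXP_FULL_DATABASE role will do):

exp system/manager full=y grants=y indexes=y constraints=y file=full_db.dmp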

Immediately following the export, run scripts to map the tablespaces to owners, and owners to tablespaces. This will allow you to see who owns what where. In the event of a tablespace loss, you would then be able to quickly determine what users/systems will be affected. Here are two scripts I use:

rem user object locations
set pagesize 60
break on owner on tablespace_name
column owner format A20
column tablespace_name format A32
column objects format A20

select substr(owner,1,20) owner,
       substr(tablespace_name,1,32) tablespace_name,
       count(*)||' tables' objects
  from sys.dba_tables
 group by substr(owner,1,20),
          substr(tablespace_name,1,32)
union
select substr(owner,1,20) owner,
       substr(tablespace_name,1,32) tablespace_name,
       count(*)||' indexes' objects
  from sys.dba_indexes
 group by substr(owner,1,20),
          substr(tablespace_name,1,32)

spool user_locs.lst
/
spool off
clear columns
clear breaks

rem ts_users.sql
rem user list by tablespace for export reference
column tablespace_name format A32
column owner format A20
column objects format A20
break on tablespace_name on owner
set pagesize 60

select substr(tablespace_name,1,32) tablespace_name,
       substr(owner,1,20) owner,
       count(*)||' tables' objects
  from sys.dba_tables
 group by substr(tablespace_name,1,32),
          substr(owner,1,20)
union
select substr(tablespace_name,1,32) tablespace_name,
       substr(owner,1,20) owner,
       count(*)||' indexes' objects
  from sys.dba_indexes
 group by substr(tablespace_name,1,32),
          substr(owner,1,20)

spool ts_users.lst
/
spool off

These will create the tablespace/user mapping files.

NOTE: Export/Import improved between V6 and V7. The biggest headache with them in V7 is that Import brings back the tables in alphabetical order, and if you have constraints on the tables, you need to bring the tables back in hierarchical key order - so you need to disable the constraints prior to Import. Also, having the indexes on the tables during Import makes the Import process take orders of magnitude longer than it should.
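A rough sketch of one way to handle the constraint problem (the script name and the &owner substitution variable are mine): generate DISABLE commands for a schema's foreign-key constraints from DBA_CONSTRAINTS, spool them to a file, and run that file before the Import.

rem gen_disable_fks.sql (hypothetical) - build DISABLE commands for FK constraints;
rem run the spooled file (@disable_fks.lst) before starting the Import.
set pagesize 0
set feedback off
set verify off
set linesize 200
spool disable_fks.lst
select 'alter table '||owner||'.'||table_name||
       ' disable constraint '||constraint_name||';'
  from sys.dba_constraints
 where constraint_type = 'R'
   and owner = upper('&owner');
spool off

A matching ENABLE script, run after the Import, re-applies the constraints; it will fail on any table whose imported rows violate them, which is itself a useful check.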

As of 7.3, you can use Direct Exports to cut the time required by your exports in half; but if you use this option, you can't Import the generated dump file into an earlier release of the database (e.g., 7.2).
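A Direct path export needs only one more keyword on the command line shown earlier (again, the file name is an example):

exp system/manager full=y direct=y file=full_db.dmp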

You can also map objects to datafiles via the following script.

select DBA_EXTENTS.Owner,
       DBA_EXTENTS.Segment_Name,
       DBA_DATA_FILES.Tablespace_Name,
       DBA_DATA_FILES.File_Name,
       SUM(DBA_EXTENTS.Bytes) Bytes
  from DBA_EXTENTS, DBA_DATA_FILES
 where DBA_EXTENTS.File_ID = DBA_DATA_FILES.File_ID
 group by DBA_EXTENTS.Owner,
          DBA_EXTENTS.Segment_Name,
          DBA_DATA_FILES.Tablespace_Name,
          DBA_DATA_FILES.File_Name;

3. The Hot Backup.



The ARCHIVELOG function that came in Oracle V6 opened up a whole new set of possible backup options, but also caused a lot of heartache until they got it working right.

Good parts about Archivelog:

  1. It gives you 24-hour availability.
  2. It gives you point-in-time recovery.
  3. It allows you to restore without losing any data at all.
  4. It lets you keep the database up, preserving the data in your SGA.

Bad parts about Archivelog:

  1. Administration can be difficult.
  2. Repetitive failed load attempts can create massive logs.
  3. When it gets stuck, the database comes to a halt.

Implementing Archivelog:



To start using Archivelog, simply alter your startup command file:

connect internal;
startup mount <SID> exclusive;
alter database archivelog;
archive log start;
alter database open;

and start up the database. From then on, the database will be in Archivelog mode until you turn it off (alter database noarchivelog). Note that a failed transaction is still a transaction; Archivelog keeps track of rollback segment extents, so rolled-back inserts or deletes affect it just like completed transactions. Turn it off (shutdown, restart in noarchivelog mode) before doing big loads from SQL*Loader.
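The turn-off sequence is the mirror image of the startup file above; a sketch, to be run from sqldba while no one else is connected:

connect internal;
shutdown immediate;
startup mount <SID> exclusive;
alter database noarchivelog;
alter database open;

Reverse it (alter database archivelog) after the load completes, and take a fresh backup, since the load itself cannot be recovered from the archived logs.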

Another option for controlling the volume of redo (and therefore of archived log data) generated is to use the UNRECOVERABLE keyword, available as of 7.2. UNRECOVERABLE can be used with the CREATE INDEX and CREATE TABLE AS SELECT commands; if it is used, then the transaction will not generate redo entries.
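For example (the table and index names here are made up):

create table app.emp_history unrecoverable
    as select * from app.emp;

create index app.emp_name_ix
    on app.emp (last_name)
    unrecoverable;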

When you start archiving, an archived log will be written every time the database is about to overwrite a previously written online redo log file. The archived log is written to the location indicated by the log_archive_dest parameter in your init.ora file. In V6 the archived logs will all be the same size as your online redo logs; V7 can produce variably sized archived logs. They will increase in number until they run out of space on their destination device. At that point the database will freeze until you clear more space for them in the log_archive_dest location. So have a second location ready to receive them.
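The related init.ora entries look something like this (the destination shown is just an example):

# start the ARCH process automatically at instance startup
log_archive_start  = TRUE
# where the archived logs go; the format mask is appended to this
log_archive_dest   = u03:[oracle.arcs]arch
# %s is replaced by the log sequence number
log_archive_format = _%s.arc

With log_archive_start set to TRUE you can drop the "archive log start" line from the startup file shown earlier.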

Backup strategy for Archivelogs:


  1. Do each tablespace one at a time. That is, rather than putting them all into backup state, then backing them up, then taking them all out of backup state, do them each separately. You don't want to risk having a system crash while the entire database is in begin backup state; recovery is a mess. Minimize your window of vulnerability by having only one tablespace in backup state at any one time.

If you are using RAID devices or mirrored disks, in which the loss of a single disk does not cause the loss of a file, then you can simplify your backup procedures by putting all of the tablespaces into backup state at once and backing them up as a set.

2. Before you back up the control file, force an archive log switch. This will update the header information in the control file.

3. Don't do it during user activity. When in backup state, a tablespace's activity is still written to the archive logs. However, it's written block-by-block rather than byte-by-byte. So changing one record in a tablespace that's being backed up will result in that record's entire block being written to the archive area.

NOTE: This is correct only for those platforms where the physical sector size is less than the Oracle logical block size. On systems where the physical disk transfer size is equal to the Oracle block size, we do not incur the penalty of having to log the entire block. This is true for MVS, VM, and perhaps other systems.

Sample Archive log command file for VMS:

NOTE: See Rama's book for a more complete script; the code is available at the Web site.



$ dup = "backup/ignore=(noback,interl,label)/log"
$ sqldba
        CONNECT INTERNAL
        alter tablespace system begin backup;
        exit

$ dup u01:[oracle]ora_system.dbs tape1:ora_system.bck/sav
$ sqldba
        CONNECT INTERNAL
        alter tablespace system end backup;
        alter tablespace appl1 begin backup;
        exit

$ dup u02:[oracle]appl1.dbs tape1:appl1.bck/sav
$ sqldba
        CONNECT INTERNAL
        alter tablespace appl1 end backup;
        exit
$!
$! get archive logs
$ rename/log u03:[oracle.arcs]*.arc *.arclogs
$ rename/log u04:[oracle.arcs2]*.arc *.arclogs   ! secondary arcs dir
$ sqldba
        CONNECT INTERNAL
        alter system switch logfile;
        exit

$ dup u03:[oracle.arcs]*.arclogs,u04:[oracle.arcs2]*.arclogs tape1:logs.bck/sav
$ del/log u03:[oracle.arcs]*.arclogs;0
$ del/log u04:[oracle.arcs2]*.arclogs;0
$!
$! get control file
$ sqldba
        CONNECT INTERNAL
        alter database backup controlfile to 'u01:[oracle]control.bkp' reuse;
        exit

$ dup u01:[oracle]control.bkp tape1:control.bck/sav

Note: The "alter system switch logfile" command is all but undocumented (see pg 3-15 of the DBA guide; it refers you to a nonexistent cross-reference). It will NOT show up in the alert log. Don't be alarmed by that; it does actually work.



NEW NOTE: Some people modify the script above to automatically pull the file names from DBA_DATA_FILES, so they don't have to hardcode the file names. I would only do that if the datafiles were all mirrored, so that a media failure wouldn't take down my database.
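A minimal sketch of that idea (the script and spool file names are mine): list every datafile with its tablespace, and use the spooled listing to generate, or at least verify, the hardcoded names in the backup command file.

rem datafile_list.sql (hypothetical) - one line per datafile, for the backup script
set pagesize 0
set feedback off
spool datafile_list.lst
select tablespace_name||'  '||file_name
  from sys.dba_data_files
 order by tablespace_name, file_name;
spool off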

Integrating the three methods.



Use hot backups for all of your transaction-critical data. As a backup to the hot backups, perform cold backups periodically. As a backup to the physical file backups, use Exports.

If you have a large amount of static data, you may wish to export only certain small tables, while relying on loading programs to re-create the static data when needed. As your database grows larger, the time required to perform Imports will remove Export as a viable backup option for all but the smallest of your tables.
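For those small tables, a table-level export is sufficient (the owner, table, and file names below are examples only):

exp appowner/password tables=(codes,rates) file=small_tables.dmp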



If at all possible in your O/S, it is also a good idea to shadow the disks on which your realtime and archived redo logs reside.

Kevin Loney is the author of the ORACLE DBA Handbook, 7.3 Edition, coauthor of ORACLE: The Complete Reference, Electronic Edition, and coauthor of Advanced ORACLE Tuning and Administration, all published under Osborne/McGraw-Hill's Oracle Press imprint. See www.osborne.com for scripts and sample chapters.
