Re: Archive log compress

From: Joel Garry <joel-garry_at_home.com>
Date: 5 Mar 2003 13:59:03 -0800
Message-ID: <91884734.0303051359.54e06142_at_posting.google.com>


Mark.Powell_at_eds.com (Mark D Powell) wrote in message news:<2687bb95.0303050724.5405c1a_at_posting.google.com>...
> badrinathn_at_yahoo.com wrote in message news:<f643a33e.0303040857.34870964_at_posting.google.com>...
> > Archive log question
> > Hi,
> >
> > I have a question on Archive logs. I am compressing the archive logs
> > with a cron job. What will happen if the arch process has just started
> > to create a new file and the compress cron job starts compressing it?
> >
> > If the arch file being written gets compressed, will Oracle write to a
> > different arch file, or will it hang and fail?

I believe compress will read whatever is in the file at that moment, write the compressed copy out under the .Z name, and then remove the original name. Meanwhile, arch still holds the original file open and keeps writing to the now-unlinked inode, so you wind up with a compressed, truncated (corrupt) copy of the log, plus a complete but unreachable copy whose blocks are given back to the filesystem as unused space as soon as arch closes it.
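
You can see the race outside of Oracle with a couple of lines of shell.
This is just an illustration -- the file names are made up, and you can
substitute gzip/zcat if your system does not have compress/zcat:

   # Background writer stands in for arch, appending for a few seconds.
   ( for i in 1 2 3 4 5 6; do echo "redo block $i"; sleep 1; done ) > fake_arch.log &
   sleep 2
   compress fake_arch.log   # reads the partial file, writes fake_arch.log.Z,
                            # then removes the name fake_arch.log
   wait
   ls -l fake_arch.log*     # only the truncated .Z is left
   zcat fake_arch.log.Z     # the later blocks never made it; they went to the
                            # now-unlinked inode and were freed when the writer exited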

> >
> > Badrinath
>
> Badrinath, modify your shell script so that it does not compress the most
> recent n archive log files, and then you do not have to worry about
> potentially corrupting an archived log file.
>
> The wc command can be used to quickly count the number of files and an
> ls -ltr can be used to list them in time order so your read loop stops
> short of the full number.
>
> HTH -- Mark D Powell --
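
For what it's worth, a rough sketch of that kind of cron job, with made-up
paths and counts -- adjust ARCHDIR and the *.arc pattern to match your
log_archive_dest and log_archive_format:

   #!/bin/sh
   # Compress all but the newest few archived logs so the compress never
   # races arch.  Everything below is an example, not a drop-in script.
   ARCHDIR=/u01/oradata/ORCL/arch    # your log_archive_dest
   KEEP=3                            # leave the newest 3 files alone

   cd $ARCHDIR || exit 1
   total=`ls *.arc 2>/dev/null | wc -l`
   ncomp=`expr $total - $KEEP`
   if [ $ncomp -gt 0 ]; then
       # ls -tr lists oldest first, so the first $ncomp names are the
       # oldest archived logs and are safe to compress
       for f in `ls -tr *.arc | head -$ncomp`; do
           compress $f
       done
   fi

The only real trick is the arithmetic that stops KEEP files short of the
end of the list.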

Or justify a failover system and buy lots more disk space.

jg

--
_at_home is bogus.
Remember to mount a scratch monkey.
Received on Wed Mar 05 2003 - 22:59:03 CET
