Re: tar program not suitable for backup?
Date: 1996/02/01
Message-ID: <Pine.HPP.3.91.960201115310.24801B-100000@hp712ptt.detroit.deco.com>
On Wed, 31 Jan 1996, Tom Cooke wrote:
This happened to me (sort of) long ago. See below.
> In article <4e47vv$lok@raffles.technet.sg>, Ngo Lip Wee
> <lwngo@tmi.com.sg> writes
> >. . .
> >After tar of all datafiles to tape, erasing them from the disk,
> >and doing a full restore, the following is observed:
> >
> >du and ls shows 2.6G occupied, but
> >df shows much less than 2.6G occupied,
> >and the dba_data_files view shows 2.6G
If you are sure you're converting blocks to bytes correctly . . .
> >
> >It seems that the tar program packs the datafiles (database
> >files are sparse in our case) as reflected by df. The concern
> >here is will the 'free space' as reported by df (they shouldn't
> >be free, they should be reserved for future table expansions)
> >be used for some other unrelated files, causing problems to my
> >database later.
> >
> >Any comment or advice would be much appreciated.
> >
In some filesystems, sparse files are implemented like . . . well,
for example: suppose your C program opens a file, writes one byte,
seeks 1GB forward, then writes one byte. Some filesystems can store
that file as just two blocks: one for the first block; the filesystem
simply "remembers" the big hole in the middle of the file, then
stores the last block. The middle is defined to be filled with nulls
for the purposes of subsequent reads of that file. In this case, ls
would report a 1,000,000,002 byte file, but df would say that your
1G filesystem is nearly empty.
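You can see this for yourself with dd, if your dd supports "seek="
(the file name here is just an example); seeking forward in the
output without writing anything is exactly what punches the hole:
$ dd if=/dev/zero of=/tmp/holey bs=1M count=1 seek=1023
$ ls -l /tmp/holey     # apparent size: the full 1024M
$ du -k /tmp/holey     # blocks really allocated: only ~1M worth
On a filesystem that supports holes, ls reports the full gigabyte but
du reports only the one real megabyte (plus a few indirect blocks).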
It used to be on some unixes that tar was not "smart enough" to see
this, and would write a tape filled with nearly a gigabyte of zeroes.
Then, when you read the file back in, it wrote the sparse file in a
"dumb" way, actually allocating and writing 1G worth of zeroes. Maybe
your version of tar is "smarter" about sparse files :-). I dimly
recall this happening to me, except it burned me because the "dumb"
version of tar I was using could not fit the sparse files back onto
the drive I'd backed them up from. Sort of the inverse of your
problem. I believe it was on SunOS.
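If you have GNU tar around, I believe it has a --sparse (-S) option
that tells it to detect the holes when writing the archive and to
re-create them on extract, something like this (the tape device name
is just an example):
$ tar --sparse -cvf /dev/rmt0 *.dbf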
> What we use is cpio (especially for raw partitions).
BTW, cpio is a file-oriented tool and does not work on raw partitions (unless it has changed a lot since I used it).
> I'll follow this with interest. BTW, Oracle have said to us that they
> will under no circumstances support any database which you use
> compression tools on (in backups or elsewhere)...
> --
> Tom Cooke
cpio vs. tar: the original problem of sparse files might be fixed by
using cpio to back up and then restore the files, because cpio may be
"dumber" than tar with respect to sparse files. It might restore the
sparse files as real files filled with nulls, where tar was putting
back the "magic" sparse hole and putting back real blocks only where
Oracle had written during the tablespace create. I've found that on
SysV-derivative unixes tar is usually dumber, and on BSD-derived
systems cpio is dumber. The dumbness I speak of is with regard to
other things like named pipes, device drivers, symbolic links, etc.,
and may not be true with respect to sparse files. And in this case,
"dumb" would be good.
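For what it's worth, a typical cpio backup and restore of the
datafiles would look something like this (paths and tape device are
just examples):
$ find /u01 -name '*.dbf' -print | cpio -ov > /dev/rmt0
$ cpio -idmv < /dev/rmt0    # restore, creating directories as needed
Whether your cpio fills in the holes or preserves them is exactly the
"dumbness" question above, so test it on a small sparse file first.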
Another workaround might be to pre-create your datafiles so that they are "filled" and not sparsely allocated, then use the "reuse" clause in the datafile part of the tablespace create, i.e.:
$ dd if=/dev/zero of=/u01/datafile.dbf bs=1M count=1000
. .
sql > create tablespace foo datafile '/u01/datafile.dbf' size 1000M reuse;
The dd will create a "real" file filled with nulls, not a "sparse" file made by writing at the beginning, seeking really far forward, and then writing something at the end of the file.
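If you want to convince yourself that the dd really allocated
everything, compare the apparent size against the blocks actually
used:
$ ls -l /u01/datafile.dbf
$ du -k /u01/datafile.dbf   # should show roughly the full 1024000 KB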
I'm not sure whether reuse will care whether the reused file has Oracle-specific data in it, though.