Re: 10g datapump
Date: Mon, 20 Oct 2008 21:24:28 GMT
> On 14 Oct, 16:29, Chuck <chuckh1958_nos..._at_gmail.com> wrote:
>> gazzag wrote:
>>> On 13 Oct, 21:40, Chuck <chuckh1958_nos..._at_gmail.com> wrote:
>>>> When exporting to multiple files using options like filesize=100m and
>>>> dumpfile=filename%U, is there a way to definitively know when oracle is
>>>> finished writing to a specific dump file?
>>>> Here's why I ask. I need to compress the dump files, but don't want to
>>>> wait until the entire job is finished before starting. I want to begin
>>>> compressing files once I know oracle is finished with them, while it may
>>>> still be writing to other files.
>>> Any particular operating system and version? Actually, a proper
>>> Oracle version might help too.
>> Oracle 10.2.0.3 on Solaris 10
> As I understand it 10gR2 supports a COMPRESSION parameter with
> DataPump as well as a PARALLEL one. The functionality that you're
> trying to create might actually exist. Does this document help?
> Failing that, a home-grown solution might involve the Unix "fuser" command.
That document mentions the COMPRESSION option only for the 11g Data Pump utility, when connected to an 11g database.
I tried both the fuser and lsof commands. It seems that the 10g expdp closes files it is not completely done with and reopens them later. I have yet to find a way to determine whether expdp is really finished with a file and will not go back to it later. I suspect there may be a query you could run against the master table for the export, but there's no documentation on how that table is used.

Received on Mon Oct 20 2008 - 16:24:28 CDT
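For what it's worth, the fuser-based approach the thread describes could be sketched roughly like this. The directory and file pattern are assumptions, and gzip is left commented out. The caveat above still applies: 10g expdp may close a file and reopen it later, so "no open handles right now" does not prove expdp is finished with it.

```shell
#!/bin/sh
# Hypothetical sketch: scan a dump directory and flag any file that no
# process currently holds open. DUMPDIR and the *.dmp pattern are
# assumptions, not anything from the thread.
DUMPDIR=${DUMPDIR:-/export/dumps}

for f in "$DUMPDIR"/*.dmp; do
    [ -f "$f" ] || continue                 # skip if glob matched nothing
    if fuser "$f" >/dev/null 2>&1; then
        echo "still open: $f"               # some process has it open
    else
        echo "no open handles: $f"          # nothing has it open right now
        # gzip "$f"                         # uncomment to actually compress
    fi
done
```

Because of the reopen behavior described above, a safer trigger than fuser alone would be waiting for the whole expdp job to report completion before compressing anything.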