
Re: Data Pump slower than equivalent Export ?!?

From: Ronald Rood <>
Date: Sat, 17 Sep 2005 12:44:18 +0200
Message-ID: <>

On Fri, 16 Sep 2005 21:47:42 +0200, Matthias Hoys wrote (in article <432b215e$0$29281$>):

> "Ronald Rood" <> wrote in message 

>> On Fri, 16 Sep 2005 16:19:29 +0200, wrote
>> (in article <>):
>>> Oracle EE on RHEL 3
>>> Oracle claim that Data Pump is a considerably faster method of loading
>>> and unloading data to/from the database.

>> <snip>
>>> Export takes 43 seconds and Data Pump takes over 3 minutes!
>>> 95% of the delay in Data Pump appears to be in the step:
>>> "Estimate in progress using BLOCKS method..."
>>> What exactly is done during this step..?

>> Hi Matt,
>> I am quite sure some improvements can still be made in Data Pump, but
>> for a serious speed test you need to take a larger database. I just
>> finished some testing and was quite happy with it: impdp read 46 GB of
>> data from a source database and stored it in a target database in less
>> than 2 hours, over a network link (100 Mb). The ftp session alone of
>> the conventional expdat.dmp (49 GB) of the same database took 1 hour
>> and 20 minutes...
>> Disappointing is the fact that impdp uses parallel workers only for the
>> data and not for the other objects. Those are serialized, and in my
>> case that took nearly 10 hours to complete. Conventional exp (full)
>> took 10 hours to complete.
>> So, I think impdp deserves some more testing...
>> Completely missing are warnings about NLS problems. Conventional exp
>> did not do much, but it did give warnings...
>> With kind regards / met vriendelijke groeten,
>> Ronald
> Today, I did some tests with Data Pump on AIX 5.2 (Oracle ...). First 
> impressions? Not so good. Lots of ORA-00600 errors during a full export, or 
> an export of only metadata. I opened a TAR and Oracle is investigating this. 
> And yes, I have read the docs ;-) Other issue: to be able to perform a full 
> export, a user needs not only EXP_FULL_DATABASE but also CREATE TABLE. 
> Second issue: expdp can't overwrite already existing dump files with the 
> same name ...
> The "good old" exp works fine though.
> Matthias Hoys

I also noticed some problems during expdp ('illegal use of LONG datatype', for example)... That's why I started with impdp directly from the source database over a network link. Quite powerful, if it works... It is sensitive to NLS settings, and in some cases I got data corruption that was tracked back to an insert into a table from a remote table. It looks like impdp first creates the table and does the insert afterwards. In the end I also used exp/imp to do the job, but I will certainly check for improvements in release 2.
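For reference, a network-mode import of the kind described above looks roughly like this. This is only a sketch: the link name srcdb, the directory object dpump_dir, the schema, and the credentials are placeholders, and the exact parameters available vary by release.

```shell
# Prerequisites, created in the target database by a suitably
# privileged user (names here are placeholders):
#   CREATE DATABASE LINK srcdb CONNECT TO scott IDENTIFIED BY tiger
#     USING 'source_tns';
#   CREATE DIRECTORY dpump_dir AS '/u01/app/oracle/dpump';

# Network-mode import: with NETWORK_LINK, impdp pulls the rows over
# the database link and inserts them directly, so no dump file is
# written at all -- only the log file goes to the directory object.
impdp system/manager NETWORK_LINK=srcdb DIRECTORY=dpump_dir \
      SCHEMAS=scott LOGFILE=imp_scott.log
```

Because the data never touches a dump file, the database link itself becomes the bottleneck, which matches the experience described above.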
Possible improvements are:

1) better compatibility checks for NLS settings
2) get rid of the 'illegal use of LONG datatype' error in expdp
3) get rid of the data corruption / database link issue
4) parallelism after the data move (procedures, indexes, etc.)
5) a summary of the errors encountered at the end of the log
6) keep the file size limit when adding extra files to expdp
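For items 4 and 6, the multi-file parallel export syntax in question looks roughly like this (a sketch; the directory object and file names are placeholders):

```shell
# Parallel export into multiple dump files. The %U template expands to
# a two-digit sequence number (exp_01.dmp, exp_02.dmp, ...) so each
# worker gets its own file. FILESIZE caps the size of each piece --
# this is the limit item 6 asks expdp to keep honoring when extra
# files are added to a running job.
expdp system/manager DIRECTORY=dpump_dir DUMPFILE=exp_%U.dmp \
      FILESIZE=2G PARALLEL=4 FULL=Y LOGFILE=exp_full.log
```

As noted above, the PARALLEL setting helps only during the data phase; the remaining object types are still created serially.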

impdp is already fast but can still be a lot faster, and better. It's only as good as the weakest link; in my case, the database link.

With kind regards / met vriendelijke groeten,
Ronald

Received on Sat Sep 17 2005 - 05:44:18 CDT