Re: Expdp Dump File Information

From: Mikhail Velikikh <mvelikikh_at_gmail.com>
Date: Wed, 15 Apr 2020 10:39:37 +0100
Message-ID: <CALe4HpkDr6DOunn4+NWDdOj3iOsg0g0kSrt3FTLYzRPpVMjo-w_at_mail.gmail.com>



> -- In this case, if you try the SQLFILE option with keep_master, it will
> fail with an ORA- error and Data Pump will not find the master table in
> any of the dumps from 1 to 9.
>

That is fine. The goal is to see if a dumpfile set is complete. If a piece with the master table is missing, that dumpfile set is automatically incomplete.

> Say someone has given a dumpset to you, and before the import itself we
> want to make sure that the dumpset is complete [without using the impdp
> SQLFILE option]. We have access via sqlplus only to system views/packages
> to check.
>

The impdp utility itself validates the completeness of the dump file set through the master table:

https://docs.oracle.com/en/database/oracle/oracle-database/12.2/sutil/oracle-data-pump-overview.html#GUID-6F532CA9-CE90-47D6-84ED-AD10F1E18056

Once the entire master table is found, it is used to determine whether all
> dump files in the dump file set have been located.
>

Therefore, I would try to use the same mechanism. I initially missed that you do not want to use the impdp SQLFILE option. You should be able to run the same SQLFILE job through the Data Pump API.
There are two downsides:
1) the import job actually runs, which might be time-consuming;
2) it might create a SQL file on the server filesystem.
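For reference, a SQLFILE job through the API could look roughly like the following. This is an untested sketch; the directory object MY_TMP, the dump file df1.dmp and the job/file names are placeholders taken from the examples below.

DECLARE
  h  NUMBER;
  js VARCHAR2(30);
BEGIN
  -- a SQL_FILE operation is the API equivalent of impdp sqlfile=...
  h := DBMS_DATAPUMP.OPEN(operation => 'SQL_FILE', job_mode => 'FULL',
                          job_name  => 'TEST_SQLFILE');
  DBMS_DATAPUMP.ADD_FILE(h, 'df1.dmp', 'MY_TMP',
                         filetype => DBMS_DATAPUMP.KU$_FILE_TYPE_DUMP_FILE);
  DBMS_DATAPUMP.ADD_FILE(h, 'test01.sql', 'MY_TMP',
                         filetype => DBMS_DATAPUMP.KU$_FILE_TYPE_SQL_FILE);
  DBMS_DATAPUMP.START_JOB(h);
  -- an incomplete dumpfile set should surface here as ORA-39059
  DBMS_DATAPUMP.WAIT_FOR_JOB(h, js);
  DBMS_DATAPUMP.DETACH(h);
END;
/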

Command-line impdp has the ABORT_STEP parameter for that, which can be used to validate the dumpfile set without doing the import (either into the database or into a SQL file):
https://docs.oracle.com/en/database/oracle/oracle-database/20/sutil/oracle-datapump-import-utility.html#GUID-0CB0C4E9-54E7-49DB-AEBC-AA3ACC074F74

I see that the relevant parameter in the Data Pump API does not have the same description, which might be a documentation bug:

https://docs.oracle.com/en/database/oracle/oracle-database/12.2/arpls/DBMS_DATAPUMP.html#GUID-A032469F-8225-4530-9C21-F2AF71F257F1

Value must be 0. Inserting values other than 0 into this argument will have
> unintended consequences.
>

Thankfully, there is another parameter, MASTER_ONLY, which can be used in both command-line and PL/SQL Data Pump imports:

https://docs.oracle.com/en/database/oracle/oracle-database/12.2/arpls/DBMS_DATAPUMP.html#GUID-62324358-2F26-4A94-B69F-1075D53FA96D

Indicates whether to import just the master table and then stop the job so
> that the contents of the master table can be examined.
>
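In PL/SQL, MASTER_ONLY is set through SET_PARAMETER. A rough, untested sketch (MY_TMP and df1.dmp are placeholders; KEEP_MASTER is added here only so the loaded master table survives the job for inspection):

DECLARE
  h  NUMBER;
  js VARCHAR2(30);
BEGIN
  h := DBMS_DATAPUMP.OPEN(operation => 'IMPORT', job_mode => 'FULL',
                          job_name  => 'TEST_MASTER_ONLY');
  DBMS_DATAPUMP.ADD_FILE(h, 'df1.dmp', 'MY_TMP');
  -- import just the master table, then stop
  DBMS_DATAPUMP.SET_PARAMETER(h, 'MASTER_ONLY', 1);
  -- retain the master table after the job completes
  DBMS_DATAPUMP.SET_PARAMETER(h, 'KEEP_MASTER', 1);
  DBMS_DATAPUMP.START_JOB(h);
  DBMS_DATAPUMP.WAIT_FOR_JOB(h, js);
  DBMS_DATAPUMP.DETACH(h);
END;
/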

Beware though, it has slightly different semantics from the ABORT_STEP=-1 parameter. For instance, here is the output of the impdp job when I tried to import just the piece containing the master table and deliberately omitted the other piece:

impdp userid=test_user dumpfile=df1 directory=my_tmp job_name=test_incomplete sqlfile=test01 abort_step=-1

Import: Release 12.2.0.1.0 - Production on Wed Apr 15 04:28:10 2020

Copyright (c) 1982, 2017, Oracle and/or its affiliates. All rights reserved.
Password:

Connected to: Oracle Database 12c Enterprise Edition Release 12.2.0.1.0 - 64bit Production
ORA-39002: invalid operation
ORA-39059: dump file set is incomplete

The same MASTER_ONLY job finished without errors (I removed the sqlfile option due to: ORA-39208: Parameter MASTER_ONLY is invalid for SQL_FILE jobs):

impdp userid=test_user dumpfile=df1 directory=my_tmp job_name=test_incomplete master_only=yes

Import: Release 12.2.0.1.0 - Production on Wed Apr 15 05:20:25 2020

Copyright (c) 1982, 2017, Oracle and/or its affiliates. All rights reserved.
Password:

Connected to: Oracle Database 12c Enterprise Edition Release 12.2.0.1.0 - 64bit Production
Master table "TEST_USER"."TEST_INCOMPLETE" successfully loaded/unloaded
Job "TEST_USER"."TEST_INCOMPLETE" successfully completed at Wed Apr 15 05:20:28 2020 elapsed 0 00:00:01

Both approaches can be used to get the master table loaded into the database (obviously, only if it is present in the dumpfile set), but the ABORT_STEP approach also tests the dumpfile set for completeness.
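Incidentally, for checking whether one particular piece contains the master table (the datapump_info approach mentioned further down the thread), DBMS_DATAPUMP.GET_DUMPFILE_INFO reads the dump file header directly. A rough, untested sketch, again with MY_TMP and df1.dmp as placeholders:

SET SERVEROUTPUT ON
DECLARE
  info  ku$_dumpfile_info;
  ftype NUMBER;
BEGIN
  -- reads the file header only; no import job is started
  DBMS_DATAPUMP.GET_DUMPFILE_INFO('df1.dmp', 'MY_TMP', info, ftype);
  FOR i IN 1 .. info.COUNT LOOP
    IF info(i).item_code = DBMS_DATAPUMP.KU$_DFHDR_MASTER_PRESENT THEN
      DBMS_OUTPUT.PUT_LINE('master table present: ' || info(i).value);
    END IF;
  END LOOP;
END;
/

Note this only tells you whether the header of that piece flags the master table; it says nothing about whether the rest of the set is present.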

On Fri, 10 Apr 2020 at 20:35, Sumit Kumar T <dba.tyagisumit_at_gmail.com> wrote:

> Great, yes, but it all depends on whether the missing dump piece contains
> the master table or not.
>
> -- say if total 10 dumps were generated during expdp
> EXP_DP_01 to EXP_DP_10.
>
> -- Assume that the master table is stored in dump number 10 (EXP_DP_10).
>
> -- And unfortunately only dump 10 is missing at the target side.
>
> -- In this case, if you try the SQLFILE option with keep_master, it will
> fail with an ORA- error and Data Pump will not find the master table in
> any of the dumps from 1 to 9.
>
> To see if a particular dump piece contains the master table, we have
> datapump_info under the dbms_datapump API.
>
> On Mon, Apr 6, 2020, 17:34 Mikhail Velikikh <mvelikikh_at_gmail.com> wrote:
>
>>
>> I experimented with it a little bit. I was running impdp using an
>> incomplete set of files:
>>
>> impdp userid=test_user dumpfile=df1 directory=my_tmp
>> table_exists_action=skip keep_master=yes job_name=test_incomplete
>>
>> Import: Release 12.2.0.1.0 - Production on Mon Apr 6 07:34:53 2020
>>
>> Copyright (c) 1982, 2017, Oracle and/or its affiliates. All rights
>> reserved.
>> Password:
>>
>> Connected to: Oracle Database 12c Enterprise Edition Release 12.2.0.1.0 -
>> 64bit Production
>> ORA-39002: invalid operation
>> ORA-39059: dump file set is incomplete
>>
>> After adding the trace parameter (trace=480300) to the command, I got the
>> following in the trace file:
>>
>> KUPM:07:41:01.217: ORA-39245: import job requires export dump file
>> "df2.dmp"
>>
>> The important part is this is happening *after *the data pump creates
>> the master table.
>> Thus, I am inclined to think that this kind of information can be
>> obtained only when the master table is imported. I do not know a way to get
>> that information from the dump file itself but you can easily get the
>> master table using a simple sqlfile job:
>>
>> impdp userid=test_user dumpfile=df1 directory=my_tmp
>> job_name=test_incomplete keep_master=yes sqlfile=test01
>>
>> Once the master table is created, I obtained the list of files using the
>> following query:
>>
>> SQL> select distinct file_name from test_incomplete where file_name is
>> not null;
>>
>> FILE_NAME
>> ------------------------------------------------------------
>> /tmp/df2.dmp
>> /tmp/df1.dmp
>>
>> I would rather run a sqlfile import in your scenario to validate that a
>> dumpset is self-contained.
>>
>>
>> On Sun, 5 Apr 2020 at 05:38, Vanita Sharma Tyagi <
>> dba.vanitasharma_at_gmail.com> wrote:
>>
>>> There is no error; this is a generic scenario. We have end users
>>> starting the import process (self-service, from a tool we have provided
>>> them) sometimes without copying the dump files completely, and impdp
>>> obviously fails with "dump file set is incomplete", which is correct.
>>> We wanted to check if we can somehow validate whether a dumpset is
>>> complete without running the actual import, so that we can stop them
>>> from running impdp if the dumpset is not complete.
>>>
>>>
>>> On Sun, Apr 5, 2020, 02:16 Al B. <albert.y.balbekov_at_gmail.com> wrote:
>>>
>>>> I would ask for accompanying export log file and for parameter file and
>>>> command line used. Having export log and export parameters will help in
>>>> potential troubleshooting.
>>>>
>>>> thx,
>>>> Albert
>>>>
>>>>
>>>> On Sat, Apr 4, 2020, 2:30 AM Vanita Sharma Tyagi <
>>>> dba.vanitasharma_at_gmail.com> wrote:
>>>>
>>>>> Hi Experts ,
>>>>>
>>>>> Need some help in validating the dump files in an expdp dump set. I
>>>>> followed Oracle MetaLink note 462488.1, but nowhere can I see any
>>>>> information about how many files are present in my dumpset.
>>>>>
>>>>> Say someone has given a dumpset to you, and before the import itself
>>>>> we want to make sure that the dumpset is complete [without using the
>>>>> impdp SQLFILE option]. We have access via sqlplus only to system
>>>>> views/packages to check.
>>>>>
>>>>>
>>>>> BR
>>>>> Vanita
>>>>>
>>>>

--
http://www.freelists.org/webpage/oracle-l
Received on Wed Apr 15 2020 - 11:39:37 CEST
