Re: Importing huge dump file.

From: Krish <Krishna.Bussu_at_gmail.com>
Date: Fri, 07 Sep 2007 14:38:20 -0000
Message-ID: <1189175900.366623.12970@g4g2000hsf.googlegroups.com>


On Sep 7, 2:33 am, aina.t..._at_gmail.com wrote:
> On Sep 7, 8:12 am, Krish <Krishna.Bu..._at_gmail.com> wrote:
>
> > Hi all,
>
> > Need your suggestions to import a large dump file.
> > The compressed dump file is around 9 GB (after gunzip it will
> > be more than 80 GB).
> > This dump file has around 100 tables that need to be imported.
>
> > I don't have the luxury of importing the entire dump file at once, as
> > this takes a lot of time and resources.
> > So I have decided to import a few tables at a time. I created a few
> > parfiles with the TABLES= parameter and plan to run the import whenever
> > I get an opportunity (normally at night...).
>
> > I'm following the steps below to import:
> > * truncate the tables to be imported
> > * disable all the constraints and triggers, and drop the indexes
> > * make sure the tablespaces have enough free space
> > * set the buffer size to around 25 MB
> > * statistics=none, grants=none, resumable=y, feedback=100000, ignore=y,
> >   buffer=3145728 (30M)
> > * import the data using the parfile
> > * enable the constraints and triggers, and create the indexes on the
> >   imported tables
>
> > However, I am not satisfied with the performance.
> > Could somebody please let me know your thoughts on this process, and
> > also any recommendations to improve the performance of the import?
>
> > I also have some concerns about the COMMIT=Y/N parameter; I am a bit
> > confused about this setting.
> > I don't mind increasing the UNDO tablespace if needed.
>
> > Oracle Version Details:
> > Oracle9i Enterprise Edition Release 9.2.0.6.0 - Production
> > With the Partitioning option
>
> > OS: Linux 2.6.5-7.282-bigsmp #1 SMP
>
> > Thanks,
> > krish
>
> Put the DMP file on a different disk than the tablespace files.
> Use a larger BUFFER for the imp command.
> Use the RECORDLENGTH parameter.
> We import 140 GB in 20 hours.

Thanks a lot for your reply and for sharing your experience. Yes, the dump file is on a different drive than the database files. I remember reading about the RECORDLENGTH parameter, and that we can set either RECORDLENGTH or BUFFER (they are mutually exclusive; I have already set BUFFER).
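For reference, here is roughly what one of my parfiles (call it batch1.par) and the imp call look like. The file paths, schema, and table names are only placeholders, not the real ones, and the buffer value is just the one mentioned above:

    file=/u05/exp/app_full.dmp
    log=/u05/imp/batch1_imp.log
    fromuser=appowner
    touser=appowner
    tables=(ORDERS,ORDER_ITEMS,CUSTOMERS)
    buffer=3145728
    commit=y
    ignore=y
    grants=n
    statistics=none
    resumable=y
    feedback=100000

    $ imp system/****** parfile=batch1.par

My understanding of COMMIT (please correct me if I am wrong) is that COMMIT=Y makes imp commit after each buffer-sized array insert, so the undo requirement stays small but the run can take longer, while COMMIT=N commits only once per table, which needs enough UNDO to hold the largest table but avoids the per-array commit overhead.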

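For the disable/enable steps, I am generating the statements from the data dictionary along these lines (APPOWNER is just a placeholder for the real schema owner, and the index DDL is only an example):

    -- disable foreign keys and triggers before the load
    select 'alter table ' || owner || '.' || table_name ||
           ' disable constraint ' || constraint_name || ';'
      from dba_constraints
     where owner = 'APPOWNER'
       and constraint_type = 'R';

    select 'alter trigger ' || owner || '.' || trigger_name || ' disable;'
      from dba_triggers
     where owner = 'APPOWNER';

    -- after the load: the same statements with ENABLE, then recreate the
    -- indexes, for example:
    create index appowner.orders_ix1 on appowner.orders (order_date)
      nologging parallel 4;

The NOLOGGING/PARALLEL index rebuild is mainly to shorten the post-load step; I would reset the parallel degree on the indexes afterwards.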
Now planning to implement the following too:

Please advise.

Received on Fri Sep 07 2007 - 09:38:20 CDT
