Re: Importing huge dump file.

From: Krish <Krishna.Bussu_at_gmail.com>
Date: Fri, 07 Sep 2007 17:11:16 -0000
Message-ID: <1189185076.579867.5920@19g2000hsx.googlegroups.com>


On Sep 7, 9:44 am, "fitzjarr..._at_cox.net" <fitzjarr..._at_cox.net> wrote:
> On Sep 7, 2:33 am, aina.t..._at_gmail.com wrote:
>
> > On Sep 7, 8:12 am, Krish <Krishna.Bu..._at_gmail.com> wrote:
>
> > > Hi all,
>
> > > I need your suggestions for importing a large dump file.
> > > The compressed dump file is around 9 GB (if I gunzip it, it will
> > > be more than 80 GB).
> > > The dump file has around 100 tables that need to be imported.
>
> > > I don't have the luxury of importing the entire dump file at once,
> > > as that takes a lot of time and resources.
> > > So I have decided to import a few tables at a time. I created a few
> > > parfiles with the TABLES= parameter and plan to run the import
> > > whenever I get an opportunity (normally at night...).
>
> > > I'm following the steps below to import (a sketch of one parfile
> > > and the constraint DDL follows the list):
> > > * truncate the tables to be imported
> > > * disable all the constraints and triggers, and drop the indexes
> > > * make sure the tablespaces have enough free space
> > > * set a large import buffer (around 25-30 MB)
> > > * statistics=none, grants=n, resumable=y, feedback=100000,
> > > ignore=y, buffer=31457280 (30 MB)
> > > * import the data using the parfile
> > > * enable the constraints and triggers, and re-create the indexes
> > > on the imported tables
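>
> > > For reference, one of my parfiles looks roughly like this (the
> > > file path, log name, and table names are placeholders):
>
> > > # imp_batch1.par -- one batch of tables from the full dump (illustrative names)
> > > file=/stage/full_export.dmp
> > > log=imp_batch1.log
> > > tables=(CUSTOMERS,ORDERS,ORDER_ITEMS)
> > > statistics=none
> > > grants=n
> > > resumable=y
> > > feedback=100000
> > > ignore=y
> > > buffer=31457280
> > > # commit=y would commit after each buffer load instead of once per
> > > # table, trading some insert speed for a much smaller undo requirement
>
> > > And the prep/cleanup DDL per table is along these lines (table,
> > > constraint, and trigger names are placeholders):
>
> > > ALTER TABLE orders DISABLE CONSTRAINT orders_fk1;
> > > ALTER TABLE orders DISABLE ALL TRIGGERS;
> > > -- ... run the import, then ...
> > > ALTER TABLE orders ENABLE CONSTRAINT orders_fk1;
> > > ALTER TABLE orders ENABLE ALL TRIGGERS;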
>
> > > However, I am not satisfied with the performance.
> > > Could somebody please share your thoughts on this process, along
> > > with recommendations to improve the performance of the import?
>
> > > I also have some concerns about the COMMIT=y/n parameter; I'm a
> > > bit confused about this setting.
> > > I don't mind increasing the UNDO tablespace if needed.
>
> > > Oracle Version Details:
> > > Oracle9i Enterprise Edition Release 9.2.0.6.0 - Production
> > > With the Partitioning option
>
> > > OS: Linux 2.6.5-7.282-bigsmp #1 SMP
>
> > > Thanks,
> > > krish
>
> > Put the DMP file on a different disk than the tablespace files.
> > Use a larger buffer for the imp command.
> > Use the RECORDLENGTH parameter.
> > We import 140 GB in 20 hours.
>
> The buffer and recordlength parameters are, essentially, mutually
> exclusive: buffer is used by conventional path imports, whereas
> recordlength is used by direct path imports. Yes, when exporting/
> importing LOB data a direct path export/import will use conventional
> path, and it may be necessary to specify both parameters in the
> parameter list. No errors are thrown when both are specified, but
> know that only one or the other is used, depending upon the mode of
> the export/import. And what good is your 'advice' to 'Put DMP file
> on other disk than the tablespace file'? I don't know of any DBA
> who would knowingly and willingly place a dmp file on a filesystem
> dedicated to datafiles/tempfiles. And the OP may have space
> considerations you know nothing about.
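>
> As an illustration, a command like this is accepted with both
> parameters present (the userid and file names are placeholders):
>
> imp userid=scott/tiger file=exp_lobs.dmp tables=DOCS \
>     buffer=31457280 recordlength=65535 log=imp_docs.log
>
> A conventional path run will use the buffer value and ignore
> recordlength; a direct path run does the opposite.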
>
> Your advice is ... lacking.
>
> David Fitzjarrell

Thanks for your suggestions, David.

We have many tables with LOB segments, so per your suggestion I'm
going to set both the BUFFER and RECORDLENGTH parameters. Thanks
again for the valuable advice.
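
For reference, I'm planning to simply add both lines to my existing
parfiles (values are what I intend to try; 65535 is the maximum
recordlength imp accepts):

buffer=31457280
recordlength=65535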
