Re: Importing huge dump file.
On Sep 7, 8:12 am, Krish <Krishna.Bu..._at_gmail.com> wrote:
> Hi all,
>
> Need your suggestions to import a large dump file.
> The compressed dump file is around 9 GB (after gunzip it will be
> more than 80 GB).
> This dump file has around 100 tables that need to be imported.
>
> I don't have the luxury of importing the entire dump file at once,
> as that takes a lot of time and resources.
> So I have decided to import a few tables at a time. I created a few
> parfiles with the tables= parameter and plan to run the import
> whenever I get an opportunity (normally at night...).
>
> I'm following the steps below to import:
> * truncate the tables to be imported
> * disable all the constraints and triggers, and drop the indexes
> * make sure the tablespaces have enough free space
> * set the buffer size to around 25 MB
> * statistics=none, grants=none, resumable=y, feedback=100000,
>   ignore=y, buffer=31457280 (30 MB)
>
> * import the data using the parfile
> * enable the constraints and triggers, and recreate the indexes on
> the imported tables.
>
> However, I am not satisfied with the performance.
> Could somebody please share your thoughts on this process, along
> with recommendations to improve the performance of the import?
>
> I also have some concerns about the COMMIT=y/n parameter; I'm a bit
> confused about that setting.
> I don't mind increasing the UNDO tablespace if needed.
>
> Oracle Version Details:
> Oracle9i Enterprise Edition Release 9.2.0.6.0 - Production
> With the Partitioning option
>
> OS: Linux 2.6.5-7.282-bigsmp #1 SMP
>
> Thanks,
> krish
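For reference, a parfile along the lines you describe might look
something like this (file paths, schema, and table names here are
placeholders; buffer=31457280 is 30 MB):

  file=/u02/dump/expdat.dmp
  log=/u02/logs/imp_batch1.log
  fromuser=APPOWNER
  touser=APPOWNER
  tables=(ORDERS, ORDER_ITEMS, CUSTOMERS)
  ignore=y
  grants=n
  statistics=none
  resumable=y
  feedback=100000
  buffer=31457280
  commit=y

Note that imp takes grants=n rather than grants=none, and since you
drop and rebuild the indexes yourself you can also add indexes=n so
imp does not try to create them during the load. On your COMMIT
question: with commit=n (the default) imp commits only after each
table, so the whole table is one transaction and undo has to hold all
of it; with commit=y it commits after every buffer-sized array
insert, which keeps undo usage small. The caveat is that if a
commit=y run dies partway, the table is left partially loaded, and
rerunning with ignore=y can insert duplicate rows into tables that
have no primary or unique keys.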
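For the disable/enable steps, one approach (just a sketch, assuming
all the tables live in the importing schema) is to generate the
statements from the data dictionary in SQL*Plus and spool them to a
script:

  -- generate DISABLE statements for the foreign key constraints
  SELECT 'ALTER TABLE ' || table_name ||
         ' DISABLE CONSTRAINT ' || constraint_name || ';'
    FROM user_constraints
   WHERE constraint_type = 'R';

  -- generate one statement per table to disable all its triggers
  SELECT 'ALTER TABLE ' || table_name || ' DISABLE ALL TRIGGERS;'
    FROM user_tables;

Run the generated script before the import and the ENABLE equivalents
afterwards; enabling the constraints NOVALIDATE first and validating
later gets the tables back online sooner.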
Put the DMP file on a different disk than the tablespace files.
Use a larger BUFFER for the imp command.
Use the RECORDLENGTH parameter.
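For example, an invocation along these lines (credentials and file
names are placeholders; RECORDLENGTH maxes out at 65535 and generally
should match the value the export was written with, and it can go in
the parfile just as well):

  imp system/manager parfile=imp_batch1.par recordlength=65535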
We import 140 GB in 20 hours.
Received on Fri Sep 07 2007 - 02:33:50 CDT