Oracle FAQ Your Portal to the Oracle Knowledge Grid
HOME | ASK QUESTION | ADD INFO | SEARCH | E-MAIL US
 


Re: Importing huge dump file.

From: <fitzjarrell_at_cox.net>
Date: Fri, 07 Sep 2007 07:38:59 -0700
Message-ID: <1189175939.487745.317180@r29g2000hsg.googlegroups.com>


On Sep 7, 12:12 am, Krish <Krishna.Bu..._at_gmail.com> wrote:
> Hi all,
>
> Need your suggestions to import a large dump file.
> The compressed dump file size is around 9 GB (if I gunzip it, it will
> be more than 80 GB).
> This dump file has around 100 tables that need to be imported.
>
> I don't have the luxury of importing the entire dump file at once, as
> this takes a lot of time and resources.
> So I have decided to import a few tables at a time. I created a few
> parfiles with the TABLES= parameter and plan to run the import
> whenever I get an opportunity (normally at night...).
>
> I'm following the steps below to import:
> * truncate the tables to be imported
> * disable all the constrains , triggers and drop indexes
> * make sure tablespaces have enough freespace
> * set the buffer size to around 25 MB
> * statistics=none, grants=none, resumable=y, feedback=100000,
> ignore=y, buffer=3145728 (3 MB)
>
> * import the data using parfile.
> * enable constraints, triggers and create the indexes on the imported
> tables.
>
> However, I am not satisfied with the performance.
> Could somebody please share your thoughts on this process and any
> recommendations to improve the performance of the import?
>
> I also have a concern about the COMMIT=y/n parameter; I'm a bit
> confused about this setting.
> I don't mind increasing UNDO tablespace if needed.
>
> Oracle Version Details:
> Oracle9i Enterprise Edition Release 9.2.0.6.0 - Production
> With the Partitioning option
>
> OS:Linux 2.6.5-7.282-bigsmp #1 SMP
>
> Thanks,
> krish
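The pre-import steps quoted above (truncate the tables, disable triggers and constraints) can be scripted so the generated SQL is reviewed before it runs. This is only a sketch: the table names and output file are hypothetical, and real constraint names would come from a query against DBA_CONSTRAINTS rather than the placeholder shown.

```shell
#!/bin/sh
# Hypothetical helper: emit a SQL*Plus script that truncates each table
# to be imported and disables its triggers.  Table names below are
# examples only; constraint handling is left as a placeholder because
# actual constraint names must be queried from DBA_CONSTRAINTS.
generate_pre_import_sql() {
    for t in "$@"; do
        echo "TRUNCATE TABLE $t;"
        echo "ALTER TABLE $t DISABLE ALL TRIGGERS;"
        echo "-- disable FK/check constraints on $t (names from DBA_CONSTRAINTS)"
    done
}

generate_pre_import_sql EMP DEPT > pre_import.sql
# Then run it with:  sqlplus user/password @pre_import.sql
```

A matching post-import script would emit the reverse statements (ENABLE ALL TRIGGERS, re-create indexes) after the load completes.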

I'd set up the import through a named pipe so you don't have to uncompress the .dmp file on disk; the imp process will read the decompressed stream through the pipe. As an example:

#!/bin/ksh

date>imp_pipe.log

ORACLE_SID=<your SID here>
ORACLE_HOME=<your ORACLE_HOME here>
PATH=/usr/bin:/etc:/usr/sbin:/usr/ucb:$HOME/bin:/usr/bin/X11:/sbin:$ORACLE_HOME/bin:.
export ORACLE_HOME ORACLE_SID PATH

touch /tmp/imp_pipe ; rm /tmp/imp_pipe
mknod /tmp/imp_pipe p

# for a gzipped dump, use:  gunzip -c my_db.dmp.gz > /tmp/imp_pipe &
uncompress < my_db.dmp.Z > /tmp/imp_pipe &

imp parfile=parfile_imp

date>>imp_pipe.log
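The same pipe mechanism works for a gzipped dump like the one described above; just substitute gunzip -c for uncompress. Here is a self-contained demonstration with throwaway file names (all hypothetical), using cat as a stand-in for imp:

```shell
#!/bin/sh
# Create a small "dump", compress it, and read it back through a named
# pipe so the decompressed data never lands on disk as a whole file.
echo "sample dump contents" > demo.dmp
gzip -f demo.dmp                          # produces demo.dmp.gz

rm -f /tmp/demo_pipe
mkfifo /tmp/demo_pipe

# Writer: decompress into the pipe in the background.
gunzip -c demo.dmp.gz > /tmp/demo_pipe &

# Reader: imp would use file=/tmp/demo_pipe; cat stands in for it here.
cat /tmp/demo_pipe > demo_out.txt
wait
rm -f /tmp/demo_pipe
```

The writer blocks until the reader opens the pipe, so the two sides stay in step and only a pipe buffer's worth of decompressed data exists at any moment.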

In your parfile you'd list the file as /tmp/imp_pipe, then provide any other parameters as you'd normally supply them. Your parfile_imp might look like this:

userid=<user and password here>
log=imp_pipe_2.log
file=/tmp/imp_pipe
ignore=y
commit=y
full=y
buffer=1100000
statistics=none

Your compressed .dmp file remains compressed on the server, and the uncompressed output passed through the named pipe is processed by imp. As for your COMMIT question: commit=y makes imp commit after each buffer-full of rows rather than once per table, which keeps undo/rollback usage small, at the cost of leaving partially loaded tables behind if the import fails midway.

It's a fairly common technique.
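Since you plan several overnight runs, one per parfile, you could also generate the per-batch invocations into a reviewable plan first. The parfile names here are purely illustrative:

```shell
#!/bin/sh
# Hypothetical batch driver: write one imp command per parameter file to
# a plan file, so the plan can be inspected before anything executes.
: > imp_plan.txt
for pf in parfile_imp_batch1 parfile_imp_batch2; do
    echo "imp parfile=$pf" >> imp_plan.txt
done
cat imp_plan.txt
# To execute the plan:  while read cmd; do $cmd; done < imp_plan.txt
```

Each command in the plan would be run after recreating the named pipe and restarting the background uncompress writer, as in the script above.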

David Fitzjarrell

Received on Fri Sep 07 2007 - 09:38:59 CDT

