Home -> Community -> Usenet -> c.d.o.server -> Export Questions - Large DB

Export Questions - Large DB

From: Peter Stryjewski <pstryjew_at_worldnet.att.net>
Date: 2 Feb 1999 15:58:47 GMT
Message-ID: <36B74A54.D37DC9C6@worldnet.att.net>


Background:
Using exp/imp to migrate a database between two UNIX machines (e.g. IBM -> Sun) or from UNIX to NT.
The database consists mostly of a single large table (30M+ rows).

Solutions so far:
Use "compress on the fly" with pipes for exp/imp. Use the indexfile option to build the tables, import the data (no indexes), then build the indexes. (Used because the initial extent size for the tables usually needs to be fiddled with, or because the tables are being rebuilt in a different tablespace than the one they came from.)
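A minimal sketch of the compress-on-the-fly pipeline described above. Everything here is an assumption for illustration: gzip stands in for compress, a printf stands in for the exp writer, and all file names are placeholders.

```shell
#!/bin/sh
# Sketch only: names are placeholders; in real use the writer is exp.
PIPE=/tmp/exp_$$.pipe
OUT=/tmp/bigtab.dmp.gz
mkfifo "$PIPE"

# Reader side: compress whatever the exporter writes into the pipe,
# so the uncompressed dump never touches disk.
gzip -c < "$PIPE" > "$OUT" &

# Writer side: in real use this would be something like
#   exp userid=scott/tiger tables=BIGTAB file="$PIPE"
# A stand-in producer keeps the sketch self-contained:
printf 'pretend this is export data\n' > "$PIPE"

wait
rm -f "$PIPE"
```

The import side just reverses the direction: decompress into a second named pipe and point imp's file= parameter at that pipe.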

Questions:
I read on Deja News that using pipes slows down the process, because the pipe buffer size is usually only 8K. I haven't really seen this problem. Does it actually slow things down?
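If the small pipe buffer ever did become a bottleneck, one workaround (my assumption, not something from the post) would be to re-block the stream with dd so that downstream writes happen in larger units. A sketch with a stand-in writer and placeholder names:

```shell
#!/bin/sh
# Sketch: re-block a pipe stream into 1 MB output writes with dd.
# File names are placeholders; in real use the writer would be exp.
PIPE=/tmp/reblk_$$.pipe
mkfifo "$PIPE"

# dd drains the pipe in small reads but issues 1 MB writes downstream.
dd if="$PIPE" obs=1024k 2>/dev/null | gzip -c > /tmp/reblocked.dmp.gz &

# Stand-in for: exp ... file="$PIPE"
printf 'stand-in export stream\n' > "$PIPE"

wait
rm -f "$PIPE"
```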

For speed, should I "compress" out of the pipe or "split" out of the pipe if I have the disk space?
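The "split out of the pipe" variant might look like the following; the chunk size, file prefix, and stand-in writer are all assumptions for illustration.

```shell
#!/bin/sh
# Sketch: carve the export stream into fixed-size chunks with split.
# Prefix and chunk size are placeholders; the real writer is exp.
PIPE=/tmp/split_$$.pipe
mkfifo "$PIPE"

# split names the chunks bigtab.aa, bigtab.ab, ... as each one fills.
( cd /tmp && split -b 500M "$PIPE" bigtab. ) &

# Stand-in for: exp ... file="$PIPE"
printf 'stand-in export stream\n' > "$PIPE"

wait
rm -f "$PIPE"
```

Reassembly for import is then `cat /tmp/bigtab.* > import.pipe`, which is exactly where the unknown-chunk-count problem mentioned below comes in.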

Problems:
Initially I don't know how many chunks (and thus filenames) there will be to import, so 'cat' could bomb.
I can't use imp with the indexfile option, because it reads through the export file too fast (not all chunks are available yet).

Do I lose significant speed using the create-table, import-data, build-indexes process? It allows flexibility: I can easily modify the table and index create commands for initial/next extents on the new platform, and if it pukes on an index create (not enough temp tablespace), I can drop and restart that index create easily. What am I losing?
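The create-table / import-data / build-indexes flow reads as roughly three imp passes. This is a sketch under assumptions: credentials and file names are placeholders, the parameters are standard exp/imp options, and it needs a real Oracle instance to actually run.

```shell
# Step 1: extract DDL only into a script; no rows are loaded.
imp scott/tiger file=bigtab.dmp full=y indexfile=create_all.sql

# Step 2: edit create_all.sql (INITIAL/NEXT sizes, tablespace names),
#         then run the CREATE TABLE statements, e.g. via sqlplus.

# Step 3: load rows into the pre-created tables, skipping indexes.
#         ignore=y lets imp proceed past "table already exists".
imp scott/tiger file=bigtab.dmp full=y ignore=y indexes=n

# Step 4: run the CREATE INDEX statements from create_all.sql;
#         if one fails on temp space, drop it and rerun just that one.
```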

For NT are there similar processes?

Pete Stryjewski
pstryjew_at_worldnet.att.net

Received on Tue Feb 02 1999 - 09:58:47 CST
