Home -> Community -> Usenet -> c.d.o.server -> Export Questions - Large DB
Background:
Using exp/imp to migrate a database between two UNIX machines (IBM -> Sun,
etc.) or from UNIX to NT.
Database mostly consists of a single large table (30M+ rows).
Solutions so far:
Use "compress on the fly" with pipes for exp/imp.
Use indexfile to build tables, import data (no indexes), build indexes.
(Used because the initial extent size for tables usually needs to be
fiddled with, or the tables are being built in a different tablespace
than the one they came from.)
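The two solutions above can be sketched together. This is a minimal, hedged sketch, not the poster's exact script: gzip stands in for the classic 'compress' (same pipe usage), 'cat datafile' stands in for exp so the mechanics can run end to end, and the scott/tiger connect string, table name, and file names are hypothetical.

```shell
set -e
dir=$(mktemp -d); cd "$dir"
printf 'row1\nrow2\nrow3\n' > datafile   # stand-in for the dump stream

mkfifo exp_pipe

# Reader first: compress whatever arrives on the pipe, in the
# background, so the uncompressed dump never touches disk.
gzip -c < exp_pipe > bigtable.dmp.gz &

# Writer: in real life exp writes its dump straight into the pipe:
#   exp userid=scott/tiger tables=BIGTABLE file=exp_pipe
cat datafile > exp_pipe
wait

# The import side mirrors it — decompress into a second pipe that
# imp reads as its "file":
#   mkfifo imp_pipe
#   gzip -dc bigtable.dmp.gz > imp_pipe &
#   imp userid=scott/tiger file=imp_pipe ignore=y
gzip -dc bigtable.dmp.gz > roundtrip
```

The key ordering point is that the reader is started (in the background) before the writer, since opening a FIFO blocks until both ends are attached.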
Questions:
I read on Deja News that using pipes slows down the process, because the
pipe buffer is usually only 8K. I haven't really seen this problem.
Does it actually slow things down?
For speed, should I 'compress' out of the pipe or 'split' out of the pipe if I have the disk space?
Problems:
Initially I don't know how many chunks (and thus filenames) there will be
to import, so 'cat' could bomb.
I can't use imp with the indexfile option, because it reads through the
export file too fast (not all the chunks are available yet).
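One thing worth noting about the chunk-count problem: split generates its suffixes in lexical order (aa, ab, ac, ...), so a shell glob finds and orders all the chunks even though the count isn't known in advance. A minimal sketch, with seq standing in for the exp stream and hypothetical file names:

```shell
set -e
dir=$(mktemp -d); cd "$dir"
seq 1 300 > datafile                  # stand-in for the exp output

mkfifo exp_pipe

# Reader: carve the stream into fixed-size chunks named
# bigtable.dmp.aa, bigtable.dmp.ab, ... — the names sort lexically
# even though the final count isn't known until the export ends.
split -b 512 - bigtable.dmp. < exp_pipe &

# Writer: exp in real life:
#   exp userid=scott/tiger tables=BIGTABLE file=exp_pipe
cat datafile > exp_pipe
wait

# Reassembly: the glob expands in sorted order, so cat streams the
# chunks back in sequence; in real life it would feed a pipe that
# imp reads:
#   cat bigtable.dmp.* > imp_pipe &
cat bigtable.dmp.* > roundtrip
```

Because the glob is expanded at reassembly time, 'cat bigtable.dmp.*' doesn't need the chunk count up front — it only bombs if the chunks aren't all on disk yet, which is the same availability problem noted above.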
Do I lose significant speed using the create-table, import-data, build-indexes process? It allows flexibility: I can easily modify the table and index create commands to set the initial/next extents for the new platform, and if it pukes on an index create (not enough temp tablespace), I can drop the index and restart the create easily. What am I losing?
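The three-step process described above can be sketched as follows. The imp invocations are shown as comments (they need a real database), and the sample DDL, schema name, tablespace names, and extent sizes are all hypothetical stand-ins for what imp's indexfile would actually emit:

```shell
set -e
dir=$(mktemp -d); cd "$dir"

# Step 1 in real life writes the DDL without importing any rows:
#   imp userid=scott/tiger file=bigtable.dmp indexfile=bigtable.sql full=y
# A two-statement sample (hypothetical names and sizes) stands in:
cat > bigtable.sql <<'EOF'
CREATE TABLE "PETE"."BIGTABLE" ("C1" NUMBER) STORAGE (INITIAL 524288000 NEXT 52428800) TABLESPACE "OLD_TS" ;
CREATE INDEX "PETE"."BIGTABLE_IX" ON "BIGTABLE" ("C1") TABLESPACE "OLD_TS" ;
EOF

# Step 2: fiddle the initial extent and retarget the tablespace
# before running the DDL on the new platform.
sed -e 's/INITIAL [0-9]*/INITIAL 10485760/' \
    -e 's/TABLESPACE "OLD_TS"/TABLESPACE "NEW_TS"/' \
    bigtable.sql > bigtable.new.sql

# Step 3 in real life: run the edited table DDL, import data only:
#   imp userid=scott/tiger file=bigtable.dmp rows=y indexes=n ignore=y
# then run the index statements one at a time, so a failed
# CREATE INDEX can be dropped and restarted without redoing the rest.
```

The flexibility being paid for is exactly this edit step: the DDL is plain text, so extent sizes and tablespace names can be changed with sed (or an editor) before anything touches the new platform.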
Are there similar processes for NT?
Pete Stryjewski
pstryjew_at_worldnet.att.net
Received on Tue Feb 02 1999 - 09:58:47 CST