
Re: IMP/EXP across platforms: What will be missing?

From: Burt Peltier <burttemp1ReMoVeThIs_at_bellsouth.net>
Date: Mon, 3 Nov 2003 23:35:07 -0600
Message-ID: <uSGpb.31601$SV2.25259@bignews3.bellsouth.net>


Have you ever done a full export to disk? Export to a named pipe, route the output of the pipe to compress, and route the output of compress to disk. You might be surprised at how well it compresses. There are probably examples in a Google search and on Metalink, if you haven't done this yet.
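Roughly something like the sketch below (the paths, filenames and the system/manager login are just placeholders, adjust for your environment):

    # create a named pipe for the export stream
    mkfifo /tmp/exp_pipe
    # start compress in the background, reading from the pipe, writing to disk
    compress < /tmp/exp_pipe > /backup/full_export.dmp.Z &
    # run the full export into the pipe
    exp system/manager full=y file=/tmp/exp_pipe log=/backup/full_export.log
    # remove the pipe when the export has finished
    rm /tmp/exp_pipe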

If you cannot do a cleaner process because of size/time, then it still seems you could do an export with full=y and rows=n (and then one with rows=y on the upgrade date). This would at least get things like public synonyms and users created.
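Something like this, again with placeholder filenames and credentials:

    # structure-only pass: users, grants, synonyms, DDL, but no table data
    exp system/manager full=y rows=n file=/backup/full_norows.dmp log=/backup/full_norows.log
    # on the upgrade date, the pass that carries the data
    exp system/manager full=y rows=y file=/backup/full_rows.dmp log=/backup/full_rows.log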

Of course, you want to pre-create the tablespaces as LMT (if they are not already LMT). DB links are always fun, and so are PL/SQL and views; it seems there are always at least one or two that are invalid or become invalid.
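For example (the tablespace name, datafile path and size are made up, and the query is just a generic check for objects that did not come across cleanly):

    sqlplus "/ as sysdba" <<EOF
    -- pre-create a locally managed tablespace on the 9i target before importing
    CREATE TABLESPACE users_data
      DATAFILE '/u02/oradata/NEWDB/users_data01.dbf' SIZE 2000M
      EXTENT MANAGEMENT LOCAL AUTOALLOCATE;
    -- after the import, list PL/SQL, views, etc. that are invalid
    SELECT owner, object_type, object_name
      FROM dba_objects
     WHERE status = 'INVALID'
     ORDER BY owner, object_type, object_name;
    EOF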

Doing a test execution is of course required; you will see the types of errors and be better prepared for them on the day(s) when the real run happens.
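As a quick sanity check after each test import, comparing object counts per schema and type between source and target catches a lot (a generic query, not a complete verification):

    sqlplus "/ as sysdba" <<EOF
    -- run this on both source and target and diff the spooled output
    SELECT owner, object_type, COUNT(*) AS cnt
      FROM dba_objects
     GROUP BY owner, object_type
     ORDER BY owner, object_type;
    EOF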

-- 
"Rick Denoire" <100.17706_at_germanynet.de> wrote in message
news:o5oaqvkme2ogp0aichlj38mpsp46teesr4_at_4ax.com...

> Hello
>
> Going from 8.1.7 (Solaris) to 9.2.0.4 (Linux Redhat AS2.1) I will have
> to do exp/imp. One instance, about 230 GB (size of DB files). My plan:
>
> 1) Do an export from the source with rows=no and freeze DB structure
> 2) Convert the exported dump file into executable sql scripts
> 3) Edit the scripts (removing storage params, etc.)
> 4) Generate a DB creation script with Version 9 (target environment)
> 5) Run the DB creation script from step 4, so creating a standard DB
> 6) Delete spfile, tune new DB init parameters
> 7) Test rman+Veritas setup (backup/restore) in new environment
> 8) Precreate tablespaces manually
> 9) Run the scripts from step 3 to precreate objects in DB from step 5
> 10) Do a 2nd export (full, rows=y, compress=n) into a pipe, set DB RO
> 11) Immediately start an import from pipe (fromuser= touser=) on target
> 12) Gather statistics of new DB
> 13) Shutdown source DB
> 14) Edit central tnsnames.ora for all clients, announce new platform
> 15) Drop example users (HR, SH, etc) cascade
> 16) Repeat step 7
>
> Done.
>
> In practice: not done (being realistic). I will test this of course, the
> goal being a reliable procedure and a minimal downtime. But what
> really concerns me is the kind of unexpected problems that I won't be
> prepared to cope with due to my lack of experience (well, I have
> already transferred a number of DBs this way, at times something fails
> and has to be fixed manually).
>
> These are the things that seem prone to failure. Please comment or add
> your ideas:
>
> a) DB Links. We have some.
> b) Synonyms (no full import done...)
> c) Roles
> d) Object and system privileges
> e) Sequences (?)
> f) Constraints, triggers (?)
> g) Resource Plans (yeah, they save admin's life during night jobs)
> h) rdbms/admin things that were run for years in the old DB (views,
> packages etc.). How to find which should be rerun? One I know:
> Statspack :-)
> i) Scheduled jobs (!!)
> j) Client software. When to update? About 40 PCs.
> k) ..(what else?)..
>
> Many questions here. How to freeze a DB structure? (I am not a
> developer, I don't trust the developers). How do I do a quick "sanity"
> check of the new DB? (Something could go wrong and remain unnoticed).
> How can I transfer jobs? Resource plans? Are sequences an issue?
> Should I install the new version of the clients beforehand? (I think
> so, but will there be any limitation until the new DB version is up
> and running?). Since the DB will have a new name, will DB links still
> be valid? Where can I find an exact list of example users? (I don't
> mean OUTLN, PERFSTAT, etc.)
>
> Init parameters to adapt, some specific to V 9 or with changed
> default:
>
> DB_CHECKSUM (false)
> DB_FILE_MULTIBLOCK_READ_COUNT
> PGA_AGGREGATE_TARGET
> OPTIMIZER_INDEX_COST_ADJ (about 30)
> OPTIMIZER_INDEX_CACHING (perhaps 50)
> PARALLELITY (don't remember exactly). Four CPUs, lots of disks.
> START_ARCHIVING (or similar; true, of course).
> DISK_ASYNCH_IO (true)
> QUERY_REWRITE (true)
> CURSOR_CACHING (force)
> NUMBER_OF_PROCESSES
> JOB_QUEUE_PROCESSES (1 or 2)
> ..
> and some other memory parameters. I am afraid that something will
> break. Until now, I have always done imp/exp in the same platform,
> same version.
>
> You could help me minimize Murphy's chances!
>
> Thanks a lot
> Rick Denoire
> (who will spend Christmas/New Year on duty).
Received on Mon Nov 03 2003 - 23:35:07 CST
