
Re: Transferring a DB to a new platform WITHOUT export/import

From: JustAnotherDBA <burttemp1REMOVE_THIS_at_bellsouth.net>
Date: Tue, 13 May 2003 23:32:49 -0500
Message-ID: <vvjwa.29243$775.19272@fe04.atl2.webusenet.com>


I have done many export/imports (and, for various reasons, many copies of tables across db links), and I have a hard time imagining your proposed process could be anywhere near as quick as export/import. Especially if (as you describe below) you pre-create both tables AND indexes on the Linux database. An insert across a db link into an indexed table will take FOREVER compared to loading with no indexes on the Linux side and building them afterwards.
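For illustration, a rough sketch of the load-then-index approach (MYTAB and SRC_LINK are made-up names, and this assumes the table was pre-created empty and index-free on the target):

    -- on the Linux target, in SQL*Plus:
    -- direct-path load into the empty, index-free table
    INSERT /*+ APPEND */ INTO mytab SELECT * FROM mytab@src_link;
    COMMIT;
    -- only now build the indexes; NOLOGGING keeps redo to a minimum
    CREATE INDEX mytab_ix1 ON mytab (col1) NOLOGGING;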

If you reconsider export/import ... you obviously want to do as much as possible up front on the Linux destination database before starting the export on the source: pre-create all the tablespaces, for one thing. Make sure the Linux database is in NOARCHIVELOG mode during the import (index creations during import will then default to the NOLOGGING option, I think). Export to a local file system, ftp the dump file to the Linux machine, and import from the local file. Oh yeah, and obviously do a test run first to see how long it will all take.
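Something along these lines (passwords, paths and file names are just placeholders):

    # on the Solaris source: direct-path full export to a local file
    exp system/manager FULL=Y DIRECT=Y FILE=/u01/exp/full.dmp LOG=full_exp.log

    # ftp full.dmp to the Linux box (binary mode!), then import locally;
    # IGNORE=Y skips "already exists" errors for the pre-created objects
    imp system/manager FULL=Y IGNORE=Y FILE=/u01/exp/full.dmp LOG=full_imp.log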

Another option ... if you don't have a lot of schemas with objects, do the export/import for multiple schemas at the same time, writing to different local disks if possible - assuming both machines have multiple CPUs and multiple disk drives each.
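For example, per-schema exports kicked off in parallel to separate disks (schema names and paths are invented here):

    exp system/manager OWNER=appuser1 FILE=/disk1/appuser1.dmp LOG=appuser1.log &
    exp system/manager OWNER=appuser2 FILE=/disk2/appuser2.dmp LOG=appuser2.log &
    wait

The matching imports on the Linux side would then use FROMUSER/TOUSER for each schema.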

Note: I don't think transportable tablespaces are an option for you, because they only work when the source and target OS are the same.

Good luck.

-- 

"Rick Denoire" <100.17706_at_germanynet.de> wrote in message
news:36r2cvgs1jgbghupl0blkle8mg8u3ro841_at_4ax.com...

> Going from Oracle 8.1.7/Solaris/Sun E3500 to
> Intel-Industry-Standard-Hardware/Linux, I will by no means use
> export/import: It takes too long for a 100 GB DB. Downtime should be
> minimal.
>
> So my intention is to transfer data through the network via a DB link.
> Of course, it is simple to say "create table xxx as select ..." but
> you need to know which tables are there in the first place. And
> indexes, triggers, packages, users, roles, profiles etc.
>
> And since this is a very volatile DB (its structure changes very
> quickly), I must be able to determine the structure of the source DB
> immediately prior to transfer time. This is an approximate plan to
> transfer the DB:
>
> a) shut down and restart the source DB in read-only mode
> b) reproduce its structure on the target platform
> c) fire off the data transfer based on the objects found (no assumptions made)
> d) run some kind of plausibility test for success or failure
> e) shut down the source DB
> f) set the passwords of the users (if not transferred)
> g) reconfigure Oracle Names to serve the new DB under the old
> connection string
>
> and that should be it. This assumes that a DB already exists at the
> destination with at least adequate tablespaces. The main point here is
> that everything created in the target DB should be based on what is
> found in the source DB, so some script would select objects from
> all_tables, all_indexes, etc.
>
> I would gladly avoid rediscovering all the gotchas involved in this
> process. If there is some script out there to do this, I would very
> much appreciate a pointer, or a hint about white papers,
> recommendations, etc. I don't expect a ready solution (I will have to
> exercise this in a test environment anyway), but I would like to *save
> time* by staying out of trouble. I am doing this for the first time
> this way (this time the target host is a different *platform* as well
> as a different Oracle version).
>
> Thanks a lot
> Rick Denoire
>
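PS: As a starting point for the dictionary-driven part you describe, a minimal SQL*Plus sketch (the db link name src_link is invented, and this only covers tables; indexes, triggers, grants, etc. would need similar scripts against all_indexes, all_triggers, and so on):

    SET PAGESIZE 0 FEEDBACK OFF LINESIZE 200
    SPOOL copy_tables.sql
    -- generate one CTAS-over-the-link statement per table
    -- found in the source schema at transfer time
    SELECT 'CREATE TABLE ' || table_name ||
           ' AS SELECT * FROM ' || table_name || '@src_link;'
      FROM user_tables@src_link;
    SPOOL OFF
    @copy_tables.sql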
Received on Tue May 13 2003 - 23:32:49 CDT
