Oracle FAQ Your Portal to the Oracle Knowledge Grid


Re: Suggestions please

From: Joel Garry <>
Date: 19 Apr 2004 15:10:36 -0700
Message-ID: <>

"Howard J. Rogers" <> wrote in message news:<4083b4b2$0$442$>...
> Scenario: two networked servers. One server currently houses a 9i
> database. Database is to be moved to the new server. (Both servers are
> Solaris). Database is 24x7, so downtime must be minimal. Database is
> only 20GB in size.
> The two servers cannot share storage of any kind, so my original idea of
> creating new tablespaces within the same database but on the new box's
> storage and then moving tables/indexes across to the new tablespaces is
> not a go-er (unless maybe an NFS mount would do the trick...??). For

NFS might do the trick, but it might use the Americanism whereby trick refers to prostitution, and you might be very embarrassed by that at a customer site. In other words, you might be publicly screwed. I've posted before that 20G is right around the cutoff for that sort of approach, at least in a config I've seen.
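For what it's worth, mounting the new box's storage from the old server on Solaris would look something like this (hostname and paths are invented; the new box would have to share the directory first, and Oracle datafiles over NFS come with their own mount-option caveats):

```shell
# Hypothetical: "newbox" shares /export/oradata via NFS.
# On the old Solaris server, mount it where the database can see it:
mount -F nfs -o hard newbox:/export/oradata /u02/oradata
```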

> the same reason, my other idea of using dbms_redefinition to move the
> data across and incurring minimal table locking is not a flyer.
> There is a possibility of creating a new database on the new server,
> populating it with empty copies of the source tables, and then
> populating the new tables via dblinks... but the concern is that clients
> will modify data in the source tables as the move takes place (thanks to
> the 24x7 rule), and therefore another possibility has been discussed of
> setting up replication between the two databases to capture
> post-initial-setup DML... but replication to handle the move of a 20GB
> database strikes me as absurd.
> Basically, I'm just casting around for suggestions as to how you would
> approach the issue.
> Anyone got any timings on a 20GB export and import?

Depends quite a bit on hardware. I get a 16G export file in about 2.5 hours on a not-so-fast HP over a gigabit connection to an NFS device. The import takes quite a bit longer; it's been a few months, so I don't recall exactly. But I certainly wouldn't do it for a zero-downtime production move. It works fine as an "uncomment the cron and create a test db overnight" job, though.
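For reference, the kind of full export/import timed above would be along these lines (usernames, passwords, and filenames are placeholders, not anything from a real system):

```shell
# On the old server: full direct-path export to a dump file.
exp system/manager full=y direct=y file=full.dmp log=exp.log

# Copy full.dmp to the new server, then on the new box:
imp system/manager full=y file=full.dmp log=imp.log
```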

Probably Data Guard is the way to go.
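A rough sketch of what the 9i switchover would involve, assuming you've already built and synchronized a physical standby on the new server (the real procedure has more steps; see the Data Guard docs):

```sql
-- On the current primary, once you're ready to cut over:
ALTER DATABASE COMMIT TO SWITCHOVER TO PHYSICAL STANDBY WITH SESSION SHUTDOWN;

-- On the standby (the new box), after the primary's switchover completes:
ALTER DATABASE COMMIT TO SWITCHOVER TO PRIMARY;
SHUTDOWN IMMEDIATE
STARTUP
```

Downtime is then just the switchover itself plus repointing the clients, not the time to move 20GB.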

Isn't there some need to understand how users are connecting? You might want to do things differently depending on that for the actual switchover. I've seen some situations that can quite happily just connect new users to a clone, created starting when most users go home, and then do minus joins over dblinks to pick up everything missed after the clone started. But I've found the more usual case is that there really is some time, like Sunday morning, when you can actually have a planned outage, and this comes out when you show the projected expenses for the different options.
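The minus-join catch-up idea, in rough form (EMP and the NEWDB dblink are invented names for illustration):

```sql
-- Hypothetical: NEWDB is a dblink from the old database to the clone.
-- Push across any row images the clone doesn't have yet:
INSERT INTO emp@newdb
SELECT * FROM emp
MINUS
SELECT * FROM emp@newdb;
-- Caveat: this only finds missing or changed row images; rows that were
-- updated (rather than inserted) on the source need separate handling,
-- or the insert will collide with the clone's existing primary keys.
```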


-- is bogus.
"5 Things You Should Know Before Starting the Installation" - Heading
in OAS10g installation guide, followed by 12 things.
Received on Mon Apr 19 2004 - 17:10:36 CDT
