Re: Streams for one-time single source replication

From: Reed Hammond <>
Date: Thu, 6 Feb 2014 14:47:13 -0700
Message-ID: <>

I recently used Streams to upgrade some databases and move them to new clusters in the process. I had a week to learn and perform the test on a non-production database, and then the following week the production database was migrated.

It was somewhat intimidating but went fairly smoothly. I referred to the documentation here (toc.htm); Appendix D fit our situation fairly well.

One of the gotchas we hit was with sequences: those aren't kept in sync between the databases via Streams. To deal with connections using a sequence on the source at the same time connections on the destination were updating, we changed the source to increment by 2 and use odd numbers while the destination used even numbers.
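A minimal sketch of that odd/even split (sequence name is hypothetical, and this is an illustration rather than our exact scripts):

```sql
-- Hypothetical sequence APP.ORDER_SEQ; a sketch of the odd/even trick.
-- On the source: step by 2 so it only hands out one parity.
ALTER SEQUENCE app.order_seq INCREMENT BY 2;

-- On the destination: also step by 2.
ALTER SEQUENCE app.order_seq INCREMENT BY 2;

-- Before doing this, nudge one side's current value by 1 (e.g. via a
-- temporary INCREMENT BY 1 and a SELECT of NEXTVAL) so that one database
-- sits on odd values and the other on even, and they can never collide.
```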

Another issue we dealt with was record updates. Connections on the source were changing records while connections on the destination were updating the same records. This caused a conflict in the apply process and stopped the apply; it would add an entry to the error queue and then need to be restarted. The biggest contributor was the applications updating the entire record even though only one or two fields had changed.
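One way to keep the apply process running through such conflicts (table, column, and apply names below are hypothetical; this shows the standard conflict-handler API rather than what we actually configured):

```sql
-- Let the source "win" on update conflicts for a hypothetical table:
DECLARE
  cols DBMS_UTILITY.NAME_ARRAY;
BEGIN
  cols(1) := 'status';
  cols(2) := 'last_updated';
  DBMS_APPLY_ADM.SET_UPDATE_CONFLICT_HANDLER(
    object_name       => 'app.orders',
    method_name       => 'OVERWRITE',      -- source change overwrites destination
    resolution_column => 'last_updated',
    column_list       => cols);
END;
/

-- Inspect what went into the error queue, then restart apply:
SELECT apply_name, error_message FROM dba_apply_error;
EXEC DBMS_APPLY_ADM.START_APPLY('my_apply');
```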

Streams will work for your situation and offers the potential for zero downtime; we were able to complete the migrations without shutting down the applications. That reminds me of another consideration: how are your applications connecting to the database? Are they using remote resolution with an alias? That would provide an easier cutover, since you'd just update where the alias points.
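The alias idea above amounts to something like this hypothetical tnsnames.ora entry: at cutover you repoint HOST (and SERVICE_NAME if it changes) and the clients follow without any application changes.

```
# Hypothetical alias; edit HOST at cutover to swing clients to the new cluster.
APPDB =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = old-aix5-host)(PORT = 1521))
    (CONNECT_DATA = (SERVICE_NAME = appdb))
  )
```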

On Thu, Feb 6, 2014 at 2:20 PM, Chris Taylor <> wrote:

> In that case I would definitely go with the physical standby with log
> shipping and drop those schemas post-cutover.
> The only "downtime" doing this on the cutover weekend would be the patch.
> Of course, you'd want to test the .8 patch application
> prior to go-live weekend on the AIX 7 box to make sure there are no
> surprises.
> So, in a nutshell I would do:
> Prior to cutover weekend:
> ------------------------------------------
> RMAN backup existing DB
> Restore DB as standby on AIX 7 server
> Setup archivelog shipping from AIX5 server to AIX7 server
> Verify log application is working on the AIX7 server and it is staying up
> to date
> (the "other" Chris mentioned setting up Flashback here on the standby
> server - so look into that)
> Open Standby Database on AIX7 box - apply .08 patch and verify
> Blow away or flash back the standby database if possible and remove the .08
> patch from the binaries on the AIX 7 server
> If not possible, then recreate standby from new RMAN backup
> Restart log application on AIX7 server and verify it's staying up to date
> Cutover Weekend:
> --------------------------
> stop all processes connecting to db
> archive log current AIX5 db
> shutdown AIX5 db
> Verify the AIX7 db has received the newest archivelog file and applied it
> Open AIX7 db and apply .08 patch
> Open database for "business"
> Backup schemas to be dropped
> Drop irrelevant schemas
> Optional:
> --------------
> Have users scream at you for dropping a schema they told you was safe to
> be dropped.
> Chris
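The cutover-weekend steps quoted above could be sketched roughly as follows in SQL*Plus (a hedged illustration, not Chris's actual commands; patching steps omitted):

```sql
-- On the AIX 5 primary, after stopping all application connections:
ALTER SYSTEM ARCHIVE LOG CURRENT;   -- flush the last redo into an archivelog
SHUTDOWN IMMEDIATE;

-- On the AIX 7 standby, confirm the final log arrived and was applied:
SELECT MAX(sequence#) FROM v$archived_log WHERE applied = 'YES';

-- Then activate the standby and open it for business:
ALTER DATABASE ACTIVATE STANDBY DATABASE;
ALTER DATABASE OPEN;
```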
> On Thu, Feb 6, 2014 at 2:49 PM, Rich Jesse <
>> wrote:
>> Chris replies:
>> > Well, that definitely complicates things...
>> Uh-huh. :)
>> > So, every schema in the existing AIX 5.x 11.x database is not moving
>> over
>> > to your new AIX 7 box? I'm curious why only some schemas would be
>> moved I
>> > guess...
>> That's my "fault". The current DB has a mix of schemas for both business
>> data and application repository/data-dictionary/metadata. Not only is it
>> tougher to manage security but also the SGA. So, I split them into
>> separate
>> DBs. Great! Except for the cutover, apparently... ;)
>> For this move, I can only replicate the business data. Or maybe I just
>> drop
>> those schemas and tablespaces post-cutover? To make matters more
>> complex, I
>> originally thought we'd have more downtime, so I haven't shrunk our
>> current
>> business data tablespaces after archiving (1TB -> 400GB).
>> Oh well. That's why the company has a DBA I guess. :)
>> Thanks!
>> Rich
>> --

Received on Thu Feb 06 2014 - 22:47:13 CET
