Re: Dataguard role switching procedure with active GoldenGate replication - is there a best practice?

From: Guenadi Jilevski <gjilevski_at_gmail.com>
Date: Wed, 19 Sep 2012 18:09:19 +0300
Message-ID: <CADFytLiqc8q9BooWqEfEgR3FTJrzrxLPfBXUM2KX_j_X1M3=Jg_at_mail.gmail.com>



Hi,
In the case of a copy or NFS mount, you only need to stop the pump for the B->C transition and modify the pump parameters to point to C instead of B (I assume you are using the RMTHOST parameter). With a passive/alias extract it is more complicated.
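As a sketch, the only change would be in the pump's parameter file; all names below (group name, port, trail prefix, schema) are hypothetical placeholders, not taken from the thread:

```
-- Pump parameter file, e.g. dirprm/pmpab.prm (names are hypothetical)
EXTRACT pmpab
-- Before the switchover the pump ships trails to server B:
-- RMTHOST serverB, MGRPORT 7809
-- After the switchover, point it at server C instead:
RMTHOST serverC, MGRPORT 7809
RMTTRAIL ./dirdat/rt
PASSTHRU
TABLE appschema.*;
```

The replicats on the target read the same remote trail, so nothing else in the pump configuration needs to change.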

Restart the pump after the switchover and use the same replicat parameter files, checkpoint files and trail files on the target. Make sure that the user in the USERID parameter can connect to the database on C; pay attention to user connectivity after the switchover to C.
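A quick way to confirm that the replicats' USERID will work on C is a DBLOGIN from GGSCI on server C before starting anything (the credentials here are placeholders):

```
-- On server C, in GGSCI (user/password are placeholders)
DBLOGIN USERID ogguser, PASSWORD ********
-- If the login succeeds, the same USERID in the replicat
-- parameter files will be able to connect after the switchover.
```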

After B switches over to C, start the same replicats against C, which will already be receiving the data records from the restarted pump. As you have already copied the checkpoint files (*.cpr), the replicats will start from the point where they were stopped. Because there is no data loss in the switchover, the target database is transactionally consistent and the replicats will not abend.
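Starting the copied replicats on C is then an ordinary GGSCI start, for example (a wildcard is shown; individual group names work too):

```
-- On server C, in GGSCI, after the switchover completes
START REPLICAT *
-- Verify each replicat resumed from its checkpointed trail file and RBA:
INFO REPLICAT *, DETAIL
```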

There is no need even to stop/start or reconfigure the initial primary extract on A.
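For reference, the drain checks suggested earlier in the thread map to these GGSCI commands, run before stopping the pump (wildcards shown; substitute your group names):

```
-- In GGSCI, before stopping the pump for the move
SEND EXTRACT *, LOGEND     -- primary extract: YES means it has read to the end of the redo log
SEND EXTRACT *, GETLAG     -- lag for extract and pump should be at EOF / near zero
SEND REPLICAT *, STATUS    -- replicats should report they are at EOF with no more records to process
INFO ALL                   -- all processes RUNNING with no lag before you proceed
```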

Regards,

Guenadi Jilevski

On Wed, Sep 19, 2012 at 8:01 AM, De DBA <dedba_at_tpg.com.au> wrote:

> Hi Guenadi,
>
> Yes, it is temporary and a once-off. This has merit... I hadn't even
> considered it. As I understand it, as long as the standby is completely
> synchronised, the replicats should in this scenario simply pick up at the
> point where they left off? If so, there should be no need even to stop the
> datapump or extract on server A when an NFS mount from B to C is used,
> should there?
>
> Cheers,
> Tony
>
>
> On 19/09/12 12:36 PM, Guenadi Jilevski wrote:
>
>> Hi,
>> Will it be just one time move or a permanent requirement for replication
>> to B or C?
>> If it is a one-time move I would NFS mount the OGG home from B to C, or
>> copy OGG from B to C, and start the same replicats on C after B switches
>> over to C; note the same checkpoint files exist on both replicat sites.
>> If a replicat crashes it is recoverable. The only adjustment will be the
>> RMTHOST parameter on the pump.
>> If it is a constant requirement then I would implement two pumps, one
>> to B and the second to C, but the above approach can be used as well.
>> The previously suggested order applies in both cases: make sure that
>> the extract has captured all transactions (SEND EXTRACT *, LOGEND; SEND
>> EXTRACT *, GETLAG), that the pump has processed all trail records, and
>> that the replicats have applied all records (SEND REPLICAT *, STATUS etc.).
>> Regards,
>> Guenadi Jilevski
>>
>> On Tue, Sep 18, 2012 at 3:05 PM, De DBA <dedba_at_tpg.com.au
>> <mailto:dedba_at_tpg.com.au>> wrote:
>>
>> G'day!
>>
>> I've got a production database (9i) running on an old server A in
>> the "Production" data centre, a new 11g database on server B in the
>> "DR" data centre and an active dataguard standby on server C, which
>> is in the "Production" data centre. There is a GoldenGate extract
>> and a datapump on server A, and 4 GoldenGate Replicats on server B
>> (in "DR"). The plan is to migrate the 9i database on server A
>> (Production) to server B (DR) and perform a switch so that server C
>> will eventually run production.
>>
>> The problem that now arises is that due to a change in the project
>> the role switch between server B (DR, now primary) and server C
>> (Production, currently standby) has to be made whilst replication
>> with GoldenGate is still in place. The 4 replicats will therefore
>> need to be moved from server B to server C as part of the role
>> switch. As this is a financial system, and the intended future
>> production environment, it is unacceptable to lose even one
>> transaction in the process.
>>
>> I was thinking of stopping the datapump extract on server A, noting
>> where it left off (trail file, RBA), and allowing the replicats on
>> server B to finish applying all trail files that were shipped. Then
>> create a new datapump extract to pump to server C, and create new
>> replicats on server C that then can start applying with the first
>> trail file they find. This would narrow the point of failure to only
>> the datapump, which can be instructed to start exactly at the point
>> where the original pump was stopped.
>>
>> I wonder if this scenario is anywhere near best practice, or indeed
>> whether best practices for switching the GG target database over to
>> the standby do exist?
>>
>> Cheers,
>> Tony
>> --
>> http://www.freelists.org/webpage/oracle-l
>>
>>
>>
>>

--
http://www.freelists.org/webpage/oracle-l
Received on Wed Sep 19 2012 - 10:09:19 CDT