Re: Golden Gate replication for a huge table

From: Kumar Madduri <ksmadduri_at_gmail.com>
Date: Wed, 19 Oct 2016 10:12:41 -0700
Message-ID: <CAHDOOG5nfj2jYLj8Qb6MP7PQkKFxwSCvBrwRuaJEPWWzHHeyJg_at_mail.gmail.com>



Thank you for the information. I have a related follow-up question. It appears that each database has to be associated with its own Golden Gate installation (you cannot share a GG installation; you can do it, but then you will have to depend on archive logs and archive log switches for the extract to pick up changes). Is my understanding correct that there is a one-to-one mapping between the number of databases and the number of GG installations you need?
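For reference, the shared-installation scenario described above corresponds to running a classic Extract in Archived Log Only (ALO) mode, where capture reads only the archived redo rather than the online redo logs. A minimal parameter-file sketch is below; the group name, credentials, paths, and table are all hypothetical, and this is only an illustration of the mode, not a tested configuration:

```
-- Hypothetical Extract parameter file (ext1.prm), Archived Log Only mode.
EXTRACT ext1
USERID ggadmin, PASSWORD ggadmin_pw
-- Capture from archived redo logs only; no access to the online redo logs.
TRANLOGOPTIONS ARCHIVEDLOGONLY
-- Archived-log location, if different from the database default (hypothetical path).
TRANLOGOPTIONS ALTARCHIVELOGDEST /u01/arch
EXTTRAIL ./dirdat/ea
TABLE HR.BIG_TABLE;
```

In ALO mode, changes only become visible to the Extract after a log switch, which is the latency trade-off mentioned above.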

Thanks
Kumar

On Tue, Oct 18, 2016 at 11:23 PM, Ls Cheng <exriscer_at_gmail.com> wrote:

> Hi
>
> That workload is low. We replicate tables with hundreds of millions of
> changes per day and all is fine.
>
> BR
>
>
> On Wed, Oct 19, 2016 at 12:32 AM, Kumar Madduri <ksmadduri_at_gmail.com>
> wrote:
>
>> Hi
>> We are in the initial phases of evaluating Golden Gate. One of our
>> developers has a concern.
>> She has a table with 40 million rows that would be replicated to the
>> target. After the initial load, the daily data volume changes would be in
>> the thousands (basically not in the range of millions of records, but in
>> the 1,000s, or 10,000s during year-end processing, for example).
>> Her concern is whether Golden Gate replication will be able to handle
>> this. From my understanding this should not be a problem.
>> I wanted to get input from folks who have done this already (since my
>> experience at this point is limited to working with small tables and
>> reading documentation).
>>
>> Thanks for your time.
>>
>> Kumar
>>
>
>

--
http://www.freelists.org/webpage/oracle-l
Received on Wed Oct 19 2016 - 19:12:41 CEST