Re: DB Links for Integration

From: Subodh Deshpande <deshpande.subodh_at_gmail.com>
Date: Mon, 9 Jan 2012 12:52:57 +0530
Message-ID: <CAJsOtB404+wYubR8d8Zf86=+YemZNd_zbgndgm5t1y+WW8BZwA_at_mail.gmail.com>



Explore GoldenGate, Streams, or replication if it is a batch job and no architectural changes are required.
Also explore data warehouse, data cleansing, and data loading tools if architectural changes are acceptable and some sort of software development is needed to monitor data loading from 80 sources.
I have tried to answer your queries below.

Here are my Questions
>
> 1. Can we use DB links here to integrate this? Just issue a direct
> insert statement to the target DB?
>
>

Can you explore tools like GoldenGate, Streams, replication, etc.?
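
For reference, the plain DB-link approach described in the question would look roughly like the sketch below. All names (link, TNS alias, users, tables) are placeholders for illustration, not anything from your environment:

```sql
-- On each legacy (source) system: a fixed-user link to the hub,
-- matching the stated requirement of fixed user/password authentication.
CREATE DATABASE LINK hub_link
  CONNECT TO hub_user IDENTIFIED BY hub_pwd
  USING 'HUB_TNS';  -- TNS alias for the integration hub

-- Insert-only push from source to target, minimal column-to-column mapping.
INSERT INTO hub_staging@hub_link (src_system, col_a, col_b)
SELECT 'LEGACY1', col_a, col_b
FROM   local_orders
WHERE  load_date = TRUNC(SYSDATE);

COMMIT;  -- the source issues the commit and acts as transaction coordinator
```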

>
> 1. What are the performance implications, given a decent network? One
>    interface would synchronize approximately 10,000 rows. 70-80 interfaces
>    can run at a time (meaning 80 different sources will be interfaced
>    daily, each with 10K records).

You will have to take the performance factor into account; this is an extra load. You need to check whether it is feasible to schedule it as a batch job at a time when there is less load.
With 80 sources, you will most likely have to create a separate and efficient arrangement so that you get the status of the data load from each source.
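
As one sketch of such an arrangement: a small control table on the hub, stamped by each interface run, gives you per-source load status. Table and column names here are made up for illustration:

```sql
-- Hub-side control table: one row per source per batch run.
CREATE TABLE load_audit (
  batch_id    NUMBER        NOT NULL,
  src_system  VARCHAR2(30)  NOT NULL,
  rows_loaded NUMBER,
  status      VARCHAR2(10),          -- e.g. RUNNING / DONE / FAILED
  started_at  DATE DEFAULT SYSDATE,
  finished_at DATE,
  CONSTRAINT load_audit_pk PRIMARY KEY (batch_id, src_system)
);

-- Daily check: which of the ~80 sources have not finished?
SELECT src_system, status, rows_loaded
FROM   load_audit
WHERE  batch_id = :today_batch
AND    status <> 'DONE';
```

Tagging the loaded rows themselves with the batch id also lets a failed load be identified and backed out or re-run per source.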

>

> 1. Are the errors pertaining to 'in-doubt transactions' frequent? Is
>    data consistency using two-phase commit reliable?
>

Create a separate and efficient arrangement with proper identification for
every batch load, so that in-doubt transactions can be trapped. You will also
need a mechanism to investigate the reasons behind them and to take corrective
action.
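
For trapping in-doubt distributed transactions specifically, Oracle exposes them in the `DBA_2PC_PENDING` view, and they can be resolved manually once investigated. The transaction id below is a placeholder:

```sql
-- List distributed transactions left in-doubt after a failure.
SELECT local_tran_id, state, fail_time, advice
FROM   dba_2pc_pending;

-- After investigating, resolve a specific transaction manually
-- (the id is a placeholder; take it from the query above).
COMMIT FORCE '1.21.17';
-- or: ROLLBACK FORCE '1.21.17';
```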

> 1. Are there any performance benchmark documents for DB links available
>    to which we can refer?
>
Maybe, but they may not be relevant. For example, a diesel engine is the same in
principle, but you cannot apply the benchmarking data of a truck to a car, or vice versa.

>
> 1. Are there any other important factors to consider?
>

A lot. Mostly it can be a separate module, or just a batch job; it depends upon the data, the purpose of the data, and the architecture of your present system to accommodate and identify the data from the different sources.

thanks..subodh

On 8 January 2012 14:55, Dwaraknath Krishnamurthi <dwarak.k_at_gmail.com> wrote:

> Hello All,
> We are trying to integrate 3 legacy systems to an integration hub. All the
> legacy systems and the integration hub run on Oracle DB and all run on
> different servers. Here are my integration requirements.
>
> 1. We have Daily/Weekly/Monthly interfaces which are supposed to run,
>    picking up data from the legacy systems and populating it into the
>    integration hub.
>
>
> 1. The transformation effort is minimal. Mostly it will be a
>    column-to-column mapping.
>
>
> 1. We will NEVER read the database across links; it is just an INSERT
>    operation alone, from source to target.
> 2. We have no encryption requirements; fixed user/password
>    authentication is good enough.
>
>
> 1. The source system will always issue the commit, and it will be the
>    transaction coordinator.
>
>
>
> DB Version 11gR2, clustered environment.
>
>
>
> Thanks in advance
>
>

-- 
=============================================
Love me or Hate me both are in my Favour.
Love me, I am in your Heart. Hate me, I am in your Mind.
=============================================


--
http://www.freelists.org/webpage/oracle-l
Received on Mon Jan 09 2012 - 01:22:57 CST
