Flashback query to ensure data consistency

From: Debaditya Chatterjee <debaditya.chatterjee_at_gmail.com>
Date: Wed, 26 Mar 2008 16:41:46 -0400
Message-ID: <90f8af090803261341l7a6fa680pc0522adf068fa094@mail.gmail.com>


All,

In a 24x7 data warehouse (accessed by users globally, across different timezones), incremental ETL jobs are executed to load data from several source systems. Because of the excessive volume of source data, the ETL window has grown to over 8 hours, and users in the worst-affected timezone have started complaining about system performance and have become very vocal about data inconsistency: for example, the figures in a sales report can change while they watch, because the ETL tool commits every 10K records.

To give the users a consistent view of the data, we are thinking of using the database's Flashback Query feature: when the ETL starts, flashback query will be enabled for reporting sessions (through a logon trigger) so their queries see the data as of the ETL start time, and once the ETL completes, queries will run against the current tables again.
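A minimal sketch of what such a logon trigger might look like, assuming a control table ETL_CONTROL (maintained by the Informatica workflow) that records whether the ETL is running and when it started - the table, column and user names here are illustrative only, not part of our current setup:

  -- Hypothetical sketch: enable flashback query for reporting sessions while the ETL runs.
  -- Assumes ETL_CONTROL(etl_running CHAR(1), etl_start_ts TIMESTAMP) is kept up to date
  -- by the ETL workflow, and that reporting connects as the SIEBEL_REPORTS user.
  CREATE OR REPLACE TRIGGER trg_report_logon
  AFTER LOGON ON DATABASE
  DECLARE
    v_running   etl_control.etl_running%TYPE;
    v_start_ts  etl_control.etl_start_ts%TYPE;
  BEGIN
    -- Only divert the reporting sessions, not the ETL or DBA sessions
    IF SYS_CONTEXT('USERENV', 'SESSION_USER') = 'SIEBEL_REPORTS' THEN
      SELECT etl_running, etl_start_ts
        INTO v_running, v_start_ts
        FROM etl_control;
      IF v_running = 'Y' THEN
        -- Every query in this session now sees the data as of the ETL start time
        DBMS_FLASHBACK.ENABLE_AT_TIME(v_start_ts);
      END IF;
    END IF;
  EXCEPTION
    WHEN OTHERS THEN NULL;  -- never block a logon because of a trigger error
  END;
  /

The trigger owner would need EXECUTE on DBMS_FLASHBACK, and the ETL workflow would have to flip the control flag at the start and end of each run.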

Although the solution seems feasible, keeping flashback query working for more than 8 hours looks like our biggest challenge. I wanted to check whether you agree with this approach, or whether you would suggest a simpler (but costlier) alternative such as routing queries to a read-only standby database while the ETL jobs execute on the primary.
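For context, keeping flashback query usable across the whole window essentially comes down to sizing undo for it; roughly what we had in mind (the numbers and tablespace name are illustrative) is:

  -- Undo must cover the full ETL window (8+ hours = 28800+ seconds)
  ALTER SYSTEM SET UNDO_RETENTION = 32400;          -- 9 hours, with some headroom
  ALTER TABLESPACE undotbs1 RETENTION GUARANTEE;    -- avoid ORA-01555 for the reports,
                                                    -- at the risk of ETL transactions failing
  -- and the undo tablespace itself must be big enough to hold 8+ hours of ETL undo.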

Your suggestions are welcome. The Oracle database version is 10.2.0.3, the ETL tool is Informatica, and the reporting tool is Siebel Analytics.

Thanks
Deba.

--
http://www.freelists.org/webpage/oracle-l
Received on Wed Mar 26 2008 - 15:41:46 CDT
