Re: Oracle Streams - worth it?
Date: Wed, 06 Feb 2008 21:07:24 GMT
"Frank van Bortel" <frank.van.bortel_at_gmail.com> wrote in message news:e6bba$47a9ff0e$524b5c40$1220_at_cache4.tilbu1.nb.home.nl...
> Isaac Blank wrote:
>> We are evaluating different ways for a one-way replication from one
>> database to another. An additional requirement is that part of the replicated data
>> (detail rows only) should be updateable. Currently, we use materialized
>> views with refresh jobs running every minute. An insert trigger on the
>> master table's snapshot creates a queue entry for each replicated row.
>> Later, an asynchronous job goes through the queue and pulls detail rows
>> via database link. This has worked fine so far, but we started thinking
>> of creating multiple replicas (each of them would only receive a portion
>> of the overall data), and we're afraid that materialized view maintenance
>> overhead in the master database will become too taxing.
> Eh? MV's are on the slave side, the master has MV log tables and
> triggers. ......................
I was not clear enough. Indeed, I had the MV log tables in mind.
>..................Unless you have written your own replication mechanism,
> describe advanced replication.
Well, initially we had MVs defined on prebuilt tables. We also needed the detail data to be updateable, so we created a row-level trigger on the detail replica that would merge all the new/changed rows (we never delete) into another table, which we would later update per application logic whenever needed. That worked fine for a while; then the load got high and performance degraded catastrophically. The higher the load, the longer the refresh takes, and a refresh is a single transaction no matter how many rows it covers. The next refresh then takes even longer because it has more to catch up on, and the one after that longer still, and so on.
So we decided not to replicate the detail table at all. Instead, a row-level trigger on the master table replica just creates an entry in a queue, and a few seconds later a dbms_job dequeues it and pulls the detail rows for each master row, one transaction per master row.
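In rough outline, the mechanism looks something like this (all names are illustrative, not our actual schema; this is a simplified sketch of the approach, not production code):

```sql
-- Queue table populated by the trigger on the master-table replica.
CREATE TABLE detail_pull_queue (
  master_id    NUMBER  NOT NULL,
  enqueued_at  DATE    DEFAULT SYSDATE,
  processed    CHAR(1) DEFAULT 'N'
);

-- Row-level trigger on the replicated master table: just record the key,
-- do no remote work inside the trigger itself.
CREATE OR REPLACE TRIGGER trg_master_replica_ins
AFTER INSERT ON master_replica
FOR EACH ROW
BEGIN
  INSERT INTO detail_pull_queue (master_id) VALUES (:NEW.master_id);
END;
/

-- Procedure run by a dbms_job a few seconds later: pull detail rows over
-- the database link, committing once per master row.
CREATE OR REPLACE PROCEDURE pull_detail_rows IS
BEGIN
  FOR q IN (SELECT rowid AS rid, master_id
              FROM detail_pull_queue
             WHERE processed = 'N') LOOP
    INSERT INTO detail_replica
      SELECT * FROM detail@master_db d
       WHERE d.master_id = q.master_id;
    UPDATE detail_pull_queue SET processed = 'Y' WHERE rowid = q.rid;
    COMMIT;  -- one transaction per master row
  END LOOP;
END;
/

-- Schedule it with dbms_job (still the usual choice on 10g):
DECLARE
  jno BINARY_INTEGER;
BEGIN
  DBMS_JOB.SUBMIT(jno, 'pull_detail_rows;', SYSDATE,
                  'SYSDATE + 5/86400');  -- re-run roughly every 5 seconds
  COMMIT;
END;
/
```

The point of splitting it this way is that the trigger stays cheap and local, and each master row's detail pull is its own small transaction, so one slow pull doesn't hold up the rest the way a single monolithic refresh does.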
> Oracle can do horizontal as well as vertical partitioned replication.
True, but we're thinking of some really sophisticated rules, or maybe even changing rules on the fly. That's why Oracle Streams looks so promising, though I am not sure whether working with ANYDATA will be a pain in the neck.
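For the kind of horizontal filtering we want (each replica receiving only its slice), Streams subset rules look like a fit. Something along these lines, where the schema, queue, and condition are made-up examples:

```sql
-- Sketch only: hypothetical names throughout. ADD_SUBSET_RULES attaches a
-- row-filtering DML condition to a capture, propagation, or apply process,
-- so each replica applies only the rows matching its condition.
BEGIN
  DBMS_STREAMS_ADM.ADD_SUBSET_RULES(
    table_name    => 'app.master_table',
    dml_condition => 'region_id = 42',        -- this replica's slice only
    streams_type  => 'apply',
    streams_name  => 'apply_region42',
    queue_name    => 'strmadmin.streams_queue');
END;
/
```

Since the conditions live in the rules engine rather than in trigger code, changing the partitioning rules on the fly should amount to altering rules rather than redeploying triggers, which is a large part of the appeal for us.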
>> One of the possibilities we're looking at is Oracle Streams, but I
>> have heard some allegations that it is inherently slow because it uses
>> AQ as transport mechanism.
> Consider it, especially if you're on 10G. In 9iRel2 it was (still)
> buggy, and I had better results with AdvRep. Have not done serious
> testing with 10G
Yes, we are on 10G. I should have mentioned that in the original post.
>> So my question is, is it worth considering Oracle Streams for really
>> high load, almost mission-critical application? The lag time should not
>> exceed few minutes to fulfill some of our Service Level Agreements.
> That has nothing to do with high load. If you're talking
> several thousands to millions of transactions per second, I'd
> be worried about your setup (replicate data near real time),
> too. Independent of technology.
At peak times, we might be close to a thousand rows inserted per second, I think.
Anyway, based on what you provided, I think we will start evaluating both Advanced Replication and Oracle Streams. It will be fun setting up a test environment to see which one performs better.
Thank you for your input, it is very helpful.
> Frank van Bortel
> Top-posting in UseNet newsgroups is one way to shut me up
Received on Wed Feb 06 2008 - 15:07:24 CST