Oracle FAQ | Your Portal to the Oracle Knowledge Grid
Re: (long) Data Guard (zero data loss)
> And the sites where no data loss is absolutely essential would be
> precisely those that *have* a bazillion bucks to spend on their
> requirements.
>
> Were you mad enough to do Zero Data Loss with nothing but a bit of Cat5
> and 2 ethernet cards, you could expect severe performance impacts.
Hi,
Zero data loss could be achieved with a smaller performance hit and a smaller investment in infrastructure if we configured a transaction monitor, or the application layer itself, to apply all changes to two (or more) databases.
We would then have fewer latency issues: all queries/DML could be sent to both databases simultaneously, instead of the serial approach of zero-data-loss Data Guard (1. the client issues DML and commits on the primary server; 2. the redo is sent to the standby; 3. the primary waits for confirmation that the redo has been applied on at least one SYNC node).
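A minimal sketch of the parallel approach, using Python threads and sqlite3 in-memory databases as stand-ins for the two servers (all names here are hypothetical illustrations, not from the post):

```python
import sqlite3
import threading

# Two independent databases standing in for the two servers.
db_a = sqlite3.connect(":memory:", check_same_thread=False)
db_b = sqlite3.connect(":memory:", check_same_thread=False)
for db in (db_a, db_b):
    db.execute("CREATE TABLE accounts (id INTEGER PRIMARY KEY, balance INTEGER)")
    db.execute("INSERT INTO accounts VALUES (1, 100)")
    db.commit()

def apply_change(db, sql, params):
    """Run the same DML and commit on one database."""
    db.execute(sql, params)
    db.commit()

# Send the same DML to both databases simultaneously instead of serially.
sql = "UPDATE accounts SET balance = balance + ? WHERE id = ?"
threads = [threading.Thread(target=apply_change, args=(db, sql, (50, 1)))
           for db in (db_a, db_b)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

After the join, both databases hold the committed change; the client's wait is bounded by the slower of the two commits rather than by a commit plus a redo round trip.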
This is more like logical redundancy, and it is better in the sense that if the primary server's memory, network, or CPU starts to introduce small errors, the second database still works correctly. In a Data Guard maximum-protection scenario, the first instance's faults could easily be replicated to the other.
Performance-wise, you can also run queries against both databases while keeping both perfectly up to date (with a physical standby you have to open the database read-only for querying, leaving it out of sync for a while). One can have a zero-loss configuration, where the GUI gets an OK once the change is committed in at least two databases, or a very-near-zero-loss configuration, where the OK is returned to the client once at least one database has the change committed. The advantage over Data Guard, again, is that we don't have to rely on log apply services, which lag badly in non-synchronous protection modes and cause too many log switches with parameters such as archive_lag_target, introducing a performance hit.
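The two acknowledgement policies can be expressed as a quorum parameter. A sketch, again with sqlite3 and threads as stand-ins (the `replicated_commit` helper is hypothetical, not an Oracle or Data Guard API):

```python
import sqlite3
import threading

def replicated_commit(conns, sql, params, quorum):
    """Apply the same DML to every database in parallel; report OK once
    at least `quorum` databases have committed it.
    quorum == len(conns) is the zero-loss policy;
    quorum == 1 is the very-near-zero-loss policy."""
    committed = 0
    lock = threading.Lock()

    def worker(conn):
        nonlocal committed
        try:
            conn.execute(sql, params)
            conn.commit()
            with lock:
                committed += 1
        except sqlite3.Error:
            pass  # this replica missed the change and must be resynced later

    threads = [threading.Thread(target=worker, args=(c,)) for c in conns]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return committed >= quorum

# Two in-memory databases standing in for the two servers.
conns = [sqlite3.connect(":memory:", check_same_thread=False) for _ in range(2)]
for c in conns:
    c.execute("CREATE TABLE log (msg TEXT)")
    c.commit()

# Zero-loss policy: OK only when both databases have committed the change.
acked = replicated_commit(conns, "INSERT INTO log VALUES (?)", ("change 1",), quorum=2)
```

A real implementation would return to the client as soon as the quorum is reached instead of joining every thread, and would need a resync path for any replica that missed a change.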
Logical redundancy would also be superior to Oracle's multimaster replication, because replication uses the same scheme: the DML goes to the first instance, and that instance then has to propagate it to the other database (including any corruptions it may have picked up).
But the main drawback of logical redundancy in the application layer is that your software has to support it. That's no problem when you have planned for it from the start of the software's lifecycle, but out-of-the-box solutions won't work with it; Oracle's Data Guard, by contrast, is transparent to the application. The recovery strategy is probably also more complex, since we are dealing with two different databases; but since we are talking about a *no data loss* scenario, both databases would be backed up separately anyway. Performance tuning gets a bit more interesting too: it would be good if the servers behaved similarly, for example a DML should take about the same time on both databases, to reduce waiting overhead. This primarily means identical configuration and maintenance.
So my point is that this kind of logical redundancy is much cheaper and more reliable, and can achieve much better performance, than Oracle's Data Guard - in case you really need a high-performance zero-loss solution and you are building your software from scratch. And if you really need those characteristics, you probably are building from scratch, because normal business software isn't designed for that.
(If anyone wonders why I posted this text here: I'd just like to hear other interesting opinions on this "logical redundancy" topic.)
Tanel.
Received on Wed Mar 12 2003 - 07:41:47 CST