Re: Getting a consistent copy
Date: Thu, 30 Jul 2009 06:41:32 -0700 (PDT)
On Jul 28, 5:37 pm, joel garry <joel-ga..._at_home.com> wrote:
> The thing that my users have in mind is replicating an OLTP database
> remotely, just the storage with the disaster site node shut down, then
> in a disaster, fire it up and everything works. Instead of just using
> a standby db.
> As I understand things:
> In EMC-speak, this requires either synchronous mode or doing a suspend
> and then snapping.
No. Even with async you can design things so that suspends are not necessary. We don't have any suspends in place.
It all comes down to preserving write-order dependencies. EMC can certainly handle these requirements, and I cannot imagine any of the major storage vendors have problems here. Then it comes down to design requirements and performance requirements.
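To illustrate why write-order dependencies matter, here is a toy model (my own simplification, not how any vendor's async replication actually works internally): the remote image is crash-consistent only if the writes applied there form a prefix of the order the host issued them.

```python
# Toy model of write-order dependency in async replication.
# Host issues three dependent writes: the redo record first, then the
# data block it protects, then the checkpoint saying the block is safe.
# Names and structure here are illustrative, not any vendor's design.

ordered = ["redo", "datablock", "checkpoint"]

def crash_consistent(applied):
    # The remote image is recoverable only if the applied writes are a
    # prefix of the host's issue order -- no write arrived before one
    # it depends on.
    return applied == ordered[:len(applied)]

# Any prefix of the host order is a valid, recoverable image...
assert all(crash_consistent(ordered[:i]) for i in range(len(ordered) + 1))

# ...but shipping writes out of order can land a checkpoint at the
# remote site without the redo that must precede it: unrecoverable.
assert not crash_consistent(["checkpoint"])
```

A storage subsystem that guarantees this prefix property across the volumes in a consistency group is what lets you skip the suspend-and-snap dance.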
> What I don't know, and would like to find out:
> What are the network requirements for synchronous replication.
That's going to vary based on how much change is occurring in the database being replicated.
Synchronous replication can have a severe impact on commit-intensive systems. Each write goes to cache in the local storage subsystem, then gets transmitted to the remote storage subsystem, then an acknowledgement that it was received goes back to the local subsystem before the write completes.
The wait event that often gets impacted the most is log file sync.
Can your applications survive the performance impact of a potentially huge hike in log file sync?
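A back-of-envelope estimate makes the point. These numbers are my own assumptions, not vendor figures: light in fiber travels roughly 200,000 km/s, and every commit's redo write pays one round trip to the remote array (plus a remote cache write) before it can be acknowledged.

```python
# Rough estimate of the extra log file sync latency synchronous
# replication adds per commit. The 200,000 km/s fiber speed and the
# 0.5 ms remote cache write are illustrative assumptions.

def added_commit_latency_ms(distance_km, remote_write_ms=0.5):
    rtt_ms = 2 * distance_km / 200_000 * 1000  # round trip in fiber
    return rtt_ms + remote_write_ms            # plus remote cache write

for km in (10, 100, 1000):
    print(f"{km:>5} km: ~{added_commit_latency_ms(km):.2f} ms added per commit")
```

Even at 100 km that's on the order of an extra 1.5 ms per commit under these assumptions, which a batch job will not notice but a session committing thousands of times per second certainly will.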
> Is it really a good idea to be suspending the OLTP db every 20
> minutes. Client/server and n-tier order entry people can get a ways
> ahead of the app already.
Unclear to me why you believe this is a requirement. Just because some people have stuck it in does not mean it is a requirement in all designs.
> How this relates to a Nimbus RH100, which claims "Snapshots, cloning,
> volume migration, synchronous mirroring, and asynchronous replication"
> among other things.
You lost me completely here. You really want to understand the specific capabilities of the actual storage hardware and software that will be used.
These things are not generic designs that always work the same way; you can't just swap out parts and change vendors, etc.
> How would one make a fair test of recovery. I'm not convinced a
> normal load is a fair test.
Well, you have a DR test plan, a simulated DR test plan, a planned move to the DR site, and a planned move back from the DR site.
You document it all and test it repeatedly and regularly and make adjustments.
There's a pretty steep learning curve in this area, and the help you get from the storage vendors (beyond the basic "yeah, yeah, we can do it") is often dicey. Hiring someone, or bringing in a consultant, who has been there and done it with the storage hardware and software you plan on putting in place is often the best bet.