Re: DR Options.

From: <stevedhoward_at_gmail.com>
Date: Mon, 9 Mar 2009 15:01:44 -0700 (PDT)
Message-ID: <518d3359-e8b2-46fc-b58d-8cb106658fa2_at_q30g2000prq.googlegroups.com>



On Mar 9, 2:54 pm, "Preston" <dontwant..._at_nowhere.invalid> wrote:
> We're looking into providing an off-site disaster recovery server for
> some of our clients, & I'd appreciate some input on the best way to do
> this.
>
> All the clients' databases are 11g Standard Edition on Windows servers,
> typically 5 - 10gb, up to 20 concurrent users, & all use archive log
> mode. A full RMAN backup is done every night.
>
> The plan is to put a server in a datacentre, with one or two databases
> containing a schema for each client. We'll then synchronise each schema
> with the clients' live data every hour or so (it doesn't matter if they
> lose an hour's data). If a client's own server goes into meltdown, they
> can connect to ours using terminal services & carry on working until
> their server's repaired.
>
> So the question is, what's the best way to keep our DR server
> synchronised (more or less) with the clients' data, bearing in mind
> most of them only have standard ADSL connections?
>
> --
> Preston.

Assuming each database generates about 1GB of redo per day (AWR snapshots alone can generate a fair amount), you should be able to copy the archived redo logs to the remote data centre and script an apply process for them. If you can sustain 512KB/s of throughput into the data centre, that's roughly 42GB per day if you completely saturate the link, which would let you do this for around 40 databases at 1GB of redo each.
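A minimal sketch of the apply side, assuming the archived logs get shipped into a directory on the DR server by some scheduled copy job, and that the DR copy of each database was created from an RMAN backup of the primary and is kept mounted; all paths and names below are illustrative:

```sql
-- On the standby instance (built from a backup of the primary),
-- mount it and apply whatever archived logs have arrived.
STARTUP MOUNT;

-- Apply all available archived logs without prompting; run this
-- from a scheduled script after each file transfer completes
-- (directory is illustrative).
RECOVER AUTOMATIC FROM 'D:\dr\arch' STANDBY DATABASE;

-- Only when a client actually needs the DR copy: finish recovery
-- and open it read-write.
ALTER DATABASE ACTIVATE STANDBY DATABASE;
ALTER DATABASE OPEN;
```

This is the usual "manual standby" approach for Standard Edition, where Data Guard isn't available.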

Set archive_lag_target on the primary databases so a log switch (and therefore an archived log to ship) is forced at a regular interval even during quiet periods, and let 'er fly.
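For example, with 3600 seconds matching the hourly sync window mentioned above:

```sql
-- Force a log switch at least every hour so an archived log is
-- produced (and can be shipped) even when the database is quiet.
-- Valid values are 0 (disabled) or 60-7200 seconds.
ALTER SYSTEM SET archive_lag_target = 3600 SCOPE = BOTH;
```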

Alternatively, you could create materialized view logs on the primary tables and define fast-refreshable materialized views at the standby, refreshing them as a group every 60 minutes.
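A sketch of that, assuming a db link from the DR database back to the client's primary; the table name `orders`, the link name `client1`, and the group name are all illustrative:

```sql
-- On the primary: a materialized view log so fast refresh can work.
CREATE MATERIALIZED VIEW LOG ON orders WITH PRIMARY KEY;

-- On the standby: a fast-refreshable MV over a db link back to the
-- client's database.
CREATE MATERIALIZED VIEW orders_mv
  REFRESH FAST WITH PRIMARY KEY
  AS SELECT * FROM orders@client1;

-- Put all of a client's MVs into one refresh group so they refresh
-- to a transactionally consistent point, roughly hourly.
BEGIN
  DBMS_REFRESH.MAKE(
    name      => 'client1_grp',
    list      => 'ORDERS_MV',
    next_date => SYSDATE,
    interval  => 'SYSDATE + 1/24');
END;
/
```

One thing to weigh: this fits the schema-per-client design better than log shipping (which works per database, not per schema), but every refresh pulls over the client's ADSL upstream, so the MV logs should be kept small by refreshing regularly.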

HTH,
Steve
