Re: 2-Phase commit or replication?

From: Ganesh Puranik <surya_at_usa.net>
Date: 1995/05/31
Message-ID: <3qidmd$128_at_earth.usa.net>


> myung_at_hk.super.net (Mr Michael Wai Kee Yung) writes:
> Dear all, my company is running a 40-50 Gigabyte database using Oracle 7.
> We're planning to set up a secondary site as the backup of our business
> database. Could you all experts give me a clue what's the best solution
> to implement this?

Well, it depends entirely on how much down time you can afford, and whether you can live with losing the data that was not yet transferred between the last snapshot refresh and the database failure.

I take it you are safeguarding against non-disk failures:

2PC gives immediate availability in case of failure. However, online applications that update the duplicated data will run more than twice as slow, since every commit has to be coordinated across both sites.
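As a minimal sketch of where that cost shows up (the ORDERS table and the BACKUP_SITE database link are just assumptions for the example), a transaction that writes to both copies over a database link gets a two-phase commit from Oracle automatically:

    -- Assumes a database link to the secondary site already exists, e.g.:
    -- CREATE DATABASE LINK backup_site CONNECT TO repl IDENTIFIED BY repl USING 'bkup';

    INSERT INTO orders VALUES (1001, SYSDATE, 'NEW');              -- local copy
    INSERT INTO orders@backup_site VALUES (1001, SYSDATE, 'NEW');  -- remote copy
    COMMIT;  -- distributed commit: prepare on both sites, then commit on both

Every online transaction pays that coordination cost, which is where the slowdown comes from.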

Snapshots may give you better online response time, since refreshes can be scheduled at off-peak times. You still need a mechanism to transfer data in case of failure.
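For example (the table, snapshot and link names are made up here), an Oracle 7 snapshot that fast-refreshes itself every night at 2 am would look something like:

    -- On the primary: a snapshot log so incremental (fast) refresh is possible
    CREATE SNAPSHOT LOG ON orders;

    -- On the secondary site: pull changes over a database link, nightly at 02:00
    CREATE SNAPSHOT orders_snap
      REFRESH FAST
      START WITH SYSDATE
      NEXT TRUNC(SYSDATE) + 1 + 2/24
      AS SELECT * FROM orders@prod_site;

Anything committed after the last refresh is what you stand to lose at failure time.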

Have you considered the following:

  1. A fault-tolerant database with multiple machines accessing the same disks.
  2. Instead of duplicating table by table using snapshots, use volcopy or some other such mechanism to transfer entire tablespaces (actually the datafiles) and the archived redo logs. In case of failure, recover on the other machine and continue. This is not an easy thing to do, but it may give you better response time if you cannot get off-peak hours; there is a rough sketch after this list.
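A rough sketch of option 2 (the tablespace name is an assumption, and the actual file copy would be done at the OS level with volcopy, dd or similar):

    -- On the primary: hot-backup mode around the datafile copy
    ALTER TABLESPACE users BEGIN BACKUP;
    --   (copy the USERS datafiles to the secondary machine here)
    ALTER TABLESPACE users END BACKUP;
    ALTER SYSTEM ARCHIVE LOG CURRENT;   -- then ship the archived redo logs as well

    -- On the secondary, after a failure (database mounted against the copied files):
    RECOVER DATABASE USING BACKUP CONTROLFILE UNTIL CANCEL;
    ALTER DATABASE OPEN RESETLOGS;

The point is that the primary keeps running normally while the files are copied; the price is a more involved, more manual recovery procedure on the other machine.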

To safeguard against disk failures, you can use RAID to achieve different levels of safety. Again you have choices: e.g. RAID-1 gives good response but requires twice the space, while RAID-5 doesn't give as good a response but is efficient in terms of space, needing only approximately 15% more.
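For example, mirroring a 50 Gigabyte database under RAID-1 needs another 50 Gigabytes of disk, while an 8-disk RAID-5 array gives up roughly one disk's worth of space to parity, i.e. about 1/7 = 14% on top of the 7 disks of data (the exact figure depends on how many disks are in the stripe).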

Ganesh Puranik
Consultant, Surya Systems, Inc.
