Re: OFA Structure issue (for gurus only)

From: Neil Greene <ngreene_at_laoc.SHL.com>
Date: 1997/05/24
Message-ID: <MPG.df13ff761789b8c98968f@news.internetMCI.com>

In article <863822345.12280_at_dejanews.com>, eric.san.andres_at_kellogg.com says...
> Volume group export/import, as I have used it, allows a set of disks
> belonging to the same volume group to be transferred physically to
> another system. From what I understand, it is a helpful function
> whenever a CPU crashes but all of its disks remain safe and operational.
> If that is a production system, of course, the database(s) should be up
> ASAP. With export/import, another system, or a clone, can be set up in
> a snap: boot the server from disk, CD, or tape, bring up the operating
> system, and set the crashed system's volume group for import. I assume
> this volume group comprises the set of disks containing the database
> data. Then we go to the import stage. The exported set of disks can be
> imported by physically transferring them to the other system. Of course,
> proper adjustment of the SCSI addresses and SCSI cables is implied, as
> is the proper configuration of the disks. Then import the newly
> attached volume group. All the filesystems in that volume group will be
> added to the filesystem list of the cloned UNIX server. If there are
> filesystems with the same mount point, this is where the problem
> arises. Filesystem mount points need to be unique in order to be
> mounted: you will not be able to mount two /u020 mount points at the
> same time, but you can mount two filesystems such as /u020/server1 and
> /u020/server2.
>
> Right now, our system is implementing:
> /u020
> /u031
> /u032
> ...
> ...
>
> for ALL the servers,
>
> My PROPOSAL is to have unique mount points across all the servers,
> and to that end I'd propose the following structure:
>
> /u020/`hostname`

I do the following under Solaris and Volume Manager to create an environment where I can import/export disk groups and automatically mount their file systems on other systems for fail-over purposes.

STEP 1: Configuring Volume Manager Disk Groups
Assign your disks to disk groups that represent all of the file systems necessary to run an entire environment. For instance, on our production system I have a PROD disk group which contains all of the disks and file systems for the production database and everything supporting that environment: backups, in/out data streams, and so on.

The ROOTDG disk group ONLY CONTAINS disks and file systems I would not necessarily want or need in the event of a CPU failure during fail over. These are primarily the OS-level file systems such as /, /etc, /var, and /opt. NOTE: for fail-over reasons, our production system and development system stay at the same OS revision level. Other boxes are used for testing operating system patch releases before they are applied to the production and development systems.
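
Purely as an illustration, not a recipe (the disk names, device paths, and volume sizes below are made up, and the exact vxdg/vxassist syntax should be checked against your Volume Manager release), building up the PROD disk group amounts to something like:

        # Disks are assumed to already be under Volume Manager control
        # (e.g. brought in with vxdiskadd).
        vxdg init PROD PROD01=c1t0d0s2
        vxdg -g PROD adddisk PROD02=c1t1d0s2

        # Carve volumes for the database and its supporting areas out of
        # PROD disk group space, in whatever layout fits (stripe, mirror,
        # RAID-5, ...).
        vxassist -g PROD make oradata1 4g
        vxassist -g PROD make backup1 8g layout=stripe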

STEP 2: Oracle Mount Point Names
I do not use the Oracle OFA standard. For my production environments, or any database I know I will need to fail over, I configure mount points based on the SID name. The format below places all file systems for the production database below the sid_name. These file systems can be a mix of RAID levels created in Volume Manager, including RAID 0, 1, 0+1, and 5, and they are all located in the PROD disk group (a rough sketch of the corresponding commands follows the example below).

	/sid_name
		oradata1
		oradata2
		oradata3
		oradata4
		[ ..... ]
		oradataN

	For Example:

	/orafin
		/oradata1
		[ ... ]
		/oradataN
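
To make the layout concrete, here is a rough sketch of mounting those file systems by hand; the volume names are assumed to match the mount point names, and the devices are not from a real system:

        # One mount point per data area, all grouped under the SID name
        # (file systems are assumed to already exist on the PROD volumes).
        for fs in oradata1 oradata2 oradata3
        do
                mkdir -p /orafin/$fs
                mount /dev/vx/dsk/PROD/$fs /orafin/$fs
        done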

STEP 3: Miscellaneous File Systems
All miscellaneous file systems are created using disks from the PROD disk group. In cases where there may be potential mount point conflicts, I append the server name to the mount point. For instance, /usr/local becomes /usr/local.enterprise and /usr/local.voyager, which resolves the conflict. Other in/out data file systems are configured as follows:

	/application
		/sid_name1		// PROD DISK GROUP FILESYSTEM
		/sid_name2		// SCOPUS DISK GROUP FILESYSTEM
	/archives
		/sid_name1		// PROD DISK GROUP FILESYSTEM
		/sid_name2		// SCOPUS DISK GROUP FILESYSTEM

In the above example, /archives is the static directory under which all archive log file systems are mounted; the same goes for /application, under which all database-specific Oracle Applications file systems are mounted for concurrent manager input/output streams, reports, and so on. These file systems can come from a mix of disk groups, as noted in the comments to the right above.
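
For reference, the matching /etc/vfstab entries would look roughly like this; the volume names (local, arch1, arch2) are invented for the example, and sid_name1/sid_name2 stand in for the real SIDs as above:

        # device to mount         device to fsck             mount point            FS   fsck  boot  opts
        /dev/vx/dsk/PROD/local    /dev/vx/rdsk/PROD/local    /usr/local.enterprise  ufs  2     yes   -
        /dev/vx/dsk/PROD/arch1    /dev/vx/rdsk/PROD/arch1    /archives/sid_name1    ufs  2     yes   -
        /dev/vx/dsk/SCOPUS/arch2  /dev/vx/rdsk/SCOPUS/arch2  /archives/sid_name2    ufs  2     yes   -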

STEP 4: Supporting New Projects
If I were to create another project or environment that might need to support fail over, I would create another disk group and assign disks to it, so that I could fail over just that project, or move it to another system, if I needed to. For instance, if a SCOPUS database were to be created, I would create a SCOPUS disk group, assign the disks to the group, and create the database and miscellaneous file systems. All SCOPUS-related database file systems would be mounted under the layout below (a rough command sketch follows it):

	/scopus
		/oradata1
		[ ... ]
		/oradataN
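
Again only as a sketch, with invented disk and volume names, setting up such a project-specific group and later handing it off to another host looks something like this:

        # Give the new project its own disk group and its own volumes.
        vxdg init SCOPUS SCOPUS01=c2t0d0s2
        vxassist -g SCOPUS make oradata1 4g
        newfs /dev/vx/rdsk/SCOPUS/oradata1
        mkdir -p /scopus/oradata1
        mount /dev/vx/dsk/SCOPUS/oradata1 /scopus/oradata1

        # Because SCOPUS is self-contained, just this project can be
        # released here and picked up elsewhere (see STEP 5):
        umount /scopus/oradata1
        vxvol -g SCOPUS stopall
        vxdg deport SCOPUS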

STEP 5: Grabbing Disk Groups
Sun has an unsupported script called 'grabdg' (Grab Disk Group) that automates the process of grabbing a disk group and mounting all of its file systems. Combined with pre/post scripts, you can automate other fail-over procedures as well, such as starting your production listener on the fail-over host or configuring a virtual IP address on the fail-over node for the production server. The script has an option to rsh to the production server, see what the mount points are for the disk group you want to grab, and then store this information on the fail-over node so it can modify /etc/vfstab for you automatically.
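
I will not try to reproduce grabdg's exact options here; what follows is only a hand-rolled sketch of the same idea, with a placeholder host name (prodhost) and scratch file, using the PROD disk group from the examples above:

        # On the fail-over node, once the production host has released the
        # PROD disk group (or has gone down entirely):

        # 1. Pull PROD's vfstab entries off the production host so they can
        #    be merged into the local /etc/vfstab.
        rsh prodhost grep '/dev/vx/dsk/PROD/' /etc/vfstab > /var/tmp/prod.vfstab

        # 2. Import the disk group and start its volumes.
        vxdg import PROD
        vxvol -g PROD startall

        # 3. Mount every file system in the group, then run any post scripts
        #    (start the production listener, bring up the virtual IP, ...).
        awk '{ print $1, $3 }' /var/tmp/prod.vfstab | while read dev mnt
        do
                mkdir -p $mnt
                mount $dev $mnt
        done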

In my case, I am not concerned about conflicts in SCSI addresses since I am using Sun storage arrays. The production arrays are dual-hosted to the production and development (fail-over) systems, so the development (fail-over) system sees both its own storage arrays and the production system's arrays.

If anyone has any questions regarding this strategy, drop me an email.

