
Re: Multiple instances on a single database

From: koert54 <koert54_at_nospam.com>
Date: Mon, 29 Oct 2001 04:58:59 GMT
Message-ID: <nm5D7.13876$PH1.894@afrodite.telenet-ops.be>


It's possible to run an OPS configuration on ONE node on Linux starting from Oracle 8.1.7.0.1.
I know it's silly but I'm doing it as well at home - just as a test set-up to keep my OPS knowledge alive...

Here's the doc, taken from Metalink, on how to do it! Koert

PURPOSE


This document is intended for people who want to experiment with Oracle Parallel Server (OPS) technology on commodity hardware. This is possible thanks to the OPS option newly introduced in the Linux (x86) port of the Oracle RDBMS 8.1.7.0.1. The following contents relate strictly to the Linux Intel 32-bit platform.

SCOPE & APPLICATION



The intended audience is technical staff with good experience of the Linux operating environment who are also familiar with OPS technology. Here we detail the steps necessary to set up the environment and part of the theory behind them.

TITLE


  Setting up Oracle Parallel Server environment on Linux - Single node

  CONTENTS


  1. Introduction
       1.1) DLM and Parallel Cache
       1.2) Cluster Management layer
       1.3) Node Monitor services
       1.4) Watchdog (WDT) services [6],[7]
  2. Architecture
       2.1) Two-node / Two-instance Oracle Parallel Server Architecture
       2.2) Single-node / Two-instance Oracle Parallel Server Architecture
  3. Kernel and System Configuration
       3.1) Bill of Materials
       3.2) Prepare your disk for RAW partitions
       3.3) Prepare your kernel for OPS
            3.3.1) Kernel sources
            3.3.2) Raw device patch
            3.3.3) Raw device compatible fileutils
            3.3.4) Kernel setup
            3.3.5) Kernel & Modules rebuild
  4. Install Oracle software
  5. Setup Oracle Cluster Manager components
       5.1) Load the softdog module in your kernel
       5.2) Create watchdog device
       5.3) Configure the NM
       5.4) NM and CM
       5.5) Check the /etc/rc.local and reboot your system
  6. Parallel database setup
       6.1) Create parallel server database
            6.1.1) Prepare the symbolic links
            6.1.2) Prepare the init.ora files
            6.1.3) Create the database
            6.1.4) Redo log threads
            6.1.5) Final checks
  7. Startup both instances in parallel mode
       7.1) Setup the unix environment
       7.2) Startup the instances
  8. Conclusions
  ------------------------------------------------------------------------
  1) Introduction [1],[2]

 1.1) DLM and Parallel Cache:

 Briefly, Parallel Cache 8i (PC) synchronization relies on a Distributed  Lock Management (DLM) software layer that allows resources to be locked  among all the instances participating in the parallel server in a  distributed fashion. This layer is part of the RDBMS kernel and maintains  an inter-instance consistent view of all the locks in the database. The  DLM is responsible for granting access to protected resources to the  instance processes requesting them and for tracking the ownership of  each lock.

 Most of these tasks are carried out by OPS specific background processes :

    LMON - The "Lock Monitor Process" monitors the entire cluster to manage

           global locks and resources. LMON manages instance and process
           deaths and the associated recovery for the
           Distributed Lock Manager.

    LMD  - The "Lock Manager Daemon" is the lock agent process that manages
           lock manager  service  requests   for  Parallel Cache Management
           locks  to   control   access to   global  locks  and  resources.
           The LMD process  also  handles deadlock  detection  and  remote
           lock requests.  Remote  lock requests are  requests  originating
           from another instance.

    LCK -  The "Lock Process" manages non-Parallel Cache Management locking
           to coordinate shared resources  and  remote  requests for  those
           resources

    BSP  - The "Block Server Process" rolls back uncommitted transactions
           for blocks  requested  by  cross-instance  read/write requests
           and sends the consistent read block to the requestor
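
 A quick way to spot these processes once an instance is started in parallel
 mode (the grep pattern below is just an illustration; the process names
 match the ps output shown later in this document):

     % ps -ef | egrep 'ora_(lmon|lmd0|lck0|bsp0)_' | grep -v grep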


 1.2) Cluster Management layer :

 Cluster Manager (CM) is responsible for maintaining the process-level  cluster status. The Cluster Manager accepts registration of Oracle  instances to the cluster and provides a consistent view of which Oracle  instances are alive and which are dead.
 Cluster Manager also propagates status information to all the Oracle  instances, enabling communication among instances. Typically this layer  is an Operating System service and depends on the specific port  (Solaris/AIX/HP-UX/OpenVMS); starting with Oracle 8.1.7.0.1 for Linux x86,  Oracle Corporation developed a proprietary CM software layer that natively  interfaces with the 8i kernel (currently it lives outside the kernel).

    DLM relies on CM services to detect cluster events like :

'instance joining the parallel server' or
'process died unexpectedly'

      ...

 1.3) Node Monitor services :

 Node Monitor (NM) is responsible for maintaining a consistent view of the  nodes and reports to the Cluster Manager which nodes are alive or dead.  The Node Monitors on all nodes in a cluster regularly send ping messages  to each other. Each node maintains a database containing status  information on the other nodes. The Node Monitors in a cluster mark a node  inactive if the node fails to send out a ping message within a defined  time interval plus a margin. The Node Monitor of a node can fail to send  a ping message due to:

  + the death of the Node Monitor's sending thread on the remote machine
  + a network failure
  + abnormally heavy load on the remote machine

    CM relies on NM services to detect cluster events like :

'network failures' or
'cluster node died unexpectedly'

      ...

 1.4) Watchdog (WDT) services [6],[7]

 The watchdog device allows applications to make use of a timer facility.  First the application should register an action routine with the watchdog  device, and start the watchdog running. After this the application must  call a watchdog reset function at regular intervals, or else the device  will cause the installed action routines to be invoked. The assumption is  that the watchdog timer should never trigger unless there has been a  serious fault in either the hardware or the software, and the  application's action routine should perform an appropriate reset  operation.

 The main reason for having a watchdog timer is to guard against software  and certain hardware failures that may cause the system to stop working,  but can be cured by a reset or some other form of restart .

 The watchdog device is either implemented by interfacing to a hardware  watchdog device, or it is emulated using an alarm on the kernel real-time  clock.

 The 'watchdogd' daemon supplies Watchdog services to the Node Monitor,  the Cluster Manager and the LMON (Lock Monitor) process by using a  Watchdog device. If watchdogd finds a process that did not notify it  within the defined interval, watchdogd crashes the entire system (!).  On Linux, the Watchdog device is implemented at the kernel level.

    NM, CM and the DLM rely on WDT services to detect abnormal aborts or     hangs of key processes.
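
 To illustrate the watchdog contract described above: whatever process opens
 /dev/watchdog must keep writing to it before the margin expires, otherwise
 the (soft)dog fires. A minimal sketch, assuming the softdog module is loaded
 with soft_margin=60 as in step 5.1 (this only stands in for what watchdogd
 does on behalf of NM/CM/LMON; do not run it alongside watchdogd, since only
 one process at a time may hold the device open):

         #!/bin/sh
         # keep-alive loop: write to the watchdog device well within the
         # 60 second soft_margin
         while true
         do
             echo "." > /dev/watchdog   # each write resets the softdog timer
             sleep 30                   # stay well below soft_margin
         done

 Note that with 'Disable watchdog shutdown on close' = 'y' (set in 3.3.4),
 killing this loop does NOT disarm the timer: the machine reboots once the
 margin expires.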

2) Architecture


 2.1) Two-node / Two-instance Oracle Parallel Server Architecture


                             Shared Disks,
                            Database on RAW
                             partitions
                         /                 \
                        /                   \
                    Node 1                    Node 2
             ---------------               ---------------
            | SGA Instance A|             |SGA Instance B |
            |---------------|             |---------------|
            |     |pmon|__  |             |    |pmon|___  |
            |   |smon |lgwr||             |   |smon |lgwr||
            |    ---|dbwr|  |             |    ---|dbwr|  |
            |        ----   |             |       ----    |
            |   ------------|             |------------   |
            |  |lmon lmd0   |             |lmon lmd0   |  |
      lipc  |  |  lck0      |    nipc     |   lck0     |  | lipc
          ------- DLM    <===================> DLM    -------
         |  |  |------------|             |------------|  |  |
         |  |  |  CM        |             |    CM      |  |  |
         +------>        <===================>        <------+
         |  |  |___________ |             | ___________|  |  |
         |  |  |  NM        |             |    NM      |  |  |
         +------>        <===================>        <------+
         |  |  |___________ |             |____________|  |  |
          ------> WDT       |             |    WDT    <------
            |  |            |             |            |  |
             ---------------               ---------------
            |////kernel/////|             |////kernel/////|
             ---------------               ---------------

    '-' lipc : local inter-process communication
    '=' nipc : networked inter-process communication (via TCP/IP)

    Note : Typically the shared-disk box is wired to the main boards via     SCSI or optical fibre; this introduces compatibility issues related to     the drivers and controllers used. Please refer to the HW certification     matrices for that.

 2.2) Single-Node / Two-instance Oracle Parallel Server Architecture


                               Local Disk,
                               Database on RAW
                               partitions
                                 |
             single Node         |
             ---------------------------------------------
            | SGA Instance A|             |SGA Instance B |
            |---------------               ---------------|
            |     |pmon|__                     |pmon|___  |
            |   |smon |lgwr|                  |smon |lgwr||
            |    ---|dbwr|                     ---|dbwr|  |
            |        ----      (localhost)         ----   |
            |     __________      nipc     __________     |
            |    | DLM      | <=========> | DLM      |    |
            |    |    lmon  |             | lmon     |    |
            |    | lmd0 lck0|             | lmd0 lck0|    |
            |    |__________|             |__________|    |
            |     ____|________________________|_____     |
            |    |    |                        |     |    |
            |    |    +-->      CM svcs     <--+     |    |
            |    |____|________________________|_____|    |
            |    |    |                        |     |    |
            |    |    +-->      NM svcs     <--+     |    |
            |    |____|________________________|_____|    |
            |    |    |                        |     |    |
            |    |     -->      WDT svcs    <-- lipc |    |
             ---------------------------------------------
            |//////////////// kernel  ////////////////////|
             ---------------------------------------------

    '-' lipc : local inter-process communication
    '=' nipc : networked inter-process communication (via TCP/IP)

    Note : this kind of configuration is much less exposed to the     compatibility issues with SCSI controllers and drivers that may limit     real multi-node implementations.

3) Kernel and System Configuration


 Now that the overall architecture is defined, we can detail step by step  the phases that will bring us to the final configuration.

 3.1) - Bill of Materials [4]



 (we tested on a Dell Latitude PIII 500MHz - 12GB IDE - 256MB RAM - CDROM)

   Intel x86 X-windowed and networked box
   Red Hat 6.2 distribution
   Related Linux kernel source rpms (kernel-source-2.2.14-5.0.i386.rpm)
   Oracle 8.1.7.0.1 for Linux Intel

   Operating System Linux  : kernel 2.2
   Operating System Patches: Patch 14 or later
                             Raw Device Patch required
   System Libraries        : GNU C Tools egcs-1.1.2

   Disk Space Requirements : 766 MB is typical.
                             Note: 600KB for Oracle Cluster Manager and
                             765MB for Oracle8i Enterprise Edition




 3.2) - Prepare your disk for RAW partitions


 Raw Devices (RD):

 OPS on Linux needs direct access to datafiles without interaction with  the file system. The Unix file system cache would produce unpredictable  effects and potentially corrupt the database, because blocks that Oracle  believes are written to disk may in fact still be sitting in the file  system cache. This is normally not a problem in a non-OPS environment,  but when multiple writers have to synchronize on disk these  'unsynchronized' caches must be turned off.

 The RD feature is available with Linux kernels and allows processes to  access disk partitions directly, without any FS interaction or caching.  To determine if your kernel source tree is patched for Raw Devices,  simply check if

     /usr/src/linux/drivers/char/raw.c exists

 Every Oracle database datafile, controlfile and redo log file must be  placed on an RD, so you will need a lot of them. Every RD is bound to a  disk partition in a 1-1 relation, so we will need at least nine,  considering a minimal reasonable configuration of two OPS instances for  one database :

 Raw Device assignments (as bound in /etc/rc.local and linked in 6.1.1) :

    Usage | Object       | raw device    | Partition  | size (MB)
    ------+--------------+---------------+------------+----------
    CM    | CmDiskFile   | /dev/raw/raw1 | /dev/hda9  |    ~8
    both  | system.dbf   | /dev/raw/raw2 | /dev/hda10 |  ~337
    both  | temp.dbf     | /dev/raw/raw3 | /dev/hda11 |   ~16
    both  | controlfile  | /dev/raw/raw4 | /dev/hda12 |   ~16
    OPS_A | rlog1        | /dev/raw/raw5 | /dev/hda13 |   ~16
    OPS_A | rlog2        | /dev/raw/raw6 | /dev/hda14 |   ~16
    OPS_B | rlog3        | /dev/raw/raw7 | /dev/hda15 |   ~16
    OPS_B | rlog4        | /dev/raw/raw8 | /dev/hda16 |   ~16
    both  | rollback.dbf | /dev/raw/raw9 | /dev/hda17 |   ~16

  Note : 'OPS_A' and 'OPS_B' will be the instance names

  Note : raw1 is oversized; the right dimension should be at least

         4 + [(the number of nodes in the cluster) * 4] KB [1]
         (for our two-instance setup that is 4 + 2*4 = 12 KB, so the ~8 MB
         partition is far more than enough).

 So we will need at least 9 disk partitions on our hard drive. The number  of PRIMARY PARTITIONS that you can create on an (IDE) disk is limited  to 4, and considering that you typically use one for the root file  system, one for swap and one for data, you need 8 more. The solution  adopted here is to use one EXTENDED PARTITION, which allows us to create  inside it as many LOGICAL PARTITIONS as we need. If your current hard  drive does not contain an EXTENDED PARTITION with at least 500MB free,  you will need to repartition it (...).

 Disk Partitioning Table : [5]

   Disk /dev/hda: 255 heads, 63 sectors, 1467 cylinders
   Units = cylinders of 16065 * 512 bytes

      Device Boot    Start       End    Blocks   Id  System
   /dev/hda1           410      1392   7895947+   5  Extended  <-- Note
   /dev/hda2   *        37       256   1767150   83  Linux
   /dev/hda3           257       409   1228972+   6  FAT16
   /dev/hda4             1        36    289138+  84  OS/2
   /dev/hda5           410       505    771088+   6  FAT16
   /dev/hda6           506       538    265041   82  Linux swap
   /dev/hda7           539       921   3076416   83  Linux
   /dev/hda8           922      1334   3317391   83  Linux
   /dev/hda9          1335      1335      8001   83  Linux    <--
   /dev/hda10         1336      1378    345366   83  Linux      |
   /dev/hda11         1379      1380     16033+  83  Linux      |  our LOGICAL
   /dev/hda12         1381      1382     16033+  83  Linux      |  PARTITIONS,
   /dev/hda13         1383      1384     16033+  83  Linux      |  placed inside
   /dev/hda14         1385      1386     16033+  83  Linux      |  the EXTENDED
   /dev/hda15         1387      1388     16033+  83  Linux      |  one (look at
   /dev/hda16         1389      1390     16033+  83  Linux      |  the start/end
   /dev/hda17         1391      1392     16033+  83  Linux    <--  points)

 Note : normally, after defining the new partitions, you'll need to reboot  your system in order to re-read the partition table.

 Note : RH62 pre-defines /dev/hda1 -> /dev/hda16 in /dev; you will probably  need to add more, for example '# mknod /dev/hda17 b 3 17'.

 Now you can bind the newly created partitions to the respective raw  devices. This binding is volatile and must be restored after every reboot  (put it in your /etc/rc.local).

         #to be added in system /etc/rc.local
         #(part 1/3)

         #-- rawdevice binding
         /usr/bin/raw /dev/raw/raw1 /dev/hda9
         /usr/bin/raw /dev/raw/raw2 /dev/hda10
         /usr/bin/raw /dev/raw/raw3 /dev/hda11
         /usr/bin/raw /dev/raw/raw4 /dev/hda12
         /usr/bin/raw /dev/raw/raw5 /dev/hda13
         /usr/bin/raw /dev/raw/raw6 /dev/hda14
         /usr/bin/raw /dev/raw/raw7 /dev/hda15
         /usr/bin/raw /dev/raw/raw8 /dev/hda16
         /usr/bin/raw /dev/raw/raw9 /dev/hda17

         #-- print out the current bindings
         #
         /usr/bin/raw -qa

         #-- rawdevice ownership & access rights
         #   (oracle:dba) is our oracle account

         /bin/chmod 600 /dev/raw/raw1
         /bin/chmod 600 /dev/raw/raw2
         /bin/chmod 600 /dev/raw/raw3
         /bin/chmod 600 /dev/raw/raw4
         /bin/chmod 600 /dev/raw/raw5
         /bin/chmod 600 /dev/raw/raw6
         /bin/chmod 600 /dev/raw/raw7
         /bin/chmod 600 /dev/raw/raw8
         /bin/chmod 600 /dev/raw/raw9
         /bin/chown oracle /dev/raw/raw1
         /bin/chown oracle /dev/raw/raw2
         /bin/chown oracle /dev/raw/raw3
         /bin/chown oracle /dev/raw/raw4
         /bin/chown oracle /dev/raw/raw5
         /bin/chown oracle /dev/raw/raw6
         /bin/chown oracle /dev/raw/raw7
         /bin/chown oracle /dev/raw/raw8
         /bin/chown oracle /dev/raw/raw9

         #
         # END of (part 1/3)


 3.3) - Prepare your kernel for OPS [1]



 Most of the kernel setup for our purposes concerns the WATCHDOG device and  RAW DEVICES and must be performed as 'root'. Expert users will probably  skip the sub-steps they don't need.

   3.3.1) Kernel sources : check, and install if needed, the current kernel    source packages.

        % rpm -qa | grep kernel

        ...
        kernel-source-2.2.14-5.0.i386.rpm
        kernel-headers-2.2.14-5.0.i386.rpm
        ...

   3.3.2) Raw device patch

        % ls -l /usr/src/linux/drivers/char/raw.c

   3.3.3) Raw device compatible fileutils (optional) :
   When you use the fileutils package and its dd command to back up raw
   devices, ensure that you use version 4.0j or later. Earlier versions
   of fileutils do not support the dd command for raw devices.

        fileutils-4.0-21.i386.rpm RPM package
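
   For instance, a cold backup of one of the raw devices with dd might look
   like the sketch below (the /u01/backup target path and the 64k block size
   are only illustrative choices; run it while the database is shut down):

        #-- copy the raw partition holding system.dbf to an ordinary file
        % dd if=/dev/raw/raw2 of=/u01/backup/system.dbf.bak bs=64k

        #-- a restore works the same way, with if= and of= reversed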

   3.3.4) Kernel setup (we assume a clean RH62 install and that you are    logged in with a graphical X session)

        % cd /usr/src/linux
        % cp configs/kernel-2.2.14-i686.config .config

        (we assume a PIII single processor)

        % make xconfig

        in section 'Watchdog Cards' select to :

        'Software Watchdog' = 'm'
        'Disable watchdog shutdown on close' = 'y'

        return to the main form and select 'Save and Exit'

   3.3.5) Kernel & Modules rebuild (experience with Linux kernel rebuilds    is welcome, as are all the file backups you need before proceeding).    We suggest saving a copy of the current kernel and system maps    and making an emergency boot disk (see 'mkbootdisk') ... [5]

         Clean up the old module version files by removing the files from
         /usr/src/linux/include/linux/modules:

        % cd /usr/src/linux/include/linux/modules
        % rm -f *

        Check dependencies. The makefile performs all dependency-checking
        procedures after you enter:

        % cd /usr/src/linux
        % make dep

        Clean objects for safety by entering at /usr/src/linux:

        % cd /usr/src/linux
        % make clean

        Make a boot image by entering the following at /usr/src/linux:

        % cd /usr/src/linux
        % make bzImage

        Make sure the boot image builds without error.

        Make modules by entering the following at /usr/src/linux:

        % make modules

        Install the modules by entering the following at /usr/src/linux,
        as root user:

        # make modules_install

        Copy the bzImage from /usr/src/linux/arch/i386/boot to /boot.
        (we overwrote the current image )

        # cd /usr/src/linux/arch/i386/boot
        # cp bzImage /boot/vmlinuz-2.2.14-5.0

        Install System.map file by copying the System.map generated at
        /usr/src/linux to /boot.

        # cd /usr/src/linux
        # cp System.map /boot/System.map-2.2.14-5.0

        setup & run 'lilo' to update the boot image with the new
        kernel (check linux documentation)

        Create the initial ram disk image for the kernel

        # cd /usr/src/linux
        # /sbin/mkinitrd /boot/initrd-2.2.14-5.0.img 2.2.14-5.0


        Edit /etc/lilo.conf by copying the entry of an already-working Linux
        kernel, then adjust the image, label and initrd lines for the new
        entry.

        For example :

           image=/boot/vmlinuz-2.2.14-5.0
           label=new_kernel
           initrd=/boot/initrd-2.2.14-5.0.img
           read-only
           root=/dev/hda2

        # /sbin/lilo

        Now the new kernel should be registered and ready to boot.


        Note that you may build a new kernel and its modules alongside the
        old ones, instead of overwriting them, by simply editing
        /usr/src/linux/Makefile and adding a uniquely identifying tag to the
        EXTRAVERSION line before you begin the build process.


4) Install Oracle software



 Simply install the software as usual, taking care to explicitly select  the 'Oracle Parallel Server Option' from the software list in addition to  the other options you need.
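
 As a reminder, the 8i software is installed with the Oracle Universal
 Installer started from the installation media; a minimal sketch (the
 /mnt/cdrom mount point is only an assumption, and the OPS option is picked
 interactively from the product component list):

        #-- as the oracle user, from an X session
        % cd /mnt/cdrom
        % ./runInstaller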

5) Setup Oracle Cluster Manager components


 WDT services : Now that the kernel supports watchdog devices, we can start  the Watchdog Daemon (watchdogd). After that, the other key processes  (NM, CM, LMON) of the Oracle Cluster Manager will be able to register with  'watchdogd' and work. watchdogd is responsible for REBOOTING YOUR SYSTEM  IMMEDIATELY if some component of the Oracle Cluster Manager 'forgets' to  'ping' it periodically. Crashing the system is a solution that prevents an  instance that has lost synchronization with the others from surviving and  continuing to operate on the database.

 Currently the whole system is rebooted, but in the future a less drastic  solution will probably be implemented that limits the shutdown to the  Oracle instances only, as in the other major OPS implementations.

   5.1) load the softdog module in your kernel

         #to be added in system /etc/rc.local
         #(part 2/3)

         /sbin/insmod softdog soft_margin=60

         #
         # END of (part 2/3)

   5.2) Create watchdog device

         % mknod /dev/watchdog c 10 130
         % chmod 600 /dev/watchdog
         % chown oracle /dev/watchdog


   5.3) Configure the NM

   create '$ORACLE_HOME/oracm/admin/nmcfg.ora' and insert the lines :

         DefinedNodes=<your node>
         CmDiskFile=/dev/raw/raw1

   Note : the node name should be the one returned by 'hostname' and must    resolve via /etc/hosts (if you're using DHCP you may set the /etc/hosts    alias for 127.0.0.1 to the output of 'hostname').
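
   For example (the node name 'mynode' below is purely hypothetical; use
   whatever 'hostname' returns on your box):

         % hostname
         mynode

         #-- matching /etc/hosts entry (DHCP / single-node case)
         127.0.0.1    localhost mynode

         #-- $ORACLE_HOME/oracm/admin/nmcfg.ora
         DefinedNodes=mynode
         CmDiskFile=/dev/raw/raw1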

   5.4) NM and CM
   You need to define part of the Oracle environment in order for NM and CM    to work. They will spawn some further threads (this is actually done    because of a limit in the watchdog services, which work on a per-process    basis).

         #to be added in system /etc/rc.local
         #(part 3/3)

         ORACLE_BASE=/u01/ora
         ORACLE_HOME=/u01/ora/product/81701
         PATH=$PATH:$ORACLE_HOME/bin:$ORACLE_HOME/oracm/bin
         LD_LIBRARY_PATH=$ORACLE_HOME/lib:$ORACLE_HOME/oracm/lib
         export ORACLE_BASE
         export ORACLE_HOME
         export PATH
         export LD_LIBRARY_PATH

         echo "..starting Watchdog daemon"
         watchdogd -g dba -e $ORACLE_HOME/oracm/log/watchdogd.log

         echo "..starting Oracle Node Monitor"
         oranm /m /e:$ORACLE_HOME/oracm/log/nm.log \
            </dev/null>$ORACLE_HOME/oracm/log/nm.out 2>&1 &

         echo "..starting Oracle Cluster Manager"
         oracm /e:$ORACLE_HOME/oracm/log/cm.log  \
            </dev/null>$ORACLE_HOME/oracm/log/cm.out 2>&1 &

         #
         # END of (part 3/3)


   5.5) Check the /etc/rc.local and reboot your system
   At this point your /etc/rc.local should be complete (part 1/3, part 2/3,    part 3/3) and the overall system should be ready.

   REBOOT and check the processes; you should observe something like this :

        UID        PID  PPID    CMD
        ...
        root       830     1    watchdogd -g dba -e /u01/ora/pro <-- WD
        root       831     1    oranm /m /e:/u01/ora/product/8.1 <-- NM
        root       832     1    oracm /e:/u01/ora/product/8.1.7. <-- CM
        root       844   831    oranm /m /e:/u01/ora/product/8.1
        root       845   844    oranm /m /e:/u01/ora/product/8.1
        root       851   844    oranm /m /e:/u01/ora/product/8.1 <-- child
        root       852   832    oracm /e:/u01/ora/product/8.1.7.    threads
        root       853   852    oracm /e:/u01/ora/product/8.1.7.
        root       854   844    oranm /m /e:/u01/ora/product/8.1
        root       855   844    oranm /m /e:/u01/ora/product/8.1
        root       856   844    oranm /m /e:/u01/ora/product/8.1
        ...


6) Parallel database setup



 At this point our system is ready to start a couple of instances on the  same database, but before that we have to create the database.  A database to be opened in parallel mode is pretty much the same as one  opened in exclusive mode, except for a few main issues :

 6.1) - Create parallel server database



 Details of this phase will be omitted; refer to the standard documentation  for them. We suggest using the Database Configuration Assistant 'dbassist'  to create the base script files and then modifying them manually.  'dbassist' will ask whether you want to create an OPS database or a normal  one; we suggest choosing the normal one, because there are some annoying  limitations in the dialogues (it forces you to specify absolute raw device  paths ...). Do not forget to run 'catparr.sql' after the usual  'catalog.sql' and 'catproc.sql', as sketched below.
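
 A minimal sketch of that last step, run against the newly created database
 (the ?/rdbms/admin location of the scripts is the standard one; verify it
 in your own installation):

         % sqlplus "/ as sysdba"
         SQL> @?/rdbms/admin/catalog.sql
         SQL> @?/rdbms/admin/catproc.sql
         SQL> @?/rdbms/admin/catparr.sql
         SQL> exit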

   6.1.1) Prepare the symbolic links
   Just to set up a more readable configuration we use an OFA-style layout    and symbolic links to the Raw Devices (the commands that create the links    are sketched after the listing below). The result looks like this :

         % ls -l /u01/ora/oradata/OPS/

         lrwxrwxrwx    1 oracle   dba  controlfile -> /dev/raw/raw4
         lrwxrwxrwx    1 oracle   dba  rlog1 -> /dev/raw/raw5
         lrwxrwxrwx    1 oracle   dba  rlog2 -> /dev/raw/raw6
         lrwxrwxrwx    1 oracle   dba  rlog3 -> /dev/raw/raw7
         lrwxrwxrwx    1 oracle   dba  rlog4 -> /dev/raw/raw8
         lrwxrwxrwx    1 oracle   dba  rollback.dbf -> /dev/raw/raw9
         lrwxrwxrwx    1 oracle   dba  system.dbf -> /dev/raw/raw2
         lrwxrwxrwx    1 oracle   dba  temp.dbf -> /dev/raw/raw3
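
   A sketch of how those links can be created, one 'ln -s' per raw device,
   following the assignments above :

         % mkdir -p /u01/ora/oradata/OPS
         % cd /u01/ora/oradata/OPS
         % ln -s /dev/raw/raw2 system.dbf
         % ln -s /dev/raw/raw3 temp.dbf
         % ln -s /dev/raw/raw4 controlfile
         % ln -s /dev/raw/raw5 rlog1
         % ln -s /dev/raw/raw6 rlog2
         % ln -s /dev/raw/raw7 rlog3
         % ln -s /dev/raw/raw8 rlog4
         % ln -s /dev/raw/raw9 rollback.dbf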


   6.1.2) Prepare the init.ora files
   Here we will have two instances, 'OPS_A' and 'OPS_B', with their    respective private and common parameter files.

   Private parameters files :


   #--OPS_A private parameters         #--OPS_B private parameters
   #  file: ?/dbs/initOPS_A.ora        #  file: ?/dbs/initOPS_B.ora
   db_name = "OPS"                     db_name = "OPS"
   ifile=$ORACLE_HOME/dbs/initOPS.ora ifile=$ORACLE_HOME/dbs/initOPS.ora
   thread=1                            thread=2
   instance_number=1                   instance_number=2

   rollback_segments = (RBS1)          rollback_segments = (RBS2)

   Common parameters file (part of) :


   #--OPS relevant common instance parameters (trivial)
   #  file: ?/dbs/initOPS.ora

   db_name                = "OPS"
   control_files          = ("/u01/ora/oradata/OPS/controlfile")
   compatible             ="8.1.7"
   parallel_server        =true   #-- database in Parallel mode
   #parallel_server       =false  #-- database in Standalone  mode
   gc_rollback_locks      ="0-128=32!8REACH" #Default: 0-128=32!8REACH
   GC_RELEASABLE_LOCKS    =1000

   6.1.3) Create the database
   Start one instance in standalone mode (parallel_server=false). You will    need to create at least two rollback segments ('RBS1','RBS2'), one for    each instance, and one more redo log thread. We will create the 'SYSTEM',    'RBS' and 'TEMP' tablespaces according to the partitions created before.
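
   A minimal sketch of the rollback segment part, assuming the 'RBS'
   tablespace has already been created on the rollback.dbf raw device
   (storage clauses omitted; adjust them to your own sizing):

         % sqlplus "/ as sysdba"
         SQL> CREATE ROLLBACK SEGMENT RBS1 TABLESPACE RBS;
         SQL> CREATE ROLLBACK SEGMENT RBS2 TABLESPACE RBS;

   Each instance then acquires and brings online its own segment at startup
   through its private rollback_segments parameter (RBS1 for OPS_A, RBS2 for
   OPS_B).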

   6.1.4) Redo log threads
   Hint : we suggest creating the initial redo logs on normal files and then    redefining them on the dedicated raw devices before opening in parallel    mode.

ALTER DATABASE ADD LOGFILE THREAD 1 GROUP 11 '/dev/raw/raw5' size 10M ;
ALTER DATABASE ADD LOGFILE THREAD 1 GROUP 12 '/dev/raw/raw6' size 10M ;
ALTER DATABASE ADD LOGFILE THREAD 2 GROUP 21 '/dev/raw/raw7' size 10M ;
ALTER DATABASE ADD LOGFILE THREAD 2 GROUP 22 '/dev/raw/raw8' size 10M ;
ALTER DATABASE ENABLE THREAD 2;

   (optionally use the 'REUSE' clause on the ADD LOGFILE statements above)

   then drop the dummy redo log groups that were created on normal files
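
   A sketch of that cleanup (group number 1 is purely illustrative; query
   V$LOGFILE to see which groups still sit on ordinary files, and switch
   logfiles first if one of them is still CURRENT):

         % sqlplus "/ as sysdba"
         SQL> SELECT group#, member FROM v$logfile;
         SQL> ALTER DATABASE DROP LOGFILE GROUP 1;   -- repeat per dummy group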

   6.1.5) Final checks
   Before opening in parallel mode (parallel_server=true) check that :

     +) WD, NM and CM are all running
     +) each instance has a private rollback segment created
     +) both redo log threads are active and on raw devices
     +) the whole database is defined on raw devices
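
   A minimal sketch of such checks from SQL (the WD/NM/CM processes can be
   verified with ps as in step 5.5; the views queried here are standard
   dictionary views):

         % sqlplus "/ as sysdba"
         SQL> SELECT segment_name, status FROM dba_rollback_segs;
         SQL> SELECT thread#, enabled, status FROM v$thread;
         SQL> SELECT name FROM v$datafile
           2  UNION ALL SELECT member FROM v$logfile
           3  UNION ALL SELECT name FROM v$controlfile;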



7) Startup both instances in parallel mode



 The following tasks are performed with the 'oracle' unix account.

    7.1) set up the unix environment so that ORACLE_HOME points to the     8.1.7.0.1 code path and you have two distinct ORACLE_SIDs :

Instance OPS_A                         Instance OPS_B
--------------                         --------------

ORACLE_SID=OPS_A                       ORACLE_SID=OPS_B
ORACLE_BASE=/u01/ora                   ORACLE_BASE=/u01/ora
ORACLE_HOME=/u01/ora/product/81701     ORACLE_HOME=/u01/ora/product/81701
ORACLE_TERM=vt100                      ORACLE_TERM=vt100
PATH=$PATH:$ORACLE_HOME/bin:           PATH=$PATH:$ORACLE_HOME/bin:
PATH=$PATH:$ORACLE_HOME/oracm/bin      PATH=$PATH:$ORACLE_HOME/oracm/bin
LD_LIBRARY_PATH=$ORACLE_HOME/lib:\     LD_LIBRARY_PATH=$ORACLE_HOME/lib:\
$ORACLE_HOME/oracm/lib                 $ORACLE_HOME/oracm/lib
export ORACLE_BASE                     export ORACLE_BASE
export ORACLE_HOME                     export ORACLE_HOME
export ORACLE_TERM                     export ORACLE_TERM
export ORACLE_SID                      export ORACLE_SID
export PATH                            export PATH
export LD_LIBRARY_PATH                 export LD_LIBRARY_PATH



    7.2) startup the instances
    The usual symbolic links to the init files should already be created in     $ORACLE_HOME/dbs; at this point make sure that 'parallel_server=true' is     set in the common init file.

<OPS_A>                                <OPS_B>
sqlplus / as sysdba                    sqlplus / as sysdba
startup                                startup
...                                    ...

    if all worked ok, both instances will start up; monitor     'alertOPS_A.log' and 'alertOPS_B.log' for errors

    the process view should show the following processes running :

         UID        PID  PPID   CMD
         ...
         oracle    1159     1   ora_pmon_OPS_A
         oracle    1161     1   ora_lmon_OPS_A
         oracle    1163     1   ora_lmd0_OPS_A
         oracle    1184     1   ora_dbw0_OPS_A
         oracle    1186     1   ora_lgwr_OPS_A
         oracle    1188     1   ora_ckpt_OPS_A
         oracle    1190     1   ora_smon_OPS_A
         oracle    1192     1   ora_reco_OPS_A
         oracle    1200     1   ora_lck0_OPS_A
         oracle    1203     1   ora_bsp0_OPS_A
         ...
         oracle    1231     1   ora_pmon_OPS_B
         oracle    1233     1   ora_lmon_OPS_B
         oracle    1235     1   ora_lmd0_OPS_B
         oracle    1237     1   ora_dbw0_OPS_B
         oracle    1239     1   ora_lgwr_OPS_B
         oracle    1241     1   ora_ckpt_OPS_B
         oracle    1243     1   ora_smon_OPS_B
         oracle    1245     1   ora_reco_OPS_B
         oracle    1253     1   ora_lck0_OPS_B
         oracle    1256     1   ora_bsp0_OPS_B
         ...


 <end>

8) Conclusions



 The solution presented here does not have many useful practical  applications, because we are using a single node and so lose most of the  key features of a Parallel Server solution, which depends on multiple  nodes to scale up application performance and to be fault tolerant by  eliminating single points of failure such as instances and nodes.

 This document is intended principally for didactic purposes, to be used by  people who are testing Oracle Parallel Server technology and need a quick  setup for simulation, test and development purposes.

"Marco Mapelli" <mapellim_at_usa.net> wrote in message news:53d76e1f.0110281212.bced917_at_posting.google.com...

> Hello All,
>
> I am trying with no success to start multiple instances
> using a single physical database.
> When I mount the second instance I get and error saying that
> the database cannot be mounted when in shared mode.
> Is there a particular option that let multiple mounting of
> an Oracle 8.1.6 DB (RH Linux 6.1).
>
> Thanks in advance.
>
> Marco Mapelli
Received on Sun Oct 28 2001 - 22:58:59 CST
