RAC Testing [message #220320] Tue, 20 February 2007 04:44
alzuma
Messages: 46
Registered: July 2006
Location: CA
Member
Hi,

I have set up a 2-node, 2-instance RAC.
My question: how do I test that the RAC is working (load balancing, configuration, dropping one node and observing the results), i.e. RAC testing?
Are there SQL scripts or particular test scenarios for this?

Thanks
Re: RAC Testing [message #220329 is a reply to message #220320] Tue, 20 February 2007 05:27
tanmoy7
Messages: 20
Registered: February 2007
Location: Dhaka, Bangladesh
Junior Member

Here is your answer:



Verifying the RAC Cluster / Database Configuration

After starting the storage and servers, wait about 10 minutes, then run the following RAC verification checks on all nodes to confirm that the servers are healthy members of the cluster. For this article, I will only be performing the checks from node1.

Log in as the "oracle" user (password: oracle). Open a terminal and execute the following commands:

Status of all instances and services
$ srvctl status database -d orcl
Instance orcl1 is running on node node1
Instance orcl2 is running on node node2

Status of a single instance
$ srvctl status instance -d orcl -i orcl2
Instance orcl2 is running on node node2

Status of a named service globally across the database
$ srvctl status service -d orcl -s orcltest
Service orcltest is running on instance(s) orcl2, orcl1
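
Simulating the loss of a node (per the original question): abort one instance and re-check where the service runs. This is a test sketch assuming the same database, instance, and service names as above; the -o stop option is standard srvctl syntax, but verify it on your release. With orcltest preferring both instances, the expected result is that it keeps running on the survivor:
$ srvctl stop instance -d orcl -i orcl1 -o abort
$ srvctl status service -d orcl -s orcltest
Service orcltest is running on instance(s) orcl2

$ srvctl start instance -d orcl -i orcl1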

Status of node applications on a particular node
$ srvctl status nodeapps -n node1
VIP is running on node: node1
GSD is running on node: node1
Listener is running on node: node1
ONS daemon is running on node: node1

Status of an ASM instance
$ srvctl status asm -n node1
ASM instance +ASM1 is running on node node1.

List all configured databases
$ srvctl config database
orcl

Display configuration for our RAC database
$ srvctl config database -d orcl
node1 orcl1 /u01/app/oracle/product/10.1.0/db_1
node2 orcl2 /u01/app/oracle/product/10.1.0/db_1

Display all services for the specified cluster database
$ srvctl config service -d orcl
orcltest PREF: orcl2 orcl1 AVAIL:

Display the configuration for node applications - (VIP, GSD, ONS, Listener)
$ srvctl config nodeapps -n node1 -a -g -s -l
VIP exists.: /vip-linux1/192.168.101.5/255.255.255.0/eth0:eth1
GSD exists.
ONS daemon exists.
Listener exists.

Display the configuration for the ASM instance(s)
$ srvctl config asm -n node1
+ASM1 /u01/app/oracle/product/10.1.0/db_1
________________________________________
Starting & Stopping the Cluster
At this point, everything has been installed and configured for Oracle10g RAC, and we have a fully functional clustered database.
With all of the work done up to this point, a common question is: "How do we start and stop services?" If you have followed the instructions in this article, all services should start automatically on each reboot of the Linux nodes. This includes CRS, all Oracle instances, the Enterprise Manager Database Console, and so on.
There are times, however, when you might want to shut down a node and manually start it back up. Or you may find that Enterprise Manager is not running and need to start it. This section provides the commands (using SRVCTL) for starting and stopping the cluster environment.
Ensure that you are logged in as the "oracle" UNIX user. I will be running all of the commands in this section from node1:
# su - oracle

$ hostname
node1
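
Before stopping anything, it can be useful to confirm that Clusterware itself is healthy. A quick check, assuming an Oracle 10g CRS install (run from the CRS home's bin directory if it is not on your PATH); the output shown is what a healthy 10g stack typically reports:
$ crsctl check crs
CSS appears healthy
CRS appears healthy
EVM appears healthy

Running crs_stat -t also gives a one-screen table of every registered resource and its current state.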

Stopping the Oracle10g RAC Environment
The first step is to stop the Oracle instance. Once the instance (and related services) is down, bring down the ASM instance. Finally, shut down the node applications (Virtual IP, GSD, TNS Listener, and ONS).
$ export ORACLE_SID=orcl1
$ lsnrctl stop
$ emctl stop dbconsole
$ srvctl stop instance -d orcl -i orcl1
$ srvctl stop asm -n node1
$ srvctl stop nodeapps -n node1
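
A quick way to confirm the instance really is down (same names as above; this is the standard srvctl status output for a stopped instance):
$ srvctl status instance -d orcl -i orcl1
Instance orcl1 is not running on node node1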

Starting the Oracle10g RAC Environment
The first step is to start the node applications (Virtual IP, GSD, TNS Listener, and ONS). Once the node applications are successfully started, bring up the ASM instance. Finally, bring up the Oracle instance (and related services) and the Enterprise Manager Database Console.
$ export ORACLE_SID=orcl1
$ lsnrctl start
$ srvctl start nodeapps -n node1
$ srvctl start asm -n node1
$ srvctl start instance -d orcl -i orcl1
$ emctl start dbconsole

Start / Stop All Instances with SRVCTL
Start or stop all of the instances and their enabled services. I included this as a convenient way to bring all of the instances up or down at once!
$ srvctl start database -d orcl

$ srvctl stop database -d orcl
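
Note that srvctl stop database stops the instances and their services on every node, but it leaves ASM and the node applications running; use the per-node stop commands above for a complete shutdown. A stop option can also be passed through, assuming your release supports it (standard srvctl syntax):
$ srvctl stop database -d orcl -o immediate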

-------------------------------------------------

All running instances in the cluster
SELECT
    inst_id
  , instance_number inst_no
  , instance_name inst_name
  , parallel
  , status
  , database_status db_status
  , active_state state
  , host_name host
FROM gv$instance
ORDER BY inst_id;

 INST_ID  INST_NO INST_NAME  PAR STATUS  DB_STATUS    STATE     HOST
-------- -------- ---------- --- ------- ------------ --------- -------
       1        1 orcl1      YES OPEN    ACTIVE       NORMAL    node1
       2        2 orcl2      YES OPEN    ACTIVE       NORMAL    node2
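
To test load balancing (one of the original questions), open several SQL*Plus sessions through the load-balanced service name and then count user sessions per instance. A minimal sketch using the standard gv$session view; it assumes your test sessions connect through a service such as orcltest:

SELECT
    inst_id
  , COUNT(*) sessions
FROM gv$session
WHERE username IS NOT NULL
GROUP BY inst_id
ORDER BY inst_id;

If connection load balancing is working, the session counts should come out roughly even across the two instances.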

All database files in the disk group
select name from v$datafile
union
select member from v$logfile
union
select name from v$controlfile
union
select name from v$tempfile;

NAME
-------------------------------------------
+ORCL_DATA1/orcl/controlfile/current.256.1
+ORCL_DATA1/orcl/datafile/indx.269.1
+ORCL_DATA1/orcl/datafile/sysaux.261.1
+ORCL_DATA1/orcl/datafile/system.259.1
+ORCL_DATA1/orcl/datafile/undotbs1.260.1
+ORCL_DATA1/orcl/datafile/undotbs1.270.1
+ORCL_DATA1/orcl/datafile/undotbs2.263.1
+ORCL_DATA1/orcl/datafile/undotbs2.271.1
+ORCL_DATA1/orcl/datafile/users.264.1
+ORCL_DATA1/orcl/datafile/users.268.1
+ORCL_DATA1/orcl/onlinelog/group_1.257.1
+ORCL_DATA1/orcl/onlinelog/group_2.258.1
+ORCL_DATA1/orcl/onlinelog/group_3.265.1
+ORCL_DATA1/orcl/onlinelog/group_4.266.1
+ORCL_DATA1/orcl/tempfile/temp.262.1

15 rows selected.

All ASM disks that belong to the 'ORCL_DATA1' disk group
SELECT path
FROM v$asm_disk
WHERE group_number IN (SELECT group_number
                       FROM v$asm_diskgroup
                       WHERE name = 'ORCL_DATA1');

PATH
----------------------------------
ORCL:VOL1
ORCL:VOL2
ORCL:VOL3
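
While you are in the ASM views, it is also worth checking disk group capacity. A small addition using standard v$asm_diskgroup columns:

SELECT name, state, total_mb, free_mb
FROM v$asm_diskgroup;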


-----------------------------------------------


Connecting to Clustered Database From an External Client
This is an optional step, but I like to perform it in order to verify that my TNS files are configured correctly. Use another machine (e.g. a Windows machine connected to the network) that has Oracle installed (either 9i or 10g) and copy the TNS entries created for the clustered database from the tnsnames.ora on either of the cluster nodes.
Then try to connect to the clustered database using all available service names defined in the tnsnames.ora file:
C:\> sqlplus system/manager@db102
C:\> sqlplus system/manager@db101
C:\> sqlplus system/manager@orcltest
C:\> sqlplus system/manager@db10
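
For failover testing in particular, at least one of those entries should enable load balancing and Transparent Application Failover (TAF). Here is a sketch of what such an entry might look like; the VIP hostnames and the service name are assumptions, so adjust them to your environment:

ORCLTEST =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = vip-node1)(PORT = 1521))
    (ADDRESS = (PROTOCOL = TCP)(HOST = vip-node2)(PORT = 1521))
    (LOAD_BALANCE = yes)
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = orcltest)
      (FAILOVER_MODE =
        (TYPE = SELECT)
        (METHOD = BASIC)
        (RETRIES = 180)
        (DELAY = 5)
      )
    )
  )

With a session connected through this entry, abort the instance it landed on (srvctl stop instance ... -o abort) and re-run a query; with TAF working, the session should transparently reconnect to the surviving instance.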

Re: RAC Testing [message #220433 is a reply to message #220329] Tue, 20 February 2007 12:38
marcinmigdal
Messages: 10
Registered: October 2006
Location: Poland
Junior Member

Hi Alzuma,
Could you tell me your RAC configuration? I mean the hardware configuration for your RAC and the operating system.
Best regards
Martin
kisonar@wp.pl