RAC Testing [message #220320]
Tue, 20 February 2007 04:44
Registered: July 2006
I have built a 2-node, 2-instance RAC.
My question: how do I test that the RAC is working (load balancing, configuration, dropping one node and watching the results, i.e. RAC testing)?
Are there SQL scripts or specific scenarios for this?
Re: RAC Testing [message #220329 is a reply to message #220320]
Tue, 20 February 2007 05:27
Registered: February 2007
Location: Dhaka, Bangladesh
Here is your answer:
Verifying the RAC Cluster / Database Configuration
After starting the storage and the servers, wait about 10 minutes and then run the following RAC verification checks on all nodes to make sure every server in the cluster is healthy. For this article, I will only be performing the checks from node1.
Log in as the "oracle" user (password: oracle). Open a terminal and execute the following commands:
Status of all instances and services
$ srvctl status database -d orcl
Instance orcl1 is running on node node1
Instance orcl2 is running on node node2
Status of a single instance
$ srvctl status instance -d orcl -i orcl2
Instance orcl2 is running on node node2
Status of a named service globally across the database
$ srvctl status service -d orcl -s orcltest
Service orcltest is running on instance(s) orcl2, orcl1
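To see load balancing in action (not just the service status), you can connect through the load-balanced service several times and check which instance each session lands on. This sketch assumes the orcltest TNS alias from this setup and the default system/manager credentials used later in this article:

$ for i in 1 2 3; do
    echo "select instance_name from v\$instance;" | sqlplus -s system/manager@orcltest
  done

With load balancing working, the INSTANCE_NAME returned should vary between orcl1 and orcl2 across connections.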
Status of node applications on a particular node
$ srvctl status nodeapps -n node1
VIP is running on node: node1
GSD is running on node: node1
Listener is running on node: node1
ONS daemon is running on node: node1
Status of an ASM instance
$ srvctl status asm -n node1
ASM instance +ASM1 is running on node node1.
List all configured databases
$ srvctl config database
Display configuration for our RAC database
$ srvctl config database -d orcl
node1 orcl1 /u01/app/oracle/product/10.1.0/db_1
node2 orcl2 /u01/app/oracle/product/10.1.0/db_1
Display all services for the specified cluster database
$ srvctl config service -d orcl
orcltest PREF: orcl2 orcl1 AVAIL:
Display the configuration for node applications - (VIP, GSD, ONS, Listener)
$ srvctl config nodeapps -n node1 -a -g -s -l
VIP exists.: /vip-linux1/192.168.101.5/255.255.255.0/eth0:eth1
ONS daemon exists.
Display the configuration for the ASM instance(s)
$ srvctl config asm -n node1
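Besides srvctl, the 10g Clusterware installation ships a utility that reports the state of every registered CRS resource (instances, services, VIPs, listeners, GSD, ONS) in one table, which is a convenient single check after the individual commands above:

$ crs_stat -t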
Starting & Stopping the Cluster
At this point, everything has been installed and configured for Oracle10g RAC. We have all of the required software installed and configured plus we have a fully functional clustered database.
With all of the work we have done up to this point, a popular question might be, "How do we start and stop services?". If you have followed the instructions in this article, all services should start automatically on each reboot of the Linux nodes. This would include CRS, all Oracle instances, Enterprise Manager Database Console, etc.
There are times, however, when you might want to shutdown a node and manually start it back up. Or you may find that Enterprise Manager is not running and need to start it. This section provides the commands (using SRVCTL) responsible for starting and stopping the cluster environment.
Ensure that you are logged in as the "oracle" UNIX user. I will be running all of the commands in this section from node1:
# su - oracle
Stopping the Oracle10g RAC Environment
The first step is to stop the Oracle instance. Once the instance (and related services) is down, then bring down the ASM instance. Finally, shutdown the node applications (Virtual IP, GSD, TNS Listener, and ONS).
$ export ORACLE_SID=orcl1
$ lsnrctl stop
$ emctl stop dbconsole
$ srvctl stop instance -d orcl -i orcl1
$ srvctl stop asm -n node1
$ srvctl stop nodeapps -n node1
Starting the Oracle10g RAC Environment
The first step is to start the node applications (Virtual IP, GSD, TNS Listener, and ONS). Once the node applications are successfully started, then bring up the ASM instance. Finally, bring up the Oracle instance (and related services) and the Enterprise Manager Database console.
$ export ORACLE_SID=orcl1
$ lsnrctl start
$ srvctl start nodeapps -n node1
$ srvctl start asm -n node1
$ srvctl start instance -d orcl -i orcl1
$ emctl start dbconsole
Start / Stop All Instances with SRVCTL
Start or stop all of the instances and their enabled services at once. I included this as a convenient way to bring the whole database down (or up) in one command.
$ srvctl start database -d orcl
$ srvctl stop database -d orcl
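To simulate losing a node (the "drop one node and see the results" test from the original question), stop one instance and then check where the service is running; with orcltest preferred on both instances, it should survive on the remaining one. Instance and service names here match this setup:

$ srvctl stop instance -d orcl -i orcl1
$ srvctl status service -d orcl -s orcltest
$ srvctl start instance -d orcl -i orcl1

While orcl1 is down, new connections through the orcltest service should all land on orcl2.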
All running instances in the cluster:
SELECT inst_id
     , instance_number inst_no
     , instance_name inst_name
     , parallel
     , status
     , database_status db_status
     , active_state state
     , host_name host
  FROM gv$instance
 ORDER BY inst_id;
INST_ID INST_NO INST_NAME PAR STATUS DB_STATUS STATE HOST
-------- -------- ---------- --- ------- ------------ --------- -------
1 1 orcl1 YES OPEN ACTIVE NORMAL linux1
2 2 orcl2 YES OPEN ACTIVE NORMAL linux2
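As a simple load-balance sanity check, you can also count how many sessions each instance is carrying; gv$session exposes the instance via inst_id:

SELECT inst_id, count(*) session_count
  FROM gv$session
 GROUP BY inst_id
 ORDER BY inst_id;

With load balancing working, sessions created through the load-balanced service should be spread across both instances rather than piling onto one.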
All data files which are in the disk group:
select name from v$datafile
union
select member from v$logfile
union
select name from v$controlfile
union
select name from v$tempfile;
15 rows selected.
All ASM disks that belong to the 'ORCL_DATA1' disk group:
SELECT path
  FROM v$asm_disk
 WHERE group_number IN (SELECT group_number
                          FROM v$asm_diskgroup
                         WHERE name = 'ORCL_DATA1');
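It is also worth confirming that the disk group has capacity left; v$asm_diskgroup reports totals in megabytes:

SELECT name, total_mb, free_mb
  FROM v$asm_diskgroup;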
Connecting to Clustered Database From an External Client
This is an optional step, but I like to perform it to verify that my TNS files are configured correctly. Use another machine (e.g. a Windows machine connected to the network) that has Oracle installed (either 9i or 10g) and add the TNS entries (in tnsnames.ora) from either of the nodes in the cluster that were created for the clustered database.
Then try to connect to the clustered database using all available service names defined in the tnsnames.ora file:
C:\> sqlplus system/manager@db102
C:\> sqlplus system/manager@db101
C:\> sqlplus system/manager@orcltest
C:\> sqlplus system/manager@db10
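For reference, a client-side tnsnames.ora entry for the load-balanced service would look roughly like the following. The vip-linux1 address comes from the nodeapps output earlier; vip-linux2 and the port are assumptions you should replace with your own VIP names:

ORCLTEST =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = vip-linux1)(PORT = 1521))
    (ADDRESS = (PROTOCOL = TCP)(HOST = vip-linux2)(PORT = 1521))
    (LOAD_BALANCE = yes)
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = orcltest)
      (FAILOVER_MODE =
        (TYPE = SELECT)
        (METHOD = BASIC)
        (RETRIES = 180)
        (DELAY = 5)
      )
    )
  )

The LOAD_BALANCE and FAILOVER_MODE settings are what let you observe connections spreading across instances and surviving the loss of a node.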