Re: 10g RAC configuration on Solaris 10 [message #413121 is a reply to message #412907]
Tue, 14 July 2009 08:20
kudur_kv (Member), Messages: 75, Registered: February 2005
I think the reason is that the cluster services must be up and running before the ASM instance can come up. If you place the OCR and voting files in ASM, the cluster services might not come up correctly.
I hope I read that right!
Regards,
KV.
And thank you all for the links.
I hope this discussion gives me more tips when I do the implementation at the end of the month.
Thanks again.
Re: 10g RAC configuration on Solaris 10 [message #413230 is a reply to message #413123]
Tue, 14 July 2009 22:06
trantuananh24hg (Senior Member), Messages: 744, Registered: January 2007, Location: Ha Noi, Viet Nam
Quoting from the article
Build Your Own Oracle RAC Cluster on Solaris 10 and iSCSI
Quote: |
On Solaris, the UDP parameters are udp_recv_hiwat and udp_xmit_hiwat. The default values for these parameters on Solaris 10 are 57344 bytes. Oracle recommends that you set these parameters to at least 65536 bytes.
To see what these parameters are currently set to, enter the following commands:
# ndd /dev/udp udp_xmit_hiwat
57344
# ndd /dev/udp udp_recv_hiwat
57344
To set the values of these parameters to 65536 bytes in current memory, enter the following commands:
# ndd -set /dev/udp udp_xmit_hiwat 65536
# ndd -set /dev/udp udp_recv_hiwat 65536
Now we want these parameters to be set to these values when the system boots. The official Oracle documentation is incorrect when it states that when you set the values of these parameters in the /etc/system file, they are set on boot. These values in /etc/system will have no effect for Solaris 10. Please see Bug 5237047 for more information.
|
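Since /etc/system does not work for these UDP parameters on Solaris 10, one common workaround (not part of the quoted article, so only a sketch; the script name is arbitrary) is a small legacy init script that re-applies the ndd settings at every boot:
# cat > /etc/init.d/udp_hiwat <<'EOF'
#!/sbin/sh
# Re-apply the Oracle-recommended UDP buffer sizes at boot (Solaris 10)
ndd -set /dev/udp udp_xmit_hiwat 65536
ndd -set /dev/udp udp_recv_hiwat 65536
EOF
# chmod 744 /etc/init.d/udp_hiwat
# ln -s /etc/init.d/udp_hiwat /etc/rc2.d/S99udp_hiwat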
And, continuing from the same article:
Quote: |
Setting Kernel Parameters
In Solaris 10, there is a new way of setting kernel parameters. The old Solaris 8 and 9 way of setting kernel parameters by editing the /etc/system file is deprecated. A new method of setting kernel parameters exists in Solaris 10 using the resource control facility and this method does not require the system to be re-booted for the change to take effect.
Let's start by creating a new resource project.
# projadd oracle
Kernel parameters are merely attributes of a resource project so new kernel parameter values can be established by modifying the attributes of a project. First we need to make sure that the oracle user we created earlier knows to use the new oracle project for its resource limits. This is accomplished by editing the /etc/user_attr file to look like this:
#
# Copyright (c) 2003 by Sun Microsystems, Inc. All rights reserved.
#
# /etc/user_attr
#
# user attributes. see user_attr(4)
#
#pragma ident "@(#)user_attr 1.1 03/07/09 SMI"
#
adm::::profiles=Log Management
lp::::profiles=Printer Management
root::::auths=solaris.*,solaris.grant;profiles=Web Console Management,All;lock_after_retries=no
oracle::::project=oracle
..........................
|
Do I really not need to configure kernel parameters in /etc/system, such as the following?
set shmsys:shminfo_shmmax=12884901888
set shmsys:shminfo_shmmin=1
set shmsys:shminfo_shmseg=10
set shmsys:shminfo_shmmni=100
set semsys:seminfo_semmni=800
set semsys:seminfo_semmsl=256
set semsys:seminfo_semmns=204800
set noexec_user_stack=1
.....
I tried once before to set the kernel parameters using the projmod command, without putting the arguments above in /etc/system (also Solaris 10, but a single-instance database), and Oracle reported problems during the installation prerequisite checks. Was I wrong?
Thank you!
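For what it's worth, the usual Solaris 10 mapping of those /etc/system entries onto resource controls looks roughly like this (only a sketch; the values simply mirror the /etc/system lines above, the project name follows the quoted article, and noexec_user_stack still has to stay in /etc/system because it is not a resource control):
# projadd oracle
# projmod -sK "project.max-shm-memory=(priv,12884901888,deny)" oracle
# projmod -sK "project.max-shm-ids=(priv,100,deny)" oracle
# projmod -sK "project.max-sem-ids=(priv,800,deny)" oracle
# projmod -sK "process.max-sem-nsems=(priv,256,deny)" oracle
The prctl command can then confirm the active values for the project, for example:
# prctl -n project.max-shm-memory -i project oracle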
[Updated on: Tue, 14 July 2009 22:12]
Re: 10g RAC configuration on Solaris 10 [message #414096 is a reply to message #413339]
Mon, 20 July 2009 06:44
kudur_kv (Member), Messages: 75, Registered: February 2005
In continuation of the original topic, I am trying to configure RAC on Solaris. Due to constraints, I am working this out on VMware. As of now, I have created two nodes that are able to talk to each other. Before I can install Oracle on the individual nodes, I am getting the help of a system admin to install and configure Sun Cluster 3.2 on the VMware setup.
The question I have is: how can I add common storage in VMware that will be equally accessible to all nodes?
Any clues, please?
TIA
KV
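Not an answer from this thread, but the usual RAC-on-VMware write-ups handle this by creating one preallocated virtual disk and attaching it to every node on a shared SCSI bus with locking disabled. A rough sketch of the .vmx entries for each VM (the file name and the scsi1 bus number are only examples, and the exact options differ between VMware products):
scsi1.present = "TRUE"
scsi1.virtualDev = "lsilogic"
scsi1.sharedBus = "virtual"
scsi1:0.present = "TRUE"
scsi1:0.fileName = "/vmstore/shared/ocr_vote.vmdk"
scsi1:0.mode = "independent-persistent"
scsi1:0.deviceType = "disk"
disk.locking = "FALSE"
diskLib.dataCacheMaxSize = "0"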
Re: 10g RAC configuration on Solaris 10 [message #417307 is a reply to message #412907]
Fri, 07 August 2009 02:20
trantuananh24hg (Senior Member), Messages: 744, Registered: January 2007, Location: Ha Noi, Viet Nam
Dear all,
I've fixed the problem above. It was caused by a simple omission: I had not created a link to ssh in /usr/local/bin.
The fix, run on all nodes (2 or more), is:
# mkdir -p /usr/local/bin
# ln -s -f /usr/bin/ssh /usr/local/bin/ssh
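If I remember right, the installer and cluvfy also look for scp under /usr/local/bin, so the matching link may be worth creating as well (an assumption based on the same pattern, so verify it on your system):
# ln -s -f /usr/bin/scp /usr/local/bin/scp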
And now I have rechecked with the cluvfy utility:
login as: root
Using keyboard-interactive authentication.
Password:
Last login: Fri Aug 7 11:28:31 2009 from 10.252.20.110
Sun Microsystems Inc. SunOS 5.10 Generic January 2005
You have mail.
Sourcing //.profile-EIS.....
root@mbfdb01 # mkdir -p /usr/local/bin
root@mbfdb01 # ln -s -f /usr/bin/ssh /usr/local/bin/ssh
root@mbfdb01 # su - oracle
Sun Microsystems Inc. SunOS 5.10 Generic January 2005
$ export SRVM_TRACE=true
$ cd 10gR2_RAC/Cluster/cluvfy
$ ./runcluvfy.sh comp nodecon -n mbfdb01,mbfdb02 -verbose
Verifying node connectivity
Verification of node connectivity was unsuccessful on all the nodes.
$ ./runcluvfy.sh stage -pre crsinst -n mbfdb01,mbfdb02 -verbose
Performing pre-checks for cluster services setup
Checking node reachability...
Check: Node reachability from node "mbfdb01"
Destination Node Reachable?
------------------------------------ ------------------------
mbfdb01 yes
mbfdb02 yes
Result: Node reachability check passed from node "mbfdb01".
Checking user equivalence...
Check: User equivalence for user "oracle"
Node Name Comment
------------------------------------ ------------------------
mbfdb02 passed
mbfdb01 passed
Result: User equivalence check passed for user "oracle".
Pre-check for cluster services setup was unsuccessful on all the nodes.
$ ./runcluvfy.sh stage -post hwos -n mbfdb01,mbfdb02 -verbose
Performing post-checks for hardware and operating system setup
Checking node reachability...
Check: Node reachability from node "mbfdb01"
Destination Node Reachable?
------------------------------------ ------------------------
mbfdb01 yes
mbfdb02 yes
Result: Node reachability check passed from node "mbfdb01".
Checking user equivalence...
Check: User equivalence for user "oracle"
Node Name Comment
------------------------------------ ------------------------
mbfdb02 passed
mbfdb01 passed
Result: User equivalence check passed for user "oracle".
Post-check for hardware and operating system setup was unsuccessful on all the nodes.
$
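Since SRVM_TRACE is already exported, one simple way to see why the checks report "unsuccessful" is to capture the whole trace to a file and search it (the file name is arbitrary):
$ ./runcluvfy.sh comp nodecon -n mbfdb01,mbfdb02 -verbose 2>&1 | tee /tmp/cluvfy_nodecon.log
$ egrep -in "error|exception|fail" /tmp/cluvfy_nodecon.log | head -20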
Re: 10g RAC configuration on Solaris 10 [message #417323 is a reply to message #412907]
Fri, 07 August 2009 04:00
trantuananh24hg (Senior Member), Messages: 744, Registered: January 2007, Location: Ha Noi, Viet Nam
Dear all,
When I use the cluvfy utility to verify the configuration, I get the following:
$ ./runcluvfy.sh comp sys -n mbfdb01,mbfdb02 -p crs -verbose
Verifying system requirement
Verification of system requirement was unsuccessful on all the nodes.
$ ./runcluvfy.sh stage -post hwos -n mbfdb01,mbfdb02 -verbose
Performing post-checks for hardware and operating system setup
Checking node reachability...
Check: Node reachability from node "mbfdb01"
Destination Node Reachable?
------------------------------------ ------------------------
mbfdb01 yes
mbfdb02 yes
Result: Node reachability check passed from node "mbfdb01".
Checking user equivalence...
Check: User equivalence for user "oracle"
Node Name Comment
------------------------------------ ------------------------
mbfdb02 passed
mbfdb01 passed
Result: User equivalence check passed for user "oracle".
Post-check for hardware and operating system setup was unsuccessful on all the nodes.
$ ./runcluvfy.sh stage -pre crsinst -n mbfdb01,mbfdb02
Performing pre-checks for cluster services setup
Checking node reachability...
Node reachability check passed from node "mbfdb01".
Checking user equivalence...
User equivalence check passed for user "oracle".
Pre-check for cluster services setup was unsuccessful on all the nodes.
$ ./runcluvfy.sh comp nodecon -n mbfdb01,mbfdb02 -verbose
Verifying node connectivity
Verification of node connectivity was unsuccessful on all the nodes.
The verification was unsuccessful. The Oracle documentation describes what this check covers:
Quote: |
The CVU Oracle Clusterware pre-installation stage check verifies the following:
* Node Reachability: All of the specified nodes are reachable from the local node.
* User Equivalence: Required user equivalence exists on all of the specified nodes.
* Node Connectivity: Connectivity exists between all the specified nodes through the public and private network interconnections, and at least one subnet exists that connects each node and contains public network interfaces that are suitable for use as virtual IPs (VIPs).
* Administrative Privileges: The oracle user has proper administrative privileges to install Oracle Clusterware on the specified nodes.
* Shared Storage Accessibility: If specified, the OCR device and voting disk are shared across all the specified nodes.
* System Requirements: All system requirements are met for installing Oracle Clusterware software, including kernel version, kernel parameters, memory, swap directory space, temporary directory space, and required users and groups.
* Kernel Packages: All required operating system software packages are installed.
* Node Applications: The virtual IP (VIP), Oracle Notification Service (ONS) and Global Service Daemon (GSD) node applications are functioning on each node.
|
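Each item in that list can also be checked individually, which usually narrows down which one is actually failing; for example (flags as per the 10gR2 cluvfy help, so double-check them against your version):
$ ./runcluvfy.sh comp admprv -n mbfdb01,mbfdb02 -o crs_inst -verbose
$ ./runcluvfy.sh comp sys -n mbfdb01,mbfdb02 -p crs -verbose
$ ./runcluvfy.sh comp nodereach -n mbfdb01,mbfdb02 -verbose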
I'm not good with networking, so I'm posting here in the hope that you can help me.
Here are my network configuration and the /etc/hosts file I set up.
Node 1:
# ifconfig -a
lo0: flags=2001000849<UP,LOOPBACK,RUNNING,MULTICAST,IPv4,VIRTUAL> mtu 8232 index 1
inet 127.0.0.1 netmask ff000000
bge0: flags=209040843<UP,BROADCAST,RUNNING,MULTICAST,DEPRECATED,IPv4,NOFAILOVER,CoS> mtu 1500 index 2
inet 10.252.20.72 netmask ffffff00 broadcast 10.252.20.255
groupname ipmp-mbfdb01
ether 0:21:28:1a:66:5e
bge0:1: flags=201000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4,CoS> mtu 1500 index 2
inet 10.252.20.71 netmask ffffff00 broadcast 10.252.20.255
nxge0: flags=239040803<UP,BROADCAST,MULTICAST,DEPRECATED,IPv4,NOFAILOVER,FAILED,STANDBY,CoS> mtu 1500 index 3
inet 10.252.20.73 netmask ffffff00 broadcast 10.252.20.255
groupname ipmp-mbfdb01
ether 0:21:28:38:38:c6
sppp0: flags=10010008d1<UP,POINTOPOINT,RUNNING,NOARP,MULTICAST,IPv4,FIXEDMTU> mtu 1500 index 4
inet 10.252.20.2 --> 10.252.20.1 netmask ff000000
ether 0
# cat /etc/hosts
#
# Internet host table
#
::1 localhost
127.0.0.1 localhost
10.252.20.71 mbfdb01 mbfdb01.neo.com.vn loghost
10.252.20.76 mbfdb02 mbfdb02.neo.com.vn
10.252.20.72 mbfdb01-priv
10.252.20.73 mbfdb01-test-nxge0
10.252.20.74 mbfdb01-vip
10.252.20.79 mbfdb02-vip
10.252.20.77 mbfdb02-priv
#
I do not understand how the private and public networks should be set up. Can you help me?
Thank you very much!
Original summary of the network configuration:
Quote: |
No.  Item                                     Value
1    Server Type                              Sun SPARC Enterprise M4000
2    Host Name                                mbfdb01
3    Hostname IP Address                      10.252.20.71
4    Netmask                                  255.255.255.0
5    Default Gateway                          10.252.20.254
6    1st NIC                                  bge0
7    2nd NIC                                  nxge0
8    Test address 1 [IPMP]                    10.252.20.72
9    Test address 2 [IPMP]                    10.252.20.73
10   IPMP Group                               ipmp-mbfdb01
11   ORACLE VIP IP address                    10.252.20.74
12   Admin IP Address (XSCF) (port0/port1)    10.252.20.75
|
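For comparison, a typical 10g RAC layout keeps the private interconnect on its own dedicated interface and its own non-routed subnet, separate from the public LAN and from the IPMP test addresses. A sketch of what /etc/hosts usually ends up looking like (the 192.168.2.x addresses are purely illustrative, not values from this thread):
# Public network
10.252.20.71   mbfdb01   mbfdb01.neo.com.vn
10.252.20.76   mbfdb02   mbfdb02.neo.com.vn
# Private interconnect (dedicated NIC, separate subnet)
192.168.2.71   mbfdb01-priv
192.168.2.76   mbfdb02-priv
# Virtual IPs (same subnet as the public network, not plumbed before installation)
10.252.20.74   mbfdb01-vip
10.252.20.79   mbfdb02-vip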
[Updated on: Fri, 07 August 2009 04:11]
Re: 10g RAC configuration on Solaris 10 [message #417497 is a reply to message #412907]
Sun, 09 August 2009 22:39
trantuananh24hg (Senior Member), Messages: 744, Registered: January 2007, Location: Ha Noi, Viet Nam
Dear Gent,
I'm not at the RAC machine right now, but I did execute this command:
./runcluvfy.sh stage -pre crsinst -n mbfdb01,mbfdb02 -verbose
It reported that node verification was unsuccessful.
I have very little experience with this kind of situation. Could you describe, or guess at, what might be causing the problem?
Thank you very much!
Re: 10g RAC configuration on Solaris 10 [message #417668 is a reply to message #417640]
Mon, 10 August 2009 20:35
trantuananh24hg (Senior Member), Messages: 744, Registered: January 2007, Location: Ha Noi, Viet Nam
gentlebabu wrote on Tue, 11 August 2009 00:42 |
Sorry to say it, but if you do not post the output of my command above, we can't help you.
This error/warning is very easy to fix using my command, but I don't understand why you are not posting the output. I hope you know the OraFAQ rules.
Thanks
|
Dear Gent!
As I said, I am not at the RAC machines; I am at another location, so I cannot connect to the servers through the LAN/VPN.
Of course, I will post the result of your command above as soon as I get back. I hope you can help me resolve this problem.
Thank you!
Re: 10g RAC configuration on Solaris 10 [message #418078 is a reply to message #417429]
Wed, 12 August 2009 21:00
trantuananh24hg (Senior Member), Messages: 744, Registered: January 2007, Location: Ha Noi, Viet Nam
gentlebabu wrote on Sat, 08 August 2009 19:01 |
Try
./runcluvfy.sh stage -pre crsinst -n mbfdb01,mbfdb02 -verbose
|
Dear Gent,
I've executed the command, as follows:
root@mbfdb01 # su - oracle
Sun Microsystems Inc. SunOS 5.10 Generic January 2005
$ pwd
/oracle/app
$ cd 10gR2_RAC/Cluster/cluvfy
$ ./runcluvfy.sh stage -pre crsinst -n mbfdb01,mbfdb02 -verbose
Performing pre-checks for cluster services setup
Checking node reachability...
Check: Node reachability from node "mbfdb01"
Destination Node Reachable?
------------------------------------ ------------------------
mbfdb01 yes
mbfdb02 yes
Result: Node reachability check passed from node "mbfdb01".
Checking user equivalence...
Check: User equivalence for user "oracle"
Node Name Comment
------------------------------------ ------------------------
mbfdb02 passed
mbfdb01 passed
Result: User equivalence check passed for user "oracle".
Pre-check for cluster services setup was unsuccessful on all the nodes.
$
I hope you can help me soon!
Thank you!
Re: 10g RAC configuration on Solaris 10 [message #418124 is a reply to message #412907]
Thu, 13 August 2009 01:45
trantuananh24hg (Senior Member), Messages: 744, Registered: January 2007, Location: Ha Noi, Viet Nam
Here is a summary of what I have done:
SSH & oracle user (Passed)
Oracle user (2 nodes)
$ id -a
uid=175(oracle) gid=116(oinstall) groups=116(oinstall),115(dba)
$
Host file (2 nodes)
$ cat /etc/hosts
#
# Internet host table
#
::1 localhost
# Public IPs
127.0.0.1 localhost
10.252.20.71 mbfdb01 mbfdb01.neo.com.vn loghost
10.252.20.76 mbfdb02 mbfdb02.neo.com.vn
# Private IPs
10.252.20.73 mbfdb01-priv
10.252.20.78 mbfdb02-priv
# VIPs
10.252.20.74 mbfdb01-vip
10.252.20.79 mbfdb02-vip
# Test IP-nxge0
10.252.20.72 mbfdb01-test
$ exit
root@mbfdb01 # ifconfig -a
lo0: flags=2001000849<UP,LOOPBACK,RUNNING,MULTICAST,IPv4,VIRTUAL> mtu 8232 index 1
inet 127.0.0.1 netmask ff000000
bge0: flags=201000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4,CoS> mtu 1500 index 2
inet 10.252.20.73 netmask ffffff00 broadcast 10.252.20.255
ether 0:21:28:1a:66:5e
bge0:1: flags=201000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4,CoS> mtu 1500 index 2
inet 10.252.20.71 netmask ffffff00 broadcast 10.252.20.255
nxge0: flags=201000802<BROADCAST,MULTICAST,IPv4,CoS> mtu 1500 index 3
inet 0.0.0.0 netmask 0
ether 0:21:28:38:38:c6
sppp0: flags=10010008d1<UP,POINTOPOINT,RUNNING,NOARP,MULTICAST,IPv4,FIXEDMTU> mtu 1500 index 4
inet 10.252.20.2 --> 10.252.20.1 netmask ff000000
ether 0
root@mbfdb01 #
SSH connectivity
At Node 1:
$ ssh oracle@mbfdb02
Last login: Thu Aug 13 13:16:22 2009 from mbfdb01
Sun Microsystems Inc. SunOS 5.10 Generic January 2005
$ exit
Connection to mbfdb02 closed.
$ ssh oracle@mbfdb02.neo.com.vn
Last login: Thu Aug 13 13:19:24 2009 from mbfdb01
Sun Microsystems Inc. SunOS 5.10 Generic January 2005
$ exit
Connection to mbfdb02.neo.com.vn closed.
$ ssh oracle@mbfdb02-priv
Last login: Thu Aug 13 13:19:38 2009 from mbfdb01
Sun Microsystems Inc. SunOS 5.10 Generic January 2005
$ exit
Connection to mbfdb02-priv closed.
$ ssh mbfdb02 "date;hostname"
Thu Aug 13 13:20:29 ICT 2009
mbfdb02
$ ssh mbfdb02.neo.com.vn "date;hostname"
Thu Aug 13 13:20:44 ICT 2009
mbfdb02
$ ssh mbfdb02-priv "date;hostname"
Thu Aug 13 13:20:55 ICT 2009
mbfdb02
$
At Node 2:
$ ssh oracle@mbfdb01
Last login: Thu Aug 13 13:16:15 2009 from mbfdb01
Sun Microsystems Inc. SunOS 5.10 Generic January 2005
$ exit
Connection to mbfdb01 closed.
$ ssh oracle@mbfdb01.neo.com.vn
Last login: Thu Aug 13 13:21:49 2009 from mbfdb02
Sun Microsystems Inc. SunOS 5.10 Generic January 2005
$ exit
Connection to mbfdb01.neo.com.vn closed.
$ ssh oracle@mbfdb01-priv
Last login: Thu Aug 13 13:21:57 2009 from mbfdb02
Sun Microsystems Inc. SunOS 5.10 Generic January 2005
$ exit
Connection to mbfdb01-priv closed.
$ ssh mbfdb01 "date;hostname"
Thu Aug 13 13:22:13 ICT 2009
mbfdb01
$ ssh mbfdb01.neo.com.vn "date;hostname"
Thu Aug 13 13:22:26 ICT 2009
mbfdb01
$ exit
Connection to mbfdb01-priv closed.
$ ssh mbfdb01-priv "date;hostname"
Thu Aug 13 13:22:43 ICT 2009
mbfdb01
$
Cluster verification: cluvfy.(Failed)
Check hardware OS:
$ ./runcluvfy.sh stage -post hwos -n mbfdb01,mbfdb02 -verbose
Performing post-checks for hardware and operating system setup
Checking node reachability...
Check: Node reachability from node "mbfdb01"
Destination Node Reachable?
------------------------------------ ------------------------
mbfdb01 yes
mbfdb02 yes
Result: Node reachability check passed from node "mbfdb01".
Checking user equivalence...
Check: User equivalence for user "oracle"
Node Name Comment
------------------------------------ ------------------------
mbfdb02 passed
mbfdb01 passed
Result: User equivalence check passed for user "oracle".
Post-check for hardware and operating system setup was unsuccessful on all the nodes.
Check shared storage (2 nodes)
$ ./runcluvfy.sh comp ssa -n mbfdb01,mbfdb02
Verifying shared storage accessibility
Verification of shared storage accessibility was unsuccessful on all the nodes.
$ ./runcluvfy.sh comp ssa -n mbfdb01,mbfdb02 -s /dev/dsk/c3t600A0B80002AFEF600001C4A4A497F8Ad0s0 -verbose
Verifying shared storage accessibility
Verification of shared storage accessibility was unsuccessful on all the nodes
$ ./runcluvfy.sh comp ssa -n mbfdb01,mbfdb02 -s /dev/dsk/c3t600A0B80002AFEF600001C4B4A498CECd0s0 -verbose
Verifying shared storage accessibility
Verification of shared storage accessibility was unsuccessful on all the nodes.
$
Check node connectivity (2 nodes)
$ ./runcluvfy.sh comp nodecon -n mbfdb01,mbfdb02 -verbose
Verifying node connectivity
Verification of node connectivity was unsuccessful on all the nodes.
$
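One variation that sometimes gives more detail is to name the interfaces explicitly with -i (interface names taken from the ifconfig output above; check the cluvfy help for the exact syntax on your release):
$ ./runcluvfy.sh comp nodecon -n mbfdb01,mbfdb02 -i bge0,nxge0 -verbose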
Check pre-crs_installation (2 nodes)
$ ./runcluvfy.sh stage -pre crsinst -n mbfdb01,mbfdb02 -verbose
Performing pre-checks for cluster services setup
Checking node reachability...
Check: Node reachability from node "mbfdb01"
Destination Node Reachable?
------------------------------------ ------------------------
mbfdb01 yes
mbfdb02 yes
Result: Node reachability check passed from node "mbfdb01".
Checking user equivalence...
Check: User equivalence for user "oracle"
Node Name Comment
------------------------------------ ------------------------
mbfdb02 passed
mbfdb01 passed
Result: User equivalence check passed for user "oracle".
Pre-check for cluster services setup was unsuccessful on all the nodes.
Internal disk & shared storage (RAID 1) information (2 nodes)
At Node 1
$ hostname
mbfdb01
$ df -k
Filesystem kbytes used avail capacity Mounted on
/dev/md/dsk/d100 20655025 6770856 13677619 34% /
/devices 0 0 0 0% /devices
ctfs 0 0 0 0% /system/contract
proc 0 0 0 0% /proc
mnttab 0 0 0 0% /etc/mnttab
swap 94219368 1688 94217680 1% /etc/svc/volatile
objfs 0 0 0 0% /system/object
sharefs 0 0 0 0% /etc/dfs/sharetab
fd 0 0 0 0% /dev/fd
swap 94217768 88 94217680 1% /tmp
swap 94217752 72 94217680 1% /var/run
/dev/dsk/c3t600A0B80002AFF0200001A474A498CE9d0s0
495674704 65553 490652404 1% /mbfdata
/dev/dsk/c3t600A0B80002AFEF600001C494A497E1Ad0s0
82611933 65553 81720261 1% /mbfbacku
/dev/dsk/c3t600A0B80002AFEF600001C4A4A497F8Ad0s0
402735694 65553 398642785 1% /mbfcrs
/dev/md/dsk/d130 54298766 955157 52800622 2% /oracle
/dev/dsk/c3t600A0B80002AFEF600001C4B4A498CECd0s0
3077135 3089 3012504 1% /ocr_voti
$
At Node 2:
$ hostname
mbfdb02
$ df -k
Filesystem kbytes used avail capacity Mounted on
/dev/md/dsk/d200 20655025 4699124 15749351 23% /
/devices 0 0 0 0% /devices
ctfs 0 0 0 0% /system/contract
proc 0 0 0 0% /proc
mnttab 0 0 0 0% /etc/mnttab
swap 95027496 1648 95025848 1% /etc/svc/volatile
objfs 0 0 0 0% /system/object
sharefs 0 0 0 0% /etc/dfs/sharetab
fd 0 0 0 0% /dev/fd
swap 95025888 40 95025848 1% /tmp
swap 95025920 72 95025848 1% /var/run
/dev/dsk/c3t600A0B80002AFEF600001C494A497E1Ad0s0
82611933 65553 81720261 1% /mbfbacku
/dev/dsk/c3t600A0B80002AFEF600001C4A4A497F8Ad0s0
402735694 65553 398642785 1% /mbfcrs
/dev/dsk/c3t600A0B80002AFEF600001C4B4A498CECd0s0
3077135 3089 3012504 1% /ocr_voti
/dev/dsk/c3t600A0B80002AFF0200001A474A498CE9d0s0
495674704 65553 490652404 1% /mbfdata
/dev/md/dsk/d230 54298766 4833208 48922571 9% /oracle
/vol/dev/dsk/c0t3d0/sol_10_509_sparc
2621420 2621420 0 100% /cdrom/sol_10_509_sparc
$
Iostat (2 nodes)
At Node 1
$ iostat -En /dev/dsk/c3t600A0B80002AFEF600001C4B4A498CECd0s0
$ iostat -En /dev/dsk/c3t600A0B80002AFEF600001C4A4A497F8Ad0s0
$ iostat -En /dev/dsk/c3t600A0B80002AFEF600001C494A497E1Ad0s0
$ iostat -En /dev/dsk/c3t600A0B80002AFF0200001A474A498CE9d0s0
At Node 2
$ iostat -En /dev/dsk/c3t600A0B80002AFEF600001C4B4A498CECd0s0
$ iostat -En /dev/dsk/c3t600A0B80002AFEF600001C4A4A497F8Ad0s0
$ iostat -En /dev/dsk/c3t600A0B80002AFEF600001C494A497E1Ad0s0
$ iostat -En /dev/dsk/c3t600A0B80002AFF0200001A474A498CE9d0s0
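One quick sanity check for the shared LUNs, given that the ssa check keeps failing, is to compare the serial numbers that iostat -En reports on both nodes; if they match, both nodes really are seeing the same device. A sketch (drop the /dev/dsk prefix and the sN slice if iostat prints nothing for the full path):
$ iostat -En /dev/dsk/c3t600A0B80002AFEF600001C4B4A498CECd0s0 | grep -i "serial"
$ iostat -En /dev/dsk/c3t600A0B80002AFF0200001A474A498CE9d0s0 | grep -i "serial"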
Where did I go wrong? Could you clarify this for me?
Thank you very much!
[Updated on: Thu, 13 August 2009 01:49]