Re: Host Enterprise Manager
Date: Mon, 21 Apr 2008 12:18:33 +0100
2008/4/21, Mark Brinsmead <pythianbrinsmead_at_gmail.com>:
> I think I understand your situation. Before I start discussing OEM,
> though, I would like to note that your database configuration is quite
> risky: I suspect there is an excellent chance that you will corrupt your
> database if anybody were ever to accidentally varyon the "shared" volume
> groups on both nodes.
> Now, if I understand you correctly, you have set up an "active-passive"
> database cluster with 10gR2, and successfully demonstrated the ability to
> "fail-over" the database from one node to the other. Your "problem" is that
> no matter which node the database is currently running on, OEM always
> reports the "host" as the one on which the database was first started.
> Sadly, the last serious experience I have with OEM was on version
> 9iR2. Some of the details of OEM implementation have changed considerably
> between 9i and 10g, but the fundamentals remain pretty much the same.
> In 9i, when you start the OEM "agent" (dbsnmp on that release), it
> creates a file in $ORACLE_HOME/network/admin, containing a considerable
> amount of information, including all listeners configured on the server, all
> databases configured on the server, and -- no doubt -- the name of the
> server. I suspect that you will find that the 10g agent maintains a similar
> file with a similar name.
> Do note, however, that I wrote the word "problem" in quotes. I did so
> for a reason.
> Normally, when you set up an "active-passive" cluster of this type, you
> will want (need) to create a "virtual IP address" that moves from node to
> node along with the database. (In your case, you will need to do this
> manually, just as you manually varyon and varyoff the disk groups.) Your
> oracle listener should listen only on the virtual IP address, and your
> database clients should look for the database only at the virtual IP
> address.
> If you configure the virtual IPs correctly, and stop and start the
> listener and OEM agent, chances are the "problem" will go away. I expect
> that OEM will probably begin reporting the virtual IP as the "host"
> associated with your database.
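To illustrate the point about the listener, a listener.ora along these lines would bind the listener to the virtual address only. This is a minimal sketch; the hostname `db-vip` and port are placeholders, not taken from your setup:

```
LISTENER =
  (DESCRIPTION_LIST =
    (DESCRIPTION =
      (ADDRESS = (PROTOCOL = TCP)(HOST = db-vip)(PORT = 1521))
    )
  )
```

With the listener bound only to the VIP, whichever node currently holds the VIP also serves the database, and the agent should see a consistent "host".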
> In the meantime, you should really reconsider running an active-passive
> cluster without suitable clusterware. Unless AIX 5.2 has a mechanism that
> prevents you from simultaneously attaching the same volume group to two
> servers (I do not believe it does), you are really asking for trouble.
Thank you very much, Mark. Yes, it is dangerous if someone tries to varyon the VGs, but only I know the root password, and varyonvg fails if the VG is already varied on on the other server; it detects that. You have to pass a parameter to force the varyon. I'm not worried about that (only me and one other colleague have access).
Yes, you have understood my problem very well, thanks. We haven't configured a VIP address to move from one server to the other, because it isn't configured in HACMP (I don't know HACMP; it was set up by other people). We simulate a RAC in the tnsnames :-) Each server has its own IP, 10.XXX.XXX.105 and 10.XXX.XXX.106, and our tnsnames entry uses failover (not load balancing, which wouldn't work because only one node is active at a time). Because the entry uses failover, the client tries each address until it finds the active database, and it works. We pass both IPs in the tnsnames entry with failover, as I've said.
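For reference, a tnsnames.ora entry along the lines I describe, with client-side failover and no load balancing, might look like this. The service name `MYDB` is a placeholder, and the masked IPs are as in my message:

```
MYDB =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (FAILOVER = on)
      (LOAD_BALANCE = off)
      (ADDRESS = (PROTOCOL = TCP)(HOST = 10.XXX.XXX.105)(PORT = 1521))
      (ADDRESS = (PROTOCOL = TCP)(HOST = 10.XXX.XXX.106)(PORT = 1521))
    )
    (CONNECT_DATA = (SERVICE_NAME = MYDB))
  )
```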
We use OAS (Oracle Application Server), and in the connection string we pass that same string, with both IPs and failover. Yes, if we had a VIP, there wouldn't be any problem. Now, a couple more questions:
- If I want to set up a RAC now, is it possible?
- What happens with the jfs2 filesystems? They don't support concurrent access. If I use ASM, there wouldn't be a problem, would there? I've installed ASM before, but only with single instances.
Thank you very much again. I'm sorry for my bad English ;-) Thanks.
Cheers...
Received on Mon Apr 21 2008 - 06:18:33 CDT