Re: Host Enterprise Manager

From: Mark Brinsmead <pythianbrinsmead_at_gmail.com>
Date: Sun, 20 Apr 2008 21:02:56 -0600
Message-ID: <cf3341710804202002m3fe28743ua8664ae4277fd414@mail.gmail.com>


Oliver,

   I think I understand your situation. Before I start discussing OEM, though, I would like to note that your database configuration is quite dangerous.

   There is an excellent chance that you will corrupt your database if anybody ever accidentally runs varyonvg on the "shared" volume groups on both nodes at once.

   Now, if I understand you correctly, you have set up an "active-passive" database cluster with 10gR2, and successfully demonstrated the ability to "fail-over" the database from one node to the other. Your "problem" is that no matter which node the database is currently running on, OEM always reports the "host" as the one on which the database was first started.

   Sadly, the last serious experience I have with OEM was on version 9iR2. While some details of the OEM implementation have changed considerably between 9i and 10g, the fundamentals remain pretty much the same.

   In 9i, when you start the OEM "agent" (dbsnmp on that release), it creates a file in $ORACLE_HOME/network/admin containing a considerable amount of information, including all listeners configured on the server, all databases configured on the server, and -- no doubt -- the name of the server. I suspect that you will find that the 10g agent maintains a similar file with a similar name.
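   If you want to see where the agent has recorded the host, a quick sweep of the likely configuration files might look like the sketch below. The file names (snmp_ro.ora for the 9i dbsnmp agent, targets.xml for the 10g agent) and the ORACLE_HOME path are from memory and may differ on your release, so treat them as assumptions:

```shell
# Hypothetical check of where the agent records the host name. The file
# names below (snmp_ro.ora for the 9i dbsnmp agent, targets.xml for the
# 10g agent) are assumptions and may differ on your release.
ORACLE_HOME=${ORACLE_HOME:-/u01/app/oracle/product/10.2.0/db_1}
for f in "$ORACLE_HOME"/network/admin/snmp_ro.ora \
         "$ORACLE_HOME"/sysman/emd/targets.xml; do
    if [ -f "$f" ]; then
        echo "== $f =="
        grep -i host "$f"       # look for hard-coded host names
    fi
done
```

   Any hard-coded host name you find there is a candidate for why OEM keeps reporting the wrong node.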

   Do note, however, that I wrote the word "problem" in quotes. I did so for a reason.

   Normally, when you set up an "active-passive" cluster of this type, you will want (need) to create a "virtual IP address" that moves from node to node along with the database. (In your case, you will need to do this manually, just as you manually varyon and varyoff the disk groups.) Your Oracle listener should listen only on the virtual IP address, and your database clients should look for the database only at the virtual IP address.
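   A minimal sketch of that manual fail-over might look like the following. All names here (the volume group, interface, VIP, and mount point) are placeholders, not your actual configuration, and the script defaults to only printing the commands:

```shell
# Minimal sketch of a manual fail-over; all names are assumptions.
# DRYRUN=1 (the default here) only prints the commands; set DRYRUN=0
# on a real AIX node, after shutting the database instance down cleanly.
DRYRUN=${DRYRUN:-1}
run() { if [ "$DRYRUN" = 1 ]; then echo "would run: $*"; else "$@"; fi; }

VG=oradatavg            # assumed shared volume group
VIP=192.168.1.50        # assumed virtual IP that follows the database
NIC=en0                 # assumed network interface

# --- on the node giving up the database ---
run lsnrctl stop
run ifconfig "$NIC" delete "$VIP"   # drop the VIP alias
run umount /u02/oradata
run varyoffvg "$VG"

# --- on the node taking over ---
run varyonvg "$VG"
run mount /u02/oradata
run ifconfig "$NIC" alias "$VIP"    # bring the VIP up here
run lsnrctl start
```

   The ordering matters: the VIP and volume groups must be fully released on one node before they are acquired on the other, or you are back to the double-varyon hazard described above.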

   If you configure the virtual IPs correctly, and stop and start the listener and OEM agent, chances are the "problem" will go away. I expect that OEM will probably begin reporting the virtual IP as the "host" associated with your database.
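   For example, a listener.ora bound only to the virtual address might look something like this (the host name db-vip.example.com and the port are assumptions; substitute your own VIP):

```
LISTENER =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = db-vip.example.com)(PORT = 1521))
  )
```

   After moving the VIP and restarting the listener, bouncing DB Console ("emctl stop dbconsole" then "emctl start dbconsole") should force the agent to re-read its configuration.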

   In the meantime, you should really reconsider running an active-passive cluster without suitable clusterware. Unless AIX 5.2 has a mechanism that prevents you from simultaneously attaching the same volume group to two servers (I do not believe it does), you are really asking for trouble.

On Sat, Apr 19, 2008 at 5:35 AM, Oliver aka v1k1ng0 <ofabelo_at_gmail.com> wrote:

> Hello,
>
> 2008/4/18, Oliver aka v1k1ng0 <ofabelo_at_gmail.com>:
> >
> > Hello,
> > I have one node with a single-instance Oracle DB on AIX 5L (64-bit). When
> > I created the DB, I specified that it should use Enterprise Manager. The DB
> > was created correctly. When I try to access EM, it displays fine; I enter the
> > user and password, and it shows an error because the host is wrong. Where
> > (in what file) can I configure the correct host?
> > When I run "hostname", the output is correct, and the /etc/hosts file is
> > correct too. Thanks in advance.
> >
>
> I'm sorry for my bad English. I have more time now to give you more
> details.
> There are two nodes, each with its own single Oracle 10gR2 instance (not RAC,
> not Data Guard). The servers run AIX 5L, each with 2 physical disks and many
> SAN disks. Both nodes see the hdisks of the SAN disks, and each node sees its
> own 2 physical disks. I have /u01 in an LV on a physical disk on each node,
> with the Oracle software installed (/u01/app/oracle as ORACLE_BASE, and
> $ORACLE_BASE/product/10.2.0/db_1 as ORACLE_HOME).
> I have /u02/oradata, /u03/oradata, and so on, on SAN disks. They have JFS2
> filesystems; I know JFS2 does not support concurrent access, which is why one
> node will be active and the other standby (all done manually: varyoff/varyon
> the VGs, umount/mount the filesystems).
> Each node has its own listener, DB instance, and EM repository. Both instances
> work well separately. The directory
> /u01/app/oracle/product/10.2.0/db_1/dbs (where the spfile lives) is a mount
> point on a SAN disk, so both nodes have that mount point and both instances
> share the same spfile. The control
> files, redo logs, and the rest of the datafiles are shared on SAN disks too.
> When I created the DB on both nodes, I configured all datafiles with the same
> paths (on the SAN disks). Does everyone follow me so far? :-)
> OK, now the problem: I installed the second node last, so the second
> DB was created later than the first. When I access Enterprise Manager,
> it shows the second node as the host. OK, that is fine. But then I stop all
> processes on node 2, umount the filesystems, varyoff the VGs, go to node 1,
> and start everything. All is well, except that EM still shows the second node
> as the host! :-(. I have fixed this by recreating the EM configuration with
> the emca tool (of course, the second node then shows the first host in EM).
> I have been looking at the tables of the SYSMAN schema. Does anyone know
> whether, by touching some SYSMAN table, I can fix this issue? That is, change
> the host that EM targets by default? Thanks
> in advance, and thank you very much to anyone who can reply.
>
> Cheers...
>

-- 
Cheers,
-- Mark Brinsmead
Senior DBA,
The Pythian Group
http://www.pythian.com/blogs

--
http://www.freelists.org/webpage/oracle-l
Received on Sun Apr 20 2008 - 22:02:56 CDT
