RE: Quick and Dirty Grid Control

From: Freeman, Donald G. CTR (ABL) <donald.freeman.ctr_at_ablcda.navy.mil>
Date: Fri, 2 May 2014 19:15:50 +0000
Message-ID: <85D44D05C4C24C40AFDED6C1FC0E1BDF295BAF88_at_SNSLCVWEXCH02.abl.cda.navy.mil>



The DBA who explained the history to me said that discovery just bogged down and failed; it would never complete. Once I said we should re-examine this, a second objection was raised: in this environment, databases and other targets are dropped, cloned, and otherwise hammered all the time. In my previous experience, if I didn't look at Grid Control for three days it would "dirty up" pretty fast and I'd have to go in and troubleshoot why this or that database, listener, or agent was off-line. That took a fair amount of time, with far fewer targets than exist here. The person telling me this is an Oracle advocate, an Oracle champion, and she's not on board with this.

These servers are not isolated from one another. All the databases are development and test instances. Over time we have ended up having to maintain multiple baselines of our product suite. Our goal is to drastically reduce the number of versions that we maintain and shrink the number of databases and other targets that we maintain. That will take the load off the staff here.

-----Original Message-----
From: Mark W. Farnham [mailto:mwf_at_rsiz.com]
Sent: Friday, May 02, 2014 12:04 PM
To: Freeman, Donald G. CTR (ABL)
Cc: 'oracle-l digest users'
Subject: RE: Quick and Dirty Grid Control

Even if there are network secure fences between some of the five Solaris servers, it seems like at most you would need to set up 5 independent grid controls.

I mention secure fences only because of your email address and the statement that discovery was alleged to have previously failed.

In terms of counting things for humans to manage, 600 is error-prone simply by head count. Five seems a lot more reasonable, and if there are security ring threshold issues, one each may be the way to go. If you have a machine that hosts databases with less stringent availability requirements, having a pioneer run for a while after a raft of patches, before you do the more critical systems, may also be useful.

mwf

-----Original Message-----
From: oracle-l-bounce_at_freelists.org [mailto:oracle-l-bounce_at_freelists.org] On Behalf Of Fuad Arshad (Redacted sender "fuadar_at_yahoo.com" for DMARC)
Sent: Friday, May 02, 2014 11:46 AM
To: donald.freeman.ctr_at_ablcda.navy.mil
Cc: oracle-l digest users
Subject: Re: Quick and Dirty Grid Control

600 databases are not that many from a Grid Control or Cloud Control perspective. I'm wondering why the comment was made. If you have OVM running, you can download the EM templates and run a discovery. I have had over 1,200 databases monitored using EM11g Grid Control, where the OMS and repository were running on a tiny SPARC V880, without many issues.

Fuad
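Fuad's suggestion of scripting the discovery can be sketched with EM12c's emcli command-line client. The host name, SIDs, port, and ORACLE_HOME below are hypothetical placeholders, and in practice you would pull the SID list from each server (e.g., from /var/opt/oracle/oratab) rather than hard-coding it. This sketch only generates the add_target commands as text for review; it does not submit anything to the OMS:

```shell
#!/bin/sh
# Sketch only: bulk-register databases with EM12c's "emcli add_target"
# instead of relying on autodiscovery. All names and paths below are
# hypothetical. The commands are built as text first so a DBA can review
# them before any target is actually registered.
HOST="solaris01.example.com"
PORT="1521"
OH="/u01/app/oracle/product/11.2.0/dbhome_1"

CMDS=""
for SID in dev01 dev02 test01; do
  # One add_target invocation per database instance on this host.
  CMDS="${CMDS}emcli add_target -name=${SID} -type=oracle_database -host=${HOST} -properties='SID:${SID};Port:${PORT};OracleHome:${OH};MachineName:${HOST}'
"
done

printf '%s' "$CMDS"
```

Once the generated list looks right, the same loop could run the commands directly (after `emcli login`); registering targets this way sidesteps the discovery pass that reportedly bogged down on these hosts.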

> On May 2, 2014, at 8:28, "Freeman, Donald G. CTR (ABL)"
> <donald.freeman.ctr_at_ablcda.navy.mil> wrote:
>
> The place I work doesn't use grid control. They have about 600 active
> databases in the development regions. We lack hardware infrastructure.
> All of these databases are mounted on five Solaris 10 servers. Another
> DBA told me that they previously tried to get Grid Control running but
> it failed on discovery. It couldn't handle that many objects on a
> server. That was some time ago.
>
> I'm about to get drafted (listening over the wall) to patch this
> weekend (I'm not on that team) and I'm not really interested in trying
> to patch that many databases manually, at least not twice. Is there a
> method to install grid control in some way that it can handle this
> situation? Can 12c handle this?
>
> I would start doing some reading but I'm afraid somebody is going to
> walk around the corner in about five minutes and give me "the look."
> I'm looking for a direction to march in that will fix this going
> forward.
>

--
http://www.freelists.org/webpage/oracle-l
Received on Fri May 02 2014 - 21:15:50 CEST