Re: OEM 12c (12.1.0.5) generating too many alerts on Metrics "Logons Per Sec"

From: Jonathan Lewis <jonathan_at_jlcomp.demon.co.uk>
Date: Wed, 27 Dec 2017 18:42:56 +0000
Message-Id: <2BDC9CE5-9D08-4694-9AF0-DA2E0F80E19B_at_jlcomp.demon.co.uk>



3b) Is the current OEM connecting and disconnecting much more frequently than the previous one and making itself the source of the alerts it's raising?

Regards
Jonathan Lewis
(From my iPad mini; please excuse typos and auto-correct)

> On 27 Dec 2017, at 16:09, Mark W. Farnham <mwf_at_rsiz.com> wrote:
>
> This begs for the answer to the three-part excluded-middle question:
>
> 1) Was OEM 11g failing to report this enormous number of logins per second, but they were there all along?
> 2) Is your current release of OEM falsely reporting a high number?
> 3) Did something new happen such that both OEM 11g and the current OEM are correct, but now you do have this huge number of logins per unit time?
>
> If the answer is 1, then regardless of any urgent action to silence the alarm, you almost certainly need to take action to prevent this rapid rate of logins.
> Likewise for number 3.
>
> Only in the case of 2 do you need to just silence the alarms until OEM is corrected. (I believe Kellyn gave you the recipe card for doing that and I doubt you can find a better source for that info.)
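>
> A quick way to see which case you're in is to compute the logon rate yourself from v$sysstat and compare it with the figure OEM reports. A minimal sketch follows, assuming the python-oracledb driver; the credentials, DSN and sampling interval are placeholders:
>
> import time
> import oracledb
>
> # Sample the 'logons cumulative' statistic twice and derive logons/sec,
> # independently of OEM's adaptive-threshold metric.
> conn = oracledb.connect(user="system", password="***", dsn="host/service")
>
> def logons_cumulative(c):
>     cur = c.cursor()
>     cur.execute("SELECT value FROM v$sysstat WHERE name = 'logons cumulative'")
>     return cur.fetchone()[0]
>
> interval = 60  # seconds between samples (placeholder)
> v1 = logons_cumulative(conn)
> time.sleep(interval)
> v2 = logons_cumulative(conn)
> print(f"logons per sec over the last {interval}s: {(v2 - v1) / interval:.2f}")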
>
> Rarely have I seen that rapid a rate, and the only cases that were marginally justifiable resulted in a pool of service daemons being constructed that remained logged in to absorb a high rate of arriving distinct transactions (thereby requiring only a secure handshake in place of the thousands of statements executed for a login).
>
> JL mentioned a bit about WHY such a rapid rate of logins is bad. No need to repeat that, but please do not underestimate the bad effects he mentioned.
>
> The login version of slow-by-slow (row-by-row) processing is indeed to log in for each fresh single transaction. Of course that does not scale.
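>
> As a sketch of the pooling idea above (assuming the python-oracledb driver; user, password, DSN, pool sizes and the SQL are placeholders), the application logs a handful of sessions in once and then borrows one per transaction instead of connecting and disconnecting each time:
>
> import oracledb
>
> # Create the pool once at application start-up; these sessions stay
> # logged in and are reused for every subsequent transaction.
> pool = oracledb.create_pool(
>     user="app_user", password="***", dsn="host/service",
>     min=4, max=16, increment=2,
> )
>
> def run_transaction(sql, binds):
>     # Borrow an already-authenticated session; no fresh login happens here.
>     with pool.acquire() as conn:
>         with conn.cursor() as cur:
>             cur.execute(sql, binds)
>         conn.commit()
>
> run_transaction("UPDATE orders SET status = :1 WHERE id = :2", ["SHIPPED", 42])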
>
> good luck,
>
> mwf
>
> From: oracle-l-bounce_at_freelists.org [mailto:oracle-l-bounce_at_freelists.org] On Behalf Of Ashoke Mandal
> Sent: Wednesday, December 27, 2017 10:41 AM
> To: jonathan_at_jlcomp.demon.co.uk
> Cc: ORACLE-L
> Subject: Re: OEM 12c (12.1.0.5) generating too many alerts on Metrics "Logons Per Sec"
>
> Hi Kellyn, Jonathan, Mladen, thank you all for the replies. Here are answers to some of your points.
>
> I never had this kind of alert flooding in OEM 11g.
> It is happening on every server (host), not just one or two.
> So I would like to disable this alert if there is an option.
> Ideally it would be good to investigate why so many logins are happening, but it is very hard for me to spend time on that; instead I would like to simply disable the alert.
> Any further tips will be appreciated.
>
> Ashoke
>
> On Tue, Dec 26, 2017 at 3:49 PM, Jonathan Lewis <jonathan_at_jlcomp.demon.co.uk> wrote:
>
> Ashoke,
>
> The correct way to fix this problem is to find out what is logging on 146 times per second and stop it - it can't possibly be necessary and it's probably having an appalling effect on the performance of the database. I'd expect to see lots of time lost on library cache latch and dictionary cache latch activity, probably with plenty of mutex sleeps as well and "connection management" showing up with a significant (though relatively small) amount of time in the Time Model report.
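>
> A minimal sketch of where to start looking (assuming the python-oracledb driver; credentials and DSN are placeholders) is to group current sessions by machine and program, and to check how much DB time the time model attributes to connection management:
>
> import oracledb
>
> conn = oracledb.connect(user="system", password="***", dsn="host/service")
> cur = conn.cursor()
>
> # Which machine/program combinations hold the most sessions right now --
> # a rough pointer to the application responsible for the logon storm.
> cur.execute("""
>     SELECT machine, program, COUNT(*) AS sessions
>     FROM   v$session
>     WHERE  type = 'USER'
>     GROUP  BY machine, program
>     ORDER  BY sessions DESC
> """)
> for machine, program, sessions in cur:
>     print(f"{sessions:6d}  {machine}  {program}")
>
> # Time model: how much time is being spent just managing connections.
> cur.execute("""
>     SELECT stat_name, ROUND(value / 1e6, 1) AS seconds
>     FROM   v$sys_time_model
>     WHERE  stat_name IN ('DB time', 'connection management call elapsed time')
> """)
> for stat_name, seconds in cur:
>     print(f"{seconds:12.1f}s  {stat_name}")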
>
> If you want to avoid too many messages flooding your inbox (until you've fixed the important problem) create a rule that redirects the incoming alert messages to a separate file based on the subject and some aspect of the message content.
>
> Regards
> Jonathan Lewis
>
> ________________________________________
> From: oracle-l-bounce_at_freelists.org <oracle-l-bounce_at_freelists.org> on behalf of Ashoke Mandal <ramukam1983_at_gmail.com>
> Sent: 26 December 2017 13:17
> To: Kellyn Pot'Vin-Gorman
> Cc: Mladen Gogala; ORACLE-L
> Subject: Re: OEM 12c (12.1.0.5) generating too many alerts on Metrics "Logons Per Sec"
>
> Hi Kellyn, Here are the details of the alert. This alert is coming far too often and flooding my inbox. Any help will be appreciated. Thanks, Ashoke
>
> Host=<Host_Name>
> Target type=Database Instance
> Target name=<DB_Name>
> Categories=Load
> Message=Metrics "Logons Per Sec" is at 149.676<https://gva04cn1.lau.medtronic.com:7800/em/redirect?pageType=sdk-core-event-console-detailEvent&issueID=600D37F6B8350327E054001B2194CD2D>
> Severity=Warning
> Event reported time=Dec 26, 2017 6:02:38 AM MST
> Operating System=SunOS
> Platform=sparc
> Associated Incident Id=2606<https://gva04cn1.lau.medtronic.com:7800/em/redirect?pageType=sdk-core-event-console-detailIncident&issueID=613EE5C7DFBA26EDE054001B2194CD2D>
> Associated Incident Status=New
> Associated Incident Owner=
> Associated Incident Acknowledged By Owner=No
> Associated Incident Priority=None
> Associated Incident Escalation Level=0
> Event Type=Metric Alert
> Event name=Server_Adaptive_Threshold_Metric:instance_throughput__logons_ps
> Metric Group=Server_Adaptive_Threshold_Metric
> Metric=Cumulative Logons Per Sec<https://gva04cn1.lau.medtronic.com:7800/em/redirect?pageType=sdk-core-metric-details&timePeriod=byDay&metric=Server_Adaptive_Threshold_Metric&target=thdmas.dmas.medtronic.com&metricColumn=instance_throughput__logons_ps&type=oracle_database&keyValue=SYSTEM>
> Metric value=149.675702644271
> Key Value=SYSTEM
> Rule Name=MTC_MECC_DMAS Incident All Targets,Create incident for critical metric alerts
> Rule Owner=SYSMAN
> Update Details:
> Metrics "Logons Per Sec" is at 149.676
> Incident created by rule (Name = MTC_MECC_DMAS Incident All Targets, Create incident for critical metric alerts; Owner = SYSMAN).
>
>
> On Mon, Dec 25, 2017 at 8:42 PM, Kellyn Pot'Vin-Gorman <dbakevlar_at_gmail.com> wrote:
> Can you please paste the alert you receive, editing out any company info? The alert contains the data needed to isolate the metric.
> Thank you,
> Kellyn
>
> On Sun, Dec 24, 2017 at 2:16 PM Ashoke Mandal <ramukam1983_at_gmail.com> wrote:
> I have disabled the entire Throughput metric group (which contains Cumulative Logons Per Sec), but it doesn't help; I am still receiving too many alerts on the "Logons Per Sec" metric.
>
> I am still looking for help on how to turn off this alert.
>
> Thanks,
> Ashoke
>
> On Fri, Dec 22, 2017 at 3:55 PM, Ashoke Mandal <ramukam1983_at_gmail.com> wrote:
> Hello Kellyn and Mladen, Thanks for your reply.
>
> So far I have found the Throughput metric group (which contains Cumulative Logons Per Sec) under Oracle Database -> Monitoring -> All Metrics -> Throughput.
> I have disabled the entire Throughput metric group and will see if that stops this warning alert.
>
> Ashoke
>
> On Fri, Dec 22, 2017 at 1:14 PM, Mladen Gogala <gogala.mladen_at_gmail.com> wrote:
> The most probable cause is having both OEM with its Grid Control repository DB and a local EM running.
> Regards
>
>
> On Fri, 22 Dec 2017 09:43:32 -0600
> Ashoke Mandal <ramukam1983_at_gmail.com> wrote:
>
> > Dear All,
> >
> > I have migrated some of our databases from OEM 11g to OEM 12c (12.1.0.5).
> > It is working OK, but I am getting too many extra warnings like this:
> >
> > EM Event: Warning: thdmas.dmas.medtronic.com - Metrics "Logons Per Sec" is
> > at 105.409
> >
> >
> > I would like to disable this but am not able to locate the related metric in
> > the 12c OEM console.
> >
> > I logged into the database via OEM 12c and checked under the following areas,
> > but was not able to find the "Logons Per Sec" metric.
> >
> > Oracle Database -> Monitoring -> Metric and collection settings
> >
> > Oracle Database -> Monitoring -> All Metrics
> >
> > Any help will be highly appreciated.
> >
> > Thanks,
> > Ashoke
>
>
> --
> Mladen Gogala
> Oracle DBA
> Tel: (347) 321-1217
>
>
> --
> Kellyn Pot'Vin-Gorman
> about.me/dbakevlar
>
>
>
> --
> http://www.freelists.org/webpage/oracle-l
>
>
>

--
http://www.freelists.org/webpage/oracle-l
Received on Wed Dec 27 2017 - 19:42:56 CET
