Oracle FAQ Your Portal to the Oracle Knowledge Grid
HOME | ASK QUESTION | ADD INFO | SEARCH | E-MAIL US
 


Re: Monitoring the alert log ...

From: Queen Roo Roo <ruth.gramolini_at_gmail.com>
Date: Mon, 11 Sep 2006 13:31:41 -0400
Message-ID: <e3f4f26a0609111031v427ba6b1u272d1404c897348c@mail.gmail.com>


If you have EE you can use the Alert Log event to let you know when errors are found in the alert log. There is also a free tool from Zymurgy to check your alert logs. If you are interested I can try to find the info.

I have a job that compresses the alert_sid.log when it gets to 30M. This will cause Oracle to open a new one, and you can archive or delete the compressed file.
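[A sketch of the rotation job Ruth describes, assuming a POSIX shell; the path and the 30M threshold shown in the cron example are illustrative. Moving the live file aside is safe because Oracle simply creates a fresh alert_SID.log on its next write.]

```shell
#!/bin/sh
# rotate_alert: when the alert log exceeds a size limit, move it to a
# timestamped name and compress it; Oracle recreates the log on its
# next write.  Archive or delete the .gz copy later.
rotate_alert() {
    log=$1; limit=$2                  # limit in bytes
    [ -f "$log" ] || return 0         # nothing to do yet
    size=$(wc -c < "$log")
    if [ "$size" -gt "$limit" ]; then
        stamp=$(date +%Y%m%d%H%M%S)
        mv "$log" "$log.$stamp"       # Oracle starts a new alert log
        gzip "$log.$stamp"
    fi
}

# e.g. from cron, rotating at roughly 30M (path is illustrative):
# rotate_alert /u01/app/oracle/admin/ORCL/bdump/alert_ORCL.log $((30*1024*1024))
```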

HTH,
Ruth

On 9/10/06, Reverend Stephen Booth <stephenbooth.uk_at_gmail.com> wrote:
>
> On 10/09/06, stv <stvsmth_at_gmail.com> wrote:
> > Howdy,
> >
> > I'm a newish DBA and I wanna simplify some daily checks. I'm curious
> > as to how other people monitor the alert logs. Is this something most
> > folks do?
>
> I do, but from a 'belts and braces' paradigm. Anything important (i.e.
> that impacts the users) I would expect to pick up when it happens
> (either through a user problem report or another monitoring tool);
> checking the alert log is just a backstop to pick up any error
> messages that didn't impact the users noticeably, so I can decide if
> it's something I need to deal with now, something I can deal with when
> I have time, or something I need to note but not worry about unless it
> happens again soon.
>
> > Also, what do other folks do about cycling the alert_SID.log? Is there
> > a size you aim for? Date range?
> >
> Typically I go for archiving off the alert log (i.e. move the current
> log to alert_SID.TIMESTAMP.log then touch alert_SID.log) at the end of
> the nightly backup (actually just Monday to Friday for most systems as
> they don't see much use over the weekend) then check the archived
> copy.
>
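[The archive-then-touch step Stephen describes can be sketched as below, assuming a POSIX shell; the naming follows his alert_SID.TIMESTAMP.log convention, and the path and error pattern in the usage comment are illustrative.]

```shell
#!/bin/sh
# archive_alert: move the live alert log to alert_SID.TIMESTAMP.log,
# touch a fresh empty one for Oracle to keep writing to, and print the
# archived name so a checker can scan it.
archive_alert() {
    log=$1                                   # e.g. .../alert_ORCL.log
    stamp=$(date +%Y%m%d)
    base=${log%.log}
    mv "$log" "${base}.${stamp}.log" || return 1
    touch "$log"                             # Oracle carries on here
    echo "${base}.${stamp}.log"
}

# e.g. at the end of the nightly backup:
# archived=$(archive_alert /u01/app/oracle/admin/ORCL/bdump/alert_ORCL.log)
# grep -E 'ORA-[0-9]+' "$archived"
```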
> I have another script that runs after backup that compresses any files
> over 20 days old (based on last accessed) and deletes any over 40 days
> old; it runs against bdump, udump and a few other log destinations
> (logs from the backup jobs, applications, monitoring scripts &c).
>
> Each day I (and the ops email address) get an email either saying
> "Nothing wrong" or listing the potential errors found. The reason for
> sending a mail even if there isn't a problem is that if the mail
> doesn't arrive then I know that either the script didn't run or it did
> but the message got lost/blocked somehow. Either way I want to know
> and look into it.
>
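[The always-send-a-mail report could look like the sketch below, assuming a POSIX shell; the error patterns, address, and mailx invocation in the usage comment are illustrative. Separating the scan from the send keeps the "Nothing wrong" heartbeat behaviour Stephen describes.]

```shell
#!/bin/sh
# scan_alert: print likely problems found in an alert log, or the
# literal heartbeat line "Nothing wrong" if none were found.
scan_alert() {
    errors=$(grep -E 'ORA-[0-9]+|Checkpoint not complete|deadlock' "$1")
    if [ -n "$errors" ]; then
        printf '%s\n' "$errors"
    else
        echo "Nothing wrong"
    fi
}

# e.g. from cron -- mail the result unconditionally, so a missing
# message itself signals that the check broke:
# scan_alert /u01/app/oracle/admin/ORCL/bdump/alert_ORCL.log |
#     mailx -s "ORCL alert log check" dba@example.com ops@example.com
```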
> We're currently looking into some sort of console/dashboard for
> close-to-real-time monitoring.
>
> Stephen
>
> --
> It's better to ask a silly question than to make a silly assumption.
>
> http://stephensorablog.blogspot.com/
>
> 'nohup cd /; rm -rf * > /dev/null 2>&1 &'
>
> There's a strong argument for the belief that running a command
> without first knowing what it does is 'Darwin in action'.
> --
> http://www.freelists.org/webpage/oracle-l
>
>
>

-- 
Ruth Gramolini
ruth.gramolini_at_gmail.com

--
http://www.freelists.org/webpage/oracle-l
Received on Mon Sep 11 2006 - 12:31:41 CDT
