RE: Opinion on change control for DBA scripts

From: Fergal Taheny <ftaheny_at_gmail.com>
Date: Sat, 29 Jun 2013 08:48:47 +0100
Message-ID: <CAOuMUT4BnsmMN3cM3VF7BSZNSSACZ+deBHE9QznR-FhqKSbk9Q_at_mail.gmail.com>



Hi Dave,
One of the important features that any monitoring solution should have is the the ability to detect if an agent has gone down or communication from the agent has broken. This is easy to achieve using a "heartbeat". So if your monitoring scripts run from cron and raise alerts via email then you add a script to every crontab that sends a heartbeat message also via e-mail. Then whatever picks up the alerts now starts picking up the heartbeats and raises an alert if heartbeat message doesn't arrive from a server (every hour or so). So now if cron stops on a box or e-mail config on the box is broken you'll know about it.

You can apply a similar concept to scheduled jobs. Set up a central repository listing all the jobs that are scheduled to run on every server; a simple table with server/database, job and frequency will suffice. Then add a call to the end of all your scripts so that each run reports the time, job, server/db and status back to the repository. In our case every script inserts a record into a central database each time it runs. Then you build a simple exception report that checks the repository, compares what was supposed to run against what actually ran, and perhaps e-mails you the results. A colleague of mine set this up in my current workplace. There is a bit of effort at the start, but not a huge amount, and it saves you a lot of heartache and effort later. A sketch of the repository idea follows below.
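
To make the idea concrete, here is a rough sketch in Python, with sqlite3 standing in for the central database (in practice it would be a schema in a central Oracle database that the scripts insert into over the network); the table and column names are just made up for the example:

#!/usr/bin/env python3
# job_registry.py - sketch of the central job repository and exception report.
import sqlite3
from datetime import datetime, timedelta, timezone

def init(conn):
    conn.executescript("""
        CREATE TABLE IF NOT EXISTS scheduled_job (
            server        TEXT NOT NULL,
            job_name      TEXT NOT NULL,
            frequency_min INTEGER NOT NULL,     -- expected interval between runs
            PRIMARY KEY (server, job_name)
        );
        CREATE TABLE IF NOT EXISTS job_run (
            server   TEXT NOT NULL,
            job_name TEXT NOT NULL,
            run_time TEXT NOT NULL,             -- ISO timestamp
            status   TEXT NOT NULL              -- e.g. SUCCESS / FAILED
        );
    """)

def record_run(conn, server, job_name, status):
    """Called at the end of every monitored script."""
    conn.execute(
        "INSERT INTO job_run (server, job_name, run_time, status) "
        "VALUES (?, ?, ?, ?)",
        (server, job_name, datetime.now(timezone.utc).isoformat(), status),
    )
    conn.commit()

def exception_report(conn):
    """Return jobs that have not reported within their expected frequency."""
    now = datetime.now(timezone.utc)
    missing = []
    for server, job_name, freq in conn.execute(
        "SELECT server, job_name, frequency_min FROM scheduled_job"
    ):
        row = conn.execute(
            "SELECT MAX(run_time) FROM job_run WHERE server = ? AND job_name = ?",
            (server, job_name),
        ).fetchone()
        last = datetime.fromisoformat(row[0]) if row[0] else None
        if last is None or now - last > timedelta(minutes=freq):
            missing.append((server, job_name, last))
    return missing

if __name__ == "__main__":
    conn = sqlite3.connect("job_registry.db")
    init(conn)
    # Each script would call something like:
    #   record_run(conn, "prod-db01", "rman_backup", "SUCCESS")
    for server, job_name, last in exception_report(conn):
        print("MISSING: %s %s (last run: %s)" % (server, job_name, last))

The exception report (or an e-mail of its output) is the piece you actually look at: anything listed either never checked in or is overdue.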

Auditors love this, especially for SOX compliance, because now you have evidence that your controls are running as intended.

Regards,

Fergal

--
http://www.freelists.org/webpage/oracle-l
Received on Sat Jun 29 2013 - 09:48:47 CEST