Re: How to determine if a database is in use?

From: Niall Litchfield <niall.litchfield_at_gmail.com>
Date: Wed, 13 Apr 2016 16:13:19 +0100
Message-ID: <CABe10sYr3_j_wzp6SkYOcKVK8gFGGHkyqw2Nae8FC1O==05g0g_at_mail.gmail.com>



Each database *instance* in EM will have a metric *logons_current* (derived eventually from v$sysstat). That's a pretty good indicator. I suspect that databases prior to 10g won't accurately report this stuff in EM (nor do I know when logons current was added to v$sysstat, but I bet it was decades ago). In the GUI you can find it under Database Limits, but I'd drive a query against mgmt$metric_daily in the SYSMAN schema on the OMR.
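A sketch of the kind of repository query I mean, run as SYSMAN or any account that can see the MGMT$ views. The exact metric_name/metric_column labels vary between EM versions, so discover them first rather than trusting the guess below:

```sql
-- Find how the logons metric is labelled in this EM version
SELECT DISTINCT metric_name, metric_column
FROM   sysman.mgmt$metric_daily
WHERE  LOWER(metric_column) LIKE '%logon%';

-- Peak concurrent logons per target over the last 6 months
-- ('logons_current' below is an assumption -- substitute whatever
--  the discovery query above actually returns)
SELECT target_name,
       MAX(maximum) AS peak_logons
FROM   sysman.mgmt$metric_daily
WHERE  metric_column = 'logons_current'
AND    rollup_timestamp > SYSDATE - 180
GROUP  BY target_name
ORDER  BY peak_logons;
```

Targets whose peak never rises above the background monitoring connections are the decommissioning candidates.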

On Wed, Apr 13, 2016 at 1:33 AM, Oracle DBA <justanotheroracledba_at_gmail.com> wrote:

> Question – how do you determine if a database is in use?
>
>
>
> Now that may appear to be a *newbie stupid* question, but let me
> elaborate. We are a global company with 500+ offices along with about 6
> largish datacentres around the globe. We have over 1000 instances running
> in various capacities on around 260 servers (some physical, some virtual),
> split roughly 60% Windows, 25% Sun and the remainder Linux. Some of our
> larger ones generate 70 GB a day in redo, while others barely kick over
> 2 MB a day. They range in versions from (embarrassed to say, but it starts
> with a 7) to 12.1.0.2 EE. We do run a global installation of Cloud Control
> 12.1.0.4, and pretty much all databases are in there.
>
>
>
> With the ever-present threat of audits from that magical team called LMS,
> we are trying to identify those instances that are not being used, or
> haven’t been used in 6+ months, with the goal of archiving and then promptly
> decommissioning the database and uninstalling the binaries. New databases are
> coming online weekly, but it appears that the business has difficulty
> telling anyone when they are not used anymore. They don’t realise, or care,
> that for every instance there is a cost, whether it be CPU, memory,
> storage, tapes, my time and the big cost of licences.
>
>
>
> So, given that I have 1000+ instances, how would you identify which of
> those haven’t been used for 6+ months?
>
>
>
> My initial thoughts
>
> 1) I’ve used the Cloud Control repository to mine the wonderful data
> and got a list of databases ordered by redo generation, but just because a
> database generates 2 MB a day does not mean that it’s not being used. So
> while it may highlight low-use databases, it’s by no means perfect.
>
>
>
> 2) Parse the classic listener.log file using sed to strip the noise,
> then grep on the SID to see what users are connecting, and on what dates
> they connected. This sounds like what I’m after, though I’m thinking there
> might be a better way, like creating an external table on the 11g diag
> listener log files and using some magic query? Has anyone done that and
> would like to share?
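The external-table idea in option 2 can be sketched as below. This is untested; the directory path and filename are assumptions for one host, and the parsing relies on classic text-format log lines looking like `timestamp * connect data * protocol info * establish * SID * return code`:

```sql
-- Assumed path: adjust to this host's listener trace directory
CREATE OR REPLACE DIRECTORY listener_log_dir
  AS '/u01/app/oracle/diag/tnslsnr/myhost/listener/trace';

-- One VARCHAR2 column per raw log line; parse with regexes at query time
CREATE TABLE listener_log_ext (
  log_line VARCHAR2(4000)
)
ORGANIZATION EXTERNAL (
  TYPE ORACLE_LOADER
  DEFAULT DIRECTORY listener_log_dir
  ACCESS PARAMETERS (
    RECORDS DELIMITED BY NEWLINE
    NOBADFILE NOLOGFILE
    FIELDS (log_line CHAR(4000))
  )
  LOCATION ('listener.log')
)
REJECT LIMIT UNLIMITED;

-- Most recent connection per SID or service, from the "establish" lines;
-- the leading 20 characters of those lines are the DD-MON-YYYY HH24:MI:SS timestamp
SELECT NVL(REGEXP_SUBSTR(log_line, '\(SID=([^)]+)\)', 1, 1, NULL, 1),
           REGEXP_SUBSTR(log_line, '\(SERVICE_NAME=([^)]+)\)', 1, 1, NULL, 1)) AS sid_or_service,
       MAX(TO_DATE(SUBSTR(log_line, 1, 20), 'DD-MON-YYYY HH24:MI:SS')) AS last_seen
FROM   listener_log_ext
WHERE  log_line LIKE '%establish%'
GROUP  BY NVL(REGEXP_SUBSTR(log_line, '\(SID=([^)]+)\)', 1, 1, NULL, 1),
              REGEXP_SUBSTR(log_line, '\(SERVICE_NAME=([^)]+)\)', 1, 1, NULL, 1));
```

You would still need to repeat this per host (or pull the logs centrally), and rotated logs mean the single LOCATION file may not cover a full 6 months.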
>
>
>
> 3) Going forward, there is the option of creating a few tables and
> a custom logon trigger to record logins, but obviously this does
> require some work, a few RFCs, and 6+ months to collect the data.
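A minimal sketch of the table-plus-trigger approach in option 3 (all object names here are made up; on a busy system you would also want to consider the insert overhead and filter out more of the service accounts):

```sql
CREATE TABLE logon_history (
  username   VARCHAR2(128),
  osuser     VARCHAR2(128),
  machine    VARCHAR2(128),
  program    VARCHAR2(128),
  logon_time DATE
);

CREATE OR REPLACE TRIGGER trg_record_logon
AFTER LOGON ON DATABASE
BEGIN
  -- Skip the accounts that poll every instance anyway
  IF SYS_CONTEXT('USERENV', 'SESSION_USER')
       NOT IN ('SYS', 'SYSTEM', 'DBSNMP', 'SYSMAN') THEN
    INSERT INTO logon_history (username, osuser, machine, program, logon_time)
    VALUES (SYS_CONTEXT('USERENV', 'SESSION_USER'),
            SYS_CONTEXT('USERENV', 'OS_USER'),
            SYS_CONTEXT('USERENV', 'HOST'),
            SYS_CONTEXT('USERENV', 'MODULE'),
            SYSDATE);
  END IF;
END;
/
```

An instance whose logon_history stays empty for 6 months is a strong decommissioning candidate.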
>
>
>
> 4) Then again, there is probably some out-of-the-box auditing that
> can be turned on that will capture this, but again a few (lots of) RFCs,
> and then working out how to query the audit data. Actually, does the seeded
> DBSNMP account have read access to the audit trails? I can generate a global
> tnsnames.ora file that lists all our servers.
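The out-of-the-box route in option 4 is standard session auditing, sketched below. Note that AUDIT_TRAIL is a static parameter, so enabling it where it isn't already set means a restart (another RFC), and whether DBSNMP can read the trail without an extra grant is something to test on one instance first:

```sql
-- Prerequisite: AUDIT_TRAIL=DB (the 11g default), set at instance startup
AUDIT SESSION;

-- Last logon per user, from the CONNECT records in the audit trail
SELECT username,
       MAX(timestamp) AS last_logon
FROM   dba_audit_session
GROUP  BY username
ORDER  BY last_logon;
```

Run via the global tnsnames.ora against every instance, a report of MAX(last_logon) per database gives exactly the "untouched for 6+ months" list, for whatever period the audit trail retains.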
>
>
>
> 5) Is there anything in the Cloud Control repo that has this
> information?
>
>
>
> My perfect solution would be to use the info in Cloud Control to
> automate/script this, and I was wondering if anyone had any insights.
>

-- 
Niall Litchfield
Oracle DBA
http://www.orawin.info

--
http://www.freelists.org/webpage/oracle-l
Received on Wed Apr 13 2016 - 17:13:19 CEST
