
Re: Auditing using log miner

From: Howard J. Rogers <hjr_at_dizwell.com>
Date: Sun, 6 Jun 2004 09:55:41 +1000
Message-ID: <40c25d68$0$1584$afc38c87@news.optusnet.com.au>

"Prem K Mehrotra" <premmehrotra_at_hotmail.com> wrote in message news:43441e77.0406051448.6b9c8d8f_at_posting.google.com...
> 1. I have read in several forum discussions that
> log miner has advantages for auditing because it does not add the
> performance overhead of database auditing. I know log miner does not
> provide information on select statements, and it can provide modified
> values for insert, delete and update. But does log miner really
> provide most of the information which database auditing provides?

It does, and provides a lot more on top. Database auditing will tell you only that Scott did some deleting on EMP. But you won't have the slightest idea what he deleted. Or that he updated EMP, but you won't know which records he changed, or from what, or to what. Log Miner exposes all this information.

In short, database auditing is essentially auditing of privilege use. It does not capture actual values (in the jargon, it isn't "value-based auditing"). If you need to see values, then Log Miner is your only option.
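To make the difference concrete, here's roughly what each gives you (a sketch only: the column lists are trimmed, it assumes deletes on EMP were being audited, and the Log Miner query assumes a mining session is already running):

  -- Database auditing: who touched what, and when, but no values
  select username, obj_name, action_name, timestamp
  from   dba_audit_trail
  where  obj_name = 'EMP';

  -- Log Miner: the statements themselves, plus the SQL to undo them
  select sql_redo, sql_undo
  from   v$logmnr_contents
  where  seg_owner = 'SCOTT'
  and    seg_name  = 'EMP'
  and    operation = 'DELETE';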

> Extracting auditing information from log miner seems to be a lot more
> work than getting similar information from database auditing.

Really? It's a three-stage process, one of which is only ever done quite rarely (the creation of the static dictionary file). After that, it's simply: specify the logs you want analyzed and then exec dbms_logmnr.start_logmnr(dictfilename=>'/wherever'). It's not exactly onerous.

Do you mean the syntax is not as easy as 'select * from aud$'? True enough if that's the point you're making. But after you've done it a couple of times, it's not really *that* much more difficult. I claim no special abilities in this regard, but I can do a log miner session in about 1 minute flat these days, and get all the syntax right from memory. It just comes with practice.
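For what it's worth, an 8i-style session looks something like this (a sketch only: the paths and log names are invented, and the dictionary build needs utl_file_dir in the init.ora to include the target directory):

  -- Stage 1 (done rarely): build the static dictionary file
  exec dbms_logmnr_d.build(dictionary_filename => 'dictionary.ora', -
                           dictionary_location => '/oracle/logmnr')

  -- Stage 2: register the logs you want analysed
  exec dbms_logmnr.add_logfile(logfilename => '/oracle/arch/arc_0001.arc', -
                               options => dbms_logmnr.new)
  exec dbms_logmnr.add_logfile(logfilename => '/oracle/arch/arc_0002.arc', -
                               options => dbms_logmnr.addfile)

  -- Stage 3: start the analysis, then mine away
  exec dbms_logmnr.start_logmnr(dictfilename => '/oracle/logmnr/dictionary.ora')

  select username, timestamp, sql_redo, sql_undo
  from   v$logmnr_contents
  where  seg_name = 'EMP';

  exec dbms_logmnr.end_logmnr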

> Has anyone developed packages, available to others, which can be used
> to extract auditing information from v$logmnr_contents, i.e., who
> dropped a specific table, or when someone logged on to the database
> with dba privileges?

Ah. Well, those are not the sort of operations you can audit with Log Miner in 8i. DDL such as 'drop table' requires you to hunt v$logmnr_contents for deletes from tab$ and obj$. It can be done, but it's not easy, especially if your application tends to create and drop objects on the fly as part of its standard operations (there's very little to help you distinguish between the various things being deleted from obj$). Logons to the database with DBA privileges are also not a Log Miner issue: Log Miner looks at the redo logs. If an operation doesn't generate redo (and logging on to the database doesn't), then Log Miner can't see it, can it?
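If you do decide to attempt it in 8i, the hunt looks roughly like this (a sketch only; you're then left to work out for yourself which obj$ delete belongs to which object):

  select scn, timestamp, sql_redo
  from   v$logmnr_contents
  where  seg_name  = 'OBJ$'
  and    operation = 'DELETE';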

In 9i, you can use Log Miner for the 'drop table' sort of problem very easily, because the statement 'DROP TABLE EMP' appears in clear text in the SQL_REDO column. Logons are better handled with 9i's 'after logon on database' trigger, though.
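Something along these lines, for instance (a sketch only: the upper() is just to dodge case differences, and logon_audit is a table you'd have to create yourself for the purpose):

  -- 9i: the DDL turns up in clear text in the redo
  select username, timestamp, sql_redo
  from   v$logmnr_contents
  where  operation = 'DDL'
  and    upper(sql_redo) like 'DROP TABLE%';

  -- Logons don't generate redo, so catch them with a trigger instead
  create or replace trigger trg_audit_logon
  after logon on database
  begin
    insert into logon_audit (username, logon_time)  -- logon_audit: your own table
    values (sys_context('userenv', 'session_user'), sysdate);
  end;
  /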

> 3. Is the performance overhead of database auditing really
> significant? If I want to monitor 5 key events, is it really going to
> slow down my database performance, i.e., will it make everything
> ~20-30% slower?

You'll have to test it to be sure, but the overhead can be significant. Although you are only auditing 5 events, each server process will have to check whether what it's just been asked to do is one of those events every time it's asked to do something new. How significant that becomes as an overhead is something only you can measure.
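For reference, switching that sort of auditing on is only a handful of AUDIT statements anyway (these five are just examples, and audit_trail has to be set to DB or OS in the init.ora before anything is recorded):

  audit session;
  audit create table;
  audit drop any table;
  audit delete on scott.emp by access;
  audit update on scott.emp by access;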

> 4. I have an 8i database which generated redo and archived logs. I
> will be analyzing them using Oracle9i logminer. Will the limitations
> of Oracle8i logminer, such as not finding chained rows etc., disappear?

Nope. 9i might be doing the analysis, but it's only analysing the redo that the originating database put there. And the 9i new features, such as DDL statement support, chained and migrated row support, index cluster DML support, and so on, all require that the originating database write some additional information into the redo stream. If that info isn't there, the new feature support isn't there either. Your 8i logs will therefore be subject to the 8i Log Miner limitations, whichever database you get to do the analysis.
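(For what it's worth, if the originating database were 9i you'd switch on at least minimal supplemental logging to get that extra information written. From memory the command is:

  alter database add supplemental log data;

or, to log primary key and unique index columns as well:

  alter database add supplemental log data (primary key, unique index) columns;

but none of that helps a pile of 8i archived logs, obviously.)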

Regards
HJR
