Re: Oracle 9.2 Log Miner Scripts?
"Charles Davis" <cdavis10717_at_comcast.net> wrote in message news:<fpmdnU6Ty-NWXTeiRVn-sQ_at_comcast.com>...
> Hi,
>
> Does anyone have a working script for Oracle 9.2 Log Miner that runs on
> Unix, uses the system catalog for its Dictionary, and extracts the redo
> records into a user tablespace from all the redo files for the instance???
>
> I would sure love to see a working script for that. I've not been able to
> get that working from the docs and from other scripts I've googled on the
> internet.
>
> I've found the Oracle docs woefully inadequate in pulling all that together
> in working order.
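To answer the basic mechanics first: a minimal 9.2 session of the kind you
describe looks roughly like the sketch below. It assumes you mine on the same
instance that generated the logs (which is what makes the online catalog
usable as the dictionary); the file paths and the MYUSER schema are just
placeholders.

-- Register the archived log files to mine (placeholder paths).
BEGIN
  DBMS_LOGMNR.ADD_LOGFILE(
    LogFileName => '/u01/arch/arch_1_100.arc',
    Options     => DBMS_LOGMNR.NEW);
  DBMS_LOGMNR.ADD_LOGFILE(
    LogFileName => '/u01/arch/arch_1_101.arc',
    Options     => DBMS_LOGMNR.ADDFILE);
  -- Use the database's own data dictionary as the LogMiner dictionary.
  DBMS_LOGMNR.START_LOGMNR(
    Options => DBMS_LOGMNR.DICT_FROM_ONLINE_CATALOG);
END;
/

-- Persist the mined redo records into a table in a user tablespace;
-- V$LOGMNR_CONTENTS is only visible inside the mining session.
CREATE TABLE myuser.redo_records TABLESPACE users AS
  SELECT * FROM v$logmnr_contents;

BEGIN
  DBMS_LOGMNR.END_LOGMNR;
END;
/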
In this case I disagree with Sybrand, because I've done something similar: it works without problems, and it can sometimes be quite a good solution.
My scenario:
An internet banking database, up to fifty gigs, but heavily accessed
(up to several thousand concurrent users). There was a business
requirement to archive all financial transactions and some other
stuff, mostly application audit data (not database audit!), millions
of rows per day. The database is already under heavy load and
sometimes has problems coping with it, so increasing that load was
*highly* problematic.
Solution:
An in-house developed Logical Standby Database. An operating system
script copies the archived redo logs to another server. On that
server, another OS script invoked by cron runs a package in the
database. This package log mines the new logs, searches for
inserts/updates/deletes affecting a subset of tables, parses them,
transforms them (updates and deletes are turned into inserts with
appropriate flags and a timestamp added), and inserts them into
tables in the archive database. Everything is automatic and
parameter driven; even the creation and copying of a new LogMiner
catalog is automated and performed when needed (after a change to
the data model of the primary DB). All errors are automatically
propagated to ITO and the database error logs.
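To give a rough idea, the mining step of such a package boils down to
something like the sketch below. It's simplified: the dictionary flat file
would have been extracted on the primary with DBMS_LOGMNR_D.BUILD and copied
over along with the logs, and all paths, schema and table names here are
placeholders, not our real ones.

-- On the archive server: mine one copied archive log against a
-- dictionary flat file extracted earlier on the primary.
BEGIN
  DBMS_LOGMNR.ADD_LOGFILE(
    LogFileName => '/archdest/arch_1_1234.arc',
    Options     => DBMS_LOGMNR.NEW);
  DBMS_LOGMNR.START_LOGMNR(
    DictFileName => '/archdest/dictionary.ora');
END;
/

-- Keep only DML against the tables of interest; updates and deletes
-- become inserts carrying an operation flag and a timestamp, as
-- described above.
INSERT INTO arch.txn_history (scn, change_time, operation_flag, redo_sql)
  SELECT scn, timestamp, operation, sql_redo
    FROM v$logmnr_contents
   WHERE seg_owner = 'BANK'
     AND seg_name  = 'TRANSACTIONS'
     AND operation IN ('INSERT', 'UPDATE', 'DELETE');
COMMIT;

BEGIN
  DBMS_LOGMNR.END_LOGMNR;
END;
/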
Because in this case we need to select the exact row whenever
processing updates and deletes, we were also forced to turn on some
supplemental logging. I had some fears about the performance impact
of this, but I ran several tests and they showed that, if used
wisely, there is almost no impact on a production environment (in
our case only a few percent more generated redo volume).
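For anyone trying this, the 9.2 syntax is along these lines; the
table-level form is what keeps the overhead down (the table and column
names are placeholders):

-- Database-wide minimal supplemental logging:
ALTER DATABASE ADD SUPPLEMENTAL LOG DATA;

-- Or log primary key columns database-wide:
ALTER DATABASE ADD SUPPLEMENTAL LOG DATA (PRIMARY KEY) COLUMNS;

-- Or only for the handful of tables you actually mine:
ALTER TABLE bank.transactions
  ADD SUPPLEMENTAL LOG GROUP txn_pk (txn_id) ALWAYS;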
Pros:
Almost no additional load on the primary database (with the
exception of I/O and supplemental logging, but both of these are
OK). No operator assistance needed. The archive database is almost
always up to date with production.
Cons:
Some programming effort is needed (but nothing huge). Supplemental
logging is required.
Conclusion:
You can use LogMiner for more than the usual ad-hoc stuff; it can
work and be deployed even in a highly demanding production
environment.
--
with regards
Dusan Bolek

Received on Fri Nov 07 2003 - 03:29:45 CST