Oracle FAQ | Your Portal to the Oracle Knowledge Grid |
Re: How to show biggest redo producers
Neat. Bit of a bummer if he's wanting to analyze redo generated a while
ago, though, long after the user has disconnected and gone home for the
afternoon!
But yes... two different approaches, able to achieve much the same ends, for different sorts of situations.
Regards
HJR
--
----------------------------------------------
Resources for Oracle: http://www.hjrdba.com
===============================
"Jeremiah Wilton" <jwilton_at_speakeasy.net> wrote in message news:Pine.LNX.4.44.0203011112290.8731-100000_at_grace.speakeasy.net...
Received on Fri Mar 01 2002 - 13:55:53 CST
> I would say the best and easiest way to detect the heaviest redo
> producers would be to select from v$sesstat.
>
> select module, osuser, sql_hash_value, value redo
>   from v$session s, v$sesstat ss, v$statname sn
>  where s.sid = ss.sid
>    and ss.statistic# = sn.statistic#
>    and sn.name = 'redo size'
>  order by redo;
>
> That would give you the total redo size by session generated over the
> lifetime of the sessions. Since some of these might have generated
> the redo a long time ago and just never logged off, you could get fancy
> and get a better measurement of redo generated per unit time by
> dividing the redo size by the elapsed time since logon:
>
> select module, osuser, sql_hash_value,
>        value / (sysdate - logon_time) redo
>   from v$session s, v$sesstat ss, v$statname sn
>  where s.sid = ss.sid
>    and ss.statistic# = sn.statistic#
>    and sn.name = 'redo size'
>  order by redo;
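
[Since `sysdate - logon_time` is Oracle date arithmetic, the rate in the query above comes out in bytes of redo per day. A sketch of the same query scaled to bytes per second, multiplying the elapsed days by 86400, might look like this:]

```sql
-- Sketch: redo bytes per second since logon (date arithmetic yields days,
-- so multiply the elapsed time by 86400 seconds/day).
select s.module, s.osuser, s.sql_hash_value,
       ss.value / ((sysdate - s.logon_time) * 86400) redo_per_sec
  from v$session s, v$sesstat ss, v$statname sn
 where s.sid = ss.sid
   and ss.statistic# = sn.statistic#
   and sn.name = 'redo size'
 order by redo_per_sec;
```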
>
> To find out which objects, just look at the biggest culprit's SQL in
> v$sqlarea.
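
[One possible sketch of that last lookup; the bind variable stands in for whichever `sql_hash_value` the session query above flagged as the heaviest producer:]

```sql
-- Sketch: pull the SQL text behind the top session's hash value.
-- :hash is a placeholder for the sql_hash_value found above.
select sql_text, executions
  from v$sqlarea
 where hash_value = :hash;
```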
>
> --
> Jeremiah Wilton
> http://www.speakeasy.net/~jwilton
>
> On Sat, 2 Mar 2002, Howard J. Rogers wrote:
>
> > Yup, strangely enough, Log Miner will allow you to analyze the stream of
> > redo contained within redo log files, and that would most certainly be
> > the easiest way to do what you need to do. For a start, you could analyze
> > archived redo logs (assuming you're in archive log mode) using an
> > entirely separate database, and apart from some physical I/O, such an
> > analysis won't impact on your database at all (and if you do the sensible
> > thing and stick archives on entirely their own hard drives, even the I/O
> > won't be an issue for your production database).
> >
> > However, you might start to need to get a bit clever... analyzing just
> > for "seg_name='EMP'", for example, won't show you all the rollback and
> > index entries that a transaction produces. You'll have to search by the
> > transaction's unique identifier, and group all the redo generated by a
> > transaction under that identifier.
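
[A minimal Log Miner session along the lines Howard describes might look like the sketch below. The log file name and dictionary path are made-up placeholders; in V$LOGMNR_CONTENTS the transaction's unique identifier is the XID triple (XIDUSN, XIDSLT, XIDSQN), so grouping on it counts each transaction's table, index, and rollback redo together:]

```sql
-- Sketch only: file names and dictionary path are hypothetical.
begin
  dbms_logmnr.add_logfile(
    logfilename => '/archives/arch_1234.arc',
    options     => dbms_logmnr.new);
  dbms_logmnr.start_logmnr(
    dictfilename => '/oracle/dict/dictionary.ora');
end;
/

-- Group redo records by transaction id rather than by segment name,
-- so rollback and index entries are attributed to their transaction.
select xidusn, xidslt, xidsqn, count(*) redo_records
  from v$logmnr_contents
 group by xidusn, xidslt, xidsqn
 order by redo_records;

execute dbms_logmnr.end_logmnr
```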
> >
> > "Tom" <tdwilsonng_at_yahoo.com> wrote...
> > > I have a database producing tremendous amounts of redo, and I'd like
> > > to show which tables' updates are the biggest offenders. Is there a
> > > way to get these statistics? Will log miner work? Is it the only
> > > way?
>