
Re: Diagnose Slow System--OraPerf

From: Tim Gorman <Tim_at_SageLogix.com>
Date: Sun, 26 May 2002 12:18:19 -0800
Message-ID: <F001.0046BDF6.20020526121819@fatcity.com>


Not astonishing, but unusual and -- as it turns out -- probably unnecessary...

The query against V$LOG_HISTORY reveals that you are performing anywhere from 2 to 42 log switches per day, and no more than 15 per day since you raised the logfile size from 50M to 100M. The major question here is whether the log switches on the high end were forced early (e.g. by ALTER SYSTEM SWITCH LOGFILE or numerous STARTUP/SHUTDOWN cycles) or whether each one filled the entire log file...

That's the purpose of looking at the number of changes in addition to the count of switches. Look at 29-APR: the high count of 42 switches of the 50M redo log files coincides with a relatively low number of redo changes, which suggests that log switches are sometimes being forced early. It's not definitive, as that day may simply have been full of operations that generate large redo entries, but that's my take on it from afar.
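
If you care to check, one quick way to make forced switches stand out is to compute changes-per-switch by day. Something along these lines should do it (a sketch only, a variation on the query quoted below, so treat it as illustrative):

select thread#, trunc(first_time) dt,
       count(*) switches,
       round((max(next_change#) - min(first_change#)) / count(*)) changes_per_switch
from v$log_history
group by thread#, trunc(first_time)
order by thread#, dt;

Days where that ratio dips far below the norm (like 29-APR) are the prime suspects for forced switches.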

All in all, the main reason for a large number of redo groups is the situation where such a huge volume of redo is being generated that there is a risk the ARCn processes will not have finished archiving a group before LGWR wraps around and needs to reuse it. Since your history of 2-42 log switches per day suggests a volume of at minimum 0.2GB and at most 2.1GB generated per day, I cannot imagine either LGWR or the ARCn processes being overwhelmed (provided you don't have a grossly underconfigured server!). So, while it's neither here nor there, you should probably be able to get by with the standard 3, 4, or 5 redo log groups instead of the 20 you have.
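
(The arithmetic, for the record: 2 switches x 100M = 0.2GB on the quietest day, and 42 switches x 50M = 2.1GB on the busiest. Both figures assume each switched logfile was full, which, as noted above, probably overstates the busy days.)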

But, having 20 groups isn't hurting anything (except perhaps space consumption), so there's no need to change anything, if you'd prefer not to...

Anjo has already explained why the YAPP report recommended 9GB for the online redo logfiles, so there is no need to act on that recommendation; it's just a quirk of the reporting mechanism when confronted with the unusual combination of a high LOG_CHECKPOINT_INTERVAL and a large number of small logfiles...
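
(To illustrate the arithmetic -- and I am only guessing at the parameter value here -- if LOG_CHECKPOINT_INTERVAL were set to 20,000,000 512-byte OS blocks, a log-sizing rule that matches the interval would yield 20,000,000 x 512 / 1,048,576 = 9,765.625 MB, which is just about the 9765 MB the report recommended.)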

---

One side comment:  it is a mistake to set LOG_CHECKPOINT_INTERVAL or
LOG_CHECKPOINT_TIMEOUT with performance considerations placed ahead of
availability considerations.  These are not performance-tuning parameters,
but instance-restart tuning parameters...

These parameters are intended to allow the DBA to "voluntarily" checkpoint
more frequently than the standard "involuntary" checkpoint events (e.g. at
log switch, shutdown, certain ALTER TABLESPACE operations, etc.).  This
functionality was further enhanced with Oracle8i's FAST_START_IO_TARGET
parameter and the V$INSTANCE_RECOVERY view.  While postponing checkpoints
might seem like a good idea from a performance standpoint (it isn't, but
there is a lot of myth and bad advice flying around saying it is),
checkpoints cannot be postponed indefinitely, so why bother trying?  Even if
you could postpone checkpointing indefinitely, would it really help
performance?  (I would argue to the contrary!)  Would never checkpointing be
a good thing?  (No way, Jose!)
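
On 8.1.7 you can see where you stand with something like the following (a
sketch only; check the view in your release, as the columns vary a bit):

select recovery_estimated_ios,  -- est. data blocks to read in crash recovery
       actual_redo_blks,        -- redo blocks needed at this instant
       target_redo_blks         -- the bound your current settings aim for
from v$instance_recovery;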

Frequent checkpoints have the benefit of permitting fast instance startup
after a crash, so why not set these parameters based on your desired
instance-restart time, instead of on poorly-considered or ill-advised
expectations of performance optimization?
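
For example (the value is purely illustrative, not a recommendation), you
could bound the redo to be replayed at startup to roughly thirty minutes'
worth with:

-- force a checkpoint at least every 1800 seconds
alter system set log_checkpoint_timeout = 1800;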

Hope this helps...

----- Original Message -----
To: "Multiple recipients of list ORACLE-L" <ORACLE-L_at_fatcity.com>
Sent: Sunday, May 26, 2002 12:38 PM



>
> Aack! I don't know why, it makes more sense to me the way I said it, but I
> can see now my error. Here is the result of the first query...I have 20
> groups with 2 members in each group! (That's not so astonishing I hope!)
>
> select thread#, group#, members, bytes/1048576 mb from v$log;
> THREAD#  GROUP#  MEMBERS   MB
> -------  ------  -------  ---
>       1       1        2  100
>       1       2        2  100
>       1       3        2  100
>       1       4        2  100
>       1       5        2  100
>       1       6        2  100
>       1       7        2  100
>       1       8        2  100
>       1       9        2  100
>       1      10        2  100
>       1      11        2  100
>       1      12        2  100
>       1      13        2  100
>       1      14        2  100
>       1      15        2  100
>       1      16        2  100
>       1      17        2  100
>       1      18        2  100
>       1      19        2  100
>       1      20        2  100
>
> And from the other query (this is great--thanks!):
>
> THREAD#  DT            CHANGES  CNT
> -------  ---------  ----------  ---
>       1  27-APR-02   8,293,428   11
>       1  28-APR-02   6,180,502    4
>       1  29-APR-02   2,151,943   42
>       1  30-APR-02  10,732,708   31
>       1  01-MAY-02  10,733,462   19
>       1  02-MAY-02   6,402,380   12
>       1  03-MAY-02   2,331,168   15
>       1  04-MAY-02  14,845,034    6
>       1  05-MAY-02   4,737,492    5
>       1  06-MAY-02   5,791,174   15
>       1  07-MAY-02   2,080,204   17
>       1  08-MAY-02  11,791,301   16
>       1  09-MAY-02   8,123,046   16
>       1  10-MAY-02   2,375,798   13
>       1  11-MAY-02  11,111,829    7
>       1  12-MAY-02   3,330,510    4
>       1  13-MAY-02   4,050,327   17
>       1  14-MAY-02   8,238,873   26
>       1  15-MAY-02   8,480,568   22
>       1  16-MAY-02   5,711,059   15
>       1  17-MAY-02   1,902,406   32
>       1  18-MAY-02  13,463,213   19
>       1  19-MAY-02     747,431    2
>       1  20-MAY-02   9,780,205   12
>       1  21-MAY-02   6,745,098   11
>       1  22-MAY-02   5,984,587   10
>       1  23-MAY-02   5,991,105   15
>       1  24-MAY-02   1,847,971   12
>       1  25-MAY-02   4,311,367    2
> 29 rows selected.
>
> I doubled the redo log size from 50 MB to 100 MB on May 18. Also, I plan
> to have a standby database by the end of summer...after we can upgrade to
> 9i Release 2.
>
> Thanks,
>
> Debi
>
> > Two groups of 20 members apiece? That must be a misprint! Is that even
> > possible? Perhaps you mean 20 groups of 2 members apiece? That is still
> > astonishing, but not as astonishing as the way it reads originally...
> >
> > Just to make sure we have the right story, please post the results of the
> > following query:
> >
> > select thread#, group#, members, bytes/1048576 mb from v$log;
> >
> > Another query that might shed some light on the volume of redo that you
> > are generating is:
> >
> > select thread#, trunc(first_time) dt,
> > max(next_change#) - min(first_change#) changes,
> > count(*) cnt
> > from v$log_history
> > group by thread#, trunc(first_time);
> >
> > This will tell us how many redo log switches and how many individual redo
> > changes you are recording day by day. This information, coupled with
> > knowledge about the size of your online redo log files, should give us a
> > decent idea of how big your redo log files should be. There is generally
> > little reason to have more than 3-5 groups, and generally little reason
> > to have more than 2-3 members per group. If you are using a standby
> > database or Quest SharePlex (both are redo logfile sniffing replication
> > products, in essence), then there would be a reason for a greater number
> > of smaller-sized groups, but more than 2-3 members is still very
> > difficult to justify...
> >
> > If your poor old LGWR process is being forced to write to 20 members, then
> > it's no wonder you're seeing informational messages...
> >
> > ----- Original Message -----
> > To: "Multiple recipients of list ORACLE-L" <ORACLE-L_at_fatcity.com>
> > Sent: Saturday, May 25, 2002 6:28 PM
> >
> >
> > >
> > > After seeing the original post and replies, I checked out the analyzer
> > > at oraperf.com, as I am having performance problems for the first time
> > > since becoming a DBA for our student information system. Over time,
> > > we've gone from 7.3.4 to 8.1.7.3 64-bit, from Sequent to Sun Solaris,
> > > from 800 to 1200 users...underlying application changes (more database
> > > packages and procedures), and the database has never really been tuned!
> > > I think our server upgrades have masked the database tuning issues,
> > > until we get a busy period; then it shows!
> > >
> > > Anyway, the analyzer had some recommendations I put in place this
> > > weekend (my only opportunity to bounce the db) and I will be evaluating
> > > the impact of these changes on performance next week...however, one
> > > recommendation that puzzles me is to make my redo logs 9765 MB! I have
> > > two groups of 20. Last weekend I doubled their size from 50 to 100 MB,
> > > but 9765 MB?! I am getting errors in the alert log that indicate slow
> > > archiving of redo logs, "Failed to archive..." but then a minute or so
> > > later the log is archived. This change made no difference in the number
> > > of failures reported. I am hoping the log_buffer change recommended by
> > > the oraperf analyzer will help, but can anyone comment on that redo log
> > > size recommendation? I ran a variety of statspack reports through it
> > > and they all said the same thing!
> > >
> > > Thanks,
> > >
> > > Debi
> > >
> > >
> > >

