Re: Latch free waits..

From: Ryan Gaffuri <rgaffuri_at_cox.net>
Date: 5 Aug 2004 07:58:45 -0700
Message-ID: <1efdad5b.0408050658.d39c51c@posting.google.com>


"Richard Foote" <richard.foote_at_bigpond.nospam.com> wrote in message news:<xBMNc.20376$K53.3804_at_news-server.bigpond.net.au>...
> "Ron" <support_at_dbainfopower.com> wrote in message
> news:t96dnWl8DP4nTpvcRVn-vg_at_comcast.com...
> >
> > Hello Thomas,
> >
> > As a start point, please generate statspack report during the time
> > database experiences bad performance and compare it to the statspack
> report
> > from the time when performance is acceptable.
> >
> > Statspack report provides you with information on what database is
> waiting
> > most and in the latch section of the report what latches are the problem
> > ones.
> >
> > If it is acceptable, please share generated statspack report with the
> > group for further review (no need to send SQL part)
> >
>
> Actually Ron, using statspack to determine performance issues for these
> types of scenarios is entirely the *wrong* thing to do.
>
> Why ?
>
> Because the OP is complaining about particular sessions performing
> sub-optimally, perhaps/perhaps not because of latch contention issues. I say
> perhaps not, because without determining the *particular* bottlenecks for
> the *sessions* in question, one would only be guessing at what the root
> cause of the problems really are.
>
> The problem with statspack is that it provides information at the *database*
> level which in most/many cases will totally drown out the wait information
> associated with the transaction performance that matters most to the
> business/OP. What may look like issues at the database level may have
> absolutely *nothing* to do with the problems needing to be addressed by the
> OP. Using statistics generated and summarised from potentially 10,000s of
> different transactions may very well be totally misleading when addressing
> issues faced by the "few" transactions in question/interest.
>
> At the end of the day, using such data to diagnose these issues means you're
> only *guessing and hoping* you're on the right track. The infamous Method C
> approach to performance tuning ...
>
> A far better "start point" for the OP to take would be to trace just those
> sessions that are "slowing down" with a level 12 10046 event and see
> *exactly* what is causing the slow down. No summarisations, no taking all
> the other transactions data into the mix, no guessing and hoping you might
> stumble on the real issue(s), but *know for sure exactly* what the
> associated waits and resources are that are causing the slow down. Once you
> know what the issues are, what's most affecting the response times, then
> you'll know what actions to take to address them.
>
> Ron, it's time you dumped Method C and moved onto Method R ...
>
> Cheers
>
> Richard
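[For readers unfamiliar with the event Richard mentions: a minimal sketch of enabling a level 12 10046 trace, which captures both bind variables and wait events. The SID/serial# values are placeholders you would take from V$SESSION; DBMS_SYSTEM.SET_EV is an undocumented but widely used package, so check it is available on your version before relying on it.]

```sql
-- In your own session: turn the level 12 trace on, run the slow
-- workload, then turn it off.
ALTER SESSION SET events '10046 trace name context forever, level 12';
-- ... run the problem SQL here ...
ALTER SESSION SET events '10046 trace name context off';

-- For somebody else's session (123/456 are placeholder SID/serial#
-- values looked up in V$SESSION):
EXECUTE sys.dbms_system.set_ev(123, 456, 10046, 12, '');
```

The resulting trace file lands in the instance's user_dump_dest and can be summarised with tkprof.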

The problem I am running into with Method R in the real world is that most of the projects I am on use middle-tier connection pooling (it gets even messier when you throw in RAC). How do you tell which end user is connected to the database? On my last project the middle tier grabbed a NEW session for every SQL statement.

The only way to use method R in this type of environment is to build a debug mode into the middle tier that allowed you to enable tracing. The problem with that is convincing management and people in the middle tier group to allocate the time to do it. I haven't been able to do that.
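[One hedged sketch of what such a "debug mode" could look like on 9i and later, assuming the middle-tier group can be persuaded to add one call per pooled connection: tag each checkout with the end user's name via DBMS_SESSION.SET_IDENTIFIER, then trace only the sessions carrying that tag. The user name 'jsmith' and the bind values are placeholders.]

```sql
-- Middle tier, immediately after checking a connection out of the pool:
EXECUTE dbms_session.set_identifier('jsmith');  -- placeholder end-user name

-- DBA side: find the pooled sessions currently doing that user's work...
SELECT sid, serial#, client_identifier
  FROM v$session
 WHERE client_identifier = 'jsmith';

-- ...and enable a level 12 10046 trace on just those (placeholder values):
EXECUTE sys.dbms_system.set_ev(:sid, :serial, 10046, 12, '');
```

This only works if the tag is set before the user's SQL runs, which is exactly the middle-tier change that needs management buy-in.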

On my last project, we were in acceptance testing and our users did not like overtime, so by 5 PM they were pretty much all logged off. I was able to get around this by waiting until 7 PM (to be safe), using a logon trigger to enable tracing, and having someone log in (after identifying exactly what the user did that led to the slow performance). This is somewhat problematic. First, I cannot start until the evening; second, I run the risk of someone else using the application; third, each session has a different trace file; and fourth, we were using RAC, so I had to hunt between the 4 different udump directories for the 10046 trace.
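[For the record, a sketch of the kind of logon trigger described above. TEST_USER is a placeholder for the account the tester logs in with; drop the trigger once the trace is collected.]

```sql
-- Enable a level 12 10046 trace for one specific account at logon.
CREATE OR REPLACE TRIGGER trace_test_user
AFTER LOGON ON DATABASE
BEGIN
  IF USER = 'TEST_USER' THEN  -- placeholder account name
    EXECUTE IMMEDIATE
      'ALTER SESSION SET events ' ||
      '''10046 trace name context forever, level 12''';
  END IF;
END;
/
```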

Once I had that information, it was very easy to identify bottlenecks. However, just getting it is the problem, plus having to wait until 7 PM to get started...

Has anyone built, or had built for them, a way to implement per-user tracing in a middle tier? I got rebuffed repeatedly.

Received on Thu Aug 05 2004 - 09:58:45 CDT
