Re: Performance tuning of a system

From: Kellyn Pot'vin <kellyn.potvin_at_ymail.com>
Date: Sun, 9 Sep 2012 14:22:34 -0700 (PDT)
Message-ID: <1347225754.62642.YahooMailNeo_at_web121003.mail.ne1.yahoo.com>



I don't know the answer to this, but I would be interested to know: if you DID change the parameters for how often ASH is flushed to AWR, would the ASH and AWR calculations in the reports still be correct?  Would it then be essential to query the raw ASH data directly to perform a correct analysis? I was surprised to see this, because in the last "Oracle Supported" ASH/AWR presentation on this subject the presenters were quite clear that we should NOT be changing how often it is flushed, and that "...the metric for flushing this was carefully calculated to work with the rest of the design..."
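
If it helps frame the question: the usual back-of-the-envelope math treats each in-memory ASH sample as roughly 1 second of DB time and each row persisted to AWR as roughly 10 seconds (1-in-10 retention). A minimal sketch, assuming those defaults and an arbitrary one-hour window - if the flush/retention ratio were changed, the multiplier in the second query would presumably no longer be right:

  -- in-memory ASH: each sample ~= 1 second of DB time
  select sql_id, count(*) as approx_db_secs
  from   v$active_session_history
  where  sample_time > sysdate - 1/24
  group by sql_id
  order by 2 desc;

  -- AWR copy: each retained sample ~= 10 seconds of DB time
  select sql_id, count(*) * 10 as approx_db_secs
  from   dba_hist_active_sess_history
  where  sample_time > sysdate - 1/24
  group by sql_id
  order by 2 desc;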

Something to think about and query...

Thanks!

 
Kellyn Pot'Vin
Senior Technical Consultant
Enkitec
DBAKevlar.com
RMOUG Director of Training Days 2013

~Fall down seven times, get up eight!



From: Martin Berger <martin.a.berger_at_gmail.com>
To: s.cislaghi_at_gmail.com
Cc: kyle Hailey <kylelf_at_gmail.com>; Oracle L <oracle-l_at_freelists.org>
Sent: Sunday, September 9, 2012 2:50 PM
Subject: Re: Performance tuning of a system

Ste,

do you have any chance to pin down a 'transaction' for sure? Something like a dbms_application_info setting?
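
For example (a minimal sketch - the module/action names here are made up), the appserver could tag each 'transaction' so the tag shows up in every ASH sample for that session:

  -- set at the start of each application 'transaction'
  begin
    dbms_application_info.set_module(module_name => 'ORDER_FLOW',   -- hypothetical
                                     action_name => 'SUBMIT');      -- hypothetical
  end;
  /

  -- later, ASH samples can be filtered on it
  select sample_time, session_id, sql_id, event
  from   v$active_session_history
  where  module = 'ORDER_FLOW'
  and    action = 'SUBMIT'
  order by sample_time;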

My problem:

* currently we do not know which part of your 'transactions' is slow.
* it's rare and cannot be reproduced at will.
* with a target of 5 sec, even a 'bad' query of 30 sec is hard to find
in dba_hist_active_sess_history - you would need to sample
v$active_session_history manually immediately after the issue arises
(see the sketch below).
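
Something like this, run right after a slow 'transaction' is reported, is what I mean by sampling manually (a sketch - the 10-minute window and the commented module filter are assumptions):

  select sample_time, session_id, sql_id, event, wait_class
  from   v$active_session_history
  where  sample_time > systimestamp - interval '10' minute
  --and  module = 'ORDER_FLOW'   -- if the application tags its sessions
  order by sample_time;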

Doug Burns wrote about how ASH and AWR sampling can be manipulated: http://oracledoug.com/serendipity/index.php?/archives/1395-ASH-and-the-psychology-of-Hidden-Parameters.html But I would not like to go that way in production without a good reason and support.

That's why I come back to identifying the bad 'transactions'. If you can pin those down, they should still give enough samples over some days to be usable with statistical methods.

Do you have a tracking table where all 'transactions' - and their timings - are tracked by the application?
If you have, you can create a trigger there and do your own 'flush v$active_session_history into a persistent table'.
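
A rough sketch of that idea (the tracking table, its elapsed_seconds column, and the capture table are all invented names; the trigger owner needs a direct grant on v_$active_session_history):

  -- persistent copy of the interesting ASH rows
  create table my_ash_capture as
    select * from v$active_session_history where 1 = 0;

  create or replace trigger trg_capture_slow_txn
  after insert on app_txn_tracking               -- hypothetical tracking table
  for each row
  when (new.elapsed_seconds > 5)                 -- hypothetical column
  begin
    -- copy the last few minutes of in-memory ASH before it ages out
    insert into my_ash_capture
    select *
    from   v$active_session_history
    where  sample_time > systimestamp
                         - numtodsinterval(:new.elapsed_seconds + 60, 'SECOND');
  end;
  /

The trigger fires in the inserting session, so it adds a little work there; queueing a job from the trigger would be gentler, but the simple version shows the idea.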

hth,
Martin

On Sun, Sep 9, 2012 at 6:16 PM, Stefano Cislaghi <s.cislaghi_at_gmail.com> wrote:
> Hi Kyle,
>
> your queries are very interesting and I will check them ASAP. Anyway I
> have gathered some more information about the problem.
> The constraint is to have all transactions complete in less than 5
> seconds. A transaction is not a single statement but, from the
> application's point of view:
>
> - a few selects against the db
> - process the data (in the appserver)
> - run 2 or 3 stored procedures
>
> So we know that almost 99% complete in under 2 sec, but sometimes
> others take more than 5 sec, even 20-30 sec. The DB is not directly
> implicated, but we have been asked to provide whatever information we
> can for debugging. IMHO this is quite hard because the 'long'
> transactions happen every day in totally different time windows; I do
> not have enough data, even with AWR etc., to say what was happening,
> for instance, yesterday at 16:44:56 when a transaction took 33 sec
> instead of 2. From the application we know that during the long
> transaction there was no abnormal load on the db, I mean no abnormal
> number of transactions in the same second. Disk speed? Maybe, but how
> do I get the disk queue length or any other interesting info from the DB?
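
On the disk question: from inside the database you mostly see User I/O / System I/O wait events rather than an OS queue depth, so for the queue itself you'd still need iostat/sar or similar. A sketch against the AWR copy of ASH (yesterday's window is long gone from the in-memory buffer), with the time window simply taken from the 16:44:56 example above - an assumption, adjust as needed:

  select event,
         count(*) as samples,     -- each retained sample ~= 10s of DB time
         round(avg(case when time_waited > 0 then time_waited end) / 1000, 1)
           as avg_ms_completed_waits
  from   dba_hist_active_sess_history
  where  wait_class in ('User I/O', 'System I/O')
  and    sample_time between to_timestamp('2012-09-08 16:40', 'YYYY-MM-DD HH24:MI')
                         and to_timestamp('2012-09-08 16:50', 'YYYY-MM-DD HH24:MI')
  group by event
  order by samples desc;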

--
http://www.freelists.org/webpage/oracle-l
