RE: Accessing ASH is slow

From: Mark W. Farnham <mwf_at_rsiz.com>
Date: Mon, 22 Jul 2013 15:00:20 -0400
Message-ID: <01df01ce870d$b7070be0$251523a0$_at_rsiz.com>



I defy anyone to state a useful operational definition of "best practice" that is not self-contradictory. (For starters, knowing that something is "best" requires either a logical construct with agreed premises that proves no other practice can be better, or else an enumeration of all possible practices and agreement that each of them is inferior. I suggest you try for perpetual motion first.) That pet peeve out of the way:

I personally think it is a bad practice to consume production RDBMS cycles doing ad hoc analysis of metrics.

Rather, I would suggest that such data be copied (meaning some data transport, but reading any given window of data only once) into a data warehouse for the DBA.
The destination schema would of course *NOT* be SYS; Gormanesque scaling to infinity is plausible; and on your DBA data warehouse you can feel free to add indexes and aggregates to your heart's content.
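
A minimal sketch of the incremental copy I have in mind, assuming a warehouse schema DBADW and a database link PROD back to production (all names hypothetical):

-- One-time setup: clone the structure of the AWR ASH table.
create table dbadw.my_ash as
select * from dba_hist_active_sess_history@prod where 1 = 0;

-- Incremental pull: read each window of samples exactly once,
-- keyed off the highest sample time already copied.
insert /*+ append */ into dbadw.my_ash
select h.*
from   dba_hist_active_sess_history@prod h
where  h.sample_time >
       (select nvl(max(sample_time), date '1970-01-01') from dbadw.my_ash);
commit;

-- On the warehouse you are free to index and aggregate at will.
create index dbadw.my_ash_ix1 on dbadw.my_ash (sql_id, sample_time);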

Note that I have not opined on standard reports that pretty much scan everything once occasionally (like ADDM) as regards consuming production resources, nor on the expense of collecting the standard metrics now built into Oracle. If you're licensed to use them, it seems likely the collection of such metrics is worthwhile, even if only for the insurance value that they exist. (If you're not licensed to use them, that is a financial statement on the value of performance to your organization. Since your question involves ASH, it seems likely you have the appropriate licenses.)

IF Oracle provided an option to deposit metric collection results in a destination other than the production database, I tend to think that would work out nicely in conserving overall throughput (though there is potentially a network-traffic argument if the repository is off the physical host of the production database).

Of course, if the resources to set up a warehouse for the DBA are not available, then you're stuck on the production database. Whether pulling the SYS contents you want to repeatedly analyze into a different schema you can safely manipulate is an effective strategy probably varies from case to case.
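
As a minimal sketch of that pull-once-then-analyze variant, assuming a scratch schema you own (SCRATCH here is hypothetical):

-- Copy the window of interest once, then run all repeated
-- analysis against the copy instead of the fixed tables.
create table scratch.ash_snap as
select *
from   gv$active_session_history
where  sample_time > sysdate - 1;

create index scratch.ash_snap_ix1 on scratch.ash_snap (sql_id, sample_time);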

mwf

-----Original Message-----

From: oracle-l-bounce_at_freelists.org [mailto:oracle-l-bounce_at_freelists.org] On Behalf Of Martin Klier
Sent: Monday, July 22, 2013 8:52 AM
To: ORACLE-L
Subject: Accessing ASH is slow

Dear listers,

accessing v$active_session_history is quite slow when fetching data from a wide time window. What's the best practice to get ASH queries back with decent performance?

For example, to find all entries for a SQL_ID from the last 24 hours:

select *
from   gv$active_session_history
where  sample_time > sysdate - 1
and    sql_id = 'f29fxwd5kh2pq';
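
(For a 24-hour window, most samples will already have aged out of the in-memory buffer into AWR, which persists one sample in ten; the equivalent lookup against the persisted copy would be something like:)

select *
from   dba_hist_active_sess_history
where  sample_time > sysdate - 1
and    sql_id = 'f29fxwd5kh2pq';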

The gv$ query above ends up with two full table scans on X$ASH and X$KEWASH, plus awfully estimated cardinality. I dared to create table stats with histograms there, and the cardinality estimates became realistic. But it seems the two tables don't have any index that could improve the access path, and my daring did not extend to creating objects in SYS. :)
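
(For reference, the supported way to refresh statistics on the X$ fixed tables, without hand-creating anything in SYS, is the fixed-objects gather:)

begin
  -- gathers optimizer statistics for all fixed objects in one call
  dbms_stats.gather_fixed_objects_stats;
end;
/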

So what do YOU do when your access to the performance repositories is too slow due to the sheer mass of collected data? (I don't want to duplicate the view/table; I'm not currently solving a problem, I'm working on a concept that I can bring out as soon as a customer system needs this analysis. I simply can't waste the time and space then...)

Thanks a lot in advance
Martin
--

Usn's IT Blog for Oracle and Linux
http://www.usn-it.de

--

http://www.freelists.org/webpage/oracle-l
