Re: Does Oracle keep a history of which blocks are/were loaded into the buffer cache?

From: Cary Millsap <cary.millsap_at_method-r.com>
Date: Thu, 16 Dec 2010 09:25:50 -0600
Message-ID: <AANLkTikw2-2wSgOTOi0069QBWC4OBeAONnc376upGJSO_at_mail.gmail.com>



Absolutely, working more closely with your developers will improve your chances of having a fast, efficient system. I believe that Donald Knuth was right, though: even the best developers are bad at guessing where their code spends its time. You have to *measure* it in real life. The only way to know which blocks your application visits is to measure the fact operationally.

Cary Millsap
Method R Corporation
http://method-r.com

On Thu, Dec 16, 2010 at 7:30 AM, Stefan Knecht <knecht.stefan_at_gmail.com> wrote:

> Hey Charles
>
> Buy them a beer and have a nice chat! Works all the time, even with
> developers ;-)
>
> Cheers
>
> =========================
>
> Stefan P Knecht
> CEO & Founder
> s_at_10046.ch
>
> 10046 Consulting GmbH
> Schwarzackerstrasse 29
> CH-8304 Wallisellen
> Switzerland
>
>
> Cell +41 (0) 79 571 36 27
> info_at_10046.ch
> http://www.10046.ch
>
> =========================
>
>
>
> On Thu, Dec 16, 2010 at 2:24 PM, Charles Schultz <sacrophyte_at_gmail.com> wrote:
>
>> Thanks for all the comments. Niall, I think you hit the nail on the head.
>> The problem is that my functional users and application support folks do not
>> have a complete picture of how the data is used in the first place, hence I
>> was researching the possibility of a "back door" analysis. Time to fire back
>> to the application support team, and the vendor especially, and have _them_
>> tell _me_ how they use the data.
>>
>> Cheers!
>>
>>
>> On Thu, Dec 16, 2010 at 04:27, Niall Litchfield <
>> niall.litchfield_at_gmail.com> wrote:
>>
>>> Petr,
>>>
>>> Cary's suggestion was to analyze the timed events that correspond
>>> directly to O/S physical read calls. Even if these are satisfied from the
>>> O/S filesystem or storage cache, each such call still marks the moment
>>> Oracle found it needed to read the block for the first time - i.e. a
>>> buffer cache miss.
>>>
>>> Charles,
>>>
>>> I'm pretty sure that unless you have a very good idea of which blocks
>>> contain which rows, you won't be able to determine definitively what
>>> you seem to be after - after all, the read calls are for blocks, not rows.
>>> I'd have thought you'd have better luck arguing for or against partitioning
>>> based on the data model and application use cases. You really need to know
>>> how people are going to want to retrieve data before considering
>>> partitioning - but you (or at least the business folk) probably do know
>>> this.
>>>
>>> On Thu, Dec 16, 2010 at 9:28 AM, Petr Novak <Petr.Novak_at_trivadis.com> wrote:
>>>
>>>> Hallo Charles,
>>>>
>>>> Is counting/aggregating the blocks read actually meaningful? Using this
>>>> information is probably misleading.
>>>> Even if 'new' and 'old' rows never share a block, the whole table may
>>>> already be in the cache, in which case your query causes no reads at all.
>>>> The other way to get wrong information is a missing index: a full scan
>>>> reads every block even when you need only a small number of rows.
>>>>
>>>>
>>>>
>>>> Best Regards,
>>>> Petr
>>>>
>>>>
>>>>
>>>>
>>>>
>>>> --------------------------------
>>>> From: oracle-l-bounce_at_freelists.org [oracle-l-bounce_at_freelists.org]
>>>> on behalf of Cary Millsap [cary.millsap_at_method-r.com]
>>>> Sent: Thursday, 16 December 2010 05:45
>>>> To: sacrophyte_at_gmail.com
>>>> Cc: Bobak, Mark; ORACLE-L
>>>> Subject: Re: Does Oracle keep a history of which blocks are/were loaded
>>>> into the buffer cache?
>>>>
>>>> Charles,
>>>>
>>>> If you had a directory full of trace files and wanted to see what times
>>>> your blocks were read in, you could grep for the 'db.*read' events and then
>>>> sort by the tim value on each line (assuming you're using Oracle release
>>>> 10.2+).
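>>>> A minimal sketch of that grep-and-sort pipeline. The trace lines below are
>>>> synthetic, made up for illustration; the field layout assumes the 10.2-style
>>>> WAIT line format, so check it against your own trace files first:

```shell
# Synthetic 10.2-style trace lines, for illustration only
cat > sample_ora_1234.trc <<'EOF'
WAIT #3: nam='db file sequential read' ela= 421 file#=4 block#=1730 blocks=1 obj#=52 tim=1291875000002
WAIT #3: nam='db file scattered read' ela= 902 file#=4 block#=1604 blocks=8 obj#=52 tim=1291875000001
WAIT #1: nam='SQL*Net message from client' ela= 300 driver id=1650815232 tim=1291875000003
EOF

# Keep only the read events, then order them by the tim= value at the end of each line
grep "db.*read" sample_ora_1234.trc \
  | awk -F'tim=' '{print $2 "\t" $0}' \
  | sort -n \
  | cut -f2-
```

>>>> Against a whole directory you would grep *.trc instead of a single file.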
>>>>
>>>> We have two software tools that you might be interested in knowing
>>>> about. If you passed your "grep ... | sort" data that I mentioned above to
>>>> our mrnl tool (http://method-r.com/component/content/article/116), it
>>>> would show you the wall time at which each read call was made. If you wanted
>>>> to filter or aggregate your trace data (using group-by and where-type
>>>> features), that's what our mrskew tool does. You can see some examples at
>>>> http://method-r.com/component/content/article/117.
>>>>
>>>>
>>>> Cary Millsap
>>>> Method R Corporation
>>>> http://method-r.com
>>>>
>>>>
>>>> On Tue, Dec 14, 2010 at 7:31 PM, Charles Schultz <sacrophyte_at_gmail.com> wrote:
>>>> Mark,
>>>>
>>>> Yeah, I realized my goof with logminer as soon as I sent it. =) Oh well.
>>>>
>>>> Fortunately, I am not overly worried about the overhead associated with
>>>> SQL_TRACE; that is an interesting idea. I am not so sure I could sell that
>>>> on a Production database, though. v$segment_stats sounds like the best way
>>>> to at least get some idea. As I mentioned to another lister, partitioning is
>>>> an option on the table but I really want to justify it first before I
>>>> blindly offer it as a solution. *grin* The table is not currently
>>>> partitioned, hence my analysis.
>>>>
>>>> Thanks for the ideas.
>>>>
>>>> On Tue, Dec 14, 2010 at 17:31, Bobak, Mark <Mark.Bobak_at_proquest.com> wrote:
>>>> Charles,
>>>>
>>>> I’m pretty sure you’re out of luck. Consider that blocks are loaded
>>>> from disk to satisfy queries, as well as DML, and logminer will have nothing
>>>> to say about that. You can enable SQL_TRACE, and it will capture
>>>> file#/block# data, but at great overhead and expense.
>>>>
>>>> Oracle just doesn’t track to that level of detail, as far as I know.
>>>> You can look at V$SEGMENT_STATISTICS, to get an idea of which segments have
>>>> more physical I/O happening on them, but no detail about which set of blocks
>>>> in a table were most recently loaded from disk. If the table is partitioned
>>>> by date, and the partitions closely align w/ the level of granularity that
>>>> you require for your analysis, that may allow you to use
>>>> V$SEGMENT_STATISTICS and periodic snapshots to determine if newer data is
>>>> read from disk more recently than old data.
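>>>> A sketch of that snapshot idea. V$SEGMENT_STATISTICS and its columns are
>>>> the documented interface; the snapshot table, its schedule, and the table
>>>> name are assumptions:

```sql
-- Hypothetical snapshot table: capture 'physical reads' per segment on a schedule
CREATE TABLE seg_stat_snap AS
SELECT SYSDATE AS snap_time,
       owner, object_name, subobject_name,
       value AS physical_reads
FROM   v$segment_statistics
WHERE  statistic_name = 'physical reads'
AND    object_name    = 'MY_BIG_TABLE';   -- hypothetical table name

-- Diff two snapshots: which partitions (subobject_name) were read in the interval?
SELECT a.subobject_name,
       b.physical_reads - a.physical_reads AS reads_in_interval
FROM   seg_stat_snap a
JOIN   seg_stat_snap b
  ON   b.subobject_name = a.subobject_name
 AND   b.snap_time      > a.snap_time
ORDER  BY reads_in_interval DESC;
```

>>>> If the table were range-partitioned by date, subobject_name would map each
>>>> counter straight to a date range.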
>>>>
>>>> Just some thoughts... hope that helps,
>>>>
>>>> -Mark
>>>>
>>>> From: oracle-l-bounce_at_freelists.org
>>>> [mailto:oracle-l-bounce_at_freelists.org] On Behalf Of Charles Schultz
>>>> Sent: Tuesday, December 14, 2010 5:47 PM
>>>> To: ORACLE-L
>>>> Subject: Does Oracle keep a history of which blocks are/were loaded into
>>>> the buffer cache?
>>>>
>>>> Good day,
>>>>
>>>> I am trying to analyze a given table in an effort to determine how often
>>>> the older records are used compared to the newer records. I know I can go
>>>> against the AWR and determine the age of some statements by searching for a
>>>> date column in the table and grabbing the associated bind variable, but this
>>>> obviously misses any queries that limit data via a join (i.e., no bind
>>>> variable). Is there a way to determine when a particular range of datafile
>>>> blocks was last loaded from disk, or how often? I did some digging in
>>>> x$bh, but that has no history (that I know of). In the meantime, I'll start
>>>> up a LogMiner session and scrape all the ROWIDs of interest, but this is
>>>> not a pretty solution. Maybe the only solution?
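>>>> (For what it's worth, a point-in-time picture is also available from v$bh
>>>> rather than x$bh - both views are real, the object name below is a
>>>> placeholder - but again with no history:)

```sql
-- Which blocks of the table are in the buffer cache right now (no history kept)
SELECT bh.file#, bh.block#, bh.status
FROM   v$bh bh
JOIN   dba_objects o
  ON   o.data_object_id = bh.objd
WHERE  o.object_name = 'MY_BIG_TABLE'   -- placeholder
ORDER  BY bh.file#, bh.block#;
```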
>>>>
>>>> --
>>>> Charles Schultz
>>>>
>>>>
>>>>
>>>> --
>>>> Charles Schultz
>>>>
>>>> --
>>>> http://www.freelists.org/webpage/oracle-l
>>>>
>>>>
>>>>
>>>
>>>
>>> --
>>> Niall Litchfield
>>> Oracle DBA
>>> http://www.orawin.info
>>>
>>
>>
>>
>> --
>> Charles Schultz
>>
>
>

Received on Thu Dec 16 2010 - 09:25:50 CST