serial direct reads

From: vsevolod afanassiev <vsevolod.afanassiev_at_gmail.com>
Date: Thu, 28 Aug 2014 01:41:02 -0700 (PDT)
Message-ID: <19b70377-3d56-4b81-903d-af08c7a7a889_at_googlegroups.com>



I have seen several examples of the direct reads issue. This is what typically happens:
- there is a medium-sized table, let's say 500 MB. It is less than 10% of the db_cache_size (7 GB).

It appears that the only place where most of the information on serial direct reads is collected together is the "hacking session" video by Tanel Poder: http://blog.tanelpoder.com/2012/09/03/optimizer-statistics-driven-direct-path-read-decision-for-full-table-scans-_direct_read_decision_statistics_driven/ This is what I understood from the video and the subsequent blog post:

  1. Back in the good old days (before 11.1.0.6) everything was simple. There was no such thing as a serial direct read (maybe in the Export utility with DIRECT=Y?). All conventional reads were buffered (I am not talking about reading from TEMP or parallel query). There was a parameter _small_table_threshold. It was expressed in blocks and by default it was set to 2% of the buffer cache size.

When a table was accessed through a full table scan (FTS), Oracle compared the table size with the value of _small_table_threshold. The table size came from the segment header block - I assume it is always kept in sync with SYS.SEG$ (DBA_SEGMENTS). This was a run-time decision, made for every execution. As I understood it, there was no multiplier, so if db_block_buffers was 10,000 blocks then _small_table_threshold was 200 blocks and the table size was compared with 200 blocks (not with 5*200).
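
If you want to see the current value, hidden parameters such as _small_table_threshold are not exposed in V$PARAMETER; a query like the following against the X$ fixed tables (run as SYS) shows it - a minimal sketch:

    SELECT n.ksppinm  AS parameter,
           v.ksppstvl AS value
    FROM   x$ksppi  n,
           x$ksppcv v
    WHERE  n.indx    = v.indx
    AND    n.ksppinm = '_small_table_threshold';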

If it was a small table then the blocks were placed at the most recently used end of the LRU list and so they were kept in the buffer cache for longer. If it was a large table then the blocks were placed at the other end of the list and aged out sooner.

As the table size came from the segment size there was no way to change this value for a particular table by fiddling with statistics. The whole thing was simple and reliable. It was also possible to do "ALTER TABLE name CACHE" and the table would then be treated as a small table irrespective of its actual size.
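
For example (the schema and table name here are just placeholders):

    ALTER TABLE scott.hot_lookup CACHE;

    -- confirm the setting
    SELECT table_name, cache
    FROM   dba_tables
    WHERE  owner      = 'SCOTT'
    AND    table_name = 'HOT_LOOKUP';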

2. Then 11.1.0.6 came along and things changed. Now if the table size exceeded 5 * _small_table_threshold Oracle would do a serial direct path read. I doubt that it was documented in the New Features Guide. There was some dissatisfaction in the user community and MOS note 793845.1 appeared, telling the readers that 'perhaps the query could be tuned?'.
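
To put numbers on the opening example (7 GB buffer cache, 500 MB table), assuming an 8 KB block size - the figures are illustrative only:

    SELECT ROUND(7*1024*1024*1024/8192)        AS cache_blocks,          -- ~917,504
           ROUND(0.02*7*1024*1024*1024/8192)   AS small_table_threshold, -- ~18,350
           ROUND(5*0.02*7*1024*1024*1024/8192) AS direct_read_cutoff,    -- ~91,750
           ROUND(500*1024*1024/8192)           AS table_blocks           -- 64,000
    FROM   dual;

At 64,000 blocks the 500 MB table sits below the 5 * _small_table_threshold cutoff, so under the plain 11.1 rule it would still be read through the buffer cache; it is the later changes that can push it into direct reads.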

3. Things got even "better" in 11.2.0.2:
- 3.1 Now Oracle also takes into account how many blocks of the table are already cached, however the precise algorithm isn't known.
- 3.2 Instead of using the segment size as the table size, Oracle uses the table size from the optimizer statistics. There is a parameter for it called _direct_read_decision_statistics_driven. By default it is TRUE (see the sketch after this list).
- 3.3 The decision whether to use a direct read or a buffered read isn't made for every execution; it is a property of the child cursor.
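
Because the decision is now driven by the BLOCKS statistic rather than the segment size, it is worth comparing the two for the table in question - a minimal sketch (owner and table name are placeholders, and the table is assumed to be non-partitioned):

    SELECT t.blocks AS stats_blocks,
           s.blocks AS segment_blocks,
           t.last_analyzed
    FROM   dba_tables   t,
           dba_segments s
    WHERE  t.owner        = 'SCOTT'
    AND    t.table_name   = 'HOT_LOOKUP'
    AND    s.owner        = t.owner
    AND    s.segment_name = t.table_name
    AND    s.segment_type = 'TABLE';

A large gap between the two (e.g. stale statistics) means the direct read decision is based on a size the table no longer has.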

So if we face the issue of Oracle using direct reads for a relatively small, frequently accessed table, we need to do the following (see the sketch after this list):
- adjust the number of blocks using DBMS_STATS.SET_TABLE_STATS to less than 10% of the db cache size (use the minimum value of db_cache_size)
- do ALTER TABLE CACHE
- force a re-parse
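
A minimal sketch of those three steps, assuming a schema SCOTT, a table HOT_LOOKUP and a NUMBLKS value suited to the 7 GB cache in the example above (all of these are placeholders to adjust):

    BEGIN
      DBMS_STATS.SET_TABLE_STATS(
        ownname       => 'SCOTT',
        tabname       => 'HOT_LOOKUP',
        numblks       => 50000,   -- keep well below 10% of the buffer cache in blocks
        no_invalidate => FALSE);  -- invalidate dependent cursors to force a re-parse
    END;
    /

    ALTER TABLE scott.hot_lookup CACHE;

Setting no_invalidate => FALSE takes care of the re-parse for existing cursors; purging the specific cursor with DBMS_SHARED_POOL.PURGE is an alternative.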
Received on Thu Aug 28 2014 - 10:41:02 CEST
