Newsgroups: comp.databases.oracle.server
Date: Thu, 28 Aug 2014 01:41:02 -0700 (PDT)
Message-ID: <19b70377-3d56-4b81-903d-af08c7a7a889@googlegroups.com>
Subject: serial direct reads
From: vsevolod afanassiev

I have seen several examples of the direct reads issue. This is what typically happens:

- there is a medium-sized table, let's say 500 MB. It is less than 10% of the db_cache_size (7 GB).
- the table is frequently accessed through full table scans (FTS). A scan takes 0.5 seconds as most blocks are in the buffer cache.
- as this table grows, its size exceeds 10% of the buffer cache and Oracle switches to direct reads. Now the same query takes 15 seconds.

It appears that the only place where most of the information on serial direct reads is collected together is the "hacking session" video by Tanel Poder:

http://blog.tanelpoder.com/2012/09/03/optimizer-statistics-driven-direct-path-read-decision-for-full-table-scans-_direct_read_decision_statistics_driven/

This is what I understood from this video and the subsequent blog post:

1.
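A quick way to confirm you are hitting this (a sketch only - the owner APP and table BIG_TABLE are placeholder names for your own objects):

```sql
-- Compare the segment size with the buffer cache size.
SELECT bytes/1024/1024 AS seg_mb
FROM   dba_segments
WHERE  owner = 'APP' AND segment_name = 'BIG_TABLE';

SELECT current_size/1024/1024 AS cache_mb
FROM   v$sga_dynamic_components
WHERE  component = 'DEFAULT buffer cache';

-- If the FTS has switched to direct reads, 'direct path read' waits
-- climb instead of 'db file scattered read'.
SELECT event, total_waits
FROM   v$system_event
WHERE  event IN ('direct path read', 'db file scattered read');
```

If seg_mb is around 10% of cache_mb and 'direct path read' waits are growing, the symptom below is the likely cause.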
Back in the good old days (before 11.1.0.6) everything was simple. There was no such thing as a serial direct read (maybe in the Export utility with DIRECT=Y?). All conventional reads were buffered (I am not talking about reading from TEMP or parallel query). There was a parameter _small_table_threshold. It was expressed in blocks and by default was set to 2% of the buffer cache size.

When a table was accessed through FTS, Oracle compared the table size with the value of _small_table_threshold. The table size came from the segment header block - I assume it is always kept in sync with SYS.SEG$ (DBA_SEGMENTS). This was a run-time decision, made for every execution. As I understood it, there was no multiplier, so if db_block_buffers was 10,000 blocks then _small_table_threshold was 200 blocks, and the table size was compared with 200 blocks (not with 5*200).

If it was a small table, the blocks were placed at the most recently used end of the LRU list and so were kept in the buffer for longer. If it was a large table, the blocks were placed at the other end of the list and expired sooner.

As the table size came from the segment size, there was no way to change this value for a particular table by fiddling with statistics. The whole thing was simple and reliable. It was possible to do "ALTER TABLE name CACHE" and the table would be treated as a small table irrespective of its actual size.

2. Then 11.1.0.6 came along and things changed. Now if the table size exceeded 5 * _small_table_threshold, Oracle would do a serial direct path read. I doubt that this was documented in the New Features Guide. There was some dissatisfaction in the user community, and MOS note 793845.1 appeared telling the readers that 'perhaps the query could be tuned?'.

3. Things got even "better" in 11.2.0.2:

- 3.1 Now Oracle also takes into account how many blocks are cached; however, the precise algorithm isn't known.
- 3.2 Instead of using the segment size as the table size, Oracle uses the table size from statistics. There is a parameter for this called _direct_read_decision_statistics_driven. By default it is TRUE.
- 3.3 The decision whether to use a direct read or a buffered read isn't made for every execution; it is a property of the child cursor.

So if we face the issue of Oracle using direct reads for a relatively small, frequently accessed table, we need to do the following:

- adjust the number of blocks using DBMS_STATS.SET_TABLE_STATS to less than 10% of the db cache size (use the minimum value of db_cache_size)
- do ALTER TABLE CACHE
- force a re-parse
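The three steps above could be sketched like this (hedged: APP/HOT_TABLE and the block count are hypothetical, and since 3.2 says the decision is statistics-driven, the effect depends on _direct_read_decision_statistics_driven being left at TRUE):

```sql
-- Set the table's block count in the dictionary statistics well below
-- the threshold, so the statistics-driven decision (3.2) picks a
-- buffered read. no_invalidate => FALSE invalidates dependent child
-- cursors, which forces the re-parse needed because of 3.3.
BEGIN
  DBMS_STATS.SET_TABLE_STATS(
    ownname       => 'APP',
    tabname       => 'HOT_TABLE',
    numblks       => 10000,       -- hypothetical value, < 10% of cache
    no_invalidate => FALSE);      -- invalidate cursors => re-parse
END;
/

-- Keep the blocks at the hot end of the LRU list, as in pre-11g days.
ALTER TABLE app.hot_table CACHE;
```

Note the obvious downside: the faked numblks will also be seen by the optimizer when costing plans, so this is a trade-off rather than a clean fix.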