Re: table doesn't get cached

From: ddf <oratune_at_msn.com>
Date: Fri, 18 Oct 2013 08:04:13 -0700 (PDT)
Message-ID: <14a70cd4-61c3-48b8-bc2f-53f6c4b51935_at_googlegroups.com>


On Thursday, October 17, 2013 11:38:19 PM UTC-6, vsevolod afanassiev wrote:
> We have an application where many queries perform full table scans (for various reasons). The data volume is small (tables and indexes = 16 GB), so we configured db_cache_size = 7 GB and hoped that the most frequently accessed data would get cached. This works for all tables but one; let's call it TABLE1. It is a normal, non-partitioned table, 818 MB in size, with 16 columns including 2 LOB columns. The LOBs have been defined with in-line storage, but some values exceed the threshold (I think around 4000 bytes), so one LOB segment is 203 MB and the other is 1272 MB. We set the CACHE property for the table (DBA_TABLES.CACHE='Y') and for the LOB columns (DBA_LOBS.CACHE='YES'). Still, during full table scans the number of physical reads is almost equal to the number of buffer gets - example below. The number of INSERT/UPDATE/DELETE operations against this table is small.
>
> What could cause such poor caching?
>
> This is 11.2.0.3 on AIX
>
> STATSPACK SQL report for Old Hash Value: 1674743855 Module: xxxxxxx
>
> DB Name         DB Id    Instance     Inst Num Release     RAC Host
> ------------ ----------- ------------ -------- ----------- --- ----------------
> XXXXX          430379916 XXXXX               1 11.2.0.3.0  NO
>
>  Start Id     Start Time         End Id     End Time       Duration(mins)
> --------- ------------------- --------- ------------------- --------------
>     13087 18-Oct-13 15:00:03      13088 18-Oct-13 15:07:19            7.27
>
> SQL Statistics
> ~~~~~~~~~~~~~~
> -> CPU and Elapsed Time are in seconds (s) for Statement Total and in
>    milliseconds (ms) for Per Execute
>                                                              % Snap
>                      Statement Total      Per Execute         Total
>                      ---------------  ---------------        ------
>         Buffer Gets:       1,577,265        105,151.0         30.93
>          Disk Reads:       1,560,464        104,030.9         86.54
>      Rows processed:               0              0.0
>      CPU Time(s/ms):              18          1,231.7
>  Elapsed Time(s/ms):             126          8,377.6
>               Sorts:              15              1.0
>         Parse Calls:              15              1.0
>       Invalidations:               0
>       Version count:               1
>     Sharable Mem(K):              27
>          Executions:              15
>
> SQL Text
> ~~~~~~~~
> SELECT m1."COL1" FROM TABLE1 m1
> WHERE
> (m1."COL2" IS NULL or m1."COL2"=:Y) and
> (m1."COL3" IS NULL or m1."COL3"=:Y)
>
> --------------------------------------------------------------------------------
> | Operation                      | PHV/Object Name     |  Rows | Bytes|   Cost |
> --------------------------------------------------------------------------------
> |SELECT STATEMENT                |----- 458433099 -----|       |      |     28 |
> |SORT ORDER BY                   |                     |     1 |  288 |     28 |
> | TABLE ACCESS FULL              |TABLE1               |     1 |  288 |     27 |
> --------------------------------------------------------------------------------
>
> End of Report

I would suspect it's the LOB data (especially the LOB values that exceed the 4000-byte threshold and are therefore relegated to out-of-line storage). Also, why did you not set db_cache_size to 16 GB? If you're trying to cache all of the tables, you can't do that with a cache sized smaller than the overall data volume.
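For what it's worth, both points can be checked from the data dictionary. This is only a sketch - 'APP_OWNER' is a placeholder for your application schema, and the size query assumes all relevant segments live in that one schema:

```sql
-- Which LOB values are stored in row, and is the LOB segment cached?
-- Values over the ~4000-byte threshold go out of line regardless of IN_ROW.
SELECT column_name, segment_name, in_row, cache
  FROM dba_lobs
 WHERE table_name = 'TABLE1';

-- Total table + index + LOB segment volume, to compare against db_cache_size.
-- 'APP_OWNER' is a placeholder for the application schema.
SELECT ROUND(SUM(bytes) / 1024 / 1024 / 1024, 1) AS total_gb
  FROM dba_segments
 WHERE owner = 'APP_OWNER'
   AND segment_type IN ('TABLE', 'INDEX', 'LOBSEGMENT', 'LOBINDEX');
```

If total_gb comes out larger than db_cache_size, blocks will inevitably age out under load; the CACHE setting only makes LOB reads go through the buffer cache, it does not pin anything there.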

David Fitzjarrell

Received on Fri Oct 18 2013 - 17:04:13 CEST
