Re: Buffer gets - array size, fetch size - High latch activity - buffer gets??? bug or default behavior

From: Tanel Poder <tanel_at_tanelpoder.com>
Date: Sat, 28 Dec 2019 15:41:16 -0500
Message-ID: <CAMHX9J+LJBnUhi9HaeN7cme2HHfPP-4_udua_cpa3rBxVokVGw_at_mail.gmail.com>



Lower fetch size means less data is retrieved/sent back per *database call*. Buffer pins (and held latches) must be released at the end of every call, to avoid an idle session holding these locks for too long. So if you only ask for 5 rows per fetch, you are much more likely to end up re-visiting the same block to get the next 5 rows than if you fetched 500 rows at a time.
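You can observe this directly in SQL*Plus by varying the array size and comparing the "consistent gets" figure in the session statistics. A minimal sketch, using the test table from the quoted mail below (autotrace statistics needs the PLUSTRACE role or equivalent V$ privileges):

set autotrace traceonly statistics

set arraysize 5
select * from randomload2 where mark4 = 100;

set arraysize 500
select * from randomload2 where mark4 = 100;

set autotrace off

With arraysize 5 the consistent gets (and CBC latch gets) should be noticeably higher than with arraysize 500, even though the same rows are returned.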

Also, you can try just adding an ORDER BY to the end of your test query (and make sure that the ORDER BY operation actually shows up in the plan); the LIO/CBC latch get difference between array sizes should then mostly go away. The ORDER BY step in the plan uses maximal internal batch sizes to populate the query result set in the sort area in PGA/TEMP, and you then fetch the results from there without needing CBC latches or LIOs.
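For example, something like this (a sketch; ordering by mark1, a non-indexed column, just to make sure a SORT ORDER BY step appears in the plan):

set serveroutput off
select * from randomload2 where mark4 = 100 order by mark1;
select * from table(dbms_xplan.display_cursor);

If the plan shows a SORT ORDER BY step, the rows are materialized into the sort area during the first fetch call and later fetches read from there, so the array size should no longer change the LIO/latch figures much.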

Tanel

On Wed, Dec 25, 2019 at 7:56 PM Vishnu Potukanuma < vishnupotukanuma_at_gmail.com> wrote:

> Got it,
>
> It looks like this is the default/expected behaviour. Since Oracle doesn't use
> asynchronous calls (which is evident from the single-thread-per-process model of
> its server processes), control must at some point move to a different function to
> transmit the data to the client (which can take extra time, depending on factors
> such as network delays), and then return to the previous call, where it has to
> re-touch or re-read the blocks to continue processing from where it left off.
> This is completely dependent on the fetch size: the lower the fetch size, the
> more buffer gets and latch gets.
>
> That said, it looks like this is not the case all the time...
>
> Thanks,
> Vishnu
>
> On Wed, Dec 25, 2019 at 4:43 AM Vishnu Potukanuma <
> vishnupotukanuma_at_gmail.com> wrote:
>
>> Hi,
>>
>> The situation is as follows:
>> create table randomload(roll number, mark1 number, mark2 number, mark3
>> number, mark4 number);
>> load the table with random data (a sketch follows the setup statements below);
>> then create a second table so that the clustering factor of an index on the
>> mark4 column is close to the number of table blocks:
>>
>> create table randomload2 as select * from randomload order by mark4;
>> create index idx on randomload2(mark4);
>> exec dbms_stats.gather_table_stats('VISHNU','RANDOMLOAD2', CASCADE => TRUE);
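>> A minimal sketch for the random-load step (the row count and value ranges here
>> are arbitrary assumptions; mark4 gets roughly 3200 distinct values to match the
>> test below):
>>
>> insert into randomload
>> select rownum,
>>        trunc(dbms_random.value(0, 100)),
>>        trunc(dbms_random.value(0, 100)),
>>        trunc(dbms_random.value(0, 100)),
>>        trunc(dbms_random.value(0, 3200))
>> from dual
>> connect by level <= 500000;
>> commit;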
>>
>> Now carefully pick a value of mark4 (the indexed column) that retrieves more
>> than 500 rows (as an example); eventually we will find that even 100 rows
>> causes high latching activity.
>>
>> select * from randomload2 where mark4 = 100;
>> Check the buffer gets for this SQL and also carefully monitor the activity on
>> each of the cache buffers chains (CBC) latches protecting these data blocks.
>>
>> I used a query such as the following:
>> select sum(gets) from v$latch_children where addr in
>> ('000000006C9A4160','000000006CA29DA8','000000006CA37068','000000006CAFEF20','000000006CB52450','000000006CB52450','000000006CB52450','000000006CB91E28','000000006CB91EF0');
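>> One way to get the child latch addresses for the blocks of a given table is a
>> sketch like this (requires access to x$bh, so typically run as SYS; hladdr is
>> the cache buffers chains child latch protecting each buffer):
>>
>> select distinct hladdr
>> from x$bh
>> where obj = (select data_object_id
>>              from dba_objects
>>              where owner = 'VISHNU' and object_name = 'RANDOMLOAD2');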
>>
>> It is very interesting that the latch activity and buffer gets for a SQL
>> statement depend on the array/fetch size: average latch gets and buffer gets
>> are always higher when the fetch size is small (even at 100), because the
>> database repeatedly revisits the same data blocks and latches.
>>
>> Testing with set fetchsize 20, 30, 60, 100, 200:
>>
>> We see different buffer gets per execution each time, and latch activity is
>> very high, even though the number of distinct blocks accessed is small (only 6
>> physical blocks in my case), since the average row length is small and the
>> mark4 column has only 3200 distinct values.
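>> To compare buffer gets per execution across fetch sizes, a sketch like this
>> against v$sql can be used (assumes the test statement text is distinctive
>> enough to match on):
>>
>> select sql_id, executions, buffer_gets,
>>        round(buffer_gets / nullif(executions, 0), 1) gets_per_exec
>> from v$sql
>> where sql_text like 'select * from randomload2 where mark4%';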
>>
>> Can someone please tell me: is this expected, or am I missing anything?
>>
>> Thanks
>> Vishnu
>>

--
http://www.freelists.org/webpage/oracle-l
Received on Sat Dec 28 2019 - 21:41:16 CET
