Re: Buffer gets - array size, fetch size - High latch activity - buffer gets??? bug or default behavior

From: Vishnu Potukanuma <vishnupotukanuma_at_gmail.com>
Date: Sun, 29 Dec 2019 11:46:21 +0530
Message-ID: <CAP-RywxUfrOyHBp6420Trj1rR9YH_Bet3qidGVb8aCTqgzYHhw_at_mail.gmail.com>



Thanks Tanel,

I performed many tests covering all the scenarios, and it appears that the fetch size has serious implications at the database level, essentially undermining the prefetch and batching capabilities at the I/O level.
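
To make the effect easy to reproduce, here is a minimal SQL*Plus sketch of the kind of test I ran (the table and predicate come from the test case quoted further down; in SQL*Plus the fetch size is controlled by arraysize, and autotrace is used here only to show the consistent gets per execution):

set autotrace traceonly statistics
set arraysize 5
select * from randomload2 where mark4 = 100;
set arraysize 500
select * from randomload2 where mark4 = 100;
-- the "consistent gets" figure drops sharply as the array size grows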

I found the following cases where the side effects of low fetch sizes, in terms of buffer gets and latch gets required, are minimal to none.

ORDER BY on the indexed column is also affected, since the sorting step is avoided there. I was also looking at ORM solutions such as Hibernate, which generate the queries themselves; if the application depends heavily on such a tool, we have no control over the ORDER BY other than increasing the fetch size to a very high value, but that takes much longer if the application uses FIRST_ROWS, and most environments share the network between the database and application servers...
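
To illustrate Tanel's suggestion on the same test table, a sketch (assuming the randomload2 table from the test case below; ordering by a column other than the indexed one forces a real SORT ORDER BY step, while ordering by mark4 itself is satisfied by the index so the sort is avoided):

-- sort avoided: rows come back in index order, fetch-size sensitivity remains
select * from randomload2 where mark4 = 100 order by mark4;
-- SORT ORDER BY in the plan: the result set is materialised in PGA/TEMP first,
-- and the per-fetch buffer gets / CBC latch gets mostly disappear
select * from randomload2 where mark4 = 100 order by mark1;
-- confirm which plan was actually used (run with serveroutput off)
select * from table(dbms_xplan.display_cursor);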

Looks like there is more to this index range/skip scan behaviour... I have to do many more tests to see whether any other aspects are also affected...
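
For reference, instead of hard-coding the child latch addresses as in the query quoted below, the relevant cache buffers chains child latches for the test table can be located first (a sketch assuming SYS-level access for x$bh; schema and object names follow the test case in this thread):

-- CBC child latch address (hladdr) protecting each buffered block of the table
select bh.hladdr, count(*) buffers
  from x$bh bh
 where bh.obj = (select data_object_id
                   from dba_objects
                  where owner = 'VISHNU' and object_name = 'RANDOMLOAD2')
 group by bh.hladdr;

-- total gets on exactly those child latches
select sum(gets)
  from v$latch_children
 where addr in (select hladdr
                  from x$bh
                 where obj = (select data_object_id
                                from dba_objects
                               where owner = 'VISHNU' and object_name = 'RANDOMLOAD2'));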

Thanks,

Vishnu

On Sun, Dec 29, 2019 at 2:11 AM Tanel Poder <tanel_at_tanelpoder.com> wrote:

> Lower fetch size means less data is retrieved/sent back per *database
> call*. Buffer pins (and held latches) must be released at the end of
> every call to avoid some idle session holding these locks for too long. So
> if you only ask for 5 rows per fetch, you are much more likely to end up
> re-visiting the same block to get the next 5 than if you fetched 500 rows
> every time.
>
> Also, you can try to just add an ORDER BY to the end of your test query
> (and make sure that the ORDER BY operation actually shows up in the plan);
> the LIO/CBC latch get difference with different array sizes should then
> mostly go away, as the ORDER BY step in the plan would use maximum internal
> batch sizes to populate the query result set in the sort area in PGA/TEMP,
> and you'd fetch the results from there afterwards without needing CBC
> latches or LIOs.
>
> Tanel
>
> On Wed, Dec 25, 2019 at 7:56 PM Vishnu Potukanuma <
> vishnupotukanuma_at_gmail.com> wrote:
>
>> Got it,
>>
>> looks like it is the default/expected behaviour...
>> Since Oracle doesn't use asynchronous calls, which is evident from the
>> fact that it uses a single-threaded server process model, control must
>> shift at one point or another to a different function to transmit the data
>> to the client (which can take more time and depends on various factors such
>> as network delays), and then return to the previous call, where it must
>> touch or re-read the blocks to continue processing from where it left off.
>> This is completely dependent on the fetch size: the lower the fetch size,
>> the more buffer gets and latch gets..
>>
>> looks like this is not the case all the time ...
>>
>> Thanks,
>> Vishnu
>>
>> On Wed, Dec 25, 2019 at 4:43 AM Vishnu Potukanuma <
>> vishnupotukanuma_at_gmail.com> wrote:
>>
>>> Hi,
>>>
>>> The situation is as follows:
>>> create table randomload(roll number, mark1 number, mark2 number, mark3
>>> number, mark4 number);
>>> load the table with random data;
>>> then create a different table so that the clustering factor on the mark4
>>> column is close to the number of table blocks.
>>>
>>> create table randomload2 as select * from randomload order by mark4;
>>> create index idx on randomload2(mark4);
>>> exec dbms_stats.gather_table_stats('VISHNU','RANDOMLOAD2', CASCADE=>
>>> TRUE);
>>>
>>> now carefully pick any value of mark4 (the indexed column) that retrieves
>>> more than 500 rows (as an example); eventually we will find that even 100
>>> rows causes high latching activity...
>>>
>>> select * from randomload2 where mark4 = 100;
>>> see the buffer gets for this SQL and also carefully monitor the activity on
>>> each of the cache buffers chains (CBC) latches protecting these data blocks.
>>>
>>> I used a query such as the following:
>>> select sum(gets) from v$latch_children where addr in
>>> ('000000006C9A4160','000000006CA29DA8','000000006CA37068','000000006CAFEF20','000000006CB52450','000000006CB52450','000000006CB52450','000000006CB91E28','000000006CB91EF0');
>>>
>>> it is very interesting that the latch activity and buffer gets for a SQL
>>> statement depend on the array size or fetch size..
>>> average latch gets and buffer gets are always higher when the fetch size
>>> is small; even at 100, the database repeatedly accesses the same data
>>> blocks and latches...
>>>
>>> testing with set fetchsize 20, 30, 60, 100, 200.
>>>
>>> we will see different buffer gets per execution each time, and the latch
>>> activity is very high...
>>> even though the number of blocks accessed is small (only 6 physical blocks
>>> in my case, since the average row length is small and the mark4 column has
>>> only 3200 distinct values).
>>>
>>> Can someone please tell me:
>>> is this expected, or am I missing anything?
>>>
>>> Thanks
>>> Vishnu
>>>

--
http://www.freelists.org/webpage/oracle-l
Received on Sun Dec 29 2019 - 07:16:21 CET
