Re: high Block change tracking waits

From: Lok P <loknath.73_at_gmail.com>
Date: Mon, 8 Feb 2021 10:21:48 +0530
Message-ID: <CAKna9Va5pq=HVsvvm+OdDbdUNJx0gB++HSG8uXbWc7FNXkZbnA_at_mail.gmail.com>



Thanks much, Jonathan.

We did a few things. First we increased _bct_buffer_allocation_max to 1GB (from ~128MB) and the large pool to 2GB (from ~1GB), but we didn't notice any difference. Then we turned BCT OFF, set _bct_public_dba_buffer_size to ~256MB (from ~128MB), and turned it ON again. The BCT file size has now become ~9GB (it was ~22GB earlier). The "block change tracking buffer space" wait is gone, the data load queries are running, and the database is performing as it was before.

So I think it was mainly the increase of _bct_public_dba_buffer_size from ~128MB to ~256MB that eliminated the "block change tracking buffer space" wait here. And, as you mentioned initially, the buffer may not have been growing proportionately as we added new datafiles to the system, so it looks to us like buggy behaviour, since we ended up manually bumping up the underscore parameter. But keeping in mind that the max size of this parameter is ~1GB: if in the near future we keep adding more datafiles, might we also have to consider increasing it to ~512MB or ~1GB, or should the current ~256MB take care of it?
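For anyone following along, the sequence we went through can be sketched roughly as below. This is a sketch, not verified syntax for every version: the x$ views need SYS access, the buffer size value is assumed to be in bytes, the BCT file name shown is a placeholder, and hidden (underscore) parameters should only be changed under guidance from Oracle Support.

```sql
-- 1. Check the current values of the hidden BCT parameters
--    (the usual x$ksppi / x$ksppcv join; run as SYS).
SELECT x.ksppinm  AS parameter,
       y.ksppstvl AS current_value,
       x.ksppdesc AS description
FROM   x$ksppi  x
JOIN   x$ksppcv y ON y.indx = x.indx
WHERE  x.ksppinm LIKE '%bct%';

-- 2. Set the public DBA buffer size (value assumed to be in bytes;
--    ~256MB = 268435456). SCOPE=SPFILE on the assumption the
--    parameter is not dynamically modifiable.
ALTER SYSTEM SET "_bct_public_dba_buffer_size" = 268435456 SCOPE = SPFILE;

-- 3. Cycle block change tracking so the new size takes effect
--    (file name below is a placeholder).
ALTER DATABASE DISABLE BLOCK CHANGE TRACKING;
ALTER DATABASE ENABLE BLOCK CHANGE TRACKING
  USING FILE '+DATA/bct_file.chg' REUSE;
```

Note that plain "show parameter" does not display underscore parameters, which is why the x$ query in step 1 is needed.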

Regards
Lok

On Fri, Feb 5, 2021 at 4:25 AM Jonathan Lewis <jlewisoracle_at_gmail.com> wrote:

>
>
> Two thoughts that might reduce your level of surprise;
>
> First - I said it's also dependent on the number of files: maybe you've
> added a few extra files over the last few months
>
> Second - read https://jonathanlewis.wordpress.com/2007/07/29/nls/ (It has
> nothing to do with BCT, but makes the point that a small change in size can
> have a big impact in response time). In both cases (the blog note and your
> BCT behaviour) a relatively small change in the data size (in the blog the
> table size, in your case the size/number of files) could have been enough
> to have a big impact on the caching.
>
> If you want to check the current value of _bct_buffer_allocation_max
> there's a quick and dirty script at _bct_buffer_allocation_max - you'll
> need to set up some page and column format setting for a tidy output, and
> change the search predicate to '%bct%' rather than '%trace%' of course.
> It looks as if Vinicius has exactly the right previous experience to help
> you, and has commented previously, so I won't make any further comment.
>
> Regards
> Jonathan Lewis
>
>
>
>
>
> On Thu, 4 Feb 2021 at 21:26, Lok P <loknath.73_at_gmail.com> wrote:
>
>> And the thing which surprises me: this database has always been 100+ TB,
>> so how was this wait event under control with a small BCT buffer until a
>> month back, and then started increasing day by day in the recent past,
>> and is now impacting application queries?
>>
>> On Fri, 5 Feb 2021, 2:46 am Lok P, <loknath.73_at_gmail.com> wrote:
>>
>>> Thanks much.
>>>
>>> It's a big database; when I sum up all the USER_BYTES in dba_data_files
>>> it comes to ~140TB, and the total number of files in dba_data_files is
>>> ~3788.
>>>
>>> Doing show parameter _bct_buffer_allocation_max gives no results, so
>>> hopefully it is not set; _bct_public_dba_buffer_size is set to ~128MB.
>>>
>>> We will raise an SR with Oracle to get the correct sizes for these
>>> underscore parameters, so as to avoid the "block change tracking buffer
>>> space" wait.
>>>
>>>

--
http://www.freelists.org/webpage/oracle-l
Received on Mon Feb 08 2021 - 05:51:48 CET