Re: high Block change tracking waits

From: Jonathan Lewis <jlewisoracle_at_gmail.com>
Date: Wed, 10 Feb 2021 13:06:56 +0000
Message-ID: <CAGtsp8=-p0bgoXLcZwvb=o5pi3nBZWT1R=BO9GU0e37eoz=+aQ_at_mail.gmail.com>



*So I think it's mainly the increasing of _bct_public_dba_buffer_size from ~128MB to ~256MB which helped us in eliminating the block change tracking buffer space wait here. *

I don't think I can agree with that - after all, you didn't get any improvement when you increased the value from 128M to 1GB. It seems more likely that the drop from 22GB to 9GB in the block change tracking file was the most significant point. This raises the question of how to prove, or disprove, the hypothesis; but it also raises the question of WHY the file was 22GB in the first place.
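
As a starting point for checking the two candidate causes, the current BCT file size and the cumulative waits can be read from standard dynamic performance views (a diagnostic sketch; assumes a privileged session):

```sql
-- Current status, location and size of the block change tracking file
SELECT status, filename, bytes/1024/1024 AS size_mb
FROM   v$block_change_tracking;

-- Cumulative waits on the BCT buffer since instance startup
SELECT event, total_waits, time_waited_micro/1e6 AS seconds_waited
FROM   v$system_event
WHERE  event = 'block change tracking buffer space';
```

Comparing the wait figures before and after a change (rather than just the totals) is what would support or undermine the hypothesis.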

My first guess would be that at some point you were resizing data files, which means the bitmaps had to grow, and either Oracle created completely new bitmaps for a file or (perhaps more likely) it created a chain of bitmap sections that gradually became scattered through the file. Maybe the rate of growth was such that the bitmaps started wasting space; maybe you also dropped/shrunk a few tablespaces/datafiles, leaving unused bitmaps in the middle of the file.

Whatever you did, it's possible that Oracle had to follow some sort of linked list of pointers when it was updating the BCT file for some of the updates, and the need to read the file and follow chains of pointers MIGHT have been why writes were slowed down and led to waits for the buffer to clear. Stopping and restarting BCT would have produced a completely clean file - no chained sections - so writes could be completed on the first access, allowing the buffer to clear more quickly.
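
The stop/start cycle described in the thread might look like the following sketch. The file path is purely illustrative, the hidden parameter should only be changed under Oracle Support guidance, and I'm assuming the value is given in bytes and picked up at instance restart:

```sql
-- Drop the existing (possibly chained) tracking file
ALTER DATABASE DISABLE BLOCK CHANGE TRACKING;

-- ~256MB public DBA buffer, as in the thread (value assumed to be in bytes;
-- takes effect after an instance restart)
ALTER SYSTEM SET "_bct_public_dba_buffer_size" = 268435456 SCOPE = SPFILE;

-- After restarting the instance, re-enable with a fresh, chain-free file
-- (path below is illustrative)
ALTER DATABASE ENABLE BLOCK CHANGE TRACKING
  USING FILE '/u01/app/oracle/bct/change_tracking.f' REUSE;
```

Because the new file is rebuilt from scratch, its size immediately after re-enabling reflects only the current datafile layout, which is consistent with the drop from ~22GB to ~9GB reported below.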

Note - as a general rule the "end" of data files tends to be the part most subject to change, which means if the BCT file had lots of linked lists then most of the updates to the file would have been following the longest linked list.

If the hypothesis is correct then it could be tested - but it would be very tedious to set up the experiment.
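
One (tedious) shape for such an experiment, with illustrative datafile names: repeatedly grow a datafile and watch whether the BCT file grows in scattered increments rather than in place:

```sql
-- Grow a datafile so its bitmap must be extended (name/size illustrative)
ALTER DATABASE DATAFILE '/u01/oradata/DB/users01.dbf' RESIZE 2G;

-- Check the BCT file size after each resize; stepwise jumps would be
-- consistent with new bitmap sections being chained onto the file
SELECT bytes/1024/1024 AS bct_mb
FROM   v$block_change_tracking;
```

Repeating this across many files, then comparing write behaviour against a freshly rebuilt BCT file, is roughly the comparison the hypothesis needs.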

Regards
Jonathan Lewis

On Mon, 8 Feb 2021 at 04:52, Lok P <loknath.73_at_gmail.com> wrote:

> Thanks much, Jonathan.
>
> We did a few things. First we increased _bct_buffer_allocation_max to
> 1GB (from ~128MB) and the large pool to 2GB (from ~1GB), but we didn't
> notice any difference. Then we turned OFF BCT, set
> _bct_public_dba_buffer_size to ~256MB (from ~128MB), and turned it ON
> again. The BCT file size has now become ~9GB (it was ~22GB earlier), the
> "block change tracking buffer space" wait is gone, and the data load
> queries and the database are performing as they were before.
>
> So I think it's mainly the increasing of _bct_public_dba_buffer_size from
> ~128MB to ~256MB which helped us in eliminating the block change tracking
> buffer space wait here. And, as you mentioned initially, the buffer may
> not be growing proportionately as we add new datafiles to the system, so
> it looks to us like buggy behaviour, as we ended up manually bumping up
> the underscore parameter. But keeping in mind that the max size of this
> parameter is ~1GB, if in the near future we keep adding datafiles, might
> we also have to consider increasing this to ~512MB or ~1GB? Or should the
> current size of ~256MB take care of it?
>
>
>

--
http://www.freelists.org/webpage/oracle-l
Received on Wed Feb 10 2021 - 14:06:56 CET
