Re: Block size qn

From: Jonathan Lewis <jlewisoracle_at_gmail.com>
Date: Wed, 17 Jun 2020 18:04:22 +0100
Message-ID: <CAGtsp8kq0dnf7DSbyq01N3a0U3B5eZhe8iPcbRAqUGV2qYbncA_at_mail.gmail.com>



Ahmed,

I hope you didn't take the "haven't noticed" as a serious objection.

The problem I described is very much an indirect issue of the 32KB block size, but the more immediate cause is the nature of the queries being sent and the row definitions.
Exadata allows for a 1MB package between the cells and the db server, and if a query does a "select * from table", or even "select {lots of columns} from {lots of rows of} table", then the amount of data that has to be decompressed, extracted and packaged for a single request could be too large.

Exadata is very good at minimising the traffic for "select a few columns from a tiny subset of rows" because it's very good at discarding unneeded data, but if you need to select big chunks of a table you need to think carefully about how densely you try to compress it, and even how big a multiblock read you set to read it back.
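
To make that concrete (a sketch only - the table name below is a placeholder, and you would adapt the queries to your own schema), you can see how densely a table is declared to be compressed and how large a multiblock read the instance will attempt:

    -- Declared compression level for the partitions of a (hypothetical) fact table
    select table_name, partition_name, compression, compress_for
    from   dba_tab_partitions
    where  table_name = 'SALES_FACT'
    order  by partition_name;

    -- Blocks per multiblock read; multiply by the block size to get bytes per read
    select name, value
    from   v$parameter
    where  name in ('db_file_multiblock_read_count', 'db_block_size');

With a 32KB block size, a multiblock read count of 32 already means 1MB per read, which is why the two settings need to be considered together.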

One statistic that might be helpful is

cell physical IO bytes sent directly to DB node to balance CPU

I think this tells you when the cells have sent uncompressed CUs because they're working hard and the db server isn't.
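
As a quick check (a sketch; the exact statistic names can vary slightly between versions, so the LIKE patterns below are deliberately loose):

    -- Instance-wide counters; for a single session join v$mystat to v$statname instead
    select name, value
    from   v$sysstat
    where  name like 'cell physical IO bytes sent directly to DB node%'
       or  name like 'cell CUs%'
    order  by name;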

Regards
Jonathan Lewis

On Wed, Jun 17, 2020 at 5:27 PM Ahmed Aangour <ahmed.aangour_at_gmail.com> wrote:

> Hi Jonathan,
>
> Thank you for correcting me, you're absolutely right. I should have said
> "haven't noticed".
>
> Regarding what you've mentioned on hitting the double decompression, I
> recently noticed that some queries were consuming much more CPU than usual
> and thanks to the snapper tool I understood it was because 40% of the CUs
> were sent to the DB node as compressed. To deduce this, I compared the
> statistics "Cells CUs processed for uncompressed" and "Cells CUs sent
> uncompressed". I can even see the "Cells CUs sent head piece" statistic
> being incremented. I didn't find the reason for this phenomenon. I
> thought it was because the cells were too busy, but now I have to check
> whether the 32KB block size might be the root cause.
>
>
> On Wed, Jun 17, 2020 at 11:30 AM Jonathan Lewis <jlewisoracle_at_gmail.com>
> wrote:
>
>>
>> I prefer to use the expression: "haven't noticed any issues", rather than
>> "haven't had any issues". I've seen far too many systems that are allowed
>> to perform at sub-optimal speed because their owners didn't realise they
>> could do significantly better ;)
>>
>> On a data warehouse with direct path loads and little in the way of
>> updates you're unlikely to run into the oddity of UNDO using 32KB blocks
>> that I wrote about a few years ago. (There is (was?) a hard limit of 255
>> undo records per undo block.) There's also a threat of undo space wastage
>> due to the implementation of the undo free pool in a highly concurrent
>> system, but that won't be relevant to you.
>>
>> One potential threat that would be easy to miss is mentioned in Roger's
>> article - with a 32KB block you could get compression that was "too good" -
>> he mentioned a hard limit that I hadn't known about. There's also the
>> possibility that your queries could end up hitting the "double
>> decompression" problem more frequently because the disk server decompresses
>> and discovers the result set is too large to return, so it sends back the
>> compression unit to be decompressed by the database server.
>>
>> Regards
>> Jonathan Lewis
>> .
>>
>> On Sat, Jun 13, 2020 at 7:25 AM Ahmed Aangour <ahmed.aangour_at_gmail.com>
>> wrote:
>>
>>> Hi,
>>>
>>> For information, my current client has been using a 32KB block size in an
>>> Exadata X6 data warehouse database and has had no issues with that setting
>>> for 10 years. Of course, we never update rows, and delete statements are
>>> very rare. We mainly use direct-path inserts and truncate partitions.
>>>
>>> Regards,
>>> Ahmed
>>>
>>>
>>>

--
http://www.freelists.org/webpage/oracle-l
Received on Wed Jun 17 2020 - 19:04:22 CEST
