Re: Block size qn

From: Jonathan Lewis <jlewisoracle_at_gmail.com>
Date: Wed, 17 Jun 2020 10:29:16 +0100
Message-ID: <CAGtsp8mX-BvUnnJjoiXbiyyYYgA6st47cqmvx3qR=a8X=FVJDg_at_mail.gmail.com>



I prefer to use the expression: "haven't noticed any issues", rather than "haven't had any issues". I've seen far too many systems that are allowed to perform at sub-optimal speed because their owners didn't realise they could do significantly better ;)

On a data warehouse with direct path loads and little in the way of updates you're unlikely to run into the oddity of UNDO using 32KB blocks that I wrote about a few years ago. (There is (was?) a hard limit of 255 undo records per undo block, so large undo blocks can end up badly under-used.) There's also a threat, in highly concurrent systems, of undo space wastage due to the implementation of the undo free pool, but that won't be relevant to you.
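
(A quick sanity check if you want to confirm what you've got - just a sketch, assuming access to the data dictionary: undo tablespaces always use the standard db_block_size, so a 32KB database means 32KB undo blocks.)

    -- Confirm the default block size, then the block size of each
    -- tablespace; CONTENTS = 'UNDO' identifies the undo tablespace.
    select value as db_block_size
    from   v$parameter
    where  name = 'db_block_size';

    select tablespace_name, contents, block_size
    from   dba_tablespaces
    order by tablespace_name;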

One potential threat that would be easy to miss is mentioned in Roger's article - with a 32KB block you could get compression that was "too good" - he mentioned a hard limit that I hadn't known about. There's also the possibility that your queries could end up hitting the "double decompression" problem more frequently: the disk server decompresses a compression unit, discovers the result set is too large to return, and so ships the whole compression unit back to be decompressed (again) by the database server.
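
If you want to check whether you're hitting that, the instance activity statistics are the place to look. A sketch only - I've used wildcards because the exact statistic names (things like "cell CUs sent compressed", i.e. CUs shipped back whole for the database server to decompress) vary with version:

    -- Compression unit (CU) and Hybrid Columnar Compression statistics;
    -- a non-trivial count of CUs "sent compressed" back to the database
    -- suggests the double-decompression path is being taken.
    select name, value
    from   v$sysstat
    where  (name like 'cell CUs%' or name like 'EHCC%')
    and    value > 0
    order by name;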

Regards
Jonathan Lewis

On Sat, Jun 13, 2020 at 7:25 AM Ahmed Aangour <ahmed.aangour_at_gmail.com> wrote:

> Hi,
>
> For information, my current client has been using a 32k block size in an
> Exadata X6 data warehouse database and has encountered no issues for 10
> years regarding that setting. Of course, we never update rows, and delete
> statements are very rare. We mainly use direct path inserts and truncate
> partitions.
>
> Regards,
> Ahmed
>
>
>

--
http://www.freelists.org/webpage/oracle-l
Received on Wed Jun 17 2020 - 11:29:16 CEST
