Re: Bigger block sizes

From: Orlando L <oralrnr_at_gmail.com>
Date: Mon, 19 Oct 2015 15:51:38 -0500
Message-ID: <CAL8Ae76EkwCxmLoo9pxRkjhqGvN3MazDf4XdpVznziR=vtJ1aA_at_mail.gmail.com>



Is there anyone in this world whose data cannot fit into 8K blocks, even when the design is optimized? I have seen tables with plenty of columns in warehouse schemas over my career, but unfortunately I don't remember much about the block size used then.

On Sun, Oct 18, 2015 at 5:21 AM, Dragutin Jastrebic <orahawk_at_gmail.com> wrote:

> Hello
>
> This topic comes back from time to time, and has been the subject of
> somewhat emotive discussions in the past.
>
> Another discussion started on AskTom about 8 years ago and then
> continued on Oracle's OTN.
>
>
> https://asktom.oracle.com/pls/apex/f?p=100:11:0::::P11_QUESTION_ID:44779427427730#463291000346484016
>
> https://community.oracle.com/thread/583865?start=15&tstart=0
>
> etc.
>
> Since I feel a little bit responsible for this, let me say a couple of
> words.
>
> Eight years ago my DBA friend and I were playing on a test platform with
> a call detail records database, and we had the idea of testing a
> bigger-blocksize tablespace by putting a table and an index of historical
> records in a 16K tablespace. My friend was inspired by the example by
> Robin Schumacher that was mentioned in the book by Donald Burleson. The
> first tests were surprising indeed: the execution time was faster, and
> the number of logical reads was almost halved. Knowing that there was an
> ongoing discussion on the AskTom thread, he hurried to post the result
> there, hiding behind the "reader from Russia" pseudonym.
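>
> For anyone who wants to repeat this kind of test, here is a minimal
> sketch of the setup (object and file names and the sizes are made up
> for illustration; a non-default blocksize tablespace needs a matching
> buffer cache first):
>
>   -- reserve a buffer cache for 16K blocks (assumed size, adjust to taste)
>   ALTER SYSTEM SET db_16k_cache_size = 256M;
>
>   -- create the 16K tablespace (hypothetical datafile path)
>   CREATE TABLESPACE ts_16k
>     DATAFILE '/u01/oradata/ts_16k_01.dbf' SIZE 2G
>     BLOCKSIZE 16K;
>
>   -- move the historical table and rebuild its index there
>   ALTER TABLE cdr_history MOVE TABLESPACE ts_16k;
>   ALTER INDEX cdr_history_ix REBUILD TABLESPACE ts_16k;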
>
> Tom was not convinced, and asked to see the tkprof trace.
>
> Unfortunately, we really had a lot of work at the time, the test
> database had been refreshed with a new version, and the old data was no
> longer available. A week later I tried the same test with what was
> supposed to be almost the same data sample, but I was simply unable to
> reproduce the case: this time both tests gave the same results, the same
> execution time, the same logical read statistics, and so on.
>
> Being unable to provide any further information, my DBA friend did not
> post any follow-up on AskTom.
>
> At the time I had never posted a comment on any forum; I only read them
> occasionally.
>
> I did not even see that Donald Burleson had immediately taken our
> example and put it on his website as "another evidence of the benefits
> of the bigger blocksize"!
>
> It was only about 2 or 3 years later that I discovered how much
> discussion between Richard Foote, Jonathan Lewis, Donald Burleson and
> others had followed our example...
>
> If only I had known it earlier, I would certainly have worked further on
> this test case.
>
> Thinking about it later, I believe that, for some reason, the execution
> plans were different: a full scan in the slower test (the 8K tablespace)
> and an index range scan in the faster test (the 16K tablespace). That is
> the only explanation I can see.
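>
> This is exactly the kind of thing the trace Tom asked for would have
> settled. A minimal sketch of how the two runs could be compared (the
> standard 10046/tkprof workflow; identifiers and file names are made up):
>
>   ALTER SESSION SET tracefile_identifier = 'blk_test';
>   ALTER SESSION SET events '10046 trace name context forever, level 8';
>   -- run the query once against each tablespace here
>   ALTER SESSION SET events '10046 trace name context off';
>
> and then, on the server:
>
>   tkprof orcl_ora_1234_blk_test.trc blk_test.txt sys=no
>
> The Row Source Operation section of the tkprof report would show
> directly whether one run used a full scan and the other an index range
> scan.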
>
> And speaking of Robin Schumacher's example, Richard Foote explained on
> the OTN thread that the difference in the number of logical reads comes
> from the way Oracle counts them with different block sizes; it is not
> related to performance.
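>
> A simple back-of-the-envelope illustration of that effect: a 1 GB
> segment spans about 131,072 blocks at 8K but only about 65,536 blocks at
> 16K, so a scan of the same data reports roughly half the logical reads
> in the 16K case while touching exactly the same number of bytes.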
>
> So, as far as I know, nobody has yet provided a good test case proving
> that there is a true performance benefit to a bigger blocksize.
>
> If anyone is facing such a situation, they are warmly invited to publish
> it!
>
> And I don't think that Exadata has anything to do with bigger
> blocksizes, since it does not even exchange blocks, only rows.
>
>
> My 2 cents
>
> Dragutin
>
>
>

--
http://www.freelists.org/webpage/oracle-l
Received on Mon Oct 19 2015 - 22:51:38 CEST
