Re: Table with ~255+ columns

From: Lok P <loknath.73_at_gmail.com>
Date: Thu, 19 Nov 2020 01:49:02 +0530
Message-ID: <CAKna9VYHsfHmkCMU2Pf3Ur8XRw91c7E-5ePo3neOE3KYkrSxZw_at_mail.gmail.com>



Thank you so much.

It's a weekly partitioned table, and I picked the 3rd Nov partition to check its partition stats and space usage stats; they are below. I don't see any UNUSED blocks reported by the DBMS_SPACE package. But when I multiply avg_row_len by num_rows, i.e. ((41001195*277)/8192)*1.25, I get ~1.7 million blocks, whereas we currently have ~2.5 million blocks in total as reported by DBA_TAB_PARTITIONS. So does that mean we actually have (2.5 million - 1.7 million) = ~0.8 million blocks of wasted space? I do see that this table gets mostly INSERTs with very few UPDATEs, so I still need to check how the updates are being impacted.
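
For reference, the same arithmetic can be pulled straight from the dictionary with something like the below (just a sketch; it assumes an 8K block size and the same 25% overhead allowance I used above):

-- rough estimate only: avg_row_len * num_rows ignores row-piece headers and pctfree
SELECT num_rows,
       avg_row_len,
       blocks                                                  AS blocks_below_hwm,
       CEIL(((num_rows * avg_row_len) / 8192) * 1.25)          AS estimated_blocks_needed,
       blocks - CEIL(((num_rows * avg_row_len) / 8192) * 1.25) AS apparent_excess_blocks
FROM   dba_tab_partitions
WHERE  table_owner    = 'USER1'
AND    table_name     = 'TRAN_TAB'
AND    partition_name = 'P_03NOV2020';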

*Output from DBA_TAB_PARTITIONS:- *
(screenshot of the DBA_TAB_PARTITIONS row for the partition - not reproduced here)

*Output from DBMS_SPACE package :-*

v_total_blocks :2557824

v_total_bytes :20953694208

v_unused_blocks :0

v_unused_bytes :0

v_last_used_extent_file_id :570

v_last_used_extent_block_id :775040

v_last_used_block :128

TABLE                            UNUSED BLOCKS    TOTAL BLOCKS HIGH WATER MARK
------------------------------ --------------- --------------- ---------------
TRAN_TAB                                     0         2557824         2557823


The test query which I ran was a "SELECT *" on the table fetching one row via a unique index scan, so I thought of checking the "table fetch continued row" statistic, but I am not seeing it increment. Maybe that particular row does not span multiple blocks, so we got lucky there and we are restricted to intra-block chaining. So just having >~255 columns won't cause inter-block row chaining unless the length of a single row exceeds the DB block size, i.e. 8K (and in our case AVG_ROW_LEN is ~277 bytes). Correct me if wrong? And in that sense, intra-block row chaining causes less harm than inter-block row chaining; in our case it should just be one more logical read to fetch columns 256 through ~270 compared to columns 1 through 255.
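
For reference, this is the kind of check I have in mind for that statistic - just a sketch, run in the same session immediately before and after the test SELECT, comparing the two snapshots:

-- session-level counters; the delta across the test query shows whether any
-- continued-row fetches (and the matching extra logical reads) occurred
SELECT sn.name, ms.value
FROM   v$mystat   ms,
       v$statname sn
WHERE  sn.statistic# = ms.statistic#
AND    sn.name IN ('table fetch continued row', 'session logical reads');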

I was not able to fully understand this part: in this intra-block row chaining scenario, if we add two more columns towards the end but then UPDATE the historical data to populate them, can that cause significant damage? If the total size of a row still stays below 8K, will it still turn into inter-block row chaining and impact us?

My apologies; I wanted to see to what extent we are affected by row chaining for this table, but the part below is going over my head.

 "*Write code to calculate the row length for a row (sum of column length + overheads) and get a histogram of row lengths - anything other than a simple bell curve is suggestive of "insert stub and grow".*

I tried running select nvl(vsize(C1),0)+nvl(vsize(C2),0)+nvl(vsize(C3),0)+nvl(vsize(C4),0)+.. from TRAN_TAB partition(P_03NOV2020); and the row lengths come out as below; of course, as this partition has millions of rows, the output just goes on. I am not able to understand how I can get a histogram of the row lengths from this, or how that would help us interpret anything here.

128

155

133

119

138

152

119

129

153

134

119

119
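
Maybe something like the below is what is meant by a histogram of row lengths - it buckets the computed lengths into 32-byte bands and counts the rows in each band (C1..C4 here just stand in for the full ~270-column list, and the 32-byte band width is an arbitrary choice):

SELECT TRUNC(row_len / 32) * 32      AS bucket_low,
       TRUNC(row_len / 32) * 32 + 31 AS bucket_high,
       COUNT(*)                      AS rows_in_bucket
FROM  (SELECT NVL(VSIZE(C1),0) + NVL(VSIZE(C2),0) +
              NVL(VSIZE(C3),0) + NVL(VSIZE(C4),0) /* + ... remaining columns */ AS row_len
       FROM   TRAN_TAB PARTITION (P_03NOV2020))
GROUP  BY TRUNC(row_len / 32)
ORDER  BY bucket_low;

If the counts form a single hump, the rows were presumably inserted at close to their final size; a second hump at the small end would point to the "insert stub and grow" pattern quoted above.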

 "*Write code to draw a picture of null and not null columns for a row
(i.e. a string of 270 bytes with "-" for used column, " for blank column"
and see where the unused columns are - you may find a column rebuild with a change in column order will make most rows use less than the critical 255*."

 As per the above statement, I tried putting "B" for blank/null columns and "N" for non-null columns, and formed a string out of that for ~20 rows from the same partition; it does show a lot of blank columns in between, and ~20 blank columns towards the end of the row. So do you mean that if we move all the NULL columns towards the end of the row, that will minimize the chance of intra-block row chaining? But won't queries doing "select *" still do that extra I/O anyway?

 select decode(C1,null,'B','N')||decode(C2,null,'B','N')|| .. from TRAN_TAB partition(P_03NOV2020) where rownum<20;

BBBBBBBNNNNNBBBBBBBBBBBBBBNNBNNBBBNBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBNBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBNBBBBBBBBNBNBBBBBBBBNNBBBBBBBBBBBBBNNBBBBBBBBBBBNNBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBNNNNNNNNNNBNNNNNBBBBBBBBBBBBBBBBBBBBBBB

BBBBBBBNNNNNBBBBBBBBBBBBBBNNBNNBBBNBBBBBBBBBBBBBBBBBBNBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBNNNNBBBBBBBBBBBNBBBBBBBBBBNBBBBBBBBBBNBBBBBBBBNBNBBBBBBBBNNBBBBBBBBBBBBBNNBBBBBBBBBBBNNBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBNNNNNBBBBBBBBBBBBBBBBBBBBBBB

BBBBBBBNNNNNBBBBBBBBBBBBBBNNBNNBBBNBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBNBBNBBBBBBBBBBBNBBBBBBBBBBBBBBBBBBBBBNBBBBBBBBNBNBBBBBBBBNNBBBBBBBBBBBBBNNBBBBBBBBBBBNNBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBNNNNNNNNNNBNNNNNBBBBBBBBBBBBBBBBBBBBBBB

BBBBBBBNNNNNBBBBBBBBBBBBBBNNBNNBBBNBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBNBBBBBBBBBBBBBBBBBBBBBBNBBBBBBBBNBNBBBBBBBBNBNBBBBBBBBBNBBBBBBBBBBBBBNNBBBBBBBBBBBNNBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBNNNNNBBBBBBBBBBBBBBBBBBBBBBB

BBBBBBBNNNNNBBBBBBBBBBBBBBNNBNNBBBNBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBNBNNBBBBBNBBBBBNBBBBBBBBBBBBBBBBBBBNBNBBBBBBBBNBNBBBBBBBBBNBBBBBBBBBBBBBNNBBBBBBBBBBBNNBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBNNNNNBBBBBBBBBBBBBBBBBBBBBBB

BBBBBBBNNNNNBBBBBBBBBBBBBBNNNNNBBBNBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBNBNNBBBBBBBBBBBNBBBBBNBBBBNBBBBBBBBBBNBBBBBBBBNBNBBBBBBBBNNBBBBBBBBBBBBBNNBBBBBBBBBBBNNBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBNNNNNNNNNNBNNNNNBBBBBBBBBBBBBBBBBBBBBBB

BBBBBBBNNNNNBBBBBBBBBBBBBBNNBNNBBBNBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBNBBBBBBBBBBBBBBBBBBBBBBNBBBBBBBBNBNBBBBBBBBNBNBBBBBBBBBNBBBBBBBBBBBBBNNBBBBBBBBBBBNNBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBNNNNNBBBBBBBBBBBBBBBBBBBBBBB

BBBBBBBNNNNNBBBBBBBBBBBBBBNNBNNBBBNBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBNBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBNBNBBBBBBBBNBNBBBBBBBBBNBBBBBBBBBBBBBNNBBBBBBBBBBBNNBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBNNNNNNNNNNBNNNNNBBBBBBBBBBBBBBBBBBBBBBB

BBBBBBBNNNNNBBBBBBBBBBBBBBNNBNNBBBNBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBNNNNBBBBBNBBBBBNBBBBBBBBBBBBBBBBBBBNBNBBBBBBBBNBNBBBBBBBBBNBBBBBBBBBBBBBNNBBBBBBBBBBBNNBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBNNNNNBBBBBBBBBBBBBBBBBBBBBBB

BBBBBBBNNNNNBBBBBBBBBBBBBBNNNNNBBBNBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBNBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBNNBNBBBBBBBBNBNBBBBBBBBBNBBBBBBBBBBBBBNNBBBBBBBBBBBNNBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBNNNNNBBBBBBBBBBBBBBBBBBBBBBB

BBBBBBBNNNNNBBBBBBBBBBBBBBNNNNNBBBNBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBNBBBBBNBBBBBBBBBBBBBBBBBBBBBBBBBNBNBBBBBBBBNBNBBBBBBBBBNBBBBBBBBBBBBBNNBBBBBBBBBBBNNBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBNNNNNBBBBBBBBBBBBBBBBBBBBBBB

BBBBBBBNNNNNBBBBBBBBBBBBBBNNBNNBBBNBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBNBBBBBBBBBBBBBBBBBBBBBBNBBBBBBBBNBNBBBBBBBBNBNBBBBBBBBBNBBBBBBBBBBBBBNNBBBBBBBBBBBNNBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBNNNNNBBBBBBBBBBBBBBBBBBBBBBB

BBBBBBBNNNNNBBBBBBBBBBBBBBNNBNNBBBNBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBNBBBBBBBBBBBBBBBBBBBBBBNBBBBBBBBNBNBBBBBBBBNBNBBBBBBBBBNBBBBBBBBBBBBBNNBBBBBBBBBBBNNBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBNNNNNBBBBBBBBBBBBBBBBBBBBBBB

BBBBBBBNNNNNBBBBBBBBBBBBBBNNNNNBBBNBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBNBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBNNBNBBBBBBBBNBNBBBBBBBBBNBBBBBBBBBBBBBNNBBBBBBBBBBBNNBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBNNNNNBBBBBBBBBBBBBBBBBBBBBBB

BBBBBBBNNNNNBBBBBBBBBBBBBBNNBNNBBBNBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBNBBNBBBBBBBBBBBNBBBBBNBBBBBBBBBBBBBNBNBBBBBBBBNBNBBBBBBBBBNBBBBBBBBBBBBBNNBBBBBBBBBBBNNBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBNNNNNBBBBBBBBBBBBBBBBBBBBBBB

BBBBBBBNNNNNBBBBBBBBBBBBBBNNBNNBBBNBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBNBBBBBBBBBBBBBBBBBNBBBBNBBBBBBBBNBNBBBBBBBBNBNBBBBBBBBBNBBBBBBBBBBBBBNNBBBBBBBBBBBNNBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBNNNNNNNNNNBNNNNNBBBBBBBBBBBBBBBBBBBBBBB

BBBBBBBNNNNNBBBBBBBBBBBBBBNNBNNBBBNBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBNBBBBBBBBBBBBBBBBBBBBBBNBBBBBBBBBBNBBBBBBBBNBNBBBBBBBBNNBBBBBBBBBBBBBNNBBBBBBBBBBBNNBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBNNNNNNNNNNBNNNNNBBBBBBBBBBBBBBBBBBBBBBB

BBBBBBBNNNNNBBBBBBBBBBBBBBNNBNNBBBNBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBNBBBBBBBBBBBBBBBBBNBBBBBBBBBBBBBNBNBBBBBBBBNBNBBBBBBBBBNBBBBBBBBBBBBBNNBBBBBBBBBBBNNBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBNNNNNBBBBBBBBBBBBBBBBBBBBBBB

BBBBBBBNNNNNBBBBBBBBBBBBBBNNBNNBBNNBBBBBBBBBBBBNBNNNNNBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBNBBBBBBBBBBBBBBBBBBBBBBNBBBBBBBBNBNBBBBBBBBNBNBBBBBBBBBNBBBBBBBBBBBBBNNBBBBBBBBBBBNNBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBNNNNNBBBBBBBBBBBBBBBBBBBBBBB
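
Extending the same decode-string idea, something like the below should show how far into the row the last non-null column sits; if most rows finish well before position 255, a rebuild that moves the (mostly) NULL columns to the end might keep them within a single row piece (again, C1..C4 are just placeholders for the full ~270-column list):

SELECT last_used_col, COUNT(*) AS row_count
FROM  (SELECT NVL(LENGTH(RTRIM(
                  DECODE(C1,NULL,'B','N') ||
                  DECODE(C2,NULL,'B','N') ||
                  DECODE(C3,NULL,'B','N') ||
                  DECODE(C4,NULL,'B','N') /* || ... remaining columns */
                , 'B')), 0) AS last_used_col   -- position of the last non-null column
       FROM   TRAN_TAB PARTITION (P_03NOV2020))
GROUP  BY last_used_col
ORDER  BY last_used_col;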

************* Below are the queries run for the DBMS_SPACE output *************


SET SERVEROUTPUT ON   -- needed in SQL*Plus so the DBMS_OUTPUT lines are displayed

DECLARE
  v_partition_name            VARCHAR2(30) := 'P_03NOV2020';
  v_total_blocks              NUMBER;
  v_total_bytes               NUMBER;
  v_unused_blocks             NUMBER;
  v_unused_bytes              NUMBER;
  v_last_used_extent_file_id  NUMBER;
  v_last_used_extent_block_id NUMBER;
  v_last_used_block           NUMBER;
BEGIN
  IF v_partition_name != 'NA' THEN
    -- partition-level call
    DBMS_SPACE.UNUSED_SPACE (segment_owner             => UPPER('USER1'),
                             segment_name              => UPPER('TRAN_TAB'),
                             segment_type              => UPPER('TABLE PARTITION'),
                             total_blocks              => v_total_blocks,
                             total_bytes               => v_total_bytes,
                             unused_blocks             => v_unused_blocks,
                             unused_bytes              => v_unused_bytes,
                             last_used_extent_file_id  => v_last_used_extent_file_id,
                             last_used_extent_block_id => v_last_used_extent_block_id,
                             last_used_block           => v_last_used_block,
                             partition_name            => v_partition_name);
  ELSE
    DBMS_SPACE.UNUSED_SPACE (segment_owner             => null, --UPPER('USER1'),
                             segment_name              => UPPER('TRAN_TAB'),
                             segment_type              => UPPER('TABLE PARTITION'),
                             total_blocks              => v_total_blocks,
                             total_bytes               => v_total_bytes,
                             unused_blocks             => v_unused_blocks,
                             unused_bytes              => v_unused_bytes,
                             last_used_extent_file_id  => v_last_used_extent_file_id,
                             last_used_extent_block_id => v_last_used_extent_block_id,
                             last_used_block           => v_last_used_block);
  END IF;

  DBMS_OUTPUT.PUT_LINE('v_total_blocks :'              || v_total_blocks);
  DBMS_OUTPUT.PUT_LINE('v_total_bytes :'               || v_total_bytes);
  DBMS_OUTPUT.PUT_LINE('v_unused_blocks :'             || v_unused_blocks);
  DBMS_OUTPUT.PUT_LINE('v_unused_bytes :'              || v_unused_bytes);
  DBMS_OUTPUT.PUT_LINE('v_last_used_extent_file_id :'  || v_last_used_extent_file_id);
  DBMS_OUTPUT.PUT_LINE('v_last_used_extent_block_id :' || v_last_used_extent_block_id);
  DBMS_OUTPUT.PUT_LINE('v_last_used_block :'           || v_last_used_block);
END;
/


DECLARE
  CURSOR cu_tables IS
    SELECT a.owner,
           a.table_name
    FROM   dba_tables a
    WHERE  a.table_name = 'TRAN_TAB'
    AND    a.owner      = UPPER('USER1');

  op1 NUMBER;   -- total blocks
  op2 NUMBER;   -- total bytes
  op3 NUMBER;   -- unused blocks
  op4 NUMBER;   -- unused bytes
  op5 NUMBER;   -- last used extent file id
  op6 NUMBER;   -- last used extent block id
  op7 NUMBER;   -- last used block
BEGIN
  Dbms_Output.Disable;
  Dbms_Output.Enable(1000000);

  Dbms_Output.Put_Line('TABLE                            UNUSED BLOCKS    TOTAL BLOCKS HIGH WATER MARK');
  Dbms_Output.Put_Line('------------------------------ --------------- --------------- ---------------');

  FOR cur_rec IN cu_tables LOOP
    Dbms_Space.Unused_Space(cur_rec.owner, cur_rec.table_name, 'TABLE PARTITION',
                            op1, op2, op3, op4, op5, op6, op7, 'P_03NOV2020');

    Dbms_Output.Put_Line(RPad(cur_rec.table_name,30,' ') ||
                         LPad(op3,15,' ')                ||
                         LPad(op1,15,' ')                ||
                         LPad(Trunc(op1-op3-1),15,' '));
  END LOOP;
END;
/

On Tue, Nov 17, 2020 at 3:56 PM Jonathan Lewis <jlewisoracle_at_gmail.com> wrote:

>
> Lok P,
>
> There are so many details about that "continued rows" statistic that have
> to be tested by version that it's not really safe to give you a straight
> answer to "how do you work out the impact".
>
> In your version I think the statistic is only incremented if the continued
> fetch has to visit another block (in earlier versions the count would go up
> even if all the pieces were in the same block), so one measure would be to
> compare the number of table fetch continued rows with the number of session
> logical I/Os - because (if I'm right) each fetch of a continued row will
> also be a session logical I/O. If most of the instance CPU is spent in SQL,
> and you aren't doing a huge number of tablescans (compare rows fetched by
> tablescan with rows fetched by rowid), then as a rough guide the count of
> continued row fetches compared to session logical I/O is an indication of
> CPU used because of rows that are spread across two or more blocks.
>
> This is a very rough guide, and you will need to check whether my memory
> of counting for your version is correct. Even if the tests show that the
> above is technically correct, it doesn't mean this table is the major
> culprit since (as you've already noted) you can get row migration and row
> chaining without breaking the 255 limit.
>
> Something to bear in mind when pursuing this issue - it takes a lot of
> developer and dba time in coding and testing to split a table into two sets
> of columns so even if it's theoretically the right thing to do it may not
> be sensible to do it.  Possibly the better option is to see if a one-off
> rebuild, and some simpler changes in coding and configuration could do some
> effective damage limitation.  (e.g. a common cause of row migration is
> having a bad setting of pctfree, and code that inserts row stubs and then
> updates them to make them grow).
>
> If you've got a clone of the production system, running a few static
> analyses of the data to see what its storage patterns look like might be a
> starting point to highlight potential waste of resources and strategies for
> addressing them. e.g.
>
> Compare table HWM with the space required to hold the rows in the table.  If
> intra-row chaining has done nasty things then there might be a lot of wasted
> space all through the table. This can be interpreted not only as a waste of
> space but also as an indication of excess costs of updates and queries.
> Write code to calculate the row length for a row (sum of column lengths +
> overheads) and get a histogram of row lengths - anything other than a
> simple bell curve is suggestive of "insert stub and grow". Write code to
> draw a picture of null and not null columns for a row (i.e. a string of 270
> bytes with "-" for used column, " " for blank column) and see where the
> unused columns are - you may find a column rebuild with a change in column
> order will make most rows use less than the critical 255.
>
> Apart from anything else, a consideration for your developers is that if
> they add 2 columns and then update a lot of history to put values in those
> two columns the impact on the table will probably be a disaster as Oracle
> will try to rewrite those columns on the update and may do terrible things
> with chaining AND migration. The effect MUST be tested carefully before
> implementation.
>
> Regards
> Jonathan lewis
>
>
>
>
>
>
>
>
>
> On Mon, 16 Nov 2020 at 21:14, Lok P <loknath.73_at_gmail.com> wrote:
>
>>  We are using version 11.2.0.4 of Oracle. We have a table with ~270
>> columns. But considering that row chaining is inevitable after exceeding
>> the ~255 column limit (I also see the same stated in the blog below), we
>> are asking the team not to add additional new columns and to plan to drop
>> existing columns so that the total number of columns stays within ~255.
>> But the team is asking us to measure the current overhead on the queries,
>> as we already have ~15 more columns in the table exceeding the ~255 column
>> limit. Is there a way to measure the overhead of row chaining for that
>> table in production in terms of DB time?
>>
>

--
http://www.freelists.org/webpage/oracle-l