Re: Table with ~255+ columns

From: Jonathan Lewis <jlewisoracle_at_gmail.com>
Date: Tue, 17 Nov 2020 10:26:23 +0000
Message-ID: <CAGtsp8kA-m9UMLY5LKjZdwCgB0JGHiKwXC6DWZr4o2O1Ksni9Q_at_mail.gmail.com>



Lok P,

There are so many details of the "continued rows" statistic that have to be tested version by version that it's not really safe to give you a straight answer to "how do you work out the impact".

In your version I think the statistic is only incremented if the continued fetch has to visit another block (in earlier versions the count would go up even if all the pieces were in the same block), so one measure would be to compare the number of table fetch continued rows with the number of session logical I/Os, because (if I'm right) each continued-row fetch will also be a session logical I/O. If most of the instance CPU is spent in SQL, and you aren't doing a huge number of tablescans (compare rows fetched by tablescan with rows fetched by rowid), then as a rough guide the ratio of continued-row fetches to session logical I/Os is an indication of the CPU used because of rows that are spread across two or more blocks.
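
As a rough sketch of that comparison (statistic names as they appear in 11.2 - verify against v$statname on your own version):

```sql
-- Instance-level figures for the comparison described above
-- (values are cumulative since startup; take two snapshots for a delta)
SELECT name, value
FROM   v$sysstat
WHERE  name IN (
           'table fetch continued row',
           'session logical reads',
           'table scan rows gotten',
           'table fetch by rowid'
       )
ORDER BY name;
```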

This is a very rough guide, and you will need to check whether my memory of the counting for your version is correct. Even if the tests show that the above is technically correct, it doesn't mean this table is the major culprit, since (as you've already noted) you can get row migration and row chaining without breaking the 255 limit.
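
If you want to check how much migration or chaining this particular table shows at all, the old ANALYZE mechanism is one option - a sketch, assuming a table called t1 (the CHAINED_ROWS holding table is created by the utlchain.sql script shipped with the database):

```sql
-- Create the holding table first: @?/rdbms/admin/utlchain.sql
ANALYZE TABLE t1 LIST CHAINED ROWS INTO chained_rows;

-- Note: this reports migrated and chained rows together,
-- it doesn't distinguish between the two
SELECT COUNT(*) AS chained_or_migrated
FROM   chained_rows
WHERE  table_name = 'T1';
```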

Something to bear in mind when pursuing this issue - it takes a lot of developer and DBA time in coding and testing to split a table into two sets of columns, so even if it's theoretically the right thing to do it may not be sensible to do it. Possibly the better option is to see whether a one-off rebuild and some simpler changes in coding and configuration could do some effective damage limitation (e.g. a common cause of row migration is a bad setting of pctfree combined with code that inserts row stubs and then updates them to make them grow).
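
A one-off rebuild might be as simple as the following sketch - t1, the index name, and the pctfree value are all placeholders you would have to choose for your own data:

```sql
-- Rebuild the table: rows are reinserted, so existing migrated rows
-- are cleaned up and more free space is left for future row growth
ALTER TABLE t1 MOVE PCTFREE 20;

-- MOVE changes rowids, so every index on the table becomes UNUSABLE
-- and has to be rebuilt afterwards
ALTER INDEX t1_pk REBUILD;
```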

If you've got a clone of the production system, running a few static analyses of the data to see what its storage patterns look like might be a starting point to highlight potential waste of resources and strategies for addressing it. For example:

Compare the table HWM with the space required to hold the rows in the table. If intra-row chaining has done nasty things then there might be a lot of wasted space all through the table. This can be interpreted not only as a waste of space but also as an indication of the excess cost of updates and queries.

Write code to calculate the length of each row (sum of column lengths + overheads) and get a histogram of row lengths - anything other than a simple bell curve is suggestive of "insert stub and grow".

Write code to draw a picture of the null and not-null columns for each row (i.e. a string of 270 characters with "-" for a used column and " " for a null column) and see where the unused columns are - you may find that a rebuild with a change in column order will make most rows use fewer than the critical 255 columns.
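
A sketch of those three analyses for a hypothetical table t1 with columns col1 .. col3 - for a 270-column table you would generate the long expressions from user_tab_columns rather than typing them by hand:

```sql
-- 1) Allocated space vs. approximate space the row data needs
SELECT s.bytes                      AS segment_bytes,
       t.num_rows * t.avg_row_len  AS approx_row_bytes
FROM   user_segments s, user_tables t
WHERE  s.segment_name = 'T1'
AND    t.table_name   = 'T1';

-- 2) Histogram of calculated row lengths (excluding row overheads)
SELECT row_len, COUNT(*) AS num_rows
FROM  (
        SELECT   NVL(VSIZE(col1),0)
               + NVL(VSIZE(col2),0)
               + NVL(VSIZE(col3),0) AS row_len   -- repeat for all columns
        FROM   t1
      )
GROUP BY row_len
ORDER BY row_len;

-- 3) Picture of used ("-") and null (" ") columns for each row
SELECT    NVL2(col1,'-',' ')
       || NVL2(col2,'-',' ')
       || NVL2(col3,'-',' ') AS column_map       -- repeat for all columns
FROM   t1;
```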

Apart from anything else, a consideration for your developers: if they add 2 columns and then update a lot of history rows to put values in those two columns, the impact on the table will probably be a disaster, as Oracle will try to rewrite those columns on the update and may do terrible things with chaining AND migration. The effect MUST be tested carefully before implementation.
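
One way to test that effect on a clone is to snapshot your own session's statistics around a trial backfill - a sketch, with new_col_1 / new_col_2 as hypothetical names for the added columns:

```sql
-- Before the trial: note the current session figures
SELECT sn.name, ms.value
FROM   v$mystat ms, v$statname sn
WHERE  ms.statistic# = sn.statistic#
AND    sn.name IN ('table fetch continued row', 'session logical reads');

-- The trial backfill of the newly added (hypothetical) columns
UPDATE t1 SET new_col_1 = 'x', new_col_2 = 'y';
COMMIT;

-- Re-query some of the updated rows, then re-run the statistics
-- query above and compare the deltas
```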

Regards
Jonathan Lewis

On Mon, 16 Nov 2020 at 21:14, Lok P <loknath.73_at_gmail.com> wrote:

> We are using version 11.2.0.4 of Oracle. We have a table with ~270
> columns. But considering that row chaining is inevitable after exceeding
> the ~255 column limit (I also see the blog below stating the same), we
> are asking the team not to add new columns and to plan on
> dropping existing columns so that the total number of columns will
> be restricted to within ~255. But the team is asking us to measure the
> current overhead on the queries, as we already have ~15 more columns in the
> table exceeding the ~255 column limit. Is there a way to measure the
> overhead of row chaining for that table in production in terms of DB time?
>

--
http://www.freelists.org/webpage/oracle-l
Received on Tue Nov 17 2020 - 11:26:23 CET