Re: Chained Rows

From: Gaja Krishna Vaidyanatha <gajav_at_yahoo.com>
Date: Mon, 28 Apr 2014 09:31:09 -0700 (PDT)
Message-ID: <1398702669.91492.YahooMailNeo_at_web184301.mail.ne1.yahoo.com>


Hey Mark,

I think we saw 2 different flavors of it. In one case it was DML to any column, and in the other it was DML to columns in the 2nd row-piece. Can't remember which version showed which, as we ran tests on both 11.2.0.3 and 11.2.0.4. It has been a few months and single-malts have erased the pain since then ;)

I have used Tim's "carl" method many times before to get trailing nulls when storing a row, but we could not use it here due to app-specific issues. In our case it was clear that there was an unnecessary inflation of custom/flex columns to take care of "worst-case scenarios" for ALL deployments of the application. The preferred approach would have been to have just as many custom columns as each deployment required, which would have kept us < 255 in all cases.

Hi Kenny,

I think there is still value in splitting the table, if you don't have the option of reducing the columns to < 255 in a single table AND if the access path to the table/partition is a full-scan. Here is why:

Prior to the DML, when you trace the query accessing the said table/partitions, you will observe "db file scattered read" on a full scan, as both row pieces will be in the same block. Once the DML is done, the bug kicks in: the row pieces land in different blocks, and with the SAME access path of a full scan, you will observe "db file sequential read" for the entire operation. Basically the full scan will be many times slower, as Oracle will be doing it using single-block reads. Similar and comparable wait-event switching happens on Exadata, where we go from "cell smart table scan" to "cell single block physical read". With 2 different tables, each with < 255 columns, a full scan will do "db file scattered read" even post-DML, thus getting around the bug.
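If anyone wants to confirm they are seeing this, a quick sanity check is possible without tracing. This is just a sketch (table name T_WIDE is hypothetical; the CHAINED_ROWS table comes from Oracle's utlchain.sql script):

```sql
-- Session statistic "table fetch continued row" climbing during a full scan
-- means Oracle is following row pieces into other blocks. Sample it before
-- and after running the full scan and compare.
SELECT sn.name, ms.value
FROM   v$mystat ms
JOIN   v$statname sn ON sn.statistic# = ms.statistic#
WHERE  sn.name = 'table fetch continued row';

-- List rows Oracle reports as chained/migrated (CHAINED_ROWS must exist,
-- created by $ORACLE_HOME/rdbms/admin/utlchain.sql):
ANALYZE TABLE t_wide LIST CHAINED ROWS INTO chained_rows;
SELECT COUNT(*) FROM chained_rows WHERE table_name = 'T_WIDE';
```

If the count jumps after the DML and falls back after a move/reorg, you are looking at the same behavior.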

If the access pattern is an index scan (unique), you are correct in your assertion that there will be more logical I/O to get the entire row, and splitting really may not help the cause. But it may still be worth a test: in the big picture, it does bring stability to SQL performance post-DML that is just not there with a single table with > 255 columns.
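For anyone who hasn't done a vertical split before, a sketch of the idea (all object and column names hypothetical, not from any real schema):

```sql
-- Split a > 255-column table into two halves, each under the 255-column
-- intra-block chaining threshold, both keyed on the same primary key.
CREATE TABLE t_hot  AS SELECT id, cust_name, status /* ...up to ~250 cols */ FROM t_wide;
CREATE TABLE t_cold AS SELECT id, flex_col_001      /* ...remaining cols  */ FROM t_wide;

ALTER TABLE t_hot  ADD CONSTRAINT t_hot_pk  PRIMARY KEY (id);
ALTER TABLE t_cold ADD CONSTRAINT t_cold_pk PRIMARY KEY (id);

-- Queries that genuinely need the whole row join the halves; full scans of
-- either half stay on multiblock reads even after DML.
CREATE OR REPLACE VIEW t_wide_v AS
  SELECT h.id, h.cust_name, h.status, c.flex_col_001
  FROM   t_hot h JOIN t_cold c ON c.id = h.id;
```

The trade-off is exactly as described above: unique index access now pays an extra join when both halves are needed, in exchange for predictable full-scan behavior.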

Hope this helps,

Cheers,

Gaja
 

Gaja Krishna Vaidyanatha,

CEO & Founder, DBPerfMan LLC
http://www.dbperfman.com
http://www.dbcloudman.com

Phone - +1 (650) 743-6060
LinkedIn - http://www.linkedin.com/in/gajakrishnavaidyanatha

Co-author: Oracle Insights: Tales of the Oak Table - http://www.apress.com/9781590593875
Primary Author: Oracle Performance Tuning 101 - http://www.amzn.com/0072131454
Enabling Exadata, Big Data and Cloud Deployment & Management for Oracle


________________________________
 From: Mark W. Farnham <mwf_at_rsiz.com>
To: dmarc-noreply_at_freelists.org; oracle-l_at_freelists.org 
Sent: Sunday, April 27, 2014 4:21 PM
Subject: RE: Chained Rows
 


Thanks for the bug note Gaja.
 
Was that any DML at all, or only DML that changed columns in a non-first row piece?
 
If the latter, Tim’s “carl” might be a solution if fewer than 255 columns are ever non-null.
 
mwf
 
From: oracle-l-bounce_at_freelists.org [mailto:oracle-l-bounce@freelists.org] On Behalf Of Gaja Krishna Vaidyanatha (Redacted sender "gajav@yahoo.com" for DMARC)
Sent: Sunday, April 27, 2014 2:01 PM
To: oracle-l_at_freelists.org
Subject: Re: Chained Rows
 
Hi Kenny,
 
In addition to everything the others have said, if you are on 11.2.0.3, you may be running into Bug#9373758 on tables that have more than 255 columns. The flavor of this bug we encountered was that any DML to the table (or its partitions) caused the second row piece, which was originally in the same block, to be migrated to a different block. So we went from 2 row pieces in the same block to 2 row pieces in two different blocks. Issuing a "move partition" fixed the issue and was the workaround until 11.2.0.4 was available. But we unfortunately also hit another flavor of the bug in 11.2.0.4.
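For reference, the "move partition" workaround is a one-liner (table and partition names hypothetical); rebuilding the partition re-packs both row pieces back into shared blocks:

```sql
-- Re-pack migrated row pieces; UPDATE INDEXES keeps local/global indexes
-- usable instead of leaving them UNUSABLE after the move.
ALTER TABLE t_wide MOVE PARTITION p_2014_04 UPDATE INDEXES;
```

Of course the relief only lasts until the next round of DML re-triggers the bug.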
 
So bottom line - we engaged in a table re-design to get the number of columns < 255. This was the only surefire way to avoid this issue. Hope this helps!
 
Cheers,
 
Gaja
 
Gaja Krishna Vaidyanatha,
CEO & Founder, DBPerfMan LLC
http://www.dbperfman.com
http://www.dbcloudman.com
Phone - +1 (650) 743-6060
LinkedIn - http://www.linkedin.com/in/gajakrishnavaidyanatha
Co-author: Oracle Insights: Tales of the Oak Table - http://www.apress.com/9781590593875
Primary Author: Oracle Performance Tuning 101 - http://www.amzn.com/0072131454
Enabling Exadata, Big Data and Cloud Deployment & Management for Oracle
 

________________________________

From: Tim Gorman <tim_at_evdbt.com>
To: oracle-l_at_freelists.org 
Sent: Thursday, April 24, 2014 4:52 AM
Subject: Re: Chained Rows
 
Ken,

I'm coming into this thread late, so I've not seen your original post.

Since the table has so many columns, and if there is the possibility that not all of those columns are used, have you tried re-organizing the table so that the most likely NULL columns trail and the most likely NOT NULL columns lead?

I have a PL/SQL stored procedure called CARL (file "carl.sql" at "http://evdbt.com/scripts/") which is designed to answer the "what if" question about using "trailing NULL columns" compression, much the same as the DBMS_COMPRESSION package is designed to answer the "what if" question about whether or not to use different types of compression (i.e. basic, oltp, hcc, etc). The big difference is that CARL does not temporarily create a compressed table as DBMS_COMPRESSION does, but calculates everything from gathered statistics residing in the DBA_TAB_COLUMNS view, so you'd want good statistics.
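This is not CARL itself (grab the real carl.sql from the URL above), but the flavor of the idea can be sketched from the same statistics (owner and table name hypothetical):

```sql
-- Rank columns by the fraction of rows in which they are NULL, using
-- optimizer statistics. Columns that are almost always NULL are candidates
-- to move to the end of the row, since Oracle does not store trailing NULLs.
-- Stale statistics make this meaningless, so gather stats first.
SELECT c.column_name,
       c.num_nulls,
       ROUND(c.num_nulls / NULLIF(t.num_rows, 0), 3) AS null_fraction
FROM   dba_tab_columns c
JOIN   dba_tables     t ON t.owner = c.owner AND t.table_name = c.table_name
WHERE  c.owner = 'APP' AND c.table_name = 'T_WIDE'
ORDER  BY null_fraction;   -- lowest first: these should lead; highest should trail
```

CARL does considerably more than this (it estimates the actual space effect), but this shows where the information comes from.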

It may not help your problem, but it'll tell you pretty quickly whether it is worth trying.

Hope this helps.

Thanks!

-Tim
On 4/24/2014 3:34 AM, Kenny Payton wrote:
> Yes. We are at 8k and I suspect a 16k block size would reduce the rate by half. We have talked about doing this for some time. Unfortunately the most widely affected area of the database is 20T of data out of 160T of databases. Also, efficiently managing multiple buffer pools becomes difficult.
>
> What I find challenging is that I don't see a way to avoid it altogether. If you have a table with more than 255 columns and you use any of the columns beyond 255, this is going to happen at some difficult-to-predict rate.
>
> Ideally I would want Oracle to put all row pieces in a single block if they would fit, and if not, then consider them chained and break them up. It seems to treat each row piece independently from the start.
>
> On Apr 24, 2014 1:42 AM, "Hans Forbrich" <fuzzy.graybeard_at_gmail.com> wrote:
>> Is this possibly a valid use case for a larger tablespace block size?
>>
>> On 23/04/2014 4:29 PM, Kenny Payton wrote:
>>> Thanks.
>>>
>>> Unfortunately the data and access patterns change pretty frequently. Another reason we find ourselves in this situation.
>>>
>>> On Apr 23, 2014 6:02 PM, "Sayan Malakshinov" <xt.and.r_at_gmail.com> wrote:
>>>> On Thu, Apr 24, 2014 at 12:56 AM, Kenny Payton <k3nnyp_at_gmail.com> wrote:
>>>>> I know a lot of the tricks for avoiding chained rows and migrated rows, but the only trick I know of to eliminate these intra- or inter-block chained rows is to break the table into multiple tables or the row into multiple rows.
>>>>
>>>> Sometimes it can be helpful to redefine the table, moving less-used columns to the end of the row. That reduces the extra work if most queries use only the leading columns.
--
http://www.freelists.org/webpage/oracle-l
Received on Mon Apr 28 2014 - 18:31:09 CEST