performance for update degrading

From: Kumar Madduri <>
Date: Thu, 15 Mar 2012 10:25:23 -0800
Message-ID: <>

We have an application that started running slowly on March 1, and we found that the SQL being executed repeatedly was:

UPDATE a
SET    b = 'N', MODIFIED_BY = NULL, MODIFIED_ON = SYSDATE
WHERE  USR_ID = 'KK' AND EDIT_FLAG = 'Y';

The table 'a' has a LONG column, and the table has chained rows, which is expected. The data for this table from DBA_TABLES is:

BLOCKS       155030
AVG_ROW_LEN  28651

The update statement keeps slowing down as time progresses, and the application runs it whenever a user logs in. That is a design flaw, but the developers argue that they need it. Developers and users say this application has been running for 4 years with the same update statement.
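For reference, the figures above come from a query along these lines against DBA_TABLES (the owner name here is a placeholder; CHAIN_CNT is only populated by ANALYZE):

```sql
SELECT blocks, avg_row_len, num_rows, chain_cnt, last_analyzed
FROM   dba_tables
WHERE  owner      = 'APPOWNER'   -- placeholder owner
AND    table_name = 'A';
```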

What we found is that whenever the update slows down, running analyze table a compute statistics (or estimate statistics, or delete statistics) makes the update drop from 30 seconds to under one second.
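Since ANALYZE is obsolete for statistics gathering, the supported equivalents would be the DBMS_STATS calls below (a sketch only; the owner name is a placeholder):

```sql
-- Supported replacement for ANALYZE ... COMPUTE/ESTIMATE STATISTICS
BEGIN
  DBMS_STATS.GATHER_TABLE_STATS(ownname => 'APPOWNER',  -- placeholder owner
                                tabname => 'A');
END;
/

-- Supported replacement for ANALYZE ... DELETE STATISTICS
BEGIN
  DBMS_STATS.DELETE_TABLE_STATS(ownname => 'APPOWNER',  -- placeholder owner
                                tabname => 'A');
END;
/
```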

This issue does not happen in the UAT or DEV instances, and we found that AVG_ROW_LEN for the same table is 182 in UAT.

My question is: how do I find when AVG_ROW_LEN increased (date or time)? Maybe users updated the LONG column in this table, but I don't know the application. If I can somehow find that AVG_ROW_LEN increased around the March 1 time frame, then I can relate it to the slowness. Secondly, what does the analyze do internally that fixes the issue (I know ANALYZE is obsolete)? The number of rows does not change much (this is not a high-activity table, but it is not a static lookup table either).
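One possible way to check this is the optimizer statistics history that 11g retains (31 days by default, per DBMS_STATS.GET_STATS_HISTORY_RETENTION). A sketch, with the caveats that WRI$_OPTSTAT_TAB_HISTORY is an undocumented SYS table whose column names (SAVTIME, AVGRLN) should be verified on the instance, the owner name is a placeholder, and this only helps if the retention window still covers March 1:

```sql
SELECT h.savtime, h.rowcnt, h.blkcnt, h.avgrln
FROM   sys.wri$_optstat_tab_history h,
       dba_objects o
WHERE  o.owner       = 'APPOWNER'   -- placeholder owner
AND    o.object_name = 'A'          -- the table in question
AND    h.obj#        = o.object_id
ORDER  BY h.savtime;
```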

If it matters then,
SQL> select * from v$version;


Oracle Database 11g Enterprise Edition Release - 64bit Production
PL/SQL Release - Production
CORE Production
TNS for Linux: Version - Production
NLSRTL Version - Production

SQL>

Thank you for any pointers.

-- Kumar
Received on Thu Mar 15 2012 - 13:25:23 CDT