Re: small table takes more time as compared to bigger table

From: Mark D Powell <>
Date: Tue, 17 May 2011 09:38:18 -0700 (PDT)
Message-ID: <>

On May 17, 11:56 am, MadhavC <> wrote:
> Hello Experts,
> I am working on an enterprise application and facing an issue while
> working on DB queries.
> I have a table - TABLE1 (having total 5 million rows)
> col1, col2, col3, col4, col5, col6, col7, col8, col9, col10
> I have another table - TABLE2 (having total only 1000 rows)
> col1, col4, col6
> TABLE2 contains redundant information for col1, col4, and col6 from
> TABLE1.
> I run the query below 16000 times for different values of col4
> in the where clause -
> select col1, col4, col6 from TABLE1 where col4='SOMEVALUE'
> This finishes in a few milliseconds.
> I run the following query 16000 times for different values of col4
> in the where clause -
> select col1, col4, col6 from TABLE2 where col4='SOMEVALUE'
> Even though TABLE2 is a very small table compared to TABLE1, these
> 16000 queries take around 2 seconds, almost 5-6 times longer
> than the queries against TABLE1.
> TABLE1 and TABLE2 are indexed on col1.
> Can you please suggest what parameters I should be looking for to
> understand and fix this.
> Thanks in advance for your help.

First, the number of rows does not really represent size; you should consider the number of rows times the average row size, which identifies how many bytes of data you have to process.

Second, look at the explain plans for the queries. Make sure the index is being used. Stale statistics could cause the index not to be used. Also, if you simplified the queries you posted, make sure that no functions were placed on the table-column side of the operator in the where clause conditions instead of on the variable side.
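As a sketch (the table and bind-variable names here are just examples), you can generate a plan with EXPLAIN PLAN and display it with DBMS_XPLAN; the last two queries show why the side of the operator a function sits on matters:

```sql
-- Generate and display the execution plan (SQL*Plus / SQL Developer)
explain plan for
  select col1, col4, col6 from TABLE2 where col4 = 'SOMEVALUE';
select * from table(dbms_xplan.display);

-- Index-friendly: the function is on the variable side
select col1, col4, col6 from TABLE2 where col4 = upper(:val);

-- Index-hostile: a function on the column side prevents a
-- normal (non function-based) index on col4 from being used
select col1, col4, col6 from TABLE2 where upper(col4) = :val;
```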

If the statistics are current, check the dba_tables.avg_row_len and chain_cnt values. If the rows are short, then chain_cnt should be near zero as a percentage of the number of rows. If it isn't, then past activity on the table created migrated rows, and a high percentage of migrated rows could impact performance. This is not very likely, but I have seen it once or twice in the last decade.
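A quick way to check this (assuming you can gather statistics on your own schema; user_tables is used here in place of dba_tables so no DBA privileges are needed):

```sql
-- Refresh statistics first if they are stale
exec dbms_stats.gather_table_stats(user, 'TABLE2');

-- Compare chained/migrated rows to total rows
select table_name, num_rows, avg_row_len, chain_cnt,
       round(100 * chain_cnt / nullif(num_rows, 0), 2) as pct_chained
  from user_tables
 where table_name in ('TABLE1', 'TABLE2');
```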

You may also want to compare the table allocation to the needed allocation. If the table is significantly larger than necessary to hold the data including necessary block overhead (initrans, pctfree, fixed header) then a table reorganization may be of benefit. This is also not usually necessary or beneficial.
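One rough way to compare the two (a sketch only; block overhead is ignored, so treat the numbers as approximate):

```sql
-- Allocated bytes per segment vs. approximate data volume
-- (num_rows * avg_row_len from current statistics)
select s.segment_name,
       s.bytes                     as allocated_bytes,
       t.num_rows * t.avg_row_len  as approx_data_bytes
  from user_segments s
  join user_tables   t on t.table_name = s.segment_name
 where s.segment_name in ('TABLE1', 'TABLE2');
```

If allocated_bytes is many times approx_data_bytes, the segment may be carrying a lot of empty space below the high-water mark.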

The use of FGAC/VPD (fine-grained access control / Virtual Private Database) on the table could also be a factor, so you may want to check for it.
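You can check for VPD policies in the data dictionary; all_policies covers tables you can see, while dba_policies needs DBA privileges:

```sql
-- List row-level security policies attached to the tables
select object_owner, object_name, policy_name, function, enable
  from all_policies
 where object_name in ('TABLE1', 'TABLE2');
```

A policy here means every query against the table gets an extra predicate appended at parse time, which can change the plan and add per-execution overhead.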

HTH -- Mark D Powell