
Re: Table open times and rowsize discontinuity

From: <sybrandb_at_hccnet.nl>
Date: Thu, 09 Aug 2007 07:19:22 +0200
Message-ID: <co8lb3dlopcg83vfmb78823cvghtvjdjqv@4ax.com>


On Wed, 08 Aug 2007 14:34:38 -0700, dean <deanbrown3d_at_yahoo.com> wrote:

>On Aug 8, 2:27 pm, DA Morgan <damor..._at_psoug.org> wrote:
>> dean wrote:
>> > Hello all,
>>
>> > We have a table T with the following structure:
>>
>> > Name                           Null?    Type
>> > ------------------------------ -------- --------------------
>> > USER_I                         NOT NULL VARCHAR2(30)
>> > FORM_I                         NOT NULL VARCHAR2(100)
>> > CATEGORY_I                     NOT NULL VARCHAR2(100)
>> > KEY_I                          NOT NULL VARCHAR2(100)
>> > VALUE_X                                 VARCHAR2(2000)
>>
>> > By varying the size of the VALUE_X field (varchar2(N)) from N=100 to
>> > N=2000, however, we can alter the time it takes to open this table
>> > onto a grid or to scroll through all of its records. We are using
>> > Borland's TADOQuery here. Oracle is 9.2 or 10g, Windows thick client
>> > over a LAN.
>>
>> > The data itself was identical in every test, however, since the
>> > longest actual VALUE_X value was only 27 characters.
>>
>> > Times to open a TdbGrid on user_preference table:
>>
>> > VALUE_X field: Time to open:
>>
>> > varchar2(100): 10.44s
>> > varchar2(200): 10.75s
>> > varchar2(300): 10.91s
>> > varchar2(400): 10.93s
>> > varchar2(500): 11.21s
>> > varchar2(600): 11.15s
>> > varchar2(700): 28.44s <- steps up here
>> > varchar2(800): 26.84s
>> > varchar2(900): 26.59s
>> > varchar2(1000): 26.36s
>> > varchar2(1100): 26.78s
>> > varchar2(1200): 26.42s
>> > varchar2(1300): 26.78s
>> > varchar2(1400): 28.27s
>> > varchar2(1500): 28.96s
>> > varchar2(1600): 28.34s
>> > varchar2(1700): 27.64s
>> > varchar2(1800): 28.5s
>> > varchar2(1900): 28.13s
>> > varchar2(2000): 28.36s
>>
>> > Why does this time step up at N = 700 characters? I have narrowed it
>> > down to the ADO component, since an alternative component (ODAC) does
>> > not report ANY difference, irrespective of the size of the field.
>> > SQL*Plus also reports the same number of bytes transferred and the
>> > same times to open, irrespective of N.
>>
>> > A similar effect was noted in another table (one with around 30
>> > fields) at N = 600, so I am assuming the total row size is the
>> > property to consider.
>>
>> > Thanks for any hints. We are considering removing such wide fields
>> > and using a lookup table instead, since the data is sparsely
>> > populated.
>>
>> > Dean
>>
>> Look at the array size and cache size settings. My guess is that it is
>> in some manner related to ADO or to the tool you are using (Delphi?).
>> It isn't something that happens in the database itself, which you can
>> confirm by running the same query in SQL*Plus.
>>
>> Anything you ever see in a tool that you don't see in SQL*Plus IS the
>> tool.
>> --
>> Daniel A. Morgan
>> University of Washington
>> damor..._at_x.washington.edu (replace x with u to respond)
>> Puget Sound Oracle Users Group - www.psoug.org
>
>This is using a server-side cursor, so caching won't have any effect.
>For the array fetching, the only parameter I can find in the ADO
>documentation is to append FetchSize=N to the connection string that
>gets passed to Oracle, but I am not sure whether ADO is overriding it,
>since it has no effect.
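
A quick way to confirm that, along the lines Daniel suggests, is to time
the same full scan in SQL*Plus and compare the SQL*Net statistics between
the narrow and the wide versions of the column. The sketch below is only
illustrative; it uses the user_preference table from the post, and the
exact figures will of course differ per environment.

  SET TIMING ON
  SET AUTOTRACE TRACEONLY STATISTICS

  -- run once per variant of the table, e.g. with VALUE_X declared as
  -- varchar2(100), varchar2(700) and varchar2(2000)
  SELECT * FROM user_preference;

  -- Compare "bytes sent via SQL*Net to client" and
  -- "SQL*Net roundtrips to/from client" between runs: if these and the
  -- elapsed time stay flat while the ADO grid slows down, the problem
  -- is in the client layer, not in the database.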

As soon as the number of records fetched per round trip times the record
size exceeds SQL*Net's SDU (session data unit) and/or your network's MTU,
you can see remarkable slowdowns. Simply raising the array fetch size to
the ceiling doesn't help. Many years ago I conducted an investigation for
an online banking system: as soon as I increased the SQL*Plus arraysize to
anything above 10, performance collapsed. That was the net result of
exceeding the SDU and the MTU.
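
To see that interaction for yourself, something like the following
SQL*Plus sketch will do; the arraysize values are arbitrary examples, and
the SDU itself is not set in SQL*Plus at all but comes from the SDU
setting in the tnsnames.ora connect descriptor.

  SET TIMING ON
  SET AUTOTRACE TRACEONLY STATISTICS

  SET ARRAYSIZE 10
  SELECT * FROM user_preference;

  SET ARRAYSIZE 100
  SELECT * FROM user_preference;

  -- Fewer roundtrips is not automatically faster: once arraysize times
  -- the average row size no longer fits within the negotiated SDU or a
  -- single MTU, every fetch is split across multiple packets and the
  -- elapsed time can jump instead of dropping.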

-- 
Sybrand Bakker
Senior Oracle DBA
Received on Thu Aug 09 2007 - 00:19:22 CDT
