Re: performance question

From: Thomas J. Kyte <tkyte_at_us.oracle.com>
Date: 1996/06/14
Message-ID: <31c1c989.23913335_at_dcsun4>


>Terry Rose <terryr_at_inetnebr.com> wrote:
>
>>My company is currently evaluating Oracle for OLTP. On the server side, we have a
>>Tricord with 4 Pentium CPUs, 512MB RAM, running Solaris 2.5 and Oracle 7.3. On
>>the client side, we're currently using Windows NT 3.51.
 

>>I have an OCI application being used to benchmark Oracle performance. A table is
>>created with two columns: key CHAR(5), data CHAR(75). An index is created on the
>>key.
 

>>In the first phase, 20000 rows with unique key values 0-19999 are inserted in
>>random order. The data column is initially blank.
>
>>The second phase involves selecting 10000 rows by key, in random order.
>
>>My question is about the third phase which updates the data column of 10000 random
>>rows. This phase progresses in "bursts".
>
>>Using a displayed counter, I can see
>>several hundred updates...then a pause...several hundred more...a pause...etc.
 

>>What is causing the pauses?
 

>>The final phase updates the key column of 10000 random rows to 'ZZZZZ'. This phase
>>also exhibits the burst behavior.
>
>>Performance in all phases has been disappointing. We have tried adjusting the
>>server configuration from SMALL to LARGE. This resulted in better performance in
>>phase 2, worse performance in phases 3 and 4. Best times were recorded with
>>MEDIUM settings. Any other suggestions or tuning tips would be appreciated!
>
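
For reference, here is the schema described above written out as DDL (the table and index names are my guesses -- none were given in the post):

SQL> create table bench (key char(5), data char(75));
SQL> create index bench_key on bench (key);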

Whenever you see bursty performance like this, it indicates that a lot of 'checkpoint not complete' events are probably being generated. Look in the file $ORACLE_HOME/rdbms/log/alert_$ORACLE_SID.log. In there I expect you will find lots of "Checkpoint Not Complete, Cannot Allocate New Log" messages. What is happening: LGWR wants to reuse an online redo log file, but the checkpoint that would free it hasn't finished yet, so all work stops until it does -- hence the bursts and pauses.

You should increase the size of your log files, especially if you have done a default database install. The default is 2 or 3 512k logs, which is horribly undersized for anything other than a demo database. Bump 'em up. To do this in the 'easiest' way (remember all, easy is in the eye of the beholder...), I would do the following in sql*plus or svrmgrl:

SQL> select member from v$logfile;

/usr/oracle/dbs/logxx.dbf
/usr/oracle/dbs/logyy.dbf               /* your output will be different -- */
                                        /* just a list of current logfiles  */

SQL> select bytes from v$log;

512000
512000                                  /* this will indicate that you do   */
                                        /* in fact have the small log files */
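
It is also worth noting which log group is CURRENT before you start dropping files below -- you cannot drop the current group without switching out of it first:

SQL> select group#, status, bytes from v$log;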

SQL> alter database add logfile '/usr/oracle/dbs/bigLog1.dbf' size 10m;
SQL> alter database add logfile '/usr/oracle/dbs/bigLog2.dbf' size 10m;

.... as many times as you want.

SQL> alter database drop logfile '/usr/oracle/dbs/logxx.dbf';

-- if you get an error that this is the current log file, issue:

SQL> alter system switch logfile;

-- and then:

SQL> alter database drop logfile '/usr/oracle/dbs/logxx.dbf';

-- do the above for each of the small logfiles
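
Once the big logs are in place, the messages should stop. A quick way to keep an eye on them (plain grep against the alert log path mentioned above):

$ grep -i "checkpoint not complete" $ORACLE_HOME/rdbms/log/alert_$ORACLE_SID.log

If no new messages show up while the benchmark runs, the logs are big enough.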

As for the OCI program, are you using array inserts or single row inserts? Array updates or single row updates? Array processing will speed up all aspects of your program many times over, since each execute ships a whole array of rows in one network round trip instead of one row per round trip.
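
To make the idea concrete, here is a minimal sketch of an array insert. It is written with Pro*C host arrays rather than raw OCI, because the equivalent OCI calls (obndra to bind the arrays, oexn to execute N iterations) take a page of setup -- the principle is identical. The table, column, and account names are assumptions matching the benchmark description above:

/* arrayins.pc -- a Pro*C sketch: 100 rows per execute instead of 1 */

#include <stdio.h>
#include <string.h>

EXEC SQL INCLUDE sqlca;

EXEC SQL BEGIN DECLARE SECTION;
static char uid[32] = "scott/tiger";    /* hypothetical account      */
static char keys[100][6];               /* key  CHAR(5)  + null byte */
static char data[100][76];              /* data CHAR(75) + null byte */
EXEC SQL END DECLARE SECTION;

int main()
{
    int i;

    EXEC SQL WHENEVER SQLERROR GOTO oraerr;
    EXEC SQL CONNECT :uid;

    for (i = 0; i < 100; i++) {
        sprintf(keys[i], "%05d", i);    /* keys '00000' .. '00099'   */
        strcpy(data[i], " ");           /* data starts out blank     */
    }

    /* host arrays: ONE statement, ONE round trip, 100 rows inserted */
    EXEC SQL INSERT INTO bench (key, data) VALUES (:keys, :data);

    EXEC SQL COMMIT WORK RELEASE;
    return 0;

oraerr:
    printf("ORACLE error: %.70s\n", sqlca.sqlerrm.sqlerrmc);
    EXEC SQL WHENEVER SQLERROR CONTINUE;
    EXEC SQL ROLLBACK WORK RELEASE;
    return 1;
}

The same host arrays work for the update phases: bind an array of keys (and of data values) and each execute updates a whole batch of rows in one round trip.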

<snip>
Thomas Kyte
Oracle Government
tkyte_at_us.oracle.com                          

http://govt.us.oracle.com -- Check out our web site! Brand new, uses Oracle Web Server and Database


statements and opinions are mine and do not necessarily reflect the opinions of Oracle Corporation

Received on Fri Jun 14 1996 - 00:00:00 CEST
