Oracle FAQ Your Portal to the Oracle Knowledge Grid
HOME | ASK QUESTION | ADD INFO | SEARCH | E-MAIL US
 


Re: insert many records in short time

From: John Vernes <vernes.j_at_consunet.nl>
Date: Tue, 27 Apr 1999 17:32:24 +0200
Message-ID: <7g4l8o$pdj$1@news.worldonline.nl>


Hello all,

Here is the source code I used. I have included two different programs: one WITH the bulk option (bulk.sql) and one WITHOUT (bulk_no.sql).

>I would like to see the skeleton source too. As I posted in previous
>messages, I need to insert at the rate of 18 records/second. Each record
>has 1200 columns.

WHOEHA, 1200 columns for ONE table is a bit much, isn't it?? You must be working on a data warehouse or so. 1200 * 20 bytes a column = 24000 bytes. What is your log_buffer size? I would suggest big, because of the long rows --> a log_buffer somewhere between 2M and 4M. Do you have any indexes on this table? I would suggest dropping 'em and recreating them AFTER your batch job.
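As a rough sketch of that advice (the parameter syntax is standard init.ora; the table and index names here are invented for illustration, they are not from bulk.sql):

```sql
-- init.ora: a bigger redo log buffer for long rows (value in bytes)
-- log_buffer = 2097152

-- Drop the index(es) before the batch job...
DROP INDEX my_data_ix1;

-- ...run the batch insert here...

-- ...and recreate them afterwards. NOLOGGING (Oracle8 syntax)
-- skips most redo generation for the index build itself.
CREATE INDEX my_data_ix1 ON my_data (process_id, object_class_id)
  NOLOGGING;
```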

>I'd also like to know what type of hardware you are using, including memory
>and disk storage. I'd like to know the CPU performance. Did you run this
>over a long period of time?

We are using Oracle 8.0.5.1.3 on NT: 4 * PII-450MHz (2Mb cache) Compaq Proliant 7000 with 2.5Gb of memory, of which 1.9Gb is used by our database.
Further we have 30 * 4.3Gb RAID disks, for better read and write performance in the database and for more uptime:

2 * 4.3 Gb for NT/Oracle/Programs/Pagefiles, RAID1 (effective use = 4.3Gb)
2 * 4.3 Gb for redo, RAID1 (effective use = 4.3Gb)
2 * 4.3 Gb for rollback, RAID1 (effective use = 4.3Gb) --> good write activity
1 * 4.3 Gb for the temporary tablespace
2 * 6 * 4.3 Gb for data, RAID5 --> good for READ performance, not on writes, but you can't have 'em all
2 * 6 * 4.3 Gb for indexes, RAID5 --> good for READ performance, but you can't have 'em all

We have 3 cache controllers which address the drives. Each one is provided with 64Mb of memory.

Quite a machine, I must admit, but I would say this performance must also be possible on a less well equipped machine. The CPU is a relevant factor in the array bulk inserts, as is the available memory on your server. Never, and I say
never, cut on your memory when you install a db-server. Extra memory is guaranteed to give an immediate performance gain. CPU power is important too: ALL blocks that need to be handled GO THROUGH the processor, so those MHz ARE relevant!!
Sorry, I lost track.
Do NOT use too big an array size, this will cost ya. The optimum (in our case) lies somewhere between 1000 and 2000. We figure that is because above that, the overhead of claiming memory for the PL/SQL tables (more than 2Mb in our case) becomes more significant. Differences between an array size of 100 and 1000 are almost not noticeable in our case.
AGAIN: our table has 10 columns, of which 7 are NOT NULL. We have 4 indexes on the table (1 primary key on object_id, 1 on process_id, object_class_id, value,
1 on input_id, object_class_id and 1 on process_id, input).
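I won't paste all of bulk.sql here, but the array-insert idea, as a rough sketch (the table and column names are invented, not our real ones; the FORALL bulk bind shown is 8i PL/SQL syntax, on 8.0.x you get the same effect with array binds from the client side, e.g. Pro*C or OCI):

```sql
DECLARE
  TYPE t_ids  IS TABLE OF NUMBER       INDEX BY BINARY_INTEGER;
  TYPE t_vals IS TABLE OF VARCHAR2(40) INDEX BY BINARY_INTEGER;
  l_ids  t_ids;
  l_vals t_vals;
  -- the sweet spot we measured lies between 1000 and 2000
  c_array_size CONSTANT BINARY_INTEGER := 1000;
BEGIN
  -- fill the PL/SQL tables in memory first
  FOR i IN 1 .. c_array_size LOOP
    l_ids(i)  := i;
    l_vals(i) := 'row ' || i;
  END LOOP;
  -- one context switch to the SQL engine for the whole array,
  -- instead of one INSERT round trip per row
  FORALL i IN 1 .. c_array_size
    INSERT INTO my_table (object_id, value)
    VALUES (l_ids(i), l_vals(i));
  COMMIT;
END;
/
```

The non-bulk variant (bulk_no.sql) would simply put the INSERT inside the FOR loop, one statement execution per row, which is where the difference shows.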

The calls in the scripts to pk_timer are calls to our own packages, which create our own timing statistics, in case you were wondering.

Well, I hope everyone gets what they want, and that is good performance on whatever system you're using. To all you people out there not using Oracle8 yet: put aside your doubts and use it if possible. Besides a few minor issues it's a hell of a db (starting from 8.0.5).

Greetz
-John Received on Tue Apr 27 1999 - 10:32:24 CDT

