Re: HELP: Large ORACLE Tables > 3million rows

From: Peter Y. Hsing <hsing_at_ix.netcom.com>
Date: 1996/05/19
Message-ID: <4nnkhv$3tj_at_sjx-ixn4.ix.netcom.com>


Personally, I think your hardware setup is going to be your main bottleneck. What sort of configuration do you have? I hope you have some sort of hot-swappable RAID system. At the very least, you should have your root filesystem and SYSTEM tablespace on a separate drive from your data tablespaces.
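For illustration only (the original post names no paths or sizes, so everything below is invented), separating data and index tablespaces onto drives other than the one holding SYSTEM might look like:

```sql
-- Hypothetical layout: SYSTEM stays on its own drive (/u01);
-- application data and indexes go on separate drives (/u02, /u03).
-- Paths, names, and sizes are illustrative, not from the post.
CREATE TABLESPACE app_data
    DATAFILE '/u02/oradata/prod/app_data01.dbf' SIZE 500M;

CREATE TABLESPACE app_index
    DATAFILE '/u03/oradata/prod/app_index01.dbf' SIZE 250M;
```

Spreading tablespaces across spindles this way keeps heavy data I/O from competing with the data dictionary and rollback activity on the SYSTEM drive.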

-Peter

On Wed, 15 May 1996 13:17:27 -0500, Vince Cross <bartok_at_nortel.ca> wrote:

>René Brisson wrote:
>>
>> My experience on this is based on tables with from 100,000 to 3,000,000 records.
>> I've experienced remarkably good performance doing inserts, especially
>> when using Oracle arrays, thus inserting for instance 100 records at a time.
>>
>> I have, however, experienced performance problems updating records in
>> large tables.
>>
>> If large updates or deletes have to be done, for instance during a yearly clean-up,
>> I can strongly recommend dropping indexes before doing the cleanup and recreating
>> them afterwards.
>>
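(As an aside on the array inserts mentioned above: in the Oracle of that era the array interface was reached through Pro*C/OCI host arrays; PL/SQL bulk binds with FORALL arrived later, in Oracle 8i. The sketch below uses the later FORALL syntax purely to illustrate the idea of one round trip per 100-row batch instead of per row; the table and variable names are invented.)

```sql
-- Hypothetical table ORDERS; FORALL is an Oracle 8i+ feature used
-- here only to illustrate batched array inserts -- in 1996 the same
-- effect came from Pro*C/OCI host arrays.
DECLARE
    TYPE t_ids IS TABLE OF NUMBER INDEX BY PLS_INTEGER;
    l_ids t_ids;
BEGIN
    FOR i IN 1 .. 100 LOOP       -- build a batch of 100 rows in memory
        l_ids(i) := i;
    END LOOP;
    FORALL i IN 1 .. 100         -- one bulk insert for the whole batch
        INSERT INTO orders (order_id) VALUES (l_ids(i));
    COMMIT;
END;
/
```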
>
>If you are doing large numbers of updates or deletes for clean-up
>purposes, I would also recommend dropping any foreign key references to
>the table. Even with small tables (less than 100,000 rows) I have had a
>single row delete take over a minute due to the number of child tables
>referencing it. I guess that's the price we pay for data integrity.
>
>Vince
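Putting the two suggestions together, a clean-up pass might be sketched as below. All table, index, and constraint names here are hypothetical, invented for illustration:

```sql
-- Before a yearly clean-up of a large table, drop its indexes and any
-- child-table foreign keys that reference it, do the delete, then
-- recreate everything. Names are hypothetical.
ALTER TABLE order_items DROP CONSTRAINT fk_items_orders;   -- child FK
DROP INDEX idx_orders_custno;

DELETE FROM orders
 WHERE order_date < TO_DATE('1995-01-01', 'YYYY-MM-DD');
COMMIT;

CREATE INDEX idx_orders_custno ON orders (customer_no);
ALTER TABLE order_items ADD CONSTRAINT fk_items_orders
    FOREIGN KEY (order_id) REFERENCES orders (order_id);
```

With the foreign keys gone, each deleted parent row no longer triggers a lookup in every child table, which is exactly the per-row cost Vince describes.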
Received on Sun May 19 1996 - 00:00:00 CEST
