Re: HELP: Large ORACLE Tables > 3million rows

From: René Brisson <rbrisson_at_centrum.dk>
Date: 1996/05/02
Message-ID: <4masq1$qms_at_underworld.centrum.dk>


In article <4ll4fq$qti_at_romeo.logica.co.uk>, carrdj_at_logica.com (Darius Carr) says:
>
>Hello,
>
>I need some help with the design of a system with some large ORACLE
>tables.
>
>The table will have 100 rows inserted 100 times per day. This will
>lead to approx 3.5 million rows per year, all of which must be
>accessible online.
>
>Each row is approx 150 bytes, so the table for one year will be approx
>500Mb (plus one index of about 50 to 100Mb).
>
>The routine which inserts the 100 records must run quickly and I am
>worried about the time to update the index.
>
>1) Is the index overhead going to be noticeable?
>
>I can split the table so that data over a certain age goes into a
>history table.
>
>2) How many rows should I keep in my main table before inserts are
>slowed?
>
>3) If I use a view (UNION ALL, to avoid duplicate elimination) to
>combine the two tables, will I get a large performance hit?
>
>I am tending towards two tables, with the big history table on a
>separate disk, but I do not want to go to this trouble if it is not
>really necessary.
>
>Many Thanks
>
>Darius Carr (Logica)

My experience here is based on tables of 100,000 to 3,000,000 records. I have seen remarkably good insert performance, especially when using Oracle arrays to insert, say, 100 records at a time.
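
A minimal sketch of the array idea in PL/SQL bulk-bind syntax (the MEASUREMENTS table and its ID/PAYLOAD columns are made up for illustration; from a 3GL the equivalent is an OCI array bind):

  DECLARE
    -- Collection types holding one batch of 100 rows.
    TYPE id_tab  IS TABLE OF measurements.id%TYPE      INDEX BY BINARY_INTEGER;
    TYPE pay_tab IS TABLE OF measurements.payload%TYPE INDEX BY BINARY_INTEGER;
    ids      id_tab;
    payloads pay_tab;
  BEGIN
    -- Fill the arrays (dummy data here; assumes ID is a NUMBER and
    -- PAYLOAD is a VARCHAR2).
    FOR i IN 1 .. 100 LOOP
      ids(i)      := i;
      payloads(i) := 'row ' || i;
    END LOOP;

    -- One bulk-bound INSERT covering all 100 rows instead of 100
    -- single-row statements; this is what cuts the per-row overhead.
    FORALL i IN 1 .. 100
      INSERT INTO measurements (id, payload)
      VALUES (ids(i), payloads(i));

    COMMIT;
  END;
  /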

I have, however, experienced performance problems when updating records in large tables.

If large updates or deletes have to be done, for instance a yearly clean-up, I can strongly recommend dropping the indexes before the clean-up and recreating them afterwards.
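
That pattern, sketched in SQL (index and table names made up for illustration):

  -- Drop the index so the bulk delete does not have to maintain it
  -- row by row.
  DROP INDEX measurements_ix;

  -- The yearly clean-up itself: remove everything older than a year.
  DELETE FROM measurements
   WHERE created < ADD_MONTHS(SYSDATE, -12);
  COMMIT;

  -- Rebuild the index in a single pass afterwards.
  CREATE INDEX measurements_ix ON measurements (created);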

My advice would be to try inserting directly into the main table first.
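
As for question 3: the view itself is harmless, since UNION ALL simply concatenates the two tables without the sort that plain UNION needs for duplicate elimination; the real cost depends on how well the optimizer applies your WHERE clauses to both halves. A sketch (names again made up):

  -- Current rows stay in MEASUREMENTS; aged rows move to
  -- MEASUREMENTS_HIST. The view presents them as one table.
  CREATE OR REPLACE VIEW all_measurements AS
    SELECT id, created, payload FROM measurements
    UNION ALL
    SELECT id, created, payload FROM measurements_hist;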

Yours

René Brisson