Re: table with rows>10000000

From: Jon Finke <finkej_at_ts.its.rpi.edu>
Date: 1995/04/18
Message-ID: <3n1ktv$53d_at_ts.its.rpi.edu>


In article <3n0pdr$tk_at_homer.alpha.net>, Saad Ahmad <sahmad_at_mfa.com> wrote:
>
>If I want to archive information and the data can grow to
>outrageous numbers > 10,000,000; would it be beneficial to
>break up the table into smaller tables, or are numbers in this
>neighbourhood no problem for Oracle?
>
>(If the information is broken up we might have to use UNION
> for some of the reports).
>
>If anyone has tables in this neighbourhood, I would be interested
>in knowing their average indexed query times, and index rebuild
>times; even if the hardware is different.

While not in the same neighborhood, I ran into a similar question on a recent project. I was collecting WTMP (signon/signoff) records from a collection of Unix machines, at about 130,000 records per month. By the time the table reached around 1,500,000 records, I discovered that clearing old records off the "front" of the table was a non-trivial task. (The SQL was simple, but the impact on our backup files was nasty...)
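Roughly, the delete itself was a one-liner like the following (table and column names here are illustrative, not the actual schema):

    -- Hypothetical single-table layout: one row per WTMP record.
    -- The statement is trivial, but every deleted row generates
    -- rollback and redo, which is what bloated the backup files.
    DELETE FROM wtmp
     WHERE connect_date < TO_DATE('01-JAN-1995', 'DD-MON-YYYY');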

My data was very date-dependent, and we are currently using one table per month. When we insert a record, we derive the table name from the "connect date". This makes it trivial to drop off last year's data, one month at a time. It does complicate some of the queries we need to make, but fortunately, the query software we use can easily be modified to merge several queries if a date range spans more than one table. (A paper describing this project can be found via the URL in my signature.)
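A sketch of the scheme (table names, columns, and the naming convention here are illustrative; the real details are in the paper):

    -- One table per month; the insert target is derived from the
    -- connect date, e.g. wtmp_9504 for April 1995.
    INSERT INTO wtmp_9504 (username, host, connect_date)
    VALUES ('jdoe', 'unix1', TO_DATE('18-APR-1995', 'DD-MON-YYYY'));

    -- Expiring a year-old month is one cheap DDL statement
    -- instead of a multi-million-row DELETE:
    DROP TABLE wtmp_9404;

    -- A date range that spans months is merged from per-month queries:
    SELECT username, connect_date FROM wtmp_9503
     WHERE connect_date >= TO_DATE('25-MAR-1995', 'DD-MON-YYYY')
    UNION ALL
    SELECT username, connect_date FROM wtmp_9504
     WHERE connect_date <  TO_DATE('05-APR-1995', 'DD-MON-YYYY');

(Since each record lives in exactly one month's table, UNION ALL is safe here and avoids the duplicate-elimination sort that a plain UNION would do.)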

-- 
Jon Finke                           finkej_at_rpi.edu
Senior Network Systems Engineer     http://www.rpi.edu/~finkej
Information Technology Services     518 276 8185 (voice) | 518 276 2809 (fax)
Rensselaer Polytechnic Institute    110 8th Street, Troy NY, 12180
Received on Tue Apr 18 1995 - 00:00:00 CEST
