Qu. on best way to deal with 'really big' tables

From: Steve Loboyko <slob_at_mindspring.com>
Date: 1996/01/21
Message-ID: <4dslp9$6ak_at_brickbat.mindspring.com>


I am using Oracle to store very detailed logs from a real-time system. This may seem weird, but the logs must also be accessible in real time by multiple people. The table grows very quickly, and I'm debating how to keep its size reasonable.
Ideas:

  a. My app trims the "tail" (the oldest logs) off the table every minute or every hour;
  b. Use Oracle to do this with a stored procedure and the cron-like job facilities in Oracle (probably better than a; see the sketch below).
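For (b), something like this sketch is what I have in mind, using the DBMS_JOB queue that ships with Oracle 7 (the table name SYSLOG, its DATE column LOGGED_AT, and the 7-day retention period are all made-up for illustration; the job queue needs JOB_QUEUE_PROCESSES set above zero in init.ora):

    CREATE OR REPLACE PROCEDURE purge_old_logs IS
    BEGIN
      -- Trim the tail: drop everything older than the retention period
      DELETE FROM syslog WHERE logged_at < SYSDATE - 7;
      COMMIT;
    END;
    /

    -- Run the purge once an hour (interval expression is re-evaluated
    -- after each run)
    VARIABLE jobno NUMBER
    BEGIN
      DBMS_JOB.SUBMIT(:jobno, 'purge_old_logs;', SYSDATE, 'SYSDATE + 1/24');
      COMMIT;
    END;
    /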

It sure would be nice if Oracle itself could take old data and write (append?) it to a file, possibly naming the file <date>.log. I have done this sort of thing from the application, but can Oracle do it?
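From what I've read, the UTL_FILE package (new around release 7.3, I believe) can write OS files from PL/SQL, so something like this sketch may be possible; SYSLOG and its MSG column are made-up names again, and the target directory has to be listed in UTL_FILE_DIR in init.ora:

    CREATE OR REPLACE PROCEDURE archive_old_logs IS
      fh UTL_FILE.FILE_TYPE;
    BEGIN
      -- Open (or create) a file named after the date, e.g. 19960121.log,
      -- in append mode
      fh := UTL_FILE.FOPEN('/u01/logs',
                           TO_CHAR(SYSDATE, 'YYYYMMDD') || '.log', 'a');
      FOR r IN (SELECT logged_at, msg FROM syslog
                 WHERE logged_at < SYSDATE - 7) LOOP
        UTL_FILE.PUT_LINE(fh, TO_CHAR(r.logged_at, 'YYYY-MM-DD HH24:MI:SS')
                              || ' ' || r.msg);
      END LOOP;
      UTL_FILE.FCLOSE(fh);
      -- Only delete what has just been written out
      DELETE FROM syslog WHERE logged_at < SYSDATE - 7;
      COMMIT;
    END;
    /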

Also, I need to query the database on this real-time system every minute, and that query needs to be as fast as possible no matter how big the tables get. The system runs on what most people reading this would consider very modest hardware, and I've tuned the DB and the table as best I can. Is the following a good idea?

  a. At, say, 3:00 AM, have a stored procedure destroy and then re-create a little table holding just that day's rows from the "big" table;
  b. Whenever the big table is updated in any way, have triggers and procedures mirror those changes to the little table (see the sketch below).

Theoretically, I only have to do (a) once, and then maybe at 3:00 AM just clean old rows out of the little table.
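Here's roughly what I mean, assuming the log is insert-only so only inserts need mirroring (SYSLOG, SYSLOG_TODAY, and the columns are made-up names):

    -- (a), done once: build the little table from today's rows
    CREATE TABLE syslog_today AS
      SELECT * FROM syslog WHERE logged_at >= TRUNC(SYSDATE);

    -- (b): mirror every insert on the big table into the little one
    CREATE OR REPLACE TRIGGER syslog_mirror
      AFTER INSERT ON syslog
      FOR EACH ROW
    BEGIN
      INSERT INTO syslog_today (logged_at, msg)
        VALUES (:new.logged_at, :new.msg);
    END;
    /

    -- Nightly trim at 3:00 AM, scheduled with DBMS_JOB as above
    CREATE OR REPLACE PROCEDURE trim_little_table IS
    BEGIN
      DELETE FROM syslog_today WHERE logged_at < TRUNC(SYSDATE);
      COMMIT;
    END;
    /

The per-minute query would then run against SYSLOG_TODAY, which stays small no matter how big SYSLOG grows.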

I found out that a view is no good for this purpose because, as I suspected, it is merely a filter over the big table.

Input would be appreciated - please have mercy, I am a Windows programmer.
