Oracle FAQ Your Portal to the Oracle Knowledge Grid
HOME | ASK QUESTION | ADD INFO | SEARCH | E-MAIL US
 


Large History Table: How Dangerous to Let it Grow Significantly?

From: James <jw697569_at_hotmail.com>
Date: 20 Nov 2003 12:10:48 -0800
Message-ID: <44050548.0311201210.67cd5c1d@posting.google.com>


BACKGROUND:
Our application currently has a main history table with optional child history tables. Every transaction results in one main history table entry. The child history tables are filled in for about 10-20% of the transactions deemed "critical" (there are about 6 different child history tables -- at most one child history table applies to a given transaction, based on its type).

The main history table currently gets about 12m rows per month and we retain the data for 6 months before archiving. With new subsystems coming online, it will probably start to get about 20m rows per month soon.

The main history table has about 30 columns (half are optional) and the average size is about 80 bytes per row (I determined this by doing a "sum(vsize(col_name))" on every column in the table). Apart from the primary key, 5 other columns are indexed to allow for searching.
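For reference, that measurement can be sketched in SQL roughly as follows (a sketch only: `main_history` and the column names are placeholders, since the post does not name the actual table or columns):

```sql
-- Approximate average row size by summing per-column byte sizes.
-- VSIZE returns NULL for NULL values, so wrap each call in NVL
-- to count the optional columns as 0 bytes.
-- "main_history" and the column names below are placeholders.
SELECT AVG(  NVL(VSIZE(txn_id),   0)
           + NVL(VSIZE(txn_type), 0)
           + NVL(VSIZE(txn_date), 0)
           -- ... repeat for the remaining ~27 columns ...
          ) AS avg_row_bytes
  FROM main_history;
```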

The application is very performance sensitive (handling real-time investment transactions from hundreds of simultaneous web users where rapid performance is vital). The big concern for this table is that every transaction in the application inserts one row into the history table (as well as one child history table, 10-20% of the time). Of concern, but to a lesser degree, is that the history table itself is searchable since certain admin users need to be able to locate certain historical transactions.

The Oracle 8.1.7 database itself runs on a multi-CPU (dual or quad) Sun Solaris server. It uses RAID 1 (mirrored) disks (on a SAN, I think), and the database server is reachable from the other tiers over a fast local network.

QUESTION:
The issue: users would like to be able to have access to this history data online for 1 year or, ideally, up to 7 years. This means that instead of working with a history table with an expected size of 120m rows (6 mo. @ 20m rows per month), I am looking at the implications of moving to a history table of either 240m rows (1 year scenario) or 1,700m rows (7 year scenario). For the sake of argument, let's say that I want to keep data in the same structure as today (not break into new tables).
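As a back-of-envelope check on raw data volume, using the ~80 bytes/row figure from above (row data only; index segments, PCTFREE and block overhead add substantially more):

```sql
-- Projected raw row-data volume for the two growth scenarios.
-- 240m rows * 80 bytes  ~= 18 GB;  1,700m rows * 80 bytes ~= 127 GB.
SELECT ROUND(240000000  * 80 / 1024 / 1024 / 1024, 1) AS one_year_gb,
       ROUND(1700000000 * 80 / 1024 / 1024 / 1024, 1) AS seven_year_gb
  FROM dual;
```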

My question:
1. What are the implications for insert performance if: a) I let this table double in size (120m -> 240m rows)? b) I let this table grow by a factor of 14 (120m -> 1,700m rows)?

Is it possible to use a rule-of-thumb and say insert performance is likely to decrease by x%-y% in scenario (a) vs. a%-b% in scenario (b)?

2. What are the implications for search performance if: a) I let this table double in size (120m -> 240m rows)? b) I let this table grow by a factor of 14 (120m -> 1,700m rows)?

Keep in mind that I'm not a DBA, and the DBAs available are not very experienced. I'm trying to understand the implications of allowing this history table to grow ... and will look at radically different solutions (splitting up tables, using a data warehouse) if the performance implications are too significant.

Received on Thu Nov 20 2003 - 14:10:48 CST

