
Re: Performance problems with really big tables

From: Mark Powell <Mark.Powell_at_eds.com>
Date: 14 Dec 1998 14:06:45 GMT
Message-ID: <01be276b$4fed7aa0$a12c6394@J00679271.ddc.eds.com>


We have a dozen tables with row counts of 10 to 20 million each that are regularly involved in joins with acceptable performance. You need to look at each SQL statement where performance is a problem and tune those statements individually.

Also, the number of rows in a table matters a lot less than the number of rows times the average row length, i.e., the total size in bytes of the data the query will process, and how you get at those rows.
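If a table has been analyzed, a quick way to see the real volume is the data dictionary (a sketch; 'BIG_TABLE' is a placeholder name, and NUM_ROWS and AVG_ROW_LEN are only populated once statistics exist):

    -- Approximate data volume = row count * average row length.
    SELECT table_name, num_rows, avg_row_len,
           num_rows * avg_row_len AS approx_bytes
    FROM   user_tables
    WHERE  table_name = 'BIG_TABLE';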

If you post some of the queries to get help tuning them, be sure to include the explain plan, the Oracle version, the optimizer goal, and how the statistics, if any, were generated. You might want to post the stats as well.
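Generating and reading an explain plan looks roughly like this (a sketch: the table and column names are made up, and plan_table is assumed to already exist, created by running utlxplan.sql from $ORACLE_HOME/rdbms/admin):

    -- Explain the problem query under a statement id.
    EXPLAIN PLAN SET statement_id = 'big_join' FOR
      SELECT a.col1, b.col2
      FROM   big_table_a a, big_table_b b
      WHERE  a.key_id = b.key_id;

    -- Pull the plan back out, indented by depth in the plan tree.
    SELECT LPAD(' ', 2 * (LEVEL - 1)) || operation || ' ' ||
           options || ' ' || object_name AS query_plan
    FROM   plan_table
    START WITH id = 0 AND statement_id = 'big_join'
    CONNECT BY PRIOR id = parent_id AND statement_id = 'big_join';

    -- Statistics, if you are on the cost-based optimizer:
    ANALYZE TABLE big_table_a ESTIMATE STATISTICS SAMPLE 10 PERCENT;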

We do not use surrogate keys. We find them to be of little practical value.
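For anyone weighing the question quoted below, the two designs look like this (hypothetical tables, sketched only to show the difference; all names are made up):

    -- Compound (natural) key: the business columns themselves are the key.
    CREATE TABLE order_line_nat (
      order_no  NUMBER       NOT NULL,
      line_no   NUMBER       NOT NULL,
      item_code VARCHAR2(20) NOT NULL,
      CONSTRAINT order_line_nat_pk PRIMARY KEY (order_no, line_no)
    );

    -- Surrogate key: a meaningless generated number, usually from a sequence.
    CREATE SEQUENCE order_line_seq;

    CREATE TABLE order_line_sur (
      order_line_id NUMBER       NOT NULL,
      order_no      NUMBER       NOT NULL,
      line_no       NUMBER       NOT NULL,
      item_code     VARCHAR2(20) NOT NULL,
      CONSTRAINT order_line_sur_pk PRIMARY KEY (order_line_id)
    );

With the compound key, joins on the business columns need no extra lookup; the surrogate version adds a column and an index but keeps the key short and stable.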

Marco Ribeiro <mar_at_bart.inescn.pt> wrote in article
> We've got two tables with more than 6 million records in each one, and
> performance deteriorates as they grow. Can anyone give some advice on
> improving the performance of operations on these tables?
>
> Also, is it more efficient to have a surrogate key or a compound key?
>
Received on Mon Dec 14 1998 - 08:06:45 CST

