Oracle FAQ Your Portal to the Oracle Knowledge Grid
HOME | ASK QUESTION | ADD INFO | SEARCH | E-MAIL US
 


design for query performances on large tables

From: <dexterward_at_despammed.com>
Date: 8 Feb 2005 03:43:26 -0800
Message-ID: <1107863006.499207.130250@g14g2000cwa.googlegroups.com>


Hi all,

SCENARIO:
- Oracle 9i RAC installation on Red Hat Linux, with partitioning enabled

The system has to correlate records coming from one table in SQL Server and from two others in an Oracle DB.
The records are generated at a very fast rate, around 1M records/day.
They arrive as text files and are imported into the databases to keep the complexity manageable; range partitioning is used in Oracle, but is unavailable in SQL Server.

The records are going to be correlated via the primary key, which is the same in both systems in a 1-1 relationship.
A set of queries with group functions is going to be run nightly on the correlated data.
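For concreteness, a nightly group query might look like the following sketch. All table and column names (ora_table_1, ora_table_2, mssql_import, rec_id, load_date, amount) are hypothetical, and it assumes the SQL Server rows have already been made visible to Oracle:

```sql
-- Hypothetical sketch: join the two Oracle tables and the imported
-- SQL Server rows on their shared primary key, then aggregate per day.
SELECT TRUNC(a.load_date)  AS load_day,
       COUNT(*)            AS n_rows,
       SUM(a.amount)       AS total_amount
FROM   ora_table_1  a
JOIN   ora_table_2  b ON b.rec_id = a.rec_id   -- 1-1 on the primary key
JOIN   mssql_import c ON c.rec_id = a.rec_id   -- rows copied from SQL Server
GROUP  BY TRUNC(a.load_date);
```

Since the join is 1-1 on the primary key, the aggregation has to group on something coarser than the key itself (here, the load day).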

The solutions we are considering are the following:

  1. Import the SQL Server records into Oracle as well, and correlate them through a view that we then optimize. That would be the best option, but it duplicates the information and requires tuning activity.
  2. Write a service that creates a table with the correlated info in SQL Server, so joins are not required and the queries would be less demanding.
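A minimal sketch of option 1 under stated assumptions (all names and the monthly partition scheme are hypothetical, not from the original post): a range-partitioned Oracle staging table receiving the SQL Server rows, plus a view that correlates the three tables so the nightly queries have a single target:

```sql
-- Hypothetical sketch for option 1: stage the SQL Server rows in a
-- range-partitioned Oracle table, then correlate through a view.
CREATE TABLE mssql_import (
  rec_id     NUMBER        PRIMARY KEY,
  load_date  DATE          NOT NULL,
  payload    VARCHAR2(200)
)
PARTITION BY RANGE (load_date) (
  PARTITION p_2005_01 VALUES LESS THAN (TO_DATE('2005-02-01','YYYY-MM-DD')),
  PARTITION p_2005_02 VALUES LESS THAN (TO_DATE('2005-03-01','YYYY-MM-DD'))
);

-- One correlated target for the nightly group queries.
CREATE OR REPLACE VIEW correlated_v AS
SELECT a.rec_id, a.load_date, b.some_col, c.payload
FROM   ora_table_1  a
JOIN   ora_table_2  b ON b.rec_id = a.rec_id
JOIN   mssql_import c ON c.rec_id = a.rec_id;
```

With partition keys aligned on the load date, old partitions can be dropped or exchanged cheaply as the 1M-records/day volume accumulates.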

Which would be the best approach in your opinion? Any help would be appreciated.
vl

Received on Tue Feb 08 2005 - 05:43:26 CST

