
Querying 400m records: Tuning it

From: <o_dba_at_hotmail.com>
Date: Tue, 26 Oct 1999 12:57:59 GMT
Message-ID: <7v48gm$ugd$1@nnrp1.deja.com>


I have a requirement to configure an 8i system on Solaris to update 1000 records per second, with 8 million clients and a total of 400 million historical transactions kept online. This is an OLTP system, and client updates will be random across all 8 million, so cache tuning is very difficult.
I will use all features available, including hash clusters, partitioning, PQ and OPS, and I am about to start testing the different options. Who out there has configured large systems for massive throughput, and what suggestions do you have? I'm looking forward to testing 8i to its limits.
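For illustration, a minimal sketch of the kind of 8i DDL being weighed here, using hypothetical CLIENTS and TRANSACTIONS tables (names, column lists and sizing numbers are assumptions, not from the post):

  -- Hypothetical hash cluster for the 8 million client rows: a lookup by
  -- CLIENT_ID resolves via the hash function instead of an index probe.
  CREATE CLUSTER client_cluster (client_id NUMBER)
    SIZE 256
    HASHKEYS 8000000;

  CREATE TABLE clients (
    client_id   NUMBER PRIMARY KEY,
    status      VARCHAR2(10),
    last_update DATE
  ) CLUSTER client_cluster (client_id);

  -- Hypothetical hash-partitioned history table (hash partitioning is new
  -- in 8i), spreading the 400 million transactions across partitions and
  -- letting parallel query (PQ) work the partitions concurrently.
  CREATE TABLE transactions (
    txn_id    NUMBER,
    client_id NUMBER,
    txn_date  DATE,
    amount    NUMBER
  )
  PARTITION BY HASH (client_id)
  PARTITIONS 64
  PARALLEL (DEGREE 8);

The partition count and parallel degree are placeholders; they would be tuned against the actual Solaris box and OPS node count during testing.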

Regards
Mark
o_dba_at_hotmail.com

Received on Tue Oct 26 1999 - 07:57:59 CDT
