
Re: Querying 400m records: Tuning it

From: <karsten_schmidt8891_at_my-deja.com>
Date: Wed, 27 Oct 1999 11:47:18 GMT
Message-ID: <7v6oo6$oh0$1@nnrp1.deja.com>


Hi,

 Those are awesome numbers. You are certainly looking at some kind of transaction monitor to cut down on the number of concurrent sessions.
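
 If a full TP monitor is more than you want to take on, Oracle's multi-threaded server can also multiplex a large client population over a small pool of shared servers. Just as a rough sketch, not a sizing recommendation - the parameter values below are made-up placeholders you would have to work out in your own tests:

   # init.ora fragment - shared (multi-threaded) server, placeholder values only
   mts_dispatchers     = "(PROTOCOL=TCP)(DISPATCHERS=10)"
   mts_max_dispatchers = 20
   mts_servers         = 50
   mts_max_servers     = 200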

 Stress-testing: there used to be a whitepaper on Oracle TechNet describing how they did their Oracle 8 scalability tests. Sorry, I don't remember its title; you may have to search their site a bit.
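
 Since the original post mentions hash clusters and partitioning: just to sketch the kind of DDL involved (table and column names are made up, and the sizing numbers are placeholders to work out in testing) - a hash cluster keyed on the client id for the random single-row updates, and range partitioning on date for the 400m history rows:

   -- hash cluster for the client table, keyed on client id (placeholder sizing)
   CREATE CLUSTER client_cluster (client_id NUMBER(10))
     SIZE 256 HASHKEYS 8000000;

   CREATE TABLE clients (
     client_id  NUMBER(10) NOT NULL,
     name       VARCHAR2(40),
     balance    NUMBER(12,2)
   ) CLUSTER client_cluster (client_id);

   -- historical transactions range-partitioned by date
   CREATE TABLE transactions (
     client_id  NUMBER(10) NOT NULL,
     tx_date    DATE NOT NULL,
     amount     NUMBER(12,2)
   )
   PARTITION BY RANGE (tx_date) (
     PARTITION p1998 VALUES LESS THAN (TO_DATE('1999-01-01', 'YYYY-MM-DD')),
     PARTITION p1999 VALUES LESS THAN (TO_DATE('2000-01-01', 'YYYY-MM-DD')),
     PARTITION pmax  VALUES LESS THAN (MAXVALUE)
   );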

Karsten

In article <7v48gm$ugd$1_at_nnrp1.deja.com>,   o_dba_at_hotmail.com wrote:
> I have a requirement to configure an 8i system on Solaris to update
> 1000 records per second in a system with 8 million clients, and a
> total of 400 million historical transactions online. This is an OLTP
> system, and client updates will be random across all 8 million, so
> cache tuning is very difficult.
> I will use all features available, including hash clusters,
> partitioning, PQ and OPS. I am about to start testing the different
> options. Who out there has configured large systems for massive
> throughput, and what suggestions do you have? I'm looking forward to
> testing 8i to its limits.
>
> Regards
> Mark
> o_dba_at_hotmail.com
>
> Sent via Deja.com http://www.deja.com/
> Before you buy.
>

Received on Wed Oct 27 1999 - 06:47:18 CDT

