Re: How Large is a BIG Relational DB

From: David Kosenko <davek_at_informix.com>
Date: 9 Mar 1994 05:34:15 GMT
Message-ID: <2ljn4n$91e_at_infmx.informix.com>


Kent S. Gordon writes:
>Does any relational database engine other than TeraData allow for
>multiple cpus and disks to work on a single
>query/insert/update/delete/load/indexing?
>
>I think Informix 6.x will have similar functionality.
>
>I am working on a large (300GB - 2TB) combined DSS/OLTP application.
>I would like to have an alternative to using TeraData (NCR 3600), but
>have found no other database that will use multiple cpu's well in
>handling large (10 - 100 million row) insert/update/delete statements.

6.0 will make use of multiple CPUs (via what we call CPU VPs, or virtual processors) for index builds. What you really want to look at is our 7.0 product, which includes a full PDQ (Parallel Data Query) capability. Combined with intelligent table fragmentation, this makes most of the operations you mention parallelizable (is that a word?). I believe the only exception is inserts, where you are generally providing records in serial fashion. Breaking the load into multiple insert processes would let it use multiple CPUs.
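To illustrate that last point, here is a minimal sketch of the "multiple insert processes" idea: split the records across several worker processes, each of which would hold its own database connection and load its share. This is not Informix client code; `insert_batch` is a hypothetical stand-in for whatever your client library's insert path looks like.

```python
# Sketch: parallelizing a bulk load by splitting rows across worker
# processes. In a real setup each worker opens its own connection and
# issues the INSERTs; here insert_batch is a stub that just counts rows.
from multiprocessing import Pool

def partition(rows, n_workers):
    """Deal rows round-robin into n_workers nearly equal batches."""
    batches = [[] for _ in range(n_workers)]
    for i, row in enumerate(rows):
        batches[i % n_workers].append(row)
    return batches

def insert_batch(batch):
    # Hypothetical: a real worker would connect and INSERT each row.
    return len(batch)

if __name__ == "__main__":
    rows = list(range(10))            # stand-in for 10 records to load
    batches = partition(rows, 4)
    with Pool(4) as pool:
        inserted = pool.map(insert_batch, batches)
    print(sum(inserted))              # every row accounted for once
```

The round-robin split keeps the batches within one row of each other in size, so no single worker becomes the bottleneck.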

OnLine 7.0 is designed specifically to address OLTP and DSS needs on the same server (simultaneously!).

Dave

-- 
Disclaimer: These opinions are not those of Informix Software, Inc.
**************************************************************************
"I look back with some satisfaction on what an idiot I was when I was 25,
 but when I do that, I'm assuming I'm no longer an idiot." - Andy Rooney
Received on Wed Mar 09 1994 - 06:34:15 CET
