Re: 100 Gb DB solution wanted!!!

From: Jens Goerke <griffin@jgfl1.allcon.com>
Date: 1995/12/29
Message-ID: <DKD71z.zD@jgfl1.allcon.com>


Ramesh Dwarakanath (rameshdw@world.std.com) wrote:
> Hi,

Hiya!

> We are trying to set up a database for an application which
> will have a maximum of 100 GB of data to be queried at a
> time, but these queries will be few and far between... Hence
> the idea of having a 100 GB database does not appeal
> too much.

> Is there an intermediate way, maybe having a combination of
> raw data on the Unix box and reducing the size of the database
> to about 25 GB, OR storing the data in the database itself in a
> compressed fashion, OR any vendor/product to deal with a similar
> scenario, etc.???

Well, I would take a Sparc 20 with Online DiskSuite and stripe
together 4 arrays of 36 GB each, with one FSBE controller per array
to maximize throughput. That is 144 GB raw; after RAID level 5
parity overhead you would end up with roughly 120 GB of usable disk
space (chunk size 128 blocks, stripe size 128 blocks). With an
average throughput of 3 MB/s per array, that works out to about
12 MB/s in total.
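
In DiskSuite terms the setup might look something like this (a
minimal sketch using metainit(1M); the slice names c1t0d0s2 etc.,
the metadevice name d10 and the mount point /data are made up, and
a real 36 GB array would contribute several slices, not just one):

    # State database replicas must exist before any metadevice
    # can be built (the slice here is just an example):
    metadb -a -f c0t0d0s7

    # One RAID 5 metadevice striped across the four controllers
    # (c1-c4), interlace 128 blocks:
    metainit d10 -r c1t0d0s2 c2t0d0s2 c3t0d0s2 c4t0d0s2 -i 128b

    # Put a filesystem on the raw metadevice and mount it:
    newfs /dev/md/rdsk/d10
    mount /dev/md/dsk/d10 /data
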
I haven't tried single files that large yet, but IMHO splitting the
data across smaller files is more robust, since a disk problem
usually affects only a few files.

Have Fun,

        Jens

-- 
Missing coffee error - operator halted.
