Re: Design needed: large arrays
Date: Mon, 3 May 1993 09:18:21 GMT
Message-ID: <JS5CBHDI_at_gwdu03.gwdg.de>
: >Is not an array another term for repeating group, and does relational
: >theory have anything to say about this?
: >I would think so.
: Well, relational theory says you put each element of the array in a
: separate row. In the real world, however, this is rarely, if ever,
: done. It uses too much space and is too slow.
: The interesting question is how do we denormalize this information in
: the optimum fashion to take advantage of current technology.
Well, my arrays are time-resolved spectra: time is one dimension
and frequency is the other.
In the meantime, I have found a design which runs rather fast and does not
use too much disk space.
Instead of creating one row for each number, stored in a relation
which needs two indices (one for queries across time, one for queries across
frequency), I group them into bunches of ten. So I get a relation like this:
tdlvalue(datasetID,spectraID,bunchID,value0,value1,...value9)
This reduces index activity by a factor of 10.
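The bunching idea can be sketched quickly. The following is a minimal illustration in Python with SQLite standing in for the original Oracle setup; the table name and columns follow the relation above, but the helper functions (`insert_spectrum`, `read_spectrum`) and index names are my own additions, not part of the original design.

```python
# Sketch of the "bunch of ten" layout: one row holds ten consecutive
# frequency points of one spectrum, so only one index entry is made
# per ten values instead of per value.
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

cur.execute("""
    CREATE TABLE tdlvalue (
        datasetID INTEGER,
        spectraID INTEGER,
        bunchID   INTEGER,
        value0 REAL, value1 REAL, value2 REAL, value3 REAL, value4 REAL,
        value5 REAL, value6 REAL, value7 REAL, value8 REAL, value9 REAL
    )
""")
# Two indices, as in the text: one for queries across time,
# one for queries across frequency.
cur.execute("CREATE INDEX idx_time ON tdlvalue (datasetID, spectraID)")
cur.execute("CREATE INDEX idx_freq ON tdlvalue (datasetID, bunchID)")

def insert_spectrum(dataset, spectrum, points):
    """Store one spectrum, ten points per row (len(points) % 10 == 0)."""
    for start in range(0, len(points), 10):
        cur.execute(
            "INSERT INTO tdlvalue VALUES (?,?,?,?,?,?,?,?,?,?,?,?,?)",
            (dataset, spectrum, start // 10, *points[start:start + 10]),
        )

def read_spectrum(dataset, spectrum):
    """Reassemble one spectrum in frequency order from its bunches."""
    rows = cur.execute(
        "SELECT * FROM tdlvalue WHERE datasetID=? AND spectraID=? "
        "ORDER BY bunchID",
        (dataset, spectrum),
    ).fetchall()
    return [v for row in rows for v in row[3:]]

# A 1000-point spectrum occupies only 100 rows.
insert_spectrum(1, 1, [float(i) for i in range(1000)])
print(len(read_spectrum(1, 1)))  # prints 1000
```

A query across frequency works the same way in the other direction: select by `(datasetID, bunchID)` and pick the wanted `valueN` column out of each matching row.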
For me, it is now possible to insert about 15000 values/min without any
optimisation, and to read one spectrum of 1000 points in less than 2 s.
(Values for a Sun 10/20, 128 MB RAM, 1.3 GB tablespace.)
All comments I received point to an OODBMS or InterBase. I do not have
experience with either of them.
I need to do dynamic queries, because I use an interactive language (IDL)
to display the results. Is this possible with an OODBMS or with InterBase?
With Oracle, I manage this with an RPC server residing on the database
server, written with the Pro*C precompiler using dynamic SQL.
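The essential point of that setup is dynamic SQL: the server executes statement text it does not know until the client sends it. A minimal sketch of that idea, again in Python/SQLite as a stand-in for the Pro*C RPC server (the `run_dynamic` helper and the sample table contents are illustrative assumptions):

```python
# The server side of a dynamic-query service, reduced to its core:
# accept an arbitrary SELECT string at run time, execute it, and
# return column names plus rows to the interactive client.
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE tdlvalue (datasetID INTEGER, spectraID INTEGER,"
            " bunchID INTEGER, value0 REAL)")
cur.execute("INSERT INTO tdlvalue VALUES (1, 1, 0, 42.0)")

def run_dynamic(sql, params=()):
    """Execute a statement whose text is only known at run time."""
    cur.execute(sql, params)
    cols = [d[0] for d in cur.description]
    return cols, cur.fetchall()

# The client (e.g. an IDL session) ships whatever query it likes:
cols, rows = run_dynamic(
    "SELECT value0 FROM tdlvalue WHERE datasetID=? AND spectraID=?", (1, 1))
print(cols, rows)  # prints ['value0'] [(42.0,)]
```

With Pro*C the same thing is done with the dynamic SQL statement forms (PREPARE/EXECUTE with a descriptor), since the number and types of result columns are not known at precompile time.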
I would appreciate further help and information,
Joerg Plewe, MPI Dortmund, Germany Received on Mon May 03 1993 - 11:18:21 CEST