Re: Large dataset performance

From: paul c <toledobythesea_at_oohay.ac>
Date: Thu, 22 Mar 2007 00:42:34 GMT
Message-ID: <_hkMh.48883$zU1.4691_at_pd7urf1no>


jma wrote:
> Hello Paul,
>
> the story is like this: the server is given a set of engineering
> simulations that need to be performed. The clients (software clients,
> solvers actually) perform the simulations. The latter, by the way, are
> legacy software, so there's little if any touching them. My idea is
> that instead of waiting for each solver to spit its output to the disk
> and then collecting that output (which has many problems, especially
> with control of the files and what users, human or software, might do
> with them), I provide the clients with a stream to a single repository
> and they write there whatever has to be written. This also saves me
> from having the server collect, copy, paste and bundle files, where
> even if the files are there, sound and safe, each action is a problem
> of its own. Now, using a database saves me the trouble of developing
> everything on my own and gets me a dedicated, high-quality,
> fit-for-purpose application. Further, having all those gigas in one
> file, the next thing, as you might guess, is to start digging. Digging
> means creating all kinds of views of the data for postprocessing, as
> well as bundling parts and results for visualization. Oh, and don't
> forget, I also need to store geometry models, materials, metadata and
> all kinds of stuff in the same place, so that the server can use the
> script and the repository to nicely start the clients.
>
> Hope now it's clear :-/

Clear as mud. Go with files, perl, rexx, awk or somesuch until you know what you want!
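(For what it's worth, jma's single-repository idea could be sketched in a few lines. This is a minimal sketch only, assuming SQLite as the repository; the table layout and the names `run_id`, `write_result`, `runs_for_solver` are illustrative, not anything from the thread.)

```python
import sqlite3

def open_repository(path):
    # One shared database file replaces the scattered per-solver output files.
    conn = sqlite3.connect(path)
    conn.execute(
        """CREATE TABLE IF NOT EXISTS results (
               run_id  TEXT,
               solver  TEXT,
               step    INTEGER,
               payload BLOB
           )"""
    )
    return conn

def write_result(conn, run_id, solver, step, payload):
    # A solver client calls this instead of writing its own file to disk.
    conn.execute(
        "INSERT INTO results VALUES (?, ?, ?, ?)",
        (run_id, solver, step, payload),
    )
    conn.commit()

def runs_for_solver(conn, solver):
    # The "digging": a view over the data, built with plain SQL.
    cur = conn.execute(
        "SELECT run_id, step FROM results WHERE solver = ? ORDER BY step",
        (solver,),
    )
    return cur.fetchall()

conn = open_repository(":memory:")
write_result(conn, "run-001", "solverA", 0, b"\x00\x01")
write_result(conn, "run-001", "solverA", 1, b"\x02\x03")
print(runs_for_solver(conn, "solverA"))
```

Whether this beats files plus perl/awk depends on how much concurrent writing and ad-hoc querying the solvers actually need.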

p

Received on Thu Mar 22 2007 - 01:42:34 CET
