Re: Large dataset performance

From: paul c <toledobythesea_at_oohay.ac>
Date: Tue, 20 Mar 2007 22:20:11 GMT
Message-ID: <v6ZLh.44708$DN.28555_at_pd7urf2no>


Cimode wrote:
> On 20 mar, 16:39, "jma" <junkmailav..._at_yahoo.com> wrote:
> <<I would like your opinion and experience on the performance of
> writing
> large datasets. I am writing in one loop about 3.5 million rows where
> each row is an integer and 3 doubles to an Interbase db. All in all
> it's about 100MB. My time measurement was 210 seconds. Is this normal?
> To me it appears as a veeerryyy long time.... >>
> The principal reason I see is the *looping* algorithmics, which is not
> what a db does best. I suggest you learn better the power of set
> operations through a better mastery of good ol' SQL... Hope this helps...
>
>

This is not an SQL problem, and there is not enough information in the question to answer it properly, e.g., is there some application requirement that the 3.5 M rows be written atomically (all or nothing), are 100 users going to do this 100 times per day each, etc.? There was another comment about fewer commits, which would make no sense if some transaction notion were involved; in fact it would be dangerous.
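For what it's worth, here is a minimal sketch of the difference the commit pattern makes, assuming a DB-API 2.0 driver for Interbase/Firebird such as fdb; the DSN, credentials, table and column names below are made up for illustration, not taken from the original question:

  # Sketch: per-row commits vs. one transaction for a bulk load,
  # assuming the fdb DB-API 2.0 driver; all names are hypothetical.
  import fdb

  rows = [(i, 1.0 * i, 2.0 * i, 3.0 * i) for i in range(3_500_000)]

  con = fdb.connect(dsn='localhost:/data/test.fdb',
                    user='SYSDBA', password='masterkey')
  cur = con.cursor()

  # Committing inside the loop makes every row its own transaction,
  # which is slow and breaks any all-or-nothing requirement:
  # for row in rows:
  #     cur.execute("INSERT INTO samples (id, a, b, c) VALUES (?, ?, ?, ?)", row)
  #     con.commit()

  # One parameterized statement run over the whole set, committed once,
  # so the load is written atomically:
  cur.executemany("INSERT INTO samples (id, a, b, c) VALUES (?, ?, ?, ?)", rows)
  con.commit()
  con.close()

If the application really does require the whole load to be atomic, the single commit at the end is not a performance trick but the only correct choice.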

p
