Re: Large dataset performance

From: Daniel <danielaparker_at_gmail.com>
Date: 20 Mar 2007 09:33:34 -0700
Message-ID: <1174408414.776592.13650_at_o5g2000hsb.googlegroups.com>


On Mar 20, 11:39 am, "jma" <junkmailav..._at_yahoo.com> wrote:
> Hello all,
>
> I would like your opinion and experience on the performance of writing
> large datasets. I am writing in one loop about 3.5 million rows where
> each row is an integer and 3 doubles to an Interbase db. All in all
> it's about 100MB. My time measurement was 210 seconds. Is this normal?
> To me it appears as a veeerryyy long time....
>
> TIA!!!!
>
You can batch your inserts, that is, perform a commit every two or three hundred rows rather than after every insert.
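
Something along these lines (an untested sketch; it assumes a Python DB-API
driver for Interbase such as kinterbasdb, and the connection parameters,
table, and column names are made up):

    import kinterbasdb  # DB-API 2.0 driver for Interbase/Firebird

    BATCH_SIZE = 250  # commit every few hundred rows, not every row

    # Hypothetical data: 3.5 million rows of (int, double, double, double).
    rows = ((i, i * 0.1, i * 0.2, i * 0.3) for i in range(3500000))

    con = kinterbasdb.connect(dsn='localhost:/data/test.gdb',
                              user='sysdba', password='masterkey')
    cur = con.cursor()
    sql = "INSERT INTO samples (id, x, y, z) VALUES (?, ?, ?, ?)"

    for i, row in enumerate(rows, 1):
        cur.execute(sql, row)
        if i % BATCH_SIZE == 0:
            con.commit()   # one commit per batch instead of per row
    con.commit()           # pick up the final partial batch
    con.close()

Depending on the driver, calling cur.executemany() once per batch may also
cut per-call overhead.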

You could consider using a bulk load utility rather than inserts.
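
If you go that route, the first step is usually to dump the rows to a flat
file in whatever format your loader expects. The comma-delimited layout
below is just an assumption; check what your tool actually wants:

    import csv

    rows = ((i, i * 0.1, i * 0.2, i * 0.3) for i in range(3500000))

    # Write comma-separated rows, no header, for a bulk-load tool to ingest.
    with open('samples.csv', 'w', newline='') as f:
        writer = csv.writer(f)
        writer.writerows(rows)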

Also, as Johnathan alluded to, you can expect faster performance if you drop any indexes on the table before loading, and rebuild them after loading.
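
For example, reusing the hypothetical connection from the first sketch
(the index and table names are made up, and note this won't work for
indexes that back a constraint):

    cur = con.cursor()
    cur.execute("DROP INDEX idx_samples_id")
    con.commit()

    # ... run the batched insert loop from the first sketch here ...

    cur.execute("CREATE INDEX idx_samples_id ON samples (id)")
    con.commit()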

With some databases, performance is heavily affected by how the transaction log is configured; I don't know anything about Interbase in that regard, though.

Regards,
Daniel Parker

Received on Tue Mar 20 2007 - 17:33:34 CET
