Re: Large dataset performance

From: jma <junkmailavoid_at_yahoo.com>
Date: 20 Mar 2007 10:15:18 -0700
Message-ID: <1174410918.931547.322040_at_b75g2000hsg.googlegroups.com>


On Mar 20, 5:33 pm, "Daniel" <danielapar..._at_gmail.com> wrote:
> On Mar 20, 11:39 am, "jma" <junkmailav..._at_yahoo.com> wrote:
>
> > Hello all,
>
> > I would like your opinion and experience on the performance of
> > writing large datasets. In one loop I am writing about 3.5 million
> > rows, each an integer and 3 doubles, to an InterBase db. All in all
> > it's about 100 MB. My time measurement was 210 seconds. Is this
> > normal? To me that seems like a veeerryyy long time....
>
> > TIA!!!!
>
Hello Daniel,

> You can batch your inserts, that is, perform a commit every two or
> three hundred rows rather than after every insert.

Thanks, that's a good tip. I'll see what I can do, maybe something
along the lines of the sketch below.
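Here is roughly what I'm picturing, totally untested: it assumes the
kinterbasdb DB-API driver, and the POINTS table, column names, data
source, and batch size are all placeholders I made up, so treat it as
a sketch rather than working code.

    import kinterbasdb

    con = kinterbasdb.connect(dsn='localhost:/data/test.gdb',
                              user='sysdba', password='masterkey')
    cur = con.cursor()

    # Fake stand-in for my real data source: 3.5M rows of (int, 3 doubles).
    rows = ((i, i * 0.1, i * 0.2, i * 0.3) for i in range(3500000))

    BATCH = 500  # commit every few hundred rows instead of every row
    sql = "INSERT INTO POINTS (ID, X, Y, Z) VALUES (?, ?, ?, ?)"

    count = 0
    for id_, x, y, z in rows:
        cur.execute(sql, (id_, x, y, z))
        count += 1
        if count % BATCH == 0:
            con.commit()  # one transaction per batch

    con.commit()  # flush the final partial batch
    con.close()

The idea being one transaction per few hundred rows instead of one
per row, if I've understood you correctly.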

> You could consider using a bulk load utility rather than inserts.

I don't know of any... any ideas? The closest thing I've turned up is
InterBase's EXTERNAL FILE tables; a rough sketch of what I think that
looks like is below.
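This part is guesswork on my end: the table, column, and file names
are invented, external tables apparently need fixed-length records
(hence all the CHAR columns), and the server has to be configured to
allow external file access, so the details will surely differ.

    import kinterbasdb

    con = kinterbasdb.connect(dsn='localhost:/data/test.gdb',
                              user='sysdba', password='masterkey')
    cur = con.cursor()

    # Declare a table backed by a fixed-width text file on the server.
    # Every column is CHAR so each record has a fixed byte length.
    cur.execute("""
        CREATE TABLE POINTS_EXT EXTERNAL FILE '/data/points.txt' (
            ID  CHAR(10),
            X   CHAR(25),
            Y   CHAR(25),
            Z   CHAR(25),
            EOL CHAR(1)
        )
    """)
    con.commit()

    # Load everything with one server-side statement, casting the
    # fixed-width text fields to the target types.
    cur.execute("""
        INSERT INTO POINTS (ID, X, Y, Z)
        SELECT CAST(ID AS INTEGER),
               CAST(X AS DOUBLE PRECISION),
               CAST(Y AS DOUBLE PRECISION),
               CAST(Z AS DOUBLE PRECISION)
        FROM POINTS_EXT
    """)
    con.commit()
    con.close()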

> Also, as Johnathan alluded to, you can expect faster performance if
> you drop any indexes on the table before loading, and rebuild them
> after loading.

I don't get this index thing. What do you mean by indexes, and how
would I drop and rebuild them? From a bit of searching it sounds like
the statements below are involved, but correct me if I'm off.
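What I pieced together, with the caveat that the index name here is
made up and that indexes backing PRIMARY KEY or UNIQUE constraints
apparently can't be switched off this way:

    import kinterbasdb

    con = kinterbasdb.connect(dsn='localhost:/data/test.gdb',
                              user='sysdba', password='masterkey')
    cur = con.cursor()

    # Stop per-row index maintenance before the big load...
    cur.execute("ALTER INDEX IDX_POINTS_ID INACTIVE")
    con.commit()

    # ... run the batched insert loop from above here ...

    # ...then rebuild the index once, in a single pass.
    cur.execute("ALTER INDEX IDX_POINTS_ID ACTIVE")
    con.commit()
    con.close()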

Thanks a LOT!!!!
