
Re: A Nifty Commit Optimization...

From: -=< a q u a b u b b l e >=- <aquabubble_at_Remove.No.Parma.Ham.Remove.geocities.com>
Date: Tue, 26 Oct 1999 00:42:51 +0100
Message-ID: <7v4shq$jcb$1@plutonium.btinternet.com>


Randy DeWoolfson <randy_at_euclidsys.com> wrote in message news:3814CF00.2A367D70_at_euclidsys.com...
> Hey all ORACLE Readers...
>
> This is not a question. I thought I would break up the monotony of the
> newbie questions and throw out this neat little trick I came up with for
> optimizing commits.
>
> Maybe you have noticed that when you work on a large block of records
> (say parsing, manipulating, and then either updating or inserting),
> committing once for every record gives you a lot of overhead and worse
> performance than committing, say, every 50 records...
>
> I had this problem, and then the following question: what exactly is
> the optimal number of records to process between commits?
>
> My first thought was to run the app several times, manually tuning the
> commit frequency in the code. This worked fine until I ran the same
> test the next day and got a different number (the number of users online
> and the transaction volume were variables).
>
> So realizing that there was no one correct answer, I came up with this:
>
> In the code, I picked a commit increment and set it to, say, 10.
>
> Then I started off through the loop processing records, keeping track
> of the total time for each record plus the commit. Then I would
> programmatically increase the number of records before the next commit.
> If the time per record got smaller, I would increase it again, and so
> on, until it got bigger; then I could use the resulting optimized
> commit interval for the rest of the processing...
>
> This was cool because the program would then tune itself for the
> current environment each time it was run.
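
In code, the self-tuning loop described above might look something like
this. This is a minimal PL/SQL sketch, not the poster's actual code: the
tables BIG_SOURCE and BIG_TARGET are hypothetical stand-ins (assumed to
have identical column lists), and the per-row work is reduced to a single
INSERT. DBMS_UTILITY.GET_TIME returns elapsed time in hundredths of a
second.

DECLARE
  l_interval     PLS_INTEGER := 10;   -- starting commit increment
  l_since_commit PLS_INTEGER := 0;    -- rows processed since last commit
  l_batch_start  PLS_INTEGER;         -- GET_TIME reading at start of batch
  l_per_row      NUMBER;              -- time per row for the last batch
  l_best         NUMBER;              -- best time per row seen so far
  l_tuned        BOOLEAN := FALSE;    -- set once a longer interval got worse
BEGIN
  l_batch_start := DBMS_UTILITY.GET_TIME;

  FOR rec IN (SELECT * FROM big_source) LOOP
    INSERT INTO big_target VALUES rec;  -- stand-in for the real row work
    l_since_commit := l_since_commit + 1;

    IF l_since_commit >= l_interval THEN
      COMMIT;
      l_per_row := (DBMS_UTILITY.GET_TIME - l_batch_start) / l_interval;

      IF NOT l_tuned THEN
        IF l_best IS NULL OR l_per_row < l_best THEN
          l_best     := l_per_row;        -- still improving, so
          l_interval := l_interval + 10;  -- try a longer interval
        ELSE
          l_interval := l_interval - 10;  -- last increase made it worse:
          l_tuned    := TRUE;             -- back off and stop tuning
        END IF;
      END IF;

      l_since_commit := 0;
      l_batch_start  := DBMS_UTILITY.GET_TIME;
    END IF;
  END LOOP;

  COMMIT;  -- final commit for any leftover rows
END;
/

One caveat worth noting: committing inside a loop that is still fetching
from an open cursor risks ORA-01555 (snapshot too old) on a busy system,
so the rollback segments need to be sized with that in mind.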

Nice idea, but this would incur an overhead in itself. My own experience and reading tell me that the performance curve for commit frequency eventually flattens out, so that lengthening the commit interval beyond a certain point produces no further advantage. I've found it safe to stick with between 500 and 1000 rows; that's pretty close to optimal in whatever circumstances I've come across. Whatever you do, don't commit every 1-50 rows!
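
The fixed-interval habit suggested here is the same loop with the timing
logic removed. Again, only a sketch against the same hypothetical tables:

DECLARE
  l_rows PLS_INTEGER := 0;
BEGIN
  FOR rec IN (SELECT * FROM big_source) LOOP
    INSERT INTO big_target VALUES rec;  -- stand-in for the real row work
    l_rows := l_rows + 1;
    IF MOD(l_rows, 500) = 0 THEN        -- commit every 500 rows
      COMMIT;
    END IF;
  END LOOP;
  COMMIT;  -- final commit for the remainder
END;
/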
