
Re: Deadly sins against database performance/scalability

From: Mark D Powell <Mark.Powell_at_eds.com>
Date: 29 Nov 2003 14:29:07 -0800
Message-ID: <2687bb95.0311291429.69d69bac@posting.google.com>


Daniel Morgan <damorgan_at_x.washington.edu> wrote in message news:<1069883739.896524_at_yasure>...
> Mark D Powell wrote:
>
> <snipped>
> >
> > Dusan, I would suggest you change your wording about issuing commit
> > within a loop since in many cases the commits need to be performed
> > within a loop. I think it is the frequency of commits rather than the
> > logical coding structure that is the issue.
> ><snipped>
>
> Going to take issue with you here, Mark. Granted, nothing should ever
> be done dogmatically. But 98% of the time a commit in a loop is not
> needed and not desirable; it is the result of someone moving from
> another RDBMS or someone who has poorly configured rollback.
>
> The rare times that a commit in a loop is required are the extreme
> exception.

Daniel, in the real world I would hazard the claim that most updates are done inside a loop, because the majority of DML activity takes place in batch processing. It might be online batch or traditional batch, but the programs all share the same basic feature: they perform their logic until there are no more input records, whether those records come from a batch file or a driving cursor. In other words, the programs all loop.
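
To make that concrete, here is a bare-bones sketch of the kind of driving-cursor loop I mean (the table and column names are invented for the example):

DECLARE
  CURSOR c_work IS
    SELECT order_id
      FROM orders
     WHERE status = 'PENDING';
BEGIN
  FOR rec IN c_work LOOP
    -- per-record business logic goes here
    UPDATE orders
       SET status = 'PROCESSED'
     WHERE order_id = rec.order_id;
  END LOOP;
  COMMIT;
END;
/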

When the target rows are also subject to concurrent DML access, commits within a loop are a necessity: you cannot hold locks on rows that are the target of OLTP activity for several seconds without adversely impacting performance. So the question is not whether you commit in a loop, but how frequently.
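
A minimal sketch of committing every N rows (again, the names are invented; note that because the driving cursor here reads the same table being updated, this is exactly the pattern that can hit the "snapshot too old" problem discussed next):

DECLARE
  CURSOR c_work IS
    SELECT order_id
      FROM orders
     WHERE status = 'PENDING';
  v_count PLS_INTEGER := 0;
BEGIN
  FOR rec IN c_work LOOP
    UPDATE orders
       SET status = 'PROCESSED'
     WHERE order_id = rec.order_id;
    v_count := v_count + 1;
    IF MOD(v_count, 1000) = 0 THEN
      COMMIT;  -- release row locks so concurrent OLTP sessions are not blocked
    END IF;
  END LOOP;
  COMMIT;      -- pick up the remainder
END;
/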

"Snapshot too old" errors (ORA-01555) are generally only a problem if the source and the update target are the same table. When that is the case, resorting to a driving table or a similar technique will normally avoid the problem, as in the sketch below.
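
One way to apply the driving table technique (a sketch only, with names invented for the example): capture the keys of the rows to be processed in a separate table up front, then drive the loop off that table so the long-running cursor never reads the table being updated:

-- populate the driving table in a single pass
INSERT INTO drv_keys (pk)
  SELECT order_id FROM orders WHERE status = 'PENDING';
COMMIT;

DECLARE
  CURSOR c_drv IS
    SELECT pk FROM drv_keys;
  v_count PLS_INTEGER := 0;
BEGIN
  FOR rec IN c_drv LOOP
    UPDATE orders
       SET status = 'PROCESSED'
     WHERE order_id = rec.pk;
    v_count := v_count + 1;
    IF MOD(v_count, 1000) = 0 THEN
      COMMIT;  -- the cursor reads drv_keys, which is not being updated,
               -- so the intermediate commits do not threaten its read consistency
    END IF;
  END LOOP;
  COMMIT;
END;
/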

Almost any shop that runs MRP II or ERP, which is nearly every major corporation around, has an environment that makes heavy use of batch-style processes. It may be a customer who triggers the job, but the task processes the same logic for every target row in a loop. When you add up the OLTP updates and compare them to the number of rows processed via batch, the majority of updates come from batch. And batch programs by nature involve a driving loop, hence my contention that it isn't doing updates in a loop structure that is the problem, but rather the frequency of commit.

You can disagree, but I think you are failing to consider how much real-world work is done in batch-type processing.

