
Re: Deadly sins against database performance/scalability

From: Mark D Powell <Mark.Powell_at_eds.com>
Date: 30 Nov 2003 11:05:28 -0800
Message-ID: <2687bb95.0311301105.6a405a35@posting.google.com>


Daniel Morgan <damorgan_at_x.washington.edu> wrote in message news:<1070150069.551425_at_yasure>...
> Mark D Powell wrote:
> >
> > You can disagree, but I think you are failing to consider how much
> > real world work is done in batch type processing.
> >
> > -- Mark D Powell --
>
> And I will. I remember the first time I was at Boeing and wrote code that
> did that. Seemed reasonable at the time as I was bringing in gigabytes
> every weekend. But now with UNDO I'd argue just the opposite. Disk is so
> inexpensive it makes no sense not to just assign the equivalent of 1 or
> more drives, often 40+GB, to UNDO and let it run to completion.
>
> But your point is valid in a data warehouse situation. Can we at least
> agree that commits in OLTP loops are almost undoubtedly without merit?

I think we still disagree. In a data warehouse I would expect there to be few updaters, but in a large, heavily used OLTP system where concurrent access to the same rows is a reality, commits within a loop will be a necessity for some processes.
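To be concrete, here is a rough PL/SQL sketch of the pattern I mean; the table and column names are just made up for illustration, and the commit interval is a tuning knob, not a recommendation:

    DECLARE
      v_rows PLS_INTEGER := 0;
    BEGIN
      FOR r IN (SELECT item_id FROM mrp_demand WHERE processed = 'N') LOOP
        UPDATE mrp_demand
           SET processed = 'Y'   -- stand-in for the real planning update
         WHERE item_id = r.item_id;

        v_rows := v_rows + 1;
        IF MOD(v_rows, 1000) = 0 THEN
          COMMIT;                -- release row locks so other sessions can get at the rows
        END IF;
      END LOOP;
      COMMIT;                    -- catch the tail end
    END;
    /

Committing less often holds the locks longer; committing more often generates more redo, so the interval has to be chosen per application.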

There are applications where this may not be necessary, and light work loads allow less frequent commits, but I have yet to look at an MRP or ERP system that did not contain commits within a loop.

If you think this is bad design then I will just refer you to Material Requirements Planning: The New Way of Life in Production and Inventory Management by Joseph Orlicky, published by McGraw-Hill in 1975. When I sought certification from the American Production and Inventory Control Society (APICS), this was one of the recommended texts. It describes what MRP systems have to do. Seriously, if you can think of a way to perform the work without loops, while still allowing concurrent update access without commits, you can probably make a small fortune, since I know a lot of firms that would be interested.

I think how someone views this argument might be colored by the types of applications they have experience with. I have a lot of experience with applications that have high concurrent user work loads targeting a few large objects, where concurrency is a priority. Smaller systems would have different characteristics.

The original point was that blanket statements are dangerous because any one rule may not be suitable for different applications under varying work loads. While I agree there are definite drawbacks to committing after updating on a per-row basis, there are situations where doing so is a necessity, and guidelines produced by the DBA for developers need to allow for and account for these situations. Rather than saying do this or do not do that, the guidelines should lay out the side effects of design decisions based on how Oracle handles the work. Then it is up to the developers, who should know the data better than the DBA, to make the right choices.
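For example, a guideline entry might show both forms and spell out the side effects rather than forbid either; the table and column names here are invented:

    -- Form A: one transaction for the whole batch.
    -- Side effects: row locks are held and undo is consumed for the entire
    -- run, so undo must be sized for the largest load, but a failure rolls
    -- the whole thing back and the job can simply be rerun from the start.
    BEGIN
      UPDATE order_lines
         SET status = 'PLANNED'
       WHERE status = 'NEW';
      COMMIT;
    END;
    /

    -- Form B: commit every N rows inside the loop (as sketched above).
    -- Side effects: locks are released quickly so concurrent updaters are
    -- not blocked, but redo overhead goes up, a fetch across commit can
    -- raise ORA-01555 if the needed undo has been reused, and the job has
    -- to be written so it can restart from where it left off.

Armed with that, a developer can pick the form that fits the work load instead of following a rule blindly.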

IMHO -- Mark D Powell --

Received on Sun Nov 30 2003 - 13:05:28 CST
