
Re: Default column value of MAX +1 of column - possible in Oracle 9i?

From: HansF <News.Hans_at_telus.net>
Date: Fri, 25 Mar 2005 14:06:31 GMT
Message-Id: <pan.2005.03.25.15.08.00.675837@telus.net>


On Fri, 25 Mar 2005 07:20:32 +0000, Kenneth Koenraadt wrote:

> You have an application. A minor correction must be made. It is now
> easier to implement a trigger instead of changing the code. So you do.
>
> Next time you need corrections, another trigger is implemented. And so
> you continue n times.
>
> You now have 2 tracks to housekeep: The original application code AND
> the triggers. One is flow-based, and one is event-based (the
> triggers). And those 2 tracks have to be coordinated. This will get
> more and more tedious over time, as the application logic changes over
> time, mostly in a more complicated direction.
>
> Result : More errors, more difficult and expensive application
> development. Which gets worse and worse over time. In that sense you
> have reduced scalability regarding your application's ability to
> implement changes over time.
>
> I've seen the effect of this a lot of times, and the above logic tells
> me why.
>
> BTW: If one logical application correction results in 15 physical
> corrections as you mentioned, your application is.....dubious. But
> rather than introducing triggers I would go for the correct solution:
> modularizing the code, making one logical correction equal to one
> physical correction.
>
> - Kenneth Koenraadt

Thanks for taking the time to respond. It seems you have had exactly the opposite experience to mine. In my experience:

You have an application. Since many business activities are event based, it is easy to create triggers that correspond to those events. As necessary, these triggers can be made more comprehensive, to rapidly reflect the changes in business rules that occur in the organization.
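
For illustration, here is a minimal sketch of what a 'trigger per business event' can look like - the ORDERS table, its columns and the log table are hypothetical names invented for this example:

  -- Capture the 'order status changed' business event in one place.
  CREATE OR REPLACE TRIGGER trg_orders_status_change
    AFTER UPDATE OF status ON orders
    FOR EACH ROW
    WHEN (OLD.status <> NEW.status)
  BEGIN
    INSERT INTO order_status_log (order_id, old_status, new_status, changed_on)
    VALUES (:OLD.order_id, :OLD.status, :NEW.status, SYSDATE);
  END;
  /

When the business later wants the event handled differently, this one trigger changes; no client code is touched.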

With business rules stored in the database, it is not necessary to repeat those rules in each independent 'application' code segment by duplicating code. This reduces the amount of maintenance and even the initial cost and time of getting a new feature, screen or capability up. Making 15 business changes in one location - the 'set of triggers' - can be considerably faster and more testable than making those same 15 changes in 12 different sets of 'front end' screens. Result: fewer errors and faster testing time.
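
As a hypothetical sketch (the CUSTOMERS and ORDERS names are invented, not from any real schema), a rule like 'no order may exceed the customer's credit limit' can be enforced once, in the database, instead of being re-coded in every order-entry screen:

  CREATE OR REPLACE TRIGGER trg_orders_credit_check
    BEFORE INSERT OR UPDATE OF order_total ON orders
    FOR EACH ROW
  DECLARE
    v_limit customers.credit_limit%TYPE;
  BEGIN
    -- Look up the customer's limit and reject the row if exceeded.
    SELECT credit_limit INTO v_limit
      FROM customers
     WHERE customer_id = :NEW.customer_id;
    IF :NEW.order_total > v_limit THEN
      RAISE_APPLICATION_ERROR(-20001, 'Order total exceeds credit limit');
    END IF;
  END;
  /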

Since there is a clean separation between business logic and user interface, very similar to the JSP/Servlet vs EJB separation promoted by J2EE, developers dealing with users can concentrate entirely on getting the GUI correct without ever having to worry about whether the business logic is included. Conversely, developers dealing with management about business logic changes never need to concern themselves with 'which screens need to change'. Both of these aspects reduce development and maintenance time and cost. The separation also allows the appropriate developer skills (user interface vs business logic) to be deployed where they fit. Result: faster deployment, fewer errors (due to skills focus), lower cost.

For client-server applications, organizational cost is reduced by not needing larger client machines or a larger network between clients and the database server. Note that many developers are not trained for DBA work, nor do they have the aptitude or patience for it. Scalability becomes a trained DBA's concern, not a developer's concern. Result: better scalability at a lower organizational cost.

I agree that adding new triggers (probably without testing, without documentation, and without thought) instead of adding TO existing triggers after evaluating, enhancing and testing them is generally a recipe for disaster. It is possible to keep a minimal set of triggers - IMO, not doing so reflects a developer's need to get it done fast, at any cost, without analysis.

And, of course, triggers are permitted to call stored procedures. Therefore, it can become even easier to maintain a 'one logical change - one physical change' mechanism.
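
A rough sketch of that arrangement, again with invented names: the trigger stays a thin event hook, and the rules sit in one stored procedure, so a rule change is one edit in one place:

  CREATE OR REPLACE PROCEDURE apply_order_rules (
    p_order_id    IN orders.order_id%TYPE,
    p_order_total IN orders.order_total%TYPE
  ) AS
  BEGIN
    -- All order-level business rules live here, in one place.
    IF p_order_total < 0 THEN
      RAISE_APPLICATION_ERROR(-20002, 'Order total may not be negative');
    END IF;
  END apply_order_rules;
  /

  CREATE OR REPLACE TRIGGER trg_orders_rules
    BEFORE INSERT OR UPDATE ON orders
    FOR EACH ROW
  BEGIN
    apply_order_rules(:NEW.order_id, :NEW.order_total);
  END;
  /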

On the flip side, I can verify that your argument holds under a subset of the following conditions:

  1. Organizations:
     - with a minimalist IT attitude (understaffing and jack-of-all-trades developers);
     - with limited developer skills (super user cum developer, no DB training);
     - where interdepartmental politics is the priority (DBA vs Dev syndrome).
  2. Coding standards:
     - that don't exist;
     - that do not separate business logic from presentation;
     - that do not [know how to] use the database correctly.
  3. Developers:
     - who are called on to do DBA work but have no appropriate training/experience;
     - who don't understand SQL (e.g. use loops instead of sets - illustrated after this list);
     - who have not learned PL/SQL;
     - who have developed a database-independence religion;
     - who have only delivered quick 'n dirty prototypes;
     - who have never had to do a cost analysis including operating expense;
     - who have never gone past the 'MS Access is fun' stage.
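
To illustrate the 'loops instead of sets' point: the two snippets below do the same work against a hypothetical ORDERS table, but the second is a single set-based statement instead of a row-by-row loop:

  -- Row-by-row PL/SQL loop (the habit criticized above):
  BEGIN
    FOR r IN (SELECT order_id FROM orders WHERE status = 'NEW') LOOP
      UPDATE orders SET status = 'QUEUED' WHERE order_id = r.order_id;
    END LOOP;
  END;
  /

  -- The equivalent set-based statement: one pass, no procedural code.
  UPDATE orders
     SET status = 'QUEUED'
   WHERE status = 'NEW';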

Realize the above is not comprehensive. Realize also that the wording is intentionally severe and describes worst cases. It reflects what I have seen and does not imply anything about your situation.

So, I guess it comes down to experience. Mine (which goes back a few decades) yields a conclusion exactly opposite to yours. Religion, anyone?

/Hans
