Oracle FAQ Your Portal to the Oracle Knowledge Grid
HOME | ASK QUESTION | ADD INFO | SEARCH | E-MAIL US
 

Home -> Community -> Usenet -> c.d.o.server -> Re: Default column value of MAX +1 of column - possible in Oracle 9i?

Re: Default column value of MAX +1 of column - possible in Oracle 9i?

From: HansF <News.Hans_at_telus.net>
Date: Sun, 27 Mar 2005 21:48:26 GMT
Message-Id: <pan.2005.03.27.22.50.00.92355@telus.net>


Comments embedded ...

On Sun, 27 Mar 2005 17:53:21 +0000, wrote:

> On Fri, 25 Mar 2005 14:06:31 GMT, HansF <News.Hans_at_telus.net> wrote:

<snip>
>

>>So, I guess it comes down to experience.  Mine (which goes back a few
>>decades) yields a conclusion exactly opposite to yours.  Religion anyone?

>
> Probably religion. Which doesn't make this debate less interesting or
> enlightening.
>
> I must clearly admit, though, that my statement "Triggers are evil",
> was an exaggeration.

Thank you for that. In which case, I suspect we are primarily in agreement, as I do not believe that Triggers are the only possible, or even universally best, solution.

I do find that people who reject them out of hand have had limited or bad experience with them, either from other RDBMSs or older versions of Oracle.

I also find it quite amusing that people, especially OO specialists, trumpet the glories of event-based programming in response to screen events, but reject the same model in response to database events. Which seems silly, as many screens map directly to tables, and many screen events (other than pretty-print) map directly to database events.

>
>

>>Since there is a clean separation between business logic and user
>>interface, very similar to that desired by the J2EE - JSP/Servlet vs EJB
>>separation, developers dealing with users can concentrate entirely on
>>getting the GUI correct without ever having to worry about whether the
>>business logic is included.

>
> This separation generally is a good thing and holds true in some
> cases. But not all.

I have found the number of cases where such separation is beneficial seems to be increasing, especially with n-tier applications. The acceptance of this separation is even in vogue - it's the whole premise of XML (not that this is a valid argument <g>).

For the sake of argument, would you please identify situations where this is NOT a good thing. I'm truly interested - even response by email is OK.

>
> But let's pick a case in which it is true.
>
> A GUI is used for entering, say, a new customer. The end-user types
> all the relevant info and presses the "Insert this new customer and do
> all the business stuff involved"-key. This can be carried out in (at
> least) 2 different ways :
>
> A) The key press generates an insert statement and executes it, and an
> INSERT trigger implements the rest of the business stuff. This trigger
> possibly calls one or more stored procedures to do its work.
>
> B) The key press calls a stored procedure which does all the
> business stuff, possibly calling other SP's in doing so.
>
>
> B) and A) both accomplish the same.
>
> B) requires no more work on the client than A).
>
> B) involves 1 step, A) involves 3 steps.
>
> B) Requires no more maintenance than A).
>
> I would like you to give another common example in which the most
> robust and easy way to the goal involves a trigger.
>

There are several differences between these two cases. At the very least:

  1. A) generally has access to the old and new values of the row (the :OLD and :NEW pseudo-records) without any additional code. This can eliminate some code, especially parameter passing;
  2. A) also potentially eliminates a network round trip;
  3. B) requires a separate SP call, and therefore requires that the developer remember to make the call. It also demands a test case for each screen (and probably for several buttons or keys on a screen) on which the SP call occurs. That is not the case for triggers - one test case covers all screens and all variations on one screen.
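To make the comparison concrete, here is a minimal sketch of both variants. All table, column and program-unit names (customer, customer_rules, add_customer) are hypothetical, invented for illustration - they are not from the original thread:

```sql
-- Variant B: the client calls one stored procedure that does everything.
CREATE OR REPLACE PROCEDURE add_customer (
    p_name  IN customer.name%TYPE,
    p_email IN customer.email%TYPE
) AS
BEGIN
    INSERT INTO customer (name, email) VALUES (p_name, p_email);
    customer_rules.after_new_customer(p_name, p_email);  -- the "business stuff"
END;
/

-- Variant A: the client issues a plain INSERT; the trigger fires the rules.
-- Note the :NEW pseudo-record - the new column values are available
-- without passing any parameters from the client.
CREATE OR REPLACE TRIGGER customer_bi
BEFORE INSERT ON customer
FOR EACH ROW
BEGIN
    customer_rules.after_new_customer(:NEW.name, :NEW.email);
END;
/
```

With variant A, anything that inserts into the table - a second screen, SQL*Loader, an XML feed - picks up the business rules for free.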

IFF there will only ever be one screen and one input method for this insert, it is irrelevant which method is used. But the moment a second screen, or perhaps an EDI/XML data interchange, is added, the enforcement is either centralized (trigger) or duplicated (an SP call in each location), increasing code, maintenance, documentation and test costs - all of which are frequently ignored and become a source of bugs.
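The thread's subject line - a default column value of MAX + 1 - is itself a good illustration of this centralization. Computing MAX + 1 at insert time is unsafe under concurrent sessions anyway; the standard Oracle 9i pattern is a sequence populated by a BEFORE INSERT trigger, which then covers every insert path in one place. A sketch, again with hypothetical names:

```sql
CREATE SEQUENCE customer_seq;

-- Every INSERT into the table, from any screen or batch load,
-- gets its key assigned here - one place, one test case.
-- (In 9i the sequence must be read via SELECT ... FROM dual;
-- direct assignment of seq.NEXTVAL arrived in later releases.)
CREATE OR REPLACE TRIGGER customer_bi_key
BEFORE INSERT ON customer
FOR EACH ROW
WHEN (NEW.customer_id IS NULL)
BEGIN
    SELECT customer_seq.NEXTVAL INTO :NEW.customer_id FROM dual;
END;
/
```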

I concur with your assessment that this results from the changing roles and responsibilities. I just don't have to like the damage done to our industry by the 'lowering of standards'. The owner of a high-end Mercedes-Benz could take the car to the corner garage for an oil change, but will probably take it to a specialist ... I've often wondered why businesses stopped doing the same with their business tools.

I agree that patching through triggers, rather than solving problems through proper analysis, can end up with a less-than-optimal solution. In a similar fashion to the "guns don't kill people, people with guns do" argument, I have to ask - is the improper use of triggers the fault of the technology?

I also find that CONSTRAINTS are under-utilized. Developers tend to code constraints into the application, or worse, code constraints into triggers. For the class of constraints that can be declared, this is absolute stupidity ... proof by blatant assertion (TM) <g>
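For that declarable class, the difference is a line of DDL versus a hand-written trigger - and the trigger version of, say, a uniqueness check is also broken under concurrency, since one session cannot see another's uncommitted rows. A hypothetical example (table and constraint names invented):

```sql
-- Declarative: enforced by the database on every insert/update path,
-- concurrent-safe, with no trigger or application code to maintain.
ALTER TABLE customer ADD CONSTRAINT customer_email_uq UNIQUE (email);

ALTER TABLE customer ADD CONSTRAINT customer_status_ck
    CHECK (status IN ('ACTIVE', 'INACTIVE'));
```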

Finally, I happen to believe, very strongly, that Stored Procedures DO play a significant role in this discussion, as I do not feel Triggers and Stored Procedures are mutually exclusive. In fact, I would prefer to see the actual code handled in SPs and called from the app code or the trigger as appropriate - appropriateness being determined through situational testing.

/Hans

Received on Sun Mar 27 2005 - 15:48:26 CST

