Re: A new proof of the superiority of set oriented approaches: numerical/time series linear interpolation

From: Cimode <cimode_at_hotmail.com>
Date: 2 May 2007 23:33:06 -0700
Message-ID: <1178173985.960152.227740_at_l77g2000hsb.googlegroups.com>


On 3 May, 03:44, "Brian Selzer" <b..._at_selzer-software.com> wrote: [Snipped]
> > I would also add to the bill of the procedurally inclined programmer
> > the *physical bias* toward creating overhead objects (temp tables,
> > additional columns) in order to meet the requirements of a procedural
> > approach. We have one example here. For instance, the immediate
> > instinct of dear Brian was to create additional columns and objects
> > (plus unnecessary operations) where none were in fact necessary.
> > Procedural approaches produce both physical AND logical overhead.
> > What can be done in 3 set operations may require many more operations
> > in a procedural mindset (and that is without counting iterations).
>
> Have you actually tried what I suggested?
Yes. Enough not to believe in it anymore. From what I observed, it simply creates unnecessary additional logical complexity and physical overhead. As I stated, the redundancy of SQL and of SQL DBMSs is not what performance in the RM is all about.
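
For readers following along, here is a rough sketch of the *kind* of purely set-oriented interpolation I have in mind. The table Sample(t, v) and its column names are hypothetical (v is NULL where a value is missing), and this is NOT the exact query posted earlier in the thread, only the shape of the approach:

  -- Hypothetical schema: Sample(t INT PRIMARY KEY, v FLOAT), v NULL at gaps.
  -- Fill each gap linearly between its nearest known neighbours.
  SELECT s.t,
         lo.v + (hi.v - lo.v) * (s.t - lo.t) / (hi.t - lo.t) AS v_interp
  FROM   Sample s
  JOIN   Sample lo                      -- nearest known point before the gap
    ON   lo.t = (SELECT MAX(p.t) FROM Sample p
                 WHERE p.t < s.t AND p.v IS NOT NULL)
  JOIN   Sample hi                      -- nearest known point after the gap
    ON   hi.t = (SELECT MIN(n.t) FROM Sample n
                 WHERE n.t > s.t AND n.v IS NOT NULL)
  WHERE  s.v IS NULL;                   -- gaps at either end simply drop out

No temp tables, no additional columns, no UPDATE: the result is just another relation derived from the base data.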

> How can you possibly argue that
> it produces physical and logical overhead without proof?
No need to produce additional proof. The solutions you have been suggesting (creating additional overhead plus an UPDATE) are sufficient proof of the point I am trying to make, which apparently I have failed to get across to you.

> In an earlier post, I offered what I think is a solid theoretical foundation for the
> thought process that triggered my immediate instinct: Multiple self-joins
> can often be eliminated with a single iterative pass through the data. Your
> solution has six--count 'em, SIX--self joins. To make things worse, four of
> them are theta-joins! So yes, my immediate instinct was to eliminate those
> self-joins. As a bonus, your solution also includes an aggregate, which I
> immediately saw could be computed during the same pass through the data,
> eliminating yet another optimizer step.

> Eliminating self-joins is beneficial regardless of the implementation.
Beneficial to what? Performance? Which performance? Response time? Concurrency? Cost of administration? (Please answer these precise questions.) Do you realize that, despite some lengthy (maybe worthy) attempts at clarifying your point, some people here have no clue what you are talking about?

> Have you even looked at my earlier post? You didn't respond.
Quite frankly, I do not see what I should respond to.

> From a logical standpoint, my suggestion actually takes fewer operations, since
> there is only one inner equijoin and one relation-valued function instead of four
> outer theta-joins, an outer equijoin, an inner equijoin and an aggregate.
> There is no doubt that a procedural approach involves additional statements,
> but it is not *always* the case that the system will have to do more work,
> take more time, or use more resources.
What cases are you referring to? I must admit that what bothers me most in the arguments you have posted is that you seem to imply that procedural approaches *may* be beneficial in some cases, but you have not established a single one, so I am having trouble following you on that point.
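
To make concrete the kind of overhead I keep objecting to, here is a rough cursor-style sketch of the "single iterative pass" idea. The syntax is T-SQL, chosen arbitrarily for illustration; the table Sample(t, v) is the same hypothetical one as above, and this is NOT your actual code:

  -- Single forward pass with a cursor; gaps are buffered in a temp table
  -- until the next known point arrives, then written back with an UPDATE.
  CREATE TABLE #pending (t INT PRIMARY KEY);            -- extra object

  DECLARE @t INT, @v FLOAT, @prev_t INT, @prev_v FLOAT;
  DECLARE c CURSOR LOCAL FAST_FORWARD FOR
      SELECT t, v FROM Sample ORDER BY t;
  OPEN c;
  FETCH NEXT FROM c INTO @t, @v;
  WHILE @@FETCH_STATUS = 0
  BEGIN
      IF @v IS NULL
          INSERT INTO #pending (t) VALUES (@t);         -- remember the gap
      ELSE
      BEGIN
          UPDATE s                                      -- close out the gap
          SET    v = @prev_v + (@v - @prev_v)
                     * (s.t - @prev_t) / (@t - @prev_t)
          FROM   Sample s
          JOIN   #pending p ON p.t = s.t;
          DELETE FROM #pending;
          SET @prev_t = @t;                             -- carried state
          SET @prev_v = @v;
      END;
      FETCH NEXT FROM c INTO @t, @v;
  END;
  CLOSE c;
  DEALLOCATE c;
  DROP TABLE #pending;

Even as a sketch it needs a temporary table, carried state, and an UPDATE back into the base table. That is exactly the physical and logical overhead I mean.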

Regards.

Received on Thu May 03 2007 - 08:33:06 CEST
