Re: A new proof of the superiority of set oriented approaches: numerical/time series linear interpolation

From: David Cressey <cressey73_at_verizon.net>
Date: Mon, 30 Apr 2007 12:26:12 GMT
Message-ID: <EflZh.958$YQ1.219_at_trndny02>


"Cimode" <cimode_at_hotmail.com> wrote in message news:1177934438.105342.126120_at_h2g2000hsg.googlegroups.com...
> On Apr 30, 12:42 pm, "David Cressey" <cresse..._at_verizon.net> wrote:
> > "Cimode" <cim..._at_hotmail.com> wrote in message
> >
> > news:1177913268.965918.63520_at_p77g2000hsh.googlegroups.com...
> >
> > > On 29 Apr, 21:29, "David Cressey" <cresse..._at_verizon.net> wrote:
> > > > "Cimode" <cim..._at_hotmail.com> wrote in message
> > > [Snipped]
> > > > The point I got from your remarks is that set oriented
> > > > approaches are superior, not principally due to run times, but
> > > > due to inherent solidity of the code. The gap between intent
> > > > and expression of algorithm is often much less with set
> > > > oriented approaches, and I believe you have illustrated this
> > > > in the case in point.
> > > Not only that. I have not talked about *order*. Set oriented
> > > approaches are totally *order insensitive*, in the sense that
> > > they never require some kind of order as a prerequisite.
> >
> > I am not sure I understand your point. If I got it right, I'd like to
> > suggest that procedural oriented thinkers like to superimpose an order
> > requirement on the actual requirements in order to force a strategy that
> > they know (rightly or wrongly) to be superior to the one chosen by the
> > optimizer in the absence of ordering directives.
> Agreed. The key word here is *subjectivity*. Procedural approaches
> naturally induce *subjectivity*, while set oriented structures favor
> objectivity based on existing structures.

>

> On several occasions, programmers I have met have forced the use of
> an order by clause into sub selects. One of the obvious marks of
> thinking procedurally is that several programmers end up *having to*
> assume some form of order in the sub selects that constitute their
> queries (they also use the magical select top 1 hack to do that). In
> fact, it would be reasonable to assume that a clean set oriented
> approach would always end up with order by clauses only at the
> highest level. Hope that makes sense.
>
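To make that concrete, here is a small sketch (using Python's sqlite3 on an in-memory database; the readings table and its columns are invented purely for illustration). The subselect stays an unordered set, and the single order by sits at the top level:

```python
import sqlite3

# Hypothetical sample data, not from the thread.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE readings (sensor TEXT, ts INTEGER, value REAL)")
conn.executemany(
    "INSERT INTO readings VALUES (?, ?, ?)",
    [("a", 3, 30.0), ("a", 1, 10.0), ("b", 2, 5.0), ("b", 1, 7.0)],
)

# Set oriented: the subquery yields an unordered set of
# (sensor, max_value) rows; ordering is applied once, at the top.
rows = conn.execute(
    """
    SELECT sensor, max_value
    FROM (SELECT sensor, MAX(value) AS max_value
          FROM readings
          GROUP BY sensor)     -- no ORDER BY here: it is a set
    ORDER BY sensor            -- ordering only at the highest level
    """
).fetchall()
print(rows)  # [('a', 30.0), ('b', 7.0)]
```

An order by inside the subselect would add nothing the optimizer is obliged to honor; the only ordering the result carries is the one declared at the outermost level.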

> [Snipped DEC example]
> > I guess I profile people based on their usage as well as Joe Celko
> > or Bob Badour. In this case, anyone who strains to omit an "order
> > by" clause based on response time rather than requirements, or
> > based on a side effect of "group by", is still thinking "how"
> > rather than "what". At least that's my take.
> > > > Many times, those who wish to cling to procedural ways of
> > > > doing things latch onto processing time as their reason for
> > > > rejecting a set oriented approach.
> > > Until they have slow disks and cannot solve CPU bottlenecks
> > > anymore through more CPU power. A recurring problem in modern
> > > systems.
> >
> > > [Snipped]
> > > > I've seen the same "speed" arguments used for avoiding views,
> > > > and avoiding logical data independence generally. Also used to
> > > > defend the "one big table" design as opposed to normalized, or
> > > > even mostly normalized design.
> > > If you ask me, I think the *speed obsession* comes from the good
> > > deal between poor software vendors and hardware manufacturers.
> > > The rest is just a consequence of a cookbook approach.
> >
> > I got out of the field before the following became prevalent, but
> > judging from inputs in this newsgroup and several other forums,
> > speed requirements are increasingly being defined as "faster than
> > our competitors, however fast that is." Back in the 90s, I dealt
> > with people whose speed requirements were relatively crisp: "the
> > response must come in within 7 seconds, 90% of the time". If you
> > came up with a solution that satisfied this speed requirement, you
> > DIDN'T spend the next evening or weekend figuring out how to do it
> > in 3 seconds.
> Getting worse and worse. The only difference from the 90's is that
> DBMS vendors are hitting a dead end, because all the hack rules that
> have been applied until now are decreasingly effective.
>

> [Snipped]
> You have not told me what you thought about the idea of using
> interpolation as a possible computing method to systematically handle
> missing data.
>

You are right. I'm still thinking about that. Off the top of my head, I'd suggest that
interpolation is useful where data points represent a finite sample of some kind of continuum, and that some, but not all, situations of missing data lend themselves to that description. If this thinking sounds incomplete, that's because it is.
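By way of illustration only, here is a minimal sketch of what such interpolation might look like (plain Python; the function name and data are hypothetical, and gaps before the first or after the last known point are deliberately left unfilled):

```python
# Sketch: linearly interpolate missing samples of a time series,
# assuming the series samples some continuum (e.g. temperature).
def interpolate_missing(series):
    """Fill None gaps by linear interpolation between known neighbours."""
    known = [(i, v) for i, v in enumerate(series) if v is not None]
    out = list(series)
    for (i0, v0), (i1, v1) in zip(known, known[1:]):
        for i in range(i0 + 1, i1):
            # Linear step between the two surrounding known values.
            out[i] = v0 + (v1 - v0) * (i - i0) / (i1 - i0)
    return out

print(interpolate_missing([10.0, None, None, 16.0]))
# [10.0, 12.0, 14.0, 16.0]
```

Note that this only makes sense under the continuum assumption above; for categorical or event-like missing data, filling the gap this way would manufacture values that never existed.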
