Oracle FAQ Your Portal to the Oracle Knowledge Grid
HOME | ASK QUESTION | ADD INFO | SEARCH | E-MAIL US
 

Home -> Community -> Usenet -> comp.databases.theory -> Re: Bob's 'Self-aggrandizing ignorant' Count: Was: What databases have taught me

Re: Bob's 'Self-aggrandizing ignorant' Count: Was: What databases have taught me

From: Tony D <tonyisyourpal_at_netscape.net>
Date: 30 Jun 2006 15:42:01 -0700
Message-ID: <1151707321.214996.117690@p79g2000cwp.googlegroups.com>


David Barrett-Lennard wrote:

[ snippage ]

> The study of (imperative) algorithms is an interesting field. Proving
> algorithms correct can sometimes take pages of formal, inductive
> proofs. Some concise, useful and powerful algorithms are implicitly
> imperative in nature, and trying to write them in PROLOG seems
> unnatural, inefficient and painful.
>

Sometimes, writing *anything* in Prolog can seem unnatural, inefficient and painful. It depends on whether you're using the right tool for the job. For example, no sane human would try writing a disk sector editor in Prolog. (I do know one certifiable case who wrote a text editor using only dd(1), though.)

[ more snippage ]

> However, in certain domains I choose C++. I may spend lots of time
> away from the code editor developing highly tailored imperative
> algorithms that are provably correct, and then carefully implement to
> get the best possible performance.

I like these kinds of statements! How can we actually show that an implemented algorithm is provably correct, especially in referentially opaque languages like C++? Consider what has to be proved: the algorithm itself; the code the programmer writes to represent the algorithm in the chosen language; the specification of the language; the conformance of the compiler to that specification; the correctness of any optimisation carried out by the compiler. And that's before we get to the CPU itself (Pentium FDIV bug be damned! ;)

Please know that I'm not taking any kind of shot at you; I'm just considering the mountain of concrete that has to be shifted before we can safely assert that we have proved anything about the behaviour of programs written in imperative languages with side-effects (with or without OO).
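To make that mountain of concrete a bit more visible, here's a sketch of my own (not David's code) of what even a trivial "provably correct" routine looks like in practice: a binary search with its loop invariant written out. Note the gap the paragraph above is pointing at: the comments state what you'd prove by induction on paper, but the assert()s merely *check* the invariant on the runs you happen to exercise; they prove nothing about all inputs, let alone about the compiler or the hardware underneath.

```cpp
#include <cassert>
#include <vector>

// Binary search over a sorted vector, searching the half-open range
// [lo, hi). The comments state the invariant one would prove formally;
// the assert()s only spot-check it at runtime.
int find(const std::vector<int>& v, int key) {
    int lo = 0;
    int hi = static_cast<int>(v.size());
    while (lo < hi) {
        // Invariant: if key is in v at all, it lies in v[lo..hi-1].
        assert(0 <= lo && lo <= hi && hi <= static_cast<int>(v.size()));
        int mid = lo + (hi - lo) / 2;   // avoids overflow of lo + hi
        if (v[mid] < key)
            lo = mid + 1;               // key cannot be in v[lo..mid]
        else if (v[mid] > key)
            hi = mid;                   // key cannot be in v[mid..hi-1]
        else
            return mid;                 // found
    }
    return -1;  // invariant + empty range => key is absent
}
```

Proving this correct on paper is an afternoon's induction; proving that the binary your compiler emits still satisfies the invariant is a different problem entirely.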

[ some more snippage ]

> I agree with all that you say, and think there are lots of OO
> programmers who need a good swift kick up the backside.
>

Could I make the minor suggestion of removing the "OO" part of that sentence? ;) (And yes, I'm including myself there.)

[ yet more snippage ]

> There seems to be a lack of science going on. For example, what work
> has been done to determine whether a given algorithm is best
> represented declaratively or imperatively? In my (limited) reading on
> comp.object, I haven't seen anyone discuss such an important question.
> Instead everyone seems to hand wave with all too generic, meaningless
> statements.
>

I'm not sure how much has been done anywhere on this, to be honest - if anyone has some compelling references I'd love to see them. (Although I think work in this area may have been stymied by the fact that every iterative algorithm has a recursive equivalent, and that all Turing-complete languages compute the same set of functions.) I usually stick by what I call the Principle of Inconvenience, which I informally define as: when you spend more time thinking about the "how", or about side-issues peculiar to the tool you've chosen, than about the "what" of the task at hand, you're using the wrong tool.

Received on Fri Jun 30 2006 - 17:42:01 CDT
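The iterative/recursive equivalence mentioned above can be seen in miniature. Here's a sketch of my own, with the usual caveats: the same function written both ways, where a compiler doing tail-call elimination may well turn the recursive version back into the loop.

```cpp
#include <cstdint>

// Factorial written imperatively: state mutated in a loop.
std::uint64_t fact_iter(unsigned n) {
    std::uint64_t acc = 1;
    for (unsigned i = 2; i <= n; ++i)
        acc *= i;
    return acc;
}

// The same function written as tail recursion: the accumulator
// parameter plays the role of the loop variable, and no state is
// mutated. Mechanical rewrites go in both directions.
std::uint64_t fact_rec(unsigned n, std::uint64_t acc = 1) {
    return (n < 2) ? acc : fact_rec(n - 1, acc * n);
}
```

Which of these is the "natural" representation says more about the reader than about the algorithm, which is partly why the question David raises is so hard to study scientifically.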

