
Newsgroup: comp.databases.theory

Re: Bob's 'Self-aggrandizing ignorant' Count: Was: What databases have taught me

From: Bob Badour <>
Date: Sat, 01 Jul 2006 01:55:22 GMT
Message-ID: <eCkpg.4387$>

Tony D wrote:

> Bob Badour wrote:
> [ snippage ]

>>I don't necessarily agree with the statement that some algorithms are
>>implicitly imperative or procedural in nature on the basis of possible
>>implementations in PROLOG. Even if they are imperative, this does not
>>preclude declarative solutions because a language can be both imperative
>>and declarative, and one can imagine other non-procedural languages than

> Given that all Turing complete languages compute the same functions, I
> think this comes down to the fuzzy notion of convenience; some
> algorithms *might* be "conveniently" described in an imperative manner.
> But this will vary from programmer to programmer.

Yes, I agree: any difference is psychological. However, human factors can make a big difference.

> [ snippage ]

>>These observations naturally lead one to two important questions:
>>1) Can one create a pair of languages where the proofs written in one
>>language establish the correct outcome and where some other language
>>instructs the compiler on how to implement to get the best possible

> Just to be sure I'm reading you correctly; you want to write proofs in
> language (a). Some other language (b) then provides hints to a compiler
> for language (a) on optimising proofs for some or other machine
> architecture (c). (In which case, wouldn't (b) normally just be the
> optimising stage on a compiler ?)


> Or would you want to effectively
> hand-optimise by writing your own optimiser hints for architecture (c)
> in language (b) for a particular proof in language (a) ?

When one creates an index for a table, is that hand-optimization? Somehow programmers must communicate physical design tradeoffs to the compiler. In a language like C++, the programmer specifies both the algorithm and the physical tradeoffs at the same time and intermixed with each other.

For instance, when using a local relation variable, one could represent it physically as a singly-linked list, doubly-linked list, red-black tree, array, etc. Or one could break it up with part of it stored one way and the rest another using some sort of pointer scheme etc.

> Basically, is
> a program a pair of (proof in (a), hints in (b)) ?


>>2) Could one create a compiler to translate proofs as above in 1) into
>>assembler code with an automated proof that the maximum number of CPU
>>cycles consumed in the worst-case situation would not exceed some threshold?

> The first part sounds like a compiler, with a customisable optimiser.
> The second part sounds like a job for abstract interpretation & static
> analysis - just beware of the halting problem & Rice's theorem waiting
> to pounce.

I suspect one could provide a usable worst-case real-time feature for broad classes of programs or subroutines etc. The feature might have limitations: the language for which it is defined might not be quite Turing-complete, or the feature might not be available for all algorithms.

Presumably, if the program is meant to halt and the programmer wrote it knowing it will halt, some halting proof is available. Perhaps the availability of such a proof would be the prerequisite. A finite worst case, in fact, implies that such a proof exists somehow.

>>These then lead into a third question: Can one combine the above
>>features in a single product?

> Possibly. Partial evaluators sounded bizarre to me until I got my head
> around them (to some extent, anyway).
