Oracle FAQ Your Portal to the Oracle Knowledge Grid

Re: Bob's 'Self-aggrandizing ignorant' Count: Was: What databases have taught me

From: Bob Badour <bbadour_at_pei.sympatico.ca>
Date: Wed, 28 Jun 2006 22:54:56 GMT
Message-ID: <4NDog.3463$pu3.84109@ursa-nb00s0.nbnet.nb.ca>


Adrian Alston wrote:

> "Bruno Desthuilliers" <onurb_at_xiludom.gro> wrote in message
> news:44a2b543$0$6509$626a54ce_at_news.free.fr...
>

>>What bothers me here is not OO being born from the needs of simulation
>>- FWIW, at least part of Bob's assertion seems perfectly and obviously
>>true:
>>
>><quote>
>>(OO) is a
>>computational model comprising a collection of features useful for
>>constructing large unpredictable state machines from small
>>predictable state machines
>></quote>

>
> Hi Bruno and all,
>
> I have lots of C programming experience but I'm pretty new to OOP; can you
> please help?
>
> I can understand how objects are a sort of state machine, but I thought OO
> programs should be predictable. What's this unpredictable bit about? I know
> about simulations, but I write regular non-simulation apps. OK, the world
> around the program may be hard to predict, but just how are my "OO" programs
> supposed to be unpredictable?
>
> Thanks in advance.

Keep in mind that this is cross-posted to comp.databases.theory, and I post from the perspective of c.d.t.

Unless you are writing simulations or chaotic attractors, your programs are supposed to be predictable.

The unpredictability of simulations comes from the interactions among many interacting state machines, the potentially large set of boundary conditions and sometimes from the complex or chaotic equations used. The whole purpose of the simulation is to discover what would happen in various situations. If that were predictable, one would not need a simulation.
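To make that concrete, here is a minimal sketch (the machines and their transition rules are hypothetical, invented for illustration): each machine alone is a trivially predictable modular counter, but once each one's next state depends on the other's current state, the joint trajectory is hard to foresee without simply running it.

```python
class Machine:
    """A tiny deterministic state machine whose next state also
    depends on a neighbour's current state."""

    def __init__(self, state, modulus):
        self.state = state
        self.modulus = modulus

    def step(self, neighbour_state):
        # In isolation this rule is trivially predictable; the
        # coupling between machines is what makes the aggregate
        # behaviour opaque.
        self.state = (3 * self.state + neighbour_state + 1) % self.modulus


a, b = Machine(0, 17), Machine(1, 23)
trajectory = []
for _ in range(10):
    sa, sb = a.state, b.state   # read both states before stepping
    a.step(sb)
    b.step(sa)
    trajectory.append((a.state, b.state))
```

Each component is a one-line recurrence, yet predicting `trajectory` ten steps out essentially requires executing the coupled system — which is exactly why one runs a simulation instead of solving it on paper.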

When one uses OO, one naturally tends to have interacting state machines, and to have more of them than when one does not use OO. When one's application is naturally a very large state machine, one can decompose it into smaller state machines as an abstraction technique to break it up into human-manageable chunks.
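The decomposition looks something like this sketch (the `Door` and `Lock` classes are hypothetical): each object is a small, easily understood state machine, and the large machine exists only implicitly in their composition.

```python
class Door:
    """A two-state machine: open <-> closed."""

    def __init__(self):
        self.state = "closed"

    def open(self):
        if self.state == "closed":
            self.state = "open"

    def close(self):
        if self.state == "open":
            self.state = "closed"


class Lock:
    """A two-state machine whose transitions depend on a Door."""

    def __init__(self):
        self.state = "unlocked"

    def lock(self, door):
        # The coupling: this transition is guarded by the door's state.
        if door.state == "closed":
            self.state = "locked"

    def unlock(self):
        self.state = "unlocked"


door, lock = Door(), Lock()
door.close()      # already closed; no transition occurs
lock.lock(door)   # permitted only because the door is closed
```

Each class is manageable on its own; the full door-plus-lock machine (its joint states and guarded transitions) is nowhere written down explicitly — which sets up the verification problem discussed next.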

However, if one were truly abstracting a large state machine like that, one would tend to want to prove that the aggregate of the smaller state machines omits no states and introduces no new and unwanted state transitions etc. Nobody, as far as I know, practices OO in this way.
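A sketch of what that verification step might look like (the component machines and the "intended" specification here are hypothetical): enumerate the reachable joint states of the composed machines and compare them against the states the large machine was supposed to have.

```python
def reachable(start, step):
    """Enumerate every state reachable from `start` under the
    transition function `step` (state -> set of successor states)."""
    seen, frontier = {start}, [start]
    while frontier:
        s = frontier.pop()
        for nxt in step(s):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(nxt)
    return seen


# Composition of a 2-state toggle with a counter mod 4 that always
# advances by 2, stepped in lock-step.
def step(joint):
    toggle, count = joint
    return {(1 - toggle, (count + 2) % 4)}


states = reachable((0, 0), step)

# The intended large machine has 2 * 4 = 8 joint states, but the
# lock-step composition reaches only 2 of them: the naive aggregate
# omits states, which is exactly what such a proof should catch.
intended = {(t, c) for t in (0, 1) for c in (0, 1, 2, 3)}
missing = intended - states
```

In practice the state spaces are far too large for brute-force enumeration like this, which is one reason the proof obligation goes undischarged.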

Further, as Dijkstra notes, the tools we use affect the way we think. If one has a tool that makes creating large state machines very easy, one will naturally tend to think of solutions in terms of large state machines and more importantly in terms of state transitions. Focusing on state transitions leads one to take a much more procedural and operational perspective than is often warranted.

Declarative techniques allow one to specify solutions closer to the level of intent. A good declarative formalism causes one to think more about "what" than about "how", which naturally separates the concern for correctness from the concern for efficiency. Those of us who understand the relational model and SQL, understand that one can address the concern for efficiency separately, for example by creating indexes, by clustering data, and by automating the translation from the declarative language to the physical hardware.
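A small sqlite3 sketch of that separation (the table and column names are hypothetical): the query states only *what* is wanted; adding an index changes *how* the engine answers it, without touching the query or its result.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER, customer TEXT, total REAL)")
conn.executemany(
    "INSERT INTO orders VALUES (?, ?, ?)",
    [(1, "acme", 90.0), (2, "acme", 10.0), (3, "globex", 50.0)],
)

# Pure intent: which customers spent what, in total?
query = ("SELECT customer, SUM(total) FROM orders "
         "GROUP BY customer ORDER BY customer")
before = conn.execute(query).fetchall()

# Address efficiency separately, as the post suggests, by creating
# an index -- the declarative query is left untouched.
conn.execute("CREATE INDEX idx_orders_customer ON orders (customer)")
after = conn.execute(query).fetchall()

assert before == after  # same intent, same answer; only the plan may differ
```

Correctness lives in the query; efficiency lives in the physical design, and the translation between them is automated by the engine's optimizer.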

The OO computational model is simply not as amenable to higher order transformations. The executable code more directly reflects the specification. For instance, optimizers do not combine object classes to transform a collection of possibly inefficient state machines into a single more-efficient state machine.

As Aho et al. noted way back when in the dragon book (on compilers), algorithmic replacement arguably offers the greatest opportunity for performance optimization and is almost never done because of the enormous difficulty involved. For example, if an optimizer could recognize a bubble-sort, it could replace it with a quick-sort. However, I suspect the task of recognizing the equivalence of two arbitrary algorithms is close in complexity to the halting problem.
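To see what the optimizer would be up against, here is a sketch of the two routines in question: before swapping one for the other, it would have to *prove* they compute the same input-output function, despite sharing no structure.

```python
def bubble_sort(xs):
    """O(n^2) comparison sort by repeated adjacent swaps."""
    xs = list(xs)
    for i in range(len(xs)):
        for j in range(len(xs) - 1 - i):
            if xs[j] > xs[j + 1]:
                xs[j], xs[j + 1] = xs[j + 1], xs[j]
    return xs


def quick_sort(xs):
    """Divide-and-conquer sort around a pivot (O(n log n) on average)."""
    if len(xs) <= 1:
        return list(xs)
    pivot, rest = xs[0], xs[1:]
    return (quick_sort([x for x in rest if x < pivot])
            + [pivot]
            + quick_sort([x for x in rest if x >= pivot]))


sample = [5, 3, 8, 1, 3]
assert bubble_sort(sample) == quick_sort(sample) == sorted(sample)
```

Testing a few inputs, as above, is easy; deciding the equivalence for *all* inputs, mechanically and in general, is the part that borders on undecidability.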

However, a declarative set-level language specifies what to do at a much higher level, often turning a similar optimization into a task of algorithmic choice rather than algorithmic replacement.

Received on Wed Jun 28 2006 - 17:54:56 CDT

