Re: Mixing OO and DB

From: David Cressey <cressey73_at_verizon.net>
Date: Sat, 09 Feb 2008 14:34:33 GMT
Message-ID: <ZRirj.45$eU3.38_at_trndny04>


"mAsterdam" <mAsterdam_at_vrijdag.org> wrote in message news:47ad7dc5$0$85790$e4fe514c_at_news.xs4all.nl...

> Any ideas except 'the other guys are so stupid'?

My reaction is biased towards databases. That's where I'm coming from. No apologies for that.

I'd like to suggest that the origins of the database approach and the origins of the object-oriented approach are very different. The database approach started from the enterprise-wide database and has scaled its way down to tiny (conceptually) databases embedded in single applications. The object-oriented approach started with simulations and has scaled its way up to the problems being worked on now, including enterprise-wide solutions.

The fundamental problem to be addressed in database work is shareability. If database data is not shareable, it's less valuable. Codd's work on applying the relational model of data to the organization of databases for large-scale sharing of large-scale data is fundamentally aimed at promoting uniformity, and thereby shareability, without unduly limiting flexibility at the low end. In the trade-off between uniformity and flexibility, the relational model pushed the envelope so far out that all of the other database contenders from the 1970s have been all but pushed off the map. I'm thinking of the graph-based DBMS products that dominated the market circa 1978.

While theoretical advances such as those discussed in c.d.t. are important in their own right, the fundamental market presence of the relational model in practical work has settled on the SQL model of data. A lot of us mere practitioners treat the SQL model and the relational model as variations on the same theme. In terms of the internals of data storage, data blocks that can contain table rows or index nodes make up the bulk of the data whose management is transparent to the users of the DBMS.

The fundamental problem in simulations, as I see it, is autonomy. If each unit in a simulation is not autonomous, the simulation will be constrained at the system level to behave in ways that do not mimic the systems being simulated. Objects are largely autonomous, and that is their great utility. Objects are more autonomous than functions and procedures from the great programming languages of the 1970s because objects carry state. That is why OOP has displaced structured programming to a large extent.
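
To make "objects carry state" concrete, here is a toy sketch (purely illustrative, in Python; the Vehicle class and its fields are made up for the example): a procedure has to be handed every piece of state on every call, while an object keeps its own state and so can be advanced autonomously.

    # A procedural step must be given all of its state explicitly on every call.
    def step_position(position, velocity, dt):
        return position + velocity * dt

    # An object carries its own state, so each unit in a simulation can be
    # advanced on its own, without the caller bookkeeping every variable.
    class Vehicle:
        def __init__(self, position, velocity):
            self.position = position
            self.velocity = velocity

        def step(self, dt):
            self.position += self.velocity * dt

    fleet = [Vehicle(0.0, 10.0), Vehicle(5.0, 7.5)]
    for v in fleet:
        v.step(dt=1.0)   # each object updates itself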

In order for objects to collaborate they have to exchange signals (data). In order to usefully exchange data, there has to be a common convention regarding both the form and the content of the messages passed between objects. Most of the OO stuff I've seen (which isn't very much) concentrates on bilateral contracts between pairs of objects as to what the form and content (meaning?) of exchanged messages will be.
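
By a bilateral contract I mean something like the following (a made-up Python sketch; the Sensor and Logger classes and the message shape are inventions for illustration): the two parties agree privately on the form and content of the messages they exchange, and nothing else in the system is bound by that agreement.

    # The "contract" is simply an agreement between these two classes that a
    # message is a dict with 'kind' and 'payload' keys. It binds nobody else.
    class Sensor:
        def read(self):
            return {"kind": "temperature", "payload": 21.5}

    class Logger:
        def handle(self, message):
            # Relies on the bilaterally agreed shape of the message.
            print(message["kind"], message["payload"])

    Logger().handle(Sensor().read())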

Most of the best database work concentrates on universal contracts (although the word "contract" isn't widely used) governing the entire body of data stored in a database, and on a vast multitude of transactions, collaborating forward in time, that adhere to the single contract or some subset of it.
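
A universal contract, by contrast, looks roughly like this (a sketch using SQLite from Python, with a made-up account table, only to illustrate the idea): the schema and its constraints are declared once, and every transaction that ever touches the data, written by anyone, at any time, must honour them.

    import sqlite3

    # The schema plus its constraints act as the single, universal contract.
    conn = sqlite3.connect(":memory:")
    conn.execute("""
        CREATE TABLE account (
            account_id INTEGER PRIMARY KEY,
            owner      TEXT NOT NULL,
            balance    NUMERIC NOT NULL CHECK (balance >= 0)
        )
    """)

    # Two independent transactions adhere to that contract without any
    # bilateral negotiation between the programs that issue them.
    with conn:
        conn.execute("INSERT INTO account (owner, balance) VALUES (?, ?)",
                     ("Alice", 100))
    with conn:
        conn.execute("UPDATE account SET balance = balance - 40 WHERE owner = ?",
                     ("Alice",))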

This looks overly constraining to a programmer accustomed to object-oriented thinking.

The bilateral contracts that tend to govern data exchange in object worlds look chaotic to someone such as myself, schooled in database work.

This is the best I can do to describe the differences in non-pejorative terms.
