Re: Possible bridges between OO programming proponents and relational model

From: Cimode <cimode_at_hotmail.com>
Date: 2 Jul 2006 14:49:18 -0700
Message-ID: <1151876958.841523.14030_at_v61g2000cwv.googlegroups.com>


Alvin Ryder wrote:
> Cimode wrote:
> > As I was suspecting...
> >
> > The non-response from BB to the posts made totally proves my point. I
> > rest my case... His attitude was a clear answer to all the questions I
> > was asking myself...
> >
>
> Cimode, this sub-thread was between you and me (Alvin Ryder), so I'm not
> sure why you'd expect BB to reply here. OK, I'll admit "Alvin Ryder" is
> just a spam deflector and not my real name, but I'm not BB.
>
> As for lack of reply, don't draw too many inferences from it. I was
> just trying to focus on theory, not physical details, as per this ng.
>
> However, since I have worked with dbs at this level maybe I should
> reply, see below.
>
> > I will take pleasure in debunking his and his peers' incoherence in
> > future posts.
> >
> > Cimode wrote:
> > > Alvin Ryder,
> > >
> > > Thank you for helping address this complex issue...
> > >
> > > //Actually the relational model (RM) allows for and expects "user
> > > defined
> > > types", a user defined type without operators is useless so it also
> > > expects user defined operators. Operators are also called functions or
> > > methods and that is what OOP proponents refer to as "behavior".//
> > > I am not convinced functions and methods are sufficient to address the
> > > RM issue of RTable representation and manipulation. Operators are just
> > > a part of what a data type is. My understanding is that *behavior*
> > > (through methods and functions) may address the issue of manipulation,
> > > but not sufficiently the issue of adequate representation of a relvar.
> > > SQL tables, on the other hand, address the representation in a clear but
> > > limited manner.
> > >
> > > Data+code.
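
To make the "data+code" point concrete, here is a rough C++ sketch (the
type and operator names are invented for the example, not taken from any
DBMS): a user-defined attribute type is the stored representation *plus*
the operators that give it meaning.

  #include <cmath>

  struct Point {               // the "data" part of the type
      double x;
      double y;
  };

  // the "behavior" part: operators/functions defined over the type
  bool operator==(const Point& a, const Point& b) {
      return a.x == b.x && a.y == b.y;
  }

  double distance(const Point& a, const Point& b) {
      return std::sqrt((a.x - b.x) * (a.x - b.x) +
                       (a.y - b.y) * (a.y - b.y));
  }

Without the operators the type is useless to the DBMS; without a
representation the operators have nothing to work on.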
> > >
> > > // But most commercial DBMS don't make it easy to create user
> > > (programmer) defined attribute types. You can easily use the vendor
> > > supplied types, but defining your own is another story. However, this
> > > limitation was never imposed by the RM.// Yes, no doubt about that. The
> > > limitation clearly comes from current implementations of SQL. My guess
> > > is that this limitation is tied to the current in-memory addressing
> > > schemes, which fail to represent, and therefore to manipulate, relvars
> > > adequately. What kind
> > >
> > > //In earlier editions of "Introduction to Databases ..." he uses the
> > > word
> > > "atomic" and gives clear examples where a Set is *not* allowed as an
> > > attribute (column). The set elements {A, B, C} had to be "decomposed
> > > and normalized".//
> > >
> > >
> > > But later, in books like "Database in Depth", this definition was
> > > "clarified" - "domains are types", now you *can* have an attribute
> > > (column) of type Array, Set and whatever else. As long as you only
> > > have
> > > a single value or instance of a type like Set it's OK.
> > >
> > > For example: Set instances like {A, B, C} are allowed in a column,
> > > other tuples (rows) might have {A, B}, {A}, {} in that same column.
> > > But
> > > none can have {A, B, C} + {X, Y, Z} because that isn't a /single/ value
> > > or instance.
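
As a rough illustration of such a set-valued attribute (the struct and
attribute names below are invented for the example, nothing more), each
tuple carries exactly one set instance - possibly empty - never two:

  #include <set>
  #include <string>
  #include <vector>

  struct SupplierTuple {
      std::string sno;                // ordinary scalar attribute
      std::set<std::string> parts;    // set-valued attribute: one set per tuple
  };

  std::vector<SupplierTuple> suppliers = {
      {"S1", {"A", "B", "C"}},
      {"S2", {"A", "B"}},
      {"S3", {}}                      // the empty set is still a single value
  };

What is *not* allowed is two distinct set values, {A, B, C} and
{X, Y, Z}, crammed into the same attribute of the same tuple.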
> > >
> > > You can even have an attribute of type Relation and plug an entire
> > > relation into an attribute (RVA). Now since we already have operators
> > > to handle relations why not allow it to be decomposed?
> > >
> > > I hope it can be seen that this "reclarified" definition is no small
> > > step in bridging OOP and the RM. If you use relational keys instead of
> > > in-memory pointers, in theory, you now have enough to support "object
> > > based" programming (i.e. OOP with no inheritance)!
> > >
> > > In "The Third Manifesto" we go one step further, sub-types are covered
> > > which opens up the inheritence door as well.
> > >
> > > In practice most people I've encountered are stuck on the earlier
> > > "Intro to databases" definitions; they'll get a shock when they see
> > > ADTs in SQL 3. But all SQL aside, in theory the gap between OOP and
> > > RM is much smaller.//
> > >
> > > Yes. What would be the consequences for defining physical schemes in
> > > memory? What should these look like to optimize decomposition and
> > > normalization? At present, most RAM is laid out as row/column
> > > addresses, and 64-bit architectures tend to add memory-block
> > > addressing schemes. What correlations are possible in representing
> > > RTables (logical tables)?
>
> <Lets get physical then>
>
> When I did this kind of work we had at least 2 levels of indirection
> above what you've described.
>
> Firstly we used C, then C++; these let you view memory as being linear.
> Memory starts at M and is of size Z. Unless you go down to the
> controller or driver level, you don't see memory as you've described it.
>
> I'm not saying your approach is wrong and it may be interesting to
> explore but for now I'm just talking from past experience.
>
> Secondly, up and above this "linear memory model", the notion of
> "blocks" or "pages" with a Cache Manager is used to ensure database
> fragments like tables and indexes are in memory when required. The
> cache manager doesn't know anything about normalization or even tables;
> it only sees pages and swaps them in and out from disk as required.
> Initially as void * type memory, later overlaid with a specific struct
> as per the schema. (Yes, this approach shows a strong C rather than C++
> heritage.)
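
If I restate your page/overlay description as a sketch (the page size,
struct layouts and names are invented here, only the general shape
matters), this is how I read it:

  #include <cstddef>
  #include <cstdint>

  const std::size_t PAGE_SIZE = 8192;      // assumed page size

  struct EmpRow {                          // hypothetical physical row layout
      std::int32_t emp_id;
      char         name[32];
      double       salary;
  };

  struct EmpPage {                         // hypothetical page layout
      std::uint32_t row_count;
      EmpRow        rows[(PAGE_SIZE - sizeof(std::uint32_t)) / sizeof(EmpRow)];
  };

  // the cache manager knows nothing about tables; it hands back raw pages
  void* fetch_page(int /*page_no*/) {
      static unsigned char buffer[PAGE_SIZE];   // stand-in for the real cache
      return buffer;
  }

  // the layer above overlays the schema-specific struct onto the raw page
  double first_salary(int page_no) {
      EmpPage* page = static_cast<EmpPage*>(fetch_page(page_no));
      return page->row_count > 0 ? page->rows[0].salary : 0.0;
  }

So the cache manager stays completely schema-agnostic; only the overlay
step knows what a page "means".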
>
> I'm not saying your uber-physical approach is wrong or impossible, but
> it isn't how I've seen it done, and from what I calculate you'd lose
> portability at the cost of additional programming that I would rather
> not do. OK, I'll admit you might gain some performance increase, but not
> at a price I'd like to pay.
>
> Nowadays I do game programming (and still some Java/biz stuff), but in
> games we use similar principles: vast worlds need to be strategically
> mapped into finite memory - but again, as with DBs, there is no need to
> dig deeper into physical memory models. What C/C++ provide is good
> enough.
>
> Mind you, for some applications a custom memory manager instead of
> malloc/new can be useful, but even then we don't go down to the level
> you're talking about.
>
> </Lets get physical then>
>
> I may have misunderstood you but you seem to be going from logical
> tables straight to memory at the controller level? Frankly I can't see
> that approach working due to too many shortcuts.
Shortcuts is not the term... consequence is more adequate... There is an obvious logical reasoning for implementing relations adequately in computing:
abstract mathematics (relations) defines the logical data management concepts (the RM), which in turn define a logical implementation at the computational level on a given physical configuration (yes, RAM and all the low-level garbage).

You are correct in assuming that the issue I want to raise is purely physical. The logical issues do not require rediscussion, except at the implementation/computational level, given the current physical limitations. I can understand that your past experience led to some discouragement.

My point is that *purely* physical considerations have a limiting effect that is not sufficiently addressed at the logical/computational level. A simple observation: current three-dimensional RAM configurations (blocks, columns, rows) force projections (mathematically speaking) from a multidimensional coordinate space down to a three-dimensional one, which creates unnecessary additional overhead when performing operations involving relvars. I am trying to evaluate that overhead and see how it could be reduced more effectively (either through computational-level mechanisms and algorithmics OR through the pure physical configuration).
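
To be explicit about the kind of overhead I mean, here is a deliberately
naive sketch (the geometry and numbers are invented): the relvar lives in
an n-dimensional coordinate space, but every access has to be flattened
to a linear address and then re-split by the memory system into
(block, row, column):

  #include <cstddef>
  #include <cstdio>

  const std::size_t COLS_PER_ROW   = 1024;   // assumed RAM geometry
  const std::size_t ROWS_PER_BLOCK = 8192;   // assumed RAM geometry

  struct RamCoord { std::size_t block, row, col; };

  // projection step 1: logical (tuple, attribute) coordinate -> linear address
  std::size_t linearize(std::size_t tuple, std::size_t attr,
                        std::size_t attrs_per_tuple) {
      return tuple * attrs_per_tuple + attr;
  }

  // projection step 2: linear address -> physical (block, row, column)
  RamCoord to_ram(std::size_t addr) {
      RamCoord c;
      c.col   = addr % COLS_PER_ROW;
      c.row   = (addr / COLS_PER_ROW) % ROWS_PER_BLOCK;
      c.block = addr / (COLS_PER_ROW * ROWS_PER_BLOCK);
      return c;
  }

  int main() {
      RamCoord c = to_ram(linearize(42, 3, 5));   // attribute 3 of tuple 42
      std::printf("block=%zu row=%zu col=%zu\n", c.block, c.row, c.col);
      return 0;
  }

Every relvar operation pays for those two flattening steps on every
access; that arithmetic (and the access patterns it induces) is the
overhead I would like to evaluate.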

> Cheers,
> "Alvin Ryder"
Received on Sun Jul 02 2006 - 23:49:18 CEST
