Re: Object oriented database

From: <patrick61z_at_yahoo.com>
Date: Sat, 1 Nov 2008 08:34:43 -0700 (PDT)
Message-ID: <30cd4b21-5cb5-4e23-8229-c9f2960967c7_at_u29g2000pro.googlegroups.com>


On Nov 1, 10:12 am, "Walter Mitty" <wami..._at_verizon.net> wrote:
> <patrick..._at_yahoo.com> wrote in message
>
> news:ab77a4d8-c281-4417-af55-7f9b86af7fac_at_x1g2000prh.googlegroups.com...
>
> > The biggest weakpoint with dbms is that it was pretty darn hard to
> > modify either the tables or the relationships (sets). Pretty much was
> > a process of 'unload/reload', but the fact that organizations were
> > running routinely on '386 class hardware' does not escape me. I
> > remember at least one project keeping the transactional database on
> > dbms and farming out the reporting to an oracle server with nightly
> > datamart dumps.
>
> By 'dbms' do you mean VAX DBMS?
>
> > Digital's datatrieve and cdd was very forwardlooking at the time
> > (imo), and I would see secretaries using 'ade' or something in telnet
> > windows to create entire applications and building their own queries
> > without the it deparment knowing anything about it. Obviously the cdd
> > could target rdb and regular files too. I remember the entire product
> > line from digital's database folks to be incredibly interesting.
>
> Datatrieve was very interesting from the point of view of "integrating
> relational and non relational data", something I said in another reply.
> By using Datatrieve's CROSS operator, you could, in effect, join data from
> a relational database like VAX RDB with data from a non relational database
> like VAX DBMS, or from RMS files. If you could get through a gateway you
> could even use data from IMS, IDMS, or CICS.
>
> In particular, the CROSS operator had no particular difficulty in dealing
> with repeating groups. It made input from files with repeating groups "look
> flat".
>
> So, would an up to date datatrieve do what you want? Why or why not?

No, in this thread it's still the persistence. Sticking with dbms to keep it familiar: it had features for defining your records and relationships such that these 'sets' already knew which records a join would return. The ultimate precalculation is an actual list of disk offsets you can follow to fetch the subordinate records in the relationship (between parent and children hierarchically, or the records on each side of the join). This needs an interface that either does not need a 'join', or otherwise lets my join actually specify the children (members) in the set. In some cases the disk offsets point into the very record you have just fetched (repeating groups), but the fact that there are child records subordinate to a parent record is easily visualized. It's what Pick folks call a 'nested relation', I think.
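A minimal sketch of that set idea, with all names invented for illustration: the owner record carries a precomputed list of member offsets, so fetching the children is pure pointer-chasing with no join, no index, and no hash. This is not VAX DBMS's actual on-disk format, just the shape of the technique.

```python
# Hypothetical sketch: a CODASYL-style "set" where the owner record holds
# the precomputed "disk offsets" of its members, so navigation is join-free.

class Store:
    """A flat record store addressed by integer offset (stand-in for disk)."""
    def __init__(self):
        self.records = []

    def insert(self, record):
        self.records.append(record)
        return len(self.records) - 1  # the record's "disk offset"

    def fetch(self, offset):
        return self.records[offset]

def connect(store, owner_off, member_off):
    """Link a member into its owner's set by storing the raw offset."""
    store.fetch(owner_off).setdefault("member_offsets", []).append(member_off)

def members(store, owner_off):
    """Navigate the set: follow stored offsets directly, no join needed."""
    owner = store.fetch(owner_off)
    return [store.fetch(off) for off in owner.get("member_offsets", [])]

store = Store()
dept = store.insert({"name": "ENGINEERING"})
for emp in ("alice", "bob"):
    connect(store, dept, store.insert({"name": emp}))

print([m["name"] for m in members(store, dept)])  # ['alice', 'bob']
```

The point of the precalculation is visible in `members`: the "join" result was materialized at `connect` time, so reads pay only for the fetches themselves.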

Yes, rdbms has this in the sense that the actual implementation of the database is not specified, but no, today's rdbms often will not let me specify, either in the language or via an out-of-band 'hint', that I'd like the subordinate records to actually be stored as a repeating group. The products that do allow repeating groups are seen as wimping out on rdbms dogma.
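To make the contrast concrete, here is a hedged sketch (all field names invented) of the same data in the two layouts: a repeating group kept inside the parent record versus the flat, normalized layout that forces a foreign-key join at query time.

```python
# Hypothetical sketch: "repeating group" layout (child rows live inside the
# parent record -- the Pick-style nested relation) vs. the flat layout.

# Nested layout: one fetch of the order returns its lines too.
order_nested = {
    "order_id": 1001,
    "customer": "ACME",
    "lines": [                      # the repeating group, stored in-record
        {"sku": "BOLT", "qty": 40},
        {"sku": "NUT",  "qty": 40},
    ],
}

# Flat layout: the same data normalized into two tables.
orders = [{"order_id": 1001, "customer": "ACME"}]
order_lines = [
    {"order_id": 1001, "sku": "BOLT", "qty": 40},
    {"order_id": 1001, "sku": "NUT",  "qty": 40},
]

def lines_for(order_id):
    """The join step the nested layout avoids entirely."""
    return [line for line in order_lines if line["order_id"] == order_id]

# Both layouts hold identical information; only the access path differs.
assert [l["sku"] for l in order_nested["lines"]] == \
       [l["sku"] for l in lines_for(1001)]
```

The nested form trades update flexibility (the group lives and dies with its parent) for reads that never touch a second table.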

The very small engine I worked on for a bit ditched the schema entirely. For each key in a record that points to another foreign-key-containing record, there are actually two or more storage locations: one for the application-generated key, and one for the cached resolution of where the record actually is. A space-time tradeoff of sorts. I literally do not even want to calc a hash. Records can also be marked as 'prefer to be cached and write-through.'
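The dual-slot idea can be sketched roughly like this (class and field names are my invention, not the engine's): each stored reference carries both the application key and a cached resolution of the record's physical location, so a warm reference skips the hash lookup entirely.

```python
# Hypothetical sketch: a foreign-key reference stored as [app_key, cached_slot].
# Cold reads pay one index lookup; warm reads go straight to the record.

class Engine:
    def __init__(self):
        self.heap = []      # records by physical slot
        self.index = {}     # app key -> slot (the hash lookup we try to skip)

    def insert(self, key, record):
        self.heap.append(record)
        self.index[key] = len(self.heap) - 1

    def deref(self, ref):
        """ref is a mutable pair [app_key, cached_slot]."""
        if ref[1] is None:              # cold: resolve via the index once
            ref[1] = self.index[ref[0]]
        return self.heap[ref[1]]        # warm: direct fetch, no hashing

eng = Engine()
eng.insert("C42", {"name": "ACME"})

customer_ref = ["C42", None]        # as it would sit inside some other record
eng.deref(customer_ref)             # first use resolves and caches the slot
print(eng.deref(customer_ref)["name"])  # ACME
```

The obvious cost, which a real engine would have to handle, is invalidating the cached slot whenever a record is moved or deleted; this sketch ignores that.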

Yes, I know rdbms theory probably doesn't want to hear about that sort of stuff. That's what killfiles are for.
