Re: 3vl 2vl and NULL

From: dawn <dawnwolthuis_at_gmail.com>
Date: 16 Feb 2006 07:49:49 -0800
Message-ID: <1140104989.083574.22250_at_z14g2000cwz.googlegroups.com>


Frank Hamersley wrote:
> dawn wrote:
> > Frank Hamersley wrote:
> >>dawn wrote:
<snip>
> >>For my benefit can you identify the top 5 (in your opinion)?
> >
> > 1) The query language has no corresponding update language ever put
> > into production
>
> Was there a plan to do so, or was PickBasic deemed capable enough?

Chandru (who has posted in this forum) wrote an update processor in the early 70's that wasn't quite ready for prime time when Pick joined forces with Microdata, so that software was not bundled with the system. I don't know the whole story, but it appears that the project was picked up in the 80's and was a pet project for Dick Pick. There are interactive tools, but no language-based set processing update processors that I have found in my look over the MV landscape. I would guess that one reason this was never completed or implemented was that DataBASIC is a very full-featured language that satisfied the primary requirements for developers, so there was no outcry (and still isn't).

> So
> every update is a programme? Thats not so bad if you have a "copy deck"
> of standard templates and access/update methods.

Yes. Often CRUD services are written (SOAs are not a new concept ;-).

> Of course with SQL
> large portions of the query statement can be used in forming the update.

Similarly, the query language can be used to produce what is called a select list (or saved list) of keys, which handles the selection aspect of what to update.
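For readers unfamiliar with the mechanics, here is a rough Python sketch of that select-list pattern (this is not DataBASIC; the file and field names are invented for illustration): the query side produces only a list of record keys, and an ordinary program then walks that list applying the update logic.

```python
# Toy in-memory "file" of records keyed by item-id.
customers = {
    "C1": {"NAME": "Acme", "STATE": "IA", "CREDIT": 500},
    "C2": {"NAME": "Bolt", "STATE": "MN", "CREDIT": 200},
    "C3": {"NAME": "Cog",  "STATE": "IA", "CREDIT": 900},
}

def select_list(file, predicate):
    """Like SELECT CUSTOMERS WITH STATE = "IA": returns keys only."""
    return [key for key, rec in file.items() if predicate(rec)]

# The "update processor" is just a program looping over the saved list;
# the set-processing update language itself never shipped.
keys = select_list(customers, lambda r: r["STATE"] == "IA")
for key in keys:
    customers[key]["CREDIT"] += 100   # the update logic lives in code

print(sorted(keys))               # ['C1', 'C3']
print(customers["C1"]["CREDIT"])  # 600
```

The point of the sketch is the division of labor: the query language selects, but every update is a program (or a reusable CRUD service) iterating the selected keys.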

<snip>
> > 2) The DataBASIC language often used in MV solutions
<snip>
> So is C :-). What sort of library support does it offer?

I'm not a DataBASIC programmer, so maybe someone else will pick this up.

<snip>
> How do you predict all of the required logical views for ad hoc queries

That is a significant issue. Most sites don't predict; they react to requests throughout the development process and after an application is live, adding virtual fields (derived data, or paths to attributes in other files) as needed. At one site where I worked, we hired student employees for a few hours a week to add virtual fields as requested; if a report was too complex for end-users, the students would write it for them too.
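A hedged Python sketch of what a virtual field amounts to (the names `EXT_PRICE`, `fetch`, etc. are invented, not real Pick dictionary syntax): the virtual field is a formula resolved at query time, so adding one changes what queries can show without touching any stored records.

```python
# A stored record: only real attributes live on disk.
order = {"QTY": 4, "UNIT_PRICE": 12.50, "CUST_ID": "C7"}

# Dictionary-style virtual fields: formulas, not stored data.
# Retrofitting one is a schema tweak, not a data migration.
virtual_fields = {
    "EXT_PRICE": lambda rec: rec["QTY"] * rec["UNIT_PRICE"],
}

def fetch(rec, field):
    """Resolve a field name: real data first, then virtual fields."""
    if field in rec:
        return rec[field]
    return virtual_fields[field](rec)

print(fetch(order, "QTY"))        # 4
print(fetch(order, "EXT_PRICE"))  # 50.0
```

This is why "retrofit on demand" is cheap: the derived value never exists in the record, only in the dictionary definition.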

During all those years of companies working hard to get end-user reporting, MV end-users were either churning out reports and ad hoc queries or asking their VARs or IT shops for new ones and getting a fast turn-around on their requests.

[Anecdote: I moved from an IMS COBOL shop with a significant backlog for reports to an MV shop with no backlog, zero, zip, for any reporting. It was very enlightening.]

> - is this an important additional phase in the design phase or is it
> simple to retrofit on demand?

Simple, but not something you would typically hand to an end-user, since it adjusts the schema (which is descriptive rather than prescriptive, so the risk is to other queries).

> > 4) There is no standard across vendors for a data source definition.
> > There is no standard for client-server connectivity. This gives third
> > parties the floor to write products like the jBASE mv.NET product that
> > works with many different flavors of MV.
>
> So each programme instance latches on to the files directly - no daemon
> request-response mechanism possible?

Each vendor has their own daemon approach for client-server, but these are not standardized. VARs and 3rd parties do not write MV-database-independent applications as often as they do in the SQL world.

> > 5) Third-party products for Business Intelligence and practically
> > everything else speak ODBC
<snip>

> Is it easy to dump (aka bcp out) to another (SQL?) environment for ad
> hoc queries, routine reporting or data warehousing?

A techie would say that it is easy, and I have advised and assisted in such efforts many times, but I find it not altogether satisfying. Once you normalize the data, your solution is no better than the average SQL implementation: you lose the charm (e.g. the ease of working with multiple 1:M's in queries) and take on added pain, such as switching from 2VL to 3VL (equating a Pick null with a SQL null does not typically work well). That said, those for whom I have recommended this approach for their specific requirements have told me it is working well for them.
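To make the 2VL/3VL pain concrete, here is a small Python simulation (my own illustration, not any vendor's semantics): SQL-style three-valued logic treats NULL comparisons as UNKNOWN, modeled here as `None`, while a Pick-style null is just an empty string that compares like any other value.

```python
# SQL-style 3VL: NULL makes a comparison UNKNOWN, not True or False.
def sql_eq(a, b):
    if a is None or b is None:
        return None            # UNKNOWN
    return a == b

# Pick-style 2VL: "null" is the empty string and compares normally.
def pick_eq(a, b):
    return a == b

print(sql_eq(None, None))      # None  (NULL = NULL is UNKNOWN)
print(pick_eq("", ""))         # True  (empty matches empty)

# A WHERE clause keeps a row only when the predicate is strictly True,
# so rows with NULL silently drop out of "region = NULL" style tests.
rows = [{"region": "east"}, {"region": None}]
kept = [r for r in rows if sql_eq(r["region"], None) is True]
print(kept)                    # []
```

A naive migration that maps Pick's empty-value null onto SQL NULL changes the answers queries give, which is exactly why the equation "does not typically work well."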

> > I'm not sure these are my top 5, but they are 5 of the top issues I
> > have with MV.
>
> Ouch - in my current business space, any of those would hurt like
> amputation without anaesthetic!

Yup, there are pros and cons with SQL products and with non-SQL products. In spite of these issues and others, MV typically still seems to be a bigger-bang-for-the-buck solution for companies. Just as people defend the RM while having problems with SQL, I think the MV data model has a simple elegance, and something like it could help the s/w dev industry reduce costs and improve products in a big way. JSON and XML are both "something like it" in different ways but do not carry successfully into the persistence layer (yet?), while MV has a significant (even if invisible, because it is sold through VARs) installed base.
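The "JSON is something like it" claim can be shown in a few lines of Python (field names invented): an MV record, like a JSON document, can carry several 1:M associations inside a single item, so one record read yields data that a normalized design would spread across joined tables.

```python
# An MV-style record resembles a JSON document: one customer item
# carries two 1:M associations (phones, orders) inside itself.
customer = {
    "NAME": "Acme Supply",
    "PHONES": ["555-0101", "555-0199"],     # multivalued field
    "ORDERS": [                             # a second 1:M, same record
        {"ID": "O1", "TOTAL": 120.0},
        {"ID": "O2", "TOTAL": 75.5},
    ],
}

# Both 1:M's are in hand after a single record read; no joins needed.
order_total = sum(o["TOTAL"] for o in customer["ORDERS"])
print(order_total)               # 195.5
print(len(customer["PHONES"]))   # 2
```

Normalizing this into three tables is straightforward, but it is exactly the step at which the "ease in working with multiple 1:M's" mentioned above is lost.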

> >>>When talking about data modeling, however, that is one area
> >>>where I cannot leave MV behind unless I can find a better data model.
<snip>
> Even if your data model ambitions were accepted, I doubt I could cop any
> of your top 5 to get that.

I'm not aiming to convince a current SQL-DBMS customer to adopt an MV database, even if they would be well-served. I'm trying to convince "the industry" to adopt more flexible (dare I say "agile") data models and related tools. That is why I smiled when Oracle bought a non-RDBMS product line with Berkeley DB and DB-XML and when IBM bought U2 (although they didn't know what they had at first). I don't think you can easily transition an RDBMS to where it needs to be (backward compatibility issues) so these companies need to start with something other than the RM (i.e. ditching the Information Principle) and SQL and move forward from there on some front.

> >>OK - but to summarise our discussion to date your problem with the RM is
> >>the inherent constraints prevent you from having a visually aligned data
> >>model that correlates exactly with the user view of the data model as
> >>particularly expressed by the UI.
> >>
> >>Is this a (relatively) succinct statement of your view?
> >
> > I don't think so because I think your defs sound different from mine.
>
> Thats why I restated in just 4 lines - on rereading them and my next
> para is it the nub of your opinion?
>
> > By "correlates exactly with the user view" it sounds like you think I
> > want the data from a single screen to be stored as a single entity or
> > something like that. I would not look for the logical data model of UI
> > screen to be identical to the logical data model of the database.
> > Surely one screen might collect data that updates multiple entities.
>
> I didn't (except for the need of brevity) aim to make it all or nothing,
> more that the alignment is very obvious in the MV situation and can be
> apparently less clear (to some) in the RM form.

I still would not say "visually aligned," although perhaps you could add "and linguistically aligned," so that the model is more aligned with a person's model of the situation. The UI gives hints about how humans think, as does the data structure behind a UI. Data need not be stored in a way that works for humans, but the interface between the rest of the software (e.g. the UI) and the database API, as well as the experience of the human working with that interface, can be much better than with a language that implements the RM, IMO.

<snip>

> >>b) CLR
>
> I think the Lazza's O and others have some similar facilities - Sybase
> opts for Java, but Bill is going for the whole sorry .Net kitchen sink!
>
> > Are these both SQL Server only? I don't know what the downside of CLR
> > is.
>
> RISK - like you have with MV (only joking, well not really, but far
> worse). Every half wit who finds regular SQL too taxing will start
> building bits and pieces of smart alec code and then ram it into the
> database -

That is what I figured you were thinking. Coming at it from the s/w dev standpoint, I have mixed feelings on that one. I'm very .NET-ignorant (by decision so far), however.

> that like all M$ stuff will fail from time to time. Argggh -
> all the mayhem you can handle and then more! As a foretaste I have
> already heard of ppl shelling out of sprocs to instantiate an object
> that invokes a DTS package to munge the very data they are interested
> in. So what if you lose a bit on the way.

Laughing. Flexibility is one of those features that some folks just hate, eh? You then have to hire good developers and have good testing (even then, I'll grant, it isn't airtight).

> [..]
> >>This simply confirms what Canute discovered. His goal may be correct
> >>but his method not. That doesn't mean it is unachievable as the Dutch
> >>have showed.
> >
> > Unless you are talking about King Canute and the Dutch holding back the
> > sea? That is the only thing that comes to mind, otherwise I'm clueless
> > on the allusion.
>
> Couldn't be any other conclusion could it? He didn't and they did!

I wasn't sure you wanted me to haul out the liberal arts or if there was some techie stuff I was missing ;-)

Cheers! --dawn