Re: 3vl 2vl and NULL

From: FrankHamersley <FrankHamersleyZat_at_hotmail.com>
Date: Fri, 17 Feb 2006 12:45:12 GMT
Message-ID: <szjJf.10307$yK1.7167_at_news-server.bigpond.net.au>


dawn wrote:
> Frank Hamersley wrote:

>>dawn wrote:
>>>Frank Hamersley wrote:
>>>>dawn wrote:

> <snip>
>>>>For my benefit can you identify the top 5 (in your opinion)?
>>>
>>>1) The query language has no corresponding update language ever put
>>>into production
>>
>>Was there a plan to do so, or was PickBasic deemed capable enough?

>
> Chandru (who has posted in this forum) wrote an update processor in the
> early 70's that wasn't quite ready for prime time when Pick joined
> forces with Microdata, so that software was not bundled with the
> system. I don't know the whole story, but it appears that the project
> was picked up in the 80's and was a pet project for Dick Pick. There
> are interactive tools, but no language-based set processing update
> processors that I have found in my look over the MV landscape. I would
> guess that one reason this was never completed or implemented was that
> DataBASIC is a very full-featured language that satisfied the primary
> requirements for developers, so there was no outcry (and still isn't).

Fair enough - maybe one day, maybe not!

>>So
>>every update is a programme?  That's not so bad if you have a "copy deck"
>>of standard templates and access/update methods.

>
> Yes. Often CRUD services are written (SOAs are not a new concept ;-).
>
>>Of course with SQL
>>large portions of the query statement can be used in forming the update.

>
> Similarly, the query language can be used to get what is called a
> select list or saved list of keys for the selection aspect of what to
> update.

So you actually save the list of targets? What happens in a TP environment where it is a moving feast?
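
For anyone else following along, my reading of dawn's "select list" is that the
BASIC side of select-then-update looks roughly like the following - a sketch
from memory of generic Pick/DataBASIC, with invented file and attribute names,
so treat the syntax as indicative rather than gospel:

   * Build a key list with the query language, then walk it updating records
   OPEN 'CUSTOMERS' TO CUST.FILE ELSE STOP 201, 'CUSTOMERS'
   EXECUTE 'SELECT CUSTOMERS WITH STATE = "MN"'  ;* creates the active select list
   LOOP
      READNEXT ID ELSE EXIT                      ;* next key from the select list
      READ REC FROM CUST.FILE, ID THEN
         REC<7> = 'ACTIVE'                       ;* attribute 7 is made up
         WRITE REC ON CUST.FILE, ID
      END
   REPEAT

i.e. the query does the set selection and the programme (from the "copy deck"
of standard templates) does the set processing one item at a time - which is
what prompts my TP question above.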

>>>2) The DataBASIC language often used in MV solutions

> <snip>
>>So is C :-).  What sort of library support does it offer?

>
> I'm not a DataBASIC programmer, so maybe someone else will pick this
> up.
> <snip>
>>How do you predict all of the required logical views for ad hoc queries

>
> That is a significant issue. Most sites don't predict, but react to
> requests throughout the development process and after an application is
> live, adding in virtual fields (derived data or path to attributes in
> other files) as needed. At one site I worked, we hired student
> employees to work a few hours a week to add virtual fields as requested
> and if any report was too complex for end-users, then they would write
> the report for them too.

OK - what is the risk profile of less experienced people writing ad hoc stuff? Is it easy for them to generate runaway trains - a view that takes a lot of compute to resolve - or are there safety nets?
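
(For reference, the "virtual field" I am picturing is a dictionary item holding
an expression that is evaluated at query time rather than stored data. On the
U2-style flavours I have glanced at it looks something like the sketch below,
though the exact attribute layout varies by vendor and the file and field
names here are invented:

   CUST.NAME                                  dictionary item in DICT ORDERS
   001: I                                     I-descriptor (computed field)
   002: TRANS('CUSTOMERS', CUST.ID, 1, 'X')   follow the key into another file
   003:                                       conversion code (none)
   004: Customer Name                         column heading
   005: 25L                                   display format
   006: S                                     single valued

A derived-data field would simply put an expression such as DATE() - ORDER.DATE
in attribute 002 instead, after which LIST ORDERS CUST.NAME just works. So
"adding a virtual field" is editing a dictionary record rather than touching
stored data - presumably why it can be farmed out to students, and also why I
ask about runaway queries: nothing in the definition itself says how much
compute the expression costs.)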

> During all those years of companies working hard to get end-user
> reporting, MV end-users were either churning out reports and ad hoc
> queries or asking their VARs or IT shops for new ones and getting a
> fast turn-around on their requests.
>
> [Anecdote: I moved from an IMS COBOL shop with a significant backlog
> for reports to an MV shop with no backlog, zero, zip, for any
> reporting. It was very enlightening.]

No doubt - COBOL would not be my lang of choice however!

>>- is this an important additional phase in the design phase or is it
>>simple to retrofit on demand?

>
> simple, but not something you typically pass to an end-user to do since
> it adjusts the schema (which is descriptive and not prescriptive, so
> the risk is to other queries).

What happens to these views if cardinality is changed, perhaps even arbitrarily? Are they "self-healing" or do they require maintenance?

>>>4) There is no standard across vendors for a data source definition.
>>>There is no standard for client-server connectivity. This gives third
>>>parties the floor to write products like the jBASE mv.NET product that
>>>works with many different flavors of MV.
>>
>>So each programme instance latches on to the files directly - no daemon
>>request-response mechanism possible?

>
> Each vendor has their own daemon approach for client-server, but these
> are not standardized. VARs and 3rd parties do not write
> MV-database-independent applications as often as they do in the SQL
> world.

Does it get deployed in multi-tier apps much? The one I saw was single tier.

>>>5) Third-party products for Business Intelligence and practically
>>>everything else speak ODBC

>
> <snip>
>
>>Is it easy to dump (aka bcp out) to another (SQL?) environment for ad
>>hoc queries, routine reporting or data warehousing?

>
> A techie would say that it is easy and I have advised and assisted in
> such efforts many times, but find it not altogether satisfying. Once
> you normalize the data, your solution is no better than the average SQL
> implementation so you lose the charm (e.g. ease in working with
> multiple 1:M's in queries) and have the added pain of such things as
> switching from 2VL to 3VL, for example (equating a Pick null with a SQL
> null does not typically work well). Those for whom I have recommended
> this approach for their specific requirements have told me it is
> working well for them, however.

Is there no other way to leverage these BI products?
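
On the 2VL/3VL point, checking I have the right picture of the pain: in Pick an
"absent" value is just an empty attribute, so ordinary two-valued tests cover
it, whereas once it is exported as a genuine SQL NULL every comparison goes
three-valued. Sketch only (invented file, key and attribute):

   OPEN 'CUSTOMERS' TO CUST.FILE ELSE STOP 201, 'CUSTOMERS'
   READ REC FROM CUST.FILE, '1001' THEN
      * an unset discount is simply the empty string - no special logic
      IF REC<5> = '' THEN PRINT 'no discount recorded' ELSE PRINT REC<5>
   END ELSE
      PRINT 'no such record'
   END

whereas on the SQL side the same value, loaded as NULL, silently drops out of a
predicate like DISCOUNT <> 0 (the test is unknown, not true), gets skipped by
aggregates, and needs IS NULL handled explicitly - which I take to be dawn's
point that the two "nulls" do not map cleanly.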

>>>I'm not sure these are my top 5, but they are 5 of the top issues I
>>>have with MV.
>>
>>Ouch - in my current business space, any of those would hurt like
>>amputation without anaesthetic!

>
> Yup, there are pros and cons with SQL products and with non-SQL
> products. In spite of these issues and others, MV typically still
> seems to be a bigger bang for the buck solution for companies. Just as
> people defend the RM while having problems with SQL, I think the MV
> data model has a simple elegance and something like it could help the
> s/w dev industry reduce costs and improve products in a big way. JSON
> and XML are both "something like it" in different ways but do not carry
> successfully into the persistence layer (yet?), while MV has a
> significant (even if invisible, because it is sold through VARs) installed
> base.

There are no silver bullets! It reminds me of a prediction I made when I first started in IT. I suspected I would need a different job in about 5 years because, at the rate IT was expanding, by then programmes writing programmes would make me as a hacker redundant. Of course I was coding in ASM at the time and we had been exposed to Modula-2 plus reviewed the Ada spec in the course, and it seemed like a formality. Of course I was wrong - so I gave up trying to predict and now just look for empirical data - which is why I see the market share as confirming the RM over MV. Anything that upsets this apple-cart is going to be novel, not a throwback - yikes, another prediction!
>

>>>>>When talking about data modeling, however, that is one area
>>>>>where I cannot leave MV behind unless I can find a better data model.

>
> <snip>
>
>>Even if your data model ambitions were accepted, I doubt I could cop any
>>of your top 5 to get that.

>
> I'm not aiming to convince a current SQL-DBMS customer to adopt an MV
> database, even if they would be well-served. I'm trying to convince
> "the industry" to adopt more flexible (dare I say "agile") data models
> and related tools. That is why I smiled when Oracle bought a non-RDBMS
> product line with Berkeley DB and DB-XML and when IBM bought U2

I wouldn't ascribe too much to moves by marketing types.

> (although they didn't know what they had at first). I don't think you
> can easily transition an RDBMS to where it needs to be (backward
> compatibility issues) so these companies need to start with something
> other than the RM (i.e. ditching the Information Principle) and SQL and
> move forward from there on some front.

Good luck to them - hey, why not bet on both red and black? Since 70 to 80% (anecdotally) of projects sink without much trace, why not do a Cheney and spray shot all around!

>>>>OK - but to summarise our discussion to date your problem with the RM is
>>>>the inherent constraints prevent you from having a visually aligned data
>>>>model that correlates exactly with the user view of the data model as
>>>>particularly expressed by the UI.
>>>>
>>>>Is this a (relatively) succinct statement of your view?
>>>
>>>I don't think so because I think your defs sound different from mine.
>>
>>That's why I restated it in just 4 lines - on rereading them and my next
>>para, is it the nub of your opinion?
>>
>>>By "correlates exactly with the user view" it sounds like you think I
>>>want the data from a single screen to be stored as a single entity or
>>>something like that.  I would not look for the logical data model of UI
>>>screen to be identical to the logical data model of the database.
>>>Surely one screen might collect data that updates multiple entities.
>>
>>I didn't (except for the need of brevity) aim to make it all or nothing,
>>more that the alignment is very obvious in the MV situation and can be
>>apparently less clear (to some) in the RM form.

>
> I still would not say "visually align" although perhaps you could add
> "and linguistically aligned," so you can tell it is more aligned with a
> person's model of the situation. The UI gives hints on what aligns
> with how humans think as does the data structure of a UI. Data need
> not be stored in a way that works for humans, but the interface between
> the rest of the software (e.g. UI) and the database API as well as the
> human working with the interface to the database can be much better
> than with a language that implements the RM, IMO.

OK - I accept that RM forms are not always instantly appealing to a casual observer. However, I still rate the solidity of its foundations as more than adequate compensation, and once it is under your skin (i.e. it all clicks) you hardly even credit that a "problem" exists.
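
Just so we are picturing the same "alignment": the MV form I take you to mean
is one order item carrying its own repeating line detail, e.g. (a sketch with
invented names):

   OPEN 'ORDERS' TO ORD.FILE ELSE STOP 201, 'ORDERS'
   REC = ''
   REC<1>   = 42             ;* customer key
   REC<2,1> = 'A100'         ;* part numbers - a multivalued attribute...
   REC<2,2> = 'B200'
   REC<3,1> = 5              ;* ...with quantities associated by position
   REC<3,2> = 2
   WRITE REC ON ORD.FILE, '1001'

against the RM rendering of an ORDERS table plus an ORDER_LINES table rejoined
on the order key. I grant that is the "multiple 1:M's" convenience you refer
to; my point is that the RM's foundations are worth that bit of ceremony.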

> <snip>
>

>>>>b) CLR
>>
>>I think the Lazza's O and others have some similar facilities - Sybase
>>opts for Java, but Bill is going for the whole sorry .Net kitchen sink!
>>
>>>Are these both SQL Server only?  I don't know what the downside of CLR
>>>is.
>>
>>RISK - like you have with MV (only joking, well not really, but far
>>worse).  Every half wit who finds regular SQL too taxing will start
>>building bits and pieces of smart alec code and then ram it into the
>>database -

>
> That is what I figured you were thinking. Coming at it from the s/w
> dev standpoint, I have mixed feelings on that one. I am very
> .NET-ignorant (by decision so far), however.

Look closely at the world around you. I just recently saw a .Net crash panel on a customer-facing cash register screen. The operator just shrugged and rebooted the app - no problem, what problem, nothing to see, move along!

>>that like all M$ stuff will fail from time to time.  Argggh -
>>all the mayhem you can handle and then more!  As a foretaste I have
>>already heard of ppl shelling out of sprocs to instantiate an object
>>that invokes a DTS package to munge the very data they are interested
>>in.  So what if you lose a bit on the way.

>
> laughing. Flexibility is one of those features that some folks just
> hate, eh? You then have to hire good developers and have good testing
> (even then I'll grant it isn't air tight).

Yep - in some places it is more dangerous and in others it is desirable. Always it must be in good hands.

>>>>This simply confirms what Canute discovered.  His goal may be correct
>>>>but his method not.  That doesn't mean it is unachievable as the Dutch
>>>>have shown.
>>>
>>>Unless you are talking about King Canute and the Dutch holding back the
>>>sea?  That is the only thing that comes to mind, otherwise I'm clueless
>>>on the allusion.
>>
>>Couldn't be any other conclusion could it?  He didn't and they did!

>
> I wasn't sure you wanted me to haul out the liberal arts or if there
> was some techie stuff I was missing ;-)
>
> Cheers! --dawn

Thanks for being candid about Pick's top 5 shortcomings - your co-conspirators prolly think you are related to Judas Iscariot by blood :-).

Cheers, Frank.
