Re: Fails Relational, Fails Third Normal Form

From: Derek Asirvadem <derek.asirvadem_at_gmail.com>
Date: Thu, 12 Feb 2015 06:55:38 -0800 (PST)
Message-ID: <08311c25-51d5-470d-a31d-8d110160202a_at_googlegroups.com>


James

Great post

> On Thursday, 12 February 2015 17:23:44 UTC+11, James K. Lowden wrote:
> On Tue, 10 Feb 2015 23:06:25 -0800 (PST)
> Derek Asirvadem <derek.asirvadem_at_gmail.com> wrote:
>
> > > From the academic point of view -- indeed from the point of view of
> > > the DBMS, as you know -- no column has meaning.
> >
> > Totally disagree. When you say "DBMS", you may be meaning
> > "theoretical DBMS", in which case, I don't agree or disagree, you are
> > welcome to entertain it as a theoretical concept, if such is valid.
>
> Actually, I'm sure you agree. By "DBMS" I mean "database management
> system". It's a machine, and it has no concept of meaning. It
> provides us with the illusion of semantic engagement by representing
> its tuples with names with which we associate meaning. To the machine,
> each column simply has a type and some defined relationship to other
> columns. It enforces those relationships, thereby consistency, thus
> supporting verifiable logical manipulation.

Ok, so you don't mean DBMS.

You mean the theoretical concept of a DBMS. Abstracted to the point where those statements can be true. Ok, I agree.

> > I am not saying the theoreticians in the RDB space are stupid because
> > they have assumptions and can't proceed, etc.
> > - I am saying the theoreticians in the RDB space are stupid because
> > they are using a hammer for a task that calls for an axe.
> > ==AND== they will not observe the evidence that the hammer is not
> > working, that it is not suited to the job.
> > ==AND== they are ignorant (or in denial) that axes exist.
>
> Because they insist on using an algorithm and FDs to determine
> keys?

I smell bait. I take it you've never kissed a fish.

I did once, it was a musky that I had been hunting for two years. When I finally caught it, it was much smaller than the fight that it had put up, and just above the limit, so I kissed it and let it go. The bastard went straight to the bottom, so I had to grab it again and flood its gills for half an hour, until it finally awakened and swam away.

Their algorithm is the hammer. Codd's algorithm is the axe. Read my paragraph above again, with that in mind.

Don't forget that theirs doesn't work, hasn't worked, weeks have gone by, and we are still waiting. And that Codd's algorithm works in a few minutes.

> > We want to prevent "Warshinton", when "Washington" is in the database.
>
> I see. I never worked on an application where that was warranted.
>
> Re soundex, I guess I reached for it half a dozen times over the
> years, ever hopeful, always disappointed.

That means you didn't get the hang of it, and you didn't use the full four characters returned. It is not something for everyday use, but used properly, it works perfectly for detecting spelling errors and similar.
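To make that concrete, a minimal sketch, assuming a Sybase or MS style SOUNDEX() that returns the full four-character code; the State and State_Import tables here are hypothetical, purely for illustration:

    -- Flag an incoming spelling that sounds like an existing row
    -- but does not match it exactly.
    SELECT i.Name AS Incoming,
           s.Name AS Existing
        FROM State_Import i
        JOIN State s
            ON  SOUNDEX( i.Name ) = SOUNDEX( s.Name )  -- all four chars
            AND i.Name           != s.Name

Compare the full four-character code; truncating it is what produces the disappointments. Where the platform provides it, DIFFERENCE() gives a looser, graded match for the cases SOUNDEX equality misses.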

> > But it concerns me, that you use the term FD interchangeably, whilst
> > knowing full-well that the real FD and the theoretical one are quite
> > different, the latter being only a fragment of the former.
>
> I am using the term conventionally; I have no tricks up my sleeve.
> What part of the "real" definition does the "theoretical" one lack?

I have detailed that in my responses to Jan in this thread, I won't repeat, please read. It appears he accepts it (no further response).

Yours might be conventional within the 1%; the Codd FD is conventional in the 99%.

The issue loses meaning when we talk about it in "parts". Codd's FD is an integrated whole, 3NF and FD defined together, and in the context of Relational Normalisation. No algebraic definition. Yours is a single fragment of that, with no context, no Normalisation, in an algebraic definition. They cannot be compared, the difference is not a list of parts.

The difference is a horse vs the boiled femur of a horse. Yes, absolutely, if and when you find a horse, you can verify that it is a horse, by using your boiled femur. But you can't build another horse with that boiled femur of yours. The 99% can with theirs.

Since you seem to be somewhat aware of the relevance of meaning, and feeding that into the machine, etc, it appears that you take the opposite position here, and that remains a concern for me. A key difference is that in your algorithm, you strip the meaning out of the names (step 1 in the algorithm, if you will) and then you use the non-FD-fragment to determine the non-keys from the x's and y's. Whereas the Codd algorithm retains the meaning, finds the Key first, then uses the FD to validate the determined Key, then the attributes, etc. So yours is bottom-up, devoid of the meaning that you claim is relevant, and ours is top-down, with meaning, and the meaning gets clarified (eg. we improve the names, improving and discriminating the meaning) during the process.

The second (first?) "part" that ours has, that yours lacks, is that we take in the whole picture (hence the relevance of a diagrammatic model): we evaluate all the tables in a contemplated cluster, we look at all the keys and FDs (including your "MVDs") together. And we go through iterations of that. Whereas you take one table at a time, again removing it from the context of the tables in the cluster, and evaluate it using a non-FD-fragment in terms of x's and y's.

Here I am defining two discrete levels of meaning:
- one within the table itself, relative to all the keys and attributes in the table (at whatever level of progress; whatever iteration)
- a second, within the context of all the other tables in the cluster, that it could possibly relate to (ditto); reworking the keys overall; the hierarchy; etc

The third (first?) "part" that ours has, that yours lacks, is success. Two full weeks and zero runs on the board.

> Namely, intuition. ... "jumping out"

Nonsense. An algorithm is hardly an intuition. The "jumping out" is something that happens with experience, when the success of the algorithm straightens out the neuronal pathways.

In this example, the "jumping out" I meant was that the tables are so familiar to all, or should be, and the meaning of the columns is really clear. If you are handling a trading inventory, you would be very clued up about countries.

Other than this example, in the normal case, when examining a new set of requirements ...

> I'm not being pejorative: The "jumping out" is the
> practice of associating meanings with (column) names and deciding what
> identifies what.

There you go with that word again. Yes. Agreed.

Which is why I say your algorithm is stupid because it strips the meaning out of the names that require the meaning to be maintained; rolled over the tongue; and determined; in the context of all the tables in the cluster; not alone. It is self-crippling. So you are left with a Sudoku puzzle, and it is "interesting" when three 9's appear in a line somewhere.

It is crazy, how you admit and claim to understand that meaning is very important, and then, first thing, you remove meaning. The technically correct word is schizophrenic. Second thing, you evaluate it in isolation from its context. Same word.

> > c. that they are using it anyway, which is silly, given [a][b]
> > d. they still haven't produced anything by that method
> > e. but that it remains a valid method for determining keys when a
> > human is not present to perform the analysis of data. (eg. I am not
> > saying it has no use; the use has not been defined to me; I leave
> > that open)
>
> I think you mean that you've never seen a good tool for describing a
> database that uses [non-]FD[fragment]s as its vocabulary.

(Insertion for clarity)

Nah. Any tool that is based on such a stupid algorithm, is unworthy of being written.

And then after you have done all that, you will want to pour the meaning (two levels) that you removed at the first step, back into the bag of bones. Wouldn't it be better to retain both levels of meaning, all the way through ?

> Neither have I, but I
> suggest to you that ERwin is basically that in disguise, more on which
> in a moment.

Ok, I will wait for the more.

> For an undisguised version, consider Nicola's exercise:
>
> Quoth Nicola on Thu, 05 Feb 2015:
> > A few years ago I implemented a few algorithms from Koehler's PhD
> > thesis in a Ruby script.

You may be feeding yourself to the lions here.

Taking up a challenge from Nicola, that Codd's and my way is the "hard" way to determine keys, I looked at Köhler's DNF paper. I understand that his thesis was the basis for his DNF. I looked at that paper, which he alleges is "relational", and it is a total unconscious Straw Man. When I placed his unnormalised data in the Relational context, per his stated requirements and rules, his problem disappeared, vaporised. So his proposal is null and void. It does not apply to the Relational context, or to Normalised data.

This is the trick that many theoretical papers in this space pull: they give a set of data which they allege is "relational", which they allege contains some devious, mysterious problem or other; then they give a proposal on how to sort out that alleged devious problem. Get this. The data given is not Relational, the first aggravating falsity, it is a bunch of unnormalised, non-relational garbage. Easily recognised by the Sudoku players as their compulsive puzzle and they go for it with their non-FD-fragments, like sex addicts who have found a new pornography channel. Köhler did the same.

Easily recognised by the implementers for what it is, unnormalised, non-relational garbage, and ONLY to understand his alleged problem (at that point, no understanding of the proposal, no intention of vaporising it), we simply put the data in a Relational context. And the problem disappears. Boom, whooshka, gone, what problem.

I have detailed all that re the Köhler paper, in a post '"Hard" Key Determination Method is Easy. DNF Paper is Done' on 9 Feb in this thread, please read. Here is a one page summary: http://www.softwaregems.com.au/Documents/Article/Normalisation/DNF%20Data%20Model%20B.pdf

In that case there were no FDs or non-FD-fragments to worry about, it was a simple arrangement of data. That is, there are no attributes, the entire data set is keys only, I never had to check an FD, and of course I did not reference his non-FD-fragments. Granted, in his paper, he non-FD-fragmented himself to death over the non-relational data. Imagine, I got meaning out of the column names that he had been contriving (nothing wrong with that particular act, when contemplating an example for a paper) and mulling over for years. The task took about 10 minutes, 30 mins to draw up the page.

The same thing happened when I examined Jan's DBPL paper. Boom, whooshka, gone.

Anyway, the point is, the Codd and Derek method works like a crucifix at black mass, instantaneous, all creatures and their howling vaporised upon entry. The Date & Zaniolo method works like a black mass without a crucifix, an orgy of creatures, howling without end.

Pssst. Wanna buy a crucifix ? Blessed and everything. Never used. For you, sir, special price.

So that paper, or his PhD thesis, is a non-problem, a favourite of the non-FD-fragmenters, totally without merit in the Relational context. Vaporised by the FD-ers. Score so far is about six /matches/ on our side, exactly zero /runs/ on your side.

But I will hold your context for the rest of the post ...

> > Given a set of FDs, the script finds all the
> > keys and all the minimal covers.... Then, I had a graduating student
> > re-implement it in Java and add a dependency-preserving
> > decomposition in BCNF (when it exists) or in 3NF to the output.
>
> My first reaction is a little unkind. I think this is what lawyers call
> "assuming facts not in evidence". *Given* a set of FDs, the program
> generated a 3NF database design. Hurray! Now, where to find those
> pesky FDs for input?

You mean the meaning ? That you stripped out at the outset ? It is still there, go back to the source docs.

Nah. Any tool that is based on such a stupid algorithm, is unworthy of being written.

> On second thought, though, it's cause for optimism. If a Ruby script
> and a grad student can generate a correct design, then it is a
> tractable problem. What remains is a convenient syntax for capturing
> the "pesky FDs", something that is the purview of academia.

Any problem is tractable. That is not the main consideration. Whether it is worth it is the main consideration. Also, whether there is a better algorithm, before you spend a penny. Having just one algorithm that is tractable is not an economical position to be in.

Especially if it has never determined a key.

> > Do get a trial copy of ERwin, and look into it.
>
> The last time I used ERwin in a serious way was in the late 90's. It's
> what printed out the size E paper charts. We used the "team" version
> that kept the diagrams in a database, and relied heavily on the version
> management and reconciliation function. I also reverse engineered
> their diagram database and posted that on the wall, to help us fix
> anomalies that crept in from time to time. I remember the "role" FK
> dialog could rename a column (or associate two names, if you prefer).
>
> I wrote macros to generate triggers to enforce relationships that
> couldn't be declared, such as sub/supertype relationships that required
> both parts of a two-part entity, where the first part was fixed and the
> second part was in one of N mutually exclusive tables. (For example,
> every security has a name and type, but bonds have coupons, equities
> have common/preferred, and options have an underlying). Note that both
> parts *must* exist: no security is described only by its name and type,
> and every security (let us say) does have a name and type. ISTR you
> said such relationships don't exist, but I think you must have meant you
> never came across one.
>
> ERwin is a good tool, the best I ever saw. (We also used the CAST
> workbench for stored procedure development, quite a boon.) I have
> sometimes wished for a better version of their macro language
> independent of the tool. It would be nice to define a
> relationship-rule in a symbolic library, and be able to apply it to a
> given set of tables.

You must have been using a very old, unmaintained version. By 1995, it had all that and more. The macro language is much more powerful. But I never use triggers, and I have never had the need for "relationship-rules". All my rules are declared constraints, only, resident in the db, only. One does have to make decisions re what elements to deploy, and at what layer.
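For instance, the exclusive subtype case you describe can be declared without a single trigger. A minimal sketch, with hypothetical names (Security, Bond, and the 'B'/'E'/'O' discriminator values are illustrative only, not from any of my models):

    CREATE TABLE Security (
        SecurityCode  CHAR(12)     NOT NULL,
        SecurityType  CHAR(1)      NOT NULL
            CHECK ( SecurityType IN ( 'B', 'E', 'O' ) ),  -- discriminator
        Name          VARCHAR(30)  NOT NULL,
        CONSTRAINT Security_PK PRIMARY KEY ( SecurityCode ),
        CONSTRAINT Security_AK UNIQUE ( SecurityCode, SecurityType )
    )

    CREATE TABLE Bond (
        SecurityCode  CHAR(12)      NOT NULL,
        SecurityType  CHAR(1)       NOT NULL
            CHECK ( SecurityType = 'B' ),  -- this subtype, and no other
        Coupon        DECIMAL(9,4)  NOT NULL,
        CONSTRAINT Bond_PK PRIMARY KEY ( SecurityCode ),
        CONSTRAINT Bond_Security_FK
            FOREIGN KEY ( SecurityCode, SecurityType )
            REFERENCES Security ( SecurityCode, SecurityType )
    )

The FK on ( SecurityCode, SecurityType ), with the CHECK pinning the type, guarantees a Bond row can only ever attach to a Security row of type 'B'. The other direction, that every Security must have its subtype row, is the part you wrote triggers for; declaratively that needs a CHECK calling a function that tests for the subtype row, on platforms that permit it, or enforcement in the Transaction standard.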

> ISTR you
> said such relationships don't exist, but I think you must have meant you
> never came across one.

I wouldn't have said that, because I have some very "complex" relationships, base/subtypes, multiple levels, multiple tables, etc.

> I wrote macros to generate triggers to enforce relationships that
> couldn't be declared, such as sub/supertype relationships that required
> both parts of a two-part entity, where the first part was fixed and the
> second part was in one of N mutually exclusive tables. (For example,
> every security has a name and type, but bonds have coupons, equities
> have common/preferred, and options have an underlying). Note that both
> parts *must* exist: no security is described only by its name and type,
> and every security (let us say) does have a name and type.

First let me say that I am a tiny bit of an expert in the trading space; I have been working almost exclusively for Aussie banks for over twenty years. I can't give the shop away, so this is limited to concepts and elements that are over ten years old, and only where the Funds Under Management is greater than 100 billion. Ie. ultra-legal and compliant with legislation. Institutional banking, massive portfolios, and no dime trades.

My InstrumentType is a genuine hierarchy, eleven levels deep, handling about 150 elements (your security types) at the leaf level. Yes, of course we track both legs, regardless of who owns each leg (your underlying), and for derivatives, we track the nominal as well as the real exposure, or risk. All AssetClasses: Eq, FI, Property, commodity, unit trust, currency, etc. And all exchanges in the Pacific, as well as the bigger exchanges in America and Europe. We trade before we get burned, and we hold until the last moment. When we dump a security onto the market, we disguise it, so that the market is minimally affected, the tricks are just too many.

I use base/subtypes freely, but not "everywhere", none in the definition of InstrumentType. All my rules are declared constraints, only, resident in the db, only. Pure ACID high-concurrency transactions (my OLTP standards, but I don't think that matters here). I have never had occasion to use a trigger, and I have ripped out and replaced thousands (I generate most code). But then, I have no circular references, all the tables are Normalised into Relational Hierarchies, etc.

So, after reading your para four times, and I really want to understand the problem, I still have no idea what the problem is. Would you please give me a better description or draw a picture, so that I can help you, or at least so that I can understand the problem and discuss it with vigour. This position of "no idea", in my ambit, is too stupid for me to hold for long.

> > > Are you prepared to say that's the last and best way? I'm not.
> >
> > I am. And it is worse than that. I am saying it is the only method
> > for implementers, for practical people, for humans.
>
> Everything that can be invented has been invented?

Ok, as long as you credit Codd, not me.

There ain't nothing new under the Sun.

> (Cf.
> http://patentlyo.com/patent/2011/01/tracing-the-quote-everything-that-can-be-invented-has-been-invented.html,
> seriously)

Tomorrow.

> At least on this point we're clear. You think ERwin is the best (kind
> of) tool that can exist for the purpose.

Whoa. That sounds like you just switched barrels.

I said the Codd 3NF/FD method was the one and only tool for determining Keys, and as part of the modelling/Normalising task. That your non-FD-fragment method has no merit outside the theoretical context, and that in any case it was severely limited because it isolated itself from two levels of meaning.

Then, quite separately, I said that modelling with diagrams (IDEF1X with a strict Relational context vs circles in Visio or rectangles in UML), which engages 100% of the brain, is way more advanced than modelling using algebraic relation notation which engages 4% of the brain.

And that ERwin happens to be the best for the second task.

The brain, and only the brain, remains the vehicle for the first task.

If you think that I meant ERwin can do both, definitely not.

> I can imagine better, but
> doubt the market will demand or produce it.

For sure. Your tool is good for 1% of the market, irrelevant to the 99%, who do not perceive data in terms of x's and y's. They have other tools.

> Specifically, the IDEF1X diagram you construct is a tautology. You say,
> here is a key, there is a foreign key, this is unique, that has
> such-and-such domain. And, great, you can generate the SQL for the
> DBMS. You are doing the designing, designating the FDs by way of those
> keys, and reasoning about all the dependencies and potential
> anomalies. It's right because you say it's right. The tool doesn't
> know beans about normal forms.

Agreed. Not quite so silly, but I will let you have your fun.

> It can't *check* anything.

That is incorrect, it checks about 40%, before I hit the "generate SQL" button. But that is a result of using standards, a few sexy macros, etc.

For the purpose of this post, ok, it can't check anything.

> All it can
> do is convert *your* design, expressed as a diagram, into SQL.

That is unfair. Especially unreasonable because you have experience with (a) the process and (b) employing ERwin for the process. You should know that it is a model, with hundreds of types of elements (not instances, which is dependent on the db); it is not a mere diagram of a model. We create the model, using the tool, and then in many iterations, keep modelling, until we have a db definition that is sound.

The SQL generation bit is tiny, yes.

And sure, I can do the entire job without ERwin, but it is much faster with it. So at the end of the day, it is a productivity tool that has modelling capabilities.

Sure, I can draw IDEF1X models in OmniGraffle, but it does even less checking and no change propagation, compared to ERwin.

> Sure,
> the SQL will be right; that's about as automatic and brain-dead a thing
> as can be imagined.

Agreed.

> Is it 3NF? In 2015, that's on you.

Yes.

What about correct ? What about Efficient, high performance, concurrency ? Totally on me ? Yes.

Same as if I wrote a contract using MS Word. Is it correct ? Legal ? Fair ? Totally on me.

Same as an IDEF1X model in OmniGraffle, or Visio. Is it 3NF ? Correct ? Efficient ? Totally on me.

So ?

> I spent many, many hours re-arranging my boxes and routing my
> connections,

I do that, only when ready to publish, once, for each db/app version release.

> and tediously moving names up and down to reflect what's
> the key and what's not (and what order I wanted the physical columns
> in).

I never do that.

> I worked my way through a few knotty 4-column keys and used such
> diagrams to explain contradictory nebulous understandings, wherein
> different departments had different ideas of what, say, a product is.

I do that on a whiteboard, once, draw it up in OmniGraffle, once, and publish and forget.

I would not dream of doing that in the model itself (ERwin). Fiddling with and changing keys is a serious matter, the changes have to be propagated down the line, and the whole line has to be checked again.

Oh, I forgot. You have an RFS, no Relational Keys, yes, of course, you do not have a propagation problem. But then you don't have a Relational Database for propagation to be a problem in. Ok, you can keep changing your "keys" and moving the columns around without the considerations that I have.

> I am not at all convinced that's the best way.

What, to model ? Nonsense, setting aside the differences between the way you and I use ERwin, there is no other way, and there hasn't been for thirty years. You have to go through many, many iterations, as you develop and improve the model. For us, who cut a new release of db+app every quarter, there is no other way, for an additional number of reasons.

But I suspect you mean something else.

> I don't deal in 200-table databases anymore. The databases I deal with
> nowadays have fewer tables and users, and lots more rows. I write the
> SQL directly

No wonder you hate it. I stopped doing that in 1993. I don't think I will ever forget that first IDE, DBArtisan.

> and rely on DRI for almost everything.

I presume you don't use triggers any more, or was that a different project ?

I use constraints for everything, and DRI is one form of constraint.
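To be concrete, a minimal sketch with hypothetical names (continuing the illustrative Security table above); DRI is just the FOREIGN KEY, sitting alongside the other declared constraints:

    CREATE TABLE Trade (
        TradeNo       INT           NOT NULL,
        SecurityCode  CHAR(12)      NOT NULL,
        Quantity      DECIMAL(18,4) NOT NULL,
        Price         DECIMAL(18,6) NOT NULL,
        CONSTRAINT Trade_PK
            PRIMARY KEY ( TradeNo ),                 -- Key constraint
        CONSTRAINT Trade_Security_FK
            FOREIGN KEY ( SecurityCode )
            REFERENCES Security ( SecurityCode ),    -- DRI
        CONSTRAINT Trade_Quantity_CK
            CHECK ( Quantity != 0 ),                 -- Rule constraint
        CONSTRAINT Trade_Price_CK
            CHECK ( Price > 0 )                      -- Rule constraint
    )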

> When I want a
> diagram for a database, I go the other way, and generate it with pic
> (cf. groff) from the SQL, plus some ancillary information for box
> placement and line drawing.

Ok, so you have no model, no iterations. The database is the "model".

> I suggest to you that pic is every bit as smart about diagrams as ERwin
> is about databases.

I wouldn't know, I stopped using troff when decent diagramming programs came out. Since about 2000, there is nothing out there that comes close to OmniGraffle. You have seen only my simplest drawings. Try this: http://www.softwaregems.com.au/Documents/Article/Sybase%20Architecture/Sybase%20ASE%20Architecture.pdf That is the free public version, about five years old, the current paid version is 40 pages, a fully cross-referenced (click an object and it takes you to the definition of it, etc) PDF.

> And both are equally smart about database design.

As long as you mean brain-dead, I agree.

> > Oh yeah, they are still working with text strings, 1's and 2's.
>
> You're convinced that your way -- designing via diagram --

See, I knew you switched barrels. To that point, my comments above apply.

First, it is not my way. I did not invent it, or implement it or provide the tools for it. It is Codd, Chen, Brown's way. LogicWorks provided the tool.

Second, the exercise is modelling, an iterative task, the object is a model, that increases in definition and quality, the drawing is a single rendition of the model, same as the SQL DDL produced is a single rendition of the model. The sales people will tell you it is a full-blown repository.

Third, the other barrel. The human brain only, for Normalisation; FD; Relationalisation; accuracy; efficiency. There is no alternative. That is not replaced by a diagram, or a diagramming method, or a model.

You are erecting a sly Straw Man. It only takes your proposed counter-argument down, it does not touch the real thing, the real world. There is no smoke over here, and all the flame is on your side. Make sure you have good-quality marshmallows, I hate the ones that fall off the stick.

> isn't just
> the best way, but the only way and the last way that ever will exist.

Counting the last thirty years and the present landscape, yes. Competition is non-existent.

I wouldn't bet on the future, but there are no contenders anywhere on the horizon, so I would say, at least for the next ten years, yes.

> I'm convinced that's not true.

Ok.

> I'm sure a better language than SQL
> could be invented that could be "compiled to" SQL and represented
> diagrammatically.

Why is that relevant ?

I'm sure a language better than awk could be invented, but I wouldn't give it up, or consider writing a replacement. When something substantially better than troff came along I switched.

SQL is not a language; I really don't understand the logic of faulting it for not being what it isn't.

If you like to write database commands in x's and y's and tick marks, go for it: write a pre-processor for SQL. Less than 1% will use it, but since you are obsessed with it, go for it.

The rest of us need the SQL, because that is what we have to look into when something in the hieroglyphics goes wrong, or has unintentional side-effects; that is what we have to work with when fiddling with the server. It cannot be displaced.

And for iterative modelling, a tool such as ERwin. The fact that it squirts SQL, and not TOTAL commands or COBOL WRITE commands, is irrelevant to this issue.

SQL is not the issue. Get over it. It is just the village that you get all the straw men from. Get an IDE.

> I don't see why it couldn't also capture meaning not
> in the SQL catalog, such as ownership, provenance, and definition.

Ownership, security, and definition are already there, at least in commercial SQLs.

Provenance is in the model, not in the catalogue, and it is transported to the catalogue in an indirect way, but it is there.

Oh, I forgot, you don't use hierarchies. Ok, no provenance for you.

Meaning. Well, just write the notes for the two levels at the beginning, just prior to excising it, and pour it back into the catalogue when your bones are ready.

I use ERwin, so my notes are stuck to the object, and go through all the transformations, all the iterations, and stick like glue, for eternity. When we do the SQL squirt bizo, I have a script (macro to you) that transfers all those notes to the SQL catalogue.

> Rather than defining tables per se, we'd define columns and their
> dependencies, and let a model-generator express them as tables in a
> schema known to be, say, 3NF.

(I don't design tables, I design a database in the full context of all its tables.)

That is a great idea.

But there are two problems. First, you don't have the 3NF/FDs, you have a tiny non-FD-fragment, so you are not going to get very far.

You have to get this right, and you are nowhere near it (pardon this, it is a clipping, I cannot give you the doc): http://www.softwaregems.com.au/Documents/Article/Normalisation/NF%20Note%20FD.pdf

You guys are messing with a non-FD-fragment, plus 17 NF fragments that in toto deliver about 15% of 3NF/FD, devoid of two levels of meaning. We are using a 3NF/FD definition with full context, retaining two levels of meaning. You guys can't even discern an FD from "MVD" properly. We have only FDs, since 1971, with two types of dependency, single and "multi-valued". To me, "MVDs/4NF" are a failure to make that discernment; it actually breaks 2NF, just like Jan broke 1NF without being conscious of it. It is also a failure to understand Atomicity of Facts. (Refer the Köhler and Hidders papers.) You are working with fragments of facts.

The diagram is from Course Notes, of course. It simply shows that the FD is dependent on the Key; without the key, there is not a THING for the FD to be functionally dependent on. And the two types of FDs, single and "MVD".
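Spelled out in one table, with hypothetical names, a sketch only, and Codd's two types of dependency marked in the comments:

    -- Key: ( CountryCode, StateCode )
    -- Single-valued FD:  ( CountryCode, StateCode ) -> Name
    -- "Multi-valued":    CountryCode ->> StateCode, ie. one Country
    --   determines a set of States; no separate "MVD" machinery, it is
    --   simply this table, keyed correctly.
    CREATE TABLE State (
        CountryCode CHAR(2)     NOT NULL
            REFERENCES Country ( CountryCode ),
        StateCode   CHAR(3)     NOT NULL,
        Name        VARCHAR(30) NOT NULL,
        CONSTRAINT State_PK
            PRIMARY KEY ( CountryCode, StateCode )
    )

Strip the Key out, and there is nothing left for Name to depend on; that is the point of the diagram.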

Until you get those very basic issues sorted, you have no chance at all, of getting columns-and-dependencies model to produce anything except more of the same, as you guys have produced in this thread. Zero. ø.

And then the issue of columns-and-dependencies minus keys, is absurd.

Second, where exactly, does that "known to be 3NF" come in, how is that provided ? Nothing short of a full AI system will give you that. You cannot substitute the human brain with an algorithm.

> There have been countless attempts in the last
> 30 years to express logic graphically. We might start with Logo, say,
> and include any number of CASE tools. (I don't suppose you remember
> Software Through Pictures?)

Loved it, but only as a novelty. SSADM (DFDs) has stayed with us for forty years.

> They. All. Fail. Visual Basic becomes
> Manual Basic when you're done drawing the dialog boxes.

I agree, with each of those statements, but not the context or intent.

> History is on my side.

Now the context and intent.

To the extent that you draw a history of the Straw Man, sure, that is awful, just awful. But it doesn't apply, so it doesn't matter.

Concerning real history, that applies, on our side: Get a grip. History is on our side, the 99%. ERwin is over 30 years old, the others in that category are a little younger. Millions of databases.

Concerning real history, that applies, on your side: there is zero history on your side.

> Meanwhile, we're living through an explosion of languages and are
> making progress with previously intractable problems such as
> correctness. If history is any guide, better database designs will
> come from better languages, not better diagrams.

Well, that is just a recap.

Better database designs come from one source and one source only: better educated humans.

They do not come from diagrams, so I don't know why you go on about that.

Humans will use tools during the iterative modelling process, so the tools have to stay, and your idea hasn't touched the iterative process.

Languages: I am all for it. Of course, it has to be something that the 99% can use. That excludes relation commands in an algebra.

You have articulated the pipe dreams of the one percent. It was never properly thought out or architected (same as Ingres/QUEL, same as Oracle). You have left a number of large and important areas unaddressed. The treatment is very superficial, and very optimistic.

Further, you have diminished the value of the tools that exist, on an inapplicable (Straw Man) basis. Such diminution has no effect.

The TTM groupies have produced nothing in twenty years.

Nothing has changed.


Thank you for your excellent post. I will get to the rest on the weekend.

Cheers
Derek
