Re: Testing relational databases

From: Bob Badour <>
Date: Mon, 10 Jul 2006 13:15:14 GMT
Message-ID: <Cpssg.8640$>

Phlip wrote:

> Bob Badour wrote:

>>I read it. It made tremendous sense. Others read it and made sense of it 
>>too. The only people I have encountered who reached the same conclusion as 
>>you were demonstrably stupid. Sadly, I have to conclude the same of you.

> And everyone else in both these newsgroups considers Fabian a crackpot except
> you.

If by everyone, you mean all of the self-aggrandizing ignorants I have in my twit filter, you are probably right. However, if you mean anyone who is educated and intelligent, like Chris Date for instance, then you are wrong. Smart educated people recognize that Fabian is anything but a crackpot.

> Oh, if only we were as smart as you, to figure out what Fabian's point is
> inside all the ranting!

If only.

>>Doesn't it strike you as stupid to expend energy on tests that one could 
>>more productively expend on proofs?

> Those aren't real math proofs. They are softer, within the non-rigorous
> constraints of hardware and software. Hence, they are a lot like unit tests.
> Just harder to write.

You are wrong. They are real math proofs. Idiot.

> The most advanced proofs guys, on the Ada SPARK project, have admitted they
> frequently make small edits to their code, then pass all their tests - oops
> I mean proofs.

If you think Ada has anything to do with writing sound software, you are an even bigger idiot than I previously thought.

> Put another way, their design emerges within constraints, programmed first.
> That sounds familiar...

Hey, if they are stupid enough to use Ada given the very well-documented flaws in the language, what do you expect?

>>>The goal is code designed for testing, so that tests and clean code, 
>>>together, can inhibit bugs. So if you design BY testing, then you are in 
>>>the best position to possibly discover the remaining few.
>>That in no way causes the probability of correctness to approach unity 
>>with the certainty required to overcome P^N when N is large. For example, 
>>a probability of 90% correct for each of 100 units gives a probability of 
>>only 0.0027% correct for the resulting system. A probability of 99% 
>>correct for each of 100 units gives a probability of only 37% correct for 
>>the resulting system.

> Nobody needs to use tests to prove correctness.

That's the first thing you have said that even begins to approach anything intelligent. One needs proofs, not tests or unit tests.

> You are not responding to
> the claim that proof-first, and test-first, help the code resist bugs.
> There's a difference.

You are right. I am responding to idiocy that ignores an empirical observation published 37 years ago.

>>As the system grows larger, the certainty required approaches even closer 
>>to unity. For 1000 modules, even if one achieves a 99.9% certainty of 
>>correctness for each unit, the system has only a 37% chance of being correct.
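The claim that the required per-unit certainty approaches unity can be made concrete: under the same independence assumption, for the system to have even a 50% chance of being correct, each of N units needs reliability 0.5^(1/N). A hedged sketch (illustrative only):

```python
# 0.999**1000 reproduces the quoted figure for 1000 modules (about 37%).
system = 0.999 ** 1000

# Per-unit certainty needed for a 50-50 chance that all N units are correct:
# solve p**N = 0.5 for p, giving p = 0.5 ** (1 / N).
for n in [100, 1000, 10000]:
    p_required = 0.5 ** (1 / n)
    print(f"N={n}: each unit must be correct with probability {p_required:.5f}")
# The required per-unit reliability climbs toward 1 as N grows.
```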

> I thought the same could be said of proofs.

A proof has a good chance of approaching 100% certainty and a much better chance than a unit test does.

> So it's a good thing neither group attempts to exhaustively prove
> correctness.

You are a moron.

>>Following Ambler's advice means expending considerable effort on tests to 
>>achieve an almost certainly buggy system.

> Uh, other people promote TDD besides Ambler.

Are you suggesting I think there is any shortage of self-aggrandizing ignorants and snake-oil salesmen eager to jump on that gravy train? You need to pay more attention.

> And teams who use it
> overwhelmingly report an order of magnitude fewer defects. So once again you
> link a non-sequitur to an imagined conclusion that is not observed in real
> projects.

So the resulting software is 0.027% correct instead of 0.0027% correct. Big fucking deal.


> Oh, pleeease please please!!
> ;-)

If you would stick to one email address, you would be gone already.
