Re: Specifying all biz rules in relational data

From: Kenneth Downs <firstinit.lastname_at_lastnameplusfam.net>
Date: Wed, 22 Sep 2004 11:11:10 -0400
Message-ID: <em4sic.mu2.ln_at_mercury.downsfam.net>


Laconic2 wrote:

>
> "Laconic2" <laconic2_at_comcast.net> wrote in message
> news:yNKdnZlKDaZodc3cRVn-qQ_at_comcast.com...

>> Yes but reading and writing 10 million rows in a data warehouse is a
>> piece of cake.

>
> I'm backing off from this one. It's not a piece of cake. It's more like
> an overnight batch job.
>
> You need a server that will process about 300 rows per second. That's a
> little over a million rows an hour.
> In about 10 hours, you'll be done with 10 million rows.
>
> 3,000 rows per second is definitely feasible, with a reasonable amount of
> hardware, a competent DBMS, exclusive access, and a well designed
> database.

When I had that job, there was a guy called Nils who would constantly brag about the speeds he achieved. Since his boasts were actually real, we created a unit of measure in his honor: the "Nillion". A Nillion is a measure of throughput; one Nillion is one million records per second, named for the man we believe will be the first to get there. Of course, you'll have to deal with FoxPro for DOS on OS/2 to do it, but what price perfection? :)

Therefore, we would say a reasonable throughput expectation would be 3000 microNillions.

P.S. 3000 microNillions is about what we accomplished, and that was about 4 years ago, for operations that look up into a secondary source. For pure massage operations like reformatting, where you do not look up into a secondary source, you can get 10 milliNillions.
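For the curious, the unit conversion and the batch-time arithmetic above are trivial to sketch. This is just an illustration; the function names are invented, and the figures plugged in are the ones quoted in this thread:

```python
# A tongue-in-cheek throughput unit: 1 Nillion = 1,000,000 records/second.
NILLION = 1_000_000  # records per second

def micro_nillions(rows_per_second: float) -> float:
    """Express a throughput in microNillions (1 microNillion = 1 row/sec)."""
    return rows_per_second / NILLION * 1_000_000

def batch_hours(total_rows: int, rows_per_second: float) -> float:
    """Rough wall-clock estimate for a batch job at a steady rate."""
    return total_rows / rows_per_second / 3600

print(micro_nillions(3_000))    # 3000.0 -- the lookup-heavy rate above
print(micro_nillions(10_000))   # 10000.0, i.e. 10 milliNillions
print(round(batch_hours(10_000_000, 300), 1))  # 9.3 hours: an overnight batch
```

At a little over a million rows an hour (about 300 rows per second), 10 million rows is indeed roughly a 10-hour job.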

-- 
Kenneth Downs
Use first initial plus last name at last name plus literal "fam.net" to
email me
Received on Wed Sep 22 2004 - 17:11:10 CEST