Re: normalization problems

From: BG <bill_at_microsoft.com>
Date: Tue, 26 Dec 2000 15:07:32 -0800
Message-ID: <3a492489_at_news1.ltinet.net>


"Karen Osmond" <kosmond_at_softhome.net> wrote in message news:3a44fac6.7134192_at_news.ntlworld.com... [snip]
> As for the integer keyfields, my client wants the keyfields to be in
> the form AAA001, so I've got to do that, sadly. It's a pretty small
> database anyway in terms of number of records, so hopefully the
> performance won't be very appreciably affected.

In all likelihood the DBMS is written in C, and the string compares will be done a byte at a time.

So what performance difference would you expect between a single instruction comparing two word registers, versus looping through, say, a 20-byte string? It (the DBMS) is probably not even smart enough to map the bytes onto words, and will instead loop through two 20-byte arrays.
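As a sketch of the two code paths being contrasted (the 6-character key format and function names here are illustrative, not anything from a real DBMS):

```c
#include <stddef.h>

/* Byte-at-a-time compare, the way a naive C DBMS would compare a
 * string key like "AAA001": load a byte from each side, compare,
 * repeat until a difference or the end of the key. */
static int key_cmp_bytes(const char *a, const char *b, size_t len)
{
    for (size_t i = 0; i < len; i++) {
        if (a[i] != b[i])
            return (unsigned char)a[i] - (unsigned char)b[i];
    }
    return 0;
}

/* Integer key: the whole comparison is essentially one machine
 * compare instruction, whatever the key's magnitude. */
static int key_cmp_int(int a, int b)
{
    return (a > b) - (a < b);
}
```

The loop body is what runs once per byte of the string key; the integer version has no loop at all.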

Granted, two load instructions plus one compare, versus 40 loads, 20 compares, and the load/increment/compare overhead for the loop index, is still a drop in the bucket compared with everything else that goes on in processing the query. But it does add up, and it can be a measurable difference. The longer the string, the worse it is for a key/index.
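That "40 loads, 20 compares" arithmetic can be made concrete with an instrumented compare that counts the byte operations it performs (the counter and the 20-byte key length are assumptions for illustration):

```c
#include <stddef.h>

/* Running tally of byte-level operations performed by the compare. */
static long byte_ops;

/* Same naive compare as before, but counting its work: two loads and
 * one compare per byte examined.  For two equal 20-byte keys that is
 * 40 loads (counted here) plus 20 compares, versus a single compare
 * instruction for an integer key. */
static int counted_cmp(const char *a, const char *b, size_t len)
{
    for (size_t i = 0; i < len; i++) {
        byte_ops += 2;              /* two byte loads... */
        if (a[i] != b[i])           /* ...and one compare per iteration */
            return (unsigned char)a[i] - (unsigned char)b[i];
    }
    return 0;
}
```

Comparing two equal 20-byte keys drives the counter to 40, which is the per-lookup cost the paragraph above is weighing against a single register compare.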

OTOH, you now have at hand an immediate change you can propose when they come back complaining that the system is too slow. Depending on the job mix you might pull a 5% improvement, or even more, on this alone.

It has nothing to do with the number of records (this is a DBMS, or at least an ISAM, right? Not a flat file?), and everything to do with the number of times you'll be using that field.
