Re: Why do programmers start counting from 0?
Date: Thu, 09 Dec 2004 10:44:16 +0100
On 8 Dec 2004 19:07:20 -0800, timothychung_at_gmail.com wrote:
>One day, I started wondering why we start counting from 0 and couldn't
>find a good search string to get much out from google.com. Can anyone
I can't give you an authoritative answer, but I do have a theory.
When computers were still slow and memory was expensive, starting at 0 made sense. If you have to store an array of 256 elements, you can still use just 1 byte for the index if you start at 0 (elements 0 through 255). If you start at 1, you only get 255 elements before having to use a second byte.
But I suspect that the speed of finding an element's location matters even more. Say we have an array of 40 elements, each element 4 bytes. Most (if not all) computers would store the array in 4*40=160 bytes of contiguous memory and save the base address somewhere. If the base address is 0x6B00, then the first element would be located at 0x6B00, the second at 0x6B04, etc. If numbering had started at 1, the formula to find an element's location would be

  (base address) + (element# - 1) * (length)

or equivalently

  (base address) + (element#) * (length) - (length)

But with numbering starting from 0, the formula simplifies to

  (base address) + (element#) * (length)
This saves the computer one instruction per lookup. And though the saved instruction is "only" a DEC, it does take time - noticeable time on the machines of that era. On array-intensive programs, all those saved cycles add up! It also saves a byte or two in the program's code (remember that some computers required program and data combined to fit in 64 KB of RAM - or even less).
-- 
(Remove _NO_ and _SPAM_ to get my e-mail address)