Re: SQL Humor

From: Stu <stuart.ainsworth_at_gmail.com>
Date: 18 Aug 2005 15:31:39 -0700
Message-ID: <1124404299.928909.272180_at_z14g2000cwz.googlegroups.com>


Let me try to be more clear: my objection to your original statement (paraphrased as "we've got lots of space; why should a couple of bytes matter?") is grounded NOT in practical matters, but in issues of design theory and discipline. I would agree that in terms of performance, you'd be hard pressed to find differences between char and varchar in 90% of the use cases you can come up with. Char is useful in ways besides performance (e.g., validation, though that too can be emulated by other means); however, I'll concede (as I did in my original post) that the distinction between char and varchar IN MOST CASES is moot in terms of performance.
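To make that concrete, here's a minimal sketch (assuming SQL Server syntax, since varchar(8000) suggests that's the platform; the table and constraint names are hypothetical). Note that char(2) doesn't reject short values outright, since it pads them with spaces, but it does make the fixed width part of the declaration; on varchar, the same rule has to be emulated with a CHECK constraint:

    CREATE TABLE StateCode (
        code char(2) NOT NULL PRIMARY KEY  -- fixed width is part of the declaration
    )

    -- the same rule on varchar must be emulated explicitly:
    CREATE TABLE StateCodeVar (
        code varchar(2) NOT NULL PRIMARY KEY,
        CONSTRAINT ck_code_len CHECK (LEN(code) = 2)  -- reject anything but exactly 2 chars
    )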

However, my point is this: if you disregard the differences here because resources are plentiful, why bother with any sort of datatype at all? If a varchar(50) will do the job of a char(10), what's 42 extra bytes? In the days of terabyte-sized RAID arrays, why not use a varchar(8000) for everything? Why use an integer if a varchar will do? Obviously, at some point there will be a performance impact; why not be disciplined enough to strike at the lowest point of that curve and use the least expensive resource that is the best fit for your data?
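For what it's worth, the "42 extra bytes" arithmetic follows from SQL Server's storage rules: char(10) always occupies 10 bytes, while varchar(50) occupies the actual string length plus 2 bytes of length overhead, i.e. up to 52 bytes in the worst case. A quick hypothetical illustration:

    DECLARE @fixed    char(10), @variable varchar(50)
    SET @fixed    = 'ABC'
    SET @variable = 'ABC'
    -- char is space-padded to its full declared width;
    -- varchar stores only what you actually put in
    SELECT DATALENGTH(@fixed)    AS fixed_bytes,    -- 10
           DATALENGTH(@variable) AS variable_bytes  -- 3 (plus 2 bytes of length overhead on disk)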
