Throughput and storage-requirement questions for modern vs. old database processing apps

From: John Becich <jbecich_at_nospam.net>
Date: Sat, 24 Nov 2001 12:41:53 GMT
Message-ID: <lAML7.2306$Kc2.222985_at_newsread1.prod.itd.earthlink.net>



I'm a systems engineer who usually manages hardware and operating systems. I have a client who operates a rentals company on a single NetWare 4.11 server that hosts an old DOS-like database system. I have been told that the existing custom rentals-management software was written around 1994 in Clipper with Comix databases. The data lives in several .DBF files (up to 90 MB each) and .NTX files (what are these?).
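(I have since read that the .NTX files are Clipper's native index files; they can be rebuilt from the .DBF data, so only the .DBFs carry information that has to be migrated.) To size up the tables, a minimal sketch along these lines should work, assuming the standard xBase header layout; the file name is invented:

    import struct

    # Minimal peek at an xBase (.DBF) header.  A sketch assuming the
    # standard dBASE III+/Clipper layout; "RENTALS.DBF" is an invented name.
    def dbf_summary(path):
        with open(path, "rb") as f:
            header = f.read(12)
        version = header[0]                  # e.g. 0x03 = dBASE III, no memo
        yy, mm, dd = header[1:4]             # last update; year stored as offset from 1900
        nrecords, = struct.unpack_from("<I", header, 4)           # little-endian
        header_len, record_len = struct.unpack_from("<HH", header, 8)
        print(f"version byte : 0x{version:02X}")
        print(f"last update  : {1900 + yy}-{mm:02d}-{dd:02d}")
        print(f"records      : {nrecords}")
        print(f"record length: {record_len} bytes")
        print(f"data size    : ~{nrecords * record_len / 1e6:.1f} MB")

    dbf_summary("RENTALS.DBF")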

The system is running slowly. It has slowed gradually over the years. I am considering recommending a migration to a modern client/server system based on a Windows 2000 Server.

I realize that the bottleneck in this system might be hardware, and I intend to address that first. Indeed, there is room for improvement here: the server is new, but it has only a narrow Ultra SCSI drive subsystem, so I could improve throughput at the disk interface severalfold. I could also add RAM to the server's motherboard. I don't think the bottleneck is the server's processor.
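Rather than guess, I intend to measure the disk first. A crude sequential-read test along the lines of this sketch (the mapped-drive path is invented, and repeat runs can be inflated by the OS file cache) would show whether the narrow Ultra SCSI channel is anywhere near its nominal 20 MB/s ceiling:

    import time

    CHUNK = 1024 * 1024  # read in 1 MB chunks

    # Crude sequential-read throughput test.  A sketch: the path is
    # invented, and a second run may be inflated by the OS file cache.
    def read_throughput(path):
        total = 0
        start = time.time()
        with open(path, "rb") as f:
            while True:
                data = f.read(CHUNK)
                if not data:
                    break
                total += len(data)
        elapsed = time.time() - start
        print(f"{total / 1e6:.0f} MB in {elapsed:.2f} s "
              f"= {total / 1e6 / elapsed:.1f} MB/s")

    read_throughput("F:/RENTALS/CUSTOMER.DBF")  # invented mapped-drive path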

Nevertheless, I am seeking general guidelines as to which database processing system affords better throughput. It might be that the existing software is choking on a database that has grown beyond usefulness. On the other hand, I have witnessed many instances in which old software, because it is "leaner," runs much faster than new software, on the same hardware.

Why is client/server so popular? Does it run faster than the competition?
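My rough understanding of the usual answer, modeled with invented numbers below: under the shared-file model (Clipper reading .DBF/.NTX on a NetWare volume), each workstation walks the index itself, so index pages plus candidate records all cross the network; under client/server, the workstation ships a short query and only matching rows come back.

    # Toy model of the wire traffic per indexed lookup.  All numbers
    # are invented for illustration; this is not a benchmark.
    PAGE = 1024        # bytes per .NTX index page
    RECORD = 500       # bytes per rental record (a guess)
    BTREE_DEPTH = 4    # index levels the workstation must walk

    # File-server model: the workstation reads every index page plus
    # the record itself across the network.
    file_server_bytes = BTREE_DEPTH * PAGE + RECORD

    # Client/server model: only the query text and the result row cross.
    query = "SELECT * FROM rentals WHERE contract_no = 12345"
    client_server_bytes = len(query) + RECORD

    print(f"file-server  : ~{file_server_bytes} bytes per lookup")
    print(f"client/server: ~{client_server_bytes} bytes per lookup")

And that is the favorable, fully indexed case; a search the index cannot satisfy can drag an entire table across the wire under the shared-file model.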

How about modern software written in Visual FoxPro that is not client/server but requires a Windows client?

I run DOS software on modern computers occasionally, and of course I am delighted because it runs unbelievably fast. I generalize that old software on new hardware is a happy combination; we replace old software only when we tire of its lack of modern features. To use a distant analogy, a 1960 Chevy will get you around town, but if you live in Seattle you might like the interval wipers provided with a 2001 Chevy.

Is database management the great exception to my generalization that old DOS software runs extremely fast on modern hardware? That is, once the database grows large enough, is there something about a DOS program that even modern hardware cannot help? Does modern software offer a remedy, especially if the bottleneck appears to be the reading and writing of database information across the file server's hard drive interface?



Most of the workstations at the site are ancient, running DOS; call them "B." They connect to the server with the 16-bit Novell NetWare client over 10 Mbit Ethernet. One "A" workstation is a modern Windows 2000 computer with a direct 100 Mbit pipe to the server. Unfortunately, my customer has not yet performed the A/B speed comparison I have requested. It is something I cannot do myself, because it must be done while the rental company staff is processing customers under very busy conditions. The point is that the file server is just that - only a file server - while each workstation executes the rentals program in its own RAM. I don't know yet whether the antiquated workstations and slow Ethernet are at fault.
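In the meantime, the network leg alone suggests the A/B difference could be dramatic. A back-of-envelope (the 90 MB figure is one of the real .DBF sizes; the rest is idealized and ignores NCP protocol overhead):

    # Idealized wire time to drag one 90 MB .DBF across each link for a
    # full table scan; real protocol overhead makes both figures worse.
    DBF_MB = 90
    for name, mbit in [("B: 10 Mbit Ethernet ", 10),
                       ("A: 100 Mbit Ethernet", 100)]:
        seconds = DBF_MB * 8 / mbit
        print(f"{name}: ~{seconds:5.1f} s per full table scan")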

It is possible that the tiny RAM and the 64 KB segment barriers of the old DOS workstations are a problem. Modern workstations (i.e., clients) running 32-bit operating systems shouldn't be so limited. Does modern client/server software take advantage of this?



If a client/server system is implemented, I am required to preserve all the information stored over the years and migrate it into the new system. The existing hard drive requirement for the database (exclusive of the server's operating system) is under 0.5 GB. If I implement a modern client/server system, would we require much larger storage capacity for the same amount of information? Certainly, operating systems and modern Office applications require hard drive capacities at least 100 times those of 1994. Any comments on how the database might grow by the mere fact that it has been migrated from DBFs to a client/server system? (I will make additional allowance for growth over the next five years.)
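For my own planning I ran this back-of-envelope; every multiplier is my guess rather than a vendor figure, but it suggests the migrated database grows by a small single-digit factor (row and page overhead, indexes, logs), nowhere near the 100x seen with operating systems:

    # Back-of-envelope storage estimate for the migrated database.
    # Every multiplier below is my own assumption, not a vendor figure.
    raw_data_gb = 0.5        # today's .DBF payload
    row_page_overhead = 1.5  # RDBMS row headers, page slack, fill factor
    index_factor = 1.5       # indexes often rival the base table in size
    growth_5yr = 2.0         # assume the data doubles in five years
    log_headroom_gb = 1.0    # transaction log and backup working space

    estimate_gb = (raw_data_gb * row_page_overhead * index_factor
                   * growth_5yr + log_headroom_gb)
    print(f"rough ceiling: {estimate_gb:.2f} GB")  # ~3.25 GB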

Thanks