Re: Throughput and storage requirements questions for modern vs old database processing apps

From: Paul Linehan <linehanp_at_tcd.ie>
Date: Sat, 24 Nov 2001 16:07:34 GMT
Message-ID: <3bffa6cb.8398576_at_news.tcd.ie>


"John Becich" <jbecich_at_nospam.net> wrote:

> I'm a systems engineer who usually manages hardware and operating systems.
> I have a client who operates a rentals company, using a single Netware 4.11
> server that hosts an old DOS-like database system. I have been told that
> the existing custom-designed rentals-management software was written using
> Clipper with Comix databases, in about 1994. There are several .DBF (at up
> to 90 MB each) and .NTX (what are these?) files in the existing system.

I'm no expert, but I *_think_* that 90 MB is rather large for dbf files - AFAIK they simply weren't designed for that sort of size. The .NTX files are the Clipper indexes on the tables.

> The system is running slowly. It has slowed gradually over the years. I am
> considering recommending a migration to a modern client/server system based
> on a Windows 2000 Server.

Also, the slowing may be due to the fact that as the system has grown over the years, the clients are fetching more and more data from it, and this may be clogging up your network - i.e. say at the beginning there were 1,000 rental items starting with the code xyz, and now there are 20,000?

Do you have access to the code? Can you see if this is the case?

Also, I'm pretty sure that there are programs out there that can give you reports on network usage, so you could look into that also - if the network isn't under strain, then adding more capacity is a waste of time.
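
If you can get a machine with Python onto the segment, a rough sketch like this would do (psutil is a third-party add-on, and the interval and duration are just my choices, nothing from your setup):

import time
import psutil  # third-party: pip install psutil

# Print bytes sent/received every 5 seconds for a minute, so you can
# see whether traffic spikes when the rentals staff are busiest.
prev = psutil.net_io_counters()
for _ in range(12):
    time.sleep(5)
    cur = psutil.net_io_counters()
    print(f"sent {(cur.bytes_sent - prev.bytes_sent) / 1024:.0f} KB, "
          f"received {(cur.bytes_recv - prev.bytes_recv) / 1024:.0f} KB")
    prev = cur

A shared 10 Mb segment tops out at roughly 1 MB per second in real life, i.e. about 5 MB per 5-second sample here - if you're anywhere near that at peak time, the wire itself is the wall.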

I suspect (though am not certain) that you are running into the limits of fileserver databases.  

> I realize that the "bottleneck" in this system might be hardware, and I
> intend to address that first. Indeed, there is room for improvement in this
> area. The server is new, but it has only a narrow ultra SCSI hard drive
> system. I could improve throughput at the hard drive interface by several
> times. I could also increase the amount of RAM in the server's motherboard.
> I don't think the bottleneck is related to the server's processor.

Can you see the CPU and RAM usage during busy times? If it's not running flat out, then more won't help. What about disk usage? Again, if it's not under stress, a faster disk will just be sitting around waiting for requests. What OS is your server running? You mention a W2000 machine - could you make that the server and then use Ctrl-Alt-Del to bring up Task Manager?
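
Failing Task Manager, the same sort of Python/psutil sketch gives you the server-side numbers (again assuming you can get Python onto the box being measured - the sampling interval is mine):

import time
import psutil  # third-party: pip install psutil

prev = psutil.disk_io_counters()
for _ in range(12):                        # one minute of 5-second samples
    time.sleep(5)
    cpu = psutil.cpu_percent()             # CPU % since the previous call
    ram = psutil.virtual_memory().percent  # RAM in use, as a percentage
    cur = psutil.disk_io_counters()
    print(f"cpu {cpu:.0f}%  ram {ram:.0f}%  "
          f"disk read {(cur.read_bytes - prev.read_bytes) // 1024} KB  "
          f"written {(cur.write_bytes - prev.write_bytes) // 1024} KB")
    prev = cur

If CPU and RAM sit low while the disk reads are huge, that points at the tables rather than the hardware.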

> Nevertheless, I am seeking general guidelines as to which database
> processing system affords better throughput. It might be that the existing
> software is choking on a database that has grown beyond usefulness.

Which is what I suspect.

> On the
> other hand, I have witnessed many instances in which old software, because
> it is "leaner," runs much faster than new software, on the same hardware.

Would this be old fileserver software with very big tables or just utilities or something that doesn't do a whole lot of data access/retrieval?  

> Why is client/server so popular? Does it run faster than the competition?

Well AFAICS your fileserver dbf/paradox type apps run very well and quickly when the amount of data being thrown around is small.

It may be that a fully fledged db solution may not be ideal for a "small" app; however, the advantage with client/server is *_scalability_* - it will grow better - and from what you're saying about 90MB tables, I think you've reached the limits of dbfs.

> How about modern software, written in Visual Foxpro, that is not
> client/server, but requires a Windows client?

<Possible Controversy>

Well (since proprietary systems aren't a problem for you), if I were you I'd look at Delphi (a complete OO programming language and development environment) rather than Visual Foxpro. It also runs under Linux, where it's called Kylix. I took a look at the Foxpro site and AFAICS it only does tables (like you have) or desktop MS SQL Server - with Delphi you can use virtually any database, and you can also access DBase files and lots of other file types.

</Possible Controversy>

As to the database that you might choose, there's probably even more controversy there.

There are those who swear by Oracle (aka The Beast) - it is enormous and can do virtually anything, but is expensive and requires an expensive dba to run it. There are those who say that MS SQL Server (not cheap either) with its ease of use and reasonable performance is your only man, but it only runs on Windows - not a problem for you, though the M$oft haters would prefer to store data on stone tablets rather than use M$oft products.

I was just at a talk here in Dublin by Alan Cox (a Linux kernel maintainer), hence my Open Source evangelising!!! 8-)

Then you have Sybase which has its fans, esp. in the "crank it up and forget it" world. Informix also has its fans - recently taken over by IBM AFAIK, so maybe the DB2 engine of IBM will get all of the attention from now on, with Informix being gradually discontinued? You could also look at Pervasive (ex Btrieve) - not sure on pricing here.

Then you could look at the Open Source databases: MySQL (not free if you're using it in a commercial environment, but probably the best supported free DB - though it doesn't support transactions and the like), Postgres (Unix only), SAP DB, or my own *_PERSONAL FAVOURITE_*, Firebird, from www.ibphoenix.com. Firebird was previously Interbase (still available) from Borland, but the Firebird distro is the true Open Source version - Borland made a complete balls of releasing Interbase as open source, and the project has now been taken on by the ibphoenix group.

> I run DOS software on modern computers occasionally, and of course I am
> delighted because it runs unbelievably fast. I generalize that old software
> on new hardware is a happy combination. We replace old software when we
> tire of its absence of modern features. To use a distant analogy, a 1960
> Chevy will get you around town, but if you live in Seattle you might like
> the interval wipers provided with a 2001 Chevy.

See my question about old *_database_* software.

If the thing is running under *_exactly_* the same conditions, then it will run faster on better hardware (though there is a point of diminishing returns), but as I said, the size of the tables could be the reason for your problems.

You could maybe test this hypothesis by copying the system onto another machine, deleting 90% of the records (obviously not essential ones...) and seeing whether the speed improves.
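
A cheaper variation on the same test, if you can get at the .DBF files with Python: time a full sequential scan against a scan of only 10% of the records. This is only a sketch - dbfread is a modern third-party package, it's read-only (so nothing gets deleted), and RENTALS.DBF is a made-up file name:

import time
from itertools import islice
from dbfread import DBF  # third-party: pip install dbfread

def scan_seconds(path, fraction=1.0):
    table = DBF(path)                     # reads only the file header here
    limit = int(len(table) * fraction)    # record count comes from the header
    start = time.perf_counter()
    for _ in islice(iter(table), limit):  # sequential scan, like an unindexed filter
        pass
    return time.perf_counter() - start

print("full scan:", scan_seconds("RENTALS.DBF"))
print("10% scan :", scan_seconds("RENTALS.DBF", fraction=0.1))

If the 10% scan comes in close to ten times faster, it's the table size (not the hardware) that's your bottleneck.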

> Is database management the great exception to my generalization that old DOS
> software runs extremely fast on modern hardware?

I would think so when fileserver systems get as large as 90MB per table.

> That is, if the database
> gets so large, is there something about a DOS program that even modern
> hardware cannot help? Does "modern software" offer a remedy, especially if
> the bottleneck appears to be in the process of reading and writing database
> information across the file server's hard drive interface?

The "remedy" is a proper database.  

> Most of the "workstations" at the site are ancient, running a DOS operating
> system. Call them "B." They have a Novell Netware 16-bit client hook
> connecting to the server through 10 Mb Ethernet. One "A" workstation is a
> modern Windows 2000 computer with a direct 100 Mb pipe to the server.
> Unfortunately, my customer has not yet performed an A/B speed comparison
> test, which I have requested.

Surely you can get some anecdotal data - i.e. ask employees who have used both; they will have some idea what the response times are &c.

> This is something I cannot do because it must
> be done when the rental company staff processes its customers under very
> busy conditions.

Stand over their shoulders at peak times?

> The point is that the file server is just that - only a
> file server, while each workstation executes the rentals program in its own
> RAM. I don't know yet if the antiquated workstations and slow Ethernet are
> at fault.

Well, if it's slowing down on the same hardware, then that has nothing to do with the workstations' RAM or anything else - it's either network related or table related, or the app is poorly written.

I suspect the tables first, then the app and then the network.  

> It is possible that the tiny RAM and 64KB barriers in the old DOS
> workstations are a problem.

But, you said it has slowed down over the years. Therefore, it's either the tables themselves or the app is poorly written and sending so much data that the RAM is swamped.

> Modern workstations (i.e., clients) running
> under 32-bit operating systems shouldn't be so limited. Does modern
> client/server software take advantage of this?

If the application is crappily written, you're going to have problems. The whole point about client/server *_PROGRAMMING_* is to try and have as little data as possible moving around the network.

The point of a proper *_DATABASE_* is to quickly service requests from the clients; however, the fastest database in the world will be of no use if it can't send out its responses because the network is jammed - hence the need for good programming practice.
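
To make that concrete, here's a toy Python sketch of the difference, using the built-in sqlite3 module as a stand-in database (the rentals table and the xyz codes are invented, echoing my example above):

import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE rentals (code TEXT, item TEXT)")
conn.executemany("INSERT INTO rentals VALUES (?, ?)",
                 [(f"xyz{i}", f"item {i}") for i in range(20000)])

# Fileserver style: drag every row to the client, then filter locally.
all_rows = conn.execute("SELECT code, item FROM rentals").fetchall()
local = [r for r in all_rows if r[0].startswith("xyz1")]

# Client/server style: the engine filters, and only the hits travel.
remote = conn.execute(
    "SELECT code, item FROM rentals WHERE code LIKE 'xyz1%'").fetchall()

assert sorted(local) == sorted(remote)  # same answer, far less traffic

Both give the same answer, but the first ships all 20,000 rows across the wire and the second ships only the matches - which is, in spirit, the difference between your Clipper setup and a well-written client/server one.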

> If a client/server system is implemented, it is required that I save all the
> information stored over the years, and migrate it into the new system. The
> existing hard drive requirements for the database (exclusive of the server's
> operating system) are under .5 GB.

You've lost me here - you say above that you have several .DBF files at up to 90MB each - how many tables do you have exactly? How many are over 20MB?

> If I implement a modern client/server
> system, would we require a much larger storage capacity, for the same
> amount of information?

No - well, I tell a small lie - if you install Oracle, you will need approx. a GB for that alone and shitloads of RAM for the thing to work properly, but I suppose that you can spend a bit on the server machine; likewise the MS SQL install is big as well (approx. 500MB). (As an aside, Interbase/Firebird is relatively tiny - a footprint of maybe 20MB - and costs nothing....)

But as a rule, if you have 500 MB of data in your tables, you should need approx. 500 MB for your client/server DB - in fact, I know that Interbase compresses the data on the fly, so you may need even less space.

However, all of this is really academic (it's impossible to get HDs of less than 10GB these days) - if you are worried about hardware, it's the clients that you should worry about. With 64 MB of RAM you can run Windows 98 on your clients, and an app built with Delphi or any other 32-bit development tool will run fine there - AFAIK you just can't get supported 16-bit tools any more (you could maybe try Delphi 1, if you can find it and fancy running Windows 3.1).

You could possibly run some Linux based solution also?

> Certainly, operating systems and modern Office
> applications require hard drive capacities that exceed those implemented in
> 1994 by at least 100 times.

True, unfortunately.

> Any comments on how the database might grow by
> the mere fact that it has been migrated from DBFs to a client/server system?

The data itself will not grow - your database software might be bigger (certainly is with Oracle), but I'm assuming that you're willing to throw a bit of money at your server? Like I said, it's impossible to get less than 10GB these days, so with your OS - say Windows 2000 Server (I'm guessing here - 3GB?) - plus the biggest DB server, Oracle (1GB), that leaves you with 6GB for your data, which you say is 500MB. So I would suggest that your problem won't be too little space, it'll be finding things to do with the space you have left over!

> (I will make additional considerations for growth over the next five years.)
   

It's been running since '94 and has 500MB, so that's ~70 MB per annum, and you'll have ~5.5 free Gigs on your new server - so that's about 80 years of worry free computing. I don't think space will be a problem? 8-)
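
The back-of-envelope sum, in Python (the 3GB OS and 1GB Oracle figures are my guesses from above, not measurements):

# 10 GB disk, minus guessed OS and DB server installs, minus current data
free_mb = (10 - 3 - 1) * 1024 - 500          # ~5.5 GB left over
growth_mb_per_year = 500 / (2001 - 1994)     # ~70 MB per annum since '94
print(free_mb / growth_mb_per_year)          # ~80 years of headroom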

I don't know why you're so concerned about space - the biggest cost *_by far_* here is going to be writing a new system; that's going to cost *_way_* more than even a reasonable server.

If you want more info about this topic, post to the borland.delphi.sqlservers (or something like that) newsgroup - there are some very expert people there who are very helpful. BTW, don't crosspost there, they get very annoyed at that, though personally, I think two or three groups is OK, as long as it's on-topic.

I fully confess to being a Borland fan, which is partly why I recommend products made by them (Delphi) or related in some way (Firebird/Interbase). I notice that you also posted to a Microsoft group, so you'll probably get biased people there also - I acknowledge my biases.

Paul...

--
Paul Linehan

plinehan at yahoo dot com/linehanp at tcd dot ie

I drink to keep body and
soul apart - O. Wilde.

"Mens sana in campari soda" - anon.