Re: Throughput and storage requirements questions for modern vs old database processing apps

From: John Becich <jbecich_at_nospam.net>
Date: Sun, 25 Nov 2001 02:31:20 GMT
Message-ID: <YJYL7.3876$Kc2.366800_at_newsread1.prod.itd.earthlink.net>


"Paul Linehan" <linehanp_at_tcd.ie> wrote in message news:3bffa6cb.8398576_at_news.tcd.ie...
>
>
> "John Becich" <jbecich_at_nospam.net> wrote:

> > The system is running slowly. It has slowed gradually over the years.
> > I am considering recommending a migration to a modern client/server
> > system based on a Windows 2000 Server.
>
>
> Also, the slowing may be due to the fact that as the system has grown
> over the years, the clients are fetching more and more data from the
> system and this may be clogging up your network - i.e. say at the
> beginning there were 1000 rental items starting with the code xyz, now
> there are 20000?
>
> Do you have access to the code? Can you see if this is the case?
The source code? No. I wouldn't know what to look for, even if I did. I suppose I could decipher a DBF file to see how many rental items were used 5 years ago. But then again, I could just ask the people that work there.
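
Actually, come to think of it, the record count sits right in the DBF header, so there wouldn't be much deciphering involved. Here's a rough Python sketch of my understanding of the dBase III / Clipper header layout (the filename is just a guess on my part):

    import struct

    # Bytes 4-11 of a dBase III / Clipper DBF header hold, little-endian:
    #   4-7    record count (unsigned 32-bit)
    #   8-9    header length in bytes (unsigned 16-bit)
    #   10-11  length of one record in bytes (unsigned 16-bit)
    with open("RENTALS.DBF", "rb") as f:    # hypothetical filename
        header = f.read(12)

    records, header_len, record_len = struct.unpack("<IHH", header[4:12])
    print("records: %d" % records)
    print("approx. data size: %.1f MB" % (records * record_len / 1e6))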

>
> Also, I'm pretty sure that there are programmes out there that can
> give you reports on network usage, so you could look into that also -
> if the network isn't under strain, then adding more capacity is a
> waste of time.
The customer is ready to accept new wiring and modern switching equipment. So the 10 Mb Ethernet network is on the way out, regardless.
>
>
> I suspect (though am not certain) that you are running into the limits
> of fileserver databases.
So what is the remedy? Eliminating records is not practical. I'm looking for a database methodology that can handle "large" databases. I expect client/server is the way to go.

>
>
> > I realize that the "bottleneck" in this system might be hardware, and I
> > intend to address that first. Indeed, there is room for improvement in
> > this area. The server is new, but it has only a narrow ultra SCSI hard
> > drive system. I could improve throughput at the hard drive interface by
> > several times. I could also increase the amount of RAM in the server's
> > motherboard. I don't think the bottleneck is related to the server's
> > processor.
>
>
> Can you see the CPU and RAM usage during busy times? If it's not
> running flat out, then more won't help. What about disk usage?
Good questions, all. Unfortunately, the peak usage occurs only rarely, and I have never been there at that time. The customer complains about the slowness under such peak conditions, and is willing to hire me to remedy it. Furthermore, the customer has never implemented some experiments I have petitioned them to perform, to give me a sense of where the bottlenecks are. I will be looking for an opportunity, therefore, to visit the site, and conduct several experiments to discover bottlenecks. That is yet to come. I don't have those answers yet...
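
One experiment I have in mind, for what it's worth, is simply timing a bulk read of the big table twice: once from a workstation across the network, once from a local copy of the file. If the numbers diverge wildly, that points at the wire rather than the drives. A rough Python sketch, with hypothetical paths:

    import time

    def time_read(path, block=64 * 1024):
        # Stream the whole file; return (megabytes read, seconds elapsed).
        total, start = 0, time.time()
        with open(path, "rb") as f:
            chunk = f.read(block)
            while chunk:
                total += len(chunk)
                chunk = f.read(block)
        return total / 1e6, time.time() - start

    # Hypothetical paths: the mapped Netware volume, then a local copy.
    # Run each on a cold cache, or the second pass just measures the
    # OS file cache instead of the disk or the network.
    for path in (r"F:\RENTAL\RENTALS.DBF", r"C:\TEMP\RENTALS.DBF"):
        mb, secs = time_read(path)
        print("%s: %.1f MB in %.1f s (%.1f MB/s)" % (path, mb, secs, mb / secs))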

> Again, if it's not under stress, a faster disk will just be sitting
> around waiting for requests.
I upgraded the disks and Host Bus Adapter last February, and there was a performance jump. But by now, it seems to be slower than ever. There are many variables in the soup, so I can't pin down one cause. I'm looking now for more of a "strategic" remedy than a "tactical" remedy.
> What OS is your server running? You
> mention a W2000 machine - could you make that the server and then use
> ctrl-alt-delete for the task manager?
Current server OS is Netware 4.11. We have W2K Pro on a recently purchased workstation, P-III 900 MHz. There is an outside chance I'll attempt to host the database on my own W2K Pro 1.1 GHz Athlon computer, which I would carry into the facility. Thus I could use it as a file server during an exercise in which I look for bottlenecks. Another respondent to my thread cautioned me against that, because the application in use might depend on features within the Netware server.
>
>
> > Nevertheless, I am seeking general guidelines as to which database
> > processing system affords better throughput. It might be that the
> > existing software is choking on a database that has grown beyond
> > usefulness.
>
>
> Which is what I suspect.
>
>
> > On the
> > other hand, I have witnessed many instances in which old software,
> > because it is "leaner," runs much faster than new software, on the
> > same hardware.
>
>
> Would this be old fileserver software with very big tables or just
> utilities or something that doesn't do a whole lot of data
> access/retrieval?
The latter. Precisely my point. I'm seeking guidance for the former.
>
>
> > Why is client/server so popular? Does it run faster than the
> > competition?
>
>
> Well AFAICS your fileserver dbf/paradox type apps run very well and
> quickly when the amount of data being thrown around is small.
>
> It may be that a fully fledged db solution may not be ideal for a
> "small" app, however, the advantage with client-server is
> *_scalability_* - it will grow better - and from what you're saying
> about 90MB tables, I think you've reached the limits of dbfs.

Are you making this comment about the "limits of dbfs" because of my testimony, or because of other experiences you've had? I don't want to put conclusions in your mind. I'm *asking*, not *telling*, what the problem is.
>
>
> > How about modern software, written in Visual Foxpro, that is not
> > client/server, but requires a Windows client?
>
>
> <Possible Controversy>
>
> Well (since proprietary systems aren't a problem for you), if I were
> you I'd look at Delphi (a complete OO programming language and
> development environment) rather than Visual FoxPro. It also runs under
> Linux, where it is called Kylix. I took a look at the FoxPro site
> and AFAICS it only does tables (like you have) or desktop MS SQL
> Server - with Delphi you can use any database, and you can also
> access DBase files and lots of other file types.
>
> </Possible Controversy>
>
>
> As to the database that you might choose, there's probably even more
> controversy there.
>
> There are those who swear by Oracle (aka The Beast) - it is enormous
> and can do virtually anything, but is expensive and requires an
> expensive dba to run it. There are those who say that MS SQL Server
> (not cheap either) with its ease of use and reasonable performance is
> your only man, but it only runs on Windows, not a problem for you,
> though the M$oft haters would prefer to store data on stone tablets
> rather than use M$oft products.
>
> I was just at a talk here in Dublin by Alan Cox (a Linux kernel
> maintainer), hence my Open Source evangelising!!! 8-)
>
>
> Then you have Sybase which has its fans, esp. in the "crank it up and
> forget it" world. Informix also has its fans - recently taken over by
> IBM AFAIK, so maybe the DB2 engine of IBM will get all of the
> attention from now on, with Informix being gradually discontinued?
> You could also look at Pervasive (ex Btrieve) - not sure on pricing
> here.
>
>
> Then you could look at the Open Source databases: MySQL (not free if
> you're using it in a commercial environment, but probably the best
> supported free DB, though it doesn't support transactions and stuff),
> Postgres (Unix only), SAP, or my own *_PERSONAL FAVOURITE_* Firebird
> from www.ibphoenix.com (this was previously Interbase (still
> available) from Borland), but the Firebird distro is the true Open
> Source version - Borland made a complete balls of releasing Interbase
> as open source and now the project is being taken on by the ibphoenix
> group.
>
>
> > I run DOS software on modern computers occasionally, and of course I am
> > delighted because it runs unbelievably fast. I generalize that old
> > software on new hardware is a happy combination. We replace old software
> > when we tire of its absence of modern features. To use a distant
> > analogy, a 1960 Chevy will get you around town, but if you live in
> > Seattle you might like the interval wipers provided with a 2001 Chevy.
>
>
> See my question about old *_database_* software.
>
> If the thing is running under *_exactly_* the same conditions, then
> it will run faster under better hardware (though there is a point of
> diminishing returns), but as I said, the size of the tables could be
> the reason for your problems.
Well, I am suspicious of those tables too. Would client/server handle such table sizes without difficulty? As client/server seems to be the successor to our type of database application, my intuition tells me it is the remedy.
>
> You could maybe test this hypothesis by copying the system onto
> another machine, deleting 90% of the records (obviously not essential
> ones...), and seeing if the speed improves.

If I could figure out how to delete records, I could use the *same* machine. There is plenty of empty hard drive available, and I know how to make the program run. I just don't know how to delete records. Don't worry, there's no way I would let it contaminate the real system.
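
For the record, if I ever do try your experiment, I gather a DBF is simple enough to subset without the application at all: copy the header, keep every tenth live record, and patch the record count. A hedged Python sketch of my understanding (filenames hypothetical, and I'd only ever run it on a copy):

    import struct

    SRC = "RENTALS.DBF"          # hypothetical names - work on a copy only!
    DST = "RENTALS_10PCT.DBF"

    with open(SRC, "rb") as f:
        data = f.read()

    records = struct.unpack("<I", data[4:8])[0]
    header_len, record_len = struct.unpack("<HH", data[8:12])

    kept = []
    for i in range(records):
        rec = data[header_len + i * record_len:
                   header_len + (i + 1) * record_len]
        # The first byte of a record is its deletion flag:
        # ' ' means live, '*' means already deleted.
        if i % 10 == 0 and rec[:1] == b" ":
            kept.append(rec)

    out = bytearray(data[:header_len])
    out[4:8] = struct.pack("<I", len(kept))   # patch the record count
    with open(DST, "wb") as f:
        f.write(bytes(out) + b"".join(kept) + b"\x1a")  # 0x1A = DBF EOF mark

The NTX indexes wouldn't match the shrunken table afterwards, of course, so presumably the application would have to rebuild them.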

>
>
> > Is database management the great exception to my generalization that
> > old DOS software runs extremely fast on modern hardware?
>
>
> I would think so when fileserver systems get as large as 90MB per
> table.
>
>
> > That is, if the database
> > gets so large, is there something about a DOS program that even modern
> > hardware cannot help? Does "modern software" offer a remedy, especially
> > if the bottleneck appears to be in the process of reading and writing
> > database information across the file server's hard drive interface?
>
>
> The "remedy" is a proper database.
Please clarify. Be reminded I'm a newbie with databases. Isn't a *database* just a file that the information is kept in? I would have thought the remedy is a proper database *and* application... Do you really mean "database" (thus table architecture?), or "application"?

>
>
> > Most of the "workstations" at the site are ancient, running a DOS
> > operating system. Call them "B." They have a Novell Netware 16-bit
> > client hook connecting to the server through 10 Mb Ethernet. One "A"
> > workstation is a modern Windows 2000 computer with a direct 100 Mb pipe
> > to the server. Unfortunately, my customer has not yet performed an A/B
> > speed comparison test, which I have requested.
>
>
> Surely you can get some anecdotal data - i.e. ask employees who have
> used both, they will have some idea what response times are &c.
>
>
> > This is something I cannot do because it must
> > be done when the rental company staff processes its customers under very
> > busy conditions.
>
>
> Stand over their shoulders at peak times?
I wish I could...I've never had the privilege, so far. I have always gone over there when the workload was expected to be light, so that they could tolerate having their system taken down.
>
>
> > The point is that the file server is just that - only a
> > file server, while each workstation executes the rentals program in its
> > own RAM. I don't know yet if the antiquated workstations and slow
> > Ethernet are at fault.
>
>
> Well, if it's slowing down on the same hardware, then that has nothing
> to do with the workstations' RAM or anything else - it's either network
> related or table related or the app is poorly written.
Well, it could be the workstations, because all the DOS workstations are identical. But I'm with you, as you state next...
>
> I suspect the tables first, then the app and then the network.
Exactly.
>
>
> > It is possible that the tiny RAM and 64KB barriers in the old DOS
> > workstations are a problem.
>
>
> But, you said it has slowed down over the years. Therefore, it's
> either the tables themselves or the app is poorly written and sending
> so much data that the RAM is swamped.
If the data has gotten larger over the years, then the DOS workstations might now be grinding under the strain, finally.
>
>
> > Modern workstations (i.e., clients) running
> > under 32-bit operating systems shouldn't be so limited. Does modern
> > client/server software take advantage of this?
>
>
> If the application is crappily written, you're going to have problems.
> The whole point about client/server *_PROGRAMMING_* is to try and have
> as little data as possible moving around the network.
Ah! Thank you for that jewel. The server is the "workhorse," of course.
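
If I now understand the distinction, the contrast looks roughly like this (illustrative Python, using sqlite3 merely as a stand-in engine and an imaginary rentals table): the fileserver style drags every row across the wire so the workstation can filter it, while the client/server style ships one small question and gets back only the answer.

    import sqlite3

    conn = sqlite3.connect(":memory:")   # stand-in for a real DB server
    conn.execute("CREATE TABLE rentals (item TEXT, due TEXT, returned INT)")
    conn.executemany("INSERT INTO rentals VALUES (?, ?, ?)",
                     [("ladder", "2001-10-30", 0),
                      ("mixer", "2001-12-15", 0),
                      ("trailer", "2001-10-20", 1)])
    today = "2001-11-25"

    # Fileserver style: fetch EVERY row, filter in the client's own RAM.
    all_rows = conn.execute("SELECT item, due, returned FROM rentals").fetchall()
    overdue = [item for item, due, ret in all_rows if due < today and not ret]

    # Client/server style: the server filters; only matches cross the wire.
    overdue2 = [row[0] for row in conn.execute(
        "SELECT item FROM rentals WHERE due < ? AND returned = 0", (today,))]

    print(overdue)    # ['ladder']
    print(overdue2)   # ['ladder']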
>
> The point of a proper *_DATABASE_* is to quickly service requests from
> the clients, however the fastest database in the world will be of no
> use if it can't send out its responses due to the fact that the
> network is jammed, hence the need for good programming practice.
>
>
> > If a client/server system is implemented, it is required that I save
> > all the information stored over the years, and migrate it into the new
> > system. The existing hard drive requirements for the database
> > (exclusive of the server's operating system) are under .5 GB.
>
>
> You've lost me here - you say above that you have several up to 90MB
> each - how many tables do you have exactly? How many are over 20MB?

Sorry. I'm shooting from the hip. It's hard for me to say right now, because I'm not at the site, and visit there only rarely. It's not geographically close; it's across a busy metropolis. I usually "visit" virtually, via PCAnywhere. I go there physically when I have a plan to implement, which is one reason I enjoy exchanging information with you. Still, to answer your question, in February, the largest DBF was about 76 MB. I'm guessing it's about 90 MB now, and there are several that are progressively smaller. The NTX files are small. They must be index files...(?) I visited there a few times since February, but didn't inspect the DBF files on those subsequent visits. Suffice it to say that there is at least one DBF that is large, and they're all getting larger.
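
By the way, next time I'm dialed in via PCAnywhere I should just snapshot the file sizes instead of guessing. A trivial Python sketch (the data directory is hypothetical); NTX is the Clipper index format, I believe, which would also hint at what the app was written in:

    import glob
    import os

    # Hypothetical path to the rental app's data directory.
    for pattern in ("*.DBF", "*.NTX"):
        for path in sorted(glob.glob(os.path.join(r"F:\RENTAL", pattern))):
            print("%-30s %8.1f MB" % (os.path.basename(path),
                                      os.path.getsize(path) / 1e6))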
>
>
> > If I implement a modern client/server
> > system, would we require a much larger storage capacity, for the same
> > amount of information?
>
>
> No - well, I tell a small lie - if you install Oracle, you will need
> approx. a GB for that alone and shitloads of RAM for the thing to
> work properly, but I suppose that you can spend a bit on the server
Yes.
> machine, likewise the MS SQL install is big as well (approx. 500MB).
> (As an aside, Interbase/Firebird is relatively tiny - a footprint of
> maybe 20MB - and costs nothing....)
>
> But as a rule, if you have 500 MB of data in your tables, you should
> have approx. 500 MB on your client/server DB - in fact, I know that
> Interbase compresses the data on the fly, so you may need even less
> space.
>
> However, all of this is really academic (it's impossible to get HDs
> less than 10GB these days) -
The existing hard drives are a mirrored pair of 9GB each.
> if you are worried about hardware, it's
> the clients that you should worry about. With 64 MB of RAM, you can
> run Windows 98 on your client and use Delphi or any other 32-bit
> development tool (AFAIK, you just can't get supported 16-bit tools
> any more - you could try Delphi 1 maybe, if you can get it, if you
> fancy running Windows 3.1).
I intend to deep-six the clients as soon as the customer says to...and such willingness is already apparent. The current clients are ancient, barely capable of running Windows 3.1.
>
>
> You could possibly run some Linux based solution also?
>
>
> > Certainly, operating systems and modern Office
> > applications require hard drive capacities that exceed those
> > implemented in 1994 by at least 100 times.
>
>
> True, unfortunately.
>
>
> > Any comments on how the database might grow by
> > the mere fact that it has been migrated from DBFs to a client/server
> > system?
>
>
> The data itself will not grow - your database software might be bigger
> (certainly is with Oracle), but I'm assuming that you're willing to
> throw a bit of money at your server? Like I said, it's impossible to
> get less than 10GB these days, so with your OS - say Windows 2000
> Server (I'm guessing here - 3GB?) - plus the biggest DB server, Oracle
> (1GB), that leaves you with 6GB for your data, which you say is 500MB,
> so I would suggest that your problem won't be too little space, it'll
> be finding things to do with the space you have left over!

You're very accurate in your suppositions here. Still, I am considering upgrading the hard drives and Adaptec Host Bus Adapter, to get a lot more speed there. Right now we are using the AHA2940, with narrow ultra SCSI. Ultra160 would be an improvement.
>
>
> > (I will make additional considerations for growth over the next five
> > years.)
>
>
> It's been running since 94 and has 500MB, so that's ~ 70 MB per
> annum, and you'll have 5.5 free gigs on your new server, so that's
> almost 80 years of worry-free computing - I don't think space will be
> a problem? 8-)
>
>
> I don't know why you're so concerned about space, the biggest cost
> *_by far_* here is going to be writing a new system - that's going to
> cost *_way_* more than even a reasonable server.
>
>
> If you want more info about this topic, post to the
> borland.delphi.sqlservers (or something like that) newsgroup - there
> are some very expert people there who are very helpful - BTW, don't
> crosspost there, they get very annoyed at that, though personally, I
> think two or three groups is OK, as long as it's on topic.
>
>
> I fully confess to being a Borland fan, which is partly why I
> recommend products made by them (Delphi) or related in some way
> (Firebird/Interbase). I notice that you also posted to a Microsoft
> group, so you'll probably get biased people there also, I acknowledge
> my biases.
>
>
>
> Paul...
I *really* appreciate your extensive response. You've given me a brief tour of a wide variety of applications...just what I like to read. However, I should clarify one point. I will not be writing the application. Instead, I am shopping for rental software that is already written. I have examined two so far. I always ask what the system requirements are, which leads me to discover whether the candidate is client/server. I inquire as to the language the app is written in. So your description of the available platforms is not wasted. It will make me a smarter shopper.

I think, ultimately, the rental software currently in place will be replaced by client/server software on a W2K Server...

John
