Re: Throughput and storage requirements questions for modern vs old database processing apps

From: Paul Linehan <plinehan_at_not.a.chance.ie>
Date: Sun, 25 Nov 2001 14:42:39 GMT
Message-ID: <3c010358.15039660_at_news1.eircom.net>


 "John Becich" <jbecich_at_nospam.net> wrote:

>"Paul Linehan" <linehanp_at_tcd.ie> wrote in message

>> Do you have access to the code? Can you see if this is the case?

>The source code? No. I wouldn't know what to look for, even if I did.
>I suppose I could decipher a DBF file to see how many rental items were used
>5 years ago. But then again, I could just ask the people that work there.

*_WHATEVER_* you do, when you get the new app, make *_SURE_* that you have the source code and that it is developed in a reasonably mainstream development environment - get your client to invest in one copy of the environment so that they can hire someone to "tweak" it if and when necessary.

>> Also, I'm pretty sure that there are programmes out there that can
>> give you reports on network usage, so you could look into that also -
>> if the network isn't under strain, then adding more capacity is a
>> waste of time.

>The customer is ready to accept new wiring and modern switching equipment.
>So the 10 Mb Ethernet network is on the way out, regardless.

Fair enough, but can you lay your hands on a network analysis programme? Can you ask for access to the server during the busy time (don't use PCAnywhere!!), bring up the Task Manager and just look at the numbers - or just ring them up and ask someone there who has (IQ > house-plant) to do this - it *_may_* be that your network is fine.

>> I suspect (though am not certain) that you are running into the limits
>> of fileserver databases.

>So what is the remedy? Eliminating records is not practical.
>I'm looking for a database methodology that can handle "large" databases. I
>expect client/server is the way to go.

As another poster said, it might not be appropriate to suggest a specific solution - suffice to say that any of the databases that I mentioned in my original post should do the trick - what's most important is that the application works well for your client.

Ask about things like "mission creep" and what about archiving stuff - i.e. can the app cope with data being added indefinitely - what about removing/archiving redundant/old data?
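The archiving question is worth pressing on, because the pattern itself is simple on any RDBMS - a toy sketch using Python's built-in sqlite3 standing in for a real server (the table and column names here are invented for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE rentals (id INTEGER PRIMARY KEY, rented_on TEXT)")
# An archive table with the same columns (WHERE 0 copies structure, no rows).
conn.execute("CREATE TABLE rentals_archive AS SELECT * FROM rentals WHERE 0")
conn.executemany("INSERT INTO rentals (rented_on) VALUES (?)",
                 [("1996-05-01",)] * 3 + [("2001-11-01",)] * 7)

# Copy anything older than the cutoff into the archive, then delete it.
cutoff = "2000-01-01"
conn.execute("INSERT INTO rentals_archive SELECT * FROM rentals WHERE rented_on < ?",
             (cutoff,))
conn.execute("DELETE FROM rentals WHERE rented_on < ?", (cutoff,))

live = conn.execute("SELECT COUNT(*) FROM rentals").fetchone()[0]
archived = conn.execute("SELECT COUNT(*) FROM rentals_archive").fetchone()[0]
print(live, archived)  # 7 3
```

The point being: if the app can't do something like this, the big tables only ever get bigger.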

>> Can you see the CPU and RAM usage during busy times? If it's not
>> running flat out, then more won't help. What about disk usage?

>Good questions, all. Unfortunately, the peak usage occurs only rarely, and
>I have never been there at that time.

Ctrl - Alt - Delete - Task Manager - you're not actually (AFAIK) imposing any overhead (or it's minimal) doing this, since the OS keeps these stats anyway.

Check disk usage, CPU usage and RAM usage. See if you can install some sort of network analysis programme and check whether the network is actually an issue - your client will not be very impressed if they install a new network and it does nothing.

> The customer complains about the
>slowness under such peak conditions, and is willing to hire me to remedy it.
>Furthermore, the customer has never implemented some experiments I have
>petitioned them to perform, to give me a sense of where the bottlenecks are.
>I will be looking for an opportunity, therefore, to visit the site, and
>conduct several experiments to discover bottlenecks. That is yet to come.
>I don't have those answers yet...

It is *_CRITICAL_* that you do this before changing anything. They wouldn't be too happy with a doctor who diagnosed over the phone - they can't expect you to do the same.

>> Again, if it's not under stress, a faster disk will just be sitting
>> around waiting for requests.

>I upgraded the disks and Host Bus Adapter last February, and there was a
>performance jump.

What are the stats on this? Do you actually have any? On what basis do you say that there was a jump?

> But by now, it seems to be slower than ever.

But realistically, you don't know - you really have to stand over their shoulders and *_time_* the processes - I know this is a pain, but you can't diagnose without a clear view of the symptoms.
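Even something as crude as a stopwatch in code gets you hard numbers to compare before and after. A minimal Python sketch (the approach works in any language - the "dummy workload" here just stands in for whatever operation the floor people complain about):

```python
import time

def timed(label, fn, *args):
    """Run fn, print how long it took - crude, but it gives you numbers."""
    start = time.perf_counter()
    result = fn(*args)
    elapsed = time.perf_counter() - start
    print(f"{label}: {elapsed:.3f}s")
    return result, elapsed

# Example: time a dummy workload standing in for "open the rentals screen".
_, secs = timed("dummy workload", sum, range(1_000_000))
```

Run the same timings at a quiet time and at the busy time and you have your first real symptom.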

> There are
>many variables in the soup, so I can't pin down one cause. I'm looking now
>for more of a "strategic" remedy than a "tactical" remedy.

Fair enough - though I don't think you should actually do anything *_until_* you have done at least the basics outlined above.

>>What OS is your server running? You
>> mention a W2000 machine - could you make that the server and then use
>> ctrl-alt-delete for the task manager?

>Current server OS is Netware 4.11. We have W2K Pro on a recently purchased
>workstation, P-III 900 MHz. There is an outside chance I'll attempt to host
>the database on my own W2K Pro 1.1 GHz Athlon computer, which I would carry
>into the facility. Thus I could use it as a file server during an exercise
>in which I look for bottlenecks. Another respondent to my thread cautioned
>me against that, for reasons that the application in use might be dependent
>upon features within the Netware server.

Unfortunately I know nothing about Netware - is it possible to run Task Manager or equivalent?

>> Would this be old fileserver software with very big tables or just
>> utilities or something that doesn't do a whole lot of data
>> access/retrieval?

>The latter. Precisely my point. I'm seeking guidance for the former.

I'm not sure of the theory of why tables are not so good at large sizes - post to the Borland groups - you might get an idea as to why!

As for databases, go with something mainstream - don't go near proprietary formats from a small company that nobody's ever heard of.

If you had the source code for this app, you could modify it if, in fact, it is the app that's causing the problem.

>> It may be that a fully fledged db solution may not be ideal for a
>> "small" app, however, the advantage with client-server is
>> *_scalability_* - it will grow better - and from what you're saying
>> about 90MB tables, I think you've reached the limits of dbfs.

>Are you making this comment about the "limits of dbfs" because of my
>testimony, or because of other experiences you've had. I don't want to put
>conclusions in your mind. I'm *asking*, not *telling* what the problem is.

Well, on the Borland newsgroups I have seen people writing about this issue - many people have problems similar to yours, and the limits of file server databases (table size being one of them) come up regularly. This doesn't happen with RDBMS server systems - it's not that they're unaffected by size, it's just that they're designed to grow better.

Maybe you can get more details about the theoretical limits of fileserver dbs there - if you do find a good link or post, let me know - email is plinehan__at__yahoo__dot__com.

>> If the thing is running under *_exactly_* the same conditions, then
>> it will run faster under better hardware (though there is a point of
>> diminishing returns), but as I said, the size of the tables could be
>> the reason for your problems.

>Well, I am suspicious of those tables too. Would client/server handle such
>table sizes without difficulty?

In a word, yes. 90 MB is not a problem for RDBMS systems, and I think it is getting large for fileserver dbs.

> As client/server seems to be the successor
>to our type of database application, my intuition tells me it is the remedy.

Client/server is only as good as the programming that's done behind the scenes - if you have an RDBMS system and the programmer does "select * from 90MB_Table" and this goes across the network, it will also grind to a halt.

This is why I'm so interested in seeing the stats from the task manager - it *_may_* be the app, but it could be simply a network bottleneck, or maybe just doubling the RAM on the server or workstations might solve the problem for a few years.
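Just to illustrate the "select *" point in miniature - a toy sketch using Python's built-in sqlite3 standing in for a real server (invented table and column names): the difference between dragging the whole table across and asking the server for only what you need:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE rentals (id INTEGER PRIMARY KEY, item TEXT, returned INTEGER)")
conn.executemany("INSERT INTO rentals (item, returned) VALUES (?, ?)",
                 [(f"item{i}", i % 2) for i in range(10000)])

# The "select *" approach - every row comes back to the client:
all_rows = conn.execute("SELECT * FROM rentals").fetchall()

# The client/server approach - the server does the filtering:
overdue = conn.execute("SELECT id, item FROM rentals WHERE returned = 0").fetchall()

print(len(all_rows), len(overdue))  # 10000 5000 - half the rows cross the wire
```

With a fileserver db, the filtering effectively happens on the client side anyway, so the whole table crosses the network either way - that's the scalability difference in a nutshell.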

>> You could maybe test this hypothesis by copying the system onto
>> another machine and delete 90% of the records (obviously not essential
>> ones...) and see does the speed improve.

>If I could figure out how to delete records, I could use the *same* machine.
>There is plenty of empty hard drive available, and I know how to make the
>program run. I just don't know how to delete records.
>Don't worry, there's no way I would let it contaminate the real system.

<plug>
I'm fairly sure that with Delphi (or any other mainstream programming language) you could easily peer directly at the tables in the system - it's the really big ones that you need to reduce - but get the other stats first.
</plug>
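On the "peering at the tables" point: the DBF header layout is well documented - the record count sits in bytes 4-7 as a little-endian 32-bit integer - so you don't even need Delphi to see how big each table is. A rough Python sketch (I'm synthesising a dummy header here, since obviously I don't have your files):

```python
import struct

def dbf_record_count(header: bytes) -> int:
    """The dBASE header stores the record count as a 32-bit
    little-endian integer at bytes 4-7."""
    return struct.unpack_from("<I", header, 4)[0]

# Dummy 12-byte header: version 0x03, last-update date 2001-11-25
# (year stored as offset from 1900), 90000 records, header length 97,
# record length 512.
fake = struct.pack("<B3BIHH", 0x03, 101, 11, 25, 90000, 97, 512)
print(dbf_record_count(fake))  # 90000
```

With a real file it would be `dbf_record_count(open(path, "rb").read(12))` - a quick way to see which tables are the really big ones before you start deleting anything.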

>> The "remedy" is a proper database.

>Please clarify. Be reminded I'm a newbie with databases. Isn't a
>*database* just a file that the information is kept in? I would have
>thought the remedy is proper databases and application...Do you really mean
>"database" (thus table architecture?), or "application"?

I'll rephrase - the remedy *_could be_* a proper database - see my points about the limits of file server based systems - I'll try and find some info about this...

A "proper" database is *_far_* more than "just a file that the information is kept in" - it is infinitely more powerful and flexible than that - see below.

>> Stand over their shoulders at peak times?

>I wish I could...I've never had the privilege, so far. I have always gone
>over there when the workload was expected to be light, so that they could
>tolerate having their system taken down.

You don't have to take the system down - just use the task manager on the server - explain *_forcefully_* to your client that unless you have some access to the problem, you will be unable to propose a solution that works - use the doctor analogy.

AFAIK a network analysis programme won't impose that much overhead either.

>> Well, if it's slowing down on the same hardware, then that has nothing
>> to do with the workstations RAM or anything else - it's either network
>> related or table related or the app is poorly written.

>Well, it could be the workstations, because all the DOS workstations are
>identical. But I'm with you, as you state next...

But it worked fine before, you said?

>> I suspect the tables first, then the app and then the network.

>Exactly.

But this is just guessing - try and get some stats from the system before doing anything.

>> But, you said it has slowed down over the years. Therefore, it's
>> either the tables themselves or the app is poorly written and sending
>> so much data that the RAM is swamped.

>If the data has gotten larger over the years, then the DOS workstations
>might now be grinding under the strain, finally.

This is true - the simple way to check is to throw more RAM into them - if they'll take it, that is! Do a check if you can - see if putting more RAM into the workstations makes a difference - or, if they can't take any more, try *_removing_* some from the DOS stations and see whether this has a significant effect.

>> If the application is crappily written, you're going to have problems.
>> The whole point about client/server *_PROGRAMMING_* is to try and have
>> as little data as possible moving around the network.

>Ah! Thank you for that jewel. The server is the "workhorse," of course.

The whole point is to make your application "thin client" - I know that this is a bit of a trendy buzzword, but it makes sense - you try and construct the application so that as little data as possible is moving around the network and that as much work as possible is done *_on the RDBMS server_*.

There are lots of things that you can get an RDBMS server to do that are impossible with fileserver databases without adding code to the client and bloating your app. Things like stored procedures, triggers and the like mean that all the hard work is done on the server and that the client will run on a crappy machine with 2 MB of RAM.

It is worth noting that attention has to be paid to the programming also - i.e. making sure no "select * from Humungous_Table" statements are passed to the server, firing huge amounts of data around the network.
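The same thin-client idea in miniature (again sqlite3 standing in for a real RDBMS server, with invented names): let the server do the arithmetic rather than hauling every row back and totting them up yourself:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE rentals (fee REAL)")
conn.executemany("INSERT INTO rentals VALUES (?)", [(2.50,)] * 1000)

# Fat-client style: drag every row over and total it up locally.
client_total = sum(fee for (fee,) in conn.execute("SELECT fee FROM rentals"))

# Thin-client style: one row crosses the "network".
(server_total,) = conn.execute("SELECT SUM(fee) FROM rentals").fetchone()

print(client_total, server_total)  # same answer, a fraction of the traffic
```

Same answer either way - the difference is whether 1000 rows or 1 row travels across the wire, and that's exactly what separates a well-written client/server app from a badly-written one.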

>> You've lost me here - you say above that you have several up to 90MB
>> each - how many tables do you have exactly? How many are over 20MB?

>Sorry. I'm shooting from the hip. It's hard for me to say right now,
>because I'm not at the site, and visit there only rarely. It's not
>geographically close; it's across a busy metropolis. I usually "visit"
>virtually, via PCAnywhere. I go there physically when I have a plan to
>implement, which is one reason I enjoy exchanging information with you.
>Still, to answer your question, in February, the largest DBF was about 76
>MB. I'm guessing it's about 90 MB now, and there are several that are
>progressively smaller. The NTX files are small. They must be index
>files...(?) I visited there a few times since February, but didn't inspect
>the DBF files on those subsequent visits. Suffice it to say that there is
>at least one DBF that is large, and they're all getting larger.

The only thing that I can say now is that you *_have_* to get the stats and find out what is causing the problem - if it's the app, then upgrading the network will do no good, and might make you look bad in the eyes of your client ("We spent XX dollars on what YOU advised and it's done nothing ... YAK YAK.....").

>> if you are worried about hardware, it's
>> the clients that you should worry about. With 64 MB of RAM, you can
>> run Windows 98 on your client, so if you use Delphi or other 32 bit
>> development tool (AFAIK, you just can't get supported 16 bit tools
>> around - you could try Delphi 1 maybe, if you can get it, if you fancy
>> running windows 3.1).

>I intend to deep-six the clients as soon as the customer says to...and such
>willingness is already apparent. The current clients are ancient, barely
>capable of running Windows 3.1.

However, you can spend all the money you like on fancy new clients and if it's the app, then all you'll get for your trouble is Free Cell for the floor people to play while they're waiting for the system to respond.

>> The data itself will not grow - your database software might be bigger
>> (certainly is with Oracle), but I'm assuming that you're willing to
>> throw a bit of money at your server? Like I said, it's impossible to
>> get less than 10GB these days, so with your OS (say Windows 2000
>> Server (I'm guessing here - 3GB?) + the biggest DB server - Oracle
>> (1GB), that leaves you with 7GB for your data, which you say is 500MB,
>> so I would suggest that your problem won't be too little space, it'll
>> be finding things to do with the space you have over!

>You're very accurate in your suppositions here. Still, I am considering
>upgrading the hard drives and Adaptec Host Bus Adapter, to get a lot more
>speed there. Right now we are using the AHA2940, with narrow ultra SCSI.
>Ultra160 would be an improvement.

That's great if the HD of the server is thrashing around like a mad thing at busy times - you might also consider disk controllers with their own RAM. However, see my point about finding out where the bottleneck is - the hard drive might be spending 90% of its time sitting there waiting for the application to return data.

>> I fully confess to being a Borland fan, which is partly why I
>> recommend products made by them (Delphi) or related in some way
>> (Firebird/Interbase). I notice that you also posted to a Microsoft
>> group, so you'll probably get biased people there also, I acknowledge
>> my biases.

>I *really* appreciate your extensive response. You've given me a brief
>tour of a wide variety of applications...just what I like to read. However,
>I should clarify one point. I will not be writing the application.
>Instead, I am shopping for rental software that is already written.

But, if it's not the app? Are they happy with the app as it stands?

> I have
>examined two so far. I always ask what the system requirements are, which
>leads me to discover whether the candidate is client/server. I inquire as
>to the language the app is written in. So your description of the
>available platforms is not wasted. It will make me a smarter shopper.

>I think, ultimately, the rental software currently in place will be replaced
>by client/server software on a W2K Server...

Fair enough - though don't forget that if it's not the app that's causing the problem, this will all be for nothing - if it ain't broke, don't fix it.

If you find that appropriate hardware can solve the problem, then why bother changing? If they want to add functionality, you could use a mainstream app (Delphi... 8-) ) to do reporting on the tables that are there (carefully).

Paul...

>John
Received on Sun Nov 25 2001 - 15:42:39 CET
