Re: Throughput and storage requirements questions for modern vs old database processing apps

From: Jim Kennedy <kennedy-family_at_home.com>
Date: Sat, 24 Nov 2001 17:00:28 GMT
Message-ID: <MmQL7.68844$XJ4.38953129_at_news1.sttln1.wa.home.com>


In this case adding an index won't help unless they have access to the source code. In Clipper you have to specify which index to use when you look for data. (No, really, you do.)
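For example (a sketch with made-up file and field names, not the OP's actual schema), the difference in Clipper source looks like this:

   * no index opened: Clipper scans the DBF record by record
   USE Contract
   LOCATE FOR Contract->custno == "1234"

   * index named in the source: Clipper seeks through CUSTNO.NTX
   USE Contract INDEX CustNo
   SEEK "1234"

The point is that the second version only exists if somebody can edit and recompile the program; dropping an .NTX file on the server changes nothing by itself.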
I would be very careful about replacing the server with W2K. The program may stop working if you do: Clipper was written to specific APIs, and W2K may support those APIs differently. Since this is really a client application, your best bet is probably to see whether it is possible to upgrade to a 100 Mbit network. In any event, what you need to do is find the cause of the slowness.
Is the CPU working hard at peak times? (Probably not, but do a "load monitor" on the server to look at the statistics on that.) Are there a lot of packet collisions? (Again, the monitoring app on the server has those statistics.)
Is the disk drive I/O saturated? Listen to the drive, or look at the light: is it always on?
If you have some spare RAM, then putting more in the server might help a lot. NetWare was always good at file caching.
Jim
"David Cressey" <david_at_dcressey.com> wrote in message news:s_OL7.11$P_6.1473_at_petpeeve.ziplink.net...
> John Becich,
>
> I want to give you some very general commentary on old databases that run
> slow. This is by way of response to your question about whether database SW
> is the exception to the rule that says that older SW runs very fast. My
> comments are unavoidably vague, because there is too much about your
> specific situation that I don't know.
>
> Old databases sometimes run a lot slower than they did during the first few
> years of their operation. There are a lot of things that can bring about
> this slowdown, but here are two fairly general ones: mission creep and
> overpopulation.
>
> Mission creep is where the number of different ways the data is used
> expands over time. Mission creep in reading data sometimes results in
> queries that the original database was just not set up to handle. Those
> queries might run agonizingly slowly. Mission creep in writing data could
> result in new data structures that don't have the necessary indexes.
>
> Overpopulation just happens when the same old data is being accumulated,
> and is used in the same old way, but there is a lot more of it. Sometimes,
> the transaction data just builds up over time. With each added amount of
> data, processing is a little slower. Sometimes the slowdown is negligible
> until a certain point, after which it becomes dramatic. Sometimes the
> population buildup isn't transaction data, but reference data.
>
> One database had a table with about 15 "cost centers" in it. There was no
> index on this table. The program that added a new contract had to scan
> this table about 20 times in the course of validating various cost centers
> for the new contract. It was running 3 to 4 seconds slower than it should,
> but nobody even knew it. Then the company reorganized. Suddenly the cost
> center table went to almost 1000 entries. Equally suddenly, performance
> went into the toilet. It was taking about 15 *minutes* to put a new
> contract in.
>
> Find the problem, build the necessary index, and wham! All of a sudden the
> system was running "better than new".
>
> This was all on very different HW and SW than what you have described. But
> the principles are the same.
>
> So this may explain why your experiences with old SW and with old databases
> don't seem to agree.
>
> --
> Regards,
> David Cressey
> www.dcressey.com
>
>
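P.S. David's cost-center example was on different HW and SW, and this is only a sketch with hypothetical names, not his actual code. But going by his numbers, an unindexed scan makes the work per contract grow with the table: roughly 20 scans x 15 records before the reorganization, 20 x 1000 after. In Clipper terms the same fix would look something like this:

   * one-time fix: build an index on the cost center code
   USE CostCtr
   INDEX ON CostCtr->code TO CostCode

   * the validation routine then replaces the sequential scan ...
   USE CostCtr
   LOCATE FOR CostCtr->code == cCode    && reads every record

   * ... with an indexed seek
   USE CostCtr INDEX CostCode
   SEEK cCode                           && direct lookup
   IF FOUND()
      * cost center is valid
   ENDIF

Which is exactly why, in Clipper, the fix has to go through whoever holds the source.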
Received on Sat Nov 24 2001 - 18:00:28 CET
