Re: Quote from comp.object
Date: Thu, 01 Mar 2007 17:37:44 -0700
Message-ID: <m3fy8o8opz.fsf_at_garlic.com>
Sampo Syreeni <decoy_at_iki.fi> writes:
> In the end such an organization would probably be faster than an IMS
> database because every overhead that could be cut would have been,
> yet higher level operations like multitable joins which allow for
> cost amortization would have been properly declared in relational
> syntax, and fully exploited. Such savings are not possible under the
> interface offered by IMS, evenwhile they're the lifeblood of the RM.
as i've mentioned before ... the upshot of the exchange between the IMS group and the System/R group in the late 70s was basically that System/R drastically increased system overhead while significantly reducing manual/human maintenance effort.
The next human constraint/bottleneck appears to be the intellectual
effort related to "normalization". Some past studies have indicated
that this is significant enough that some large organizations were found
with six thousand different RDBMS deployments ... where over 90 percent
of the information was common. The evolution appears to have been
that a RDBMS (potentially because of the normalization constraints) is
relatively specific mission oriented (potentially a number of
different applications, but still focused on a specific business
mission). At some point, when adding a somewhat different mission, it
became simpler to take a subset of the original data and add just the
additional items for the new mission. Repeat this a number of times
over a decade or more ... and the organization finds itself with 6000
very similar but still different deployments.
There are still a number of significantly large business operations
which continue to find that they aren't able to justify the move from
IMS type infrastructures to RDBMS operation. For the most part the
value of the operation easily justifies both the hardware and people
costs ... and the aggregate data may be so large ... and access
patterns sparse enough ... that there is little probability that
significant amounts of the (RDBMS) index would already be cached; as a
result, several disk operations are needed to arrive at the desired
record. In some cases the issue may be that they have elapsed time
constraints (like overnight batch windows) where elapsed processing
time and the number of (serially ordered) disk I/Os represent a
significant consideration.
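a rough arithmetic sketch of the batch-window point (the numbers here are my own illustrative assumptions, not figures from any actual installation): if the index is too large and too sparsely accessed to stay cached, each lookup pays for walking the index levels on disk before reading the record, while a direct/hierarchical access path reaches the record in far fewer reads.

```python
# Illustrative back-of-the-envelope only; all numbers are assumptions.
records = 1_000_000     # records touched serially in the overnight run (assumed)
io_ms = 10              # ~10 ms per random disk I/O (assumed)
index_levels = 4        # uncached index levels read per lookup (assumed)

# RDBMS path: read each index level from disk, then the record itself.
rdbms_hours = records * (index_levels + 1) * io_ms / 1000 / 3600

# IMS-style path: hierarchical/direct pointers reach the record in ~2 reads.
ims_hours = records * 2 * io_ms / 1000 / 3600

print(f"RDBMS: {rdbms_hours:.1f} h, IMS-style: {ims_hours:.1f} h")
# -> RDBMS: 13.9 h, IMS-style: 5.6 h
```

with an (assumed) 8-hour overnight window, the serially ordered index-walk version doesn't fit while the direct-access version does ... which is the kind of elapsed-time arithmetic these shops run.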
Periodically there are statements that there may still be more
aggregate data in these types of repositories than aggregate data
existing in RDBMS repositories.
misc. past posts mentioning system/r
http://www.garlic.com/~lynn/subtopic.html#systemr
Received on Fri Mar 02 2007 - 01:37:44 CET