Re: Client/Server query strategy

From: NRaden <nraden_at_aol.com>
Date: 7 Mar 1995 02:14:31 -0500
Message-ID: <3jh14n$hdv_at_newsbf02.news.aol.com>


wmeahan_at_ef0424.efhd.ford.com (Bill Meahan) writes:

>If you have a large number of users, and have common
>queries that might result in a common result set over an
>extended time period (say the data only changes once a
>day or once a "shift") you might even want to consider
>"precalculating" the common result set immediately after
>the data has been updated and placing THAT where the
>users can get at it. Yes, I know you're not supposed to
>have calculated/derived data stored in the database but under
>many real-world circumstances, you have to do what you
>have to do to prevent having the system be unusably slow.

Where is the rule that says pre-calculated data in a database is a no-no? We do it all the time in the data warehouses we build. It is not only a good idea, it is usually the only practical way to get the performance needed.
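For what it's worth, the technique is simple enough to sketch. A minimal illustration using Python's sqlite3 (the table and column names are invented for the example, not taken from any real warehouse): rebuild a summary table once, right after the daily load, so the common query becomes a cheap lookup instead of a full scan.

```python
import sqlite3

# In-memory stand-in for the warehouse; 'sales' is a hypothetical fact table.
con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE sales (region TEXT, day TEXT, amount REAL);
    INSERT INTO sales VALUES
        ('EAST', '1995-03-06', 100.0),
        ('EAST', '1995-03-06', 250.0),
        ('WEST', '1995-03-06', 75.0);
""")

# "Precalculate" the common result set once, immediately after the load...
con.executescript("""
    DROP TABLE IF EXISTS sales_by_region;
    CREATE TABLE sales_by_region AS
        SELECT region, day, SUM(amount) AS total
        FROM sales
        GROUP BY region, day;
""")

# ...so the common query is now a simple scan of a tiny table.
rows = con.execute(
    "SELECT region, total FROM sales_by_region "
    "WHERE day = '1995-03-06' ORDER BY region"
).fetchall()
print(rows)  # [('EAST', 350.0), ('WEST', 75.0)]
```

The trade-off, of course, is that the aggregate is only as fresh as the last load, which is exactly why this works best when the data changes once a day or once a shift.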

Now here is the rub: how do you know what to calculate or aggregate? HP claims that their Intelligent Query will tell you. It apparently acts like a profiler, logging requests for aggregated data and, I am told, transparently creating the aggregates and intercepting queries, redirecting them to the newly formed tables. Does this really work? Anyone out there have any experience with it?
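I have no idea how HP actually implements this, but the general trick as described -- log requests for aggregated data, materialize the popular ones, and transparently redirect matching queries to the new table -- can be sketched. Everything below (the class, the threshold, the query strings) is invented for illustration only:

```python
from collections import Counter

# Hypothetical base query and its precomputed equivalent.
AGG_QUERY = "SELECT region, SUM(amount) FROM sales GROUP BY region"
AGG_TABLE_QUERY = "SELECT region, total FROM sales_by_region"

class AggregateRedirector:
    """Toy 'profiler': count aggregate queries; once one is popular
    enough, redirect it to a precomputed table."""

    def __init__(self, threshold=3):
        self.counts = Counter()
        self.threshold = threshold
        self.materialized = set()

    def rewrite(self, sql):
        self.counts[sql] += 1
        if self.counts[sql] >= self.threshold:
            # A real system would build the aggregate table here.
            self.materialized.add(sql)
        if sql in self.materialized and sql == AGG_QUERY:
            return AGG_TABLE_QUERY  # transparent redirection
        return sql

r = AggregateRedirector()
print(r.rewrite(AGG_QUERY))  # passed through: not popular yet
print(r.rewrite(AGG_QUERY))  # still passed through
print(r.rewrite(AGG_QUERY))  # third hit: redirected to the aggregate
```

The hard parts a real product has to solve -- recognizing that two differently written queries want the same aggregate, and keeping the aggregate consistent with the base data -- are exactly what I would want to see demonstrated before trusting it.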

Red Brick's TMU loader has an auto-aggregate feature that is quite handy. The multidimensional DBs (Essbase, for example) routinely use aggregated and calculated information.

Neil Raden
Decision Support/Data Warehouse Consulting

Envirometrics, Inc.                805.564.8672
133 E. De La Guerra St.            805.962.3895 (fax)
Santa Barbara, California 93101    E-Mail: NRaden_at_aol.com
Received on Tue Mar 07 1995 - 08:14:31 CET
