Re: Client/Server query strategy

From: Chuck Hamilton <chuckh@ix.netcom.com>
Date: 13 Mar 1995 13:32:10 GMT
Message-ID: <3k1hgq$ld9@ixnews4.ix.netcom.com>


In <3k1gbs$p7j@ef2007.efhd.ford.com> wmeahan@ef0424.efhd.ford.com (Bill Meahan) writes:

>
>chuckh@ix.netcom.com (Chuck Hamilton) wrote:
>>Just out of curiosity, even in a heavy OLTP database, couldn't you
>>ensure that the detail and aggregate tables are always in sync via
>>triggers or stored procedures? (If this is a dumb question, remember
>>I'm an Oracle novice.)
>>
>> ><> Chuck Hamilton <><
>
>In principle, sure. HOWEVER, how long does it take to create the
>aggregate? Does doing the aggregation consume significant CPU
>resources? If the OLTP transaction rate is such that the (average)
>time between transactions is less than the time it takes to do the
>aggregation, you face the possibility of having the whole system
>brought to its knees by the constant triggering of the aggregation.
>Even if the transaction rate is fairly "slow" (whatever "slow" means
>in a given situation), you could still be out of sync for the time
>it takes to do the aggregation.

I'd just be sure to design my aggregate table in such a way that it
was simple and fast to update. If it were a summary table, just add
the new row's value to the appropriate aggregate column upon insert,
subtract it on delete, and do both on an update. It shouldn't take
that long, should it? For averages you'd have to keep a row count
somewhere in the aggregate table to ensure speedy recalculation.
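
Something like this is what I'm picturing (strictly a sketch, and
untested; the ORDERS detail table, the ORDER_TOTALS summary table,
and all the column names are made up for illustration):

    -- Hypothetical schema: ORDERS(cust_id, amount) is the detail
    -- table; ORDER_TOTALS(cust_id, total_amt, row_cnt) is the
    -- summary. Assumes the summary row for the customer already
    -- exists. row_cnt is kept so an average can be derived as
    -- total_amt / row_cnt without rescanning the detail table.
    CREATE OR REPLACE TRIGGER maintain_order_totals
    AFTER INSERT OR DELETE OR UPDATE OF amount ON orders
    FOR EACH ROW
    BEGIN
        IF INSERTING THEN
            -- Fold the new row into the running total and count.
            UPDATE order_totals
               SET total_amt = total_amt + :NEW.amount,
                   row_cnt   = row_cnt + 1
             WHERE cust_id = :NEW.cust_id;
        ELSIF DELETING THEN
            -- Back the deleted row out of the total and count.
            UPDATE order_totals
               SET total_amt = total_amt - :OLD.amount,
                   row_cnt   = row_cnt - 1
             WHERE cust_id = :OLD.cust_id;
        ELSE
            -- An update is subtract-old plus add-new; the row
            -- count doesn't change.
            UPDATE order_totals
               SET total_amt = total_amt - :OLD.amount + :NEW.amount
             WHERE cust_id = :OLD.cust_id;
        END IF;
    END;
    /

Each branch is a single indexed-row update, so the per-transaction
cost ought to stay small and roughly constant no matter how big the
detail table gets.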

Is it possible to keep the transaction open until the aggregate table
is updated too? (I'm not really familiar with how Oracle triggers
work. Like I said, when it comes to Oracle I'm a novice.)

        ><> Chuck Hamilton <><
