Home -> Community -> Usenet -> c.d.o.server -> Implementation design advice needed

Implementation design advice needed

From: Jack Addington <jaddington_at_shaw.ca>
Date: Fri, 14 Oct 2005 16:39:56 GMT
Message-ID: <wbR3f.182423$oW2.47331@pd7tw1no>


I have a system that stores data in a proprietary format, with a number of external views and tables that transpose the data into client-specific formats. When data is updated according to the header table, I need to update the external tables. I am trying not to impact the performance of the original updates too much with all the client-specific transposing, but I do need the updates to propagate in essentially real time.

I built some logic that sets a timestamp variable in a BEFORE UPDATE statement trigger on the header table; then, in the AFTER UPDATE statement trigger, I call a stored procedure with that timestamp variable and CURRENT_TIMESTAMP. That allowed me to build one delete/insert statement per external table, using an IN clause to capture all the pkeys for that table that were updated in the window. That cut down on the number of individual deletes/inserts for each external table. The header updates are so varied that I can't estimate in advance how many rows will be affected.
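For reference, the timestamp-window pattern described above might look roughly like this. This is a minimal sketch, not my actual code: the table name `data_header`, the audit column `last_updated`, and the procedure `refresh_externals` are all placeholders.

```sql
-- Package variable survives between the BEFORE and AFTER statement triggers.
CREATE OR REPLACE PACKAGE hdr_sync AS
  g_window_start TIMESTAMP;
END hdr_sync;
/
CREATE OR REPLACE TRIGGER hdr_bus
BEFORE UPDATE ON data_header
BEGIN
  hdr_sync.g_window_start := SYSTIMESTAMP;  -- mark start of the update window
END;
/
CREATE OR REPLACE TRIGGER hdr_aus
AFTER UPDATE ON data_header
BEGIN
  -- The proc rebuilds only rows touched inside the window, e.g.
  --   DELETE FROM ext_tab WHERE pkey IN
  --     (SELECT pkey FROM data_header
  --       WHERE last_updated BETWEEN hdr_sync.g_window_start AND SYSTIMESTAMP);
  -- followed by the matching INSERT ... SELECT.
  refresh_externals(hdr_sync.g_window_start, SYSTIMESTAMP);
END;
/
```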

Bottom line: this method is lousy to debug, probably not too great when multiple users are working on related bits of data at the same time, and I'm thinking it's a bit too complicated. I can't use materialized views because the SELECT statements are far too complicated, and I can't do complete refreshes of the external tables either.

I was looking for some advice on the following thoughts:

  1. Spawning a delete/insert statement for each header update isn't that bad. Performance-wise the queries are quick, and I'm not sure I would save that much over grouping all the statements via an IN clause. It also keeps everything extremely simple and gives a real-time environment. I'm a touch leery of how this will scale up, though.
  2. What about calling DBMS_JOB for each header row update? That would add a tiny bit more overhead for the database but have less impact on the user. The data would still be updated in near real time.
  3. Change my BEFORE/AFTER UPDATE statement (BUS/AUS) trigger logic: use a global temporary table to store each header key in the row trigger, then process the temp table in the AUS trigger. This gives a slightly simpler structure that I can debug a bit more easily, although session-based tables can be a bit of a pain.
  4. Same as 3, but spawn a single DBMS_JOB to process the temp table of keys.
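To make options 3 and 4 concrete, here is a rough sketch of the global-temporary-table variant; all object names are illustrative, not my real schema.

```sql
-- Session-private scratch table of changed keys, emptied at commit.
CREATE GLOBAL TEMPORARY TABLE hdr_changed_keys (
  header_id NUMBER
) ON COMMIT DELETE ROWS;

-- Row trigger just records the key; no heavy work per row.
CREATE OR REPLACE TRIGGER hdr_aur
AFTER UPDATE ON data_header
FOR EACH ROW
BEGIN
  INSERT INTO hdr_changed_keys (header_id) VALUES (:NEW.header_id);
END;
/
-- Statement trigger processes the collected keys once.
CREATE OR REPLACE TRIGGER hdr_aus
AFTER UPDATE ON data_header
DECLARE
  l_job BINARY_INTEGER;
BEGIN
  -- Option 3: process hdr_changed_keys inline here.
  -- Option 4: hand the work to a background job instead, so the
  -- user's update returns immediately:
  DBMS_JOB.SUBMIT(l_job, 'process_changed_keys;');
END;
/
```

One wrinkle with option 4: DBMS_JOB runs in a separate session after commit, and global temporary table rows are session-private, so the job would not see `hdr_changed_keys` as written; option 4 would need a permanent staging table (or the keys passed in the job's `what` string) instead.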

Thanks for any input / advice.

Jack

Received on Fri Oct 14 2005 - 11:39:56 CDT

