Oracle FAQ | Your Portal to the Oracle Knowledge Grid |
Re: INSTEAD OF INSERT trigger and SELECT
On Mon, 17 Mar 2003, gq437x_at_yahoo.de wrote:
> Galen Boyer <galenboyer_at_hotpop.com> writes:
>
>> On Sun, 16 Mar 2003, gq437x_at_yahoo.de wrote:
>>> Galen Boyer <galenboyer_at_hotpop.com> writes:
>>>> On Fri, 14 Mar 2003, gq437x_at_yahoo.de wrote:
I'm losing track of the issue. You are getting a duplicate index rejection, but you say the values aren't duplicated, the rows aren't actually being rejected, and the rows still show up in the table?
It looks to me like your analysis is incorrect. If you are getting duplicate index rejections, some index value is duplicated; Oracle guarantees that. Your problem is that you don't know where the duplication happens.
>>> from the natural key data on the client (kind of
>>> compression). This ID calculation is guaranteed to produce
>>> unique results.
>>
>> This looks like it might be the culprit. I would bet that two
>> different rows are "hashing" to the same key (especially since
>> the error is a PK violation?). Why not just use a sequence id
>> as the PK and then use a unique index on the natural key?
>> Then, check for unique index violations when inserting? (ie,
>> you rolled your own "hashed natural key")
But, a unique index on the natural-key columns would give you the required protection. The sequence would give you the small datatype for your relationships.
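A minimal sketch of that layout. The table, column, and sequence names here are hypothetical, since the actual schema was never posted:

```sql
-- Hypothetical schema: surrogate PK from a sequence, with the
-- client-supplied natural key protected by its own unique constraint.
CREATE SEQUENCE my_tab_seq;

CREATE TABLE my_tab (
  id       NUMBER PRIMARY KEY,       -- small surrogate key for relationships
  nat_key  VARCHAR2(200) NOT NULL,   -- the natural key data from the client
  data     VARCHAR2(4000),
  CONSTRAINT my_tab_nat_uk UNIQUE (nat_key)
);

INSERT INTO my_tab (id, nat_key, data)
VALUES (my_tab_seq.NEXTVAL, :nat_key, :data);
```

With this layout, a collision in the client-side hash can no longer masquerade as a PK violation; any duplicate would be reported against the natural-key constraint instead.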
>>> after the calculation has started. So, the duplicate
>>> detection has to be done by the database. That's where it
>>> belongs anyway, IMHO.
>>
>> To isolate this, you could add to the existing trigger so, on
>> insert, it inserts all inserted rows to a new unprotected
>> table and you could do some analysis on that table? I bet
>> you'll find duplicate PKs with different attributes on that
>> row.
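A sketch of that instrumentation, assuming hypothetical view and table names (the thread never shows the real INSTEAD OF trigger):

```sql
-- Unprotected side table: no PK, no unique index, so every
-- attempted insert is captured even when the real insert fails.
CREATE TABLE my_tab_log (
  id       NUMBER,
  nat_key  VARCHAR2(200),
  data     VARCHAR2(4000),
  logged   DATE DEFAULT SYSDATE
);

CREATE OR REPLACE TRIGGER my_view_ioi
INSTEAD OF INSERT ON my_view
FOR EACH ROW
BEGIN
  -- record every row the client attempted to insert
  INSERT INTO my_tab_log (id, nat_key, data)
  VALUES (:NEW.id, :NEW.nat_key, :NEW.data);

  -- then perform the real insert as before
  INSERT INTO my_tab (id, nat_key, data)
  VALUES (:NEW.id, :NEW.nat_key, :NEW.data);
END;
/
```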
What's a valid duplicate? I thought a duplicate was when the entire row was duplicated, and that this was to be rejected.
> I have checked the current content of that table and
> recalculated and compared the hash value. All pairs of ID+DATA
> are ok. The rejected row is even in the database.
>
> Writing them aside into a different table may be an option for
> documenting them, but the utility.debug.f do the same.
But, writing the exact contents as well as the exact inserted contents into a table without RI would clearly let you find the true duplicates.
A debugger shouldn't be used here. You need a final result set stored so you can do a series of analytic queries.
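Assuming every attempted insert were captured in a side table (call it my_tab_log; the name is hypothetical), a grouping query like this would surface IDs that arrived more than once and show whether their attributes differed between attempts:

```sql
-- Find IDs inserted more than once, and whether the key or
-- attribute data differed across the attempts.
SELECT id,
       COUNT(*)                AS attempts,
       COUNT(DISTINCT nat_key) AS distinct_keys,
       COUNT(DISTINCT data)    AS distinct_data
FROM   my_tab_log
GROUP  BY id
HAVING COUNT(*) > 1;
```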
>
> Different questions: 1) Is there any reason why a select on the
> same table as an insert within the same transaction could cause
> a problem?
Yes. You could select from a table that another process is currently writing to, whose changes were uncommitted during your read. When you go to insert your rows, that process could have committed rows that will now cause a unique rejection. (This is why a trigger into an unprotected table might yield some fruitful results.)
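If that race is the cause, one defensive pattern (a sketch only, with hypothetical table names, not necessarily what the original poster needs) is to trap the collision at insert time with the predefined DUP_VAL_ON_INDEX exception:

```sql
BEGIN
  INSERT INTO my_tab (id, nat_key, data)
  VALUES (:id, :nat_key, :data);
EXCEPTION
  WHEN DUP_VAL_ON_INDEX THEN
    -- another session committed this key between our read and our
    -- insert; record the collision instead of failing the whole batch
    INSERT INTO my_tab_log (id, nat_key, data)
    VALUES (:id, :nat_key, :data);
END;
/
```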
> 2) The day the problem started, I did some performance tests.
> I have changed the FREELISTS parameter for that table from 1 to
> 2. Can that cause a problem?
Not data integrity issues.
-- Galen Boyer
Received on Mon Mar 17 2003 - 12:03:49 CST