Re: Dealing with locking in bulk update scenario

From: Gints Plivna <gints.plivna_at_gmail.com>
Date: Tue, 2 Nov 2010 13:51:23 +0200
Message-ID: <AANLkTimxevUKQoWc2KqnzMPk=V0VBV=fNYjzfvYUnG4D_at_mail.gmail.com>



Umm, not always. I know (just conceptually) of a banking app which, at the end of the day, performs various calculations. This is done in PL/SQL, making heavy use of collections for performance reasons. During that window, new transactions (for example, those coming in from the internet bank) are not allowed, i.e. they are postponed until the "closing of the day" mega-process ends. Obviously this involves some kind of locking on the application side, for example postponing incoming transactions until the "closing of the day" process signals that it has successfully finished its work.
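A minimal sketch of that kind of application-side gate (all table and column names below are my own illustration, not from the actual app): a one-row flag table that the end-of-day job sets, and that the transaction-intake code checks so new work is queued instead of posted:

    -- Hypothetical flag table; names are illustrative only.
    CREATE TABLE day_state (
      state VARCHAR2(7) NOT NULL CHECK (state IN ('OPEN','CLOSING'))
    );
    INSERT INTO day_state VALUES ('OPEN');

    CREATE TABLE pending_tx (tx_id NUMBER, amount NUMBER);
    CREATE TABLE posted_tx  (tx_id NUMBER, amount NUMBER);

    -- End-of-day job brackets its work with the flag.
    UPDATE day_state SET state = 'CLOSING';
    COMMIT;
    -- ... run the closing-of-the-day calculations here ...
    UPDATE day_state SET state = 'OPEN';
    COMMIT;

    -- Intake code: divert to the queue while the day is closing.
    DECLARE
      v_state day_state.state%TYPE;
    BEGIN
      SELECT state INTO v_state FROM day_state;
      IF v_state = 'CLOSING' THEN
        INSERT INTO pending_tx VALUES (42, 100);  -- postpone
      ELSE
        INSERT INTO posted_tx VALUES (42, 100);   -- post now
      END IF;
      COMMIT;
    END;
    /

When the batch finishes, a separate job would drain pending_tx into posted_tx.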

Gints Plivna
http://www.gplivna.eu

2010/11/2 Stephane Faroult <sfaroult_at_roughsea.com>:
> Ah the joys of memory caches!
> If I were you I would reconsider the "load 50,000 records in memory" part.
> For me it's a major design flaw in your system. What are the db block
> buffers in the SGA, if not a cache? Database management systems were
> introduced partly to solve locking problems. Either you rely on Oracle, or
> you reinvent the wheel. For instance you can keep both the new and the old
> values (hello memory consumption) and update only when the current value
> still matches what you know as the old one - but then that doesn't tell you
> what to do when it no longer matches, and so on.
> Alternatively, instead of computing the new value and setting it, e.g.
>
>     set col = computed_value
>
> you could compute a delta, and execute
>
>      set col = col + delta
>
> If col has been increased in between, it would not mess everything up. But
> once again the big mistake is the cache. Let me guess, Java programmers who
> just "persist" data in Oracle?
>
>
> Stephane Faroult
> RoughSea Ltd
> Konagora
> RoughSea Channel on Youtube
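
Sketching Stephane's two suggestions in SQL, for what it's worth (the account table, balance column and bind variable names are illustrative, not from the original thread):

    -- 1) Optimistic check: apply the computed value only if the row still
    --    holds the old value we read; zero rows updated means someone else
    --    changed it and we have to re-read and decide what to do.
    UPDATE account
       SET balance = :computed_value
     WHERE account_id = :id
       AND balance = :old_value;

    -- 2) Delta update: let the database do the arithmetic, so a concurrent
    --    change to balance is folded in instead of overwritten.
    UPDATE account
       SET balance = balance + :delta
     WHERE account_id = :id;

The delta form avoids the lost update for additive changes because the read and the write happen atomically inside the single UPDATE statement.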

--
http://www.freelists.org/webpage/oracle-l
Received on Tue Nov 02 2010 - 06:51:23 CDT
