Re: Large set-based UPDATE
On Wed, 12 Feb 2003 04:02:13 +0000, Paul wrote:
> Hi all,
>
> I am reviewing an Oracle stored procedure (BULK_UPDATE) that makes a large
> set-based update to a large table (ACCOUNTS, 5 million rows).
<irony>
This sounds like dbms_enron.
</irony>
>
> The problem we are encountering with this is that, because the UPDATE
> statement applies changes to a large number of rows at a time to the
> ACCOUNTS table (around 150,000), a web layer that also allows updates to
> this table is being regularly locked out for an unacceptable length of
> time.
Some detail is missing. You're saying that a large bulky update co-exists with users making oltp-like changes to data via a web-based app?
In which case: how often does the bulk update happen? Why can't it be shunted off into the midnight hours so it doesn't upset your web users? What's the (logical) relationship between the bulky updates and the web-based ones? If I update a row via the web, will your bulky update do things to my row that I might regret?
>
> To summarise the code:
>
> An UPDATE statement is constructed within a stored procedure and executed
> using EXECUTE IMMEDIATE
>
> This UPDATE statement affects around 150,000 rows of a 5,000,000 row table
>
> I was thinking a possible solution to this is to break the 150,000 row set
> based update into 150,000 updates to single rows by processing them
> through a cursor. I would then COMMIT within the cursor every 1000 rows.
>
> Is this a good way to go?
>
No, it's a horrible way to go. You commit when it makes logical sense to do so. Anything else, and you risk logically corrupting your data. It will also probably be slower, because commits are not free lunches: they make Oracle do quite a lot of work. And you may find that users start getting ORA-1555s, because the interim commits make the undo your bulk update depends on eligible to be overwritten.
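For concreteness, here is a hypothetical sketch of the kind of loop being proposed; the driving predicate and the actual column change are invented, since the original UPDATE statement isn't shown:

    DECLARE
        v_done PLS_INTEGER := 0;
    BEGIN
        FOR r IN (SELECT rowid AS rid
                    FROM accounts
                   WHERE needs_update = 'Y')   -- stand-in for the real predicate
        LOOP
            UPDATE accounts
               SET balance = balance * 1.05    -- stand-in for the real change
             WHERE rowid = r.rid;

            v_done := v_done + 1;
            IF MOD(v_done, 1000) = 0 THEN
                COMMIT;   -- releases the locks, but also makes the undo the
                          -- driving cursor still needs overwriteable: the
                          -- classic fetch-across-commit recipe for ORA-1555
            END IF;
        END LOOP;
        COMMIT;
    END;
    /

Note too that if the loop dies halfway through, the already-committed rows leave the table in a state the original single UPDATE could never have produced.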
> Am I right in thinking that only 1000 rows at a time will be locked using
> this method?
Yes, but in principle don't go there. Commit when it makes logical sense to do so, not because you think it might speed things up (which it probably won't).
Classic example of a transaction: withdraw $1000 from Account A, deposit it into Account B. Money safely transferred? Commit. Cost? Both the row for Account A and the row for Account B are locked until the commit.
Compare with: withdraw $1000 from Account A, commit, deposit into Account B, commit. Benefit: only one row is locked at a time. Money safely transferred? No, not if there's an instance crash after the first commit, at which point I've just lost $1000.
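In SQL terms (assuming an ACCOUNTS table keyed on ACCOUNT_ID with a BALANCE column, purely for illustration):

    -- One transaction: both rows stay locked until the single commit,
    -- but the transfer is atomic.
    UPDATE accounts SET balance = balance - 1000 WHERE account_id = 'A';
    UPDATE accounts SET balance = balance + 1000 WHERE account_id = 'B';
    COMMIT;

    -- Commit in the middle: locks are held for less time, but a crash
    -- between the two commits leaves $1000 withdrawn and never deposited.
    UPDATE accounts SET balance = balance - 1000 WHERE account_id = 'A';
    COMMIT;
    UPDATE accounts SET balance = balance + 1000 WHERE account_id = 'B';
    COMMIT;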
Hence my original question: what can *safely* (and logically) be committed in your bulk update? And what relationship is there between what gets updated one way compared with what users are doing on the web?
Strikes me that the real issue here is batch processing and oltp stuff co-existing in the same database. The usual recommendation is to shunt the batch stuff off into quiet times (if you have any). Or to look at structural design issues to see if there are better ways of making them co-exist. But introducing commits which are not logically appropriate is not the way to go.
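If the quiet-hours route is viable, the bulk procedure can simply be queued with DBMS_JOB so it never runs during the web-facing day. A minimal sketch, assuming the procedure really is called BULK_UPDATE as in the original post and that job_queue_processes is set on the instance; the 00:30 schedule is made up:

    DECLARE
        v_job BINARY_INTEGER;
    BEGIN
        DBMS_JOB.SUBMIT(
            job       => v_job,
            what      => 'BULK_UPDATE;',
            next_date => TRUNC(SYSDATE) + 1 + 30/1440,   -- tomorrow at 00:30
            interval  => 'TRUNC(SYSDATE) + 1 + 30/1440'  -- and nightly thereafter
        );
        COMMIT;   -- job submissions only take effect once committed
    END;
    /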
Just my thoughts.
Regards
HJR
> Is this likely to be much slower given that I am now updating 1 row at a
> time through a cursor?
>
> All help is gratefully received.
>
> Paul.
Received on Wed Feb 12 2003 - 10:57:49 CST