Re: Massive Update commit every 1000 records
Date: Fri, 13 Nov 2009 15:16:19 -0500
There could be a couple of answers, depending on how your table is organized.
If you have an index on SSN, one thing you might try is a simple loop like this:
declare
  l_last_batch boolean;
begin
  loop
    update sales
       set ssn = mod(ssn, 10000)
     where ssn > 9999
       and rownum <= 1000;
    l_last_batch := sql%rowcount < 1000;  -- capture before commit clobbers the attribute
    commit;
    exit when l_last_batch;
  end loop;
end;
though that might be slow in certain cases (if your clustering factor for ssn isn't good, for example).
You might give bulk collect/update a shot and see how something like this performs for you:
declare
  cursor l_cur is
    select rowid from sales where ssn > 9999;
  type l_tab is table of rowid index by binary_integer;
  l_buffer l_tab;
begin
  open l_cur;
  loop
    fetch l_cur bulk collect into l_buffer limit 1000;
    forall i in 1 .. l_buffer.count
      update sales
         set ssn = mod(ssn, 10000)
       where rowid = l_buffer(i);
    commit;
    exit when l_buffer.count < 1000;
  end loop;
  close l_cur;
end;
That should be faster than the row-by-row approach in your example.
Another thing you could consider is doing a CTAS (create table as select) and then swapping the tables.
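Purely as an illustration of that CTAS-and-swap idea outside Oracle, here is a sketch using SQLite from Python (hypothetical table and column names; in Oracle you would use CREATE TABLE ... AS SELECT, typically with NOLOGGING/PARALLEL, then re-create indexes, grants, and constraints before swapping names):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (id INTEGER PRIMARY KEY, ssn INTEGER)")
conn.executemany("INSERT INTO sales (ssn) VALUES (?)",
                 [(123456789 + i,) for i in range(5)])

# 1. Build the replacement table in one pass, transforming as we copy.
conn.execute("""CREATE TABLE sales_new AS
                SELECT id, ssn % 10000 AS ssn FROM sales""")

# 2. Swap: drop the old table and rename the new one into place.
conn.execute("DROP TABLE sales")
conn.execute("ALTER TABLE sales_new RENAME TO sales")
conn.commit()

max_ssn = conn.execute("SELECT MAX(ssn) FROM sales").fetchone()[0]
print(max_ssn)  # 6793 -- every value now fits in 4 digits
```

The rebuild touches each row exactly once and generates far less undo/redo than a massive UPDATE, at the cost of needing a maintenance window for the swap.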
On Fri, Nov 13, 2009 at 2:50 PM, Roger Xu <wellmetus_at_gmail.com> wrote:
> Hi List,
> Background: 9i (22.214.171.124.0) HP-UX B.11.31 U ia64
> We need to shrink SSN to the last 4 digits.
> update sales set ssn=to_number(substr(to_char(ssn),-4));
> But the sales table is quite large and we want to commit every 1000 records.
> Does anyone have a better idea to do this rather than below?
> declare
>   cursor c12 is select ssn from sales for update of ssn;
>   n number := 0;
> begin
>   for eachc12 in c12 loop
>     if eachc12.ssn is not null then
>       n := n + 1;
>       update sales set ssn = to_number(substr(to_char(ssn), -4))
>        where current of c12;
>       if n = 1000 then
>         commit;
>         n := 0;
>       end if;
>     end if;
>   end loop;
>   commit;
> end;
--
Alex Fatkulin,
http://afatkulin.blogspot.com
http://www.linkedin.com/in/alexfatkulin
--
http://www.freelists.org/webpage/oracle-l

Received on Fri Nov 13 2009 - 14:16:19 CST