Re: cluster at a time processing

From: Adrian Shepherd <Adrian.Shepherd_at_BTINTERNET.COM>
Date: 1998/01/03
Message-ID: <68ld7u$2lp$1_at_mendelevium.btinternet.com>


Depending on how much you have changed and how many indexes are on the changed records, I would commit at a much lower level, around the 1000-5000 mark. You will be creating large rollback entries and will almost certainly run into space management issues. Make sure your rollback segments, indexes, data, and redo files are on separate disks to avoid I/O contention. Updating through "clusters", as you put it, won't speed up the processing by much, if at all. Since you need the disk writes to be performed as fast as possible, consider more (or faster) disks.
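To make the batching concrete, here is a minimal sketch of the commit-every-N pattern being suggested. The table and column names (big_table, status) are made up for illustration; note that committing inside a loop over an open cursor is the classic fetch-across-commit pattern and can hit ORA-01555 (snapshot too old) if the rollback segments are small:

-- Sketch only: big_table / status are hypothetical names.
DECLARE
   v_count PLS_INTEGER := 0;
BEGIN
   FOR rec IN (SELECT rowid AS rid FROM big_table WHERE status = 'OLD') LOOP
      UPDATE big_table SET status = 'NEW' WHERE rowid = rec.rid;
      v_count := v_count + 1;
      IF MOD(v_count, 5000) = 0 THEN   -- commit every 5000 rows
         COMMIT;
      END IF;
   END LOOP;
   COMMIT;   -- pick up the final partial batch
END;
/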

Hope this helps...

ssharma_at_clearnet.com wrote in message <883769763.1711669254_at_dejanews.com>...
>I need to make changes to a massive, massive table - currently I use a
>pl/sql procedure to loop through the table committing after each million
>updates.
>
>I am wondering if there is an easy way to update a cluster at a time. ie
>instead of updating a record each time through the loop and doing a
>commit on the millionth loop, could I update a cluster each time through
>the loop and the next time through the loop use the next cluster (or
>block or file).
>
>I realise I can easily get to clusters through rowid to char
>manipulations, I am hoping there are easier ways to get hold of this
>info.
>
>Would this update - by - cluster instead of update - by - rowid provide
>any speedups?
>
>-------------------==== Posted via Deja News ====-----------------------
> http://www.dejanews.com/ Search, Read, Post to Usenet
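[Editor's note on the rowid question above: if you are on Oracle 8, the DBMS_ROWID package can decode file and block numbers directly, without the rowid-to-char string manipulation the poster mentions. A sketch, with a hypothetical table name (big_table):

-- Assumes Oracle 8's DBMS_ROWID package.
SELECT DBMS_ROWID.ROWID_RELATIVE_FNO(rowid) AS file_no,
       DBMS_ROWID.ROWID_BLOCK_NUMBER(rowid) AS block_no,
       COUNT(*)                             AS rows_in_block
  FROM big_table
 GROUP BY DBMS_ROWID.ROWID_RELATIVE_FNO(rowid),
          DBMS_ROWID.ROWID_BLOCK_NUMBER(rowid);

As the reply notes, though, grouping updates by block is unlikely to buy much next to the cost of the disk writes themselves.]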
Received on Sat Jan 03 1998 - 00:00:00 CET