Oracle FAQ Your Portal to the Oracle Knowledge Grid
HOME | ASK QUESTION | ADD INFO | SEARCH | E-MAIL US
 

Home -> Community -> Usenet -> c.d.o.server -> Re: Fetch across commit

Re: Fetch across commit

From: Chuck <chuckh_nospam_at_softhome.net>
Date: 22 Dec 2003 14:43:19 GMT
Message-ID: <Xns945962E5A8853chuckhsofthomenet@130.133.1.4>


Andy Hassall <andy_at_andyh.co.uk> wrote in news:nf64uvoeu9go7i1cf9m04spvqkl2l85fo4_at_4ax.com:

> On Thu, 18 Dec 2003 21:30:36 +0100, Sybrand Bakker
> <gooiditweg_at_sybrandb.nospam.demon.nl> wrote:
>

>>On 18 Dec 2003 19:26:29 GMT, Chuck <chuckh_nospam_at_softhome.net> wrote:
>>
>>>Oracle 8.1.7 on Win2k
>>>
>>>I have to delete a large number of rows from a very large table. I
>>>cannot do the delete as a single transaction as it will generate
>>>gigabytes of rollback data and blow out the largest RBS I have. If I
>>>delete individual rows inside a cursor loop with a commit every N
>>>rows, will that cause unnecessary performance problems? For example,
>>>will the select be executed again after each commit? I remember being
>>>told this would happen by someone way back in version 7.1.

>
> This isn't true. But each time you commit, you potentially make it
> harder to continue executing the query. If you revisit a block that
> was previously modified, Oracle will have to use rollback/undo
> segments to present you with a version that is read-consistent with
> the time you opened the cursor.
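
For reference, the commit-every-N-rows loop being discussed would look
something like this. This is only a sketch: BIG_TABLE, the PURGE_DATE
criteria, and the batch size of 1000 are placeholders, not anything from
the original post.

```sql
DECLARE
    -- Plain cursor (no FOR UPDATE), so committing mid-fetch
    -- does not raise ORA-01002 (fetch out of sequence)
    CURSOR c IS
        SELECT rowid AS rid
          FROM big_table                        -- placeholder name
         WHERE purge_date < SYSDATE - 90;       -- placeholder criteria
    v_count PLS_INTEGER := 0;
BEGIN
    FOR r IN c LOOP
        DELETE FROM big_table WHERE rowid = r.rid;
        v_count := v_count + 1;
        IF MOD(v_count, 1000) = 0 THEN
            COMMIT;   -- the fetch continues across this commit
        END IF;
    END LOOP;
    COMMIT;
END;
/
```

Each commit releases the undo generated so far, which is what keeps the
RBS small - and also exactly what exposes the open cursor to ORA-01555,
as described above.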
>
>>>The purge will run after hours and there should be no other selects
>>>going against the table so I should not create any ora-1555's.

>
> Depends how the source cursor runs; if you're running off an index
> you'll almost certainly visit a block from the table that you
> modified in a previous transaction, which will lead to ORA-01555. If
> you're running off a full table scan you should only be visiting each
> block once, so you may get away with it (you mention FKs in the next
> paragraph, which possibly makes the situation more likely to produce
> ORA-01555s, if they have ON DELETE clauses and so cause modifications
> in other blocks). You say there are no selects, but what about other
> DML? If anything else on the system is modifying the data you're
> fetching across commits, you will rapidly get "snapshot too old".
>
>>>I am not deleting a high enough % of the rows to justify recreating
>>>the table with just the rows I want to keep. Plus there are several
>>>Fk's involved and I really don't want to have to go through
>>>recreating them which I would have to do if I took the "create a new
>>>table" route. 
>>
>>I agree with Brian's reply. Apart from that, the only sensible route
>>seems to be to split the big transaction into several smaller
>>transactions according to a formal criterion, like all records in a
>>single state or all names starting with an 'A', and commit each
>>smaller bulk transaction.

>
> Thomas Kyte's new book has a neat example of splitting a table into
> an arbitrary number of equal-sized and non-overlapping ROWID ranges,
> on which you can run ROWID range scans in the same way as Oracle's
> own parallel query does. This might be useful if it's a reasonable
> percentage of rows being affected - you get the same multiblock IO
> mechanism as full table scans but restricted to a section of the
> table. You can start up multiple instances of it on different ranges
> to take advantage of more than one processor, but still have control
> over how much is deleted at one time.
>
> I've left the book at the office, but I think it's near the end,
> presented as a demonstration of analytic functions, rather than the
> 'DIY parallelism' earlier in the book, which I think runs off primary
> key ranges similar to Sybrand's suggestion.
>
> It's basically an up-to-date version of the ideas presented at
> http://www.jlcomp.demon.co.uk/big_upd.html . Whether it's appropriate
> or not depends on what criteria you're using for deleting the rows.
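
The analytic-function split being described can be sketched roughly as
follows. This is a reconstruction, not the book's exact code; BIG_TABLE,
the range count of 8, and the purge criteria are all placeholders.

```sql
-- Split the table's rowids into 8 roughly equal, non-overlapping ranges
SELECT grp,
       MIN(rid) AS lo_rid,
       MAX(rid) AS hi_rid
  FROM (SELECT rowid AS rid,
               NTILE(8) OVER (ORDER BY rowid) AS grp
          FROM big_table)                       -- placeholder name
 GROUP BY grp;

-- Then, for each (lo_rid, hi_rid) pair - possibly in separate sessions:
DELETE FROM big_table
 WHERE rowid BETWEEN :lo_rid AND :hi_rid
   AND purge_date < SYSDATE - 90;              -- placeholder criteria
COMMIT;
```

Because each DELETE touches only its own ROWID range, the transactions
don't overlap, and each one's undo stays within whatever range size you
chose.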
>
> There's also the data integrity question of whether it makes logical
> sense to delete the rows separately, which depends on your data
> model, or possibly whether you can absolutely guarantee nothing sees
> any inconsistent states you may set up part way through the process
> (i.e. your applications are totally offline).
>
> Still, the ideal situation is being able to have rollback segments
> large enough for Oracle to handle it all in a single DELETE statement
> in the first place.
>

Thanks for the info. I planned to avoid ORA-01555 by adding an ORDER BY ROWID to the query. That should cause the cursor to be driven off a temporary segment instead of the table itself, thus preventing ORA-01555.
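
Concretely, that means declaring the driving cursor along these lines
(BIG_TABLE and the criteria are placeholders, not the actual table):

```sql
CURSOR c IS
    SELECT rowid AS rid
      FROM big_table                        -- placeholder name
     WHERE purge_date < SYSDATE - 90       -- placeholder criteria
     ORDER BY rowid;   -- the sort materializes the result set in temp,
                       -- so later fetches read the sorted copy rather
                       -- than revisiting table blocks modified by
                       -- earlier batches
```

The caveat is that the whole result set must be sorted up front, which
costs one full pass and temp space before the first delete happens.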

I agree that ideally the solution is to have one enormous RBS to use, but in the real world you have to work with what you've got. I don't have space for such an RBS.

-- 
Chuck
Remove "_nospam" to reply by email
Received on Mon Dec 22 2003 - 08:43:19 CST
