Re: Increase the size of rollback segments
In article <971618594.24915.0.nnrp-08.9e984b29_at_news.demon.co.uk>,
"Jonathan Lewis" <jonathan_at_jlcomp.demon.co.uk> wrote:
>
> I have to disagree with you on this point.
> In a well-structured system, the optimum rollback
> size is the smallest size that you can get away
> with that does not result in frequent small extends
> and shrinks of rollback segments.
>
> However, there are often cases where you expect
> an occasional, but not predictably timed, transaction
> to demand an excessive amount of rollback. In this case,
> the optimum strategy is to set the optimal at the 'steady
> state' figure, and allow the rollback segment to shrink
> automatically some time after the major transaction
> completes.
>
> The only alternatives are to code the transaction to
> acquire the same rollback segment every time (nasty),
> or to code the transaction to find and shrink the extended
> rollback segment as its last step (just as nasty), or get
> the dba to keep checking the database and shrink the
> segment manually (nastiest of all).
>
> --
>
> Jonathan Lewis
> Yet another Oracle-related web site: http://www.jlcomp.demon.co.uk
>
> Howard J. Rogers wrote in message <39e7a664$1_at_news.iprimus.com.au>...
> >Why avoid dropping them? There's no point. Create a new one first, then
> >drop the old ones. Incidentally, Optimal is a very bad idea (it will cause
> >shrinking just exactly when you don't want it to happen: when a new
> >transaction is looking to acquire new rollback blocks).
> >
>
Rather than attach a post lower on the thread, where the topic has now turned to rollback segment I/O, I am going to attach here.
If the errors are purely random, then extending MINEXTENTS is probably going to help; but if the errors are always received by the same jobs, then some processing changes may be in order.
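As a rough sketch of the first option (segment, tablespace, and storage figures below are all hypothetical and must be tuned to the site; on the versions I have used MINEXTENTS cannot be altered in place, so the segment has to be recreated):

```sql
-- Hypothetical names and sizes: recreate the segment with a larger
-- minimum number of extents so more undo is always available.
ALTER ROLLBACK SEGMENT rbs01 OFFLINE;
DROP ROLLBACK SEGMENT rbs01;
CREATE ROLLBACK SEGMENT rbs01
  TABLESPACE rbs_ts
  STORAGE (INITIAL 1M NEXT 1M MINEXTENTS 20 OPTIMAL 20M);
ALTER ROLLBACK SEGMENT rbs01 ONLINE;
```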
As most of the readers and posters to this thread know, it is possible for a job that reads and updates the same table, committing as it runs, to create its own snapshot too old error. In this case it may be necessary to modify the application to use a driving table, so that locating the target rows is separated from the DML operation.
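A minimal sketch of the driving-table approach (all table and column names here are made up for illustration): the rowids are collected in a first pass, so the loop that updates and commits no longer holds a cursor open against the table it is changing.

```sql
-- Hypothetical tables/columns.  Pass 1: record the target rowids.
INSERT INTO big_table_driver (rid)
  SELECT rowid FROM big_table WHERE status = 'PENDING';
COMMIT;

-- Pass 2: update by rowid from the driving table.  The open cursor now
-- reads only the driver, which is not being modified by the commits.
BEGIN
  FOR r IN (SELECT rid FROM big_table_driver) LOOP
    UPDATE big_table SET status = 'DONE' WHERE rowid = r.rid;
    COMMIT;
  END LOOP;
END;
/
```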
Sometimes this problem can be traced to two jobs that both make heavy updates to the same table and run at the same time. Some of the time job A takes the error and some of the time job B does, but the jobs only receive the error when both are running simultaneously. A minor scheduling change, or the use of a user lock to single-thread the jobs, can eliminate the problem in this case.
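The user-lock idea can be sketched with the DBMS_LOCK package (the lock name is hypothetical, and EXECUTE on DBMS_LOCK must be granted to the job's owner); with a timeout of zero the second job to start simply bails out instead of overlapping:

```sql
-- Hypothetical lock name; each job runs this block before its updates.
DECLARE
  l_handle  VARCHAR2(128);
  l_status  INTEGER;
BEGIN
  DBMS_LOCK.ALLOCATE_UNIQUE('NIGHTLY_BATCH_LOCK', l_handle);
  l_status := DBMS_LOCK.REQUEST(l_handle, DBMS_LOCK.X_MODE, timeout => 0);
  IF l_status <> 0 THEN
    RAISE_APPLICATION_ERROR(-20001, 'Another copy of the job is running');
  END IF;
  -- ... do the heavy updates here ...
  l_status := DBMS_LOCK.RELEASE(l_handle);
END;
/
```

Using a timeout of DBMS_LOCK.MAXWAIT instead would make the second job queue behind the first rather than exit.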
Anyway, these are a couple of ideas in case extending the segments does not solve the entire problem.
--
Mark D. Powell -- The only advice that counts is the advice that you follow, so follow your own advice

Received on Tue Oct 17 2000 - 10:08:16 CDT