
Re: Excessive Logical and Physical I/O

From: Howard J. Rogers <>
Date: Fri, 26 Mar 2004 08:35:36 +1100
Message-ID: <406350ad$0$8359$>

"Joel Garry" <> wrote in message
> Brian Peasland <> wrote in message
> > > I don't normally disagree with Brian, but I think he's missed the
> >
> > Feel free to disagree with me at any time. I don't mind. ;)
> >
> > If I need to be corrected, then I need to be corrected.
> >
> > > The problem is rather that if a rbs keeps extending, it keeps
> > > acquiring new blocks instead of re-using old ones. And new blocks
> > > have to be read from disk, rather than old ones being over-written
> > > in memory. Therefore, larger rollback segments are associated with
> > > more physical I/O than smaller ones. And therein lies the real
> > > performance issue with rollback segments that keep growing.
> >
> > Point taken. However, this is starting to show that there are many
> > factors at work here. What if the blocks were in memory? Of course
> > memory access is quicker than disk I/O and with larger rollback
> > segments, you increase your chances that a block will not be in memory,
> > but then we have to weigh in the algorithms that keep blocks in memory
> > and how that contributes to your chances that what you need will be in
> > memory when you need it.
> I'm having a bit of trouble with Howard's point. If the RBS is
> growing, then it needs to grow.
> If it is growing because of bad
> coding, that is the bad coding's fault. Once it has grown (which is
> the crux of the question), then on the next transaction, what is the
> difference between it being read in from this big RBS, or some other
> RBS that would have been chosen?

If it's a large rollback segment (say, 1000 blocks), and each transaction writes only one block of undo, then it will take 1000 transactions before you start re-using the first block of the rollback segment. And by the time you get to that point, it is quite likely that the first block will have aged out of the buffer cache. That block will therefore have to be re-read into the cache via a physical read.

If it were only 100 blocks big, then only 100 transactions would be needed to take you back to the first block, and the chances are good that the block will still be in the buffer cache, not yet aged out. Therefore, you will be able to over-write its contents via a logical I/O, not a physical one.

Obviously the situation gets more complicated in a real environment, with different sized transactions, and multiple simultaneous transactions, and multiple rollback segments, but the basic principle is sound: small segments recycle themselves faster, and therefore will be less likely to have aged out of the cache than big ones. Ergo, small ones are likely to have more logical, and less physical, I/O associated with them than big ones.
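That recycling effect can be sketched with a toy simulation. This is not Oracle's actual buffer-cache algorithm (which is rather more subtle than plain LRU, and the cache is shared with ordinary data blocks); it is a minimal model, assuming one undo block per transaction and strictly circular reuse of the segment's blocks, just to show why a segment that cycles past the cache size turns logical I/O into physical I/O. The function name and the cache/segment sizes are illustrative, not from the thread:

```python
from collections import OrderedDict

def physical_reads(segment_blocks, cache_blocks, transactions):
    """Count cache misses (physical reads) when a rollback segment's
    blocks are reused round-robin against an LRU buffer cache.

    Simplifying assumptions: each transaction writes exactly one undo
    block, and the segment's blocks are revisited in a fixed circle.
    """
    cache = OrderedDict()  # block id -> None, in LRU order
    misses = 0
    for txn in range(transactions):
        block = txn % segment_blocks      # next block in the circle
        if block in cache:
            cache.move_to_end(block)      # logical I/O: still cached
        else:
            misses += 1                   # physical I/O: read from disk
            cache[block] = None
            if len(cache) > cache_blocks:
                cache.popitem(last=False) # age out the LRU block
    return misses

# A 300-block cache, 10,000 one-block transactions:
small = physical_reads(100, 300, 10_000)   # segment fits in the cache
large = physical_reads(1000, 300, 10_000)  # segment cycles past the cache
```

With these numbers the 100-block segment incurs physical reads only on the first pass (100 misses, everything after that is a logical I/O), while the 1000-block segment hits the classic LRU worst case for cyclic access wider than the cache: every single revisit is a physical read.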

But please don't do your usual trick of extending that observation to some simplistic advocacy on my part that all rollback segments should be peanut-sized. They need to be as big as needed, and that depends on how you've coded your transactions.

>That's why I think it is often best
> to have a large RBS TS, with a bunch of large RBS's sized to fit the
> normal cases, and room to grow in there in case some new code goes
> nuts. In that case manually shrink it, optimal just opens ORA-155x
> doors.

I don't disagree with having a large rollback tablespace with plenty of room to grow when needed (a large undo tablespace poses different problems, however). The tablespace isn't the issue. It's whether you have larger-than-needed rollback segments. I don't disagree either with preferring manual shrinking to automatic optimal.

HJR

Received on Thu Mar 25 2004 - 15:35:36 CST
