
Re: Excessive Logical and Physical I/O

From: Joel Garry <>
Date: 25 Mar 2004 13:14:19 -0800
Message-ID: <>

Brian Peasland <> wrote in message news:<>...
> > I don't normally disagree with Brian, but I think he's missed the point
> Feel free to disagree with me at any time. I don't mind. ;)
> If I need to be corrected, then I need to be corrected.
> > The problem is rather that if a rbs keeps extending, it keeps acquiring new
> > blocks instead of re-using old ones. And new blocks have to be read from
> > disk, rather than old ones being over-written in memory. Therefore, large
> > rollback segments are associated with more physical I/O than smaller ones.
> > And therein lies the real performance issue with rollback segments that keep
> > growing.
> Point taken. However, this is starting to show that there are many
> factors at work here. What if the blocks were in memory? Of course
> memory access is quicker than disk I/O and with larger rollback
> segments, you increase your chances that a block will not be in memory,
> but then we have to factor in the algorithms that keep blocks in
> and how that contributes to your chances that what you need will be in
> memory when you need it.

I'm having a bit of trouble with Howard's point. If the RBS is growing, then it needs to grow; if it is growing because of bad code, that is the bad code's fault. Once it has grown (which is the crux of the question), then on the next transaction, what is the difference between blocks being read in from this big RBS versus from some other RBS that would have been chosen? That's why I think it is often best to have a large RBS tablespace with a bunch of large RBS's sized to fit the normal cases, with room to grow in there in case some new code goes nuts. In that case, manually shrink the segment afterwards; setting OPTIMAL just opens the door to ORA-01555 ("snapshot too old") errors.
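To make that concrete, here's a rough sketch of what I mean (all names, sizes, and paths are made up; this assumes dictionary-managed rollback segments, i.e. pre-automatic-undo):

```sql
-- Hypothetical example: a dedicated rollback tablespace holding a few
-- large, equally sized segments, with no OPTIMAL clause, so segments
-- are shrunk manually rather than automatically (automatic shrinking
-- via OPTIMAL is what invites ORA-01555).
CREATE TABLESPACE rbs_big
  DATAFILE '/u02/oradata/PROD/rbs_big01.dbf' SIZE 2000M;

CREATE ROLLBACK SEGMENT rbs01
  TABLESPACE rbs_big
  STORAGE (INITIAL 10M NEXT 10M MINEXTENTS 20 MAXEXTENTS 121);

-- Later, after some runaway transaction has blown the segment up,
-- shrink it back by hand at a quiet time:
ALTER ROLLBACK SEGMENT rbs01 SHRINK TO 200M;
```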

> And what if the rollback segments really needed to be that large to
> support the transaction? Do we force smaller rollback segments and force
> the transaction to commit more often just to help performance? Won't
> frequent commits *typically* have a more negative impact on performance than
> having a larger rollback segment and committing once? The following
> example comes to mind...
> Perform DML operation
> So in that case, couldn't one argue that being forced to use a smaller
> rollback segment is actually hurting performance?

Actually, the last time I had a situation very much like this, performance increased very, very noticeably due to the reduced read-consistency work for everybody else. (The app was written to the lowest-common-denominator database, with no ability to designate an RBS, of course.)
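For anyone following along, the mechanism that app lacked is the standard one; a minimal sketch, assuming a big segment named rbs01 already exists and is online (table and column names are hypothetical):

```sql
-- Must be the first statement of the transaction.
SET TRANSACTION USE ROLLBACK SEGMENT rbs01;

UPDATE big_table SET processed_flag = 'Y';  -- large DML lands in the designated segment
COMMIT;  -- the designation only lasts until the next commit/rollback
```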

> Lots and lots of factors at play here. And too much to spend time
> worrying about if no one is complaining about a problem.
> > Where I do agree with Brian is that you don't really have a performance
> > problem unless someone is noticing (and complaining about) it.
> Thanks,
> Brian

I can't believe you guys have never walked into a site and gone "geez, these people don't even _know_ what problems they have." Then you suggest the most basic changes and they think you are a genius. The "if no one is complaining" standard presupposes a well-administered site to begin with. My experience is probably skewed, but it seems to me most places are set up by consultants or vendors who may or may not be good, then left to the, er, less experienced, or are barely set up at all by people with no Oracle DBA experience. Bigger shops may or may not have it better, but they are probably struggling with RAC anyways :-)


Received on Thu Mar 25 2004 - 15:14:19 CST
