
Re: optimal size for rollback

From: Karen Abgarian <abvk_at_ureach.com>
Date: Sun, 22 Sep 2002 23:30:17 GMT
Message-ID: <3D8E51F7.F49E997F@ureach.com>


Richard -

But what you are saying about rollbacks and optimal is exactly my point. I would
love this discussion to be more oriented towards pros and cons, and to be more
professional and friendly. I apologize if it is my fault that it was not.

As for my relationship with Howard, I think that he should develop some respect
for me and my professional skills. I too have worked with Oracle for many years,
so it should be obvious that I have some knowledge, experience, and an opinion
of my own.

As anybody can see, I have never doubted Howard's or anybody else's talents, nor
shown disrespect. If Howard can manage the same, then, of course, we could be
friends.

Howard ... cup of coffee :-)?

Regs
Karen

Richard Foote wrote:

> Hi Howard and Karen,
>
> A couple of things.
>
> Firstly, I reckon you two should meet for a cafe latte sometime; it could be
> the start of a beautiful relationship :)
>
> To me, this discussion is similar to the buffer cache thread and the
> discussion regarding the pros and cons of incremental checkpoints. Yes,
> there is a potential performance impact of using optimal; yes, the use of
> optimal shouldn't be abused; and yes, using optimal shouldn't be an excuse
> for not properly sizing rollback segments. If you have rollback segments
> that extend and shrink, extend and shrink, extend and shrink ..., yes,
> there's going to be an impact (and potential issues with ORA-1555s).
>
> *BUT* if you size your rollback segments to cater for 99.99% of
> transactions, and by doing so have them conservatively sized, and if for
> the remaining 0.01% of transactions (i.e. very rarely) a rollback segment
> extends and subsequently shrinks, and if the *actual* performance impact is
> that such rare transactions take 1.45 seconds instead of the usual 1.25
> seconds (i.e. "I didn't notice anything, did you?"), then this is all a bit
> of a non-issue.
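
As an aside, a conservatively sized segment with optimal as the safety valve
might look like the following sketch; the segment name, tablespace, and sizes
here are illustrative assumptions, not taken from the thread:

    -- Permanent extents sized to cover ~99.99% of transactions; OPTIMAL
    -- lets the rare monster transaction grow the segment and have it
    -- shrink back afterwards.
    CREATE ROLLBACK SEGMENT rbs01
      TABLESPACE rbs
      STORAGE (INITIAL 1M NEXT 1M MINEXTENTS 20 OPTIMAL 20M);

    ALTER ROLLBACK SEGMENT rbs01 ONLINE;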
>
> If by saving a substantial amount of disk space and by reducing
> administration overheads I suffer rare, unnoticeable performance hits, then
> you get no arguments from me. That's a logical and appropriate use of
> another database feature (optimal). And yes, it's the *actual performance
> hit* and its impact that matter, not merely whether there's a performance
> hit at all. If it's unnoticeable, or if I occasionally save only a fraction
> of a second, then it's a non-issue. I could pretty well tune any database
> and make things run faster and more efficiently. But is someone willing to
> pay me to save a fraction of a second here or there? I think not.
>
> Which brings me to cost. "Disk is cheap, go out and buy more" is often
> touted as the solution. In this example: don't worry about disk, go out and
> buy the extra whatever, and size your rollback segments to cater for the
> largest probable transaction. Then you can forget about optimal and be done
> with it. And I agree, storage is relatively cheap, but ...
>
> Here in sunny Canberra, we're a Government town and most of the key sites
> here are Government. And the Australian Government has gone through an
> outsourcing program (bless them) where many key sites have their IT
> infrastructure outsourced to and controlled by the large IT giants. And
> guess what, getting additional disk isn't cheap, it's bloody, criminally
> expensive. And by the time you add all the disk redundancy costs, costs of
> maintenance, costs of implementation, costs of support, costs of breaking
> your IT requirements, etc., etc., etc., it's a case of "F&#*, how much!!".
> Many Government agencies see such savings as most important, and so yes,
> "you want additional storage to improve performance by how much!!" does
> not go down too well.
>
> Outsourcing hasn't exactly been a great success ...
>
> But I digress.
>
> To me, this discussion on optimal is the same old thing. Knowing the
> options, knowing the pros and cons, and selecting the option that is most
> appropriate is the way to go.
>
> One last point. Note that automatic undo management does extra work 'under
> the covers' which could be viewed as sub-optimal. Yet I haven't heard many
> people suggest that manual rollback segments are the way to go in 9i ...
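
A sketch of that 9i automatic undo setup, with assumed names and values (the
three parameters are the standard ones; the tablespace name, file path, and
retention figure are made up for illustration):

    -- init.ora / spfile parameters:
    --   UNDO_MANAGEMENT = AUTO
    --   UNDO_TABLESPACE = UNDOTBS1
    --   UNDO_RETENTION  = 900        -- seconds of undo to keep for queries

    CREATE UNDO TABLESPACE undotbs1
      DATAFILE '/u01/oradata/db1/undotbs1_01.dbf' SIZE 500M;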
>
> My thoughts for what it's worth.
>
> Richard
>
> "Howard J. Rogers" <howardjr2000_at_yahoo.com.au> wrote in message
> news:357j9.37408$g9.107167_at_newsfeeds.bigpond.com...
> >
> > "Karen Abgarian" <abvk_at_ureach.com> wrote in message
> > news:3D8CFB0B.9C1957AA_at_ureach.com...
> > > > I gave you the technical explanation, and whilst I could give you the
> > > > numbers (as I've done on many occasions in the past), it's not
> > > > actually a hard test to set up for yourself: build a 16K rollback
> > > > segment with optimal set to 16K, and update 1000000 rows to force
> > > > growth. Then update 1000 rows to force shrinkage. Do the same thing
> > > > without optimal. Create the segment in both LM and DM tablespaces.
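
A sketch of the test Howard describes (the table and segment names are
assumed; OPTIMAL is shown as 32K rather than 16K because Oracle requires it
to be at least the segment's initial allocation of two extents):

    CREATE ROLLBACK SEGMENT rbs_test
      TABLESPACE rbs_dm   -- repeat in an LM tablespace for comparison
      STORAGE (INITIAL 16K NEXT 16K MINEXTENTS 2 OPTIMAL 32K);

    ALTER ROLLBACK SEGMENT rbs_test ONLINE;

    SET TRANSACTION USE ROLLBACK SEGMENT rbs_test;
    UPDATE big_table SET padding = padding;       -- ~1000000 rows: growth
    COMMIT;

    SET TRANSACTION USE ROLLBACK SEGMENT rbs_test;
    UPDATE big_table SET padding = padding
    WHERE  ROWNUM <= 1000;                        -- triggers the shrink back
    COMMIT;

    -- Repeat with OPTIMAL omitted and compare timings.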
> > >
> > > I mentioned the extent sizing. How many extents do you expect to be
> > > purged? Unless you really create rollback segments with 8K extents and
> > > have them allocate thousands of extents, can this be more than a few
> > > dozen in practice?
> > >
> > > When a shrink happens, the majority of the work that needs to be done
> > > by Oracle is to update fet$ and uet$ on dictionary-managed tablespaces.
> > > There is a good chance that the contents of these tables will be
> > > cached. If so, Oracle will have to do a few operations in memory. Even
> > > if it needs to go to disk, we are talking about reading a few blocks.
> > > Without getting into too many details, there is also internal locking
> > > for fet$/uet$; this should depend on how actively they are
> > > allocating/deallocating. On LMTs, this shrink just flips the bits in
> > > the file header(s).
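
The shrink and extend activity Karen is estimating can be observed directly;
the standard v$ views record per-segment counts (an illustrative query, not
from the original posts):

    SELECT n.name, s.optsize, s.rssize, s.extends, s.shrinks, s.aveshrink
    FROM   v$rollname n, v$rollstat s
    WHERE  n.usn = s.usn;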
> >
> > Actually, there are trips to the dictionary even with LMTs. They reduce
> > contention for the data dictionary tables; they don't eliminate it
> > altogether. Something has to know which bits belong to which segment, and
> > thus which bits to flip. And there are queries against the data
> > dictionary tables, too, since the setting for optimal has to be checked
> > by every transaction as it crosses an extent boundary, whether or not a
> > shrink actually has to happen.
> >
> > And your transaction waits as those checks are performed.
> >
> > >
> > > How many seconds do you expect the above to take, if we assume that
> > > the rollback segments are NOT sized like you suggested for the
> > > experiment? A second, two seconds? What can this performance depend
> > > on? Any numbers?
> > >
> >
> > Frankly, who cares how long it takes? It's extra work being done by the
> > database (thanks for finally acknowledging it has to happen). And that
> > extra work is entirely optional, because it doesn't happen when optimal
> > is not set.
> >
> > You might consider the waits trivial. That's another of your value
> > judgements, which wouldn't apply to all databases at all times. That
> > they're there at all, however, is not a value judgement, but a matter of
> > plain, hard fact which does apply to all databases for which optimal is
> > in use.
> >
> > HJR
> >
> >
> > > Regs
> > > AK
