Oracle FAQ Your Portal to the Oracle Knowledge Grid
 


Re: Undo segments and disk usage

From: joel garry <joel-garry_at_home.com>
Date: 24 May 2007 13:27:31 -0700
Message-ID: <1180038451.753252.7930@u36g2000prd.googlegroups.com>


On May 23, 6:56 am, "David.E.M...._at_gmail.com" <David.E.Mur..._at_gmail.com> wrote:
> On May 21, 7:33 pm, joel garry <joel-ga..._at_home.com> wrote:
>
> > On May 21, 12:22 pm, "David.E.M...._at_gmail.com"
>
> > <David.E.Mur..._at_gmail.com> wrote:
> > > I have had a couple of problems with undo in the past, and I want
> > > to make sure I'm on the right track. I'm using 9i with AUM and my
> > > undo_retention is set to 900. First of all, I have read that 900 is
> > > just a guideline and Oracle will overrule it if it needs to. That
> > > doesn't really matter to me, as we don't use flashback queries.
>
> > > Secondly, we had a serious issue where the undo tablespace grew to
> > > over 12 gig and filled up hard drives at some sites. From what I
> > > understand, oracle keeps 10 segments in the undo area and selects
> > > expired segments almost randomly for transactions. We have a weekly
> > > transaction that runs for about an hour and generates over a gig of
> > > undo. Since we have our maxsize set to unlimited, you could
> > > theoretically have all 10 segments grow to over a gig after a couple
> > > months or so. My solution to this is to split up and optimize the
> > > long running weekly transaction. Also, I am putting a 4 gig maxsize
> > > on the undo.
>
> > > If my concept of undo segments is correct then one could calculate the
> > > maximum amount of hard drive space used by your undo tablespace by
> > > multiplying the largest-amount-of-undo-used-per-period * number-of-
> > > segments, where the 'period' is the larger of undo_retention and
> > > the longest-running query. In my case it would be 1.2 gig * 10 =
> > > 12 gig of disk needed.
>
> > > Do you guys think this sounds reasonable?
>
> > No. Oracle does not necessarily use expired extents, and has other
> > issues that can be set off simply by code it doesn't like. See not-
> > a-bug 4244922 on metalink, and browse around "all sources" in the
> > knowledgebase for the term 30036 (not that you are approaching that
> > error; it just finds interesting discussions as a search term -
> > though you might hit it if you set a maxsize). Also,
> > asktom.oracle.com has some interesting clarifications - Tom's general
> > feeling is to size the undo as big as it needs to be and forget about
> > it. He may support your concept; I'm not sure, as it's been a while
> > since I've looked at it there in detail.
>
> > Some things you might consider:
> > What exact version are you on?
> > What exactly is your long-running code doing? Are you mass-deleting
> > anything? Mass-updating indices? Committing in a loop? Considering
> > committing in a loop?
> > Collecting info from dba_undo_extents, v$undostat and v$transaction?
>
> > http://www.akadia.com/services/ora_optimize_undo.html may be helpful.
>
> > Deletes are going to use up way more undo than anything else, as
> > Oracle has to keep all the before-image data.
>
> > jg
> > --
> > @home.com is bogus. http://en.wikipedia.org/wiki/Rubber-hose_cryptanalysis
>
> Thanks for the reply Joel.
>
> Here are a couple of my resources: The first post in this asktom
> thread talks about Oracle overruling the undo_retention time - http://asktom.oracle.com/pls/asktom/f?p=100:11:0::::P11_QUESTION_ID:6...
>
> I used this guy's method to watch my undo segments grow - http://orafaq.com/node/61
> Does anyone see anything wrong with this concept? My testing seems to
> confirm it and I think it is an important point that the undo
> tablespace can grow to 10x the largest undo used by a transaction.
> All of the reading I have done implies that the undo tablespace will
> grow to fit the needs of your largest transaction. In reality this
> should be multiplied by 10, which is a tenfold increase and a big
> deal in my case.
>
> I'm using oracle 9.2.0.5
>
> I'm not worried about ORA-30036, as I sized the undo to 4x what it
> currently uses. And... I have split the large transaction and
> optimized it so that it will use less in the future.
>
> The long transaction is a weekly job that clears old data by moving x
> days of data from a main table to a temp table and then copying it
> back after the main table is truncated. There are limited deletes
> from smaller tables and the indexes are rebuilt. No commits in a
> loop.
>
> At this point I'm thinking about throwing out the autoextend option
> and sizing the undo tablespace to 4 gig and calling it done.
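As a sketch of the sizing rule quoted above (largest undo generated per period * number of segments), the peak generation figure can be pulled from v$undostat. This is an illustration only, not a documented formula: it assumes an 8K block size and takes the 10-segment worst case from the discussion at face value.

```sql
-- Rough worst-case undo sizing sketch (9i AUM).
-- v$undostat rows cover 10-minute intervals; UNDOBLKS is the number of
-- undo blocks generated in that interval.  8192 = assumed
-- db_block_size - check v$parameter on your own system.
SELECT ROUND(MAX(undoblks) * 8192 / 1024 / 1024) AS peak_mb_per_interval,
       ROUND(MAX(undoblks) * 8192 / 1024 / 1024) * 10 AS worst_case_mb
FROM   v$undostat;
```

Since a 10-minute interval understates a "period" defined as max(undo_retention, longest-running query), treat the result as a floor rather than a ceiling.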

Well, you've read the docs, read asktom, read the faq, you understand your app, you've tested the rules of thumb in the docs and found them wanting, you monitor your database, and you've asked others to review your thought processes - would that all DBAs were as thorough as you! You might consider writing this up as a presentation to get free admission to a major user group conference.
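The segment-growth monitoring described in the thread can be sketched as a query against dba_undo_extents; a minimal example, assuming only the standard 9i dictionary view:

```sql
-- Per-segment undo usage, split by extent status
-- (ACTIVE / UNEXPIRED / EXPIRED), to watch individual segments grow.
SELECT segment_name,
       status,
       ROUND(SUM(bytes) / 1024 / 1024) AS mb
FROM   dba_undo_extents
GROUP  BY segment_name, status
ORDER  BY segment_name, status;
```

Run it periodically during the weekly job to see how many segments actually grow, rather than assuming all 10 do.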

All I can add is that some tool told me my db needs 40G of undo, and I run it at 10G with process kills for people who leave transactions open (if I don't kill them, I get ORA-01555s). But of course, my app is different from yours, and I need to fiddle with it every time some major organizational change happens.
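Candidates for those process kills can be found with a query like the following; a sketch, assuming an 8K block size (check db_block_size) and the standard v$transaction-to-v$session join on the transaction address:

```sql
-- Sessions with transactions open for more than an hour, with the
-- undo each one is currently holding (USED_UBLK = undo blocks in use).
-- v$transaction.START_TIME is a string in 'MM/DD/YY HH24:MI:SS' format.
SELECT s.sid, s.serial#, s.username, t.start_time,
       ROUND(t.used_ublk * 8192 / 1024 / 1024) AS undo_mb
FROM   v$transaction t, v$session s
WHERE  s.taddr = t.addr
AND    TO_DATE(t.start_time, 'MM/DD/YY HH24:MI:SS') < SYSDATE - 1/24
ORDER  BY t.start_time;
```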

jg

--
@home.com is bogus.
'There's no need to have my name on this. I already have a Nobel
Prize.' - Harold Urey, to Stanley L. Miller
Time to update the contact info:  http://exobio.ucsd.edu/miller.htm
Received on Thu May 24 2007 - 15:27:31 CDT

