RE: Sizing rollback segments
Date: Tue, 6 May 2008 20:51:17 -0400
Okay, so let's say that 295 is 300 so the math is easier. 300 * 5 MB = 1500 MB, or about 1.5 GB per segment. Have you got 21 GB laying around you can use? That would be the simplest, cheapest answer if you do. Note that "laying around" is a precise technical term meaning you're not really using it for something else. You know, like free space on the set of drives you already use for UNDO that you really don't want people putting stuff on anyway.
If you need to dig a bit further, you might ask yourself whether the big ones got that way from a single monolith or from the unfortunate collision of two or more middling-large jobs in the round robin (which, as of 8.1, balanced by transaction count, rotating in order of segment creation to break ties).
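One way to tell which pattern you have (a sketch; verify the column list against your release) is to watch the dynamic views over the course of a day:

```sql
-- v$rollstat resets at instance startup, so sample it between restarts.
-- EXTENDS climbing on one segment at a time suggests a single monolith;
-- several segments growing together suggests colliding mid-sized jobs.
SELECT n.name, s.extents, s.rssize, s.optsize,
       s.wraps, s.extends, s.shrinks
  FROM v$rollname n, v$rollstat s
 WHERE n.usn = s.usn;
```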
So if a single monolith runs periodically and bumps whichever segment it hits to 295 * 5 MB, and you don't have the 21 GB (14 * 1.5 GB, if I do the math right) to spare, then you are unfortunately in the business of shrinking stuff.
If you only get that big through unfortunate collisions, then you might do better with more rollback segments. I know they say you can have many transactions per segment, but that doesn't make it easier to manage sizes (you may have noticed they scrapped the whole module, baby and bathwater, for automatic undo in later releases). The problem is that they bound user control of rollback segments to transactions, not sessions, and there was no way to gain exclusive use of a particular segment short of building an OPS instance to run your batch jobs. So even if you go to the trouble of assigning your monolith to a particular segment, more transactions can still pile in on you if you have more transactions than rollback segments. Now, the trivial solution for them to implement would have been exclusive use of named classes of rollback segment sizes, so you wouldn't have to wait to use BIGRB (or blow it up even bigger if you ran at the same time as the other monolith).
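For the record, pinning the monolith to the big segment is a request, not a reservation; as noted above, other transactions can still land in it. The batch job would start each transaction with something like this (BIGRB is a hypothetical segment name):

```sql
-- Must be the first statement of the transaction; the assignment lasts
-- only until the next COMMIT or ROLLBACK, so reissue it per transaction.
SET TRANSACTION USE ROLLBACK SEGMENT bigrb;

-- ... the monolithic DML ...

COMMIT;
```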
Anyway, I digress. If you don't have enough space for all of them to be the same large size that handles everything (and this is a whole different thing if you're trying to cache some of the rollback segments in the file system cache or the buffer cache, but I "GUESS" that is not a concern at the moment), then you probably want to set them all as big as you are comfortable setting them. Something in the range of 10 to 40 extents is the sweet spot (sounds like the 20 you mention), so yes, bump your extent size up. Then set yourself up an alert so that after a "BIG" transaction finishes you can rotate the segment offline, remake it "normal," and put it back online (or trust OPTIMAL to do it).
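The rotate-and-remake step might look roughly like this (a sketch; rbs07, the tablespace, and the sizes are placeholders you'd adjust to your own storage clause):

```sql
-- Option 1: shrink in place (the segment must be ONLINE for SHRINK)
ALTER ROLLBACK SEGMENT rbs07 SHRINK TO 100M;

-- Option 2: remake it "normal" from scratch
ALTER ROLLBACK SEGMENT rbs07 OFFLINE;
DROP ROLLBACK SEGMENT rbs07;
CREATE ROLLBACK SEGMENT rbs07 TABLESPACE rbs
  STORAGE (INITIAL 5M NEXT 5M MINEXTENTS 20 OPTIMAL 100M);
ALTER ROLLBACK SEGMENT rbs07 ONLINE;
```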
You don't want to have to spend time managing this unless someone is getting really outrageous in monolith size.
From: oracle-l-bounce_at_freelists.org [mailto:oracle-l-bounce_at_freelists.org] On Behalf Of Schauss, Peter
Sent: Tuesday, May 06, 2008 5:03 PM
Subject: Sizing rollback segments
This is for Oracle 22.214.171.124.
I have a data warehouse database, which I have inherited, on which we run an ETL every hour. It currently has 14 rollback segments, sized:
- initial_extent = 5 MB
- next_extent = 5 MB
- min_extents = 11
After the database has been up for a day, the number of extents on each segment ranges from 34 to 295. Right after the database was restarted a few days ago, several of the users' reports returned ORA-01555 (snapshot too old) errors. The problem seems to have gone away now that the rollback segments have increased in size.
So my question is: should I increase min_extents to 34 in hopes of minimizing the ORA-01555 errors after startup? I note that Oracle recommends setting this parameter no higher than 20, with some vague reference to their own testing. Does this mean that I should think about rebuilding the rollback segments with a larger extent size, 10 MB for example?
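For concreteness, the rebuild being contemplated would be something like the following (a sketch; rbs01 and the tablespace name are hypothetical, and you would repeat it for each of the 14 segments):

```sql
-- Recreate one segment at 10 MB extents instead of 5 MB,
-- keeping min_extents at or below the recommended 20
ALTER ROLLBACK SEGMENT rbs01 OFFLINE;
DROP ROLLBACK SEGMENT rbs01;
CREATE ROLLBACK SEGMENT rbs01 TABLESPACE rbs
  STORAGE (INITIAL 10M NEXT 10M MINEXTENTS 20);
ALTER ROLLBACK SEGMENT rbs01 ONLINE;
```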
Given that we run ETLs every hour, I have not seen the segments shrink appreciably, so I suspect that the OPTIMAL parameter is probably not relevant in my situation.
Any suggestions welcome.