Home -> Community -> Usenet -> c.d.o.server -> Re: Who's the big transaction?
"Joel Garry" <joel-garry_at_home.com> wrote in message
news:91884734.0211211543.533d0db4_at_posting.google.com...
> Everyone:
>
> I've straightened out a bunch of serious rbs problems (and other
> stuff, like insufficient memory and swap, some transactions that
> needed to be broken up, etc.), now I'm getting into the more subtle
> things.
>
> I decided to shrink the rbs's down to their initial 10M sizes (I had
> changed a lot of other stuff, removed optimal, didn't want to mess too
> much with this before things stabilized, intended to consider Mr.
> Rogers excellent posts very seriously when I did adjust them), and see
> what happens, figuring they'd all grow to some fairly constant size if
> there is a repetitive large transaction. After a month+, this is how
> things look:
>
> SQL> select usn, extents,hwmsize, shrinks, wraps, extends from
> v$rollstat;
>
> USN EXTENTS HWMSIZE SHRINKS WRAPS EXTENDS
> ---------- ---------- ---------- ---------- ---------- ----------
> 0 9 745472 0 0 0
> 26 5 10641408 0 9 0
> 1 5 10641408 0 11 0
> 2 8 17031168 0 15 0
> 3 5 10641408 0 9 0
> 4 13 27680768 0 8 0
> 5 5 10641408 0 10 0
> 6 6 12771328 0 7 0
> 7 5 10641408 0 10 0
> 8 45 95838208 0 57 40
> 9 5 10641408 0 6 0
> 10 16 34070528 0 16 0
> 11 38 80928768 0 10 0
> 12 8 17031168 0 6 0
> 13 5 10641408 0 8 0
> 14 5 10641408 0 11 0
>
> 16 rows selected.
>
> I was kind of expecting a smoother distribution across the rbs's. So
> question number 1: Does Oracle consider the rbs size when allocating
> to a transaction? I was under the distinct impression it did not. I
> guess it is possible that I did a better job than I expected breaking
> up the periodic huge updates... if the answer is "no," then Question
> Number 2 becomes, how do you catch an infrequent huge transaction?
Hi Joel,
No, I don't believe Oracle considers the size of the rollback segment. The last time I tested this (which I was with 8.1.6) Oracle used a least busy algorithm. That is, it picks the RBS with the fewest concurrent transactions. If there is a tie in which is currently least busiest, Oracle then picks the RBS which has the next USN from the last RBS selected. This is easily tested by looking at the pattern of xidusn in v$transaction as transactions are added/removed.
I'm not sure what the situation is with 9i, but I would be surprised if the algorithm were much different or if Oracle didn't spread its load in a similar manner.
With regard to catching large transactions, there are a couple of things to consider. Firstly, what do you mean by a large transaction? There are two types that can cause your RBS to grow.
First, there's the transaction that is large in terms of the amount of work it does and the amount of undo it generates. Obviously, if this transaction, in combination with the other transactions concurrently assigned to the RBS, generates more undo than the RBS can contain, then yes, the RBS will grow. Two options here. One is to size all your RBSs to be big enough to cope with your largest transaction. That way it doesn't matter which one gets picked, it can cope, and you won't incur the possible performance cost of having it dynamically grow on you. The second option is to pre-create one big mamma of an RBS and use the SET TRANSACTION statement before the large transaction to preassign it to the large RBS. There are well documented pluses and minuses with both approaches.
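A sketch of the second option (the segment name big_rbs and the table are just placeholders; note that SET TRANSACTION must be the first statement of the transaction):

```sql
-- Pin the next transaction to the big rollback segment.
-- This must be the very first statement of the transaction.
SET TRANSACTION USE ROLLBACK SEGMENT big_rbs;

UPDATE big_table SET some_col = some_col * 1.1;  -- the huge update

COMMIT;  -- after the commit, RBS assignment reverts to normal
```

If anything (even a SELECT inside some tools) starts a transaction first, the SET TRANSACTION will fail with ORA-1453, so it pays to issue a COMMIT immediately beforehand.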
Then there's the transaction that is large in terms of the time it takes to complete. It doesn't necessarily need to do much work, but if such a "blocking transaction" doesn't commit/rollback within the time it takes Oracle to whiz around the extents of the RBS, Oracle will be forced to grow the RBS. The solution here is obviously to commit/rollback as soon as possible after the logical completion of the transaction. An ad-hoc update of a row, followed by a two-hour lunch, followed by a boring management meeting, followed by the commit, is not recommended.
>
> usn 11 and usn 4 grew to their size within a couple of days of the
> shrinkage. usn 8 did very little until a few days ago.
It only takes one large transaction of either type described above to cause these RBSs to grow.
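To catch the culprit while it's still running, a query along these lines (a sketch, not something I've run on your system) shows who is generating the undo; used_ublk is the number of undo blocks the transaction currently holds:

```sql
-- Active transactions ranked by undo consumed (8i data-dictionary views)
SELECT s.sid, s.username, r.name AS rbs_name,
       t.used_ublk, t.start_time
  FROM v$transaction t, v$session s, v$rollname r
 WHERE t.addr   = s.taddr
   AND t.xidusn = r.usn
 ORDER BY t.used_ublk DESC;
```

Run from your cron and appended to a log, the infrequent huge transaction will stand out as the one that keeps appearing with a large and growing used_ublk.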
>
> There had been no ORA-1555's until this morning, when I was running a
> huge report (usn 2 is the one that blew up...). I hadn't been
> worrying too much about huge reports since I hope to be putting a
> reporting instance on another box. But until that actually happens...
>
> I am writing a cron that would periodically query v$rollstat, and if
> it sees a change, run the script from metalink Note:1039126.6 to see
> who's on what rollback, the idea being a big transaction should be
> around for at least a few minutes...
>
> Anyone have another solution? I'm guessing
> http://www.ixora.com.au/scripts/rbs.htm#rollback_reuse_time and such
> are not useful, since the undo is so inconsistent, although I haven't
> tried it. I see this as the converse problem.
I'm not entirely sure what the problem is that you are trying to resolve. Is it that you want all the RBSs to be the same size?
If so, there are two general solutions. One, like I said before, is to size them all to the same large size (in your example, around 100M each) so that they can cope with these large transactions. That way it doesn't matter which RBS is chosen, any will cope. Beforehand, I would check your waits/gets ratio and determine the appropriate *number* of RBSs. Advantages: you can stop worrying about your RBSs growing dynamically (to a point), and you can also stop worrying about 1555s (to a point). Disadvantages: you will require a substantial amount of disk, and you will possibly induce a fair amount of extra I/O while Oracle writes all these RBS blocks to disk.
Solution two is to set an appropriate OPTIMAL value (in your example, around 10M) and have Oracle automatically shrink back any RBS that has a large transaction/volume run against it. That way, all your RBSs stay at approximately the same size. Check the number of growths/shrinks to make sure that 10M is not too small; you don't want an excessive number here. Advantages: you save a substantial amount of storage, and you potentially reduce the amount of I/O associated with the RBSs (check v$filestat to confirm). Disadvantages: you have RBSs dynamically allocating/deallocating extents on the fly (if too frequent, you really need to increase OPTIMAL), and you increase the odds of 1555s (again, if too frequent, you really need to increase OPTIMAL). You can also use the SET TRANSACTION command with a large RBS to reduce the incidence of your smaller RBSs having to cope with large transactions.
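A sketch of solution two (segment name rbs01 is illustrative; OPTIMAL can be changed online with ALTER ROLLBACK SEGMENT):

```sql
-- Let this RBS shrink back towards ~10M after a big transaction
ALTER ROLLBACK SEGMENT rbs01 STORAGE (OPTIMAL 10M);

-- Afterwards, keep an eye on how hard it is working to stay there
SELECT usn, optsize, shrinks, aveshrink, wraps, extends
  FROM v$rollstat;
```

If shrinks climbs quickly or aveshrink is large relative to optsize, OPTIMAL is set too low for your workload and should be raised.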
>
> At some point, I've read most all the stuff out there on the subject,
> but most of it is buried deep within the folds of my brain, haven't
> refreshed recently. Pointers gladly accepted.
My thoughts, for what they're worth.
Cheers
Richard
>
> 8.1.7 on 4 processor K570, hp-ux 11.0, some kinda RAID which may be
> changing shortly. Generally processor bound, will be converting to a
> bigger and faster box. Packaged "Thin client" finance/manufacturing
> system with some customization.
>
> open_cursors = 7000
> db_block_buffers = 73728
> shared_pool_size = 78643200
> large_pool_size = 614400
> java_pool_size = 0
> log_checkpoint_interval = 10000
> log_checkpoint_timeout = 1800
> processes = 300
> db_block_size = 8192
>
> No other strange init.ora's set. Yes, open_cursors must be that.
>
> TIA
>
> jg
> --
> @home is bogus.
> http://www.garry.to
Received on Fri Nov 22 2002 - 09:08:55 CST