
Re: rollback segment too small

From: Stephane Faroult <sfaroult_at_oriolecorp.com>
Date: Thu, 22 Apr 1999 21:12:21 -0700
Message-ID: <371FF325.42C2@oriolecorp.com>


Bob hammond wrote:
>
> Hi, everybody!
> I have 6 questions about very big and very active database.
> I hope other people already met (and solved) the problems
> I am fighting with. I will appreciate any definite answers,
> hints, comments, proposals, etc.
> Thank you in advance.
> Andrey Radul
> aradul_at_mirror-image.com
>
> We have ORACLE database under UNIX on Sun Solaris.
> I work with table ENTRIES that has about 13 million records.
> From 2 to 5 million records change every day.
> I used OCI to execute the SQL statement
>
> select count(*) from ENTRIES
>
> and it gave me ORACLE error message ORA-1555 "rollback segment too small".

This is the old 'snapshot too old' message which has puzzled so many Oracle users. Oracle attempts to give you the image of your table as of the time when your query started, fetching from the rollback segments the old image of each database block you access, even if the transaction which changed it was committed long before you reach the block. If you have, say, 5 million updates in 10 hours, that's close to 140 per second. Counting 13 million rows takes 'some time', and you probably update a lot during this time; transactions commit, Oracle wraps around the rollback segment, and the image of the block as it was when you started the query is gone by the time you need it.

> 1.What are possible ways to make sure that simple SQL statements will not
> crash?

  1. Increase the size of the rollback segments
  2. Try to work faster and to diminish your 'before image' requirements. I presume that you have a primary key on your table? Don't tell me the table is totally unindexed to speed up inserts! Basically, the question is whether by 'updates' you mean actual SQL updates, or something more general which includes inserts and deletes as well. If it's (mostly) true updates, then most obviously the primary key, if you have one, mustn't be touched - the primary key index is likely to stay more or less the same for a much longer time than the table. select count(primary_key) from entries should be enough to make Oracle understand that it should scan the primary key index rather than the table (if it doesn't, use hints - see the first sketch after this list). In fact I am surprised the optimizer doesn't do this spontaneously, but the ways of the optimizer are sometimes strange to mere mortals. The primary key may not be the best of indexes, by the way. Your ideal index is the smallest in size (look at DBA_SEGMENTS) of all indexes on NOT NULL columns - NOT NULL so as to be sure to have as many entries in the index as there are rows in the table - provided, once again, that the said column is not too heavily updated.
  3. Cheat. An obvious solution would be a one-row table containing the number of rows, maintained by triggers. Given your rate of updates, this is certainly not a solution to advise - the triggers would add noticeable overhead, and you can bet you would have a lot of contention on that single row (a rough sketch follows the list anyway). But perhaps you can implement something more or less similar without relying on Oracle, say a shared memory area where the processes which insert or delete rows would update one or several counters accessible to your program. This of course implies serious (re)development.

> Currently I extract (approximate) number of rows in ENTRIES table using
> SQL commands
>
> analyze table ENTRIES estimate statistics sample 1 percent
>
> and
>
> select num_rows from TABS where table_name = 'ENTRIES'
>
> It works for the moment.
>
> 2.How I could make sure that it is not going to crash later?
> (our database tend to grow very fast)

 Working with Oracle is more a matter of faith than cold certainty ;-)

> 3.I am working on a program that should adjust it's behavior depending on
> whether our tables are close to be full. The only way to get this
> information
> I have found so far is to use "analyze table ..." statement
> (dbms_space.unused_space procedure is based on the same "analyze table ..."
> statements). Am I correct? Are there other (more reliable) methods to
> get (estimate) the same information?

  In fact you should think in terms of used blocks rather than number of rows. The block count is stored (by ANALYZE) in DBA_TABLES, and what makes it interesting is that it is more or less the high-water mark, which is kept in the first block of the table - which means that you get the same value whatever the size of the sample. If you rely on this, you can use a fixed, ridiculously low sample size (say 10 rows) in your ANALYZE statement - which will protect you against performance degradation as the table grows. A sketch follows. Note that relying on a high-water mark implicitly assumes that your tables grow in a more or less orderly fashion, which may not quite be the case. Be careful with the PCTUSED/PCTFREE parameters.

> 4. Suppose, there are no other ways to get (estimate) the percentage of how
> full our database is.

  Database or tables? Do not mix both concepts. Free space in a database is unallocated space. But space allocated to a table does not necessarily contain data - and may never contain data if you have grossly overestimated space requirements.

> What precautions should I take to make sure that
> my "analyze table ..." statements are not going to crash in the future?
> For instance, suppose I will use a dedicated rollback segment for
> "analyze table ..." statements.

 Bad idea. ANALYZE is a DDL statement; if it needs some rollback storage (I am not sure it does) then it will use the SYSTEM rollback segment. Your rollback segment problem doesn't come from YOUR query, it comes from the rollback segments used by the transactions which update the table you are querying. You cannot have any control there.

> Should I increase the size of that rollback segment proportionally to size
> of
> the whole database?

 The relationship is probably much more complex than that. Unfortunately, with rollback segments it's chiefly a matter of trial and error. Perhaps you should sample v$rollstat at regular intervals and see how long, on average, it takes to wrap around a rollback segment; if you know how long your SELECT takes, then you could resize all your rollback segments (except SYSTEM) so that it takes them, say, twice that time to wrap around - just an idea.

> 5. Another idea is to recreate the ENTRIES table as partitioned table.
> Will partitioning help to solve the reliability problem?

 Partitioning is a 'divide and conquer' approach. With your volumes, you should certainly consider it. By the way, there is an interesting paper on partitioning on the oriolecorp site (LIBRARY section), with tests carried out on volumes of your order of magnitude (a telecoms operator). I would certainly try it in your case.

> 6. I can catch ORACLE error if "analyze table ..." crashes.
> Are there any absolutely reliable ways (which do not crash regardless of the
> table
> size ) to do emergency cleanup of our table(s)? In the case of emergency
> I would like to delete, say, 10% of all records.

 There is nothing absolutely reliable in this world. Trying to delete 10% of 13 million rows in an emergency situation is the Oracle equivalent of suicide if you do an actual DELETE - it also uses space in the rollback segments, and it takes a very, very long time on such volumes. Use partitioning and TRUNCATE a partition - it's very fast and doesn't use rollback segments.
--
Regards,

  Stéphane Faroult
  Oriole Corporation



http://www.oriolecorp.com, designed by Oracle DBAs for Oracle DBAs