Home -> Community -> Usenet -> c.d.o.server -> Re: rollback segment too small
Bob Hammond wrote:
>
> Hi, everybody!
> I have 6 questions about very big and very active database.
> I hope other people already met (and solved) the problems
> I am fighting with. I will appreciate any definite answers,
> hints, comments, proposals, etc.
> Thank you in advance.
> Andrey Radul
> aradul_at_mirror-image.com
>
> We have ORACLE database under UNIX on Sun Solaris.
> I work with table ENTRIES that has about 13 million records.
> From 2 to 5 million records change every day.
> I used OCI to execute the SQL statement
>
> select count(*) from ENTRIES
>
> and it gave me ORACLE error message ORA-1555 "rollback segment too small".
This is the old 'snapshot too old' message which has puzzled so many Oracle users. Oracle tries to give you a consistent image of your table as of the moment your query started, fetching from the rollback segments the old image of each database block you access, even if the transaction that changed it was committed long before you reached the block. If you have, say, 5 million updates in 10 hours, that is close to 140 per second. Counting 13 million rows takes 'some time', and you probably update a lot during that time, which means that transactions are committed, Oracle wraps around the rollback segment, and the before-image from the start of your query is gone by the time you need it.
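The arithmetic above, as a quick sanity check (figures taken from the post):

```sql
-- 5 million changes spread over a 10-hour window is about 139/second.
SELECT ROUND(5000000 / (10 * 3600)) AS updates_per_second
FROM   dual;
```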
> 1.What are possible ways to make sure that simple SQL statements will not
> crash?
> Currently I extract (approximate) number of rows in ENTRIES table using
> SQL commands
>
> analyze table ENTRIES estimate statistics sample 1 percent
>
> and
>
> select num_rows from TABS where table_name = 'ENTRIES'
>
> It works for the moment.
>
> 2.How could I make sure that it is not going to crash later?
> (our database tend to grow very fast)
Working with Oracle is more a matter of faith than cold certainty ;-)
> 3.I am working on a program that should adjust its behavior depending on
> whether our tables are close to be full. The only way to get this
> information
> I have found so far is to use "analyze table ..." statement
> (dbms_space.unused_space procedure is based on the same "analyze table ..."
> statements). Am I correct? Are there other (more reliable) methods to
> get (estimate) the same information?
In fact you should think more in terms of used blocks rather than
number of rows. This number is stored (by ANALYZE) in DBA_TABLES, but
what makes it interesting is that it is more or less a high-water mark
which is stored in the first block of the table - which means that you
get the same value whatever the size of the sample. If you rely on this,
then you can have a fixed, ridiculously low sample size (say 10 rows)
with your ANALYZE statement - which will protect you against performance
degradation when the table size increases. Now relying on a high-water
mark implicitly assumes that your tables grow in a more or less
orderly fashion, which may not quite be the case. Be careful with the
PCTUSED/PCTFREE parameters.
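A minimal sketch of the approach just described (the table name is from the post; the 10-row sample is the suggestion above, kept tiny on purpose since the block counts come from the high-water mark anyway):

```sql
-- Cheap, size-independent ANALYZE: the row sample is ridiculously
-- small, but BLOCKS still reflects the segment high-water mark.
ANALYZE TABLE entries ESTIMATE STATISTICS SAMPLE 10 ROWS;

-- Used blocks are a better growth indicator than row counts.
SELECT blocks, empty_blocks
FROM   user_tables
WHERE  table_name = 'ENTRIES';
```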
> 4. Suppose, there are no other ways to get (estimate) the percentage of how
> full our database is.
Database or tables? Do not mix both concepts. Free space in a database
is unallocated space. But space allocated to a table does not
necessarily contain data - and may never contain data if you have
grossly overestimated space requirements.
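The two concepts can be seen side by side with dictionary queries (a sketch; the DBA views require appropriate privileges):

```sql
-- Free space in the DATABASE: extents not allocated to any segment.
SELECT tablespace_name, SUM(bytes) AS free_bytes
FROM   dba_free_space
GROUP  BY tablespace_name;

-- Space allocated to a TABLE, whether or not it holds data yet.
SELECT segment_name, bytes
FROM   dba_segments
WHERE  segment_name = 'ENTRIES';
```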
> What precautions should I take to make sure that
> my "analyze table ..." statements are not going to crash in the future?
> For instance, suppose I will use a dedicated rollback segment for
> "analyze table ..." statements.
Bad idea. ANALYZE is a DDL statement; if it needs some rollback storage
(I am not sure it does) then it will use the SYSTEM rollback segment.
Your rollback segment problem doesn't come from YOUR query, it comes
from the rollback segments used by the transactions which update the
table you are querying. You cannot have any control there.
> Should I increase the size of that rollback segment proportionally to size
> of
> the whole database?
The relationship is probably much more complex than that. Unfortunately
with rollback segments it's chiefly a matter of trial and error. Perhaps
you should sample v$rollstat at regular intervals and see how long, on
average, it takes to wrap around a rollback segment - if you know how
long your SELECT takes, then you should resize all your rollback
segments (except SYSTEM) so that it takes them say twice that time to
wrap around - just an idea.
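A sketch of that monitoring idea (columns are from the v$rollstat and v$rollname views; run it twice, some minutes apart, and compare the WRAPS counts to estimate how fast each segment cycles):

```sql
-- Snapshot rollback segment activity; the delta in WRAPS between two
-- snapshots divided by the elapsed time gives the wrap frequency.
SELECT n.name, s.rssize, s.writes, s.wraps, s.extends
FROM   v$rollstat s, v$rollname n
WHERE  s.usn = n.usn;
```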
> 5. Another idea is to recreate the ENTRIES table as partitioned table.
> Will partitioning help to solve the reliability problem?
Partitioning is a 'divide and conquer' approach. With your volumes you
should certainly consider it. By the way, there is an interesting paper
on partitioning on the oriolecorp site (LIBRARY section), with tests
carried out on volumes of your order of magnitude, for a telecoms
operator. I would certainly try it in your case.
> 6. I can catch ORACLE error if "analyze table ..." crashes.
> Are there any absolutely reliable ways (which do not crash regardless of the
> table
> size ) to do emergency cleanup of our table(s)? In the case of emergency
> I would like to delete, say, 10% of all records.
There is nothing absolutely reliable in this world. Trying to delete
10% of 13 million rows in an emergency situation is the Oracle
equivalent of suicide if you do an actual DELETE - it also consumes
rollback segment space and takes a very, very long time on such
volumes. Use partitioning and TRUNCATE a partition - it's very fast and
doesn't use rollback segments.
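A sketch of that scheme (table layout and partition names are hypothetical; Oracle 8+ range partitioning assumed):

```sql
-- Hypothetical layout: partition ENTRIES by date so that old data
-- can be dropped wholesale instead of row by row.
CREATE TABLE entries (
  entry_id    NUMBER,
  entry_date  DATE,
  payload     VARCHAR2(200)
)
PARTITION BY RANGE (entry_date) (
  PARTITION p_1999_q1 VALUES LESS THAN (TO_DATE('1999-04-01','YYYY-MM-DD')),
  PARTITION p_1999_q2 VALUES LESS THAN (TO_DATE('1999-07-01','YYYY-MM-DD')),
  PARTITION p_max     VALUES LESS THAN (MAXVALUE)
);

-- Emergency cleanup: near-instant, no rollback generated for the rows.
ALTER TABLE entries TRUNCATE PARTITION p_1999_q1;
```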
--
Regards,
Stéphane Faroult
Oriole Corporation