Re: Very Large Table extending after mass deletes
Thanks for the compliment.
First thing, if you switch to Oracle 8, see if
you can change the LONG RAW to a BLOB -
the storage mechanism is much more efficient
at reusing space, and the features are also much
more flexible. The problem is that you will probably
need to recode your application to cope.
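For example, if you can get as far as 8i, the TO_LOB() operator
exists for exactly this conversion. A minimal sketch, with made-up
table and column names:

    create table new_table as
    select
        id,                              -- structured columns carried across
        to_lob(old_long_raw) blob_col    -- LONG RAW converted to a BLOB
    from old_table;

(TO_LOB() is only legal in a CREATE TABLE ... AS SELECT or an
INSERT ... SELECT, so you rebuild the table rather than convert it
in place.)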
As a 'least worst' strategy for LONGs, you
could consider splitting the table vertically
into two: the structured data bit, and the
long column with a meaningless key to
join them. (BTW, it is a good idea to store
a 'length' column alongside the long column,
as Oracle does internally for its VIEW$
table.)
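A sketch of the sort of split I mean - the table and column names
are invented for the example:

    create table doc_struct (
        doc_id     number primary key,   -- the meaningless join key
        created    date,
        status     varchar2(10)
    );

    create table doc_long (
        doc_id     number primary key
                   references doc_struct(doc_id),
        doc_len    number,               -- length column, a la view$
        doc_body   long raw
    );

Queries that only need the structured data never touch the long
segment, and the long table can be given its own storage settings.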
Since you don't do any updates, set
the PCTFREE to 0.
Working on the line 'most of the rows
that use this column have 1K to 3K of data',
on an 8K block you could set the PCTUSED
to about 60%. It's a bit hit and miss with
row lengths that can vary wildly, but the idea
is this:
At 60%, you need to delete 3 small rows or 1 large row to get onto the free list. If you delete only two small rows, or a medium size row, the block doesn't get on the free list so you waste 20-40 percent of the space. This may be less wastage and fragmentation than you are suffering at present.
At 60%, if a block is on the free list and you throw a large row at it, it will fit, instead of knocking 5 blocks off the free list and pulling more clean blocks under the high water mark (and it is the large rows that are causing the problem at present).
There is nothing you can do about the enormous rows (100K to 400K) - they will always knock 5 blocks off the free list, unless you reduce the _release_insert_threshold (assuming it still does what I think it used to ;). You might also increase the _walk_insert_threshold a bit to give the 3K rows a slightly better chance of finding a home. You have to remember, though, that these parameters are (a) hidden for a reason and (b) global, and may therefore have undesirable performance side-effects when applied to other tables in the system.
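If you do decide to experiment (on a test system first, please),
they go in the init.ora like any other parameter. The values here
are purely illustrative, not recommendations:

    # init.ora - hidden parameters, values for illustration only
    _release_insert_threshold = 3    # blocks unlinked per failed insert
                                     # (default 5, if memory serves)
    _walk_insert_threshold = 5       # blocks to walk along the free list
                                     # before giving up (default 0)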
You can change PCTUSED and PCTFREE dynamically, by the way - the change affects all new blocks, and any old blocks the next time they are updated. So the situation may improve itself over time if all the rows go through a delete/insert cycle at some point.
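For instance (hypothetical table name):

    alter table big_table pctfree 0 pctused 60;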
In answer to the question about column position - yes, LONG columns are always moved to the end of the row. This is a strategy designed to avoid having to do a chained row fetch to collect structured data from a point past the long column. You can see this in sys.col$, where the col# and the segcol# differ - col# is the user's perception of the column position, segcol# is the internal position of the column. This breaks, of course, if you add columns to the table after its initial creation.
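Something like this (run as SYS; the table name is made up, and I am
ignoring the owner for brevity) shows the two orderings side by side:

    select c.name, c.col#, c.segcol#
    from   sys.col$ c, sys.obj$ o
    where  o.name = 'BIG_TABLE'
    and    c.obj# = o.obj#
    order  by c.segcol#;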
--
Jonathan Lewis
Yet another Oracle-related web site: http://www.jlcomp.demon.co.uk
Jane Schrock wrote in message <3899fb2c_2_at_news3.prserv.net>...
>Not surprising, Jonathan, you are right on. I love your website, by the
>way. I looked at some column lengths of the LONG RAW column and about 1 in
>10 rows has data in this column. The size ranges from about 1K up to 43K.
>Most of the columns that have data are in the 1K to 3K range for this
>column. 9 out of 10 rows have nothing in this column. There is also a
>VARCHAR2(1024) in this table. About 1 in 50 rows have this column greater
>than 400K. The rest are 100K or less. I figured this out only from a
>sampling of the rows, but it's probably close enough to explain the table
>extension.
>
Received on Thu Feb 03 2000 - 16:58:28 CST