
Re: Very Large Table extending after mass deletes

From: Kevin A Lewis <Kevin_A_Lewis_at_Hotmail.com>
Date: Mon, 07 Feb 2000 10:11:45 GMT
Message-ID: <BTwn4.6$i11.808@newreader.ukcore.bt.net>


Where in Metalink? Just using the title in a search doesn't find it. Does it have an article reference?

Is it in a Library or a Forum, or where else do I find it?

Regards

--
Kevin A Lewis (BOCM PAULS LTD) - Animal Feed Manufacturer - Ipswich, United Kingdom

                        <Kevin_A_Lewis_at_Hotmail.com>

The views expressed herein by the author of this document are not necessarily those of BOCM PAULS Ltd.

Jane Schrock <jschrock_at_us.ibm.com> wrote in message news:389b23a7_2_at_news3.prserv.net...
> I was out on Oracle's MetaLink today and looked up the article "How to
> stop defragmenting and start living" recommended by Steve Adams in
> another post. This article is fabulous! I haven't read it in detail yet,
> but the parts I did read are unbelievable. It talks about my problem to
> the tee. According to that article, I should set my pctused for this
> table to be 1 and make my pctfree very small. This makes perfect sense.
> It's amazing how we are led to the information we need. I was out in
> Metalink looking for information on another problem and just decided to
> look up this article for later reading. It turns out it was exactly the
> article I need now!
>
> Jane
>
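For reference, the kind of setting the article recommends can be put in place with a simple storage-clause change. A minimal sketch, assuming a hypothetical table name BIG_TABLE and an illustrative PCTFREE value; it changes the table-level thresholds only and does not repack data already in the blocks:

    -- Keep almost no block space reserved for updates, and keep blocks
    -- off the free list until they are nearly empty.
    ALTER TABLE big_table PCTFREE 2 PCTUSED 1;
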
> Jane Schrock <jschrock_at_us.ibm.com> wrote in message
> news:3899fb2c_2_at_news3.prserv.net...
> > Not surprising, Jonathan, you are right on. I love your website, by the
> > way. I looked at some column lengths of the LONG RAW column and about
> > 1 in 10 rows has data in this column. The size ranges from about 1K up
> > to 43K. Most of the columns that have data are in the 1K to 3K range
> > for this column. 9 out of 10 rows have nothing in this column. There is
> > also a VARCHAR2(1024) in this table. About 1 in 50 rows have this
> > column greater than 400K. The rest are 100K or less. I figured this out
> > only from a sampling of the rows, but it's probably close enough to
> > explain the table extension.
> >
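On checking sizes like this: a rough confirmation of the row-length spread can also be had from the data dictionary and a sampling query. A sketch only, with hypothetical table and column names; the LONG RAW column itself cannot be measured in plain SQL, so only the other columns are covered:

    -- Estimate statistics, then look at the averages Oracle recorded.
    ANALYZE TABLE big_table ESTIMATE STATISTICS SAMPLE 10 PERCENT;

    SELECT avg_row_len, blocks, empty_blocks, chain_cnt
    FROM   user_tables
    WHERE  table_name = 'BIG_TABLE';

    -- Rough size distribution of the VARCHAR2 column, in 256-byte buckets.
    SELECT TRUNC(VSIZE(text_col) / 256) * 256 bucket, COUNT(*)
    FROM   big_table
    GROUP  BY TRUNC(VSIZE(text_col) / 256) * 256;
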
> > It appears that although a row is deleted from a block, the space
> > freed up in the block is not sufficient to satisfy most insert
> > requests, especially if Oracle only tries 5 blocks on the free list
> > before going to the clean blocks. There may be some somewhere in the
> > unwieldy free list, but I suppose searching the entire thing would be
> > unreasonable. I believe I probably have a lot of junk on my free list
> > because of improperly set block utilization parameters, pctfree of 10
> > and pctused of 90. This is ridiculous for this table. There is no way
> > a block with pctused just below 90 could accommodate a row in this
> > table. However, we probably never have the actual pctused in the 90's
> > because of the average row size. If I get a chance to recreate this
> > table, would it be prudent to drop pctused to at most 80, maybe even
> > less, and drop pctfree to 5? (There are no updates to this table.)
> > Would doing this put only useful blocks on the free list? Also should
> > I add more process free lists to improve concurrency? If most of my
> > pre-used blocks are not reusable, then I would say yes to that.
> > Comments?
> >
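If the table is rebuilt, the block-usage parameters and the number of free lists have to go into the new table's definition. A sketch only, with hypothetical column names and purely illustrative values (FREELISTS would be sized to the number of concurrently inserting processes):

    CREATE TABLE big_table_new (
        id        NUMBER,
        text_col  VARCHAR2(1024),
        long_col  LONG RAW
    )
    PCTFREE 5
    PCTUSED 40
    STORAGE (FREELISTS 4);

    -- The data itself would have to be moved with export/import (or by the
    -- application), since CREATE TABLE ... AS SELECT cannot copy a LONG RAW.
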
> > Also as I was examining some block dumps, I noticed that the LONG RAW
> > column, when populated, always appears at the end of the row even
> > though it was not defined as the last column. Is Oracle doing this
> > automatically? If I get a chance to recreate this table, should I put
> > the LONG RAW as the last column?
> >
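For anyone wanting to repeat that kind of inspection, a block can be written to a trace file with a dump command. The file and block numbers below are placeholders; they can be worked out from a row's ROWID:

    -- Dumps the named block of the named datafile to a user trace file.
    ALTER SYSTEM DUMP DATAFILE 7 BLOCK 12345;
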
> > I believe I now understand why the table is extending. My conclusion
> > is that this table is going to blow at some point if I don't reorg it
> > or find more space for it to grow into more of a monster than it
> > already is. Now that I understand what's going on I can try to predict
> > its breaking point and come up with some creative solutions for my
> > client that can be implemented before it blows. Because of the nature
> > of the way the application uses the data in this table I do have some
> > ideas. Unfortunately, whatever I do, the table will probably get back
> > to the state it's in now. Unless we can go to Oracle 8 and partition,
> > I'm screwed.
> >
> > Jane
> >
> > Jonathan Lewis <jonathan_at_jlcomp.demon.co.uk> wrote in message
> > news:949529819.25269.0.nnrp-10.9e984b29_at_news.demon.co.uk...
> > >
> > > Tables with LONGs in them tend to be major pains.
> > > Even though your average row length is only 895
> > > bytes (I assume you have 16K blocks by the
> > > way), is it possible that a significant number of rows
> > > are much longer - say one in 12 is up to 4K ?
> > >
> > > ** Just seen your follow up post - my guess
> > > ** about the block size was wrong, but the
> > > ** general theory still applies.
> > >
> > > I haven't checked the following notes recently, but
> > > they applied to earlier versions of Oracle and may
> > > still be true.
> > >
> > > When you try to insert a row into a 'pre-used' block on
> > > the free list, if it is too long then Oracle removes that
> > > block from the free list and tries the next block - for a
> > > total of 5; after 5 discards Oracle bumps the high water
> > > mark to get some clean blocks, and does the insert
> > > there.
> > >
> > > Therefore - if you have the odd row of 4K, and the free
> > > space in your free block is less than 5.5K (the 4K
> > > of the row plus the 10% of a 16K block) the block
> > > is bumped off the free list. It wouldn't take many
> > > very long rows to cause lots of blocks to have
> > > a 25% wastage in them.
> > >
> > >
> > > There are a few events you can set to watch the activity
> > > of the free lists (not guaranteed to work on all versions
> > > on all platforms).
> > >
> > >
> > > 10083 - trace when the hwm has been bumped
> > > 10084 - trace when the free list becomes empty
> > > 10045 - may trace all free-list update operations.
> > >
> > > It's a long time since I've played about with these,
> > > so I don't know what they do on this version of
> > > Oracle.
> > >
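As a rough illustration of how one of these might be switched on at session level (the events are undocumented, so behaviour is version- and platform-dependent, and the level shown is just a guess):

    ALTER SESSION SET EVENTS '10083 trace name context forever, level 1';
    -- ... perform the inserts of interest, then switch the event off:
    ALTER SESSION SET EVENTS '10083 trace name context off';
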
> > > There are also some hidden parameters which may
> > > have an effect if the problem is as I described:
> > >
> > > _release_insert_threshold
> > >
> > > If you reduce this, Oracle doesn't dump 5 blocks before
> > > calling for a new one, so you don't waste too many
> > > blocks with space for shorter rows.
> > >
> > > _bump_highwater_mark_count
> > >
> > > If you reduce this, Oracle allocates fewer blocks above
> > > the hwm when it decides to call for a clean block - so
> > > you get to retry some dirty blocks sooner.
> > >
> > > _walk_insert_threshold
> > >
> > > How far to step through the free list before giving up
> > > and calling for clean blocks. If you reduce this Oracle
> > > gives up searching sooner.
> > >
> > > I think they all default to 5.
> > >
> > >
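As a sketch only, these would normally go into the init.ora parameter file. The values below are purely illustrative, and since the parameters are undocumented they may not exist or behave this way on every release:

    # init.ora - hidden free-list parameters (illustrative values only)
    _walk_insert_threshold      = 3
    _release_insert_threshold   = 3
    _bump_highwater_mark_count  = 3
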
> > > Experiment at your own risk - but your first check should
> > > be on the extremes of variation in the length of the rows.
> > >
> > > --
> > >
> > > Jonathan Lewis
> > > Yet another Oracle-related web site: http://www.jlcomp.demon.co.uk
> > >
> >
> >
> >
>
>
Received on Mon Feb 07 2000 - 04:11:45 CST
