Re: sqllder hangs at 99-100% CPU on select of fet$

From: Don Seiler <don_at_seiler.us>
Date: Wed, 10 Sep 2003 00:48:23 GMT
Message-ID: <pan.2003.09.10.00.49.05.905639@seiler.us>


In-line responses throughout:

On Wed, 10 Sep 2003 09:20:45 +1000, Howard J. Rogers wrote:
> First, try not to get too wound-up about Sybrand's ad hominem attacks on
> you.

Thanks. Like I said to him, I'm not a certified anything and would welcome some formal Oracle training. I can be unnecessarily harsh sometimes as well. :p

> Second, uet$ and fet$ are two tables in the data dictionary that are used to
> search for Used ExTents and Free ExTents. When you do a mass load like this,
> your table runs out of space, and has to extend. Therefore, it has to run
> off to fet$ to find some free space that it can use to extend into.

I realize that this is just a small bandage, but would adding a datafile allow things to continue? Again, I know the problem is serious and needs to be addressed ASAP, but some operations need to run right now, and if I can add a block of contiguous free space in the form of new datafiles, then I would like to do that immediately and start planning for the remedies you suggest below.
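
Just so I'm clear about the band-aid I have in mind, it would be something along these lines (only a sketch; the tablespace name, file path, and size are made up):

    -- Sketch only: tablespace name, file path, and size are placeholders.
    -- A new datafile gives the tablespace one big contiguous chunk of
    -- free space for the loading table to extend into.
    ALTER TABLESPACE data_ts
      ADD DATAFILE '/u02/oradata/PROD/data_ts02.dbf' SIZE 500M;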

> Now: if this were 8i or 9i, the use of locally managed tablespace would
> eliminate the visits to fet$ at a stroke (their main claim to fame). But
> you're on 7, so that's not an option.

Like I said, Oracle 9i is our chosen path for Q4. Obviously I'd also like to know of any fixes I should apply to the data while doing the upgrade, so I don't just bring old baggage along for the ride. Or will 9i automagically fix a lot of these issues?
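
(Noting it down for later: once we're on 9i, I gather the trick is to create the new tablespaces locally managed, roughly like the sketch below. Names and sizes are made up, and whether to go UNIFORM or AUTOALLOCATE is something I'd still have to read up on.)

    -- Sketch only (9i syntax); names and sizes are placeholders.
    -- Free space is tracked in bitmaps inside the datafiles, so extent
    -- allocation never has to visit fet$/uet$.
    CREATE TABLESPACE data_lmt
      DATAFILE '/u02/oradata/PROD/data_lmt01.dbf' SIZE 2000M
      EXTENT MANAGEMENT LOCAL UNIFORM SIZE 10M;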

> So: what causes huge amounts of time to be spent wading through fet$? Vast
> numbers of pieces of free space, none of which are considered large enough
> to be suitable for your table to extend into, because each piece of free
> space is recorded as a row in fet$. The more pieces of free space you have,
> the more rows in fet$. This is the classic definition of 'tablespace
> fragmentation', and your troubles with fet$ are the classic symptom of this
> particular disease.
>
> Note that this is a tablespace issue, not a sqlloader or a table-specific
> issue.

Again I wonder if I can provide a small crutch in the form of throwing more datafiles at the tablespace.
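
Meanwhile, to see how bad it actually is, I assume counting the rows in DBA_FREE_SPACE (which sits on top of fet$) per tablespace would show the fragmentation you describe, something like:

    -- Sketch: count and size the pieces of free space per tablespace.
    -- Many small pieces = the fragmentation described above.
    SELECT tablespace_name,
           COUNT(*)              AS free_pieces,
           MAX(bytes)/1024/1024  AS largest_piece_mb,
           SUM(bytes)/1024/1024  AS total_free_mb
      FROM dba_free_space
     GROUP BY tablespace_name
     ORDER BY free_pieces DESC;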

> Therefore, the other half of the story is to prevent the problem happening
> in the future. That means once you've exported all the tables, and dropped
> them, you should really re-create all the tables by hand, using consistent
> settings for INITIAL and NEXT for all tables. Then you can import with
> IGNORE=Y, and the tables will be re-populated. And since everything now
> comes in one extent size, this particular issue should never arise again in
> the future.
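
If I'm reading that right, the re-create step for each table would look roughly like this (a sketch only; table, columns, and extent sizes are placeholders, and I'd still have to pick sizes that suit our data):

    -- Sketch only: table, columns, and extent sizes are placeholders.
    -- Every table gets the same INITIAL/NEXT and PCTINCREASE 0, so all
    -- extents in the tablespace end up one size.
    CREATE TABLE orders
    ( order_id   NUMBER,
      order_date DATE
    )
    TABLESPACE data_ts
    STORAGE (INITIAL 10M NEXT 10M PCTINCREASE 0);

    -- Then reload the rows, e.g. (parameters are placeholders):
    --   imp userid=... file=exp_full.dmp ignore=y tables=orders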

[snip]
> By the way, because of the number of rows you already have in your
> tables, it is possible that export and import won't be particularly nice
> tools to use for the re-organisation job (because you might hit
> filesystem export dumpfile size limits, for example). Therefore,
> consider, too, the use of 'create table blahcopy tablespace ANOTHERONE
> as select * from blah'. That copies your data elsewhere within the
> database. Then you can drop table blah, re-create it with good extent
> sizes, and then 'insert into blah select * from blahcopy', followed by a
> drop of the copy table to tidy up. That's going to be a lot of disk
> space, probably, but I suspect will be quicker than the export-import
> route.

Yes, the main problem, no matter what, is finding actual disk space to hold another copy of the data in that tablespace. But your solution of re-setting the initial and next extents is definitely the route to go. Is there a way to calculate the optimal extent size?
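
For my own reference, I take the copy-and-reload shuffle you describe to be roughly this (a sketch; table and tablespace names are placeholders, and SCRATCH_TS is wherever I can scrape together the room):

    -- Sketch only: names are placeholders.
    -- 1. Park a copy of the data in another tablespace.
    CREATE TABLE orders_copy TABLESPACE scratch_ts
      AS SELECT * FROM orders;

    -- 2. Drop the original and re-create it with the same column list
    --    and the uniform STORAGE clause from the earlier sketch.
    DROP TABLE orders;
    -- (re-create ORDERS here)

    -- 3. Move the rows back and tidy up.
    INSERT INTO orders SELECT * FROM orders_copy;
    COMMIT;
    DROP TABLE orders_copy;

I realize the indexes, constraints, and grants on the original would go with the DROP and have to be re-built by hand afterwards.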

> All of which is quite subtle stuff, and it doesn't make you "clueless"
> just because you've never come across the issue before. As I said, try
> and ignore silly comments like those.

Thanks. It stings a little because I think I have come a long way on my own, but, as you say, I usually only have the leisure to do more research when problems occur.

Don.

Received on Tue Sep 09 2003 - 19:48:23 CDT
