Re: sqlldr hangs at 99-100% CPU on select of fet$

From: Howard J. Rogers <howardjr2000_at_yahoo.com.au>
Date: Wed, 10 Sep 2003 09:20:45 +1000
Message-ID: <3f5e60c8$0$14561$afc38c87@news.optusnet.com.au>

"Don Seiler" <don_at_seiler.us> wrote in message news:f6r7b.487$ev2.331412_at_newssrv26.news.prodigy.com...
> I made an earlier post but the info in that is incorrect and I can't
> wait for it to show up on usenet.
>
> Using Oracle 7.3.4.5 on HP-UX. I have an application that uses sqlldr
> to bulk load a table. The table usually has around 65 million rows.
> This application has worked fine in the past, but for some reason this
> week it isn't working. When the application is started, it might load
> some (once it got up to 90,000 records, now it does 0), but eventually
> it will hang, the oraclesid process for it will go to 99-100% CPU usage,
> and nothing will happen.
>
> I load up TOra to see what it is doing and this is the current SQL it
> shows me for that process:
>
> SELECT length
> FROM fet$
> WHERE file# = :1
> AND block# = :2
> AND ts# = :3
>
> I'm assuming that sqlldr does that on its own because my developers
> have no idea what fet$ is. This process locks fet$ and eventually will
> block SMON from accessing the table. This whole mess makes the database
> unusable until I kill the sqlldr's oraclesid process.
>
> Any remedies would be welcomed with open arms.
>

First, try not to get too wound up about Sybrand's ad hominem attacks on you.

Second, uet$ and fet$ are two tables in the data dictionary that record Used ExTents and Free ExTents respectively. When you do a mass load like this, your table runs out of space and has to extend. Therefore, Oracle has to run off to fet$ to find some free space that the table can extend into.
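
If you want to see how much housekeeping is involved, you can count the rows in those two tables directly. A minimal sketch, run as SYS (on 7, typically via 'connect internal' in svrmgrl); the table names are exactly the ones from your trace:

select 'uet$' tab_name, count(*) cnt from uet$
union all
select 'fet$', count(*) from fet$;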

Now: if this were 8i or 9i, the use of locally managed tablespaces would eliminate the visits to fet$ at a stroke (their main claim to fame). But you're on 7, so that's not an option.

So: what causes huge amounts of time to be spent wading through fet$? Vast numbers of pieces of free space, none of which is considered large enough for your table to extend into. Each piece of free space is recorded as a row in fet$, so the more pieces of free space you have, the more rows in fet$. This is the classic definition of 'tablespace fragmentation', and your trouble with fet$ is the classic symptom of this particular disease.
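
You can measure the damage without touching fet$ directly, because the DBA_FREE_SPACE view is built over it. A sketch of the sort of aggregate I mean (nothing in it is specific to your system):

select tablespace_name,
       count(*)   free_pieces,
       max(bytes) largest_piece,
       sum(bytes) total_free
from   dba_free_space
group  by tablespace_name;

A tablespace showing thousands of free pieces, with no single piece as big as your table's NEXT extent, is suffering from exactly this disease.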

Note that this is a tablespace issue, not a SQL*Loader or table-specific issue.

The cure for fragmentation in 7 is not nice. It's to export everything out of the tablespace, table by table. Then drop the lot (warning: if the fragmentation is accompanied by huge numbers of extents, dropping a table can take a long time, because that requires huge numbers of deletes and inserts on uet$ and fet$). Then re-import the lot.
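
The export step would look something like this at the OS prompt, run once per table (the username, table name and file name are made up for illustration; check the parameters against your 7.3.4 exp utility):

exp userid=scott/tiger tables=(bigtab) file=bigtab.dmp

Doing it one table at a time keeps each dumpfile small, and lets you re-import the tables individually later on.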

However, that's only half the story, because fragmentation can only ever occur in the first place when different tables within the one tablespace have different INITIAL and NEXT extent sizes. It's because extents of, say, 256K have been freed up in the past, but another table now wants to acquire extents of, say, 333K, that you end up having to hunt like mad for a single extent of the right size. If all extents were, say, 256K, then the freeing up of one table's extents (by earlier dropping or truncating) would have left 'holes' of exactly the right size for your current table to make use of. And then you wouldn't spend so long wandering around through fet$.
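
You can confirm the mismatch from DBA_TABLES: more than one distinct INITIAL/NEXT combination coming back for the one tablespace means mixed extent sizes. A sketch (the tablespace name DATA_TS is hypothetical):

select initial_extent, next_extent, count(*) num_tables
from   dba_tables
where  tablespace_name = 'DATA_TS'
group  by initial_extent, next_extent;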

Therefore, the other half of the story is to prevent the problem happening again in the future. That means once you've exported all the tables and dropped them, you should really re-create them all by hand, using consistent settings for INITIAL and NEXT throughout. Then you can import with IGNORE=Y, and the tables will be re-populated. And since everything now comes in one extent size, this particular issue should never arise again.
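
As a sketch of that re-creation step (the names, columns and sizes are illustrative only; pick an extent size that suits your data volumes, and note PCTINCREASE 0 so NEXT never drifts upwards):

create table bigtab
( id      number,          -- illustrative columns only;
  payload varchar2(30) )   -- use your real definitions
tablespace data_ts
storage (initial 256k next 256k pctincrease 0);

and then, back at the OS prompt:

imp userid=scott/tiger tables=(bigtab) file=bigtab.dmp ignore=y

IGNORE=Y makes import skip the 'table already exists' error and load the rows into the table you just built, rather than trying to re-create it with the old storage settings.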

Of course, this means your tablespace re-organisation is going to be a fairly labour-intensive task, and it's going to take time to do it. But once it's done, if it's done properly, it should be fixed for a long, long time to come.

By the way, because of the number of rows you already have in your tables, it is possible that export and import won't be particularly nice tools to use for the re-organisation job (you might hit filesystem dumpfile size limits, for example). Therefore, consider, too, the use of 'create table blahcopy tablespace ANOTHERONE as select * from blah'. That copies your data elsewhere within the database. Then you can drop table blah, re-create it with good extent sizes, and then 'insert into blah select * from blahcopy', followed by a drop of the copy table to tidy up. That's going to use a lot of disk space, probably, but I suspect it will be quicker than the export-import route.
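
Spelled out, that route looks something like this (blah, blahcopy and the tablespace names follow the placeholders above; the columns and storage figures are illustrative):

create table blahcopy tablespace anotherone as select * from blah;

drop table blah;

create table blah
( id      number,          -- illustrative columns only;
  payload varchar2(30) )   -- use your real definitions
tablespace originalts
storage (initial 256k next 256k pctincrease 0);

insert into blah select * from blahcopy;
commit;

drop table blahcopy;

Bear in mind that inserting 65 million rows back will generate a great deal of rollback, so you may need to do it in committed batches.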

All of which is quite subtle stuff, and it doesn't make you "clueless" just because you've never come across the issue before. As I said, try and ignore silly comments like those.

Best of luck
HJR
