Newsgroup: comp.databases.oracle.server

Re: sqlldr locks fet$ and uet$, blocks smon, 100% cpu usage

From: Howard J. Rogers <howardjr2000_at_yahoo.com.au>
Date: Wed, 10 Sep 2003 09:27:40 +1000
Message-ID: <3f5e6267$0$14560$afc38c87@news.optusnet.com.au>

"Don Seiler" <don_at_seiler.us> wrote in message news:F1q7b.459$ev2.317667_at_newssrv26.news.prodigy.com...
> Using Oracle 7.3.4.5 on HP-UX.
>
> The subject pretty much says it all. We have an application that uses
> sqlldr to post back a lot of rows to a table. This table holds ~ 65
> million rows. This application has worked fine until this week. Now it
> will hang, and there will be a process on the server (HP-UX) that uses
> 100% of a CPU. This pretty much makes the rest of the database
> unusable. I can't even view the sessions to see who owns that process.
> I think it is spawned off by the sqlldr, but the parent pid when I do
> ps -aef says 1, which usually means it's a remote process but the
> application in question is run locally. However the time run by the
> process has matched pretty much exactly the time that the sqlldr
> application was run.
>
> Anyway what we can see is that there will be an extraordinary amount of
> locks on the system fet$ and uet$ tables by the db user that the sqlldr
> app runs as. These locks are blocking the SMON process.
>
> My developers tell me this happens after 90,000 rows or so have gone in,
> then they can't do any more with sqlldr. They can still insert data
> with sql*plus, but that isn't really an option.
>
> I can't say if it's related, but it shocked me and will probably shock
> you that this table has over 10 million chained rows. I've read that
> the two main causes are that the db_block size is too small or the
> pctfree is wrong. What would be the best way to diagnose the problem
> and implement the solution. We are planning an upgrade to Oracle 9i
> (9.2.0.x) this winter so I can make adjustments then I assume. But the
> locking problem above is of more urgent need.
>

You need to determine whether it is row chaining or row migration (the relevant column in DBA_TABLES is CHAIN_CNT... which covers both eventualities, despite its name).
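Bear in mind that CHAIN_CNT is only populated after an ANALYZE, so the check might look something like this (a sketch only, using 'blah' as a stand-in for your actual table name; on 65 million rows, ESTIMATE is far cheaper than COMPUTE):

```sql
-- Populate CHAIN_CNT (and AVG_ROW_LEN) in the data dictionary.
ANALYZE TABLE blah ESTIMATE STATISTICS SAMPLE 5 PERCENT;

SELECT chain_cnt, avg_row_len, num_rows
FROM   dba_tables
WHERE  table_name = 'BLAH';
```

If AVG_ROW_LEN comes back comfortably below your block size, a large CHAIN_CNT is almost certainly migration rather than true chaining. You can also list the individual offenders with ANALYZE TABLE blah LIST CHAINED ROWS, after running utlchain.sql to create the CHAINED_ROWS table it writes into.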

The way to tell the difference is to look at the table definition. If you 'desc blah' and see:

COL1 CHAR(2)
COL2 NUMBER(8,2)
COL3 VARCHAR2(30)

...then the row length in this table is about 40 bytes, give or take, and there is no way this can be row chaining (which is, as you say, what happens when the row length exceeds the database block size). It must be row migration.

On the other hand, if you saw:

COL1 LONG
COL2 LONG RAW

...and so on, then you are dealing with row lengths potentially in the gigabytes, and that's definitely going to be row chaining. About which you can do sod-all, short of re-creating your database with a bigger block size... but last time I checked, Oracle did not permit block sizes in the gigabyte range!

If it's migration, then 'alter table blah pctfree X' (where X is bigger than it is now) will help prevent future migrations. But it won't fix up anything that's already migrated. The only way to do that is to somehow arrange for the affected rows to be re-inserted into the table, and the easiest way of doing that is 'create table blahcopy as select * from blah', 'truncate table blah', 'insert into blah select * from blahcopy', 'drop table blahcopy'.
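Spelled out, that sequence is (a sketch only — the value 20 is an example, and you need enough free space in the tablespace to hold a full copy of the table; nobody else can be using it during the truncate window):

```sql
-- Raise PCTFREE first, so the re-inserted rows get headroom
-- for future updates instead of migrating again.
ALTER TABLE blah PCTFREE 20;

CREATE TABLE blahcopy AS SELECT * FROM blah;
TRUNCATE TABLE blah;
INSERT INTO blah SELECT * FROM blahcopy;
COMMIT;
DROP TABLE blahcopy;
```

On a table this size, any indexes on blah will be maintained row by row during the re-insert, which will be painfully slow; dropping them first and rebuilding afterwards is usually much faster.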

Which happens to be one of the same solutions I mentioned in my reply to your other post. So if you are fixing up tablespace fragmentation, you can fix up row migration at exactly the same time, for free.

Regards
HJR
