Fragmented Tables [message #277373] Mon, 29 October 2007 16:22
rubhatta
Messages: 7
Registered: August 2007
Junior Member
Hi All,

We are using Oracle database 10.2.0.3.0 and wanted to know whether reorganization of fragmented tables can lead to significant performance improvement.

Could you please tell me how I can find out the tables that are fragmented and thus candidates for reorg?

Also, what is the relation between table extents and fragmentation? I see that the extents in dba_segments for some tables are well above 100. Should these tables be reorganized? If yes, is there a cut-off for the extents above which a table should be reorganized?

Looking forward to your replies. Any help would be highly appreciated.

Thanks.
Re: Fragmented Tables [message #277375 is a reply to message #277373] Mon, 29 October 2007 16:31
BlackSwan
Messages: 25050
Registered: January 2009
Location: SoCal
Senior Member
>Could you please tell me how I can find out the tables that are fragmented and thus candidates for reorg?

I have not done a table reorg in more than 3 years.
Especially with locally managed tablespaces (LMT), table reorgs are no longer useful.

>Also, what is the relation between the table extents and fragmentation?
NONE

>I see that the extents in dba_segments for some tables are well above 100.
So there are more than 100 grains of sand in the Sahara;
does that fact mean the Sahara needs to be reorged?

>Should these tables be reorganized?
No

>If yes, is there a cut-off for the extents above which a table should be reorganized?

It appears you suffer from Compulsive Tuning Disorder.

The number of extents associated with any table is a straightforward mathematical calculation: (total amount of table data) / extent_size.

If you desire fewer extents for a table, either reduce the amount of data or increase the extent size.
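For illustration only, that arithmetic can be sketched in Python; the table and extent sizes below are made-up values, not figures from this thread:

```python
import math

def extents_needed(table_bytes: int, extent_bytes: int) -> int:
    """Number of extents a segment occupies: total data divided by
    extent size, rounded up (a partially filled extent still counts)."""
    return math.ceil(table_bytes / extent_bytes)

# Hypothetical example: a 2 GB table in a tablespace with 8 MB uniform extents.
table_bytes = 2 * 1024**3
extent_bytes = 8 * 1024**2
print(extents_needed(table_bytes, extent_bytes))  # 256 extents
```

Doubling the extent size to 16 MB halves the count to 128; as noted above, the amount of data and the extent size are the only two levers.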

Re: Fragmented Tables [message #277387 is a reply to message #277375] Mon, 29 October 2007 20:50
rleishman
Messages: 3724
Registered: October 2005
Location: Melbourne, Australia
Senior Member
To add to that...

If you have a monolithic number of extents (say 1M), then it probably means you need a bigger extent size. Allocating a new extent when all others are full entails an overhead, and that overhead is pretty static no matter how big the extent.

So, when you perform a load of 1M rows and it allocates 100 new extents, then you are copping that overhead 100 times instead of (perhaps) once (or not at all).
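A rough back-of-the-envelope model of that effect (the per-allocation cost below is a made-up constant, purely to show the scaling):

```python
import math

def allocation_overhead(rows_loaded: int, rows_per_extent: int,
                        cost_per_allocation_ms: float) -> float:
    """Total extent-allocation overhead for a bulk load: the roughly
    fixed per-extent cost is paid once per new extent allocated, so
    fewer, larger extents mean less total overhead."""
    extents = math.ceil(rows_loaded / rows_per_extent)
    return extents * cost_per_allocation_ms

# 1M-row load: 100 small extents vs one large extent (hypothetical 5 ms each).
print(allocation_overhead(1_000_000, 10_000, 5.0))     # 500.0 ms of overhead
print(allocation_overhead(1_000_000, 1_000_000, 5.0))  # 5.0 ms of overhead
```

The point is only that the cost is per allocation, not per byte, which is why it affects the load (INSERTs) and not subsequent reads.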

Auto-extending files complicates it a bit further. But all of this just means that your INSERTs are slower, not your SELECTs.

Now, say you've got a number of tables with tens of thousands of extents. This means your data dictionary is REALLY big. What if it gets SO big that it can't fit into memory? You will get recursive SQL; that's SQL that Oracle performs to probe the data dictionary when it needs DD data that is not cached.

You can tell if you are getting recursive SQL by tracing a job and checking the footer of the TK*Prof output. Your first move would probably be to fiddle with initialisation parameters to increase the memory. If you discovered that it didn't help AND that there were some enormous DD tables due to many extents, THEN it would be time to reorg the tables in question into a tablespace with larger extents.

Ross Leishman