RE: Datafile HWM without querying dba_extents

From: Mark W. Farnham <>
Date: Sun, 1 Feb 2015 09:23:53 -0500
Message-ID: <65df01d03e2a$b5d33250$217996f0$>


I would add parenthetically: "(without removing some contents)"  

IF you know you have a big chunk or big chunks of internal free space unlikely to be used in a reasonable amount of time, it *may* be worthwhile to do further analysis to determine *whether* some (or all) of the contents can be cost-effectively moved to a different tablespace.

Avoiding a treadmill of activity that exceeds its value is essential. Usually things with a monthly or quarterly cycle do not justify such activity. Storage acreage is usually relatively cheap compared to i/o operations and throughput. Your mileage may vary, especially if you have large windows of low usage, if you are trying to defer purchases to reach an anticipated price break or storage media improvement, or if your storage is temporarily close to full.


From: [] On Behalf Of Jonathan Lewis
Sent: Sunday, February 01, 2015 12:53 AM
To:
Subject: RE: Datafile HWM without querying dba_extents  

Your case 2 comment is correct - but if you do a resize datafile aimed at the highest starting block and it fails, you know that you can't shrink the file. Alternatively, if you check the start block and block count and find that it doesn't take you to the end of the file, then you know that you can't resize the file downwards.
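Jonathan's first test can be sketched directly; the file name and target size below are placeholders, and the behaviour is the documented one - Oracle refuses a shrink past used data with ORA-03297:

```sql
-- Sketch (placeholder file name and size): just attempt the shrink.
-- If any used block lies beyond the requested size, Oracle rejects it:
--   ORA-03297: file contains used data beyond requested RESIZE value
ALTER DATABASE DATAFILE '/u01/oradata/ORCL/users01.dbf' RESIZE 500M;
```

If the statement succeeds, the space is released; if it fails with ORA-03297, no harm is done and you know something still lives above that point.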

Jonathan Lewis

From: "" <> (Redacted sender "" for DMARC)
Sent: 31 January 2015 21:21
To: Jonathan Lewis;;
Subject: Re: Datafile HWM without querying dba_extents

Thanks for sharing your thoughts, esp. getting the TS dump (will give it a try).  

As for the dba_free_space, I have 2 datafile cases as below (T = Used, x = Empty):

Case1:  T T T T T T T T T x x x x x    (used up to BlkId 9, free from BlkId 10 to end of file)
Case2:  T T T T x x x x x T T T T T    (free chunk starting at BlkId 5, last blocks used)
In Case1, the contiguous free space at the max block_id (per dba_free_space) would begin at BlkId 10.

In Case2, the free chunk with the max block_id would begin at BlkId 5, but we really cannot shrink that datafile, since the last blocks are already used.

So the MAX(block_id) for a given file_id in dba_free_space does not necessarily point to free blocks at the 'end' of a datafile.
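One way to distinguish the two cases without touching dba_extents is to compare the top free chunk against the file size in dba_data_files. A sketch (untested; &file_id is a SQL*Plus substitution variable):

```sql
-- Sketch: does the highest free chunk run to the end of the file?
--   Case1: block_id + blocks - 1 reaches df.blocks -> file can be shrunk
--   Case2: it falls short                          -> used blocks at the end
SELECT fs.block_id,
       fs.blocks,
       df.blocks AS file_blocks,
       CASE
         WHEN fs.block_id + fs.blocks - 1 >= df.blocks
         THEN 'free to end of file - can shrink'
         ELSE 'used blocks after free chunk - cannot shrink'
       END AS verdict
FROM   dba_free_space fs
JOIN   dba_data_files df ON df.file_id = fs.file_id
WHERE  fs.file_id  = &file_id
AND    fs.block_id = (SELECT MAX(block_id)
                        FROM dba_free_space
                       WHERE file_id = &file_id);
```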


On Saturday, January 31, 2015 4:58 AM, Jonathan Lewis <> wrote:      

On second thoughts, why are you querying dba_extents to find where the last used block id is? If all you want to do is shrink the datafile, then querying user_free_space (ordered by file id and block id) will allow you to find the starting block of the highest free area in the file.
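The query Jonathan describes might look like this sketch (shown against dba_free_space; user_free_space has the same columns, limited to tablespaces you can see - the tablespace name is a placeholder):

```sql
-- Sketch: list free chunks per file; for each file the last row shows
-- the starting block of the highest free area
SELECT file_id, block_id, blocks
FROM   dba_free_space
WHERE  tablespace_name = 'USERS'    -- placeholder tablespace name
ORDER  BY file_id, block_id;
```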

You only need to query dba_extents if you think you've got a lot of space lower down the file and think that moving a couple of small objects might be sufficient to clear the way to releasing it.            

Jonathan Lewis

From: [] on behalf of Deepak Sharma []
Sent: 31 January 2015 05:57
Subject: Datafile HWM without querying dba_extents

In order to resize a datafile to release space at the end, we need to find the block_id at the start of the contiguous free space at the end of that file.

The problem is that we have a very large database, such that querying dba_extents to find the last block is probably not an option. The standard queries that make use of dba_extents run for hours at a stretch and also sometimes fail with a 'snapshot too old' error (or we just give up).

Is there an alternative to using dba_extents?  

For example, if the datafile size is 100MB and the last 10MB is vacant, I want to know the block_id of where that 10MB begins.
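Assuming an 8KB block size (an assumption, not stated above), a 100MB file holds 12,800 blocks and the vacant 10MB covers blocks 11,521-12,800, so the free space begins at block_id 11521 and the shrink target is (11521 - 1) * 8192 bytes = 90MB. A sketch that generates the resize statement from the data dictionary (file id 7 is a placeholder; only valid once the chunk is known to run to end of file):

```sql
-- Sketch: generate the shrink statement for the top free chunk,
-- assuming that chunk has been shown to run to the end of the file
SELECT 'ALTER DATABASE DATAFILE ''' || df.file_name ||
       ''' RESIZE ' || ((fs.block_id - 1) * ts.block_size) || ';' AS cmd
FROM   dba_free_space  fs
JOIN   dba_data_files  df ON df.file_id = fs.file_id
JOIN   dba_tablespaces ts ON ts.tablespace_name = df.tablespace_name
WHERE  fs.file_id  = 7
AND    fs.block_id = (SELECT MAX(block_id)
                        FROM dba_free_space
                       WHERE file_id = 7);
```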

Received on Sun Feb 01 2015 - 15:23:53 CET
