Re: Identifying deleted rows for a table

From: joel garry <>
Date: Mon, 18 Apr 2011 10:16:42 -0700 (PDT)
Message-ID: <>

On Apr 18, 8:48 am, ddf <> wrote:
> On Apr 18, 7:48 am, dba cjb <>
> wrote:
> > Oracle enterprise windows 2003
> > I am interested in finding out how much of a table is made up of
> > deleted rows
> > / just to see if there is a potential benefit for backups /
> > operations
> > I have run following query
> > SELECT blocks as BLOCKS_USED, empty_blocks
> > FROM dba_tables
> > WHERE table_name='TASK'
> > It shows me I have no empty blocks
> > Is there a query that shows me deleted rows  / non-deleted rows
> > within the used blocks for this table
> > My aim is to assess whether a table export / import would be beneficial
> > for
> > 1) rman backups
> > 2) database operations
> > regards
> > Chris B
> I don't understand why knowing what rows were deleted affects whether
> or not you perform an export; possibly you could explain that to us.
> Oracle provides at least two tools you could use to identify which rows
> have been deleted: the DBMS_LOGMNR package or Change Data Capture.
> There is (to my knowledge) no way to scan each data block associated
> with a table to find the rows currently marked as deleted, and using
> either of the above methods of deleted row identification could still
> not show an accurate 'image' of the table data, as slots vacated by
> deleted rows could be filled by newly inserted data.  You could also
> query the table for rowids, then pass those through the DBMS_ROWID
> package to extract the file, block and row address, then visually scan
> that for 'holes' (a time-consuming task).  Again, I don't understand
> why this information is necessary in the decision process on whether
> or not to perform an export.
> David Fitzjarrell
> David Fitzjarrell

It seems apparent to me he's asking about sparseness. There's plenty of discussion about index sparseness like

Table sparseness can be inferred by taking the row count times the average row length (plus row overhead) and comparing that to how much space the segment actually occupies. You could get more precise by working out the pctfree, pctused and block overhead effects, but I don't think that is really necessary for a general "is there a lot of wasted space here?" check. A small amount of space is taken up by deletion markers, so it is possible to fragment blocks, depending on deletion patterns, which is entirely app dependent.
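That back-of-the-envelope comparison can be sketched as a query (an illustration, not from the thread: it assumes the optimizer statistics are fresh, an 8192-byte block size, and uses the poster's TASK table as the example):

```sql
-- Rough sparseness check: bytes implied by the rows vs. bytes in used blocks.
-- Assumes statistics are current (gathered via DBMS_STATS) and 8K blocks.
SELECT num_rows * avg_row_len AS est_row_bytes,
       blocks * 8192          AS used_block_bytes,
       ROUND(100 * (1 - (num_rows * avg_row_len)
                        / NULLIF(blocks * 8192, 0)), 1) AS pct_apparent_waste
FROM   dba_tables
WHERE  table_name = 'TASK';
```

A large pct_apparent_waste suggests sparse blocks, though per-row overhead and pctfree will inflate the number somewhat.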

The decision about whether to use exp/imp to reorder a segment is complicated by any implicit ordering the data gets from the app. It is entirely possible that an update process fills blocks to full utilization, so an exp/imp with the default pctfree could make the table larger, leaving unwanted free space. It's also possible a gazillion block splits have happened, which could be fixed by an exp/imp. Row chaining, row migration and large deletes could add up to needing this kind of operation. Of course, a table move could fix these too, except where rows are just too big to fit.
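The in-place alternative mentioned above might look like the following sketch (TASK and TASK_PK are placeholder names; note that a MOVE leaves the table's indexes unusable until rebuilt):

```sql
-- Rebuild the segment in place instead of exp/imp.
ALTER TABLE task MOVE;        -- rewrites the rows into fresh blocks
ALTER INDEX task_pk REBUILD;  -- the move marks existing indexes UNUSABLE
```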

The better way to answer this might be to duplicate the db with rman to a test system, run through the exp/imp and watch what happens. I've found some things become obvious with the dbconsole tablespace map option, while others require actual testing to see whether any difference in space or performance actually materializes. In general, with modern LMTs everything works fairly well, but there are apps that work against it and some features/bugs that can be wasteful, as the above link demonstrates. Just generally doing exp/imp to "defragment" tablespaces is almost certainly a waste of time at best.

Also, beware of the exp compress=y option, which folds the segment's currently allocated space into its INITIAL extent value on import, and you might want to consider temporarily changing pctfree.
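A sketch of those precautions (illustrative only; the account and table names are placeholders). compress=n keeps exp from folding all the allocated space into one large INITIAL extent, and pctfree can be lowered before the reload and restored afterwards:

```sql
-- exp scott/tiger tables=TASK file=task.dmp compress=n
-- (compress=y would set INITIAL to the segment's current allocated size)
ALTER TABLE task PCTFREE 5;   -- pack blocks tighter for the reload
-- ... run the import / reload here ...
ALTER TABLE task PCTFREE 10;  -- restore the default afterwards
```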


Received on Mon Apr 18 2011 - 12:16:42 CDT
