Re: Problem with tablespace file size limit

From: Noons <wizofoz2k_at_yahoo.com.au>
Date: Thu, 20 Aug 2009 22:36:50 +1000
Message-ID: <h6jfnq$m9l$1_at_news.eternal-september.org>



jstuglik wrote, on my timestamp of 20/08/2009 9:24 PM:

> with the new datafile there should be enough space in one file to store
> the last 2 years of them, and if it's not I will just add another datafile then.

Sounds good in principle.

> I also hope this will improve the performance of the db for data
> manipulation in those tables left in the old tablespace (and datafile).

It always helps to separate tables by mode of access, rather than purely by size. Highly volatile tables should be in a dedicated tablespace, read-mostly ones in another, and so on. That way a dba can apportion things to the appropriate devices, if desirable or needed.
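A rough illustration of that kind of split (tablespace, datafile and table names here are just made up for the example):

  -- one tablespace for volatile data, one for read-mostly data
  CREATE TABLESPACE ts_volatile   DATAFILE 'ts_volatile01.dbf'   SIZE 2G;
  CREATE TABLESPACE ts_readmostly DATAFILE 'ts_readmostly01.dbf' SIZE 4G;

  -- relocate an existing table; its indexes go UNUSABLE and must be rebuilt
  ALTER TABLE order_queue MOVE TABLESPACE ts_volatile;
  ALTER INDEX order_queue_pk REBUILD TABLESPACE ts_volatile;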

> I will do the CTAS on those tables and then drop the old ones - do you
> have any hints on this to make it as fast as possible? There are 5 tables
> to move, the biggest one contains about 50 million rows and is about 4GB
> on disk. I'm asking because I'm afraid this could take a long time and I
> will have only a few-hour window to do this.

It's all highly contingent on the target hardware. I wouldn't worry too much about a 50Mrow table in the system I work with; we have tables with up to 2 billion rows: *those* are a worry! But at my prior job, 50Mrow would have been a concern on that particular hardware: it's all relative.

Don't want to over-complicate things, but the ideal for your case would be partitioning: it's then dead easy to just drop the partitions holding the older data and add new ones for new data. But partitioning is an extra-cost Oracle option, so it will make your product more expensive as well as complicating your code. If you can live with CTAS performance on the target hardware, then it's OK.
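For the record, the partition approach looks roughly like this (table and partition names invented, and it needs the Partitioning option on Enterprise Edition):

  -- range-partitioned by month: old months are dropped, new ones added
  CREATE TABLE measurements (
    meas_time DATE NOT NULL,
    device_id NUMBER,
    value     NUMBER
  )
  PARTITION BY RANGE (meas_time) (
    PARTITION p_2009_07 VALUES LESS THAN (TO_DATE('2009-08-01','YYYY-MM-DD')),
    PARTITION p_2009_08 VALUES LESS THAN (TO_DATE('2009-09-01','YYYY-MM-DD'))
  );

  -- purging old data becomes a dictionary operation, not a huge DELETE
  ALTER TABLE measurements DROP PARTITION p_2009_07;
  ALTER TABLE measurements ADD PARTITION p_2009_09
    VALUES LESS THAN (TO_DATE('2009-10-01','YYYY-MM-DD'));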

Ideally with CTAS, you'd be moving the table to a new tablespace; or at least make sure the tablespace you use is locally managed with uniform extents (EXTENT MANAGEMENT LOCAL UNIFORM SIZE [xxx]M, look it up). Pick a value for the "[xxx]M" that keeps even the largest tables under 10000 extents. That avoids creating "holes" that could make it impossible to create the new trimmed table even though there is enough total free space left.
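Sticking with invented names, the combination looks something like this; with a ~4GB table, 8M uniform extents work out to roughly 500 extents, well under that limit:

  -- locally managed, uniform extents, so no free-space "holes"
  CREATE TABLESPACE ts_hist
    DATAFILE 'ts_hist01.dbf' SIZE 8G
    EXTENT MANAGEMENT LOCAL UNIFORM SIZE 8M;

  -- CTAS into the new tablespace, keeping only the rows you want
  CREATE TABLE measurements_new
    TABLESPACE ts_hist NOLOGGING
    AS SELECT * FROM measurements
     WHERE meas_time >= ADD_MONTHS(SYSDATE, -24);

  -- then drop/rename, recreate indexes, constraints and grants, gather stats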

You may also want to have a look at the DBMS_REDEFINITION PL/SQL package. It helps a lot with the task of moving/reloading tables after massive deletes while keeping track of dependent objects. I think it needs Enterprise Edition, as it uses MVs to keep constant online access to the table while it is being redefined.
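From memory the call sequence is roughly the following (schema and table names are made up, the interim table must already exist with the structure and tablespace you want, and you should check the docs for your exact version):

  BEGIN
    DBMS_REDEFINITION.CAN_REDEF_TABLE('APP', 'MEASUREMENTS',
                                      DBMS_REDEFINITION.CONS_USE_PK);
    DBMS_REDEFINITION.START_REDEF_TABLE('APP', 'MEASUREMENTS',
                                        'MEASUREMENTS_INT');
  END;
  /
  DECLARE
    errs PLS_INTEGER;
  BEGIN
    DBMS_REDEFINITION.COPY_TABLE_DEPENDENTS('APP', 'MEASUREMENTS',
                                            'MEASUREMENTS_INT',
                                            num_errors => errs);
    DBMS_REDEFINITION.FINISH_REDEF_TABLE('APP', 'MEASUREMENTS',
                                         'MEASUREMENTS_INT');
  END;
  /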

The other option is to delete the rows, then just move the table to another tablespace at a convenient time: deleted space is not carried over (except in very unusual cases, like half-filled blocks), and you need to rebuild the indexes and recalc stats afterwards. But it does not invalidate views or other dependents. Have a try.
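Again with invented names, that path is essentially:

  -- purge the old rows, then shrink by moving the table at a quiet time
  DELETE FROM measurements WHERE meas_time < ADD_MONTHS(SYSDATE, -24);
  COMMIT;

  ALTER TABLE measurements MOVE TABLESPACE ts_hist;

  -- MOVE leaves the indexes UNUSABLE, so rebuild them and refresh stats
  ALTER INDEX measurements_pk REBUILD;
  EXEC DBMS_STATS.GATHER_TABLE_STATS(USER, 'MEASUREMENTS', cascade => TRUE);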

Not an easy task or decision. It's all very dependent on the sizes, hardware capacity and so on that you're dealing with.
I don't envy the work ahead of you...
