Re: Problem with tablespace file size limit

From: jstuglik <>
Date: Thu, 20 Aug 2009 05:52:31 -0700 (PDT)
Message-ID: <>

On 20 Aug, 14:36, Noons <> wrote:
> jstuglik wrote, on my timestamp of 20/08/2009 9:24 PM:
> > with the new datafile there should be enough space in one file to store
> > the past 2 years of them, and if there isn't I will just add another
> > datafile then.
> Sounds good in principle.
> > I hope also this will improve performance of the db in data
> > manipulation in those tables left
> > in the old tablespace (and datafile).
> It always helps to separate tables by mode of access, rather than purely size.
> Highly volatile tables should be in a dedicated tablespace, read-mostly ones in
> another, and so on.  That way a dba can apportion things to the appropriate
> devices, if desirable or needed.

I think in this case it's ok because the tables I'm going to move are all very similar in their mode of access - very frequent inserts and no deletes (until one of the archiving sessions). The rest of the tables are also all similar - many selects, updates and inserts but very seldom deletes (it's an OLTP db in general, but in some respects it is treated like a data warehouse to improve performance of some specific tasks).

> > I will do the CTAS on those tables and then drop the old ones - do you
> > have any hints on this to make
> > it as fast as possible? There are 5 tables to move; the biggest one
> > contains about 50 million rows and
> > is about 4GB in size on disk. I'm asking because I'm afraid this
> > could take a long time and I will have
> > only a few-hour window to do this.
> It's all highly contingent on the target hardware.  I wouldn't worry too much
> about a 50Mrow table in the system I work with - we have tables with up to 2
> billion rows: *those* are a worry!  But in my prior job, 50Mrow would be a
> concern on that particular hardware: it's all relative.

We just migrated to a new server a few months ago and I think it should be ok. It's an HP ProLiant BL460c G1 with a Hitachi drive array connected via FC.

> Don't want to over-complicate things but the ideal for your case would be
> partitioning: it's then dirt easy to just drop partitions with the older data
> and add them for new data.  But partitioning is an additional cost Oracle option
> and that will make your product more expensive as well as complicating your
> code.  If you can survive in the target hardware with CTAS performance, then
> it's OK.

Just as you say: we have purchased a Standard Edition One per-processor license and it's relatively cheap. Partitioning is, I think, an Enterprise Edition option, so there is no way we can afford it right now. We could use a lot of other features available in Enterprise, but instead we have to struggle to make our app better using what is available in Standard Edition.
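Just to illustrate what we're missing: if Partitioning were available, the drop-old/add-new pattern would look roughly like this (the table, column and partition names below are made up, and one-partition-per-month is just one possible scheme):

```sql
-- Hypothetical range-partitioned table, one partition per month.
CREATE TABLE measurements (
  id       NUMBER,
  taken_at DATE,
  value    NUMBER
)
PARTITION BY RANGE (taken_at) (
  PARTITION p_2009_07 VALUES LESS THAN (TO_DATE('2009-08-01','YYYY-MM-DD')),
  PARTITION p_2009_08 VALUES LESS THAN (TO_DATE('2009-09-01','YYYY-MM-DD'))
);

-- Archiving old data becomes a metadata operation, not a huge DELETE:
ALTER TABLE measurements DROP PARTITION p_2009_07;

-- And new data gets a fresh partition:
ALTER TABLE measurements ADD PARTITION p_2009_09
  VALUES LESS THAN (TO_DATE('2009-10-01','YYYY-MM-DD'));
```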

> Ideally with CTAS, you'd be moving the table to a new tablespace.  Either way,
> make sure you use a locally managed uniform size tablespace (EXTENT MANAGEMENT
> LOCAL UNIFORM SIZE [xxx]M, look it up).  Pick a value for the "[xxx]M" that
> keeps even the largest tables at < 10000 extents.  That will avoid the
> creation of "holes" that could make it impossible to create the new trimmed
> table even though there is enough total free space left.

I will definitely look it up. It sounds like a good solution for archiving purposes too.
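If I understand it right, a sketch of that would be something like the following (the datafile path, object names and sizes here are just placeholders, not a tested script):

```sql
-- Locally managed tablespace with uniform extents; pick the extent size
-- so the largest table stays under roughly 10000 extents
-- (e.g. a ~4GB table with 1M uniform extents needs only ~4000 extents).
CREATE TABLESPACE arch_data
  DATAFILE '/u02/oradata/mydb/arch_data01.dbf' SIZE 10G
  EXTENT MANAGEMENT LOCAL UNIFORM SIZE 1M;

-- Move a table with CTAS into the new tablespace; NOLOGGING can speed
-- up the copy if the database is backed up afterwards.
CREATE TABLE measurements_new
  TABLESPACE arch_data
  NOLOGGING
  AS SELECT * FROM measurements;

-- After verifying the copy: drop the old table, rename the new one, and
-- recreate indexes, constraints, grants, then regather statistics.
```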

> You may also want to have a look at the DBMS_REDEFINITION PL/SQL package. It
> helps a lot with the task of moving/reloading tables after massive deletes while
> keeping track of dependent objects.  I think it needs Enterprise Edition as it
> uses MVs to keep constant online access to the table while being redefined.
> The other option is to delete all rows, then just move the table to another
> tablespace at a convenient time: deleted space is not moved (except in very
> unusual cases, like half-filled blocks), and you need to rebuild indexes and
> recalc stats afterwards.  But it does not invalidate views or other
> dependents.  Have a try.
> Not an easy task or decision.  It's all very dependent on the sizes, hardware
> capacity and so on that you're dealing with.
> I don't envy the work ahead of you...
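For my own notes, I imagine the move-and-rebuild option would go something like this (table, index and tablespace names are placeholders):

```sql
-- After the mass delete, move the table during the maintenance window;
-- the deleted space is left behind in the old segment.
ALTER TABLE measurements MOVE TABLESPACE arch_data;

-- Moving a table marks its indexes UNUSABLE, so rebuild each one:
ALTER INDEX measurements_pk REBUILD;

-- ...then regather optimizer statistics on the moved table:
EXEC DBMS_STATS.GATHER_TABLE_STATS(USER, 'MEASUREMENTS');
```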

Thanks a lot for all your help. It seems I'll have to dig deep into this problem to get it right :-)

Received on Thu Aug 20 2009 - 07:52:31 CDT