Oracle FAQ Your Portal to the Oracle Knowledge Grid
 

Home -> Community -> Usenet -> c.d.o.server -> Re: drop user cascade undo /redo?

Re: drop user cascade undo /redo?

From: joel garry <joel-garry_at_home.com>
Date: 23 Feb 2007 14:53:31 -0800
Message-ID: <1172271211.187890.117420@h3g2000cwc.googlegroups.com>


On Feb 23, 12:14 pm, Unknown <m.ley..._at_chello.nl> wrote:
> Rdbms version: 9.2.0.5
> Database configuration: Data-warehouse / OLAP (noarchivelog)
> O.S. AIX 5L Version 5.1
>
> Hi,
>
> I am preparing for a change. 3 tablespaces were created using a
> too large uniform size of 1M. This needs to be rectified. The particular
> tablespaces need to be recreated with a smaller uniform size, because the
> current uniform size is too large for the size of objects the tablespaces
> are housing, causing inefficient data storage.
>
> The tablespaces in question contain only objects belonging to one
> user/schema and the user/schema only owns objects that are created in the
> tablespaces in question.
>
> What globally needs to be done:
> - save create user & tablespace ddl from OEM.
> - lock user.
> - set tablespaces in question read only, to be certain no data is
> entered during the change and lost during the coming deletion of the
> tablespaces?
> - export data belonging to the user or tablespaces in question.
> - set tablespaces read/write before drop?
> - drop tablespaces in question containing data, then drop user cascade
> - or-
> - drop user cascade without dropping the tablespaces first.
> - recreate tablespaces in question with correct uniform size using
> modified ddl scripts.
> - recreate user using ddl script.
> - import user objects.
>
> I have been advised by the customer to drop the user with the cascade
> option. Considering the user owns 100GB of data, I am afraid of
> performance issues and the generation of large amounts of undo/redo.
>
> If possible I would rather drop the tablespaces belonging to the user
> first and then drop the user with cascade to remove all other objects
> belonging to the user that are saved in the data dictionary.
>
> Please advise what the best strategy is; any other related tips are more
> than welcome.
>
> Regards Mick
> (Junior DBA).
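
The steps you list would come down to something like the following sketch
(user, tablespace, and file names below are placeholders, not taken from
your post, and the export itself runs from the OS shell on 9.2, e.g.
exp system/... OWNER=scott FILE=scott.dmp):

```sql
-- Freeze the data while exporting:
ALTER TABLESPACE ts_small READ ONLY;

-- ... run the export from the OS shell here ...

-- Whether the tablespace must be read/write again before the drop is
-- worth verifying on a test system; switching back is cheap:
ALTER TABLESPACE ts_small READ WRITE;
DROP TABLESPACE ts_small INCLUDING CONTENTS AND DATAFILES;

-- Dropping the user afterwards cleans up any remaining dictionary
-- entries (the segments are already gone with the tablespace):
DROP USER scott CASCADE;

-- Recreate with the smaller uniform extent size, then re-import:
CREATE TABLESPACE ts_small
  DATAFILE '/u01/oradata/orcl/ts_small01.dbf' SIZE 2G
  EXTENT MANAGEMENT LOCAL UNIFORM SIZE 128K;
```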

Please post the calculations that led you to conclude that data storage is inefficient. Unless you have ten thousand never-to-be-used empty tables, it seems a strange statement for a 100G DW. Then again, maybe it's time for me to review the Concepts manual about segment allocation... The essential question is: how often are you going to be allocating segments where you will never use up 1M?

If you have a mix of tiny and large tables, then you want to use autoallocate anyway; it is pretty quick at coming to a correct one of a limited number of sizes. Most DW tables tend to be large, so you want larger allocations at any given time. What you don't want is to wind up with hundreds of thousands of extents.

If you are worried about wasted space, maybe you've done something strange with PCTFREE and PCTUSED? If that is the case, Valentin's suggestion is spot-on. In fact, depending on how your tables are updated, it is critical to get those correct to avoid unpleasant surprises when you export or move things about. Or have you been doing a lot of purging, leaving sparse blocks?
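
One way to get those numbers from the data dictionary (the view names are standard; the tablespace name is a placeholder, and the row-length figures assume optimizer statistics have been gathered):

```sql
-- How many segments, and how many 1M extents, sit in the tablespace:
SELECT segment_name, extents, bytes/1048576 AS mb
  FROM dba_segments
 WHERE tablespace_name = 'TS_SMALL'
 ORDER BY bytes;

-- Rough allocated-vs-used comparison per table; num_rows and
-- avg_row_len come from statistics, so analyze the tables first:
SELECT s.segment_name,
       s.bytes                    AS allocated_bytes,
       t.num_rows * t.avg_row_len AS approx_row_bytes
  FROM dba_segments s
  JOIN dba_tables   t
    ON t.owner = s.owner AND t.table_name = s.segment_name
 WHERE s.tablespace_name = 'TS_SMALL';
```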

I think you may be a bit off on the undo usage too. imp/exp will use a lot, but not the DDL, I think. Do you not have a test system to try it on?
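
If you do have a test system, a simple way to check is to snapshot your session's redo and undo statistics around the operation (statistic names as they appear in v$statname; the user name is a placeholder):

```sql
-- Before the drop:
SELECT n.name, s.value
  FROM v$mystat s JOIN v$statname n ON n.statistic# = s.statistic#
 WHERE n.name IN ('redo size', 'undo change vector size');

DROP USER scott CASCADE;

-- Re-run the same query; the deltas are the redo and undo this
-- session generated for the drop:
SELECT n.name, s.value
  FROM v$mystat s JOIN v$statname n ON n.statistic# = s.statistic#
 WHERE n.name IN ('redo size', 'undo change vector size');
```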

jg

--
@home.com is bogus.
http://video.stumbleupon.com/#
Received on Fri Feb 23 2007 - 16:53:31 CST

