Re: Import extent size problems

From: <hjr.pythian_at_gmail.com>
Date: Thu, 23 Aug 2007 15:38:18 -0700
Message-ID: <1187908698.761808.53220@i13g2000prf.googlegroups.com>


On Aug 24, 5:58 am, stephen O'D <stephen.odonn..._at_gmail.com> wrote:
> On Aug 23, 8:43 pm, stephen O'D <stephen.odonn..._at_gmail.com> wrote:
>
>
>
> > On Aug 23, 5:08 pm, "fitzjarr..._at_cox.net" <fitzjarr..._at_cox.net> wrote:
>
> > > On Aug 23, 10:40 am, stephen O'D <stephen.odonn..._at_gmail.com> wrote:
>
> > > > > Reading the documentation is always worthwhile:
>
> > > > >http://download.oracle.com/docs/cd/B19306_01/server.102/b14215/exp_im...
>
> > > > > Notice that compress defaults to y, and you're painfully aware of the
> > > > > consequences. You need to set this parameter to n on the command line
> > > > > or in your parfile:
>
> > > > > exp .../... file=.... compress=n .....
>
> > > > > This will prevent exp from 'resizing' the initial extent.
>
> > > > I know what compress = y/n does - but if my original create table
> > > > statement on the big database said
>
> > > > "create table x .... storage initial 80M .... "
>
> > > > Then will export not generate the DDL to have the same initial extent
> > > > size if compress is N?
>
> > > You might want to run:
>
> > > set long 50000
> > > select dbms_metadata.get_ddl('TABLE','<table name here>')
> > > from dual;
>
> > > to see what Oracle believes the storage parameters to be. You might
> > > be surprised at what you find.
>
> > > David Fitzjarrell
>
> > OK, I ran the sql you suggested and the end of the output is:
>
> > ...
> > STORAGE(INITIAL 67108864 NEXT 4194304 MINEXTENTS 1 MAXEXTENTS
> > 2147483645
>
> > So my storage clause is coming up as ~67Meg. If that size comes up in
> > the export file, when I import it into a locally managed tablespace with
> > uniform extents of 64K, will that table, when empty, consume 67M or 64K
> > on Oracle 10g?
>
> I tested this again - compress is set to N for sure. The table that
> reported 67Meg initial extent size using DBMS_Metadata consumes 67Meg
> when I import it. It seems Oracle just ignores the uniform extent size
> if the INITIAL you specify in your DDL is bigger.
>
> Is there any way to override this, or am I going to have to manually
> (or via a script) adjust the storage parameters of every table before
> exporting?

It doesn't "ignore" the request for 67MB. It seeks to honour it, although it won't do so in the precise way the storage clause demands.

That is, if you've got INITIAL 67M, your table will be created with 67 1MB extents (or 134 512KB extents, etc., depending on the uniform extent size). You will thus get your 67MB, just not quite in the way you were expecting.

Locally managed tablespaces have always tried to honour the intention of the INITIAL clause; it's the NEXT they completely ignore!
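
If you want to see that behaviour for yourself, here's a minimal sketch (the tablespace and table names are made up, and it assumes you can spare a small 1MB-uniform locally managed tablespace to play in):

  -- a throwaway 1MB-uniform locally managed tablespace
  create tablespace demo_lmt
    datafile '/tmp/demo_lmt01.dbf' size 100m
    extent management local uniform size 1m;

  -- ask for a 67MB INITIAL and a 4MB NEXT, much as the original DDL did
  create table demo_extents (x number)
    tablespace demo_lmt
    storage (initial 67m next 4m);

  -- expect roughly 67 extents of 1MB each: the INITIAL is honoured in
  -- total, the NEXT is ignored in favour of the uniform extent size
  select count(*) as extent_count, sum(bytes)/1024/1024 as mb
  from   user_extents
  where  segment_name = 'DEMO_EXTENTS';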

If your export was done with compress=Y (one of the worst defaults Oracle ever invented), then - for large tables - your import will be demanding large amounts of free space up-front. It won't be demanding a single 67MB piece of free space, but it will still be demanding 67MB of free space somewhere. If you haven't got it, you'll run into trouble.
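
It's worth checking before the import whether the target tablespace can actually satisfy a request like that. Something along these lines (the tablespace name is only a placeholder) shows the total free space and the largest contiguous chunk available:

  select sum(bytes)/1024/1024 as free_mb,
         max(bytes)/1024/1024 as largest_chunk_mb
  from   dba_free_space
  where  tablespace_name = 'USERS';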

Do what David told you to do a long time ago: compress=N is the only sensible option for export anyway, and you might as well use it. Then your dump file will be asking for INITIAL 1M NEXT 512K (for example), and it will be given one 1MB initial extent and as many 1MB next extents as it needs to import the entire data set.
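
Once such an import has run, a quick sanity check (the table name here is just the 'x' from the example earlier in the thread) will confirm the segment only took the uniform extents it actually needed rather than the full INITIAL:

  select segment_name, extents, bytes/1024/1024 as mb
  from   user_segments
  where  segment_name = 'X';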

If you really and truly want to muck about with manipulating storage clauses (not a sensible option, but if you insist...), then do a Google search for DataBee. It allows you to override storage clauses in a dump file without having to modify them in the source database.
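
If you'd rather stay inside the database than edit dump files, a rough sketch of one alternative (assuming you're prepared to pre-create the tables yourself and then import with ignore=y) is to have DBMS_METADATA leave the storage clause out of the DDL it generates:

  -- suppress the STORAGE clause in DDL generated by dbms_metadata
  begin
    dbms_metadata.set_transform_param(
      dbms_metadata.session_transform, 'STORAGE', false);
  end;
  /

  set long 50000
  select dbms_metadata.get_ddl('TABLE','X') from dual;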
