Re: Reducing the number of table extents

From: Tony Jambu <aaj_at_phantom.trl.OZ.AU>
Date: 23 Feb 95 00:24:36 GMT
Message-ID: <3igkk4$30n_at_newsserver.trl.OZ.AU>


I think my previous posting got lost somewhere in space. Here goes again.

In article <3ie7qm$6bs_at_romulus.ncsc.mil>, jts_at_romulus.ncsc.mil (Jamie T. Sutton) writes:
.
.
>
> 1. Why can't I create an extent larger than a datafile's size?

You can't. A single extent must fit within one datafile, and the maximum size of a datafile is bounded by the operating system's maximum file size. On most UNIX systems that is 2G, and on some 4G.

>
> 2. How can I combine all of these extents into one extent or a
> few extents? Will I have to dump the whole table onto tape,
> disk, etc. and reload the table? Or allocate space for
> a temp. table, copy all the rows from the orig. table, drop
> the original table, and then reload the orig. table? Or is
> there some util. I can use to do this?

I would advise you not to combine your extents into anything larger than the maximum physical size of your datafiles. If your database is in the gigabytes, then don't use compress=Y on export.
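For what it's worth, one common way to rebuild a table into fewer, larger extents without going via export/import is CREATE TABLE ... AS SELECT with an explicit STORAGE clause. A rough sketch (the table name and storage figures here are examples only, not anything from the original posting):

```sql
REM Rebuild EMP into one large initial extent.
REM Sizes are illustrative -- keep INITIAL below your datafile size.
CREATE TABLE emp_new
  STORAGE (INITIAL 500M NEXT 50M PCTINCREASE 0)
  AS SELECT * FROM emp;

DROP TABLE emp;
RENAME emp_new TO emp;
```

You need enough free space in the tablespace to hold both copies while the new table is built, and you have to recreate any indexes, grants and constraints on the new table afterwards.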

Also, having your table fragmented is not such a big deal. Think about it: what is the timing overhead of spanning into another extent during a read, compared to the time taken to read a 2G extent? Hardly significant.
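Before worrying about it at all, it is worth checking how fragmented the table actually is. A quick look in the data dictionary (schema name is just an example) would be something like:

```sql
REM Count extents per segment for a given owner (needs access to DBA views)
SELECT segment_name, COUNT(*) AS extents
  FROM dba_extents
 WHERE owner = 'SCOTT'
 GROUP BY segment_name
 ORDER BY extents DESC;
```

If the counts are small, there is nothing to fix.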

I wouldn't bother combining the extents when you are talking datafiles in the gigabytes.

ta
tony

-- 
 _____       ________ / ___ |Tony Jambu, Database Consultant
  /_  _        /_ __ /      |Wizard Consulting,Aust (ACN 065934778)
 /(_)/ )(_/ \_/(///(/_)/_(  |CIS: 100250.2003_at_compuserve.com FAX: +61-3-2536173
 \_______/                  |Email:TJambu_at_wizard.com.au PHONE: +61-3-2536385
Received on Thu Feb 23 1995 - 01:24:36 CET
