
Re: Managing Large tablespaces

From: Michael Austin <maustin_at_firstdbasource.com>
Date: Sat, 13 Jul 2002 09:23:56 -0400
Message-ID: <3D3029EC.A6B8FCB4@firstdbasource.com>


Richard Foote wrote:
>
> Hi Steve,
>
> Your question appears (?) to confuse tablespace sizes with data file sizes.
>
> Note that a tablespace can have many datafiles, so there is no need to
> create one "mamma" big data file for the tablespace. In fact, even if
> you could, I would question it. Can you imagine having to restore one
> 40G data file in case of media recovery? Can you imagine how much
> recovery might be required on such a large datafile? Can you imagine
> the lack of flexibility performing hot backups, as you must back up the
> large file in one go? It's much better to create a number of smaller
> data files (a max limit of 2G is quite common, not just from some O/S
> limitations but from a practicality point of view as well). In your
> case 25 data files is not a huge number.
>
> You only need really big data files on really, really big DBs (in the
> multi-terabyte range).
>
> In summary, use a number of smaller data files for this tablespace and
> appreciate the flexibility (especially in backup/recovery situations)
> when/if you need it.
>
> Good Luck
>
> Richard
>
> "Steve S" <stevens_at_coloradocustomware.com> wrote in message
> news:bafba412.0207121504.5e46de58_at_posting.google.com...
> > We have a customer that will be deploying our application soon with a
> > DB size of ~50 GB. The largest we have in production now is ~15 GB.
> > We have data and indexes separated into their respective tablespaces.
> >
> > What would be a recommendation for a size limit for a 9i tablespace on
> > Windows 2000? What kind of problems would a single 30 or 40 GB file
> > create?
> >
> > Thanks for any help!
> >
> > Steve

To answer your question, you will need to look at what kind of transactions you are doing and how much data will be continually added. The answer will also have a lot to do with your backup and recovery methods. Exp/Imp? Hot backup? Cold backup? Backup to tape? Backup to disk? How many spindles (disks) are being used?

Although 50G is NOT a big database by any stretch of the imagination, if tablespace size is a concern, you can also look at partitioning. This is where the table data is stored across multiple tablespaces, each of which can use multiple datafiles.

Say column COLA has values AAAAA through ZZZZZ:

	store AAAAA-GZZZZ  in tablespace1  that has 10 data files 1G in size.
	store HAAAA-LZZZZ  in tablespace2  that has 10 data files 1G in size.
	...... You get the picture (I hope)
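
In 9i that layout could be expressed with range partitioning. A rough sketch (table, column, and partition names are made up for illustration; only the first two ranges are shown):

	-- Hypothetical range-partitioned table mapping COLA ranges to tablespaces
	CREATE TABLE big_table (
	    cola    VARCHAR2(5),
	    payload NUMBER
	)
	PARTITION BY RANGE (cola) (
	    PARTITION p_a_g  VALUES LESS THAN ('HAAAA')  TABLESPACE tablespace1,
	    PARTITION p_h_l  VALUES LESS THAN ('MAAAA')  TABLESPACE tablespace2,
	    -- ... further partitions following the same pattern ...
	    PARTITION p_rest VALUES LESS THAN (MAXVALUE) TABLESPACE tablespace9
	);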

Let's also say that your system has 10 logical disks (RAID 0+1 or 5); spread the datafiles for each tablespace over all 10 spindles for improved I/O performance. Should you lose a disk, media recovery of all data files stored on that disk would need to be done. While all of this would be possible, you are going to have to sit down and work through all the possible failure scenarios and what it would take to do a recovery. Several very large data files, a bunch of smaller data files, or partitioning: each option has its own advantages and disadvantages.
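
To make the spindle idea concrete, creating one of those tablespaces might look something like this (drive letters, paths, and sizes are hypothetical, assuming Windows 2000):

	-- One 1G datafile per logical disk; repeat for the remaining spindles
	CREATE TABLESPACE tablespace1
	    DATAFILE 'E:\oradata\prod\ts1_01.dbf' SIZE 1024M,
	             'F:\oradata\prod\ts1_02.dbf' SIZE 1024M,
	             'G:\oradata\prod\ts1_03.dbf' SIZE 1024M;
	-- ...and so on, up to 10 files across the 10 disks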

-- 
Regards,

Michael Austin            OpenVMS User since June 1984
First DBA Source, Inc.    Registered Linux User #261163
Sr. Consultant            http://www.firstdbasource.com
                          http://www.firstdbasource.com/donation.html
704-947-1089 (Office)     704-236-4377 (Mobile)
Received on Sat Jul 13 2002 - 08:23:56 CDT
