Home -> Community -> Usenet -> c.d.o.server -> Re: Managing Large tablespaces

Re: Managing Large tablespaces

From: Richard Foote <richard.foote_at_bigpond.com>
Date: Sun, 14 Jul 2002 19:44:39 +1000
Message-ID: <rDbY8.34708$Hj3.104971@newsfeeds.bigpond.com>


Hi Steve,

There is no performance hit in spanning a tablespace across more datafiles per se (in fact, there can be significant performance improvements). What does matter is using a disk configuration that meets your processing needs.

Howard Rogers recently summarised the various RAID options very nicely (see the John Wood 'Raid Requirement' thread). Note that even if you don't use RAID, you can manually stripe tables across devices by allocating data files on different devices to the tablespace. Having enough disk spindles to cope with the database's I/O requirements is the crucial factor.
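As a sketch of that manual striping (the mount points /u01 through /u04 are made up here, standing in for separate physical devices), a tablespace could be spread like this:

```sql
-- Hypothetical example: each datafile sits on a different physical device,
-- so the extents Oracle allocates across the files end up spread over
-- several spindles rather than one.
CREATE TABLESPACE app_data
  DATAFILE '/u01/oradata/PROD/app_data01.dbf' SIZE 2000M,
           '/u02/oradata/PROD/app_data02.dbf' SIZE 2000M,
           '/u03/oradata/PROD/app_data03.dbf' SIZE 2000M,
           '/u04/oradata/PROD/app_data04.dbf' SIZE 2000M;
```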

Whenever I hear "large tables" being uttered, I think partitioning. It has many advantages, performance being the most important. Worth checking out (although Oracle will ask for more monies).
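For instance (a sketch only; the table and column names are invented), a large table could be range-partitioned by date so that queries and maintenance operations touch one partition at a time:

```sql
-- Hypothetical range partitioning by quarter; each partition can live in
-- its own tablespace, and old partitions can be dropped or exchanged
-- without touching the rest of the table.
CREATE TABLE orders (
  order_id   NUMBER,
  order_date DATE,
  amount     NUMBER(10,2)
)
PARTITION BY RANGE (order_date) (
  PARTITION p2002q1 VALUES LESS THAN (TO_DATE('2002-04-01','YYYY-MM-DD')),
  PARTITION p2002q2 VALUES LESS THAN (TO_DATE('2002-07-01','YYYY-MM-DD')),
  PARTITION pmax    VALUES LESS THAN (MAXVALUE)
);
```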

In terms of what disk configuration is most suitable to your data processing, I always suggest checking out Steve Adams' site at http://www.ixora.com.au, which has an excellent summary of this topic.

Good Luck

Richard
"Steve S" <stevens_at_coloradocustomware.com> wrote in message news:bafba412.0207130444.407a0c4f_at_posting.google.com...
> Richard,
> So you would recommend breaking up a large tablespace into many
> smaller (2 GB)datafiles for manageability. Is there a performance hit
> in spanning a tablespace across many datafiles over having one big
> file?
>
> THanks for your help,
> Steve
>
> "Richard Foote" <richard.foote_at_bigpond.com> wrote in message
news:<CqMX8.33887$Hj3.101443_at_newsfeeds.bigpond.com>...
> > Hi Steve,
> >
> > Your question appears (?) to confuse tablespace sizes with data file
sizes.
> >
> > Note a tablespace can have many datafiles so there is no need to create
one
> > "mamma" big data file for the tablespace. In fact even if you could I
would
> > question it. Can you imagine having to restore one 40G data file in case
of
> > media recovery ? Can you image how much recovery might be required on
such a
> > large datafile ? Can you image the lack of flexibility performing hot
> > backups as you must backup the large file in one go. It's much better to
> > create a number of smaller data files (max limit of 2G is quite common,
not
> > just from some O/S limitations but from a practicality point of view as
> > well). In your case 25 data files is not a huge number.
> >
> > You only need really big data files on really really big DBs (in the
many T
> > byte range).
> >
> > In summary, use a smaller number of data files for this tablespace and
> > appreciate the flexibility (especially in backup/recovery situations)
> > when/if they arise.
> >
> > Good Luck
> >
> > Richard
> >
> > "Steve S" <stevens_at_coloradocustomware.com> wrote in message
> > news:bafba412.0207121504.5e46de58_at_posting.google.com...
> > > We have a customer that will be deploying our application soon with a
> > > DB size of ~50 GB. The largest we have in production now is ~15 GB.
> > > We have data and indexes seperated into their respective tablespaces.
> > >
> > > What would be a recommendation for a size limit a 9i tablespace on
> > > Windows 2000? What kind of problems would a single 30 ro 40 GB file
> > > create?
> > >
> > > Thanks for any help!
> > >
> > > Steve
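The many-smaller-files approach quoted above can be sketched as follows (paths and sizes are illustrative; roughly 25 files of 2G each would cover the ~50 GB in question):

```sql
-- Hypothetical: grow the tablespace in 2G datafiles rather than one 40G
-- file, so each file can be backed up and restored independently during
-- hot backup or media recovery.
ALTER TABLESPACE app_data
  ADD DATAFILE '/u02/oradata/PROD/app_data25.dbf' SIZE 2000M;
```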
Received on Sun Jul 14 2002 - 04:44:39 CDT
