
Re: 9.2 32-bit to 10.2 64-bit Migration

From: joel garry <joel-garry_at_home.com>
Date: 17 Jul 2006 16:33:22 -0700
Message-ID: <1153179202.274223.147010@m73g2000cwd.googlegroups.com>

Charles Hooper wrote:
> Sybrand Bakker wrote:
> > On 14 Jul 2006 04:16:04 -0700, "Charles Hooper"
> > <hooperc2000_at_yahoo.com> wrote:
> >
> > >* Pre-create the tablespaces to reduce the time that it takes to
> > >perform the import, and to reduce fragmentation at the operating system
> > >level - if you know that you will need a 25GB data file for a
> > >tablespace, create the data file at that size before the import.
> >
> > As a datafile in many cases is the smallest unit of recovery, anything
> > beyond 2 Gb is IMO undesirable, and 25 Gb is just asking for trouble.
> >
> > --
> > Sybrand Bakker, Senior Oracle DBA
>
> That is a point that I had not considered. I was looking at it in
> terms of performance after the creation - less hard drive
> track-to-track seeking.

I've fallen for this myth myself. At this point I have to say "we require proof."

First of all, you might get some sort of graphical tool (OEM 9 has Tablespace Map, for example) that shows where blocks are being loaded within tablespaces. Even with a sorted load, you'll note that Oracle round-robins segment extents across the tablespace's datafiles.
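
You can see the same thing without a GUI; something like this against DBA_EXTENTS (just a sketch - the USERS tablespace name is only an example) shows how each segment's extents are spread across the files:

SELECT segment_name, file_id, block_id, blocks
  FROM dba_extents
 WHERE tablespace_name = 'USERS'
 ORDER BY segment_name, file_id, block_id;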

Second of all, Oracle uses pointer links to know where the next block is. Even if the segments aren't all over the place, all the other activity on a big system means that even with full table scans the disk isn't going to suck up all the blocks on one track, then all the blocks on the next track, and so forth - even if you think you are the only one on the system. It is quite difficult to disprove this; I welcome anyone to try.

The net effect is, once you start using Locally Managed Tablespaces, you really have to try to mess things up to make any difference in performance. So don't let the graphical tool scare you into thinking you are going to have a performance issue just because you have lots of little pieces all over the place. Beyond that, depending on your filesystem, there may be more going on under the covers than you would like.
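
For what it's worth, a locally managed tablespace built from several modest files - rather than one huge one - is a one-statement affair (a sketch only; the names, paths, and sizes here are invented):

CREATE TABLESPACE app_data
  DATAFILE '/u01/oradata/PROD/app_data01.dbf' SIZE 2048M,
           '/u02/oradata/PROD/app_data02.dbf' SIZE 2048M
  EXTENT MANAGEMENT LOCAL UNIFORM SIZE 1M;

Extents get handed out across those files, which is exactly the "little pieces all over the place" the map tool will show you.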

>
> To help my understanding, what if the tablespace were 25GB in size, and
> you therefore have at least thirteen 2GB data files for the tablespace.
> You find that the twelfth data file for the tablespace is corrupt. Can
> you recover just that data file? My suspicion is that the tables and
> indexes that have data blocks in that data file are likely not
> completely contained within it, but rather have data blocks in several
> of the data files. If you were to recover just this corrupted data
> file, would that not leave the database in an inconsistent state?

Aw c'mon man, read the concepts manual and the backup and recovery manuals and try some disaster scenarios. All read/write data files backed up hot or by RMAN are inconsistent; the whole idea of recovery is to make them consistent. Look around on this group for all the "file 1 requires more recovery" screwups (hint - file 1 isn't the one that is inconsistent).

If you just restore the file without recovery, well yeah, the database will be inconsistent, and you might get error messages to that effect when trying to open it. And don't forget, it's not really the database that is being made consistent - it's the transactions. The redo stream will contain the information needed to make transactions consistent, wherever the data blocks may be.

If you need to restore a different file, you will get an error message. Depending on what you did (and assuming you are trying to recover to a point before the corruption, and maybe some blocks in another file are part of a transaction longer than your backup cycle), it might be

$ oerr rman 6556
6556, 1, "datafile %s must be restored from backup older than scn %s"
// *Cause:  An incomplete recovery session was started, but the
//          file is newer than the UNTIL clause.
// *Action: Check the UNTIL clause or restore the file from a
//          sufficiently old backup.

Amazing, what Oracle can do.
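
For the record, restoring and recovering just one damaged file with RMAN goes roughly like this (a sketch - file number 12 is made up, and the database must be in archivelog mode):

RMAN> sql 'alter database datafile 12 offline';
RMAN> restore datafile 12;
RMAN> recover datafile 12;
RMAN> sql 'alter database datafile 12 online';

Media recovery applies the redo to that one file; the rest of the database can stay open the whole time.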

>
> What about bigfile tablespaces, which support single data files up to
> 128TB? Is there no point in considering bigfile tablespaces?

Only if you have some pretty, um, loose restoration time constraints.
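
Creating one is the easy part (a sketch - name, path, and size invented):

CREATE BIGFILE TABLESPACE big_ts
  DATAFILE '/u01/oradata/PROD/big_ts01.dbf' SIZE 100G;

The catch is that the single file is now your smallest unit of restore, so size it with your recovery window in mind, not just your disk.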

jg

--
@home.com is bogus.
Those darn hackers!
http://www.pcadvisor.co.uk/news/index.cfm?newsid=6601
Received on Mon Jul 17 2006 - 18:33:22 CDT
