Oracle FAQ Your Portal to the Oracle Knowledge Grid
HOME | ASK QUESTION | ADD INFO | SEARCH | E-MAIL US
 

Newsgroup: c.d.o.server

Re: Oracle data file format

From: Jeremiah Wilton <jwilton_at_speakeasy.net>
Date: Sun, 9 Jun 2002 22:57:35 -0700
Message-ID: <Pine.LNX.4.44.0206092239120.20524-100000@grace.speakeasy.net>


On Sun, 9 Jun 2002, Jim Kennedy wrote:

> Wrong. We actually have some idea of how the thing works and
> understand the technical reasons. We are highly innovative, but
> knowledgeable enough that we try NOT to do things that are going to
> break.

That's fine. That's different from scaremongering.

> Also one wants to be very careful with a production system - there
> is a database with valuable data, and we take that responsibility
> seriously and do not have so cavalier an attitude about it. On real
> systems we actually care about the data and are not so quick to just
> try it; it might work.

Uh, yeah. Don't experiment on your production database?!

> Any tool that would allow you to copy and then "fix up" so it would
> work on a cross-platform environment would end up being no faster
> than export. Why? The data blocks would each and every one need to
> be read in one format and written in another. It would have to take
> care of the situation where the resulting write made the data larger
> than the block and create a chained row (hence creating a
> performance problem). The problem goes beyond big- and little-endian
> systems; different systems have different byte alignment and so
> forth.

I think the most convincing argument here is the chained-row problem. Still, I think you could write a program that does the rest faster than export/import. Just think of the number of operations, system calls, and so on hypothetically required by the two methods.
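A toy illustration of the byte-order part of the argument (this is not Oracle's actual block format, just the general mechanics): converting even a single 32-bit field between platforms means unpacking it in one byte order and repacking it in the other, and every such field in every block needs that treatment.

```python
import struct

def swap_endian_u32(value: int) -> int:
    """Reinterpret a 32-bit unsigned integer after a byte-order swap.

    Pack big-endian, unpack little-endian: the same four bytes,
    read under the other convention, yield a different number.
    """
    return struct.unpack("<I", struct.pack(">I", value))[0]

# The value 1 stored big-endian reads as 16777216 (0x01000000) little-endian.
print(swap_endian_u32(1))         # 16777216
print(swap_endian_u32(16777216))  # 1 -- swapping twice is the identity
```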

I bet despite these challenges we are going to see cross-platform transportable tablespaces from Oracle in some upcoming version.

> There are other ways to do it other than export (e.g. SQL*Plus COPY),
> or a link and PL/SQL code or Java code. While it would be difficult to
> prove, export/import is probably fastest for cross-platform massive
> data migration.

There's no way to use import without redo logging, which costs a lot of time on most I/O subsystems.

CTAS (CREATE TABLE AS SELECT) or INSERT AS SELECT across a DB link does a round trip per row!
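A back-of-the-envelope sketch of why per-row round trips hurt (the latency and row-count figures are illustrative, not measurements): the latency cost alone is one network round trip per batch, so row-at-a-time transfer is dominated by latency while array fetches amortize it.

```python
def transfer_latency_s(rows: int, round_trip_s: float, rows_per_trip: int) -> float:
    """Network latency cost alone: one round trip per batch of rows."""
    trips = -(-rows // rows_per_trip)  # ceiling division
    return trips * round_trip_s

rows = 10_000_000
rtt = 0.0005  # 0.5 ms per round trip (an assumed figure)

print(transfer_latency_s(rows, rtt, 1))     # row-at-a-time: 5000 s of latency alone
print(transfer_latency_s(rows, rtt, 1000))  # 1000-row array fetches: 5 s
```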

So, I've done a bunch of these on terabyte and larger systems, and there are a variety of approaches that work in a supported manner. I would say that the fastest is a parallel unload/load using an OCI program with array fetch and read-ahead, piped across a high-capacity network in large chunks to SQL*Loader loading with NOLOGGING.
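A minimal sketch of the unload side of that pipeline. The `fetch_batches` iterable here is a stand-in for the OCI array fetch (hypothetical data, not a real Oracle API); in real use the delimited output would be piped straight to SQL*Loader rather than collected in memory.

```python
import csv
import io

def unload(fetch_batches, out) -> int:
    """Write rows as pipe-delimited text, one array fetch at a time,
    so the loader downstream can consume large chunks as they arrive."""
    writer = csv.writer(out, delimiter="|")
    total = 0
    for batch in fetch_batches:  # each batch corresponds to one array fetch
        writer.writerows(batch)
        total += len(batch)
    return total

# Stub standing in for array fetches from the source database:
batches = [[(1, "a"), (2, "b")], [(3, "c")]]
buf = io.StringIO()
print(unload(batches, buf))  # 3 rows written
```

Running several of these in parallel, one per table partition or rowid range, is what makes the approach scale on large systems.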

Then there are really sneaky methods, like having an application temporarily use both systems and lazily migrate records as needed and during off-hours idle time.

Anyway, good posting Jim, and an interesting conversation.

--
Jeremiah Wilton
http://www.speakeasy.net/~jwilton
Received on Mon Jun 10 2002 - 00:57:35 CDT

