Oracle FAQ | Your Portal to the Oracle Knowledge Grid
Re: What is the best way to copy 2G table data between two databases
We do nightly exports of databases in the 40 Gig range (that's USED space, not
just allocated) within 4 hours (doing this from memory, 4 is the max -
actually I think it was more like 2 or 3 hours). Oh yeah, and the export is
piped into compress at the same time.
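An export piped into compress like that is typically done through a named pipe, so exp writes while compress reads concurrently. A minimal sketch under assumed conditions - the connect string, paths, and export options below are placeholders, not from the thread:

```shell
# Export through a named pipe so the dump is compressed on the fly.
# system/manager and the paths are hypothetical; adjust for your site.
mknod /tmp/exp_pipe p                                 # create the named pipe
compress < /tmp/exp_pipe > /backup/full_exp.dmp.Z &   # reader: compress runs in background
exp system/manager full=y consistent=y \
    file=/tmp/exp_pipe log=/backup/full_exp.log       # writer: exp feeds the pipe
wait                                                  # let compress drain the pipe
rm /tmp/exp_pipe
```

The same pipe trick works in reverse on import (uncompress into the pipe, point imp's file= at it), which avoids ever landing the uncompressed dump on disk.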
- We do not use the direct option. It improved the time significantly, but
there was a good reason not to use it - I've forgotten what it was again.
- This is not just 1 table, but 10s of thousands of tables... some tables
are several Gigs and some just 24K.
- The import also ran fairly quickly (like 6 hours - again, doing this from
memory, but definitely less than 10 hours).
If I were doing a 2 gig table, I would expect an export to run in less than 30 minutes for sure.
So, those times you mention seem too high.
- This was to a local disk?
- Anyone else hitting the table while exporting? and is consistent=y set?
(lots of rollback generation)
- This table wouldn't by any chance have a lot of row-chaining or other poor
performance problems?
- Any special columns like CLOBs or BLOBs or LONGs or anything else?
- Doing compress=y by any chance (not good)?
Others have made some interesting comments about parallel execution of the
select and insert, and this might make the copy of a large table perform
better.
- Does seem like a good idea to test this.
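For anyone wanting to test the parallel select-and-insert idea, a rough sketch of what that could look like over a db link - the table name, link name, degree of parallelism, and credentials here are all hypothetical:

```shell
# Hypothetical parallel direct-path insert over a db link.
# scott/tiger, big_table, source_link, and the degree (4) are placeholders.
sqlplus -s scott/tiger <<'EOF'
ALTER SESSION ENABLE PARALLEL DML;
ALTER TABLE big_table NOLOGGING;
-- APPEND does a direct-path load above the high-water mark;
-- NOLOGGING keeps redo generation to a minimum during the load.
INSERT /*+ APPEND PARALLEL(t, 4) */ INTO big_table t
SELECT /*+ PARALLEL(s, 4) */ * FROM big_table@source_link s;
COMMIT;
ALTER TABLE big_table LOGGING;
EOF
```

Note the caveats: NOLOGGING loads aren't recoverable from the archive logs (take a backup afterward), and on 8i the remote side of a db-link query may limit how much parallelism you actually get.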
Again, I would be interested to know which is fastest...
Note: The above numbers for me are on a newer SunFire V880 server running 8.1.7.4.
-- "David" <david_at_david.nospam.com> wrote in message news:c0pt4b$a84174_at_imsp212.netvigator.com...Received on Tue Feb 17 2004 - 00:36:29 CST
> Hi Burt,
>
> Thanks again for the info.
>
> In fact, I tried the exp/imp approach. It takes about 1 hour to export
> (direct=no), and about 2-3 hours to import (no indexes imported).
>
> I also think that using a transportable tablespace is the best..... but I'm
> not sure whether it is suitable during office hours on the production
> database (we need to do this exercise during office hours)...
>
> About the database link, I haven't tested it before. I don't know whether
> it will affect the overall network performance.
>
> David
>
>
> "Burt Peltier" <burttemp1ReMoVeThIs_at_bellsouth.net> wrote in message
> news:OUYXb.60844$8a5.25000_at_bignews1.bellsouth.net...
> > I have lots of concerns about using a db link. I don't know all the
> > background technical details, but my opinion is that it would be slower.
> >
> > I must admit I have not timed a db link for something like this... I just
> > never considered it.
> >
> > I use exp/imp a LOT and have used the technique I listed below MANY times.
> > - Note1: If you do not have fast local disk, then this would not work
> > anywhere near as well as I have seen.
> > - Note2: If you do not have a fast network between the servers, then all
> > these options are going to be slower, but it would be interesting to know
> > the speed of the network between the servers. We currently have Gigabit
> > Ethernet between our large Sun Servers.
> >
> > It would be interesting to see which is quicker. Maybe you could do a test
> > and tell everyone what you found :)
> >
> > 1) db link
> > 2) exp/imp (being sure to use local disks, ftp over fast network, use
> > local disk again, and no indexes on existing tables)
> > 3) transfer table to a tablespace by itself and do the transportable
> > tablespace option
> > - If anything beats exp/imp, I'll bet this is it.
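For reference, the transportable tablespace option (available from 8i) roughly goes like this - tablespace name, datafile path, and credentials below are placeholders, and the target needs the same block size and a compatible platform:

```shell
# Rough transportable-tablespace sequence; big_ts, paths, and
# system/manager are hypothetical names, not from the thread.

# On the source server: freeze the tablespace and export its metadata.
sqlplus -s system/manager <<'EOF'
ALTER TABLESPACE big_ts READ ONLY;
EOF
exp system/manager transport_tablespace=y tablespaces=big_ts file=big_ts.dmp

# Copy the datafile(s) plus the small metadata dump to the target server
# (OS copy / ftp in binary mode), then on the target: plug them in.
imp system/manager transport_tablespace=y \
    datafiles='/u02/oradata/B/big_ts01.dbf' file=big_ts.dmp
sqlplus -s system/manager <<'EOF'
ALTER TABLESPACE big_ts READ WRITE;
EOF
```

Because only metadata goes through exp/imp and the data moves as a raw file copy, this tends to scale with disk and network bandwidth rather than with row count.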
> >
> > Thanks.
> >
> > --
> >
> > "David" <david_at_david.nospam.com> wrote in message
> > news:c0olbf$a8g2050_at_imsp212.netvigator.com...
> > > Hi Peltier,
> > >
> > > Thanks for your info. Could you tell me why option 1 is better? Any
> > > concerns in using a database link?
> > >
> > > Thanks,
> > > David
> > >
> > > "Burt Peltier" <burttemp1ReMoVeThIs_at_bellsouth.net> wrote in message
> > > news:tXOXb.634$fE4.238_at_bignews5.bellsouth.net...
> > > > Based on your info, I would choose option #1, but with a couple of
> > > > comments:
> > > > - Do the export on the server for database A to a local disk on this
> > > > server.
> > > > - It would be best if the disk is fast and separate from online
> > > > database activity/files.
> > > > - Ftp the export dump file to the server for database B, again to a
> > > > good-performing local disk on server for database B.
> > > > - Run the import on the server for database B.
> > > > - Make sure there are no indexes on the table during the import, which
> > > > is how a normal import would run (creates table, inserts data, then
> > > > does indexing).
> > > > - So, don't pre-create the table unless you make sure to drop indexes
> > > > also (or just don't create it).
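The steps above can be sketched as a short script - the table name, staging paths, server name, and credentials are all placeholders, not from the thread:

```shell
# Sketch of the exp -> ftp -> imp sequence; owner.big_table, the
# /u01/stage paths, serverB, and all passwords are hypothetical.

# On server A: export just the one table to fast local disk.
exp system/manager tables=owner.big_table \
    file=/u01/stage/big_table.dmp log=/u01/stage/exp.log

# Push the dump to server B over the fast network (binary mode matters).
ftp -n serverB <<'EOF'
user oracle secret
binary
put /u01/stage/big_table.dmp /u01/stage/big_table.dmp
bye
EOF

# On server B: import without building indexes during the load;
# create the indexes afterward in a separate step.
imp system/manager tables=owner.big_table \
    file=/u01/stage/big_table.dmp indexes=n log=/u01/stage/imp.log
```
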
> > > >
> > > > --
> > > >
> > > > "David" <david_at_david.nospam.com> wrote in message
> > > > news:c0o8uh$a8717_at_imsp212.netvigator.com...
> > > > > Hi all,
> > > > >
> > > > > I want to copy all data from a table (which is about 2G in size)
> > > > > from database A to B (both are 8.1.7, in archive log mode, but on
> > > > > different servers). What is the safest and fastest way to do this?
> > > > > I have some options:
> > > > >
> > > > > 1) exp/imp
> > > > > 2) insert /*+ append */ with nologging, with database link
> > > > >
> > > > > Which option is better? Or any other ways?
> > > > >
> > > > > Thanks,
> > > > > David