Re: Q: Exporting large databases,tables (> 2gig)

From: Steve Haynes <steve_at_rwx777.demon.co.uk>
Date: 1996/11/26
Message-ID: <EFo2OPAE5tmyEwx4_at_rwx777.demon.co.uk>


In article <mjrE1DtvG.4uy_at_netcom.com>, Mark Rosenbaum <mjr_at_netcom.com> writes
>In article <NEWTNews.848249081.21586.dafis_at_ppp.test.jccbi.gov>,
>Martin Douglas <dafis_at_ppp.test.jccbi.gov> wrote:
>>
>>In article <328DD4E5.3D15_at_kna.com>, <krishnak_at_kna.com> writes:
>>>
>>> Hi:
>>>
>>> In the last thread I ran on this topic I got plenty of responses and
>>> good suggestions. I will summarize them once the implementation
>>> succeeds.
>>>
>>> Problem definition: what I am trying to do is port an 80 GB database
>>> from one platform to another.
>>>
>>> Oracle version: 7.2.3
>>> Source platform: HP T500
>>> Destination: Sequent SE-80 (DYNIX/ptx 4.2)
>>>
>>> The destination platform is running a 64-bit OS (DYNIX/ptx 4.2 on
>>> Sequent) which (theoretically) supports terabyte filesystems and
>>> files. We tested by creating a 100 GB filesystem, a 6 GB filesystem,
>>> and files larger than 2 GB. So it was quite logical to think that if
>>> we could export the databases over the network (or a dedicated line)
>>> to this system, we would be able to create an 80 GB export dump. But
>>> when we tried a test export, it died at the 2 GB limit (ulimit was
>>> set to unlimited). We found out from Oracle support that export does
>>> not support file sizes greater than 2 GB, but I have heard in other
>>> threads that there is a patch for this export problem.
>>>
>>> Question 1: Does anyone know of such a patch? The export was being
>>> run on an HP T500, but it could be run on the Sequent box as well
>>> (i.e. if the patch exists for either OS; note that this is not an OS
>>> patch but an Oracle patch).
>
>The typical reason for file sizes to be limited to 2 GB is the lseek
>and tell calls. These are low-level UNIX section 2 system calls used
>for direct access into disk files: lseek moves the current file
>position and tell returns it (for complete info try man lseek). These
>calls typically take and return 4-byte signed integers, and 31 usable
>bits gives plus or minus 2 GB (about 2E9); a signed 64-bit offset would
>allow about 9.2E18. You might try exporting directly to tape. I have
>not tried it, but on some systems tape files can be larger than 2 GB.
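
Exporting to tape would look something like this (untested by me, and
the no-rewind tape device name is platform-specific, so substitute your
own):

# untested sketch: a full export written straight to a no-rewind tape
# device, so exp never seeks past 2 GB in a disk file
exp system/manager full=y file=/dev/rmt/0mn log=/tmp/full_exp.log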
>
>If the goal here is to move the data across from your HP to your
>Sequent, you might try creating DB links between the two instances and
>then creating the tables on the Sequent from the tables on the HP.
>
>You will probably want to use the UNRECOVERABLE feature new in 7.2,
>since if you don't your redo logs will get completely out of control.
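
A rough sketch of that combination (the link name, SQL*Net alias, and
accounts here are invented, and I have not verified the UNRECOVERABLE
syntax over a link myself):

# run on the Sequent; 'hp_t500' is a hypothetical SQL*Net alias for
# the source instance on the HP
sqlplus scott/tiger <<'EOF'
CREATE DATABASE LINK hp_source
  CONNECT TO scott IDENTIFIED BY tiger
  USING 'hp_t500';

-- UNRECOVERABLE (new in 7.2) skips redo generation for the bulk copy
CREATE TABLE big_table UNRECOVERABLE
  AS SELECT * FROM big_table@hp_source;
EXIT
EOF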
>
>Another way to move the data is to create ASCII files with selects.
>You will want to format the output, set termout off, and spool the
>data to a disk file. By partitioning the selects (using WHERE clauses)
>you should be able to get the whole table out in multiple files, each
>under 2 GB. This method is not pretty. The partitions could be along
>time lines, and then you could load the data into partitions on the
>Sequent.
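
In SQL*Plus that comes out roughly like this (the table, columns, and
date range are invented for the sketch):

sqlplus scott/tiger <<'EOF'
set termout off
set heading off
set pagesize 0
set feedback off
spool orders_1995.lst
-- one WHERE-partitioned slice; repeat with other ranges for more files
select order_id||','||customer_id||','||amount
  from orders
 where order_date >= to_date('01-JAN-1995', 'DD-MON-YYYY')
   and order_date <  to_date('01-JAN-1996', 'DD-MON-YYYY');
spool off
exit
EOF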
>
>This would require Oracle 7.3 or later, or a front-end tool that
>supports partitioning. Current 7.3 partitioned views need to be tested
>on a platform/application basis; do not assume your combination will
>do what you need. For more info see Appendix C of the 7.3 performance
>guide.
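
For reference, a 7.3 partitioned view is just a UNION ALL view over
per-range tables whose CHECK constraints let the optimizer skip
irrelevant pieces; a minimal sketch with invented names:

sqlplus scott/tiger <<'EOF'
CREATE TABLE orders_1995
  ( order_id   NUMBER,
    order_date DATE,
    CHECK (order_date <  TO_DATE('01-JAN-1996', 'DD-MON-YYYY')) );

CREATE TABLE orders_1996
  ( order_id   NUMBER,
    order_date DATE,
    CHECK (order_date >= TO_DATE('01-JAN-1996', 'DD-MON-YYYY')) );

CREATE VIEW orders AS
  SELECT * FROM orders_1995
  UNION ALL
  SELECT * FROM orders_1996;
EXIT
EOF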
>
>Hope this helps.
>
>
>Mark Rosenbaum Otey-Rosenbaum & Frazier, Inc.
>mjr_at_netcom.com Consultants in High Performance and
>(303) 727-7956 Scalable Computing and Applications
>POB 1397 ftp://ftp.netcom.com/pub/mj/mjr/resume/
>Boulder CO 80306

Well, the obvious solution is to export a table at a time, assuming
each table is under 2 GB.
I have just written an export-to-flat-file that writes in ulimit-sized
chunks, which I guess is what Oracle has done in the patch mentioned
above. Something else which might help is to export to a UNIX named
pipe and have compress(1) read from it to create a compressed export
file; a sketch of that follows.
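
Untested as written (the paths and account are invented), but the idea
is standard:

# compress must be started first, reading from the named pipe in the
# background; exp then writes into the pipe sequentially, so it never
# hits the 2 GB lseek limit on a disk file
mknod /tmp/exp_pipe p
compress < /tmp/exp_pipe > /u01/dump/expdat.dmp.Z &
exp system/manager full=y file=/tmp/exp_pipe log=/tmp/exp.log
rm /tmp/exp_pipe
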
Ho hum, the world has too much data, and not enough information.

-- 
Steve Haynes