Re: Q: Exporting large databases,tables (> 2gig)

From: Adrian P Challinor <Adrian.Challinor_at_osiris.co.uk>
Date: 1996/11/28
Message-ID: <329d44a3.1479968_at_news.demon.co.uk>#1/1


On Sun, 24 Nov 1996 16:02:03 GMT, mjr_at_netcom.com (Mark Rosenbaum) wrote:
[SNIP]
>
>Another way to move the data is to create ASCII files with selects.
>You will want to format the output, set termout off, and spool the
>data to a disk file. By partitioning the selects (using where clauses)
>you should be able to split the whole table across multiple files,
>each under 2 GB. This method is not pretty. The partitions could be
>along time lines, and then you could load the data into partitions
>on the Sequent.
>
>This would require Oracle 7.3 or later, or a front-end tool that
>supports partitioning. Current 7.3 partitioned views need to be tested
>on a platform/app basis. Do not assume your combo will do what you need.
>For more info, look at Appendix C in the 7.3 Performance Guide.
>
>Hope this helps.
>
>
>Mark Rosenbaum Otey-Rosenbaum & Frazier, Inc.
>mjr_at_netcom.com Consultants in High Performance and
>(303) 727-7956 Scalable Computing and Applications
>POB 1397 ftp://ftp.netcom.com/pub/mj/mjr/resume/
>Boulder CO 80306
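
The spool-and-partition approach Mark describes can be sketched in SQL*Plus along these lines. Note the table and column names (ORDERS, ORDER_DATE, etc.) are made up for illustration; substitute your own, and adjust the WHERE ranges so each file stays under your filesystem's limit:

```sql
REM Sketch of a partitioned ASCII export in SQL*Plus.
REM All table/column/file names here are hypothetical examples.
SET TERMOUT OFF
SET FEEDBACK OFF
SET HEADING OFF
SET PAGESIZE 0
SET LINESIZE 200
SET COLSEP '|'

REM One spool file per date range keeps each output file small
REM enough to stay under the 2 GB file-size limit.
SPOOL orders_96q1.dat
SELECT order_id, customer_id, order_date, amount
  FROM orders
 WHERE order_date >= TO_DATE('1996-01-01', 'YYYY-MM-DD')
   AND order_date <  TO_DATE('1996-04-01', 'YYYY-MM-DD');
SPOOL OFF

SPOOL orders_96q2.dat
SELECT order_id, customer_id, order_date, amount
  FROM orders
 WHERE order_date >= TO_DATE('1996-04-01', 'YYYY-MM-DD')
   AND order_date <  TO_DATE('1996-07-01', 'YYYY-MM-DD');
SPOOL OFF
```

The resulting flat files can then be loaded into the target partitions with SQL*Loader or similar.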

We have a product that's not really an export tool, but it can be used in this way. It's an Oracle ARCHIVE system, but one which can select data based on a simple criterion, say everything to do with one day. We have one client who regularly exports tables which exceed 14 GB (yes, that's per table!) in this manner.

The next version will have an option to archive to multiple files based on file size, to overcome just these issues. It will allow a file archive (read: export, in this case) to be split based on the ULIMIT size.

Of course, the tool has other features (row-level referential integrity, cross-platform data structures, cross-database restores, security, and so on).

More info via email to: info_at_osiris.co.uk

Received on Thu Nov 28 1996 - 00:00:00 CET
