Re: Split a dmp file before import
Date: Wed, 11 Nov 2009 15:26:22 -0800 (PST)
On Nov 11, 1:51 pm, Mark D Powell <Mark.Powe..._at_hp.com> wrote:
> On Nov 10, 3:56 pm, Sashi <small..._at_gmail.com> wrote:
> > On Nov 10, 1:15 pm, joel garry <joel-ga..._at_home.com> wrote:
> > > On Nov 10, 4:22 am, Sashi <small..._at_gmail.com> wrote:
> > > > Hi all, I have a dmp file that contains about 9 million rows.
> > > > Is there a utility/technique that will split the file into two (or
> > > > more) so that I can run the import in two (or more)
> > > > stages?
> > > > Thanks,
> > > > Sashi
> > > Mark's answer pretty much says it all, but there may be more info
> > > available if you tell us what problem you are trying to solve. Disk
> > > space? Speed of imp? Redo generation? Trying to parallelize?
> > > jg
> > > --
> > > _at_home.com is bogus.
> > > Yay Stu! (Stu and I were buddies years ago): http://www.campinglife.com/output.cfm?ID=2209609
> > Thanks for your replies, and that pretty much sums it up well for me.
> > My problem is that my archive log destination keeps filling up, and
> > I'm running short of disk space; the database can't allocate enough
> > undo tablespace to cover the whole transaction.
> > I googled around and took the approach of committing regularly and
> > using a buffer size of 50 MB.
> > The DMP file is actually a single table, and is about 950 MB.
> > So on my import command I set commit=Y and buffer=50000000.
> > This is 10.2.0.4.0 on Solaris 10.
> > Regards,
> > Sashi
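For reference, the settings Sashi describes can be collected into an imp parameter file. This is only a sketch: the file, table, and log names below are placeholders, not anything from the thread.

```
# import.par -- invoked as: imp scott/tiger PARFILE=import.par
FILE=bigtable.dmp
TABLES=bigtable
COMMIT=Y
BUFFER=50000000
LOG=imp_rows.log
```

With COMMIT=Y, imp commits after each buffer-full of rows instead of once at the end of the table, which is what keeps the undo requirement small.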
> How do you back up your archive logs? (rman, manually to tape, etc...)
> If you use rman to back up the archive logs you may want to run a
> backup and delete task before you start and perhaps again while you
> are running.
> If you just back up the archived redo logs to tape then delete them
> then again you may want to schedule this task to run just prior to
> your load.
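The backup-and-delete task Mark describes can be written as a short RMAN script; this is a sketch only, assuming the archived logs go to RMAN's configured default device.

```
RUN {
  BACKUP ARCHIVELOG ALL DELETE INPUT;
}
```

DELETE INPUT removes each archived log from disk once it has been backed up, which frees the archive destination ahead of (or during) the load.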
> It does seem like you may need to allocate more space to the archive
> log directory file system; 950MB is not that much redo, though if
> your system normally generates only a couple of hundred megabytes
> per day I can understand not having enough space available to handle
> a special load. On the other hand, if this load is going to be
> repeated or is typical of future loads, then your current issue is a
> warning that your archive log directory file system is too small.
> You might also want to double-check some of your other file system
> allocations, such as those for backups, trace files, etc.
> HTH -- Mark D Powell --
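As a quick sanity check on Mark's sizing point, the free space in the archive destination file system can be inspected before the load. ARCH_DEST below is a placeholder; point it at whatever directory your log_archive_dest parameter names.

```shell
#!/bin/sh
# Report free space (in KB) in the archive log destination file system.
# ARCH_DEST is a placeholder -- set it to your log_archive_dest directory.
ARCH_DEST="${ARCH_DEST:-/tmp}"
# df -Pk gives POSIX-format output; column 4 of line 2 is available KB.
FREE_KB=$(df -Pk "$ARCH_DEST" | awk 'NR==2 {print $4}')
echo "free_kb=$FREE_KB"
```

If the reported free space is not comfortably larger than the redo you expect the load to generate, schedule the archive log backup-and-delete first, as Mark suggests.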
Some good points here, but I think the 950MB dump could expand to much more than that on import: exports don't include index data, just the commands to create the indexes, and of course the row data may be packed more densely in the dump than in database format, depending on the table.
When I'm confronted by such limitations on a non-repetitive basis, I consider simply turning off archivelog mode, doing the import, turning it back on, and then taking a backup. Also, I believe in the past I had situations where deferring index generation until after the import relieved a lot of the undo pressure. Nowadays I just keep an undo tablespace more than half as big as the db :-O (and maybe more; I can't recall offhand whether I was given the raid level I asked for on that device).
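The noarchivelog window I mean looks roughly like this from SQL*Plus; a sketch only, and remember the database is not recoverable through this window until the closing backup is taken.

```
SHUTDOWN IMMEDIATE
STARTUP MOUNT
ALTER DATABASE NOARCHIVELOG;
ALTER DATABASE OPEN;
-- ... run the import here ...
SHUTDOWN IMMEDIATE
STARTUP MOUNT
ALTER DATABASE ARCHIVELOG;
ALTER DATABASE OPEN;
-- then take a fresh backup immediately
```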
-- _at_home.com is bogus. "I think they're communists!" - Paul Reubens, in remanufactured Cheech & Chong film. http://lauren.vortex.com/archive/000627.html
Received on Wed Nov 11 2009 - 17:26:22 CST