Re: Importing huge dump file
Comments embedded.
>
> Thanks a lot for your reply and for sharing your experience. Yes, the dump
> file is on a different drive than the db files.
> I remember reading about the RECORDLENGTH parameter: we can set
> either RECORDLENGTH or BUFFER (they are mutually exclusive; I have
> already set BUFFER).
>
> Now planning to implement the following too:
> * create one large rollback segment and set COMMIT=Y
Why are you not using automatic UNDO? You are running 9.2.0.6. And even for large imports I don't set COMMIT=Y.
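For what it's worth, a minimal sketch of moving to automatic undo on 9.2, assuming an spfile is in use (the tablespace name, file path, and size below are illustrative):

    CREATE UNDO TABLESPACE undotbs_imp
      DATAFILE '/u02/oradata/undotbs_imp01.dbf' SIZE 2000M;
    ALTER SYSTEM SET undo_management = AUTO SCOPE=SPFILE;
    ALTER SYSTEM SET undo_tablespace = undotbs_imp SCOPE=SPFILE;
    -- undo_management is a static parameter, so bounce the
    -- instance for it to take effect
    SHUTDOWN IMMEDIATE
    STARTUP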
> * change db to noarchivelog mode
I can agree with this.
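A minimal sketch of the switch, run from SQL*Plus as SYSDBA; remember to re-enable archiving and take a fresh backup afterwards, since you cannot roll forward through the NOARCHIVELOG window:

    SHUTDOWN IMMEDIATE
    STARTUP MOUNT
    ALTER DATABASE NOARCHIVELOG;
    ALTER DATABASE OPEN;
    -- after the import completes, reverse it:
    --   SHUTDOWN IMMEDIATE / STARTUP MOUNT
    --   ALTER DATABASE ARCHIVELOG;
    --   ALTER DATABASE OPEN;
    -- then take a full backup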
> * create a minimum of 3 redo log groups (no mirroring) with a 128M
> file size and change the LOG_CHECKPOINT_INTERVAL accordingly.
Why change the log_checkpoint_interval at all? I can understand increasing your redo log size to reduce log switches, but it shouldn't be necessary to alter the log_checkpoint_interval from the default, in my opinion.
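If you do go with larger logs, something along these lines would do it (group numbers and file paths are illustrative; a group can only be dropped once V$LOG shows it INACTIVE):

    ALTER DATABASE ADD LOGFILE GROUP 4 ('/u03/oradata/redo04.log') SIZE 128M;
    ALTER DATABASE ADD LOGFILE GROUP 5 ('/u03/oradata/redo05.log') SIZE 128M;
    ALTER DATABASE ADD LOGFILE GROUP 6 ('/u03/oradata/redo06.log') SIZE 128M;
    -- switch until the old, smaller groups are INACTIVE, then drop them
    ALTER SYSTEM SWITCH LOGFILE;
    ALTER DATABASE DROP LOGFILE GROUP 1;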
> * set TRACE_LEVEL_SERVER and TRACE_LEVEL_CLIENT to OFF
Those are SQL*Net parameters; are you planning on performing this import across the network? Are you experiencing network problems such that these parameters need to be set to track and trace transmission errors? I'd think twice about running imp across an intranet connection if your service is not reliable.
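For reference, tracing is already OFF by default; setting it explicitly is just these lines in the server's and client's sqlnet.ora, respectively:

    TRACE_LEVEL_SERVER = OFF
    TRACE_LEVEL_CLIENT = OFF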
>
> Please advise.
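If it helps, an imp parameter file along the lines discussed above (invoked as something like imp system/... parfile=imp_huge.par) might look like this; the file names and buffer size are illustrative, FULL=Y assumes a full-database import, and COMMIT=N is per my comment above:

    FILE=/u04/dump/huge_dump.dmp
    LOG=imp_huge.log
    FULL=Y
    BUFFER=10485760
    COMMIT=N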
David Fitzjarrell

Received on Fri Sep 07 2007 - 09:52:19 CDT