Re: Importing huge data

From: Howard J. Rogers <howardjr2000_at_yahoo.com.au>
Date: Wed, 3 Sep 2003 06:51:29 +1000
Message-ID: <3f550339$0$14559$afc38c87@news.optusnet.com.au>

"Jay" <no_at_spam.com> wrote in message news:bj2sbj$5jo$1_at_msunews.cl.msu.edu...
> I used
> Alter table target_table NOLOGGING
> and then inserted a record and
> I was able to rollback.
> If NOLOGGING stops logging to the redo file,
> how can I rollback?
>
> Should I just use autocommit for my session?
>
>
> Thanks.
> -Jay
>

I think you've got some serious learning to do on Oracle architecture, Jay.

Redo ("logging") is one thing, and Undo ("rollback") is completely another. So the fact that you used the nologging keyword (which -under certain circumstances only- switches off redo) means nothing about whether or not you can rollback (because undo/rollback is *never* switch off-able).

Second, NOLOGGING doesn't work, ever, for ordinary, simple inserts. Ordinary inserts, updates and deletes *always* generate redo, no matter how many times you say 'alter table X nologging'.
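
If you want to see that for yourself, here's a minimal sketch (the table name and values are invented) using the session statistic views:

    -- how much redo has this session generated so far?
    SELECT n.name, s.value
    FROM   v$mystat s, v$statname n
    WHERE  s.statistic# = n.statistic#
    AND    n.name = 'redo size';

    ALTER TABLE target_table NOLOGGING;
    INSERT INTO target_table VALUES (1, 'A');   -- a plain, conventional insert

    -- same query again: 'redo size' has gone up despite NOLOGGING...
    SELECT n.name, s.value
    FROM   v$mystat s, v$statname n
    WHERE  s.statistic# = n.statistic#
    AND    n.name = 'redo size';

    ROLLBACK;   -- ...and the rollback still works, because undo is always generated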

That is why numerous people have by now suggested that you use SQL Loader, or a special form of the insert statement (called a 'direct path insert'), because those two forms of loading data *will* respect the NOLOGGING keyword if it's set.

SQL Loader takes text files from the O/S and bulk-loads them into Oracle tables. When you run SQL Loader, you can specify direct=y, and that means you are doing a 'direct load', and direct loads will respect the nologging attribute. You can also run SQL Loader with direct=n, and that's called a 'conventional load', and conventional loads will always generate redo. So if you use SQL Loader, you will want to use the direct option.
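
By way of illustration, a rough sketch (the file, table and column names and the userid are all invented):

    -- load.ctl : a minimal SQL Loader control file
    LOAD DATA
    INFILE 'big_file.dat'
    APPEND
    INTO TABLE target_table
    FIELDS TERMINATED BY ','
    (col1, col2, col3)

Then, from the O/S prompt:

    sqlldr userid=scott/tiger control=load.ctl direct=true

The direct=true switch requests the direct load described above; omit it (or set it to false) and you get a conventional load, redo and all.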

The direct path insert is a modified form of the insert statement. Instead of reading 'insert into X values ('A','B','C')', it reads 'insert /*+ APPEND */ into X values ('A','B','C')'. The append hint there causes data blocks to be constructed entirely in memory, and then slammed down on disk in a way that's guaranteed not to interfere with data that's already in the table. By doing the bulk of its work in memory, it's faster than a normal insert. And by respecting the nologging keyword, it has less overhead than a regular insert, and is thus faster again.
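
In practice, the append hint is most often used with the insert...select form, bulk-copying from a staging table. A sketch (the table names are invented):

    ALTER TABLE target_table NOLOGGING;

    INSERT /*+ APPEND */ INTO target_table
    SELECT * FROM staging_table;

    COMMIT;   -- commit (or rollback) straight away: after a direct path
              -- insert, the session can't even select from the table
              -- until the transaction ends

That last point is a wrinkle worth knowing about: between the append insert and the commit, your own session will get an error if it tries to touch the table again.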

Bear in mind that these two methods will successfully switch off the production of redo for your particular set of inserts. But *nothing* switches off the production of rollback. So that's still an overhead you will have to live with, and the only things you can do there to speed things up are to make sure that your rollback segments are big enough to start with that they never need to grow during the loading process itself, and that they are housed in a rollback tablespace which is not using the same disk resources as the table you are inserting into (so that you avoid I/O contention issues).
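
For example, something along these lines (a sketch only: the names, sizes and file path are invented, and it assumes manual rollback segment management rather than automatic undo):

    -- a dedicated tablespace on its own disk, away from the target table
    CREATE TABLESPACE rbs_load
      DATAFILE '/disk2/oradata/rbs_load01.dbf' SIZE 2000M;

    -- a large rollback segment, pre-sized so it never extends mid-load
    CREATE ROLLBACK SEGMENT rbs_big
      TABLESPACE rbs_load
      STORAGE (INITIAL 100M NEXT 100M MINEXTENTS 20);

    ALTER ROLLBACK SEGMENT rbs_big ONLINE;

    -- in the loading session, as the first statement of the transaction,
    -- pin the transaction to that segment
    SET TRANSACTION USE ROLLBACK SEGMENT rbs_big;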

As for the idea you raised in another post: doing an ALTER SYSTEM ARCHIVE LOG STOP will do nothing for you, apart from hanging your database. That switches off the ARCH process, whose job it is to copy the online redo logs when they are full. It doesn't stop the generation of redo within the online logs in the first place. You want to switch the generation of redo off entirely for this insert, not simply stop ARCH copying it. (And the problem with just switching ARCH off is that if you are still merrily filling the online logs, but ARCH can't copy them, the database will reach a point where there is no further free space in the online logs, and thus no further insert/update/delete activity becomes possible. Hence the database just stalls until you switch ARCH back on.)

However you do your load, autocommit is not a good idea, because it means issuing a commit after every insert, and that would be a massive amount of work for the database. Much better to bulk load your bazillion records and do a single massive commit at the end of it all.

It sounds like you're going to need to read up on some Oracle fundamentals, and then experiment like mad to find a combination of things to do to help speed things up.

Best of luck
HJR
