Import foibles

From: Jared Still <>
Date: Tue, 13 Jan 2004 18:24:38 -0800
Message-ID: <>

My apologies if there are multiple copies of this.

Something funky going on with email from work.


First, the basics:

System Configuration: Sun Microsystems sun4u SUN Enterprise 420R (2 X UltraSPARC-II 450MHz)
System clock frequency: 113 MHz
Memory size: 1024 Megabytes

========================= CPUs =========================
                    Run   Ecache   CPU    CPU
Brd  CPU   Module   MHz     MB    Impl.   Mask
---  ---  -------  -----  ------  ------  ----
 0     2     2      450     4.0   US-II    10.0
 0     3     3      450     4.0   US-II    10.0

The CPUs are about 95% idle. Not much memory paging activity.

The disk is (gasp!) a single RAID 5 volume. As this is a mostly read system, it (usually) doesn't matter.

Oracle is

Doing an import into the database with the following script:

imp userid=$USERNAME/$PASSWORD@$DB \
        file=/u03/tmp/${OWNER}_dv01.dmp \
        buffer=10485760 \
        fromuser=$OWNER \
        touser=$OWNER \
        ignore=y \
        commit=y \
        constraints=n \
        indexes=n \
        grants=n
Notice that the buffer is 10m and commit=y.

This job is running very slowly. Querying v$session_event reveals many and
long waits for log file sync.

                                                         TOTAL    TOTAL     TIME    AVG
USERNAME          SID EVENT                               WAITS TIMEOUTS   WAITED   WAIT
---------- ---------- ----------------------------------- -------- -------- -------- ------
JKSTILL            12 latch free                                63       49        0
                      log buffer space                        4818        2      641
                      log file switch completion                 9        0        3
                      log file sync                         628432        6   212193     34
                      db file sequential read                   27        0        3
                      file open                                  3        0        0      0
                      SQL*Net message to client            1257012        0        3      0
                      SQL*Net message from client          1257012        0   211774     17
                      SQL*Net more data from client         118572        0        9      0

9 rows selected.
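The output above can be produced with a query along these lines (a sketch; the exact column list and formatting used in the original session are my assumptions):

```sql
-- Sketch: per-session wait events for SID 12, joined to the
-- session's username. Column order approximates the listing above.
select s.username,,
       e.total_waits, e.total_timeouts,
       e.time_waited, e.average_wait
from   v$session_event e, v$session s
where =
and = 12
order by e.time_waited desc;
```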

Notice that the value for log file sync seems a bit high for a session that has been connected for a little over 2 hours.

Even so, it does accumulate rapidly. 10 seconds of activity garners 8 seconds of log file sync waits.

This is not a terribly fast system, but it should not be this slow.

The following query shows that the average blocks per commit is about 4.5.

select blocks_changed, user_commits,
   blocks_changed / user_commits blocks_per_commit
from (
   select stat.value blocks_changed
   from   v$sesstat stat, v$statname name, v$session sess
   where  stat.sid = sess.sid
   and    stat.statistic# = name.statistic#
   and = 'db block changes'
   and    stat.sid = 12
) r1,
(
   select stat.value user_commits
   from   v$sesstat stat, v$statname name, v$session sess
   where  stat.sid = sess.sid
   and    stat.statistic# = name.statistic#
   and = 'user commits'
   and    stat.sid = 12
) r2

With an 8k block, that is about 36k per commit. Somewhat less than the 10m per commit I expected.
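The arithmetic behind those numbers is simple (block size and commit interval taken from the figures above):

```python
# Back-of-the-envelope check of the commit sizes quoted above.
block_size = 8 * 1024          # 8k database block
blocks_per_commit = 4.5        # from the v$sesstat query
bytes_per_commit = blocks_per_commit * block_size

buffer_setting = 10485760      # the buffer=10485760 import parameter (10m)

print(bytes_per_commit)                    # 36864.0 -> "about 36k"
print(buffer_setting / bytes_per_commit)   # the commits are ~284x smaller than expected
```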

Suspecting that the LONG datatype in some of the tables may be the culprit, a quick perusal of TFM
reveals the following regarding the use of the LONG datatype with the imp utility:

The integer specified for BUFFER is the size, in bytes, of the buffer through which data rows are transferred. BUFFER determines the number of rows in the array inserted by Import. The following formula gives an approximation of the buffer size that inserts a given array of rows:
buffer_size = rows_in_array * maximum_row_size

For tables containing LONG, LOB, BFILE, REF, ROWID, UROWID, or DATE columns, rows are inserted individually. The size of the buffer must be large enough to contain the entire row, except for LOB and LONG columns.
If the buffer cannot hold the longest row in a table, Import attempts to allocate a larger buffer.
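Rearranging the formula just quoted shows what BUFFER buys you on a table *without* those datatypes (the row size here is an illustrative assumption, not a measured value):

```python
# buffer_size = rows_in_array * maximum_row_size   (from the imp docs)
# so for an ordinary table the insert array is roughly:
buffer_size = 10485760         # BUFFER parameter from the script above
maximum_row_size = 4000        # illustrative: e.g. a varchar2(4000) row

rows_in_array = buffer_size // maximum_row_size
print(rows_in_array)           # 2621 rows inserted per array
```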

So the buffer parameter has no effect on tables containing columns of type LONG, LOB, BFILE, REF, ROWID, UROWID, or DATE: rows in those tables are inserted one at a time.

This seems rather limiting for such an important utility. It applies to versions 8.1.7 and 9.2.0.

I ran a test to load 90k rows into 2 different tables, the only difference being that one used a long column for text and the other used a varchar2(4000).

The long table took 90 seconds to load with imp and committed every 4.5 blocks.

The varchar2 table took 9 seconds to load and committed every 1000 blocks.
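The two runs line up with the commit arithmetic (timings and block counts taken from the test above):

```python
# Ratio check for the 90k-row test above.
long_secs, long_blocks_per_commit = 90, 4.5
vc2_secs, vc2_blocks_per_commit = 9, 1000

print(long_secs / vc2_secs)                            # 10x slower with LONG
print(vc2_blocks_per_commit / long_blocks_per_commit)  # ~222x fewer commits for varchar2
```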

This is no doubt old hat to many of you, but it's the first time I can recall encountering it. I don't really use imp much.

The fact that writes on this system are *so* slow is what made it worth investigating.

This 1 gig file took 10 hours to load. On our speedy Linux dev box with fast IO it still took quite
a while, though I'm not going to run it again to get the exact timing. And that was without indexes
or constraints.

It's SQL*Loader from now on.
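For the record, a minimal SQL*Loader control file for a table with a large text column might look like this. Everything here is hypothetical (table name, column names, delimiter, field size) — a sketch, not the file I'll actually use:

```
-- Hypothetical control file: names and layout are illustrative only.
LOAD DATA
INFILE 'doc_text.dat'
APPEND
INTO TABLE doc_text
FIELDS TERMINATED BY '|'
( doc_id    INTEGER EXTERNAL,
  doc_body  CHAR(100000)    -- oversized CHAR so a LONG-ish field fits
)
```

Run with something like `sqlldr userid=$USERNAME/$PASSWORD@$DB control=doc_text.ctl`; direct path (`direct=true`) would also sidestep the per-row commit behavior entirely.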


Author: Jared Still

Fat City Network Services    -- 858-538-5051
San Diego, California        -- Mailing list and web hosting services
Received on Tue Jan 13 2004 - 20:24:38 CST
