Oracle FAQ | Your Portal to the Oracle Knowledge Grid |
comp.databases.oracle.server -- HELP: Inefficient insert
I'm trying to load several hundred rows into a table as defined below. I understand that during the insert Oracle needs to read the index to enforce the PK, but the number of disk reads increases over time. I've checked the index and it seems OK; the blevel is only 2.
The table itself has 500,000+ rows in it, but I can't understand why the number of disk reads (and therefore the execution time) grows with the number of rows already in the table. The number of rows being inserted is held constant from run to run.
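One way to double-check that the index structure itself is healthy is to validate it and query INDEX_STATS. This is a hedged sketch: the index name tn_tab_pk is an assumption, since the DDL below doesn't show the constraint name (check USER_INDEXES for the real one).

```sql
-- Validate the PK index structure; this populates the INDEX_STATS
-- view for the index just analyzed (one row, current session only).
ANALYZE INDEX tn_tab_pk VALIDATE STRUCTURE;

-- height = blevel + 1; lf_blks / br_blks show leaf vs. branch size,
-- pct_used shows how densely the index blocks are packed.
SELECT height, lf_blks, br_blks, pct_used
FROM   index_stats;
```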
Can anyone tell me why this takes more time? How can anyone do a batch insert on a VLDB? HELP!
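For the batch-insert question, one common approach on this vintage of Oracle is a direct-path insert from a staging table, which loads above the high-water mark and defers index maintenance to the end of the statement. A hedged sketch; the staging table tn_stage is hypothetical:

```sql
-- Hypothetical staging table tn_stage holds the incoming batch.
-- The APPEND hint requests a direct-path (direct-load) insert,
-- available from Oracle 8.0 onward for INSERT ... SELECT.
INSERT /*+ APPEND */ INTO tn_tab (sectid, TSTAMP, F1, F2)
SELECT sectid, TSTAMP, F1, F2
FROM   tn_stage;

COMMIT;  -- required before the session can read tn_tab again
```

Note that a direct-path insert locks the table for the duration of the statement, so it suits batch windows rather than concurrent OLTP load.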
Below are the table creation script and the tkprof output.
The particulars:
Oracle 8.0.4
Solaris 2.5
++++++++
CREATE TABLE tn_tab
( sectid VARCHAR2(20),
  TSTAMP DATE,
  F1     INTEGER,
  F2     INTEGER );
-- (the PRIMARY KEY clause referred to above was truncated from the original post)

INSERT INTO tn_tab (sectid, TSTAMP, F1, F2)
VALUES (:p0, :p1, :p2, :p3);
call     count       cpu    elapsed       disk      query    current       rows
------- ------  -------- ---------- ---------- ---------- ---------- ----------
Parse        2      0.00       0.00          0          0          0          0
Execute      1      1.12      21.12        724        445       2998        980
Fetch        0      0.00       0.00          0          0          0          0
------- ------  -------- ---------- ---------- ---------- ---------- ----------
total        3      1.12      21.12        724        445       2998        980
Misses in library cache during parse: 0
Optimizer goal: CHOOSE
Parsing user id: 26 (SRDEBUG)
Rows Execution Plan
-------  ---------------------------------------------------
      0  INSERT STATEMENT GOAL: CHOOSE

Received on Tue Dec 15 1998 - 20:04:22 CST