Bulk inserts get EXTREMELY slow after 10,000 queries (11g) [message #444303] Sat, 20 February 2010 08:43
JustinBailey
Messages: 14
Registered: December 2009
Junior Member
I am running a custom script that creates about 100,000 rows of demo data.

The table I am loading in to is fairly wide (100 columns), and only has about 10,000 rows at the moment.

The script goes really fast for the first 10K rows (100 inserts per second), then gets progressively slower. By 20,000 rows it is doing about 1 row per second. At this rate, it will never finish!

Each insert is a separate statement, using bind variables and wrapped in a single transaction. I've tried dropping the indexes first but it didn't make a difference.

OEM shows a 100% CPU bottleneck, with no other information I can glean.

Any ideas on how I can speed this up?
Re: Bulk inserts get EXTREMELY slow after 10,000 queries [message #444304 is a reply to message #444303] Sat, 20 February 2010 08:46
rahulvb
Messages: 924
Registered: October 2009
Location: Somewhere Near Equator.
Senior Member
Can you please post the query?
Re: Bulk inserts get EXTREMELY slow after 10,000 queries [message #444305 is a reply to message #444303] Sat, 20 February 2010 08:49
JustinBailey
Messages: 14
Registered: December 2009
Junior Member
INSERT INTO UMD_PATIENT (ADOPTED,BASE_EVENT_ID,BIGR_DONOR_ALIAS,BIGR_DONOR_ID,
  BIGR_HLA,BOLD_ID,CREATED_BY,CREATION_DATE_TIME,CREDIT_CARD_ID,
  EMPLOYED_YEARS,EMPLOYER,EMPLOYER_ADDRESS,EMPLOYER_PHONE,EMPLOYMENT_TYPE,
  EMR_STATUS,ETHNICITY2_ID,ETHNICITY_ID,FACILITY,FACILITY_FLAG,HEIGHT,
  HOSPITAL_ID,IDEAL_WEIGHT,INITIALS,INITIAL_BMI,INITIAL_CHIEF_COMPLAINT,
  INITIAL_HEIGHT,INITIAL_WEIGHT,LAST_REGISTRATION_STEP,LAST_UPDATED_BY,
  LAST_UPDATE_DATE_TIME,LAST_UPDATING_PROCESS,LNAME_HYPHEN,LOCATION,
  MAIDEN_NAME,MARITAL_STATUS,MENSES_ONSET,MENSES_ONSET_DATE,MNAME1,MNAME2,
  NATIONALITY_ID,OCCUPATION,OCCUPATION_TYPE,OWNER_ID,PATIENT_INITIAL_BMI,
  PATIENT_INITIAL_HEIGHT,PATIENT_INITIAL_WEIGHT,PATIENT_NUMBER,
  PATIENT_RECORD_ID,PAYMENT_DATE,PAYMENT_ID,PAYMENT_METHOD,PHARMACY,
  PHONE2_MSG,PHONE3_MSG,PHONE_MSG,PHOTO_NAME,PREFERRED_NAME,
  PRIMARY_PHYSICIAN_ID,RECORD_STATUS,REFDOC_ADDRESS,REFDOC_CITY,REFDOC_DATE,
  REFDOC_FNAME,REFDOC_LNAME,REFDOC_PHONE,REFDOC_SPECIALTY,REFDOC_STATE,
  REFDOC_ZIP,REFERRAL_DATE,REFERRAL_TYPE,REFERRING_DOCTOR_ID,
  REGISTRATION_DATE,RELIGION_ID,SMOKER,SPOUSE_DOB,SPOUSE_EMPLOYED_YEARS,
  SPOUSE_EMPLOYER,SPOUSE_EMPLOYER_ADDRESS,SPOUSE_EMPLOYMENT_STATUS,
  SPOUSE_NAME,SPOUSE_OCCUPATION,SPOUSE_SSN,SSN,SUBSCRIPTION_ID,USER_ID,
  VALID_END_DATE,VALID_START_DATE,WEEKLY_REMINDER)
VALUES
  (:varp1,:varp2,:varp3,:varp4,:varp5,:varp6,:varp7,:varp8,:varp9,:varp10,
  :varp11,:varp12,:varp13,:varp14,:varp15,:varp16,:varp17,:varp18,:varp19,
  :varp20,:varp21,:varp22,:varp23,:varp24,:varp25,:varp26,:varp27,:varp28,
  :varp29,:varp30,:varp31,:varp32,:varp33,:varp34,:varp35,:varp36,:varp37,
  :varp38,:varp39,:varp40,:varp41,:varp42,:varp43,:varp44,:varp45,:varp46,
  :varp47,:varp48,:varp49,:varp50,:varp51,:varp52,:varp53,:varp54,:varp55,
  :varp56,:varp57,:varp58,:varp59,:varp60,:varp61,:varp62,:varp63,:varp64,
  :varp65,:varp66,:varp67,:varp68,:varp69,:varp70,:varp71,:varp72,:varp73,
  :varp74,:varp75,:varp76,:varp77,:varp78,:varp79,:varp80,:varp81,:varp82,
  :varp83,:varp84,:varp85,:varp86,:varp87,:varp88)

Re: Bulk inserts get EXTREMELY slow after 10,000 queries [message #444306 is a reply to message #444305] Sat, 20 February 2010 08:54
rahulvb
Messages: 924
Registered: October 2009
Location: Somewhere Near Equator.
Senior Member
Try the APPEND hint and put the table in NOLOGGING mode.
Re: Bulk inserts get EXTREMELY slow after 10,000 queries [message #444307 is a reply to message #444306] Sat, 20 February 2010 09:01
Michel Cadot
Messages: 68737
Registered: March 2007
Location: Saint-Maur, France, https...
Senior Member
Account Moderator
rahulvb wrote on Sat, 20 February 2010 15:54
Try the APPEND hint and put the table in NOLOGGING mode.

APPEND and NOLOGGING are irrelevant with INSERT ... VALUES.

Regards
Michel

Re: Bulk inserts get EXTREMELY slow after 10,000 queries [message #444308 is a reply to message #444303] Sat, 20 February 2010 09:02
Michel Cadot
Messages: 68737
Registered: March 2007
Location: Saint-Maur, France, https...
Senior Member
Account Moderator
Quote:
Each insert is a separate statement, using bind variables and wrapped in a single transaction.

This is the error.
Use a single transaction for ALL inserts.

Regards
Michel
Re: Bulk inserts get EXTREMELY slow after 10,000 queries [message #444309 is a reply to message #444303] Sat, 20 February 2010 09:03
rahulvb
Messages: 924
Registered: October 2009
Location: Somewhere Near Equator.
Senior Member
NOLOGGING
ALTER TABLE t1 NOLOGGING;
Table altered.


Append hint

insert /*+ append */ into t
select rownum,mod(rownum,5)
from all_objects
where rownum <= 1000;


Re: Bulk inserts get EXTREMELY slow after 10,000 queries [message #444311 is a reply to message #444307] Sat, 20 February 2010 09:06
rahulvb
Messages: 924
Registered: October 2009
Location: Somewhere Near Equator.
Senior Member
Michel Cadot wrote on Sat, 20 February 2010 09:01
rahulvb wrote on Sat, 20 February 2010 15:54
Try the APPEND hint and put the table in NOLOGGING mode.

APPEND and NOLOGGING are irrelevant with INSERT ... VALUES.

Regards
Michel



Michel, NOLOGGING should help to some extent, right?


Re: Bulk inserts get EXTREMELY slow after 10,000 queries [message #444313 is a reply to message #444308] Sat, 20 February 2010 09:09
JustinBailey
Messages: 14
Registered: December 2009
Junior Member
I mistyped. Each insert statement is in the form I posted above.

ALL statements are in ONE transaction, with ONE commit at the end of 1000 rows. Is that right?

Re: Bulk inserts get EXTREMELY slow after 10,000 queries [message #444314 is a reply to message #444311] Sat, 20 February 2010 09:10
JustinBailey
Messages: 14
Registered: December 2009
Junior Member
I'll try the NOLOGGING option...
Re: Bulk inserts get EXTREMELY slow after 10,000 queries [message #444317 is a reply to message #444311] Sat, 20 February 2010 09:18
Michel Cadot
Messages: 68737
Registered: March 2007
Location: Saint-Maur, France, https...
Senior Member
Account Moderator
rahulvb wrote on Sat, 20 February 2010 16:06
Michel, NOLOGGING should help to some extent, right?

If you add the time of the backup you must take after a NOLOGGING operation, I don't see the benefit for such a small load.

I forgot to mention: NOLOGGING only takes effect for direct-path INSERT (an insert with the APPEND hint), which is not the case for INSERT ... VALUES, where the hint is ignored.

Regards
Michel
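
[Editor's note] For readers following along, a minimal sketch of what a direct-path load looks like, assuming a throwaway table t with two numeric columns (not the poster's script). The APPEND hint is also silently ignored when the target table has enabled triggers or foreign key constraints.

ALTER TABLE t NOLOGGING;

INSERT /*+ APPEND */ INTO t
SELECT ROWNUM, MOD(ROWNUM, 5)
FROM   all_objects
WHERE  ROWNUM <= 1000;

COMMIT;                 -- a direct-path insert must be committed before the table can be queried again (ORA-12838)

ALTER TABLE t LOGGING;  -- and the table should be backed up afterwards to remain recoverable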


Re: Bulk inserts get EXTREMELY slow after 10,000 queries [message #444319 is a reply to message #444313] Sat, 20 February 2010 09:21
Michel Cadot
Messages: 68737
Registered: March 2007
Location: Saint-Maur, France, https...
Senior Member
Account Moderator
JustinBailey wrote on Sat, 20 February 2010 16:09
I mistyped. Each insert statement is in the form I posted above.

ALL statements are in ONE transaction, with ONE commit at the end of 1000 rows. Is that right?

So you do not have ONE transaction.
A transaction starts with the first DML and ends at the next commit or rollback.
If you commit every 1,000 rows, then you have a transaction every 1,000 rows.

Commit only ONCE after all inserts.

Regards
Michel
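
[Editor's note] For clarity, the pattern being described is roughly the following, a hedged sketch against a made-up table demo_load rather than the poster's actual script:

BEGIN
  FOR i IN 1 .. 100000 LOOP
    -- each row still uses bind variables; only the commit placement changes
    INSERT INTO demo_load (id, payload) VALUES (i, 'row ' || i);
  END LOOP;
  COMMIT;   -- one commit for the whole load, not one per 1,000 rows
END;
/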

Re: Bulk inserts get EXTREMELY slow after 10,000 queries [message #444325 is a reply to message #444303] Sat, 20 February 2010 09:51
JustinBailey
Messages: 14
Registered: December 2009
Junior Member
The plot thickens.

I think I stayed up too late last night. I am doing ALL inserts, across ALL tables, in ONE transaction, with ONE commit at the end.

I am loading data into 2 other tables, one of which is exhibiting the same behavior. The other is not.

This table gets progressively slower after around 10K rows. It starts out doing 1,000 rows per second, but near the end (around 20K rows) it is doing 2-3 per second.

ALTER TABLE EMY1.UMD_VITALS
 DROP PRIMARY KEY CASCADE;

DROP TABLE EMY1.UMD_VITALS CASCADE CONSTRAINTS;

CREATE TABLE EMY1.UMD_VITALS
(
  EVENT_ID                NUMBER                DEFAULT 0,
  HEIGHT                  FLOAT(126),
  WEIGHT                  FLOAT(126),
  SYSTOLIC                NUMBER,
  DIASTOLIC               NUMBER,
  PULSE                   NUMBER,
  RESPIRATORY_RATE        NUMBER,
  TEMPERATURE             FLOAT(126),
  GLUCOSE_LEVEL           FLOAT(126),
  HBA1C                   FLOAT(126),
  CREATION_DATE_TIME      DATE,
  CREATED_BY              NUMBER,
  LAST_UPDATE_DATE_TIME   DATE,
  LAST_UPDATED_BY         NUMBER,
  LAST_UPDATING_PROCESS   VARCHAR2(50 BYTE),
  BMI                     FLOAT(126),
  EWL_PERCENT             FLOAT(32),
  IDEAL_WEIGHT            FLOAT(32),
  EXCESS_WEIGHT           FLOAT(32),
  TARGET_WEIGHT           FLOAT(32),
  BODY_FAT                FLOAT(32),
  FM                      NUMBER,
  FFM                     NUMBER,
  TOTAL_BODY_WATER        NUMBER,
  HYDRATION_PERCENT       FLOAT(32),
  DYNAMOMETER_L           NUMBER,
  DYNAMOMETER_R           NUMBER,
  HEAD                    FLOAT(32),
  NECK                    FLOAT(32),
  CHEST                   FLOAT(32),
  ARM                     FLOAT(32),
  WAIST                   FLOAT(32),
  HIPS                    FLOAT(32),
  THIGH                   FLOAT(32),
  WAIST_TO_HIP            FLOAT(32),
  BMR                     FLOAT(32),
  EXCESS_BMI              FLOAT(32),
  PEFR                    FLOAT(32),
  GCS                     FLOAT(32),
  O2SATURATION            VARCHAR2(255 BYTE),
  O2CONDITIONS            VARCHAR2(255 BYTE),
  POSTURAL_DROP           VARCHAR2(255 BYTE),
  SYSTOLIC_SUPINE         NUMBER,
  DIASTOLIC_SUPINE        NUMBER,
  BF_VOL_FOUND            FLOAT(32),
  BF_VOL_ADDED            FLOAT(32),
  BF_VOL_TOTAL            FLOAT(32),
  LDL                     FLOAT(32),
  MACR                    VARCHAR2(5 BYTE),
  IMPEDANCE               VARCHAR2(255 BYTE),
  SYSTOLIC2               NUMBER,
  DIASTOLIC2              NUMBER,
  WT_LOST                 FLOAT(32),
  HDL                     FLOAT(32),
  TOTAL_CHOLESTEROL       FLOAT(32),
  TRIGLYCERIDES           FLOAT(32),
  BMI_PERCENTILE          FLOAT(126),
  BMI_RISK_YOUTH          VARCHAR2(100 BYTE),
  GLYCEMIC_LOAD           NUMBER,
  WEIGHT_PERCENTILE       FLOAT(32),
  HEIGHT_PERCENTILE       FLOAT(32),
  HEAD_CIRCUMFERENCE      FLOAT(126),
  HEAD_CIRCUMFER_PERCENT  FLOAT(126),
  VISUAL_ACUITY_OD        VARCHAR2(10 BYTE),
  VISUAL_ACUITY_OS        VARCHAR2(10 BYTE),
  VISUAL_ACUITY_OU        VARCHAR2(10 BYTE),
  EBMIL_PERCENT           FLOAT(32),
  RESTING_METABOLIC_RATE  FLOAT(32)
)
TABLESPACE EMY1
PCTUSED    0
PCTFREE    10
INITRANS   1
MAXTRANS   255
STORAGE    (
            INITIAL          80K
            NEXT             1M
            MINEXTENTS       1
            MAXEXTENTS       UNLIMITED
            PCTINCREASE      0
            BUFFER_POOL      DEFAULT
           )
LOGGING 
NOCOMPRESS 
NOCACHE
NOPARALLEL
MONITORING;


CREATE UNIQUE INDEX EMY1.UMD_VITALS_PK ON EMY1.UMD_VITALS
(EVENT_ID)
LOGGING
TABLESPACE EMY1
PCTFREE    10
INITRANS   2
MAXTRANS   255
STORAGE    (
            INITIAL          64K
            NEXT             1M
            MINEXTENTS       1
            MAXEXTENTS       UNLIMITED
            PCTINCREASE      0
            BUFFER_POOL      DEFAULT
           )
NOPARALLEL;


CREATE OR REPLACE TRIGGER EMY1."TR_UMD_VITALS_X2" BEFORE DELETE ON umd_vitals FOR EACH ROW
BEGIN
  INSERT INTO umd_vitals_X
    (CREATION_DATE_TIME, CREATED_BY, LAST_UPDATE_DATE_TIME, LAST_UPDATED_BY,
     LAST_UPDATING_PROCESS, EVENT_ID, HEIGHT, WEIGHT, SYSTOLIC, DIASTOLIC,
     PULSE, RESPIRATORY_RATE, TEMPERATURE, GLUCOSE_LEVEL, HBA1C,
     actual_end_date_time)
  VALUES
    (:old.CREATION_DATE_TIME, :old.CREATED_BY, :old.LAST_UPDATE_DATE_TIME,
     :old.LAST_UPDATED_BY, :old.LAST_UPDATING_PROCESS, :old.EVENT_ID,
     :old.HEIGHT, :old.WEIGHT, :old.SYSTOLIC, :old.DIASTOLIC, :old.PULSE,
     :old.RESPIRATORY_RATE, :old.TEMPERATURE, :old.GLUCOSE_LEVEL, :old.HBA1C,
     SYSDATE);
END;
/


CREATE OR REPLACE TRIGGER EMY1."TR_UMD_VITALS_X" BEFORE UPDATE ON umd_vitals FOR EACH ROW
BEGIN
  INSERT INTO umd_vitals_X
    (CREATION_DATE_TIME, CREATED_BY, LAST_UPDATE_DATE_TIME, LAST_UPDATED_BY,
     LAST_UPDATING_PROCESS, EVENT_ID, HEIGHT, WEIGHT, SYSTOLIC, DIASTOLIC,
     PULSE, RESPIRATORY_RATE, TEMPERATURE, GLUCOSE_LEVEL, HBA1C,
     actual_end_date_time)
  VALUES
    (:old.CREATION_DATE_TIME, :old.CREATED_BY, :old.LAST_UPDATE_DATE_TIME,
     :old.LAST_UPDATED_BY, :old.LAST_UPDATING_PROCESS, :old.EVENT_ID,
     :old.HEIGHT, :old.WEIGHT, :old.SYSTOLIC, :old.DIASTOLIC, :old.PULSE,
     :old.RESPIRATORY_RATE, :old.TEMPERATURE, :old.GLUCOSE_LEVEL, :old.HBA1C,
     SYSDATE);
  SELECT SYSDATE INTO :new.last_update_date_time FROM dual;
END;
/


DROP SYNONYM MASTER_B.UMD_VITALS;

CREATE SYNONYM MASTER_B.UMD_VITALS FOR EMY1.UMD_VITALS;


ALTER TABLE EMY1.UMD_VITALS ADD (
  CONSTRAINT UMD_VITALS_PK
 PRIMARY KEY
 (EVENT_ID)
    USING INDEX 
    TABLESPACE EMY1
    PCTFREE    10
    INITRANS   2
    MAXTRANS   255
    STORAGE    (
                INITIAL          64K
                NEXT             1M
                MINEXTENTS       1
                MAXEXTENTS       UNLIMITED
                PCTINCREASE      0
               ));

ALTER TABLE EMY1.UMD_VITALS ADD (
  CONSTRAINT UMD_VITALS_FK 
 FOREIGN KEY (EVENT_ID) 
 REFERENCES EMY1.UMD_EVENT (EVENT_ID)
    ON DELETE CASCADE);

GRANT ALTER, DELETE, INDEX, INSERT, REFERENCES, SELECT, UPDATE, ON COMMIT REFRESH, QUERY REWRITE, DEBUG, FLASHBACK ON EMY1.UMD_VITALS TO MASTER_B;



This table (which is an archive table of the one above) can take 100,000 rows at pretty much constant speed:

DROP TABLE EMY1.UMD_VITALS_X CASCADE CONSTRAINTS;

CREATE TABLE EMY1.UMD_VITALS_X
(
  EVENT_ID               NUMBER,
  HEIGHT                 FLOAT(126),
  WEIGHT                 FLOAT(126),
  SYSTOLIC               NUMBER,
  DIASTOLIC              NUMBER,
  PULSE                  NUMBER,
  RESPIRATORY_RATE       NUMBER,
  TEMPERATURE            NUMBER,
  GLUCOSE_LEVEL          FLOAT(126),
  HBA1C                  FLOAT(126),
  CREATION_DATE_TIME     DATE,
  CREATED_BY             NUMBER,
  LAST_UPDATE_DATE_TIME  DATE,
  LAST_UPDATED_BY        NUMBER,
  LAST_UPDATING_PROCESS  VARCHAR2(50 BYTE),
  ACTUAL_END_DATE_TIME   DATE
)
TABLESPACE EMY1
PCTUSED    0
PCTFREE    10
INITRANS   1
MAXTRANS   255
STORAGE    (
            INITIAL          80K
            NEXT             1M
            MINEXTENTS       1
            MAXEXTENTS       UNLIMITED
            PCTINCREASE      0
            BUFFER_POOL      DEFAULT
           )
LOGGING 
NOCOMPRESS 
NOCACHE
NOPARALLEL
MONITORING;


DROP SYNONYM MASTER_B.UMD_VITALS_X;

CREATE SYNONYM MASTER_B.UMD_VITALS_X FOR EMY1.UMD_VITALS_X;


GRANT ALTER, DELETE, INDEX, INSERT, REFERENCES, SELECT, UPDATE, ON COMMIT REFRESH, QUERY REWRITE, DEBUG, FLASHBACK ON EMY1.UMD_VITALS_X TO MASTER_B;



The only real differences I can see are the triggers and indexes. Should they make that big a difference?
Re: Bulk inserts get EXTREMELY slow after 10,000 queries [message #444326 is a reply to message #444325] Sat, 20 February 2010 10:01
BlackSwan
Messages: 26766
Registered: January 2009
Location: SoCal
Senior Member
ALTER SESSION SET SQL_TRACE=TRUE;
-- invoke the SQL code
ALTER SESSION SET SQL_TRACE=FALSE;

Now find the trace file in the udump folder
tkprof <trace_file.trc> trace_results.txt explain=<username>/<password>

post the contents of trace_results.txt back here
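
[Editor's note] A side note on locating the file: on 11g the trace is usually written to the ADR trace directory rather than the old udump location. Assuming the session can query V$DIAG_INFO, this shows where it goes:

SELECT name, value
FROM   v$diag_info
WHERE  name IN ('Diag Trace', 'Default Trace File');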
Re: Bulk inserts get EXTREMELY slow after 10,000 queries [message #444327 is a reply to message #444325] Sat, 20 February 2010 10:16
ThomasG
Messages: 3212
Registered: April 2005
Location: Heilbronn, Germany
Senior Member
Is there an index on EMY1.UMD_EVENT (EVENT_ID)?

And triggers can basically mess everything up.

The only sure way to figure it out is a session trace, like BlackSwan suggested.
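
[Editor's note] One way to answer ThomasG's question from the data dictionary, assuming the connected user can see the EMY1 objects:

SELECT index_name, column_name, column_position
FROM   all_ind_columns
WHERE  table_owner = 'EMY1'
AND    table_name  = 'UMD_EVENT'
ORDER BY index_name, column_position;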
Re: Bulk inserts get EXTREMELY slow after 10,000 queries [message #444353 is a reply to message #444327] Sat, 20 February 2010 15:16
JustinBailey
Messages: 14
Registered: December 2009
Junior Member
Here is the TKPROF output for the particularly slow piece. I only let it trace for about 10 minutes, but it was during the slow portion.

I've never run one of these before, but it looks to me like it's spending all of its time in the EXECUTE phase, on CPU.

Is there anything else in here that I'm missing?

TKPROF: Release 11.2.0.1.0 - Development on Sat Feb 20 16:11:11 2010

Copyright (c) 1982, 2009, Oracle and/or its affiliates.  All rights reserved.

Trace file: ././falx_ora_22726.trc
Sort options: default

********************************************************************************
count    = number of times OCI procedure was executed
cpu      = cpu time in seconds executing
elapsed  = elapsed time in seconds executing
disk     = number of physical reads of buffers from disk
query    = number of buffers gotten for consistent read
current  = number of buffers gotten in current mode (usually for update)
rows     = number of rows processed by the fetch or execute call
********************************************************************************

SQL ID: 1mdr2dqx45nda
Plan Hash: 0
INSERT INTO UMD_PATIENT (ADOPTED,BASE_EVENT_ID,BIGR_DONOR_ALIAS,BIGR_DONOR_ID,
 BIGR_HLA,BOLD_ID,CREATED_BY,CREATION_DATE_TIME,CREDIT_CARD_ID,
 EMPLOYED_YEARS,EMPLOYER,EMPLOYER_ADDRESS,EMPLOYER_PHONE,EMPLOYMENT_TYPE,
 EMR_STATUS,ETHNICITY2_ID,ETHNICITY_ID,FACILITY,FACILITY_FLAG,HEIGHT,
 HOSPITAL_ID,IDEAL_WEIGHT,INITIALS,INITIAL_BMI,INITIAL_CHIEF_COMPLAINT,
 INITIAL_HEIGHT,INITIAL_WEIGHT,LAST_REGISTRATION_STEP,LAST_UPDATED_BY,
 LAST_UPDATE_DATE_TIME,LAST_UPDATING_PROCESS,LNAME_HYPHEN,LOCATION,
 MAIDEN_NAME,MARITAL_STATUS,MENSES_ONSET,MENSES_ONSET_DATE,MNAME1,MNAME2,
 NATIONALITY_ID,OCCUPATION,OCCUPATION_TYPE,OWNER_ID,PATIENT_INITIAL_BMI,
 PATIENT_INITIAL_HEIGHT,PATIENT_INITIAL_WEIGHT,PATIENT_NUMBER,
 PATIENT_RECORD_ID,PAYMENT_DATE,PAYMENT_ID,PAYMENT_METHOD,PHARMACY,
 PHONE2_MSG,PHONE3_MSG,PHONE_MSG,PHOTO_NAME,PREFERRED_NAME,
 PRIMARY_PHYSICIAN_ID,RECORD_STATUS,REFDOC_ADDRESS,REFDOC_CITY,REFDOC_DATE,
 REFDOC_FNAME,REFDOC_LNAME,REFDOC_PHONE,REFDOC_SPECIALTY,REFDOC_STATE,
 REFDOC_ZIP,REFERRAL_DATE,REFERRAL_TYPE,REFERRING_DOCTOR_ID,
 REGISTRATION_DATE,RELIGION_ID,SMOKER,SPOUSE_DOB,SPOUSE_EMPLOYED_YEARS,
 SPOUSE_EMPLOYER,SPOUSE_EMPLOYER_ADDRESS,SPOUSE_EMPLOYMENT_STATUS,
 SPOUSE_NAME,SPOUSE_OCCUPATION,SPOUSE_SSN,SSN,SUBSCRIPTION_ID,USER_ID,
 VALID_END_DATE,VALID_START_DATE,WEEKLY_REMINDER)
VALUES
 (:varp1,:varp2,:varp3,:varp4,:varp5,:varp6,:varp7,:varp8,:varp9,:varp10,
 :varp11,:varp12,:varp13,:varp14,:varp15,:varp16,:varp17,:varp18,:varp19,
 :varp20,:varp21,:varp22,:varp23,:varp24,:varp25,:varp26,:varp27,:varp28,
 :varp29,:varp30,:varp31,:varp32,:varp33,:varp34,:varp35,:varp36,:varp37,
 :varp38,:varp39,:varp40,:varp41,:varp42,:varp43,:varp44,:varp45,:varp46,
 :varp47,:varp48,:varp49,:varp50,:varp51,:varp52,:varp53,:varp54,:varp55,
 :varp56,:varp57,:varp58,:varp59,:varp60,:varp61,:varp62,:varp63,:varp64,
 :varp65,:varp66,:varp67,:varp68,:varp69,:varp70,:varp71,:varp72,:varp73,
 :varp74,:varp75,:varp76,:varp77,:varp78,:varp79,:varp80,:varp81,:varp82,
 :varp83,:varp84,:varp85,:varp86,:varp87,:varp88)


call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse     4621      0.02       0.21          0          0          0           0
Execute   4622    138.60     178.28        267       4846      84711        4622
Fetch        0      0.00       0.00          0          0          0           0
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total     9243    138.63     178.50        267       4846      84711        4622

Misses in library cache during parse: 0
Misses in library cache during execute: 1628
Optimizer mode: ALL_ROWS
Parsing user id: 90  (MASTER_B)

Rows     Row Source Operation
-------  ---------------------------------------------------
     0  LOAD TABLE CONVENTIONAL  (cr=1 pr=0 pw=0 time=0 us)


Rows     Execution Plan
-------  ---------------------------------------------------
     0  INSERT STATEMENT   MODE: ALL_ROWS
     0   LOAD TABLE CONVENTIONAL OF 'UMD_PATIENT'


Elapsed times include waiting on following events:
 Event waited on                             Times   Max. Wait  Total Waited
 ----------------------------------------   Waited  ----------  ------------
 SQL*Net message to client                    4621        0.00          0.02
 SQL*Net message from client                  4621        0.08         18.87
 db file sequential read                       267        0.08          1.41
 log file switch (private strand flush incomplete)
                                                 1        0.04          0.04
 latch: shared pool                             27        0.00          0.00
 library cache: mutex X                          1        0.00          0.00



********************************************************************************

OVERALL TOTALS FOR ALL NON-RECURSIVE STATEMENTS

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse     4621      0.02       0.21          0          0          0           0
Execute   4622    138.60     178.28        267       4846      84711        4622
Fetch        0      0.00       0.00          0          0          0           0
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total     9243    138.63     178.50        267       4846      84711        4622

Misses in library cache during parse: 0
Misses in library cache during execute: 1628

Elapsed times include waiting on following events:
 Event waited on                             Times   Max. Wait  Total Waited
 ----------------------------------------   Waited  ----------  ------------
 SQL*Net message to client                    4621        0.00          0.02
 SQL*Net message from client                  4621        0.08         18.87
 db file sequential read                       267        0.08          1.41
 log file switch (private strand flush incomplete)
                                                 1        0.04          0.04
 latch: shared pool                             27        0.00          0.00
 library cache: mutex X                          1        0.00          0.00


OVERALL TOTALS FOR ALL RECURSIVE STATEMENTS

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        0      0.00       0.00          0          0          0           0
Execute      0      0.00       0.00          0          0          0           0
Fetch        0      0.00       0.00          0          0          0           0
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        0      0.00       0.00          0          0          0           0

Misses in library cache during parse: 0

 4622  user  SQL statements in session.
   0  internal SQL statements in session.
 4622  SQL statements in session.
   1  statement EXPLAINed in this session.
********************************************************************************
Trace file: ././falx_ora_22726.trc
Trace file compatibility: 11.1.0.7
Sort options: default

      1  session in tracefile.
   4622  user  SQL statements in trace file.
      0  internal SQL statements in trace file.
   4622  SQL statements in trace file.
      1  unique SQL statements in trace file.
      1  SQL statements EXPLAINed using schema:
          MASTER_B.prof$plan_table
            Default table was used.
            Table was created.
            Table was dropped.
 1914864  lines in trace file.
    198  elapsed seconds in trace file.
Re: Bulk inserts get EXTREMELY slow after 10,000 queries [message #444354 is a reply to message #444353] Sat, 20 February 2010 15:27
Michel Cadot
Messages: 68737
Registered: March 2007
Location: Saint-Maur, France, https...
Senior Member
Account Moderator
So it took less than 3 minutes to load 4,622 rows.
Not so bad, especially given that you have indexes and triggers.

Regards
Michel
Re: Bulk inserts get EXTREMELY slow after 10,000 queries [message #444355 is a reply to message #444303] Sat, 20 February 2010 16:45
BlackSwan
Messages: 26766
Registered: January 2009
Location: SoCal
Senior Member
>4622 user SQL statements in trace file.
Doing 1 row per INSERT does not qualify as "BULK" to me.

INSERT INTO <table_name> SELECT might perform better.
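
[Editor's note] For illustration, a set-based load generates the rows inside one statement instead of one execute per row. A hedged sketch against a made-up demo_load table:

INSERT INTO demo_load (id, payload)
SELECT LEVEL, 'row ' || LEVEL
FROM   dual
CONNECT BY LEVEL <= 100000;

COMMIT;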
Re: Bulk inserts get EXTREMELY slow after 10,000 queries [message #444368 is a reply to message #444355] Sat, 20 February 2010 23:54
rahulvb
Messages: 924
Registered: October 2009
Location: Somewhere Near Equator.
Senior Member
BlackSwan wrote on Sat, 20 February 2010 16:45
>4622 user SQL statements in trace file.
Doing 1 row per INSERT does not qualify as "BULK" to me.

INSERT INTO <table_name> SELECT might perform better.



Good catch, BlackSwan.


Re: Bulk inserts get EXTREMELY slow after 10,000 queries [message #444405 is a reply to message #444368] Sun, 21 February 2010 13:31
JustinBailey
Messages: 14
Registered: December 2009
Junior Member
Thank you all for your help.

I'm posting a new trace file that I took today.
I started tracing after about 20K rows, so it covers, more or less, rows 20K to 30K.

At this point it is roughly 1.3 rows per second.

Is there anything here that would explain the dramatic decrease in rows per second? 25 rows per second would be tolerable, but 1 per second seems unreasonable.

OEM shows it's all CPU...

Any other ideas? If I am generating this data from a script, is there any way to take BlackSwan's advice and use some form of:

Quote:
INSERT INTO <table_name> SELECT might perform better



TKPROF: Release 11.2.0.1.0 - Development on Sun Feb 21 14:11:17 2010

Copyright (c) 1982, 2009, Oracle and/or its affiliates.  All rights reserved.

Trace file: ././falx_ora_23791.trc
Sort options: default

********************************************************************************
count    = number of times OCI procedure was executed
cpu      = cpu time in seconds executing 
elapsed  = elapsed time in seconds executing
disk     = number of physical reads of buffers from disk
query    = number of buffers gotten for consistent read
current  = number of buffers gotten in current mode (usually for update)
rows     = number of rows processed by the fetch or execute call
********************************************************************************

SQL ID: 1mdr2dqx45nda
Plan Hash: 0
INSERT INTO UMD_PATIENT (ADOPTED,BASE_EVENT_ID,BIGR_DONOR_ALIAS,BIGR_DONOR_ID,
  BIGR_HLA,BOLD_ID,CREATED_BY,CREATION_DATE_TIME,CREDIT_CARD_ID,
  EMPLOYED_YEARS,EMPLOYER,EMPLOYER_ADDRESS,EMPLOYER_PHONE,EMPLOYMENT_TYPE,
  EMR_STATUS,ETHNICITY2_ID,ETHNICITY_ID,FACILITY,FACILITY_FLAG,HEIGHT,
  HOSPITAL_ID,IDEAL_WEIGHT,INITIALS,INITIAL_BMI,INITIAL_CHIEF_COMPLAINT,
  INITIAL_HEIGHT,INITIAL_WEIGHT,LAST_REGISTRATION_STEP,LAST_UPDATED_BY,
  LAST_UPDATE_DATE_TIME,LAST_UPDATING_PROCESS,LNAME_HYPHEN,LOCATION,
  MAIDEN_NAME,MARITAL_STATUS,MENSES_ONSET,MENSES_ONSET_DATE,MNAME1,MNAME2,
  NATIONALITY_ID,OCCUPATION,OCCUPATION_TYPE,OWNER_ID,PATIENT_INITIAL_BMI,
  PATIENT_INITIAL_HEIGHT,PATIENT_INITIAL_WEIGHT,PATIENT_NUMBER,
  PATIENT_RECORD_ID,PAYMENT_DATE,PAYMENT_ID,PAYMENT_METHOD,PHARMACY,
  PHONE2_MSG,PHONE3_MSG,PHONE_MSG,PHOTO_NAME,PREFERRED_NAME,
  PRIMARY_PHYSICIAN_ID,RECORD_STATUS,REFDOC_ADDRESS,REFDOC_CITY,REFDOC_DATE,
  REFDOC_FNAME,REFDOC_LNAME,REFDOC_PHONE,REFDOC_SPECIALTY,REFDOC_STATE,
  REFDOC_ZIP,REFERRAL_DATE,REFERRAL_TYPE,REFERRING_DOCTOR_ID,
  REGISTRATION_DATE,RELIGION_ID,SMOKER,SPOUSE_DOB,SPOUSE_EMPLOYED_YEARS,
  SPOUSE_EMPLOYER,SPOUSE_EMPLOYER_ADDRESS,SPOUSE_EMPLOYMENT_STATUS,
  SPOUSE_NAME,SPOUSE_OCCUPATION,SPOUSE_SSN,SSN,SUBSCRIPTION_ID,USER_ID,
  VALID_END_DATE,VALID_START_DATE,WEEKLY_REMINDER) 
VALUES
 (:varp1,:varp2,:varp3,:varp4,:varp5,:varp6,:varp7,:varp8,:varp9,:varp10,
  :varp11,:varp12,:varp13,:varp14,:varp15,:varp16,:varp17,:varp18,:varp19,
  :varp20,:varp21,:varp22,:varp23,:varp24,:varp25,:varp26,:varp27,:varp28,
  :varp29,:varp30,:varp31,:varp32,:varp33,:varp34,:varp35,:varp36,:varp37,
  :varp38,:varp39,:varp40,:varp41,:varp42,:varp43,:varp44,:varp45,:varp46,
  :varp47,:varp48,:varp49,:varp50,:varp51,:varp52,:varp53,:varp54,:varp55,
  :varp56,:varp57,:varp58,:varp59,:varp60,:varp61,:varp62,:varp63,:varp64,
  :varp65,:varp66,:varp67,:varp68,:varp69,:varp70,:varp71,:varp72,:varp73,
  :varp74,:varp75,:varp76,:varp77,:varp78,:varp79,:varp80,:varp81,:varp82,
  :varp83,:varp84,:varp85,:varp86,:varp87,:varp88)


call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse    10619      0.00       0.41          0          0          0           0
Execute  10619   6442.51    8304.82       6652      11133     203257       10619
Fetch        0      0.00       0.00          0          0          0           0
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total    21238   6442.51    8305.23       6652      11133     203257       10619

Misses in library cache during parse: 0
Misses in library cache during execute: 1437
Optimizer mode: ALL_ROWS
Parsing user id: 90  (MASTER_B)

Rows     Row Source Operation
-------  ---------------------------------------------------
      0  LOAD TABLE CONVENTIONAL  (cr=1 pr=0 pw=0 time=0 us)


Rows     Execution Plan
-------  ---------------------------------------------------
      0  INSERT STATEMENT   MODE: ALL_ROWS
      0   LOAD TABLE CONVENTIONAL OF 'UMD_PATIENT'


Elapsed times include waiting on following events:
  Event waited on                             Times   Max. Wait  Total Waited
  ----------------------------------------   Waited  ----------  ------------
  SQL*Net message to client                   10619        0.00          0.06
  SQL*Net message from client                 10619        0.15         51.70
  db file sequential read                      6652        0.02          0.48
  latch: shared pool                           1036        0.13          2.28
  library cache: mutex X                         12        0.00          0.00
  latch: row cache objects                        1        0.00          0.00
  free buffer waits                               1        0.28          0.28
********************************************************************************

SQL ID: 9tgj4g8y4rwy8
Plan Hash: 3755742892
select type#,blocks,extents,minexts,maxexts,extsize,extpct,user#,iniexts,
  NVL(lists,65535),NVL(groups,65535),cachehint,hwmincr, NVL(spare1,0),
  NVL(scanhint,0),NVL(bitmapranges,0) 
from
 seg$ where ts#=:1 and file#=:2 and block#=:3


call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.00          0          0          0           0
Execute      1      0.00       0.00          0          0          0           0
Fetch        1      0.00       0.00          1          3          0           1
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        3      0.00       0.00          1          3          0           1

Misses in library cache during parse: 0
Optimizer mode: CHOOSE
Parsing user id: SYS   (recursive depth: 1)

Rows     Row Source Operation
-------  ---------------------------------------------------
      1  TABLE ACCESS CLUSTER SEG$ (cr=3 pr=1 pw=0 time=0 us cost=2 size=68 card=1)
      1   INDEX UNIQUE SCAN I_FILE#_BLOCK# (cr=2 pr=0 pw=0 time=0 us cost=1 size=0 card=1)(object id 9)


Elapsed times include waiting on following events:
  Event waited on                             Times   Max. Wait  Total Waited
  ----------------------------------------   Waited  ----------  ------------
  db file sequential read                         1        0.00          0.00
********************************************************************************

SQL ID: bsa0wjtftg3uw
Plan Hash: 1512486435
select file# 
from
 file$ where ts#=:1


call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.00          0          0          0           0
Execute      1      0.00       0.00          0          0          0           0
Fetch        2      0.00       0.00          0          4          0           1
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        4      0.00       0.00          0          4          0           1

Misses in library cache during parse: 0
Optimizer mode: CHOOSE
Parsing user id: SYS   (recursive depth: 1)

Rows     Row Source Operation
-------  ---------------------------------------------------
      1  TABLE ACCESS FULL FILE$ (cr=4 pr=0 pw=0 time=0 us cost=2 size=12 card=2)




********************************************************************************

OVERALL TOTALS FOR ALL NON-RECURSIVE STATEMENTS

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse    10619      0.00       0.41          0          0          0           0
Execute  10619   6442.51    8304.82       6652      11133     203257       10619
Fetch        0      0.00       0.00          0          0          0           0
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total    21238   6442.51    8305.23       6652      11133     203257       10619

Misses in library cache during parse: 0
Misses in library cache during execute: 1437

Elapsed times include waiting on following events:
  Event waited on                             Times   Max. Wait  Total Waited
  ----------------------------------------   Waited  ----------  ------------
  SQL*Net message to client                   10619        0.00          0.06
  SQL*Net message from client                 10619        0.15         51.70
  db file sequential read                      6652        0.02          0.48
  latch: shared pool                           1036        0.13          2.28
  library cache: mutex X                         12        0.00          0.00
  latch: row cache objects                        1        0.00          0.00
  free buffer waits                               1        0.28          0.28


OVERALL TOTALS FOR ALL RECURSIVE STATEMENTS

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        2      0.00       0.00          0          0          0           0
Execute      2      0.00       0.00          0          0          0           0
Fetch        3      0.00       0.00          1          7          0           2
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        7      0.00       0.00          1          7          0           2

Misses in library cache during parse: 0

Elapsed times include waiting on following events:
  Event waited on                             Times   Max. Wait  Total Waited
  ----------------------------------------   Waited  ----------  ------------
  db file sequential read                         1        0.00          0.00

10619  user  SQL statements in session.
    2  internal SQL statements in session.
10621  SQL statements in session.
    1  statement EXPLAINed in this session.
********************************************************************************
Trace file: ././falx_ora_23791.trc
Trace file compatibility: 11.1.0.7
Sort options: default

       1  session in tracefile.
   10619  user  SQL statements in trace file.
       2  internal SQL statements in trace file.
   10621  SQL statements in trace file.
       3  unique SQL statements in trace file.
       1  SQL statements EXPLAINed using schema:
           MASTER_B.prof$plan_table
             Default table was used.
             Table was created.
             Table was dropped.
 4218297  lines in trace file.
    8362  elapsed seconds in trace file.

Re: Bulk inserts get EXTREMELY slow after 10,000 queries [message #444406 is a reply to message #444405] Sun, 21 February 2010 13:42
Michel Cadot
Messages: 68737
Registered: March 2007
Location: Saint-Maur, France, https...
Senior Member
Account Moderator
You didn't answer my question:

Quote:
Not so bad, especially given that you have indexes and triggers.

The table in the trace is not the table whose DDL you posted.

Regards
Michel


Re: Bulk inserts get EXTREMELY slow after 10,000 queries [message #444407 is a reply to message #444405] Sun, 21 February 2010 13:50
BlackSwan
Messages: 26766
Registered: January 2009
Location: SoCal
Senior Member
I see a couple of things that will "degrade" performance, but they are only minor contributors to overall elapsed time.

1) The SQL "should not" be parsed every time.
2) You are making one round trip to the client for every row inserted.

The problem is simply the CPU time per row inserted, and I am scratching my head to understand why this is so.

Is it correct to state that no ON INSERT trigger exists for this table?

Even if you can do INSERT INTO <table_name> SELECT, I doubt it would change the CPU time consumed.
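
[Editor's note] If the rows have to be produced procedurally, array binding is the usual middle ground: bind a collection once and send a single execute for many rows. A hedged PL/SQL sketch against a made-up demo_load table; most client APIs offer an equivalent array-bind facility that also removes the per-row round trip.

DECLARE
  TYPE t_id_tab IS TABLE OF PLS_INTEGER;
  l_ids t_id_tab := t_id_tab();
BEGIN
  FOR i IN 1 .. 10000 LOOP
    l_ids.EXTEND;
    l_ids(l_ids.COUNT) := i;
  END LOOP;

  FORALL i IN 1 .. l_ids.COUNT      -- one execute call for all 10,000 rows
    INSERT INTO demo_load (id, payload)
    VALUES (l_ids(i), 'row ' || l_ids(i));

  COMMIT;
END;
/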
Re: Bulk inserts get EXTREMELY slow after 10,000 queries [message #444408 is a reply to message #444303] Sun, 21 February 2010 13:56
JustinBailey
Messages: 14
Registered: December 2009
Junior Member
Here is the DDL for the table in the trace.


ALTER TABLE EMY1.UMD_PATIENT
 DROP PRIMARY KEY CASCADE;

DROP TABLE EMY1.UMD_PATIENT CASCADE CONSTRAINTS;

CREATE TABLE EMY1.UMD_PATIENT
(
  PATIENT_RECORD_ID         NUMBER,
  USER_ID                   NUMBER              NOT NULL,
  OWNER_ID                  NUMBER              NOT NULL,
  SSN                       VARCHAR2(15 BYTE),
  ETHNICITY_ID              NUMBER,
  NATIONALITY_ID            NUMBER,
  SMOKER                    NUMBER,
  EMR_STATUS                VARCHAR2(255 BYTE),
  REFERRING_DOCTOR_ID       NUMBER,
  ETHNICITY2_ID             NUMBER,
  CREDIT_CARD_ID            NUMBER,
  EMPLOYMENT_TYPE           VARCHAR2(255 BYTE),
  LNAME_HYPHEN              VARCHAR2(255 BYTE),
  MARITAL_STATUS            CHAR(1 BYTE),
  MNAME1                    VARCHAR2(255 BYTE),
  MNAME2                    VARCHAR2(255 BYTE),
  OCCUPATION                VARCHAR2(255 BYTE),
  PATIENT_NUMBER            VARCHAR2(255 BYTE),
  RECORD_STATUS             VARCHAR2(255 BYTE),
  REGISTRATION_DATE         DATE,
  RELIGION_ID               NUMBER,
  WEEKLY_REMINDER           VARCHAR2(255 BYTE),
  PAYMENT_METHOD            VARCHAR2(255 BYTE)  DEFAULT NULL,
  REFERRAL_DATE             DATE,
  BASE_EVENT_ID             NUMBER,
  CREATED_BY                NUMBER,
  CREATION_DATE_TIME        DATE,
  LAST_UPDATED_BY           NUMBER,
  LAST_UPDATE_DATE_TIME     DATE,
  LAST_UPDATING_PROCESS     VARCHAR2(255 BYTE),
  VALID_START_DATE          DATE,
  VALID_END_DATE            DATE                NOT NULL,
  LOCATION                  VARCHAR2(255 BYTE),
  PAYMENT_DATE              DATE,
  PHOTO_NAME                VARCHAR2(255 BYTE),
  REFERRAL_TYPE             VARCHAR2(255 BYTE),
  PREFERRED_NAME            VARCHAR2(255 BYTE),
  MAIDEN_NAME               VARCHAR2(255 BYTE),
  PRIMARY_PHYSICIAN_ID      NUMBER,
  SUBSCRIPTION_ID           VARCHAR2(255 BYTE),
  PAYMENT_ID                VARCHAR2(255 BYTE),
  EMPLOYER                  VARCHAR2(255 BYTE),
  EMPLOYED_YEARS            FLOAT(32),
  EMPLOYER_ADDRESS          VARCHAR2(255 BYTE),
  SPOUSE_NAME               VARCHAR2(255 BYTE),
  SPOUSE_EMPLOYMENT_STATUS  VARCHAR2(255 BYTE),
  SPOUSE_OCCUPATION         VARCHAR2(255 BYTE),
  SPOUSE_SSN                VARCHAR2(255 BYTE),
  SPOUSE_DOB                DATE,
  SPOUSE_EMPLOYER           VARCHAR2(255 BYTE),
  SPOUSE_EMPLOYED_YEARS     FLOAT(32),
  SPOUSE_EMPLOYER_ADDRESS   VARCHAR2(255 BYTE),
  ADOPTED                   INTEGER,
  HEIGHT                    FLOAT(126),
  IDEAL_WEIGHT              INTEGER,
  INITIAL_WEIGHT            FLOAT(32),
  LAST_REGISTRATION_STEP    INTEGER,
  EMPLOYER_PHONE            VARCHAR2(255 BYTE),
  HOSPITAL_ID               NUMBER,
  REFDOC_LNAME              VARCHAR2(100 BYTE),
  REFDOC_FNAME              VARCHAR2(100 BYTE),
  REFDOC_ADDRESS            VARCHAR2(100 BYTE),
  REFDOC_CITY               VARCHAR2(50 BYTE),
  REFDOC_STATE              VARCHAR2(15 BYTE),
  REFDOC_ZIP                VARCHAR2(15 BYTE),
  REFDOC_PHONE              VARCHAR2(50 BYTE),
  REFDOC_SPECIALTY          VARCHAR2(50 BYTE),
  REFDOC_DATE               DATE,
  INITIALS                  VARCHAR2(5 BYTE),
  OCCUPATION_TYPE           VARCHAR2(255 BYTE),
  FACILITY                  VARCHAR2(255 BYTE),
  INITIAL_CHIEF_COMPLAINT   VARCHAR2(4000 BYTE),
  INITIAL_HEIGHT            FLOAT(126),
  INITIAL_BMI               FLOAT(126),
  PHONE2_MSG                NUMBER,
  PHONE3_MSG                NUMBER,
  PHONE_MSG                 NUMBER,
  MENSES_ONSET              VARCHAR2(20 BYTE),
  MENSES_ONSET_DATE         DATE,
  PHARMACY                  VARCHAR2(300 BYTE),
  BOLD_ID                   NUMBER,
  BIGR_DONOR_ALIAS          VARCHAR2(50 BYTE),
  BIGR_DONOR_ID             VARCHAR2(20 BYTE),
  BIGR_HLA                  VARCHAR2(50 BYTE),
  PATIENT_INITIAL_WEIGHT    FLOAT(126),
  PATIENT_INITIAL_HEIGHT    FLOAT(126),
  PATIENT_INITIAL_BMI       FLOAT(126),
  FACILITY_FLAG             CHAR(1 BYTE)
)
TABLESPACE EMY1
PCTUSED    0
PCTFREE    10
INITRANS   1
MAXTRANS   255
STORAGE    (
            INITIAL          80K
            NEXT             1M
            MINEXTENTS       1
            MAXEXTENTS       UNLIMITED
            PCTINCREASE      0
            BUFFER_POOL      DEFAULT
           )
LOGGING 
NOCOMPRESS 
NOCACHE
NOPARALLEL
MONITORING;


CREATE INDEX EMY1.UMD_PATIENT_N1 ON EMY1.UMD_PATIENT
(USER_ID)
LOGGING
TABLESPACE EMY1
PCTFREE    10
INITRANS   2
MAXTRANS   255
STORAGE    (
            INITIAL          64K
            NEXT             1M
            MINEXTENTS       1
            MAXEXTENTS       UNLIMITED
            PCTINCREASE      0
            BUFFER_POOL      DEFAULT
           )
NOPARALLEL;


CREATE INDEX EMY1.UMD_PATIENT_N2 ON EMY1.UMD_PATIENT
(VALID_START_DATE)
LOGGING
TABLESPACE EMY1
PCTFREE    10
INITRANS   2
MAXTRANS   255
STORAGE    (
            INITIAL          64K
            NEXT             1M
            MINEXTENTS       1
            MAXEXTENTS       UNLIMITED
            PCTINCREASE      0
            BUFFER_POOL      DEFAULT
           )
NOPARALLEL;


CREATE INDEX EMY1.UMD_PATIENT_N3 ON EMY1.UMD_PATIENT
(PATIENT_RECORD_ID)
LOGGING
TABLESPACE EMY1
PCTFREE    10
INITRANS   2
MAXTRANS   255
STORAGE    (
            INITIAL          64K
            NEXT             1M
            MINEXTENTS       1
            MAXEXTENTS       UNLIMITED
            PCTINCREASE      0
            BUFFER_POOL      DEFAULT
           )
NOPARALLEL;


CREATE INDEX EMY1.UMD_PATIENT_N4 ON EMY1.UMD_PATIENT
(OWNER_ID, USER_ID)
LOGGING
TABLESPACE EMY1
PCTFREE    10
INITRANS   2
MAXTRANS   255
STORAGE    (
            INITIAL          64K
            NEXT             1M
            MINEXTENTS       1
            MAXEXTENTS       UNLIMITED
            PCTINCREASE      0
            BUFFER_POOL      DEFAULT
           )
NOPARALLEL;


CREATE INDEX EMY1.UMD_PATIENT_N5 ON EMY1.UMD_PATIENT
(LOWER("PATIENT_NUMBER"))
LOGGING
TABLESPACE EMY1
PCTFREE    10
INITRANS   2
MAXTRANS   255
STORAGE    (
            INITIAL          64K
            NEXT             1M
            MINEXTENTS       1
            MAXEXTENTS       UNLIMITED
            PCTINCREASE      0
            BUFFER_POOL      DEFAULT
           )
NOPARALLEL;


CREATE UNIQUE INDEX EMY1.UMD_PATIENT_PK ON EMY1.UMD_PATIENT
(PATIENT_RECORD_ID, VALID_START_DATE)
LOGGING
TABLESPACE EMY1
PCTFREE    10
INITRANS   2
MAXTRANS   255
STORAGE    (
            INITIAL          64K
            NEXT             1M
            MINEXTENTS       1
            MAXEXTENTS       UNLIMITED
            PCTINCREASE      0
            BUFFER_POOL      DEFAULT
           )
NOPARALLEL;


CREATE UNIQUE INDEX EMY1.UMD_PATIENT_U1 ON EMY1.UMD_PATIENT
(USER_ID, OWNER_ID, VALID_START_DATE)
LOGGING
TABLESPACE EMY1
PCTFREE    10
INITRANS   2
MAXTRANS   255
STORAGE    (
            INITIAL          64K
            NEXT             1M
            MINEXTENTS       1
            MAXEXTENTS       UNLIMITED
            PCTINCREASE      0
            BUFFER_POOL      DEFAULT
           )
NOPARALLEL;


CREATE OR REPLACE TRIGGER EMY1."TR_UMD_PATIENT" 
BEFORE INSERT ON EMY1.UMD_PATIENT REFERENCING NEW AS NEW OLD AS OLD
FOR EACH ROW
WHEN (
(NEW.PATIENT_RECORD_ID IS NULL) OR (NEW.PATIENT_RECORD_ID = 0)
      )
BEGIN
 SELECT EMY1.SEQ_UMD_PATIENT.NEXTVAL INTO :NEW.PATIENT_RECORD_ID FROM DUAL;
END;
/

DROP SYNONYM MASTER_B.UMD_PATIENT;

CREATE SYNONYM MASTER_B.UMD_PATIENT FOR EMY1.UMD_PATIENT;


ALTER TABLE EMY1.UMD_PATIENT ADD (
  CONSTRAINT UMD_PATIENT_PK
 PRIMARY KEY
 (PATIENT_RECORD_ID, VALID_START_DATE)
    USING INDEX 
    TABLESPACE EMY1
    PCTFREE    10
    INITRANS   2
    MAXTRANS   255
    STORAGE    (
                INITIAL          64K
                NEXT             1M
                MINEXTENTS       1
                MAXEXTENTS       UNLIMITED
                PCTINCREASE      0
               ));



Re: Bulk inserts get EXTREMELY slow after 10,000 queries [message #444409 is a reply to message #444408] Sun, 21 February 2010 14:09
BlackSwan
Messages: 26766
Registered: January 2009
Location: SoCal
Senior Member
CREATE OR REPLACE TRIGGER emy1."TR_UMD_PATIENT"
  BEFORE INSERT ON emy1.umd_patient
  REFERENCING NEW AS NEW OLD AS OLD
  FOR EACH ROW
  WHEN ( (NEW.patient_record_id IS NULL)
        OR (NEW.patient_record_id = 0) )
BEGIN
  :NEW.patient_record_id := emy1.seq_umd_patient.nextval;
END;
/ 


Does the above perform any better?

Re: Bulk inserts get EXTREMELY slow after 10,000 queries [message #444421 is a reply to message #444406] Sun, 21 February 2010 20:59
hkchital
Messages: 128
Registered: September 2008
Location: Singapore
Senior Member
See :
call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse    10619      0.00       0.41          0          0          0           0
Execute  10619   6442.51    8304.82       6652      11133     203257       10619



The "current" mode gets (203,257 blocks for 10,619 single row inserts) is very high. These would be for updating the Indexes on the table.
Are you DELETEing rows from the table in between INSERTs ? Are these single row deletes (like the inserts) or multi-row deletes ? At what point do you COMMIT ?
Are the indexes on ASSM Tablespaces ?

Hemant K Chitale
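
[Editor's note] The ASSM question can be checked in the dictionary; a sketch, assuming DBA_TABLESPACES is visible to the session:

SELECT tablespace_name, segment_space_management
FROM   dba_tablespaces
WHERE  tablespace_name = 'EMY1';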


Re: Bulk inserts get EXTREMELY slow after 10,000 queries [message #444426 is a reply to message #444409] Sun, 21 February 2010 23:24
Michel Cadot
Messages: 68737
Registered: March 2007
Location: Saint-Maur, France, https...
Senior Member
Account Moderator
7 indexes and 1 trigger!
Remove all indexes and recreate them after the load.
Remove the trigger and include its logic (which is simple) inside your inserts.
Transform the INSERT ... VALUES into INSERT ALL, regrouping many inserts into one.

Regards
Michel
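
[Editor's note] By way of illustration, a hedged sketch against a made-up demo_load table: INSERT ALL lets one statement carry several rows.

INSERT ALL
  INTO demo_load (id, payload) VALUES (1, 'row 1')
  INTO demo_load (id, payload) VALUES (2, 'row 2')
  INTO demo_load (id, payload) VALUES (3, 'row 3')
SELECT 1 FROM dual;

COMMIT;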
Re: Bulk inserts get EXTREMELY slow after 10,000 queries [message #444430 is a reply to message #444426] Sun, 21 February 2010 23:46
Frank
Messages: 7901
Registered: March 2000
Senior Member
This is a very basic trigger, and, considering the INSERTs are done with VALUES, I don't see how moving this trigger into the inserting statement would speed things up.
These kinds of triggers (ID, audit columns) are pretty standard in the majority of applications I have seen (it probably differs per country, just like some other DB habits), but they have never been the culprit for bad performance, as far as I know.
Re: Bulk inserts get EXTREMELY slow after 10,000 queries [message #444438 is a reply to message #444430] Mon, 22 February 2010 00:40
Michel Cadot
Messages: 68737
Registered: March 2007
Location: Saint-Maur, France, https...
Senior Member
Account Moderator
Quote:
I don't see how moving this trigger into the inserting statement would speed things up.

Simply because it avoids the call to the PL/SQL engine, the execution of the trigger (with locking on SGA structures) and the return to the SQL engine, and so it reduces the CPU. When CPU is 3/4 of the elapsed time, reducing it is important.

Regards
Michel
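
[Editor's note] What "include its logic inside your inserts" could look like, as a sketch with an abbreviated column list (the real statement binds all 88 columns): fetch the key from the sequence in the statement itself so the BEFORE INSERT trigger can be dropped for the load.

INSERT INTO umd_patient (patient_record_id, user_id, owner_id, valid_end_date)
VALUES (emy1.seq_umd_patient.NEXTVAL, :user_id, :owner_id, :valid_end_date);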


Re: Bulk inserts get EXTREMELY slow after 10,000 queries [message #444439 is a reply to message #444426] Mon, 22 February 2010 00:41
Michel Cadot
Messages: 68737
Registered: March 2007
Location: Saint-Maur, France, https...
Senior Member
Account Moderator
Michel Cadot wrote on Mon, 22 February 2010 06:24
7 indexes and 1 trigger!
Remove all indexes and recreate them after the load.
Remove the trigger and include its logic (which is simple) inside your inserts.
Transform the INSERT ... VALUES into INSERT ALL, regrouping many inserts into one.

Regards
Michel

I will add: preallocate the space needed for all these rows.

Regards
Michel

Re: Bulk inserts get EXTREMELY slow after 10,000 queries [message #444551 is a reply to message #444439] Mon, 22 February 2010 08:10
JustinBailey
Messages: 14
Registered: December 2009
Junior Member
Thanks for the suggestions. I'll try to run it again tonight with the indexes removed and the triggers disabled.

Michel, what do you mean by pre-allocating the space? Is it as simple as making sure the tablespace has a datafile with enough free MB to fit all the rows?

Re: Bulk inserts get EXTREMELY slow after 10,000 queries [message #444553 is a reply to message #444551] Mon, 22 February 2010 08:14
Michel Cadot
Messages: 68737
Registered: March 2007
Location: Saint-Maur, France, https...
Senior Member
Account Moderator
Not only that, but also that the table has sufficient allocated space for all the rows you want to insert.
In your trace, there are 2 recursive queries indicating that the table was extended (once) during the inserts; this cost you 1 minute just for the queries (plus the internal space management).

Regards
Michel
Re: Bulk inserts get EXTREMELY slow after 10,000 queries [message #444555 is a reply to message #444303] Mon, 22 February 2010 08:27
JustinBailey
Messages: 14
Registered: December 2009
Junior Member
Could you explain how one would pre-allocate space for a specific table?
Re: Bulk inserts get EXTREMELY slow after 10,000 queries [message #444556 is a reply to message #444555] Mon, 22 February 2010 08:30
Michel Cadot
Messages: 68737
Registered: March 2007
Location: Saint-Maur, France, https...
Senior Member
Account Moderator
ALTER TABLE ALLOCATE EXTENT

Regards
Michel
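
[Editor's note] For example (the extent sizes here are invented; size or repeat the statements to cover the expected data volume):

ALTER TABLE emy1.umd_patient ALLOCATE EXTENT (SIZE 100M);
ALTER INDEX emy1.umd_patient_pk ALLOCATE EXTENT (SIZE 20M);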