RE: Quite interesting performance problem.

From: Chitale, Hemant K <>
Date: Mon, 22 Jun 2015 02:53:02 +0000
Message-ID: <>

Was the COMMIT every 'N' rows? Or after all 4 million rows were inserted?

The GTT could have been either ON COMMIT PRESERVE ROWS or ON COMMIT DELETE ROWS. In the latter case, after every COMMIT (or at the end), rows would be "lost".
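For reference, the two GTT variants look like this (table and column names here are illustrative, not from the original post):

```sql
-- Rows survive a COMMIT; they are cleared only when the session ends.
CREATE GLOBAL TEMPORARY TABLE my_gtt_preserve (
  id       NUMBER,
  payload  VARCHAR2(200)
) ON COMMIT PRESERVE ROWS;

-- Rows are deleted at every COMMIT.
CREATE GLOBAL TEMPORARY TABLE my_gtt_delete (
  id       NUMBER,
  payload  VARCHAR2(200)
) ON COMMIT DELETE ROWS;
```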

Was the GTT recreated in the upgrade ?

Hemant K Chitale

-----Original Message-----

From: [] On Behalf Of Howard Latham
Sent: Friday, June 19, 2015 8:06 PM
Subject: Quite interesting performance problem.

RH Linux 64bit E4
Oracle 11.2

In case anyone runs into this problem I thought I'd share it. We have some code that ran fine on Oracle in about 30 minutes; when we moved, it took over 3 hours.
It wrote 3/4 million rows to a temp table and then wrote them to disk using UTL_FILE.

Nothing showed up in the traces, and nothing helped: no tuning, no fiddling with spin count, no moving disks, no analyzing tables, no watching Spotlight for hours.

Eventually our developer tried to reproduce the problem and discovered there was no commit after writing to the temp table. Adding a commit returned the speed to what we used to get. I agree that "commit little and often" is, or should be, a developer's mantra - along with "beware of nulls". However, some behaviour in a temporary buffer has clearly changed. But I acknowledge that the code was 'wrong' in the first place.
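A minimal PL/SQL sketch of the corrected pattern, assuming the GTT is defined ON COMMIT PRESERVE ROWS (with DELETE ROWS the commit would empty the table before the export, which is the point raised in the reply above). All object, directory, and file names are hypothetical, not from the original post:

```sql
DECLARE
  f UTL_FILE.FILE_TYPE;
BEGIN
  -- Populate the global temporary table from the source data.
  INSERT INTO my_gtt (id, payload)
  SELECT id, payload FROM source_table;

  -- The commit that was missing in the slow version.
  -- Safe only if my_gtt is ON COMMIT PRESERVE ROWS.
  COMMIT;

  -- Spool the temp-table contents to a flat file with UTL_FILE.
  f := UTL_FILE.FOPEN('EXPORT_DIR', 'extract.txt', 'w');
  FOR r IN (SELECT id, payload FROM my_gtt) LOOP
    UTL_FILE.PUT_LINE(f, r.id || ',' || r.payload);
  END LOOP;
  UTL_FILE.FCLOSE(f);
END;
/
```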


Howard A. Latham

This email and any attachments are confidential and may also be privileged. If you are not the intended recipient, please delete all copies and notify the sender immediately. You may wish to refer to the incorporation details of Standard Chartered PLC, Standard Chartered Bank and their subsidiaries.

Received on Mon Jun 22 2015 - 04:53:02 CEST
