Slow update with Bulk collect and Forall [message #269832] Mon, 24 September 2007 15:35
aliceg
Messages: 29
Registered: July 2007
Location: Los Angeles
Junior Member

I posted a while ago a procedure that uses BULK COLLECT and FORALL to update the not-null columns in a massive table (probably billions of rows).
I ran a trace; below are the update statement and its execution plan:

UPDATE SALES_DATA SET STORE_MAX = :B4 , STORE_MIN = :B3 , FACINGS = :B2 , 
  FIXTURE_FILL = :B1 
WHERE
 ITEM_ID = :B6 AND LOCATION_ID = :B5 AND STORE_MAX IS NOT NULL AND STORE_MIN 
  IS NOT NULL AND FACINGS IS NOT NULL


call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.01       0.00          0          0          0           0
Execute     56     72.87    1675.80     250355     521880      31720       31243
Fetch        0      0.00       0.00          0          0          0           0
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total       57     72.88    1675.81     250355     521880      31720       31243

Misses in library cache during parse: 1
Optimizer goal: CHOOSE
Parsing user id: 28     (recursive depth: 1)

Rows     Row Source Operation
-------  ---------------------------------------------------
      0  UPDATE  
  31243   TABLE ACCESS BY INDEX ROWID SALES_DATA 
1166844    INDEX RANGE SCAN SALES_DATA_ORDER_PK (object id 7719611)

******************************************************************************


I recently added an explicit COMMIT and I am planning to increase the bulk LIMIT to 1000. Is this a good approach, and is it enough? Please advise on any other solutions.

I also enclosed the whole tkprof report.
Re: Slow update with Bulk collect and Forall [message #270153 is a reply to message #269832] Tue, 25 September 2007 23:16
rleishman
Messages: 3724
Registered: October 2005
Location: Melbourne, Australia
Senior Member
What columns does SALES_DATA_ORDER_PK contain?

Ross Leishman
Re: Slow update with Bulk collect and Forall [message #270386] Wed, 26 September 2007 14:01
aliceg
Messages: 29
Registered: July 2007
Location: Los Angeles
Junior Member

The index is on location_id, item_id, and sales_date.
Re: Slow update with Bulk collect and Forall [message #270429 is a reply to message #270386] Wed, 26 September 2007 21:32
rleishman
Messages: 3724
Registered: October 2005
Location: Melbourne, Australia
Senior Member
And how many rows are there for each location_id, item_id combination? Run this:
SELECT count(*), max(cnt), min(cnt), avg(cnt), stddev(cnt)
FROM (
  SELECT location_id, item_id, count(*) AS cnt
  FROM sales_data
  group by location_id, item_id
)

And how many rows for a given location_id, item_id being updated will have STORE_MAX IS NOT NULL AND STORE_MIN IS NOT NULL AND FACINGS IS NOT NULL?


The reason I ask is that the index will only narrow the search down to all rows for the location_id, item_id. Say there are 1000. Oracle then has to fetch all 1000 rows from the table to determine whether the other predicates are satisfied, and discard the ones that are not.

Say it only needs to update 1 row; that means you are reading 1000 rows for the benefit of 1 update.
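To illustrate: a sketch of what the statement could look like if the driving cursor supplied the full key (the :b7 bind is hypothetical; this assumes the primary key is (location_id, item_id, sales_date) as stated above). With all three key columns bound, the range scan collapses to a single-row lookup, so no extra table rows are fetched and discarded:

```sql
-- Sketch only: :b7 (sales_date) is a hypothetical extra bind.
-- All three PK columns bound => the index identifies exactly one row.
UPDATE sales_data
   SET store_max    = :b4,
       store_min    = :b3,
       facings      = :b2,
       fixture_fill = :b1
 WHERE location_id  = :b5
   AND item_id      = :b6
   AND sales_date   = :b7
   AND store_max IS NOT NULL
   AND store_min IS NOT NULL
   AND facings   IS NOT NULL;
```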

Ross Leishman
Re: Slow update with Bulk collect and Forall [message #270599 is a reply to message #270429] Thu, 27 September 2007 11:17
aliceg
Messages: 29
Registered: July 2007
Location: Los Angeles
Junior Member

SELECT count(*), max(cnt), min(cnt), avg(cnt), stddev(cnt)
FROM (
  SELECT location_id, item_id, count(*) AS cnt
  FROM sales_data
  group by location_id, item_id
)
COUNT(*)  MAX(CNT)  MIN(CNT)  AVG(CNT)     STDDEV(CNT)
 1071870        60        32  34.0675623   4.640697091



And yes, you are right: it is doing a full scan of, say, 1000 records just to determine that only 6 or so have store_min, store_max and facings NOT NULL.

I implemented a new idea with a temporary table that stores the records matching my t_src_matrix on location_id and item_id, isolating only those whose store_min, store_max and facings are not null. However, the update did not use sales_date, which is part of the primary key index (item_id, location_id, sales_date), so it would update even those rows where the above values are NULL.
I will add sales_date to my temp table as well and measure the time it takes.
Any other ideas are greatly appreciated.
Re: Slow update with Bulk collect and Forall [message #270739 is a reply to message #270599] Thu, 27 September 2007 22:47
rleishman
Messages: 3724
Registered: October 2005
Location: Melbourne, Australia
Senior Member
That's not a bad idea. Do a single full table scan to get all of the candidates into a temp table (including the SALES_DATE), index the temp table, apply the updates transactionally, and then merge back into the main table.
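A sketch of the indexing step (the index name is hypothetical; this assumes the temp table carries the full sales_data key):

```sql
-- Hypothetical index name: cover the full sales_data primary-key columns
-- so each merged row can be located with a single index lookup.
CREATE INDEX temp_reset_change_param_ix
    ON temp_reset_change_param (item_id, location_id, sales_date);
```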

Depending on data volumes, it may be slightly better (although probably not orders of magnitude; say <50% improvement) to leave the main table alone at first, INSERT the rows into the temp table instead of updating the main table, and then use a SQL MERGE statement to merge the temp table into the main table. This method may produce unpredictable results if you cannot guarantee a 1-to-1 relationship between the temp and main tables, though.
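A minimal sketch of that MERGE (column list assumed from the temp table defined later in the thread; it presumes one temp row per (item_id, location_id, sales_date) key, matching the 1-to-1 caveat):

```sql
-- Sketch: requires each target row to match at most one source row,
-- otherwise MERGE raises ORA-30926 (unable to get a stable set of rows).
MERGE INTO sales_data sd
USING temp_reset_change_param t
   ON (    sd.item_id     = t.item_id
       AND sd.location_id = t.location_id
       AND sd.sales_date  = t.sales_date)
WHEN MATCHED THEN UPDATE
   SET sd.store_max    = t.store_max,
       sd.store_min    = t.store_min,
       sd.facings      = t.facings,
       sd.fixture_fill = t.fixture_fill;
```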

The only other alternative: is there any way you can get the sales_date into the main cursor so that it is available for the UPDATE?

Ross Leishman
Re: Slow update with Bulk collect and Forall [message #271023 is a reply to message #269832] Fri, 28 September 2007 14:13
BlackSwan
Messages: 25046
Registered: January 2009
Location: SoCal
Senior Member
Alice,
Are the statistics current for all tables & indexes involved?
Re: Slow update with Bulk collect and Forall [message #271055 is a reply to message #270739] Fri, 28 September 2007 18:27
aliceg
Messages: 29
Registered: July 2007
Location: Los Angeles
Junior Member

Actually, this is what I did. I included the sales_date in the selection, and I update the needed values in the sales_data table based on a match by item_id, location_id, and sales_date (this is also the PK of sales_data, the table being updated). The results are quite spectacular. We will continue validating to confirm that the fast version produces the same results as the slow one.
This is quite a learning experience.
Thanks so much for all your help. -- Alice

Complete code is provided below.
/** This script improves the execution speed for 'CHG-LIVE' records in the
    dr.reset_change_param procedure. It uses a temp table that stores only
    the records matching on location_id and item_id between t_src_matrix and
    dr.sales_data and whose store_min, store_max and facings are not null.
    From this table we BULK COLLECT and, with FORALL, update only the not-null records.
    By Alice G. on 09/25/07
*/
--DROP TABLE temp_reset_change_param;

CREATE TABLE temp_reset_change_param
(ITEM_ID         NUMBER(10,0) NOT NULL,
LOCATION_ID      NUMBER(10,0)     NULL,
STORE_MIN        NUMBER(10,0)     NULL,
STORE_MAX        NUMBER(10,0)     NULL,
FACINGS          NUMBER(10,0)     NULL,
FIXTURE_FILL     NUMBER           NULL,
TRANSIT_TIME     NUMBER(10,0)     NULL,
EFFECTIVE_DATE   VARCHAR2(200)    NULL,
SALES_DATE       DATE             NULL
);


CREATE OR REPLACE PROCEDURE RESET_CHANGE_PARAM(
   process_date   IN   DATE DEFAULT TRUNC (SYSDATE)
)
IS
    CURSOR  chg_live_cur is
   SELECT * FROM temp_reset_change_param;

	CURSOR chg_rst_cur IS
				SELECT vi.item_id, vl.location_id, mt.store_min, mt.store_max, mt.facings, mt.fixture_fill,
          			   mt.transit_time, mt.effective_date
 	 			  FROM dp.t_src_matrix mt,
       					items vi,
       					t_ep_Catal tc,
       					location vl,
	   					t_ep_shinr ts
				 WHERE (mt.action_code = 'CHANGEPLUS'   OR
                       mt.action_code = 'CHANGE_MINUS' OR
                       mt.action_code = 'CHG-RST'      OR
                       mt.action_code = 'CHANGE')
				   AND mt.catalog = tc.catal
  				   AND tc.t_ep_catal_ep_id = vi.t_ep_catal_ep_id
  				   AND mt.ship_to_number = ts.shinr
  				   AND vl.t_ep_shinr_ep_id = ts.t_ep_shinr_ep_id;

	-- collection of scalars
	TYPE itemloc_typ IS TABLE OF sales_data.item_id%TYPE INDEX BY PLS_INTEGER;
    TYPE stf_typ IS TABLE OF sales_data.store_max%TYPE INDEX BY PLS_INTEGER;
    TYPE fixfill_typ IS TABLE OF sales_data.fixture_fill%TYPE INDEX BY PLS_INTEGER;
    TYPE action_typ IS TABLE OF dp.t_src_matrix.action_code%TYPE INDEX BY PLS_INTEGER;
    TYPE date_typ IS TABLE OF dp.t_src_matrix.effective_date%TYPE INDEX BY PLS_INTEGER;

action action_typ;
item itemloc_typ;
loc  itemloc_typ;


stmin stf_typ;
stmax stf_typ;
facing stf_typ;
fixfill fixfill_typ;

itemN itemloc_typ;
locN itemloc_typ;
stminN stf_typ;
stmaxN stf_typ;
facingN stf_typ;
fixfillN fixfill_typ;


--Date and time variables
trTime stf_typ;
efDate date_typ;
nb_err  Pls_integer ;
total  Pls_integer := 0 ;
salDate date_typ;



v_start_date  DATE := SYSDATE;
v_end_date   DATE ;
cnt_rec    NUMBER := 0;
cnt_rec_ch NUMBER := 0;
v_time  NUMBER := 0;
v_seq   NUMBER := 0;


    -- create an exception handler for ORA-24381
    dml_errors EXCEPTION;
    PRAGMA EXCEPTION_INIT(dml_errors, -24381);

BEGIN
    dynamic_ddl('truncate table temp_reset_change_param');
    
    INSERT INTO temp_reset_change_param
    SELECT vi.item_id, vl.location_id, mt.store_min, mt.store_max, mt.facings, mt.fixture_fill,
          mt.transit_time, mt.effective_date, sd.sales_date
 	 FROM dp.t_src_matrix mt,
       					items vi,
       					t_ep_Catal tc,
       					location vl, 
	   					t_ep_shinr ts,
	   					sales_data sd
				 WHERE  mt.action_code = 'CHG-LIVE'
  				  AND  mt.catalog = tc.catal
  				  AND  tc.t_ep_catal_ep_id = vi.t_ep_catal_ep_id
  				  AND  mt.ship_to_number = ts.shinr
  				  AND  vl.t_ep_shinr_ep_id = ts.t_ep_shinr_ep_id
  				  AND  vi.item_id = sd.item_id
  				  AND  vl.location_id = sd.location_id
  				  AND  sd.store_min IS NOT NULL 
                  AND  sd.store_max IS NOT NULL 
                  AND  sd.facings IS NOT NULL;
     COMMIT;

	-- COUNT(*) always returns exactly one row, so no NO_DATA_FOUND handler is needed
	SELECT COUNT(*) INTO cnt_rec FROM dr.temp_reset_change_param;

 --Check the content of chg_rst_cur (again, COUNT(*) always returns a row)
	SELECT COUNT(*) INTO cnt_rec_ch
	  FROM dp.t_src_matrix
	 WHERE action_code IN ('CHANGEPLUS', 'CHANGE_MINUS', 'CHG-RST', 'CHANGE');

 IF cnt_rec > 0 THEN
	OPEN chg_live_cur;
 LOOP
     FETCH chg_live_cur BULK COLLECT INTO
	 item, loc, stmin, stmax, facing, fixfill, trTime, efDate, salDate LIMIT 1000;
     EXIT WHEN item.COUNT = 0;  -- a FORALL over empty collections would raise an error

	 BEGIN
	 FORALL i IN item.FIRST .. item.LAST SAVE EXCEPTIONS
            UPDATE sales_data
               SET store_max = stmax(i),
                   store_min = stmin(i),
                     facings = facing(i),
                fixture_fill = fixfill(i)
             WHERE item_id = item(i)
               AND location_id = loc(i)
			   AND sales_date = salDate(i);
     COMMIT;
     EXCEPTION
     WHEN dml_errors THEN
        -- report bulk DML errors instead of silently swallowing them
        FOR j IN 1 .. SQL%BULK_EXCEPTIONS.COUNT LOOP
            DBMS_OUTPUT.put_line('Error in CHG-LIVE: ' ||
                SQLERRM(-SQL%BULK_EXCEPTIONS(j).ERROR_CODE));
        END LOOP;
    END;
EXIT WHEN chg_live_cur%NOTFOUND;
END LOOP;
CLOSE chg_live_cur;
ELSE NULL;
END IF;


--For Chg-Rst
 IF cnt_rec_ch > 0 THEN
	OPEN chg_rst_cur;
 LOOP
     FETCH chg_rst_cur BULK COLLECT INTO
	 item, loc, stmin, stmax, facing, fixfill, trTime, efDate LIMIT 1000;
     EXIT WHEN item.COUNT = 0;  -- guard against an empty final fetch
     --Update with values from the cursor if sales_date < ef_date - trtime
	 BEGIN
	 FORALL i IN item.FIRST .. item.LAST save EXCEPTIONS

	  UPDATE dr.sales_data
       SET store_max = NVL(store_max, stmax(i)),
           store_min = NVL(store_min, stmin(i)),
             facings = NVL (facings, facing(i)),
        fixture_fill = NVL (fixture_fill, fixfill(i))
     WHERE   item_id = item(i) AND
         location_id = loc(i) AND
	       sales_date < TO_DATE (efDate(i), 'DD-MON-YYYY') - trTime(i) AND
          (item_id, location_id) IN
          (SELECT s.item_id, s.location_id
             FROM mdp_matrix m, sales_data s
            WHERE s.item_id = item(i)
              AND s.location_id = loc(i)
              AND s.sales_date = TRUNC(PROCESS_DATE)
              AND s.item_id = m.item_id
              AND s.location_id = m.location_id
              AND (NVL (s.store_max, m.store_max) <> stmax(i)
               OR  NVL (s.store_min, m.store_min) <> stmin(i)
               OR  NVL (s.facings, m.facings) <> facing(i)
               OR  NVL (s.fixture_fill, m.fixture_fill) <> fixfill(i))
			);
	COMMIT;
    EXCEPTION
    WHEN dml_errors THEN
        -- report bulk DML errors instead of silently swallowing them
        FOR j IN 1 .. SQL%BULK_EXCEPTIONS.COUNT LOOP
            DBMS_OUTPUT.put_line('Error in CHG-RST update: ' ||
                SQLERRM(-SQL%BULK_EXCEPTIONS(j).ERROR_CODE));
        END LOOP;
    END;

	BEGIN
	 FORALL i IN item.FIRST .. item.LAST save EXCEPTIONS
       --Set to NULL if sales_date > ef_date - transit time

               UPDATE dr.sales_data
                  SET store_max = NULL,
                      store_min = NULL,
                      facings = NULL,
                      fixture_fill = NULL
                WHERE item_id = item(i)
                  AND location_id = loc(i)
                  AND sales_date >=
                           TO_DATE (efDate(i), 'DD-MON-YYYY')
                         - (trTime(i));
     COMMIT;
     EXCEPTION
     WHEN dml_errors THEN
        -- report bulk DML errors instead of silently swallowing them
        FOR j IN 1 .. SQL%BULK_EXCEPTIONS.COUNT LOOP
            DBMS_OUTPUT.put_line('Error in CHG-RST reset: ' ||
                SQLERRM(-SQL%BULK_EXCEPTIONS(j).ERROR_CODE));
        END LOOP;
     END;
EXIT WHEN chg_rst_cur%NOTFOUND;
END LOOP;
CLOSE chg_rst_cur;
ELSE NULL;
END IF;

EXCEPTION WHEN NO_DATA_FOUND THEN NULL;
END;
/

Re: Slow update with Bulk collect and Forall [message #271057 is a reply to message #271023] Fri, 28 September 2007 18:31
aliceg
Messages: 29
Registered: July 2007
Location: Los Angeles
Junior Member

The 500-record bulk collect was run a while ago, but we ran the 1000-record version this week.
Thank You --ag