performance problem with partitioned table query.

From: Jessica Mao <jmao_at_portal.com>
Date: Wed, 23 Jan 2002 15:39:43 -0800
Message-ID: <F001.003F9437.20020123151519@fatcity.com>

Oracle 8.1.7.0.0

The table event_t is range partitioned on column poid_id0. Only 1 of its 14 partitions, p_1, contains data. A query on event_t became significantly slower as the row count grew:

select poid_DB, poid_ID0, poid_TYPE, poid_REV, start_t, end_t, sys_descr
from event_t
where event_t.end_t >= :1
  and event_t.end_t < :2
  and event_t.poid_TYPE like :3
  and (event_t.account_obj_ID0 = :4 and event_t.account_obj_DB = 1)
order by event_t.end_t desc

Rows     Execution Plan
-------  ---------------------------------------------------
      0  SELECT STATEMENT   GOAL: CHOOSE
      0   SORT (ORDER BY)
      0    PARTITION RANGE (ALL) PARTITION: START=1 STOP=14
      0     TABLE ACCESS   GOAL: ANALYZED (BY LOCAL INDEX ROWID) OF
                'EVENT_T' PARTITION: START=1 STOP=14
      0      INDEX   GOAL: ANALYZED (RANGE SCAN) OF
               'I_EVENT__ACCTOBJ_END_T' (NON-UNIQUE) PARTITION: START=1
                 STOP=14

Index I_EVENT__ACCTOBJ_END_T is a LOCAL index created on event_t ( account_obj_id0, end_t ).
The other 2 columns involved in the where clause have either only one distinct value or just a few, so they are not indexed.
Column account_obj_id0 has 1 million unique values in event_t, and that number remained unchanged during the tests. As rows are inserted, the average number of rows per account_obj_id0 value increases as well.
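To put numbers on that growth, here is a quick back-of-the-envelope calculation using the row totals quoted in this post (a sketch; the exact per-key distribution in the real table is of course not uniform):

```python
# Average rows per account_obj_id0 value at each test stage, assuming the
# row totals quoted in this post and a constant 1 million distinct keys.
distinct_keys = 1_000_000

stages = {
    "round 1, step 1": 1_000_000,
    "round 1, step 2": 2_500_000,
    "round 2, step 2": 6_000_000,
    "round 2, step 3": 8_000_000,
}

# rows per key grows linearly with the table, even though key count is fixed
avg_rows = {label: total / distinct_keys for label, total in stages.items()}
for label, avg in avg_rows.items():
    print(f"{label}: {avg:.1f} rows per account_obj_id0")
```

So each index range scan on (account_obj_id0, end_t) has a few more rows to visit per key as the table grows, but only by a small constant factor — which is why the row count alone does not explain the jump in elapsed time below.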

The trace always shows the same execution plan, but elapsed time increased enormously! I did 2 rounds of tests; before each round I dropped event_t and recreated it empty:

In test round 1:
1.) Inserted 1 million rows into event_t with the same end_t value. Query returned:

call     count       cpu    elapsed       disk      query    current        rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse       23      0.02       0.09          0          0          0           0
Execute    156      0.02       0.29          0          0          0           0
Fetch      156      0.14       1.09          8       2698          0         195

------- ------ -------- ---------- ---------- ---------- ---------- ----------
total      335      0.18       1.47          8       2698          0         195

2.) Inserted ANOTHER 1.5 million rows into event_t with 10,000+ different end_t values. Query returned:

call     count       cpu    elapsed       disk      query    current        rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse       36      0.00       0.04          0          0          0           0
Execute    118      0.01       0.01          0          0          0           0
Fetch      118      0.61      86.71       1385       5045          0         587

------- ------ -------- ---------- ---------- ---------- ---------- ----------
total      272      0.62      86.76       1385       5045          0         587

In test round 2:
1.) Inserted 1 million rows into event_t with the same end_t value. The query returned figures similar to round 1, step 1.)

2.) Inserted ANOTHER 5 million rows into event_t, all with ANOTHER single end_t value. Query returned:

call     count       cpu    elapsed       disk      query    current        rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse       40      0.00       0.11          0          0          0           0
Execute    139      0.02       0.12          0          0          0           0
Fetch      139      0.25       4.66        303       2868          0         761

------- ------ -------- ---------- ---------- ---------- ---------- ----------
total      318      0.27       4.89        303       2868          0         761

3.) Inserted ANOTHER 2 million rows into event_t with 12,000+ different end_t values. Query returned:

call     count       cpu    elapsed       disk      query    current        rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse       34      0.01       0.01          0          0          0           0
Execute     97      0.00       0.06          0          0          0           0
Fetch       97      0.58      89.93       1257       4260          0         614

------- ------ -------- ---------- ---------- ---------- ---------- ----------
total      228      0.59      90.00       1257       4260          0         614

In test round 1, elapsed time increased 60-fold going from 1 million to 2.5 million rows. In round 2 it increased 3-fold from 1 to 6 million rows, but 18-fold from 6 to 8 million rows. So the number of rows in event_t is not the primary culprit for the large physical reads; it is more likely the number of distinct end_t values. I had always thought that for an indexed column, the more distinct values the better. But what is going on in this case? Am I missing anything?
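One plausible explanation (my reading, not confirmed by the trace) is index clustering rather than index selectivity: when every row carries the same end_t, index key order tracks insert order, so consecutive index entries point into the same table blocks; with many distinct end_t values, key order is decorrelated from row placement and nearly every entry resolves to a different table block, each costing a single-block read. The toy simulation below sketches that effect by counting table-block switches while walking index entries in key order (roughly what Oracle's clustering factor measures); the block size and row layout are invented for illustration:

```python
import random

def clustering_factor(rows, rows_per_block=100):
    """Count table-block switches while walking index entries in key order.

    rows: list of (index_key, insert_position); the table block of a row is
    taken to be insert_position // rows_per_block. Each switch stands in for
    one single-block table read ('db file sequential read') on a cold cache.
    """
    switches = 0
    prev_block = None
    for _key, pos in sorted(rows):          # index entries in key order
        block = pos // rows_per_block
        if block != prev_block:
            switches += 1
            prev_block = block
    return switches

n = 100_000

# Case 1: one distinct end_t -- key order matches insert order, so
# consecutive index entries land in the same table blocks.
clustered = [(i, i) for i in range(n)]

# Case 2: many distinct end_t values -- key order is unrelated to insert
# order, modeled here as a random permutation of row positions.
random.seed(42)
positions = list(range(n))
random.shuffle(positions)
scattered = [(key, positions[key]) for key in range(n)]

cf_clustered = clustering_factor(clustered)   # -> 1000, i.e. n / rows_per_block
cf_scattered = clustering_factor(scattered)   # close to n: almost every row is a new block
print(cf_clustered, cf_scattered)
```

Under this model the same range scan visits two orders of magnitude more table blocks in the scattered case, which would line up with the jump in disk reads and 'db file sequential read' waits seen above.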

The top wait event in the statspack report is 'db file sequential read'. An event 10046 trace taken with oradebug shows the 'db file sequential read' waits are against object event_t.

Thank you!

Jessica Mao
Portal Software, Inc.

Received on Wed Jan 23 2002 - 17:39:43 CST
