Materialized view refresh and very long wait events (c.d.o.server)
Hello,
I'm on Oracle 9.2.0.7 EE on AIX 5. The server is a 3-CPU box and the disk subsystem is "SHARK" (whatever that is), with 8 DBWRs. I have a materialized view with 31 million records in one base table's log. The other logs are empty except one that has 5,000.
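For reference, the backlog queued in a materialized view log can be checked by counting rows in the corresponding MLOG$ table; the base table name below is a placeholder, not one confirmed from the post:

```sql
-- Count the unrefreshed changes queued in one materialized view log.
-- Replace RK_STORE_ITEM_SOH with the actual base table name.
SELECT COUNT(*) FROM sim.MLOG$_RK_STORE_ITEM_SOH;
```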
I have indexes on all the rowid columns of the materialized view (all local to the partitions) and a composite primary key.

create materialized view dis_mv2
PARALLEL
PARTITION BY HASH(store_id)
PARTITIONS 8
STORE IN (DISCOVERER_DATA,
DISCOVERER_DATA,
DISCOVERER_DATA,
DISCOVERER_DATA,
DISCOVERER_DATA,
DISCOVERER_DATA,
DISCOVERER_DATA,
DISCOVERER_DATA)
AS SELECT
psr.id_str_rt store_id,
psr.rowid psr_rid,
po.nm_orgn store_name,
po.rowid ps_rid,
soh.id_itm item_id,
soh.rowid soh_rid,
SUBSTR(itm.id_strc_mr,1,INSTR(itm.id_strc_mr,':')-1) Dept,
soh.id_itm ||'-'|| itm.de_itm id_desc,
itm.de_itm item_description,
itm.lu_uom_dft default_uom,
itm.rowid itm_rowid,
itmloc.id_spr primary_supplier,
itmloc.rowid itmloc_rid,
ps.nm_spr vendor_name,
ps.rowid ps_rowid,
itmloc.id_spr ||'-'|| ps.nm_spr VDesc,
ris.vpn vendor_style,
ris.rowid ris_rid,
itmloc.repn_type repn_type,
soh.total_quantity soh,
soh.in_transit_quantity in_transit_quantity,
soh.adjust_unavail_qty unavailable_quantity,
aq.allocated_quantity,
aq.rowid aq_rid,
ooq.on_order_quantity,
ooq.rowid ooq_rid,
r.rowid region_rid,
decode( sign(soh.total_quantity), 0, 'ZERO', -1, 'NEGATIVE' ) status
FROM sim.pa_str_rtl psr,
sim.as_itm_rtl_str itmloc,
sim.as_itm itm,
sim.rk_item_supplier ris,
sim.pa_spr ps,
sim.rk_store_item_soh soh,
sim.pa_orgn po,
sim.alloctab aq,
sim.on_ord_qty ooq,
sim.regionA r
WHERE itmloc.id_str_rt = psr.id_str_rt
AND itmloc.id_itm = itm.id_itm
AND ooq.store_id(+) = soh.id_str_rt
AND ooq.item_id(+) = soh.id_itm
AND itmloc.id_itm = ris.id_itm
AND itmloc.id_spr = ris.id_spr
AND ris.id_spr = ps.id_spr
AND aq.id_str_rt(+) = soh.id_str_rt
AND aq.id_itm(+) = soh.id_itm
AND soh.id_str_rt = itmloc.id_str_rt
AND soh.id_str_rt = r.storeid
AND soh.id_itm = itmloc.id_itm
AND psr.id_prty = po.id_prty_orgn;
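One thing worth verifying for a join materialized view like this: fast refresh requires a materialized view log created WITH ROWID on every base table in the FROM list. A sketch of one such log, using a table name taken from the query above:

```sql
-- Fast refresh of a join materialized view requires a WITH ROWID
-- materialized view log on EVERY base table in the join.
-- One example (repeat for each of the ten tables joined above):
CREATE MATERIALIZED VIEW LOG ON sim.rk_store_item_soh WITH ROWID;
```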
I attempted a fast refresh and it was still running after 18-plus hours, so I finally issued a kill. Below is what I got.
Event                     Total Waits   Total Timeouts   Time Waited
free buffer waits            87201514            34172       2437656
db file sequential read      87201514                0       5155028
db file scattered read          59886                0         42692
write complete waits              492               33         27412
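(Figures like these come from the system-wide wait statistics; a query along these lines against V$SYSTEM_EVENT reproduces them:)

```sql
-- System-wide wait statistics for the events of interest.
-- TIME_WAITED is in centiseconds on 9.2.
SELECT event, total_waits, total_timeouts, time_waited
FROM   v$system_event
WHERE  event IN ('free buffer waits', 'db file sequential read',
                 'db file scattered read', 'write complete waits');
```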
Is the problem my local indexes on my partitions? Or are the server and I/O subsystem too small? Should I do a complete rather than a fast refresh for this amount of data? BTW, before it stopped, the trace file had a bunch of these (which I think has to do with the tracing itself):

WAIT #1: nam='pipe get' ela= 1953135 p1=504403159891195536 p2=4096 p3=600
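On the complete-vs-fast question, Oracle can report which refresh methods the view actually supports via DBMS_MVIEW.EXPLAIN_MVIEW (the capabilities table is created by running utlxmv.sql), and a complete refresh can be forced with method 'C'. A hedged sketch, assuming the MV name from the DDL above:

```sql
-- Check which refresh capabilities the MV supports and why any are
-- disallowed (run ?/rdbms/admin/utlxmv.sql first to create the table):
EXEC DBMS_MVIEW.EXPLAIN_MVIEW('DIS_MV2');
SELECT capability_name, possible, msgtxt
FROM   mv_capabilities_table;

-- Force a complete refresh; atomic_refresh => FALSE lets Oracle
-- truncate and reload instead of delete+insert in one transaction.
EXEC DBMS_MVIEW.REFRESH('DIS_MV2', method => 'C', atomic_refresh => FALSE);
```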
Thanks for your help.

Received on Wed Aug 09 2006 - 10:00:02 CDT