
Re: Materialized view refresh and very long wait events

From: g3000 <carlton_gregory_at_yahoo.com>
Date: 9 Aug 2006 11:56:27 -0700
Message-ID: <1155149787.097126.180610@b28g2000cwb.googlegroups.com>


Frank,

       Sorry if I was not clear. The MV was previously built and it already contains 15 million rows. I am attempting a refresh:

exec dbms_mview.refresh('DIS_MV2','F','', TRUE, FALSE, 0,0,0, FALSE);
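
For reference, the same call written out with named parameters (just a sketch
of how I read the 9.2 DBMS_MVIEW.REFRESH signature; check it against your own
installation):

BEGIN
  DBMS_MVIEW.REFRESH(
    list                 => 'DIS_MV2',
    method               => 'F',       -- fast refresh
    rollback_seg         => '',
    push_deferred_rpc    => TRUE,
    refresh_after_errors => FALSE,
    purge_option         => 0,
    parallelism          => 0,
    heap_size            => 0,
    atomic_refresh       => FALSE);    -- non-atomic, per the last positional argument
END;
/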

One of my MV logs contains 31 million records, one contains 5000, and one contains 14.
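
I'm checking the log sizes with something along these lines (this assumes the
logs use the default MLOG$_<table> naming, so the log name below is a guess):

SELECT COUNT(*) FROM sim.MLOG$_RK_STORE_ITEM_SOH;

SELECT log_owner, master, log_table
FROM dba_mview_logs
WHERE log_owner = 'SIM';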

Well, I restarted the refresh, and now I am seeing very high buffer busy waits and db file sequential read waits.

I am going to ask the DBA to run statspack if this refresh finishes in a reasonable amount of time.
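
In the meantime I'm watching the refresh session's waits with roughly this
(a sketch; :refresh_sid is the SID of the session doing the refresh):

SELECT event, total_waits, total_timeouts, time_waited
FROM v$session_event
WHERE sid = :refresh_sid
ORDER BY time_waited DESC;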

I was also thinking about the indexes on the rowid columns and whether I should drop them before refreshing, but I can't see that being wise for every refresh.
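
If I did try it, I suppose it would look something like this sketch (the index
and partition names are made up, and in 9.2 the session doing the refresh would
also need skip_unusable_indexes set):

ALTER INDEX dis_mv2_soh_rid_idx UNUSABLE;
ALTER SESSION SET skip_unusable_indexes = TRUE;

-- run the fast refresh here

ALTER INDEX dis_mv2_soh_rid_idx REBUILD PARTITION sys_p1 NOLOGGING;
-- ...and so on for each of the 8 hash partitions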

I'm also wondering why Oracle creates two new sessions after I execute the refresh call.
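
To see what those extra sessions actually are, I'll look at v$session along
these lines (just a sketch, filtering on my own username):

SELECT sid, serial#, username, program, status
FROM v$session
WHERE username = USER
ORDER BY logon_time;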

Frank van Bortel wrote:
> g3000 wrote:
> > Hello,
> > I'm on 9.2.0.7 EE on AIX v5. The server is a 3-CPU box and the disk
> > system is "SHARK" (whatever that is), with 8 DBWRs. I have a
> > materialized view with 31 million records in one base table log. The
> > other logs are empty except one that has 5000.
> >
> > I have indexes on all the rowid columns of the materialized view (all
> > local to the partitions) and a composite primary key.
> > create materialized view dis_mv2
> > PARALLEL
> > PARTITION BY HASH(store_id)
> > PARTITIONS 8
> > STORE IN (DISCOVERER_DATA,
> > DISCOVERER_DATA,
> > DISCOVERER_DATA,
> > DISCOVERER_DATA,
> > DISCOVERER_DATA,
> > DISCOVERER_DATA,
> > DISCOVERER_DATA,
> > DISCOVERER_DATA)
> > NOCACHE
> > NOLOGGING
> > build immediate
> > refresh fast on demand
> > as
> > SELECT /*+ index(itmloc pk_as_itm_rtl_str) index(soh pk_rk_store_item_soh)
> >            index(ris pk_rk_item_supplier) index(itm pk_as_itm) */
> > psr.id_str_rt store_id,
> > psr.rowid psr_rid,
> > po.nm_orgn store_name,
> > po.rowid ps_rid,
> > soh.id_itm item_id,
> > soh.rowid soh_rid,
> > SUBSTR(itm.id_strc_mr,1,INSTR(itm.id_strc_mr,':')-1)
> > Dept,
> > soh.id_itm ||'-'|| itm.de_itm id_desc,
> > itm.de_itm item_description,
> > itm.lu_uom_dft default_uom,
> > itm.rowid itm_rowid,
> > itmloc.id_spr primary_supplier,
> > itmloc.rowid itmloc_rid,
> > ps.nm_spr vendor_name,
> > ps.rowid ps_rowid,
> > itmloc.id_spr ||'-'|| ps.nm_spr VDesc,
> > ris.vpn vendor_style,
> > ris.rowid ris_rid,
> > itmloc.repn_type repn_type,
> > soh.total_quantity soh,
> > soh.in_transit_quantity in_transit_quantity,
> > soh.adjust_unavail_qty unavailable_quantity,
> > aq.allocated_quantity,
> > aq.rowid aq_rid,
> > ooq.on_order_quantity,
> > ooq.rowid ooq_rid,
> > r.rowid region_rid,
> > decode( sign(soh.total_quantity), 0, 'ZERO', -1,
> > 'NEGATIVE' ) status
> > FROM sim.pa_str_rtl psr,
> > sim.as_itm_rtl_str itmloc,
> > sim.as_itm itm,
> > sim.rk_item_supplier ris,
> > sim.pa_spr ps,
> > sim.rk_store_item_soh soh,
> > sim.pa_orgn po,
> > sim.alloctab aq,
> > sim.on_ord_qty ooq,
> > sim.regionA r
> > WHERE itmloc.id_str_rt = psr.id_str_rt
> > AND itmloc.id_itm = itm.id_itm
> > AND ooq.store_id(+) = soh.id_str_rt
> > AND ooq.item_id(+) = soh.id_itm
> > AND itmloc.id_itm = ris.id_itm
> > AND itmloc.id_spr = ris.id_spr
> > AND ris.id_spr = ps.id_spr
> > AND aq.id_str_rt(+) = soh.id_str_rt
> > AND aq.id_itm(+) = soh.id_itm
> > AND soh.id_str_rt = itmloc.id_str_rt
> > AND soh.id_str_rt = r.storeid
> > AND soh.id_itm = itmloc.id_itm
> > AND psr.id_prty = po.id_prty_orgn;
> >
> > I attempted a fast refresh and it was still running after 18-plus
> > hours, so I finally issued a kill. Below is what I got:
> >
> > free buffer waits:
> >
> > Total Waits = 87201514
> > Total Timeouts = 34172
> > Time Waited = 2437656
> >
> > db file sequential read:
> > Total Waits = 87201514
> > Total Timeouts = 0
> > Time Waited = 5155028
> >
> > db file scattered reads:
> > Total Waits = 59886
> > Total Timeouts = 0
> > Time Waited = 42692
> >
> > write complete waits:
> > Total Waits = 492
> > Total Timeouts = 33
> > Time Waited = 27412
> >
> > Is the problem my local indexes on my partitions? Or are the server
> > and I/O subsystem too small? Should I do a complete refresh instead of
> > a fast refresh for this amount of data? BTW, before it stopped the
> > trace file had a bunch of these (which I think has to do with tracing):
> > WAIT #1: nam='pipe get' ela= 1953135 p1=504403159891195536 p2=4096
> > p3=600
> >
> > Thanks for your help.
> >
> Where were the updates? You are not giving enough
> information; the 31 M rows don't mean a thing: right after creation,
> a fast refresh should be instantaneous, because there's nothing
> in the MV log tables, so there's nothing to refresh!
>
> Updates on a partitioned table/MV may be slow (indexes!), but
> I am not familiar with that (partitions + MV) at all, so I'll
> leave it to someone more knowledgeable to comment on that.
> --
> Regards,
> Frank van Bortel
>
> Top-posting is one way to shut me up...
Received on Wed Aug 09 2006 - 13:56:27 CDT
