Re: Sequence Skipping

From: Tim Gorman <tim_at_evdbt.com>
Date: Wed, 13 Jun 2012 20:25:36 +0000
Message-ID: <W1819028349271051339619136_at_webmail27>



There is absolutely no evidence that the sequence is being "flushed from the SGA"; it is far more likely that the sequence NEXTVAL referenced in the WHEN NOT MATCHED clause of the MERGE statement is being evaluated unnecessarily for rows that fall into the WHEN MATCHED branch. Setting NOCACHE on a heavily-accessed sequence as a perceived "solution" is the classic situation of the "cure" causing more pain than the "problem".
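
That theory is easy to check with a tiny test case along the following lines (the table and sequence names are invented for illustration, not Ken's objects). Only one row gets inserted, yet in the versions mentioned in the thread the sequence typically advances once per source row:

    create table t_dim (k number, code varchar2(10));
    create sequence s_dim cache 50;

    insert into t_dim values (0, 'A');    -- one pre-existing row that will match
    commit;

    merge into t_dim a
    using (select 'A' code from dual
           union all
           select 'B' code from dual) b
    on (a.code = b.code)
    when matched then
      update set a.k = a.k
    when not matched then
      insert (k, code) values (s_dim.nextval, b.code);

    -- only one row was inserted, but CURRVAL will usually show the sequence
    -- advanced by two, i.e. NEXTVAL was evaluated for the matched row as well
    select s_dim.currval from dual;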

However, *temporarily* setting the sequence to NOCACHE might be a useful diagnostic technique: a SQL trace of the MERGE statement would then clearly indicate whether the sequence is being accessed inappropriately during WHEN MATCHED, because every NEXTVAL call would generate a recursive UPDATE SEQ$ statement that is visible and countable in the trace.
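
Something along these lines would do it (a sketch only; adjust names to your own environment and remember to restore the cache afterwards). In the tkprof report, the execution count of the recursive UPDATE SEQ$ statement shows how many NEXTVAL calls the MERGE really made:

    alter sequence seq_procedure_key nocache;

    alter session set tracefile_identifier = 'seq_diag';
    alter session set events '10046 trace name context forever, level 12';

    -- run the nightly MERGE statements here

    alter session set events '10046 trace name context off';

    alter sequence seq_procedure_key cache 50;    -- put the cache back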

But setting a sequence to NOCACHE is almost always due to a typo during CREATE/ALTER SEQUENCE or a misunderstanding of when to use (and when not to use) sequences. The only appropriate use of NOCACHE is diagnostic, as described just above. I'd be curious to know if anyone has any other valid use for NOCACHE, or even for under-caching a sequence?
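
Incidentally, the second MERGE quoted below has no WHEN MATCHED branch at all, so one possible workaround (untested against Ken's data, so treat it as a sketch) is to rewrite it as a plain anti-join INSERT; NEXTVAL is then evaluated only for rows that are actually inserted:

    insert into procedures (procedure_key, cpt_code, procedure_name)
    select seq_procedure_key.nextval, b.cpt_code, null
    from  (select distinct replace(cpt_code, ' ') cpt_code
           from   wcis_dba.medical_bill_detail
           where  cpt_code is not null) b
    where not exists (select 1
                      from   procedures a
                      where  a.cpt_code = b.cpt_code);

The first MERGE genuinely needs both branches, so the trace described above is the way to confirm what that statement is doing with the sequence.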

-----Original Message-----

From: Scott Graves [mailto:Scott.Graves_at_nisc.coop]
Sent: Wednesday, June 13, 2012 01:51 PM
To: oracle-l_at_freelists.org
Subject: RE: Sequence Skipping

When the sequence is flushed from the SGA, the unused numbers in the cache are burnt. Set your sequences to nocache and that behavior goes away.

-----Original Message-----
From: oracle-l-bounce_at_freelists.org [mailto:oracle-l-bounce_at_freelists.org] On Behalf Of Kenneth Naim
Sent: Wednesday, June 13, 2012 2:32 PM
To: oracle-l_at_freelists.org
Subject: Sequence Skipping

I use the following two merge statements to load data into a dimension table nightly. Every night the process runs, the sequence (seq_procedure_key) gets incremented by approximately 57,000 values even though only 4-37 rows are added. There are about 57,000 rows in the procedures table, millions in the medical_bill_detail table and 17k in the cpt table. It seems one of the two merge statements is wasting one sequence value for every row that does not get inserted. I've tried searching Metalink for bugs and so far have come up empty. I am experiencing this in a 10.2.0.3 prod environment and an 11.2.0.3 test environment. If this is expected behavior, are there any known workarounds? I don't mind losing some values, but 57k per day is too many. The load failed because the column related to the sequence was defined as number(8) and the sequence was into the 9th digit. BTW, the cache on the sequence is 50.

merge into procedures a
using (select nvl(replace(cpt_code,' '),'None') cpt_code, cpt_descr
       from cpt
       where cpt_code is not null) b
on (a.cpt_code = b.cpt_code)
when matched then
  update set a.procedure_name = b.cpt_descr
  where nvl(procedure_name,'~') <> nvl(b.cpt_descr,'~')
    and cpt_descr is not null
when not matched then
  insert (procedure_key, cpt_code, procedure_name)
  values (seq_procedure_key.nextval, b.cpt_code, b.cpt_descr);

v_number_of_records := sql%rowcount;
pkg_etl.p_end_process(v_process_id, v_number_of_records, null, false);
v_process_type_id := 9;
pkg_etl.p_start_process(v_process_type_id, load_id_in, null, null, null, v_process_id);

merge into procedures a
using (select distinct replace(cpt_code,' ') cpt_code, null cpt_descr
       from wcis_dba.medical_bill_detail
       where cpt_code is not null) b
on (a.cpt_code = b.cpt_code)
when not matched then
  insert (procedure_key, cpt_code, procedure_name)
  values (seq_procedure_key.nextval, b.cpt_code, b.cpt_descr);

Thanks,
Ken
