Fwd: Slower db file parallel read waits after migrating to SAN

From: Neil Kodner <nkodner_at_gmail.com>
Date: Tue, 23 Feb 2010 10:53:36 -0500
Message-ID: <ae2c46ba1002230753o63279fe9sdc1f298b1198e448_at_mail.gmail.com>


---------- Forwarded message ----------
From: Neil Kodner <nkodner_at_gmail.com>
Date: Tue, Feb 23, 2010 at 10:53 AM
Subject: Re: Slower db file parallel read waits after migrating to SAN
To: Paul Drake <bdbafh_at_gmail.com>

Thanks - our local sysadmin isn't the storage admin, so he'll try to find out the stripe size. In the meantime, we're using 32k blocks as the db_block_size and as the tablespace block size for both indexes and data. This is a DW environment, but our ETL process requires us to retrieve some counts for the active clients.
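(For what it's worth, if the stripe does turn out to be 1MB like in
Paul's example below, then with our 32k blocks a stripe-aligned setting
would be db_file_multiblock_read_count = 1048576/32768 = 32 - just an
illustrative calculation, not a recommendation.)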

On Tue, Feb 23, 2010 at 10:31 AM, Paul Drake <bdbafh_at_gmail.com> wrote:

> Neil,
>
> Did you consider gathering system stats after the change in storage?
> e.g.
>
> SQL> exec dbms_stats.gather_system_stats('start');
>
> <run a representative workload>
>
> SQL> exec dbms_stats.gather_system_stats('end');
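>
> To sanity-check what got gathered, the values land in sys.aux_stats$
> (just a sketch; after a workload gather you'd expect SREADTIM, MREADTIM
> and MBRC to be populated):
>
> SQL> select pname, pval1 from sys.aux_stats$
>   2  where sname = 'SYSSTATS_MAIN';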
>
> You might find that you want to set the dbfmbrc
> (db_file_multiblock_read_count) higher than 16.
> What is the db_block_size and stripe size on the SAN?
> e.g.
> db_block_size=8192 bytes
> stripe size = 1MB
> db_file_multiblock_read_count=128
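>
> A minimal sketch of trying that, using the example numbers above
> (benchmark before leaving it in place; once workload system stats
> exist, the optimizer costs multiblock reads with the gathered MBRC,
> while this parameter still governs the actual read size):
>
> SQL> alter system set db_file_multiblock_read_count=128;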
>
> Your question actually relates to avoiding the table access, so what I
> posted above isn't exactly what you're looking for, but you do want to
> get the foundation squared away prior to moving up the stack.
>
> hth.
>
> Paul
>
>
> On Tue, Feb 23, 2010 at 6:54 AM, Neil Kodner <nkodner_at_gmail.com> wrote:
> > A batch job which used to take 20-30 minutes is now taking upwards of
> > three hours following a change in storage.
> > The query
> > select count(*)
> > from pm_clntmnth c
> > where c.client_number = abawd_rec.client_number
> > and c.benefit_month >= abawd_rec.first_abawd_month
> > and c.abawd_status_code = 'AB';
> > executes inside of a loop. Table pm_clntmnth is 422 million rows and is
> > indexed by (client_number, benefit_month), meaning we have to access the
> > table in addition to the index. Before I add abawd_status_code to the
> > index to eliminate the table access outright (a sketch of such an index
> > follows below), I'd like to know if db file parallel read is my culprit,
> > or if there is something else at play after the storage was changed. I
> > don't have as much experience with tables of this size, so I'm wondering
> > if I'm missing something. The plan for the query looks unchanged.
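> > For the record, a sketch of the covering index I'm considering (the
> > index name here is made up; the columns are the existing key plus the
> > filtered status column):
> > SQL> create index pm_clntmnth_abawd_ix
> >   2  on pm_clntmnth (client_number, benefit_month, abawd_status_code);
> > With all three predicate columns in the index, the count could be
> > satisfied from the index alone, with no table access.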
> > Here's sample 10046 trace information:
> > EXEC #9:c=0,e=123,p=0,cr=0,cu=0,mis=0,r=0,dep=1,og=1,tim=1237162345257814
> > WAIT #9: nam='db file sequential read' ela= 13786 file#=32 block#=198624 blocks=1 obj#=246504 tim=1237162345272111
> > WAIT #9: nam='db file sequential read' ela= 60181 file#=33 block#=197057 blocks=1 obj#=246504 tim=1237162345332354
> > WAIT #9: nam='db file parallel read' ela= 476870 files=7 blocks=29 requests=29 obj#=75690 tim=1237162345809667
> > WAIT #9: nam='db file sequential read' ela= 47232 file#=25 block#=248564 blocks=1 obj#=75690 tim=1237162345857222
> > WAIT #9: nam='db file parallel read' ela= 395248 files=7 blocks=39 requests=39 obj#=75690 tim=1237162346253035
> > WAIT #9: nam='db file sequential read' ela= 34 file#=29 block#=99981 blocks=1 obj#=75690 tim=1237162346253668
> > WAIT #9: nam='db file parallel read' ela= 363176 files=7 blocks=35 requests=35 obj#=75690 tim=1237162346617350
> > WAIT #9: nam='db file sequential read' ela= 49773 file#=30 block#=286527 blocks=1 obj#=75690 tim=1237162346667765
> > WAIT #9: nam='db file parallel read' ela= 123836 files=7 blocks=30 requests=30 obj#=75690 tim=1237162346792040
> > WAIT #9: nam='db file sequential read' ela= 1368 file#=28 block#=95528 blocks=1 obj#=75690 tim=1237162346793987
> > FETCH #9:c=13998,e=1536345,p=139,cr=156,cu=0,mis=0,r=1,dep=1,og=1,tim=1237162346794175
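> > (Doing the arithmetic on that fetch: the first 'db file parallel read'
> > waited 476,870 microseconds for 29 blocks, about 16 ms per block, and
> > the four parallel reads account for roughly 1.36 s of the 1.54 s
> > elapsed, so they dominate this fetch.)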
> > I don't know if the db file parallel reads were present before the storage
> > change; I just know that the process is taking 4x longer, and performance
> > was never really an issue until then. It's not a matter of the procedure
> > gradually taking longer and longer - the runtimes immediately grew longer
> > after the recent switch in back-end storage.
> > Here are some of the vitals:
> > - DEGREE is set to 1 on all relevant tables and indexes; we're not using
> >   parallelism at all during this proc.
> > - 10.2.0.4 on Linux.
> > - Storage was changed from single-channel SCSI disks to a SAN with 4
> >   fiber paths.
> > PNAME                               PVAL1
> > ------------------------------ ----------
> > CPUSPEEDNW                     1371.36113
> > IOSEEKTIM                              10
> > IOTFRSPEED                           4096
> > SREADTIM
> > MREADTIM
> > CPUSPEED
> > MBRC
> > MAXTHR
> > SLAVETHR
> > NAME                                 TYPE        VALUE
> > ------------------------------------ ----------- ------------------------------
> > optimizer_dynamic_sampling           integer     2
> > optimizer_features_enable            string      10.2.0.4
> > optimizer_index_caching              integer     0
> > optimizer_index_cost_adj             integer     100
> > optimizer_mode                       string      ALL_ROWS
> > optimizer_secure_view_merging        boolean     TRUE
> > db_file_multiblock_read_count        integer     16
> > db_file_name_convert                 string
> > db_files                             integer     200
>
>
>
> --
> http://www.completestreets.org/faq.html
> http://safety.fhwa.dot.gov/ped_bike/docs/pamanual.pdf
>

--
http://www.freelists.org/webpage/oracle-l