Re: parallel select

From: <>
Date: Tue, 14 Jul 2009 10:08:43 -0500
Message-ID: <>

Hi Greg:

Thank you for your email. I never knew that the PX granule size is 100M. How can I see that? Is there a DB parameter that determines it?
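
If it is controlled by hidden parameters, I assume something like the following (run as SYS against the X$ views; just a sketch, I have not verified the parameter names) would show them:

    select i.ksppinm  name,
           v.ksppstvl value,
           i.ksppdesc description
      from x$ksppi i, x$ksppcv v
     where i.indx = v.indx
       and lower(i.ksppinm) like '%granule%';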

I have looked at the waits for the master and slave sessions. For the master session I see SQL*Net message to and from client waits.

For the slaves I see PX Deq Credit: send blkd and PX qref latch; the wait time for the latter is 0. I have read that the qref latch means the consumers are not keeping up with the producers, but I am not sure how to correct that.

The multiblock read count is currently 32, the array size is 15, and trimspool is off.

The filesystems are not local; they are on a SAN. The data is spooled to a remote server. Both will affect the timing, but there is not much I can do about them.

I will try increasing the MBRC and see if that helps the performance. But my concern is that most of the transactions against this database are quick small reads and I don't want to read extra data and slow things down for the online users.
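
What I have in mind (just a sketch, not something I have tested here) is to raise it only at the session level for the dump job, so the system-wide setting seen by the online transactions stays untouched:

    alter session set db_file_multiblock_read_count = 128;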

Thank you,

Gene Gurevich

Greg Rahn (sent by oracle-l-bounce_at_f) wrote on 07/13/2009 06:51, "Re: parallel select":

On Mon, Jul 13, 2009 at 2:42 PM, <> wrote:
> I am watching a very long select * from a table with about 500mil rows.
> It has been running under oracle for hours now. The table is about 35-40G
> and the app team needs to dump it into a flat file. My DB block size is ...;
> the extent size (this is a LMTS) is 300M.

That is quite a large extent size for a 40GB table. Generally I would recommend auto allocate and just use a large initial/next if you want to bypass the 64KB/1MB/... extent ranges and jump to the larger extents. Using such a large extent size could result in nearly 300MB being wasted from jumping above the HWM with a direct path load.
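
As a rough sketch of what I mean (the tablespace name, table, datafile path and sizes below are made up purely for illustration):

    create tablespace data_auto
      datafile '/u01/oradata/ORCL/data_auto01.dbf' size 10g
      extent management local autoallocate;

    create table big_table (
      id   number,
      data varchar2(100)
    )
    tablespace data_auto
    storage (initial 64m next 64m);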

> I have been watching the v$session_longops and the message that I have
> been seeing looks something like this:
> Rowid Range Scan:  TABLE_NAME : 2899 out of 12647 Blocks done
> The table has about 5mil blocks. 12647 blocks is about one third of one
> extent. I run it with DOP=4. So it looks like the parallel slaves are
> each reading from one third of one extent at a time.

The PX granule size is 100MB, which happens to be 1/3 of the table's extent size of 300MB. A granule is the unit of work for PX servers.
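
(As a sanity check, assuming the default 8KB block size: 100MB / 8KB = 12,800 blocks per granule, which lines up with the ~12,647-block rowid ranges you are seeing in v$session_longops.)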

> This operation seems
> to take about 40 minutes. Extrapolating to 5mil blocks yields a
> huge number which will not be satisfactory to the application.  Why would it
> take 40 minutes to read 100M worth of data? What could
> be the bottleneck?

I would recommend that you either:
1) look at the ASH report to see what the session waits are, or
2) look at the wait events for the PX servers in v$session and see what they are (a query sketch follows below).
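
For 2), something along these lines will show the current wait event for each PX server tied to a given query coordinator (a sketch only; the QC SID of 1234 is a placeholder for your coordinator's SID):

    select s.sid, s.event, s.state, s.seconds_in_wait
      from v$px_session p, v$session s
     where s.sid     = p.sid
       and s.serial# = p.serial#
       and p.qcsid   = 1234
     order by s.sid;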

No need to guess what the bottleneck would be when the wait interface will tell you.

Things to consider:
- what is your MBRC? For FTS operations like this, 1MB is ideal (8KB block * 128 MBRC). This should be the default if left unset.
- the speed of the filesystem you are writing to (hopefully a local FS)
- sqlplus settings (arraysize, trimspool, etc) -- see the sketch below
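
For the sqlplus side, a typical set of spool settings looks roughly like this (the file and table names are placeholders, and the values are illustrative rather than prescriptive):

    rem fetch more rows per round trip than the default of 15
    set arraysize 5000
    rem strip trailing blanks from spooled lines
    set trimspool on
    rem keep headings, feedback and screen echo out of the file
    set pagesize 0
    set feedback off
    set termout off
    set linesize 2000
    spool /tmp/big_table.dat
    select * from big_table;
    spool off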

Greg Rahn

