Re: parallel select

From: <>
Date: Tue, 14 Jul 2009 13:34:29 -0500
Message-ID: <>


Thank you for the info. I found that hidden parameter for my own education. I did some testing with a higher MBRC (from 32 to 128), but that did not change the timing. I also tried increasing the arraysize to 5000, and that increased the elapsed time by about 30%. I am running again with a lower value (500).
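
For reference, this is roughly the harness I am running (a sketch; BIG_TABLE and the parallel hint are placeholders):

    -- px_test.sql
    -- run as: time sqlplus -s user/pass @px_test.sql > /dev/null
    -- so the rows are fetched and formatted but the output is discarded
    set arraysize 500
    set feedback on
    select /*+ parallel(t, 8) */ * from big_table t;
    exit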

As far as the qref latch goes, what can I do about it? The "Suck It Dry" article says to increase parallel_execution_message_size, but that may create issues if the consumer is taking time to process the incoming data, which is probably the case here. Is there anything else I can do?
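
In case it helps anyone following along, this is how I am checking the current value (a sketch; the 16384 below is just an example, and since this is a static parameter a change would need a bounce):

    -- current PX message buffer size
    show parameter parallel_execution_message_size

    -- a change only takes effect after an instance restart, e.g.:
    -- alter system set parallel_execution_message_size = 16384 scope = spfile;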

I don't have any control over the disks or filesystems, so that is a "given" as far as I am concerned. I run all my tests with >/dev/null to exclude the spooling part.


thank you

Gene Gurevich

From: Greg Rahn
Date: 07/14/2009 01:03 PM
Subject: Re: parallel select
On Tue, Jul 14, 2009 at 8:08 AM, <> wrote:
> Thank you for your Email. I never knew that PX granule size is 100M. How
> can I see that? Is there a DB parameter that determines that?

There is a hidden parameter that controls this. I'm purposely not going to mention it here to discourage its use as a tuning knob, but it should be very obvious to those who look for it.

> I have looked at the waits for the master and slave sessions. For the
> master session I see SQL*Net message to and from client waits.
> For the slaves I see PX Deq Credit: send blkd and PX qref latch. Wait
> time for the latter is 0. I read that the qref latch means that consumers
> are not keeping up with the producers, but I am not sure how to correct
> that.

This will likely be the case. The scanners will be able to read the data faster than the QC can write it out. Think of it as a many-to-one fan-in. If the table is partitioned, it would be beneficial to run one export session per partition to speed this up.
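
As a rough sketch (the table and partition names are made up), each concurrent session scans one partition via a partition-extended name, so every partition gets its own QC and its own output stream:

    -- session 1 of N; start one of these per partition, concurrently
    set arraysize 200
    set termout off
    spool big_table_p01.dat
    select * from big_table partition (p01);
    spool off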

> The multiblock read count is currently 32, the array size is 15,
> is off.

I mentioned an array size of 200 because the default of 15 is way too small for big spool files. Using a 1MB multiblock read (an MBRC of 128 with 8KB blocks) will cut the I/O requests by 4x compared to the current MBRC of 32.
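
The arithmetic, assuming an 8KB block size (check yours with "show parameter db_block_size"):

    -- MBRC  32 ->  32 x 8KB = 256KB per read request
    -- MBRC 128 -> 128 x 8KB =   1MB per read request, i.e. 4x fewer requests
    alter session set db_file_multiblock_read_count = 128;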

> The filesystems are not local; they are on a SAN. The data is spooled to a
> remote server. These will affect the timing, but there is
> not much I can do about them.

Be mindful that the limiting factor will be the spool rate, so if it is a filesystem with few spindles, it will likely cap the overall rate.

> I will try increasing the MBRC and see if that helps the performance. But
> my concern is that most of the transactions against this
> database are quick small reads and I don't want to read extra data and
> slow things down for the online users.

The PX scan will do direct path reads (physical reads) from disk, but the small quick reads (assuming non-PX) could be satisfied by cache (buffer or storage). Also, it is unlikely that you can impact the storage too much, as you only have a single writer (the QC), so the PX scan rate will be limited by that.
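
If you want to watch where the slaves spend their time while the export runs, something along these lines should do it (a sketch; :qc_sid is the SID of the coordinator session):

    -- wait events for all PX slaves belonging to one query coordinator
    select s.server_set, e.event, e.total_waits, e.time_waited
      from v$px_session s, v$session_event e
     where s.sid = e.sid
       and s.qcsid = :qc_sid
     order by e.time_waited desc;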

Greg Rahn

