Re: Increasing row retrieving speed via net8
Date: Mon, 16 Apr 2012 21:13:28 +0300
And if you're wondering how it's possible that 10 x application connections can retrieve 10 x more than a single application connection - this is why I listed the Bandwidth Delay Product as #2 in the list. If your TCP buffer sizes are too small for your network latency (delay) and the link's maximum theoretical throughput (bandwidth), then the buffer size ends up throttling your TCP connection's throughput. The reason: TCP is a reliable protocol, so a TCP packet cannot be discarded from the TCP send buffer before an ACK has arrived for at least that packet's sequence number (or higher). So the higher the network latency (time to get the ACK) and the smaller the TCP buffer, the lower the throughput your connection will be throttled to.
With 10 x connections, however, each connection has its own TCP send/receive buffers, so the aggregate throughput across all the connections is 10 x higher.
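To make the throttling arithmetic concrete, here is a minimal sketch of the Bandwidth Delay Product math described above. The buffer size, RTT, and link speed below are illustrative assumptions, not figures from this thread:

```python
# BDP throttle arithmetic: a single TCP connection can have at most
# one send-buffer's worth of unacknowledged data in flight per RTT.
# All numbers below are assumed for illustration.

def max_tcp_throughput_bps(buffer_bytes: int, rtt_seconds: float) -> float:
    """Upper bound on one connection's throughput in bits/second."""
    return buffer_bytes * 8 / rtt_seconds

link_bps = 100e6          # 100 Mbit/s Ethernet link
rtt = 0.010               # assumed 10 ms round-trip time
buf = 64 * 1024           # assumed 64 KiB TCP send buffer

single = max_tcp_throughput_bps(buf, rtt)
print(f"one connection:  {single / 1e6:.1f} Mbit/s "
      f"(link is {link_bps / 1e6:.0f} Mbit/s)")          # ~52.4 Mbit/s

# Ten connections each get their own buffer, so the aggregate is
# 10 x higher, capped only by the physical link speed.
print(f"ten connections: {min(10 * single, link_bps) / 1e6:.1f} Mbit/s")

# Buffer size needed to fill the pipe (the Bandwidth Delay Product):
bdp_bytes = link_bps / 8 * rtt
print(f"BDP for this link/RTT: {bdp_bytes:.0f} bytes")   # 125000 bytes
```

With these assumed numbers, one connection is capped at roughly 52 Mbit/s, which is why a single session cannot saturate the 100 Mbit link while ten sessions together can.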
On Mon, Apr 16, 2012 at 9:09 PM, Tanel Poder <tanel_at_tanelpoder.com> wrote:
> Do more threads mean more Oracle connections, or just more app threads
> using the same connection?
> If you're using the same connection, then obviously there can be only one
> request on the fly and the other thread has to wait...
> On Mon, Apr 16, 2012 at 6:19 PM, GG <grzegorzof_at_interia.pl> wrote:
>> On 2012-04-16 15:21, Tanel Poder wrote:
>>> So, I'm still not fully convinced that this is an application side
>>> contention issue - it might well be, but some apps just work that way.
>>> As I
>>> wrote previously, increasing the arraysize would probably be the best way
>>> to get extra throughput out of a single connection (or just use multiple
>>> connections as that gave you the aggregate throughput you needed).
>> Ok, I got your point. Talking about array size, it's somehow fixed at 136;
>> I haven't found a place to change that yet.
>> Going back to scalability, how do you explain such a case:
>> When I separately run pmdtm binaries I can easily scale to the number of
>> those execs x 30k rows per sec each.
>> But when doing that 'inside' one pmdtm via parallel threads I get:
>> 2x 30k rows per sec for parallel = 2
>> but only
>> 4x 15k for parallel = 4
>> so it's not scaling well even when I'm querying different partitions.
>> But still not sure why I can't saturate 100 Mbit eth with one net8 session.