Re: How do I speed up processing the data from a database link?
On Mon, 22 May 2006 22:03:28 +0100, "Jonathan Lewis"
<jonathan_at_jlcomp.demon.co.uk> wrote:
>
>"Jaap W. van Dijk" <j.w.vandijk.removethis_at_hetnet.nl> wrote in message
>news:44722104.1586781_at_news.hetnet.nl...
>> Oracle 9.2.0.5 (local) / 8.1.7.4 (remote)
>> OpenVMS 7.3-2.
>>
>> Processing data from a remote table over a database link seems to take
>> much more CPU than processing data from a local table.
>>
>> Kindly take a look at these two cases:
>>
>> If I insert into a local table A, selecting from a local table B, this
>> takes about 10 minutes, and the significant wait events and CPU are
>>
>> Wait event: db file scattered read
>> Number of waits: 7772
>> Total time of waits: 57,3 seconds
>> CPU used by this session: 489,85 seconds
>>
>> But when I insert into the local table A, selecting from a remote table
>> (with the same contents as the local table B) over a database link, this
>> takes about 36 minutes, with
>>
>> Wait event: SQL*Net more data from dblink
>> Number of waits: 1635610
>> Total time waited: 84,73 seconds
>> CPU used by this session: 1496,86 seconds
>>
>> So getting the data across is not the bottleneck, but the increase in
>> CPU consumption is. Seemingly, processing data from a database link
>> takes much more time than processing data from a local full table scan.
>>
>> Do I interpret these figures correctly, and if so, is there any way I
>> can speed up processing of the data from the database link?
>>
>> The number of records loaded is 29984031
>> The number of roundtrips is 1635610 (so 18 records/roundtrip)
>> The average number of bytes/roundtrip is about 2000
>>
>> Kind regards,
>>
>> Jaap.
>
>
>I haven't checked this, but if you are simply
>doing
> insert into tabA select from tabB
>then the local insert can be fully optimized
>at the block level to minimize undo, redo,
>and latch activity.
>
>The distributed insert has to do array processing
>at a size dictated by the SDU (session data unit)
>implied by the default values for SQL*Net.
>
>For a 'fair' comparison, you could try writing a
>simple pl/sql block that does an array fetch of
>about 18 rows at a time from the local table
>and a matching array insert, viz:
>
>open cursor for select from tabB
>loop
> fetch bulk collect ... limit 18
> forall
> insert into tabA
>
>end loop
>
>
>
>--
>Regards
>
>Jonathan Lewis
>http://www.oracle.com/technology/community/oracle_ace/ace1.html#lewis
>
>The Co-operative Oracle Users' FAQ
>http://www.jlcomp.demon.co.uk/faq/ind_faq.html
>
>Cost Based Oracle: Fundamentals
>http://www.jlcomp.demon.co.uk/cbo_book/ind_book.html
>
>
Jonathan,
Indeed I'm doing a simple

INSERT /*+ APPEND */ INTO tableA
SELECT * FROM tableB@dblink

For fun I'll try your suggestion of performing a local bulk collect / insert, though I'm not sure what you would try to prove with the result of that test: I would be surprised if it did not result in an even higher CPU consumption, because of the high number of context switches between the SQL and PL/SQL engines.
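Something along these lines, I suppose (just a sketch, untested; I'm assuming the local copy of tableB has the same structure as tableA, and a limit of 18 to mimic the ~18 records/roundtrip of the link):

declare
    cursor c is
        select * from tableB;                 -- the local copy of the remote table
    type row_tab_t is table of tableB%rowtype;
    l_rows row_tab_t;
begin
    open c;
    loop
        -- array fetch of 18 rows, matching the observed rows/roundtrip
        fetch c bulk collect into l_rows limit 18;
        exit when l_rows.count = 0;           -- nothing left to process
        -- matching array insert
        forall i in 1 .. l_rows.count
            insert into tableA values l_rows(i);
    end loop;
    close c;
    commit;
end;
/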
What occupies me:

- Is the higher CPU consumption to be expected when the data comes from a remote table?
- Have others seen similar results? (Testing that would take about 20 minutes, I guess, so if anybody feels like it...)
- Is it due to the smaller unit of data, compared to a multiblock read?
- Can it be made to consume less CPU?
- Is increasing the SDU an option, and if so, how is that done for a database link? My best guess so far is sketched below, but it is untested.
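From what I've read, the SDU is negotiated per connection, down to the smaller of the values the two ends ask for, and the ~2000 bytes/roundtrip above looks suspiciously like the default SDU of 2048. If that is right, it should be possible to request a larger SDU in the connect descriptor that the link's USING clause points at, with a matching setting on the remote listener. A sketch of what I mean (alias, host, SID and path are all invented):

# tnsnames.ora on the local 9.2 side: the alias named in the db link's USING clause
REMOTEDB =
  (DESCRIPTION =
    (SDU = 32767)          # request a larger session data unit (default 2048)
    (ADDRESS = (PROTOCOL = TCP)(HOST = remotehost)(PORT = 1521))
    (CONNECT_DATA = (SID = REMOTEDB))
  )

# listener.ora on the remote 8.1.7 side: both ends must allow the larger value
SID_LIST_LISTENER =
  (SID_LIST =
    (SID_DESC =
      (SDU = 32767)
      (SID_NAME = REMOTEDB)
      (ORACLE_HOME = /oracle/product/8.1.7)
    )
  )

Whether a larger SDU would actually cut the CPU rather than just the number of waits, I don't know.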
Regards, Jaap.

Received on Tue May 23 2006 - 14:17:08 CDT