
Re: Bulk Collect and Pipelined Table Function

From: Peter Kallweit <peter.kallweit_at_vector-consulting.de>
Date: Mon, 28 Feb 2005 10:43:34 +0100
Message-ID: <38g7e6F5lneufU1@individual.net>


DA Morgan wrote:

> Peter Kallweit wrote:
> 

>> Hi all,
>>
>> I have a table function which selects from a view with a WHERE clause
>> that is defined at runtime. The fetch works with BULK COLLECT.
>> Currently this function is not pipelined, so the SELECT that calls
>> this table function is sometimes quite slow.
>> Here's the function:
>>
>> FUNCTION F_GET_FILTER_RESULT
>>   (sFILTER IN VARCHAR2)
>> RETURN NUMBERTABLE_TYPE IS
>>   TYPE RULECURSORTYPE IS REF CURSOR;
>>   RULE_CURSOR   RULECURSORTYPE;
>>   RESOLVED_RULE NUMBERTABLE_TYPE;
>> BEGIN
>>   OPEN RULE_CURSOR FOR 'SELECT V_NR FROM SHOW_VERSION WHERE ' || sFILTER;
>>   FETCH RULE_CURSOR BULK COLLECT INTO RESOLVED_RULE;
>>   CLOSE RULE_CURSOR;
>>   RETURN RESOLVED_RULE;
>> EXCEPTION
>>   WHEN OTHERS THEN
>>     -- On any error, return an empty collection instead of raising.
>>     RETURN NUMBERTABLE_TYPE();
>> END F_GET_FILTER_RESULT;
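>>
>> For illustration, the slow SELECT mentioned above calls the function
>> via the TABLE operator, roughly like this (the filter string here is
>> just an example):
>>
>>   SELECT * FROM TABLE(F_GET_FILTER_RESULT('V_NR > 100'));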
>>
>>
>> Now I'd like to change this function into a pipelined table function.
>> AFAIK, to do the pipelining I have to use PIPE ROW for each single
>> row. But I don't have / don't want single rows, because I fetch with
>> BULK COLLECT. How can I resolve this contradiction?
>>
>> Btw., I'm working with 9.2.0.5 on W2K.
>>
>>
>> Regards
>> Peter
> 
> 
> First I would suggest that you return to the starting point ... what is
> it that makes you think a pipelined table function is going to perform
> faster than a bulk collection in and of itself? I've seen nothing that
> would support that presumption.


Hi Daniel,

AFAIK, the bulk collect builds the complete result collection in memory before the function returns. With typically more than 100K rows in the result and multiple users executing this statement, that eats a lot of memory on the server. And, sadly enough, the machine isn't the biggest.

I'm thinking of something like this: I do the bulk collects in chunks of 1K rows and pipe each chunk directly into the pipeline, as sketched below. This way, I think, I could combine the advantages of both.
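Roughly like this (an untested sketch; it assumes NUMBERTABLE_TYPE is a schema-level nested table of NUMBER, which a pipelined function needs, and I've left out the exception handler for brevity):

FUNCTION F_GET_FILTER_RESULT
  (sFILTER IN VARCHAR2)
RETURN NUMBERTABLE_TYPE PIPELINED IS
  TYPE RULECURSORTYPE IS REF CURSOR;
  RULE_CURSOR   RULECURSORTYPE;
  RESOLVED_RULE NUMBERTABLE_TYPE;
BEGIN
  OPEN RULE_CURSOR FOR 'SELECT V_NR FROM SHOW_VERSION WHERE ' || sFILTER;
  LOOP
    -- Fetch at most 1000 rows per round instead of the whole result set.
    FETCH RULE_CURSOR BULK COLLECT INTO RESOLVED_RULE LIMIT 1000;
    -- Pipe the current chunk row by row; only ~1K rows sit in memory.
    FOR i IN 1 .. RESOLVED_RULE.COUNT LOOP
      PIPE ROW (RESOLVED_RULE(i));
    END LOOP;
    EXIT WHEN RULE_CURSOR%NOTFOUND;
  END LOOP;
  CLOSE RULE_CURSOR;
  RETURN;
END F_GET_FILTER_RESULT;

That way only about one chunk of rows is held in memory per session at any time, and the caller can start consuming rows before the whole result set has been fetched.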

Regards
Peter
