Oracle FAQ Your Portal to the Oracle Knowledge Grid
HOME | ASK QUESTION | ADD INFO | SEARCH | E-MAIL US
 


Re: Bulk Collect and Pipelined Table Function

From: Peter Kallweit <peter.kallweit_at_vector-consulting.de>
Date: Tue, 01 Mar 2005 10:31:08 +0100
Message-ID: <38ir2tF5nagrmU1@individual.net>


DA Morgan wrote:

> Peter Kallweit wrote:
> 

>> DA Morgan wrote:
>>
>>> Peter Kallweit wrote:
>>>
>>>> DA Morgan wrote:
>>>>
>>>>> Peter Kallweit wrote:
>>>>>
>>>>>> Hi all,
>>>>>>
>>>>>> I have a table function which selects from a view with a where
>>>>>> clause defined at runtime. The fetch works with bulk collect.
>>>>>> Today this function is not pipelined. Therefore, the select that
>>>>>> calls this table function is sometimes quite slow.
>>>>>> That's the function:
>>>>>>
>>>>>> FUNCTION F_GET_FILTER_RESULT
>>>>>> (sFILTER IN VARCHAR2
>>>>>> ) RETURN NUMBERTABLE_TYPE IS
>>>>>> TYPE RULECURSORTYPE IS REF CURSOR;
>>>>>> RULE_CURSOR RULECURSORTYPE;
>>>>>> RESOLVED_RULE NUMBERTABLE_TYPE;
>>>>>> BEGIN
>>>>>> BEGIN
>>>>>> OPEN RULE_CURSOR FOR 'SELECT V_NR FROM SHOW_VERSION WHERE '
>>>>>> ||sFILTER;
>>>>>> FETCH RULE_CURSOR BULK COLLECT INTO RESOLVED_RULE;
>>>>>> CLOSE RULE_CURSOR;
>>>>>> EXCEPTION
>>>>>> WHEN OTHERS THEN
>>>>>> RESOLVED_RULE := NUMBERTABLE_TYPE();
>>>>>> END;
>>>>>> RETURN RESOLVED_RULE;
>>>>>> END F_GET_FILTER_RESULT;
>>>>>>
>>>>>>
>>>>>> Now I'd like to change this function into a pipelined table function.
>>>>>> AFAIK, to do the pipelining I have to work with 'pipe row' for
>>>>>> each single row.
>>>>>> But I don't have / want the single row, because I fetch with bulk
>>>>>> collect. How can I resolve this contradiction?
>>>>>>
>>>>>> Btw., I'm working with 9.2.0.5 on W2K.
>>>>>>
>>>>>>
>>>>>> Regards
>>>>>> Peter
>>>>>
>>>>> First I would suggest that you return to the starting point ...
>>>>> what is
>>>>> it that makes you think a pipelined table function is going to perform
>>>>> faster than a bulk collection in and of itself? I've seen nothing that
>>>>> would support that presumption.
>>>>
>>>> Hi Daniel,
>>>>
>>>> AFAIK the bulk collect is creating the complete result table in
>>>> memory before returning from the function. This consumes a lot of
>>>> memory. With typically more than 100K rows in the result table and
>>>> multiple users executing this statement, this eats a lot of memory
>>>> on the server. And, sadly enough, the machine isn't the biggest.
>>>>
>>>> I'm thinking of something like this: I do the bulk collects in
>>>> chunks of 1K rows and feed these chunks directly into the pipeline.
>>>> This way, I think, I could combine the advantages of both.
>>>>
>>>> Regards
>>>> Peter
>>>
>>> You are correct ... but wrapping that functionality inside of a
>>> pipelined table function won't change it. What you should look at then
>>> is using the LIMIT clause on the bulk collect.
>>>
>>> http://www.psoug.org
>>> click on Morgan's Library
>>> click on Bulk Binding
>>> search for the keyword LIMIT
>>>
>>> HTH
>>>
>>
>> Right, using the LIMIT clause gives me the first part of what I want
>> to achieve.
>> In the second step I would have to go through the table, use 'pipe row'
>> on each single value to put it into the pipeline, and then make the
>> next round.
>> But isn't this handling of each single value losing performance?
>> Something like 'pipe row bulk' would be fine. I think it could be a
>> good trade-off.
>>
>> Regards
>> Peter
> 
> 
> Instead of piping out rows ... how about piping out an array or ref cursor?

Interesting idea, I'll try it.

Received on Tue Mar 01 2005 - 03:31:08 CST
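For readers who land on this thread later: the approach discussed above (BULK COLLECT with a LIMIT clause inside a pipelined function, piping each fetched chunk row by row) could be sketched roughly as follows. This is only a sketch, assuming NUMBERTABLE_TYPE is a SQL-level collection (e.g. CREATE TYPE NUMBERTABLE_TYPE AS TABLE OF NUMBER) and that a batch size of 1000 suits the workload; the function name F_GET_FILTER_RESULT_PIPED is made up for illustration.

```sql
-- Sketch only: combines BULK COLLECT ... LIMIT with PIPE ROW so that
-- at most one chunk of rows is held in PGA memory at a time.
FUNCTION F_GET_FILTER_RESULT_PIPED
  (sFILTER IN VARCHAR2
  ) RETURN NUMBERTABLE_TYPE PIPELINED IS
  TYPE RULECURSORTYPE IS REF CURSOR;
  RULE_CURSOR   RULECURSORTYPE;
  RESOLVED_RULE NUMBERTABLE_TYPE;
BEGIN
  OPEN RULE_CURSOR FOR 'SELECT V_NR FROM SHOW_VERSION WHERE '
    || sFILTER;
  LOOP
    -- fetch at most 1000 rows per round; memory use stays bounded
    FETCH RULE_CURSOR BULK COLLECT INTO RESOLVED_RULE LIMIT 1000;
    EXIT WHEN RESOLVED_RULE.COUNT = 0;
    -- PIPE ROW takes a single element, so hand the chunk over one by one;
    -- consumers can already start reading while later chunks are fetched
    FOR i IN 1 .. RESOLVED_RULE.COUNT LOOP
      PIPE ROW (RESOLVED_RULE(i));
    END LOOP;
  END LOOP;
  CLOSE RULE_CURSOR;
  RETURN;
END F_GET_FILTER_RESULT_PIPED;
```

There is no 'pipe row bulk' as wished for in the thread; the per-element PIPE ROW loop is cheap compared to the fetch itself, and the LIMIT clause addresses the memory concern that motivated the question.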

