
Re: Bulk Collect and Pipelined Table Function

From: DA Morgan <damorgan_at_x.washington.edu>
Date: Mon, 28 Feb 2005 07:16:35 -0800
Message-ID: <1109603627.821550@yasure>


Peter Kallweit wrote:

> DA Morgan wrote:
> 

>> Peter Kallweit wrote:
>>
>>> Hi all,
>>>
>>> I have a table function which selects from a view with a WHERE
>>> clause defined at runtime. The fetch works with BULK COLLECT.
>>> Currently this function is not pipelined, so the SELECT that calls
>>> this table function is sometimes quite slow.
>>> This is the function:
>>>
>>> FUNCTION F_GET_FILTER_RESULT
>>>    (sFILTER IN VARCHAR2)
>>> RETURN NUMBERTABLE_TYPE IS
>>>    TYPE RULECURSORTYPE IS REF CURSOR;
>>>    RULE_CURSOR   RULECURSORTYPE;
>>>    RESOLVED_RULE NUMBERTABLE_TYPE;
>>> BEGIN
>>>    OPEN RULE_CURSOR FOR
>>>       'SELECT V_NR FROM SHOW_VERSION WHERE ' || sFILTER;
>>>    FETCH RULE_CURSOR BULK COLLECT INTO RESOLVED_RULE;
>>>    CLOSE RULE_CURSOR;
>>>    RETURN RESOLVED_RULE;
>>> EXCEPTION
>>>    WHEN OTHERS THEN
>>>       -- on any error, return an empty collection
>>>       RESOLVED_RULE := NUMBERTABLE_TYPE();
>>>       RETURN RESOLVED_RULE;
>>> END F_GET_FILTER_RESULT;
>>>
>>>
>>> Now I'd like to change this function into a pipelined table function.
>>> AFAIK, to do the pipelining I have to issue a PIPE ROW for each
>>> single row.
>>> But I don't have (and don't want) single rows, because I fetch with
>>> bulk collect. How can I resolve this contradiction?
>>>
>>> Btw., I'm working with 9.2.0.5 on W2K.
>>>
>>>
>>> Regards
>>> Peter
>>
>> First I would suggest that you return to the starting point ... what is
>> it that makes you think a pipelined table function is going to perform
>> faster than a bulk collection in and of itself? I've seen nothing that
>> would support that presumption.
> 
> Hi Daniel,
> 
> AFAIK the bulk collect builds the complete result collection in memory
> before the function returns. With typically more than 100K rows in the
> result and multiple users executing this statement, that eats a lot of
> memory on the server. And, sadly enough, the machine isn't the biggest.
> 
> I'm thinking of something like this: I do the bulk collects in chunks
> of 1K rows and feed each chunk directly into the pipeline.
> That way, I think, I could combine the advantages of both approaches.
> 
> Regards
> Peter

You are correct ... but wrapping that functionality inside a pipelined table function won't, by itself, change that. What you should look at, then, is using the LIMIT clause on the bulk collect, along the lines of the sketch below.
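
Something like this, as a rough sketch only. It assumes NUMBERTABLE_TYPE
is (or can be made) a schema-level collection type, e.g. CREATE TYPE
NUMBERTABLE_TYPE AS TABLE OF NUMBER, since a pipelined function must
return a type that SQL can see, and it uses your 1K chunk size. Error
handling is omitted here:

FUNCTION F_GET_FILTER_RESULT
   (sFILTER IN VARCHAR2)
RETURN NUMBERTABLE_TYPE PIPELINED IS
   TYPE RULECURSORTYPE IS REF CURSOR;
   RULE_CURSOR   RULECURSORTYPE;
   RESOLVED_RULE NUMBERTABLE_TYPE;
BEGIN
   OPEN RULE_CURSOR FOR
      'SELECT V_NR FROM SHOW_VERSION WHERE ' || sFILTER;
   LOOP
      -- fetch at most 1000 rows per round trip, so only one
      -- chunk sits in session memory at any time
      FETCH RULE_CURSOR BULK COLLECT INTO RESOLVED_RULE LIMIT 1000;
      -- pipe the chunk out row by row before fetching the next one
      FOR i IN 1 .. RESOLVED_RULE.COUNT LOOP
         PIPE ROW (RESOLVED_RULE(i));
      END LOOP;
      -- %NOTFOUND is checked after piping so a short last chunk
      -- is not lost
      EXIT WHEN RULE_CURSOR%NOTFOUND;
   END LOOP;
   CLOSE RULE_CURSOR;
   RETURN;
END F_GET_FILTER_RESULT;

The caller then sees rows as they are piped, for example (the filter
string is just an illustration):

SELECT * FROM TABLE(F_GET_FILTER_RESULT('V_NR > 0'));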

http://www.psoug.org
click on Morgan's Library
click on Bulk Binding
search for the keyword LIMIT

HTH

-- 
Daniel A. Morgan
University of Washington
damorgan_at_x.washington.edu
(replace 'x' with 'u' to respond)
Received on Mon Feb 28 2005 - 09:16:35 CST
