Oracle FAQ | Your Portal to the Oracle Knowledge Grid |
Home -> Community -> Usenet -> c.d.o.misc -> Re: bulk collect in pl/sql
andyho99_at_yahoo.com (Andrew) wrote in message
> I wrote a program that reads a table with a million plus rows and
> writes to another table. I understand that using SQL is the fastest way
> to do this.
INSERT INTO foo
SELECT bluh FROM source WHERE blah
> However, I need to keep track what records fail and
> implement some logic.
Why? What logic is there that you cannot do within the SELECT statement itself? Why would you want to break it into pieces and thus introduce a grey area of maybes?
With the above INSERT it is either a yes or a no.
It works. Transaction committed.
It fails. Transaction rolled back.
Result. Database integrity guaranteed.
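To make that concrete, here is a sketch of pushing the "keep track of what fails" logic into the SELECT itself (the table and column names here are made up for illustration, not from the original post):

```sql
-- Hypothetical example: the filtering/flagging "logic" lives in the
-- single SQL statement, so it is still one all-or-nothing transaction.
INSERT INTO foo (id, status, amount)
SELECT s.id,
       CASE WHEN s.amount < 0 THEN 'SUSPECT' ELSE 'OK' END,  -- flag instead of failing
       s.amount
FROM   source s
WHERE  s.id IS NOT NULL;            -- rows that would "fail" are excluded up front
```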
Now you want to do it in chunks. What happens if it fails some way through? Do you commit? Do you rollback? What happens to the previous chunks that were committed?
Result. Database integrity is *NOT* guaranteed.
What is more important? A small rollback/undo space with screwed data? Or sufficient rollback/undo space and data integrity?
> So, I have to use pl/sql to do this.
Not all problems are nails.
> However, I was told by an expert not to do
> this because this database needs to be up 24 by 7. Is that right?
No.
> I did
> a quick check in the manual and I could not find any restrictions like
> that.
You get 1 credit for that. RTFM is always a Good Thing (tm).
> I only saw the restrictions like 'run the program in the server
> side not the client side', etc.
Exactly.
> Did I miss something in the manual??? I thought it'll only reduce
> the burden on the Oracle database engine. Thanks.
Forget about the Oracle burden. Do the thing The Right Way. Let Oracle then worry about how big the burden is and deal with that.
Technical issues like this SHOULD NEVER EVER dictate business flow and data integrity.
You need to move a million rows - if that is the business requirement and process flow, then you move that 1 million rows with a single transaction. Do not make technical compromises when it comes to things like this.
However - by all means *question* the business logic that requires moving such a large amount of data. Moving data around between tables in an RDBMS is stupid. That implies, IMO, a serious flaw in the database design. What about marking the data? What about a view on that data? What about partitioning? Etc.
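For instance, the "marking" and "view" alternatives could look something like this sketch (column and view names are invented for illustration):

```sql
-- Instead of physically moving rows, flag them in place...
ALTER TABLE source ADD (processed CHAR(1) DEFAULT 'N');

UPDATE source
SET    processed = 'Y'
WHERE  load_date < TRUNC(SYSDATE);

-- ...and expose the "moved" set through a view. No data ever moves.
CREATE OR REPLACE VIEW processed_rows AS
  SELECT * FROM source WHERE processed = 'Y';
```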
So given my rant, then why does PL/SQL have BULK COLLECT? Am I being a dork? Yeah, but I'm one with a lead pipe. ;-)
IMO, BULK COLLECT is not for moving data around between tables. It is there for bulk processing of data - like processing 10 million insurance claims effectively and efficiently.
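A sketch of that kind of bulk processing (cursor, table, and limit are hypothetical - the point is batched fetches feeding procedural logic, not table-to-table copying):

```sql
DECLARE
  CURSOR c IS SELECT * FROM claims;        -- hypothetical source of work
  TYPE t_tab IS TABLE OF c%ROWTYPE;
  l_rows t_tab;
BEGIN
  OPEN c;
  LOOP
    -- fetch in batches instead of one slow row-by-row cursor loop
    FETCH c BULK COLLECT INTO l_rows LIMIT 1000;
    EXIT WHEN l_rows.COUNT = 0;
    FOR i IN 1 .. l_rows.COUNT LOOP
      NULL;  -- per-claim processing logic goes here
    END LOOP;
  END LOOP;
  CLOSE c;
END;
/
```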
If you want to do data transformation (transforming 1 row from source table into 1 or more different looking destination rows), then use pipeline table functions.
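A minimal sketch of that approach, assuming invented type and table names - one source row can pipe out zero, one, or many transformed rows, and the result is consumed straight by a single INSERT .. SELECT:

```sql
CREATE TYPE dest_row_t AS OBJECT (id NUMBER, val VARCHAR2(100));
/
CREATE TYPE dest_tab_t AS TABLE OF dest_row_t;
/
CREATE OR REPLACE FUNCTION transform_rows(p_cur SYS_REFCURSOR)
  RETURN dest_tab_t PIPELINED
IS
  l_id  NUMBER;
  l_val VARCHAR2(100);
BEGIN
  LOOP
    FETCH p_cur INTO l_id, l_val;
    EXIT WHEN p_cur%NOTFOUND;
    -- each source row pipes out one (or more) transformed rows
    PIPE ROW (dest_row_t(l_id, UPPER(l_val)));
  END LOOP;
  CLOSE p_cur;
  RETURN;
END;
/

INSERT INTO dest
SELECT *
FROM   TABLE(transform_rows(CURSOR(SELECT id, val FROM source)));
```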
Not all the tools in the Oracle Feature Toolbox are hammers.
Not all problems are nails.
-- Billy

Received on Fri Sep 19 2003 - 07:40:07 CDT