Re: utl_file: to fflush or not to fflush?
> > I am having a hard time with the code below (8.1.5, Solaris). If the BLOB being put is > 32768 bytes, then the produced file size is always 32768. The fflush in the loop is needed to write out the complete file. I think that's not what's documented.
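For reference, here is a minimal sketch (not the original poster's exact code) of the pattern under discussion: read the BLOB in chunks of up to 32767 bytes with DBMS_LOB.READ and write each chunk with UTL_FILE.PUT, calling FFLUSH inside the loop. The directory, file name, and table/column names are placeholders.

```sql
-- Sketch only: write a BLOB to a file via UTL_FILE.PUT in 32K chunks.
-- '/tmp', 'doc.bin' and the documents table are placeholders.
DECLARE
  l_blob    BLOB;
  l_file    UTL_FILE.FILE_TYPE;
  l_buffer  RAW(32767);
  l_amount  BINARY_INTEGER := 32767;
  l_pos     INTEGER := 1;
  l_length  INTEGER;
BEGIN
  SELECT doc INTO l_blob FROM documents WHERE id = 1;   -- placeholder query
  l_length := DBMS_LOB.GETLENGTH(l_blob);

  -- 8.1.5 has no binary ('wb') open mode and no PUT_RAW, so open in 'w'
  -- mode with the maximum line size; the location must be in utl_file_dir.
  l_file := UTL_FILE.FOPEN('/tmp', 'doc.bin', 'w', 32767);

  WHILE l_pos <= l_length LOOP
    l_amount := 32767;
    DBMS_LOB.READ(l_blob, l_amount, l_pos, l_buffer);   -- l_amount returns bytes actually read
    -- PUT expects VARCHAR2, hence the cast; this is the call where the
    -- 32K limit and the FFLUSH 'write error' described here show up.
    UTL_FILE.PUT(l_file, UTL_RAW.CAST_TO_VARCHAR2(l_buffer));
    UTL_FILE.FFLUSH(l_file);
    l_pos := l_pos + l_amount;
  END LOOP;

  UTL_FILE.FCLOSE(l_file);
END;
/
```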
It's even worse! The code (with fflush) does not work at all: fflush raises a 'write error' exception while flushing the second 32K buffer. This means that utl_file.put cannot be used for writing more than 32K of binary data!
utl_file.put_line IS able to write more than 32K of data, but of course it appends a line terminator after each buffer, which defeats the purpose of writing binary data.
I do not agree with the statement that utl_file is not intended for writing binary data; utl_file.put is, IMHO, created for exactly that purpose. My approach for writing BLOBs with utl_file comes from an Oracle whitepaper about BLOBs in 8i, so even someone within Oracle thinks it is suitable. Unfortunately, the given example wrote only about 1040 bytes of data, thereby giving the impression that .put can be used for writing binary data AND hiding the bug. That's mean.
In my humble opinion, this is just a silly arithmetic bug in utl_file.put/utl_file.fflush.
Anyway, once I'm done venting my frustration, I'll try another interface or database.
Joost
Received on Wed Oct 31 2001 - 13:25:39 CST