Re: Questions about SQL*Developer CLI for exporting data

From: Mladen Gogala <gogala.mladen_at_gmail.com>
Date: Thu, 10 Dec 2020 23:12:26 -0500
Message-ID: <1b54b293-a572-d640-06e4-3477caaaf94a_at_gmail.com>



Hi Tim,

I don't have any special handling; I haven't had any need for it yet. I rely on Python's csv module to do the right thing. One way of dealing with such fields would be to URL-encode all fields, but that would create problems when loading them back. Encoding and decoding every field in a hundred-million-row table would be a huge expenditure of CPU and would significantly slow down the process. And a hundred million rows is not even a particularly large table.
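
Just to illustrate the point, here is a minimal sketch (this is not from dump2csv, and the sample rows are made up) of what the csv module does on its own with fields containing delimiters, quotes or newlines:

import csv
import io

# Sample rows with "troublesome" field values
rows = [
    (1, 'plain value', 'contains, a comma'),
    (2, 'embedded "quotes"', 'embedded\nnewline'),
]

buf = io.StringIO()
writer = csv.writer(buf, quoting=csv.QUOTE_MINIMAL)  # QUOTE_MINIMAL is the default
writer.writerows(rows)
print(buf.getvalue())
# Fields with commas, quotes or newlines come out properly quoted and escaped,
# and csv.reader round-trips them, so no per-field URL encoding is needed.
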

Another way of dealing with that is to write a specialized script which extracts rows from Oracle and inserts them directly into the foreign DB using that database's own module, like psycopg2 for PostgreSQL or mysql-connector for MySQL. I am not sure that CSV would be the right tool for that job. I would hesitate to use pyodbc because, as I've detailed earlier on this list, very few drivers support what Oraclites know as the "array interface" and what pyodbc calls "fast_executemany". For now, only the MS ODBC driver 17 and newer and the Oracle ODBC driver from Instant Client 19.9 and later (the latest is Instant Client 21) support that flag. The PostgreSQL ODBC driver is particularly bad and slow.
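
A rough sketch of that approach, just to show the shape of it (the connection strings, the EMP table and its columns are placeholders, nothing from a real script): cx_Oracle's arraysize gives you the array fetch on the Oracle side, and psycopg2's execute_values batches the inserts on the PostgreSQL side.

import cx_Oracle
import psycopg2
from psycopg2.extras import execute_values

# Placeholder credentials and table; adjust for the real source and target.
src = cx_Oracle.connect("scott", "tiger", "orcl")
dst = psycopg2.connect("dbname=target user=scott password=tiger")

ora_cur = src.cursor()
ora_cur.arraysize = 10000          # the "array interface": 10k rows per round trip
ora_cur.execute("SELECT empno, ename, sal FROM emp")

pg_cur = dst.cursor()
while True:
    rows = ora_cur.fetchmany()     # fetches arraysize rows at a time
    if not rows:
        break
    # Batch the INSERT on the PostgreSQL side instead of row-by-row executes
    execute_values(pg_cur, "INSERT INTO emp (empno, ename, sal) VALUES %s", rows)

dst.commit()
src.close()
dst.close()

No CSV in the middle, and both ends work in batches instead of row by row.
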

Regards

On 12/10/20 10:27 PM, Tim Gorman wrote:
> Mladen,
>
> I'm guessing that it may not work for wide tables with lots of
> columns, and I'm curious: how do you handle RAW, LONG RAW, and BLOB
> data types?
>
> I also enjoy scripting, and I had posted a similar script HERE
> <http://evdbt.com/download/dump_data-sql/> for my PL/SQL package
> DUMP_DATA which defines a pipelined table function CSV, with a
> SQL*Plus script intended for testing HERE
> <http://evdbt.com/download/csv-sql/>, along with a shell-script also
> using SQL*Plus HERE <http://evdbt.com/download/csv-sh/>, as well as a
> Perl script for serious use calling it HERE
> <http://evdbt.com/download/csv-pl/>. I don't have a Python version yet.
>
> DUMP_DATA works really well and has several nice bells and whistles,
> like configurable array processing, but it has a fatal flaw with wide
> tables.  I wrote DUMP_DATA using native PL/SQL dynamic SQL method 1,
> which is limited to a 4000-byte string for the entire SQL statement;
> exceeding the limit results in the ORA-01489 ("result of string
> concatenation is too long") error.  When I realized that I'd have to
> rewrite DUMP_DATA to use the DBMS_SQL package with dynamic SQL method
> 4 (which I know very well from my Pro*C days), I switched over to
> recommending SQL Developer to the customer.  This way, the customer
> can go chasing after Jeff's team at Oracle for support, not me.  :)
>
> Thanks so much for the Python code!
>
> -Tim
>
> On 12/10/2020 6:12 PM, Mladen Gogala wrote:
>> Hi Tim,
>>
>> I like scripting. I have the script that dumps Oracle tables or
>> queries to CSV:
>>
>> http://mgogala.byethost5.com/dump2csv.zip
>>
>> It even allows you to specify the number of rows for the array fetch.
>> It's not as fancy as SQL*Developer, but does the job.
>>
>> Regards
>>
>> On 12/10/20 10:00 AM, Tim Gorman wrote:
>>> That was what I needed, thanks so much! I had read the article, but
>>> hadn't realized what the screenshot was showing; I'm still one of
>>> those who looks for words; gotta adapt.
>>>
>>>
>

-- 
Mladen Gogala
Database Consultant
http://mgogala.byethost5.com

