
RE: Temp space in 10g and data insert speed

From: Mark W. Farnham <>
Date: Tue, 11 Jan 2005 13:08:55 -0500
Message-ID: <>

Please do not interpret this as a blanket endorsement of sqlplus copy for your purpose, but rather as a possible objection removal as you consider your options.

sqlplus copy by its nature supports incremental commits (it commits every copycommit times arraysize rows in the sqlplus session, unless copycommit is zero, which directs a monolithic copy). It also supports predicates and sorting. Check that the sqlplus copy of your vintage supports the data types you need to copy (and beware LONG: you must set the sqlplus session LONG length to at least the longest LONG value, or you are instructing the copy command to truncate any longer LONG column values without raising an error or warning). If the nature of your network services and listeners allows you to designate the pathway used for a particular sqlplus connection, you may in fact consume the network bandwidth you have available more effectively.
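A sketch of such a copy (the connect string, table names, and predicate are hypothetical; the settings are the ones discussed above):

```sql
-- Commit every COPYCOMMIT * ARRAYSIZE = 1000 rows instead of one monolithic copy.
SET ARRAYSIZE 100
SET COPYCOMMIT 10
-- Must be at least the longest LONG value, or longer values are silently truncated.
SET LONG 2000000
COPY FROM scott/tiger@src_db -
  INSERT local_stage -
  USING SELECT * FROM big_table WHERE saledate >= DATE '2004-12-01'
```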

So you can run your copies in parallel to multiple destinations if you like, and limit the copies to only the needed tuples. Various schemes can then be used to insert locally from those destinations into the final destination as desired, with rows possibly available as soon as the first incremental commit.

Which overall scheme completes the batch operation in the least total time depends far too much on your individual situation for me to make a prognostication, and I have no idea whether you need to conserve any resource other than time. But subject to other overheads, the extra time of using copy as the first step is as little as the time to the first commit. You just need a good strategy for not repeatedly scanning over rows you've already loaded into the final destination.

Also, depending upon the physical distance between the servers and the speed of your network, do not forget to consider sneakernet (which in this case would presumably be whole disk drives).



-----Original Message-----
From: [] On Behalf Of Ron Rogers
Sent: Tuesday, January 11, 2005 12:43 PM
To: ;;;
Subject: RE: Temp space in 10g and data insert speed

Thanks for the input.
That might be a good choice if the network were not reliable (not saying that ours is the best),
but the tables are around 3-4 GB in size and have tens of millions of rows. The time to transfer the
data to the server temp table and then load from the temp table could be prohibitive.
I have checked the reference manual for 10g that I downloaded yesterday, and I am starting with
the parameter pga_aggregate_target while adjusting sga_target to match the memory
in the server.
I will keep you posted.

>>> "Guang Mei" <> 01/11/2005 11:56:57 AM >>>
I have never tested it, but it may be worth trying to "copy" the remote table
to a local table first (I think there is a COPY command in sqlplus), then
insert the rows into your destination table.


-----Original Message-----
From: [] On Behalf Of Ron Rogers
Sent: Tuesday, January 11, 2005 11:35 AM
To: ;;
Subject: RE: Temp space in 10g and data insert speed

Thanks for the feedback; it gives me other avenues to venture down. The load procedure is:
log into server 1 and issue "insert into table1@databaselink select * from table1;"
On the larger tables I would separate the inserts by the primary key (saledate) and
load a month at a time, then commit.
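The month-at-a-time approach described above might be sketched like this (table1, saledate, and the link name are from the post; the date range is hypothetical):

```sql
-- Load one month's slice over the database link, then commit.
INSERT INTO table1@databaselink
SELECT * FROM table1
 WHERE saledate >= DATE '2004-01-01'
   AND saledate <  DATE '2004-02-01';
COMMIT;
-- ...then repeat for each subsequent month.
```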

I will look into the WORKAREA_SIZE_POLICY parameter and its impact when I get a chance. Right now I'm tied up with new client PC deployment.
It's great to have a multi-tasking job. Ron

>>> "Gints Plivna" <> 01/11/2005 9:47:43 AM >>>
According to the docs, the default value of WORKAREA_SIZE_POLICY in 10g is AUTO. Since you haven't explicitly set it to MANUAL
in the init file, you are using automatic PGA sizing governed by pga_aggregate_target. As you have commented out pga_aggregate_target,
it will use its default value, which again per the docs is 10 MB or 20%
of the size of the SGA, whichever is greater. The 20% (~130 MB, I think) in your case is probably big enough.
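For reference, a sketch (the value is hypothetical, not a recommendation) of setting these explicitly on a 10g instance running from an spfile, rather than relying on the default:

```sql
-- Keep automatic PGA sizing and give it an explicit target.
ALTER SYSTEM SET workarea_size_policy = AUTO SCOPE = BOTH;
ALTER SYSTEM SET pga_aggregate_target = 200M SCOPE = BOTH;
```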

But if you are using WORKAREA_SIZE_POLICY=AUTO then Oracle doesn't use sort_area_size and sort_area_retained_size.

And you haven't told us anything about the data loading process, so the problem probably isn't in the db parameters.


> -----Original Message-----
> From: [] On Behalf Of Igor Neyman
> Sent: Tuesday, January 11, 2005 4:24 PM
> To: ;
> Subject: RE: Temp space in 10g and data insert speed
> Why do you think the tempfile should be increasing? Maybe it's big
> enough.
> To check usage of the tempfile, look into V$TEMPSTAT.
> Also, increasing sort_area_size and sort_area_retained_size might help
> with the performance.
> Igor Neyman, OCP DBA
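A sketch of the V$TEMPSTAT check suggested above, joined to V$TEMPFILE for the file names (the choice of columns is an assumption):

```sql
-- Physical block reads/writes per tempfile since instance startup.
SELECT f.name, s.phyblkrd, s.phyblkwrt
  FROM v$tempstat s
  JOIN v$tempfile f ON f.file# = s.file#;
```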



Received on Tue Jan 11 2005 - 12:21:43 CST
