Oracle FAQ | Your Portal to the Oracle Knowledge Grid
Home -> Community -> Usenet -> c.d.o.server -> Re: Speeding up an unload - late comment (what else is new?)
Gary,
You reminded me of something that's bugged me forever about Oracle. INFORMIX has (or used to have) a nice command called dbunload (or was it dbdump?) that was very user friendly, or DBA friendly at least, and would kindly dump the SQL to create the database, or that plus the data that went into the database, or just the data. It was a Godsend when I was told that the whole app I'd spent 3-4 months building in Informix had to be (you know what) canned, because it had to be in Oracle. And by some irony, the data dumps were in a format that Oracle's SQL*Loader handles happily: ASCII flat files, variable-length records, with fields separated by a delimiter character (the pipe), like
|112140|Harris|Gregory|T|
Some coincidence, huh?
Anyhow, you were naughty not to mention just what tool you were using to do this, but I suspect SQL*Plus. (I sure hope you aren't trying to do this with sqldba / svrmgrl; that would be insane.)
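If it is SQL*Plus, for what it's worth, here's the shape of extract script I'd use (the table and column names are made up, obviously; the SET commands keep SQL*Plus from wasting time on headings, pagination and screen echo):

```sql
-- Hypothetical extract script: substitute your own table and columns.
SET ARRAYSIZE 100      -- fetch more rows per round trip
SET TERMOUT OFF        -- don't echo rows to the screen while spooling
SET FEEDBACK OFF
SET HEADING OFF
SET PAGESIZE 0
SET TRIMSPOOL ON       -- strip trailing blanks from spooled lines
SET LINESIZE 200
SPOOL /tmp/emp_extract.dat
SELECT '|' || empno || '|' || lastname || '|' || firstname
           || '|' || midinit || '|'
  FROM emp;
SPOOL OFF
EXIT
```

TERMOUT OFF only actually suppresses the display when the script is run with @, not typed interactively, so put it in a script file.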
Anyways, too bad sqlplus can't be single-task relinked like sqlldr, imp and exp; that often gives huge performance gains. (I don't THINK it can...naw.) On platforms that support single-task modes for loader, import and export, a lot of overhead is just done away with. But only the DBAs should be permitted access to those executables.
You might want to try PL/SQL: write a simple program that is really just a wrapper around your query, using UTL_FILE to write the output. If I had a Sun box right here I'd try it myself (there I go, I must be a machine bigot or something, saying that). The suggestion about ARRAYSIZE is a very good one too if you are indeed using sqlplus to do these extracts. It would be an interesting experiment to see whether PL/SQL ends up faster, slower, or no different. Being executed natively inside Oracle gives PL/SQL an edge; UTL_FILE might be the gagging point.
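Just to illustrate what I mean, a bare-bones wrapper might look like this. I'm inventing the table, the columns, the directory and the file name; UTL_FILE can only write to directories listed in UTL_FILE_DIR in your init.ora, so check that first:

```sql
-- Sketch only: assumes UTL_FILE_DIR includes /tmp and a table EMP exists.
DECLARE
  fh  UTL_FILE.FILE_TYPE;
BEGIN
  fh := UTL_FILE.FOPEN('/tmp', 'emp_extract.dat', 'w');
  FOR r IN (SELECT empno, lastname, firstname, midinit FROM emp) LOOP
    UTL_FILE.PUT_LINE(fh,
      '|' || r.empno || '|' || r.lastname || '|' ||
      r.firstname || '|' || r.midinit || '|');
  END LOOP;
  UTL_FILE.FCLOSE(fh);
EXCEPTION
  WHEN OTHERS THEN
    -- Don't leave the file handle dangling if the query blows up.
    IF UTL_FILE.IS_OPEN(fh) THEN
      UTL_FILE.FCLOSE(fh);
    END IF;
    RAISE;
END;
/
```

The big unknown is whether UTL_FILE's row-at-a-time writes beat SQL*Plus's formatting overhead; that's the experiment.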
Am I correct in assuming you are pulling from a flat table and just want its contents dumped out? What are you feeding this into subsequently? Another insane thought experiment comes to mind: linking the table into MS Access and having it write out the records to a flat file on the PC, if the PC has the disk space, a decent CPU, and network bandwidth. I can't see how that could be faster; all logic dictates otherwise. But on the other hand it wouldn't take but a few minutes to set up and try.
Other random thoughts include whether other apps are active on your Oracle server when you're doing this, whether any of them access the instance, and most particularly whether any of them are accessing this table you're dumping. Even though your query is as simple as a query can get, a read-consistent image in query results is an obsession of Oracle's, as it should be. Imagine the chaos of dumping a table of, oh, say, ledger data: while you're doing that someone changes January after you've already gotten to April, or changes April before you get to it, and you end up with an inconsistent mess that won't balance out.
Trying your query with the instance in RESTRICTED mode, so that you're the only possible accessor, might be a thought; if in the past you've been doing this while other users were on, or jobs were running that accessed that table, or even just for general performance, try it after throwing everybody off, Oracle users or not. sar or vmstat would give you some clue as to how much that would help.
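In case it helps, the mechanics of that (connected with DBA privilege) are roughly:

```sql
-- Keep non-privileged users out for the duration of the extract.
ALTER SYSTEM ENABLE RESTRICTED SESSION;
-- ... run the extract ...
ALTER SYSTEM DISABLE RESTRICTED SESSION;
```

One caveat: enabling restricted session does not kick off sessions that are already connected; you'd have to kill those yourself, or bounce the instance and bring it up with STARTUP RESTRICT.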
If it's a big table, db_block_buffers and the block size would affect you as well, as would the other parameters governing how many blocks are read at one swoop (db_file_multiblock_read_count). Also, if the table is highly fragmented, with lots of row chaining and such going on, that would hurt you too. Then there's the hardware: are you on an SMP system (multiple CPUs), and what kind of disk storage are you using? If the table were physically striped onto multiple disks and you do have an SMP machine, altering the table to increase parallelism might even help, despite this being basically a serial read of all rows of the table: the query could be divided among multiple processors, and with the table physically sliced among multiple disks, the PQ mechanism could be extracting rows on several processors at once and merging the results (rather silly, since you simply want all of them, but Oracle can be full of surprises). If you do try the PQ route, and the table is in fact striped across multiple spindles, make sure you're feeding Oracle enough memory in the SGA.
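If you want to experiment with PQ, the table-level knob looks like this (degree 4 is just a placeholder; match it to your CPU and spindle count):

```sql
-- Hypothetical: let the optimizer parallelize full scans of EMP.
ALTER TABLE emp PARALLEL (DEGREE 4);
-- ... run the extract, then put it back ...
ALTER TABLE emp NOPARALLEL;
```

If you'd rather not touch the table definition, a PARALLEL hint on the extract query itself does the same thing for just that one statement.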
I continue to be astonished at the number of things that I swear "oh come on, how could THIS make a difference?" that do, as well as things I am dead sure will help that make things worse.
AHA, one last idea: if your table is in a tablespace of its own, or shares one only with tables that don't need to change for the duration of your extract, you might try ALTERing the tablespace to READ ONLY before your query and putting it back afterwards.
That would shut down a number of mechanisms in Oracle and might help the thing run faster, though if you can put the DB in RESTRICTED mode so nobody else can get in, I don't know whether it would make much difference. Still, it's worth a try.
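That one's a two-liner too; I'm assuming a tablespace name here, and note that the ALTER will wait for any in-flight transactions against the tablespace to finish before it completes:

```sql
-- Assumes the table lives (alone) in a tablespace called EXTRACT_TS.
ALTER TABLESPACE extract_ts READ ONLY;
-- ... run the extract ...
ALTER TABLESPACE extract_ts READ WRITE;
```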
I've found over the years that often it's little tweaks and fooling around that gain you ground by inches, but don't forget that a mile is 63,360 inches; a few inches here, a few inches there, they add up. We took a program that ran for 108 hours and still never finished, and by inches here and inches there, we got it down to 2 hours. Little things can add up!
RSH.
"Gary" <garychars_at_yahoo.com> wrote in message
news:1Whd8.149077$Re2.11835689_at_bin6.nnrp.aus1.giganews.com...
> I need to extract data from the DB to a flat file (non-export format), but
> the extracts take forever: close to an hour for 2.1 Gig. Does anyone know
> of a way to extract data faster?? Are there any performance tips I should
> know? How long should something like this take?? Thanks a bunch!!
>
> Gary
Received on Thu Feb 28 2002 - 06:02:14 CST