
RE: Breaking down values in a large table

From: John Hallas <john.hallas_at_hcresources.co.uk>
Date: Tue, 11 Feb 2003 08:56:43 -0800
Message-ID: <F001.00549771.20030211085643@fatcity.com>


Tom, Lisa, Stephane

I did consider that, but I was originally exporting using a direct export and could get the whole table out in 4 hours (compressed via a pipe). However, the import takes 24 hours.

I was looking at using a normal (via the buffer) export with a QUERY clause, so that I could import the 4 exports separately and in parallel.

I think the issue with filesize/file and split is that it still leaves only a single import, which will take 24 hours plus.

I know the min and max log_no, so I will try an export based on dividing the max log_no by 4. The breakdown will not be exact, but it should be good enough for me.
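
A rough sketch of what the four parameter files could look like, assuming a conventional-path export (the QUERY parameter cannot be combined with direct path) and with the credentials, table name and the xx/yy/zz boundary values all placeholders:

big_table_1.par (rows below the first boundary):

  userid=owner/password
  tables=(big_table)
  direct=n
  buffer=10485760
  query="where log_no < xx"
  file=big_table_1.dmp
  log=big_table_1.log

big_table_2.par to big_table_4.par differ only in query, file and log, using

  query="where log_no >= xx and log_no < yy"
  query="where log_no >= yy and log_no < zz"
  query="where log_no >= zz"

respectively. Each dump then gets its own imp session, all four running at the same time against the pre-created table (with ignore=y so imp skips the "table already exists" error).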

Thanks for your input - welcome as always  

John  

-----Original Message-----
Sent: 11 February 2003 15:27
To: john.hallas_at_hcresources.co.uk

>Listers,
>I have a table of 125M rows (not partitioned) which
>I am exporting. I
>want to break the export into 4 dmp files using the
>query command on the
>pk column.
>
>I am looking at the best way of finding the values
>of the PK (a number) that sit at roughly 25%, 50%
>and 75% of the table, so that I can get 4 evenly
>sized exports.
>
>My query line in the parameter file will be along
>the lines of where
>1) log_no < xx
>2) log_no >= xx and < yy
>3) log_no >= yy and < zz
>4) log_no >= zz
>
>I am thinking of SQL something like the following:
>
>
>Select /*+ index_ffs(table_name index_name) */
>Log_no, floor(log_no / 4), count(*)
>From table_name group by floor(log_no / 4), log_no
>
>
>Version is 8.1.7.1
>
>Can anybody help please
>
>Thanks
>
>John
>
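
As an aside on the quoted question: on 8.1.7 the 25%/50%/75% boundary values can be computed in a single pass with the NTILE analytic function (available from 8.1.6) rather than by grouping on floor(log_no / 4). A minimal sketch, with table_name and log_no taken from the question as placeholders:

  select quartile, max(log_no) as boundary_log_no
  from  (select log_no,
                ntile(4) over (order by log_no) as quartile
         from   table_name)
  group by quartile
  order by quartile;

The max(log_no) values for quartiles 1, 2 and 3 give the xx, yy and zz cut-offs; the price is sorting all 125M keys, so the rough max-divided-by-4 split may still be the cheaper option.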
 

John,

   I am not sure it is worth the trouble. If you specify the PK:

  1. if Oracle uses the index to fetch each row, it will probably be slower than it should be;
  2. if it doesn't, you will scan your table four times instead of once.

I presume you are under Unix, and on a multi-CPU machine? In that case I would rather try to set parallelism on the table, create a named pipe, export to the pipe and use 'split' if you really want 4 files. All in all, it will probably be faster. Now, if what you ultimately want is to be able to reload in parallel into 4 different partitions, it may of course be different, but then you will be sacrificing export speed for lesser import slowness ... and I would rather think in terms of future partitions than mere numbers of rows.
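
The pipe-and-split arrangement described above usually looks something like the following sketch, in which the paths, credentials and the roughly 2 GB piece size are only placeholders:

  # export: split reads from the pipe while exp writes into it
  mkfifo /tmp/exp_pipe
  split -b 2000m /tmp/exp_pipe /backup/big_table_dmp_ &
  exp owner/password tables=big_table direct=y file=/tmp/exp_pipe log=exp_big_table.log

  # import: rebuild the stream from the pieces and feed it to imp the same way
  mkfifo /tmp/imp_pipe
  cat /backup/big_table_dmp_* > /tmp/imp_pipe &
  imp owner/password tables=big_table ignore=y file=/tmp/imp_pipe log=imp_big_table.log

This keeps the export a single fast direct-path pass, but it does nothing to shorten the 24-hour import, which is what the QUERY-based split is aimed at.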

Regards,

Stephane Faroult
Oriole

