
Re: design question, use of partitioned tables?

From: Manuela Mueller <mueller_m_at_fiz-chemie.de>
Date: Wed, 06 Aug 2003 17:54:00 +0200
Message-ID: <bgr86c$rmr97$1@uni-berlin.de>


Dear Nuno,
answers embedded.
Greetings
M. Mueller

Noons wrote:
> "Manuela Mueller" <mueller_m_at_fiz-chemie.de> wrote in message news:bgqr98$rlcs6$1_at_uni-berlin.de...
>
>>Only the node with the files to load is Reiser FS.
<snip>

> Ah, that's better. Try and get smaller partitions,
> definitely. Make sure you create first the db datafiles
> in freshly-made partitions, before they get too much
> activity. Check out if the block size used internally
> by xfs is the same as your database block size. Should
> be in the "man" pages how to verify this.
>

The xfs block size is 4K (the default value). I should change this to 8K in the next setup.
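
For reference, comparing the two sides is quick; a sketch (the mount point here is just an example, and xfs_info output varies a bit by xfsprogs version):

   # file system side: look for "bsize" in the data section
   xfs_info /u02/oradata

   -- database side, from SQL*Plus
   SHOW PARAMETER db_block_size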

> Another thing: these "files to load", how are you getting them
> into sqlldr? Catenate and then load as a single file? Can you
> merge them into larger files?
>
> You may be able to run, say, 4 parallel loads into the
> same table by splitting the input files into 4 independently
> run scripts? Before you do that, make sure you set the table
> with INITRANS 4 and FREELISTS 4 (a bit of an overkill,
> but it won't hurt).

I'm currently running 4 independent sqlldr sessions, but INITRANS and FREELISTS are at the default value of 1. Thanks for the hint, I must change these values. Each session has 3.5 million rows to load; the location of the LOB file is specified dynamically in the control file (via a FILLER field), and the *.dat file contains the absolute path to the lobfile. Each *.dat file is about 860 MB in size.
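
For readers following along, a minimal sketch of such a control file plus the settings Nuno suggests (table, column and file names here are invented, not our real ones):

   LOAD DATA
   INFILE 'chunk1.dat'
   APPEND
   INTO TABLE documents
   FIELDS TERMINATED BY ';'
   ( doc_id    INTEGER EXTERNAL,
     lob_path  FILLER CHAR(255),               -- absolute path from the *.dat file
     doc_text  LOBFILE(lob_path) TERMINATED BY EOF
   )

   -- Nuno's suggested settings; note that ALTER TABLE ... INITRANS
   -- only affects blocks formatted after the change
   ALTER TABLE documents INITRANS 4;
   ALTER TABLE documents STORAGE (FREELISTS 4);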

>
> Table partitioning with sqlldr to each partition is of
> course another option, but you need an easy way of
> separating the input data files by partitioning key
> and that is not always possible.
>
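
For concreteness, loading into partitions this way would look roughly like the following (names and the range key are invented):

   CREATE TABLE documents_part ( doc_id NUMBER, doc_text CLOB )
   PARTITION BY RANGE (doc_id)
   ( PARTITION p1 VALUES LESS THAN ( 5000000),
     PARTITION p2 VALUES LESS THAN (10000000),
     PARTITION p3 VALUES LESS THAN (MAXVALUE) );

   -- each sqlldr control file then targets one partition:
   -- INTO TABLE documents_part PARTITION (p1) ...
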
> Another thing to try would be direct path
> data loads instead of normal, given that you use sqlldr.
> Read about them on the Utilities manual and see if it is
> applicable to your data.

I already tried direct path loads a few months ago, but unfortunately I can't guarantee that the LOB is already present (nature of the application). The direct path load aborts when I try to load many records with a NULL LOB. After a few initial tests I returned to the slower conventional path :( .
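
For comparison, the only change on the command line is the DIRECT flag; a sketch (user, control and log file names are placeholders):

   sqlldr userid=loader/secret control=chunk1.ctl log=chunk1.log direct=true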

>
>
>>The Database Server is another node which has XFS partitions with 272 GB
>>in size. Partitions reside on a SAN. During setup of the DB Server I
>>created fresh partitions with XFS and datafiles each 32 GB in size.
>
>
> Much better. I'm a bit surprised you can get
> db datafiles of 32Gb in Linux with a 32-bit version of
> Oracle and not have any problems. Still if it's
> working, no problem. But much larger file system
> partitions might have performance problems with Unix/Linux.
>

32 GB is the maximum size for a datafile here (an Oracle datafile is limited to roughly 4 million blocks, which with an 8 KB database block size works out to 32 GB). We haven't encountered any problems with this size yet.
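
Hitting that ceiling just means adding another datafile to the tablespace, e.g. (tablespace name and path invented):

   ALTER TABLESPACE doc_data
     ADD DATAFILE '/u03/oradata/docdb/doc_data05.dbf' SIZE 32000M;
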
>
>
>>Redo log files (each 100 MB) are also located on XFS partitions; log
>>switches amount to 1 - 2 per day in 'normal' usage (not Oracle Text Index
>>creation).
>
>
> Yeah but how many switches in a load situation? Doesn't hurt
> (other than disk overhead) to make them larger.

In a load situation there is a log switch every 30 min. The DB runs in NOARCHIVELOG mode during the load and the creation of the Oracle Text indexes (not enough space for ARCHIVELOG mode). After the update procedure is finished, I switch to ARCHIVELOG mode and perform a cold backup. But anyway, I'll enlarge the redo log files.
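
Since online redo logs cannot be resized in place, enlarging them means adding bigger groups and dropping the old ones once they are inactive; a sketch (paths and the 512 MB size are examples):

   ALTER DATABASE ADD LOGFILE GROUP 4
     ('/u04/oradata/docdb/redo04.log') SIZE 512M;
   ALTER DATABASE ADD LOGFILE GROUP 5
     ('/u04/oradata/docdb/redo05.log') SIZE 512M;
   -- force switches and check V$LOG until group 1 is INACTIVE, then:
   ALTER SYSTEM SWITCH LOGFILE;
   ALTER DATABASE DROP LOGFILE GROUP 1;
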
>
> Another thing: do a complete check of constraints
> and primary keys and such being disabled while the
> big data load is going. Also triggers. All you need
> is one left active and bang: there goes your performance.
> Big data loads like these need a vanilla plain table
> with nothing fancy in it. It's easy to forget about
> some long forgotten detail, so do another check.
>

PK and all triggers on the table are disabled during the load.
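
For the "do another check" part, the dictionary views make this easy to verify (table name invented):

   SELECT constraint_name, status FROM user_constraints
    WHERE table_name = 'DOCUMENTS' AND status = 'ENABLED';
   SELECT trigger_name, status FROM user_triggers
    WHERE table_name = 'DOCUMENTS' AND status = 'ENABLED';

   -- and to disable anything still active:
   ALTER TABLE documents DISABLE CONSTRAINT documents_pk;
   ALTER TABLE documents DISABLE ALL TRIGGERS;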

> How long does it take to create indexes and such
> after the data is in? Is the total time you mentioned
> including index creation time, if so how long is
> the table load time alone?

Good question; I have never performed a load of this magnitude before. When I load about 3 million records, it takes about 1 hour.
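
To time the table load and the index build separately, one option is to create the indexes only after the data is in, with something like this (index and column names invented):

   SET TIMING ON
   CREATE INDEX documents_ix ON documents (doc_id)
     PARALLEL 4 NOLOGGING;
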
>
>
> --
> Cheers
> Nuno Souto
> wizofoz2k_at_yahoo.com.au.nospam
>
>
Received on Wed Aug 06 2003 - 10:54:00 CDT
