
Re: design question, use of partitioned tables?

From: Manuela Mueller <mueller_m_at_fiz-chemie.de>
Date: Wed, 06 Aug 2003 14:13:41 +0200
Message-ID: <bgqr98$rlcs6$1@uni-berlin.de>


Dear Nuno,
answers embedded below

HAND
Manuela Mueller

Noons wrote:
> "Manuela Mueller" <mueller_m_at_fiz-chemie.de> wrote in message news:bgqhgl$qssm6$1_at_uni-berlin.de...
>
>

>>The data to load reside on a separate RAID array on a 720 GB partition.
>>The partition currently holds about 50 million small files. The
>>filesystem is ReiserFS; the partition is almost full and mounted via NFS.

>
>
> NFS???? <alarm bells>
> zillion small files??? <even more alarm bells, cathedral-sized ones!>
>
> Short term: vacate some serious space in that partition.
> Then try to allocate all the free space in one go, rather
> than letting the file system try to do that out of small
> fragments all over the disks as small files are created/deleted.
> You simply CANNOT have an Oracle datafile split by the file system
> into a zillion small blocks and expect good I/O performance.
>

Only the node with the files to load uses ReiserFS; the DB server node runs XFS (please see below). For the import I mount this big ReiserFS partition via NFS. We will allocate free space on that node and work with smaller partitions.

>>Unfortunately this machine is under the responsibility of my associate,
>>and due to a shortage of space he can only migrate the data piecewise to
>>other, smaller partitions. I have been observing this bad load
>>performance for 2 months.

>
>
> Well, if they are planning to scale they better get serious
> about making some major local disk space available. Nope, NAS
> or nfs doesn't cut it, and I don't give a hoot how many
> maker's marketeers jump into this thread to call me a
> liar.
>
>
>>We suspect problems with this partition: top shows a constant load
>>average of 4.0, the 2 CPUs are 97% idle, and there are no other
>>processes running except nfsd.

>
>
> I'd say the node with the NFS server has a somewhat busy disk subsystem.
> 4.0 is way too high, BTW. And the low CPU activity is a dead giveaway
> of an I/O bottleneck.
>
> Not surprised: if the partition is volatile and has a lot of small files,
> the disk space allocated to the database datafiles will be highly fragmented
> to start with.
>
> I'd suggest this as your first line of attack: compact the free space in
> that nfs partition somehow. Talk to the resident Linux guru to see what
> can be done.
>
> That partition needs to be unloaded, re-created, datafiles for db
> created, and THEN all the small files added if and when needed.
> But create the db datafiles FIRST, in an empty partition.

The database server is another node; its XFS partitions, 272 GB in size, reside on a SAN. During setup of the DB server I created fresh XFS partitions and datafiles of 32 GB each.
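
For reference, pre-allocating a datafile in an empty partition looks like this in Oracle DDL (the tablespace name, path, and extent size are illustrative assumptions, not taken from the thread):

  -- Pre-allocate one 32 GB datafile in the still-empty partition.
  -- 32000M is the practical per-datafile maximum with an 8 KB block
  -- size (4 million blocks per datafile).
  CREATE TABLESPACE load_data
    DATAFILE '/u02/oradata/PROD/load_data01.dbf' SIZE 32000M
    EXTENT MANAGEMENT LOCAL UNIFORM SIZE 16M;

Creating the datafile while the partition is still empty gives the filesystem the best chance of laying it out contiguously, which is exactly Nuno's point above.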

>
> And if he/she tells you fragmentation is not a problem in Reiser-fs,
> simply tell them uncle Nuno said *it is* and will come around and clip
> their doolies if they continue to say otherwise!
>
>

>>On the DB server machine, the data come in via eth0 at a 'speed' of
>>400 K/sec (I expect 1.2-1.6 M/sec). The 4 CPUs are idle, no RAM
>>shortage. A trace of one of the Oracle processes performing the load
>>shows that it mostly asks for the time (gettimeofday).
>>Before dropping the main table, load performance was 24-40 K/sec.

>
>
> Yup, definitely I/O starvation.
>
>
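
One way to confirm that diagnosis from inside Oracle (a sketch; it assumes you can identify the loader session's SID, e.g. from v$session):

  -- Wait events for the loading session, worst first; heavy
  -- 'db file sequential/scattered read' or 'log file sync' times
  -- point at the I/O subsystem rather than the CPU.
  SELECT event, total_waits, time_waited
    FROM v$session_event
   WHERE sid = :loader_sid
   ORDER BY time_waited DESC;
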
>>sqlldr conventional path, commit every 256 rows.

>
>
> try a larger commit size. Work out how many rows fit into
> a block on average, multiply by 100 and make that your array commit size.

Thanks for the tip, I'll try it immediately.
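
With SQL*Loader's conventional path, the array commit is controlled by the ROWS parameter, and the bind array (BINDSIZE) must be big enough to hold that many rows or sqlldr silently lowers ROWS to fit. A sketch of a control file along those lines (the table, columns, and the ~130 rows per 8 KB block figure are illustrative assumptions):

  -- ROWS = rows-per-block (~130) * 100, per the rule of thumb above.
  -- BINDSIZE/READSIZE sized generously so ROWS is not trimmed down.
  OPTIONS (ROWS=13000, BINDSIZE=20971520, READSIZE=20971520)
  LOAD DATA
  INFILE 'data.dat'
  APPEND INTO TABLE docs
  FIELDS TERMINATED BY ','
  (doc_id, load_date DATE "YYYY-MM-DD", payload)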

>
> Another thing: you sure your redo log files are not in another
> nfs partition? If they are that's not a killer, that's a WMD!
>
> And make those redo log files BIG!!! As in at least 500 MB if not 1 GB
> each.
>

Redo log files (each 100 MB) are also located on XFS partitions; log switches amount to 1-2 per day in 'normal' usage (i.e. when not creating an Oracle Text index).
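
If the bulk loads do push the switch rate up, larger groups can be added alongside and the 100 MB ones dropped afterwards. A sketch, with assumed paths and group numbers:

  -- Add a new, larger group (repeat for two or three groups).
  ALTER DATABASE ADD LOGFILE GROUP 4
    ('/u03/oradata/PROD/redo04a.log') SIZE 512M;

  -- An old group can only be dropped once it is neither CURRENT
  -- nor ACTIVE; force a switch first if necessary.
  ALTER SYSTEM SWITCH LOGFILE;
  ALTER DATABASE DROP LOGFILE GROUP 1;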

>

>>I got the idea of partitioning in order to scale up the application.
>>Within 2 years the total size should be 10 TB; one main table is
>>impossible to handle.

>
>
> You need much better I/O. NFS partitions simply won't cut it at
> that size/volume.
>
> On how to partition the table, I'd start by looking at what will scale
> up in 2 years: the total number of rows you have to load in a month
> or the number of months you have to have on-line?
>
> If number of months, then it's a no-brainer: partition on date
> (there MUST be some column you can use for that).
>
> If it's the number of rows per load (each month), then you need
> to investigate another partitioning rule. Hash would be a way
> to go, with multiple parallel and concurrent loads.
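
For reference, the two schemes as Oracle DDL (table and column names here are illustrative placeholders, not from the thread):

  -- Range partitioning by date: one partition per month on-line;
  -- old months can later be dropped or exchanged out cheaply.
  CREATE TABLE docs (
    doc_id    NUMBER,
    load_date DATE,
    payload   VARCHAR2(4000)
  )
  PARTITION BY RANGE (load_date) (
    PARTITION p200308 VALUES LESS THAN (TO_DATE('2003-09-01','YYYY-MM-DD')),
    PARTITION p200309 VALUES LESS THAN (TO_DATE('2003-10-01','YYYY-MM-DD')),
    PARTITION pmax    VALUES LESS THAN (MAXVALUE)
  );

  -- Hash partitioning by key: spreads concurrent parallel loads
  -- evenly across partitions when there is no natural date split.
  CREATE TABLE docs_hashed (
    doc_id    NUMBER,
    load_date DATE,
    payload   VARCHAR2(4000)
  )
  PARTITION BY HASH (doc_id) PARTITIONS 8;
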
>
> HTH
>
> --
> Cheers
> Nuno Souto
> wizofoz2k_at_yahoo.com.au.nospam
>
>
Received on Wed Aug 06 2003 - 07:13:41 CDT
