
Re: design question, use of partitioned tables?

From: Noons <wizofoz2k_at_yahoo.com.au>
Date: Thu, 7 Aug 2003 21:19:35 +1000
Message-ID: <3f3239c9$0$10356$afc38c87@news.optusnet.com.au>


"Manuela Mueller" <mueller_m_at_fiz-chemie.de> wrote in message news:bgt878$r2t9n$1_at_uni-berlin.de...

> a) create table:

OK, I'll look into this at the office tomorrow (my time). It's night now and I need to sleep...

> Management wants upscaling from currently 1 TB to 10 TB during the next
> 2 years. Additionally, other document types should be indexed (xml, pdf
> files) in the near future. We can theoretically imagine 2 approaches:
> a) one main table per subject specific Oracle Text Index (Redundancy in
> html files approx. 20-30%)

I'd think along these lines myself.
Spread the load. It works. Plus eventually it will be easier to RAC the lot if you so wish.

> the data are evenly distributed but I lack experience in partitioned
> tables with BLOBs.

You and just about everybody else here! ;) Still: nothing like trying, no?

> Is it possible to defragment one partition at a time?

There should be no problem doing that. However, I have a few doubts about the syntax. I tried partitioning a small table with a CLOB today on 8iR3 and got nowhere fast: the syntax was way too dense for me to sort out in one sitting. Will try again on 9iR2.
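To give an idea of the general shape of it (everything below is a rough, untested sketch with made-up names and tablespaces, not something I've verified on this exact release): a range-partitioned table with the LOB storage spelled out per partition, plus a per-partition MOVE to rebuild one slice at a time:

  -- range-partitioned table, CLOB stored out of line,
  -- each partition's LOB data in its own tablespace
  CREATE TABLE docs_part (
    doc_id    NUMBER NOT NULL,
    load_date DATE   NOT NULL,
    doc_text  CLOB
  )
  LOB (doc_text) STORE AS (DISABLE STORAGE IN ROW CHUNK 8192)
  PARTITION BY RANGE (load_date) (
    PARTITION p2002 VALUES LESS THAN (TO_DATE('2003-01-01','YYYY-MM-DD'))
      LOB (doc_text) STORE AS (TABLESPACE lob_ts_2002),
    PARTITION p2003 VALUES LESS THAN (MAXVALUE)
      LOB (doc_text) STORE AS (TABLESPACE lob_ts_2003)
  );

  -- "defragment" one partition at a time; local index partitions
  -- go unusable and need a rebuild afterwards
  ALTER TABLE docs_part MOVE PARTITION p2002
    LOB (doc_text) STORE AS (TABLESPACE lob_ts_2002);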

> Any other ideas how to deal with this situation?

Well, I've done a little testing myself with an 8iR3 db on a P4 NT4 Server. Single data disk, redo logs on another disk, no archivelog. Block size 8KB. LMT with 8MB fixed allocation, like yours. Created the table in one tablespace, put the CLOB column in another, DISABLE STORAGE IN ROW.
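For reference, the test table was roughly this shape (the names here are made up; the rest matches the setup just described):

  CREATE TABLE clob_test (
    id      NUMBER,
    payload CLOB
  )
  TABLESPACE data_ts                 -- table data in one tablespace
  LOB (payload) STORE AS (
    TABLESPACE lob_ts                -- CLOB segment in another
    DISABLE STORAGE IN ROW
    CHUNK 8192
    NOCACHE NOLOGGING
    STORAGE (INITIAL 8M NEXT 8M)     -- moot in a uniform LMT, but declared anyway
  );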

Timed inserts with EMPTY_CLOB() as the value for the column and got about 11 million rows in one hour, single process. The lobindex was defined with initial/next of 8MB and grew only a little past the initial extent. Not really significant.
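The timing test itself was nothing fancy, roughly this (again a sketch with the made-up names from above):

  SET TIMING ON
  BEGIN
    FOR i IN 1 .. 1000000 LOOP   -- scale the count to taste, single process
      INSERT INTO clob_test (id, payload) VALUES (i, EMPTY_CLOB());
    END LOOP;
    COMMIT;
  END;
  /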

Then I tried loading the CLOB with the same value in every row, for 65000 rows: about 1K of character data, with a chunk size of 8K. I know it's redundant, but what the heck, I don't have 9i loaded on that system to try a different block size. Timing this, it took 20 minutes to load the 65K rows! No logging. CPU and disk flat out, which I found surprising. Then again, it's Windows...
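In case anyone wants to reproduce it, the second test was essentially this (sketch only; the table name and the filler value are made up):

  DECLARE
    v_txt VARCHAR2(1024) := RPAD('x', 1024, 'x');  -- ~1K of character data
    v_lob CLOB;
  BEGIN
    FOR i IN 1 .. 65000 LOOP
      INSERT INTO clob_test (id, payload)
      VALUES (i, EMPTY_CLOB())
      RETURNING payload INTO v_lob;
      DBMS_LOB.WRITEAPPEND(v_lob, LENGTH(v_txt), v_txt);
    END LOOP;
    COMMIT;
  END;
  /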

I'd say there is something very strange going on anyway; it shouldn't have taken that long. Total size of the datafile holding the lob data was around 550MB, which tallies with 8K chunks times 65K rows (a shade over 500MB) plus some overhead.

Now, I know it takes a lot less than that to load a normal table of the same size (550MB) in that database, so something weird is going on with CLOB data. I can't believe it's all down to the lobindex, but you never know. I'll do some more digging/comparing tomorrow.
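If anyone wants to check the split on their side, this is how I'd compare the LOB segment against the lobindex (segment names are system-generated, hence going through USER_LOBS; 'CLOB_TEST' is the made-up table from above):

  SELECT s.segment_name, s.segment_type, ROUND(s.bytes/1024/1024) AS mb
  FROM   user_segments s
  WHERE  s.segment_name IN (
           SELECT segment_name FROM user_lobs WHERE table_name = 'CLOB_TEST'
           UNION ALL
           SELECT index_name   FROM user_lobs WHERE table_name = 'CLOB_TEST');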

> > Compare CPU and I/O activity across the system for both. That
> > should tell you for sure if the prob is the db handling of lob's
> > or the nfs.
> I'll try it on a development box.

Please do. Let us know what you get.
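The cheap-and-cheerful way to do that comparison, if it helps: snapshot a handful of V$SYSSTAT counters before and after each run and diff the values (needs access to the V$ views, of course):

  SELECT name, value
  FROM   v$sysstat
  WHERE  name IN ('CPU used by this session',
                  'physical reads',
                  'physical writes',
                  'redo size');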

> I looked at Metalink, found nothing in the bug DB. We already applied
> the latest patch.

Ah well, there is always 10g... :D

--
Cheers
Nuno Souto
wizofoz2k_at_yahoo.com.au.nospam
Received on Thu Aug 07 2003 - 06:19:35 CDT
