Re: resizing/moving datafiles for performance ??

From: Svend Jensen <svend.jensen_at_it.dk>
Date: Fri, 19 Jul 2002 19:16:19 +0200
Message-ID: <3D384963.6000807_at_it.dk>


Prem J Khanna wrote:

> Guys,
>
> the DB is on 8.1.6 / Win2K.
>
> the entire DB (33 tables and 110 indexes) is in the USERS tablespace.
> the USERS tablespace has only one datafile USERS.DAT of size 12GB.
>
> it's on 8 * 10 GB HDD with RAID 5.
>
> we are facing performance bottleneck now.
>
> V$SESSION_WAIT shows many rows for DB FILE SCATTERED READ and DB FILE
> SEQUENTIAL READ.
> all these events wait for USERS.DAT to be read.
> it's a read-heavy OLTP site.
>
> So, will moving the table and index data to different tablespaces
> with relatively smaller datafiles ( say 1 GB ) improve performance ?
> OR
> since it is on RAID 5, will moving make any difference?
>
> 10-20 MB of data gets added to the DB daily.
> if resizing/moving improves performance,what should be the size of my
> datafile ?
>
> kindly give me a solution guys.
> TIA.
>

Hi,
you make it very hard to give good advice, because of a severe lack of information.
1) How many sessions show db file waits in a v$session_wait snapshot (select)? v$session_wait is highly dynamic - is the pattern in some way consistent? (A query sketch for 1)-5) follows after this list.)
2) How long are the waits? How many sessions are waiting right now, how many waited an unknown/short time, how many a known time... (waiting means they are still waiting while you look - the wait is not over yet, so its time is unknown). You can't tell whether such a wait is even measurable, i.e. in the range of < 2/100 second, but the law of large numbers gives an average wait time of roughly 0.005 to < 0.01 seconds for all those with unknown/short time, and that doesn't hurt anybody.

3) Is it mostly scattered or sequential reads with 'real timed' waits?
4) How many blocks do the scattered reads (full scans) actually read (P3)?
5) What is the value of db_file_multiblock_read_count & hash....?
6) The controller - does it have cache memory (with backup)? What is read-ahead set to? What are the RAID 5 stripe size, the OS block size and db_block_size?
7) How large is your buffer cache, and what is the overall memory usage?
8) Where are the redo log files and the archive logs?
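
A rough sketch of the kind of snapshot queries I mean (plain selects against the standard v$ views; run them a few times in a row to see whether the pattern holds):

  -- 1)+2) sessions on db file waits right now, and in which wait state
  select event, state, count(*) sessions, sum(wait_time) total_wait
    from v$session_wait
   where event in ('db file scattered read', 'db file sequential read')
   group by event, state;

  -- 4) for scattered reads: blocks per read (P3) and the file being hit (P1)
  select p1 file#, p3 blocks_per_read, count(*) waits
    from v$session_wait
   where event = 'db file scattered read'
   group by p1, p3;

  -- 5)+7) the relevant parameter values
  select name, value
    from v$parameter
   where name in ('db_file_multiblock_read_count', 'db_block_size', 'db_block_buffers');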

Moving files around on the very same spindles (disks) won't help you. By the way, I never create datafiles larger than 2 GB. It keeps the toolbox larger in case disaster strikes. If I remember correctly, DBWR won't work on files larger than 2 GB on 8.1.6, at least on some platforms (Solaris).
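
If you do want to get away from the single 12 GB file, one route (just a sketch - the tablespace name, file paths and table/index names below are made-up examples) is to add a tablespace built from 2 GB-ish files and move the segments over; note that ALTER TABLE ... MOVE leaves the table's indexes unusable, so they have to be rebuilt afterwards:

  create tablespace users_split
    datafile 'D:\ORADATA\USERS_SPLIT01.DBF' size 2000M,
             'D:\ORADATA\USERS_SPLIT02.DBF' size 2000M;

  -- repeat per table and index
  alter table some_table move tablespace users_split;
  alter index some_table_pk rebuild tablespace users_split;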

Some advice, based on the non-info given: if the waits are mainly scattered reads, tuning your statements and proper indexing can (easily) solve your problem. Isolating the full-scanned table(s) in the recycle pool (sketched below) can help, but it is still patching over poorly tuned statements, bad or missing indexes (and poor design?).
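
The recycle pool setup on 8i looks roughly like this (a sketch only; the buffer count and table name are examples - the pool is carved out of db_block_buffers in the init.ora, then the full-scanned table is assigned to it after a restart):

  # init.ora - reserve a slice of the buffer cache for the recycle pool
  buffer_pool_recycle = (buffers:2000, lru_latches:1)

  -- then point the full-scanned table at it
  alter table some_big_table storage (buffer_pool recycle);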

It might help to split the disks into 2*4 drives with RAID 5, if you have two or more channels on the controller, but not by much if full scans are the main reason for the heavy disk access (see above).

rgds
/Svend Jensen

Received on Fri Jul 19 2002 - 19:16:19 CEST
