Re: Performance Tips for Large Updates?

From: Robin Parker <parkerr_at_logica.com>
Date: 1996/03/05
Message-ID: <parkerr.132.0008CFA3_at_logica.com>


In article <4hfc5p$q95_at_arl-news-svc-1.compuserve.com> Joe_Zukowski%FEDERAL-MOGUL_at_notesgw.CompuServe.com (Joe Zukowski) writes:

>We are currently running a data warehousing application on Oracle 7.2.2
>with an HP K200 machine (using Unix 10.01 and 256M memory). So far,
>we've experienced very acceptable read query performance (both for
>large tablespace scans as well as multi-way joins).
 

>During the regular load process, however, it's necessary to run a
>couple of correlated update SQL statements between two large tables
>(source - one million rows, target 100,000 rows) and one large insert
>statement (1 million rows). The machine seems to really bog down
>(noticeable both in unix and oracle) during them. I would expect the
>processes to take a couple of hours, but instead they take many, many
>hours.
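
For reference, I'm assuming the correlated updates look roughly like the
sketch below (the table and column names are invented, not your schema); if
so, the subquery is re-evaluated for each of the 100,000 target rows, so it
is worth checking that the source table has an index on the join column:

   -- hypothetical shape of the correlated update (names are placeholders)
   UPDATE target t
      SET t.amount = (SELECT s.amount
                        FROM source s
                       WHERE s.key = t.key)
    WHERE EXISTS (SELECT 1
                    FROM source s
                   WHERE s.key = t.key);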
 

>Background -
>- We're not doing much unix paging or swapping to speak of.
>- Using LVM, not using raw disk. No raid or mirroring.
>- Archive Logging turned off.
>- I've added and sized an RBS segment just for the processes.
>- No other significant process going on in the machine or oracle at the
>time.
>- I've spread out data among the disks appropriately.
 

>First, any suggestions? Trace shows a lot of writing to RBS. Any way
>to turn this off?
 

>Second, any way to use async writes? The oracle parms, ASYNC_WRITE
>and USE_ASYNC_IO, are not accepted during database startup. Do I need
>to add an async driver at the unix level? Is it possible without going to
>raw disks? Any suggestions for bumping up disk buffers within unix?
 

>Other tricks?
 

>TIA for any suggestions or past experiences in this arena,
 

>Joe Zukowski (JZUKOWSK_at_FMO.COM)

Committing more often would reduce the size of the rollbacks and should increase speed.
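
As a rough sketch (table names, columns and the 5000-row batch size are all
placeholders, not your actual schema), the update could be driven from a
PL/SQL cursor loop so that a commit happens every few thousand rows rather
than in one huge transaction:

   DECLARE
      CURSOR c IS
         SELECT t.rowid rid, s.amount amt
           FROM target t, source s
          WHERE s.key = t.key;
      cnt NUMBER := 0;
   BEGIN
      FOR r IN c LOOP
         UPDATE target
            SET amount = r.amt
          WHERE rowid = r.rid;
         cnt := cnt + 1;
         IF MOD(cnt, 5000) = 0 THEN
            COMMIT;   -- keeps each transaction's rollback entries small
         END IF;
      END LOOP;
      COMMIT;          -- final partial batch
   END;
   /

The trade-off is a bit more commit overhead, and committing across the open
cursor raises the chance of ORA-01555 if the rollback segments are small, but
each individual transaction stays short and the rollback segment stops growing
without bound.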

Robin.
