Subject: Performance differences migrating to different server
Recently we migrated our DB to a hosting site and immediately got hit with huge differences in performance. We've been tuning for a while with only small success, and I wanted to pick a few brains in a spare moment:
The initial setup was Oracle 8.1.5.0.0 on a Solaris 2.6 Enterprise 4500: 8x366 MHz processors, 3 GB RAM, and a Photon array on Fibre Channel. The typical number of server processes was 250 at peak and 200 at slow times; dedicated servers were used. Due to the application, bind variables cannot be used, so shared SQL caching is virtually nullified.
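To illustrate what I mean, every statement arrives with its literals baked in, so two queries that differ only in a value each get parsed as brand-new SQL (table and column names here are made up for the example):

    SELECT order_id, status FROM orders WHERE customer_id = 1001;
    SELECT order_id, status FROM orders WHERE customer_id = 1002;
    -- versus the bind-variable form the application can't produce:
    SELECT order_id, status FROM orders WHERE customer_id = :cust_id;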
The new setup is Oracle 8.1.5.0.0 on a Solaris 2.6 Enterprise 5500: 10x366 MHz processors, 3.8 GB RAM, and a SCSI disk array. OS patch levels and /etc/system parameters are equal, and the Oracle init parameters are virtually identical except that the DBWR processes went from 6 to 1. We don't use parallel query, however.
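In init.ora terms, that one difference would read something like the following, assuming it's db_writer_processes that's in play rather than dbwr_io_slaves:

    # old box
    db_writer_processes = 6
    # new box
    db_writer_processes = 1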
Once we moved to this new box on a new network, load averages went from 5-7 at peak to 25-30. The DB statistics look great: no issues with the buffer cache, and the only latch problems are with the library cache, which we can't do much about due to the dynamic SQL hitting the DB. We also noticed that with our new client configuration we were getting about 50 more server processes.
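For reference, the latch numbers I'm going by come from something along these lines, run from SQL*Plus as a user with access to the V$ views:

    SELECT name, gets, misses, sleeps
    FROM   v$latch
    WHERE  name LIKE 'library cache%';

    -- and the library cache itself:
    SELECT namespace, gets, gethitratio, reloads
    FROM   v$librarycache;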
Admittedly, the SQL hitting the DB is horrid. However, the application code did not change, so I don't see how that alone could raise much of an issue. We do have a team tuning the code right now, but we still think there is something else.
Looking at the OS stats, there is very little I/O contention. I was initially worried about the change from fibre to SCSI, but monitoring I/O waits in Oracle and the OS I/O stats shows that the main contention is on CPU. Load averages are consistently high; looking at top, about 90% of CPU time is on user activity, and you almost never see idle CPU. Processes in the run queue are usually around 25-40, and context switching is very high as well.
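For anyone who wants the raw numbers, I'm watching them with the stock Solaris tools; the run queue is the 'r' column in vmstat and per-CPU context switches are the 'csw' column in mpstat:

    # run queue (r) and system-wide context switches (cs), 5-second samples
    vmstat 5
    # per-CPU context switches (csw) and involuntary switches (icsw)
    mpstat 5
    # historical run-queue size, if sar collection is enabled
    sar -q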
I have two ideas that I'm working on right now and was wondering what you all think of them, and whether you have any other suggestions...
Idea 1: More server processes are being created now, and they are the quick connections typical of an OLTP database. This is an e-commerce DB with a high query-to-write ratio. Since context switching and CPU utilization seem to be the issue, I was figuring that implementing MTS might help reduce the load of constantly creating and destroying a fair number of server processes. The number of server processes is usually in the high 200s; do you feel that this number of processes could benefit from MTS? Also, does anyone know offhand the maximum connections supported per dispatcher on Solaris 2.6?
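For concreteness, I'm picturing init.ora entries along these lines; the dispatcher and server counts are guesses on my part, not tested values:

    mts_dispatchers     = "(protocol=tcp)(dispatchers=3)"
    mts_max_dispatchers = 10
    mts_servers         = 25
    mts_max_servers     = 200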
Idea 2: I know that processor affinity is generally frowned upon, but from the reading I've done in my manuals and on Technet, my situation is one where processor affinity could be helpful. Do you think this is the case? And if so, how do I implement it? I don't want the background processes bound, and I've never had to do this before. We have 10 processors on the machine, so tuning the usage should help a lot.
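From the psrset(1M) man page, I gather it would look roughly like this; the CPU split and PIDs are placeholders, and I haven't tried any of it:

    # create a processor set on CPUs 2-9, leaving 0-1 for the OS and
    # the Oracle background processes (psrset prints the new set id)
    psrset -c 2 3 4 5 6 7 8 9
    # bind a dedicated server process to set 1 by PID
    psrset -b 1 <server_pid>
    # or bind a single process to one CPU without a processor set
    pbind -b 4 <server_pid>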
Those are my ideas. If you need any more detail or have any ideas, please let me know at lvcampbell_at_iname.com or post to the group.
Laine Campbell
Sr. DBA
Sent via Deja.com
http://www.deja.com/
Received on Tue Jan 02 2001 - 20:10:54 CST