
Re: Performance on Oracle instance running on Sun Solaris/Sparc

From: Tommy Svensson <tkman23_at_hotmail.com>
Date: 23 May 2001 15:29:28 -0700
Message-ID: <ebf7c6db.0105231429.4a848fff@posting.google.com>

"David Sisk" <davesisk_at_ipass.net> wrote in message news:<mKDO6.52751$Pp1.15474145_at_typhoon.southeast.rr.com>...
> Good detail.
>
> Question: Is one instance accessed over a LAN and the other accessed over a
> WAN? The network latency (not necessarily bandwidth) could be the issue.
> SQL*Net does quite a bit of talking back and forth.

No, all tests were done locally on both servers. In normal use, though, the application is accessed over the network with SQL*Net.

But since the performance problem is there even when we run locally on each server, I don't think it has anything to do with network performance.

So we don't have many options left to try or test.

The last option is to reinstall everything: the OS, the Oracle binaries, and to re-create the database, as was done in Canada.

Do you have any suggestions?

Kind regards

>
> Regards,
> Dave
>
> Tommy Svensson <tkman23_at_hotmail.com> wrote in message
> news:ebf7c6db.0105221323.3860f883_at_posting.google.com...
> > Hi!
> >
> > We have a huge and strange performance problem on one of our Oracle
> > instances running our ERP software on Sun Solaris servers.
> > We have done a lot of tuning and testing to find the cause of the
> > performance problem, but without any luck :-(
> >
> > We have two servers running our ERP software, one in Canada and one in
> > Norway, both using Oracle 7.3.4 on the Solaris 2.6 Sparc platform. The
> > hardware and OS setup is identical between Canada and Norway except for
> > the memory, which is 2Gb in Canada and only 1Gb in Norway. The database
> > size is 12Gb in Canada and only 4Gb in Norway.
> >
> > The performance of the Canada instance is good, but the performance in
> > Norway is terrible. A report that takes 1 minute in Canada can take up
> > to 2-3 hours on the Norway setup.
> >
> > Please note the hardware isn't high-end and could be improved by adding
> > additional CPUs, but the main "problem" is the difference in performance
> > between Canada and Norway. The setup could be better with an extra disk
> > set for the second redo log member of each group, more CPUs, even more
> > memory. But again, the problem is the difference in performance between
> > Canada and Norway, not how to build the ultimate setup.
> >
> > Canada setup
> > ------------
> > - Sun Ultra Enterprise 450 server with 1 248MHz CPU, 2Gb memory
> > - 9Gb disk for OS + Oracle binaries, redo logs
> > - RAID10 setup, 2*6 4Gb disks = 24Gb for database datafiles
> > - RAID5 setup, 3 18Gb disks = 36Gb for archive logs, redo logs
> > - Database size 12Gb
> > - Original SGA size: 200Mb
> >
> > Norway setup
> > ------------
> > - Sun Ultra Enterprise 450 server with 1 248MHz CPU, 1Gb memory
> > - 9Gb disk for OS + Oracle binaries, redo logs
> > - RAID10 setup, 2*6 4Gb disks = 24Gb for database datafiles
> > - RAID5 setup, 3 18Gb disks = 36Gb for archive logs, redo logs
> > - Database size 4Gb
> > - Original SGA size: 200Mb
> >
> > 1) Our first hunch was that the difference in memory was the reason for
> > the bad performance on our Norway server. But we ran a lot of huge
> > reports on the server while watching iostat and vmstat, and they didn't
> > show any sign of a paging/swapping problem.
> >
> >
> > -- vmstat in Norway during huge report --
> >
> > procs memory page disk faults cpu
> > r b w swap free re mf pi po fr de sr f0 m0 m3 m1 in sy cs us sy id
> > 1 0 0 503488 121416 0 204 0 0 0 0 0 0 0 0 0 121 663 75 99 1 0
> > 1 0 0 503360 121416 0 210 0 0 0 0 0 0 2 0 2 147 2614 87 97 3 0
> > 0 0 0 503488 121416 0 163 0 0 0 0 0 0 1 0 1 128 1700 74 99 1 0
> > 1 0 0 503488 121400 0 204 0 0 0 0 0 0 3 0 3 166 4378 97 98 2 0
> > 1 0 0 503488 121384 0 204 0 0 0 0 0 0 1 0 1 157 1522 85 98 2 0
> > 1 0 0 503360 121368 0 210 1 0 0 0 0 0 2 0 2 154 4014 92 97 3 0
> > 0 0 0 503488 121368 0 163 0 0 0 0 0 0 1 0 1 129 1605 76 98 2 0
> > 1 0 0 503488 121368 0 197 0 0 0 0 0 0 3 0 3 166 5599 96 98 2 0
> > 1 0 0 503488 121360 0 204 0 0 0 0 0 0 0 0 0 115 658 76 99 1 0
> > 1 0 0 503360 121352 0 210 0 0 0 0 0 0 3 0 3 166 6180 96 98 2 0
> > 0 0 0 503488 121344 0 163 0 0 0 0 0 0 1 0 1 136 2784 74 99 1 0
> > 1 0 0 503488 121344 0 197 0 0 0 0 0 0 3 0 3 177 2388 88 98 2 0
> > 1 0 0 503488 121344 0 204 0 0 0 0 0 0 0 0 0 118 653 78 100 0 0
> > 1 0 0 503360 121344 0 210 0 0 0 0 0 0 2 0 2 149 3710 88 97 3 0
> > 1 0 0 503488 121344 0 163 0 0 0 0 0 0 1 0 1 129 2648 74 99 1 0
> > 1 0 0 503488 121344 0 204 0 0 0 0 0 0 2 0 2 148 5683 87 97 3 0
> > 1 0 0 503488 121344 0 210 0 0 0 0 0 0 1 0 1 138 3462 88 99 1 0
> > 1 0 0 503360 121328 0 204 4 0 0 0 0 0 2 0 2 143 5340 92 98 2 0
> > 0 0 0 503488 121320 0 163 0 0 0 0 0 0 2 0 2 167 2354 78 99 1 0
> > 1 0 0 503488 121320 0 204 0 0 0 0 0 0 1 0 1 135 2325 81 99 1 0
> > 1 0 0 503488 121320 0 210 0 0 0 0 0 0 0 0 0 121 681 81 100 0 0
> > 1 0 0 503360 121320 0 204 0 0 0 0 0 0 1 0 1 145 4440 88 99 1 0
> > 0 0 0 503488 121312 0 163 0 0 0 0 0 0 1 0 1 146 4596 78 98 2 0
> > 1 0 0 503488 121304 0 204 0 0 0 0 0 0 1 0 1 132 2640 83 98 2 0
> > 1 0 0 503488 121304 0 210 0 0 0 0 0 0 1 0 1 144 2682 89 99 1 0
> > 1 0 0 503360 121304 0 204 0 0 0 0 0 0 3 0 3 179 4019 87 97 3 0
> > 0 0 0 503488 121296 0 183 0 0 0 0 0 0 1 0 1 137 1455 76 99 1 0
> > 1 0 0 503488 121296 0 232 0 6 6 0 0 0 1 0 1 138 2918 86 98 2 0
> > 0 0 0 503624 121296 0 170 0 0 0 0 0 0 0 0 0 123 984 72 99 1 0
> > 2 0 0 503360 121288 0 197 3 0 0 0 0 0 2 0 2 147 6037 87 96 4 0
> > 1 0 0 503488 121280 0 176 0 0 0 0 0 0 1 0 1 133 1336 73 99 1 0
> > 0 0 0 503312 121040 0 214 0 0 0 0 0 0 0 0 0 123 2206 95 99 1 0
> > 1 0 0 502840 120872 0 197 0 0 0 0 0 0 4 0 4 222 4298 89 98 2 0
> > 0 0 0 502968 120872 0 176 0 0 0 0 0 0 2 0 2 159 5390 86 99 1 0
> > 1 0 0 502968 120872 0 204 0 0 0 0 0 0 1 0 1 130 2181 75 99 1 0
> > 1 0 0 502840 120872 0 224 0 0 0 0 0 0 1 0 1 131 3176 83 99 1 0
> > 0 0 0 502968 120872 0 176 0 0 0 0 0 0 0 0 0 121 579 75 99 1 0
> > 1 0 0 502968 120872 0 204 0 0 0 0 0 0 1 0 1 148 3561 85 97 3 0
> > 1 0 0 502968 120864 0 210 0 0 0 0 0 0 0 0 0 124 1320 80 100 0 0
> > 0 0 0 502384 120320 0 271 0 0 0 0 0 0 5 0 5 237 7261 106 94 6 0
> > 1 0 0 502232 120176 0 204 3 0 0 0 0 0 1 0 1 128 1883 82 99 1 0
> > 1 0 0 502104 120168 0 190 0 0 0 0 0 0 2 0 2 151 4678 90 99 1 0
> > 0 0 0 502232 120168 0 183 0 0 0 0 0 0 0 0 0 118 605 76 99 1 0
> >
> >
> > -- end vmstat norway --
> >
> >
> > -- iostat --
> >
> > tty fd0 md0 md3 md10 cpu
> > tin tout kps tps serv kps tps serv kps tps serv kps tps serv us sy wt id
> > 0 8 0 0 0 10 1 9 0 0 0 11 1 13 99 1 0 0
> > 0 24 0 0 0 6 1 10 0 0 0 6 1 17 99 1 0 0
> > 0 8 0 0 0 11 1 13 0 0 0 11 1 19 96 4 0 0
> > 0 8 0 0 0 13 2 13 0 0 0 13 2 19 98 2 0 0
> > 0 8 0 0 0 10 1 10 0 0 0 10 1 15 98 2 0 0
> > 0 8 0 0 0 14 2 12 0 0 0 14 2 17 99 1 0 0
> > 0 8 0 0 0 8 1 11 0 0 0 8 1 12 99 1 0 0
> > 0 8 0 0 0 13 2 11 0 0 0 13 2 19 98 2 0 0
> > 0 8 0 0 0 14 2 9 0 0 0 14 2 13 98 2 0 0
> > 0 8 0 0 0 9 1 12 0 0 0 9 1 16 99 1 0 0
> > 0 8 0 0 0 7 1 11 0 0 0 7 1 15 99 1 0 0
> > 0 8 0 0 0 18 2 10 0 0 0 18 2 15 98 2 0 0
> > 0 8 0 0 0 10 1 8 0 0 0 10 1 11 99 1 0 0
> > 0 8 0 0 0 7 1 11 0 0 0 7 1 15 99 1 0 0
> > 0 8 0 0 0 14 2 11 0 0 0 14 2 16 97 3 0 0
> > 0 8 0 0 0 15 2 10 0 0 0 15 2 15 98 2 0 0
> > 0 8 0 0 0 6 1 10 0 0 0 6 1 15 99 1 0 0
> > 0 8 0 0 0 10 1 10 0 0 0 11 1 16 99 1 0 0
> > 0 8 0 0 0 22 3 10 0 0 0 22 3 13 98 2 0 0
> > 0 8 0 0 0 5 1 11 0 0 0 5 1 15 99 1 0 0
> > 0 24 0 0 0 9 1 13 0 0 0 9 1 20 98 2 0 0
> > 0 8 0 0 0 13 2 11 0 0 0 13 2 17 97 3 0 0
> > 0 8 0 0 0 14 2 10 0 0 0 14 2 13 98 2 0 0
> > 0 8 0 0 0 6 1 12 0 0 0 6 1 18 98 2 0 0
> > 0 8 0 0 0 8 1 11 0 0 0 8 1 21 99 1 0 0
> > 0 8 0 0 0 15 2 10 0 0 0 15 2 17 98 2 0 0
> > 0 8 0 0 0 12 2 12 0 0 0 12 2 15 98 2 0 0
> > 0 8 0 0 0 5 1 9 0 0 0 6 1 11 98 2 0 0
> > 0 8 0 0 0 14 2 11 0 0 0 14 2 22 98 2 0 0
> > 0 8 0 0 0 13 2 11 0 0 0 14 2 14 98 2 0 0
> > 0 8 0 0 0 10 1 11 0 0 0 10 1 15 98 2 0 0
> > 0 8 0 0 0 6 1 10 0 0 0 6 1 14 99 1 0 0
> > 0 8 0 0 0 16 2 10 0 0 0 16 2 16 96 4 0 0
> > 0 8 0 0 0 11 1 11 0 0 0 11 1 14 99 1 0 0
> > 0 8 0 0 0 7 1 11 0 0 0 7 1 15 99 1 0 0
> > 0 8 0 0 0 13 2 10 0 0 0 13 2 15 98 2 0 0
> > 0 8 0 0 0 14 2 10 0 0 0 14 2 17 98 2 0 0
> >
> > -- end iostat --
> >
> > As far as I can tell, even though the Norway server is using 700Mb of
> > swap space, memory should not be a performance problem. It doesn't do a
> > lot of paging according to vmstat (I have learned to watch the sr field,
> > not the pi/po fields).
> >
> > Iostat and vmstat also show that the disks aren't the problem; I/O wait
> > is near 0%, so there is no disk or I/O problem causing the bad
> > performance in Norway.
> >
> > We also note from vmstat, iostat and top that the Oracle process takes
> > nearly 99% of the CPU during the reports.
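> >
> > A quick cross-check from inside the database can show whether the load
> > really is logical reads and CPU rather than physical I/O. A minimal
> > sketch (run as a user that can see the V$ views, while a report runs):
> >
> >   select name, value
> >   from   v$sysstat
> >   where  name in ('consistent gets', 'db block gets',
> >                   'physical reads', 'CPU used by this session');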
> >
> > 2) The next step was to find out whether the Oracle instance was running
> > other huge reports while we did the performance test, but we didn't find
> > any jobs other than our own test jobs on the Oracle server.
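> >
> > To see whether anything else is active at the same time, a query along
> > these lines can be used (a sketch; run as a DBA user):
> >
> >   select sid, serial#, username, status, program
> >   from   v$session
> >   where  username is not null
> >   and    status = 'ACTIVE';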
> >
> > 3) The next step was to tune the buffer and SGA pool sizes on the Oracle
> > instance in Norway. We have tried different values, going up to 500Mb and
> > even down to half the size, but we didn't "win much", just a 1-5% gain
> > or loss.
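> >
> > For reference, the settings being changed can be read back with
> > something like this sketch (parameter names as in 7.3):
> >
> >   select name, value
> >   from   v$parameter
> >   where  name in ('db_block_size', 'db_block_buffers',
> >                   'shared_pool_size', 'sort_area_size');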
> >
> > 4) The next step was to do some tracing at the SQL level. We tried to
> > compare the setup of views and indexes between the Norway and Canada
> > installations, even though our ERP supplier says they should be the same.
> > We found some minor differences, but nothing that should cause the
> > performance problem.
> > We also tried creating the views and indexes used in Norway in the Canada
> > database, to see if we would then hit the same performance as in Norway,
> > but the performance stayed the same as with Canada's original views and
> > SQL queries.
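> >
> > The index definitions can be compared straight from the data dictionary
> > on both sides; a rough sketch, where APP is a placeholder for the ERP
> > schema owner:
> >
> >   select table_name, index_name, column_name, column_position
> >   from   dba_ind_columns
> >   where  table_owner = 'APP'
> >   order  by table_name, index_name, column_position;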
> >
> > 5) The next step was to run explain plan on the SQL queries to see if we
> > could find any difference between the Norway and Canada setups. But again
> > the explain output was the same, except in one case, and that was one of
> > the few SQL queries which run at the same speed in Norway and Canada.
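> >
> > For anyone who wants to repeat step 5, a minimal explain plan sketch,
> > assuming PLAN_TABLE has been created with utlxplan.sql (the statement_id
> > is made up):
> >
> >   explain plan set statement_id = 'NORWAY_TEST' for
> >   select * from payment_from_auth_customer;
> >
> >   select lpad(' ', 2*(level-1)) || operation || ' ' ||
> >          options || ' ' || object_name plan_step
> >   from   plan_table
> >   where  statement_id = 'NORWAY_TEST'
> >   connect by prior id = parent_id and statement_id = 'NORWAY_TEST'
> >   start  with id = 0;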
> >
> > 6) The next step was to turn on full sql_trace and use tkprof to compare
> > the same SQL queries between Norway and Canada. Here we found something
> > interesting.
> >
> > Below we attach some of the output; we only show the part of the trace
> > file which takes the longest time and shows the greatest difference
> > between the Canada setup and the Norway setup.
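> >
> > For reference, the tracing itself is just a sketch like this in the
> > session that runs the report:
> >
> >   alter session set sql_trace = true;   -- trace file goes to user_dump_dest
> >   -- ... run the report from this session ...
> >   alter session set sql_trace = false;
> >   -- then on the server: tkprof <tracefile>.trc report.prf
> >   -- (file names made up; timed_statistics must be on for cpu/elapsed)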
> >
> > --- Canada setup ---
> >
> > select *
> > from
> > payment_from_auth_customer
> >
> >
> > call     count       cpu    elapsed     disk      query  current      rows
> > -------  -----  --------  ---------  -------  ---------  -------  --------
> > Parse        1      0.11       0.12        1          0        1         0
> > Execute      1      0.00       0.00        0          0        0         0
> > Fetch     1149     11.35      15.70     3979     289910        0      1148
> > -------  -----  --------  ---------  -------  ---------  -------  --------
> > total     1151     11.46      15.82     3980     289910        1      1148
> >
> > --- end canada ---
> >
> > Looks good: great performance, at least compared to Norway.
> >
> > --- Norway setup ---
> > select *
> > from
> > payment_from_auth_customer
> >
> >
> > call     count       cpu    elapsed     disk      query  current      rows
> > -------  -----  --------  ---------  -------  ---------  -------  --------
> > Parse        1      0.03       0.03        0          0        0         0
> > Execute      1      0.00       0.00        0          0        0         0
> > Fetch      151  12483.17   12819.54        2  397742108        0       150
> > -------  -----  --------  ---------  -------  ---------  -------  --------
> > total      153  12483.20   12819.57        2  397742108        0       150
> >
> >
> > --- end norway ---
> >
> > Strange; it looks like a joke. Look at the query field: 397,742,108
> > blocks * 8192 bytes = 3,258,303,348,736 bytes, roughly 3,200Gb, almost
> > 1000 times our Norway database's size.
> > Is this right? We don't think so, but we can't understand why.
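> >
> > One note on reading that number: as far as we understand, the query
> > column in tkprof is consistent gets, i.e. buffer visits, so the same
> > block can be counted over and over when the plan keeps revisiting it.
> > It is not the amount of distinct data read, which is why it can exceed
> > the database size, but a number that large could point at a bad
> > execution plan. A sketch for checking whether the underlying tables even
> > have optimizer statistics (NUM_ROWS is null if a table has never been
> > analyzed; APP is a placeholder for the ERP schema owner):
> >
> >   select owner, table_name, num_rows, blocks
> >   from   dba_tables
> >   where  owner = 'APP'
> >   order  by table_name;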
> >
> > Another example:
> >
> > --- canada ---
> > SELECT SUM(CURR_AMOUNT) PLAN_CURR_AMOUNT
> >   FROM INVOICE_UTILITY_PAY_PLAN_PUB
> >  WHERE COMPANY = :b1 AND IDENTITY = :b2
> >    AND PARTY_TYPE_DB = 'SUPPLIER' AND INVOICE_ID = :b3
> >    AND AUTHORIZED = 'FALSE'
> >
> >
> > call     count       cpu    elapsed     disk      query  current      rows
> > -------  -----  --------  ---------  -------  ---------  -------  --------
> > Parse        1      0.02       0.02        0          0        0         0
> > Execute   3148      1.20       2.60        0          0        0         0
> > Fetch     3148      0.77       2.78      147      15749        0      3148
> > -------  -----  --------  ---------  -------  ---------  -------  --------
> > total     6297      1.99       5.40      147      15749        0      3148
> >
> > --- end canada ---
> > --- norway ---
> >
> > SELECT SUM(CURR_AMOUNT) PLAN_CURR_AMOUNT
> >   FROM INVOICE_UTILITY_PAY_PLAN_PUB
> >  WHERE COMPANY = :b1 AND IDENTITY = :b2
> >    AND PARTY_TYPE_DB = 'SUPPLIER' AND INVOICE_ID = :b3
> >    AND AUTHORIZED = 'FALSE'
> >
> >
> > call     count       cpu    elapsed     disk      query  current      rows
> > -------  -----  --------  ---------  -------  ---------  -------  --------
> > Parse        1      0.04       0.04        0          0        0         0
> > Execute   1477      0.73       0.72        0          0        0         0
> > Fetch     1477    738.65     760.88      269   29113399        0      1477
> > -------  -----  --------  ---------  -------  ---------  -------  --------
> > total     2955    739.42     761.64      269   29113399        0      1477
> >
> > --- end norway ---
> >
> > 7) We have also checked whether any tables or indexes have too many
> > extents in Norway compared to Canada, but we didn't find anything
> > suspicious.
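> >
> > The extent counts can be pulled with a sketch like this (APP again a
> > placeholder for the ERP schema owner):
> >
> >   select segment_name, segment_type, count(*) ext_count
> >   from   dba_extents
> >   where  owner = 'APP'
> >   group  by segment_name, segment_type
> >   order  by 3 desc;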
> >
> > 8) We have already defragmented the databases, both in Canada and Norway.
> >
> > 9) We have also looked in the log files and run prtdiag to find any
> > possible hardware problem, but everything looks okay.
> >
> > Any ideas? What can we test, or any hints to find the real problem? We
> > don't think the problem is 2Gb in Canada vs 1Gb in Norway, and because of
> > the bad financial status of our company we don't want to buy 1Gb of Sun
> > memory for $6000 just to test with. We don't think it would solve the
> > performance problem, and if we spend $6000 on it and it doesn't, we will
> > be the first of 1000 to be fired.
> >
> > Another problem on the Norway instance is that the datafile of our main
> > tablespace is nearly full; we only have 1-2Mb free in it. Perhaps that is
> > what causes the performance problem we have in Norway? But should it be
> > such a huge performance problem if one tablespace is nearly full?
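> >
> > The free space situation is easy to check with something like this
> > sketch; adding a datafile to that tablespace would also be a cheap way
> > to rule it out:
> >
> >   select tablespace_name,
> >          sum(bytes)/1024/1024 free_mb,
> >          max(bytes)/1024/1024 largest_free_mb
> >   from   dba_free_space
> >   group  by tablespace_name;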
> >
> > Perhaps the best and first option is to recreate the Norway instance, but
> > the problem existed in Norway even before the last defrag (done less than
> > a month ago), and we are pretty sure the tablespace had more free space
> > before the defrag.
> >
> > We have also read the alert.log file, but didn't find anything there.
> >
> > Please help us!
> >
> > Kind regards
> >
> > //Tommy
> >
> > North Sea Fish and Meat Company
Received on Wed May 23 2001 - 17:29:28 CDT
