Oracle FAQ Your Portal to the Oracle Knowledge Grid
HOME | ASK QUESTION | ADD INFO | SEARCH | E-MAIL US
 

Home -> Community -> Usenet -> c.d.o.server -> Re: Performance problems when large jobs are run

Re: Performance problems when large jobs are run

From: Dana Smith <dsmith_at_velcro.com>
Date: Sat, 21 Nov 1998 13:39:31 -0500
Message-ID: <BID52.41$PI4.278200@lwnws01.ne.mediaone.net>


Actually, the machine has 1.5G of RAM, so there should be plenty of room to bump up the overall SGA (starting with the shared pool.) When you say "tuned the SQL," are you referring to the actual SQL statements being executed? If so, we really have no control over the code (it is Oracle-developed code), and the problem we are having is not specific to particular code -- simply to large, somewhat complex routines.
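Before growing the SGA further, one quick sanity check (a sketch, assuming DBA access to the V$ views; exact column names vary slightly by Oracle release) is to see how much of the current shared pool is actually sitting unused:

```sql
-- How much of the shared pool is currently free?
-- A consistently large "free memory" figure suggests growing it
-- further won't help; a tiny one supports bumping shared_pool_size.
SELECT name, bytes
FROM   v$sgastat
WHERE  name = 'free memory';
```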

We will be capturing trace files next week for some of these routines while they are presumably running poorly. I have received a post from someone asking to see the results of the trace (and I'll probably post the significant snippets on this thread as well.)
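For anyone following along, the capture itself can be as simple as turning on SQL trace around the routine and then formatting the dump file with tkprof (a sketch; assumes timed_statistics is enabled in init.ora so the trace includes timings, and that you can locate the trace file under user_dump_dest):

```sql
-- Enable SQL trace for this session, run the slow routine, then disable.
ALTER SESSION SET sql_trace = TRUE;

-- ... launch the cost rollup / report here ...

ALTER SESSION SET sql_trace = FALSE;
```

The resulting .trc file can then be formatted on the server with something like `tkprof <tracefile> report.txt` before posting the interesting snippets.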

Since my original post, we have bumped the shared pool from 150M to 200M and I hope to see some results from this change next week also.
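For reference, that change amounts to an init.ora edit plus an instance restart (a sketch; the parameter is specified in bytes here, though K/M suffixes are accepted on some releases):

```
# init.ora -- shared pool raised from 150M to 200M
shared_pool_size = 209715200
```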

DSS

Van Messner wrote in message ...
>Hello Dana:
>
> You may already have checked these things, but two possible problems are:
>How much memory do you have? If the shared pool size is 150M and you don't
>have enough memory, you may be engaging in a lot of very slow page swapping.
>In that case you'd actually want to reduce the shared pool size.
>Have you tuned the SQL? Oracle recommends this as the first step in
>improving performance. Since you are getting bad results from simple
>queries, the problem may be in your SQL.
>
>Van
>
>Dana Smith wrote in message ...
>>We are an Oracle Financials/Mfg site running Oracle Apps 10.7/prod 16
>>(HP-UX 10.20) and we are experiencing ongoing performance problems when
>>large jobs are run. Some examples are:
>>
>>Cost rollups for large part volumes never complete due to poor performance
>>(they run for 12 hours and we terminate the job.) Smaller volume jobs
>>(fewer parts) run in a reasonable amount of time (usually 4-6 hours.) The
>>significant point here is that in virtually all cases, if the large job is
>>terminated and split into smaller jobs, the smaller jobs complete in a
>>reasonable amount of time. The relationship between job size and execution
>>time is not linear, and a performance cliff is hit at some point along
>>the way.
>>
>>The cost rollup scenario above is echoed in other areas where data is
>>inserted/updated in the database but we have also seen examples where jobs
>>have exhibited the same poor performance when large datasets have been
>>involved but NO UPDATES are made (read this as a report only -- no data
>>changes.) Again, in these cases, we have seen acceptable performance when
>>the jobs have been broken up into smaller pieces and executed separately.
>>
>>One additional observation we have made is that in the read-only scenario
>>above, performance can be dramatically improved by running the job with
>>virtually no other users on the system. This leads us to believe the job
>>is competing for a shared resource, although it is unclear what resource
>>is required.
>>
>>We currently have our shared pool size set at 150M and DB buffers at 25K.
>>I am inclined to bump up the shared pool size, but based on gut feel only.
>>If anyone has any insight into our scenario and can suggest potential
>>solutions, I'd appreciate it. I'd also like to know if there are any
>>guidelines on how to determine whether the shared pool size (and overall
>>SGA) are set correctly.
>>
>>Thanks for any help.
>>
>>
>>Mr. Dana S. Smith
>>
>>
>
>
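On the sizing question raised above, one common rule of thumb (a sketch, not gospel; requires DBA access to V$LIBRARYCACHE) is to watch library cache reloads relative to pins -- a persistently high reload rate is often read as a sign the shared pool is too small:

```sql
-- Library cache reloads as a fraction of pins. Values much above
-- roughly 1% are commonly taken to suggest an undersized shared pool.
SELECT SUM(pins)    AS pins,
       SUM(reloads) AS reloads,
       ROUND(SUM(reloads) / SUM(pins) * 100, 2) AS reload_pct
FROM   v$librarycache;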
Received on Sat Nov 21 1998 - 12:39:31 CST
