Oracle FAQ Your Portal to the Oracle Knowledge Grid
HOME | ASK QUESTION | ADD INFO | SEARCH | E-MAIL US
 


Re: Rule vs. Cost

From: attwoody <attwoody_at_my-deja.com>
Date: Thu, 11 Nov 1999 02:24:46 GMT
Message-ID: <80d9dd$tbo$1@nnrp1.deja.com>


In article <807c6u$52f$1_at_schbbs.mot.com>, "jawa" <Jim.Wadas_at_motorola.com> wrote:
> Rule-based optimization runs more efficiently than cost on our application.
> The supplier told us that the version 7 optimizer has problems. Is this
> true, or is it that their application was not written by folks knowledgeable
> of the hints and tricks designed to take full advantage of cost-based
> optimization? For instance, I ran a small benchmark report on the schema
> and found: (1) >40 tables with no index on them at all (even small tables
> without an index become the driving table), and (2) >100 foreign keys
> missing child index references (all having the potential to put locks on
> the parent table). Could these contribute to a cost-based scenario that
> would perform poorly, or is it true that version 7's optimizer has problems
> like the supplier alleges? Has anyone else experienced a similar case
> study?
>

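(As an aside on point (2) above: unindexed foreign keys can be spotted from the data dictionary. A rough sketch - simplified to single-column keys, with APPOWNER standing in for whatever your application schema is called - using the standard DBA_CONSTRAINTS, DBA_CONS_COLUMNS and DBA_IND_COLUMNS views:

```sql
-- Sketch: FK constraints whose leading column has no supporting
-- index on the child table. 'APPOWNER' is a placeholder schema name.
SELECT c.owner, c.table_name, c.constraint_name, cc.column_name
  FROM dba_constraints c, dba_cons_columns cc
 WHERE c.constraint_type = 'R'
   AND c.owner = 'APPOWNER'
   AND cc.owner = c.owner
   AND cc.constraint_name = c.constraint_name
   AND cc.position = 1
   AND NOT EXISTS (SELECT NULL
                     FROM dba_ind_columns ic
                    WHERE ic.table_owner = c.owner
                      AND ic.table_name = c.table_name
                      AND ic.column_name = cc.column_name
                      AND ic.column_position = 1);
```

Multi-column foreign keys would need the full column list compared, but this catches the common cases.)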
Greetings, Jim,

  I'm supporting a couple of databases on an IBM RS/6000 S70, with eight (8) 125MHz processors, 2GB RAM and about 72GB of SSA disks. The application is PeopleSoft 6.10.20, Fixed Assets and AP/GL, with AIX 4.3.1 and Oracle 7.3.4.4.

  I recently reorganized the databases, and changed the blocksize from 4K to 8K. I also changed the OPTIMIZER_MODE from CHOOSE to RULE, and adjusted parameters such as LOG_BUFFER, SORT_AREA_SIZE, etc.
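  Those changes go into the init.ora before bouncing the instance; a minimal sketch (the values here are illustrative placeholders, not my actual settings):

```
# init.ora fragment -- illustrative values only
optimizer_mode = RULE        # later changed back to CHOOSE
db_block_size  = 8192        # was 4096; changing it means rebuilding the database
log_buffer     = 1048576     # redo log buffer, in bytes
sort_area_size = 1048576     # per-sort memory, in bytes
```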

  The first week after the reorganization, Crystal Reports that had been running for an hour went down to two minutes; some of the nVision reports improved as well. Others still ran over an hour. I attributed this to the defragmentation the reorg took care of, the more efficient blocksize, and the rebuilding of the indexes.

  The second week, which began our month-end close, the system went to pieces - all eight (8) of the processors on the S70 were _pegged_ at 85-90%, and complaints rolled in about s-l-o-w response time. I kept getting pulled into meetings and asked what the problem was.

  I changed the OPTIMIZER_MODE back to CHOOSE, ran an ANALYZE TABLE... COMPUTE STATISTICS, and the next day everything was great! The users were happy and able to get most of their reports done - the nVision reports were still running in about an hour, and the Crystal Reports were taking about 2-10 minutes (with a lot more people on the system).
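  On Oracle 7 you can gather the statistics table by table, or for a whole schema via DBMS_UTILITY; a sketch, with APPOWNER and SOME_TABLE as placeholder names:

```sql
-- Per table ('some_table' is a placeholder):
ANALYZE TABLE appowner.some_table COMPUTE STATISTICS;

-- Or the whole schema in one shot:
EXECUTE DBMS_UTILITY.ANALYZE_SCHEMA('APPOWNER', 'COMPUTE');
```

COMPUTE reads every row, so on large tables ESTIMATE may be the more practical method.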

  The other thing I noticed was that when the OPTIMIZER_MODE was set to RULE, the individual buffer hit ratios were about 99-100%, overall about 98%. CONSISTENT_GETS were in the hundreds of millions. When I changed it back to CHOOSE, individual buffer hit ratios went down a bit, with most being between 97-100%, but the overall hit ratio went down to about 80% (it had never gone below 90% when the mode was RULE). CONSISTENT_GETS went down to the hundreds of thousands.
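  (The overall ratio I'm quoting is the usual V$SYSSTAT arithmetic, roughly:

```sql
-- Conventional buffer cache hit ratio from V$SYSSTAT
SELECT 1 - (phy.value / (db.value + con.value)) AS hit_ratio
  FROM v$sysstat db, v$sysstat con, v$sysstat phy
 WHERE db.name  = 'db block gets'
   AND con.name = 'consistent gets'
   AND phy.name = 'physical reads';
```

which is exactly the figure that a flood of consistent gets will flatter.)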

  It seems that RULE caused everything to be cached in memory, which inflated the buffer hit ratio (making me think everything was great).

  I would appreciate any insights on this - I can't ever get a straight answer from PeopleSoft when it comes to tuning Oracle.

  Thank you (I know this is rather long)

> --
> Jim Wadas
> Information Technology Solutions and Services (ITSS)
> Motorola Systems Solutions Group (SSG)
> Scottsdale, AZ 85257
> (480) 441-8196
> Jim.Wadas_at_motorola.com
>
>

Received on Wed Nov 10 1999 - 20:24:46 CST

