Steve,
Sure, just let me know when you get those "undocumented
parameters" working and I will be your Oracle slave for
life....;-). Boy, is this guy funny or what?
Gaja.
--- Steve Orr <sorr_at_arzoo.com> wrote:
> Gaja,
>
> > The calculation of system/application throughput is done by
> > metrics that you put down...
>
> I once supported a database/app which had a large and very
> compute-intensive set of nightly batch jobs (utility bill
> calculations and printing). With some tuning we were able to reduce
> the "batch run" from 6 hours to 2 hours. That was my "metric."
>
> You mean just knowing Oracle isn't enough? Now I have to learn the
> innards of the apps? Ugghhh. That's a lot of work. Can't we just
> solve the problem by throwing overwhelming computing power at it?
> What was that parameter? ...
> _max_db_buffers_to_available_mem = true?
> _make_SQL_run_faster = true ?
>
> Steve
>
>
> -----Original Message-----
> Krishna Vaidyanatha
> Sent: Tuesday, July 11, 2000 3:02 PM
> To: Multiple recipients of list ORACLE-L
>
>
> Steve,
>
> Comments included.
>
> Gaja.
>
> --- Steve Orr <sorr_at_arzoo.com> wrote:
> > > I personally prefer a system which has a 99% throughput factor
> > > to a 99% db buffer cache hit ratio.
> >
> > Hmmm... I know how to calculate the latter but how do you
> > calculate the former? :)
> >
> The calculation of system/application throughput is done by metrics
> that you put down to determine the number of transactions, queries,
> jobs, etc. that need to complete within a certain period of time
> (minute, hour, day, etc.). It is tailored to each individual
> application and is directly tied to the characteristics of the
> "workload" on the system within the said period of time.
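That "throughput factor" idea reduces to a tiny calculation: the fraction of the defined workload that completed within its agreed time window. A minimal sketch follows; the jobs and SLA windows are hypothetical, not figures from this thread.

```python
# Rough sketch of a "throughput factor": the share of a defined
# workload that finished within its agreed window. All numbers below
# are made up for illustration.

def throughput_factor(jobs):
    """jobs: list of (elapsed_minutes, sla_minutes) tuples."""
    on_time = sum(1 for elapsed, sla in jobs if elapsed <= sla)
    return on_time / len(jobs)

# A hypothetical nightly batch of four jobs, one of which blew its window:
batch = [(95, 120), (40, 60), (200, 180), (15, 30)]
print(round(throughput_factor(batch), 2))  # 0.75
```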
>
>
> > I have seen instances where tuning the SQL actually lowers the
> > cache hit ratio because the queries are no longer wasting time
> > repeatedly hitting on data that wasn't needed in the first place.
>
> BINGO...exactly my point!
>
> > Better "un-tune" that SQL so you can brag about achieving a
> > high cache hit ratio. ;-)
>
> Sounds like a "Job Security" issue....;-)
>
> >
> >
>
>
> > Steve Orr
> >
> >
> > -----Original Message-----
> > Krishna Vaidyanatha
> > Sent: Tuesday, July 11, 2000 12:20 PM
> > To: Multiple recipients of list ORACLE-L
> >
> >
> > Friends,
> >
> > Here we go again, the false correlation between db cache hit
> > ratio and its effect on performance. We just got done with a
> > similar thread a couple of weeks ago. Even without seeing
> > anything in Prakash's environment, I would guess that the current
> > size of the database buffer cache may be OK. A 79% cache hit
> > ratio is by no means "WAY too low". WAY too low compared to
> > what - 99%? Is there any guarantee that at a 99% cache hit ratio
> > the queries will be screaming? I strongly doubt it.
> >
> > There are way too many folks who pin all of their hopes for a
> > performance increase on throwing memory at the Oracle SGA,
> > specifically at the db buffer cache.
> >
> > Myth: Quite possibly one of the biggest performance myths out
> > there is, "If I do not run my Oracle database with a cache hit
> > ratio of 90% or above, I have bad performance."
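For anyone following along, the cache hit ratio being debated here is conventionally derived from three v$sysstat counters as 1 - physical reads / (db block gets + consistent gets). A minimal sketch of that arithmetic, with hypothetical counter values:

```python
# Classic db buffer cache hit ratio, as usually computed from the
# v$sysstat counters "physical reads", "db block gets" and
# "consistent gets". Counter values below are hypothetical.

def cache_hit_ratio(physical_reads, db_block_gets, consistent_gets):
    logical_reads = db_block_gets + consistent_gets
    return 1.0 - physical_reads / logical_reads

# A system doing 21,000 physical reads against 100,000 logical reads
# lands at the 79% figure being discussed in this thread:
print(round(cache_hit_ratio(21_000, 30_000, 70_000) * 100))  # 79
```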
> >
> > Performance needs to be measured by "system throughput" - how
> > much work has the system done for you. We have even dealt with
> > systems with 65-70% cache hit ratios where the performance of the
> > transactions and the queries was stellar. The incremental
> > increase in cache hit ratio that one achieves by configuring the
> > db buffer cache with more memory does not do much for system
> > performance.
> >
> > Hence the memory should probably be utilized for something else
> > (e.g. sort_area_size, hash_area_size, etc.). Like anything in
> > life, the law of diminishing returns will come into play sooner
> > or later. Further, the reference to a 99% cache hit ratio and the
> > assumption that performance is great as a result is not correct.
> > Not by a long shot.
> >
> > While a reasonable cache hit ratio is required to avoid an
> > inordinate amount of physical I/O, the bigger issue that needs to
> > be tackled is "What is causing such a high amount of physical
> > I/O?". 9.9 times out of 10, it is an application problem. Maybe
> > the application needs to perform a "full table scan" instead of
> > an "index scan" or vice versa. Maybe the index used is not
> > optimal - it does not have all the "required columns". A bad
> > cache hit ratio is a "symptom", while a bad query is "the
> > disease".
> >
> > Another point to keep in mind is that on systems that are
> > configured with mass storage devices (which have controller/disk
> > level caches), a physical I/O from Oracle's perspective still
> > does not mean "physical I/O from disk". It could mean "logical
> > I/O from the storage array's cache". There is more to performance
> > than the cache hit ratio of your db buffer cache.
> >
> > Fact: Within reason and hardware limitations, if the db buffer
> > cache is configured optimally, the cache hit ratio is only a very
> > small indicator/symptom of the bigger problem/disease.
> >
> > Remember the 80-20 rule. 80% or more of our problems with Oracle
> > are with bad SQL, and no matter how much memory you throw at
> > Oracle, if the disease is not cured, the symptoms will still
> > exist. A 99% cache hit ratio may not mean much when measuring
> > system performance and system throughput. I personally prefer a
> > system which has a 99% throughput factor to a 99% db buffer cache
> > hit ratio.
> >
> > Don't get me wrong - I am all for "logical I/O" instead of
> > "physical I/O", and I am fully aware of the 3 orders of magnitude
> > in performance difference between the two. I am all for giving
> > Oracle "enough" memory, but only up to a certain point. I'd
> > rather spend my time fixing, re-writing and tuning my queries.
> > The performance difference that I get from tuning a query is many
> > times higher than 3 orders of magnitude. I believe in curing
> > diseases, not symptoms. Let's go tune those awful queries.
> >
> > Best Regards,
> >
> > Gaja.
> >
> > --- Jack Silvey <JSilvey_at_XOL.com> wrote:
> > > Prakash,
> > >
> > > 79% is WAY too low. For an OLTP system (small number of rows
> > > returned) you should see the >90% range, and want to see the
> > > >95% range. >99% is ideal. Get after that DBA! It will help
> > > greatly across the board.
> > >
> > > Now, about the number of blocks and rows visited for the
> > > number of rows returned:
> > >
> > > The question we must ask ourselves is why does Oracle think it
> > > necessary to run through 1.4 million rows to get to the 156 it
> > > returns? Especially with a unique index?
> > >
> > > Gut instinct tells me that the optimizer is making incorrect
> > > choices based on the presence of the :b7 variable. While using
> > > bind variables will reduce parsing, it can also cause incorrect
> > > optimizer choices.
> > >
> > > I would bet that the optimizer is choosing all the rows where
> > > DBSTS = 'A' and then sorting through those rows to find out how
> > > many satisfy the SKU = :b7 clause.
> > >
> > > How many rows in the table have DBSTS = 'A'? If it is 1.4
> > > million, we have our answer.
> > >
> > > Jack
> > >
> > >
> > >
> > > -----Original Message-----
> > > Sent: Monday, July 10, 2000 10:46 AM
> > > To: Multiple recipients of list ORACLE-L
> > >
> > >
> > > Thanks to all who replied.
> > > Steve/Stephane, I tried rebuilding indexes and analyzing the
> > > schema after that; that didn't work. The new execution plan
> > > uses an 'index range scan' instead of the previous 'index
> > > unique scan'.
> > > Rajesh, the avg row len is 147 bytes and the total no of
> > > blocks used for this table is 9255. Also, the 'rows'
> > > information in the tkprof is there, but is slightly difficult
> > > to see because of the bad formatting.
> > > Jack, the buffer cache hit ratio is 79%; I think it's much
> > > less than it should be. I'll ask our DBA (I'm a developer) to
> > > increase db_block_buffers, but won't it still read SO much
> > > from the data block buffers, which is too much for a small no
> > > of rows (157)?
> > >
> > > Prakash
> > >
> > > The new output from tkprof follows:
> > >
> > >
> > >
> > >
> > > UPDATE EC_PRDITM SET DSCR=:b1 || ', ' || :b2
> > > ,PRDITMNAME=:b1 || ', ' || :b2
> > > || '; ' || :b5 || ', ' || :b6
> > > WHERE
> > > SKU = :b7 AND DBSTS = 'A'
> > >
> > >
> > > call     count       cpu    elapsed       disk      query    current       rows
> > > ------- ------  -------- ---------- ---------- ---------- ---------- ----------
> > > Parse        1      0.00       0.00          0          0          0          0
> > > Execute    156    546.82    1330.06    1437402    1443835        787        156
> > > Fetch        0      0.00       0.00          0          0          0          0
> > > ------- ------  -------- ---------- ---------- ---------- ---------- ----------
> > > total      157    546.82    1330.06    1437402    1443835        787        156
> > >
> > > Misses in library cache during parse: 1
> > > Misses in library cache during execute: 1
> > > Optimizer goal: CHOOSE
> > > Parsing user id: 31 (SDOSTLO20)   (recursive depth: 1)
> > >
> > > Rows     Execution Plan
> > > -------  ---------------------------------------------------
> > >       0  UPDATE STATEMENT   GOAL: CHOOSE
> > >       0   UPDATE OF 'EC_PRDITM'
> > >       0    INDEX   GOAL: ANALYZED (RANGE SCAN) OF 'EC_PRDITM_SKU_UK' (UNIQUE)
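A back-of-the-envelope check on the tkprof figures above is telling: 1,443,835 buffer gets over 156 executions is about 9,255 gets per UPDATE, which matches the 9,255 blocks Prakash reported for the table. That is consistent with each execution reading essentially the whole table, despite the index showing up in the plan. The arithmetic:

```python
# Sanity-check the tkprof figures quoted above: logical reads per
# execution vs. the table's block count (9255, per Prakash's reply).
buffer_gets = 1_443_835   # "query" column, Execute row
executions = 156
table_blocks = 9_255      # from Prakash's reply to Rajesh

gets_per_exec = buffer_gets / executions
print(round(gets_per_exec))                   # 9255
print(round(gets_per_exec) == table_blocks)   # True
```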
> > >
> > >
> > > ****
> > >
> > > --- Steve Boyd <pimaco_oracle_at_yahoo.com> wrote:
> > > > You may not have chained rows on your table, but is
> > > > your 'EC_PRDITM_SKU_UK' index stagnant? Run
> > > > the following commands:
> > > > ANALYZE INDEX EC_PRDITM_SKU_UK VALIDATE STRUCTURE;
> > > > SELECT * FROM INDEX_STATS;
> > > >
> > > > If the DEL_LF_ROWS value is high, then you probably
> > > > need to rebuild your index. Of course, I am
> > > > assuming that the EC_PRDITM_SKU_UK index is on the
> > > > SKU column. If not, then try creating an index
> > > > on that column.
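A common rule of thumb here (a heuristic, not something stated in this thread) is to compare DEL_LF_ROWS against LF_ROWS from INDEX_STATS and consider a rebuild once deleted leaf rows exceed roughly 20%. A minimal sketch, with illustrative numbers:

```python
# Heuristic check on INDEX_STATS output: flag an index for rebuild
# when deleted leaf rows (DEL_LF_ROWS) exceed ~20% of leaf rows
# (LF_ROWS). The 20% threshold and sample numbers are illustrative.

def needs_rebuild(lf_rows, del_lf_rows, threshold=0.20):
    if lf_rows == 0:
        return False
    return del_lf_rows / lf_rows > threshold

print(needs_rebuild(lf_rows=1_000_000, del_lf_rows=350_000))  # True
print(needs_rebuild(lf_rows=1_000_000, del_lf_rows=50_000))   # False
```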
> > > >
> > > > --- PK J <pkj_01_at_yahoo.com> wrote:
> > > > > Hi All,
> > > > >
> > > > > I have some queries which take too long to execute,
> > > > > perform too many disk reads and affect a very small no of
> > > > > rows. I have analyzed the table and indexes and there are
> > > > > no chained rows for this table. Explain plan says that the
> > > > > cost of the queries is 3/2 etc. and I don't know if they
> > > > > could be optimized any more. Please could anyone tell me
> > > > > what could be wrong?
> > > > >
> > > > > The output of TKPROF follows (sorry for the bad
> > > > > formatting):
> > > > >
> > > > > TIA,
> > > > >
> > > > > Prakash
> > > > >
> > > > >
> > > > > UPDATE EC_PRDITM SET DSCR=:b1 || ', ' || :b2
> > > > > ,PRDITMNAME=:b1 || ', ' || :b2
> > > > > || '; ' || :b5 || ', ' || :b6
> > > > > WHERE
> > > > > SKU = :b7 AND DBSTS = 'A'
> > > > >
> > > > >
> > > > > call     count       cpu    elapsed       disk      query    current       rows
> > > > > ------- ------  -------- ---------- ---------- ---------- ---------- ----------
> > > > > Parse        1      0.00       0.00          0          0          0          0
> > > > > Execute    157    475.75     508.74    1441954    1453090        793        157
> > > > > Fetch        0      0.00       0.00          0          0          0          0
> > > > > ------- ------  -------- ---------- ---------- ---------- ---------- ----------
> > > > > total      158    475.75     508.74    1441954    1453090        793        157
> > > > >
> > > > > Misses in library cache during parse: 1
> > > > > Misses in library cache during execute: 1
> > > > > Optimizer goal: CHOOSE
> > > > > Parsing user id: 31 (SDOSTLO20)   (recursive depth: 1)
> > > > >
> > > > > Rows     Execution Plan
> > > > > -------  ---------------------------------------------------
> > > > >       0  UPDATE STATEMENT   GOAL: CHOOSE
> > > > >       0   UPDATE OF 'EC_PRDITM'
> > > > >       0    INDEX   GOAL: ANALYZED (UNIQUE SCAN) OF 'EC_PRDITM_SKU_UK' (UNIQUE)
> > > > >
> > > > >
> > > > > ****
> > > > >
> > > > >
> > > > > SELECT PRDITMID
> > > > > FROM
> > > > > EC_PRDITM WHERE SKU = :b1 AND DBSTS = 'A'
> > > > >
> > > > >
> > > > > call     count       cpu    elapsed       disk      query    current       rows
> > > > > ------- ------  -------- ---------- ---------- ---------- ---------- ----------
> > > > > Parse        1      0.00       0.00          0          0          0          0
> > > > > Execute    157      0.01       0.01          0          0          0        157
> > > > > Fetch      157    373.98     393.26    1441790    1453035        471        157
> > > > > ------- ------  -------- ---------- ---------- ---------- ---------- ----------
> > > > > total      315    373.99     393.27    1441790    1453035        471        314
> > > > >
> > > > > Misses in library cache during parse: 1
> > > > > Optimizer goal: CHOOSE
> > > > > Parsing user id: 31 (SDOSTLO20)   (recursive depth: 1)
> > > > >
> > > > > Rows     Execution Plan
> > > > > -------  ---------------------------------------------------
> > > > >       0  SELECT STATEMENT   GOAL: CHOOSE
> > > > >       0   TABLE ACCESS   GOAL: ANALYZED (BY INDEX ROWID) OF 'EC_PRDITM'
> > > > >       0    INDEX   GOAL: ANALYZED (UNIQUE SCAN) OF 'EC_PRDITM_SKU_UK' (UNIQUE)
> > > > >
> > > > >
> > > >
> > >
> >
>
>
> --
> Author: Steve Orr
> INET: sorr_at_arzoo.com
>
> Fat City Network Services -- (858) 538-5051  FAX: (858) 538-5051
> San Diego, California     -- Public Internet access / Mailing Lists
>
> To REMOVE yourself from this mailing list, send an E-Mail message
> to: ListGuru_at_fatcity.com (note EXACT spelling of 'ListGuru') and in
> the message BODY, include a line containing: UNSUB ORACLE-L
Received on Tue Jul 11 2000 - 18:06:00 CDT