Home -> Community -> Usenet -> c.d.o.server -> Re: Another Oracle "Myth"?
> You probably subscribe to the Burleson school of 'tune by hit ratios'.
> Millsap is advocating you should tune the SQL rather than just
> throwing memory at the problem.
Hi Sybrand,
No, not many people tune exclusively by hit ratios anymore. Incidentally, neither does Burleson (if you read anything he wrote after 1994)! For example, this Burleson wait event tuning article reaches conclusions remarkably similar to Millsap's:
http://www.dbazine.com/burleson8.html
Anyway, even Millsap agrees that the DBHR (data buffer hit ratio) is not totally useless, just useless as a sole metric of performance. If we assume well-tuned SQL with a recycle pool for large full table scans, then the more RAM, the less overall physical I/O.
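For what it's worth, the ratio everyone is arguing about is trivial to compute; here is a minimal sketch of the classic formula against v$sysstat (offered only to show what the number measures, not as an endorsement of it as a tuning target):

```sql
-- Classic data buffer hit ratio:
--   DBHR = 1 - (physical reads / (db block gets + consistent gets))
-- A sketch only; statistic names are as they appear in v$sysstat.
SELECT ROUND(
         (1 - phys.value / (db.value + cons.value)) * 100, 2
       ) AS buffer_hit_ratio_pct
FROM   v$sysstat phys, v$sysstat db, v$sysstat cons
WHERE  phys.name = 'physical reads'
AND    db.name   = 'db block gets'
AND    cons.name = 'consistent gets';
```

Note that these counters are cumulative since instance startup, so a single snapshot blends all workloads together, which is exactly why a ratio alone tells you so little.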
If the data buffer hit ratio is useless, then why is Oracle using the v$db_cache_advice view as a component of the 10g self-tuning memory?
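To be concrete about what v$db_cache_advice provides: it predicts physical reads at a range of candidate cache sizes. A sketch (assumes the db_cache_advice parameter is ON, its default in 9i and later):

```sql
-- Predicted physical I/O at candidate DEFAULT-pool cache sizes.
-- Requires db_cache_advice = ON so Oracle collects the estimates.
SELECT size_for_estimate           AS cache_mb,
       estd_physical_read_factor,
       estd_physical_reads
FROM   v$db_cache_advice
WHERE  name = 'DEFAULT'
AND    block_size = (SELECT value FROM v$parameter
                     WHERE  name = 'db_block_size')
ORDER  BY size_for_estimate;
```

The point being that Oracle itself models the marginal benefit of more buffer cache, which is a more defensible version of the same "more RAM, less physical I/O" idea.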
Is Oracle on the wrong track with 10g Automatic Memory Management?
Again, my problem was with this statement:
"A hit ratio in excess of 99% often indicates the existence of extremely inefficient SQL"
I was simply hoping that someone could explain why a stellar BHR often indicates poorly optimized SQL. Can someone explain the "OFTEN" part here?
Received on Fri Nov 21 2003 - 06:59:40 CST