RE: Data Dictionary Hit Ratio - myth or fact?
Date: Tue, 18 Mar 2008 08:00:58 -0400
Yes, it is definitely either a myth or a fact.
"Data Dictionary Hit Ratio should be > 90%. Check Oracle Memory assignment."
An appropriate response might be to take a look at the total time spent parsing. That establishes a ceiling on the possible benefit of "fixing" your variance from the dashboard-light advice, and tells you whether it is even worth a look.
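As a starting point, something like the following (a sketch, assuming a 10g-or-later instance where V$SYS_TIME_MODEL is available and you have SELECT privilege on it) shows how much of total DB time is parse time since instance startup:

```sql
-- Sketch: compare elapsed parse time to total DB time.
-- VALUE is reported in microseconds; figures are cumulative
-- since instance startup, so interpret accordingly.
SELECT stat_name,
       ROUND(value / 1e6, 2) AS seconds
FROM   v$sys_time_model
WHERE  stat_name IN ('DB time',
                     'parse time elapsed',
                     'hard parse elapsed time');
```

If parse time is a trivial fraction of DB time, no amount of dictionary cache "fixing" can buy back more than that fraction.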
Of course the fraction of parsing spent on recursive dictionary lookups is in turn a still smaller ceiling on the benefit. Oracle should be dynamically tuning the space that holds this stuff, so if the ratio is low you may in fact be thrashing through a continuous stream of references to entirely new database objects. If you are, that might be something you can change.

As with all such things, the most productive course is to make sure you are not wasting time where users are waiting. The most likely places to find waste are the largest chunks of time; but if you have already honed the large consumers of time to eliminate waste (including "it didn't need to be executed at all"), then smaller chunks, such as recursive dictionary references missing the cache, might be reasonable targets for improvement. When you reach the point where the remaining time you could possibly save has no significance, you have to decide whether the value of additional potential learning is worth the effort, or whether you are afflicted by CTD (Compulsive Tuning Disorder).
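For completeness, the ratio the advice refers to is conventionally computed from V$ROWCACHE. A sketch (again cumulative since instance startup, so a freshly started instance will look artificially bad):

```sql
-- Sketch: overall dictionary cache hit ratio since instance start.
-- GETS = lookup requests; GETMISSES = requests that missed the cache.
SELECT ROUND(100 * (1 - SUM(getmisses) / NULLIF(SUM(gets), 0)), 2)
         AS dc_hit_ratio_pct
FROM   v$rowcache;

-- Per-cache breakdown, if you want to see where the misses are:
SELECT parameter, gets, getmisses
FROM   v$rowcache
ORDER  BY getmisses DESC;
```

But remember the point above: the number only matters to the extent that recursive dictionary lookups account for a meaningful slice of time users actually wait on.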
Your original post seems to indicate you will probably embark on a reasonable course of action. Your ultimate response, whether you change this ratio or not, can be: "That statement indicates we should check Oracle Memory assignment. We have done that." And then continue to make sure you minimize meaningful waste in your system.
PS: Back when you had to hand tune each little bit of the SGA where Oracle stashed things, and the default allocations were sized on the assumption that you would have to argue for even a few megabytes of memory for Oracle, it was common to walk into a situation so memory bound that preemptive repairs to many "init" parameters were required before you could even tell whether there were other problems. The useful purpose of all those "cache ratio should be" statements was to help Oracle administrators lobby for sufficient memory. Some people even wrote "diagnostic" test scripts to demonstrate that more memory was needed for Oracle. If the ratios were the same or worse after I had gotten enough memory to create a reasonably running instance, why would I care? The point is always to save wasted time and improve service time within cost constraints.

Received on Tue Mar 18 2008 - 07:00:58 CDT