Back in March of last year I wrote an article on five frequently misused metrics in Oracle: These Aren’t the Metrics You’re Looking For.
To sum up, my five picks for the most misused metrics were:
- db file scattered read – Scattered reads aren’t always full table scans, and they’re certainly not always bad.
- Parse to Execute Ratio – This is not a metric that shows how often you’re hard parsing, no matter how many times you may have read otherwise.
- Buffer Hit Ratio – I want to love this metric, I really do. But it’s an advisory one at best, horribly misleading at worst.
- CPU % – You license Oracle by CPU. You should probably make sure you’re making the most of your processing power, not trying to reduce it.
- Cost – No, not money. Optimizer cost. Oracle’s optimizer might be cost based, but you are not. Tune for time and resources, not Oracle’s own internal numbers.
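To see why the parse-to-execute ratio tells you nothing about hard parsing, here is a minimal sketch in Python. The statistic names mirror Oracle's V$SYSSTAT counters, but the numbers themselves are invented purely for illustration:

```python
# Hypothetical V$SYSSTAT-style counters (numbers are made up for illustration).
stats = {
    "parse count (total)": 9_000,   # soft parses + hard parses combined
    "parse count (hard)": 50,       # actual hard parses
    "execute count": 10_000,
}

# The classic "execute to parse" ratio from AWR-style reports:
# the percentage of executions that did NOT require a parse call.
parse_to_execute = 100 * (1 - stats["parse count (total)"] / stats["execute count"])

# What people usually *think* they're measuring: how often a parse was a hard parse.
hard_parse_pct = 100 * stats["parse count (hard)"] / stats["parse count (total)"]

print(f"parse-to-execute ratio: {parse_to_execute:.1f}%")  # 10.0% - looks alarming
print(f"hard parse percentage:  {hard_parse_pct:.1f}%")    # 0.6% - hard parsing is rare
```

With these numbers the ratio looks terrible, yet almost none of those parses are hard parses; the application is simply soft parsing on every execution, which is a very different (and far less expensive) problem.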
Version after version, day after day, these don’t change much.
Anyway, for those who aren’t aware, I wanted to mention that I created a slideshow based on that post for RMOUG 2014 (which, sadly, I was unable to attend at the last moment). Have a look and let me know what you think!

Metric Abuse: Frequently Misused Metrics in Oracle
Have you ever committed metric abuse? Gone on a performance tuning snipe hunt? Spent time tuning something that, in the end, didn’t even really have an impact? I’d love to hear your horror stories.
Also while you’re at it, have a look at the Sin of Band-Aids, and what temporary tuning fixes can do to a once stable environment.
And lastly, follow #datachat on Twitter and keep an eye out for an update from Confio on today’s chat on Performance Tuning with host Kyle Hailey!
We’ve all got problems. More to the point, every IT department or team has problems of some kind. It’s why we hire consultants, buy products, start long and arduous journeys into the great unknown depths of root cause analysis, and so on.
What fascinates me is the degree to which we come to identify with our problems. When I go into an environment to deliver recommendations, the conversation usually takes a familiar turn.
The reality, of course, is that there are always going to be issues. Perhaps budgets are tight and new servers are rarely, if ever, an option. Or a QA refresh takes days, so we all know we’ll never fit one into a project timeline. Or we never had the chance to set up security properly on a new application, so the developers all have DBA access and there’s nothing we can do about it. The list goes on. What’s interesting (and slightly amusing) is that these problems are announced with almost a sense of pride, as though the manager or IT administrator were showing off a new big-screen TV or a manicured lawn rather than a debilitating architectural deficiency.
In short, through constant business growth and changing requirements, while languishing under the limitations of budget, infrastructure, time, and staff, many IT professionals have come to accept that the answer will always be “no”, the system will never be perfect, the problems will never be fixed, and there is absolutely nothing they can do about it. After enough of this torment, the IT professional not only accepts these limitations but embraces them, parrots them, defends them, and revels in them during meetings and water cooler sessions.
Why? Perhaps it is like Stockholm Syndrome, where Wikipedia explains “One commonly used hypothesis to explain the effect of Stockholm syndrome is based on Freudian theory. It suggests that the bonding is the individual’s response to trauma in becoming a victim. Identifying with the aggressor is one way that the ego defends itself. When a victim believes the same values as the aggressor, they cease to be a threat.” Or perhaps Romans 5:3 has the right of it: “Not only so, but we also glory in our sufferings, because we know that suffering produces perseverance.”
Whatever the case, it is a breath of fresh air when I work with companies that are more focused on finding solutions than on reveling in problems. Everyone says they want to find solutions, of course, but it’s rare that folks are actually willing to put in the time and thought to do so rather than come up with reasons why it can’t be done.
So what can we do? Deploy outside the box. Figure out the best path to solve performance problems. Stop taking things like security concerns for granted. Keep learning, because sometimes the “fix” might seem like an erroneous approach until you understand it better. And always, always, always look for solutions instead of focusing on problems.
Now with that being said, enjoy your weekend and forget about those problems for a bit! Unless you’re on call of course.