Overview of the Database Performance-Improvement Process
The performance-improvement process is an iterative, long-term approach to monitoring and tuning various aspects of a database. Depending on the results of monitoring, the DBA should adjust the configuration of the database server and make changes to the applications that use it. But before embarking on the journey of tuning the database server, the application itself should be tuned to remove inefficient SQL code. So is there a method to the madness?
Performance monitoring and tuning decisions should be based on the kinds of applications that use the data and the patterns of data access. Different kinds of applications have different performance requirements.
Consider the following outline of the performance-improvement process as a guideline.
To develop a performance-improvement process:
- Define performance objectives.
- Establish performance indicators for the major constraints in the system.
- Develop and execute a performance monitoring plan.
- Continually analyze the results of monitoring to determine which resources require tuning.
- Make one adjustment at a time.
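The outline above can be sketched as a small decision loop. This is only an illustration, not a prescribed implementation: the indicator names, measurements and objectives below are invented placeholders for whatever performance indicators and targets a site actually defines.

```python
def next_adjustment(measurements, objectives):
    """Pick the single indicator that misses its objective by the widest
    margin -- reflecting the rule of making one adjustment at a time."""
    misses = {name: value - objectives[name]
              for name, value in measurements.items()
              if value > objectives[name]}
    if not misses:
        return None  # all performance objectives met
    return max(misses, key=misses.get)

# Hypothetical monitoring snapshot vs. agreed performance objectives.
snapshot = {"avg_query_ms": 950, "buffer_waits": 120, "log_sync_ms": 15}
targets  = {"avg_query_ms": 500, "buffer_waits": 100, "log_sync_ms": 20}
print(next_adjustment(snapshot, targets))  # the widest miss gets tuned first
```

Each pass through the loop ends with one adjustment, after which monitoring resumes and the snapshot is taken again.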
Database performance management depends on many factors, including the server platform, database configuration, number of users, query workload and type of database. Because of these interdependencies, it is essential to collect measurable performance indicators and perform ongoing performance analysis. Failure to set specific goals can result in unnecessary time spent tuning the system without significant gains. Management and user input are vital for setting realistic expectations for everyone involved in this exercise.
Numerous software packages are available on the market to benchmark the database. You could also use the traditional STATSPACK utility to collect a database performance report. Apart from analyzing the report yourself, it can be run through the online Oracle Performance Analyzer at www.oraperf.com. It is also a good idea to collect some statistics at the application level. Seek out the resource hogs in the system, such as the worst-performing queries in terms of the time they take to complete, the heaviest resource consumers, and so on.
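Hunting resource hogs usually comes down to ranking collected query statistics. The sketch below, with invented sample rows standing in for data that would really come from STATSPACK or application-level collection, ranks queries by total elapsed time rather than per-execution time, since a cheap query run thousands of times can cost more than one slow report.

```python
# Invented sample rows; in practice these come from STATSPACK or
# application-level statistics collection.
sample_stats = [
    {"sql_id": "q1", "elapsed_s": 12.4, "executions": 3},
    {"sql_id": "q2", "elapsed_s": 0.9,  "executions": 4000},
    {"sql_id": "q3", "elapsed_s": 48.0, "executions": 1},
]

def worst_queries(stats, top_n=2):
    """Rank candidate resource hogs by total elapsed time
    (per-execution elapsed time * number of executions)."""
    return sorted(stats,
                  key=lambda row: row["elapsed_s"] * row["executions"],
                  reverse=True)[:top_n]

for row in worst_queries(sample_stats):
    print(row["sql_id"])  # q2 first: 0.9s * 4000 runs dwarfs the others
```

The ranking key is the design choice worth noting: "worst" should mean worst in aggregate, not merely slowest per run.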
Once the goals are set and the database has been benchmarked, it is time to work on a plan. Non-intrusive monitoring tasks such as checking database file space and database extents, table statistics collection, and SQL monitoring can be automated using off-the-shelf automation technology or scheduled home-grown scripts. Database monitoring and daily performance metric collection are critical aspects of comprehensive database performance management. By taking control of such database issues and setting thresholds, automated event notification enables organizations to move from being reactive to being proactive. Corrective action can then be seen as advanced event-driven automation.
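The threshold-and-notify idea is simple enough to sketch in a few lines. The metric names and limits below are hypothetical; a real script would read them from the data dictionary and hand alerts to a paging or e-mail system.

```python
def check_thresholds(metrics, thresholds):
    """Return alert messages for any metric at or over its threshold --
    the basis for automated event notification."""
    alerts = []
    for name, value in metrics.items():
        limit = thresholds.get(name)
        if limit is not None and value >= limit:
            alerts.append(f"{name}: {value} >= {limit}")
    return alerts

# Hypothetical nightly snapshot: percent-full figures per tablespace.
snapshot = {"USERS_pct_full": 91, "TEMP_pct_full": 40}
limits   = {"USERS_pct_full": 85, "TEMP_pct_full": 95}
for alert in check_thresholds(snapshot, limits):
    print("NOTIFY:", alert)  # in a real system, page or e-mail the DBA
```

Scheduled daily, even a script this small turns a reactive "why is the tablespace full?" call into a proactive warning.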
Database server tuning is not the next logical step. No matter how many times it is stressed, it does not hurt to say it again: tune the SQL. In most cases, the bad performance the user is experiencing is simply the result of a bad plan or a missing index. The payoff in catching these offending queries is huge. Start by revisiting the design of the query, the business requirement and the assumptions behind it. Verify that the query being tuned is still appropriate for the business needs. It is of paramount importance that due time be spent on application tuning and SQL tuning before moving on to database tuning. Many white papers have argued that approximately 80% of all Oracle system performance problems are resolved with optimal SQL. SQL Trace, EXPLAIN PLAN and TKPROF are all useful settings and tools that aid SQL tuning. Third-party tools like SQL Lab are also very handy; one useful feature is the quick comparison they offer between the various execution plans to identify the best course of action.
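The plan-comparison idea boils down to putting candidate plans side by side and picking the cheapest. The sketch below is illustrative only: the plan names, costs and cardinalities are invented, whereas real figures would come from EXPLAIN PLAN output or a tool such as SQL Lab.

```python
# Invented candidate plans for one query; real costs would come from
# EXPLAIN PLAN output, not hard-coded values.
candidate_plans = {
    "full_table_scan":  {"cost": 4820, "cardinality": 100000},
    "index_range_scan": {"cost": 37,   "cardinality": 120},
    "index_full_scan":  {"cost": 410,  "cardinality": 100000},
}

def cheapest_plan(plans):
    """Pick the plan with the lowest optimizer cost estimate."""
    return min(plans, key=lambda name: plans[name]["cost"])

print(cheapest_plan(candidate_plans))  # the index range scan wins here
```

A gap as wide as the one above (cost 37 versus 4820) is the classic signature of a missing or unused index.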
Database server tuning should focus on the following: tuning memory, tuning disk I/O and tuning wait times. Adjust the size of the shared pool and the database buffers by looking at the hit ratios. A good practice is to pin frequently used objects in memory to prevent reloads. Datafiles need to be placed on the proper disks or filesystems to provide maximum throughput. Adhere to the Optimal Flexible Architecture, or come as close to it as possible, to prevent bottlenecks and resource contention. Symptoms of I/O problems include disk sorts and full table scans. Also watch for row-chaining issues and data fragmentation. Next, study the wait times, locks and latches. Collect all the SQL from the sessions causing the waits and eliminate it where possible. When tuning the database server, one traditional school of thought preaches changing only one feature at a time, even though more than one resource might require tuning. This helps ensure that the tuning efforts are producing the desired positive effects.
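The buffer-related hit ratio mentioned above is the classic calculation 1 - physical_reads / (db_block_gets + consistent_gets), using counters from the instance statistics. The numbers below are invented for illustration, and the ~90% guideline in the comment is a commonly cited rule of thumb, not a hard rule.

```python
def buffer_cache_hit_ratio(physical_reads, db_block_gets, consistent_gets):
    """Classic buffer cache hit ratio:
    1 - physical_reads / (db_block_gets + consistent_gets)."""
    logical_reads = db_block_gets + consistent_gets
    return 1.0 - physical_reads / logical_reads

# Invented counter values; real ones come from the instance statistics.
ratio = buffer_cache_hit_ratio(physical_reads=12_000,
                               db_block_gets=150_000,
                               consistent_gets=850_000)
print(f"{ratio:.1%}")  # a ratio well below ~90% often suggests resizing buffers
```

As with any single ratio, it should be read alongside wait statistics; a high hit ratio can coexist with serious latch or I/O contention.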
The final step in the performance plan is to get down to the OS level. Areas to focus on here are CPU, memory and I/O bottlenecks. On UNIX, sar, vmstat and iostat have become the most common tools for gaining insight into the operating system; other performance monitoring tools, such as the top utility and various X11-based tools, are also available. On Windows NT, the built-in Performance Monitor utility offers insight into the operational activities of the local system and the attached network.
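OS-level checks are often scripted by parsing the text these tools emit. As a hedged sketch, the snippet below pulls the CPU idle column out of vmstat-style output; the sample text is invented, and a real script would capture live output from vmstat itself.

```python
# Invented vmstat-style sample; real output comes from running vmstat.
sample_vmstat = """\
procs -----------memory---------- ---swap-- -----io---- -system-- ------cpu-----
 r  b   swpd   free   buff  cache   si   so    bi    bo   in   cs us sy id wa st
 8  0      0 102400  20480 409600    0    0    12     8  300  450 78 19  2  1  0
"""

def cpu_idle_pct(vmstat_text):
    """Pull the 'id' (CPU idle percent) column from the last sample line,
    locating it by the column header rather than a fixed position."""
    header, sample = vmstat_text.strip().splitlines()[-2:]
    return int(sample.split()[header.split().index("id")])

idle = cpu_idle_pct(sample_vmstat)
if idle < 10:
    print(f"CPU-bound: only {idle}% idle -- find the top consumers")
```

Locating the column by header name keeps the script working across vmstat variants that add or reorder columns.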
As the performance plan is implemented, it is important to rerun the benchmarks to make sure the effort is continually moving forward. At some point, either all the performance objectives will have been met, or the effort will have reached a point where performance can no longer be improved by tuning the database server and applications. Then upgrading the hardware becomes imperative. But that's a whole other story for another day.