Re: Estimating effectivity of performance improvement measures

From: Steve Howard <stevedhoward_at_gmail.com>
Date: Wed, 26 Nov 2008 07:59:08 -0800 (PST)
Message-ID: <ac3b9800-bea2-4840-91e5-14d60d779d0d@o2g2000yqd.googlegroups.com>


On Nov 26, 5:45 am, Robert Klemme <shortcut..._at_googlemail.com> wrote:

Hi Robert,

> My idea so far is this: on the test database recompile all indexes and
> maybe reorganize all tables to get all into a defined shape. Then
> look at the sum of the number of blocks all involved objects use up
> per solution implemented. Pick the one with the smallest number.

I actually think this is a bad idea unless you plan to do the same thing on production. When you say this is a "copy of production", is the test database a block-for-block copy of production, i.e., a storage clone or an RMAN restore? If so, I would test against that as it stands, once again, unless you plan on changing the indexes in production as well.
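If you do end up comparing segment sizes per solution anyway, something along these lines would give you the block counts. This is just a rough sketch; the owner and the object names are placeholders for whatever tables and indexes each solution actually touches:

   -- total blocks used by the objects involved in one solution
   -- (owner and segment names are placeholders)
   select segment_name, segment_type, sum(blocks) as blocks
     from dba_segments
    where owner = 'APP_OWNER'
      and segment_name in ('ORDERS', 'ORDERS_PK', 'ORDERS_HIST_IDX')
    group by segment_name, segment_type
    order by blocks desc;

Sum that per candidate solution and you have Robert's "smallest number of blocks" comparison, for whatever it is worth.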

For the physical side you can probably guesstimate the difference. If your test database gets 10ms per single block read and your production database gets 7ms, then you can extrapolate the time difference for the physical I/O.
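One quick way to get that per-database number (a sketch, assuming single block reads dominate and you are fine with the instance-wide average from V$SYSTEM_EVENT rather than an AWR interval):

   -- average single block read time; run on both test and production
   select event,
          total_waits,
          round(time_waited_micro / total_waits / 1000, 2) as avg_ms
     from v$system_event
    where event = 'db file sequential read'
      and total_waits > 0;

Compare the two averages and scale the physical I/O portion of your test timings by roughly that ratio (e.g. 7/10 in the numbers above).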

> This is even more the case as part of these tables involve historic
> data, i.e. multiple values with different time ranges of validity,
> which makes a certain amount of data dead - from _this_ query's point
> of view. So the way how this "dead wood" is distributed across
> objects may have a significant impact (for example when using
> timestamps as leading columns for an index vs. trailing columns or
> having them in the table only).

How much has this changed since you refreshed test? My goal would be to have test as close as possible to production, identify any remaining differences such as physical I/O access time, and account for them.

Regards,

Steve
