Re: Statistics during a complex load

From: Tom Anderson <twic_at_urchin.earth.li>
Date: Sun, 20 Feb 2011 13:42:16 +0000
Message-ID: <alpine.DEB.1.10.1102201331030.26532_at_urchin.earth.li>



On Fri, 18 Feb 2011, Robert Klemme wrote:

> On 16.02.2011 20:24, Tom Anderson wrote:
>
>> We have an application that stores all its information in a database
>> (Oracle 11, as it happens). Between the application and the database
>> sits a third-party object-relational mapper. And not just any sort of
>> object-relational mapper, but one that implements a versioned object
>> model - think of it as source control for objects. A fully operational
>> instance of the application has about 300 000 objects in the database
>> (according to one way of counting them, at least), corresponding to a
>> somewhat larger number of rows (less than a million, i'd guess), spread
>> over a few hundred tables.
>>
>> We like building fresh instances from scratch. We do it as part of our
>> build process, along with compiling the code and so on, to make sure
>> that we can always build a working system from the raw materials in CVS.
>> This process involves clearing out everything in the schema (dropping
>> every object and then purging the recyclebin), running all our DDL, and
>> then loading the 300 000 objects through the versioned mapper.
>>
>> This process is not as fast as we'd like it to be - the DDL is fast, but
>> the data loading takes something like 45 minutes.
>>
>> Let's say, for the sake of argument if nothing else, that we cannot
>> abandon building from scratch, or avoid the versioned mapper, or do
>> anything about the speed or behaviour of the mapper itself.
>
> Well, OR-Mappers are typically not very fast since they tend to handle every
> single instance individually unless there is an explicit batch mode.

True. I don't believe this mapper has a batch mode, BICBW.

>> What could we try to make the load faster?
>
> If your OR-Mapper has a means of verifying a database's data it may be
> quicker to data pump the data into the database and then check but I
> guess this is not what you want.

I'm afraid not. The source of the data is XML files, produced by hand or by some data-migration tools we've written to pull data in from other sources, and checked into source control. The mapper turns these files into objects, and then puts them in the database. The table layout changes fairly quickly (we add columns and tables to hold new information), but because the mapper sits between the files and the tables, we don't have to touch the files themselves when we make schema changes (mostly). Using a table-level dump would, i believe, lose that rather nice feature.

Nonetheless, this is something we should think about more. It may turn out that regenerating the dumps on each schema change is less work than rebuilding the entire database from scratch through the mapper. Hmm.
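
For reference, i imagine the dump-and-reload route would look something like this (the schema, directory object and file names are all invented, and i haven't actually tried it):

    expdp app_owner/secret schemas=APP_OWNER content=DATA_ONLY \
          directory=BUILD_DUMPS dumpfile=app_data.dmp

    impdp app_owner/secret content=DATA_ONLY table_exists_action=APPEND \
          directory=BUILD_DUMPS dumpfile=app_data.dmp

The regeneration-on-schema-change step is the catch, as above.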

> Another variant may be to split up your dataset into independent parts
> and do multiple concurrent loads.

We're already doing that!

>> I don't have detailed statistics to hand, but one thing we've noticed is
>> that there is a very high ratio of selects to upserts during the load;
>> there are numerous (>10) queries made concerning each object before it
>> is inserted - i think the mapper is checking to see if there is an
>> existing version of the object, then inserting the loaded version,
>> performing some sort of merge or check-in operation, etc.
>>
>> We're in the process of tuning our indexes to make them more useful
>> during the load, but a major concern is that the query planner is not
>> making use of them - we quite often see that the most time-consuming
>> queries are ones which should be able to make good use of an index, but
>> are being planned as table scans. Our theory is that this is because the
>> statistics for the tables have not been updated since they were created,
>> and so it looks like they're empty, even as they grow to some vast size.
>> Does that seem plausible?
>
> Yes.
>
>> If so, what can we do about it? We could do a bit of bodging to run
>> dbms_stats.gather_table_stats periodically during the load; would that
>> be sensible?
>
> I'd rather take stats after the load, save them somewhere and load them back
> after you have recreated your schema.

In the time since i asked this question, some other guys on my team have also come up with this idea. I'm a little bit dubious about it, because the statistics will be for an earlier version of the built database, which may not have exactly the same structure or content as the one being built. But still, they should be similar enough that the hand-me-down statistics are useful.
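
If we do go down that road, i think the mechanics are roughly the following (the APP, BUILD_TOOLS and SAVED_STATS names are made up, and the stats table has to live in a schema our build doesn't drop, which is what the statown parameter is for):

    -- save the stats from a known-good build...
    exec dbms_stats.create_stat_table(ownname => 'BUILD_TOOLS', stattab => 'SAVED_STATS');
    exec dbms_stats.export_schema_stats(ownname => 'APP', stattab => 'SAVED_STATS', statown => 'BUILD_TOOLS');

    -- ...then, after the next rebuild's DDL has run but before the load:
    exec dbms_stats.import_schema_stats(ownname => 'APP', stattab => 'SAVED_STATS', statown => 'BUILD_TOOLS');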

> Another option might be to use 11g's new features for plan stability but
> I can't help you there. There are also some pre 11g features like
> "stored outlines" which may help.

Thanks, i'll look those up.
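
From a first skim, the pre-11g stored outline route looks roughly like this, although i haven't tried it, and the outline name, category and query here are purely illustrative:

    CREATE OR REPLACE OUTLINE obj_version_lookup
      FOR CATEGORY load_outlines
      ON SELECT id FROM object_versions WHERE object_id = :b1;

    -- then, in the loading session:
    ALTER SESSION SET use_stored_outlines = load_outlines;

The 11g equivalent seems to be SQL plan baselines (the dbms_spm package), which i'll read up on.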

> But maybe it's first reasonable to do a trace in order to find out why
> the load is slow. If you determine that there are some slow queries
> which use bad plans you need to tune them. If you find out it's the
> massive amount of SQL statements fired against the DB then you need to
> tune that (e.g. faster network, more CPU power on the server...).

There do seem to be bad plans. Queries that should be planned as index scans are being planned as full table scans. We think this is because the statistics still describe the tables as small, so a full scan looks cheaper than using the index.
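
In the spirit of your trace suggestion, the checking we can do looks something like this (the SID, serial# and sql_id are placeholders we'd look up in v$session and v$sql first):

    -- trace the loader's session for a chunk of the load...
    exec dbms_monitor.session_trace_enable(session_id => 123, serial_num => 456, waits => TRUE, binds => FALSE);
    -- ...let it run for a while, then:
    exec dbms_monitor.session_trace_disable(session_id => 123, serial_num => 456);
    -- and summarise the trace file with tkprof, sorted by elapsed execution
    -- time, e.g.: tkprof orcl_ora_12345.trc load.prf sort=exeela

    -- to see the plan actually being used for one suspect statement:
    SELECT * FROM TABLE(dbms_xplan.display_cursor('0abc123xyz456', NULL, 'TYPICAL'));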

>> Is there some way we can arrange for statistics updates to be triggered
>> automatically as the tables grow? A colleague swears that there's some
>> way to tell the planner not to trust the statistics, so it will make
>> plans using indexes even when it doesn't know that's a good idea; can
>> anyone shed any light on that?
>
> I believe he means you should switch to the RBO (rule based optimizer) -
> which is ancient. IIRC you can enable it per session.

Okay. Worth looking into, at least.
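
For the record, the per-session switch seems to just be the following, though RULE is deprecated and presumably unsupported on 11g, so it could only ever be a stopgap during the load:

    ALTER SESSION SET optimizer_mode = RULE;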

Thanks for your help.

tom

-- 
Tech - No Babble