RE: Enable novalidate constraints after import?

From: Jonathan Lewis <jonathan_at_jlcomp.demon.co.uk>
Date: Tue, 24 May 2016 15:50:37 +0000
Message-ID: <CE70217733273F49A8A162EE074F64D9282D53F4_at_EXMBX01.thus.corp>


Norman,

If you've got foreign key constraints on the table, the last of the three imports will validate those constraints, doing a tablescan for each constraint. Would that account for the number of tablescans you're seeing (plus however many for the indexes created)?
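To give an idea of why each foreign key costs a scan: validating the constraint is essentially a check that no child row points at a missing parent, something roughly like the following (the child/parent table and column names are just for illustration):

    select count(*)
    from   child c
    where  c.parent_id is not null
    and    not exists (select null from parent p where p.id = c.parent_id);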

The NOT NULL declarations are created on the first import, so they don't require subsequent tablescans. The index for the primary key will be created on the second import, so that's one tablescan and sort. The primary key can be enabled on the third import without any further tablescans because the columns will be NOT NULL and the unique index will already exist.

That just leaves the foreign key checks.

If you altered the foreign keys to "enable novalidate" before you export then you could eliminate those tablescans. This might disable some optimizer transformations, though (such as table elimination - though I'd have to check that); it's possible that if you made all the unique, primary and foreign keys "rely enable novalidate" then you wouldn't lose any optimizer strategies. (If you make foreign keys "rely" then the associated UK/PK constraints have to be "rely" as well.) The risk here is that if someone disables a PK/UK and then re-enables it, the (unique) index might be dropped and then recreated non-unique, which would change the optimizer costing - which might change some plans.
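As a sketch of what that alteration would look like before the export (the table and constraint names here are invented):

    alter table dept modify constraint dept_pk     rely;
    alter table emp  modify constraint emp_dept_fk rely enable novalidate;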

The safest bet for eliminating the work is to extract the DDL from a structure-only export, process the text to pick out all the foreign key constraint statements, and rewrite the SQL to create them as "rely enable novalidate".
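The rewritten foreign key statements might then come out looking something like this (again, invented names):

    alter table emp
      add constraint emp_dept_fk foreign key (deptno)
      references dept (deptno)
      rely enable novalidate;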

Regards
Jonathan Lewis
http://jonathanlewis.wordpress.com
_at_jloracle



From: Norman Dunbar [oracle_at_dunbar-it.co.uk]
Sent: 24 May 2016 14:26
To: Jonathan Lewis; oracle-l-freelists
Subject: RE: Enable novalidate constraints after import?

Hi Jonathan,

OK, the import runs first as a norows import:

rows=n
ignore=y
grants=n
indexes=n
compile=n
constraints=n
statistics=none
buffer=1000000000

Then the tables are loaded as 4 different parallel imports, all done with these parameters:

rows=y
ignore=y
grants=y
indexes=y
compile=y
constraints=n
statistics=none
buffer=1000000000

Finally, another norows import to do the constraints:

rows=n
ignore=y
grants=n
indexes=n
compile=n
constraints=y
statistics=none
buffer=1000000000
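For completeness, each of those parameter sets is fed to imp as a parfile, along the lines of the following (the file names and connect string are just placeholders):

    imp userid=scott/tiger parfile=norows.par
    imp userid=scott/tiger parfile=tables.par
    imp userid=scott/tiger parfile=constraints.par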

Later on we recompile the code, generate stats, etc.

The table in question (though there may be others doing excessive scans that I didn't spot) has 230 columns; we have some wide tables. The average row length is 551 bytes and there are no chained rows. The table is 38,536 blocks, with an 8k block size.

I'll run a test on another similar database later on. I have yet another test in progress: nologging on all tables and indexes. Might save an hour or two!
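The nologging test is just the usual attribute change on each segment before the import, something like this (made-up names):

    alter table big_table    nologging;
    alter index big_table_pk nologging;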

Thanks again.

Cheers,
Norm.
--

Sent from my Android device with K-9 Mail. Please excuse my brevity.

--

http://www.freelists.org/webpage/oracle-l
