Case sensitive user defined types, 11g imp and my current nightmare!

From: Norman Dunbar <oracle_at_dunbar-it.co.uk>
Date: Wed, 27 Apr 2016 08:31:49 +0100
Message-ID: <FE905ECD-3724-4F97-B431-DF5076986755_at_dunbar-it.co.uk>



Morning All,

I'm working on "upgrading" a Solaris 9i database to 11g on Windows in the (Azure) cloud. I have to use exp/imp due to the 9i database.

I'm getting the "IMP-00061 Object user.TypeNameWithCamelCase already exists with a different identifier" error, followed by "IMP-00063 Skipping table user.tablename because object type user.TypeNameWithCamelCase cannot be created or has a different identifier."

Fine, I can use the TOID_NOVALIDATE parameter for this one, as I know the type in question is valid and should exist; other tables that depend on it being there have imported quite happily.

Sadly, some developer or other deemed it necessary that the type in question absolutely, without question, really had to be created in a case-sensitive manner with CamelCase naming. (My advice: if you double-quote your identifiers, object names, etc., be prepared to be severely ranted at by me!)

So, I specify the username and type name in double quotes:

imp ... TOID_NOVALIDATE=("USER"."TypeNameWithCamelCase") ...

and ran the imp again. Exactly the same error. Hmm. I then consulted the docs and discovered that to preserve case I need to wrap the double quotes in single quotes. Off I go again:

imp ... TOID_NOVALIDATE=(' "USER"."TypeNameWithCamelCase" ') ... (Spaces shown for clarity; not used in the actual command line.)

and after running the imp again, I get exactly the same error messages. Sigh.
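For what it's worth, here's a quick check of what the shell actually hands to a program under each quoting style (bash shown here purely for illustration; Windows cmd.exe has its own rules, but a bare double quote there is likewise consumed during argument parsing rather than passed through to imp):

```shell
# Print the argument exactly as the invoked program would receive it.
printf '%s\n' "USER"."TypeNameWithCamelCase"     # shell eats the double quotes
printf '%s\n' '"USER"."TypeNameWithCamelCase"'   # single quotes preserve them
```

The first prints USER.TypeNameWithCamelCase, quotes gone; the second prints "USER"."TypeNameWithCamelCase" intact, which is why the docs say to wrap the double quotes in single quotes.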

So, how the hell do I preserve CamelCase naming for types whose TOID I want imp to ignore? I'm doing this on the command line at the moment, but I'm working on the assumption that a parameter file will have the same problems. I have been wrong before!
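If and when I try the parameter file route, it'll be something along these lines (a sketch only: the file names and the schema/type names are stand-ins for the real ones, and whether a parfile actually preserves the case is exactly the open question):

```
FILE=full_export.dmp
LOG=camel_imp.log
FROMUSER=USER
TOUSER=USER
IGNORE=Y
TOID_NOVALIDATE=('"USER"."TypeNameWithCamelCase"')
```

invoked with imp ... PARFILE=camel.par, which at least takes the command-line shell's quote-mangling out of the equation.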

What makes this all the more galling is the fact that this is only a 400 GB database (i.e. small) but the export file is 160 GB in size. It takes this poor little cloud server around 23 hours to skip the billions of rows of data it doesn't need to get to the table I want to import. Of course, my table has a name that is at the end of the alphabet, so it appears at the end of the dump file after the vast majority of the data. Sigh.

My latest run is simply:

drop table .... cascade constraints purge; drop type ... force;

Followed by the import. Which hopefully will work, but might affect the other tables that depend on this type and which have imported fine. However, there are indeed other types with these naming conventions that might affect things in future, and I'd like to be armed with the correct manner of preserving the DamnedCamelCaseNamingConventions that SomeFlippingDeveloperThoughtWasCute! ;-)

I am rapidly losing the will to live!

Cheers,
Norm.

--

Sent from my Android device with K-9 Mail. Please excuse my brevity.

--

http://www.freelists.org/webpage/oracle-l Received on Wed Apr 27 2016 - 09:31:49 CEST
