RE: RE: Case sensitive user defined types, 11g imp and my current nightmare!

From: Mark W. Farnham <mwf_at_rsiz.com>
Date: Thu, 28 Apr 2016 17:30:29 -0400
Message-ID: <01c101d1a195$3076b4e0$91641ea0$_at_rsiz.com>



Ok Norman, here is the point I was trying to make: for purposes of debugging you only need this one particular table. It is new information that it might need to be masked, but for the purposes of debugging the load it does not really need to be consistent. You can truncate and load from your 23-hour monster once you get it working.  

If you get the production DBAs to understand that bit, I would be surprised if they are not willing to give you a special cut to save you 23 hours per trial. When you get it debugged, THEN you use your existing export for a consistent image with the rest.  

AND: in case you have trouble with this table in the future for particular data rows, even if they are unwilling to ship you less than a full export (or doing so might cause inconsistencies), it should be trivial for them to ALSO ship you just that one troublesome table as a single-table dump. (And perhaps individual auxiliary dumps for any additional tables that ever prove troublesome.)  
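A single-table cut like that is a one-liner with the original exp/imp tools. A rough sketch (the connect strings, schema, and table names below are placeholders, not from this thread):

```shell
# Source side: export just the one troublesome table, consistently.
exp system/password@PRODDB tables=APP_OWNER.TROUBLE_TABLE \
    file=trouble_table.dmp log=trouble_table_exp.log \
    consistent=y direct=y

# Target side: truncate and re-import only that table.
imp system/password@DEVDB fromuser=APP_OWNER touser=APP_OWNER \
    tables=TROUBLE_TABLE file=trouble_table.dmp \
    log=trouble_table_imp.log ignore=y
```

Because the dump contains only the one table, imp never has to wade through 160 GB of other segments to reach it.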

This sort of thing was standard operating procedure (SOP) back in the days of relatively unsophisticated tape drives to avoid all the “and now wait a day” type of problems you are facing.  

That sort of data latency should be entirely a thing of the era when I didn’t qualify for AARP membership.  

mwf  

From: oracle-l-bounce_at_freelists.org [mailto:oracle-l-bounce_at_freelists.org] On Behalf Of Norman Dunbar
Sent: Thursday, April 28, 2016 4:21 PM
To: oracle-l_at_freelists.org
Subject: Fwd: RE: Case sensitive user defined types, 11g imp and my current nightmare!  

Cc to list. I forgot to reply all. Sigh.


From: Norman Dunbar <oracle_at_dunbar-it.co.uk>
Sent: 28 April 2016 15:58:07 BST
To: "Mark W. Farnham" <mwf_at_rsiz.com>
Subject: RE: Case sensitive user defined types, 11g imp and my current nightmare!

Hi Mark,

at the moment, yes. The source database has moved on by over a week, so for now I'm stuck with this 160 GB export file. The next refresh has been requested differently: after a bit of analysis, I've (hopefully) worked out the optimum set of tables and schemas to export in separate sessions, giving me 4 dump files that I can then use to improve performance.

Time will tell, but one single table holds around 45% of the total number of rows, which is about 1.5 billion. Lots of CLOBs, XMLTypes and other nasties involved. That one is the cause of all the delays, as it is closer to the 'A' end of the alphabet than the table I need, which is in amongst the 'X' end.

It takes about 4 to 5 days to get the production DBAs to supply an export, as it has to be consistent and the personal data obfuscated. It all adds to the delays.

I've even tried to find out, with zero success, the internal structure of an exp dump file, to see if there was some way of slicing the file down to just the sections I need. No joy. I've done this before with tar files, but they are documented. These exp files don't appear to be, at least not publicly. I've worked a few details out myself (direct or not, user exporting, original file name, date etc - the easy stuff) but nothing of any real use to my current problem.
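For what it's worth, the "easy stuff" near the start of a classic exp dump (the version banner, exporting user, table names) is largely printable ASCII, so a strings-style scan recovers it without any knowledge of the undocumented binary row format. A minimal sketch, assuming only that those fields are plain ASCII runs:

```python
import string

# Printable ASCII byte values, minus whitespace we treat as separators.
PRINTABLE = set(string.printable.encode()) - set(b'\r\n\x00\x0b\x0c')

def header_strings(data: bytes, min_len: int = 4):
    """Yield runs of printable ASCII at least min_len bytes long.

    This only recovers the human-readable fields near the start of a
    dump file; it does not decode the binary table-data sections."""
    run = bytearray()
    for b in data:
        if b in PRINTABLE:
            run.append(b)
        else:
            if len(run) >= min_len:
                yield run.decode('ascii')
            run.clear()
    if len(run) >= min_len:
        yield run.decode('ascii')

# Usage against a real file (path is a placeholder):
# with open('expdat.dmp', 'rb') as f:
#     for s in header_strings(f.read(4096)):
#         print(s)
```

That gets you the banner-level metadata, but as you say, locating and excising one table's data section would need the actual record framing, which Oracle has never published.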

If anyone has done any work on this front previously, and has results to publish, I'd be grateful!

Cheers,
Norm.

--

Sent from my Android device with K-9 Mail. Please excuse my brevity.

--
http://www.freelists.org/webpage/oracle-l
Received on Thu Apr 28 2016 - 23:30:29 CEST
