Performance Issue after Bulk Insertion (Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - 64bit)
Performance Issue after Bulk Insertion [message #525612] Tue, 04 October 2011 02:32
Manoj.Gupta.91
Messages: 239
Registered: March 2008
Location: Delhi
Senior Member
Hi All,

We had two different databases, say DB1 and DB2.
Each had one schema with an identical structure, i.e. the tables, views, synonyms, procedures, and packages were the same.

But the data was different. It was separated for a business requirement.

Now it was decided to merge these two database schemas into one. The DBA inserted all the data from DB1 into DB2, so the DB2 schema now contains the data of both schemas.

One of my procedures, which took 35 to 45 minutes to execute in each database separately, now takes more than 12 hours on the merged data, i.e. the present DB2.

I tried to tune it to the best of my knowledge, but with no benefit.

Can you please suggest some initial steps to identify the problematic area?

For example: check row chaining, check fragmentation, check whether indexes require a rebuild, check the cursor sharing mode, etc.

Please tell me some steps to find the problematic area!
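The checks listed above can be sketched as queries against the DBA views. This is only an illustrative starting point: `MY_SCHEMA` is a placeholder for the actual schema owner, and running it requires SELECT privileges on the DBA views.

```sql
-- Chained/migrated rows per table (CHAIN_CNT is only populated after
-- ANALYZE TABLE ... COMPUTE STATISTICS; ANALYZE ... LIST CHAINED ROWS
-- gives a more precise picture for a single table):
SELECT table_name, num_rows, chain_cnt
  FROM dba_tables
 WHERE owner = 'MY_SCHEMA';

-- Tables whose optimizer statistics may be stale after the bulk insert:
SELECT table_name, last_analyzed, num_rows
  FROM dba_tables
 WHERE owner = 'MY_SCHEMA'
   AND (last_analyzed IS NULL OR last_analyzed < SYSDATE - 7);

-- Regathering statistics after a large data load is often the first
-- thing to try, since the optimizer may still be planning for the
-- old, smaller data volume:
BEGIN
  DBMS_STATS.GATHER_SCHEMA_STATS(ownname => 'MY_SCHEMA', cascade => TRUE);
END;
/
```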

Thanks & Regards
Manoj
Re: Performance Issue after Bulk Insertion [message #525619 is a reply to message #525612] Tue, 04 October 2011 02:38
Michel Cadot
Messages: 68625
Registered: March 2007
Location: Nanterre, France, http://...
Senior Member
Account Moderator
You can read:
- OraFAQ Forum Guide, Performance Tuning section
- Performances Tuning sticky
- OraFAQ Performance Tuning
- Rosco's SQL Tuning Guide
- Tuning High-Volume SQL Wiki page

You can also have a look at the Database Performance Tuning Guide.

Regards
Michel
Re: Performance Issue after Bulk Insertion [message #528678 is a reply to message #525619] Wed, 26 October 2011 06:57
Kevin Meade
Messages: 2103
Registered: December 1999
Location: Connecticut USA
Senior Member
Sounds to me like one of your databases had lots more data than the other. The problem with putting these together in one schema is that if you are doing a full table scan, you will be scanning the data of all the original schemas even though you only want the data for one. Seems to me that the people who figured putting them together was a good idea should have considered this before merging the two. I would put them on the carpet and have them tell you what they are going to do to fix your problems.

You might consider VPD, either Oracle's version, or your own.
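As an illustration of the VPD option, a minimal sketch follows. It assumes a `SOURCE_DB` column was added during the merge to tag each row's origin; the table, context, and function names are placeholders, not anything from the original thread.

```sql
-- Policy function: Oracle appends the returned predicate to every
-- query against the protected table.
CREATE OR REPLACE FUNCTION source_filter (
  p_schema IN VARCHAR2,
  p_object IN VARCHAR2
) RETURN VARCHAR2 IS
BEGIN
  -- SYS_CONTEXT reads the source tag set for the current session.
  RETURN 'source_db = SYS_CONTEXT(''APP_CTX'', ''SOURCE_DB'')';
END;
/

-- Register the policy on the merged table:
BEGIN
  DBMS_RLS.ADD_POLICY(
    object_schema   => 'MY_SCHEMA',
    object_name     => 'MY_TABLE',
    policy_name     => 'SOURCE_DB_POLICY',
    function_schema => 'MY_SCHEMA',
    policy_function => 'SOURCE_FILTER');
END;
/
```

Note that VPD restricts which rows a session sees, but full scans still read the whole table; it solves the correctness side, not necessarily the I/O side.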

You might also consider partitioning based on the original business requirement so that your queries only need to look at the data they require.
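A sketch of the partitioning idea, again assuming a source-tag column and illustrative names: list-partitioning the merged table by origin lets partition pruning confine a query to one source's data.

```sql
-- Each original database's rows land in their own partition:
CREATE TABLE my_table_part (
  id         NUMBER,
  source_db  VARCHAR2(10),
  payload    VARCHAR2(100)
)
PARTITION BY LIST (source_db) (
  PARTITION p_db1 VALUES ('DB1'),
  PARTITION p_db2 VALUES ('DB2')
);

-- With partition pruning, a full scan driven by this predicate
-- touches only partition P_DB1, restoring pre-merge scan volumes:
SELECT COUNT(*) FROM my_table_part WHERE source_db = 'DB1';
```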

But when you do this you will see the true cost of this lack of design analysis.

Did they not consider these things when they made the suggestion?

Good luck. Kevin

[Updated on: Wed, 26 October 2011 06:58]
