Feed aggregator

Insert performance issue

Tom Kyte - Mon, 2016-10-17 14:46
Hi Tom, We have an application to load data into the database. It uses insert statements to insert data into tables. Each table contains nearly 220 columns. insert into table1(col1,col2,col3,col6,col7,col8................col220)values(1,2,3,6,7...
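
If each row arrives as its own single-row INSERT, the per-statement and round-trip overhead often matters far more than the column count. A minimal, hypothetical sketch of batching the work with bulk binding (only three of the columns are shown, and the collection would normally be filled from the incoming data):

declare
    type t_rows is table of table1%rowtype index by pls_integer;
    l_rows t_rows;
begin
    -- fill the collection in memory (three sample rows only)
    for i in 1 .. 3 loop
        l_rows(i).col1 := i;
        l_rows(i).col2 := i * 2;
        l_rows(i).col3 := i * 3;
    end loop;

    -- one bulk-bound insert instead of one statement per row
    forall i in 1 .. l_rows.count
        insert into table1 values l_rows(i);

    commit;
end;
/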
Categories: DBA Blogs

Data insertion strategy in normalized tables

Tom Kyte - Mon, 2016-10-17 14:46
Hi Team, We have many tables (master tables) having primary and foreign key relationships. These tables (normalized) contain static data (master data). Inserting data manually in these tables is a tedious task because if we insert data out of ord...
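
One pattern this kind of question usually leads to is making the foreign keys deferrable, so the load order inside a transaction stops mattering; a minimal sketch with hypothetical parent/child tables (not the actual master tables):

alter table child_master
      add constraint child_parent_fk
      foreign key (parent_id) references parent_master (parent_id)
      deferrable initially immediate;

-- defer checking of this constraint until commit for the current transaction
set constraint child_parent_fk deferred;

-- rows can now be inserted in any order within the transaction
insert into child_master (child_id, parent_id) values (10, 1);
insert into parent_master (parent_id) values (1);

commit;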
Categories: DBA Blogs

export each XML Message to separate .xml files from Databases

Tom Kyte - Mon, 2016-10-17 14:46
Hi, We store specific information as an XML field in an Oracle database. I have a specific query whose results are already XML, as each field is stored as an XML file. I wanted to export each field as an XML file: select REQEmployeeXML from User.Employe...
Categories: DBA Blogs

How to achieve the output in the aligned format?

Tom Kyte - Mon, 2016-10-17 14:46
The following .sql file is called by the .sh shell script. connect username/password@sidname SET SERVEROUTPUT ON; SET LINESIZE 4600; #SET TRIMSPOOL ON; SET WRAP OFF; SET HEADING OFF; #SET TRIMOUT ON; SET TIMING ON; SET FEEDBACK ON; SET SPOO...
Categories: DBA Blogs

Partner TekTalk Webinar: Process Lifecycle for Procurement

WebCenter Team - Mon, 2016-10-17 14:31
Webinar: Process Lifecycle for Procurement

TekTalk Webinar:  Process Lifecycle for Procurement

Leveraging the Cloud to Accelerate Your Procurement Process, Approvals and Reduce Errors

Wed Oct 26 at 1:00 PM EST

Effective communication and coordination between Procurement, Legal, Finance and vendors is key to smoothly acquiring products and services. The procurement process lifecycle can be annoyingly complicated and often experiences frustrating delays, especially when you have to manage massive volumes of active procurements and incoming requests with limited resources. 

Join the webinar to learn how you can:
  • Streamline your entire procurement process, from initiating the procurement request and managing RFx to contract negotiation and managing the award process
  • Effectively evaluate vendors by creating requirement documents and scheduling evaluations to score and rate each vendor, all within one system
  • Optimize your procurement process by expanding your supplier network to dial down costs
  • Enable dynamic routing of procurement forms and documents across your enterprise for productive collaboration

Register now to learn how you can gain total control of your procurement process. Request, evaluate, negotiate, approve, and renew with ease.


For more information, please contact info@tekstream.com or call 844-TEK-STRM 

Who will leave their Apple Watch to their grandchildren, Romania?

Usable Apps - Mon, 2016-10-17 09:40

Just back from some very hot Oracle Applications User Experience (OAUX) outreach and enablement in that part of the world known as the Silicon Valley of Transylvania: Romania.

I teamed up with our Bucharest-based Cloud User Experience (UX) Program Manager Ana Tomescu (@annatomescu) and other local Oracle teams to coordinate our presence, which was all about maximizing the Oracle Cloud and SaaS UX message and changing any lingering perceptions about Oracle being only the database company.

Topics at TechHub Bucharest

Word Cloud UX: Our outreach in Romania covered many areas. Users are always at the center of what we do.

We took to the stage at the Great People Inside Conference: The New World of Work event in Brașov, Romania, to deliver a keynote presentation about the functionality, form, fitness, and fashion trends of the digital user experience needed for today’s digital workforce.

 The Dress Code of the Digital Workforce

Fashion and Technology: The Dress Code of the Digital Workforce. The title of this blog post is taken from this slide.

The Great People Inside conference is the largest of its type for HR professionals in Romania, and it focuses on the world of work, so it was an ideal platform for people to hear about the Human Capital Management (HCM) benefits of the Oracle Cloud UX strategy and innovation.

The keynote presentation concluded with digital UX adoption observations and takeaways, and then we joined in a panel with Monica Costea, Oracle HCM Senior Solutions Consultant for the region, and other speakers to answer questions from a packed audience.

Increasing Digital UX Adoption

Increasing digital UX adoption: from fashion to fitness to functionality . . . 

The panel was asked about whether Toyota's use of Kirobo, the smart car-based robot in Japan, could apply to other markets, about how to achieve a balance between technology and fitness (Pokémon Go is a fine example), about the 360-degree capabilities of HR applications, and more. The English-to-Romanian translator working in real time deserves a prize for my use of Uncanny Valley.

Monica Costea, Ultan O'Broin, and Ana Tomescu

Monica Costea, Ultan O'Broin, and Ana Tomescu testing remote selfie capabilities

In return, I asked the audience if fitness and wellness programs were a feature of enterprise offerings in Romania (an emergent one it seems), and how many people in the audience played Pokémon GO (seems people were shy, but I am assured they do!).

Earlier that same week we were honored to bring the Powerful Tech Team: The Cloud and Wearable Tech Experience event to Bucharest’s awesome TechHub with co-host Vector. A lively evening there featured panel discussions on Cloud UX strategy and innovation trends and how to increase wearable tech adoption and engagement, followed by a couple of guerrilla pitches to the gathered turnout by local startups Endtest and viLive.

Follow the smart money, the smart people, and what they're saying

Influencers: Follow the smart money, smart people, and what they're saying.

Interaction with the TechHub audience included discussions on user research, data accuracy, context, Big Data and visualizations, the importance of cutting-edge and fashionable design, how emerging tech such as AI, machine learning, and VR can be integrated with the wearables experience, and more. We got to experience the excitement and energy of the Bucharest startup scene, make some new contacts, and get some new ideas for further "boots on the ground" UX activities.

Definitely, there is an opportunity for UX and design meetups in this amazing city of tech innovators, entrepreneurs, and developers. 

So, we covered two great events in the same week, events that showcased the best of Digital Romania and put the OAUX communications and outreach charter of UX storytelling and enablement into action for the world of work.

Souvenirs of Transylvania

Some downtime in Transylvania. There's always storytelling and a Vlad involved with UX . . . :) 

It was not my first time in Romania, and it won’t be the last! Oracle is making a huge investment in some very smart people in Romania, and we’re eager to be part of their successes.

We’re looking forward to more UX events in Romania and the ECEMEA region. Stay tuned to the Usable Apps website events section.

Partner Webcast – Lightweight Application Deployment with Oracle Application Container Cloud Service

Modern enterprises realize that they need to continually innovate and transform, and in order to do so they need new software development competencies to carry that transformation through, unlike what they...

We share our skills to maximize your revenue!
Categories: DBA Blogs

Fixed Stats

Jonathan Lewis - Mon, 2016-10-17 06:43

There are quite a lot of systems around the world that aren’t using the AWR (automatic workload repository) and ASH (active session history) tools to help them with troubleshooting because of the licensing requirement – so I’m still finding plenty of sites that are using Statspack and I recently came across a little oddity at one of these sites that I hadn’t noticed before: one of the Statspack snapshot statements was appearing fairly regularly in the Statspack report under the “SQL Ordered by Elapsed Time” section – even when the application had been rather busy and had generated lots of other work that was being reported. It was the following statement – the collection of file-level statistics:


select
       ts.name      tsname
     , df.name      filename
     , fs.phyrds
     , fs.phywrts
     , fs.readtim
     , fs.writetim
     , fs.singleblkrds
     , fs.phyblkrd
     , fs.phyblkwrt
     , fs.singleblkrdtim
     , fw.count     wait_count
     , fw.time      time
     , df.file#
  from x$kcbfwait   fw
     , v$filestat   fs
     , v$tablespace ts
     , v$datafile   df
 where ts.ts#    = df.ts#
   and fs.file#  = df.file#
   and fw.indx+1 = df.file#
;

The execution plan didn’t look very friendly, and the volume of I/O it generated (several hundred thousand disk reads) was surprising. The reason why the statement stood out so much in this case was that there was a fairly large number of files in the database (over 1,000) and the default execution plan was showing very bad cardinality estimates that resulted in highly inappropriate cartesian merge joins. At best the statement was taking around 2 minutes to run, at worst it was much, much worse.

This system was running 10g – also something which is still fairly common, though becoming much scarcer – which produced the following execution plan (which I’ve recreated on a much smaller system):

-------------------------------------------------------------------------------------------------
| Id  | Operation                     | Name            | Rows  | Bytes | Cost (%CPU)| Time     |
-------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT              |                 |     1 |   535 |     2 (100)| 00:00:01 |
|*  1 |  HASH JOIN                    |                 |     1 |   535 |     2 (100)| 00:00:01 |
|   2 |   MERGE JOIN CARTESIAN        |                 |     5 |  1190 |     0   (0)| 00:00:01 |
|   3 |    NESTED LOOPS               |                 |     1 |   199 |     0   (0)| 00:00:01 |
|   4 |     MERGE JOIN CARTESIAN      |                 |     1 |   173 |     0   (0)| 00:00:01 |
|*  5 |      FIXED TABLE FULL         | X$KCCTS         |     1 |    43 |     0   (0)| 00:00:01 |
|   6 |      BUFFER SORT              |                 |     1 |   130 |     0   (0)| 00:00:01 |
|*  7 |       FIXED TABLE FULL        | X$KCFIO         |     1 |   130 |     0   (0)| 00:00:01 |
|*  8 |     FIXED TABLE FIXED INDEX   | X$KCCFE (ind:1) |     1 |    26 |     0   (0)| 00:00:01 |
|   9 |    BUFFER SORT                |                 |   100 |  3900 |     0   (0)| 00:00:01 |
|  10 |     FIXED TABLE FULL          | X$KCBFWAIT      |   100 |  3900 |     0   (0)| 00:00:01 |
|* 11 |   VIEW                        | GV$DATAFILE     |     1 |   297 |     1 (100)| 00:00:01 |
|  12 |    SORT ORDER BY              |                 |     1 |   957 |     1 (100)| 00:00:01 |
|  13 |     NESTED LOOPS              |                 |     1 |   957 |     0   (0)| 00:00:01 |
|  14 |      NESTED LOOPS             |                 |     1 |   647 |     0   (0)| 00:00:01 |
|  15 |       NESTED LOOPS            |                 |     1 |   371 |     0   (0)| 00:00:01 |
|* 16 |        FIXED TABLE FULL       | X$KCCFN         |     1 |   323 |     0   (0)| 00:00:01 |
|* 17 |        FIXED TABLE FIXED INDEX| X$KCVFH (ind:1) |     1 |    48 |     0   (0)| 00:00:01 |
|* 18 |       FIXED TABLE FIXED INDEX | X$KCCFE (ind:1) |     1 |   276 |     0   (0)| 00:00:01 |
|* 19 |      FIXED TABLE FULL         | X$KCCFN         |     1 |   310 |     0   (0)| 00:00:01 |
-------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - access("TSTSN"="TS#" AND "K"."KCFIOFNO"="FILE#" AND "FILE#"="FW"."INDX"+1)
   5 - filter("TSTSN"<>(-1) AND "INST_ID"=USERENV('INSTANCE'))
   7 - filter("K"."INST_ID"=USERENV('INSTANCE'))
   8 - filter("F"."FEDUP"<>0 AND "F"."FENUM"="K"."KCFIOFNO")
  11 - filter("INST_ID"=USERENV('INSTANCE'))
  16 - filter("FN"."FNNAM" IS NOT NULL AND "FN"."FNTYP"=4 AND BITAND("FN"."FNFLG",4)<>4)
  17 - filter("FN"."FNFNO"="FH"."HXFIL")
  18 - filter("FE"."FEDUP"<>0 AND "FN"."FNFNO"="FE"."FENUM" AND
              "FE"."FEFNH"="FN"."FNNUM" AND "FE"."FETSN"<>(-1))
  19 - filter("FE"."FEPAX"<>65535 AND "FE"."FEPAX"<>0 AND "FE"."FEPAX"="FNAUX"."FNNUM"
              OR ("FE"."FEPAX"=0 OR "FE"."FEPAX"=65535) AND "FE"."FENUM"="FNAUX"."FNFNO" AND
              "FNAUX"."FNTYP"=4 AND "FNAUX"."FNNAM" IS NOT NULL AND BITAND("FNAUX"."FNFLG",4)<>4 AND
              "FE"."FEFNH"="FNAUX"."FNNUM")

Note particularly the two Cartesian merge joins and the very late filter at operation 1.

Note also the number of times the cardinality estimate is 1 – always a bit of a threat when the query gets complicated: “anything goes following a one for Rows”.

The easy (first thought) solution was simply to gather stats on all the fixed objects in this query:


begin
        dbms_stats.gather_table_stats('sys','x$kcbfwait',method_opt=>'for all columns size 1');
        dbms_stats.gather_table_stats('sys','x$kccfe',   method_opt=>'for all columns size 1');
        dbms_stats.gather_table_stats('sys','x$kccfn',   method_opt=>'for all columns size 1');
        dbms_stats.gather_table_stats('sys','x$kccts',   method_opt=>'for all columns size 1');
        dbms_stats.gather_table_stats('sys','x$kcfio',   method_opt=>'for all columns size 1');
        dbms_stats.gather_table_stats('sys','x$kcvfh',   method_opt=>'for all columns size 1');
end;
/

The option to gather fixed object stats individually with a call to dbms_stats.gather_table_stats() is not commonly known, but it does work.
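
For comparison, the more widely known approach is a single call that gathers stats for all fixed objects; a minimal sketch (note that it visits every X$ structure, not just the six tables above, so it takes correspondingly longer):

begin
        dbms_stats.gather_fixed_objects_stats;
end;
/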

Here’s the plan (again from the small system) after stats collection:

--------------------------------------------------------------------------------------------------
| Id  | Operation                      | Name            | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT               |                 |    29 | 10411 |     3 (100)| 00:00:01 |
|*  1 |  HASH JOIN                     |                 |    29 | 10411 |     3 (100)| 00:00:01 |
|*  2 |   HASH JOIN                    |                 |     1 |   350 |     2 (100)| 00:00:01 |
|   3 |    MERGE JOIN CARTESIAN        |                 |    25 |  1325 |     0   (0)| 00:00:01 |
|   4 |     NESTED LOOPS               |                 |     4 |   148 |     0   (0)| 00:00:01 |
|*  5 |      FIXED TABLE FULL          | X$KCFIO         |   200 |  6200 |     0   (0)| 00:00:01 |
|*  6 |      FIXED TABLE FIXED INDEX   | X$KCCFE (ind:1) |     1 |     6 |     0   (0)| 00:00:01 |
|   7 |     BUFFER SORT                |                 |     7 |   112 |     0   (0)| 00:00:01 |
|*  8 |      FIXED TABLE FULL          | X$KCCTS         |     7 |   112 |     0   (0)| 00:00:01 |
|*  9 |    VIEW                        | GV$DATAFILE     |     1 |   297 |     1 (100)| 00:00:01 |
|  10 |     SORT ORDER BY              |                 |     1 |   316 |     1 (100)| 00:00:01 |
|  11 |      NESTED LOOPS              |                 |     1 |   316 |     0   (0)| 00:00:01 |
|  12 |       NESTED LOOPS             |                 |     1 |   248 |     0   (0)| 00:00:01 |
|  13 |        NESTED LOOPS            |                 |     1 |   226 |     0   (0)| 00:00:01 |
|* 14 |         FIXED TABLE FULL       | X$KCCFE         |     4 |   612 |     0   (0)| 00:00:01 |
|* 15 |         FIXED TABLE FIXED INDEX| X$KCCFN (ind:1) |     1 |    73 |     0   (0)| 00:00:01 |
|* 16 |        FIXED TABLE FIXED INDEX | X$KCVFH (ind:1) |     1 |    22 |     0   (0)| 00:00:01 |
|* 17 |       FIXED TABLE FULL         | X$KCCFN         |     1 |    68 |     0   (0)| 00:00:01 |
|  18 |   FIXED TABLE FULL             | X$KCBFWAIT      |   400 |  3600 |     0   (0)| 00:00:01 |
--------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   1 - access("FILE#"="FW"."INDX"+1)
   2 - access("TSTSN"="TS#" AND "K"."KCFIOFNO"="FILE#")
   5 - filter("K"."INST_ID"=USERENV('INSTANCE'))
   6 - filter("F"."FEDUP"<>0 AND "F"."FENUM"="K"."KCFIOFNO")
   8 - filter("TSTSN"<>(-1) AND "INST_ID"=USERENV('INSTANCE'))
   9 - filter("INST_ID"=USERENV('INSTANCE'))
  14 - filter("FE"."FEDUP"<>0 AND "FE"."FETSN"<>(-1))
  15 - filter("FN"."FNTYP"=4 AND "FN"."FNNAM" IS NOT NULL AND BITAND("FN"."FNFLG",4)<>4
              AND "FN"."FNFNO"="FE"."FENUM" AND "FE"."FEFNH"="FN"."FNNUM")
  16 - filter("FN"."FNFNO"="FH"."HXFIL")
  17 - filter("FE"."FEPAX"<>65535 AND "FE"."FEPAX"<>0 AND "FE"."FEPAX"="FNAUX"."FNNUM" OR
              ("FE"."FEPAX"=0 OR "FE"."FEPAX"=65535) AND "FE"."FENUM"="FNAUX"."FNFNO" AND
              "FNAUX"."FNTYP"=4 AND "FNAUX"."FNNAM" IS NOT NULL AND BITAND("FNAUX"."FNFLG",4)<>4 AND
              "FE"."FEFNH"="FNAUX"."FNNUM")

Note the changes in cardinality estimates: they now look a little more realistic and we’re down to one cartesian merge join which (if you have a rough idea of what your X$ tables hold) still looks a little surprising at first sight but not completely unreasonable. A change of plan doesn’t necessarily mean much without the data and time behind it, of course, so here are the two sets of results from a 10g database with a handful of datafiles and tablespaces showing the Row Source Operation sections from the tkprof output before and after stats collection:

Before stats collection:

Rows     Row Source Operation
-------  ---------------------------------------------------
      6  HASH JOIN  (cr=0 pr=0 pw=0 time=1957860 us)
  16800   MERGE JOIN CARTESIAN (cr=0 pr=0 pw=0 time=33855662 us)
     42    NESTED LOOPS  (cr=0 pr=0 pw=0 time=73795 us)
   1400     MERGE JOIN CARTESIAN (cr=0 pr=0 pw=0 time=21555 us)
      7      FIXED TABLE FULL X$KCCTS (cr=0 pr=0 pw=0 time=3204 us)
   1400      BUFFER SORT (cr=0 pr=0 pw=0 time=7233 us)
    200       FIXED TABLE FULL X$KCFIO (cr=0 pr=0 pw=0 time=1210 us)
     42     FIXED TABLE FIXED INDEX X$KCCFE (ind:1) (cr=0 pr=0 pw=0 time=1859092 us)
  16800    BUFFER SORT (cr=0 pr=0 pw=0 time=67643 us)
    400     FIXED TABLE FULL X$KCBFWAIT (cr=0 pr=0 pw=0 time=2008 us)
      6   VIEW  GV$DATAFILE (cr=0 pr=0 pw=0 time=68087 us)
      6    SORT ORDER BY (cr=0 pr=0 pw=0 time=68065 us)
      6     NESTED LOOPS  (cr=0 pr=0 pw=0 time=65989 us)
      6      NESTED LOOPS  (cr=0 pr=0 pw=0 time=56632 us)
      6       NESTED LOOPS  (cr=0 pr=0 pw=0 time=47217 us)
      6        FIXED TABLE FULL X$KCCFN (cr=0 pr=0 pw=0 time=19830 us)
      6        FIXED TABLE FIXED INDEX X$KCVFH (ind:1) (cr=0 pr=0 pw=0 time=25568 us)
      6       FIXED TABLE FIXED INDEX X$KCCFE (ind:1) (cr=0 pr=0 pw=0 time=9849 us)
      6      FIXED TABLE FULL X$KCCFN (cr=0 pr=0 pw=0 time=9715 us)

After stats collection:


Rows     Row Source Operation
-------  ---------------------------------------------------
      6  HASH JOIN  (cr=0 pr=0 pw=0 time=196576 us)
      6   HASH JOIN  (cr=0 pr=0 pw=0 time=195829 us)
     42    MERGE JOIN CARTESIAN (cr=0 pr=0 pw=0 time=4390 us)
      6     NESTED LOOPS  (cr=0 pr=0 pw=0 time=7810 us)
    200      FIXED TABLE FULL X$KCFIO (cr=0 pr=0 pw=0 time=1224 us)
      6      FIXED TABLE FIXED INDEX X$KCCFE (ind:1) (cr=0 pr=0 pw=0 time=150150 us)
     42     BUFFER SORT (cr=0 pr=0 pw=0 time=1574 us)
      7      FIXED TABLE FULL X$KCCTS (cr=0 pr=0 pw=0 time=1353 us)
      6    VIEW  GV$DATAFILE (cr=0 pr=0 pw=0 time=41058 us)
      6     SORT ORDER BY (cr=0 pr=0 pw=0 time=41005 us)
      6      NESTED LOOPS  (cr=0 pr=0 pw=0 time=39399 us)
      6       NESTED LOOPS  (cr=0 pr=0 pw=0 time=34229 us)
      6        NESTED LOOPS  (cr=0 pr=0 pw=0 time=15583 us)
      6         FIXED TABLE FULL X$KCCFE (cr=0 pr=0 pw=0 time=1124 us)
      6         FIXED TABLE FIXED INDEX X$KCCFN (ind:1) (cr=0 pr=0 pw=0 time=15067 us)
      6        FIXED TABLE FIXED INDEX X$KCVFH (ind:1) (cr=0 pr=0 pw=0 time=18971 us)
      6       FIXED TABLE FULL X$KCCFN (cr=0 pr=0 pw=0 time=5581 us)
    400   FIXED TABLE FULL X$KCBFWAIT (cr=0 pr=0 pw=0 time=1615 us)

The execution time has dropped from about 2 seconds to less than 1/10th of a second – and all I’ve got is 6 or 7 files and tablespaces.  (Some of the “time=” values in the first plan are very odd, but the final time figure is about right.)

Generating an intermediate result set of 16,800 rows for a tiny number of files is not a good thing – just imagine how big that number would get with 1,000 files and a couple of hundred tablespaces.

I have to say that (for a couple of tiny databases) the 11.2.0.4 and 12.1.0.2 systems I checked this query on had no problem and immediately picked a sensible path. It’s possible that the definition of some of the v$ objects has actually changed or that the optimizer features have changed (some assistance from complex view merging, perhaps) – but if you are still running Statspack, even if it’s on 11g or 12c, then it’s worth checking from time to time how much work goes into executing the snapshot and seeing if you need some fixed object stats to make things a little more efficient.

Footnote:

Several years ago I wrote a short note about how Statspack actually captured its own execution time (from 10g onwards) and how you could run a report on it to check the run time. It’s worth running that report from time to time. I’ve recently updated that note to add the equivalent query against the AWR.
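
As a rough sketch of the AWR side of that check (this is not the query from the original note): dba_hist_snapshot records how long each snapshot took to capture in its flush_elapsed column, so something like the following shows the recent capture times:

select
        snap_id,
        begin_interval_time,
        flush_elapsed
from
        dba_hist_snapshot
where
        end_interval_time > sysdate - 7
order by
        snap_id
;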


Configuring Reverse Proxies and DMZs for EBS 12.2

Steven Chan - Mon, 2016-10-17 02:06

You may have end-users outside of your organization's firewall who need access to E-Business Suite.  One way of doing that is to set up a reverse proxy server and a series of network segments separated by firewalls. 

EBS DMZ architecture

The outermost network segment that lies between the internet and an organization's intranet is often called a Demilitarized Zone (DMZ).  DMZs are enforced by firewalls and other networking security devices.

Setting up a DMZ

Instructions for deploying EBS 12.2 in a DMZ-based architecture are published in Note 1375670.1.

Externally-facing EBS products

A subset of EBS products can be deployed for external use, including iSupplier, iRecruitment, iSupport, and others.  Many of these products have special rules that must be enabled in the URL Firewall to work properly in external deployments.  For a complete list of E-Business Suite products certified for external use, see Section 6 in Note 1375670.1.



Categories: APPS Blogs

Documentum story – Documentum installers fail with various errors

Yann Neuhaus - Mon, 2016-10-17 02:00

Some months ago, when installing/removing/upgrading several Documentum components, we ended up facing a strange issue (yes I know, another one!). We saw these specific errors during the installation or removal of a Docbase, during the installation of a patch for the Content Server, the installation of the Thumbnail Server, and so on… The errors we faced differed from installer to installer but, in the end, all of them were linked to the same issue. The only error that wasn’t completely useless was the one faced during the installation of a new docbase: “Content is not allowed in trailing section”. I know this might not mean much to everybody, but this kind of error usually appears when an XML file isn’t formatted properly: some content isn’t allowed at this location in the file…

 

The strange thing is that these installers were working fine a few days before, so what exactly changed in the meantime? After some research and analysis, I finally found the culprit! One thing that had been added in those few days was D2, which had been installed a few hours before the first error. Now what could be the link between D2 and these errors when running some installers? The first thing to do when there is an issue with D2 on the Content Server is to check the Java Method Server. The first time I saw this error was during the installation of a new docbase. As said before, I checked the logs of the Java Method Server and found the following WARNING, which confirmed what I suspected:

2015-10-24 09:39:59,948 UTC WARNING [javax.enterprise.resource.webcontainer.jsf.config] (MSC service thread 1-3) JSF1078: Unable to process deployment descriptor for context ''{0}''.: org.xml.sax.SAXParseException; lineNumber: 40; columnNumber: 1; Content is not allowed in trailing section.
        at org.apache.xerces.util.ErrorHandlerWrapper.createSAXParseException(ErrorHandlerWrapper.java:196) [xercesImpl-2.9.1-jbossas-1.jar:]
        at org.apache.xerces.util.ErrorHandlerWrapper.fatalError(ErrorHandlerWrapper.java:175) [xercesImpl-2.9.1-jbossas-1.jar:]
        at org.apache.xerces.impl.XMLErrorReporter.reportError(XMLErrorReporter.java:394) [xercesImpl-2.9.1-jbossas-1.jar:]
        at org.apache.xerces.impl.XMLErrorReporter.reportError(XMLErrorReporter.java:322) [xercesImpl-2.9.1-jbossas-1.jar:]
        at org.apache.xerces.impl.XMLErrorReporter.reportError(XMLErrorReporter.java:281) [xercesImpl-2.9.1-jbossas-1.jar:]
        at org.apache.xerces.impl.XMLScanner.reportFatalError(XMLScanner.java:1459) [xercesImpl-2.9.1-jbossas-1.jar:]
        at org.apache.xerces.impl.XMLDocumentScannerImpl$TrailingMiscDispatcher.dispatch(XMLDocumentScannerImpl.java:1302) [xercesImpl-2.9.1-jbossas-1.jar:]
        at org.apache.xerces.impl.XMLDocumentFragmentScannerImpl.scanDocument(XMLDocumentFragmentScannerImpl.java:324) [xercesImpl-2.9.1-jbossas-1.jar:]
        at org.apache.xerces.parsers.XML11Configuration.parse(XML11Configuration.java:845) [xercesImpl-2.9.1-jbossas-1.jar:]
        at org.apache.xerces.parsers.XML11Configuration.parse(XML11Configuration.java:768) [xercesImpl-2.9.1-jbossas-1.jar:]
        at org.apache.xerces.parsers.XMLParser.parse(XMLParser.java:108) [xercesImpl-2.9.1-jbossas-1.jar:]
        at org.apache.xerces.parsers.AbstractSAXParser.parse(AbstractSAXParser.java:1196) [xercesImpl-2.9.1-jbossas-1.jar:]
        at org.apache.xerces.jaxp.SAXParserImpl$JAXPSAXParser.parse(SAXParserImpl.java:555) [xercesImpl-2.9.1-jbossas-1.jar:]
        at org.apache.xerces.jaxp.SAXParserImpl.parse(SAXParserImpl.java:289) [xercesImpl-2.9.1-jbossas-1.jar:]
        at javax.xml.parsers.SAXParser.parse(SAXParser.java:195) [rt.jar:1.7.0_72]
        at com.sun.faces.config.ConfigureListener$WebXmlProcessor.scanForFacesServlet(ConfigureListener.java:815) [jsf-impl-2.1.7-jbossorg-2.jar:]
        at com.sun.faces.config.ConfigureListener$WebXmlProcessor.<init>(ConfigureListener.java:768) [jsf-impl-2.1.7-jbossorg-2.jar:]
        at com.sun.faces.config.ConfigureListener.contextInitialized(ConfigureListener.java:178) [jsf-impl-2.1.7-jbossorg-2.jar:]
        at org.apache.catalina.core.StandardContext.contextListenerStart(StandardContext.java:3392) [jbossweb-7.0.13.Final.jar:]
        at org.apache.catalina.core.StandardContext.start(StandardContext.java:3850) [jbossweb-7.0.13.Final.jar:]
        at org.jboss.as.web.deployment.WebDeploymentService.start(WebDeploymentService.java:90) [jboss-as-web-7.1.1.Final.jar:7.1.1.Final]
        at org.jboss.msc.service.ServiceControllerImpl$StartTask.startService(ServiceControllerImpl.java:1811)
        at org.jboss.msc.service.ServiceControllerImpl$StartTask.run(ServiceControllerImpl.java:1746)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) [rt.jar:1.7.0_72]
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) [rt.jar:1.7.0_72]
        at java.lang.Thread.run(Thread.java:745) [rt.jar:1.7.0_72]

 

So the error “Content is not allowed in trailing section” comes from the JMS, which isn’t able to properly read the first character of line 40 of an XML deployment descriptor. So which file is that? That’s where the fun begins! There are several deployment descriptors in JBoss, like web.xml, jboss-app.xml, jboss-deployment-structure.xml, jboss-web.xml, and so on…

 

The D2 installer updates some configuration files like the server.ini. This is a text file, pretty simple to update, and indeed the file is properly formatted, so no issue on this side. Apart from this file, the D2 installer mainly updates XML files like the following ones:
  • $DOCUMENTUM_SHARED/jboss7.1.1/server/DctmServer_MethodServer/deployments/ServerApps.ear/META-INF/jboss-deployment-structure.xml
  • $DOCUMENTUM_SHARED/jboss7.1.1/server/DctmServer_MethodServer/deployments/ServerApps.ear/DmMethods.war/WEB-INF/web.xml
  • $DOCUMENTUM_SHARED/jboss7.1.1/server/DctmServer_MethodServer/deployments/bpm.ear/META-INF/jboss-deployment-structure.xml
  • $DOCUMENTUM_SHARED/jboss7.1.1/modules/emc/d2/lockbox/main/module.xml
  • aso…

 

At this point, it was pretty simple to figure out the issue: I just checked all these files until I found the wrongly updated/corrupted XML file. And the winner was… the file web.xml for the DmMethods inside the ServerApps. The D2 installer usually updates/reads this file but, in the process of doing so, it also corrupts it… It is not a big corruption but it is still annoying since it prevents some installers from working properly and it produces the error shown above in the Java Method Server. Basically, whenever you see such parsing errors, I would suggest taking a look at the web.xml files across the JMS. In our case the D2 installer added the word “ap” at the end of this file. As you know, an XML file must be properly formatted to be readable, and “ap” isn’t a correct XML closing tag:

[dmadmin@content_server_01 ~]$ cat $DOCUMENTUM_SHARED/jboss7.1.1/server/DctmServer_MethodServer/deployments/ServerApps.ear/DmMethods.war/WEB-INF/web.xml
<?xml version="1.0" encoding="UTF-8"?>
<web-app>
    <display-name>Documentum Method Invocation Servlet</display-name>
    <description>This servlet is for Java method invocation using the DO_METHOD apply call.</description>
    <servlet>
        <servlet-name>DoMethod</servlet-name>
        <description>Documentum Method Invocation Servlet</description>
        <servlet-class>com.documentum.mthdservlet.DoMethod</servlet-class>
        <init-param>
            <param-name>trace</param-name>
            <param-value>f</param-value>
        </init-param>
        <init-param>
            <param-name>docbase_install_owner_name</param-name>
            <param-value>dmadmin</param-value>
        </init-param>
        <init-param>
            <param-name>methodlocation-1</param-name>
            <param-value>$DOCUMENTUM/dba/java_methods</param-value>
        </init-param>
        <init-param>
            <param-name>docbase-GLOBAL_REGISTRY</param-name>
            <param-value>GLOBAL_REGISTRY</param-value>
        </init-param>
        <init-param>
            <param-name>docbase-DOCBASE1</param-name>
            <param-value>DOCBASE1</param-value>
        </init-param>
        <load-on-startup>1</load-on-startup>
    </servlet>
    <servlet-mapping>
        <servlet-name>DoMethod</servlet-name>
        <url-pattern>/servlet/DoMethod</url-pattern>
    </servlet-mapping>
</web-app>
ap
[dmadmin@content_server_01 ~]$

 

So to correct this issue, you just have to remove the word “ap” from the end of this file, restart the JMS and finally re-run the installer: the issue should be gone. That’s pretty simple, but it is still annoying that installers provided by EMC can cause such trouble with their own products.

 

The errors mentioned above are related to these XML files being wrongly updated by the D2 installer, but that’s actually not the only installer that wrongly updates XML files. As far as I remember, the BPM installer and the Thumbnail Server installer can also produce the exact same issue, and the reason behind that is probably that the XML files of the Java Method Server on Linux boxes have the wrong FileFormat… We faced this issue with all versions that we installed so far on our different environments: CS 7.2 P02, P05, P16… Each and every time we install a new Documentum Content Server, all XML files of the JMS are using the DOS FileFormat, and this prevents the D2/Thumbnail/BPM installers from doing their job.

 

As a side note, I have also seen some issues with the file “jboss-deployment-structure.xml”. Just like the “web.xml” above, this one is also present for all applications deployed under the Java Method Server. Some installers will try to update this file (including D2, in order to configure the Lockbox in it) but again the same issue happens, mostly because of the wrong FileFormat: I have already seen the whole content of this file simply removed by a Documentum installer… So before doing anything, I would suggest taking a backup of the JMS as soon as it is installed and running, and before installing additional components like D2, BPM, the Thumbnail Server, and so on… On Linux, it is pretty easy to see and change the FileFormat of a file: just open it using “vi”, for example, and type “:set ff?”. This will display the current FileFormat, and you can then change it using “:set ff=unix” if needed.

 

I don’t remember seeing this kind of behavior before CS 7.2, so maybe it is just linked to this specific version… If you have already seen such a thing with a previous version, don’t hesitate to share!

 

The article Documentum story – Documentum installers fail with various errors first appeared on Blog dbi services.

Export to CSV using UTL_FILE

Tom Kyte - Sun, 2016-10-16 20:26
Hi, I have gone through the code at the link https://asktom.oracle.com/pls/asktom/f?p=100:11:0::NO::P11_QUESTION_ID:88212348059. I am getting ORA-29283: invalid file operation, and I need to know how to resolve this issue. I have checked the following...
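
ORA-29283 usually means the database cannot reach or write to the OS path behind the directory object; a minimal, hypothetical sketch of the pieces that have to line up (directory name, path, user and table are illustrative only):

-- as a DBA: the OS path must exist on the database server
-- and be writable by the oracle software owner
create or replace directory csv_dir as '/u01/app/exports';
grant read, write on directory csv_dir to scott;

-- as the application user: write one CSV line per row
declare
    l_file utl_file.file_type;
begin
    l_file := utl_file.fopen('CSV_DIR', 'emp.csv', 'w', 32767);
    for r in (select empno, ename, sal from emp) loop
        utl_file.put_line(l_file, r.empno || ',' || r.ename || ',' || r.sal);
    end loop;
    utl_file.fclose(l_file);
end;
/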
Categories: DBA Blogs

Can you recover a column marked as UNUSED?

Tom Kyte - Sun, 2016-10-16 20:26
We have a situation where somebody marked one too many columns as UNUSED by mistake. We do not intend to drop this additional column. I was searching around for a solution to this but I have not found one yet... The question is simple: is there a way...
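
For context, a minimal sketch of the UNUSED mechanics (table and column names are hypothetical); once a column is marked UNUSED the dictionary only tracks a count of unused columns per table, which is why there is no statement to simply un-mark it:

alter table emp set unused column bonus;

-- the column no longer appears in the table definition;
-- the dictionary just records how many unused columns exist per table
select * from user_unused_col_tabs;

-- the only remaining operation is to physically remove them
alter table emp drop unused columns;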
Categories: DBA Blogs

Attribute Clustering/Zone Maps with Hash partitioning

Tom Kyte - Sun, 2016-10-16 20:26
How do Attribute Clustering/Zone Maps work with hash partitioned tables?
Categories: DBA Blogs

Enterprise manager

Tom Kyte - Sun, 2016-10-16 20:26
Tom, I know I could use http://localhost:5500/em to get to Enterprise Manager. But if I have several instances on the same server, it only lets me connect to one database this way. How could I connect to every instance using tns-strin...
Categories: DBA Blogs

Oracle Database Cloud (DBaaS) Performance Consistency - Part 5

Randolf Geist - Sun, 2016-10-16 17:17
This is the fifth part of this series, and before coming to comparisons with other cloud providers, in this part I show the results of the read-only test that I've already described in part three, repeated at a later point in time. The test setup was identical and can be checked in that previous part.

The reason for running the test again was the fact that I was informed during the first test run that the zone that my Oracle Cloud domain was assigned to was temporarily overloaded, which I also noticed since I wasn't able to create new services for some time.

Hence I decided to repeat the tests after it was confirmed that the issue got resolved.

So here is a comparison of the second test run results compared to the first test run - the corresponding results for the dedicated physical host can be found in part three.

Overall consistency second run:


Overall consistency first run:


Again the graph is supposed to show how many of the runs deviated how much from the overall median runtime. The second run shows much improved overall consistency, much closer to that of the dedicated physical host, although a few outliers are more extreme than in the first run.

The same data on per day basis, second run:


First run:


Except for the first two days the second run shows a pretty consistent behaviour per day, and it becomes obvious that it's the second day of the second run that is responsible for the extreme outliers and also the significant inconsistency in the area between 8 and 12 percent deviation.

Finally the individual thread performance, second run:


First run:


The individual thread performance graph confirms the much improved consistency compared to the first run.

Why I Moved from CompleteITProfessional to DatabaseStar

Complete IT Professional - Sun, 2016-10-16 13:26
Complete IT Professional has been renamed and rebranded to Database Star. I’ll explain what that means and why I made this change in this article. What Was The Change? You’re probably reading this on the new DatabaseStar.com website, wondering why you’re here instead of on CompleteITProfessional.com. Well, the reason is that I have decided to […]
Categories: Development

Oracle Data Integrator 12c: Getting Started - Components and Architecture

Rittman Mead Consulting - Sat, 2016-10-15 12:19

I’ve decided that it’s time for a refresher on Oracle Data Integrator 12c. I’m writing a Getting Started series to help folks get interested in the product and maybe even teach a few old dogs (including myself) some new tricks. In my last post, I shared the history of ODI and a bit about what sets it apart from other ETL tools on the market. In this article, I’ll walk through different components of Oracle Data Integrator and some of the architecture choices you’ll need to make in order to get started with ODI 12c.

Components

Before diving into the architecture, we need to understand the different components that are part of the Oracle Data Integrator installation.

Repositories

ODI is driven by metadata. This metadata is stored away in two different repositories: the Master repository and the Work repository. The Master repository contains information about security (users, profiles, etc), topology (data connections, contexts, physical/logical schemas), and ODI versioning. Each Master repository can be linked to one or more Work repositories. Work repositories can be of 2 different types: development or execution. In a Development Work repository you’ll find all of the design objects (mappings, packages, procedures, etc) and datastore metadata. The Execution Work repository only stores the execution objects, Scenarios and Load Plans, and there is no development capability. More on all of these objects in a later post.

ODI Repositories

The Master and Work repositories can reside in the same database schema or as their own schemas in the same database instance. The latter practice was more common in the past, before the Repository Creation Utility (RCU) was really the main mechanism for creating the repositories. The RCU doesn’t provide an option for separating the repositories into two different schemas, therefore the standard is to use one single schema. But that’s not the only reason; it also stems from the best practice of separating your environments in their entirety (development, test, production) so each can be maintained, upgraded, and patched separately. We’ll jump into the environment setup further down.

Agents

The Oracle Data Integrator Agent is what orchestrates the execution of processes created in ODI. At runtime, agents will be used to run Load Plans and Scenarios via an ODI schedule, command line call, web service call, or a third-party scheduler. Agents are accessed via http/https requests, regardless of how they are called into action.

There are 3 types of ODI 12c agents:

  • JEE Agent
    Implemented as a deployment in Weblogic Server 12c, the JEE Agent allows you to use the features of WLS, such as clustering for high availability and JDBC connection pooling.
  • Standalone Agent
    The Standalone Agent is a lightweight Java application that is typically installed to run closest to where most of the transformations will occur. In most data warehouse setups, this is on the data warehouse server.
  • Colocated Agent
    This type of agent is essentially a Standalone Agent that is managed via Weblogic Server. If you want to manage all of your agents via WLS, this is the way to go.

A great article from the ODI A-Team, ODI Agents: Standalone, JEE and Colocated, describes the agent types in further detail, including the comparison of agent features chart, found below.

ODI 12c Agent Comparison

Studio

ODI Studio is a Java based development environment based on the JDeveloper framework. Studio is installed on client machines and used to connect to the master and work repositories to access the ODI metadata and perform object development. Essentially, this is where the magic happens!

Architecture - It Depends.

With any good question comes the answer, “it depends”. Before we can choose an architecture for ODI, the system requirements must be determined, allowing us to work through the “it depends” answer more clearly. Let’s dive right in with some potential requirements that may be necessary for a proper data integration setup.

High Availability

A key decision that drives which components of Oracle Data Integrator will be installed and configured stems from the need for a highly available ETL process. If there is a critical process or reporting that relies on ODI, then HA will be a requirement. Not only that, but you’ll want to look at using something like Oracle RAC for the repository database in order to keep it up and running. Finally, high availability won’t save you from an entire data center going offline, so ensure you have a disaster recovery process in place as well.

Environments Required

How many environments do you need? Let’s start with the minimum, Development, Test/QA, and Production. Ok, well if you’re a small shop you might be able to get away without Test/QA, but not recommended. I would also add a 4th environment, Hotfix, which will store the production development objects, allowing your team to fix a production issue quickly without having to restore code from source control. The purpose of understanding the number of environments upfront is to determine how many application servers and database servers will be required for the entire Oracle Data Integrator setup. There’s yet another great article from the Oracle A-Team that describes the use of the ODI Master Repository across these many environments.

ODI Environments

Lifecycle Management and Deployment Process

As you can see in the environments image above, there are also different arrows showing the deployment process and use of source control. The deployment process is usually the easy part to determine: Migrate ODI execution objects from Dev—>Test—>Prod. But the mechanism for doing so might be a bit different, especially if (or more likely, when) source control is introduced.

Oracle Data Integrator 12c can integrate with Subversion, and soon Git, for full lifecycle management capabilities. ODI also has its own object versioning, but it really is only to be used as a last resort. Often, teams have developed their own process around exporting objects to XML, loading into a source control system, and migrating to the next environment. Whichever process you determine is best for your organization, or if you plan to piggy-back on what’s currently in play for developers at your company, you’ll want to ensure the correct components are introduced into the architecture.

Sources and Targets

This is about the types and location of the data sources that ODI will need to connect to. If you have a set of flat files on a server that is unreachable from the machine where the ODI agent is installed, you’ll need a new Standalone agent placed somewhere that can pull from the file server. Drawing up the entire “planned” data flow will help to sort out these decisions early on, especially if you introduce big data into the mix.

Security

Finally, everyone’s favorite topic: security. There are many aspects to security within ODI, including how developers access ODI Studio and how to secure your ETL processes and the application itself. As mentioned earlier, the ODI Agents are called via a web request. The addition of SSL can further secure transmission of these requests, but may also introduce additional setup. If you have a large team of ETL developers, or maybe just a company policy on how applications are to be accessed, ODI can be integrated with your organization’s LDAP via the external authentication setup. With these and other considerations for ODI security, be sure to sort this out during the requirements and architecture phase.

There are many other questions that will need to be answered in order to properly choose your architecture, but hopefully this will get you started. As always, you can join one of the Rittman Mead ODI bootcamps to learn more from one of our experts on the product. Up next in the Getting Started series, we’ll look at Oracle Data Integrator installation and configuration.

Categories: BI & Warehousing

JDeveloper 12.2.1.1 Bug and Workaround - Wrong Instance Name for Method Action Binding

Andrejus Baranovski - Sat, 2016-10-15 10:34
After upgrading to JDeveloper 12.2.1.1, I noticed an issue related to the Method Action binding instance name. This is not an ADF 12.2.1.1 bug, but a JDeveloper bug: JDeveloper sets an incorrect name for the Method Action binding instance name.

If you create a custom method in a VO/AM and expose it through the interface to be called in the bindings layer, there will be an error similar to the one below at runtime:


The source of this error is in the method binding definition:


Go to the page definition source view and you will find the instance name highlighted with a warning. JDeveloper is able to recognize the invalid expression, but it still generates it. ADF 12.2.1.1 runtime quality is improved, but I can't say the same about the JDeveloper IDE. Oracle's focus is on the Cloud, but there is still a lot to do to improve development tools quality:


The expression must be replaced manually with the correct one (the one the previous JDeveloper version used to generate): data.DataControlName.VOInstanceName:


With the correct expression for the instance name, the custom method is invoked correctly without error:


Download sample application - ADFOperationBindingApp.zip.

Audit execution of package body related to TABLE

Tom Kyte - Sat, 2016-10-15 07:46
Hi Tom, a question about auditing. We turned on auditing of DML commands for table X1. How do we check if the package body X1_PKGB (contains views, triggers, packages, synonyms) related to the table is executed? I am interested in details - table (X1), package body, ...
Categories: DBA Blogs

Does DBMS_REFRESH.REFRESH do an atomic refresh of mviews?

Tom Kyte - Sat, 2016-10-15 07:46
Hello Tom, we have a materialized view, created with ... CREATE MATERIALIZED VIEW "mySchema"."myMView" ("Col1", "Col2", "Col3") ... REFRESH COMPLETE ON DEMAND START WITH sysdate+0 NEXT (sysdate +10/1440) ... which automatica...
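
For reference, the atomic_refresh parameter this question usually hinges on belongs to DBMS_MVIEW.REFRESH rather than DBMS_REFRESH.REFRESH (which takes only the refresh group name); a minimal sketch using the mview quoted in the question, with the choice of FALSE being purely illustrative:

begin
        -- complete refresh; with atomic_refresh => false Oracle may truncate
        -- and re-insert instead of doing delete + insert in one transaction
        dbms_mview.refresh(
                list           => '"mySchema"."myMView"',
                method         => 'C',
                atomic_refresh => false);
end;
/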
Categories: DBA Blogs
