Feed aggregator

Partner Webcast – Lightweight Application Deployment with Oracle Application Container Cloud Service

Modern enterprises realize that they need to continually innovate and transform, and in order to do so they need new software development competencies to carry through that transformation, unlike what they...

We share our skills to maximize your revenue!
Categories: DBA Blogs

Fixed Stats

Jonathan Lewis - Mon, 2016-10-17 06:43

There are quite a lot of systems around the world that aren’t using the AWR (Automatic Workload Repository) and ASH (Active Session History) tools to help them with troubleshooting because of the licensing requirement, so I’m still finding plenty of sites that are using Statspack. I recently came across a little oddity at one of these sites that I hadn’t noticed before: one of the Statspack snapshot statements was appearing fairly regularly in the Statspack report under the “SQL Ordered by Elapsed Time” section – even when the application had been rather busy and had generated lots of other work that was being reported. It was the following statement – the collection of file-level statistics:


select
       ts.name      tsname
     , df.name      filename
     , fs.phyrds
     , fs.phywrts
     , fs.readtim
     , fs.writetim
     , fs.singleblkrds
     , fs.phyblkrd
     , fs.phyblkwrt
     , fs.singleblkrdtim
     , fw.count     wait_count
     , fw.time      time
     , df.file#
  from x$kcbfwait   fw
     , v$filestat   fs
     , v$tablespace ts
     , v$datafile   df
 where ts.ts#    = df.ts#
   and fs.file#  = df.file#
   and fw.indx+1 = df.file#
;

The execution plan didn’t look very friendly, and the volume of I/O it generated (several hundred thousand disk reads) was surprising. The reason why the statement stood out so much in this case was that there was a fairly large number of files in the database (over 1,000) and the default execution plan was showing very bad cardinality estimates that resulted in highly inappropriate cartesian merge joins. At best the statement was taking around 2 minutes to run, at worst it was much, much worse.

This system was running 10g – also something which is still fairly common, though becoming much scarcer – which produced the following execution plan (which I’ve recreated on a much smaller system):

-------------------------------------------------------------------------------------------------
| Id  | Operation                     | Name            | Rows  | Bytes | Cost (%CPU)| Time     |
-------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT              |                 |     1 |   535 |     2 (100)| 00:00:01 |
|*  1 |  HASH JOIN                    |                 |     1 |   535 |     2 (100)| 00:00:01 |
|   2 |   MERGE JOIN CARTESIAN        |                 |     5 |  1190 |     0   (0)| 00:00:01 |
|   3 |    NESTED LOOPS               |                 |     1 |   199 |     0   (0)| 00:00:01 |
|   4 |     MERGE JOIN CARTESIAN      |                 |     1 |   173 |     0   (0)| 00:00:01 |
|*  5 |      FIXED TABLE FULL         | X$KCCTS         |     1 |    43 |     0   (0)| 00:00:01 |
|   6 |      BUFFER SORT              |                 |     1 |   130 |     0   (0)| 00:00:01 |
|*  7 |       FIXED TABLE FULL        | X$KCFIO         |     1 |   130 |     0   (0)| 00:00:01 |
|*  8 |     FIXED TABLE FIXED INDEX   | X$KCCFE (ind:1) |     1 |    26 |     0   (0)| 00:00:01 |
|   9 |    BUFFER SORT                |                 |   100 |  3900 |     0   (0)| 00:00:01 |
|  10 |     FIXED TABLE FULL          | X$KCBFWAIT      |   100 |  3900 |     0   (0)| 00:00:01 |
|* 11 |   VIEW                        | GV$DATAFILE     |     1 |   297 |     1 (100)| 00:00:01 |
|  12 |    SORT ORDER BY              |                 |     1 |   957 |     1 (100)| 00:00:01 |
|  13 |     NESTED LOOPS              |                 |     1 |   957 |     0   (0)| 00:00:01 |
|  14 |      NESTED LOOPS             |                 |     1 |   647 |     0   (0)| 00:00:01 |
|  15 |       NESTED LOOPS            |                 |     1 |   371 |     0   (0)| 00:00:01 |
|* 16 |        FIXED TABLE FULL       | X$KCCFN         |     1 |   323 |     0   (0)| 00:00:01 |
|* 17 |        FIXED TABLE FIXED INDEX| X$KCVFH (ind:1) |     1 |    48 |     0   (0)| 00:00:01 |
|* 18 |       FIXED TABLE FIXED INDEX | X$KCCFE (ind:1) |     1 |   276 |     0   (0)| 00:00:01 |
|* 19 |      FIXED TABLE FULL         | X$KCCFN         |     1 |   310 |     0   (0)| 00:00:01 |
-------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - access("TSTSN"="TS#" AND "K"."KCFIOFNO"="FILE#" AND "FILE#"="FW"."INDX"+1)
   5 - filter("TSTSN"<>(-1) AND "INST_ID"=USERENV('INSTANCE'))
   7 - filter("K"."INST_ID"=USERENV('INSTANCE'))
   8 - filter("F"."FEDUP"<>0 AND "F"."FENUM"="K"."KCFIOFNO")
  11 - filter("INST_ID"=USERENV('INSTANCE'))
  16 - filter("FN"."FNNAM" IS NOT NULL AND "FN"."FNTYP"=4 AND BITAND("FN"."FNFLG",4)<>4)
  17 - filter("FN"."FNFNO"="FH"."HXFIL")
  18 - filter("FE"."FEDUP"<>0 AND "FN"."FNFNO"="FE"."FENUM" AND
              "FE"."FEFNH"="FN"."FNNUM" AND "FE"."FETSN"<>(-1))
  19 - filter("FE"."FEPAX"<>65535 AND "FE"."FEPAX"<>0 AND "FE"."FEPAX"="FNAUX"."FNNUM"
              OR ("FE"."FEPAX"=0 OR "FE"."FEPAX"=65535) AND "FE"."FENUM"="FNAUX"."FNFNO" AND
              "FNAUX"."FNTYP"=4 AND "FNAUX"."FNNAM" IS NOT NULL AND BITAND("FNAUX"."FNFLG",4)<>4 AND
              "FE"."FEFNH"="FNAUX"."FNNUM")

Note particularly the two Cartesian merge joins and the very late filter at operation 1.

Note also the number of times the cardinality estimate is 1 – always a bit of a threat when the query gets complicated: “anything goes following a one for Rows”.

The easy (first thought) solution was simply to gather stats on all the fixed objects in this query:


begin
        dbms_stats.gather_table_stats('sys','x$kcbfwait',method_opt=>'for all columns size 1');
        dbms_stats.gather_table_stats('sys','x$kccfe',   method_opt=>'for all columns size 1');
        dbms_stats.gather_table_stats('sys','x$kccfn',   method_opt=>'for all columns size 1');
        dbms_stats.gather_table_stats('sys','x$kccts',   method_opt=>'for all columns size 1');
        dbms_stats.gather_table_stats('sys','x$kcfio',   method_opt=>'for all columns size 1');
        dbms_stats.gather_table_stats('sys','x$kcvfh',   method_opt=>'for all columns size 1');
end;
/

The option to gather fixed objects stats individually with a call to dbms_stats.gather_table_stats() is not commonly known, but it does work.
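For completeness, dbms_stats also provides a single call that gathers statistics for all fixed objects in one go (not what was done at this site, which targeted only the six X$ structures above); a minimal sketch, to be run by SYS or a suitably privileged user, ideally while the system is under representative load:


begin
        dbms_stats.gather_fixed_objects_stats;
end;
/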

Here’s the plan (again from the small system) after stats collection:

--------------------------------------------------------------------------------------------------
| Id  | Operation                      | Name            | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT               |                 |    29 | 10411 |     3 (100)| 00:00:01 |
|*  1 |  HASH JOIN                     |                 |    29 | 10411 |     3 (100)| 00:00:01 |
|*  2 |   HASH JOIN                    |                 |     1 |   350 |     2 (100)| 00:00:01 |
|   3 |    MERGE JOIN CARTESIAN        |                 |    25 |  1325 |     0   (0)| 00:00:01 |
|   4 |     NESTED LOOPS               |                 |     4 |   148 |     0   (0)| 00:00:01 |
|*  5 |      FIXED TABLE FULL          | X$KCFIO         |   200 |  6200 |     0   (0)| 00:00:01 |
|*  6 |      FIXED TABLE FIXED INDEX   | X$KCCFE (ind:1) |     1 |     6 |     0   (0)| 00:00:01 |
|   7 |     BUFFER SORT                |                 |     7 |   112 |     0   (0)| 00:00:01 |
|*  8 |      FIXED TABLE FULL          | X$KCCTS         |     7 |   112 |     0   (0)| 00:00:01 |
|*  9 |    VIEW                        | GV$DATAFILE     |     1 |   297 |     1 (100)| 00:00:01 |
|  10 |     SORT ORDER BY              |                 |     1 |   316 |     1 (100)| 00:00:01 |
|  11 |      NESTED LOOPS              |                 |     1 |   316 |     0   (0)| 00:00:01 |
|  12 |       NESTED LOOPS             |                 |     1 |   248 |     0   (0)| 00:00:01 |
|  13 |        NESTED LOOPS            |                 |     1 |   226 |     0   (0)| 00:00:01 |
|* 14 |         FIXED TABLE FULL       | X$KCCFE         |     4 |   612 |     0   (0)| 00:00:01 |
|* 15 |         FIXED TABLE FIXED INDEX| X$KCCFN (ind:1) |     1 |    73 |     0   (0)| 00:00:01 |
|* 16 |        FIXED TABLE FIXED INDEX | X$KCVFH (ind:1) |     1 |    22 |     0   (0)| 00:00:01 |
|* 17 |       FIXED TABLE FULL         | X$KCCFN         |     1 |    68 |     0   (0)| 00:00:01 |
|  18 |   FIXED TABLE FULL             | X$KCBFWAIT      |   400 |  3600 |     0   (0)| 00:00:01 |
--------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   1 - access("FILE#"="FW"."INDX"+1)
   2 - access("TSTSN"="TS#" AND "K"."KCFIOFNO"="FILE#")
   5 - filter("K"."INST_ID"=USERENV('INSTANCE'))
   6 - filter("F"."FEDUP"<>0 AND "F"."FENUM"="K"."KCFIOFNO")
   8 - filter("TSTSN"<>(-1) AND "INST_ID"=USERENV('INSTANCE'))
   9 - filter("INST_ID"=USERENV('INSTANCE'))
  14 - filter("FE"."FEDUP"<>0 AND "FE"."FETSN"<>(-1))
  15 - filter("FN"."FNTYP"=4 AND "FN"."FNNAM" IS NOT NULL AND BITAND("FN"."FNFLG",4)<>4
              AND "FN"."FNFNO"="FE"."FENUM" AND "FE"."FEFNH"="FN"."FNNUM")
  16 - filter("FN"."FNFNO"="FH"."HXFIL")
  17 - filter("FE"."FEPAX"<>65535 AND "FE"."FEPAX"<>0 AND "FE"."FEPAX"="FNAUX"."FNNUM" OR
              ("FE"."FEPAX"=0 OR "FE"."FEPAX"=65535) AND "FE"."FENUM"="FNAUX"."FNFNO" AND
              "FNAUX"."FNTYP"=4 AND "FNAUX"."FNNAM" IS NOT NULL AND BITAND("FNAUX"."FNFLG",4)<>4 AND
              "FE"."FEFNH"="FNAUX"."FNNUM")

Note the changes in cardinality estimates: they now look a little more realistic, and we’re down to one cartesian merge join which (if you have a rough idea of what your X$ tables hold) still looks a little surprising at first sight but is not completely unreasonable. A change of plan doesn’t necessarily mean much without the data and time behind it, of course, so here are the two sets of results from a 10g database with a handful of datafiles and tablespaces, showing the Row Source Operation sections from the tkprof output before and after stats collection:

Before stats collection:

Rows     Row Source Operation
-------  ---------------------------------------------------
      6  HASH JOIN  (cr=0 pr=0 pw=0 time=1957860 us)
  16800   MERGE JOIN CARTESIAN (cr=0 pr=0 pw=0 time=33855662 us)
     42    NESTED LOOPS  (cr=0 pr=0 pw=0 time=73795 us)
   1400     MERGE JOIN CARTESIAN (cr=0 pr=0 pw=0 time=21555 us)
      7      FIXED TABLE FULL X$KCCTS (cr=0 pr=0 pw=0 time=3204 us)
   1400      BUFFER SORT (cr=0 pr=0 pw=0 time=7233 us)
    200       FIXED TABLE FULL X$KCFIO (cr=0 pr=0 pw=0 time=1210 us)
     42     FIXED TABLE FIXED INDEX X$KCCFE (ind:1) (cr=0 pr=0 pw=0 time=1859092 us)
  16800    BUFFER SORT (cr=0 pr=0 pw=0 time=67643 us)
    400     FIXED TABLE FULL X$KCBFWAIT (cr=0 pr=0 pw=0 time=2008 us)
      6   VIEW  GV$DATAFILE (cr=0 pr=0 pw=0 time=68087 us)
      6    SORT ORDER BY (cr=0 pr=0 pw=0 time=68065 us)
      6     NESTED LOOPS  (cr=0 pr=0 pw=0 time=65989 us)
      6      NESTED LOOPS  (cr=0 pr=0 pw=0 time=56632 us)
      6       NESTED LOOPS  (cr=0 pr=0 pw=0 time=47217 us)
      6        FIXED TABLE FULL X$KCCFN (cr=0 pr=0 pw=0 time=19830 us)
      6        FIXED TABLE FIXED INDEX X$KCVFH (ind:1) (cr=0 pr=0 pw=0 time=25568 us)
      6       FIXED TABLE FIXED INDEX X$KCCFE (ind:1) (cr=0 pr=0 pw=0 time=9849 us)
      6      FIXED TABLE FULL X$KCCFN (cr=0 pr=0 pw=0 time=9715 us)

After stats collection:


Rows     Row Source Operation
-------  ---------------------------------------------------
      6  HASH JOIN  (cr=0 pr=0 pw=0 time=196576 us)
      6   HASH JOIN  (cr=0 pr=0 pw=0 time=195829 us)
     42    MERGE JOIN CARTESIAN (cr=0 pr=0 pw=0 time=4390 us)
      6     NESTED LOOPS  (cr=0 pr=0 pw=0 time=7810 us)
    200      FIXED TABLE FULL X$KCFIO (cr=0 pr=0 pw=0 time=1224 us)
      6      FIXED TABLE FIXED INDEX X$KCCFE (ind:1) (cr=0 pr=0 pw=0 time=150150 us)
     42     BUFFER SORT (cr=0 pr=0 pw=0 time=1574 us)
      7      FIXED TABLE FULL X$KCCTS (cr=0 pr=0 pw=0 time=1353 us)
      6    VIEW  GV$DATAFILE (cr=0 pr=0 pw=0 time=41058 us)
      6     SORT ORDER BY (cr=0 pr=0 pw=0 time=41005 us)
      6      NESTED LOOPS  (cr=0 pr=0 pw=0 time=39399 us)
      6       NESTED LOOPS  (cr=0 pr=0 pw=0 time=34229 us)
      6        NESTED LOOPS  (cr=0 pr=0 pw=0 time=15583 us)
      6         FIXED TABLE FULL X$KCCFE (cr=0 pr=0 pw=0 time=1124 us)
      6         FIXED TABLE FIXED INDEX X$KCCFN (ind:1) (cr=0 pr=0 pw=0 time=15067 us)
      6        FIXED TABLE FIXED INDEX X$KCVFH (ind:1) (cr=0 pr=0 pw=0 time=18971 us)
      6       FIXED TABLE FULL X$KCCFN (cr=0 pr=0 pw=0 time=5581 us)
    400   FIXED TABLE FULL X$KCBFWAIT (cr=0 pr=0 pw=0 time=1615 us)

The execution time has dropped from about 2 seconds to less than 1/10th of a second – and all I’ve got is 6 or 7 files and tablespaces.  (Some of the “time=” values in the first plan are very odd, but the final time figure is about right.)

Generating an intermediate result set of 16,800 rows for a tiny number of files is not a good thing – just imagine how big that number would get with 1,000 files and a couple of hundred tablespaces.

I have to say that (for a couple of tiny databases) the 11.2.0.4 and 12.1.0.2 systems I checked this query on had no problem and immediately picked a sensible path. It’s possible that the definition of some of the v$ objects has actually changed or that the optimizer features have changed (some assistance from complex view merging, perhaps) – but if you are still running Statspack, even if it’s on 11g or 12c, then it’s worth checking from time to time how much work goes into executing the snapshot and seeing if you need some fixed object stats to make things a little more efficient.

Footnote:

Several years ago I wrote a short note about how Statspack actually captured its own execution time (from 10g onwards) and how you could run a report on it to check the run time. It’s worth running that report from time to time. I’ve recently updated that note to add the equivalent query against the AWR.
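As a rough sketch of the AWR equivalent (not the exact query from that note), the flush_elapsed column of dba_hist_snapshot records how long each snapshot took to capture, so something along these lines will show you the recent history:


select
        snap_id
      , begin_interval_time
      , end_interval_time
      , flush_elapsed
from
        dba_hist_snapshot
order by
        snap_id
;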


Configuring Reverse Proxies and DMZs for EBS 12.2

Steven Chan - Mon, 2016-10-17 02:06

You may have end-users outside of your organization's firewall who need access to E-Business Suite.  One way of doing that is to set up a reverse proxy server and a series of network segments separated by firewalls. 

EBS DMZ architecture

The outermost network segment that lies between the internet and an organization's intranet is often called a Demilitarized Zone (DMZ).  DMZs are enforced by firewalls and other networking security devices.

Setting up a DMZ

Instructions for deploying EBS 12.2 in a DMZ-based architecture are published in My Oracle Support Note 1375670.1, Oracle E-Business Suite Release 12.2 Configuration in a DMZ.

Externally-facing EBS products

A subset of EBS products can be deployed for external use, including iSupplier, iRecruitment, iSupport, and others.  Many of these products have special rules that must be enabled in the URL Firewall to work properly in external deployments.  For a complete list of E-Business Suite products certified for external use, see Section 6 in Note 1375670.1.


Categories: APPS Blogs

Documentum story – Documentum installers fail with various errors

Yann Neuhaus - Mon, 2016-10-17 02:00

Some months ago, when installing/removing/upgrading several Documentum components, we ended up facing a strange issue (yes I know, another one!). We were able to see these specific errors during the installation or removal of a Docbase, during the installation of a patch for the Content Server, the installation of the Thumbnail Server, aso… The errors we faced changed from one installer to another but, in the end, all of them were linked to the same root cause. The only error that wasn’t completely useless was the one faced during the installation of a new docbase: “Content is not allowed in trailing section”. I know this might not be really meaningful for everybody, but this kind of error usually appears when an XML file isn’t formatted properly: some content isn’t allowed at this location in the file…

 

The strange thing is that these installers were working fine a few days before, so what exactly changed in the meantime? After some research and analysis, I finally found the culprit! One thing that had been added in those few days was D2, which had been installed a few hours before the first error. Now what can be the link between D2 and these errors when running some installers? The first thing to do when there is an issue with D2 on the Content Server is to check the Java Method Server. The first time I saw this error, it was during the installation of a new docbase. As said before, I checked the logs of the Java Method Server and I found the following WARNING which confirmed what I suspected:

2015-10-24 09:39:59,948 UTC WARNING [javax.enterprise.resource.webcontainer.jsf.config] (MSC service thread 1-3) JSF1078: Unable to process deployment descriptor for context ''{0}''.: org.xml.sax.SAXParseException; lineNumber: 40; columnNumber: 1; Content is not allowed in trailing section.
        at org.apache.xerces.util.ErrorHandlerWrapper.createSAXParseException(ErrorHandlerWrapper.java:196) [xercesImpl-2.9.1-jbossas-1.jar:]
        at org.apache.xerces.util.ErrorHandlerWrapper.fatalError(ErrorHandlerWrapper.java:175) [xercesImpl-2.9.1-jbossas-1.jar:]
        at org.apache.xerces.impl.XMLErrorReporter.reportError(XMLErrorReporter.java:394) [xercesImpl-2.9.1-jbossas-1.jar:]
        at org.apache.xerces.impl.XMLErrorReporter.reportError(XMLErrorReporter.java:322) [xercesImpl-2.9.1-jbossas-1.jar:]
        at org.apache.xerces.impl.XMLErrorReporter.reportError(XMLErrorReporter.java:281) [xercesImpl-2.9.1-jbossas-1.jar:]
        at org.apache.xerces.impl.XMLScanner.reportFatalError(XMLScanner.java:1459) [xercesImpl-2.9.1-jbossas-1.jar:]
        at org.apache.xerces.impl.XMLDocumentScannerImpl$TrailingMiscDispatcher.dispatch(XMLDocumentScannerImpl.java:1302) [xercesImpl-2.9.1-jbossas-1.jar:]
        at org.apache.xerces.impl.XMLDocumentFragmentScannerImpl.scanDocument(XMLDocumentFragmentScannerImpl.java:324) [xercesImpl-2.9.1-jbossas-1.jar:]
        at org.apache.xerces.parsers.XML11Configuration.parse(XML11Configuration.java:845) [xercesImpl-2.9.1-jbossas-1.jar:]
        at org.apache.xerces.parsers.XML11Configuration.parse(XML11Configuration.java:768) [xercesImpl-2.9.1-jbossas-1.jar:]
        at org.apache.xerces.parsers.XMLParser.parse(XMLParser.java:108) [xercesImpl-2.9.1-jbossas-1.jar:]
        at org.apache.xerces.parsers.AbstractSAXParser.parse(AbstractSAXParser.java:1196) [xercesImpl-2.9.1-jbossas-1.jar:]
        at org.apache.xerces.jaxp.SAXParserImpl$JAXPSAXParser.parse(SAXParserImpl.java:555) [xercesImpl-2.9.1-jbossas-1.jar:]
        at org.apache.xerces.jaxp.SAXParserImpl.parse(SAXParserImpl.java:289) [xercesImpl-2.9.1-jbossas-1.jar:]
        at javax.xml.parsers.SAXParser.parse(SAXParser.java:195) [rt.jar:1.7.0_72]
        at com.sun.faces.config.ConfigureListener$WebXmlProcessor.scanForFacesServlet(ConfigureListener.java:815) [jsf-impl-2.1.7-jbossorg-2.jar:]
        at com.sun.faces.config.ConfigureListener$WebXmlProcessor.<init>(ConfigureListener.java:768) [jsf-impl-2.1.7-jbossorg-2.jar:]
        at com.sun.faces.config.ConfigureListener.contextInitialized(ConfigureListener.java:178) [jsf-impl-2.1.7-jbossorg-2.jar:]
        at org.apache.catalina.core.StandardContext.contextListenerStart(StandardContext.java:3392) [jbossweb-7.0.13.Final.jar:]
        at org.apache.catalina.core.StandardContext.start(StandardContext.java:3850) [jbossweb-7.0.13.Final.jar:]
        at org.jboss.as.web.deployment.WebDeploymentService.start(WebDeploymentService.java:90) [jboss-as-web-7.1.1.Final.jar:7.1.1.Final]
        at org.jboss.msc.service.ServiceControllerImpl$StartTask.startService(ServiceControllerImpl.java:1811)
        at org.jboss.msc.service.ServiceControllerImpl$StartTask.run(ServiceControllerImpl.java:1746)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) [rt.jar:1.7.0_72]
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) [rt.jar:1.7.0_72]
        at java.lang.Thread.run(Thread.java:745) [rt.jar:1.7.0_72]

 

So the error “Content is not allowed in trailing section” comes from the JMS, which isn’t able to properly read the first character of line 40 of some XML “deployment descriptor”. So which file is that? That’s where the fun begins! There are several deployment descriptors in JBoss like web.xml, jboss-app.xml, jboss-deployment-structure.xml, jboss-web.xml, aso…

 

The D2 installer updates some configuration files like the server.ini. This is a text file, pretty simple to update, and indeed the file was properly formatted, so no issue on this side. Apart from this file, the D2 installer mainly updates XML files like the following ones:
  • $DOCUMENTUM_SHARED/jboss7.1.1/server/DctmServer_MethodServer/deployments/ServerApps.ear/META-INF/jboss-deployment-structure.xml
  • $DOCUMENTUM_SHARED/jboss7.1.1/server/DctmServer_MethodServer/deployments/ServerApps.ear/DmMethods.war/WEB-INF/web.xml
  • $DOCUMENTUM_SHARED/jboss7.1.1/server/DctmServer_MethodServer/deployments/bpm.ear/META-INF/jboss-deployment-structure.xml
  • $DOCUMENTUM_SHARED/jboss7.1.1/modules/emc/d2/lockbox/main/module.xml
  • aso…

 

At this point, it was pretty simple to figure out the issue: I just checked all these files until I found the wrongly updated/corrupted XML file. And the winner was… the file web.xml for the DmMethods servlet inside ServerApps. The D2 installer normally just updates/reads this file but, in the process of doing so, it also corrupts it… It is not a big corruption, but it is still enough to prevent some installers from working properly and to produce the error shown above in the Java Method Server. Basically, whenever you get some parsing errors, I would suggest taking a look at the web.xml files across the JMS. In our case the D2 Installer added the word “ap” at the end of this file. As you know, an XML file must be properly formatted to be readable and “ap” isn’t a correct XML ending tag:

[dmadmin@content_server_01 ~]$ cat $DOCUMENTUM_SHARED/jboss7.1.1/server/DctmServer_MethodServer/deployments/ServerApps.ear/DmMethods.war/WEB-INF/web.xml
<?xml version="1.0" encoding="UTF-8"?>
<web-app>
    <display-name>Documentum Method Invocation Servlet</display-name>
    <description>This servlet is for Java method invocation using the DO_METHOD apply call.</description>
    <servlet>
        <servlet-name>DoMethod</servlet-name>
        <description>Documentum Method Invocation Servlet</description>
        <servlet-class>com.documentum.mthdservlet.DoMethod</servlet-class>
        <init-param>
            <param-name>trace</param-name>
            <param-value>f</param-value>
        </init-param>
        <init-param>
            <param-name>docbase_install_owner_name</param-name>
            <param-value>dmadmin</param-value>
        </init-param>
        <init-param>
            <param-name>methodlocation-1</param-name>
            <param-value>$DOCUMENTUM/dba/java_methods</param-value>
        </init-param>
        <init-param>
            <param-name>docbase-GLOBAL_REGISTRY</param-name>
            <param-value>GLOBAL_REGISTRY</param-value>
        </init-param>
        <init-param>
            <param-name>docbase-DOCBASE1</param-name>
            <param-value>DOCBASE1</param-value>
        </init-param>
        <load-on-startup>1</load-on-startup>
    </servlet>
    <servlet-mapping>
        <servlet-name>DoMethod</servlet-name>
        <url-pattern>/servlet/DoMethod</url-pattern>
    </servlet-mapping>
</web-app>
ap
[dmadmin@content_server_01 ~]$

 

So to correct this issue, you just have to remove the word “ap” from the end of this file, restart the JMS and finally restart the failed installer, and the issue should be gone. That’s pretty simple, but it is still annoying that installers provided by EMC can cause such trouble on their own products.
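As a side note, rather than eyeballing every descriptor one by one, a quick well-formedness check over the whole deployments tree can point you straight at the corrupted file. This is only a sketch and assumes xmllint is available on the Content Server:

[dmadmin@content_server_01 ~]$ cd $DOCUMENTUM_SHARED/jboss7.1.1/server/DctmServer_MethodServer/deployments
[dmadmin@content_server_01 deployments]$ find . -name "*.xml" -exec sh -c 'xmllint --noout "$1" 2>/dev/null || echo "Malformed XML: $1"' _ {} \;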

 

The errors mentioned above are related to these XML files being wrongly updated by the D2 installer, but that’s actually not the only installer that wrongly updates XML files. As far as I remember, the BPM installer and the Thumbnail Server installer can also produce the exact same issue, and the reason behind that is probably that the XML files of the Java Method Server on Linux boxes have a wrong FileFormat… We faced this issue with all versions that we installed so far on our different environments: CS 7.2 P02, P05, P16… Each and every time we install a new Documentum Content Server, all XML files of the JMS are using the DOS FileFormat and this prevents the D2/Thumbnail/BPM installers from doing their job.

 

As a sub-note, I have also seen some issues with the file “jboss-deployment-structure.xml”. Just like the “web.xml” above, this one is also present for all applications deployed under the Java Method Server. Some installers will try to update this file (including D2, in order to configure the Lockbox in it) but again the same issue happens, mostly because of the wrong FileFormat: I have already seen the whole content of this file simply being wiped out by a Documentum installer… So before doing anything, I would suggest taking a backup of the JMS as soon as it is installed and running, and before installing all additional components like D2, bpm, Thumbnail Server, aso… On Linux, it is pretty easy to see and change the FileFormat of a file. Just open it using “vi” for example and then type “:set ff?”. This will display the current FileFormat and you can then change it, if needed, using “:set ff=unix”.
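To check (and fix) the FileFormat in bulk instead of file by file, something along these lines can also be used. This is just a sketch: it assumes the file and dos2unix utilities are installed, and it should only be run after taking the backup mentioned above:

[dmadmin@content_server_01 ~]$ find $DOCUMENTUM_SHARED/jboss7.1.1/server/DctmServer_MethodServer -name "*.xml" -exec file {} \; | grep "CRLF"
[dmadmin@content_server_01 ~]$ find $DOCUMENTUM_SHARED/jboss7.1.1/server/DctmServer_MethodServer -name "*.xml" -exec file {} \; | grep "CRLF" | cut -d: -f1 | xargs -r dos2unix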

 

I don’t remember seeing this kind of behaviour before CS 7.2, so maybe it is just linked to this specific version… If you have already seen such a thing in a previous version, don’t hesitate to share!

 

The post Documentum story – Documentum installers fail with various errors appeared first on Blog dbi services.

Export to CSV using UTL_FILE

Tom Kyte - Sun, 2016-10-16 20:26
Hi, I have gone through the code on link https://asktom.oracle.com/pls/asktom/f?p=100:11:0::NO::P11_QUESTION_ID:88212348059 I am getting ORA-29283: invalid file operation. I need to know how to resolve this issue. I have checked the following...
Categories: DBA Blogs

Can you recover a column marked as UNUSED?

Tom Kyte - Sun, 2016-10-16 20:26
We have a situation where somebody marked one too many columns as UNUSED by mistake. We do not intend to drop this additional column. I was searching around for a solution to this but I have not found one yet... The question is simple: is there a way...
Categories: DBA Blogs

Attribute Clustering/Zone Maps with Hash partitioning

Tom Kyte - Sun, 2016-10-16 20:26
How do Attribute Clustering/Zone work with hash partitioned tables?
Categories: DBA Blogs

Enterprise manager

Tom Kyte - Sun, 2016-10-16 20:26
Tom, I know I could use http://localhost:5500/em to get the Enterprise Manager. But if I have several instances on the same server, it only lets me connect to one database this way. How could I connect to every instance using tns-strin...
Categories: DBA Blogs

Oracle Database Cloud (DBaaS) Performance Consistency - Part 5

Randolf Geist - Sun, 2016-10-16 17:17
This is the fifth part of this series, and before coming to comparisons to other cloud providers, in this part I show the results of the read-only test that I've already described in part three of this series, but repeated at a later point in time. The test setup was identical and can be checked in the mentioned previous part.

The reason for running the test again was the fact that I was informed during the first test run that the zone that my Oracle Cloud domain was assigned to was temporarily overloaded, which I also noticed since I wasn't able to create new services for some time.

Hence I decided to repeat the tests after it was confirmed that the issue got resolved.

So here is a comparison of the second test run results compared to the first test run - the corresponding results for the dedicated physical host can be found in part three.

Overall consistency second run:


Overall consistency first run:


Again the graph is supposed to show how many of the runs deviated how much from the overall median runtime. The second run shows a much improved overall consistency, much closer to that of the dedicated physical host, although a few outliers are more extreme than in the first run.

The same data on per day basis, second run:


First run:


Except for the first two days the second run shows a pretty consistent behaviour per day, and it becomes obvious that it's the second day of the second run that is responsible for the extreme outliers and also the significant inconsistency in the area between 8 and 12 percent deviation.

Finally the individual thread performance, second run:


First run:


The individual thread performance graph confirms the much improved consistency compared to the first run.

Why I Moved from CompleteITProfessional to DatabaseStar

Complete IT Professional - Sun, 2016-10-16 13:26
Complete IT Professional has been renamed and rebranded to Database Star. I’ll explain what that means and why I made this change in this article. What Was The Change? You’re probably reading this on the new DatabaseStar.com website, wondering why you’re here instead of on CompleteITProfessional.com. Well, the reason is that I have decided to […]
Categories: Development

Oracle Data Integrator 12c: Getting Started - Components and Architecture

Rittman Mead Consulting - Sat, 2016-10-15 12:19

I’ve decided that it’s time for a refresher on Oracle Data Integrator 12c. I’m writing a Getting Started series to help folks get interested in the product and maybe even teach a few old dogs (including myself) some new tricks. In my last post, I shared the history of ODI and a bit about what sets it apart from other ETL tools on the market. In this article, I’ll walk through different components of Oracle Data Integrator and some of the architecture choices you’ll need to make in order to get started with ODI 12c.

Components

Before diving into the architecture, we need to understand the different components that are part of the Oracle Data Integrator installation.

Repositories

ODI is driven by metadata. This metadata is stored away in two different repositories: the Master repository and the Work repository. The Master repository contains information about security (users, profiles, etc), topology (data connections, contexts, physical/logical schemas), and ODI versioning. Each Master repository can be linked to one or more Work repositories. Work repositories can be of 2 different types: development or execution. In a Development Work repository you’ll find all of the design objects (mappings, packages, procedures, etc) and datastore metadata. The Execution Work repository only stores the execution objects, Scenarios and Load Plans, and there is no development capability. More on all of these objects in a later post.

ODI Repositories

The Master and Work repositories can reside in the same database schema or as their own schemas in the same database instance. The latter practice was more common in the past, before the Repository Creation Utility (RCU) became the main mechanism for creating the repositories. The RCU doesn’t provide an option for separating the repositories into two different schemas, therefore the standard is to use one single schema. But that’s not the only reason; it also stems from the best practice of separating your environments in their entirety (development, test, production) so each can be maintained, upgraded, and patched separately. We’ll jump into the environment setup further down.

Agents

The Oracle Data Integrator Agent is what orchestrates the execution of processes created in ODI. At runtime, agents will be used to run Load Plans and Scenarios via an ODI schedule, command line call, web service call, or a third-party scheduler. Agents are accessed via http/https requests, regardless of how they are called into action.

There are 3 types of ODI 12c agents:

  • JEE Agent
    Implemented as a deployment in Weblogic Server 12c, the JEE Agent allows you to use the features of WLS, such as clustering for high availability and JDBC connection pooling.
  • Standalone Agent
    The Standalone Agent is a lightweight Java application that is typically installed to run closest to where most of the transformations will occur. In most data warehouse setups, this is on the data warehouse server.
  • Colocated Agent
    This type of agent is essentially a Standalone Agent that is managed via Weblogic Server. If you want to manage all of your agents via WLS, this is the way to go.

A great article from the ODI A-Team, ODI Agents: Standalone, JEE and Colocated, describes the agent types in further detail, including the comparison of agent features chart, found below.

ODI 12c Agent Comparison

Studio

ODI Studio is a Java based development environment based on the JDeveloper framework. Studio is installed on client machines and used to connect to the master and work repositories to access the ODI metadata and perform object development. Essentially, this is where the magic happens!

Architecture - It Depends.

With any good question comes the answer, “it depends”. Before we can choose an architecture for ODI, the system requirements must be determined, allowing us to work through the “it depends” answer more clearly. Let’s dive right in with some potential requirements that may be necessary for a proper data integration setup.

High Availability

A key decision that drives which components of Oracle Data Integrator will be installed and configured stems from the need for a highly available ETL process. If there is a critical process or reporting that relies on ODI, then HA will be a requirement. Not only that, but you’ll want to look at using something like Oracle RAC for the repository database in order to keep it up and running. Finally, high availability won’t save you from an entire data center going offline, so ensure you have a disaster recovery process in place as well.

Environments Required

How many environments do you need? Let’s start with the minimum, Development, Test/QA, and Production. Ok, well if you’re a small shop you might be able to get away without Test/QA, but not recommended. I would also add a 4th environment, Hotfix, which will store the production development objects, allowing your team to fix a production issue quickly without having to restore code from source control. The purpose of understanding the number of environments upfront is to determine how many application servers and database servers will be required for the entire Oracle Data Integrator setup. There’s yet another great article from the Oracle A-Team that describes the use of the ODI Master Repository across these many environments.

ODI Environments

Lifecycle Management and Deployment Process

As you can see in the environments image above, there are also different arrows showing the deployment process and use of source control. The deployment process is usually the easy part to determine: Migrate ODI execution objects from Dev—>Test—>Prod. But the mechanism for doing so might be a bit different, especially if (or more likely, when) source control is introduced.

Oracle Data Integrator 12c can integrate with Subversion, and soon Git, for full lifecycle management capabilities. ODI also has its own object versioning, but it really is only to be used as a last resort. Often, teams have developed their own process around exporting objects to XML, loading into a source control system, and migrating to the next environment. Whichever process you determine is best for your organization, or if you plan to piggy-back on what’s currently in play for developers at your company, you’ll want to ensure the correct components are introduced into the architecture.

Sources and Targets

This is about the types and location of the data sources that ODI will need to connect to. If you have a set of flat files on a server that is unreachable from the machine where the ODI agent is installed, you’ll need a new Standalone agent placed somewhere that can pull from the file server. Drawing up the entire “planned” data flow will help to sort out these decisions early on, especially if you introduce big data into the mix.

Security

Finally, everyone’s favorite topic: security. There are many aspects to security within ODI, including how developers access ODI Studio and how to secure your ETL processes and the application itself. As mentioned earlier, the ODI Agents are called via a web request. The addition of SSL can further secure transmission of these requests, but may also introduce additional setup. If you have a large team of ETL developers, or maybe just a company policy on how applications are to be accessed, ODI can be integrated with your organization’s LDAP via the external authentication setup. With these and other considerations for ODI security, be sure to sort this out during the requirements and architecture phase.

There are many other questions that will need to be answered in order to properly choose your architecture, but hopefully this will get you started. As always, you can join one of the Rittman Mead ODI bootcamps to learn more from one of our experts on the product. Up next in the Getting Started series, we’ll look at Oracle Data Integrator installation and configuration.

Categories: BI & Warehousing

JDeveloper 12.2.1.1 Bug and Workaround - Wrong Instance Name for Method Action Binding

Andrejus Baranovski - Sat, 2016-10-15 10:34
After upgrading to JDeveloper 12.2.1.1, I have noticed an issue related to the Method Action binding instance name. This is not an ADF 12.2.1.1 bug, but a JDeveloper bug: JDeveloper sets an incorrect name for the Method Action binding instance name.

If you are going to create a custom method in a VO/AM and expose it through the interface to be called in the bindings layer, there will be an error similar to the one below at runtime:


Source of this error is in the method binding definition:


Go to the page definition source view and you will find the instance name highlighted with a warning. JDeveloper is able to recognize the invalid expression, but it still generates it. ADF 12.2.1.1 runtime quality is improved, but I can't say the same about the JDeveloper IDE. Oracle's focus is on Cloud, but there is still a lot to do to improve development tools quality:


The expression must be replaced manually with the correct one (the one the previous JDeveloper version used to generate) - data.DataControlName.VOInstanceName:
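Purely for illustration (the ids and names below are made up, they are not taken from the sample application), a corrected methodAction binding in the page definition would then look something like this:

<methodAction id="myCustomMethod" RequiresUpdateModel="true" Action="invokeMethod"
              MethodName="myCustomMethod" IsViewObjectMethod="true"
              DataControl="AppModuleDataControl"
              InstanceName="data.AppModuleDataControl.EmployeesView1"/>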


With the correct expression for the instance name, the custom method is invoked correctly without error:


Download sample application - ADFOperationBindingApp.zip.

Audit execution of package body related to TABLE

Tom Kyte - Sat, 2016-10-15 07:46
Hi Tom, a question about auditing. We turned on auditing of DML commands for table X1. How do we check if the package body X1_PKGB (which contains views, triggers, packages, synonyms) related to the table is executed? I am interested in details - table (X1), package body, ...
Categories: DBA Blogs

Does DBMS_REFRESH.REFRESH do an atomic refresh of mviews?

Tom Kyte - Sat, 2016-10-15 07:46
Hello Tom, we have a materialized view, created with <code>... CREATE MATERIALIZED VIEW "mySchema"."myMView" ("Col1", "Col2", "Col3") ... REFRESH COMPLETE ON DEMAND START WITH sysdate+0 NEXT (sysdate +10/1440) ... </code> which automatica...
Categories: DBA Blogs

Single Tablespace with multiple datafiles or Multiple Tablespaces with Multiple datafiles for 80 TB/ 12c Data Warehouse Database

Tom Kyte - Sat, 2016-10-15 07:46
Dear Tom, Looking at today's machine's POWER, it looks useless to spend time on Tablespace planning. Just look at Oracle Apps OATM, single tablespace and everything is dumped into it. Saying that, My question is 1. We are on <b>ASM, 12c Datab...
Categories: DBA Blogs

tool for db monitoring

Tom Kyte - Sat, 2016-10-15 07:46
Hi Team, I would like to know what your review comments are about the tool "edb360" https://carlos-sierra.net/2014/02/19/meet-edb360-a-free-tool-that-provides-a-360-degree-view-of-an-oracle-database/ personally I am not inclined to use this. ...
Categories: DBA Blogs

Migrating from MS SQL to Oracle

Tom Kyte - Sat, 2016-10-15 07:46
Hi, I'm currently working as a .Net developer using MS SQL as the back end. I'm interested in learning Oracle; which book should I prefer? I need to learn Oracle from its base.. Please suggest how to start?
Categories: DBA Blogs

RMAN BASIC compress Algorithm with 11.2.0.4 and 12.1.0.2 Standard Edition.

Tom Kyte - Sat, 2016-10-15 07:46
Hello, I want to know if the BASIC compression algorithm can be used in RMAN backups with 11.2.0.4 and 12.1.0.2 Standard Edition. Now I'm using 11g but soon I will upgrade to 12c, so I need to know for both versions. I've checked the documentation and is...
Categories: DBA Blogs

Documentum story – Monitoring of WebLogic Servers

Yann Neuhaus - Sat, 2016-10-15 02:00

As you already know if you are following our Documentum Story, we have been building and managing, for some time now, a huge Documentum Platform with more than 115 servers so far (and still growing). To be able to manage this platform properly, we need an efficient monitoring tool. In this blog, I will not talk about Documentum but rather a little bit about the monitoring solution we integrated with Nagios to be able to support all of our WebLogic Servers. For those of you who don’t know, Nagios is a very popular Open Source monitoring tool launched in 1999. By default Nagios doesn’t provide any interface to monitor WebLogic or Documentum and therefore we chose to build our own script package to be able to properly monitor our Platform.

 

At the beginning of the project, when we were installing the first WebLogic Servers, we used the monitoring scripts coming from the old Platform (a Documentum 6.7 Platform not managed by us). The idea behind these monitoring scripts was the following:

  • The Nagios Server needs to perform a check of a service
  • The Nagios Server contacts the Nagios Agent which executes the check
  • The Check is starting its own WLST script to retrieve only the value needed for this check (each check calls a different WLST script)
  • The Nagios Agent returns the value to the Nagios Server which is then happy with it

 

This pretty simple approach was working fine at the beginning when we only had a few WebLogic Servers with not so much to monitor on them… The problem is that the Platform was growing very fast and we quickly started to see a few timeouts on the different checks because Nagios was trying to execute a lot of checks at the same time on the same host. For example on a specific environment, we had two WebLogic Domains running with 4 or 5 Managed Servers for each domain that were hosting a Documentum Application (DA, D2, D2-Config, …). We were monitoring the heapSize, the number of threads, the server state, the number of sessions, the different URLs with and without Load Balancer, aso… for each Managed Server and for the AdminServers too. Therefore we quickly reached a point where 5 or 10 WLST scripts were running at the same time just for the monitoring.

 

The problem with the WLST script is that it takes a lot of time to initialize itself and start (several seconds) and during that time, 1 or 2 CPUs are fully used only for that. Now correlate this figure with the fact that there are dozens of checks running every 5 minutes for each domain that all start their own WLST script. In the end, you get a heavily loaded WebLogic Server with a huge CPU consumption only for the monitoring… That might be acceptable for a small installation but it is definitely not the right thing to do for a huge Platform.

 

Therefore we needed to do something else. To solve this particular problem, I developed a new set of scripts that I integrated with Nagios to replace the old ones. The idea behind these new scripts was that they should provide us with at least the same thing as the old ones, but without starting so many WLST scripts, and they should be easily extensible. I worked on this small development and this is what I came up with:

  • The Nagios Server needs to perform a check of a service
  • The Nagios Server contacts the Nagios Agent which executes the check
  • The Check is reading a log file to find the value needed for this check
  • The Nagios Agent returns the value to the Nagios Server which is then happy with it

 

Pretty similar isn’t it? Indeed… And yet so different! The main idea behind this new version is that instead of starting a WLST script for each check which will fully use 1 or 2 CPUs and last for 2 to 10 seconds (depending on the type of check and on the load), this new version will only read a very short log file (1 log file per check) that contains one line: the result of the check. Reading a log file like that takes a few milliseconds and it doesn’t consume 2 CPUs for doing that… Now the remaining question is how can we handle the process that will populate the log files? Because yes checking a log file is fast but how can we ensure that this log file will contain the correct data?

 

To manage that, this is what I did:

  • Creation of a shell script (a rough sketch follows this list) that will:
    • Be executed by the Nagios Agent for each check
    • Check if the WebLogic Domain is running and exit if not
    • Check if the WLST script is running and start it if not
    • Ensure the log file has been updated in the last 5 minutes (meaning the monitoring is running and the value that will be read is correct)
    • Read the log file
    • Analyze the information coming from the log file and return that to the Nagios Agent
  • Creation of a WLST script that will:
    • Be started once, do its job, sleep for 2 minutes and then do it again
    • Retrieve the monitoring values and store that in log files
    • Store error messages in the log files if there is any issue
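
Purely as an illustration of the first script (the names, paths, thresholds and WLST launcher location below are assumptions, not our actual implementation), such a wrapper could look roughly like this:

#!/bin/bash
# check_weblogic.sh <domain> <check_name> : hypothetical wrapper executed by the Nagios Agent
DOMAIN=$1
CHECK=$2
SCRIPT_DIR=/app/nagios/etc/objects/scripts
LOG=${SCRIPT_DIR}/${CHECK}_${DOMAIN}.out

# 1. exit immediately if no WebLogic Server process is running
#    (a real check would also match the specific domain in the process arguments)
pgrep -f "weblogic.Server" > /dev/null || { echo "CRITICAL - Domain ${DOMAIN} is not running"; exit 2; }

# 2. (re)start the monitoring WLST script in the background if it isn't running
#    (WLST_HOME is a placeholder for wherever wlst.sh lives on your system)
pgrep -f "${DOMAIN}_check_weblogic.wls" > /dev/null || nohup ${WLST_HOME}/wlst.sh ${SCRIPT_DIR}/${DOMAIN}_check_weblogic.wls > /dev/null 2>&1 &

# 3. make sure the log file has been refreshed within the last 5 minutes
[ -n "$(find ${LOG} -mmin -5 2>/dev/null)" ] || { echo "UNKNOWN - ${LOG} is missing or older than 5 minutes"; exit 3; }

# 4. read the value; a real check would compare it against warning/critical thresholds before exiting 0, 1 or 2
cat ${LOG}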

 

I will not describe the shell script in any more detail beyond that sketch - it's just basic shell commands - but I will instead show you an example of a WLST script that can be used to monitor a few things (ThreadPool of all Servers, HeapFree of all Servers, Sessions of all Applications deployed on all Servers):

[nagios@weblogic_server_01 scripts]$ cat DOMAIN_check_weblogic.wls
from java.io import File
from java.io import FileOutputStream

directory='/app/nagios/etc/objects/scripts'
userConfig=directory + '/DOMAIN_configfile.secure'
userKey=directory + '/DOMAIN_keyfile.secure'
address='weblogic_server_01'
port='8443'

connect(userConfigFile=userConfig, userKeyFile=userKey, url='t3s://' + address + ':' + port)

def setOutputToFile(fileName):
  outputFile=File(fileName)
  fos=FileOutputStream(outputFile)
  theInterpreter.setOut(fos)

def setOutputToNull():
  outputFile=File('/dev/null')
  fos=FileOutputStream(outputFile)
  theInterpreter.setOut(fos)

while 1:
  domainRuntime()
  for server in domainRuntimeService.getServerRuntimes():
    setOutputToFile(directory + '/threadpool_' + domainName + '_' + server.getName() + '.out')
    cd('/ServerRuntimes/' + server.getName() + '/ThreadPoolRuntime/ThreadPoolRuntime')
    print 'threadpool_' + domainName + '_' + server.getName() + '_OUT',get('ExecuteThreadTotalCount'),get('HoggingThreadCount'),get('PendingUserRequestCount'),get('CompletedRequestCount'),get('Throughput'),get('HealthState')
    setOutputToNull()
    setOutputToFile(directory + '/heapfree_' + domainName + '_' + server.getName() + '.out')
    cd('/ServerRuntimes/' + server.getName() + '/JVMRuntime/' + server.getName())
    print 'heapfree_' + domainName + '_' + server.getName() + '_OUT',get('HeapFreeCurrent'),get('HeapSizeCurrent'),get('HeapFreePercent')
    setOutputToNull()

  try:
    setOutputToFile(directory + '/sessions_' + domainName + '_console.out')
    cd('/ServerRuntimes/AdminServer/ApplicationRuntimes/consoleapp/ComponentRuntimes/AdminServer_/console')
    print 'sessions_' + domainName + '_console_OUT',get('OpenSessionsCurrentCount'),get('SessionsOpenedTotalCount')
    setOutputToNull()
  except WLSTException,e:
    setOutputToFile(directory + '/sessions_' + domainName + '_console.out')
    print 'CRITICAL - The Server AdminServer or the Administrator Console is not started'
    setOutputToNull()

  domainConfig()
  for app in cmo.getAppDeployments():
    domainConfig()
    cd('/AppDeployments/' + app.getName())
    for appServer in cmo.getTargets():
      domainRuntime()
      try:
        setOutputToFile(directory + '/sessions_' + domainName + '_' + app.getName() + '.out')
        cd('/ServerRuntimes/' + appServer.getName() + '/ApplicationRuntimes/' + app.getName() + '/ComponentRuntimes/' + appServer.getName() + '_/' + app.getName())
        print 'sessions_' + domainName + '_' + app.getName() + '_OUT',get('OpenSessionsCurrentCount'),get('SessionsOpenedTotalCount')
        setOutputToNull()
      except WLSTException,e:
        setOutputToFile(directory + '/sessions_' + domainName + '_' + app.getName() + '.out')
        print 'CRITICAL - The Managed Server ' + appServer.getName() + ' or the Application ' + app.getName() + ' is not started'
        setOutputToNull()

  java.lang.Thread.sleep(120000)

[nagios@weblogic_server_01 scripts]$

 

A few notes related to the above WLST script:

  • userConfig and userKey are two files created previously in WLST that contain the username/password of the current user (at the time of creation of these files) in an encrypted way. This allows you to login to WLST without having to type your username and password and more importantly, without having to put a clear text password in this file…
  • To ensure the security of this environment we are always using t3s to perform the monitoring checks and this requires you to configure the AdminServer to HTTPS.
  • In the script, I’m using the “setOutputToFile” and “setOutputToNull” functions. The first one is to redirect the output to the file mentioned in parameter while the second one is to remove all output. That’s basically to ensure that the log files generated ONLY contain the needed lines and nothing else.
  • There is an infinite loop (while 1) that executes all checks, creates/updates all log files and then sleeps for 120 000 ms (so that's 2 minutes) before repeating.

 

As said above, this is easily extensible: you just add a new section to the script with the additional values to retrieve. So have fun with that! :)
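
For example - purely as an illustration, and the exact MBean paths and attributes should be double-checked against your own WebLogic version - a JDBC datasource check could be added inside the same loop following the exact same pattern:

  domainRuntime()
  for server in domainRuntimeService.getServerRuntimes():
    cd('/ServerRuntimes/' + server.getName() + '/JDBCServiceRuntime/' + server.getName())
    for ds in cmo.getJDBCDataSourceRuntimeMBeans():
      setOutputToFile(directory + '/datasource_' + domainName + '_' + server.getName() + '_' + ds.getName() + '.out')
      print 'datasource_' + domainName + '_' + server.getName() + '_' + ds.getName() + '_OUT',ds.getState(),ds.getActiveConnectionsCurrentCount(),ds.getWaitingForConnectionCurrentCount()
      setOutputToNull()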

 

Comparison between the two methods. Below I will use real figures coming from one of our WebLogic Servers:

  • Old:
    • 40 monitoring checks running every 5 minutes => 40 WLST scripts started
    • each one for a duration of 6 seconds (average)
    • each one using 200% CPU during that time (2 CPUs)
  • New:
    • Shell script:
      • 40 monitoring checks running every 5 minutes => 40 log files read
      • each one for a duration of 0,1s (average)
      • each one using 100% CPU during that time (1 CPU)
    • WLST script:
      • One loop every 2 minutes (so 2.5 loops in 5 minutes)
      • each one for a duration of 0.5s (average)
      • each one using 100% CPU during that time (1 CPU)

 

Period       CPU Time (Old)                    CPU Time (New)
5 minutes    40*6*2 <~> 480 s                  40*0.1*1 + 2.5*0.5*1 <~> 5.25 s
1 day        480*(1440/5) <~> 138 240 s        5.25*(1440/5) <~> 1 512 s
             <~> 2 304 min                     <~> 25.2 min
             <~> 38.4 h                        <~> 0.42 h

Based on these figures, we can see that our new monitoring solution is almost 100 times more efficient than the old one, so that's a success: instead of spending 38.4 hours of CPU time over a 24-hour period (that's 1.6 CPUs busy the whole day), we are now using 1 CPU for only about 25 minutes! Here I'm just talking about the CPU time but of course you can do the same comparison for the memory, processes, aso…

 

Note: Starting with WebLogic 12c, Oracle introduced the RESTful Management Services, which can now be used to monitor WebLogic too… They have been improved in 12.2 and can become a pretty good alternative to WLST scripting, but for now we are still using this WLST approach with one single execution every 2 minutes and Nagios reading the log files when need be.
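
As a rough illustration of that REST alternative (the URL, port and user below are made up, and the exact resources available depend on your WebLogic version), a server state check could be as simple as:

[nagios@weblogic_server_01 scripts]$ curl -s -k -u monitoring_user -H "Accept: application/json" "https://weblogic_server_01:8443/management/weblogic/latest/domainRuntime/serverRuntimes?links=none&fields=name,state,healthState"

curl will prompt for the password here; in a scheduled check you would of course source the credentials from a protected file rather than putting them on the command line.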

 

The post Documentum story – Monitoring of WebLogic Servers appeared first on Blog dbi services.

Meet Me at Bangalore, India @ SANGAM16 on 11th & 12th Nov (Friday & Saturday)

Online Apps DBA - Sat, 2016-10-15 01:46

If you are in India in November and working on Oracle then this is one of the events you don’t want to miss. Sangam is the largest independent Oracle User Group conference in India, organised by AIOUG on the 11th (Friday) & 12th (Saturday) of November in Bangalore. I’ll be presenting two papers 1. Oracle E-Business R12.2 […]

The post Meet Me at Bangalore, India @ SANGAM16 on 11th & 12th Nov (Friday & Saturday) appeared first on Oracle Trainings for Apps & Fusion DBA.

Categories: APPS Blogs
