Feed aggregator

Handle individual UKs on bulk inserts

Tom Kyte - Thu, 2016-10-20 16:06
Hello, I need to execute bulk insert into a table where two columns have unique constraints. One column has native values (cannot be changed) and another one contains abstract pseudo-random value (I generate it myself but I cannot change the algorit...
Categories: DBA Blogs

Convert my rows in columns

Tom Kyte - Thu, 2016-10-20 16:06
Hi friends! I tried to use pivot, unpivot and other ways to return the expected, but i'm not successfully. <code>create table t1 (ID NUMBER(5), tp char(1), nm varchar2(5), st number(2), en number(2)); insert into t1 values (1,'A', 'a', 0, ...
Categories: DBA Blogs

How to use a case statement inside a where clause?

Tom Kyte - Thu, 2016-10-20 16:06
I'm getting error -- PL/SQL: ORA-00905: missing keyword -- when I compile the following script: create or replace procedure testing (ass_line in char, curs out sys_refcursor ) is begin open curs for select asl.Production_Group,asl.last_sequen...
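
The answer is not included in this excerpt, but ORA-00905 typically comes from using CASE as a standalone condition rather than as an expression compared to a value. A minimal sketch of the usual pattern (the procedure, table and column names are hypothetical, loosely based on the question text):

create or replace procedure testing (
    ass_line in  varchar2,
    curs     out sys_refcursor
) is
begin
    open curs for
        select asl.production_group,
               asl.last_sequence_number
        from   assembly_lines asl   -- hypothetical table
        where  asl.line_code = case
                                   when ass_line = 'ALL' then asl.line_code
                                   else ass_line
                               end;
end testing;
/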
Categories: DBA Blogs

Mixing 9s and 0s in format number

Tom Kyte - Thu, 2016-10-20 16:06
Hello, I'm a bit confused by the SQL Reference. Let's consider Elements 0 and 9 of "Table 2-14 Number Format Elements" of the current "SQL Reference". Let's also consider the examples of "Table 2-15 Results of Number Conversions". I have SO much to a...
Categories: DBA Blogs

Column level access restrictions on a table

Tom Kyte - Thu, 2016-10-20 16:06
Let's say I have a table T with columns A, B, C and D. Data in each column by itself is not considered sensitive, but a combination of columns A,B,C in the same resultset is considered sensitive. Is it possible to allow queries that select A,B,D or A...
Categories: DBA Blogs

Related to Job scheduler

Tom Kyte - Thu, 2016-10-20 16:06
Hi Tom, actually I created a job (it will run every 5 mins) which will execute the procedure. In that procedure we are reading the file list from a directory and constructing a string which contains all the file names using a java class. Here my prob...
Categories: DBA Blogs

Invalid identifier while merge using dynamic sql - static merge statement succeeds

Tom Kyte - Thu, 2016-10-20 16:06
Hi, I am facing a weird problem in which a merge statement using dynamic sql is failing with the below error while the static merge statement is succeeding. Error: ORA-00904: : invalid identifier ORA-06512: at line 60 Below is...
Categories: DBA Blogs

The Best Transformation Story Ever

Linda Fishman Hoyle - Thu, 2016-10-20 15:19

Oracle’s transformation story isn’t new. It just gets better over time. As companies transition to the cloud and become digital businesses, who better than Oracle to guide them through these changes? Oracle writer Margaret Lindquist shares many of these insights in Oracle’s Safra Catz: How Finance Can Lead Cloud Transformation.

Show Me the Money

Unless you’re a start-up, most companies already have an ERP system in place, if not many ERP systems accumulated over time. That was the case in 1999 when Oracle set out to reduce multiple, disparate systems around the world to a single source of truth.

“By eliminating that duplication and consolidating systems, we were able to invest in our main business. When we started this effort, we spent $650 million a year on R&D. Now, we spend $5.6 billion. That is the goal with these ERP transformations. It’s critical to simplify and run the business in such a way that resources are released to invest in your main business.” – Safra Catz

Simplification, consolidation, and rationalization are just as relevant with the cloud. GE Digital has taken this advice to heart, replacing its fragmented ERP structure with ERP Cloud and investing those savings in innovation.

Don’t Forget the Human Element

Not every transformation is driven by cost savings. CIO Mark Sunday describes Oracle’s transformation into a cloud-first company as a way to engage with customers and employees in a digital world. No matter where your cloud journey begins, the hardest part will be dealing with people’s resistance to change.

Safra offers this advice: “You have to provide employees with so many benefits in terms of improved productivity, cost savings, and better efficiencies, that on their own they begin to look for other opportunities to push the capabilities of the systems.”

HugePages speeds up Oracle login process on Linux

Bobby Durrett's DBA Blog - Thu, 2016-10-20 13:28

We bumped a Linux database up to a 12 gigabyte SGA and the login time went up to about 2.5 seconds. Then a Linux admin configured 12 gigabytes of HugePages to fit the SGA and login time went down to .13 seconds. Here is how I tested the login time. E.sql just has the exit command in it so this logs in as SYSDBA and immediately exits:

$ time sqlplus / as sysdba < e.sql

... edited out for space ...

real    0m0.137s
user    0m0.007s
sys     0m0.020s

So, then the question came up about our databases with 3 gig SGAs without HugePages. So I tested one of them:

real    0m0.822s
user    0m0.014s
sys     0m0.007s

Same version of Oracle/Linux/etc. Seems like even with a 3 gig SGA the page table creation is adding more than half a second to the login time. No wonder they came up with HugePages for Linux!
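
For reference, here is a rough sketch of how HugePages can be sized for a 12 gigabyte SGA on Linux. The numbers are illustrative; in practice Oracle's hugepages_settings.sh script (MOS note 401749.1) is the usual way to calculate the value across all running instances:

#!/bin/bash
# With a 2 MB hugepage size, a 12 GB SGA needs roughly 12 * 1024 / 2 = 6144 hugepages.
grep Hugepagesize /proc/meminfo                       # typically "Hugepagesize: 2048 kB"

SGA_MB=12288
HP_SIZE_KB=$(awk '/Hugepagesize/ {print $2}' /proc/meminfo)
NR_HUGEPAGES=$(( SGA_MB * 1024 / HP_SIZE_KB + 16 ))   # small safety margin
echo "vm.nr_hugepages = $NR_HUGEPAGES"

# As root, persist the setting and verify allocation after the instance restarts:
# echo "vm.nr_hugepages = $NR_HUGEPAGES" >> /etc/sysctl.conf && sysctl -p
grep -i hugepages /proc/meminfo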


Categories: DBA Blogs

Common Criteria and the Future of Security Evaluations

Oracle Security Team - Thu, 2016-10-20 12:08

For years, I (and many others) have recommended that customers demand more of their information technology suppliers in terms of security assurance – that is, proof that security is “built in” and not “bolted on,” that security is “part of” the product or service developed and can be assessed in a meaningful way. While many customers are focused on one kind of assurance – the degree to which a product is free from security vulnerabilities – it is extremely important to know the degree to which a product was designed to meet specific security threats (and how well it does that). These are two distinct approaches to security that are quite complementary and a point that should increasingly be of value for all customers. The good news is that many IT customers – whether of on-premises products or cloud services - are asking for more “proof of assurance,” and many vendors are paying more attention. Great! At the same time, sadly, a core international standard for assurance: the Common Criteria (CC) (ISO 15408), is at risk.

The Common Criteria allows you to evaluate your IT products via an independent lab (certified by the national “scheme” in which the lab is domiciled). Seven levels of assurance are defined – generally, the higher the evaluation assurance level (EAL), the more “proof” you have to provide that your product 1) addresses specific (named) security threats 2) via specific (named) technical remedies to those threats. Over the past few years, CC experts have packaged technology-specific security threats, objectives, functions and assurance requirements into “Protection Profiles” that have a pre-defined assurance level. The best part of the CC is the CC Recognition Arrangement (CCRA), the benefit of which is that a CC security evaluation done in one country (subject to some limits) is recognized in multiple other countries (27, at present). The benefit to customers is that they can have a baseline level of confidence in a product they buy because an independent entity has looked at/validated a set of security claims about that product.

Unfortunately, the CC is in danger of losing this key benefit of mutual recognition. The main tension is between countries that want fast, cookie cutter, “one assurance size fits all” evaluations, and those that want (for at least some classes of products) higher levels of assurance. These tensions threaten to shatter the CCRA, with the risk of an “every country for itself,” “every market sector for itself” or worse, “every customer for itself” attempt to impose inconsistent assurance requirements on vendors that sell products and services in the global marketplace. Customers will not be well-served if there is no standardized and widely-recognized starting point for a conversation about product assurance.

The uncertainty about the future of the CC creates opportunity for new, potentially expensive and unproven assurance validation approaches. Every Tom, Dick, and Harriet is jumping on the assurance bandwagon, whether it is developing a new assurance methodology (that the promoters hope will be adopted as a standard, although it’s hardly a standard if one company “owns” the methodology), or lobbying for the use of one proprietary scanning tool or another (noting that none of the tools that analyze code are themselves certified for accuracy and cost-efficiency, nor are the operators of these tools). Nature abhors a vacuum: if the CCRA fractures, there are multiple entities ready to promote their assurance solutions – which may or may not work. (Note: I freely admit that a current weakness of the CC is that, while vulnerability analysis is part of a CC evaluation, it’s not all that one would want. A needed improvement would be a mechanism that ensures that vendors use a combination of tools to more comprehensively attempt to find security vulnerabilities that can weaken security mechanisms and have a risk-based program for triaging and fixing them. Validating that vendors are doing their own tire-kicking – and fixing holes in the tires before the cars leave the factory – would be a positive change.)

Why does this threat of CC balkanization matter? First of all, testing the exact same product or service 27 times won’t in all likelihood lead to a 27-fold security improvement, especially when the cost of the testing is borne by the same entity over and over (the vendor). Worse, since the resources (time, money, and people) that would be used to improve actual security are assigned to jumping through the same hoop 27 times, we may paradoxically end up with worse security. We may also end up with worse security to the extent that there will be less incentive for the labs that do CC evaluations to pursue excellence and cost efficiency in testing if they have less competition (for example, from labs in other countries, as is the case under the CCRA) and they are handed a captive marketplace via country-specific evaluation schemes.

Second, whatever the shortcomings of the CC, it is a strong, broadly-adopted foundation for security that to-date has the support of multiple stakeholders. While it may be improved upon, it is nonetheless better to do one thing in one market that benefits and is accepted in 26 other markets than to do 27 or more expensive testing iterations that will not lead to a 27-fold improvement in security. This is especially true in categories of products that some national schemes have deemed “too complex to evaluate meaningfully.” The alternative clearly isn't per-country testing or per-customer testing, because it is in nobody's interests and not feasible for vendors to do repeated one-off assurance fire-drills for multiple system integrators. Even if the CC is “not sufficient” for all types of testing for all products, it is still a reputable and strong baseline to build upon.

Demand for Higher Assurance

In part, the continuing demand for higher assurance CC evaluations is due to the nature of some of the products: smart cards, for example, are often used for payment systems, where there is a well understood need for “higher proof of security-worthiness.” Also, smart cards generally have a smaller code footprint, fewer interfaces that are well-defined and thus they lend themselves fairly well to more in-depth, higher assurance validation. Indeed, the smart card industry – in a foreshadowing and/or inspiration of CC community Protection Profiles (cPPs), was an early adopter of devising common security requirements and “proof of security claims,” doubtless understanding that all smart card manufacturers - and the financial institutions who are heavy issuers of them - have a vested interest in “shared trustworthiness.” This is a great example of understanding that, to quote Ben Franklin, “We must all hang together or assuredly we shall all hang separately.”

The demand for higher assurance evaluations continues in part because the CC has been so successful. Customers worldwide became accustomed to “EAL4” as the gold standard for most commercial software. “EAL-none”—the direction of new style community Protection Profiles (cPP)—hasn’t captured the imagination of the global marketplace for evaluated software in part because the promoters of “no-EAL is the new EAL4” have not made the necessary business case for why “new is better than old.” An honorable, realistic assessment of “new-style” cPPs would explain what the benefits are of the new approach and what the downsides are as part of making a case that “new is better than old.” Consumers do not necessarily upgrade their TV just because they are told “new is better than old;” they upgrade because they can see a larger screen, clearer picture, and better value for money.

Product Complexity and Evaluations

To the extent security evaluation methodology can be more precise and repeatable, that facilitates more consistent evaluations across the board at a lower evaluation cost. However, there is a big difference between products that were designed to do a small set of core functions, using standard protocols, and products that have a broader swathe of functionality and have far more flexibility as to how that functionality is implemented. This means that it will be impossible to standardize testing across products in some product evaluation categories.

For example, routers use standard Internet protocols (or well-known proprietary protocols) and are relatively well defined in terms of what they do. Therefore, it is far easier to test their security using standardized tests as part of a CC evaluation to, for example, determine attack resistance, correctness of protocol implementation, and so forth. The Network Device Protection Profile (NDPP) is the perfect template for this type of evaluation.

Relational databases, on the other hand, use structured query language (SQL) but that does not mean all SQL syntax in all commercial databases is identical, or that protocols used to connect to the database are all identical, or that common functionality is completely comparable among databases. For example, Oracle was the first relational database to implement commercial row level access control: specifically, by attaching a security policy to a table that causes a rewrite of SQL to enforce additional security constraints. Since Oracle developed (and patented) row level access control, other vendors have implemented similar (but not identical) functionality.
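
To make that concrete, here is a minimal sketch of the mechanism being described: a policy function attached to a table through DBMS_RLS, which causes Oracle to append a predicate to any SQL that touches the table. The schema, table, and policy names are purely illustrative:

-- Policy function: returns the predicate appended to statements against APP.ORDERS
create or replace function orders_policy_fn (
    p_schema in varchar2,
    p_object in varchar2
) return varchar2
is
begin
    -- Only rows belonging to the current session user are visible
    return 'owner_id = sys_context(''USERENV'', ''SESSION_USER'')';
end;
/

begin
    dbms_rls.add_policy(
        object_schema   => 'APP',
        object_name     => 'ORDERS',
        policy_name     => 'ORDERS_ROW_ACCESS',
        function_schema => 'APP',
        policy_function => 'ORDERS_POLICY_FN',
        statement_types => 'SELECT, INSERT, UPDATE, DELETE'
    );
end;
/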

As a result, no set of standard tests can adequately test each vendor’s row level security implementation, any more than you can use the same key on locks made by different manufacturers. Prescriptive (monolithic) testing can work for verifying protocol implementations; it will not work in cases where features are implemented differently. Even worse, prescriptive testing may have the effect of “design by test harness.”

Some national CC schemes have expressed concerns that an evaluation of some classes of products (like databases) will not be “meaningful” because of the size and complexity of these products,[1] or that these products do not lend themselves to repeatable, cross-product (prescriptive) testing. This is true, to a point: it is much easier to do a building inspection of a 1000-square foot or 100-square meter bungalow than of Buckingham Palace. However, given that some of these large, complex products are the core underpinning of many critical systems, does it make sense to ignore them because it’s not “rapid, repeatable and objective” to evaluate even a core part of their functionality? These classes of products are heavily used in the core market sectors the national schemes serve: all the more reason the schemes should not preclude evaluation of them.

Worse, given that customers subject to these CC schemes still want evaluated products, a lack of mutual recognition of these evaluations (thus breaking the CCRA) or negation of the ability to evaluate merely drives costs up. Demand for inefficient and ineffective ad hoc security assurances continues to increase and will explode if vendors are precluded from evaluating entire classes of products that are widely-used and highly security relevant. No national scheme, despite good intentions, can successfully control its national marketplace, or the global marketplace for information technology.


One of the downsides of rapid, basic, vanilla evaluations is that they stifle the uptake of innovative security features in a customer base that has a lot to protect. Most security-aware customers (like defense and intelligence customers) want new and innovative approaches to security to support their mission. They also want the new innovations vetted properly (via a CC evaluation).

Typically, a community Protection Profile (cPP) defines the set of minimum security functions that a product in category X does. Add-ons can in theory be done via an extended package (EP) – if the community agrees to it and the schemes allow it. The vendor and customer community should encourage the ability to evaluate innovative solutions through an EP, as long as the EP does not specify a particular approach to a threat to the exclusion of other ways to address the threat. This would continue to advance the state of the security art in particular product categories without waiting until absolutely everyone has Security Feature Y. It’s almost always a good thing to build a better mousetrap: there are always more mice to fend off. Rapid adoption of EPs would enable security-aware customers, many of whom are required to use evaluated products, to adopt new features readily, without waiting for:

a) every vendor to have a solution addressing that problem (especially since some vendors may never develop similar functionality)

b) the cPP to have been modified, and

c) all vendors to have evaluated against the new cPP (that includes the new security feature)

Given the increasing focus of governments on improvements to security (in some cases by legislation), national schemes should be the first in line to support “faster innovation/faster evaluation,” to support the customer base they are purportedly serving.

Last but really first, in the absence of the ability to rapidly evaluate new, innovative security features, customers who would most benefit from using those features may be unable or unwilling to use them, or may only use them at the expense of “one-off” assurance validation. Is it really in anyone’s interest to ask vendors to do repeated one-off assurance fire-drills for multiple system integrators?


The Common Criteria – and in particular, the Common Criteria recognition – form a valuable, proven foundation for assurance in a digital world that is increasingly in need of it. That strong foundation can nonetheless be strengthened by:

1) recognizing and supporting the legitimate need for higher assurance evaluations in some classes of product

2) enabling faster innovation in security and the ability to evaluate it via EPs

3) continuing to evaluate core products that have historically had and continue to have broad usage and market demand (e.g., databases and operating systems)

4) embracing, where apropos, repeatable testing and validation, while recognizing the limitations thereof that apply in some cases to entire classes of products and ensuring that such testing is not unnecessarily prescriptive.

Creating Oracle Application Builder Cloud Service App Based on Oracle ADF Business Components

Shay Shmeltzer - Thu, 2016-10-20 11:29

Oracle Application Builder Cloud Service (ABCS for short) enables you (and your business users) to create rich web and mobile apps in a quick visual way from a browser with no coding required (but coding is possible).

The UI that ABCS creates is based on Oracle JET, which many of our customers love because of its responsiveness and lightness.

Some Oracle ADF customers have been on the hunt for a new client-side UI solution for their apps, and Oracle JET is certainly a technology that will work for those use cases.

A nice feature for Oracle ADF customers is that their data-access and business-service layer is built in a reusable way that is decoupled from the UI. And now, with the ability to expose ADF Business Components as REST services, they can use any modern UI framework to develop the UI, including Oracle JET. There are already many blog entries with code samples on how to write JET apps that connect to ADF Business Components.

But what if we could give you the simplicity of ABCS for the UI creation, the power of JET for the UI experience, and the ability to leverage your existing investment in Oracle ADF all without writing a single line of code manually?

Well, in the demo below I'll show you how you can reuse the logic you have in Oracle ADF Business Components and build a JET-based UI on top of them in a declarative way with Oracle Application Builder Cloud Service.

Basically you get the best of each tool - and you don't need to write a single line of code!


In the 9-minute demo I'll show you how to:

  • Create an ADF Business Components layer on top of Oracle Database in the Cloud - (0:00)
  • Expose the ADF Business Components as REST service - (1:45)
  • Deploy the REST service to Java Cloud Service (JCS) - (2:19)
  • Create an Oracle Application Builder Cloud Service application - (6:00)
  • Add an ADF BC REST Service as a data source to the app - (6:30)
  • Create the user interface to your application - (7:20)

(Times are indicated in case you want to skip sections you are already familiar with) 

If you are interested in a bit of background on why this is so simple, the answer is that ABCS was built to enable easy integration with Oracle SaaS, leveraging the REST services they expose. To quickly build the full app with all the defaulting you are seeing in there (full CRUD with a simple drag and drop), ABCS needs to know some basic information about the data that it needs to render (primary key, data types, etc.). Since Oracle SaaS is built on Oracle ADF, we built into ABCS the capability to analyze the describe that ADF BC REST services provide. This makes it dead simple to consume ADF REST services in ABCS, whether these services come from Oracle's apps - or your own ADF apps :-) 
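
As a rough illustration of that describe metadata (the host, credentials, and the "Departments" resource below are hypothetical, and the exact URL depends on how the ADF BC REST application is configured and deployed), the same document ABCS analyzes can be fetched directly:

# Fetch the describe document for a hypothetical Departments resource exposed by ADF BC REST
curl -s -u jcs_user:password \
  "https://myjcs.example.com/MyADFApp/rest/v1/Departments/describe" \
  | python -m json.tool    # attributes, data types, primary key, action and child links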

As you can see there is a great synergy between Oracle ADF, Oracle Application Builder Cloud Service and Oracle JET. 

Want to try it on your own? Get a trial of Oracle Application Builder Cloud Service here

Categories: Development

While upgrading to I faced error ORA-01830 / ORA-06512

Pythian Group - Thu, 2016-10-20 10:35

The other day I was running an upgrade for a client that is using ACLs (Access Control Lists). If you have been doing upgrades to 12c, you know that running catctl.pl -n 4 catupgrd.sql entails 73 steps. So this upgrade failed in step 65 with the following error (I have trimmed the output for reading purposes):

Serial   Phase #:65 Files: 1 Calling sqlpatch with LD_LIBRARY_PATH=/u01/dev/product/12.1.0/lib; export LD_LIBRARY_PATH;/u01/dev/product/12.1.0/perl/bin/perl -I /u01/dev/product/12.1.0/rdbms/admin -I /u01/dev/product/12.1.0/rdbms/admin/../../sqlpatch /u01/dev/product/12.1.0/rdbms/admin/../../sqlpatch/sqlpatch.pl -verbose -upgrade_mode_only > catupgrd_datapatch_upgrade.log 2> catupgrd_datapatch_upgrade.err
returned from sqlpatch
    Time: 80s
Serial   Phase #:66 Files: 1     Time: 71s
Serial   Phase #:67 Files: 1     Time: 1s
Serial   Phase #:68 Files: 1     Time: 0s
Serial   Phase #:69 Files: 1     Time: 20s

Grand Total Time: 4946s

catuppst.sql unable to run in Database: DEVSTAR Id: 0
        ERRORS FOUND: during upgrade CATCTL ERROR COUNT=5
Identifier XDB 16-09-25 12:27:05 Script = /u01/dev/product/12.1.0/rdbms/admin/
ERROR = [ORA-01830: date format picture ends before converting entire input string ORA-06512: at "SYS.XS_OBJECT_MIGRATION", line 167
ORA-06512: at line 28
ORA-06512: at line 69
Identifier XDB 16-09-25 12:27:05 Script = /u01/dev/product/12.1.0/rdbms/admin/
ERROR = [ORA-06512: at "SYS.XS_OBJECT_MIGRATION", line 167 ORA-06512: at line 28
ORA-06512: at line 69
STATEMENT = [as above]
Identifier XDB 16-09-25 12:27:05 Script = /u01/dev/product/12.1.0/rdbms/admin/
ERROR = [ORA-06512: at line 28 ORA-06512: at line 69
STATEMENT = [as above]
Identifier XDB 16-09-25 12:27:05 Script = /u01/dev/product/12.1.0/rdbms/admin/
ERROR = [ORA-06512: at line 69]
STATEMENT = [as above]
Identifier ORDIM 16-09-25 12:28:53 Script = /u01/dev/product/12.1.0/rdbms/admin/
ERROR = [ORA-20000: Oracle XML Database component not valid. Oracle XML Database must be installed and valid prior to Oracle Multimedia install, upgrade, downgrade, or patch.
ORA-06512: at line 3

And the worst part of it was that the upgrade also corrupted my database, which is a good reminder to have a good backup before attempting an upgrade.

Sun Sep 25 13:55:52 2016
Checker run found 59 new persistent data failures
Sun Sep 25 14:00:18 2016
Hex dump of (file 5, block 1) in trace file /u01/app/diag/rdbms/dev/dev/trace/de_ora_13476.trc
Corrupt block relative dba: 0x01400001 (file 5, block 1)
Bad header found during kcvxfh v8
Data in bad block:
 type: 0 format: 2 rdba: 0x01400001
 last change scn: 0x0000.00000000 seq: 0x1 flg: 0x05
 spare1: 0x0 spare2: 0x0 spare3: 0x0
 consistency value in tail: 0x00000001
 check value in block header: 0xa641
 computed block checksum: 0x0
Reading datafile '/u01/dev/oradata/dev/system_01.dbf' for corruption at rdba: 0x01400001 (file 5, block 1)
Reread (file 1, block 1) found same corrupt data (no logical check)

So what I had to do was a restore of my database before the upgrade, as I couldn’t even do a flashback due to the corrupt block.

But to fix this error, I had to apply patch 20369415 to the 12c binaries before running catupgrd.sql.
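
For completeness, the apply itself follows the standard interim-patch procedure. The paths and the patch zip file name below are illustrative, and the database and listener should be down while the ORACLE_HOME is patched:

# Illustrative sequence -- adjust ORACLE_HOME and the staging directory to your environment
export ORACLE_HOME=/u01/dev/product/12.1.0
export PATH=$ORACLE_HOME/OPatch:$PATH

cd /home/oracle/patches
unzip p20369415_121020_Linux-x86-64.zip    # hypothetical file name for this platform/version
cd 20369415

opatch prereq CheckConflictAgainstOHWithDetail -ph ./
opatch apply
opatch lsinventory | grep 20369415         # confirm the patch is registered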

[oracle@dev 20369415]$ opatch lsinventory | grep 20369415
Patch  20369415     : applied on Sun Sep 25 14:49:59 CDT 2016

Once the patch was applied , I reran the upgrade, and now it finished successfully

Serial   Phase #:65      Files: 1     Time: 133s
Serial   Phase #:66      Files: 1     Time: 78s
Serial   Phase #:68      Files: 1     Time: 0s
Serial   Phase #:69      Files: 1     Time: 275s
Serial   Phase #:70      Files: 1     Time: 171s
Serial   Phase #:71      Files: 1     Time: 0s
Serial   Phase #:72      Files: 1     Time: 0s
Serial   Phase #:73      Files: 1     Time: 20s

Phases [0-73]         End Time:[2016_09_26 17:42:54]

Grand Total Time: 5352s

LOG FILES: (catupgrd*.log)
COMP_ID              COMP_NAME                                VERSION  STATUS
-------------------- ---------------------------------------- -------- ---------------
APEX                 Oracle Application Express      VALID
OWB                  OWB                             VALID
AMD                  OLAP Catalog                    OPTION OFF
SDO                  Spatial                         VALID
ORDIM                Oracle Multimedia               VALID
XDB                  Oracle XML Database             VALID
CONTEXT              Oracle Text                     VALID
OWM                  Oracle Workspace Manager        VALID
CATALOG              Oracle Database Catalog Views   VALID
CATPROC              Oracle Database Packages and Types VALID
JAVAVM               JServer JAVA Virtual Machine    VALID
XML                  Oracle XDK                      VALID
CATJAVA              Oracle Database Java Packages   VALID
APS                  OLAP Analytic Workspace         VALID
XOQ                  Oracle OLAP API                 VALID


This was a small post to make you aware that if you are using ACLs, you need to apply patch 20369415 to the 12c binaries so that you don't face a possible database corruption and a harder time upgrading your database.

Note: This post was originally posted in rene-ace.com

Categories: DBA Blogs

Enterprise Manager does not display correct values for memory

Yann Neuhaus - Thu, 2016-10-20 08:03

I recently had problems with Enterprise Manager, receiving alerts such as this one:

EM Event Critical hostname Memory Utilization is 93,205 % crossed warning (80%) or critical (90%)

When we have a look at the EM 13c console for the host, it reports the same high memory utilization.


On the system the free -m command displays:

oracle@host:~/24437699/ [agent13c] free -m
             total       used       free     shared    buffers     cached
Mem:         48275      44762       3512          0        205      37483
-/+ buffers/cache:       7073      41201
Swap:         8189       2397       5791

EM 13c does not take the buffers/cached component into account.

In fact, the memory calculation has changed between EM versions. According to Metalink Note 2144976.1:

“While the total Memory available in the host target is displayed correctly after applying the latest PSU # 23030165 (Agent-Side), the formula used for Memory Utilization is (100.0 * (realMem-freeMem) / realMem) and does not consider Buffers / Cached component for the calculation.”
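
Using the free -m output shown above, the difference between the two formulas can be checked by hand (a quick sketch, with the numbers taken from that output):

# Old formula: 100 * (realMem - freeMem) / realMem  -> counts buffers/cache as "used"
echo "scale=1; 100 * (48275 - 3512) / 48275" | bc               # ~92.7, the value EM alerted on

# Subtracting the buffers and cached pages first gives the real picture
echo "scale=1; 100 * (48275 - 3512 - 205 - 37483) / 48275" | bc # ~14.6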

To solve the problem we have to patch the OMS and the different agents:

For the OMS: apply patch 23134365

For the agents: apply patch 24437699

Watch out: before applying the 23134365 patch for the OMS, we have to install the latest version of OMSPatcher. We download Patch 19999993 from MOS.

We back up the OMSPatcher directory in the $ORACLE_HOME of the oms13c environment:

oracle:OMS_HOME:/ [oms13c] mv OMSPatcher/ OMSPatcher_save

Then we copy p19999993_131000_Generic.zip into the $ORACLE_HOME directory and unzip it:

oracle:$OMS_HOME/ [oms13c] unzip p19999993_131000_Generic.zip
Archive:  p19999993_131000_Generic.zip
   creating: OMSPatcher/
   creating: OMSPatcher/oms/
  inflating: OMSPatcher/oms/generateMultiOMSPatchingScripts.pl
   creating: OMSPatcher/jlib/
  inflating: OMSPatcher/jlib/oracle.omspatcher.classpath.jar
  inflating: OMSPatcher/jlib/oracle.omspatcher.classpath.unix.jar
  inflating: OMSPatcher/jlib/omspatcher.jar
  inflating: OMSPatcher/jlib/oracle.omspatcher.classpath.windows.jar
   creating: OMSPatcher/scripts/
   creating: OMSPatcher/scripts/oms/
   creating: OMSPatcher/scripts/oms/oms_child_scripts/
  inflating: OMSPatcher/scripts/oms/oms_child_scripts/omspatcher_wls.bat
  inflating: OMSPatcher/scripts/oms/oms_child_scripts/omspatcher_jvm_discovery
  inflating: OMSPatcher/scripts/oms/oms_child_scripts/omspatcher_jvm_discovery.bat
  inflating: OMSPatcher/scripts/oms/oms_child_scripts/omspatcher_wls
  inflating: OMSPatcher/scripts/oms/omspatcher
  inflating: OMSPatcher/scripts/oms/omspatcher.bat
  inflating: OMSPatcher/omspatcher
   creating: OMSPatcher/wlskeys/
  inflating: OMSPatcher/wlskeys/createkeys.cmd
  inflating: OMSPatcher/wlskeys/createkeys.sh
  inflating: OMSPatcher/omspatcher.bat
  inflating: readme.txt
  inflating: PatchSearch.xml

We check the OMSPatcher version:

oracle:/ [oms13c] ./omspatcher version
OMSPatcher Version:
OPlan Version:
OsysModel build: Wed Oct 14 06:21:23 PDT 2015
OMSPatcher succeeded.

We download the p23134365_131000_Generic.zip file from Metalink, and we run:

oracle@host:/home/oracle/23134365/ [oms13c] omspatcher apply -analyze
OMSPatcher Automation Tool
Copyright (c) 2015, Oracle Corporation.  All rights reserved.
OMSPatcher version :
OUI version        :
Running from       : /u00/app/oracle/product/
Log file location  : /u00/app/oracle/product/
OMSPatcher log file: /u00/app/oracle/product/
Please enter OMS weblogic admin server URL(t3s://hostname:7102):>
Please enter OMS weblogic admin server username(weblogic):>
Please enter OMS weblogic admin server password:>
Configuration Validation: Success
Running apply prerequisite checks for sub-patch(es) "23134365" 
and Oracle Home "/u00/app/oracle/product/"...
Sub-patch(es) "23134365" are successfully analyzed for Oracle Home 

Complete Summary

OMSPatcher succeeded.

We stop the OMS and we run:

oracle@hostname:/home/oracle/23134365/ [oms13c] omspatcher apply
OMSPatcher Automation Tool
Copyright (c) 2015, Oracle Corporation.  All rights reserved.
OMSPatcher version :
OUI version        :
Running from       : /u00/app/oracle/product/
Please enter OMS weblogic admin server URL(t3s://hostname:7102):>
Please enter OMS weblogic admin server username(weblogic):>
Please enter OMS weblogic admin server password:>
Configuration Validation: Success

OMSPatcher succeeded.

We finally restart the OMS:

oracle@hostname:/home/oracle/ [oms13c] emctl start oms

Oracle Enterprise Manager Cloud Control 13c Release 1
Copyright (c) 1996, 2015 Oracle Corporation.  All rights reserved.
Starting Oracle Management Server...
WebTier Successfully Started
Oracle Management Server Successfully Started
Oracle Management Server is Up
JVMD Engine is Up
Starting BI Publisher Server ...
BI Publisher Server Already Started
BI Publisher Server is Up


Now we apply the patch to the agents:

After downloading and unzipping p24437699_131000_Generic.zip, we stop the management agent and we run:

oracle@hostname:/home/oracle/24437699/ [agent13c] opatch apply
Oracle Interim Patch Installer version
Copyright (c) 2016, Oracle Corporation.  All rights reserved.
Oracle Home       : /u00/app/oracle/product/
Central Inventory : /u00/app/oraInventory
OPatch version    :
OUI version       :

OPatch detects the Middleware Home as "/u00/app/oracle/product/"
Verifying environment and performing prerequisite checks...
OPatch continues with these patches:   24437699
Do you want to proceed? [y|n]
User Responded with: Y
All checks passed.
Backing up files...
Applying interim patch '24437699' to 
OH '/u00/app/oracle/product/'
Patching component oracle.sysman.top.agent,
Patch 24437699 successfully applied.
OPatch succeeded.

Finally we restart the agent with the emctl start agent command.

After the patches have been applied, the memory usage displayed in the EM console is correct.



And we do not receive critical alerts anymore :=)



The post Enterprise Manager does not display correct values for memory appeared first on Blog dbi services.

Conjunctive Normal Form

Jonathan Lewis - Thu, 2016-10-20 07:00

I recently tweeted about a comment I’d picked up at the Trivadis performance days regarding tablescans and performance.

“If you can write your SQL in conjunctive normal form it can help the optimizer to offload more predicates”

Inevitably someone asked me if I had an example to demonstrate this – I didn’t, and still don’t really, but here’s an interesting demo based on an example from the Oracle In-Memory blog showing how the optimizer will rearrange your filter predicates before passing them to the tablescan code for evaluation against an inmemory table.

rem     Script:         in_memory_conjunctive.sql
rem     Author:         Jonathan Lewis
rem     Dated:          Oct 2016
rem     Purpose:
rem     Last tested

create table t1
as
with generator as (
        select
                rownum id
        from dual
        connect by
                level <= 1e4
)
select
        rownum                          id,
        trunc(dbms_random.value(1,501)) qty,
        mod(rownum,200) + 1             part_no,
        lpad(rownum,10,'0')             v1,
        lpad('x',50,'x')                padding
from
        generator       v1,
        generator       v2
where
        rownum <= 1e7
;

prompt  ==========
prompt  Base query
prompt  ==========

select
        count(*)
from
        t1
where
        (qty > 495 or (qty < 3 and part_no = 50))
;

prompt  ===============
prompt  predicate added
prompt  ===============

select
        count(*)
from
        t1
where
        (qty > 495 or qty < 3) and (qty > 495 or (qty < 3 and part_no = 50))
;

prompt  =================
prompt  Ordered predicate
prompt  =================

select  /*+ ordered_predicates */
        count(*)
from
        t1
where
        (qty > 495 or qty < 3) and (qty > 495 or (qty < 3 and part_no = 50))
;

The 2nd and 3rd queries add a predicate to the first query – which, unfortunately, changes the estimated cardinality even though it has no effect on the result. This predicate is one that would be added by the inmemory code path if the table were declared to be inmemory. I’ve got two versions of the query, one with the (deprecated) ordered_predicates hint because in my initial tests the optimizer swapped the order of the predicates and I wanted to see if the ordering was at all critical.

Here’s the plan for the base query – first before declaring the table inmemory, then after declaring the table inmemory:

| Id  | Operation          | Name | Rows  | Bytes | Cost (%CPU)| Time     |
|   0 | SELECT STATEMENT   |      |       |       | 14739 (100)|          |
|   1 |  SORT AGGREGATE    |      |     1 |    19 |            |          |
|*  2 |   TABLE ACCESS FULL| T1   |   100K|  1862K| 14739   (6)| 00:00:01 |

Predicate Information (identified by operation id):
   2 - filter(("QTY">495 OR ("QTY"<3 AND "PART_NO"=50)))
| Id  | Operation                   | Name | Rows  | Bytes | Cost (%CPU)| Time     |
|   0 | SELECT STATEMENT            |      |       |       |  1974 (100)|          |
|   1 |  SORT AGGREGATE             |      |     1 |    19 |            |          |
|*  2 |   TABLE ACCESS INMEMORY FULL| T1   |   100K|  1862K|  1974  (44)| 00:00:01 |

Predicate Information (identified by operation id):
   2 - inmemory((("QTY">495 OR "QTY"<3) AND ("QTY">495 OR ("QTY"<3 AND "PART_NO"=50))))
       filter(("QTY">495 OR ("QTY"<3 AND "PART_NO"=50)))

And here, after putting the table back to no inmemory are the plans for the second and third queries; note, particularly the different order of the predicates in the predicate section: the predicate order matches the inmemory predicate order only if I use the ordered_predicates hint:

| Id  | Operation          | Name | Rows  | Bytes | Cost (%CPU)| Time     |
|   0 | SELECT STATEMENT   |      |       |       | 14741 (100)|          |
|   1 |  SORT AGGREGATE    |      |     1 |    19 |            |          |
|*  2 |   TABLE ACCESS FULL| T1   |  1404 | 26676 | 14741   (6)| 00:00:01 |

Predicate Information (identified by operation id):
   2 - filter((("QTY">495 OR ("QTY"<3 AND "PART_NO"=50)) AND ("QTY">495
              OR "QTY"<3)))
| Id  | Operation          | Name | Rows  | Bytes | Cost (%CPU)| Time     |
|   0 | SELECT STATEMENT   |      |       |       | 14741 (100)|          |
|   1 |  SORT AGGREGATE    |      |     1 |    19 |            |          |
|*  2 |   TABLE ACCESS FULL| T1   |  1404 | 26676 | 14741   (6)| 00:00:01 |

Predicate Information (identified by operation id):
   2 - filter((("QTY">495 OR "QTY"<3) AND ("QTY">495 OR ("QTY"<3 AND

Finally the run times – after running the queries a few times each to check for consistency:

  • Base query: 0.82 seconds
  • Query with extra predicate: 0.86 seconds
  • Query with extra predicate and forced order of predicate evaluation: 0.71 seconds

The query with the predicate arrangement matching the inmemory rewrite actually ran 13% faster than the original. Unfortunately the rewrite without the ordered_predicates hint ran slower – which is a bit of a shame but understandable – the first predicate is the more complex one, and then the code has to run a completely redundant second predicate; I was a little surprised at how much slower it was, but the table is 10M rows and we’re only looking at sub-second times anyway.

My table was fully cached and just under 112,000 blocks, so not very large, and this was running a serial query on a basic Oracle instance. Nevertheless there is a difference in execution time that is more than just “random noise”. If this is an indication of how a little unsightly tweaking of SQL for small data sets can make a difference, you can imagine that there might be a worthwhile benefit to considering ways of tweaking your predicates that make a significant difference to execution time if the extra predicates end up being pushed down to storage on an Exadata machine.


Another “not quite” example I happen to have written about a few months ago is a case where rewriting “not exists() OR not exists() OR not exists()” as “not (exists() AND exists() AND exists())” allowed Oracle to rewrite three subqueries as a single subquery with a three-table join.
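
For the record, that transformation looks like this. The tables and columns are made up for illustration; the two forms are logically equivalent by De Morgan's law, but (as described in that earlier article) the second shape is the one the optimizer could turn into a single subquery joining the three tables:

-- Form 1: three separate NOT EXISTS subqueries
select o.order_id
from   orders o
where  not exists (select null from deliveries d where d.order_id = o.order_id)
or     not exists (select null from invoices   i where i.order_id = o.order_id)
or     not exists (select null from payments   p where p.order_id = o.order_id);

-- Form 2: the same predicate rewritten as a single negated conjunction
select o.order_id
from   orders o
where  not (    exists (select null from deliveries d where d.order_id = o.order_id)
            and exists (select null from invoices   i where i.order_id = o.order_id)
            and exists (select null from payments   p where p.order_id = o.order_id));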


Oracle EMPTY_CLOB Function with Examples

Complete IT Professional - Thu, 2016-10-20 06:00
In this article, I’ll explain what the EMPTY_CLOB function does and show you an example of how to use it. Purpose of the Oracle EMPTY_CLOB Function The EMPTY_CLOB function is used to initialise a CLOB column to EMPTY. It can be used in several places: in an INSERT statement, in an UPDATE statement, initialising a […]
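
The excerpt is cut short, but a minimal usage sketch (the table name is hypothetical) looks like this:

-- Hypothetical table with a CLOB column
create table documents (
    doc_id   number primary key,
    doc_body clob
);

-- Initialise the CLOB to an empty (non-NULL) locator on insert...
insert into documents (doc_id, doc_body)
values (1, empty_clob());

-- ...or reset an existing value to empty with an update
update documents
set    doc_body = empty_clob()
where  doc_id = 1;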
Categories: Development

EBS 12.2.6 OA Extensions for Jdeveloper 10g Now Available

Steven Chan - Thu, 2016-10-20 02:06
When you create extensions to Oracle E-Business Suite OA Framework pages, you must use the version of Oracle JDeveloper shipped by the Oracle E-Business Suite product team.

The version of Oracle JDeveloper is specific to the Oracle E-Business Suite Applications Technology patch level, so there is a new version of Oracle JDeveloper with each new release of the Oracle E-Business Suite Applications Technology patchset.

The Oracle Applications (OA) Extensions for JDeveloper 10g are now available for E-Business Suite Release 12.2.6.  For details, see:

The same Note also lists the latest OA Extension updates for EBS 11i, 12.0, 12.1, and 12.2.


Categories: APPS Blogs

Documentum story – Replicate an Embedded LDAP manually in WebLogic

Yann Neuhaus - Thu, 2016-10-20 02:00

In this blog, I will talk about the WebLogic Embedded LDAP. This LDAP is created by default on all AdminServers and Managed Servers of any WebLogic installation. The AdminServer always contains the Primary Embedded LDAP and all other Servers are synchronized with this one. This Embedded LDAP is the default security provider database for the WebLogic Authentication, Authorization, Credential Mapping and Role Mapping providers: it usually contains the WebLogic users, groups, and some other stuff like the SAML2 setup, aso… So basically a lot of stuff configured under the “security realms” in the WebLogic Administration Console. This LDAP is based on files that are stored under “$DOMAIN_HOME/servers/<SERVER_NAME>/data/ldap/”.


Normally the Embedded LDAP is automatically replicated from the AdminServer to the Managed Servers during startup but this can fail for a few reasons:

  • AdminServer not running
  • Problems in the communications between the AdminServer and Managed Servers
  • aso…


Oracle usually recommends using an external RDBMS Security Store instead of the Embedded LDAP, but not all information is stored in the RDBMS and therefore the Embedded LDAP is always used, at least for a few things. More information on this page: Oracle WebLogic Server Documentation.


So now in case the automatic replication isn’t working properly, for any reason, or if a manual replication is needed, how can it be done? Well that’s pretty simple and I will explain that below. I will also use a home made script in order to quickly and efficiently start/stop one, several or all WebLogic components. If you don’t have such script available, then please adapt the steps below to manually stop and start all WebLogic components.


So first you need to stop all components:

[weblogic@weblogic_server_01 ~]$ $DOMAIN_HOME/bin/startstop stopAll
  ** Managed Server msD2-01 stopped
  ** Managed Server msD2Conf-01 stopped
  ** Managed Server msDA-01 stopped
  ** Administration Server AdminServer stopped
  ** Node Manager NodeManager stopped
[weblogic@weblogic_server_01 ~]$ ps -ef | grep weblogic
[weblogic@weblogic_server_01 ~]$


Once this is done, you need to retrieve the list of all Managed Servers installed/configured in this WebLogic Domain for which a manual replication is needed. For me it is pretty simple: they are printed above by the start/stop command, but otherwise you can find them like this:

[weblogic@weblogic_server_01 ~]$ cd $DOMAIN_HOME/servers
[weblogic@weblogic_server_01 servers]$ ls | grep -v "domain_bak"


Now that you have the list, you can proceed with the manual replication for each and every Managed Server. First back up the Embedded LDAP and then replicate it from the Primary (in the AdminServer, as explained above):

[weblogic@weblogic_server_01 servers]$ current_date=$(date "+%Y%m%d")
[weblogic@weblogic_server_01 servers]$ 
[weblogic@weblogic_server_01 servers]$ mv msD2-01/data/ldap msD2-01/data/ldap_bck_$current_date
[weblogic@weblogic_server_01 servers]$ mv msD2Conf-01/data/ldap msD2Conf-01/data/ldap_bck_$current_date
[weblogic@weblogic_server_01 servers]$ mv msDA-01/data/ldap msDA-01/data/ldap_bck_$current_date
[weblogic@weblogic_server_01 servers]$ 
[weblogic@weblogic_server_01 servers]$ cp -R AdminServer/data/ldap msD2-01/data/
[weblogic@weblogic_server_01 servers]$ cp -R AdminServer/data/ldap msD2Conf-01/data/
[weblogic@weblogic_server_01 servers]$ cp -R AdminServer/data/ldap msDA-01/data/
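
With more Managed Servers it can be easier to script the same backup-and-copy sequence. A sketch, assuming the directory layout shown above and that the AdminServer and domain backup directories must be skipped:

cd $DOMAIN_HOME/servers
current_date=$(date "+%Y%m%d")

# Back up and replace the Embedded LDAP of every Managed Server from the Primary
for srv in $(ls | grep -v -e "AdminServer" -e "domain_bak"); do
    mv ${srv}/data/ldap ${srv}/data/ldap_bck_${current_date}
    cp -R AdminServer/data/ldap ${srv}/data/
done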


When this is done, just start all WebLogic components again:

[weblogic@weblogic_server_01 servers]$ $DOMAIN_HOME/bin/startstop startAll
  ** Node Manager NodeManager started
  ** Administration Server AdminServer started
  ** Managed Server msDA-01 started
  ** Managed Server msD2Conf-01 started
  ** Managed Server msD2-01 started


And if you followed these steps properly, the Managed Servers will now be able to start normally with a replicated Embedded LDAP containing all recent changes coming from the Primary Embedded LDAP.


The post Documentum story – Replicate an Embedded LDAP manually in WebLogic appeared first on Blog dbi services.

Links for 2016-10-19 [del.icio.us]

Categories: DBA Blogs

Critical Patch Updates (CPU) for Oct 2016 are now available : E-Business Suite, FMW, SOA, Identity Management etc

Online Apps DBA - Thu, 2016-10-20 01:37

Critical Patch Updates (CPU) are security fixes that Oracle releases on a quarterly basis (Jan, April, July, and Oct). 1. Oracle released the Oct 2016 patches on 18th Oct 2016. 2. These CPUs cover Oracle Database, Fusion Middleware, Oracle E-Business Suite, Oracle Enterprise Manager, Oracle Siebel CRM, Oracle PeopleSoft, Oracle JD Edwards, Linux, etc. 3. For the list of […]

The post Critical Patch Updates (CPU) for Oct 2016 are now available : E-Business Suite, FMW, SOA, Identity Management etc appeared first on Oracle Trainings for Apps & Fusion DBA.

Categories: APPS Blogs

View with a CTE?

Tom Kyte - Wed, 2016-10-19 21:46
Can I create a view that uses a CTE?
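
The answer is not part of this excerpt, but yes: the WITH clause can appear inside a view's defining query. A minimal sketch, using hypothetical table and column names:

create or replace view dept_salary_v as
with dept_totals as (
    select deptno, sum(sal) as total_sal
    from   emp
    group  by deptno
)
select deptno, total_sal
from   dept_totals;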
Categories: DBA Blogs

