Feed aggregator

While upgrading to 12c I faced error ORA-01830 / ORA-06512

Pythian Group - Thu, 2016-10-20 10:35

The other day I was running a 12c upgrade for a client that is using ACLs (Access Control Lists). If you have been doing upgrades to 12c, you know that running catctl.pl -n 4 catupgrd.sql entails 73 steps. This upgrade failed in step 65 with the following error (I have trimmed the output for readability):

Serial   Phase #:65 Files: 1 Calling sqlpatch with LD_LIBRARY_PATH=/u01/dev/product/12.1.0/lib; export LD_LIBRARY_PATH;/u01/dev/product/12.1.0/perl/bin/perl -I /u01/dev/product/12.1.0/rdbms/admin -I /u01/dev/product/12.1.0/rdbms/admin/../../sqlpatch /u01/dev/product/12.1.0/rdbms/admin/../../sqlpatch/sqlpatch.pl -verbose -upgrade_mode_only > catupgrd_datapatch_upgrade.log 2> catupgrd_datapatch_upgrade.err
returned from sqlpatch
    Time: 80s
Serial   Phase #:66 Files: 1     Time: 71s
Serial   Phase #:67 Files: 1     Time: 1s
Serial   Phase #:68 Files: 1     Time: 0s
Serial   Phase #:69 Files: 1     Time: 20s

Grand Total Time: 4946s

catuppst.sql unable to run in Database: DEVSTAR Id: 0
        ERRORS FOUND: during upgrade CATCTL ERROR COUNT=5
Identifier XDB 16-09-25 12:27:05 Script = /u01/dev/product/12.1.0/rdbms/admin/
ERROR = [ORA-01830: date format picture ends before converting entire input string ORA-06512: at "SYS.XS_OBJECT_MIGRATION", line 167
ORA-06512: at line 28
ORA-06512: at line 69
Identifier XDB 16-09-25 12:27:05 Script = /u01/dev/product/12.1.0/rdbms/admin/
ERROR = [ORA-06512: at "SYS.XS_OBJECT_MIGRATION", line 167 ORA-06512: at line 28
ORA-06512: at line 69
STATEMENT = [as above]
Identifier XDB 16-09-25 12:27:05 Script = /u01/dev/product/12.1.0/rdbms/admin/
ERROR = [ORA-06512: at line 28 ORA-06512: at line 69
STATEMENT = [as above]
Identifier XDB 16-09-25 12:27:05 Script = /u01/dev/product/12.1.0/rdbms/admin/
ERROR = [ORA-06512: at line 69]
STATEMENT = [as above]
Identifier ORDIM 16-09-25 12:28:53 Script = /u01/dev/product/12.1.0/rdbms/admin/
ERROR = [ORA-20000: Oracle XML Database component not valid. Oracle XML Database must be installed and valid prior to Oracle Multimedia install, upgrade, downgrade, or patch.
ORA-06512: at line 3

And the worst part of it was that the upgrade also corrupted my database. This is a good point to stress: have a good backup before attempting an upgrade:

Sun Sep 25 13:55:52 2016
Checker run found 59 new persistent data failures
Sun Sep 25 14:00:18 2016
Hex dump of (file 5, block 1) in trace file /u01/app/diag/rdbms/dev/dev/trace/de_ora_13476.trc
Corrupt block relative dba: 0x01400001 (file 5, block 1)
Bad header found during kcvxfh v8
Data in bad block:
 type: 0 format: 2 rdba: 0x01400001
 last change scn: 0x0000.00000000 seq: 0x1 flg: 0x05
 spare1: 0x0 spare2: 0x0 spare3: 0x0
 consistency value in tail: 0x00000001
 check value in block header: 0xa641
 computed block checksum: 0x0
Reading datafile '/u01/dev/oradata/dev/system_01.dbf' for corruption at rdba: 0x01400001 (file 5, block 1)
Reread (file 1, block 1) found same corrupt data (no logical check)

So what I had to do was a restore of my database before the upgrade, as I couldn’t even do a flashback due to the corrupt block.
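As a side note, if you suspect block corruption, one quick way to list the blocks the database has already flagged is to query V$DATABASE_BLOCK_CORRUPTION, which RMAN backup and validate operations populate; a minimal sketch:

-- Populated by RMAN BACKUP ... VALIDATE / VALIDATE DATABASE
select file#, block#, blocks, corruption_type
from   v$database_block_corruption
order  by file#, block#;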

But to fix this error, I had to apply patch 20369415 to the 12c binaries before running catupgrd.sql:

[oracle@dev 20369415]$ opatch lsinventory | grep 20369415
Patch  20369415     : applied on Sun Sep 25 14:49:59 CDT 2016

Once the patch was applied, I reran the upgrade, and this time it finished successfully:

Serial   Phase #:65      Files: 1     Time: 133s
Serial   Phase #:66      Files: 1     Time: 78s
Serial   Phase #:68      Files: 1     Time: 0s
Serial   Phase #:69      Files: 1     Time: 275s
Serial   Phase #:70      Files: 1     Time: 171s
Serial   Phase #:71      Files: 1     Time: 0s
Serial   Phase #:72      Files: 1     Time: 0s
Serial   Phase #:73      Files: 1     Time: 20s

Phases [0-73]         End Time:[2016_09_26 17:42:54]

Grand Total Time: 5352s

LOG FILES: (catupgrd*.log)
COMP_ID              COMP_NAME                                VERSION  STATUS
-------------------- ---------------------------------------- -------- ---------------
APEX                 Oracle Application Express                        VALID
OWB                  OWB                                               VALID
AMD                  OLAP Catalog                                      OPTION OFF
SDO                  Spatial                                           VALID
ORDIM                Oracle Multimedia                                 VALID
XDB                  Oracle XML Database                               VALID
CONTEXT              Oracle Text                                       VALID
OWM                  Oracle Workspace Manager                          VALID
CATALOG              Oracle Database Catalog Views                     VALID
CATPROC              Oracle Database Packages and Types                VALID
JAVAVM               JServer JAVA Virtual Machine                      VALID
XML                  Oracle XDK                                        VALID
CATJAVA              Oracle Database Java Packages                     VALID
APS                  OLAP Analytic Workspace                           VALID
XOQ                  Oracle OLAP API                                   VALID

This was a small post to make you aware that if you are using ACLs, you need to apply patch 20369415 to the 12c binaries before upgrading, so that you don't face a possible database corruption and a much harder time upgrading your database.
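If you are not sure whether a database you are about to upgrade uses ACLs, a quick check (a minimal sketch; DBA_NETWORK_ACLS exists from 11g onwards) is:

-- Any rows returned mean network ACLs are configured in this database
select host, lower_port, upper_port, acl
from   dba_network_acls;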

Note: This post was originally posted in rene-ace.com

Categories: DBA Blogs

Enterprise Manager does not display correct values for memory

Yann Neuhaus - Thu, 2016-10-20 08:03

I recently had problems with Enterprise Manager, receiving such alerts:

EM Event Critical hostname Memory Utilization is 93,205 % crossed warning (80%) or critical (90%)

When we have a look at the EM 13c console for the host:


On the system the free -m command displays:

oracle@host:~/24437699/ [agent13c] free -m
             total       used       free     shared    buffers     cached
Mem:         48275      44762       3512          0        205      37483
-/+ buffers/cache:       7073      41201
Swap:         8189       2397       5791

EM 13c does not take the buffers/cached components into account.

In fact, the memory calculation has changed between EM releases. According to MOS Note 2144976.1:

“While the total Memory available in the host target is displayed correctly after applying the latest PSU # 23030165 (Agent-Side), the formula used for Memory Utilization is (100.0 * (realMem-freeMem) / realMem) and does not consider Buffers / Cached component for the calculation.”
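To put numbers on that using the free -m output above (illustrative arithmetic only, not EM's actual code):

-- EM formula: 100 * (realMem - freeMem) / realMem
select round(100 * (48275 - 3512) / 48275, 1) em_util_pct from dual;
-- ~92.7%, which matches the alert

-- Counting buffers (205 MB) and cached (37483 MB) as available memory
select round(100 * (48275 - 3512 - 205 - 37483) / 48275, 1) real_util_pct from dual;
-- ~14.7%, the figure implied by free -m's "-/+ buffers/cache" line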

To solve the problem we have to patch the OMS and the different agents:

For the oms: use the patch 23134365

For the agents : use the patch 24437699

Watch out: before applying the 23134365 patch to the OMS, we have to install the latest version of OMSPatcher. We download Patch 19999993 (the 13.1 OMSPatcher, per the p19999993_131000_Generic.zip file name) from MOS.

We back up the OMSPatcher directory in the $ORACLE_HOME of the oms13c environment:

oracle:OMS_HOME:/ [oms13c] mv OMSPatcher/ OMSPatcher_save

then we copy p19999993_131000_Generic.zip to the $ORACLE_HOME directory and unzip it:

oracle:$OMS_HOME/ [oms13c] unzip p19999993_131000_Generic.zip
Archive:  p19999993_131000_Generic.zip
   creating: OMSPatcher/
   creating: OMSPatcher/oms/
  inflating: OMSPatcher/oms/generateMultiOMSPatchingScripts.pl
   creating: OMSPatcher/jlib/
  inflating: OMSPatcher/jlib/oracle.omspatcher.classpath.jar
  inflating: OMSPatcher/jlib/oracle.omspatcher.classpath.unix.jar
  inflating: OMSPatcher/jlib/omspatcher.jar
  inflating: OMSPatcher/jlib/oracle.omspatcher.classpath.windows.jar
   creating: OMSPatcher/scripts/
   creating: OMSPatcher/scripts/oms/
   creating: OMSPatcher/scripts/oms/oms_child_scripts/
  inflating: OMSPatcher/scripts/oms/oms_child_scripts/omspatcher_wls.bat
  inflating: OMSPatcher/scripts/oms/oms_child_scripts/omspatcher_jvm_discovery
  inflating: OMSPatcher/scripts/oms/oms_child_scripts/omspatcher_jvm_discovery.bat
  inflating: OMSPatcher/scripts/oms/oms_child_scripts/omspatcher_wls
  inflating: OMSPatcher/scripts/oms/omspatcher
  inflating: OMSPatcher/scripts/oms/omspatcher.bat
  inflating: OMSPatcher/omspatcher
   creating: OMSPatcher/wlskeys/
  inflating: OMSPatcher/wlskeys/createkeys.cmd
  inflating: OMSPatcher/wlskeys/createkeys.sh
  inflating: OMSPatcher/omspatcher.bat
  inflating: readme.txt
  inflating: PatchSearch.xml

We check the OMSPatcher version:

oracle:/ [oms13c] ./omspatcher version
OMSPatcher Version:
OPlan Version:
OsysModel build: Wed Oct 14 06:21:23 PDT 2015
OMSPatcher succeeded.

We download the p23134365_131000_Generic.zip file from MOS, unzip it, and run:

oracle@host:/home/oracle/23134365/ [oms13c] omspatcher apply -analyze
OMSPatcher Automation Tool
Copyright (c) 2015, Oracle Corporation.  All rights reserved.
OMSPatcher version :
OUI version        :
Running from       : /u00/app/oracle/product/
Log file location  : /u00/app/oracle/product/
OMSPatcher log file: /u00/app/oracle/product/
Please enter OMS weblogic admin server URL(t3s://hostname:7102):>
Please enter OMS weblogic admin server username(weblogic):>
Please enter OMS weblogic admin server password:>
Configuration Validation: Success
Running apply prerequisite checks for sub-patch(es) "23134365" 
and Oracle Home "/u00/app/oracle/product/"...
Sub-patch(es) "23134365" are successfully analyzed for Oracle Home 

Complete Summary

OMSPatcher succeeded.

We stop the OMS and run:

oracle@hostname:/home/oracle/23134365/ [oms13c] omspatcher apply
OMSPatcher Automation Tool
Copyright (c) 2015, Oracle Corporation.  All rights reserved.
OMSPatcher version :
OUI version        :
Running from       : /u00/app/oracle/product/
Please enter OMS weblogic admin server URL(t3s://hostname:7102):>
Please enter OMS weblogic admin server username(weblogic):>
Please enter OMS weblogic admin server password:>
Configuration Validation: Success

OMSPatcher succeeded.

We finally restart the OMS:

oracle@hostname:/home/oracle/ [oms13c] emctl start oms

Oracle Enterprise Manager Cloud Control 13c Release 1
Copyright (c) 1996, 2015 Oracle Corporation.  All rights reserved.
Starting Oracle Management Server...
WebTier Successfully Started
Oracle Management Server Successfully Started
Oracle Management Server is Up
JVMD Engine is Up
Starting BI Publisher Server ...
BI Publisher Server Already Started
BI Publisher Server is Up


Now we apply the patch to the agents:

After downloading and unzipping p24437699_131000_Generic.zip, we stop the management agent and run:

oracle@hostname:/home/oracle/24437699/ [agent13c] opatch apply
Oracle Interim Patch Installer version
Copyright (c) 2016, Oracle Corporation.  All rights reserved.
Oracle Home       : /u00/app/oracle/product/
Central Inventory : /u00/app/oraInventory
OPatch version    :
OUI version       :

OPatch detects the Middleware Home as "/u00/app/oracle/product/"
Verifying environment and performing prerequisite checks...
OPatch continues with these patches:   24437699
Do you want to proceed? [y|n]
User Responded with: Y
All checks passed.
Backing up files...
Applying interim patch '24437699' to 
OH '/u00/app/oracle/product/'
Patching component oracle.sysman.top.agent,
Patch 24437699 successfully applied.
OPatch succeeded.

Finally we restart the agent with the emctl start agent command.

After the patches have been applied, the displayed memory utilization is correct:



And we do not receive critical alerts anymore :=)



The article Enterprise Manager does not display correct values for memory appeared first on the dbi services blog.

Conjunctive Normal Form

Jonathan Lewis - Thu, 2016-10-20 07:00

I recently tweeted about a comment I’d picked up at the Trivadis performance days regarding tablescans and performance.

“If you can write your SQL in conjunctive normal form it can help the optimizer to offload more predicates”

Inevitably someone asked me if I had an example to demonstrate this. I didn’t, and still don’t really, but here’s an interesting demo based on an example from the Oracle In-Memory blog, showing how the optimizer will rearrange your filter predicates before passing them to the tablescan code for evaluation against an inmemory table.
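For reference, conjunctive normal form (CNF) is an AND of ORs, and the relevant rewrite here is the distributive law: A or (B and C) is equivalent to (A or B) and (A or C). Applied to the predicate used in this demo (a sketch of the rewritten query, not optimizer output):

-- A = (qty > 495), B = (qty < 3), C = (part_no = 50)
-- Disjunctive form:        A or (B and C)
-- Conjunctive normal form: (A or B) and (A or C)
select  count(*)
from    t1
where   (qty > 495 or qty < 3)
and     (qty > 495 or part_no = 50)
;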

rem     Script:         in_memory_conjunctive.sql
rem     Author:         Jonathan Lewis
rem     Dated:          Oct 2016
rem     Purpose:
rem     Last tested

create table t1
as
with generator as (
        select
                rownum id
        from dual
        connect by
                level <= 1e4
)
select
        rownum                          id,
        trunc(dbms_random.value(1,501)) qty,
        mod(rownum,200) + 1             part_no,
        lpad(rownum,10,'0')             v1,
        lpad('x',50,'x')                padding
from
        generator       v1,
        generator       v2
where
        rownum <= 1e7
;

prompt  ==========
prompt  Base query
prompt  ==========

select
        count(v1)
from
        t1
where
        (qty > 495 or (qty < 3 and part_no = 50))
;

prompt  ===============
prompt  predicate added
prompt  ===============

select
        count(v1)
from
        t1
where
        (qty > 495 or qty < 3) and (qty > 495 or (qty < 3 and part_no = 50))
;

prompt  =================
prompt  Ordered predicate
prompt  =================

select  /*+ ordered_predicates */
        count(v1)
from
        t1
where
        (qty > 495 or qty < 3) and (qty > 495 or (qty < 3 and part_no = 50))
;
The 2nd and 3rd queries add a predicate to the first query – which, unfortunately, changes the estimated cardinality even though it has no effect on the result. This predicate is one that would be added by the inmemory code path if the table were declared to be inmemory. I’ve got two versions of the query, one with the (deprecated) ordered_predicates hint because in my initial tests the optimizer swapped the order of the predicates and I wanted to see if the ordering was at all critical.

Here’s the plan for the base query – first before declaring the table inmemory, then after declaring the table inmemory:

| Id  | Operation          | Name | Rows  | Bytes | Cost (%CPU)| Time     |
|   0 | SELECT STATEMENT   |      |       |       | 14739 (100)|          |
|   1 |  SORT AGGREGATE    |      |     1 |    19 |            |          |
|*  2 |   TABLE ACCESS FULL| T1   |   100K|  1862K| 14739   (6)| 00:00:01 |

Predicate Information (identified by operation id):
   2 - filter(("QTY">495 OR ("QTY"<3 AND "PART_NO"=50)))
| Id  | Operation                   | Name | Rows  | Bytes | Cost (%CPU)| Time     |
|   0 | SELECT STATEMENT            |      |       |       |  1974 (100)|          |
|   1 |  SORT AGGREGATE             |      |     1 |    19 |            |          |
|*  2 |   TABLE ACCESS INMEMORY FULL| T1   |   100K|  1862K|  1974  (44)| 00:00:01 |

Predicate Information (identified by operation id):
   2 - inmemory((("QTY">495 OR "QTY"<3) AND ("QTY">495 OR ("QTY"<3 AND "PART_NO"=50)))) filter(("QTY">495 OR ("QTY"<3 AND "PART_NO"=50)))

And here, after putting the table back to no inmemory, are the plans for the second and third queries. Note particularly the different order of the predicates in the predicate section: the predicate order matches the inmemory predicate order only if I use the ordered_predicates hint:

| Id  | Operation          | Name | Rows  | Bytes | Cost (%CPU)| Time     |
|   0 | SELECT STATEMENT   |      |       |       | 14741 (100)|          |
|   1 |  SORT AGGREGATE    |      |     1 |    19 |            |          |
|*  2 |   TABLE ACCESS FULL| T1   |  1404 | 26676 | 14741   (6)| 00:00:01 |

Predicate Information (identified by operation id):
   2 - filter((("QTY">495 OR ("QTY"<3 AND "PART_NO"=50)) AND ("QTY">495
              OR "QTY"<3)))
| Id  | Operation          | Name | Rows  | Bytes | Cost (%CPU)| Time     |
|   0 | SELECT STATEMENT   |      |       |       | 14741 (100)|          |
|   1 |  SORT AGGREGATE    |      |     1 |    19 |            |          |
|*  2 |   TABLE ACCESS FULL| T1   |  1404 | 26676 | 14741   (6)| 00:00:01 |

Predicate Information (identified by operation id):
   2 - filter((("QTY">495 OR "QTY"<3) AND ("QTY">495 OR ("QTY"<3 AND

Finally the run times – after running the queries a few times each to check for consistency:

  • Base query: 0.82 seconds
  • Query with extra predicate: 0.86 seconds
  • Query with extra predicate and forced order of predicate evaluation: 0.71 seconds

The query with the predicate arrangement matching the inmemory rewrite actually ran 13% faster than the original. Unfortunately the rewrite without the ordered_predicates hint ran slower – which is a bit of a shame but understandable: the first predicate is the more complex one, and then the code has to evaluate a completely redundant second predicate. I was a little surprised at how much slower it was, but the table is 10M rows and we’re only looking at sub-second times anyway.

My table was fully cached and just under 112,000 blocks, so not very large, and this was running a serial query on a basic Oracle instance. Nevertheless there is a difference in execution time that is more than just “random noise”. If this is an indication of how a little unsightly tweaking of SQL can make a difference on small data sets, you can imagine that there might be a worthwhile benefit to considering ways of tweaking your predicates that make a significant difference to execution time if the extra predicates end up being pushed down to storage on an Exadata machine.


Another “not quite” example I happen to have written about a few months ago is a case where rewriting “not exists() OR not exists() OR not exists()” as “not (exists() AND exists() AND exists())” allowed Oracle to rewrite three subqueries as a single subquery with a three-table join.
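A sketch of that shape of rewrite (the table names are hypothetical, purely to show the equivalence De Morgan's law gives you):

-- Original: three separate NOT EXISTS branches OR'ed together
select *
from   orders o
where  not exists (select 1 from deliveries d where d.order_id = o.id)
or     not exists (select 1 from invoices   i where i.order_id = o.id)
or     not exists (select 1 from payments   p where p.order_id = o.id);

-- De Morgan equivalent: NOT over a single conjunction, which the optimizer
-- may be able to treat as one three-table subquery
select *
from   orders o
where  not (    exists (select 1 from deliveries d where d.order_id = o.id)
            and exists (select 1 from invoices   i where i.order_id = o.id)
            and exists (select 1 from payments   p where p.order_id = o.id));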


Oracle EMPTY_CLOB Function with Examples

Complete IT Professional - Thu, 2016-10-20 06:00
In this article, I’ll explain what the EMPTY_CLOB function does and show you an example of how to use it. Purpose of the Oracle EMPTY_CLOB Function The EMPTY_CLOB function is used to initialise a CLOB column to EMPTY. It can be used in several places: In an INSERT statement In an UPDATE statement Initialising a […]
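For instance (a minimal sketch with a hypothetical table):

create table documents (
  id   number primary key,
  body clob
);

-- Initialise the CLOB to empty (not NULL) on insert
insert into documents (id, body) values (1, empty_clob());

-- Or reset an existing row's CLOB to empty
update documents set body = empty_clob() where id = 1;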
Categories: Development

EBS 12.2.6 OA Extensions for Jdeveloper 10g Now Available

Steven Chan - Thu, 2016-10-20 02:06
When you create extensions to Oracle E-Business Suite OA Framework pages, you must use the version of Oracle JDeveloper shipped by the Oracle E-Business Suite product team.

The version of Oracle JDeveloper is specific to the Oracle E-Business Suite Applications Technology patch level, so there is a new version of Oracle JDeveloper with each new release of the Oracle E-Business Suite Applications Technology patchset.

The Oracle Applications (OA) Extensions for JDeveloper 10g are now available for E-Business Suite Release 12.2.6.  For details, see:

The same Note also lists the latest OA Extension updates for EBS 11i, 12.0, 12.1, and 12.2.


Categories: APPS Blogs

Documentum story – Replicate an Embedded LDAP manually in WebLogic

Yann Neuhaus - Thu, 2016-10-20 02:00

In this blog, I will talk about the WebLogic Embedded LDAP. This LDAP is created by default on all AdminServers and Managed Servers of any WebLogic installation. The AdminServer always contains the Primary Embedded LDAP and all other Servers are synchronized with this one. This Embedded LDAP is the default security provider database for the WebLogic Authentication, Authorization, Credential Mapping and Role Mapping providers: it usually contains the WebLogic users, groups, and some other things like the SAML2 setup, and so on – basically a lot of what is configured under the “security realms” in the WebLogic Administration Console. This LDAP is based on files that are stored under “$DOMAIN_HOME/servers/<SERVER_NAME>/data/ldap/”.


Normally the Embedded LDAP is automatically replicated from the AdminServer to the Managed Servers during startup but this can fail for a few reasons:

  • AdminServer not running
  • Problems in the communications between the AdminServer and Managed Servers
  • and so on


Oracle usually recommends using an external RDBMS Security Store instead of the Embedded LDAP, but not all information is stored in the RDBMS and therefore the Embedded LDAP is always used, at least for a few things. More information on this page: Oracle WebLogic Server Documentation.


So now, in case the automatic replication isn’t working properly for any reason, or if a manual replication is needed, how can it be done? Well, that’s pretty simple and I will explain it below. I will also use a home-made script in order to quickly and efficiently start/stop one, several or all WebLogic components. If you don’t have such a script available, then please adapt the steps below to manually stop and start all WebLogic components.


So first you need to stop all components:

[weblogic@weblogic_server_01 ~]$ $DOMAIN_HOME/bin/startstop stopAll
  ** Managed Server msD2-01 stopped
  ** Managed Server msD2Conf-01 stopped
  ** Managed Server msDA-01 stopped
  ** Administration Server AdminServer stopped
  ** Node Manager NodeManager stopped
[weblogic@weblogic_server_01 ~]$ ps -ef | grep weblogic
[weblogic@weblogic_server_01 ~]$


Once this is done, you need to retrieve the list of all Managed Servers installed/configured in this WebLogic Domain for which a manual replication is needed. For me, it is pretty simple, they are printed above in the start/stop command but otherwise you can find them like that:

[weblogic@weblogic_server_01 ~]$ cd $DOMAIN_HOME/servers
[weblogic@weblogic_server_01 servers]$ ls | grep -v "domain_bak"


Now that you have the list, you can proceed with the manual replication for each and every Managed Server. First backup the Embedded LDAP and then replicate it from the Primary (in the AdminServer as explained above):

[weblogic@weblogic_server_01 servers]$ current_date=$(date "+%Y%m%d")
[weblogic@weblogic_server_01 servers]$ 
[weblogic@weblogic_server_01 servers]$ mv msD2-01/data/ldap msD2-01/data/ldap_bck_$current_date
[weblogic@weblogic_server_01 servers]$ mv msD2Conf-01/data/ldap msD2Conf-01/data/ldap_bck_$current_date
[weblogic@weblogic_server_01 servers]$ mv msDA-01/data/ldap msDA-01/data/ldap_bck_$current_date
[weblogic@weblogic_server_01 servers]$ 
[weblogic@weblogic_server_01 servers]$ cp -R AdminServer/data/ldap msD2-01/data/
[weblogic@weblogic_server_01 servers]$ cp -R AdminServer/data/ldap msD2Conf-01/data/
[weblogic@weblogic_server_01 servers]$ cp -R AdminServer/data/ldap msDA-01/data/


When this is done, just start all WebLogic components again:

[weblogic@weblogic_server_01 servers]$ $DOMAIN_HOME/bin/startstop startAll
  ** Node Manager NodeManager started
  ** Administration Server AdminServer started
  ** Managed Server msDA-01 started
  ** Managed Server msD2Conf-01 started
  ** Managed Server msD2-01 started


And if you followed these steps properly, the Managed Servers will now be able to start normally with a replicated Embedded LDAP containing all recent changes coming from the Primary Embedded LDAP.


The article Documentum story – Replicate an Embedded LDAP manually in WebLogic appeared first on the dbi services blog.

Links for 2016-10-19 [del.icio.us]

Categories: DBA Blogs

Critical Patch Updates (CPU) for Oct 2016 are now available : E-Business Suite, FMW, SOA, Identity Management etc

Online Apps DBA - Thu, 2016-10-20 01:37

Critical Patch Updates (CPU) are security fixes that Oracle releases on a quarterly basis (January, April, July, and October). 1. Oracle released the Oct 2016 patches on 18th Oct 2016. 2. These CPUs cover Oracle Database, Fusion Middleware, Oracle E-Business Suite, Oracle Enterprise Manager, Oracle Siebel CRM, Oracle PeopleSoft, Oracle JD Edwards, Linux etc. 3. For list of […]

The post Critical Patch Updates (CPU) for Oct 2016 are now available : E-Business Suite, FMW, SOA, Identity Management etc appeared first on Oracle Trainings for Apps & Fusion DBA.

Categories: APPS Blogs

View with a CTE?

Tom Kyte - Wed, 2016-10-19 21:46
Can I create a view that uses a CTE?
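Short answer: yes, a WITH clause is simply part of the view's defining query. A minimal sketch (hypothetical table):

create or replace view recent_hires_v as
with recent as (
  select empno, ename, hiredate
  from   emp
  where  hiredate > add_months(sysdate, -12)
)
select empno, ename, hiredate
from   recent;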
Categories: DBA Blogs

Format for TRUNC(TIMESTAMP,<seconds>)

Tom Kyte - Wed, 2016-10-19 21:46
Is there a format specifier for TRUNCATE(timestamp,?) to seconds? The following all work: TRUNC(SYSTIMESTAMP,'MI') TRUNC(SYSTIMESTAMP,'HH') TRUNC(SYSTIMESTAMP,'MM') TRUNC(SYSTIMESTAMP,'YY') But TRUNC(SYSTIMESTAMP,'SS') and any variation I'v...
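There is no 'SS' format for TRUNC because a DATE already has second granularity (seconds are its smallest unit); to drop just the fractional seconds of a timestamp, casting to DATE does the job. A minimal sketch:

-- TRUNC(SYSTIMESTAMP,'MI') is the finest TRUNC granularity;
-- CAST to DATE keeps the seconds and drops only the fractional part
select cast(systimestamp as date) trunc_to_seconds from dual;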
Categories: DBA Blogs

How to tackle 'ORA-14024: number of partitions of LOCAL index must equal that of the underlying table' error?

Tom Kyte - Wed, 2016-10-19 21:46
Hi, I have created a plsql program which does the following. 1) Create Backup tables using script from dbms_metadata.get_ddl utility. 2) Insert records from Main tables into backup tables. 3) Rename indexes on Main table 3) Create Indexes (W...
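As a minimal illustration of what triggers ORA-14024 (hypothetical table): a LOCAL index is equipartitioned with its table, so any explicit partition list must match one-for-one, and omitting the list avoids the problem entirely.

create table sales (id number, sale_dt date)
partition by range (sale_dt) (
  partition p2015 values less than (date '2016-01-01'),
  partition p2016 values less than (date '2017-01-01')
);

-- Fails with ORA-14024: only one partition named for a two-partition table
-- create index sales_ix on sales(id) local (partition p2015);

-- Works: the index partitioning is inherited from the table
create index sales_ix on sales(id) local;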
Categories: DBA Blogs

how to get regular updates on database space left and database used space in my table

Tom Kyte - Wed, 2016-10-19 21:46
how to get regular updates on database space left and database used space in my table . i have created a table with 3 columns , dbspacetotal & dbspaceused,dbspaceremaining. how to get data into these columns when i insert update or delete in my...
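A starting point for the numbers themselves (a sketch; sizes in MB): total allocated space from DBA_DATA_FILES and free space from DBA_FREE_SPACE. Keeping your three-column table current would then be a matter of a scheduled job rather than DML triggers.

select round(sum(bytes)/1024/1024) total_mb
from   dba_data_files;

select round(sum(bytes)/1024/1024) free_mb
from   dba_free_space;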
Categories: DBA Blogs

running application with an app_user instead of schema owner user

Tom Kyte - Wed, 2016-10-19 21:46
Hi I have a java application that requires access to oracle database with an app_user instead of schema owner user and this is for security purposes. - The schema owner user is the user that owns oracle objects that need to be accessed from j...
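The usual pattern (a minimal sketch with hypothetical names, run as a DBA): keep the objects in the owner schema, grant the application account only the privileges it needs, and add synonyms so the code need not prefix the schema.

create user app_user identified by "secret" quota 0 on users;
grant create session to app_user;

-- Grant only what the application needs, object by object
grant select, insert, update, delete on owner.orders to app_user;

-- Let application code reference ORDERS without the schema prefix
create synonym app_user.orders for owner.orders;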
Categories: DBA Blogs


Tom Kyte - Wed, 2016-10-19 21:46
i learn all sql and pl/sql concept very well. but still not working on any project. so my question is please give me project or idea to develop project in oracle(SQL and PL/SQL) which helps to improve my knowledge very well... please give me the prj...
Categories: DBA Blogs

Need help on dbms_scheduler

Tom Kyte - Wed, 2016-10-19 21:46
Hi Tom, I have a scheduler which is linked to my package. The package was running for long and hence I cancelled the task. Now when I try to run the package back, the scheduler is not running. I checked in "USER_SCHEDULER_JOB_LOG" AND THE...
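A first check when a job refuses to run again (a sketch; the job name is hypothetical): look at its state and re-enable it, since a cancelled run can leave a job DISABLED or still marked RUNNING.

select job_name, state, enabled, failure_count
from   user_scheduler_jobs
where  job_name = 'MY_PKG_JOB';

-- If the job shows DISABLED after the cancelled run, re-enable it
begin
  dbms_scheduler.enable('MY_PKG_JOB');
end;
/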
Categories: DBA Blogs

Quickly built new Python graph SQL execution by plan

Bobby Durrett's DBA Blog - Wed, 2016-10-19 17:51


I created a new graph in my PythonDBAGraphs to show how a plan change affected execution time. The legend in the upper left lists the plan hash values. Normally I run the equivalent as a sqlplus script and just look for plans with higher execution times. I used it today for the SQL statement with SQL_ID c6m8w0rxsa92v, which has been running slowly since 10/11/2016.

Since I just split up my Python graphs into multiple smaller scripts I decided to build this new Python script to see how easy it would be to show the execution time of the SQL statement for different plans graphically. It was not hard to build this. Here is the script (sqlstatwithplans.py):

import myplot
import util

def sqlstatwithplans(sql_id):
    # Average elapsed seconds per AWR snapshot, per plan hash value
    q_string = """
select
sn.END_INTERVAL_TIME,
ss.plan_hash_value,
ELAPSED_TIME_DELTA/(executions_delta*1000000) ELAPSED_AVG_SEC
from DBA_HIST_SQLSTAT ss, DBA_HIST_SNAPSHOT sn
where ss.sql_id = '"""
    q_string += sql_id
    q_string += """'
and ss.snap_id=sn.snap_id
and executions_delta > 0
order by ss.snap_id,ss.sql_id,plan_hash_value"""
    return q_string

database,dbconnection = \
util.script_startup('Graph execution time by plan')

# Get user input

sql_id = util.input_with_default('SQL_ID','c6m8w0rxsa92v')

mainquery = sqlstatwithplans(sql_id)

mainresults = dbconnection.run_return_flipped_results(mainquery)

# one list per column in the query

date_times = mainresults[0]
plan_hash_values = mainresults[1]
elapsed_times = mainresults[2]
num_rows = len(date_times)

# build list of distinct plan hash values

distinct_plans = []
for phv in plan_hash_values:
    string_phv = str(phv)
    if string_phv not in distinct_plans:
        distinct_plans.append(string_phv)

# build a list of elapsed times by plan

# create list with num plans empty lists
elapsed_by_plan = []
for p in distinct_plans:
    elapsed_by_plan.append([])

# update an entry for every plan:
# the elapsed time for the plan that is in the row,
# None for ones that aren't in the row

for i in range(num_rows):
    plan_num = distinct_plans.index(str(plan_hash_values[i]))
    for p in range(len(distinct_plans)):
        if p == plan_num:
            elapsed_by_plan[p].append(elapsed_times[i])
        else:
            elapsed_by_plan[p].append(None)

# plot query

myplot.xlabels = date_times
myplot.ylists = elapsed_by_plan

myplot.title = ("Sql_id "+sql_id+" on "+database+
                " database with plans")
myplot.ylabel1 = "Averaged Elapsed Seconds"
myplot.ylistlabels = distinct_plans

myplot.line()

Having all of the Python code for this one graph in a single file made it much faster to put together a new graph. Pretty neat.


Categories: DBA Blogs

The hot new cloud product for true customer service

Linda Fishman Hoyle - Wed, 2016-10-19 17:15

A Guest Post by Bill Miller, Oracle product management director

It’s such a pleasure doing business with a company that has a 360-degree view of me as a customer. All my information—from different touchpoints that I’ve used to contact the company to purchase products and receive service and support—is consolidated in a master record. When a company manages my data efficiently, I tend to engage more, spend more, renew my loyalty, and tell my friends.

Managing customer data is at the crux of delivering a positive customer experience. Despite their importance, most master data management (MDM) solutions are a bit onerous. They are typically expensive on-premises deployments, which are time consuming to set up and to obtain results.

Oracle Customer Data Management Cloud (CDM) changes the game

CDM Cloud is an affordable solution for a company’s master data management challenges. It’s a single, easy-to-use application that consolidates, cleans, completes, and coordinates customer data from different systems across the enterprise—delivering a current, complete 360-degree view.

The technology has actually been around since the introduction of Fusion, strategically embedded in the Fusion CRM / OSC platform. CDM as a cloud service is new (as part of Oracle CX Cloud) and is the first truly SaaS-based, next-generation MDM platform. It leverages Oracle’s decades of experience in the MDM industry.

Oracle CDM Cloud creates a trusted master customer profile

Based on a recent survey from Experian, on average, customer data resides in at least nine different systems, making it difficult to know what is really there and what is really happening. This is mainly a result of the expanding applications ecosystem used to run any business.

Oracle CDM Cloud uses a cross-reference registry to tie data together from multiple sources to create a “best version” record. Ken Readus from eVerge Consulting says, “We see Oracle CDM as the perfect, easy-to-use solution for our clients to create a common, sharable “customer master” from all the cloud and on-premise applications that have sprung up over the years.”

Besides centralizing data from multiple systems, Oracle CDM Cloud also resolves the issue of bad or duplicate data. Most business applications, such as Salesforce, Microsoft, and SAP, don’t have the embedded function to check data quality.

Why are so many Oracle CX, ERP, and Salesforce customers signing up for Oracle CDM?

Most companies know how a single, complete 360-degree view of the customer positively affects their customer service. But there are so many benefits that extend beyond the increased customer loyalty, retention, and reference sentiment, which I mentioned at the beginning of this post.

For instance, Oracle CDM Cloud increases customer insight. That in turn, reduces churn. There are fewer risk / compliance issues as a result of CDM’s data governance capabilities.

Also, CDM Cloud helps companies better manage their sales territories. Businesses can segment their marketing campaigns more completely. Reporting is more accurate and timely.

For More Information

To learn about the exceptional capabilities and benefits of Oracle CDM Cloud, go to this link for an overview, features, pricing, and a data sheet.

Datawarehouse ODS load is fast and easy in Enterprise Edition

Yann Neuhaus - Wed, 2016-10-19 14:56

In a previous post, a tribute to transportable tablespaces (TTS), I said that TTS can also be used to move data quickly from an operational database to a datawarehouse ODS. Of course, you don't transport directly from the production database, because TTS requires the tablespace to be read only. But you can transport from a snapshot standby. Both features (transportable tablespaces and Data Guard snapshot standby) are free in Enterprise Edition without any option. Here is an example to show that it's not difficult to automate.

I have a configuration with the primary database “db1a”

DGMGRL> show configuration
Configuration - db1
Protection Mode: MaxPerformance
db1a - Primary database
db1b - Physical standby database
Fast-Start Failover: DISABLED
Configuration Status:
SUCCESS (status updated 56 seconds ago)
DGMGRL> show database db1b
Database - db1b
Intended State: APPLY-ON
Transport Lag: 0 seconds (computed 0 seconds ago)
Apply Lag: 0 seconds (computed 0 seconds ago)
Average Apply Rate: 0 Byte/s
Real Time Query: ON
Database Status:

I’ve a few tables in the tablespace USERS and this is what I want to transport to ODS database:

SQL> select segment_name,segment_type,tablespace_name from user_segments;

Snapshot standby

With Data Guard it is easy to open temporarily the standby database. Just convert it to a snapshot standby with a simple command:

DGMGRL> connect system/oracle@//db1b
DGMGRL> convert database db1b to snapshot standby;
Converting database "db1b" to a Snapshot Standby database, please wait...
Database "db1b" converted successfully


Here you could start doing some extraction/load, but it's better to keep the window where the standby is out of sync as short as possible. The only thing we will do is export the tablespace in the fastest way: TTS.

First, we put the USERS tablespace in read only:

SQL> connect system/oracle@//db1b
SQL> alter tablespace users read only;
Tablespace altered.

and create a directory to export metadata:

SQL> create directory TMP_DIR as '/tmp';
Directory created.

Then export is easy

SQL> host expdp system/oracle@db1b transport_tablespaces=USERS directory=TMP_DIR
Starting "SYSTEM"."SYS_EXPORT_TRANSPORTABLE_01": system/********@db1b transport_tablespaces=USERS directory=TMP_DIR
Master table "SYSTEM"."SYS_EXPORT_TRANSPORTABLE_01" successfully loaded/unloaded
Datafiles required for transportable tablespace USERS:
Job "SYSTEM"."SYS_EXPORT_TRANSPORTABLE_01" successfully completed at Wed Oct 19 21:03:36 2016 elapsed 0 00:00:52

I have the metadata in /tmp/expdat.dmp and the data in /u02/oradata/db1/users01.dbf. I copy this datafile directly to its destination for the ODS database:

[oracle@VM118 ~]$ cp /u02/oradata/db1/users01.dbf /u02/oradata/ODS/users01.dbf

This is a physical copy, which is the fastest data movement we can do.

I'm ready to import it into my ODS database, but I can already re-sync the standby database because I have extracted everything I wanted.

Re-sync the physical standby

DGMGRL> convert database db1b to physical standby;
Converting database "db1b" to a Physical Standby database, please wait...
Operation requires shut down of instance "db1" on database "db1b"
Shutting down instance "db1"...
Connected to "db1B"
Database closed.
Database dismounted.
ORACLE instance shut down.
Operation requires start up of instance "db1" on database "db1b"
Starting instance "db1"...
ORACLE instance started.
Database mounted.
Connected to "db1B"
Continuing to convert database "db1b" ...
Database "db1b" converted successfully

The duration depends on the time to flash back the changes (we made none here, as we only exported) and the time to apply the redo generated since the conversion to snapshot standby (a window we kept as short as possible).

This whole process can be automated. We did that at several customers and it works well. No need to change anything unless you have new tablespaces.


Here is the import to the ODS database and I rename the USERS tablespace to ODS_USERS:

SQL> host impdp system/oracle transport_datafiles=/u02/oradata/ODS/users01.dbf directory=TMP_DIR remap_tablespace=USERS:ODS_USERS
Master table "SYSTEM"."SYS_IMPORT_TRANSPORTABLE_01" successfully loaded/unloaded
Starting "SYSTEM"."SYS_IMPORT_TRANSPORTABLE_01": system/******** transport_datafiles=/u02/oradata/ODS/users01.dbf directory=TMP_DIR remap_tablespace=USERS:ODS_USERS
Job "SYSTEM"."SYS_IMPORT_TRANSPORTABLE_01" completed with 3 error(s) at Wed Oct 19 21:06:18 2016 elapsed 0 00:00:10

Everything is there. You have all your data in ODS_USERS. You can have other data/code in this database; only the ODS_USERS tablespace has to be dropped to be re-imported. You can have your staging tables here and even permanent tables.

12c pluggable databases

In 12.1 it is even easier because the multitenant architecture gives the possibility to transport the pluggable databases in one command, through file copy or database links. It is even faster because metadata are transported physically with the PDB SYSTEM tablespace. I said multitenant architecture here, and didn’t mention any option. Multitenant option is needed only if you want multiple PDBs managed by the same instance. But if you want the ODS database to be an exact copy of the operational database, then you don’t need any option to unplug/plug.

In 12.1 you need to put the source in read only, so you still need a snapshot standby. And from my test, there's no problem converting it back to a physical standby after a PDB has been unplugged. In the next release, we may not need a standby at all, because it has been announced that PDBs can be cloned online.

I'll explain the multitenant features available without any option (in the current and next 12c releases) at the Oracle Geneva office on the 23rd of November:

Do not hesitate to register by e-mail.


The article Datawarehouse ODS load is fast and easy in Enterprise Edition appeared first on the dbi services blog.

Thought Leader Webcast - Modernize Employee Engagement: Making Culture Actionable

WebCenter Team - Wed, 2016-10-19 13:38
Oracle Webcast - Making Culture Actionable: Don't Let Your Company Culture Just Happen

Right now 7 out of 10 people in your organization are not actively engaged at work. Disengaged workforces are a global problem; and the costs are high. In the U.S. alone, companies are reporting $450 billion to $550 billion in lost productivity each year.

Join XPLANE founder and industry thought leader Dave Gray as he discusses how culture — the formal and informal values, behaviors, and beliefs practiced in an organization — can help you:
  • Understand how IT empowers Line of Business users to better engage employees
  • Create change and motivate business agility
  • Drive business results
Register Now to join us for this live webcast.

October 27, 2016
10:00AM PT
1:00PM ET

Featured Speakers
Dave Gray
Entrepreneur, Author, Consultant and Founder

Kellsey Ruppel
Principal Product Marketing Director

Oracle Discoverer Security Alert - High impact to SOX Compliance and Financial Reporting

For those clients using Oracle Discoverer, especially those using Discoverer with the Oracle E-Business Suite for financial reporting, the October 2016 Oracle Critical Patch Update (CPU) includes a high-risk vulnerability reported by Integrigy Corporation. CVE-2016-5495 is a vulnerability in the Discoverer EUL Code and Schema and has a CVSS base score of 7.5. Integrigy believes this vulnerability affects all versions of Discoverer used with the Oracle E-Business Suite and that the confidentiality, integrity, and availability of reports are at risk.

Oracle's recommendation is that clients migrate to Oracle Business Intelligence Enterprise Edition (OBIEE), Oracle Business Intelligence Cloud Service, or Oracle Business Intelligence Applications. If you are still using Discoverer, Oracle recommends upgrading to Fusion Middleware 11g patch set 6 and applying the October 2016 Critical Patch Update Discoverer patch (24716502). Be sure to also apply the CPU patches to WebLogic (10.3.6 and higher) and to the database supporting the WebLogic repository.

If you have any questions, please contact us at info@integrigy.com

For more information

October 2016 CPU Announcement: http://www.oracle.com/technetwork/security-advisory/cpuoct2016-2881722.html

Patch Set Update and Critical Patch Update October 2016 Availability Document (Doc ID 2171485.1)

ALERT: Premier Support Ends Dec 31 2011 for Oracle Fusion Middleware 10g 10.1.2 & 10.1.4 (Doc Id: 1290974.1)

Using Discoverer 11.1.1 with Oracle E-Business Suite Release 12 (Doc Id: 1074326.1)

Using Discoverer 11.1.1 with Oracle E-Business Suite Release 11i (Doc Id: 1073963.1)

Vulnerability, Sarbanes-Oxley (SOX), Oracle E-Business Suite, Oracle Critical Patch Updates
Categories: APPS Blogs, Security Blogs

