Feed aggregator

New OA Framework 12.2.5 Update 12 Now Available

Steven Chan - Mon, 2017-05-08 02:00

Web-based content in Oracle E-Business Suite Release 12 runs on the Oracle Application Framework (also known as OA Framework, OAF, or FWK) user interface libraries and infrastructure. Since the initial release of Oracle E-Business Suite Release 12.2 in 2013, we have released a number of cumulative updates to Oracle Application Framework to fix performance, security, and stability issues.

These updates are provided in cumulative Release Update Packs, and cumulative Bundle Patches that can be applied on top of the Release Update Packs. In this context, cumulative means that the latest RUP or Bundle Patch contains everything released earlier.

The latest OAF update for Oracle E-Business Suite Release 12.2.5 is now available:

Where is this update documented?

Instructions for installing this OAF Release Update Pack are in the following My Oracle Support knowledge document:

Who should apply this patch?

All Oracle E-Business Suite Release 12.2.5 users should apply this patch.  Future OAF patches for EBS Release 12.2.5 will require this patch as a prerequisite. 

What's new in this update?

This bundle patch is cumulative: it includes 39 fixes in total, including all fixes released in previous EBS Release 12.2.5 bundle patches.

This latest bundle patch includes fixes for the following bugs/issues:

  • A favorite link to a destination outside Oracle E-Business Suite that is configured to open in a new browser window opens in the same window from the Framework Simplified Home page.
  • A trusted domain URL, such as a UIX/Cabo URL, redirects to an untrusted site when a malicious URL is framed.

Related Articles

Categories: APPS Blogs

May 2017 Update to E-Business Suite Technology Codelevel Checker (ETCC)

Steven Chan - Mon, 2017-05-08 02:00

The E-Business Suite Technology Codelevel Checker (ETCC) tool helps you identify application or database tier bugfixes that need to be applied to your Oracle E-Business Suite Release 12.2 system. ETCC maps missing bugfixes to the default corresponding patches, and displays them in a patch recommendation summary.

What’s New

ETCC has been updated to include bug fixes and patching combinations for the following software:

Recommended Versions

  • April 2017 Database PSU and Proactive Bundle Patch
  • April 2017 Database PSU and Engineered Systems Patch
  • Microsoft Windows Bundle Patch

Minimum Versions

  • January 2017 Database PSU and Proactive Bundle Patch
  • October 2016 Database PSU and Engineered Systems Patch

Obtaining ETCC

We recommend always using the latest version of ETCC, as new bugfixes will not be checked by older versions of the utility. The latest version of the ETCC tool can be downloaded via Patch 17537119 from My Oracle Support.

Related Articles



Categories: APPS Blogs

Oracle Utilities Customer Care and Billing V2. is now available

Anthony Shorten - Sun, 2017-05-07 23:28

Oracle Utilities Customer Care And Billing V2. is now available for download and installation from Oracle's Delivery Cloud. This is the first Oracle Utilities product to release on the Oracle Utilities Application Framework V4., also known as 4.3 SP4.

The latest Oracle Utilities Application Framework includes the latest updates, new functionality, content we have delivered from our cloud offerings and new versions of platforms. The release media includes a new set of updated documentation:

  • Updated versions of the online documentation, available via the Oracle Help engine online and in offline format as well.
  • New technical documentation about installation, operations, and security.
  • A new API Guide for the management APIs, now included in the release documentation. These APIs are used by our new management interfaces and by our next release of the OEM Management Pack for Oracle Utilities.
  • As described in my earlier OUAF Release Summary posts, you can see which Framework features are now available for Oracle Utilities Customer Care And Billing customers to utilize.

With the general availability of the Oracle Utilities Application Framework V4., a series of articles and new versions of whitepapers will be released over the coming months to highlight new features available for use in cloud and on-premise implementations of these products.

List of Visualizations and Chart Engines

Nilesh Jethwa - Sun, 2017-05-07 12:30

The internet is quickly filling up with wonderful visualizations and this is because of greater awareness that "Visuals are better than spreadsheets" for conveying the story.

This wouldn't be possible without the innumerable charting engines made available by the dedication of open source developers.

Check out the wonderful and interesting list of charts and engines that were submitted to Hacker News.

Read more at http://www.infocaptor.com/dashboard/big-list-of-visualizations-and-chart-engines

How To Package JET Hybrid Mobile Application for Release (Android Platform)

Andrejus Baranovski - Sun, 2017-05-07 11:01
If you want to build/package a JET Hybrid application, you must issue the build:release or serve:release command. Read more about it in the JET developer guide: Packaging Hybrid Mobile Applications. In order to run the build:release or serve:release commands successfully, you need to create a buildConfig.json file, which includes information about a self-signed certificate. This allows the tooling to sign the application and package it for release.

The steps below are tested for the Android platform.

You can generate a certificate with the Java keytool utility. Navigate to the Java home bin folder and run keytool. Specify the correct path and your preferred alias:

keytool -genkey -v -keystore /Users/andrejusbaranovskis/jdeveloper/mywork/jellyhouse-release-key.keystore -alias RedSamuraiConsulting -keyalg RSA -keysize 2048 -validity 10000

You will be asked to enter additional information, such as name, organization, location, etc.:

Once the certificate is generated, you can create an empty buildConfig.json file. I created it in the root directory of the JET Hybrid application; the certificate file is copied into the same location:

Provide the release information in buildConfig.json. Since the certificate file is located in the same folder, it is enough to specify its name without a path. Include the alias name, certificate password, and keystore password:
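As an illustration, a minimal buildConfig.json for an Android release build might look like the sketch below. The file name and alias match the keytool example above, but the password values are placeholders, and the structure (which follows the Cordova build configuration format) is an assumption on my part rather than something shown in the original post:

```json
{
  "android": {
    "release": {
      "keystore": "jellyhouse-release-key.keystore",
      "storePassword": "your-keystore-password",
      "alias": "RedSamuraiConsulting",
      "password": "your-alias-password"
    }
  }
}
```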

If buildConfig.json contains correct entries, build:release should run successfully:

sudo grunt build:release --platform=android --buildConfig=buildConfig.json

Successful result output:

The JET Hybrid release app built for the Android platform is 7.5 MB (the major part is Cordova libraries):

So, if you create a self-signed certificate and populate buildConfig.json correctly, it is very easy to run a release build for an Oracle JET Hybrid application.

Can you open PDB$SEED read write?

Yann Neuhaus - Sun, 2017-05-07 10:27

If you are in multitenant, you probably already felt the desire to open the PDB$SEED in READ WRITE mode.

  • Can you open PDB$SEED read write yourself? Yes and No.
  • Should you open PDB$SEED read write yourself? Yes and No.
  • How to run upgrade scripts that need to write to PDB$SEED? catcon.pl

In 12.1 you have no reason to open the seed read write yourself. In 12.2 there is one reason: when you are in LOCAL UNDO mode, you may want to customize the UNDO tablespace.

12c in shared undo

I am in 12.1 or in 12.2 in shared undo mode:
SYS@CDB$ROOT SQL> select * from database_properties where property_name like '%UNDO%';
no rows selected

When the CDB is opened, the PDB$SEED is opened in read only mode.
SYS@CDB$ROOT SQL> show pdbs

I try to open the PDB$SEED in read write mode (FORCE is a shortcut that avoids having to close it first):
SYS@CDB$ROOT SQL> alter pluggable database pdb$seed open force;
Error starting at line : 1 in command -
alter pluggable database pdb$seed open force
Error report -
ORA-65017: seed pluggable database may not be dropped or altered
65017. 00000 - "seed pluggable database may not be dropped or altered"
*Cause: User attempted to drop or alter the Seed pluggable database which is not allowed.
*Action: Specify a legal pluggable database name.

Obviously, this is impossible and clearly documented. PDB$SEED is not a legal pluggable database for this operation.

Oracle Script

There is an exception to that: internal Oracle scripts need to run statements in the PDB$SEED. They run with “_oracle_script”=true where this operation is possible:

SYS@CDB$ROOT SQL> alter session set "_oracle_script"=true;
Session altered.
SYS@CDB$ROOT SQL> alter pluggable database pdb$seed open read write force;
Pluggable database PDB$SEED altered.


Of course, when upgrading, there are phases where you need the seed opened read-write. But you don't do that yourself. The scripts to run in each container are called through catcon.pl which, by default, opens the seed read-write and ensures that the initial open mode is restored at the end, even in case of error.

-m mode in which PDB$SEED should be opened; one of the following values
may be specified:
- UNCHANGED - leave PDB$SEED in whatever mode it is already open
- READ WRITE (default)

I have the following “/tmp/show_open_mode.sql” script

column name format a10
select name,open_mode,current_timestamp-open_time from v$containers;

I call it with catcon to run in PDB$SEED:

$ORACLE_HOME/perl/bin/perl $ORACLE_HOME/rdbms/admin/catcon.pl -c 'PDB$SEED' -n 1 -d /tmp -l /tmp -b tmp show_open_mode.sql

Here is the output in /tmp/tmp0.log

catconExec(): @/tmp/show_open_mode.sql
SQL> column name format a10
SQL> select name,open_mode,current_timestamp-open_time from v$containers;

NAME       OPEN_MODE  CURRENT_TIMESTAMP-OPEN_TIME
---------- ---------- ---------------------------------------------------------------------------
PDB$SEED   READ WRITE +000000000 00:00:00.471398
==== @/tmp/show_open_mode.sql Container:PDB$SEED Id:2 17-05-07 05:02:06 Proc:0 ====
==== @/tmp/show_open_mode.sql Container:PDB$SEED Id:2 17-05-07 05:02:06 Proc:0 ====

The PDB$SEED was opened READ WRITE to run the statements.

We can see that in alert.log:

alter pluggable database pdb$seed close immediate instances=all
ALTER SYSTEM: Flushing buffer cache inst=0 container=2 local
Pluggable database PDB$SEED closed
Completed: alter pluggable database pdb$seed close immediate instances=all
alter pluggable database pdb$seed OPEN READ WRITE
Database Characterset for PDB$SEED is WE8MSWIN1252
Opening pdb PDB$SEED (2) with no Resource Manager plan active
Pluggable database PDB$SEED opened read write
Completed: alter pluggable database pdb$seed OPEN READ WRITE
alter pluggable database pdb$seed close immediate instances=all
ALTER SYSTEM: Flushing buffer cache inst=0 container=2 local
Pluggable database PDB$SEED closed
Completed: alter pluggable database pdb$seed close immediate instances=all
alter pluggable database pdb$seed OPEN READ ONLY instances=all
Database Characterset for PDB$SEED is WE8MSWIN1252
Opening pdb PDB$SEED (2) with no Resource Manager plan active
Pluggable database PDB$SEED opened read only
Completed: alter pluggable database pdb$seed OPEN READ ONLY instances=all

When the pre-upgrade and post-upgrade scripts are run from DBUA you can see the following in the logs:
exec_DB_script: opened Reader and Writer
exec_DB_script: executed connect / AS SYSDBA
exec_DB_script: executed alter session set "_oracle_script"=TRUE
exec_DB_script: executed alter pluggable database pdb$seed close immediate instances=all
exec_DB_script: executed alter pluggable database pdb$seed OPEN READ WRITE

This is displayed because DBUA runs catcon.pl in debug mode and you can do the same by adding ‘-g’ to the catcon.pl arguments.

12cR2 in local undo

In 12.2 there is a case where you can make a change to the PDB$SEED to customize the UNDO tablespace template. Here I am changing to LOCAL UNDO:

SYS@CDB$ROOT SQL> shutdown immediate;
Database closed.
Database dismounted.
ORACLE instance shut down.
SYS@CDB$ROOT SQL> startup upgrade;
ORACLE instance started.
Total System Global Area 1107296256 bytes
Fixed Size 8791864 bytes
Variable Size 939526344 bytes
Database Buffers 150994944 bytes
Redo Buffers 7983104 bytes
Database mounted.
Database opened.
SYS@CDB$ROOT SQL> alter database local undo on;
Database altered.
SYS@CDB$ROOT SQL> shutdown immediate;
Database closed.
Database dismounted.
ORACLE instance shut down.
SYS@CDB$ROOT SQL> select * from database_properties where property_name like '%UNDO%';
PROPERTY_NAME      PROPERTY_VALUE DESCRIPTION
------------------ -------------- -----------------------------
LOCAL_UNDO_ENABLED TRUE           true if local undo is enabled

PDB$SEED is read only:

SYS@CDB$ROOT SQL> show pdbs

and _oracle_script is not set:

SYS@CDB$ROOT SQL> show parameter script
NAME TYPE VALUE
---- ---- -----

I get no error now and can open the seed in read-write mode:

SYS@CDB$ROOT SQL> alter pluggable database PDB$SEED open force;
Pluggable database PDB$SEED altered.
SYS@CDB$ROOT SQL> show pdbs

Customize UNDO seed

Once you open it read write, an undo tablespace is created. If you want to customize it, you can create another one and drop the previous one. This requires changing the undo_tablespace parameter:

SYS@CDB$ROOT SQL> show parameter undo
NAME              TYPE    VALUE
----------------- ------- ------
undo_tablespace   string  UNDO_1
SYS@CDB$ROOT SQL> create undo tablespace UNDO;
Tablespace UNDO created.
SYS@CDB$ROOT SQL> alter system set undo_tablespace=UNDO;
System SET altered.
SYS@CDB$ROOT SQL> drop tablespace UNDO_1 including contents and datafiles;
Tablespace UNDO_1 dropped.
SYS@CDB$ROOT SQL> shutdown immediate
Pluggable Database closed

You can leave it like this: just close and re-open read only. If you want to keep the same undo tablespace name as before, you need to play with create and drop, and change undo_tablespace again.

So what?

Don’t forget that you should not modify or drop PDB$SEED. If you want a customized template for your PDB creations, then you should create your own PDB template to clone. You can clone remotely, so this is possible in single-tenant as well. Opening the PDB$SEED in read write mode is possible only as an exception, for creating the UNDO tablespace in PDB$SEED when you move to local undo mode. Even that is not required: an UNDO tablespace will be created automatically when you open a PDB that has no undo_tablespace.
When running pre-upgrade and post-upgrade scripts, don’t worry: catcon.pl is there to help run scripts in containers and handles that for you.


Cet article Can you open PDB$SEED read write? est apparu en premier sur Blog dbi services.


Michael Dinh - Sun, 2017-05-07 06:02

The script provides info on whether the tablespace is BIGFILE and its existing increment.
What I did may or may not be for the better: alter tablespace TBSNAME_XXXX autoextend on next 1g maxsize 250g;

sqlplus / as sysdba @free.sql TBSNAME_XXXX

SQL*Plus: Release Production on Sun May 7 05:47:54 2017

Copyright (c) 1982, 2013, Oracle.  All rights reserved.

Connected to:
Oracle Database 11g Release - 64bit Production
With the Automatic Storage Management option

BIG TABLESPACE_NAME                      BLKSZ   DFCT    CT_FRAG MB_FREE_FRAG     MB_FREE    MB_TOTAL PCT_USED   MAX_MB_SZ MAX_PCT_USED
--- ----------------------------------- ------ ------ ---------- ------------ ----------- ----------- -------- ----------- ------------
YES *a s TBSNAME_XXXX                      8192      1         24        3,968      79,788     179,199    97.79     256,000        68.45
                                               ------                         -----------                      -----------
sum                                                 1                              79,788                          256,000

   FILE_ID FILE_NAME                                          AUT          GB      INC_GB      MAX_GB
---------- -------------------------------------------------- --- ----------- ----------- -----------
        11 +DATA/xxx/datafile/TBSNAME_XXXX.274.800368305       YES         175           1         250

SQL> exit


set line 150 echo off verify off trimspool off tab off
break on report
COMPUTE sum of mb_used on report
COMPUTE sum of mb_free on report
COMPUTE sum of max_mb_sz on report
COMPUTE sum of dfct on report
COLUMN file_name format a50
COLUMN mb_used format 99,999,999
COLUMN mb_free format 99,999,999
COLUMN mb_total format 99,999,999
COLUMN max_mb_sz format 99,999,999
COLUMN mb_free_frag format 99,999,999
COLUMN dfct format 99999
COLUMN blksz format 99999
COLUMN pct_used format 999.99
COLUMN max_pct_used format 999.99
COLUMN gb format 99,999,999
COLUMN inc_gb format 99,999,999
COLUMN max_gb format 99,999,999
SELECT bigfile,
DECODE(extent_management,'LOCAL','*',' ') ||
DECODE(segment_space_management,'AUTO','a ','m ') ||
DECODE(allocation_type,'SYSTEM','s ','u ') ||
fs.tablespace_name tablespace_name, block_size blksz, dfct,
fs.nfrag ct_frag,
fs.mxfrag / 1048576 mb_free_frag,
fs.free_bytes / 1048576 mb_free,
df.avail / 1048576 mb_total,
(df.avail-fs.mxfrag)/df.avail*100 pct_used,
df.max_bytes / 1048576 max_mb_sz,
(df.avail-fs.mxfrag)/df.max_bytes*100 max_pct_used
FROM dba_tablespaces ts,
(SELECT tablespace_name, count(*) dfct,
SUM(decode(maxbytes,0,user_bytes,greatest(maxbytes,user_bytes))) max_bytes,
SUM(user_bytes) avail
FROM dba_data_files
GROUP BY tablespace_name
) df,
(SELECT tablespace_name, nvl(sum(bytes),0) free_bytes, count(bytes) nfrag, nvl(max(bytes),0) mxfrag
FROM dba_free_space
GROUP BY tablespace_name
) fs
WHERE fs.tablespace_name = ts.tablespace_name(+)
AND fs.tablespace_name = df.tablespace_name
AND regexp_like(fs.tablespace_name,'&1','i')
ORDER BY pct_used desc
;
SELECT file_id, file_name, autoextensible aut,
bytes/1024/1024/1024 gb,
increment_by*(bytes/blocks)/1024/1024/1024 inc_gb,
maxbytes/1024/1024/1024 max_gb
FROM dba_data_files
WHERE regexp_like(tablespace_name,'&1','i')
ORDER BY 4 asc
;
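For reference, the INC_GB expression (increment_by*(bytes/blocks)/1024/1024/1024) multiplies increment_by, which Oracle stores as a number of blocks, by the datafile block size and scales to gigabytes. A minimal sketch of that arithmetic, assuming the 8192-byte block size shown in the output above (the function name is mine, for illustration):

```python
# INC_GB arithmetic: increment_by is a block count, so multiply by the
# block size in bytes and divide by 1024^3 to get gigabytes.
def inc_gb(increment_by_blocks, block_size_bytes):
    return increment_by_blocks * block_size_bytes / 1024 ** 3

# "autoextend on next 1g" with an 8K block size means 131072 blocks per extend
print(inc_gb(131072, 8192))  # 1.0
```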

Testing new PostgreSQL features before alpha/beta/rc releases

Yann Neuhaus - Sun, 2017-05-07 04:32

A long time ago I blogged on how you can use the PostgreSQL development snapshots to test new PostgreSQL features before the alpha/beta/rc releases are officially out. Another way to do this is to use git to get the latest sources and build PostgreSQL from there. Everything that was committed will be available to test. By the way: a great way to stay up to date is to subscribe to the mailing list just referenced. You'll get a mail for each commit that happened; maybe one of them catches your attention?

To start you’ll obviously need git. For distributions using yum this is just a matter of:

postgres@pgbox:/home/postgres/ [pg960final] sudo yum install git

For systems using apt use:

postgres@pgbox:/home/postgres/ [pg960final] sudo apt-get install git

Depending on how you want to configure PostgreSQL you’ll need some development packages as well. For yum based systems this is a good starting point:

postgres@pgbox:/home/postgres/ [pg960final] sudo yum install -y gcc openldap-devel python-devel readline-devel redhat-lsb bison flex perl-ExtUtils-Embed zlib-devel crypto-utils openssl-devel pam-devel libxml2-devel libxslt-devel tcl tcl-devel openssh-clients bzip2 net-tools wget screen ksh unzip

For apt based systems you might want to start with this:

postgres@pgbox:/home/postgres/ [pg960final] sudo apt-get install libldap2-dev libpython-dev libreadline-dev libssl-dev bison flex libghc-zlib-dev libcrypto++-dev libxml2-dev libxslt1-dev tcl tclcl-dev bzip2 wget screen ksh libpam0g-dev libperl-dev make unzip libpam0g-dev tcl-dev python

Not all of those packages are required, they just reflect what we usually install before building PostgreSQL from source. Of course you should adjust this and remove packages that are not required for what you plan to do.

How do you then get the latest PostgreSQL sources? Quite easy, it is documented in the PostgreSQL wiki:

postgres@pgbox:/home/postgres/ [pg960final] mkdir IwantToTest
postgres@pgbox:/home/postgres/ [pg960final] cd IwantToTest/
postgres@pgbox:/home/postgres/IwantToTest/ [pg960final] git clone git://git.postgresql.org/git/postgresql.git

The result should look similar to this:

Cloning into 'postgresql'...
remote: Counting objects: 629074, done.
remote: Compressing objects: 100% (95148/95148), done.
remote: Total 629074 (delta 534080), reused 626282 (delta 531478)
Receiving objects: 100% (629074/629074), 184.31 MiB | 26.40 MiB/s, done.
Resolving deltas: 100% (534080/534080), done.

From now on you have the complete PostgreSQL sources locally available.

postgres@pgbox:/home/postgres/IwantToTest/ [pg960final] cd postgresql/; ls
aclocal.m4  config  configure  configure.in  contrib  COPYRIGHT  doc  GNUmakefile.in  HISTORY  Makefile  README  README.git  src

Ready to test? Yes, but what? One possible way to start is asking git for what was committed recently:

postgres@pgbox:/home/postgres/IwantToTest/postgresql/ [pg960final] git log
commit 0de791ed760614991e7cb8a78fddd6874ea6919d
Author: Peter Eisentraut peter_e@gmx.net
Date:   Wed May 3 21:25:01 2017 -0400

    Fix cursor_to_xml in tableforest false mode
    It only produced <row> elements but no wrapping <table> element.
    By contrast, cursor_to_xmlschema produced a schema that is now correct
    but did not previously match the XML data produced by cursor_to_xml.
    In passing, also fix a minor misunderstanding about moving cursors in
    the tests related to this.
    Reported-by: filip@jirsak.org
    Based-on-patch-by: Thomas Munro thomas.munro@enterprisedb.com

Usually you can find a link to the discussion in the commit message, so you can read through the history of a specific commit. Another way is to read the development documentation or the upcoming release notes once available.

All you need to do then is to build PostgreSQL:

postgres@pgbox:/home/postgres/IwantToTest/postgresql/ [pg960final] ./configure
postgres@pgbox:/home/postgres/IwantToTest/postgresql/ [pg960final] make all
postgres@pgbox:/home/postgres/IwantToTest/postgresql/ [pg960final] sudo make install
postgres@pgbox:/home/postgres/IwantToTest/postgresql/ [pg960final] cd contrib
postgres@pgbox:/home/postgres/IwantToTest/postgresql/contrib/ [pg960final] make all
postgres@pgbox:/home/postgres/IwantToTest/postgresql/contrib/ [pg960final] sudo make install
postgres@pgbox:/home/postgres/IwantToTest/postgresql/contrib/ [pg960final] /usr/local/pgsql/bin/initdb -D /var/tmp/test
postgres@pgbox:/home/postgres/IwantToTest/postgresql/contrib/ [pg960final] /usr/local/pgsql/bin/pg_ctl -D /var/tmp/test start
postgres@pgbox:/home/postgres/IwantToTest/postgresql/contrib/ [pg960final] /usr/local/pgsql/bin/psql postgres
psql (10devel)
Type "help" for help.

pgbox/postgres MASTER (postgres@5432) # 

Happy testing …


Cet article Testing new PostgreSQL features before alpha/beta/rc releases est apparu en premier sur Blog dbi services.

Consequences of stopping Oracle support

Amis Blog - Sun, 2017-05-07 03:40

When buying licenses for Oracle, the first year of support is mandatory. After that, a customer may decide to stop paying for the yearly technical support of the Oracle licenses. The consequences of that decision are not always clear to customers. Most OLSAs will contain the sentence: “If you decide not to purchase technical support, you may not update any unsupported program licenses with new versions of the program.”

This is correct, but there is more to consider. This post will cover the elements that should be weighed when deciding to stop support.

Unsupported actions

The Technical Support Policy of Oracle clarifies a bit more of what actions a customer is not entitled to do when stopping the support:

Customers with unsupported programs are not entitled to download, receive, or apply updates, maintenance  releases, patches, telephone assistance, or any other technical support services for unsupported programs.

This means the software instantly becomes legacy, AND a substantial risk. The Oracle software will not be upgraded or patched, while the environment (O.S., client software, middleware, other connected software) will be, with the possible effect that the application might not work in the future.


Although Oracle claims that the departments Support, Account Management, and LMS act more or less separately and will not share this kind of information, it is naive to assume that the decision to stop support for (part of) the Oracle licenses has no consequences for the customer's rank on LMS's list for submitting an audit.


Matching Service Levels

The support of the license to be stopped could be part of a so-called ‘subset’. In that case the following rule applies, according to the Support Policy:

You may desupport a subset of licenses in a license set only if you agree to terminate that subset of licenses.

The definition of a license subset is quite elaborate, but here are two examples:

Oracle Database Enterprise Edition with RAC, Diagnostic and Tuning Pack.

Weblogic Suite with SOA Suite

So stopping support of the options is a ‘Matching Service Levels’ issue, which LMS will translate as non-compliance, with the chance that My Oracle Support will not be willing to help when you submit a Service Request.




Support of Oracle software is related to CSI numbers, and there may be several CSI numbers in one contract. A customer may also have more contracts, all with their own negotiated discounts. The following line in the Support Policy is important when stopping support of a line item:

Pricing for support is based upon the level of support and the volume of licenses for which support is ordered. In the event that a subset of licenses on a single order is terminated or if the level of support is reduced, support for the remaining licenses on that license order will be priced at Oracle’s list price for support in effect at the time of termination or reduction minus the applicable standard discount.

This is ‘Repricing’, also called ‘Pricing following Reduction’. So the updated support renewal would be recalculated at a less favorable discount, ending up with no savings: just less product on support for the same cost.
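To make the repricing effect concrete, here is a hedged toy calculation. All numbers (license count, list support price, discount percentages) are invented for illustration and do not come from any Oracle price list:

```python
# Hypothetical repricing example (all figures invented for illustration):
# a customer drops support for 40 of 100 licenses; the remaining 60 are
# repriced at list minus the standard discount instead of the negotiated one.
list_support = 1000.0        # assumed yearly list support price per license
negotiated_discount = 0.60   # assumed originally negotiated discount
standard_discount = 0.25     # assumed standard discount applied on repricing

before = 100 * list_support * (1 - negotiated_discount)  # all 100 on support
after = 60 * list_support * (1 - standard_discount)      # only 60 on support

print(before, after)  # 40000.0 45000.0 -> less product on support, higher bill
```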

This mostly applies to terminating a license rather than terminating support (although the latter is a ‘reduced level of support’), but it is important to know.

Incidentally, terminating a license within a CSI number, instead of stopping support, is in some cases not a reason for repricing, e.g. when there has been a reorganization of contracts in the past.


When a customer decides – for whatever reason – to reinstate the support, there will be a reinstatement fee.

The reinstatement fee is computed as follows:

a) if technical support lapsed, then the reinstatement fee is 150% of the last annual technical support fee you paid for the relevant program;

b) if you never acquired technical support for the relevant programs, then the reinstatement fee is 150% of the net technical support fee that would have been charged
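The quoted 150% rule is straightforward arithmetic; a minimal sketch (the fee amount below is an invented example):

```python
# Reinstatement fee per the quoted policy: 150% of the last annual
# technical support fee paid for the relevant program.
def reinstatement_fee(last_annual_fee):
    return 1.5 * last_annual_fee

print(reinstatement_fee(20000.0))  # 30000.0
```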

Engineered Systems

Stopping support of a product line also has a peculiar effect on products running on engineered systems.

The lifecycle management of engineered systems is maintained by so-called ‘bundle patches’. These bundle patches contain patches for storage firmware, BIOS updates, O.S. updates, and... Oracle software patches.

So, when stopping Oracle support you would still receive the database and middleware patches through the bundle patches, which is not allowed. And although it might be possible not to use these patches, that would break the lifecycle management of the engineered system. I don't think this is advisable.


The prerequisites for making such a decision:

  • An overview of all the Oracle contracts at your firm, which seems pretty obvious but sometimes takes quite an effort.
  • An overview of which licenses you are actually using, compared to what you are entitled to.

The OPEX (Operational or Operating Expenditures) can be decreased, in some cases substantially, but before jumping into action and conclusions, contact someone who understands the risks and is able to look further ahead into the future together with you.


Example OLSA: http://www.oracle.com/us/corporate/pricing/olsa-ire-v122304-070683.pdf

Oracle Software Technical Support Policies :  http://www.oracle.com/us/support/library/057419.pdf

The post Consequences of stopping Oracle support appeared first on AMIS Oracle and Java Blog.

VPD Policy Type and Binding

Tom Kyte - Sun, 2017-05-07 03:26
I have a couple of VPD issues that I'm trying to ensure I fully understand based on reading Chapter 7 of Oracle Database 12c Security. I've seen both examples for well respected Oracle authors. Regarding bind variable reference of the SYS_CONTEXT fun...
Categories: DBA Blogs

Oracle Exadata Express Service – Kicking the Tires (Part 1 – SignUp)

John Scott - Sat, 2017-05-06 16:22

Pretty much immediately after I read the announcement that the Oracle Exadata Express service was available in Europe I decided to sign up to test it out.

Looking at the 3 options available (Exadata Express – X20, X50 and X50IM), I decided to go for the X20 option, primarily because I was interested in how you connect to these instances.

After looking at the pricing options, I noticed a couple of points that jumped out at me.

Firstly you get 1 PDB, no mention of an option to purchase additional ones (I’m guessing you would need to sign up for a new instance rather than being able to clone an existing PDB for Dev / Test / Prod etc).

Secondly it has APEX already installed, which is obviously great if you want to get up and running with APEX right away.

Ok, so let’s go through the signup process and see how smooth it is…..!

After clicking the “Buy Now” button, I’m redirected to the Oracle Store (which if you didn’t notice is an APEX application!).

Clicking on the X20 option lets me choose whether I want to be billed –

  • Month-to-Month
  • 1 Year
  • 2 year
  • 3 Year

I must admit, I was slightly confused at this stage about what the benefit of going for 1 Year or 2 Year etc. was versus Month-to-Month. I didn't seem to get a discount for going multi-year, and in terms of flexibility, for the same cost I could sign up month-to-month and cancel whenever I wanted to (please feel free to point out if I'm being dumb here, but I think they could highlight the differences more clearly).

Then it’s a simple matter of Adding my choice to the cart, hitting checkout and paying for it (nope I’m not going to show you that bit, too many personal details on that page!).

All in all, I was pretty impressed – not too many clicks to sign up. I do have to say that I still find the Oracle Cloud payment / invoicing aspect slightly disconnected compared to, say, Amazon. In Amazon AWS they already have my payment details – I just launch a new instance and get billed for it. Whereas with Oracle Cloud, I need to pay for each new instance before I can launch it (so in essence I'm paying to increase my quota for a specific service type). It might sound like a small quirk, but part of the real ‘wow’ factor of Amazon is the immediacy of being able to spin up an instance on demand quickly. Oh well…I'm sure there are reasons for doing it this way.

So, after I sign up it tells me that I’ll receive an email once my service is available and that I can keep checking on progress via my Orders.

So, I wait…

and wait…

and wait…

About 2 1/2 hours (150 minutes!) later (I lost track of time, but it was roughly then), I received an email –

Note – I’ve omitted the majority of the email since it contains a lot of details on my service URL, CSI etc.

Again, not to gripe too much, but 2.5 hours seems WAY too long to wait. As a frequent presenter at conferences, it would be nice to walk through showing how easy Exadata Express is to set up, but I'm not sure the attendees would wait 2.5 hours for my email confirmation to come through.

Either there’s an element of human interaction going on here (why? Surely all this can be automated), or Oracle is so inundated with people signing up for the service that I ended up at the end of a very long queue. Either way, I really do hope this signup time decreases in the future, or I predict people will get frustrated waiting for the mail to come through.

So that’s it, I now have access to an Exadata Express instance to play with. In the next blog post I’ll go through setting it up and accessing it.


Oracle Exadata Express – Now Available in Europe!

John Scott - Sat, 2017-05-06 13:57

I was very interested to see the announcement by Oracle that the Exadata Express service is now available in Europe.


I’ve been using the Oracle DBaaS service since it was (more or less) first publicly available and have been very impressed with the general performance and availability.

If you’re not familiar with it, the Exadata Express service comes in three flavours:

  • Exadata Express – X20
  • Exadata Express – X50
  • Exadata Express – X50IM

As you would expect, the main differences are around the amount of memory and storage you get, together with some differences in feature availability.

At the time of writing, it breaks down as –

  • Exadata Express – X20
    • 20GB Storage
    • 3GB PGA, 3GB SGA
    • 120GB / Month data transfer
  • Exadata Express – X50
    • 50GB Storage
    • 5GB PGA, 5GB SGA
    • 300GB / Month data transfer
  • Exadata Express – X50IM
    • 50GB Storage
    • 5GB PGA, 10GB SGA (5GB RAM for use with Database In-Memory Column Store)
    • 300GB / Month data transfer

So what about cost (prices correct at time of writing)?

  • Exadata Express – X20
    • $175.00 / Month (£141.00 / Month)
  • Exadata Express – X50
    • $750.00 / Month (£604.00 / Month)
  • Exadata Express – X50IM
    • $950.00 / Month (£765.00 / Month)

To be honest, when I saw these prices I wondered what the break-point would be for choosing Exadata Express versus spinning up a dedicated DBaaS instance. The X20 looks like a decent price; the X50 and X50IM I’m not quite sure about yet. I suspect the prices are too high for the average individual user, but the specs aren’t high enough for corporate users (50GB of storage isn’t a lot these days).
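To make the break-point more concrete, here is a quick back-of-the-envelope calculation of monthly cost per GB of storage, using the list prices and storage sizes quoted above (a rough sketch only – it ignores the memory and data-transfer differences between the shapes):

```python
# Rough cost-per-GB comparison of the three Exadata Express shapes,
# using the list prices (USD/month) and storage sizes quoted above.
shapes = {
    "X20":   {"usd_per_month": 175.00, "storage_gb": 20},
    "X50":   {"usd_per_month": 750.00, "storage_gb": 50},
    "X50IM": {"usd_per_month": 950.00, "storage_gb": 50},
}

for name, s in shapes.items():
    per_gb = s["usd_per_month"] / s["storage_gb"]
    print(f"{name}: ${per_gb:.2f} / GB / month")
# X20 works out cheapest per GB ($8.75), X50IM most expensive ($19.00).
```

On storage alone the X20 is the best value, which matches the gut feeling above – the bigger shapes are really priced on their memory and In-Memory features, not their disk.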

So…next steps…I’m going to sign up for the X20 and start kicking the tires!

DBFS and XAG for Goldengate P2

Michael Dinh - Sat, 2017-05-06 11:26

In order to use agctl commands, we need to know the GoldenGate instance name.

Unfortunately, agctl does not work the same way as srvctl where it’s possible to determine what is configured.

ggsuser@hawk1 ~ $ $ORACLE_HOME/bin/srvctl config database

ggsuser@hawk1 ~ $ $GRID_HOME/bin/agctl config goldengate
XAG-212: Instance '' is not yet registered.
ggsuser@hawk1 ~ $ 

How do we find out what the GoldenGate instance name is? If XAG is configured, we can grep for it.

ggsuser@hawk1 ~ $ $GRID_HOME/bin/crsctl stat res -t|grep -A2 xag
      1        ONLINE  ONLINE       hawk1                STABLE
      1        ONLINE  ONLINE       hawk1                STABLE

ggsuser@hawk1 ~ $ $GRID_HOME/bin/agctl status goldengate gg_xx
Goldengate  instance 'gg_xx' is running on hawk1
ggsuser@hawk1 ~ $ 
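The same discovery trick can be scripted. The sketch below is a hypothetical illustration: it parses sample `crsctl stat res` output (hardcoded here, since the real command needs a running cluster – in practice you would capture it with `subprocess`) and extracts the instance name, assuming the usual `xag.<instance>.goldengate` resource naming:

```python
import re

# Sample `crsctl stat res` output, hardcoded for illustration only.
# On a real cluster: subprocess.run([crsctl, "stat", "res"], capture_output=True).
sample_output = """\
NAME=xag.gg_xx-vip.vip
NAME=xag.gg_xx.goldengate
"""

# XAG GoldenGate resources are assumed to be named xag.<instance>.goldengate;
# pull the instance name out of any matching NAME= line.
instances = re.findall(r"^NAME=xag\.(\S+)\.goldengate$", sample_output, re.MULTILINE)
print(instances)  # -> ['gg_xx']
```

With the instance name in hand, `agctl status goldengate gg_xx` works as shown above.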

Some other useful commands to gather configuration info.

ggsuser@hawk1 ~ $ $GRID_HOME/bin/crsctl stat res|grep xag
ggsuser@hawk1 ~ $ 

ggsuser@hawk1 ~ $ $GRID_HOME/bin/crsctl stat res|grep -i type|sort -u

ggsuser@hawk1 ~ $ $GRID_HOME/bin/crsctl stat res -w "TYPE = xag.goldengate.type" -p
DATABASES= (No DB dependencies - User Exits)
DESCRIPTION="Oracle GoldenGate Clusterware Resource"
START_DEPENDENCIES=hard(xag.gg_xx-vip.vip,dbfs_mount) pullup(xag.gg_xx-vip.vip,dbfs_mount)
ggsuser@hawk1 ~ $ 

You might be thinking: if there are no database dependencies, then why is it referencing the Database Home?

ggsuser@hawk1 ::/u03/gg/12.2.0
$ ldd ggsci 
	linux-vdso.so.1 =>  (0x00007ffcaa8ff000)
	librt.so.1 => /lib64/librt.so.1 (0x00007f6a02c5b000)
	libdl.so.2 => /lib64/libdl.so.2 (0x00007f6a02a56000)
	libgglog.so => /u03/gg/12.2.0/./libgglog.so (0x00007f6a02630000)
	libggrepo.so => /u03/gg/12.2.0/./libggrepo.so (0x00007f6a023ba000)
	libdb-6.1.so => /u03/gg/12.2.0/./libdb-6.1.so (0x00007f6a01fd5000)
	libggperf.so => /u03/gg/12.2.0/./libggperf.so (0x00007f6a01da5000)
	libggparam.so => /u03/gg/12.2.0/./libggparam.so (0x00007f6a00c8d000)
	libicui18n.so.48 => /u03/gg/12.2.0/./libicui18n.so.48 (0x00007f6a0089d000)
	libicuuc.so.48 => /u03/gg/12.2.0/./libicuuc.so.48 (0x00007f6a0051c000)
	libicudata.so.48 => /u03/gg/12.2.0/./libicudata.so.48 (0x00007f69fed57000)
	libpthread.so.0 => /lib64/libpthread.so.0 (0x00007f69feb3a000)
	libxerces-c.so.28 => /u03/gg/12.2.0/./libxerces-c.so.28 (0x00007f69fe574000)
	libantlr3c.so => /u03/gg/12.2.0/./libantlr3c.so (0x00007f69fe35b000)
	libnnz12.so => /u01/app/oracle/product/12.1.0/db_1/lib/libnnz12.so (0x00007f69fdc36000)
	libclntsh.so.12.1 => /u01/app/oracle/product/12.1.0/db_1/lib/libclntsh.so.12.1 (0x00007f69fabbf000)
	libons.so => /u01/app/oracle/product/12.1.0/db_1/lib/libons.so (0x00007f69fa97a000)
	libclntshcore.so.12.1 => /u01/app/oracle/product/12.1.0/db_1/lib/libclntshcore.so.12.1 (0x00007f69fa406000)
	libggnnzitp.so => /u03/gg/12.2.0/./libggnnzitp.so (0x00007f69f9922000)
	libm.so.6 => /lib64/libm.so.6 (0x00007f69f9620000)
	libc.so.6 => /lib64/libc.so.6 (0x00007f69f925e000)
	/lib64/ld-linux-x86-64.so.2 (0x00005624a8090000)
	libstdc++.so.6 => /lib64/libstdc++.so.6 (0x00007f69f8f56000)
	libgcc_s.so.1 => /lib64/libgcc_s.so.1 (0x00007f69f8d3f000)
	libmql1.so => /u01/app/oracle/product/12.1.0/db_1/lib/libmql1.so (0x00007f69f8ac8000)
	libipc1.so => /u01/app/oracle/product/12.1.0/db_1/lib/libipc1.so (0x00007f69f8750000)
	libnsl.so.1 => /lib64/libnsl.so.1 (0x00007f69f8537000)
	libaio.so.1 => /lib64/libaio.so.1 (0x00007f69f8335000)

Wouldn’t it be much better if the GoldenGate installation were self-contained, without having to download and install two components?

Error when trying to Import Schema Using IMP

Tom Kyte - Sat, 2017-05-06 09:06
Hi Tom, I am trying to import a schema that was exported using old export method (EXP), when importing the schema after some time i get below errors IMP-00019 row rejected due to ORACLE error 1400 IMP-0003 ORACLE error 1400 encountered OR...
Categories: DBA Blogs

Partitions in 11g

Tom Kyte - Sat, 2017-05-06 09:06
Hi team, Could you please help to get answer of One of my interview question - Consider a non-partition table x having date column with 1000 rows. How can we insert future rows i.e. from 1001 .. onward into partition (without modifying table st...
Categories: DBA Blogs

export error

Tom Kyte - Sat, 2017-05-06 09:06
Hi team, I have exported schema with expdp and import into to the development database all are fine but 5 tables are missing i checked on production with SELECT COUNT(*) FROM CPG_PROD.MDRT_20315$; -->>It shows 59 rows but when i try to export that...
Categories: DBA Blogs

wait event ' buffer busy wait' on sys.aud$

Tom Kyte - Sat, 2017-05-06 09:06
Hi, In our prod database we could see more buffer busy wait events on query "insert on sys.aud$" table? Can you explain why buffer busy wait occurred on sys.aud$ table and how to avoid ?
Categories: DBA Blogs

How to identify the total number of distinct blocks (LIO) read by a particular SQL? Is it possible at all?

Tom Kyte - Sat, 2017-05-06 09:06
Hi, There are various ways to easily identify the LIO for SQL execution as a primary measure for performance analysis. As many of the index and table blocks are usually read multiple times over and over again for SQL execution, is there a way t...
Categories: DBA Blogs

char vs varchar2 when end result is fixed format value

Tom Kyte - Sat, 2017-05-06 09:06
We have a temporary table with about 500 columns that is used to generate a fixed format file (.txt). If we use all char fields, we can just build the data as field1 || field2 || field3 ... field500 If we use varchar2 we have to rpad each fiel...
Categories: DBA Blogs

The Hello World of Machine Learning – with Python, Pandas, Jupyter doing Iris classification based on quintessential set of flower data

Amis Blog - Sat, 2017-05-06 01:58

Plenty of articles describe this hello world of Machine Learning. I will merely list some references and personal notes – primarily for my own convenience.

The objective is: get a first hands on exposure to machine learning – using a well known example (Iris classification) and using commonly used technology (Python). After this first step, a second step seems logical: doing the same thing with my own set of data.

Useful Resources:

Starting time: 6.55 AM

6.55 AM Download and install latest version of Oracle Virtual Box (5.1.22)

7.00 AM Download Fedora 64-bit ISO image (https://getfedora.org/en/workstation/download/)

7.21 AM Create Fedora VM and install Fedora Linux on it from ISO image (create users root/root and python/python); reboot, complete installation, run dnf update (updates worth 850 MB, 1348 upgrade actions – I regret this step), install VirtualBox Guest Additions (non-trivial) using this article: https://fedoramagazine.org/install-fedora-virtualbox-guest/.

8.44 AM Save a Snapshot of the VM to retain its fresh, mint, new-car-smell condition.

8.45 AM Install Python environment for Machine Learning (Python plus relevant libraries; possibly install Notebook server)

8.55 AM Save another snapshot of the VM in its current state

Now that the environment has been prepared, it is time for the real action – based on the second article in the list of resources.

10.05 AM start on machine learning notebook sample – working through Iris classification

10.15 AM done with sample; that was quick. And pretty impressive.
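The classification step at the heart of the notebook can be sketched in a few lines. The notebook itself uses scikit-learn; the snippet below is a stripped-down, dependency-free illustration of the same idea (nearest-neighbour classification), using a few representative Iris-like measurements rather than the real dataset:

```python
import math

# Tiny hardcoded training set:
# (sepal_len, sepal_wid, petal_len, petal_wid) -> species.
# Representative Iris-like values for illustration, not the real dataset.
training = [
    ((5.1, 3.5, 1.4, 0.2), "setosa"),
    ((4.9, 3.0, 1.4, 0.2), "setosa"),
    ((6.4, 3.2, 4.5, 1.5), "versicolor"),
    ((5.9, 3.0, 4.2, 1.5), "versicolor"),
    ((6.3, 3.3, 6.0, 2.5), "virginica"),
    ((6.5, 3.0, 5.8, 2.2), "virginica"),
]

def classify(sample):
    """1-nearest-neighbour: return the species of the closest training point."""
    _, species = min(
        (math.dist(sample, features), label) for features, label in training
    )
    return species

print(classify((5.0, 3.4, 1.5, 0.2)))  # -> setosa
print(classify((6.4, 3.1, 5.5, 1.8)))  # -> virginica
```

scikit-learn’s `KNeighborsClassifier` does essentially this (plus voting over k neighbours and a lot of engineering) against the full 150-row Iris dataset.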


It seems the Anaconda distribution of Python may be valuable to use. I have downloaded and installed it: https://www.continuum.io/downloads .

Note: to make the contents of a shared Host Directory available to all users

cd (go to home directory of current user)

mkdir share (in the home directory of the user)

sudo mount -t vboxsf Downloads ~/share/ (this makes the shared folder called Downloads on the VirtualBox host available as the directory share in the guest (Fedora))

Let’s see about this thing with Jupyter Notebooks (formerly known as IPython). Installing the Jupyter notebook is discussed here: https://github.com/rasbt/python-machine-learning-book/blob/master/code/ch01/README.md . Since I installed Anaconda (4.3.1 for Python 3.6), I already have the Jupyter app installed.

With the following command, I download a number of notebooks:

git clone https://github.com/rhiever/Data-Analysis-and-Machine-Learning-Projects

Let’s try to run one.

cd /home/python/Data-Analysis-and-Machine-Learning-Projects/example-data-science-notebook

jupyter notebook 'Example Machine Learning Notebook.ipynb'

And the notebook opens in my browser:


I can run the notebook, walk through it step by step, edit the notebook’s contents and run the changed steps. Hey mum, I’m a Data Scientist!

Oh, it’s 11.55 AM right now.


Some further interesting reads to get going with Python, Pandas and Jupyter Notebooks – and with data:

The post The Hello World of Machine Learning – with Python, Pandas, Jupyter doing Iris classification based on quintessential set of flower data appeared first on AMIS Oracle and Java Blog.


Subscribe to Oracle FAQ aggregator