Feed aggregator

Benefits of vendor support for problem resolution

Chris Warticki - Tue, 2017-06-20 13:49

Elaina Stergiades, Research Manager, Software and Hardware Support Services, IDC

As business leaders push for faster adoption of mobile, social and cloud technologies, CIOs and IT managers wrestle with the best approach to embracing these new delivery systems and “keeping the lights on” for important production systems.  With the relentless pace of change in technology in 2017, IDC research shows that most organizations have already started their digital transformation journey across all aspects of the business.  However, the reality is that for most companies, IT must integrate these new technologies with extensive production systems – and support them for the foreseeable future.  At the same time, CIOs must meet demanding internal and external service level agreements across the technology stack that can directly affect the bottom line – no easy feat in these complex environments.

With so much complex technology supporting revenue generation and customer-facing activities, business leaders rely on the IT organization to make sure that technology is operating at peak efficiency.  It doesn’t take much for technology disruptions to negatively affect the customer experience and the bottom line – and it can be difficult to repair that damage once it occurs.  As a result, CIOs and IT managers are looking for assurance from support providers that can minimize potentially harmful downtime when problems affect IT systems.

IDC believes that support directly from the original software or hardware vendors can be a good choice when navigating these complex technology landscapes. Many patches for important software problems, as well as security updates, can only be delivered by the original vendor – which can be critical for enterprises with strict compliance requirements. In addition, IT organizations typically try to diagnose technology problems on their own when problems first occur. This self-diagnosis and resolution can be much faster and easier than always reaching out for help – and original vendors can include advanced tools and utilities in the code that other providers can’t offer.

For CIOs and IT managers considering vendor support to enable accelerated problem resolution, IDC recommends looking for the following support features:

  • Immediate access to the latest updates and patches when problems are resolved, including security updates – a critical consideration with growing security threats
  • An extensive library of tools and information for self-diagnosis and resolution, including online portals, knowledge bases with advanced search capabilities, and a robust peer-to-peer community
  • Advanced remote diagnostic capabilities, including tools with direct ties into the software to isolate and identify complex software issues
  • Direct access to support staff at the original software vendor when problems occur
  • Well-established partner networks in support delivery, with the ability to support complex integrations of technology from multiple hardware and software providers

When considering vendor support offerings and packages, IDC also recommends looking for comprehensive offerings that can ensure broad access to these deliverables. Ideally, the support provider will offer add-on services as needed, both for additional support activities and non-traditional support-related activities (like optimization services and upgrade services). IDC research has shown that IT organizations spend considerable time managing contracts and relationships with vendors and suppliers, and anything that can help minimize that time is helpful. As IT environments grow more complex integrating on-premises and cloud solutions, support deliverables that can help speed problem resolution will be critical for resource-strapped IT organizations.

Elaina Stergiades is the Research Manager for IDC's Software Support Services program. In this position, she provides insight and analysis of industry trends and market strategies for software vendors supporting applications, development environment and systems software. Elaina is also responsible for research, writing and program development of the software support services market.

Prior to joining IDC, Elaina spent 10 years in the software and web design industries. As a quality assurance engineer at Parametric Technology and Weather Services International (WSI), she led testing efforts for new applications and worked closely with customers to design and implement new functionality. Elaina also worked in product marketing at WSI, directing an initiative to launch a new weather crawl system. More recently, she was a project manager at Catalyst online. At Catalyst, Elaina was responsible for managing client search marketing campaigns targeting increased website traffic, revenue and top search engine rankings.

Elaina has a B.S. in mechanical engineering from Cornell University and an M.B.A. from Babson College.

How to update a materialized view directly

Tom Kyte - Tue, 2017-06-20 09:46
Hi, can we update data in a materialized view directly using an UPDATE statement? If yes, will that update be applied to the base table as well? What if there is more than one table in the view? Please help me on this.
Categories: DBA Blogs

Oracle Text ctxrule - MATCHES: special word work and not work?

Tom Kyte - Tue, 2017-06-20 09:46
Hi all, I use CTXRULE to classify some text, but AB&B works and AT&T does not. Please support me on this. Here is my search text: SQL> select text from test_lexer_special; TEXT -------------------- AB&B {AT&T} AT&T SQL> ...
Categories: DBA Blogs

Fast Refreshable Materialized Views - Direct Path Logs on the Master Table

Tom Kyte - Tue, 2017-06-20 09:46
Hello Masters :-), I read the documentation about fast refreshable materialized views. It states: "The changes for conventional DML changes are stored in the materialized view log associated with the master table. The changes for direct-path IN...
Categories: DBA Blogs

Populating sequence for tab delimited input data using SQL*Loader

Tom Kyte - Tue, 2017-06-20 09:46
Hi Tom I have an input file that contains data that is tab delimited containing info like name, type, date, result, value as separate rows. The fields are not enclosed in quotes or anything else. I need to populate sequence for the data while load...
Categories: DBA Blogs

Unload Javascript web page contents to a text file to load in database table

Tom Kyte - Tue, 2017-06-20 09:46
Hello Experts, I have a requirement wherein I need to unload/export web page contents whose data is populated by javascript to a text/csv file; which in turn will be loaded in the database table. I am not sure if utl_http will get the job d...
Categories: DBA Blogs

Basic stored procedures with Oracle

Tom Kyte - Tue, 2017-06-20 09:46
1. Use table [Employee] Write a stored procedure named Get_Employee_Detail which will accept 2 parameters. @col_name varchar(100) @col_value varchar(500) col_name parameter can accept any column name from the table and col_val...
Categories: DBA Blogs

enq: FB - contention

Tom Kyte - Tue, 2017-06-20 09:46
Hi, we are getting enq: FB - contention for some clients and it's because of an insert statement. There are very few Oracle documents and SRs related to this. Can we have an idea of why we are getting this and a way to find and fix the issue ( possible w...
Categories: DBA Blogs

Webcast: Maintenance Strategies for Oracle E-Business Suite

Steven Chan - Tue, 2017-06-20 02:00

Oracle University has a large collection of free recorded webcasts for Oracle E-Business Suite.  Here's an excellent primer for system administrators interested in creating a framework for regular EBS maintenance:

Elke Phelps, Senior Principal Product Manager presents the best techniques Applications DBAs can use to proactively maintain their Oracle E-Business Suite environment. It starts with a practical approach for defining a preventive maintenance strategy and then moves to detailed descriptions and tips for various tools and approaches for efficiently performing maintenance tasks. Topics include patching shortcuts, distributing worker processes across multiple servers, shared file systems, avoiding resource bottlenecks, and more. Leave this session with the tools required for creating a plan to keep your environment healthy and up to date. This material was presented at Oracle OpenWorld 2016. 

Related Articles

Categories: APPS Blogs

12cR2 Application Containers and Foreign Keys

Yann Neuhaus - Mon, 2017-06-19 15:58

Application Container brings a new way to share data among databases, and adds a new dimension to referential integrity. A foreign key in an application PDB can reference a row belonging to a root data link table. But then, should a delete on the root validate that there are no orphans in the application PDBs? And what if those PDBs are closed at the time of this delete? Here is a small example.

If you run this without the fix, you will get an error because the search for the parent key is done only in the current container. This is considered a bug: 21955394: CDB:ORA-02291 WHEN FOREIGN KEY REFERS TO THE PRIMARY KEY IN DATA LINK

The example that follows is run with the patch applied to fix this bug.

I’m connecting to root where I have no user PDB yet.

SQL> connect sys/oracle@//localhost/CDB1A as sysdba
SQL> select con_id, name, application_root, application_pdb, application_root_con_id from v$containers;

I create the application container root

SQL> create pluggable database SCOTT_ROOT as application container admin user SCOTT_ADMIN identified by oracle roles=(DBA);
Pluggable database SCOTT_ROOT created.
SQL> alter pluggable database SCOTT_ROOT open;
Pluggable database SCOTT_ROOT altered.
SQL> select con_id, name, application_root, application_pdb, application_root_con_id from v$containers;

I connect to this application root and start the installation of the application

SQL> connect sys/oracle@//localhost/SCOTT_ROOT as sysdba
SQL> alter pluggable database application SCOTT begin install '1.0';
Pluggable database APPLICATION altered.

I’m installing the SCOTT tables DEPT and EMP, but I changed their definitions from utlsampl.sql:

  • DEPT is an EXTENDED DATA LINK, where a set of rows is common, inserted in the application root and visible to all application PDBs
  • EMP is a METADATA LINK, where each application PDB has its own data isolated from the others, but with the same structure

Grant succeeded.
SQL> alter session set current_schema=SCOTT;
Session altered.
SQL> CREATE TABLE DEPT sharing=extended data
  2  ( DEPTNO NUMBER(2) CONSTRAINT PK_DEPT PRIMARY KEY,
  3    DNAME VARCHAR2(14),
  4    LOC VARCHAR2(13) ) ;
Table DEPT created.
SQL> CREATE TABLE EMP sharing=metadata
  2  ( EMPNO NUMBER(4) CONSTRAINT PK_EMP PRIMARY KEY,
  3    ENAME VARCHAR2(10),
  4    JOB VARCHAR2(9),
  5    MGR NUMBER(4),
  6    HIREDATE DATE,
  7    SAL NUMBER(7,2),
  8    COMM NUMBER(7,2),
  9    DEPTNO NUMBER(2) CONSTRAINT FK_DEPTNO REFERENCES DEPT ) ;
Table EMP created.
SQL> INSERT INTO DEPT VALUES (10,'ACCOUNTING','NEW YORK');
1 row inserted.
SQL> INSERT INTO DEPT VALUES (20,'RESEARCH','DALLAS');
1 row inserted.
SQL> INSERT INTO DEPT VALUES (30,'SALES','CHICAGO');
1 row inserted.
SQL> INSERT INTO DEPT VALUES (40,'OPERATIONS','BOSTON');
1 row inserted.
SQL> commit;
Commit complete.
SQL> alter pluggable database application SCOTT end install '1.0';
Pluggable database APPLICATION altered.

The application root has departments 10, 20, 30 and 40 in DEPT shared with all PDBs and has defined that EMP has a foreign key to it.

I create an application PDB from this application root

SQL> create pluggable database SCOTT_ONE admin user SCOTT_ONE_ADMIN identified by covfefe;
Pluggable database SCOTT_ONE created.
SQL> alter pluggable database SCOTT_ONE open;
Pluggable database SCOTT_ONE altered.

I sync it to get common DDL and DML applied

SQL> connect sys/oracle@//localhost/SCOTT_ONE as sysdba
SQL> alter pluggable database application SCOTT sync;
Pluggable database APPLICATION altered.
SQL> select name,con_id,application_pdb,application_root_con_id from v$containers;

Now let’s connect to the application PDB. I can see the DEPT rows inserted from root because it is a DATA LINK.

SQL> connect scott/tiger@//localhost/SCOTT_ONE
SQL> select * from dept;
    DEPTNO DNAME          LOC
---------- -------------- -------------
        10 ACCOUNTING     NEW YORK
        20 RESEARCH       DALLAS
        30 SALES          CHICAGO
        40 OPERATIONS     BOSTON

EMP is empty here

SQL> select * from emp;
no rows selected

I insert an EMP row in the application PDB which references a DEPT row in the application root:

SQL> insert into emp values
  2  (7369,'SMITH','CLERK',7902,to_date('17-12-1980','dd-mm-yyyy'),800,NULL,20);
1 row inserted.

As DEPT is an EXTENDED DATA LINK, I can add new rows in my PDB:

1 row inserted.

And I can have an EMP row referencing this local parent:

SQL> insert into emp values
  2  (7499,'ALLEN','SALESMAN',7698,to_date('20-2-1981','dd-mm-yyyy'),1600,300,50);
1 row inserted.
SQL> commit;
Commit complete.

This looks good. Now what happens if we delete all rows from DEPT in the application root?

SQL> connect sys/oracle@//localhost/SCOTT_ROOT as sysdba
SQL> delete from SCOTT.DEPT;
4 rows deleted.
SQL> commit;
Commit complete.

No error here. But then, I have orphans in my application PDB:

SQL> connect scott/tiger@//localhost/SCOTT_ONE
SQL> select * from dept;
---------- -------------- -------------
SQL> select * from emp;
---------- ---------- --------- ---------- --------- ---------- ---------- ----------
7369 SMITH CLERK 7902 17-DEC-80 800 20
7499 ALLEN SALESMAN 7698 20-FEB-81 1600 300 50

So what?

Referential integrity works across containers: an application PDB can reference a parent key in the application root (provided that bug is fixed). However, no ORA-02292 (child record found) is raised when child records are not in the current container. This one makes sense: enforcing the verification of child records in all PDBs would require that they are opened, and may require locking the table in all containers. We must be aware that doing DML on the application root can lead to inconsistency if not done correctly.
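To detect such inconsistencies after DML on the root, each opened application PDB can be queried for child rows whose parent no longer exists. A minimal sketch, using the EMP/DEPT tables from the example above:

```sql
-- Find EMP rows whose DEPTNO no longer matches any DEPT row
-- visible in this container (local rows plus shared rows from the root)
select e.empno, e.ename, e.deptno
from   emp e
where  not exists (select 1 from dept d where d.deptno = e.deptno);
```

Run in SCOTT_ONE after the root delete, this would list SMITH (deptno 20) as an orphan, while ALLEN (deptno 50) still has its local parent.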

Operations on the application root are application releases (upgrades and patches) and must be validated and tested carefully. For the example above, deleting all rows from DEPT can be done as an application patch which deletes from the EMP table as well:

SQL> connect sys/oracle@//localhost/SCOTT_ROOT as sysdba
SQL> alter pluggable database application SCOTT begin patch 1 ;
Pluggable database APPLICATION altered.
SQL> delete from scott.emp;
0 rows deleted.
SQL> delete from scott.dept where deptno in (10,20,30,40);
4 rows deleted.
SQL> alter pluggable database application SCOTT end patch 1 ;
Pluggable database APPLICATION altered.

The delete from EMP does nothing in the application root here, but it will be done on the PDB when applying the patch:

SQL> select * from dept;
---------- -------------- -------------
SQL> select * from emp;
---------- ---------- --------- ---------- --------- ---------- ---------- ----------
7499 ALLEN SALESMAN 7698 20-FEB-81 1600 300 50

Note that I’ve defined exactly which rows from DEPT I wanted to delete in the where clause of delete from scott.dept where deptno in (10,20,30,40);
You may be tempted to do something like: delete from scott.dept where deptno in (select deptno from scott.dept);
But keep in mind that the statements you run in the root are re-played as-is in the PDBs. And when you sync the PDB, it can see no rows from DEPT because they were already purged from the root. Actually, what you want is to delete from EMP the rows which refer to the rows you have deleted from the root. It is not possible to get them with a subquery, unless you have stored them in another data link table before deleting them. Changes in the application root must be managed like application patches.
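One way to implement that staging idea can be sketched as follows. Note that DEPT_PURGE is a hypothetical helper table name, not part of the original example; the point is that a DATA LINK table keeps the deleted keys visible to the replayed statements in every PDB:

```sql
-- In the application root, inside begin patch ... end patch:
-- record the keys in a shared (data link) table before purging them
create table scott.dept_purge sharing=data ( deptno number(2) );
insert into scott.dept_purge
  select deptno from scott.dept where deptno in (10,20,30,40);

-- These statements are replayed in each PDB at sync time:
-- remove the children first, then the parents
delete from scott.emp  where deptno in (select deptno from scott.dept_purge);
delete from scott.dept where deptno in (select deptno from scott.dept_purge);
```

Because DEPT_PURGE is shared, the subqueries return the same keys in the root and in every synced PDB, even though the DEPT rows themselves are already gone from the root.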


Cet article 12cR2 Application Containers and Foreign Keys est apparu en premier sur Blog dbi services.

Getting SQL ids for a SQL text

Tom Kyte - Mon, 2017-06-19 15:26
Hello Tom & Team, Can you please provide a way to get the list of SQL ids along with elapsed times and other metrics for a given SQL statement. Once obtained, can I force Oracle to use a particular SQL id which was performing better? Regards, Manjun...
Categories: DBA Blogs

SQL - Get Date Ranges for contiguous records

Tom Kyte - Mon, 2017-06-19 15:26
Hi Tom, I have been on your site numerous times and learnt a lot from your opinions and solutions. I am looking for SQL that displays in chronological order in which country "Tom" lived. Below are my details: --create table. create ...
Categories: DBA Blogs

SQL*Loader - Double Quotes...

Tom Kyte - Mon, 2017-06-19 15:26
Hi, I have data in the below format; how do I load it without losing double quotes... all columns are enclosed in "" and separated by ~, and the last column has values with a double quote that need to be loaded with the quote "Jobid"~"Jobname"~"Status"~"Comment...
Categories: DBA Blogs

best way to derive deltas when there is nothing set up on source

Tom Kyte - Mon, 2017-06-19 15:26
We are looking at a gradual migration of a legacy system running on 11gR1. We do not have control over the source system (although their DBAs are willing to accommodate small requests we might have). The data model is being changed drastical...
Categories: DBA Blogs

More Fine-Grained 'ALTER USER' Privilege

Tom Kyte - Mon, 2017-06-19 15:26
I am currently looking for a more fine-grained approach to the user management within an Oracle 11g Release 2 (soon to be Database: There is an idea to give some users the permission to manage some aspects of a user account such as: -...
Categories: DBA Blogs

Oracle 12c Index unusable after partition drop

Tom Kyte - Mon, 2017-06-19 15:26
In Oracle 11g if global index maintained during execution of the DROP partition command then command could take hours to complete. Oracle 12c said DROP partition command executes immediately and index will remain usable. I had database table in Or...
Categories: DBA Blogs

Security Alert CVE-2017-3629 Released

Oracle Security Team - Mon, 2017-06-19 15:21

Oracle just released Security Alert CVE-2017-3629 to address three vulnerabilities affecting Oracle Solaris:

- Vulnerability CVE-2017-3629 affects Oracle Solaris version 10 and version 11.3 and has a CVSS Base Score of 7.8.
- CVE-2017-3630 affects Oracle Solaris version 10 and version 11.3 and has a CVSS Base Score of 5.3.
- CVE-2017-3631 only affects Oracle Solaris 11.3 and has a CVSS Base Score of 5.3.

Oracle recommends affected Oracle Solaris customers apply the fixes released with this Security Alert as soon as possible.

For More Information:
The Advisory for Security Alert CVE-2017-3629 is located at http://www.oracle.com/technetwork/security-advisory/alert-cve-2017-3629-3757403.html

Installing SQLDeveloper and SQLCL on CentOS

The Anti-Kyte - Mon, 2017-06-19 14:02

As is becoming usual in the UK, the nation has been left somewhat confused in the aftermath of yet another “epoch-defining” vote.
In this case, we’ve just had a General Election campaign in which Brexit – Britain’s Exit from the EU – played a vanishingly small part. However, the result is now being interpreted as a judgement on the sort of Brexit that is demanded by the Great British Public.
It doesn’t help that, beyond prefixing the word “Brexit” with an adjective, there’s not much detail on the options that each term represents.
Up until now, we’ve had “Soft Brexit” and “Hard Brexit”, which could describe the future relationship with the EU but equally could be how you prefer your pillows.
Suddenly we’re getting Open Brexit and even Red-White-and-Blue Brexit.
It looks like the latest craze sweeping the nation is Brexit Bingo.
This involves drawing up a list of adjectives and ticking them off as they get used as a prefix for the word “Brexit”.
As an example, we could use the names of the Seven Dwarfs. After all, no-one wants a Dopey Brexit, ideally we’d like a Happy Brexit but realistically, we’re likely to end up with a Grumpy Brexit.

To take my mind off all of this wacky word-play, I’ve been playing around with CentOS again. What I’m going to cover here is how to install Oracle’s database development tools and persuade them to talk to a locally installed Express Edition database.

Specifically, I’ll be looking at :

  • Installing the appropriate Java Developer Kit (JDK)
  • Installing and configuring SQLDeveloper
  • Installing SQLCL

Sounds like a Chocolate Brexit with sprinkles? OK then…


I’m running on CentOS 7 (64 bit), with the default Gnome 3 desktop.
CentOS is part of the Red Hat family of Linux distros which includes Red Hat, Fedora and Oracle Linux. If you’re running on one of these distros, or on something that’s based on one of them then these instructions should work pretty much unaltered.
If, on the other hand, you’re running a Debian based distro ( e.g. Ubuntu, Mint etc) then you’ll probably find these instructions rather more useful.

I’ve also got Oracle Database 11gR2 Express Edition installed locally. Should you feel so inclined, you can perform that install on CentOS using these instructions.

One other point to note, I haven’t bothered with any Oracle database client software on this particular machine.

Both SQLDeveloper and SQLCL require Java so…

Installing the JDK

To start with, we’ll need to download the JDK version that SQLDeveloper needs to run against. At the time of writing (SQLDeveloper 4.2), this is Java 8.

So, we need to head over to the Java download page
… and download the appropriate rpm package. In our case :


Once the file has been downloaded, open the containing directory in Files, right-click our new rpm and open it with Software Install :

Now press the install button.

Once it’s all finished, you need to make a note of the directory that the jdk has been installed into as we’ll need to point SQLDeveloper at it. In my case, the directory is :


Speaking of SQLDeveloper…


Head over to the SQLDeveloper Download Page and get the latest version. We’re looking for the ??? option. In my case :


While we’re here, we may as well get the latest SQLCL version as well. The download for this is a single file as it’s platform independent.

Once again, we can take advantage of the fact that Oracle provides us with an rpm file by right-clicking it in Files and opening with Software Install.

Press the install button and wait for a bit…

Once the installation is complete, we need to configure SQLDeveloper to point to the JDK we’ve installed. To do this, we need to run :

sh /opt/sqldeveloper/sqldeveloper.sh

…and provide the jdk path when prompted, in this case :


The end result should look something like this :

In my case I have no previous install to import preferences from so I’ll hit the No button.
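If you miss the prompt, or need to change the JDK later, SQLDeveloper records this setting in a per-user configuration file. A sketch, assuming a 4.2 install and an example JDK path (both the version directory and the path will differ on your machine):

```
# ~/.sqldeveloper/4.2.0/product.conf
SetJavaHome /usr/java/jdk1.8.0_131
```

Deleting the SetJavaHome line causes SQLDeveloper to prompt for the JDK path again at next startup.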

Once SQLDeveloper opens, you’ll want to create a connection to your database.

To do this, go to the File Menu and select New/Connection.

To connect as SYSTEM to my local XE database I created a connection that looks like this :

Once you’ve entered the connection details, you can hit Test to confirm that all is in order and you can actually connect to the database.
Provided all is well, hit Save and the connection will appear in the tree on the left side of the tool from this point forward.

One final point to note, as part of the installation, a menu item for SQLDeveloper is created in the Programming Menu. Once you’ve done the JDK configuration, you can start the tool using this menu option.


As previously noted, SQLCL is a zip file rather than an rpm, so the installation process is slightly different.
As with SQLDeveloper, I want to install SQLCL in /opt .
To do this, I’m going to need to use sudo so I have write privileges to /opt.

To start with then, open a Terminal and then start files as sudo for the directory that holds the zip. So, if the directory is $HOME/Downloads …

sudo nautilus $HOME/Downloads

In Files, right-click the zip file and select Open With Archive Manager.

Click the Extract Button and extract to /opt

You should now have a sqlcl directory under /opt.

To start sqlcl, run


…and you should be rewarded with…

There, hopefully that’s all gone as expected and you’ve not been left with a Sneezy Brexit.

Filed under: Linux, Oracle, SQLDeveloper Tagged: jdk, sqlcl, SQLDeveloper, sudo nautilus

Oracle's Security Fixing Practices

Oracle Security Team - Mon, 2017-06-19 13:53

In a previous blog entry, we discussed how Oracle customers should take advantage of Oracle's ongoing security assurance effort in order to help preserve their security posture over time. In today's blog entry, we're going to discuss the highlights of Oracle's security fixing practices and their implications for Oracle customers.

As stated in the previous blog entry, the Critical Patch Update program is Oracle's primary mechanism for the delivery of security fixes in all supported Oracle product releases and the Security Alert program provides for the release of fixes for severe vulnerabilities outside of the normal Critical Patch Update schedule. Oracle always recommends that customers remain on actively-supported versions and apply the security fixes provided by Critical Patch Updates and Security Alerts as soon as possible.

So, how does Oracle decide to provide security fixes? Where does the company start (i.e., for what product versions do security fixes get first generated)? What goes into security releases? What are Oracle's objectives?

The primary objective of Oracle's security fixing policies is to help preserve the security posture of ALL Oracle customers. This means that Oracle tries to fix vulnerabilities in severity order for each Oracle product family. In certain instances, security fixes cannot be backported; in other instances, lower severity fixes are required because of dependencies among security fixes. Additionally, Oracle treats customers equally by providing customers with the same vulnerability information and access to fixes across actively-used platform and version combinations at the same time. Oracle does not provide additional information about the specifics of vulnerabilities beyond what is provided in the Critical Patch Update (or Security Alert) advisory and pre-release note, the pre-installation notes, the readme files, and FAQs. The only and narrow exception to this practice is for the customers who report a security vulnerability. When a customer is reporting a security vulnerability, Oracle will treat the customer in much the same way the company treats security researchers: the customer gets detailed information about the vulnerability as well as information about expected fixing date, and in some instances access to a temporary patch to test the effectiveness of a given fix. However, the scope of the information shared between Oracle and the customer is limited to the original vulnerability being reported by the customer.

Another objective for Oracle's security fixing policies is not so much about producing fixes as quickly as possible, as it is about making sure that these fixes get applied by customers as quickly as possible. Prior to 2005 and the introduction of the Critical Patch Update program, security fixes were published by Oracle as they were produced by development, without any fixed schedule (as Oracle would today release a Security Alert). Feedback we received was that this lack of predictability was challenging for customers, and as a result, many customers reported that they no longer applied fixes. Customers said that a predictable schedule would help them ensure that security fixes were picked up more quickly and consistently. As a result, Oracle created the Critical Patch Update program to bring predictability to Oracle customers. Since 2005, and in spite of a growing number of product families, Oracle has never missed a Critical Patch Update release.

It is also worth noting that Critical Patch Update releases for most Oracle products are cumulative. This means that by applying a Critical Patch Update, a customer gets all the security fixes included in a specific Critical Patch Update release as well as all the previously-released fixes for a given product-version combination. This allows customers who may have missed Critical Patch Update releases to quickly "catch up" to current security releases.

Let's now have a look at the order with which Oracle produces fixes for security vulnerabilities. Security fixes are produced by Oracle in the following order:

  • Main code line. The main code line is the code line for the next major release version of the product.
  • Patch set for non-terminal release version. Patch sets are rollup patches for major release versions. A Terminal release version is a version where no additional patch sets are planned.
  • Critical Patch Update. These are fixes against initial release versions or their subsequent patch sets.

This means that, in certain instances, security fixes can be backported for inclusion in future patch sets or products that are released before their actual inclusion in a future Critical Patch Update release. This also means that systems updated with patch sets or upgraded with a new product release will receive the security fixes previously included in the patch set or release.

One consequence of Oracle's practices is that newer Oracle product versions tend to provide an improved security posture over previous versions, because they benefit from the inclusion of security fixes that have not been or cannot be backported by Oracle.

In conclusion, the best way for Oracle customers to fully leverage Oracle's ongoing security assurance effort is to:

  1. Remain on actively supported release versions and their most recent patch set—so that they can have continued access to security fixes;
  2. Move to the most recent release version of a product—so that they benefit from fixes that cannot be backported and other security enhancements introduced in the code line over time;
  3. Promptly apply Critical Patch Updates and Security Alert fixes—so that they prevent the exploitation of vulnerabilities patched by Oracle, which are known by malicious attackers and can be quickly weaponized after the release of Oracle fixes.

For more information:
- Oracle Software Security Assurance website
- Security Alerts and Critical Patch Updates

Real ODA “one button patching”, to

Geert De Paep - Mon, 2017-06-19 13:46

Today I had to patch a virtualized Oracle Database Appliance X5-2 HA at a customer site. Current version was, installed in June 2016, and it had to be upgraded to I’ve heard a lot of scary stories of colleagues and friends about possible issues during patching, so I was a little reluctant to do this. Anyway, no choice, what needs to be done, needs to be done.

I had one big advantage: this ODA was installed by myself and I was 100% sure that nobody had ever done something ‘unsupported’ with it. During installation in June 2016, I have taken very much care about following the books accurately and ensuring that nothing went wrong. This was a real clean install without any issues, now in production for 1 year.

Now one year later I have to say that I am very happy with “my ODA”. The patching today went smoothly. Not a single issue was encountered during the patching. Actually I was a little surprised, because as I said, I have heard about many issues of others during patching.

More specifically I did the following:

I followed Mos: README for 25499210

Luckily version can be directly upgraded to (lower versions need to go in 2 or more steps).


I did the Server part “locally”, i.e. one node at a time and hence 2 times patching. Although I had downtime for the full ODA, I preferred to do it node by node. If something went completely wrong on node 1, I would still have a valid node 2.

Warnings and tips:

  • Ensure each node (oda_base) has more than 15 GB free on /u01
  • Verify that rpm orclovn-user-6.0.r7494-1.el5 exists in Dom0 on each node (cf. one of those issues – someone I know had this rpm missing in Dom0…)
  • Run the patching one node at a time, not both together
  • It is best to stop any VMs manually before starting the patching (but at the end you will have to start them manually again as well)

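The first two warnings lend themselves to a quick script. Here is a minimal sketch, run from ODA_BASE on each node; it is not an Oracle-provided check, and the 15 GB threshold and rpm name are simply taken from the list above:

```shell
#!/bin/sh
# Pre-patch sanity checks (informal sketch, not an official Oracle script).

MIN_FREE_GB=15

# Whole GB available on a filesystem (df -P reports 1K blocks).
free_gb_on() {
  df -P "$1" | awk 'NR==2 { printf "%d", $4 / (1024 * 1024) }'
}

if [ -d /u01 ] && [ "$(free_gb_on /u01)" -le "$MIN_FREE_GB" ]; then
  echo "WARNING: /u01 has ${MIN_FREE_GB} GB or less free - clean up before patching"
fi

# The rpm check must be done in Dom0, not in ODA_BASE:
# rpm -q orclovn-user-6.0.r7494-1.el5 || echo "WARNING: orclovn-user rpm missing in Dom0"
```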
This command performs the patching:

/opt/oracle/oak/bin/oakcli update -patch --server --local

You may be interested in the timings, if you need to do this as well and have to plan downtime:

These are the current and proposed versions (I think the number of components to be patched will influence the time it takes):

Component Name            Installed Version         Proposed Patch Version
---------------           ------------------        -----------------
Controller_INT            4.230.40-3739             Up-to-date
Expander                  0018                      Up-to-date
SSD_SHARED {
[ c1d20,c1d21,c1d22,      A29A                      Up-to-date
c1d23 ]
[ c1d16,c1d17,c1d18,      A29A                      Up-to-date
c1d19 ]}
HDD_LOCAL                 A720                      A72A
HDD_SHARED                P901                      PAG1
ILOM                      r101649                   r114580
BIOS                      30050100                  30100400
IPMI                                                Up-to-date
OL                        6.7                       6.8
OVM                       3.2.9                     3.2.11
                          (21948354,21948344)       (24732082,24828633)
                          (21948354,21948344)       (24732082,24828633)

Each node took about 75 minutes to patch. However, at the end a reboot is triggered, and you should allow about 15 minutes extra before everything is up again and you can verify that all went well. Good to know: while node 1 is being patched, node 2 remains fully available, and vice versa.

Shared storage

After this, the shared storage part needs to be patched using

/opt/oracle/oak/bin/oakcli update -patch --storage

You run this command only on node 1.

This took 24 minutes, but at the end both nodes reboot (both at the same time, so this step is not rolling and you have database unavailability, even with RAC). So calculate some extra time here.

Database homes

Finally, the database homes (I have 4 of them) need to be patched. In fact, only 3 of the homes needed to be patched in my case, because we first want to validate the patch before patching production (the 4th home). On node 1 you can run:

/opt/oracle/oak/bin/oakcli update -patch --database

Note that I didn’t use the “--local” option in this case, so the Oracle homes on both nodes are patched in one command.


  • I don’t know if it is required, but to be safe, set this first (again, one of those issues others ran into):
    • export NLS_LANG=American_America.US7ASCII
  • You don’t have to stop the databases in advance; this is done, one node at a time, during patching.
  • “datapatch” is run automatically during the patching, so after the patching all databases are open again and no additional steps are required.
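Put together, the session setup from these bullets looks like this (the NLS_LANG value is the one mentioned above; whether it is strictly required is unclear):

```shell
#!/bin/sh
# Set NLS_LANG defensively before launching the database-home patching
# (per the tip above; it may not be strictly required).
export NLS_LANG=American_America.US7ASCII

# Then, from node 1 (patches the homes on both nodes):
# /opt/oracle/oak/bin/oakcli update -patch --database
```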

After starting the command, a list of homes to be patched is displayed, and you can choose whether to patch them all together. In my case I answered N, and then I could choose which homes NOT(!) to patch. Yes, it is somewhat strange: instead of asking which homes I wanted to patch, I had to enter the opposite:

Would you like to patch all the above homes: Y | N ? : N
Please enter the comma separated list of Database Home names that you do NOT want to patch:OraDb12102_home4,OraDb12102_home1,OraDb12102_home2

Meanwhile, I filed an enhancement request with Oracle Support to add a confirmation after entering these homes, because at the moment, after pressing RETURN, the script immediately starts patching. I don’t want to know what happens if you have a typo in your list of home names; the script might patch the wrong home in that case… So be careful.
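Until such a confirmation exists, you could wrap the dangerous prompt yourself. A hypothetical sketch (the function name is mine, and feeding the answers to oakcli on stdin is an assumption about its interactive prompt, not something I have verified):

```shell
#!/bin/sh
# Hypothetical guard around the exclusion-list prompt: show the homes that
# will be skipped and require an explicit "yes" before anything is patched.

confirm_and_patch() {
  skip_list="$1"
  echo "Homes that will NOT be patched: $skip_list"
  printf 'Type yes to continue: '
  read answer || return 1
  if [ "$answer" != "yes" ]; then
    echo "Aborted - nothing was patched."
    return 1
  fi
  # Assumption: answering N to "patch all homes?" and then the exclusion
  # list can be supplied to oakcli on stdin.
  printf 'N\n%s\n' "$skip_list" | /opt/oracle/oak/bin/oakcli update -patch --database
}

# Example:
# confirm_and_patch "OraDb12102_home4,OraDb12102_home1,OraDb12102_home2"
```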

I noticed that datapatch was run automatically for each database, and I could verify this with:

SELECT patch_id, version, status, bundle_id, bundle_series
       FROM dba_registry_sqlpatch;

patch_id   version   status    bundle_id  bundle_series
21948354             SUCCESS      160119  PSU
24732082             SUCCESS      170117  PSU

Regarding timings: in my case it took about 20 minutes per home. However, I had 4 databases in each home; if you have more or fewer databases, timings may differ somewhat, but at least you have an idea.


In summary, all patch steps went through without a single error or issue. So this is the real ODA “one button patching”. Very nice.

However, I don’t know how many patching runs go like this. It looks to me that you can easily do (unsupported) things on your ODA that may cause issues during patching, such as installing custom RPMs, changing OS settings, not creating databases with the oakcli-provided command, or manually adding other Oracle products to the oraInventory. Yes, I know, it is hard to treat an appliance like an appliance when all the backdoors are open…

So if you install or manage an ODA, understand the importance of keeping it clean, and get rewarded during patching. You really don’t want issues during patching, especially if your downtime window is, say, in the middle of the night.

