Feed aggregator

Virtual column issue : ORA-31495: error in synchronous change table on Schema.Tablename

Tom Kyte - Fri, 2017-07-07 05:06
Hi Team, I am facing a tricky issue that I have never encountered before in my database. We have some tables which carry a few virtual columns. These virtual columns are generated using a user-defined function which simply converts strings to upper case. It was working fi...
Categories: DBA Blogs

Bind mismatch(21):

Tom Kyte - Fri, 2017-07-07 05:06
Hi, I have a database with a cursor mismatch for a sql_id, but I can't find information about this bind mismatch "mismatch(21)", nor about new_oacexl. This is a three-node RAC; the result of Tanel Poder's script nonshared2.sql for this sql_id is SQL...
Categories: DBA Blogs

build a hierarchy calculation with ONE sql

Tom Kyte - Fri, 2017-07-07 05:06
Hi Team, We want to build a hierarchy process bar with one SQL but failed. We decided to turn to you guys for help. Please allow me to put it this way with a simple test. >>> Here's my test data: <code>create table DEMP ( ename VARCHAR2(1...
Categories: DBA Blogs

View for finding Bind variable values

Tom Kyte - Fri, 2017-07-07 05:06
Hi, We are using the database Enterprise Edition, and due to application performance needs we had to set CURSOR_SHARING=FORCE, as the application was not able to use bind variables. When I try to generate the PLAN for a sql_id using OR w...
Categories: DBA Blogs

Oracle Cloud Machine: Your Own Cloud Under Your Own Control

Pakistan's First Oracle Blog - Fri, 2017-07-07 04:12
Yes, every company wants to be on the cloud, but not everyone wants that cloud to be out there in the wild, no matter how secure it is. Some want their cloud to stay within their own premises, under their own control.

Enter Oracle Cloud Machine.

Some of the key reasons why this would make sense are sovereignty, residency, compliance, and other business requirements. Moreover, the cloud benefits would still be there: turnkey solutions, and the same IaaS and PaaS environments for development, test and production.

Cost might be a factor here for some organizations, so a hybrid solution might be the way to go for the majority of corporations. Having a private cloud machine alongside public cloud systems would be the solution for many. One advantage here is that the integration of this private cloud with the public one would be streamlined.
Categories: DBA Blogs

Design Guidelines

Anthony Shorten - Thu, 2017-07-06 23:24

The Oracle Utilities Application Framework is both flexible and powerful in terms of the extensibility of the products that use it. As the famous saying goes, "With Great Power comes Great Responsibility". Flexibility does not mean that you have carte blanche in terms of design when it comes to using the facilities of the product. Each object in the product has been designed for a specific purpose, and extensions built on those objects must respect those purposes.

Let me give some advice that may help guide your design work when building extensions:

  • Look at the base - The most important piece of advice I give partners and customers is to look at the base product facilities first. I am amazed how many times I see an enhancement that has been implemented by a partner only to find that the base product already did that. This is particularly important when upgrading to a newer version. We spend a lot of time adding new features and updating existing ones (and sometimes replacing older features with newer ones), so what you built as enhancements in previous versions may now be part of the base product. It is a good idea to revert to the base to reduce your maintenance costs.
  • Respect the objects - We have three types of objects in the product: Configuration, Master and Transaction.
    • The configuration objects are designed to hold metadata and configuration that influence the behavior of the product. They are cached in an L2 cache that is designed for performance, and they are generally static data used as reference and guidance for the other objects. They tend to be low volume and are the domain of your administrators or power users (rather than end users). A simple rule here is that they tend to live on the Admin menu of the product.
    • The master objects are medium volume, with low growth, and define the key identifier or root data used by the product. For example, Accounts, Meters, Assets, Crews, etc.
    • The transaction objects are high volume and high growth and are added by processes in the product or interfaces and directly reference master objects. For example, bills, payments, meter reads, work activities, tasks etc.. These objects tend to also support Information Lifecycle Management.
    • Now you need to respect each of them. For example, do not load transaction data into a configuration object. Each type has its own place, and each has its own resource profile and behaviors.
  • Avoid overuse of the CLOB field - The CLOB field was introduced across most objects in the product and is a great way of extending it. Just understand that while CLOB fields are powerful, they are not unlimited. They are limited in size for performance reasons, and they are not a replacement for other facilities such as characteristics, or even building custom tables. Remember, they hold XML and have limited maintenance and search capabilities compared to other methods.
  • Avoid long term issues - This one is hard to explain, so let me try. When you design something, think about the other issues that may arise from your design. For example, many implementers forget about volume increases over time and run into issues such as long-term storage growth. Remember that data in different objects has different lifecycles and needs to be managed accordingly. Factor that into your design. Too many times I see extensions that forget this rule; the customer then calls support for advice, only to hear they need to redesign to cater for the issue.

I have been in the industry over 30 years and made a lot of these mistakes myself early in my career, so they are easy to make. Just learn from them and make sure you do not repeat them over time. One more piece of advice: talk through your designs with a few people (of various experience levels as well) to see if they make sense. Do not take this as criticism; a lot of great designers bounce ideas off others to see if they make sense. Doing that as part of any design process helps make the design more robust; otherwise it can look rushed and, from the other side, like lazy design. I have seen great designs and bad designs, but with some forethought it is possible to transform any requirement into a great design.

performance tuning - library cache pin and library cache lock

Tom Kyte - Thu, 2017-07-06 10:46
Hi, I want to know about the wait events library cache pin and library cache lock. In my environment I have faced this issue, but I am not able to identify the session that is waiting for the library cache pin or the session that is accessing that object. ...
Categories: DBA Blogs

BULK COLLECT on DB types Vs PL/SQL types

Tom Kyte - Thu, 2017-07-06 10:46
Hi, I have written the below PL/SQL block to populate the ROWIDs of a table into an intermediate table tb_load_stats. The below block works fine when TYPE typ_rowid/t_typ_rowid is declared directly in the PL/SQL block. But when these types are created as Oracle obj...
Categories: DBA Blogs

Oracle SQL statement is taking long time

Tom Kyte - Thu, 2017-07-06 10:46
<code>Hi , My SQL statement is taking long time approximately 2.30 hr to complete. I have query like below Select from tables UNION Select from tables Here is the Gather Stat Plan. SQL> SELECT * 2 FROM TABLE(DBMS_XPLAN.DISPLAY...
Categories: DBA Blogs

Performance Issue due to RMAN Jobs

Tom Kyte - Thu, 2017-07-06 10:46
Hello, We are facing performance issues on our database. Our application is designed in such a way that it is rigorously used only for <b>5 days</b> in a month, and while it is used there are around 300 sessions running in parallel at any point...
Categories: DBA Blogs

Find oracle database owner

Tom Kyte - Thu, 2017-07-06 10:46
Hi Oracle experts, I have some quick queries regarding the Oracle database. Is there any way to find the owner information for an Oracle database (on the Windows platform, where I don't have an 'oratab' file)? We also need to find the 'last access time' of an or...
Categories: DBA Blogs


Tom Kyte - Thu, 2017-07-06 10:46
Dear Tom, Does V$SQL_PLAN_STATISTICS show only the statistics for the current (connected) session? SQL> select count(*) from v$sql_plan ; --I can get the plan details here COUNT(*) ---------- 18540 SQL> select count(*) from V$SQ...
Categories: DBA Blogs

unindexed foreign keys

Tom Kyte - Thu, 2017-07-06 10:46
Do you have a script that lists all the foreign keys with no associated indexes?
Categories: DBA Blogs

Unify - bringing together the best of both worlds

Rittman Mead Consulting - Thu, 2017-07-06 09:00

Since I started teaching OBIEE in 2011, I have had the pleasure of meeting many fascinating people who work with Business Intelligence.

In talking to my students, I would generally notice three different situations:

  1. Folks were heavy users of OBIEE, and just ready to take their skills to the next level.

  2. They were happily transitioning to OBIEE from a legacy reporting tool that didn’t have the power they needed.

  3. There were not-so-good times, like when people were being forced to transition to OBIEE. They felt that they were moving away from their comfort zone and diving into a world of complicated mappings that would first require them to become rocket scientists. They were resistant to change.

It was this more challenging crowd that most sparked my interest in other analytics tools. I received questions like: “Why are we switching to another system? What are the benefits?”


I wanted to have a good answer to these questions. Over the years, different projects have allowed me the opportunity to work with diverse reporting tools. My students’ questions were always in mind: Why? And what are the benefits? So, I always took the time to compare/contrast the differences between OBIEE and these other tools.

I noticed that many of them did a fantastic job of answering the questions users needed answered, and so did OBIEE. It didn’t take me long to find the answer I needed: the main difference with OBIEE is the RPD!


The RPD is where so much Business Intelligence happens. There, developers spend mind-boggling amounts of time connecting the data, deriving complex metrics and hierarchies, joining hundreds of tables, and making everything a beautiful drag-and-drop dream for report writers.

Yes, many other tools will allow us to do magic with metadata, but most of them require this magic to be redefined every time we need a new report, or the report has different criteria. Yes, the RPD requires a lot of work upfront, but that work is good for years to come. We never lose any of our previous work; we just enhance our model. Over time, the RPD becomes a giant pool of knowledge for a company, and it is impressively saved as a single file.


To tap into the RPD metadata, we have traditionally used BI Publisher and OBIEE. They are both very powerful and generally complement each other well. Other tools have become very popular in the past few years. Tableau is an example that quickly won the appreciation of the BI community and has kept consistent leadership in Gartner’s BI Magic Quadrant since 2013. With a very slick interface and super fast reporting capability, Tableau introduced less complex methods to create amazing dashboards - and fast! So, what is there not to like? There is really so much TO like!

Going back to the comparing and contrasting, the main thing that Tableau doesn’t offer is… the RPD. It lacks a repository with the ability to save the join definitions, calculations and the overall intelligence that can be used for all future reports.

At Rittman Mead, we’ve been using these tools and appreciate their substantial capabilities, but we really missed the RPD as a data source. We wanted to come up with a solution that would allow our clients to take advantage of the many hours they had likely already put into metadata modeling by creating a seamless transition from OBIEE’s metadata layer to Tableau.


This past week, I was asked to test our new product, called Unify. Wow. Once again, I am so proud of my fellow coworkers. Unify has a simple interface and uses a Tableau web connector to create a direct line to your OBIEE repository for use in Tableau reports, stories and dashboards.


In Unify, we select the subject areas from our RPD presentation layer and choose our tables and columns as needed. Below is a screenshot of Unify using the OBIEE 12c Sample App environment. If you are not familiar with OBIEE 12c, Oracle provides the Sample App - a standalone virtual image with everything that you need to test the product. You can download the SampleApp here: http://www.oracle.com/technetwork/middleware/bi-foundation/obiee-samples-167534.html


We are immediately able to leverage all joins, calculated columns, hierarchies, RPD variables, session variables and that’s not all… our RPD security too! Yes, even row level security is respected when we press the “Unify” button and data is brought back into Tableau. So now, there is no reason to lose years of metadata work because one team prefers to visualize with Tableau instead of OBIEE.

Unify allows us to import only the data needed for the report, as we can utilize ‘in-tool’ filtering, keeping our query sets small and our performance high.

In sum, Unify unites it all - have your cake and eat it too. No matter which tool you love the most, add them together and you will certainly love them both more.


Categories: BI & Warehousing

A few useful Oracle 12cR2 MOS Docs

Syed Jaffar - Thu, 2017-07-06 07:33
A few useful MOS documents are listed below, in case a 12cR2 upgrade is around the corner.

  • How to Upgrade to/Downgrade from Grid Infrastructure 12.2 and Known Issues (Doc ID 2240959.1)
  • Complete Checklist for Upgrading to Oracle Database 12c Release 2 (12.2) using DBUA (Doc ID 2189854.1)
  • 12.2 Grid Infrastructure Installation: What's New (Doc ID 2024946.1)
  • Patches to apply before upgrading Oracle GI and DB to (Doc ID 2180188.1)
  • Differences Between Enterprise, Standard Edition 2 on Oracle 12.2 (Doc ID 2243031.1)
  • 12.2 gridSetup.sh Does Not List Disks Unless the Discovery String is Provided (Doc ID 2244960.1)

Create a 12c physical standby database on ODA X5-2

Amis Blog - Thu, 2017-07-06 07:06

ODA X5-2 simplifies and speeds up the creation of a 12c database quite considerably with oakcli. You can also take advantage of this command for the creation of physical standby databases, as I discovered when I had to set up Data Guard on as many as 5 production and 5 acceptance databases within a very short time.

I used the “oakcli create database …” command to create both primary and standby databases really fast, and went on from there to set up a Data Guard Broker configuration in max availability mode. Where you would normally duplicate a primary database onto a skeleton standby database that itself has no data or redo files and starts up with a pfile, working with 2 fully configured databases is a bit different. You do not have to change the db_unique_name after the RMAN duplicate, which proved to be quite an advantage, and the duplicate itself doesn’t have to handle any spfile adaptations because the spfile is already there. But you may get stuck with some obsolete data and redo files of the original standby database that can fill up the filesystem. However, as long as you remove these files in time, just before the RMAN duplicate, this isn’t much of an issue.

What I did to create 12c primary database ABCPRD1 on one ODA and physical standby database ABCPRD2 on a second ODA follows from here. Nodes on oda1 are oda10 and oda11, nodes on oda2 are oda20 and oda21. The nodes I will use are oda10 and oda20.

-1- Create parameterfile on oda10 and oda20
oakcli create db_config_params -conf abcconf
-- parameters:
-- Database Block Size  : 8192
-- Database Language    : AMERICAN
-- Database Characterset: WE8MSWIN1252
-- Database Territory   : AMERICA
-- Component Language   : English
-- NLS Characterset     : AL16UTF16
file is saved as: /opt/oracle/oak/install/dbconf/abcconf.dbconf

-2- Create database ABCPRD1 on oda10 and ABCPRD2 on oda20
oda10 > oakcli create database -db ABCPRD1 -oh OraDb12102_home1 -params abcconf
oda20 > oakcli create database -db ABCPRD2 -oh OraDb12102_home1 -params abcconf
-- Root  password: ***
-- Oracle  password: ***
-- SYSASM  password - During deployment the SYSASM password is set to 'welcome1 - : ***
-- Database type: OLTP
-- Database Deployment: EE - Enterprise Edition
-- Please select one of the following for Node Number >> 1
-- Keep the data files on FLASH storage: N
-- Database Class: odb-02  (2 cores,16 GB memory)

-3- Set db_name to ABCPRD for both databases... this is a prerequisite for Data Guard.
    (The rename prompt shown below comes from the DBNEWID utility, run against the mounted database, e.g. "nid target=sys dbname=ABCPRD setname=yes")
oda10 > sqlplus / as sysdba
oda10 > shutdown immediate;
oda10 > startup mount
oda10 > Change database name of database ABCPRD1 to ABCPRD? (Y/[N]) => Y
oda10 > exit

oda20 > sqlplus / as sysdba
oda20 > shutdown immediate;
oda20 > startup mount
oda20 > Change database name of database ABCPRD2 to ABCPRD? (Y/[N]) => Y
oda20 > exit

-4- Set db_name of both databases in their respective spfile as well as in the ODA cluster,
    and reset the db_unique_name after startup from ABCPRD back to ABCPRD1|ABCPRD2
oda10 > sqlplus / as sysdba    
oda10 > startup mount
oda10 > alter system set db_name=ABCPRD scope=spfile;
oda10 > alter system set service_names=ABCPRD1 scope=spfile;
oda10 > ! srvctl modify database -d ABCPRD1 -n ABCPRD
oda10 > shutdown immediate
oda10 > startup
oda10 > alter system set db_unique_name=ABCPRD1 scope=spfile;
oda10 > shutdown immediate;
oda10 > exit

oda20 > sqlplus / as sysdba    
oda20 > startup mount
oda20 > alter system set db_name=ABCPRD scope=spfile;
oda20 > alter system set service_names=ABCPRD2 scope=spfile;
oda20 > ! srvctl modify database -d ABCPRD2 -n ABCPRD
oda20 > shutdown immediate
oda20 > startup
oda20 > alter system set db_unique_name=ABCPRD2 scope=spfile;
oda20 > shutdown immediate;
oda20 > exit

-5- Startup both databases from the cluster.
oda10 > srvctl start database -d ABCPRD1
oda20 > srvctl start database -d ABCPRD2

Currently, 2 identically configured databases are active with the same db_name, which is the first condition for the following Data Guard Broker configuration. By just matching the db_name between the databases and keeping the db_unique_name as it was, the ASM database and diagnostic directory names remain as they are.

Also, the spfile entry in the cluster continues to point to the correct directory and file, as does the init.ora in $ORACLE_HOME/dbs. Because the standby starts with an existing and correctly configured spfile, you no longer need to retrieve it from the primary. This simplifies and reduces the RMAN duplicate to just a one-line command, apart from login and channel allocation.

-6- Add Net Service Names for ABCPRD1_DGB and ABCPRD2_DGB to your tnsnames.ora on oda10 and oda20
    (the CONNECT_DATA sections follow from the connect identifiers used in the broker configuration of step 12)
ABCPRD1_DGB =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = oda10)(PORT = 1521))
    (CONNECT_DATA = (SERVICE_NAME = ABCPRD1_DGB))
  )

ABCPRD2_DGB =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = oda20)(PORT = 1521))
    (CONNECT_DATA = (SERVICE_NAME = ABCPRD2_DGB))
  )

-7- Add as a static service to listener.ora on oda10 and oda20
oda10 >   (SID_LIST =
oda10 >     (SID_DESC =
oda10 >       (GLOBAL_DBNAME = ABCPRD1_DGB)
oda10 >       (ORACLE_HOME = /u01/app/oracle/product/...)
oda10 >       (SID_NAME = ABCPRD1)
oda10 >     ) 
oda10 >   )        

oda20 >   (SID_LIST =
oda20 >     (SID_DESC =
oda20 >       (GLOBAL_DBNAME = ABCPRD2_DGB)
oda20 >       (ORACLE_HOME = /u01/app/oracle/product/...)
oda20 >       (SID_NAME = ABCPRD2)
oda20 >     ) 
oda20 >   )

-8- Restart listener from cluster on oda10 and oda20
oda10 > srvctl stop listener
oda10 > srvctl start listener

oda20 > srvctl stop listener
oda20 > srvctl start listener

-9- Create 4 standby logfiles on oda10 only (one more than the number of redo log groups, each with just 1 member).
    The RMAN duplicate takes care of the standby logfiles on oda20, so don't create them there now
oda10 > sqlplus / as sysdba
oda10 > alter database add standby logfile thread 1 group 4 size 4096M;
oda10 > alter database add standby logfile thread 1 group 5 size 4096M;
oda10 > alter database add standby logfile thread 1 group 6 size 4096M;
oda10 > alter database add standby logfile thread 1 group 7 size 4096M;
oda10 > exit

-10- Start RMAN duplicate from oda20
oda20 > srvctl stop database -d ABCPRD2
oda20 > srvctl start database -d ABCPRD2 -o nomount
oda20 > *****************************************************************************
oda20 > ********* !!! REMOVE EXISTING DATA AND REDO FILES OF ABCPRD2 NOW !!! *********
oda20 > *****************************************************************************
oda20 > rman target sys/***@ABCPRD1 auxiliary sys/***@ABCPRD2
oda20 > .... RMAN> 
oda20 > run {
oda20 > allocate channel d1 type disk;
oda20 > allocate channel d2 type disk;
oda20 > allocate channel d3 type disk;
oda20 > allocate auxiliary channel stby1 type disk;
oda20 > allocate auxiliary channel stby2 type disk;
oda20 > duplicate target database for standby nofilenamecheck from active database;
oda20 > }
oda20 > exit

And there you are… primary database ABCPRD1 in open read-write mode and standby database ABCPRD2 in mount mode. The only things left to do now are the Data Guard Broker setup, and activating flashback and force logging on both databases.

-11- Setup broker files in shared storage (ASM) and start brokers on oda10 and oda20
oda10 > sqlplus / as sysdba
oda10 > alter system set dg_broker_config_file1='/u02/app/oracle/oradata/datastore/.ACFS/snaps/ABCPRD1/ABCPRD1/dr1ABCPRD1.dat' scope=both; 
oda10 > alter system set dg_broker_config_file2='/u02/app/oracle/oradata/datastore/.ACFS/snaps/ABCPRD1/ABCPRD1/dr2ABCPRD1.dat' scope=both;
oda10 > alter system set dg_broker_start=true scope=both;
oda10 > exit

oda20 > sqlplus / as sysdba
oda20 > alter system set dg_broker_config_file1='/u02/app/oracle/oradata/datastore/.ACFS/snaps/ABCPRD2/ABCPRD1/dr1ABCPRD2.dat' scope=both; 
oda20 > alter system set dg_broker_config_file2='/u02/app/oracle/oradata/datastore/.ACFS/snaps/ABCPRD2/ABCPRD1/dr2ABCPRD2.dat' scope=both;
oda20 > alter system set dg_broker_start=true scope=both;
oda20 > exit

-12- Create broker configuration from oda10
oda10 > dgmgrl sys/***
oda10 > create configuration abcprd as primary database is abcprd1 connect identifier is abcprd1_dgb;
oda10 > edit database abcprd1 set property StaticConnectIdentifier='(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=oda10)(PORT=1521))(CONNECT_DATA=(SERVICE_NAME=ABCPRD1_DGB)(INSTANCE_NAME=ABCPRD1)(SERVER=DEDICATED)))';
oda10 > add database abcprd2 as connect identifier is abcprd2_dgb maintained as physical;
oda10 > edit database abcprd2 set property StaticConnectIdentifier='(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=oda20)(PORT=1521))(CONNECT_DATA=(SERVICE_NAME=ABCPRD2_DGB)(INSTANCE_NAME=ABCPRD2)(SERVER=DEDICATED)))';
oda10 > enable configuration;
oda10 > edit database abcprd2 set state=APPLY-OFF;
oda10 > exit

-13- Enable flashback and force logging on both primary and standby database
oda10 > sqlplus / as sysdba
oda10 > alter database force logging;
oda10 > alter database flashback on;
oda10 > exit

oda20 > sqlplus / as sysdba
oda20 > alter database force logging;
oda20 > alter database flashback on;
oda20 > exit
oda20 > srvctl stop database -d abcprd2
oda20 > srvctl start database -d abcprd2 -o mount

oda10 > srvctl stop database -d abcprd1
oda10 > srvctl start database -d abcprd1

-14- Configure max availability mode from oda10
oda10 > dgmgrl sys/*** 
oda10 > edit database abcprd2 set state=APPLY-ON;
oda10 > edit database abcprd1 set property redoroutes='(LOCAL : abcprd2 SYNC)';
oda10 > edit database abcprd2 set property redoroutes='(LOCAL : abcprd1 SYNC)';
oda10 > edit configuration set protection mode as maxavailability;
oda10 > show database abcprd1 InconsistentProperties;
oda10 > show database abcprd2 InconsistentProperties;
oda10 > show configuration;
oda10 > validate database abcprd2;
oda10 > exit

You should now have a valid 12c Max Availability Data Guard configuration, but you had better test it thoroughly with some switchovers and a failover before taking it into production. Have fun!

The post Create a 12c physical standby database on ODA X5-2 appeared first on AMIS Oracle and Java Blog.

Finance and HR Leaders Shape Digital Disruption, New Research Finds

Oracle Press Releases - Thu, 2017-07-06 07:00
Press Release
Finance and HR Leaders Shape Digital Disruption, New Research Finds
New Oracle and MIT Technology Review study reveals the human drivers of cloud automation as the roles of finance, HR, and IT evolve to meet the needs of a more connected organization

Redwood Shores, Calif.—Jul 6, 2017

To enable organizations to thrive in a competitive digital marketplace, Oracle and the MIT Technology Review – an independent media company founded at the Massachusetts Institute of Technology (MIT) in 1899 – today released a new study that highlights the importance of collaboration between finance and human resources (HR) teams with a unified cloud. The study, Finance and HR: The Cloud’s New Power Partnership, outlines how a holistic view into finance and HR information, delivered via cloud technology, empowers organizations to better manage continuous change.

Based on a global survey of 700 C-level executives and finance, HR, and IT managers, the study found that a shared finance and HR cloud system is a critical component of successful cloud transformation initiatives. Among the benefits of integrating enterprise resource planning (ERP) and human capital management (HCM) systems is easier tracking and forecasting of employee costs for budgeting purposes. Additionally, integrated HCM and ERP cloud systems improve collaboration between departments, with 37 percent of respondents noting that they use the cloud to improve the way data is shared.

The report also reveals the human factors behind a successful cloud implementation, with employees’ ability to adapt to change standing out as critical. Among organizations that have fully deployed the cloud, almost half (46 percent) say they have seen their ability to reshape or resize the organization improve significantly—as do 47 percent of C-level respondents.

The productivity benefits have also been significant. Nearly one-third of respondents (31 percent) say they spend less time doing manual work within their department as a result of moving to the cloud and that the automation of processes has freed up time to work toward larger strategic priorities.

“As finance and HR increasingly lead strategic organizational transformation, ROI comes not only with financial savings for the organization, but also from the new insights and visibility into the business HR and finance gain with the cloud. People are at the heart of any company’s success and this is why we are seeing finance and HR executives lead cloud transformation initiatives,” said Steve Cox, group vice president, ERP EPM product marketing at Oracle. “In addition, improved collaboration between departments enables organizations to manage the changes ahead and sets the blueprint for the rest of the organization’s cloud shift.”

The survey also reveals there is a blurring of lines between functions and individual roles as the cloud increasingly ties back office systems together:

  • Increased Collaboration: 46 percent of finance and HR professionals say a full cloud deployment has led to significantly improved collaboration between departments, and nearly half expect a significant improvement in the next two years.
    • This extends to IT as well: 52 percent of C-level respondents said the relationship between IT, HR and finance is even better than expected following their cloud implementation.
  • Cross-Corporate Intermingling: With the new roles of HR and finance professionals requiring them to work more closely with data and the cloud, 43 percent of businesses plan to bring IT people into these departments to help employees take advantage of new technologies.
  • New Skillsets: Desired skills respondents want to improve upon include:
    • Time management, with 40 percent saying this is currently an issue
    • Active learning
    • Problem solving, mathematical reasoning and analytical skills
    • The IT function also changes. Post-deployment, 56 percent of C-level respondents report that IT has significantly improved when it comes to producing innovations.

Cox added: “As organizations navigate technological changes, it’s critical for the C-suite to empower its employees to evolve their individual business acumen. Many businesses understand this and it’s encouraging to see 42 percent planning to provide their teams with management skills training to help them break out of their traditional back-office roles. The learnings from the move of finance and HR to the cloud will ultimately spread across the organization as, together, they conceptualize the shape of the next disruption.”

Contact Info
Joann Wardrip
About the Research

Oracle partnered with the MIT Technology Review to survey HR, Finance and IT professionals about the state of their cloud transformation and to gain insight into how moving to the cloud has improved collaboration among teams. In total, 700 HR, Finance, and IT professionals were polled across North America, EMEA and Asia.

About Oracle

The Oracle Cloud offers complete SaaS application suites for ERP, HCM and CX, plus best-in-class database Platform as a Service (PaaS) and Infrastructure as a Service (IaaS) from data centers throughout the Americas, Europe and Asia. For more information about Oracle (NYSE:ORCL), please visit us at www.oracle.com.


Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

Talk to a Press Contact

Joann Wardrip

  • +1.650.607.1343

Oracle Clusterware 12cR2 - deprecated and desupported features

Syed Jaffar - Thu, 2017-07-06 04:27

Having a clear understanding of the deprecated and desupported features in a new release is just as important as knowing the new features of the release. In this short blog post, I would like to highlight the following features that are either deprecated or desupported in 12cR2.

  • config.sh is no longer used for the Grid configuration wizard; instead, gridSetup.sh is used in 12cR2.
  • Placement of OCR and voting files directly on a shared filesystem is now desupported.
  • The diagcollection.pl utility is deprecated in favor of Oracle Trace File Analyzer.
  • You are no longer able to use Oracle Clusterware commands that are prefixed with crs_.

In my next blog post, I will go over some of the important features of Oracle Clusterware in 12cR2. Stay tuned.


Webcast: Getting Optimal Performance from Oracle E-Business Suite

Steven Chan - Thu, 2017-07-06 02:00

Oracle University has many free recorded webcasts that are useful for E-Business Suite system administrators.  Here's a good one on EBS performance tuning (this is always one of our most popular sessions at OpenWorld):

Samer Barakat, Director of Application Performance, summarizes practical tips and lessons learned from performance-tuning and benchmarking the world’s largest Oracle E-Business Suite environments. Application system administrators will get concrete tips and techniques for identifying and resolving performance bottlenecks on all layers of the technology stack. They will also learn how Oracle’s engineered systems such as Oracle Exadata and Oracle Exalogic can dramatically improve the performance of their system. This material was presented at Oracle OpenWorld 2015.

Related Articles

Categories: APPS Blogs

Working with Location and Permissions in JET Hybrid

Andrejus Baranovski - Thu, 2017-07-06 00:35
What if you want to access mobile device location data from a JET Hybrid application? This can be achieved with the Cordova Geolocation plugin. But you want it to be done nicely, and you want to make sure the application has been granted permission to access location information. Use the Cordova Permissions plugin for that.

You can add a Cordova plugin to a JET app by executing this command:

cordova plugin add 

If this command doesn't work for any reason, you can add the plugin information directly into the config.xml file (check Geertjan's post about the same - Plugging into Devices with Oracle JET on Cordova (Part 1)):

In the JS function, before calling the location API, we call the permissions API to check whether the app has already been granted permission to read location data. In the hasPermission callback, on success the location data is accessed. If permission has not been granted, a permission request is sent. If the request is satisfied, the location is accessed (and permission is granted at the same time):

Location data is retrieved through callback:

This is how it works. On the very first location access, when permission has not yet been granted, we request it through the permissions API:

When permission is granted, location is displayed:
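The flow described above can be sketched roughly as follows. This is not the post's own code (which is shown as screenshots); all function names are illustrative, and the plugin objects - normally cordova.plugins.permissions (from the Cordova permissions plugin) and navigator.geolocation (from the Cordova Geolocation plugin) - are passed in as parameters here so the logic can be read and tested in isolation:

```javascript
// Hedged sketch of the permission-then-location flow. "fetchLocation" is an
// illustrative name; in the app, "permissions" would be
// cordova.plugins.permissions and "geolocation" would be navigator.geolocation.
function fetchLocation(permissions, geolocation, onPosition, onError) {
  var perm = permissions.ACCESS_FINE_LOCATION;

  function readPosition() {
    // Location data is retrieved through this callback.
    geolocation.getCurrentPosition(
      function (position) { onPosition(position.coords); },
      onError,
      { enableHighAccuracy: true, timeout: 10000 }
    );
  }

  // Before calling the location API, check whether permission is granted.
  permissions.hasPermission(perm, function (status) {
    if (status.hasPermission) {
      readPosition();
    } else {
      // Very first access: request permission; if satisfied, read the location.
      permissions.requestPermission(perm, function (reqStatus) {
        if (reqStatus.hasPermission) {
          readPosition();
        } else {
          onError(new Error('Location permission denied'));
        }
      }, onError);
    }
  }, onError);
}
```

In the app itself, the call would then look roughly like `fetchLocation(cordova.plugins.permissions, navigator.geolocation, showCoords, showError)`, with the two callbacks updating the view model.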

Download sample application from GitHub repository - rslocationapp.


Subscribe to Oracle FAQ aggregator