Feed aggregator

Oracle REST Data Services and Docker

Kris Rice - Thu, 2017-06-01 07:50
TL;DR:
  1. Check out https://github.com/krisrice/docker-ords-sqlcl-apex
  2. Download ORDS; optionally SQLcl and APEX
  3. Build with DB connection details: docker build -t krisrice/ords:3.0.10 --build-arg DBHOST= --build-arg DBSERVICE=orcl --build-arg DBPORT=1521 --build-arg DBPASSWD=oracle .
  4. Run the image: docker run -d -p 8888:8888 -p 8443:8443 --name=ords krisrice/ords:3.0.10
  5. Access


Tom Kyte - Thu, 2017-06-01 07:46
Hi Tom Thanks for your continuous help to Oracle WORLD. I have table dept like SQL> Select * from test_dept; DEPTCODE DEPTNAME AMOUNT --------- --------- -------- 100 SALES 100 100 SALES 120 ...
Categories: DBA Blogs

How to Update Columns from Other Tables

Tom Kyte - Thu, 2017-06-01 07:46
Hi, I have a requirement to update a table with new values which are available in other tables. Table 1 contains Old_name, New_name columns. Table 2 contains a New_Name column. Table 3 contains a Name column. My requirement is to check the Name column's value...
Categories: DBA Blogs

EXECUTE IMMEDIATE with multiple bind variables and SELECT INTO returning a single record but multiple columns

Tom Kyte - Thu, 2017-06-01 07:46
Hello, I'm building a function that always receives 3 input values. These incoming values are used as bind variables in assembling a SQL query string that is later executed using an EXECUTE IMMEDIATE statement. The SQL query string contains a S...
Categories: DBA Blogs

Is it possible to declare/create a UDT type inside of a package?

Tom Kyte - Thu, 2017-06-01 07:46
Hello everyone. I came here with one simple question. Is it possible to create a UDT inside of a package? For example, I create a new package CREATE OR REPLACE PACKAGE MyPackage AS TYPE myType AS OBJECT( NAME VARCHAR(30), SURNAME VARCHAR(...
Categories: DBA Blogs

Bounded Context Model Without Violating RDBMS Principles

Tom Kyte - Thu, 2017-06-01 07:46
Hello Team, I work on a system that is a web portal containing over a hundred different APIs that customers use to interact with our products. This system is legacy, and was developed with little to no database governance. As a result, all these API...
Categories: DBA Blogs

First Steps with Oracle Analytics Cloud

Rittman Mead Consulting - Thu, 2017-06-01 07:43

Not long ago Oracle added a new offer to their Cloud - an OBIEE in a Cloud with full access. Francesco Tisiot made an overview of it and now it's time to go a bit deeper and see how you can poke it with a sharp stick by yourself. In this blog, I'll show how to get your own OAC instance as fast and easy as possible.

Before you start

The very first step is to register a cloud account. Oracle gives a trial which allows testing of all features. I won't show it here as it is more or less a standard registration process. I just want to highlight a few things:

  • You will need to verify your phone number by receiving an SMS. This mechanism seems to be a bit overloaded, and I had to make more than one attempt: I pressed the Request code button and nothing happened; I waited and pressed it again, and again, and eventually got the code. I can't say for sure, and possibly it was just my bad luck, but if you face the same problem just keep trying (within reason; requesting a code every second won't help you).
  • Even for a trial you'll be asked for credit card details. I haven't found good diagnostics on how much has already been spent, and the documentation is not really helpful here.

OAC instances are not self-contained and require some additional services. The absolute minimum configuration is the following:

  • Oracle Cloud Storage (OCS) - is used for backups, log files, etc.
  • Oracle Cloud Database Instance (DBC) - is used for RCU schemas.
  • Oracle Analytics Cloud Instance (OAC) - is our ultimate target.

From the Cloud services point of view, the architecture is the following. This picture doesn't show the virtual disks mounted to instances. These disks consume Cloud Storage quota but they aren't created separately as services.


We need at least one Oracle Database Cloud instance to store RCU schemas. This database may or may not have a separate Cloud Storage area for backups. Every OAC instance requires a Cloud Storage area for logs. Multiple OAC instances may share one Cloud Storage area, but I can't find any advantage of this approach over a separate area for every instance.

Create Resources

We create these resources in the order they are listed above: start with Storage, then the DB, and finally OAC. Actually, we don't have to create Cloud Storage containers separately, as they can be created automatically. But I show it here to make things clearer, with less "it works by itself" magic.

Create Cloud Storage

The easiest part of all is the Oracle Cloud Storage container. We don't need to specify its size or many parameters; all there is to set are a name, a storage class (Standard/Archive) and encryption.


I spent some time here trying to figure out how to reference this storage later. There is a hint saying "Use the format: <storage service>-<identity domain>/<container>. For example: mystorage1-myid999/mybackupcontainer." While identity domain and container are pretty obvious, the storage service puzzled me for some time. The answer is storage service = Storage. You can see this at the top of the page.


It seems that Storage is a fixed keyword, rurittmanm is the domain name created during the registration process, and demo is the actual container name. So in this sample, when I need to reference my demo OCS I should write Storage-rurittmanm/demo.
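The naming rule is simple enough to capture in a one-line helper. This is a hypothetical Python sketch of the format described above; storage_reference is my own name, not an Oracle API:

```python
# Hypothetical helper for the container reference format described above:
# <storage service>-<identity domain>/<container>, where the storage
# service part is the fixed keyword "Storage".
def storage_reference(identity_domain: str, container: str) -> str:
    return f"Storage-{identity_domain}/{container}"

print(storage_reference("rurittmanm", "demo"))  # Storage-rurittmanm/demo
```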

Create Cloud DB

Now that we are somewhat experienced in Oracle Cloud, we may move to a more complicated task and create a Cloud DB instance. It is harder than a Cloud Storage container, but not by much. If you have ever created an on-premise database using DBCA, a cloud DB should be a piece of cake for you.

At the first step, we set the name of the instance and select the most general options. These options are:

  • Service Level. Specifies how this instance will be managed. Options are:

    • Oracle Database Cloud Service: Oracle Database software pre-installed on Oracle Cloud Virtual Machine. Database instances are created for you using configuration options provided in this wizard. Additional cloud tooling is available for backup, recovery and patching.
    • Oracle Database Cloud Service - Virtual Image: Oracle Database software pre-installed on an Oracle Cloud Virtual Machine. Database instances are created by you manually or using DBCA. No additional cloud tooling is available.
  • Metering Frequency - defines how this instance will be paid for: by month or by hour.

  • Software Release - if the Service Level is Oracle Database Cloud Service, we may choose 11.2, 12.1 or 12.2; for Virtual Image only 11.2 and 12.1 are available. Note that even the cloud does no magic, and with DB 12.2 you may expect the same problems as on-premise.

  • Software Edition - Values are:

    • Standard Edition
    • Enterprise Edition
    • Enterprise Edition - High Performance
    • Enterprise Edition - Extreme Performance
  • Database Type - defines High Availability and Disaster Recovery options:

    • Single Instance
    • Database Clustering with RAC
    • Single Instance with Data Guard Standby
    • Database Clustering with RAC and Data Guard Standby

The Database Clustering with RAC and Database Clustering with RAC and Data Guard Standby types are available only for the Enterprise Edition - Extreme Performance edition.


The second step is also quite intuitive. It has a lot of options, but they should be pretty simple and well known to anyone working with Oracle Database.


The first block of parameters is about basic database configuration. Parameters like DB name (sid) or Administration Password are obvious.

Usable DataFile Storage (GB) is less obvious. Actually, in the beginning it puzzled me completely. In this sample, I ask for 25 Gb of space, but this doesn't mean that my instance will take 25 Gb of my disk quota. In fact, this particular instance took 150 Gb of disk space. Here we specify only the guaranteed user disk space; an instance also needs space for the OS, DB software, temp, swap and so on.


A trial account is limited to a 500 Gb quota, which means that we can create at most 3 Oracle DB Cloud instances. Every instance will use around 125 Gb of, let's say, "technical" disk space that we can't reduce. From a practical point of view, this means it may be preferable to have one "big" instance (in terms of disk space) rather than multiple "small" ones.
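A quick back-of-envelope check of those numbers, as a Python sketch. The constants are the trial quota and the overhead I observed above, not documented limits:

```python
# Trial quota and per-instance fixed overhead, as observed above.
TRIAL_QUOTA_GB = 500
TECHNICAL_OVERHEAD_GB = 125

def max_db_instances(usable_gb_each: int) -> int:
    # Each instance costs its usable space plus the fixed overhead.
    return TRIAL_QUOTA_GB // (usable_gb_each + TECHNICAL_OVERHEAD_GB)

print(max_db_instances(25))  # 3 -> matches the 150 Gb-per-instance observation
```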

  • Compute shape specifies how powerful our VM should be. Options are the following:
    • OC3 - 1.0 OCPU, 7.5 GB RAM
    • OC4 - 2.0 OCPU, 15.0 GB RAM
    • OC5 - 4.0 OCPU, 30.0 GB RAM
    • OC6 - 8.0 OCPU, 60.0 GB RAM
    • OC7 - 16.0 OCPU, 120.0 GB RAM
    • OC1m - 1.0 OCPU, 15.0 GB RAM
    • OC2m - 2.0 OCPU, 30.0 GB RAM
    • OC3m - 4.0 OCPU, 60.0 GB RAM
    • OC4m - 8.0 OCPU, 120.0 GB RAM
    • OC5m - 16.0 OCPU, 240.0 GB RAM

We may increase or decrease this value later.

  • SSH Public Key - Oracle gives us the ability to connect directly to the instance, and authentication is done with a user + private key pair. Here we specify a public key which will be added to the instance; obviously, we should have the private key for this public one. We can either provide a key we generated ourselves or let Oracle create the keys for us. The most non-obvious thing here is the username for SSH. You can't change it and it isn't shown anywhere in the interface (at least I haven't found it), but you can find it in the documentation: it is opc.

The second block of parameters is about backup and restore. The meaning of these options is obvious, but the exact values aren't (at least in the beginning).


  • Cloud Storage Container - that's the Cloud Storage container I described earlier. The value for this field will be something like Storage-rurittmanm/demo. In fact, I don't have to create this container in advance: it's possible to specify a nonexistent container here (still in the form Storage-<domain>/<name>) and tick the Create Cloud Storage Container check-box. This will create a new container for us.

  • Username and Password are credentials of a user who can access this container.

The last block is Advanced settings, and I believe it's quite simple and obvious. Most of the time we don't need to change anything in this block.


When we have filled in all the parameters and pressed the Next button, we get a Summary screen and the actual process starts. It takes about 25-30 minutes to finish.

When I first started my experiments, I constantly got a message saying that no sites were available and my request could not be completed.

It is possible that this was the same "luck" as with the phone number verification; the problem resolved itself a few hours later.

Create OAC Instance

At last, we have all we need for our very first OAC instance. The process of setting up an OAC instance is almost the same as for an Oracle DB Cloud instance: we start the process, define some parameters and wait for the result.

At the first step, we give a name to our instance, provide an SSH public key, and select an edition. We have two options here, Enterprise Edition or Standard Edition, and later we will select more options. Standard Edition will allow us to specify either Data Visualisation or Essbase instances; Enterprise Edition adds classical Business Intelligence to this list. The rest of the parameters here are exactly the same as for a Database instance.


At the second step, we have four blocks of parameters.


  • Service Administrator - the most obvious one. Here we specify an administrator user. This user will be a system administrator.

  • Database - select a database for RCU schemas. That's why we needed a database.

  • Options - specify which options our instance will have.

    • Self-Service Data Visualisation, Preparation and Smart Discovery - this option means Oracle Data Visualisation and is available for both Standard and Enterprise Editions.
    • Enterprise Data Models - this option gives us classical BI and is available only for Enterprise Edition. This option may be combined with the first one, giving us both classical BI and modern data discovery on one instance.
    • Collaborative Data Collection, Scenarios and What-if Analysis - this one stands for Essbase and is available for Standard and Enterprise Editions. It can't be combined with the other options.
  • Size is the same thing that is called Compute Shape for the Database. Options are exactly the same.
  • Usable Storage Size on Disk GB also has the same meaning as for the DB. The minimum size we may specify here is 25 Gb, which gives us a total of 170 Gb of used disk space.

Here is a picture showing all possible combinations of services:


And here is the virtual disk configuration; the data disk is the one we specify.

The last block - Cloud Storage Configuration - was the hardest one, especially the first field, Cloud Storage Base URL. The documentation says "Use the format: https://example.storage.oraclecloud.com/v1" and nothing more. When you know the answer it may seem easy, but the first time I saw it, it was hard. Should I place here any unique URL, just like an identifier? Should it end with v1, and what is the value for a second instance, v2? Maybe I should place here the URL of my current datacenter (https://dbcs.emea.oraclecloud.com)? The answer is https://<domain>.storage.oraclecloud.com/v1; in my case it is https://rurittmanm.storage.oraclecloud.com/v1. It stays the same for all instances.
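The pattern, once decoded, can be written down as a tiny helper. A hypothetical Python sketch; storage_base_url is my own name for it:

```python
# Hypothetical helper: the Cloud Storage Base URL pattern described above.
# The trailing /v1 is a fixed API version, not a per-instance counter.
def storage_base_url(identity_domain: str) -> str:
    return f"https://{identity_domain}.storage.oraclecloud.com/v1"

print(storage_base_url("rurittmanm"))
# https://rurittmanm.storage.oraclecloud.com/v1
```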

All other parameters are the same as for the DBCS instance. We either specify an existing Cloud Storage container or create one here.


The rest of the process is obvious. We get a Summary and then wait. It takes about 40 minutes to create a new instance.

Note: diagnostics here are a bit poor, and when it says that the instance start process is complete, that may not be true. Sometimes it makes sense to wait a while before starting to panic.

Now we may access our instance as usual. The only difference is that the port is 80, not 9502 (or 443 for SSL). For Data Visualisation the link is http(s)://<ip address>/va, for BIEE http(s)://<ip address>/analytics, and for Essbase http(s)://<ip address>/essbase. Enterprise Manager and WebLogic Server Console are available on port 7001, which is blocked by default.
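The endpoint patterns above can be collected in one place. A hypothetical Python sketch, assuming plain HTTP on the default port 80 (use https for 443):

```python
# The OAC endpoint patterns listed above, gathered into one helper
# (assumption: plain HTTP on port 80; the function name is my own).
def oac_urls(ip_address: str) -> dict:
    base = f"http://{ip_address}"
    return {
        "data_visualisation": f"{base}/va",
        "biee": f"{base}/analytics",
        "essbase": f"{base}/essbase",
    }

print(oac_urls("203.0.113.10")["biee"])  # http://203.0.113.10/analytics
```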

The bad news is that HTTPS uses a self-signed certificate. Depending on browser settings, this may give an error or even prevent access over HTTPS.

The options here are either to use HTTP rather than HTTPS, or to add this certificate to your local computer. But these aren't options for a production server. Luckily, Oracle provides a way to use your own SSL certificates.

Typical Management Tasks

SSH to Instances

During the setup process, we provide Oracle with a public key which is used to get SSH access to the instances. The cloud does nothing special here. On Windows, we may use PuTTY: just add the private key to Pageant and connect to the instance as user opc.



Opening Ports

By default only the absolute minimum of ports is open, so we can't connect to the OAC instance using the BI Admin tool, or to the DB with SQL Developer. In order to do this, we should create an access rule which allows access to these particular ports.

In order to get to the Access Rules interface, we must use instance menu and select the Access Rules option.


This will open the Access Rules list. What I don't like about it is that it shows the full list of all rules, yet we can only create a rule for this particular instance.


The new rule creation form is simple and should cause no issues. But be careful here: don't open up too much to the wild Internet.


Add More Users

The user who registered a Cloud Account becomes its administrator and can invite more users and manage privileges.


Here we can add and modify users.


When we add a user we specify a name, email and login. Also, here we set the roles for the user. The user will get an email with these details and a link to register.

Obviously, the user won't be asked for a credit card. They just start working, and that's all.


My first steps with Oracle Analytics Cloud were not very easy, but I think it was worth it. Now I can create a new OBIEE instance in just a few minutes, and one hour later it will be up and running. I think that's pretty fast compared to the normal process of provisioning a new server in a typical organisation. We don't need to think about OS installation, or licenses, or anything else. Just try it.

Categories: BI & Warehousing

New OA Framework 12.2.6 Update 3 Now Available

Steven Chan - Thu, 2017-06-01 02:00

Web-based content in Oracle E-Business Suite Release 12 runs on the Oracle Application Framework (also known as OA Framework, OAF, or FWK) user interface libraries and infrastructure. Since the initial release of Oracle E-Business Suite Release 12.2 in 2013, we have released a number of cumulative updates to Oracle Application Framework to fix performance, security, and stability issues.

These updates are provided in cumulative Release Update Packs, and cumulative Bundle Patches that can be applied on top of the Release Update Packs. In this context, cumulative means that the latest RUP or Bundle Patch contains everything released earlier.
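"Cumulative" can be modelled very simply: each patch's fix set is a superset of its predecessor's. An illustrative Python sketch with made-up fix names, not Oracle's actual patch contents:

```python
# Illustrative model of cumulative bundle patches: each patch contains
# every fix shipped in the patches before it (fix names are made up).
bundle_patches = {
    1: {"fix_a"},
    2: {"fix_a", "fix_b"},
    3: {"fix_a", "fix_b", "fix_c"},
}

latest = bundle_patches[max(bundle_patches)]
# Applying only the latest patch is enough: it covers all earlier fixes.
print(all(fixes <= latest for fixes in bundle_patches.values()))  # True
```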

The latest OAF update for Oracle E-Business Suite Release 12.2.6 is now available:

Where is this update documented?

Instructions for installing this OAF Release Update Pack are in the following My Oracle Support knowledge document:

Who should apply this patch?

All Oracle E-Business Suite Release 12.2.6 users should apply this patch.  Future OAF patches for EBS Release 12.2.6 will require this patch as a prerequisite. 

What's new in this update?

This bundle patch is cumulative: it includes all fixes released in previous EBS Release 12.2.6 bundle patches.

This latest bundle patch includes fixes for the following issues:

  • Cosmetic issue: Framework email notifications have boxes around headers.
  • Launching a Forms application in Accessibility mode fails on the second iteration in the IE11 browser.
  • In the attachment image style, adding an attachment fails when a primary key of an entity map has a null value.
  • In a Japanese-enabled environment, text in the advanced table Total cell is split into multiple lines when the text contains spaces.

Related Articles

Categories: APPS Blogs

Avoiding Coincidental Cohesion

Andrew Clarke - Wed, 2017-05-31 17:10
Given that Coincidental Cohesion is bad for our code base, we obviously want to avoid writing utilities packages. Fortunately it is mostly quite easy to do so; it just requires vigilance on our part. Utilities packages are rarely planned. More often we are writing a piece of business functionality when we find ourselves in need of some low-level functionality. It doesn't fit in the application package we're working on, and perhaps we suspect that it might be more generally useful, so we need somewhere to put it.

The important thing is to recognise and resist the temptation of the Utilities package. The name itself (and similarly vague synonyms like helper or utils) should be a red flag. When we find ourselves about to type create or replace package utilities we need to stop and think: what would be a better name for this package? Consider whether there are related functions we might end up needing. Suppose we're about to write a function to convert a date into a Unix epoch string. It doesn't take much imagination to think we might need a similar function to convert a Unix timestamp into a date. We don't need to write that function now, but let's start a package dedicated to Time functions instead of a miscellaneous utils package.
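To make that concrete, here is what such a dedicated Time package might look like, sketched in Python rather than PL/SQL for brevity (the function names are my own):

```python
# A cohesive "time functions" module: both directions of the conversion
# live together, instead of one of them landing in a utils grab-bag.
from datetime import datetime, timezone

def date_to_epoch(dt: datetime) -> int:
    # Treat naive datetimes as UTC for a deterministic result.
    if dt.tzinfo is None:
        dt = dt.replace(tzinfo=timezone.utc)
    return int(dt.timestamp())

def epoch_to_date(epoch: int) -> datetime:
    return datetime.fromtimestamp(epoch, tz=timezone.utc)

ts = date_to_epoch(datetime(2017, 6, 1))
print(ts, epoch_to_date(ts))  # round-trips to 2017-06-01 00:00:00+00:00
```

The point is not the implementation but the home: the day we need the inverse conversion, it already has an obvious place to go.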

Looking closely at the programs which comprise the DBMS_UTILITY package, it is obviously unfair to describe them as a random selection. In fact there are seven or eight groups of related procedures.

DB Info

  • DBLINK_ARRAY Table Type
  • DB_VERSION Procedure
  • PORT_STRING Function
Runtime Messages
Object Management
  • COMMA_TO_TABLE Procedures
  • COMPILE_SCHEMA Procedure
  • INVALIDATE Procedure
  • TABLE_TO_COMMA Procedures
  • VALIDATE Procedure
Object Info (Object Management?)
  • LNAME_ARRAY Table Type
  • NAME_ARRAY Table Type
  • NUMBER_ARRAY Table Type
  • UNCL_ARRAY Table Type
  • CANONICALIZE Procedure
  • GET_DEPENDENCY Procedure
  • NAME_RESOLVE Procedure
  • NAME_TOKENIZE Procedure
Session Info
SQL Manipulation
  • EXPAND_SQL_TEXT Procedure
  • GET_SQL_HASH Function
Statistics (deprecated)
  • ANALYZE_SCHEMA Procedure
  • GET_CPU_TIME Function
  • GET_TIME Function
  • GET_HASH_VALUE Function
  • IS_BIT_SET Function

We can see an alternative PL/SQL code suite, with several highly cohesive packages. But there will be some procedures which are genuinely unrelated to anything else; the four procedures in the Unclassified section above are examples. Writing a miscellaneous utils package for these programs is still wrong, though. There are better options.

  1. Find a home. It's worth considering whether we already have a package which would fit the new function. Perhaps WAIT_ON_PENDING_DML() should have gone in DBMS_TRANSACTION; perhaps IS_BIT_SET() properly belongs in UTL_RAW.
  2. A package of their own. Why not? It may seem extravagant to have a package with a single procedure but consider DBMS_DG with its lone procedure INITIATE_FS_FAILOVER(). The package delivers the usual architectural benefits plus it provides a natural home for related procedures we might discover a need for in the future.
  3. Standalone procedure. Again, why not? We are so conditioned to think of a PL/SQL program as a package that we forget it can be just a Procedure or Function. Some programs are suited to standalone implementation.

So avoiding the Utilities package requires vigilance. Code reviews can help here. Preventing the Utilities package from becoming entrenched is crucial: once we have a number of packages dependent on a Utilities package it is pretty hard to get rid of it. And once it becomes a fixture in the code base, developers will consider it more acceptable to add procedures to it.

Part of the Designing PL/SQL Programs series

Utilities - the Coincidental Cohesion anti-pattern

Andrew Clarke - Wed, 2017-05-31 15:46
One way to understand the importance of cohesion is to examine an example of a non-cohesive package, one exhibiting a random level of cohesion. The poster child for Coincidental Cohesion is the utility or helper package. Most applications will have one or more of these, and Oracle's PL/SQL library is no exception. DBMS_UTILITY has 37 distinct procedures and functions (i.e. not counting overloaded signatures) in 11gR2 and 38 in 12cR1 (and R2). Does DBMS_UTILITY deliver any of the benefits the PL/SQL Reference says packages deliver?

Easier Application Design?

One of the characteristics of utilities packages is that they aren't designed in advance. They are the place where functionality ends up because there is no apparently better place for it. Utilities occur when we are working on some other piece of application code and discover a gap in the available functionality, such as hashing a string. When this happens we generally need the functionality now: there's little benefit to deferring the implementation until later. So we write a GET_HASH_VALUE() function, stick it in our utilities package and proceed with the task at hand.

The benefit of this approach is we keep our focus on the main job, delivering business functionality. The problem is, we never go back and re-evaluate the utilities. Indeed, now there is business functionality which depends on them: refactoring utilities introduces risk. Thus the size of the utilities package slowly increases, one tactical implementation at a time.

Hidden Implementation Details?

Another characteristic of utility functions is that they tend not to share concrete implementations. Often a utilities package beyond a certain size will have groups of procedures with related functionality. It seems probable that DBMS_UTILITY.ANALYZE_DATABASE(), DBMS_UTILITY.ANALYZE_PART_OBJECT() and DBMS_UTILITY.ANALYZE_SCHEMA() share some code, so there are benefits to co-locating them in the same package. But it is unlikely that CANONICALIZE(), CREATE_ALTER_TYPE_ERROR_TABLE() and GET_CPU_TIME() have much code in common.

Added Functionality?

Utility functions are rarely part of a specific business process. They are usually called on a one-off basis rather than being chained together, so there is no state to be maintained across different function calls.

Better Performance?

For the same reason there is no performance benefit from a utilities package. Quite the opposite: when there is no relationship between the functions we cannot make predictions about usage. We are not likely to call EXPAND_SQL_TEXT() right after calling PORT_STRING(), so there is no benefit in loading the former into memory when we call the latter. In fact the performance of EXPAND_SQL_TEXT() is impaired because we have to load the whole DBMS_UTILITY package into the shared pool, and it uses up a larger chunk of memory until it gets aged out. To be fair, in these days of abundant RAM, some unused code in the library cache need not be our greatest concern. But whichever way we bounce it, it's not a boon.

Grants?

Privileges on utility packages are a neutral concern. Often utilities won't be used outside the owning schema. In cases where we do need to make them more widely available, we're probably granting access on some procedures that the grantee will never use.

Modularity?

From an architectural perspective, modularity is the prime benefit of cohesion. A well-designed library should be frictionless and painless to navigate. The problem with random assemblages like DBMS_UTILITY is that it's not obvious what functions they may contain. Sometimes we write a piece of code we didn't need to.

The costs of utility packages

Perhaps your PL/SQL code base has a procedure like this:

create or replace procedure run_ddl
    ( p_stmt in varchar2)
as
    pragma autonomous_transaction;
    v_cursor number := dbms_sql.open_cursor;
    n pls_integer;
begin
    dbms_sql.parse(v_cursor, p_stmt, dbms_sql.native);
    n := dbms_sql.execute(v_cursor);
    dbms_sql.close_cursor(v_cursor);
exception
    when others then
        if dbms_sql.is_open(v_cursor) then
            dbms_sql.close_cursor(v_cursor);
        end if;
        raise;
end run_ddl;

It is a nice piece of code for executing DDL statements. The autonomous_transaction pragma prevents the execution of arbitrary DML statements (by throwing ORA-06519), so it's quite safe. The only problem is, it re-implements DBMS_UTILITY.EXEC_DDL_STATEMENT().

Code duplication like this is a common side effect of utility packages. Discovery is hard because their program units are clumped together accidentally. Nobody sets out to deliberately re-write DBMS_UTILITY.EXEC_DDL_STATEMENT(); it happens because not enough people know to look in that package before they start coding a helper function. Redundant code is a nasty cost of Coincidental Cohesion. Besides the initial wasted effort of writing an unnecessary program, there are the incurred costs of maintaining it and testing it, and the risk of introducing bugs or security holes. Plus each additional duplicated program makes our code base a little harder to navigate.

Fortunately there are tactics for avoiding or dealing with this. Find out more.

Part of the Designing PL/SQL Programs series

New workshop "Exadata For Developers"

Randolf Geist - Wed, 2017-05-31 15:00
Just a short note that I've developed a new two day course that covers all relevant features that a database application developer should know when dealing with the Oracle Exadata Database Machine platform.

It covers in detail Smart Scans, the Exadata Flash Cache, Hybrid Columnar Compression and all surrounding features like Storage Indexes, (serial) direct path reads and so on. Of course it also includes features that were added in 12c, like Attribute Clustering and Zone Maps.

All features are presented with live demo scripts, and there will be enough time to discuss your specific questions and analyse existing applications if desired.

For more information and details, check the corresponding pages:

German: Exadata für Anwendungsentwickler

English: Exadata For Developers

OHUG 2017 - What Looks Good To Me

Floyd Teter - Wed, 2017-05-31 14:01
So I’m headed to the OHUG 2017 conference next week.  As it is one of the few conferences I attend anymore, I’m pretty excited about going.  I’m particularly interested in information related to the implementation of Oracle HCM Cloud.  So, in  preparation for the conference, I thought I’d share some events and sessions that look good to me.

First, a few caveats about the following list.   I’m presenting twice myself, so I’m breaking my sessions out as a blatant plug rather than including them on the list:
  • Oracle Human Capital Management Cloud User Experience Extensibility: From Mystery To Magic, Wednesday, 1:45 pm, Salon 1
  • An Update: The Oracle HCM Cloud Solution Library, Friday, 10:15 am, Salon 8
It's worth stating here that, although I'm an Oracle employee, the opinions here are mine and mine alone - this should not be interpreted in any way as official Oracle information or direction. Nor am I speaking for OHUG in any way. In addition, keep in mind that I'm focused on Oracle HCM Cloud Applications. OHUG tracks cover HCM Cloud, PeopleSoft, and E-Business. As I make my living these days by working on HCM Cloud Applications, I can't render any opinions on PeopleSoft or EBS sessions, so you will only see HCM Cloud entries on my list. Also keep in mind that I won't list SIG sessions: you know better than I which SIGs you should attend. But I will say that the SIG sessions are often the best part of a user conference, so be sure you don't miss out. And finally, some of the sessions I list will have schedule conflicts with other sessions I list. But that's an indicator of a great conference: making difficult choices about where to burn your calories.
So, with all that said, let's get to the list. Sessions are listed in no particular order. And, for the presenters that did not make this list: it's not that your presentation did not look interesting (almost all of the sessions have something that I'd love to check out); it's just that I'm digging into other subject areas this year, and my own focus has definitely driven this list.

  • Digital HR Transformation: First and Foremost Managing The Challenge of Change - 4:45 pm, Salon 12
  • Product Vision and Roadmap: Oracle HCM Cloud - 3:30 pm
  • The Current State of Human Capital Management - 1:30 pm, Executive Ballroom
  • User Experience: Best Practices to Ensure a Positive Experience for Candidates and Users - 9:00 am, Salon 15

  • Oracle HCM Cloud Leading Practice - Cloud Rapid Start Implementation with Tools - 10:30 am, Salon 12
  • Meet The Experts: HCM Cloud Center of Excellence - 3:15 pm, Salon 19
  • How To Get The Best Out of OTBI - 9:00 am, Salon 12
  • Cloud Global Deployments - Leading Practices and Things That Make You Go Hmmm - 11:00 am, Salon 3
  • Cloud Payroll: What You Don't Know Could Hurt You or At Least Your Project - 2:30 pm, Salon 3
  • Adopting A Testing Strategy Fit For The Latest Oracle HCM Cloud - 11:30 am, Salon 3
  • HCM Cloud Extracts Made Easy! - 3:15 pm, Salon 9
  • Decision Is Made: Going To HCM Cloud. Now, What's The Best Strategy For Data Conversion? - 10:15 am, Salon 13
  • Patching - Where's The Beef? - 9:00 am, Salon 4

  • Best Practices in Attracting Top Talent Through Oracle Taleo Cloud Suite - 9:00 am, Salon 2
  • Cloud Portal: Self-Service Management For The HCM Cloud - 9:00 am, Salon 3

Note that this information is current as of May 31st. Dates, times and locations could change at any time leading up to or during the conference. Your best bet for staying current is the OHUG application for your phone.

In addition to the sessions, there are a few conference events you may want to consider:
  • Meet The Experts:  a place to meet and chat with the people who build and deliver Oracle HCM products.  Watch the schedule for the times and locations of those you’d care to meet.  I have listed one Meet The Experts session in the above list - the session for the HCM Cloud Center of Excellence.  Oracle’s John Cafolla will be the expert at this particular session.  John has his fingers on the pulse of HCM Cloud implementations around the globe.  If you want to talk implementation, John is your guy.  You can find the full schedule for Meet The Experts here.
  • Report Jam:  Oracle’s Julian Challenger conducts a series of pretty cool workshops focused on building reports using OTBI.  If you have reports you’d like to build or report-related questions, you’ll get a hands-on opportunity to collaborate with Oracle’s best OTBI gurus here.  Sign up for a time slot in advance at the Empire foyer.
  • Usability Lab: you’ll have the opportunity to see and provide feedback on HCM-related features and applications still in development.  Great for getting a sneak peek and influencing future direction of Oracle products.  The Lab only happens on Wednesday and requires signing up in advance by sending an email to jeannette.chadwick@oracle.com.
  • Oracle Support Central:  Bring your toughest HCM-related questions here. Whether it’s HR, Payroll, Benefits, Compensation, Taleo or whatever, this is the place to get answers.

Let’s wrap this up with an invitation from me to you:  if you’d like to get together with me at the conference to talk or ask questions about anything related to HCM Cloud (or if you’d just like to hang out), hit me up on Twitter @fteter.  I'm happy to talk to customers, partners, old friends and potential new friends.  Hope to see you there! 

DevOps and the Agile DBA

Gerger Consulting - Wed, 2017-05-31 13:38
The esteemed ACE Director alumna Kellyn Pot'Vin-Gorman is hosting our next webinar!

Attend the free webinar by Kellyn on June 13th and learn how agile best practices and virtualization can help you become better at DevOps.

About the Webinar:
This session discusses common database challenges and how they can be solved with newer DevOps practices without impacting the DBA's "safe zone".

Development, testing, provisioning, patching, upgrading and managing will become easier with agile tools, virtualization and collaboration.

The take-aways from this session are to embrace DevOps basics with a “DBA twist” such as:
  • Develop, Rinse, Repeat
  • Test Until Your Five Year Old Can Do It
  • Human Intervention Creates Human Error- Automate EVERYTHING.
  • How a shorter development cycle builds business success.
  • How the DBA can do more with less.
Discover how virtualization and agile development practices provide more value, even to the most senior database expert.

We’ll demo different scenarios surrounding DevOps, virtualization and how to make the DBA pivotal to DevOps success.

Sign up for the free webinar.

About the Presenter:

Kellyn Pot'Vin-Gorman is a member of the Oak Table Network and an Oracle ACE Director alumna. She is the Technical Intelligence Manager for the Office of CTO at Delphix, a company recognized for its impressive virtualization and data masking capabilities. Kellyn is known for her extensive work with multi-database platforms, cloud migrations, virtualization, the command line interface, environment optimization and tuning, automation and architecture design. Her blog, http://dbakevlar.com, and her social media activity under the handle DBAKevlar are well respected for their insight and content.

Registration is free but space is limited.
Sign up now.
Categories: Development

Packages, roles, compilation and invoker rights and package privileges.

Tom Kyte - Wed, 2017-05-31 13:26
Hi I have need to select from tables in another schema (B). I am connected to schema (A) by using proxy account. Privileges have been given to me through roles and I can select tables from schema B with sqlplus. There is package in schema...
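The question is truncated, but this is the classic symptom of role-based privileges inside PL/SQL: privileges granted through a role are disabled in a definer's-rights stored unit, so queries that work in SQL*Plus fail to compile in a package. A sketch of the two usual fixes follows (the schema and table names `a`, `b.emp`, and the function `emp_count` are hypothetical, invented for illustration):

```sql
-- Fix 1 (the usual answer): a direct object grant, issued as user B.
-- Definer's-rights PL/SQL sees direct grants, but not role grants.
GRANT SELECT ON b.emp TO a;

-- Fix 2: an invoker's-rights unit with dynamic SQL defers the
-- privilege check to run time, where the caller's roles ARE active.
CREATE OR REPLACE FUNCTION a.emp_count RETURN NUMBER
  AUTHID CURRENT_USER
IS
  l_cnt NUMBER;
BEGIN
  EXECUTE IMMEDIATE 'SELECT COUNT(*) FROM b.emp' INTO l_cnt;
  RETURN l_cnt;
END;
/
```

Note that dynamic SQL is needed in Fix 2 because static SQL in the package would still be privilege-checked against the owner at compile time.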
Categories: DBA Blogs

PL/SQL update set row command and virtual columns

Tom Kyte - Wed, 2017-05-31 13:26
Hi I have found PL/SQL 'update tablex set row' command very useful in some cases. (And special thanks to asktom.com to teaching me it !). However I now would like to add virtual columns to table to be updated. My trial caused error of 'trying...
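The error here is expected: a `table%ROWTYPE` record includes the virtual columns, and `update … set row` then tries to write them, which Oracle disallows (ORA-54017). One workaround, sketched below with hypothetical table and column names, is to base the record on a view that excludes the virtual column and update through that view:

```sql
-- Table with a virtual column: "set row" against it would fail.
CREATE TABLE t (
  id     NUMBER PRIMARY KEY,
  price  NUMBER,
  qty    NUMBER,
  total  NUMBER GENERATED ALWAYS AS (price * qty) VIRTUAL
);

-- A view over only the real (stored) columns.
CREATE OR REPLACE VIEW t_base AS
  SELECT id, price, qty FROM t;

DECLARE
  r t_base%ROWTYPE;
BEGIN
  SELECT * INTO r FROM t_base WHERE id = 1;
  r.qty := r.qty + 1;
  -- No virtual column in the record, so SET ROW succeeds.
  UPDATE t_base SET ROW = r WHERE id = r.id;
END;
/
```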
Categories: DBA Blogs

Find number and date which break the rule

Tom Kyte - Wed, 2017-05-31 13:26
Dear Chris, I have table like this <code>Account_number Opening_date 1234 16/05/2017 1235 16/05/2017 1236 23/04/2017 1237 25/04/2017 1238 25/04/2...
Categories: DBA Blogs

Need to load the data into pl/sql tables from excel file

Tom Kyte - Wed, 2017-05-31 13:26
Hello Experts, I have a below code from where I want to convert .xls file to .csv file and load the data into pl/sql tables but I am not able to perform it. Could you please help to get this done? Code: CREATE OR REPLACE FUNCTION LOAD_CSV...
Categories: DBA Blogs


Tom Kyte - Wed, 2017-05-31 13:26
Hi Chris/Connonr, We have two databases namely Primary & Secondary. On Primary, There is scheduled Job which select data from Primary and Insert it into Secondary then inserted records gets delete from Primary. here are the steps written in ...
Categories: DBA Blogs

Session Maximum Time Exceeded

Tom Kyte - Wed, 2017-05-31 13:26
Hello I have problem with Session Maximum Time Exceeded. Users are usually 13 hours straight in the app. But sessions are cleared in 12 hours. How can I increase or disable it at all? Best regards, Kristjan
Categories: DBA Blogs

Enabling an EBS Global Diagnostics Trace

Steven Chan - Wed, 2017-05-31 12:23

Oracle E-Business Suite provides several types of logs.  One of the tools you can use is the Logging Framework, which allows you to collect and view log messages in Oracle Applications Manager.  The Logging Framework collects debug messages, errors, and alerts.

A Global Diagnostics Trace can help you diagnose issues if errors occur when users click a button or select an option. It's a targeted trace that can be run per user, thereby minimizing the impact on the overall performance of your environment. This kind of trace is also known as an FND Diagnostics Trace or Oracle Applications Logging.
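As a sketch of the usual per-user recipe: set the FND debug profile options at USER level for the user being traced, have them reproduce the problem, then query the collected messages. The profile option names below are the standard EBS ones (verify against your release), and the user name `JSMITH` is hypothetical:

```sql
-- Set at USER level via System Administrator > Profile > System:
--   FND: Debug Log Enabled  = Yes
--   FND: Debug Log Level    = Statement
--   FND: Debug Log Module   = %
-- After reproducing the issue, review the captured messages:
SELECT module, message_text
  FROM fnd_log_messages
 WHERE user_id = (SELECT user_id FROM fnd_user
                   WHERE user_name = 'JSMITH')
   AND timestamp > SYSDATE - 1/24    -- last hour only
 ORDER BY log_sequence;
```

Remember to set FND: Debug Log Enabled back to No afterwards, since statement-level logging generates a large volume of rows.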

You can find a quick summary of steps to enable a Global Diagnostics trace here:

There's lots more information about more-powerful logging options in the following manuals:


Categories: APPS Blogs
