Feed aggregator

yum-builddep and rpmbuild

Wim Coekaerts - Sun, 2018-03-18 13:10

I sometimes try to build an RPM from source (to patch something or try a patch). Since I do these things every now and then, I tend to forget stuff easily and it takes me a while to get back into it.

Anyway - I was trying to build lxc (example) earlier today and I wanted to patch the lxc-oracle template. So I log into my OL7 box and use yumdownloader to download the lxc source.

# yumdownloader --source lxc

Install the src rpm

# rpm -ivh lxc-1.1.5-2.0.9.el7.src.rpm

So I now have ~/rpmbuild/SPECS/lxc.spec and ~/rpmbuild/SOURCES/ (a bunch of patch files and the lxc-1.1.5.tar.gz).

Install rpmbuild (wasn't installed yet)

# yum install rpm-build

(I know - the rpm is called rpm-build but the binary is rpmbuild... odd. never figured out why in the world it couldn't just be the same - anyway)

Ok. So... my usual step is : 

# rpmbuild -bp SPECS/lxc.spec

I don't want to build binaries - just create the whole BUILD/ tree with the patches applied.

Here is where I always waste time. There are a bunch of build dependencies that are not yet installed, and in the past I would (pretty stupid of me, thinking back) just go down the list one by one, running yum install <rpm needed> until rpmbuild stopped complaining.

Turns out that yum-utils includes a tool called yum-builddep! Aha.

# yum-builddep SPECS/lxc.spec

Look at that! It goes and pulls in all the build dependency packages for you.

ok, back to # rpmbuild -bp SPECS/lxc.spec

and all is happy!  This is one I won't forget.
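Putting the whole round trip together (same lxc example, assuming the default ~/rpmbuild layout):

# yum install yum-utils rpm-build           # yumdownloader and yum-builddep come from yum-utils
# yumdownloader --source lxc                # fetches the .src.rpm into the current directory
# rpm -ivh lxc-1.1.5-2.0.9.el7.src.rpm      # spec goes to ~/rpmbuild/SPECS, patches/tarball to ~/rpmbuild/SOURCES
# yum-builddep SPECS/lxc.spec               # installs every BuildRequires in one go
# rpmbuild -bp SPECS/lxc.spec               # %prep only: unpack the tarball and apply the patches under ~/rpmbuild/BUILD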

Happy 20th Anniversary to Applied OLAP!

Tim Tow - Sun, 2018-03-18 13:08

Today, March 18, 2018, is the 20-year anniversary of our incorporation! It has been a long journey since that time; here are some of the highlights:
  • 1998 - We were a one-man shop and wrote a reporting and budgeting application for a customer in New York. I spent about 150 nights that year on the road.
  • 1999 - ActiveOLAP for Essbase was released and we earned the trust of two of our long-term customers. It was during the next couple of years that I traveled to the West Coast about 35 times in one year.
  • 2003 - Portions of our web-service technology were acquired by Hyperion Solutions and we wrote the Hyperion Objects product based on that technology.
  • 2007 - The Dodeca Spreadsheet Management System was released.
  • 2014 - We hired our first resource to focus solely on sales. Prior to that, we marketed our software via 'word of mouth'.
  • 2016 - The Dodeca Excel Add-In for Essbase was released and we acquired the DrillBridge product.
During this time, we have grown the company organically without outside investment. While this strategy meant we had slower growth, the benefit is that it has allowed us to focus solely on the needs of our customers and not the needs of 'investors'. It also meant that we 'ate a lot of beans' in the early days. Those were great lessons in the value of a dollar that we carry with us today in the value of the software we provide to our customers.

Thank you to all of our customers. We feel lucky to work with each and every one of you and we continue to learn from each of you. We pledge to continue working hard to make your companies successful.

Tim Tow
Founder and President
Applied OLAP, Inc
Categories: BI & Warehousing

Exadata X7 installation some Whitney+ issues

Syed Jaffar - Sun, 2018-03-18 05:45
One of our DB teams from Europe highlighted the following issues while installing and configuring VMs on Exadata X7.

"Today we started X7-2 Exadata installation. But we faced some issues. First problem is, when installing the cluster, resources are being up then suddenly, being down and there are some internal bugs for this problem:

Bug 27275655 - Running tcpdump on bondeth0 kills the Whitney LOM interface
Bug 27195117 - Finisar FTLX8574D3BCV-SU transceiver does not link up on X7-2 LOM

These internal bugs required re-imaging the whole stack with a newer image. Also, there is another problem with PXE (used for re-imaging).

Watch out for those bugs when you are installing Exadata X7.


ORA-01033 when Connecting to the DB Via Scan Listener

Syed Jaffar - Sun, 2018-03-18 04:51
Had a very interesting and tricky scenario a few days ago. It was something I had never encountered in my DBA career. Hence, I thought of sharing the details of the story here today.


During mid-day, an application team reported that they were suddenly getting an ORA-01033 when connecting to a 3 node RAC database. Quick basic validation revealed that the issue happened only when connecting through the SCAN IPs; VIP and direct (physical IP) connections had no issues. So it was clear that the issue was with the SCAN IPs. We verified all the configuration settings, network and firewall rules to ensure there was no issue accessing the SCAN IPs. To our surprise, everything was perfectly okay, which really puzzled us. We also suspected the Data Guard configuration of this database, but that wasn't the case either.


After a quick search on the internet, we came across the MOS note: ORA-01033 is Thrown When Connecting to the Database Via Scan Listener (Doc ID 1446132.1).

The issue was that one of the team members was restoring a database backup on a new host. The tricky part is that the new host is part of the same network/subnet as the 3 node RAC database and can access the SCAN IPs too. The instance being restored had registered itself with the SCAN, so whenever a new connection request came in through the SCAN, it could be handed to an instance that was still in mount state (restoring). Hence the ORA-01033 error.


After reviewing the note, the restore was immediately stopped, and things got back to normal. Nullifying the remote_listener parameter to de-register the instance from the SCAN would also have worked in this case.

This issue can be prevented by configuring Class of Secure Transport (COST).
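Roughly, the two counter-measures look like this (the listener name is the typical RAC default and is only illustrative; see the COST notes referenced below for the supported syntax):

-- on the instance being restored: de-register it from the SCAN listeners
SQL> alter system set remote_listener='' scope=memory;

-- COST: in the SCAN listener configuration, restrict dynamic registration, e.g. to IPC only
SECURE_REGISTER_LISTENER_SCAN1 = (IPC)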

The probability of hitting this issue is very low, but I felt it was interesting and worth sharing.


ORA-01033 is Thrown When Connecting to the Database Via Scan Listener (Doc ID 1446132.1)
NOTE:1340831.1 - Using Class of Secure Transport (COST) to Restrict Instance Registration in Oracle RAC
NOTE:1453883.1 - Using Class of Secure Transport (COST) to Restrict Instance Registration 

AVM: Powerline Repeater / Fritz Repeater

Dietrich Schroff - Sat, 2018-03-17 15:16
Last weekend my Powerline Adapter 546E broke. I tried some restarts, but after 1-5 minutes the WLAN went down and the adapter got really hot.

Perhaps the missing mesh functionality was due to a hardware problem which caused this total failure.

Never mind. I ordered a Fritz Box Repeater 1160 and followed the instructions from AVM:
Note: If your repeater is already connected to your FritzBox and you perform the firmware update >6.90, then you have to do the integration via WPS once again (enable WPS on your FritzBox, then press WPS button on the repeater).

After that everything worked as expected:

Automatic Block Media Recovery in a DataGuard

Yann Neuhaus - Sat, 2018-03-17 13:39

With Oracle 12.2, in a Data Guard environment corrupted data blocks can be automatically replaced with uncorrupted copies of those blocks.
There are just some requirements:
• The physical standby database must be operating in real-time query mode, which requires an Oracle Active Data Guard license.
• The physical standby database must be running real-time apply.
Automatic block media recovery works in both directions, depending on whether the corrupted blocks are encountered on the primary or on the standby.
In this blog we are going to see how this feature works.
Below is the configuration we are using:

DGMGRL> show configuration;

Configuration - ORCL_DR

  Protection Mode: MaxAvailability
  ORCL_SITE  - Primary database
    ORCL_SITE1 - Physical standby database
    ORCL_SITE2 - Physical standby database

Fast-Start Failover: DISABLED

Configuration Status:
SUCCESS   (status updated 19 seconds ago)


We can verify the status of our databases:

DGMGRL> show database  'ORCL_SITE' ;

Database - ORCL_SITE

  Role:               PRIMARY
  Intended State:     TRANSPORT-ON

Database Status:

DGMGRL> show database  'ORCL_SITE1' ;

Database - ORCL_SITE1

  Role:               PHYSICAL STANDBY
  Intended State:     APPLY-ON
  Transport Lag:      0 seconds (computed 1 second ago)
  Apply Lag:          0 seconds (computed 1 second ago)
  Average Apply Rate: 5.00 KByte/s
  Real Time Query:    ON

Database Status:

DGMGRL> show database  'ORCL_SITE2' ;

Database - ORCL_SITE2

  Role:               PHYSICAL STANDBY
  Intended State:     APPLY-ON
  Transport Lag:      0 seconds (computed 1 second ago)
  Apply Lag:          0 seconds (computed 1 second ago)
  Average Apply Rate: 4.00 KByte/s
  Real Time Query:    ON

Database Status:


The feature works with any protection mode; in our case the tests are done in MaxAvailability mode.

If corrupt data blocks are on the primary database, then the primary automatically searches for good copies of those blocks on a standby and, if they are found, has them shipped back to the primary.
The only requirement is that the primary has a LOG_ARCHIVE_DEST_n pointing to the standby, which should already be the case in a Data Guard environment.

If the corrupted block is located on the standby, the standby automatically requests uncorrupted copies of those blocks from the primary. The conditions for this mechanism to work are (a minimal sketch follows the list):
• The LOG_ARCHIVE_CONFIG parameter is configured with a DG_CONFIG list and a LOG_ARCHIVE_DEST_n parameter is configured for the primary database.
• The FAL_SERVER parameter is configured and its value contains an Oracle Net service name for the primary database.
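As a minimal sketch using the database names from this configuration (the Data Guard broker normally maintains these parameters for you), the standby-side settings would look something like:

SQL> alter system set log_archive_config='DG_CONFIG=(ORCL_SITE,ORCL_SITE1,ORCL_SITE2)' scope=both;
SQL> alter system set fal_server='ORCL_SITE' scope=both;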

In this demonstration we will simulate data corruption in the primary database; the scenario is the same for corrupted blocks on the standby side and works the same way.
From the primary we can verify that LOG_ARCHIVE_DEST_n is set.

SQL> select dest_name,DESTINATION,status from v$archive_dest where destination is not null;

DEST_NAME            DESTINATION                    STATUS
-------------------- ------------------------------ ---------
LOG_ARCHIVE_DEST_2   ORCL_SITE1                     VALID
LOG_ARCHIVE_DEST_3   ORCL_SITE2                     VALID

For the demonstration let’s consider a table of user SCOTT in a tablespace mytab

SQL> select table_name,tablespace_name from dba_tables where owner='SCOTT' and table_name='EMPBIS';

TABLE_NAME      TABLESPACE_NAME
--------------- ------------------------------
EMPBIS          MYTAB


Now let’s identify the blocks of the table on the primary and corrupt them.

SQL> select * from (select distinct dbms_rowid.rowid_block_number(rowid)  from scott.empbis);

DBMS_ROWID.ROWID_BLOCK_NUMBER(ROWID)
------------------------------------
                                 131


And then let’s run the following command to corrupt the corresponding block.

[oracle@primaserver ORCL]$ dd of=/u01/app/oracle/oradata/ORCL/mytab01.dbf bs=8192 seek=131 conv=notrunc count=1 if=/dev/zero
1+0 records in
1+0 records out
8192 bytes (8.2 kB) copied, 0.000315116 s, 26.0 MB/s
[oracle@primaserver ORCL]$

In a normal environment, accessing corrupted data with a SELECT would return errors.
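Typically that would surface as an ORA-01578 block corruption error, which could be confirmed, for example, like this (a generic check, not part of this demo):

RMAN> validate datafile 2;

SQL> select file#, block#, blocks from v$database_block_corruption;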
But in our case, on the primary, if we flush the buffer cache and do a select on the table, rows are returned without errors.

13:41:18 SQL> alter system flush buffer_cache;

System altered.

13:41:22 SQL> select * from scott.empbis;

     EMPNO ENAME      JOB              MGR HIREDATE         SAL       COMM
---------- ---------- --------- ---------- --------- ---------- ----------
      7369 SMITH      CLERK           7902 17-DEC-80        800

      7499 ALLEN      SALESMAN        7698 20-FEB-81       1600        300

      7521 WARD       SALESMAN        7698 22-FEB-81       1250        500

     EMPNO ENAME      JOB              MGR HIREDATE         SAL       COMM
---------- ---------- --------- ---------- --------- ---------- ----------
      7566 JONES      MANAGER         7839 02-APR-81       2975

      7654 MARTIN     SALESMAN        7698 28-SEP-81       1250       1400

      7698 BLAKE      MANAGER         7839 01-MAY-81       2850

     EMPNO ENAME      JOB              MGR HIREDATE         SAL       COMM
---------- ---------- --------- ---------- --------- ---------- ----------
      7782 CLARK      MANAGER         7839 09-JUN-81       2450

      7788 SCOTT      ANALYST         7566 19-APR-87       3000

      7839 KING       PRESIDENT            17-NOV-81       5000

     EMPNO ENAME      JOB              MGR HIREDATE         SAL       COMM
---------- ---------- --------- ---------- --------- ---------- ----------
      7844 TURNER     SALESMAN        7698 08-SEP-81       1500          0

      7876 ADAMS      CLERK           7788 23-MAY-87       1100

      7900 JAMES      CLERK           7698 03-DEC-81        950

     EMPNO ENAME      JOB              MGR HIREDATE         SAL       COMM
---------- ---------- --------- ---------- --------- ---------- ----------
      7902 FORD       ANALYST         7566 03-DEC-81       3000

      7934 MILLER     CLERK           7782 23-JAN-82       1300

14 rows selected.

13:41:27 SQL>

Indeed, Oracle automatically recovered the corrupted block. In the primary alert log, at the same time, we can see that a recovery was done:

ALTER SYSTEM: Flushing buffer cache inst=0 container=0 global
Hex dump of (file 2, block 131) in trace file /u01/app/oracle/diag/rdbms/orcl_site/ORCL/trace/ORCL_ora_3359.trc

Corrupt block relative dba: 0x00800083 (file 2, block 131)
Completely zero block found during multiblock buffer read

Reading datafile '/u01/app/oracle/oradata/ORCL/mytab01.dbf' for corruption at rdba: 0x00800083 (file 2, block 131)
Reread (file 2, block 131) found same corrupt data (no logical check)
Automatic block media recovery requested for (file# 2, block# 131)
Corrupt Block Found
         TIME STAMP (GMT) = 01/26/2018 13:41:25
         CONT = 0, TSN = 7, TSNAME = MYTAB
         RFN = 2, BLK = 131, RDBA = 8388739
         OBJN = 74352, OBJD = 74352, OBJECT = EMPBIS, SUBOBJECT =
         SEGMENT OWNER = SCOTT, SEGMENT TYPE = Table Segment
Automatic block media recovery successful for (file# 2, block# 131)
Automatic block media recovery successful for (file# 2, block# 131)

We have seen that an Active Data Guard environment can help recover corrupted data blocks automatically.


The article Automatic Block Media Recovery in a DataGuard first appeared on Blog dbi services.

Good But Not Good Enough Coding Practice

Michael Dinh - Sat, 2018-03-17 13:36

Good: The alert is from localhost, but what script is it from? If there were only one cron entry and it was intuitive, then I would not be blogging about it.

ALERT … Goldengate process “EXTRACT(PU)” has a lag of 02 hour 22 min on localhost

Better: Alert is from localhost for monitoring_gg.sh

monitoring_gg.sh ALERT … Goldengate process “EXTRACT(PU)” has a lag of 00 hour 00 min on localhost
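A trivial way to get there in any monitoring script is to prefix the alert text with the script's own name, something like this (a sketch only, not the actual monitoring_gg.sh; the lag variables and mail address are placeholders):

#!/bin/bash
# prefix every alert with the name of the script that raised it
SCRIPT_NAME=$(basename "$0")
HOST_NAME=$(hostname)
# LAG_HOURS / LAG_MINS would be computed from GGSCI output in the real script
ALERT_MSG="Goldengate process \"EXTRACT(PU)\" has a lag of ${LAG_HOURS} hour ${LAG_MINS} min on ${HOST_NAME}"
echo "${SCRIPT_NAME} ALERT ... ${ALERT_MSG}" | mailx -s "${SCRIPT_NAME} ALERT on ${HOST_NAME}" dba-team@example.com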

That is all.


Updated Oracle Linux 7 update 4 ARM64/aarch64 with uek5 4.14.26-2

Wim Coekaerts - Sat, 2018-03-17 10:48

We refreshed the installation media for OL7/ARM64 with the latest uek5 preview build based on upstream stable 4.14.26 and added perf and tuned.

You can download it from the OTN OL ARM webpage. Ignore the 4.14-14 in the text; that will get updated. We're also working on updating the Raspberry Pi 3 image to match the same version, hopefully using grub2 there as well to make it easier to have a single image repo.

The arm64 yum repo on http://yum.oracle.com has also been updated.

A few things to point out :

Oracle Linux 7 for ARM64 is going to be a 64-bit only distribution (aarch64). All binaries are built 64-bit and we have no support in user space libraries nor in the kernel for 32-bit.

Our ARM port is sharing the same source code base as x64. There are minor architecture changes where required to build but we have a single source code repository from which we build both architectures. This is important because it makes it easy and clean and allows us to synchronize the two architectures without problem.

Our kernel on ARM64 is built using GCC 7.3 : Linux version 4.14.26-2.el7uek.aarch64 gcc version 7.3.0 20180125

We currently test on Ampere Computing and Cavium ThunderX® systems. We plan to add more processor types over time.

New Recommended Patch Collection for Web ADI 12.2 Now Available

Steven Chan - Thu, 2018-03-15 11:40

Oracle Web Applications Desktop Integrator (Web ADI) allows you to use Microsoft Office applications such as Excel, Word, and Project to upload data to Oracle E-Business Suite. For example, you can use Excel to create formatted spreadsheets on your desktop to download, view, edit, validate, and upload Oracle E-Business Suite data. 

We have just released a new Recommended Patch Collection update for Web ADI in EBS 12.2:

This patch is cumulative and includes all previously released patches for Web ADI 12.2.  This includes the last patch collection released in July 2016 (23733865:R12.BNE.C).

We recommend that all Web ADI users apply this patch. It includes fixes for the following issues:

  • 20915362 23140772 - FWD-PORT REQUEST OF 17455458:R12.BNE.B ICX: SESSION TIMEOUT TO 12.2.4


Categories: APPS Blogs

Oracle Linux UEK4 (4.1.12-112.16.4) errata kernel update compiled with retpoline support

Wim Coekaerts - Thu, 2018-03-15 10:57

Yesterday afternoon, we released a UEK4 update for both Oracle Linux 6 and Oracle Linux 7.

You can find the announcement mail here.

This update includes a number of generic fixes, but most importantly it adds support for retpoline. In order to build this kernel, we also had to release updated versions of gcc, which we did a little while ago. You can find general information about retpoline on various sites; here's an article about a discussion on the kernel mailing list.

Note, our UEK5 preview kernels (based on 4.14 stable) are also built with retpoline support.

You can find more information about our errata publicly here.

As always, keep checking the what's new page for new RPMs released on http://yum.oracle.com.


Oracle Health Sciences Clinical One Named 2018 CARE Award Finalist

Oracle Press Releases - Thu, 2018-03-15 07:00
Press Release
Oracle Health Sciences Clinical One Named 2018 CARE Award Finalist Cloud-based Solution Recognized for Innovation in Clinical Technology

Redwood Shores, Calif.—Mar 15, 2018

Oracle Health Sciences today announced that Informa Pharma Intelligence named Oracle Health Sciences Clinical One a finalist for the Best Sponsor-Focused Technological Development category in the 2018 Clinical and Research Excellence (CARE) Awards. The awards recognize distinguished leaders who are making important contributions to advancing human health.

“The CARE Awards honor the world’s best innovators in life sciences,” said Karen Currie, Executive Director, Editorial, Pharma Intelligence and Chair of the 2018 CARE Awards judging panel. “Those named play a meaningful role in the entire process of therapeutic discovery, as millions of patients wait with hope. Technology plays a fundamental role in clinical trials and safe and effective drug development, which is why our independent expert panel of judges selected Oracle’s Clinical One platform as a finalist.”

The prestigious CARE awards are produced by Informa Pharma Intelligence, a respected research and publishing authority for pharmaceutical, contract research organizations (CROs), medical technology, biotechnology and healthcare service providers.

Traditionally, clinical operations—the process of developing a potentially life-saving drug from a promising molecule to an FDA-approved therapy—rely on a collection of point solutions that operate in isolation. The Oracle Health Sciences Clinical One platform was purpose built to unify and accelerate the drug discovery process from study design and start-up to conduct and post-marketing. It provides universal access to information that is captured once, secured in the cloud and used across all clinical trial processes. Configurable for trials of any size, customers can dynamically build, deploy and update studies regardless of the time of day.

“We are honored to be recognized for reimagining the way technology supports the entire drug development lifecycle,” said Steve Rosenberg, general manager, Oracle Health Sciences. “Though trials are abundant and the pace of drug approvals seems to be temporarily improving, in the U.S. alone it can take more than a decade and billions in capital for an experimental drug to travel from the lab to the medicine cabinet. Applying a powerful cloud infrastructure that can handle the volume of data, speed and coordination required for clinical trials can have an instrumental impact on efficiency, safety and compliance.”

Oracle will celebrate this distinction with its customers at the 2018 CARE Awards ceremony and dinner, which will be held on Wednesday, April 25, 2018, at the Boston Harbor Hotel.

Contact Info
Valerie Beaudett
+1 650.400.7833
Phebe Shi
Burson Marsteller for Oracle
+1 415.591.4032
About Informa Pharma Intelligence

Informa is a leading business intelligence, academic publishing, knowledge and events group. Informa’s Pharma Intelligence powers a full suite of analysis products - Datamonitor Healthcare, Sitetrove, Trialtrove, Pharmaprojects, Medtrack, Biomedtracker, Scrip, Pink Sheet and In Vivo – to deliver the data needed by the pharmaceutical and biomedical industry to make decisions and create real-world opportunities for growth. For more information, visit pharmaintelligence.informa.com.

About Oracle

The Oracle Cloud offers complete SaaS application suites for ERP, HCM and CX, plus best-in-class database Platform as a Service (PaaS) and Infrastructure as a Service (IaaS) from data centers throughout the Americas, Europe and Asia. For more information about Oracle (NYSE:ORCL), please visit us at www.oracle.com.

About Oracle Health Sciences

Oracle Health Sciences breaks down barriers and opens new pathways to unify people and processes, helping to bring new drugs to market faster. As the number one vendor in Life Sciences (IDC, 2017) and the number one provider of eClinical solutions (Everest Group, 2017), powered by the number one data management technology in the world (Gartner, 2017), Oracle Health Sciences is trusted by 29 of the top 30 pharma, 10 of the top 10 biotech, and 10 of the top 10 CROs for clinical trial and safety management around the globe.


Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

Talk to a Press Contact

Valerie Beaudett

  • +1 650.400.7833

Phebe Shi

  • +1 415.591.4032

Keeping Intervals

Jonathan Lewis - Thu, 2018-03-15 03:03

I’ve recently been reminded of a blog post I wrote a couple of years ago that discussed the issue of running into the hard limit of 2^20 -1 as the number of segments for a (composite) partitioned table – a problem that could arise in a relatively short time if you used a large number of hash subpartitions in an interval/hash composite partitioned table (you get about 2 years and 10 months of daily partitions at 1,024 subpartitions per day, for example).

A natural follow-on from that article is to think through a strategy for dropping old partitions sufficiently early that you don’t hit the limit as new partitions are created. This, of course, pretty much defeats the point of interval partitioning – instead of planning to add partitions “just in time” you now have to eliminate them “just in time”. Amongst other issues, we’re going to find that interval partitioning manages to re-introduce a problem with range partitioning that Oracle got rid of in Oracle 10g.

So let’s test the obvious option: drop the oldest partition(s) in time to keep head-room for new partitions; for convenience we’ll start with a simple interval partitioned table with a few pre-declared range partitions and a few automatically generated interval partitions. All the examples here were run under

rem     Script:         pt_merge.sql
rem     Author:         Jonathan Lewis
rem     Dated:          Feb 2018

create table t1(id, v1, padding)
partition by range (id) interval (1e4)
(
        partition p10000 values less than (1e4),
        partition p20000 values less than (2e4),
        partition p30000 values less than (3e4),
        partition p40000 values less than (4e4),
        partition p50000 values less than (5e4)
)
as
with generator as (
        select
                rownum id
        from dual
        connect by
                level <= 1e4 -- > comment to avoid WordPress format issue
)
select
        rownum                          id,
        lpad(rownum,10,'0')             v1,
        lpad('x',100,'x')               padding
from
        generator       v1,
        generator       v2
where
        rownum <= 1e5 -- > comment to avoid WordPress format issue
;

begin
        dbms_stats.gather_table_stats(
                ownname     => user,
                tabname     => 'T1',
                method_opt  => 'for all columns size 1'
        );
end;
/

I’ve created 100,000 rows and since the partitions I’ve pre-declared have an (unreachable) upper bound of only 50,000 Oracle will have added a further 6 partitions to the table to hold the data for values up to 110,000 (with just one row in the last partition). For testing purposes I’ve created the table in an otherwise empty tablespace so when I check the block address of each segment I can see the location (and size) of the segments so far. So here’s the list of names and locations:

------------------------- ---------------------- ------------ ----------
T1                        P10000                          128        256
T1                        P20000                          384        256
T1                        P30000                          640        256
T1                        P40000                          896        256
T1                        P50000                         1152        256
T1                        SYS_P69838                     1408        256
T1                        SYS_P69839                     1664        256
T1                        SYS_P69840                     1920        256
T1                        SYS_P69841                     2176        256
T1                        SYS_P69842                     2432        256
T1                        SYS_P69843                     2688        128

11 rows selected.

No surprises so far. So let’s pretend we know the dreaded ORA-14299 or ORA-14300 will be arriving soon and try to drop the first 5 partitions to keep the partition count below the limit. Here’s a cut-n-paste from an SQL*Plus session that tries to do that one partition at a time:

SQL> alter table t1 drop partition p10000;

Table altered.

SQL> alter table t1 drop partition p20000;

Table altered.

SQL> alter table t1 drop partition p30000;

Table altered.

SQL> alter table t1 drop partition p40000;

Table altered.

SQL> alter table t1 drop partition p50000;
alter table t1 drop partition p50000
ERROR at line 1:
ORA-14758: Last partition in the range section cannot be dropped

We can’t drop partition p50000 – it’s the highest partition that wasn’t created automatically, and we have to leave an “anchor” partition in place for interval partitioning to work from. By querying user_tab_partitions we can even see that this partition is flagged a little differently from the others:

select
        partition_name, interval, high_value
from
        user_tab_partitions
where
        table_name = 'T1'
order by
        partition_position
;

---------------------- --- --------------------------
P50000                 NO  5e4
SYS_P69844             YES 60000
SYS_P69845             YES 70000
SYS_P69846             YES 80000
SYS_P69847             YES 90000
SYS_P69848             YES 100000
SYS_P69849             YES 110000

7 rows selected.

So, at first sight, we’re stuck. If we’re dropping old partitions we will eventually get to a point where there’s only one “real” range partition at the bottom and then we can’t drop any more historic partitions. There are two solutions to this problem, explained a long time ago here and here by Harald van Breederode.

Option 1

Convert the interval partitioned table to a range partitioned table and back again, and if you know the interval (and you can always look it up in the data dictionary) there’s a quick and dirty way of doing that. Here’s a cut-n-paste demonstrating the method and effect:

SQL> alter table t1 set interval (10000);

Table altered.

SQL> select partition_name, interval, high_value from user_tab_partitions where table_name = 'T1' order by partition_position ; 

---------------------- --- --------------------------
P10000                 NO  1e4
P20000                 NO  2e4
P30000                 NO  3e4
P40000                 NO  4e4
P50000                 NO  5e4
SYS_P69850             NO  60000
SYS_P69851             NO  70000
SYS_P69852             NO  80000
SYS_P69853             NO  90000
SYS_P69854             NO  100000
SYS_P69855             NO  110000

11 rows selected.

SQL> select table_name, partitioning_type, interval from user_part_tables;

-------------------- --------- --------------------
T1                   RANGE     1E4

1 row selected.

Every single partition has just become a range-based partition, but the table is still interval partitioned. This is a tidy solution, but there’s one obvious, generic, drawback to the method.  The “theory” of interval partitioning is that you don’t have to pre-create partitions in anticipation of the data arriving – so what will happen if a (possibly bad) row arrives weeks ahead of schedule and you find that Oracle has created (say) partition 85,001 with a gap of 12,000 partitions between the current high partition and the new one. If you use this “convert to range and back” trick then you’ll have a single partition covering the entire range where you were expecting (eventually) to have 12,000 partitions. Every time you convert from interval to range and back you’d better have code that checks if there are any gaps first, and then does loads of “split partition” –  or comes up with some other strategy – to address the side effects.

Option 2

When you’ve got just one range partition left, merge the bottom two partitions – this makes the next partition up a range partition without affecting any other partitions. After recreating the original table and dropping the first 4 partitions this is how things go:

SQL> alter table t1 drop partition p50000;
alter table t1 drop partition p50000
ERROR at line 1:
ORA-14758: Last partition in the range section cannot be dropped

SQL> alter table t1 merge partitions for (45000), for (55000) into partition p_low;

Table altered.

SQL> select partition_name, interval, high_value from user_tab_partitions where table_name = 'T1' order by partition_position;

---------------------- -------------------- --------------------------
P_LOW                  NO                   60000
SYS_P69863             YES                  70000
SYS_P69864             YES                  80000
SYS_P69865             YES                  90000
SYS_P69866             YES                  100000
SYS_P69867             YES                  110000

6 rows selected.

Is this too good to be true ? Of course it is, but you may have to pause for a moment to think why. When you merge two partitions Oracle copies the contents of the two segments into a new segment – always; even if one of the two segments is empty. When you do a “split partition” Oracle runs a check to see if the split would leave all the data in a single segment and if it would then Oracle doesn’t do any copying but simply plays clever games in the data dictionary – unfortunately Oracle doesn’t use the same sort of trick to optimise a merge.

So the merge partition mechanism carries less risk than the “interval/range/interval”, but you either pay the cost of the merge or you carefully code the mechanism so that the bottom two partitions are always empty when you merge: for example you might always leave the bottom (range) partition empty and use your scheduled code to truncate (or exchange out) the lowest interval partition, then do the merge.
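Using the object names from the example above, that scheduled housekeeping step might look something like this (a sketch, assuming the data in the lowest interval partition has already been archived or exchanged out):

SQL> alter table t1 truncate partition for (55000);
SQL> alter table t1 merge partitions for (45000), for (55000) into partition p_low;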

The good news

When you upgrade to 12.2 you can drop the lowest partition – and Oracle will simply turn the lowest interval partition currently in existence into a range partition. (That may be a bit of a nuisance if there’s a gap between the range partition and the current lowest interval partition.)

The Bad News

It doesn’t really matter which strategy you use to deal with this problem (even if you’ve upgraded to 12.2) – you still pay one other penalty for both mechanisms. And that’s the bit which re-introduces a problem that last existed in 9i.

Ask yourself “How does Oracle know which interval a partition is for and what the limit is on the partitioning key ?” Then look at the data dictionary, or maybe build a very simple model and trace what happens when you use either of the methods above – but in your model create a significant number of partitions first. I’m going to take the data dictionary method – starting from the point where I’ve created and populated the table. Again this is cut-n-paste, and do note that I switch to the sys account after creating the table:

SQL> select object_id, object_name, subobject_name from user_objects;

---------- -------------------- ----------------------
    185164 T1
    185165 T1                   P10000
    185166 T1                   P20000
    185167 T1                   P30000
    185168 T1                   P40000
    185169 T1                   P50000
    185170 T1                   SYS_P69868
    185171 T1                   SYS_P69869
    185172 T1                   SYS_P69870
    185173 T1                   SYS_P69871
    185174 T1                   SYS_P69872
    185175 T1                   SYS_P69873

12 rows selected.

SQL> connect / as sysdba

SQL> select obj#, dataobj#, part# from tabpart$ where bo# = 185164 order by part#;

      OBJ#   DATAOBJ#      PART#
---------- ---------- ----------
    185165     185165         10
    185166     185166         20
    185167     185167         30
    185168     185168         40
    185169     185169         50
    185170     185170 2147483648
    185171     185171 2147483649
    185172     185172 2147483650
    185173     185173 2147483651
    185174     185174 2147483652
    185175     185175 2147483653

11 rows selected.

I’ve queried user_objects to find the object_id of the table then used that as the “base object number” (bo#) to query tabpart$, which holds the table partition definitions. Note how there are 5 partitions where the partition number goes up 10 at a time, and 6 where it goes up one at a time. Prior to 10g (and interval partitions, of course) the stored partition number would increase in steps of 1 but if you wanted to do a split, merge or drop partition (and the last of the three was the most significant one) every single partition position above the split/merge/drop point would have to be renumbered, and that was done with single row updates to the data dictionary to keep the numbering intact. The steps of 10 were introduced in 10g to deal with the inherent performance problems – particularly the shared pool catastrophe that this could cause.

The steps of 1 for interval partitions allow Oracle to keep track (easily) of what high_value each partition represents, and the highest legal partition. Try inserting the value 1,000,000 into the table and re-run the query against tabpart$ and you’ll see Oracle adding part# = 2147483743. So what do you think is going to happen if you try to apply the two mechanisms ?

If you do the interval/range/interval switch every interval part# will be renumbered so as to follow the “increment by 10” pattern. If you drop partitions p10000 to p40000 nothing happens to the existing part# values until you get to the command to merge p50000 with the next partition up, and then you see this:

SQL> alter table test_user.t1 merge partitions for (45000), for (55000) into partition p_low;

Table altered.

SQL> select obj#, dataobj#, part# from tabpart$ where bo# = 185164 order by part#;

      OBJ#   DATAOBJ#      PART#
---------- ---------- ----------
    185177     185177         10
    185171     185171 2147483648
    185172     185172 2147483649
    185173     185173 2147483650
    185174     185174 2147483651
    185175     185175 2147483652
    185176     185176 2147483742

7 rows selected.

The newly merged partition is a new object, of course, so has a completely new obj# and dataobj#, and it’s been given the part# of 10 (the lowest value for a clean range-partitioned object). Every single interval partition has had its part# decreased by one. The lowest possible interval partition is always given the part# of 2147483648 (0x80000000) and the partition numbering increments by 1 from there onwards. (The numbering gets a little more subtle when you have composite partitioning but a similar approach takes place in tabcompart$).

Pause for thought – if you’re thinking of creating an interval partitioned table that could get close to a running level of 1 million partitions and you start to get rid of old partitions in any version of Oracle then each “drop/merge” partition will update about 1 million rows in the data dictionary – and that’s assuming you don’t have any local indexes that will need to be renumbered in the same way!

Here’s a critical part of the output from tkprof when I recreated the table with 1,000,000 rows – which means 101 partitions – and created a local index on it, before dropping the first 4 partitions and then enabled tracing just before merging the bottom interval partition with the anchor range partition.

update indpart$ set dataobj# = :1, part# = :2, flags = :3, ts# = :4, file# =
  :5, block# = :6, pctfree$ = :7, initrans = :8, maxtrans = :9, analyzetime =
  :10, samplesize = :11, rowcnt = :12, blevel = :13, leafcnt = :14, distkey =
  :15, lblkkey = :16, dblkkey = :17, clufac = :18, pctthres$ = :19
 where obj# = :20

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse       94      0.00       0.00          0          0          0           0
Execute     94      0.00       0.01          0         94        480          94
Fetch        0      0.00       0.00          0          0          0           0
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total      188      0.01       0.01          0         94        480          94

update tabpart$ set dataobj# = :1, part# = :2, ts# = :3, file# = :4, block# =
  :5, pctfree$ = :6, pctused$ = :7, initrans = :8, maxtrans = :9, flags = :10,
   analyzetime = :11, samplesize = :12, rowcnt = :13, blkcnt = :14, empcnt =
  :15, avgspc = :16, chncnt = :17, avgrln = :18
 where obj# = :19

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse       94      0.00       0.00          0          0          0           0
Execute     94      0.00       0.00          0        188        489          94
Fetch        0      0.00       0.00          0          0          0           0
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total      188      0.00       0.00          0        188        489          94

That’s not a lot of work for my little example with less than 100 partitions – but when you’ve got a million of them, with a handful of indexes, and the partitions have been created over time that’s going to turn into a lot of work that’s going to disrupt the shared pool for a long time, generate a lot of redo, and do a lot of disk reads and writes.

So be cautious with interval partitioning – even in 12.2 (and 18.1, possibly) the ease of use may disappear if you realise too late that you’re going to get into a cycle of partition maintenance.

Footnote for composite partitioning – the limits of 2^20-1 segments (hence subpartitions) still applies, but the necessary update is relevant only at the partition level, not at the subpartition level. The objects updated are tabcompart$ and indcompart$.

Update (included for ironic effect)

The day I posted this note my “Oracle Support Hot Topics” email arrived with a report of the following bug:


This was reported for an earlier release and fixed in 12.2. The rediscovery information is:

ORA-14024 during copy_table_dep when the interim table is interval partitioned.

The problem arises if you change a table from simple range partitioned to range with interval – so it might be relevant if you have a strategy of doing the interval/range/interval trick.



Week start and end date for a given date

Tom Kyte - Thu, 2018-03-15 02:06
Hi Tom, I want the sql to get the start of the week date and end of the week date for a given date and my week starts from saturday and ends with friday. Example, if the given date is 03/mar/2018 then start date is - 03/mar/2018 and end date...
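Not the AskTom answer itself, just a minimal NLS-independent sketch of one way to do it: trunc(d,'IW') returns the Monday of the ISO week containing d, so shifting by two days gives a Saturday-to-Friday week.

select trunc(given_date + 2, 'IW') - 2 as week_start,   -- Saturday on or before the given date
       trunc(given_date + 2, 'IW') + 4 as week_end      -- the Friday that follows it
from  (select date '2018-03-03' as given_date from dual);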
Categories: DBA Blogs

Converting Long Data Type to CLOB got ORA-06502

Tom Kyte - Thu, 2018-03-15 02:06
Hi, I am running following pl/sql code to extract image data from the table. I am able to extract data which has length less than 32768 how ever I get <b>ORA-06502: PL/SQL: numeric or value error: Bulk Bind: Truncated Bind</b> when the length is...
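One well-known way around the 32K PL/SQL limit for LONG values is to let SQL do the conversion with TO_LOB in an INSERT ... SELECT (table and column names below are hypothetical):

-- TO_LOB converts LONG to CLOB (or LONG RAW to BLOB) but is only valid inside INSERT ... SELECT / CTAS
insert into images_new (id, img_data)
select id, to_lob(img_long)
from   images_old;

commit;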
Categories: DBA Blogs

New Update Available for Integrated SOA Gateway in EBS 12.2

Steven Chan - Wed, 2018-03-14 13:46

Oracle E-Business Suite Integrated SOA Gateway (ISG) provides an infrastructure to provide, consume, and administer Oracle E-Business Suite web services. It service-enables public integration interfaces registered in Oracle Integration Repository. These interfaces can be deployed as SOAP and/or REST-based web services. ISG also provides a service invocation framework to consume external SOAP based web services.

We have just released a new Integrated SOA Gateway update for EBS R12.2. It includes fixes for issues in the PostClone script used for SOAP services.

PostClone script for SOAP services: This script is used to reinstate the SOAP services in a cloned EBS environment. If you have cloned an environment configured with ISG SOAP services, then you need to:

  1. Re-configure ISG for SOAP services
  2. Execute ISG PostClone script

For details, see Section 11 in: 

You can download the latest ISG consolidated patch here:

Patch 27498317:R12.OWF.C - ISG Consolidated Patch (18_1_3)

This is a cumulative patch update that includes all previously released ISG updates for EBS R12.2.  It can be applied on EBS R12.2.3 and higher releases. Check the Readme for this patch for information on prerequisite patches.



Categories: APPS Blogs

No Asynchronous I/O When Using Shared Server (Also Known As MTS)

Randolf Geist - Wed, 2018-03-14 10:55
I've recently had a case at a client where it was questioned why a particular application was seemingly not making full use of the available I/O capabilities - in comparison to other databases / applications using similar storage.

Basically it ended up in a kind of finger pointing between the application vendor and the IT DBA / storage admins, one side saying that the infrastructure used offers insufficient I/O capabilities (since the most important application tasks were dominated by I/O waits in the database), and the other side saying that the application doesn't make use of the I/O capabilities offered - compared to other databases / applications that showed a significantly higher IOPS rate and/or I/O throughput using the same kind of storage.

At the end it turned out that in some way both sides were right, because the application made use of a somewhat unusual configuration for batch processing: Due to very slow Dedicated Server connection establishment that slowed down some other, more interactive part of the application, the database and client connection strings were configured to use Shared Server connections by default for all parts of the application. This successfully solved the connection establishment problem but obviously introduced another, so far not recognized problem.

Using my recently published I/O benchmark we performed some tests to measure the maximum IOPS and throughput of that specific database independently from the application, and the results were surprising, because in particular the test variations that were supposed to perform asynchronous physical read single block ("db file parallel read") and multi block I/O ("direct path read") didn't do so, but showed synchronous I/O only ("db file sequential read" / "db file scattered read").

After some investigations it became obvious the reason for this behaviour was the usage of the Shared Server architecture - simply switching to Dedicated Server sessions showed the expected behaviour and also a significantly higher maximum IOPS rate and I/O throughput at the same level of concurrency.
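A quick way to verify which connection type a session actually got (a generic check, not part of the client's benchmark):

-- SERVER shows DEDICATED or SHARED; an idle shared server session shows NONE
-- because it is not attached to any shared server process at that moment
select sid, username, server, program
from   v$session
where  username is not null;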

It's very easy to reproduce this, using for example my read IOPS and throughput benchmark scripts and performing the benchmark using either Shared or Dedicated Server architecture in asynchronous I/O mode.

For example, this is what I get running this on my laptop using Dedicated Server and testing maximum read I/O throughput in asynchronous I/O mode (which should result in "direct path read" operations bypassing the buffer cache):

If I repeat exactly the same test (same settings, number of processes, size of objects etc.) using Shared Server architecture, this is what I get:

This is particularly interesting - no direct path reads although the benchmark sessions set in this case "_serial_direct_read" = 'always'.

In principle the same can be seen when running the maximum read IOPS benchmark, here is the expected result when using Dedicated Servers in asynchronous I/O mode:

And again, this is what I get when running the same test using Shared Servers:

Again no sign of asynchronous I/O ("db file parallel read") - and the achieved IOPS rate is significantly lower, which is exactly what the client experienced, much more DB time waiting for I/O and less time spent on CPU than expected. Depending on the particular storage configuration and latency the difference when using the same number of processes can be even more significant, I've seen up to factor 20 difference in achieved IOPS rate, and factor 3 to 4 is quite typical. Of course as usual how relevant this really is all depends on the actual query, execution plan and data pattern and where most of the time is spent.

Also, it is important to point out that all this might not be too relevant to most configurations, since Shared Servers aren't used that much nowadays in the times of application servers / connection pools dominating typical environments. Also it's probably rather uncommon to use Shared Servers for batch processing tasks like this client did.

Although Shared Servers were originally designed for typical OLTP applications having many open sessions being idle most of the time and performing mostly very simple and straightforward operations (like looking up a few rows via an efficient index access path) it's still interesting to see this limitation that can be quite significant depending on the I/O patterns used. So far I've not seen this documented anywhere, also on MyOracleSupport I couldn't find any matching notes describing the behaviour, and unfortunately no Service Request was opened for the issue yet.

When using Parallel Execution, by the way, which is also very unlikely when using Shared Servers but perfectly possible (the Shared Server session then acts as Query Coordinator), the Parallel Servers can make use of asynchronous I/O - so the limitation only seems to apply to the Shared Server session itself. I can think of some reasons why this limitation could be explained from an implementation point of view the way Shared Servers work, but this is of course only speculation.

All currently relevant versions show the same behaviour in that regard, by the way. I've reproduced this on Windows as well as on Linux.

Application Container Cloud Service (ACCS): Using the Application Cache from a Spring Boot application

Amis Blog - Wed, 2018-03-14 10:24

Spring Boot allows you to quickly develop microservices. Application Container Cloud Service (ACCS) allows you to easily host Spring Boot applications. Oracle provides an Application Cache based on Coherence which you can use from applications deployed to ACCS. In order to use the Application Cache from Spring Boot, Oracle provides an open source Java SDK. In this blog post I’ll give an example on how you can use the Application Cache from Spring Boot using this SDK. You can find the sample code here.

Using the Application Cache Java SDK Create an Application Cache

You can use a web-interface to easily create a new instance of the Application Cache. A single instance can contain multiple caches. A single application can use multiple caches but only a single cache instance. Multiple applications can use the same cache instance and caches. Mind that the application and the application cache are deployed in the same region in order to allow connectivity. Also do not use the ‘-‘ character in your cache name, since the LBaaS configuration will fail.

Use the Java SDK

Spring Boot applications commonly use an architecture which defines abstraction layers. External resources are exposed through a controller. The controller uses services. These services provide operations to execute specific tasks. The services use repositories for their connectivity / data access objects. Entities are the POJO’s which are exchanged/persisted and exposed for example as REST in a controller. In order to connect to the cache, the repository seems like a good location. Which repository to use (a persistent back-end like a database or for example the application cache repository) can be handled by the service. Per operation this can differ. Get operations for example might directly use the cache repository (which could use other sources if it can’t find its data) while you might want to do Put operations in both the persistent backend as well as in the cache. See for an example here.

In order to gain access to the cache, first a session needs to be established. The session can be obtained from a session provider. The session provider can be a local session provider or a remote session provider. The local session provider can be used for local development. It can be created with an expiry which indicates the validity period of items in the cache. When developing / testing, you might try setting this to ‘never expires’, since otherwise you might not be able to find entries which you expect to be there. I have not looked further into this issue or created a service request for it. Nor do I know if this is only an issue with the local session provider. See for sample code here or here.

When creating a session, you also need to specify the protocol to use. When using the Java SDK, you can (at the moment) choose from GRPC and REST. GRPC might be more challenging to implement without an SDK in for example Node.js code, but I have not tried this. I have not compared the performance of the 2 protocols. Another difference is that the application uses different ports and URLs to connect to the cache. You can see how to determine the correct URL / protocol from ACCS environment variables here.

The ACCS Application Cache Java SDK allows you to add a Loader and a Serializer class when creating a Cache object. The Loader class is invoked when a value cannot be found in the cache. This allows you to fetch objects which are not in the cache. The Serializer is required so the object can be transferred via REST or GRPC. You might do something like below.


Mind that when using Spring Boot you do not want to create instances of objects by directly doing something like: Class bla = new Class(). You want to let Spring handle this by using the @Autowired annotation.

Do mind though that the @Autowired annotation assigns instances to variables after the constructor of the instance is executed. If you want to use the @Autowired variables after your constructor but before executing other methods, you should put them in a @PostConstruct annotated method. See also here. See for a concrete implemented sample here.
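As a generic Spring illustration of that wiring point (hypothetical class and property names, not the author's sample code):

import javax.annotation.PostConstruct;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Component;

@Component
public class CacheRepository {

    @Autowired
    private CacheProperties cacheProperties; // injected only after the constructor has run

    public CacheRepository() {
        // cacheProperties is still null here, so do not touch @Autowired fields in the constructor
    }

    @PostConstruct
    public void init() {
        // safe: all @Autowired fields are populated before @PostConstruct methods are called
        connectToCache(cacheProperties.getCacheUrl());
    }

    private void connectToCache(String url) {
        // establish the Application Cache session here (omitted)
    }
}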


The Application cache can be restarted at certain times (e.g. maintenance like patching, scaling) and there can be connectivity issues due to other reasons. In order to deal with that it is a good practice to make the connection handling more robust by implementing retries. See for example here.

Deploy a Spring Boot application to ACCS Create a deployable

In order to deploy an application to ACCS, you need to create a ZIP file in a specific format. In this ZIP file there should at least be a manifest.json file which describes (amongst other things) how to start the application. You can read more here. If you have environment specific properties, binding information (such as which cache to use) and environment variables, you can create a deployment.json file. In addition to those metadata files, there of course needs to be the application itself. In case of Spring Boot, this is a large JAR file which contains all dependencies. You can create this file with the spring-boot-maven-plugin. The ZIP itself is most easily composed with the maven-assembly-plugin.

Deploy to ACCS

There are 2 major ways (next to directly using the API’s with for example CURL) in which you can deploy to ACCS. You can do this manually or use the Developer Cloud Service. The process to do this from Developer Cloud Service is described here. This is quicker (allows redeployment on Git commit for example) and more flexible. Below globally describes the manual procedure. An important thing to mind is that if you deploy the same application under the same name several times, you might encounter issues with the application not being replaced with a new version. In this case you can do 2 things. Deploy under a different name every time. The name of the application however is reflected in the URL and this could cause issues with users of the application. Another way is to remove files from the Storage Cloud Service before redeployment so you are sure the deployable is the most recent version which ends up in ACCS.


Create a new Java SE application.


Upload the previously created ZIP file


Introducing Application Cache Client Java SDK for Oracle Cloud

Caching with Oracle Application Container Cloud

Complete working sample Spring Boot on ACCS with Application cache (as soon as a SR is resolved)

A sample of using the Application Cache Java SDK. Application is Jersey based

The post Application Container Cloud Service (ACCS): Using the Application Cache from a Spring Boot application appeared first on AMIS Oracle and Java Blog.

Oracle Linux 7 UEK5 preview 4.14.26

Wim Coekaerts - Wed, 2018-03-14 10:13

We just updated the UEK5 kernel preview to 4.14.26-1. The latest version is based on upstream stable 4.14.26 and can be found in our UEK5 preview channel.

The preview channel also has a number of other packages in it: an updated dtrace, updated daxctl and ndctl tools for persistent-memory.

Another thing I wanted to point out. We have had the source tree for UEK on oss.oracle.com for a long time in a git repo. We've always made sure that the changes are public, with full git history both upstream and our own patches/bugfixes on top, so it's very easy for anyone to see what the source is. Not a tarball with just the end-result source code, not a web-based-only thing where it's tedious to see what changed, but standard git with all source and all commits. In order to make that a bit easier, we moved this to github. Nothing different on the code side, but this gives a nicer, more consolidated, cleaner view.


We use the exact same git repo/tree for Oracle Linux for x64 and for ARM64. This source tree also includes dtrace, etc...

PL SQL Type as bind variable in Dynamic FORALL

Tom Kyte - Wed, 2018-03-14 07:46
SQL Version: <code>Oracle Database 12c Enterprise Edition Release - 64bit Production 0 PL/SQL Release - Production 0 CORE Production 0 TNS for Linux: Version - Production 0 NLSRTL Version - ...
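As a minimal sketch of the pattern the title refers to (using the demo SCOTT.EMP table), binding individual collection elements works fine in a dynamic FORALL; binding a PL/SQL-only record or collection type as a whole is more restricted and version-dependent:

declare
    type num_tab is table of number;
    l_empnos num_tab := num_tab(7369, 7499, 7521);
begin
    -- each element is bound as a scalar, which dynamic SQL accepts
    forall i in 1 .. l_empnos.count
        execute immediate
            'update emp set sal = sal * 1.1 where empno = :1'
            using l_empnos(i);
    commit;
end;
/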
Categories: DBA Blogs

Oracle to Break Down the Barriers to Creating Epic Customer Experiences at Modern Customer Experience 2018

Oracle Press Releases - Wed, 2018-03-14 07:00
Press Release
Oracle to Break Down the Barriers to Creating Epic Customer Experiences at Modern Customer Experience 2018 Dedicated marketing, commerce, sales and customer service sessions to help attendees meet skyrocketing expectations and advance careers

REDWOOD SHORES, Calif.—Mar 14, 2018

To help organizations create epic experiences that deliver predictable and tangible business results, Oracle is hosting Modern Customer Experience 2018 in Chicago, Illinois, April 10-12. Held at McCormick Place hotel convention center, Modern Customer Experience 2018 will bring together thousands of marketing, commerce, sales and customer service professionals to share best practices and insights on meeting the expectations of today’s empowered customers.

Despite an increasing focus on the customer, expectations continue to outpace the experiences many organizations can offer. For customer experience professionals this can seem like a no win situation, particularly when new technologies, rapidly shifting behavioral norms and evolving industry regulations continue to change the game. Modern Customer Experience 2018 will help attendees navigate this complexity, advance their careers and take advantage of new technologies such as AI, blockchain and the Internet of Things to create legendary, irresistible experiences.

Modern Customer Experience 2018 will bring together CX practitioners and industry visionaries to share the latest insights on customer experience management. The event will feature:

  • Engaging Keynotes: Keynote speakers include Comcast Executive Vice President, Chief Customer Experience Office Charlie Herrin, Fanatics Vice President of Fan Experience Carolyne Matseshe-Crawford, Oracle Marketing Cloud Senior Vice President Shashi Seth, Bestselling Author Cheryl Strayed, Duel Founder Paul Archer, YouTube Personality Casey Neistat and Oracle Vice President, Demand Generation and Marketing EMEA/APAC Amanda Jobbins.
  • Inspirational Customer Experience Stories: The conference will feature a range of guest speakers including Convince and Convert President Jay Baer, CXOTalk Industry Analyst Michael Krigsman, Airborn Interconnect Director of Digital Transformation Rommel Bayola, Zenimax Media Director of Customer Support Boyd Beasley, Five9 President Dan Burkland, Chicago Bulls Senior Analyst Jenna Gales and many more.
  • Powerful and Practical Insights: Attendees will have access to pre-conference education sessions and nearly 300 sessions at the conference. This will include marketing, commerce, sales and service tracks, which provide immersive learning opportunities and include hands-on workshops, demos and case studies.
  • Lessons from Legends: More than 30 awards will recognize legendary CX leaders across marketing, commerce, sales and customer service. Our CX awards, which include our 12th annual Markies awards, will be hosted on April 10 at 8:00 p.m., and will spotlight the legendary efforts of those who push the boundaries of innovation to create closer connections with their customers.
  • Lots of Fun: Oracle’s customer appreciation event, CX Fest Concert, on Wednesday, April 11 at 7:30 p.m., will offer attendees a night of great music from Weezer and more opportunities to connect with peers.

“Customer experience professionals are facing a perfect storm of technological, business and societal change that is shifting expectations and reshaping the very nature of marketing, sales, service and commerce,” said Des Cahill, vice president and head CX Evangelist, Oracle. “Technology and data have been presented as the panacea, but in reality, it’s not a case of simply automating experiences with technology. Instead, it’s about using technology to remove the barriers that prevent people from delivering epic experiences. At Modern Customer Experience 2018, we have hundreds of sessions planned that will help attendees navigate all the complexity and uncertainty in today’s business environment to advance their careers. It’s going to be a great event.”

Register to attend Modern Customer Experience 2018 now through April 9 for $1,395. Use the code CXPR to receive $500 off of the onsite rate.

Contact Info
Kimberly Guillon
About Oracle

The Oracle Cloud offers complete SaaS application suites for ERP, HCM and CX, plus best-in-class database Platform as a Service (PaaS) and Infrastructure as a Service (IaaS) from data centers throughout the Americas, Europe and Asia. For more information about Oracle (NYSE:ORCL), please visit us at www.oracle.com.


Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

Talk to a Press Contact

Kimberly Guillon

  • 209.601.9152


Subscribe to Oracle FAQ aggregator