Feed aggregator

ODA Some curious password management rules

Yann Neuhaus - Thu, 2018-02-22 09:33

While deploying an ODA based on the DCS stack (odacli), it is mandatory to provide a “master” password at appliance creation. The web GUI provides a small tooltip for this, which describes the rules applied to password management. However, it looks like there is some flexibility in those rules. Let’s try to check this out with some basic tests.

First of all here are the rules as provided by the ODA interface:


So basically it has to start with an alpha character and be at least 9 characters long. My first reaction was that 9 characters is not too bad, even if 10 would be a better minimum. Unfortunately it does not request any additional complexity such as mixing uppercase, lowercase and numbers… My second reaction, like that of most IT guys, was to try not to respect these rules and see what happens :-P

I started really basically by using a “highly secure” password: test


Perfect, the ODA reacted as expected and told me I should read the rules once again. The next step is to try something a bit more complicated: manager

..and don’t tell me you never used it in any Oracle environment ;-)


Fine, manager is still not 9 characters long (7, indeed) and the installer is still complaining. For now, everything is okay.
The next step was to try a password respecting the 9-character rule: welcome123


Still a faultless reaction from the ODA!

Then I had the strange idea to test the historical ODA password: welcome1


Oops! The password starts with an alpha character, fine, but if I’m right welcome1 is only 8 characters long :-?
If you don’t believe me, try to count the dots in the picture above… and I swear I didn’t use Gimp to “adjust” it ;-)

Finally just to be sure I tried another password of 8 characters: welcome2


Ah, that looks better. This time the installer sees that the password is not long enough and shows a warning.

…but would it mean that welcome1 is hard-coded somewhere??


No matter, let’s continue and run the appliance creation with welcome123. Once done, I try to log in over SSH to my brand new ODA using my new master password


it doesn’t work! 8-O

I tried multiple combinations: welcome123, welcome1, Welcome123 and many more. Unfortunately none of them worked.
At this point there are only 2 solutions to connect back to your ODA:

  1. There is still a shell connected as root to the ODA, in which case the root password can easily be changed using passwd (see the sketch below)
  2. No session is open to the ODA anymore, in which case you have to open the remote console and reboot the ODA in Single User mode :-(
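
If you are still in the first situation, recovery is a one-liner per account. A minimal sketch, assuming the standard root, grid and oracle OS users of the DCS stack:

# from the shell that is still connected as root
passwd root      # set a new, known root password
passwd grid      # optionally reset the grid infrastructure owner as well
passwd oracle    # ...and the database software owner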

As the master password should be set for the root, grid and oracle users, I tried the password for grid and oracle too:


Same thing there: the master password provided during the appliance creation hasn’t been set properly.

Hope it helps!


Cet article ODA Some curious password management rules est apparu en premier sur Blog dbi services.

Oracle Communications Network Charging and Control Enables Mobile Service Providers to Differentiate and Monetize Their Brand

Oracle Press Releases - Thu, 2018-02-22 07:00
Press Release
Oracle Communications Network Charging and Control Enables Mobile Service Providers to Differentiate and Monetize Their Brand Delivers agile online charging for high-growth mobile, cloud and IoT services

Redwood Shores, Calif.—Feb 22, 2018

Oracle today announced the latest version of Oracle Communications Network Charging and Control (NCC), a key product in Oracle’s complete digital monetization portfolio which addresses communications, cloud and IoT services. A modern, standards-based online charging system for prepaid dominant mobile markets, Oracle Communications NCC expands the reach of Oracle’s digital monetization portfolio to help service providers, mobile virtual network enablers (MVNEs) and operators (MVNOs) in high growth markets, introduce innovative and interactive mobile offers to rapidly and cost effectively monetize their brands. Key capabilities introduced in this new release include 3GPP advanced data charging and policy integration together with support for contemporary, cost effective deployments on Oracle Linux.

The pre-paid market for consumer mobile broadband and Intelligent-Network (IN) voice services continues to grow globally. Ovum forecasts1 that the market for pre-paid mobile voice and data subscriptions will grow from 5.5B subscriptions in 2017 to 6.0B subscriptions in 2022 with highest net growth in developing markets. In addition, the GSMA estimates there to be almost 1,000 MVNOs globally with more than 250 mobile network operator (MNO) sub-brands, all seeking growth through brand differentiation.

For such operators, Oracle Communications NCC provides advanced mobile broadband and IN monetization, intuitive graphical service logic design and complete prepaid business management in a single solution. It supports flexible recharge and targeted real-time promotions, complete and secure voucher lifecycle management, and a large set of pre-built and configurable service templates for the rapid launch of new innovative offers. This is critical as competitive pressures and customer expectations mount, requiring service providers to rethink their services and how they can increase brand engagement and loyalty. With this evolution in services, it’s imperative that underlying charging systems evolve to meet these changing business requirements—across digital, cloud and IoT services.

“ASPIDER-NGI builds, supports and operates innovative MVNO and IoT platforms for Operator, Manufacturer and Enterprise sectors,” said David Traynor, Chief Marketing Officer, ASPIDER-NGI. “We use Oracle Communications Network Charging and Control as part of our MVNE infrastructure, allowing our clients to quickly deploy new mobile data and intelligent network services. Our clients demand the controls to deliver competitive offerings to specific customer segments and to support their own IoT business models. This release provides us the agility to accelerate our pace of innovation with an online charging platform that supports the latest 3GPP technologies.”

Oracle Communications NCC aligns with 3GPP Release 14 Policy and Charging Control (PCC) standards, including Diameter Gy data services charging, and supports comprehensive SS7 Service Control (CAP, INAP, and MAP) for IN services. In addition, it supports integration with Policy and Charging Rules Function (PCRF) deployments, including Oracle Communications Policy Management, via the Diameter Sy interface. Such integration provides support for a wide range of value added scenarios from on-demand bandwidth purchases for video or data intensive services to fair usage policies that gracefully reduce mobile bandwidth as threshold quotas are met to ensure an optimal customer experience. Oracle Communications NCC may be deployed in a virtualized or bare metal configuration on Oracle Linux using the Oracle Database to provide a highly cost effective, performant and scalable online charging solution.

“This major release of Oracle Communications Network Charging and Control reiterates Oracle’s continued commitment to provide a complete and cost effective online charging and business management platform for the pre-paid consumer mobile market,” said Doug Suriano, senior vice president and general manager, Oracle Communications. “With new features including support for policy integration and deployment flexibility on a contemporary, open platform, we are offering our customers a modern alternative to traditional IN platforms, enabling them to differentiate and grow their brands, and in turn, delight their customers.”

In addition to Oracle Communications Network Charging and Control, Oracle’s digital monetization portfolio also includes Oracle Communications Billing and Revenue Management and Oracle Monetization Cloud, which collectively support the rapid introduction and monetization of subscription and consumption based offerings.

Oracle Communications provides the integrated communications and cloud solutions that enable users to accelerate their digital transformation journey—from customer experience to digital business to network evolution. See Oracle Communications NCC in action at Mobile World Congress, Barcelona, February 26–March 1, 2018, Hall 3, Booth 3B30.

1. Ovum, TMT Intelligence, Informa, Active Users, Prepaid and Postpaid Mobile Subscriptions, February 09, 2018

2. GSMA Intelligence—Segmenting the global MVNO footprint—https://www.gsmaintelligence.com/research/2015/03/infographic-segmenting-the-global-mvno-footprint/482/

Contact Info
Katie Barron
Kristin Reeves
Blanc & Otus
About Oracle

The Oracle Cloud offers complete SaaS application suites for ERP, HCM and CX, plus best-in-class database Platform as a Service (PaaS) and Infrastructure as a Service (IaaS) from data centers throughout the Americas, Europe and Asia. For more information about Oracle (NYSE:ORCL), please visit us at www.oracle.com.


Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

Talk to a Press Contact

Katie Barron

  • +1.202.904.1138

Kristin Reeves

  • +1.925.787.6744

Migrate Oracle Database(s) and ASM diskgroups from VMWARE to Oracle VM

Yann Neuhaus - Thu, 2018-02-22 06:45

This is a step-by-step demonstration of how to migrate ASM disk groups from one cluster to another. It may be used with or without virtualization, and can be combined with storage-layer snapshots for fast environment provisioning.

Step 01 – Shutdown source database(s) on VMWARE servers

Shut down all databases hosted in the targeted disk groups for which you want consistency. Then unmount the disk groups.

$ORACLE_HOME/bin/srvctl stop database -db cdb001

$ORACLE_HOME/bin/asmcmd umount FRA

$ORACLE_HOME/bin/asmcmd umount DATA


Step 02 – Re-route LUNs from the storage array to the new servers

Create a snapshot and make the snapshot LUNs visible to the Oracle Virtual Server (OVS) according to the third-party storage technology.

Step 03 – Add LUNs to DomUs (VMs)

Then we refresh the storage layer from OVM Manager to present the LUNs to each OVS:

OVM> refresh storagearray name=STORAGE_ARRAY_01

Step 04 – Then, tell OVM Manager to add LUNs to the VMs in which we want our databases to be migrated

create VmDiskMapping slot=20 physicalDisk=sa01_clus01_asm_data01 name=sa01_clus01_asm_data01 on Vm name=rac001
create VmDiskMapping slot=21 physicalDisk=sa01_clus01_asm_data02 name=sa01_clus01_asm_data02 on Vm name=rac001
create VmDiskMapping slot=22 physicalDisk=sa01_clus01_asm_data03 name=sa01_clus01_asm_data03 on Vm name=rac001
create VmDiskMapping slot=23 physicalDisk=sa01_clus01_asm_data04 name=sa01_clus01_asm_data04 on Vm name=rac001
create VmDiskMapping slot=24 physicalDisk=sa01_clus01_asm_data05 name=sa01_clus01_asm_data05 on Vm name=rac001
create VmDiskMapping slot=25 physicalDisk=sa01_clus01_asm_data06 name=sa01_clus01_asm_data06 on Vm name=rac001
create VmDiskMapping slot=26 physicalDisk=sa01_clus01_asm_reco01 name=sa01_clus01_asm_reco01 on Vm name=rac001
create VmDiskMapping slot=27 physicalDisk=sa01_clus01_asm_reco02 name=sa01_clus01_asm_reco02 on Vm name=rac001

create VmDiskMapping slot=20 physicalDisk=sa01_clus01_asm_data01 name=sa01_clus01_asm_data01 on Vm name=rac002
create VmDiskMapping slot=21 physicalDisk=sa01_clus01_asm_data02 name=sa01_clus01_asm_data02 on Vm name=rac002
create VmDiskMapping slot=22 physicalDisk=sa01_clus01_asm_data03 name=sa01_clus01_asm_data03 on Vm name=rac002
create VmDiskMapping slot=23 physicalDisk=sa01_clus01_asm_data04 name=sa01_clus01_asm_data04 on Vm name=rac002
create VmDiskMapping slot=24 physicalDisk=sa01_clus01_asm_data05 name=sa01_clus01_asm_data05 on Vm name=rac002
create VmDiskMapping slot=25 physicalDisk=sa01_clus01_asm_data06 name=sa01_clus01_asm_data06 on Vm name=rac002
create VmDiskMapping slot=26 physicalDisk=sa01_clus01_asm_reco01 name=sa01_clus01_asm_reco01 on Vm name=rac002
create VmDiskMapping slot=27 physicalDisk=sa01_clus01_asm_reco02 name=sa01_clus01_asm_reco02 on Vm name=rac002

At this stage we have all the LUNs of both disk groups, DATA and FRA, available on both nodes of the cluster.

Step 05 – Migrate disks into AFD

We can rename the disk groups if required, or if a disk group with the same name already exists:

renamedg phase=both dgname=DATA newdgname=DATAMIG verbose=true asm_diskstring='/dev/xvdr1','/dev/xvds1','/dev/xvdt1','/dev/xvdu1','/dev/xvdv1','/dev/xvdw1'
renamedg phase=both dgname=FRA  newdgname=FRAMIG  verbose=true asm_diskstring='/dev/xvdx1','/dev/xvdy1'


Then we migrate the disks into the AFD configuration:

$ORACLE_HOME/bin/asmcmd afd_label DATAMIG /dev/xvdr1 --migrate
$ORACLE_HOME/bin/asmcmd afd_label DATAMIG /dev/xvds1 --migrate
$ORACLE_HOME/bin/asmcmd afd_label DATAMIG /dev/xvdt1 --migrate
$ORACLE_HOME/bin/asmcmd afd_label DATAMIG /dev/xvdu1 --migrate
$ORACLE_HOME/bin/asmcmd afd_label DATAMIG /dev/xvdv1 --migrate
$ORACLE_HOME/bin/asmcmd afd_label DATAMIG /dev/xvdw1 --migrate
$ORACLE_HOME/bin/asmcmd afd_label FRAMIG  /dev/xvdx1 --migrate
$ORACLE_HOME/bin/asmcmd afd_label FRAMIG  /dev/xvdy1 --migrate
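
As a quick sanity check – assuming the ASM Filter Driver is already configured on the target cluster – the AFD state and the freshly labelled disks can be listed before mounting anything:

$ORACLE_HOME/bin/asmcmd afd_state
$ORACLE_HOME/bin/asmcmd afd_lsdsk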


Step 06 – Mount disk groups on the new cluster and add database(s) in the cluster

$ORACLE_HOME/bin/asmcmd mount DATAMIG
$ORACLE_HOME/bin/asmcmd mount FRAMIG


Then add the database(s) to the cluster (repeat for each database):

$ORACLE_HOME/bin/srvctl add database -db cdb001 \
-oraclehome /u01/app/oracle/product/12.2.0/dbhome_1 \
-dbtype RAC \
-spfile +DATAMIG/CDB001/spfileCDB001.ora


Step 07 – Start up the database(s)

In this case we renamed the disk groups, so we need to modify the file locations and some parameter values:

create pfile='/tmp/initcdb001.ora' from spfile='+DATAMIG/<spfile_path>' ;
-- modify controlfiles, recovery area and any other relevant parameters
create spfile='+DATAMIG/CDB001/spfileCDB001.ora' from pfile='/tmp/initcdb001.ora' ;

ALTER DATABASE RENAME FILE '+DATA/<datafile_paths>' TO '+DATAMIG/<datafile_paths>';
ALTER DATABASE RENAME FILE '+DATA/<tempfile_paths>' TO '+DATAMIG/<tempfile_paths>';
ALTER DATABASE RENAME FILE '+DATA/<onlinelog_paths>' TO '+DATAMIG/<onlinelog_paths>';
ALTER DATABASE RENAME FILE '+FRA/<onlinelog_paths>'  TO '+FRAMIG/<onlinelog_paths>';
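
For reference, the parameters that typically need adjusting in the intermediate pfile look something like the following – the parameter names are standard, the values are only examples to adapt to your environment:

*.control_files='+DATAMIG/CDB001/<controlfile_path>','+FRAMIG/CDB001/<controlfile_path>'
*.db_create_file_dest='+DATAMIG'
*.db_create_online_log_dest_1='+DATAMIG'
*.db_recovery_file_dest='+FRAMIG'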


Then start the database

$ORACLE_HOME/bin/srvctl start database -db cdb001


This method can be used to easily migrate terabytes of data with almost no pain, keeping the downtime as short as possible. For a near zero-downtime migration, just add a GoldenGate replication on top of it.

The method described here is also perfectly applicable to ASM snapshots in order to duplicate huge volumes from one environment to another. This permits fast environment provisioning without the need to duplicate data over the network or to impact the storage layer with intensive I/O.

I hope it may help and please do not hesitate to contact us if you have any questions or require further information.




Cet article Migrate Oracle Database(s) and ASM diskgroups from VMWARE to Oracle VM est apparu en premier sur Blog dbi services.

Oracle Systems Partner Webcast-Series: SPARC value for Partners


We share our skills to maximize your revenue!
Categories: DBA Blogs

Set up continuous application build and delivery from Git to Kubernetes with Oracle Wercker

Amis Blog - Thu, 2018-02-22 03:22

It is nice – to push code to a branch in a Git repository and after a little while find the freshly built application up and running in the live environment. That is exactly what Wercker can do for me.


The Oracle + Wercker Cloud service allows me to define applications based on Git repositories. For each application, one or more workflows can be defined composed out of one or more pipelines (steps). A workflow can be triggered by a commit on a specific branch in the Git repository. A pipeline can do various things – including: build a Docker container from the sources as runtime for the application, push the Docker container to a container registry and deploy containers from this container registry to a Kubernetes cluster.

In this article, I will show the steps I went through to set up the end to end workflow for a Node JS application that I had developed and tested locally and then pushed to a repository on GitHub. This end to end workflow is triggered by any commit to the master branch. It builds the application runtime container, stores it and deploys it to a Kubernetes Cluster running on Oracle Cloud Infrastructure (the Container Engine Cloud).

The starting point is the application – eventmonitor-microservice-soaring-clouds-sequel – in the GitHub Repository at: https://github.com/lucasjellema/eventmonitor-microservice-soaring-clouds-sequel . I already have a free account on Wercker (http://www.wercker.com/)

The steps:

1. Add an Application to my Wercker account


2. Step through the Application Wizard:


Select GitHub (in my case).

Since I am logged in into Wercker using my GitHub account details, I get presented a list of all my repositories. I select the one that holds the code for the application I am adding:


Accept checking out the code without SSH key:


Step 4 presents the configuration information for the application. Press Create to complete the definition of the application.


The successful creation of the application is indicated.


3. Define the build steps in a wercker.yml

The build steps that Wercker executes are described by a wercker.yml file. This file is expected in the root of the source repository.

Wercker offers help with the creation of the build file. For a specific language it can generate the skeleton wercker.yml file that already refers to the base box (a language-specific runtime) and has the outline for the steps to build and push a container.


In my case, I have created the wercker.yml file manually and already included it in my source repo.

Here is part of that file.


Based on the box node8 (the base container image), it defines three building blocks: build, push-to-releases and deploy-to-oke. The first one is standard for Node applications and builds the application (well, it gathers all node modules). The second one takes the resulting container image from the first step and pushes it to the Wercker Container Registry with a tag composed of the branch name and the git commit id. The third one is a little more elaborate. It takes the container image from the Wercker registry and creates a Kubernetes deployment that is subsequently pushed to the Kubernetes cluster indicated by the environment variables KUBERNETES_MASTER and KUBERNETES_TOKEN.
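
A minimal sketch of what such a wercker.yml could look like; the step names come from the public Wercker step registry, but the exact parameters, image names and variables below are assumptions based on the description above, not the actual file from the repository:

box: node:8

build:
  steps:
    - npm-install                    # gather the node modules for the application

push-to-releases:
  steps:
    - internal/docker-push:          # push the image produced by the build pipeline
        username: $DOCKER_USERNAME   # assumption: registry credentials supplied via environment variables
        password: $DOCKER_PASSWORD
        repository: wcr.io/my-org/eventmonitor-microservice
        tag: $WERCKER_GIT_BRANCH-$WERCKER_GIT_COMMIT

deploy-to-oke:
  steps:
    - bash-template                  # substitute environment variables in the *.template files
    - kubectl:                       # apply the generated Kubernetes definitions
        server: $KUBERNETES_MASTER
        token: $KUBERNETES_TOKEN
        insecure-skip-tls-verify: true
        command: apply -f kubernetes-deployment.yml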

4. Define Pipelines and Workflow

In the Wercker console, I can define workflows for my application. These workflows consist of pipelines, organized in a specific sequence. Each pipeline is triggered by the completion of the previous one. The first pipeline is typically triggered by a commit event in the source repository.



Before I can compose the workflow I need, I first have to set up the Pipelines – corresponding to the build steps in the wercker.yml file in the application source repo. Click on Add new pipline.

Define the name for the new pipeline (anything you like) and the name of the YML Pipeline – this one has to correspond exactly with the name of the building block in the wercker.yml file.


Click on Create.

Next, create a pipeline for the “deploy-to-oke” step in the YML file.


Press Create to also create this pipeline.

With all three pipelines available, we can complete the workflow.


Click on the plus icon to add a step to the workflow. Associate this step with the pipeline push-docker-image-to-releases.

Next, add a step for the final pipeline:


This completes the workflow. If you now commit code to the master branch of the GitHub repo, the workflow will be triggered and will start to execute. The execution will fail, however: the wercker.yml file contains various references to variables that need to be defined for the application (or the workflow, or even the individual pipeline) before the workflow can succeed.


Crucial in making the deployment to Kubernetes successful are the files kubernetes-deployment.yml.template and ingress.yml.template. These files are used as template for the Kubernetes deployment and ingress definitions that are applied to Kubernetes. These files define important details such as:

  • Container Image in the Wercker Container Registry to create the Pod for
  • Port(s) to be exposed from each Pod
  • Environment variables to be published inside the Pod
  • URL path at which the application’s endpoints are accessed (in ingress.yml.template)
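
A minimal sketch of what kubernetes-deployment.yml.template might contain, covering the points above – all names, ports and variables are assumptions, the real template lives in the source repository:

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: eventmonitor-ms
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: eventmonitor-ms
    spec:
      containers:
      - name: eventmonitor-ms
        # image reference substituted by the deploy-to-oke pipeline
        image: wcr.io/my-org/eventmonitor-microservice:${WERCKER_GIT_BRANCH}-${WERCKER_GIT_COMMIT}
        ports:
        - containerPort: 8080        # port exposed from each Pod
        env:
        - name: DB_CONNECT_STRING    # example of an environment variable published inside the Pod
          value: "${DB_CONNECT_STRING}"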


5. Define environment variables

Click on the Environment tab. Set values for all the variables used in the wercker.yml file. Some of these define the Kubernetes environment to which deployment should take place; others provide values that are injected into the Kubernetes Pod and made available as environment variables to the application at run time.

6. Trigger a build of the application

At this point, the application is truly ready to be built and deployed. One way to trigger this is by committing something to the master branch. Another option is shown here:


The build is triggered. The output from each step is available in the console:

When the build is done, the console reflects the result.


Each pipeline can be clicked to inspect details for all individual steps, for example the deployment to Kubernetes:


Each step can be expanded for even more details:


In these details, we can find the values that have been injected for the environment variables.

7. Access the live application

This final step is not specific to Wercker. It is however the icing on the cake – to make actual use of the application.

The ingress definition for the application specifies:


This means that the application can be accessed at the endpoint for the K8S ingress at the path /eventmonitor-ms/app/.

Given the external IP address for the ingress service, I can now access the application:


Note: /health is one of the operations supported by the application.

8. Change the application and Roll out the Change – the ultimate proof

The real proof of this pipeline is in changing the application and having that change rolled out as a result of the Git commit.

I make a tiny change, commit the change to GitHub


and push the changes. Almost immediately, the workflow is triggered:


After a minute or so, the workflow is complete:

and the updated application is live on Kubernetes:


Check the live logs in the Pod:


And access the application again – now showing the updated version:


The post Set up continuous application build and delivery from Git to Kubernetes with Oracle Wercker appeared first on AMIS Oracle and Java Blog.

Huge Pages

Jonathan Lewis - Thu, 2018-02-22 03:03

A useful quick summary from Neil Chandler replying to a thread on Oracle-L:

Topic: RAC install on Linux

You should always be using Hugepages.

They give a minor performance improvement and a significant memory saving in terms of the amount of memory needed to handle the pages – fewer Translation Lookaside Buffer (TLB) entries, which also means fewer TLB misses (which are expensive).

You are handling the memory chopped up into 2MB pieces instead of 4K. But you also have a single shared memory TLB for Hugepages.

The kernel has less work to do, bookkeeping fewer pointers in the TLB.

You also have contiguous memory allocation and it can’t be swapped.

If you are having problems with Hugepages, you have probably overallocated them (I’ve seen this several times at clients so it’s not uncommon). Hugepages can *only* be used for your SGAs. All of your SGAs should fit into the Hugepages and that should generally be no more than about 60% of the total server memory (but there are exceptions), leaving plenty of “normal” memory (small pages) for PGA, O/S and other stuff like monitoring agents.

As an added bonus, AMM can’t use Hugepages, so you are forced to use ASMM. AMM doesn’t work well and has been kind-of deprecated by Oracle anyway – dbca won’t let you set up AMM if the server has more than 4GB of memory.
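
As a practical footnote to that summary: on Linux you can see at a glance whether huge pages are configured and actually in use, and reserve them with a single kernel parameter. A sketch – the number of pages obviously depends on the sum of your SGA sizes:

# current allocation and usage (2MB pages on x86-64)
grep Huge /proc/meminfo

# reserve roughly 24GB of huge pages for the SGAs
sysctl -w vm.nr_hugepages=12288
# add vm.nr_hugepages to /etc/sysctl.conf to make it persistent across reboots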


Oracle database backup

Tom Kyte - Wed, 2018-02-21 15:46
Hi Developers, I am using Oracle 10g. I need to take backup of my database. I can take a back-up of tables, triggers etc using sql developers' Database Backup option but there are multiple users created in that database. Can you please support ...
Categories: DBA Blogs

How do you purge stdout files generated by DBMS_SCHEDULER jobs?

Tom Kyte - Wed, 2018-02-21 15:46
When running scheduler jobs, logging is provided in USER_SCHEDULER_JOB_LOG and USER_SCHEDULER_JOB_RUN_DETAILS. And stdout is provided in $ORACLE_HOME/scheduler/log. The database log tables are purged either by default 30 days (log_history attribute)....
Categories: DBA Blogs

V$SQL history

Tom Kyte - Wed, 2018-02-21 15:46
How many records/entry are there in v$sql,v$ession. and how they flush like Weekly or Space pressure. Thanks
Categories: DBA Blogs

Dynamic SQL in regular SQL queries

Tom Kyte - Wed, 2018-02-21 15:46
Hi, pardon me for asking this question (I know I can do this with the help of a PL/SQL function) but would like to ask just in case. I'm wondering if this doable in regular SQL statement without using a function? I'm trying to see if I can write a ...
Categories: DBA Blogs

Adding hash partitions and spreading data across

Tom Kyte - Wed, 2018-02-21 15:46
Hi, I have a table with a certain number of range partitions and for each partitions I have eight hash subpartitions. Is there a way to increase the subpartitions number to ten and distributing evenly the number of rows? I have tried "alter tabl...
Categories: DBA Blogs

Bug when using 1 > 0 at "case when" clause

Tom Kyte - Wed, 2018-02-21 15:46
Hello, guys! Recently, I've found a peculiar situation when building a SQL query. The purpose was add a "where" clause using a "case" statement that was intented to verify if determined condition was greater than zero. I've reproduced using a "wit...
Categories: DBA Blogs

Difference between explain and execute plan and actual execute plan

Tom Kyte - Wed, 2018-02-21 15:46
Hi, I have often got questions around explain plan and execute plan. As per my knowledge, explain plan gives you the execute plan of the query. But I have also read that Execute plan is the plan which Oracle Optimizer intends to use for the query and...
Categories: DBA Blogs

Oracle Data Cloud Launches Data Marketing Program to Help Savvy Auto Dealer Agencies Better Use Digital Data

Oracle Press Releases - Wed, 2018-02-21 12:55
Press Release
Oracle Data Cloud Launches Data Marketing Program to Help Savvy Auto Dealer Agencies Better Use Digital Data Nine Leading Retail Automotive Marketing Agencies Are First to Complete Comprehensive Program, Receive Oracle Data Cloud’s Auto Elite Data Marketer (EDM) Designation

Redwood City, Calif.—Feb 21, 2018

Oracle Data Cloud today launched an advanced data training and marketing program to help savvy auto dealer agencies better use digital data. Oracle also announced the first nine leading Tier 3 auto marketing agencies to qualify for the rigorous program and receive Oracle Data Cloud’s Auto Elite Data Marketer (EDM) designation. Those companies included: C-4 Analytics, Dealer Inspire, Dealers United, Goodway Group, L2TMedia, SocialDealer, Stream Marketing, Team Velocity, and TurnKey Marketing. Oracle’s Auto Elite Data Marketer program will help agencies effectively allocate their marketing resources as advertising budgets shift from offline media to digital platforms.

“As the automotive industry goes through an era of transformational change, dealers are literally where the rubber meets the road, and they need cutting edge marketing tools to help maintain or grow market share,” said Joe Kyriakoza, VP and GM of Automotive for the Oracle Data Cloud. “Tier 3 marketers know that reaching the right audience drives measurable campaign results. By increasing the data skills of our marketing agency partners, Oracle can help them directly impact and improve their clients’ campaign results.”

Oracle Data Cloud’s Auto Elite Data Marketer Program includes:

  1. Education & training - Expert training for the marketing agency and their extended teams on advanced targeting strategies and audience planning techniques.

  2. Customized collateral - Co-branded collateral pieces to support client marketing efforts, including summary sheets, decks, activation guides, and other materials.

  3. Co-branded marketing - Co-branded marketing initiatives through thought leadership, speaking opportunities, and co-hosted webinars.

  4. Strategic sales support - Access to Oracle’s specialized Retail Solutions Team and the Oracle Data Hotline to support strategic pitches, events, and RFP inquiries.

“We are proud to have worked with Oracle Data Cloud since the beginning, shaping the program together to drive more business for dealers using audience data,” said Joe Chura, CEO of Dealer Inspire. “Our team is excited to continue this relationship as an Elite Data Marketer, empowering Dealer Inspire clients with the unique advantage of utilizing Oracle data for automotive retail targeting.”

“We are consumed with data that allows for hyper-personalization and better targeting of in-market consumers,” said David Boice, CEO and Chairman of Team Velocity Marketing. “Oracle is a new goldmine of data to drive excellent sales and service campaigns and a perfect complement to our Apollo Technology Platform.”

According to Joe Castle, Founder of SOCIALDEALER, “We are excited to be one of the few Auto Elite Data Marketers which provides us a deeper level of custom audience data access from Oracle. Our companies look forward to working closely to further deliver a superior ROI to all our dealership and OEM relationships.”

Through the Auto Elite Data Marketer program, retail marketers learn how to use Oracle’s expansive selection of automotive audiences, which cover the entire vehicle ownership lifecycle, like in-market car shoppers, existing owners, and individuals needing auto finance, credit assistance, or vehicle service. This comprehensive data set allows clients to precisely target the right prospects for any automotive retail campaign. Oracle has teamed up with industry leading data providers to build the robust dataset, like IHS Markit’s Polk for vehicle ownership and intent data, Edmunds.com for online car shopper data and TransUnion the trusted source for consumer finance audiences.

Oracle Data Cloud plans to expand the Auto Elite Data Marketer program to include additional dealer marketing agencies, as well as working directly with dealers and dealer groups and their media partners to use data effectively for advanced targeting and audience planning efforts. For more information about the Auto Elite Data Marketer program, please contact the Oracle Auto team at dealersolutions@oracle.com.

Oracle Data Cloud

Oracle Data Cloud operates the BlueKai Data Management Platform and the BlueKai Marketplace, the world’s largest audience data marketplace. Leveraging more than $5 trillion in consumer transaction data, more than five billion global IDs and 1,500+ data partners, Oracle Data Cloud connects more than two billion consumers around the world across their devices each month. Oracle Data Cloud is made up of AddThis, BlueKai, Crosswise, Datalogix and Moat.

Oracle Data Cloud helps the world’s leading marketers and publishers deliver better results by reaching the right audiences, measuring the impact of their campaigns and improving their digital strategies. For more information and free data consultation, contact The Data Hotline at www.oracle.com/thedatahotline

Contact Info
Simon Jones
About Oracle

The Oracle Cloud offers complete SaaS application suites for ERP, HCM and CX, plus best-in-class database Platform as a Service (PaaS) and Infrastructure as a Service (IaaS) from data centers throughout the Americas, Europe and Asia. For more information about Oracle (NYSE:ORCL), please visit us at www.oracle.com.


Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

Talk to a Press Contact

Simon Jones

  • +1.650.506.0325

Early morning RMOUG post

Bobby Durrett's DBA Blog - Wed, 2018-02-21 06:44

Well, it is early Wednesday morning here at the Westin hotel in Denver where the RMOUG Training Days conference is being held. I can’t sleep anyway so I thought I would write up some of my impressions of yesterday’s presentations.

I appreciate all the effort people put into making their presentations. Since I have done Toastmasters I’ve learned to appreciate more what goes into being an effective speaker. But, the nature of my work is that I have to be critical of everything people say about technology. Maybe I should say “I have to think critically” instead of “be critical”. The problem with the type of work we do is that it involves a lot of money, and that inevitably obscures the truth about the technical details of how things work. So, I want to just sit back and applaud, but my technical side wants to tear apart every detail.

A nice perk of being a RMOUG presenter is that I got to attend the pre-conference workshops for free as well as the rest of the talks. In past conferences that I have spoken at that was not the case. So, I went to a four-hour Snowflake workshop. I have read a fair amount on Snowflake, so much of what the speaker presented was familiar. I wonder how people who had no Snowflake background perceived the talk? Being a nuts and bolts Oracle person I would have liked to dig in more to Snowflake internals and discuss its limitations. Surely any tool has things it does better and things that it does not do so well because of the choices that the developers made in its design. I’m interested in how Snowflake automatically partitions data across files on S3 and caches data in SSD and RAM at the compute level. At least, that is what the information on the web site suggests. But with cloud computing it seems that people frown upon looking under the covers. The goal is to spin up new systems quickly and Snowflake is fantastic at that. Also, it seems to get great performance with little effort. No tuning required! Anyway, it was a good presentation but didn’t get into nuts and bolts tuning and limitations which I would have liked to see.

I spent the rest of the day attending hour-long presentations on various topics. AWS offered a 3 hour session on setting up Oracle on RDS but since I’ve played with RDS at work I decided to skip it. Instead I went to mostly cloud and Devops sessions. I accidentally went to an Oracle performance session which was amusing. It was about tuning table scans in the cloud. The speaker claimed that in Oracle’s cloud you get sub-millisecond I/O which raised a bunch of questions in my mind. But the session was more about using Oracle database features to speed up a data warehouse query. It was fun but not what I expected.

I was really surprised by the Devops sessions. Apparently Oracle has some free Devops tools in their cloud that you can use for on premise work. My office is working with a variety of similar tools already so it is not something we would likely use. But it could be helpful to someone who doesn’t want to install the tools yourself. I’m hopeful that today’s Devops session(s) will fill in more details about how people are using Devops with databases. I’m mostly interested in how to work with large amounts of data in Devops. It’s easy to store PL/SQL code in Git for versioning and push it out with Flywaydb or something like it. It is hard to make changes to large tables and have a good backout. Data seems to be Devops’s Achilles heel and I haven’t seen something that handles it well. I would love to hear about companies that have had success handling data changes with Devops tools.

Well, I’ve had one cup of coffee and Starbucks doesn’t open for another half hour but this is probably enough of a pre-dawn RMOUG data dump. Both of my talks are tomorrow so today is another day as a spectator. Likely it will be another day of cloud and Devops but I might sneak an Oracle performance talk in for one session.


Categories: DBA Blogs

Interval Partition Problem

Jonathan Lewis - Wed, 2018-02-21 02:40

Assume you’ve got a huge temporary tablespace, there’s plenty of space in your favourite tablespace, you’ve got a very boring, simple table you want to copy and partition, and no-one and nothing is using the system. Would you really expect a (fairly) ordinary “create table t2 as select * from t1” to end with an Oracle error “ORA-1652: unable to extend temp segment by 128 in tablespace TEMP”? That’s the temporary tablespace that’s out of space, not the target tablespace for the copy.

Here’s a sample data set to demonstrate the surprise – you’ll need about 900MB of space by the time the entire model has run to completion:

rem     Script:         pt_interval_threat_2.sql
rem     Author:         Jonathan Lewis
rem     Dated:          Feb 2018

column tomorrow new_value m_tomorrow
select to_char(sysdate,'dd-mon-yyyy') tomorrow from dual;

create table t1
as
with g as (
        select rownum id
        from dual
        connect by level <= 2e3
)
select
        rownum                  id,
        trunc(sysdate) + g2.id  created,
        rpad('x',50)            padding
from
        g g1,
        g g2
where
        rownum <= 4e6 -- > comment to avoid WordPress format mess
;

execute dbms_stats.gather_table_stats(user,'t1',method_opt=>'for all columns size 1')

I’ve created a table of 4 million rows, covering 2,000 dates out into the future starting from sysdate+1 (tomorrow). As you can see there’s nothing in the slightest bit interesting, unusual, or exciting about the data types and content of the table.

I said my “create table as select” was fairly ordinary – but it’s actually a little bit out of the way because it’s going to create a partitioned copy of this table.

execute snap_my_stats.start_snap

create table t2
partition by range(created)
interval(numtodsinterval(7, 'day'))
(
        partition p_start       values less than (to_date('&m_tomorrow','dd-mon-yyyy'))
                storage(initial 1M)
)
as
select * from t1
;

set serveroutput on
execute snap_my_stats.end_snap

I’ve created the table as a range-partitioned table with an interval() declared. Conveniently I need only mention the partitioning column by name in the declaration, rather than listing all the columns with their types, and I’ve only specified a single starting partition. Since the interval is 7 days and the data spans 2,000 days I’m going to end up with nearly 290 partitions added.

There’s no guarantee that you will see the ORA-01652 error when you run this test – the data size is rather small and your machine may have sufficient other resources to hide the problem even when you’re looking for it – but the person who reported the problem on the OTN/ODC database forum was copying a table of 2.5 Billion rows using about 200 GB of storage, so size is probably important, hence the 4 million rows as a starting point on my small system.

Of course, hitting an ORA-01652 on TEMP when doing a simple “create as select” is such an unlikely sounding error that you don’t necessarily have to see it actually happen; all you need to see (at least as a starting point in a small model) is TEMP being used unexpectedly. So, for my first test, I’ve included some code to calculate and report changes in the session stats – that’s the calls to the package snap_my_stats. Here are some of the more interesting results:

Session stats - 20-Feb 16:58:24
Interval:-  14 seconds
Name                                                                     Value
----                                                                     -----
table scan rows gotten                                               4,000,004
table scan blocks gotten                                                38,741

session pga memory max                                             181,338,112

sorts (rows)                                                         2,238,833

physical reads direct temporary tablespace                              23,313
physical writes direct temporary tablespace                             23,313

The first couple of numbers show the 4,000,000 rows being scanned from 38,741 table blocks – and that’s not a surprise. But for a simple copy the 181MB of PGA memory we’ve acquired is a little surprising, though less so when we see that we’ve sorted 2.2M rows, and then ended up spilling 23,313 blocks to the temporary tablespace. But why are we sorting anything – what are those rows ?

My first thought was that there was a bug in some recursive SQL that was trying to define or identify dynamically created partitions, or maybe something in the space management code trying to find free space, so the obvious step was to enable extended tracing and look for any recursive statements that were running a large number of times or doing a lot of work. There weren’t any – and the trace file (particularly the detailed wait events) suggested the problem really was purely to do with the CTAS itself; so I ran the code again enabling events 10032 and 10033 (the sort traces) and found the following:

---- Sort Statistics ------------------------------
Initial runs                              1
Input records                             2140000
Output records                            2140000
Disk blocks 1st pass                      22292
Total disk blocks used                    22294
Total number of comparisons performed     0
Temp segments allocated                   1
Extents allocated                         175
Uses version 1 sort
Uses asynchronous IO

One single operation had resulted in Oracle sorting 2.14 million rows (but not making any comparisons!) – and the only table in the entire system with enough rows to do that was my source table! Oracle seems to be sorting a large fraction of the data for no obvious reason before inserting it.

  • Why, and why only 2.14M out of 4M ?
  • Does it do the same on (yes), what about (no – hurrah: unless it just needs a larger data set!).
  • Is there any clue about this on MoS (yes Bug 17655392 – though that one is erroneously, I think, flagged as “closed not a bug”)
  • Is there a workaround ? (Yes – I think so).

Playing around and trying to work out what’s happening, the obvious pointers are the large memory allocation and the “incomplete” spill to disc – what would happen if I fiddled around with workarea sizing, switching it to manual, say, or setting the pga_aggregate_target to a low value? At one point I got results showing 19M rows (that’s not a typo, it really was close to 5 times the number of rows in the table) sorted with a couple of hundred thousand blocks of TEMP used – the 10033 trace showed 9 consecutive passes (that I can’t explain) as the code executed, from which I’ve extracted the row counts, temp blocks used, and number of comparisons made:

Input records                             3988000
Total disk blocks used                    41544
Total number of comparisons performed     0

Input records                             3554000
Total disk blocks used                    37023
Total number of comparisons performed     0

Input records                             3120000
Total disk blocks used                    32502
Total number of comparisons performed     0

Input records                             2672000
Total disk blocks used                    27836
Total number of comparisons performed     0

Input records                             2224000
Total disk blocks used                    23169
Total number of comparisons performed     0

Input records                             1762000
Total disk blocks used                    18357
Total number of comparisons performed     0

Input records                             1300000
Total disk blocks used                    13544
Total number of comparisons performed     0

Input records                             838000
Total disk blocks used                    8732
Total number of comparisons performed     0

Input records                             376000
Total disk blocks used                    3919
Total number of comparisons performed     0

There really doesn’t seem to be any good reason why Oracle should do any sorting of the data (and maybe it wasn’t given the total number of comparisons performed in this case) – except, perhaps, to allow it to do bulk inserts into each partition in turn or, possibly, to avoid creating an entire new partition at exactly the moment it finds just the first row that needs to go into a new partition. Thinking along these lines I decided to pre-create all the necessary partitions just in case this made any difference – the code is at the end of the blog note. Another idea was to create the table empty (with, and without, pre-created partitions), then do an “insert /*+ append */” of the data.

Nothing changed (much – though the number of rows sorted kept varying).

And then — it all started working perfectly with virtually no rows reported sorted and no I/O to the temporary tablespace !

Fortunately I thought of looking at v$memory_resize_ops and found that the automatic memory management had switched a lot of memory to the PGA, allowing Oracle to do whatever it needed to do completely in memory without reporting any sorting. A quick re-start of the instance fixed that “workaround”.
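
For reference, a quick way to spot such automatic shifts is to query the relevant view (assuming automatic memory management is in use):

select  component, oper_type, initial_size, target_size, final_size, status, start_time
from    v$memory_resize_ops
order by
        start_time
;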

Still struggling with finding a reasonable workaround I decided to see if the same anomaly would appear if the table were range partitioned but didn’t have an interval clause. This meant I had to precreate all the necessary partitions, of course – which I did by starting with an interval partitioned table, letting Oracle figure out which partitions to create, then disabling the interval feature – again, see the code at the end of this note.

The results: no rows sorted on the insert, no writes to temp. Unless it’s just a question of needing even more data to reproduce the problem with simple range partitioned tables, it looks as if there’s a problem somewhere in the code for interval partitioned tables and all you have to do to work around it is precreate loads of partitions, disable intervals, load, then re-enable the intervals.


Here’s the “quick and dirty” code I used to generate the t2 table with precreated partitions:

create table t2
partition by range(created)
interval(numtodsinterval(7, 'day'))
(
        partition p_start values less than (to_date('&m_tomorrow','dd-mon-yyyy'))
                storage(initial 1M)
)
as
select  *
from    t1
where   rownum <= 0
;

<<expand>>
declare
        m_max_date      date;
begin
        select  max(created)
        into    expand.m_max_date
        from    t1
        ;

        for i in 1..expand.m_max_date - trunc(sysdate) loop
                dbms_output.put_line(
                        to_char(trunc(sysdate) + i,'dd-mon-yyyy') || chr(9)
                );
                execute immediate
                        'lock table t2 partition for ('''  ||
                        to_char(trunc(sysdate) + i,'dd-mon-yyyy') ||
                        ''') in exclusive mode'
                ;
        end loop;
end;
/

prompt  ========================
prompt  How to disable intervals
prompt  ========================

alter table t2 set interval();

The code causes partitions to be created by locking the relevant partition for each date between the minimum and maximum in the t1 table; locking the partition is enough to create it if it doesn’t already exist. The code is a little wasteful since it locks each partition 7 times as we walk through the dates – but it’s only a quick demo for a model, and for copying a very large table the wastage would probably be very small compared to the work of doing the actual data copy. Obviously one could be more sophisticated and limit the code to locking and creating only the partitions needed, and only locking each of them once.
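
Presumably, once the bulk load is complete, the interval clause can simply be switched back on – the mirror image of the statement above:

alter table t2 set interval(numtodsinterval(7, 'day'));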


Communicate events in ADF based UI areas to embedded Rich Client Applications such as Oracle JET, Angular and React

Amis Blog - Wed, 2018-02-21 02:33

For one of our current projects I have done some explorations into the combination of ADF (and WebCenter Portal in our specific case) with JET. Our customer has existing investments in WC Portal and many ADF Taskflows and is now switching to JET as a WebApp implementation technology – for reasons of better user experience and especially better availability of developers. I believe that this is a situation that many organizations are in or are contemplating (including those who want to extend Oracle EBusiness Suite or Fusion Apps). This is not the ideal green field technology mix of course. However, if either WebCenter Portal (heavily steeped in ADF) or an existing enterprise ADF application are the starting point for new UI requirements, you are bound to end up with a combination of ADF and the latest and greatest technology used for building those requirements.

We have to ensure that the rich client based ‘Portlets’ are nicely embedded in the ADF host environment. We also have to take care that events triggered by user actions in the ADF UI areas are communicated to the embedded rich client based UI areas in the page and lead to appropriate actions over there.

In a previous article, I have described how events in the embedded area are communicated to the ADF side of the fence and lead to UI synchronization: Publish Events from any Web Application in IFRAME to ADF Applications. In the current article, I describe the reverse route: events in the ADF based areas on the page are communicated to the rich client based UI and trigger the appropriate synchronization. The implementation is suitably decoupled, using ADF mechanisms such as server listener, contextual event, partial page refresh and the standard HTML5 mechanism for publishing events on embedded IFRAME windows.


It is quite likely that an IFRAME is used as a container for the new UI components

The UI components created in ADF and those built in other technologies are fairly well isolated from each other, through the use of the IFRAME. However, in certain instances, the isolation has to be pierced. When a user performs an action in one UI component, it is quite possible that this action should have an effect in another UI area in the same page. The other area may need to refresh (get latest data), synchronize (align with the selection), navigate, etc. We need a solid, decoupled way of taking an event in the ADF based UI area to the UI sections embedded in IFRAMEs and based on one of the proponents of the latest UI technology.

This article describes such an approach – one that allows our ADF side of the User Interface to send events in a well defined way to the JET, React or Angular UI component and thus make these areas play nice with each other after all. The visualization of the end to end flow is shown below:



Note: where it says JET, it could also say Angular, Vue or React – or even plain HTM5.

The steps are:

  1. A user action is performed and the event to be published is identified in the ADF UI – the ADF X taskflow in the figure.
  2. Several options are available for the communication to the server – from an auto-submit enabled input component with a value change listener associated with a managed bean to a UI comonent with a client listener that leverages a server listener to queue a custom event to be sent to the server – also ending up in a managed bean
  3. The managed bean, defined in the context of the ADF Taskflow X, gets hold of the binding container for the current view, gets hold of the publishEvent method binding and executes that binding
  4. The publishEvent method binding is specified in the page definition for the current page. It invokes method publishEvent on the Data Control EventPublisherBean that was created for the POJO EventPublisherBean. The method binding in the page definition contains an events element that specifies that execution of this method binding will result in the publication of a contextual event called CountrySelectedEvent that takes the result from the method publishEvent as its payload.

    At this point, we leave the ADF X Taskflow. It has done its duty by reporting the event that took place in the client. It is available at the heart of the ADF framework, ready to be processed by one or more consumers – that each have to take care of refreshing their own UI if so desired.

  5. The contextual event CountryChangedEvent is consumed in method handleCountryChangedEvent in POJO EventConsumer. A Data Control is created for this POJO. A method action is configured for handleCountryChangedEvent in the page definition for the view in JET Container ADF Taskflow. This page definition also contains an eventMap element that specifies that the CountryChangedEvent event is to be handled by method binding handleCountryChangedEvent. The method binding specifies a single parameter that will receive the payload of the event (from the EL Expression ${payLoad} for attribute NDValue)
  6. The EventConsumer receives the payload of the event and writes a JavaScript snippet to be executed in the browser at the event of the partial page request processing.
  7. The JavaScript snippet, written by EventConsumer, is executed in the client; it invokes function processCountryChangedEvent (loaded in JS library adf-jet-client-app.js) and pass the payload of the countrychanged event.
  8. Function processCountryChangedEvent locates the IFRAME element that contains the target client application and posts a message on its content window – carrying the event’s payload
  9. A message event handler defined in the IFRAME, in the JET application, consumes the message, extracts the event’s payload and processes it in the appropriate way – probably synchronizing the UI in some way or other (a sketch of steps 8 and 9 follows this list). At this point, all effects that the action in the ADF X area should have in the JET application in the IFRAME have been achieved.
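
A minimal sketch of what steps 8 and 9 could look like in JavaScript – the function name processCountryChangedEvent is taken from the description above, while the IFRAME id, message format and origins are assumptions:

// in adf-jet-client-app.js, loaded by the ADF page (step 8)
function processCountryChangedEvent(payload) {
    // locate the IFRAME that hosts the embedded (JET) application
    var frame = document.getElementById("jetAppFrame"); // assumed id of the IFRAME
    if (frame && frame.contentWindow) {
        // post the event payload to the embedded application
        frame.contentWindow.postMessage(
            JSON.stringify({ eventType: "countryChanged", payload: payload }),
            "*" // in production, restrict this to the origin of the embedded application
        );
    }
}

// inside the embedded application, for example in its main script (step 9)
window.addEventListener("message", function (event) {
    // optionally verify event.origin here
    var message = JSON.parse(event.data);
    if (message.eventType === "countryChanged") {
        // synchronize the UI with the newly selected country
        console.log("Country changed to " + message.payload);
    }
}, false);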


And now for some real code.

Starting point:

  • ADF Web Application (may have Model, such as ADF BC, not necessarily)
    • an index.jsf page – home of the application
    • the ADF JET Container Taskflow with a JETView.jsff that has the embedded IFRAME that loads the index.xhtml
    • a jet-web-app folder with an index.html – to represent the JET application (note: for now it is just a plain HTML5 application)
    • the ADF X Taskflow with a view.jsff page – representing the existing WC Portal or ADF ERP application




From ADF X Taskflow to ADF Contextual Event

The page view.jsff contains a selectOneChoice component



Users can select a country.

The component has autoSubmit set to true – which means that when the selection changes, the change is submitted (in an AJAX request) to the server. The valueChangeListener has been configured – with the detailsBean managed bean, defined in the ADF-X-taskflow.

<?xml version='1.0' encoding='UTF-8'?>
<ui:composition xmlns:ui="http://java.sun.com/jsf/facelets" xmlns:af="http://xmlns.oracle.com/adf/faces/rich">
    <af:panelHeader text="Classic ADF X Taskflow" id="ph1">
        <af:selectOneChoice label="Choose a country" id="soc1" autoSubmit="true"
                            valueChangeListener="#{detailsBean.countryChangeHandler}">
            <af:selectItem label="The Netherlands" value="nl" id="si1"/>
            <af:selectItem label="Germany" value="de" id="si2"/>
            <af:selectItem label="United Kingdom of Great Britain and Northern Ireland" value="uk" id="si3"/>
            <af:selectItem label="United States of America" value="us" id="si4"/>
            <af:selectItem label="Spain" value="es" id="si5"/>
            <af:selectItem label="Norway" value="no" id="si6"/>
        </af:selectOneChoice>
    </af:panelHeader>
</ui:composition>


The detailsBean is defined for the ADF-X-taskflow:

<?xml version="1.0" encoding="windows-1252" ?>
<adfc-config xmlns="http://xmlns.oracle.com/adf/controller" version="1.2">
  <task-flow-definition id="ADF-X-taskflow">
    <managed-bean id="__1">
    <view id="view">

The bean is based on the class DetailsBean. The relevant method here is countryChangeHandler:

    public void countryChangeHandler(ValueChangeEvent valueChangeEvent) {
        System.out.println("Country Changed to = " + valueChangeEvent.getNewValue());
        // find operation binding publishEvent and execute it in order to publish the contextual event
        BindingContainer bindingContainer = BindingContext.getCurrent().getCurrentBindingsEntry();
        OperationBinding method = bindingContainer.getOperationBinding("publishEvent");
        method.getParamsMap().put("payload", valueChangeEvent.getNewValue());
        method.execute();
    }


This method gets hold of the binding container (for the current page, view.jsff) and, within it, of the publishEvent method binding:

<?xml version="1.0" encoding="UTF-8" ?>
<pageDefinition xmlns="http://xmlns.oracle.com/adfm/uimodel" version="" id="viewPageDef">
    <executables>
        <variableIterator id="variables"/>
    </executables>
    <bindings>
        <methodAction id="publishEvent" RequiresUpdateModel="true" Action="invokeMethod" MethodName="publishEvent"
                      IsViewObjectMethod="false" DataControl="EventPublisherBean">
            <NamedData NDName="payload" NDType="java.lang.Object"/>
            <events xmlns="http://xmlns.oracle.com/adfm/contextualEvent">
                <event name="CountryChangedEvent"/>
            </events>
        </methodAction>
    </bindings>
</pageDefinition>

This Page Definition defines the method action and specifies that execution of that method action publishes the Contextual Event CountryChangedEvent.


At this point, we leave the ADF X Taskflow. It has done its duty by reporting the event that took place in the client. The event is now available at the heart of the ADF framework, ready to be processed by one or more consumers – each of which has to take care of refreshing its own UI if so desired.


From ADF Contextual Event to JET Application

A method action is configured for method handleCountryChangedEvent in data control EventConsumer (created for POJO EventConsumer), in the page definition for the view in the ADF JET Container Taskflow. This page definition also contains an eventMap element that specifies that the CountryChangedEvent event is to be handled by this method binding handleCountryChangedEvent. The method binding specifies a single parameter that receives the payload of the event (through the EL expression ${payLoad} for attribute NDValue).

Here is the code for the Page Definition for the JETView.jsff:

<?xml version="1.0" encoding="UTF-8" ?>
<pageDefinition xmlns="http://xmlns.oracle.com/adfm/uimodel" version="" id="JETViewPageDef">
    <executables>
        <variableIterator id="variables"/>
    </executables>
    <bindings>
        <methodAction id="handleCountryChangedEvent" RequiresUpdateModel="true" Action="invokeMethod"
                      MethodName="handleCountryChangedEvent" IsViewObjectMethod="false" DataControl="EventConsumer">
            <NamedData NDName="payload" NDValue="${payLoad}" NDType="java.lang.Object"/>
        </methodAction>
    </bindings>
    <eventMap xmlns="http://xmlns.oracle.com/adfm/contextualEvent">
        <event name="CountryChangedEvent">
            <producer region="*">
                <!-- http://www.jobinesh.com/2014/05/revisiting-contextual-event-dynamic.html -->
                <consumer handler="handleCountryChangedEvent" refresh="false"/>
            </producer>
        </event>
    </eventMap>
</pageDefinition>

Note: the refresh attribute in the consumer element is crucial: it specifies that the page should not be refreshed when the event is consumed. The default is that the page does refresh; in our case that would mean the IFRAME refreshes and reloads the JET application, which is then reinitialized and loses all its state.

And here is the EventConsumer class – for which a Data Control has been created – that handles the CountryChangedEvent:

package nl.amis.frontend.jet2adf.view.adfjetclient;

import javax.faces.context.FacesContext;

import org.apache.myfaces.trinidad.render.ExtendedRenderKitService;
import org.apache.myfaces.trinidad.util.Service;

public class EventConsumer {
    public EventConsumer() {
    }

    public void handleCountryChangedEvent(Object payload) {
        System.out.println(">>>>>> Consume Event: " + payload);
        writeJavaScriptToClient("console.log('CountryChangeEvent was consumed; the new country value = " + payload +
                                "'); processCountryChangedEvent('" + payload + "');");
    }

    // generic, reusable helper method to call JavaScript on the client
    private void writeJavaScriptToClient(String script) {
        FacesContext fctx = FacesContext.getCurrentInstance();
        ExtendedRenderKitService erks = Service.getRenderKitService(fctx, ExtendedRenderKitService.class);
        erks.addScript(fctx, script);
    }
}

The contextual event CountryChangedEvent is consumed in method handleCountryChangedEvent in this POJO EventConsumer. It receives the payload of the event and writes a JavaScript snippet to be executed in the browser at the end of the partial page request processing, using the ExtendedRenderKitService in the ADF framework.

The JavaScript snippet, written by EventConsumer:

  console.log('CountryChangeEvent was consumed; the new country value = uk'); processCountryChangedEvent('uk');

It is executed in the client;
it invokes function processCountryChangedEvent (loaded in JS library
adf-jet-client-app.js) and passes the payload of the countrychanged event (that is: the country code for the selected country).

Function processCountryChangedEvent locates the IFRAME element that
contains the target client application and posts a message on its content
window – carrying the event’s payload:

function findIframeWithIdEndingWith(idEndString) {
    var iframe;
    var iframeHtmlCollectionArray = document.getElementsByTagName("iframe");
    [].forEach.call(iframeHtmlCollectionArray, function (el, i) {
        if (el.id.endsWith(idEndString)) {
            iframe = el;
        }
    });
    return iframe;
}

function processCountryChangedEvent(newCountry) {
    console.log("Client Side handling of Country Changed event; now transfer to IFRAME");
    var iframe = findIframeWithIdEndingWith('jetIframe::f');
    var targetOrigin = '*';
    iframe.contentWindow.postMessage({'eventType': 'countryChanged', 'payload': newCountry}, targetOrigin);
}
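
As a side note – assuming there is exactly one matching IFRAME on the page, the lookup could also be written with the attribute "ends with" selector instead of the manual loop. This is just a sketch, not the code from the repository:

function findJetIframe() {
    // sketch: relies on the standard [id$="..."] attribute selector; assumes a single matching IFRAME
    return document.querySelector('iframe[id$="jetIframe::f"]');
}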

A message event handler defined in the IFRAME, in the JET application,
consumes the message, extracts the event’s payload and processes it.

function init() {
    // attach listener to receive messages from the parent; this is not required for sending messages to the parent window
    window.addEventListener("message", function (event) {
        console.log("Iframe receives message from parent " + event.data);
        if (event.data && event.data.eventType == 'countryChanged' && event.data.payload) {
            var countrySpan = document.getElementById('currentCountry');
            countrySpan.innerHTML = "Fresh Country: " + event.data.payload;
        }
    }, false);
}

document.addEventListener("DOMContentLoaded", function (event) {
    init();
});

The handler receives the event and reads its data property – the object that was posted from the parent window, containing the event type and the payload. It extracts the country from the payload, then locates a SPAN element in the DOM and updates its innerHTML property. This updates the UI.
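
Note that the snippet above uses targetOrigin '*' and the listener accepts messages from any window. In a real deployment both sides would probably be restricted to a known origin; here is a minimal sketch, assuming the hypothetical origin https://adf.example.com (not from the original code) serves both the ADF page and the embedded JET application:

var EXPECTED_ORIGIN = 'https://adf.example.com'; // placeholder origin, adjust to the real deployment

// parent window side: only deliver the message to content loaded from the expected origin
function postCountryChanged(iframe, newCountry) {
    iframe.contentWindow.postMessage({eventType: 'countryChanged', payload: newCountry}, EXPECTED_ORIGIN);
}

// IFRAME side: ignore messages that were not sent by the expected origin
window.addEventListener("message", function (event) {
    if (event.origin !== EXPECTED_ORIGIN) {
        return; // drop messages from unknown senders
    }
    // ... handle event.data exactly as in the listener shown above
});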


Here are the salient details of the index.html of the embedded web application:

        <h2>Client Web App</h2>
            Country is 
            <span id="currentCountry"></span>

The full project looks like this:



A simple animated visualization of what happens:



Sources for this article: https://github.com/lucasjellema/WebAppIframe2ADFSynchronize. (Note: this repository also contains the code for the flow from JET IFRAME to the ADF X Taskflow.)
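
That reverse flow is not detailed in this article. As a rough indication only – a sketch under assumptions, not the repository code – the JET application in the IFRAME would post a message to its parent window, where a listener hands the payload on to the ADF page:

// Inside the JET application in the IFRAME:
function notifyParent(eventType, payload) {
    // window.parent is the ADF page that embeds this IFRAME
    window.parent.postMessage({eventType: eventType, payload: payload}, '*');
}

// In the parent (ADF) page, for example in adf-jet-client-app.js:
window.addEventListener("message", function (event) {
    if (event.data && event.data.eventType) {
        console.log("Parent received event from IFRAME: " + event.data.eventType);
        // from here the payload could be forwarded to the ADF server side, e.g. with an af:serverListener
    }
});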

Docs on postMessage: https://developer.mozilla.org/en-US/docs/Web/API/Window/postMessage

Blog ADF: (re-)Introducing Contextual Events in several simple steps  : https://technology.amis.nl/2013/03/14/adf-re-introducing-contextual-events-in-several-simple-steps/

Blog Revisiting Contextual Event: Dynamic Event Producer, Manual Region Refresh, Conditional Event Subscription and using Managed Bean as Event Handler (with the crucial hint regarding suppressing the automatic refresh of pages after consuming a contextual event): http://www.jobinesh.com/2014/05/revisiting-contextual-event-dynamic.html

The post Communicate events in ADF based UI areas to embedded Rich Client Applications such as Oracle JET, Angular and React appeared first on AMIS Oracle and Java Blog.

ODA X7-2S/M update-repository fails after re-image

Yann Neuhaus - Wed, 2018-02-21 00:54

While playing with a brand new ODA X7-2M, I faced a strange behaviour after re-imaging the ODA with the latest version. Basically, after re-imaging and doing the configure-firstnet, the next step is to import the GI clone into the repository before creating the appliance. Unfortunately this command fails with the error DCS-10001: Internal error encountered: Fail to start hand shake to localhost:7070. Why not have a look at how to fix it…

First of all, doing a re-image is really straightforward and works very well. I simply access the ILOM remote console to attach the ISO file for the ODA, in this case patch 23530609 from MOS, and restart the box from the CDROM. After approx. 40 minutes you have a brand new ODA running the latest release.

Of course, instead of re-imaging, I could “simply” update/upgrade the DCS agent to the latest version. Let’s say that I like to start from a “clean” situation when deploying a new environment, and patching a not-yet-installed system sounds a bit strange to me ;-)

So once re-imaged, the ODA is ready for deployment. The first step is to configure the network so that I can SSH to it and go ahead with the appliance creation. This takes only 2 minutes using the command configure-firstnet.

The last requirement before running the appliance creation is to import the GI Clone, here the patch p27119393_122120, into the repository. Unfortunately that’s exactly where the problem starts…


Hmmm… I can’t get it into the repository due to a strange hand shake error. So I will at least check whether the web interface is working (…of course using Chrome…)


Same thing here: it is not possible to get into the web interface at all.

While searching a bit for this error, we finally landed in the Known Issues chapter of the ODA Release Notes, which sounded promising. Unfortunately none of the listed errors really matched our case. However, a small search in the page for the error message pointed us to the following case:


OK, the error is ODA X7-2HA related, but let’s give it a try.


Once DCS is restarted, just re-try the update-repository


Here we go! The job has been submitted and the GI clone is imported in the repository :-)

After that the CREATE APPLIANCE will run like a charm.

Hope it helped!



Cet article ODA X7-2S/M update-repository fails after re-image est apparu en premier sur Blog dbi services.

Strange dependency in user_dependency: view depends on unreferenced function

Tom Kyte - Tue, 2018-02-20 21:26
Dear Team, I will try to simplify the scenario we have, using a simple test case: <code> SQL> create table test_20 ( a number) 2 / Table created. SQL> SQL> create or replace function test_function (p_1 in number) 2 return num...
Categories: DBA Blogs

Report for employee attendance

Tom Kyte - Tue, 2018-02-20 21:26
I am sorry for asking this seemingly trivial question, but I have been struggling with it for some time, my deadline is approaching and I can't find any answers for it. I have 3 tables: Calendar table: <code>CREATE TABLE "CJ_CAL" ( "CAL_ID...
Categories: DBA Blogs


Subscribe to Oracle FAQ aggregator