Feed aggregator

Generic Docker Container Image for running and live reloading a Node application based on a ...

OTN TechBlog - Fri, 2018-09-21 21:30

Originally published at technology.amis.nl

My desire: find a way to run a Node application from a Git(Hub) repository using a generic Docker container and be able to refresh the running container on the fly whenever the sources in the repo are updated. The process of producing containers for each application and upon each change of the application is too cumbersome and time consuming for certain situations — including rapid development/test cycles and live demonstrations. I am looking for a convenient way to run a Node application anywhere I can run a Docker container — without having to build and push a container image — and to continuously update the running application in mere seconds rather than minutes. This article describes what I created to address that requirement.

Key ingredient in the story: nodemon — a tool that monitors the file system for changes in a Node.js application and automatically restarts the server when there are such changes. What I had to put together:

a generic Docker container based on the official Node image — with npm and a git client inside

  • adding nodemon (to monitor the application sources)
  • adding a background Node application that can refresh the application from the Git repository: upon an explicit request, on a job schedule, or when triggered by a Git webhook
  • defining an environment variable GITHUB_URL for the url of the source Git repository for the Node application
  • adding a startup script that runs when the container is started for the first time (clone the Git repo specified through GITHUB_URL and run the application with nodemon) or restarted (just run the application with nodemon)

I have been struggling a little bit with the Docker syntax and operations (CMD vs RUN vs ENTRYPOINT) and the Linux bash shell scripts — and I am sure my result can be improved upon.

The Dockerfile that builds the Docker container with all generic elements looks like this:

FROM node:8
# copy the Node Reload server - exposed at port 4500
COPY package.json /tmp
COPY server.js /tmp
RUN cd /tmp && npm install
EXPOSE 4500
RUN npm install -g nodemon
COPY startUpScript.sh /tmp
COPY gitRefresh.sh /tmp
# use RUN (not CMD) for the chmod steps: with an ENTRYPOINT defined, a CMD
# would only supply default arguments and the chmod would never be executed
RUN chmod +x /tmp/startUpScript.sh
RUN chmod +x /tmp/gitRefresh.sh
ENTRYPOINT ["sh", "/tmp/startUpScript.sh"]

Feel free to pick any other node base image — from https://hub.docker.com/_/node/. For example: node:10.

The startUpScript that is executed whenever the container is started up is shown below. It takes care of the initial cloning of the Node application from the Git(Hub) URL to directory /tmp/app and of running that application using nodemon. Note the trick (inspired by StackOverflow) to run a script only when the container is run for the very first time.

#!/bin/sh
CONTAINER_ALREADY_STARTED="CONTAINER_ALREADY_STARTED_PLACEHOLDER"
if [ ! -e $CONTAINER_ALREADY_STARTED ]; then
    touch $CONTAINER_ALREADY_STARTED
    echo "-- First container startup --"
    # YOUR_JUST_ONCE_LOGIC_HERE
    cd /tmp
    # prepare the actual Node app from GitHub
    mkdir app
    git clone $GITHUB_URL app
    cd app
    # install dependencies for the Node app
    npm install
    # start both the reload app and (using nodemon) the actual Node app
    cd ..
    (echo "starting reload app") &
    (echo "start reload"; npm start; echo "reload app finished") &
    cd app
    echo "starting nodemon for app cloned from $GITHUB_URL"
    nodemon
else
    echo "-- Not first container startup --"
    cd /tmp
    (echo "starting reload app and nodemon") &
    (echo "start reload"; npm start; echo "reload app finished") &
    cd app
    echo "starting nodemon for app cloned from $GITHUB_URL"
    nodemon
fi

The startup script runs the live reloader application in the background, using (echo "start reload"; npm start) &. The trailing ampersand (&) takes care of running the command in the background. The npm start command runs the server.js file in /tmp. This server listens on port 4500 for requests. When a request is received at /reload, the application executes the gitRefresh.sh shell script, which performs a git pull in the /tmp/app directory into which the repository was cloned.
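The gitRefresh.sh script itself is not listed here. Based on the description above, a minimal sketch could look like the following; note that the APP_DIR variable and the existence check are my own additions for illustration (the article hardcodes /tmp/app):

```shell
#!/bin/sh
# Sketch of gitRefresh.sh as described above: pull the latest sources into
# the clone that nodemon is watching, then refresh dependencies in case
# package.json changed. APP_DIR is an illustrative addition.
APP_DIR=${APP_DIR:-/tmp/app}
if [ -d "$APP_DIR/.git" ]; then
  cd "$APP_DIR" && git pull && npm install
else
  echo "no git clone found in $APP_DIR"
fi
```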


const RELOAD_PATH = '/reload'
const GITHUB_WEBHOOK_PATH = '/github/push'

var http = require('http');
var server = http.createServer(function (request, response) {
    console.log(`method ${request.method} and url ${request.url}`)
    if (request.method === 'GET' && request.url === RELOAD_PATH) {
        console.log(`reload request starting at ${new Date().toISOString()}...`);
        refreshAppFromGit();
        response.write(`RELOADED!!${new Date().toISOString()}`);
        response.end();
        console.log('reload request handled...');
    } else if (request.method === 'POST' && request.url === GITHUB_WEBHOOK_PATH) {
        let body = [];
        request.on('data', (chunk) => {
            body.push(chunk);
        }).on('end', () => {
            body = Buffer.concat(body).toString();
            // at this point, `body` has the entire request body stored in it as a string
            console.log(`GitHub WebHook event handling starting ${new Date().toISOString()}...`);
            // ... (see the full code in the GitHub repo: https://github.com/lucasjellema/docker-node-run-live-reload/blob/master/server.js)
            console.log("This commit involves changes to the Node application, so let's perform a git pull ")
            refreshAppFromGit();
            response.write('handled');
            response.end();
            console.log(`GitHub WebHook event handling complete at ${new Date().toISOString()}`);
        });
    } else {
        // respond
        response.write('Reload is live at path ' + RELOAD_PATH);
        response.end();
    }
});
server.listen(4500);
console.log('Server running and listening at Port 4500');

var shell = require('shelljs');
var pwd = shell.pwd()
console.info(`current dir ${pwd}`)

function refreshAppFromGit() {
    if (shell.exec('./gitRefresh.sh').code !== 0) {
        shell.echo('Error: Git Pull failed');
        shell.exit(1);
    }
}
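The elided part of server.js inspects the GitHub push payload to decide whether the commit actually touched the Node application. A minimal sketch of such a check is shown below; the function name appTouchedByPush and the exact payload handling are my own illustration, not the code from the repository:

```javascript
// Hypothetical sketch: inspect a GitHub push-webhook payload and decide
// whether any commit added, modified or removed files, i.e. whether a
// git pull is worthwhile. The name appTouchedByPush is illustrative only.
function appTouchedByPush(payload) {
  const commits = (payload && payload.commits) || [];
  return commits.some(function (c) {
    const touched = [].concat(c.added || [], c.modified || [], c.removed || []);
    return touched.length > 0;
  });
}

// Example: a push that modified app.js should trigger a reload
console.log(appTouchedByPush({ commits: [{ modified: ['app.js'] }] })); // true
console.log(appTouchedByPush({ commits: [] })); // false
```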

Using the node-run-live-reload image
Now that you know a little about the inner workings of the image, let me show you how to use it (also see instructions here: https://github.com/lucasjellema/docker-node-run-live-reload).

To build the image yourself, clone the GitHub repo and run

docker build -t "node-run-live-reload:0.1" .

using of course your own image tag if you like. I have pushed the image to Docker Hub as lucasjellema/node-run-live-reload:0.1. You can use this image like this:

docker run --name express -p 3011:3000 -p 4505:4500 -e GITHUB_URL=https://github.com/shapeshed/express_example -d lucasjellema/node-run-live-reload:0.1

In a terminal window, we can follow the logging from within the container using

docker logs express --follow

After the application has been cloned from GitHub, npm has installed the dependencies and nodemon has started the application, we can access it at <host>:3011 (because of the port mapping in the docker run command):

When the application sources are updated in the GitHub repository, we can use a GET request (from CURL or the browser) to <host>:4505 to refresh the container with the latest application definition:

The logging from the container indicates that a git pull was performed — and returned no new sources:

Because there are no changed files, nodemon will not restart the application in this case.

One current requirement for this generic container to work is that the Node application has a package.json with a scripts.start entry in its root directory; nodemon uses that entry to determine how to run the application. The same package.json is used by npm install to install the required libraries for the Node application.
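For illustration, a minimal package.json that satisfies this requirement could look like the following (the application name and the express dependency are made-up examples):

```json
{
  "name": "my-express-app",
  "version": "1.0.0",
  "scripts": {
    "start": "node app.js"
  },
  "dependencies": {
    "express": "^4.16.0"
  }
}
```

nodemon will use the start script (node app.js) to run the application, and npm install will resolve the dependencies listed here.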


The next figure gives an overview of what this article has introduced. If you want to run a Node application whose sources are available in a GitHub repository, then all you need is a Docker host and these are your steps:

  1. Pull the Docker image: docker pull lucasjellema/node-run-live-reload:0.1 (this image currently contains the Node 8 runtime, npm, nodemon, a git client and the reloader application) 
    Alternatively: build and tag the container yourself.
  2. Run the container image, passing the GitHub URL of the repo containing the Node application; specify the required port mappings for the Node application and the reloader (port 4500): docker run --name express -p 3011:3000 -p 4500:4500 -e GITHUB_URL=<GIT HUB REPO URL> -d lucasjellema/node-run-live-reload:0.1
  3. When the container is started, it will clone the Node application from GitHub
  4. Using npm install, the dependencies for the application are installed
  5. Using nodemon the application is started (and the sources are monitored so that the application is restarted upon changes)
  6. Now the application can be accessed at the host running the Docker container on the port as mapped per the docker run command
  7. With an HTTP request to the /reload endpoint, the reloader application in the container is instructed to
  8. git pull the sources from the GitHub repository and run npm install to fetch any changed or added dependencies
  9. if any sources were changed, nodemon will now automatically restart the Node application
  10. the upgraded Node application can be accessed

Note: alternatively, a WebHook trigger can be configured. This makes it possible to automatically trigger the application reload facility upon commits to the GitHub repo. Just as in a regular CD pipeline, this means running Node applications can be upgraded automatically.

Next Steps

Some next steps I am contemplating with this generic container image — and I welcome your pull requests — include:

  • allow an automated periodic application refresh to be configured through an environment variable on the container (and/or through a call to an endpoint on the reload application) instructing the reloader to do a git pull every X seconds.
  • use https://www.npmjs.com/package/simple-git instead of shelljs plus local Git client (this could allow usage of a lighter base image — e.g. node-slim instead of node)
  • force a restart of the Node application, even if it has not changed at all
  • allow for alternative application startup scenarios besides running the scripts.start entry in the package.json in the root of the application

GitHub Repository with the resources for this article — including the Dockerfile to build the container: https://github.com/lucasjellema/docker-node-run-live-reload

My article on my previous attempt at creating a generic Docker container for running a Node application from GitHub: https://technology.amis.nl/2017/05/21/running-node-js-applications-from-github-in-generic-docker-container/

Article and Documentation on nodemon: https://medium.com/lucjuggery/docker-in-development-with-nodemon-d500366e74df and https://github.com/remy/nodemon#nodemon

NPM module shelljs that allows shell commands to be executed from Node applications: https://www.npmjs.com/package/shelljs

RMAN-03002: ORA-19693: backup piece already included

Michael Dinh - Fri, 2018-09-21 18:36

I have been cursed trying to create a 25TB standby database.

Active duplication using standby as source failed due to bug.

Backup based duplication using standby as source failed due to bug again.

Now performing traditional restore.

Both attempts failed with RMAN-20261: ambiguous backup piece handle

RMAN> list backuppiece '/bkup/ovtdkik0_1_1.bkp';
RMAN> change backuppiece '/bkup/ovtdkik0_1_1.bkp' uncatalog;

What’s in the backup?

RMAN> spool log to /tmp/list.log
RMAN> list backup;
RMAN> exit

There are 2 identical backup pieces, and I don't know how this could have happened.

$ grep ovtdkik0_1_1 /tmp/list.log
    201792  1   AVAILABLE   /bkup/ovtdkik0_1_1.bkp
    202262  1   AVAILABLE   /bkup/ovtdkik0_1_1.bkp

RMAN> delete backuppiece 202262;

Restarted the restore and it is running again.

PeopleTools 8.57 is Available on the Oracle Cloud

PeopleSoft Technology Blog - Fri, 2018-09-21 15:17

We are pleased to announce that PeopleTools 8.57 is generally available for install and upgrade on the Oracle Cloud.  As we announced earlier, PeopleTools 8.57 will initially be available only on the Oracle Cloud.  We plan to make PeopleTools 8.57 available for on-premises downloads with the 8.57.04 CPU patch in January 2019.  

There are many exciting new features in PeopleTools 8.57, including:

  • The ability for end-users to set up conditions in analytics that if met will notify the user
  • Improvements to the way Related Content and Analytics are displayed
  • Add custom fields to Fluid pages with minimum life-cycle impact
  • More capabilities for end user personalization
  • Improved search that supports multi-facet selections
  • Easier than ever to brand the application with your corporate colors and graphics
  • Fluid page preview in AppDesigner and improved UI properties interface
  • End-to-end command-line support for life-cycle management processes
  • And much more!

You’ll want to get all the details and learn about the new features in 8.57. A great place to start is the PeopleTools 8.57 Highlights Video posted on the PeopleSoft YouTube channel. The highlights video gives you an overview of the new features and shows how to use them.

There is plenty more information about the release available today.  Here are some links to some of the other places you can go to learn more about 8.57:

In addition to releasing PeopleTools 8.57, version 7 of PeopleSoft Cloud Manager is also being released today.  CM 7 is similar in functionality to CM 6 with additional support for PeopleTools 8.57.  If you currently use a version of Cloud Manager you must upgrade to version 7 in order to install PT 8.57. 

There are a lot of questions about how to get started using PeopleTools 8.57 and Cloud Manager 7.  Documentation and installation instructions are available on the Cloud Manager Home Page.

More information will be published over the next couple of weeks to help you get started with 8.57 on the cloud. Additional information will include blogs to help with details of the installation, a video that shows the complete process from creating a free trial account to running PT 8.57, and a detailed Spotlight Video that describes configuring OCI and Cloud Manager 7.

PeopleTools 8.57 is a significant milestone for Oracle, making it easier than ever for customers to use, maintain and run PeopleSoft Applications.

OAC 18.3.3: New Features

Rittman Mead Consulting - Fri, 2018-09-21 07:58

I believe there is a hidden strategy behind Oracle's product release schedule: every time I'm either on holidays or in a business trip full of appointments a new version of Oracle Analytics Cloud is published with a huge set of new features!


OAC 18.3.3 went live last week and contains a big set of enhancements, some of which were already described at Kscope18 during the Sunday Symposium. New features are appearing in almost all the areas covered by OAC, from Data Preparation to the main Data Flows, new Visualization types, new security and configuration options and BIP and Essbase enhancements. Let's have a look at what's there!

Data Preparation

A recurring theme in Europe since last year is GDPR, the General Data Protection Regulation, which aims at protecting the data and privacy of all European citizens. This is very important in our landscape since we "play" with data on a daily basis and we should be aware of what data we can use and how.
Luckily for us, OAC now helps address GDPR with the Data Preparation Recommendations step: every time a dataset is added, each column is profiled and a list of recommended transformations is suggested to the user. Please note that Data Preparation Recommendations only suggests changes to the dataset, and thus can't be considered a complete solution to GDPR compliance.
The suggestions may include:

  • Complete or partial obfuscation of the data: useful when dealing with security/user sensitive data
  • Data Enrichment based on the column data can include:
    • Demographic information based on names
    • Geographical information based on locations, zip codes


Each suggestion applied to the dataset is stored in a data preparation script that can easily be reapplied if the data is updated.


Data Flows

Data Flows is the "mini-ETL" component within OAC which allows transformations, joins, aggregations, filtering, binning, machine learning model training and storing the artifacts either locally or in a database or Essbase cube.
Data Flows, however, had some limitations; the first was that they had to be run manually by the user. With OAC 18.3.3 there is now an option to schedule Data Flows, much as we were used to when scheduling Agents back in OBIEE.


Another limitation was that each Data Flow could create only a single data-set. This has been solved by the introduction of the Branch node, which allows a single Data Flow to produce multiple data-sets: very useful when the same set of source data and transformations needs to be used to produce various data-sets.


Two other new features have been introduced to make data-flows more reusable: Parametrized Sources and Outputs, and Incremental Processing.
Parametrized Sources and Outputs allow the data-flow source or target to be selected at runtime, making it possible, for example, to create a specific and different dataset for each day's load.


Incremental Processing, as the name says, is a way to run Data Flows only on top of the data added since the last run (incremental loads in ETL terms). In order to have a data flow work with incremental loads we need to:

  • Define in the source dataset the key column that can be used to identify new data since the last run (e.g. CUSTOMER_KEY or ORDER_DATE)
  • When including the dataset in a Data Flow enable the execution of the Data Flow with only the new data
  • In the target dataset define if the Incremental Processing replaces existing data or appends data.

Please note that the Incremental Load is available only when using Database Sources.

Another important improvement is Function Shipping when Data Flows are used with Big Data Cloud: if the source datasets come from BDC and the results are stored in BDC, all transformations like joining, adding calculated columns and filtering are shipped to BDC as well, meaning there is no additional load on OAC for the Data Flow.

Lastly there is a new Properties Inspector feature in Data Flows, allowing you to check properties like name and description, as well as to access and modify the scheduling of the related flow.


Data Replication

It is now possible to use OAC to replicate data from a source system like Oracle's Fusion Apps, Talend or Eloqua directly into Big Data Cloud, Database Cloud or Data Warehouse Cloud. This function is extremely useful since it decouples the queries generated by the analytical tools from the source systems.
As expected the user can select which objects to replicate, the filters to apply, the destination tables and columns, and the load type between Full or Incremental.

Project Creation

New visualization capabilities have been added which include:

  • Grid HeatMap
  • Correlation Matrix
  • Discrete Shapes
  • 100% Stacked Bars and Area Charts

In the Map views, Multiple Map Layers can now be added as well as Density and Metric based HeatMaps, all on top of new background maps including Baidu and Google.


Tooltips are now supported in all visualizations, allowing the end user to add measure columns which will be shown when hovering over a section of any graph.


The Explain feature is now available on metrics and not only on attributes and has been enhanced: a new anomaly detection algorithm identifies anomalies in combinations of columns working in the background in asynchronous mode, allowing the anomalies to be pushed as soon as they are found.

A new feature that many developers will appreciate is AutoSave: we are all used to autosave when using Google Docs, and the same now applies to OAC; a project is saved automatically at every change. Of course this feature can be turned off if necessary.
Another very interesting addition is Copy Data to Clipboard: with a right click on any graph, an option to save the underlying data to the clipboard is available. The data can then natively be pasted into Excel.

Did you create a new dataset and want to repoint your existing project to it? With Dataset Replacement it's now just a few clicks away: you only need to select the new dataset and re-map all the columns used in your current project!


Data Management

The datasets/dataflows/project methodology is typical of what Gartner defined as Mode 2 analytics: analysis done by a business user without any involvement from IT. The step sometimes missing, or hard to perform, in self-service tools is publishing: once a certain dataset is consistent and ready to be shared, it's rather difficult to open it up to a larger audience within the same toolset.
New OAC administrative options address this problem: dataset Certification by an administrator allows a certain dataset to be queried via Ask and DayByDay by other users. There is also a dataset Permissions tab allowing the definition of Full Control, Edit or Read Only access at user or role level. This is the way to bring the self-service dataset back to corporate visibility.


A Search tab allows fine control over the indexing of a certain dataset used by Ask and DayByDay. There are now options to select when the indexing is executed as well as which columns to index and how (by column name and value, or by column name only).


BIP and Essbase

BI Publisher was added to OAC in the previous version; it now includes new features like a tighter integration with datasets, which can be used as data sources, as well as features like email delivery read-receipt notification, compressed output and password protection that were already available in the on-premises version.
There is also a new set of features for Essbase, including a new UI, REST APIs and, very important security-wise, all external communications (like Smartview) now going over HTTPS.
For a detailed list of new features check this link


OAC 18.3.3 includes an incredible amount of new features which enable the whole analytics story: from self-service data discovery to corporate dashboarding and pixel-perfect formatting, all within the same tool and with shared security settings. Options like the parametrized and incremental Data Flows allow content reusability and enhance overall platform performance by reducing the load on source systems.
If you are looking into OAC and want to know more, don't hesitate to contact us.

Categories: BI & Warehousing

in doubt transaction

Laurent Schneider - Fri, 2018-09-21 07:39

Distributed transactions allow you to have multiple DMLs over multiple databases within a single transaction.

For instance, one local and one remote

insert into t values(1);
insert into t@db02 values(2);

If you lose the connection to db02 and want to commit, your database server may not know about the state of the remote transaction. The transaction then shows up as pending.

The Oracle documentation mentions the ORA-2PC-CRASH-TEST transaction comment to test this behavior; however, anything like note 126069.1, which starts with grant dba to scott, should be banned.

Apart from granting DBA to scott and using commit comment 'ORA-2PC-CRASH-TEST-7', I can still use my good (bad?) old shutdown abort.

SQL> insert into t values(1);
1 row created.
SQL> insert into t@db02 values(2);
1 row created.
SQL> -- shutdown abort on db02
SQL> commit;
ERROR at line 1:
ORA-02054: transaction 2.7.4509 in-doubt
ORA-03150: end-of-file on communication channel for database link
ORA-02063: preceding line from DB02
SQL> select LOCAL_TRAN_ID, STATE from dba_2pc_pending;

LOCAL_TRAN_ID          STATE
---------------------- ----------------
2.7.4509 prepared

Now you’ve got an issue. Not only is the state of the transaction unknown, but the in-doubt transaction may prevent further DMLs

SQL> update t set x=x+1;
update t set x=x+1
ERROR at line 1:
ORA-01591: lock held by in-doubt distributed transaction 2.7.4509

You need to decide whether to commit or rollback the transaction. Let’s say I want to rollback. I need to have FORCE TRANSACTION privilege

SQL> rollback force '2.7.4509';
Rollback complete.
SQL> select LOCAL_TRAN_ID, STATE from dba_2pc_pending;

LOCAL_TRAN_ID          STATE
---------------------- ----------------
2.7.4509 forced rollback
SQL> update t set x=x+1;
0 rows updated.
PL/SQL procedure successfully completed.
SQL> select LOCAL_TRAN_ID, STATE from dba_2pc_pending;
no rows selected

The lock disappears; dbms_transaction.purge_lost_db_entry can also clean up old entries.

Clob data type error out when crosses the varchar2 limit

Tom Kyte - Fri, 2018-09-21 04:26
Clob datatype in PL/SQL program going to exception when it crosses the varchar2 limit and giving the "Error:ORA-06502: PL/SQL: numeric or value error" , Why Clob datatype is behaving like varchar2 datatype. I think clob can hold upto 4 GB of data. Pl...
Categories: DBA Blogs

Migrating Oracle 10g on Solaris Sparc to Linux RHEL 5 VM

Tom Kyte - Fri, 2018-09-21 04:26
Hi, if i will rate my oracle expertise i would give it 3/10. i just started learning oracle, solaris and linux 2months ago and was given this task to migrate. yes our oracle version is quite old and might not be supported anymore. Both platforms ...
Categories: DBA Blogs

"secure" in securefile

Tom Kyte - Fri, 2018-09-21 04:26
Good Afternoon, My question is a simple one. I've wondered why Oracle decided to give the new data type the name "securefile". Is it because we can encrypt it while before with basicfile, we couldn't encrypt the LOB? Also, why not call it "se...
Categories: DBA Blogs

Pre-allocating table columns for fast customer demands

Tom Kyte - Fri, 2018-09-21 04:26
Hello team, I have come across a strange business requirement that has caused an application team I support to submit a design that is pretty bad. The problem is I have difficulty quantifying this, so I'm going you can help me all the reasons why ...
Categories: DBA Blogs

move system datafiles

Tom Kyte - Fri, 2018-09-21 04:26
Hi Tom, When we install oracle and create the database by default (not manually) ...the system datafiles are located at a specific location .. Is is possible to move these (system tablespace datafiles) datafiles from the original location to...
Categories: DBA Blogs

how does SKIPEMPTYTRANS work?

Tom Kyte - Fri, 2018-09-21 04:26
I am wondering how does SKIPEMPTYTRANS work? when does ogg judge a transaction empty or not? if it does the judgement in the middle transction? how does ogg know it's a empty transaction? provided that it did not update mapped tables before the jud...
Categories: DBA Blogs

Upgrade Oracle Internet Directory from 11G ( to 12C (

Yann Neuhaus - Fri, 2018-09-21 00:53

There is no in-place upgrade for OID to OID 12c. The steps to follow are:

  1. Install the required JDK version
  2. Install the Fusion Middleware Infrastructure 12c (
  3. Install the OID 12C ( in the Fusion Middleware Infrastructure Home
  4. Upgrade the existing OID database schemas
  5. Reconfigure the OID WebLogic Domain
  6. Upgrade the OID WebLogic Domain

1. Install JDK 1.8.131+

I have used the JDK 1.8_161

cd /u00/app/oracle/product/Java
tar xvf ~/software/jdk1.8.0_161

Set JAVA_HOME and add $JAVA_HOME/bin to the PATH.
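For example, assuming the JDK was extracted to the directory used above, the environment could be set like this (a sketch; adjust the paths to your installation):

```shell
# Illustrative profile fragment; the path matches the JDK extracted above
export JAVA_HOME=/u00/app/oracle/product/Java/jdk1.8.0_161
export PATH=$JAVA_HOME/bin:$PATH
```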

2. Install Fusion Middleware Infrastructure  software

I will not go into the details as this is a simple Fusion Middleware Infrastructure software installation.
This software contains WebLogic; there is no need to install separate WebLogic software.

I used MW_HOME set to /u00/app/oracle/product/oid12c

java -jar ~/software/fmw_12.2.1.3_infrastructure.jar

3. Install OID 12C software

This part is just a software installation; you just need to follow the steps in the installation wizard.

cd ~/software/

4. Check the existing schemas:

In SQLPLUS connected as SYS run the following query


The results:

MRC_NAME       COMP_ID              OWNER                          VERSION      STATUS    UPGRADED
-------------- -------------------- ------------------------------ ------------ --------- --------
DEFAULT_PREFIX    OID            ODS            VALID      N
IAM               IAU            IAM_IAU        VALID      N
IAM               MDS            IAM_MDS        VALID      N
IAM               OAM            IAM_OAM        VALID      N
IAM               OMSM           IAM_OMSM       VALID      N
IAM               OPSS           IAM_OPSS       VALID      N
OUD               IAU            OUD_IAU        VALID      N
OUD               MDS            OUD_MDS        VALID      N
OUD               OPSS           OUD_OPSS       VALID      N

9 rows selected.

I have an OID and an IAM using the same database as repository.

5. ODS Schema upgrade:

Take care to upgrade only the ODS schema and not the IAM schemas, or the Internet Access Manager will not work any more.
Associated with OID, only the ODS schema was installed; the ODS upgrade requires creating new schemas.

cd /u00/app/oracle/product/oid12c/oracle_common/upgrade/bin/

Oracle Fusion Middleware Upgrade Assistant
Log file is located at: /u00/app/oracle/product/oid12c/oracle_common/upgrade/logs/ua2018-01-26-11-13-37AM.log
Reading installer inventory, this will take a few moments...
...completed reading installer inventory.

In the following, I provide the most important screen shots for the “ODS schema upgrade”

ODS schema upgrade 1

ODS schema upgrade 2
Checked the schema validity:

ODS schema upgrade 3

ODS schema upgrade 4

ODS schema upgrade 5

ODS schema upgrade 6

ODS schema upgrade 7

ODS schema upgrade 8

In SQLPLUS connected as SYS run the following query


MRC_NAME       COMP_ID            OWNER               VERSION    STATUS      UPGRADED
-------------- ---------------- -------------------------------- ------------ --------- --------
DEFAULT_PREFIX OID                ODS            VALID      Y
IAM               IAU                IAM_IAU        VALID      N
IAM               MDS                IAM_MDS        VALID      N
IAM               OAM                IAM_OAM        VALID      N
IAM               OMSM               IAM_OMSM       VALID      N
IAM               OPSS               IAM_OPSS       VALID      N
OID12C           IAU                OID12C_IAU     VALID      N
OID12C           IAU_APPEND        OID12C_IAU_APPEND    VALID      N
OID12C           IAU_VIEWER        OID12C_IAU_VIEWER    VALID      N
OID12C           OPSS               OID12C_OPSS    VALID      N
OID12C           STB                OID12C_STB     VALID      N
OID12C           WLS                OID12C_WLS     VALID      N
OUD               IAU                OUD_IAU        VALID      N
OUD               MDS                OUD_MDS        VALID      N
OUD               OPSS               OUD_OPSS       VALID      N

15 rows selected.

I named the new OID repository schemas OID12C during the ODS upgrade.

6. Reconfigure the domain

cd /u00/app/oracle/product/oid12c/oracle_common/common/bin/
./reconfig.sh -log=/tmp/reconfig.log -log_priority=ALL

See screen shots “Reconfigure Domain”
Reconfigure Domain 1
Reconfigure Domain 2
Reconfigure Domain 3
Reconfigure Domain 4
Reconfigure Domain 5
Reconfigure Domain 6
Reconfigure Domain 7
Reconfigure Domain 8
Reconfigure Domain 9
Reconfigure Domain 10
Reconfigure Domain 11
Reconfigure Domain 12
Reconfigure Domain 13
Reconfigure Domain 14
Reconfigure Domain 15
Reconfigure Domain 16
Reconfigure Domain 17
Reconfigure Domain 18
Reconfigure Domain 19
Reconfigure Domain 20
Reconfigure Domain 21
Reconfigure Domain 22
Reconfigure Domain 23
Reconfigure Domain 24
Reconfigure Domain 25

7. Upgrading Domain Component Configurations

cd ../../upgrade/bin/

Oracle Fusion Middleware Upgrade Assistant
Log file is located at: /u00/app/oracle/product/oid12c/oracle_common/upgrade/logs/ua2018-01-26-12-18-12PM.log
Reading installer inventory, this will take a few moments…

The following are screenshots of the WebLogic Domain configuration upgrade.

[Screenshots: Upgrade Domain Component Configuration, steps 1 to 7]

8. Start the domain

For this first start I will use the normal start scripts installed when upgrading the domain, each in a separate PuTTY session, so I can see the traces.

Putty Session 1:

cd /u01/app/OID/user_projects/domains/IDMDomain/bin
# Start the Admin Server in the first putty

Putty Session 2:

cd /u01/app/OID/user_projects/domains/IDMDomain/bin
# In another shell session, start the Node Manager:

Putty Session 3:

cd /u01/app/OID/user_projects/domains/IDMDomain/bin
./startComponent.sh oid1

Starting system Component oid1 ...

Initializing WebLogic Scripting Tool (WLST) ...

Welcome to WebLogic Server Administration Scripting Shell

Type help() for help on available commands

Reading domain from /u01/app/OID/user_projects/domains/IDMDomain

Please enter Node Manager password:
Connecting to Node Manager ...
<Jan 26, 2018 1:02:08 PM CET> <Info> <Security> <BEA-090905> <Disabling the CryptoJ JCE Provider self-integrity check for better startup performance. To enable this check, specify -Dweblogic.security.allowCryptoJDefaultJCEVerification=true.>
<Jan 26, 2018 1:02:08 PM CET> <Info> <Security> <BEA-090906> <Changing the default Random Number Generator in RSA CryptoJ from ECDRBG128 to HMACDRBG. To disable this change, specify -Dweblogic.security.allowCryptoJDefaultPRNG=true.>
<Jan 26, 2018 1:02:08 PM CET> <Info> <Security> <BEA-090909> <Using the configured custom SSL Hostname Verifier implementation: weblogic.security.utils.SSLWLSHostnameVerifier$NullHostnameVerifier.>
Successfully Connected to Node Manager.
Starting server oid1 ...
Successfully started server oid1 ...
Successfully disconnected from Node Manager.

Exiting WebLogic Scripting Tool.


The ODSM application is now deployed in the WebLogic Administration Server, and the WLS_ODS1 WebLogic Server from the previous OID 11C administration domain is no longer used.


7002 is the Administration Server port for this domain.


The article Upgrade Oracle Internet Directory from 11G ( to 12C ( appeared first on the dbi services blog.

Don’t Drop Your Career Using Drop Database

Michael Dinh - Thu, 2018-09-20 22:12

I first learned about drop database in 2007.

Environment contains standby database oltpdr.
Duplicating standby database olapdr on the same host, using oltpdr as the source, failed during the restore phase.
Clean up data files from failed olapdr duplication.

Check database olapdr.
olap1> show parameter db%name

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
db_name                              string      oltp
db_unique_name                       string      olapdr

olap1> select count(*) from gv$session;


Elapsed: 00:00:00.00
olap1> select open_mode from v$database;


Elapsed: 00:00:00.03
olap1> startup force mount restrict exclusive;
ORACLE instance started.

Total System Global Area 2.5770E+10 bytes
Fixed Size                  6870952 bytes
Variable Size            5625976920 bytes
Database Buffers         1.9998E+10 bytes
Redo Buffers              138514432 bytes
Database mounted.

olap1> show parameter db%name

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
db_name                              string      oltp
db_unique_name                       string      olapdr

olap1> select count(*) from gv$session;


Elapsed: 00:00:00.01
olap1> select open_mode from v$database;


Elapsed: 00:00:00.04
At this point I was ready to run drop database, but somehow an angel was watching over me and I decided to check v$datafile first.
olap1> select name from v$datafile where rownum < 10;


9 rows selected.

Elapsed: 00:00:00.01

olap1> exit
Strange: the data files are the same for the source and the target.
oltp1> select open_mode from v$database;


Elapsed: 00:00:00.07
oltp1> select name from v$datafile where rownum < 10;


9 rows selected.

Elapsed: 00:00:00.01
oltp1> exit
Check data files from ASM.

ASMCMD> exit
Shutdown olapdr.
olap1> show parameter db%name

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
db_name                              string      oltp
db_unique_name                       string      olapdr

olap1> select open_mode from v$database;


Elapsed: 00:00:00.03
olap1> shut abort;
ORACLE instance shut down.
olap1> exit
Manually remove data files from ASM.
$ asmcmd lsof -G +DATA|grep -ic OLAPDR
$ asmcmd ls +DATA/OLAPDR/DATAFILE|wc -l
$ asmcmd lsof -G +DATA/OLAPDR/DATAFILE|wc -l
$ asmcmd
ASMCMD> cd datac1
ASMCMD> cd olapdr
ASMCMD> cd datafile
ASMCMD> rm *
You may delete multiple files and/or directories.
Are you sure? (y/n) y

What would have happened if drop database was executed?
Does anyone know for sure?
Would you have executed drop database?
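The near miss above can be generalized into a simple pre-drop sanity check: before dropping a database, confirm that its datafiles actually live under its own db_unique_name path. A minimal sketch (the file names below are hypothetical, not the actual environment):

```python
def foreign_datafiles(db_unique_name, datafile_paths):
    """Return the datafiles that do NOT live under the given db_unique_name.

    A non-empty result is a red flag: dropping this database could delete
    files that belong to another database (e.g. the duplication source).
    """
    marker = "/" + db_unique_name.upper() + "/"
    return [p for p in datafile_paths if marker not in p.upper()]

# Hypothetical v$datafile names for the broken olapdr instance:
files = [
    "+DATA/oltp/datafile/system.260.123456789",
    "+DATA/oltp/datafile/sysaux.261.123456789",
]
suspect = foreign_datafiles("olapdr", files)
print(len(suspect))  # every file belongs to oltp, so do NOT drop
```

In the scenario above, every datafile reported by the olapdr instance sat under the oltp path, so the check would have flagged all of them.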

Differences Between Validate Preview [Summary]

Michael Dinh - Thu, 2018-09-20 19:44

Adding summary is equivalent to the difference between list backup of database summary and list backup of database.

RMAN> restore database validate preview summary from tag=stby_dup;

Starting restore at 20-SEP-2018 21:19:48
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=33 instance=hawk1 device type=DISK

List of Backups
Key     TY LV S Device Type Completion Time      #Pieces #Copies Compressed Tag
------- -- -- - ----------- -------------------- ------- ------- ---------- ---
119     B  0  A DISK        18-SEP-2018 13:56:33 1       1       NO         STBY_DUP
using channel ORA_DISK_1

RMAN> restore database validate preview from tag=stby_dup;

Starting restore at 20-SEP-2018 21:18:44
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=33 instance=hawk1 device type=DISK

List of Backup Sets

BS Key  Type LV Size       Device Type Elapsed Time Completion Time
------- ---- -- ---------- ----------- ------------ --------------------
119     Incr 0  1.34G      DISK        00:00:15     18-SEP-2018 13:56:33
        BP Key: 121   Status: AVAILABLE  Compressed: NO  Tag: STBY_DUP
        Piece Name: /tmp/HAWK_djtde1c3_1_1.bkp
  List of Datafiles in backup set 119
  File LV Type Ckp SCN    Ckp Time             Name
  ---- -- ---- ---------- -------------------- ----
  1    0  Incr 6038608    18-SEP-2018 13:56:19 +DATA/hawkb/datafile/system.306.984318067
  2    0  Incr 6038608    18-SEP-2018 13:56:19 +DATA/hawkb/datafile/sysaux.307.984318067
  3    0  Incr 6038608    18-SEP-2018 13:56:19 +DATA/hawkb/datafile/undotbs1.309.984318093
  4    0  Incr 6038608    18-SEP-2018 13:56:19 +DATA/hawkb/datafile/users.310.984318093
  5    0  Incr 6038608    18-SEP-2018 13:56:19 +DATA/hawkb/datafile/undotbs2.311.984318095
  6    0  Incr 6038608    18-SEP-2018 13:56:19 +DATA/hawkb/datafile/undotbs3.312.984318095
using channel ORA_DISK_1

The remaining output is the same for both commands.
RMAN> restore database validate preview summary from tag=stby_dup;
RMAN> restore database validate preview from tag=stby_dup;

List of Archived Log Copies for database with db_unique_name HAWKB

Key     Thrd Seq     S Low Time
------- ---- ------- - --------------------
849     1    506     A 18-SEP-2018 13:55:08
        Name: +FRA/hawkb/archivelog/2018_09_18/thread_1_seq_506.551.987170199

852     1    507     A 18-SEP-2018 13:56:39
        Name: +FRA/hawkb/archivelog/2018_09_18/thread_1_seq_507.552.987199227

856     1    508     A 18-SEP-2018 22:00:26
        Name: +FRA/hawkb/archivelog/2018_09_19/thread_1_seq_508.554.987220639

860     1    509     A 19-SEP-2018 03:57:18
        Name: +FRA/hawkb/archivelog/2018_09_19/thread_1_seq_509.556.987258729

862     1    510     A 19-SEP-2018 14:32:07
        Name: +FRA/hawkb/archivelog/2018_09_19/thread_1_seq_510.557.987285627

864     1    511     A 19-SEP-2018 22:00:27
        Name: +FRA/hawkb/archivelog/2018_09_19/thread_1_seq_511.558.987287235

868     1    512     A 19-SEP-2018 22:27:15
        Name: +FRA/hawkb/archivelog/2018_09_20/thread_1_seq_512.560.987325879

872     1    513     A 20-SEP-2018 09:11:18
        Name: +FRA/hawkb/archivelog/2018_09_20/thread_1_seq_513.562.987364831

847     2    173     A 18-SEP-2018 13:55:08
        Name: +FRA/hawkb/archivelog/2018_09_18/thread_2_seq_173.550.987170199

854     2    174     A 18-SEP-2018 13:56:38
        Name: +FRA/hawkb/archivelog/2018_09_19/thread_2_seq_174.553.987210305

858     2    175     A 19-SEP-2018 01:05:05
        Name: +FRA/hawkb/archivelog/2018_09_19/thread_2_seq_175.555.987253211

866     2    176     A 19-SEP-2018 13:00:10
        Name: +FRA/hawkb/archivelog/2018_09_19/thread_2_seq_176.559.987287239

870     2    177     A 19-SEP-2018 22:27:18
        Name: +FRA/hawkb/archivelog/2018_09_20/thread_2_seq_177.561.987328815

Media recovery start SCN is 6038608
Recovery must be done beyond SCN 6038608 to clear datafile fuzziness

channel ORA_DISK_1: starting validation of datafile backup set
channel ORA_DISK_1: reading from backup piece /tmp/HAWK_djtde1c3_1_1.bkp
channel ORA_DISK_1: piece handle=/tmp/HAWK_djtde1c3_1_1.bkp tag=STBY_DUP
channel ORA_DISK_1: restored backup piece 1
channel ORA_DISK_1: validation complete, elapsed time: 00:00:08
using channel ORA_DISK_1

channel ORA_DISK_1: scanning archived log +FRA/hawkb/archivelog/2018_09_18/thread_1_seq_506.551.987170199
channel ORA_DISK_1: scanning archived log +FRA/hawkb/archivelog/2018_09_18/thread_1_seq_507.552.987199227
channel ORA_DISK_1: scanning archived log +FRA/hawkb/archivelog/2018_09_19/thread_1_seq_508.554.987220639
channel ORA_DISK_1: scanning archived log +FRA/hawkb/archivelog/2018_09_19/thread_1_seq_509.556.987258729
channel ORA_DISK_1: scanning archived log +FRA/hawkb/archivelog/2018_09_19/thread_1_seq_510.557.987285627
channel ORA_DISK_1: scanning archived log +FRA/hawkb/archivelog/2018_09_19/thread_1_seq_511.558.987287235
channel ORA_DISK_1: scanning archived log +FRA/hawkb/archivelog/2018_09_20/thread_1_seq_512.560.987325879
channel ORA_DISK_1: scanning archived log +FRA/hawkb/archivelog/2018_09_20/thread_1_seq_513.562.987364831
channel ORA_DISK_1: scanning archived log +FRA/hawkb/archivelog/2018_09_18/thread_2_seq_173.550.987170199
channel ORA_DISK_1: scanning archived log +FRA/hawkb/archivelog/2018_09_19/thread_2_seq_174.553.987210305
channel ORA_DISK_1: scanning archived log +FRA/hawkb/archivelog/2018_09_19/thread_2_seq_175.555.987253211
channel ORA_DISK_1: scanning archived log +FRA/hawkb/archivelog/2018_09_19/thread_2_seq_176.559.987287239
channel ORA_DISK_1: scanning archived log +FRA/hawkb/archivelog/2018_09_20/thread_2_seq_177.561.987328815
Finished restore at 20-SEP-2018 21:20:11


New File Adapter - Native File Storage

Anthony Shorten - Thu, 2018-09-20 17:59

In Oracle Utilities Application Framework V4., a new File Adapter has been introduced to parameterize file locations across environments. In previous releases, environment variables or hard-coded paths were used to specify the locations of files.

With the introduction of the Oracle Utilities Cloud SaaS Services, the locations of files are standardized and, to reduce maintenance costs, these paths are now parameterized using an Extendable Lookup (F1-FileStorage) that defines the path alias and the physical location. The on-premise version of Oracle Utilities Application Framework V4. supports local storage (including network storage) through this facility. The Oracle Utilities Cloud SaaS version supports both local (predefined) storage and Oracle Object Storage Cloud.

For example:

Example Lookup

To use the alias in a FILE-PATH batch parameter (for example), reference it as a URL:

file-storage://MYFILES/mydirectory  (if you want to specify a subdirectory under the alias)
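For illustration, a hypothetical alias might be set up as follows (the MYFILES alias and paths are invented, and the field names are abbreviated; the actual F1-FileStorage lookup form may differ):

```
F1-FileStorage value:  MYFILES
  Adapter:             Native File Storage
  File Path:           /u01/app/product/files        (environment-specific)

Batch Control parameter, before:  FILE-PATH = /u01/app/product/files/mydirectory
Batch Control parameter, after:   FILE-PATH = file-storage://MYFILES/mydirectory
```

When the implementation moves to another environment, only the File Path in the lookup value changes; every batch control referencing file-storage://MYFILES keeps working unchanged.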



Now, if you migrate to another environment (the lookup is migrated using Configuration Migration Assistant), this record can be altered. If you are moving to the Cloud, the adapter can be changed to Oracle Object Storage Cloud. This reduces the need to change each individual place that uses the alias.

It is recommended to take advantage of this capability:

  • Create an alias for each location you read files from or write files to in your Batch Controls. Define it using the Native File Storage adapter. Try to create as few aliases as possible to reduce maintenance costs.
  • Change all the FILE-PATH parameters in your batch controls to use the relevant file-storage URL.

If you decide to migrate to the Oracle Utilities SaaS Cloud, these Extendable Lookup values will be the only thing that changes to realign the implementation with the relevant location on the Cloud instance. For both on-premise and cloud implementations, these definitions can now be migrated using Configuration Migration Assistant.

Oracle 12.2 : Windows Virtual Account

Yann Neuhaus - Thu, 2018-09-20 09:51

With Oracle 12.2 we can use a Virtual Account during the Oracle installation on Windows. Virtual Accounts allow you to install an Oracle Database, and to create and manage database services, without passwords. A Virtual Account can be used as the Oracle Home User for Oracle Database single-instance installations and does not require a user name or password during installation and administration.
In this blog I want to share an experience I had with Windows Virtual Accounts when installing Oracle.
I was setting up an Oracle environment on Windows Server 2016 for a client. During the installation I decided to use the Virtual Account option.
After the installation of Oracle, I created a database, PROD, and everything was fine.

SQL*Plus: Release Production on Wed Sep 19 05:43:05 2018

Copyright (c) 1982, 2016, Oracle.  All rights reserved.

Connected to:
Oracle Database 12c Standard Edition Release - 64bit Production

SQL> select name,open_mode from v$database;

--------- --------------------


SQL> show parameter spfile

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
spfile                               string      C:\APP\ORACLE\PRODUCT\12.2.0\D

Looking into the properties of my spfile, I can see that there is a Windows group named ORA_OraDB12Home1_SVCACCTS which has full control of the spfile. Indeed, as we used the Virtual Account to install the Oracle software, Oracle automatically creates this group and uses it for certain tasks.
After the first database, the client asked for a second one. Using DBCA I created a second database, let's say ORCL.
After the creation of ORCL, I changed some configuration parameters of the first database, PROD, and decided to restart it. Then I was surprised by the following error.

SQL> shut immediate
Database closed.
Database dismounted.
ORACLE instance shut down.
SQL> startup
ORA-01078: failure in processing system parameters
LRM-00109: could not open parameter file 'C:\APP\ORACLE\PRODUCT\12.2.0\DBHOME_1\DATABASE\INITPROD.ORA'

Wow! What happened is that when DBCA created the second database ORCL, Oracle changed the properties of the spfile of the first database PROD (spfilePROD.ora). Yes, it's strange, but this is exactly what happened: the Virtual Group was replaced by OracleServiceORCL.

On the other side, the ORCL spfile was fine.

So I decided to remove OracleServiceORCL from the properties of the PROD spfile and to add back the Virtual Group.

Then I was able to start the PROD database:

SQL> startup
ORACLE instance started.

Total System Global Area  524288000 bytes
Fixed Size                  8748760 bytes
Variable Size             293601576 bytes
Database Buffers          213909504 bytes
Redo Buffers                8028160 bytes
Database mounted.
Database opened.

But this issue means that every time I create a new database with DBCA, the properties of the spfiles of the other databases may be changed, and this is not normal.
While checking this strange issue I found the following Oracle Support note:
DBCA Using Virtual Account Incorrectly Sets The SPFILE Owner (Doc ID 2410452.1)

So I decided to apply the patches recommended by Oracle.
Oracle Database

C:\Users\Administrator>c:\app\oracle\product\12.2.0\dbhome_1\OPatch\opatch lspatches

Then I created a new database, TEST, to see whether the patches had corrected the issue.
Well, I was able to restart all databases without any errors. But looking into the properties of the spfiles of the three databases, we can see that the patch added back the Virtual Group, yet the service of the last database created is still present for the previous databases. I don't really understand why OracleServiceTest should be present in spfilePROD.ora and spfileORCL.ora.




Conclusion: in this blog I shared an issue I experienced with the Windows Virtual Account. I hope this will help.


The article Oracle 12.2 : Windows Virtual Account appeared first on the dbi services blog.

Oracle Recognized as a Leader in Gartner Magic Quadrant for Cloud HCM Suites for Midmarket and Large Enterprises

Oracle Press Releases - Thu, 2018-09-20 07:00
Press Release
Oracle Recognized as a Leader in Gartner Magic Quadrant for Cloud HCM Suites for Midmarket and Large Enterprises
Oracle Placed Furthest for Completeness of Vision within the entire Gartner Magic Quadrant

REDWOOD SHORES, Calif. —Sep 20, 2018

Oracle today announced that it has been recognized, for the third consecutive year, as a Leader in Cloud HCM Suites for Midmarket and Large Enterprises by Gartner. The 2018 Gartner Magic Quadrant for Cloud HCM Suites for Midmarket and Large Enterprises evaluates vendors based on completeness of vision and ability to execute. It positioned Oracle furthest for completeness of vision for Cloud HCM Suites. A complimentary copy of the report is available here.

“Our strong investment in a simple, powerful HCM system, and innovation in artificial intelligence and digital assistants, will forever change the experience of working with HCM systems,” said Chris Leone, senior vice president of development, Oracle HCM Cloud. “We are very pleased to be recognized by Gartner and believe our position as a Leader in this year’s report further validates our relentless commitment to helping customers gain a competitive advantage while adapting to the ever-accelerating pace of technological change.”

According to Gartner, “Leaders demonstrate a market-defining vision of how HCM technology can help HR leaders achieve business objectives. Leaders have the ability to execute against that vision through products and services, and have demonstrated solid business results in the form of revenue and earnings. In the cloud HCM suite market, Leaders show a consistent ability to win deals, including the foundational elements of admin HR (with a large number of country-specific HR localizations) and high attach rates of Talent Management, Workforce Management and HRSD capabilities. They have multiple proof points of successful implementations. Further, these customers have workforces deployed in more than one of the main geographic regions (North America, Europe, MENA, Latin America and Asia/Pacific), in a wide variety of vertical industries and sizes of organization (by number of employees). Leaders are often what other providers in the market measure themselves against.”

Part of Oracle Cloud Applications, Oracle HCM Cloud enables HR professionals to simplify the complex in order to meet the increasing expectations of an ever-changing workforce and business environment. By providing a complete and powerful platform that spans the entire employee life cycle, Oracle HCM Cloud helps HR professionals deliver superior employee experience, align people strategy to evolving business priorities, and cultivate a culture of continuous innovation.

For additional information on Oracle HCM Cloud visit: https://cloud.oracle.com/en_US/hcm-cloud.

Gartner, Magic Quadrant for Cloud HCM Suites for Midmarket and Large Enterprises, Melanie Lougee, Ranadip Chandra, et al., 15 August 2018.

Contact Info
Simon Jones
Oracle PR
Gartner Disclaimer

Gartner does not endorse any vendor, product or service depicted in its research publications, and does not advise technology users to select only those vendors with the highest ratings or other designation. Gartner research publications consist of the opinions of Gartner’s research organization and should not be construed as statements of fact. Gartner disclaims all warranties, expressed or implied, with respect to this research, including any warranties of merchantability or fitness for a particular purpose.

About Oracle

The Oracle Cloud offers complete SaaS application suites for ERP, HCM and CX, plus best-in-class database Platform as a Service (PaaS) and Infrastructure as a Service (IaaS) from data centers throughout the Americas, Europe and Asia. For more information about Oracle (NYSE:ORCL), please visit us at oracle.com.


Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

Talk to a Press Contact

Simon Jones

  • 415-202-4574

Object Erasure capability introduced in

Anthony Shorten - Wed, 2018-09-19 17:45

With data privacy regulations around the world being strengthened, data management principles need to be extended to most objects in the product. In the past, Information Lifecycle Management (ILM) was introduced for transaction object management, and it continues to be used today in implementations for effective data management. When designing the ILM capability, it did not make sense to extend it to master data such as Accounts, Persons, Premises, Meters, Assets, Crews, etc., as data management and privacy rules tend to be different for these types of objects.

In Oracle Utilities Application Framework V4., we have introduced Object Erasure to support master data, covering purging as well as obfuscation of data. This new capability is complementary to Information Lifecycle Management and offers full data management capability. It does not replace Information Lifecycle Management, nor does it depend on Information Lifecycle Management being licensed. Customers using Information Lifecycle Management in conjunction with Object Erasure can implement full end-to-end data management.

The idea behind Object Erasure is as follows:

  • Any algorithm can call the Manage Erasure algorithm on the associated Maintenance Object to check whether the object is eligible for erasure. This gives implementations the flexibility to initiate the process from a wide range of triggers; the criteria can be as simple as checking a few key fields or some key data on the object (you decide). The Manage Erasure algorithm detects the conditions, collates the relevant information and calls the F1-ManageErasureSchedule Business Service to create an Erasure Schedule Business Object in a Pending state to initiate the process. A set of generic Erasure Schedule Business Objects is provided (for example, a generic Purge object for use in purging data) and you can create your own to record additional information.
  • The Erasure Schedule BO has three states, which can be configured with algorithms (usually Enter algorithms; a set is provided with the product for reuse):
    • Pending - This is the initial state of the erasure
    • Erased - This is the most common final state indicating the object has been erased or been obfuscated.
    • Discarded - This is an alternative final state where the record can be parked (for example, if the object is no longer eligible, an error occurred during erasure, or reversal of the obfuscation is required).
  • A new Erasure Monitor (F1-OESMN) Batch Control can be used to transition the Erasure Schedule through its states and perform the erasure or obfuscation activity.
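The schedule lifecycle described above can be sketched as a tiny state machine. This is purely illustrative: the real framework drives these transitions through Business Object states and Enter algorithms, and the names below are simplified assumptions, not the product's API.

```python
from dataclasses import dataclass
from datetime import date, timedelta

@dataclass
class ErasureSchedule:
    """Simplified stand-in for an Erasure Schedule Business Object."""
    object_id: str
    created: date
    erasure_days: int      # waiting period before the monitor acts
    state: str = "Pending"

    def due(self, today: date) -> bool:
        return today >= self.created + timedelta(days=self.erasure_days)

def run_monitor(schedules, today, erase, business_rules_permit):
    """Sketch of what the Erasure Monitor batch (F1-OESMN) conceptually does."""
    for s in schedules:
        if s.state != "Pending" or not s.due(today):
            continue                   # not yet past the waiting period
        if business_rules_permit(s):
            erase(s)                   # purge or obfuscate the master data
            s.state = "Erased"
        else:
            s.state = "Discarded"      # park it: rules no longer allow erasure

schedules = [
    ErasureSchedule("PERSON-1", date(2018, 9, 1), erasure_days=30),
    ErasureSchedule("PERSON-2", date(2018, 9, 18), erasure_days=30),
]
run_monitor(schedules, date(2018, 10, 5), erase=lambda s: None,
            business_rules_permit=lambda s: True)
print([s.state for s in schedules])  # PERSON-1 is due; PERSON-2 still waits
```

The Erasure Days waiting period mentioned below maps to the erasure_days field in this sketch.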

Here is a summary of this processing:

Erasure Flow

Note: The base-supplied Purge Enter algorithm (F1-OBJERSPRG) covers most requirements. It deliberately does not remove the object from the _K key tables, to avoid conflicts when identifiers are reallocated.

The solution includes a portal that links all the elements together, and the product comes with a set of predefined objects ready to use. The portal also allows an implementer to configure Erasure Days, which is the number of days a record remains in the Erasure Schedule before being considered by the Erasure Monitor (basically a waiting period).

Erasure Configuration

As an implementer, you can build just the Manage Erasure algorithm to detect the business event, or you can also write the algorithms that perform all of the processing (and every variation in between). Erasure respects any business rules configured for the Maintenance Object, so the erasure or obfuscation will only occur if those rules permit it.

Customers using Information Lifecycle Management can manage the storage of Erasure Schedule objects using Information Lifecycle Management.

Objects Provided

The Object Erasure capability supplies a number of objects you can use for your implementation:

  • Set of Business Objects. A number of Erasure Schedule Business Objects such as F1-ErasureScheduleRoot (Base Object), F1-ErasureScheduleCommon (Generic Object for Purges) and F1-ErasureScheduleUser (for user record obfuscation). Each product may ship additional Business Objects.
  • Common Business Services. A number of Business Services including F1-ManageErasureSchedule to use within your Manage Erasure algorithm to create the necessary Erasure Schedule Object.
  • Set of Manage Erasure Algorithms. For each predefined Object Erasure object provided with the product, a set of Manage Erasure algorithms are supplied to be connected to the relevant Maintenance Object.
  • Erasure Monitor Batch Control. The F1-OESMN Batch Control provided to manage the Erasure Schedule Object state transition.
  • Enter Algorithms. A set of predefined Enter algorithms to use with the Erasure Schedule Object to perform common outcomes including Purge processing.
  • Erasure Portal. A portal to display and maintain the Object Erasure configuration.
Refer to the online documentation for further advice on Object Erasure.

In-Database Archiving

Tom Kyte - Wed, 2018-09-19 15:46
Hi, Currently I am using list partitioning based on a status column to classify the data as ACTIVE and EXPIRED, and then the corresponding partitions are exported and then dropped from Prod. The problem with this approach is the internal data m...
Categories: DBA Blogs

TDE Column vs TDE tablespace when to use

Tom Kyte - Wed, 2018-09-19 15:46
Hi, I have gone through the TDE column and TDE tablespace encryption. Most cases TDE tablespace option is found to be better compared to TDE column option. Wanted to know what advantage TDE column encryption gives or rather the use cases for TD...
Categories: DBA Blogs


Subscribe to Oracle FAQ aggregator