Feed aggregator

Checking if the current user is logged into Application Builder

Andrew Tulley - Thu, 2018-05-17 10:01

If you want conditional logic in your PL/SQL dependent on whether the current user is also logged into the APEX Application Builder, this can be achieved by reference to:

APP_BUILDER_SESSION

e.g. v('APP_BUILDER_SESSION')

 

This will have a value if the user is logged into APEX Application Builder, otherwise it will be null.
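
For example, a minimal PL/SQL sketch of such conditional logic (the logging call and the branches are just placeholders for your own code):

-- Hypothetical example: only run extra debug logic when the current user
-- also has an active APEX Application Builder session.
IF v('APP_BUILDER_SESSION') IS NOT NULL THEN
   apex_debug.info('Current user is also logged into the Application Builder');
ELSE
   NULL;  -- normal end-user session
END IF;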

Related documentation: https://docs.oracle.com/database/apex-18.1/HTMDB/understanding-substitution-strings.htm#GUID-62FE6E65-265A-4BE4-B04B-F90BDA317328

A node-oracledb Web Service in Docker

Christopher Jones - Thu, 2018-05-17 02:28

This post shows how to run a node-oracledb application in a Docker Container. For bonus points, the application connects to an Oracle Database running in a second container.

The steps are the 'show notes' from a recent talk at Oracle Code.

The demo app is a simple Express web service that accepts REST calls.

 

DOCKER

Oracle Docker images are available from https://store.docker.com/ and also mirrored on https://container-registry.oracle.com

If you're not familiar with Docker, it helps to know basic terminology:

  • Images: Collection of software to be run as a container. Images are immutable. Changes to an image require a new image build.

  • Registry: Place to store and download images.

  • Container: A lightweight standalone, executable piece of software that includes everything required to run it on a host. Containers are spun up from images. Containers are non-persistent. Once a container is deleted, all files inside that container are gone.

  • Docker engine: The software engine running containers.

  • Volumes: Place to persist data outside the container.

CONFIGURE A DOCKER HOST

For my host, I used Oracle Linux 7, which has the ol7_latest and ol7_uekr4  channels already enabled.

  • Install the Docker engine as the root user by running 'sudo su -', or prefix each command with 'sudo':

    # yum-config-manager --enable ol7_addons
    # yum install docker-engine
    # systemctl enable docker
    # systemctl start docker

DOWNLOAD INSTANT CLIENT AND DATABASE DOCKER IMAGES
  • Sign in to the container registry https://container-registry.oracle.com/ with your (free) Oracle "single sign-on" (SSO) credentials.

  • Accept the license on the container registry.

  • On your OL7 Docker host, log in to the registry. Remember to run Docker commands as 'root':

    # docker login container-registry.oracle.com

    This prompts for your Oracle SSO credentials.

  • Get the Oracle Database and Oracle Instant Client images:

    # docker pull container-registry.oracle.com/database/enterprise:12.2.0.1
    # docker pull container-registry.oracle.com/database/instantclient:12.2.0.1

    This can take a while. For testing, you may want to pull the smaller, 'slim' version of the database.

  • View the installed images with:

    # docker images
    REPOSITORY                                              TAG        IMAGE ID       CREATED        SIZE
    container-registry.oracle.com/database/enterprise      12.2.0.1   12a359cd0528   7 months ago   3.44GB
    container-registry.oracle.com/database/instantclient   12.2.0.1   fda46de41de3   7 months ago   407MB

CREATE A DATABASE CONTAINER FROM THE DATABASE IMAGE
  • Start the database container:

    # docker run -d --name demodb -P container-registry.oracle.com/database/enterprise:12.2.0.1

    The '-P' option maps the ports used, allowing access to the database from outside the container.

  • Check for its health and wait until it shows 'healthy'

    # docker ps
    CONTAINER ID   IMAGE                  COMMAND                  STATUS                 PORTS                          NAMES
    9596bc2345d3   [...]/database/[...]   "/bin/sh -c '/bin/..."   Up 3 hours (healthy)   ...->1521/tcp, ...->5500/tcp   demodb

  • Find the database container's IP address:

    # docker inspect -f "{{ .NetworkSettings.IPAddress }}" demodb

    You will use this IP in database connect strings in your applications.

  • You can stop and start the container as desired:

    # docker stop demodb
    # docker start demodb

    The data is persistent as long as the container exists. Use 'docker ps --all' to show all containers, running or not.

CREATE A NEW SCHEMA
  • Create a SQL file called createschema.sql:

    SET ECHO ON
    ALTER SESSION SET CONTAINER=orclpdb1;
    DROP USER scott CASCADE;
    CREATE USER scott IDENTIFIED BY tiger;
    GRANT CONNECT, RESOURCE TO scott;
    ALTER USER scott QUOTA UNLIMITED ON USERS;
    DROP TABLE scott.bananas;
    CREATE TABLE scott.bananas (shipment VARCHAR2(4000) CHECK (shipment IS JSON));
    INSERT INTO scott.bananas VALUES ('{ "farmer": "Gita", "ripeness": "All Green", "kilograms": 100 }');
    INSERT INTO scott.bananas VALUES ('{ "farmer": "Ravi", "ripeness": "Full Yellow", "kilograms": 90 }');
    INSERT INTO scott.bananas VALUES ('{ "farmer": "Mindy", "ripeness": "More Yellow than Green", "kilograms": 92 }');
    COMMIT;
    EXIT

    For this demo, you can see I used the JSON support introduced in Oracle Database 12.1.0.2 (a VARCHAR2 column with an IS JSON check constraint).

  • Execute createschema.sql in your favorite tool, such as SQL*Plus.

    In my case I actually ran SQL*Plus on my Docker host machine. Cheating a bit on giving details here, I had downloaded the Instant Client Basic and SQL*Plus packages and unzipped them as shown in the Instant Client instructions. I then set my shell to use the SQL*Plus binary:

    # export LD_LIBRARY_PATH=/home/cjones/instantclient_12_2
    # export PATH=/home/cjones/instantclient_12_2:$PATH

    Using the database IP address as shown earlier you can now run the script in SQL*Plus against the container database. In my environment the database IP was 172.17.0.2:

    # sqlplus -l sys/Oradoc_db1@172.17.0.2/orclpdb1.localdomain as sysdba @createschema.sql

    The database password and service name shown are the defaults in the image.
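
    For example, one way to sanity-check the new schema is to connect as SCOTT with the same connect string and query the table (an optional extra step, using the IP address from my environment):

    # sqlplus scott/tiger@172.17.0.2/orclpdb1.localdomain

    SQL> select shipment from bananas;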

CREATE A NODE.JS IMAGE

Let's add Node.js to the Instant Client image.

  • Create a sub-directory nodejs-scripts

    # mkdir nodejs-scripts
  • Create a new file 'nodejs-scripts/Dockerfile'. This is the 'recipe' for building a Docker image. Here Node.js is added to the Instant Client image to create a new image usable by any Node.js application. The Node.js 8 package for Oracle Linux is handy.

    The Dockerfile should contain:

    FROM container-registry.oracle.com/database/instantclient:12.2.0.1
    ADD ol7_developer_nodejs8.repo /etc/yum.repos.d/ol7_developer_nodejs8.repo
    RUN echo proxy=http://my-proxy.example.com:80 >> /etc/yum.conf
    RUN yum -y update && \
        rm -rf /var/cache/yum && \
        yum -y install nodejs

    The FROM line shows that we base our new image on the Instant Client image.

    If you are not behind a proxy, you can omit the proxy line. Or change the line to use your proxy.

    For quick testing, you may want to omit the 'yum -y update' command.

  • The Dockerfile ADD command copies 'ol7_developer_nodejs8.repo' from the host file system into the image's file system. Create 'nodejs-scripts/ol7_developer_nodejs8.repo' containing:

    [ol7_developer_nodejs8]
    name=Oracle Linux $releasever Node.js 8 Packages for Development and test ($basearch)
    baseurl=https://yum.oracle.com/repo/OracleLinux/OL7/developer_nodejs8/$basearch/
    gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-oracle
    gpgcheck=1
    enabled=1

  • Now the new image with Oracle Instant Client and Node.js 8 can be built using this Dockerfile:

    # docker build -t cjones/nodejs-image nodejs-scripts

    The 'cjones/nodejs-image' is the image name, not a directory path.

  • You can see the new image has been created:

    # docker images
    REPOSITORY                                              TAG        IMAGE ID       CREATED          SIZE
    cjones/nodejs-image                                     latest     e048b739bb63   29 minutes ago   1.51GB
    container-registry.oracle.com/database/enterprise      12.2.0.1   12a359cd0528   7 months ago     3.44GB
    container-registry.oracle.com/database/instantclient   12.2.0.1   fda46de41de3   7 months ago     407MB

CREATE A NODE.JS DEMO IMAGE

The new Node.js image is refined by installing our demo application. This creates another new image that we can later run whenever we want to use the application.

  • Create a sub-directory 'ws-demo-scripts':

    # mkdir ws-demo-scripts
  • Create a new file 'ws-demo-scripts/Dockerfile' containing:

    FROM cjones/nodejs-image
    ENV https_proxy=http://my-proxy.example.com:80
    WORKDIR workdir
    COPY package.json package.json
    COPY server.js server.js
    RUN npm install
    CMD ["npm", "start"]

    The first line shows the new image should be based on the Node.js image 'cjones/nodejs-image' created in the section above.

    Again, adjust the proxy line as needed by your network.

    You can see the Dockerfile copies two files from our host file system into the image. These files are shown below.

    When the image is created, the RUN command will install the Node.js dependencies from package.json.

    When a container starts, the CMD action is taken, which runs 'npm start', in turn invoking the application entry point defined in package.json. Looking below at the package.json content, you can see this means 'node server.js' is run.

  • Create a file 'ws-demo-scripts/package.json' containing:

    { "name": "banana-farmer", "version": "1.0.0", "description": "RESTful API using Node.js Express Oracle DB", "main": "server.js", "author": "Oracle", "license": "Apache", "dependencies": { "body-parser": "^1.18.2", "express": "^4.16.0", "oracledb": "^2.2.0" } }

    As you can see, this application installs the body-parser module, the node-oracledb module, and Express. This demo is an Express web service application. And yes, it is a Banana Farmer web service.

    The default run target of package.json is the application file 'server.js'.

  • Create the application file 'ws-demo-scripts/server.js' containing the contents from here.

    The demo application is just this one file.

  • Build the demo image:

    # docker build -t cjones/ws-demo ws-demo-scripts

    We now have our fourth image which contains our runnable application:

    # docker images
    REPOSITORY                                              TAG        IMAGE ID       CREATED          SIZE
    cjones/ws-demo                                          latest     31cbe6d2ea4e   21 seconds ago   1.51GB
    cjones/nodejs-image                                     latest     e048b739bb63   29 minutes ago   1.51GB
    container-registry.oracle.com/database/enterprise      12.2.0.1   12a359cd0528   7 months ago     3.44GB
    container-registry.oracle.com/database/instantclient   12.2.0.1   fda46de41de3   7 months ago     407MB

DEMO APPLICATION OVERVIEW

The Banana Farmer scenario is that shipments of bananas from farmers are recorded. They can have a farmer name, ripeness, and weight. Shipments can be inserted, queried, updated or deleted.

Let's look at a couple of snippets from ws-demo-scripts/server.js.

A connection helper creates a pool of database connections:

oracledb.createPool({
  user: process.env.NODE_ORACLEDB_USER,
  password: process.env.NODE_ORACLEDB_PASSWORD,
  connectString: process.env.NODE_ORACLEDB_CONNECTIONSTRING
}, . . .

The credentials are taken from environment variables. When we run the app container we will pass values for those environment variables into the container.

The application has Express routes for REST GET, POST, PUT and DELETE calls. The code to handle a GET request looks like:

// HTTP method: GET
// URI        : /bananas/FARMER
// Get the banana shipment for FARMER
app.get('/bananas/:FARMER', function (req, res) {
  doGetConnection(res, function(err, connection) {
    if (err)
      return;
    connection.execute(
      "SELECT b.shipment FROM bananas b WHERE b.shipment.farmer = :f",
      { f: req.params.FARMER },
      function (err, result) {
        if (err) {
          res.set('Content-Type', 'application/json');
          res.status(500).send(JSON.stringify({
            status: 500,
            message: "Error getting the farmer's profile",
            detailed_message: err.message
          }));
        } else if (result.rows.length < 1) {
          res.set('Content-Type', 'application/json');
          res.status(404).send(JSON.stringify({
            status: 404,
            message: "Farmer doesn't exist",
            detailed_message: ""
          }));
        } else {
          res.contentType('application/json');
          res.status(200).send(JSON.stringify(result.rows));
        }
        doRelease(connection, "GET /bananas/" + req.params.FARMER);
      });
  });
});

Express makes it easy. It routes a GET request for a URL like '/bananas/<name>', e.g. '/bananas/Gita', to this code. The handler simply binds the URL route parameter containing the farmer’s name into the SELECT statement. Binding is important for security and scalability, as you know. The SQL syntax used is the JSON 'dot' notation of Oracle Database 12.2, but it could be rewritten to work with 12.1.0.2.

The bulk of the code is error handling, looking after the cases where there was a processing error or no rows returned. It sends back HTTP status codes 500 or 404, respectively.

The success code path sends back the query output 'result.rows' as a JSON string, with the HTTP success status code 200.
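
As noted above, the JSON 'dot' notation in that SELECT needs Oracle Database 12.2. One way the same filter could be rewritten with JSON_VALUE so that it also works on 12.1.0.2 (a sketch, not code taken from the demo):

-- Hypothetical 12.1.0.2-compatible version of the query used in the GET route
SELECT b.shipment
FROM   bananas b
WHERE  JSON_VALUE(b.shipment, '$.farmer') = :f;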

START THE DEMO CONTAINER

Let's run the application.

  • Create a file 'ws-demo-scripts/envfile.list' with the credentials for the application. Use the IP address of your database container found with the 'docker inspect' command shown previously. In my environment, the database IP address was '172.17.0.2'

    NODE_ORACLEDB_USER=scott
    NODE_ORACLEDB_PASSWORD=tiger
    NODE_ORACLEDB_CONNECTIONSTRING=172.17.0.2/orclpdb1.localdomain

  • Start the Node.js web service container

    # docker run -d --name nodejs -P --env-file ws-demo-scripts/envfile.list cjones/ws-demo
STATUS CHECK
  • To recap what's happened, the Docker images are:

    # docker images
    REPOSITORY                                              TAG        IMAGE ID       CREATED          SIZE
    cjones/ws-demo                                          latest     25caede29b17   12 minutes ago   1.51GB
    cjones/nodejs-image                                     latest     138f2b76ffe7   13 minutes ago   1.51GB
    container-registry.oracle.com/database/enterprise      12.2.0.1   12a359cd0528   7 months ago     3.44GB
    container-registry.oracle.com/database/instantclient   12.2.0.1   fda46de41de3   7 months ago     407MB

    Two base images were downloaded. An image with Node.js was created from the Instant Client image. Finally, a fourth image 'cjones/ws-demo' with Node.js, Instant Client and the application code was created.

  • We have started database ('demodb') and application containers ('nodejs'):

    # docker ps
    CONTAINER ID   IMAGE                  COMMAND                  STATUS                 PORTS                          NAMES
    2924e1225290   cjones/ws-demo         "npm start"              Up 3 hours                                            nodejs
    9596bc2345d3   [...]/database/[...]   "/bin/sh -c '/bin/..."   Up 3 hours (healthy)   ...->1521/tcp, ...->5500/tcp   demodb

    We found the IP address of the database container, and knew (by reading the container registry documentation) the default credentials of the SYS user.

    We created a schema SCOTT on the database, with a table containing some JSON data.

    An application container was started, with the database application credentials and connection string specified in an environment file outside the container.

SUBMIT REST REQUESTS

Now we can call our application, and it will access the database.

  • Install the browser extension HttpRequester (in Firefox) or Postman (in Chrome).

  • Find the IP of the demo web service container:

    # docker inspect -f "{{ .NetworkSettings.IPAddress }}" nodejs

    In my environment, it was '172.17.0.3'. Use this with the port (3000) and various endpoints (e.g. '/bananas/<farmer>') defined in server.js for REST requests.

  • In the HttpRequester or Postman extensions you can make various REST calls.

    Get all shipments:

    GET http://172.17.0.3:3000/bananas

    Get one farmer's shipment(s):

    GET http://172.17.0.3:3000/bananas/Gita

    New data:

    POST http://172.17.0.3:3000/bananas

    { "farmer" : "CJ", "ripeness" : "Light Green", "kilograms" : 50 }

    Update data:

    PUT http://172.17.0.3:3000/bananas/CJ

    { "farmer" : "CJ", "ripeness" : "50% Green, 50% Yellow", "kilograms" : 45 }

    Here's a screenshot of HttpRequester in action doing a GET request to get all banana shipments. On the left, the red boxes show the URL for the '/bananas' endpoint was executed as a GET request. On the right, the response shows the success HTTP status code of 200 and the returned data from the request:

    Screenshot of HttpRequester
  • When you are finished with the containers you can stop them:

    # docker stop demodb
    # docker stop nodejs

    If you haven't tried Docker yet, now is the perfect time! Containers make deployment and development easy. Oracle's Docker images let you get started with Oracle products very quickly.

Why the XAI Staging is not in the OSB Adapters?

Anthony Shorten - Wed, 2018-05-16 19:52

With the replacement of the Multi-Purpose Listener (MPL) by the Oracle Service Bus (OSB) and the additional OSB Adapters for Oracle Utilities Application Framework based products, customers have asked about transaction staging support.

One of the most common questions I have received is why there is an absence of an OSB Adapter for the XAI Staging table. Let me explain the logic.

  • One Pass versus Two Passes. The MPL processed its integration by placing the payload from the integration into the XAI Staging table. The MPL would then process the payload in a second pass. The staging record would be marked as complete or error. The complete ones would need to be removed using the XAI Staging purge process, run separately. You then used the XAI Staging portals to correct the data coming in for the ones in error. On the other hand, the OSB Adapters treat the product as a "black box" (i.e., like a product) and directly call the relevant service (for inbound) or directly poll the relevant Outbound or NDS table for outbound processing records. This is a single-pass process rather than the multiple passes the MPL performed. OSB is far more efficient and scalable than the MPL because of this.
  • Error Hospital. The idea behind the XAI Staging is that error records remain in there for possible correction and reprocessing. This was a feature of MPL. In the OSB world, if a process fails for any reason, the OSB can be configured to act as an Error Hospital. This is effectively the same as the MPL except you can configure the hospital to ignore any successful executions which reduces storage. In fact, OSB has features where you can detect errors anywhere in the process and allows you to determine which part of the integration was at fault in a more user friendly manner. OSB effectively already includes the staging functionality so adding this to the adapters just duplicates processing. The only difference is that error correction, if necessary, is done within the OSB rather than the product.
  • More flexible integration model. One of the major reasons to move from the MPL to the OSB is the role that the product plays in integration. If you look at the MPL model, any data that was passed to the product from an external source automatically became the responsibility of the product (that is how most partners implemented it). This means the source system had no responsibility for the cleanliness of its data, as you had the means of correcting the data as it entered the system. The source system could send bad data over and over, and since you dealt with it in the staging area, that would increase costs on the target system. This is not ideal. In the OSB world, you can choose your model. You can continue to use the Error Hospital to keep correcting the data if you wish, or you can configure the Error Hospital to compile the errors and send them back, using any adapter, to the source system for correction. With OSB there is a choice; the MPL did not really give you one.

With these considerations in place, it was not efficient to add an XAI Staging Adapter to OSB, as it would duplicate effort and decrease efficiency, which would negatively impact scalability.

What is the Status of the PeopleSoft Interaction Hub?

PeopleSoft Technology Blog - Wed, 2018-05-16 19:07

As many of you may have heard, PeopleSoft has moved several functional capabilities from the PeopleSoft Interaction Hub (formerly the PeopleSoft Portal) to PeopleTools.  Why did we do this?  We wanted to make clustering available to many more customers.  Clustering enables two or more PeopleSoft applications to behave like a seamless system without noticeable boundaries between applications.  We did a webinar on this subject recently.  Clustering formerly required the Interaction Hub, but now when you move to PeopleTools 8.56, you can achieve clustering without the Hub.  There are many customers that could benefit from clustering their PeopleSoft applications, but don’t want the overhead, system complexity, or additional expense of deploying the Hub.  Some simply don’t want to deploy a web site.

That’s great, but what does that mean for customers who have the Hub?  First of all, nothing is taken away.  If you have deployed the Hub and are successful and your users are happy, you can continue to use it.  We will continue to support it.   Customers that still want to deploy a PeopleSoft–centric web site experience for their users may want to continue to use the Hub.  You can also use the Hub with PeopleSoft Fluid applications.  In fact, if you are using the Hub you should be deploying Fluid applications with it.

Our focus on Fluid as the current and future PeopleSoft user interface means, however, that we don’t have plans to enhance the Interaction Hub further.  Enhancements to clustering and common capabilities used across applications will be made directly in PeopleTools so all customers can benefit from them.

Filtering a Table, List or Other Collections in Oracle Visual Builder Cloud Service

Shay Shmeltzer - Wed, 2018-05-16 18:36

A common use case when working with data is to try and filter it.

For example if you have a set of records shown in a table in the UI the user might want to filter those to show specific rows.

In the video below I show you the basic way to achieve this using the filterCriterion of ServiceDataProvider variables - the type of variable that populates tables and lists.

Basically each SDP has an attribute called filterCriterion that accepts a structure that can contain arrays of conditions. You can then map a criterion that leverages, for example, a page-level variable connected to a field.

FilterCriterion setting

In the criteria you'll specify

  • an attribute - this will be the column id (not title) of your business object
  • a value - this is the value you are filtering based on - usually a pointer to a variable in your page
  • An operator (op) - the filterCriterion uses operators like $eq or $ne - these are based on the Oracle JET AttributeFilterOperator - a full list of the operators is here.

In the video I end up with a filterCriterion that is:

{ "criteria": [ { "value": "{{ $page.variables.filterVar }}", "op": "{{ \"$eq\"\n }}", "attribute": "{{ \"traveler\"\n }}" } ], "op": "{{ \"$or\"\n }}" }

Since you can have multiple criteria you can also specify an operator on them - either an or ($or) or an and ($and) - these are CompoundOperators from Oracle JET.

By the way, this will work automatically for your Business Objects; however, if you want to apply this to data from an arbitrary REST service, that service will need to have a filtering transformation defined for it.

 

Categories: Development

Reflecting Changes in Business Objects in UI Tables with Visual Builder

Shay Shmeltzer - Wed, 2018-05-16 11:52

While the quick start wizards in Visual Builder Cloud Service (VBCS) make it very easy to create tables and other UI components and bind them to business objects, it is good to understand what is going on behind the scenes, and what the wizards actually do. Knowing this will help you achieve things that we still don't have wizards for.

For example - let's suppose you created a business object and then created a UI table that shows the fields from that business object in your page. You probably used the "Add Data" quick start wizard to do that. But then you remembered that you need one more column added to your business object; however, after you add that one to the BO, you'll notice it is not automatically shown in the UI. That makes sense since we don't want to automatically show all the fields in a BO in the UI.

But how do you add this new column to the UI?

The table's Add Data wizard will be disabled at this point - so is your only option to drop and recreate the UI table? Of course not!

 

If you'll look into the table properties you'll see it is based on a page level ServiceDataProvider (SDP for short) variable. This is a special type of object that the wizards create to represent collections. If you'll look at the variable, you'll see that it is returning data using a specific type. Note that the type is defined at the flow level - if you'll look at the type definition you'll see where the fields that make up the object are defined.

Type Definition

It is very easy to add a new field here - and modify the type to include the new column you added to the BO. Just make sure you are using the column's id - and not its title - when you define the new field in the items array.

Now back in the UI you can easily modify the code of the table to add one more column that will be hooked up to this new field in the SDP that is based on the type.

Sounds complex? It really isn't - here is a 3 minute video showing the whole thing end to end:

As you can see - a little understanding of the way VBCS works makes it easy to go beyond the wizards and achieve anything.

Categories: Development

Pizza, Beer, and Dev Expertise at Your Local Meet-up

OTN TechBlog - Wed, 2018-05-16 06:30

Big developer conferences are great places to learn about new trends and technologies, attend technical sessions, and connect with colleagues. But by virtue of their size, their typical location in destination cities, and multi-day schedules, they can require a lot of planning, expense, and time away from work.

Meet-ups offer a fantastic alternative. They’re easily accessible local events, generally lasting a couple of hours. Meet-ups offer a more human scale and are far less crowded than big conferences, with a far more casual, informal atmosphere that can be much more conducive to learning through Q&A and hands-on activities.

One big meet-up advantage is that by virtue of their smaller scale they can be scheduled more frequently. For example, while Oracle ACE Associate Jon-Petter Hjulstad and his colleagues attend the annual Oracle User Group Norway (OUGN) Conference, they wanted to get together more often, three or four times a year. The result is a series of OUGN Integration meet-ups “where we can meet people who work on the same things.” As of this podcast, two meet-ups have already taken place, with a third scheduled for the end of May.

Luis Weir, CTO at Capgemini in the UK and an Oracle ACE Director and Developer Champion, felt a similar motivation. “There's so many events going on and there's so many places where developers can go,” Luis says. But sometimes developers want a more relaxed, informal, more approachable atmosphere in which to exchange knowledge. Working with his colleague Phil Wilkins, senior consultant at Capgemini and an Oracle ACE, Luis set out to organize a series of meet-ups that offered more “cool.”

Phil’s goal in the effort was to organize smaller events that were “a little less formal, and a bit more convenient.” Bigger, longer events are more difficult to attend because they require more planning on the part of attendees. “It can take quite a bit of effort to organize your day if you’re going to be out for a whole day to attend a user group special interest group event,” Phil says. But local events scheduled in the evening require much less planning in order to attend. “It's great! You can get out and attend these things and you get to talk to people just as much as you would during a day-time event.”

For Oracle ACE Ruben Rodriguez Santiago, a Java, ADF, and cloud solution specialist with Avanttic in Spain, the need for meet-ups arose out of a dearth of events focused on Oracle technologies. And those that were available were limited to database and SaaS. “So for me this was a way to get moving and create events for developers,” Ruben says.

What steps did these meet-up organizers take? What insight have they gained along the way as they continue to organize and schedule meet-up events? You’ll learn all that and more in this podcast. Listen!

 

The Panelists

Jon-Petter Hjulstad
Department Manager, SYSCO AS

Ruben Rodriguez Santiago
Java, ADF, and Cloud Solution Specialist, Avanttic

Luis Weir
CTO, Oracle DU, Capgemini

Phil Wilkins
Senior Consultant, Capgemini

Additional Resources Coming Soon
  • What Developers Need to Know About API Monetization
  • Best Practices for API Development
Subscribe

Never miss an episode! The Oracle Developer Community Podcast is available via:

 

Oracle Buys DataScience.com

Oracle Press Releases - Wed, 2018-05-16 06:00
Press Release
Oracle Buys DataScience.com
Adds Leading Data Science Platform to the Oracle Cloud, Enabling Customers to Fully Utilize Machine Learning

Redwood Shores, Calif.—May 16, 2018

Oracle today announced that it has signed an agreement to acquire DataScience.com, whose platform centralizes data science tools, projects and infrastructure in a fully-governed workspace.

Data science teams use the platform to organize work, easily access data and computing resources, and execute end-to-end model development workflows. Leading organizations like Amgen, Rio Tinto, and Sonos are using the DataScience.com platform to improve productivity, reduce operational costs and deploy machine learning solutions faster to power their digital transformations.

DataScience.com empowers data scientists to deliver the business-changing insights executives expect in less time with self-service access to open source tools, data and computing resources, while also improving the ability of IT teams to support that work. Oracle embeds Artificial Intelligence (AI) and machine learning capabilities across its software as a service (SaaS) and platform as a service (PaaS) solutions, including big data, analytics and security operations, to enable digital transformations. Together, Oracle and DataScience.com will provide customers with a single data science platform that leverages Oracle Cloud Infrastructure and the breadth of Oracle's integrated SaaS and PaaS offerings to help them realize the full potential of machine learning.

“Every organization is now exploring data science and machine learning as a key way to proactively develop competitive advantage, but the lack of comprehensive tooling and integrated machine learning capabilities can cause these projects to fall short,” said Amit Zavery, Executive Vice President of Oracle Cloud Platform, Oracle. “With the combination of Oracle and DataScience.com, customers will be able to harness a single data science platform to more effectively leverage machine learning and big data for predictive analysis and improved business results.”

“Data science requires a comprehensive platform to simplify operations and deliver value at scale,” said Ian Swanson, CEO of DataScience.com. “With DataScience.com, customers leverage a robust, easy-to-use platform that removes barriers to deploying valuable machine learning models in production. We are extremely enthusiastic about joining forces with Oracle’s leading cloud platform so customers can realize the benefits of their investments in data science.”

More information about this announcement is available at www.oracle.com/datascience.

Contact Info
Deborah Hellinger
Oracle Corporate Communications
+1.212.508.7935
deborah.hellinger@oracle.com
Ken Bond
Oracle Investor Relations
+1.650.607.0349
ken.bond@oracle.com
About Oracle

The Oracle Cloud offers complete SaaS application suites for ERP, SCM and CX, plus best-in-class database Platform as a Service (PaaS) and Infrastructure as a Service (IaaS) from data centers throughout the Americas, Europe and Asia. For more information about Oracle (NYSE: ORCL), visit www.oracle.com.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

Oracle is currently reviewing the existing DataScience.com product roadmap and will be providing guidance to customers in accordance with Oracle’s standard product communication policies. Any resulting features and timing of release of such features as determined by Oracle’s review of DataScience.com’s product roadmap are at the sole discretion of Oracle. All product roadmap information, whether communicated by DataScience.com or by Oracle, does not represent a commitment to deliver any material, code, or functionality, and should not be relied upon in making purchasing decisions. It is intended for information purposes only, and may not be incorporated into any contract.

Cautionary Statement Regarding Forward-Looking Statements
This document contains certain forward-looking statements about Oracle and DataScience.com, including statements that involve risks and uncertainties concerning Oracle’s proposed acquisition of DataScience.com, anticipated customer benefits and general business outlook. When used in this document, the words “anticipates”, “can”, “will”, “look forward to”, “expected” and similar expressions and any other statements that are not historical facts are intended to identify those assertions as forward-looking statements. Any such statement may be influenced by a variety of factors, many of which are beyond the control of Oracle or DataScience.com, that could cause actual outcomes and results to be materially different from those projected, described, expressed or implied in this document due to a number of risks and uncertainties. Potential risks and uncertainties include, among others, the possibility that the transaction will not close or that the closing may be delayed, the anticipated synergies of the combined companies may not be achieved after closing, the combined operations may not be successfully integrated in a timely manner, if at all, general economic conditions in regions in which either company does business may deteriorate and/or Oracle or DataScience.com may be adversely affected by other economic, business, and/or competitive factors. Accordingly, no assurances can be given that any of the events anticipated by the forward-looking statements will transpire or occur, or if any of them do so, what impact they will have on the results of operations or financial condition of Oracle or DataScience.com. You are cautioned to not place undue reliance on forward-looking statements, which speak only as of the date of this document. Neither Oracle nor DataScience.com is under any duty to update any of the information in this document.

Talk to a Press Contact

Deborah Hellinger

  • +1.212.508.7935

Ken Bond

  • +1.650.607.0349

Autonomous Oracle Visual Builder Cloud Service

Visual Application Development and Hosting

The new Autonomous Oracle Visual Builder Cloud Service (AVBCS) introduces a new architecture that turns VBCS into an ideal development and hosting...

Categories: DBA Blogs

12c upuserxt.lst, upobjxt.lst & Oracle Maintained objects/users

Yann Neuhaus - Tue, 2018-05-15 16:24

Mike Dietrich has blogged recently about upuserxt.lst and upobjxt.lst and how to query them with external tables. The first time I saw those ‘.lst’ files, the default extension for sqlplus spool files, I wondered whether they were provided in ?/rdbms/admin on purpose, or whether they were just some leftovers from tests Oracle did before packaging the Oracle Home. Finally, I realized that they are there on purpose and that those ‘.lst’ files are important when upgrading to 12c.

I’ll look at an 18c Oracle Home (/rdbms/admin) in the Oracle Cloud, but this applies to all 12c versions (and 18c is a 12cR2 patchset). One of the most important little features of 12c is the tagging of Oracle Supplied objects and users. Before 12c it was a nightmare to distinguish system users from application ones. I detailed that in a previous post.

At database creation: _oracle_script

In a newly created 12c database, all the objects and users belonging to the system are flagged with ORACLE_MAINTAINED=Y

Here is an example listing system users and roles:

SQL> select listagg(username,',' on overflow truncate) within group (order by username) from dba_users where oracle_maintained='Y';

LISTAGG(USERNAME,','ONOVERFLOWTRUNCATE)WITHINGROUP(ORDERBYUSERNAME)
 
ANONYMOUS,APPQOSSYS,AUDSYS,CTXSYS,DBSFWUSER,DBSNMP,DIP,DVF,DVSYS,GGSYS,GSMADMIN_INTERNAL,GSMCATUSER,GSMUSER,LBACSYS,MDDATA,MDSYS,OJVMSYS,OLAPSYS,ORACLE_OCM,ORDDATA,ORDPLUGINS,ORDSYS,OUTLN,REMOTE_SCHEDULER_AGENT,SI_INFORMTN_SCHEMA,SYS,SYS$UMF,SYSBACKUP,SYSDG,SYSKM,SYSRAC,SYSTEM,WMSYS,XDB,XS$NULL
 
SQL> select listagg(role,',' on overflow truncate) within group (order by role) from dba_roles where oracle_maintained='Y';
LISTAGG(ROLE,','ONOVERFLOWTRUNCATE)WITHINGROUP(ORDERBYROLE)
 
ADM_PARALLEL_EXECUTE_TASK,APPLICATION_TRACE_VIEWER,AQ_ADMINISTRATOR_ROLE,AQ_USER_ROLE,AUDIT_ADMIN,AUDIT_VIEWER,AUTHENTICATEDUSER,CAPTURE_ADMIN,CDB_DBA,CONNECT,CTXAPP,DATAPATCH_ROLE,DATAPUMP_EXP_FULL_DATABASE,DATAPUMP_IMP_FULL_DATABASE,DBA,DBFS_ROLE,DBJAVASCRIPT,DBMS_MDX_INTERNAL,DV_ACCTMGR,DV_ADMIN,DV_AUDIT_CLEANUP,DV_DATAPUMP_NETWORK_LINK,DV_GOLDENGATE_ADMIN,DV_GOLDENGATE_REDO_ACCESS,DV_MONITOR,DV_OWNER,DV_PATCH_ADMIN,DV_POLICY_OWNER,DV_PUBLIC,DV_REALM_OWNER,DV_REALM_RESOURCE,DV_SECANALYST,DV_STREAMS_ADMIN,DV_XSTREAM_ADMIN,EJBCLIENT,EM_EXPRESS_ALL,EM_EXPRESS_BASIC,EXECUTE_CATALOG_ROLE,EXP_FULL_DATABASE,GATHER_SYSTEM_STATISTICS,GDS_CATALOG_SELECT,GGSYS_ROLE,GLOBAL_AQ_USER_ROLE,GSMADMIN_ROLE,GSMUSER_ROLE,GSM_POOLADMIN_ROLE,HS_ADMIN_EXECUTE_ROLE,HS_ADMIN_ROLE,HS_ADMIN_SELECT_ROLE,IMP_FULL_DATABASE,JAVADEBUGPRIV,JAVAIDPRIV,JAVASYSPRIV,JAVAUSERPRIV,JAVA_ADMIN,JMXSERVER,LBAC_DBA,LOGSTDBY_ADMINISTRATOR,OEM_ADVISOR,OEM_MONITOR,OLAP_DBA,OLAP_USER,OLAP_XS_ADMIN,OPTIMIZER_PROCESSING_RATE,ORDADMIN,PDB_DBA,PROVISIONER,RDFCTX_ADMIN,RECOVERY_CATALOG_OWNER,RECOVERY_CATALOG_OWNER_VPD,RECOVERY_CATALOG_USER,RESOURCE,SCHEDULER_ADMIN,SELECT_CATALOG_ROLE,SODA_APP,SYSUMF_ROLE,WM_ADMIN_ROLE,XDBADMIN,XDB_SET_INVOKER,XDB_WEBSERVICES,XDB_WEBSERVICES_OVER_HTTP,XDB_WEBSERVICES_WITH_PUBLIC,XS_CACHE_ADMIN,XS_CONNECT,XS_NAMESPACE_ADMIN,XS_SESSION_ADMIN

And here is an example listing the owners of system objects flagged with ORACLE_MAINTAINED=Y

SQL> select listagg(num||' '||owner,',' on overflow truncate) within group (order by num) from (select owner,count(*) num from dba_objects where oracle_maintained='Y' group by owner);

LISTAGG(NUM||''||OWNER,','ONOVERFLOWTRUNCATE)WITHINGROUP(ORDERBYNUM)
 
6 APPQOSSYS,8 DBSFWUSER,8 ORACLE_OCM,8 SI_INFORMTN_SCHEMA,10 ORDPLUGINS,10 OUTLN,13 REMOTE_SCHEDULER_AGENT,22 DVF,24 OJVMSYS,25 OLAPSYS,35 AUDSYS,55 DBSNMP,209 GSMADMIN_INTERNAL,239 LBACSYS,292 ORDDATA,398 DVSYS,399 WMSYS,412 CTXSYS,466 SYSTEM,1029 XDB,2574 MDSYS,3171 ORDSYS,12173 PUBLIC,51069 SYS

How is this done? That's easy. All system objects are created by Oracle scripts, such as those called by catalog.sql and catproc.sql during database creation. Those scripts set "_oracle_script"=true before running the DDL, and all objects created while "_oracle_script"=true are flagged as Oracle Maintained.

If, in a lab (not in prod), you create your own objects in the same way, they will also be flagged as Oracle Maintained:

SQL> connect / as sysdba
Connected.
SQL> alter session set "_oracle_script"=true;
Session altered.
 
SQL> create user FRANCK identified by myself;
User FRANCK created.
SQL> alter user FRANCK quota unlimited on users;
User FRANCK altered.
 
SQL> create table FRANCK.DEMO(n primary key) as select 1 from dual;
Table FRANCK.DEMO created.
SQL> alter session set "_oracle_script"=false;
Session altered.
 
SQL> select username,oracle_maintained from dba_users where username='FRANCK';
 
USERNAME ORACLE_MAINTAINED
-------- -----------------
FRANCK Y
SQL> select owner,object_name,object_type,oracle_maintained from dba_objects where owner='FRANCK';
 
OWNER OBJECT_NAME OBJECT_TYPE ORACLE_MAINTAINED
------ ---- ----- -
FRANCK DEMO TABLE Y
FRANCK SYS_C007409 INDEX Y

So, this one is easy. Database creation runs with “_oracle_script”=true and objects and users created when this parameter is set to true are flagged as Oracle Supplied objects.

And during upgrades?

When you upgrade from 11g to 12c you don't have this Oracle Maintained information. The catupgrd runs with "_oracle_script"=true, but this script does not create all objects. However, the upgraded database has all system objects flagged as Oracle Maintained. This is where upuserxt.lst and upobjxt.lst are used.

When Oracle developers build a new Oracle Home to be shipped, they create a database (including all options, I suppose) and then run the utlupox.sql script. This script lists all Oracle Maintained users and objects, just relying on the flag that was set during creation, and spools them to the upuserxt.lst and upobjxt.lst files. Those files are then shipped in the Oracle Home (all of that under /rdbms/admin).

These .lst files will be used when upgrading from pre-12c in order to set the flags for Oracle Maintained objects. The external tables SYS.USERXT on upuserxt.lst and SYS.OBJXT on upobjxt.lst are created by catupcox.sql and, finally, those tables are read by catuposb.sql to set Oracle Maintained in USER$ and OBJ$. The catuposb.sql script is a bit more complex than that because there are objects that can have different names when a database is created.

Note that this information about Oracle Maintained objects, in addition to being very useful for us, is crucial when you further convert the non-CDB to a PDB because those will become metadata links.
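
For example, once the database is part of a CDB you can see how those Oracle Maintained objects are shared by looking at the SHARING column of DBA_OBJECTS (the metadata links mentioned above):

SQL> select sharing, count(*) from dba_objects where oracle_maintained='Y' group by sharing;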

 

The article 12c upuserxt.lst, upobjxt.lst & Oracle Maintained objects/users appeared first on Blog dbi services.

Leveraging "On Field Value Changes" Event in Visual Builder Cloud Service - Redone

Shay Shmeltzer - Tue, 2018-05-15 16:08

With the new Oracle Visual Builder Cloud Service (VBCS) released this month, some of my past how-to's that show tricks in VBCS are no longer valid/needed.

The direct access we provide to both REST services and the UI components in your application makes things that in the past required code or hacking much simpler.

Here is one example - reacting to value change events on fields and modifying other UI components based on them.

Input components have a "value" event that you can hook into and provide an action chain that will be executed when the value changes.

In the video below you see for example how I can use a value selected in a drop down list to control whether other components on the page are shown or hidden.

To do this, you define a page variable that you can change in the "value" event. You can then rely on that page variable to control another component behavior.

As you can see - no coding needed - just drag and drop your way to create the functionality.

action chain

 

 

Categories: Development

Enabling Automatic EBS Application Tier Failover Between Physical Nodes on Oracle Solaris Clusters

Steven Chan - Tue, 2018-05-15 15:28

This is a long-overdue tip of the hat to my colleagues in the Oracle Solaris group. I recently learned about an article they posted a couple of years ago about E-Business Suite 12.2 running on Oracle Solaris Cluster:

It is possible to install an EBS primary application tier and the associated WebLogic Administration Server on a logical host using Oracle Solaris Clusters. If the physical node hosting these EBS services fails, then Oracle Solaris Cluster will automatically fail over to another physical node in the cluster.

This is very cool! In a traditional EBS multinode environment, any user sessions on an affected failed node that were in-flight (meaning, in the middle of a transaction) would be lost. Users would need to redo any transactions that were not saved when their node went down. Deploying EBS on Oracle Solaris Clusters allows for the failover without any impact to those in-flight transactions.

If you're already running EBS on Solaris, you should definitely monitor this blog.

Related Articles

Categories: APPS Blogs

Configuring WebLogic Server 12.2.1.3 Diagnostics Framework policies and actions

Yann Neuhaus - Tue, 2018-05-15 09:28

Configure a Policy-action (the new name for watch-notification) to send a JMX notification when the monitored WebLogic Server has stuck threads or high memory usage. We will use a simple JMX notification listener program which can process WLDF JMX notifications. In this case, it will simply print the contents of the notification.
For simplicity, we will have only two servers in the domain, AdminServer (admin-server) and Server1 (managed-server). We will configure WLDF on Server1 to send a JMX notification when the Heap Free of the managed server Server1 (actually, of any server the module is targeted to within the domain) is less than 20% or as soon as there is a stuck thread.
We will use WebLogic Server Administration Console to configure WLDF. It can also be done with WLST. Console provides a nice feature to Create Repeatable Configuration Scripts, which can be handy for building such scripts. Configuration steps for this case consist of:
1. Create a Diagnostic Module.
2. Enable periodic metrics collection in the Diagnostic module.
3. Create a JMX notification type.
4. Create a policy to detect if the WebLogic Server is having stuck threads
5. Create a policy to detect high memory usage on the WebLogic Server
6. Target the Diagnostic module to the WebLogic Server server1
We will go over these steps and see how to configure WLDF using Console for this task.

1. Create a Diagnostic Module.

a) Log into Console and acquire Edit Lock so we can add new configuration to WebLogic Server.
b) From the left navigation tree, open the Diagnostics node and click on Diagnostic Modules. Console will show a table of existing Diagnostic modules within the domain. Click the New button to create a new diagnostic module. Call it myWLDF. Click OK to create the module. At this point, we have an empty myWLDF diagnostic module.

2. Enable periodic metrics collection in the Diagnostic module.

a) Click on the myWLDF module link in the table of Diagnostics modules.
b) Click on Collected Metrics sub-tab under Configuration tab.
c) Check the Enabled checkbox and set the Sampling Period to 10000 (10 seconds). Click Save.

3. Create a JMX notification type.

a) Configuring a policy/actions (Watch and Notifications in earlier versions) has two aspects. The first aspect is a policy (watch rule) which specifies the condition that WLDF will check. The second aspect is the set of actions (notifications) that will be sent when the rule condition is met. Console provides configuration assistants to make the configuration task easier. To create an action type:
b) Click Policies and Actions sub-tab under Configuration tab.
On the Actions sub-tab, click New in the Actions table.
c) Select “JMX Notification” for the notification type from the drop down list and click Next.
d) Give a name to the notification type (myJMX)
e) Keep the Notification Type to its default value
f) Check the Enable Notification checkbox and click OK to create the notification type.

4. Create a policy to detect servers having stuck threads

Now, we will create the policy rule based on runtime mbean data. Specifically, we will use the StuckThreadCount attribute on the WorkManagerRuntime mbeans. For each server within the domain, there is a WorkManagerRuntime mbean in the domain runtime mbean server. The StuckThreadCount attribute reflects the current number of stuck threads on the server. We will configure a rule which will fire as soon as one server in the targets list has a stuck thread.
a) Click on the Policies sub-tab -> Configuration. Click New in the Policies table.
b) Set Policy Name to WatchServerStuckThread. Select Collected Metrics for Policy Type, check Enabled checkbox and click Next.
c) Set the Policy Expression as below and Click Next:

wls.runtime.query('com.bea:Type=WorkManagerRuntime,*','StuckThreadCount').stream().anyMatch( x -> x > 0 )

Note that the syntax of Policy Expressions has changed from the previous WebLogic Versions. Starting from this version, the policy expression is to be provided in Java Expression Language (EL).

d) Keep The Frequency to “Every N Seconds” and Click Next
e) Set the Repeat parameter to 5 and click Next
f) In this wizard keep the default and click Next
g) In the diagnostic Actions part, in the available actions, select the myJMX created earlier and move it to the “chosen” actions.
h) Click Finish

5. Create a policy to detect high memory usage on the WebLogic Server

Now, we will create the policy rule based on runtime mbean data. Specifically, we will use the HeapFreePercent attribute of the server's JVMRuntime mbean, which reflects the percentage of free space in the JVM heap. We will configure a rule which will fire as soon as the heap free percentage of a server in the targets list drops below 20%.
a) Click on the Policies sub-tab -> Configuration. Click New in the Policies table.
b) Set Policy Name to WatchServerLowHeapFreePercent. Select Collected Metrics for Policy Type, check Enabled checkbox and click Next.
c) Set the Policy Expression as below and Click Next:

wls.runtime.serverRuntime.JVMRuntime.heapFreePercent < 20

d) Keep The Frequency to “Every N Seconds” and Click Next
e) Set the Repeat parameter to 5 and click Next
f) In this wizard keep the default and click Next
g) In the diagnostic Actions part, in the available actions, select the myJMX created earlier and move it to the “chosen” actions.
h) Click Finish
6. Target the Diagnostic module to the WebLogic Server server1
a) Click on the Targets TAB
b) In the list of possible Targets, select the Server1 and click on Save
c) Activate Changes

Receiving Notifications

WLDF sends a JMX notification on a specific WLDF runtime mbean, whose ObjectName is of the form:
com.bea:Name=DiagnosticsJMXNotificationSource,ServerRuntime=$SERVER,Type=WLDFWatchJMXNotificationRuntime,WLDFRuntime=WLDFRuntime,WLDFWatchNotificationRuntime=WatchNotification
where $SERVER is the name of the WebLogic Server instance. For our case (Server1), it is:
com.bea:Name=DiagnosticsJMXNotificationSource,ServerRuntime=Server1,Type=WLDFWatchJMXNotificationRuntime,WLDFRuntime=WLDFRuntime,WLDFWatchNotificationRuntime=WatchNotification
By registering for JMX notifications on this mbean, a client program can listen to generated notifications.
We will use the JMXWatchNotificationListener.java provided in the Oracle WLDF documentation (see references). It is a simple notification listener for WLDF JMX notifications. It simply prints the contents of received notification, but can be easily adapted to perform other actions.

Sample Java code for such a listener can be downloaded from the Oracle WebLogic Diagnostic Framework documentation (here).

To run it against this blog's sample, run:

java JMXWatchNotificationListener vm01 7006 weblogic Welcome1 Server1

Note: The WebLogic Managed Server named Server1 is listening on port 7006.

Sample WLDF message sent when the WebLogic Server is having Stuck Threads

Notification name: myJMXNotif called. Count= 79.
Watch severity: Notice
Watch time: Apr 24, 2018 12:08:35 PM CEST
Watch ServerName: Server1
Watch RuleType: Harvester
Watch Rule: wls.runtime.query('com.bea:Type=WorkManagerRuntime,*','StuckThreadCount').stream().anyMatch( x -> x > 0 )
Watch Name: WatchServerStuckThread
Watch DomainName: wldf_domain
Watch AlarmType: None
Watch AlarmResetPeriod: 60000

Sample WLDF message sent when the WebLogic Server heap free becomes low (less than 20%)

Notification name: myJMXNotif called. Count= 114.
Watch severity: Notice
Watch time: Apr 24, 2018 12:11:45 PM CEST
Watch ServerName: Server1
Watch RuleType: Harvester
Watch Rule: wls.runtime.serverRuntime.JVMRuntime.heapFreePercent < 20
Watch Name: WatchServerLowHeapFreePercent
Watch DomainName: wldf_domain
Watch AlarmType: None
Watch AlarmResetPeriod: 60000

 

The article Configuring WebLogic Server 12.2.1.3 Diagnostics Framework policies and actions appeared first on Blog dbi services.

PDB RMAN backups available after plugging in on a new CDB with Oracle 18c

Yann Neuhaus - Tue, 2018-05-15 08:53

With Oracle 18c, it is possible to use PDB RMAN backups created on the source CDB (they are called PREPLUGIN backups) when the PDB has been relocated to a target CDB.

In my environment, my original CDB is DB18, with the PDB named pdborig. The target CDB is PSI18.

The first step consists of running an RMAN backup of pdborig:

oracle@localhost:/u00/app/oracle/ [DB18] rman target sys/manager@pdborig
Recovery Manager: Release 18.0.0.0.0 - Production on Tue May 15 10:57:38 2018
Version 18.1.0.0.0
Copyright (c) 1982, 2018, Oracle and/or its affiliates.  All rights reserved.

connected to target database: DB18:PDBORIG (DBID=3031125269)

RMAN> backup pluggable database pdborig plus archivelog;

Starting backup at 15-MAY-2018 10:57:55
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=80 device type=DISK
skipping archived logs when connected to a PDB
backup cancelled because there are no files to backup
Finished backup at 15-MAY-2018 10:57:56

Starting backup at 15-MAY-2018 10:57:56
using channel ORA_DISK_1
channel ORA_DISK_1: starting full datafile backup set
channel ORA_DISK_1: specifying datafile(s) in backup set
input datafile file number=00029 name=/u00/app/oracle/oradata/DB18/pdborig/undotbs01.dbf
input datafile file number=00027 name=/u00/app/oracle/oradata/DB18/pdborig/system01.dbf
input datafile file number=00028 name=/u00/app/oracle/oradata/DB18/pdborig/sysaux01.dbf
input datafile file number=00030 name=/u00/app/oracle/oradata/DB18/pdborig/users01.dbf
channel ORA_DISK_1: starting piece 1 at 15-MAY-2018 10:57:56
channel ORA_DISK_1: finished piece 1 at 15-MAY-2018 10:57:59
piece handle=/u00/app/oracle/fast_recovery_area/DB18/DB18/
6C3BAD3B7C73354AE0530100007F9AD9/backupset/
2018_05_15/o1_mf_nnndf_TAG20180515T105756_fho88nn0_.bkp 
tag=TAG20180515T105756 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:03
Finished backup at 15-MAY-2018 10:57:59

Starting backup at 15-MAY-2018 10:57:59
using channel ORA_DISK_1
skipping archived logs when connected to a PDB
backup cancelled because there are no files to backup
Finished backup at 15-MAY-2018 10:57:59

Then we have to export the RMAN backup metadata for the PDB into its dictionary using dbms_pdb.exportrmanbackup():

oracle@localhost:/u00/app/oracle/ [DB18] sqlplus sys/manager@pdborig as sysdba

SQL*Plus: Release 18.0.0.0.0 Production on Tue May 15 11:00:38 2018
Version 18.1.0.0.0

Copyright (c) 1982, 2017, Oracle.  All rights reserved.

Connected to:
Oracle Database 18c Enterprise Edition Release 18.0.0.0.0 - Production
Version 18.1.0.0.0

SQL> execute dbms_pdb.exportrmanbackup();

PL/SQL procedure successfully completed.

As my environment is configured with TDE, in order to unplug pdborig we need to export the master key of the container database, otherwise we will receive the following error message:

SQL> alter pluggable database pdborig 
unplug into '/home/oracle/pdborig.xml';
alter pluggable database pdborig unplug into '/home/oracle/pdborig.xml'
*
ERROR at line 1:
ORA-46680: master keys of the container database must be exported

We export the master key:

SQL> alter session set container=PDBORIG;
Session altered

SQL> administer key management 
  2  export encryption keys with secret "manager_cdb" 
  3  to '/home/oracle/pdborig.p12'
  4  identified by manager_cdb;

keystore altered.

SQL> alter pluggable database PDBORIG close immediate;

Pluggable database altered.

SQL> alter pluggable database PDBORIG unplug into '/home/oracle/pdborig.xml';

Pluggable database altered.

Finally, on the target CDB named PSI18, we first have to create a wallet and open the keystore. Just remember you have to define wallet_root and tde_configuration in your CDB environment in order to use TDE:


SQL> show parameter wallet

NAME		       TYPE	    VALUE
-------------------- ----------- -------------------------------------
wallet_root	       string	 /u00/app/oracle/admin/PSI18/walletcdb

SQL> alter system set tde_configuration="keystore_configuration=file";
System altered.

We create and open the keystore on the target CDB and we import the master key:

SQL> administer key management create keystore identified by manager_cdb;

keystore altered.

SQL> administer key management set keystore open 
identified by manager_cdb container=all;

keystore altered.

SQL> alter session set container=pdb1;

Session altered.

SQL> administer key management import encryption keys 
     with secret "manager_cdb" from '/home/oracle/pdborig.p12'
     identified by "manager_cdb" with backup;

keystore altered.

We create pdbnew on the target CDB using pdborig.xml:

SQL> create pluggable database pdbnew using '/home/oracle/pdborig.xml'
file_name_convert=
('/u00/app/oracle/oradata/DB18/pdborig','/home/oracle/oradata/PSI18/pdbnew');

Pluggable database created.

We open the pluggable database pdbnew:

SQL> alter pluggable database pdbnew open;

Pluggable database altered.

SQL> show pdbs

    CON_ID CON_NAME                       OPEN MODE  RESTRICTED
---------- ------------------------------ ---------- ----------
         2 PDB$SEED                       READ ONLY  NO
         4 PDBNEW                         READ WRITE NO

Now that PDBNEW has been plugged into the target CDB, we can check whether its RMAN backups are visible, since we exported the RMAN backup metadata before unplugging. To list them we have to use the preplugin clause:

RMAN> list preplugin backup of pluggable database pdbnew;


List of Backup Sets
===================


BS Key  Type LV Size       Device Type Elapsed Time Completion Time     
------- ---- -- ---------- ----------- ------------ --------------------
9       Full    463.15M    DISK        00:00:01     15-MAY-2018 10:56:51
        BP Key: 9   Status: AVAILABLE  Compressed: NO  Tag: TAG20180515T105650
        Piece Name: /u00/app/oracle/fast_recovery_area/DB18/DB18/
6C3BAD3B7C73354AE0530100007F9AD9/backupset/2018_05_15/
o1_mf_nnndf_TAG20180515T105650_fho86ltx_.bkp
  List of Datafiles in backup set 9
  File LV Type Ckp SCN    Ckp Time             Abs Fuz SCN Sparse Name
  ---- -- ---- ---------- -------------------- ----------- ------ ----
  20 Full 1081248 15-MAY-2018 10:56 /home/oracle/oradata/PSI18/pdbnew/system01.dbf
  21 Full 1081248 15-MAY-2018 10:56 /home/oracle/oradata/PSI18/pdbnew/sysaux01.dbf
  22 Full 1081248 15-MAY-2018 10:56 /home/oracle/oradata/PSI18/pdbnew/undotbs01.dbf
  23 Full 1081248 15-MAY-2018 10:56 /home/oracle/oradata/PSI18/pdbnew/users01.dbf

BS Key  Type LV Size       Device Type Elapsed Time Completion Time     
------- ---- -- ---------- ----------- ------------ --------------------
10      Full    463.15M    DISK        00:00:01     15-MAY-2018 10:57:57
        BP Key: 10   Status: AVAILABLE  Compressed: NO  Tag: TAG20180515T105756
        Piece Name: /u00/app/oracle/fast_recovery_area/DB18/DB18/6C3BAD3B7C73354AE0530100007F9AD9/backupset/2018_05_15/o1_mf_nnndf_TAG20180515T105756_fho88nn0_.bkp
  List of Datafiles in backup set 10
  File LV Type Ckp SCN    Ckp Time             Abs Fuz SCN Sparse Name
  ---- -- ---- ---------- -------------------- ----------- ------ ----
  20 Full 1081326 15-MAY-2018 10:57 /home/oracle/oradata/PSI18/pdbnew/system01.dbf
  21 Full 1081326 15-MAY-2018 10:57 /home/oracle/oradata/PSI18/pdbnew/sysaux01.dbf
  22 Full 1081326 15-MAY-2018 10:57 /home/oracle/oradata/PSI18/pdbnew/undotbs01.dbf
  23 Full 1081326 15-MAY-2018 10:57 /home/oracle/oradata/PSI18/pdbnew/users01.dbf

We can also display the preplugin archived log backups:

RMAN> list preplugin archivelog all;

List of Archived Log Copies for database with db_unique_name PSI18
=====================================================================

Key     Thrd Seq     S Low Time            
------- ---- ------- - --------------------
.....

6       1    16      A 15-MAY-2018 10:08:53
/u00/app/oracle/fast_recovery_area/archivelog/2018_05_15/o1_mf_1_16_fho5r944_.a
...

So let’s see if we can run a restore and recover test:

We delete the users01.dbf datafile:

oracle@localhost:/u00/app/oracle/ [PSI18] rm /home/oracle/oradata/PSI18/pdbnew/users01.dbf

oracle@localhost:/u00/app/oracle/oradata/DB18/pdbseed/ [PSI18] sq

SQL*Plus: Release 18.0.0.0.0 Production on Tue May 15 11:20:47 2018
Version 18.1.0.0.0

Copyright (c) 1982, 2017, Oracle.  All rights reserved.


Connected to:
Oracle Database 18c Enterprise Edition Release 18.0.0.0.0 - Production
Version 18.1.0.0.0

SQL> alter pluggable database pdbnew close;

Pluggable database altered.

SQL> alter pluggable database pdbnew open;
alter pluggable database pdbnew open
*
ERROR at line 1:
ORA-01157: cannot identify/lock data file 23 - see DBWR trace file

Now we try the restore: we connect with RMAN to the target CDB and set the PDB to be restored with the command set preplugin container=pdbnew:

oracle@localhost:/u00/app/oracle/ [PSI18] rman target sys/manager@psi18

Recovery Manager: Release 18.0.0.0.0 - Production on Tue May 15 11:25:06 2018
Version 18.1.0.0.0

Copyright (c) 1982, 2018, Oracle and/or its affiliates.  All rights reserved.

connected to target database: PSI18 (DBID=3896993764)

RMAN> set preplugin container=pdbnew;

executing command: SET PREPLUGIN CONTAINER
using target database control file instead of recovery catalog

RMAN> list preplugin backup of pluggable database pdbnew;


List of Backup Sets
===================


BS Key  Type LV Size       Device Type Elapsed Time Completion Time     
------- ---- -- ---------- ----------- ------------ --------------------
9       Full    463.15M    DISK        00:00:01     15-MAY-2018 10:56:51
        BP Key: 9   Status: AVAILABLE  Compressed: NO  Tag: TAG20180515T105650
        Piece Name: /u00/app/oracle/fast_recovery_area/
/6C3BAD3B7C73354AE0530100007F9AD9/backupset/
2018_05_15/o1_mf_nnndf_TAG20180515T105650_fho86ltx_.bkp
  List of Datafiles in backup set 9
  Container ID: 4, PDB Name: PDBNEW
  File LV Type Ckp SCN    Ckp Time             Abs Fuz SCN Sparse Name
  ---- -- ---- ---------- -------------------- ----------- ------ ----
  20 Full 1081248 15-MAY-2018 10:56 /home/oracle/oradata/PSI18/pdbnew/system01.dbf
  21 Full 1081248 15-MAY-2018 10:56 /home/oracle/oradata/PSI18/pdbnew/sysaux01.dbf
  22 Full 1081248 15-MAY-2018 10:56 /home/oracle/oradata/PSI18/pdbnew/undotbs01.dbf
  23 Full 1081248 15-MAY-2018 10:56 /home/oracle/oradata/PSI18/pdbnew/users01.dbf

BS Key  Type LV Size       Device Type Elapsed Time Completion Time     
------- ---- -- ---------- ----------- ------------ --------------------
10      Full    463.15M    DISK        00:00:01     15-MAY-2018 10:57:57
        BP Key: 10   Status: AVAILABLE  Compressed: NO  Tag: TAG20180515T105756
        Piece Name: /u00/app/oracle/fast_recovery_area
/6C3BAD3B7C73354AE0530100007F9AD9/backupset/2018_05_15/
o1_mf_nnndf_TAG20180515T105756_fho88nn0_.bkp
  List of Datafiles in backup set 10
  Container ID: 4, PDB Name: PDBNEW
  File LV Type Ckp SCN    Ckp Time             Abs Fuz SCN Sparse Name
  ---- -- ---- ---------- -------------------- ----------- ------ ----
  20 Full 1081326 15-MAY-2018 10:57 /home/oracle/oradata/PSI18/pdbnew/system01.dbf
  21 Full 1081326 15-MAY-2018 10:57 /home/oracle/oradata/PSI18/pdbnew/sysaux01.dbf
  22 Full 1081326 15-MAY-2018 10:57 /home/oracle/oradata/PSI18/pdbnew/undotbs01.dbf
  23 Full 1081326 15-MAY-2018 10:57 /home/oracle/oradata/PSI18/pdbnew/users01.dbf

We run the restore command with the preplugin clause:

RMAN> restore pluggable database pdbnew from preplugin;

Starting restore at 15-MAY-2018 11:26:23
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=108 device type=DISK

channel ORA_DISK_1: starting datafile backup set restore
channel ORA_DISK_1: specifying datafile(s) to restore from backup set
channel ORA_DISK_1: restoring datafile 00020 to 
/home/oracle/oradata/PSI18/pdbnew/system01.dbf
channel ORA_DISK_1: restoring datafile 00021 to 
/home/oracle/oradata/PSI18/pdbnew/sysaux01.dbf
channel ORA_DISK_1: restoring datafile 00022 to 
/home/oracle/oradata/PSI18/pdbnew/undotbs01.dbf
channel ORA_DISK_1: restoring datafile 00023 to 
/home/oracle/oradata/PSI18/pdbnew/users01.dbf
channel ORA_DISK_1: reading from backup piece /u00/app/oracle/fast_recovery_area/
DB18/DB18/6C3BAD3B7C73354AE0530100007F9AD9/backupset/2018_05_15/
o1_mf_nnndf_TAG20180515T105756_fho88nn0_.bkp
channel ORA_DISK_1: piece handle=/u00/app/oracle/fast_recovery_area/DB18/DB18/
6C3BAD3B7C73354AE0530100007F9AD9/backupset/2018_05_15/
o1_mf_nnndf_TAG20180515T105756_fho88nn0_.bkp tag=TAG20180515T105756
channel ORA_DISK_1: restored backup piece 1
channel ORA_DISK_1: restore complete, elapsed time: 00:00:03
Finished restore at 15-MAY-2018 11:26:28

We run the recover command with the preplugin clause:

RMAN> recover pluggable database pdbnew from preplugin;

Starting recover at 15-MAY-2018 11:27:02
using channel ORA_DISK_1

starting media recovery

unable to find archived log
archived log thread=1 sequence=17
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03002: failure of recover command at 05/15/2018 11:27:03
RMAN-06054: media recovery requesting unknown archived log 
for thread 1 with sequence 17 and starting SCN of 1081326

We have to catalog the archived logs generated after the backup into the target CDB by issuing the catalog preplugin archivelog command:

RMAN> catalog preplugin archivelog '/u00/app/oracle/fast_recovery_area/
DB18/DB18/archivelog/2018_05_15/o1_mf_1_17_fhob69t7_.arc';

cataloged archived log
archived log file name=/u00/app/oracle/fast_recovery_area/DB18/DB18/
archivelog/2018_05_15/o1_mf_1_17_fhob69t7_.arc RECID=7 STAMP=0

Finally the recover command runs successfully:

RMAN> recover pluggable database pdbnew from preplugin;

Starting recover at 15-MAY-2018 11:32:25
using channel ORA_DISK_1

starting media recovery

archived log for thread 1 with sequence 17 is already on disk as file 
/u00/app/oracle/fast_recovery_area/DB18/DB18/archivelog/2018_05_15/
o1_mf_1_17_fhob69t7_.arc
media recovery complete, elapsed time: 00:00:00
Finished recover at 15-MAY-2018 11:32:26

We finish the recover and open the target PDB:

RMAN> recover pluggable database pdbnew;

Starting recover at 15-MAY-2018 11:33:10
using channel ORA_DISK_1

starting media recovery
media recovery complete, elapsed time: 00:00:00

Finished recover at 15-MAY-2018 11:33:10

RMAN> alter pluggable database pdbnew open;

Statement processed

RMAN> exit


Recovery Manager complete.

As we can see, the target PDB has been successfully restored and recovered:

oracle@localhost:/u00/app/oracle/oradata/DB18// [PSI18] sq

SQL*Plus: Release 18.0.0.0.0 Production on Tue May 15 11:33:37 2018
Version 18.1.0.0.0

Copyright (c) 1982, 2017, Oracle.  All rights reserved.


Connected to:
Oracle Database 18c Enterprise Edition Release 18.0.0.0.0 - Production
Version 18.1.0.0.0

SQL> show pdbs

    CON_ID CON_NAME                       OPEN MODE  RESTRICTED
---------- ------------------------------ ---------- ----------
         2 PDB$SEED                       READ ONLY  NO
         4 PDBNEW                         READ WRITE NO

The preplugin clause can be used on the target CDB with RMAN commands such as restore, recover, crosscheck, list and delete.
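
For example, once the plugged-in PDB has its own backups and the pre-plugin pieces are no longer needed, the same pattern could be used to crosscheck and then purge them. A sketch, assuming the PREPLUGIN syntax follows the LIST and CATALOG examples shown above:

RMAN> set preplugin container=pdbnew;
RMAN> crosscheck preplugin backup of pluggable database pdbnew;
RMAN> delete preplugin backup of pluggable database pdbnew;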

This new feature helps maintain backup compliance after a PDB has been moved to a new target CDB.

 

This article, PDB RMAN backups available after plugging in on a new CDB with Oracle 18c, appeared first on Blog dbi services.

New Oracle Billing and Revenue Management Features Help Cloud, Telco Providers Monetize Emerging Services

Oracle Press Releases - Tue, 2018-05-15 07:00
Press Release
New Oracle Billing and Revenue Management Features Help Cloud, Telco Providers Monetize Emerging Services New capabilities empower communication service providers to deliver exceptional service to customers

TM FORUM, Nice, France—May 15, 2018

Helping customers capitalize on new digital communications services and revenue opportunities, Oracle today announced major new enhancements to Oracle Communications Billing and Revenue Management (BRM). With new operational user experience features, enhanced security capabilities and updated technology platform support, Oracle Communications BRM supports the rapid design and monetization of consumption and subscription based offerings across cloud, IoT, and future 5G services.

Oracle Communications BRM provides complete revenue management including the high volume, low latency, and convergent usage processing required by new digital communication services. With its Elastic Charging Engine technology, the offering is designed to support advanced charging models for today’s mobile, fixed and cable digital services and provide a foundation for IoT and future 5G services.

“This major release of Oracle Communications BRM underscores Oracle’s ongoing commitment to delivering powerful, scalable monetization solutions that enable our customers to fully capitalize on the digital revolution,” said Doug Suriano, senior vice president, general manager, Oracle Communications. “With this offering, service providers have a modern monetization platform to innovate, profit and grow as they evolve to support broadband technology innovations and deliver new services to market.”

Oracle Communications BRM was recently noted as a Leader in the IDC MarketScape: Worldwide Subscription Relationship Management (SRM) 2017 Vendor Assessment.

Modern Monetization for the Digital Era

In today’s competitive business environment, delivering a superior customer experience is more critical than ever. Oracle Communications Billing and Revenue Management’s Billing Care module enables CSPs to serve their customers through intuitive interfaces that provide real-time views of a customer’s balance and allow easy access to all aspects of account history. CSPs and digital service providers can build customer loyalty by awarding non-currency assets such as movie downloads, gigabytes of storage or throughput, or access to games.

Real-time tracking of consumption enables notification to customers when they approach thresholds. Flexible options for invoice design and delivery create additional opportunities to personalize subscriber relationships. Oracle BRM 12.0 enhances these capabilities, empowering CSPs to leverage powerful, scalable monetization capabilities to accelerate customer success and time to market for digital services.

New capabilities include:

  • New unified functionality available in the modern, browser based Billing Care user interface:
    • Advanced customer and asset management with subscription group services
    • Corrective invoicing, credit profile management and sharing
    • SEPA Payments, account and bill unit level refunds, item and event level disputes
    • Collections management, write-off and recovery
  • Secure and documented Billing Care REST API
  • Enhanced security including use of Oracle Wallet
  • Updated core technology support

Oracle Communications BRM is part of Oracle’s complete monetization service portfolio, which also includes Oracle Monetization Cloud and Oracle Communications Network Charging and Control. With Oracle’s monetization solutions, customers can fuel disruptive innovation, brand differentiation, and business transformation with a service-, industry-, and business-model-agnostic real-time charging, billing, and settlement solution designed for an increasingly connected world.

1 IDC, “MarketScape: Worldwide Subscription Relationship Management 2017 Vendor Assessment”, Jordan Jewell and Eric Newmark, DC #US43172417, November 2017

Contact Info
Katie Barron
Oracle
+1.202.904.1138
katie.barron@oracle.com
Kristin Reeves
Blanc & Otus
+1.925.787.6744
kreeves@blancandotus.com
About Oracle Communications

Oracle Communications provides integrated communications and cloud solutions for Service Providers and Enterprises to accelerate their digital transformation journey in a communications-driven world from network evolution to digital business to customer experience. www.oracle.com/communications.

About Oracle

The Oracle Cloud offers complete SaaS application suites for ERP, HCM and CX, plus best-in-class database Platform as a Service (PaaS) and Infrastructure as a Service (IaaS) from data centers throughout the Americas, Europe and Asia. For more information about Oracle (NYSE:ORCL), please visit us at www.oracle.com.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

Talk to a Press Contact

Katie Barron

  • +1.202.904.1138

Kristin Reeves

  • +1.925.787.6744

Partner Webcast – Autonomous Data Warehouse in the Cloud: Practical Use Case Guide

Today’s leading-edge organizations differentiate themselves through analytics to further their competitive advantage by extracting value from all their data sources. However, the velocity and...

We share our skills to maximize your revenue!
Categories: DBA Blogs

Generating Duplicate Rows

Tom Kyte - Tue, 2018-05-15 03:46
Hi, I am new to Oracle PL/SQL and want advice on a business scenario: the business wants to run a shipping label report and each shipping has one record. They will choose a particular shipping record and, based on a provided parameter, they want to see number of labe...
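
The question is truncated in this excerpt, but one common Oracle approach to printing a record N times (for example, N shipping labels per shipment) is a row-generator cross join. A minimal sketch with hypothetical table, column and bind names:

SQL> select s.*, gen.copy_no
  2  from   shipping s
  3         cross join (select level as copy_no
  4                     from   dual
  5                     connect by level <= :label_count) gen
  6  where  s.shipping_id = :shipping_id;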
Categories: DBA Blogs

Goldengate REPORTING P2

Michael Dinh - Mon, 2018-05-14 21:35

Previous post for Goldengate REPORTING

GoldenGate reporting tends to be the least implemented functionality until it is needed, and by then it is usually only appreciated in hindsight.

Here is an example of what I would normally implement.

-- Reset process statistics each time a new report is generated
STATOPTIONS RESETREPORTSTATS
-- Write an interim statistics report to the report file daily at 00:01
REPORT AT 00:01
-- Age (roll over) the report file daily at 00:01
REPORTROLLOVER AT 00:01
-- Log record counts and processing rate every 15 minutes
REPORTCOUNT EVERY 15 MINUTES, RATE
-- Age the discard file weekly, on Sunday at 00:01
DISCARDROLLOVER AT 00:01 ON SUNDAY
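
To put these options in context, here is a hypothetical extract parameter file sketch (the group name, credential alias, trail and table specification are illustrative, not taken from a real deployment) showing where such reporting options would typically sit:

EXTRACT EXT1
USERIDALIAS ggadmin
-- reporting options
STATOPTIONS RESETREPORTSTATS
REPORT AT 00:01
REPORTROLLOVER AT 00:01
REPORTCOUNT EVERY 15 MINUTES, RATE
DISCARDROLLOVER AT 00:01 ON SUNDAY
-- capture configuration
EXTTRAIL ./dirdat/aa
TABLE HR.*;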

If the business only cares about monthly data, then it would not make sense to collect it daily, nor would it make sense for the business to request that.

Leading Analyst Firm: Oracle Continues Strong Cloud Growth

Oracle Press Releases - Mon, 2018-05-14 15:09
Press Release
Leading Analyst Firm: Oracle Continues Strong Cloud Growth New IDC report finds Oracle gained the most share out of all Enterprise Applications SaaS vendors and grew the fastest out of the top 10 PaaS vendors in the last two calendar years

Redwood Shores, Calif.—May 14, 2018

Oracle today announced that IDC has recognized the momentum and growth of Oracle Cloud. According to the IDC Public Cloud Services Tracker, April, 2018¹, Oracle gained the most share out of all Enterprise Applications SaaS vendors in CY’2016 as well as CY’2017. In addition, the same report found that Oracle grew the fastest out of the top 10 PaaS vendors in CY’2016 as well as CY’2017.

“Oracle Cloud is redefining how organizations modernize, innovate and compete,” said Thomas Kurian, president, product development, Oracle. “Regardless of where organizations want to start their cloud journeys, Oracle delivers complete and integrated cloud services that deliver faster innovation and business transformation. We believe that the results from the latest IDC Public Cloud Services Tracker further demonstrate the value Oracle Cloud provides.”

IDC’s Worldwide Semiannual Public Cloud Services Tracker® monitors public cloud services with frequent releases of semiannual revenue estimates. It tracks and details individual public cloud services providers’ historical business performance and forecasts market sizes five years into the future—across 53 geographies and up to 80 discrete market segments. In the report, Enterprise Applications SaaS refers to the IDC markets CRM, Enterprise Resource Management (including HCM, Financials, Procurement, Order Management, PPM, EAM), Engineering, SCM, Operations and Manufacturing Applications.

Oracle provides its customers with a unique advantage as it is the only company in the world to deliver a complete cloud portfolio, spanning Infrastructure as a Service (IaaS), Platform as a Service (PaaS) and Software as a Service (SaaS). Today, organizations in 195 countries and territories process more than 55 billion transactions a day through the Oracle Cloud.

To learn more about how organizations around the globe are rapidly adopting Oracle Cloud solutions, see: Organizations Worldwide Turn to Oracle Cloud to Fuel their Modernization Efforts

1 IDC’s Worldwide Semiannual Public Cloud Services Tracker®, 27 April, 2018 [Enterprise Applications SaaS refers to the IDC markets CRM, Enterprise Resource Management (including HCM, Financials, Procurement, Order Management, PPM, EAM), Engineering, SCM, Operations and Manufacturing Applications.]

Contact Info
Simon Jones
Oracle
+1.415.202.4574
s.jones@oracle.com
About Oracle

The Oracle Cloud offers complete SaaS application suites for ERP, HCM and CX, plus best-in-class database Platform as a Service (PaaS) and Infrastructure as a Service (IaaS) from data centers throughout the Americas, Europe and Asia. For more information about Oracle (NYSE:ORCL), please visit us at www.oracle.com.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

Talk to a Press Contact

Simon Jones

  • +1.415.202.4574

Loading Tables with Oracle GoldenGate and REST APIs

DBASolved - Mon, 2018-05-14 12:11

With Oracle GoldenGate 12c (12.3.0.1.x), you can now quickly load empty target tables with data from your source database. You could always do this in previous releases of Oracle GoldenGate, but the process has now been simplified using REST APIs and some scripting know-how. In this post, I’m going to show you, at a high level, how you can use the REST APIs and a bit of scripting to do an initial load of two tables with a single command.

In previous releases of Oracle GoldenGate, a similar task could be done, but it required you to use Oracle Data Pump Export/Import or some other drawn-out process. With this new process, you can effectively get around that and only need trail files to perform the initial load.

In this scenario, I have two tables with a total of 14,000 records in them. This is a small example of an initial load, but you should get the idea of how it works. This approach will also work for adding tables into an existing replication scheme.

The architecture diagram below illustrates how the setup would look with an existing GoldenGate capture running, incorporating a File-Based Initial Load process to load a few tables.

Image 1:

This may look a bit confusing, but it is quite simple to understand. The red items are the GoldenGate extract, trails (local and remote), and the GoldenGate replicat; this is the existing replication stream. The GoldenGate extract captures from the source database and writes transactions to the local trail file (aa). The Distribution Service then picks up and reads the local trail and ships the transactions across the GoldenGate path to the Receiver Service. The Receiver Service then writes to the remote trail (ab), where the GoldenGate replicat applies the transactions to the target database. Pretty simple, and this is doing a continuous replication of transactions.

Now, you want to set up just a few new tables, but do not want to take the day or two it would take to configure, export, import, apply and then catch up. Along the bottom is the initial load path (green), using a File-Based approach to initially load the tables. This is the process I’ve scripted out using cURL and shell scripts. Normally, you would spend time doing an export/import for the table(s) you want to move to the target system after setting up the initial load extract.

Using the Oracle GoldenGate Microservices architecture, this initial load process can be simplified and done very quickly. Below is a link to a script which I wrote to perform a File-Based Initial Load within Oracle GoldenGate Microservices.

FB_InitialLoad.sh <— Use at your own risk! This is only an example script of how this can be done.

What this script does is create the File-Based Initial Load process and populate the two tables I’ve identified in the target system.

As you run this script, everything I needed to build has been reduced to functions that I can call when needed within the script. Granted, this script is very simple, but it orchestrates the whole initial load process for the tables I wanted. After the tables have been loaded, they can be merged into the existing replication stream.
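
For illustration only, the general pattern behind such a script is a small shell wrapper around cURL calls to the GoldenGate Microservices REST endpoints. The host, port, credentials, endpoint path and payload file below are assumptions for this sketch and are not taken from FB_InitialLoad.sh; check the Oracle GoldenGate Microservices REST API documentation for the exact payloads your release expects.

#!/bin/bash
# Hypothetical deployment details -- adjust to your environment.
OGG_ADMIN_URL="https://ogg-host:9011"   # Administration Service URL (assumed port)
OGG_CRED="oggadmin:welcome1"            # assumed credentials

# Generic wrapper: issue an HTTP call against the Administration Service.
call_ogg() {
  local method="$1" path="$2" payload="$3"
  if [ -n "${payload}" ]; then
    curl -s -k -u "${OGG_CRED}" -X "${method}" \
         -H "Content-Type: application/json" \
         -d @"${payload}" \
         "${OGG_ADMIN_URL}${path}"
  else
    curl -s -k -u "${OGG_CRED}" -X "${method}" "${OGG_ADMIN_URL}${path}"
  fi
}

# Example: create an initial-load extract from a prepared JSON payload file.
# Both the endpoint and the payload structure are assumptions for this sketch.
call_ogg POST "/services/v2/extracts/EINIT" einit.json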

Enjoy!!!

Categories: DBA Blogs

Pages

Subscribe to Oracle FAQ aggregator