Amis Blog

Friends of Oracle and Java

Oracle API Platform Cloud Service: using the Developer Portal for discovering APIs via the API Catalog and subscribing applications to APIs

Sun, 2018-04-22 14:22

At the Oracle Partner PaaS Summer Camps VII 2017 in Lisbon last year, at the end of August, I attended the API Platform Cloud Service & Integration Cloud Service bootcamp.

In a series of articles I will give a high-level overview of what you can do with Oracle API Platform Cloud Service.

At the Summer Camp a pre-built Oracle VM VirtualBox APIPCS appliance (APIPCS_17_3_3.ova) was provided to us, to be used in VirtualBox. Everything needed to run a complete demo of API Platform Cloud Service is contained within Docker containers that are staged in that appliance. The version of Oracle API Platform CS, used within the appliance, is Release 17.3.3 — August 2017.

See https://docs.oracle.com/en/cloud/paas/api-platform-cloud/whats-new/index.html to learn about the new and changed features of Oracle API Platform CS in the latest release.

In this article in the series about Oracle API Platform CS, the focus will be on the Developer Portal, discovering APIs via the API Catalog and subscribing applications to APIs. As a follow-up to my previous article, at the end the focus is on validating the “Key Validation” policy of the “HumanResourceService” API.
[https://technology.amis.nl/2018/04/14/oracle-api-platform-cloud-service-using-the-management-portal-and-creating-an-api-including-some-policies/]

Be aware that the screenshots in this article, and the examples provided, are based on a demo environment of Oracle API Platform CS and were created by using the Oracle VM VirtualBox APIPCS appliance mentioned above.

This article only covers part of the functionality of Oracle API Platform CS. For more detail I refer you to the documentation: https://cloud.oracle.com/en_US/api-platform.

Short overview of Oracle API Platform Cloud Service

Oracle API Platform Cloud Service enables companies to thrive in the digital economy by comprehensively managing the full API lifecycle from design and standardization to documenting, publishing, testing and managing APIs. These tools provide API developers, managers, and users an end-to-end platform for designing, prototyping, documenting, testing and managing APIs. Through the platform, users gain the agility needed to support changing business demands and opportunities, while having clear visibility into who is using APIs for better control, security and monetization of digital assets.
[https://cloud.oracle.com/en_US/api-platform/datasheets]

Architecture

Management Portal:
APIs are managed, secured, and published using the Management Portal.
The Management Portal is hosted on the Oracle Cloud, managed by Oracle, and users granted
API Manager privileges have access.

Gateways:
API Gateways are the runtime components that enforce all policies, but also help in
collecting data for analytics. The gateways can be deployed anywhere – on premise, on Oracle
Cloud or to any third party cloud providers.

Developer Portal:
After an API is published, Application Developers use the Developer Portal to discover, register, and consume APIs. The Developer Portal can be customized to run either on the Oracle Cloud or directly in the customer environment on premises.
[https://cloud.oracle.com/opc/paas/datasheets/APIPCSDataSheet_Jan2018.pdf]

Oracle Apiary:
In my article “Oracle API Platform Cloud Service: Design-First approach and using Oracle Apiary”, I talked about using Oracle Apiary and interacting with its Mock Server for the “HumanResourceService” API I created earlier.

The Mock Server for the “HumanResourceService” API is listening at:
http://private-b4874b1-humanresourceservice.apiary-mock.com
[https://technology.amis.nl/2018/01/31/oracle-api-platform-cloud-service-design-first-approach-using-oracle-apiary/]
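
As a quick smoke test, the mock can be invoked directly from the command line; for example (a sketch, assuming the GET action on /employees/{id} from the API design is defined in the blueprint, with 100 as an arbitrary identifier):

curl http://private-b4874b1-humanresourceservice.apiary-mock.com/employees/100

If the blueprint contains an example response for that action, the Mock Server returns it.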

Roles

Within Oracle API Platform CS roles are used.

Roles determine which interfaces a user is authorized to access and the grants they are eligible to receive.
[https://docs.oracle.com/en/cloud/paas/api-platform-cloud/apfad/api-platform-cloud-service-roles-resources-actions-and-grants.html]

  • Administrator
    System Administrators responsible for managing the platform settings. Administrators possess the rights of all other roles and are eligible to receive grants for all objects in the system.
  • API Manager
    People responsible for managing the API lifecycle, which includes designing, implementing, and versioning APIs. Also responsible for managing grants and applications, providing API documentation, and monitoring API performance.
  • Application Developer
    API consumers granted self-service access rights to discover and register APIs, view API documentation, and manage applications using the Developer Portal.
  • Gateway Manager
    Operations team members responsible for deploying, registering, and managing gateways. May also manage API deployments to their gateways when issued the Deploy API grant by an API Manager.
  • Gateway Runtime
    This role indicates a service account used to communicate from the gateway to the portal. This role is used exclusively for gateway nodes to communicate with the management service; users assigned this role can’t sign into the Management Portal or the Developer Portal.
  • Service Manager
    People responsible for managing resources that define backend services. This includes managing service accounts and services.
  • Plan Manager
    People responsible for managing plans.

Within the Oracle VM VirtualBox APIPCS appliance the following users (all with password welcome1) are present and used by me in this article:

  • api-manager-user (role: APIManager)
  • api-gateway-user (role: GatewayManager)
  • app-dev-user (role: ApplicationDeveloper)

Publish an API, via the Management Portal (api-manager-user)

Start the Oracle API Platform Cloud – Management Portal as user api-manager-user.

Navigate to tab “Publication” of the “HumanResourceService” API (which I created earlier).
[https://technology.amis.nl/2018/04/14/oracle-api-platform-cloud-service-using-the-management-portal-and-creating-an-api-including-some-policies/]

Publish an API to the Developer Portal when you want application developers to discover and consume it.

Each published API has a details page on the Developer Portal. This page displays basic information about the API, an overview describing the purpose of the API, and documentation for using the API. This page is not visible on the Developer Portal until you publish it.
[https://docs.oracle.com/en/cloud/paas/api-platform-cloud/apfad/publishing-apis.html#GUID-145F0AAE-872B-4577-ACA6-994616A779F1]

The tab “Publication” contains the following parts:

  • API Portal URL
  • Developer Portal API Overview
  • Documentation

Next, I will explain these parts (in reverse order) in more detail.

As you can see, for some of the parts we can use HTML, Markdown or Apiary.

Remark:
Markdown is a lightweight markup language with plain text formatting syntax.
[https://en.wikipedia.org/wiki/Markdown]

Part “Documentation” of the tab “Publication”

You can provide HTML or Markdown documentation by uploading a file, manually entering text, or providing a URL to the documentation resource. After you have added the documentation, it appears on the Documentation tab of the API detail page in the Developer Portal.
[https://docs.oracle.com/en/cloud/paas/api-platform-cloud/apfad/publishing-apis.html#GUID-9FD22DC2-18A9-4338-91E7-70726C906B91]

It is also possible to add documentation from Oracle Apiary to an API.

Adding documentation to the API can help users understand its purpose and how it was configured.

Note:
Swagger or API Blueprint documentation can only be added to an Oracle Apiary Pro account. To add documentation, the team must have ownership of the API in Oracle Apiary. API definitions owned by personal accounts cannot be used. To transfer ownership of an API from a personal account to a team account, see the Oracle Apiary documentation.
[https://docs.oracle.com/en/cloud/paas/api-platform-cloud/apfad/publishing-apis.html#GUID-A7E68AA0-396D-400C-933C-1C4CD3DDD832]

So let’s see how I tried using documentation from Oracle Apiary.

Using Oracle Apiary for documentation

I clicked on button “Apiary”. In my case the following screen appeared:

Next, I clicked on button “Go To Apiary”.

Then I clicked on button “Sign in”.

After a successful sign in (for example by using Email/Password), the following screen appeared (with the “HumanResourceService” API visible):

Next, I clicked on button “Create a team”. The following pop-up appeared:

Because I use a Free (personal) Account for Oracle Apiary, I am not able to create a team.

Remember the note (see above) saying: “Swagger or API Blueprint documentation can only be added to an Oracle Apiary Pro account. To add documentation, the team must have ownership of the API in Oracle Apiary. API definitions owned by personal accounts cannot be used.”.
[https://docs.oracle.com/en/cloud/paas/api-platform-cloud/apfad/publishing-apis.html#GUID-A7E68AA0-396D-400C-933C-1C4CD3DDD832]

So, for me, the path of using documentation from Oracle Apiary came to an end.

As an alternative, in this article, I used Markdown for documentation. But before explaining that in more detail, I want to give you an impression, via screenshots, of what happens when you click on button “Apiary” and have an Apiary account with the right privileges to add documentation to an API.

Remark:
The screenshots that follow are taken from the API Platform Cloud Service bootcamp I attended at the Oracle Partner PaaS Summer Camps VII 2017 in Lisbon last year.

So, when you click on button “Apiary”, the following screen appears:

A list of APIs is visible, from which you can choose one to connect to. For example: the “TicketService27” API.

After a click on button “Connect”, the “Documentation” part of tab “Publication” looks like:

Using Markdown for documentation

For the reasons mentioned above, as an alternative to using Oracle Apiary, I used Markdown for the documentation in this article. Markdown is new to me, so I will only demonstrate it with a simplified version of the documentation (available in Apiary).

Click on button “Markdown”.

Next, click on tab “Text” and enter the following text:

# HumanResourceService

Human Resource Service is an API to manage Human Resources.

## Employees Collection [/employees]

### Get all employees [GET /employees]

Get all employees.

### Get an employee [GET /employees/{id}]

Get a particular employee by providing an identifier.

### Create an employee [POST /employees]

Create an employee, by using post with the complete payload.

### Update an employee [PUT /employees/{id}]

Update an employee, by using put with a payload containing: last_name, job_id, salary and department_id.

## Departments Collection [/departments]

### Get a department [GET /department/{id}]

Get a particular department by providing an identifier.

### Get a department and employee [GET /departments/{department_id}/employees/{employee_id}]

Get a particular department by providing a department identifier and a particular employee within that department by providing an employee identifier.

After a click on button “OK”, the “Documentation” part of tab “Publication” looks like:

In the pop-up, click on button “Save Changes”.

Part “Developer Portal API Overview” of the tab “Publication”

You can provide overview text for an API, describing its features and other information a developer should know about its use, in either HTML or Markdown.

You can upload a file, enter text manually, or provide a link to HTML or Markdown to use as overview text. This text appears on the API’s detail page in the Developer Portal.
[https://docs.oracle.com/en/cloud/paas/api-platform-cloud/apfad/publishing-apis.html#GUID-D1BF7E3E-03C9-42AE-9808-EC9BC77D3E61]

For the “Developer Portal API Overview” part, I chose to use HTML (because in this article, up to now, examples of using Markdown and Apiary were already provided).

Once again I will only demonstrate it with a simplified version of the documentation (available in Apiary).

Click on button “HTML”.

Next, click on tab “Text” and enter the following text:

<h1>HumanResourceService</h1>

Human Resource Service is an API to manage Human Resources.

It provides CRUD methods for the resources <b>Employees</b> and <b>Departments</b>.

After a click on button “OK”, the “Developer Portal API Overview” part of tab “Publication” looks like:

In the pop-up, click on button “Save Changes”.

Part “API Portal URL” of the tab “Publication”

Before publishing to the Developer Portal, each API has to be configured with its own unique Vanity Name. A vanity name is the URI path of an API’s details page when it is published to the Developer Portal.

On the Publication tab, enter the path at which this API will be discoverable in the Developer Portal in the API Portal URL field. This is also called the API’s vanity name.

Note:
An API’s vanity name must be unique, regardless of case. You can’t have APIs with vanity names of Creditcheck and creditcheck. You must enter the vanity name exactly (matching case) in the URL to navigate to an API’s details page in the Developer Portal. For example, navigating to https://<host>:<port>/developers/apis/Creditcheck opens the page for an API with a vanity name of Creditcheck; https://<host>:<port>/developers/apis/creditcheck doesn’t open this page and returns a 404 because the segment in the URL does not match the vanity name exactly.

Only valid URI simple path names are supported. Characters such as “?”, “/”, and “&” are not supported in vanity names. Test_2 is a supported vanity name, but Test/2 is not.
[https://docs.oracle.com/en/cloud/paas/api-platform-cloud/apfad/publishing-apis.html#GUID-C9034B10-72EA-4046-A8B8-B5B1AE087180]

The API’s default vanity name is derived from the API name:

<not published>/HumanResourceService

Publish the “HumanResourceService” API to the Developer Portal

So now that we have all the documentation in place, notice that the button “Preview” has appeared.

Clicking on button “Preview” generates an error:

Remember that I am using a demo environment of Oracle API Platform CS by using the Oracle VM VirtualBox APIPCS appliance. This seems to be a bug in that environment. So what should have been visible was something like:

Here you can see on the left that the tab “Overview” is selected. There is also a tab “Documentation”.

Remark:
Please see the screenshots later on in this article of the “HumanResourceService” API in the “Developer Portal” (tab APIs), with regard to the tabs “Overview” and “Documentation”. These show the same information as in the Preview context.

Next, click on button “Publish to Portal”.

Notice that the icon “Launch Developer Portal in another browser window” appeared and also that the API Portal URL has changed to:

http://apics.oracle.com:7201/developers/apis/HumanResourceService

In the top part of the screen you can see that the “HumanResourceService” API is “Published”.

It’s time to launch the Developer Portal.

Click on the icon “Launch Developer Portal in another browser window”.

Sign in to the Oracle API Platform Cloud – Developer Portal as user app-dev-user

After a successful sign in as user app-dev-user, the next screen appears (with tab “APIs” selected):

The “Developer Portal” is the web page where you discover APIs, subscribe to APIs and get the necessary information to invoke them. When you access the “Developer Portal”, the API Catalog page appears. All the APIs that have been published to the “Developer Portal” are listed. Use the API Catalog page to find APIs published to the “Developer Portal”.

In the “Developer Portal” screen above there are no APIs, or they are not visible for the current user. So we have to go back to the Oracle API Platform Cloud – Management Portal (as an API Manager). There we can grant the privileges needed for an Application Developer to see the API. How you do this is described later on in this article.

For now we continue as if the correct privileges were already in place. Therefore the “HumanResourceService” API is visible.

Click on the “HumanResourceService” API.

Here you can see on the left, that the tab “Overview” is selected.

For now I will give you a short overview, via screenshots, of each of the tabs on the left.

Tab “Overview” of the “HumanResourceService” API

Remember that we used HTML code for the “Developer Portal API Overview” part of the tab “Publication”?
So here you can see the result.

Tab “Documentation” of the “HumanResourceService” API

Remember that we used Markdown code for the “Documentation” part of the tab “Publication”?
So here you can see the result.

Remark:
If I had an Apiary account with the right privileges to add documentation to an API and used Apiary for documentation, the tab “Documentation” would have looked like:

Discover APIs

In the API Catalog page, you can search for an API by entering keywords in the field at the top of the catalog. The list is narrowed to the APIs that have that word in the name or the description. If you enter multiple words, the list contains all APIs with either of the words; APIs with both words appear at the top of the list. If a keyword or keywords have been applied to the list, they appear in a bar at the top of the page. Filters can also be applied to the list. You can also sort the list for example in alphabetical order or by newest to oldest API.
[Oracle Partner PaaS Summer Camps VII 2017, APIPCS bootcamp, Lab_APIPCS_Design_and_Implement.pdf]

Subscribe an application to the “HumanResourceService” API

In the “Developer Portal” screen, if we navigate in the API Catalog page to the “HumanResourceService” API, and if the user has the correct privileges, a button “Subscribe” is visible. In the Oracle API Platform Cloud – Management Portal (as an API Manager) we can grant the privileges needed for an Application Developer to register an application to the API. How you do this is described later on in this article.

For now we continue as if the correct privileges were already in place.

Click on button “Subscribe”.

Next, click on button “Create New Application”. Enter the following values:

  • Application Name: HumanResourceWebApplication
  • Description: Web Application to manage Human Resources.
  • Application Types: Web Application
  • Contact information:
    First Name: FirstName
    Last Name: LastName
    Email: Email@company.com
    Phone: 123456789
    Company: Company

Click on button “Save”.

For a short while a pop-up “The application ‘HumanResourceWebApplication’ was created.” appears.

So now we have an application, we can subscribe it, to the “HumanResourceService” API.

Notice that an Application Key was generated, with the following value:

fb3138d1-0636-456e-96c4-4e21b684f45e

Remark:
You can reissue a key for an application in case it has been compromised. Application keys are established at the application level. If you reissue an application’s key, the old key is invalidated. This affects all APIs (that have the key validation policy applied) to which an application is registered. Every request to these APIs must use the new key to succeed. Requests using the old key are rejected. APIs without the key validation policy are not affected as these do not require a valid application key to pass requests.
[https://docs.oracle.com/en/cloud/paas/api-platform-cloud/apfad/reissuing-application-key.html#GUID-4E570C15-C289-4B6D-870C-F7ADACC1F6DD]

Next, click on button “Subscribe API”.

For a short while a pop-up “API ‘HumanResourceService’ was subscribed to application ‘HumanResourceWebApplication’.” appears.

A request to register the application to the API is sent to the API Manager. So now we have to wait for the approval of the API Manager. How you do this is described later on in this article.

In the API Catalog page, when viewing an API you can see which applications are subscribed to it.

In the My Applications page, when viewing an application you can see which APIs it subscribed to.

After a click on the “HumanResourceWebApplication” application, the next screen appears (with tab “Overview” selected):

First I will give you a short overview, via screenshots, of each of the tabs on the left. Some of these I will explain in more detail as I walk you through some of the functionality of Oracle API Platform CS.

Tab “Overview” of the “HumanResourceWebApplication” application

Tab “Subscribed APIs” of the “HumanResourceWebApplication” application

Tab “Grants” of the “HumanResourceWebApplication” application

Application grants are issued per application.

The following tabs are visible and can be chosen:

  • Manage Application
    People issued this grant can view, modify and delete this application. API Manager users issued this grant can also issue grants for this application to others.
  • View all details
    People issued this grant can see all details about this application in the Developer Portal.

See for more information: https://docs.oracle.com/en/cloud/paas/api-platform-cloud/apfad/managing-application-grants.html

Tab “Analytics” of the “HumanResourceWebApplication” application

Create an Application in the “My Applications” page

Click on button “New Application”.

In the same way as described before, I created several applications (one at a time) with minimal information (Application Name, Application Types, First Name, Last Name and Email).

In the My Applications page, the list of applications then looks like:

In the table below I summarized the applications that I created:

Application Name (Application Types): Application Key

  • DesktopApp_A_Application (Desktop App): e194833d-d5ac-4c9d-8143-4cf3a3e81fea
  • DesktopApp_B_Application (Desktop App): fd06c3b5-ab76-4e89-8c5a-e4b8326c360b
  • HumanResourceWebApplication (Web Application): fb3138d1-0636-456e-96c4-4e21b684f45e
  • MobileAndroid_A_Application (Mobile – Android): fa2ed56f-da3f-49ea-8044-b16d9ca75087
  • MobileAndroid_B_Application (Mobile – Android): 385871a2-7bb8-4281-9a54-c0319029e691
  • Mobile_iOS_A_Application (Mobile – iOS): 7ebb4cf8-5a3f-4df5-82ad-fe09850f0e50

In the API Catalog page, navigate to the “HumanResourceService” API. Here you can see that there is already one subscribed application.

Click on button “Subscribe”.

Next, select the “MobileAndroid_B_Application” application.

For a short while a pop-up “API ‘HumanResourceService’ was subscribed to application ‘MobileAndroid_B_Application’.” appears.

In the API Catalog page, when viewing an API you can see which applications are subscribed to it.

Here we can see the status “Pending”. A request to register the “MobileAndroid_B_Application” application to the “HumanResourceService” API is sent to the API Manager. So now we have to wait for the approval of the API Manager. Repeat the steps described in this article, to approve the request, by switching to an API Manager.

In the screen below, we can see the end result:

Edit the Key Validation Policy, via the Management Portal (api-manager-user)

In the top right of the Oracle API Platform Cloud – Management Portal sign in as user api-manager-user.

Navigate to tab “API Implementation” of the “HumanResourceService” API.

Hover over the “Key Validation” policy and then, on the right, click on icon “Edit policy details”.

Click on button “Apply”.

Next, click on button “Save Changes”.

I applied this policy as an active policy, represented as a solid line around the policy.

Redeploy the API, via the Management Portal (api-manager-user)

Navigate to tab “Deployments” of the “HumanResourceService” API, then hover over the “Production Gateway” gateway and, on the right, hover over the icon “Redeploy”.

Next, click on icon “Latest Iteration”. Also approve the request, by switching to a Gateway Manager.
How you do this, is described in my previous article “Oracle API Platform Cloud Service: using the Management Portal and creating an API (including some policies)”.
[https://technology.amis.nl/2018/04/14/oracle-api-platform-cloud-service-using-the-management-portal-and-creating-an-api-including-some-policies/]

So now the “HumanResourceService” API is redeployed on the “Production Gateway” gateway (Node 1).

It is time to invoke the API.

Validating the “Key Validation” policy, via Postman

As described in my previous article, in Postman, I created requests within the collection named “HumanResourceServiceCollection”.
[https://technology.amis.nl/2018/04/14/oracle-api-platform-cloud-service-using-the-management-portal-and-creating-an-api-including-some-policies/]

Then again I invoked two requests, to validate them against the “Key Validation” policy.

Invoke method “GetEmployee” of the “HumanResourceService” API

From Postman I invoked the request named “GetEmployeeRequest” (with method “GET” and URL “http://apics.oracle.com:8001/HumanResourceService/1/employees/100”) and a response with “Status 401 Unauthorized” is shown:

After providing the Value fb3138d1-0636-456e-96c4-4e21b684f45e (being the Application Key of the “HumanResourceWebApplication” application) for the Header Key “application-key”, a response with “Status 200 OK” is shown:

After providing the Value e194833d-d5ac-4c9d-8143-4cf3a3e81fea (being the Application Key of the “DesktopApp_A_Application” application) for the Header Key “application-key”, a response with “Status 401 Unauthorized” is shown:

Invoke method “GetDepartmentEmployee” of the “HumanResourceService” API

From Postman I invoked the request named “GetDepartmentEmployeeRequest” (with method “GET” and URL “http://apics.oracle.com:8001/HumanResourceService/1/departments/30/employees/119”) and a response with “Status 401 Unauthorized” is shown:

After providing the Value 385871a2-7bb8-4281-9a54-c0319029e691 (being the Application Key of the “MobileAndroid_B_Application” application) for the Header Key “application-key”, a response with “Status 200 OK” is shown:
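
The same validation can also be done from the command line with curl (a sketch; the host, port, resource paths and application keys are those of my demo environment):

# no application-key header: rejected by the “Key Validation” policy (401)
curl -i http://apics.oracle.com:8001/HumanResourceService/1/employees/100

# key of the subscribed “HumanResourceWebApplication” application: accepted (200)
curl -i -H "application-key: fb3138d1-0636-456e-96c4-4e21b684f45e" http://apics.oracle.com:8001/HumanResourceService/1/employees/100

# key of “DesktopApp_A_Application”, which is not registered to this API: rejected (401)
curl -i -H "application-key: e194833d-d5ac-4c9d-8143-4cf3a3e81fea" http://apics.oracle.com:8001/HumanResourceService/1/employees/100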

Tab “Analytics” of the “Production Gateway” gateway

In the top right of the Oracle API Platform Cloud – Management Portal sign in as user api-gateway-user and click on the “Production Gateway” gateway and navigate to the tab “Analytics”.

In this tab the requests I sent, are visible at “Total Requests”.

If we look, for example, at “Requests By Resource”, the requests are also visible.

Next, click on icon “Applications (4 Active)” and if we look, for example, at “Active Applications”, we can see that there were in total 3 request rejections (because of policy “Key Validation”).

If we look, for example, at “Requests By API”, the requests are also visible.

There were 2 requests that had no Header Key “application-key” at all. As you can see in the graph above, these were rejected and were recorded under “Unknown Application (No Key)”.

There was 1 request that had the Value e194833d-d5ac-4c9d-8143-4cf3a3e81fea for the Header Key “application-key”. As you can see in the graph above, this request was rejected and was recorded under the “DesktopApp_A_Application” application. Remember that this application was not registered to the “HumanResourceService” API.

The other 2 requests were accepted, because they had a valid Value for the Header Key and the corresponding applications were registered to the “HumanResourceService” API.

So the “Key Validation” policy is working correctly.

Sign in to the Oracle API Platform Cloud – Management Portal as user api-manager-user

Go back to the Oracle API Platform Cloud – Management Portal and, if not already done, sign in as user api-manager-user. Navigate to tab “Grants” of the “HumanResourceService” API.

API grants are issued per API.

The following tabs are visible and can be chosen:

  • Manage API
    Users issued this grant are allowed to modify the definition of and issue grants for this API.
  • View all details
    Users issued this grant are allowed to view all information about this API in the Management Portal.
  • Deploy API
    Users issued this grant are allowed to deploy or undeploy this API to a gateway for which they have deploy rights. This allows users to deploy this API without first receiving a request from an API Manager.
  • View public details
    Users issued this grant are allowed to view the publicly available details of this API on the Developer Portal.
  • Register
    Users issued this grant are allowed to register applications for this plan.
  • Request registration
    Users issued this grant are allowed to request to register applications for this plan.

Users and groups issued grants for a specific API have the privileges to perform the associated actions on that API. See for more information: https://docs.oracle.com/en/cloud/paas/api-platform-cloud/apfad/managing-api-grants.html.

“View public details” grant

To view an API, the Application Developer must have the “View public details” grant or another grant that implies these privileges.

Click on tab “View public details”.

Next, click on button “Add Grantee”.

Select “app-dev-user” and click on button “Add”.

So now, the user app-dev-user (with Role ApplicationDeveloper) is granted the “View public details” privilege.

Remark:
In practice you would probably grant to a group instead of to a single user.

“Request registration” grant

To register an API, the Application Developer must have the “Request registration” grant or another grant that implies these privileges.

Click on tab “Request registration”.

Next, click on button “Add Grantee”.

Select “app-dev-user” and click on button “Add”.

So now, the user app-dev-user (with Role ApplicationDeveloper) is granted the “Request registration” privilege.

Remark:
In practice you would probably grant to a group instead of to a single user.

Be aware that you could also grant the “Register” privilege, in which case approval by the API Manager (for registering an application to an API) is no longer needed. This makes sense if it concerns a development environment, for example. Since the Oracle VM VirtualBox APIPCS appliance is using a “Production Gateway” gateway, I chose the request-and-approve mechanism in this article.

Approve a request for registering an application to an API, via the Management Portal (api-manager-user)

On the left, click on tab “Registrations” and then click on tab “Requesting”.

Hover over the “HumanResourceWebApplication” application, then click on button “Approve”.

In the pop-up, click on button “Yes”.

Then you can see on the tab “Registered”, that the registration is done.

After a click on the top right icon “Expand”, more details are shown:

So now the “HumanResourceWebApplication” application is registered to the “HumanResourceService” API.

Summary

As a follow-up to my previous articles about Oracle API Platform Cloud Service, in this article the focus is on using the Developer Portal, discovering APIs via the API Catalog and subscribing applications to APIs.

I activated the Key Validation (Security) policy, which I created in my previous article, redeployed the API to a gateway and validated that this policy worked correctly, using requests I created in Postman.
[https://technology.amis.nl/2018/04/14/oracle-api-platform-cloud-service-using-the-management-portal-and-creating-an-api-including-some-policies/]

While using the Management Portal and Developer Portal in this article, I focused on the roles “API Manager” and “Application Developer”. For example, the user api-manager-user had to approve a request from the app-dev-user to register an application to an API.

At the API Platform Cloud Service bootcamp (at the Oracle Partner PaaS Summer Camps VII 2017 in Lisbon last year, at the end of August), I (and many others) got hands-on experience with the API Platform Cloud Service. There we created an API with more policies than described in this article.

It became obvious that the API Platform Cloud Service is a great API Management solution and that with the help of policies you are able to secure, throttle, route, manipulate, or log requests before they reach the backend service.

The post Oracle API Platform Cloud Service: using the Developer Portal for discovering APIs via the API Catalog and subscribing applications to APIs appeared first on AMIS Oracle and Java Blog.

15 Minutes to get a Kafka Cluster running on Kubernetes – and start producing and consuming from a Node application

Thu, 2018-04-19 11:07

For a workshop I will present on microservices and communication patterns, I need attendees to have their own local Kafka Cluster. I have found a way to have it up and running in virtually no time at all, thanks to the combination of:

  • Kubernetes
  • Minikube
  • The Yolean/kubernetes-kafka GitHub Repo with Kubernetes yaml files that creates all we need (including Kafka Manager)

Prerequisites:

  • Minikube and Kubectl are installed
  • The Minikube cluster is running (minikube start)

In my case the versions are:

Minikube: v0.22.3, Kubectl Client 1.9 and (Kubernetes) Server 1.7:


The steps I went through:

Git Clone the GitHub Repository: https://github.com/Yolean/kubernetes-kafka 

From the root directory of the cloned repository, run the following kubectl commands:

(note: I did not know until today that kubectl apply -f can be used with a directory reference and will then apply all yaml files in that directory. That is incredibly useful!)

kubectl apply -f ./configure/minikube-storageclass-broker.yml
kubectl apply -f ./configure/minikube-storageclass-zookeeper.yml

(note: I had to comment out the reclaimPolicy attribute in both files – probably because I am running a fairly old version of Kubernetes)

kubectl apply -f ./zookeeper

kubectl apply -f ./kafka

(note: I had to change API version in 50pzoo and 51zoo as well as in 50kafka.yaml from apiVersion: apps/v1beta2 to apiVersion: apps/v1beta1 – see https://github.com/kubernetes/kubernetes/issues/55894 for details; again, I should upgrade my Kubernetes version)
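
If you run into the same apiVersion issue on an older Kubernetes version, the manifests can be patched from the command line (a sketch; check first which files the grep actually matches in your clone of the repository):

# replace the apps/v1beta2 apiVersion with apps/v1beta1 in the zookeeper and kafka manifests
grep -rl 'apps/v1beta2' ./zookeeper ./kafka | xargs sed -i 's|apps/v1beta2|apps/v1beta1|'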

To make Kafka accessible from the minikube host (outside the K8S cluster itself)

kubectl apply -f ./outside-services

This exposes Services as type NodePort instead of ClusterIP, making them available for client applications that can access the Kubernetes host.
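
To find out where the brokers are reachable from outside the cluster, the Minikube IP and the assigned NodePorts can be looked up like this (the kafka namespace comes from the repository's yaml files; the exact service names may differ per version of the repository):

# IP address of the Minikube VM (the Kubernetes host)
minikube ip

# list the services in the kafka namespace; the outside services show the NodePort mappings
kubectl get services -n kafka

In my case this resulted in the broker address 192.168.99.100:32400 that is used in the Node examples below.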

I also installed (Yahoo) Kafka Manager:

kubectl apply -f ./yahoo-kafka-manager

(I had to change API version in kafka-manager from apiVersion: apps/v1beta2 to apiVersion: apps/v1beta1 )

At this point, the Kafka Cluster is running. I can check the pods and services in the Kubernetes Dashboard as well as through kubectl on the command line. I can get the Port at which I can access the Kafka Brokers:


And I can access the Kafka Manager at the indicated Port.


Initially, no cluster is visible in Kafka Manager. By providing the Zookeeper information highlighted in the figure (zookeeper.kafka:2181) I can make the cluster visible in this user interface tool.

Finally, the proof of the pudding: programmatic production and consumption of messages to and from the cluster. Using the world’s simplest Node Kafka clients, it is easy to see that the stuff is working. I am impressed.

I created the Node application and its package.json file, then added the kafka-node dependency (npm install kafka-node --save). Next I created the producer:

// before running, either globally install kafka-node  (npm install kafka-node)
// or add kafka-node to the dependencies of the local application

var kafka = require('kafka-node')
var Producer = kafka.Producer
var KeyedMessage = kafka.KeyedMessage;

// producer is assigned in initializeKafkaProducer and used by publishEvent below
var producer;

var APP_VERSION = "0.8.5"
var APP_NAME = "KafkaProducer"

var topicName = "a516817-kentekens";
var KAFKA_BROKER_IP = '192.168.99.100:32400';

// from the Oracle Event Hub - Platform Cluster Connect Descriptor
var kafkaConnectDescriptor = KAFKA_BROKER_IP;

console.log("Running Module " + APP_NAME + " version " + APP_VERSION);

function initializeKafkaProducer(attempt) {
  try {
    console.log(`Try to initialize Kafka Client at ${kafkaConnectDescriptor} and Producer, attempt ${attempt}`);
    const client = new kafka.KafkaClient({ kafkaHost: kafkaConnectDescriptor });
    console.log("created client");
    producer = new Producer(client);
    console.log("submitted async producer creation request");
    producer.on('ready', function () {
      console.log("Producer is ready in " + APP_NAME);
    });
    producer.on('error', function (err) {
      console.log("failed to create the client or the producer " + JSON.stringify(err));
    })
  }
  catch (e) {
    console.log("Exception in initializeKafkaProducer" + JSON.stringify(e));
    console.log("Try again in 5 seconds");
    setTimeout(initializeKafkaProducer, 5000, ++attempt);
  }
}//initializeKafkaProducer
initializeKafkaProducer(1);

var eventPublisher = module.exports;

eventPublisher.publishEvent = function (eventKey, event) {
  km = new KeyedMessage(eventKey, JSON.stringify(event));
  payloads = [
    { topic: topicName, messages: [km], partition: 0 }
  ];
  producer.send(payloads, function (err, data) {
    if (err) {
      console.error("Failed to publish event with key " + eventKey + " to topic " + topicName + " :" + JSON.stringify(err));
    }
    console.log("Published event with key " + eventKey + " to topic " + topicName + " :" + JSON.stringify(data));
  });

}

//example calls: (after waiting for three seconds to give the producer time to initialize)
setTimeout(function () {
  eventPublisher.publishEvent("mykey", { "kenteken": "56-TAG-2", "country": "nl" })
}
  , 3000)

and ran the producer:


Then I created the consumer:

var kafka = require('kafka-node');
// async is used below for an orderly shutdown of the consumer group on SIGINT
// (add it to the dependencies if it is not already available)
var async = require('async');

var APP_VERSION = "0.8.5"
var APP_NAME = "KafkaConsumer"

var eventListenerAPI = module.exports;

var Consumer = kafka.Consumer

// from the Oracle Event Hub - Platform Cluster Connect Descriptor

var topicName = "a516817-kentekens";

console.log("Running Module " + APP_NAME + " version " + APP_VERSION);
console.log("Event Hub Topic " + topicName);

var KAFKA_BROKER_IP = '192.168.99.100:32400';

var consumerOptions = {
    kafkaHost: KAFKA_BROKER_IP,
    groupId: 'local-consume-events-from-event-hub-for-kenteken-applicatie',
    sessionTimeout: 15000,
    protocol: ['roundrobin'],
    fromOffset: 'earliest' // equivalent of auto.offset.reset valid values are 'none', 'latest', 'earliest'
};

var topics = [topicName];
var consumerGroup = new kafka.ConsumerGroup(Object.assign({ id: 'consumerLocal' }, consumerOptions), topics);
consumerGroup.on('error', onError);
consumerGroup.on('message', onMessage);

consumerGroup.on('connect', function () {
    console.log('connected to ' + topicName + " at " + consumerOptions.kafkaHost);
})

function onMessage(message) {
    console.log('%s read msg Topic="%s" Partition=%s Offset=%d'
    , this.client.clientId, message.topic, message.partition, message.offset);
}

function onError(error) {
    console.error(error);
    console.error(error.stack);
}

process.once('SIGINT', function () {
    async.each([consumerGroup], function (consumer, callback) {
        consumer.close(true, callback);
    });
});

and ran the consumer – which duly consumed the event published by the publisher. It is wonderful.


Resources

The main resource is the GitHub repo: https://github.com/Yolean/kubernetes-kafka . Absolutely great stuff.

Also useful: npm package kafka-node – https://www.npmjs.com/package/kafka-node

Documentation on Kubernetes: https://kubernetes.io/docs/user-journeys/users/application-developer/foundational/#section-2 – with references to Kubectl and Minikube – and the Katakoda playground: https://www.katacoda.com/courses/kubernetes/playground

The post 15 Minutes to get a Kafka Cluster running on Kubernetes – and start producing and consuming from a Node application appeared first on AMIS Oracle and Java Blog.

Remote and Programmatic Manipulation of Docker Containers from a Node application using Dockerode

Thu, 2018-04-19 02:23

In previous articles, I have talked about using Docker Containers in smart testing strategies by creating a container image that contains the baseline of the application and the required test setup (test data, for example). For each test, instead of doing complex setup actions and finishing off with elaborate tear-down steps, you simply spin up a container at the beginning and toss it away at the end.

I have shown how that can be done through the command line – but that of course is not a workable procedure. In this article I will provide a brief introduction of programmatic manipulation of containers. By providing access to the Docker Daemon API from remote clients (step 1) and by leveraging the npm package Dockerode (step 2) it becomes quite simple from a straightforward Node application to create, start and stop containers – as well as build, configure, inspect, pause them and manipulate in other ways. This opens up the way for build jobs to programmatically run tests by starting the container, running the tests against that container and killing and removing the container after the test. Combinations of containers that work together can be managed just as easily.

As I said, this article is just a very lightweight introduction.

Expose Docker Daemon API to remote HTTP clients

The step that took me the longest was exposing the Docker Daemon API. Subsequent versions of Docker have used different configurations for this, and apparently different Linux distributions also have different approaches. I was happy to find this article: https://www.ivankrizsan.se/2016/05/18/enabling-docker-remote-api-on-ubuntu-16-04 which describes how to enable access to the API for Ubuntu 16.x as Docker Host.

Edit file /lib/systemd/system/docker.service – add -H tcp://0.0.0.0:4243 to the entry that describes how to start the Docker Daemon in order to have it listen to incoming requests at port 4243 (note: other ports can be used just as well).
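
On my Ubuntu 16.x VM the resulting entry looked roughly like this (a sketch; the options already present in your unit file may differ, keep those and only append the -H flag):

# fragment of /lib/systemd/system/docker.service
[Service]
ExecStart=/usr/bin/dockerd -H fd:// -H tcp://0.0.0.0:4243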

Reload (systemctl daemon-reload) to apply the changed file configuration

Restart the Docker Service: service docker restart

And we are in business.

A simple check to see if HTTP requests on port 4243 are indeed received and handled: execute this command on the Docker host itself:

curl http://localhost:4243/version


The next step is the actual remote access. From a browser running on a machine that can ping successfully to the Docker Host – in my case that is the Virtual Box VM spun up by Vagrant, at IP 192.168.188.108 as defined in the Vagrantfile – open this URL: http://192.168.188.108:4243/version. The result should be similar to this:


Get going with Dockerode

To get started with npm package Dockerode is not any different really from any other npm package. So the steps to create a simple Node application that can list, start, inspect and stop containers in the remote Docker host are as simple as:

Use npm init to create the skeleton for a new Node application

Use

npm install dockerode --save

to retrieve Dockerode and create the dependency in package.json.

Create file index.js. Define the Docker Host IP address (192.168.188.108 in my case) and the Docker Daemon Port (4243 in my case) and write the code to interact with the Docker Host. This code will list all containers. Then it will inspect, start and stop a specific container (with identifier starting with db8). This container happens to run an Oracle Database – although that is not relevant in the scope of this article.

var Docker = require('dockerode');
var dockerHostIP = "192.168.188.108"
var dockerHostPort = 4243

var docker = new Docker({ host: dockerHostIP, port: dockerHostPort });

docker.listContainers({ all: true }, function (err, containers) {
    console.log('Total number of containers: ' + containers.length);
    containers.forEach(function (container) {
        console.log(`Container ${container.Names} - current status ${container.Status} - based on image ${container.Image}`)
    })
});

// create a container entity. does not query API
async function startStop(containerId) {
    var container = await docker.getContainer(containerId)
    try {
        var data = await container.inspect()
        console.log("Inspected container " + JSON.stringify(data))
        var started = await container.start();
        console.log("Started "+started)
        var stopped = await container.stop();
        console.log("Stopped "+stopped)
    } catch (err) {
        console.log(err);
    };
}
//invoke function
startStop('db8')

The output in Visual Studio Code looks like this:


And the action can be tracked on the Docker host like this (to prove it is real…).
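
Building on this, a sketch of the create-test-remove cycle described in the introduction could look as follows (the image name, container name and test step are hypothetical placeholders; Dockerode returns promises when no callback is passed):

var Docker = require('dockerode');
var docker = new Docker({ host: "192.168.188.108", port: 4243 });

// spin up a throwaway container from a (hypothetical) baseline image, run tests, clean up
async function runTestsAgainstFreshContainer() {
    var container = await docker.createContainer({ Image: 'my-baseline-image', name: 'testbed' });
    await container.start();
    try {
        // ... invoke the actual test suite against the testbed container here ...
    } finally {
        // tear down, regardless of the test outcome
        await container.stop();
        await container.remove();
    }
}

runTestsAgainstFreshContainer().catch(console.error);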

Resources

Article by Ivan Krizsan on configuring the Docker Daemon on Ubuntu 16.x – my life saver: https://www.ivankrizsan.se/2016/05/18/enabling-docker-remote-api-on-ubuntu-16-04

GitHub Repo for Dockerode – with examples and more: https://github.com/apocas/dockerode

Presentation at DockerCon 2016 that gave me the inspiration to use Dockerode: https://www.youtube.com/watch?v=1lCiWaLHwxo 

Docker docs on Configuring the Daemon – https://docs.docker.com/install/linux/linux-postinstall/#configure-where-the-docker-daemon-listens-for-connections


The post Remote and Programmatic Manipulation of Docker Containers from a Node application using Dockerode appeared first on AMIS Oracle and Java Blog.

Quickly spinning up Docker Containers with baseline Oracle Database Setup – for performing automated tests

Wed, 2018-04-18 07:00

Here is a procedure for running an Oracle Database, preparing a baseline in objects (tables, stored procedures) and data, creating an image of that baseline and subsequently running containers based on that baseline image. Each container starts with a fresh setup. For running automated tests that require test data to be available in a known state, this is a nice way of working.

The initial Docker container was created using an Oracle Database 11gR2 XE image: https://github.com/wnameless/docker-oracle-xe-11g.

Execute this statement on the Docker host:

docker run -d -p 49160:22 -p 49161:1521 -e ORACLE_ALLOW_REMOTE=true --name oracle-xe  wnameless/oracle-xe-11g

This will spin up a container called oracle-xe. After 5-20 seconds, the database is created and started and can be accessed from an external database client.
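
For example, with SQL*Plus and the default credentials documented for this image (an assumption on my part; verify them in the image's README), you can connect from the Docker host like this:

sqlplus system/oracle@//localhost:49161/xe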

From the database client, prepare the database baseline, for example:

create user newuser identified by newuser;

create table my_data (data varchar2(200));

insert into my_data values ('Some new data '||to_char(sysdate,'DD-MM HH24:MI:SS'));

commit;

 

These actions represent the complete database installation of your application – that may consists of hundreds or thousands of objects and MBs of data. The steps and the principles remain exactly the same.

At this point, create an image of the baseline – that consists of the vanilla database with the current application release’s DDL and DML applied to it:

docker commit --pause=true oracle-xe

This command returns an id, the identifier of the Docker image that is now created for the current state of the container – our base line. The original container can now be stopped and even removed.

docker stop oracle-xe

 

Spinning up a container from the base line image is now done with:

docker run -d -p 49160:22 -p 49161:1521 -e ORACLE_ALLOW_REMOTE=true  --name oracle-xe-testbed  <image identifier>

After a few seconds, the database has started up and remote database clients can start interacting with the database. They will find the database objects and data that was part of the baseline image. To perform a test, no additional set up nor any tear down is required.

Perform the tests that require performing. The tear down after the test consists of killing and removing the testbed container:

docker kill oracle-xe-testbed && docker rm oracle-xe-testbed

Now return to the step “Spinning up a container”.

Spinning up the container takes a few seconds – 5 to 10. The time is mainly taken up by the database processes that have to be started from scratch.
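
Putting the cycle together, a minimal test-driver script could look like this (a sketch; the baseline image identifier is the one returned by docker commit, and run_tests.sh is a hypothetical placeholder for the actual test suite):

#!/bin/bash
# first argument: the baseline image identifier returned by "docker commit --pause=true oracle-xe"
BASELINE_IMAGE=$1

# spin up a fresh testbed container from the baseline image
docker run -d -p 49160:22 -p 49161:1521 -e ORACLE_ALLOW_REMOTE=true --name oracle-xe-testbed "$BASELINE_IMAGE"

# give the database processes a few seconds to start
sleep 15

# run the tests against the database at port 49161 (hypothetical test runner)
./run_tests.sh

# tear down: kill and remove the testbed container
docker kill oracle-xe-testbed && docker rm oracle-xe-testbed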

It should be possible to create a snapshot of a running container (using Docker Checkpoints) and restore the testbed container from that snapshot. This create/start-from-checkpoint/kill/rm cycle should happen even faster than the run/kill/rm cycle that we have got going now. A challenge is the fact that opening the database does not just start processes and manipulate memory, but also handles files. That means that we need to commit the running container and associate the restored checkpoint with that image. I have been working on this at length, but I have not been successful yet, running into various issues (ORA-21561 OID generation failed, ORA-27101 shared memory realm does not exist, redo log file not found, ...). I continue to look into this.

Use Oracle Database 12c Image

Note: instead of the Oracle Database XE image used before, we can go through the same steps based, for example, on the image sath89/oracle-12c (see https://hub.docker.com/r/sath89/oracle-12c/).

The commands and steps are now:

docker pull sath89/oracle-12c

docker run -d -p 8080:8080 -p 1521:1521 --name oracle-db-12c sath89/oracle-12c

Connect from a client and create the baseline.

When the baseline database and database contents have been set up, create the container image of that state:

docker commit --pause=true oracle-db-12c

Returns an image identifier.

docker stop oracle-db-12c

Now to run a test iteration, run a container from the base line image:

docker run -d -p 1521:1521  --name oracle-db-12c-testbed  <image identifier>

Connect to the database at port 1521 or have the web application or API that is being tested make the connection.

 

Resources

The Docker Create Command: https://docs.docker.com/engine/reference/commandline/create/#parent-command

Nifty Docker commands in Everyday hacks for Docker:  https://codefresh.io/docker-tutorial/everyday-hacks-docker/

Circle CI Blog – Checkpoint and restore Docker container with CRIU – https://circleci.com/blog/checkpoint-and-restore-docker-container-with-criu/

The post Quickly spinning up Docker Containers with baseline Oracle Database Setup – for performing automated tests appeared first on AMIS Oracle and Java Blog.

How to install the Oracle Integration Cloud on premises connectivity agent (18.1.3)

Mon, 2018-04-16 02:00
Recapitulation on how to install the Oracle Integration Cloud on premises connectivity agent

Recently (April 2018) I gained access to the new Oracle Integration Cloud (OIC), version 18.1.3.180112.1616-762, and wanted to make an integration connection to an on-premise database. For this purpose, an on-premises connectivity agent needs to be installed, as is thoroughly explained by my colleague Robert van Mölken in his blog prepraring-to-use-the-ics-on-premises-connectivity-agent.

With the (new) Oracle Integration Cloud environment, the installation of the connectivity agent has changed slightly though, as shown below. It took me some effort to get the new connectivity agent working. Therefore I decided to recapture the steps needed in this blog. Hopefully, this will give you a head start in getting the connectivity agent up and running.


Prerequisites

  • Access to an Oracle Integration Cloud Service instance.
  • Rights to do some installation on a local / on-premise environment, Linux based (e.g. the SOA VirtualBox appliance).

 

Agent groups

For connection purposes you need to have an agent group defined in the Oracle Integration Cloud.

To define an agent group, you need to select the agents option in the left menu pane.  You can find any already existing agent groups here as well.

Select the ‘create agent group’ button to define a new agent group and fill in this tiny web form.


Downloading and extracting the connectivity agent

For downloading the connectivity agent software you also need to select the agents option in the left menu pane, followed by the download option in the upper menu bar.

After downloading you have a file called ‘oic_connectivity_agent.zip’, which takes 145.903.548 bytes.

This has a much smaller footprint than the former connectivity agent software (ics_conn_agent_installer_180111.0000.1050.zip, which takes 1.867.789.797 bytes).

For installation of the connectivity agent, you need to copy and extract the file to an installation folder of your choice on the on-premise host.
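
For example (the target folder is just an example; any location will do):

mkdir -p ~/oic_agent
unzip oic_connectivity_agent.zip -d ~/oic_agent
cd ~/oic_agent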

After extraction you see several files, amongst which ‘InstallerProfile.cfg’.


Setting configuration properties

Before starting the installation you need to edit the content of the file InstallerProfile.cfg.

Set the value for the property OIC_URL to the right hostname and sslPort *.

Also set the value for the property agent_GROUP_IDENTIFIER to the name of the agent group  you want the agent to belong to.

After filling in these properties save the file.
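
As a sketch, after editing, my InstallerProfile.cfg contained entries along these lines (the hostname, sslPort and agent group name below are placeholders; check the exact property names in your extracted file, as they may differ slightly per agent version):

# URL of the Oracle Integration Cloud instance: hostname and sslPort from the instance details page (see below)
OIC_URL=https://<oic-hostname>:<sslPort>

# name of the agent group the agent will belong to (placeholder)
agent_GROUP_IDENTIFIER=<agent-group-name>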


* On the instance details page you can see the right values for the hostname and sslPort. This is the page which shows you the WebLogic instances that host your OIC, and it looks something like this:

Certificates

For my trial purpose I didn’t need a certificate to communicate between the OIC and the on-premise environment.

But if you do, you can follow the next 2 steps:


a. Go to the agenthome/agent/cert/ directory.

b. Run the following command: keytool -importcert -keystore keystore.jks -storepass changeit -keypass password -alias alias_name  -noprompt -file certificate_file

 

Java JDK

Before starting the installation of the connectivity agent, make sure your Java JDK is at least version 8, with JAVA_HOME and PATH set.

To check this, open a terminal window and type: ‘java -version’ (without the quotes)

You should see the installed Java version, e.g. java version “1.8.0_131”.

To add JAVA_HOME to the PATH setting, type ‘export PATH=$JAVA_HOME/bin:$PATH’ (without the quotes) in a bash shell.

Running the installer

You can start the connectivity agent installer with the command: ‘java -jar connectivityagent.jar’ (again, without the quotes).

During the installation you are asked for your OIC username and corresponding password.

The installation finishes with a message that the agent was installed successfully and is now up and running.

Check the installed agent

You can check that the agent is communicating within the agent group you specified.

Behind the name of the agent group, the number of agents communicating within it is shown.


The post How to install the Oracle Integration Cloud on premises connectivity agent (18.1.3) appeared first on AMIS Oracle and Java Blog.

Oracle API Platform Cloud Service: using the Management Portal and creating an API (including some policies)

Sat, 2018-04-14 13:15

At the Oracle Partner PaaS Summer Camps VII 2017 in Lisbon last year, at the end of August, I attended the API Platform Cloud Service & Integration Cloud Service bootcamp.

In a series of articles I will give a high-level overview of what you can do with Oracle API Platform Cloud Service.

At the Summer Camp a pre-built Oracle VM VirtualBox APIPCS appliance (APIPCS_17_3_3.ova) was provided to us, to be used in VirtualBox. Everything needed to run a complete demo of API Platform Cloud Service is contained within Docker containers that are staged in that appliance. The version of Oracle API Platform CS, used within the appliance, is Release 17.3.3 — August 2017.

See https://docs.oracle.com/en/cloud/paas/api-platform-cloud/whats-new/index.html to learn about the new and changed features of Oracle API Platform CS in the latest release.

In this article in the series about Oracle API Platform CS, the focus will be on the Management Portal and creating an API (including some policies).

Be aware that the screenshots in this article, and the examples provided, are based on a demo environment of Oracle API Platform CS and were created by using the Oracle VM VirtualBox APIPCS appliance mentioned above.

This article only covers part of the functionality of Oracle API Platform CS. For more detail I refer you to the documentation: https://cloud.oracle.com/en_US/api-platform.

Short overview of Oracle API Platform Cloud Service

Oracle API Platform Cloud Service enables companies to thrive in the digital economy by comprehensively managing the full API lifecycle from design and standardization to documenting, publishing, testing and managing APIs. These tools provide API developers, managers, and users an end-to-end platform for designing, prototyping, documenting, testing and managing APIs. Through the platform, users gain the agility needed to support changing business demands and opportunities, while having clear visibility into who is using APIs for better control, security and monetization of digital assets.
[https://cloud.oracle.com/en_US/api-platform/datasheets]

Architecture

Management Portal:
APIs are managed, secured, and published using the Management Portal.
The Management Portal is hosted on the Oracle Cloud, managed by Oracle, and users granted
API Manager privileges have access.

Gateways:
API Gateways are the runtime components that enforce all policies, but also help in
collecting data for analytics. The gateways can be deployed anywhere – on premise, on Oracle
Cloud or to any third party cloud providers.

Developer Portal:
After an API is published, Application Developers use the Developer Portal to discover, register, and consume APIs. The Developer Portal can be customized to run either on the Oracle Cloud or directly in the customer environment on premises.
[https://cloud.oracle.com/opc/paas/datasheets/APIPCSDataSheet_Jan2018.pdf]

Oracle Apiary:
In my article “Oracle API Platform Cloud Service: Design-First approach and using Oracle Apiary”, I talked about using Oracle Apiary and interacting with its Mock Server for the “HumanResourceService” API, I created earlier.

The Mock Server for the “HumanResourceService” API is listening at:
http://private-b4874b1-humanresourceservice.apiary-mock.com
[https://technology.amis.nl/2018/01/31/oracle-api-platform-cloud-service-design-first-approach-using-oracle-apiary/]

Roles

Within Oracle API Platform CS roles are used.

Roles determine which interfaces a user is authorized to access and the grants they are eligible to receive.
[https://docs.oracle.com/en/cloud/paas/api-platform-cloud/apfad/api-platform-cloud-service-roles-resources-actions-and-grants.html]

  • Administrator
    System Administrators responsible for managing the platform settings. Administrators possess the rights of all other roles and are eligible to receive grants for all objects in the system.
  • API Manager
    People responsible for managing the API lifecycle, which includes designing, implementing, and versioning APIs. Also responsible for managing grants and applications, providing API documentation, and monitoring API performance.
  • Application Developer
    API consumers granted self-service access rights to discover and register APIs, view API documentation, and manage applications using the Developer Portal.
  • Gateway Manager
    Operations team members responsible for deploying, registering, and managing gateways. May also manage API deployments to their gateways when issued the Deploy API grant by an API Manager.
  • Gateway Runtime
    This role indicates a service account used to communicate from the gateway to the portal. This role is used exclusively for gateway nodes to communicate with the management service; users assigned this role can’t sign into the Management Portal or the Developer Portal.
  • Service Manager
    People responsible for managing resources that define backend services. This includes managing service accounts and services.
  • Plan Manager
    People responsible for managing plans.

Within the Oracle VM VirtualBox APIPCS appliance the following users (all with password welcome1) are present and used by me in this article:

  • User: api-manager-user, Role: APIManager
  • User: api-gateway-user, Role: GatewayManager

Design-First approach

Design is critical as a first step for great APIs. Collaboration ensures that we are creating the correct design. In my previous article “Oracle API Platform Cloud Service: Design-First approach and using Oracle Apiary”, I talked about the Design-First approach and using Oracle Apiary. I designed a “HumanResourceService” API.
[https://technology.amis.nl/2018/01/31/oracle-api-platform-cloud-service-design-first-approach-using-oracle-apiary/]

So with a design in place, an application developer could begin working on the front-end, while service developers work on the back-end implementation and others can work on the API implementation, all in parallel.

Create an API, via the Management Portal (api-manager-user)

Start the Oracle API Platform Cloud – Management Portal as user api-manager-user.

After a successful sign in, the “APIs” screen is visible.

Create a new API via a click on button “Create API”. Enter the following values:

  • Name: HumanResourceService
  • Version: 1
  • Description: Human Resource Service is an API to manage Human Resources.

Next, click on button “Create”.

After a click on the “HumanResourceService” API, the next screen appears (with tab “APIs” selected):

Here you can see on the left, that the tab “API Implementation” is selected.

First I will give you a short overview, with screenshots, of each of the tabs on the left. Some of these I will explain in more detail as I walk you through some of the functionality of Oracle API Platform CS.

Tab “API Implementation” of the “HumanResourceService” API

Tab “Deployments” of the “HumanResourceService” API

Tab “Publication” of the “HumanResourceService” API

Tab “Grants” of the “HumanResourceService” API

API grants are issued per API.

The following tabs are visible and can be chosen:

  • Manage API
    Users issued this grant are allowed to modify the definition of and issue grants for this API.
  • View all details
    Users issued this grant are allowed to view all information about this API in the Management Portal.
  • Deploy API
    Users issued this grant are allowed to deploy or undeploy this API to a gateway for which they have deploy rights. This allows users to deploy this API without first receiving a request from an API Manager.
  • View public details
    Users issued this grant are allowed to view the publicly available details of this API on the Developer Portal.
  • Register
    Users issued this grant are allowed to register applications for this plan.
  • Request registration
    Users issued this grant are allowed to request to register applications for this plan.

Users and groups issued grants for a specific API have the privileges to perform the associated actions on that API. See for more information: https://docs.oracle.com/en/cloud/paas/api-platform-cloud/apfad/managing-api-grants.html.

Tab “Registrations” of the “HumanResourceService” API

Tab “Analytics” of the “HumanResourceService” API

Tab “API Implementation” of the “HumanResourceService” API

After you create an API, you can apply policies to configure the Request and Response flows. Policies in the Request flow secure, throttle, route, manipulate, or log requests before they reach the backend service. Policies in the Response flow manipulate and log responses before they reach the requesting client.
[https://docs.oracle.com/en/cloud/paas/api-platform-cloud/apfad/implementing-apis.html]

Request flow, configuring the API Request URL

The API Request URL is the endpoint to which users or applications send requests for your API. You configure part of this URL. This endpoint resides on the gateway on which the API is deployed. The API will be deployed later.

The full address to which requests are sent consists of the protocol used, the gateway hostname, the API Request endpoint, and any private resource paths available for your service.

<protocol>://<hostname and port of the gateway node instance>/<API Request endpoint>/<private resource path of the API>

Anything beyond the API Request endpoint is passed to the backend service.

Hover over the “API Request” policy and then, on the right, click the icon “Edit policy details”. Enter the following values:

  • Your Policy Name: API Request
  • Comments: (none)
  • Configuration | Protocol: HTTP ://MyGatewayIP/
  • Configuration | API Endpoint URL: HumanResourceService/1

Next, click on button “Apply”.

In the pop-up, click on button “Save Changes”.
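For example, combined with the Load Balancer URL of the “Production Gateway” gateway that is used later on in this article (http://apics.oracle.com:8001), a request for all employees will be sent to:

http://apics.oracle.com:8001/HumanResourceService/1/employees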

Request flow, configuring the Service Request URL

The Service Request is the URL at which your backend service receives requests.

When a request meets all policy conditions, the gateway routes the request to this URL and calls your service. Note that the Service Request URL can point to any of your service’s resources, not just its base URL. This way you can restrict users to access only a subset of your API’s resources.

Hover over the “Service Request” policy and then, on the right, click the icon “Edit policy details”. Enter the following values:

Configure Headers – Service | Enter a URL: <Enter the Apiary Mock Service URL>

For example:
http://private-b4874b1-humanresourceservice.apiary-mock.com

Remark:
Remove the “/employees” from the Mock Service URL, so the API can be designed to call multiple end-points such as “/departments”.

  • Use Gateway Node Proxy: unchecked
  • Service Account: None

Next, click on button “Apply”.

In the pop-up, click on button “Save Changes”.

Oftentimes, there are multiple teams participating in the development process. There may be front-end developers creating a new mobile app or chatbot, there can be a backend services and integration team and of course the API team.

If the backend service is not yet ready, you can still start creating the API. Perhaps you may want to begin with a basic implementation (for example an Apiary Mock Service URL) so your front-end developers are already pointing to the API, even before it is fully operational.

Response Flow

Click the Response tab to view a top-down visual representation of the response flow. The Service and API Response entries can’t be edited.
The Service Response happens first. The response from the backend service is always the first entry in the outbound flow. You can place additional policies in this flow. Policies are run in order, with the uppermost policy run first, followed by the next policy, and so on, until the response is sent back to the client.
The API Response entry is a visual representation of the point in the outbound flow when the response is returned to the requesting client.
[https://docs.oracle.com/en/cloud/paas/api-platform-cloud/apfad/implementing-apis.html]

Deploy an API to the Gateway, via the Management Portal (api-manager-user)

On the left, click on tab “Deployments”.

Next, click on button “Deploy API”.

In the pop-up “Deploy API” there are no gateways, or they are not visible for the current user. So in order to find out what the situation is about the gateways, we have to sign in, in the Oracle API Platform Cloud – Management Portal as a Gateway Manager. There we also can grant the privileges needed to deploy the API. How you do this is described later on in this article.

For now we continue as if the correct privileges were already in place.

So in the pop-up “Deploy API”, select the “Production Gateway” gateway and click on button ‘Deploy”.

For a short while a pop-up “Deployment request submitted” appears.

Next, click on tab “Requesting” where we can see the request (for an API deployment to a gateway), the user api-manager-user sent to the Gateway Manager. The “Deployment State” is REQUESTING. So now we have to wait for the approval of the Gateway Manager.

Sign in to the Oracle API Platform Cloud – Management Portal as user api-gateway-user

In the top right of the Oracle API Platform Cloud – Management Portal click on the api-manager-user and select ”Sign Out”. Next, Sign in as user api-gateway-user.

After a successful sign in, the “Gateways” screen is visible.

Because this user is only a Gateway Manager, only the tab “Gateways” is visible.

At the moment (in this demo environment) there is one gateway available, being the “Production Gateway”. After a click on the “Production Gateway” gateway, the next screen appears:

Here you can see on the left, that the tab “Settings” is selected.

First I will give you a short overview, with screenshots, of each of the tabs on the left. Some of these I will explain in more detail as I walk you through some of the functionality of Oracle API Platform CS.

Tab “Settings” of the “Production Gateway” gateway

Have a look at the “Load Balancer URL” (http://apics.oracle.com:8001), which we will be using later on in this article.

Tab “Nodes” of the “Production Gateway” gateway

Tab “Deployments” of the “Production Gateway” gateway

Tab “Grants” of the “Production Gateway” gateway

Tab “Analytics” of the “Production Gateway” gateway

Tab “Grants” of the “Production Gateway” gateway

On the left, click on tab “Grants”.

Grants are issued per gateway.

The following tabs are visible and can be chosen:

  • Manage Gateway
    Users issued this grant are allowed to manage API deployments to this gateway and manage the gateway itself.

    Remark:
    The api-gateway-user (with role GatewayManager) is granted the “Manage Gateway” privilege.

  • View all details
    Users issued this grant are allowed to view all information about this gateway.
  • Deploy to Gateway
    Users issued this grant are allowed to deploy or undeploy APIs to this gateway.
  • Request Deployment to Gateway
    Users issued this grant are allowed to request API deployments to this gateway.
  • Node service account
    Gateway Runtime service accounts are issued this grant to allow them to download configuration and upload statistics.

Users issued grants for a specific gateway have the privileges to perform the associated actions on that gateway. See for more information: https://docs.oracle.com/en/cloud/paas/api-platform-cloud/apfad/managing-gateway-grants.html.

Click on tab “Request Deployment to Gateway”.

Next, click on button “Add Grantee”.

Select “api-manager-user” and click on button “Add”.

So now, the user api-manager-user (with Role APIManager) is granted the “Request Deployment to Gateway” privilege.

Remark:
In practice you would probably grant to a group instead of to a single user.

Be aware that you could also grant the “Deploy to Gateway” privilege, so approval of the Gateway Manager (for deploying an API to a gateway) is not needed anymore in that case. This makes sense if it concerns a development environment, for example. Since the Oracle VM VirtualBox APIPCS appliance is using a “Production Gateway” gateway, in this article, I chose for the request and approve mechanism.

Approve a request for an API deployment to a gateway, via the Management Portal (api-gateway-user)

On the left, click on tab “Deployments” and then click on tab “Requesting”.

Hover over the “HumanResourceService” API, then click on button “Approve”.

In the pop-up, click on button “Yes”.

Then you can see that on the tab “Waiting”, the deployment is waiting.

Remark:
The deployment enters a Waiting state and the logical gateway definition is updated. The endpoint is deployed the next time gateway node(s) poll the management server for the updated gateway definition.

So after a short while, you can see on the tab “Deployed”, that the deployment is done.

After a click on the top right icon “Expand”, more details are shown:

So now the “HumanResourceService” API is deployed on the “Production Gateway” gateway (Node 1). We can also see the active policies in the Request and Response flow of the API Implementation.

It is time to invoke the API.

Invoke method “GetAllEmployees” of the “HumanResourceService” API, via Postman

For invoking the “HumanResourceService” API I used Postman (https://www.getpostman.com) as a REST Client tool.

In Postman, I created a collection named “HumanResourceServiceCollection”(in order to bundle several requests) and created a request named “GetAllEmployeesRequest”, providing method “GET” and request URL “http://apics.oracle.com:8001/HumanResourceService/1/employees”.

Remember the “API Request URL”, I configured partly in the “API Request” policy and the “Load Balancer URL” of the “Production Gateway” gateway? They make up the full address to which requests have to be sent.

After clicking on button Send, a response with “Status 200 OK” is shown:

Because I have not applied any extra policies, the request is passed to the backend service without further validation. This is simply the “proxy pattern”.
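The same request can of course also be sent from the command line instead of Postman, for example with curl (the -i flag simply includes the response status and headers in the output):

curl -i http://apics.oracle.com:8001/HumanResourceService/1/employees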

Later on in this article, I will add some policies and send additional requests to validate each one of them.

Tab “Analytics” of the “Production Gateway” gateway

Go back to the Management Portal (api-gateway-user) and in the tab “Analytics” the request I sent, is visible at “Total Requests”.

If we look, for example, at “Requests By Resource”, the request is also visible.

Policies

Policies in API Platform CS serve a number of purposes. You can apply any number of policies to an API definition to secure, throttle, limit traffic, route, or log requests sent to your API. Depending on the policies applied, requests can be rejected if they do not meet criteria you specify when configuring each policy. Policies are run in the order they appear on the Request and Response tabs. A policy can be placed only in certain locations in the execution flow.

The available policies are:

Security:

  • OAuth 2.0 | 1.0
  • Key Validation | 1.0
  • Basic Auth | 1.0
  • Service Level Auth | 1.0 Deprecated
  • IP Filter Validation | 1.0
  • CORS | 1.0

Traffic Management:

  • API Throttling – Delay | 1.0
  • Application Rate Limiting | 1.0
  • API Rate Limiting | 1.0

Interface Management:

  • Interface Filtering | 1.0
  • Redaction | 1.0
  • Header Validation | 1.0
  • Method Mapping | 1.0

Routing:

  • Header Based Routing | 1.0
  • Application Based Routing | 1.0
  • Gateway Based Routing | 1.0
  • Resource Based Routing | 1.0

Other:

  • Service Callout | 2.0
  • Service Callout | 1.0
  • Logging | 1.0
  • Groovy Script | 1.0

As an example I have created two policies: Key Validation (Security) and Interface Filtering (Interface Management).

Add a Key Validation Policy, via the Management Portal (api-manager-user)

Use a key validation policy when you want to reject requests from unregistered (anonymous) applications.

Keys are distributed to clients when they register to use an API on the Developer Portal. At runtime, if the key is not present in the given header or query parameter, or if the application is not registered, the request is rejected; the client receives a 400 Bad Request error if no key validation header or query parameter is passed, or a 403 Forbidden error if an invalid key is passed.
[https://docs.oracle.com/en/cloud/paas/api-platform-cloud/apfad/implementing-apis.html#GUID-5CBFE528-A74E-4700-896E-154378818E3A]

This policy requires that you create and register an application, which is described in my next article.

In the top right of the Oracle API Platform Cloud – Management Portal sign in as user api-manager-user.

Navigate to tab “API Implementation” of the “HumanResourceService” API, and then in the “Available Policies” region, expand “Security”. Hover over the “Key Validation” policy and then, on the right, click the icon “Apply”. Enter the following values:

  • Your Policy Name: Key Validation
  • Comments: (none)
  • Place after the following policy: API Request

Then, click on icon “Next”. Enter the following values:

  • Key Delivery Approach: Header
  • Key Header: application-key

Click on button “Apply as Draft”.

Next, click on button “Save Changes”.

I applied this as a draft policy, represented as a dashed line around the policy. Draft policies let you “think through” what you want before you have the complete implementation details. This enables you to complete the bigger picture in one sitting and to leave reminders of what is missing to complete the API later.
When you deploy an API, draft policies are not deployed.
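Once the policy is activated (and an application has been registered, as described in the next article), a client has to supply its application key in the configured header. A sketch with curl, using a made-up key value:

curl -i -H "application-key: 1a2b3c4d5e6f" http://apics.oracle.com:8001/HumanResourceService/1/employees

Without the header the gateway returns a 400 Bad Request; with a key that does not belong to a registered application it returns a 403 Forbidden.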

Add an Interface Filtering Policy, via the Management Portal (api-manager-user)

Use an interface filtering policy to filter requests based on the resources and methods specified in the request.
[https://docs.oracle.com/en/cloud/paas/api-platform-cloud/apfad/implementing-apis.html#GUID-69B7BC21-416B-4262-9CE2-9896DEDF2144]

Navigate to tab “API Implementation” of the “HumanResourceService” API, and then in the “Available Policies” region, expand “Interface Management”. Hover over the “Interface Filtering” policy and then, on the right, click the icon “Apply”. Enter the following values:

  • Your Policy Name: Interface Filtering
  • Comments: (none)
  • Place after the following policy: Key Validation

Then, click on icon “Next”.

In the table below I summarized the requests that I created in the Oracle Apiary Mock Server for the “HumanResourceService” API:

  • GetAllEmployeesRequest (GET): http://private-b4874b1-humanresourceservice.apiary-mock.com/employees
  • CreateEmployeeRequest (POST): http://private-b4874b1-humanresourceservice.apiary-mock.com/employees
  • GetEmployeeRequest (GET): http://private-b4874b1-humanresourceservice.apiary-mock.com/employees/100
  • UpdateEmployeeRequest (PUT): http://private-b4874b1-humanresourceservice.apiary-mock.com/employees/219
  • GetDepartmentRequest (GET): http://private-b4874b1-humanresourceservice.apiary-mock.com/departments/30
  • GetDepartmentEmployeeRequest (GET): http://private-b4874b1-humanresourceservice.apiary-mock.com/departments/30/employees/119

I want to use an interface filtering policy to filter requests. As an example, I want to pass requests (to the backend service) with the method GET specified in the request and a resource starting with employees followed by an identification or starting with departments followed by employees and an identification.

Select “Pass” from the list.

At “Filtering Conditions”, “Condition 1” enter the following values:

  • Resources: /employees/* ; /departments/*/employees/*
  • Methods: GET

Click on button “Apply”.

Next, click on button “Save Changes”.

I applied this policy as an active policy, represented as a solid line around the policy.

Redeploy the API, via the Management Portal (api-manager-user)

Navigate to tab “Deployments” of the “HumanResourceService” API, and then hover over the “Production Gateway” gateway and then, on the right, hover over the icon “Redeploy”.

Next, click on icon “Latest Iteration”.

In the pop-up, click on button “Yes”. For a short while a pop-up “Redeploy request submitted” appears.

Then repeat the steps described before in this article, to approve the request, by switching to a Gateway Manager.

Remark:
Click on “Latest Iteration” to deploy the most recently saved iteration of the API.
Click on “Current Iteration” to redeploy the currently deployed iteration of the API.

After that, it is time to try out the effect of adding the “Interface Filtering” policy.

Validating the “Interface Filtering” policy, via Postman

In Postman for each request mentioned earlier (in the table), I created that request within the collection named “HumanResourceServiceCollection”.

Then again I invoked each request, to validate it against the “Interface Filtering” policy.

Invoke method “GetAllEmployees” of the “HumanResourceService” API

From Postman I invoked the request named “GetAllEmployeesRequest” (with method “GET” and URL “http://apics.oracle.com:8001/HumanResourceService/1/employees”) and a response with “Status 405 Method Not Allowed” is shown:

Invoke method “CreateEmployee” of the “HumanResourceService” API

From Postman I invoked the request named “CreateEmployeeRequest” (with method “POST” and URL “http://apics.oracle.com:8001/HumanResourceService/1/employees”) and a response with “Status 405 Method Not Allowed” is shown:

Invoke method “GetEmployee” of the “HumanResourceService” API

From Postman I invoked the request named “GetEmployeeRequest” (with method “GET” and URL “http://apics.oracle.com:8001/HumanResourceService/1/employees/100”) and a response with “Status 200 OK” is shown:

Invoke method “UpdateEmployee” of the “HumanResourceService” API

From Postman I invoked the request named “UpdateEmployeeRequest” (with method “PUT” and URL “http://apics.oracle.com:8001/HumanResourceService/1/employees/219”) and a response with “Status 405 Method Not Allowed” is shown:

Invoke method “GetDepartment” of the “HumanResourceService” API

From Postman I invoked the request named “GetDepartmentRequest” (with method “GET” and URL “http://apics.oracle.com:8001/HumanResourceService/1/departments/30”) and a response with “Status 405 Method Not Allowed” is shown:

Invoke method “GetDepartmentEmployee” of the “HumanResourceService” API

From Postman I invoked the request named “GetDepartmentEmployeeRequest” (with method “GET” and URL “http://apics.oracle.com:8001/HumanResourceService/1/departments/30/employees/119”) and a response with “Status 200 OK” is shown:

Tab “Analytics” of the “Production Gateway” gateway

In the top right of the Oracle API Platform Cloud – Management Portal sign in as user api-gateway-user and click on the “Production Gateway” gateway and navigate to the tab “Analytics”.

In this tab the requests I sent, are visible at “Total Requests”.

If we look, for example, at “Requests By Resource”, the requests are also visible.

Next, click on icon “Error and Rejections (4 Total)” and if we look, for example, at “Rejection Distribution”, we can see that there were 4 request rejections, because of policy “Interface Filtering”.

So the “Interface Filtering” policy is working correctly.

Summary

As a follow-up from my previous articles about Oracle API Platform Cloud Service, in this article the focus is on using the Management Portal and creating the “HumanResourceService” API (including some policies).

As an example I have created two policies: Key Validation (Security) and Interface Filtering (Interface Management). The latter policy I deployed to a gateway and validated that it worked correctly, using requests which I created in Postman.

While using the Management Portal in this article, I focused on the roles “API Manager” and “Gateway Manager”. For example, the user api-gateway-user had to approve a request from the api-manager-user to deploy an API to a gateway.

In a next article the focus will be on validating the “Key Validation” policy and using the “Developer Portal”.

The post Oracle API Platform Cloud Service: using the Management Portal and creating an API (including some policies) appeared first on AMIS Oracle and Java Blog.

A DBA’s first steps in Jenkins

Thu, 2018-04-12 02:56

My customer wanted an automated way to refresh an application database to a known state, to be done by non-technical personnel. As a DBA I know a lot of scripting and can build some small web interfaces, but why bother when there are readily available tools like Jenkins? Jenkins is mostly a CI/CD developer thing that for a classical DBA is a bit of magic. I decided to try this tool to script the refreshing of my application.

Success

 

Getting started

First, fetch the Jenkins distribution from https://jenkins-ci.org; I used the latest version of jenkins.war. Place the jenkins.war file in a desired location and you’re almost set to go. Set the environment variable JENKINS_HOME to a sane value, otherwise your Jenkins settings, data and workdir will end up in $HOME/.jenkins/.

Start Jenkins by using the following commandline:

java -jar jenkins.war --httpPort=8024

You may want to make a start script to automate this step. Please note the --httpPort argument: choose an available port number (and make sure the firewall is opened for this port).
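A minimal start script could look like this (a sketch; adjust JENKINS_HOME, the location of jenkins.war and the port to your own situation):

#!/bin/bash
export JENKINS_HOME=/u01/app/jenkins_home      # example location for the Jenkins settings, data and workdir
cd /u01/app/jenkins                            # example location of jenkins.war
nohup java -jar jenkins.war --httpPort=8024 > jenkins.log 2>&1 &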

When starting Jenkins for the first time it creates a password that it shows in the standard output. When you open the Jenkins web interface for the first time you need this password. After logging in, install the recommended plugins; this set should at least include the Pipeline plugin. The next step will create your admin user account.

Creating a Pipeline build job.

Navigate to “New Item” to start creating your first pipeline. Type a descriptive name and choose “Pipeline” as the type.

myfirstpipeline

After creating the job, you can start building the pipeline. In my case I needed four steps: stopping the WebLogic servers, clearing the schemas, importing the schemas and fixing stuff, and finally starting WebLogic again.

The Pipeline scripting language is quite extensive, I only used the bare minimum of the possibilities, but at least it gets my job done. The actual code can be entered in the configuration of the job, in the pipeline script field. A more advanced option could be to retrieve your Pipeline code (plus additional scripts) from a SCM like Git or Bitbucket.

empty_pipeline

 

The code below is my actual code to allow the refresh of the application:

pipeline {
    agent any
    stages {
        stage ('Stop Weblogic') {
            steps { 
                echo 'Stopping Weblogic'
                sh script: '/u01/app/oracle/product/wls12212/oracle_common/common/bin/wlst.sh /home/oracle/scripts/stopServers.py'
            }
        }
        stage ( 'Drop OWNER') {
            steps {
                echo "Dropping the Owner"
                sh script: 'ssh dbhost01 "export ORACLE_SID=theSID; export ORAENV_ASK=no;\
                            source oraenv -s ; sqlplus /@theSID @ scripts/drop_tables.sql"'
            }
        }
        stage ( 'Import OWNER' ) {
            steps {
                echo 'Importing OWNER'
                sh script: 'ssh dbhost01 "export ORACLE_SID=theSID; export ORAENV_ASK=no;\
                            source oraenv -s ; impdp /@theSID directory=thedirforyourdump \
                            dumpfile=Youknowwhichfiletoimport.dmp \
                            logfile=import-`date +%F-%h%m`.log \
                            schemas=ONLY_OWNER,THE_OTHER_OWNER,SOME_OTHER_REQUIRED_SCHEMA \
                            exclude=USER,SYNONYM,VIEW,TYPE,PACKAGE,PACKAGE_BODY,PROCEDURE,FUNCTION,ALTER_PACKAGE_SPEC,ALTER_FUNCTION,ALTER_PROCEDURE,TYPE_BODY"', returnStatus: true

				 echo 'Fixing invalid objects'           
                 sh script: 'ssh dbhost01 "export ORACLE_SID=theSID; export ORAENV_ASK=no;\
                            source oraenv -s ; sqlplus / as sysdba @?/rdbms/admin/utlrp"'    
				 
                 echo 'Gathering statistics in the background'
                 sh script: 'ssh dbhost01 "export ORACLE_SID=theSID; export ORAENV_ASK=no;\
                            source oraenv -s ; sqlplus /@theSID @ scripts/refresh_stats.sql"'
            }
        }
        stage ( 'Start Weblogic' ) {
            steps {
                echo 'Starting Weblogic'
                sh script: '/u01/app/oracle/product/wls12212/oracle_common/common/bin/wlst.sh /home/oracle/scripts/startServers_turbo.py'
            }
        }
    }
}

In this script you can see the four global steps, but some steps are more involved. In this situation I decided not to completely drop the schemas associated with the application, because the dump file could come from a different environment with different passwords. Additionally, I only import the known schemas here; if the supplied dump file accidentally contains additional schemas, the errors in the log would be enormous because the user accounts are not created in the import stage.

When the job is saved, you can try a Build. This will run your job and you can monitor the console output to see how it is doing.

SQL*Plus with wallet authentication

The observant types among you may have noticed that I used a wallet for authentication with SQL*Plus and impdp. As this tool would be used by people who should not get DBA passwords, using a password on the command line is not recommended: note that all the commands above and their output would be logged in plain text. So I decided to start making use of a wallet for the account information. Most steps are well documented, but I found that the step of making the wallet autologin capable (not needing to type a wallet password all the time) was documented using the GUI tool, but not the command line tool. Luckily there are ways of doing that on the command line.

mkdir -p $ORACLE_HOME/network/admin/wallet
mkstore -wrl $ORACLE_HOME/network/admin/wallet/ -create
mkstore -wrl $ORACLE_HOME/network/admin/wallet -createCredential theSID_system system 'YourSuperSekritPassword'
orapki wallet create -wallet $ORACLE_HOME/network/admin/wallet -auto_login

sqlnet.ora needs to contain some information so the wallet can be found:

WALLET_LOCATION =
  (SOURCE = (METHOD = FILE)
    (METHOD_DATA = (DIRECTORY = <<ORACLE_HOME>>/network/admin/wallet)))
SQLNET.WALLET_OVERRIDE = TRUE

Also make sure a tnsnames entry is added for your wallet credential name (above: theSID_system). Now using ‘sqlplus /@theSID_system’ should connect you to the database as the configured user.
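For completeness, a sketch of such a tnsnames entry, here assuming host dbhost01, the default listener port 1521 and a service name equal to theSID (verify these values against your own database):

cat >> $ORACLE_HOME/network/admin/tnsnames.ora <<'EOF'
theSID_system =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = dbhost01)(PORT = 1521))
    (CONNECT_DATA = (SERVICE_NAME = theSID))
  )
EOF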

Asking Questions

The first job was quite static: always the same dump, or I need to edit the pipeline code to change the named dumpfile… not as flexible as I would like… Can Jenkins help me here? Luckily, YES:

    def dumpfile
    def dbhost = 'theHost'
    def dumpdir = '/u01/oracle/admin/THESID/dpdump'

    pipeline {
    agent any
    stages {
        stage ('Choose Dumpfile') {
            steps {
                script {
                    def file_collection
                    file_collection = sh script: "ssh $dbhost 'cd $dumpdir; ls *X*.dmp *x*.dmp 2>/dev/null'", returnStdout: true
                    dumpfile = input message: 'Choose the right dump', ok: 'This One!', parameters: [choice(name: 'dump file', choices: "${file_collection}", description: '')]
                }
            }
        }
        stage ('Stop Weblogic') {
            steps { 
                echo 'Stopping Weblogic'
                sh script: '/u01/app/oracle/product/wls12212/oracle_common/common/bin/wlst.sh /home/oracle/scripts/stopServers.py'
            }
        }
        stage ( 'Drop OWNER') {
            steps {
                echo "Dropping Owner"
                sh script: "ssh $dbhost 'export ORACLE_SID=theSID; export ORAENV_ASK=no;\
                            source oraenv; sqlplus /@theSID @ scripts/drop_tables.sql'"
            }
        }
        stage ( 'Import OWNER' ) {
            steps {
                echo 'Import OWNER'
                sh script: "ssh $dbhost 'export ORACLE_SID=theSID; export ORAENV_ASK=no;\
                            source oraenv; impdp /@theSID directory=dump \
                            dumpfile=$dumpfile \
                            logfile=import-`date +%F@%H%M%S`.log \
                            schemas=MYFAVOURITE_SCHEMA,SECONDOWNER \
                            exclude=USER,SYNONYM,VIEW,TYPE,PACKAGE,PACKAGE_BODY,PROCEDURE,FUNCTION,ALTER_PACKAGE_SPEC,ALTER_FUNCTION,ALTER_PROCEDURE,TYPE_BODY'", returnStatus: true
                            
                 sh script: "ssh $dbhost 'export ORACLE_SID=theSID; export ORAENV_ASK=no;\
                            source oraenv; sqlplus / as sysdba @?/rdbms/admin/utlrp'"
                            
                 sh script: "ssh $dbhost 'export ORACLE_SID=theSID; export ORAENV_ASK=no;\
                            source oraenv; sqlplus /@theSID @ scripts/refresh_stats.sql'"
            }
        }
        stage ( 'Start Weblogic' ) {
            steps {
                echo 'Starting Weblogic'
                sh script: '/u01/app/oracle/product/wls12212/oracle_common/common/bin/wlst.sh /home/oracle/scripts/startServers_turbo.py'
            }
        }
    }
}

The first stage actually looks at the place where all the dumpfiles are to be found and does a ls on it. This listing is then stored in a variable that will be split into choices. The running job will wait for input, so no harm is done until the choice is made.

Starting a build like this will pause, you can see that when looking at the latest running build in the build queue.

When clicking the link the choice can be made (or the build can be aborted)


The post A DBA’s first steps in Jenkins appeared first on AMIS Oracle and Java Blog.

First steps with Docker Checkpoint – to create and restore snapshots of running containers

Sun, 2018-04-08 01:31

Docker containers can be stopped and started again. Changes made to the file system in a running container will survive this deliberate stop and start cycle. Data in memory and running processes obviously do not. A container that crashes cannot simply be restarted, and if it can be restarted its file system will be in an undetermined state. When you start a container after it was stopped, it will go through its full startup routine. If heavy duty processes need to be started – such as a database server process – this startup time can be substantial, as in many seconds or dozens of seconds.

Linux has a mechanism called CRIU or Checkpoint/Restore In Userspace. Using this tool, you can freeze a running application (or part of it) and checkpoint it as a collection of files on disk. You can then use the files to restore the application and run it exactly as it was during the time of the freeze. See https://criu.org/Main_Page for details. Docker CE has (experimental) support for CRIU. This means that using straightforward docker commands we can take a snapshot of a running container (docker checkpoint create <container name> <checkpointname>). At a later moment, we can start this snapshot as the same container (docker start --checkpoint <checkpointname> <container name>) or as a different container.

The container that is started from a checkpoint is in the same state – memory and processes – as the container was when the checkpoint was created. Additionally, the startup time of the container from the snapshot is very short (subsecond); for containers with fairly long startup times – this rapid startup can be a huge boon.

In this article, I will tell about my initial steps with CRIU and Docker. I got it to work. I did run into an issue with recent versions of Docker CE (17.12 and 18.x) so I resorted back to 17.04 of Docker CE. I also ran into an issue with an older version of CRIU, so I built the currently latest version of CRIU (3.8.1) instead of the one shipped in the Ubuntu Xenial 64 distribution (2.6).

I will demonstrate how I start a container that clones a GitHub repository and starts a simple REST API as a Node application; this takes 10 or more seconds. This application counts the number of GET requests it handles (by keeping some memory state). After handling a number of requests, I create a checkpoint for this container. Next, I make a few more requests, all the while watching the counter increase. Then I stop the container and start a fresh container from the checkpoint. The container is running lightning fast – within 700ms – so it clearly leverages the container state at the time of creating the snapshot. It continues counting requests at the point where the snapshot was created, apparently inheriting its memory state. Just as expected and desired.

Note: a checkpoint does not capture changes in the file system made in a container. Only the memory state is part of the snapshot.

Note 2: Kubernetes does not yet provide support for checkpoints. That means that a pod cannot start a container from a checkpoint.

In a future article I will describe a use case for these snapshots – in automated test scenarios and complex data sets.

The steps I went through (on my Windows 10 laptop using Vagrant 2.0.3 and VirtualBox 5.2.8):

  • use Vagrant to create an Ubuntu 16.04 LTS (Xenial) Virtual Box VM with Docker CE 18.x
  • downgrade Docker from 18.x to 17.04
  • configure Docker for experimental options
  • install CRIU package
  • try out simple scenario with Docker checkpoint
  • build CRIU latest version
  • try out somewhat more complex scenario with Docker checkpoint (that failed with the older CRIU version)

 

Create Ubuntu 16.04 LTS (Xenial) Virtual Box VM with Docker CE 18.x

My Windows 10 laptop already has Vagrant 2.0.3 and Virtual Box 5.2.8. Using the following vagrantfile, I create the VM that is my Docker host for this experiment:

 

After creating (and starting) the VM with

vagrant up

I connect into the VM with

vagrant ssh

ending up at the command prompt, ready for action.

And just to make sure we are pretty much up to date, I run

sudo apt-get upgrade

image

Downgrade Docker CE to Release 17.04

At the time of writing there is an issue with recent Docker versions (at least 17.09 and higher – see https://github.com/moby/moby/issues/35691) and for that reason I downgrade to version 17.04 (as described here: https://forums.docker.com/t/how-to-downgrade-docker-to-a-specific-version/29523/4).

First remove the version of Docker installed by the vagrant provider:

sudo apt-get autoremove -y docker-ce \
&& sudo apt-get purge docker-ce -y \
&& sudo rm -rf /etc/docker/ \
&& sudo rm -f /etc/systemd/system/multi-user.target.wants/docker.service \
&& sudo rm -rf /var/lib/docker \
&&  sudo systemctl daemon-reload

then install the desired version:

sudo apt-cache policy docker-ce

sudo apt-get install -y docker-ce=17.04.0~ce-0~ubuntu-xenial
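To prevent a later apt-get upgrade from silently moving Docker forward to a version with the checkpoint issue again, the package can optionally be pinned:

sudo apt-mark hold docker-ce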

 

    Configure Docker for experimental options

    Support for checkpoints leveraging CRIU is an experimental feature in Docker. In order to make use of it, the experimental options have to be enabled. This is done (as described in https://stackoverflow.com/questions/44346322/how-to-run-docker-with-experimental-functions-on-ubuntu-16-04)

     

    sudo nano /etc/docker/daemon.json
    

    add

    {
    "experimental": true
    }
    

    Press CTRL+X, select Y and press Enter to save the new file.

    restart the docker service:

    sudo service docker restart
    

    Check with

    docker version
    

    if experimental is indeed enabled.
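    Instead of scanning the full output, you can also query the experimental flag directly with a Go template (this should simply print true):

    docker version --format '{{.Server.Experimental}}'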

     

    Install CRIU package

    The simple approach with CRIU – how it should work – is by simply installing the CRIU package:

    sudo apt-get install criu
    

    (see for example in https://yipee.io/2017/06/saving-and-restoring-container-state-with-criu/)

    This installation results for me in version 2.6 of the CRIU package. For some actions that proves sufficient, and for others it turns out to be not enough.

    image

     

    Try out simple scenario with Docker checkpoint on CRIU

    At this point we have Docker 17.04, Ubuntu 16.04 with CRIU 2.6. And that combination can give us a first feel for what the Docker Checkpoint mechanism entails.

    Run a simple container that writes a counter value to the console once every second (and then increases the counter)

    docker run --security-opt=seccomp:unconfined --name cr -d busybox /bin/sh -c 'i=0; while true; do echo $i; i=$(expr $i + 1); sleep 1; done'
    

    check on the values:

    docker logs cr
    

    create a checkpoint for the container:

    docker checkpoint create  --leave-running=true cr checkpoint0
    

    image

    leave the container running for a while and check the logs again

    docker logs cr
    

    SNAGHTML19a5da6

    now stop the container:

    docker stop cr
    

    and restart/recreate the container from the checkpoint:

    docker start --checkpoint checkpoint0 cr
    

    Check the logs:

    docker logs cr
    

    You will find that the log is resumed at the value (19) where the checkpoint was created:

    SNAGHTML197d66e

     

    Build CRIU latest version

    When I tried a more complex scenario (see the next section) I ran into this issue. I could work around that issue by building the latest version of CRIU on my Ubuntu Docker host. Here are the steps I went through to accomplish that – following these instructions: https://criu.org/Installation.

    First, remove the currently installed CRIU package:

    sudo apt-get autoremove -y criu \
    && sudo apt-get purge criu -y
    

    Then, prepare the build environment:

    sudo apt-get install build-essential \
    && sudo apt-get install gcc   \
    && sudo apt-get install libprotobuf-dev libprotobuf-c0-dev protobuf-c-compiler protobuf-compiler python-protobuf \
    && sudo apt-get install pkg-config python-ipaddr iproute2 libcap-dev  libnl-3-dev libnet-dev --no-install-recommends
    

    Next, clone the GitHub repository for CRIU:

    git clone https://github.com/checkpoint-restore/criu
    

    Navigate into to the criu directory that contains the code base

    cd criu
    

    and build the criu package:

    make
    

    When make is done, I can run CRIU :

    sudo ./criu/criu check
    

    to see if the installation is successful. The final message printed should be: Looks Good (despite perhaps one or more warnings).

    Use

    sudo ./criu/criu -V
    

    to learn about the version of CRIU that is currently installed.

    Note: the CRIU instructions describe the following steps to install criu system wide. This does not seem to be needed in order for Docker to leverage CRIU from the docker checkpoint commands.

    sudo apt-get install asciidoc  xmlto
    sudo make install
    criu check
    

    Now we are ready to take on the more complex scenario that failed before with an issue in the older CRIU version.

    A More complex scenario with Docker Checkpoint

    This scenario failed with the older CRIU version – probably because of this issue. I could work around that issue by building the latest version of CRIU on my Ubuntu Docker Host.

      In this case, I run a container based on a Docker Container image for running any Node application that is downloaded from a GitHub Repository. The Node application that the container will download and run handles simple HTTP GET requests: it counts requests and returns the value of the counter as the response to the request. This container image and this application were introduced in an earlier article: https://technology.amis.nl/2017/05/21/running-node-js-applications-from-github-in-generic-docker-container/

      Here you see the command to run the container – to be called reqctr2:

      docker run --name reqctr2 -e "GIT_URL=https://github.com/lucasjellema/microservices-choreography-kubernetes-workshop-june2017" -e "APP_PORT=8080" -p 8005:8080 -e "APP_HOME=part1"  -e "APP_STARTUP=requestCounter.js"   lucasjellema/node-app-runner
      

      image

      It takes about 15 seconds for the application to start up and handle requests.

      Once the container is running, requests can be sent from outside the VM – from a browser running on my laptop for example – to be handled  by the container, at http://192.168.188.106:8005/.

      After a number or requests, the counter is at 21:

      image

      At this point, I create a checkpoint for the container:

      docker checkpoint create  --leave-running=true reqctr2 checkpoint1
      

      image

      I now make a few additional requests in the browser, bringing the counter to a higher value:

      image

      At this point, I stop the container – and subsequently start it again from the checkpoint:

      docker stop reqctr2
      docker start --checkpoint checkpoint1 reqctr2
      

      image

      It takes less than a second for the container to continue running.

      When I make a new request, I do not get 1 as a value (as would be the result from a fresh container) nor is it 43 (the result I would get if the previous container would still be running). Instead, I get

      image

      This is the next value starting at the state of the container that was captured in the snapshot. Note: because I make the GET request from the browser and the browser also tries to retrieve the favicon, the counter is increased by two for every single time I press refresh in the browser.

      Note: I can get a list of all checkpoints that have been created for a container. Clearly, I should put some more effort in a naming convention for those checkpoints:

      docker checkpoint ls reqctr2
      

      image

      The flow I went through in this scenario can be visualized like this:

      image

      The starting point: Windows laptop with Vagrant and Virtual Box. A VM has been created by Vagrant with Docker inside. The correct version of Docker and of the CRIU package have been set up.

      Then these steps are run through:

      1. Start Docker container based on an image with Node JS runtime
      2. Clone GitHub Repository containing a Node JS application
      3. Run the Node JS application – ready for HTTP Requests
      4. Handle HTTP Requests from a browser on the Windows Host machine
      5. Create a Docker Checkpoint for the container – a snapshot of the container state
      6. The checkpoint is saved on the Docker Host – ready for later use
      7. Start a container from the checkpoint. This container starts instantaneously, no GitHub clone and application startup are required; it resumes from the state at the time of creating the checkpoint
      8. The container handles HTTP requests – just like its checkpointed predecessor

       

      Resources

      Sources are in this GitHub repo: https://github.com/lucasjellema/docker-checkpoint-first-steps

      Article on CRIU: http://www.admin-magazine.com/Archive/2014/22/Save-and-Restore-Linux-Processes-with-CRIU

      Also: on CRIU and Docker: https://yipee.io/2017/06/saving-and-restoring-container-state-with-criu/.

      Docs on Checkpoint and Restore in Docker: https://github.com/docker/cli/blob/master/experimental/checkpoint-restore.md

       

      Home of CRIU:   and page on Docker support: https://criu.org/Docker; install CRIU package on Ubuntu: https://criu.org/Packages#Ubuntu

      Install and Build CRIU Sources: https://criu.org/Installation

       

      Docs on Vagrant’s Docker provisioning: https://www.vagrantup.com/docs/provisioning/docker.html

      Article on downgrading Docker : https://forums.docker.com/t/how-to-downgrade-docker-to-a-specific-version/29523/4

      Configure Docker for experimental options: https://stackoverflow.com/questions/44346322/how-to-run-docker-with-experimental-functions-on-ubuntu-16-04

      Issue with Docker and Checkpoints (at least in 17.09-18.03): https://github.com/moby/moby/issues/35691

      The post First steps with Docker Checkpoint – to create and restore snapshots of running containers appeared first on AMIS Oracle and Java Blog.

      Regenerate Oracle VM Manager repository database

      Fri, 2018-04-06 02:01

      Some quick notes to regenerate a corrupted Oracle VM manager repository database.

      How did we discover the corruption?
      The MySQL repository database was increasing in size; the file “OVM_STATISTIC.ibd” was 62G. We also found the following error messages in the “AdminServer.log” logfile:

      ####<2018-02-13T07:52:17.339+0100> <Error> <com.oracle.ovm.mgr.task.ArchiveTask> <ovmm003.gemeente.local> <AdminServer> <Scheduled Tasks-12> <<anonymous>> <> <e000c2cc-e7fe-4225-949d-25d2cdf0b472-00000004> <1518504737339> <BEA-000000> <Archive task exception:
      com.oracle.odof.exception.ObjectNotFoundException: No such object (level 1), cluster is null: <9208>

      Regenerate steps
      – Stop the OVM services
      restore_db_1
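      On an Oracle VM Manager 3.x host, stopping the manager typically comes down to something like the following (the exact service name can differ per release, so verify it on your own installation):

      sudo service ovmm stop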

      – Delete the Oracle VM Manager repository database
      restore_db_2

      “/u01/app/oracle/ovm-manager-3/ovm_upgrade/bin/ovm_upgrade.sh --deletedb --dbuser=ovs --dbpass=<PASSWORD> --dbhost=localhost --dbport=49500 --dbsid=ovs”

      restore_db_3

      – Generate replacement certificate
      restore_db_4

      – Start the OVM services and generate new certificates
      restore_db_5

      – Restart OVM services
      restore_db_6

      – Repopulate the database by discovering the Oracle VM Servers
      restore_db_7.png

      – Restore simple names
      Copy the restoreSimpleName script to /tmp, see Oracle Support note: 2129616.1
      restore_db_8

      Resources
      [OVM] Issues with huge OVM_STATISTIC.ibd used as OVM_STATISTIC Table. (Doc ID 2216441.1)
      Oracle VM: How To Regenerate The OVM 3.3.x/3.4.x DB (Doc ID 2038168.1)
      Restore OVM Manager “Simple Names” After a Rebuild/Reinstall (Doc ID 2129616.1)

      The post Regenerate Oracle VM Manager repository database appeared first on AMIS Oracle and Java Blog.

      First steps with REST services on ADF Business Components

      Sat, 2018-03-31 10:20

      Recently we had a challenge at a customer for which ADF REST resources on Business Components were the perfect solution.

      Our application is built in Oracle JET and of course we wanted nice REST services to communicate with. Because our data is stored in an Oracle database we needed an implementation to easily access the data from JET. We decided on using ADF and Business Components to achieve this. Of course there are alternative solutions available but because our application runs as a portal in Webcenter Portal, ADF was already in our technology stack. I would like to share some of my first experiences with this ADF feature. We will be using ADF 12.2.1.1.

In this introduction we will create a simple application, the minimal required set of business components and a simple REST service. There are no prerequisites to start using the REST functionality in ADF. If you create a custom application you can choose to add the feature for REST Services, but it is not necessary. Start by creating a simple EO and VO:

      image

      Before you can create any REST services, you need to define your first release version. The versions of REST resources are managed in the adf-config.xml. Go to this file, open the Release Versions tab and create version 1. The internal name is automatically configured based on your input:

      image

      Your application is now ready for your first service. Go to the Web Service tab of the Application Module and then the REST tab. Click the green plus icon to add a resource. Your latest version will automatically be selected. Choose an appropriate name and press OK.

      image

      ADF will create a config file for your resource (based on the chosen ViewObject), a resourceRegistry that will manage all resources in your application and a new RESTWebService project that you can use to start the services. The config file automatically opens and you can now further configure your resource.

      image

In the wizard Create Business Components from Tables, there is a REST Resources step in which you can immediately define some resources on View Objects. Using this option always gives me an addPageDefinitionUsage error, even when creating the simplest service:

      image

      After ignoring this error, several things go wrong (what a surprise). The REST resource is created in a separate folder (not underneath the Application Module), it is not listed as a REST resource in the Application Module and finally it doesn’t work. All in all not ideal. I haven’t been able to figure out what happens but I would recommend avoiding this option (at least in this version of JDeveloper).

      There are two important choices to make before starting your service. You have to decide which REST actions will be allowed, and what attributes will be exposed.

Setting the actions is simple. On the first screen of your config file there are several checkboxes for the actions: Create, Delete and Update. By default they are all allowed on your service. Make sure to uncheck any actions that you don't want to allow on your service; this provides better security.

      image

      Limiting the exposed attributes can be done in two ways. You can hide attributes on the ViewObject for all REST services on that VO. This is a secure and convenient way if you know an attribute should never be open to a user.

      image

      Another way of configuring attributes for your REST services is creating REST shapes. This is a powerful feature that can be accessed from the ViewObject screen. You can make shapes independent of specific resources and apply them whenever you want. To create a shape, go to the ViewObject and to the tab Service Shaping. Here you can add a shape with the green plus-icon. Keep in mind that the name you choose for your shape will be a suffix to the name of your ViewObject. After creating the shape, you can use the shuttle to remove attributes.

      image

      The newly created shape will have its own configuration file in a different location but you can only change it in the ViewObject configuration.

      image

      After the shape is created, it can now be added to your REST service. To do this, use the Attributes tab in your Resource file, select the shape and you see the attribute shuttle is updated automatically.

      image

      You are now ready to start your service. Right-click on the RESTWebService project and run. If you have done everything right, JDeveloper will show you the url where your services are running. Now you can REST easily.
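As a quick sanity check you could call the new resource over plain HTTP, for example with curl. The host, port, context root, version name and resource name below are hypothetical and depend on your own application; the fields and limit query parameters are part of the standard ADF BC REST framework:

$ curl -i "http://localhost:7101/MyRESTApp-RESTWebService-context-root/rest/1/Employees"
$ # restrict the payload to a few attributes and rows
$ curl -i "http://localhost:7101/MyRESTApp-RESTWebService-context-root/rest/1/Employees?fields=EmployeeId,LastName&limit=5"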

      The post First steps with REST services on ADF Business Components appeared first on AMIS Oracle and Java Blog.

      ORDS: Installation and Configuration

      Fri, 2018-03-30 09:57

In my job as system administrator/DBA/integrator I was challenged to implement smoketesting using REST calls. Implementing REST in combination with WebLogic is pretty easy. But then we wanted to extend smoketesting to the database. For example, we wanted to know whether the database version and patch level were at the level required throughout the complete DTAP environment. Another example is the existence of required database services. As it turns out, Oracle has a feature called ORDS (Oracle REST Data Services) to accomplish this.

ORDS can be installed in two different scenarios: in standalone mode on the database server, or in combination with an application server such as WebLogic Server, GlassFish Server, or Tomcat.

This article gives a short introduction to ORDS. It then shows you how to install ORDS in a way suitable for a production environment, using WebLogic Server 12c and an Oracle 12c database, as we have done for our smoketesting application.

We chose WebLogic Server to deploy the ORDS application because we already used WebLogic's REST feature for smoketesting the application and WebLogic resources, and for high availability reasons because we use an Oracle RAC database. Also, running in standalone mode would lead to additional security issues around port configuration.

      Terminology

      REST: Representational State Transfer. It provides interoperability on the Internet between computer systems.

      ORDS: Oracle REST Data Services. Oracle’s implementation of RESTful services against the database.

      RESTful service: an http web service that follows the REST architecture principles. Access to and/or manipulation of web resources is done using a uniform and predefined set of stateless operators.

      ORDS Overview

      ORDS makes it easy to develop a REST interface/service for relational data. This relational data can be stored in either an Oracle database, an Oracle 12c JSON Document Store, or an Oracle NoSQL database.

      A mid-tier Java application called ORDS, maps HTTP(S) requests (GET, PUT, POST, DELETE, …) to database transactions and returns results in a JSON format.

      ORDS Request Response Flow

      Installation Process

      The overall process of installing and configuring ORDS is very simple.

      1. Download the ORDS software
      2. Install the ORDS software
3. Make some setup configuration changes
      4. Run the ORDS setup
      5. Make a mapping between the URL and the ORDS application
      6. Deploy the ORDS Java application

      Download the ORDS software

      Downloading the ORDS software can be done from the Oracle Technology Network. I used version ords.3.0.12.263.15.32.zip. I downloaded it from Oracle Technet:
      http://www.oracle.com/technetwork/developer-tools/rest-data-services/downloads/index.html

      Install the ORDS software

      The ORDS software is installed on the WebLogic server running the Administration console. Create an ORDS home directory and unzip the software.

      Here are the steps on Linux

      $ mkdir -p /u01/app/oracle/product/ords
      $ cp -p ords.3.0.12.263.15.32.zip /u01/app/oracle/product/ords
      $ cd /u01/app/oracle/product/ords
      $ unzip ords.3.0.12.263.15.32.zip

Make some setup configuration changes

ords_params.properties File

Under the ORDS home directory a couple of subdirectories are created. One subdirectory is called params. This directory holds a file called ords_params.properties, which contains default parameters that are used during the installation and drives a silent installation. If any parameters aren't specified in this file, ORDS interactively asks you for the values.

In this article I go for a silent installation. Here are the default parameters and the ones I set for the installation:

Parameter                    Default Value       Configured Value
db.hostname                  -                   dbserver01.localdomain
db.port                      1521                1521
db.servicename               -                   ords_requests
db.username                  APEX_PUBLIC_USER    APEX_PUBLIC_USER
migrate.apex.rest            false               false
plsql.gateway.add            false               false
rest.services.apex.add       -                   false
rest.services.ords.add       true                true
schema.tablespace.default    SYSAUX              ORDS
schema.tablespace.temp       TEMP                TEMP
standalone.http.port         8080                8080
user.public.password         -                   Ords4Ever!
user.tablespace.default      USERS               ORDS
user.tablespace.temp         TEMP                TEMP
sys.user                     -                   SYS
sys.password                 -                   Oracle123

      NOTE

      As you see, I refer to a tablespace ORDS for the installation of the metadata objects. Don’t forget to create this tablespace before continuing.

      NOTE

      The parameters sys.user and sys.password are removed from the ords_params.properties file after running the setup (see later on in this article)

      NOTE

      The password for parameter user.public.password is obscured after running the setup (see later on in this article)

      NOTE

      As you can see there are many parameters that refer to APEX. APEX is a great tool for rapidly developing very sophisticated applications nowadays. Although you can run ORDS together with APEX, you don’t have to. ORDS runs perfectly without an APEX installation.
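To prepare the silent install you simply edit these values in params/ords_params.properties before running the setup. As an illustration, based on the values in the table above (hostname, service name and passwords are of course environment specific), the file could look like this:

$ cd /u01/app/oracle/product/ords/params
$ cat ords_params.properties
db.hostname=dbserver01.localdomain
db.port=1521
db.servicename=ords_requests
db.username=APEX_PUBLIC_USER
migrate.apex.rest=false
plsql.gateway.add=false
rest.services.apex.add=false
rest.services.ords.add=true
schema.tablespace.default=ORDS
schema.tablespace.temp=TEMP
standalone.http.port=8080
user.public.password=Ords4Ever!
user.tablespace.default=ORDS
user.tablespace.temp=TEMP
sys.user=SYS
sys.password=Oracle123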

      Configuration Directory

I create an extra directory called config, directly under the ORDS home directory, to hold all configuration data. All configuration data used during setup is stored here.

      $ mkdir config
      $ java -jar ords.war configdir /u01/app/oracle/product/ords/config
      $ # Check what value of configdir has been set!
      $ java -jar ords.war configdir

      Run the ORDS setup

      After all configuration is done, you can run the setup, which installs the Oracle metadata objects necessary for running ORDS in the database. The setup creates 2 schemas called:

      • ORDS_METADATA
      • ORDS_PUBLIC_USER

      The setup is run in silent mode, which uses the parameter values previously set in the ords_params.properties file.

      $ mkdir -p /u01/app/oracle/logs/ORDS
$ java -jar ords.war setup --database ords --logDir /u01/app/oracle/logs/ORDS --silent

      Make a mapping between the URL and the ORDS application

After running the setup, the required ORDS objects have been created inside the database. Now it's time to make a mapping from the request URL to the ORDS interface in the database.

$ java -jar ords.war map-url --type base-path /ords ords

      Here a mapping is made between the request URL from the client to the ORDS interface in the database. The /ords part after the base URL is used to map to a database connection resource called ords.

      So the request URL will look something like this:

      http://webserver01.localdomain:7001/ords/

      Where http://webserver01.localdomain:7001 is the base path.

      Deploy the ORDS Java application

      Right now all changes and configurations are done. It’s time to deploy the ORDS Java application against the WebLogic Server. Here I use wlst to deploy the ORDS Java application, but you can do it via the Administration Console as well, whatever you like.

      $ wlst.sh
$ connect('weblogic','welcome01','t3://webserver01.localdomain:7001')
$ progress = deploy('ords','/u01/app/oracle/product/ords/ords.war','AdminServer')
      $ disconnect()
      $ exit()

And your ORDS installation is ready for creating REST services!

      NOTE

After deployment of the ORDS Java application, its state should be Active and its health OK. You might need to restart the Managed Server!
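As a quick end-to-end check you could REST-enable a database schema and call it through the deployed ORDS application. The schema, URL alias and connect details below are made-up examples; ORDS.ENABLE_SCHEMA is part of the PL/SQL API that the setup installs in the database:

$ sqlplus hr/hr@//dbserver01.localdomain:1521/ords_requests <<EOF
BEGIN
  -- REST-enable the HR schema under the URL alias "hr" (example values)
  ORDS.ENABLE_SCHEMA(
    p_enabled             => TRUE,
    p_schema              => 'HR',
    p_url_mapping_type    => 'BASE_PATH',
    p_url_mapping_pattern => 'hr',
    p_auto_rest_auth      => FALSE);
  COMMIT;
END;
/
EOF

$ # Smoke test: the metadata catalog of the REST-enabled schema should return JSON
$ curl http://webserver01.localdomain:7001/ords/hr/metadata-catalog/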

      Deinstallation of ORDS

As the installation of ORDS is pretty simple, deinstallation is even simpler. The installation involves the creation of two schemas in the database and a deployment of ORDS on the application server. The deinstall process is the reverse.

      1. Undeploy ORDS from WebLogic Server
      2. Deinstall the database schemas using

  $ java -jar ords.war uninstall

        In effect this removes the 2 schemas from the database

      3. Optionally remove the ORDS installation directories
      4. Optionally remove the ORDS tablespace from the database


      Summary

The installation of ORDS is pretty simple. You don't need to get any extra licenses to use ORDS. ORDS can be installed without installing APEX. You can run ORDS stand-alone, or use a Java EE application server like WebLogic Server, GlassFish Server, or Apache Tomcat, although you may need additional licenses for the use of some of these servers.

      Hope this helps!

      The post ORDS: Installation and Configuration appeared first on AMIS Oracle and Java Blog.

      Upgrade of Oracle Restart/SIHA from 11.2 to 12.2 fails with CRS-2415

      Thu, 2018-03-29 10:26

      We are in the process of upgrading our Oracle Clusters and SIHA/Restart systems to Oracle 12.2.0.1

The upgrade of the Grid-Infra home on an Oracle SIHA/Restart system from 11.2.0.4 to 12.2.0.1 fails when
      running rootupgrade.sh with error message:

CRS-2415: Resource 'ora.asm' cannot be registered because its owner 'root' is not the same as the Oracle Restart user 'oracle'

      We start the upgrade to 12.2.0.1 (with Jan2018 RU patch) as:
      $ ./gridSetup.sh -applyPSU /app/software/27100009

      The installation and relink of the software looks correct.
      However, when running the rootupgrade.sh as root user, as part of the post-installation,
      the script ends with :

      2018-03-28 11:20:27: Executing cmd: /app/gi/12201_grid/bin/crsctl query has softwareversion
      2018-03-28 11:20:27: Command output:
      > Oracle High Availability Services version on the local node is [12.2.0.1.0]
      >End Command output
      2018-03-28 11:20:27: Version String passed is: [Oracle High Availability Services version on the local node is [12.2.0.1.0]]
      2018-03-28 11:20:27: Version Info returned is : [12.2.0.1.0]
      2018-03-28 11:20:27: Got CRS softwareversion for su025p074: 12.2.0.1.0
      2018-03-28 11:20:27: The software version on su025p074 is 12.2.0.1.0
      2018-03-28 11:20:27: leftVersion=11.2.0.4.0; rightVersion=12.2.0.0.0
      2018-03-28 11:20:27: [11.2.0.4.0] is lower than [12.2.0.0.0]
      2018-03-28 11:20:27: Disable the SRVM_NATIVE_TRACE for srvctl command on pre-12.2.
      2018-03-28 11:20:27: Invoking “/app/gi/12201_grid/bin/srvctl upgrade model -s 11.2.0.4.0 -d 12.2.0.1.0 -p first”
      2018-03-28 11:20:27: trace file=/app/oracle/crsdata/su025p074/crsconfig/srvmcfg1.log
      2018-03-28 11:20:27: Executing cmd: /app/gi/12201_grid/bin/srvctl upgrade model -s 11.2.0.4.0 -d 12.2.0.1.0 -p first
      2018-03-28 11:21:02: Command output:
      > PRCA-1003 : Failed to create ASM asm resource ora.asm
      > PRCR-1071 : Failed to register or update resource ora.asm
      > CRS-2415: Resource ‘ora.asm’ cannot be registered because its owner ‘root’ is not the same as the Oracle Restart user ‘oracle’.
      >End Command output
      2018-03-28 11:21:02: “upgrade model -s 11.2.0.4.0 -d 12.2.0.1.0 -p first” failed with status 1.
      2018-03-28 11:21:02: Executing cmd: /app/gi/12201_grid/bin/clsecho -p has -f clsrsc -m 180 “/app/gi/12201_grid/bin/srvctl upgrade model -s 11.2.0.4.0 -d 12.2.0.1.0 -p first”
      2018-03-28 11:21:02: Command

      The rootupgrade.sh script is run as the root user as prescribed,  but root cannot add the ASM resource.
      This leaves the installation unfinished.

There is no description in the Oracle Knowledge base; however, according to Oracle Support this problem is
caused by the unpublished Bug 25183818 : SIHA 11204 UPGRADE TO MAIN IS FAILING

As of March 2018, no workaround or software patch is yet available.

      The post Upgrade of Oracle Restart/SIHA from 11.2 to 12.2 fails with CRS-2415 appeared first on AMIS Oracle and Java Blog.

      Dbvisit Standby upgrade

      Wed, 2018-03-28 10:00
      Upgrading to Dbvisit Standby 8.0.x

Dbvisit provides upgrade documentation which is detailed and in principle correct, but it only describes the upgrade process from the viewpoint of an installation on a single host.
I upgraded Dbvisit Standby at a customer's site, in a running configuration with several hosts and several primary and standby databases. I found, by trial and error and with the help of Dbvisit support, some additional steps and points of advice that I think may be of help to others.
This document describes the upgrade process for a working environment and provides information and advice in addition to the upgrade documentation. Those additions will be clearly marked in red throughout the blog. Also, the steps of the upgrade process have been rearranged in a more logical order.
It is assumed that the reader is familiar with basic Dbvisit concepts and processes.

      Configuration

      The customer’s configuration that was upgraded is as follows:

      • Dbvisit 8.0.14
      • 4 Linux OEL 6 hosts running Dbvisit Standby
      • 6 databases in Dbvisit Standby configuration distributed among the hosts
      • 1 Linux OEL 7 host running Dbvisit Console
• DBVISIT_BASE: /usr/dbvisit
• Dbvctl running in Daemon mode
Dbvisit upgrade overview

      The basic steps that are outlined in the Dbvisit upgrade documentation are as follows:

      1. Stop your Dbvisit Schedules if you have any running.
      2. Stop or wait for any Dbvisit processes that might still be executing.
      3. Backup the Dbvisit Base location where your software is installed.
      4. Download the latest version from www.dbvisit.com.
      5. Extract the install files into a temporary folder, example /home/oracle/8.0.
      6. Start the Installer and select to install the required components.
      7. Once the update is complete, you can remove the temporary install folder where the installer was extracted.
      8. It is recommended to run a manual send/apply of logs once an upgrade is complete.
      9. Re-enable any schedules.

      During the actual upgrade we deviated significantly from this: steps were rearranged, added and changed slightly.

1. Download the latest available version of Dbvisit and make it available on all servers.
2. Make a note of the primary host for each Dbvisit standby configuration.
3. Stop dbvisit processes.
4. Backup the Dbvisit Base location where your software is installed.
5. Upgrade the software.
6. Start dbvagent and dbvnet.
7. Upgrade the DDC configuration files.
8. Restart dbvserver.
9. Update DDCs in Dbvisit Console.
10. Run a manual send/apply of logs.
11. Restart Dbvisit standby processes.

In the following sections these steps will be explained in more detail.

      Dbvisit Standby upgrade

      Here follow the steps in detail that in my view should be taken for a Dbvisit upgrade, based on the experience and steps taken during the actual Dbvisit upgrade.

1. Download the latest available version of Dbvisit and make it available on all servers.
        In our case I put it in /home/oracle/upgrade on all hosts. The versions used were 8.0.18 for Oracle Enterprise Linux 6 and 7:

        dbvisit-standby8.0.18-el6.zip
        dbvisit-standby8.0.18-el7.zip
        
      2. Make a note of the primary hosts for each Dbvisit standby configuration.
        You will need this information later in step 7. It is possible to get the information from the DDC .env files, but in our case it is easier to get it from the Dbvisit console.
        If you need to get them from the DDC .env files look for the SOURCE parameter. Say we have a database db1:

        [root@dbvhost04 conf]# cd /usr/dbvisit/standby/conf/
        [root@dbvhost04 conf]# grep "^SOURCE" dbv_db1.env
        SOURCE = dbvhost04
        
      3. Stop dbvisit processes.
  The Dbvisit upgrade manual assumes you schedule dbvctl from cron. In our situation the dbvctl processes were running in Daemon mode, so it was easiest to stop them from the Dbvisit console. Go to Main Menu -> Database Actions -> Daemon Actions -> select both hosts in turn and choose stop.
        Dbvagent, dbvnet and, on the Dbvisit console host, dbvserver can be stopped as follows:

        cd /usr/dbvisit/dbvagent
        ./dbvagent -d stop
        cd /usr/dbvisit/dbvnet
        ./dbvnet -d stop
        cd /usr/dbvisit/dbvserver
         ./dbvserver -d stop
        

  Do this on all hosts. Dbvisit support advises that all hosts in a configuration be upgraded at the same time; there is no rolling upgrade or anything similar.
  Check that all processes are down before proceeding; a quick check is shown below.
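  A minimal sketch of such a check (assuming the standard Dbvisit process names; run it on every host, no output means everything is down):

    $ ps -ef | egrep 'dbvctl|dbvnet|dbvagent|dbvserver' | grep -v grep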

      4. Backup the Dbvisit Base location where your software is installed.
  The Dbvisit upgrade manual marks this step as optional, but recommended. In my view it is not optional.
  You can simply tar everything under DBVISIT_BASE for later use.
      5. Upgrade the software.
  Extract the downloaded software and run the included installer. It will show you which version you already have and which version is available in the downloaded software. Choose the correct install option to upgrade. Below you can see the upgrade of one of the OEL 6 database hosts running Dbvisit Standby:

        cd /home/oracle/upgrade
        <unzip and untar the correct version from /home/oracle/upgrade>
        cd dbvisit/installer/
        ./install-dbvisit
        
        -----------------------------------------------------------
            Welcome to the Dbvisit software installer.
        -----------------------------------------------------------
        
            It is recommended to make a backup of our current Dbvisit software
            location (Dbvisit Base location) for rollback purposes.
            
            Installer Directory /home/oracle/upgrade/dbvisit
        
        >>> Please specify the Dbvisit installation directory (Dbvisit Base).
         
            The various Dbvisit products and components - such as Dbvisit Standby, 
            Dbvisit Dbvnet will be installed in the appropriate subdirectories of 
            this path.
        
            Enter a custom value or press ENTER to accept default [/usr/dbvisit]: 
             >     DBVISIT_BASE = /usr/dbvisit 
        
            -----------------------------------------------------------
            Component      Installer Version   Installed Version
            -----------------------------------------------------------
            standby        8.0.18_0_gc6a0b0a8  8.0.14.19191                                      
            dbvnet         8.0.18_0_gc6a0b0a8  2.0.14.19191                                      
            dbvagent       8.0.18_0_gc6a0b0a8  2.0.14.19191                                      
            dbvserver      8.0.18_0_gc6a0b0a8  not installed                                     
        
            -----------------------------------------------------------
         
            What action would you like to perform?
               1 - Install component(s)
               2 - Uninstall component(s)
               3 - Exit
            
            Your choice: 1
        
            Choose component(s):
               1 - Core Components (Dbvisit Standby Cli, Dbvnet, Dbvagent)
               2 - Dbvisit Standby Core (Command Line Interface)
               3 - Dbvnet (Dbvisit Network Communication) 
               4 - Dbvagent (Dbvisit Agent)
               5 - Dbvserver (Dbvisit Central Console) - Not available on Solaris/AIX
               6 - Exit Installer
            
            Your choice: 1
        
        -----------------------------------------------------------
            Summary of the Dbvisit STANDBY configuration
        -----------------------------------------------------------
            DBVISIT_BASE /usr/dbvisit 
        
            Press ENTER to continue 
        -----------------------------------------------------------
            About to install Dbvisit STANDBY
        -----------------------------------------------------------
        
            Component standby installed. 
        
            Press ENTER to continue 
        -----------------------------------------------------------
            About to install Dbvisit DBVNET
        -----------------------------------------------------------
        
        Copied file /home/oracle/upgrade/dbvisit/dbvnet/conf/cert.pem to /usr/dbvisit/dbvnet/conf/cert.pem
        
        Copied file /home/oracle/upgrade/dbvisit/dbvnet/conf/ca.pem to /usr/dbvisit/dbvnet/conf/ca.pem
        
        Copied file /home/oracle/upgrade/dbvisit/dbvnet/conf/prikey.pem to /usr/dbvisit/dbvnet/conf/prikey.pem
        
        Copied file /home/oracle/upgrade/dbvisit/dbvnet/dbvnet to /usr/dbvisit/dbvnet/dbvnet
        
        Copied file /usr/dbvisit/dbvnet/conf/dbvnetd.conf to /usr/dbvisit/dbvnet/conf/dbvnetd.conf.201802201235
        
            DBVNET config file updated 
        
        
            Press ENTER to continue 
        -----------------------------------------------------------
            About to install Dbvisit DBVAGENT
        -----------------------------------------------------------
        
        Copied file /home/oracle/upgrade/dbvisit/dbvagent/conf/cert.pem to /usr/dbvisit/dbvagent/conf/cert.pem
        
        Copied file /home/oracle/upgrade/dbvisit/dbvagent/conf/ca.pem to /usr/dbvisit/dbvagent/conf/ca.pem
        
        Copied file /home/oracle/upgrade/dbvisit/dbvagent/conf/prikey.pem to /usr/dbvisit/dbvagent/conf/prikey.pem
        
        Copied file /home/oracle/upgrade/dbvisit/dbvagent/dbvagent to /usr/dbvisit/dbvagent/dbvagent
        
        Copied file /usr/dbvisit/dbvagent/conf/dbvagent.conf to /usr/dbvisit/dbvagent/conf/dbvagent.conf.201802201235
        
            DBVAGENT config file updated 
        
        
            Press ENTER to continue 
        
            -----------------------------------------------------------
            Component      Installer Version   Installed Version
            -----------------------------------------------------------
            standby        8.0.18_0_gc6a0b0a8  8.0.18_0_gc6a0b0a8                                
            dbvnet         8.0.18_0_gc6a0b0a8  8.0.18_0_gc6a0b0a8                                
            dbvagent       8.0.18_0_gc6a0b0a8  8.0.18_0_gc6a0b0a8                                
            dbvserver      8.0.18_0_gc6a0b0a8  not installed                                     
        
            -----------------------------------------------------------
         
            What action would you like to perform?
               1 - Install component(s)
               2 - Uninstall component(s)
               3 - Exit
            
            Your choice: 3
        
      6. Start dbvagent and dbvnet.
        For the next step dbvagent and dbvnet need to be running.  In our case we had an init script which started both:

        cd /etc/init.d
        ./dbvisit start
        

        Otherwise do something like:

    cd /usr/dbvisit/dbvnet
    ./dbvnet -d start
    cd /usr/dbvisit/dbvagent
    ./dbvagent -d start
        

  The upgrade documentation at this point refers to section 5 of the Dbvisit Standby Networking chapter of the Dbvisit 8.0 user guide: Testing Dbvnet Communication. It describes some tests to verify that dbvnet is working. It is important, as the upgrade documentation rightly points out, to test this before proceeding.
        Do on all database hosts:

        [oracle@dbvhost04 init.d]$ cd /usr/dbvisit/dbvnet/
        [oracle@dbvhost04 dbvnet]$ ./dbvnet -e "uname -n"
        dbvhost01
        [oracle@dbvhost04 dbvnet]$ ./dbvnet -f /tmp/dbclone_extract.out.err -o /tmp/testfile
        [oracle@dbvhost04 dbvnet]$ cd /usr/dbvisit/standby
        [oracle@dbvhost04 standby]$ ./dbvctl -f system_readiness
        
        Please supply the following information to complete the test.
        Default values are in [].
        
        Enter Dbvisit Standby location on local server: [/usr/dbvisit]:
        Your input: /usr/dbvisit
        
        Is this correct? <Yes/No> [Yes]:
        
        Enter the name of the remote server: []: dbvhost01
        Your input: dbvhost01
        
        Is this correct? <Yes/No> [Yes]:
        
        Enter Dbvisit Standby location on remote server: [/usr/dbvisit]:
        Your input: /usr/dbvisit
        
        Is this correct? <Yes/No> [Yes]:
        
        Enter the name of a file to transfer relative to local install directory
        /usr/dbvisit: [standby/doc/README.txt]:
        Your input: standby/doc/README.txt
        
        Is this correct? <Yes/No> [Yes]:
        
        Choose copy method:
        1)   /usr/dbvisit/dbvnet/dbvnet
        2)   /usr/bin/scp
        Please enter choice [1] :
        
        Is this correct? <Yes/No> [Yes]:
        
        Enter port for method /usr/dbvisit/dbvnet/dbvnet: [7890]:
        Your input: 7890
        
        Is this correct? <Yes/No> [Yes]:
        -------------------------------------------------------------
        Testing the network connection between local server and remote server dbvhost01.
        -------------------------------------------------------------
        Settings
        ========
        Remote server                                          =dbvhost01
        Dbvisit Standby location on local server               =/usr/dbvisit
        Dbvisit Standby location on remote server              =/usr/dbvisit
        Test file to copy                                      =/usr/dbvisit/standby/doc/README.txt
        Transfer method                                        =/usr/dbvisit/dbvnet/dbvnet
        port                                                   =7890
        -------------------------------------------------------------
        Checking network connection by copying file to remote server dbvhost01...
        -------------------------------------------------------------
        Trace file /usr/dbvisit/standby/trace/58867_dbvctl_f_system_readiness_201803201304.trc
        
        File copied successfully. Network connection between local and dbvhost01
        correctly configured.
        
7. Upgrade the DDC configuration files.
        Having upgraded the software now the Dbvisit Standby Configuration (DDC) files, which are located in DBVISIT_BASE/standby/conf, on the database hosts need to be upgraded.
        Do this once for each standby configuration only on the primary host. If you do it on the secondary host you will get an error and all DDC configuration files will be deleted!
  So if we have a database db1 in a Dbvisit standby configuration with database host dbvhost1 running the primary database (source in Dbvisit terminology) and database host dbvhost2 running the standby database (destination in Dbvisit terminology), we do the following on dbvhost1 only:

        cd /usr/dbvisit/standby
        ./dbvctl -d db1 -o upgrade
        
      8. Restart dbvserver.
  In our configuration the next step is to restart dbvserver to re-enable the Dbvisit Console.

        cd /usr/dbvisit/dbvserver
        ./dbvserver -d start
        
9. Update DDCs in Dbvisit Console.
  After the upgrade the configurations need to be updated in the Dbvisit Console. Go to Manage Configurations: the status field will show an error and the edit configuration button is replaced with an update button.
        Update the DDC for each configuration on that screen.
10. Run a manual send/apply of logs.
        In our case this was easiest done from the Dbvisit console again: Main Menu -> Database Actions -> send logs button, followed by apply logs button.
        Do this for each configuration and check for errors before continuing.
11. Restart Dbvisit standby processes.
  In our case we restarted the dbvctl processes in daemon mode from the Dbvisit Console. Go to Main Menu -> Database Actions -> Daemon Actions -> select both hosts in turn and choose start.
      References

      Linux – Upgrade from Dbvisit Standby version 8.0.x
      Dbvisit Standby Networking – Dbvnet – 5. Testing Dbvnet Communication

      The post Dbvisit Standby upgrade appeared first on AMIS Oracle and Java Blog.

      Getting started with git behind a company proxy

      Sun, 2018-03-25 11:50

For a few months now I have been working with git to store our Infrastructure as Code in GitHub. I don't want to have to type in my password every time, and I don't like passwords saved in clear text, so I prefer ssh over https. But when working behind a proxy that doesn't allow traffic over port 22 (ssh), I had to spend some time to get it working. Without a proxy there is nothing to it.

      First some background information. We connect to a “stepping stone” server that has some version of Windows as the O.S. and then use Putty to connect to our Linux host where we work on our code.

       

      Network background

      Our connection to Internet is via the proxy, but the proxy doesn’t allow traffic over port 22 (ssh/git). It does however allow traffic over port 80 (http) or 443 (https).

      So the goal here is to:
      1. use a public/private key pair to authenticate myself at GitHub.com
      2. route traffic to GitHub.com via the proxy
      3. reroute port 22 to port 443
      Generate a public/private key pair.

      This can be done on the Linux prompt but then you either need to type your passphrase every time you use git (or have it cached in Linux), or use a key pair without a passphrase. I wanted to take this one step further and use Putty Authentication Agent (Pageant.exe) to cache my private key and forward authentication requests over Putty to Pageant.

      With Putty Key Generator (puttygen.exe) you generate a public/private key pair. Just start the program and press the generate button.

      2018-03-25 16_35_08-keygen

      You then need to generate some entropy by moving the mouse around:

      2018-03-25 16_39_08-PuTTY Key Generator

      And in the end you get something like this:

      2018-03-25 16_41_25-PuTTY Key Generator

      Ad 1) you should use a descriptive name like “github <accountname>”

      Ad 2) you should use a sentence to protect your private key. Mind you: If you do not use a caching mechanism you need to type it in frequently

      Ad 3) you should save your private key somewhere you consider safe. (It should not be accessible for other people)

      Ad 4) you copy this whole text field (starting with ssh-rsa in this case up to and including the Key comment “rsa-key-20180325” which is repeated in that text field)

      Once you have copied the public key you need to add it to your account at github.com.

      Adding the public key in github.com

      Log in to github.com and click on your icon:

      2018-03-25 17_03_03-github

      Choose “Settings” and go to “SSH and GPG keys”:

      2018-03-25 17_03_14-Your Profile

      There you press the “Add SSH key” button and you get to the next screen:

      2018-03-25 17_08_16-Add new SSH keys

      Give the Title a descriptive name so you can recognize/remember where you generated this key for, and in the Key field you paste the copied public key in. Then you press Add SSH key which results in something like this:

      2018-03-25 17_11_43-SSH and GPG keys

      In your case the picture of the key will not be green but black as you haven’t used it yet. In case you no longer want this public/private key pair to have access to your github account you can Delete it here as well.

      So now you can authenticate yourself with a private key that get checked by the public key you uploaded in github.

      You can test that on a machine that has direct access to Internet and is able to use port 22 (For example a VirtualBox VM on your own laptop at home).

      Route git traffic to github.com via the Proxy and change the port.

On the Linux server behind the company firewall, when logged on with your own account, you need to go to the ".ssh" directory. If it isn't there yet, you haven't used ssh on that machine yet (ssh <you>@<linuxserver> is enough; you can cancel the login). So change directory to .ssh in your home dir and create a file called "config" with the following contents:

      # github.com
      Host github.com
          Hostname ssh.github.com
          ProxyCommand nc -X connect -x 192.168.x.y:8080 %h %p
          Port 443
          ServerAliveInterval 20
          User git
      
      #And if you use gitlab as well the entry should be like:
      # gitlab.com
      Host gitlab.com
          Hostname altssh.gitlab.com
          Port    443
          ProxyCommand    /usr/bin/nc -X connect -x 192.168.x.y:8080 %h %p
          ServerAliveInterval 20
          User  git
      

This is the part where you define that ssh calls to server github.com should be rerouted to the proxy server 192.168.x.y on port 8080 (change that to your proxy details), and that the target server should not be github.com but ssh.github.com. That is the server where GitHub allows you to connect with the git or ssh protocol over port 443. I've added the example for gitlab as well; there the hostname should be changed to altssh.gitlab.com, as is done in the config above.

      “nc” or “/usr/bin/nc” is the utility Netcat that does the work of changing hostname and port number for us. On our RedHat Linux 6 server it is installed by default.

      The ServerAliveInterval 20 makes sure that the connection is kept alive by sending a packet every 20 seconds to prevent a “broken pipe”. And the User git makes sure you will not connect as your local Linux user to github.com but as user git.

But two things still need to be done:

      1. Add your private key to Putty Authentication Agent
      2. Allow the Putty session to your Linux host to use Putty Authentication Agent
      Add your private key to Putty Authentication Agent

On your “Stepping Stone Server” start the Putty Authentication Agent (Pageant.exe), then right-click on its icon (usually somewhere at the bottom right of your screen)

      2018-03-25 17_49_49-

      Select View Keys to see the keys already loaded or press Add Key to add your newly created private key. You get asked to type your passphrase. Via View Keys you can check if the key was loaded:

      2018-03-25 17_56_06-Pageant Key List

The obfuscated part shows the key fingerprint, and the text to the right of it is the Key Comment you used. If the comment is longer, not all of the text is visible, so make sure the Key Comment is distinguishable in its first part.

      If you want to use the same key for authentication on the Linux host, then put the Public key part in a file called “authorized_keys”. This file should be located in the “.ssh” directory and have rw permissions for your local user only (chmod 0600 authorized_keys) and nobody else. If you need or want a different key pair for that make sure you load the corresponding private key as well.

      Allow the Putty session to your Linux host to use Putty Authentication Agent

      The Putty session that you use to connect to the Linux host needs to have the following checked:

      2018-03-25 18_08_03-PuTTY Configuration

Thus for the session go to "Connection" -> "SSH" -> "Auth" and check "Allow agent forwarding" to allow your terminal session on the Linux host to forward the authentication request with GitHub (or gitlab) to your Pageant process on the Stepping Stone server. For that last part you need to have checked the box "Attempt authentication using Pageant".

      Now you are all set to clone a GitHub repository on your Linux host and use key authentication.
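A quick way to verify that the whole chain (key in Pageant, agent forwarding, proxy and port rewrite) works is GitHub's own connection test; this is a sketch and the exact greeting may vary:

$ ssh -T git@github.com
$ # a greeting ending in "...successfully authenticated, but GitHub does not provide
$ # shell access." confirms that authentication through the proxy works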

      Clone a git repository using the git/ssh protocol

      Browse to GitHub.com, select the repository you have access to with your GitHub account (if it is a private repo), press the “Clone or download” button and make sure you select “Clone with SSH”. See the picture below.

      2018-03-25 18_18_41-git

      Press the clipboard icon to copy the line starting with “git@github.com” and ending with “.git”.
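On the Linux host you can then paste that line into a git clone command (the account and repository names below are placeholders):

$ git clone git@github.com:youraccount/yourrepo.git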

      That should work now (like it did for me).

      HTH Patrick

      P.S. If you need to authenticate your connection with the proxy service you probably need to have a look at the manual pages of “nc”. Or google it. I didn’t have to authenticate with the proxy service so I didn’t dive into that.

      The post Getting started with git behind a company proxy appeared first on AMIS Oracle and Java Blog.

      How to fix Dataguard FAL server issues

      Wed, 2018-03-21 06:23

One of my clients had an issue with their Dataguard setup: after moving tables and rebuilding indexes, the transport to their standby databases failed. The standby databases complained about not being able to fetch archivelogs from the primary database. In this short blog I will explain what happened, how I diagnosed the issue and how I fixed it.

       

      The situation

Below you can see a diagram of the setup: a primary site with both a primary database and a standby database. At the remote site there are two standby databases; both get their redo stream from the primary database.

      DG_situation

This setup was working well for the company, but having two redo streams going to the remote site over limited bandwidth can give issues when doing massive data manipulation. When the need arose for massive table movements and rebuilding of indexes, the generated redo was too much for the WAN link and also for the local standby database. After several days of trying to fix the standby databases, my help was requested because the standby databases were not able to fix the gaps in the redo stream.

       

      The issues

While analyzing the issues I found that the standby databases failed to fetch archived logs from the primary database. Usually you can fix this by using RMAN to supply the primary database with the archived logs needed for the standby, because in most cases the issue is that the archived logs have been deleted on the primary database. The client's own DBA had already supplied the required archived logs, so the message was somewhat misleading: the archived logs are there, but the primary doesn't seem to be able to supply them.

When checking the alert log of the primary database there was no obvious sign that anything was going on or going wrong. While searching for more information I discovered that the default setting for the parameter log_archive_max_processes is 4. This parameter controls the number of processes available for archiving, redo transport and FAL servers. Now take a quick look at the diagram above and start counting with me: at least one for local archiving, and three for the redo transport to the three standby databases. So when one of the standby databases wants to fetch archived logs to fill in a gap, it may not be able to request this from the primary database. So, time to fix it:

       

      ALTER SYSTEM SET log_archive_max_processes=30 scope=both;
      

Now the fetching started working better, but I discovered some strange behaviour: the standby database closest to the primary database was still not able to fetch archived logs from the primary. The two remote standby databases were actually fetching some archived logs, so that's an improvement… but still, the alert log of the primary database was quite silent… Fortunately Oracle provides another server parameter: log_archive_trace. This setting enables extra logging for certain subprocesses. Add the values from the linked documentation to see the desired logging: in this case 2048 and 128, to get FAL server logging and redo transport logging.

      ALTER SYSTEM SET log_archive_trace=2176 scope=both;
      

With this setting I was able to see that all 26 other archiver processes were busy supplying one of the standby databases with archived logs. It seems that the database that's furthest behind gets the first go at the primary database… Anyway, my first instinct was to fix the local standby database first so it is available for failover, so by stopping the remote standby databases the local standby database was able to fetch archived logs from the primary database. The next step was to start the other standby databases; to speed things up I started the first one, and only after this database had resolved its archive log gap did I start the second database.
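While the standby databases are catching up, a quick way to see what the archiver processes on the primary are doing is querying v$archive_processes; a sketch, run as a privileged user on the primary:

$ sqlplus -s / as sysdba <<'EOF'
set pagesize 100
-- show which ARCn processes exist and what they are currently working on
select process, status, log_sequence, state
from   v$archive_processes
where  status <> 'STOPPED'
order  by process;
EOF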

       

      In conclusion, it’s important that you tune your settings for your environment: set log_archive_max_processes as appropriate and set your log level so you see what’s going on.

Please mind that both of these settings are also managed by the Dataguard Broker. To prevent warnings from the Dataguard Broker, make sure you set these parameters via dgmgrl:

      edit database <<primary>> set property LogArchiveTrace=2176;
      edit database <<primary>> set property LogArchiveMaxProcesses=30;
      

      The post How to fix Dataguard FAL server issues appeared first on AMIS Oracle and Java Blog.

      Handle a GitHub Push Event from a Web Hook Trigger in a Node application

      Tue, 2018-03-20 11:57

My requirement in this case: a push of one or more commits to a GitHub repository needs to trigger a Node application that inspects the commit and, when specific conditions are met, downloads the contents of the commit.

      image

      I have implemented this functionality using a Node application – primarily because it offers me an easy way to create a REST end point that I can configure as a WebHook in GitHub.

      Implementing the Node application

The requirements for a REST endpoint that can be configured as a webhook endpoint are quite simple: handle a POST request; no response is required. I can do that!

In my implementation, I inspect the push event, extract some details about the commits it contains and write a summary to the console. The code is quite straightforward and self-explanatory; it can easily be extended to support additional functionality:

// Assumes a standard Express application with JSON body parsing, for example:
//   const express = require('express'); const bodyParser = require('body-parser');
//   const app = express(); app.use(bodyParser.json());
// 'compositeloader' is the author's own module that refreshes the composite component.
app.post('/github/push', function (req, res) {
        var githubEvent = req.body
        // - githubEvent.head_commit is the last (and frequently the only) commit
        // - githubEvent.pusher is the user of the pusher pusher.name and pusher.email
        // - timestamp of final commit: githubEvent.head_commit.timestamp
        // - branch:  githubEvent.ref (refs/heads/master)
      
        var commits = {}
        if (githubEvent.commits)
          commits = githubEvent.commits.reduce(
            function (agg, commit) {
              agg.messages = agg.messages + commit.message + ";"
              agg.filesTouched = agg.filesTouched.concat(commit.added).concat(commit.modified).concat(commit.removed)
                .filter(file => file.indexOf("src/js/jet-composites/input-country") > -1)
              return agg
            }
            , { "messages": "", "filesTouched": [] })
      
        var push = {
          "finalCommitIdentifier": githubEvent.after,
          "pusher": githubEvent.pusher,
          "timestamp": githubEvent.head_commit.timestamp,
          "branch": githubEvent.ref,
          "finalComment": githubEvent.head_commit.message,
          "commits": commits
        }
        console.log("WebHook Push Event: " + JSON.stringify(push))
        if (push.commits.filesTouched.length > 0) {
          console.log("This commit involves changes to the input-country component, so let's update the composite component for it ")
          var compositeName = "input-country"
          compositeloader.updateComposite(compositeName)
        }
      
        var response = push
        res.json(response)
      })
      
      

      Configuring the WebHook in GitHub

      A web hook can be configured in GitHub for any of your repositories. You indicate the endpoint URL, the type of event that should trigger the web hook and optionally a secret. See my configuration:

      image

      Trying out the WebHook and receiving Node application

      In this particular case, the Node application is running locally on my laptop. I have used ngrok to expose the local application on a public internet address:

      image

      (note: this is the address you saw in the web hook configuration)
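Starting the tunnel itself is a one-liner (this sketch assumes the Node application listens on port 3000):

$ ngrok http 3000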

      I have committed and pushed a small change in a file in the repository on which the webhook is configured:

      image

      The ngrok agent has received the WebHook request:

      image

      The Node application has received the push event and has done its processing:

      image

      The post Handle a GitHub Push Event from a Web Hook Trigger in a Node application appeared first on AMIS Oracle and Java Blog.

      Node & Express application to proxy HTTP requests – simply forwarding the response to the original caller

      Mon, 2018-03-19 00:58

The requirement is simple: a Node JS application that receives HTTP requests, forwards (some of) them to other hosts and subsequently returns the responses it receives to the original caller.

      image

      This can be used in many situations – to ensure all resources loaded in a web application come from the same host (one way to handle CORS), to have content in IFRAMEs loaded from the same host as the surrounding application or to allow connection between systems that cannot directly reach each other. Of course, the proxy component does not have to be the dumb and mute intermediary – it can add headers, handle faults, perform validation and keep track of the traffic. Before you know it, it becomes an API Gateway…

This article shows a very simple example of a proxy that I want to use for the following purpose: I create a Rich Web Client application (Angular, React, Oracle JET) and some of the components used are owned and maintained by an external party. Instead of adding the sources to the server that serves the static sources of the web application, I use the proxy to retrieve these specific sources from their real origin (either a live application, a web server or even a Git repository). This allows me to have the latest sources of these components at any time, without redeploying my own application.

      The proxy component is of course very simple and straightforward. And I am sure it can be much improved upon. For my current purposes, it is good enough.

The Node application consists of the file www that is initialized with npm start through package.json. This file does some generic initialization of Express (such as defining the port on which to listen). Then it defers to app.js for all request handling. In app.js, a static file server is configured to serve files from the local /public subdirectory (using express.static).

      www:

      var app = require('../app');
      var debug = require('debug')(' :server');
      var http = require('http');

      var port = normalizePort(process.env.PORT || '3000');
      app.set('port', port);
      var server = http.createServer(app);
      server.listen(port);
      server.on('error', onError);
      server.on('listening', onListening);

      function normalizePort(val) {
      var port = parseInt(val, 10);

      if (isNaN(port)) {
      // named pipe
      return val;
      }

      if (port >= 0) {
      // port number
      return port;
      }

      return false;
      }

      function onError(error) {
      if (error.syscall !== 'listen') {
      throw error;
      }

      var bind = typeof port === 'string'
      ? 'Pipe ' + port
      : 'Port ' + port;

      // handle specific listen errors with friendly messages
      switch (error.code) {
      case 'EACCES':
      console.error(bind + ' requires elevated privileges');
      process.exit(1);
      break;
      case 'EADDRINUSE':
      console.error(bind + ' is already in use');
      process.exit(1);
      break;
      default:
      throw error;
      }
      }

      function onListening() {
      var addr = server.address();
      var bind = typeof addr === 'string'
      ? 'pipe ' + addr
      : 'port ' + addr.port;
      debug('Listening on ' + bind);
      }

      package.json:

      {
      "name": "jet-on-node",
      "version": "0.0.0",
      "private": true,
      "scripts": {
      "start": "node ./bin/www"
      },
      "dependencies": {
      "body-parser": "~1.18.2",
      "cookie-parser": "~1.4.3",
      "debug": "~2.6.9",
      "express": "~4.15.5",
      "morgan": "~1.9.0",
      "pug": "2.0.0-beta11",
      "request": "^2.85.0",
      "serve-favicon": "~2.4.5"
      }
      }

      app.js:

var express = require('express');
var path = require('path');
var favicon = require('serve-favicon');
var logger = require('morgan');
var cookieParser = require('cookie-parser');
var bodyParser = require('body-parser');

const http = require('http');
const url = require('url');
const fs = require('fs');
const request = require('request');

var app = express();
// uncomment after placing your favicon in /public
//app.use(favicon(path.join(__dirname, 'public', 'favicon.ico')));
app.use(logger('dev'));
app.use(bodyParser.json());
app.use(bodyParser.urlencoded({ extended: false }));
app.use(cookieParser());

// define static resource server from local directory public (for any request not otherwise handled)
app.use(express.static(path.join(__dirname, 'public')));

app.use(function (req, res, next) {
  res.header("Access-Control-Allow-Origin", "*");
  res.header("Access-Control-Allow-Headers", "Origin, X-Requested-With, Content-Type, Accept");
  next();
});

// catch 404 and forward to error handler
app.use(function (req, res, next) {
  var err = new Error('Not Found');
  err.status = 404;
  next(err);
});

// error handler
app.use(function (err, req, res, next) {
  // set locals, only providing error in development
  res.locals.message = err.message;
  res.locals.error = req.app.get('env') === 'development' ? err : {};

  // render the error page
  res.status(err.status || 500);
  res.json({
    message: err.message,
    error: err
  });
});

module.exports = app;

Then the interesting bit: requests for URL /js/jet-composites/* are intercepted: instead of having those requests also handled by serving local resources (from directory public/js/jet-composites/*), the requests are interpreted and routed to an external host. The responses from that host are returned to the requester. To the requesting browser, there is no distinction between resources served locally as static artifacts from the local file system and resources retrieved through these redirected requests.

// any request at /js/jet-composites (for resources in that folder)
// should be intercepted and redirected
var compositeBasePath = '/js/jet-composites/'
app.get(compositeBasePath + '*', function (req, res) {
  var requestedResource = req.url.substr(compositeBasePath.length)
  // parse URL
  const parsedUrl = url.parse(requestedResource);
  // extract URL path
  let pathname = `${parsedUrl.pathname}`;
  // maps file extension to MIME types
  const mimeType = {
    '.ico': 'image/x-icon',
    '.html': 'text/html',
    '.js': 'text/javascript',
    '.json': 'application/json',
    '.css': 'text/css',
    '.png': 'image/png',
    '.jpg': 'image/jpeg',
    '.wav': 'audio/wav',
    '.mp3': 'audio/mpeg',
    '.svg': 'image/svg+xml',
    '.pdf': 'application/pdf',
    '.doc': 'application/msword',
    '.eot': 'application/vnd.ms-fontobject',
    '.ttf': 'application/font-sfnt'
  };

  handleResourceFromCompositesServer(res, mimeType, pathname)
})

async function handleResourceFromCompositesServer(res, mimeType, requestedResource) {
  var reqUrl = "http://yourhost:theport/applicationURL/" + requestedResource
  // fetch resource and return
  var options = url.parse(reqUrl);
  options.method = "GET";
  options.agent = false;

  // options.headers['host'] = options.host;
  http.get(reqUrl, function (serverResponse) {
    console.log('<== Received res for', serverResponse.statusCode, reqUrl);
    console.log('\t-> Request Headers: ', options);
    console.log(' ');
    console.log('\t-> Response Headers: ', serverResponse.headers);

    serverResponse.pause();

    serverResponse.headers['access-control-allow-origin'] = '*';

    switch (serverResponse.statusCode) {
      // pass through. we're not too smart here...
      case 200: case 201: case 202: case 203: case 204: case 205: case 206:
      case 304:
      case 400: case 401: case 402: case 403: case 404: case 405:
      case 406: case 407: case 408: case 409: case 410: case 411:
      case 412: case 413: case 414: case 415: case 416: case 417: case 418:
        res.writeHead(serverResponse.statusCode, serverResponse.headers);
        serverResponse.pipe(res, { end: true });
        serverResponse.resume();
        break;

      // fix host and pass through.
      case 301:
      case 302:
      case 303:
        serverResponse.statusCode = 303;
        serverResponse.headers['location'] = 'http://localhost:' + PORT + '/' + serverResponse.headers['location'];
        console.log('\t-> Redirecting to ', serverResponse.headers['location']);
        res.writeHead(serverResponse.statusCode, serverResponse.headers);
        serverResponse.pipe(res, { end: true });
        serverResponse.resume();
        break;

      // error everything else
      default:
        var stringifiedHeaders = JSON.stringify(serverResponse.headers, null, 4);
        serverResponse.resume();
        res.writeHead(500, {
          'content-type': 'text/plain'
        });
        res.end(process.argv.join(' ') + ':\n\nError ' + serverResponse.statusCode + '\n' + stringifiedHeaders);
        break;
    }

    console.log('\n\n');
  });
}

      Resources

      Express Tutorial Part 2: Creating a skeleton website - https://developer.mozilla.org/en-US/docs/Learn/Server-side/Express_Nodejs/skeleton_website

      Building a Node.js static file server (files over HTTP) using ES6+ - http://adrianmejia.com/blog/2016/08/24/Building-a-Node-js-static-file-server-files-over-HTTP-using-ES6/

      How To Combine REST API calls with JavaScript Promises in node.js or OpenWhisk - https://medium.com/adobe-io/how-to-combine-rest-api-calls-with-javascript-promises-in-node-js-or-openwhisk-d96cbc10f299

      Node script to forward all http requests to another server and return the response with an access-control-allow-origin header. Follows redirects. - https://gist.github.com/cmawhorter/a527a2350d5982559bb6

      5 Ways to Make HTTP Requests in Node.js - https://www.twilio.com/blog/2017/08/http-requests-in-node-js.html

      The post Node & Express application to proxy HTTP requests – simply forwarding the response to the original caller appeared first on AMIS Oracle and Java Blog.

      Create a Node JS application for Downloading sources from GitHub

      Sun, 2018-03-18 16:26

My objective: create a Node application to download sources from a repository on GitHub. I want to use this application to read a simple package.json-like file that describes which reusable components (from which GitHub repositories) the application depends on, then download all required resources from GitHub and store them in the local file system. This by itself may not seem very useful. However, it is a stepping stone on the road to a facility that updates application components at run time, triggered by GitHub WebHooks.

      I am making use of the Octokit Node JS library to interact with the REST APIs of GitHub. The code I have created will:

      • fetch the meta-data for all items in the root folder of a GitHub Repo (at the tip of a specific branch, or at a specific tag or commit identifier)
      • iterate over all items:
        • download the contents of the item if it is a file and create a local file with the content (and cater for large files and for binary files)
  • create a local directory for each item in the GitHub repo that is a directory, then recursively process the contents of that directory on GitHub

      An example of the code in action:

A randomly selected GitHub repo (at https://github.com/lucasjellema/WebAppIframe2ADFSynchronize):

(screenshot of the repository contents)

The local target directory is empty at the beginning of the action:

(screenshot of the empty target directory)

Run the code:

(screenshot of the console output)

And the content is downloaded and written locally:

(screenshot of the downloaded files in the target directory)

Note: the code could easily be extended to provide an execution report with details such as file size, download status and last change date; it is currently very straightforward. Note: you need to obtain the gitToken yourself in the GitHub dashboard: https://github.com/settings/tokens . Without a token the code will still work, but you will be bound to the unauthenticated GitHub rate limit (about 60 requests per hour).

      const octokit = require('@octokit/rest')() 
      const fs = require('fs');
      
      var gitToken = "YourToken"
      
      octokit.authenticate({
          type: 'token',
          token: gitToken
      })
      
      var targetProjectRoot = "C:/data/target/" 
      var github = { "owner": "lucasjellema", "repo": "WebAppIframe2ADFSynchronize", "branch": "master" }
      
      downloadGitHubRepo(github, targetProjectRoot)
      
      async function downloadGitHubRepo(github, targetDirectory) {
          console.log(`Installing GitHub Repo ${github.owner}\\${github.repo}`)
          var repo = github.repo;
          var path = ''
          var owner = github.owner
          var ref = github.commit ? github.commit : (github.tag ? github.tag : (github.branch ? github.branch : 'master'))
          processGithubDirectory(owner, repo, ref, path, path, targetDirectory)
      }
      
      // let's assume that if the name ends with one of these extensions, we are dealing with a binary file:
      const binaryExtensions = ['png', 'jpg', 'tiff', 'wav', 'mp3', 'doc', 'pdf']
      var maxSize = 1000000;
      function processGithubDirectory(owner, repo, ref, path, sourceRoot, targetRoot) {
          octokit.repos.getContent({ "owner": owner, "repo": repo, "path": path, "ref": ref })
              .then(result => {
                  var targetDir = targetRoot + path
                  // check if targetDir exists 
                  checkDirectorySync(targetDir)
                  result.data.forEach(item => {
                      if (item.type == "dir") {
                          processGithubDirectory(owner, repo, ref, item.path, sourceRoot, targetRoot)
                      } // if directory
                      if (item.type == "file") {
                          if (item.size > maxSize) {
                              var sha = item.sha
                              octokit.gitdata.getBlob({ "owner": owner, "repo": repo, "sha": item.sha }
                              ).then(result => {
                                  var target = `${targetRoot + item.path}`
                                  fs.writeFile(target
                                      , Buffer.from(result.data.content, 'base64').toString('utf8'), function (err, data) { })
                              })
                                  .catch((error) => { console.log("ERROR (large file) " + error) })
                              return;
                          }// if large file
                          octokit.repos.getContent({ "owner": owner, "repo": repo, "path": item.path, "ref": ref })
                              .then(result => {
                                  var target = `${targetRoot + item.path}`
                                  if (binaryExtensions.includes(item.path.split('.').pop())) { // compare the full file extension
                                      fs.writeFile(target
                                          , Buffer.from(result.data.content, 'base64'), function (err, data) { reportFile(item, target) })
                                  } else
                                      fs.writeFile(target
                                          , Buffer.from(result.data.content, 'base64').toString('utf8'), function (err, data) { if (!err) reportFile(item, target); else console.log('Error writing file: ' + err) })
      
                              })
                              .catch((error) => { console.log("ERROR " + error) })
                      }// if file
                  })
              }).catch((error) => { console.log("ERROR XXX" + error) })
      }//processGithubDirectory
      
      function reportFile(item, target) {
    console.log(`- installed ${item.name} (${item.size} bytes) in ${target}`)
      }
      
      function checkDirectorySync(directory) {
          try {
              fs.statSync(directory);
          } catch (e) {
              fs.mkdirSync(directory);
              console.log("Created directory: " + directory)
          }
      }
      
      

      Resources

      Octokit REST API Node JS library: https://github.com/octokit/rest.js 

      API Documentation for Octokit: https://octokit.github.io/rest.js/#api-Repos-getContent

      The post Create a Node JS application for Downloading sources from GitHub appeared first on AMIS Oracle and Java Blog.

      Running Spring Boot in a Docker container on OpenJDK, Oracle JDK, Zulu on Alpine Linux, Oracle Linux, Ubuntu

      Sun, 2018-03-18 08:53

Spring Boot is great for running inside a Docker container. Spring Boot applications ‘just run’: a Spring Boot application has an embedded servlet engine, making it independent of application servers. There is a Spring Boot Maven plugin available to easily create a JAR file which contains all required dependencies. This JAR file can be run with a single command line like ‘java -jar SpringBootApp.jar’. For running it in a Docker container, you only require a base OS and a JDK. In this blog post I’ll give examples of how to get started with different OSs and different JDKs in Docker. I’ll finish with an example of how to build a Docker image with a Spring Boot application in it.

Getting started with Docker

Installing Docker

Of course you need a Docker installation. I won't go into detail here, but below are the basic install commands:

      Oracle Linux 7
      yum-config-manager --enable ol7_addons
      yum-config-manager --enable ol7_optional_latest
      yum install docker-engine
      systemctl start docker
      systemctl enable docker

Ubuntu
      curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
      add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
      apt-get update
      apt-get install docker-ce

You can add a user to the docker group or give it sudo rights to run docker. Mind that both effectively allow the user to become root on the host OS.
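For example, adding the current user to the docker group typically looks like this (a minimal sketch; the group is normally created by the Docker installation, and the membership only takes effect after logging out and back in):

# add the current user to the docker group
sudo usermod -aG docker $USER
# log out and back in (or start a new login shell), then verify access
docker info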

      Running a Docker container

See below for commands you can execute to start containers in the foreground or background and access them. For ‘mycontainer’ in the examples below, you can fill in any name you like. The name of the image can be found in the descriptions further below; for example container-registry.oracle.com/os/oraclelinux:7 for an Oracle Linux 7 image from the Oracle Container Registry, or store/oracle/serverjre:8 for a JRE image from the Docker Store.

      If you are using the Oracle Container Registry (for example to obtain Oracle JDK or Oracle Linux docker images) you first need to

      • go to container-registry.oracle.com and enable your OTN account to be used
      • go to the product you want to use and accept the license agreement
      • do docker login -u username -p password container-registry.oracle.com

      If you are using the Docker Store, you first need to

      • go to store.docker.com and create an account
      • find the image you want to use. Click Get Content and accept the license agreement
      • do docker login -u username -p password

      To start a container in the foreground

      docker run --name mycontainer -it imagename /bin/sh

      To start a container in the background

      docker run --name mycontainer -d imagename tail -f /dev/null

      To ‘enter’ a running container:

      docker exec -it mycontainer /bin/sh

/bin/sh exists in Alpine Linux, Oracle Linux and Ubuntu. For Oracle Linux and Ubuntu you can also use /bin/bash. ‘tail -f /dev/null’ is used as the command for a ‘bare OS’ container that has no other running processes, so the container keeps running. A suggestion from here.

      Cleaning up

It is good to know how to clean up your images/containers after having played around with them. See here.

      #!/bin/bash
      # Delete all containers
      docker rm $(docker ps -a -q)
      # Delete all images
      docker rmi $(docker images -q)

Options for JDK

      Of course there are more options for running JDKs in Docker containers. These are just some of the more commonly used.

      Oracle JDK on Oracle Linux

When you’re running in the Oracle Cloud, you have probably noticed that the OS running beneath it is often Oracle Linux (and currently also often version 7.x). Application Container Cloud Service, for example, uses the Oracle JDK. If you want to run in a similar environment locally, you can use Docker images. It is good to know that the Oracle Server JRE contains more than a regular JRE but less than a complete JDK. Oracle recommends using the Server JRE instead of the JDK whenever possible, since the Server JRE has a smaller attack surface. Read more here. For questions about support and roadmap, read the following blog.

      store.docker.com

      The steps to obtain Docker images for Oracle JDK / Oracle Linux from store.docker.com are as follows:

Create an account on store.docker.com. Go to https://store.docker.com/images/oracle-serverjre-8. Click Get Content. Accept the agreement and you’re ready to log in, pull and run.

      #use the store.docker.com username and password
      docker login -u yourusername -p yourpassword
      docker pull store/oracle/serverjre:8

      To start in the foreground:

      docker run --name jre8 -it store/oracle/serverjre:8 /bin/bash

container-registry.oracle.com

      You can use the image from the container registry. First, same as for just running the OS, enable your OTN account and login.

      #use your OTN username and password
      docker login -u yourusername -p yourpassword container-registry.oracle.com
      
      docker pull container-registry.oracle.com/java/serverjre:8
      
      #To start in the foreground:
      docker run --name jre8 -it container-registry.oracle.com/java/serverjre:8 /bin/bash

OpenJDK on Alpine Linux

When running Docker containers, you want them to be as small as possible to allow quick starting, stopping, downloading, scaling, etc. Alpine Linux is a suitable Linux distribution for small containers and is being used quite often. There can be some thread related challenges with Alpine Linux though. See for example here and here.

Running OpenJDK on Alpine Linux in a Docker container is easier than you might think. You don't need a specific account or login for this.

      When you pull openjdk:8, you will get a Debian 9 image. In order to run on Alpine Linux, you can do

      docker pull openjdk:8-jdk-alpine

      Next you can do

      docker run --name openjdk8 -it openjdk:8-jdk-alpine /bin/sh

Zulu on Ubuntu Linux

You can also consider OpenJDK-based JDKs like Azul’s Zulu. This works mostly the same; only the image name differs, something like ‘azul/zulu-openjdk:8’. The Zulu images are Ubuntu based.
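For example (the exact image tag is an assumption on my part; check the azul/zulu-openjdk repository for the available tags):

docker pull azul/zulu-openjdk:8
docker run --name zulu8 -it azul/zulu-openjdk:8 /bin/bash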

      Do it yourself

Of course you can also create your own image with a JDK. See for example here. This requires you to download the JDK and build the image yourself, but this is quite easy.
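A do-it-yourself image could look like the sketch below; this is an assumption on my part that uses Ubuntu's packaged OpenJDK instead of a manually downloaded Oracle JDK, so adjust it to the JDK and base image you actually want:

FROM ubuntu:16.04
# install OpenJDK 8 from the Ubuntu repositories and clean the apt cache to keep the image small
RUN apt-get update && \
    apt-get install -y openjdk-8-jdk && \
    rm -rf /var/lib/apt/lists/*
CMD ["/bin/bash"]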

      Spring Boot in a Docker container

Creating a container with a Spring Boot application, based on an image which already has a JDK in it, is easy. This is described here. You can create a simple Dockerfile like:

      FROM openjdk:8-jdk-alpine
      VOLUME /tmp
      ARG JAR_FILE
      ADD ${JAR_FILE} app.jar
      ENTRYPOINT ["java","-Djava.security.egd=file:/dev/./urandom","-jar","/app.jar"]

      The FROM image can also be an Oracle JDK or Zulu JDK image as mentioned above.

Next, add the com.spotify dockerfile-maven-plugin and some configuration to your pom.xml file to automatically build the Docker image from the Dockerfile once the Spring Boot JAR file has been created. See here for a complete example pom.xml and Dockerfile. The relevant part of the pom.xml file is below.

      <build>
      <finalName>accs-cache-sample</finalName>
      <plugins>
      <plugin>
      <groupId>org.springframework.boot</groupId>
      <artifactId>spring-boot-maven-plugin</artifactId>
      </plugin>
      <plugin>
      <groupId>com.spotify</groupId>
      <artifactId>dockerfile-maven-plugin</artifactId>
      <version>1.3.6</version>
      <configuration>
      <repository>${docker.image.prefix}/${project.artifactId}</repository>
      <buildArgs>
      <JAR_FILE>target/${project.build.finalName}.jar</JAR_FILE>
      </buildArgs>
      </configuration>
      </plugin>
      </plugins>
      </build>

      To actually build the Docker image, which allows using it locally, you can do:

      mvn install dockerfile:build

      If you want to distribute it (allow others to easily pull and run it), you can push it with

      mvn install dockerfile:push

This will of course only work if you’re logged in as maartensmeets, and in this example only for Docker Hub. After pushing, the image is available on hub.docker.com; you can find it there since it is public.

      You can then do something like

      docker run -t maartensmeets/accs-cache-sample:latest

      The post Running Spring Boot in a Docker container on OpenJDK, Oracle JDK, Zulu on Alpine Linux, Oracle Linux, Ubuntu appeared first on AMIS Oracle and Java Blog.

      Application Container Cloud Service (ACCS): Using the Application Cache from a Spring Boot application

      Wed, 2018-03-14 10:24

Spring Boot allows you to quickly develop microservices. Application Container Cloud Service (ACCS) allows you to easily host Spring Boot applications. Oracle provides an Application Cache based on Coherence which you can use from applications deployed to ACCS. In order to use the Application Cache from Spring Boot, Oracle provides an open source Java SDK. In this blog post I’ll give an example of how you can use the Application Cache from Spring Boot using this SDK. You can find the sample code here.

Using the Application Cache Java SDK

Create an Application Cache

You can use a web interface to easily create a new instance of the Application Cache. A single instance can contain multiple caches. A single application can use multiple caches, but only a single cache instance. Multiple applications can use the same cache instance and caches. Mind that the application and the application cache must be deployed in the same region in order to allow connectivity. Also do not use the ‘-’ character in your cache name, since the LBaaS configuration will then fail.

      Use the Java SDK

Spring Boot applications commonly use an architecture which defines abstraction layers. External resources are exposed through a controller. The controller uses services, which provide operations to execute specific tasks. The services use repositories for their connectivity / data access objects. Entities are the POJOs which are exchanged/persisted and exposed, for example as REST, in a controller. In order to connect to the cache, the repository seems like a good location. Which repository to use (a persistent back-end like a database, or for example the application cache repository) can be decided by the service, and this can differ per operation. Get operations, for example, might directly use the cache repository (which could fall back to other sources if it can’t find its data), while you might want to do Put operations in both the persistent back-end and the cache. See for an example here. A sketch of this layering is shown below.
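As an illustration of that layering, a minimal sketch of such a service follows; the UserCacheRepository, UserDbRepository and User types are hypothetical placeholders for your own repository and entity classes, not part of any SDK:

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Service;

@Service
public class UserService {

    @Autowired
    private UserCacheRepository cacheRepository; // hypothetical repository backed by the Application Cache
    @Autowired
    private UserDbRepository dbRepository;       // hypothetical repository backed by a persistent store

    // reads go to the cache repository first; that repository can fall back to other sources on a miss
    public User getUser(String id) {
        return cacheRepository.get(id);
    }

    // writes go to both the persistent back-end and the cache
    public void saveUser(User user) {
        dbRepository.save(user);
        cacheRepository.put(user.getId(), user);
    }
}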

In order to gain access to the cache, first a session needs to be established. The session can be obtained from a session provider; this can be a local session provider or a remote session provider. The local session provider can be used for local development. It can be created with an expiry which indicates the validity period of items in the cache. When developing / testing, you might try setting this to ‘never expires’, since otherwise you might not be able to find entries which you expect to be there. I have not looked further into this issue or created a service request for it, nor do I know if this is only an issue with the local session provider. See for sample code here or here.

When creating a session, you also need to specify the protocol to use. When using the Java SDK, you can (at the moment) choose between GRPC and REST. GRPC might be more challenging to implement without an SDK in, for example, Node.js code, but I have not tried this, nor have I compared the performance of the two protocols. Another difference is that the application uses different ports and URLs to connect to the cache, depending on the protocol. You can see how to determine the correct URL / protocol from ACCS environment variables here.
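As a sketch, the cache host can be read from the environment that ACCS provides to the application; the variable name CACHING_INTERNAL_CACHE_URL used here is my assumption based on the ACCS documentation of that time, so verify it in your own environment:

// read the internal cache host name from the ACCS-provided environment
// (the variable name CACHING_INTERNAL_CACHE_URL is an assumption; verify it in your ACCS deployment)
String cacheHost = System.getenv("CACHING_INTERNAL_CACHE_URL");
// the port and path to append differ per protocol (REST vs GRPC); see the ACCS documentation
String cacheUrl = "http://" + cacheHost;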

      The ACCS Application Cache Java SDK allows you to add a Loader and a Serializer class when creating a Cache object. The Loader class is invoked when a value cannot be found in the cache. This allows you to fetch objects which are not in the cache. The Serializer is required so the object can be transferred via REST or GRPC. You might do something like below.
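The original post refers to a code fragment here; as a stand-in, below is a minimal sketch of what such code could look like. The SDK class and method names (SessionProvider, RemoteSessionProvider, Transport.rest(), Session.getCache) are taken from my reading of the Application Cache Java SDK samples and may differ per SDK version; the User, UserLoader and UserSerializer types are hypothetical application classes.

// NOTE: the SDK class and method names below are assumptions based on the SDK samples; verify them against the SDK version you use.

// obtain a session from a session provider; use a local session provider instead for local development
SessionProvider sessionProvider = new RemoteSessionProvider(cacheUrl);
Session session = sessionProvider.createSession(Transport.rest());

// obtain a cache handle; the Loader is invoked on a cache miss, the Serializer converts objects for REST/GRPC transfer
// (User, UserLoader and UserSerializer are hypothetical application classes)
Cache<User> users = session.getCache("users", new UserLoader(), new UserSerializer());

// basic usage: put an entry and read it back (a miss would go through the Loader)
users.put("42", new User("42", "John Doe"));
User user = users.get("42");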

      Injection

      Mind that when using Spring Boot you do not want to create instances of objects by directly doing something like: Class bla = new Class(). You want to let Spring handle this by using the @Autowired annotation.

Do mind though that the @Autowired annotation assigns instances to fields only after the constructor of the instance has executed. If you want to use the @Autowired variables after construction but before other methods are executed, you should put that initialization logic in a @PostConstruct annotated method. See also here. See here for a concrete implemented sample.
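A minimal illustration of that ordering (the CacheRepository type and its warmUp method are hypothetical placeholders):

import javax.annotation.PostConstruct;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Component;

@Component
public class CacheInitializer {

    @Autowired
    private CacheRepository cacheRepository; // hypothetical; still null while the constructor runs

    public CacheInitializer() {
        // cacheRepository is NOT available here yet; field injection happens after construction
    }

    @PostConstruct
    public void init() {
        // cacheRepository has been injected by now, so initialization that needs it belongs here
        cacheRepository.warmUp();
    }
}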

      Connectivity

      The Application cache can be restarted at certain times (e.g. maintenance like patching, scaling) and there can be connectivity issues due to other reasons. In order to deal with that it is a good practice to make the connection handling more robust by implementing retries. See for example here.
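A simple retry wrapper could look like the sketch below; this is a generic illustration rather than the pattern from the linked example, and the Supplier-based shape and retry parameters are my own choices:

import java.util.function.Supplier;

public class RetryHelper {

    // retries the given call a number of times, waiting in between attempts, before giving up
    public static <T> T withRetry(Supplier<T> call, int maxAttempts, long waitMillis) {
        RuntimeException lastError = new IllegalStateException("no attempts made");
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            try {
                return call.get();
            } catch (RuntimeException e) {
                lastError = e;
                System.out.println("Cache call failed (attempt " + attempt + " of " + maxAttempts + "): " + e.getMessage());
                try {
                    Thread.sleep(waitMillis);
                } catch (InterruptedException ie) {
                    Thread.currentThread().interrupt();
                    break;
                }
            }
        }
        throw lastError;
    }
}

A cache call could then be wrapped as, for example, RetryHelper.withRetry(() -> users.get("42"), 3, 1000).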

Deploy a Spring Boot application to ACCS

Create a deployable

      In order to deploy an application to ACCS, you need to create a ZIP file in a specific format. In this ZIP file there should at least be a manifest.json file which describes (amongst other things) how to start the application. You can read more here. If you have environment specific properties, binding information (such as which cache to use) and environment variables, you can create a deployment.json file. In addition to those metadata files, there of course needs to be the application itself. In case of Spring Boot, this is a large JAR file which contains all dependencies. You can create this file with the spring-boot-maven-plugin. The ZIP itself is most easily composed with the maven-assembly-plugin.
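As an illustration, a minimal manifest.json for a Java SE / Spring Boot deployable could look like the sketch below; the field values, such as the JAR name, are placeholders, and you should check the ACCS documentation for the authoritative format:

{
  "runtime": {
    "majorVersion": "8"
  },
  "command": "java -jar accs-cache-sample.jar",
  "release": {
    "version": "1.0"
  },
  "notes": "Spring Boot application using the ACCS Application Cache"
}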

      Deploy to ACCS

There are two major ways (next to directly using the APIs with, for example, cURL) in which you can deploy to ACCS: manually or using the Developer Cloud Service. The process to do this from Developer Cloud Service is described here. This is quicker (it allows redeployment on Git commit, for example) and more flexible. The manual procedure is described globally below. An important thing to mind is that if you deploy the same application under the same name several times, you might encounter issues with the application not being replaced with the new version. In this case you can do two things. One is to deploy under a different name every time; the name of the application however is reflected in the URL, which could cause issues for users of the application. The other is to remove the files from the Storage Cloud Service before redeployment, so you are sure the most recent version of the deployable ends up in ACCS.

      Manually

      Create a new Java SE application.

       

      Upload the previously created ZIP file

      References

      Introducing Application Cache Client Java SDK for Oracle Cloud

      Caching with Oracle Application Container Cloud

      Complete working sample Spring Boot on ACCS with Application cache (as soon as a SR is resolved)

      A sample of using the Application Cache Java SDK. Application is Jersey based

      The post Application Container Cloud Service (ACCS): Using the Application Cache from a Spring Boot application appeared first on AMIS Oracle and Java Blog.
