OTN TechBlog


Build and Deploy .Net Code using Oracle Developer Cloud

Wed, 2018-04-11 13:27

The much-awaited support for building and deploying .Net code on Oracle Cloud using Developer Cloud Service is here.

This blog post will show how you can use Oracle Developer Cloud Service to build .Net code and deploy it on Oracle Application Container Cloud. It will show how the newly released Docker build support in Developer Cloud can be leveraged to perform the build.

Technology Stack Used:

Application Stack: .Net for developing ASPX pages

Build: Docker for compiling the .Net code

Packaging Tool: Grunt to package the compiled code

DevOps Cloud Service: Oracle Developer Cloud

Deployment Cloud Service: Oracle Application Container Cloud

OS for Development: Windows 7

 

.Net application code source:

The ASP.Net application that we will build and deploy on Oracle Cloud using Docker can be downloaded from the following GitHub repository:

https://github.com/dotnet/dotnet-docker-samples/tree/master/aspnetapp

If you want to clone the GitHub repository, use the following command after installing the Git CLI on your machine.

git clone https://github.com/dotnet/dotnet-docker-samples/

After cloning the repository, use the aspnetapp application. Below is the folder structure of the cloned aspnetapp.

 

Apart from the four highlighted files in the screenshot below, which are essential for the deployment, all the other files and folders are part of the .Net application.

Note: You may not see the .git folder yet, as the Git repository has not been initialized.

Now we need to initialize a Git repository for the aspnetappl folder, as we will be pushing this code to the Git repository hosted on Oracle Developer Cloud. Below are the commands you can run on your command line after installing the Git CLI and adding it to your path.

Command prompt > cd <to the aspnetappl folder>

Command prompt > git init

Command prompt > git add --all

Command prompt > git commit -m "First Commit"

The commands above initialize the Git repository locally in the application folder and then stage all the code in the folder using the git add --all command.

Then commit the added files by using the git commit command, as shown above.

Now go to your Oracle Developer Cloud project and create a Git repository for the .Net code to be pushed to. For this blog I created the repository by clicking the 'New Repository' button and named it 'DotNetDockerAppl', as shown in the screenshot below; you can choose any name you like.

Copy the Git repository URL as shown below.

Then add the URL as a remote of the local Git repository we created, using the command below:

 Command prompt > git remote add origin <Developer Cloud Git repository URL>

Then use the command below to push the code to the master branch of the Developer Cloud-hosted Git repository.

Command prompt > git push origin master

 

Deployment related files that need to be created: Dockerfile

This file is used by Docker to build an image with .NET Core installed, which also includes the .Net application code cloned from the Developer Cloud Git repository. A Dockerfile already comes with the project; replace its contents with the script below.

 

FROM microsoft/aspnetcore-build:2.0

WORKDIR /app

# copy csproj and restore as distinct layers

COPY *.csproj ./

RUN dotnet restore

# copy everything else and build

COPY . ./

RUN dotnet publish -c Release -r linux-x64

In the script above we pull the aspnetcore-build:2.0 image, set a working directory, copy in the .csproj file and restore dependencies, and then copy in the rest of the code from the Git repo. Finally, we use the 'dotnet publish' command to publish the compiled code targeting the linux-x64 runtime.

manifest.json

This file is essential for the deployment of the .Net application on the Oracle Application Container Cloud.

{

 "runtime":{

 "majorVersion":"2.0.0-runtime"

 },

 "command": "dotnet AspNetAppl.dll"

}

The command attribute in the JSON specifies the DLL to be executed by the dotnet command, while the runtime block specifies the .Net version to be used for executing the compiled code.

 

Gruntfile.js

This file defines the Grunt build task. It identifies the type of deployment artifact to generate (in this case a zip file) and the project files to include in it. For the .Net application we only need to include everything in the publish folder, along with the manifest.json required for Application Container Cloud deployment. The folder and file pattern are defined by the cwd and src attributes, as shown in the code snippet below.

 

/**

 * http://usejsdoc.org/

 */

module.exports = function(grunt) {

  require('load-grunt-tasks')(grunt);

  grunt.initConfig({

    compress: {

      main: {

        options: {

          archive: 'AspNetAppl.zip',

          pretty: true

        },

        expand: true,

        cwd: './publish',

        src: ['./**/*'],

        dest: './'

      }

    }

  });

  grunt.registerTask('default', ['compress']);

};

package.json

Since Grunt is a Node.js-based build tool, which we are using in this blog to build and package the deployment artifact, we need a package.json file to define the dependencies required for Grunt to execute.

{

  "name": "ASPDotNetAppl",

  "version": "0.0.1",

  "private": true,

  "scripts": {

    "start": "dotnet AspNetAppl.dll"

  },

  "dependencies": {

    "grunt": "^0.4.5",

    "grunt-contrib-compress": "^1.3.0",

    "grunt-hook": "^0.3.1",

    "load-grunt-tasks": "^3.5.2"

  }

}

Once all the code is pushed to the Git repository hosted on Oracle Developer Cloud, you can browse and verify it by going to the Code tab and selecting the appropriate Git repository and branch in the dropdowns above the file list, as shown in the screenshot below.

 

Build Job Configuration on Developer Cloud

We are going to use the newly introduced Mako build instead of the Hudson build system in DevCS.

Below are the build job configuration screenshots for the 'DotNetBuild' job, which will build and deploy the .Net application:

Create a build job by clicking the 'New Job' button and give it a name of your choice; for this blog I named it 'DotNetBuild'. You will also need to select a software template that contains the Docker and Node.js runtimes. If you do not see the required software template in the dropdown, as shown in the screenshot below, you will have to configure it from the Organization -> VM Templates menu, which will start a Build VM with the required software template. To learn more about configuring VMs and VM templates, you can refer to this link.

 

Now go to the Builders tab, where we configure the build steps. First we add an Execute Shell step that builds the Docker image using the Dockerfile in our Git repository, creates a container from it (without starting it), copies the compiled code from the container to the build machine, downloads the Grunt build tool dependencies from the npm registry, and finally runs the grunt command to build the AspNetAppl.zip file that will be deployed on Application Container Cloud. A sketch of these shell commands is shown below.
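Here is a minimal sketch of what that Execute Shell step could look like. The image name, container name, and publish path below are assumptions for illustration only; adjust them to match your own Dockerfile and project layout.

docker build -t aspnetappl .                        # build the image from the Dockerfile in the cloned repo
docker create --name aspnetappl-build aspnetappl    # create (but do not start) a container from the image
# copy the published output from the container to the build machine (the path is an assumption)
docker cp aspnetappl-build:/app/bin/Release/netcoreapp2.0/linux-x64/publish ./publish
cp manifest.json ./publish                          # include the ACCS manifest in the artifact
npm install                                         # download the Grunt dependencies from the npm registry
npm install -g grunt-cli                            # only if the grunt CLI is not already on the Build VM
grunt                                               # produces AspNetAppl.zip as defined in Gruntfile.js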

 

 

Now configure the PSM CLI with the credentials and identity domain of your ACCS instance. Then add another Unix Shell builder step where you provide the psm command that deploys the zip file we generated earlier with Grunt to Application Container Cloud (a rough sketch of these commands follows the note below).

Note: All this will be done in the same ‘DotNetBuild’ build job that we have created earlier.
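A rough sketch of the deployment step is shown below. The application name is hypothetical, and the psm accs push flag spellings are from memory and may differ between PSM CLI versions, so verify them with psm accs push -h before relying on this.

# configure the PSM CLI with your ACCS credentials and identity domain (interactive)
psm setup
# deploy the Grunt-generated archive to Application Container Cloud
# (app name 'DotNetDockerAppl' is an example; confirm flag names with 'psm accs push -h')
psm accs push -n DotNetDockerAppl -r dotnet -s hourly -m manifest.json -p AspNetAppl.zip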

 

As the last part of the build configuration, configure the Artifact Archiver in the Post Build tab, as shown below, to archive the generated zip file for deployment.

 

The screenshot below shows the 'DotNet' application deployed in the Application Container Cloud service console. Copy the application URL as shown in the screenshot; the URL will vary for your cloud instance.

 

Use the copied URL to access the deployed .Net application in a browser. It will look like the screenshot below.

Happy Coding!

**The views expressed in this post are my own and do not necessarily reflect the views of Oracle

 

 

Introducing the Oracle MySQL Operator for Kubernetes

Wed, 2018-03-28 00:30

(Originally published on Medium)

Introduction

Oracle recently open sourced a Kubernetes operator for MySQL that makes running and managing MySQL on Kubernetes easier.

The MySQL Operator is a Kubernetes controller that can be installed into any existing Kubernetes cluster. Once installed, it will enable users to create and manage production-ready MySQL clusters using a simple declarative configuration format. Common operational tasks such as backing up databases and restoring from an existing backup are made extremely easy. In short, the MySQL Operator abstracts away the hard work of running MySQL inside Kubernetes.
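As a rough sketch of what an installation can look like (the chart location, release name, and Helm 2-style syntax are assumptions; the project README is the authoritative source):

git clone https://github.com/oracle/mysql-operator.git
cd mysql-operator
# install the operator via the project's Helm chart (chart path and release name are assumptions)
helm install --name mysql-operator mysql-operator/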

The project started as a way to help internal teams get MySQL running in Kubernetes more easily, but it quickly became clear that many other people might be facing similar issues.


Features

Before we dive into the specifics of how the MySQL Operator works, let’s take a quick look at some of the features it offers:

Cluster configuration

There are two options for how a cluster is configured:

  • Primary (in this mode the group has a single-primary server that is set to read-write mode. All the other members in the group are set to read-only mode)
  • Multi-Primary (In multi-primary mode, there is no notion of a single primary. There is no need to engage an election procedure since there is no server playing any special role.)
Cluster management
  • Create and scale MySQL clusters using Innodb and Group Replication on Kubernetes
  • When cluster instances die, the MySQL Operator will automatically re-join them into the cluster
  • Use Kubernetes Persistent Volume Claims to store data on local disk or network attached storage.
Backup and restore
  • Create on-demand backups
  • Create backup schedules to automatically backup databases to Object Storage (S3 etc)
  • Restore a database from an existing backup
Operations
  • Run on any Kubernetes cluster (Oracle Cloud Infrastructure, AWS, GCP, Azure)
  • Prometheus metrics for alerting and monitoring
  • Self healing clusters
The Operator Pattern

A Kubernetes Operator is simply a domain-specific controller that can manage, configure, and automate the lifecycle of stateful applications. Managing stateful applications, such as databases, caches, and monitoring systems running on Kubernetes, is notoriously difficult. By leveraging the power of the Kubernetes API we can now build self-managing, self-driving infrastructure by encoding operational knowledge and best practices directly into code. For instance, if a MySQL instance dies, we can use an Operator to react and take the appropriate action to bring the system back online.


How it works

The MySQL Operator makes use of Custom Resource Definitions as a way to extend the Kubernetes API. For instance, we create custom resources for MySQLClusters and MySQLBackups. Users of the MySQL Operator interact via these third party resource objects. When a user creates a backup for example, a new MySQLBackup resource is created inside Kubernetes which contains references and information about that backup.

The MySQL Operator is, at its core, a simple Kubernetes controller that watches the API server for Custom Resource Definitions relating to MySQL and acts on them.


HA / Production Ready MySQL Clusters

The MySQL Operator is opinionated about the way in which clusters are configured. We build upon InnoDB cluster (which uses Group Replication) to provide a complete high availability solution for MySQL running on Kubernetes.


Examples

The following examples will give you an idea of how the MySQL Operator can be used to manage your MySQL Clusters.


Create a MySQL Cluster

Creating a MySQL cluster using the Operator is easy. We define a simple YAML file and submit this directly to Kubernetes via kubectl. The MySQL operator watches for MySQLCluster resources and will take action by starting up a MySQL cluster.

apiVersion: "mysql.oracle.com/v1"
kind: MySQLCluster
metadata:
  name: mysql-cluster-with-3-replicas
spec:
  replicas: 3
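For example, assuming the definition above is saved as cluster.yaml, submitting it is a single command:

kubectl apply -f cluster.yaml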

You should now be able to see your cluster running.
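A quick way to check (assuming the custom resource's plural name is mysqlclusters and the cluster pods run in the default namespace):

kubectl get mysqlclusters
kubectl get pods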

There are several other options available when creating a cluster such as specifying a Persistent Volume Claim to define where your data is stored. See the examples directory in the project for more examples.


Create an on-demand backup

We can use the MySQL operator to create an “on-demand” database backup and upload it to object storage.

Create a backup definition and submit it via kubectl.

apiVersion: "mysql.oracle.com/v1"
kind: MySQLBackup
metadata:
  name: mysql-backup
spec:
  executor:
    provider: mysqldump
    databases:
      - test
  storage:
    provider: s3
    secretRef:
      name: s3-credentials
    config:
      endpoint: x.compat.objectstorage.y.oraclecloud.com
      region: ociregion
      bucket: mybucket
  clusterRef:
    name: mysql-cluster
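Assuming the definition above is saved as backup.yaml:

kubectl apply -f backup.yaml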

You can now list or fetch individual backups via kubectl

kubectl get mysqlbackups

Or fetch an individual backup

kubectl get mysqlbackup api-production-snapshot-151220170858 -o yaml
Create a Backup Schedule

Users can attach scheduled backup policies to a cluster so that backups get created on a given cron schedule. A user may create multiple backup schedules attached to a single cluster if required.

This example will create a backup of the cluster's test database every hour and upload it to Oracle Cloud Infrastructure Object Storage.

apiVersion: "mysql.oracle.com/v1"
kind: MySQLBackupSchedule
metadata:
  name: mysql-backup-schedule
spec:
  schedule: '30 * * * *'
  backupTemplate:
    executor:
      provider: mysqldump
      databases:
        - test
    storage:
      provider: s3
      secretRef:
        name: s3-credentials
      config:
        endpoint: x.compat.objectstorage.y.oraclecloud.com
        region: ociregion
        bucket: mybucket
    clusterRef:
      name: mysql-cluster

Roadmap

Some of the features on our roadmap include

  • Support for MySQL Enterprise Edition
  • Support for MySQL Enterprise Backup
Conclusion

The MySQL Operator showcases the power of Kubernetes as a platform. It makes running MySQL inside Kubernetes easy by abstracting complexity and reducing operational burden. Although it is still in very early development, the MySQL Operator already provides a great deal of useful functionality out of the box.

Visit https://github.com/oracle/mysql-operator to learn more. We welcome contributions, ideas and feedback from the community.

If you want to deploy MySQL inside Kubernetes, we recommend using the MySQL Operator to do the heavy lifting for you.

 


Announcing Terraform support for Oracle Cloud Platform Services

Mon, 2018-03-26 06:03

Oracle and HashiCorp are pleased to announce the immediate availability of the Oracle Cloud Platform Terraform provider.

The initial release of the Oracle Cloud Platform Terraform provider supports the creation and lifecycle management of Oracle Database Cloud Service and Oracle Java Cloud Service instances.

With the availability of the Oracle Cloud Platform services support, Terraform’s “infrastructure-as-code” configurations can now be defined for deploying standalone Oracle PaaS services, or combined with the Oracle Cloud Infrastructure and Infrastructure Classic services supported by the opc and oci providers for complete infrastructure and application deployment.

Supported PaaS Services

The following Oracle Cloud Platform services are supported by the initial Oracle Cloud Platform (PaaS) Terraform provider. Additional services/resources will be added over time.

  • Oracle Database Cloud Service Instances
  • Oracle Database Cloud Service Access Rules
  • Oracle Java Cloud Service Instances
  • Oracle Java Service Access Rules
Using the Oracle Cloud Platform Terraform provider

To get started using Terraform to provision Oracle Cloud Platform services, let's look at an example of deploying a single Java Cloud Service instance along with its dependent Database Cloud Service instance.

First we declare the provider definition, providing the account credentials and the appropriate service REST API endpoints. The Identity Domain name, Identity Service ID and REST endpoint URL can be found in the Service details section on the My Services Dashboard.

For IDCS Cloud Accounts use the Identity Service ID for the identity_domain.

provider "oraclepaas" {
  user              = "example@user.com"
  password          = "Pa55_Word"
  identity_domain   = "idcs-5bb188b5460045f3943c57b783db7ffa"
  database_endpoint = "https://dbaas.oraclecloud.com"
  java_endpoint     = "https://jaas.oraclecloud.com"
}

For Traditional Accounts use the account Identity Domain Name for the identity_domain

provider "oraclepaas" {
  user              = "example@user.com"
  password          = "Pa55_Word"
  identity_domain   = "mydomain"
  database_endpoint = "https://dbaas.oraclecloud.com"
  java_endpoint     = "https://jaas.oraclecloud.com"
}

Database Service Instance configuration

The oraclepaas_database_service_instance resource is used to define the Oracle Database Cloud service instance. A single Terraform database service resource definition can represent configurations ranging from a single-instance Oracle Database Standard Edition deployment to a complete multi-node Oracle Database Enterprise Edition deployment with RAC and Data Guard for high availability and disaster recovery.

Instances can also be created from backups or snapshots of another Database Service instance. For this example we'll create a new single-instance database for use with the Java Cloud Service configured further down.

resource "oraclepaas_database_service_instance" "database" {
  name              = "my-terraformed-database"
  description       = "Created by Terraform"
  edition           = "EE"
  version           = "12.2.0.1"
  subscription_type = "HOURLY"
  shape             = "oc1m"
  ssh_public_key    = "${file("~/.ssh/id_rsa.pub")}"

  database_configuration {
    admin_password     = "Pa55_Word"
    backup_destination = "BOTH"
    sid                = "ORCL"
    usable_storage     = 25
  }

  backups {
    cloud_storage_container = "Storage-${var.domain}/my-terraformed-database-backup"
    cloud_storage_username  = "${var.user}"
    cloud_storage_password  = "${var.password}"
    create_if_missing       = true
  }
}

Let's take a closer look at the configuration settings. Here we are declaring that this is an Oracle Database 12c Release 2 (12.2.0.1) Enterprise Edition instance with an oc1m (1 OCPU/15GB RAM) shape and hourly usage metering.

edition           = "EE"
version           = "12.2.0.1"
subscription_type = "HOURLY"
shape             = "oc1m"

The ssh_public_key is the public key to be provisioned to the instance to allow SSH access.

The database_configuration block sets the initial configuration for the actual Database instance to be created in the Database Cloud service, including the database SID, the initial password, and the initial usable block volume storage for the database.

database_configuration {
  admin_password     = "Pa55_Word"
  backup_destination = "BOTH"
  sid                = "ORCL"
  usable_storage     = 25
}

The backup_destination setting configures whether backups go to the Object Storage Service (OSS), to both object storage and local storage (BOTH), or are disabled (NONE). A backup destination of OSS or BOTH is required for database instances that are used in combination with Java Cloud Service instances.

The Object Storage Service location and access credentials are configured in the backups block.

backups {
  cloud_storage_container = "Storage-${var.domain}/my-terraformed-database-backup"
  cloud_storage_username  = "${var.user}"
  cloud_storage_password  = "${var.password}"
  create_if_missing       = true
}

Java Cloud Service Instance

The oraclepaas_java_service_instance resource is used to define the Oracle Java Cloud service instance. A single Terraform resource definition can represent configurations ranging from a single-instance Oracle WebLogic Server deployment to a complete multi-node Oracle WebLogic cluster with an Oracle Coherence data grid cluster and an Oracle Traffic Director load balancer.

Instances can also be created from snapshots of another Java Cloud Service instance. For this example we'll create a new two-node WebLogic cluster with a load balancer, associated with the Database Cloud Service instance defined above.

resource "oraclepaas_java_service_instance" "jcs" {
  name                 = "my-terraformed-java-service"
  description          = "Created by Terraform"
  edition              = "EE"
  service_version      = "12cRelease212"
  metering_frequency   = "HOURLY"
  enable_admin_console = true
  ssh_public_key       = "${file("~/.ssh/id_rsa.pub")}"

  weblogic_server {
    shape = "oc1m"

    managed_servers {
      server_count = 2
    }

    admin {
      username = "weblogic"
      password = "Weblogic_1"
    }

    database {
      name     = "${oraclepaas_database_service_instance.database.name}"
      username = "sys"
      password = "${oraclepaas_database_service_instance.database.database_configuration.0.admin_password}"
    }
  }

  oracle_traffic_director {
    shape = "oc1m"

    listener {
      port         = 8080
      secured_port = 8081
    }
  }

  backups {
    cloud_storage_container = "Storage-${var.domain}/my-terraformed-java-service-backup"
    auto_generate           = true
  }
}

Let's break this down. Here we are declaring that this is a 12c Release 2 (12.2.1.2) Enterprise Edition Java Cloud Service instance with hourly usage metering.

edition            = "EE"
service_version    = "12cRelease212"
metering_frequency = "HOURLY"

Again the ssh_public_key is the public key to be provisioned to the instance to allow SSH access.

The weblogic_server block provides the configuration details for the WebLogic Server instances deployed for this Java Cloud Service instance. The weblogic_server definition sets the instance shape, in this case an oc1m (1 OCPU/15GB RAM).

The admin block sets the WebLogic server admin user and initial password.

admin {
  username = "weblogic"
  password = "Weblogic_1"
}

The database block connects the WebLogic server to the Database Service instance already defined above. In this example we are assuming the database and java service instances are declared in the same configuration, so we can fetch the database configuration values.

database {
  name     = "${oraclepaas_database_service_instance.database.name}"
  username = "sys"
  password = "${oraclepaas_database_service_instance.database.database_configuration.0.admin_password}"
}

The oracle_traffic_director block configures the load balancer that directs traffic to the managed WebLogic server instances.

oracle_traffic_director {
  shape = "oc1m"

  listener {
    port         = 8080
    secured_port = 8081
  }
}

By default the load balancer is configured with the same admin credentials defined in the weblogic_server block; different credentials can be configured if required. If the insecure port is not set, only the secured_port is enabled.

Finally, similar to the Database Cloud service instance configuration, the backups block sets the Object Storage Service location for the Java Service instance backups.

backups {
  cloud_storage_container = "Storage-${var.domain}/-backup"
  auto_generate           = true
}

Provisioning

With the provider and resource definitions configured in a Terraform project (e.g. all in a main.tf file), deploying the above configuration is as simple as:

$ terraform init
$ terraform apply

The terraform init command automatically fetches the latest version of the oraclepaas provider, and terraform apply starts the provisioning. The complete provisioning of the Database and Java Cloud instances can be a long-running operation. To remove the provisioned instances, run terraform destroy.

Related Content

Terraform Provider for Oracle Cloud Platform

Terraform Provider for Oracle Cloud Infrastructure

Terraform Provider for Oracle Cloud Infrastructure Classic

Part II: Data processing pipelines with Spring Cloud Data Flow on Oracle Cloud

Thu, 2018-03-22 00:30

This is the 2nd (and final) part of this blog series about Spring Cloud Data Flow on Oracle Cloud

In Part 1, we covered some of the basics and the infrastructure setup (Kafka, MySQL), and at the end of it we had a fully functional Spring Cloud Data Flow server on the cloud. Now it's time to put it to use!

In this part, you will

  • get a technical overview of the solution and look at some internal details (the whys and hows)
  • build and deploy a data flow pipeline on Oracle Application Container Cloud
  • and finally test it out…
Behind the scenes

Before we see things in action, here is an overview so that you understand what you will be doing and get a (rough) idea of why it works the way it does.

At a high level, this is how things work in Spring Cloud Data Flow (you can always dive into the documentation for details)

  • You start by registering applications — these contain the core business logic and deal with how you would process the data e.g. a service which simply transforms the data it receives (from the messaging layer) or an app which pumps user events/activities into a message queue
  • You will then create a stream definition where you will define the pipeline of your data flow (using the apps which you previously registered) and then deploy them
  • (here is the best part!) once you deploy the stream definition, the individual apps in the pipeline get automatically deployed to Oracle Application Container Cloud, thanks to our custom Spring Cloud Deployer SPI implementation (this was briefly mentioned in Part 1)

At a high level, the SPI implementation needs to adhere to the contract/interface outlined by org.springframework.cloud.deployer.spi.app.AppDeployer and provide implementations for the following methods: deploy, undeploy, status and environmentInfo.

Thus the implementation handles the life cycle of the pipeline/stream processing applications

  • creation and deletion
  • providing status information
Show time…! App registration
We will start by registering our stream/data processing applications

As mentioned in Part 1, Spring Cloud Data Flow uses Maven as one of its sources for the applications which need to be deployed as a part of the pipelines which you build — more details here and here

You can use any Maven repo — we are using Spring Maven repo since we will be importing their pre-built starter apps. Here is the manifest.json where this is configured

{   "runtime": {     "majorVersion": "8"   },   "command": "java -jar spring-cloud-dataflow-server-accs-1.0.0-SNAPSHOT.jar    --server.port=$PORT    --maven.remote-repositories.repo1.url=http://repo.spring.io/libs-snapshot    --spring.cloud.dataflow.applicationProperties.stream.spring.cloud.stream.kafka.binder.brokers=$OEHPCS_EXTERNAL_CONNECT_STRING     --spring.cloud.dataflow.applicationProperties.stream.spring.cloud.stream.kafka.binder.zkNodes=<event_hub_zookeeper_IP>:<port>",   "notes": "ACCS Spring Cloud Data Flow Server" }

manifest.json for Data Flow server on ACCS

Access the Spring Cloud Data Flow dashboard — navigate to the application URL e.g. https://SpringCloudDataflowServer-mydomain.apaas.us2.oraclecloud.com/dashboard

Spring Cloud Data Flow dashboard

For the purpose of this blog, we will import two pre-built starter apps

http
  • Type — source
  • Role — pushes data to the message broker
  • Maven URL — maven://org.springframework.cloud.stream.app:http-source-kafka:1.0.0.BUILD-SNAPSHOT

log

  • Type — sink
  • Role — consumes data/events from the message broker
  • Maven URL — maven://org.springframework.cloud.stream.app:log-sink-kafka:1.0.0.BUILD-SNAPSHOT

There is another category of apps known as processor — this is not covered for the sake of simplicity

There are a bunch of these starter apps which make it super easy to get going with Spring Cloud Data Flow!

Importing applications

After app registration, we can go ahead and create our data pipeline. But, before we do that, let’s quickly glance at what it will do…

Overview of the sample pipeline/data flow

Here is the flow which the pipeline will encapsulate — you will see this in action once you reach the Test Drive section, so keep going!

  • http app -> Kafka topic
  • Kafka -> log app -> stdout

The http app will provide a REST endpoint for us to POST messages to it and these will be pushed to a Kafka topic. The log app will simply consume these messages from the Kafka topic and then spit them out to stdout — simple!

Create & deploy a pipeline

Let's start creating the stream. You can pick from the list of source and sink apps which we just imported (http and log).

 

Use the below stream definition — just replace KafkaDemo with the name of your Event Hub Cloud service instance which you set up in the Infrastructure setup section in Part 1.

http --port=$PORT --app.accs.deployment.services='[{"type": "OEHPCS", "name": "KafkaDemo"}]' | log --app.accs.deployment.services='[{"type": "OEHPCS", "name": "KafkaDemo"}]'

Stream definition

You will see a graphical representation of the pipeline (which is quite simple in our case)

Stream definition


Create (and deploy) the pipeline

Deploy the stream definition

The deployment process will get initiated and the same will be reflected on the console

Deployment in progress….

Go back to the Applications menu in Oracle Application Container Cloud to confirm that the individual app deployments have also been triggered.

Deployment in progress…

Open the application details and navigate to the Deployments section to confirm that both apps have service binding to the Event Hub instances as specified in the stream definition

Service Binding to Event Hub Cloud

After the applications are deployed to Oracle Application Container Cloud, the state of the stream definition will change to deployed and the apps will also show up in the Runtime section

 

Deployment complete

 

Spring Cloud Data Flow Runtime menu

Connecting the dots..
Before we jump ahead and test out the data pipeline we just created, here are a couple of pictorial representations to summarize how everything connects logically.

Individual pipeline components in Spring Cloud Data Flow map to their corresponding applications in Oracle Application Container Cloud — deployed via the custom SPI implementation (discussed above as well as in part 1)

Spring Cloud Data Flow pipeline to application mapping

.. and here is where the logical connection to Kafka is depicted

  • http app pushes to Kafka topic
  • the log app consumes from Kafka topic and emits the messages to stdout
  • the topics are auto-created in Kafka by default (you can change this) and the naming convention is the stream definition (DemoStream) and the pipeline app name (http) separated by a dot (.)

Pipeline apps interacting with Kafka

Test drive

Time to test the data pipeline…

Send messages via the http (source) app
POST a few messages to the REST endpoint exposed by the http app (check its URL from the Oracle Application Container Cloud console) — these messages will be sent to a Kafka topic and consumed by the log app

curl -X POST https://demostreamhttp-ocloud200.uscom-central-1.oraclecloud.com/ -H 'content-type: text/plain' -d test1
curl -X POST https://demostreamhttp-ocloud200.uscom-central-1.oraclecloud.com/ -H 'content-type: text/plain' -d test12
curl -X POST https://demostreamhttp-ocloud200.uscom-central-1.oraclecloud.com/ -H 'content-type: text/plain' -d test123

Check the log (sink) service
Download the logs for the log app to confirm. Navigate to the application details and check out the Logs tab in the Administration section (documentation here).

Check logs

You should see the same messages which you sent to the HTTP endpoint

 

Messages from Kafka consumed and sent to stdout

There is another way…
What you can also do is to validate this directly using Kafka (on Event Hub cloud) itself — all you need is to create a custom Access Rule to open port 6667 on the Kafka Server VM on Oracle Event Hub Cloud — details here

You can now inspect the Kafka topic directly by using the console consumer and then POSTing messages to the HTTP endpoint (as mentioned above)

kafka-console-consumer.bat --bootstrap-server <event_hub_kafka_IP>:6667 --topic DemoStream.http

Un-deploy
If you trigger an un-deployment or destroy of the stream definition, it will trigger an app deletion from Oracle Application Container Cloud

Un-deploy/destroy the definition

Quick recap

That’s all for this blog and it marks the end of this 2-part blog series!

  • we covered the basic concepts & deployed a Spring Cloud Data Flow server on Oracle Application Container Cloud along with its dependent components which included…
  • Oracle Event Hub Cloud as the Kafka based messaging layer, and Oracle MySQL Cloud as the persistent RDBMS store
  • we then explored some behind the scenes details and made use of our Spring Cloud Data Flow setup where …
  • … we built & deployed a simple data pipeline along with its basic testing/validation
Don’t forget to…
  • check out the tutorials for Oracle Application Container Cloud — there is something for every runtime!
  • other blogs on Application Container Cloud

Cheers!

The views expressed in this post are my own and do not necessarily reflect the views of Oracle.

Podcast: Combating Complexity: Fad, Fashion, and Failure in Software Development

Wed, 2018-03-21 18:40

There is little in our lives that does not rely on software. That has been the reality for quite some time, and it will be even more true as self-driving cars and similar technologies become an even greater part of our lives. But as our reliance on software grows, so does the potential for disaster as software becomes increasingly complex.

In September 2017 The Atlantic featured “The Coming Software Apocalypse,” an article by James Somers that offers a fascinating and sobering look at how rampant code complexity has caused massive failures in critical software systems, like the 2014 incident that left the entire state of Washington without 9-1-1 emergency call-in services until the problem was traced to software running on a server in Colorado.

The article suggests that the core of the complexity problem is that code is too hard to think about. When and how did this happen?  

“You have to talk about the problem domain,” says Chris Newcombe, “because there are areas where code clearly works fine.” Newcombe, one of the people interviewed for the Atlantic article, is an expert on combating complexity, and since 2014 has been an architect on Oracle’s Bare Metal IaaS team.

“I used to work in video games,” Newcombe says. “There is lots of complex code in video games and most of them work fine. But if you're talking about control systems, with significant concurrency or affecting real-world equipment, like cars and planes and rockets or large-scale distribution systems, then we still have a way to go to solve the problem of true reliability. I think it's problem-domain specific. I don't think code is necessarily the problem. The problem is complexity, particularly concurrency and partial failure modes and side effects in the real world.”
 
Java Champion Adam Bien believes that in constrained environments, such as the software found in automobiles, “it's more or less a state machine which could or should be coded differently. So it really depends on the focus or the context. I would say that in enterprise software, code works well. The problem I see is more if you get newer ideas -- how to reshape the existing code quickly. But also coding is not just about code. Whether you write code or draw diagrams, the complexity will remain the same.”

Java Champion and microservices expert Chris Richardson agrees that “if you work hard enough, you can deliver software that actually works.” But he questions what is actually meant when software is described as “working well.”

“How successful are large software developments?” Richardson asks. “Do they meet requirements on time? Obviously that's a complex issue around project management and people. But what's the success rate?”

Richardson also points out that concerns about complexity are nothing new. “If you go back and look at the literature 30 or 40 years ago, people were concerned about software complexity then.”

The Atlantic article mentions that in most cases software does exactly what it was designed to do, an indication that it's not really a failure of the software as much as of the design of the software.

According to Developer Champion and Oracle ACE Director Lucas Jellema, “The complexity may not be in the software, but in the translation of the real-world problem or requirement into software. That starts not with coding, but with communication from one human being to another, from business end user to analyst to developer and maybe even some layers in between. That's where it usually goes wrong. In the end the software will do what the programmer told it to do, but that might not be what the business user or the real world requires it to do.”

Communication between stakeholders is only one aspect of the battle to reduce software complexity, and it’s just one issue among many that Chris Newcombe, Chris Richardson, Adam Bien, and Lucas Jellema discuss in this podcast. So settle in and listen.

This program was recorded on November 22, 2017.

The Panelists

(In alphabetical order)

Adam Bien
Java Champion
Oracle ACE Director

Lucas Jellema
CTO, AMIS Services
Oracle Developer Champion
Oracle ACE Director

Chris Newcombe
Architect, Oracle Bare Metal IaaS Team

Chris Richardson
Founder, Eventuate, Inc.
Java Champion

Additional Resources

Coming Soon
  • AI Beyond Chatbots: How is AI being applied to modern applications?
  • Microservices, API Management, and Modern Enterprise Software Architecture

Running Spring Boot in a Docker container on OpenJDK, Oracle JDK, Zulu on Alpine Linux, Oracle ...

Sat, 2018-03-17 13:30

(Originally published on  javaoraclesoa.blogspot.com)

Spring Boot is great for running inside a Docker container. Spring Boot applications ‘just run’. A Spring Boot application has an embedded servlet engine making it independent of application servers. There is a Spring Boot Maven plugin available to easily create a JAR file which contains all required dependencies. This JAR file can be run with a single command-line like ‘java -jar SpringBootApp.jar’. For running it in a Docker container, you only require a base OS and a JDK. In this blog post I’ll give examples on how to get started with different OSs and different JDKs in Docker. I’ll finish with an example on how to build a Docker image with a Spring Boot application in it.

Getting started with Docker Installing Docker

Of course you need a Docker installation. I won't get into details here, but:

Oracle Linux 7

yum-config-manager --enable ol7_addons
yum-config-manager --enable ol7_optional_latest
yum install docker-engine
systemctl start docker
systemctl enable docker

Ubuntu

curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
apt-get update
apt-get install docker-ce

You can add a user to the docker group or give it sudo docker rights. Either option effectively allows the user to become root on the host OS, though.
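For example, a minimal sketch of the first option (the username is a placeholder):

# add the user to the docker group; log out and back in for the change to take effect
sudo usermod -aG docker myuser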

Running a Docker container

See below for commands you can execute to start containers in the foreground or background and access them. For 'mycontainer' in the examples below, you can fill in any name you like. The image name can be found in the descriptions further below; for example, container-registry.oracle.com/os/oraclelinux:7 for an Oracle Linux 7 image from the Oracle Container Registry, or store/oracle/serverjre:8 for a JRE image from the Docker Store.

If you are using the Oracle Container Registry (for example to obtain Oracle JDK or Oracle Linux docker images) you first need to

  • go to container-registry.oracle.com and enable your OTN account to be used
  • go to the product you want to use and accept the license agreement
  • do docker login -u username -p password container-registry.oracle.com

If you are using the Docker Store, you first need to

  • go to store.docker.com and create an account
  • find the image you want to use. Click Get Content and accept the license agreement
  • do docker login -u username -p password

To start a container in the foreground

docker run --name mycontainer -it imagename /bin/sh

To start a container in the background

docker run --name mycontainer -d imagename tail -f /dev/null

To ‘enter’ a running container:

docker exec -it mycontainer /bin/sh

/bin/sh exists in Alpine Linux, Oracle Linux and Ubuntu. For Oracle Linux and Ubuntu you can also use /bin/bash. ‘tail -f /dev/null’ is used to start a ‘bare OS’ container with no other running processes to keep it running. A suggestion from here.

Cleaning up
Good to know is how to clean up your images/containers after having played around with them. See here.

#!/bin/bash
# Delete all containers
docker rm $(docker ps -a -q)
# Delete all images
docker rmi $(docker images -q)

Options for JDK

Of course there are more options for running JDKs in Docker containers. These are just some of the more commonly used.

Oracle JDK on Oracle Linux

When you’re running in the Oracle Cloud, you have probably noticed the OS running beneath it is often Oracle Linux (and currently also often version 7.x). When for example running Application Container Cloud Service, it uses the Oracle JDK. If you want to run in a similar environment locally, you can use Docker images. Good to know is that the Oracle Server JRE contains more than a regular JRE but less than a complete JDK. Oracle recommends using the Server JRE whenever possible instead of the JDK since the Server JRE has a smaller attack surface. Read more here. For questions about the roadmap and support, read the following blog article.

store.docker.com

The steps to obtain Docker images for Oracle JDK / Oracle Linux from store.docker.com are as follows:

Create an account on store.docker.com. Go to https://store.docker.com/images/oracle-serverjre-8. Click Get Content. Accept the agreement and you’re ready to login, pull and run.

#use the store.docker.com username and password
docker login -u yourusername -p yourpassword
docker pull store/oracle/serverjre:8

#To start in the foreground:
docker run --name jre8 -it store/oracle/serverjre:8 /bin/bash

container-registry.oracle.com

You can use the image from the container registry. First, same as for just running the OS, enable your OTN account and login.

#use your OTN username and password
docker login -u yourusername -p yourpassword container-registry.oracle.com
docker pull container-registry.oracle.com/java/serverjre:8

#To start in the foreground:
docker run --name jre8 -it container-registry.oracle.com/java/serverjre:8 /bin/bash

OpenJDK on Alpine Linux

When running Docker containers, you want them to be as small as possible to allow quick starting, stopping, downloading, scaling, etc. Alpine Linux is a suitable Linux distribution for small containers and is used quite often. There can be some threading challenges with Alpine Linux though; see for example here and here. Running OpenJDK on Alpine Linux in a Docker container is easier than you might think. You don't need any specific account for this, and no login. When you pull openjdk:8, you will get a Debian 9 image. In order to run on Alpine Linux, you can do

docker pull openjdk:8-jdk-alpine

Next you can do

docker run --name openjdk8 -it openjdk:8-jdk-alpine /bin/sh

Zulu on Ubuntu Linux

 

You can also consider OpenJDK-based JDKs like Azul's Zulu. This works mostly the same; only the image name differs, something like 'azul/zulu-openjdk:8'. The Zulu images are Ubuntu based.
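For example, a sketch along the same lines as the other images (the container name is arbitrary):

docker pull azul/zulu-openjdk:8
docker run --name zulu8 -it azul/zulu-openjdk:8 /bin/bash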

Do it yourself

Of course you can create your own image with a JDK. See for example here. This requires you to download the JDK and build the image yourself, but it is quite easy.

Spring Boot in a Docker container

Creating a container with a Spring Boot application, based on an image which already has a JDK in it, is easy. This is described here. You can create a simple Dockerfile like:

FROM openjdk:8-jdk-alpine
VOLUME /tmp
ARG JAR_FILE
ADD ${JAR_FILE} app.jar
ENTRYPOINT ["java","-Djava.security.egd=file:/dev/./urandom","-jar","/app.jar"]

The FROM image can also be an Oracle JDK or Zulu JDK image as mentioned above.

Then add the com.spotify dockerfile-maven-plugin dependency and some configuration to your pom.xml file to automate building the Docker image once you have the Spring Boot JAR file. See here for a complete example pom.xml and Dockerfile. The relevant part of the pom.xml file is below.

<build>
  <finalName>accs-cache-sample</finalName>
  <plugins>
    <plugin>
      <groupId>org.springframework.boot</groupId>
      <artifactId>spring-boot-maven-plugin</artifactId>
    </plugin>
    <plugin>
      <groupId>com.spotify</groupId>
      <artifactId>dockerfile-maven-plugin</artifactId>
      <version>1.3.6</version>
      <configuration>
        <repository>${docker.image.prefix}/${project.artifactId}</repository>
        <buildArgs>
          <JAR_FILE>target/${project.build.finalName}.jar</JAR_FILE>
        </buildArgs>
      </configuration>
    </plugin>
  </plugins>
</build>

To actually build the Docker image, which allows using it locally, you can do:

mvn install dockerfile:build

If you want to distribute it (allow others to easily pull and run it), you can push it with

mvn install dockerfile:push

This will of course only work if you’re logged in as maartensmeets and only for Docker hub (for this example). The below screenshot is after having pushed the image to hub.docker.com. You can find it there since it is public.

#Running the container
docker run -t maartensmeets/accs-cache-sample:latest

DevOps Meets Monitoring and Analytics

Mon, 2018-03-05 11:33

Much has been said about the role new technologies play in supporting DevOps, like automation and machine learning. My colleague Padmini Murthy wrote “DevOps Meets Next Gen Technologies”. In that post, Padmini does a great job discussing the DevOps ecosystem, partly based on a recent DevOps.com survey.

New technologies are rapidly shaping the way companies address Security and Application Performance Monitoring as well.

The same survey found 57% of companies have already adopted modern monitoring, and another 36% are planning to adopt it in the next 12 months. The major reasons are enhanced security, increased IT efficiency, and faster troubleshooting, as shown in the chart below.

Figure 1: “DevOps Meets Next Gen Technologies” by Devops.com; benefits and adoption profile for security, performance, and analytics monitoring.

Traditional IT practices would suggest application and security monitoring are oil and water: they don't mix. Those responsible for applications and those responsible for IT security think and work dramatically differently. Here, too, the landscape is changing rapidly. The rapid proliferation of mobile and web applications built on modular microservices architectures and the like means monitoring needs to be agile and automatic. At the same time, security strategies need to go beyond a good firewall, intrusion detection, and identity management.

What have emerged are commonalities between security and performance monitoring.  Both are using real-time monitoring of transactions through the entire stack.  Both are using machine learning to translate massive amounts of data into IT and security insights in real time.  Both are correlating data across an entire transaction in real time to quickly find performance or security issues.  Both are summarizing normal and abnormal behavior automatically to identify what’s important to view and what’s normal behavior.

This is what’s behind the design for Oracle Management Cloud.  It unifies all the metadata and log files in the cloud.  It normalizes the information on a big data analytics platform and applies machine learning algorithms to deliver IT Ops and Security dashboards pre-built specifically for security and performance teams with insights in real time, and automatically.

Figure 2: Oracle Management Cloud provides an integrated platform for security and performance monitoring.

Here are some lessons we’ve learned working with customers on DevOps efforts:

  1. Stop denying there is a problem. Ops teams are constantly bombarded by “false Signal” alerts.  They want better intelligence sooner about performance and security anomalies and threats. Read this Profit Magazine article to learn more about what Oracle is doing to help customers defend against ever-changing security and performance threats.
  2. Eliminate operational information silos so you eliminate finger pointing. Put your operational data (security, performance, configuration, etc.) in one place, and let today’s machine-learning-powered tools do the heavy lifting for you. You will reduce finger pointing, troubleshoot faster, and you may be able to eliminate the “war room” entirely. Watch this video to hear what one Oracle customer says about the power of machine learning.

Figure 3: Why Machine Learning is a key enabler for cloud-based monitoring.

  3. Monitor what (really) matters – your actual end-users. Over 70% of IT issues are end-user complaints. This can hinder the Ops team’s ability to respond to important issues. Look at this infographic highlighting the value of application and end-user monitoring. Figure 4 pinpoints why traditional monitoring tools miss the mark when it comes to delivering value.

Figure 4: End-user and application performance monitoring are key to a successful monitoring strategy.

  4. It’s in the logs! Logs are everywhere, but most organizations don’t use them because they are overwhelmed with the amount of data involved. Next-generation management clouds that are designed to ingest big data at enterprise-scale can cope with today’s log data volume and velocity. Check out this infographic for more details on Oracle Management Cloud’s Log Analytics service.

Figure 5: Key challenges with using logs to troubleshoot issues.

  5. Planning is an everyday activity. Leverage analytical capabilities against your unified store of operational information to answer a variety of forward-looking questions to improve security posture, application performance and resource utilization. If you’ve followed my advice in steps 1 through 4 above, you have all the data you need already available. Now it’s time to use it.


Three Quick Tips API Platform CS - Gateway Installation (Part 2)

Wed, 2018-02-28 02:00

This is Part 2 of the blog series (the first part can be accessed here). The aim of this blog post is to provide useful tips that will help with the installation of the on-premises Gateway for Oracle API Platform Cloud Services. If you want to know more about the product, you can refer here.

The following tips are based on some of the scenarios we have observed in production.

Essentially, to get past the entropy problem (low entropy can cause the gateway domain creation or startup to hang), you need to do the following on Linux (a consolidated sketch follows the list):

  • Check the current entropy count by executing:

   cat /proc/sys/kernel/random/entropy_avail

  • If the entropy is low, you can do either of the following:
      • export CONFIG_JVM_ARGS=-Djava.security.egd=file:/dev/./urandom
      • Install the rngd tool (if not present) and execute:

   rngd -r /dev/urandom -o /dev/random -b

  • You can now proceed with the gateway domain creation or domain startup.
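Putting it together, here is a minimal shell sketch using the same commands as above; a low value from the first command indicates the entropy problem.

# check available entropy
cat /proc/sys/kernel/random/entropy_avail
# option 1: point the JVM used by the gateway scripts at /dev/urandom
export CONFIG_JVM_ARGS=-Djava.security.egd=file:/dev/./urandom
# option 2: feed /dev/random from /dev/urandom using rngd
rngd -r /dev/urandom -o /dev/random -b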

 

  • It is possible to generate the gateway properties from the API Portal UI. Leverage this functionality and download the generated property file onto the on-premises machine. This will significantly reduce the effort of hand-crafting the properties file, which is critical for the gateway installation process. Please refer here for more details.

 

  • If you encounter scenarios where failures in the "configure" action look something like:

64040: Specified template does not exist or is not a file: "/d01/apipcs/app/oracle/gateway/run/build/apiplatform_gateway-services_template.jar".
64040: Provide a valid template location.
at com.oracle.cie.domain.script.jython.CommandExceptionHandler.handleException(CommandExceptionHandler.java:56)
at com.oracle.cie.domain.script.jython.WLScriptContext.handleException(WLScriptContext.java:2279)
at com.oracle.cie.domain.script.jython.WLScriptContext.addTemplate(WLScriptContext.java:793)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)    

The above is just an example, but any file-not-found kind of error during the "configure" action is an indication that the previous step (the "install" action) did not complete successfully. Refer to "gatewayInstall.log" and "main.log", as these will point to why the install had errors even if the install process appeared to complete.

So that is all for today. We will be back with more tips soon. Happy API management with the Oracle API Platform Cloud Services.


Questions on DevOps, Graal, APIs, Git? Champs Have Answers at Oracle Code Los Angeles

Fri, 2018-02-23 06:16

If you had technical questions about API design, for instance, or about date types in relational databases, or about DevOps bottlenecks, or about using Graal or Git,  you’d look for answers from someone with an abundance of relevant expertise, right? A champ in that particular topic.

As it happens, if you do indeed have questions on any of those topics, the Oracle Code event in Los Angeles on February 27 represents a unique opportunity for you to connect with a Developer Champion who can set you straight. Register now for Oracle Code Los Angeles, and put these sessions by Oracle Developer Champions on your schedule.

A Research Study Into DevOps Bottlenecks
Presented by: Baruch Sadogursky, Developer Advocate, JFrog
1:10 p.m.  - 1:55 p.m.  San Jose Room

Think DevOps is just so much hype? Guess again! “DevOps is among the none-hypish methodologies that really help,” said Developer Champion Baruch Sadogursky in a recent podcast. “It’s here to stay because it is another step toward faster and better integration between stakeholders in the delivery process.” But taking that step trips up some organizations. In this session Baruch dives deep into the results of a poll of Fortune 500 software delivery leaders to determine what’s causing the bottlenecks that are impeding their DevOps progress, and to find solutions that will set them back on the path.

Graal: How to Use the New JVM JIT Compiler in Real Life
by Christian Thalinger, Staff Software Engineer, Twitter, Inc.
2:10 p.m. - 2:55 p.m. San Francisco Room

Is Graal on your radar? It should be. It’s a new JVM JIT compiler that could become the default HotSpot JIT compiler, according to Developer Champion Christian Thalinger. But that kind of transition isn’t automatic. “One of the biggest mistakes people make when benchmarking Graal is that they assume they could use the same metrics as for C1 and C2” explains Christian. “Some people just measure overall time spent in GC and that just doesn't work.  I've seen the same being done to overall time spent for JIT compilations.  You can't do that." What can you do with Graal? Christian’s session will look at how it works, and what it can do for you.

Tackling Time Troubles - About Date Types in Relational Databases
by Bjoern Rost, Principal Consultant, The Pythian Group Inc
2:10 p.m. - 2:55 p.m. Sacramento Room

The thing about time is that it’s always passing, and there never seems to be enough of it. Things get even more complicated when it comes to dealing with time-related data in databases. While your mobile phone might easily handle leap years, time zones, or seasonal time changes, those issues can cause runtime errors, SQL code headaches, and other database problems you’d rather avoid. In this session Developer Champion Bjoern Rost will discuss best practices that will help you dodge some of the time data issues that can increase your aspirin intake. Put this session on your schedule and learn how to have an easier time when dealing with time data.

Best Practices for API Design Using Oracle APIARY
by Rolando Carrasco, Fusion Middleware Director, S&P Solutions
Leonardo Gonzalez Cruz, OFMW Architect, S&P Solutions
 3:05 p.m.  - 3:50 p.m. San Jose Room

Designing and developing APIs is an important part of modern development. But if you’re not applying good design principles, you’re headed for trouble. “We are living in an API world, and you cannot play in this game with poor design principles,” says Developer Champion Rolando Carrasco. In this session, Rolando and co-presenter Leonardo Gonzalez Cruz will define what an API is, examine what distinguishes a good API, discuss the design principles that are necessary to build stable, scalable, secure APIs, and also look at some of the available tools. Whether you’re an API producer or an API consumer, you’ll want to take in this session.

Git it! A Primer To The Best Version Control System
by Bjoern Rost, Principal Consultant, The Pythian Group Inc
Stewart Bryson, owner and co-founder, Red Pill Analytics
4:20 p.m. - 5:05 p.m.  San Francisco Room

Git, the open source version control system, already has a substantial following. But whether you count yourself among those fans, or you’re new and ready to get on board, this session by Bjoern Rost and Oracle ACE Director Stewart Bryson will walk you through setting up your own Git repository, and discuss cloning, syncing, using and merging branches, integrating with CI/CD systems, and other hot Git tips. Don’t miss this opportunity to sharpen your Git skills.

Of course, the sessions mentioned above are just 5 among 31 sessions, labs, and keynotes that are part of the overall Oracle Code Los Angeles agenda.

Don’t miss Oracle Code Los Angeles

Tuesday, February 27, 2018
7:30am - 6:00pm
The Westin Bonaventure Hotel and Suites
404 S Figueroa St.
Los Angeles, CA  90071
Register Now!

Learn about other events in the Oracle Code 2018 series
 


Podcast: DevOps in the Real World: Culture, Tools, Adoption

Tue, 2018-02-20 17:38

Among technology trends DevOps is certainly generating its share of heat. But is that heat actually driving adoption? “I’m going to give the answer everyone hates: It depends,” says Nicole Forsgren, founder and CEO of DevOps Research and Assessment LLC. “It depends on where each team is, on where the organization is. I talk to people all over the industry, and I work with organizations all over the industry, and everyone is at a very different place.”

Some of the organizations Nicole has spoken with are pushing the DevOps envelope. “They’re almost squeezing blood out of a stone, finding ways to optimize things that have been optimized at the very edge. They’re doing things that most people can’t even comprehend.” Other organizations aren't feeling it. "There’s no DevOps,” says Nicole. “DevOps is nowhere near on their radar.”

Some organizations that had figured out DevOps stumbled a bit when the word came down to move everything to the cloud, explains Shay Shmeltzer, product management director for Oracle Cloud Development tools. “A lot of them need to rethink how they’re doing stuff, because cloud actually simplifies DevOps to some degree. It makes the provisioning of environments and getting stuff up and down much easier and quicker in many cases.”

As Nicole explains, “DevOps is a technology transformation methodology that makes your move into the cloud much more sticky, much more successful, much more effective and efficient to deliver value, to realize cost-savings. You can get so much more out of the technology that you are using and leveraging, so that when you do move to the cloud, everything is so much better. It’s almost a chicken and egg thing. You need so much of it together.”

However, that value isn’t always apparent to everyone. Kelly Shortridge, product manager at SecurityScorecard, observes that some security stakeholders, “feel they don’t have a place in the DevOps movement.” Some security teams have a sense that configuration management will suffice. “Then they realize that they can’t just port existing security solutions or existing security methodologies directly into agile development processes,” explains Kelly. “You have the opportunity to start influencing change earlier in the cycle, which I think was the hype. Now we’re at the Trough of Disillusionment, where people are discovering that it’s actually very hard to integrate properly, and you can’t just rely on technology for this shift. There also has to be a cultural shift, as far as security, and how they think about their interactions with engineers.” In that context Kelly sees security teams wrestling with how to interact within the organization.

But the value of DevOps is not lost on other roles and disciplines. It depends on how you slice it, explains Leonid Igolnik, member and angel investor with Sand Hill Angels, and founding investor, advisor, and managing partner with Batchery. He observes that DevOps progress varies across different industry subsets and different disciplines, “whether it’s testing, development, or security.”

“Overall, I think we’re reaching the Slope of Enlightenment, and some of those slices are reaching the Plateau of Productivity,” Leonid says.

Alena Prokharchyk began her journey into DevOps three years ago when she started her job as principal software engineer at Rancher Labs, whose principal product targets DevOps. “That actually forced me to look deeper into DevOps culture,” she says. “Before that I didn’t realize that such problems existed to this extent. That helped me understand certain aspects of the problem. Within the company, the key for me was communication with the DevOps team. Because if I’m going to develop something for DevOps, I have to understand the problems.”

If you’re after a better understanding of challenges and opportunities DevOps represents, you’ll want to check out this podcast, featuring more insight on adoption, cultural change, tools and other DevOps aspects from this collection of experts.

The Panelists

(Listed alphabetically)

Nicole Forsgren
Founder and CEO, DevOps Research and Assessment LLC

Leonid Igolnik
Member and Angel Investor, Sand Hill Angels
Founding Investor, Advisor, Managing Partner, Batchery

Alena Prokharchyk
Principal Software Engineer, Rancher Labs

Baruch Sadogursky
Developer Advocate, JFrog

Shay Shmeltzer
Director of Product Management, Oracle Cloud Development Tools

Kelly Shortridge
Product Manager, SecurityScorecard

Additional Resources Coming Soon
  • Combating Complexity
    An article in the September 2017 edition of the Atlantic warned of The Coming Software Apocalypse. Oracle's Chris Newcombe was interviewed for that article. In this podcast Chris joins Chris Richardson, Adam Bien, and Lucas Jellema to discuss heading off catastrophic software failures.
  • AI Beyond Chatbots
    How is Artificial Intelligence being applied to modern applications? What are the options and capabilities? What patterns are emerging in the application of AI? A panel of experts provides the answers to these and other questions.
Subscribe

Never miss an episode! The Oracle Developer Community Podcast is available via:


Oracle Code is back – Bigger and Better!

Fri, 2018-02-16 16:24

2018 is yet another great year for developers! Oracle’s awesome global developer conference series, Oracle Code, is back – and it’s bigger and better!

In 2017 Oracle ran the first series of Oracle Code developer conferences. In over 20 cities across the globe the series attracted more than 10,000 developers from all over the world, providing them with the opportunity to learn new skills, network with peers and take home some great memories. Following that huge success, Oracle is about to run another 14 events across the globe, kicking off in late February in Los Angeles. The great thing about Oracle Code: attendance and speaking at the conferences are completely free of charge, showing Oracle holding true to its commitment to the developer communities out there. Across four continents you will get to hear about everything that is hot in the industry: Blockchain, Containers, Microservices, API Design, Machine Learning, AI, Mobile, Chatbots, Databases, Low Code Development, trendy programming languages, CI/CD, DevOps and much, much more, all right at the center of Oracle Code.

Throughout the one-day events, which provide space for 500 people, developers can share their experience, participate in hands-on labs, talk to subject matter experts and, most importantly, have a lot of fun in the Oracle Code Lounge.

IoT Cloud Brewed Beer

Got a few minutes to try the IoT Cloud Brewed Beer from a local micro brewery? Extend manufacturing processes and logistics operations quickly using data from connected devices. Tech behind the brew: IoT Production Monitoring, IoT Asset Monitoring, Big Data, Event Hub, Oracle JET.


3D Builder Playground

Create your own sculptures and furniture with the 3D printer and help complete the furniture created using Java constructive geometry library. The Oracle technology used is Application Container Cloud running Visual IDE and Java SE running JSCG library.

Oracle Zip Labs Challenge

Want some bragging rights and to win prizes at the same time? Sign up for a 15-minute lab on Oracle Cloud content and see your name on the leaderboard as the person to beat in Oracle Zip Labs Challenge.

IoT Workshop

Interact and exchange ideas with other attendees at the IoT Workshop spaces. Get your own Wi-Fi microcontroller and connect to Oracle IoT Cloud Service. Oracle Developer Community is partnering with AppsLab and the Oracle Applications Cloud User Experience emerging technologies team to make these workshops happen.

Robots Rule with Cloud Chatbot Robot

Ask NAO the robot to do Tai Chi, or ask it “who brewed the beers?” So how does NAO do what it does? It uses the Intelligent Bot API on Oracle Mobile Cloud Service to understand your command and then responds by speaking back to you.

Dev Live

The Oracle Code crew also thought of the folks who aren’t lucky enough to participate in Oracle Code in person: Dev Live features live interviews happening at Oracle Code that are streamed online across the globe, so that everyone can watch developers and community members share their experiences.

Register NOW!

Register now for an Oracle Code event near you at: https://developer.oracle.com/code

Have something interesting that you did and want to share it with the world? Submit a proposal in the Call for Papers at: https://developer.oracle.com/code/cfp

See you next at Oracle Code!

Announcing Packer Builder for Oracle Cloud Infrastructure Classic

Wed, 2018-02-14 10:30

HashiCorp Packer 1.2.0 adds native support for building images on Oracle Cloud Infrastructure Classic.

Packer is an open source tool for creating machine images across multiple platforms from a single source configuration. With the new oracle-classic builder, Packer can now build new application images directly on Oracle Classic Compute, similar to the oracle-oci builder. New images can be created from an Oracle-provided base OS image, an existing private image, or an image that has been installed from the Oracle Cloud Marketplace.

Note: Packer can also create Oracle Cloud Infrastructure Classic compatible machine images using the VirtualBox builder, and this approach remains useful when building new base OS images from ISOs; see Creating Oracle Compute Cloud Virtual Machine Images using Packer.

oracle-classic Builder Example

This example creates a new image with Redis installed, using an existing Ubuntu image as the base OS.

Create a Packer configuration file named redis.json:
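
The sample JSON isn’t reproduced in this post, so here is a minimal sketch of what redis.json might look like. Every credential, endpoint, shape, and image-list value below is a placeholder for your own Compute Classic account, and the exact field names should be double-checked against the Packer documentation for the oracle-classic builder:

# write a minimal redis.json; replace every value below with your own account details
cat > redis.json <<'EOF'
{
  "builders": [{
    "type": "oracle-classic",
    "username": "your.user@example.com",
    "password": "YOUR_PASSWORD",
    "identity_domain": "YOUR_IDENTITY_DOMAIN",
    "api_endpoint": "https://api-z00.compute.us0.oraclecloud.com/",
    "source_image_list": "/oracle/public/YOUR_UBUNTU_IMAGE_LIST",
    "image_name": "redis-demo",
    "dest_image_list": "redis-demo",
    "shape": "oc3",
    "ssh_username": "ubuntu"
  }],
  "provisioners": [{
    "type": "shell",
    "inline": [
      "sudo apt-get update",
      "sudo apt-get install -y redis-server"
    ]
  }]
}
EOF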

Now run Packer to build the image:
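
The build itself is just the standard Packer invocation; validating the template first is optional but cheap:

# check the template for syntax and field errors, then build the image
packer validate redis.json
packer build redis.json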

After Packer completes, the new image is available in the Compute Classic console for launching new instances.

See also

For building Oracle Cloud Infrastructure images see:

Three Quick Tips API Platform CS - Gateway Installation (Part 1)

Tue, 2018-02-13 16:00

This blog post assumes some prior knowledge of API Platform Cloud Service and pertains to the on-premises gateway installation steps. Here we list three useful tips (applicable to 18.1.3+), arranged in no particular order:

  • Before installing the gateway, make sure you have the correct values for "listenIpAddress" and "publishAddress". This can be done by working through the following checklist (Linux only):
    • Does the command "hostname -f" return a valid value?
    • Does the command "ifconfig" list the IP addresses properly?
    • Do you have additional firewall/network policies that may prevent communication with the management tier?
    • Do you authoritatively know the internal and public IP addresses to be used for the gateway node?

            If you do not know the answers to any of these questions, please contact your network administrator.

           If you see issues with the gateway server not starting up properly, incorrect values of "listenIpAddress" and "publishAddress" could be the cause.

  • Before running the "creategateway" action (or any other action involving "creategateway", such as "create-join"), make sure that the management tier is accessible. You can use something like:
    • wget "<http|https>://<management_portal_host>:<management_portal_port>/apiplatform"
    • curl "<http|https>://<management_portal_host>:<management_portal_port>/apiplatform"

           If the above steps fail, then "creategateway" will also not work, so the questions to ask are:

  1. Do we need a proxy?
  2. If we have already specified a proxy, is it the correct proxy?
  3. If we do need a proxy, have we set the "managementServiceConnectionProxy" property in gateway-props.json?

Moreover, if proxies are applicable, it is better to set http_proxy/https_proxy to the correct proxy as well.
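
As a rough illustration (the proxy and management portal values below are placeholders, not real endpoints), setting the proxy variables and re-running the connectivity check might look like this:

# placeholder proxy; replace with the proxy your network administrator provides
export http_proxy=http://proxy.example.com:80
export https_proxy=http://proxy.example.com:80

# placeholder management-tier host and port; substitute your own values
MGMT_HOST=management.example.com
MGMT_PORT=443
curl -v "https://${MGMT_HOST}:${MGMT_PORT}/apiplatform"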

  • Know your log locations (a quick example of tailing these logs follows this list):
    • For troubleshooting "install" or "configure" actions, refer to the <install_dir>/logs directory.
    • For troubleshooting "start" or "stop" actions, refer to <install_dir>/domain/<gateway_name>/(start*.out|stop*.out).
    • For troubleshooting "create-join"/"join" actions, refer to the <install_dir>/logs directory.
    • To troubleshoot issues post installation (i.e., after the physical node has joined the gateway), refer to the <install_dir>/domain/<gateway_name>/apics/logs directory.
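
For example, tailing the install-time and runtime logs (the directory placeholders match the list above; the *.log glob is an assumption, so adjust it as needed):

# substitute your actual install directory and gateway name
INSTALL_DIR=/u01/gateway
GATEWAY_NAME=mygateway

# install/configure and create-join/join troubleshooting
tail -f "${INSTALL_DIR}"/logs/*.log

# post-installation (runtime) troubleshooting
tail -f "${INSTALL_DIR}/domain/${GATEWAY_NAME}"/apics/logs/*.log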

We will try to post more tips in the coming weeks, so stay tuned and happy API Management.            

Announcing the Oracle Vagrant boxes GitHub repository

Mon, 2018-02-12 13:23

Today we are pleased to announce the launch of a new GitHub repository to build Oracle software Vagrant boxes: https://github.com/oracle/vagrant-boxes

Vagrant provides an easy and fully automated way of setting up a developer environment. In conjunction with Oracle’s VirtualBox, Vagrant is a powerful tool for creating a sandbox environment inside a virtual machine. With this announcement, we introduce this powerful automation to users worldwide as a streamlined way for creating virtual machines with Oracle software fully configured and ready to go inside of them. This is yet another in a series of steps for making the lives of developers easier and more productive.

Getting started is quick and easy! If you have not done so yet, you will need to download and install the following:

  • Oracle VM VirtualBox
  • Vagrant
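
For example, on macOS with Homebrew already installed (just one of several options; the installers from virtualbox.org and vagrantup.com work equally well):

# install Oracle VM VirtualBox and Vagrant via Homebrew casks
brew cask install virtualbox
brew cask install vagrant

# confirm both tools are on the PATH
VBoxManage --version
vagrant --version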

Once you have installed those two components you can go ahead and clone/download the GitHub repository and create your own Vagrant boxes. Getting an Oracle Linux virtual machine is as simple as follows:

  1. Clone (or download) the GitHub repository:

gvenzl-mac:vagrant gvenzl$ git clone https://github.com/oracle/vagrant-boxes
Cloning into 'vagrant-boxes'...
remote: Counting objects: 74, done.
remote: Total 74 (delta 0), reused 0 (delta 0), pack-reused 74
Unpacking objects: 100% (74/74), done.

  2. Go into the OracleLinux sub folder:

gvenzl-mac:vagrant gvenzl$ cd vagrant-boxes/OracleLinux/

  3. Type “vagrant up” and wait for your VM to be provisioned:

gvenzl-mac:OracleLinux gvenzl$ vagrant up
Bringing machine 'default' up with 'virtualbox' provider...
==> default: Box 'http://yum.oracle.com/boxes/oraclelinux/latest/ol7-latest.box' could not be found. Attempting to find and install...
    default: Box Provider: virtualbox
    default: Box Version: >= 0
==> default: Box file was not detected as metadata. Adding it directly...
==> default: Adding box 'http://yum.oracle.com/boxes/oraclelinux/latest/ol7-latest.box' (v0) for provider: virtualbox
    default: Downloading: http://yum.oracle.com/boxes/oraclelinux/latest/ol7-latest.box
==> default: Successfully added box 'http://yum.oracle.com/boxes/oraclelinux/latest/ol7-latest.box' (v0) for 'virtualbox'!
==> default: Importing base box 'http://yum.oracle.com/boxes/oraclelinux/latest/ol7-latest.box'...
==> default: Matching MAC address for NAT networking...
==> default: Setting the name of the VM: ol7-vagrant
==> default: Clearing any previously set network interfaces...
==> default: Preparing network interfaces based on configuration...
    default: Adapter 1: nat
==> default: Forwarding ports...
    default: 22 (guest) => 2220 (host) (adapter 1)
    default: 22 (guest) => 2222 (host) (adapter 1)
==> default: Running 'pre-boot' VM customizations...
==> default: Booting VM...
==> default: Waiting for machine to boot. This may take a few minutes...
    default: SSH address: 127.0.0.1:2222
    default: SSH username: vagrant
    default: SSH auth method: private key
    default:
    default: Vagrant insecure key detected. Vagrant will automatically replace
    default: this with a newly generated keypair for better security.
    default:
    default: Inserting generated public key within guest...
    default: Removing insecure key from the guest if it's present...
    default: Key inserted! Disconnecting and reconnecting using new SSH key...
==> default: Machine booted and ready!
...
...
...
==> default: INSTALLER: Locale set
==> default: INSTALLER: Installation complete, Oracle Linux ready to use!

Once the machine is provisioned you are all set and ready to go. You can now simply ssh into the virtual machine by typing “vagrant ssh” and perform whatever tasks you would like to do. Once you are done, just type “exit”, as you would in any other ssh session:

gvenzl-mac:OracleLinux gvenzl$ vagrant ssh

Welcome to Oracle Linux Server release 7.4 (GNU/Linux 4.1.12-112.14.13.el7uek.x86_64)

The Oracle Linux End-User License Agreement can be viewed here:

* /usr/share/eula/eula.en_US

For additional packages, updates, documentation and community help, see:

* http://yum.oracle.com/

[vagrant@ol7-vagrant ~]$ uname -a
Linux ol7-vagrant 4.1.12-112.14.13.el7uek.x86_64 #2 SMP Thu Jan 18 11:38:29 PST 2018 x86_64 x86_64 x86_64 GNU/Linux
[vagrant@ol7-vagrant ~]$ exit
logout
Connection to 127.0.0.1 closed.
gvenzl-mac:OracleLinux gvenzl$

You can stop the virtual machine and reboot it any time by typing “vagrant halt” and “vagrant up”:

gvenzl-mac:OracleLinux gvenzl$ vagrant halt
==> default: Attempting graceful shutdown of VM...


gvenzl-mac:OracleLinux gvenzl$ vagrant up
Bringing machine 'default' up with 'virtualbox' provider...
==> default: Clearing any previously set forwarded ports...
==> default: Clearing any previously set network interfaces...
==> default: Preparing network interfaces based on configuration...
default: Adapter 1: nat
==> default: Forwarding ports...
default: 22 (guest) => 2220 (host) (adapter 1)
default: 22 (guest) => 2222 (host) (adapter 1)
==> default: Running 'pre-boot' VM customizations...
==> default: Booting VM...
==> default: Waiting for machine to boot. This may take a few minutes...
default: SSH address: 127.0.0.1:2222
default: SSH username: vagrant
default: SSH auth method: private key
==> default: Machine booted and ready!
[default] GuestAdditions 5.1.30 running --- OK.
==> default: Checking for guest additions in VM...
==> default: Setting hostname...
==> default: Mounting shared folders...
default: /vagrant => /Users/gvenzl/Downloads/vagrant/vagrant-boxes/OracleLinux
==> default: Machine already provisioned. Run `vagrant provision` or use the `--provision`
==> default: flag to force provisioning. Provisioners marked to run always will still run.
gvenzl-mac:OracleLinux gvenzl$

Last, if you would like to remove the VM altogether from your machine, you can do so by typing “vagrant destroy”. This will remove the entire VM and everything within it, so be careful with this command:

gvenzl-mac:OracleLinux gvenzl$ vagrant destroy
default: Are you sure you want to destroy the 'default' VM? [y/N] y
==> default: Forcing shutdown of VM...
==> default: Destroying VM and associated drives...

Going forward, Oracle will bring more and more Vagrant configuration files to this GitHub repository, which is driven in a fully open source fashion. Please provide comments and enhancement requests via the GitHub issues.

Also check out this cool video by Sergio Leunissen showing you how to set up a Docker sandbox using Oracle VM VirtualBox and Vagrant:

6 Ways Automated Security Becomes A Developer’s Ally

Wed, 2018-02-07 09:46

In a recent InfoWorld article, Siddhartha Agarwal, VP of Product Management at Oracle, outlined his top 10 predictions impacting application developers in 2018. In this blog, we’ll take a closer look at one of those predictions and how a cloud access security broker (CASB) service can help with it.

#10. Highly automated security and compliance efforts become a new ally of developers

Companies are increasingly adopting DevOps methodologies to accelerate their app development lifecycles in the cloud. Unfortunately, the common perception is that accelerating application lifecycles comes at the expense of security. That’s because, traditionally, security used to be a discrete step in application development lifecycles, taking weeks or months to certify an application for production use. There is no way such a delay can be incorporated into an agile CI/CD methodology. Security needs to be a continuous process linked to every step of DevOps.

Fortunately, artificial intelligence and machine learning have matured to the point that they can be used to automate much of application security. Developers can ensure that their applications and data are continuously monitored using a CASB service, and any threats, compliance violations, or security incidents are automatically detected and remediated. This lets app developers maintain their development velocity, while conforming to security and compliance standards. Let’s look at some key areas that can be protected with continuous visibility and monitoring with Oracle CASB. 

1. Enforcing Strong Application Configuration and Micro Segmentation

CASB can monitor application configurations to detect any changes and revert those automatically to the “golden” configuration, as well as alert relevant administrators. This enforcement may include configurations for network segmentation, DNS resolution, usage of secure or insecure network ports, and encryption settings for folders containing application data.

2. Enforcing Strong Access Control for Administrators

CASB can continuously monitor and enforce strong access control policies for administrators, including multi-factor authentication, strong password policies, and SSH key rotation. Any changes to these can be reverted automatically and alerted to relevant administrators.

3. Monitoring Admin Activity for Out-of-the-Ordinary Patterns

CASB uses machine learning to automatically learn “normal” or regular patterns of administrative activity, such as the login/logout times of administrators, locations/IP addresses where they typically login from, and types of changes they usually perform to the application. It can then send an alert on any deviations from these normal patterns, such as an admin logging in from a location, IP address, or device type that they’ve never used before. In addition, customers can also configure CASB to look for admin changes to specific areas, such as lists of authorized users or groups, starting or stopping of app instances, or changes to encryption settings of folders. For example, if an infiltrator attempts to use valuable compute or storage resources for malicious usage, CASB will immediately raise an alert.

4. Enforcing Data Security and Compliance

CASB can continuously scan application data to detect any files that violate the company’s compliance policy. For example, it can be configured to look for sensitive or confidential information, such as credit card or Social Security numbers. If found, CASB can automatically alert administrators and take remedial action that prevents unauthorized access of the data.

5. Monitoring User Activity for Out-of-the-Ordinary Patterns

CASB uses machine learning to automatically detect unauthorized or malicious insider usage of the application. Similar to monitoring administrator activity, CASB uses machine learning to automatically learn normal patterns of regular user activity. Any deviations from these, such as users logging in from a location that they’ve never logged in from before, can automatically be alerted as being suspicious. On detecting suspicious activity, application access for the user can automatically be downgraded to prevent downloads, as an example, until the user has been able to prove their identity with further authentication.

6. Monitoring for Misuse of Escalated Privileges

Oftentimes, developers gain access to production resources for troubleshooting purposes such as debugging, bug fixing, or other maintenance. In many cases, those privileges are never revoked, thereby leaving those resources fully accessible to those developers even after the issues are resolved. CASB can help monitor resources in production so that any access or modification is alerted to the respective administrators, who can then respond accordingly. CASB can also help prevent or revert changes to the original state, thereby preventing unauthorized changes to production resources by such users.

Oracle CASB offers all of the capabilities listed above, and it can also be integrated with other enterprise systems, such as SIEM, Identity-as-a-Service (IDaaS), or IT Service Management applications. This ensures that companies can tightly integrate CASB into their existing Security Operations Center (SOC) workflows and enable CASB to raise tickets automatically for remediation.

Platform choice matters

Oracle has spent the last several years building and assembling the set of security and management services in the Oracle Cloud that together enable customers to build the Identity-centric Security Operations Center (Identity SOC). The Identity SOC platform leverages purpose-built machine learning against the full breadth of operational and security telemetry — including activity and configuration information as well as identity and asset context — to provide real-time threat detection across heterogeneous, hybrid cloud environments.  When potential or active threats are detected, automated remediation can be invoked to eliminate those threats.

 

Podcast: Women in Technology: Motivation and Momentum

Tue, 2018-02-06 10:39

According to the National Center for Women and Information Technology (NCWIT), while 57% of professional occupations in the US were held by women in 2016, women held only 26% of professional computing occupations. Correcting that imbalance is the right thing to do, of course. But there’s another dimension to the issue that raises the stakes for getting more women into IT jobs.

“We have 80,000 graduates every year coming out of college with computer science degrees,” says Kellyn Pot’Vin-Gorman, technical intelligence manager for the office of CTO at Delphix. But US colleges and universities can’t crank out computer science grads fast enough to meet demand. “Over a million technical jobs will be here by 2020, and we’ve got nobody to fill them,” Kellyn says.

Attracting more women into software development and other technical fields will help to fill the IT jobs that will otherwise go wanting. But, perhaps due to lingering gender bias, or simple oversight, effective communication of the opportunities doesn’t always happen. “No one told me that I could do this as a career,” says Michelle Malcher, a security architect at Extreme Scale Solutions in Chicago. “No one said, ‘you can have fun with code.’”

Now that Michelle is having fun with code, she, like Kellyn, puts significant time and effort into getting the word out about the opportunities and career potential for young women. But men also have a role in that mission. “Men need to be part of the conversation. It can’t just be women talking about women's issues,” says Natalie Delemar, a senior consultant with Ernst and Young and an active supporter of women in technology. “We need to have men at the table so that they understand the importance of these issues.”

Women and men can engage in mentoring and sponsorship activities that are important in getting more women into IT roles. Heli Helskyaho, CEO of Miracle Finland and a PhD student at the University of Helsinki, is one of two mentors recently elected by computer science students at that institution. “The faculty just decided that it's time to have mentorship in the university the first time after all these years.”

But while mentoring and sponsorship are important, there are key differences. And, as Natalie observes, “women in the workplace are actually over mentored and under sponsored.”

Natalie explains that while mentoring typically focuses on career guidance and advice on educational matters, “sponsorship is when somebody actually uses their political capital to put you into positions of power to give you experiences to get ahead.”

Getting ahead is what the latest Oracle Developer Community podcast is all about, as Kellyn Pot'Vin-Gorman, Michelle Malcher, Natalie Delemar, and Heli Helskyaho, along with panel organizer and moderator Laura Ramsey, share insight on what motivated them in their IT careers, and how they lend their expertise and energy to driving momentum in the effort to draw more women into technology.

This panel discussion took place at Oracle Openworld in San Francisco on September 18, 2016.

The Panelists

(Listed alphabetically)

Natalie Delemar
Senior Consultant, Ernst and Young
President, ODTUG Board of Directors

Heli Helskyaho
CEO, Miracle Finland
Oracle ACE Director
Ambassador, EMEA Oracle Usergroups Community

Michelle Malcher
Security Architect, Extreme Scale Solutions
Oracle ACE Director

Kellyn Pot'Vin-Gorman
Technical Intelligence Manager, Office of CTO, Delphix
President, Board of Directors, Denver SQL Server User Group

Laura Ramsey
Manager, Database Technology and Developer Communities
Oracle America

Additional Resources Coming Soon
  • DevOps: Can This Marriage be Saved? (Feb 21)
    What is the biggest threat to successful DevOps? What’s the most common DevOps mistake? Experts Nicole Forsgren, Leonid Igolnik, Alena Prokharchyk, Baruch Sadogursky, Shay Shmeltzer, and Kelly Shortridge discuss what it takes to make DevOps work.
  • Combating Complexity
    An article in the September 2017 edition of the Atlantic warned of The Coming Software Apocalypse. Oracle's Chris Newcombe was interviewed for that article. In this podcast Chris joins Chris Richardson, Adam Bien, and Lucas Jellema to discuss heading off catastrophic software failures.
Subscribe

Never miss an episode! The Oracle Developer Community Podcast is available via:


Announcing the Oracle WebLogic Server Kubernetes Operator

Wed, 2018-01-31 08:00

We are very excited to announce the Oracle WebLogic Server Kubernetes Operator, which is available today as a Technology Preview and which is delivered in open source at https://oracle.github.io/weblogic-kubernetes-operator.  The operator can manage any number of WebLogic domains running in a Kubernetes environment.  It provides a mechanism to create domains, automates domain startup, allows scaling WebLogic clusters up and down either manually (on-demand) or through integration with the WebLogic Diagnostics Framework or Prometheus, manages load balancing for web applications deployed in WebLogic clusters, and provides integration with ElasticSearch, logstash and Kibana.

The operator uses the standard Oracle WebLogic Server 12.2.1.3 Docker image, which can be found in the Docker Store or in the Oracle Container Registry.  It treats this image as immutable, and all of the state is persisted in a Kubernetes persistent volume.  This allows us to treat all of the pods as throwaway and replaceable, and it completely eliminates the need to manage state written into Docker containers at runtime (because there is none).

The diagram below gives a high level overview of the layout of a domain in Kubernetes when using the operator:

The operator can expose the WebLogic Server Administration Console to external users (if desired), and can also allow external T3 access; for example for WLST.  Domains can talk to each other, allowing distributed transactions, and so on. All of the pods are configured with Kubernetes liveness and readiness probes, so that Kubernetes can automatically restart failing pods, and the load balancer configuration can include only those Managed Servers in the cluster that are actually ready to service user requests.

We have a lot of documentation available on the project pages on GitHub including details about our design philosophy and architecture, as well as instructions on how to use the operator, video demonstrations of the operator in action, and a developer page for people who are interested in contributing to the operator.

We hope you take the opportunity to play with the Technology Preview and we look forward to getting your feedback.

Getting Started

The Oracle WebLogic Server Kubernetes Operator has the following requirements:

  • Kubernetes 1.7.5+, 1.8.0+ (check with kubectl version)
  • Flannel networking v0.9.1-amd64 (check with docker images | grep flannel)
  • Docker 17.03.1.ce (check with docker version)
  • Oracle WebLogic Server 12.2.1.3.0

For more details on the certification and support statement of WebLogic Server on Kubernetes, refer to My Oracle Support Doc Id 2349228.1.

A series of video demonstrations of the operator are available here:

The overall process of installing and configuring the operator and using it to manage WebLogic domains consists of the following steps. The provided scripts will perform most of these steps, but some must be performed manually (an illustrative example of the secret-creation steps appears after this list):

  • Registering for access to the Oracle Container Registry
  • Setting up secrets to access the Oracle Container Registry
  • Customizing the operator parameters file
  • Deploying the operator to a Kubernetes cluster
  • Setting up secrets for the Administration Server credentials
  • Creating a persistent volume for a WebLogic domain
  • Customizing the domain parameters file
  • Creating a WebLogic domain

Complete up-to-date instructions are available at https://github.com/oracle/weblogic-kubernetes-operator/blob/master/site/installation.md or read on for an abbreviated version:

Build the Docker image for the operator

To run the operator in a Kubernetes cluster, you need to build the Docker image and then deploy it to your cluster.

First run the build using this command:

mvn clean install

Then create the Docker image as follows:

docker build -t weblogic-kubernetes-operator:developer --no-cache=true .

We recommend that you use a tag other than latest to make it easy to distinguish your image. In the example above, the tag could be the GitHub ID of the developer.

Next, upload your image to your Kubernetes server as follows:

# on your build machine
docker save weblogic-kubernetes-operator:developer > operator.tar
scp operator.tar YOUR_USER@YOUR_SERVER:/some/path/operator.tar

# on the Kubernetes server
docker load < /some/path/operator.tar

Verify that you have the right image by running docker images | grep weblogic-kubernetes-operator on both machines and comparing the image IDs.

We will be publishing the image in the Oracle Container Registry, and these instructions will be updated when it is available there. After it is published, you will not need to build the image yourself; you will have the option to pull it from the registry instead.

Customizing the operator parameters file

The operator is deployed with the provided installation script, create-weblogic-operator.sh. The input to this script is the file create-operator-inputs.yaml, which needs to be updated to reflect the target environment.

The following parameters must be provided in the input file:

Configuration parameters for the operator:

  • externalOperatorCert – A base64-encoded string containing the X.509 certificate that the operator will present to clients accessing its REST endpoints. This value is only used when externalRestOption is set to custom-cert. (no default)
  • externalOperatorKey – A base64-encoded string containing the private key corresponding to externalOperatorCert. This value is only used when externalRestOption is set to custom-cert. (no default)
  • externalRestOption – Controls how the operator’s external REST endpoint is exposed and secured. Allowed values:
    - none – the external REST server is disabled
    - self-signed-cert – the operator will use a self-signed certificate for its REST server. If this value is specified, then the externalSans parameter must also be set.
    - custom-cert – certificates created and signed by some other means are supplied. If this value is specified, then externalOperatorCert and externalOperatorKey must also be provided.
    Default: none
  • externalSans – A comma-separated list of Subject Alternative Names that should be included in the X.509 certificate, for example: DNS:myhost,DNS:localhost,IP:127.0.0.1. (no default)
  • namespace – The Kubernetes namespace that the operator will be deployed in. It is recommended that a namespace be created for the operator rather than using the default namespace. Default: weblogic-operator
  • targetNamespaces – A list of the Kubernetes namespaces that may contain WebLogic domains that the operator will manage. The operator will not take any action against a domain that is in a namespace not listed here. Default: default
  • remoteDebugNodePort – If debugging is enabled, the operator will start a Java remote debug server on this port and will suspend execution until a remote debugger has attached. Default: 30999
  • restHttpsNodePort – The NodePort number that should be allocated for the operator REST server to listen for HTTPS requests on. Default: 31001
  • serviceAccount – The name of the service account that the operator will use to make requests to the Kubernetes API server. Default: weblogic-operator
  • loadBalancer – The load balancer that is installed to provide load balancing for WebLogic clusters. Allowed values:
    - none – do not configure a load balancer
    - traefik – configure the Traefik Ingress provider
    - nginx – reserved for future use
    - ohs – reserved for future use
    Default: traefik
  • loadBalancerWebPort – The NodePort for the load balancer to accept user traffic. Default: 30305
  • enableELKintegration – Determines whether the ELK integration will be enabled. If set to true, then ElasticSearch, Logstash and Kibana will be installed, and Logstash will be configured to export the operator’s logs to ElasticSearch. Default: false

Decide which REST configuration to use

The operator provides three REST certificate options:

  • none will disable the REST server.
  • self-signed-cert will generate self-signed certificates.
  • custom-cert provides a mechanism to provide certificates that were created and signed by some other means.
Decide which optional features to enable

The operator provides some optional features that can be enabled in the configuration file.

Load Balancing

The operator can install the Traefik Ingress provider to provide load balancing for web applications running in WebLogic clusters. If enabled, an instance of Traefik and an Ingress will be created for each WebLogic cluster. Additional configuration is performed when creating the domain.
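
A quick way to see what was created for a domain (the namespace here is an assumption; use the namespace that actually holds your WebLogic domain):

# Ingress resources created for the WebLogic clusters in the domain namespace
kubectl get ingresses -n default

# the Traefik pods and the services backing them
kubectl get pods,services --all-namespaces | grep -i traefik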

Note that the Technology Preview release provides only basic load balancing:

  • Only HTTP(S) is supported. Other protocols are not supported.
  • A root path rule is created for each cluster. Rules based on the DNS name, or on URL paths other than ‘/’, are not supported.
  • No non-default configuration of the load balancer is performed in this release. The default configuration gives round robin routing and WebLogic Server will provide cookie-based session affinity.

Note that Ingresses are not created for servers that are not part of a WebLogic cluster, including the Administration Server. Such servers are exposed externally using NodePort services.
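
To look up the NodePort on which, say, the Administration Server is exposed, something like the following works; the service name shown is illustrative, since the operator derives it from your domain UID and server name:

# print the NodePort of the Administration Server service (name and namespace are placeholders)
kubectl get service domain1-admin-server -n default \
  -o jsonpath='{.spec.ports[0].nodePort}{"\n"}'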

Log integration with ELK

The operator can install the ELK stack and publish its logs into ELK. If enabled, ElasticSearch and Kibana will be installed in the default namespace, and a logstash pod will be created in the operator’s namespace. Logstash will be configured to publish the operator’s logs into Elasticsearch, and the log data will be available for visualization and analysis in Kibana.

To enable the ELK integration, set the enableELKintegration option to true.
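
Once the operator has been deployed with that flag set, a simple check that the ELK pods came up might look like this, assuming the default namespaces described above:

# Elasticsearch and Kibana are installed in the default namespace
kubectl get pods -n default | grep -Ei 'elasticsearch|kibana'

# Logstash runs in the operator's namespace
kubectl get pods -n weblogic-operator | grep -i logstash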

Deploying the operator to a Kubernetes cluster

To deploy the operator, run the deployment script and give it the location of your inputs file:

./create-weblogic-operator.sh -i /path/to/create-operator-inputs.yaml

What the script does

The script will carry out the following actions:

  • A set of Kubernetes YAML files will be created from the inputs provided.
  • A namespace will be created for the operator.
  • A service account will be created in that namespace.
  • If ELK integration was enabled, a persistent volume for ELK will be created.
  • A set of RBAC roles and bindings will be created.
  • The operator will be deployed.
  • If requested, the load balancer will be deployed.
  • If requested, ELK will be deployed and logstash will be configured for the operator’s logs.

The script will validate each action before it proceeds.

This will deploy the operator in your Kubernetes cluster.  Please refer to the documentation for next steps, including using the REST services, creating a WebLogic domain, starting a domain, and so on.
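
A minimal sanity check after the script finishes, assuming you kept the default weblogic-operator namespace (the deployment name used below is an assumption):

# the operator pod should reach the Running state
kubectl get pods -n weblogic-operator

# tail the operator log to confirm a clean start
kubectl logs -n weblogic-operator deployment/weblogic-operator --tail=50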

Three Advances That Will Finally Make Software Self-Healing, Self-Tuning, and Self-Managing

Tue, 2018-01-23 13:37

Ever heard the adage that the operating cost of a given application is often 2x the app’s acquisition cost?  Or how about that bugs cost 100x more to fix in the production phase than during the requirements phase? Or that developers in DevOps environments are often spending over half their time tweaking the “Ops” portion, like CI/CD, instead of writing code?

Removing effort from the operating portion of the equation has long been a goal of IT, though actually doing so is difficult in traditional environments where visibility to the edge (say, end-user monitoring and server-side instrumentation) is low and where remediation (say, optimizing configuration parameters) is manual.  But change is on the horizon, thanks to three integrated capabilities provided by cloud platforms that can lead to autonomous, self-healing systems.  Those three capabilities are automatic instrumentation, machine learning-powered analytics, and integrated remediation.

Automatic Instrumentation: Closing the Visibility Gap

Cloud software platform providers like Oracle are working hard to make visibility and instrumentation simply a feature of the underlying platform, rather than requiring a separate effort.  What this means for developers is that as you write and deploy code, the platform automatically generates and delivers relevant activity and environment telemetry. 

For example, PaaS services such as Java Cloud Service, SOA Cloud Service, and Database Cloud Service automatically expose detailed telemetry both about their environments (instance-level telemetry) as well as the artifacts deployed in those environments (code-level telemetry) to management services such as Oracle Management Cloud, without any extra work by developers or operations personnel.

By generating and exposing instrumentation automatically, we can close the visibility gap that often exists today between developers (who know what they coded, but not necessarily about environment dependencies) and operations (who know about environment dependencies, but not about what was coded). 

Image 1:  2 views of automated telemetry, generated by Java Cloud Service and Integration Cloud Service and exposed in Oracle Management Cloud.

Machine Learning-Based Analytics

Having the relevant telemetry is a required first step, but understanding it is no easy task.  We’re talking about terabytes of logs, tens of thousands of activity and configuration metrics, in an environment where neither developers nor operators understand the dependencies among components. After all, we’ve happily given up a level of control in cloud in exchange for the ability to iterate faster. 

Fortunately, we no longer have to rely on our human faculties to deal with this data overload – we can instead rely on purpose-built machine learning (ML).  ML loves data.  The more the better. And ML that is designed specifically for the operations problem is able to intuit pretty interesting things out of this data, such as how applications are built (topology, dependencies) and how they should behave (baselining, anomaly detection, forecasting) – without any effort from developers. 

So, instead of a human having to program a monitoring regime to tell how something ought to work, the monitoring regime tells the humans how the application actually works, how it should work in the future, and why it may not be working as it should.  In this scenario, root-cause analysis becomes automated, capacity-planning becomes continuous, dependency-mapping just happens, and alerts/events only bubble up when they actually require attention.

Oracle Management Cloud’s ML portfolio provides topology-aware diagnostics that can forecast impending problems or identify root-cause of current problems without any operator knowledge of the systems being managed. 

Image 2:  Machine learning-based topology views generated automatically by Oracle Management Cloud.

Automated Remediation: The Final Step

So now that we have all the data we need to understand what’s going on, and have the ability to analyze it in real-time using machine learning to understand why and what we should do about it, we can move toward the final step:  taking action. 

Automated remediation is the most visible aspect of self-healing systems, but in a sense it’s also the oldest.  API-based and script-based automation options have existed for most technical platforms for a long time and are wildly under-utilized.  The problem in most IT organizations is not whether they can automate something; it’s whether they should run that particular automation at a given time.  Sure, I can spin up a new VM, or clone the microservice – but should I?  Will it solve the problem or prevent another problem?

Put simply, for automation to be more heavily-utilized, we need to be better at answering the “should I?” question.  Fortunately, since we’ve now taken care of having better telemetry data and the ability to analyze it, we can link our analytic results directly to automation, at the platform level.  For example, Oracle Management Cloud can automatically invoke automation regimes such as Chef and Puppet, or Cloud Service APIs, in response to analytic conclusions.

Image 3:  Automated remediation in Oracle Management Cloud

Autonomous Software Isn’t Magic

Variability and complexity in software environments is inevitable.  We have urgent business pressures to innovate and an increasingly sophisticated portfolio of loosely coupled cloud platforms on which to innovate.  However, unless we take steps to remove the downstream operational effort associated with the increase in variability and complexity, we will be dragged into spending ever-more time and energy on operations rather than development, and that 2x ratio may quickly become 5x or 10x. 

Self-healing, self-tuning, and self-managing aren’t “magic.” Rather, they are the by-design outputs of a platform that first auto-generates sufficient instrumentation, then provides that instrumentation to an ML-based analytic engine, and finally uses the analytic results to invoke the proper automation.  Given the pace of business change, these aren’t just cool features of a platform, they are absolute necessities for sustainable modern application development.  And they are here, now. 

We invite you to experience just what autonomous PaaS is like at cloud.oracle.com/tryit

Open Source Resolutions: 3 Ways To Simplify, Break Free, and Focus in 2018

Mon, 2018-01-22 11:00

For developers, development teams, and DevOps organizations, 2017 brought forward a growing stack of open source technologies that were proven out by early adopter cloud teams. Those technologies are now being rapidly mainstreamed thanks to some heavy lifting by the CNCF and the broader cloud native community. So now is the time to resolve to make three powerful changes for 2018!

1. Simplify Your Life

So you’ve been experimenting with open source technologies from Docker to Kubernetes to Istio.  Perhaps you’ve stood these up locally on your laptop, in your lab, or experimentally up on AWS. Congratulations, this is a great first step! But trust me, keeping that environment up and running, updated with the latest releases and patches, and scaled to meet the needs of your broader organization is painful, expensive, time consuming, and foolish, considering that cloud providers are now offering managed services that do that for you – typically for no more than the cost of your current infrastructure as a service (IaaS) resources (compute, storage, network). 2017 should be the last year we give out “I Stood Up My Own Kubernetes” participation trophies.  There’s no reason in 2018 to spend valuable developer and DevOps time running and maintaining your own open source platforms when cloud providers are doing the work for you in a secure, cost-effective package. There are plenty of better ways to differentiate, compete, expand your skills, and grow your career in 2018 – building, running, and maintaining your own open source based platform is not one of them. Move to a managed open source-based service in 2018 and simplify your life.  You’ll thank me later!

2. Declare Your Independence:

Break Free From Cloud Lock-In

Take a self-inventory of the cloud providers your org uses and how much money you spent on them in 2017 versus 2016. My guess is that you will find you are developing a significant business and technical risk exposure based on single vendor cloud lock-in. Open source technologies actually give you leverage to choose the cloud vendor that works best for you from a cost, use-case, technical, and/or business perspective. In particular, serverless has been one of the big remaining closed and proprietary cloud native technology areas to date. This has forced enterprises to choose between cloud lock-in or adopting early service tools like AWS Lambda. That’s all about to change in 2018 as a set of open serverless projects (e.g., http://FnProject.io/ ) and CNCF efforts move forward. “Open on Open” is the only way to move open serverless forward in 2018 — building serverless solutions on an integrated stack on top of a Kubernetes foundation.  So, 2018 is the year to ditch lock-in and break free from your captive cloud situation. Don’t be a prisoner in your own cloud. 

3. Focus on What Matters:

Imagine if all cloud providers offered the same core set of open source-based services (e.g., Docker, Kubernetes, Kafka, Cassandra, etc.), and the only cost was for the IaaS resources you used. If this were true, then you could focus on choosing a solution based on what really matters to you.  Hey, that is true now!  The market moved in 2017 from a seller’s market to a buyer’s market with all the major cloud vendors offering similar, core OSS-based services — at least on the surface. The difference now comes down to what matters to you. And in particular the “ilities” like scalability, security, availability, reliability, and usability become key differentiators to consider. Often that can be described as “enterprise-grade” or “open source for grownups.” Open source can be free and fun, but when you need to run your enterprise apps on it, you’ll want to go top-shelf and reach for the good stuff — and that’s where the “ilities” come in.  In 2018, focus on what really matters to you, be an informed buyer, and ask the hard questions when it comes to running your apps on these infrastructures.

Open source technologies are already making developer’s lives better and their projects healthier. Now it’s time to simplify your life with managed services versus going down the DIY “hard way” path.  Break free from cloud lock-in and declare your independence from captive clouds.  And finally, in 2018 focus on what matters to you when it comes to choosing a cloud service — now that the playing field is evening out in your favor.  And most of all, have a spectacular 2018!
