OTN TechBlog


JavaOne Event Expands with More Tracks, Languages and Communities – and New Name

Thu, 2018-04-19 11:00

The JavaOne conference is expanding to create a new, bigger event that’s inclusive of more languages, technologies, and developer communities. Expect more talks on Go, Rust, Python, JavaScript, and R, along with more of the great Java technical content that developers have come to expect. We’re calling the new event Oracle Code One, to be held October 22-25 at Moscone West in San Francisco.

Oracle Code One will include a Java technical keynote with the latest information on the Java platform from the architects of the Java team. It will also have the latest details on Java 11, advances in OpenJDK, and other core Java development. We are planning dedicated tracks for server-side Java EE technology, including Jakarta EE (now part of the Eclipse Foundation), Spring, and the latest advances in Java microservices and containers. There will also be a wealth of community content on client development, JVM languages, IDEs, test frameworks, and more.

As we expand, developers can also expect additional leading edge topics such as chatbots, microservices, AI, and blockchain. There will also be sessions around our modern open source developer technologies including Oracle JET, Project Fn and OpenJFX.

Finally, one of the things that will continue to make this conference so great is the breadth of community run activities such as Oracle Code4Kids workshops for young developers, IGNITE lightning talks run by local JUG leaders, and an array of technology demos and community projects showcased in the Developer Lounge.  Expect a grand finale with the Developer Community Keynote to close out this week of fun, technology, and community.

Today, we are launching the call for papers for Oracle Code One and you can apply now to be part of any of the 11 tracks of content for Java developers, database developers, full stack developers, DevOps practitioners, and community members.  

I hope you are as excited about this expansion of JavaOne as I am and will join me at the inaugural year of Oracle Code One!

Please submit your abstracts here for consideration:
https://www.oracle.com/code-one/index.html

Beyond Chatbots: An AI Odyssey

Wed, 2018-04-18 06:00

This month the Oracle Developer Community Podcast looks beyond chatbots to explore artificial intelligence -- its current capabilities, staggering potential, and the challenges along the way.

One of the most surprising comments to emerge from this discussion reveals how a character from a 50-year-old feature film factors into one of the most pressing AI challenges.

According to podcast panelist Phil Gordon, CEO and founder of Chatbox.com, the HAL 9000 computer at the center of Stanley Kubrick’s 1968 science fiction classic “2001: A Space Odyssey” is very much on the minds of those now rushing to deploy AI-based solutions. “They have unrealistic expectations of how well AI is going to work and how much it’s going to solve out of the box.” (And apparently they're willing to overlook HAL's abysmal safety record.)

It's easy to see how an AI capable of carrying on a conversation while managing and maintaining all the systems on a complex interplanetary spaceship would be an attractive idea for those who would like to apply similar technology to keeping a modern business on course. But the reality of today’s AI is a bit more modest (if less likely to refuse to open the pod bay doors).

In the podcast, Lyudmil Pelov, a cloud solutions architect with Oracle’s A-Team, explains that unrealistic expectations about AI have been fed by recent articles that portray AI as far more human-like than is currently possible.

“Most people don't understand what's behind the scenes,” says Lyudmil. “They cannot understand that the reality of the technology is very different. We have these algorithms that can beat humans at Go, but that doesn't necessarily mean we can find the cure for the next disease.” Those leaps forward are possible. “From a practical perspective, however, someone has to apply those algorithms,” Lyudmil says.

For podcast panelist Brendan Tierney, an Oracle ACE Director and principal consultant with Oralytics, accessing relevant information from within the organization poses another AI challenge.  “When it comes to customer expectations, there's an idea that it's a magic solution, that it will automatically find and discover and save lots of money automatically. That's not necessarily true.”  But behind that magic is a lot of science.

“The general term associated with this is, ‘data science,’” Brendan explains. “The science to it is that there is a certain amount of experimental work that needs to be done. We need to find out what works best with your data. If you're using a particular technique or algorithm or whatever, it might work for one company, but it might not work best for you. You've got to get your head around the idea that we are in a process of discovery and learning and we need to work out what's best for your data in your organization and processes.”

For panelist Joris Schellekens, software engineer at iText, a key issue is traceability. “If the AI predicts something or if your system makes some kind of decision, where does that come from? Why does it decide to do that? This is important to be able to explain expectations correctly, but also in case of failure—why does it fail and why does it decide to do this instead of the correct thing?”

Of course, these issues are only a sampling of what is discussed by the experienced developers in this podcast. So plug in and gain insight that just might help you navigate your own AI odyssey.

The Panelists

Phil Gordon
CEO/founder of Chatbox.com


Lyudmil Pelov
Oracle A-Team Cloud Architect, Mobile, Cloud and Bot Technologies, Oracle


Joris Schellekens
Software Engineer, iText


Brendan Tierney
Consultant, Architect, Author, Oralytics


Additional Resources

Coming Soon
  • The Making of a Meet-Up
Subscribe

Never miss an episode! The Oracle Developer Community Podcast is available via:

Announcing GraalVM: Run Programs Faster Anywhere

Tue, 2018-04-17 02:47

Current production virtual machines (VMs) provide high-performance execution of programs only for a specific language or a very small set of languages. Compilation, memory management, and tooling are maintained separately for different languages, violating the ‘don’t repeat yourself’ (DRY) principle. This leads not only to a larger burden for the VM implementers, but also for developers, due to inconsistent performance characteristics, tooling, and configuration. Furthermore, communication between programs written in different languages requires costly serialization and deserialization logic. Finally, high-performance VMs are heavyweight processes with a high memory footprint that are difficult to embed.

Several years ago, to address these shortcomings, Oracle Labs started a new research project for exploring a novel architecture for virtual machines. Our vision was to create a single VM that would provide high performance for all programming languages, therefore facilitating communication between programs. This architecture would support unified language-agnostic tooling for better maintainability and its embeddability would make the VM ubiquitous across the stack.

To meet this goal, we have invented a new approach for building such a VM. After years of extensive research and development, we are now ready to present the first production-ready release.

Introducing GraalVM

Today, we are pleased to announce the 1.0 release of GraalVM, a universal virtual machine designed for a polyglot world.

GraalVM provides high performance for individual languages and interoperability with zero performance overhead for creating polyglot applications. Instead of converting data structures at language boundaries, GraalVM allows objects and arrays to be used directly by foreign languages.

Example scenarios include accessing functionality of a Java library from Node.js code, calling a Python statistical routine from Java, or using R to create a complex SVG plot from data managed by another language. With GraalVM, programmers are free to use whatever language they think is most productive to solve the current task.

GraalVM 1.0 allows you to run:

- JVM-based languages like Java, Scala, Groovy, or Kotlin
- JavaScript (including Node.js)
- LLVM bitcode (created from programs written in e.g. C, C++, or Rust)
- Experimental versions of Ruby, R, and Python
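As a rough illustration, each of these languages is started through a launcher shipped in the GraalVM distribution (a minimal sketch — the file names are placeholders, and it assumes GraalVM's bin directory is on the PATH):

java -version          # JVM-based languages run on the GraalVM JDK
node server.js         # Node.js applications
js script.js           # plain JavaScript
lli program.bc         # LLVM bitcode produced from C, C++, Rust, etc.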

GraalVM can run standalone, be embedded as part of platforms like OpenJDK or Node.js, or even be embedded inside databases such as MySQL or the Oracle RDBMS. Applications can be deployed flexibly across the stack via the standardized GraalVM execution environments. In the case of data processing engines, GraalVM directly exposes the data stored in custom formats to the running program without any conversion overhead.

For JVM-based languages, GraalVM offers a mechanism to create precompiled native images with instant start up and low memory footprint. The image generation process runs a static analysis to find any code reachable from the main Java method and then performs a full ahead-of-time (AOT) compilation. The resulting native binary contains the whole program in machine code form for immediate execution. It can be linked with other native programs and can optionally include the GraalVM compiler for complementary just-in-time (JIT) compilation support to run any GraalVM-based language with high performance.
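As a minimal sketch of that workflow (assuming the native-image tool from the GraalVM distribution is on the PATH, and a trivial HelloWorld class as the placeholder program):

javac HelloWorld.java        # compile the Java source as usual
native-image HelloWorld      # ahead-of-time compile the class and its reachable code into a native binary
./helloworld                 # the resulting executable starts instantly with a low memory footprint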

A major advantage of the GraalVM ecosystem is language-agnostic tooling that is applicable in all GraalVM deployments. The core GraalVM installation provides a language-agnostic debugger, profiler, and heap viewer. We invite third-party tool developers and language developers to enrich the GraalVM ecosystem using the instrumentation API or the language-implementation API. We envision GraalVM as a language-level virtualization layer that allows leveraging tools and embeddings across all languages.

GraalVM in Production

Twitter is one of the companies already deploying GraalVM in production today, for executing their Scala-based microservices. The aggressive optimizations of the GraalVM compiler reduce object allocations and improve overall execution speed. This results in fewer garbage collection pauses and less computing power needed to run the platform. See this presentation from a Twitter JVM engineer describing their experiences in detail and how they are using the GraalVM compiler to save money. In the current 1.0 release, we recommend JVM-based languages and JavaScript (including Node.js) for production use, while R, Ruby, Python, and LLVM-based languages are still experimental.

Getting Started

The binary of the GraalVM v1.0 (release candidate) Community Edition (CE) built from the GraalVM open source repository on GitHub is available here.

We are looking for feedback from the community for this release candidate. We welcome feedback in the form of GitHub issues or GitHub pull requests.

In addition to the GraalVM CE, we also provide the GraalVM v1.0 (release candidate) Enterprise Edition (EE) for better security, scalability and performance in production environments. GraalVM EE is available on Oracle Cloud Infrastructure and can be downloaded from the Oracle Technology Network for evaluation. For production use of GraalVM EE, please contact graalvm-enterprise_grp_ww@oracle.com.

Stay Connected

The latest up-to-date downloads and documentation can be found at www.graalvm.org. Follow our daily development, request enhancements, or report issues via our GitHub repository at www.github.com/oracle/graal. We encourage you to subscribe to these GraalVM mailing lists:

- graalvm-announce@oss.oracle.com
- graalvm-users@oss.oracle.com
- graalvm-dev@oss.oracle.com

We communicate via the @graalvm alias on Twitter and watch for any tweet or Stack Overflow question with the #GraalVM hashtag.

Future

This first release is only the beginning. We are working on improving all aspects of GraalVM; in particular the support for Python, R and Ruby.

GraalVM is an open ecosystem and we encourage building your own languages or tools on top of it. We want to make GraalVM a collaborative project enabling standardized language execution and a rich set of language-agnostic tooling. Please find more at www.graalvm.org on how to:

- allow your own language to run on GraalVM
- build language-agnostic tools for GraalVM
- embed GraalVM in your own application

We look forward to building this next generation technology for a polyglot world together with you!

Three Quick Tips: API Platform CS - Gateway Installation (Part 3)

Thu, 2018-04-12 16:09

Part 2 of the series can be accessed here. Today we keep it short and simple; here are three troubleshooting tips for the Oracle API Platform CS gateway installation:

  • If, while running the "install" action, you see output something like:

           -bash-4.2$ ./APIGateway -f gateway-props.json -a install-configure-start-join
Please enter user name for weblogic domain,representing the gateway node:
weblogic
Password:
2018-03-22 17:33:20,342 INFO action: install-configure-start-join
2018-03-22 17:33:20,342 INFO Initiating validation checks for action: install.
2018-03-22 17:33:20,343 WARNING Previous gateway installation found at directory = /u01/oemm
2018-03-22 17:33:20,343 INFO Current cleanup action is CLEAN
2018-03-22 17:33:20,343 INFO Validation complete
2018-03-22 17:33:20,343 INFO Action install is starting
2018-03-22 17:33:20,343 INFO start action: install
2018-03-22 17:33:20,343 INFO Clean started.
2018-03-22 17:33:20,345 INFO Logging to file /u01/oemm/logs/main.log
2018-03-22 17:33:20,345 INFO Outcomes of operations will be accumulated in /u01/oemm/logs/status.log
2018-03-22 17:33:20,345 INFO Clean finished.
2018-03-22 17:33:20,345 INFO Installing Gateway
2018-03-22 17:33:20,718 INFO complete action: install isSuccess: failed detail: {}
2018-03-22 17:33:20,718 ERROR Action install has failed. Detail: {}
2018-03-22 17:33:20,718 WARNING Full-Setup execution incomplete. Please check log file for more details
2018-03-22 17:33:20,719 INFO Execution complete.

  The issue could be "/tmp" directory permissions. Check that the tmp directory, which is used by default by the OUI installer, is not mounted with "noexec", "nosuid", or "nodev". Check for other permission issues as well. Another possible area to investigate is the size allocated to the "/tmp" file system (it should be greater than or equal to 10 GB). A quick command-line check for both conditions is sketched after this list.

  • If, at some point while running any of the installer actions, you get an "Invalid JSON object: .... " error, check whether the gateway-master.props file is empty. This can happen if, for example, you press "ctrl+z" to exit an installer action. The best approach is to back up the gateway-master.json file and restore it from that backup if the above error occurs. In the worst case, copy the gateway-master .
  •  If the "start" action is unable to start the managed server but the admin server starts OK, then try changing the "publishAddress" property's value to the "listenIpAddress" property's value and run the install, configure, and start actions again. In other words, set "publishAddress" = "listenIpAddress".
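A quick way to check both conditions from the first tip on the gateway node (a sketch):

mount | grep ' /tmp '    # look for noexec, nosuid, or nodev in the mount options
df -h /tmp               # confirm that at least 10 GB is available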

That is all for now; we will be back soon with more.


Introducing Build Pipeline in Oracle Developer Cloud

Wed, 2018-04-11 16:05

With our current release we are introducing a new build engine in Oracle Developer Cloud, called Mako. The new build engine comes with enhanced functionality and a new user interface in the Oracle Developer Cloud Service ‘Build’ tab for defining build pipelines visually. This has been a much-awaited capability in Oracle Developer Cloud from a Continuous Integration and Continuous Delivery perspective.

So what is changing in Developer Cloud build?

The screenshot below shows the user interface for the new ‘Build’ tab in Oracle Developer Cloud. A quick glance tells you that a new ‘Pipelines’ tab has been added alongside the ‘Jobs’ tab. The concept of creating build jobs remains the same; pipelines are an addition on top of the build jobs you create.

Creating a build job has changed as well. When you click the ‘+New Job’ button in the Build tab, you get a dialog box to create a new build job. The first screenshot shows the earlier ‘New Job’ dialog, where you could give the job name and choose to create a freestyle job or copy an existing build job.

The second screenshot shows the latest ‘New Job’ dialog in Oracle Developer Cloud. It has a job name, a description (which previously could only be set in the build configuration interface), create-new/copy-existing-job options, a checkbox to select ‘Use for Merge Request’, and the most noticeable addition: the Software Template dropdown.

Dialog in the old build system:

Dialog in the new build system:

What do these additional fields in the ‘New Job’ dialog mean?

Description: The job description, which previously could only be set in the build configuration interface. You can still edit it in the build configuration, in the Settings tab.

Use for Merge Request: By selecting this option, your build will be parameterized to receive the Git repo URL, Git repo branch, and Git repo merge ID, and to perform the merge as part of the build.

Software Template: With this release you will be using your own Oracle Compute Classic instances to run your build jobs. Earlier, build jobs were executed on an internal pool of compute. This gives you immense flexibility to configure your build machine with the software runtimes you need, using the user interface provided as part of Developer Cloud Service. These configurations persist, and the build machines are not reclaimed, because they are your own compute instances. This also enables you to run multiple parallel builds without constraint by spinning up new compute instances as per your requirements. You can create multiple VM templates with different software configurations and choose among them while creating build jobs.

Please use this link to refer to the documentation for configuring Software Templates.

Build Configuration Screen:

In the build configuration screen you will now have two tabs, as seen in the screenshot below.

  1. Build Configuration
  2. Build Settings

As seen in the screenshot below, the Build Configuration tab further has Source Control, Build Parameters, Build Environment, Builders, and Post Build sub-tabs.

The Build Settings tab has sub-tabs such as General, Software, Triggers, and Advanced. Below is a brief description of each:

General: As seen in the screenshot below, this tab is for generic, build-job-related details. It is similar to the Main tab which existed previously.

Software: This tab is a new introduction in the build configuration to support Software Templates for build machines, introduced in the current release as described above. It lets you see or change the software template that you selected while creating the build job, and also lets you see the software (runtimes) available in the template. Please see the screenshot below for reference.

Triggers: You will be able to add build triggers like the Periodic Trigger and SCM Polling Trigger, as shown in the screenshot below. This is similar to the Triggers tab that existed earlier.

Advanced: Consists of build settings related to job-abort conditions, the retry count, and adding timestamps to the console output.

In the Build Configuration Tab

There are four tabs in the Build Configuration tab as described below:

Source Control: You can add Git as the source control from the ‘Add Source Control’ dropdown.

 

Build Parameters: Apart from the existing build parameters such as String Parameter, Password Parameter, Boolean Parameter, and Choice Parameter, there is a new parameter type called Merge Request Parameters. The Merge Request Parameters get added automatically when the ‘Use for Merge Request’ checkbox is selected while creating the build job. This adds the Git repo URL, Git repo branch, and Git repo merge ID as build parameters.

Build Environment: A new Build Environment setting, SonarQube Settings, has been added alongside the existing Xvfb Wrapper, Copy Artifacts, and Oracle Maven Repository Connection settings.

SonarQube Settings – For static code analysis using the SonarQube tool. I will be publishing a separate blog on SonarQube in Developer Cloud.

Builders: To add build steps. There is one addition to the build steps: Docker Builder.

Docker Builder: Support for building Docker images and executing any Docker command. (A separate blog on Docker will follow.)

Post Build: To add Post Build configurations like deployment. SonarQube Result Publisher is the new Post Build configuration added in the current release.

Pipelines

After creating and configuring the build jobs, you can create a pipeline in the Pipelines tab using these build jobs. You can create a new pipeline using the ‘+New Pipeline’ button.

You will see the below dialog to create a new pipeline.

On creation of the pipeline, you can drag and drop the build jobs in the pipeline visual editor, then sequence and connect them as required.

You can also add conditions for execution to the connections by double-clicking the links and selecting the condition from the dropdown, as shown in the screenshot below.

Once completed, the pipeline will be listed in the Pipelines tab as shown below.

 

You can start the build manually using the play button. You can also configure it to auto-start when one of the jobs is executed externally.

Stay tuned for more blogs on latest features and capabilities of Developer Cloud Service. 

Happy Coding!

 **The views expressed in this post are my own and do not necessarily reflect the views of Oracle


Building Docker on Oracle Developer Cloud Service

Wed, 2018-04-11 13:40

The much awaited Docker build support on Oracle Developer Cloud Service is here. Now you will be able to build Docker images and execute Docker commands as part of the Continuous Integration and Continuous Deployment pipeline.

This blog describes what you can do with the Docker build support in Developer Cloud Service and gives an overview of the Docker commands that you can run on Developer Cloud as part of a build job.

Note: A series of follow-up blogs on using Docker builds on Developer Cloud will cover different technology stacks and usage scenarios.

Build Job Essentials:

A prerequisite for running Docker commands or using Docker build steps in a build job is selecting a software template that includes Docker as a software bundle. Selecting a template with Docker ensures that the build VM instantiated from the selected software template has the Docker runtime installed, as shown in the screenshot below. The template names may vary in your instance.

To learn about the new build system, you can read this blog. You can also refer to the documentation on configuring the build VM.

 

You can verify whether Docker is part of the selected VM by navigating to Build -> <Build Job> -> Build Settings -> Software.

You can refer to this link to understand more about the new build interface on Developer Cloud.

 

Once you have created the build job with the right software template selected as described above, go to the Builders tab in the build job and click Add Builder. You will see Docker Builder in the dropdown, as shown in the screenshot below. Selecting Docker Builder gives you the Docker command options provided out of the box.

You can run any other Docker command as well by selecting Unix Shell Builder and writing your Docker command in it.

In the screenshot below you can see two commands selected from the Docker Builder menu.

Docker Version – This command interface prints the Docker version installed on your Build VM.

Docker Login – Using this command interface you can log in and create a connection with the Docker registry. By default this is Docker Hub, but you can use Quay.io or any other Docker registry available over the internet. If you leave the Registry Host field empty, it will connect to Docker Hub by default.

 

Docker Build – Using this command interface you can build a Docker image in Oracle Developer Cloud. You must have a Dockerfile in the Git repository that you configure in the build job. The path of the Dockerfile is specified in the Dockerfile field; if the Dockerfile resides in the build context root, you can leave the field empty. You must also provide the image name.

 

Docker Push – Use this to push the Docker image that you built with the Docker Build command interface to the Docker registry. First use Docker Login to create a connection to the Docker registry where you want to push the image, then use the Docker Push command, giving exactly the same image name you used in the Docker Build command.

 

Docker rmi – To remove the Docker images we have built.

As mentioned previously, you can run any Docker command in Developer Cloud. If a UI for the command is not provided, you can use the Unix Shell Builder to write and execute your Docker command.
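For reference, the out-of-the-box command interfaces described above correspond roughly to the following Docker CLI commands, which could equally be placed in a Unix Shell Builder step (a sketch — the image name, tag, and credentials are placeholders):

docker version                                        # print the Docker version on the build VM
docker login -u "$DOCKER_USER" -p "$DOCKER_PASSWORD"  # defaults to Docker Hub when no registry host is given
docker build -t myuser/myimage:1.0 .                  # expects a Dockerfile in the build context root
docker push myuser/myimage:1.0                        # push the image built above to the registry
docker rmi myuser/myimage:1.0                         # remove the local image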

In my follow-up blog series I will be using a combination of the out-of-the-box command interfaces and the Unix Shell Builder to execute Docker commands and accomplish build tasks. So watch out for the upcoming blogs here.

Happy Dockering on Developer Cloud!

 **The views expressed in this post are my own and do not necessarily reflect the views of Oracle

Build and Deploy .Net Code using Oracle Developer Cloud

Wed, 2018-04-11 13:27

The much awaited support for building and deploying .Net code on Oracle Cloud using Developer Cloud Service is here.

This blog post will show how you can use Oracle Developer Cloud Service to build .Net code and deploy it on Oracle Application Container Cloud. It will show how the newly released Docker build support in Developer Cloud can be leveraged to perform the build.

Technology Stack Used:

Application Stack: .Net for developing ASPX pages

Build: Docker for compiling the .Net code

Packaging Tool: Grunt to package the compiled code

DevOps Cloud Service: Oracle Developer Cloud

Deployment Cloud Service: Oracle Application Container Cloud

OS for Development: Windows 7

 

.Net application code source:

The ASP.Net application that we will build and deploy on Oracle Cloud using Docker can be downloaded from a Git repository on GitHub. Below is the link:

https://github.com/dotnet/dotnet-docker-samples/tree/master/aspnetapp

If you want to clone the GitHub repository, use the below git command after installing the Git CLI on your machine.

git clone https://github.com/dotnet/dotnet-docker-samples/

After cloning the above-mentioned repository, use the aspnetapp sample. Below is the folder structure of the cloned aspnetapp.

 

Apart from the four highlighted files in the screenshot below, which are essential for the deployment, all the other files and folders are part of the .Net application.

Note: You may not be able to see the .git folder if you have not yet initialized the Git repository.

Now we need to initialize a Git repository for the aspnetappl folder, as we will be pushing this code to the Git repo hosted on Oracle Developer Cloud. Below are the commands that you can use on your command line after installing the Git CLI and adding it to your path.

Command prompt > cd <to the aspnetappl folder>

Command prompt > git init

Command prompt > git add --all

Command prompt > git commit -m "First Commit"

The above git commands initialize the Git repository locally in the application folder and then add all the code in the folder to the local Git repository using the git add --all command.

Then commit the added files using the git commit command, as shown above.

Now go to your Oracle Developer Cloud project and create a Git repository for the .Net code to be pushed to. For the purpose of this blog I created the Git repository by clicking the ‘New Repository’ button and named it ‘DotNetDockerAppl’, as shown in the screenshot below. You may name it as you choose.

Copy the Git repository URL as shown below.

Then add the URL as the remote repository to the local Git repository that we have created using the below command:

 Command prompt > git remote add origin <Developer Cloud Git repository URL>

Then use the below command to push the code to the master branch of the Git repository hosted on Developer Cloud.

Command prompt > git push origin master

 

Deployment-related files that need to be created:

Dockerfile

This file is used by the Docker tool to build a Docker image with .NET Core installed; the image also includes the .Net application code cloned from the Developer Cloud Git repository. You get a Dockerfile as part of the project; please replace the existing Dockerfile script with the one below.

 

FROM microsoft/aspnetcore-build:2.0
WORKDIR /app

# copy csproj and restore as distinct layers
COPY *.csproj ./
RUN dotnet restore

# copy everything else and build
COPY . ./
RUN dotnet publish -c Release -r linux-x64

In the above script we pull the aspnetcore-build:2.0 image, create a working directory, copy the .csproj file and restore dependencies, and then copy the rest of the code from the Git repo. Finally, we use the ‘dotnet’ command to publish the compiled code, targeting a linux-x64 machine.

manifest.json

This file is essential for the deployment of the .Net application on the Oracle Application Container Cloud.

{
  "runtime": {
    "majorVersion": "2.0.0-runtime"
  },
  "command": "dotnet AspNetAppl.dll"
}

The command attribute in the JSON specifies the DLL to be executed by the dotnet command, while the runtime attribute specifies the .Net version to be used for executing the compiled code.

 

Gruntfile.js

This file defines the build task and is used by the build to determine the type of deployment artifact to generate — in this case a zip file — and which project files to include in it. For the .Net application we only need to include everything in the publish folder, plus the manifest.json required for Application Container Cloud deployment. The folder is configured in the compress task (the cwd and src attributes), as shown in the code snippet below.

 

/**
 * http://usejsdoc.org/
 */
module.exports = function(grunt) {
  require('load-grunt-tasks')(grunt);
  grunt.initConfig({
    compress: {
      main: {
        options: {
          archive: 'AspNetAppl.zip',
          pretty: true
        },
        expand: true,
        cwd: './publish',
        src: ['./**/*'],
        dest: './'
      }
    }
  });
  grunt.registerTask('default', ['compress']);
};

package.json

Since Grunt is a Node.js-based build tool, which we are using in this blog to build and package the deployment artifact, we need a package.json file to define the dependencies required for Grunt to execute.

{
  "name": "ASPDotNetAppl",
  "version": "0.0.1",
  "private": true,
  "scripts": {
    "start": "dotnet AspNetAppl.dll"
  },
  "dependencies": {
    "grunt": "^0.4.5",
    "grunt-contrib-compress": "^1.3.0",
    "grunt-hook": "^0.3.1",
    "load-grunt-tasks": "^3.5.2"
  }
}

Once all the code is pushed to the Git repository hosted on Oracle Developer Cloud, you can browse and verify it by going to the Code tab and selecting the appropriate Git repository and branch in the dropdowns at the top of the file list, as shown in the screenshot below.

 

Build Job Configuration on Developer Cloud

We are going to use the newly introduced Mako build instead of the Hudson build system in DevCS.

Below are the build job configuration screenshots for ‘DotNetBuild’, which will build and deploy the .Net application:

Create a build job by clicking the “New Job” button and give it a name of your choice; for this blog I named it ‘DotNetBuild’. You will also need to select a software template that contains the Docker and Node.js runtimes. In case you do not see the required software template in the dropdown, as shown in the screenshot below, you will have to configure it from the Organization -> VM Templates menu; this will start a build VM with the required software template. To learn more about configuring VMs and VM templates, you can refer to this link.

 

Now go to the Builders tab, where we configure the build steps. First we select a Unix Shell builder, in which we build the Docker image using the Dockerfile in our Git repository, create a container from that image (without starting it), copy the compiled code from the container to the build machine, and use the npm registry to download the Grunt build tool dependencies. Finally, we use the grunt command to build the AspNetAppl.zip file, which will be deployed on Application Container Cloud. The shell step looks roughly like the sketch below.
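A rough sketch of that shell step — the image and container names and the publish path inside the container are assumptions based on the Dockerfile above, not the exact script used in this build:

docker build -t aspnetappl .                        # build the image from the Dockerfile in the Git repo
docker create --name aspnetappl-build aspnetappl    # create (but do not start) a container from the image
docker cp aspnetappl-build:/app/bin/Release/netcoreapp2.0/linux-x64/publish ./publish   # copy the compiled output
npm install                                         # download the Grunt dependencies declared in package.json
node_modules/.bin/grunt                             # package ./publish into AspNetAppl.zip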


Now configure PSM CLI with the credentials and identity domain for your ACCS instance. Then add another Unix Shell builder, in which you provide the psm command to deploy the zip file (generated earlier by the Grunt build) to Application Container Cloud.

Note: All this will be done in the same ‘DotNetBuild’ build job that we have created earlier.

 

As the last part of the build configuration, in the Post Build tab, configure the Artifact Archiver as shown below to archive the generated zip file for deployment.

 

The screenshot below shows the ‘DotNet’ application deployed, in the Application Container Cloud service console. Copy the application URL as shown in the screenshot; the URL will vary for your cloud instance.

 

Use the copied URL to access the deployed .Net application in a browser. It will look as shown in the screenshot below.

Happy Coding!

**The views expressed in this post are my own and do not necessarily reflect the views of Oracle


Introducing the Oracle MySQL Operator for Kubernetes

Wed, 2018-03-28 00:30

(Originally published on Medium)

Introduction

Oracle recently open sourced a Kubernetes operator for MySQL that makes running and managing MySQL on Kubernetes easier.

The MySQL Operator is a Kubernetes controller that can be installed into any existing Kubernetes cluster. Once installed, it will enable users to create and manage production-ready MySQL clusters using a simple declarative configuration format. Common operational tasks such as backing up databases and restoring from an existing backup are made extremely easy. In short, the MySQL Operator abstracts away the hard work of running MySQL inside Kubernetes.

The project started as a way to help internal teams get MySQL running in Kubernetes more easily, but it quickly became clear that many other people might be facing similar issues.


Features

Before we dive into the specifics of how the MySQL Operator works, let’s take a quick look at some of the features it offers:

Cluster configuration

We have only two options for how a cluster is configured.

  • Primary (in this mode the group has a single-primary server that is set to read-write mode. All the other members in the group are set to read-only mode)
  • Multi-Primary (In multi-primary mode, there is no notion of a single primary. There is no need to engage an election procedure since there is no server playing any special role.)
Cluster management
  • Create and scale MySQL clusters using InnoDB and Group Replication on Kubernetes
  • When cluster instances die, the MySQL Operator will automatically re-join them into the cluster
  • Use Kubernetes Persistent Volume Claims to store data on local disk or network attached storage.
Backup and restore
  • Create on-demand backups
  • Create backup schedules to automatically backup databases to Object Storage (S3 etc)
  • Restore a database from an existing backup
Operations
  • Run on any Kubernetes cluster (Oracle Cloud Infrastructure, AWS, GCP, Azure)
  • Prometheus metrics for alerting and monitoring
  • Self healing clusters
The Operator Pattern

A Kubernetes Operator is simply a domain-specific controller that can manage, configure, and automate the lifecycle of stateful applications. Managing stateful applications such as databases, caches, and monitoring systems running on Kubernetes is notoriously difficult. By leveraging the power of the Kubernetes API, we can now build self-managing, self-driving infrastructure by encoding operational knowledge and best practices directly into code. For instance, if a MySQL instance dies, we can use an Operator to react and take the appropriate action to bring the system back online.


How it works

The MySQL Operator makes use of Custom Resource Definitions as a way to extend the Kubernetes API. For instance, we create custom resources for MySQLClusters and MySQLBackups. Users of the MySQL Operator interact via these third party resource objects. When a user creates a backup for example, a new MySQLBackup resource is created inside Kubernetes which contains references and information about that backup.

The MySQL Operator is, at its core, a simple Kubernetes controller that watches the API server for Custom Resource Definitions relating to MySQL and acts on them.


HA / Production Ready MySQL Clusters

The MySQL Operator is opinionated about the way in which clusters are configured. We build upon InnoDB cluster (which uses Group Replication) to provide a complete high availability solution for MySQL running on Kubernetes.


Examples

The following examples will give you an idea of how the MySQL Operator can be used to manage your MySQL Clusters.


Create a MySQL Cluster

Creating a MySQL cluster using the Operator is easy. We define a simple YAML file and submit this directly to Kubernetes via kubectl. The MySQL operator watches for MySQLCluster resources and will take action by starting up a MySQL cluster.

apiVersion: "mysql.oracle.com/v1"
kind: MySQLCluster
metadata:
  name: mysql-cluster-with-3-replicas
spec:
  replicas: 3

You should now be able to see your cluster running.
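For example, assuming the manifest above is saved as cluster.yaml (the file name is a placeholder):

kubectl apply -f cluster.yaml    # submit the MySQLCluster resource
kubectl get mysqlclusters        # the custom resource should be listed
kubectl get pods                 # the operator creates the underlying MySQL pods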

There are several other options available when creating a cluster such as specifying a Persistent Volume Claim to define where your data is stored. See the examples directory in the project for more examples.


Create an on-demand backup

We can use the MySQL operator to create an “on-demand” database backup and upload it to object storage.

Create a backup definition and submit it via kubectl.

apiVersion: "mysql.oracle.com/v1"
kind: MySQLBackup
metadata:
  name: mysql-backup
spec:
  executor:
    provider: mysqldump
    databases:
      - test
  storage:
    provider: s3
    secretRef:
      name: s3-credentials
    config:
      endpoint: x.compat.objectstorage.y.oraclecloud.com
      region: ociregion
      bucket: mybucket
  clusterRef:
    name: mysql-cluster

You can now list or fetch individual backups via kubectl

kubectl get mysqlbackups

Or fetch an individual backup

kubectl get mysqlbackup api-production-snapshot-151220170858 -o yaml
Create a Backup Schedule

Users can attach scheduled backup policies to a cluster so that backups get created on a given cron schedule. A user may create multiple backup schedules attached to a single cluster if required.

This example will create a backup of the cluster's test database every hour and upload it to Oracle Cloud Infrastructure Object Storage.

apiVersion: "mysql.oracle.com/v1"
kind: MySQLBackupSchedule
metadata:
  name: mysql-backup-schedule
spec:
  schedule: '30 * * * *'
  backupTemplate:
    executor:
      provider: mysqldump
      databases:
        - test
    storage:
      provider: s3
      secretRef:
        name: s3-credentials
      config:
        endpoint: x.compat.objectstorage.y.oraclecloud.com
        region: ociregion
        bucket: mybucket
    clusterRef:
      name: mysql-cluster

Roadmap

Some of the features on our roadmap include

  • Support for MySQL Enterprise Edition
  • Support for MySQL Enterprise Backup
Conclusion

The MySQL Operator showcases the power of Kubernetes as a platform. It makes running MySQL inside Kubernetes easy by abstracting complexity and reducing operational burden. Although it is still in very early development, the MySQL Operator already provides a great deal of useful functionality out of the box.

Visit https://github.com/oracle/mysql-operator to learn more. We welcome contributions, ideas and feedback from the community.

If you want to deploy MySQL inside Kubernetes, we recommend using the MySQL Operator to do the heavy lifting for you.

 

Links


Announcing Terraform support for Oracle Cloud Platform Services

Mon, 2018-03-26 06:03

Oracle and HashiCorp are pleased to announce the immediate availability of the Oracle Cloud Platform Terraform provider.

The initial release of the Oracle Cloud Platform Terraform provider supports the creation and lifecycle management of Oracle Database Cloud Service and Oracle Java Cloud Service instances.

With the availability of the Oracle Cloud Platform services support, Terraform’s “infrastructure-as-code” configurations can now be defined for deploying standalone Oracle PaaS services, or combined with the Oracle Cloud Infrastructure and Infrastructure Classic services supported by the opc and oci providers for complete infrastructure and application deployment.

Supported PaaS Services

The following Oracle Cloud Platform services are supported by the initial Oracle Cloud Platform (PaaS) Terraform provider. Additional services/resources will be added over time.

  • Oracle Database Cloud Service Instances
  • Oracle Database Cloud Service Access Rules
  • Oracle Java Cloud Service Instances
  • Oracle Java Service Access Rules
Using the Oracle Cloud Platform Terraform provider

To get started using Terraform to provision Oracle Cloud Platform services, let's look at an example of deploying a single Java Cloud Service instance, along with its dependent Database Cloud Service instance.

First we declare the provider definition, providing the account credentials and the appropriate service REST API endpoints. The identity domain name, identity service ID, and REST endpoint URL can be found in the Service details section of the My Services Dashboard.

For IDCS Cloud Accounts use the Identity Service ID for the identity_domain.

provider "oraclepaas" {
  user              = "example@user.com"
  password          = "Pa55_Word"
  identity_domain   = "idcs-5bb188b5460045f3943c57b783db7ffa"
  database_endpoint = "https://dbaas.oraclecloud.com"
  java_endpoint     = "https://jaas.oraclecloud.com"
}

For Traditional Accounts use the account Identity Domain Name for the identity_domain

provider "oraclepaas" {
  user              = "example@user.com"
  password          = "Pa55_Word"
  identity_domain   = "mydomain"
  database_endpoint = "https://dbaas.oraclecloud.com"
  java_endpoint     = "https://jaas.oraclecloud.com"
}

Database Service Instance configuration

The oraclepaas_database_service_instance resource is used to define the Oracle Database Cloud service instance. A single terraform database service resource definition can represent configurations ranging from a single instance Oracle Database Standard Edition deployment, to a complete multi node Oracle Database Enterprise Edition with RAC and Data Guard for high availability and disaster recovery.

Instances can also be created from backups or snapshots of another Database Cloud Service instance. For this example we'll create a new single-instance database for use with the Java Cloud Service configured further down.

resource "oraclepaas_database_service_instance" "database" {
  name              = "my-terraformed-database"
  description       = "Created by Terraform"
  edition           = "EE"
  version           = "12.2.0.1"
  subscription_type = "HOURLY"
  shape             = "oc1m"
  ssh_public_key    = "${file("~/.ssh/id_rsa.pub")}"

  database_configuration {
    admin_password     = "Pa55_Word"
    backup_destination = "BOTH"
    sid                = "ORCL"
    usable_storage     = 25
  }

  backups {
    cloud_storage_container = "Storage-${var.domain}/my-terraformed-database-backup"
    cloud_storage_username  = "${var.user}"
    cloud_storage_password  = "${var.password}"
    create_if_missing       = true
  }
}

Let's take a closer look at the configuration settings. Here we are declaring that this is an Oracle Database 12c Release 2 (12.2.0.1) Enterprise Edition instance with an oc1m (1 OCPU/15 GB RAM) shape and hourly usage metering.

edition           = "EE"
version           = "12.2.0.1"
subscription_type = "HOURLY"
shape             = "oc1m"

The ssh_public_key is the public key to be provisioned to the instance to allow SSH access.

The database_configuration block sets the initial configuration for the actual Database instance to be created in the Database Cloud service, including the database SID, the initial password, and the initial usable block volume storage for the database.

database_configuration {
  admin_password     = "Pa55_Word"
  backup_destination = "BOTH"
  sid                = "ORCL"
  usable_storage     = 25
}

The backup_destination configures whether backups go to the Object Storage Service (OSS), to both object storage and local storage (BOTH), or are disabled (NONE). A backup destination of OSS or BOTH is required for database instances that are used in combination with Java Cloud Service instances.

The Object Storage Service location and access credentials are configured in the backups block

backups {
  cloud_storage_container = "Storage-${var.domain}/my-terraformed-database-backup"
  cloud_storage_username  = "${var.user}"
  cloud_storage_password  = "${var.password}"
  create_if_missing       = true
}

Java Cloud Service Instance

The oraclepaas_java_service_instance resource is used to define the Oracle Java Cloud Service instance. A single Terraform resource definition can represent configurations ranging from a single-instance Oracle WebLogic Server deployment to a complete multi-node Oracle WebLogic cluster with an Oracle Coherence data grid cluster and an Oracle Traffic Director load balancer.

Instances can also be created from snapshots of another Java Cloud Service instance. For this example we'll create a new two-node WebLogic cluster with a load balancer, associated with the Database Cloud Service instance defined above.

resource "oraclepaas_java_service_instance" "jcs" {
  name                 = "my-terraformed-java-service"
  description          = "Created by Terraform"
  edition              = "EE"
  service_version      = "12cRelease212"
  metering_frequency   = "HOURLY"
  enable_admin_console = true
  ssh_public_key       = "${file("~/.ssh/id_rsa.pub")}"

  weblogic_server {
    shape = "oc1m"

    managed_servers {
      server_count = 2
    }

    admin {
      username = "weblogic"
      password = "Weblogic_1"
    }

    database {
      name     = "${oraclepaas_database_service_instance.database.name}"
      username = "sys"
      password = "${oraclepaas_database_service_instance.database.database_configuration.0.admin_password}"
    }
  }

  oracle_traffic_director {
    shape = "oc1m"

    listener {
      port         = 8080
      secured_port = 8081
    }
  }

  backups {
    cloud_storage_container = "Storage-${var.domain}/my-terraformed-java-service-backup"
    auto_generate           = true
  }
}

Let's break this down. Here we are declaring that this is a 12c Release 2 (12.2.1.2) Enterprise Edition Java Cloud Service instance with hourly usage metering.

edition            = "EE"
service_version    = "12cRelease212"
metering_frequency = "HOURLY"

Again the ssh_public_key is the public key to be provisioned to the instance to allow SSH access.

The weblogic_server block provides the configuration details for the WebLogic Server instances deployed for this Java Cloud Service instance. The weblogic_server definition sets the instance shape, in this case an oc1m (1 OCPU/15 GB RAM).

The admin block sets the WebLogic server admin user and initial password.

admin {
  username = "weblogic"
  password = "Weblogic_1"
}

The database block connects the WebLogic server to the Database Service instance already defined above. In this example we are assuming the database and java service instances are declared in the same configuration, so we can fetch the database configuration values.

database {
  name     = "${oraclepaas_database_service_instance.database.name}"
  username = "sys"
  password = "${oraclepaas_database_service_instance.database.database_configuration.0.admin_password}"
}

The oracle_traffic_director block configures the load balancer that directs traffic to the managed WebLogic server instances.

oracle_traffic_director {
  shape = "oc1m"

  listener {
    port         = 8080
    secured_port = 8081
  }
}

By default the load balancer is configured with the same admin credentials defined in the weblogic_server block; different credentials can also be configured if required. If the insecure port is not set, then only the secured_port is enabled.

Finally, similar to the Database Cloud service instance configuration, the backups block sets the Object Storage Service location for the Java Service instance backups.

backups {
  cloud_storage_container = "Storage-${var.domain}/-backup"
  auto_generate           = true
}

Provisioning

With the provider and resource definitions configured in a Terraform project (e.g. all in a main.tf file), deploying the above configuration is as simple as:

$ terraform init
$ terraform apply

The terraform init command will automatically fetch the latest version of the oraclepaas provider, and terraform apply will start the provisioning. The complete provisioning of the Database and Java Cloud Service instances can be a long-running operation. To remove the provisioned instances, run terraform destroy.
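If you want to preview the changes before provisioning (optional, but a common workflow), the standard plan step works here as well:

$ terraform plan    # show the Database and Java Cloud Service resources that would be created, without creating them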

Related Content

Terraform Provider for Oracle Cloud Platform

Terraform Provider for Oracle Cloud Infrastructure

Terraform Provider for Oracle Cloud Infrastructure Classic

Part II: Data processing pipelines with Spring Cloud Data Flow on Oracle Cloud

Thu, 2018-03-22 00:30

This is the 2nd (and final) part of this blog series about Spring Cloud Data Flow on Oracle Cloud

In Part 1, we covered some of the basics and the infrastructure setup (Kafka, MySQL), and at the end of it we had a fully functional Spring Cloud Data Flow server on the cloud — now it's time to put it to use!

In this part, you will

  • get a technical overview of the solution and look at some internal details — the whys and hows
  • build and deploy a data flow pipeline on Oracle Application Container Cloud
  • and finally test it out…
Behind the scenes

Before we see things in action, here is an overview so that you understand what you will be doing and get a (rough) idea of why it works the way it does.

At a high level, this is how things work in Spring Cloud Data Flow (you can always dive into the documentation for details)

  • You start by registering applications — these contain the core business logic and deal with how you would process the data e.g. a service which simply transforms the data it receives (from the messaging layer) or an app which pumps user events/activities into a message queue
  • You will then create a stream definition where you will define the pipeline of your data flow (using the apps which you previously registered) and then deploy them
  • (here is the best part!) once you deploy the stream definition, the individual apps in the pipeline get automatically deployed to Oracle Application Container Cloud, thanks to our custom Spring Cloud Deployer SPI implementation (this was briefly mentioned in Part 1)

At a high level, the SPI implementation needs to adhere to the contract/interface outlined by org.springframework.cloud.deployer.spi.app.AppDeployer and provide implementations for the following methods — deploy, undeploy, status, and environmentInfo.

Thus the implementation handles the life cycle of the pipeline/stream processing applications

  • creation and deletion
  • providing status information
Show time…!

App registration

We will start by registering our stream/data processing applications.

As mentioned in Part 1, Spring Cloud Data Flow uses Maven as one of its sources for the applications which need to be deployed as a part of the pipelines which you build — more details here and here

You can use any Maven repo — we are using the Spring Maven repo since we will be importing their pre-built starter apps. Here is the manifest.json where this is configured.

{
  "runtime": {
    "majorVersion": "8"
  },
  "command": "java -jar spring-cloud-dataflow-server-accs-1.0.0-SNAPSHOT.jar --server.port=$PORT --maven.remote-repositories.repo1.url=http://repo.spring.io/libs-snapshot --spring.cloud.dataflow.applicationProperties.stream.spring.cloud.stream.kafka.binder.brokers=$OEHPCS_EXTERNAL_CONNECT_STRING --spring.cloud.dataflow.applicationProperties.stream.spring.cloud.stream.kafka.binder.zkNodes=<event_hub_zookeeper_IP>:<port>",
  "notes": "ACCS Spring Cloud Data Flow Server"
}

manifest.json for Data Flow server on ACCS

Access the Spring Cloud Data Flow dashboard — navigate to the application URL e.g. https://SpringCloudDataflowServer-mydomain.apaas.us2.oraclecloud.com/dashboard

Spring Cloud Data Flow dashboard

For the purpose of this blog, we will import two pre-built starter apps

http
  • Type — source
  • Role — pushes data to the message broker
  • Maven URL — maven://org.springframework.cloud.stream.app:http-source-kafka:1.0.0.BUILD-SNAPSHOT

log

  • Type — sink
  • Role — consumes data/events from the message broker
  • Maven URL — maven://org.springframework.cloud.stream.app:log-sink-kafka:1.0.0.BUILD-SNAPSHOT

There is another category of apps known as processor — this is not covered for the sake of simplicity

There are a bunch of these starter apps which make it super easy to get going with Spring Cloud Data Flow!

Importing applications

After app registration, we can go ahead and create our data pipeline. But, before we do that, let’s quickly glance at what it will do…

Overview of the sample pipeline/data flow

Here is the flow which the pipeline will encapsulate — you will see this in action once you reach the Test Drive section, so keep going!

  • http app -> Kafka topic
  • Kafka -> log app -> stdout

The http app will provide a REST endpoint for us to POST messages to it and these will be pushed to a Kafka topic. The log app will simply consume these messages from the Kafka topic and then spit them out to stdout — simple!

Create & deploy a pipeline

Let's start creating the stream — you can pick from the list of source and sink apps which we just imported (http and log).

 

Use the below stream definition — just replace KafkaDemo with the name of your Event Hub Cloud service instance, which you set up in the Infrastructure setup section in Part 1.

http --port=$PORT --app.accs.deployment.services='[{"type": "OEHPCS", "name": "KafkaDemo"}]' | log --app.accs.deployment.services='[{"type": "OEHPCS", "name": "KafkaDemo"}]'

Stream definition
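If you prefer the Spring Cloud Data Flow shell to the dashboard, the same pipeline can be created and deployed from the command line (a sketch — it assumes the shell is targeting this Data Flow server, and uses the stream name DemoStream referenced later in this post):

dataflow:>stream create --name DemoStream --definition "<the http | log definition shown above>"
dataflow:>stream deploy --name DemoStream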

You will see a graphical representation of the pipeline (which is quite simple in our case)

Stream definition


Create (and deploy) the pipeline

Deploy the stream definition

The deployment process will get initiated and the same will be reflected on the console

Deployment in progress….

Go back to the Applications menu in Oracle Application Container Cloud to confirm that the deployment of the individual apps has also been triggered.

Deployment in progress…

Open the application details and navigate to the Deployments section to confirm that both apps have a service binding to the Event Hub instance, as specified in the stream definition.

Service Binding to Event Hub Cloud

After the applications are deployed to Oracle Application Container Cloud, the state of the stream definition will change to deployed and the apps will also show up in the Runtime section

 

Deployment complete

 

Spring Cloud Data Flow Runtime menu

Connecting the dots..
Before we jump ahead and test out the data pipeline we just created, here are a couple of pictorial representations to summarize how everything connects logically.

Individual pipeline components in Spring Cloud Data Flow map to their corresponding applications in Oracle Application Container Cloud — deployed via the custom SPI implementation (discussed above as well as in part 1)

Spring Cloud Data Flow pipeline to application mapping

.. and here is where the logical connection to Kafka is depicted

  • http app pushes to Kafka topic
  • the log app consumes from Kafka topic and emits the messages to stdout
  • the topics are auto-created in Kafka by default (you can change this) and the naming convention is the stream definition (DemoStream) and the pipeline app name (http) separated by a dot (.)

Pipeline apps interacting with Kafka
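You can verify this naming convention by listing the topics on the Event Hub (Kafka) instance (a sketch — the ZooKeeper host and port are placeholders):

kafka-topics.sh --list --zookeeper <event_hub_zookeeper_IP>:<port>    # expect a topic named DemoStream.http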

Test drive

Time to test the data pipeline…

Send messages via the http (source) app
POST a few messages to the REST endpoint exposed by the http app (check its URL from the Oracle Application Container Cloud console) — these messages will be sent to a Kafka topic and consumed by the log app

curl -X POST https://demostreamhttp-ocloud200.uscom-central-1.oraclecloud.com/ -H 'content-type: text/plain' -d test1
curl -X POST https://demostreamhttp-ocloud200.uscom-central-1.oraclecloud.com/ -H 'content-type: text/plain' -d test12
curl -X POST https://demostreamhttp-ocloud200.uscom-central-1.oraclecloud.com/ -H 'content-type: text/plain' -d test123

Check the log (sink) service
Download the logs for the log app to confirm. Navigate to the application details and check out the Logs tab in the Administration section — documentation here

Check logs

You should see the same messages which you sent to the HTTP endpoint

 

Messages from Kafka consumed and sent to stdout

There is another way…
You can also validate this directly against Kafka (on Event Hub Cloud) itself — all you need to do is create a custom Access Rule to open port 6667 on the Kafka server VM on Oracle Event Hub Cloud — details here

You can now inspect the Kafka topic directly by using the console consumer and then POSTing messages to the HTTP endpoint (as mentioned above)

kafka-console-consumer.bat --bootstrap-server <event_hub_kafka_IP>:6667 --topic DemoStream.http

Un-deploy
Un-deploying or destroying the stream definition will trigger deletion of the corresponding applications from Oracle Application Container Cloud

Un-deploy/destroy the definition

Quick recap

That’s all for this blog and it marks the end of this 2-part blog series!

  • we covered the basic concepts and deployed a Spring Cloud Data Flow server on Oracle Application Container Cloud along with its dependent components: Oracle Event Hub Cloud as the Kafka-based messaging layer and Oracle MySQL Cloud as the persistent RDBMS store
  • we then explored some behind-the-scenes details and put our Spring Cloud Data Flow setup to use
  • finally, we built and deployed a simple data pipeline and ran some basic testing/validation
Don’t forget to…
  • check out the tutorials for Oracle Application Container Cloud — there is something for every runtime!
  • read the other blogs on Application Container Cloud

Cheers!

The views expressed in this post are my own and do not necessarily reflect the views of Oracle.

Podcast: Combating Complexity: Fad, Fashion, and Failure in Software Development

Wed, 2018-03-21 18:40

There is little in our lives that does not rely on software. That has been the reality for quite some time, and it will be even more true as self-driving cars and similar technologies become an even greater part of our lives. But as our reliance on software grows, so does the potential for disaster as software becomes increasingly complex.

In September 2017 The Atlantic featured “The Coming Software Apocalypse,” an article by James Somers that offers a fascinating and sobering look at how rampant code complexity has caused massive failures in critical software systems, like the 2014 incident that left the entire state of Washington without 9-1-1 emergency call-in services until the problem was traced to software running on a server in Colorado.

The article suggests that the core of the complexity problem is that code is too hard to think about. When and how did this happen?  

“You have to talk about the problem domain,” says Chris Newcombe, “because there are areas where code clearly works fine.” Newcombe, one of the people interviewed for the Atlantic article, is an expert on combating complexity, and since 2014 has been an architect on Oracle’s Bare Metal IaaS team.

“I used to work in video games,” Newcombe says. “There is lots of complex code in video games and most of them work fine. But if you're talking about control systems, with significant concurrency or affecting real-world equipment, like cars and planes and rockets or large-scale distribution systems, then we still have a way to go to solve the problem of true reliability. I think it's problem-domain specific. I don't think code is necessarily the problem. The problem is complexity, particularly concurrency and partial failure modes and side effects in the real world.”
 
Java Champion Adam Bien believes that in constrained environments, such as the software found in automobiles, “it's more or less a state machine which could or should be coded differently. So it really depends on the focus or the context. I would say that in enterprise software, code works well. The problem I see is more if you get newer ideas -- how to reshape the existing code quickly. But also coding is not just about code. Whether you write code or draw diagrams, the complexity will remain the same.”

Java Champion and microservices expert Chris Richardson agrees that “if you work hard enough, you can deliver software that actually works.” But he questions what is actually meant when software is described as “working well.”

“How successful are large software developments?” Richardson asks. “Do they meet requirements on time? Obviously that's a complex issue around project management and people. But what's the success rate?”

Richardson also points out that concerns about complexity are nothing new. “If you go back and look at the literature 30 or 40 years ago, people were concerned about software complexity then.”

The Atlantic article mentions that in most cases software does exactly what it was designed to do, an indication that it's not really a failure of the software as much as of the design of the software.

According to Developer Champion and Oracle ACE Director Lucas Jellema, “The complexity may not be in the software, but in the translation of the real-world problem or requirement into software. That starts not with coding, but with communication from one human being to another, from business end user to analyst to developer and maybe even some layers in between. That's where it usually goes wrong. In the end the software will do what the programmer told it to do, but that might not be what the business user or the real world requires it to do.”

Communication between stakeholders is only one aspect of the battle to reduce software complexity, and it’s just one issue among many that Chris Newcombe, Chris Richardson, Adam Bien, and Lucas Jellema discuss in this podcast. So settle in and listen.

This program was recorded on November 22, 2017.

The Panelists

(In alphabetical order)

Adam Bien
Java Champion
Oracle ACE Director
Twitter

Lucas Jellema
CTO, AMIS Services
Oracle Developer Champion
Oracle ACE Director
Twitter | LinkedIn

Chris Newcombe
Architect, Oracle Bare Metal IaaS Team
LinkedIn

Chris Richardson
Founder, Eventuate, Inc.
Java Champion
Twitter | LinkedIn

Additional Resources Coming Soon
  • AI Beyond Chatbots: How is AI being applied to modern applications?
  • Microservices, API Management, and Modern Enterprise Software Architecture

Running Spring Boot in a Docker container on OpenJDK, Oracle JDK, Zulu on Alpine Linux, Oracle ...

Sat, 2018-03-17 13:30

(Originally published on  javaoraclesoa.blogspot.com)

Spring Boot is great for running inside a Docker container. Spring Boot applications ‘just run’. A Spring Boot application has an embedded servlet engine making it independent of application servers. There is a Spring Boot Maven plugin available to easily create a JAR file which contains all required dependencies. This JAR file can be run with a single command-line like ‘java -jar SpringBootApp.jar’. For running it in a Docker container, you only require a base OS and a JDK. In this blog post I’ll give examples on how to get started with different OSs and different JDKs in Docker. I’ll finish with an example on how to build a Docker image with a Spring Boot application in it.

Getting started with Docker Installing Docker

Of course you need a Docker installation. I won't go into detail here, but the basic commands are:

Oracle Linux 7

yum-config-manager --enable ol7_addons
yum-config-manager --enable ol7_optional_latest
yum install docker-engine
systemctl start docker
systemctl enable docker

Ubuntu

curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
apt-get update
apt-get install docker-ce

You can add a user to the docker group or give it sudo docker rights. Both effectively allow the user to become root on the host OS though.
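
For reference, adding a user to the docker group typically looks like this (log out and back in afterwards so the group membership takes effect):

sudo usermod -aG docker yourusername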

Running a Docker container

See below for commands you can execute to start containers in the foreground or background and access them. For ‘mycontainer’ in the below examples, you can fill in a name you like. The name of the image can be found in the description further below. This can be, for example, container-registry.oracle.com/os/oraclelinux:7 for an Oracle Linux 7 image when using the Oracle Container Registry, or store/oracle/serverjre:8 for a JRE image from the Docker Store.

If you are using the Oracle Container Registry (for example to obtain Oracle JDK or Oracle Linux docker images) you first need to

  • go to container-registry.oracle.com and enable your OTN account to be used
  • go to the product you want to use and accept the license agreement
  • do docker login -u username -p password container-registry.oracle.com

If you are using the Docker Store, you first need to

  • go to store.docker.com and create an account
  • find the image you want to use. Click Get Content and accept the license agreement
  • do docker login -u username -p password

To start a container in the foreground

docker run --name mycontainer -it imagename /bin/sh

To start a container in the background

docker run --name mycontainer -d imagename tail -f /dev/null

To ‘enter’ a running container:

docker exec -it mycontainer /bin/sh

/bin/sh exists in Alpine Linux, Oracle Linux and Ubuntu. For Oracle Linux and Ubuntu you can also use /bin/bash. ‘tail -f /dev/null’ is used to start a ‘bare OS’ container with no other running processes to keep it running. A suggestion from here.

Cleaning up
Good to know is how to clean up your images/containers after having played around with them. See here.

#!/bin/bash
# Delete all containers
docker rm $(docker ps -a -q)
# Delete all images
docker rmi $(docker images -q)

Options for JDK

Of course there are more options for running JDKs in Docker containers. These are just some of the more commonly used.

Oracle JDK on Oracle Linux

When you’re running in the Oracle Cloud, you have probably noticed the OS running beneath it is often Oracle Linux (and currently also often version 7.x). When for example running Application Container Cloud Service, it uses the Oracle JDK. If you want to run in a similar environment locally, you can use Docker images. Good to know is that the Oracle Server JRE contains more than a regular JRE but less than a complete JDK. Oracle recommends using the Server JRE whenever possible instead of the JDK since the Server JRE has a smaller attack surface. Read more here. For questions about the roadmap and support, read the following blog article.

store.docker.com

The steps to obtain Docker images for Oracle JDK / Oracle Linux from store.docker.com are as follows:

Create an account on store.docker.com. Go to https://store.docker.com/images/oracle-serverjre-8. Click Get Content. Accept the agreement and you're ready to log in, pull, and run.

#use the store.docker.com username and password
docker login -u yourusername -p yourpassword
docker pull store/oracle/serverjre:8
#To start in the foreground:
docker run --name jre8 -it store/oracle/serverjre:8 /bin/bash

container-registry.oracle.com

You can use the image from the container registry. First, same as for just running the OS, enable your OTN account and login.

#use your OTN username and password
docker login -u yourusername -p yourpassword container-registry.oracle.com
docker pull container-registry.oracle.com/java/serverjre:8
#To start in the foreground:
docker run --name jre8 -it container-registry.oracle.com/java/serverjre:8 /bin/bash

OpenJDK on Alpine Linux

When running Docker containers, you want them to be as small as possible to allow quick starting, stopping, downloading, scaling, etc. Alpine Linux is a suitable Linux distribution for small containers and is used quite often. There can be some threading challenges with Alpine Linux though. See for example here and here. Running OpenJDK on Alpine Linux in a Docker container is easier than you might think. You don't need a specific account for this, and no login is required. When you pull openjdk:8, you will get a Debian 9 image. In order to run on Alpine Linux instead, you can do

docker pull openjdk:8-jdk-alpine

Next you can do

docker run --name openjdk8 -it openjdk:8-jdk-alpine /bin/sh

Zulu on Ubuntu Linux

 

You can also consider OpenJDK-based JDKs like Azul's Zulu. This works mostly the same; only the image name is different, something like ‘azul/zulu-openjdk:8’. The Zulu images are Ubuntu-based.

Do it yourself

Of course you can also create your own image with a JDK. See for example here. This requires you to download the JDK yourself and build the image, but that is quite easy.
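
As a rough sketch only (not the Dockerfile from the linked example), a do-it-yourself Server JRE image on Oracle Linux could look something like the below; the archive and directory names are placeholders that depend on the exact version you download:

FROM oraclelinux:7-slim
# assumption: the Server JRE tar.gz was downloaded manually into the Docker build context
ADD server-jre-8uXXX-linux-x64.tar.gz /usr/lib/jvm/
ENV JAVA_HOME=/usr/lib/jvm/jdk1.8.0_XXX
ENV PATH=$JAVA_HOME/bin:$PATH
CMD ["java", "-version"]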

Spring Boot in a Docker container

Creating a container with a Spring Boot application based on an image which already has a JDK in it is easy. This is described here. You can create a simple Dockerfile like:

FROM openjdk:8-jdk-alpine
VOLUME /tmp
ARG JAR_FILE
ADD ${JAR_FILE} app.jar
ENTRYPOINT ["java","-Djava.security.egd=file:/dev/./urandom","-jar","/app.jar"]

The FROM image can also be an Oracle JDK or Zulu JDK image as mentioned above.
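
If you would rather not use the Maven plugin described next, the image can also be built and run by hand; a minimal sketch, assuming the Spring Boot JAR ends up in target/accs-cache-sample.jar (matching the finalName used below) and the application listens on the default port 8080:

# build the image, passing the JAR location expected by the ARG in the Dockerfile
docker build --build-arg JAR_FILE=target/accs-cache-sample.jar -t yourusername/accs-cache-sample .
# run it and expose the application port
docker run -p 8080:8080 yourusername/accs-cache-sample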

Then add a dependency on com.spotify.dockerfile-maven-plugin and some configuration to your pom.xml file to automate building the Docker image once you have the Spring Boot JAR file. See here for a complete example pom.xml and Dockerfile. The relevant part of the pom.xml file is below.

<build>
  <finalName>accs-cache-sample</finalName>
  <plugins>
    <plugin>
      <groupId>org.springframework.boot</groupId>
      <artifactId>spring-boot-maven-plugin</artifactId>
    </plugin>
    <plugin>
      <groupId>com.spotify</groupId>
      <artifactId>dockerfile-maven-plugin</artifactId>
      <version>1.3.6</version>
      <configuration>
        <repository>${docker.image.prefix}/${project.artifactId}</repository>
        <buildArgs>
          <JAR_FILE>target/${project.build.finalName}.jar</JAR_FILE>
        </buildArgs>
      </configuration>
    </plugin>
  </plugins>
</build>

To actually build the Docker image, which allows using it locally, you can do:

mvn install dockerfile:build

If you want to distribute it (allow others to easily pull and run it), you can push it with

mvn install dockerfile:push

This will of course only work if you’re logged in as maartensmeets and only for Docker hub (for this example). The below screenshot is after having pushed the image to hub.docker.com. You can find it there since it is public.

#Running the container
docker run -t maartensmeets/accs-cache-sample:latest

DevOps Meets Monitoring and Analytics

Mon, 2018-03-05 11:33

Much has been said about the role new technologies play in supporting DevOps, like automation and machine learning. My colleague Padmini Murthy wrote “DevOps Meets Next Gen Technologies”. In that post, Padmini does a great job discussing the DevOps ecosystem, partly based on a recent DevOps.com survey.

New technologies are rapidly shaping the way companies address Security and Application Performance Monitoring as well.

The same survey found that 57% of companies have already adopted modern monitoring, and another 36% are planning to adopt it in the next 12 months. The major reasons are enhanced security, increased IT efficiency, and faster troubleshooting, as shown in the chart below.

Figure 1: “DevOps Meets Next Gen Technologies” by Devops.com; benefits and adoption profile for security, performance, and analytics monitoring.

Traditional IT practices would suggest that application and security monitoring are oil and water; they don't mix. Those responsible for applications and those responsible for IT security think and work dramatically differently. Here too, the landscape is changing rapidly. The rapid proliferation of mobile and web applications built on modular microservices architectures or the like means monitoring needs to be agile and automatic. At the same time, security strategies need to go beyond a good firewall, intrusion detection, and identity management.

What have emerged are commonalities between security and performance monitoring.  Both are using real-time monitoring of transactions through the entire stack.  Both are using machine learning to translate massive amounts of data into IT and security insights in real time.  Both are correlating data across an entire transaction in real time to quickly find performance or security issues.  Both are summarizing normal and abnormal behavior automatically to identify what’s important to view and what’s normal behavior.

This is what’s behind the design for Oracle Management Cloud.  It unifies all the metadata and log files in the cloud.  It normalizes the information on a big data analytics platform and applies machine learning algorithms to deliver IT Ops and Security dashboards pre-built specifically for security and performance teams with insights in real time, and automatically.

Figure 2: Oracle Management Cloud provides an integrated platform for security and performance monitoring.

Here are some lessons we’ve learned working with customers on DevOps efforts:

  1. Stop denying there is a problem. Ops teams are constantly bombarded by “false Signal” alerts.  They want better intelligence sooner about performance and security anomalies and threats. Read this Profit Magazine article to learn more about what Oracle is doing to help customers defend against ever-changing security and performance threats.
  2. Eliminate operational information silos so you eliminate finger pointing. Put your operational data (security, performance, configuration, etc.) in one place, and let today’s machine-learning-powered tools do the heavy lifting for you. You will reduce finger pointing, troubleshoot faster, and you may be able to eliminate the “war room” entirely. Watch this video to hear what one Oracle customer says about the power of machine learning.

Figure 3: Why Machine Learning is a key enabler for cloud-based monitoring.

  3. Monitor what (really) matters – your actual end-users. Over 70% of IT issues are end-user complaints. This can hinder the Ops team's ability to respond to important issues. Look at this infographic highlighting the value of application and end-user monitoring. Figure 4 pinpoints why traditional monitoring tools miss the mark when it comes to delivering value.

Figure 4: End-user and application performance monitoring are key to a successful monitoring strategy.

  4. It's in the logs! Logs are everywhere, but most organizations don't use them because they are overwhelmed with the amount of data involved. Next-generation management clouds that are designed to ingest big data at enterprise-scale can cope with today's log data volume and velocity. Check out this infographic for more details on Oracle Management Cloud's Log Analytics service.

Figure 5: Key challenges with using logs to troubleshoot issues.

  5. Planning is an everyday activity. Leverage analytical capabilities against your unified store of operational information to answer a variety of forward-looking questions to improve security posture, application performance and resource utilization. If you've followed my advice in steps 1 through 4 above, you have all the data you need already available. Now it's time to use it.

Further resources on Oracle Management Cloud:

Three Quick Tips API Platform CS - Gateway Installation (Part 2)

Wed, 2018-02-28 02:00

This is Part 2 of the blog series (the first part can be accessed here). The aim of this post is to provide useful tips that will help with the installation of the on-premises gateway for Oracle API Platform Cloud Service. If you want to know more about the product, you can refer here.

The following tips are based on some of the scenarios we have observed in production.

Essentially, to get past the entropy problem, you need to do the following (for Linux); a consolidated sketch follows the steps below:

  •    check the current entropy count by executing:

   cat /proc/sys/kernel/random/entropy_avail

  • If the entropy is low you can do any of the following:    
  •    export CONFIG_JVM_ARGS=-Djava.security.egd=file:/dev/./urandom 
  •   Install the rngd tool (if not present) and execute:

   rngd -r /dev/urandom -o /dev/random -b   

  •   You can now proceed with the gateway domain creation or domain startup.
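
Putting the above together, a consolidated sketch for a Linux gateway host could look like this (the 1000 threshold is an arbitrary assumption; use whatever your environment considers low):

# check the available entropy and apply the workarounds only when it is low
ENTROPY=$(cat /proc/sys/kernel/random/entropy_avail)
if [ "$ENTROPY" -lt 1000 ]; then
  export CONFIG_JVM_ARGS=-Djava.security.egd=file:/dev/./urandom
  rngd -r /dev/urandom -o /dev/random -b
fi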

 

  • It is possible to generate the gateway properties from the API Portal UI. Please try to leverage this functionality and download the generated property file onto the on-premises machine. This will significantly reduce the effort of hand-crafting the properties file, which is critical for the gateway installation process. Please refer here for more details.

 

  • If you encounter scenarios where failures in the "configure" action look something like:

64040: Specified template does not exist or is not a file: "/d01/apipcs/app/oracle/gateway/run/build/apiplatform_gateway-services_template.jar".
64040: Provide a valid template location.
at com.oracle.cie.domain.script.jython.CommandExceptionHandler.handleException(CommandExceptionHandler.java:56)
at com.oracle.cie.domain.script.jython.WLScriptContext.handleException(WLScriptContext.java:2279)
at com.oracle.cie.domain.script.jython.WLScriptContext.addTemplate(WLScriptContext.java:793)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)    

The above is just an example, but any file-not-found kind of error during the "configure" action is an indication that the previous step (the "install" action) did not complete successfully. Please refer to "gatewayInstall.log" and "main.log", as these will point to why the install had errors even if the install process appeared to complete.

So that is all for today. We will be back with more tips soon. Happy API management with Oracle API Platform Cloud Service.


Questions on DevOps, Graal, APIs, Git? Champs Have Answers at Oracle Code Los Angeles

Fri, 2018-02-23 06:16

If you had technical questions about API design, for instance, or about date types in relational databases, or about DevOps bottlenecks, or about using Graal or Git,  you’d look for answers from someone with an abundance of relevant expertise, right? A champ in that particular topic.

As it happens, if you do indeed have questions on any of those topics, the Oracle Code event in Los Angeles on February 27 represents a unique opportunity for you to connect with a Developer Champion who can set you straight. Register now for Oracle Code Los Angeles, and put these sessions by Oracle Developer Champions on your schedule.

A Research Study Into DevOps Bottlenecks
Presented by: Baruch Sadogursky, Developer Advocate, JFrog
1:10 p.m.  - 1:55 p.m.  San Jose Room

Think DevOps is just so much hype? Guess again! “DevOps is among the none-hypish methodologies that really help,” said Developer Champion Baruch Sadogursky in a recent podcast. “It’s here to stay because it is another step toward faster and better integration between stakeholders in the delivery process.” But taking that step trips up some organizations. In this session Baruch dives deep into the results of a poll of Fortune 500 software delivery leaders to determine what’s causing the bottlenecks that are impeding their DevOps progress, and to find solutions that will set them back on the path.

Graal: How to Use the New JVM JIT Compiler in Real Life
by Christian Thalinger, Staff Software Engineer, Twitter, Inc.
2:10 p.m. - 2:55 p.m. San Francisco Room

Is Graal on your radar? It should be. It’s a new JVM JIT compiler that could become the default HotSpot JIT compiler, according to Developer Champion Christian Thalinger. But that kind of transition isn’t automatic. “One of the biggest mistakes people make when benchmarking Graal is that they assume they could use the same metrics as for C1 and C2” explains Christian. “Some people just measure overall time spent in GC and that just doesn't work.  I've seen the same being done to overall time spent for JIT compilations.  You can't do that." What can you do with Graal? Christian’s session will look at how it works, and what it can do for you.

Tackling Time Troubles - About Date Types in Relational Databases
by Bjoern Rost, Principal Consultant, The Pythian Group Inc
2:10 p.m. - 2:55 p.m. Sacramento Room

The thing about time is that it’s always passing, and there never seems to be enough of it. Things get even more complicated when it comes to dealing with time-related data in databases. While your mobile phone might easily handle leap years, time zones, or seasonal time changes, those issues can cause runtime errors, SQL code headaches, and other database problems you’d rather avoid. In this session Developer Champion Bjoern Rost will discuss best practices that will help you dodge some of the time data issues that can increase your aspirin intake. Put this session on your schedule and learn how to have an easier time when dealing with time data.

Best Practices for API Design Using Oracle APIARY
by Rolando Carrasco, Fusion Middleware Director, S&P Solutions
Leonardo Gonzalez Cruz, OFMW Architect, S&P Solutions
 3:05 p.m.  - 3:50 p.m. San Jose Room

Designing and developing APIs is an important part of modern development. But if you’re not applying good design principles, you’re headed for trouble. “We are living in an API world, and you cannot play in this game with poor design principles,” says Developer Champion Rolando Carrasco. In this session, Rolando and co-presenter Leonardo Gonzalez Cruz will define what an API is, examine what distinguishes a good API, discuss the design principles that are necessary to build stable, scalable, secure APIs, and also look at some of the available tools. Whether you’re an API producer or an API consumer, you’ll want to take in this session.

Git it! A Primer To The Best Version Control System
by Bjoern Rost, Principal Consultant, The Pythian Group Inc
Stewart Bryson, owner and co-founder, Red Pill Analytics
4:20 p.m. - 5:05 p.m.  San Francisco Room

Git, the open source version control system, already has a substantial following. But whether you count yourself among those fans, or if you’re new and ready to get on board, this session by Bjoern Rost and Oracle ACE Director Stewart Bryson will walk you through setting up your own Git repository, and discuss cloning, syncing, using and merging branches, integrating with CI/CD systems, and other hot Git tips. Don’t miss this opportunity to sharpen your Git skills.

Of course, the sessions mentioned above are just 5 among 31 sessions, labs, and keynotes that are part of the overall Oracle Code Los Angeles agenda.

Don’t miss Oracle Code Los Angeles

Tuesday, February 27, 2018
7:30am - 6:00pm
The Westin Bonaventure Hotel and Suites
404 S Figueroa St.
Los Angeles, CA  90071
Register Now!

Learn about other events in the Oracle Code 2018 series
 


Podcast: DevOps in the Real World: Culture, Tools, Adoption

Tue, 2018-02-20 17:38

Among technology trends DevOps is certainly generating its share of heat. But is that heat actually driving adoption? “I’m going to give the answer everyone hates: It depends,” says Nicole Forsgren, founder and CEO of DevOps Research and Assessment LLC. “It depends on where each team is, on where the organization is. I talk to people all over the industry, and I work with organizations all over the industry, and everyone is at a very different place.”

Some of the organizations Nicole has spoken with are pushing the DevOps envelope. “They’re almost squeezing blood out of a stone, finding ways to optimize things that have been optimized at the very edge. They’re doing things that most people can’t even comprehend.” Other organizations aren't feeling it. "There’s no DevOps,” says Nicole. “DevOps is nowhere near on their radar.”

Some organizations that had figured out DevOps stumbled a bit when the word came down to move everything to the cloud, explains Shay Shmeltzer, product management director for Oracle Cloud Development tools. “A lot of them need to rethink how they’re doing stuff, because cloud actually simplifies DevOps to some degree. It makes the provisioning of environments and getting stuff up and down much easier and quicker in many cases.”

As Nicole explains, “DevOps is a technology transformation methodology that makes your move into the cloud much more sticky, much more successful, much more effective and efficient to deliver value, to realize cost-savings. You can get so much more out of the technology that you are using and leveraging, so that when you do move to the cloud, everything is so much better. It’s almost a chicken and egg thing. You need so much of it together.”

However, that value isn’t always apparent to everyone. Kelly Shortridge, product manager at SecurityScorecard, observes that some security stakeholders, “feel they don’t have a place in the DevOps movement.” Some security teams have a sense that configuration management will suffice. “Then they realize that they can’t just port existing security solutions or existing security methodologies directly into agile development processes,” explains Kelly. “You have the opportunity to start influencing change earlier in the cycle, which I think was the hype. Now we’re at the Trough of Disillusionment, where people are discovering that it’s actually very hard to integrate properly, and you can’t just rely on technology for this shift. There also has to be a cultural shift, as far as security, and how they think about their interactions with engineers.” In that context Kelly sees security teams wrestling with how to interact within the organization.

But the value of DevOps is not lost on other roles and disciplines. It depends on how you slice it, explains Leonid Igolnik, member and angel investor with Sand Hill Angels, and founding investor, advisor, and managing partner with Batchery. He observes that DevOps progress varies across different industry subsets and different disciplines, “whether it’s testing, development, or security.”

“Overall, I think we’re reaching the Slope of Enlightenment, and some of those slices are reaching the Plateau of Productivity,” Leonid says.

Alena Prokharchyk began her journey into DevOps three years ago when she started her job as principal software engineer at Rancher Labs, whose principal product targets DevOps. “That actually forced me to look deeper into DevOps culture,” she says. “Before that I didn’t realize that such problems existed to this extent. That helped me understand certain aspects of the problem. Within the company, the key for me was communication with the DevOps team. Because if I’m going to develop something for DevOps, I have to understand the problems.”

If you’re after a better understanding of challenges and opportunities DevOps represents, you’ll want to check out this podcast, featuring more insight on adoption, cultural change, tools and other DevOps aspects from this collection of experts.

The Panelists

(Listed alphabetically)

Nicole Forsgren
Founder and CEO, DevOps Research and Assessment LLC
Twitter | LinkedIn

Leonid Igolnik
Member and Angel Investor, Sand Hill Angels
Founding Investor, Advisor, Managing Partner, Batchery
Twitter | LinkedIn

Alena Prokharchyk
Principal Software Engineer, Rancher Labs
Twitter | LinkedIn

Baruch Sadogursky
Developer Advocate, JFrog
Twitter | LinkedIn

Shay Shmeltzer
Director of Product Management, Oracle Cloud Development Tools
Twitter | LinkedIn

Kelly Shortridge
Product Manager, SecurityScorecard
Twitter | LinkedIn

Additional Resources Coming Soon
  • Combating Complexity
    An article in the September 2017 edition of the Atlantic warned of The Coming Software Apocalypse. Oracle's Chris Newcombe was interviewed for that article. In this podcast Chris joins Chris Richardson, Adam Bien, and Lucas Jellema to discuss heading off catastophic software failures.
  • AI Beyond Chatbots
    How is Artificial Intelligence being applied to modern applications? What are the options and capabilities? What patterns are emerging in the application of AI? A panel of experts provides the answers to these and other questions.
Subscribe

Never miss an episode! The Oracle Developer Community Podcast is available via:


Oracle Code is back – Bigger and Better!

Fri, 2018-02-16 16:24

2018 is yet another great year for developers! Oracle’s awesome global developer conference series, Oracle Code, is back – and it’s bigger and better!

In 2017 Oracle ran the first series of Oracle Code developer conferences. In over 20 cities across the globe, the series attracted more than 10,000 developers, providing them with the opportunity to learn new skills, network with peers and take home some great memories. Following that huge success, Oracle is about to run another 14 events across the globe, kicking off in late February in Los Angeles. The great thing about Oracle Code is that attendance and speaking at the conferences are completely free of charge, showing Oracle holding true to its commitment to the developer communities out there. Across four continents you will get to hear about everything that is hot in the industry: Blockchain, Containers, Microservices, API Design, Machine Learning, AI, Mobile, Chatbots, Databases, Low Code Development, trendy programming languages, CI/CD, DevOps and much, much more will all be right at the center of Oracle Code.

Throughout the one-day events, which provide space for 500 people each, developers can share their experience, participate in hands-on labs, talk to subject matter experts and, most importantly, have a lot of fun in the Oracle Code Lounge.

IoT Cloud Brewed Beer

Got a few minutes to try the IoT Cloud Brewed Beer from a local micro brewery? Extend manufacturing processes and logistics operations quickly using data from connected devices. Tech behind the brew: IoT Production Monitoring, IoT Asset Monitoring, Big Data, Event Hub, Oracle JET.


3D Builder Playground

Create your own sculptures and furniture with the 3D printer and help complete the furniture created using Java constructive geometry library. The Oracle technology used is Application Container Cloud running Visual IDE and Java SE running JSCG library.

Oracle Zip Labs Challenge

Want some bragging rights and to win prizes at the same time? Sign up for a 15-minute lab on Oracle Cloud content and see your name on the leaderboard as the person to beat in Oracle Zip Labs Challenge.

IoT Workshop

Interact and exchange ideas with other attendees at the IoT Workshop spaces. Get your own Wi-Fi microcontroller and connect to Oracle IoT Cloud Service. Oracle Developer Community is partnering with AppsLab and the Oracle Applications Cloud User Experience emerging technologies team to make these workshops happen.

Robots Rule with Cloud Chatbot Robot

Ask NAO the robot to do Tai Chi, or ask it "who brewed the beers?" So how does NAO do what it does? It uses the Intelligent Bot API on Oracle Mobile Cloud Service to understand your command and responds by speaking back to you.

Dev Live

The Oracle Code crew also thought of the folks who aren’t that lucky to participate at Oracle Code in person: Dev Live are live interviews happening at Oracle Code that are streamed online across the globe so that everyone can watch developers and community members share their experiences.

Register NOW!

Register now for an Oracle Code event near you at: https://developer.oracle.com/code

Have something interesting that you did and want to share it with the world? Submit a proposal in the Call for Papers at: https://developer.oracle.com/code/cfp





See you next at Oracle Code!

Announcing Packer Builder for Oracle Cloud Infrastructure Classic

Wed, 2018-02-14 10:30

HashiCorp Packer 1.2.0 adds native support for building images on Oracle Cloud Infrastructure Classic.

Packer is an open source tool for creating machine images across multiple platforms from a single source configuration. With the new oracle-classic builder, Packer can now build new application images directly on Oracle Classic Compute, similar to the oracle-oci builder. New images can be created from an Oracle-provided base OS image, an existing private image, or an image that has been installed from the Oracle Cloud Marketplace.

Note: Packer can also create Oracle Cloud Infrastructure Classic compatible machine images using the VirtualBox builder, and this approach remains useful when building new base OS images from ISOs; see Creating Oracle Compute Cloud Virtual Machine Images using Packer.

oracle-classic Builder Example

This example creates a new image with Redis installed, using an existing Ubuntu image as the base OS.

Create a packer configuration file redis.json
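
A rough sketch of what redis.json might contain is below (the full configuration in the original post may differ). Treat the builder field names as assumptions to verify against the Packer oracle-classic builder documentation, and all values as placeholders for your own environment:

{
  "builders": [{
    "type": "oracle-classic",
    "username": "your_username",
    "password": "your_password",
    "identity_domain": "your_identity_domain",
    "api_endpoint": "https://api-<site>.compute.<region>.oraclecloud.com/",
    "source_image_list": "<existing Ubuntu image list>",
    "image_name": "redis-demo",
    "dest_image_list": "redis-demo",
    "shape": "oc3",
    "ssh_username": "ubuntu"
  }],
  "provisioners": [{
    "type": "shell",
    "inline": [
      "sudo apt-get update",
      "sudo apt-get install -y redis-server"
    ]
  }]
}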

Now run Packer to build the image
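
With the configuration in place, the build is the standard Packer invocation:

packer build redis.json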

After Packer completes, the new image is available in the Compute Classic console for launching new instances.

See also

For building Oracle Cloud Infrastructure images see:

Three Quick Tips API Platform CS - Gateway Installation (Part 1)

Tue, 2018-02-13 16:00

This blog post assumes some prior knowledge of API Platform Cloud Service and pertains to the on-premises gateway installation steps. Here we list 3 useful tips (applicable for 18.1.3+), arranged in no particular order:

  • Before installing the gateway, make sure you have the correct values for "listenIpAddress" and "publishAddress". This can be verified with the following checklist (Linux only):
    • Does the command "hostname -f" return a valid value?
    • Does the command "ifconfig" list the IP addresses properly?
    • Do you have additional firewall/network policies that may prevent communication with the management tier?
    • Do you authoritatively know the internal and public IP addresses to be used for the gateway node?

            If you do not know the answers to any of the questions, please contact your network administrator.

           If you see issues with the gateway server not starting up properly, incorrect values of "listenIpAddress" and "publishAddress" could be the cause.

  • Before running the "creategateway" action (or any other action involving the "creategateway" like "create-join" for example), do make sure that the management tier is accessible. You can use something like:
    • wget "<http/https>:<managmentportal_host>:<management_portal_port>/apiplatform"  
    • curl "<http/https>:<managmentportal_host>:<management_portal_port>/apiplatform"

           If the above steps fail, then "creategateway" will also not work, so the questions to ask are:

  1. Do we need a proxy?
  2. If we have already specified a proxy, is it the correct one?
  3. If we need a proxy, have we set the "managementServiceConnectionProxy" property in gateway-props.json? (A sketch of this is shown below.)

Moreover, it is better to set http_proxy/https_proxy to the correct proxy, if proxies are applicable.
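
For illustration, the proxy setting sits in gateway-props.json alongside the other installer properties; only the property name below comes from the tip above, while the value and the surrounding entries are placeholders:

{
  ...
  "managementServiceConnectionProxy": "<proxy_host>:<proxy_port>",
  ...
}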

  • Know your log locations; please refer to the following list (a few tail commands are sketched after it):
    • For troubleshooting "install" or "configure" actions, refer to the <install_dir>/logs directory.
    • For troubleshooting "start" or "stop" actions, refer to <install_dir>/domain/<gateway_name>/(start*.out|stop*.out).
    • For troubleshooting "create-join"/"join" actions, refer to the <install_dir>/logs directory.
    • To troubleshoot issues post installation (i.e. after the physical node has joined the gateway), refer to the <install_dir>/domain/<gateway_name>/apics/logs directory.
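
As a small convenience while reproducing an issue, the locations above can simply be tailed (substitute your own install directory and gateway name; main.log is the install-time log mentioned in Part 2):

# install / configure / create-join related logs
tail -f <install_dir>/logs/main.log
# post-installation (after the node has joined the gateway)
tail -f <install_dir>/domain/<gateway_name>/apics/logs/*.log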

We will try to post more tips in the coming weeks, so stay tuned, and happy API management.
