OTN TechBlog

Oracle Blogs

Kubernetes, Serverless, and Federation – Oracle at KubeCon 2017

Wed, 2017-12-06 09:00

Today at the KubeCon + CloudNativeCon 2017 conference in Austin, TX, the Oracle Container Native Application Development team open sourced two new Kubernetes-related projects, which we are also demoing here at the show.  First, we have open sourced an Fn Installer for Kubernetes. Fn is an open source serverless project announced this October at Oracle OpenWorld.  This Helm Chart for Fn enables organizations to easily install and run Fn on any Kubernetes deployment, including on top of the new Oracle managed Kubernetes service, Oracle Container Engine (OCE).

Second, we have open sourced Global Multi-Cluster Management, a new set of distributed cluster management features for Kubernetes federation that intelligently manages highly distributed applications – “planet-scale,” if you will – that are multi-region, hybrid, or even multi-cloud.  In a federated world, many operational challenges emerge – imagine how you would manage and auto-scale global applications or deploy spot clusters on-demand.  For more info, make sure to check out the Multi-Cluster Ops in a Hybrid World session by Kire Filipovski and Vitaliy Zinchenko on Thursday, December 7 at 3:50pm!

Pushing Ahead: Keep it Open, Integrated and Enterprise-Grade

Customers are seeking an open, cloud-neutral, and community-driven container-native technology stack that avoids cloud lock-in and allows them to run the same stack in the public cloud as they run locally.  This was our vision when we launched the Container Native Application Development Platform at Oracle OpenWorld 2017 in October.


Since then, Oracle Container Engine was in the first wave of Certified Kubernetes platforms announced in November 2017, helping developers and dev teams be confident that there is consistency and portability amongst products and implementations.

So, the community is now looking for the same assurances from their serverless technology choice: make it open and built in a consistent way to match the rest of their cloud native stack.  In other words, make it open and on top of Kubernetes.  And if the promise of an open-source based solution is to avoid cloud lock-in, the next logical request is to make it easy for DevOps teams to operate across clouds or in a hybrid mode.  This lines up with the three major “asks” we hear from customers, development teams and enterprises: their container native platform must be open, integrated, and enterprise-grade:

  • Open: Open on Open

Both the Fn project and Global Multi-Cluster Management are cloud neutral and open source. Doubling down on open, the Fn Helm Chart enables the open serverless project (Fn) to run on the leading open container orchestration platform (Kubernetes).  (Sure beats closed on closed!)  The Helm Chart deploys a fully functioning Fn cluster (github.com/fnproject/fn) on a Kubernetes cluster using the Helm package manager (helm.sh).
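As a rough sketch of what the install flow looks like (the repository path, chart name, and release name below are illustrative; check the Fn project on GitHub for the current instructions):

```shell
# Fetch the Fn Helm Chart and install it into the current Kubernetes context.
# Paths and names here are illustrative, not authoritative.
git clone https://github.com/fnproject/fn-helm.git && cd fn-helm
helm dep build fn                # pull in chart dependencies
helm install --name my-fn fn     # deploy an Fn cluster onto Kubernetes
```

(`--name` is Helm 2 syntax, current at the time of this post.)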

  • Integrated: Coherent and Connected

Delivering on the promise of an integrated platform, both the Fn Installer Helm Charts and Global Multi-Cluster Management are built to run on top of Kubernetes and thus integrate natively into Oracle’s Container Native Platform.  While having one of everything works in a Home Depot or Costco, it’s no way to create an integrated, effortless application developer experience – especially at scale across hundreds if not thousands of developers across an organization.  Both the Fn installer and Global Multi-Cluster Management will be available on top of OCE, our managed Kubernetes service.

  • Enterprise-Grade: HA, Secure, and Operationally Aware

With the ability to deploy Fn to an enterprise-grade Kubernetes service such as Oracle Container Engine you can run serverless on a highly-available and secure backend platform.  Furthermore, Global Multi-Cluster Management extends the enterprise platform to multiple clusters and clouds and delivers on the enterprise desire for better utilization and capacity management. 

Production operations for large distributed systems is hard enough in a single cloud or on-prem, but becomes even more complex with federated deployments – such as multiple clusters applied across multi-regions, hybrid (cloud/on-prem), and multi-cloud scenarios.  So, in these situations, DevOps teams need to deploy and auto-scale global applications or spot clusters on-demand and enable cloud migrations and hybrid scenarios.

With Great Power Comes Great Responsibility (and Complexity)

So, with the power of Kubernetes federation comes great responsibility and new complexity: how to apply application-aware decision logic to container native deployments.  Thorny business and operational issues can include cost, regional affinity, performance, quality of service, and compliance.  When DevOps teams are faced with managing multiple Kubernetes deployments, they can also struggle with multiple cluster profiles deployed on a mix of on-prem and public cloud environments.  These basic DevOps questions are hard to answer:

  • How many clusters should we operate?
    • Do we need separate clusters for each environment?
    • How much capacity do we allocate for each cluster?
  • Who will manage the lifecycle of the clusters?
  • Which cloud is best suited for my application?
  • How do we avoid cloud lock-in?
  • How do we deploy applications to multiple clusters?

The three open source components that make up Global Multi-Cluster Management are: (1) Navarkos (which means Admiral in Greek) enables a Kubernetes federated deployment to automatically manage multi-cluster infrastructure and manage clusters in response to federated Kubernetes application deployments; (2) Cluster Manager provides lifecycle management for Kubernetes clusters using a Kubernetes federation backend; and (3) the Federated Ingress Controller is an alternative implementation of federated ingress using external DNS.

Global Multi-Cluster Management works with Kubernetes federation to solve these problems in several ways:

  • Creates Kubernetes clusters on demand and deploys apps to them (only when there is a need)
    • Clusters can be run on any public or private cloud platform
    • Runs the application matching supply and demand
  • Manages cluster consistency and cluster life-cycle
    • Ingress, nodes, network
  • Controls multi-cloud application deployments
    • Control applications independently of cloud provider
  • Application-aware clusters
    • Clusters are offline when idle
    • Workloads can be auto-scaled
    • Provides the basis to help decide where apps run based on factors that could include cost, regional affinity, performance, quality of service and compliance

Global Multi-Cluster Management ensures that Kubernetes clusters are created, sized, and destroyed only when the requested application deployments need them.  If there are no application deployments, then there are no clusters. As DevOps teams deploy applications to a federated environment, Global Multi-Cluster Management makes intelligent decisions about whether clusters should be created, how many, and where.  At any point in time the live clusters are in tune with the current demand for applications, and the Kubernetes infrastructure becomes more application and operationally aware.

See Us at Booth G8, Join our Sessions, & Learn More at KubeCon + CloudNativeCon 2017

Come see us at Booth G8 and meet our engineers and contributors!  As Austin locals (the old StackEngine team), we’re excited to welcome you all (y’all) to Austin.  Make sure to join in to “Keep Cloud Native Weird.”  And be fixin’ to check out these sessions:


Announcing The New Open Source WebLogic Monitoring Exporter on GitHub

Mon, 2017-12-04 08:00

As it runs, WebLogic Server generates a rich set of metrics and runtime state information that provides detailed performance and diagnostic data about the servers, clusters, applications, and other resources that are running in a WebLogic domain. To give our users the best possible experience when running WebLogic domains in Docker/Kubernetes environments, we have developed the WebLogic Monitoring Exporter. This new tool exposes WebLogic Server metrics that can be read and collected by monitoring tools such as Prometheus, and displayed in Grafana.

We are also making the WebLogic Monitoring Exporter tool available as open source on GitHub, which will allow our community to contribute to this project and be part of enhancing it. 

The WebLogic Monitoring Exporter is implemented as a web application that is deployed to the WebLogic Server instances that are to be monitored. The exporter uses the WebLogic Server 12.2.1.x RESTful Management Interface for accessing runtime state and metrics.  With a single HTTP query, and no special setup, it provides an easy way to select the metrics that are monitored for a managed server.
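For example, assuming the exporter webapp is deployed at the context root wls-exporter on a managed server listening on port 8001 (both values are illustrative, not fixed), the current metrics can be pulled with one HTTP query:

```shell
# Fetch metrics in a Prometheus-readable exposition format
# (server address, port, and context root are assumptions)
curl -s http://managed-server-1:8001/wls-exporter/metrics
```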

For detailed information about the design and implementation of the WebLogic Monitoring Exporter, see Exporting Metrics from WebLogic Server.

Prometheus collects the metrics that have been scraped by the WebLogic Monitoring Exporter. By constructing Prometheus-defined queries, you can generate any data output you require to monitor and diagnose the servers, applications, and resources that are running in your WebLogic domain.

We can use Grafana to display these metrics in graphical form.  Connect Grafana to Prometheus, and create queries that take the metrics scraped by the WebLogic Monitoring Exporter and display them in dashboards.

For more information, see Using Prometheus and Grafana to Monitor WebLogic Server on Kubernetes.

Get Started!

Get started building and deploying the WebLogic Monitoring Exporter, set up Prometheus and Grafana, and monitor the metrics from the WebLogic Managed Servers in a domain/cluster running in Kubernetes.

  • Clone the source code for the WebLogic Monitoring Exporter from GitHub.
  • Build the WebLogic Monitoring Exporter following the steps in the README file.
  • Install both Prometheus and Grafana on the host where you are running Kubernetes.
  • Start a WebLogic on Kubernetes domain; find a sample in GitHub.
  • Deploy the WebLogic Monitoring Exporter to the cluster where the WebLogic Managed servers are running.
  • Follow the blog entry Using Prometheus and Grafana to Monitor WebLogic Server on Kubernetes, which steps you through collecting metrics in Prometheus and displaying them in Grafana dashboards.
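For the Prometheus setup step, a scrape job has to point at the exporter endpoint. A minimal prometheus.yml fragment might look like this (the job name, target address, context root, and credentials are all illustrative):

```yaml
scrape_configs:
  - job_name: 'weblogic'
    scrape_interval: 15s
    metrics_path: /wls-exporter/metrics    # exporter webapp context root (assumed)
    static_configs:
      - targets: ['managed-server-1:8001'] # WebLogic managed server address (assumed)
    basic_auth:                            # WebLogic credentials for the REST interface
      username: weblogic
      password: welcome1
```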

We welcome you to try this out. It's a good start to making the transition to open source monitoring tools.  We can work together to enhance it and take full advantage of its functionality in Docker/Kubernetes environments.


Updates to Oracle Cloud Infrastructure CLI

Fri, 2017-12-01 15:01

We’ve been hard at work the last few months making updates to our command line interface for Oracle Cloud Infrastructure, and wanted to take a minute to share some of the new functionality! The full list of new features and services can be found in our changelog on GitHub, and below are a few core features we wanted to call out specifically:


Default values for parameters

We know how tedious it can be to type out the same values again and again while using the CLI, so we have added the ability to specify default values for parameters. The example below shows a sample oci_cli_rc file which sets two defaults: one at a global level which will be applied to all operations with a --compartment-id parameter, and one for only ‘os’ (object storage) commands which will be applied to all ‘os’ commands with a --namespace parameter.

Content of ~/.oci/oci_cli_rc:

[DEFAULT]
# globally scoped default for all operations with a --compartment-id parameter
compartment-id=ocid1.tenancy.oc1..aaaaaaaaba3pv6wkcr4jqae5f15p2…
# default for --namespace scoped specifically to Object Storage commands
os.namespace=mynamespace

Example commands that no longer need explicit parameters:

oci compute instance list   # no --compartment-id needed
oci os bucket list          # no --compartment-id or --namespace needed


Command and parameter aliases

To help with specifying long command and parameter names, we have also added support for defining aliases. The example oci_cli_rc file below shows examples of defining aliases for commands and parameters:

Content of ~/.oci/oci_cli_rc:

[OCI_CLI_PARAM_ALIASES]
--ad=--availability-domain
-a=--availability-domain
--dn=--display-name

[OCI_CLI_COMMAND_ALIASES]
# This lets you use "ls" instead of "list" for any list command in the CLI (e.g. oci compute instance ls)
ls = list
# This lets you do "oci os object rm" rather than "oci os object delete"
rm = os.object.delete

Table output

JSON output is great for parsing but can be problematic when it comes to readability on the command line. To help with this we have added table output format which can be triggered for any operation by supplying --output table. This also makes it easier to use common tools like grep and awk on the CLI output to grab specific records from a table. See the section on JMESPath below to see how you can filter data to make your table output more concise.
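As a small illustration of that, here is awk pulling the name column out of saved table output; the regions.txt content below mimics what `oci iam region list --output table` prints (the rows are sample data):

```shell
# Save sample table output to a file (stands in for captured CLI output)
cat > regions.txt <<'EOF'
+-----+--------------------+
| key | name               |
+-----+--------------------+
| FRA | eu-frankfurt-1     |
| IAD | us-ashburn-1       |
| PHX | us-phoenix-1       |
+-----+--------------------+
EOF

# Split on '|', skip the header and border rows, strip padding, print the name column
awk -F'|' 'NR > 3 && NF > 2 { gsub(/ /, "", $3); if ($3 != "") print $3 }' regions.txt
```

This prints the three region names, one per line.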

Here is an example command and output:

oci iam region list --output table

+-----+--------------------+
| key | name               |
+-----+--------------------+
| FRA | eu-frankfurt-1     |
| IAD | us-ashburn-1       |
| PHX | us-phoenix-1       |
+-----+--------------------+

JMESPath queries

Often times a CLI operation will return more data than you are interested in. To help with filtering and querying data from CLI responses, we have added the --query option which allows running arbitrary JMESPath (http://jmespath.org/) queries on the CLI output before the data is returned.

For example, you may want to list all of the instances in your compartment but only see the display-name and lifecycle-state. You can do this with the following query:

# using the oci_cli_rc file from above so we don’t have to specify --compartment-id
oci compute instance list --query 'data[*].{"display-name":"display-name","lifecycle-state":"lifecycle-state"}'

This is especially convenient for use with table output so you can limit the output to a size that will fit in your terminal.

You can also define queries in your oci_cli_rc file and reference them by name so you don’t have to type out complex queries, for example:

Content of ~/.oci/oci_cli_rc:

[OCI_CLI_CANNED_QUERIES]
get_id_and_display_name_from_list=data[*].{id: id, "display-name": "display-name"}

Example command:

oci compute instance list -c $C --query query://get_id_and_display_name_from_list

To help you get started with some of these features, we have added the command 'oci setup oci-cli-rc' to generate a sample oci_cli_rc file with examples of canned queries, defaults, and parameter / command aliases.

JSON Input made easier

We have made a number of improvements to how our CLI works with complex parameters that require JSON input:

Reading JSON parameters from a file:

For any parameter marked as a "COMPLEX TYPE" you can now specify the value to be read from a file using the "file://" prefix instead of needing to format a JSON string on the command line. For example:

oci iam policy create --statements file://statements.json
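Here statements.json simply contains a JSON array of policy statements, for example (the group and compartment names are made up):

```json
[
  "Allow group Developers to use instances in compartment sandbox",
  "Allow group Developers to manage buckets in compartment sandbox"
]
```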

Generate JSON skeletons for single parameter

To help with specifying JSON input from a file we have added --generate-param-json-input to each command with complex parameters to enable generating a JSON template for a given input parameter. For example, if you are not sure of the format for the oci iam policy create --statements parameter you can issue the following command to generate a template:

oci iam policy create --generate-param-json-input statements

Output:

[
  "string",
  "string"
]

You can then fill out this template and specify it as the input to a create policy call like so:

oci iam policy create --statements file://statements.json

Generate JSON skeletons for full command input

We also support generating a JSON skeleton for the full command input. A common workflow with this parameter is to dump the full JSON skeleton to a file, edit the file with the input values you want, and then execute the command using that file as input. Here is an example:

# command to emit full JSON skeleton for command to a file input.json
oci os preauth-request create --generate-full-command-json-input > input.json

# view content of input.json and edit values
cat input.json
{
  "accessType": "ObjectRead|ObjectWrite|ObjectReadWrite|AnyObjectWrite",
  "bucketName": "string",
  "name": "string",
  "namespace": "string",
  "objectName": "string",
  "opcClientRequestId": "string",
  "timeExpires": "2017-01-01T00:00:00.000000+00:00"
}

# run create pre-authenticated request with the values specified from a file
oci os preauth-request create --from-json file://input.json

Windows auto-complete for PowerShell

We have now added tab completion for Windows PowerShell! Completion works on commands and parameters and can be enabled with the following command:

oci setup autocomplete

For more in-depth documentation on these features and more, check out our main CLI documentation page.


Announcing Mobile Authentication Plugin for Apache Cordova, and More!

Fri, 2017-12-01 11:19

We are excited to announce the open source release on GitHub of the cordova-plugin-oracle-idm-auth plugin for Apache Cordova, developed by the Oracle JavaScript Extension Toolkit (Oracle JET) team.

This plugin provides a simple JavaScript API for performing complex authentication. It is powered by a native SDK developed by the Oracle Access Management Mobile & Social (OAMMS) team, which has been tested and verified against Oracle Access Manager (OAM) and Oracle Identity Cloud Service (IDCS), and is compatible with other third-party authentication applications that support Basic Authentication, OAuth, Web SSO, or OpenID Connect.

Whilst the plugin is primarily intended for hybrid mobile applications created using Oracle JET, it can be used within any Cordova-based app targeting Android or iOS.

Most mobile authentication scenarios are complex, often requiring interaction with the native operating system for use cases such as:

  • Retrieving authentication tokens and cookies following successful authentication
  • Securely storing tokens and user credentials
  • Performing offline authentication and automatic login

Writing code to handle each of the required authentication scenarios, especially within hybrid mobile applications, is tedious and can be error-prone.

The cordova-plugin-oracle-idm-auth plugin significantly reduces the amount of coding required to successfully authenticate your users and handle various error cases, by abstracting the complex logic behind a set of simple JavaScript APIs, thus allowing you to focus on implementation of your mobile app’s functional aspects.

To add this plugin to your Oracle JET app:

$ ojet add plugin cordova-plugin-oracle-idm-auth


To know more about the Oracle JET CLI, visit the ojet-cli project.

To add this plugin to your plain Apache Cordova app:

$ cordova plugin add cordova-plugin-oracle-idm-auth


Although the plugin itself contains detailed documentation, stay tuned for more technical posts describing common usage scenarios.

The release of this plugin continues Oracle’s commitment to the open source Apache Cordova community, along with these previously released plugins:

Hope you enjoy, and if you have any feedback, please submit issues to our Cordova projects on GitHub.

For more technical articles, you can also follow OracleDevs on Medium.com.



Introducing Data Hub Cloud Service to Manage Apache Cassandra and More

Wed, 2017-11-22 11:00

Today we are introducing the general availability of the Oracle Data Hub Cloud Service. With Data Hub, developers can now initialize and run Apache Cassandra clusters on-demand without having to manage backups, patching, and scaling for those clusters. Oracle Data Hub is also a foundation for other databases, such as MongoDB and Postgres, coming in the future. Read the full press release from OpenWorld 2017.

The Data Hub Cloud Service provides the following key benefits:

  • Dynamic Scalability – users have access to an API and a web console interface to perform operations such as scale-up/scale-down or scale-out/scale-in in minutes, and to size their clusters according to their needs.
  • Full Control – as development teams migrate from an on-premises environment to the cloud, they continue to have full secure shell (SSH) access to the underlying virtual machines (VMs) hosting these database clusters, so they can log in and perform management tasks the same way they have been doing.

Developers may be looking for more than relational data management for their applications. MySQL and Oracle Database have been around for quite some time already on Oracle Cloud. Today, application developers are looking for the flexibility to choose the database technology according to the data models they use within their application. This use case specific approach enables these developers to choose the Oracle Database Cloud Service when appropriate and in other cases choose other database technologies such as MySQL, MongoDB, Redis, Apache Cassandra etc.

In such a polyglot development environment, enterprise IT faces the key challenge of supporting such open source database technologies within the organization while lowering their total cost of ownership (TCO). This is specifically the problem that the Oracle Data Hub Cloud Service addresses.

How to Use Data Hub Cloud Service

Using the Data Hub Cloud Service to provision, administer, or monitor an Apache Cassandra database cluster is simple. You can create an Apache Cassandra database cluster with as many nodes as you would like in two steps:

  • Step 1
    • Choose between Oracle Cloud Infrastructure and Oracle Cloud Infrastructure Classic regions
    • Choose between the latest (3.11) and stable (3.10) Apache Cassandra database versions
  • Step 2
    • Choose the cluster size, compute shape (processor cores) and the storage size. Don't worry about choosing the right value here. You can always dynamically resize when you need additional compute power or storage.
    • Provide the shell access information so that you have full control of your database clusters.

Flexibility to choose the Database Version

When you create the cluster, you have the flexibility to choose the Apache Cassandra version. Additionally, you can easily patch to the latest release as it becomes available for that version. Once you choose to apply a patch, the service applies it across your cluster in a rolling fashion to minimize downtime.

Dynamic Scaling

During provisioning, you have the flexibility to choose the cluster size, the compute shapes (compute core and memory), and the storage sizes for all the nodes within the cluster. This flexibility allows you to choose the compute and storage shapes that better meet your workload and performance requirements.
If you want to add either additional nodes in your cluster (commonly referred as scale-out) or additional storage to your nodes in the cluster, you can easily do so using the Data Hub Cloud Service API or Console. So, you don't have to worry about sizing your workload at the time of provisioning.

Full Control

You have full shell access to all the nodes within the cluster, giving you full control over the underlying database and its storage. You also have the flexibility to log in to these nodes and configure the database instances to meet your scalability and performance requirements.

Once you select Create, the service creates the compute instances, attaches the block volumes to the nodes, and then lays out the Apache Cassandra binaries on each node in the cluster. In the Oracle Cloud Infrastructure Classic platform, the service also automatically enables the network access rules so that you can begin using the CQL (Cassandra Query Language) tool to create your Cassandra database. In the Oracle Cloud Infrastructure platform, you have full control and flexibility to create this cluster within a specific subnet in the virtual cloud network (VCN).
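After provisioning completes, a quick smoke test is to SSH into a node and open a CQL shell against the local instance (the OS user, key path, and address below are placeholders for the values from your own cluster):

```shell
# Log in to a cluster node using the SSH key supplied at provisioning time
ssh -i ~/.ssh/datahub_key opc@<node-public-ip>

# On the node, connect to the local Cassandra instance
cqlsh $(hostname -i)
```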

Getting Started

This service is accessible via the Oracle My Services dashboard for users already under Universal Credits. And if you’re not already using Oracle Cloud, you can start off with free cloud credits to explore the services. We’d appreciate it if you gave this service a spin and shared your feedback.

Additional Reference

Linuxgiving! The Things We do With and For Oracle Linux

Tue, 2017-11-21 17:00

By: Sergio Leunissen - VP, Operating Systems & Virtualization 

It is almost Thanksgiving, so you may be thinking about things that you’re thankful for – good food, family, and friends.  When it comes to making your (an enterprise software developer’s) work life better, your list might include Docker, Kubernetes, VirtualBox, and GitHub. I’ll bet Oracle Linux wasn’t on your list, but here’s why it should be…

As enterprises move to the Cloud and DevOps increases in importance, application development also has to move faster. Here’s where Oracle Linux comes in. Not only is Oracle Linux free to download and use, but it also comes pre-configured with access to our Oracle Linux yum server with tons of extra packages to address your development cravings, including:

If you’re still craving something sweet, you can add less complexity to your list: with Oracle Linux, you’ll have the advantage of running the exact same OS and version in development as you do in production (on-premises or in the cloud).


And, we’re constantly working on ways to spice-up your experience with Linux, from things as simple as "make it boot faster," to always-available diagnostics for network filesystem mounts, to ways large systems can efficiently parallelize tasks. These posts, from members of the Oracle Linux Kernel Development team, will show you how we are doing this:

Accelerating Linux Boot Time

Pasha Tatashin describes optimizations to the kernel to speed up booting Linux, especially on large systems with many cores and large memory sizes.

Tracing NFS: Beyond tcpdump

Chuck Lever describes how we are investigating new ways to trace NFS client operations under heavy load and on high performance network fabrics so that system administrators can better observe and troubleshoot this network file system.

ktask: A Generic Framework for Parallelizing CPU-Intensive Work

Daniel Jordan describes a framework that’s been submitted to the Linux community which makes better use of available system resources to perform large scale housekeeping tasks initiated by the kernel or through system calls.

On top of this, you can have your pumpkin, apple or whatever pie you like and eat it too – since Oracle Linux Premier Support is included with your Oracle Cloud Infrastructure subscription – yes, that includes Ksplice zero down-time updates and much more at no additional cost.

Most everyone’s business runs on Linux now; it’s at the core of today’s cloud computing. There are still areas to improve, but if you look closely, Oracle Linux is the OS you’ll want for app/dev in your enterprise.

Podcast: What's Hot? Tech Trends That Made a Real Difference in 2017

Wed, 2017-11-15 05:00

Innovation never sleeps, and tech trends come at you from every angle. That's business as usual in the software developer's world. In 2017, microservices, containers, chatbots, blockchain, IoT, and other trends drew lots of attention and conversation. But what trends and technologies penetrated the hype to make a real difference?

In order to get a sense of what's happening on the street, we gathered a group of highly respected software developers, recognized leaders in the community, crammed them into a tiny hotel room in San Francisco (they were in town to present sessions at JavaOne and Oracle OpenWorld), tossed in a couple of microphones, and asked them to talk about the technologies that actually had an impact on their work over the past year. The resulting conversation is lively, wide-ranging, often funny, and insightful from start to finish. Listen for yourself.

The Panelists

(listed alphabetically)

Lonneke Dikmans
Chief Product Officer, eProseed
Oracle ACE Director
Developer Champion


Lucas Jellema
Chief Technical Officer, AMIS Services
Oracle ACE Director
Developer Champion


Frank Munz
Software Architect, Cloud Evangelist, Munz & More
Oracle ACE Director
Developer Champion


Pratik Patel
Chief Technical Officer, Triplingo
President, Atlanta Java Users Group
Java Champion
Code Champion


Chris Richardson
Founder, Chief Executive Officer, Eventuate Inc.
Java Champion
Code Champion


Additional Resources

Subscribe

Never miss an episode! The Oracle Developer Community Podcast is available via:


An API First Approach to Microservices Development

Wed, 2017-11-08 17:18

Co-author: Claudio Caldato, Sr. Director Development


Over the last couple of years, our work on various microservices platforms in the cloud has brought us into close collaboration and engagement with many customers. As a result, we have developed a deep understanding of what developers struggle with when adopting microservices architectures, in addition to deep knowledge of distributed systems. A major motivation for joining Oracle, besides working with a great team of very smart people from startups, Amazon, and Microsoft, was the opportunity to build from scratch a platform based on open source components that truly addresses the developer. In this initial blog post on our new platform, we will describe what drove its design and present an overview of the architecture.

What developers are looking for

Moving to microservices is not an easy transition for developers who have been building applications using more traditional methods. There are a lot of new concepts and details developers need to become familiar with and consider when they design a distributed application, which is what a microservices application is. Throw containers and orchestrators into the mix and it becomes clear why many developers struggle to adapt to this new world.

Developers now need to think about their applications in terms of a distributed system with a lot of moving parts; as a result, challenges such as resiliency, idempotency and eventual consistency, just to name a few, are important aspects they now need to take into account. 

In addition, with the latest trends in microservices design and best practices, they also need to learn about containers and orchestrators to make their applications and services work. Modern cluster management and container orchestration solutions such as Kubernetes, Mesos/Marathon or Docker Swarm are improving over time, which simplifies things such as networking, service discovery, etc., but they are still an infrastructure play. The main goal of these tools and technologies is to handle the process of deploying and connecting services, and guarantee that they keep running in case of failures. These aspects are more connected with the infrastructure used to host the services than the actual services themselves. Developers need to have a solid understanding of how orchestrators work, and they need to take that into account when they build services. Programming model and infrastructure are entangled; there is no clear separation, and developers need to understand the underlying infrastructure to make their services work. 

One thing we have heard repeatedly from our customers and the open source community is that developers really want to focus on developing their business logic, not on the code necessary to handle the execution environment where the service will be deployed. But what does that really mean?  

It means that above all, developers want to focus on APIs (the only thing needed to connect to another service), develop their services in a reactive style, and sometimes just use ‘functions’ to perform simple operations, when deploying and managing more complex services involves too much overhead.  

There is also a strong preference among developers to have a platform built on an OSS stack to avoid vendor lock-in, and to enable hybrid scenarios where public cloud is used in conjunction with on-premise infrastructure.  

This copious feedback from customers and developers served as our main motivation to create an API-first microservices platform, built on the following key requirements: 

  • Developers can focus solely on writing code: API-first approach 
  • It combines the traditional REST-based programming model with a modern reactive event-driven model  
  • It consolidates traditional container-based microservices with a serverless/FaaS infrastructure, offering more flexibility so developers can pick the right tool for the job 
  • Easy onboarding of 'external' services so developers can leverage things such as cloud services, and can connect to legacy or 3rd party services easily 

We were asked many times how we would describe our platform, as it covers more than just microservices, so in a humorous moment we came up with the Grand Unified Theory of Container Native Development.


The Platform Approach 

So what does the platform look like and what components are being used? Before we get into the details let’s look at our fundamental principles for building out this platform:

  • Opinionated and open: make it easy for developers to get productive right away, but also provide the option to go deep in the stack or even replace modules. 
  • Cloud vendor agnostic: although the platform will work best on our New Application Development Stack, customers need to be able to install it on top of any cloud infrastructure. 
  • Open source-based stack: we are strong believers in OSS; our stack is entirely built upon popular OSS components and will itself be available as OSS. 

The Platform Architecture 

Figure 1 shows the high level architecture of our platform and the functionality of each component. 

Let’s look at all the major components of the platform. We start with the API registry as it changes how developers think about, build, and consume microservices. 

API Registry: 

The API registry stores all the information about available APIs in the cluster. Developers can publish an API to make it easier for other developers to use their service. Developers can search for a particular service or function (if there is a serverless framework installed in the cluster). Developers can test an API against a mock service even though the real service is not ready or deployed yet. To connect to a microservice or function in the cluster, developers can generate a client library in various languages. The client library is integrated into the source code and used to call the service. It will always automatically discover the endpoint in the cluster at runtime so developers don’t have to deal with infrastructure details such as IP address or port number that may change over the lifecycle of the service.  In future versions, we plan to add the ability for developers to set security and routing policies directly in the API registry. 
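To make the discovery behavior concrete, here is a minimal, hypothetical sketch (plain Python, not the platform's actual generated code) of the pattern a generated client library follows: resolving a service's current endpoint through the registry at call time, so application code never hardcodes an IP address or port.

```python
class ApiRegistry:
    """Stand-in for the cluster's API registry (illustrative only)."""
    def __init__(self):
        self._endpoints = {}

    def publish(self, service_name, endpoint):
        self._endpoints[service_name] = endpoint

    def resolve(self, service_name):
        return self._endpoints[service_name]


class GeneratedClient:
    """What a generated client library might do under the hood."""
    def __init__(self, registry, service_name):
        self._registry = registry
        self._service = service_name

    def endpoint_for(self, path):
        # Discovery happens on every call, so a service that moves
        # (new host, new port) is picked up transparently.
        base = self._registry.resolve(self._service)
        return f"{base}{path}"


registry = ApiRegistry()
registry.publish("orders", "http://10.0.4.17:8080")

client = GeneratedClient(registry, "orders")
print(client.endpoint_for("/v1/orders/42"))

# If the service is rescheduled to a new address, existing clients follow it.
registry.publish("orders", "http://10.0.9.3:8081")
print(client.endpoint_for("/v1/orders/42"))
```

The service name and addresses above are made up; the point is only that the lookup happens inside the generated client rather than in the developer's code.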

Event Manager: 

The event manager allows services and functions to publish events that other services and functions can subscribe to. It is the key component that enables an event-driven programming model where EventProviders publish events, and consumers – either functions or microservices – consume them. With the EventManager developers can combine both a traditional REST-based programming model with a reactive/event-driven model in a consolidated platform that offers a consistent experience in terms of workflow and tools. 
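The publish/subscribe model can be sketched in a few lines of illustrative Python; the names (EventManager, subscribe, publish) are assumptions for illustration, not the platform's actual API:

```python
from collections import defaultdict


class EventManager:
    """Toy pub/sub hub illustrating the event-driven model."""
    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, event_type, handler):
        # A handler could represent a microservice endpoint or a function.
        self._subscribers[event_type].append(handler)

    def publish(self, event_type, payload):
        for handler in self._subscribers[event_type]:
            handler(payload)


events = EventManager()
received = []

# A function-style consumer and a service-style consumer both react to the
# same event, without the producer knowing about either of them.
events.subscribe("order.created", lambda e: received.append(("email", e["id"])))
events.subscribe("order.created", lambda e: received.append(("billing", e["id"])))

events.publish("order.created", {"id": 42})
print(received)
```

The producer only publishes "order.created"; adding a third consumer requires no change to the producer, which is the decoupling the event manager provides.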

Service Broker: 

In our transition to working for a major cloud vendor, we have seen that many customers choose to use managed cloud services instead of running and operating their services themselves on a Kubernetes cluster. A popular example of this is Redis cache, offered as a managed service by almost all major cloud providers. As a result, it is very common that a microservice-based application not only consists of services developed by the development team but also of managed cloud services. Kubernetes has introduced a great new feature called service catalog which allows the consumption of external services within a Kubernetes cluster. We have extended our initial design to not only configure the access to external services, but also to register user services with the API registry, so that developers can easily consume them along with the managed services. 

In this way external services, such as the ones provided by the cloud vendor, can be consumed like any other service in the cluster with developers using the same workflow: identify the APIs they want to use, generate the client library, and use it to handle the actual communication with the service. 

Service Broker is also our way to help developers engaged in modernizing their existing infrastructure, for instance by enabling them to package their existing code in containers that can be deployed in the cluster. We are also considering scenarios in which existing applications cannot be modernized; in this case, the Service Broker can be used to ‘expose’ a proxy service that publishes a set of APIs in the API Registry, thereby making the consumption of the external/legacy system similar to using any other microservice in the cluster.  

Kubernetes and Istio: 

We chose Kubernetes as the basis for our platform as it is emerging as the most popular container management platform to run microservices. Another important factor is that the community around Kubernetes is growing rapidly, and that there is Kubernetes support from every major cloud vendor.

As mentioned before, one of our main goals is to reduce complexity for developers. Managing communications among multiple microservices can be a challenging task. For this reason, we determined that we needed to add Istio as a service mesh to our platform. With Istio we get monitoring, diagnostics, complex routing, resiliency and policies for free. This removes a big burden from developers, as they would otherwise need to implement those features; with Istio, they are available at the platform level. 


Monitoring is an important component of a microservices platform. With potentially a lot of moving parts, the system requires a way to monitor its behavior at runtime. For our microservices platform we chose to offer an out-of-the-box monitoring solution which is, like the other components in our platform, based on consolidated and battle-tested technologies such as Prometheus, Zipkin/Jaeger, Grafana and Vizceral. 

In the spirit of pushing the API-first approach to monitoring as well, our monitoring solution offers developers the ability to see how microservices are connected to each other (via Vizceral), see data flowing across them and, in the future, will show insight into which APIs have been used. Developers can then use distributed tracing information in Zipkin/Jaeger to investigate potential latency issues or improve the efficiency of their services. In the future, we plan to add integration with other services. For instance, we will add the ability to correlate requests between microservices with data structures inside the JVM so developers can optimize across multiple microservices by following how data is being processed for each request. 
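The core idea behind that distributed-tracing workflow can be sketched with a few lines of stdlib-only Python (names and structure are illustrative, not Zipkin or Jaeger code): every hop in a request reuses the same trace id, so per-service timings recorded independently can later be stitched into one request timeline.

```python
import time
import uuid

spans = []  # in a real system, each service reports spans to a collector


def traced_call(trace_id, service, work):
    """Run `work` and record a span carrying the shared trace id."""
    start = time.monotonic()
    result = work()
    spans.append({
        "trace_id": trace_id,
        "service": service,
        "duration_ms": (time.monotonic() - start) * 1000,
    })
    return result


# One incoming request fans out across two services; both spans share the id.
trace_id = str(uuid.uuid4())
traced_call(trace_id, "api-gateway",
            lambda: traced_call(trace_id, "orders", lambda: "order list"))

# Because every span carries the trace id, a latency outlier can be pinned
# to a single service in the call chain.
print([s["service"] for s in spans])
```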

What’s Next? 

This is an initial overview of our new platform, with some insight into our motivation and the design guidelines we used. We will follow with more blog posts that go deeper into the various aspects of the platform as we get closer to our initial OSS release in early 2018. Meanwhile, please take a look at our JavaOne session.

For more background on this topic, please see our other blog posts in the Getting Started with Microservices series. Part 1 discusses some of the main advantages of microservices, and touches on some areas to consider when working with them. Part 2 considers how containers fit into the microservices story. Part 3 looks at some basic patterns and best practices for implementing microservices. Part 4 examines the critical aspects of using DevOps principles and practices with containerized microservices. 


Introducing Dev Gym! Free Training on SQL and More

Wed, 2017-11-08 10:52

There are many ways to learn. For example, you can read a book or blog post, watch a video, or listen to a podcast. All good stuff, which is what you'd expect me to say since I am the author of ten books on the Oracle PL/SQL language, and offer scores of videos and articles on my YouTube channel and blog, respectively.

But there's one problem with those learning formats: they're passive. One way or another, you sit there, and ingest data through your eyes and ears. Nothing wrong with that, but we all know that when it comes to writing code, that sort of knowledge is entirely theoretical.

If you want to get stronger, you can't just read about weightlifting and running. 

You've got to hit the gym and lift some weights. You've got to put on your running shoes and pound the pavement. 

Or as Confucius put it back in 450 BC:

Tell me and I will forget.
Show me and I may remember.
Involve me and I will understand.

It's the same with programming. Until you start writing code, and until you start reading and struggling to understand code, you haven't really learned anything.  To get good at programming, you need to engage in some active learning.

That's what the Oracle Dev Gym is all about. And it's absolutely, totally free. 

Learn from Quizzes

Multiple choice quizzes are the core learning mechanism on the Oracle Dev Gym. Our library of over 2,500 quizzes deepens your expertise by challenging you to read and understand code, a great complement to writing and running code.

The home page offers several featured quizzes, hand-picked by experts from the Dev Gym's library.

Looking for something in particular? Enter a keyword or two in the search bar and we'll show you what we've got on that topic.

After submitting your answer, you can explore the quiz's topic in more detail, with full verification code scripts, links to related resources and other quizzes, and discussion on the quiz.

You accumulate points for all the quizzes you answer, but your performance on these quizzes is not ranked. To play competitively against other developers, try our weekly Open Tournaments.

Check out this video on Dev Gym quizzes. 

Learn from Workouts

Quizzes are great, but when you know nothing about the topic of a quiz, they can leave you rather more confused than educated.

So to help you get started with concepts, we’ve created workouts. These contain resources to teach you about an aspect of programming, followed up by questions on the topic to test and reinforce your newly-gained knowledge.

A workout typically consists of a video or article followed by several quizzes. But a workout could also consist simply of a set of quizzes. Either way, go through the exercises of the workout and you will find yourself better able to tackle your real world programming challenges. Build your own custom workout, pick from available workouts, and set up daily workouts (single quiz workouts that expire each day).

Check out this video on Dev Gym workouts. 

Learn from Classes

Perhaps you’re looking for something more structured to help you learn. Then a Dev Gym class might be a perfect fit.

You can think of these as "mini-MOOCs". A MOOC is a massive open online course. The Oracle Learning Library offers a variety of MOOCs and I strongly encourage you to try them out. Generally, you should expect a 3-5 hour per week commitment, over several weeks. 

Dev Gym classes are typically lighter weight. Each class module consists of a video or blog post, followed by several quizzes to reinforce what you've learned. 

A great example of a Dev Gym class is Databases for Developers, a 12-week course by Chris Saxon, a member of the AskTOM Answer Team and all-around SQL wizard.

Check out this video on Dev Gym classes. 

Open Tournaments

Sometimes you just want to learn, and other times you want to test that knowledge against other developers. Let's face it: lots of humans like to compete, and we make it easy for you to do that with our weekly Open tournaments.

Each Saturday, we publish a brand-new quiz on SQL, PL/SQL, database design and logic (this list will likely grow over time). You have until the following Friday to submit your answer. And if you don't want to compete but still want to tackle those brand-new quizzes, we let you opt-out of ranking.

But for those of you who like to compete, you can check your rankings on the Leaderboard to see how you did the previous week, month, quarter and year. And if you finish the year ranked in the top 50 in a particular technology, you are then eligible to compete in the annual championship.

Note that we do not show the results of your submission for an Open tournament until that week is over. Since the quiz is competitive, we don't want to make it easy for players to share results with others who may not yet have taken the quiz. And since the quiz is competitive, we also have rules against cheating. Read Competition Integrity for a description of what constitutes cheating at the Oracle Dev Gym.

Work Out Those Oracle Muscles!

So...are you ready to start working out those Oracle muscles and stretch your Oracle skills?

Visit the Oracle Dev Gym. Take a quiz, step up to a workout, or explore our classes.

Oh, and did I mention? It's all free!


Podcast: Chatbot Development: First Steps and Lessons Learned - Part 2

Wed, 2017-10-18 12:42

The previous podcast featured a discussion of chatbot development with a panel of developers who were part of a program that provided early access to the Oracle Intelligent Bots platform available within the Mobile Cloud Service. In this podcast we continue the discussion of chatbot development with an entirely new panel of developers who also had the opportunity to work with that same Intelligent Bots beta release.

Panelists Mia Urman, Peter Crew, and Christoph Ruepprich compare notes on the particular challenges that defined their chatbot development experiences, and discuss what they did to meet those challenges. Listen!

The Panelists

Oracle ACE Director Mia Urman
Chief Executive Officer, AuraPlayer Limited, Brookline, Massachusetts.

Peter Crew
Director, SDS Group; Chief Technical Officer, MagiaCX Solutions, Perth, Australia

Oracle ACE Christoph Ruepprich
Infrastructure Senior Principal, Accenture Enkitec Group, Dallas, TX

Additional Resources

Subscribe to the Oracle Developer Community Podcast


A Simple Guide to Oracle Intelligent Bots Error Handling

Tue, 2017-10-03 03:50

Like any software development, building chatbots is rarely perfect the first time. In particular, the areas that are programmed, such as the conversation flow or backend system integration, are more likely to be subject to bugs and errors. The assumption is that where there is room for failure, there is a way to handle those failures; and in fact, there is. This blog post explains how to handle errors in Oracle Intelligent Bots. 

Categories of Errors

There are three broad categories of errors that may occur in the context of a bot.

The first category is design-time errors in the dialog definition, for example, a missing colon or invalid indentation. The good news is that the Intelligent Bots designer validates the dialog definition at design time and highlights which line it thinks is in error.

The second category relates to system components at runtime. Component properties can have their value assigned at runtime, for which bot designers would use an expression such as ${myKeepTurnVar.value} that references a context variable defined in the dialog flow. If a component property attempts to read the variable value before it gets set, this also produces a failure at the component level.

The third category is a problem within a custom component, for example a failed connection to a backend service or a failed input validation. Possibly the backend system is returning an HTTP 404 because it can’t find the requested data, or maybe there is an HTTP 5xx error because the backend system is down. Given the nature of these problems, they don't show at design time but only at runtime.
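As an illustration of this third category, the sketch below (plain Python, not the Intelligent Bots SDK; all names are made up) shows how a custom component might translate backend HTTP failures into a single component-level error that the dialog engine can then route via an error transition:

```python
class ComponentError(Exception):
    """Illustrative: raised to trigger the component's error transition."""


def fetch_backend_data(status_code, body=None):
    # In a real custom component the status would come from an actual HTTP
    # call to the backend service; here it is passed in for illustration.
    if status_code == 404:
        raise ComponentError("backend could not find the requested data")
    if 500 <= status_code < 600:
        raise ComponentError("backend system is down")
    return body


try:
    fetch_backend_data(503)
except ComponentError as err:
    # The dialog engine would now navigate to the state named in the
    # component's error transition rather than crashing the conversation.
    print(f"error transition taken: {err}")
```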

Layers of Error Handling

So now that you know about the categories of errors bot designers and developers usually deal with, let’s have a look at how these can be handled.

  • Implicit error handling is what Oracle Intelligent Bots does when there is no error handler defined at all, which is the default. You wouldn’t want to put any bot into production that only has this level of error handling.
  • Component level error handling allows conversation flow designers to catch errors as close as possible to their cause.
  • Global error handling is defined on the chatbot level. All errors that are not handled on the component level will be passed to this error handler.
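The resolution order described above can be sketched in a few lines of illustrative Python (the function and names are assumptions, not platform code): the dialog engine prefers a component-level handler, falls back to the global handler, and only uses the implicit handler when neither is defined.

```python
IMPLICIT_HANDLER = "implicit (built-in generic error message)"


def resolve_error_handler(component_error_transition, global_error_transition):
    """Pick the handler the way the three layers above stack up."""
    if component_error_transition:
        return component_error_transition
    if global_error_transition:
        return global_error_transition
    return IMPLICIT_HANDLER


# Component-level handler wins when present.
print(resolve_error_handler("handleError", "handleGlobalError"))
# Otherwise the global handler catches the error.
print(resolve_error_handler(None, "handleGlobalError"))
# With neither defined, the implicit handler's generic message is shown.
print(resolve_error_handler(None, None))
```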
Component Level Error Handling

To handle errors, each component can have an error transition property set. The error transition references a state in the same dialog flow that the dialog engine navigates to in case of an error.

So the first thing to learn is that an error transition in Oracle Intelligent Bots doesn't handle the error itself; it triggers navigation.

The Oracle BotML example below shows a definition of a System.Output component with a missing value for the "keepTurn" property. The state has an error transition defined that points to a state named "handleError".


      component: "System.Output"
      properties:
        text: "Welcome ${profile.firstName} ${profile.lastName}"
      transitions:
        next: "showOptions"
        error: "handleError"


Note: The keepTurn property must have a value defined. The code above is not valid and thus will fail at runtime. At the time of writing, design-time validation does not catch a missing keepTurn property.

The BotML below shows what the "handleError" state may look like:


    handleError:
      component: "System.Output"
      properties:
        text: "This is a problem caught by the component. The error
               state is the \"${system.errorState}\" state"
      transitions:
        return: "done"

In this example, the error handler simply displays a message containing the errorState, which is the name of the dialog state in which the error occurred. If you wanted to componentize the error handler, you could build a specific custom component and have the error handler state reference it, which would be a more elegant solution:


    handleError:
      component: "my.errorhandler"
      properties:
        errorState: "${system.errorState}"
        user: "${profile.firstName} ${profile.lastName}"
        isDebug: "true"
      transitions:
        next: "start"


Custom components can be used within any state in a dialog flow. In the example above, the custom component has input properties defined for the error state, the user, and a flag to indicate whether this component is used in development or in production. The latter could be used to determine the message to be printed to the user.

What makes a custom component special as an error handler is that it can log the problem, try to recover or – in serious cases – perform incident reporting so an administrator becomes aware of a runtime problem.

The custom component could also dynamically determine the next state to visit by updating a context variable that is configured as the value for the "next" element in the "transitions" section.

For example:


    handleError:
      component: "my.errorhandler"
      properties:
        errorState: "${system.errorState}"
        user: "${profile.firstName} ${profile.lastName}"
        isDebug: "true"
      transitions:
        next: "${variableName.value}"

Global Error Handling

So defining error handling at the component level is our first strategy. The second line of defense is a global error handler. There is a good reason for defining a bot-wide global error handler, which is to avoid the implicit error handler. The global error handler is defined using an "error" element in the bot header definition, as shown below.


  metadata:
    platformVersion: 1.0
  main: true
  name: TheBotName
  context:
    variables:
      iResult: "nlpresult"
  error: "handleGlobalError"




Because this error handler is defined at the bot level, it behaves exactly like the component-level error handler in that it triggers navigation to a state defined as the error handler, "handleGlobalError" in this example.

So everything I wrote in the previous section about the component-level error handler applies here as well. However, take special care when using custom components to handle global errors, because the global error setting replaces the implicit handler. An error in the custom component itself could then lead to an infinite loop.

Implicit Error Handler

This error handler is used if nothing else has been defined. The error message displayed by this error handler is:

Oops I'm encountering a spot of trouble. Please try again later...

Hopefully you agree that this is not a message to display to a user in a production bot. However, it is important that a generic implicit error handler like this exists, because bot design usually starts with the use case at hand, not with custom error handling. 

Learn more

To learn more about Oracle Intelligent Bots and Chatbots, visit http://oracle.com/bots


Feature image courtesy of Sira Anamwong at FreeDigitalPhotos.net

Announcing Fn–An Open Source Serverless Functions Platform

Mon, 2017-10-02 17:00

We are very excited to announce our new open source, cloud agnostic, serverless platform–Fn.

The Fn project is a container native Apache 2.0 licensed serverless platform that you can run anywhere–any cloud or on-premise. It’s easy to use, supports every programming language, and is extensible and performant. 

We've focused on making it really easy to get started so you can try it out in just a few minutes and then use more advanced features as you grow into it. Check out our quickstart to get up and running and deploying your own function in a few minutes.


The Fn Project is being developed by the same team that created IronFunctions. The team pioneered serverless technology and ran a hosted serverless platform for 6 years. After running billions of containers for thousands of customers, pre and post Docker, the team has learned a thing or two about running containers at scale, specifically in a functions-as-a service style.

Now at Oracle, the team has taken this knowledge and experience and applied it to Fn.


Fn has a bunch of great features for development and operations.

  • Easy to use command line tool to develop, test and deploy functions
  • One dependency: Docker
  • Hot functions for high performance applications
  • Lambda code compatibility - export your Lambda code and run it on Fn
  • FDKs (Function Development Kits) for many popular languages
  • Advanced Java FDK with JUnit test framework
  • Deploy Fn with your favorite orchestration tool such as Kubernetes, Mesosphere and Docker Swarm
  • Smart load balancer built specially for routing traffic to functions
  • Extensible and modular, enabling custom add-ons and integrations

The project homepage is fnproject.io, but all the action is on GitHub at github.com/fnproject/fn.

We welcome your feedback and contributions to help make Fn the best serverless platform out there. 



Image credit: Cuito Cuanavale (Creative Commons Attribution License)

Cloud Foundry Arrives on Oracle Cloud with a Provider Interface and Service Brokers

Mon, 2017-10-02 17:00

As the adoption of Oracle Cloud grows, there is increasing demand to bring a variety of workloads to run on it. Due to the popularity of the Cloud Foundry application development platform, over the last year Oracle customers have requested the option of running Cloud Foundry on Oracle Cloud. Reasons include:

  • Cloud Foundry is a very popular application development platform and many Cloud Foundry developers are using Oracle Cloud for other interrelated projects

  • Oracle Cloud has a large ecosystem of Platform services that can be used to augment Cloud Foundry applications or, conversely, Cloud Foundry can be used to extend Oracle services in new ways.

  • Many Cloud Foundry users have significant Oracle workloads that they need to integrate with, and more and more Oracle customers are finding it easier to move those workloads to Oracle Cloud. Co-locating Cloud Foundry workloads near those Oracle workloads in the cloud enables them to easily interoperate and integrate.

So what has Oracle done to make this possible?

Cloud Foundry Running on Oracle Cloud

Over the last several months, Pivotal and Oracle engineering teams have been collaborating to build out several pieces of an integrated solution to run Cloud Foundry on Oracle Cloud.

We started with the BOSH Cloud Provider Interface. This layer of the Cloud Foundry architecture abstracts away the infrastructure provider to the Cloud Foundry application developer. This allows Cloud Foundry to be installed on various cloud providers like AWS, Microsoft Azure, Google Cloud Platform and now Oracle Cloud Infrastructure.

The code for this was just pushed into our GitHub repositories and is being actively worked on by the Oracle team. At this stage it is not GA, so use it for proofs of concept. You can look at it here.

This work has been a great collaboration between Oracle and Pivotal. Over the next few months, our expectation is that this CPI will become regularly tested as part of the standard Cloud Foundry build processes and part of the collection of CPIs available for Cloud Foundry.

Oracle Cloud Service Brokers for Cloud Foundry

Beyond running Cloud Foundry on Oracle Cloud Infrastructure, one of the key technical requirements we’ve heard from developers is the desire to integrate with various Oracle Cloud Services – from Database to WebLogic Server to MySQL.

Cloud Foundry has a natural model for doing this through an interface called a Service Broker. Service brokers enable Cloud Foundry applications to easily interact with services on or off Cloud Foundry. Operations include provisioning and de-provisioning, binding and unbinding, updating instances and catalog management.  
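To make those broker operations concrete, here is a small sketch of the request shapes involved, following the Open Service Broker API that Cloud Foundry's service brokers implement. The paths come from that specification; the service, plan, and instance identifiers are made up for illustration.

```python
def catalog_request():
    # Catalog management: Cloud Foundry asks the broker what it offers.
    return ("GET", "/v2/catalog")


def provision_request(instance_id, service_id, plan_id):
    # Provisioning: create a service instance (e.g. a database) for an app.
    return ("PUT", f"/v2/service_instances/{instance_id}",
            {"service_id": service_id, "plan_id": plan_id})


def bind_request(instance_id, binding_id):
    # Binding: the broker returns the credentials an app uses to reach
    # the service instance.
    return ("PUT",
            f"/v2/service_instances/{instance_id}/service_bindings/{binding_id}")


print(catalog_request())
print(provision_request("inst-1", "oracle-db", "standard"))
print(bind_request("inst-1", "bind-1"))
```

De-provisioning and unbinding are the corresponding DELETE requests on the same paths; the point is that one small HTTP contract covers the whole lifecycle the paragraph above describes.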

The first service broker type is for our Oracle Cloud Platform PaaS services. In this model, by configuring one service broker – hosted on Oracle Cloud - we enable Cloud Foundry to interact with upwards of five different PaaS services including Database Cloud Service, Java Cloud Service, MySQL Cloud Service, DataHub Cloud Service (Cassandra) and Event Hub Cloud Service (Kafka). This is an initial set of cloud services and Oracle will evaluate others based on market demand. The diagram below shows a pictorial diagram of this service broker approach.

The second service broker Oracle has developed is for the Oracle Cloud Infrastructure capabilities—in particular our Oracle Cloud Infrastructure Oracle Database Cloud Service and our Oracle Cloud Infrastructure Object Storage. These are service brokers that can be installed and configured in Cloud Foundry to give direct access to these Oracle Cloud Infrastructure services. The diagram below provides a pictorial diagram of this model.

Deployment Approaches

All of this integration between Cloud Foundry and Oracle Cloud naturally raises the question of which deployment topologies this solution will typically be used in. As an initial overview, our expectation is that there will be three types:

  1. All in Oracle Cloud. In this approach, both Cloud Foundry and the services it interacts with run in Oracle Cloud – nothing runs on premises. The diagram below brings together the BOSH CPI and the two service brokers to illustrate this.

  2. Hybrid. In the second approach, Cloud Foundry runs off Oracle Cloud – either on premises or potentially on other cloud infrastructure – but integrates remotely with Oracle Cloud services via the service brokers. This approach is clearly constrained architecturally by issues such as network latency but, depending on the cloud services involved, may be a useful topology for some use cases. The diagram below illustrates this in action.

  3. All on premises, leveraging a capability Oracle calls Oracle Cloud at Customer. This enables customers to run Oracle Cloud services in their data center. It is particularly useful for customers who have data residency, regulatory, or performance/latency concerns when running Cloud Foundry on premises and reaching out to public clouds. Oracle Cloud at Customer includes all the services available via the Oracle PaaS Service Broker running on Oracle Cloud Machine, as well as Oracle Exadata Cloud Machine – all running on premises. The diagram below illustrates this topology in action.

Overall there’s a lot of choice and opportunity here and these three different approaches are really meant to give ideas of how it could be done rather than being prescriptive.

What’s Next?

This work is the start of a journey to run Cloud Foundry workloads on and interacting with Oracle Cloud. Watch for more announcements as we move this work forward over the next few months!

For more information on the Oracle Cloud’s BOSH CPI and Oracle Cloud Infrastructure Service Brokers see this blog.  For Pivotal’s perspective on this work, see this blog.

Meet the New Application Development Stack - Managed Kubernetes, Serverless, Registry, CI/CD, Java

Mon, 2017-10-02 17:00
  • Oracle OpenWorld 2017, JavaOne, and Oracle Code Container Native Highlights
  • New Oracle Container Native App Dev Platform: Managed Kubernetes + CI/CD + Private Registry Service
  • Announcing Fn: an Open Source Functions as a Service Project (Serverless)
  • Latest from Java 9: Driving the Build, Deploy, and Operate Loop
The Container Native Challenge

Today, customers face a difficult decision when selecting a container-native application stack.  Either they choose from a mind-boggling menu of non-integrated, discrete and proprietary components from their favorite cloud provider – thus signing up for layers of DIY integration and administration – and slowly getting locked into that cloud vendor drip-by-drip.  Alternatively, many enterprises venture down a second path and select an opinionated application development stack – which looks “open” at first glance but in reality, consists of closed, forked, and proprietary components – well-integrated, yes, but far from open and cloud neutral.   More lock-in?  Absolutely.


Cloud Native Landscape (github.com/cncf/landscape)

So, what if you could combine an integrated developer experience with an open, cloud-neutral application stack built to avoid cloud lock-in?

Announcing Oracle’s Container Native Application Development Platform

The Container Native Application Development team today at Oracle OpenWorld 2017 announced the Oracle Container Native Application Development Platform – bringing together three new services - managed Kubernetes, CI/CD, and private registry services together in a frictionless, integrated developer experience.  The goal?  To provide a complete and enterprise-grade suite of cloud services to build, deploy, and operate container-native microservices and serverless applications.   Developers, as they rapidly adopt container-native technologies for new cloud-native apps and for migrating traditional apps, are becoming increasingly concerned about being locked-in by their cloud vendors and their application development platform vendors.  Moreover, they are seeking the nirvana of the true hybrid cloud, using the same stack in the cloud - any cloud - as they run on premise.

Directly addressing this need, the Oracle Container Native Application Development Platform includes a new managed Kubernetes service – Oracle Container Engine – to create and manage Kubernetes clusters for secure, high-performance, high-availability container deployment; a new private Oracle Container Registry Service for storing and sharing container images across multiple deployments; and a new full container lifecycle management CI/CD service, Oracle Container Pipelines, based upon the Wercker acquisition, for continuous integration and delivery of microservice applications. 

Why should you care? Because unlike other cloud providers and enterprise appdev stack vendors, the Container Native Application Development Platform provides an open, integrated container developer experience as a fully-managed, high-availability service on top of an enterprise-grade cloud (bare metal & secure).  A free community edition of Wercker and early adopter access to the full Oracle Container Native Application Development Platform are available at wercker.com.

Meet Fn: An Open Source Serverless Solution

So, as if that weren’t enough, today we open sourced Fn, a serverless developer platform project (fnproject.io). Developers using Oracle Cloud Platform, their laptop, or any cloud can now build and run applications by just writing code, without provisioning, scaling, or managing any servers – the cloud takes care of all of that transparently.  This allows them to focus on delivering value and new services instead of managing servers, clusters, and infrastructure. Because Fn is an open-source project, it can also be run locally on a developer’s laptop and across multiple clouds, further reducing the risk of vendor lock-in. 

Fn consists of three components: (1) the Fn Platform (Fn Server and CLI); (2) the Fn Java FDK (Function Development Kit), which brings a first-class function development experience to Java developers, including a comprehensive JUnit test harness; and (3) Fn Flow, for orchestrating functions directly in code. Fn Flow enables function orchestration for higher-level workflows – sequencing, chaining, fan-in/fan-out – directly and natively in the developer’s code rather than through a console.  Initial support is for Java, with additional language bindings coming soon.
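To give a flavor of the developer experience, here is a minimal sketch of an Fn-style Java function. This is an illustration under our own naming: simple Fn Java functions are written as plain classes with a single handler method, and the class name `HelloFunction` and method name `handleRequest` here are choices for the example, not names mandated by the project.

```java
public class HelloFunction {

    // Fn invokes the handler method with the request body as input;
    // the return value becomes the function's response.
    public String handleRequest(String input) {
        String name = (input == null || input.isEmpty()) ? "world" : input;
        return "Hello, " + name + "!";
    }
}
```

With the Fn CLI, a function like this is typically scaffolded with `fn init --runtime java` and pushed to a running Fn server with `fn deploy` – and because it is an ordinary class, it can be unit tested like any other Java code.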

How is Fn different? It’s open (cloud-neutral with no lock-in), can run locally, is container native, and provides polyglot language support (including Java, Go, Ruby, Python, PHP, Rust, .NET Core, and Node.js, with AWS Lambda compatibility). We believe serverless will eventually lead to a new, more efficient cloud development and economic model.  Think about it: virtualization disintermediated physical servers, containers are disintermediating virtualization – so how soon until serverless disintermediates containers?  In the end, it’s all about raising the abstraction level so that developers never think about servers, VMs, and other IaaS components, giving everybody better utilization with fewer resources, faster product delivery, and increased agility.  But it must follow the developer mandate: open, community-driven, and cloud-neutral.  And that’s why we introduced Fn.

Java 9: Driving the Build–Deploy–Operate Cloud Loop

DevOps and SRE patterns consistently look for automation and culture to create a repeatable application lifecycle of build, deploy, and operate. The latest Java SE 9 release, announced September 21, 2017 and highlighted at the JavaOne 2017 conference, includes more than 150 new features that help drive new Cloud Native development in this model.  Java SE 9 (JDK 9) is a production-ready implementation of the Java SE 9 Platform Specification, which was recently approved together with Java EE 8 in the Java Community Process (JCP).  Java continues to fuel cloud development in a big way - judging by the latest metrics.  The numbers are staggering: 

- #1 Developer Choice for the Cloud
- #1 Programming Language
- #1 Development Platform in the Cloud

Supported by these metrics:

- 12 Million Developers Run Java
- 21 Billion Cloud Connected Java Virtual Machines
- 38 Billion Active Java Virtual Machines
- 1 Billion Downloads Per Year

So, what’s new in Java 9?  Too much to list here, but a good way to summarize it is to look through a DevOps lens as the Java community continues to improve Java and its application in cloud native application development.  Highly effective DevOps teams are seeking to improve their Build-Deploy-Operate loop to build better code, deploy faster and more often, and recover faster from failures – and new Java 9 features are leading the way:

- Build Smarter
  -- JShell to easily explore APIs and try out language features
  -- Improved Javadoc to learn new APIs
  -- New & improved APIs including Process, StackWalker, VarHandle, Flow, CompletableFuture

- Deploy Faster
  -- New module system - Project Jigsaw
  -- Build lightweight Java apps quickly and easily
  -- Bundle just those parts of the JDK that you need
  -- Efficiently deploy apps to the cloud
  -- Modular Java runtime size makes Docker images smaller & Kubernetes orchestration more efficient

- Operate Securely
  -- More scalability and improved security
  -- Better performance management
  -- Java Flight Recorder released to OpenJDK for improved monitoring and diagnostics
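As a small taste of the new and improved APIs listed under “Build Smarter,” the sketch below exercises two of them: the new Process API (`ProcessHandle`) and `CompletableFuture.completeOnTimeout`, which arrived in Java 9. It is a standalone example, not tied to any Oracle service.

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.TimeUnit;

public class Java9ApiDemo {
    public static void main(String[] args) {
        // New Process API: inspect the current process without native code.
        ProcessHandle self = ProcessHandle.current();
        System.out.println("pid = " + self.pid());
        self.info().command().ifPresent(cmd -> System.out.println("command = " + cmd));

        // New in Java 9: supply a fallback value if a future is not completed in time.
        CompletableFuture<String> pending = new CompletableFuture<>();
        String result = pending
                .completeOnTimeout("fallback", 100, TimeUnit.MILLISECONDS)
                .join();
        System.out.println("result = " + result); // prints "result = fallback"
    }
}
```

Snippets like these are also a natural fit for JShell: paste them into the new REPL to explore the APIs interactively without setting up a project.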

To learn more and try out some new sample apps, check out wercker.com/java.  And speaking of open source, agility, and velocity, Oracle is moving to a six-month release cadence for Java SE starting after Java SE 9, and will also be providing OpenJDK builds under the General Public License (GPL).  Cool – more open, more often. Also, we will be contributing previously commercial features such as Java Flight Recorder to OpenJDK, targeting alignment of Oracle JDK and OpenJDK.

Cloud Foundry on Oracle Cloud

For Cloud Foundry developers, we’ve released an Open Service Broker implementation that integrates Oracle Cloud Platform Services with Cloud Foundry, so you can now build directly on the Cloud Foundry framework on Oracle Cloud.  Also, we’ve open sourced the BOSH Cloud Provider Interface, so developers can deploy Cloud Foundry workloads directly on Oracle Cloud Infrastructure, a capability targeted for general availability later this year.

Beating the Open Source Drum

As a container native group, we’re committed to the open source community and these announcements showcase that commitment. The Oracle Container Native Application Development Platform is yet another step in our journey to deliver an open, cloud neutral and frictionless experience for building cloud-native as well as conventional enterprise applications. Over the course of this spring and summer, Oracle has shown continued commitment to open-source standards by joining the Container Native Computing Foundation, dedicating engineering resources to the Kubernetes project, open-sourcing several container utilities and making its flagship databases and developer tools available in the Docker Store marketplace.

Check out Container Native Highlights at OpenWorld and JavaOne

Finally, check out all the Container Native activities at #OOW17, #JavaOne and #OracleCode, and learn more about all things containers, Java, cloud, and more - from build to deploy to operate. Learn from our engineers in technical sessions, get your hands dirty in a hands-on lab, and take a product tour in the DevOps Corner at the Dev Lounge!  

And make sure to stay connected:  


Announcing BOSH Cloud Provider Interface for Oracle Cloud Infrastructure

Mon, 2017-10-02 17:00
Today we are pleased to announce our partnership with Pivotal to bring Cloud Foundry to Oracle Cloud Infrastructure. Over the past months, we have been working closely with Pivotal and the Cloud Foundry community to develop a BOSH Cloud Provider Interface for OCI.  BOSH is a tool created to deploy Cloud Foundry on a wide variety of infrastructure, and a preview of this is now available.
We have released the BOSH CPI for OCI on GitHub under a dual Apache 2 / UPL license. Dhiraj Mutreja, my colleague at Oracle, is the technical lead for the CPI effort. While this is an early preview release, we are working with the Cloud Foundry community to have it formally certified and we will have more news on that soon.  That said, it works, and it is real, and we are using it every day!
We are also happy to announce today that we have open sourced Terraform configurations for deploying BOSH and Cloud Foundry on OCI. Using the Terraform Provider for OCI, we are able to greatly simplify the task of installing a BOSH Director and Cloud Foundry.
Terraform is a common tool for paving complex infrastructure configurations. Our Terraform configurations for BOSH and Cloud Foundry configure group policies, virtual networks, security lists, and more.  All in all, they create around 50 artifacts for you in a fault-tolerant, multi availability domain setup.  Run the tool and you get all of this plus a bastion instance to SSH into, from which you can use the BOSH 2 CLI to deploy a BOSH Director – without having to worry about which ports need to be opened on your security lists.
In the coming days, we will be releasing sample BOSH deployment manifests and guides for deploying various systems using BOSH on OCI, including Cloud Foundry, ZooKeeper, Concourse, Consul and more.

Announcing Oracle Container Engine and Oracle Container Registry Service

Mon, 2017-10-02 15:00

Today at Oracle OpenWorld, we announced early adopter availability of the Oracle Container Native Application Development platform – a big step forward for developers and DevOps teams who are building, deploying and operating container native microservices and serverless applications. 

Key foundational pieces of this platform are the Oracle Container Engine and the Oracle Container Registry Service – a managed Kubernetes and Docker registry service respectively. 

As we discussed in a recent post, Kubernetes has taken the developer and DevOps world by storm, providing the de facto container orchestration layer for managing containers at scale in a significant number of enterprises, as they look to an open Kubernetes to avoid lock-in, and further their hybrid-cloud and multi-cloud goals. 

Like the Container Native applications that our customers themselves are building, the Kubernetes community itself is extremely agile and innovative.  With close to 1,400 contributors at the time of writing, Kubernetes has averaged a new minor release roughly every three months.  While customers want to embrace these innovations for their applications quickly, it can be a challenge to upgrade and maintain their Kubernetes environments to take advantage of them.  Getting networking and storage fully integrated and working, and putting in place the right security permissions to control team access to clusters, can also be challenging, as demonstrated by a recent survey from the CNCF. 


Common Challenges When Deploying Containers (Source) 

Furthermore, developer and DevOps teams are not looking at Kubernetes in isolation – they are looking for full container lifecycle management and integration, from building software, testing, deploying and operating in production. 

Previously, we announced the availability of the Terraform Kubernetes Installer, an open source toolkit for customers to rapidly stand up their own Kubernetes cluster on Oracle Cloud Infrastructure. 

Now we've taken that a step further with the managed Kubernetes service – combining the production-grade container orchestration of standard upstream Kubernetes, the control, security, and highly predictable performance of Oracle’s next-generation OCI cloud infrastructure, and the significant operational savings of a managed service, where we maintain and update the Kubernetes infrastructure for you.


Oracle Container Engine 

In conjunction with the managed Kubernetes service, we are also announcing a managed registry service.  This new container registry is Docker v2 API compliant and provides a private registry service that is tightly integrated with Oracle Container Pipelines and Container Engine, for developers looking for seamless, integrated end-to-end container lifecycle management.  Users can view the images stored in their registry in the UI, utilize those images in their build workflows and pipelines, and connect to the registry directly using the standard Docker CLI. 

This is the latest in a series of developer focused announcements with much more to come!  The free community edition of Wercker and early adopter access to the Oracle Container Native Application Development Platform is available at www.wercker.com.

New Game. New Name. Oracle Developer Community

Sun, 2017-10-01 02:40

Technology has driven so much change that people have become nearly oblivious to all but the most dramatic innovations. But it’s important to remember that each wave of innovation began with a single keystroke, one character in a line of code, then another, then another. It’s important to remember that the finger pressing down on that key is attached to a human being. A developer.

Developers drive change, and change, in turn, drives developers. It’s a perpetual evolutionary cycle that requires perpetual adaptation at both the individual and the organizational level.

There’s a major shift happening in technology — and at Oracle. This evolution is powered by a diverse population of software developers. In adapting to this evolution, the Oracle Technology Network has now become the Oracle Developer Community, and over the past year we’ve created programs to support people building modern, open, cloud-native applications.

Developer Champions

Developer Champions have expertise in modern software and cloud development, including microservices, containers, DevOps, continuous delivery, open source technologies, and SQL/NoSQL databases. These professionals are contributors to open source projects, authors on contemporary development approaches, and speakers at prominent Oracle (Oracle Code, JavaOne) and top industry conferences such as Devoxx, Developer Week, Velocity, and QCon. Meet the New Developer Champion Program.

Live for the Code!

Oracle Code Events

In this international series of free events, developers learn about the latest technologies, practices, and trends from technical experts, industry leaders, and other developers in keynotes, sessions, and hands-on labs. And to make sure every developer has access to this content, Oracle Code Online brings the same content and experts directly to your screen. More information...

Social Media and Content Aggregation

You can now connect with the Oracle Developer Community through a variety of social media channels:

Oracle Developers Portal

Pulling it all together is the Oracle Developers Portal. This inclusive space is for all of our language- and technology-specific communities, which we enthusiastically support. You’ll find a wide variety of content covering:

So play your part in driving innovation, and be a part of the Oracle Developer Community


Announcing Oracle JET 4.0 and Web Components

Wed, 2017-09-27 16:56

This release of Oracle JET brings many new features – none bigger than a completely new custom element based syntax for defining all JET UI components. We believe you will find this new syntax more intuitive and natural to work with when developing your HTML content. It furthers our effort to stay current with HTML standards and specifications such as the HTML5 Web Components specification. To learn more about developing with this new syntax, refer to the JET Developers Guide.

While you don’t have to move to the new custom element syntax when you migrate your application to v4.0.0, it is highly recommended that you start all new work using it. The custom element syntax can coexist with the existing data-bind syntax (e.g. ojComponent) in the same page without any problems. They are designed to work together until the data-bind syntax reaches end of life, which is currently planned for on or about the JET v8.0.0 release (approximately two years after the v4.0.0 release).

An example of the old and new syntax looks like:


<input id="text-input" type="text" data-bind="ojComponent: {component: 'ojInputText', value: value}"/>


<oj-input-text id="text-input" value="{{value}}"></oj-input-text>

Notice the use of {{ }} for the binding of the custom element value?

{{ }} represents two-way binding, while [[ ]] represents one-way binding – a convention inspired by Polymer’s binding syntax. 
Learn more…

A new Getting Started video is also available for a high-level overview of the new release.

As always, the Release Notes will give you all the details about changes and updates in this release. Listed below are some of the highlights.

Web Components

Beyond the new custom element syntax, there are also some framework level enhancements in this release that are sure to please.

Content Delivery Network (CDN) Support

Oracle JET is now available via a CDN managed by Oracle. All JET libraries, as well as the versions of the third-party libraries that JET distributes, are included on the CDN. As of this release, resources are available for JET v3.1.0, v3.2.0, and v4.0.0. 
Learn more…


A very simple example of how this feature will help improve the performance of applications built with JET can be seen here. The content placed inside an <oj-defer> element will not be rendered until the parent element calls for it. 
Learn more...

<oj-collapsible id="collapsiblePage">
  <h4 id="collapsibleHeader" slot="header">Deferred Content</h4>
  <oj-defer>
    <div data-bind="ojModule: 'deferredRendering/content'"></div>
  </oj-defer>
</oj-collapsible>

ojBusyContext

Test automation can always be tricky when a toolkit/framework provides multiple types of animations and asynchronous data interactions. The JET BusyContext API provides multiple ways for testers to get control over their JET based applications in these areas.
Learn more…

<div id="mycontext" data-oj-context>
  ...
  <!-- JET content -->
  ...
</div>

var node = document.querySelector("#mycontext");
var busyContext = oj.Context.getContext(node).getBusyContext();
busyContext.whenReady().then(function () {
  var component = document.querySelector("#myInput");
  component.value = "foo";
  if (!component.isValid())
    component.value = "foobar";
});

Composite Components

One of the more powerful features of Oracle JET is the new composite component architecture. This is JET’s implementation of the HTML5 web component specification for creating and sharing reusable UI elements. HTML code as simple as this:

<demo-memory-game id="game1"
                  cards="[[chartImages]]"
                  on-attempts-changed="[[updateAttempts]]"
                  on-has-won-changed="[[showWinnerPopup]]">
</demo-memory-game>

... can deliver something as complex as this complete memory game sample.
Learn more…


One of the greatest new features of JET v4.0.0 is something that will help you get started with a new application, as well as add new features to an existing one. The ojet command line interface (CLI) has been in preview for the last two JET releases, but reaches official status with JET v4.0.0. Creating a new application for web, mobile, or both is as easy as:

ojet create myApp --template=navdrawer

Add a new composite component to your application as simply as:

ojet create component my-component

Learn more…

Always a popular feature of JET is the Data Visualization (DVT) component set. This release brings across-the-board performance improvements for the DVT components, as well as new features and functionality for some of them.

  • New milestone, progress, and baseline elements
  • Multiple positions for task labels now available

  • New support for merged cells

Other new or updated UI components include…


This new component makes it easy to build responsive employee lists


The ojTree component has been replaced with a completely rewritten oj-tree-view component, which adds HTML5 drag-and-drop functionality as well as support for icons and lazy loading of content on expand.

Color Palette and Color Spectrum

The oj-color-palette and oj-color-spectrum components have been improved to provide easy inclusion of your own color schemes.

With over 1,750 issues and features delivered in this major release, a year’s worth of work on the new custom element syntax, and a collection of updated components, this is one of the most comprehensive releases the JET team has delivered. It’s our 17th consecutive on-time release, continuing to show our commitment to a consistent and reliable release cycle for our customers to take advantage of.

We hope you enjoy developing new products with this release, as much as we have enjoyed delivering it to you.

As always, if you have questions, comments, or constructive feedback, you can find us on Twitter ( @oraclejet ), StackOverflow, or the Oracle Developer community forums. We look forward to you getting involved in the Oracle JET Community.

Happy coding!

The Oracle JET Team

7 Reasons Developers Won't Believe That Will Make Them Attend Code San Francisco

Tue, 2017-09-26 02:00

It's been 7 months since we kicked off the Oracle Code series in the Bay Area. Since then we’ve visited 17 countries, hosted thousands of developers, and promoted the best and latest content about modern cloud application development. Now we are 7 days away from the 21st Code event, right back where it all started: San Francisco! 

After all that learning, traveling, eating, drinking, and laughing, we're back with 7 great reasons for developers to join us at Oracle Code San Francisco on Oct. 3.

#1 Even More Content!

We handpicked 33 exclusive sessions, hands-on labs, and keynotes aimed specifically at helping developers, and it will all be delivered at the Moscone West center on Oct. 3. Just like past Oracle Code events, this is a one-day conference to help you learn about microservices, containers, DevOps, Java, JavaScript, SQL, mobile, and a lot more. Code also extends over the rest of the week: a full track within JavaOne dedicated to those same areas of interest, located at the same venue, with 92 sessions, 8 birds-of-a-feather discussions, 10 unique hands-on labs, and multiple keynotes.

#2 It's Free!

If spending – pun intended – an entire week at JavaOne or Oracle OpenWorld feels like too much of a commitment, we've got you covered. Oracle Code San Francisco is free for developers to attend!

#3 It's Fun!

By attending the Code conference you also get access to the Developer Lounge, where you can interact with many high-tech experiences built from the ground up using Oracle Cloud technologies, Raspberry Pis, and Java. Meet, for instance, the upcoming BulletTime Video Ring, which creates a Matrix-like video effect with you as the star.

#4 It Has Great Speakers!

No matter where we went in the world, developers wanted the same thing from a conference: great content and great speakers. And we've got this covered as well. Developers will hear from great speakers such as Adam Bien, Chris Richardson, Baruch Sadogursky, Frank Munz, Leonid Igolnik, Lucas Jellema, Sean Philips, Johan Vos, Viktor Gamov, and Pratik Patel, as well as a few excellent Oracle speakers like Boris Scholl, Geertjan Wielenga, Joe Levy, Chad Arimura, Connor McDonald, and Dan McGhan. There goes the #FollowTuesday, by the way!

#5 Keynotes, Music, and Beer!

On Oct. 3, between 4pm and 6:30pm, we will hold the Oracle Code Keynote with Patrick Debois, who will talk about DevOps, and Oracle experts who will share how the cloud is changing the way developers build software. Plus, look for some special guests. Right after the keynote, there’s a concert on Howard Street with the folks from Royal Machines bringing the rock 'n' roll. Beer's on us!

#6 Docker, Containers, and Hands-on Labs!

Oracle Code has three unique hands-on lab opportunities on Tuesday that developers can access for free. Register for these hands-on labs and learn with experts while practicing your coding skills. Find out which labs are available and register for them after signing up for the event.

Also, join Docker engineers and developer advocates like Mano Marks, Eric Smalling, and Ben Bonnefoy at the Docker 101 hands-on lab on Tuesday at 8am. Plus, don’t miss “Modernizing Traditional Apps: Java Edition,” another session by Docker engineers. 

#7 T-Shirts, Exhibition Hall Access on All Days, and More

If all of the above isn’t enough, here's another tip: developers can go to the Moscone Center on Monday, collect their badges, and hang out in the Exhibition Hall. Pick up a t-shirt, start getting used to the venue and the people, and network with other developers around JavaOne and Oracle OpenWorld.

So hurry up and register now!

Related content

Podcast: Chatbot Development, First Steps and Lessons Learned - Part 1

Wed, 2017-09-20 13:02

Amid the hype and discussion around chatbots, people are actually diving in and developing chatbot services.  Chatbot development comes with a unique set of requirements and considerations that may prove challenging to those making their first excursion into this new breed of services. This podcast features a panel of developers who have been there, done that, and are willing to talk about it.

The panelists for this program were part of a pilot program that provided early access to the Intelligent Bot platform, which is now available as part of Oracle Mobile Cloud, Enterprise.

This program was recorded on Monday August 14, 2017.

The Panelists

Matt Hornung
Software Consultant, Fishbowl Solutions
Minneapolis, Minnesota

Leon Smiers
Oracle ACE, Solution Architect, Capgemini
Rotterdam, Netherlands

Martin Deh
Consulting Member, Technical Staff, Oracle

Coming Soon

Coming in October, the exploration of chatbot development continues with an entirely different panel. Oracle ACE Director Mia Urman of AuraPlayer, Peter Crew of MagiaCX Solutions, and Christoph Ruepprich of Accenture Enkitec Group discuss the triumphs and challenges in their first chatbot projects, and share their thoughts on working with Oracle's Intelligent Bot platform.

 Subscribe to the podcast