OTN TechBlog


Podcast: Combating Complexity: Fad, Fashion, and Failure in Software Development

Wed, 2018-03-21 18:40

There is little in our lives that does not rely on software. That has been the reality for quite some time, and it will be even more true as self-driving cars and similar technologies become an even greater part of our lives. But as our reliance on software grows, so does the potential for disaster as software becomes increasingly complex.

In September 2017 The Atlantic featured “The Coming Software Apocalypse,” an article by James Somers that offers a fascinating and sobering look at how rampant code complexity has caused massive failures in critical software systems, like the 2014 incident that left the entire state of Washington without 9-1-1 emergency call-in services until the problem was traced to software running on a server in Colorado.

The article suggests that the core of the complexity problem is that code is too hard to think about. When and how did this happen?  

“You have to talk about the problem domain,” says Chris Newcombe, “because there are areas where code clearly works fine.” Newcombe, one of the people interviewed for the Atlantic article, is an expert on combating complexity, and since 2014 he has been an architect on Oracle’s Bare Metal IaaS team.

“I used to work in video games,” Newcombe says. “There is lots of complex code in video games and most of them work fine. But if you're talking about control systems, with significant concurrency or affecting real-world equipment, like cars and planes and rockets or large-scale distribution systems, then we still have a way to go to solve the problem of true reliability. I think it's problem-domain specific. I don't think code is necessarily the problem. The problem is complexity, particularly concurrency and partial failure modes and side effects in the real world.”
 
Java Champion Adam Bien believes that in constrained environments, such as the software found in automobiles, “it's more or less a state machine which could or should be coded differently. So it really depends on the focus or the context. I would say that in enterprise software, code works well. The problem I see is more if you get newer ideas -- how to reshape the existing code quickly. But also coding is not just about code. Whether you write code or draw diagrams, the complexity will remain the same.”

Java Champion and microservices expert Chris Richardson agrees that “if you work hard enough, you can deliver software that actually works.” But he questions what is actually meant when software is described as “working well.”

“How successful are large software developments?” Richardson asks. “Do they meet requirements on time? Obviously that's a complex issue around project management and people. But what's the success rate?”

Richardson also points out that concerns about complexity are nothing new. “If you go back and look at the literature 30 or 40 years ago, people were concerned about software complexity then.”

The Atlantic article mentions that in most cases software does exactly what it was designed to do, an indication that it's not really a failure of the software as much as of the design of the software.

According to Developer Champion and Oracle ACE Director Lucas Jellema, “The complexity may not be in the software, but in the translation of the real-world problem or requirement into software. That starts not with coding, but with communication from one human being to another, from business end user to analyst to developer and maybe even some layers in between. That's where it usually goes wrong. In the end the software will do what the programmer told it to do, but that might not be what the business user or the real world requires it to do.”

Communication between stakeholders is only one aspect of the battle to reduce software complexity, and it’s just one issue among many that Chris Newcombe, Chris Richardson, Adam Bien, and Lucas Jellema discuss in this podcast. So settle in and listen.

This program was recorded on November 22, 2017.

The Panelists

(In alphabetical order)

Adam Bien
Java Champion
Oracle ACE Director
Twitter

Lucas Jellema
CTO, AMIS Services
Oracle Developer Champion
Oracle ACE Director
Twitter | LinkedIn

Chris Newcombe
Architect, Oracle Bare Metal IaaS Team
LinkedIn

Chris Richardson
Founder, Eventuate, Inc.
Java Champion
Twitter | LinkedIn

Additional Resources Coming Soon
  • AI Beyond Chatbots: How is AI being applied to modern applications?
  • Microservices, API Management, and Modern Enterprise Software Architecture

Running Spring Boot in a Docker container on OpenJDK, Oracle JDK, Zulu on Alpine Linux, Oracle ...

Sat, 2018-03-17 13:30

(Originally published on  javaoraclesoa.blogspot.com)

Spring Boot is great for running inside a Docker container. Spring Boot applications ‘just run’: an embedded servlet engine makes them independent of application servers, and the Spring Boot Maven plugin makes it easy to create a JAR file that contains all required dependencies. This JAR file can be run with a single command line like ‘java -jar SpringBootApp.jar’. To run it in a Docker container, you only require a base OS and a JDK. In this blog post I’ll give examples of how to get started with different OSs and different JDKs in Docker. I’ll finish with an example of how to build a Docker image with a Spring Boot application in it.

Getting started with Docker

Installing Docker

Of course you need a Docker installation. I won’t go into details here, but in short:

Oracle Linux 7

yum-config-manager --enable ol7_addons
yum-config-manager --enable ol7_optional_latest
yum install docker-engine
systemctl start docker
systemctl enable docker

Ubuntu

curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
apt-get update
apt-get install docker-ce

You can add a user to the docker group or give it sudo docker rights. Note that both options effectively allow the user to become root on the host OS.
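As a sketch of the group-based approach (the user name 'developer' is a placeholder of mine; this must run as root, and remember it is effectively equivalent to giving that user root on the host):

```shell
#!/bin/sh
# Give a user non-root access to the Docker daemon by adding them to the
# 'docker' group. Skipped silently when not running as root.
DOCKER_USER=developer   # placeholder; substitute the real user name

if [ "$(id -u)" -eq 0 ]; then
  groupadd -f docker                    # create the group if it does not exist
  if usermod -aG docker "$DOCKER_USER"; then
    echo "Added $DOCKER_USER to the docker group (re-login required)."
  fi
else
  echo "Not running as root; no changes made."
fi
```

The group membership only takes effect after the user logs out and back in.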

Running a Docker container

See below for commands you can execute to start containers in the foreground or background and to access them. For ‘mycontainer’ in the examples below, you can fill in any name you like. The name of the image can be found in the descriptions further below. For example, this can be container-registry.oracle.com/os/oraclelinux:7 for an Oracle Linux 7 image when using the Oracle Container Registry, or store/oracle/serverjre:8 for a JRE image from the Docker Store.

If you are using the Oracle Container Registry (for example to obtain Oracle JDK or Oracle Linux docker images) you first need to

  • go to container-registry.oracle.com and enable your OTN account to be used
  • go to the product you want to use and accept the license agreement
  • do docker login -u username -p password container-registry.oracle.com

If you are using the Docker Store, you first need to

  • go to store.docker.com and create an account
  • find the image you want to use. Click Get Content and accept the license agreement
  • do docker login -u username -p password

To start a container in the foreground

docker run --name mycontainer -it imagename /bin/sh

To start a container in the background

docker run --name mycontainer -d imagename tail -f /dev/null

To ‘enter’ a running container:

docker exec -it mycontainer /bin/sh

/bin/sh exists in Alpine Linux, Oracle Linux and Ubuntu. For Oracle Linux and Ubuntu you can also use /bin/bash. ‘tail -f /dev/null’ is used to start a ‘bare OS’ container with no other running processes to keep it running. A suggestion from here.

Cleaning up
It is good to know how to clean up your images/containers after having played around with them. See here.

#!/bin/bash
# Delete all containers
docker rm $(docker ps -a -q)
# Delete all images
docker rmi $(docker images -q)

Options for JDK

Of course there are more options for running JDKs in Docker containers; these are just some of the more commonly used ones.

Oracle JDK on Oracle Linux

When you’re running in the Oracle Cloud, you have probably noticed that the OS running beneath it is often Oracle Linux (currently often version 7.x). Application Container Cloud Service, for example, uses the Oracle JDK. If you want to run in a similar environment locally, you can use Docker images. Note that the Oracle Server JRE contains more than a regular JRE but less than a complete JDK. Oracle recommends using the Server JRE whenever possible instead of the JDK, since the Server JRE has a smaller attack surface. Read more here. For questions about the roadmap and support, read the following blog article.

store.docker.com

The steps to obtain Docker images for Oracle JDK / Oracle Linux from store.docker.com are as follows:

Create an account on store.docker.com. Go to https://store.docker.com/images/oracle-serverjre-8. Click Get Content. Accept the agreement and you’re ready to login, pull and run.

#use the store.docker.com username and password
docker login -u yourusername -p yourpassword
docker pull store/oracle/serverjre:8
#To start in the foreground:
docker run --name jre8 -it store/oracle/serverjre:8 /bin/bash

container-registry.oracle.com

You can use the image from the container registry. First, same as for just running the OS, enable your OTN account and login.

#use your OTN username and password
docker login -u yourusername -p yourpassword container-registry.oracle.com
docker pull container-registry.oracle.com/java/serverjre:8
#To start in the foreground:
docker run --name jre8 -it container-registry.oracle.com/java/serverjre:8 /bin/bash

OpenJDK on Alpine Linux

When running Docker containers, you want them to be as small as possible to allow quick starting, stopping, downloading, scaling, etc. Alpine Linux is a suitable Linux distribution for small containers and is used quite often. There can be some threading challenges with Alpine Linux though; see for example here and here. Running OpenJDK on Alpine Linux in a Docker container is easier than you might think. You don’t require any specific account for this, and no login. Note that when you pull openjdk:8, you will get a Debian 9 image. To run on Alpine Linux instead, you can do

docker pull openjdk:8-jdk-alpine

Next you can do

docker run --name openjdk8 -it openjdk:8-jdk-alpine /bin/sh

Zulu on Ubuntu Linux

 

You can also consider OpenJDK-based JDKs such as Azul’s Zulu. This works mostly the same; only the image name differs, e.g. ‘azul/zulu-openjdk:8’. The Zulu images are Ubuntu based.

Do it yourself

Of course you can also create your own image with a JDK. See for example here. This requires you to download the JDK and build the image yourself, but it is quite easy.

Spring Boot in a Docker container

Creating a container with a Spring Boot application, based on an image that already has a JDK in it, is easy. This is described here. You can create a simple Dockerfile like:

FROM openjdk:8-jdk-alpine
VOLUME /tmp
ARG JAR_FILE
ADD ${JAR_FILE} app.jar
ENTRYPOINT ["java","-Djava.security.egd=file:/dev/./urandom","-jar","/app.jar"]

The FROM image can also be an Oracle JDK or Zulu JDK image as mentioned above.

Then add a dependency on com.spotify.dockerfile-maven-plugin, plus some configuration, to your pom.xml file to automate building the Docker image once you have the Spring Boot JAR file. See here for a complete example pom.xml and Dockerfile. The relevant part of the pom.xml file is below.

<build>
  <finalName>accs-cache-sample</finalName>
  <plugins>
    <plugin>
      <groupId>org.springframework.boot</groupId>
      <artifactId>spring-boot-maven-plugin</artifactId>
    </plugin>
    <plugin>
      <groupId>com.spotify</groupId>
      <artifactId>dockerfile-maven-plugin</artifactId>
      <version>1.3.6</version>
      <configuration>
        <repository>${docker.image.prefix}/${project.artifactId}</repository>
        <buildArgs>
          <JAR_FILE>target/${project.build.finalName}.jar</JAR_FILE>
        </buildArgs>
      </configuration>
    </plugin>
  </plugins>
</build>

To actually build the Docker image, which allows using it locally, you can do:

mvn install dockerfile:build

If you want to distribute it (allow others to easily pull and run it), you can push it with

mvn install dockerfile:push

This will of course only work if you’re logged in as maartensmeets, and (in this example) only for Docker Hub. The screenshot below was taken after pushing the image to hub.docker.com. You can find it there since it is public.

#Running the container
docker run -t maartensmeets/accs-cache-sample:latest

DevOps Meets Monitoring and Analytics

Mon, 2018-03-05 11:33

Much has been said about the role new technologies play in supporting DevOps, like automation and machine learning. My colleague Padmini Murthy wrote “DevOps Meets Next Gen Technologies”. In that post, Padmini does a great job discussing the DevOps ecosystem, partly based on a recent DevOps.com survey.

New technologies are rapidly shaping the way companies address Security and Application Performance Monitoring as well.

The same survey found that 57% of companies have already adopted modern monitoring, and another 36% plan to adopt it in the next 12 months. The major reasons are enhanced security, increased IT efficiency, and faster troubleshooting, as shown in the chart below.

Figure 1: “DevOps Meets Next Gen Technologies” by Devops.com; benefits and adoption profile for security, performance, and analytics monitoring.

Traditional IT practices would suggest that application and security monitoring are oil and water: they don’t mix. Those responsible for applications and those responsible for IT security think and work dramatically differently. Here too, the landscape is changing rapidly. The rapid proliferation of mobile and web applications built on modular microservices architectures (or the like) means monitoring needs to be agile and automatic. At the same time, security strategies need to go beyond a good firewall, intrusion detection, and identity management.

What have emerged are commonalities between security and performance monitoring.  Both are using real-time monitoring of transactions through the entire stack.  Both are using machine learning to translate massive amounts of data into IT and security insights in real time.  Both are correlating data across an entire transaction in real time to quickly find performance or security issues.  Both are summarizing normal and abnormal behavior automatically to identify what’s important to view and what’s normal behavior.

This is what’s behind the design for Oracle Management Cloud.  It unifies all the metadata and log files in the cloud.  It normalizes the information on a big data analytics platform and applies machine learning algorithms to deliver IT Ops and Security dashboards pre-built specifically for security and performance teams with insights in real time, and automatically.

Figure 2: Oracle Management Cloud provides an integrated platform for security and performance monitoring.

Here are some lessons we’ve learned working with customers on DevOps efforts:

  1. Stop denying there is a problem. Ops teams are constantly bombarded by “false Signal” alerts.  They want better intelligence sooner about performance and security anomalies and threats. Read this Profit Magazine article to learn more about what Oracle is doing to help customers defend against ever-changing security and performance threats.
  2. Eliminate operational information silos so you eliminate finger pointing. Put your operational data (security, performance, configuration, etc.) in one place, and let today’s machine-learning-powered tools do the heavy lifting for you. You will reduce finger pointing, troubleshoot faster, and you may be able to eliminate the “war room” entirely. Watch this video to hear what one Oracle customer says about the power of machine learning.

Figure 3: Why Machine Learning is a key enabler for cloud-based monitoring.

  3. Monitor what (really) matters – your actual end-users. Over 70% of IT issues are end-user complaints. This can hinder the Ops team’s ability to respond to important issues. Look at this infographic highlighting the value of application and end-user monitoring. Figure 4 pinpoints why traditional monitoring tools miss the mark when it comes to delivering value.

Figure 4: End-user and application performance monitoring are key to a successful monitoring strategy.

  4. It’s in the logs! Logs are everywhere, but most organizations don’t use them because they are overwhelmed with the amount of data involved. Next-generation management clouds that are designed to ingest big data at enterprise-scale can cope with today’s log data volume and velocity. Check out this infographic for more details on Oracle Management Cloud’s Log Analytics service.

Figure 5: Key challenges with using logs to troubleshoot issues.

  5. Planning is an everyday activity. Leverage analytical capabilities against your unified store of operational information to answer a variety of forward-looking questions to improve security posture, application performance, and resource utilization. If you’ve followed my advice in steps 1 through 4 above, you already have all the data you need. Now it’s time to use it.

Further resources on Oracle Management Cloud:

Three Quick Tips: API Platform CS - Gateway Installation (Part 2)

Wed, 2018-02-28 02:00

This is Part 2 of the blog series (the first part can be accessed here). The aim of this blog post is to provide useful tips that will help with installation of the on-premises Gateway for Oracle API Platform Cloud Service. If you want to know more about the product, you can refer here.

The following tips are based on some of the scenarios we have observed in production.

The first issue concerns low entropy, which can cause gateway domain creation or startup to block. Essentially, to get past the entropy problem, you need to do the following (for Linux):

  • Check the current entropy count by executing:

        cat /proc/sys/kernel/random/entropy_avail

  • If the entropy is low, you can do either of the following:
      • export CONFIG_JVM_ARGS=-Djava.security.egd=file:/dev/./urandom
      • Install the rngd tool (if not present) and execute:

            rngd -r /dev/urandom -o /dev/random -b

  • You can now proceed with the gateway domain creation or domain startup.
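As a sketch, the entropy check and remediation can be combined into one script (the 1000-bit threshold is an illustrative value of mine, not an official recommendation):

```shell
#!/bin/sh
# Check the available kernel entropy before gateway domain creation/startup.
ENTROPY=$(cat /proc/sys/kernel/random/entropy_avail)
echo "entropy_avail: $ENTROPY"

if [ "$ENTROPY" -lt 1000 ]; then
  # Option 1: point the JVM at the non-blocking pool.
  CONFIG_JVM_ARGS="-Djava.security.egd=file:/dev/./urandom"
  export CONFIG_JVM_ARGS
  echo "Low entropy; exported CONFIG_JVM_ARGS for the domain scripts."
  # Option 2 (alternative): feed /dev/random from /dev/urandom with rngd:
  #   rngd -r /dev/urandom -o /dev/random -b
fi
```

Run this in the same shell session (or source it) before starting the domain, so that CONFIG_JVM_ARGS is visible to the WebLogic scripts.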

 

  • It is possible to generate the gateway properties from the API Portal UI. Please try to leverage this functionality and download the generated properties file onto the on-premises machine. This will significantly reduce the effort of hand-crafting the properties file, which is critical for the gateway installation process. Please refer here for more details.

 

  • If you encounter scenarios where failures in the "configure" action look something like:

64040: Specified template does not exist or is not a file: "/d01/apipcs/app/oracle/gateway/run/build/apiplatform_gateway-services_template.jar".
64040: Provide a valid template location.
at com.oracle.cie.domain.script.jython.CommandExceptionHandler.handleException(CommandExceptionHandler.java:56)
at com.oracle.cie.domain.script.jython.WLScriptContext.handleException(WLScriptContext.java:2279)
at com.oracle.cie.domain.script.jython.WLScriptContext.addTemplate(WLScriptContext.java:793)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)    

The above is just an example, but any “file not found” kind of error during the “configure” action is an indication that the previous step (the “install” action) did not complete successfully. Please refer to “gatewayInstall.log” and “main.log”, as these will point to why the install had errors, even if the install process appeared to complete.

So that is all for today. We will be back with more tips soon. Happy API management with Oracle API Platform Cloud Service.

Digital Transformation - Oracle API Platform Cloud Service

Questions on DevOps, Graal, APIs, Git? Champs Have Answers at Oracle Code Los Angeles

Fri, 2018-02-23 06:16

If you had technical questions about API design, for instance, or about date types in relational databases, or about DevOps bottlenecks, or about using Graal or Git,  you’d look for answers from someone with an abundance of relevant expertise, right? A champ in that particular topic.

As it happens, if you do indeed have questions on any of those topics, the Oracle Code event in Los Angeles on February 27 represents a unique opportunity for you to connect with a Developer Champion who can set you straight. Register now for Oracle Code Los Angeles, and put these sessions by Oracle Developer Champions on your schedule.

A Research Study Into DevOps Bottlenecks
Presented by: Baruch Sadogursky, Developer Advocate, JFrog
1:10 p.m.  - 1:55 p.m.  San Jose Room

Think DevOps is just so much hype? Guess again! “DevOps is among the non-hypish methodologies that really help,” said Developer Champion Baruch Sadogursky in a recent podcast. “It’s here to stay because it is another step toward faster and better integration between stakeholders in the delivery process.” But taking that step trips up some organizations. In this session Baruch dives deep into the results of a poll of Fortune 500 software delivery leaders to determine what’s causing the bottlenecks that are impeding their DevOps progress, and to find solutions that will set them back on the path.

Graal: How to Use the New JVM JIT Compiler in Real Life
by Christian Thalinger, Staff Software Engineer, Twitter, Inc.
2:10 p.m. - 2:55 p.m. San Francisco Room

Is Graal on your radar? It should be. It’s a new JVM JIT compiler that could become the default HotSpot JIT compiler, according to Developer Champion Christian Thalinger. But that kind of transition isn’t automatic. “One of the biggest mistakes people make when benchmarking Graal is that they assume they could use the same metrics as for C1 and C2,” explains Christian. “Some people just measure overall time spent in GC, and that just doesn’t work. I’ve seen the same being done to overall time spent for JIT compilations. You can’t do that.” What can you do with Graal? Christian’s session will look at how it works and what it can do for you.

Tackling Time Troubles - About Date Types in Relational Databases
by Bjoern Rost, Principal Consultant, The Pythian Group Inc
2:10 p.m. - 2:55 p.m. Sacramento Room

The thing about time is that it’s always passing, and there never seems to be enough of it. Things get even more complicated when it comes to dealing with time-related data in databases. While your mobile phone might easily handle leap years, time zones, or seasonal time changes, those issues can cause runtime errors, SQL code headaches, and other database problems you’d rather avoid. In this session Developer Champion Bjoern Rost will discuss best practices that will help you dodge some of the time data issues that can increase your aspirin intake. Put this session on your schedule and learn how to have an easier time when dealing with time data.

Best Practices for API Design Using Oracle APIARY
by Rolando Carrasco, Fusion Middleware Director, S&P Solutions
Leonardo Gonzalez Cruz, OFMW Architect, S&P Solutions
3:05 p.m. - 3:50 p.m. San Jose Room

Designing and developing APIs is an important part of modern development. But if you’re not applying good design principles, you’re headed for trouble. “We are living in an API world, and you cannot play in this game with poor design principles,” says Developer Champion Rolando Carrasco. In this session, Rolando and co-presenter Leonardo Gonzalez Cruz will define what an API is, examine what distinguishes a good API, discuss the design principles that are necessary to build stable, scalable, secure APIs, and also look at some of the available tools. Whether you’re an API producer or an API consumer, you’ll want to take in this session.

Git it! A Primer To The Best Version Control System
by Bjoern Rost, Principal Consultant, The Pythian Group Inc
Stewart Bryson, owner and co-founder, Red Pill Analytics
4:20 p.m. - 5:05 p.m.  San Francisco Room

Git, the open source version control system, already has a substantial following. But whether you count yourself among those fans, or you’re new and ready to get on board, this session by Bjoern Rost and Oracle ACE Director Stewart Bryson will walk you through setting up your own Git repository, and discuss cloning, syncing, using and merging branches, integrating with CI/CD systems, and other hot Git tips. Don’t miss this opportunity to sharpen your Git skills.

Of course, the sessions mentioned above are just 5 among 31 sessions, labs, and keynotes that are part of the overall Oracle Code Los Angeles agenda.

Don’t miss Oracle Code Los Angeles

Tuesday, February 27, 2018
7:30am - 6:00pm
The Westin Bonaventure Hotel and Suites
404 S Figueroa St.
Los Angeles, CA  90071
Register Now!

Learn about other events in the Oracle Code 2018 series
 


Podcast: DevOps in the Real World: Culture, Tools, Adoption

Tue, 2018-02-20 17:38

Among technology trends DevOps is certainly generating its share of heat. But is that heat actually driving adoption? “I’m going to give the answer everyone hates: It depends,” says Nicole Forsgren, founder and CEO of DevOps Research and Assessment LLC. “It depends on where each team is, on where the organization is. I talk to people all over the industry, and I work with organizations all over the industry, and everyone is at a very different place.”

Some of the organizations Nicole has spoken with are pushing the DevOps envelope. “They’re almost squeezing blood out of a stone, finding ways to optimize things that have been optimized at the very edge. They’re doing things that most people can’t even comprehend.” Other organizations aren't feeling it. "There’s no DevOps,” says Nicole. “DevOps is nowhere near on their radar.”

Some organizations that had figured out DevOps stumbled a bit when the word came down to move everything to the cloud, explains Shay Shmeltzer, product management director for Oracle Cloud Development tools. “A lot of them need to rethink how they’re doing stuff, because cloud actually simplifies DevOps to some degree. It makes the provisioning of environments and getting stuff up and down much easier and quicker in many cases.”

As Nicole explains, “DevOps is a technology transformation methodology that makes your move into the cloud much more sticky, much more successful, much more effective and efficient to deliver value, to realize cost-savings. You can get so much more out of the technology that you are using and leveraging, so that when you do move to the cloud, everything is so much better. It’s almost a chicken and egg thing. You need so much of it together.”

However, that value isn’t always apparent to everyone. Kelly Shortridge, product manager at SecurityScorecard, observes that some security stakeholders, “feel they don’t have a place in the DevOps movement.” Some security teams have a sense that configuration management will suffice. “Then they realize that they can’t just port existing security solutions or existing security methodologies directly into agile development processes,” explains Kelly. “You have the opportunity to start influencing change earlier in the cycle, which I think was the hype. Now we’re at the Trough of Disillusionment, where people are discovering that it’s actually very hard to integrate properly, and you can’t just rely on technology for this shift. There also has to be a cultural shift, as far as security, and how they think about their interactions with engineers.” In that context Kelly sees security teams wrestling with how to interact within the organization.

But the value of DevOps is not lost on other roles and disciplines. It depends on how you slice it, explains Leonid Igolnik, member and angel investor with Sand Hill Angels, and founding investor, advisor, and managing partner with Batchery. He observes that DevOps progress varies across different industry subsets and different disciplines, “whether it’s testing, development, or security.”

“Overall, I think we’re reaching the Slope of Enlightenment, and some of those slices are reaching the Plateau of Productivity,” Leonid says.

Alena Prokharchyk began her journey into DevOps three years ago when she started her job as principal software engineer at Rancher Labs, whose principal product targets DevOps. “That actually forced me to look deeper into DevOps culture,” she says. “Before that I didn’t realize that such problems existed to this extent. That helped me understand certain aspects of the problem. Within the company, the key for me was communication with the DevOps team. Because if I’m going to develop something for DevOps, I have to understand the problems.”

If you’re after a better understanding of challenges and opportunities DevOps represents, you’ll want to check out this podcast, featuring more insight on adoption, cultural change, tools and other DevOps aspects from this collection of experts.

The Panelists

(Listed alphabetically)

Nicole Forsgren
Founder and CEO, DevOps Research and Assessment LLC
Twitter | LinkedIn

Leonid Igolnik
Member and Angel Investor, Sand Hill Angels
Founding Investor, Advisor, Managing Partner, Batchery
Twitter | LinkedIn

Alena Prokharchyk
Principal Software Engineer, Rancher Labs
Twitter | LinkedIn

Baruch Sadogursky
Developer Advocate, JFrog
Twitter | LinkedIn

Shay Shmeltzer
Director of Product Management, Oracle Cloud Development Tools
Twitter | LinkedIn

Kelly Shortridge
Product Manager, SecurityScorecard
Twitter | LinkedIn

Additional Resources Coming Soon
  • Combating Complexity
    An article in the September 2017 edition of the Atlantic warned of The Coming Software Apocalypse. Oracle's Chris Newcombe was interviewed for that article. In this podcast Chris joins Chris Richardson, Adam Bien, and Lucas Jellema to discuss heading off catastrophic software failures.
  • AI Beyond Chatbots
    How is Artificial Intelligence being applied to modern applications? What are the options and capabilities? What patterns are emerging in the application of AI? A panel of experts provides the answers to these and other questions.
Subscribe

Never miss an episode! The Oracle Developer Community Podcast is available via:


If you’re after a better understanding of the challenges and opportunities DevOps represents, you’ll want to check out this podcast, featuring more insight on adoption, cultural change, tools, and other aspects of DevOps from this collection of experts.

The Panelists

(Listed alphabetically)

Nicole Forsgren
Founder and CEO, DevOps Research and Assessment LLC

Leonid Igolnik
Member and Angel Investor, Sand Hill Angels
Founding Investor, Advisor, Managing Partner, Batchery

Alena Prokharchyk
Principal Software Engineer, Rancher Labs

Baruch Sadogursky
Developer Advocate, JFrog

Shay Shmeltzer
Director of Product Management, Oracle Cloud Development Tools

Kelly Shortridge
Product Manager, SecurityScorecard

Additional Resources Coming Soon
  • Combating Complexity
    An article in the September 2017 edition of the Atlantic warned of The Coming Software Apocalypse. Oracle's Chris Newcombe was interviewed for that article. In this podcast Chris joins Chris Richardson, Adam Bien, and Lucas Jellema to discuss heading off catastrophic software failures.
  • AI Beyond Chatbots
    How is Artificial Intelligence being applied to modern applications? What are the options and capabilities? What patterns are emerging in the application of AI? A panel of experts provides the answers to these and other questions.

Oracle Code is back – Bigger and Better!

Fri, 2018-02-16 16:24

2018 is yet another great year for developers! Oracle’s awesome global developer conference series, Oracle Code, is back – and it’s bigger and better!

In 2017 Oracle ran the first series of Oracle Code developer conferences. Across more than 20 cities around the globe, the series attracted over 10,000 developers, giving them the opportunity to learn new skills, network with peers, and take home some great memories. Following that success, Oracle is about to run another 14 events worldwide, kicking off in late February in Los Angeles. Best of all, both attendance and speaking at the conferences are completely free of charge, true to Oracle's commitment to the developer community. Across four continents you will get to hear about everything that is hot in the industry: Blockchain, Containers, Microservices, API Design, Machine Learning, AI, Mobile, Chatbots, Databases, Low Code Development, trendy programming languages, CI/CD, DevOps, and much more will be front and center at Oracle Code.

Throughout the one-day events, which provide space for 500 people each, developers can share their experiences, participate in hands-on labs, talk to subject matter experts and, most importantly, have a lot of fun in the Oracle Code Lounge.

IoT Cloud Brewed Beer

Got a few minutes to try the IoT Cloud Brewed Beer from a local microbrewery? Extend manufacturing processes and logistics operations quickly using data from connected devices. Tech behind the brew: IoT Production Monitoring, IoT Asset Monitoring, Big Data, Event Hub, Oracle JET.


3D Builder Playground

Create your own sculptures and furniture with the 3D printer and help complete the furniture created using a Java constructive geometry library. The Oracle technology used is Application Container Cloud running a Visual IDE, and Java SE running the JSCG library.

Oracle Zip Labs Challenge

Want some bragging rights and to win prizes at the same time? Sign up for a 15-minute lab on Oracle Cloud content and see your name on the leaderboard as the person to beat in Oracle Zip Labs Challenge.

IoT Workshop

Interact and exchange ideas with other attendees at the IoT Workshop spaces. Get your own Wi-Fi microcontroller and connect to Oracle IoT Cloud Service. Oracle Developer Community is partnering with AppsLab and the Oracle Applications Cloud User Experience emerging technologies team to make these workshops happen.

Robots Rule with Cloud Chatbot Robot

Ask NAO the robot to do Tai Chi, or ask "who brewed the beers?" So how does NAO do what it does? It uses the Intelligent Bot API on Oracle Mobile Cloud Service to understand your command, and responds by speaking back to you.

Dev Live

The Oracle Code crew also thought of the folks who aren't lucky enough to attend Oracle Code in person: Dev Live is a series of live interviews from Oracle Code, streamed online across the globe, so that everyone can watch developers and community members share their experiences.

Register NOW!

Register now for an Oracle Code event near you at: https://developer.oracle.com/code

Have something interesting that you did and want to share it with the world? Submit a proposal in the Call for Papers at: https://developer.oracle.com/code/cfp





See you at Oracle Code!

Announcing Packer Builder for Oracle Cloud Infrastructure Classic

Wed, 2018-02-14 10:30

HashiCorp Packer 1.2.0 adds native support for building images on Oracle Cloud Infrastructure Classic.

Packer is an open source tool for creating machine images across multiple platforms from a single source configuration. With the new oracle-classic builder, Packer can now build new application images directly on Oracle Classic Compute, similar to the oracle-oci builder. New images can be created from an Oracle-provided base OS image, an existing private image, or an image that has been installed from the Oracle Cloud Marketplace.

Note: Packer can also create Oracle Cloud Infrastructure Classic compatible machine images using the VirtualBox builder, and this approach remains useful when building new base OS images from ISOs; see Creating Oracle Compute Cloud Virtual Machine Images using Packer.

oracle-classic Builder Example

This example creates a new image with Redis installed, using an existing Ubuntu image as the base OS.

Create a Packer configuration file, redis.json:
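A minimal redis.json along the following lines would do it. The builder keys are those documented for Packer's oracle-classic builder; all values are placeholders, and the exact name of the Ubuntu source image list will vary by account.

```json
{
  "builders": [{
    "type": "oracle-classic",
    "api_endpoint": "https://api-z99.compute.us0.oraclecloud.com",
    "identity_domain": "YOUR_IDENTITY_DOMAIN",
    "username": "YOUR_USERNAME",
    "password": "YOUR_PASSWORD",
    "source_image_list": "YOUR_UBUNTU_IMAGE_LIST",
    "dest_image_list": "redis",
    "image_name": "redis-{{timestamp}}",
    "shape": "oc3",
    "ssh_username": "ubuntu"
  }],
  "provisioners": [{
    "type": "shell",
    "inline": ["sudo apt-get update", "sudo apt-get install -y redis-server"]
  }]
}
```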

Now run Packer to build the image:
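With the configuration in place, the build itself is a single command:

```
packer build redis.json
```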

After Packer completes, the new image is available in the Compute Classic console for launching new instances.

See also

For building Oracle Cloud Infrastructure images see:

Three Quick Tips API Platform CS - Gateway Installation (Part 1)

Tue, 2018-02-13 16:00

This blog post assumes some prior knowledge of API Platform Cloud Service and pertains to the on-premises gateway installation steps. Here are three useful tips (applicable to 18.1.3+), in no particular order:

  • Before installing the gateway, make sure you have the correct values for "listenIpAddress" and "publishAddress". Verify them with the following checklist (Linux only):
    • Does the command "hostname -f" return a valid value?
    • Does the command "ifconfig" list the IP addresses properly?
    • Do you have additional firewall/network policies that may prevent communication with the management tier?
    • Do you authoritatively know the internal and public IP addresses to be used for the gateway node?

    If you do not know the answer to any of these questions, please contact your network administrator.

    If the gateway server does not start up properly, incorrect values for "listenIpAddress" and "publishAddress" are a likely cause.

  • Before running the "creategateway" action (or any other action that involves "creategateway", such as "create-join"), make sure that the management tier is accessible. You can use something like:
    • wget "<http|https>://<management_portal_host>:<management_portal_port>/apiplatform"
    • curl "<http|https>://<management_portal_host>:<management_portal_port>/apiplatform"

    If these checks fail, then "creategateway" will not work either, so the questions to ask are:

  1. Do we need a proxy?
  2. If we have already specified a proxy, is it the correct proxy?
  3. If we need a proxy, have we set the "managementServiceConnectionProxy" property in gateway-props.json?

Moreover, if proxies are applicable, it is best to also set the http_proxy/https_proxy environment variables to the correct proxy.
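For example, a connectivity check with the proxy set might look like this (host, port, and proxy values here are purely illustrative placeholders):

```
# Illustrative values only; substitute your management portal host, port, and proxy
export https_proxy=http://proxy.example.com:80
curl -v "https://mgmt.example.com:443/apiplatform"
```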

  • Know your log locations:
    • To troubleshoot "install" or "configure" actions, refer to the <install_dir>/logs directory.
    • To troubleshoot "start" or "stop" actions, refer to <install_dir>/domain/<gateway_name>/(start*.out|stop*.out).
    • To troubleshoot "create-join"/"join" actions, refer to the <install_dir>/logs directory.
    • To troubleshoot issues post installation (i.e., after the physical node has joined the gateway), refer to the <install_dir>/domain/<gateway_name>/apics/logs directory.

We will try to post more tips in the coming weeks, so stay tuned, and happy API management!

Announcing the Oracle Vagrant boxes GitHub repository

Mon, 2018-02-12 13:23

Today we are pleased to announce the launch of a new GitHub repository to build Oracle software Vagrant boxes: https://github.com/oracle/vagrant-boxes

Vagrant provides an easy and fully automated way of setting up a developer environment. In conjunction with Oracle’s VirtualBox, Vagrant is a powerful tool for creating a sandbox environment inside a virtual machine. With this announcement, we introduce this powerful automation to users worldwide as a streamlined way for creating virtual machines with Oracle software fully configured and ready to go inside of them. This is yet another in a series of steps for making the lives of developers easier and more productive.
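For context, a Vagrantfile is just a short Ruby DSL file that describes the desired VM. A minimal illustrative sketch (not the repository's actual file; the box URL matches the Oracle Linux box used in the transcript below, the rest is assumption) might look like:

```ruby
# Minimal illustrative Vagrantfile; not the repository's actual configuration
Vagrant.configure("2") do |config|
  # Oracle Linux 7 base box published on yum.oracle.com
  config.vm.box     = "ol7-latest"
  config.vm.box_url = "http://yum.oracle.com/boxes/oraclelinux/latest/ol7-latest.box"

  config.vm.provider "virtualbox" do |vb|
    vb.memory = 2048   # illustrative sizing
  end
end
```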

Getting started is quick and easy! If you have not done so yet, you will need to download and install the following:

Once you have installed those two components you can go ahead and clone/download the GitHub repository and create your own Vagrant boxes. Getting an Oracle Linux virtual machine is as simple as follows:

  1. Clone (or download) the GitHub repository:

gvenzl-mac:vagrant gvenzl$ git clone https://github.com/oracle/vagrant-boxes
Cloning into 'vagrant-boxes'...
remote: Counting objects: 74, done.
remote: Total 74 (delta 0), reused 0 (delta 0), pack-reused 74
Unpacking objects: 100% (74/74), done.

  2. Go into the OracleLinux subfolder:

gvenzl-mac:vagrant gvenzl$ cd vagrant-boxes/OracleLinux/

  3. Type “vagrant up” and wait for your VM to be provisioned:

gvenzl-mac:OracleLinux gvenzl$ vagrant up
Bringing machine 'default' up with 'virtualbox' provider...
==> default: Box 'http://yum.oracle.com/boxes/oraclelinux/latest/ol7-latest.box' could not be found. Attempting to find and install...
    default: Box Provider: virtualbox
    default: Box Version: >= 0
==> default: Box file was not detected as metadata. Adding it directly...
==> default: Adding box 'http://yum.oracle.com/boxes/oraclelinux/latest/ol7-latest.box' (v0) for provider: virtualbox
    default: Downloading: http://yum.oracle.com/boxes/oraclelinux/latest/ol7-latest.box
==> default: Successfully added box 'http://yum.oracle.com/boxes/oraclelinux/latest/ol7-latest.box' (v0) for 'virtualbox'!
==> default: Importing base box 'http://yum.oracle.com/boxes/oraclelinux/latest/ol7-latest.box'...
==> default: Matching MAC address for NAT networking...
==> default: Setting the name of the VM: ol7-vagrant
==> default: Clearing any previously set network interfaces...
==> default: Preparing network interfaces based on configuration...
    default: Adapter 1: nat
==> default: Forwarding ports...
    default: 22 (guest) => 2220 (host) (adapter 1)
    default: 22 (guest) => 2222 (host) (adapter 1)
==> default: Running 'pre-boot' VM customizations...
==> default: Booting VM...
==> default: Waiting for machine to boot. This may take a few minutes...
    default: SSH address: 127.0.0.1:2222
    default: SSH username: vagrant
    default: SSH auth method: private key
    default:
    default: Vagrant insecure key detected. Vagrant will automatically replace
    default: this with a newly generated keypair for better security.
    default:
    default: Inserting generated public key within guest...
    default: Removing insecure key from the guest if it's present...
    default: Key inserted! Disconnecting and reconnecting using new SSH key...
==> default: Machine booted and ready!
...
...
...
==> default: INSTALLER: Locale set
==> default: INSTALLER: Installation complete, Oracle Linux ready to use!

Once the machine is provisioned you are all set and ready to go. You can now simply ssh into the virtual machine by typing “vagrant ssh” and perform whatever tasks you would like. Once you are done, type “exit”, just like in any other ssh session:

gvenzl-mac:OracleLinux gvenzl$ vagrant ssh

Welcome to Oracle Linux Server release 7.4 (GNU/Linux 4.1.12-112.14.13.el7uek.x86_64)

The Oracle Linux End-User License Agreement can be viewed here:

* /usr/share/eula/eula.en_US

For additional packages, updates, documentation and community help, see:

* http://yum.oracle.com/

[vagrant@ol7-vagrant ~]$ uname -a
Linux ol7-vagrant 4.1.12-112.14.13.el7uek.x86_64 #2 SMP Thu Jan 18 11:38:29 PST 2018 x86_64 x86_64 x86_64 GNU/Linux
[vagrant@ol7-vagrant ~]$ exit
logout
Connection to 127.0.0.1 closed.
gvenzl-mac:OracleLinux gvenzl$

You can stop the virtual machine and reboot it any time by typing “vagrant halt” and “vagrant up”:

gvenzl-mac:OracleLinux gvenzl$ vagrant halt
==> default: Attempting graceful shutdown of VM...


gvenzl-mac:OracleLinux gvenzl$ vagrant up
Bringing machine 'default' up with 'virtualbox' provider...
==> default: Clearing any previously set forwarded ports...
==> default: Clearing any previously set network interfaces...
==> default: Preparing network interfaces based on configuration...
default: Adapter 1: nat
==> default: Forwarding ports...
default: 22 (guest) => 2220 (host) (adapter 1)
default: 22 (guest) => 2222 (host) (adapter 1)
==> default: Running 'pre-boot' VM customizations...
==> default: Booting VM...
==> default: Waiting for machine to boot. This may take a few minutes...
default: SSH address: 127.0.0.1:2222
default: SSH username: vagrant
default: SSH auth method: private key
==> default: Machine booted and ready!
[default] GuestAdditions 5.1.30 running --- OK.
==> default: Checking for guest additions in VM...
==> default: Setting hostname...
==> default: Mounting shared folders...
default: /vagrant => /Users/gvenzl/Downloads/vagrant/vagrant-boxes/OracleLinux
==> default: Machine already provisioned. Run `vagrant provision` or use the `--provision`
==> default: flag to force provisioning. Provisioners marked to run always will still run.
gvenzl-mac:OracleLinux gvenzl$

Last, if you would like to remove the VM altogether from your machine, you can do so by typing “vagrant destroy”. This will remove the entire VM and everything within it, so be careful with this command:

gvenzl-mac:OracleLinux gvenzl$ vagrant destroy
default: Are you sure you want to destroy the 'default' VM? [y/N] y
==> default: Forcing shutdown of VM...
==> default: Destroying VM and associated drives...

Going forward, Oracle will bring more and more Vagrant configuration files to this GitHub repository, which is driven in a fully open source fashion. Please provide comments and enhancement requests via the GitHub issues.

Also check out this cool video by Sergio Leunissen showing you how to set up a Docker sandbox using Oracle VM VirtualBox and Vagrant:

6 Ways Automated Security Becomes A Developer’s Ally

Wed, 2018-02-07 09:46

In a recent InfoWorld article, Siddhartha Agarwal, VP of Product Management at Oracle, outlined his top 10 predictions impacting application developers in 2018. In this blog, we’ll take a closer look at one of those predictions and how a cloud access security broker (CASB) service can help with it.

#10. Highly automated security and compliance efforts become a new ally of developers

Companies are increasingly adopting DevOps methodologies to accelerate their app development lifecycles in the cloud. Unfortunately, the common perception is that accelerating application lifecycles comes at the expense of security. That’s because security has traditionally been a discrete step in the application development lifecycle, taking weeks or months to certify an application for production use. There is no way such a delay can be incorporated into an agile CI/CD methodology. Security needs to be a continuous process linked to every step of DevOps.

Fortunately, artificial intelligence and machine learning have matured to the point that they can be used to automate much of application security. Developers can ensure that their applications and data are continuously monitored using a CASB service, and any threats, compliance violations, or security incidents are automatically detected and remediated. This lets app developers maintain their development velocity, while conforming to security and compliance standards. Let’s look at some key areas that can be protected with continuous visibility and monitoring with Oracle CASB. 

1. Enforcing Strong Application Configuration and Micro Segmentation

CASB can monitor application configurations to detect any changes and revert those automatically to the “golden” configuration, as well as alert relevant administrators. This enforcement may include configurations for network segmentation, DNS resolution, usage of secure or insecure network ports, and encryption settings for folders containing application data.

2. Enforcing Strong Access Control for Administrators

CASB can continuously monitor and enforce strong access control policies for administrators, including multi-factor authentication, strong password policies, and SSH key rotation. Any changes to these can be reverted automatically and alerted to relevant administrators.

3. Monitoring Admin Activity for Out-of-the-Ordinary Patterns

CASB uses machine learning to automatically learn “normal” or regular patterns of administrative activity, such as the login/logout times of administrators, locations/IP addresses where they typically login from, and types of changes they usually perform to the application. It can then send an alert on any deviations from these normal patterns, such as an admin logging in from a location, IP address, or device type that they’ve never used before. In addition, customers can also configure CASB to look for admin changes to specific areas, such as lists of authorized users or groups, starting or stopping of app instances, or changes to encryption settings of folders. For example, if an infiltrator attempts to use valuable compute or storage resources for malicious usage, CASB will immediately raise an alert.

4. Enforcing Data Security and Compliance

CASB can continuously scan application data to detect any files that violate the company’s compliance policy. For example, it can be configured to look for sensitive or confidential information, such as credit card or Social Security numbers. If found, CASB can automatically alert administrators and take remedial actions that prevent unauthorized access to the data.

5. Monitoring User Activity for Out-of-the-Ordinary Patterns

CASB uses machine learning to automatically detect unauthorized or malicious insider usage of the application. Similar to monitoring administrator activity, CASB uses machine learning to automatically learn normal patterns of regular user activity. Any deviations from these, such as users logging in from a location that they’ve never logged in from before, can automatically be alerted as being suspicious. On detecting suspicious activity, application access for the user can automatically be downgraded to prevent downloads, as an example, until the user has been able to prove their identity with further authentication.

6. Monitoring for Misuse of Escalated Privileges

Oftentimes, developers gain access to production resources for troubleshooting purposes such as debugging, bug fixing, or other maintenance. In many cases those privileges are never revoked, leaving those resources fully accessible to those developers even after the issues are resolved. CASB can monitor resources in production so that any access or modification is alerted to the respective administrators, who can then respond accordingly. CASB can also prevent changes, or revert them to the original state, thereby preventing unauthorized changes to production resources by such users.

Oracle CASB offers all of the capabilities listed above, and it can also be integrated with other enterprise systems, such as SIEM, Identity-as-a-Service (IDaaS), or IT Service Management applications. This ensures that companies can tightly integrate CASB into their existing Security Operations Center (SOC) workflows and enable CASB to raise tickets automatically for remediation.

Platform choice matters

Oracle has spent the last several years building and assembling the set of security and management services in the Oracle Cloud that together enable customers to build the Identity-centric Security Operations Center (Identity SOC). The Identity SOC platform leverages purpose-built machine learning against the full breadth of operational and security telemetry — including activity and configuration information as well as identity and asset context — to provide real-time threat detection across heterogeneous, hybrid cloud environments.  When potential or active threats are detected, automated remediation can be invoked to eliminate those threats.

 

Podcast: Women in Technology: Motivation and Momentum

Tue, 2018-02-06 10:39

According to the National Center for Women and Information Technology (NCWIT), while 57% of professional occupations in the US were held by women in 2016, women held only 26% of professional computing occupations. Correcting that imbalance is the right thing to do, of course. But there’s another dimension to the issue that raises the stakes for getting more women into IT jobs.

“We have 80,000 graduates every year coming out of college with computer science degrees,” says Kellyn Pot’Vin-Gorman, technical intelligence manager for the office of CTO at Delphix. But US colleges and universities can’t crank out computer science grads fast enough to meet demand. “Over a million technical jobs will be here by 2020, and we’ve got nobody to fill them,” Kellyn says.

Attracting more women into software development and other technical fields will help to fill the IT jobs that will otherwise go wanting. But, perhaps due to lingering gender bias, or simple oversight, effective communication of the opportunities doesn’t always happen. “No one told me that I could do this as a career,” says Michelle Malcher, a security architect at Extreme Scale Solutions in Chicago. “No one said, ‘you can have fun with code.’”

Now that Michelle is having fun with code, she, like Kellyn, puts significant time and effort into getting the word out about the opportunities and career potential for young women. But men also have a role in that mission. “Men need to be part of the conversation. It can’t just be women talking about women's issues,” says Natalie Delemar, a senior consultant with Ernst and Young and an active supporter of women in technology. “We need to have men at the table so that they understand the importance of these issues.”

Women and men can engage in mentoring and sponsorship activities that are important in getting more women into IT roles. Heli Helskyaho, CEO of Miracle Finland and a PhD student at the University of Helsinki, is one of two mentors recently elected by computer science students at that institution. “The faculty just decided that it's time to have mentorship in the university for the first time after all these years.”

But while mentoring and sponsorship are important, there are key differences. And, as Natalie observes, “women in the workplace are actually over mentored and under sponsored.”

Natalie explains that while mentoring typically focuses on career guidance and advice on educational matters, “sponsorship is when somebody actually uses their political capital to put you into positions of power to give you experiences to get ahead.”

Getting ahead is what the latest Oracle Developer Community podcast is all about, as Kellyn Pot'Vin-Gorman, Michelle Malcher, Natalie Delemar, and Heli Helskyaho, along with panel organizer and moderator Laura Ramsey, share insight on what motivated them in their IT careers, and how they lend their expertise and energy to driving momentum in the effort to draw more women into technology.

This panel discussion took place at Oracle OpenWorld in San Francisco on September 18, 2016.

The Panelists

(Listed alphabetically)

Natalie Delemar
Senior Consultant, Ernst and Young
President, ODTUG Board of Directors

Heli Helskyaho
CEO, Miracle Finland
Oracle ACE Director
Ambassador, EMEA Oracle Usergroups Community

Michelle Malcher
Security Architect, Extreme Scale Solutions
Oracle ACE Director

Kellyn Pot'Vin-Gorman
Technical Intelligence Manager, Office of CTO, Delphix
President, Board of Directors, Denver SQL Server User Group

Laura Ramsey
Manager, Database Technology and Developer Communities
Oracle America

Additional Resources Coming Soon
  • DevOps: Can This Marriage be Saved? (Feb 21)
    What is the biggest threat to successful DevOps? What’s the most common DevOps mistake? Experts Nicole Forsgren, Leonid Igolnik, Alena Prokharchyk, Baruch Sadogursky, Shay Shmeltzer, and Kelly Shortridge discuss what it takes to make DevOps work.
  • Combating Complexity
    An article in the September 2017 edition of the Atlantic warned of The Coming Software Apocalypse. Oracle's Chris Newcombe was interviewed for that article. In this podcast Chris joins Chris Richardson, Adam Bien, and Lucas Jellema to discuss heading off catastrophic software failures.

Announcing the Oracle WebLogic Server Kubernetes Operator

Wed, 2018-01-31 08:00

We are very excited to announce the Oracle WebLogic Server Kubernetes Operator, which is available today as a Technology Preview and is delivered in open source at https://oracle.github.io/weblogic-kubernetes-operator.  The operator can manage any number of WebLogic domains running in a Kubernetes environment.  It provides a mechanism to create domains, automates domain startup, allows scaling WebLogic clusters up and down either manually (on demand) or through integration with the WebLogic Diagnostics Framework or Prometheus, manages load balancing for web applications deployed in WebLogic clusters, and provides integration with Elasticsearch, Logstash, and Kibana.

The operator uses the standard Oracle WebLogic Server 12.2.1.3 Docker image, which can be found in the Docker Store or in the Oracle Container Registry.  It treats this image as immutable, and all of the state is persisted in a Kubernetes persistent volume.  This allows us to treat all of the pods as throwaway and replaceable, and it completely eliminates the need to manage state written into Docker containers at runtime (because there is none).

The diagram below gives a high level overview of the layout of a domain in Kubernetes when using the operator:

The operator can expose the WebLogic Server Administration Console to external users (if desired), and can also allow external T3 access, for example for WLST.  Domains can talk to each other, allowing distributed transactions, and so on. All of the pods are configured with Kubernetes liveness and readiness probes, so that Kubernetes can automatically restart failing pods, and so that the load balancer configuration can include only those Managed Servers in the cluster that are actually ready to service user requests.
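As background, liveness and readiness probes are declared on a container in the pod spec. The generic sketch below shows the shape of such a declaration; the command, path, and port are illustrative placeholders, not what the operator actually generates.

```yaml
# Generic Kubernetes probe sketch; values are illustrative, not the operator's
livenessProbe:
  exec:
    command: ["/shared/livenessProbe.sh"]   # hypothetical script path
  initialDelaySeconds: 30
  periodSeconds: 10
readinessProbe:
  httpGet:
    path: /ready        # hypothetical readiness endpoint
    port: 8888
  initialDelaySeconds: 15
  periodSeconds: 5
```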

We have a lot of documentation available on the project pages on GitHub including details about our design philosophy and architecture, as well as instructions on how to use the operator, video demonstrations of the operator in action, and a developer page for people who are interested in contributing to the operator.

We hope you take the opportunity to play with the Technology Preview and we look forward to getting your feedback.

Getting Started

The Oracle WebLogic Server Kubernetes Operator has the following requirements:

  • Kubernetes 1.7.5+, 1.8.0+ (check with kubectl version)
  • Flannel networking v0.9.1-amd64 (check with docker images | grep flannel)
  • Docker 17.03.1.ce (check with docker version)
  • Oracle WebLogic Server 12.2.1.3.0

For more details on the certification and support statement of WebLogic Server on Kubernetes, refer to My Oracle Support Doc Id 2349228.1.

A series of video demonstrations of the operator are available here:

The overall process of installing and configuring the operator and using it to manage WebLogic domains consists of the following steps. The provided scripts will perform most of these steps, but some must be performed manually:

  • Registering for access to the Oracle Container Registry
  • Setting up secrets to access the Oracle Container Registry
  • Customizing the operator parameters file
  • Deploying the operator to a Kubernetes cluster
  • Setting up secrets for the Administration Server credentials
  • Creating a persistent volume for a WebLogic domain
  • Customizing the domain parameters file
  • Creating a WebLogic domain

Complete up-to-date instructions are available at https://github.com/oracle/weblogic-kubernetes-operator/blob/master/site/installation.md or read on for an abbreviated version:

Build the Docker image for the operator

To run the operator in a Kubernetes cluster, you need to build the Docker image and then deploy it to your cluster.

First run the build using this command:

mvn clean install

Then create the Docker image as follows:

docker build -t weblogic-kubernetes-operator:developer --no-cache=true .

We recommend that you use a tag other than latest to make it easy to distinguish your image. In the example above, the tag could be the GitHub ID of the developer.

Next, upload your image to your Kubernetes server as follows:

# on your build machine
docker save weblogic-kubernetes-operator:developer > operator.tar
scp operator.tar YOUR_USER@YOUR_SERVER:/some/path/operator.tar

# on the Kubernetes server
docker load < /some/path/operator.tar

Verify that you have the right image by running docker images | grep weblogic-kubernetes-operator on both machines and comparing the image IDs.

We will be publishing the image in the Oracle Container Registry, and these instructions will be updated when it is available there. After it is published, you will no longer need to build the image yourself; you will have the option to pull it from the registry instead.

Customizing the operator parameters file

The operator is deployed with the provided installation script, create-weblogic-operator.sh. The input to this script is the file create-operator-inputs.yaml, which needs to be updated to reflect the target environment.

The following parameters must be provided in the input file:

Configuration parameters for the operator

  • externalOperatorCert – A base64-encoded string containing the X.509 certificate that the operator will present to clients accessing its REST endpoints. This value is only used when externalRestOption is set to custom-cert. No default.
  • externalOperatorKey – A base64-encoded string containing the private key for that certificate. This value is only used when externalRestOption is set to custom-cert. No default.
  • externalRestOption – Determines how the operator’s REST endpoints are exposed outside the Kubernetes cluster. Allowed values:
    - none – The operator’s REST server is not exposed externally.
    - self-signed-cert – The operator will use a self-signed certificate for its REST server. If this value is specified, then the externalSans parameter must also be set.
    - custom-cert – The operator will use a certificate that was created and signed by some other means. If this value is specified, then externalOperatorCert and externalOperatorKey must also be provided.
    Default: none.
  • externalSans – A comma-separated list of Subject Alternative Names that should be included in the X.509 certificate. Example: DNS:myhost,DNS:localhost,IP:127.0.0.1
  • namespace – The Kubernetes namespace that the operator will be deployed in. It is recommended that a namespace be created for the operator rather than using the default namespace. Default: weblogic-operator.
  • targetNamespaces – A list of the Kubernetes namespaces that may contain WebLogic domains that the operator will manage. The operator will not take any action against a domain that is in a namespace not listed here. Default: default.
  • remoteDebugNodePort – The port for the operator’s Java remote debug server. If debugging is enabled, the operator will start a Java remote debug server on this port and will suspend execution until a remote debugger has attached. Default: 30999.
  • restHttpsNodePort – The NodePort number on which the operator’s REST server will listen for HTTPS requests. Default: 31001.
  • serviceAccount – The name of the service account that the operator will use to make requests to the Kubernetes API server. Default: weblogic-operator.
  • loadBalancer – The load balancer that is installed to provide load balancing for WebLogic clusters. Allowed values:
    - none – do not configure a load balancer
    - traefik – configure the Traefik Ingress provider
    - nginx – reserved for future use
    - ohs – reserved for future use
    Default: traefik.
  • loadBalancerWebPort – The NodePort for the load balancer to accept user traffic. Default: 30305.
  • enableELKintegration – Determines whether the ELK integration will be enabled. If set to true, then Elasticsearch, Logstash, and Kibana will be installed, and Logstash will be configured to export the operator’s logs to Elasticsearch. Default: false.

Decide which REST configuration to use

The operator provides three REST certificate options:

  • none will disable the REST server.
  • self-signed-cert will generate self-signed certificates.
  • custom-cert provides a mechanism to supply certificates that were created and signed by some other means.
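If you choose custom-cert, the certificate and key must be supplied as single-line base64 strings (the externalOperatorCert and externalOperatorKey parameters). A sketch of the encoding step follows; the PEM content is a stand-in so the commands run anywhere, and you would substitute your real certificate and key files:

```shell
# Sketch only: produce the single-line base64 value expected by the
# externalOperatorCert parameter (externalOperatorKey works the same way
# on the key file). The PEM content here is a placeholder.
printf '%s\n' '-----BEGIN CERTIFICATE-----' 'MIIB...' \
  '-----END CERTIFICATE-----' > /tmp/operator.crt

CERT_B64=$(base64 -w0 < /tmp/operator.crt)   # -w0: no line wrapping (GNU)
echo "externalOperatorCert: ${CERT_B64}"
```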
Decide which optional features to enable

The operator provides some optional features that can be enabled in the configuration file.

Load Balancing

The operator can install the Traefik Ingress provider to provide load balancing for web applications running in WebLogic clusters. If enabled, an instance of Traefik and an Ingress will be created for each WebLogic cluster. Additional configuration is performed when creating the domain.

Note that the Technology Preview release provides only basic load balancing:

  • Only HTTP(S) is supported. Other protocols are not supported.
  • A root path rule is created for each cluster. Rules based on the DNS name, or on URL paths other than ‘/’, are not supported.
  • No non-default configuration of the load balancer is performed in this release. The default configuration gives round robin routing and WebLogic Server will provide cookie-based session affinity.

Note that Ingresses are not created for servers that are not part of a WebLogic cluster, including the Administration Server. Such servers are exposed externally using NodePort services.
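For reference, exposing a non-clustered server such as the Administration Server this way corresponds to an ordinary Kubernetes Service of type NodePort, along these lines (all names, labels, and port numbers below are illustrative, not the operator's actual generated resource):

```yaml
# Illustrative sketch only: a NodePort Service exposing an Administration
# Server outside the cluster. Names, labels, and ports are placeholders.
apiVersion: v1
kind: Service
metadata:
  name: domain1-admin-server-external
  namespace: default
spec:
  type: NodePort
  selector:
    weblogic.serverName: admin-server   # placeholder label
  ports:
    - port: 7001        # WebLogic default listen port
      targetPort: 7001
      nodePort: 30701   # externally reachable port (placeholder)
```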

Log integration with ELK

The operator can install the ELK stack and publish its logs into it. If enabled, Elasticsearch and Kibana will be installed in the default namespace, and a Logstash pod will be created in the operator’s namespace. Logstash will be configured to publish the operator’s logs into Elasticsearch, and the log data will be available for visualization and analysis in Kibana.

To enable the ELK integration, set the enableELKintegration option to true.
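Taken together, the choices above all live in the inputs file. A minimal sketch of create-operator-inputs.yaml might look like the following (key names follow the parameter list above; values are examples, and the file shipped with the operator is authoritative):

```yaml
# Sketch only -- consult the create-operator-inputs.yaml shipped with the
# operator for the authoritative set of keys.
externalRestOption: self-signed-cert
externalSans: DNS:myhost,DNS:localhost,IP:127.0.0.1
namespace: weblogic-operator
targetNamespaces: default
restHttpsNodePort: 31001
serviceAccount: weblogic-operator
loadBalancer: traefik
loadBalancerWebPort: 30305
enableELKintegration: false
```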

Deploying the operator to a Kubernetes cluster

To deploy the operator, run the deployment script and give it the location of your inputs file:

./create-weblogic-operator.sh -i /path/to/create-operator-inputs.yaml

What the script does

The script will carry out the following actions:

  • A set of Kubernetes YAML files will be created from the inputs provided.
  • A namespace will be created for the operator.
  • A service account will be created in that namespace.
  • If ELK integration was enabled, a persistent volume for ELK will be created.
  • A set of RBAC roles and bindings will be created.
  • The operator will be deployed.
  • If requested, the load balancer will be deployed.
  • If requested, ELK will be deployed and logstash will be configured for the operator’s logs.

The script will validate each action before it proceeds.

This will deploy the operator in your Kubernetes cluster.  Please refer to the documentation for next steps, including using the REST services, creating a WebLogic domain, starting a domain, and so on.

Three Advances That Will Finally Make Software Self-Healing, Self-Tuning, and Self-Managing

Tue, 2018-01-23 13:37


Ever heard the adage that the operating cost of a given application is often 2x the app’s acquisition cost?  Or how about that bugs cost 100x more to fix in the production phase than during the requirements phase? Or that developers in DevOps environments are often spending over half their time tweaking the “Ops” portion, like CI/CD, instead of writing code?

Removing effort from the operating portion of the equation has long been a goal of IT, though actually doing so is difficult in traditional environments where visibility to the edge (say, end-user monitoring and server-side instrumentation) is low and where remediation (say, optimizing configuration parameters) is manual.  But change is on the horizon, thanks to three integrated capabilities provided by cloud platforms that can lead to autonomous, self-healing systems.  Those three capabilities are automatic instrumentation, machine learning-powered analytics, and integrated remediation.

Automatic Instrumentation: Closing the Visibility Gap

Cloud software platform providers like Oracle are working hard to make visibility and instrumentation simply a feature of the underlying platform, rather than requiring a separate effort.  What this means for developers is that as you write and deploy code, the platform automatically generates and delivers relevant activity and environment telemetry. 

For example, PaaS services such as Java Cloud Service, SOA Cloud Service, and Database Cloud Service automatically expose detailed telemetry both about their environments (instance-level telemetry) as well as the artifacts deployed in those environments (code-level telemetry) to management services such as Oracle Management Cloud, without any extra work by developers or operations personnel.

By generating and exposing instrumentation automatically, we can close the visibility gap that often exists today between developers (who know what they coded, but not necessarily about environment dependencies) and operations (who know about environment dependencies, but not about what was coded). 


Image 1:  2 views of automated telemetry, generated by Java Cloud Service and Integration Cloud Service and exposed in Oracle Management Cloud.

Machine Learning-Based Analytics

Having the relevant telemetry is a required first step, but understanding it is no easy task.  We’re talking about terabytes of logs, tens of thousands of activity and configuration metrics, in an environment where neither developers nor operators understand the dependencies among components. After all, we’ve happily given up a level of control in cloud in exchange for the ability to iterate faster. 

Fortunately, we no longer have to rely on our human faculties to deal with this data overload – we can instead rely on purpose-built machine learning (ML).  ML loves data.  The more the better. And ML that is designed specifically for the operations problem is able to intuit pretty interesting things out of this data, such as how applications are built (topology, dependencies) and how they should behave (baselining, anomaly detection, forecasting) – without any effort from developers. 

So, instead of a human having to program a monitoring regime to tell how something ought to work, the monitoring regime tells the humans how the application actually works, how it should work in the future, and why it may not be working as it should.  In this scenario, root-cause analysis becomes automated, capacity-planning becomes continuous, dependency-mapping just happens, and alerts/events only bubble up when they actually require attention.

Oracle Management Cloud’s ML portfolio provides topology-aware diagnostics that can forecast impending problems or identify root-cause of current problems without any operator knowledge of the systems being managed. 


Image 2:  Machine learning-based topology views generated automatically by Oracle Management Cloud.

Automated Remediation: The Final Step

So now that we have all the data we need to understand what’s going on, and have the ability to analyze it in real-time using machine learning to understand why and what we should do about it, we can move toward the final step:  taking action. 

Automated remediation is the most visible aspect of self-healing systems, but in a sense it’s also the oldest.  API-based and script-based automation options have existed for most technical platforms for a long time and are wildly under-utilized.  The problem in most IT organizations is not whether they can automate something, but whether they should run a particular automation at a given time.  Sure, I can spin up a new VM, or clone the microservice – but should I?  Will it solve the problem or prevent another problem?

Put simply, for automation to be more heavily-utilized, we need to be better at answering the “should I?” question.  Fortunately, since we’ve now taken care of having better telemetry data and the ability to analyze it, we can link our analytic results directly to automation, at the platform level.  For example, Oracle Management Cloud can automatically invoke automation regimes such as Chef and Puppet, or Cloud Service APIs, in response to analytic conclusions.


Image 3:  Automated remediation in Oracle Management Cloud

Autonomous Software Isn’t Magic

Variability and complexity in software environments is inevitable.  We have urgent business pressures to innovate and an increasingly sophisticated portfolio of loosely coupled cloud platforms on which to innovate.  However, unless we take steps to remove the downstream operational effort associated with the increase in variability and complexity, we will be dragged into spending ever-more time and energy on operations rather than development, and that 2x ratio may quickly become 5x or 10x. 

Self-healing, self-tuning, and self-managing aren’t “magic.” Rather, they are the by-design outputs of a platform that first auto-generates sufficient instrumentation, then provides that instrumentation to an ML-based analytic engine, and finally uses the analytic results to invoke the proper automation.  Given the pace of business change, these aren’t just cool features of a platform, they are absolute necessities for sustainable modern application development.  And they are here, now. 

We invite you to experience just what autonomous PaaS is like at cloud.oracle.com/tryit

Open Source Resolutions: 3 Ways To Simplify, Break Free, and Focus in 2018

Mon, 2018-01-22 11:00

For developers, development teams, and DevOps organizations, 2017 brought forward a growing stack of open source technologies that were proven out by early adopter cloud teams. Those technologies are now being rapidly mainstreamed thanks to some heavy lifting by the CNCF and the broader cloud native community. So now is the time to resolve to make three powerful changes for 2018!

1. Simplify Your Life

So you’ve been experimenting with open source technologies from Docker to Kubernetes to Istio.  Perhaps you’ve stood these up locally on your laptop, in your lab, or experimentally up on AWS. Congratulations, this is a great first step! But trust me, keeping that environment up and running, updated with the latest releases and patches, and scaled to meet the needs of your broader organization is painful, expensive, time consuming, and foolish, considering that cloud providers are now offering managed services that do that for you – typically for no more than the cost of your current infrastructure as a service (IaaS) resources (compute, storage, network). 2017 should be the last year we give out “I Stood Up My Own Kubernetes” participation trophies.  There’s no reason in 2018 to spend valuable developer and DevOps time running and maintaining your own open source platforms when cloud providers are doing the work for you in a secure, cost-effective package. There are plenty of better ways to differentiate, compete, expand your skills, and grow your career in 2018 – building, running, and maintaining your own open source based platform is not one of them. Move to a managed open source-based service in 2018 and simplify your life.  You’ll thank me later!

2. Declare Your Independence:

Break Free From Cloud Lock-In

Take a self-inventory of the cloud providers your org uses and how much money you spent on them in 2017 versus 2016. My guess is that you will find you are developing a significant business and technical risk exposure based on single vendor cloud lock-in. Open source technologies actually give you leverage to choose the cloud vendor that works best for you from a cost, use-case, technical, and/or business perspective. In particular, serverless has been one of the big remaining closed and proprietary cloud native technology areas to date. This has forced enterprises to choose between cloud lock-in or adopting early service tools like AWS Lambda. That’s all about to change in 2018 as a set of open serverless projects (e.g., http://FnProject.io/ ) and CNCF efforts move forward. “Open on Open” is the only way to move open serverless forward in 2018 — building serverless solutions on an integrated stack on top of a Kubernetes foundation.  So, 2018 is the year to ditch lock-in and break free from your captive cloud situation. Don’t be a prisoner in your own cloud. 

3. Focus on What Matters:

Imagine if all cloud providers offered the same core set of open source-based services (e.g., Docker, Kubernetes, Kafka, Cassandra, etc.), and the only cost was for the IaaS resources you used. If this were true, then you could focus on choosing a solution based on what really matters to you.  Hey, that is true now!  The market moved in 2017 from a seller’s market to a buyer’s market with all the major cloud vendors offering similar, core OSS-based services — at least on the surface. The difference now comes down to what matters to you. And in particular the “ilities” like scalability, security, availability, reliability, and usability become key differentiators to consider. Often that can be described as “enterprise-grade” or “open source for grownups.” Open source can be free and fun, but when you need to run your enterprise apps on it, you’ll want to go top-shelf and reach for the good stuff — and that’s where the “ilities” come in.  In 2018, focus on what really matters to you, be an informed buyer, and ask the hard questions when it comes to running your apps on these infrastructures.

Open source technologies are already making developer’s lives better and their projects healthier. Now it’s time to simplify your life with managed services versus going down the DIY “hard way” path.  Break free from cloud lock-in and declare your independence from captive clouds.  And finally, in 2018 focus on what matters to you when it comes to choosing a cloud service — now that the playing field is evening out in your favor.  And most of all, have a spectacular 2018!


Podcast: Jfokus Panel: Building a New World Out of Bits

Tue, 2018-01-16 17:27

Our first program for 2018 brings together a panel of experts whose specialties cover a broad spectrum, including Big Data, security, open source, agile, domain driven design, Pattern-Oriented Software Architecture, Internet of Things, and more. The thread that connects these five people is that they are part of the small army of experts that will be presenting at the 2018 Jfokus Developers Conference, February 5-7, 2018 in Stockholm, Sweden.

This program was recorded on January 10, 2018

The Panelists

(in alphabetical order)

Jesse Anderson

Jesse Anderson (@jessetanderson)
Data Engineer, Creative Engineer, Managing Director, Big Data Institute
Reno, Nevada

    Suggested Resources

Benjamin Cabe

Benjamin Cabé (@kartben)
IoT Program Manager, Evangelist, Eclipse Foundation
Toulouse, France

   Suggested Resources

  • Article: Monetizing IoT Data using IOTA
  • White Paper: The Three Software Stacks Required for IoT Architectures
    A collaboration of the Eclipse IoT Working Group
Kevlin Henney

Kevlin Henney (@KevlinHenney)
Consultant, programmer, speaker, trainer, writer, owner, Curbralan
Bristol, UK

   Suggested Resources

Siren Hofvander

Siren Hofvander (@SecurityPony)
Chief Security Officer with Min Doktor
Malmö, Sweden

Suggested Resources

Dan Bergh Johnsson

Dan Bergh Johnsson (@danbjson)
Agile aficionado, Domain Driven Design enthusiast, code quality craftsman, Omegapoint, Stockholm, Sweden

Suggested Resources

Additional Resources Coming Soon
  • Women in Technology
    With Heli Helskyaho, Michelle Malcher, Kellyn Pot'Vin-Gorman, and Laura Ramsey
  • DevOps: Can This Marriage be Saved
    With Nicole Forsgren, Leonid Igolnik, Alena Prokharchyk, Baruch Sadogursky, Shay Shmeltzer, Kelly Shortridge
  • Combating Complexity
    With Adam Bien, Lucas Jellema, Chris Newcombe, and Chris Richardson
Subscribe

Never miss an episode! The Oracle Developer Community Podcast is available via:

The Best Way to Get Help with Your Oracle Database Questions

Tue, 2018-01-16 12:34

One of the best things about the Oracle Developer Community is the easy access to expert help and ideas. To add to the expert content, Oracle is adding a new service for developers called Ask TOM Office Hours. Chris Saxon, Oracle SQL Developer Advocate and SQL expert, tells all about it:


Aaaaargh! Any more of this and I was ready to throw my computer out of the window. I was stuck. I was editing a video for The Magic of SQL, trying to create some blended split-screen effects. I was sure it was possible. I just didn’t know how. Searches turned up nothing. So I turned to forums for help.

But, instead of answers, all I was getting was requests for extra details. Three days in and I was still no closer to achieving the desired effect. So I gave up and called a colleague. After a couple of minutes chatting, they were able to point me to a solution.

Progress at last!

It’s a drawback that plagues technical forums. A simple request for help can turn into a prolonged back-and-forth exchange of information.

“Which version are you using?”

“What does your code look like?”

“Have you set the im_not_an_idiot parameter?”

They do want to help. But the problem is that it's tough to provide effective help without a full understanding of your issue. Respondents need to know what you’re trying to do, what you’ve tried and what you’re working with. So you settle in for a game of internet pong. Your question pings back and forth between you and your unknown “helper”. Until finally your query is answered. Or one of you gives up. All the while sucking up your valuable time.

Frustrating, isn’t it?

Wouldn’t it be great if, in addition to support and Q&A forums, you could have an actual, live conversation, working out all the details of your malady?

Where you could quickly get to the root of the issue or learn how to properly apply a new feature to your program?

Now you can!

Introducing Ask TOM Office Hours

These are scheduled, live Q&A sessions. Hosted by Oracle Database Product Managers, evangelists and even developers. The Oracle product experts. Ready to help you get the best out of Oracle technology.

And the best part: Ask TOM Office Hours sessions are 100% free!

Office Hours continues the pioneering tradition of Ask TOM. Launched in 2000 by Tom Kyte, the site now has a dedicated team who answer hundreds of questions each month. Together they’ve helped millions of developers understand and use Oracle Database.

Office Hours takes this service to the next level, giving you live, direct access to a horde of experts within Oracle. All dedicated to helping you get the most out of your Oracle investment. To take advantage of this new program, visit the Office Hours home page and find an expert who can help . Sign up for the session and, at the appointed hour, join the webinar. There you can put your questions to the host or listen to the Q&A of others, picking up tips and learning about new features.

Each session will have a specific focus, based on the presenter’s expertise. But you are welcome to ask other questions as well.

Stuck on a thorny SQL problem? Grill Chris Saxon or Connor McDonald of the Ask TOM team. 

Want to make the most of Oracle Database's amazing In-Memory feature? Andy Rivenes and Maria Colgan will take you through the key steps.

Started a new job and need to get up-to-speed on Multitenant? Patrick Wheeler will help you get going.

Struggling to get bulk collect working? Ask renowned PL/SQL expert, Steven Feuerstein.

Our experts live all over the globe. So even if you inhabit "Middleofnowhereland", you’re sure to find a timeslot that suits you.

You need to make the most of Oracle Database and its related technologies. It's our job to make it easy for you.

Ask TOM Office Hours: Dedicated to Customer Success

View the sessions and sign up now!

 

Announcing Offline Persistence Toolkit for JavaScript Client Applications

Mon, 2018-01-08 19:27

We are excited to announce the open source release on GitHub of the offline-persistence-toolkit for JavaScript client applications, developed by the Oracle JavaScript Extension Toolkit (Oracle JET) team.

The Offline Persistence Toolkit is a client-side JavaScript library that provides caching and offline support at the HTTP request layer. This support is transparent to the user and is done through the Fetch API and an XHR adapter. HTTP requests made while the client device is offline are captured for replay when connection to the server is restored. Additional capabilities include a persistent storage layer, synchronization manager, binary data support and various configuration APIs for customizing the default behavior.

Whilst the toolkit is primarily intended for hybrid mobile applications created using Oracle JET, it can be used within any JavaScript client application that requires persistent storage and/or offline data access.

The Offline Persistence Toolkit simplifies life for application developers by providing a response caching solution that works well across modern browsers and web views. The toolkit covers common caching cases with a minimal amount of application-specific coding, but provides flexibility to cover non-trivial cases as well. In addition to providing the ability to cache complete response payloads, the toolkit supports "shredding" of REST response payloads into objects that can be stored, queried and updated on the client while offline.

The architecture diagram illustrates the major components of the toolkit and how an application interacts with it:

The Offline Persistence Toolkit is distributed as an npm package consisting of AMD modules.

To install the toolkit, enter the following command at a terminal prompt in your app’s top-level directory:

$ npm install @oracle/offline-persistence-toolkit

 

The toolkit makes heavy use of the Promise API. If you are targeting environments that do not support the Promise API, you will need to polyfill this feature. We recommend the es6-promise polyfill.

The toolkit does not have a dependency on a specific client-side storage solution, but does include a PouchDB adapter. If you plan to use PouchDB for your persistent store, you will need to install the following PouchDB packages:

$ npm install pouchdb pouchdb-find
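If both installs succeed, the dependencies section of your app's package.json will end up with entries along these lines (version numbers here are illustrative, not a recommendation):

```json
{
  "dependencies": {
    "@oracle/offline-persistence-toolkit": "^1.0.0",
    "pouchdb": "^6.3.4",
    "pouchdb-find": "^6.3.4"
  }
}
```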

 

For more information about how to make use of this toolkit in your Oracle JET application or any other JavaScript application, refer to the toolkit's README, which also provides details about why we developed this toolkit, how to include it into your app, some simple use cases and links to JS Doc and more advanced use cases.

You can also refer to the JET FixItFast sample app, which makes use of the toolkit.  Browse its source code directly, or use the Oracle JET command line interface to build and deploy the app to see how it works.

I hope you find this toolkit really useful, and if you have any feedback, please submit issues on GitHub.

For more technical articles about the Offline Persistence Toolkit, Oracle JET and other products, you can also follow OracleDevs on Medium.com.

Pages