Amis Blog

Subscribe to Amis Blog feed
Friends of Oracle and Java
Updated: 18 hours 9 min ago

Oracle Mobile Cloud Service (MCS): Overview of integration options

Fri, 2017-08-11 04:40

Oracle Mobile Cloud Service offers many options for integrating with other services and systems. Since it runs JavaScript on Node.js for custom APIs, it is very flexible.

Some features allow it to extend its own functionality, such as the Firebase configuration option to send notifications to mobile devices, while for example the connectors allow wizard-driven integration with other systems. The custom API functionality, running on a recent Node.js version, ties it all together. In this blog article I’ll provide a quick overview and some background of the integration options of MCS.

MCS is very well documented here and there are many YouTube videos available explaining and demonstrating various MCS features here. So if you want to know more, I suggest looking at those.

Some recent features

Oracle is working hard on improving and expanding MCS functionality. For the latest improvements to the service, see the following page. Some highlights of the past half year that I personally appreciate, and which will also get some attention in this blog:

  • Zero footprint SSO (June 2017)
  • Swagger support in addition to RAML for the REST connector (April 2017)
  • Node.js version v6.10.0 support (April 2017)
  • Support for Firebase (FCM) to replace GCM (December 2016)
  • Support for third party tokens (December 2016)
Feature integration

Notification support

In general there are two options for sending notifications from MCS: integrating with FCM or integrating with Syniverse. Since both are third-party suppliers, you should compare these options (license, support, performance, cost, etc.) before choosing one of them.

You can also use any other notification provider if it offers a REST interface by using the REST connector. You will not get much help in configuring it through the MCS interface though; it will be a custom implementation.

Firebase Cloud Messaging / Google Cloud Messaging

Notification support is implemented by integrating with Google’s cloud messaging products. Google Cloud Messaging (GCM) is being replaced with Firebase Cloud Messaging (FCM) in MCS. GCM has been deprecated by Google for quite a while now, so this is a good move. You do need a Google Cloud account and have to purchase their services in order to use this functionality. See for example here how to implement this from a JET hybrid application.


Syniverse

Read more on how to implement this here. You first have to create a Syniverse account. Next, subscribe to the Syniverse Messaging Service, register the app and get credentials. These credentials can then be registered in MCS under client management.


Beacon support

Beacons broadcast packets that mobile devices can detect over Bluetooth. The packet structure the beacons broadcast can differ. There are samples available for iBeacon, altBeacon and Eddystone, but others can be added if you know the corresponding packet structure. See the following presentation for some background on beacons and how they can be integrated in MCS. How to implement this for an Android app can be watched here.


Client support

MCS comes with several SDKs which provide easy integration of a client with MCS APIs. Client SDKs are available for iOS, Android, Windows and the Web (plain JavaScript). These SDKs provide an easy alternative to using the raw MCS REST APIs: they wrap the APIs and provide easy access in the language the respective client uses.

Authentication options (incoming)

SAML, JWT

Third party token support for SAML and JWT is available. Read more here. A token exchange is available as part of MCS which creates MCS tokens from third party tokens based on specifically defined mappings. These MCS tokens can be used by clients in subsequent requests. This does require some work on the client side, but the SDKs of course help with this.

Facebook Login

Read here for an example on how to implement this in a hybrid JET application.

OAuth2 and Basic authentication support.

No third party OAuth tokens are supported. This is not strange since the OAuth token does not contain user data and MCS needs a way to validate the token. MCS provides its own OAuth2 STS (Secure Token Service) to create tokens for MCS users. Read more here.

Oracle Enterprise Single Sign-on support.

Read here. This is not to be confused with the Oracle Enterprise Single Sign-on Suite (ESSO). This is browser-based authentication of Oracle Cloud users who are allowed access to MCS.

These provide the most common web authentication methods. Especially the third party SAML and JWT support provides for many integration options with third party authentication providers. OKTA is given as an example in the documentation.

Application integration: connectors

MCS provides connectors which allow wizard-driven configuration in MCS. Connectors are used for outgoing calls. There is a connector API available which makes it easy to interface with the connectors from custom JavaScript code. The connectors support the use of Oracle Credential Store Framework (CSF) keys and certificates. TLS versions up to TLS 1.2 are supported; you are of course warned that older versions might not be secure. The requests the connectors make are over HTTP, since no other technologies are currently directly supported. You can of course use REST APIs and ICS as wrappers should you need them.
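As an illustration of the connector API, the sketch below shows how a custom API implementation could delegate an outgoing call to a REST connector. This is a minimal, hypothetical example: the custom API path, the connector name (hrconnector), the resource path and the exact options of the connector call are assumptions that should be verified against the MCS custom code API documentation.

module.exports = function (service) {

  // custom API endpoint: GET /mobile/custom/integrationdemo/employees
  service.get('/mobile/custom/integrationdemo/employees', function (req, res) {

    // delegate the outgoing call to a REST connector configured in MCS
    req.oracleMobile.connectors.hrconnector.get('employees', { inType: 'json' })
      .then(function (result) {
        // pass the connector response on to the mobile client
        res.status(result.statusCode).send(result.result);
      })
      .catch(function (error) {
        res.status(500).send(error);
      });
  });
};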

Connector security settings

For the different connectors, several Oracle Web Service Security Manager (OWSM) policies are used. See here. These allow you to configure several security settings and, for example, allow usage of WS-Security and SAML tokens for outgoing connections. The policies can be configured with security policy properties. See here.


It is recommended to use the REST connector instead of doing calls directly from your custom API code, because connectors integrate well with MCS and provide security and monitoring benefits, for example out-of-the-box analytics.


The SOAP connector can do a transformation from SOAP to JSON and back to make working with the XML easier in JavaScript code. This has some limitations, however.

Connector scope

There are also some general limitations defined by the scope of the API of the connector:

  • Only SOAP version 1.1 and WSDL version 1.2 are supported.
  • Only the WS-Security standard is supported. Other WS-* standards, such as WS-RM or WS-AT, aren’t supported.
  • Only document style and literal encoding are supported.
  • Attachments aren’t supported.
  • Of the possible combinations of input and output message operations, only input-output operations and input-only operations are supported. These operations are described in the Web Services Description Language (WSDL) Version 1.2 specification.
Transformation limitations

  • The transformation from SOAP (XML) to JSON has limitations; the following constructs aren’t supported:
  • A choice group with child elements belonging to different namespaces having the same (local) name. This is because JSON doesn’t have any namespace information.
  • A sequence group with child elements having duplicate local names. For example, <Parent><ChildA/><ChildB/>…<ChildA/>…</Parent>. This translates to an object with duplicate property names, which isn’t valid.
  • XML Schema Instance (xsi) attributes aren’t supported.
Integration Cloud Service connector

Read more about this connector here. This connector allows you to call ICS integrations. You can connect to your ICS instance and select an integration from a drop-down menu. For people who also use ICS in their cloud architecture, this will probably be the most commonly used connector.

Fusion Applications connector

Read more about this connector here. The flow looks similar to that of the ICS Cloud Adapters (here). In short, you authenticate, a resource discovery is done and local artifacts are generated which contain the connector configuration. At runtime this configuration is used to access the service. The wizard-driven configuration of the connector is a great strength. MCS does not provide the full range of cloud adapters that is available in ICS and SOA CS.

Finally

Flexibility

Oracle Mobile Cloud Service allows you to define custom APIs using JavaScript code. Oracle Mobile Cloud Service V17.2.5-201705101347 runs Node.js version v6.10.0 and OpenSSL version 1.0.2k (process.versions) which are quite new! Because a new OpenSSL version is supported, TLS 1.2 ciphers are also supported and can be used to create connections to other systems. This can be done from custom API code or by configuring the OWSM settings in the connector configuration. It runs on Oracle Enterprise Linux 6 kernel 2.6.39-400.109.6.el6uek.x86_64 (JavaScript: os.release()). Most JavaScript packages will run on this version so few limitations there.
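To see these versions for yourself from custom API code, a simple snippet like the sketch below can be used; it merely logs the values mentioned above:

var os = require('os');

// prints the Node.js and OpenSSL versions the custom API code runs on
console.log(process.versions);

// prints the kernel release of the underlying operating system
console.log(os.release());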

ICS also provides an option to define custom JavaScript functions (see here). I haven’t looked at the engine used in ICS, but I doubt it is a full-blown Node.js instance; I suspect (please correct me if I’m wrong) a JVM JavaScript engine is used, as in SOA Suite / SOA CS. That provides less functionality and lower performance compared to a Node.js instance.

What is missing?

Integration with other Oracle Cloud services

Mobile Cloud Service does lack out-of-the-box integration options with other Oracle Cloud services. Only four HTTP-based connectors are available. Thus if you want to integrate with an Oracle Cloud database (a different one than the one provided), you have to use the external database’s REST API (with the REST connector or from custom API code), or use for example the Integration Cloud Service connector or the Application Container Cloud Service to wrap the database functionality. This of course requires a license for the respective services.

Cloud adapters

A Fusion Applications connector is present in MCS, and OWSM policies are used in MCS. It would therefore not be strange if MCS were technically capable of running more of the cloud adapters which are present in ICS. This would greatly increase the integration options for MCS.

Mapping options for complex payloads

Related to the above, if the payloads become large and complex, mapping fields also becomes more of a challenge. ICS does a better job at this than MCS currently. It has a better mapping interface and provides mapping suggestions.

The post Oracle Mobile Cloud Service (MCS): Overview of integration options appeared first on AMIS Oracle and Java Blog.

Automating Build and Deployment of Node application in Oracle Developer Cloud to Application Container Cloud

Fri, 2017-08-11 02:57

A familiar story:

  • Develop a Node application with one or more developers
  • Use Oracle Developer Cloud Service to organize the development work, host the source code and coordinate build jobs and the ensuing deployment
  • Run the Node application on Oracle Application Container Cloud

I have read multiple tutorials and blog posts that each seemed to provide a piece of the puzzle. This article shows the full story – in its simplest form.

We will:

  • Start a new project on Developer Cloud Service
  • Clone the Git repository for this new project
  • Locally work on the Node application and configure it for Application Container Cloud
  • Commit and push the sources to the Git repo
  • Create a Build job in Developer Cloud service that creates the zip file that is suitable for deployment; the job is triggered by changes on the master branch in the Git repo
  • Create a Deployment linking to an existing Oracle Application Container Cloud service instance; associate the deployment with the build task (and vice versa)
  • Run the build job – and verify that the application will be deployed to ACCS
  • Add the ACCS Deployment descriptor with the definition of environment variables (that are used inside the Node application)
  • Make a change in the sources of the application, commit and push and verify that the live application gets updated

Prerequisites: access to a Developer Cloud Service instance and an Application Container Cloud service. Locally: access to Git and ideally Node and npm.

Sources for this article are in GitHub: .

Start a new project on Developer Cloud Service

Create the new project greeting in Developer Cloud





After you press Finish, the new project is initialized along with all associated resources and facilities, such as a new Git repository, a Wiki and an issue store.


When the provisioning is done, the project can be accessed.



Locally work on the Node application

Copy the git URL for the source code repository.


Clone the Git repository for this new project

git clone


Start a new Node application, using npm init:


This will create the package.json file.

To prepare the application for eventual deployment to Application Container Cloud, we need to add the manifest.json file.
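A minimal manifest.json could look like the sketch below. The runtime version and start command are assumptions based on the Node.js version and application file used in this article; check the ACCS documentation for the full set of attributes.

{
  "runtime": {
    "majorVersion": "6.10"
  },
  "command": "node app.js"
}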


We also need to create a .gitignore file, to prevent node_modules from being committed and pushed to Git.


Implement the application itself, in file app.js. This is a very simplistic application – that will handle an incoming request and return a greeting of some sort:
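The screenshot of app.js is not included here, so below is a minimal sketch of what such an application could look like, based on the description that follows (reading GREETING and PORT from the environment, with defaults); the default port value is an assumption.

var http = require('http');

// read the greeting and port from environment variables, with defaults as fallback
var greeting = process.env.GREETING || 'Hello World!';
var port = process.env.PORT || 3000;

http.createServer(function (req, res) {
  // return the greeting for every incoming request
  res.writeHead(200, { 'Content-Type': 'text/html' });
  res.end(greeting);
}).listen(port);

console.log('Greeting application listening on port ' + port);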


Note how the greeting can be read from an environment variable, just like the port on which to listen for requests. When no environment values are provided, defaults are used instead.

Commit and push the sources to the Git repo

The Git repository in the Developer Cloud Service project is empty, except for the files created when the project was first created:


Now we commit and push the files created locally:




A little while later, these sources show up in Developer Cloud Service console:


Create a Build job in Developer Cloud service

To have the application built, we can create a build job in Developer Cloud Service that creates the zip file that is suitable for deployment; this zip file needs to contain all sources from Git and all dependencies (all node modules) specified in package.json. The job is triggered by changes on the master branch in the Git repo. Note: the build job ideally should also perform automated tests – such as described by Yannick here.



Specify a free-style job and specify the name – here BuildAndDeploy.

Configure the Git repository that contains the sources to build; this is the repository that was first set up when the project was created.


Configure the build job to be performed whenever sources are committed to (the master branch in) the Git repository:



Create a Build Step, of type Execute Shell:



Enter the following shell-script commands:

git config --global url. git://

npm install

zip -r .

This will download all required Node modules and package all sources into a single zip file.


Define as a post-build step that the zip file should be archived. That makes this zip file available as an artifact produced by the build job – for use in deployments or other build jobs.



Run the job a first time with Build Now.




The console output for running the shell commands is shown. Note that the implicit first steps performed in a build include the retrieval of all sources from the Git repositories onto the file system of the build server. The explicit shell commands are executed subsequently – and can make use of these cloned Git repo sources.


The build job produces the zip file as its artifact:


Create a Deployment linking to an existing Oracle Application Container Cloud service instance

The build job produces an artifact that can be deployed to an ACCS instance. We need a Deployment to create an ACCS instance based on that artifact. The Deployment is the bridge between the build artifact and a specific target environment – in this case an ACCS instance.


Specify the name of the configuration – for use within Developer Cloud Service – and of the application – as it will be used in Application Container Cloud. Specify the type of Deployment – we want On Demand, because that type of Deployment can be associated with a Build job, to be automatically performed at the end of the build. Specify the Deployment Target – New, of type Application Container Cloud.


Provide the connection details for an ACCS instance. Press Test Connection to verify these details.


Upon success, click on Use Connection.


Specify the type of Runtime – Node in this case. Select the Build Job and Artifact to base this Deployment on:



Note: for now, the Deployment is tied to a specific instance of the build job. When we add the Deployment as a Post Build step to the Build Job, we will always use the artifact produced by that specific build instance.

When the Deployment is saved, it starts to execute the deployment immediately:


In the Application Container Cloud Console, we can see the new Node application greeting being created



After some time (actually, quite some time) the application is deployed and ready to be accessed:


And here is the result of opening the application in the browser:


Now associate the build job with the Deployment, in order to have the deployment performed at the end of each successful build:


Go to the Post Build tab, check the box for Oracle Cloud Service Deployment and add a Deployment Task of type Deploy:


Select the Deployment we created earlier:


And press Save to save the changes to the build job’s definition.


Run the build job – and verify that the application will be deployed to ACCS (again)

If we now run the build job, as its last action it should perform the deployment:




The ACCS console shows that now we have Version 2.0, deployed just now.



Add the ACCS Deployment descriptor with the definition of environment variables

The app.js file contains the line

var greeting = process.env.GREETING || 'Hello World!';

This line references the environment variable GREETING – which currently is not set. By defining a deployment descriptor as part of the Deployment definition, we can specify not only the number of instances and their size, but also any Service Bindings and the values of environment variables such as GREETING.



Add the Deployment Descriptor json:


{
  "memory": "1G",
  "instances": "1",
  "environment": {
    "GREETING": "Greetings to you"
  }
}




Note: variable APPLICATION_PREFIX is not currently used.


Save and the deployment will be performed again:



When done, the application can be accessed. This time, the greeting returned is the one specified in the deployment descriptor deployment.json (as environment variable) and picked up by the application at run time (using process.env.GREETING):



Make a change in the sources of the application and Do the End To End Workflow

If we make a change in the application and commit and push the change to Git then after some time we should be able to verify that the live application gets updated.

Make the change – a new version label and a small change in the text returned by the application.


    Then commit the change and push the changes – to the Developer CS Git repo:



    The changes arrive in the Git repo:


    Now that the Git repo has been updated, the build job should be triggered:



    Some of the console output – showing that deployment has started:


    The ACCS Service Console makes it clear too


    When the deployment is done, it is clear that the code changes made it through to the running application:


    So editing the source code and committing plus pushing to git suffices to trigger the build and redeployment of the application – thanks to the set up made in Developer Cloud Service.

    Next Steps

    Show how multiple instances of an application each have their own state – and how using an Application Cache can make them share state.

    Show how an ACCS application can easily access a DBaaS instance through Service Bindings (and, in the case of a Node application, through the Oracle Node.js database driver and OCI libraries that come prepackaged with the ACCS Node runtime).

    Show how Oracle Management Cloud APM can be setup as part of an ACCS instance in order to perform application monitoring of applications running on ACCS; probably works for Log Analytics as well.



    Sources for this article are available in GitHub:

    Oracle Community Article by Abhinav Shroff – Oracle Developer Cloud to build and deploy Nodejs REST project on Application Container Cloud

    A-Team Chronicle by Yannick Ongena – Automated unit tests with Node.JS and Developer Cloud Services

    Article by Fabrizio Marini – Oracle Application Container Cloud & Developer Cloud Service: How to create a Node.js application with DB connection in pool (in cloud) and how to deploy it on Oracle Application Container Cloud (Node.js) using Developer Cloud Service

    Create Node.js Applications (Oracle Documentation) –

    Developer Cloud Service Docs – Managing Releases in Oracle Developer Cloud Service

    Oracle Documentation – Creating Meta Files for ACCS deployments –

    The post Automating Build and Deployment of Node application in Oracle Developer Cloud to Application Container Cloud appeared first on AMIS Oracle and Java Blog.

    When Screen Scraping became API calling – Gathering Oracle OpenWorld 2017 Session Catalog with Node

    Thu, 2017-08-10 02:57

    A dataset with all sessions of the upcoming Oracle OpenWorld 2017 conference is nice to have – for experiments and demonstrations with many technologies. The session catalog is exposed at a website – 


    With searching, filtering and scrolling, all available sessions can be inspected. If data is available in a browser, it can be retrieved programmatically and persisted locally in for example a JSON document. A typical approach for this is web scraping: having a server side program act like a browser, retrieve the HTML from the web site and query the data from the response. This process is described for example in this article – – for Node and the Cheerio library.

    However, server side screen scraping of HTML will only be successful when the HTML is static. Dynamic HTML is constructed in the browser by executing JavaScript code that manipulates the browser DOM. If that is the mechanism behind a web site, server side scraping is at the very least considerably more complex (as it requires the server to emulate a modern web browser to a large degree). Selenium has been used in such cases – to provide a server side, programmatically accessible browser engine. Alternatively, screen scraping can also be performed inside the browser itself – as is supported for example by the Getsy library.

    As you will find in this article – when server side scraping fails, client side scraping may be a much too complex solution. It is very well possible that the rich client web application is using a REST API that provides the data as a JSON document – an API that our server side program can also easily leverage. That turned out to be the case for the OOW 2017 website – so instead of complex HTML parsing and server side or even client side scraping, the challenge at hand resolves to nothing more than a little bit of REST calling.

    Server Side Scraping

    Server side scraping starts with client side inspection of a web site, using the developer tools in your favorite browser.


    A simple first step with cheerio to get hold of the content of the H1 tag:
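    Since the screenshot with the code is not included, here is a minimal sketch of that first step, assuming request-promise and cheerio are installed; the catalog URL is a placeholder:

    var request = require('request-promise');
    var cheerio = require('cheerio');

    var catalogURL = 'https://<session-catalog-url>';   // placeholder: the session catalog web site

    request(catalogURL)
      .then(function (html) {
        var $ = cheerio.load(html);        // parse the returned HTML document
        console.log($('h1').text());       // print the content of the H1 tag
      })
      .catch(function (err) {
        console.log('Request failed', err);
      });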


    Now let’s inspect in the web page where we find those session details:


    We are looking for LI elements with a CSS class of rf-list-item. Extending our little Node program with queries for these elements:
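    The extension itself is small – inside the then() handler of the sketch above, something like:

        var sessions = $('li.rf-list-item');                                // query the session list items
        console.log('Number of session items found: ' + sessions.length);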


    The result is disappointing. Apparently the document we have pulled with request-promise does not contain these list items. As I mentioned before, that is not necessarily surprising: these items are added to the DOM at runtime by JavaScript code executed after an Ajax call is used to fetch the session data.

    Analyzing the REST API Calls

    Using the Developer Tools in the browser, it is not hard to figure out which call was made to fetch these results:


    The URL is there. Now the question is: what headers and parameters are sent as part of the request to the API – and which HTTP operation should it be (GET, POST, …)?

    The information in the browser tools reveals:


    A little experimenting with custom calls to the API in Postman made clear that rfWidgetId and rfApiProfileId are required form data.


    Postman provides an excellent feature to quickly get going with source code in many technologies for making the REST call you have just put together:


    REST Calling in Node

    My first stab:


    With the sample generated by Postman as a starting point, it is not hard to create the Node application that will iterate through all session types – TUT, BOF, GEN, CON, …:


    To limit the size of the individual (requests and) responses, I have decided to search the sessions of each type in 9 blocks – for example CON1, CON2, CON3 etc. The search string is padded with wildcards – so CON1 will return all sessions with an identifier starting with CON1.

    To be nice to the OOW 2017 server – and prevent being blocked out by any filters and protections – I will fire requests spaced apart (with a 500 ms delay between each of them).

    Because this code is for one-time use only, and is not constrained by time limits, I have not put much effort into parallelizing the work, creating the most elegant code in the world, etc. It is simply not worth it. This will do the job – once – and that is all I need. (Although I want to extend the code to help me download the slide decks for the presentations in an automated fashion; for each conference, it takes me several hours to manually download slide decks to take with me on the plane ride home – only to find out each year that I am too tired to actually browse through those presentations.)

    The Node code for constructing a local file with all OOW 2017 sessions:
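    That code is not reproduced here, but a rough sketch of the approach described above could look as follows. The API URL, the values of the form fields, the name of the search field and the shape of the response are all placeholders or assumptions:

    var request = require('request-promise');
    var fs = require('fs');

    var apiURL = 'https://<session-catalog-api-url>';    // placeholder: the API endpoint found with the browser tools
    var sessionTypes = ['TUT', 'BOF', 'GEN', 'CON'];     // session type prefixes (list is an assumption)
    var allSessions = [];

    // each session type is searched in 9 blocks: CON1*, CON2*, ... CON9*
    var searches = [];
    sessionTypes.forEach(function (type) {
      for (var block = 1; block <= 9; block++) {
        searches.push(type + block + '*');
      }
    });

    function doSearch(index) {
      if (index >= searches.length) {
        fs.writeFileSync('oow2017-sessions.json', JSON.stringify(allSessions, null, 2));
        console.log('Wrote ' + allSessions.length + ' sessions to file');
        return;
      }
      request.post({
        url: apiURL,
        form: {
          rfWidgetId: '<widget id>',          // required form data, values not shown here
          rfApiProfileId: '<api profile id>',
          search: searches[index]             // name of the search field is an assumption
        },
        json: true
      }).then(function (response) {
        // assumption: the response contains an array of session records
        allSessions = allSessions.concat(response.items || []);
        // be nice to the server: fire the next request after a 500 ms pause
        setTimeout(function () { doSearch(index + 1); }, 500);
      }).catch(function (err) {
        console.log('Request for ' + searches[index] + ' failed', err);
        setTimeout(function () { doSearch(index + 1); }, 500);
      });
    }

    doSearch(0);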

    The post When Screen Scraping became API calling – Gathering Oracle OpenWorld 2017 Session Catalog with Node appeared first on AMIS Oracle and Java Blog.

    Oracle Mobile Cloud Service (MCS) and Integration Cloud Service (ICS): How secure is your TLS connection?

    Wed, 2017-07-26 08:27

    In a previous blog I have explained what cipher suites are and the role they play in establishing SSL connections, and I have provided some suggestions on how you can determine which cipher suites are strong. In this blog post I’ll apply this knowledge to look at incoming connections to Oracle Mobile Cloud Service and Integration Cloud Service. Outgoing connections are a different story altogether. These two cloud services do not allow you to control cipher suites to the extent that, for example, Oracle Java Cloud Service does, and you are thus forced to use the cipher suites Oracle has chosen for you.

    Why should you be interested in TLS? Well, ‘normal’ application authentication uses tokens (like SAML, JWT, OAuth). Once an attacker obtains such a token (and no additional client authentication is in place), it is more or less fair game for the attacker. An important mechanism which prevents the attacker from obtaining the token is TLS (Transport Layer Security). The strength of the provided security depends on the choice of cipher suite. The cipher suite is chosen by negotiation between client and server: the client provides options and the server chooses the one it prefers.

    Disclaimer: my knowledge is not at the level that I can personally exploit the liabilities in different cipher suites. I’ve used several posts I found online as references. I have used the OWASP TLS Cheat Sheet extensively which provides many references for further investigation should you wish.

    Method

    Cipher suites

    The supported cipher suites for the Oracle Cloud services appear to be (at first glance) host specific and not URL specific. The APIs and exposed services use the same cipher suites. Also, the specific configuration of the service is irrelevant: we are testing the connection, not the message. Using tools described here (easiest for public URLs) you can check if the SSL connection is secure. You can also check yourself with a command like: nmap --script ssl-enum-ciphers -p 443 hostname. There are also various scripts available; see here for some suggestions.
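    If you prefer to check from Node.js code, a small sketch like the one below prints the protocol and cipher suite that a client with default settings negotiates with a host (the host name is a placeholder):

    var tls = require('tls');

    var host = '<your-service-host>.oraclecloud.com';    // placeholder host name

    var socket = tls.connect({ host: host, port: 443, servername: host }, function () {
      console.log('Protocol: ' + socket.getProtocol());               // e.g. TLSv1.2
      console.log('Cipher  : ' + JSON.stringify(socket.getCipher())); // negotiated cipher suite
      socket.end();
    });
    socket.on('error', function (err) {
      console.log('Connection failed', err);
    });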

    I’ve looked at two Oracle Cloud services which are available to me at the moment:


    It was interesting to see that the supported cipher suites for Mobile Cloud Service and Integration Cloud Service are the same, and that the cipher suites supported by the services and by the APIs are also the same. This could indicate Oracle has public-cloud-wide standards for this and is doing a good job at implementing them!

    Supported cipher suites

    TLS 1.2
    TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA (0xc014) ECDH secp256r1 (eq. 3072 bits RSA) FS
    TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256 (0xc027) ECDH secp256r1 (eq. 3072 bits RSA) FS
    TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA (0xc013) ECDH secp256r1 (eq. 3072 bits RSA) FS
    TLS_RSA_WITH_AES_256_CBC_SHA256 (0x3d)
    TLS_RSA_WITH_AES_256_CBC_SHA (0x35)
    TLS_RSA_WITH_AES_128_CBC_SHA256 (0x3c)
    TLS_RSA_WITH_AES_128_CBC_SHA (0x2f)
    TLS 1.1
    TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA (0xc014) ECDH secp256r1 (eq. 3072 bits RSA) FS
    TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA (0xc013) ECDH secp256r1 (eq. 3072 bits RSA) FS
    TLS_RSA_WITH_AES_256_CBC_SHA (0x35)
    TLS_RSA_WITH_AES_128_CBC_SHA (0x2f)
    TLS 1.0
    TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA (0xc014) ECDH secp256r1 (eq. 3072 bits RSA) FS
    TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA (0xc013) ECDH secp256r1 (eq. 3072 bits RSA) FS
    TLS_RSA_WITH_AES_256_CBC_SHA (0x35)
    TLS_RSA_WITH_AES_128_CBC_SHA (0x2f)
    Liabilities in the cipher suites

    You should not read this as an attack against the choices made in the Oracle Public Cloud for SSL connections. Generally the cipher suites Oracle chose to support are pretty secure and there is no need to worry, unless you want to protect yourself against groups like the larger security agencies. When choosing cipher suites for your own implementations outside the mentioned Oracle cloud products, I would go for stronger cipher suites than the ones provided. Read here.

    TLS 1.0 support

    TLS 1.0 is supported by the Oracle Cloud services. This standard is outdated and should be disabled. Read the following for some arguments on why you should do this. It is possible Oracle chose to support TLS 1.0 since some older browsers (really old ones like IE6) do not support TLS 1.1 and 1.2. This is a trade-off between compatibility and security.

    TLS_RSA_WITH_3DES_EDE_CBC_SHA might be a weak cipher

    There are questions whether TLS_RSA_WITH_3DES_EDE_CBC_SHA could be considered insecure (read here, here and here why). Also, SSL Labs says it is weak. You can mitigate some of the vulnerabilities by not using CBC mode, but that is not an option in the Oracle cloud as GCM is not supported (see more below). If a client indicates it only supports TLS_RSA_WITH_3DES_EDE_CBC_SHA, this cipher suite is used for the SSL connection, making you vulnerable to collision attacks like Sweet32. It also uses a SHA-1 hash, which can be considered insecure (read more below).

    Weak hashing algorithms

    There are no cipher suites available which provide SHA-384 hashing; only SHA-256 and SHA-1. SHA-1 (SHA) is considered insecure (see here and here; plenty of other references to this can be found easily).

    No GCM mode support

    GCM is an authenticated encryption mode: it provides both data authenticity (integrity) and confidentiality, and it is more efficient and performant than CBC mode. CBC by itself provides confidentiality but no authenticity/integrity checking. GCM uses a so-called nonce; you cannot use the same nonce to encrypt data with the same key twice.

    Wildcard certificates are used

    As you can see in the screenshot below, the certificate used for my Mobile Cloud Service contains a wildcard: *

    This means the same certificate is used for all Mobile Cloud Service hosts in a data center, unless specifically overridden. See here: Rule – Do Not Use Wildcard Certificates. Wildcard certificates violate the principle of least privilege. If you decide to implement two-way SSL, I would definitely consider using your own certificates, since you want to avoid trust at the data center level. They also violate the EV Certificate Guidelines. Since the certificate is per data center, there is no difference between the certificate used for development environments and the one used for production environments. In addition, everyone in the same data center will use the same certificate. Should the private key be compromised (of course Oracle will try not to let this happen!), this will be an issue for the entire data center and everyone using the default certificate.

    Oracle provides the option to use your own certificates and even recommends this. See here. This allows you to manage your own host specific certificate instead of the one used by the data center.

    Choice of keys

    Only RSA and ECDHE key exchanges are used and no DSA/DSS keys. Also, the ECDHE suites are given priority over the RSA suites. ECDHE provides forward secrecy. Read more here. DHE, however, is preferred over ECDHE (see here), since ECDHE uses elliptic curves and there are doubts about whether they are really secure. Read here and here. Oracle does not provide DHE support in their list of cipher suites.

    Strengths of the cipher suites

    Is it all bad? No, definitely not! You can see Oracle has put thought into choosing their cipher suites and only provide a select list. Maybe it is possible to request stronger cipher suites to be enabled by contacting Oracle support.

    Good choice of encryption algorithm

    AES is the preferred encryption algorithm (here). WITH_AES_256 is supported, which is a good thing. WITH_AES_128 is also supported. This one is obviously weaker, but it is not really terrible that it is still used; for compatibility reasons OWASP even recommends TLS_RSA_WITH_AES_128_CBC_SHA as a cipher suite (also SHA-1!), so they are not completely against it.

    Good choice of ECDHE curve

    The ECDHE curve used is the default most commonly used secp256r1 which is equivalent to 3072 bits RSA. OWASP recommends > 2048 bits so this is ok.

    No support for SSL2 and SSL3

    Of course SSL2 and SSL3 are not secure anymore and usage should not be allowed.

    So why these choices? Considerations

    I’ve not been involved with these choices and have not talked to Oracle about this. In summary, I’m just guessing at the considerations.

    I can imagine the cipher suites have been chosen to create a balance between compatibility, performance and security. Also, they could be related to export restrictions / government regulations. The supported cipher suites do not all require the installation of JCE (here), but some do. For example, usage of AES_256 and ECDHE requires the JCE cryptographic provider, but AES_128 and RSA do not. Also, of course, compatibility is taken into consideration: the list of supported cipher suites contains common cipher suites supported by most web browsers (see here). When taking performance into consideration (although this is hardware dependent; certain cipher suites perform better on ARM processors, others better on for example Intel), using ECDHE is not at all strange, while not using GCM might not be a good idea (try for example the following: gnutls-cli --benchmark-ciphers). For Oracle, using a single wildcard certificate for a data center is of course an easy and cheap default solution.

    • Customers should consider using their own host specific certificates instead of the default wildcard certificate.
    • Customers should try to put constraints on their clients. Since the public cloud offers support for weak ciphers, the negotiation between client and server determines the cipher suite (and thus strength) used. If the client does not allow weak ciphers, relatively strong ciphers will be used. It of course depends if you are able to do this since if you would like to provide access to the entire world, controlling the client can be a challenge. If however you are integrating web services, you are more in control (unless of course a SaaS solution has limitations).
    • Work with Oracle support to see what is possible and where the limitations are.
    • Whenever you have more control, consider using stronger cipher suites like TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384. A sketch of how a client can constrain the offered cipher suites follows below.
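    As an illustration of constraining the client, the sketch below restricts a Node.js client to TLS 1.2 and to the two strongest ECDHE suites from the list above; the host name and path are placeholders:

    var https = require('https');

    var options = {
      hostname: '<your-service-host>.oraclecloud.com',   // placeholder host name
      port: 443,
      path: '/',                                          // placeholder path
      method: 'GET',
      secureProtocol: 'TLSv1_2_method',                   // only negotiate TLS 1.2
      // OpenSSL names for TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256 and TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA
      ciphers: 'ECDHE-RSA-AES128-SHA256:ECDHE-RSA-AES256-SHA'
    };

    var req = https.request(options, function (res) {
      console.log('Status: ' + res.statusCode);
    });
    req.on('error', function (err) { console.log(err); });
    req.end();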

    The post Oracle Mobile Cloud Service (MCS) and Integration Cloud Service (ICS): How secure is your TLS connection? appeared first on AMIS Oracle and Java Blog.

    Industrial IoT Strategy, The Transference of Risk by using a Digital Twin

    Tue, 2017-07-25 02:41

    The Internet of Things (IoT) is all about getting in-depth insight about your customers. It is the inter-networking of physical devices, vehicles (also referred to as “connected devices” and “smart devices”), buildings, and other items embedded with electronics, software, sensors, actuators, and network connectivity which enable these objects to collect and exchange data.

    For me, IoT is an extension of data integration and big data. The past decade I have worked in the integration field and adding smart devices to these systems makes it even more interesting. Connecting the real world with the digital one creates a huge potential for valuable digital services on top of the physical world. This article contains our vision and guidance for a strategy for The Internet of Things based on literature and our own experience. 

    Drivers for business.

    Everybody is talking about the Internet of Things. This is going to become a billion dollar business in the near future. IoT has become a blanket term for smart, connected devices. Technology is giving these devices the ability to sense and act for themselves, cause an effect on the environment and be controlled by us. Especially in the industrial world, the application of smart sensors has the potential to change the landscape of current suppliers of large-scale industrial solutions.

    This is the perfect storm

    For decades we have had devices with sensors and connectivity, but until now these devices never reached the market potential they currently have. Only two years ago there were technical limitations in processing power, storage, connectivity, and platform accessibility hindering the growth of the usage of IoT devices.

    Now we see a perfect storm: The advances in cloud computing, big data storage, an abundance of fast internet access, machine learning, and smart sensors come together. The past economic crisis has made businesses start focusing more on lean manufacturing, measuring and real-time feedback. And finally, our addiction to social media and direct interaction makes us accustomed to instant feedback. We demand real time process improvement and in-depth, highly personalized services. This can only be achieved by probing deep into data about the behavior of consumers.

    Digital Transformation changes our economy.  

    Smart devices are a driver for efficiency. On one hand, we can save power – by switching off unused machines, for example – and boost effective usage of machines by optimizing their utilization. For example: have cleaning robots visit rooms with a lot of traffic more often, instead of using the same schedule for all rooms. Intensive data gathering offers the possibility to optimize our processes and apply effective usage of machines and resources. These solutions are aimed at saving money. Your customers expect this process data as an inclusive service on top of the product they buy from you. In practice: look at the Nest thermostat; the dashboard and data are perceived as part of the device. Nobody is going to pay extra for the Nest dashboard.

    Create value using a digital twin of your customer

    You can make a real difference with IoT when you consider the long term strategic goals of your company. Smart devices make it possible to acquire extensive data about your customer. This information is very valuable, especially when you combine the individual sensor data of each customer into a complete digital representation of the customer (also called a digital twin). This is very valuable for both B2B and B2C businesses. Having a digital twin of your customer helps you know exactly what your customer needs and what makes them successful. You can create additional services and a better user experience with the data you acquire. Your customers are willing to pay for an add-on when you are able to convert their data into valuable content and actions. This is how you create more revenue.

    IoT is all about transference of risk and responsibility

    I predict IoT will transform the economy. With IoT, you are able to influence your customer and their buying habits. You are able to measure the status and quality of a heating installation, car engine or security system. You are able to influence the operation of these machines and warn your customer up front about possible outages due to wear and usage. The next logical step for your customer is to transfer the responsibility for these machines to you as a supplier. This has huge consequences for the risk profile of your company and the possible liabilities connected to it. Having an extensive sensor network and an operational digital twin of the customer makes it possible to assess and control this risk. You can implement predictive maintenance and reduce the risk of an outage, since you have a vast amount of data and trained algorithms to predict the future state of the machines and your customers. Customers are prepared to pay an insurance fee if you can guarantee the operational state and business continuity.

    How to create a profitable IoT strategy?

    The first step is to determine what kind of company you want to be in the IoT realm. According to Frank Burkitt and Brian Solis there are three types of companies building IoT services:

    • Enablers
      These are the companies that develop and implement IoT technology; they deliver pure IoT solutions, ranging from hardware to all kinds of cloud systems. They have no industry focus and deliver generic IoT solutions. The purpose of these companies is to process as high a volume as possible at a low price. The enablers will focus on delivering endpoint networks and cloud infrastructure. This market will be dominated by a small number of global players who deliver devices, sensors, and suitable cloud infrastructure.
    • Engagers
      These are the companies who design, create, integrate, and deliver IoT services to customers. The purpose of these companies is to deliver customer intimacy by adding close interaction with the end users, aiming their strategy at customer intimacy via IoT, usually via one specific industry or product stack. The engagers will focus on hubs and market-facing solutions like dashboards and historical data. This market will contain traditional software companies able to offer dashboards on top of existing systems and connecting IoT devices.
    • Enhancers
      These are the companies that deliver their own value-added services on top of services delivered by the Engagers. The services of the Engagers are unique to IoT and add a lot of value for their end user. Their goal is to provide richer end-user engagement and to surprise and delight the customer by offering them new services, using their data and enhancing this with your experience and third party sources. This market will contain innovative software companies able to bridge the gap between IoT, Big Data and Machine Learning. These companies need to have excellent technical and creative skills to offer new and disruptive solutions.
    How to be successful in the IoT World?
    1. Decide the type of company you want to be: Enabler, Engager or Enhancer. If you are an Enabler, make sure you offer a distinctive difference compared to existing platforms and devices.
    2. Identify your target market as you need to specialize in making a significant difference.
    3. Hire a designer and a business developer if you aren’t any of these.
    4. Develop using building blocks.
      Enhance existing products and services. Be very selective about what you want to offer. Do not reinvent the wheel; use existing products and services and build on the things that are already being offered as SaaS solutions.
    5. Create additional value
      Enhance existing services with insights and algorithms. Design your service in such a way that you create additional value in your network. Create new business models and partner with companies outside your industry.
    6. Invest in your company
      Train your employees and build relationships with other IoT companies.
    7. Experiment with new ideas, create an innovation lab and link to companies outside your comfort zone to add them to your service

    You are welcome to contact us if you want to know more about adding value to your products and services using IoT.
    We can help you make your products and services smart at scale.  Visit our IoT services page

    The post Industrial IoT Strategy, The Transference of Risk by using a Digital Twin appeared first on AMIS Oracle and Java Blog.

    Oracle Compute Cloud – Uploading My Image – Part Two – Linux 7

    Mon, 2017-07-24 14:20

    In this sequel to part one I will show how you can upload your own (Oracle) Linux 7 image into the IaaS Cloud of Oracle. This post uses the lessons learned from AWS which I described here.

    The tools used are: VirtualBox, Oracle Linux 7, Oracle IAAS Documentation and lots of time.

    With Oracle as cloud provider it is possible to use the UEKR3 or UEKR4 kernels in the image that you prepare in VirtualBox. There is no need to temporarily disable the UEKR3 or UEKR4 repos in your installation. I reused the VirtualBox VM that I had prepared for the previous blog: AWS – Build your own Oracle Linux 7 AMI in the Cloud.

    The details:

    The main part here is (again) making sure that the Xen blkfront and netfront drivers are installed in your initramfs. There are multiple ways of doing so. I prefer changing dracut.conf.

     # additional kernel modules to the default
     add_drivers+="xen-blkfront xen-netfront"

    You could also use:

    rpm -qa kernel | sed 's/^kernel-//'  | xargs -I {} dracut -f --add-drivers 'xen-blkfront xen-netfront' /boot/initramfs-{}.img {}

    But it is easy to forget to check if you need to rebuild your initramfs after you have done a: “yum update”. I know, I have been there…

    The nice part of the Oracle tutorial is that you can minimize the size you need to upload by using a sparse copy etc. But on Windows or in Cygwin that doesn’t work, nor on my iMac. Therefore I had to jump through some hoops by using another VirtualBox Linux VM that could access the image file, make a sparse copy, create a tar file and copy it back to the host OS (Windows or OS X).

    Then use the upload feature of Oracle Compute Cloud – or Oracle Storage Cloud, to be exact.

    Tip: If you get errors that your password isn’t correct (like I did) you might not have set a replication policy. (See the Note at step 7 in the documentation link).

    Now you can associate the image file you just uploaded with an image. Use a name and description that you like:

    2017-07-14 17_54_30-Oracle Compute Cloud Service - Images

    Then Press “Ok” to have the image created, and you will see messages similar to these on your screen:

    2017-07-14 17_54_40

    2017-07-14 17_54_45-Oracle Compute Cloud Service - Images

    I now have two images created in IAAS. One exactly the same as my AWS image source and one with a small but important change:

    2017-07-14 17_55_16-Oracle Compute Cloud Service - Images

    Now create an instance with the recently uploaded image:

    2017-07-14 17_55_37-Oracle Compute Cloud Service - Images

    2017-07-14 17_56_34-Oracle Compute Cloud Service - Instance Creation

    Choose the shape that you need:

    2017-07-14 17_56_45-Oracle Compute Cloud Service - Instance Creation

    Do not forget to associate your SSH Keys with the instance or you will not be able to logon to the instance:

    2017-07-14 17_58_18-Oracle Compute Cloud Service - Instance Creation

    I left the Network details default:
    2017-07-14 18_01_33-Oracle Compute Cloud Service - Instance Creation

    To change the storage details of the boot disk press the “hamburger menu” on the right (Just below “Boot Drive”):

    2017-07-14 18_02_12-Oracle Compute Cloud Service - Instance Creation

    I changed the boot disk from 11GB to 20GB so I can expand the filesystems if needed later on:

    2017-07-14 18_03_21-Oracle Compute Cloud Service - Instance Creation

    Review your input in the next step and press “Create” when you are satisfied:

    2017-07-14 18_04_16-Oracle Compute Cloud Service - Instance Creation

    You will see some messages passing by with the details of steps that have been put in motion:

    2017-07-14 18_04_27-Oracle Compute Cloud Service - Instances (Instances)

    If it all goes too fast you can press the little clock on the right side of you screen to get the ”Operations History”:

    2017-07-14 18_04_35-Oracle Compute Cloud Service - Instances (Instances)

    On the “Orchestrations” tab you can follow the status of the instance creation steps:

    2017-07-14 18_06_45-Oracle Compute Cloud Service - Orchestrations

    Once they have the status ready you will find a running instance on the instances tab:

    2017-07-14 18_09_21-Oracle Compute Cloud Service - Instances (Instances)

    Then you can connect to the instance and do with it whatever you want. In the GUI you can use the “hamburger” menu on the right to view the details of the instance, and for instance stop it:

    2017-07-14 18_14_22-Oracle Compute Cloud Service - Instance Details (Overview)

    Sometimes I got the error below, but found that when I waited a few minutes before repeating the action, it subsequently succeeded:

    2017-07-17 18_01_32-

    A nice feature of the Oracle Cloud is that you can capture screenshots of the console output, just as if you were looking at a monitor:

    2017-07-17 18_46_08-Oracle Compute Cloud Service - Instance Details (Screen Captures)

    And you can view the Console Log (albeit truncated to a certain size) if you added the highlighted text to GRUB_CMDLINE_LINUX in /etc/default/grub:

    [ec2-user@d3c0d7 ~]$ cat /etc/default/grub 
    GRUB_DISTRIBUTOR="$(sed 's, release .*$,,g' /etc/system-release)"
    GRUB_CMDLINE_LINUX="crashkernel=auto rhgb quiet net.ifnames=0 console=ttyS0"

    If you didn’t you will probably see something like:

    2017-07-17 18_46_28-Oracle Compute Cloud Service - Instance Details (Logs)

    If you did you will see something like:

    2017-07-17 19_01_38-Oracle Compute Cloud Service - Instance Details (Logs)

    I hope this helps building your own Linux 7 Cloud Images.

    The post Oracle Compute Cloud – Uploading My Image – Part Two – Linux 7 appeared first on AMIS Oracle and Java Blog.

    Integrating Vue.js in ADF Faces 12c Web Application – using HTML5 style Document Import

    Mon, 2017-07-24 02:51

    Vue.js is a popular framework for developing rich client web applications, leveraging browsers for all they are worth. Vue.js has attracted a large number of developers who together have produced a large number of quite interesting reusable components. ADF Faces is itself a quite mature framework for the development of rich web applications. It was born over 10 years ago and has evolved since, adopting quite a few browser enhancements along the way. However, ADF Faces is still – and will stay – a server side framework that provides only piecemeal support for HTML5 APIs. When developing in ADF Faces, it feels a bit as if you’re missing out on all those rapid, cool, buzzing developments that take place on the client side.

    Oracle strongly recommends you to stay inside the boundaries of the framework. To use JavaScript only sparingly. To not mess with the DOM as that may confuse Partial Page Rendering, one of the cornerstones of ADF Faces 11g and 12c. And while I heed these recommendations and warnings, I do not want to miss out on all the goodness that is available to me.

    So we tread carefully. Follow the guidelines for doing JavaScript things in ADF Faces. Try to keep the worlds of ADF Faces and Vue.js apart, except for when they need to come into contact.

    In this article, I will discuss how the simplest of Vue.js application code can be integrated in a ‘normal’ ADF Faces web application. Nothing fancy yet, no interaction between ADF Faces client components and Vue.js, no exchange of events or even data. Just a hybrid page that contains ADF Faces content (largely server side rendered) and Vue.js content (HTML based and heavily post processed in JavaScript as is normally the case with Vue.js).

    The steps we have to go through:

    1. Create new ADF Faces Web Application with main page
    2. Import Vue.js JavaScript library into ADF Faces web application main page
    3. Create HTML document with Vue.js application content – HTML tags, custom tags, data bound attributes; potentially import 3rd party Vue.js components
    4. Create JavaScript module with initialization of Vue.js application content (function VueInit() – data structure, methods, custom components, … (see:
    5. Create a container in the ADF Faces main page to load the Vue.js content into
    6. Import HTML document with Vue.js content into browser and add to main page DOM
    7. Import custom Vue.js JavaScript module into main page; upon ADF Faces page load event, call VueInit()

    When these steps are complete, the application can be run. The browser will bring up a page with ADF Faces content as well as Vue.js content. A first step towards a truly hybrid application with mutually integrated components. Or at least some rich Vue.js components enriching the ADF Faces application. Such as the time picker (, the Google Charts integrator ( and many more.

    The source code described in this article is in GitHub:

    A brief overview of the steps and code is provided below. The biggest challenge probably was to get HTML into the ADF Faces page that could not be parsed by the ADF Faces framework (which does not allow the notation used by Vue.js, such as :value="expression" and @click="function"). Using a link element for an HTML document is a workaround, followed by a little DOM manipulation. At this moment, this approach is only supported in the Chrome browser. For Firefox there is a polyfill available, and perhaps an approach based on XMLHttpRequest is viable (see this article).


    Create new ADF Faces Web Application with main page

    Use the wizard to create the new application. Then create a new page: main.jsf. Also create a JavaScript module: main.js and import it into the main page:

    <af:resource type="javascript" source="resources/js/main.js"/>

    Import Vue.js JavaScript library into ADF Faces web application main page

    Add an af:resource tag that references the online resource for the Vue.js 2 framework library.

    <af:resource type="javascript" source=""/>

    Create HTML document with Vue.js application content

    Just create a new HTML document in the application – for example VueContent.html. Add some Vue.js specific content using data bound syntax with : and {{}} notation. Use a third party component – for example the 3D carousel:

    The final HTML tags are in VueContent.html as is an import of the 3D carousel component (straight JavaScript reference). Some local custom components are defined in VueContent.js; that is also where the data is prepared that is leveraged in this document.



    Create JavaScript module with initialization of Vue.js application content

    Create JavaScript module VueContent.js with a function VueInit() that will do the Vue.js application initialization and set up data structure, methods, … (see:

    In this library, local custom components are defined – such as app-menu, app-menu-list, update, updates-list, status-replies, post-reply – and third party components are registered – carousel-3d and slide.

    The VueInit() function does the familiar hard Vue.js work:

     function initVue() {
          console.log("Initialize Vue in VueContent.js");
          new Vue({
            el: '#app',
            data: {
              greeting: 'Welcome to your hybrid ADF and Vue.js app!',
              docsURL: '',
              message: 'Hello Vue!',
              value: 'Welcome to the tutorial <small>which is all about Vue.js</small>',
              showReplyModal: false,
              slides: 7
            },
            methods: {
              humanizeURL: function (url) {
                return url
                  .replace(/^https?:\/\//, '')
                  .replace(/\/$/, '');
              }
            },
            components: {
              'carousel-3d': Carousel3d.Carousel3d,
              'slide': Carousel3d.Slide
            }
          }); /* new Vue */
     }

    Create a container in the ADF Faces main page to load the Vue.js content into

    The Vue.js content can be loaded in the ADF page into a DIV element. Such an element can best be created in an ADF Faces web page by using an af:panelGroupLayout with layout set to vertical (says Duncan Mills):

    <af:panelGroupLayout id="app" layout="vertical">

    Import HTML document with Vue.js content into browser and add to main page DOM

    JSF 2 allows us to embed HTML in our JSF pages – XHTML and Facelet, jspx and jsff – although as it happens there are more than a few server side parser limitations that make this not so easy. Perhaps this is only for our own good: it forces us to strictly separate the (client side) HTML that Vue.js will work against and the server side files that are parsed and rendered by ADF Faces. We do need a link between these two of course: the document rendered in the browser from the JSF source needs to somehow import the HTML and JavaScript resources.

    The Vue.js content is in a separate HTML document called VueContent.html. To add the content of this document – or at least everything inside a DIV with id="content" – to the main page, add a <link> element (as described in this article) and have it refer to the HTML document. Also specify an onload listener to process the content after it has been loaded. Note: this event will fire before the page load event fires.

    <link id="VueContentImport" rel="import" href="VueContent.html" onload="handleLoad(event)" onerror="handleError(event)"/>

    Implement the function handleLoad(event) in the main.js JavaScript module. Have it get hold of the just loaded document and deep clone it into the DOM, inside the DIV with the app id (the DIV rendered from the panelGroupLayout component).
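    A minimal sketch of what such a handler could look like (selectors and ids are assumptions based on the description above, not the literal code from the GitHub repository):

    // main.js - sketch of the onload handler for the HTML import
    function handleLoad(event) {
      console.log("VueContent.html has been imported");
      // with HTML Imports, the imported document is exposed on the <link> element
      var importedDoc = event.target.import;
      var content = importedDoc.querySelector('#content');
      // deep clone the imported content into the DIV rendered from the panelGroupLayout (id "app")
      document.querySelector('#app').appendChild(document.importNode(content, true));
    }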

    Import custom Vue.js JavaScript module into main page and call upon Page Load Event

    Import JavaScript module:

    <af:resource type="javascript" source="resources/js/VueContent.js"/>

    Add a clientListener component to execute function init() in main.js that will call VueInit() in VueContent.js :

    <af:clientListener method="init" type="load"/>

    In function init(), call VueInit() – the function that is loaded from VueContent.js – the JavaScript module that constitutes the Vue.js application together with VueContent.html. In VueInit() the real Vue.js initialization is performed and the data bound content inside DIV app is prepared.
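    The glue function in main.js is then little more than this (a sketch; the actual function in the repository may do a bit more):

    // main.js - invoked by the af:clientListener on the page load event
    function init(event) {
      // VueInit() is defined in VueContent.js and bootstraps the Vue instance against DIV #app
      VueInit();
    }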

    The overall set up and flow is depicted in this figure:


    And the application looks like this in JDeveloper:


    When running, this is what we see in the browser (note: only Chrome supports this code at the moment); the blue rectangle indicates the Vue.js content:


    And at the bottom of the page, we see the 3D Carousel:



    Next steps would have us exchange data and events between ADF Faces components and Vue.js content. But as stated at the beginning – we tread carefully and stick to the ADF framework as much as possible.


    Vue 2 – Introduction Guide –

    Vue Clock Picker component – Compare · DomonJi/vue-clock-picker

    Google Charts plugin for Vue – Google Charts Plugin For Vue.js – Vue.js Script

    How to include HTML in HTML (W3 Schools) –

    HTML Imports in Firefox –

    Chrome – HTML5 Imports: Embedding an HTML File Inside Another HTML File –

    Me, Myself and JavaScript – Working with JavaScript in an ADF World, Duncan Mills, DOAG 2015 –,_Myself_and_JavaScript-Praesentation.pdf

    The post Integrating Vue.js in ADF Faces 12c Web Application – using HTML5 style Document Import appeared first on AMIS Oracle and Java Blog.

    Get going with Node.js, npm and Vue.js 2 on Red Hat & Oracle Linux

    Sun, 2017-07-23 11:56

    A quick and rough guide on getting going with Node, npm and Vue.js 2 on an Enterprise Linux platform (Oracle Linux, based on Red Hat Linux).

    Install Node.js on an Oracle Enterprise Linux system:


    as root:

    curl --silent --location | bash -


    yum -y install nodejs

    (in order to disable the inaccessible proxy server that was set up for my yum environment, I had to remove the proxy line in /etc/yum.conf)

    (see instruction at:



    For Vue.js


    still as root:

    npm install vue

    npm install --global vue-cli


    Now again as the [normal] development user:

    create and run your first Vue.js application

    A single HTML document that loads the Vue.js library and contains the Vue.js "application" – and that can be opened as-is in a local browser (no web server required).

    vue init simple my-first-app
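    Such a single document application looks roughly like this (a minimal sketch, not the exact content generated by the simple template):

    <!DOCTYPE html>
    <html>
      <head>
        <!-- load Vue.js 2 straight from a CDN -->
        <script src=""></script>
      </head>
      <body>
        <div id="app">{{ message }}</div>
        <script>
          // bind a Vue instance to the div above
          new Vue({ el: '#app', data: { message: 'Hello Vue!' } });
        </script>
      </body>
    </html>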






    # create a new project using the "webpack" template

    vue init webpack my-second-app



    # install dependencies and go!

    cd my-second-app

    npm install

    npm run dev


    Open the generated Vue.js application in the local browser – or in a remote one:



    Optional – though recommended – is the installation of a nice code editor. One that is to my liking is Microsoft Visual Studio Code – free, light weight, available on all platforms. See for installation instructions:

    To turn the application – simplistic as it is – into a shippable, deployable application, we can use the build feature of webpack:

    npm run build


    The built resources are in the /dist folder of the project. These resources can be shipped and placed on any web server, such as nginx, Apache, Node.js or even WebLogic (co-located with a Java EE web application).

    The build process can be configured through the file /build/, for example to have the name of the application included in the name of the generated resources:
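    As an illustration only (the exact file and keys depend on the version of the vue-cli webpack template), the generated resource names are governed by the webpack output section, roughly along these lines:

    // sketch of a webpack configuration that embeds the application name in the bundle names
    const path = require('path');

    module.exports = {
      entry: './src/main.js',
      output: {
        path: path.resolve(__dirname, 'dist'),
        // prefix the generated bundles with the application name
        filename: 'my-second-app.[name].[chunkhash].js'
      }
    };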



    The post Get going with Node.js, npm and Vue.js 2 on Red Hat & Oracle Linux appeared first on AMIS Oracle and Java Blog.

    Using Vue.JS Community Component in My Own Application

    Sun, 2017-07-23 00:34

    In a recent blog article, I fiddled around a little with Vue.js – Auto suggest with HTML5 Data List in Vue.js 2 application. For me, it was a nice little exercise to get going with properties and events, the basics for creating a custom component. It was fun to do and easy to achieve some degree of success.

    Typing into a simple input field lists a number of suggestions – using the HTML5 data list component.


    At that moment, I was not yet aware of the wealth of reusable components available to Vue.js developers, for example at  and

    I decided to try my hand at reusing just one of those components, expecting that to give me a good impression of what it is in general to reuse components. I stumbled across a nice little carousel component: and thought that it might be nice to display the news items for the selected news source in a carousel. How hard can that be?

    (Well, in many server side web development frameworks, integrating third party components actually can be quite hard. And I am not sure it is that simple in all client side frameworks either.)

    The steps with integrating the Carousel in my Vue.js application turned out to be:

    1. Install the component into the application’s directory structure:

    npm install -S vue-carousel-3d

    This downloads a set of files into the node_modules directory, in the child folder vue-carousel-3d.


    2. Import the component into the application

    In main.js add an import statement:

    import Carousel3d from 'vue-carousel-3d';

    To install the plugin – make it globally available throughout the application – add this line, also in main.js:



    At this point, the carousel component is available and can be added in templates.

    3. To use the carousel, follow the instructions in its documentation:

    In the Newslist component from the original sample application (based on this article) I have introduced the carousel and slide components that have become available through the import of the carousel component:

      <div class="newslist">
        <carousel-3d controlsVisible="true">
          <slide :index="index"  v-for="(article,index) in articles">
            <div class="media-left">
              <a v-bind:href="article.url" target="_blank">
                <img class="media-object" v-bind:src="article.urlToImage">
            <div class="media-body">
              <h4 class="media-heading"><a v-bind:href="article.url" target="_blank">{{article.title}}</a></h4>
              <h5><i>by {{}}</i></h5>

    Note: comparing with the code as it was before, only two lines were meaningfully changed – the ones with the carousel-3d tag and the slide tag.

    The result: news items displayed in a 3d carousel.


    The post Using Vue.JS Community Component in My Own Application appeared first on AMIS Oracle and Java Blog.


    TWO WAY SSL

    Fri, 2017-07-21 09:51

    How it works in a simple view

    Many implementations use two-way SSL certificates, but are you still wondering how it works?

    Two-way SSL means that a client and a server communicate with each other over a verified connection. The verification is done with certificates that identify each party. Both the server and the client have a private key certificate and a public key certificate in place.

    In short and simple terms.

    A server has a private certificate which will be accepted by a client. The client also has a private certificate which will be accepted by the server. This is called the handshake, and once it is complete it is safe to send messages to each other. The process resembles a cash withdrawal. Putting in your bank card corresponds to sending a hello to the server; your card is accepted if it is valid for that machine. You are then asked for your PIN code. With two-way SSL, the server sends a code and the client accepts it. Back at the cash machine, the display asks for your code; you enter the right code and send it to the server, and the server accepts the connection. In the two-way SSL process, the client likewise sends a thumbprint which has to be accepted by the server. When this process is complete, at the cash machine you can enter the amount you want to withdraw; over the two-way SSL connection, a message can be sent. The cash machine responds with cash and probably a receipt; the two-way SSL connection responds with a response message.

    In detail.

    These are the basic components necessary to communicate with two-way SSL over HTTPS.

    Sending information to an http address is done in plain text; anyone intercepting this communication can read the information in clear text, which is unacceptable for a lot of internet traffic. You do not want to send passwords in plain text over the internet, so HTTPS and a certificate are necessary.

    So the first part to describe is the public key.

    A public key consists of a root certificate with one or more intermediate certificates. A certificate authority generates a root certificate, on top of that an intermediate certificate, and on top of that certificate possibly another intermediate certificate. This is done to narrow down the set of clients that can communicate with you. A root certificate is used by several intermediates, and an intermediate certificate can be used by other intermediate certificates, so trusting the root certificate means accepting connections from all of its intermediates. A public key is not protected by a password and can be shared.

    The second part is the private key.

    A private key is built like a public key, but on top of the chain a private key is installed; this key is client specific and protected by a password. The private key represents you as a firm or as a person, so you do not want to share it with other people.

    What happens when setting up a two-way SSL connection

    The first step in the communication is that the client sends a hello to the server, after which information is exchanged. The server sends a request to the client with an encoded string of the thumbprint of its private key. The authorization key of the underlying public chain is sent to ask whether the client will accept the communication. When the public key in the request corresponds to a public key on the client, an OK sign is sent back. The server also asks for the encoded string of the client, so the client sends its encoded thumbprint to the server. When the server accepts this, in case of a match with one of its public keys, the connection between client and server is established and a message can be sent.
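    To make this concrete, here is a minimal sketch of a two-way SSL setup in Node.js; the file names, port and host are placeholder assumptions and not part of this article:

    // two-way-ssl.js - minimal mutual TLS sketch
    const https = require('https');
    const fs = require('fs');

    const serverOptions = {
      key: fs.readFileSync('server-key.pem'),    // server private key
      cert: fs.readFileSync('server-cert.pem'),  // server certificate
      ca: fs.readFileSync('ca-cert.pem'),        // chain used to verify client certificates
      requestCert: true,                         // ask the client for its certificate
      rejectUnauthorized: true                   // refuse clients that present an untrusted certificate
    };

    https.createServer(serverOptions, (req, res) => {
      res.end('hello, mutually authenticated client\n');
    }).listen(8443, () => {
      // the client in turn presents its own certificate when calling the server
      const clientOptions = {
        hostname: 'localhost', port: 8443, path: '/',
        key: fs.readFileSync('client-key.pem'),
        cert: fs.readFileSync('client-cert.pem'),
        ca: fs.readFileSync('ca-cert.pem')
      };
      https.get(clientOptions, res => res.pipe(process.stdout));
    });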

    A certificate has an expiration date, so a certificate (public and private) will only work until that date is reached. Normally it takes some time to receive a new certificate, so request a new one in time.

    A certificate also contains a version; version 3 is currently the standard. The term SHA will come up as well: things started with SHA-1, but that is no longer considered safe enough, so we now use SHA-2 certificates (usually shown as SHA256).

    The post TWO WAY SSL appeared first on AMIS Oracle and Java Blog.

    Machine Learning in Oracle Database – Classification of Conference Abstracts based on Text Analysis

    Tue, 2017-07-18 01:53

    Machine Learning is hot. The ability to have an automated system predict, classify, recommend and even decide based on models derived from past experience is quite attractive. And with the number of obvious applications of machine learning – Netflix and Amazon recommendations, intelligent chat bots, license plate recognition in parking garages, spam filters in email servers – the interest further grows. Who does not want to apply machine learning?

    This article shows that the Oracle Database (platform) – with the Advanced Analytics option – is perfectly capable of doing ‘machine learning’. And has been able to do such learning for many years. From the comfort of their SQL & PL/SQL zone, database developers can play data scientists. The challenge is as follows:

    For the nlOUG Tech Experience 2017 conference, we have a set of about 90 abstracts in our table (title and description). 80 of these abstracts have been classified into the conference tracks, such as DBA, Development, BI & Warehousing, Web & Mobile, Integration & Process. For about 10 abstracts, this classification has not yet been done – they do not currently have an assigned track. We want to employ machine learning to determine the track for these unassigned abstracts.

    The steps we will go through to solve this challenge:

  • Create a database table with the conference abstracts – at least columns title, abstract and track

  • Create an Oracle Text policy object

  • Specify the model configuration settings

  • Create the model by passing the model settings and text transformation instructions to DBMS_DATA_MINING.CREATE_MODEL.

  • Test the model/Try out the model – in our case against the currently unassigned conference abstracts

    The volume of code required for this is very small (less than 30 lines of PL/SQL). The time it takes to go through this is very limited as well. Let's see how this works. Note: the code is in a GitHub repository: .

    Note: from the Oracle Database documentation on text mining:

    Text mining is the process of applying data mining techniques to text terms, also called text features or tokens. Text terms are words or groups of words that have been extracted from text documents and assigned numeric weights. Text terms are the fundamental unit of text that can be manipulated and analyzed.

    Oracle Text is a Database technology that provides term extraction, word and theme searching, and other utilities for querying text. When columns of text are present in the training data, Oracle Data Mining uses Oracle Text utilities and term weighting strategies to transform the text for mining. Oracle Data Mining passes configuration information supplied by you to Oracle Text and uses the results in the model creation process.

    Create a database table with the conference abstracts

    I received the data in an Excel spreadsheet. I used SQL Developer to import the file and create a table from it. I then exported the table to a SQL file with DDL and DML statements.



    Create an Oracle Text policy object

    An Oracle Text policy specifies how text content must be interpreted. You can provide a text policy to govern a model, an attribute, or both the model and individual attributes.

    declare
      l_policy     VARCHAR2(30):='conf_abstrct_mine_policy';
      l_preference VARCHAR2(30):='conference_abstract_lexer';
    begin
      ctx_ddl.create_preference(l_preference, 'BASIC_LEXER');
      ctx_ddl.create_policy(l_policy, lexer => l_preference);
    end;

    Note: the database user you use for this requires two system privileges from the DBA: grant execute on ctx_ddl and grant create mining model

    Specify the text mining model configuration settings

    When the Data Mining model is created with a PL/SQL command, we need to specify the name of a table that holds key-value pairs (columns setting_name and setting_value) with the settings that should be applied.

    Create this settings table.

    CREATE TABLE text_mining_settings
    ( setting_name  VARCHAR2(30)
    , setting_value VARCHAR2(4000)
    );
    Choose the algorithm to use for classification – in this case Naïve Bayes. Indicate the Oracle Text policy to use – in this case conf_abstrct_mine_policy – through INSERT statements.

    declare
      l_policy VARCHAR2(30) := 'conf_abstrct_mine_policy';
    begin
      -- Populate settings table: Naïve Bayes, the Oracle Text policy and (assumed here) automatic data preparation
      INSERT INTO text_mining_settings VALUES ('ALGO_NAME', 'ALGO_NAIVE_BAYES');
      INSERT INTO text_mining_settings VALUES ('ODMS_TEXT_POLICY_NAME', l_policy);
      INSERT INTO text_mining_settings VALUES ('PREP_AUTO', 'ON');
      commit;
    end;


    Pass the model settings and text transformation instructions to DBMS_DATA_MINING.CREATE_MODEL

    I do not like the elaborate, unintuitive syntax required for creating a model, and I do not like the official Oracle documentation around this. It is not as naturally flowing as it should be; the pieces do not fit together nicely. It feels a little like the SQL Model clause – something that never felt quite right to me.

    Well, this is how it is. To specify which columns must be treated as text (configure text attribute) and, optionally, provide text transformation instructions for individual attributes, we need to use a dbms_data_mining_transform.TRANSFORM_LIST object to hold all columns and/or SQL expressions that contribute to the identification of each record. The attribute specification is a field (attribute_spec) in a transformation record (transform_rec). Transformation records are components of transformation lists (xform_list) that can be passed to CREATE_MODEL. You can view attribute specifications in the data dictionary view ALL_MINING_MODEL_ATTRIBUTES.

    Here is how we specify the text attribute abstract:

    dbms_data_mining_transform.SET_TRANSFORM( xformlist, 'abstract', NULL, 'abstract', NULL, 'TEXT(TOKEN_TYPE:NORMAL)');

    where xformlist is a local PL/SQL variable of type dbms_data_mining_transform.TRANSFORM_LIST.

    In the call to create_model, we specify the name of the new model, the table (or view) against which the model is to be built, the target column whose values the model should predict, the name of the database table with the key-value pairs holding the settings for the model, and the list of text attributes:

    declare
      xformlist dbms_data_mining_transform.TRANSFORM_LIST;
    begin
      -- add columns abstract and title as columns to parse and use for text mining
      dbms_data_mining_transform.SET_TRANSFORM( xformlist, 'abstract', NULL, 'abstract', NULL, 'TEXT(TOKEN_TYPE:NORMAL)');
      dbms_data_mining_transform.SET_TRANSFORM( xformlist, 'title', NULL, 'title', NULL, 'TEXT(TOKEN_TYPE:NORMAL)');
      dbms_data_mining.create_model
      ( model_name          => 'ABSTRACT_CLASSIFICATION'
      , mining_function     => dbms_data_mining.classification
      , data_table_name     => 'OGH_TECHEXP17'
      , case_id_column_name => 'title'
      , target_column_name  => 'track'
      , settings_table_name => 'text_mining_settings'
      , xform_list          => xformlist);
    end;

    Oracle Data Miner needs to have one attribute that identifies each record; the name of the column to use for this is passed as the case id.


    Test the model/Try out the model – in our case against the currently unassigned conference abstracts

    Now that the model has been created, we can make use of it for predicting the value of the target column for selected records.

    First, let’s have the model classify the abstracts without track:

    SELECT title
    ,      PREDICTION(ABSTRACT_CLASSIFICATION USING *) AS predicted_track
    ,      abstract
    FROM   OGH_TECHEXP17
    where  track is null



    We can use the model also to classify data on the fly, like this (using two abstracts from a different conference that are not stored in the database at all):

    with sessions_to_judge as
    ( select 'The Modern JavaScript Server Stack' title
      , 'The usage of JavaScript on the server is rising, and Node.js has become popular with development shops, from startups to big corporations. With its asynchronous nature, JavaScript provides the ability to scale dramatically as well as the ability to drive server-side applications. There are a number of tools that help with all aspects of browser development: testing, packaging, and deployment. In this session learn about these tools and discover how you can incorporate them into your environment.' abstract
      from dual
      union all
      select 'Winning Hearts and Minds with User Experience' title
      , 'Not too long ago, applications could focus on feature functionality alone and be successful. Today, they must also be beautiful, responsive, and intuitive. In other words, applications must be designed for user experience (UX) because when they are, users are far more productive, more forgiving, and generally happier. Who doesnt want that? In this session learn about the psychology behind what makes a great UX, discuss the key principles of good design, and learn how to apply them to your own projects. Examples are from Oracle Application Express, but these principles are valid for any technology or platform. Together, we can make user experience a priority, and by doing so, win the hearts and minds of our users. We will use Oracle JET as well as ADF and some mobile devices and Java' abstract
      from dual
    )
    SELECT title
    ,      PREDICTION(ABSTRACT_CLASSIFICATION USING *) AS predicted_track
    ,      abstract
    FROM   sessions_to_judge



    Both abstracts are assigned tracks within the boundaries of the model. If these abstracts were submitted to the Tech Experience 2017 conference, they would have been classified like this. It would be interesting to see which changes to make to, for example, the second abstract on user experience in order to have it assigned to the more fitting Web & Mobile track.

    One final test: find all abstracts for which the model predicts a different track than the track that was actually assigned:

    select *
    from ( SELECT title
           ,      track
           ,      PREDICTION(ABSTRACT_CLASSIFICATION USING *) AS predicted_track
           FROM   OGH_TECHEXP17
           where  track is not null
         )
    where track != predicted_track


    Seems not unreasonable to have a second look at this track assignment.


    Source code in GitHub: 

    Oracle Advanced Analytics Database Option: 

    My big inspiration for this article:  Introduction to Machine Learning for Oracle Database Professionals by Alex Gorbachev –

    Oracle Documentation on Text Mining:

    Toad World article on Explicit Semantic Analysis setup using SQL and PL/SQL:

    Sentiment Analysis Using Oracle Data Miner – OTN article by Brendan Tierney – 

    My own blogs on Oracle Database Data Mining from PL/SQL – from long, long ago: Oracle Datamining from SQL and PL/SQL and Hidden PL/SQL Gem in 10g: DBMS_FREQUENT_ITEMSET for PL/SQL based Data Mining

    The post Machine Learning in Oracle Database – Classification of Conference Abstracts based on Text Analysis appeared first on AMIS Oracle and Java Blog.

    Virtualization on Windows 10 with Virtual Box, Hyper-V and Docker Containers

    Mon, 2017-07-17 16:17

    Recently I started working on a brand new HP ZBook 15-G3 with Windows 10 Pro. And I immediately tried to return to the state I had my previous Windows 7 laptop in: Oracle Virtual Box for running most software in virtual machines, using Docker Machine (and Kubernetes) for running some things in Docker Containers and using Vagrant to spin up some of these containers and VMs.

    I quickly ran into some issues that made me reconsider – and realize that some things are different on Windows 10. In this article a brief summary of my explorations and findings.

    • Docker for Windows provides near native support for running Docker Containers; the fact that under the covers there is still a Linux VM running is almost hidden, and from the command line (PowerShell) and a GUI I have easy access to the containers. I do not believe though that I can run containers that expose a GUI – except through a VNC client
    • Docker for Windows leverages Hyper-V. Hyper-V lets you run an operating system or computer system as a virtual machine on Windows. (Hyper-V is built into Windows as an optional feature; it needs to be explicitly enabled) Hyper-V on Windows is very similar to VirtualBox
    • In order to use Hyper-V or Virtual Box, hardware virtualization must be enabled in the system’s BIOS
    • And the one finding that took longest to realize: Virtual Box will not work if Hyper-V is enabled. So the system at any one time can only run Virtual Box or Hyper-V (and Docker for Windows), not both. Switching Hyper-V support on and off is fairly easy, but it does require a reboot

    Quick tour of Windows Hyper-V

    Creating a virtual machine is very easy. A good example is provided in this article: that describes how a Hyper-V virtual machine is created with Ubuntu Linux.

    I went through the following steps to create a Hyper-V VM running Fedora 26. It was easy enough. However, the result is not as good in terms of the GUI experience as I had hoped it would be. Some of my issues: low resolution, only a 4:3 aspect ratio, and I cannot get out of full screen mode (that requires CTRL-ALT-BREAK, my keyboard does not have a Break key, and all alternatives I have found do not work for me).

      • Download ISO image for Fedora 26 (Fedora-Workstation-Live-x86_64-26-1.5.iso using Fedora Media Writer or from
      • Enable Virtualization in BIOS
      • Enable Hyper-V (First, open Control Panel. Next, go to Programs. Then, click “Turn Windows features on or off”. Finally, locate Hyper-V and click the checkbox (if it isn’t already checked))
      • Run Hyper-V Manager – click on search, type Hype… and click on Hyper-V Manager
      • Create Virtual Switch – a Network Adapter that will allow the Virtual Machine to communicate to the world
      • Create Virtual Machine – specify the name, the size and location of the virtual hard disk (well, real enough inside the VM, virtual on your host), the size of memory, select the network switch (created in the previous step) and specify the operating system and the ISO file it will be installed from
      • Start the virtual machine and connect to it. It will boot and allow you to run through the installation procedure
      • Potentially change the screen resolution used in the VM. That is not so simple: see this article for an instruction: Note: this is one of the reasons why I am not yet a fan of Hyper-V
      • Restart the VM and connect to it (note: you may have to eject the ISO file from the virtual DVD player, as otherwise the machine could boot again from the ISO image instead of the now properly installed (virtual) hard disk)


    Article that explains how to create a Hyper-V virtual machine that runs Ubuntu (including desktop): 

    Microsoft article on how to use local resources (USB, Printer) inside Hyper-V virtual machine: 

    Microsoft documentation: introduction of Hypervisor Hyper-v on Windows 10:

    Two articles on converting Virtual Box VM images to Hyper-V: and (better)

    And: how to create one’s own PC into a Hyper-V VM:

    Rapid intro to Docker on Windows

    Getting going with Docker on Windows is surprisingly simple and pleasant. Just install Docker for Windows (see for example this article for instructions: ). Make sure that Hyper-V is enabled – because Docker for Windows leverages Hyper-V to run a Linux VM: the MobyLinuxVM that you see the details for in the next figure.


    At this point you can interact with Docker from the PowerShell command line – simply type docker ps, docker run, docker build and other docker commands on your command line. To just run containers based on images – local or in public or private registries – you can use the Docker GUI Kitematic. Getting Kitematic installed is a separate install action that is largely automated, as is described here. That is well worth the extremely small trouble it is.


    From Kitematic, you have a graphical overview of your containers as well as an interactive UI for starting containers, configuring them, inspecting them and interacting with them. All things you can do from the command line – but so much simpler.


    In this example, I have started a container based on the ubuntu-xfce-nvc image (see which runs the Ubuntu Linux distribution with a "headless" VNC session, the Xfce4 UI and preinstalled Firefox and Chrome browsers.


    The Kitematic IP & Ports tab specifies that port 5901 – the VNC port – is mapped to port 32769 on the host (my Windows 10 laptop). I can run the MobaXterm tool and open a VNC session with it at port 32769. This allows me to remotely (or at least outside of the container) see the GUI for the Ubuntu desktop:


    Even though it looks okay and it is pretty cool that I can graphically interact with the container, it is not a very good visual experience – especially when things start to move around. Docker for Windows is really best for headless programs that run in the background.

    For quickly trying out Docker images and for running containers in the background – for example with a MongoDB database, an Elastic Search Index and a Node.JS or nginx web server – this seems to be a very usable way of working.


    Introducing Docker for Windows: Documentation

    Download Docker for Windows Community Edition:

    Article on installation for Kitematic – the GUI for Docker for Windows: 

    Download MobaXterm: 

    Virtual Box on Windows 10

    My first impression of Virtual Box compared to Hyper-V is that, for now at least, I far prefer Virtual Box (for running Linux VMs). The support for shared folders between host and guest, the high resolution GUI for the guest, and the fact that currently many prebuilt images are available for Virtual Box and not so many (or hardly any) for Hyper-V are for now points in favor of Virtual Box. I never run VMs with Windows as guest OS; I am sure that would impact my choice.

    Note – once more – that for VirtualBox to run on Windows 10, you need to make sure that hardware virtualization is enabled in the BIOS and that Hyper-V is not enabled. Failing to take care of either of these two will return the same error: VT-x is not available (VERR_VMX_NO_VMX):


    Here is a screenshot of a prebuilt VM image running on Virtual Box on Windows 10 – all out of the box.


    No special set up required. It uses the full screen, it can interact with the host, is clipboard enabled, I can easily toggle between guest and host and it has good resolution and reasonable responsiveness:



    Article describing setting up two boot profiles for Windows 10 – one for Hyper-V and one without it (for example run Virtual Box):

    Article that explains how to create a Hyper-V virtual machine that runs Ubuntu (including desktop): 

    Microsoft article on how to use local resources (USB, Printer) inside Hyper-V virtual machine: 

    Microsoft documentation: introduction of Hypervisor Hyper-v on Windows 10:

    HP Forum entry on enabling Virtualization in the BIOS for ZBook G2: 

    Introducing Docker for Windows: Documentation

    Download Docker for Windows Community Edition:

    Article on installation for Kitematic – the GUI for Docker for Windows: 

    Two articles on converting Virtual Box VM images to Hyper-V: and (better)

    And: how to create one’s own PC into a Hyper-V VM:

    The post Virtualization on Windows 10 with Virtual Box, Hyper-V and Docker Containers appeared first on AMIS Oracle and Java Blog.

    Auto suggest with HTML5 Data List in Vue.js 2 application

    Sun, 2017-07-16 06:11

    This article shows data (News stories) retrieved from a public REST API ( in a nice and simple yet attractive Vue.js 2 application. In the example, the user selects a news source using a dropdown select component.


    I was wondering how hard – or easy – it would be to replace the select component with an input component with associated data list – a fairly new HTML5 addition that is rendered as a free format entry field with associated list of suggestions based on the input. In the case of the sample News List application, this component renders like this:


    and this if the user has typed “on”


    To change the behavior of the SourceSelection component in the sample, I first clone the original source repository from GitHub.  I then focus only on the file SourceSelection.vue in the components directory.

    I have added the <datalist> tag with the dynamic creation of <option> elements in the same way as in the original <select> element. With one notable change: with the select component, we have both the display label and the underlying value. With datalist, we have only one value to associate with each option – the display label.

    The input element is easily associated with the datalist, using the list attribute. The input element supports the placeholder attribute that allows us to present an initial text to the end user. The input element is two-way databound to property source on the component. Additionally, the input event – which fires after each change in the value of the input element – is associated with a listener method on the component, called sourceChanged.

    I make a distinction now between the source property – which is bound to the value in the input field – and the deepSource property which holds the currently selected news source object (with name, id and url). In function sourceChanged() the new value of source is inspected. If it differs from the currently selected deepSource, then we try to find this new value of source in the array of news sources. If we find it, we set that news source as the new deepSource – and publish the event sourceChanged.
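    Condensed, the component could look roughly like this (a sketch based on the description above; the actual SourceSelection.vue in the repository differs in details such as how the news sources are fetched):

    <template>
      <div>
        <input list="sources-list" placeholder="Select a news source" v-model="source" @input="sourceChanged"/>
        <datalist id="sources-list">
          <option v-for="s in sources" :value=""></option>
        </datalist>
      </div>
    </template>

    <script>
    export default {
      data() {
        return { source: '', deepSource: {}, sources: [] };
      },
      methods: {
        sourceChanged() {
          // only act when the typed value differs from the currently selected news source
          if (this.source !== {
            var match = this.sources.find(s => === this.source);
            if (match) {
              this.deepSource = match;
              this.$emit('sourceChanged', this.deepSource);
            }
          }
        }
      }
    };
    </script>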

    The full code for the SourceSelection.vue file is here from

    The post Auto suggest with HTML5 Data List in Vue.js 2 application appeared first on AMIS Oracle and Java Blog.

    First encounters of a happy kind – rich web client application development with Vue.js

    Sun, 2017-07-16 01:39

    Development of rich web applications can be done in various ways, using one or more of many frameworks. In the end it all boils down to HTML(5), CSS and JavaScript, run and interpreted by the browser. But the exact way of getting there differs. Server side oriented web applications with .NET and Java EE (Servlet, JSP, JSF) and also PHP, Python and Ruby have long been the most important way of architecting web applications. However, with the power of today's browsers, the advanced state of HTML5 and JavaScript and the high degree of standardization across browsers, it now almost goes without saying that web applications are implemented with a rich client side that interacts with a backend to a very limited degree and typically only to retrieve or pass data or enlist external services and complex backend operations. What client/server did to terminal based computing in the early nineties, the fat browser is doing now to three tier web computing with its heavy focus on the server side.

    The most prominent frameworks for developing these fat browser based clients are Angular and Angular 2, React.js, Ember, complemented by jQuery and a plethora of other libraries, components and frameworks (see for example this list of top 9 frameworks). And then there is Vue.js. To be honest, I am not sure where Vue ranks in all the trends and StackOverflow comparisons etc. However, I did decide to take a quick look at Vue.js – and I liked what I saw.

    From the Vue website:

    Vue (pronounced /vjuː/, like view) is a progressive framework for building user interfaces. Unlike other monolithic frameworks, Vue is designed from the ground up to be incrementally adoptable. The core library is focused on the view layer only, and is very easy to pick up and integrate with other libraries or existing projects. On the other hand, Vue is also perfectly capable of powering sophisticated Single-Page Applications when used in combination with modern tooling and supporting libraries.

    I have never really taken to Angular. It felt overly complex and I never particularly liked it. Perhaps I should give it another go – now that my understanding of modern web development has evolved. Maybe now I am finally ready for it. Instead, I checked out Vue.js and it made me more than a little happy. I smiled as I read through the introductory guide, because it made sense. The pieces fit together. I understand the purpose of the main moving pieces and I enjoy trying them out. The two way data binding is fun. The encapsulation of components, passing down properties, passing up events – I like that too. The HTML syntax, the use of templates, the close fit with “standard” HTML. It somehow agrees with me.

    Note: it is still early days and I have not yet built a serious application with Vue. But I thought I should share some of my excitement.

    The creator of Vue, Evan You, writes about Vue's origins:

    I started Vue as a personal project when I was working at Google Creative Labs in 2013. My job there involved building a lot of UI prototypes. After hand-rolling many of them with vanilla JavaScript and using Angular 1 for a few, I wanted something that captured the declarative nature of Angular’s data binding, but with a simpler, more approachable API. That’s how Vue started.

    And that is what appealed to me.

    The first thing I did to get started with Vue.js was to read through the Introductory Guide for Vue.js 2.0: .

    Component Tree

    It is a succinct tour and explanation, starting at the basics and quickly coming round to the interesting challenges. Most examples in the guide work inline – and using the Google Chrome add-in for Vue.js it is even easier to inspect what is going on in the runtime application.

    The easiest way to try out Vue.js (at its simplest) is using the JSFiddle Hello World example.

    Next, I read through and followed the example of a more interesting Vue application in this article that shows data (News stories) retrieved from a public REST API (

    This example explains in a very enjoyable way how two components are created – news source selection and news story list from the selected source – as encapsulated, independent components that still work together. Both components interact with the REST API to fetch their data. The article starts with an instruction on how to install the Vue command line tool and initialize a new project with a generated scaffold. If Node and npm are already installed, you will be up and running with the hello world of Vue applications in less than 5 minutes.

    Vue and Oracle JET

    One other line of investigation is how Vue.js can be used in an Oracle JET application, to complement and perhaps even replace KnockOut. More on that:

    The post First encounters of a happy kind – rich web client application development with Vue.js appeared first on AMIS Oracle and Java Blog.

    Running any Node application on Oracle Container Cloud Service

    Sun, 2017-07-16 00:32

    In an earlier article, I discussed the creation of a generic Docker Container Image that runs any Node.js application based on sources for that application on GitHub. When the container is started, the GitHub URL is passed in as a parameter and the container will download the sources and run the application. Using this generic image, you can run your Node application everywhere you can run a Docker container. One of the places where you can run a Docker Container is the Oracle Container Cloud Service (OCCS) – a service that offers a platform for managing your container landscape. In this article, I will show how I used OCCS to run my generic Docker image for running Node applications and how I configured the service to run a specific Node application from GitHub.
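    To give an idea of the mechanism, here is a rough sketch of what such a generic runner could do; this is an illustration only, not the actual code of that image. APP_PORT is the environment variable mentioned later in this article, while GIT_URL is an assumed name for the GitHub URL parameter:

    // runner.js - rough sketch of a generic Node application runner inside the container
    const { execSync } = require('child_process');

    const gitUrl = process.env.GIT_URL;              // GitHub URL passed in when the container is started
    const appPort = process.env.APP_PORT || 8080;    // port the Node application should listen on

    execSync('git clone ' + gitUrl + ' /tmp/app', { stdio: 'inherit' });
    execSync('npm install', { cwd: '/tmp/app', stdio: 'inherit' });
    execSync('npm start', {
      cwd: '/tmp/app',
      stdio: 'inherit',
      env: Object.assign({}, process.env, { PORT: appPort })
    });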

    Getting started with OCCS is described very well in an article by my colleague Luc Gorissen on this same blog: Docker, WebLogic Image on Oracle Container Cloud Service. I used his article to get started myself.

    The steps are:

    • create OCCS Service instance
    • configure OCCS instance (with Docker container image registry)
    • Create a Service for the desired container image (the generic Node application runner) – this includes configuring the Docker container parameters such as port mapping and environment variables
    • Deploy the Service (run a container instance)
    • Check the deployment (status, logs, assigned public IP)
    • Test the deployment – check if the Node application is indeed available


    Create OCCS Service instance

    Assuming you have an Oracle Public Cloud account with a subscription to OCCS, go to the Dashboard for OCCS and click on Create Service.


    Configure the service instance:



    However, do not make it too small (!) (Oracle Cloud does not come in small portions):


    So now with the minimum allowed data volume size (for a stateless container!)


    This time I pass the validations:


    And the Container Cloud Service instance is  created:



    Configure OCCS instance (with Docker container image registry)

    After some time, when the instance is ready, I can access it:



    It is pretty sizable as you can see.

    Let’s access the Container console.



    The Dashboard gives an overview of the current status, the actual deployments (none yet) and access to Services, Stacks, Containers, Images and more.


    One of the first things to do is to configure a (Container Image) Registry – for example a local registry or an account on Docker Hub. In my case that is my own Docker Hub account, where I have saved container images that I need to create containers from in the Oracle Container Cloud:


    My details are validated:


    The registry is added:



    Create a Service for a desired container image

    Services are container images along with configuration to be used for running containers. Oracle Container Cloud comes with a number of popular container images already configured as services. I want to add another service, for my own image: the generic Node application runner. For this I select the image from my Docker Hub account, followed by configuring the Docker container parameters such as port mapping and environment variables.


    The Service editor – the form to define the Image (from one of the configured registries), the name of the service (which represents the combination of the image with a set of configuration settings to make it into a specific service) and of course those configuration settings – port mappings, environment variables, volumes etc.


    Note: I am creating a service for the image that can run any Node application that is available in GitHub (as described here: )

    Deploy the Service (run a container instance)

    After the service has been created, it is available as the blueprint to run new containers from. This is done through a Deployment, which ties together a Service with some runtime settings around scaling, load balancing and the like:


    Set the deployment details for the new deployment of this service:


    After completing these details, press deploy to go ahead and run the new deployment; in this case it consists of a single instance (boring….) but it could have been more involved.


    The deployment is still starting.

    A little later (a few seconds) the container is running:


    Check some details:


    To check the deployment (status, logs, assigned IP), click on the container name:


    Anything written to the console inside the container is accessible from the Logs:



    To learn about the public IP address at which the application is exposed, we need to turn to the Hosts tab.

    Monitor Hosts


    Drill down on one specific host:


    and learn its public IP address, where we can access the application running in the deployed container.

    Test the deployment – check if the Node application is indeed available

    With the host's public IP address and the knowledge that port 8080 inside the container (remember the environment variable APP_PORT that was set to 8080 and passed to the generic Node application runner) is mapped to port 8005 externally, we can now invoke the application running inside the container deployed on the Container Cloud Service from our local browser.




    And there is the output of the application (I never said it would be spectacular…)




    After having gotten used to the sequence of actions:

    • configure registry (probably only once)
    • configure a service (for every container image plus specific setup of configuration parameters, including typical Docker container settings such as port mapping, volumes, environment variables)
    • define and run a deployment (from a service) with scaling factor and other deployment details
    • get hold of host public IP address to access the application in the container

    Oracle Container Cloud Service provides a very smooth experience that compares favorably with other container cloud services and management environments I have seen. From a developer's perspective at least, OCCS does a great job. It is a little too early to say much about the Ops side of things – how day-to-day operations with OCCS work out.

    The post Running any Node application on Oracle Container Cloud Service appeared first on AMIS Oracle and Java Blog.

    AWS – Build your own Oracle Linux 7 AMI in the Cloud

    Fri, 2017-07-14 14:37

    I always like to know what is installed in the servers that I need to use for database or WebLogic installs, whether it is in the Oracle Cloud or in any other cloud. One way to know is to build your own image that will be used to start your instances. My latest post was about building my own image for the Oracle Cloud (IAAS), but I could only get it to work with Linux 6. Whatever I tried with Linux 7, it wouldn't start in a way that allowed me to log on to it, and there was no way to see what was wrong. Not even when mounting the boot disk to another instance after a test boot. My trial ran out before I could get it to work and a new trial had other problems.

    Since we have an AWS account, I could try to do the same in AWS EC2 when I had some spare time. A few years back I had built Linux 6 AMIs via a process that felt a bit complicated, but it worked for a PV kernel. For Linux 7 I couldn't find any examples on the web on how to do that with enough detail to really get it working. But while I was studying for my Oracle VM 3.0 for x86 Certified Implementation Specialist exam, I realized what must have been the problem. Therefore, below are my notes on how to build my own Oracle Linux 7.3 AMI for EC2.

    General Steps:
    1. Create a new Machine in VirtualBox
    2. Install Oracle Linux 7.3 on it
    3. Configure it and install some extra packages
    4. Clean your soon to be AMI
    5. Export your VirtualBox machine as an OVA
    6. Create an S3 bucket and upload your OVA
    7. Use aws cli to import your image
    8. Start an instance from your new AMI, install the UEKR3 kernel.
    9. Create a new AMI from that instance in order to give it a sensible name

    The nitty gritty details: Ad 1) Create a new Machine in VirtualBox

    Create a new VirtualBox machine and start typing the name as "OL", which sets the type to Linux and the version to Oracle (64 bit). Pick a name you like; I chose OL73. I kept the memory as it was (1024M). Create a hard disk; 10GB dynamically allocated (VDI) worked for me. I disabled the audio as I had no use for it and made sure one network interface was available. I selected the NatNetwork type because that gives the VM access to the network and lets me access it via a forwarding rule on just one interface. You need to log on via the VirtualBox console first to get the IP address; then you can use another, preferred terminal to log in. I like PuTTY.

    Attach the DVD with the Linux you want to use, I like Oracle Linux (, and start the VM.

    Ad 2) Install Oracle Linux 7.3 on it

    When you get the installation screen do not choose "Install Oracle Linux 7.3" but use TAB to add " net.ifnames=0" to the boot parameters (note the extra space) and press enter.

    Choose the language you need, English (United States) with a us keyboard layout works for me. Go to the next screen.

    Before you edit “Date & Time” edit the network connection (which is needed for NTP).

    Notice that the interface has the name eth0 and is disconnected. Turn the eth0 on by flipping the switch

    And notice the IP address etc. get populated:

    Leave the host name as it is (localhost.localdomain) because your cloud provider will change anything you set here anyway, and press the configure button. Then choose the General tab to check “Automatically connect to this network when it is available”, keep the settings on the Ethernet tab as they are, the same for 802.1X Security tab, DCB tab idem. On the IPv4 Settings tab, leave “Method” on Automatic (DHCP) and check “Require IPv4 addressing for this connection to complete”. On the IPv6 Settings tab change “Method” to Ignore and press the “Save” button and then press “Done”.

    Next change the “Date & Time” settings to your preferred settings and make sure that “Network Time” is on and configured. Then press “Done”.

    Next you have to press “Installation Destination”

    Now if the details are in accordance with what you want press “Done”.

    Your choice here has impact on what you can expect from the “cloud-init” tools.

    For example: later on you can launch an instance with this soon to be AMI and start it with, let's say, a 20GiB disk instead of the 10GiB disk this image now has. The extra 10GiB can be used via a new partition and adding that to an LVM pool. That requires manual actions. But if you expect the cloud-init tools to resize your partition to make use of the extra 10GiB and extend the filesystem (at first launch), then you need to change a few things.

    Then press “Done” and you get guided through an other menu:

    Change LVM to “Standard Partition”

    And then create the mount points you need by pressing “+” or click the blue link:

    Now what you get are 3 partitions on your disk (/dev/sda). Notice that "/" is sda3 and is the last partition. When you choose this in your image, the cloud-init utils will resize that partition to use the extra 10GiB and extend the filesystem on it as well. It makes sense that only the last partition of your disk can be resized, which means that your swap size is fixed between these partitions and can only be increased on a different disk (or volume, as it is called in EC2) that you need to add to your instance when launching (or afterwards), leaving you with a gap of 1024MiB that is not very useful.

    You might know what kind of memory size instances you want to use this image for and create the necessary swap up front (and maybe increase the disk from 10GiB to a size that caters for the extra needed swap).

    I like LVM and choose to partition automatically and will use LVM utils to use the extra space by creating a third partition.

    The other options I kept default:

    And press “Begin Installation”. You then will see:

    Set the root password to something you will remember; later I will disable it via "cloud-init" and there is no need to create another user. Cloud-init will also take care of that.

    I ignored the message: and pressed Done again.

    Press the "Reboot" button when you are asked to and when restarting select the standard kernel (not UEK). This is needed for the Amazon VMImport tool. You have less than 5 seconds to prevent the default kernel (UEK) from booting.

    If you missed it just restart the VM.

    Ad 3) Configure it and install some extra packages

    Login with your preferred terminal program via NatNetwork (make sure you have a forwarding rule for the IP you wrote down for ssh)


    or use the VirtualBox console. If you forgot to write the IP down you can still find it via the VirtualBox console session:

    You might have noticed that my IP address changed. That is because I forgot to set the network in VirtualBox to NatNetwork when making the screenshots. As you can see the interface name is eth0 as expected. If you forgot to set the boot parameter above you need to do some extra work in the Console to make sure that eth0 is used.

    Check the grub settings:

    cat /etc/default/grub

    And look at: GRUB_CMDLINE_LINUX (check if net.ifnames=0 is in there), and look at: GRUB_TIMEOUT. You might want to change that from 5 seconds to give you a bit more time. The AWS VMImport tool will change it to 30 seconds.

    If you made some changes, you need to rebuild grub via:

    grub2-mkconfig -o /boot/grub2/grub.cfg

    Change the network interface settings:

    vi /etc/sysconfig/network-scripts/ifcfg-eth0
    Make it look like this:


    Change dracut.conf *** this is very important. In VirtualBox the XEN drivers do not get installed in the initramfs image and that will prevent your AMI from booting in AWS if it is not fixed ***

    vi /etc/dracut.conf

    adjust the following two lines:

    # additional kernel modules to the default


    # additional kernel modules to the default
    add_drivers+="xen-blkfront xen-netfront"

    Temporarily change default kernel:

    (AWS VMImport has issues when the UEK kernels are installed or even present)

    vi /etc/sysconfig/kernel





    Remove the UEK kernel:

    yum erase -y kernel-uek kernel-uek-firmware

    Check the saved_entry setting of grub:

    cat /boot/grub2/grubenv
    or: grubby --default-kernel

    If needed set it to the RHCK (RedHat Compatible Kernel) via:

    grub2-set-default <nr>

    Find the <nr> to use via:

    grubby --info=ALL

    Use the <nr> of index=<nr> where kernel=/xxxx lists the RHCK (not a UEK kernel).

    Rebuild initramfs to contain the xen drivers for all the installed kernels:

    rpm -qa kernel | sed 's/^kernel-//' | xargs -I {} dracut -f /boot/initramfs-{}.img {}

    Verify that the xen drivers are indeed available:

    rpm -qa kernel | sed 's/^kernel-//' | xargs -I {} lsinitrd -k {} | grep -i xen

    Yum repo adjustments:

    vi /etc/yum.repos.d/public-yum-ol7.repo

    Disable: ol7_UEKR4 and ol7_UEKR3.
    You don’t want to get those kernels back with a yum update just yet.
    Enable: ol7_optional_latest, ol7_addons

    Install deltarpm, system-storage-manager and wget:

    yum install -y deltarpm system-storage-manager wget

    (Only wget is really necessary to enable/download the EPEL repo. The others are useful)

    Change to a directory where you can store the rpm and install it. For example:

    cd ~
    rpm -Uvh epel-release-latest-7.noarch.rpm

    Install rlwrap (useful tool) and the necessary cloud tools:

    yum install -y rlwrap cloud-init cloud-utils-growpart

    Check your Firewall settings (SSH should be enabled!):

    firewall-cmd --get-default-zone
    firewall-cmd --zone=public --list-all

    You should see something like for your default-zone:
    interfaces: eth0
    services: dhcpv6-client ssh

    Change SELinux to permissive (might not be really needed, but I haven’t tested it without this):

    vi /etc/selinux/config
    change: SELINUX=enforcing
    to: SELINUX=permissive

    Edit cloud.cfg:

    vi /etc/cloud/cloud.cfg
    change: ssh_deletekeys:    0
    to: ssh_deletekeys:   1


    change: name: cloud-user
    to: name: ec2-user

    Now cloud.cfg should look like this: (between the =====)

    =====
    users:
     - default

    disable_root: 1
    ssh_pwauth:   0

    mount_default_fields: [~, ~, 'auto', 'defaults,nofail', '0', '2']
    resize_rootfs_tmp: /dev
    ssh_deletekeys:   1
    ssh_genkeytypes:  ~
    syslog_fix_perms: ~

    cloud_init_modules:
     - migrator
     - bootcmd
     - write-files
     - growpart
     - resizefs
     - set_hostname
     - update_hostname
     - update_etc_hosts
     - rsyslog
     - users-groups
     - ssh

    cloud_config_modules:
     - mounts
     - locale
     - set-passwords
     - yum-add-repo
     - package-update-upgrade-install
     - timezone
     - puppet
     - chef
     - salt-minion
     - mcollective
     - disable-ec2-metadata
     - runcmd

    cloud_final_modules:
     - rightscale_userdata
     - scripts-per-once
     - scripts-per-boot
     - scripts-per-instance
     - scripts-user
     - ssh-authkey-fingerprints
     - keys-to-console
     - phone-home
     - final-message

    system_info:
      default_user:
        name: ec2-user
        lock_passwd: true
        gecos: Oracle Linux Cloud User
        groups: [wheel, adm, systemd-journal]
        sudo: ["ALL=(ALL) NOPASSWD:ALL"]
        shell: /bin/bash
      distro: rhel
      paths:
        cloud_dir: /var/lib/cloud
        templates_dir: /etc/cloud/templates
      ssh_svcname: sshd

    # vim:syntax=yaml
    =====

    With this cloud.cfg you will get new ssh keys for the server when you deploy a new instance and a user "ec2-user" that has passwordless sudo rights to root. Direct ssh access for root is disabled, as is password authentication for ssh.

    **** Remember: when you reboot now, cloud-init will kick in and only console access to root will be available. Ssh to root is disabled ****
    **** because you do not have an http server running that serves ssh keys for the new ec2-user that cloud-init can use. ****
    **** It might be prudent to validate that your cloud.cfg is a valid YAML file. ****
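    A quick way to do that (a minimal sketch; it assumes the python interpreter and the PyYAML module that cloud-init depends on are installed) is:

    python -c 'import yaml; yaml.safe_load(open("/etc/cloud/cloud.cfg"))' && echo "cloud.cfg is valid YAML"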

    Check for the latest packages and update:

    yum check-update
    yum update -y

    Ad 4) Clean your soon-to-be AMI

    You might want to clean the VirtualBox machine of logfiles and executed commands etc:

    rm -rf  /var/lib/cloud/
    rm -rf /var/log/cloud-init.log
    rm -rf /var/log/cloud-init-output.log

    yum -y clean packages
    rm -rf /var/cache/yum
    rm -rf /var/lib/yum

    rm -rf /var/log/messages
    rm -rf /var/log/boot.log
    rm -rf /var/log/dmesg
    rm -rf /var/log/dmesg.old
    rm -rf /var/log/lastlog
    rm -rf /var/log/yum.log
    rm -rf /var/log/wtmp

    find / -name .bash_history -exec rm -rf {} +
    find / -name .Xauthority -exec rm -rf {} +
    find / -name authorized_keys -exec rm -rf {} +

    history -c
    shutdown -h now

    Ad 5) Export your VirtualBox machine as an OVA

    In VirtualBox Manager choose File > Export Appliance and select the virtual machine you just shut down.

    If needed, change the location of the OVA file to be created.
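    If you prefer the command line over the GUI, VBoxManage can produce the same OVA (the VM name and output path below are assumptions, adjust them to your machine):

    VBoxManage export "OL73" -o /file_path/OL73.ova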


    Ad 6) Create an S3 bucket and upload your OVA

    Log in to your AWS console, choose the region where you want your AMI to be created and create a bucket there (or re-use one that you already have):

    (I used the region eu-west-1)

    Set the properties you want; I kept the default properties and permissions.

    Then finish the wizard to create the bucket.
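    If you prefer the aws cli (configured in the next step), creating the bucket can also be done like this:

    aws s3 mb s3://amis-share --region eu-west-1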


    Ad 7) Use aws cli to import your image

    Before you can import the OVA file you need to put it in the created bucket. You can upload it via the browser or use “aws cli” to do that. I prefer the aws cli because that always works and the browser upload gave me problems.

    How to install the command line interface is described here:

    On an Oracle linux 7 machine it comes down to:

    yum install -y python34.x86_64 python34-pip.noarch
    pip3 install --upgrade pip
    pip install --upgrade awscli
    aws --version

    Then it is necessary to configure it, which is basically:

    aws configure

    And answer the questions by supplying your credentials and your preferences. The credentials below are fake (they are the example values from the AWS documentation):

    AWS Access Key ID [None]: AKIAIOSFODNN7EXAMPLE
    AWS Secret Access Key [None]: wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
    Default region name [None]: eu-west-1
    Default output format [None]: json

    The answers will be saved in two files: ~/.aws/credentials and ~/.aws/config.


    To test the access try to do a listing of your bucket:

    aws s3 ls s3://amis-share

    To upload the generated OVA file is then as simple as:

    aws s3 cp /file_path/OL73.ova s3://amis-share

    The time it takes depends on your upload speed.

    Create the necessary IAM role and policy:

    Create a trust-policy.json file (the content below follows the trust policy from the AWS VM Import documentation):

    vi trust-policy.json
    {
       "Version": "2012-10-17",
       "Statement": [
          {
             "Effect": "Allow",
             "Principal": { "Service": "vmie.amazonaws.com" },
             "Action": "sts:AssumeRole",
             "Condition": {
                "StringEquals": {
                   "sts:Externalid": "vmimport"
                }
             }
          }
       ]
    }
    Create the IAM role:

    aws iam create-role --role-name vmimport --assume-role-policy-document file:///home/ec2-user/trust-policy.json

    Create the role-policy.json file (the content below follows the role policy from the AWS VM Import documentation, with the amis-share bucket filled in).

    Change the file to use your S3 bucket (amis-share/*).

    vi role-policy.json
    {
       "Version": "2012-10-17",
       "Statement": [
          {
             "Effect": "Allow",
             "Action": [
                "s3:ListBucket",
                "s3:GetBucketLocation"
             ],
             "Resource": [
                "arn:aws:s3:::amis-share"
             ]
          },
          {
             "Effect": "Allow",
             "Action": [
                "s3:GetObject"
             ],
             "Resource": [
                "arn:aws:s3:::amis-share/*"
             ]
          },
          {
             "Effect": "Allow",
             "Action": [
                "ec2:ModifySnapshotAttribute",
                "ec2:CopySnapshot",
                "ec2:RegisterImage",
                "ec2:Describe*"
             ],
             "Resource": "*"
          }
       ]
    }

    aws iam put-role-policy --role-name vmimport --policy-name vmimport --policy-document file:///home/ec2-user/role-policy.json

    Now you should be able to import the OVA.

    Prepare a json file with the following contents (adjust to your own situation):

    cat imp_img.json
    {
        "DryRun": false,
        "Description": "OL73 OVA",
        "DiskContainers": [
            {
                "Description": "OL73 OVA",
                "Format": "ova",
                "UserBucket": {
                    "S3Bucket": "amis-share",
                    "S3Key": "OL73.ova"
                }
            }
        ],
        "LicenseType": "BYOL",
        "Hypervisor": "xen",
        "Architecture": "x86_64",
        "Platform": "Linux",
        "ClientData": {
            "Comment": "OL73"
        }
    }

    Then start the actual import job:

    aws ec2 import-image --cli-input-json file:///home/ec2-user/imp_img.json

    The command returns the name of the import job, which you can then use to get the progress:
    aws ec2 describe-import-image-tasks --import-task-ids import-ami-fgotr2g7

    Or in a loop:

    while true; do sleep 60; date; aws ec2 describe-import-image-tasks --import-task-ids import-ami-fgotr2g7; done

    Depending on the size of your OVA it takes some time to complete. An example output is:

    {
        "ImportImageTasks": [
            {
                "StatusMessage": "converting",
                "Status": "active",
                "LicenseType": "BYOL",
                "SnapshotDetails": [
                    {
                        "DiskImageSize": 1470183936.0,
                        "Format": "VMDK",
                        "UserBucket": {
                            "S3Bucket": "amis-share",
                            "S3Key": "OL73.ova"
                        }
                    }
                ],
                "Platform": "Linux",
                "ImportTaskId": "import-ami-fgotr2g7",
                "Architecture": "x86_64",
                "Progress": "28",
                "Description": "OL73 OVA"
            }
        ]
    }

    Example of an error:

    {
        "ImportImageTasks": [
            {
                "SnapshotDetails": [
                    {
                        "DiskImageSize": 1357146112.0,
                        "UserBucket": {
                            "S3Key": "OL73.ova",
                            "S3Bucket": "amis-share"
                        },
                        "Format": "VMDK"
                    }
                ],
                "StatusMessage": "ClientError: Unsupported kernel version 3.8.13-118.18.4.el7uek.x86_64",
                "ImportTaskId": "import-ami-fflnx4fv",
                "Status": "deleting",
                "LicenseType": "BYOL",
                "Description": "OL73 OVA"
            }
        ]
    }

    Once the import is successful you can find your AMI in your EC2 Console:

    Unfortunately, no matter what Description or Comment you supply in the json file, the AMI is only recognizable via the name of the import job: import-ami-fgotr2g7. As I want to use the UEK kernel, I need to start an instance from this AMI and use that as a new AMI. Via that process (step 9) I can supply a better name. Make a note of the snapshots and volumes that have been created by this import job. You might want to remove those later to prevent storage costs for something you don't need anymore.


    Ad 8) Start an instance from your new AMI, install the UEKR3 kernel

    I want an AMI to run Oracle software and want the UEK kernel that has support. UEKR4 wasn’t supported for some of the software I recently worked with, thus that left me with the UEKR3 kernel.

    Login to your new instance as the ec2-user with your preferred ssh tool and use sudo to become root:

    sudo su -

    Enable the UEKR3 yum repo:

    vi /etc/yum.repos.d/public-yum-ol7.repo
    In the ol7_UEKR3 section, change enabled=0 to enabled=1.

    Change the default kernel back to UEK (set DEFAULTKERNEL back to kernel-uek):

    vi /etc/sysconfig/kernel

    Update the kernel:

    yum check-update
    yum install kernel-uek.x86_64

    Notice the changes in GRUB_CMDLINE_LINUX that were made by the import process:

    cat /etc/default/grub

    For example:

    GRUB_CMDLINE_LINUX="crashkernel=auto rhgb quiet net.ifnames=0 console=ttyS0"

    To verify which kernel will be booted next time you can use:

    cat /boot/grub2/grubenv
    grubby --default-kernel
    grubby --default-index
    grubby --info=ALL

    Clean the instance again and shut it down in order to create an new AMI:

    rm -rf  /var/lib/cloud/
    rm -rf /var/log/cloud-init.log
    rm -rf /var/log/cloud-init-output.log

    yum -y clean packages
    rm -rf /var/cache/yum
    rm -rf /var/lib/yum

    rm -rf /var/log/messages
    rm -rf /var/log/boot.log
    rm -rf /var/log/dmesg
    rm -rf /var/log/dmesg.old
    rm -rf /var/log/lastlog
    rm -rf /var/log/yum.log
    rm -rf /var/log/wtmp

    find / -name .bash_history -exec rm -rf {} +
    find / -name .Xauthority -exec rm -rf {} +
    find / -name authorized_keys -exec rm -rf {} +

    history -c
    shutdown -h now

    Ad 9) Create a new AMI from that instance in order to give it a sensible name

    Use the instance id of the instance that you just shut down (i-050357e3ecce863e2) to create a new AMI.

    To generate a skeleton json file:

    aws ec2 create-image --instance-id i-050357e3ecce863e2 --generate-cli-skeleton

    Edit the file to your needs or liking:

    vi cr_img.json
    {
        "DryRun": false,
        "InstanceId": "i-050357e3ecce863e2",
        "Name": "OL73 UEKR3 LVM",
        "Description": "OL73 UEKR3 LVM 10GB disk with swap and root on LVM thus expandable",
        "NoReboot": true
    }

    And create the AMI:

    aws ec2 create-image --cli-input-json file:///home/ec2-user/cr_img.json
    {
        "ImageId": "ami-27637b41"
    }

    It takes a few minutes for the AMI to become visible in the AWS EC2 web console.

    Don't forget to (the corresponding cli commands are sketched after this list):

    • Deregister the AMI generated by VMImport
    • Delete the corresponding snapshot
    • Terminate the instance you used to create the new AMI
    • Delete the volumes of that instance (if they are not deleted on termination) (expand the info box you see in AWS when you terminate the instance to see which volume it is. E.g.: The following volumes are not set to delete on termination: vol-0150ca9702ea0fa00)
    • Remove the OVA from your S3 bucket if you don't need it for something else.
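    A minimal sketch of those cleanup steps with the aws cli (the image and snapshot ids are placeholders, the other ids are the ones used in this post):

    aws ec2 deregister-image --image-id <ami-id-created-by-vmimport>
    aws ec2 delete-snapshot --snapshot-id <snapshot-id-created-by-vmimport>
    aws ec2 terminate-instances --instance-ids i-050357e3ecce863e2
    aws ec2 delete-volume --volume-id vol-0150ca9702ea0fa00
    aws s3 rm s3://amis-share/OL73.ova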

    Launch an instance of your new AMI and start to use it.

    Useful documentation:

    The post AWS – Build your own Oracle Linux 7 AMI in the Cloud appeared first on AMIS Oracle and Java Blog.

    Create a 12c physical standby database on ODA X5-2

    Thu, 2017-07-06 07:06

    ODA X5-2 simplifies and speeds up the creation of a 12c database quite considerably with oakcli. You can take advantage of this command in the creation of physical standby databases as well, as I discovered when I had to set up Data Guard on as many as 5 production and 5 acceptance databases within a very short time.

    I used the "oakcli create database …" command to create both primary and standby databases really fast and went on from there to set up a Data Guard Broker configuration in max availability mode. Where you would normally duplicate a primary database onto a skeleton standby database, one without any data or redo files that starts up with a pfile, working with 2 fully configured databases is a bit different. You do not have to change the db_unique_name after the RMAN duplicate, which proved to be quite an advantage, and the duplicate itself doesn't have to address any spfile adaptations because the spfile is already there. But you may get stuck with some obsolete data and redo files of the original standby database that can fill up the filesystem. As long as you remove these files in time, just before the RMAN duplicate, this isn't much of an issue.

    What I did to create 12c primary database ABCPRD1 on one ODA and physical standby database ABCPRD2 on a second ODA follows from here. Nodes on oda1 are oda10 and oda11, nodes on oda2 are oda20 and oda21. The nodes I will use are oda10 and oda20.

    -1- Create parameterfile on oda10 and oda20
    oakcli create db_config_params -conf abcconf
    -- parameters:
    -- Database Block Size  : 8192
    -- Database Language    : AMERICAN
    -- Database Characterset: WE8MSWIN1252
    -- Database Territory   : AMERICA
    -- Component Language   : English
    -- NLS Characterset     : AL16UTF16
    file is saved as: /opt/oracle/oak/install/dbconf/abcconf.dbconf
    -2- Create database ABCPRD1 on oda10 and ABCPRD2 on oda20
    oda10 > oakcli create database -db ABCPRD1 -oh OraDb12102_home1 -params abcconf
    oda20 > oakcli create database -db ABCPRD2 -oh OraDb12102_home1 -params abcconf
    -- Root  password: ***
    -- Oracle  password: ***
    -- SYSASM  password (during deployment the SYSASM password is set to 'welcome1'): ***
    -- Database type: OLTP
    -- Database Deployment: EE - Enterprise Edition
    -- Please select one of the following for Node Number >> 1
    -- Keep the data files on FLASH storage: N
    -- Database Class: odb-02  (2 cores,16 GB memory)
    -3- Setup db_name ABCPRD for both databases... this is a prerequisite for Dataguard
    oda10 > sqlplus / as sysdba
    oda10 > shutdown immediate;
    oda10 > startup mount
    oda10 > ! nid TARGET=sys/*** DBNAME=ABCPRD SETNAME=YES
    oda10 > Change database name of database ABCPRD1 to ABCPRD? (Y/[N]) => Y
    oda10 > exit
    oda20 > sqlplus / as sysdba
    oda20 > shutdown immediate;
    oda20 > startup mount
    oda20 > ! nid TARGET=sys/*** DBNAME=ABCPRD SETNAME=YES
    oda20 > Change database name of database ABCPRD2 to ABCPRD? (Y/[N]) => Y
    oda20 > exit
    -4- Set db_name of both databases in their respective spfile as well as ODA cluster,
        and reset the db_unique_name after startup back from ABCPRD to ABCPRD1|ABCPRD2
    oda10 > sqlplus / as sysdba    
    oda10 > startup mount
    oda10 > alter system set db_name=ABCPRD scope=spfile;
    oda10 > alter system set service_names=ABCPRD1 scope=spfile;
    oda10 > ! srvctl modify database -d ABCPRD1 -n ABCPRD
    oda10 > shutdown immediate
    oda10 > startup
    oda10 > alter system set db_unique_name=ABCPRD1 scope=spfile;
    oda10 > shutdown immediate;
    oda10 > exit
    oda20 > sqlplus / as sysdba    
    oda20 > startup mount
    oda20 > alter system set db_name=ABCPRD scope=spfile;
    oda20 > alter system set service_names=ABCPRD2 scope=spfile;
    oda20 > ! srvctl modify database -d ABCPRD2 -n ABCPRD
    oda20 > shutdown immediate
    oda20 > startup
    oda20 > alter system set db_unique_name=ABCPRD2 scope=spfile;
    oda20 > shutdown immediate;
    oda20 > exit
    -5- Startup both databases from the cluster.
    oda10 > srvctl start database -d ABCPRD1
    oda20 > srvctl start database -d ABCPRD2

    Currently, 2 identically configured databases are active with the same db_name, which is a first condition for the following configuration of Data Guard Broker. By just matching the db_name between databases and keeping the db_unique_name as it was, ASM database and diagnostic directory names remain as they are.

    Also, the spfile entry in the cluster continues to point to the correct directory and file, as well as the init.ora in $ORACLE_HOME/dbs. Because the standby started with an existing and correctly configured spfile you no longer need to retrieve it from the primary. It simplifies and reduces the RMAN duplicate code to just a one line command, apart from login and channel allocation.

    -6- Add Net Service Names for ABCPRD1 and ABCPRD2 to your tnsnames.ora on oda10 and oda20, for example:
    ABCPRD1 =
      (DESCRIPTION =
        (ADDRESS_LIST =
          (ADDRESS = (PROTOCOL = TCP)(HOST = oda10)(PORT = 1521))
        )
        (CONNECT_DATA =
          (SERVICE_NAME = ABCPRD1)
        )
      )
    ABCPRD2 =
      (DESCRIPTION =
        (ADDRESS_LIST =
          (ADDRESS = (PROTOCOL = TCP)(HOST = oda20)(PORT = 1521))
        )
        (CONNECT_DATA =
          (SERVICE_NAME = ABCPRD2)
        )
      )
    -7- Add as a static service to listener.ora on oda10 and oda20
    oda10 > SID_LIST_LISTENER =
    oda10 >   (SID_LIST =
    oda10 >     (SID_DESC =
    oda10 >       (GLOBAL_DBNAME = ABCPRD1_DGB)
    oda10 >       (ORACLE_HOME = /u01/app/oracle/product/
    oda10 >       (SID_NAME = ABCPRD1)
    oda10 >     ) 
    oda10 >   )        
    oda20 > SID_LIST_LISTENER =
    oda20 >   (SID_LIST =
    oda20 >     (SID_DESC =
    oda20 >       (GLOBAL_DBNAME = ABCPRD2_DGB)
    oda20 >       (ORACLE_HOME = /u01/app/oracle/product/
    oda20 >       (SID_NAME = ABCPRD2)
    oda20 >     ) 
    oda20 >   )
    -8- Restart listener from cluster on oda10 and oda20
    oda10 > srvctl stop listener
    oda10 > srvctl start listener
    oda20 > srvctl stop listener
    oda20 > srvctl start listener
    -9- Create 4 standby logfiles on oda10 only (1 more than nr. of redologgroups and each with just 1 member)
        The RMAN duplicate takes care of the standby logfiles on oda20, so don't create them there now
    oda10 > alter database add standby logfile thread 1 group 4 size 4096M;
    oda10 > alter database add standby logfile thread 1 group 5 size 4096M;
    oda10 > alter database add standby logfile thread 1 group 6 size 4096M;
    oda10 > alter database add standby logfile thread 1 group 7 size 4096M;
    oda10 > exit
    -10- Start RMAN duplicate from oda20
    oda20 > srvctl stop database -d ABCPRD2
    oda20 > srvctl start database -d ABCPRD2 -o nomount
    oda20 > *****************************************************************************
    oda20 > ********* !!! REMOVE EXISTING DATA AND REDO FILES OF ABCPRD2 NOW !!! *********
    oda20 > *****************************************************************************
    oda20 > rman target sys/***@ABCPRD1 auxiliary sys/***@ABCPRD2
    oda20 > .... RMAN> 
    oda20 > run {
    oda20 > allocate channel d1 type disk;
    oda20 > allocate channel d2 type disk;
    oda20 > allocate channel d3 type disk;
    oda20 > allocate auxiliary channel stby1 type disk;
    oda20 > allocate auxiliary channel stby2 type disk;
    oda20 > duplicate target database for standby nofilenamecheck from active database;
    oda20 > }
    oda20 > exit

    And there you are… primary database ABCPRD1 in open read-write mode and standby database ABCPRD2 in mount mode. The only things left to do now are the Data Guard Broker setup, and activating flashback and force_logging on both databases.

    -11- Setup broker files in shared storage (ASM) and start brokers on oda10 and oda20
    oda10 > sqlplus / as sysdba
    oda10 > alter system set dg_broker_config_file1='/u02/app/oracle/oradata/datastore/.ACFS/snaps/ABCPRD1/ABCPRD1/dr1ABCPRD1.dat' scope=both; 
    oda10 > alter system set dg_broker_config_file2='/u02/app/oracle/oradata/datastore/.ACFS/snaps/ABCPRD1/ABCPRD1/dr2ABCPRD1.dat' scope=both;
    oda10 > alter system set dg_broker_start=true scope=both;
    oda10 > exit
    oda20 > sqlplus / as sysdba
    oda20 > alter system set dg_broker_config_file1='/u02/app/oracle/oradata/datastore/.ACFS/snaps/ABCPRD2/ABCPRD1/dr1ABCPRD2.dat' scope=both; 
    oda20 > alter system set dg_broker_config_file2='/u02/app/oracle/oradata/datastore/.ACFS/snaps/ABCPRD2/ABCPRD1/dr2ABCPRD2.dat' scope=both;
    oda20 > alter system set dg_broker_start=true scope=both;
    oda20 > exit
    -12- Create broker configuration from oda10
    oda10 > dgmgrl sys/***
    oda10 > create configuration abcprd as primary database is abcprd1 connect identifier is abcprd1_dgb;
    oda10 > edit database abcprd1 set property StaticConnectIdentifier='(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=oda10)(PORT=1521))(CONNECT_DATA=(SERVICE_NAME=ABCPRD1_DGB)(INSTANCE_NAME=ABCPRD1)(SERVER=DEDICATED)))';
    oda10 > add database abcprd2 as connect identifier is abcprd2_dgb maintained as physical;
    oda10 > edit database abcprd2 set property StaticConnectIdentifier='(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=oda20)(PORT=1521))(CONNECT_DATA=(SERVICE_NAME=ABCPRD2_DGB)(INSTANCE_NAME=ABCPRD2)(SERVER=DEDICATED)))';
    oda10 > enable configuration;
    oda10 > edit database abcprd2 set state=APPLY-OFF;
    oda10 > exit
    -13- Enable flashback and force logging on both primary and standby database
    oda10 > sqlplus / as sysdba
    oda10 > alter database force logging;
    oda10 > alter database flashback on;
    oda10 > exit
    oda20 > sqlplus / as sysdba
    oda20 > alter database force logging;
    oda20 > alter database flashback on;
    oda20 > exit
    oda20 > srvctl stop database -d abcprd2
    oda20 > srvctl start database -d abcprd2 -o mount
    oda10 > srvctl stop database -d abcprd1
    oda10 > srvctl start database -d abcprd1
    -14- Configure max availability mode from oda10
    oda10 > dgmgrl sys/*** 
    oda10 > edit database abcprd2 set state=APPLY-ON;
    oda10 > edit database abcprd1 set property redoroutes='(LOCAL : abcprd2 SYNC)';
    oda10 > edit database abcprd2 set property redoroutes='(LOCAL : abcprd1 SYNC)';
    oda10 > edit configuration set protection mode as maxavailability;
    oda10 > show database abcprd1 InconsistentProperties;
    oda10 > show database abcprd2 InconsistentProperties;
    oda10 > show configuration
    oda10 > validate database abcprd2;
    oda10 > exit

    You should now have a valid 12c Max Availability Data Guard configuration, but you had better test it thoroughly with
    some switchovers and a failover before taking it into production. Have fun!
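    A basic test from DGMGRL could look like this (a sketch, run from one of the nodes; replace *** with your sys password):

    oda10 > dgmgrl sys/***
    oda10 > show configuration;
    oda10 > validate database abcprd2;
    oda10 > switchover to abcprd2;
    oda10 > show configuration;
    oda10 > switchover to abcprd1;
    oda10 > exit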

    The post Create a 12c physical standby database on ODA X5-2 appeared first on AMIS Oracle and Java Blog.

    Virtualization on the Oracle Database Appliance S, M, L

    Wed, 2017-07-05 15:44

    One of the great advantages of the Oracle Database Appliance HA is the possibility of virtualization through Oracle VM. This virtualization wasn't possible for the other members of the Oracle Database Appliance family. Until now.

    In the patch which has been released recently for the ODA S, M and L, virtualization is possible… through KVM. Is this a shocking change? No, KVM has been part of Linux for more than 10 years now. Desirable? Yes, I think so, and worthwhile to give it a bit of attention in this blog post.

    You can read a very, very short announcement in the documentation of the Oracle Database Appliance.

    Oracle has promised more information (including step-by-step guide) will be released very soon.

    When installing the patch, Oracle Linux KVM will be installed, and there's no need for re-imaging your system as with the Oracle Database Appliance HA. With KVM it's possible to run applications on the ODA S, M and L, and in that way isolate the databases from the applications in terms of life cycle management.

    In my opinion this could be a great solution for some customers for consolidating their software, and for ISVs for creating a solution in a box.


    But… (there's always a but), as I understand it (I haven't tested it yet) there are a few limitations:

    – You may only use the Linux O.S. on the guest VM

    – There’s no support for installing an Oracle database on the guest VM

    – Related to that, there's no capacity-on-demand for databases or applications in the guest VM


    So the usability of this new feature may seem limited for now, but testing and using the feature has just begun!

    The next big release will be in Feb/March 2018:

    • Databases in the VM’s
    • Each database will be running in its own VM
    • VM hard-partitioning support for licensing
    • Windows support

    I’m very curious how Oracle will handle the standardization in the Oracle Database Appliance family in the future:

    – ODACLI versus OAKCLI

    – OracleVM versus KVM

    – Web console user interface vs command-line

    Will they merge, and if they do, in what direction? Or will a new rising technology take the lead?





    Oracle Database Appliance Documentation:

    The post Virtualization on the Oracle Database Appliance S, M, L appeared first on AMIS Oracle and Java Blog.

    SSL/TLS: How to choose your cipher suite

    Tue, 2017-07-04 11:00

    For SSL/TLS connections, cipher suites determine for a major part how secure the connection will be. A cipher suite is a named combination of authentication, encryption, message authentication code (MAC) and key exchange algorithms used to negotiate the security settings (here). But what does this mean and how do you choose a secure cipher suite? The area of TLS is quite extensive and I cannot cover it in its entirety in a single blog post but I will provide some general recommendations based on several articles researched online. At the end of the post I’ll provide some suggestions for strong ciphers for JDK8.


    First I’ll introduce what a cipher suite is and how it is agreed upon by client / server. Next I’ll explain several of the considerations which can be relevant while making a choice of cipher suites to use.

    What does the name of a cipher suite mean?

    The names of the cipher suites can be a bit confusing. You see for example a cipher suite called: TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA384 in the SunJSSE list of supported cipher suites. You can break this name into several parts:

    • TLS: transport layer security (duh..)
    • ECDHE: The key exchange algorithm is ECDHE (Elliptic curve Diffie–Hellman, ephemeral).
    • ECDSA: The authentication algorithm is ECDSA (Elliptic Curve Digital Signature Algorithm). The certificate authority uses an ECDH key to sign the public key. This is what for example Bitcoin uses.
    • WITH_AES_256_CBC: This is used to encrypt the message stream. (AES=Advanced Encryption Standard, CBC=Cipher Block Chaining). The number 256 indicates the key size.
    • SHA_384: This is the so-called message authentication code (MAC) algorithm. SHA = Secure Hash Algorithm. It is used to create a message digest or hash of a block of the message stream. This can be used to validate if message contents have been altered. The number indicates the size of the hash. Larger is more secure.

    If the key exchange algorithm or the authentication algorithm is not explicitly specified, RSA is assumed. See for example here for a useful explanation of cipher suite naming.

    What are your options

    First it is a good idea to look at what your options are. This is dependent on the (client and server) technology used. If for example you are using Java 8, you can look here (SunJSSE) for supported cipher suites. If you want to enable the strongest ciphers available to JDK 8 you need to install the Java Cryptography Extension (JCE) Unlimited Strength Jurisdiction Policy Files (here). You can find a large list of cipher suites and which version of the JDK supports them (up to Java 8 in case of the Java 8 documentation). Node.js uses OpenSSL for cipher suite support. This library supports a large array of cipher suites. See here.

    How determining a cipher suite works

    On the server, cipher suites are listed in preference order. How does that work? During the handshake phase of establishing a TLS/SSL connection, the client sends the cipher suites it supports to the server. The server chooses the cipher to use based on its preference order and what the client supports.

    This works quite efficiently, but a problem can arise when

    • There is no overlap in ciphers the client and server can speak
    • The only overlap between client and server supported cipher is a cipher which provides poor or no encryption

    This is illustrated in the image below. The language represents the cipher suite. The order/preference specifies the encryption strength. In the first illustration, client and server can both speak English so the server chooses English. In the second image, the only overlapping language is French. French might not be ideal to speak but the server has no other choice in this case but to accept speaking French or to refuse talking to the client.

    Thus it is good practice to let the server only select specific ciphers which conform to your security requirements, while of course taking client compatibility into account.

    How to choose a cipher suite

    Basics

    Check which cipher suites are supported

    There are various mechanisms to check which ciphers are supported. For cloud services or websites you can use SSLLabs. For internal server checking, you can use various scripts available online such as this one or this one.
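    For a quick check from the command line, openssl and nmap can also be used directly (the hostname below is a placeholder):

    # list the TLSv1.2 cipher suites your local OpenSSL build knows about
    openssl ciphers -v 'TLSv1.2'

    # probe which suites a server actually accepts, including a strength rating
    nmap --script ssl-enum-ciphers -p 443 www.example.com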

    TLS 1.2

    Of course you only want TLS 1.2 cipher suites since older TLS and SSL versions contain security liabilities. Within TLS 1.2 there is a lot to choose from. OWASP provides a good overview of which ciphers to choose here (‘Rule – Only Support Strong Cryptographic Ciphers’). Wikipedia provides a nice overview of (among other things) TLS 1.2 benefits such as GCM (Galois/Counter Mode) support which provides integrity checking.

    Disable weak ciphers

    As indicated before, if weak ciphers are enabled, they might be used, making you vulnerable. You should disable weak ciphers like those with DSS, DSA, DES/3DES, RC4, MD5, SHA1, null, anon in the name. See for example here and here. For example, do not use DSA/DSS: they get very weak if a bad entropy source is used during signing (here). For the other weak ciphers, similar liabilities can be looked up.

    How to determine the key exchange algorithm

    Types

    There are several types of keys you can use. For example:

    • ECDHE: Use elliptic curve diffie-hellman (DH) key exchange (ephemeral). One key is used for every exchange. This key is generated for every request and does not provide authentication like ECDH which uses static keys.
    • RSA: Use RSA key exchange. Generating DH symmetric keys is faster than RSA symmetric keys. DH also currently seems more popular. DH and RSA keys solve different challenges. See here.
    • ECDH: Use elliptic curve diffie-hellman key exchange. One key is for the entire SSL session. The static key can be used for authentication.
    • DHE: Use normal diffie-hellman key. One key is used for every exchange. Same as ECDHE but a different algorithm is used for the calculation of shared secrets.

    There are other key algorithms but the above ones are most popular. A single server can host multiple certificates such as ECDSA and RSA certificates. Wikipedia is an example. This is not supported by all web servers. See here.

    Forward secrecy

    Forward secrecy means that if a private key is compromised, past messages which were sent cannot also be decrypted. Read here. Thus it is beneficial for your security to have perfect forward secrecy (PFS).

    The difference between ECDHE/DHE and ECDH is that for ECDH one key is used for the duration of the SSL session (which can be used for authentication), while with ECDHE/DHE a distinct key is used for every exchange. Since this key is not a certificate/public key, no authentication can be performed. An attacker could use their own key (here). Thus when using ECDHE/DHE, you should also implement client key validation on your server (2-way SSL) to provide authentication.

    ECDHE and DHE give forward secrecy while ECDH does not. See here. ECDHE is significantly faster than DHE (here). There are rumors that the NSA can break DHE keys and ECDHE keys are preferred (here). On other sites it is indicated DHE is more secure (here). The calculation used for the keys is also different. DHE is prime field Diffie Hellman. ECDHE is Elliptic Curve Diffie Hellman. ECDHE can be configured. ECDHE-ciphers must not support weak curves, e.g. less than 256 bits (see here).

    Certificate authority

    The certificate authority you use to get a certificate from to sign the key can have limitations. For example, RSA certificates are very common while ECDSA is gaining popularity. If you use an internal certificate authority, you might want to check it is able to generate ECDSA certificates and use them for signing. For compatibility, RSA is to be preferred.

    How to determine the message encryption mechanism

    As a rule of thumb: AES_256 or above is quite common and considered secure. 3DES, EDE and RC4 should be avoided.

    The difference between CBC and GCM

    GCM provides both encryption and integrity checking (using a nonce for hashing) while CBC only provides encryption (here). You can not use the same nonce for the same key to encrypt twice when using GCM. This protects against replay attacks. GCM is supported from TLS 1.2.

    How to choose your hashing algorithm

    MD5 (here) and SHA-1 (here) are old and should not be used anymore. As a rule of thumb, SHA256 or above can be considered secure.

    Finally

    Considerations

    Choosing a cipher suite can be a challenge. Several considerations play a role in making the correct choice here. Just to name a few:

    • Capabilities of server, client and certificate authority (required compatibility); you would choose a different cipher suite for an externally exposed website (which needs to be compatible with all major clients) than for internal security
    • Encryption/decryption performance
    • Cryptographic strength; type and length of keys and hashes
    • Required encryption features; such as prevention of replay attacks, forward secrecy
    • Complexity of implementation; can developers and testers easily develop servers and clients supporting the cipher suite?

    Sometimes even legislation plays a role since some of the stronger encryption algorithms are not allowed to be used in certain countries (we will not guess for the reason but you can imagine).


    Based on the above I can recommend some strong cipher suites to be used for JDK8 in preference order:


    My personal preference would be to use TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384 as it provides

    • Integrity checking: GCM
    • Perfect forward secrecy: ECDHE
    • Uses strong encryption: AES_256
    • Uses a strong hashing algorithm: SHA384
    • It uses a key signed with an RSA certificate authority which is supported by most internal certificate authorities.

    Since ECDHE does not provide authentication, you should tell the server to verify client certificates (implement 2-way SSL).
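    To verify that a server actually negotiates this suite, a quick sanity check with openssl s_client can be done like this (host and port are placeholders; ECDHE-RSA-AES256-GCM-SHA384 is the OpenSSL name of this suite):

    openssl s_client -connect www.example.com:443 -tls1_2 -cipher 'ECDHE-RSA-AES256-GCM-SHA384' </dev/null 2>/dev/null | grep -E 'Protocol|Cipher'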

    The post SSL/TLS: How to choose your cipher suite appeared first on AMIS Oracle and Java Blog.

    How to start with Amazon cloud server

    Mon, 2017-07-03 08:56

    Just created an Amazon account and willing to create a first server? Use the interactive guide (Launch Instance button) to create your own Oracle server within 5 minutes. Here are some practical notes for creating a new instance.

    After logging in to AWS, select a region nearby; this matters for the speed of network traffic. At the upper right of the webpage you can select a region. When you want to develop, the US East region is the region you have to select. Creating an Oracle environment may be done in a region of your choice. With Amazon it is also good to know that pricing per server differs between regions. Before you start you can check on the following link where you get the best price for your environment (on demand).

    • For a first Oracle environment you had better choose an existing Amazon Machine Image (AMI). To create a new instance, press the Launch Instance button on the dashboard. Within several steps you will be guided through creating a new instance. For this trial we show you how to create an Oracle environment for test usage.
    • For our first server I use a predefined image created by a colleague; there are several predefined Machine Images available. On the first tab we choose a Linux image. When creating a server for an Oracle database, it is also possible to start with RDS (Oracle database). On the second tab we can select an instance type. It depends on the software you want to install on the instance; for Oracle middleware applications such as Database or WebLogic, an instance with 2 cores and 4 or 8 GB of memory is eligible. Below is an explanation of the codes used by Amazon:

    T<number> generic usage for development and test environments

    M<number> generic usage for production environements

    C<number> CPU intensive usage

    G<number> Graphical usage such as video streaming

    R<number> Memory intensive systems

    I<number> IO intensive systems.

    Costs per instance per hour are on the website; see above for the link.

    • On the 3rd tab only the IAM role has to be set. Create a new one if you do not have one already. When creating a new one, select one for Amazon EC2 and then AdministratorAccess for your own environment. When saved, you have to press the refresh button before it is available in the dropdown box. Leave everything else as is to avoid additional costs.
    • On the 4th tab you can select additional storage for your instance. Select a different disk instead of enlarging the existing disk; this is better for an Oracle environment. So press the Add New Volume button for more disk space. Volume type EBS is right, only change the size you want to use. Volume type GP2 stands for General Purpose.

    • An extra volume appended to the instance remains reusable after a reboot of the instance; otherwise you would have to install the Oracle software again.
    • The next tab is for adding tags to your Oracle environment. The tag Name will be displayed directly when looking at the instances on the dashboard. Other tags are optional but very helpful for colleagues: department and name or id of the owner, for example.
    • On the 6th tab you have to configure a security group; you want to avoid access for anyone, from everywhere, on default port 22. When selecting the My IP option in the source drop-down menu, only your own IP will be allowed to connect on port 22. Other ports can be configured as well. For an Oracle database or WebLogic, different ports will be used, so you have to configure them too.
    • On the last tab, review and launch by pressing the launch button. You will be asked to select or create a key; when it is your first server, create a key for use with putty or other ssh applications. A private key will be generated which you have to store carefully, because it is given only once. For using the key with putty, use the following link (see also the example after this list):
    • You might also reuse a key when you already have one
    • Your system will be ready for use after several minutes (state = running). You first have to run a yum update on the system by typing: sudo yum update -y
    • Now the system is ready to install Oracle software and create an Oracle database or WebLogic environment
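    As a minimal sketch, connecting to the new instance with the generated key from a Linux or Mac shell looks like this (the key file name and public IP are placeholders; the login user depends on the AMI, ec2-user is common; on Windows you would convert the key with PuTTYgen and connect with putty):

    chmod 400 my-first-key.pem
    ssh -i my-first-key.pem ec2-user@<public-ip-of-the-instance>
    sudo yum update -y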

    The post How to start with Amazon cloud server appeared first on AMIS Oracle and Java Blog.