Amis Blog

Friends of Oracle and Java

Top 5 Infrastructure (IaaS) announcements by Oracle at Oracle OpenWorld 2017

Sun, 2017-10-08 12:26

From Thomas Kurian’s keynote during Oracle OpenWorld 2017 – see https://youtu.be/cef7C2uiDTM – a quick recap of the five most important announcements regarding IaaS:

[The five announcements were presented as slides – images 1 through 5; see the keynote video for the details.]

World record benchmarks

[slides showing the world record benchmark results]


Watch Oracle OpenWorld 2017 Keynotes On Demand

Sat, 2017-10-07 06:16

Watch the keynotes on YouTube using these links:

  • Larry Ellison (Sunday, Oct 1st) – https://www.youtube.com/watch?v=HEupUSSSEBo
  • Dave Donatelli (Tuesday, Oct 3rd) – https://www.youtube.com/watch?v=irvNYpCopA8
  • Thomas Kurian (Tuesday, Oct 3rd) – https://www.youtube.com/watch?v=cef7C2uiDTM
  • Larry Ellison (Tuesday, Oct 3rd) – https://www.youtube.com/watch?v=faKWViY6zEk&t=6s
  • SuiteConnect – Evan Goldberg (Wednesday, Oct 4th) – https://www.youtube.com/watch?v=pURoDocJW1Y
  • JavaOne Keynote (Monday, Oct 2nd) – https://www.youtube.com/watch?v=UNg9lmk60sg



Fun with Data Visualization Cloud–creating a timeline for album releases

Fri, 2017-10-06 08:51

I have played a little with Oracle's Data Visualization Cloud, and it is really fun to be able to turn raw data into nice – and sometimes meaningful – visuals so quickly. I do not pretend to grasp the full potential of Data Viz CS, but I can show you some simple steps for quickly creating something good-looking and potentially really useful.

My very first steps were documented in this earlier article: https://technology.amis.nl/2017/09/10/hey-mum-i-am-a-citizen-data-scientist-with-oracle-data-visualization-cloud-and-you-can-be-one-too/.

In this article, I start with two tables in a cloud database – with the data we used for the Soaring through the Clouds demo at Oracle OpenWorld 2017.

As described in the earlier article, I have created a database connection to this DBaaS instance and I have created data sources for these two tables.

Now I am ready to create a new project.

I select the data sources to use in this project.

On the Prepare tab, I make sure that the connection between the data sources is defined correctly (with Proposed Acts providing the lookup data for the Albums).

On the Visualize tab, I drag the Release Date to the main pane.

I then select Timeline as the visualization.

Next, I bring the title of the album to the Details section, and the genre of the album to the Color area.

Then I realize I would like to have the concatenation of Artist Name and Album Title in the details section. However, I cannot add two attributes to that area. What I can do instead is create a Calculation.

Next, I can use this calculation for the details.

I can use Trellis Rows to create a timeline per value of the selected attribute – in this case, the artist.

It is very easy to add filters – which end users can manipulate in presentation mode to filter on the data relevant to them. Simply drag attributes to the filter section at the top, then select the desired filter values, and the visualization is updated accordingly.


Tweet with download link for JavaOne and Oracle OpenWorld slide decks

Fri, 2017-10-06 07:24

In a recent article I discussed how to programmatically fetch a JSON document with information about sessions at Oracle OpenWorld and JavaOne 2017. Yesterday, slide decks for these sessions started to become available. I analyzed how the links to these downloads were included in the JSON data returned by the API. Then I created simple Node programs to tweet about each of the sessions for which a download became available, and to download the files to my local file system.

I added provisions to space out the tweets and the download activity over time – so as not to burden the backend of the website, and not to be kicked off Twitter for being a robot.

The code I crafted is not particularly ingenious – it was created rather hastily, in order to share the links for downloading slide decks with the OOW17 and JavaOne communities. I used the npm modules twit and download. The code can be found on GitHub: https://github.com/lucasjellema/scrape-oow17.
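For illustration, here is a minimal sketch of what such a spaced-out tweet-and-download loop could look like with twit and download. This is not the actual repository code: the credentials, the 30-second interval and the JSON structure – notably the hypothetical title and slidesUrl fields – are assumptions.

// Minimal sketch – not the actual scrape-oow17 code.
// Assumes an array of sessions with hypothetical fields `title` and `slidesUrl`.
const Twit = require('twit');
const download = require('download');
const sessions = require('./oow2017-sessions-catalog.json');

const twitter = new Twit({
  consumer_key: '...', consumer_secret: '...',       // your Twitter app credentials
  access_token: '...', access_token_secret: '...'
});

// keep only the sessions that have a slide download link
const withSlides = sessions.filter(s => s.slidesUrl);

let i = 0;
const timer = setInterval(() => {                    // space the work out over time
  if (i >= withSlides.length) { clearInterval(timer); return; }
  const session = withSlides[i++];
  twitter.post('statuses/update',
    { status: `Slides available for "${session.title}": ${session.slidesUrl}` },
    (err) => { if (err) { console.error(err); } });
  download(session.slidesUrl, 'slides')              // save the deck under ./slides
    .catch((err) => console.error(err));
}, 30000);                                           // one session every 30 seconds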

The documents javaone2017-sessions-catalog.json and oow2017-sessions-catalog.json contain details on all sessions – including the URLs for downloading slides.



Oracle Open World; day 4 – almost done

Thu, 2017-10-05 13:43

Almost done. It's not expected that tomorrow, Thursday, will be a day full of new stuff and exciting news. Today, Wednesday, was a mix for me of 'normal' content, such as a session about migrating to Oracle Enterprise Manager 13.2 (another packed room), and a very interesting session about the Autonomous Database. Just a short note about a few sessions (including the Autonomous Database, of course).

As mentioned, sessions with 'normal' content – in this case migrating a 100TB database in one day, with Mike Dietrich – are quite popular. We may almost forget that while most customers are thinking about the cloud, at the moment they are just focused on keeping the daily business running.

The session about Oracle Enterprise Manager, about upgrading to 13c (a packed room), is quite rare. Two years ago there were a lot of presentations about this management product; this year, close to none. I'm very curious to know what happens after 2020. Oracle Management Cloud is coming rapidly. But… Oracle uses Enterprise Manager quite heavily in the public cloud, so it is expected it won't disappear that fast. Here are the timelines:

[photo: Enterprise Manager roadmap timelines]

At the end of the day, a session was planned about the most important announcement of Oracle OpenWorld: a preview of the Autonomous Database.

Quite peculiar: at the very end of the day, in a room that was obviously too small for the crowd.

A few outlines. The DBA is still needed; only the generic tasks are disappearing:

[photo: slide on the changing role of the DBA]

The very rough roadmap:

[photo: Autonomous Database roadmap]

The Data Warehouse version is already there in 2017; this was technically 'easier' to accomplish. The OLTP autonomous database has more challenges.

[photo: Autonomous Data Warehouse slide]

And a very important message to the customers: an SLA guarantee.

[photo: SLA guarantee slide]

Regardz


Oracle Open World; day 3- some highlights

Wed, 2017-10-04 07:01

Day 3 began with a very smooth and interesting keynote by Thomas Kurian, full of flawless, wonderful demos. The second keynote, by Larry Ellison, happened in the afternoon; I couldn't attend, as I had a product management meeting, but I heard there were some hiccups. Besides the keynotes, it was a day full of good information and surprising stuff, with at the end: the Oracle Database Appliance X7-2 series. Another short note about Oracle Open World. As said, the day began with the keynote of Thomas Kurian.

A slide of the six journeys to the cloud – almost the central theme of the whole Open World: the Journey to the Cloud. My special interest, by the way, as I am interested in engineered systems, is the first: optimizing your on-premises datacenter.

[slide: the six journeys to the cloud]

A lot of 'best of' claims came along: fastest compute, fastest GPU, fastest storage, fastest network, industry-leading global DNS.

Then the demos were given.

Cool stuff is the right word for it, I think: chatbots, Smartfeed, Connected Intelligence, social media analyses, analytics, IoT.

The technology behind these demos spans a dazzling number of new and existing cloud services. Too much for now, I'm afraid.

Machine Learning was the big keyword in all these demos, I think.

Serverless with Kafka and the Kubernetes Cloud Service:

[slide: serverless with Kafka and Kubernetes]

There's a new cost estimator to calculate how many Universal Credits the various services will cost.

Another announcement: Blockchain Cloud Service, for secure interconnected transactions.

The announcement in the afternoon: Oracle Management and Security Cloud, combining Management Cloud with machine learning. Larry Ellison talked about the severity of data hacks and information stealing, while data centers get increasingly complicated and systems become harder to patch: "We've got to do something. It must be an automated process."

In a product management session about IaaS, several price comparisons with AWS were made at a detailed level: how cheap is Oracle compared to AWS?

[photo: price comparison with AWS]

Announcement of SLAs.

In the session about the Oracle Database Appliance: of course, the X7-2 series. The summary – SE for all models, 12.2.0.1 support:

[photo: Oracle Database Appliance X7-2 summary]

The new HA:

[photo: the new ODA HA model]

KVM is a new deployment option, and the way forward; Oracle VM on the ODA is slowly disappearing.

[photo: KVM as deployment option on the ODA]

I know, this just scratches the surface of all the new things that are happening…

Regardz


Oracle Open World; day 2 – highlights

Tue, 2017-10-03 01:45

 

On Monday, Oracle Open World really starts: a whole lot of general sessions with announcements and strategic directions at product / service level. In this post I'll try to summarize the highlights of the day. Quite hard, as there are a whole lot of interesting sessions which overlap – always the feeling I'm missing something. And, as a surprise, a very interesting session at the very end of the day.

It started with the keynote of Mark Hurd. No big news – unless you count the revisiting of his predictions of a year ago. These still hold true:

– By 2025 the number of corporate-owned data centers will decrease by 80%

– 80% of production apps will be in the cloud by 2025

 

Hardware: not shouting out loud, but there's a new generation of

– Engineered Systems, the X7 line – engineered for Oracle Cloud Infrastructure (but also for on-premises)

– SPARC servers, with the SPARC M8 processor.

 

As mentioned earlier, Oracle Management Cloud is becoming more dominant as the 'single pane of glass'.

[slide: Oracle Management Cloud overview]

Software: Wim Coekaerts

– Solaris will be supported and developed for a long time!

– Oracle Kubernetes Cloud has been launched!

A few slides about key announcements whose value I really can't judge yet:

[slides: key integration and key security & management announcements]

The last session was about the new features of Database 18c (December 2017) – quite interesting. A few enhancements:

– Performance enhancements through low-latency memory transactions, non-volatile memory support and in-memory column store improvements – mainly OLTP and IoT workload improvements.

– For data warehousing and big data there are new features such as In-Memory for external tables, machine learning algorithms and an alter-table-merge function.

– Per-PDB switchover

– Improved JSON support

– Centrally managed users directly in Active Directory

[photo: slide on centrally managed users in Active Directory]

– A REST API to provide instance management and monitoring

– Official Docker support (except for RAC, which is coming)

– SQLcl got more attention

– Gold images as a new installation approach – as zip or tar file, Docker image or VirtualBox image; installation through RPM

– In the first quarter of 2018, 18c XE will be launched, including PDBs; probably 2GB, 2 CPUs, 12GB storage (compressed, net 40GB). Meant for students and proofs of concept, not for production.

[photo: Oracle Database 18c XE slide]

Regardz


Oracle Open World 2017; day 1 observations

Mon, 2017-10-02 09:41

Just a quick note about day 1 of Oracle Open World. This Sunday is traditionally filled with presentations by user groups, customers and product management, with the welcome keynote at the end of the day. There is not really any exciting news in these presentations; that will have to wait for the keynotes. But some observations can already be made.

 

The big news at the keynote – the Autonomous Database – was not that big anymore: Larry Ellison announced it a week ago, and I did a wrap-up yesterday.

Business Insider has written quite a good summary of the keynote, and here are the highlights on video.

The most important slide of the keynote – regarding the changing role of the DBA – is included in this post: less time on administration, more time on innovation. Oracle 18c requires no DBA, is highly available and autotunes queries.

 

So, what about the presentations I went to on Sunday? Just a few observations (very limited in scope, of course):

– The phrase 'Single Pane of Glass', which was used for Enterprise Manager 13c a while ago, is now being used for the Oracle Management Cloud (OMC). The context and scope are, however, quite different: OMC is strategically meant for monitoring and managing a complete hybrid cloud environment, including Azure, Amazon and the on-premises environment.

– The word 'management' has been inserted into the OMC slides – not just monitoring anymore. What are the consequences for Oracle Enterprise Manager?

– Security is on the agenda

– Machine Learning is trending.

– Some products are barely present at this OpenWorld: Oracle Enterprise Manager, Exa-systems (not the cloud services), the WebLogic platform, the Oracle Database Appliance – in summary, a lot of hardware and on-premises management. And when hardware is mentioned, it is just a step towards the final goal: the cloud. With one big exception: the Oracle Cloud Machine for Cloud@Customer.

– A lot of sessions with one theme: journey to the cloud.

 

As mentioned before: just a quick note for now. Regardz

Resources:

Announcement of the autonomous database: https://technology.amis.nl/2017/10/01/the-day-before-oow17-expectations-and-my-agenda/

Business insider: https://www.businessinsider.nl/oracle-18c-database-patch-cybersecurity-flaws-2017-10/?international=true&r=US

Video highlights: https://twitter.com/twitter/statuses/914866257492533249


The day before OOW17; infra expectations

Sat, 2017-09-30 19:01

Oracle Open World 2017 starts tomorrow, and as a platinum partner of Oracle we – AMIS Services – are obliged to keep ourselves and our customers informed about the roadmaps of the Oracle products.

And, of course, to translate this into added business value for our customers. In short: what to pick from all the coming announcements, new features, cloudiness and so on. All the arrows point towards the Oracle Cloud products, of course, but first we'll have to find the answers to two questions: 'why should a customer go to the cloud', and if so, 'what role does the Oracle Cloud play in this'.

This week will be – as in previous years – full of announcements. It started early this year, with Larry's pre-announcement last week about the 'autonomous database', Bring Your Own License to the cloud and Universal Credits.

As a reminder, here is a wrap-up of Larry Ellison's presentation:

– Lowest price for IaaS. That means the same price as Amazon, but faster, and thus cheaper.

– Highest rate of automation in PaaS, which means the lowest TCO. The goal is to guarantee a 50% lower TCO than Amazon.

– The 'autonomous database' will be available in December, based on machine learning. This should eliminate human labor (the DBA).

– A Service Level Agreement of 99.995% – that is, 30 minutes of planned and unplanned downtime a year.

– Bring Your Own License for PaaS, 94% cheaper than the old price. This should lower the threshold for using the Oracle Cloud.

– It's becoming possible to buy Universal Credits, no longer tied to a specific cloud product.

This wrap-up pretty much covers my agenda – or, if you will, my focus – for this Oracle Open World: primarily TCO, (cloud) platforms, databases, middleware and engineered systems.

Oracle has made some pretty smart moves, which will influence topics like:

– Engineered Systems. In general: is there still a business case for buying on-premises hardware?

– The role and future of the DBA. DBAKevlar wrote a nice blog about this.

– Life cycle management of the platform. How to cope with the management of a hybrid environment, including these new autonomous databases, and containers?

– Databases. There will be a lot of new features in Oracle 18c.

Tomorrow is the start of an exhausting week, and I'm not even presenting, nor am I an ACE or Development Champion! Just a mortal visitor, attending a lot of presentations, keynotes, meetings with product management, network events, appreciation events and dinners with Oracle representatives.

Respect for my colleagues Lucas Jellema and Robert van Molken: together they are involved with (or are presenting) 9 presentations.

My intention is also to write quite regularly about the things I encounter on my journey this week.

Regardz

Resources:

Larry Ellison’s announcement: http://www.oracle.com/us/corporate/events/cloud-announcement/index.html

DBAkevlar’s blog : http://dbakevlar.com/2017/09/death-dba-long-live-dba/

Lucas Jellema and Robert van Molken presentations / involvements: https://events.rainfocus.com/catalog/oracle/oow17/catalogoow17?search=AMIS%20&showEnrolled=false


Oracle Applications Cloud User Experience Strategy Day– Directions for User Experience

Wed, 2017-09-27 18:02

Today – Wednesday, the 27th of September – saw close to 50 people gather for the OAUX (Oracle Applications User Experience) Strategy Day. Some attendees joined from remote locations on three continents, while most of us had assembled in the UX Spaces Lab at Oracle's Redwood Shores HQ – equipped with some interesting video and audio equipment.

Some important themes for this day:

  • The key message of Simplicity, Mobility and Extensibility is continued. Simplicity means a user experience that is to the point, only drawing a user's attention to relevant items, only presenting meaningful data and allowing a task to be handled most efficiently.

    In order to achieve this simplicity, quite a bit of smartness is required: user context interpreted by smart apps leads to a simple UX, with chat, voice input, conversational UIs and fully automated processes at the pinnacle. Machine learning is at the heart of this smartness – deriving information from the context, presenting relevant choices and defaults based on both context and historical patterns.

  • Enterprise mobility is a key element in the user experience – with a consistent experience yet tailored to the device (one size does not fit all), and the ability to start tasks on one device and continue with them on a different device at a later point in time. The experience should be light on data: only show the absolutely essential information.

  • The latest Oracle Cloud Applications release – R13 – shows some evolution in the UX and UI.

  • There is a move away from using icons for navigating the application, towards search & notifications. The ability to tailor the look & feel (theming, logo, heading, integrating external UIs) has improved substantially.

  • Conversational UI for the enterprise is rapidly becoming relevant. It complements and replaces the current web & mobile UI – for quick, simple mini-transactions and smart capture. The OAUX team discerns four categories of interactions that conversational interfaces are initially most likely to be used for: Do (quick decisions, approvals, data submission), Lookup (get information), Go To (use conversation as the starting point for a deep-link, context-rich navigation to a dedicated application component) and Decision Making (provide recommendations and guidance to users).

    Some examples were shown of conversational UIs – low-threshold user-to-system interaction for simple questions, requests, actions and submissions.

    Jeremy Ashley introduced the term JIT UI – just-in-time UI: widgets (buttons, selection lists) that are mixed in with the text-based conversational UI (aka chat) to allow easy interaction when relevant; this could also include dynamically generated visualizations for more complex presentation of data.

    The OAUX team makes an RDK (Rapid Development Kit) available for conversational UI – or actually the first half of the RDK: the part that deals with designing the conversational UI. The part about the actual implementation will follow with the launch of the Oracle Intelligent Bot Cloud Service and associated technology and tooling.

    This new RDK can be found at https://t.co/m7AuSBJw5J. It contains many guidelines on designing conversations – about how to address users and what information and interaction to provide.

  • Another brand-new RDK is soon to be released for Oracle JET – aligned with JET 4.0, which is to be released next week at Oracle OpenWorld 2017. This RDK supports development of Oracle JET rich-client applications with the same look and feel as the R13 ADF-based Oracle SaaS apps. Assuming that there will be a long period of coexistence between ADF-based frontends and Oracle JET-powered user interfaces, it seems important to be able to develop an experience in JET that is very similar to the one users are already used to in the existing SaaS applications.

    Additionally, the JET RDK will provide guidelines on how to develop JET applications. These guidelines were created in collaboration between the SaaS foundation and development teams, the JET product development team and the OAUX team. They are primarily targeted at Oracle's own development teams that embrace JET for building SaaS app components, and at other developers creating extensions on top of Oracle SaaS. However, these guidelines are very useful for any development team that is using JET to develop any application. The guidance provided by the RDK resources – as well as, potentially, the reusable components provided as part of the RDK – embodies best experiences and the intent of the JET team, and provides a relevant head start to teams that otherwise would have to invent their own wheels.

    Here is a screenshot of the sample JET application (R13 style) provided with the RDK:

    [screenshot of the sample JET application]

  • Updates – aligned with Cloud Apps Release 13 – have been released for MAF and ADF. Go to https://github.com/oracle/apps-cloud-ui-kit to find all resources.

    Here is a screenshot of the ADF demo application provided with the ADF RDK:

    [screenshot of the ADF demo application]

Some other observations

Any data in a user interface has to be justified: why should it be there? What will you use it for? What happens if it is not shown? Less is more (or at least: better).

Different generations of users prefer different styles of navigation & interaction; ideally the UX is personalized to cater for that.

An overview of all activities of the OAUX team during Oracle OpenWorld 2017:

[overview schedule of OAUX activities at Oracle OpenWorld 2017]


Oracle SOA Suite and WebLogic: Overview of key and keystore configuration

Sun, 2017-09-24 09:31

Keystores and the keys within can be used for security on the transport layer and application layer in Oracle SOA Suite and WebLogic Server. Keystores hold private keys (identity) but also public certificates (trust). This is important when WebLogic / SOA Suite acts as the server but also when it acts as the client. In this blog post I’ll explain the purpose of keystores, the different keystore types available and which configuration is relevant for which keystore purpose.

Why use keys and keystores?

The image below (from here) illustrates the TCP/IP model and how the different layers map to the OSI model. When, in the elaboration below, I talk about the application and transport layers, I mean the TCP/IP model layers – and more specifically, for HTTP.

[image: the TCP/IP model layers mapped to the OSI model]

The two main reasons why you might want to employ keystores are that

  • you want to enable security measures on the transport layer
  • you want to enable security measures on the application layer

Almost all of the methods and techniques mentioned below require the use of keys, and you can imagine that the correct configuration of these keys within SOA Suite and WebLogic Server is very important. They determine which clients can be trusted, how services can be called and also how outgoing calls identify themselves.

You could think transport layer and application layer security are two completely separate things. Often they are not that separate, though: the combination of transport layer and application layer security has some limitations, and often the same products / components are used to configure both.

  • Double encryption is not allowed. See here: 'U.S. government regulations prohibit double encryption.' Thus you are not allowed to do encryption on the transport layer and the application layer at the same time. This does not mean you cannot do it, but you might encounter some product restrictions – since, you know, Oracle is a U.S. company.
  • Oracle Web Services Manager (OWSM) allows you to configure policies that check whether transport layer security (HTTPS, in this case) is used, and it is also used to configure application-level security. You see this more often – a single product used for both transport layer and application layer security; think, for example, of API gateway products such as Oracle API Platform Cloud Service.

Transport layer (TLS)

Cryptography is achieved by using keys from keystores. On the transport layer you can achieve authentication, integrity, confidentiality and reliability for the connection as a whole.

You can read more on TLS in SOA Suite here.

Application layer

On the application level you can achieve similar feats (authentication, integrity, confidentiality, reliability), though often more fine-grained – for example on user level, or for a specific part of a message, instead of on host level or for the entire connection. Performance is usually not as good as with transport layer security, because the checks that need to be performed can require actual parsing of messages, instead of securing the transport (HTTP) connection as a whole, regardless of what passes through. The implementation depends on the application technologies used and is thus quite variable.

  • Authentication by using security tokens, such as:
    • SAML. SAML tokens can be used in WS-Security headers for SOAP, and in plain HTTP headers for REST.
    • JSON Web Tokens (JWT) and OAuth are also examples of security tokens.
    • Certificate tokens in different flavors, which directly use a key in the request to authenticate.
    • Digest authentication can also be considered. Using digest authentication, a username-password token is created, which is sent using WS-Security headers.
  • Security and reliability by using message protection. Message protection consists of measures to achieve message confidentiality and integrity. This can be achieved by:
    • signing. XML Signature can be used for SOAP messages and is part of the WS-Security standard. Signing can be used to achieve message integrity.
    • encrypting. Encryption can be used to achieve confidentiality.
Types of keystores

There are two types of keystores in use in WebLogic Server / OPSS: JKS keystores and KSS keystores. To summarize the main differences (elaborated below):

              JKS                               KSS
  Storage     file on the file system           database (OPSS schema)
  Management  JDK keytool                       FMW Control, WLST, REST API
  Protection  keystore and key passwords        policy-based access (password-
                                                protected KSS is not supported
                                                by OWSM)
  Default     default before WebLogic 12.1.2    default since WebLogic 12.1.2

JKS

JKS keystores are Java keystores which are saved on the file system. They can be edited using the keytool command that is part of the JDK. There is no direct support for editing JKS keystores from WLST, the WebLogic Console or Fusion Middleware Control. You can, however, use WLST to configure which JKS file to use. For example (see here):

connect('weblogic','Welcome01','t3://localhost:7001')
edit()
startEdit()
cd('Servers/myserver')

# use custom JKS keystores for both identity and trust
cmo.setKeyStores('CustomIdentityAndCustomTrust')
cmo.setCustomIdentityKeyStoreFileName('/path/keystores/Identity.jks')
cmo.setCustomIdentityKeyStorePassPhrase('passphrase')
cmo.setCustomIdentityKeyStoreType('JKS')
cmo.setCustomTrustKeyStoreFileName('/path/keystores/Trust.jks')
cmo.setCustomTrustKeyStorePassPhrase('passphrase')
cmo.setCustomTrustKeyStoreType('JKS')

save()
activate()
disconnect()

Keys in JKS keystores can have passwords, as can the keystores themselves. If you use JKS keystores in OWSM policies, you are required to configure the key passwords in the credential store framework (CSF). These can be put in the map oracle.wsm.security and can be called keystore-csf-key, enc-csf-key and sign-csf-key. Read more here. In a clustered environment you should make sure all the nodes can access the configured keystores/keys, for example by putting them on shared storage.

KSS

OPSS also offers KeyStoreService (KSS) keystores. These are saved in a database, in an OPSS schema which is created by running the RCU (Repository Creation Utility) during installation of the domain. KSS keystores are the default keystores since WebLogic Server 12.1.2 (and thus for SOA Suite since 12.1.3). KSS keystores can be configured to use either policies or passwords to determine whether access to keys is allowed. OWSM does not support using a KSS keystore which is protected with a password (see here: 'Password protected KSS keystores are not supported in this release'); thus for OWSM, the KSS keystore should be configured to use policy-based access.

KSS keys cannot be configured to have a password, so using keys from a KSS keystore in OWSM policies does not require you to configure credential store framework (CSF) passwords to access them. KSS keystores can be edited from Fusion Middleware Control, by using WLST scripts, or even through a REST API (here). You can, for example, import a JKS file quite easily into a KSS store with WLST, using something like:

connect('weblogic','Welcome01','t3://localhost:7001')
# obtain the OPSS KeyStoreService and import an existing JKS file into a KSS stripe
svc = getOpssService(name='KeyStoreService')
svc.importKeyStore(appStripe='mystripe', name='keystore2', password='password', aliases='myOrakey', keypasswords='keypassword1', type='JKS', permission=true, filepath='/tmp/file.jks')
Where and how are keystores / keys configured

As mentioned above, keys within keystores are used to achieve transport security and application security for various purposes. Let's translate this to Oracle SOA Suite and WebLogic Server.

Transport layer

Incoming

  • Keys are used to achieve TLS connections between the different components of the SOA Suite, such as Admin Servers, Managed Servers and Node Managers. The keystore configuration for those can be done from the WebLogic Console for the servers, and manually for the Node Manager. You can configure identity and trust this way, and whether the client needs to present a certificate of its own, so the server can verify its identity. See for example here on how to configure this.
  • Keys are used to allow clients to connect to servers via a secure connection (in general, so not specific to communication between WebLogic Server components). This configuration can be done in the same place as above, with the only difference that no manual editing of files on the file system is required (since no Node Manager is involved here).

Outgoing

Composites (BPEL, BPM)

Keys are used to achieve TLS connections from the SOA Suite to other systems; the SOA Suite acts as the client here. The configuration of the identity keystore can be done from Fusion Middleware Control by setting the KeystoreLocation MBean – see the screenshots below. Credential store entries need to be added to store the identity keystore password and key password. Storing the key password is not required if it is the same as the keystore password. The credential keys to create for this are SOA/KeystorePassword and SOA/KeyPassword (with the user being the same as the key alias from the keystore to use). In addition, components also need to be configured to use a key to establish identity: in the composite.xml, the property oracle.soa.two.way.ssl.enabled can be used to enable outgoing two-way SSL from a composite.

[screenshot: setting the SOA client identity store for two-way SSL]

[screenshot: specifying the SOA client identity keystore and key password in the credential store]

Service Bus

The Service Bus configuration for outgoing SSL connections is quite different from the composite configuration. The blog here nicely describes the locations where the keystores and keys are configured. In the WebLogic Server console you create a PKICredentialMapper, which refers to the keystore and also contains the keystore password configuration. From the Service Bus project, a ServiceKeyProvider can be configured, which uses the PKICredentialMapper and contains the configuration for the key and key password to use. The ServiceKeyProvider configuration needs to be done from the Service Bus console, since JDeveloper cannot resolve the credential mapper.

To summarize the above:

[overview image summarizing the keystore configuration locations]

Overwriting keystore configuration with JVM parameters

You can override the keystores used with JVM system parameters – such as javax.net.ssl.trustStore, javax.net.ssl.trustStoreType, javax.net.ssl.trustStorePassword, javax.net.ssl.keyStore, javax.net.ssl.keyStoreType and javax.net.ssl.keyStorePassword – in, for example, the setDomainEnv script. These will override the WebLogic Server configuration, but not the OWSM configuration (the application layer security described below). Thus, if you specify an alternative truststore on the command line, this will not influence the application-level security of calls going from SOA Suite to other systems – even when message protection (using WS-Security), which also uses keys and checks trust, has been enabled. It will influence HTTPS connections, though. For more detail on the above, see here.

Application layer

Keys can be used by OWSM policies to, for example, achieve message protection on the application layer. This configuration can be done from Fusion Middleware Control.

The OWSM runtime does not use the WebLogic Server keystore that is configured using the WebLogic Server Administration Console and used for SSL. The keystore which OWSM uses by default (since 12.1.2) is kss://owsm/keystore, and this can be configured in the OWSM Domain configuration. See above for the difference between KSS and JKS keystores.

[screenshot: OWSM keystore contents and management from FMW Control]

[screenshot: OWSM keystore domain configuration]

In order for OWSM to use JKS keystores/keys, credential store framework (CSF) entries need to be created which contain the keystore and key passwords. The OWSM policy configuration determines the key alias to use. For KSS keystores/keys, no CSF passwords are required, since OWSM does not support KSS keystores with a password, and KSS does not provide a feature to put passwords on keys.

Identity for outgoing connections (on the application policy level, e.g. signing and encryption keys) is established using the OWSM policy configuration. Trust for SAML/JWT (secure token service and client) can be configured from the OWSM Domain configuration.

Finally: this is only the tip of the iceberg

There is a lot to tell in the area of security. Zooming in on transport and application layer security, there is a wide range of options and of do's and don'ts. I have not talked about the different choices you can make when configuring application or transport layer security. The focus of this blog post has been to provide an overview of keystore configuration and usage, so I have not provided much detail. If you want to learn more about achieving good security on your transport layer, read here. To configure two-way SSL using TLS 1.2 on WebLogic / SOA Suite, read here. Application-level security is a different story altogether, and can be split up into a wide range of possible implementation choices.

If you want to achieve solid security, you should look at all layers of the TCP/IP model, and not just at the transport and application layers. It also helps if you use different security zones and divide your network, so that your development environment cannot accidentally access your production environment, or the other way around.

Final thoughts on keystore/key configuration in WebLogic/SOA Suite

When diving into the subject, I realized that using and configuring keys and keystores can be quite complex. The reason is that for every purpose of a key/keystore, configuration in a different location seems to be required. It would be nice if that were all, but sometimes configuration overlaps – such as the truststore configuration of WebLogic Server, which is also used by SOA Suite. This feels inconsistent, since for outgoing calls, composites and the Service Bus use entirely different configurations. It would be nice if it could be made a bit more consistent, and as a result simpler.


Setting up Oracle Event Hub (Apache Kafka) Cloud Service and Pub & Sub from local Node Kafka client

Thu, 2017-09-21 13:43

Oracle offers the Event Hub Cloud Service – an enterprise-grade Apache Kafka instance – with large numbers of partitions and topics, (retained) messages and distributed nodes. Setting up this cloud service is simple enough – especially once you know what to do, as I will demonstrate in this article. In order to communicate with this Event Hub from a local client – in this case created with Node – we need to open up some ports for access from the public internet.

The steps I went through:

  • create an Oracle Event Hub Cloud Service – Platform instance (the Kafka & Zookeeper instance)
  • create an Oracle Event Hub Cloud Service – Service instance (a topic)
  • create two network rules to allow access from the public internet to ports 2181 (Zookeeper) and 6667 (Kafka server)
  • create two Node applications using the kafka-node package – one to produce and one to consume (based on the work done by Kunal Rupani of Oracle; GitHub: https://github.com/kunalrupani/OracleEventHubConsumer)

Note: the Event Hub Cloud Service is a metered service, billed per hour. The cost for the smallest shape is around $0.70 per hour (or $200/month for the non-metered variant).

Create Oracle Event Hub Cloud Service – Platform instance (the Kafka & Zookeeper instance)

[screenshots of the platform provisioning steps]


Create Oracle Event Hub Cloud Service – Service instance (a Topic)

Switch to the Big Data Compute CS and select the service category Oracle Event Hub Cloud Service – Topics.

[screenshots of the topic creation steps]

Create two network rules to allow access from the public internet to ports 2181 (Zookeeper) and 6667 (Kafka server)

[screenshots of the network rule definitions]

Create two Node applications using the kafka-node package

one to produce and one to consume (based on the work done by Kunal Rupani of Oracle; GitHub: https://github.com/kunalrupani/OracleEventHubConsumer)

Note: the topic was created as microEventBus. The actual name under which it is accessed is partnercloud17-microEventBus (the identity domain is used as a prefix to the topic name).

The package.json contains the dependency on kafka-node:
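The original screenshot is not available in this feed; a minimal package.json along these lines would declare the dependency (the name and version values are assumptions):

{
  "name": "eventhub-client",
  "version": "1.0.0",
  "dependencies": {
    "kafka-node": "^2.2.0"
  }
}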

The straightforward code for producing to the Event Hub Kafka topic:

var EVENT_HUB_PUBLIC_IP = '129.xxxxxxxx';
var TOPIC_NAME = 'partnercloud17-microEventBus';
var ZOOKEEPER_PORT = 2181;

var kafka = require('kafka-node');
var Producer = kafka.Producer;
// the producer connects through Zookeeper (port 2181)
var client = new kafka.Client(EVENT_HUB_PUBLIC_IP + ':' + ZOOKEEPER_PORT);
var producer = new Producer(client);

var payloads = [
  { topic: TOPIC_NAME, messages: '*', partition: 0 }
];

console.log(payloads[0].messages);
// publish a message roughly every second, growing the payload up to 10 characters
setInterval(function () {
  console.log('called about every 1 second');
  producer.send(payloads, function (err, data) {
    if (err) {
      console.error(err);
    }
    console.log(data);
  });
  if (payloads[0].messages.length < 10) {
    payloads[0].messages = payloads[0].messages + "*";
  } else {
    payloads[0].messages = "*";
  }
}, 1000);
    
    

And the similar code for consuming. Note that the producer talks to Zookeeper (port 2181), while the consumer talks to the Kafka server itself (port 6667):

var EVENT_HUB_PUBLIC_IP = '129.xxxxxxx';
var TOPIC_NAME = 'partnercloud17-microEventBus';
var KAFKA_SERVER_PORT = 6667;

var kafka = require('kafka-node'),
    Consumer = kafka.Consumer,
    // the consumer connects directly to the Kafka broker (port 6667)
    client = new kafka.KafkaClient({ "kafkaHost": EVENT_HUB_PUBLIC_IP + ':' + KAFKA_SERVER_PORT }),
    consumer = new Consumer(
        client,
        [
            { topic: TOPIC_NAME, offset: 1 }
        ],
        {
            autoCommit: false,
            fromOffset: true
        }
    );

// log every message that arrives on the topic
consumer.on('message', function (message) {
    console.log(message.value);
});
    

Here is the output from producing:

[screenshot of the producer output]

and here from consuming:

[screenshot of the consumer output]


Getting started with Oracle JET: a CRUD service

Mon, 2017-09-18 10:25

Introduction

AMIS has recently set up a brand new Enterprise Web Application team, of which I am proud to be a member. We will be working on front-end development using a variety of JavaScript-based frameworks. As a first framework, we are currently investigating Oracle JET. After working through the Oracle JET MOOC and a Knockout.js tutorial, we began to build a meeting organisation app in order to get some more hands-on experience. This app is initially intended to have users and meetings, allowing users to create, show, update and delete meetings, with their authorization dependent on authentication. This blog post is intended to show you some of the discoveries we made while working on this project.

Getting started

To keep things simple, we started off by setting up a FeathersJS REST API as a back-end, with two models – one for meetings and one for users – as well as some seeds to provide initial data for these models. The next step was to make this data available in the JET app, across any of the components which might want to make use of it. This was done through the JET model and collection system, in which a model is a single data record, while a collection contains multiple records. Since we want the CRUD functionality to be available to different JET components, we placed the code in a separate service, as shown below. This service defines the model and collection, as well as providing an instance of the collection. The parseMeeting and parseSaveMeeting functions and attributes are optional, and can be left out if you do not want to change the names of any attributes between back-end and front-end. The comparator attribute of the collection is used to order the different models.
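The embedded gist does not survive in this feed. Below is a minimal sketch of what such a service could look like; the service URL, the attribute names and the exact parseMeeting / parseSaveMeeting mappings are assumptions for illustration, not our actual code.

// meetings service sketch – assumed endpoint and attribute names
define(['ojs/ojcore', 'ojs/ojmodel'], function (oj) {
  var serviceUrl = 'http://localhost:3030/meetings';   // assumed Feathers endpoint

  // rename back-end attributes to the names used in the front-end
  function parseMeeting(response) {
    return { id: response.id, name: response.name, start: response.startTime };
  }
  // map front-end attribute names back to what the back-end expects
  function parseSaveMeeting(meeting) {
    return { id: meeting.id, name: meeting.name, startTime: meeting.start };
  }

  var Meeting = oj.Model.extend({
    urlRoot: serviceUrl,
    parse: parseMeeting,
    parseSave: parseSaveMeeting,
    idAttribute: 'id'
  });

  var MeetingCollection = oj.Collection.extend({
    url: serviceUrl,
    model: new Meeting(),
    comparator: 'start'   // keep the collection ordered by start date
  });

  // a single shared collection instance, plus the CRUD functions shown below
  return {
    meetings: new MeetingCollection(),
    Meeting: Meeting
  };
});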
CRUD actions

After setting up the model and collection, we added CRUD functionality. JET models and collections provide functions which take care of the communication with the back-end. For example, the create function can be passed a plain old JavaScript object, as well as success and error handlers. The create function is called on the collection which was instantiated earlier, adding the new model to the collection for quick availability in the front-end, as well as making a call to the back-end to create the object in the database. Specific headers can be used, for example to pass an authorization token.

For the fetch and delete methods, we instantiate a new model and set its ID to the ID of the object we want to affect. The relevant function is then called on this object. In the case of the fetch function, the data in the model is mapped to a plain old JavaScript object, which is returned. In the success handler of the delete function, we remove the destroyed meeting from the collection, automatically updating all components in the front-end which rely on its data. The update function accepts both plain JavaScript objects and JET models. A sketch of these functions is shown below.

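Again, the gist itself is missing from the feed; a sketch of these CRUD functions, under the same assumptions and continuing the service module above, could look like this:

// CRUD sketch – in the actual service these functions are part of the returned object
function createMeeting(data, token) {
  meetings.create(data, {
    headers: { Authorization: 'Bearer ' + token },   // pass an authorization token
    success: function (model) { console.log('created', model.attributes); },
    error: function () { console.error('create failed'); }
  });
}

function fetchMeeting(id, callback) {
  var meeting = new Meeting({ id: id });
  meeting.fetch({
    // hand the fetched data back as a plain old JavaScript object
    success: function (model) { callback(Object.assign({}, model.attributes)); },
    error: function () { console.error('fetch failed'); }
  });
}

function deleteMeeting(id) {
  var meeting = new Meeting({ id: id });
  meeting.destroy({
    success: function () {
      // remove the destroyed meeting from the collection, so that all
      // components relying on the collection are updated automatically
      var cached = meetings.get(id);
      if (cached) { meetings.remove(cached); }
    },
    error: function () { console.error('delete failed'); }
  });
}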
Calling the service

Setting the service up like this gave us an easy-to-use way to interact with our back-end, as well as allowing us to clean up the code of our components and remove any duplication. Here is an example of how we call this service from our meetings.js. Other services can be set up in the same way, or multiple services could even inherit from the same base service.
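The original gist is not available either; calling the service from a viewModel might look something like this (an illustrative sketch, with an assumed token property):

// inside meetings.js (viewModel) – illustrative only
define(['knockout', 'services/meetings-service'], function (ko, meetingsService) {
  function MeetingsViewModel() {
    var self = this;
    self.meetings = meetingsService.meetings;   // the shared collection instance
    self.addMeeting = function () {
      meetingsService.createMeeting({ name: 'Sprint review', start: '2017-09-25' }, self.token);
    };
  }
  return new MeetingsViewModel();
});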

     


Hey Mum, I am a Citizen Data Scientist with Oracle Data Visualization Cloud (and you can be one too)

Sun, 2017-09-10 03:42

One of the Oracle Public Cloud Services I have seen mouthwatering demos of, but had not actually tried out myself, is Oracle Data Visualization Cloud. I had several triggers to finally give it a try – and I am glad I did. In this article, a brief report of my first experiences with this cloud service, which aims to provide the business user – aka citizen data scientist – with the means to explore data and come up with insights and meaningful visualizations that can easily be shared across the team, department or enterprise.

I got myself a 30-day trial of the cloud service, uploaded a simple Excel document with a publicly available dataset on the countries of the world, and started to play around. It turned out to be quite simple – and a lot of fun – to come up with some interesting findings and visualizations. No technical skills required – certainly none beyond those of an average Excel user.

Steps:

  • get a trial account – it took about 5 hours from my initial request for a trial to the moment the trial was confirmed and the service had been provisioned
  • enter the Data Viz CS and add a new data source (from the Excel file with country data)
  • do a little data preparation on the data source (assign the attributes to be used as measures)
  • create a new project; try out Smart Insights for initial data exploration, to get a feel for the attributes
  • create some visualizations – get a feel for the data and for what DVCS can do; there are many different types of visualizations, various options for filtering, embellishing and highlighting, and many dimensions can be included in a visualization
  • try a narrative – a dossier with multiple visualizations, to tell my story

In a few hours, you get a very good feel for what can be done.

     

Create Countries as my new data source

First steps: download countries.csv from https://www.laenderdaten.info/downloads/ and create an Excel workbook from this data. Note: I first tried to upload the raw csv file, but that ended with an internal error.

The data that was imported is shown. Now is a good moment to set the record straight – any metadata defined at this point is inherited by projects and visualizations that use this data source. For example: if we want to calculate with attributes and use them as values – for the size of bubbles and stacks, and to plot a line – we have to identify those attributes as measures.

Once the data source is set up, we can create our first project based on that data source by simply clicking on it. Note: the PROPOSED_ACTS and ACT_ALBUMS data sources are based on database tables in an Oracle DBaaS instance to which I first created a connection (simply with host, port, service name and username & password).

     

My First Project – Data Preparation & Initial Exploration

Here is the Data Preparation page in the project. We can review the data, see what is there, modify the definition of attributes, prescribe a certain treatment (conversion) of data, add (derived) attributes, etc.

If we click on the Visuals icon in the upper right-hand corner, we get a first stab at visualizing some of the data in this data source – out of the box, just based on how DVCS interprets the data and the various attributes.

For example, the number of countries per continent. Note how we can select different measures from the dropdown list – for example area.

This tells us that Asia is the largest continent in landmass, followed by Africa; that Russia is the largest country; and that the combined area of all countries using a currency called Dollar is the largest, with Rubel-using countries (probably just one) as runner-up. Note: all of this out of the box – I am 10 minutes into my exploration of DVCS!

     

First Visualizations – Let's Try a Few Things

Go to the second tab – Visualize.

Drag an attribute – or multiple attributes – to the canvas.

The default visualization is a pivot table that presents the data for the selected attribute. Drag two more attributes to the canvas.

The result is a matrix of data – with the measure (area) in the cells.

In order to prepare the visualization for presentation and sharing, we can do several things – such as removing columns or rows of data that are not relevant, or setting a color to highlight a cell.

     

Define range filters on selected attributes – for example, filter on countries with at least a 45M population.

When the filter has been set, the matrix adapts.

We can try out different styles of visualization – what about a map?

DVCS recognizes the names of continents and countries as geographical indications and can represent them on a map, using color for total area. Let's remove continent from the Category dimensions, and set the bubble size to population.

If we are interested in population density, we can add a calculated value.

Some more examples: select countries by size per continent in a horizontal stack chart, and a treemap – with population added as an additional attribute, represented through color.

     

Any visualization we like and want to include in our final narrative can be saved as an insight.

On the Narrate tab, we can include these insights in a meaningful order, to tell our story through the data visualizations. Also see Building Stories.

     

Resources

Oracle Data Visualization Cloud Service: https://cloud.oracle.com/en_US/data-visualization (at $75.00 per named user per month, with a minimum of 5 users – a fairly friendly priced offering)

The country data was downloaded from https://www.laenderdaten.info/downloads/

Documentation on Oracle Data Visualization Cloud Service: http://docs.oracle.com/en/cloud/paas/data-visualization-cloud/bidvc/getting-started-oracle-data-visualization.html

Documentation on Building Stories: http://docs.oracle.com/en/cloud/paas/data-visualization-cloud/bidvc/building-stories.html#GUID-9D6282AA-C7B7-4F7E-9B9E-873EF8F1FB5D


    Oracle OpenWorld 2017 Review sessie op 31 oktober 2017 – met aandacht voor ….

    Thu, 2017-09-07 06:37

    imageVan 2 tot 5 oktober vindt in San Francisco Oracle OpenWorld 2017 plaats. In deze week zal de nabije – en verdere – toekomst van Oracle duidelijk worden gemaakt. De roadmaps voor de producten in het portfolio. En ook de dead-end streets en ends-of-the-road. De grote thema’s voor Oracle, de voortgang sinds vorig jaar en de ambities voor het komende. Klantverhalen over behaalde resultaten, demonstraties van nieuwe producten en features. Een week voor inspiratie, contemplatie, en voor kritische vragen.

    Op 31 oktober doet het AMIS Team – zoals ieder jaar – verslag van de bevindingen op Oracle OpenWorld. In een sessie die barstensvol zit met informatie krijg je in een paar uur een goed beeld van wat die 2000 sessies van Oracle OpenWord in grote lijnen hebben duidelijk gemaakt – en waar Oracle heen gaat.

    Je kunt je voor deze sessie nu al aanmelden (en doe dat snel, want meestal zit het nokkie vol): https://www.amis.nl/nieuws/oracle-open-world-review 

    Als voorbeeld van wat je kunt verwachten kan je nog even de slides bekijken van onze review-sessie van vorig jaar: Oracle OpenWorld 2016 Review – Slides 

     

    Hieronder een opsomming van thema’s en vragen die wij tijdens Oracle OpenWorld 2017 gaan verkennen en waarover we in de review zullen rapporteren. NB: als je aanvullende onderwerpen en vragen hebt – laat het weten als comment bij dit artikel.

    · Volwassenheid van Oracle als Cloud leverancier: echt pay-as-you-go subscription model per cloud capability en per business value (unit) met de mogelijkheid een concrete TCO berekening te doen; ook: de staat van de Cloud Operations van Oracle: zijn die echt “cloud scale” qua schaalbaarheid, beschikbaarheid en graad van automatisering. Slaagt Oracle erin om van de PaaS Cloud – “One integrated suite of cloud (native) capabilities” te maken, “cloud first, comprehensive, integrated, open” zoals is aangekondigd. Is geautomatiseerde Operations mogelijk van de Oracle Cloud omgevingen door gebruikers (scripting, scheduling, monitoring, …)?

    · Hoe is de match tussen Oracle en haar portfolio en middelgrote ondernemingen? Is Oracle steeds meer alleen interessant voor en geïnteresseerd in Top-100 enterprises in de wereld? Of ontstaat misschien wel door de cloud – een ‘democratization’van Oracle en is er een betere aansluiting met kleinere-dan-de-absolute top bedrijven

    https://i2.wp.com/www.amis.nl/files/golden_gate1.jpg?resize=576%2C192&ssl=1· Organisaties maken voor hun IT gebruik van de producten van diverse leveranciers. Dat zal in de cloud niet anders zijn. Een belangrijk criterium bij de selectie van cloud-diensten zal zijn: hoe goed zijn ze toepasbaar in het hybride landschap van meerdere omgevingen met verschillende cloud-diensten van verschillende vendors.

    · Security – wat heeft Oracle ons te bieden om secure IT dichter bij te brengen? Waar staat de Identity Cloud Service op dit moment? Is er nog toekomst voorOracle Identity & Access Management Suite? Wat bieden de “security in silicon” voorzieningen?

    · De waarde van IT ontstaat in de operatie. Flexibele en snelle evolutie van IT oplossingen is vereist vanuit de business. Hoe ondersteunt Oracle de DevOps-beweging die is opgekomen om agile ontwikkeling, continuous delivery en efficiënte Operations te realiseren over de grenzen van technologie, platformen en data center locatie heen (Oracle Management Cloud is daar een schakel in)

    · Actuele Architectuur-trends en consequenties voor technologie en implementatie, zoals Microservices, Serverless Functions en Containers, “the real 3rd tier” (rich web client) en  REST APIs, de veranderende rol van de relationele (enterprise) database, de opkomst van Hadoop, NoSQL en event sourcing & CQRS

    · Het vlaggeschip product van Oracle is (nog) Oracle Database. De meest recente release is 12c Release 2 (September 2016 in de cloud, Maart 2017 voor on premises) met als belangrijke features In Memory, multitenancy, sharding, native JSON. Hoe is de adoptie van die release, hoe zijn de ervaringen met de belangrijkste features, wat kunnen we leren over migratie? Hoe ziet de roadmap richting Oracle Database 18 en 19 eruit (in plaats van een major 13c release komen er jaarlijkse opleveringen)

    · Opkomst van (niche) SaaS toepassingen en de voorzieningen om over leverancier- en omgevingsgrenzen heen standaardapplicaties te verrijken en te integreren

    · Toepassing van Machine Learning in Oracle producten en met Oracle technologie

    · Met IoT (Internet of Things) komt de fysieke wereld – klein en groot – binnen handbereik, deels zelfs in (near) real time; hoe maakt Oracle IoT mogelijk – vanaf device en sensor tot dashboard en automatische aanbevelingen en acties

· Serverless computing is an important new way to employ IaaS resources: stateless functions that react to triggers – such as HTTP requests or events – and then do their work, potentially with many instances side by side. This is the true pay-per-use model – you do not pay for an idle server waiting for traffic to handle – and it scales horizontally. Oracle is introducing Oracle Functions as its counterpart to AWS Lambda and Azure Functions. We will discuss how rich and usable Oracle Functions are.

· The state of Infrastructure as a Service: can we start closing data centers and putting our hardware out with the garbage? Does Oracle have the Generation 2 Infrastructure announced last year in such shape that the motto "bigger, better, cheaper than AWS" holds up – and we will consume IaaS en masse? How is the "cloud@customer" program (also known as "the cloud machine") doing?


· Is there still life in Oracle's on-premises products? Here we look at hardware (the engineered systems and the appliances – including the Exadata SL with SPARC and Linux, and Software & Security in Silicon) and at platform software such as the Database, SOA Suite, BPM Suite and other middleware components.

· How is the Low Code revolution progressing? Can Citizen Developers (& Data Scientists) get things done with Oracle's products – such as Visual Builder CS, APEX and Data Visualization CS?

    · PaaS Cloud – One integrated suite of cloud (native) capabilities – cloud first, comprehensive, integrated, open

· JavaOne takes place alongside Oracle OpenWorld. Java 9 will (finally) see the light of day and Java EE 8 is now also largely taking shape. We will report on the temperature of the Java ecosystem – also in relation to modern architectures, rich client web applications and competing platforms such as Node, Python and Go.

· Modern User eXperience – users interact with IT systems in ever more ways: through different devices (desktop, smartphone, tablet, watch) – possibly interleaved – and input modes (mouse/keyboard, touch, speech) and channels (application, app, notification, chat (Slack/Facebook Messenger), SMS, voice message, email). Which developments does Oracle lead | follow | advise? How is the Alta UI evolving and how well is Alta supported across different technologies? Which tools does Oracle offer for realizing human-application interaction? Where do Oracle JET and ADF stand at this moment?

· Blockchain – a mechanism for distributed, encrypted document stores – has the potential to bring major change to various industries, especially by bringing parties closer and more directly together and by eliminating the "men in the middle". Oracle has announced it will embrace blockchain. What that entails, how the integration with Oracle PaaS and SaaS will take shape and which possibilities will arise is something we will certainly address.

· Acquisitions, roadmaps, partnerships, personnel changes, announcements, innovation, surprises.

    The post Oracle OpenWorld 2017 Review sessie op 31 oktober 2017 – met aandacht voor …. appeared first on AMIS Oracle and Java Blog.

    Rapid and free cloud deployment of Node applications and Docker containers with Now by ZEIT

    Fri, 2017-09-01 03:10

I was tipped off about the now service from ZEIT: https://zeit.co – a cloud service with a free tier that allows command-line deployment of a simple static website, any Node application or any Docker container to a cloud environment where the application is publicly accessible. Depending on the resources consumed and the number of applications required, you may need to upgrade to a higher service tier (starting at $15/month). Note: for personal research and small-team development, the free (or OSS) tier will probably suffice.

    I decided to give it a spin.

    Steps:

    1. Download now command line tool – in my case for Windows – https://zeit.co/download 

    2. Install command line tool

    image

    3. Open the command line and login to now

    now login

    SNAGHTMLaeaa981

    This will prompt you for an email address, trigger an email sent to that address and wait for you to click on the link in that email to confirm your human nature and email identity.

    4. Navigate to a directory that contains a Node application (or a Docker build file + resources or a simple static web site); the now website provides samples (https://zeit.co/docs/examples/json-api) and of course any Node application will do.

    A quick and dirty new Express/Node application:

    image

    image

    image

    image
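For readers who cannot make out the screenshots: a minimal sketch of such an Express application. This is my own illustration, not a verbatim copy of the app in the figures; the greeting and file layout are made up.

    // index.js – a minimal Express application
    const express = require('express');
    const app = express();

    // a single route returning a greeting
    app.get('/', (req, res) => {
      res.send('Hello from now!');
    });

    // listen on the port provided by the environment, falling back to 3000 locally
    const port = process.env.PORT || 3000;
    app.listen(port, () => console.log('Listening on port ' + port));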

     

    5. Deploy the application:

    now

     

You will get a unique URL for this deployment of the application. Note: your code will be launched into a 64-bit Node.js environment (the latest release of Node) running on Alpine Linux.

Deployment is not super fast – I experienced anywhere from 30 seconds to 4 minutes.

    6. Using the URL – we can access the ‘dashboard’ for the application:

    image

    Both the log files:

    image

    and the sources are available:

    image

    And of course the application itself can be accessed, once deployment is complete:

    image

    7. If you make changes to the sources of the application

    image

    and want to redeploy, you simply execute again:

    now

    This results in a new unique URL – for this new deployment.

    image

The previous deployment remains available at its original URL. Presumably, now treats application deployments similar to serverless functions and only spins them up when there is demand. Old deployments go to sleep after a while and from then on only consume storage. Keeping older, inactive deployments around costs you nothing. You can actively remove old deployments with now remove [url of deployment].

    image

     

    Miscellaneous

We can specify custom actions to be performed for installing or starting the Node application through instructions in the package.json file (build-now instead of build, and start-now instead of start).
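A sketch of what that could look like in package.json, assuming the hook names mentioned above; the app name and commands are hypothetical:

    {
      "name": "hello-now",
      "version": "1.0.0",
      "scripts": {
        "start": "node index.js",
        "start-now": "node index.js"
      },
      "dependencies": {
        "express": "^4.15.4"
      }
    }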

At runtime, the only writable location is /tmp. Temporary data can be saved inside this directory, but note that it is removed each time a deployment goes to sleep and wakes up again – so it is not safe to use as a file-based database.

Now can also deploy code directly from GitHub, BitBucket and GitLab, using now <username>/<repository>.

    During deployment with now, we can pass environment variables for the runtime execution environment of the application; this can be done on the command line or through a local now.json configuration file.
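For example – a sketch; the variable names are made up, and I am assuming the -e flag and the now.json env section work as the documentation describes:

    now -e NODE_ENV=production -e GREETING="Hello World"

or, equivalently, in a local now.json:

    {
      "env": {
        "NODE_ENV": "production",
        "GREETING": "Hello World"
      }
    }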

    Scaling can be configured for the application – specifying the (max and min) number of concurrent instances that can be running to handle the load; scaling beyond three instances (and fixed scaling) can only be done on the paid plans.

If you want to, you can now continue by buying a domain name for a friendly URL and associating it with the application you have just deployed. This low-threshold domain name selling is, I guess, one of the ways for ZEIT to make money.

     

    Resources

    Five Minute Guide to now – https://zeit.co/docs/getting-started/five-minute-guide-to-now 

    Real time deployments with Zeit’s Now   https://olegkorol.de/2017/03/12/Real-time-deployments-with-now/ 

     

    Plans & Pricing

    image

     

     

    Dashboard/Activity Stream

    image

    The post Rapid and free cloud deployment of Node applications and Docker containers with Now by ZEIT appeared first on AMIS Oracle and Java Blog.

    Serverless computing with Azure Functions – interaction with Event Hub

    Thu, 2017-08-31 01:16

    In a previous article, I described my first steps with Azure Functions – one of the implementation mechanisms for serverless computing: Serverless Computing – Function as a Service (FaaS) – with Azure Functions – first small steps with a Node/JavaScript function. Functions can be triggered in many ways – by HTTP requests, the clock (scheduled), by database modifications and by events. In this article, I will look at a Function that is triggered by an event on the Azure Event Hub. I will also show how a function (triggered by an HTTP request) can write to Event Hub.

Functions can have triggers and input bindings. The trigger is what causes the function to run – and it can carry a payload. An input binding is a declarative definition of data that the function has (read) access to during execution. Functions can also have output bindings – one for each of the channels to which they write results.

    Steps

The first steps: arrange an Azure account and create an Event Hubs namespace – the context in which to create individual event hubs (the latter are comparable to Kafka topics).

    On the Event Hub side of the world:

    • Create Event Hub
    • Create Shared Access Policy
    • Get Connection String URL for the shared access policy

    In Azure Functions –

    • At the function app level: Create Connection String for Connection String URL copied from shared access policy
    • Create a function based on the template Data Processing/JavaScript/EventHub Trigger – a JavaScript function triggered by a message on the indicated Event Hub in the Event Hub namespace addressed through the connection string; save and (test) run the function (this will publish an event to the event hub)
    • Optionally: create a second function, for example triggered by an HTTP Request, and have it write to an output binding to the Event Hub; in that case, an HTTP request to the second function will indirectly – through Event Hub – cause the first function to be executed

     

    In Event Hub Namespace

    Create Event Hub GreetingEvents. Set the name and accept all defaults. Press Create.

    image

     

    SNAGHTML2844af1

    Once the Event Hub creation is complete, we can inspect the details – such as 1 Consumer Group, 2 Partitions and 1 Day message retention:

    SNAGHTML284b19f

    This is our current situation:

    image

     

Now return to the overview and click on the link Connection Strings. We need this to create a connection from the Azure Functions app to the Event Hub Namespace, using the URL for the Shared Policy we want to leverage for that connection.

    image

    Click on Connection Strings to bring up a list of Shared Policies. Click on the Shared Policy to use for accessing the Event Hub namespace from Azure Functions.

    SNAGHTML2880164

    Click the copy button to copy the RootManageSharedAccessKey connection string to the clipboard.

    In Azure Function App

In order for the Function to access the Event Hub [Namespace], the connection string to the Event Hub [Namespace] needs to be configured as an app setting in the function app [the context in which the Function to be triggered by the Event Hub is created]. Note: that is the value in the clipboard.

    image

Scroll down.

    SNAGHTML28c5fb9

Create the Connection String to the Event Hub Namespace using the value in the clipboard:

    image

     

Save the changes in the function app:

    image

    At this point, a link is established between the function app (context) and the Event Hub Namespace. Any function in the app can link to any event hub in the namespace.

    image

     

    Create Function to be Triggered by Event

    With the connection string in place, we can create a function that is executed when an event is published on Event Hub greetingevents. That is done like this:

    image

    Type the name of the function, click on the link new and select event hub greetingevents to associate the function with:

     

    image

     

    Click on create.

    The function is created – including the template code:

     

    image
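For reference, the generated template code is roughly the following – a sketch reproduced from memory, so details may differ per template version:

    module.exports = function (context, myEventHubMessage) {
        // context.log writes to the function's log stream
        context.log('JavaScript eventhub trigger function processed work item', myEventHubMessage);
        context.done();
    };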

    The configuration of the function is defined in the file function.json. Its contents can be inspected and edited:

    image

    The value of connection is a reference to an APP Setting that has been created when the function was created, based on the connection string to Event Hub Namespace.
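The function.json for an Event Hub trigger looks roughly like this. Note that the connection value below is a placeholder – the app setting name actually generated for your function app will differ:

    {
      "bindings": [
        {
          "type": "eventHubTrigger",
          "name": "myEventHubMessage",
          "direction": "in",
          "path": "greetingevents",
          "connection": "EventHubConnectionString"
        }
      ],
      "disabled": false
    }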

Click on Save and Run. A test event is published to the Event Hub greetingevents. In the log window we can see the function reacting to that event. So we have lift-off for our function – it is triggered by this event (and therefore presumably by all events) on the Event Hub and processes them according to the (limited) logic it currently contains.

    image

    The set up looks like this:image

     

     

    Publish to Event Hub from Azure Function

     

    To make things a little bit more interesting we will make the Azure Function that was introduced in a previous article for handling HTTP Request “events” also produce output to the Event Hub greetingevents. This means that any HTTP request sent to function HttpTriggerJS1 leads to an event published to Event Hub greetingevents and in turn to function EventHubTrigger-GreetingEvents being triggered.

    image

     

    To add this additional output flow to the function, first open the Integration tab for the function and create a new Output Binding, of type Azure Event Hubs. Select the connection string and the target Event Hub – greetingevents. Define the name of the context parameter that provides the value to be published to the Event Hub – outputEventHubMessage:

    image
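This adds an entry along the following lines to function.json – again a sketch, with the connection value as a placeholder:

    {
      "type": "eventHub",
      "name": "outputEventHubMessage",
      "direction": "out",
      "path": "greetingevents",
      "connection": "EventHubConnectionString"
    }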

    We now need to modify the code of the function, to actually set the value of this context parameter called outputEventHubMessage:

    image
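In code, that comes down to something like the sketch below – my reconstruction of the screenshot, not a verbatim copy. Anything assigned to context.bindings.outputEventHubMessage is published through the output binding defined above:

    module.exports = function (context, req) {
        const name = req.query.name || (req.body && req.body.name);
        // publish the greeting to Event Hub greetingevents through the output binding
        context.bindings.outputEventHubMessage = 'Greeting received for ' + name;
        // return the regular HTTP response as well
        context.res = { body: 'Hello ' + name };
        context.done();
    };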

    At this point, we can test the function – and see how it sends the event

    image

    that indirectly triggers our former function.

    When the HTTP Request is sent to the function HttpTriggerJS1 from Postman for example

    image

The function returns its response and also publishes the event. We can tell, because in the logging for function EventHubTrigger-GreetingEvents we see the name that was sent as a parameter to the HttpTriggerJS1 function.

(Note: in this receiving function, I have added the line in red to see the contents of the event message.)

     

    image

     

    Resources

    Azure Function – Event Hub binding – https://docs.microsoft.com/en-us/azure/azure-functions/functions-bindings-event-hubs 

    Azure Documentation on Configuring App Settings – https://docs.microsoft.com/en-us/azure/azure-functions/functions-how-to-use-azure-function-app-settings#settings 

    Azure Event Hubs Overview – https://docs.microsoft.com/en-us/azure/event-hubs/event-hubs-what-is-event-hubs 

    Azure Functions Triggers and Binding Concepts – https://docs.microsoft.com/en-us/azure/azure-functions/functions-triggers-bindings

    The post Serverless computing with Azure Functions – interaction with Event Hub appeared first on AMIS Oracle and Java Blog.

    Serverless Computing – Function as a Service (FaaS) – with Azure Functions – first small steps with a Node/JavaScript function

    Wed, 2017-08-30 03:59

If your application has no internal state – and sometimes handles peak loads of requests while at other times it does no work at all – why should one or more instances of the application (plus container and/or server) be continuously and dedicatedly up and running for it? For peak loads, a single instance is nowhere near enough. For periods without any traffic, even a single instance is too much – and yet you pay for it.

Serverless computing – brought to prominence with AWS Lambda – is an answer to this. It is defined on Wikipedia as a "cloud execution model" in which "the cloud provider dynamically manages the allocation of machine resources". The subscriber to the cloud service provides the code to execute and specifies the events that should trigger execution. The cloud provider takes care of running that code whenever the event occurs. Pricing is based on the combination of the resources used (memory, possibly CPU) and the time it takes to execute the function. No compute node is permanently associated with the function, and any function [execution] instance can run on a different virtual server (so it is not really serverless in a strict sense – a server is used for running the function, but it can be a different server with each execution). Of course, function instances can still have and share state by using a cache or backend data store of some kind.

The Serverless Function model can be used for processing events (a very common use case) but also for handling HTTP requests and therefore for implementing REST APIs or even stateless web applications. Implementation languages for serverless functions differ a little across cloud providers. Common runtimes are Node, Python, Java and C#. Several cloud vendors provide a form of serverless computing – AWS with Lambda, Microsoft with Azure Functions, Google with Google Cloud Functions and IBM with Bluemix FaaS (Function as a Service). Oracle announced Oracle [Cloud] Functions at Oracle OpenWorld 2016 (Oracle Functions – Serverless architecture on the Oracle PaaS Cloud) and is expected to actually release the service (including support for orchestration of distributed serverless functions) around Oracle OpenWorld 2017 (October 2017) – see for example the list of sessions at OOW2017 on Serverless.

Note: monitoring the execution of functions, collecting runtime metrics and debugging issues can be a little challenging. Special care should be taken when writing the functions – for example, there is no log file written on the server on which the code executes.

    In this article, I briefly show an example of working with Serverless Computing using Azure Functions.

    Steps for implementing a Function:

    • arrange Azure cloud account
    • create Function App as context for Functions
    • create Function
    • trigger Function – cause the events that trigger the Function.
    • inspect the result from the function
    • monitor the function execution

    Taking an existing Azure Cloud Account, the first step is to create a Function App in your Azure subscription – as a context to create individual functions in (“You must have a function app to host the execution of your functions. A function app lets you group functions as a logic unit for easier management, deployment, and sharing of resources.”).

    image

    I will not discuss the details for this step – they are fairly trivial (see for example this instruction: https://docs.microsoft.com/en-us/azure/azure-functions/functions-create-first-azure-function#create-a-function-app)

    Quick Overview of Steps

    Navigate into the function app:

    image

    Click on plus icon to create a new Function:

    image

    Click on goto quickstart for the easiest way in

    image

    Select scenario WebHook + API; select JavaScript as the language. Note: the JavaScript runtime environment is Node 6.5 at the time of writing (August 2017).

    Click on Create this function.

    image

    The function is created – with a name I cannot influence

    image

    When the function was created, two files were created: index.js and function.json. We can inspect these files by clicking on the View Files tab:

    image 
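The generated index.js contains template code along these lines – a sketch of the standard WebHook + API template as I remember it, so details may vary:

    module.exports = function (context, req) {
        context.log('JavaScript HTTP trigger function processed a request.');
        if (req.query.name || (req.body && req.body.name)) {
            // respond with a greeting based on the name from query string or body
            context.res = { body: 'Hello ' + (req.query.name || req.body.name) };
        } else {
            context.res = { status: 400, body: 'Please pass a name on the query string or in the request body' };
        }
        context.done();
    };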

    The function.json file is a configuration file where we specify generic meta-data about the function.

The integration tab shows the triggering event(s) for this function – configured for HTTP requests.

    image

The manage tab allows us to define environment variables to pass into the function runtime execution environment:

    image

    The Monitor tab allows us to monitor executions of the Function and the logging they produce:

    image

    Return to the main tab with the function definition. Make a small change in the template code – to make it my own function; then click on Save & Run to store the modified definition and make a test call to the Function:

    SNAGHTMLd5c68d

    The result of the test call is shown on the right as well in the logging tab at the bottom of the page:

    image

    To invoke the function outside the Azure Cloud environment, click on Get Function URL.

    image

    Click on the icon to copy the URL to the clipboard.

    Open a browser, paste the URL and add the name query parameter:

    image
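The URL follows a pattern along these lines – the app name and key here are hypothetical:

    https://myfunctionapp.azurewebsites.net/api/HttpTriggerJS1?code=<function key>&name=Lucas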

    In Postman we can also make a test call:

    image

Both these calls were made from my laptop, without any special connection to the Azure cloud – you can make the same call from your environment. The function is publicly triggerable: when an HTTP request arrives for the function, Azure assigns it a runtime environment in which to execute the JavaScript code. Pretty cool.

    The logging shows the additional instances of the function:

    image

    From within the function, we can write output to the logging. All function execution instances write to the same pile of logging, from within their own execution environments:

    image

    Now Save & Run again – and see the log line written during the function execution:

    image

Functions lets you define a threshold trace level for writing to the console, making it easy to control how traces from your functions end up in the console. You can set this trace-level threshold in the host.json file, or turn console logging off entirely.
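In host.json that could look like this – a sketch, assuming the tracing settings of the Functions runtime at the time of writing:

    {
      "tracing": {
        "consoleLevel": "verbose"
      }
    }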

    The Monitor tab provides an overview of all executions of the function, including the not so happy ones (I made a few coding mistakes that I did not share). For each instance, the specific logging and execution details are available:

    SNAGHTMLdc20dc

     

    Debug Console and Package Management

    At the URL https://<function_app_name>.scm.azurewebsites.net we can access a management/development console where we can perform advanced operations regarding application deployment and configuration:

    image

    The CMD console looks like this:

    SNAGHTMLea740b

NPM packages and Node modules can be added to a JavaScript Function. See for details: https://docs.microsoft.com/en-us/azure/azure-functions/functions-reference-node#node-version-and-package-management 

A not-so-obvious feature of the CMD console is the ability to drag files from my local Windows operating system into the browser – such as the package.json shown in this figure:

    image

    Note: You should define a package.json file at the root of your function app. Defining the file lets all functions in the app share the same cached packages, which gives the best performance. If a version conflict arises, you can resolve it by adding a package.json file in the folder of a specific function.
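A minimal package.json at the app root could look like the sketch below – the package name and dependency are hypothetical. After uploading it, running npm install from the console in the wwwroot directory should fetch the packages:

    {
      "name": "my-function-app",
      "version": "1.0.0",
      "dependencies": {
        "lodash": "^4.17.4"
      }
    }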

    Conclusion

    Creating a JavaScript (Node) Function in Azure Functions is pretty straightforward. The steps are logical, the environment reacts intuitively and smoothly. Good fun working with this.

I am looking forward to Oracle's cloud service for serverless computing – to see if it provides a similarly good experience, and perhaps even more. More on that next month, I hope.

    Next steps for me: trigger Azure Functions from other events than HTTP Requests and leveraging NPM packages from my Function. Perhaps also trying out Visual Studio as the development and local testing environment for Azure Functions.

     

    Resources

    FAQ on AWS Lambda – https://aws.amazon.com/lambda/faqs/

    Wikipedia on Serverless Computing – https://en.wikipedia.org/wiki/Serverless_computing

    Oracle announced Oracle [Cloud] Functions at Oracle OpenWorld 2016  – Oracle Functions – Serverless architecture on the Oracle PaaS Cloud

    Sessions at Oracle OpenWorld 2017 on Serverless Computing (i.e. Oracle Functions) –  list of session at OOW2017 on Serverless

    Azure Functions – Create your first Function – https://docs.microsoft.com/en-us/azure/azure-functions/functions-create-first-azure-function 

    Azure Functions Documentation – https://docs.microsoft.com/en-us/azure/azure-functions/index 

    Azure Functions HTTP and webhook bindings – https://docs.microsoft.com/en-us/azure/azure-functions/functions-bindings-http-webhook

    Azure Functions JavaScript developer guide – https://docs.microsoft.com/en-us/azure/azure-functions/functions-reference-node

    How to update function app files – package.json, project.json, host.json – https://docs.microsoft.com/en-us/azure/azure-functions/functions-reference#fileupdate

    The post Serverless Computing – Function as a Service (FaaS) – with Azure Functions – first small steps with a Node/JavaScript function appeared first on AMIS Oracle and Java Blog.
