Feed aggregator

A Love Letter to Singapore Airlines

Doug Burns - Thu, 2015-12-10 14:25
I used to be a Star Alliance Gold card holder from my extensive travel with BMI and other *A carriers. Eventually my travel tailed off a little and I dropped to Silver *just* before BA took over BMI and my *A status was converted to One World. Which was ok, because a BA Silver is in many ways similar to other airlines' gold, with all the lounge access I could need. The chances of getting or retaining a BA Gold card were about the same as those of me becoming a teetotal vegan, so I settled into my new position in life ;-)

However, it was a little disappointing and strange that I switched over to One World just before I landed a job in Singapore. In my *A days, everyone knew that Singapore Airlines were *the* top *A carrier (honourable mention to Air New Zealand) and so they always cropped up in any forum conversation about how best to use miles. Now I was in the perfect place to earn and redeem miles (my new employer always uses SQ for travel), but I was kind of stuck with my BA Silver and a whole bunch of miles and partner vouchers and the rest. To give you an example, when my new employer was helping me book our business class flights to move out to Singapore, you could tell they were a little confused as to why we weren't choosing SQ. Tier points, of course! ;-)

Don't get me wrong, BA are great and I've had some good rewards over the past few years, but my choice of loyalty program suddenly felt out of step with my life, so I was considering a gradual cutover to KrisFlyer. But SQ never do status matches (as far as I know), so it was going to take a while. Making it worse was the fact that I've grown to like Cathay Pacific, so the temptation to stay put in One World was stronger.

Anyway, I've said enough to merely touch on my intense geekery about airline loyalty programs and, for that matter, airlines and air travel in general.

However, the experience of last week has convinced me that Singapore Airlines are unparalleled in their customer service. The fleet and on-board service are already great, even in Economy (really - Europeans should try a dose of Far Eastern mid-haul travel to see the difference), but Customer Service is such a difficult thing to get right and SQ absolutely knocked the ball out of the park!

I'm terrible with names and remembering them but, in any case, there was such a large team of people over the course of three and a half days, almost uniformly excellent, professional and warm, that I'm not sure I want to single anyone out. I will pick out a few small examples (in the order they happened), but I'm not sure that will communicate just how happy everyone I know was with the customer service.

- I was constantly struggling to get out of the terminal for a smoke and, on one occasion, I asked one of the senior ground crew how I could get out and he walked me out personally, dealt with security and stood there while I had a smoke, so he could then help me back into the terminal. He was a smoker too, so he understood, but he didn't have one himself. Absolutely not his job, but just caring about different passengers' needs.

- At every single turn (and the passengers discussed this several times amongst ourselves), the airline made the right decision, at just the right time and so it always felt like we were ahead of the game. They couldn't change the situation or unblock the blockages but once they realised there was a blockage, they simply found a way around it. They didn't pussy-foot about and there was only very rarely a sense of "what's happening here?". Even in those moments, it was really just about the airline trying to work out for themselves what was happening.

- There were very few changes in team members. Where we went, they went. When we were able to sleep, even if it was on the floor of the terminal, they weren't. When we were able to sit and relax in the hotel, they were still chasing around trying to make plans for us despite having no sleep themselves. Whatever challenges we experienced, they experienced worse because they couldn't have a seat, grab a nap, get a shower or whatever either and not once did I get any sense that it bothered them. They must have been *shattered* with tiredness and they never let it show or gave even a hint of this not being their favourite work experience!

- When the Regional Manager turns up to deliver a short speech to passengers who haven't seen a bed or a shower in over 50 hours, is basically telling them that there's no quick end in sight, and they *applaud* you spontaneously during your speech and at the end, you know you're doing this thing right. Embarrassing though it is to admit it, and I suspect my extreme tiredness was a factor, I was practically wiping away a tear! In retrospect, I realise that it was because they seemed to genuinely care about our predicament. It's difficult to define the difference between this and fake customer care, but it was clear as day if you were there. He then hung around until every single passenger had asked whatever weird and wonderful questions they had and answered them with calm authority and honesty.

- The free food was endless and of great quality, despite my personal preferences. Not your European "here's a voucher for a cup of coffee and a sandwich". Instead, here are three proper meals a day at the right time. I'm guessing this was very important to most people, particularly the large number of families among the passengers, and in the end (as you'll see in another blog post) they moved us at one point from one hotel to another, just so people could eat and wash.

- As soon as it became clear that MAA was shut down for days, they made a (heavily caveated) promise that they would try to organise some extra capacity out of Bangalore as the fastest way to get us home. They had to work with the air authorities on this, they were in the midst of every airline trying to do the same, were operating to tight timescales and were honest with us that it was starting to look unlikely and so spent hours trying to rebook people on to other flights to Mumbai and other routes. But they came through. They promised they would try something for us, they worked on it and worked on it until they made it happen and they got people home.

I can't emphasise enough how fantastic SQ were over my 85 hour (read that again - 85 hour) trip home. If it was just me saying this, then it would be personal taste, but a bunch of extremely tired passengers across a wide demographic all seemed to agree whenever we discussed it or I heard others discussing it. The interesting (but really unsurprising) thing is that I also found my fellow passengers' understanding and behaviour far above what I've ever experienced in a situation like this. Mmmmm ... maybe when you treat people well, they behave well?

So, Seah Chee Chian and your team ... You should be extremely proud of yourselves! But I mean the whole team, working selflessly over hours and days and showing genuine care for your customers, which is so rare. I'm not a fan of low cost airlines in general - each to their own - so the small difference in fares has never been a question for me and it's at times like this you remember you get what you pay for! However, I can put Singapore's efforts up against any full-fare airline I've ever flown with and I can't think of one that would have handled things as impressively. I just always knew I could count on SQ to take care of me.

You have a fan for life!


P.S. All of this and having the best airport on the planet (SIN) as your hub. What more could I ask for?

P.P.S. I was obviously delighted to get any seat on any plane going back to Singapore to be home again with Mads. So when I was asked whether I was happy to be downgraded to Economy it wasn't a long consideration, but I'll obviously be reclaiming the cost of that upgrade. I mean, the experience hasn't changed me *that* much! ;-)

P.P.P.S. ... and you would think that such a glowing tribute to such an amazing airline might, you know, increase my chances of an upgrade one day. (See? Ever the frequent flyer! LOL)

Playing with graphOra and Graphite

Marcelo Ochoa - Thu, 2015-12-10 11:57
Following Neto's blog post about graphOra (Docker Image) – Oracle Real Time Performance Statistics, I did my own test using a Docker image for 12c.
First, I started a 12c Docker DB using:
# docker run --privileged=true --volume=/var/lib/docker/db/ols:/u01/app/oracle/data --name ols --hostname ols --detach=true --publish=1521:1521 --publish=9099:9099 oracle-12102

Next, start the Graphite Docker image:
# docker run --name graphs-db -p 80 -p 2003 -p 2004 -p 7002 --rm -ti nickstenning/graphite

Next, install the graphOra repository:
# docker run -ti --link ols:oracle-db netofrombrazil/graphora --host oracle-db --port 1521 --sid ols --create
Enter sys password: -------
Creating user graphora
Grant access for user graphora to create sessions
Grant select privilege on V$SESSION_EVENT, V$SYSSTAT, V$STATNAME for user graphora
---
GraphOra is ready to collect your performance data!

Finally starting the graphOra Docker image:
# docker run -ti --link ols:oracle-db --rm --link graphs-db netofrombrazil/graphora --host oracle-db --port 1521 --sid ols --interval 10 --graphite graphs-db --graph-port 2003
phyReads: 0 phyWrites: 0 dbfsr: 43.30 lfpw: 43.30
phyReads: 0 phyWrites: 0 dbfsr: 0.00 lfpw: 0.00
phyReads: 0 phyWrites: 0 dbfsr: 0.00 lfpw: 0.00
And that's all. Happy monitoring!
Here is a screenshot from my monitored session:
Note on parameters used
First, compared with the original post, you must remove the graphOra parameter; I think this is due to changes in how the netofrombrazil/graphora Docker image is built.
Second, I used Docker's --link syntax to avoid putting IP addresses in the command line options. My Oracle DB is running in a container named ols and the Graphite server in a container named graphs-db, so by passing --link ols:oracle-db --link graphs-db, the graphOra container gets connectivity to both and its /etc/hosts file is updated with the IP addresses of the two related containers.
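
To see what --link actually injects, here is a quick check of my own (not from the original post), using a disposable busybox container with the same links:

# docker run --rm --link ols:oracle-db --link graphs-db busybox cat /etc/hosts

The output should include lines mapping the oracle-db alias and the graphs-db name to the current container IPs, which is exactly what graphOra relies on to reach both services.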

Readings in Database Systems

Curt Monash - Thu, 2015-12-10 06:26

Mike Stonebraker and Larry Ellison have numerous things in common. If nothing else:

  • They’re both titanic figures in the database industry.
  • They both gave me testimonials on the home page of my business website.
  • They both have been known to use the present tense when the future tense would be more accurate. :)

I mention the latter because there’s a new edition of Readings in Database Systems, aka the Red Book, available online, courtesy of Mike, Joe Hellerstein and Peter Bailis. Besides the recommended-reading academic papers themselves, there are 12 survey articles by the editors, and an occasional response where, for example, editors disagree. Whether or not one chooses to tackle the papers themselves — and I in fact have not dived into them — the commentary is of great interest.

But I would not take every word as the gospel truth, especially when academics describe what they see as commercial market realities. In particular, as per my quip in the first paragraph, the data warehouse market has not yet gone to the extremes that Mike suggests,* if indeed it ever will. And while Joe is close to correct when he says that the company Essbase was acquired by Oracle, what actually happened is that Arbor Software, which made Essbase, merged with Hyperion Software, and the latter was eventually indeed bought by the giant of Redwood Shores.**

*When it comes to data warehouse market assessment, Mike seems to often be ahead of the trend.

**Let me interrupt my tweaking of very smart people to confess that my own commentary on the Oracle/Hyperion deal was not, in retrospect, especially prescient.

Mike pretty much opened the discussion with a blistering attack against hierarchical data models such as JSON or XML. To a first approximation, his views might be summarized as: 

  • Logical hierarchical models can be OK in certain cases. In particular, JSON could be a somewhat useful datatype in an RDBMS.
  • Physical hierarchical models are horrible.
  • Rather, you should implement the logical hierarchical model over a columnar RDBMS.

My responses start:

  • Nested data structures are more important than Mike’s discussion seems to suggest.
  • Native XML and JSON stores are apt to have an index on every field. If you squint, that index looks a lot like a column store.
  • Even NoSQL stores should and I think in most cases will have some kind of SQL-like DML (Data Manipulation Language). In particular, there should be some ability to do joins, because total denormalization is not always a good choice.

In no particular order, here are some other thoughts about or inspired by the survey articles in Readings in Database Systems, 5th Edition.

  • I agree that OLTP (OnLine Transaction Processing) is transitioning to main memory.
  • I agree with the emphasis on “data in motion”.
  • While I needle him for overstating the speed of the transition, Mike is right that columnar architectures are winning for analytics. (Or you could say they’ve won, if you recognize that mop-up from the victory will still take 1 or 2 decades.)
  • The guys seem to really hate MapReduce, which is an old story for Mike, but a bit of a reversal for Joe.
  • MapReduce is many things, but it’s not a data model, and it’s also not something that Hadoop 1.0 was an alternative to. Saying each of those things was sloppy writing.
  • The guys characterize consistency/transaction isolation as a rather ghastly mess. That part was an eye-opener.
  • Mike is a big fan of arrays. I suspect he’s right in general, although I also suspect he’s overrating SciDB. I also think he’s somewhat overrating the market penetration of cube stores, aka MOLAP.
  • The point about Hadoop (in particular) and modern technologies in general showing the way to modularization of DBMS is an excellent one.
  • Joe and Mike disagreed about analytics; Joe’s approach rang truer for me. My own opinion is:
  • The challenge of whether anybody wants to do machine learning (or other advanced analytics) over a DBMS is sidestepped in part by the previously mentioned point about the modularization of a DBMS. Hadoop, for example, can be both an OK analytic DBMS (although not fully competitive with mature, dedicated products) and of course also an advanced analytics framework.
  • Similarly, except in the short-term I’m not worried about the limitations of Spark’s persistence mechanisms. Almost every commercial distribution of Spark I can think of is part of a package that also contains a more mature data store.
  • Versatile DBMS and analytic frameworks suffer strategic contention for memory, with different parts of the system wanting to use it in different ways. Raising that as a concern about the integration of analytic DBMS with advanced analytic frameworks is valid.
  • I used to overrate the importance of abstract datatypes, in large part due to Mike’s influence. I got over it. He should too. :) They’re useful, to the point of being a checklist item, but not a game-changer. A big part of the problem is what I mentioned in the previous point — different parts of a versatile DBMS would prefer to do different things with memory.
  • I used to overrate the importance of user-defined functions in an analytic RDBMS. Mike had nothing to do with my error. :) I got over it. He should too. They’re useful, to the point of being a checklist item, but not a game-changer. Looser coupling between analytics and data management seems more flexible.
  • Excellent points are made about the difficulties of “First we build the perfect schema” data warehouse projects and, similarly, MDM (Master Data Management).
  • There’s an interesting discussion that helps explain why optimizer progress is so slow (both for the industry in general and for each individual product).

Related links

  • I did a deep dive into MarkLogic’s indexing strategy in 2008, which informed my comment about XML/JSON stores above.
  • Again with MarkLogic as the focus, in 2010 I was skeptical about document stores not offering joins. MarkLogic has since capitulated.
  • I’m not current on SciDB, but I did write a bit about it in 2010.
  • I’m surprised that I can’t find a post to point to about modularization of DBMS. I’ll leave this here as a placeholder until I can.
  • Edit: As promised, I’ve now posted about the object-relational/abstract datatype boom of the 1990s.

Using Apache Drill REST API to Build ASCII Dashboard With Node

Tugdual Grall - Thu, 2015-12-10 04:56
Read this article on my new blog. Apache Drill has a hidden gem: an easy-to-use REST interface. This API can be used to query, profile and configure the Drill engine. In this blog post I will explain how to use the Drill REST API to create ASCII dashboards using Blessed Contrib (the resulting dashboard is shown in a screenshot in the original article).

Prerequisites: Node.js and Apache Drill 1.2. For this post, you will use the SFO …
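
As a taste of that REST interface, here is a minimal sketch of my own (not from the article), which submits a SQL query against Drill's bundled sample data to a local instance on the default web port 8047:

curl -s -X POST -H "Content-Type: application/json" -d '{"queryType": "SQL", "query": "SELECT full_name FROM cp.`employee.json` LIMIT 3"}' http://localhost:8047/query.json

The response comes back as JSON, which is what makes it straightforward to feed into a Node.js dashboard such as Blessed Contrib.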

My Indian Adventure - Part 1

Doug Burns - Wed, 2015-12-09 08:29
Last week I had a small adventure and wanted to record some of the events before I forget them and to take the opportunity to praise both the good people of Chennai and the stellar staff of Singapore Airlines. You'll find nothing about Oracle here and unless you're a friend or my family, probably not much to care about, but those are the people I'm writing this for.

I suppose it began the previous week when we received a travel advisory ahead of my short two-night business trip, warning us of fresh rains and flooding risk in Chennai. I asked my boss if it was really a good idea for us to travel, particularly as I had to be back on Wednesday morning for my partner's birthday trip to Phuket. But the decision was made and so I found myself in a somewhat wet Chennai on Sunday night.

However, other than some occasional rain and the residual effects of earlier flooding - Chennai has been living with this for a while now - the business stuff went well and I woke up at 4am (jetlag) on Tuesday, looking forward to travelling home that night.

Tuesday

Sitting in my final meeting before travelling to the airport, one of the attendees suggested that we break up the meeting as people were getting calls from home to tell them that their homes were being flooded! So we broke up, the office cleared out and we phoned for the car to come from our hotel 25 minutes away. Estimated time of arrival: 1-2 hours! Oh well. I'd be pushing it to make my flight, but would probably be fine.

We waited and, after the first hour, I stood outside with an umbrella, sheltering under a concrete archway until I'd venture out with the brolly at each possible hotel car sighting. It also gave me an opportunity to smoke outside but under the brolly. However, after an hour of this, I was absolutely drenched and my feet and trousers were soaking. Just me being an idiot as usual, but I would come to regret this more and more as time passed. Soaking clothes were not ideal for the trip to come and I'd packed extremely lightly!

The car turned up at 6:15 and so began our journey to the hotel and then probably time for a quick beer, dry out a bit and then on to the airport.

We eventually arrived at the hotel an hour and 45 minutes later and I was starting to panic, because Chennai Airport (MAA) is one where arriving 2-3 hours before departure is definitely a good idea. Don't get me started on Indian airport security! I was 3 hours 15 minutes away from departure so, after switching to another car to give our poor driver a break, we set off immediately. The next hour and 15 minutes were frankly chaotic and worrying as we passed roads that were now rivers, with cars almost under water and the wake from our own car more like that generated by a boat. Despite a very hairy ending to the drive, we made it to the airport 2 hours before departure and I breathed a huge sigh of relief because I knew I'd probably make it home now.

Except Singapore Airlines wouldn't check me in because the flight was going to be seriously delayed, the situation was changing all the time and they didn't want us stuck air-side. The incoming plane had been diverted to Bangalore (BLR) because the MAA runway was closed. If the runway could be reopened, then they would fly the plane in from BLR, board us and we could fly home, but it was clear there'd be a long delay in any case. I made the decision it was best to stick around as I really needed to get home, but what sealed it was that there were now no rooms at all in the hotel I'd checked out of. I could share my boss's room, but that was the best on offer, and all taxis had stopped operating from the airport anyway.

After an hour or two, the flight was cancelled and the runway closed until 6am. Singapore Airlines immediately informed us what was happening and organised hot, airline-style food and a blanket each. The food was the first of many South Indian meals I was to face over the course of the next few days and those who know me well know that means I was condemned to mild hunger! LOL. Fortunately I had a giant bag of Haribo Gold Bears I could dig into occasionally ;-)

Wednesday

Though the blanket was ok, sleeping on the marble floor of an airport terminal with your rucksack as a pillow and a thin blanket is never going to be an enjoyable experience and I think I managed about an hour. Others who had managed to commandeer seats and benches seemed to fare better. Here was my slot - always go Business Class, folks! ;-)




I wandered up and down the terminal aimlessly (and there really isn't much else to do in MAA), occasionally trying to get out of the terminal building through security so I could have a smoke. Did I mention how I feel about Indian Security guys? Really, just don't get me started on them!

I was hearing rumours from home that Singapore Airlines were flying a plane in and we would be able to get out, so I stuck with it but, ultimately, it became clear that the runway was closed and was going to stay closed for some time, at which point Singapore stepped in and took control of the situation. They cancelled the flight and organised a bus to the Hilton Chennai, where we wouldn't be able to have rooms (there were really none available, and they offered to pay the costs of anyone who could find one) but we could at least get some food and get away from MAA. It was yet another great decision as MAA was starting to descend into chaos. After a surprisingly easy and short bus drive, we found ourselves at the Hilton, but I wasn't sure how much of a benefit being able to stay in Ballroom 2 for hours was going to be.




Over time I came to realise it was a great move when I started hearing reports of what a car crash the MAA terminal had become. We also had wifi for a few hours, which meant I was able to contact Mads so she could start rebooking our trip to Phuket for the next day, in case I was going to get back to Singapore in time. Our original Wednesday departure was clearly a no-go by this stage.

It also helped that we could now get some decent coffee and biscuits and Singapore and the Hilton could start serving up some really pretty good hot buffet lunch. All South Indian food, of course! But then, what else should I expect really? LOL

But at least there were chairs, and power sockets, and some wifi and even occasionally a little 3G, though Chennai's communications infrastructure was slowly but surely disappearing into the surrounding water! I could go outside, try to find reception, smoke, and chat to the Singapore Airlines staff who were taking care of us, and two of those trips outside will stay with me for a while. (Note that although the flooding doesn't look too bad here, this was definitely one of the better streets and it got much worse later ...)




The first was when I was smoking with one of the SQ guys (hopefully not something that's disallowed, but I'm not handing his name over anyway! ;-)) and I asked him how he thought things were looking. He showed me a video he'd taken of the runway area and my heart sank. It was a lake. A big lake. With waves and stuff. He told me that realistically, nothing would be flying out of MAA any time soon and my heart sank further. At the same time, I settled into the idea that this was going to be a long trip and, maybe it's something about my military upbringing, but I knew that we'd just have to put up with whatever was coming and we'd get there in the end.

Besides, the next visit outside cheered me up no end. As I was passing the time, smoking and day-dreaming, a commotion broke out in the crowd in the street with people running and pushing and laughing and shouting and I genuinely thought there was a mini-riot breaking out.



We all rushed over to see what was going on and then I realised what it was, though I didn't get a photo of it! The crowd were grappling with a large fish! It must have been a good 2.5-3 feet long and fat. Absolutely not a tiddler! As they caught it, they all ran back up the street, laughing and celebrating with their prize.

Catching fish in the street with your hands. Now *that's* flooding!

More to follow ....

More OTN VTS OnDemand Highlighted Sessions

OTN TechBlog - Tue, 2015-12-08 13:09

Today we are featuring more sessions from each OTN Virtual Technology Summit Replay Group.  See session titles and abstracts below for content created by Oracle employees and community members.  Watch right away and then join the group to interact with other community members and stay up to date on when NEW content is coming! 

Master Data Management (MDM) Using Oracle Table Access for Hadoop - By Kuassi Mensah, Oracle Corporation
The new Hadoop 2 architecture leads to a bloom of compute engines. Some Hadoop applications, such as Master Data Management and Advanced Analytics, perform the majority of their processing from Hadoop but need access to data in the Oracle database, which is the reliable and auditable source of truth. This technical session introduces the upcoming Oracle Table Access for Hadoop (OTA4H), which exposes Oracle database tables as Hadoop data sources. It will describe OTA4H architecture, projected features, performance/scalability optimizations, and discuss use cases. A demo of various Hive SQL and Spark SQL queries against an Oracle table will be shown.

What's New for Oracle and .NET (Part 2)  - By Alex Keh, Senior Principal Product Manager, Oracle
With the release of ODAC 12c Release 4 and Oracle Database 12c, .NET developers have many more features to increase productivity and ease development. These sessions explore new features introduced in recent releases with code and tool demonstrations using Visual Studio 2015.

How To Increase Application Security & Reliability with Software in Silicon Technology - By Angelo Rajuderai, Worldwide Technology Lead Partner Adoption for SPARC, Oracle and Ikroop Dhillon, Principal Product Manager, Oracle

Learn about Software in Silicon Application Data Integrity (ADI) and how you can use this revolutionary technology to catch memory access errors in production code. Also explore key features for developers that make it easy and simple to create secure and reliable high performance applications.


Real-Time Service Monitoring and Exploration  - By Oracle ACE Associate Robert van Molken
There is a great deal of value in knowing which services are deployed and correctly running on an Oracle SOA Suite or Service Bus instance. This session explains and demonstrates how to retrieve this data using JMX and the available Managed Beans on Weblogic. You will learn how the data can be retrieved using existing Java APIs, and how to explore dependencies between Service Bus and SOA Suite. You'll also learn how the retrieved data can be used to create a simple dashboard or even detailed reports.


Shakespeare Plays Scrabble  - By José Paumard Assistant Professor at the University Paris 13
This session will show how lambdas and Streams can be used to solve a toy problem based on Scrabble. We are going to solve this problem with the Scrabble dictionary, the list of the words used by Shakespeare, and the Stream API. The three main steps shown will be the mapping, filtering and reduction. The mapping step converts a stream of a given type into a stream of another type. Then the filtering step is used to sort out the words not allowed by the Scrabble dictionary. Finally, the reduction can be as simple as computing a max over a given stream, but can also be used to compute more complex structures. We will use these tools to extract the three best words Shakespeare could have played. 
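
To give a feel for those three steps, here is a minimal Java 8 sketch of my own along the same lines (the word lists and the toy scoring rule are stand-ins; the session itself uses the full Scrabble dictionary and real letter values):

import java.util.*;

public class ScrabbleSketch {
    // Toy scoring rule: vowels are worth 1, everything else 3 (the talk uses real Scrabble values).
    static int score(String word) {
        return word.chars().map(c -> "aeiou".indexOf(c) >= 0 ? 1 : 3).sum();
    }

    public static void main(String[] args) {
        // Stand-ins for the Scrabble dictionary and Shakespeare's vocabulary.
        Set<String> scrabbleWords = new HashSet<>(Arrays.asList("henceforth", "thou", "madam"));
        List<String> shakespeare = Arrays.asList("Henceforth", "thou", "wherefore", "madam");

        shakespeare.stream()
                .map(String::toLowerCase)                         // mapping: transform each element
                .filter(scrabbleWords::contains)                  // filtering: keep only allowed words
                .sorted(Comparator.comparingInt(ScrabbleSketch::score).reversed())
                .limit(3)                                         // the three best words
                .forEach(w -> System.out.println(w + " scores " + score(w)));
        // A plain max-reduction would be:
        // shakespeare.stream().max(Comparator.comparingInt(ScrabbleSketch::score))
    }
}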



Oracle Cloud – Glassfish Administration (port 4848 woes)

John Scott - Tue, 2015-12-08 04:46

In the previous post I discussed accessing the DBaaS Monitor application, in this post I’ll show how to access the Glassfish Admin application.

On the home page for your DBaaS Instance, you’ll see a link for ‘Glassfish Administration’

cloud_home.png

However, if you click on that link you'll probably find the browser just hangs and nothing happens. It took me a while to notice, but unlike the DBaaS Monitor, which is accessed via HTTP/HTTPS, the Glassfish administration is done via port 4848 (you'll notice 4848 in the URL once your browser times out).

The issue here is that by default port 4848 isn’t open in your network rules for your DBaaS instance, so the browser cannot connect to it.

So you have a couple of options:

  1. Open up port 4848 to the world (or to just specific IP addresses)
  2. Use an SSH Tunnel

I tend to go with option 2, since I've found occasionally, while travelling and staying in a hotel, that if you go with option 1 you might be accessing from an IP address that isn't in your whitelist.

As I blogged previously, we can set up an SSH tunnel to port 4848 pretty easily from the terminal, with a command similar to:

ssh -L 4848:localhost:4848 -i oracle_cloud_rsa opc@<my.remote.ip.here>

So now we should be able to access Glassfish using the URL http://localhost:4848

Why localhost? Remember, when you set up an SSH tunnel you connect to your own local machine, which then tunnels the traffic to the remote host via SSH over the ports you specify.

Once we’ve done that you should be able to access the Glassfish Administation homepage.

glassfish.png

You should be able to login using the username ‘admin‘ and the same password you specified when you created your DBaaS instance.

glassfish2.png

The first thing I noticed was that a pretty old version of Glassfish is installed by default (version 3.1.2.2 in my case), even though Glassfish 4 was already out. So you may wish to check if you're missing any patches or need some Glassfish 4 features.

This is definitely one downside to going with the pre-bundled installation, you will (by definition) get an image which was created some time ago, so you need to check if there are any patches etc that have been released since the image was created.

I’m not going to go into detail on Glassfish itself, since it’s pretty much a standard (3.1) Glassfish and there are lots of blog posts and documents around that go into more detail. However if you go into the application section you’ll see that it comes pre-bundled with the APEX Listener / ORDS and also DBaaS Monitor which is how you can access them via the Glassfish server.

glassfish_apps.png

 


Optimizing Log levels in OUAF based applications

Anthony Shorten - Mon, 2015-12-07 22:22

Typically the default logging setup in Oracle Utilities Application Framework based applications favors non-production environments. This can cause excessive logging in specific situations across the various channels available.

The Oracle Utilities Application Framework uses log4j for log management. The log4j.properties file (the default name) controls each channel's logging information and level. The names and locations of the log4j.properties files are discussed in the Server Administration Guides shipped with the products.

The setting can be altered to suit the amount of logging. The following values are supported (in order of least to most logging):

  • off - The highest possible rank; intended to turn logging off. This is not recommended.
  • fatal - Designates very severe error events that will presumably lead the application to abort.
  • error - Designates error events that might still allow the application to continue running.
  • warn - Designates potentially harmful situations.
  • info - Designates informational messages that highlight the progress of the application at a coarse-grained level.
  • debug - Designates fine-grained informational events that are most useful when debugging an application.
  • all - The lowest possible rank; intended to turn on all logging. Only recommended in development environments for use by developers.

Each level includes the levels above it. For example, a setting of info would include messages of type fatal, error and warn as well as info.

The format of the settings typically looks like this:

log4j.logger.com.splwg=info

Each configuration file has multiple logging settings to cover the logging types of individual elements of the architecture. You can also tune individual components of the architecture within a channel, as the sketch below shows.
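
For example, a fragment along these lines (the subpackage name is illustrative, not taken from a shipped template) keeps the product default at info while quietening one chatty component down to errors:

log4j.logger.com.splwg=info
log4j.logger.com.splwg.some.component=error

In log4j, the more specific logger overrides the level it would inherit from its parent, so only that component's output is reduced.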

To implement this you will need to use custom templates, since the shipped templates are prebuilt. Refer to the Server Administration Guide supplied with your version of the product for instructions on how to build and use custom templates.

Warning: Changing log levels can hide messages that you might find helpful. Just be careful when setting custom levels.

Transitioning to the cloud(s)

Curt Monash - Mon, 2015-12-07 11:48

There’s a lot of talk these days about transitioning to the cloud, by IT customers and vendors alike. Of course, I have thoughts on the subject, some of which are below.

1. The economies of scale of not running your own data centers are real. That’s the kind of non-core activity almost all enterprises should outsource. Of course, those considerations taken alone argue equally for true cloud, co-location or SaaS (Software as a Service).

2. When the (Amazon) cloud was newer, I used to hear that certain kinds of workloads didn’t map well to the architecture Amazon had chosen. In particular, shared-nothing analytic query processing was necessarily inefficient. But I’m not hearing nearly as much about that any more.

3. Notwithstanding the foregoing, not everybody loves Amazon pricing.

4. Infrastructure vendors such as Oracle would like to also offer their infrastructure to you in the cloud. As per the above, that could work. However:

  • Is all your computing on Oracle’s infrastructure? Probably not.
  • Do you want to move the Oracle part and the non-Oracle part to different clouds? Ideally, no.
  • Do you like the idea of being even more locked in to Oracle than you are now? [Insert BDSM joke here.]
  • Will Oracle do so much better of a job hosting its own infrastructure that you use its cloud anyway? Well, that’s an interesting question.

Actually, if we replace “Oracle” by “Microsoft”, the whole idea sounds better. While Microsoft doesn’t have a proprietary server hardware story like Oracle’s, many folks are content in the Microsoft walled garden. IBM has fiercely loyal customers as well, and so may a couple of Japanese computer manufacturers.

5. Even when running stuff in the cloud is otherwise a bad idea, there’s still:

  • Test and dev(elopment) — usually phrased that way, although the opposite order makes more sense.
  • Short-term projects — the most obvious examples are in investigative analytics.
  • Disaster recovery.

So in many software categories, almost every vendor should have a cloud option of some kind.

6. Reasons for your data to wind up in a plurality of remote data centers include:

  • High availability, and similarly disaster recovery. Duh.
  • Second-source/avoidance of lock-in.
  • Geo-compliance.
  • Particular SaaS offerings being hosted in different places.
  • Use of both true cloud and co-location for different parts of your business.

7. “Mostly compatible” is by no means the same as “compatible”, and confusing the two leads to tears. Even so, “mostly compatible” has stood the IT industry in good stead multiple times. My favorite examples are:

  • SQL
  • UNIX (before LINUX).
  • IBM-compatible PCs (or, as Ben Rosen used to joke, Compaq-compatible).
  • Many cases in which vendors upgrade their own products.

I raise this point for two reasons:

  • I think Amazon/OpenStack could be another important example.
  • A vendor offering both cloud and on-premises versions of their offering, with minor incompatibilities between the two, isn’t automatically crazy.

8. SaaS vendors, in many cases, will need to deploy in many different clouds. Reasons include:

That said, there are of course significant differences between, for example:

  • Deploying to Amazon in multiple regions around the world.
  • Deploying to Amazon plus a variety of OpenStack-based cloud providers around the world, e.g. some “national champions” (perhaps subsidiaries of the main telecommunications firms).*
  • Deploying to Amazon, to other OpenStack-based cloud providers, and also to an OpenStack-based system that resides on customer premises (or in their co-location facility).

9. The previous point, and the last bullet of the one before that, are why I wrote in a post about enterprise app history:

There’s a huge difference between designing applications to run on one particular technology stack, vs. needing them to be portable across several. As a general rule, offering an application across several different brands of almost-compatible technology — e.g. market-leading RDBMS or (before the Linux era) proprietary UNIX boxes — commonly works out well. The application vendor just has to confine itself to relying on the intersection of the various brands’ feature sets.*

*The usual term for that is the spectacularly incorrect phrase “lowest common denominator”.

Offering the “same” apps over fundamentally different platform technologies is much harder, and I struggle to think of any cases of great success.

10. Decisions on where to process and store data are of course strongly influenced by where and how the data originates. In broadest terms:

  • Traditional business transaction data at large enterprises is typically managed by on-premises legacy systems. So legacy issues arise in full force.
  • Internet interaction data — e.g. web site clicks — typically originates in systems that are hosted remotely. (Few enterprises run their websites on premises.) It is tempting to manage and analyze that data where it originates. That said:
    • You often want to enhance that data with what you know from your business records …
    • … which is information that you may or may not be willing to send off-premises.
  • “Phone-home” IoT (Internet of Things) data, from devices at — for example — many customer locations, often makes sense to receive in the cloud. Once it’s there, why not process and analyze it there as well?
  • Machine-generated data that originates on your premises may never need to leave them. Even if their origins are as geographically distributed as customer devices are, there’s a good chance that you won’t need other cloud features (e.g. elastic scalability) as much as in customer-device use cases.


How to monitor Weblogic correct HEALTH STATE using EM12c Metric Extension

Arun Bavera - Fri, 2015-12-04 17:00
The requirement is to know the failed status of WebLogic servers.
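The metric extension itself is defined through the EM12c screens shown in the original post. Independently of EM12c, a quick way to eyeball the same server state is a WLST sketch like this one (entirely my own illustration; the credentials, admin URL and reliance on the OverallHealthState runtime attribute are assumptions, not taken from the post):

# WLST (Jython): walk the domain runtime tree and print each server's aggregated health
connect('weblogic', '<password>', 't3://adminhost:7001')
domainRuntime()
for s in cmo.getServerRuntimes():
    print s.getName(), s.getOverallHealthState()
disconnect()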


For Weblogic 11g:



Refer:
EM12c: How to Monitor WebLogic Server Health Status in Enterprise Manager 12c Cloud Control (Doc ID 1984804.1)

http://www.oracle.com/webfolder/technetwork/tutorials/obe/em/em12c/metric_extensions/Metric_Extensions.html

https://docs.oracle.com/cd/E24628_01/doc.121/e24473/metric_extension.htm#EMADM10032

Categories: Development

Oracle Proudly Releases PeopleTools 8.55

PeopleSoft Technology Blog - Fri, 2015-12-04 16:29

PeopleTools 8.55 has now been released for general availability.  With this release, Oracle continues to demonstrate our commitment to extend the value of our customers’ investment in PeopleSoft.  The capabilities included in this release have been designed to enhance many of the features we’ve delivered previously as well as to provide our application users substantial new functionality.  Features in this release focus on continued improvements to the PeopleSoft User Experience while providing additional technology options that will help you take advantage of data center innovations proven to reduce ongoing operating costs and assist in developing a Cloud deployment strategy. Our beta customers have been using PeopleTools 8.55 for a few months and definitely liked what they saw.

The PeopleTools Beta Program had two participants, Centene Corporation and Cerner Corporation. During the Beta Program, participants explored the improvements to the PeopleSoft Fluid User Experience and the PeopleSoft Cloud Architecture (PCA) and found “smooth sailing” with few bugs reported and significant productivity improvements to environment provisioning with PCA.

PeopleTools 8.55 dramatically extends our investment in the Fluid User Experience. New in 8.55 are features such as Fluid Dashboards, Fluid Master/Detail pages and the new Activity Guide Framework.  Our intention with this latest release is to extend the capabilities of PeopleSoft Fluid to include functionality required by Power Users and to make the overall application experience as intuitive as possible. 

For example, PeopleTools 8.55 introduces a new Tile Wizard that makes the creation of these important navigation elements and process "building blocks" much simpler. Tiles automatically resize based on the content they display, the size of the device being used, and even the orientation of the mobile device.

With this release we deliver Fluid Dashboards that augment our existing Fluid Home Page structures.  Dashboards can display tiled content just like home pages, but also allow Tiles to display external, non-PeopleSoft content such as information from a BI system or an external website or application system.

One of the long-standing strengths of PeopleSoft’s user experience has been the ability for our customers to alter the look and behavior of the screens to reflect their own company’s “brand”.  With PeopleTools 8.55, we extend the Branding Framework to include the ability to easily apply your company’s brand to Fluid pages and components.

There are a great many new PeopleSoft Fluid features in PeopleTools 8.55, all of which extend our commitment to ensuring you have the most powerful, complete business applications and that these applications reflect the intuitive usability that your employees expect from today's web systems.

In a similar vein, we have extended many other elements of the PeopleSoft system infrastructure including Selective Adoption and Application Lifecycle Management features, the PeopleSoft Analytics capabilities and the ability for customers to deploy and manage PeopleSoft in a cloud datacenter. Most of the cloud infrastructure technology applies to on premise customers as well, and even to customers who don't use virtualization. A major benefit of our approach is that we can bring cloud-like benefits to customers who are not running on any form of cloud infrastructure.

With PeopleTools 8.55, we have continued to address the needs of our customers to achieve improved operational efficiency in the deployment, configuration and administration of their PeopleSoft applications. As our customers see innovations across the technology landscape that include cloud service offerings, datacenter efficiencies through system virtualization and improved resource automation, they have asked us to identify opportunities for them to take advantage of these innovations to help them better manage their datacenter operating costs. Many of our customers are looking to cloud offerings as a strategic opportunity to achieve improved leverage and efficiency.

PeopleTools 8.55 introduces the new PeopleSoft Cloud Architecture (PCA), a comprehensive organization of system-wide structures, features and capabilities that will allow our customers to achieve greater operational efficiencies.  Whether a company has a DevOps strategy to improve the collaboration between their internal PeopleSoft development and Quality Assurance teams and their IT Operations group, or a comprehensive initiative to leverage Cloud solutions for datacenter operations, PeopleSoft’s Cloud Deployment Architecture will assist our customers to reach these strategic goals with their own PeopleSoft application investment.

The PCA and incorporated features such as Deployment Packages (DPKs) work with the Application Configuration Manager (ACM) and PeopleSoft’s virtualization capabilities to provide customers a near fully automated process to install and configure PeopleTools.  Our goal is to help our customers leverage server and datacenter innovations such as market-leading resource virtualization solutions with choice of virtualization platform vendor as well as dynamic deployment of our solutions to public and private cloud platforms. PeopleTools patches deployed using DPKs can be found on MOS on the new PeopleSoft PeopleTools Patches Homepage.

PeopleTools 8.55 offers significant enhancements across the entire product footprint.  We introduced new features that improve the productivity of your developers as well as your end users.  It will be easier to deploy PeopleSoft applications on the cloud, develop custom mobile applications that incorporate PeopleSoft data and provide your users personalized access to PeopleSoft information and analytic content.  This release builds functionality into the product as a result of direct customer input, industry analysis and internal feature design.  New features, bug fixes and product certifications combine to offer PeopleSoft customers improved application user experience and operational efficiency.

As you get started with your PeopleTools 8.55 planning, be sure to review PT 8.55 Certifications on MOS so that you don’t encounter any last minute incompatibilities. Setting up the infrastructure can take time – plan for it. If you are looking for more information on PeopleTools 8.55, be sure to review the Technology tab for PeopleTools 8.55 Features and Enhancements or go to peoplesoftinfo.com for more information on anything related to PeopleSoft.

Enjoy!

OWB-TO-ODI MIGRATION PATCH FOR OWB 11.2.0.4 TO ODI 12.2.1 released

Antonio Romero - Fri, 2015-12-04 12:52
The OWB to ODI Migration Utility now supports migration from OWB version 11.2.0.4 to ODI 12.2.1. It is available as "Patch 21977765 : OWB-TO-ODI MIGRATION PATCH (MP3) FOR OWB 11.2.0.4 TO ODI 12.2.1" and can be downloaded from the support website.

This patch (21977765) only supports migration from Linux 64-bit and Windows 64-bit standalone OWB 11.2.0.4 to ODI 12.2.1. For migrating to ODI 12.1.2 or ODI 12.1.3, please use patch 18537208.

More information about the migration utility is here:
http://docs.oracle.com/middleware/1221/odi/install-migrate-owb-odi/toc.htm

Gluent launch! New production release, new HQ, new website!

Tanel Poder - Fri, 2015-12-04 12:23

I’m happy to announce that the last couple of years of hard work is paying off and the Gluent Offload Engine is production now! After beta testing with our early customers, we are now out of complete stealth mode and are ready talk more about what exactly are we doing :-)

Check out our new website and product & use case info here!

Follow us on Twitter:

We are hiring! Need to fill that new Dallas World HQ ;-) Our distributed teams around the US and in London need more helping hands (and brains!) too.

You’ll be hearing more of us soon :-)

Paul & Tanel just moved in to Gluent World HQ

NB! If you want to move to the "New World" - offload your data and workloads to Hadoop, without having to re-write your existing applications - check out Gluent. We are making history! ;-)

PeopleSoft Announces Certification of Oracle Database 12c with Secure Enterprise Search

PeopleSoft Technology Blog - Fri, 2015-12-04 12:13

PeopleSoft is pleased to announce that Oracle Database 12c is now certified for use with Secure Enterprise Search and the PeopleSoft Search Framework. Details on how to use 12c with PeopleSoft Search are available in the PeopleTools Certification Notes (Document 2081577.1) and the knowledge document Upgrading SES Database to 12c (Document 2084851.1).

In addition to the certification steps, we have also provided an automated mechanism for customers to upgrade their SES database to 12c, addressing an important business need for customers performing this upgrade.


Major news: PHP 7.0.0 has been released

Christopher Jones - Thu, 2015-12-03 22:58

Congratulations to the PHP community - the whole community - on the release of PHP 7.0.0. Thanks also to the Oracle staff who have worked on the internal rewrite necessary to make the OCI8 and PDO_OCI extensions work with PHP 7's completely revamped Extension API.

The Oracle Database OCI8 and PDO_OCI extensions are included in the PHP source distribution. The feature sets are unchanged.

The equivalent standalone OCI8 package compatible with PHP 7 will be released as version 2.1 on PECL soon. PDO_OCI will remain solely part of the core PHP source distribution.

For those interested in performance, Zend have put some benchmark figures here showing the significant improvements, which were a key feature of this release.

Other features are listed in the release announcement:

  • Significantly reduced memory usage
  • Abstract Syntax Tree
  • Consistent 64-bit support
  • Improved Exception hierarchy
  • Many fatal errors converted to Exceptions
  • Secure random number generator
  • Removed old and unsupported SAPIs and extensions
  • The null coalescing operator (??)
  • Return and Scalar Type Declarations
  • Anonymous Classes
  • Zero cost asserts
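
To make a few of these concrete, here is a small PHP 7 sampler of my own (not from the release announcement) exercising scalar and return type declarations, the null coalescing operator and an anonymous class:

<?php
// Scalar parameter types and a return type declaration (new in PHP 7)
function add(int $a, int $b): int {
    return $a + $b;
}

// Null coalescing: use the GET parameter if it is set and non-null, else the default
$user = $_GET['user'] ?? 'nobody';

// Anonymous class: a one-off logger without a named class definition
$logger = new class {
    public function log(string $msg) { echo $msg, PHP_EOL; }
};

$logger->log("add(2, 3) = " . add(2, 3) . " for user " . $user);

Running this under PHP 5.x fails at the return type declaration, which makes it a handy smoke test that you really are on 7.0.0.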

See the migration documentation for all the fine details.

Oracle Cloud – Database Monitor

John Scott - Thu, 2015-12-03 03:12

One of the nice features of Oracle Cloud is that it incorporates a couple of extra tools you can use to monitor and maintain your Oracle DBaaS instance easily.

If you have opened up the firewall for HTTP/HTTPS, you can access Database Monitor at the URL:

https://<your.public.ip.address>/dbaas_monitor/

(Or you could use an SSH tunnel if you didn't want to open it up; see the sketch below.)
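
Here is a sketch of that tunnel, reusing the key file name from my earlier posts; forwarding a local port to the instance's HTTPS port 443 is my assumption, based on the HTTPS URL above:

ssh -L 8443:localhost:443 -i oracle_cloud_rsa opc@<your.public.ip.address>

You would then browse to https://localhost:8443/dbaas_monitor/ instead of the public IP.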

Or you can navigate to it from the home page (https://<your.public.ip.address>) by clicking the Database Monitor link.

cloud_home.png

You will be prompted for a username and password to login

database_monitor.png

Now here’s where I wished I’d read the documentation before trying to “just guess”. I assumed that the username would be ‘system’ or ‘sysdba’ or some other DBA level account (perhaps the username / email address I used to sign up to the Cloud service).

But no…it turns out the default username is dbaas_monitor

The password is the same password you specified when you created the DBaaS instance.

 

Once you’ve entered those and (hopefully) logged in, you should see the DBaaS Monitor homepage

dbaas_home.png

As you can see we get a nice overview of the ‘health’ of our DBaaS Instance, including a summary of waits, CPU utilization and alert log entries.

We can drill into some CPU metrics

cpu.png

Get a nice (simplified) overview of storage

storage.png

and perform some (very simplified) management tasks like starting and stopping the database.

manage.png

So is this a replacement for Enterprise Manager? Absolutely not: it has very limited functionality. However, it is also pretty lightweight, so it's potentially a faster way of checking the health of your DBaaS instance before you drill into EM etc.

I do hope Oracle extends and adds functionality to DBaaS Monitor in the future since it has a lot of potential.


APEX Feature Request

Denes Kubicek - Thu, 2015-12-03 01:41
Just created a new feature request for APEX at https://apex.oracle.com/pls/apex/f?p=55447:19:::NO:19:P19_ID:50481528500531591330407043519019274105 … Extend Interactive Report API - Get IR Query. The feature request is about the following:

"This API should deliver a couple of different SQL statements for an interactive report. There are several possible requirements I can think of:

1. IR query including visible columns and filter values - actual SQL for the user session,
2. IR query including all columns and the original SQL,
3. get column names of an IR with or without column alias,...

Having this SQL we should be able to run it as EXECUTE IMMEDIATE, without having to replace any binds.

This feature could be included in the actions menu and available as a plugin for dynamic actions - new dynamic action feature (action)."

Please, feel free to go there and vote for it.

Categories: Development

The Oracle Flow Builder difference

Anthony Shorten - Wed, 2015-12-02 20:15

One of the key features of the Oracle Functional/Load Testing Advanced Pack for Oracle Utilities is its support for Oracle Flow Builder. For those not familiar with this product, it is a component and flow management tool for quickly building and maintaining testing assets with reduced skill requirements.

Oracle Flow Builder is not a new product for Oracle. Previously it was exclusively part of the successful Oracle eBusiness Suite product, where it was developed to automate the testing of that product, reducing time to market and the risk in implementations and upgrades. It was originally built for internal quality assurance but was then released, successfully, to Oracle eBusiness Suite customers. Customer and QA teams reported up to 70% savings in testing time.

Keen to realize these savings across other products, Oracle moved Oracle Flow Builder into the functional testing part of the Oracle Application Testing Suite. This is where our pack came into existence. We had the components available but originally no way to let customers quickly adopt the components into flows. It is possible in OpenScript to code a set of component calls, though this requires higher-level skills, and it quickly became apparent that Oracle Flow Builder was the solution.

The two development teams worked closely together to allow the pack to be the first product set outside of Oracle eBusiness Suite to support Oracle Flow Builder. This relationship offers great advantages for the solution:

  • Oracle Flow Builder allows non-development resources to build and maintain components and testing flows.
  • Oracle Flow Builder includes a component management toolset to manage the availability and use of components for testing.
  • Oracle Flow Builder includes a flow management toolset to allow testers to orchestrate components into testing flows representing different scenarios of business flows. This makes modelling business flows much easier.
  • Oracle Flow Builder is a team-based solution running on a server rather than individual desktops. Typically testing tools, even OpenScript, run on individual desktops, which makes team development much harder.
  • Oracle Flow Builder is tightly integrated with Oracle's other testing products in the Oracle Application Testing Suite family to implement testing planning, testing automation and load testing.

Oracle Flow Builder is a key part of our testing infrastructure and also a key part of the testing solution for Oracle Utilities products.

For training on Oracle Flow Builder you can use the Oracle Learning Library training, either on YouTube or on the Oracle Learning Library site.

Oracle Functional/Load Testing Advanced Pack for Oracle Utilities 5.0.0.1.0 Released

Anthony Shorten - Wed, 2015-12-02 17:28

A new version of the Oracle Functional/Load Testing Advanced Pack for Oracle Utilities is now available for download from Oracle Software Delivery Cloud.

Look for Oracle Functional Testing Advanced Pack for Oracle Utilities, Version 5.0.0.1.0, as this download includes support for both functional and load testing.

This new version of the pack supports a bigger range of Oracle Utilities products and versions. It also includes a component generator and a component verifier, allowing implementations to quickly build custom components from the metadata.

This new version of the pack supports the following releases:

  • Oracle Utilities Customer Care And Billing 2.4.0.3 (new)
  • Oracle Utilities Customer Care And Billing 2.5.0.1
  • Oracle Utilities Mobile Workforce Management 2.2.0.3 (updated)
  • Oracle Real Time Scheduler 2.2.0.3 (updated)
  • Oracle Utilities Application Framework 4.2.0.3 (new)
  • Oracle Utilities Application Framework 4.3.0.1
  • Oracle Utilities Meter Data Management 2.1.0.3 (new)
  • Oracle Utilities Smart Grid Gateway (all adapters) 2.1.0.3 (new)
  • Oracle Utilities Work And Asset Management 2.1.1 (updated)
  • Oracle Utilities Operational Device Management 2.1.1 (new)

The pack is delivered as content for the Oracle Application Testing Suite, covering both Functional Testing and Load Testing.

IBM Containers running Spring Boot Applications with IBM Bluemix

Pas Apicella - Wed, 2015-12-02 16:56
There is now a new command line plugin for IBM Containers on Bluemix, so you can push and run Docker images using the CF CLI itself. The steps below show how to set this up, using a basic Spring Boot application as the Docker image to test it out.

Steps

Take note of the local Docker host IP; I test the Docker image on my laptop prior to pushing it to Bluemix. In this example it was as follows.

-> docker is configured to use the default machine with IP 192.168.99.100
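
On a Mac, that banner comes from the Docker Toolbox Quickstart Terminal. If you need to recover the IP later, docker-machine can report it; a minimal sketch, assuming the default machine name used by Docker Toolbox:

# Print the IP of the local Docker Toolbox VM (machine name assumed to be "default")
docker-machine ip default
# prints 192.168.99.100 in this example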

1. Install the latest CF command line. I used the following version:

pasapicella@pas-macbook-pro:~/bluemix_apps/CONTAINERS/ibm-containers/gs-rest-service/complete$ cf --version
cf version 6.14.0+2654a47-2015-11-18


https://github.com/cloudfoundry/cli

2. Install the IBM Containers Cloud Foundry plug-in

pasapicella@pas-macbook-pro:~$ cf install-plugin https://static-ice.ng.bluemix.net/ibm-containers-mac

**Attention: Plugins are binaries written by potentially untrusted authors. Install and use plugins at your own risk.**

Do you want to install the plugin https://static-ice.ng.bluemix.net/ibm-containers-mac? (y or n)> y

Attempting to download binary file from internet address...
9314192 bytes downloaded...
Installing plugin /var/folders/rj/5r89y5nd6pd4c9hwkbvdp_1w0000gn/T/ibm-containers-mac...
OK
Plugin IBM-Containers v0.8.788 successfully installed.


Note: The default plugin directory is as follows:

$HOME/.cf/plugins
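
To confirm the plugin registered correctly, the stock CF CLI plugin listing can be used:

# List installed CF CLI plugins; "IBM-Containers" should appear with its "ic" commands
cf plugins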


3. Login to IBM Containers

pasapicella@pas-macbook-pro:~/bluemix_apps/CONTAINERS$ cf ic login
Client certificates are being retrieved from IBM Containers...
Client certificates are being stored in /Users/pasapicella/.ice/certs/...
Client certificates are being stored in /Users/pasapicella/.ice/certs/containers-api.ng.bluemix.net/0bcbcada-bd11-4372-b416-955dff3078a1...
OK
Client certificates were retrieved.

Deleting old configuration file...
Checking local Docker configuration...
OK

Authenticating with registry at host name registry.ng.bluemix.net
OK
Your container was authenticated with the IBM Containers registry.
Your private Bluemix repository is URL: registry.ng.bluemix.net/apples

You can choose from two ways to use the Docker CLI with IBM Containers:

Option 1: This option allows you to use "cf ic" for managing containers on IBM Containers while still using the Docker CLI directly to manage your local Docker host.
    Use this Cloud Foundry IBM Containers plug-in without affecting the local Docker environment:

    Example Usage:
    cf ic ps
    cf ic images

Option 2: Use the Docker CLI directly. In this shell, override the local Docker environment to connect to IBM Containers by setting these variables. Copy and paste the following commands:
    Note: Only Docker commands followed by (Docker) are supported with this option.

     export DOCKER_HOST=tcp://containers-api.ng.bluemix.net:8443
     export DOCKER_CERT_PATH=/Users/pasapicella/.ice/certs/containers-api.ng.bluemix.net/0bcbcada-bd11-4372-b416-955dff3078a1
     export DOCKER_TLS_VERIFY=1

    Example Usage:
    docker ps
    docker images

4. View docker images

pasapicella@pas-macbook-pro:~/bluemix_apps/CONTAINERS$ cf ic images
REPOSITORY                                        TAG                 IMAGE ID            CREATED             VIRTUAL SIZE
registry.ng.bluemix.net/ibm-mobilefirst-starter   latest              5996bb6e51a1        6 weeks ago         770.4 MB
registry.ng.bluemix.net/ibm-node-strong-pm        latest              ef21e9d1656c        8 weeks ago         528.7 MB
registry.ng.bluemix.net/ibmliberty                latest              2209a9732f35        8 weeks ago         492.8 MB
registry.ng.bluemix.net/ibmnode                   latest              8f962f6afc9a        8 weeks ago         429 MB
registry.ng.bluemix.net/apples/etherpad_bluemix   latest              131fd7a39dff        11 weeks ago        570 MB


5. Clone the application to run as a Docker image

$ git clone https://github.com/spring-guides/gs-rest-service.git
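
The remaining steps all run from the "complete" directory of the cloned guide, as the shell prompts below show:

# Move into the directory containing the pom.xml and (shortly) the Dockerfile
cd gs-rest-service/complete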

6. Create a file called Dockerfile as follows in the "complete" directory

pasapicella@pas-macbook-pro:~/bluemix_apps/CONTAINERS/ibm-containers/gs-rest-service/complete$ cat Dockerfile
# Build on the official Java 8 image
FROM java:8
# Spring Boot writes its Tomcat working directories to /tmp
VOLUME /tmp
# Copy the fat jar produced by "mvn package" into the image
ADD target/gs-rest-service-0.1.0.jar app.jar
# "touch" the jar so it has a file modification time (useful for cacheable static content)
RUN bash -c 'touch /app.jar'
# Use /dev/urandom as the entropy source to speed up startup
ENTRYPOINT ["java","-Djava.security.egd=file:/dev/./urandom","-jar","/app.jar"]


7. Package the demo

$ mvn package
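
The build should produce the fat jar that the Dockerfile ADDs in the next step; a quick sanity check:

# Confirm the Spring Boot fat jar exists before building the image
ls target/gs-rest-service-0.1.0.jar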

8. Build docker image

pasapicella@pas-macbook-pro:~/bluemix_apps/CONTAINERS/ibm-containers/gs-rest-service/complete$ docker build -t gs-rest-service .
Sending build context to Docker daemon 13.44 MB
Step 1 : FROM java:8
8: Pulling from library/java
1565e86129b8: Pull complete
a604b236bcde: Pull complete
5822f840e16b: Pull complete
276ac25b516c: Pull complete
5d32526c1c0e: Pull complete
0d61f7a71c59: Pull complete
16952eac0a64: Pull complete
2fb3388c8597: Pull complete
ca603b247c8e: Pull complete
1785f2bc7c99: Pull complete
40e61a6ae215: Pull complete
32f541968fe6: Pull complete
Digest: sha256:52a1b487ed34f5a76f88a336a740cdd3e7b4486e264a3e69ece7b96e76d9f1dd
Status: Downloaded newer image for java:8
 ---> 32f541968fe6
Step 2 : VOLUME /tmp
 ---> Running in 030f739777ac
 ---> 22bf0f9356a1
Removing intermediate container 030f739777ac
Step 3 : ADD target/gs-rest-service-0.1.0.jar app.jar
 ---> ac590c46b73b
Removing intermediate container 9790c39eb1f7
Step 4 : RUN bash -c 'touch /app.jar'
 ---> Running in e9350ddebb75
 ---> 697d245c6afb
Removing intermediate container e9350ddebb75
Step 5 : ENTRYPOINT java -Djava.security.egd=file:/dev/./urandom -jar /app.jar
 ---> Running in 42fc22473930
 ---> df853abfea57
Removing intermediate container 42fc22473930
Successfully built df853abfea57


9. Run locally

pasapicella@pas-macbook-pro:~/bluemix_apps/CONTAINERS/ibm-containers/gs-rest-service/complete$ docker run --name gs-rest-service -p 80:8080 -d -t gs-rest-service
a392aa15da81fb4ca6c16a6307e0bd1c6b22f9a046228f1fc477d3fe12e15f16


10. Test as follows

pasapicella@pas-macbook-pro:~/bluemix_apps/CONTAINERS/ibm-containers$ curl http://192.168.99.100/greeting
{"id":1,"content":"Hello, World!"}


11. Push to Bluemix as follows

pasapicella@pas-macbook-pro:~/bluemix_apps/CONTAINERS/ibm-containers/gs-rest-service/complete$ docker tag gs-rest-service registry.ng.bluemix.net/apples/gs-rest-service
pasapicella@pas-macbook-pro:~/bluemix_apps/CONTAINERS/ibm-containers/gs-rest-service/complete$ docker push registry.ng.bluemix.net/apples/gs-rest-service
The push refers to a repository [registry.ng.bluemix.net/apples/gs-rest-service] (len: 1)
Sending image list
Pushing repository registry.ng.bluemix.net/apples/gs-rest-service (1 tags)
Image 5822f840e16b already pushed, skipping
Image 276ac25b516c already pushed, skipping
Image 5d32526c1c0e already pushed, skipping
Image a604b236bcde already pushed, skipping
Image 1565e86129b8 already pushed, skipping
Image 0d61f7a71c59 already pushed, skipping
Image 2fb3388c8597 already pushed, skipping
Image 16952eac0a64 already pushed, skipping
Image ca603b247c8e already pushed, skipping
Image 1785f2bc7c99 already pushed, skipping
Image 40e61a6ae215 already pushed, skipping
Image 32f541968fe6 already pushed, skipping
22bf0f9356a1: Image successfully pushed
ac590c46b73b: Image successfully pushed
697d245c6afb: Image successfully pushed
df853abfea57: Image successfully pushed
Pushing tag for rev [df853abfea57] on {https://registry.ng.bluemix.net/v1/repositories/apples/gs-rest-service/tags/latest}


12. List all allocated IP addresses

pasapicella@pas-macbook-pro:~/bluemix_apps/CONTAINERS/ibm-containers/gs-rest-service/complete$ cf ic ip list
Number of allocated public IP addresses:  2

IpAddress        ContainerId
134.168.13.83
134.168.15.105


13. Create a container from the uploaded image

pasapicella@pas-macbook-pro:~/bluemix_apps/CONTAINERS/ibm-containers/gs-rest-service/complete$ cf ic run -p 8080 --memory 512 --name pas-sb-container registry.ng.bluemix.net/apples/gs-rest-service:latest
b1fe3159-0c19-4d54-b0f5-cdd938618deb
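
Before binding an IP it is worth confirming the container actually came up. The plugin mirrors the Docker CLI, so (assuming this plugin version supports "cf ic logs") something like the following should show the Spring Boot startup banner:

# Check container state, then tail its startup log
cf ic ps
cf ic logs pas-sb-container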


14. Assign IP to container

pasapicella@pas-macbook-pro:~/bluemix_apps/CONTAINERS/ibm-containers/gs-rest-service/complete$ cf ic ip bind 134.168.13.83 pas-sb-container
OK
The IP address was bound successfully.


15. Verify it's running

pasapicella@pas-macbook-pro:~/bluemix_apps/CONTAINERS/ibm-containers/gs-rest-service/complete$ cf ic ps -a
CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS                  PORTS                          NAMES
3794802b-b0c                  ""                  4 minutes ago       Running 3 minutes ago   134.168.13.83:8080->8080/tcp   pas-sb-container

16. Invoke as follows

$ curl http://134.168.13.83:8080/greeting
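
If everything is wired up correctly, this returns the same JSON as the local test: {"id":1,"content":"Hello, World!"}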


More Information

Plugin Reference ->

https://www.eu-gb.bluemix.net/docs/containers/container_cli_reference_cfic.html

Installing the cf ic plugin ->

https://www.eu-gb.bluemix.net/docs/containers/doc/container_cli_cfic.html

Categories: Fusion Middleware
