Feed aggregator

My Nomination for the Oracle Database Developer Choice Awards

Dietmar Aust - Thu, 2015-09-17 00:30
Actually this came as a wonderful surprise ... I have been nominated for the Oracle Database Developer Choice Awards:
I have basically devoted my entire work life to building solutions based on Oracle technology ... and you can build some pretty cool stuff with it. I have always enjoyed building software that makes a difference ... and even more so sharing what I have learned to support and inspire others to do the same. The people in the Oracle community are simply amazing and I have made a lot of friends there. If you have an account for the Oracle Technology Network (OTN), I would appreciate your vote! And if you don't feel like voting for me ... vote anyway in all the different categories ... because the Oracle community deserves the attention. Thanks, ~Dietmar.

US Consumer Law Attorney Rates

Nilesh Jethwa - Wed, 2015-09-16 22:27

The hourly rate in any consulting business or practice increases with the years of experience in the field.

Read more at: http://www.infocaptor.com/dashboard/us-consumer-law-attorney-rates

If You're In Latvia, Estonia, Romania, Slovenia or Croatia, Oracle APEX is Coming to You!

Joel Kallman - Wed, 2015-09-16 21:04
In the first part of October, my colleague Vlad Uvarov and I are taking the Oracle APEX & Oracle Database Cloud message to a number of user groups who are graciously hosting us.  These are countries for which there is growing interest in Oracle Application Express, and we wish to help support these groups and aid in fostering their growing APEX communities.

The dates and locations are:

  1. Latvian Oracle User Group, October 5, 2015
  2. Oracle User Group Estonia, Oracle Innovation Day in Tallinn, October 7, 2015
  3. Romanian Oracle User Group, October 8, 2015
  4. Oracle Romania (for Oracle employees, at the Floreasca Park office), October 8-9, 2015
  5. Slovenian Oracle User Group, SIOUG 2015, October 12-13, 2015
  6. Croatian Oracle User Group, 20th HrOUG Conference, October 13-16, 2015

You should consider attending one of these user group meetings/conferences if:

  • You're a CIO or manager, and you wish to understand what Oracle Application Express is and if it can help you and your business.
  • You're a PL/SQL developer, and you want to learn how easy or difficult it is to exploit your skills on the Web and in the Cloud.
  • You come from a client/server background and you want to understand what you can do with your skills but in Web development and Cloud development.
  • You're an Oracle DBA, and you want to understand if you can use Oracle Application Express in your daily responsibilities.
  • You know nothing about Oracle Application Express and you want to learn a bit more.

The User Group meetings in Latvia, Estonia and Romania all include 2-hour instructor-led hands-on labs.  All you need to bring is a laptop, and we'll supply the rest.  But you won't be merely watching an instructor drive their mouse.  You will be the ones building something real.  I guarantee that people completely new to APEX, as well as seasoned APEX developers, will learn a number of relevant skills and techniques in these labs.

If you have any interest or questions or concerns (or complaints!) about Oracle Application Express, and you are nearby, we would be very honored to meet you in person and assist in any way we can.  We hope you can make it!

Presenting the Hotsos Symposium Training Day – 10 March 2016 (Heat)

Richard Foote - Wed, 2015-09-16 05:06
I’ve just accepted an invitation to present the Hotsos Symposium Training Day on 10 March 2016 in sunny Dallas, Texas. In the age of Exadata and In-Memory databases, it’ll be an updated and consolidated version of my Index Internals and Best Practices seminar. With an emphasis on using indexes appropriately to boost performance, it’ll feature […]
Categories: DBA Blogs

Presentation slides for my ORDS talk at KScope 2015

Dietmar Aust - Tue, 2015-09-15 12:48
Hi guys,

in June I gave a talk at the ODTUG KScope conference regarding the optimal setup of Oracle ORDS for Oracle Application Express: Setting Up the Oracle APEX Listener (Now ORDS) for Production Environments

You can certainly access the slides through the ODTUG site. They have even recorded the presentation and made it available to their members.

A paid membership seems to be a good investment at $99 per year, because it also gives you access to the other content from the ODTUG conferences. I am not affiliated with ODTUG, but all I can say is that the KScope conference is the best place for an Oracle developer to learn and connect with the best folks in the industry.

Everybody else who is not (yet) an ODTUG member can download my slides and the config file here: http://www.opal-consulting.de/downloads/presentations/2015-06-ODTUG-KScope-ORDS-in-production/

Cheers and all the best,

P.S.: The configuration is based on version 3.0.0 of ORDS. You should definitely move to 3.0.1, which is currently available.

But on the other hand, I was once again thrown off by another problem with version 3.0.1 when running the schema creation scripts for the ORDS schema users (ords_metadata and ords_public_user).

Thus I have come to the conclusion that it is best to do it step by step: the database users have to be created first. You can extract the installation scripts from ords.war just as well:
- http://docs.oracle.com/cd/E56351_01/doc.30/e56293/install.htm#CHDDIFEC
- http://docs.oracle.com/cd/E56351_01/doc.30/e56293/install.htm#CHDFJHEA
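As a sketch of that extraction step (the ords.war location under /opt/ords is an assumption on my part; adjust it to your own download directory):

```shell
# Sketch: locate and extract the installation SQL scripts bundled inside ords.war.
# ords.war is a regular zip archive, so unzip can list and extract its contents.
WAR=/opt/ords/ords.war        # assumed download location; adjust to yours
if [ -f "$WAR" ]; then
  unzip -l "$WAR" | grep -i '\.sql'              # locate the bundled scripts
  unzip -o "$WAR" '*.sql' -d /tmp/ords-scripts   # extract them for manual, stepwise use
fi
```

With the scripts extracted you can create the database users first, then run the remaining steps one at a time.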

OTBI Enterprise

Dylan's BI Notes - Tue, 2015-09-15 10:39
OTBI Enterprise is the BI cloud service, a SaaS deployment of OBIA.  It uses a data warehouse based architecture.  The ETL processes are handled within the cloud.  The data are first loaded from either on-premise or cloud sources using various means in the original formats.   The data are first loaded into the […]
Categories: BI & Warehousing

Copycat blog

Vikram Das - Tue, 2015-09-15 03:50
While doing a Google search today I noticed that there is another blog that has copied all the content from my blog, posted it as its own, and even kept a similar-sounding name: http://oracleapps-technology.blogspot.com .  I made a DMCA complaint to Google about this.  The Google team asked me to provide a list of URLs.  I had to go through the copycat's whole blog and create a spreadsheet with two columns: one with the URL of my original post, and a second with the URL of the copycat's post.  There were 498 entries.  I patiently did it, sent the spreadsheet to the Google team, and got a reply within 2 hours:

In accordance with the Digital Millennium Copyright Act, we have completed processing your infringement notice. We are in the process of disabling access to the content in question at the following URL(s):


The content will be removed shortly.

The Google Team 
Categories: APPS Blogs

Turkish Hadoop User Group(TRHUG) 2015 meeting

H.Tonguç Yılmaz - Tue, 2015-09-15 02:39
The Turkish Hadoop User Group (TRHUG) 2015 annual meeting will be on Tuesday, October 6, at İTÜ Maslak Campus, İstanbul. Dilişim / Oracle / Cloudera / Intel are the event sponsors of the meeting this year. You can catch one of the last free tickets and check out the day's presentations at this link: https://www.eventbrite.com/e/trhug-2015-tickets-18360144687

Adding the "Deploy to Bluemix" Button to my Bluemix Applications in GitHub

Pas Apicella - Mon, 2015-09-14 19:25
Not before time, I finally added my first "Deploy to Bluemix" button to my GitHub projects for Bluemix applications. The screen shot below shows this for the Spring Session - Spring Boot Portable Cloud Ready HTTP Session demo.

Here is what it looks like when I deploy this using the "Deploy to Bluemix" button; it requires me to log in to IBM Bluemix. When you use this button, it adds the project code via a fork to your own DevOps projects, adds a pipeline to compile/deploy the code, and finally deploys it as you expect it to.

More Information

Categories: Fusion Middleware

EM12c Upgrade Tasks

Arun Bavera - Mon, 2015-09-14 14:51
1.      Upgrade Primary OMR, OMS using Installer of   - 2 Hours
   Check if OMR requires upgrade:
12c Database has been Certified as an EM or Repository with Certain Patchset and PSU Restrictions (Doc ID 1987905.1)
Patch Set Updates - List of Fixes in each PSU (Doc ID 1924126.1)
Patch Set - Availability and Known Issues (Doc ID 1683799.1)
Quick Reference to Patch Numbers for Database PSU, SPU(CPU), Bundle Patches and Patchsets (Doc ID 1454618.1)

Applying Enterprise Manager 12c Recommended Patches (Doc ID 1664074.1)

2.     Upgrade Primary Agent      - 6 Minutes

3.      Cleanup Agent

4.      Cleanup OMS

5.      Upgrade Secondary  OMS     - 30 Minutes

6.      Cleanup Agent

7.      Cleanup OMS

8.      Apply Monthly Agent/OMS Patches available 
    Oracle Recommended Patches (PSU) for Enterprise Manager Base Platform (All Releases) (Doc ID 822485.1)
    Document 2038446.1 - Enterprise Manager (PS4) Master Bundle Patch List

9.  Install Latest JDK 1.6 (Note: 1944044.1)
All Java SE Downloads on MOS (Doc ID 1439822.1)
  How to Upgrade JDK to 1.6 Update 95 on OMS or (Doc ID 2059426.1)
How to Upgrade the JDK Used by Oracle WebLogic Server 11g to a Different Version (Doc ID 1309855.1)
How to Upgrade the JDK Used by Oracle WebLogic Server 12c to a Different Version (Doc ID 1616397.1)
         How to Install and Maintain the Java SE Installed or Used with FMW 11g/12c Products (Doc ID 1492980.1)

10.     Install Weblogic latest PSU (1470197.1)  

11.  Verify Load Balancer

12.  OMS Sizing 

Enterprise Manager Cloud Control Upgrade Guide

EM 12c R5: Checklist for Upgrading Enterprise Manager Cloud Control from Version to (Doc ID 2022505.1)

12c Database has been Certified as an EM or Repository with Certain Patchset and PSU Restrictions (Doc ID 1987905.1)

EM 12c: How to Patch the EM-Integrated Oracle BI Publisher (Doc ID 1982656.1)


Categories: Development

Forcing Garbage Collection in JDK manually using JVisualVM

Arun Bavera - Mon, 2015-09-14 14:43
You might have seen the heap crossing its limit many times because the GC algorithm is not working properly and is keeping old objects for a long time.
Even though forcing a major GC manually is not advised, if you come across such a situation you can use the following method to clear the heap.
Note: if the heap size is huge (more than 6GB), a major GC may cause the application to pause for a couple of seconds. Also, make sure you have enough system memory (RAM) to invoke the JVisualVM tool.
This is a typical method in many corporations where X-Windows is not installed on the *NIX machines and the app account is locked down against direct login.
1) Log in as yourself to the Linux/Unix machine and make sure your laptop/desktop has an X emulator like Xming running.
2) Note down the authorized X keys:    xauth list
3) Log in as the app owner:     sudo su - oracle
4) Add the X keys to oracle (the app owner session):
xauth add <full string from xauth list from previous session>

5) Do ps -ef | grep java, note down the JDK directory and go directly to the JDK bin directory (/opt/app/oracle/jdk1.7.0_55/bin); in this case we are using JDK 7
6) Invoke  ./jvisualvm &
7) Choose the WebLogic PID, make sure in the Overview tab that the server name is the one you are interested in, and perform a manual GC.
  Note: From JDK 7 onwards, if your heap size is more than 6GB, the G1GC algorithm works in the best possible way.
     Also refer: https://blogs.oracle.com/g1gc/
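If you only need the GC trigger and not the full VisualVM UI, one alternative sketch (the process-name pattern "weblogic" is an assumption about your server process) uses the jcmd utility that ships with JDK 7 and later:

```shell
# Alternative without any X display: ask the JVM to run a full GC via jcmd.
# jcmd lives in the JDK bin directory from JDK 7 onwards.
if command -v jcmd >/dev/null 2>&1; then
  # Grep for the WebLogic server process in the JVM list (pattern is an assumption).
  PID=$(jcmd -l | grep -i weblogic | awk '{print $1}' | head -1)
  if [ -n "$PID" ]; then
    jcmd "$PID" GC.run    # trigger a full garbage collection
  fi
fi
```

This must be run as the same OS user that owns the JVM process, just like jvisualvm.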

Categories: Development

Report Carousel in APEX 5 UT

Dimitri Gielis - Mon, 2015-09-14 10:45
The Universal Theme in APEX 5.0 is full of nice things.

Did you already see the Carousel template for Regions?
When you add a region to your page with a couple of sub-regions and you give the parent region the "Carousel Container" template it turns the regions into a carousel, so you can flip between regions.

I was asked to provide the same functionality, but based on dynamic content.
So I decided to build a report template that would be shown as a carousel. Here's the result:

I really like carousels :)

Here's how you can have this report template in your app:
1) Create a new Report Template:

Make sure to select Named Column for the Template Type:

Add the following HTML into the template at the given points:

That's it for the template.

Now you can create a new report on your page and give it the template you just created.
Here's the SQL Statement I used:

select PRODUCT_ID          as id,
       PRODUCT_NAME        as title,
       PRODUCT_DESCRIPTION as description,
       dbms_lob.getlength(PRODUCT_IMAGE) as image,
       'no-icon'           as icon,
       null                as link_url
  from demo_product_info

Note 1: you have to use the same column aliases as you defined in the template.
Note 2: make sure you keep the real id of your image in the query too, as otherwise you'll get an error (no data found).

To make the carousel a bit nicer I added following CSS to the page, but you could add it to your own CSS file or in the custom css section of Theme Roller.

Note: the carousel can work with an icon or an image. If you want to see an icon you can use for example "fa-edit fa-4x". When using an image, define the icon as no-icon.

Eager for more Universal Theme tips and tricks? Check out our APEX 5.0 UI training in Birmingham on December 10th. :)

For easier copy/paste into your template, you find the source below:

 *** Before Rows ***  
<div class="t-Region t-Region--carousel t-Region--showCarouselControls t-Region--hiddenOverflow" id="R1" role="group" aria-labelledby="R1_heading">
<div class="t-Region-bodyWrap">
<div class="t-Region-body">
<div class="t-Region-carouselRegions">
*** Column Template ***
<div data-label="#TITLE#" id="SR_R#ID#">
<a href="#LINK_URL#">
<div class="t-HeroRegion " id="R#ID#">
<div class="t-HeroRegion-wrap">
<div class="t-HeroRegion-col t-HeroRegion-col--left">
<span class="t-HeroRegion-icon t-Icon #ICON#"></span>
</div>
<div class="t-HeroRegion-col t-HeroRegion-col--content">
<h2 class="t-HeroRegion-title">#TITLE#</h2>
</div>
<div class="t-HeroRegion-col t-HeroRegion-col--right"><div class="t-HeroRegion-form"></div><div class="t-HeroRegion-buttons"></div></div>
</div>
</div>
</a>
</div>
*** After Rows ***
</div>
</div>
</div>
</div>
*** Inline CSS ***
.t-HeroRegion-col.t-HeroRegion-col--left {
}
.t-HeroRegion {
border-bottom: 0px solid #CCC;
}
.t-Region--carousel {
border: 1px solid #d6dfe6 !important;
}
.t-HeroRegion-col--left img {
max-height: 90px;
max-width: 130px;
}
.no-icon {
}
Categories: Development

Do you ‘Glow in the Dark’?

Duncan Davies - Mon, 2015-09-14 07:00

I’m in awe of many people. I’m lucky to have met and worked with some truly smart and outstanding individuals. (I just wish I wasn’t so reserved and was able to tell them!)

If I was asked to pick a handful of the most talented people however, Seth Godin would undoubtedly be up there.

I’ve not met Seth in real life (although I had a near miss at OpenWorld 5 or 6 years back) but I’ve followed his work for a decade at least. He writes daily posts on his blog – most of them succinct and quick to read – which are always really insightful.

My all-time favourite post from Seth was from just the other day. I’m reposting it – not because I’m stealing his work, but because it increases the chances of readers of this blog seeing it – and going to his site and subscribing, adding it to your RSS reader etc.

Glow in the dark

Some people are able to reflect the light that lands on them, to take directions or assets or energy and focus it where it needs to be focused. This is a really valuable skill.

Even more valuable, though, is the person who glows in the dark. Not reflecting energy, but creating it. Not redirecting urgencies but generating them. The glow in the dark colleague is able to restart momentum, even when everyone else is ready to give up.

At the other end of the spectrum (ahem) is the black hole. All the energy and all the urgency merely disappears.

Your glow in the dark colleague knows that recharging is eventually necessary, but for now, it’s okay that there’s not a lot of light. The glow is enough.

I wish I was able to write half as beautifully as this. Please go to his site and subscribe. I’m sure we can all identify some people who can reflect the light, some who are occasionally black holes, and – if you’re lucky – a glow in the dark colleague. If you need further convincing of Seth’s genius, the Interim Strategy will probably resonate too.

Bigfoot vs UFO analytics

Nilesh Jethwa - Sat, 2015-09-12 21:29

Bigfoot and UFOs remain elusive but still find ways to make the news from time to time.

Read more at: http://www.infocaptor.com/dashboard/bigfoot-vs-ufo-analytics

Here We Go Again

Floyd Teter - Thu, 2015-09-10 13:59
Yup, moving on one more time.  Hopefully for the last time.  I’m leaving Sierra-Cedar Inc. for a position as Sr. Director with Oracle's HCM Center of Excellence team.

As an enterprise software guy, I see the evolution of SaaS and Cloud as the significant drivers of change in the field.  I want to be involved, I want to contribute in a meaningful way, I want to learn more, and I want to be at the center of it all.  And there is no better place for all that than Oracle.  I had the opportunity to meet most of the folks I’ll be working alongside…knew many of them and met a few new faces.  And I’m excited to work with them. So when the opportunity presented itself, I was happy to follow through on it.

I’ll also freely admit that I’ve seen…and experienced…a pretty substantial amount of upheaval regarding Oracle services partners over the past several years.  Some are fighting the cloud-driven changes in the marketplace, others have accepted the change but have yet to adapt, a few are substantially shifting their business model to provide relevant services as the sand shifts under their feet.  Personally, I’ve had enough upheaval for a bit.

The first mission at Oracle:  develop tools and methods to meaningfully reduce the lead time between customer subscription and customer go-live.  Pretty cool, as it lets me work on my #beat39 passion.  I’ll be starting with building tools to convert data from legacy HCM applications to HCM Cloud through the HCM Data Loader (“HDL”).

While I regret leaving a group of great people at SCI, I’m really looking forward to rejoining Oracle.  I kind of feel like a minion hitting the banana goldmine!

Building an Oracle NoSQL cluster using Docker

Marcelo Ochoa - Thu, 2015-09-10 09:43
Continuing with my previous post about using Docker in development/testing environments, the case now is how to build an Oracle NoSQL cluster on a single machine using Docker.
I assume that you already have Docker installed and running; there are plenty of tutorials about that, and in my case on Ubuntu it is just a two-step install using apt-get :)
My starting point was using some ideas from another Docker project for building a Hadoop Cluster.
This project uses another great idea named Serf/Dnsmasq on Docker; the motivation, extracted from the README.md file, is:
This image aims to provide resolvable fully qualified domain names,
between dynamically created docker containers on ubuntu.
## The problem
By default **/etc/hosts** is readonly in docker containers. The usual
solution is to start a DNS server (probably as a docker container) and pass
a reference when starting docker instances: `docker run -dns `
So with this idea in mind I wrote this Dockerfile:

FROM java:openjdk-7-jdk
MAINTAINER marcelo.ochoa@gmail.com
RUN export DEBIAN_FRONTEND=noninteractive && \
    apt-get update && \
    apt-get install -y dnsmasq unzip curl ant ant-contrib junit
# dnsmasq configuration
ADD dnsmasq.conf /etc/dnsmasq.conf
ADD resolv.dnsmasq.conf /etc/resolv.dnsmasq.conf
# install serfdom.io
RUN curl -Lo /tmp/serf.zip https://dl.bintray.com/mitchellh/serf/0.5.0_linux_amd64.zip
RUN curl -Lo /tmp/kv-ce-3.3.4.zip http://download.oracle.com/otn-pub/otn_software/nosql-database/kv-ce-3.3.4.zip
RUN unzip /tmp/serf.zip -d /bin
RUN unzip /tmp/kv-ce-3.3.4.zip -d /opt
RUN rm -f /tmp/serf.zip
RUN rm -f /tmp/kv-ce-3.3.4.zip
# configure serf
ADD serf-config.json $SERF_CONFIG_DIR/serf-config.json
ADD event-router.sh $SERF_CONFIG_DIR/event-router.sh
RUN chmod +x  $SERF_CONFIG_DIR/event-router.sh
ADD handlers $SERF_CONFIG_DIR/handlers
ADD start-serf-agent.sh  $SERF_CONFIG_DIR/start-serf-agent.sh
RUN chmod +x  $SERF_CONFIG_DIR/start-serf-agent.sh
EXPOSE 7373 7946 5000 5001 5010 5011 5012 5013 5014 5015 5016 5017 5018 5019 5020
CMD /etc/serf/start-serf-agent.sh
The relevant information was marked in bold; here is the explanation:
  • FROM java:openjdk-7-jdk, this Docker base images already have installed Ubuntu and Java7 so only a few additions are required
  • RUN curl .. /0.5.0_linux_amd64.zip, this is compiled version of Serf implementation ready to run on Ubuntu
  • RUN curl -Lo .. /kv-ce-3.3.4.zip, this is the community version of Oracle NoSQL, free download
  • CMD /etc/serf/start-serf-agent.sh, this is the script modified from the original Docker/serf project which includes the configuration of the Oracle NoSQL just after the image boot.
The last point requires a special explanation: first, there are 3 bash functions for starting, stopping and creating the bootconfig file for the NoSQL nodes. Here are the relevant sections:
stop_database() {
        java -Xmx256m -Xms256m -jar $KVHOME/lib/kvstore.jar stop -root $KVROOT
}
start_database() {
        nohup java -Xmx256m -Xms256m -jar $KVHOME/lib/kvstore.jar start -root $KVROOT &
}
create_bootconfig() {
        [[ -n $NODE_TYPE ]] && [[ $NODE_TYPE = "m" ]] && java -jar $KVHOME/lib/kvstore.jar makebootconfig -root $KVROOT -port 5000 -admin 5001 -host "$(hostname -f)" -harange 5010,5020 -store-security none -capacity 1 -num_cpus 0 -memory_mb 0
        [[ -n $NODE_TYPE ]] && [[ $NODE_TYPE = "s" ]] && java -jar $KVHOME/lib/kvstore.jar makebootconfig -root $KVROOT -port 5000 -host "$(hostname -f)" -harange 5010,5020 -store-security none -capacity 1 -num_cpus 0 -memory_mb 0
}
The last function (create_bootconfig) works differently depending on whether the node is designated as master ($NODE_TYPE = "m") or slave ($NODE_TYPE = "s").
I decided not to persist the NoSQL storage after the Docker containers stop, but it is possible to map the directory where the NoSQL nodes reside externally, as I showed in my previous post; with that configuration the NoSQL storage is not re-created at every boot.
With the above explanations we can create the Docker image using:
root@local:/var/lib/docker/dockerfiles/build-oracle-nosql# docker build -t "oracle-nosql/serf" .
The complete list of files required can be downloaded as a zip from this location.
Once the image is built we can start a cluster of 3 nodes by simply executing the script start-cluster.sh; this script creates a node named master.mycorp.com and two slaves, slave[1..2].mycorp.com. Here is the output:
root@local:/var/lib/docker/dockerfiles/build-oracle-nosql# ./start-cluster.sh
WARNING: Localhost DNS setting (--dns=127.0.0.1) may fail in containers.
WARNING: Localhost DNS setting (--dns=127.0.0.1) may fail in containers.
WARNING: Localhost DNS setting (--dns=127.0.0.1) may fail in containers.
You can check the status of the cluster by executing a serf command on the master node, for example:
root@local:/var/lib/docker/dockerfiles/build-oracle-nosql# docker exec -ti master serf members
master.mycorp.com  alive
slave1.mycorp.com  alive
slave2.mycorp.com  alive
At this point the 3 NoSQL nodes are ready to work, but they are unconfigured; here is the output of the NoSQL ping command:
root@local:/var/lib/docker/dockerfiles/build-oracle-nosql# docker exec -ti master java -jar /opt/kv-3.3.4/lib/kvstore.jar ping -host master -port 5000
SNA at hostname: master, registry port: 5000 is not registered.
No further information is available
Using the examples from the Oracle NoSQL Documentation we can create a store using this plan (script.txt):
configure -name mystore
plan deploy-zone -name "Boston" -rf 3 -wait
plan deploy-sn -zn zn1 -host master.mycorp.com -port 5000 -wait
plan deploy-admin -sn sn1 -port 5001 -wait
pool create -name BostonPool
pool join -name BostonPool -sn sn1
plan deploy-sn -zn zn1 -host slave1.mycorp.com -port 5000 -wait
pool join -name BostonPool -sn sn2
plan deploy-sn -zn zn1 -host slave2.mycorp.com -port 5000 -wait
pool join -name BostonPool -sn sn3
topology create -name topo -pool BostonPool -partitions 300
plan deploy-topology -name topo -wait
show topology
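A deploy script for this plan can be as small as copying script.txt into the container and feeding it to the admin CLI; a minimal sketch (the /tmp path inside the container is an assumption, while the kvstore.jar path matches the image built above):

```shell
# Sketch of a deploy script: copy the plan into the master container and
# run it with the NoSQL admin CLI (runadmin ... load -file).
cat > deploy-store.sh <<'EOF'
#!/bin/sh
docker cp script.txt master:/tmp/script.txt
docker exec -ti master java -jar /opt/kv-3.3.4/lib/kvstore.jar runadmin \
  -host master.mycorp.com -port 5000 load -file /tmp/script.txt
EOF
chmod +x deploy-store.sh
```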
To simply submit this plan to the NoSQL nodes there is a script named deploy-store.sh; here is the output:
root@local:/var/lib/docker/dockerfiles/build-oracle-nosql# ./deploy-store.sh 
Store configured: mystore
Executed plan 1, waiting for completion...
Plan 1 ended successfully
Executed plan 2, waiting for completion...
Plan 2 ended successfully
Executed plan 3, waiting for completion...
Plan 3 ended successfully
Added Storage Node(s) [sn1] to pool BostonPool
Executed plan 4, waiting for completion...
Plan 4 ended successfully
Added Storage Node(s) [sn2] to pool BostonPool
Executed plan 5, waiting for completion...
Plan 5 ended successfully
Added Storage Node(s) [sn3] to pool BostonPool
Created: topo
Executed plan 6, waiting for completion...
Plan 6 ended successfully
store=mystore  numPartitions=300 sequence=308
  zn: id=zn1 name="Boston" repFactor=3 type=PRIMARY
  sn=[sn1] zn:[id=zn1 name="Boston"] master.mycorp.com:5000 capacity=1 RUNNING
    [rg1-rn1] RUNNING
          No performance info available
  sn=[sn2] zn:[id=zn1 name="Boston"] slave1.mycorp.com:5000 capacity=1 RUNNING
    [rg1-rn2] RUNNING
          No performance info available
  sn=[sn3] zn:[id=zn1 name="Boston"] slave2.mycorp.com:5000 capacity=1 RUNNING
    [rg1-rn3] RUNNING
          No performance info available
  shard=[rg1] num partitions=300
    [rg1-rn1] sn=sn1
    [rg1-rn2] sn=sn2
    [rg1-rn3] sn=sn3
You can also access the NoSQL Admin page using the URL http://localhost:5001/ because the start-cluster.sh script publishes this port outside the master container.
Here is the screen shot:

The cluster is ready!! Have fun storing your data.

Persistent NoSQL store: as I mentioned earlier in this post, if we map /var/kvroot to the host machine, the NoSQL store will persist through multiple executions of the cluster. For example, create 3 directories:
root@local:/var/lib/docker/dockerfiles/build-oracle-nosql# mkdir /tmp/kvroot1
root@local:/var/lib/docker/dockerfiles/build-oracle-nosql# mkdir /tmp/kvroot2
root@local:/var/lib/docker/dockerfiles/build-oracle-nosql# mkdir /tmp/kvroot3
and create a new shell script (start-cluster-persistent.sh) for starting the cluster mapped to the above directories:
docker run -d -t --volume=/tmp/kvroot1:/var/kvroot --publish=5000:5000 --publish=5001:5001 --dns -e NODE_TYPE=m -P --name master -h master.mycorp.com oracle-nosql/serf
FIRST_IP=$(docker inspect --format="{{.NetworkSettings.IPAddress}}" master)
docker run -d -t --volume=/tmp/kvroot2:/var/kvroot --dns -e NODE_TYPE=s -e JOIN_IP=$FIRST_IP -P --name slave1 -h slave1.mycorp.com oracle-nosql/serf
docker run -d -t --volume=/tmp/kvroot3:/var/kvroot --dns -e NODE_TYPE=s -e JOIN_IP=$FIRST_IP -P --name slave2 -h slave2.mycorp.com oracle-nosql/serf
We can start and deploy the store for the first time using:
root@local:/var/lib/docker/dockerfiles/build-oracle-nosql# ./start-cluster-persistent.sh
... output here...
root@local:/var/lib/docker/dockerfiles/build-oracle-nosql# ls -ltr /tmp/kvroot1
total 8
-rw-r--r-- 1 root root  52 sep 10 20:19 security.policy
-rw-r--r-- 1 root root 781 sep 10 20:19 config.xml
root@local:/var/lib/docker/dockerfiles/build-oracle-nosql# ./deploy-store.sh 
... output here ...
root@local:/var/lib/docker/dockerfiles/build-oracle-nosql# docker exec -ti master java -jar /opt/kv-3.3.4/lib/kvstore.jar ping -host master -port 5000
Pinging components of store mystore based upon topology sequence #308
300 partitions and 3 storage nodes
Time: 2015-09-10 23:20:18 UTC   Version:
Shard Status: total:1 healthy:1 degraded:0 noQuorum:0 offline:0
Zone [name="Boston" id=zn1 type=PRIMARY]   RN Status: total:3 online:3 maxDelayMillis:0 maxCatchupTimeSecs:0
Storage Node [sn1] on master.mycorp.com:5000    Zone: [name="Boston" id=zn1 type=PRIMARY]    Status: RUNNING   Ver: 12cR1.3.3.4 2015-04-24 09:01:17 UTC  Build id: e3ae28b507bf
Admin [admin1] Status: RUNNING,MASTER
Rep Node [rg1-rn1] Status: RUNNING,MASTER sequenceNumber:627 haPort:5011
Storage Node [sn2] on slave1.mycorp.com:5000    Zone: [name="Boston" id=zn1 type=PRIMARY]    Status: RUNNING   Ver: 12cR1.3.3.4 2015-04-24 09:01:17 UTC  Build id: e3ae28b507bf
Rep Node [rg1-rn2] Status: RUNNING,REPLICA sequenceNumber:627 haPort:5010 delayMillis:0 catchupTimeSecs:0
Storage Node [sn3] on slave2.mycorp.com:5000    Zone: [name="Boston" id=zn1 type=PRIMARY]    Status: RUNNING   Ver: 12cR1.3.3.4 2015-04-24 09:01:17 UTC  Build id: e3ae28b507bf
Rep Node [rg1-rn3] Status: RUNNING,REPLICA sequenceNumber:627 haPort:5010 delayMillis:0 catchupTimeSecs:0
As you can see, the cluster is ready for storing data. Now we will stop and start it again to show that it is not necessary to redeploy the configuration:
root@local:/var/lib/docker/dockerfiles/build-oracle-nosql# ./stop-cluster.sh
... output here ...
root@local:/var/lib/docker/dockerfiles/build-oracle-nosql# ./start-cluster-persistent.sh
... output here ...
root@local:/var/lib/docker/dockerfiles/build-oracle-nosql# docker exec -ti master java -jar /opt/kv-3.3.4/lib/kvstore.jar ping -host master -port 5000
Pinging components of store mystore based upon topology sequence #308
300 partitions and 3 storage nodes
Time: 2015-09-10 23:34:15 UTC   Version:
Shard Status: total:1 healthy:1 degraded:0 noQuorum:0 offline:0
Zone [name="Boston" id=zn1 type=PRIMARY]   RN Status: total:3 online:3 maxDelayMillis:2342 maxCatchupTimeSecs:-4
Storage Node [sn1] on master.mycorp.com:5000    Zone: [name="Boston" id=zn1 type=PRIMARY]    Status: RUNNING   Ver: 12cR1.3.3.4 2015-04-24 09:01:17 UTC  Build id: e3ae28b507bf
Admin [admin1] Status: RUNNING,MASTER
Rep Node [rg1-rn1] Status: RUNNING,REPLICA sequenceNumber:639 haPort:5011 delayMillis:2342 catchupTimeSecs:-4
Storage Node [sn2] on slave1.mycorp.com:5000    Zone: [name="Boston" id=zn1 type=PRIMARY]    Status: RUNNING   Ver: 12cR1.3.3.4 2015-04-24 09:01:17 UTC  Build id: e3ae28b507bf
Rep Node [rg1-rn2] Status: RUNNING,REPLICA sequenceNumber:639 haPort:5010 delayMillis:0 catchupTimeSecs:0
Storage Node [sn3] on slave2.mycorp.com:5000    Zone: [name="Boston" id=zn1 type=PRIMARY]    Status: RUNNING   Ver: 12cR1.3.3.4 2015-04-24 09:01:17 UTC  Build id: e3ae28b507bf
Rep Node [rg1-rn3] Status: RUNNING,MASTER sequenceNumber:639 haPort:5010
And that's all: the last ping command shows that the store survives the stop/remove/start container cycle.
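The stop-cluster.sh script used earlier is not listed in the post; a minimal sketch, assuming the container names created by start-cluster.sh:

```shell
# Minimal sketch of stop-cluster.sh: stop and remove the three named
# containers (names taken from the start-cluster scripts in this post).
cat > stop-cluster.sh <<'EOF'
#!/bin/sh
for c in slave2 slave1 master; do
  docker stop "$c"
  docker rm "$c"
done
EOF
chmod +x stop-cluster.sh
```

Because the containers are removed, only a host-mapped /var/kvroot survives this script, which is exactly what the persistent variant demonstrates.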

Spring Session - Spring Boot application for IBM Bluemix

Pas Apicella - Thu, 2015-09-10 07:28
The following guide shows how to use Spring Session to transparently leverage Redis to back a web application’s HttpSession when using Spring Boot.


The demo below is a simple Spring Boot / Thymeleaf / Bootstrap application to test session replication using Spring Session - Spring Boot within IBM Bluemix. The same demo will run on Pivotal Cloud Foundry as well.

IBM DevOps URL ->


Sample Project on GitHub ->


More Information

The Portable, Cloud-Ready HTTP Session
Categories: Fusion Middleware

Is Apache Spark becoming a DBMS?

Dylan's BI Notes - Wed, 2015-09-09 22:02
I attended a great meetup and this is the question I have after the meeting. Perhaps the intent is to make it like a DBMS, like Oracle, or even a BI platform, like OBIEE? — The task flow is actually very similar to a typical database profiling and data analysis job. 1. Define your question […]
Categories: BI & Warehousing

Amazon S3 to Glacier - Cloud ILM

Pakistan's First Oracle Blog - Wed, 2015-09-09 19:27
Falling in love with Kate Upton is easy, but even easier is to be swept off your feet by information lifecycle management (ILM) in Amazon Web Services (AWS). Simple, easily configurable, fast, reliable, cost effective and proven are the words that describe it.

Pythian has been involved with ILM for a long time. Across various flavors of databases and systems, Pythian has been overseeing the creation, alteration, and flow of data until it becomes obsolete. That is why AWS's ILM resonates perfectly well with Pythian's expertise.

Amazon S3 is an object store for short-term storage, whereas Amazon Glacier is their cloud archiving offering for long-term storage. Rules can be defined on the information to specify and automate its lifecycle.

The following screenshot shows the rules being configured on objects, from the S3 bucket to Glacier and then to permanent deletion: 90 days after creation of an object, it will be moved to Glacier, and then after 1 year it will be permanently deleted. Look at the graphical representation of the lifecycle and how intuitive it is.
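The same rules can also be expressed as a lifecycle configuration document and applied with the AWS CLI; a sketch ("my-bucket" is a placeholder, and the call assumes configured AWS credentials):

```shell
# Sketch: 90 days after creation objects transition to Glacier,
# and after 365 days they are permanently deleted.
cat > lifecycle.json <<'EOF'
{
  "Rules": [
    {
      "ID": "archive-then-expire",
      "Status": "Enabled",
      "Filter": {"Prefix": ""},
      "Transitions": [{"Days": 90, "StorageClass": "GLACIER"}],
      "Expiration": {"Days": 365}
    }
  ]
}
EOF
# Apply the configuration to the bucket (placeholder name).
if command -v aws >/dev/null 2>&1; then
  aws s3api put-bucket-lifecycle-configuration \
    --bucket my-bucket --lifecycle-configuration file://lifecycle.json
fi
```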

Categories: DBA Blogs

Oracle Priority Support Infogram for 09-SEP-2015

Oracle Infogram - Wed, 2015-09-09 15:49


Database Insider - September 2015 issue now available, from Exadata Partner Community EMEA.

Some good posts this week over at Update your Database – NOW!


If you’ve never been to the Ask Tom site and you have anything to do with Oracle technologies, well, where have you been hanging out? It is always one of the best sources for SQL, PL/SQL, Oracle internals, design, etc. See this posting for an update: Ask Tom Questions: the Good, the Bad and the Ugly, from All Things SQL.


Oracle VM VirtualBox 5.0.4 now available!, from Oracle’s Virtualization Blog.

Big Data


Additional new material WebLogic Community, from WebLogic Partner Community EMEA.


Insert and show whitespace in ADF Faces Components, from WebLogic Partner Community EMEA.

Mobile Computing

Quick Tip: Multi Line Labels on Command Button, from The Oracle Mobile Platform Blog.


From the Oracle E-Business Suite Support blog:

From the Oracle E-Business Suite Technology blog:

Are AMP Support Dates Based on EBS or EM Releases?

