Feed aggregator

My Nomination for the Oracle Database Developer Choice Awards

Dietmar Aust - Thu, 2015-09-17 00:30
Actually this came as a wonderful surprise ... I have been nominated for the Oracle Database Developer Choice Awards:
I have basically devoted my entire work life to building solutions based on Oracle technology ... and you can build some pretty cool stuff with it. I have always enjoyed building software that makes a difference ... and even more so sharing what I have learned and supporting and inspiring others to do the same. The people in the Oracle community are simply amazing and I have made a lot of friends there. If you have an account for the Oracle Technology Network (OTN), I would appreciate your vote! And if you don't feel like voting for me ... vote anyway in all the different categories ... because the Oracle community deserves the attention. Thanks, ~Dietmar.

US Consumer Law Attorney Rates

Nilesh Jethwa - Wed, 2015-09-16 22:27

The hourly rate in any consulting business or practice increases with the years of experience in the field.

Read more at: http://www.infocaptor.com/dashboard/us-consumer-law-attorney-rates

If You're In Latvia, Estonia, Romania, Slovenia or Croatia, Oracle APEX is Coming to You!

Joel Kallman - Wed, 2015-09-16 21:04
In the first part of October, my colleague Vlad Uvarov and I are taking the Oracle APEX & Oracle Database Cloud message to a number of user groups who are graciously hosting us.  These are countries where interest in Oracle Application Express is growing, and we wish to support these groups and help foster their growing APEX communities.

The dates and locations are:

  1. Latvian Oracle User Group, October 5, 2015
  2. Oracle User Group Estonia, Oracle Innovation Day in Tallinn, October 7, 2015
  3. Romanian Oracle User Group, October 8, 2015
  4. Oracle Romania (for Oracle employees, at the Floreasca Park office), October 8-9, 2015
  5. Slovenian Oracle User Group, SIOUG 2015, October 12-13, 2015
  6. Croatian Oracle User Group, 20th HrOUG Conference, October 13-16, 2015

You should consider attending one of these user group meetings/conferences if:

  • You're a CIO or manager, and you wish to understand what Oracle Application Express is and if it can help you and your business.
  • You're a PL/SQL developer, and you want to learn how easy or difficult it is to exploit your skills on the Web and in the Cloud.
  • You come from a client/server background, and you want to understand how you can apply your skills to Web and Cloud development.
  • You're an Oracle DBA, and you want to understand if you can use Oracle Application Express in your daily responsibilities.
  • You know nothing about Oracle Application Express and you want to learn a bit more.

The User Group meetings in Latvia, Estonia and Romania all include 2-hour instructor-led hands-on labs.  All you need to bring is a laptop, and we'll supply the rest.  But you won't be merely watching an instructor drive their mouse.  You will be the ones building something real.  I guarantee that people completely new to APEX, as well as seasoned APEX developers, will learn a number of relevant skills and techniques in these labs.

If you have any interest or questions or concerns (or complaints!) about Oracle Application Express, and you are nearby, we would be very honored to meet you in person and assist in any way we can.  We hope you can make it!

Presenting the Hotsos Symposium Training Day – 10 March 2016 (Heat)

Richard Foote - Wed, 2015-09-16 05:06
I’ve just accepted an invitation to present the Hotsos Symposium Training Day on 10 March 2016 in sunny Dallas, Texas. In the age of Exadata and In-Memory databases, it’ll be an updated and consolidated version of my Index Internals and Best Practices seminar. With an emphasis on using indexes appropriately to boost performance, it’ll feature […]
Categories: DBA Blogs

Presentation slides for my ORDS talk at KScope 2015

Dietmar Aust - Tue, 2015-09-15 12:48
Hi guys,

in June I gave a talk at the ODTUG KScope conference regarding the optimal setup of Oracle ORDS for Oracle Application Express: Setting Up the Oracle APEX Listener (Now ORDS) for Production Environments

You can certainly access the slides through the ODTUG site. They even recorded the presentation and made it available to their members.

A paid membership seems to be a good investment at $99 per year, because you also get access to the other content from the ODTUG conferences. I am not affiliated with ODTUG, but all I can say is that the KScope conference is the best place for an Oracle developer to learn and connect with the best folks in the industry.

Everybody else who is not (yet) an ODTUG member can download my slides and the config file here: http://www.opal-consulting.de/downloads/presentations/2015-06-ODTUG-KScope-ORDS-in-production/

Cheers and all the best,
~Dietmar.

P.S.: The configuration is based on version 3.0.0 of ORDS. You should definitely move to 3.0.1, which is currently available.

On the other hand, I was once again thrown off by another problem with version 3.0.1 when running the schema creation scripts for the ORDS schema users (ords_metadata and ords_public_user).

Thus I have come to the conclusion that it is best to do it step by step: the database users have to be created first. You can just as well extract the installation scripts from ords.war (see the sketch after the links below):
- http://docs.oracle.com/cd/E56351_01/doc.30/e56293/install.htm#CHDDIFEC
- http://docs.oracle.com/cd/E56351_01/doc.30/e56293/install.htm#CHDFJHEA
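A minimal sketch of the manual, step-by-step approach: ords.war is a standard zip archive, so the bundled installation scripts can be listed and extracted with unzip. The exact script names and paths inside the archive differ between ORDS releases, so treat the ones below as illustrative only:

# ords.war is a zip archive; locate and extract the bundled SQL installation scripts
unzip -l ords.war | grep -i "\.sql"
mkdir -p ords_scripts
unzip ords.war -d ords_scripts
# Then create the database users (ords_metadata, ords_public_user) first,
# running the relevant scripts as a privileged user; the script path below
# is illustrative and depends on the ORDS release:
# sqlplus sys as sysdba @ords_scripts/db/scripts/installer/ords_installer_privileges.sql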



Copycat blog

Vikram Das - Tue, 2015-09-15 03:50
While doing a Google search today I noticed that there is another blog that has copied all the content from my blog, posted it as their own content, and even kept a similar-sounding name: http://oracleapps-technology.blogspot.com .  I have made a DMCA complaint to Google about this.  The Google team asked me to provide a list of URLs.  I had to go through the copycat's whole blog and create a spreadsheet with two columns: one column with the URL of my original post and a second column with the URL of the copycat's post.  There were 498 entries.  I patiently did it, sent the spreadsheet to the Google team, and got a reply within 2 hours:


Hello,
In accordance with the Digital Millennium Copyright Act, we have completed processing your infringement notice. We are in the process of disabling access to the content in question at the following URL(s):

http://oracleapps-technology.blogspot.com/

The content will be removed shortly.

Regards,
The Google Team 
Categories: APPS Blogs

Adding the "Deploy to Bluemix" Button to my Bluemix Applications in GitHub

Pas Apicella - Mon, 2015-09-14 19:25
Not before time, I finally added my first "Deploy to Bluemix" button to my GitHub projects for Bluemix applications. The screenshot below shows this for the Spring Session - Spring Boot Portable Cloud Ready HTTP Session demo.


Here is what it looks like when I deploy this using the "Deploy to Bluemix" button; it requires me to log in to IBM Bluemix. When you use this button, it forks the project code into your own DevOps Services projects, adds a pipeline to compile/deploy the code, and finally deploys it as you expect it to.



More Information

https://developer.ibm.com/devops-services/2015/02/18/share-code-new-deploy-bluemix-button/
Categories: Fusion Middleware

EM12c 12.1.0.5 Upgrade Tasks

Arun Bavera - Mon, 2015-09-14 14:51
1. Upgrade Primary OMR and OMS using the 12.1.0.5 installer - 2 hours
   Check if the OMR requires an upgrade (a quick check sketch follows the doc references below):
12c Database has been Certified as an EM 12.1.0.4 or 12.1.0.5 Repository with Certain Patchset and PSU Restrictions (Doc ID 1987905.1)
12.1.0.2 Patch Set Updates - List of Fixes in each PSU (Doc ID 1924126.1)
12.1.0.2 Patch Set - Availability and Known Issues (Doc ID 1683799.1)
Quick Reference to Patch Numbers for Database PSU, SPU(CPU), Bundle Patches and Patchsets (Doc ID 1454618.1)


Applying Enterprise Manager 12c Recommended Patches (Doc ID 1664074.1)
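Not from the original post: a minimal sketch for checking the repository database version and patch history before deciding whether the OMR needs an upgrade (connection details are illustrative):

# Run against the OMR database as a privileged user
sqlplus -s / as sysdba <<'EOF'
SELECT banner FROM v$version;
-- history of patches/PSUs applied to the repository database
SELECT action_time, action, version, comments
  FROM dba_registry_history
 ORDER BY action_time;
EOF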

2. Upgrade Primary Agent - 6 minutes

3. Clean up Agent

4. Clean up OMS

5. Upgrade Secondary OMS - 30 minutes

6. Clean up Agent

7. Clean up OMS

8. Apply the monthly Agent/OMS patches available
    Oracle Recommended Patches (PSU) for Enterprise Manager Base Platform (All Releases) (Doc ID 822485.1)
    Document 2038446.1 - Enterprise Manager 12.1.0.5.0 (PS4) Master Bundle Patch List

9. Install the latest JDK 1.6, currently JDK 1.6.0_95 (Doc ID 1944044.1)
    Refer:
    All Java SE Downloads on MOS (Doc ID 1439822.1)
    How to Upgrade JDK to 1.6 Update 95 on OMS 12.1.0.4 or 12.1.0.5 (Doc ID 2059426.1)
    How to Upgrade the JDK Used by Oracle WebLogic Server 11g to a Different Version (Doc ID 1309855.1)
    How to Upgrade the JDK Used by Oracle WebLogic Server 12c to a Different Version (Doc ID 1616397.1)
    How to Install and Maintain the Java SE Installed or Used with FMW 11g/12c Products (Doc ID 1492980.1)

10. Install the latest WebLogic PSU (Doc ID 1470197.1)

11. Verify Load Balancer

12. OMS Sizing

Refer:
Enterprise Manager Cloud Control Upgrade Guide

EM 12c R5: Checklist for Upgrading Enterprise Manager Cloud Control from Version 12.1.0.2/3/4 to 12.1.0.5 (Doc ID 2022505.1)

12c Database has been Certified as an EM 12.1.0.4 or 12.1.0.5 Repository with Certain Patchset and PSU Restrictions (Doc ID 1987905.1)

EM 12c: How to Patch the EM-Integrated Oracle BI Publisher (Doc ID 1982656.1)

http://oraforms.blogspot.com/2014/05/oracle-em12c-release-and-patch-schedules.html


Categories: Development

Forcing Garbage Collection in JDK manually using JVisualVM

Arun Bavera - Mon, 2015-09-14 14:43
You might have seen, many times, the heap crossing its limit and the GC algorithm not working properly, keeping old objects around for a long time.
Even though it is not advised to force a major GC manually, if you come across such a situation you can use the following method to clear the heap.
Note: If the heap size is huge (more than 6 GB), doing a major GC may cause the application to pause for a couple of seconds. Also, make sure you have enough system memory (RAM) to invoke the JVisualVM tool.
This is a typical method in many corporations where X Windows is not installed on the *NIX machines and the application account is locked down for direct login.
1) Log in as yourself to the Linux/Unix machine and make sure your laptop/desktop has an X emulator such as Xming running.
2) Note down the authorized X keys:    xauth list
3) Log in as the app owner:     sudo su - oracle
4) Add the X keys to oracle (the app owner session):
xauth add <full string from xauth list from previous session>

5) Run ps -ef | grep java, note down the JDK directory, and go directly to the JDK bin directory (/opt/app/oracle/jdk1.7.0_55/bin); in this case we are using JDK 7.
6) Invoke ./jvisualvm &
7) Choose the WebLogic PID, make sure in the Overview tab that the server name is the one you are interested in, and perform a manual GC.
  Note: From JDK 7 onwards, if your heap size is more than 6 GB, the G1 GC algorithm works best. A consolidated command-line sketch of these steps follows below.
     Also refer: https://blogs.oracle.com/g1gc/
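A consolidated sketch of the steps above (not from the original post); host names and JDK paths are illustrative:

xauth list                                  # step 2: note the authorized X keys (as yourself)
sudo su - oracle                            # step 3: switch to the app owner
xauth add <full string from xauth list>     # step 4: add the X key to the oracle session
ps -ef | grep java                          # step 5: find the JDK used by the WebLogic JVM
cd /opt/app/oracle/jdk1.7.0_55/bin
./jvisualvm &                               # steps 6/7: pick the WebLogic PID and use "Perform GC"
# Alternative without a GUI (JDK 7 and later): jcmd <weblogic_pid> GC.run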

Categories: Development

Report Carousel in APEX 5 UT

Dimitri Gielis - Mon, 2015-09-14 10:45
The Universal Theme in APEX 5.0 is full of nice things.

Did you already see the Carousel template for Regions?
When you add a region to your page with a couple of sub-regions and you give the parent region the "Carousel Container" template, it turns the regions into a carousel, so you can flip between the regions.

I was asked to provide the same functionality, but then on dynamic content.
So I decided to build a report template that would be shown as a carousel. Here's the result:



I really like carousels :)

Here's how you can have this report template in your app:
1) Create a new Report Template:


Make sure to select Named Column for the Template Type:


Add following HTML into the template at the given points:




That's it for the template.

Now you can create a new report on your page and give it the template you just created.
Here's the SQL Statement I used:

select PRODUCT_ID          as id,
       PRODUCT_NAME        as title,
       PRODUCT_DESCRIPTION as description,
       product_id,       
       dbms_lob.getlength(PRODUCT_IMAGE) as image,
       'no-icon'           as icon,
       null                as link_url 
  from DEMO_PRODUCT_INFO

Note 1: you have to use the same column aliases as you defined in the template.
Note 2: make sure you keep the real id of your image in the query too, as otherwise you'll get an error (no data found).

To make the carousel a bit nicer I added the following CSS to the page, but you could add it to your own CSS file or to the custom CSS section of Theme Roller.


Note: the carousel can work with an icon or an image. If you want to see an icon you can use for example "fa-edit fa-4x". When using an image, define the icon as no-icon.

Eager for more Universal Theme tips and tricks? Check out our APEX 5.0 UI training in Birmingham on December 10th. :)

For easier copy/paste into your template, you'll find the source below:

 *** Before Rows ***  
<div class="t-Region t-Region--carousel t-Region--showCarouselControls t-Region--hiddenOverflow" id="R1" role="group" aria-labelledby="R1_heading">
<div class="t-Region-bodyWrap">
<div class="t-Region-body">
<div class="t-Region-carouselRegions">
*** Column Template ***
<div data-label="#TITLE#" id="SR_R#ID#">
<a href="#LINK_URL#">
<div class="t-HeroRegion " id="R#ID#">
<div class="t-HeroRegion-wrap">
<div class="t-HeroRegion-col t-HeroRegion-col--left">
<span class="t-HeroRegion-icon t-Icon #ICON#"></span>
#IMAGE#
</div>
<div class="t-HeroRegion-col t-HeroRegion-col--content">
<h2 class="t-HeroRegion-title">#TITLE#</h2>
#DESCRIPTION#
</div>
<div class="t-HeroRegion-col t-HeroRegion-col--right"><div class="t-HeroRegion-form"></div><div class="t-HeroRegion-buttons"></div></div>
</div>
</div>
</a>
</div>
*** After Rows ***
</div>
</div>
</div>
</div>
*** Inline CSS ***
.t-HeroRegion-col.t-HeroRegion-col--left {
padding-left:60px;
}
.t-HeroRegion {
padding:25px;
border-bottom:0px solid #CCC;
}
.t-Region--carousel {
border: 1px solid #d6dfe6 !important;
}
.t-HeroRegion-col--left img {
max-height: 90px;
max-width: 130px;
}
.no-icon {
display:none;
}
Categories: Development

Bigfoot vs UFO analytics

Nilesh Jethwa - Sat, 2015-09-12 21:29

Bigfoot and UFOs remain elusive but still know how to make the news from time to time.

Read more at: http://www.infocaptor.com/dashboard/bigfoot-vs-ufo-analytics

Here We Go Again

Floyd Teter - Thu, 2015-09-10 13:59
Yup, moving on one more time.  Hopefully for the last time.  I’m leaving Sierra-Cedar Inc. for a position as Sr. Director with Oracle's HCM Center of Excellence team.

As an enterprise software guy, I see the evolution of SaaS and Cloud as the significant drivers of change in the field.  I want to be involved, I want to contribute in a meaningful way, I want to learn more, and I want to be at the center of it all.  And there is no better place for all that than Oracle.  I had the opportunity to meet most of the folks I’ll be working alongside…knew many of them and met a few new faces.  And I’m excited to work with them. So when the opportunity presented itself, I was happy to follow through on it.

I’ll also freely admit that I’ve seen…and experienced…a pretty substantial amount of upheaval regarding Oracle services partners over the past several years.  Some are fighting the cloud-driven changes in the marketplace, others have accepted the change but have yet to adapt, a few are substantially shifting their business model to provide relevant services as the sand shifts under their feet.  Personally, I’ve had enough upheaval for a bit.

The first mission at Oracle:  develop tools and methods to meaningfully reduce the lead time between customer subscription and customer go-live.  Pretty cool, as it lets me work on my #beat39 passion.  I'll be starting with building tools to convert data from legacy HCM applications to HCM Cloud through the HCM Data Loader ("HDL").


While I regret leaving a group of great people at SCI, I’m really looking forward to rejoining Oracle.  I kind of feel like a minion hitting the banana goldmine!

Building an Oracle NoSQL cluster using Docker

Marcelo Ochoa - Thu, 2015-09-10 09:43
Continuing with my previous post about using Docker in a development/testing environment, this time the case is how to build an Oracle NoSQL cluster on a single machine using Docker.
I assume that you already have Docker installed and running; there are plenty of tutorials about that, and in my case on Ubuntu it is just a two-step install using apt-get :)
My starting point was using some ideas from another Docker project for building a Hadoop cluster.
That project uses another great idea, Serf/Dnsmasq on Docker; the motivation, extracted from its README.md file, is:
This image aims to provide resolvable fully qualified domain names,
between dynamically created docker containers on ubuntu.
## The problem
By default **/etc/hosts** is readonly in docker containers. The usual
solution is to start a DNS server (probably as a docker container) and pass
a reference when starting docker instances: `docker run -dns `
So with this idea in mind I wrote this Dockerfile:

FROM java:openjdk-7-jdk
MAINTAINER marcelo.ochoa@gmail.com
RUN export DEBIAN_FRONTEND=noninteractive && \
    apt-get update && \
    apt-get install -y dnsmasq unzip curl ant ant-contrib junit
# dnsmasq configuration
ADD dnsmasq.conf /etc/dnsmasq.conf
ADD resolv.dnsmasq.conf /etc/resolv.dnsmasq.conf
# install serfdom.io
RUN curl -Lo /tmp/serf.zip https://dl.bintray.com/mitchellh/serf/0.5.0_linux_amd64.zip
RUN curl -Lo /tmp/kv-ce-3.3.4.zip http://download.oracle.com/otn-pub/otn_software/nosql-database/kv-ce-3.3.4.zip
RUN unzip /tmp/serf.zip -d /bin
RUN unzip /tmp/kv-ce-3.3.4.zip -d /opt
RUN rm -f /tmp/serf.zip
RUN rm -f /tmp/kv-ce-3.3.4.zip
ENV SERF_CONFIG_DIR /etc/serf
# configure serf
ADD serf-config.json $SERF_CONFIG_DIR/serf-config.json
ADD event-router.sh $SERF_CONFIG_DIR/event-router.sh
RUN chmod +x  $SERF_CONFIG_DIR/event-router.sh
ADD handlers $SERF_CONFIG_DIR/handlers
ADD start-serf-agent.sh  $SERF_CONFIG_DIR/start-serf-agent.sh
RUN chmod +x  $SERF_CONFIG_DIR/start-serf-agent.sh
EXPOSE 7373 7946 5000 5001 5010 5011 5012 5013 5014 5015 5016 5017 5018 5019 5020
CMD /etc/serf/start-serf-agent.sh
The relevant information was marked in bold; here is the explanation:
  • FROM java:openjdk-7-jdk, this Docker base image already has Ubuntu and Java 7 installed, so only a few additions are required
  • RUN curl .. /0.5.0_linux_amd64.zip, this is a compiled version of the Serf implementation ready to run on Ubuntu
  • RUN curl -Lo .. /kv-ce-3.3.4.zip, this is the Community Edition of Oracle NoSQL, a free download
  • CMD /etc/serf/start-serf-agent.sh, this is the script, modified from the original Docker/serf project, which configures Oracle NoSQL just after the container boots.
The last point requires a special explanation. There are three bash functions for starting, stopping, and creating the bootconfig file for the NoSQL nodes; here are the relevant sections:
stop_database() {
        java -Xmx256m -Xms256m -jar $KVHOME/lib/kvstore.jar stop -root $KVROOT
exit
}
start_database() {
nohup java -Xmx256m -Xms256m -jar $KVHOME/lib/kvstore.jar start -root $KVROOT &
}
create_bootconfig() {
        [[ -n $NODE_TYPE ]] && [[ $NODE_TYPE = "m" ]] && java -jar $KVHOME/lib/kvstore.jar makebootconfig -root $KVROOT -port 5000 -admin 5001 -host "$(hostname -f)" -harange 5010,5020 -store-security none -capacity 1 -num_cpus 0 -memory_mb 0
        [[ -n $NODE_TYPE ]] && [[ $NODE_TYPE = "s" ]] && java -jar $KVHOME/lib/kvstore.jar makebootconfig -root $KVROOT -port 5000 -host "$(hostname -f)" -harange 5010,5020 -store-security none -capacity 1 -num_cpus 0 -memory_mb 0
}
The last function (create_bootconfig) works differently depending on whether the node is designated as master ($NODE_TYPE = "m") or slave ($NODE_TYPE = "s").
I decided not to persist the NoSQL storage after the Docker containers stop, but it is possible to map the directory where the NoSQL nodes reside to an external location, as I showed in my previous post; with that configuration the NoSQL storage is not re-created at every boot.
With the above explanations, we can create the Docker image using:
root@local:/var/lib/docker/dockerfiles/build-oracle-nosql# docker build -t "oracle-nosql/serf" .
The complete list of files required can be downloaded as a zip from this location.
Once the image is built, we can start a cluster of 3 nodes by simply executing the script start-cluster.sh; this script creates a node named master.mycorp.com and two slaves, slave[1..2].mycorp.com. Here is the output (a sketch of the script itself follows the output):
root@local:/var/lib/docker/dockerfiles/build-oracle-nosql# ./start-cluster.sh
WARNING: Localhost DNS setting (--dns=127.0.0.1) may fail in containers.
e4932053780227f2a99e167f6efb0b1eeb9fda93fba2aa9206c7a9f05bacc25c
WARNING: Localhost DNS setting (--dns=127.0.0.1) may fail in containers.
d6d0387c6893263141d58efa80933065be23aa3c98651dc6358bf7d7688d32cf
WARNING: Localhost DNS setting (--dns=127.0.0.1) may fail in containers.
4fc18aebf466ec67de18c72c22739337499b5a76830f86d90a6533ff3bb6e314
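A minimal sketch of what start-cluster.sh contains, derived from the persistent variant shown later in this post by dropping the --volume mappings; the actual script ships in the zip linked above:

# start-cluster.sh (sketch): one master plus two slaves, no persistent volumes
docker run -d -t --publish=5000:5000 --publish=5001:5001 --dns 127.0.0.1 -e NODE_TYPE=m -P --name master -h master.mycorp.com oracle-nosql/serf
FIRST_IP=$(docker inspect --format="{{.NetworkSettings.IPAddress}}" master)
docker run -d -t --dns 127.0.0.1 -e NODE_TYPE=s -e JOIN_IP=$FIRST_IP -P --name slave1 -h slave1.mycorp.com oracle-nosql/serf
docker run -d -t --dns 127.0.0.1 -e NODE_TYPE=s -e JOIN_IP=$FIRST_IP -P --name slave2 -h slave2.mycorp.com oracle-nosql/serf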
You can check the status of the cluster by executing a serf command on the master node, for example:
root@local:/var/lib/docker/dockerfiles/build-oracle-nosql# docker exec -ti master serf members
master.mycorp.com  172.17.0.71:7946  alive
slave1.mycorp.com  172.17.0.72:7946  alive
slave2.mycorp.com  172.17.0.73:7946  alive
At this point the 3 NoSQL nodes are ready to work, but they are unconfigured; here is the output of the NoSQL ping command:
root@local:/var/lib/docker/dockerfiles/build-oracle-nosql# docker exec -ti master java -jar /opt/kv-3.3.4/lib/kvstore.jar ping -host master -port 5000
SNA at hostname: master, registry port: 5000 is not registered.
No further information is available
Using the examples from the Oracle NoSQL documentation, we can create a store using this plan (script.txt):
configure -name mystore
plan deploy-zone -name "Boston" -rf 3 -wait
plan deploy-sn -zn zn1 -host master.mycorp.com -port 5000 -wait
plan deploy-admin -sn sn1 -port 5001 -wait
pool create -name BostonPool
pool join -name BostonPool -sn sn1
plan deploy-sn -zn zn1 -host slave1.mycorp.com -port 5000 -wait
pool join -name BostonPool -sn sn2
plan deploy-sn -zn zn1 -host slave2.mycorp.com -port 5000 -wait
pool join -name BostonPool -sn sn3
topology create -name topo -pool BostonPool -partitions 300
plan deploy-topology -name topo -wait
show topology
To simply submit this plan to the NoSQL nodes there is a script named deploy-store.sh. Here is the output (a sketch of the script itself follows the output):
root@local:/var/lib/docker/dockerfiles/build-oracle-nosql# ./deploy-store.sh 
Store configured: mystore
Executed plan 1, waiting for completion...
Plan 1 ended successfully
Executed plan 2, waiting for completion...
Plan 2 ended successfully
Executed plan 3, waiting for completion...
Plan 3 ended successfully
Added Storage Node(s) [sn1] to pool BostonPool
Executed plan 4, waiting for completion...
Plan 4 ended successfully
Added Storage Node(s) [sn2] to pool BostonPool
Executed plan 5, waiting for completion...
Plan 5 ended successfully
Added Storage Node(s) [sn3] to pool BostonPool
Created: topo
Executed plan 6, waiting for completion...
Plan 6 ended successfully
store=mystore  numPartitions=300 sequence=308
  zn: id=zn1 name="Boston" repFactor=3 type=PRIMARY
  sn=[sn1] zn:[id=zn1 name="Boston"] master.mycorp.com:5000 capacity=1 RUNNING
    [rg1-rn1] RUNNING
          No performance info available
  sn=[sn2] zn:[id=zn1 name="Boston"] slave1.mycorp.com:5000 capacity=1 RUNNING
    [rg1-rn2] RUNNING
          No performance info available
  sn=[sn3] zn:[id=zn1 name="Boston"] slave2.mycorp.com:5000 capacity=1 RUNNING
    [rg1-rn3] RUNNING
          No performance info available
  shard=[rg1] num partitions=300
    [rg1-rn1] sn=sn1
    [rg1-rn2] sn=sn2
    [rg1-rn3] sn=sn3
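A minimal sketch of what deploy-store.sh might look like; the actual script ships in the zip linked above, and the file paths here are illustrative. It copies script.txt into the master container and feeds it to the kvstore admin CLI:

# deploy-store.sh (sketch): submit the plan in script.txt to the admin service
docker cp script.txt master:/tmp/script.txt
docker exec -ti master java -jar /opt/kv-3.3.4/lib/kvstore.jar runadmin -host master -port 5000 load -file /tmp/script.txt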
You can also access the NoSQL Admin page at the URL http://localhost:5001/ because the start-cluster.sh script publishes this port outside the master container.
Here is the screenshot:


The cluster is ready!! Have fun storing your data.

Addendum!!
Persistent NoSQL store: as I mentioned earlier in this post, if we map /var/kvroot to the host machine, the NoSQL store will persist through multiple executions of the cluster. For example, create 3 directories:
root@local:/var/lib/docker/dockerfiles/build-oracle-nosql# mkdir /tmp/kvroot1
root@local:/var/lib/docker/dockerfiles/build-oracle-nosql# mkdir /tmp/kvroot2
root@local:/var/lib/docker/dockerfiles/build-oracle-nosql# mkdir /tmp/kvroot3
and create a new shell script (start-cluster-persistent.sh) that starts the cluster with the containers mapped to the above directories:
docker run -d -t --volume=/tmp/kvroot1:/var/kvroot --publish=5000:5000 --publish=5001:5001 --dns 127.0.0.1 -e NODE_TYPE=m -P --name master -h master.mycorp.com oracle-nosql/serf
FIRST_IP=$(docker inspect --format="{{.NetworkSettings.IPAddress}}" master)
docker run -d -t --volume=/tmp/kvroot2:/var/kvroot --dns 127.0.0.1 -e NODE_TYPE=s -e JOIN_IP=$FIRST_IP -P --name slave1 -h slave1.mycorp.com oracle-nosql/serf
docker run -d -t --volume=/tmp/kvroot3:/var/kvroot --dns 127.0.0.1 -e NODE_TYPE=s -e JOIN_IP=$FIRST_IP -P --name slave2 -h slave2.mycorp.com oracle-nosql/serf
We can start and deploy the store for the first time using:
root@local:/var/lib/docker/dockerfiles/build-oracle-nosql# ./start-cluster-persistent.sh
... output here...
root@local:/var/lib/docker/dockerfiles/build-oracle-nosql# ls -ltr /tmp/kvroot1
total 8
-rw-r--r-- 1 root root  52 sep 10 20:19 security.policy
-rw-r--r-- 1 root root 781 sep 10 20:19 config.xml
...
root@local:/var/lib/docker/dockerfiles/build-oracle-nosql# ./deploy-store.sh 
... output here ...
root@local:/var/lib/docker/dockerfiles/build-oracle-nosql# docker exec -ti master java -jar /opt/kv-3.3.4/lib/kvstore.jar ping -host master -port 5000
Pinging components of store mystore based upon topology sequence #308
300 partitions and 3 storage nodes
Time: 2015-09-10 23:20:18 UTC   Version: 12.1.3.3.4
Shard Status: total:1 healthy:1 degraded:0 noQuorum:0 offline:0
Zone [name="Boston" id=zn1 type=PRIMARY]   RN Status: total:3 online:3 maxDelayMillis:0 maxCatchupTimeSecs:0
Storage Node [sn1] on master.mycorp.com:5000    Zone: [name="Boston" id=zn1 type=PRIMARY]    Status: RUNNING   Ver: 12cR1.3.3.4 2015-04-24 09:01:17 UTC  Build id: e3ae28b507bf
Admin [admin1] Status: RUNNING,MASTER
Rep Node [rg1-rn1] Status: RUNNING,MASTER sequenceNumber:627 haPort:5011
Storage Node [sn2] on slave1.mycorp.com:5000    Zone: [name="Boston" id=zn1 type=PRIMARY]    Status: RUNNING   Ver: 12cR1.3.3.4 2015-04-24 09:01:17 UTC  Build id: e3ae28b507bf
Rep Node [rg1-rn2] Status: RUNNING,REPLICA sequenceNumber:627 haPort:5010 delayMillis:0 catchupTimeSecs:0
Storage Node [sn3] on slave2.mycorp.com:5000    Zone: [name="Boston" id=zn1 type=PRIMARY]    Status: RUNNING   Ver: 12cR1.3.3.4 2015-04-24 09:01:17 UTC  Build id: e3ae28b507bf
Rep Node [rg1-rn3] Status: RUNNING,REPLICA sequenceNumber:627 haPort:5010 delayMillis:0 catchupTimeSecs:0
As you can see the cluster is ready for storing data. Now we will stop and start it again to see that it is not necessary to redeploy the configuration:
root@local:/var/lib/docker/dockerfiles/build-oracle-nosql# ./stop-cluster.sh
... output here ...
root@local:/var/lib/docker/dockerfiles/build-oracle-nosql# ./start-cluster-persistent.sh
... output here ...
root@local:/var/lib/docker/dockerfiles/build-oracle-nosql# docker exec -ti master java -jar /opt/kv-3.3.4/lib/kvstore.jar ping -host master -port 5000
Pinging components of store mystore based upon topology sequence #308
300 partitions and 3 storage nodes
Time: 2015-09-10 23:34:15 UTC   Version: 12.1.3.3.4
Shard Status: total:1 healthy:1 degraded:0 noQuorum:0 offline:0
Zone [name="Boston" id=zn1 type=PRIMARY]   RN Status: total:3 online:3 maxDelayMillis:2342 maxCatchupTimeSecs:-4
Storage Node [sn1] on master.mycorp.com:5000    Zone: [name="Boston" id=zn1 type=PRIMARY]    Status: RUNNING   Ver: 12cR1.3.3.4 2015-04-24 09:01:17 UTC  Build id: e3ae28b507bf
Admin [admin1] Status: RUNNING,MASTER
Rep Node [rg1-rn1] Status: RUNNING,REPLICA sequenceNumber:639 haPort:5011 delayMillis:2342 catchupTimeSecs:-4
Storage Node [sn2] on slave1.mycorp.com:5000    Zone: [name="Boston" id=zn1 type=PRIMARY]    Status: RUNNING   Ver: 12cR1.3.3.4 2015-04-24 09:01:17 UTC  Build id: e3ae28b507bf
Rep Node [rg1-rn2] Status: RUNNING,REPLICA sequenceNumber:639 haPort:5010 delayMillis:0 catchupTimeSecs:0
Storage Node [sn3] on slave2.mycorp.com:5000    Zone: [name="Boston" id=zn1 type=PRIMARY]    Status: RUNNING   Ver: 12cR1.3.3.4 2015-04-24 09:01:17 UTC  Build id: e3ae28b507bf
Rep Node [rg1-rn3] Status: RUNNING,MASTER sequenceNumber:639 haPort:5010
And that's all; the last ping command shows that the store survives the stop/remove/start container cycle.


Spring Session - Spring Boot application for IBM Bluemix

Pas Apicella - Thu, 2015-09-10 07:28
The following guide shows how to use Spring Session to transparently leverage Redis to back a web application’s HttpSession when using Spring Boot.

http://docs.spring.io/spring-session/docs/current/reference/html5/guides/boot.html

The demo below is a simple Spring Boot / Thymeleaf / Bootstrap application to test session replication using Spring Session - Spring Boot within IBM Bluemix. The same demo will run on Pivotal Cloud Foundry as well.

IBM DevOps URL ->

https://hub.jazz.net/project/pasapples/SpringBootHTTPSession/overview

Sample Project on GitHub ->

https://github.com/papicella/SpringBootHTTPSession



More Information

The Portable, Cloud-Ready HTTP Session
https://spring.io/blog/2015/03/01/the-portable-cloud-ready-http-session
Categories: Fusion Middleware

Amazon S3 to Glacier - Cloud ILM

Pakistan's First Oracle Blog - Wed, 2015-09-09 19:27
Falling in love with Kate Upton is easy, but even easier is to be swept off your feet by information lifecycle management (ILM) in Amazon Web Services (AWS). Simple, easily configurable, fast, reliable, cost effective, and proven are the words that describe it.

Pythian has been involved with ILM for a long time. Across various flavors of databases and systems, Pythian has been overseeing the creation, alteration, and flow of data until it becomes obsolete. That is why AWS's ILM resonates perfectly well with Pythian's expertise.

Amazon S3 is an object store for short-term storage, whereas Amazon Glacier is their cloud archiving offering for long-term storage. Rules can be defined on the information to specify and automate its lifecycle.

The following screenshot shows rules being configured to move objects from an S3 bucket to Glacier and then permanently delete them: 90 days after creation of an object, it will be moved to Glacier, and then after 1 year it will be permanently deleted. Look at the graphical representation of the lifecycle and how intuitive it is. (An equivalent rule expressed with the AWS CLI is sketched below.)
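A minimal sketch, not from the original post, of the same lifecycle rule expressed with the AWS CLI; the bucket name and rule ID are illustrative, and the exact subcommand and rule syntax can vary by CLI version:

# Transition objects to Glacier 90 days after creation, expire them after 365 days
cat > lifecycle.json <<'EOF'
{
  "Rules": [
    {
      "ID": "archive-then-expire",
      "Filter": { "Prefix": "" },
      "Status": "Enabled",
      "Transitions": [ { "Days": 90, "StorageClass": "GLACIER" } ],
      "Expiration": { "Days": 365 }
    }
  ]
}
EOF
aws s3api put-bucket-lifecycle-configuration --bucket my-example-bucket --lifecycle-configuration file://lifecycle.json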



Categories: DBA Blogs

Oracle Priority Support Infogram for 09-SEP-2015

Oracle Infogram - Wed, 2015-09-09 15:49

RDBMS

Database Insider - September 2015 issue now available, from Exadata Partner Community EMEA.

Some good posts this week over at Update your Database – NOW!






SQL

If you’ve never been to the Ask Tom site and you have anything to do with Oracle technologies, well, where have you been hanging out? It is always one of the best sources for SQL, PL/SQL, Oracle internals, design, etc. See this posting for an update: Ask Tom Questions: the Good, the Bad and the Ugly, from All Things SQL.

OVM

Oracle VM VirtualBox 5.0.4 now available!, from Oracle’s Virtualization Blog.

Big Data


WebLogic

Additional new material WebLogic Community, from WebLogic Partner Community EMEA.

ADF

Insert and show whitespace in ADF Faces Components, from WebLogic Partner Community EMEA.


Mobile Computing

Quick Tip: Multi Line Labels on Command Button, from The Oracle Mobile Platform Blog.

EBS

From the Oracle E-Business Suite Support blog:





From the Oracle E-Business Suite Technology blog:

Are AMP Support Dates Based on EBS or EM Releases?

JavaScript stored procedures as Cloud data services.

Kuassi Mensah - Tue, 2015-09-08 17:37
Find out how to implement JavaScript Stored Procedures with Oracle Database 12c and how to invoke these through RESTful Web Services.

https://blogs.oracle.com/java/entry/nashorn_and_stored_procedures

Every Word Counts: Translating the Oracle Applications Cloud User Experience

Usable Apps - Mon, 2015-09-07 07:06
Loic Le Guisquet. Image by Oracle PR.

"Successfully crossing new frontiers in commerce needs people who understand local preferences as well as global drivers. In addition, technology has also been a great enabler of globalization, so the right balance between people and tech is key to success."

- Loïc Le Guisquet, Oracle President for EMEA and APAC

Oracle's worldwide success is due to a winning combination of smart people with local insight and great globalized technology. The Oracle Applications Cloud user experience (UX)—that competitive must-have and differentiator—is also a story of global technology and empathy for people everywhere.

UX provides for the cultural dynamics of how people work, the languages they speak, and local conventions and standards on the job. So, how do we deliver global versions of SaaS? Oracle Applications UX Communications and Outreach's Karen Scipi (@karenscipi) explains:

How We Build for Global Users

Oracle Applications Cloud is currently translated into 23 natural languages, besides U.S. English, using a process that ensures translated versions meet the latest user expectations about language, be it terminology, style, or tone.

Oracle HCM Cloud R10 Optimized for Global Working on YouTube

Global Workforce Optimization with Oracle HCM Cloud Release 10: More than 220 countries or jurisdictions supported.

Oracle Applications Cloud is designed for global use and deployment, leveraging Oracle ADF’s built-in internationalization (i18n) and translatability support to make development and translation easy. For example:

  • Translatable text is stored separately (externalized) from the application code for each language version (called a National Language Support [NLS] version).
  • Externalized text is contained in industry-standard XML Localization Interchange File Format (XLIFF)-based resource bundles, enabling not only safe, fast translation but also easy maintenance on a per language basis.
  • Currency, date, time, characters, reading and writing directions, and other local standards and conventions are automatically built in for developers. Oracle ADF uses the industry-standard i18n support of Oracle Java and Unicode.

In addition:

  • Users can enter and display data in their language of choice, independent of the language of the user interface: relying on what we call multilingual support (or MLS) architecture.
  • The software includes global and country-specific localizations that provide functionality for country- and region-specific statutory regulatory requirements, compliance reporting, local data protection rules, business conventions, organizational structure, payroll, and other real-world necessities for doing business with enterprise software.
  • Users can switch the language of their application session through personalization options.
  • NLS versions can be customized and extended in different languages by using Oracle composer tools to align with their business identity and process. Translated versions, too, rely on the same architecture as the U.S. version for safe customizations and updates.

How We Translate

During development, the U.S. English source text is pseudo-translated using different language characters (such as symbols, Korean and Arabic characters), "padded" to simulate the longer words of other languages, and then tested with international data by product teams. This enables developers to test for translation and internationalization issues (such as any hard-coded strings still in English, or spacing, alignment, and bi-directional rendering issues) before external translation starts.

Hebrew version of Oracle Sales Cloud Release 8

Internationalized from the get-go: Oracle Sales Cloud in Hebrew (Release 8) shows the built-in bi-directional power of Oracle ADF.

For every target language, the Oracle Worldwide Product Translation Group (WPTG) contracts with professional translators in each country to perform the translation work. Importantly, these in-country translators do not perform literal translations of content but use the choice terms, style, and tone that local Oracle WPTG language specialists specify and that our applications users demand in each country or locale.

Mockup of French R10 Oracle Sales Cloud

Mockup of an Oracle Sales Cloud landing page in French. (Image credit: Laurent Adgie, Oracle Senior Sales Consultant)

NLS versions of Oracle Applications Cloud are made available to customers at the same time as the U.S. English version, released as NLS language packs that contain the translated user interface (UI) text, messages, and embedded help for each language. The secret sauce of this ability to make language versions available at the same time is a combination of Oracle technology and smart people too: translation, in fact, begins as soon as the text is created, and not when it's released! 

And, of course, before the NLS versions of Oracle Applications Cloud are released, Oracle language quality and functional testing teams rigorously test them.

The Language of Choice

Imagine an application that will be used in North America, South America, Europe, and Asia. What words should you choose for the UI?

  • The label Last Name or Surname?
  • The label Social Security Number, Social Insurance Number, or National Identification Number?
  • The MM-DD-YYYY, DD-MM-YYYY, or YYYY-MM-DD date format?

The right word choice for a label in one country, region, or protectorate is not necessarily the right word choice in another. Insight and care are needed in that decision. Language is a critical part of UX, and in the Oracle Applications Cloud UX all the text you see is written by information development professionals, leaving software developers free to concentrate on building the applications productively and consistently using UX design patterns based on Oracle ADF components.

Our focus on language design—choosing accurate words and specialized terms and pairing them with a naturally conversational voice and tone—and on providing descriptions and context for translators and customizers alike also enables easy translation. Translated versions of application user interface pages are ultimately only as accurate, clear, and understandable as their source pages.

In a future blog post we'll explore how PaaS4SaaS partners and developers using the Oracle Applications Cloud Simplified UX Rapid Development Kit can choose words for their simplified UIs that will resonate with the user’s world and optimize the overall experience.

For More Information

For insights into language design and translation considerations for Oracle Applications Cloud and user interfaces in general, see the Oracle Not Lost in Translation blog and Blogos.

Solaris VM Templates for WebLogic Server 12.1.3

Steve Button - Sun, 2015-09-06 19:45
A new set of VM Templates for Solaris has been made available on OTN.
These templates provide a quick and easy way to spin up pre-built WebLogic Server 12.1.3 instances using either Solaris Zones or Oracle VM Server for SPARC.

http://www.oracle.com/technetwork/server-storage/solaris11/downloads/solaris-vm-2621499.html

