Feed aggregator

Oracle MAF 2.1.3 Integration with OMSS and MCS

Containerization is the ability to inject security functionality for native, 3rd party or custom apps with zero code changes. These containerized apps can then be deployed in ...

We share our skills to maximize your revenue!
Categories: DBA Blogs

SQL On The Edge #1 – 2016 Installer

Pythian Group - Fri, 2015-09-11 14:35


Hello and welcome to my new video blog (vlog) series, SQL On The Edge! My goal with these videos is to provide short, concise demonstrations of the latest and greatest features regarding SQL Server, Azure SQL Database and other Microsoft data platform tools. I’m very interested in other related tech like Azure Data Factory, Power BI, etc., so expect the series to branch out in those directions as well!

For today I’m going to be showing the installation process for the latest SQL Server 2016 CTP release: 2.3. This release just came out September 2nd, 2015 and is the best way to play around with the upcoming full release of the product.

As a quick reminder, these are some of the main new features coming to SQL 2016:

  • Improvements for AlwaysOn Availability Groups
  • Improvements on capabilities of the In-Memory OLTP technology
  • Improvements on the Columnstore indexes technology
  • New support for processing JSON
  • New Query Store feature
  • Temporal tables
  • New Azure integrations and many more.

Yeah, it’s a pretty big list, and even though SQL Server 2014 was released not long ago, this doesn’t feel like a mere 2014 R2. Rest assured, I will be covering all of these new features in detail in new video episodes as time goes by.

For now, let’s jump into the first video in the series where we run down the installation process for this new CTP 2.3 and point out some of the best practices and new features on this new installer. Be sure to grab your own copy of the CTP installer right here and try this out for yourself.


Discover more about our expertise with SQL Server.

Categories: DBA Blogs

SQLCL - More secure, now with REST !

Kris Rice - Fri, 2015-09-11 14:30
A new SQLCL build was just posted; go grab it and kick the tires. There are well over 100 bug fixes in there, so it's better than ever. There are also some new things. More Secure: Imagine you have an API that is called and a password or something similar is used in the parameters. We use exec MY_API(...) and it works just fine. However, consider someone with access to v$sql: they just got ...

Log Buffer #440: A Carnival of the Vanities for DBAs

Pythian Group - Fri, 2015-09-11 14:14


This Log Buffer Edition carries on with the weekly culling of blog posts from Oracle, SQL Server and MySQL.


Oracle:

Most people are now aware that in 12c, a VARCHAR2 has been extended from a maximum size of 4000 to 32767 bytes.

FDMEE is offered as the full-featured successor to Oracle Hyperion Financial Data Quality Management ERP Integration Adapter for Oracle Applications (ERP Integrator) and Oracle Hyperion Financial Data Quality Management (FDM).

Cybersecurity is Hot! In fact, so is the weather here in California at this moment.

How do you monitor a ZFS Storage Appliance with Oracle Enterprise Manager 12c?

Using SYSBACKUP in 12c with a media manager layer

SQL Server:

Autogenerating SSIS file import packages using Biml

When to Quote in PowerShell

More on CXPACKET Waits: Skewed Parallelism

Streamline Log Shipping Failovers: DR Made Just For You…

Getting query text from SSMS into Excel with PowerShell


MySQL:

MariaDB 10.1.7 now available

Track and Optimize Server Connection Methods

Abstracting Binlog Servers and MySQL Master Promotion without Reconfiguring all Slaves

Testing MySQL partitioning with pt-online-schema-change

Third day with InnoDB transparent page compression

Categories: DBA Blogs

Measuring CPU Performance across servers

Pythian Group - Fri, 2015-09-11 14:09

The y-cruncher utility is a handy little tool that will calculate digits of Pi using multiple threads, and will report back with detailed information on the duration of each step. This allows a direct 1:1 comparison of CPU performance across all servers. Y-Cruncher will perform benchmark testing, stress testing, and I/O analysis, as well as let you create a custom test for very large machines.

This is particularly nice for Virtual Machines, where you may not be sure the Hypervisor is configured correctly and need to test.

The output is easy to read, and gives information on every step performed. For most of us, the important pieces will be:

Start Date: Thu Sep 10 17:43:14 2015
End Date: Thu Sep 10 17:45:48 2015

Computation Time: 120.163 seconds
Total Time: 154.007 seconds

CPU Utilization: 98.577 %
Multi-core Efficiency: 98.577 %
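
If you save the report from each server to a file, the handful of lines above are all you need for the comparison. Below is a minimal Python sketch, not part of y-cruncher itself, that pulls those figures out of saved logs so runs from different machines can be lined up side by side; the log file names are hypothetical.

import re

# Regular expressions for the y-cruncher summary lines shown above.
FIELDS = {
    "computation_sec": r"Computation Time:\s*([\d.]+)\s*seconds",
    "total_sec":       r"Total Time:\s*([\d.]+)\s*seconds",
    "cpu_util_pct":    r"CPU Utilization:\s*([\d.]+)\s*%",
}

def parse_log(path):
    # Return the summary figures found in one saved y-cruncher report.
    text = open(path).read()
    result = {}
    for name, pattern in FIELDS.items():
        match = re.search(pattern, text)
        if match:
            result[name] = float(match.group(1))
    return result

# Hypothetical log files, one per server under test.
for log in ["azure_a1.log", "azure_d13.log", "aws_c4_2xlarge.log"]:
    print(log, parse_log(log))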

I thought a nice test of this utility would be to compare a few different sizes of Microsoft Azure & Amazon AWS servers and see how they perform. For all of these tests, I am running Windows 2012 R2, calculating 1 million digits of pi, and using multi-threaded processing when I have more than 1 vCPU.

The table below shows my (not surprising) results.

Provider | Type         | CPU Info                        | Num of vCPU | CPU Time (Sec)
Azure    | Standard A1  | AMD Opteron 4171 HE 2.10 GHz    | 1           | 14446.791
Azure    | Standard D13 | Intel Xeon E5-2660 v0 2.20 GHz  | 8           | 293.939
Azure    | Standard G3  | Intel Xeon E5-2698B v3 2.00 GHz | 8           | 142.508
AWS      | t2.small     | Intel Xeon E5-2670 v2 2.50 GHz  | 1           | 3115.828
AWS      | m4.2xlarge   | Intel Xeon E5-2676 v2 2.40 GHz  | 8           | 205.36
AWS      | c4.2xlarge   | Intel Xeon E5-2666 v3 2.90 GHz  | 8           | 177.72
Categories: DBA Blogs

Amazon S3 to Glacier – Cloud ILM

Pythian Group - Fri, 2015-09-11 13:57

Falling in love with Kate Upton is easy, but even better than that is to be swept off your feet by Information Lifecycle Management (ILM) in Amazon Web Services (AWS). But that’s understandable, right? :) Simple, easily configurable, fast, reliable, cost-effective and proven are the words that describe it.

Pythian has been involved with ILM for a long time. Across various flavours of databases and systems, Pythian has long overseen the creation, alteration, and flow of data until it becomes obsolete. That is why AWS’s ILM resonates perfectly well with Pythian’s expertise.

Amazon S3 is an object store for short-term storage, whereas Amazon Glacier is AWS’s cloud archiving offering for long-term storage. Rules can be defined on the information to specify and automate its lifecycle.

The following screenshot shows the rules being configured on objects, from the S3 bucket to Glacier and then to permanent deletion. An object is moved to Glacier 90 days after creation, and then permanently deleted after 1 year. The graphical representation of the lifecycle shows how intuitive it is.
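
The same rule can also be defined programmatically rather than through the console. Here is a minimal boto3 sketch of an equivalent lifecycle configuration, assuming valid AWS credentials and a hypothetical bucket named example-bucket: objects transition to Glacier 90 days after creation and are permanently deleted after 365 days.

import boto3

s3 = boto3.client("s3")

# Lifecycle rule: move objects to Glacier after 90 days, then delete them after one year.
s3.put_bucket_lifecycle_configuration(
    Bucket="example-bucket",            # hypothetical bucket name
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "archive-then-expire",
                "Filter": {"Prefix": ""},  # apply to every object in the bucket
                "Status": "Enabled",
                "Transitions": [{"Days": 90, "StorageClass": "GLACIER"}],
                "Expiration": {"Days": 365},
            }
        ]
    },
)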




Discover more about our expertise in Cloud.

Categories: DBA Blogs

My Sales Journey: Episode 2

Pythian Group - Fri, 2015-09-11 06:46


It is Week 2 at Pythian, and one thing that is becoming clear to me is that I am surrounded by people trying to create value for clients or be valuable themselves. The more we innovate and think about doing things in new, interesting ways, the more engaged we become in our work. Sales, with its endless outreach and prospecting, may seem tedious to many of us, so an atmosphere of engagement and team spirit, with lots of jokes thrown in, can make the day less daunting.

This week is definitely more intense than the last one. The scope of on-boarding has more than doubled. I have attended initiation sessions with VP’s of Infrastructure and Service Delivery and I have shadowed every member on the sales team to learn their approach and hopefully take the best of it and create my own.

Salesforce is a “force” with a big learning curve if you have never used it before. Being computer and web savvy definitely goes a long way toward figuring out its intricacies. I am lucky to have a sales research team to help build lists and provide support. Another perk of working at Pythian!

It’s Friday! The week flew past me in a good way. My on-boarding checklist is complete, this post is almost done, the first few calls to get me started are complete and so are the first few emails! Nothing beats starting the first day of the week feeling accomplished.

If you are reading this I would love to hear from you about your experience just starting out in Sales or if you are seasoned and have some tricks to share. Looking forward to hearing from you!


Categories: DBA Blogs

Links for 2015-09-10

Categories: DBA Blogs

October 7: The SSI Group Oracle Sales Cloud Customer Reference Forum

Linda Fishman Hoyle - Thu, 2015-09-10 18:15

Join us for another Oracle Customer Reference Forum on October 7, 2015.

Terry Pefanis, CFO of The SSI Group, will talk about their success implementing Oracle Sales Cloud and how it has transformed their business.

The transition to Oracle Sales Cloud has given The SSI Group more visibility into its pipeline and increased its ability to uncover opportunities, along with improved reporting and analytics. The company has been able to reduce the total cost of ownership and streamline its vendor management. In addition, Oracle Sales Cloud has enabled SSI to automate and simplify its monthly customer contract billing process.

SSI is a forward-thinking innovator that is powering the business of healthcare through improved flexibility, connectivity and integration. The company offers a single-vendor, end-to-end Revenue Cycle Management (RCM) solution featuring front-end patient access solutions; industry-leading, best-in-class billing and claims transmission; contract management, audit management, release of information and attachment processing; and an advanced analytics and business intelligence product suite.

Industry: Healthcare

Register now to attend the live Forum on Wednesday, October 7, 2015, at 3:00 p.m. GMT / 8:00 a.m. PDT (U.S.) / 10:00 a.m. CDT (U.S.).

Oracle Big Data Integrator Transforms and Enriches Big Data

Oracle Data Integrator is a comprehensive data integration platform that covers all data integration requirements: from high-volume, high-performance batch loads, to event-driven, trickle-feed...

We share our skills to maximize your revenue!
Categories: DBA Blogs

Enabling the Modern Supply Chain—Oracle SCM at OpenWorld 2015

Linda Fishman Hoyle - Thu, 2015-09-10 14:47

A Guest Post by Jon Chorley (pictured left), CSO and Group VP, SCM Product Strategy and PLM

We are in the midst of a massive change in the way all software, including supply chain software, is conceived, built, delivered and supported. While I am talking about the cloud, I’m also talking about what defines the supply chain applications of the future. What makes them broader, better and faster? What makes them modern?

To address these challenges, Oracle is delivering the largest set of new supply chain applications in our history: modern applications for the modern supply chain. And this OpenWorld, beginning on October 25 in San Francisco, is your chance to learn all about it.

Conference Sessions

There will be over 70 sessions covering both Oracle Cloud and on-premises SCM solutions. Make sure to attend these sessions that address all levels, from an overall Oracle SCM strategy and roadmap general session to detailed sessions on implementation and tuning. As always, we will showcase the real-world experiences and best practices of our customers and partners. Here are just a few highlights:

  • “General Session: Oracle SCM Cloud” - October 26, 11:00 a.m. - 12:30 p.m. | Moscone West—3002/3004 [GEN6837]
  • “Order to Cash with Oracle Supply Chain Management Cloud” - October 26, 4:00 p.m. - 4:45 p.m. | Moscone West—3010 [CON9582]
  • “Plan to Produce in the Cloud: Overview and Roadmap for Manufacturers” - October 26, 2:45 p.m. - 3:30 p.m. | Moscone West—3008 [CON9578]
  • “PLM and Product MDM Cloud: Strategy Overview and Roadmap” - October 26, 2:45 p.m. - 3:30 p.m. | Moscone West—3011 [CON9588]

SCM On-Premises Applications

While we will be talking about the new SCM Cloud, this will be in addition to the usual comprehensive set of strategy and roadmap sessions and detailed sessions on all our existing best-in-class supply chain applications, covering value chain planning, value chain execution, and the product value chain.

Meet the Experts

Do not miss the opportunity to meet with Oracle Supply Chain Management experts as we will be hosting two sessions, one for Oracle Cloud and one for on-premises SCM applications.

  • Meet the Experts: Oracle Supply Chain Management Cloud - October 26, 1:30 p.m. - 2:15 p.m. | Moscone West—3001A
  • Meet the Experts: Oracle Supply Chain Management - October 28, 11:00 a.m. - 11:45 a.m. | Moscone West—3001A

Demo Grounds

Oracle SCM will be showcased at 20 separate demonstration pods in Moscone West. Make sure to check them out so you can see what’s new and get your questions answered.

Customer Events

List of All SCM Activities

These are just some of the many SCM events that you won’t want to miss. Check out this Focus On document for a list of all SCM events.

Come learn what’s new in your existing supply chain applications, what new applications are available and planned, gain knowledge and share experiences with fellow supply chain professionals, view demonstrations, and of course, meet and discuss all of the above with folks in Oracle SCM Development.

I look forward to seeing you there! Register now!

Jon Chorley
CSO and Group VP, SCM Product Strategy and PLM

September 24: Henry Schein Animal Health Oracle Sales Cloud Customer Reference Forum

Linda Fishman Hoyle - Thu, 2015-09-10 14:03

Join us for another Oracle Customer Reference Forum on September 24, 2015. You will hear Jerry Savage, President, Henry Schein Animal Health, talk about the company's success implementing Oracle Sales Cloud and Marketing Cloud and how the solutions have increased their ability to respond to their customers. Using a customer lifecycle approach, they are able to target cross-sell and up-sell campaigns, nurture leads, and increase their business throughput.

Located in Dublin, Ohio, Henry Schein Animal Health is the leading companion animal health distribution company in the U.S. Partnering with over 400 of the leading animal health manufacturers in the world, it is positioned to bring the broadest selection of veterinary products and strategic business solutions to over 26,000 veterinary professionals nationwide. Henry Schein is in the Life Sciences industry.

Register now to attend the live Forum on Thursday, September 24, 2015, at 7:00 p.m. GMT / 11:00 a.m. PST (U.S.) / 2:00 p.m. EST (U.S.). You can listen to archived Customer Success Forums here.

Here We Go Again

Floyd Teter - Thu, 2015-09-10 12:59
Yup, moving on one more time.  Hopefully for the last time.  I’m leaving Sierra-Cedar Inc. for a position as Sr. Director with Oracle's HCM Center of Excellence team.
As an enterprise software guy, I see the evolution of SaaS and Cloud as the significant drivers of change in the field.  I want to be involved, I want to contribute in a meaningful way, I want to learn more, and I want to be at the center of it all.  And there is no better place for all that than Oracle.  I had the opportunity to meet most of the folks I’ll be working alongside…knew many of them and met a few new faces.  And I’m excited to work with them. So when the opportunity presented itself, I was happy to follow through on it.
I’ll also freely admit that I’ve seen…and experienced…a pretty substantial amount of upheaval regarding Oracle services partners over the past several years.  Some are fighting the cloud-driven changes in the marketplace, others have accepted the change but have yet to adapt, a few are substantially shifting their business model to provide relevant services as the sand shifts under their feet.  Personally, I’ve had enough upheaval for a bit.
The first mission at Oracle: develop tools and methods to meaningfully reduce the lead time between customer subscription and customer go-live. Pretty cool, as it lets me work on my #beat39 passion. I’ll be starting by building tools to convert data from legacy HCM applications to HCM Cloud through the HCM Data Loader (“HDL”).

While I regret leaving a group of great people at SCI, I’m really looking forward to rejoining Oracle.  I kind of feel like a minion hitting the banana goldmine!

Oracle Consulting at OpenWorld 2015

Linda Fishman Hoyle - Thu, 2015-09-10 12:54

A Guest Post by Wendy Leslie (pictured left), Director, Oracle Consulting

Oracle OpenWorld is the place to learn, share, and network about the latest Oracle solutions. It's a smart move to get a jump on planning your Oracle OpenWorld schedule for October 25-29 in San Francisco.

This year, let the planning include Oracle Consulting sessions and demos. In our sessions, you will hear directly from your peers about how they are leveraging Oracle Consulting for faster adoption and return on investment to help them succeed across the Oracle stack.

At OpenWorld 2015, Oracle Consulting will be showcasing our latest thought leadership in each pillar and sharing customer successes as Oracle’s customer base migrates to the Oracle Cloud. Our current session profile spans all cloud pillars. Session highlights include:


  • Effectively Migrating from to Oracle Sales Cloud (CON9694)
  • Maximize Your Investment in Oracle Sales Cloud (CON9705)
  • Maximize Your Investment in Oracle Service Cloud (CON9700)


  • Transform Your Application Backbone with Oracle ERP Cloud (CON9696)
  • Take Manufacturing Computing to the Cloud (CON9698)
  • Maximize Your Investment in Oracle ERP Cloud (CON9699)


  • An Evolutive Approach to HCM Cloud That Benefits Your #1 Asset (CON9697)
  • Maximize Your Investment in Oracle HCM Cloud (CON9701)

We encourage you to attend these sessions to obtain the leading practice insight that only Oracle Consulting can provide.

Once you understand our value, we encourage you to visit us at the Oracle Services Center. The Oracle Services Center at Oracle OpenWorld provides the best venue for you to meet with Oracle executives and product experts. The Oracle Services Center is conveniently located in Moscone West on Level 3 adjacent to the Lobby escalators. You can plan ahead and schedule time to meet with one of our experts. For more details, contact your local CSM directly or engage us via

Register now!

ZFS Storage monitoring with #EM12C

DBASolved - Thu, 2015-09-10 11:03

How do you monitor a ZFS Storage Appliance with Oracle Enterprise Manager 12c? This is a question that has been asked a few times, and I needed to solve this issue. You hear a lot about the EM agents and that they need to be used to monitor many different targets. Let’s just say that is not the case when you want to monitor a ZFS Storage Appliance.

In order to monitor a ZFS Storage Appliance, there are a few things that have to be set up first. After configuring your ZFS the way you want it, you have to create an oracle_agent user and role within the ZFS.

To create a user on the ZFS, you need to login to the ZFS and then go to Maintenance -> Workflows and click the “Edit” option next to Configure for Oracle Enterprise Manager Monitoring.

The workflow will create a user and role named “oracle_agent” when complete. Click the OK button to allow the workflow to perform the actions needed. (It may take a few minutes for the status window to pop up.)


After clicking the OK button, the ZFS will configure a worksheet that is used to monitor the ZFS from OEM. At this point, the ZFS is ready to be monitored from OEM.

In order to set up the ZFS within OEM, it needs to be added manually. This is done via Setup -> Add Target -> Add Targets Manually. Once on the Add Targets Manually page, the ZFS needs to be added using the Add Targets Declaratively by Specifying Target Monitoring Properties option. On this page, select the Target Type for the ZFS; the monitoring agent should be the EM Agent for the OMS.

Note: The monitoring agent can be any agent that you want to monitor the ZFS with. The agent will act as a proxy to the ZFS.

The next screen within OEM will ask for the specific information needed to add the ZFS to be monitored. The target name can be anything you would like to call the ZFS. Username and Password are what you configured when setting up the worksheet (oracle_agent). The management port (215) is the default port. Lastly, provide the IP address or DNS name of the ZFS to be monitored.

After clicking OK, the ZFS will be added to OEM. To verify this, the ZFS can be found under Targets -> All Targets -> Oracle ZFS Storage Appliance.

Finally, the ZFS can be viewed and monitored from within OEM.

Keep in mind that the metrics being collected will update over time. The image above is captured from a newly discovered ZFS and has not had time to populate.


Filed under: OEM
Categories: DBA Blogs

Release of Analysis Episode for e-Literate TV Series on Personalized Learning

Michael Feldstein - Thu, 2015-09-10 09:01

By Phil Hill

Today we are thrilled to release the final episode in our new e-Literate TV series on “personalized learning”. In this series, we examine how that term, which is heavily marketed but poorly defined, is implemented on the ground at a variety of colleges and universities. While today’s episode is the last one released, because it analyzes what we learned in the five case studies it was designed to be used as an introduction to the series.

We have deliberately held back from providing a lot of analysis and commentary within each case study – letting faculty, students, administrators and staff speak for themselves – but in today’s episode we share some of the key lessons we learned. We had a variety of schools profiled in the series, and our analysis addresses the commonalities and differences that we saw. You can see the analysis episode at this link or in the embed below.


Introduction: What Did We Learn In Our Personalized Learning Series?

This episode introduces a new feature in e-Literate TV. We can now embed other episodes as well as YouTube videos directly in this episode. As viewers watch us discuss different lessons, they can also watch additional video-based discussions in context for a deeper dive into the topic at hand. As we discuss specific case studies (Middlebury College, Essex County College, Arizona State University, Empire State College, or UC Davis), the actual case study episodes appear on the right side.

Embedded Episodes

We also had the chance to participate in a panel discussion on the series at the EDUCAUSE Learning Initiative (ELI) conference along with Malcolm Brown and Veronica Diaz. They made some of their own observations and asked some excellent questions. We have embedded the specific questions from the conference as YouTube videos with the analysis episode.

Embedded ELI

e-Literate TV, owned and run by MindWires Consulting, is funded in part by the Bill & Melinda Gates Foundation. When we first talked about the series with the Gates Foundation, they agreed to give us the editorial independence to report what we find, whether it is good, bad, or indifferent.

As with the previous series, we are working in collaboration with In the Telling, our partners providing the platform and video production. Their Telling Story platform allows people to choose their level of engagement, from just watching the video to accessing synchronized transcripts and transmedia. We have added content directly to the timeline of each video, bringing up further references, like e-Literate blog posts or relevant scholarly articles, in context. With In The Telling’s help, we are crafting episodes that we hope will be appealing and informative to those faculty, presidents, provosts, and other important college and university stakeholders who are not ed tech junkies.

We welcome your feedback, either in comments or on Twitter using the hashtag #eLiterateTV. Enjoy!

The post Release of Analysis Episode for e-Literate TV Series on Personalized Learning appeared first on e-Literate.

Building an Oracle NoSQL cluster using Docker

Marcelo Ochoa - Thu, 2015-09-10 08:43
Continuing with my previous post about using Docker in a development/testing environment, now the case is how to build an Oracle NoSQL cluster on a single machine using Docker.
I assume that you already have Docker installed and running; there are plenty of tutorials about that, and in my case, on Ubuntu, it is just a two-step install using apt-get :)
My starting point was using some ideas from another Docker project for building a Hadoop Cluster.
This project uses another great idea named Serf/Dnsmasq on Docker; the motivation, extracted from the project file, is:
This image aims to provide resolvable fully qualified domain names,
between dynamically created docker containers on ubuntu.
## The problem
By default **/etc/hosts** is readonly in docker containers. The usual
solution is to start a DNS server (probably as a docker container) and pass
a reference when starting docker instances: `docker run -dns `

So with this idea in mind I wrote this Dockerfile:
FROM java:openjdk-7-jdk
RUN export DEBIAN_FRONTEND=noninteractive && \
    apt-get update && \
    apt-get install -y dnsmasq unzip curl ant ant-contrib junit
# dnsmasq configuration
ADD dnsmasq.conf /etc/dnsmasq.conf
ADD resolv.dnsmasq.conf /etc/resolv.dnsmasq.conf
# install
RUN curl -Lo /tmp/
RUN curl -Lo /tmp/
RUN unzip /tmp/ -d /bin
RUN unzip /tmp/ -d /opt
RUN rm -f /tmp/
RUN rm -f /tmp/
# configure serf
ADD serf-config.json $SERF_CONFIG_DIR/serf-config.json
ADD handlers $SERF_CONFIG_DIR/handlers
EXPOSE 7373 7946 5000 5001 5010 5011 5012 5013 5014 5015 5016 5017 5018 5019 5020
CMD /etc/serf/start-serf-agent.sh

The relevant information is marked in bold; here is the explanation:
  • FROM java:openjdk-7-jdk, this Docker base image already has Ubuntu and Java 7 installed, so only a few additions are required
  • RUN curl .. /, this downloads a compiled version of the Serf implementation, ready to run on Ubuntu
  • RUN curl -Lo .. /, this downloads the community version of Oracle NoSQL, a free download
  • CMD /etc/serf/start-serf-agent.sh, this is the script, modified from the original Docker/serf project, which configures Oracle NoSQL right after the image boots
The last point requires a special explanation. First, there are three bash functions for starting, stopping and creating the bootconfig file for the NoSQL nodes; here are the relevant sections:

stop_database() {
        java -Xmx256m -Xms256m -jar $KVHOME/lib/kvstore.jar stop -root $KVROOT
}

start_database() {
        nohup java -Xmx256m -Xms256m -jar $KVHOME/lib/kvstore.jar start -root $KVROOT &
}

create_bootconfig() {
        [[ -n $NODE_TYPE ]] && [[ $NODE_TYPE = "m" ]] && java -jar $KVHOME/lib/kvstore.jar makebootconfig -root $KVROOT -port 5000 -admin 5001 -host "$(hostname -f)" -harange 5010,5020 -store-security none -capacity 1 -num_cpus 0 -memory_mb 0
        [[ -n $NODE_TYPE ]] && [[ $NODE_TYPE = "s" ]] && java -jar $KVHOME/lib/kvstore.jar makebootconfig -root $KVROOT -port 5000 -host "$(hostname -f)" -harange 5010,5020 -store-security none -capacity 1 -num_cpus 0 -memory_mb 0
}

The last function (create_bootconfig) works differently depending on whether the node is designated as master ($NODE_TYPE = "m") or slave ($NODE_TYPE = "s").

I decided not to persist the NoSQL storage after the docker images stop, but it is possible to map the directory where the NoSQL nodes reside to the host, as I showed in my previous post; with that configuration the NoSQL storage is not re-created at every boot.

With the above explanations we can build the Docker image using:

root@local:/var/lib/docker/dockerfiles/build-oracle-nosql# docker build -t "oracle-nosql/serf" .

The complete list of required files can be downloaded as a zip from this location.

Once the image is built we can start a cluster of 3 nodes by simply executing the provided script, which creates a master node and two slaves, slave[1..2]. Here is the output:

root@local:/var/lib/docker/dockerfiles/build-oracle-nosql# ./
WARNING: Localhost DNS setting (--dns= may fail in containers.
WARNING: Localhost DNS setting (--dns= may fail in containers.
WARNING: Localhost DNS setting (--dns= may fail in containers.
4fc18aebf466ec67de18c72c22739337499b5a76830f86d90a6533ff3bb6e314

You can check the status of the cluster by executing a serf command at the master node, for example:

root@local:/var/lib/docker/dockerfiles/build-oracle-nosql# docker exec -ti master serf members
  alive
  alive
  alive

At this point the 3 NoSQL nodes are ready to work, but they are unconfigured; here is the output of the NoSQL ping command:

root@local:/var/lib/docker/dockerfiles/build-oracle-nosql# docker exec -ti master java -jar /opt/kv-3.3.4/lib/kvstore.jar ping -host master -port 5000
SNA at hostname: master, registry port: 5000 is not registered.
No further information is available

Using the examples from the Oracle NoSQL documentation we can create a store using this plan (script.txt):

configure -name mystore
plan deploy-zone -name "Boston" -rf 3 -wait
plan deploy-sn -zn zn1 -host -port 5000 -wait
plan deploy-admin -sn sn1 -port 5001 -wait
pool create -name BostonPool
pool join -name BostonPool -sn sn1
plan deploy-sn -zn zn1 -host -port 5000 -wait
pool join -name BostonPool -sn sn2
plan deploy-sn -zn zn1 -host -port 5000 -wait
pool join -name BostonPool -sn sn3
topology create -name topo -pool BostonPool -partitions 300
plan deploy-topology -name topo -wait
show topology

To submit this plan to the NoSQL nodes there is a script provided; here is the output:

root@local:/var/lib/docker/dockerfiles/build-oracle-nosql# ./
Store configured: mystore
Executed plan 1, waiting for completion...
Plan 1 ended successfully
Executed plan 2, waiting for completion...
Plan 2 ended successfully
Executed plan 3, waiting for completion...
Plan 3 ended successfully
Added Storage Node(s) [sn1] to pool BostonPool
Executed plan 4, waiting for completion...
Plan 4 ended successfully
Added Storage Node(s) [sn2] to pool BostonPool
Executed plan 5, waiting for completion...
Plan 5 ended successfully
Added Storage Node(s) [sn3] to pool BostonPool
Created: topo
Executed plan 6, waiting for completion...
Plan 6 ended successfully
store=mystore  numPartitions=300 sequence=308
  zn: id=zn1 name="Boston" repFactor=3 type=PRIMARY
  sn=[sn1] zn:[id=zn1 name="Boston"] capacity=1 RUNNING
    [rg1-rn1] RUNNING
          No performance info available
  sn=[sn2] zn:[id=zn1 name="Boston"] capacity=1 RUNNING
    [rg1-rn2] RUNNING
          No performance info available
  sn=[sn3] zn:[id=zn1 name="Boston"] capacity=1 RUNNING
    [rg1-rn3] RUNNING
          No performance info available
  shard=[rg1] num partitions=300
    [rg1-rn1] sn=sn1
    [rg1-rn2] sn=sn2
    [rg1-rn3] sn=sn3

You can also access the NoSQL Admin page using the URL http://localhost:5001/ because the script publishes this port outside the master container. Here is the screenshot:

The cluster is ready!! Have fun storing your data.

Persistent NoSQL store: as I mentioned earlier in this post, if we map /var/kvroot to the host machine the NoSQL store will persist through multiple executions of the cluster, for example by creating 3 directories:
root@local:/var/lib/docker/dockerfiles/build-oracle-nosql# mkdir /tmp/kvroot1
root@local:/var/lib/docker/dockerfiles/build-oracle-nosql# mkdir /tmp/kvroot2
root@local:/var/lib/docker/dockerfiles/build-oracle-nosql# mkdir /tmp/kvroot3

and then creating a new shell script that starts the cluster mapped to the above directories, as follows:
docker run -d -t --volume=/tmp/kvroot1:/var/kvroot --publish=5000:5000 --publish=5001:5001 --dns -e NODE_TYPE=m -P --name master -h oracle-nosql/serf
FIRST_IP=$(docker inspect --format="{{.NetworkSettings.IPAddress}}" master)
docker run -d -t --volume=/tmp/kvroot2:/var/kvroot --dns -e NODE_TYPE=s -e JOIN_IP=$FIRST_IP -P --name slave1 -h oracle-nosql/serf
docker run -d -t --volume=/tmp/kvroot3:/var/kvroot --dns -e NODE_TYPE=s -e JOIN_IP=$FIRST_IP -P --name slave2 -h oracle-nosql/serf

We can start and deploy the store for the first time using:

root@local:/var/lib/docker/dockerfiles/build-oracle-nosql# ./
... output here...
root@local:/var/lib/docker/dockerfiles/build-oracle-nosql# ls -ltr /tmp/kvroot1
total 8
-rw-r--r-- 1 root root  52 sep 10 20:19 security.policy
-rw-r--r-- 1 root root 781 sep 10 20:19 config.xml
root@local:/var/lib/docker/dockerfiles/build-oracle-nosql# ./ 
... output here ...
root@local:/var/lib/docker/dockerfiles/build-oracle-nosql# docker exec -ti master java -jar /opt/kv-3.3.4/lib/kvstore.jar ping -host master -port 5000
Pinging components of store mystore based upon topology sequence #308
300 partitions and 3 storage nodes
Time: 2015-09-10 23:20:18 UTC   Version:
Shard Status: total:1 healthy:1 degraded:0 noQuorum:0 offline:0
Zone [name="Boston" id=zn1 type=PRIMARY]   RN Status: total:3 online:3 maxDelayMillis:0 maxCatchupTimeSecs:0
Storage Node [sn1] on    Zone: [name="Boston" id=zn1 type=PRIMARY]    Status: RUNNING   Ver: 12cR1.3.3.4 2015-04-24 09:01:17 UTC  Build id: e3ae28b507bf
Admin [admin1] Status: RUNNING,MASTER
Rep Node [rg1-rn1] Status: RUNNING,MASTER sequenceNumber:627 haPort:5011
Storage Node [sn2] on    Zone: [name="Boston" id=zn1 type=PRIMARY]    Status: RUNNING   Ver: 12cR1.3.3.4 2015-04-24 09:01:17 UTC  Build id: e3ae28b507bf
Rep Node [rg1-rn2] Status: RUNNING,REPLICA sequenceNumber:627 haPort:5010 delayMillis:0 catchupTimeSecs:0
Storage Node [sn3] on    Zone: [name="Boston" id=zn1 type=PRIMARY]    Status: RUNNING   Ver: 12cR1.3.3.4 2015-04-24 09:01:17 UTC  Build id: e3ae28b507bf
Rep Node [rg1-rn3] Status: RUNNING,REPLICA sequenceNumber:627 haPort:5010 delayMillis:0 catchupTimeSecs:0

As you can see, the cluster is ready for storing data. Now we will stop and start it again to see that it is not necessary to redeploy the configuration:

root@local:/var/lib/docker/dockerfiles/build-oracle-nosql# ./
... output here ...
root@local:/var/lib/docker/dockerfiles/build-oracle-nosql# ./
... output here ...
root@local:/var/lib/docker/dockerfiles/build-oracle-nosql# docker exec -ti master java -jar /opt/kv-3.3.4/lib/kvstore.jar ping -host master -port 5000
Pinging components of store mystore based upon topology sequence #308
300 partitions and 3 storage nodes
Time: 2015-09-10 23:34:15 UTC   Version:
Shard Status: total:1 healthy:1 degraded:0 noQuorum:0 offline:0
Zone [name="Boston" id=zn1 type=PRIMARY]   RN Status: total:3 online:3 maxDelayMillis:2342 maxCatchupTimeSecs:-4
Storage Node [sn1] on    Zone: [name="Boston" id=zn1 type=PRIMARY]    Status: RUNNING   Ver: 12cR1.3.3.4 2015-04-24 09:01:17 UTC  Build id: e3ae28b507bf
Admin [admin1] Status: RUNNING,MASTER
Rep Node [rg1-rn1] Status: RUNNING,REPLICA sequenceNumber:639 haPort:5011 delayMillis:2342 catchupTimeSecs:-4
Storage Node [sn2] on    Zone: [name="Boston" id=zn1 type=PRIMARY]    Status: RUNNING   Ver: 12cR1.3.3.4 2015-04-24 09:01:17 UTC  Build id: e3ae28b507bf
Rep Node [rg1-rn2] Status: RUNNING,REPLICA sequenceNumber:639 haPort:5010 delayMillis:0 catchupTimeSecs:0
Storage Node [sn3] on    Zone: [name="Boston" id=zn1 type=PRIMARY]    Status: RUNNING   Ver: 12cR1.3.3.4 2015-04-24 09:01:17 UTC  Build id: e3ae28b507bf
Rep Node [rg1-rn3] Status: RUNNING,MASTER sequenceNumber:639 haPort:5010

And that's all; the last ping command shows that the store survives the stop/remove/start container cycle.

Transforming Businesses to generate new revenue and to serve the customer better

The evolution of technology we are experiencing today is clearly changing business, society, and the world. The capabilities available today for consumers and businesses are stunning and were...

We share our skills to maximize your revenue!
Categories: DBA Blogs

Spring Session - Spring Boot application for IBM Bluemix

Pas Apicella - Thu, 2015-09-10 06:28
The following guide shows how to use Spring Session to transparently leverage Redis to back a web application’s HttpSession when using Spring Boot.

The demo below is a simple Spring Boot / Thymeleaf / Bootstrap application to test session replication using Spring Session and Spring Boot within IBM Bluemix. The same demo will run on Pivotal Cloud Foundry as well.

IBM DevOps URL ->

Sample Project on GitHub ->

More Information

The Portable, Cloud-Ready HTTP Session
Categories: Fusion Middleware

MongoDB update

DBMS2 - Thu, 2015-09-10 04:33

One pleasure in talking with my clients at MongoDB is that few things are NDA. So let’s start with some numbers:

  • >2,000 named customers, the vast majority of which are unique organizations who do business with MongoDB directly.
  • ~75,000 users of MongoDB Cloud Manager.
  • Estimated ~1/4 million production users of MongoDB total.

Also >530 staff, and I think that number is a little out of date.

MongoDB lacks many capabilities RDBMS users take for granted. MongoDB 3.2, which I gather is slated for early November, narrows that gap, but only by a little. Features include:

  • Some JOIN capabilities (see the sketch after this list).
    • Specifically, these are left outer joins, so they’re for lookup but not for filtering.
    • JOINs are not restricted to specific shards of data …
    • … but do benefit from data co-location when it occurs.
  • A BI connector. Think of this as a MongoDB-to-SQL translator. Using this does require somebody to go in and map JSON schemas and relational tables to each other. Once that’s done, the flow is:
    • Basic SQL comes in.
    • Filters and GroupBys are pushed down to MongoDB. A result set … well, it results. :)
    • The result set is formatted into a table and returned to the system — for example a business intelligence tool — that sent the SQL.
  • Database-side document validation, in the form of field-specific rules that combine into a single expression against which to check a document.
    • This is fairly simple stuff — no dependencies among fields in the same document, let alone foreign key relationships.
    • MongoDB argues, persuasively, that this simplicity makes it unlikely to recreate the spaghetti code maintenance nightmare that was 1990s stored procedures.
    • MongoDB concedes that, for performance, it will ordinarily be a good idea to still do your validation on the client side.
    • MongoDB points out that enforcement can be either strict (throw errors) or relaxed (just note invalid documents to a log). The latter option is what makes it possible to install this feature without breaking your running system.
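
To make the JOIN and validation items above concrete, here is a minimal pymongo sketch of a left outer join with the new $lookup aggregation stage and of collection-level document validation; the collection and field names (orders, customers, customer_id, contacts) are hypothetical.

from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")
db = client.demo

# Left outer join (MongoDB 3.2 $lookup): each order gets a "customer" array,
# which is simply empty when no matching customer document exists.
pipeline = [
    {"$lookup": {
        "from": "customers",
        "localField": "customer_id",
        "foreignField": "_id",
        "as": "customer",
    }}
]
for order in db.orders.aggregate(pipeline):
    print(order["_id"], order.get("customer"))

# Database-side document validation (MongoDB 3.2): simple per-field rules
# combined into a single expression. validationAction="warn" is the relaxed
# option that only logs invalid documents instead of rejecting them.
db.create_collection(
    "contacts",
    validator={"$and": [
        {"name": {"$exists": True}},
        {"age": {"$gte": 0}},
    ]},
    validationAction="warn",
)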

There’s also a closed-source database introspection tool coming, currently codenamed MongoDB Scout. 

  • The name will change, in part because if you try to search on that name you’ll probably find an unrelated Scout. :)
  • Scout samples data, runs stats, and all that stuff.
  • Scout is referred to as a “schema introspection” tool, but I’m not sure why; schema introspection sounds more like a feature or architectural necessity than an actual product.

As for storage engines:

  • WiredTiger, which was the biggest deal in MongoDB 3.0, will become the default in 3.2. I continue to think analogies to InnoDB are reasonably appropriate.
  • An in-memory storage engine option was also announced with MongoDB 3.0. Now there’s a totally different in-memory option. However, details were not available at posting time. Stay tuned.
  • Yet another MongoDB storage engine, based on or akin to WiredTiger, will do encryption. Presumably, overhead will be acceptably low. Key management and all that will be handled by usual-suspect third parties.

Finally — most data management vendors brag to me about how important their text search option is, although I’m not necessarily persuaded. :) MongoDB does have built-in text search, of course, of which I can say:

  • It’s a good old-fashioned TF/IDF algorithm. (Term Frequency/Inverse Document Frequency.)
  • About the fanciest stuff they do is tokenization and stemming. (In a text search context, tokenization amounts to the identification of word boundaries and the like. Stemming is noticing that alternate forms of the same word really are the same thing.)

This level of technology was easy to get in the 1990s. One thing that’s changed in the intervening decades, however, is that text search commonly supports more languages. MongoDB offers stemming in 8 or 9 languages for free, plus a paid option via Basis for additional languages.
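
As a quick illustration of that built-in search, here is a minimal pymongo sketch that builds a text index with a non-default stemming language and runs a query ranked by the TF/IDF-style textScore; the articles collection and its fields are hypothetical.

from pymongo import MongoClient, TEXT

client = MongoClient("mongodb://localhost:27017")
db = client.demo

# Text index over title and body, with Spanish stemming instead of the English default.
db.articles.create_index(
    [("title", TEXT), ("body", TEXT)],
    default_language="spanish",
)

# $text search; textScore exposes the relevance score used for ranking.
cursor = db.articles.find(
    {"$text": {"$search": "replicación de datos"}},
    {"title": 1, "score": {"$meta": "textScore"}},
).sort([("score", {"$meta": "textScore"})])

for doc in cursor:
    print(doc.get("title"), doc["score"])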

Related links

Categories: Other