Feed aggregator

DOAG Expertenseminar "Parallel Execution Masterclass" (German)

Randolf Geist - Mon, 2015-03-30 15:34
In two weeks the expert seminar "Parallel Execution Masterclass", which I am running together with the DOAG, takes place in Berlin.

A few seats are still available - so if you have the time and inclination to come to Berlin and pick up exclusive knowledge about (not only) the Parallel Execution feature of the Oracle Database, I would be delighted to welcome you there along with the other participants and to spend a good and productive time together with you!

If you are interested, please get in touch with the DOAG contacts given in the link - there you will also find a more detailed description of the seminar.

Oracle GoldenGate, MySQL and Flume

Rittman Mead Consulting - Mon, 2015-03-30 13:05

Back in September Mark blogged about Oracle GoldenGate (OGG) and HDFS. In this short follow-up post I'm going to look at configuring the OGG Big Data Adapter for Flume, to trickle feed blog posts and comments from our site to HDFS. If you haven't done so already, I strongly recommend you read through Mark's previous post, as it explains in detail how the OGG BD Adapter works. Just like Hive and HDFS, Flume isn't a fully supported target, so we will use Oracle GoldenGate for Java Adapter user exits to achieve what we want.

What we need to do now is:

  1. Configure our MySQL database to be fit for duty for GoldenGate
  2. Install and configure Oracle GoldenGate for MySQL on our DB server
  3. Create a new OGG Extract and Trail files for the database tables we want to feed to Flume
  4. Configure a Flume Agent on our Cloudera cluster to ‘sink’ to HDFS
  5. Create and configure the OGG Java adapter for Flume
  6. Create External Tables in Hive to expose the HDFS files to SQL access

OGG and Flume

Setting up the MySQL Database Source Capture

The MySQL database I will use for this example contains blog posts, comments etc. from our website. We now want to use Oracle GoldenGate to capture new blog posts and our readers' comments and feed this information into the Hadoop cluster we have running in the Rittman Mead Labs, along with other feeds, such as Twitter and activity logs.

The database has to be configured to use binary logging, and we also need to ensure that the socket file can be found in /tmp/mysql.socket. You can find the details for this in the documentation. We also need to make sure that the tables we want to extract from are using the InnoDB engine and not the default MyISAM one. The engine can easily be changed by issuing:

alter table wp_mysql.wp_posts engine=InnoDB;
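
For reference, the binary-logging prerequisite usually comes down to a few settings in my.cnf along these lines (an illustrative sketch only, not taken from the post - check the Oracle GoldenGate for MySQL documentation for your version):

[mysqld]
# enable row-based binary logging, which the OGG capture process relies on
log-bin       = /var/lib/mysql/localhost-bin
binlog_format = ROW
# OGG expects to find the MySQL socket file under /tmp (see the note above)
socket        = /tmp/mysql.sock
server-id     = 1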

Assuming we have already installed OGG for MySQL in /opt/oracle/OGG/, we can now go ahead and configure the Manager process and the Extract for our tables. The tables we are interested in are:

wp_mysql.wp_posts
wp_mysql.wp_comments
wp_mysql.wp_users
wp_mysql.wp_terms
wp_mysql.wp_term_taxonomy

First configure the manager

-bash-4.1$ cat dirprm/mgr.prm 
PORT 7809
PURGEOLDEXTRACTS /opt/oracle/OGG/dirdat/*, USECHECKPOINTS

Now configure the Extract to capture changes made to the tables we are interested in

-bash-4.1$ cat dirprm/mysql.prm 
EXTRACT mysql
SOURCEDB wp_mysql, USERID root, PASSWORD password
discardfile /opt/oracle/OGG/dirrpt/FLUME.dsc, purge
EXTTRAIL /opt/oracle/OGG/dirdat/et
GETUPDATEBEFORES
TRANLOGOPTIONS ALTLOGDEST /var/lib/mysql/localhost-bin.index
TABLE wp_mysql.wp_comments;
TABLE wp_mysql.wp_posts;
TABLE wp_mysql.wp_users;
TABLE wp_mysql.wp_terms;
TABLE wp_mysql.wp_term_taxonomy;

We should now be able to create the extract and start the process, as with a normal extract.

ggsci>add extract mysql, tranlog, begin now
ggsci>add exttrail ./dirdat/et, extract mysql
ggsci>start extract mysql
ggsci>info mysql
ggsci>view report mysql

We will also have to generate metadata to describe the table structures in the MySQL database. This file will be used by the Flume adapter to map columns and data types to the Avro format.

-bash-4.1$ cat dirprm/defgen.prm 
-- To generate trail source-definitions for GG v11.2 Adapters, use GG 11.2 defgen,
-- or use GG 12.1.x defgen with "format 11.2" definition format.
-- If using GG 12.1.x as a source for GG 11.2 adapters, also generate format 11.2 trails.

-- UserId logger, Password password
SOURCEDB wp_mysql, USERID root, PASSWORD password

DefsFile dirdef/wp.def

TABLE wp_mysql.wp_comments;
TABLE wp_mysql.wp_posts;
TABLE wp_mysql.wp_users;
TABLE wp_mysql.wp_terms;
TABLE wp_mysql.wp_term_taxonomy;
-bash-4.1$ ./defgen PARAMFILE dirprm/defgen.prm 

***********************************************************************
        Oracle GoldenGate Table Definition Generator for MySQL
      Version 12.1.2.1.0 OGGCORE_12.1.2.1.0_PLATFORMS_140920.0203
...

***********************************************************************
**            Running with the following parameters                  **
***********************************************************************
SOURCEDB wp_mysql, USERID root, PASSWORD ******
DefsFile dirdef/wp.def
TABLE wp_mysql.wp_comments;
Retrieving definition for wp_mysql.wp_comments.
TABLE wp_mysql.wp_posts;
Retrieving definition for wp_mysql.wp_posts.
TABLE wp_mysql.wp_users;
Retrieving definition for wp_mysql.wp_users.
TABLE wp_mysql.wp_terms;
Retrieving definition for wp_mysql.wp_terms.
TABLE wp_mysql.wp_term_taxonomy;
Retrieving definition for wp_mysql.wp_term_taxonomy.


Definitions generated for 5 tables in dirdef/wp.def.

Setting up the OGG Java Adapter for Flume

The OGG Java Adapter for Flume will use the EXTTRAIL created earlier as a source, pack the data up and feed it to the cluster Flume Agent, using Avro and RPC. The Flume Adapter thus needs to know:

  • Where the OGG EXTTRAIL to read from is located
  • How to treat the incoming data and operations (e.g. Insert, Update, Delete)
  • Where to send the Avro messages

First we create a parameter file for the Flume Adapter

-bash-4.1$ cat dirprm/flume.prm
EXTRACT flume
SETENV ( GGS_USEREXIT_CONF = "dirprm/flume.props")
CUSEREXIT libggjava_ue.so CUSEREXIT PASSTHRU INCLUDEUPDATEBEFORES
GETUPDATEBEFORES
NOCOMPRESSUPDATES
SOURCEDEFS ./dirdef/wp.def
DISCARDFILE ./dirrpt/flume.dsc, purge

TABLE wp_mysql.wp_comments;
TABLE wp_mysql.wp_posts;
TABLE wp_mysql.wp_users;
TABLE wp_mysql.wp_terms;
TABLE wp_mysql.wp_term_taxonomy;

There are two things to note here

  • The OGG Java Adapter User Exit is configured in a file called flume.props
  • The source tables’ structures are defined in wp.def

The flume.props file is a ‘standard’ User Exit config file

-bash-4.1$ cat dirprm/flume.props 
gg.handlerlist=ggflume

gg.handler.ggflume.type=com.goldengate.delivery.handler.flume.FlumeHandler
gg.handler.ggflume.host=bd5node1.rittmandev.com
gg.handler.ggflume.port=4545

gg.handler.ggflume.rpcType=avro
gg.handler.ggflume.delimiter=;
gg.handler.ggflume.mode=tx
gg.handler.ggflume.includeOpType=true
# Indicates if the operation timestamp should be included as part of output in the delimited separated values
# true - Operation timestamp will be included in the output
# false - Operation timestamp will not be included in the output
# Default :- true
gg.handler.ggflume.includeOpTimestamp=true

# Optional properties to use the transaction grouping functionality
#gg.handler.ggflume.maxGroupSize=1000
#gg.handler.ggflume.minGroupSize=1000

### native library config ###
goldengate.userexit.nochkpt=TRUE
goldengate.userexit.timestamp=utc
goldengate.log.logname=cuserexit
goldengate.log.level=INFO
goldengate.log.tofile=true
goldengate.userexit.writers=javawriter

gg.report.time=30sec
gg.classpath=AdapterExamples/big-data/flume/target/flume-lib/*

javawriter.stats.full=TRUE
javawriter.stats.display=TRUE
javawriter.bootoptions=-Xmx32m -Xms32m -Djava.class.path=ggjava/ggjava.jar -Dlog4j.configuration=log4j.properties

Some points of interest here are

  • The Flume agent we will send our data to is running on port 4545 on host bd5node1.rittmandev.com
  • We want each record to be prefixed with I(nsert), U(pdate) or D(elete)
  • We want each record to be postfixed with a timestamp of the transaction date
  • The Java class com.goldengate.delivery.handler.flume.FlumeHandler will do the actual work. (The curious reader can view the code in /opt/oracle/OGG/AdapterExamples/big-data/flume/src/main/java/com/goldengate/delivery/handler/flume/FlumeHandler.java)

Before starting up the OGG Flume extract, let's first make sure that the Flume agent on bd5node1 is configured to receive our Avro messages (Source) and knows what to do with the data (Sink):

a1.channels = c1
a1.sources = r1
a1.sinks = k2
a1.channels.c1.type = memory
a1.sources.r1.channels = c1 
a1.sources.r1.type = avro 
a1.sources.r1.bind = bda5node1
a1.sources.r1.port = 4545
a1.sinks.k2.type = hdfs
a1.sinks.k2.channel = c1
a1.sinks.k2.hdfs.path = /user/flume/gg/%{SCHEMA_NAME}/%{TABLE_NAME} 
a1.sinks.k2.hdfs.filePrefix = %{TABLE_NAME}_ 
a1.sinks.k2.hdfs.writeFormat=Writable 
a1.sinks.k2.hdfs.rollInterval=0
a1.sinks.k2.hdfs.rollSize=1048576
a1.sinks.k2.hdfs.rollCount=0
a1.sinks.k2.hdfs.batchSize=100 
a1.sinks.k2.hdfs.fileType=DataStream

Here we note that

  • The agent’s source (inbound data stream) is to run on port 4545 and to use Avro
  • The agent’s sink will write to HDFS and store the files in /user/flume/gg/%{SCHEMA_NAME}/%{TABLE_NAME}
  • The HDFS files will be rolled over every 1 MB (1048576 bytes)
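
The post assumes the agent configuration above is already deployed; for reference, launching the agent manually for a quick test would look roughly like this (the configuration paths are assumptions - on CDH the agent is normally managed through Cloudera Manager):

flume-ng agent --name a1 \
               --conf /etc/flume-ng/conf \
               --conf-file /etc/flume-ng/conf/flume.conf \
               -Dflume.root.logger=INFO,console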

We are now ready to head back to the webserver that runs the MySQL database and start the Flume extract. It will feed all committed MySQL transactions against our selected tables to the Flume agent on the cluster, which in turn will write the data to HDFS:

-bash-4.1$ export LD_LIBRARY_PATH=/usr/lib/jvm/jdk1.7.0_55/jre/lib/amd64/server
-bash-4.1$ export JAVA_HOME=/usr/lib/jvm/jdk1.7.0_55/
-bash-4.1$ ./ggsci
ggsci>add extract flume, exttrailsource ./dirdat/et 
ggsci>start flume
ggsci>info flume
EXTRACT    FLUME     Last Started 2015-03-29 17:51   Status RUNNING
Checkpoint Lag       00:00:00 (updated 00:00:06 ago)
Process ID           24331
Log Read Checkpoint  File /opt/oracle/OGG/dirdat/et000008
                     2015-03-29 17:51:45.000000  RBA 7742

If I now submit this blog post I should see the results showing up on our Hadoop cluster in the Rittman Mead Labs.

[oracle@bda5node1 ~]$ hadoop fs -ls /user/flume/gg/wp_mysql/wp_posts
-rw-r--r--   3 flume  flume   3030 2015-03-30 16:40 /user/flume/gg/wp_mysql/wp_posts/wp_posts_.1427729981456

We can quickly create an external table in Hive to view the results with SQL:

hive> CREATE EXTERNAL TABLE wp_posts(
     op string, 
 ID                     int,
 post_author            int,
 post_date              String,
 post_date_gmt          String,
 post_content           String,
 post_title             String,
 post_excerpt           String,
 post_status            String,
 comment_status         String,
 ping_status            String,
 post_password          String,
 post_name              String,
 to_ping                String,
 pinged                 String,
 post_modified          String,
 post_modified_gmt      String,
 post_content_filtered  String,
 post_parent            int,
 guid                   String,
 menu_order             int,
 post_type              String,
 post_mime_type         String,
 comment_count          int,
     op_timestamp timestamp
  )
 COMMENT 'External table ontop of GG Flume sink, landed in hdfs'
 ROW FORMAT DELIMITED FIELDS TERMINATED BY ';'
 STORED AS TEXTFILE
 LOCATION '/user/flume/gg/wp_mysql/wp_posts/';

hive> select post_title from gg_flume.wp_posts where op='I' and id=22112;
Total jobs = 1
Launching Job 1 out of 1
Number of reduce tasks is set to 0 since there's no reduce operator
Starting Job = job_1427647277272_0017, Tracking URL = http://bda5node1.rittmandev.com:8088/proxy/application_1427647277272_0017/
Kill Command = /opt/cloudera/parcels/CDH-5.3.0-1.cdh5.3.0.p0.30/lib/hadoop/bin/hadoop job  -kill job_1427647277272_0017
Hadoop job information for Stage-1: number of mappers: 2; number of reducers: 0
2015-03-30 16:51:17,715 Stage-1 map = 0%,  reduce = 0%
2015-03-30 16:51:32,363 Stage-1 map = 50%,  reduce = 0%, Cumulative CPU 1.88 sec
2015-03-30 16:51:33,422 Stage-1 map = 100%,  reduce = 0%, Cumulative CPU 3.38 sec
MapReduce Total cumulative CPU time: 3 seconds 380 msec
Ended Job = job_1427647277272_0017
MapReduce Jobs Launched: 
Stage-Stage-1: Map: 2   Cumulative CPU: 3.38 sec   HDFS Read: 3207 HDFS Write: 35 SUCCESS
Total MapReduce CPU Time Spent: 3 seconds 380 msec
OK
Oracle GoldenGate, MySQL and Flume
Time taken: 55.613 seconds, Fetched: 1 row(s)
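
As a quick sanity check of what has landed so far, a simple aggregate over the operation-type column works too (a sketch against the external table created above, using the same gg_flume database as the previous query):

hive> select op, count(*) from gg_flume.wp_posts group by op;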

Please leave a comment and you’ll be contributing to an OGG Flume!

Categories: BI & Warehousing

Log Buffer #416, A Carnival of the Vanities for DBAs

Pythian Group - Mon, 2015-03-30 12:29

This log buffer edition sprouts from the beauty, glamour and intelligence of various blog posts from Oracle, SQL Server, and MySQL.

Oracle:

Oracle Exadata Performance: Latest Improvements and Less Known Features

Exadata Storage Index Min/Max Optimization

Oracle system V shared memory indicated deleted

12c Parallel Execution New Features: Concurrent UNION ALL

Why does index monitoring make Connor scratch his head and charge off to Google so many times?

SQL Server:

Learn how to begin unit testing with tSQLt and SQL Server.

‘Temporal’ tables contain facts that are valid for a period of time. When they are used for financial information they have to be very well constrained to prevent errors getting in and causing incorrect reporting.

As big data application success stories (and failures) have appeared in the news and technical publications, several myths have emerged about big data. This article explores a few of the more significant myths, and how they may negatively affect your own big data implementation.

When effective end dates don’t align properly with effective start dates for subsequent rows, what are you to do?

In order to automate the delivery of an application together with its database, you probably just need the extra database tools that allow you to continue with your current source control system and release management system by integrating the database into it.

MySQL:

Ronald Bradford on SQL, ANSI Standards, PostgreSQL and MySQL.

How to Manage the World’s Top Open Source Databases: ClusterControl 1.2.9 Features Webinar Replay

A few interesting findings on MariaDB and MySQL scalability, multi-table OLTP RO

MariaDB: The Differences, Expectations, and Future

How to Tell If It’s MySQL Swapping

Categories: DBA Blogs

New blog to handle the PJC/Bean articles

Francois Degrelle - Mon, 2015-03-30 12:06

Here is the link to another place that stores the PJC/Bean articles without ads.

http://forms.pjc.bean.blog.free.fr/

Francois

 

Dimensional Modeling

Dylan's BI Notes - Mon, 2015-03-30 11:06
Moved the content into a page – Dimensional Modeling
Categories: BI & Warehousing

A command-line alternative to PeopleSoft SendMaster

Javier Delgado - Mon, 2015-03-30 10:05
If you are familiar with PeopleSoft Integration Broker, I'm sure you have dealt with SendMaster to some degree. This is a very simple yet useful tool for performing unit tests of Integration Broker incoming service operations using plain XML (if I'm dealing with SOAP web services, I normally use SoapUI, for which there is a very good article on PeopleSoft Wiki).

Most of the time SendMaster is enough, but today I came across a problem that required an alternative. While testing an XML message with this tool against an HTTPS PeopleSoft installation, I got the following error message:

Error communicating with server: sun.security.validator.ValidatorException: PKIX path building failed: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target

After checking My Oracle Support, I found the following resolution (Doc ID 1634045.1):

The following steps will resolve the error:

1) Import the appropriate SSL certificate into the Java keystore PS_HOME\jre\lib\security\cacerts or Integration Broker's keystore location, i.e. the pskey file
2) Set SendMaster's preferences (via the File > Preferences > HTTP tab) to point to the keystore with the appropriate SSL certificate
3) Test
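
For reference, had the certificate been available, step 1 would typically be done with keytool along these lines (the alias, certificate file name and keystore password are placeholders; changeit is just the JRE default):

keytool -import -alias psft_ssl -file server_cert.cer \
        -keystore %PS_HOME%\jre\lib\security\cacerts -storepass changeit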
Unfortunately, I didn't have access to the appropriate SSL certificate, so I decided to use curl, a pretty old (dating back to 1997, according to the all-knowing Wikipedia) but still useful command line tool.



curl is a command line tool that can be used to test HTTP and HTTPS operations, including GET, PUT, POST and so on. One of the features of this tool is that it can run in "insecure" mode, eliminating the need to validate the server certificate when testing URLs over HTTPS. On both Linux and Mac OS, the option to run in insecure mode is -k. The command line to test my service operation then looked like:

curl -X POST -d @test.xml -k https://<server>/PSIGW/HttpListeningConnector 

Please note that the @ prefix tells curl to take the data from the file that follows it. Alternatively, you can specify the data directly on the command line, but that is a bit more cumbersome.
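
For a very small payload, the inline form would look something like this (the XML body here is just a placeholder):

curl -X POST -d '<?xml version="1.0"?><YourMessage>test</YourMessage>' -k https://<server>/PSIGW/HttpListeningConnector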

Also, keep in mind that curl is not delivered with Windows out of the box, but you can download similar tools from several sources (for instance, this one).

PTS Sample code now available on GitHub

Angelo Santagata - Mon, 2015-03-30 08:43

Not sure many people know about this, but some time ago my team created a whole collection of sample code. This code is available on OTN at this location, but it is now also available on GitHub here!

We'll be updating this repository with some new code soon; when we do, I'll make sure to update this blog entry.

IBM Bluemix demo using IBM Watson Personality Insights service

Pas Apicella - Mon, 2015-03-30 04:31
The IBM Watson Personality Insights service uses linguistic analysis to extract cognitive and social characteristics from input text such as email, text messages, tweets, forum posts, and more. By deriving cognitive and social preferences, the service helps users to understand, connect to, and communicate with other people on a more personalized level.

1. Clone the GitHub repo as shown below.

pas@192-168-1-4:~/bluemix-apps/watson$ git clone https://github.com/watson-developer-cloud/personality-insights-nodejs.git
Cloning into 'personality-insights-nodejs'...
remote: Counting objects: 84, done.
remote: Total 84 (delta 0), reused 0 (delta 0), pack-reused 84
Unpacking objects: 100% (84/84), done.
Checking connectivity... done.

2. Create the service as shown below.

pas@192-168-1-4:~/bluemix-apps/watson/personality-insights-nodejs$ cf create-service personality_insights "IBM Watson Personality Insights Monthly Plan" personality-insights-service
Creating service personality-insights-service in org pasapi@au1.ibm.com / space dev as pasapi@au1.ibm.com...
OK

3. Edit the manifest.yml to use a unique application name; I normally use {myname}-appname.

---
declared-services:
  personality-insights-service:
    label: personality_insights
    plan: 'IBM Watson Personality Insights Monthly Plan'

applications:
- name: pas-personality-insights-nodejs
  command: node app.js
  path: .
  memory: 256M
  services:
  - personality-insights-service

4. Push the application as shown below.

pas@192-168-1-4:~/bluemix-apps/watson/personality-insights-nodejs$ cf push
Using manifest file /Users/pas/ibm/bluemix/apps/watson/personality-insights-nodejs/manifest.yml

Creating app pas-personality-insights-nodejs in org pasapi@au1.ibm.com / space dev as pasapi@au1.ibm.com...
OK

Creating route pas-personality-insights-nodejs.mybluemix.net...
OK

Binding pas-personality-insights-nodejs.mybluemix.net to pas-personality-insights-nodejs...
OK

Uploading pas-personality-insights-nodejs...
Uploading app files from: /Users/pas/ibm/bluemix/apps/watson/personality-insights-nodejs
Uploading 188.5K, 30 files
Done uploading
OK
Binding service personality-insights-service to app pas-personality-insights-nodejs in org pasapi@au1.ibm.com / space dev as pasapi@au1.ibm.com...
OK

Starting app pas-personality-insights-nodejs in org pasapi@au1.ibm.com / space dev as pasapi@au1.ibm.com...
-----> Downloaded app package (192K)
-----> Node.js Buildpack Version: v1.14-20150309-1555
-----> Requested node range:  >=0.10
-----> Resolved node version: 0.10.36
-----> Installing IBM SDK for Node.js from cache
-----> Checking and configuring service extensions
-----> Installing dependencies
       errorhandler@1.3.5 node_modules/errorhandler
       ├── escape-html@1.0.1
       └── accepts@1.2.5 (negotiator@0.5.1, mime-types@2.0.10)
       body-parser@1.11.0 node_modules/body-parser
       ├── bytes@1.0.0
       ├── media-typer@0.3.0
       ├── raw-body@1.3.2
       ├── depd@1.0.0
       ├── qs@2.3.3
       ├── on-finished@2.2.0 (ee-first@1.1.0)
       ├── iconv-lite@0.4.6
       └── type-is@1.5.7 (mime-types@2.0.10)
       express@4.11.2 node_modules/express
       ├── escape-html@1.0.1
       ├── merge-descriptors@0.0.2
       ├── utils-merge@1.0.0
       ├── methods@1.1.1
       ├── fresh@0.2.4
       ├── cookie@0.1.2
       ├── range-parser@1.0.2
       ├── cookie-signature@1.0.5
       ├── media-typer@0.3.0
       ├── finalhandler@0.3.3
       ├── vary@1.0.0
       ├── parseurl@1.3.0
       ├── serve-static@1.8.1
       ├── content-disposition@0.5.0
       ├── path-to-regexp@0.1.3
       ├── depd@1.0.0
       ├── qs@2.3.3
       ├── on-finished@2.2.0 (ee-first@1.1.0)
       ├── debug@2.1.3 (ms@0.7.0)
       ├── etag@1.5.1 (crc@3.2.1)
       ├── proxy-addr@1.0.7 (forwarded@0.1.0, ipaddr.js@0.1.9)
       ├── send@0.11.1 (destroy@1.0.3, ms@0.7.0, mime@1.2.11)
       ├── accepts@1.2.5 (negotiator@0.5.1, mime-types@2.0.10)
       └── type-is@1.5.7 (mime-types@2.0.10)
       jade@1.9.2 node_modules/jade
       ├── character-parser@1.2.1
       ├── void-elements@2.0.1
       ├── commander@2.6.0
       ├── mkdirp@0.5.0 (minimist@0.0.8)
       ├── with@4.0.2 (acorn-globals@1.0.3, acorn@1.0.1)
       ├── constantinople@3.0.1 (acorn-globals@1.0.3)
       └── transformers@2.1.0 (promise@2.0.0, css@1.0.8, uglify-js@2.2.5)
       watson-developer-cloud@0.9.8 node_modules/watson-developer-cloud
       ├── object.pick@1.1.1
       ├── cookie@0.1.2
       ├── extend@2.0.0
       ├── isstream@0.1.2
       ├── async@0.9.0
       ├── string-template@0.2.0 (js-string-escape@1.0.0)
       ├── object.omit@0.2.1 (isobject@0.2.0, for-own@0.1.3)
       └── request@2.53.0 (caseless@0.9.0, json-stringify-safe@5.0.0, aws-sign2@0.5.0, forever-agent@0.5.2, form-data@0.2.0, stringstream@0.0.4, oauth-sign@0.6.0, tunnel-agent@0.4.0, qs@2.3.3, node-uuid@1.4.3, mime-types@2.0.10, combined-stream@0.0.7, http-signature@0.10.1, tough-cookie@0.12.1, bl@0.9.4, hawk@2.3.1)
-----> Caching node_modules directory for future builds
-----> Cleaning up node-gyp and npm artifacts
-----> No Procfile found; Adding npm start to new Procfile
-----> Building runtime environment
-----> Checking and configuring service extensions
-----> Installing App Management
-----> Node.js Buildpack is done creating the droplet

-----> Uploading droplet (12M)

0 of 1 instances running, 1 starting
1 of 1 instances running

App started


OK

App pas-personality-insights-nodejs was started using this command `node app.js`

Showing health and status for app pas-personality-insights-nodejs in org pasapi@au1.ibm.com / space dev as pasapi@au1.ibm.com...
OK

requested state: started
instances: 1/1
usage: 256M x 1 instances
urls: pas-personality-insights-nodejs.mybluemix.net
last uploaded: Mon Mar 30 10:18:37 +0000 2015

     state     since                    cpu    memory   disk     details
#0   running   2015-03-30 09:20:06 PM   0.0%   0 of 0   0 of 0

5. Access Application
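
If you want to exercise the service outside the sample application, it can also be called directly over REST. The sketch below is an assumption based on the service documentation at the time (v2 profile endpoint, basic authentication with the credentials of the bound service, which you can read with cf env pas-personality-insights-nodejs):

curl -u "<username>:<password>" \
     -H "Content-Type: text/plain" \
     --data-binary @profile.txt \
     "https://gateway.watsonplatform.net/personality-insights/api/v2/profile"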


This demo is based on the link below.

https://github.com/watson-developer-cloud/personality-insights-nodejs

More information can be found at the following links:

http://www.ibm.com/smarterplanet/us/en/ibmwatson/developercloud/doc/personality-insights/

http://feeds.feedburner.com/TheBlasFromPas
Categories: Fusion Middleware

Retrieving OAM keystore password

Frank van Bortel - Mon, 2015-03-30 03:12
How to retrieve the password of the OAM keystore, if you ever need it: the password of the default OAM keystore (which is generated) can be retrieved using:

cd /oracle/middleware/oracle_common/common/bin
./wlst.sh
connect();
domainRuntime()
listCred(map="OAM_STORE", key="jks")

If you would like to change it, use resetKeystorePassword().

Frank

Slides from my presentation at APEX World 2015 in Rotterdam

Dietmar Aust - Mon, 2015-03-30 03:05
Hi guys,

I had a great time at APEX World in Rotterdam, it was a wonderful event. I could meet up with my friends and learn a few new tricks, too :).

Here are the slides from my presentation about the many smaller new features of Oracle APEX 5.0. And I could only cram like half of the good stuff that I found into this 45 min. session.

About 70 people attended the session and I sure hope they will use some of the features I presented.

Once I clean up my demo application, I will make it available, too.

Cheers,
~Dietmar.

Presentation material & E-learning videos – In-Memory Column Store Workshop with Maria Colgan

Marco Gralike - Mon, 2015-03-30 03:00
You can now download and have another look at the presentations used during the In-Memory…

Oracle Database In-Memory Test Drive Workshop: Canberra 28 April 2015

Richard Foote - Sun, 2015-03-29 21:17
I’ll be running a free Oracle Database In-Memory Test Drive Workshop locally here in Canberra on Tuesday, 28th April 2015. Just bring a laptop with at least 8G of RAM and I’ll supply a VirtualBox image with the Oracle Database 12c In-Memory environment. Together we’ll go through a number of hands-on labs that cover: Configuring the Product Easily […]
Categories: DBA Blogs

Sqlplus is my second home, part 8: Embedding multiple sqlplus arguments into one variable

Tanel Poder - Sun, 2015-03-29 15:23

I’ve updated some of my ASH scripts to use these 4 arguments in a standard way:

  1. What ASH columns to display (and aggregate by)
  2. Which ASH rows to use for the report (filter)
  3. Time range start
  4. Time range end

So this means whenever I run ashtop (or dashtop) for example, I need to type in all 4 parameters. The example below would show top SQL_IDs only for user SOE sessions from last hour of ASH samples:

SQL> @ashtop sql_id username='SOE' sysdate-1/24 sysdate

    Total
  Seconds     AAS %This   SQL_ID        FIRST_SEEN          LAST_SEEN           DIST_SQLEXEC_SEEN
--------- ------- ------- ------------- ------------------- ------------------- -----------------
     2271      .6   21% | 56pwkjspvmg3h 2015-03-29 13:13:16 2015-03-29 13:43:34               145
     2045      .6   19% | gkxxkghxubh1a 2015-03-29 13:13:16 2015-03-29 13:43:14               149
     1224      .3   11% | 29qp10usqkqh0 2015-03-29 13:13:25 2015-03-29 13:43:32               132
      959      .3    9% | c13sma6rkr27c 2015-03-29 13:13:19 2015-03-29 13:43:34               958
      758      .2    7% |               2015-03-29 13:13:16 2015-03-29 13:43:31                 1

When I want more control and specify a fixed time range, I can just use the ANSI TIMESTAMP (or TO_DATE) syntax:

SQL> @ashtop sql_id username='SOE' "TIMESTAMP'2015-03-29 13:00:00'" "TIMESTAMP'2015-03-29 13:15:00'"

    Total
  Seconds     AAS %This   SQL_ID        FIRST_SEEN          LAST_SEEN           DIST_SQLEXEC_SEEN
--------- ------- ------- ------------- ------------------- ------------------- -----------------
      153      .2   22% | 56pwkjspvmg3h 2015-03-29 13:13:29 2015-03-29 13:14:59                 9
      132      .1   19% | gkxxkghxubh1a 2015-03-29 13:13:29 2015-03-29 13:14:59                 8
       95      .1   14% | 29qp10usqkqh0 2015-03-29 13:13:29 2015-03-29 13:14:52                 7
       69      .1   10% | c13sma6rkr27c 2015-03-29 13:13:31 2015-03-29 13:14:58                69
       41      .0    6% |               2015-03-29 13:13:34 2015-03-29 13:14:59                 1

Note that arguments 3 and 4 above are in double quotes, as there's a space within each timestamp value. Without the double quotes, sqlplus would think the script has a total of 6 arguments due to the spaces.

I don’t like to type too much though (every character counts!) so I was happy to see that the following sqlplus hack works. I just defined pairs of arguments as sqlplus DEFINE variables as seen below (also in init.sql now):

  -- geeky shortcuts for producing date ranges for various ASH scripts
  define     min="sysdate-1/24/60 sysdate"
  define  minute="sysdate-1/24/60 sysdate"
  define    5min="sysdate-1/24/12 sysdate"
  define    hour="sysdate-1/24 sysdate"
  define  2hours="sysdate-1/12 sysdate"
  define 24hours="sysdate-1 sysdate"
  define     day="sysdate-1 sysdate"
  define   today="TRUNC(sysdate) sysdate"
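
The trick works because sqlplus expands the substitution variable before it parses the script arguments, so one DEFINE containing a space turns into two positional parameters. A tiny hypothetical script (args.sql, not part of the toolkit) makes this visible:

-- args.sql (hypothetical): just echo the two positional parameters passed in
select '&1' as arg1, '&2' as arg2 from dual;
undefine 1
undefine 2

Calling it as @args &hour should come back with arg1 = sysdate-1/24 and arg2 = sysdate, i.e. two separate arguments.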

And now I can type just 3 arguments instead of 4 when I run some of my scripts and want some predefined behavior like seeing last 5 minutes’ activity:

SQL> @ashtop sql_id username='SOE' &5min

    Total
  Seconds     AAS %This   SQL_ID        FIRST_SEEN          LAST_SEEN           DIST_SQLEXEC_SEEN
--------- ------- ------- ------------- ------------------- ------------------- -----------------
      368     1.2   23% | gkxxkghxubh1a 2015-03-29 13:39:34 2015-03-29 13:44:33                37
      241      .8   15% | 56pwkjspvmg3h 2015-03-29 13:40:05 2015-03-29 13:44:33                25
      185      .6   12% | 29qp10usqkqh0 2015-03-29 13:39:40 2015-03-29 13:44:33                24
      129      .4    8% | c13sma6rkr27c 2015-03-29 13:39:35 2015-03-29 13:44:32               129
      107      .4    7% |               2015-03-29 13:39:34 2015-03-29 13:44:33                 1

That’s it, I hope this hack helps :-)

By the way – if you’re a command line & sqlplus fan, check out the SQLCL command line “new sqlplus” tool from the SQL Developer team! (you can download it from the SQL Dev early adopter page for now).

 


Windows Cluster vNext and cloud witness

Yann Neuhaus - Sun, 2015-03-29 12:14

The next version of Windows will provide some interesting features for WFSC architectures. One of them is the new quorum type, "Node majority and cloud witness", which solves many of the cases where a third datacenter would be required, but is missing, to achieve a truly resilient quorum.

Let’s imagine the following scenario, which may concern the implementation of either an SQL Server availability group or an SQL Server FCI. Let’s say you have to implement a geo-cluster that includes 4 nodes across two datacenters, with 2 nodes in each. To achieve quorum in case of a broken network link between the two datacenters, adding a witness is mandatory even if you use the dynamic node weight feature, but where do you put it? Having a third datacenter to host this witness seems to be the best solution but, as you may imagine, it is a costly solution that many customers cannot afford.

Using a cloud witness in this case might be a very interesting workaround. Indeed, a cloud witness consists of a blob stored inside a container in a storage account. From a cost perspective, it is a very cheap solution because you pay only for the storage space you actually use (first 1TB/month – CHF 0.0217 / GB). Let's take a look at the storage space consumed by my cloud witness in my storage account:

 

blog_36_-_cloud_witness_storage_space_

 

 

Interesting, isn’t it? To implement a cloud witness, you have to meet the following requirements:

  • Your storage account must be configured as locally redundant storage (LRS) because the created blob file is used as the arbitration point, which requires some consistency guarantees when reading the data. In this case, all data in the storage account is made durable by replicating transactions synchronously. LRS doesn’t protect against a complete regional disaster, but that may be acceptable in our case because the cloud witness is also a dynamic weight-based feature.
  • A special container, called msft-cloud-witness, is created for this purpose and contains the blob file tied to the cloud witness.

 

blog_36_-_storage_account_replication_type_

 

How to configure my cloud witness?

In the same way as before. Using the GUI, you have to select the quorum type you want to use and then provide the storage account information (the storage account name and the access key). You may also prefer to configure your cloud witness by using the PowerShell cmdlet Set-ClusterQuorum, as follows:
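
The screenshot below shows the cmdlet in action; its syntax is roughly the following sketch (the storage account name and access key are placeholders, and the parameter set may still change before RTM):

Set-ClusterQuorum -CloudWitness -AccountName "<storage account name>" -AccessKey "<storage account access key>"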

 

blog_36_-_cloud_witness_configuration_powershel

 

After configuring the cloud witness, a corresponding core resource is created in an online state, as follows:

 

blog_36_-_cloud_witness_view_from_GUI_

 

By using PowerShell:

 

blog_36_-_cloud_witness_view_from_powershell_

 

Let’s have a deeper look at this core resource, especially its advanced policy parameters, namely the isAlive() and looksAlive() configuration:

 

blog_36_-_cloud_witness_isalive_looksalive

 

We may notice that the basic resource health check interval defaults to 15 minutes. Hmm, I guess this value will probably need to be customized according to the customer's architecture configuration.

Let’s go ahead and perform some basic tests with my lab architecture. Basically, I have configured a multi-subnet failover cluster that includes four nodes across two (simulated) datacenters. Then, I implemented a cloud witness hosted in my storage account “mikedavem”. You may find a simplified picture of my environment below:

 

blog_36_-_WFSC_core_resources_overview

 

...

 

blog_36_-_WFSC_nodes_overview

 

You may notice that because I implemented a cloud witness, the system changes the overall node weight configuration (4 nodes + 1 witness = 5 votes). In addition, in case of a network failure between my 2 datacenters, I want to prioritize the first datacenter in terms of availability. In order to meet this requirement, I used the new cluster property LowerQuorumPriorityNodeID to change the priority of the WIN104 cluster node.
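
From PowerShell, setting and verifying this looks roughly like the following sketch (the node name is the one used in this lab):

# Lower the quorum priority of the WIN104 node: its vote is the first to be removed when a vote has to be dropped
(Get-Cluster).LowerQuorumPriorityNodeID = (Get-ClusterNode -Name "WIN104").Id
# Check the resulting vote assignment
Get-ClusterNode | Format-Table Name, NodeWeight, DynamicWeight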

 

blog_36_-_WFSC_change_node_priority

 

At this point we are now ready to perform our first test: simulating a failure of the cloud witness:

 

blog_36_-_cloud_witness_failed_statejpg

 

The system then recalculates the overall node weight configuration to achieve maximum quorum resiliency. As expected, the node weight of the WIN104 cluster node is changed from 1 to 0 because it has the lower priority.

The second test consists of simulating a network failure between the two datacenters. Once again, as expected, the first partition of the WFSC in datacenter1 stays online whereas the second partition goes offline, according to the node weight priority configuration.

 

blog_36_-_WFSC_failed_state_partition_2jpg

 

Is the cloud witness's dynamic behavior suitable for minimal configurations?

I wrote a blog post here about issues that exist with dynamic witness behavior and minimal configurations with only 2 cluster nodes. I hoped to see an improvement on that side but unfortunately no. Perhaps with the RTM release … wait and see.

 

Happy clustering!

 

 

Video Tutorial: XPLAN_ASH Active Session History - Part 4

Randolf Geist - Sun, 2015-03-29 11:55
The next part of the video tutorial explaining the XPLAN_ASH Active Session History functionality is now available, continuing the actual walk-through of the script output.

More parts to follow.


Automatic ADF Popup Opening on Fragment Load

Andrejus Baranovski - Sun, 2015-03-29 08:56
I had a post about opening an ADF Popup on page load - Opening ADF PopUp on Page Load. The approach is quite straightforward: the developer needs to use the showPopupBehavior operation with the appropriate trigger type. When it comes to opening an ADF Popup on fragment load, the implementation is a bit more complex. There is a known method of implementing a hidden text field and calling your custom logic in its getter method - the getter will be executed when the fragment loads. However, this is not very efficient, as you will need to add a condition to distinguish between the first and subsequent calls to the getter (it will be executed multiple times). In this post I will describe a different approach - using the ADF poll component and forcing it to execute only once after fragment load.

Here you can download the sample application - FragmentPopUpLoadApp.zip. This sample implements two UI tabs. Each of the tabs renders an ADF region. The first region displays information about all employees - a tree map with salary information:


Automatic popup opening is implemented in the second region - the Employees By Department tab. As soon as the user opens this tab, a popup is loaded to select a department. Data in the region is filtered based on the department selected in the popup:


Filtered data after a selection was made in the automatically opened popup:


The popup in the fragment is loaded on the first load by an ADF poll component. The poll component is set with a short interval of 10 milliseconds. During its first execution it will call a Java listener method, and in addition a JavaScript client listener will be invoked. Inside the JavaScript client listener, we disable the ADF poll component by setting its interval to a negative value. This is how the ADF poll executes only once and then stops:


Here is the Java listener method, invoked by the ADF poll component - it loads the popup:


The ADF poll is stopped after its first execution. However, we need to ensure it will be started again if the user re-opens the same tab. For this purpose I have implemented conditional ADF region activation - the region is de-activated when the user navigates away from the tab. The tab disclosure listener updates a helper variable to track which tab becomes active:


The disclosure listener updates a page flow scope variable - forceActivate:


This variable is used in the region definition - the region is active when the tab is selected, and inactive otherwise:

node-oracledb 0.4.2 is on NPM (Node.js driver for Oracle Database)

Christopher Jones - Sat, 2015-03-28 18:41

The 0.4.2 version of the Node.js driver for Oracle Database is out.

  • Node-oracledb is now officially on the npmjs.com repository. This simplifies the Install instructions by removing the need to manually clone or download from GitHub. Thanks to Tim Branyen for setting this up and handing over stewardship to us.

  • Metadata support was added. Column names are now provided in the execute() callback result object. See the doc example; a short sketch also follows this list.

  • We saw a few people try to use strangely old versions of Node 0.10. I've bumped up the lower limit requirement a bit. It won't force you to use the latest Node.js 0.10 patch set but you really should keep up to date with security fixes.

    If you want to build with Node 0.12, there is a community contributed patch from Richard Natal that can be found here. This patch also allows node-oracledb to work with io.js.

  • The default Instant Client directory on AIX was changed from /opt/oracle/instantclient_12_1 to /opt/oracle/instantclient. This now matches the default of other platforms.

  • One other small change was some improvements to the Windows install documentation.
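
A minimal sketch of reading the new metadata (connection details are placeholders; install the driver with npm install oracledb first):

var oracledb = require('oracledb');

oracledb.getConnection(
  { user: "hr", password: "welcome", connectString: "localhost/orcl" },  // placeholder credentials
  function (err, connection) {
    if (err) { console.error(err.message); return; }
    connection.execute(
      "SELECT employee_id, last_name FROM employees WHERE ROWNUM <= 2",
      [],                              // no bind variables
      function (err, result) {
        if (err) { console.error(err.message); return; }
        console.log(result.metaData);  // e.g. [ { name: 'EMPLOYEE_ID' }, { name: 'LAST_NAME' } ]
        console.log(result.rows);      // rows come back as arrays of column values by default
        connection.release(function (err) { if (err) { console.error(err.message); } });
      });
  });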

Yes, work is continuing behind the scenes on other features.

A Glance at Smartwatches in the Enterprise: A Moment in Time Experience

Usable Apps - Sat, 2015-03-28 02:30

Ultan O’Broin (@usableapps) talks to Oracle Applications User Experience (OAUX) Vice President Jeremy Ashley (@jrwashley) about designing apps for that smartwatch, and every other smartwatch, too.

Nobody wants their device to disrupt them from what they are doing or to have to move to another one to continue working. Keeping users in the moment of their tasks—independent of the devices they’re using—is central to any great user experience.

The ability to apply our Oracle Applications Cloud design philosophy to the smartwatch demonstrates an ideal realization of the “glance” method, keeping users in that moment: Making the complex simple, flexible, intuitive, and most of all, convenient. OAUX recognizes the need for smartwatch wearers to experience that “right here, right now” feeling, the one in which you have just what you need, just when you need it.

The wearable technology space is currently focused on smartwatches. We’re excited by Apple’s announcement about their smartwatch, and we’re even more thrilled to now show you our proof of concept glance designs for the Oracle Applications Cloud on the Apple Watch. We want to hear your reaction! 

Glance for Oracle Applications Cloud for Apple Watch proof of concept designs

For the smartwatch specifically, VP Jeremy Ashley explained how our glance approach applies to smartwatch wearers, regardless of their choice of device:

“The most common wearable user interaction is to glance at something. The watch works as the wearer’s mini dialog box to the cloud, making microtransactions convenient on the wrist, and presenting the right information to the wearer at the right time. How quickly and easily someone can do something actually useful is the key activity."

Glance brings cloud interaction to wearers in a personal way, requesting and not demanding attention, while eliminating a need to switch to other devices to “dig in,” or to even have to pull a smartphone out of the pocket to respond.

“To continue the journey to completing a task using glance is as simple and natural as telling the time on your wrist”, says Jeremy.

Being able to glance down at your wrist at a stylish smartwatch experience—one that provides super-handy ways to engage with gems of information— enhances working in the cloud in powerful and productive ways, whether you’re a sales rep walking from your car to an opportunity engagement confidently glancing at the latest competitive news, or a field technician swiping across a watchface to securely record time on a remote job.

Glancing at a UI is the optimal wearable experience for the OAUX mobility strategy, where the cloud, not the device, is our platform. This means you can see our device-agnostic glance design at work not only on an Apple Watch, but on Android Wear, Pebble, and other devices, too.

Glance for Oracle Applications Cloud proof of concept apps on Android Wear Samsung Gear Live and Pebble

Designing a Glanceable Platform

The path to our glance designs began with OAUX research into every kind of smartwatch we could get on our wrists so that we could study their possibilities, experience how they felt, how they looked, and how they complemented everyday work and life activities. Then we combined ideas and experiences with Oracle Cloud technology to deliver a simplified design strategy that we can apply across devices. As a result, our UI designs are consistent and familiar to users as they work flexibly in the cloud, regardless of their device, type of operating system, or form factor.

This is not about designing for any one specific smartwatch. It’s a platform-agnostic approach to wearable technology that enables Oracle customers to get that awesome glanceable, cloud-enabled experience on their wearable of choice.

Why Smartwatches?

Smartwatches such as the Apple Watch, Pebble, and Android Wear devices have resonated strongly with innovators and consumers of wearable technology. The smartwatch succeeds because we’re already familiar and comfortable with using wristwatches, and they’re practical and easy to use.

From first relying on the sun to tell the time, to looking up at town hall clocks, to taking out pocket watches, and then being able to glance at our wrists to tell the time, we’ve seen an evolution in glanceable technology analogous to the miniaturization of computing from large mainframes to personal, mobile devices for consumers.

Just like enterprise apps, watches have already been designed for many specializations and roles, be they military, sport, medical, fashion, and so on. So the evolution of the smartwatch into an accepted workplace application is built on a firm foundation.

More Information

Again, OAUX is there, on trend, ready and offering a solution grounded in innovation and design expertise, one that responds to how we work today in the cloud.

In future articles, we’ll explore more examples that showcase how we’re applying the glance approach to wearable technology, and we’ll look at design considerations in more detail. You can read more about our Oracle Applications Cloud design philosophy and other trends and innovations that influence our thinking in our free eBook.

Check the Usable Apps website for events where you can experience our smartwatch and other innovations for real, read our Storify feature on wearable technology, and see our YouTube videos about our UX design philosophy and strategy.

More Apple Watch glance designs are on Instagram

Seriously proud of this and it doesn't make me grumpy!

Grumpy old DBA - Fri, 2015-03-27 18:27
So the GLOC 2015 conference registration is open (GLOC 2015) (it has been for a while), and recently we completed posting all the speakers/topics. That's been darn good.

Just out today is our SAG (schedule at a glance), which demonstrates just how good our conference will be. Low cost, high quality, and just an event that you really should think about being in Cleveland for in May.

The schedule at a glance does not include our 4 top-notch half-day workshops going on Monday, but you can see them from the regular registration.

I am so grateful for the speakers we have on board. It's a lot of work behind the scenes getting something like this rolling, but when you see a lineup like this, just wow!
Categories: DBA Blogs

Be Careful when using FRA with Streams

Michael Dinh - Fri, 2015-03-27 16:12

Oracle Database 11g Enterprise Edition Release 11.1.0.7.0 – 64bit Production

select state from gv$streams_capture;

STATE
----------------------------------------------------------------------------------------------------------------------------------------------------------------
WAITING FOR REDO: LAST SCN MINED 442455793041

select thread#, sequence#, status
from v$archived_log
where 442455793041 between first_change#
and next_change# order by 1,2;

   THREAD#  SEQUENCE# S
---------- ---------- -
	 1    1070609 D
	 1    1070609 D
	 1    1070609 D
	 1    1070610 D
	 1    1070610 D
	 2    1153149 D
	 2    1153149 D
	 2    1153149 D

8 rows selected.

Who’s deleting the archived logs? Thanks to Praveen G., who figured this out. From the alert log:

WARNING: The following archived logs needed by Streams capture process
are being deleted to free space in the flash recovery area. If you need
to process these logs again, restore them from a backup to a destination
other than the flash recovery area using the following RMAN commands:
   RUN{
      # <directory/ASM diskgroup> is a location other than the
      # flash recovery area
      SET ARCHIVELOG DESTINATION TO '<directory/ASM diskgroup>';
      RESTORE ARCHIVELOG ...;
   }