
Feed aggregator

Log Buffer #436: A Carnival of the Vanities for DBAs

Pythian Group - Fri, 2015-08-14 08:00

This Log Buffer Edition covers the top blog posts of the week from the Oracle, SQL Server and MySQL arenas.

Oracle:

  • Momentum and activity regarding the Data Act are gathering steam, and it is off to a great start too. The Data Act directs the Office of Management and Budget (OMB) and the Department of the Treasury (Treasury) to establish government-wide financial reporting data standards by May 2015.
  • RMS has a number of async queues for processing new item locations, store adds, warehouse adds, and item and PO induction. We have seen rows stuck in the queues and needed to release the stuck AQ jobs.
  • We have a number of updates to partitioned tables that are run from within PL/SQL blocks which have either an execute immediate ‘alter session enable parallel dml’ or execute immediate ‘alter session force parallel dml’ in the same PL/SQL block. It appears that the alter session is not having any effect, as we are ending up with non-parallel plans (see the sketch after this list).
  • Commerce Cloud, a new flexible and scalable SaaS solution built for the Oracle Public Cloud, adds a key new piece to the rich Oracle Customer Experience (CX) applications portfolio. Built with the latest commerce technology, Oracle Commerce Cloud is designed to ignite business innovation and rapid growth, while simplifying IT management and reducing costs.
  • Have you used R12: Master Data Fix Diagnostic to Validate Data Related to Purchase Orders and Requisitions?
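
For readers unfamiliar with the pattern described in the parallel DML item, it looks roughly like the sketch below. The table name and hint are hypothetical; the original post explores why the session-level setting may still not produce parallel plans:

    -- Hypothetical sketch: enable parallel DML and run the update
    -- from the same PL/SQL block, as described in the item above.
    BEGIN
      EXECUTE IMMEDIATE 'alter session enable parallel dml';
      EXECUTE IMMEDIATE
        'update /*+ parallel(t 8) */ sales_part t set amount = amount * 1.1';
      COMMIT;
    END;
    /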

SQL Server:

  • SQL Server 2016 Community Technology Preview 2.2 is available
  • What is Database Lifecycle Management (DLM)?
  • SSIS Catalog – Path to backup file could not be determined
  • SQL SERVER – Unable to Bring SQL Cluster Resource Online – Online Pending and then Failed
  • Snapshot Isolation Level and Concurrent Modification Collisions – On Disk and In Memory OLTP

MySQL:

  • A Better Approach to all MySQL Regression, Stress & Feature Testing: Random Coverage Testing & SQL Interleaving.
  • What is MySQL Package Verification? Package verification (Pkgver for short) refers to black box testing of MySQL packages across all supported platforms and across different MySQL versions. In Pkgver, packages are tested in order to ensure that the basic user experience is as it should be, focusing on installation, initial startup and rudimentary functionality.
  • With the rise of agile development methodologies, more and more systems and applications are built in series of iterations. This is true for the database schema as well, as it has to evolve together with the application. Unfortunately, schema changes and databases do not play well together.
  • MySQL replication is a process that allows you to easily maintain multiple copies of MySQL data by having them copied automatically from a master to a slave database.
  • In Case You Missed It – Breaking Databases – Keeping your Ruby on Rails ORM under Control.

The post Log Buffer #436: A Carnival of the Vanities for DBAs appeared first on Pythian - Data Experts Blog.

Categories: DBA Blogs

OAM PS3 State-of-the-art

Frank van Bortel - Fri, 2015-08-14 06:25
An attempt to run OAM 11g Release 2 PS3 on Oracle Linux 6.7, WLS 12c, RDBMS 12c.

Install Linux

Pretty straightforward. Used Oracle Linux 6.7, as 7 is not certified. Create a 200MB /boot, and an LVM for /, both ext4. Install just the server. Deselect *all* options; add just the X system and X legacy support (the OUI needs it). Some 566 packages will get installed. Make sure it boots, and the network starts.
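
As a minimal sketch of that package selection from the command line (the yum group names are my assumption for stock OL6; verify with 'yum grouplist'):

    # Hypothetical: after a minimal server install, add X for the OUI
    yum groupinstall "X Window System"
    yum groupinstall "Legacy X Window System compatibility"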

WebLogic Server 12.1.3 Developer Zip - Update 3 Posted

Steve Button - Thu, 2015-08-13 17:48
An update has just been posted on OTN for the WebLogic Server 12.1.3 Developer Zip distribution.

WebLogic Server 12.1.3 Developer Zip Update 3 is built with the fixes from the WebLogic Server 12.1.3.0.4 Patch Set Update, providing developers with access to the latest set of fixes available in the corresponding production release.

See the download page for access to the update:

http://www.oracle.com/technetwork/middleware/weblogic/downloads/wls-for-dev-1703574.html

http://download.oracle.com/otn/nt/middleware/12c/wls/1213/wls1213_dev_update3.zip

The Update 3 README provides details of what has been included:

http://download.oracle.com/otn/nt/middleware/12c/wls/1213/README_WIN_UP3.txt


Thoughts on Google Cloud Dataflow

Pythian Group - Thu, 2015-08-13 15:20

Google Cloud Dataflow is a data processing tool developed by Google that runs in the cloud. Dataflow is an easy-to-use, flexible tool that delivers completely automated scaling. It is deeply tied to the Google cloud infrastructure, making it very powerful for projects running in Google Cloud.

Dataflow is an attractive resource management and job monitoring tool because it automatically manages all of the Google Cloud resources, including creating and tearing down Google Compute Engine resources, communicating with Google Cloud Storage, working with Google Cloud Pub/Sub, aggregating logs, etc.

Cloud Dataflow has the following major components:

SDK – The Dataflow SDK provides a programming model that simplifies and abstracts away the processing of large amounts of data. Dataflow only provides a Java SDK at the moment, which is a barrier for non-Java programmers. More on the programming model later.

Google Cloud Platform Managed Services – This is one of my favourite features in Dataflow. Dataflow manages and ties together components, such as Google Compute Engine, spins up and tears down VMs, manages BigQuery, aggregates logs, etc.

These two components can be used together to create jobs.

Being programmatic, Dataflow is extremely flexible. It works well for both batch and streaming jobs. Dataflow excels at high-volume computations and provides a unified programming model, which is very efficient and rather simple considering how powerful it is.

The Dataflow programming model simplifies the mechanics of large-scale data processing and abstracts out a lot of the lower level tasks, such as cluster management, adding more nodes, etc. It lets you focus on the logical aspect of your pipeline and not worry about how the job will run.

The Dataflow pipeline consists of four major abstractions (a short code sketch follows this list):

  • Pipelines – A pipeline represents a complete process on a dataset or datasets. The data could be brought in from external data sources. It could then have a series of transformation operations, such as filters, joins, aggregations, etc., applied to the data to give it meaning and to achieve its desired form. This data could then be written to a sink. The sink could be within the Google Cloud platform or external. The sink could even be the same as the data source.
  • PCollections – PCollections are datasets in the pipeline. PCollections could represent datasets of any size. These datasets could be bounded (fixed size – such as national census data) or unbounded (such as a Twitter feed or data from weather sensors). PCollections are the input and output of every transform operation.
  • Transforms – Transforms are the data processing steps in the pipeline. Transforms take one or more PCollections, apply some transform operations to those collections, and then output to a PCollection.
  • I/O Sinks and Sources – The Source and Sink APIs provide functions to read data into and out of collections. The sources act as the roots of the pipeline and the sinks are the endpoints of the pipeline. Dataflow has a set of built-in sinks and sources, but it is also possible to write custom sinks and sources for other data sources.
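
To make these abstractions concrete, here is a minimal word-count-style sketch against the Java SDK of the time (Dataflow SDK 1.x). The bucket paths are placeholders, and class names should be checked against the SDK version you actually use:

    import com.google.cloud.dataflow.sdk.Pipeline;
    import com.google.cloud.dataflow.sdk.io.TextIO;
    import com.google.cloud.dataflow.sdk.options.PipelineOptionsFactory;
    import com.google.cloud.dataflow.sdk.transforms.Count;
    import com.google.cloud.dataflow.sdk.transforms.DoFn;
    import com.google.cloud.dataflow.sdk.transforms.ParDo;
    import com.google.cloud.dataflow.sdk.values.KV;
    import com.google.cloud.dataflow.sdk.values.PCollection;

    public class MinimalWordCount {
      public static void main(String[] args) {
        // Pipeline: the complete process, from source to sink
        Pipeline p = Pipeline.create(
            PipelineOptionsFactory.fromArgs(args).withValidation().create());

        // Source -> PCollection of lines
        PCollection<String> lines =
            p.apply(TextIO.Read.from("gs://my-bucket/input.txt"));  // placeholder path

        // Transforms: split lines into words, then count occurrences
        PCollection<KV<String, Long>> counts = lines
            .apply(ParDo.of(new DoFn<String, String>() {
              @Override
              public void processElement(ProcessContext c) {
                for (String word : c.element().split("\\s+")) {
                  if (!word.isEmpty()) c.output(word);
                }
              }
            }))
            .apply(Count.<String>perElement());

        // Sink: format the counts and write them out
        counts
            .apply(ParDo.of(new DoFn<KV<String, Long>, String>() {
              @Override
              public void processElement(ProcessContext c) {
                c.output(c.element().getKey() + ": " + c.element().getValue());
              }
            }))
            .apply(TextIO.Write.to("gs://my-bucket/output"));  // placeholder path

        p.run();
      }
    }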

Dataflow is also planning to add integration for Apache Flink and Apache Spark. Adding Spark and Flink integration would be a huge feature since it would open up the possibilities to use MLlib, Spark SQL, and Flink machine-learning capabilities.

One of the use cases we explored was to create a pipeline that ingests streaming data from several POS systems using Dataflow’s streaming APIs. This data can then be joined with customer profile data that is ingested incrementally on a daily basis from a relational database. We can then run some filtering and aggregation operations on this data. Using the sink for BigQuery, we can insert the data into BigQuery and then run queries. What makes this so attractive is that in this whole process of ingesting vast amounts of streaming data, there was no need to set up clusters or networks or install software, etc. We stayed focused on the data processing and the logic that went into it.
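
A rough, hedged skeleton of the streaming half of that pipeline (Pub/Sub in, BigQuery out), again against the 1.x Java SDK: the topic path, table name and schema are placeholders, and the POS parsing and the join with profile data are elided.

    import com.google.api.services.bigquery.model.TableFieldSchema;
    import com.google.api.services.bigquery.model.TableRow;
    import com.google.api.services.bigquery.model.TableSchema;
    import com.google.cloud.dataflow.sdk.Pipeline;
    import com.google.cloud.dataflow.sdk.io.BigQueryIO;
    import com.google.cloud.dataflow.sdk.io.PubsubIO;
    import com.google.cloud.dataflow.sdk.options.DataflowPipelineOptions;
    import com.google.cloud.dataflow.sdk.options.PipelineOptionsFactory;
    import com.google.cloud.dataflow.sdk.transforms.DoFn;
    import com.google.cloud.dataflow.sdk.transforms.ParDo;
    import java.util.Arrays;

    public class PosStreamingSketch {
      public static void main(String[] args) {
        DataflowPipelineOptions options = PipelineOptionsFactory
            .fromArgs(args).withValidation().as(DataflowPipelineOptions.class);
        options.setStreaming(true);  // run as a streaming job
        Pipeline p = Pipeline.create(options);

        // Placeholder schema: one raw STRING column per event
        TableSchema schema = new TableSchema().setFields(Arrays.asList(
            new TableFieldSchema().setName("raw").setType("STRING")));

        p.apply(PubsubIO.Read.topic("<topic path>"))   // placeholder topic
         .apply(ParDo.of(new DoFn<String, TableRow>() {
           @Override
           public void processElement(ProcessContext c) {
             // Real code would parse the POS event here
             c.output(new TableRow().set("raw", c.element()));
           }
         }))
         .apply(BigQueryIO.Write
             .to("my-project:retail.pos_events")       // placeholder table
             .withSchema(schema));

        p.run();
      }
    }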

To summarize, Dataflow is the only data processing tool that completely manages the lower-level infrastructure. This removes the need for several API calls for monitoring the load, spinning up and tearing down VMs, aggregating logs, etc., and lets you focus on the logic of the task at hand. The abstractions are very easy to understand and work with, and the Dataflow API also provides a good set of built-in transform operations for tasks such as filtering, joining, grouping, and aggregation. Dataflow integrates really well with all components in the Google Cloud Platform; however, it does not have SDKs in any language besides Java, which is somewhat restrictive.

The post Thoughts on Google Cloud Dataflow appeared first on Pythian - Data Experts Blog.

Categories: DBA Blogs

Reuters: Instructure has filed for IPO later this year

Michael Feldstein - Thu, 2015-08-13 10:29

By Phil Hill

Reuters is on a breaking news roll lately with ed tech. This time it is about Instructure filing for an initial public offering (IPO).

Instructure is planning an initial public offering later this year that could value the education software company at $500 million to $800 million, according to people familiar with the matter.

Instructure, based in Salt Lake City, has hired Morgan Stanley (MS.N) and Goldman Sachs (GS.N) to help prepare for the IPO, which has been filed confidentially, the people said. They requested anonymity because the news of the IPO was not public.

Under the Jumpstart Our Business Startups Act, new companies that generate less than $1 billion in revenue can file for IPOs with the U.S. Securities and Exchange Commission without immediately disclosing details publicly.

Instructure has long stated its plans to eventually IPO, so the main question has been one of timing. Now we know that it is late 2015 (assuming the Reuters story is correct, but they have been quite accurate with similar stories).

Michael and I have written recently about Instructure’s strong performance, including this note about expanding markets and their consistent growth in higher ed, K-12 and potentially corporate learning.

InstructureCon 2015 Growth Slide

Taken together, what we see is a company with a fairly straightforward strategy. Pick a market where the company can introduce a learning platform that is far simpler and more elegant than the status quo, then just deliver and go for happy customers. Don’t expand beyond your core competency, don’t add parallel product lines, don’t over-complicate the product, don’t rely on corporate M&A. Where you have problems, address the gap. Rinse. Repeat.

Instructure has now solidified their dominance in US higher ed (having the most new client wins), they have hit their stride with K-12, and they are just starting with corporate learning. What’s next? I would assume international education markets, where Instructure has already started to make inroads in the UK and a few other locations.

The other pattern we see is that the company focuses on the mainstream from a technology adoption perspective. That doesn’t mean that they don’t want to serve early adopters with Canvas or Bridge, but Instructure more than any other LMS company knows how to say ‘No’. They don’t add features or change designs unless the result will help mainstream adoption – and the mainstream here is primarily instructors. Of course students care, but they don’t choose whether to use an LMS for their course – faculty and teachers do. For education markets, the ability to satisfy early adopters rests heavily on Canvas’s LTI-enabled integrations and acceptance of external application usage; this is in contrast to relying primarily on having all the features in one system.

Combine this news with that of Blackboard being up for sale and changes in Moodle’s approach, and you have some big moves in the LMS market that should have long-term impacts on institutional decision-making. Watch this space for more coverage.

The post Reuters: Instructure has filed for IPO later this year appeared first on e-Literate.

Blackboard Acquires Large Latin American Moodle Provider

Michael Feldstein - Thu, 2015-08-13 10:13

By Michael Feldstein

In my first post-BbWorld blog post, I noted that the international market is hugely important for Blackboard and Moodle is hugely important for their international strategy. Nearly a quarter of the company’s revenue and much of their growth comes from their international business, where they seem to be way ahead of their main North American competition in many countries. Learn has some really large contracts—whole country contracts, in some cases—but Moodle has a lot of smaller contracts. In some countries, you just can’t get traction in the LMS space unless you have a Moodle offering. In my post, I predicted that we would see continuing investments in Moodle, based on what we heard from senior management.

Today, Blackboard announced that they have acquired Nivel Siete, a Colombia-based Moodle hosting and services provider with over 200 customers in Latin America. This follows their acquisition of Remote Learner UK, a company that serviced about 100 UK- and Ireland-based Moodle schools at the time of the acquisition and their acquisition of the X-Ray learning analytics company that currently is focused on Moodle. These are all in the last year. And they are on top of the original acquisition of Moodlerooms and Netspot, two of the biggest Moodle providers around. There are some interesting—and complicated—long-term implications here for the governance and financial structure of the Moodle ecosystem that Phil and I will eventually write about, but for now it’s worth noting that Blackboard is making serious investments in Moodle and international growth.

The post Blackboard Acquires Large Latin American Moodle Provider appeared first on e-Literate.

Oracle Priority Support Infogram for 13-AUG-2015

Oracle Infogram - Thu, 2015-08-13 10:00

RDBMS
12c Temporal Validity Support in SQL Developer Data Modeler, from ThatJeffSmith.
Exalogic
Sneak Peek at What’s Next for Oracle Exalogic Elastic Cloud, from WebLogic Partner Community EMEA.
Data Warehousing
System Statistics About DOP Downgrades, from The Data Warehouse Insider.
Solaris
Solaris Swift using ZFS Storage Appliance, from Jim Kremer’s Blog.
Stateful Packet Inspection, from the Solaris Firewall blog.
Authentication
Weblogic Console and BPM Worklist. Authentication using OpenLDAP, from the AMIS Technology Blog.
RDBMS Optimizer
Tips on SQL Plan Management and Oracle Database In-Memory Part 1, from the Oracle Optimizer blog.
Java
Clash Of Slashes ( / versus \ ), from Brewing tests with CAFE BABE.
Two good postings from The Java Source:
Solving Problems Using the Stream API
About sun.misc.Unsafe
Hyperion
Several patch set announcements from Business Analytics - Proactive Support:
Patch Set Update: Hyperion Essbase 11.1.2.3.508
Patch Set Update: Hyperion Tax Provision 11.1.2.4.100
Patch Set Update: Essbase Analytics Link for HFM 11.1.2.2.500
EPM Patch Set Updates - July 2015
EBS
From the Oracle E-Business Suite Support blog:
Have you used R12: Master Data Fix Diagnostic to Validate Data Related to Purchase Orders and Requisitions?
Identifying Missing Application and Database Tier Patches for EBS 12.2
Receivables: Important system options to review for Autoinvoice
Webcast: Oracle Product Hub Web Services - Setup, Use, and Troubleshooting
Overview of Physical Inventory in Oracle Assets
What's The High Cost Of Not Patching?

From the Oracle E-Business Suite Technology blog:
Identifying Missing App Tier and Database Tier Patches for EBS 12.2

Windows 10 Certified with Oracle E-Business Suite

ED and CBE: Example of higher ed “structural barrier to change” that is out of institutions’ control

Michael Feldstein - Thu, 2015-08-13 09:55

By Phil Hill

There has been a great conversation going on in the comments to my recent post “Universities As Innovators That Have Difficulty Adopting Their Own Changes” on too many relevant issues to summarize (really, go read the ongoing comment thread). They mostly center on the institution and faculty reward system, yet those are not the only sources of structural barriers to change that lead institutions to this “difficulty adopting their own changes”. Increasingly there are outside forces that both encourage change and resist change, and it is important to recognize the impact of the entire higher education ecosystem.

Yesterday Amy Laitinen from New America wrote an excellent article titled “Whatever Happened to the Department’s Competency-Based Education Experiments?” highlighting just such an example.

About this time two years ago, President Obama went on his college affordability bus tour and unveiled his plan to take on the rising costs of higher education in front of thousands of students at SUNY Buffalo. Promoting innovation and competition was a key part of his plan, and President Obama held competency-based education (CBE) up as one of the “innovative new ways to prepare our students for a 21st century economy and maintain a high level of quality without breaking the bank.” The President touted Southern New Hampshire University’s College for America CBE approach. The university “gives course credit based on how well students master the material, not just on how many hours they spend in the classroom,” he explained. “So the idea would be if you’re learning the material faster, you can finish faster, which means you pay less and you save money.” This earned applause from students in the audience as well as from CBE practitioners around the country. [snip]

The problem is that day was nearly two years ago and the CBE experimental sites are not yet off the ground. It’s not because institutions aren’t ready and willing. They are. But the Department of Education has been dragging its feet. It took the Department nearly a year after the President’s announcement to issue a notice describing what the experiments would look like. Perhaps this could have been done more quickly, but CBE is complicated and it’s understandable that the Department wanted to be thorough in its review of the relevant laws and regulations (they turned out much more forward-thinking than I would have imagined). But the notice did go out, schools did apply, and schools were accepted to participate. But the experiment hasn’t started, because schools haven’t received guidance on how to do their experiments.

Amy goes on to describe how schools are repeatedly asking for guidance and how foundations like Lumina and Gates are doing the same, yet the Education Department (ED) has not or will not provide such guidance.

Matt Reed, writing at Inside Higher Ed this morning, asks why the ED has not stepped up to move the program along, offers some possible answers, and solicits input:

  • They’re overwhelmed. They approved the concept of CBE without first thinking through all of the implications for other policies, and now they’re playing catchup. This strikes me as highly likely.
  • They’re focused more on “gainful employment,” for-profit providers, student loan issues, and, until recently, the effort to produce college ratings. With other things on fire, something like CBE could easily get overshadowed. I consider this possibility entirely compatible with the previous one.
  • They’re stuck in a contradiction. At the very same time that they’re trying to encourage experimentation with moving away from the credit hour in the context of CBE, they’re also clamping down on the credit hour in the context of online teaching. It’s possible to do either, but doing both at the same time requires a level of theoretical hair-splitting far beyond what they’re usually called upon to do. My guess is that an initial rush of enthusiasm quickly gave way to dispirited foot-dragging as they realized that the two emphases can’t coexist.
  • Their alien overlords in Area 51, in conjunction with the Illuminati and the Trilateral Commission… (You can fill in the rest. I’m not a fan of this one, but any explanation of federal government behavior on the Internet has to include at least one reference to it. Let’s check that box and move on.)

Rather than add my own commentary or conjecture on the subject, I would prefer to just highlight this situation and note how we need to look beyond just colleges and universities, and even faculty reward systems, to understand the structural barriers to change for higher education.

The post ED and CBE: Example of higher ed “structural barrier to change” that is out of institutions’ control appeared first on e-Literate.

Redstone Live Webcast: Saving Money by Saving Time

WebCenter Team - Thu, 2015-08-13 06:25
You're invited to attend Redstone's live webcast: 'Saving Money by Saving Time'
When: Tuesday, Sept. 1st at 3:00 PM CT
During this live webcast, Redstone will provide an overview of Distributed Index, and we'll show how a satisfied customer is using this game-changing technology. Distributed Index is an Oracle Validated Integration that complements and integrates with WebCenter Content 10g and 11g.
Mark Heindselman, Emerson Process Management's Director, Knowledge Network and Information Services will discuss and demonstrate Emerson's use case. 
At the conclusion of the live demonstration, we'll field questions from the audience. 

Special Offer
All registered attendees will receive a no-cost WebCenter Content system evaluation. This evaluation will predict the potential time savings that can be realized post Distributed Index implementation. 

Featured Speaker
Mark Heindselman, Director, Knowledge Network, Emerson Process Management

Webcast Agenda

  1. Distributed Index Overview
  2. Customer Case Study
  3. Live In-Production Solution Demo
  4. Q & A 
Register today!

ODTUG APEX Gaming Competition 2015

Joel Kallman - Thu, 2015-08-13 02:11
If you're not aware, there is an APEX Gaming Competition which is already underway, and which is sponsored by the Oracle Development Tools User Group (ODTUG).  For those who don't know what ODTUG is, it is an independent user group and community of professionals, with a primary focus on the tools, products, and frameworks to build solutions and applications with the Oracle technology stack.  Although ODTUG is based in the USA, they have members (thousands of them) around the globe.

The purpose of the APEX Gaming Competition is simply to show off what you can do with APEX, and instead of crafting a business solution or transactional application, the goal here is a bit more whimsical and fun.  The solution can be desktop or mobile or both.  Personally, if I had the time, I'd like to write a blackjack simulator and try and improve upon the basic strategy.  I'm not sure that could be classified as a "game", but it would enable me to go to Las Vegas and clean house!

If you're looking to make a name for yourself in the Oracle community, one way to do it is through ODTUG.  And if you're looking to make a name for yourself in the APEX community, one way to stand out is through the APEX Gaming Competition.  Just ask Robert Schaefer from Köln, Germany.  Robert won the APEX Theming Competition in 2014, and now everyone in the APEX Community knows who Robert is!  I've actually had the good fortune of meeting Robert in person - twice!

Yesterday I listened to the APEX Talkshow podcast with Jürgen Schuster and Shakeeb Rahman (Jürgen is a luminary in the APEX community and Shakeeb is on the Oracle APEX development team, he is the creator of the Universal Theme).  And in this podcast, I was reminded how Shakeeb's first introduction to Oracle was...by winning a competition, when he was a student!  You simply never know what the future holds.  So - whether you're a student or a professional, whether you're in Ireland or the Ivory Coast, this is an opportunity for you to shine in front of this wonderful global APEX Community.  Submissions close in 2 months, so hurry!  Go to http://competition.odtug.com

OTN Tour of Latin America 2015 : PEOUG, Peru – Day 1

Tim Hall - Wed, 2015-08-12 19:28

A quick taxi ride got us to the conference hotel really quickly, so we were nice and early for the PEOUG event.

After the introductions by Miguel Palacios, it was time for the first sessions of the event. Of the English speakers, first up were Debra Lilley and Dana Singleterry. Debra had some problems with her laptop, so she did her presentation using mine and all went well. Dana did his session over the net, so I sent a few Tweets to let him know how things looked and sounded from our end. I figured a bit of feedback would help reassure him there weren’t any technical issues.

My first session of the day came next. I had a good sized audience and some of the people were brave enough to ask questions at the end. :) I had some in English and some in Spanish using the translation service to help me. :)

Debra fixed her laptop by the time her next session started, but her clicker died, so she borrowed mine. Dana’s second session was at the same time as Debra’s, so I flitted between the two, sending a few feedback Tweets to Dana about his session again.

After that session, Ronald, Debra, Pelinio, Enrique and myself ducked out to get some lunch in a place down the street.

After lunch, Ronald and I each had back-to-back sessions. I did my Cloud Database and Analytic Functions talks. I feel like they went well; I hope the crowd thought so too. :)

There was one more set of talks, all from Spanish speakers, including a very full web session by Edelweiss from Uruguay. After that we got together for the closing session and some prize draws. I didn’t understand what was being said, but everyone seemed really happy and in good spirits, so I think the whole day was well received. Certainly all the feedback we got was very positive!

Big thanks to Miguel, Enrique and everyone at PEOUG for inviting us and making us feel welcome. Thanks to the attendees for coming to the sessions and making us feel special by asking for photos. :) Also, big thanks to the ACE Program for making this possible for us!

So that marks the end of this year's OTN Tour of Latin America for me. Sorry to the countries in the northern leg. I hope I will be able to visit you folks soon!

Debra and I are going to visit Pikachu Machu Picchu over the next couple of days, then it’s back home to normal life for a while. :)

I’ll write a summary post to close off this little adventure when I get home. Once again, thank you all!

Cheers

Tim…


Cloudera certifies InfoCaptor for Big Data analytics on Hadoop

Nilesh Jethwa - Wed, 2015-08-12 13:12


Rudrasoft, the software company that specializes in data analytics dashboard solutions, announced today that it has released an updated version of its popular InfoCaptor software, which includes integration with Cloudera Enterprise. The integration takes advantage of Impala and Apache Hive for analytics.

“Our clients are increasingly looking to adopt Hadoop for their data storage and analytics requirements, and their common concern is the lack of an economical web-based platform that works with their traditional data warehouses, RDBMS and with Cloudera Enterprise.”

Cloudera-certified InfoCaptor adds native Impala functionality within Visualizer, so users can leverage date/time functions for date-hierarchy visualizations and time-series plots, and use all the advanced hierarchical visualizations natively on Cloudera Enterprise.

“Impala is the fastest SQL engine on Hadoop, and InfoCaptor can render millions of data points into beautiful visualizations in just a blink of an eye,” said Nilesh Jethwa, founder. “This is a great promise for the big data world and affordable analytics with sub-second response time. Finally, CEOs and CIOs across industries can truly dream of cultivating a data-driven culture and making it a reality.”

“Cloudera welcomes InfoCaptor as a certified partner for data analytics and visualization. InfoCaptor delivers self-service BI and analytics to data analysts and business users in enterprise organizations, enabling more users to mine and search for data that uncovers valuable business insights and maximizes value from an enterprise data hub,” said Tim Stevens, vice president of Business and Corporate Development at Cloudera.

InfoCaptor is enterprise business analytics and dashboard software meant for:

  • Data Discovery
  • Visualizations
  • Adhoc Reports
  • Dashboards

InfoCaptor brings the power of d3js.org visualizations and the simplicity of Microsoft Excel and puts it in the hands of a non-technical user. This same user can build Circle Pack, Chord, Cluster and Treemap/Sunburst visualizations on top of Cloudera Enterprise using simple drag and drop operations.
InfoCaptor can connect with data from virtually any source in the world, from Microsoft Excel and Microsoft Access to SQL databases such as Oracle, SQL Server, MySQL, SQLite, PostgreSQL, IBM DB2 and now Impala and Hive. It supports both JDBC and ODBC protocols.

InfoCaptor also serves as a powerful visualization tool: it includes over 30 vector-based map visualizations, close to 40 types of chart visualizations, over 100 flowchart icons and other HTML widgets. InfoCaptor also provides a free-style dashboard editor that allows quick dashboard mockups and prototyping. With this ability, users can place widgets directly anywhere on the page and use flowchart-style icons and connectors for annotation and storytelling.

Users can download the application and install it within their firewall.
Alternatively, a cloud offering is also available at https://my.infocaptor.com.

 

InfoCaptor is very modestly priced analytics and visualization software:

  • Personal Dashboard License: $149/year
  • Server license: starts at $599/year
  • Cloud-based subscription: starts at $29/user/month

Visit http://www.infocaptor.com or email bigdata(at)infocaptor(dot)com for a demo and price list.

Oracle SOA/BPM 12c: Propagation of Flow Instance Title and Instance Abortion

Jan Kettenis - Wed, 2015-08-12 12:23
Recently I wrote this posting about an improvement for setting the title of a flow instance in Oracle BPEL and BPMN 12c. In this posting I will discuss two related improvements that come with SOA/BPM Suite 12c: flow instance abortion is automatically propagated from one instance to the other, as is the flow instance title. Or, more precisely, for every child instance the initiating instance is shown together with its name.

Since 12c the notion of composite instance is superseded by that of flow instance, which refers to the complete chain of calls starting from one main instance to any other composite, and further. Every flow has a unique flowId which is automatically propagated from one instance to the other.

Propagation of Flow Instance Title

This propagation does not only apply to the flowId, but also to the flowInstanceTitle, meaning that if you set the flowInstanceTitle for the main instance, all called composites automatically get the same title (a hedged snippet follows the list of tested combinations below).

So if the flowInstanceTitle is set on the main instance:

Then you will automatically see it for every child instance as well:

Trust but verify is my motto, so I tried it for a couple of combinations of composite types calling each other, including:
  • BPM calling BPEL calling another BPEL
  • BPM initiating another composite with a Mediator and BPEL via an Event
  • Mediator calling BPEL
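
As an illustration only (the exact XPath extension function name is my assumption based on the 12c documentation, so verify it against your version), setting the title from a BPEL Assign might look something like this:

    <!-- Hypothetical sketch: set the flow instance title in an Assign.
         The ora:setFlowInstanceTitle() function name is assumed from the
         12c docs; the variable and XPath below are placeholders. -->
    <assign name="SetFlowInstanceTitle">
      <copy>
        <from>ora:setFlowInstanceTitle(concat('Order ', $inputVariable.payload/ns:orderId))</from>
        <to>$titleDummy</to>
      </copy>
    </assign>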

Flow Instance Abortion

When you abort the instance of the parent, all child instances are aborted as well.

In the following flow trace you see a main BPM process that kicks off:
  1. A (fire&forget) BPEL process
  2. Throws an Event that is picked up by a Mediator
  3. Calls another BPM process
  4. Schedules a human task

In turn, the BPEL process in step 1 kicks off another BPEL process (request/response). Finally, the BPM process in step 3 also has a human task:

Once the instance of the main process is aborted, all child instances are automatically aborted as well, including all Human Tasks and composites that are started indirectly.

The flip side of the coin is that you will not be able to abort any individual child instance. When you go to a child composite, select a particular child instance and abort it, the whole flow will be aborted. That is different from how it worked in 11g, and I can imagine this will not always meet the requirements you have.
Another thing that I find strange is that the Mediator that is started by means of an event is aborted even when the consistency level is set to 'guaranteed' (which means that event delivery happens in a local instead of a global transaction). Even though an instance is aborted, you may have a requirement to process that event.
But all in all, it is a lot easier to get rid of a chain of process instances than it was with 11g!!

Make Your Nominations: Oracle Database Developer Choice Awards

OTN TechBlog - Wed, 2015-08-12 09:57

I Did.  You Should. Here's the Nomination Form.  Here's Why:

The Oracle Database Developer Choice Awards celebrate and recognize technical expertise and contributions in the Oracle Database community. We all have an expert in our lives. Here's your chance to nominate them for an award. Who is that expert? Sometimes we're inspired by a technical presentation at an Oracle User Group event or SIG; but more often it's that person who is on the discussion space with an answer or suggestion just when you need it. Either way, it's easier to develop great solutions when we "know" someone in our community...and they are rooting for us, and even helping us along.

Here's my nomination:  John Stegeman.  And he will be embarrassed and might send me a howler for nominating him. 

John Stegeman is a regular in the Oracle Database Community. When he's not answering questions in the Oracle Database General Questions space, he's commenting on social discussions around the Watercooler. He's a Grand Titan with 243,225 points on the Oracle Community platform and an Oracle ACE. He might not be digging in coding an app today, but then again he just might be because he's clearly got SQL chops. And, while he may not realize it, he's been a tremendous help to me as a community manager because his activities help me keep a pulse on what's going on in the community :)

Here's what you need to know:

  • Nominations are open through August 15.
  • A panel of judges, composed of Oracle ACEs and Oracle employees, will then choose a set of finalists.
  • The worldwide Oracle technologist community votes on the finalists from September 15 through October 15.
  • The winners of the Oracle Database Developer Choice Awards will be announced at the YesSQL! Celebration during Oracle OpenWorld 2015.

Get your nominations in ASAP!

Ciao for now,

LKR


OTN Tour of Latin America 2015 : PEOUG, Peru – Day -1

Tim Hall - Wed, 2015-08-12 05:04

After the CLOUG event, Francisco drove us to the airport, where Kerry, Ronald, Debra and I parked ourselves in the lounge for a while. Lots of eating then ensued! Kerry was flying back home, but the rest of us were on our way to Lima, Peru, for the PEOUG event. :)

The flight across to Lima was pretty straightforward, taking about 4 hours if you include the time sitting and waiting to take off. I think the flight time was about 3 hours and 30 mins. We arrived at the airport at about 02:00 and we were all pretty beat up. It was an effort to even speak, which if you know me is a rather extreme state. :)

I had a complete brain fade and forgot we were being picked up by Enrique Orbegozo, but fortunately he caught us before we disappeared onto the shuttle, so it ended OK. I’m so sorry Enrique! :)

We arrived at the hotel at about 03:00. I can’t speak for the others, but I was feeling like the living dead. I got to my room and I don’t remember anything else until the morning! :)

Debra has Hilton Honors status, so I got signed into the lounge for the day, which meant free food. :) We had a lazy day. Apart from a 10 minute walk down to the coast and back, it was a hotel day, trying to recharge the batteries. Some food, sitting in the pool and sitting in the lounge with our laptops, trying to catch up with the world.

This morning we are off to the PEOUG event. The last event of the southern leg of the OTN Tour of Latin America 2015. I’ve got three presentations to do, plus some backups in case speakers don’t show. :)

Cheers

Tim…


Applied July Patch Sets To Test Databases

Bobby Durrett's DBA Blog - Tue, 2015-08-11 16:09

I applied the current July patch sets to a 11.2 and a 12.1 test database.  Now I have a 11.2.0.4.7 and a 12.1.0.2.4 test database.  It is helpful to have test databases that are on the most current patch sets and releases.  If I see unexpected behavior on some other database I can try the same thing on the patched test databases to see if some patch changed the behavior to what I expect.  Also, our production databases are all on 11.2.0.4 or earlier releases so I can check whether the new fully patched 12.1 release has different behavior than our older systems.

Here are the patch numbers:

6880880 – current version of opatch

20760982 – 11.2.0.4.7

20831110 – 12.1.0.2.4

My test environments are on x86-64 Linux.
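
For reference, the mechanics look roughly like this. A hedged sketch assuming default locations and illustrative zip/patch directory names; always follow the README shipped with each patch:

    # Refresh OPatch first (patch 6880880); the zip name is illustrative
    cd $ORACLE_HOME
    unzip -o /tmp/p6880880_112000_Linux-x86-64.zip

    # Apply the PSU with the database and listener down (example: 11.2.0.4.7)
    cd /tmp/20760982
    $ORACLE_HOME/OPatch/opatch apply

    # Post-install SQL: catbundle on 11.2, datapatch on 12.1
    sqlplus / as sysdba @?/rdbms/admin/catbundle.sql psu apply   # 11.2 only
    $ORACLE_HOME/OPatch/datapatch -verbose                       # 12.1 only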

– Bobby

Categories: DBA Blogs

Biohacking, Here Come the Cyborgs

Oracle AppsLab - Tue, 2015-08-11 11:21

For me, 2015 has been the year of the quantified self.

I’ve been tracking my activity using various wearables: Nike+ Fuelband, Basis Peak, Jawbone UP24, Fitbit Surge, and currently, Garmin Vivosmart. I just set up Automatic to track my driving; check out Ben’s review for details. I couldn’t attend QS15, but luckily, Thao (@thaobnguyen) and Ben went and provided a complete download.

And, naturally, I’m fascinated by biohacking because, at its core, it’s the same idea, i.e. how to improve/modify the body to do more, better, faster.


Professor Kevin Warwick of the University of Reading

Ever since I read about RFID chip implanting in the early 00s, I've been curiously observing from the fringe. This post on the Verge today included a short video about biohacking that was well worth 13 and a half minutes.

If you like that, check out the long-form piece, Cyborg America: inside the strange new world of basement body hackers.

This stuff is fascinating to me. People like Kevin Warwick and Steve Mann have modified themselves for the better, but I’m guessing the future of biohacking lies in healthcare and military applications, places where there’s big money to be made.

My job is to look ahead, and I love doing that. At some point during this year, Tony asked me what the future held; what were my thoughts on the next big things in technology.

I think the human body is the next frontier for technology. It’s an electrical source that could solve the modern battery woes we all have; it’s an enormous source for data collection, and you can’t forget it in a cab or on a plane. At some point, because we’ll be so dependent on it, technology will become parasitic.

And I for one, welcome the cyborg overlords.

Find the comments.

OTN Tour of Latin America 2015 : CLOUG, Chile – Day 1

Tim Hall - Tue, 2015-08-11 09:27

The morning was a little confusing. I got up and went to breakfast, but there was no Debra. Once I had finished I got the front desk to call her and found out her clock was an hour behind. Chile has changed its timezone to match Brazil, but some Apple devices don’t seem to realise this, even if they are set to auto-update. One of those devices being Debra’s phone. When we asked at the hotel, they said it’s been a problem for a number of people. :(

Francisco drove us to the venue and we moved straight into the auditorium. After an introduction by Francisco, it was time for the first session. It was a three track event, so I’m just going to talk about the sessions I was in. :)

  • Kerry had a different version of the agenda, which had him on at a later time, so he hadn’t arrived by the time his session was due to start. I was originally down as the second session, so we switched and I went first with my “Pluggable Databases : What they will break and why you should use them anyway!”. Being in an auditorium is always hard unless it is full, as people spread out and you feel like you are presenting to empty chairs. :)
  • Next up was Kerry Osborne, with his “In-Memory In Action” session. I had to duck out of this early to get across to my next session, which was on the other side of the building.
  • My next session was “It’s raining data! Oracle Databases in the Cloud”. This was in a much smaller room, so it felt really full and much more personal. As a result, the audience interaction felt a lot better. I spent quite a bit of time talking to people after the session, which is my favourite bit of this conference thing.
  • I got over to see the tail end of Ronald Bradford‘s session on “Testing and verifying your MySQL backup strategy”. I’ve got a couple of things I need to check in my own MySQL backups now. :)
  • Next up was Kyle Hailey speaking about “SQL Performance Tuning”. Kyle has a very visual approach, which works for me!
  • After lunch it was back to me for “Oracle Database Consolidation : It’s not all about Oracle database 12c!”
  • Next up was Kyle Hailey with “Database performance tuning”, which focussed on using ASH to identify problems and was once again, very visual.
  • The final person up was Debra, with “Do Oracle Cloud Applications Add Up?”. The answer is yes, they do add up, to 42!

After the final session, we hung around for a prize giving and a quick photo opportunity, then had to say our goodbyes and go straight off to the airport to get our flight to Lima.

Thanks to Francisco and the folks at CLOUG for inviting me, as well as all the attendees that came to my sessions and spoke to me during the day. I love speaking directly to people about the technology, so when people come to ask questions I’m in my element. :) Big thanks to OTN and the ACE Program for helping to make this happen for me.

Cheers

Tim…


TekTalk Webinar: Breakthrough in Enterprise-Wide Contract Lifecycle Management

WebCenter Team - Tue, 2015-08-11 06:33


Webinar: Breakthrough in Contract Lifecycle Management
Thursday, August 20, 2015 at 1 PM EST / 10 AM PST


Contracts rule B2B relationships. Whether you’re a growing mid-market company or a large-scale global organization, you need an effective system to manage surges in contract volumes and ensure accuracy in reporting.

TekStream and Oracle would like to invite you to a webinar on an exciting new solution for Contract Lifecycle Management (CLM).

This solution provides organizations with a consolidated and secure platform to logically ingest and organize contracts and supporting documents. It offers total contract lifecycle management with intuitive workflow processing as well as native integration to many existing ERP systems. With this new solution, contracts and other critical documents will no longer be locked in enterprise systems; the entire enterprise can gain seamless access from one centralized repository.

The webinar is scheduled for Thursday, August 20th at 1 PM EST/10 AM PST.

Solution Summary:

TekStream’s Contract Lifecycle Management (CLM) software is built on Oracle’s industry-leading document management system, WebCenter Content, and is designed to seamlessly integrate with enterprise applications like JD Edwards, PeopleSoft and Oracle’s E-Business Suite (EBS). Combining Oracle’s enterprise-level applications with TekStream’s deep understanding of managing essential business information delivers a contract management tool powerful enough to facilitate even the most complex processes. TekStream’s solution tracks and manages all aspects of your contract work streams, from creation and approval to completion and expiration. Companies can rely on TekStream’s CLM to ensure compliance and close deals faster.
Join us to understand how our innovative new solution can address the cost and complexity of Contract Lifecycle Management and provide the following benefits:

Solution Benefits:

  1. Centralized repository for all in-process and executed contracts.
  2. Increase efficiency through better control of the contract process.
  3. Support for “Evergreen” contracts helps to improve contract renewal rates.
  4. Improve compliance with regulations and standards by providing clear and concise reporting of procedures and controls.

For more information about Oracle Documents Cloud Service, please contact info@tekstream.com or call 844-TEK-STRM.



SQL Plan Management Choices

Dominic Brooks - Tue, 2015-08-11 03:02

My thoughts on SQL plan management decision points:

[Flowchart: SQL plan management decision points]

SPM SQL Patches are also available, primarily to avoid a specific problem rather than to enforce a particular plan, and are not covered in the above flowchart.
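
A common end point of those decisions is loading a known-good plan as a SQL plan baseline. A minimal sketch, with placeholder SQL_ID and plan hash value:

    -- Hypothetical example: pin a plan from the cursor cache as a baseline.
    DECLARE
      l_plans PLS_INTEGER;
    BEGIN
      l_plans := DBMS_SPM.LOAD_PLANS_FROM_CURSOR_CACHE(
                   sql_id          => '7b2twsn8vgfsc',   -- placeholder
                   plan_hash_value => 1234567890);       -- placeholder
      DBMS_OUTPUT.PUT_LINE(l_plans || ' plan(s) loaded');
    END;
    /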