Feed aggregator

Embanet and 2U: More financial insight into Online Service Providers

Michael Feldstein - Sat, 2014-03-29 11:24

While I have written recently about UF Online and 2U, there is actually very little insight into the operations and finances of the market segment for Online Service Providers (OSP, also known as School-as-a-Service or Online Program Management). Thanks to 2U going public yesterday and the Gainesville Sun doing investigative work on UF Online, we have more information on one of the highest-growth segments for educational technology and online learning.

2U’s IPO

2U went public yesterday, initially offered at $13.00 per share and closing the day at $13.98 (a 7.5% gain). The following is not intended to be a detailed stock market evaluation – just the basics to present the general scale of the company as insight into the market. While there is not a direct comparison, this is a much better IPO than the most recent ed tech offering, Chegg (down 2.7% on its first day and down 26% to date). Based on 2U’s first day of trading and the IPO filing:

  • 2U’s market valuation is $547 million, and the company raised $120 million from the IPO;
  • 2U’s annual revenue for 2013 was $83.1 million with $28.0 million in net losses, representing a revenue growth of 49% per year;
  • 69% of this revenue ($57 million) came from one client, USC, with two programs – a master’s in education (Rossier Online) and a master’s in social work;
  • Across all 9 customers, 2U makes $10,000 – $15,000 in revenue per student per year;
  • Across all 9 customers, 2U makes an average of $10 million in revenue per customer per year;
  • Across all 9 customers, 2U’s customers make an average of $10 million in net revenue per year; and
  • Across all 9 customers, 2U’s customers are charging $17,000 – $45,000 per student per year in tuition.

Pearson Embanet’s Contract with UF Online

Meanwhile, the Gainesville Sun has been doing some investigative work on the University of Florida Online (UF Online) contract with Pearson Embanet. Embanet is the largest OSP in the market and was purchased by Pearson for $650 million in 2012. From yesterday’s article in the Sun we get some specific information on the UF Online contract.

The University of Florida will pay Pearson Embanet an estimated $186 million over the life of its 11-year contract — a combination of direct payments and a share of tuition revenue — to help launch and manage the state’s first fully online, four-year degree program.

Initially, University of Florida officials withheld the financial terms of the contract as “trade secrets”, but the Sun was persistent and found a presentation with similar information, eventually leading UF to provide the contract with most redactions removed.

According to the article and its interview with Associate Provost Andy McDonough (who took over the executive director position at UF Online when the first one resigned after just two and a half months), Pearson Embanet will be paid $9.5 million over the first five years to help with startup costs. After this point, Pearson Embanet’s pay will come from revenue sharing (similar to 2U and most OSP contracts).

Gov. Rick Scott signed a bill last year tapping UF to create an online university that would offer a full, four-year degree program at 75 percent of the tuition that residential students pay. The Legislature gave UF $35 million in startup funds for the first five years, and also gave the university six months to get the program up and running.

The program started in January 2014 with 583 transfer students, with the first freshmen expected in September 2014. What we don’t know about the program startup is how much Pearson Embanet will invest in the program. Typically an OSP loses money for the first 3 – 5 years of program startup ($9.5 million will not cover costs), which is one of the rationales for long-term contracts of 10 years or more. The model is that the provider loses money up front (see 2U’s losses for comparison) and makes a profit on the back end of the contract. For UF Online, the state legislature plans to stop subsidies by 2019, assuming the program will be self-sustaining.

For the fall term (first term not purely based on transfer students), UF Online is planning on 1,000 students, and so far 91 have signed up. I do not know if this is on target or not.

Under its new contract with UF, Pearson is responsible for creating “proprietary digital content,” providing admission and enrollment support, generating leads and signing new students, tracking retention rates, engaging in joint research and development, and providing on-demand student support.

Note that this set of services is not as comprehensive as what 2U provides. For example, UF Online will use the Canvas LMS from Instructure, like the rest of the University of Florida, whereas 2U provides its own learning platform built on top of Moodle and Adobe Connect.

After 2018, UF will also stop paying Pearson directly and Pearson’s income will come entirely from its share of tuition revenue and any fees it charges. UF projects it will have over 13,000 students in UF Online generating $43 million in tuition revenue by 2019 — of which Pearson will get close to $19 million.

By 2024, with 24,000 students anticipated, revenues generated will be about $76 million, with $28 million going to Pearson, McCullough said.

Based on the 2024 projections, UF Online expects approximately $3,167 in revenue per student ($76 million ÷ 24,000 students), of which Pearson Embanet would take roughly $1,167 per student ($28 million ÷ 24,000).

Notes

Below are some additional notes on the 2U and Pearson Embanet examples.

  • It is important to recognize the difference in target markets here. 2U currently targets high-tuition master’s programs, and the UF Online example is an undergraduate program with the goal of charging students 75% of face-to-face UF costs.
  • While the total contract values seem high, the argument for this model is that without the massive investment and startup capability of OSP companies, the school either would not be able to create the online program by itself or at least would not have been able to do so as quickly.
  • Despite the differences in market and services, the gap in revenue per student between 2U and Pearson Embanet is still remarkable: $10k – $15k for 2U vs. roughly $1.2k for Pearson Embanet.

Full disclosure: Pearson is a client of MindWires Consulting but not for OSP. All information here is from public sources.

The post Embanet and 2U: More financial insight into Online Service Providers appeared first on e-Literate.

OGh APEX Conference

Denes Kubicek - Sat, 2014-03-29 04:32
Last week I was presenting at OGh (ORACLE GEBRUIKERSCLUB HOLLAND) APEX World. My topic was "APEX 4.2 Application Deployment and Application Management". I can only recommend this conference to all the APEX users in Europe. This is definitely the biggest APEX conference on our continent. If you don't travel to ODTUG, then this is something you shouldn't miss. They have an international track where you can listen to well-known APEX developers and book authors. This time the speakers included Dan McGhan, Martin Giffy D'Souza, Joel Kallman, Dietmar Aust, Roel Hartman and Peter Raganitsch. For the tracks in Dutch, the presenters are also willing to switch to English at any time if there are visitors who do not understand Dutch. Altogether, the Dutch are open-minded and I admire their sense for organizing such events - they definitely know how to do it.

Categories: Development

Java Cookbook 3rd Edition

Surachart Opun - Sat, 2014-03-29 01:10
Java is a programming language and computing platform used by lots of applications and websites. The latest Java release, Java 8, was announced by Oracle on March 25, 2014. The book I want to mention - Java Cookbook by Ian F. Darwin - covers Java 8.
It isn't a book for someone who is completely new to Java (readers should already know a bit of Java syntax), but it is a book that will help readers learn from real-world examples. People who work in Java development can use it as a reference or pick examples to apply to their own work. In the book, readers will find 24 chapters - "Getting Started: Compiling, Running, and Debugging", "Interacting with the Environment", "Strings and Things", "Pattern Matching with Regular Expressions", "Numbers", "Dates and Times - New API", "Structuring Data with Java", "Object-Oriented Techniques", "Functional Programming Techniques: Functional Interfaces, Streams, Spliterators, Parallel Collections", "Input and Output", "Directory and Filesystem Operations", "Media: Graphics, Audio, Video", "Graphical User Interfaces", "Internationalization and Localization", "Network Clients", "Server-Side Java", "Java and Electronic Mail", "Database Access", "Processing JSON Data", "Processing XML", "Packages and Packaging", "Threaded Java", "Reflection, or “A Class Named Class”", "Using Java with Other Languages".

Each example is useful for learning and practicing Java programming. Anyone who knows a bit about Java programming can read and use it. Still, I suggest readers get comfortable with the basics of Java before starting this book.

Written By: Surachart Opun http://surachartopun.com
Categories: DBA Blogs

New Version Of XPLAN_ASH Utility

Randolf Geist - Fri, 2014-03-28 17:26
A minor update 4.01 to the XPLAN_ASH utility is available for download.

As usual the latest version can be downloaded here.

These are the notes from the change log:

- More info for RAC Cross Instance Parallel Execution: Many sections now show a GLOBAL aggregate info in addition to instance-specific data

- The Parallel Execution Server Set detection and ASSUMED_DEGREE info now makes use of the undocumented PX_STEP_ID and PX_STEPS_ARG info (bit mask part of the PX_FLAGS column) on 11.2.0.2+

- Since version 4.0 already added the PX *MAX* DOP to the "SQL statement execution ASH Summary" (from 11.2.0.2 on, based on the new PX_FLAGS column of ASH), it makes sense to also add a PX *MIN* DOP to the summary, so you can see at a glance whether different DOPs were used or not

- The "Active DOPs" column in the "Activity Timeline based on ASH" was extended/modified: The number in parantheses is no longer the simple count of samples but the Average Active Sessions (AAS) per DFO / bucket.

From 11.2.0.2 it now shows also the DOP of the DFO in brackets, so the
output could look now like this:

1[16] (14.5)

which means DFO 1 at a DOP of 16 had an AAS value of 14.5 for this time
bucket. If there are multiple DFOs active in the time bucket, they are
separated by commas:

1[16] (3.5),2[4] (1.5)

which means DFO 1 at a DOP of 16 had an AAS value of 3.5 and DFO 2 at a
DOP of 4 had an AAS value of 1.5 for this time bucket

A new version 4.1 is already underway that includes new 12c features, so stay tuned.

another month almost gone ... another presentation done at NEOOUG

Grumpy old DBA - Fri, 2014-03-28 17:07
Geez, this month really just flew by. It started with my presentation at Hotsos 2014 - pretty well attended, and I'm waiting for the final evaluation information.  Then work work work ...

This Friday ( today 3/28/2014 ) I did my Hotsos presentation again but this time at my local user group.  I added "just three more slides" to try to give some additional information on PGA and program connections to people aka developers coming in cold to this area. 

Somehow the three additional slides caused me to take an additional twenty minutes to deliver this information.  Luckily I was not given the hook ( sometimes it helps to be president ).  It was kind of funny: even though I had just delivered the presentation at the beginning of the month, I found myself looking at some of the slides that came up thinking "oh that is out of order here ( no it was not )" or even worse "oh geez what am I trying to connect with on this slide" ... yikes!

Registrations are starting to roll in for GLOC 2014, but the month of April is the critical one.  We would like to increase attendance by 33 percent ... time will tell!
Categories: DBA Blogs

ADF 11g PS6 Table Pagination and Displaying Selected Row Issue - Solution

Andrejus Baranovski - Fri, 2014-03-28 15:33
A while ago, I wrote a blog post about a new feature in ADF 11g PS6 (11.1.1.7) - table pagination support. There is an issue when we want to open a specific row and display it automatically in the table - the required table page for the selected row is not opened correctly. However, a blog reader suggested a fix received from Oracle Support. The reader was kind enough to post a comment with the suggested fix; you can read it here - JDev/ADF sample - ADF 11g PS6 Table Pagination and Displaying Selected Row Issue. I decided to test this fix myself and provide an updated sample application. The fix is to take the range start from the iterator and set it as the first property of the table with pagination. Actually, this fix does the job, but not completely perfectly. The current row is displayed only if Range Size for the iterator is set to 25 - probably there is some hard coding somewhere. OK, but at least it works.

Download sample application - TablePaginationApp_v5.zip. This application contains two fragments; the second fragment with the table is opened from the first, where the current row is selected. In the first fragment, we call setPropertyListener for the navigation button and save the required information in pageFlowScope (to be used in the second fragment):


This information is the range start - once we move the current row in the row set, the range start changes as well. We are going to use the range start to set the first property of the table; in this way, we can force the table to display the page containing the selected row:


Here you can see how the table first property is set - we are using the range start saved in pageFlowScope in the first fragment. This forces the ADF table with pagination to display the required page of rows:
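As an illustration only - this is not the author's actual code from the sample application - a managed-bean sketch of the idea might look like the following. The iterator name and the pageFlowScope key are assumptions for the example.

import oracle.adf.model.BindingContext;
import oracle.adf.model.binding.DCBindingContainer;
import oracle.adf.model.binding.DCIteratorBinding;
import oracle.adf.share.ADFContext;

public class PaginationHelperBean {

    // Capture the iterator's range start so the next fragment can position the table.
    // "EmployeesView1Iterator" and the "rangeStart" key are illustrative names only.
    public void rememberRangeStart() {
        DCBindingContainer bindings =
            (DCBindingContainer) BindingContext.getCurrent().getCurrentBindingsEntry();
        DCIteratorBinding iter = bindings.findIteratorBinding("EmployeesView1Iterator");

        // The range start moves together with the current row, so saving it here lets
        // the table in the second fragment open on the page containing that row.
        int rangeStart = iter.getRowSetIterator().getRangeStart();
        ADFContext.getCurrent().getPageFlowScope().put("rangeStart", rangeStart);
    }
}

The table in the second fragment would then pick the saved value up through its first attribute, for example first="#{pageFlowScope.rangeStart}", which matches the approach described above.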


Let's see how this works. Select a row belonging to the first range (I have configured Range Size to be 25):


The table with pagination is loaded and the selected row is displayed on the first page - this is correct:


Navigate to some row from the second range (this should be after the first 25 rows):


The selected row is displayed on the second page, as expected. There is one small issue - the selected row is displayed at the bottom, while it should be somewhere in the middle. Well, this is another issue related to ADF table pagination:


If you navigate back to the first page and then again to the second page, the selected row will be displayed correctly in the middle:


In my opinion, ADF table pagination is not yet a thoroughly tested and stable feature. Perhaps we should wait for improvements in the next release before using it in complex scenarios.

One Queue to Rule them All

Antony Reynolds - Fri, 2014-03-28 15:16
Using a Single Queue for Multiple Message Types with SOA Suite

Problem Statement

You use a single JMS queue for sending multiple message types / service requests.  You use a single JMS queue for receiving multiple message types / service requests.  You have multiple SOA JMS Adapter interfaces for reading and writing these queues.  In a composite it is random which interface gets a message from the JMS queue.  It is not a problem having multiple adapter instances writing to a single queue; the problem is only with having multiple readers, because each reader gets the first message on the queue.

Background

The JMS Adapter is unaware of who receives the messages.  Each adapter instance just takes the message from the queue and delivers it to its own configured interface, one interface per adapter instance.  The SOA infrastructure is then responsible for routing that message, usually via a database table and an in memory notification message, to a component within a composite.  Each message will create a new composite but the BPEL engine and Mediator engine will attempt to match callback messages to the appropriate Mediator or BPEL instance.
Note that message type, including XML document type, has nothing to do with the preceding statements.

The net result is that if you have a sequence of two receives from the same queue using different adapters then the messages will be split equally between the two adapters, meaning that half the time the wrong adapter will receive the message.  This blog entry looks at how to resolve this issue.

Note that the same problem occurs whenever you have more than 1 adapter listening to the same queue, whether they are in the same composite or different composites.  The solution in this blog entry is also relevant to this use case.

Solutions

In order to deliver the messages to the correct interface we need to identify the interface they should be delivered to.  This can be done by using JMS properties.  For example the JMSType property can be used to identify the type of the message.  A message selector can be added to the JMS inbound adapter that will cause the adapter to filter out messages intended for other interfaces.  For example if we need to call three services that are implemented in a single application:
  • Service 1 receives messages on the single outbound queue from SOA, it send responses back on the single inbound queue.
  • Similarly Service 2 and Service 3 also receive messages on the single outbound queue from SOA, they send responses back on the single inbound queue.
First we need to ensure the messages are delivered to the correct adapter instance.  This is achieved as follows:
  • The inbound JMS adapter is configured with a JMS message selector.  The message selector might be "JMSType='Service1'" for responses from Service 1.  Similarly the selector would be "JMSType='Service2'" for the adapter waiting on a response from Service 2.  The message selector ensures that each adapter instance will retrieve the first message from the queue that matches its selector.
  • The sending service needs to set the JMS property (JMSType in our example) that is used in the message selector; a minimal sketch of both sides follows this list.
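Purely as an illustration of the two halves described above - this is plain JMS, not code from the sample project later in this post, and the JNDI names are placeholder assumptions - the sender stamps JMSType on the outgoing message and the consumer is created with a matching message selector:

import javax.jms.*;
import javax.naming.InitialContext;

public class JmsTypeSelectorSketch {

    public static void main(String[] args) throws Exception {
        InitialContext ctx = new InitialContext();
        // Placeholder JNDI names - substitute the connection factory and queue of your environment.
        ConnectionFactory cf = (ConnectionFactory) ctx.lookup("jms/SomeConnectionFactory");
        Queue queue = (Queue) ctx.lookup("jms/SharedQueue");

        Connection connection = cf.createConnection();
        Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
        connection.start();

        // Sending side: set the JMS header that the selector will filter on.
        MessageProducer producer = session.createProducer(queue);
        TextMessage request = session.createTextMessage("<payload/>");
        request.setJMSType("Service1");
        producer.send(request);

        // Receiving side: only messages whose JMSType matches are delivered here,
        // which is what the message selector on the SOA JMS Adapter achieves.
        MessageConsumer consumer = session.createConsumer(queue, "JMSType='Service1'");
        Message received = consumer.receive(5000);   // wait up to 5 seconds
        System.out.println("Received: " + (received == null ? "nothing" : received.getJMSType()));

        connection.close();
    }
}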
Now our messages are being delivered to the correct interface we need to make sure that they get delivered to the correct Mediator or BPEL instance.  We do this with correlation.  There are several correlation options:
  1. We can do manual correlation with a correlation set, identifying parts of the outbound message that uniquely identify our instance and matching them with parts of the inbound message to make the correlation.
  2. We can use a Request-Reply JMS adapter which by default expects the response to contain a JMSCorrelationID equal to the outgoing JMSMessageID.  Although no configuration is required for this on the SOA client side, the service needs to copy the incoming JMSMessageID to the outgoing JMSCorrelationID; a sketch of that copy step follows below.
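Again as an illustrative sketch only (the names are assumptions, not taken from the sample project), the servicing application's reply step for option 2 boils down to copying the request's message ID into the reply's correlation ID:

import javax.jms.*;

public class CorrelatedReplySketch {

    // Build and send a reply that the Request-Reply JMS adapter can auto-correlate
    // back to the waiting composite instance.
    public void reply(Session session, Queue replyQueue, Message request) throws JMSException {
        MessageProducer producer = session.createProducer(replyQueue);
        try {
            TextMessage response = session.createTextMessage("<result/>");
            response.setJMSType("Service2");                          // lets the message selector route it
            response.setJMSCorrelationID(request.getJMSMessageID());  // the key correlation step
            producer.send(response);
        } finally {
            producer.close();
        }
    }
}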
Special Case - Request-Reply Synchronous JMS Adapter

When using a synchronous Request-Reply JMS adapter we can omit the message selector, because the Request-Reply JMS adapter will immediately listen with a message selector for the correlation ID rather than processing the incoming message asynchronously.
The synchronous request-reply will block the BPEL process thread and hold open the BPEL transaction until a response is received, so this should only be used when you expect the request to be completed in a few seconds.

The JCA Connection Factory used must point to a non-XA JMS Connection Factory and must have the isTransacted property set to “false”.  See the documentation for more details.

Sample

I developed a JDeveloper SOA project that demonstrates using a single queue for multiple incoming adapters.  The overall process flow is shown in the picture below.  The BPEL process on the left receives messages from the jms/TestQueue2 and sends messages to the jms/TestQueue1.  A Mediator is used to simulate multiple services and also provide a web interface to initiate the process.  The correct adapter is identified by using JMS message properties and a selector.

 

The flow above shows that the process is initiated from EM using a web service binding on mediator.  The mediator, acting as a client, posts the request to the inbound queue with a JMSType property set to "Initiate".

Inbound Request

The client receives the web service request and posts it to the inbound queue with JMSType='Initiate'.  The JMS adapter with a message selector "JMSType='Initiate'" receives the message and causes a composite to be created.  The composite in turn causes the BPEL process to start executing.
The BPEL process then sends a request to Service 1 on the outbound queue.
Key Points
  • Initiate message can be used to initiate a correlation set if necessary
  • Selector required to distinguish initiate messages from other messages on the queue

Service 1 Response - Separate Request and Reply Adapters

Service 1 receives the request and sends a response on the inbound queue with JMSType='Service1' and JMSCorrelationID = incoming JMS Message ID.  The JMS adapter with a message selector "JMSType='Service1'" receives the message and causes a composite to be created.  The composite uses a correlation set to in turn deliver the message to BPEL, which correlates it with the existing BPEL process.
The BPEL process then sends a request to Service 2 on the outbound queue.
Key Points
  • Separate request & reply adapters require a correlation set to ensure that the reply goes to the correct BPEL process instance
  • Selector required to distinguish Service 1 response messages from other messages on the queue

Service 2 Response - Asynchronous Request-Reply Adapter

Service 2 receives the request and sends a response on the inbound queue with JMSType='Service2' and JMSCorrelationID = incoming JMS Message ID.  The JMS adapter with a message selector "JMSType='Service2'" receives the message and causes a composite to be created.  The composite in turn delivers the message to the existing BPEL process using native JMS correlation.
Key Points
  • Asynchronous request-reply adapter does not require a correlation set; the JMS adapter auto-correlates using the CorrelationID to ensure that the reply goes to the correct BPEL process instance
  • Selector still required to distinguish Service 2 response messages from other messages on the queue

Service 3 Response - Synchronous Request-Reply Adapter

The BPEL process then sends a request to Service 3 on the outbound queue using a synchronous request-reply.
Service 3 receives the request and sends a response on the inbound queue with JMSType='Service2' and JMSCorrelationID = incoming JMS Message ID.  The synchronous JMS adapter receives the response without a message selector, correlates it to the BPEL process using native JMS correlation, and sends the overall response to the outbound queue.
Key Points
  • Synchronous request-reply adapter does not require a correlation set; the JMS adapter auto-correlates using the CorrelationID to ensure that the reply goes to the correct BPEL process instance
  • Selector also not required to distinguish Service 3 response messages from other messages on the queue because the synchronous adapter is doing a selection on the expected CorrelationID

Outbound Response

The client receives the response on an outbound queue.
Summary

When using a single JMS queue for multiple purposes bear in mind the following:

  • If multiple receives use the same queue then you need to have a message selector.  The corollary to this is that the message sender must add a JMS property to the message that can be used in the message selector.
  • When using a request-reply JMS adapter then there is no need for a correlation set, correlation is done in the adapter by matching the outbound JMS message ID to the inbound JMS correlation ID.  The corollary to this is that the message sender must copy the JMS request message ID to the JMS response correlation ID.
  • When using a synchronous request-reply JMS adapter then there is no need for the message selector because the message selection is done based on the JMS correlation ID.
  • Synchronous request-reply adapter requires a non-XA connection factory to be used so that the request part of the interaction can be committed separately to the receive part of the interaction.
  • Synchronous request-reply JMS adapter should only be used when the reply is expected to take just a few seconds.  If the reply is expected to take longer then the asynchronous request-reply JMS adapter should be used.
Deploying the Sample

The sample is available to download here and makes use of the following JMS resources:

  • jms/TestQueue (Queue) - Outbound queue from the BPEL process
  • jms/TestQueue2 (Queue) - Inbound queue to the BPEL process
  • eis/wls/TestQueue (JMS Adapter Connector Factory) - This can point to an XA or non-XA JMS Connection Factory such as weblogic.jms.XAConnectionFactory
  • eis/wls/TestQueue (None-XA JMS Adapter Connector Factory) - This must point to a non-XA JMS Connection Factory such as weblogic.jms.ConnectionFactory and must have isTransacted set to “false”

To run the sample then just use the test facility in the EM console or the soa-infra application.

Best of OTN - Week of March 23rd

OTN TechBlog - Fri, 2014-03-28 13:19

Java Community - 

Java 8 Launch - Over 9000 developers joined the Java 8 launch webcast. You can watch the replay and over 30 videos covering features of Java 8.

Java Tutorials Update for Java 8 - Find the JDK 8 Release Notes, and updates for specific features.

Java 8, Eclipse, and the Future
Java 8 Day at EclipseCon was standing room only. Learn what Mike Milinkovich, Executive Director of Eclipse Foundation, said about the trends he sees that will have an impact on developers and IDEs in the future.

Friday Funny by OTN Java Community Manager Tori Wieldt: Easy come, easy go Thanks @aljensen7

Architect Community - 

Video: ADF - Designing for Application Customization & MDS - MDS Infrastructure Decisions | Frank Nimphius
In this episode of ADF Architecture TV Frank Nimphius covers the MDS repository infrastructure you need for customizable and personalizable ADF applications.

You Are Not Even Wrong About the Cloud - Part 3 | RogerG
"The Cloud is not magic – reduced costs are not suddenly available through magic Cloud pixie dust," says RogerG. His article clears up some common misconceptions about moving existing applications to the cloud.

IoT end-to-end demo - Remote Monitoring and Service | Harish Doddala
IoT expert Harish Doddala's demo "showcases how the health of remotely deployed machinery can be monitored to illustrate how data coming from devices can be analyzed in real-time, integrated with back-end systems and visualized to initiate action as may be necessary."

Friday Funny by OTN Architect Community Manager Bob Rhubart: How about never?
This is my all-time favorite New Yorker cartoon, the creation of the brilliant Bob Mankoff. Reproducing the cartoon here would violate copyright laws, so you'll have to click the link. It's worth it.

Database Community - 

The Oracle Big Data Appliance 2.5 was released last week. With every BDA release, we upgrade the Big Data Light VM. Get it Now!

Quick Read: great article from Deiby Gómez (Oracle ACE), YV Ravikumar (Oracle OCM) & Nassyam Basha (OCP) on how “Flex ASM” and “Flex Cluster” support demanding requirements of Cloud Computing-oriented environments.

Friday Funny by OTN Database Community Manager Laura Ramsey: Oracle ACE Director Bjoern Rost is getting ready for the RAC ATTACK at the Finnish User event in Helsinki in June. (see picture at top.)

Systems Community

If You Have to Ask, You Wouldn't Understand - Resources, software, links, and the proper developer attitude to join the beta program for Oracle Solaris Studio 12.4, which began this week.

More Tips for Remote Access with Oracle Linux - Robert Chase continues with his series of tips and tricks for using SSH and other utilities to connect to a remote server.

Friday Funny by OTN Systems Community Manager, Rick Ramsey: OTN's Got Talent - Bob Rhubart, who manages the OTN Architect community, is also the lead singer for The Elderly Brothers, a band in Cleveland, Ohio.  I love their blend of old folk, blues, and rock.  White Freightliner is my favorite.

Speaking at Collaborate 2014

DBASolved - Fri, 2014-03-28 08:45

I’m a little behind on updating my blog with images of conferences where I will be speaking (I’ll get to that later and hopefully fix it).  In the meantime, I wanted to let everyone know that I will be speaking at IOUG Collaborate 2014 this year.  IOUG has decided to hold the conference in Las Vegas, NV.  It should be a fun conference; after all, everyone knows the saying “What happens in Vegas…” - well, you get the picture. 

Unlike last year, I will not be at the conference all week.  I will be there later in the week for my sessions and then leave quickly due to work commitments.  All good though, and I hope to see many friends while I’m there.  You may be wondering what sessions I’ll be presenting; here they are:

Thursday, 4/10/2014 @ 12:15 pm
How many ways can you monitor Oracle Golden Gate?

This presentation is going to be a quick look at how you can monitor Oracle Golden Gate within your environment using different approaches.

Friday, 4/11/2014 @ 09:45 am
Oracle Enterprise Manager 12c, Oracle Database 12c, and You!

This presentation is one of my most well received presentations.  As the title explains it deals with what you can expect from using Oracle Enterprise Manager 12c when managing Oracle Database 12c.  I try to improve this presentation each time; maybe there will be something new that you haven’t seen yet.

On an Oracle Enterprise Manager 12c related note, if you are looking to expand your knowledge of OEM12c, there are a lot of sessions being presented by Oracle and some of my friends at Collaborate 14. You can use this link to see what OEM sessions are being presented (here).

Enjoy!

twitter: @dbasolved

blog: http://dbasolved.com


Filed under: General
Categories: DBA Blogs

NoSQL vs. NewSQL vs. traditional RDBMS

DBMS2 - Fri, 2014-03-28 08:09

I frequently am asked questions that boil down to:

  • When should one use NoSQL?
  • When should one use a new SQL product (NewSQL or otherwise)?
  • When should one use a traditional RDBMS (most likely Oracle, DB2, or SQL Server)?

The details vary with context — e.g. sometimes MySQL is a traditional RDBMS and sometimes it is a new kid — but the general class of questions keeps coming. And that’s just for short-request use cases; similar questions for analytic systems arise even more often.

My general answers start:

  • Sometimes something isn’t broken, and doesn’t need fixing.
  • Sometimes something is broken, and still doesn’t need fixing. Legacy decisions that you now regret may not be worth the trouble to change.
  • Sometimes — especially but not only at smaller enterprises — choices are made for you. If you operate on SaaS, plus perhaps some generic web hosting technology, the whole DBMS discussion may be moot.

In particular, migration away from legacy DBMS raises many issues: 

  • Feature incompatibility (especially in stored-procedure languages and/or other vendor-specific SQL).
  • Your staff’s programming and administrative skill-sets.
  • Your investment in DBMS-related tools.
  • Your supply of hockey tickets from the vendor’s salesman.

Except for the first, those concerns can apply to new applications as well. So if you’re going to use something other than your enterprise-standard RDBMS, you need a good reason.

Commonly, the good reason to change DBMS is one or more of:

  • Programming model. Increasingly often, dynamic schemas seem preferable to fixed ones. Internet-tracking nested data structures are just one of the reasons.
  • Performance (scale-out). DBMS written in this century often scale out better than ones written in the previous millennium. Also, DBMS with fewer features find it easier to scale than more complex ones; distributed join performance is a particular challenge.
  • Geo-distribution. A special kind of scale-out is geo-distribution, which is sometimes a compliance requirement, and in other cases can be a response time nice-to-have.
  • Other stack choices. Couchbase gets a lot of its adoption from existing memcached users (although they like to point out that the percentage keeps dropping). HBase gets a lot of its adoption as a Hadoop add-on.
  • Licensing cost. Duh.

NoSQL products commonly make sense for new applications. NewSQL products, to date, have had a harder time crossing that bar. The chief reasons for the difference are, I think:

  • Programming model!
  • Earlier to do a good and differentiated job in scale-out.
  • Earlier to be at least somewhat mature.

And that brings us to the 762-gigabyte gorilla — in-memory DBMS performance – which is getting all sorts of SAP-driven marketing attention as a potential reason to switch. One can of course put any database in memory, providing only that it is small enough to fit in a single server’s RAM, or else that the DBMS managing it knows how to scale out. Still, there’s a genuine category of “in-memory DBMS/in-memory DBMS features”, principally because:

  • In-memory database managers can and should have a very different approach to locking and latching than ones that rely on persistent storage.
  • Not all DBMS are great at scale-out.

But Microsoft has now launched Hekaton, about which I long ago wrote:

I lack detail, but I gather that Hekaton has some serious in-memory DBMS design features. Specifically mentioned were the absence of locking and latching.

My level of knowledge about Hekaton hasn’t improved in the interim; still, it would seem that in-memory short-request database management is not a reason to switch away from Microsoft SQL Server. Oracle has vaguely promised to get to a similar state one of these years as well.

Of course, HANA isn’t really a short-request DBMS; it’s an analytic DBMS that SAP plausibly claims is sufficiently fast and feature-rich for short-request processing as well.* It remains to be seen whether that difference in attitude will drive enough sustainable product advantages to make switching make sense.

*Most obviously, HANA is columnar. And it has various kinds of integrated analytics as well.

Related links

Categories: Other

db.person.find( { "role" : "DBA" } )

Tugdual Grall - Fri, 2014-03-28 08:00
Wow! It has been a while since I posted something on my blog. I have been very busy, moving to MongoDB, learning, learning, learning… Finally I can breathe a little and answer some questions. Last week I was helping my colleague Norberto deliver a MongoDB Essentials training in Paris. This was a very nice experience, and I am impatient to deliver it on my own. I was happy to see that…

Log Buffer #365, A Carnival of the Vanities for DBAs

Pythian Group - Fri, 2014-03-28 07:51

This Log Buffer edition covers various tips, new releases, and technically rich blog posts from the worlds of Oracle, SQL Server and MySQL.

Oracle:

Why choose Oracle for Advanced Analytics? Mark Hornick answers.

Michael Rainey talks about Data Integration Tips: ODI 12c – Substitution API GUIDs.

Warren Baird has shared a tip: if you are using AutoVue with large 3D models, it is often valuable to increase the maximum heap size available to the AutoVue client.

Sveta reports that a new version of the JSON UDF functions, 0.3.1, has been released. This is a development release which contains new functionality. You can download the functions from the MySQL Labs website.

A new IOUG research report “Efficiency Isn’t Enough: Data Centers Lead the Drive to Innovation” presents the results of a survey of 285 data managers and professionals.

SQL Server:

A drive on a mission-critical server is reaching capacity, and the new DBA is panicking. How do you approach a ballooning log file that won’t stop growing?

Is there a way to process only the new data for a partition in SQL Server Analysis Services? Yes, this is accomplished in SQL Server Analysis Services with the ProcessAdd option for partitions. Daniel Calbimonte demonstrates how it works.

Stairway to XML: Level 1 – Introduction to XML

Resilient T-SQL code is code that is designed to last, and to be safely reused by others. The goal of defensive database programming, the goal of this book, is to help you to produce resilient T-SQL code.

Private Cloud, What Is It and Why Do You Need It?

MySQL:

Performance_schema success stories : replication SQL thread tuning

Real-Time Data Loading from MySQL to Hadoop with New Tungsten Replicator 3.0 — Webinar-on-Demand

There was an exciting announcement today about WebScaleSQL, the new “branch” (not a fork, they say!) of MySQL created by folks from MySQL engineering teams at Facebook, Google, LinkedIn, and Twitter.

I have wanted multi-source replication in MySQL since 4.0, so I was delighted to see this feature appear in MariaDB 10.0.

Joro wrote recently about MySQL 5.6.17's new support for AES-256 encryption, and it’s a great improvement for people who need to encrypt their data at rest.

Categories: DBA Blogs

Find Intelligence in your Dark Data to Illuminate Business Process Opportunities

WebCenter Team - Fri, 2014-03-28 07:49


In conjunction with the folks at AIIM, we are developing a new webinar to help everyone better understand the opportunities that can be realized when information is made more accessible within the appropriate business, user and security contexts.

Transactional content comes in many forms and from many places: externally, invoices and loan applications received from customers; internally, expense reports and purchase orders to process. These are but a few of the many forms of transactional content, each with specific data you collect for a specific purpose, and each of which could be tied to and trigger a backend process. Yet this information, once used for its initial purpose, is often overlooked for additional use. Referred to as "dark data", the result is lost value in the information you have captured.

Bringing this "dark data" to light to connect transactional and textual data improves business intelligence. The result is used to create a more agile organization in decision-making, case management, and operational processes. It helps develop meaningful insight into the available and relevant information. Uniting infrastructures and interfaces that enrich the digital experience, customers and employees gain a clearer, and more complete understanding of the information they are dealing with. In this webinar, join AIIM’s Bob Larrivee as he discusses connecting transactional and textual data so that it can:

  • deliver a more complete view of relevant content
  • maximize the value of your information
  • and bring business intelligence to a new dimension.

The webinar is Wednesday April 23rd, 2014 at 2pm ET, 11am PT, 7pm GMT.  Click this link and register today.  

We think you will find it insightful and helpful as you look to improve the visibility of the right information across various areas of your business.  See you there!

Built-In OBIEE Load Testing with nqcmd

Rittman Mead Consulting - Fri, 2014-03-28 05:21

nqcmd ships with all installations of OBIEE and includes some very useful hidden functionality – the ability to generate load tests against OBIEE. There are lots of ways of generating load against OBIEE, but most require third party tools of varying degrees of complexity to work with.

It’s easy to try this out. First set the OBIEE environment:  [I'm using SampleApp v309R2 as an example; your FMW_HOME path will vary]

. ~/obiee/instances/instance1/bifoundation/OracleBIApplication/coreapplication/setup/bi-init.sh

and then the “open sesame” setting which enables the hidden nqcmd functionality:

export SA_NQCMD_ADVANCED=Yes

On Windows, run set SA_NQCMD_ADVANCED=YES instead. If you don’t set this environment variable then nqcmd just throws an error if you try to use one of the hidden options.

Now if you list the available options for nqcmd you’ll see lots of new options in addition to the usual ones:

Command: nqcmd - a command line client which can issue SQL statements
                 against either Oracle BI server or a variety
                 of ODBC compliant backend databases.
SYNOPSIS
         nqcmd [OPTION]...
DESCRIPTION
         -d<data source name>
         -u<user name>
         -p<password>
         -s<sql input file name>
         -o<output result file name>
         -D<Delimiter>
         -b<super batch file name>
         -w<# wait seconds>
         -c<# cancel interval seconds>
         -C<# number of fetched rows by column-wise binding>
         -n<# number of loops>
         -r<# number of requests per shared session>
         -R<# number of fetched rows by row-wise binding>
         -t<# number of threads>
         -T (a flag to turn on time statistics)
         -a (a flag to enable async processing)
         -f (a flag to enable to flush output file for each write)
         -H (a flag to enable to open/close a request handle for each query)
         -z (a flag to enable UTF8 in the output result file
         -utf16 (a flag to enable UTF16 for communicating to Oracle BI ODBC driver)
         -q (a flag to turn off row output)
         -NoFetch (a flag to disable data fetch with query execution)
         -SmartDiff (a flag to enable SmartDiff tags in output)
         -NotForwardCursor (a flag to disable forwardonly cursor)
         -v (a flag to display the version)
         -P<the percent of statements to disable cache hit>
         -impersonate <the impersonate username>
         -runas <the runas username>
         -td <the time duration to run >
         -qsel <the query selection>
         -ds <the dump statistics duration in secs>
         -qstats <print Query statistics at end of run>
         -login <login scenario for PSR. login/execute sqls/logout for sql file>
         -ShowQueryLog <to display query log from server, -H is required for this setting>
         -i <ramup interval for each user for load testing, -i is required for this setting>
         -ONFormat<FormatString, i.e. TM9, 0D99>

You’re on your own figuring the new options out as they’re not documented (and therefore presumably not supported and liable to change or be dropped at any time). What I’ve done below is my best guess at how to use them – don’t take this as gospel. The one source that I did find is a post on Oracle’s CEAL blog: OBIEE 11.1.1 – Advanced Usage of nqcmd command, from which I’ve taken some of the detail below.

Let’s have a look at how we can generate a load test. First off, I’ll create a very simple query:

and from the Advanced tab extract the Logical SQL from it:

SELECT
   0 s_0,
   "A - Sample Sales"."Products"."P2  Product Type" s_1,
   "A - Sample Sales"."Base Facts"."1- Revenue" s_2
FROM "A - Sample Sales"
ORDER BY 1, 2 ASC NULLS LAST
FETCH FIRST 5000001 ROWS ONLY

This Logical SQL I’ve saved to a file, report01.lsql.

To run this Logical SQL from nqcmd I use the standard (documented) syntax, passing the Logical SQL filename with the -s flag:

[oracle@obieesample loadtest]$ nqcmd -d AnalyticsWeb -u Prodney -p Admin123 -s report01.lsql

-------------------------------------------------------------------------------
          Oracle BI ODBC Client
          Copyright (c) 1997-2013 Oracle Corporation, All rights reserved
-------------------------------------------------------------------------------

Connection open with info:
[0][State: 01000] [DataDirect][ODBC lib] Application's WCHAR type must be UTF16, because odbc driver's unicode type is UTF16
SELECT
   0 s_0,
   "A - Sample Sales"."Products"."P2  Product Type" s_1,
   "A - Sample Sales"."Base Facts"."1- Revenue" s_2
FROM "A - Sample Sales"
ORDER BY 1, 2 ASC NULLS LAST
FETCH FIRST 5000001 ROWS ONLY
[...]

0            Smart Phones   6773120.36
--------------------
Row count: 11
--------------------

Processed: 1 queries

Adding the -q flag will do the same, but suppress the data output:

oracle@obieesample loadtest]$ nqcmd -d AnalyticsWeb -u Prodney -p Admin123 -s report01.lsql -q

[...]
----------------------------------------------------------------------
Row count: 11
-------------------------------------------------------------------------------------------------------------   
Processed: 1 queries

The basic parameters for load testing are

  • -t – how many threads [aka Virtual Users]
  • -td – test duration
  • -ds – how frequently to write out load test statistics
  • -T – enable time statistics [without this they will not be reported correctly]

You also need to supply -o with an output filename. Even if you’re not writing the data returned from the query to disk (which you shouldn’t, and -q disables), nqcmd needs this in order to be able to write its load test statistics properly (I got a lot of zeros and nan otherwise). In addition, the -T (Timer) flag should be enabled for accurate timings.

So to run a test for a minute with 5 threads, writing load test stats to disk every 5 seconds, you’d run:

nqcmd -d AnalyticsWeb -u Prodney -p Admin123 -s report01.lsql -q -T -td 60 -t 5 -ds 5 -o output

The load test stats are written to a file based on the name given in the -o parameter, with a _Counters.txt suffix:

$ cat output_Counters.txt
                        nQcmd Load Testing
TimeStamp       Sqls/Sec        Avg RT  CumulativePrepareTime   CumulativeExecuteTime   CumulativeFetchTime
00:00:05        56.200000       0.065925        2.536000                13.977000               2.012000
00:00:10        66.800000       0.065009        5.641000                33.479000               4.306000
00:00:15        69.066667       0.066055        8.833000                52.234000               7.366000
00:00:20        73.100000       0.063984        11.978000               71.944000               9.622000
[...]
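
If you want to post-process the _Counters.txt output yourself, a minimal sketch along these lines could work. This is not part of nqcmd or the original post; it assumes the whitespace-delimited layout and the default output_Counters.txt file name shown above:

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.List;

// Summarise the Sqls/Sec column of an nqcmd _Counters.txt file.
public class NqcmdCountersSummary {

    public static void main(String[] args) throws IOException {
        String file = args.length > 0 ? args[0] : "output_Counters.txt";
        List<String> lines = Files.readAllLines(Paths.get(file));

        double total = 0;
        int samples = 0;
        for (String line : lines) {
            String[] cols = line.trim().split("\\s+");
            // Skip the title and header rows; data rows start with a HH:MM:SS timestamp.
            if (cols.length < 2 || !cols[0].matches("\\d{2}:\\d{2}:\\d{2}")) {
                continue;
            }
            total += Double.parseDouble(cols[1]);   // Sqls/Sec column
            samples++;
        }

        if (samples > 0) {
            System.out.printf("Average Sqls/Sec over %d samples: %.2f%n", samples, total / samples);
        } else {
            System.out.println("No data rows found in " + file);
        }
    }
}

Compile it and pass the counters file as the argument, e.g. java NqcmdCountersSummary output_Counters.txt.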

Using obi-metrics-agent to pull out the OBIEE metrics and Graphite to render them we can easily visualise what happened when we ran the test. The Oracle_BI_General.Total_sessions metric shows:

nq07

Ramping Up the Load

nqcmd also has a -i parameter, to specify the ramp up per thread. Most load tests should incorporate a “ramp up”, whereby the load is introduced gradually. This is important so that you don’t overwhelm a server all at once. It might be that the server will not support the total number of users planned, so by using a ramp up period you can examine the server’s behaviour as the load increases gradually, spotting the point at which the wheels begin to come off.

The -i parameter for nqcmd is the delay between each thread launching, and this has an interesting effect on the duration of the test. If you specify a test duration (-td) of 5 seconds, five threads (-t), and a rampup (-i) of 10 seconds the total elapsed will be c.55 seconds (5×10 + 5).

I’ve used the standard time command on Linux to validate this by specifying it before the nqcmd call.

$ time nqcmd -d AnalyticsWeb -u Prodney -p Admin123 -s report01.lsql -q -td 5 -t 5 -ds 1 -o $(date +%Y-%m-%d-%H%M%S) -T -i 10 

[...]

real    0m56.896s
user    0m2.350s
sys     0m1.434s

So basically the -td is the “Steady State” once all threads are ramped up, and the literal test duration is equal to (rampup * number of threads) + (desired steady state)

The above ramp-up can be clearly seen:

nq06

BTW a handy trick I’ve used here is to use a timestamp for the output name so that the Counter.txt from one test doesn’t overwrite another, by specifying date using an inline bash command :

nqcmd [...]   -o $(date +%Y-%m-%d-%H%M%S)   [...]

Whilst we’re at it for tips & tricks – if you want to stop nqcmd running but Ctrl-C isn’t instant enough for you, the following will stop it in its tracks:

pkill -9 nqcmd

Wait a Moment…

…or two. Wait time, or “think time”, is also important in producing a realistic load test. Unless you want to hammer your server just for the lulz to see how fast you can overload it, you’ll want to make sure the workload you’re simulating represents how it is actually used — and in reality users will be pausing (thinking) between report requests. The -w flag provides this option to nqcmd.

In this test below, whilst the Total Sessions is as before (no ramp up), the Connection Pool shows far fewer busy connections. On previous tests the busy connections were equal to the number of active threads, because the server was continuously running queries.

nq09

And the CPU, which in the previous test was exhausted at five users with no wait time, now is a bit more relaxed

nq10

for comparison, this was the CPU in the first test we ran (5 threads, no wait time, no ramp up). Note that ‘idle’ drops to zero, i.e. the CPU is flat-out.

nq11

Load Test in Action

Let’s combine ramp up and wait times to run a load test and see what we can see in the underlying OBIEE metrics. I’m specifying:

  • Write the output to a file with the current timestamp (date, in the format YYYY-MM-DD HH:MM:SS)
    -o $(date +%Y-%m-%d-%H%M%S)
  • 20 threads
    -t 20
  • 10 second gap between starting each new thread
    -i  10
  • 5 second wait between each thread submitting a new query
    -w 5
  • Run for a total of 230 seconds (20 thread x 10 second ramp up = 200 seconds, plus 30 second steady state)
    -td 230

$ date;time nqcmd -d AnalyticsWeb -u weblogic -p Password01 -s queries.lsql -q -T -o $(date +%Y-%m-%d-%H%M%S) -t 20 -ds 5 -td 230 -w 5 -i 10;date

Here’s what happened.

  • At first, as the users ramp up the Connection Pool gets progressively busier
    2014-03-28_10-24-11
  • However, when we hit c.14 threads, things start to go awry. The busy count stays at 10, even though the user count is increasing: 2014-03-28_10-26-12
    (This was displayed in flot which you can get to on the /graphlot URL of your Graphite server)
  • So the user count is increasing, but we’re not seeing increasing activity on the Connection Pool… so what does that do for the response times? 2014-03-28_10-30-50
    OK, so the Average Query Elapsed Time is a metric I’d normally be wary of, but this is a dedicated server running just my load test workload (and a single query within it) so in this case it’s a valid indicator — and it’s showing that the response time is going up. Why’s it going up?
  • Looking more closely at the Connection Pool we can see a problem — we’re hitting the capacity of ten connections, and requests are starting to queue up: 2014-03-28_10-38-06
    Note how once the Current Busy Connection Count hits the Capacity of ten, the Current Queued Requests value starts to increase — because the number of users is increasing, trying to run more queries, but having to wait.

So this is a good example of where users would see slow performance, but some of the usual “Silver Bullets” around hardware and the database would completely miss the target, because the bottleneck here is actually in the configuration of the Connection Pool.

If you’re interested in hearing more about this subject, make sure you register for the BI Forum in Brighton, 7-9 May where I’m delighted to be speaking for the second time, presenting “No Silver Bullets : OBIEE Performance in the Real World“.

Categories: BI & Warehousing

OBIEE Security: User Authentication, WebLogic, OPSS, Application Roles and LDAP

Where and how are OBIEE users authenticated? A few options exist. A later blog post will review how to use the Oracle E-Business Suite to authenticate user connections and pass the E-Business Suite session cookie to OBIEE. Many, if not most, OBIEE users, however, will authenticate through WebLogic. These users are defined and authenticated within WebLogic using its built-in LDAP database or an external LDAP implementation. Once authenticated, the user’s LDAP group memberships are mapped to application roles that are shared by all Fusion applications, OBIEE included.

WebLogic and Oracle Platform Security Services (OPSS)

As a Fusion Middleware 11g product, OBIEE 11g uses Oracle WebLogic for centralized common services, including a common security model. WebLogic Security Realms define the security configurations required to protect the application(s) deployed within WebLogic and consist of definitions of users, groups, security roles and polices.

If at all possible, Integrigy Corporation recommends using the default realm as a baseline to configure a new Realm for OBIEE. Integrigy Corporation highly recommends that each security realm attribute be thoroughly understood.

To implement Security Realm configurations, all Fusion Middleware applications use a security abstraction layer within WebLogic called the Oracle Platform Security Services (OPSS). OPSS is not the same as WebLogic security. WebLogic consumes OPSS services and frameworks (for example authentication). OPSS provides three key services:

  • An Identity Store, to define and authenticate users.
  • A Credential Store, to hold the usernames, passwords and other credentials that system services require.
  • A Policy Store, containing details of user groups and application roles, application policies and permissions. The policy store is used to authorize users (what can they do?) after they are authenticated. (A WLST sketch for inspecting two of these stores follows this list.)
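
Here is a minimal sketch, run against a live Admin Server. The "obi" application stripe and the oracle.bi.system / system.user credential map and key are assumptions based on a default OBIEE 11g installation, and the connection details are placeholders; adjust all of them for your environment.

$ $MW_HOME/oracle_common/common/bin/wlst.sh <<'EOF'
connect('weblogic', 'password', 't3://localhost:7001')
# Policy store: application roles defined under the OBIEE stripe
listAppRoles(appStripe='obi')
# Credential store: the stored BI system user credential
listCred(map='oracle.bi.system', key='system.user')
EOF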

Enterprise Manager and Application Roles

Application roles are new with OBIEE 11g and replace the groups used within OBIEE 10g. Moving application roles out of OBIEE allows a common set of roles to be defined across all Fusion Middleware products and applications.

Application roles and Application Policies are managed in Oracle Enterprise Manager - Fusion Middleware Control. This is where LDAP groups are mapped to application roles and detailed permissions are assigned to those application roles. The key concept is that LDAP groups can be assigned both to users and to Fusion Application roles; LDAP users are never individually or directly assigned permissions and grants within OBIEE.

The out-of-the-box installation of OBIEE delivers three main application roles. These roles may be granted to individual users or to LDAP groups. New roles can be created, and existing roles changed, during the implementation or at any later time.
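
The same LDAP-group-to-application-role mapping can also be scripted rather than clicked through in Fusion Middleware Control. The following is a hedged sketch, run in the same kind of WLST session shown above, again assuming the default "obi" stripe and the default role and group names listed below:

grantAppRole(appStripe='obi', appRoleName='BIAuthor',
             principalClass='weblogic.security.principal.WLSGroupImpl',
             principalName='BIAuthors')

The principalClass value tells OPSS that BIAuthors is a WebLogic group rather than an individual user, which keeps the grant in line with the rule above: groups are mapped to roles, and users are never granted permissions directly.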

Default OBIEE Application Roles

  • BIConsumer (LDAP group BIConsumers): Base-level role that grants the user access to OBIEE analyses, dashboards and agents. Allows the user to run or schedule existing BI Publisher reports, but not to create any new ones.
  • BIAuthor (LDAP group BIAuthors): All BIConsumer rights, grants and permissions, and additionally allows users to create new analyses, dashboards and other BI objects.
  • BIAdministrator (LDAP group BIAdministrators): All BIAuthor rights, grants and permissions (and therefore BIConsumer), and additionally allows the user to administer all parts of the system, including modifying catalog permissions and privileges.

Note the naming convention difference: the application role names are singular, while the corresponding LDAP group names are plural.

If you have questions, please contact us at info@integrigy.com

 -Michael Miller, CISSP-ISSMP

Tags: Oracle Business Intelligence (OBIEE)
Categories: APPS Blogs, Security Blogs

JDeveloper BPMN Bug: Activity Name conflict - Follow Up

Darwin IT - Fri, 2014-03-28 02:49
A few weeks ago I reported about my experiences with JDeveloper 11g PS6 in Malta. I had to borrow a laptop to do the Adaptive Case Management workshop. It was an HP laptop with Ubuntu 12. Somehow this combination with VirtualBox 4.3.6 led to a bug in JDeveloper: creating a new project would introduce a dash ('-') in the ID of each created activity.

In the meantime I have a new laptop, a very nice Asus N56. This morning I had the opportunity to start up the VM that I had exported and backed up from the borrowed laptop. And what a surprise: creating a new project just works! No naming conflicts. Creating a new activity is, of course, also fine.

A strange case indeed.

Juggernaut

Jonathan Lewis - Fri, 2014-03-28 02:12

One of the problems of “knowing” so much about Oracle is that the more you know the more you have to check on each new release of the software. An incoming ping on my posting “Lock Horror” reminded me that I was writing about 11.2.0.1, that the terminal release is 11.2.0.4, and that the whole thing may have changed in 12.1.0.1 – so I ought to re-run some tests to make sure that the article is up to date if it’s likely to be read a few times in the next few days.

Unfortunately, although I often add a URL to scripts I’ve used to confirm results published in the blog, I don’t usually include a script name in my blog postings to remind me where to go if I want to re-run the tests. So how do I find the right script(s)? Typically I list all the likely scripts and compare dates with the date on the blog; so here’s what I got for “lock”.


SQL> host ls -ltr *lock*.sql | grep -v block
-rwxr-xr-x 1 jonathan dba 1569 Jun 28  2002 c_bitlock.sql
-rwxr-xr-x 1 jonathan dba 1303 Oct  5  2002 ddl_deadlock.sql
-rwxr-xr-x 1 jonathan dba 1875 Oct  7  2002 ddl_deadlock_2.sql
-rwxr-xr-x 1 jonathan dba 1654 Aug  6  2003 hw_lock.sql
-rwxr-xr-x 1 jonathan dba 2626 Sep 17  2004 lock_oddity.sql
-rwxr-xr-x 1 jonathan dba 1804 Sep 17  2004 lock_speed.sql
-rwxr-xr-x 1 jonathan dba 3194 May  8  2006 space_locks.sql
-rwxr-xr-x 1 jonathan dba 4337 Jan  3  2008 tm_deadlock.sql
-rwxr-xr-x 1 jonathan dba 1149 Jan  3  2008 show_lock.sql
-rwxr-xr-x 1 jonathan dba 2068 Apr 21  2008 hw_lock_2.sql
-rwxr-xr-x 1 jonathan dba 1482 Feb  5  2010 tt_lock.sql
-rwxr-xr-x 1 jonathan dba 1692 Feb 16  2010 to_lock.sql
-rwxr-xr-x 1 jonathan dba 3308 Jun  1  2010 skip_locked.sql
-rwxr-xr-x 1 jonathan dba 2203 Nov  2  2010 deadlock_statement.sql
-rwxr-xr-x 1 jonathan dba 2883 Nov  3  2010 merge_locking.sql
-rwxr-xr-x 1 jonathan dba 1785 Dec 14  2010 sync_lock.sql
-rwxr-xr-x 1 jonathan dba  984 Apr 23  2011 para_dml_deadlock.sql
-rwxr-xr-x 1 jonathan dba 4305 Jun  4  2011 locking_fifo.sql
-rwxr-xr-x 1 jonathan dba 5970 Jun  5  2011 locking_fifo_2.sql
-rwxr-xr-x 1 jonathan dba  917 Jun 30  2011 ul_deadlock.sql
-rwxr-xr-x 1 jonathan dba  936 Jul  8  2011 funny_deadlock.sql
-rwxr-xr-x 1 jonathan dba  741 Sep  8  2011 row_lock_wait_index.sql
-rwxr-xr-x 1 jonathan dba 2590 Nov 30  2012 fk_lock_stress.sql
-rwxr-xr-x 1 jonathan dba 4561 Feb  6  2013 dbms_lock.sql
-rwxr-xr-x 1 jonathan dba 1198 Apr  6  2013 libcache_locks.sql
-rwxr-xr-x 1 jonathan dba 5636 Nov 27 19:40 ash_deadlocks.sql
-rwxr-xr-x 1 jonathan dba  379 Mar 27 19:17 fk_constraint_locks.sql

Nothing leaps out as an obvious candidate, though “funny_deadlock.sql” catches my eye for future reference; maybe I should look for “foreign key”.

SQL> host ls -ltr *fk*.sql | grep -v fkr
-rwxr-xr-x 1 jonathan dba  2140 Jun 16  2005 fk_check.sql
-rwxr-xr-x 1 jonathan dba  2897 Jun 16  2005 fk_order.sql
-rwxr-xr-x 1 jonathan dba   650 Oct 26  2007 pk_fk_null.sql
-rwxr-xr-x 1 jonathan dba  5444 Nov  4  2007 c_fk.sql
-rwxr-xr-x 1 jonathan dba  1568 Dec  5  2008 null_fk.sql
-rwxr-xr-x 1 jonathan dba  2171 Mar  2  2009 fk_anomaly_2.sql
-rwxr-xr-x 1 jonathan dba  3922 Mar  2  2009 fk_anomaly.sql
-rwxr-xr-x 1 jonathan dba  5512 Oct 15  2009 fk_check_2.sql
-rwxr-xr-x 1 jonathan dba  1249 Feb 15  2010 c_pk_fk_2.sql
-rwxr-xr-x 1 jonathan dba  1638 Feb 16  2010 c_pk_fk_3.sql
-rwxr-xr-x 1 jonathan dba  5121 Jun  1  2012 c_pt_fk_2.sql
-rwxr-xr-x 1 jonathan dba  4030 Jun  5  2012 c_pt_fk_3.sql
-rwxr-xr-x 1 jonathan dba  2062 Jun  5  2012 c_pt_fk_3a.sql
-rwxr-xr-x 1 jonathan dba  2618 Sep 23  2012 c_pk_fk_02.sql
-rwxr-xr-x 1 jonathan dba  1196 Oct 19  2012 deferrable_fk.sql
-rwxr-xr-x 1 jonathan dba  2590 Nov 30  2012 fk_lock_stress.sql
-rwxr-xr-x 1 jonathan dba  4759 Sep  1  2013 fk_bitmap.sql
-rwxr-xr-x 1 jonathan dba  1730 Sep 30 07:51 virtual_fk.sql
-rwxr-xr-x 1 jonathan dba  3261 Dec 22 09:41 pk_fk_gets.sql
-rwxr-xr-x 1 jonathan dba  8896 Dec 31 13:19 fk_delete_gets.sql
-rwxr-xr-x 1 jonathan dba 10071 Dec 31 14:52 fk_delete_gets_2.sql
-rwxr-xr-x 1 jonathan dba  4225 Jan 14 11:15 c_pk_fk.sql
-rwxr-xr-x 1 jonathan dba  2674 Jan 14 13:42 append_fk.sql
-rwxr-xr-x 1 jonathan dba  1707 Feb 10 12:34 write_cons_fk.sql
-rwxr-xr-x 1 jonathan dba  9677 Feb 24 17:23 c_pt_fk.sql
-rwxr-xr-x 1 jonathan dba   379 Mar 27 19:17 fk_constraint_locks.sql

(The “-fkr” is to eliminate scripts about “first K rows optimisation”.) With a little luck the dates are about right and c_pk_fk_2.sql and c_pk_fk_3.sql will be relevant. So keep an eye on “Lock Horror” for an update in the next few days.
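
As an aside, since the shortlisting is really just “match file dates to the blog date”, it can be scripted; here is a hypothetical sketch using GNU find, with a made-up date window standing in for the date of the original post:

$ find . -maxdepth 1 -name '*fk*.sql' ! -name '*fkr*' -newermt '2010-02-01' ! -newermt '2010-03-01' -ls

The ! -name '*fkr*' clause plays the same role as the grep -v fkr above.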

You’ll notice that some of the scripts have a very old datestamp on them - that’s an indication of how hard it is to keep up; when I re-run a script on a new version of Oracle I invariably add a “Last Tested:” version to the header, and a couple of notes about changes.  A couple of my scripts date back to June 2001 – but that is, at least, the right century, and some people are still using Oracle 7.

Footnote

It should be obvious that I can’t test everything on every new release – but it’s amazing how often on a client site I can recognize a symptom and pick a script that I’ve used in the past to construct the problem – and that’s when a quick bit of re-testing helps me find a solution or workaround (or Oracle bug note).

 


OOW : Package Digora

Jean-Philippe Pinte - Fri, 2014-03-28 01:57
Would you like to attend the next Oracle Open World, which will take place from September 28 to October 2, 2014?
Check out the dedicated package put together by Digora.