
Feed aggregator

A 2014 (Personal) Blogging Retrospective

Michael Feldstein - 2 hours 16 min ago

Unlike many of the bloggers who I enjoy reading the most, I don’t often let my blogging wander into the personal except as a route to making a larger point. For some reason, e-Literate never felt like the right outlet for that. But with the holidays upon us, with some life cycle events in my family causing me to be a bit more introspective than usual, and with Phil’s top 20 posts of the year showing up in my inbox, I’m in the mood to ruminate about my personal journey in blogging, where it’s taken me so far, and what it means to me. In the process, I’ll also reflect a bit on what we try to do at e-Literate.

When I started the blog 10 years ago, I honestly didn’t know what I was doing. OK, I guess that’s still true in some ways. What I mean is that I was looking for a purpose in my life. I had been a middle school and high school teacher for five years. It was by far the best job I had ever had, and in some ways is still the best job I ever had. I left for a few different reasons. One was financial. I had fallen in love with a woman who had two teenaged daughters and suddenly found myself having to support a family. Another was frustration with a lack of professional growth opportunities. I taught in a wonderful, tiny little private school that operated out of eight rooms in the back of the Hoboken public library. It was amazing. But I wanted to do more and there was really no place for me to grow at the small school. I was young and feeling my oats. Lacking teacher’s certification and having been spoiled by teaching in such an amazing environment, I despaired of finding the right opportunity that would be professionally exciting while also allowing me to support my family. Part of it, too, was that I was beginning to get drawn to larger, systemic and cultural questions. For example, in the United States we have strong local control over our school systems, and my experience was that the overwhelming majority of parents care deeply for their children and want what’s best for them. Theoretically, it should be simple for parents to demand and get better schools. But that rarely happens. Why not? Why was the wonderful place that I was working at so rare? So I went wandering. I tried a few different things, but none of them made me happy. I am a teacher from a family of teachers. I needed to be close to education. But I also needed to support my family. And I needed to spread my wings, intellectually. I kept getting drawn to the bigger, systemic issues.

I started e-Literate just before I got a job at the SUNY Learning Network, having wandered in the wilderness first of graduate school and then of corporate e-Learning and knowledge management for a number of years. I had hoped that writing in public would help me clarify for myself what I wanted to do next in education as well as find some fellow travelers who might help me identify some sort of a career path that made sense. Meanwhile, I made a few good friends at SUNY, but mostly I grew quickly frustrated with the many barriers to doing good educational work that, once again, just shouldn’t exist if we lived in any kind of a rational world. Blogging was an oasis for me. It was a place where I found the kind of community that I should have had in academia but mostly didn’t. As I learned from early ed tech bloggers like Stephen Downes, Alan Levine, D’Arcy Norman, Scott Leslie, Beth Harris, Steven Zucker, Joe Ugoretz, George Siemens, and Dave Cormier (who co-hosted a wonderful internet radio show in those pre-podcasting days), I felt like I had found a home. It’s hard to describe what those early times of edublogging felt like if you weren’t around then. It was much friendlier. Much cozier. Everybody was just trying to figure stuff out together. I was just another shmoe working in the coal mines at a public university system, but in the blogosphere, there were really smart, articulate, accomplished people who took what I had to say seriously and encouraged me to say more. We argued sometimes, but mostly it was the good kind of argument. Arguments over what matters and what is true, rather than over who matters and what is the correct thing to say. It was…magical. I owe a great debt of gratitude to the bloggers I have mentioned here as well as others. I am ashamed to realize that I probably haven’t expressed that publicly before now. Without the folks who were already here when I arrived, I wouldn’t be where I am and who I am.

That said, finding a community is not the same thing as finding a purpose. The blogging wasn’t part of a satisfying career doing good in education so much as it was an escape from an unsatisfying career of failing to do good in education.

Then Blackboard sued Desire2Learn over a patent.

Such a strange thing to change a person’s life. Like most people, I really didn’t know what to make of it at first. I have never been dogmatically anti-corporate, anti-patent, or even anti-Blackboard. That said, Blackboard had proven itself to be a nasty, hyper-competitive company in those days, and this sounded like more of the same at first blush. But really, what did it mean to assert a patent in ed tech? I decided to figure it out. I read up on patent law and studied the court documents from the case (which Desire2Learn was publishing). I got a lot of help from Jim Farmer and some folks in the law community. And what I learned horrified me. Blackboard’s patent, if it had been upheld, would have applied to every LMS on the market, both proprietary and open source. Much worse, though, was the precedent it would have set. The basic argument that Blackboard made in their patent application process was that their invention was novel because it applied specifically to education. It was a little bit like arguing that one could patent a car that was designed only to be driven to the grocery store. Even if you didn’t care about the LMS, a successful assertion of that patent would have opened up Pandora’s box for any educational software. And if companies perceived that they could gain competitive advantages over their rivals by asserting patents, it would be the end of creative experimentation in educational technology. The U.S. patent system is heavily tilted toward large companies with deep pockets. Blackboard was already in the process of assembling a patent portfolio that would have enabled them to engage in what’s known as “stacking.” This is when a company files a flurry of lawsuits over a bunch of patents against a rival. Even if most of those assertions are bogus, it doesn’t matter, because the vast majority of organizations simply can’t afford the protracted legal battle. 
It’s less expensive for them to fold and just pay the extortionate patent license fees, or to sell out to the patent holder (which is probably what Blackboard really wanted from Desire2Learn). All that’s left in the market is for the big companies to cut cross-licensing deals with each other. Whatever you may think about the current innovation or lack thereof in educational technology, whatever we have now would have been crushed had Blackboard succeeded. That includes open source innovation. If a college president was told by her legal counsel that running a campus installation of WordPress with some education-specific modifications might violate a patent, what do you think the institutional decision about running WordPress would be?

So I went to war. I may have been just some shmoe working in the coal mines of a public university system, but dammit, I was going to organize. I translated the legalese of the patent into plain English so that everybody could see how ridiculous it was. I started a Wikipedia page on the History of Virtual Learning Environments so that people could record potential prior art against the patent. Mostly, I wrote about what I was learning about patents in general and Blackboard’s patents in particular. I wrote a lot. If you look down at the tag cloud at the bottom of the blog page, you’ll see that “Blackboard-Inc.” and “edupatents” are, to this day, two of the most frequently used tags on e-Literate.

And then an amazing thing happened. People listened. Not just the handful of edubloggers who were my new community, but all kinds of people. The entries on the Wikipedia page exploded in a matter of days. Every time Blackboard’s Matt Small gave a statement to some news outlet, I was asked to respond. I began getting invited to speak at conferences and association meetings for organizations that I never even knew existed before. Before I knew it, my picture was in freakin’ USA Today. e-Literate‘s readership suddenly went off the charts. In a weird way, I owe the popularity of the blog and the trajectory of my career to Blackboard and Matt Small.

And with that, I finally found my purpose. I won’t pretend that the community outrage and eventual outcome of the patent fight were mostly due to me—there were many, many people fighting hard, not the least of which were John Baker and Desire2Learn—but I could tell that I was having an impact, in part because of the ferocity with which Matt Small attempted to get me into trouble with my employers. With the blog, I could make things happen. I could address systemic issues. It isn’t a good vehicle for everything, but it works for some things. That’s why, more often than not, the best question to ask yourself when reading one of my blog posts is not “What is Michael really trying to say?” but “What is Michael really trying to do?” A lot of the time, I write to try to influence people to take (or not take) a particular course of action. Sometimes it’s just one or a couple of particular people who I have in mind. Other times it may be several disparate groups. For me, the blog is a tool for improving education, first and foremost. Improvement only happens when people take action. Therefore, saying the right things isn’t enough. If my writing is to be worth anything, it has to catalyze people to do the right things.

Of course, it doesn’t always work. Once Blackboard gave up on their patent assertion, I tried to rally colleges and universities to take steps to protect against educational patent assertion in the future. There was very little interest. Why? For starters, it was easier for them to vilify Blackboard than it was to confront the much more complex reality that our patent system itself is deeply flawed. But also, a lot of it was that the universities that were in the best position to take affirmative steps harbored fantasies of being Stanford and owning a piece of the next Google. Addressing the edupatent problem in a meaningful way would have been deeply inconvenient for those ambitions and forced them to think hard about their intellectual property transfer policies. With the immediate threat over, there was no appetite for introspection on college campuses. The patent suit was dropped, Michael Chasen eventually left the company, Matt Small was moved into another role, and life has gone on. I suspect that somewhere in some university startup incubator is a student who was still in middle school when the edupatent war was going on and is filing patent applications for a “disruptive” education app today. Cue the teaser for the sequel, “Lawyers for the Planet of the Apes.”

Meanwhile, my blogging had raised my profile enough to get me out of SUNY and land me a couple of other jobs, both of which taught me a great deal about the larger systemic context and challenges of ed tech but neither of which turned out to be a long-term home for me (which I knew was likely to be the case at the time that I took them). But at the second job in particular, I got too busy with work to blog as regularly as I wanted to. It really bothered me that I had built up a platform that could make a difference and was largely unable to do anything with it. So I decided to try to turn it into a group blog. The blogosphere had changed by then. The power law had really taken hold. There were a handful of bloggers who got most of the attention, and it was getting harder for new voices to break in. So I decided to invite people who maybe didn’t (yet) have the same platform that I did but who regularly taught me important things through their writing to come and blog on e-Literate, writing whatever they liked, whenever they liked, however often they liked. No strings attached. I’m proud to have posts here from people like Audrey Watters, Bill Jerome, David White, Kim Thanos, and Laura Czerniewicz, among others. Most of the people I invited wrote one or a few posts and then moved on to other things. Which was fine. I wasn’t inviting them because I wanted to build up e-Literate. I was inviting them because I wanted to expose their good work to more people. That’s something that we still try to do whenever we can. For example, the analysis in which Mike Caulfield raised doubts about some of the Purdue Course Signals research was hugely important to the field of learning analytics. I’m proud to have had the opportunity to draw attention to it.

Like I said, most of the bloggers wrote a few pieces and then moved on. Most. One of them just kept hanging around, like a relative you invite to dinner who never gets the hint when it’s time to leave. As with many of the others, I had not really met Phil Hill before and mainly knew him through his writing. Before long, he was writing more blog posts on e-Literate than I was. And—please don’t tell him I told you this—I love his writing. Phil is more of a natural analyst than I am. He has a head for details that may seem terribly boring in and of themselves but often turn out to have important implications. Whether he is digging through discrepancies on employee numbers to call BS on D2L’s claims of hypergrowth (and therefore their rationale for all the investment money and debt they are taking on) or collaborating with WCET’s Russ Poulin on an analysis of how the Federal government’s IPEDS figures are massively misreporting the size of online learning programs, he has a knack for surfacing those implications. At the same time, he shares my constitutional inability to restrain myself from saying something when I see something that I think is wrong. This is what, for example, led him to file a public records request for information that definitively showed how few students Cal State Online was reaching for all the money that was spent on the program. For the record, Cal State is a former consulting client of Phil’s. In fact, I’m pretty sure that he created the famous LMS market share squid diagram while consulting for Cal State. As consultants in our particular niche, any critical post that we write about just about anyone runs the risk of alienating a potential client. (More on that in a bit.) Anyway, Phil is now co-publisher of e-Literate. The blog is every bit as much his as it is mine.

And so e-Literate continues to evolve. When I look at Phil’s list of our top 20 posts from 2014, it strikes me that there are a few things we are trying to do with our writing that I think are fairly unusual in ed tech reporting and analysis at the moment:

  • We provide critical analysis and long-form reporting on ed tech companies. There are lots of good pieces written by academic bloggers on ed tech products or the behavior of ed tech companies, but many of them are essentially cultural studies-style critiques of either widely reported news items or personal experiences. There’s nothing wrong with that, but it doesn’t give us the whole picture without some supplementation. On the other hand, the education news outlets break stories but don’t do a lot of in-depth analysis. Because Phil and I have eclectic backgrounds, we have some insight into how these companies work that academics or even reporters often don’t. We’ve been doing this long enough that we have a lot of contacts who are willing to talk to us so, even though we’re not in the business of breaking stories, we sometimes get important details that others don’t. Also, as you can tell from this blog post (if you’ve made it this far), we’re not afraid of writing long pieces.
  • We provide critical analysis and long-form reporting on colleges’ and universities’ (mis)adventures in ed tech. One of the things that really bugs me about the whole ed tech blogging and reporting world is that some of the most ferocious critics of corporate misbehavior are often strangely muted on the dysfunction of colleges and universities and completely silent on the dysfunction of faculty. I’m as proud of our work digging into the back room deals of school administrators that circumvent faculty governance or the ways in which faculty behavior impedes progress in areas like better learning platforms or OER as I am of our analysis of corporate malfeasance.
  • We demystify. I was particularly honored to be invited to write a piece on adaptive learning for the American Federation of Teachers. The AFT tends to take a skeptical view of ed tech, so I took their invitation as validation that at least some of the writing we do here is of as much value to the skeptics as to the enthusiasts. When I write a piece about Pearson and I get compliments on it from both people inside the company and people who despise the company, I know that I’ve managed to explain something clearly in a way that clarifies while letting readers make their own judgments. A lot of the coverage of ed tech tends to be either reflexively positive or reflexively negative, and in neither case do we get a lot of details about what the product is, how it actually works, and how people are using it in practice.

One other thing that I feel good about on e-Literate and that I am completely amazed by is our community of commenters. We frequently get 5 or 10 comments on a given post (either in the WordPress comments thread or on Google+), and it’s not terribly uncommon for us to get 50 or even 100 comments on a post. And yet, I can count on one hand the number of times that we’ve ever had personal attacks or unproductive behavior from our commenters. I have no idea why this is so and take no credit for it. Even after ten years, I can’t predict which blog posts will generate a lot of discussion and which ones will not. It’s just a magic thing that happens sometimes. I’m still surprised and grateful every time that it does.

But of all the astonishing, wonderful things that have happened to me because of the blog, one of the most astonishing and wonderful is the way that it turned into a fulfilling job. When Phil asked me to join him as a consultant two years ago, I frankly didn’t give high odds that we would be in business for very long. I thought the most likely scenario was that we would fail, hopefully in an interesting way, and have some fun in the process. (Please don’t tell Phil I said that either.) But we have been not only pretty consistently busy with work but also growing the business about as fast as we would want it to grow, despite an almost complete lack of sales or marketing effort on our part. The overwhelming majority of our work comes to us through people who read our blog, find something helpful in what we wrote, and contact us to see if we can help more. We’ve made it a policy to mostly not blog about our consulting except where we need to make conflict-of-interest disclosures, but sometimes I wonder if that’s the right thing to do. The tag line of e-Literate is “What We Are Learning About Online Learning…Online”, and a lot of what we are learning comes from our jobs. If anything, that is more true now than ever, given that so much of our work springs directly from our blogging. Our clients tend to hire us to help them with problems related to issues that we have cared about enough to write about. We also seem to gain more clients than we lose by writing honestly and critically, and our relationships with our clients are better because of it. People who come to us for help expect us to be blunt and are not surprised or offended when we offer them advice which is critical of the way they have been doing things.

Honestly, this is the most fulfilled that I have felt, professionally, since I left the classroom. I will go back to teaching at some point before I retire, but in the meantime I feel really good about what I’m doing for the first time in a long time. I get to work with schools, foundations, and companies on interesting and consequential education problems—and increasingly on systemic and cultural problems. I get to do a lot of it in an open way, with people who I like and respect. I get to speak my mind about the things I care about without fear that it will get me in (excessive) trouble. And I even get paid.

Who knew that such a thing is possible?

The post A 2014 (Personal) Blogging Retrospective appeared first on e-Literate.

TIMESTAMPS and Presentation Variables

Rittman Mead Consulting - 4 hours 28 min ago

TIMESTAMPS and Presentation Variables can be some of the most useful tools a report creator can use to build robust, repeatable reports while maximizing user flexibility.  I intend to transform you into an expert with these functions, and by the end of this page you will certainly be able to impress your peers and managers; you may even impress Angus MacGyver.  In this example we will create a report that displays a year over year analysis for any rolling number of periods, by week or month, from any date in time, all determined by the user.  This entire document will only use values from a date and revenue field.

Final Month DS

The TIMESTAMP is an invaluable function that allows a user to define report limits based on a moving target. If the goal of your report is to display Month-to-Date, Year-to-Date, rolling month or truly any non-static period in time, the TIMESTAMP function will allow you to get there.  Often users want to know what a report looked like at some previous point in time; to provide that level of flexibility, TIMESTAMPS can be used in conjunction with Presentation Variables.

To create robust TIMESTAMP functions you will first need to understand how the TIMESTAMP works. Take the following example:

Filter Day -7 DS

Here we are saying we want to include all dates greater than or equal to 7 days before the current date.

  • The first argument, SQL_TSI_DAY, defines the TimeStamp Interval (TSI). This means that we will be working with days.
  • The second argument determines how many of that interval we will be moving, in this case -7 days.
  • The third argument defines the starting point in time, in this example, the current date.

So in the end we have created a functional filter making Date >= 1 week ago, using a TIMESTAMP that subtracts 7 days from today.
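Since the screenshot may not render in a feed reader, the filter described above reads roughly like this in text form (the “Time”.”Date” column follows the post’s other examples):

```sql
"Time"."Date" >= TIMESTAMPADD(SQL_TSI_DAY, -7, CURRENT_DATE)
```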

Results -7 Days DS

Note: it is always a good practice to include a second filter giving an upper limit like “Time”.”Date” < CURRENT_DATE. Depending on the data that you are working with you might bring in items you don’t want or put unnecessary strain on the system.

We will now start to build this basic filter into something much more robust and flexible.

To start, when we subtracted 7 days in the filter above, let’s imagine that the goal of the filter was to always include dates >= the first of the month. In this scenario, we can use the DAYOFMONTH() function. This function returns the calendar day of any date. This is useful because we can get the first of the month from any date by simply subtracting DAYOFMONTH() from that date and adding 1.

Our new filter would look like this:

DayofMonth DS

For example if today is December 18th, DAYOFMONTH(CURRENT_DATE) would equal 18. Thus, we would subtract 18 days from CURRENT_DATE, which is December 18th, and add 1, giving us December 1st.
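Spelled out in text, one plausible reading of the filter shown in the screenshot above is:

```sql
"Time"."Date" >= TIMESTAMPADD(SQL_TSI_DAY, (-1 * DAYOFMONTH(CURRENT_DATE)) + 1, CURRENT_DATE)
```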

MTD Dates DS

(For a list of other similar functions like DAYOFYEAR, WEEKOFYEAR etc. click here.)

To make this even better, instead of using CURRENT_DATE you could use a prompted value with the use of a Presentation Variable (for more on Presentation Variables, click here). If we call this presentation variable pDate, for prompted date, our filter now looks like this:

pDate DS

A best practice is to use default values with your presentation variables so you can run the queries you are working on from within your analysis. To add a default value all you do is add the value within braces at the end of your variable. We will use CURRENT_DATE as our default, @{pDate}{CURRENT_DATE}.  We will refer to this filter later as Filter 1.

{Filter 1}:

pDateCurrentDate DS
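In text form, Filter 1 would read roughly as follows; the only change from the previous version is that CURRENT_DATE has been replaced by the pDate presentation variable with CURRENT_DATE as its default:

```sql
"Time"."Date" >= TIMESTAMPADD(SQL_TSI_DAY, (-1 * DAYOFMONTH(@{pDate}{CURRENT_DATE})) + 1, @{pDate}{CURRENT_DATE})
```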

As you can see, the filter is starting to take shape. Now let’s say we are going to always be looking at a date range of the most recent completed 6 months. All we would need to do is create a nested TIMESTAMP function. To do this, we will “wrap” our current TIMESTAMP with another that will subtract 6 months. It will look like this:

Month -6 DS

Now we have a filter that is greater than or equal to the first day of the month of any given date (default of today) 6 months ago.

Month -6 Result DS
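As a sketch of that nested version, the inner TIMESTAMPADD finds the first of the month and the outer one steps back 6 months:

```sql
"Time"."Date" >= TIMESTAMPADD(SQL_TSI_MONTH, -6,
    TIMESTAMPADD(SQL_TSI_DAY, (-1 * DAYOFMONTH(@{pDate}{CURRENT_DATE})) + 1, @{pDate}{CURRENT_DATE}))
```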

To take this one step further, you can even allow the users to determine the amount of months to include in this analysis by making the value of 6 a presentation variable; we will call it “n” with a default of 6, @{n}{6}.  We will refer to the following filter as Filter 2:

{Filter 2}:

n DS
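In text form, Filter 2 swaps the literal 6 for the n presentation variable:

```sql
"Time"."Date" >= TIMESTAMPADD(SQL_TSI_MONTH, (-1 * @{n}{6}),
    TIMESTAMPADD(SQL_TSI_DAY, (-1 * DAYOFMONTH(@{pDate}{CURRENT_DATE})) + 1, @{pDate}{CURRENT_DATE}))
```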

For more on how to create a prompt with a range of values by altering a current column, like we want to do to allow users to select a value for n, click here.

Our TIMESTAMP function is now fairly robust and will give us any date greater than or equal to the first day of the month from n months ago from any given date. Now we will see what we just created in action by creating date ranges to allow for a Year over Year analysis for any number of months.

Consider the following filter set:

 Robust1 DS

This appears to be pretty intimidating but if we break it into parts we can start to understand its purpose.

Notice we are using the exact same filters from before (Filter 1 and Filter 2).  What we have done here is filtered on two time periods, separated by the OR statement.

The first date range defines the period as being the most recent complete n months from any given prompted date value, using a presentation variable with a default of today, which we created above.

The second time period, after the OR statement, is the exact same as the first only it has been wrapped in another TIMESTAMP function subtracting 1 year, giving you the exact same time frame for the year prior.
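Reconstructed from that description (the upper bounds follow the earlier note about capping the range at the prompted date), the filter set reads roughly:

```sql
-- current period: first of the month n months ago, up to the prompted date
("Time"."Date" >= TIMESTAMPADD(SQL_TSI_MONTH, (-1 * @{n}{6}),
     TIMESTAMPADD(SQL_TSI_DAY, (-1 * DAYOFMONTH(@{pDate}{CURRENT_DATE})) + 1, @{pDate}{CURRENT_DATE}))
 AND "Time"."Date" < @{pDate}{CURRENT_DATE})
OR
-- same window, wrapped in one more TIMESTAMPADD to shift it back one year
("Time"."Date" >= TIMESTAMPADD(SQL_TSI_YEAR, -1,
     TIMESTAMPADD(SQL_TSI_MONTH, (-1 * @{n}{6}),
         TIMESTAMPADD(SQL_TSI_DAY, (-1 * DAYOFMONTH(@{pDate}{CURRENT_DATE})) + 1, @{pDate}{CURRENT_DATE})))
 AND "Time"."Date" < TIMESTAMPADD(SQL_TSI_YEAR, -1, @{pDate}{CURRENT_DATE}))
```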

YoY Result DS

This allows us to create a report that can run a year over year analysis for a rolling n month time frame determined by the user.

A note on nested TIMESTAMPS:

You will always want to create nested TIMESTAMPS with the smallest interval first. Due to syntax, this will always be the furthest to the right. Then you will wrap intervals as necessary. In this case our smallest increment is day, wrapped by month, wrapped by year.

Now we will start with some more advanced tricks:

  • Instead of using CURRENT_DATE as your default value, use yesterday since most data are only as current as yesterday.  If you use real time or near real time reporting, using CURRENT_DATE may be how you want to proceed. Using yesterday will be valuable especially when pulling reports on the first day of the month or year, when you generally want the entire previous time period rather than the empty beginning of a new one.  So, to implement, wherever you have @{pDate}{CURRENT_DATE} replace it with @{pDate}{TIMESTAMPADD(SQL_TSI_DAY,-1,CURRENT_DATE)}
  • Presentation Variables can also be used to determine whether to display year over year values by month or by week by inserting a variable into your SQL_TSI_MONTH and DAYOFMONTH statements.  Change MONTH to a presentation variable: SQL_TSI_@{INT}{MONTH} and DAYOF@{INT}{MONTH}, where INT is the name of our variable.  This will require you to create a dummy variable in your prompt to allow users to select either MONTH or WEEK.  You can try something like this: CASE MOD(DAY(“Time”.”Date”),2) WHEN 0 THEN ‘WEEK’ WHEN 1 THEN ‘MONTH’ END



DropDown DS

In order for our interaction between Month and Week to run smoothly we have to make one more consideration.  If we are to take the date December 1st, 2014 and subtract one year we get December 1st, 2013; however, if we take the first day of this week, Sunday December 14, 2014 and subtract one year we get Saturday December 14, 2013.  In our analysis this will cause an extra partial week to show up for prior years.  To get around this we will add a case statement: if ‘@{INT}{MONTH}’ = ‘WEEK’, subtract 52 weeks from the first of the week; otherwise subtract 1 year from the first of the month.
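As a sketch of that conditional shift (here window_start is just a stand-in for the nested TIMESTAMPADD expression that computes the start of the current window):

```sql
CASE WHEN '@{INT}{MONTH}' = 'WEEK'
     THEN TIMESTAMPADD(SQL_TSI_WEEK, -52, window_start)
     ELSE TIMESTAMPADD(SQL_TSI_YEAR, -1, window_start)
END
```

Subtracting 52 whole weeks keeps the prior-year window aligned to the same day of week, which avoids the extra partial week described above.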

Our final filter set will look like this:

Final Filter DS

With the use of these filters and some creative dashboarding you can end up with a report that easily allows you to view a year over year analysis from any date in time for any number of periods either by month or by week.

Final Month Chart DS

Final Week Chart DS

That really got out of hand in a hurry! Surely, this will impress someone at your work, or even Angus MacGyver, if for nothing else than that they won’t understand it, but hopefully, now you do!

Also, a colleague of mine Spencer McGhin just wrote a similar article on year over year analyses using a different approach. Feel free to review and consider your options.


Calendar Date/Time Functions

These are functions you can use within OBIEE and within TIMESTAMPS to extract the information you need.

  • Current_Date
  • Current_Time
  • Current_TimeStamp
  • Day_Of_Quarter
  • DayName
  • DayOfMonth
  • DayOfWeek
  • DayOfYear
  • Hour
  • Minute
  • Month
  • Month_Of_Quarter
  • MonthName
  • Now
  • Quarter_Of_Year
  • Second
  • TimestampAdd
  • TimestampDiff
  • Week_Of_Quarter
  • Week_Of_Year
  • Year

Back to section


Presentation Variables

The only way you can create variables within the presentation side of OBIEE is with the use of presentation variables. They can only be defined by a report prompt. Any value selected by the prompt will then be sent to any references of that variable throughout the dashboard page.

In the prompt:

Pres Var DS

From the “Set a variable” dropdown, select “Presentation Variable”. In the textbox below the dropdown, name your variable (named “n” above).

When calling this variable in your report, use the syntax @{n}{default}

If your variable is a string make sure to surround the variable in single quotes: ‘@{CustomerName}{default}’

Also, when using your variable in your report, it is good practice to assign a default value so that you can work with your report before publishing it to a dashboard. For variable n, if we want a default of 6 it would look like this: @{n}{6}

Presentation variables can be called in filters, formulas and even text boxes.

Back to section


Dummy Column Prompt

For situations where you would like users to select a numerical value for a presentation variable, like we do with @{n}{6} above, you can convert something like a date field into values up to 365 by using the function DAYOFYEAR(“Time”.”Date”).

As you can see we are returning the SQL Choice List Values of DAYOFYEAR(“Time”.”Date”) <= 52.  Make sure to include an ORDER BY statement to ensure your values are well sorted.

Dummy Script DS
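In text form, the choice-list SQL behind such a prompt might read roughly like this (the subject area name is a placeholder; substitute your own):

```sql
SELECT DAYOFYEAR("Time"."Date")
FROM "Sales - Subject Area"
WHERE DAYOFYEAR("Time"."Date") <= 52
ORDER BY 1
```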

Back to Section

Categories: BI & Warehousing

New Version Of XPLAN_ASH Utility

Randolf Geist - Sun, 2014-12-21 16:40
A new version 4.2 of the XPLAN_ASH utility is available for download.

As usual the latest version can be downloaded here.

There were no hugely significant changes in this release; mainly, some new sections related to I/O figures were added.

One thing to note is that some of the sections in recent releases may require a linesize larger than 700, so the script's settings have been changed to 800. If you use corresponding settings for CMD.EXE under Windows for example you might have to adjust accordingly to prevent ugly line wrapping.

Here are the notes from the change log:

- New sections "Concurrent activity I/O Summary based on ASH" and "Concurrent activity I/O Summary per Instance based on ASH" to see the I/O activity summary for concurrent activity

- Bug fixed: When using MONITOR as the source for searching for the most recent SQL_ID executed by a given SID, no SQL_ID was found due to some filtering on date. This is now fixed

- Bug fixed: In RAC GV$ASH_INFO should be used to determine available samples

- The "Parallel Execution Skew ASH" indicator is now weighted. So far, any activity level per plan line and sample below the actual DOP counted as one, and the same if the activity level was above; the sum of these "ones" was then set relative to the total number of samples the plan line was active to determine the "skewness" indicator.

Now the actual difference between the activity level and the actual DOP is calculated and compared to the total number of samples active times the actual DOP. This should give a better picture of the actual impact the skew has on the overall execution

- Most queries now use a NO_STATEMENT_QUEUING hint for environments where AUTO DOP is enabled and the XPLAN_ASH queries could get queued otherwise

- The physical I/O bytes on execution plan line level taken from "Real-Time SQL Monitoring" now have the more appropriate headings "ReadB" and "WriteB"; I never liked the former misleading "Reads"/"Writes" headings

Installing VirtualBox on Mint with a CentOS Guest

The Anti-Kyte - Sun, 2014-12-21 12:48

Christmas is almost upon us. Black Friday has been followed by Small Business Saturday and Cyber Monday.
The rest of the month obviously started on Skint Tuesday.
Fortunately for all us geeks, Santa Claus is real. He’s currently posing as Richard Stallman.
I mean, look at the facts. He’s got the beard, he likes to give stuff away for free, and he most definitely has a “naughty” list.

Thanks to Santa Stallman and others like him, I can amuse myself in the Holidays without putting any more strain on my Credit Card.

My main machine is currently running Mint 17 with the Cinnamon desktop. Whilst I’m very happy with this arrangement, I would like to play with other Operating Systems, but without all the hassle of installing/uninstalling etc.
Now, I do have Virtualbox on a Windows partition, but I would rather indulge my OS promiscuity from the comfort of Linux… sorry Santa – GNU/Linux.

So what I’m going to cover here is :

  • Installing VirtualBox on a Debian-based distro
  • Installing CentOS as a Guest Operating System
  • Installing VirtualBox Guest Additions Drivers on CentOS

I’ve tried to stick to the command line for the installation steps for VirtualBox, so they should be generic to any Debian-based host.


Throughout this post I’ll be referring to the Host OS and the Guest OS, as well as Guest Additions. These terms can be defined as :

  • Host OS – the Operating System of the physical machine that Virtualbox is running on ( Mint in my case)
  • Guest OS – the Operating System of the virtual machine that is running in VirtualBox (CentOS here)
  • Guest Additions – drivers that are installed on the Guest OS to enable file sharing, viewport resizing etc
Options for getting VirtualBox

Before I get into the installation steps it’s probably worth explaining why I chose the method I did for getting VirtualBox in the first place.
You can get VirtualBox from a repository, instructions for which are on the VirtualBox site itself. However, the version currently available (4.3.12 at the time of writing) does not play nicely with Red Hat-based guests when it comes to Guest Additions. This issue is fixed in the latest version of VirtualBox (4.3.20), which can be downloaded directly from the site. Therefore, this is the approach I ended up taking.

Right, now that’s out of the way…

Installing VirtualBox Step 1 – Prepare the Host

Before we download VirtualBox, we need to ensure that the dkms package is installed and up to date. So, fire up good old terminal and type :

sudo apt-get install dkms

Running this, I got :

Reading package lists... Done
Building dependency tree       
Reading state information... Done
dkms is already the newest version.
0 to upgrade, 0 to newly install, 0 to remove and 37 not to upgrade.

One further step is to make sure that your system is up to date. Note that apt-get update only refreshes the package lists, so you need the upgrade step as well to actually apply any updates. For Debian-based distros, this should do the job :

sudo apt-get update
sudo apt-get upgrade
Step 2 – Get the software

Now, head over to the VirtualBox Downloads Page and select the appropriate file.

NOTE – you will have the choice of downloading either the i386 or the AMD64 versions.
The difference is simply that i386 is 32-bit and AMD64 is 64-bit.

In my case, I’m running a 64-bit version of Mint (which is based on Ubuntu), so I selected :

Ubuntu 13.04 (“Raring Ringtail”) / 13.10 (“Saucy Salamander”) / 14.04 (“Trusty Tahr”) / 14.10 (“Utopic Unicorn”) – the AMD64 version.

NOTE – if you’re not sure whether you’re running on 32 or 64-bit, simply type the following in a terminal session :

uname -i

If this command returns x86_64 then you’re running a 64-bit version of your OS. If it returns i686, then you’re running a 32-bit version.
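If you would rather script the check than eyeball the output, a small wrapper along these lines should work on most distros. This is my own sketch, not from the original post, and it keys off uname -m rather than uname -i:

```shell
# Map a machine hardware name (as printed by `uname -m`) to a word size.
bits_for_arch() {
  case "$1" in
    x86_64|amd64|aarch64) echo "64-bit" ;;
    i386|i486|i586|i686|armv7l) echo "32-bit" ;;
    *) echo "unknown ($1)" ;;
  esac
}

bits_for_arch "$(uname -m)"
```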

A short time later, you’ll find that Santa has descended the chimney that is your browser, and in the Downloads folder that is your living room you have a present. Run…

ls -lh $HOME/Downloads/virtualbox*

… and you’ll find the shiny new :

-rw-r--r-- 1 mike mike 63M Dec  5 16:22 /home/mike/Downloads/virtualbox-4.3_4.3.20-96996~Ubuntu~raring_amd64.deb
Step 3 – Installation

To virtually unwrap this virtual present….

cd $HOME/Downloads
sudo dpkg -i virtualbox-4.3_4.3.20-96996~Ubuntu~raring_amd64.deb

On running this the output should be similar to :

(Reading database ... 148385 files and directories currently installed.)
Preparing to unpack virtualbox-4.3_4.3.20-96996~Ubuntu~raring_amd64.deb ...
Stopping VirtualBox kernel modules ...done.
Unpacking virtualbox-4.3 (4.3.20-96996~Ubuntu~raring) over (4.3.12-93733~Ubuntu~raring) ...
Setting up virtualbox-4.3 (4.3.20-96996~Ubuntu~raring) ...
Installing new version of config file /etc/init.d/vboxdrv ...
addgroup: The group `vboxusers' already exists as a system group. Exiting.
Stopping VirtualBox kernel modules ...done.
Uninstalling old VirtualBox DKMS kernel modules ...done.
Trying to register the VirtualBox kernel modules using DKMS ...done.
Starting VirtualBox kernel modules ...done.
Processing triggers for ureadahead (0.100.0-16) ...
Processing triggers for hicolor-icon-theme (0.13-1) ...
Processing triggers for shared-mime-info (1.2-0ubuntu3) ...
Processing triggers for gnome-menus (3.10.1-0ubuntu2) ...
Processing triggers for desktop-file-utils (0.22-1ubuntu1) ...
Processing triggers for mime-support (3.54ubuntu1) ...

Note – as this was not my first attempt at installing VirtualBox, there are some feedback lines here that you probably won’t get.

Anyway, once completed, you should have a new VirtualBox icon somewhere in your menu.
In my case (Cinnamon desktop on Mint 17, remember), it’s appeared in the Administration Menu :


As part of the installation, a group called vboxusers has now been created.
You’ll want to add yourself to this group so that you can access the shared folders, which is something I’ll come onto in a bit. For now though…

sudo usermod -a -G vboxusers username

… where username is your user.

Now, finally, we’ve set it up and can start playing. Click on the menu icon. Alternatively, if you can’t find the icon, or if you just prefer the terminal, the following command should have the same effect :

virtualbox &
Either way, you should now see this :


One present unwrapped, assembled and ready to play with…and you don’t even need to worry about cleaning up the discarded wrapping paper.

Installing the CentOS Guest

I fancy having a play with a Red Hat-based distro for a change. CentOS fits the bill perfectly.
Additionally, I happen to have an iso lying around on a cover disk.
If you’re not so lucky, you can get the latest version of CentOS (currently 7) from the website here.

I’ve created a directory called isos and put the CentOS iso there :

ls -lh CentOS*
-rw------- 1 mike mike 687M Jul  9 22:53 CentOS-7.0-1406-x86_64-livecd.iso

Once again, I’ve downloaded the 64-bit version, as can be seen from the x86_64 in the filename.

Now for the installation.

Open VirtualBox and click New :

In the Name and operating system window enter :

Name : CentOS7
Type : Linux
Version : Red Hat (64 bit)


In the Memory Size Window :

Settings here depend on the resources available to the host machine and what you want to use the VM for.
In my case, my host machine has 8GB RAM.
Also, I want to install Oracle XE on this VM.
Given that, I’m going to allocate 2GB to this image :


In the Hard Drive Window :

I’ve got plenty of space available so I’ll just accept the default to Create a virtual hard drive of 8GB now.

Hard Drive File Type :

Accept the default ( VDI (VirtualBox Disk Image))

and hit Next…

Storage on physical hard drive :

I’ll leave this as the default – Dynamically allocated
Click Next…

File location and size :

I’ve left the size at the default…


I now have a new VirtualBox image :
The vdi file created to act as the VM’s hard drive is in my home directory under VirtualBox VMs/CentOS7


Now to point it at the iso file we want to use.

Hit Start and ….



You should now see the chosen .iso file identified as the startup disk :


Now hit start….

Don’t worry too much about the small viewport for now. Guest Additions should resolve that issue once we get it installed.
You probably do need to be aware of the fact that you can transfer the mouse pointer between the Guest and Host by holding down the right CTRL key on your keyboard and left-clicking the mouse.
This may well take a bit of getting used to at first.

Anyway, once your guest knows where your mouse is, the first thing to do is actually install CentOS into the VDI. At the moment, remember, we’re just running a Live Image.

So, click the Install to Hard Drive icon on the CentOS desktop and follow the prompts as normal.

At the end of the installation, make sure that you’ve ejected your virtual CD from the drive.
To do this :

  1. Get the Host to recapture the mouse (Right CTRL + left-click)
  2. Go to the VirtualBox Menu on the VDI and select Devices/CD/DVD Devices/Remove disk from virtual drive


Now re-start CentOS.

Once it comes back, we’re ready to round things off by…

Installing Guest Additions

It’s worth noting that when CentOS starts, Networking is disconnected by default. To enable, simply Click the Network icon on the toolbar at the top of the screen and switch it on :


We need to make sure that the packages are up to date on CentOS in the same way as we did for the Host at the start of all this so…

sudo yum update

Depending on how recent the iso file you used is, this could take a while !

We also need to install further packages for Guest Additions to work…

sudo yum install gcc
sudo yum install kernel-devel-3.10.0-123.9.3.el7.x86_64

Note – it’s also recommended that dkms is installed on “Fedora” (i.e. Red Hat) based Guests. However, when I ran …

sudo yum install dkms

I got an error saying “No package dkms available”.
So, I’ve decided to press on regardless…

In the VirtualBox Devices Menu, select Insert Guest Additions CD Image

You should then see a CD icon on your desktop :


The CD should autorun on load.

You’ll see a Virtual Box Guest Additions Installation Terminal Window come up that looks something like this :

Verifying archive integrity... All good.
Uncompressing VirtualBox 4.3.20 Guest Additions for Linux............
VirtualBox Guest Additions installer
Removing installed version 4.3.12 of VirtualBox Guest Additions...
Copying additional installer modules ...
Installing additional modules ...
Removing existing VirtualBox non-DKMS kernel modules       [  OK  ]
Building the VirtualBox Guest Additions kernel modules
Building the main Guest Additions module                   [  OK  ]
Building the shared folder support module                  [  OK  ]
Building the OpenGL support module                         [  OK  ]
Doing non-kernel setup of the Guest Additions              [  OK  ]
Starting the VirtualBox Guest Additions                    [  OK  ]
Installing the Window System drivers
Installing X.Org Server 1.15 modules                       [  OK  ]
Setting up the Window System to use the Guest Additions    [  OK  ]
You may need to restart the hal service and the Window System (or just restart
the guest system) to enable the Guest Additions.

Installing graphics libraries and desktop services componen[  OK  ]

Eject the CD and re-start the Guest.

Now, you should see CentOS in its full-screen glory.

Tweaks after installing Guest Additions

First off, let’s make things run a bit more smoothly on the Guest :

On the Host OS in VirtualBox Manager, highlight the CentOS7 image and click on Settings.
Go to Display.

Here, we can increase the amount of Video Memory from the default 12MB to 64MB.
We can also check Enable 3D Acceleration :


Next, in the General Section, click on the Advanced Tab and set the following :

Shared Clipboard : Bidirectional
Drag’n’Drop : Bidirectional


You should now be able to cut-and-paste from Guest to host and vice-versa.

Shared Folders

At some point you’re likely to want to either put files onto or get files from your Guest OS.

To do this :

On the Host

I’ve created a folder to share on my Host system :

mkdir $HOME/Desktop/vbox_shares/centos

Now, in VirtualBox Manager, back in the Settings for CentOS, open the Shared Folders section.

Click the Add icon


Select the folder and make it Auto-mount


On the Guest

In earlier versions of VirtualBox, getting the shared folders to mount was, well, a bit of messing about.
Happily, things are now quite a bit easier.

As we’ve set the shared folder to Auto-mount, it’s mounted on the Guest under

/media/sf_sharename

…where sharename is the name of the share we assigned to it on the Host. So, the shared folder I created exists as :

/media/sf_centos

In order to gain full access to this folder, we simply need to add our user to the vboxsf group that was created when Guest Additions was installed :

sudo usermod -a -G vboxsf username

…where username is your user on the Guest OS.

Note – you’ll need to logout and login again for this change to take effect, but once you do, you should have access to the shared folder.

Right, that should keep me out of trouble (and debt) for a while, as well as offering a distraction from all the things I know I shouldn’t eat…but always do.
That reminds me, where did I leave my nutcracker ?

Filed under: Linux, VirtualBox Tagged: centos 7 guest, copy and paste from clipboard, guest additions, how to tell if your linux os is 32-bit or 64-bit, mint 17 host, shared folders, uname -i, VirtualBox

Digital Delivery "Badge"

Bradley Brown - Sun, 2014-12-21 00:44
At InteliVideo we have come to understand that we need to do everything we can to help our clients sell more digital content. It seems obvious that consumers want to watch videos on devices like their phones, tablets, laptops, and TVs, but it's not so obvious to everyone. They have been using DVDs for a number of years - and likely VHS tapes before that. We believe it’s important for your customers to understand why they would want to purchase a digital product rather than a physical product (i.e. a DVD).
Better buttons drive sales.  Across all our apps and clients, we know we are going to need to really nail our asset delivery process with split tests and our button and banner catalog.  We've simplified the addition of a badge on a client's page: they effectively have to add four lines of HTML in order to add our digital delivery badge.
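The four lines end up looking something like this – the script URL, class name and data attribute below are illustrative placeholders, not InteliVideo's actual embed code:

```html
<!-- Illustrative placeholder markup, not the real embed snippet -->
<script src="https://cdn.example.com/intelivideo/badge.js"></script>
<a href="#" class="iv-digital-badge" data-client-id="YOUR_CLIENT_ID">
  <img src="https://cdn.example.com/intelivideo/watch-now-badge.png"
       alt="Watch Now on Any Device">
</a>
```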
Our clients can use any of the images that InteliVideo provides or we’re happy to provide an editable image file (EPS format) so they can make their own image.  Here are some of our badges that we created:
On our client's web page, it looks something like this:
The image above (Watch Now on Any Device) is the important component.  This is the component that our clients are placing somewhere on their web page(s).  When this is clicked, the existing page will be dimmed and the lightbox will popup and display the “Why Digital” message:
What do your client's customers need to know about in order to help you sell more?

Log Buffer #402, A Carnival of the Vanities for DBAs

Pakistan's First Oracle Blog - Sat, 2014-12-20 18:39
This Log Buffer edition hits the ball out of the park by smashing yet another record, surfacing a unique collection of blog posts from various database technologies. Enjoy!!!

Oracle:

EM12c and the Optimizer Statistics Console.
OBIEE and ODI on Hadoop : Next-Generation Initiatives To Improve Hive Performance.
Oracle Bundle Patching.
Performance Issues with the Sequence NEXTVAL Call.

SQL Server:

GUIDs GUIDs everywhere, but how is my data unique?
Questions About T-SQL Transaction Isolation Levels You Were Too Shy to Ask.
Introduction to Advanced Transact SQL Stairway and Using the CROSS JOIN Operator.
Introduction to Azure SQL Database Scalability.
What To Do When the Import and Export Wizard Fails.

MySQL:

Orchestrator 1.2.9 GA released.
Making HAProxy 1.5 replication lag aware in MySQL.
Monitor MySQL Performance Interactively With VividCortex.
InnoDB’s multi-versioning handling can be Achilles’ heel.
Memory summary tables in Performance Schema in MySQL 5.7.

Also published here.
Categories: DBA Blogs

What an App Cost?

Bradley Brown - Sat, 2014-12-20 17:59
People will commonly ask me this question, which has a very wide range as the answer.  You can get an app built on oDesk for nearly free - i.e. $2000 or less.  Will it provide the functionality you need?  It might!  Do you need a website that does the same thing?  Do you need a database (i.e. something beyond the app) to store your data for your customers?

Our first round of apps at InteliVideo cost us $2,000-10,000 each to develop.  We spent a LOT of money on the backend server code.  Our first versions were pretty fragile (i.e. broke fairly easily) and weren't very sexy.  We decided that we needed to revamp our apps from stem to stern...APIs to easy branding to UI.

Here's a look at our prior version.  Our customers (people who buy videos) aren't typically buying from more than 1 of our clients - yet.  But in the prior version I saw a list of all of the products I had purchased.  It's not a very sexy UI - just a simple list of videos:

When I drilled into a specific product, again I see a list of videos within the product:

I can download or play a video in a product:

Here's what it looks like for The Dailey Method:

Here's the new version demonstrating the branding for Chris Burandt.  I've purchased a yearly subscription that currently includes 73 videos.  I scroll (right not down) through those 73 videos here:

Or if I click on the title, I get to see a list of the videos in more detail:

Notice the colors (branding) are shown everywhere here.  I scrolled up to look through those videos:

Here's a specific video that talked about a technique to set your sled unstuck:

Here's what the app looks like when I'm a The Dailey Method customer.  Again, notice the branding everywhere:

Looking at a specific video and its details:

We built a native app for iOS (iPad, iPhone, iPod), Android, Windows and Mac, each with the same look, feel, functionality, etc.  This was a MAJOR undertaking!

The good news is that if you want to start a business and build an MVP (Minimally Viable Product) to see if there is actually a market for your product, you don't have to spend hundreds of thousands to do so...but you might have to later!

e-Literate Top 20 Posts For 2014

Michael Feldstein - Sat, 2014-12-20 12:17

I typically don’t write year-end reviews or top 10 (or 20) lists, but I need to work on our consulting company finances. At this point, any distraction seems more enjoyable than working in QuickBooks.

We’ve had a fun year at e-Literate, and one recent change is that we are now more willing to break stories when appropriate. We typically comment on ed tech stories a few days after the release, providing analysis and commentary, but there have been several cases where we felt a story needed to go public. In such cases (e.g. Unizin creation, Cal State Online demise, management changes at Instructure and Blackboard) we tend to break the news objectively, providing mostly descriptions and explanations, allowing others to provide commentary.

The following list is based on Jetpack stats on WordPress, which does not capture people who read posts through RSS feeds (we send out full articles through the feed). So the stats have a bias towards people who come to e-Literate for specific articles rather than our regular readers. We also tend to get longer-term readership of articles over many months, so this list also has a bias for articles posted a while ago.

With that in mind, here are the top 20 most read articles on e-Literate in terms of page views for the past 12 months along with publication date.

  1. Can Pearson Solve the Rubric’s Cube? (Dec 2013) – This article proves that people are willing to read a 7,000 word post published on New Year’s Eve.
  2. A response to USA Today article on Flipped Classroom research (Oct 2013) – This article is our most steady one, consistently getting around 100 views per day.
  3. Unizin: Indiana University’s Secret New “Learning Ecosystem” Coalition (May 2014) – This is the article where we broke the story about Unizin, based largely on a presentation at Colorado State University.
  4. Blackboard’s Big News that Nobody Noticed (Jul 2014) – This post commented on the Blackboard users’ conference and some significant changes that got buried in the keynote and much of the press coverage.
  5. Early Review of Google Classroom (Jul 2014) – Meg Tufano got pilot access to the new system and allowed me to join the testing; this article mostly shares Meg’s findings.
  6. Why Google Classroom won’t affect institutional LMS market … yet (Jun 2014) – Before we had pilot access to the system, this article described the likely market effects from Google’s new system.
  7. Competency-Based Education: An (Updated) Primer for Today’s Online Market (Dec 2013) – Given the sudden rise in interest in CBE, this article updated a 2012 post explaining the concept.
  8. The Resilient Higher Ed LMS: Canvas is the only fully-established recent market entry (Feb 2014) – Despite all the investment in ed tech and market entries, this article noted how stable the LMS market is.
  9. Why VCs Usually Get Ed Tech Wrong (Mar 2014) – This post combined references to “selling Timex knockoffs in Times Square” with a challenge to the application of disruptive innovation.
  10. New data available for higher education LMS market (Nov 2013) – This article called out the Edutechnica and ListEdTech sites with their use of straight data (not just sampling surveys) to clarify the LMS market.
  11. InstructureCon: Canvas LMS has different competition now (Jun 2014) – This was based on the Instructure users’ conference and the very different attitude from past years.
  12. Dammit, the LMS (Nov 2014) – This rant called out how the LMS market is largely following consumer demand from faculty and institutions.
  13. Why Unizin is a Threat to edX (May 2014) – This follow-on commentary tried to look at what market effects would result from Unizin introduction.
  14. State of the Anglosphere’s Higher Education LMS Market: 2013 Edition (Nov 2013) – This was last year’s update of the LMS squid graphic.
  15. Google Classroom: Early videos of their closest attempt at an LMS (Jun 2014) – This article shared early YouTube videos showing people what the new system actually looked like.
  16. State of the US Higher Education LMS Market: 2014 Edition (Oct 2014) – This was this year’s update of the LMS squid graphic.
  17. About Michael – How big is Michael’s fan club?
  18. What is a Learning Platform? (May 2012) – The old post called out and helped explain the general move from monolithic systems to platforms.
  19. What Faculty Should Know About Adaptive Learning (Dec 2013) – This was a reprint of invited article for American Federation of Teachers.
  20. Instructure’s CTO Joel Dehlin Abruptly Resigns (Jul 2014) – Shortly after the Instructure users’ conference, Joel resigned from the company.

Well, that was more fun than financial reporting!

The post e-Literate Top 20 Posts For 2014 appeared first on e-Literate.

Exadata Patching Introduction

The Oracle Instructor - Sat, 2014-12-20 10:24

These I consider the most important points about Exadata Patching:

Where is the most recent information?

MOS Note 888828.1 is your first read whenever you think about Exadata Patching

What is to patch with which utility?

Exadata Patching

Expect quarterly bundle patches for the storage servers and the compute nodes. The other components (InfiniBand switches, Cisco Ethernet switch, PDUs) are patched less frequently and are therefore not in the picture.

The storage servers have their software image (which includes firmware, OS and Exadata software) exchanged completely with the new one using patchmgr. The compute nodes get OS (and firmware) updates with a tool that accesses an Exadata yum repository. Bundle patches for the Grid Infrastructure and for the database software are applied with opatch.

Rolling or non-rolling?

This is the sensitive part! Technically, you can always apply the patches for the storage servers and the patches for compute node OS and Grid Infrastructure rolling, taking down only one server at a time. The RAC databases running on the Database Machine will be available during the patching. Should you do that?

Let’s focus on the storage servers first: Rolling patches are recommended only if you have ASM diskgroups with high redundancy or if you have a standby site to failover to in case. In other words: If you have a quarter rack without a standby site, don’t use rolling patches! That is because the DBFS_DG diskgroup that contains the voting disks cannot have high redundancy in a quarter rack with just three storage servers.

Okay, so you have a half rack or bigger. Expect one storage server patch to take about two hours. That adds up to 14 hours (for seven storage servers) of patching time with the rolling method. Make sure that management is aware of that before they decide on the strategy.

Now to the compute nodes: If the patch is RAC rolling applicable, you can do that regardless of the ASM diskgroup redundancy. If a compute node gets damaged during the rolling upgrade, no data loss will happen. On a quarter rack without a standby site, you put availability at risk because only two compute nodes are there and one could fail while the other is just down.

Why you will want to have a Data Guard Standby Site

Apart from the obvious reason for Data Guard – Disaster Recovery – there are several benefits associated with the patching strategy:

You can afford to do rolling patches with ASM diskgroups using normal redundancy and with RAC clusters that have only two nodes.

You can apply the patches on the standby site first and test it there – using the snapshot standby database functionality (and using Database Replay if you licensed Real Application Testing)

A patch set can be applied on the standby first and the downtime for end users can be reduced to the time it takes to do a switchover

A release upgrade can be done with a (Transient) Logical Standby, reducing again the downtime to the time it takes to do a switchover

I suppose this will be my last posting in 2014, so Happy Holidays and a Happy New Year to all of you :-)

Tagged: exadata
Categories: DBA Blogs

PeopleTools 8.54 Feature: Support for Oracle Database Materialized Views

Javier Delgado - Fri, 2014-12-19 17:04
One of the new features of PeopleTools 8.54 is the support of Oracle Database Materialized Views. In a nutshell, Materialized Views can be seen as a snapshot of a given view. When you query a Materialized View, the data is not necessarily accessed online, but instead it is retrieved from the latest snapshot. This can greatly contribute to improve query performance, particularly for complex SQLs or Pivot Grids.

Materialized Views Features
Apart from the performance benefits associated with them, one of the most interesting features of Materialized Views is how the data refresh is handled. Oracle Database supports two ways of refreshing data:

  • On Commit: data is refreshed whenever a commit takes place in any of the underlying tables. In a way, this method is equivalent to maintaining through triggers a staging table (the Materialized View) whenever the source table changes, but all this complexity is hidden from the developer. Unfortunately, this method is only available with join-based or single table aggregate views.

Although it has the benefit of retrieving almost-online information, normally you would use On Commit for views based on tables that do not change very often. Since the information is refreshed in the Materialized View every time a commit is made, insert, update and delete performance on the source tables will be affected.

Hint: You would normally use the On Commit method for views based on Control tables, not Transactional tables.
  • On Demand: data is refreshed on demand. This option is valid for all types of views, and implies that the Materialized View data is only refreshed when requested by the administrator. PeopleTools 8.54 include a page named Materialized View Maintenance where the on demand refreshes can be configured to be run periodically.

In case you choose the On Demand method, the data refresh can actually be done following two different methods:

  • Fast, which just refreshes the rows in the Materialized View affected by the changes made to the source records.

  • Full, which fully recalculates the Materialized View contents. This method is preferable when large volume changes are usually performed against the source records between refreshes. Also, this option is required after certain types of updates on the source records (ie: INSERT statements using the APPEND hint). Finally, this method is required when one of the source records is also a Materialized View and has been refreshed using the Full method.
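To illustrate the underlying Oracle mechanics, here is a minimal DDL sketch. The table and view names are hypothetical, and a FAST ON COMMIT join view also requires materialized view logs on the source tables, which are omitted here:

```sql
-- Join-based Materialized View refreshed whenever a commit touches the sources
CREATE MATERIALIZED VIEW MV_CUSTOMER_ORDERS
  REFRESH FAST ON COMMIT
AS
SELECT c.cust_id, c.cust_name, o.order_id, o.order_total
FROM   customers c
JOIN   orders o ON o.cust_id = c.cust_id;

-- On-demand refresh, which is what the Materialized View Maintenance page
-- ultimately schedules: 'F' = Fast, 'C' = Complete (Full)
BEGIN
  DBMS_MVIEW.REFRESH('MV_CUSTOMER_ORDERS', method => 'F');
END;
/
```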

How can we use them in PeopleTools?
Before PeopleTools 8.54, Materialized Views could be used as an Oracle Database feature, but the DBA would need to be responsible for editing the Application Designer build scripts to include the specific syntax for this kind of view. On top of that, the DBA would need to schedule the data refresh directly from the database.

PeopleTools 8.54 introduces support within the PeopleSoft tools. To begin with, Application Designer will now show new options for View records:

We have already seen what Refresh Mode and Refresh Method mean. The Build Options indicate to Application Designer whether the Materialized View data needs to be calculated when its build is executed or whether this can be delayed until the first refresh is requested from the Materialized View Maintenance page.

This page is used to determine when to refresh the Materialized Views. The refresh can be executed for multiple views at once and scheduled using the usual PeopleTools Process Scheduler recurrence features. Alternatively, the Refresh Interval [seconds] may be used to tell the database that this view needs to be refreshed every n seconds.

The main disadvantage of using Materialized Views is that they are specific to Oracle Database. They will not work if you are using any other platform, in which case the view acts like a normal view, which keeps a similar functional behaviour, but without all the performance advantages of Materialized Views.

All in all, Materialized Views provide a very interesting feature to improve system performance while keeping the information reasonably up to date. Personally, I wish I'd had this feature available for many of the reports I've built in all these years... :-)

Consumer Security for the season and Today's World

PeopleSoft Technology Blog - Fri, 2014-12-19 13:41

Just to go beyond my usual security sessions, I was asked recently to talk to a local business and consumer group about personal cyber security. Here is the document I used for the session; you might find some useful tips.

Protecting your online shopping experience

- check retailer returns policy

- use a credit card rather than debit card, or check the protection on the debit card

- use a temporary/disposable credit card e.g. ShopSafe from Bank of America

- use a low limit credit card - with protection, e.g. AMEX green card

- check your account for random small amount charges and charitable contributions

- set spending and "card not present" alerts

Protecting email

- don't use same passwords for business and personal accounts

- use a robust email service provider

- set junk/spam threshold in your email client

- only use web mail for low risk accounts (see Note below)

- DON'T click on links in email – no matter who you think sent it

Protecting your computer

- if you depend on a computer/laptop/tablet for business, ONLY use it for business

- don't share your computer with anyone, including your children

- if you provide your children with a computer/laptop, refresh them from "recovery disks" on a periodic basis

- teach children the value of backing up important data

- if possible have your children only use their laptops/devices in family rooms where the activity can be passively observed

- use commercial, paid subscription, antivirus/anti malware on all devices (see Note below)

- carry and use a security cable when traveling or away from your office

Protecting your smart phone/tablet

- don't share your device

- make sure you have a secure lock phrase/PIN and set the idle timeout

- don't recharge it using the USB port on someone else's laptop/computer

- ensure the public Wi-Fi which you use is a trusted Wi-Fi (also - see Note below)

- store your data in the cloud, preferably not (or not only) on the phone/tablet

- don't have the device "remember" your password, especially for sensitive accounts

- exercise caution when downloading software e.g. games/apps, especially "free" software (see Note below)

Protect your social network

- don't mix business and personal information in your social media account

- use separate passwords for business and personal social media accounts

- ensure you protect personal information from the casual user

- check what information is being shared about you or photos tagged by your "friends"

- don't share phone numbers or personal/business contact details,
e.g. use the "ask me for my ..." feature

General protection and the “Internet of Things”

- be aware of cyber stalking

- be aware of surreptitious monitoring
e.g. “Google Glass” and smart phone cameras

- consider “nanny” software, especially for children’s devices

- be aware of “click bait” – e.g. apparently valid “news” stories which are really sponsored messages

- be aware of ATM "skimming", including at self-serve gas pumps

- be aware of remotely enabled camera and microphone (laptop, smart phone, tablet)

Note: Remember, if you’re not paying for the product, you ARE the product

Important! PeopleTools Requirements for PeopleSoft Interaction Hub

PeopleSoft Technology Blog - Fri, 2014-12-19 12:04

The PeopleSoft Interaction Hub* follows a release model of continuous delivery.  With this release model, Oracle delivers new functionality and enhancements on the latest release throughout the year, without requiring application upgrades to new releases.

PeopleTools is a critical enabler for delivering new functionality and enhancements for releases on continuous delivery.  The PeopleTools adoption policy for applications on continuous release is designed to provide reasonable options for customers to stay current on PeopleTools while adopting newer PeopleTools features. The basic policy is as follows:

Interaction Hub customers must upgrade to a PeopleTools release no later than 24 months after that PeopleTools release becomes generally available.  

For example, PeopleTools 8.53 was released in February 2013. Therefore, customers who use Interaction Hub will be required to upgrade to PeopleTools 8.53 (or newer, such as PeopleTools 8.54) no later than February 2015 (24 months after the General Availability date of PeopleTools 8.53). As of February 2015, product maintenance and new features may require PeopleTools 8.53. 

Customers should start planning their upgrades if they are on PeopleTools releases that are more than 24 months old.  See the Lifetime Support Summary for PeopleSoft Releases (doc id 1348959.1) in My Oracle Support for details on PeopleTools support policies.

* The PeopleSoft Interaction Hub is the latest branding of the product.  These support guidelines also apply to the same product under the names PeopleSoft Enterprise Portal, PeopleSoft Community Portal, and PeopleSoft Applications Portal.

Do You Really Need a Content Delivery Network (CDN)?

Bradley Brown - Fri, 2014-12-19 10:39
When I first heard about Amazon's offering called CloudFront I really didn't understand what it offered and who would want to use it.  I don't think they initially called it a content delivery network (CDN), but I could be wrong about that.  Maybe it was just something I didn't think I needed at that time.

Amazon states it well today (as you might expect).  The offering "gives developers and businesses an easy way to distribute content to end users with low latency, and high data transfer speeds."

So when you hear the word "content," what do you think about?  What is content?  First off, it's digital content.  Web pages?  That's what I initially thought of.  But it's really any digital content: audio books, videos, PDFs – files of any type, any size.

When it comes to distributing this digital content, why would you need to do this with low latency and/or high transfer speeds?  Sure, this is important if your website traffic scales up from 1-10 concurrent viewers to millions overnight.  How realistic is that for your business?  What about the other types of content - such as videos?  Yep, now I'm referring to what we do at InteliVideo!

A CDN allows you to scale up to any number of customers viewing or downloading your content concurrently.  Latency translates to "slowness": downloading a video while you're in Japan is slow when the file has to move across the ocean on every request.  The way Amazon handles this is to move the file across the ocean once, between its data centers over its fast pipes (high-speed internet), so that the customer effectively downloads the file directly from Japan.

Imagine that you have this amazing set of videos that you want to bundle up and sell to millions of people.  You don't know when your sales will go viral, but when it happens you want to be ready!  So how do you implement a CDN for your videos, audios, and other content?  Leave that to us!

So back to the original question.  Do you really need a content delivery network?  Well...what if you could get all of the benefits of having one without having to lift a finger?  Would you do it then?  Of course you would!  That's exactly what we do for you.  We make it SUPER simple - i.e. it's done 100% automatically for our clients and their customers.  Do you really need a CDN?  It depends on how many concurrent people are viewing your content and where they are located.

For my Oracle training classes that I offer through BDB Software, I have customers from around the world, which I personally find so cool!  Does BDB Software need a CDN?  It absolutely makes for a better customer experience and I have to do NOTHING to get this benefit!

“Innovation in Managing the Chaos of Everyday Project Management” is now on YouTube

If you missed Fishbowl’s recent webinar on our new Enterprise Information Portal for Project Management, you can now view a recording of it on YouTube.


Innovation in Managing the Chaos of Everyday Project Management discusses our strategy for leveraging the content management and collaboration features of Oracle WebCenter to enable project-centric organizations to build and deploy a project management portal. This solution was designed especially for groups like E & C firms and oil and gas companies, who need applications to be combined into one portal for simple access.

If you’d like to learn more about the Enterprise Information Portal for Project Management, visit our website or email our sales team at

The post “Innovation in Managing the Chaos of Everyday Project Management” is now on YouTube appeared first on Fishbowl Solutions' C4 Blog.

Categories: Fusion Middleware, Other

Log Buffer #402, A Carnival of the Vanities for DBAs

Pythian Group - Fri, 2014-12-19 09:15

This Log Buffer edition hits the ball out of the park, smashing yet another record by surfacing a unique collection of blog posts from various database technologies. Enjoy!


Oracle:

EM12c and the Optimizer Statistics Console.


OBIEE and ODI on Hadoop : Next-Generation Initiatives To Improve Hive Performance.

Oracle Bundle Patching.

Performance Issues with the Sequence NEXTVAL Call.

SQL Server:

GUIDs GUIDs everywhere, but how is my data unique?

Questions About T-SQL Transaction Isolation Levels You Were Too Shy to Ask.

Introduction to Advanced Transact SQL Stairway and Using the CROSS JOIN Operator.

Introduction to Azure SQL Database Scalability.

What To Do When the Import and Export Wizard Fails.


MySQL:

Orchestrator 1.2.9 GA released.

Making HAProxy 1.5 replication lag aware in MySQL.

Monitor MySQL Performance Interactively With VividCortex.

InnoDB’s multi-versioning handling can be Achilles’ heel.

Memory summary tables in Performance Schema in MySQL 5.7.

Categories: DBA Blogs

What Do Oracle Audit Vault Collection Agents Do?

The Oracle Audit Vault is installed on a server, and collector agents are installed on the hosts running the source databases.  These collector agents communicate with the audit vault server. 

If the collection agents are not active, no audit data is lost, as long as the source database continues to collect the audit data.  When the collection agent is restarted, it will capture the audit data that the source database had collected during the time the collection agent was inactive.

There are three types of agent collectors for Oracle databases.  There are other collectors for third-party database vendors such as SAP Sybase, Microsoft SQL Server, and IBM DB2.

Audit Vault Collectors for Oracle Databases*

  • Database audit trail

How enabled: For standard audit records, the AUDIT_TRAIL initialization parameter is set to DB or DB, EXTENDED. For fine-grained audit records, the audit_trail parameter of the DBMS_FGA.ADD_POLICY procedure is set to DBMS_FGA.DB or DBMS_FGA.DB + DBMS_FGA.EXTENDED.

  • Operating system audit trail

How enabled: For standard audit records, AUDIT_TRAIL is set to OS, XML, or XML, EXTENDED. For syslog audit trails, AUDIT_TRAIL is set to OS and the AUDIT_SYS_OPERATIONS parameter is set to TRUE; in addition, the AUDIT_SYSLOG_LEVEL parameter must be set. For fine-grained audit records, the audit_trail parameter of the DBMS_FGA.ADD_POLICY procedure is set to DBMS_FGA.XML or DBMS_FGA.XML + DBMS_FGA.EXTENDED.

  • Redo log files

How enabled: The table that you want to audit must be eligible.  See "Creating Capture Rules for Redo Log File Auditing" for more information.

*Note: if using Oracle 12c, the assumption is that Mixed Mode Unified Auditing is being used.

If you have questions, please contact us at

Reference Tags: Auditing, Oracle Audit Vault, Oracle Database
Categories: APPS Blogs, Security Blogs

OBIEE Enterprise Security

Rittman Mead Consulting - Fri, 2014-12-19 05:35

The Rittman Mead Global Services team have recently been involved in a number of security architecture implementations and produced a security model which meets a diverse set of requirements.  Using our experience and standards, we have been able to deliver a robust model that addresses the common questions we routinely receive around security, such as:

“What considerations do I need to make when exposing Oracle BI to the outside world?”


“How can I make a flexible security model which is robust enough to meet the demands of my organisation but easy to maintain?”

The first question is based on a standard enterprise security model where the Oracle BI server is exposed through a web host, with SSL enabled and access security tightened.  This can be complex to achieve, but it is something that we have implemented many times now.

The second question is much harder to answer, but our experience with numerous clients has led us to develop a multi-dimensional inheritance security model that has yielded excellent results.

What is a Multi-dimensional Inheritance Security Model?

The wordy title is actually a simple concept that incorporates 5 key areas:

  • Easy to setup and maintain
  • Flexible
  • Durable
  • Expandable
  • Consistent throughout the product

While there are numerous ways of implementing a security model in Oracle BI, sticking to the key concepts above ensures we get it right.  The largest challenge we face in BI is the different types of security required; all three need to work in harmony:

  • Application security
  • Content security
  • Data security
Understanding the organisation makeup

The first approach is to consider the makeup of a common organisation and build our security around it.


This diagram shows different departments (Finance, Marketing, Sales) whose data is specific to them; normally, departmental users should only see the data that is relevant to them.  In contrast, the IT department, who are developing the system, need visibility across all data, and so do the Directors.


What types of users do I have?

Next is to consider the types of users we have:

  1. BI Consumer: This will be the most basic and common user who needs to access the system for information.
  2. BI Analyst: As an Analyst the user will be expected to generate more bespoke queries and need ways to represent them. They will also need an area to save these reports.
  3. BI Author: The BI Author will be able to create content and publish that content for the BI Consumers and BI Analysts.
  4. BI Department Admin: The BI Department Admin will be responsible for permissions for their department as well as act as a focal point user.
  5. BI Developer: The BI Developer can be thought of as the person(s) who creates models in the RPD and will need additional access to the system for testing of their models. They might also be responsible for delivering Answers Requests or Dashboards in order to ‘Prove’ the model they created.
  6. BI Administrator:  The Administrator will be responsible for the running of the BI system and will have access to every role.  Most administrator tasks will not require SQL or data warehouse skills, and this role is generally separated from the BI Developer role.

The types of users here are a combination of every requirement we have seen and might not all be required by every client.  The order they are in shows the implied inheritance: the BI Analyst inherits permissions and privileges from the BI Consumer, and so on.
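That implied inheritance chain can be sketched as a small resolution function. This is a toy model, not OBIEE configuration; the privilege names are illustrative placeholders for the permissions detailed later:

```python
# Toy sketch of role inheritance: each role inherits the privileges of the
# role below it, so BI Author also holds everything granted to BI Analyst
# and BI Consumer. Role and privilege names are illustrative.

PARENT = {
    "BI Consumer": None,
    "BI Analyst": "BI Consumer",
    "BI Author": "BI Analyst",
    "BI Department Admin": "BI Author",
    "BI Developer": "BI Department Admin",
    "BI Administrator": "BI Developer",
}

DIRECT_PRIVS = {
    "BI Consumer": {"view dashboards"},
    "BI Analyst": {"access answers"},
    "BI Author": {"create dashboards"},
    "BI Department Admin": {"manage catalog permissions"},
    "BI Developer": {"access all departments"},
    "BI Administrator": {"administer system"},
}

def effective_privs(role):
    # Walk up the chain, accumulating each ancestor's direct privileges.
    privs = set()
    while role is not None:
        privs |= DIRECT_PRIVS[role]
        role = PARENT[role]
    return privs

print(sorted(effective_privs("BI Author")))
# ['access answers', 'create dashboards', 'view dashboards']
```

The payoff of this structure is maintainability: granting a new privilege to BI Consumer automatically reaches every role above it, which is exactly the "easy to set up and maintain" property the model aims for.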

What Types do I need?

The size of the organization determines what types of user groups are required. By default, Oracle ships with:

  1. BI Consumer
  2. BI Author
  3. BI Administrator

Typically we would recommend inserting the BI Analyst into the default groups:

  1. BI Consumer
  2. BI Analyst
  3. BI Author
  4. BI Administrator

This works well when there is a central BI team who develop content for the whole organization. The structure would look like this:



For larger organizations, where dashboard development and permissions are handled across multiple BI teams, the BI Administrator group can be used.  Typically we see the central BI team as a Data Warehouse team who deliver the BI model (RPD) to the multiple BI teams.  In a large organization, the administration of Oracle BI should be handled by someone who isn't the BI Developer; the structure could look like:




Permissions on groups

Each of the groups will require different permissions; at a high level, the permissions would be:


BI Consumer
  • View Dashboards
  • Save User Selections
  • Subscribe to iBots
BI Analyst
  • Access to Answers and standard set of views
  • Some form of storage
  • Access to Subject areas
BI Author
  • Access to Create/Modify Dashboards
  • Save Predefined Sections
  • Access to Action Links
  • Access to Dashboard Prompts
  • Access to BI Publisher
BI Department Admin
  • Ability to apply permissions and manage the Web Catalog
BI Developer
  • Advanced access to Answers
  • Access to all departments
BI Administrator
  • Everything


Understanding the basic security mechanics in 10g and 11g

In Oracle BI 10g, the majority of the security is handled in the Oracle BI Server.  This is normally done through initialisation blocks, which authenticate the user against an LDAP server and then run a query against database tables to populate the user into ‘Groups’ used in the RPD and ‘Web Groups’ used in the Presentation Server.  These groups have to match at each level: database, Oracle BI Server, and Oracle BI Presentation Server.

With the addition of Enterprise Manager and Weblogic, the security elements in Oracle BI 11g changed radically.  Authenticating the user in the Oracle BI Server is no longer the recommended way and is limited on Linux. While the RPD Groups and Presentation Server Web Groups still exist, they don’t need to be used.  Users are now authenticated against Weblogic.  This can be done using Weblogic’s own users and groups or by plugging it into a choice of LDAP servers.  The end result is Groups and Users that exist in Weblogic.  The groups then need to be mapped to Application Roles in Enterprise Manager, which can be seen by the Oracle BI Presentation Services and the Oracle BI Server.  It is recommended to create a one-to-one mapping for each group.



What does all this look like then?

Assuming this is for an SME-sized organization where dashboard development (BI Author) is done by the central BI team, the groups would look like:




The key points are:

  • The generic BI Consumer/Analyst groups give their permissions to the department versions
  • No users should be in the generic BI Consumer/Analyst groups
  • Only users from the BI team should be in the generic BI Author/Administrator group
  • New departments can be easily added
  • The lines denote the inheritance of permissions and privileges


What’s next – the Web Catalog?

The setup of the web catalog is very important to ensure that it does not become unwieldy, so it needs to reflect the security model. We would recommend setting up some base folders which look like:



Each department has its own folder and four sub-folders. The permission applied to each department’s root folder is BI Administrators, so full control is possible across the top.  This is also true for every folder below; however, those folders have additional explicit permissions, described below, to ensure that the department cannot create any more than the four sub-folders.

  • The Dashboard folder is where the dashboards go; the department’s BI Developers group will have full control and the department’s BI Consumers will have read access. This allows the department’s BI Developers to create dashboards, the department’s BI Administrators to apply permissions, and the department’s Consumers and Analysts to view.
  • The same permissions are applied to the Dashboard Answers folder, to the same effect.
  • The Development Answers folder gives full control to the department’s BI Developers and no access to the department’s BI Analysts or BI Consumers. This folder is mainly for the department’s BI Developers to store answers that are in the process of development.
  • The Analyst folder is where the department’s BI Analysts can save Answers; they will therefore need full control of this folder.

I hope this article gives some insight into security with Oracle BI.  Remember that our Global Services products offer a flexible support model where you can harness our knowledge to deliver your projects in a cost-effective manner.

Categories: BI & Warehousing

Elephants and Tigers - V8 of the Website

Bradley Brown - Thu, 2014-12-18 21:54
It's amazing how much work goes into a one-page website these days!  We've been working on the new version of our website (which is basically one page) for the last month or so.  The content is the "easy" part; the look and feel / experience is the time-consuming part.  To put it another way, it's all about the entire experience, not just the text/content.

Since we're a video company, it's important that the first page show some video... which required production and editing.  We're hunting elephants, so we need to tell the full story of the implementations we've done for our large clients.  What all can you sell on our platform?  A video?  Audio books?  Movies?  TV shows?  What else?  We needed to talk about our onboarding process for the big guys.  What does the shopping cart integration look like?  We have an entirely new round of apps coming out soon, so we need to show those off.  We need to answer the question "What do our apps look like?"    Everybody wants analytics, right?  You want to know who watched what – for how long, when, and where!  What about all the ways you can monetize – subscriptions (SVOD), transactional (TVOD) rentals and purchases, credit-based purchases, and more?  What about those enterprises who need to restrict (or allow) viewing based on location?
Yes, it's quite a story that we've learned over the past few years.  Enterprises (a.k.a. Elephants) need it all.  We're "enterprise guys" after all.  It's natural for us to hunt Elephants.
Let's walk through this step by step.  In some ways it's like producing a movie: a lot of moving parts, a lot of post-editing, and ultimately it comes down to the final cut.
What is it that you want to deliver?  Spoken word?  TV shows?  Training?  Workouts?  Maybe you want to jump right into why digital, how to customize, or other topics...

Let's talk about why go digital?  Does it seem obvious to you?  It's not obvious to everyone.  Companies are still selling a lot of DVDs.

Any device, anywhere, any time!  That's how your customers want the content.

We have everything from APIs to Single Sign On, and SO much more...we are in fact an enterprise solution.

It's time to talk about the benefits.  We have these awesome apps that we've spent a fortune developing, and they allow our clients to have a full branding experience, as you see here for UFC FIT.

We integrate with most of our large customers' existing shopping carts.  We simply receive an instant payment notification from the cart to authorize a new customer.

I'm a data guy at heart, so we track everything about who's watching what, where they are watching from, and much more.  Our analytics reporting shows you this data.  Ultimately this leads to strategic upsells to existing customers – it's always easier to sell to someone who's already purchased than to a new customer.

What website would be complete without a full list of client testimonials?

If you can dream up a way to monetize your content, we can implement it.  From credit-based subscription systems to outright purchases... we have it all!

What if you want to sell through affiliates?  How about selling the InteliVideo platform as an affiliate?  Our founders came from ClickBank, so we understand Affiliate payments and how to process them.

Do you need a step-by-step guide to our implementation process?  Well...if so, here you have it!  It's as simple as 5 steps.  For some customers this is a matter of hours and for others it's months.  The first step is simply signing up for an InteliVideo account at: 

We can handle payment processing for you if you would like.  But most big companies have already negotiated their merchant processing rates AND they typically already have a shopping cart, so we integrate as needed.

Loading up your content is pretty easy with our platform.  Then again, we have customers with as few as one product and others with thousands of products and tens of thousands of assets (videos, audio files, etc.).  Most of our big customers simply send us a drive.  We have a bulk upload process where you give us your drive, all of the metadata (descriptions), and the mapping of each... and we load it all up for you.

Our customers can use our own sales pages and/or membership area...or we have a template engine that allows for comprehensive redesign of the entire look and feel.  Out of the box implementations are simple...

Once our clients sign off on everything and our implementation team does its part, it's time to buy your media, promote your products, and start selling.  We handle the delivery.

For those who would like to sign up or need more information, what website would be complete without a contact page?  There are other pages (like our blog, about us, etc.), but this page has a lot of information.  It's a story.  At the bottom of the page there is a "Small Business" link, which takes you to the prior version of our website... for small businesses.

As I said at the beginning of this blog post, it's amazing how much thought goes into a new web page!  We're very excited about our business.  Hopefully this post helped you think through how you want to tell the stories about your business.  How should you focus on your elephants and tigers?  How often should you update your website?  Go forth and crush it!
This new version of our website should be live in the next day or two.  As always, I'd love to hear your feedback!

Helix Education puts their competency-based LMS up for sale

Michael Feldstein - Thu, 2014-12-18 17:05

Back in September I wrote about the Helix LMS providing an excellent view into competency-based education and how learning platforms would need to be designed differently for this mode. The traditional LMS – based on a traditional model using grades, seat time and synchronous cohort of students – is not easily adapted to serve CBE needs such as the following:

  1. Explicit learning outcomes with respect to the required skills and concomitant proficiency (standards for assessment)
  2. A flexible time frame to master these skills
  3. A variety of instructional activities to facilitate learning
  4. Criterion-referenced testing of the required outcomes
  5. Certification based on demonstrated learning outcomes
  6. Adaptable programs to ensure optimum learner guidance

In a surprise move, Helix Education is putting the LMS up for sale.  Helix Education provided e-Literate the following statement to explain the changes, at least from a press release perspective.

With a goal of delivering World Class technologies and services, a change we are making is with Helix LMS. After thoughtful analysis and discussion, we have decided to divest (sell) Helix LMS. We believe that the best way for Helix to have a positive impact on Higher Education is to:

  • Be fully committed and invest properly in core “upstream” technologies and services that help institutions aggregate, analyze and act upon data to improve their ability to find, enroll and retain students and ensure their success
  • Continue to build and share our thought leadership around TEACH – program selection, instructional design and faculty engagement for CBE, on-campus, online and hybrid delivery modes.
  • Be LMS neutral and support whichever platform our clients prefer. In fact, we already have experience in building CBE courses in the top three LMS solutions.

There are three aspects of this announcement that are quite interesting to me.

Reversal of Rebranding

Part of the surprise is that Helix rebranded the company based on their acquisition of the LMS – this was not just a simple acquisition of a learning platform – and just over a year after this event Helix Education is reversing course, selling the Helix LMS and going LMS-neutral. From the earlier blog post [emphasis added]:

In 2008 Altius Education, started by Paul Freedman, worked with Tiffin University to create a new entity called Ivy Bridge College. The goal of Ivy Bridge was to help students get associate degrees and then transfer to a four-year program. Altius developed the Helix LMS specifically for this mission. All was fine until the regional accrediting agency shut down Ivy Bridge with only three months notice.

The end result was that Altius sold the LMS and much of the engineering team to Datamark in 2013. Datamark is an educational services firm with a focus on leveraging data. With the acquisition of the Helix technology, Datamark could expand into the teaching and learning process, leading them to rebrand as Helix Education – a sign of the centrality of the LMS to the company’s strategy. Think of Helix Education now as an OSP (a la carte services that don’t require tuition revenue sharing) with an emphasis on CBE programs.

Something must have changed in their perception of the market to cause this change in direction. My guess is that they are getting pushback from schools who insist on keeping their institutional LMS, even with the new CBE programs. Helix states they have worked with “top three LMS solutions”, but as seen in the demo (read the first post for more details), capabilities such as embedding learning outcomes throughout a course and providing a flexible time frame work well outside the core design assumptions of a traditional LMS. I have yet to see an elegant design for CBE with a traditional LMS. I’m open to being convinced otherwise, but count me as skeptical.

Upstream is Profitable

The general move sounds like the main component is the moving “upstream” element. To be more accurate, it’s more a matter of staying “upstream” and choosing to not move downstream. It’s difficult, and not always profitable, to deal with implementing academic programs. Elements built on enrollment and retention are quite honestly much more profitable. Witness the recent sale of the enrollment consulting firm Royall & Company for $850 million.

The Helix statement describes their TEACH focus as one of thought leadership. To me this sounds like the core business will be on enrollment, retention and data analysis while they focus academic efforts not on direct implementation products and services, but on white papers and presentations.

Meaning for Market

Helix Education was not the only company building CBE-specific learning platforms to replace the traditional LMS. FlatWorld Knowledge built a platform that is being used at Brandman University. LoudCloud Systems built a new CBE platform FASTrak – and they already have a traditional LMS (albeit one designed with a modern architecture). Perhaps most significantly, the CBE pioneers Western Governors University and Southern New Hampshire University’s College for America (CfA) built custom platforms based on CRM technology (i.e. Salesforce) based on their determination that the traditional LMS market did not suit their specific needs. CfA even spun off their learning platform as a new company – Motivis Learning.

If Helix Education is feeling the pressure to be LMS-neutral, does that mean that these other companies are or will be facing the same? Or, is Helix Education’s decision really based on company profitability and capabilities that are unique to their specific situation?

The other side of the market effect will be determined by which company buys the Helix LMS. Will a financial buyer (e.g. private equity) choose to create a standalone CBE platform company? Will a traditional LMS company buy the Helix LMS to broaden their reach in the quickly-growing CBE space (350 programs in development in the US)? Or will an online service provider and partial competitor of Helix Education buy the LMS? It will be interesting to see which companies bid on this product line and who wins.


If I find out more about what this change in direction means for Helix Education or for competency-based programs in general, I’ll share in future posts.

The post Helix Education puts their competency-based LMS up for sale appeared first on e-Literate.

Season's Greetings from the Oracle ISV Migration Center Team


We share our skills to maximize your revenue!
Categories: DBA Blogs