
Michael Feldstein

What We Are Learning About Online Learning...Online

Helix: View of an LMS designed for competency-based education

Thu, 2014-09-18 11:42

Within higher education, we tend to talk about LMS solutions based on an institutional perspective – which systems can serve as the official LMS for an entire institution. While this view is important and forms the basis for my LMS graphics, the emergence of new educational delivery models has led to the development of some interesting program-specific LMS models. One example that I have already written about is 2U’s platform (built on top of Moodle and Adobe Connect) for their specific Online Service Provider (OSP) business.

One educational model that is becoming increasingly important is competency-based education (CBE). One challenge for this model is that the traditional LMS – built around grades, seat time, and synchronous cohorts of students – is not easily adapted to serve CBE needs. As described in this CBE primer:

OBE [Outcome-based education] can be implemented in various modalities, including face-to-face, online and hybrid models.

Competency-based education (CBE) is a narrower concept, a subset or instance of OBE, where the outcomes are more closely tied to job skills or employment needs, and the methods are typically self-paced. Again based on the Malan article, the six critical components of CBE are as follows:

  1. Explicit learning outcomes with respect to the required skills and concomitant proficiency (standards for assessment)
  2. A flexible time frame to master these skills
  3. A variety of instructional activities to facilitate learning
  4. Criterion-referenced testing of the required outcomes
  5. Certification based on demonstrated learning outcomes
  6. Adaptable programs to ensure optimum learner guidance

Enter Helix (again)

In 2008 Altius Education, started by Paul Freedman, worked with Tiffin University to create a new entity called Ivy Bridge College. The goal of Ivy Bridge was to help students get associate degrees and then transfer to a four-year program. Altius developed the Helix LMS specifically for this mission. All was fine until the regional accrediting agency shut down Ivy Bridge with only three months’ notice.[1]

The end result was that Altius sold the LMS and much of the engineering team to Datamark in 2013. Datamark is an educational services firm with a focus on leveraging data. With the acquisition of the Helix technology, Datamark could expand into the teaching and learning process, leading them to rebrand as Helix Education – a sign of the centrality of the LMS to the company’s strategy. Think of Helix Education now as an OSP (a la carte services that don’t require tuition revenue sharing) with an emphasis on CBE programs.

I asked the Helix team to give me a demo of their tool, with permission to do screen grabs, to get a better sense of the system design and its relationship to CBE. The goal is to understand some of the nuances brought about by a CBE design focus, based on three of the six components from SPT Malan. I am not trying to judge the LMS design as better or worse, but rather to visualize the implications of CBE.

First – the main page:

[Screenshot: classroom main page]

1) Explicit Learning Outcomes

The learning outcomes are embedded throughout the course, including a pretest that can determine a student’s starting set of competencies.

[Screenshot: pretest feedback]

From a faculty or course designer standpoint, there is also a view showing the coverage of competencies within the course.

[Screenshot: outcomes coverage report]
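To make the idea of a coverage report concrete, here is a minimal sketch of the underlying computation, assuming a hypothetical data model in which each activity is tagged with the competencies it addresses (the structure and names are my own, not Helix’s):

```python
# Hypothetical competency-coverage check (illustrative, not Helix's data model).
course_competencies = {"C1", "C2", "C3", "C4"}

# Each activity is tagged with the competencies it addresses or assesses.
activities = {
    "Lesson 1": {"C1"},
    "Practice quiz": {"C1", "C2"},
    "Case study": {"C2", "C3"},
}

covered = set().union(*activities.values())
uncovered = course_competencies - covered

print("Covered:", sorted(covered))            # ['C1', 'C2', 'C3']
print("Not yet covered:", sorted(uncovered))  # ['C4']
```

A designer view like the one above presumably runs this kind of check across every unit so that gaps in competency coverage surface before a course goes live.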

2) Flexible Time Frame

The system gives estimates of the time required for most tasks and allows the student to choose their own course plan. The “schedule” then adjusts based on the course plan.

[Screenshot: create course plan]
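As a rough illustration of how such a schedule could be derived, here is a minimal sketch that spreads per-task time estimates over a student-chosen weekly pace (a hypothetical illustration with made-up tasks, not Helix’s actual algorithm):

```python
from datetime import date, timedelta

# Hypothetical tasks with estimated hours (illustrative values only).
tasks = [("Read unit 1", 2.0), ("Practice problem set", 1.5), ("Unit assessment", 1.0)]

def project_schedule(tasks, start, hours_per_week):
    """Assign each task a target date based on cumulative estimated hours."""
    schedule, cumulative_hours = [], 0.0
    for name, est_hours in tasks:
        cumulative_hours += est_hours
        schedule.append((name, start + timedelta(weeks=cumulative_hours / hours_per_week)))
    return schedule

for name, target in project_schedule(tasks, date(2014, 9, 22), hours_per_week=3.0):
    print(f"{name}: target {target}")
```

Change `hours_per_week` and every downstream target date moves, which is the kind of plan-driven schedule adjustment described above.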

Because the course is essentially self-paced, Helix adds some features to allow peer assessment and discussions, often in an asynchronous manner (based on current and past discussions of students at the same point in the course, but allowing real-time faculty interaction).

[Screenshot: Answer Forum – student view]

Students can ‘like’ and choose to ‘follow’ different discussions. I asked whether students can ‘follow’ other students; the answer was no, though the feature is on the roadmap.

[Screenshot: Answer Forum]

6) Adaptable Programs / Learner Guidance

This one was interesting, as Helix allows students to select different levels of guidance – hints, examples, step-by-step, etc.

[Screenshot: practice activity with guidance levels]

Notes
  • I am doing some research to see if I can get other examples from CBE programs, but newer educational models, including CBE, seem increasingly likely to lead to program-specific LMS solutions.
  • While the Helix team stated that assessments (formative and summative) can be set up with essays or other non-multiple-choice formats, the platform overall gives a good view of the fairly prescriptive, objectivist nature of CBE.[2]
  • Helix stated several times during the demo that the LMS could be applied to non-CBE programs, but my focus was on the CBE nature of the system.
  • Helix is building up its management team, including several key hires with backgrounds at eCollege (acquired by Pearson in 2007), Western Governors University and DeVry University.
  • Finally, this feature is not CBE-based, but I do like the ability for students to provide feedback on lessons within the course itself.

[Screenshot: lesson feedback]

  1. More on that sad tale in this post as well as this Chronicle article.
  2. See Tony Bates article for more on this topic.


Opening Up the LMS Walled Garden

Wed, 2014-09-17 08:45

In yesterday’s post I described where I (and many others) see the LMS market heading in terms of interoperability.

At the same time, the LMS does a very poor job of providing many of the learning technologies desired by faculty and students. There is no way that a monolithic LMS can keep up with the market – it cannot match the functionality of open internet tools, especially without adding feature bloat.

I would add that part of the cause of the “false binary position” that D’Arcy points out is that much of the public commentary focuses on where the LMS has been rather than where it is going. There is a significant movement based on interoperability that is leading, perhaps painfully and slowly, to a world where the LMS can coexist with open educational tools, with even end users (faculty and students) eventually having the ability to select their tools that can share rosters and data with the institutional LMS.

Coexistence and interoperability, however, should not imply merely having links from the LMS to external tools as is too often the case.

The Walled Garden

The LMS (which George Station rightly points out was really called the Course Management System in the early years) started out as a walled garden with basic functionality of syllabus sharing, announcements, gradebook, email, and a few other tools.

[Diagram: the walled garden LMS]

Over time the situation changed, as Jared Stein points out in his blog post:

Flash forward to 2005(ish), when “Web 2.0” was on many educators’ minds as a new wave of services that made it easier for anyone to express themselves to anyone who was interested in participating. New web services and social media made the legacy LMS look like what it was: A slow-moving cruise ship that locked passengers in their cabins. It didn’t care about user experience. It didn’t care about integrating with social media. It didn’t care about encouraging novel practices or experimentation. But those were really just symptoms; the sickness was that the LMS vendors didn’t care about what was happening in our culture and in our communities as connectivity and multimedia exploded through the open web.

The LMS vendors did not just ignore these new services, however, but they tried to eat their cake and have it, too, by creating poor imitations of the external tools and stuffing them inside the LMS.

[Diagram: the walled garden LMS with imitation Web 2.0 tools]

As Web 2.0 tools proliferated, this approach of maintaining the walled garden was one of the primary causes of feature bloat and poorly-designed learning tools within the LMS.

[Diagram: the walled garden LMS surrounded by external tools]

False Binary – A Choice

This situation – a walled garden LMS with feature bloat and inelegant tools while external tools multiply around it – represents the bad side of the ed tech market as it has existed. Despite the weakness of this design approach, the vendors themselves were not the only ones at fault. As Mike Caulfield points out in his description of the “elegant and extensible” Prometheus:

A number of years later I asked a person I knew who worked at Prometheus why Prometheus failed. Did Blackboard crush them?

His answer was interesting. No, it wasn’t Blackboard at all. It was the educational institutions. With the slow, resource-intensive and state-mandated RFP processes, the interminable faculty committees, and the way that even after the deal was signed the institution would delay payment and implementation as long as possible (or suddenly throw it into an unanticipated ‘final review’) it was just not possible to grow a stable business. The process institutions followed was supposed to ensure equitable access to contracts, but what it did was make it impossible for any company not sitting on a pile of cash to stay in business. (I’m extrapolating a bit here, but not much).

I would add that the RFP process also encourages a feature checklist mentality, elevating the importance of being able to say “we have that feature” and minimizing the ability to say “this design doesn’t suck”.

Many institutions have reacted slowly to the proliferation of tools and officially support only the enterprise LMS – often due to FERPA / student privacy concerns but also due to the perceived inability of central units to support faculty and students on multiple tools.

But this is a choice, even in the current market with limited interoperability. There are other institutions that support not only the official enterprise LMS but also multiple learning tools. While institutions have a responsibility to provide baseline LMS services for faculty, there is a strong argument that they also have a responsibility to support the innovators and early adopters who want to experiment with different learning tools, whether or not those tools integrate with the LMS within a course.

Moving Beyond the Wall

But can the market progress such that the enterprise LMS can coexist with open tools even at the course level? The answer in my mind is yes, and the work to move in this direction has been in progress for years. Thanks to the LTI specification, and in the future the Caliper interoperability framework, the vision that George Kroner describes is getting closer and closer.

But the LMSs today won’t be the LMSs of tomorrow. Rather than being a “dumping ground” for content, maybe one possible future for LMSs is as Learning Management Scaffolding – metaphorically supporting learning no matter its shape or form – with content being viewed and activities taking place inside and outside of the LMS. Maybe content will be seamlessly navigable around the LMS and the web – and perhaps in other types of systems like LCMSs (Learning Content Management Systems). Maybe learning tools of all types and sizes – but external to the LMS – will support every long-tail instructional desire imaginable while assessment results feed back into the LMS gradebook. Maybe the LMS will be the storage mechanism for learning analytics as well, but it is more likely that it will become only one source of data feeding into another system better-suited for the task. But try as I might, I fail to imagine a future without some centrally-managed, instructor-accessible system that stores rosters and grades, enforces privacy and security policies, and provides some form of starting-off point for students.

In this developing future market, coexistence of LMS and Open will include not just links or grudging institutional support, but also sharing of rosters, data, and context. Open tools will start with the class roster in place, user-activity data will be shared between apps, and external apps will run in the context of the course design and recent class activities.

[Diagram: LMS and open tools coexistence, 2014-09-16]
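To make the roster-and-context sharing concrete, here is a minimal sketch of the form parameters an LMS posts to an external tool in an LTI 1.1 basic launch. The values are made up, and the OAuth signing step is noted in comments rather than implemented:

```python
# Sketch of an LTI 1.1 basic launch payload (illustrative values only).
# In a real launch the LMS signs these form parameters with OAuth 1.0a
# (HMAC-SHA1) using a consumer key/secret shared with the tool provider.
launch_params = {
    "lti_message_type": "basic-lti-launch-request",
    "lti_version": "LTI-1p0",
    "resource_link_id": "course42-discussion-1",  # unique tool placement
    "context_id": "course42",                     # which course
    "user_id": "student-9876",                    # who launched the tool
    "roles": "Learner",
    "lis_person_name_full": "Pat Example",
    # Where the tool can POST a score back into the LMS gradebook:
    "lis_outcome_service_url": "https://lms.example.edu/grade-passback",
    "lis_result_sourcedid": "course42:student-9876:discussion-1",
}

for key, value in launch_params.items():
    print(f"{key}={value}")
```

The tool receiving this launch knows who the user is, which course and placement they came from, and how to report a grade back – no separate roster upload or manual account creation required. Caliper aims to standardize the richer stream of activity data on top of this kind of integration.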

There will be painful implementations – caused both by LMS vendors and by institutions – that will prevent a smooth transition to this breakdown of the walled garden, but it will become increasingly difficult for LMS solutions to survive over time if they don’t adapt. There will also be market niches (e.g. specific online programs) that will retain the walled garden LMS approach, but in general the markets should change.

I personally see the realistic future as having more of a choice of tools rather than a minimal LMS. LMS vendors will continue to have reasons to develop (or acquire) their own internal tools, and there will even be cases where the tight integration and focused development will lead to better tools in the LMS than outside. The key change will be the ability for integration decisions – which tools to use in specific classes or in specific institutions – to be made closer to the faculty and student end users. From LMS vendor to central IT to academic program to even individual faculty – moving closer to those who know the specific needs of the class. Central IT and the institution will remain important in setting policies and permissions to protect student privacy and provide guidance to faculty and course designers who are more conservative in their ed tech usage. But either way (minimal LMS or swappable tool LMS), I think the long-term trend is moving in this direction of LMS and Open tool coexistence.


LMS and Open: The false binary is based on past, not future markets

Mon, 2014-09-15 12:46

D’Arcy Norman has an excellent blog post up titled “On the false binary of LMS vs. Open” that captures a false framing issue.

We’re pushed into a false binary position – either you’re on the side of the evil LMS, working to destroy all that is beautiful and good, or you’re on the side of openness, love, and awesomeness. Choose. There is no possible way to teach (or learn) effectively in an LMS! It is EVIL and must be rooted out before it sinks its rotting tendrils into the unsuspecting students who are completely and utterly defenseless against its unnatural power!

While D’Arcy is a proponent of open tools, he rightly calls out the need to understand institutional responsibilities.

But. We can’t just abdicate the responsibility of the institution to provide the facilities that are needed to support the activities of the instructors and students. That doesn’t mean just “hey – there’s the internet. go to it.” It means providing ways for students to register in courses. For their enrolment to be automatically processed to provision access to resources (physical classrooms, online environments, libraries, etc…). For students’ grades and records to be automatically pushed back into the Registrar’s database so they can get credit for completing the course. For integration with library systems, to grant access to online reserve reading materials and other resources needed as part of the course.

This is an important point: the institutional LMS matters and will not, and should not, go away anytime soon. I have pointed out recently that the LMS is one of the very few technologies now used in a majority of courses within an institution, and the institutional responsibility described above helps to explain why.

In our consulting work Michael and I often help survey institutions to discover what technologies are being used within courses, and typically the only technologies that are used by a majority of faculty members or in a majority of courses are the following:

  • AV presentation in the classroom;
  • PowerPoint usage in the classroom (obviously connected with the projectors);
  • Learning Management Systems (LMS);
  • Digital content at lower level than a full textbook (through open Internet, library, publishers, other faculty, or OER); and
  • File sharing applications.

At the same time, the LMS does a very poor job of providing many of the learning technologies desired by faculty and students. There is no way that a monolithic LMS can keep up with the market – it cannot match the functionality of open internet tools, especially without adding feature bloat.

I would add that part of the cause of the “false binary position” that D’Arcy points out is that much of the public commentary focuses on where the LMS has been rather than where it is going. There is a significant movement based on interoperability that is leading, perhaps painfully and slowly, to a world where the LMS can coexist with open educational tools, with even end users (faculty and students) eventually having the ability to select their tools that can share rosters and data with the institutional LMS.

Below is a modified presentation I gave at the Apereo Mexico conference in the spring (with a few changes to explain slides without audio). The key point is that there are subtle changes to the LMS market that are significant, and the coexistence of the LMS with open tools will be central to the market’s future.

Will all LMS vendors move in this direction? In marketing, yes, but in reality, no. There are different approaches to this coexistence issue from the LMS vendors, ranging from lip service to outright support, with several points in between. But the overall trend is clearly moving in this direction, even if some solutions lose out over time.

Download (PDF, 4.21MB)


Pearson’s Efficacy Listening Tour

Thu, 2014-09-11 14:06

Back around New Year, Michael wrote a post examining Pearson’s efficacy initiative and calling on the company to engage in active discussions with various communities within higher education about defining “efficacy” with educators rather than for educators. It turns out that post got a fair bit of attention within the company. It was circulated in a company-wide email from CEO John Fallon, and the blog post and all the comments were required reading for portions of the company leadership. After a series of discussions with the company, we, through our consulting company, have been hired by Pearson to facilitate a few of these conversations. We also asked for and received permission to blog about them. Since this is an exception to our rule that we don’t blog about our paid engagements, we want to tell you a little more about the engagement, our rationale for blogging about it, and the ground rules.

The project itself is fairly straightforward. We’re facilitating conversations with a few different groups of educators in different contexts. The focus of each conversation is how they define and measure educational effectiveness in their respective contexts. There will be some discussion of Pearson’s efficacy efforts at a high level, but mainly for the purpose of trying to map what the educators are telling us about their practices to how Pearson is thinking about efficacy in the current iteration of their approach. After doing a few of these, we’ll bring together the participants along with other educators in a culminating event. At this meeting, the participants will hear a summary of the lessons learned from the earlier conversations, learn a bit more about Pearson’s efficacy work, and then break up into mixed discussion groups to provide more feedback on how to move the efficacy conversation forward and how Pearson’s own efforts can be improved to make them maximally useful to educators.

Since both e-Literate readers and Pearson seemed to get a lot of value from our original post on the topic, we believe there would be value in sharing some of the ongoing conversation here as well. So we asked for and received permission from Pearson to blog about it. Here are the ground rules:

  • We are not getting paid to blog and are under no obligation to blog.
  • Our blog posts do not require prior editorial review by Pearson.
  • Discussions with Pearson during the engagement are considered fair game for blogging unless they are explicitly flagged as otherwise.
  • On the other hand, we will ask Pearson customers for approval prior to writing about their own campus initiatives (and, in fact, will extend that courtesy to all academic participants).

The main focus of these posts, like the engagement itself, is likely to be on how the notion of efficacy resonates (or doesn’t) with various academic communities in various contexts. Defining and measuring the effectiveness of educational experiences – when measurement is possible and sensible – is a subject with much broader applications than Pearson’s product development, which is why we are making an exception to our blogging recusal policy for our consulting engagements and why we appreciate Pearson giving us a free hand to write about what we learn.


GAO Report: Yes, student debt is a growing problem

Thu, 2014-09-11 10:57

In case anyone needed additional information to counter the Brookings-fed meme that “Americans who borrowed to finance their education are no worse off today than they were a generation ago”, the U.S. Government Accountability Office (GAO) released a report yesterday with some significant findings. As reported at Inside Higher Ed by Michael Stratford:

More than 700,000 households headed by Americans 65 or older now carry student debt, according to a report released Wednesday by the U.S. Government Accountability Office. And the amount of debt owed by borrowers 65 and older jumped from $2.8 billion in 2005 to $18.2 billion last year. [snip]

Between 2004 and 2010, for instance, the number of households headed by individuals 65 to 74 with student loan debt more than quadrupled, going from 1 percent to 4 percent of all such families. During that same period, the rate of borrowing among Americans under 44 years old increased between 40 and 80 percent, even though borrowing among that age group is far more prevalent than it is among senior citizens.

I have been highly critical of the Brookings Institution and its report and update. This new information from the GAO goes outside the selective Brookings data set of households headed by people aged 20 – 40, but it should be considered by anyone trying to draw conclusions about student debt holders.

Noting that the Brookings analysis covers “Americans who borrowed to finance their education” while the GAO report covers student debt holders, it is worth asking whether we’re looking at similar definitions. For the most part, yes, as explained at IHE:

While some of the debt reflects loans taken out by parents on behalf of their children, the vast majority — roughly 70 to 80 percent of the outstanding debt — is attributable to the borrowers’ own education. Parent PLUS loans accounted for only about 27 percent of the student debt held by borrowers 50 to 64 years old, and an even smaller share for borrowers over 65.

Go read at least the entire IHE article, if not the entire GAO report.

Student debt is a growing problem in the US, and the Brookings Institution conclusions are misleading at best.


Brookings Institution analysis on student debt becoming a farce

Wed, 2014-09-10 12:39

I have previously written about the deeply flawed Brookings Institution analysis on student debt with its oft-repeated lede:

These data indicate that typical borrowers are no worse off now than they were a generation ago …

Their data come from the triennial Survey of Consumer Finances (SCF) by the Federal Reserve Board, with the original report based on 2010 data. With the release of the 2013 SCF data, the Brookings Institution put out an update to the report this week, and they continue with the lede:

The 2013 data confirm that Americans who borrowed to finance their educations are no worse off today than they were a generation ago. Given the rising returns to postsecondary education, they are probably better off, on average. But just because higher education is still a good investment for most students does not mean that high and rising college costs should be left unquestioned.

This conclusion is drawn despite the following observations of changes from 2010 – 2013 in their own update:

  • The share of young (age 20 – 40) households with student debt rose from 36% to 38%;
  • The average amount of debt per household rose 14%;
  • The distribution of debt holders rose by 50% for debt levels of $20k – $75k and dropped by 19% for debt levels of $1k – $10k; and
  • Wage income is stagnant, at about the same level as ~1999, yet debt amounts have risen by ~50% over the same period (see the chart below).

[Chart: wage income and student borrowing over time]

Brookings’ conclusion from this chart?

The upshot of the 2013 data is that households with education debt today are still no worse off than their counterparts were more than 20 years ago. Even though rising debt continued to cut into stagnant incomes, the average household with debt is better off than it used to be.

The strongest argument that Brookings presents is that the median monthly payment-to-income ratios have stayed fairly consistent at ~4% over the past 20 years. What they fail to mention is that households are taking much longer to pay off student loans now.
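A simple amortization calculation shows the mechanism: stretching the term holds the monthly payment roughly flat even as the balance grows. This is an illustration with made-up loan amounts and an assumed 6.8% rate, not Brookings’ actual methodology:

```python
# Standard amortized monthly payment: M = P * r / (1 - (1 + r)**-n)
def monthly_payment(principal, annual_rate, years):
    r = annual_rate / 12  # monthly interest rate
    n = years * 12        # number of monthly payments
    return principal * r / (1 - (1 + r) ** -n)

# Illustrative only: double the balance, double the repayment term.
p10 = monthly_payment(15_000, 0.068, 10)
p20 = monthly_payment(30_000, 0.068, 20)
print(f"$15k over 10 years: ${p10:.0f}/month")  # ~$173/month
print(f"$30k over 20 years: ${p20:.0f}/month")  # ~$229/month
```

The balance doubles but the monthly payment rises only about a third, which is how a payment-to-income ratio can look stable while total indebtedness climbs.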

More importantly, the Brookings analysis ignores the simple and direct measurement of loan delinquency. See this footnote from the original report [emphasis added]:

These statistics are based on households that had education debt, annual wage income of at least $1,000, and that were making positive monthly payments on student loans. Between 24 and 36 percent of borrowers with wage income of at least $1,000 were not making positive monthly payments, likely due to use of deferment and forbearance …

That’s what I call selective data analysis. In the same SCF report that Brookings used for its update:

[Chart: student loan delinquency rates]

The delinquency rate for student loans has gone up ~50% from 2010 to 2013!

How can anyone claim that Americans with student debt are no worse off when:

  • More people have student debt;
  • The average amount of debt has risen;
  • Wage income has not risen; and
  • The delinquency rate for student loans has risen.

None of the secondary spreadsheet jockeying from Brookings counters these basic facts. This ongoing analysis by Brookings on student debt is a farce.


What the Apple Watch Tells Us About the Future of Ed Tech

Wed, 2014-09-10 12:20

Nothing.

So please, if you’re thinking about writing that post or article, don’t.

I’m begging you.


Say What? Buzzfeed follows up on D2L story with solid reporting

Sun, 2014-09-07 13:14

In a post last month I questioned the growth claims that D2L was pushing to the media based on their recent massive funding round. A key part of the article was pointing out the lack of real reporting from news media.

It is worth noting that not a single media outlet listed by EDUKWEST or quoted above (WSJ, Reuters, Bloomberg, re/code, edSurge, TheStar) challenged or even questioned D2L’s bold claims. It would help if more media outlets didn’t view their job as paraphrasing press releases.

I should give credit where it’s due: Education reporter Molly Hensley-Clancy at Buzzfeed has done some solid reporting with her article out today.

In response to detailed questions from BuzzFeed News about figures to back up its claims of record growth in higher education and internationally, the company released a statement to BuzzFeed News, saying “As a private company, D2L does not publicly disclose these details. The past year has been one of record growth for D2L, culminating in the recent $85 million round of financing.” A representative declined to make the company’s CEO, or any other executive, available for an interview related to the company’s growth.

The stonewalling didn’t come as a surprise to former employees with whom BuzzFeed News spoke.

“The picture they’re painting of growth is not accurate,” said one former employee, who left the company within the last year and asked to remain anonymous, citing his confidentiality agreement with the company. “If you look at actual metrics, they tell a different story. They’re very likely not seeing growth in higher education.”

Molly’s article included discussions with three former D2L employees, an interview with CSU Channel Islands CIO Michael Berman, and a D2L official response (in a manner of speaking). Who would have thought that Buzzfeed would be the source of valuable reporting that challenges the all-too-easy headlines provided through press releases?

Me, for one. If you follow the Buzzfeed education articles, you’ll notice a pattern of this type of reporting – mostly focused on the business of education. Consider the following articles:

In each case, Molly challenges public perceptions, digs up unique information through interviews and document research, and describes the findings in a hard-hitting but balanced article. Buzzfeed is becoming an important source for education news and a welcome addition.


Unizin Updates: Clarification on software development and potential new members

Wed, 2014-09-03 15:11

In a recent post on Kuali, I characterized Unizin as a community source initiative. Brad Wheeler, CIO at Indiana University and co-founder of Kuali and Unizin, responded via email (with permission to quote):

Unizin is not a Community Source effort in the way that I understand Community Source as we started applying the label 10+ years ago. Unizin is better understood, as you have reported, as a cloud-scale service operator somewhat like I2. It does not plan to do lots of software development other than as needed for integrations. No biggie, just a nuanced observation from the end of the story.

Brad is correct, and I note that Unizin documentation has been fairly consistent about the lack of plans for software development, as seen in the Unizin FAQs:

Is Unizin another open- or community-source project like Sakai or Kuali?
No – those endeavors focus on building software as a community, for use by individual institutions. Unizin strives to foster a community more concerned with creating and sharing content and improving outcomes.

I have already revised the Kuali post to add a clarification on this point. I asked Brad whether this means that Unizin is ruling out software development. His reply:

Unizin is working on its roadmap for each area. If we do need to head down some development approach that is more than integration, we’ll give thought to the full range of options for best achieving that, but there is no plan to begin an open/community source effort at this time.

All public indications are that Unizin plans to source existing technologies (as they have done with Canvas as the LMS) for content repository and learning analytics functionality, focusing any software development on integrations.

Potential New Consortium Members

Based on two articles about the University of Iowa’s (D2L) exploration of Unizin,[1] the University of Illinois (Blackboard), University of Wisconsin (D2L), University of Minnesota (Moodle), and Ohio State University (D2L) are listed as schools that are “close to joining” Unizin, quoting the Iowa CIO and an internal Illinois email.

The original four consortium members are Indiana University, University of Michigan, University of Florida and Colorado State University. Unizin materials originally indicated the following set of potential partners:

  • Colorado State University (Blackboard)
  • Indiana University (Sakai)
  • Oregon State University (Blackboard)
  • Purdue University (Blackboard)
  • University of Florida (Sakai, with some use of Canvas)
  • University of Maryland (Canvas)
  • University of Michigan (Sakai)
  • University of Texas (Blackboard, with at least one campus in the process of moving to Canvas)
  • University of Wisconsin (Desire2Learn)
  • University of Utah (Canvas)

Of the new schools mentioned, only Wisconsin appears on the earlier list.

From the first article:

Lon Moeller, the associate provost for undergraduate education, will present the idea to the council. He said he is not yet sure whether the UI will commit to Unizin.

“We’re talking to faculty and administrators across campus to see if there’s interest in Unizin as a program,” he said. “We’re evaluating Unizin as a pilot project.”

From the second article:

“We are discussing doing a pilot project for next spring, but we need to decide if the project is worth our time,” Fleagle [CIO] said.

The articles make it sound like UI officials are pushing for the campus to join Unizin:

If the UI decided to join Unizin, it could have input into the new software as well as help to develop it.

“There is no software yet. The four partners [Colorado State, Indiana, Florida, and Michigan] are working on it,” Fleagle said. “They have groups working on it now. If we wanted to do it, we need to get in so that Iowa’s needs can get met … The disadvantage of waiting is that you don’t get access to those discussions right away. I think we have as good of people at Iowa as anyone else involved influencing it.”

I talked to Maggie Jesse, Senior IT Director for the Office of Teaching, Learning, & Technology, to get a better understanding of their decision process. The picture that emerges is a campus that has been invited to join Unizin, is potentially willing to do a pilot program, but is not in marketing mode.

To set the stage, Iowa recently performed an assessment to get campus feedback on ICON, their branded version of the LMS. The message coming back from the assessment was that faculty were not motivated to make a major change – they were not unhappy and did not need the burden of an LMS change. Furthermore, Iowa just completed an upgrade to ICON, changing the D2L version and reconfiguring the campus hosting and support structure.

In the article quoted above, a faculty member had asked whether Iowa should just wait for others to join Unizin and let Iowa decide later. Steve Fleagle answered that the argument for deciding early was to influence Unizin’s direction. The pilot option allows Iowa to work within its usual culture of campus decision making – piloting a system and then taking the outcomes to the campus for a decision. Iowa has developed a communications plan to discuss a possible Unizin pilot with over 20 different campus groups to determine if there is interest in Unizin, and yesterday’s meeting was just one of these.

Illinois Confusion

The first article quoted sections of an email at the University of Illinois that reflected a poor understanding of Unizin [emphasis added].

Charles Tucker, the vice provost for undergraduate education and innovation at the University of Illinois, wrote in an email that the benefits of many universities joining Unizin include being able to band together to “influence the companies that provide software for teaching and learning.” [snip]

Tucker said the system can streamline different technologies to make things simpler for professors and students.

“The Unizin approach is to promote a standards-based approach where the same digital course materials can work in any learning-management system,” he said. “Faculty spend less time making the technology work, and students get a better course.”

This statement is at odds with the description at Inside Higher Ed from July:

The digital learning consortium, announced Wednesday morning, aims to simplify how universities share learning analytics, content and software platforms. But in order to do so, Unizin needs its members to use the same infrastructure. A common learning management system is the first part of that package.

“You don’t really have common infrastructure if you’re saying everything is heterogeneous,” said Brad Wheeler, the Unizin co-founder who serves as vice president for IT and chief information officer at Indiana University. “A lot of these different learning tools — Sakai, Blackboard, Canvas — they all do a bunch of really good stuff. But five universities picking five different ones — what’s the end value in that if they want to do something together?” [snip]

“The presumption is if I join Unizin, I’m going to be using Canvas sooner or later,” said Joel Dehlin, Instructure’s chief technology officer [ed. now CEO of Kuali's for-profit arm]. Joining Unizin without using Canvas, he added, is “like joining a country club and paying for the country club and not using the golf course.”

The goal of Unizin includes promoting control over digital course materials and even various forms of content reuse. This goal is not based on LMS neutrality driven by standards; it is based on having a common infrastructure that would sit on top of, and benefit from, standards. Note that these are goals and not yet realities. However, Illinois might benefit from making its decision based on a more accurate understanding of Unizin.

Update: See the comment below from Rob Abel, CEO of the IMS Global Learning Consortium, regarding standards.

  1. Disclosure: I was directly involved as a consultant in the original Iowa decision to adopt D2L. See this link for details.


Cal State Online: Public records shed light on what happened

Thu, 2014-08-28 14:35

Last month I shared the system announcement that the Cal State Online (CSO) initiative is finished. Despite the phrasing of “re-visioning” and the retention of the name, the concept of a standalone unit to deliver and market online programs for the system is gone. Based on documents obtained by e-Literate through a public records request:[1]

  • The original concept of “a standardized, centralized, comprehensive business, marketing and outreach support structure for all aspects of online program delivery for the Cal State University System” was defined in summer 2011, formally launched in Spring 2013, and ultimately abandoned in Fall 2013;
  • CSO was only able to enroll 130 full-time equivalent students (FTES) in CY2013 despite starting from pre-existing campus-based online programs and despite minimum thresholds of 1,670 FTES in the Pearson contract;
  • CSO was able to sign up only five undergraduate degree-completion programs and two master’s programs offered at four of the 23 Cal State campuses;
  • Faculty groups overtly supported investments in online education but did not feel included in the key decision processes;
  • Pearson’s contract as a full-service Online Service Provider was in place for less than one year before contract renegotiations began, ultimately leading to LMS services only; and
  • The ultimate trigger to abandon the original model was the $10 million state funding for online education to address bottleneck courses.

That last one might seem counter-intuitive without the understanding that CSO did not even attempt to support matriculated Cal State students in state-funded programs.

Terminology note: CSO measured course enrollments as “one student registered in one online course”, such that one student taking two courses would equal two course enrollments, etc. Internally CSO calculated 10 course enrollments = 1 FTES.
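Using that conversion, a quick back-of-the-envelope check (my own arithmetic, using the figures cited above) shows just how far short of the Pearson contract threshold CSO’s 2013 enrollment fell:

```python
# CSO's internal conversion: 10 course enrollments = 1 FTES.
ENROLLMENTS_PER_FTES = 10

actual_ftes_2013 = 130             # CY2013 actual, per CSO documents
contract_min_enrollments = 16_701  # CY2013 minimum in the Pearson contract

actual_enrollments = actual_ftes_2013 * ENROLLMENTS_PER_FTES
print(f"Actual course enrollments: {actual_enrollments:,}")        # 1,300
print(f"Contract minimum:          {contract_min_enrollments:,}")  # 16,701
print(f"Share achieved: {actual_enrollments / contract_min_enrollments:.0%}")  # 8%
```

In other words, CSO reached roughly 8% of the enrollment level its own contract assumed.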

Below is a narrative of the key milestones and decisions as described by the public documents. I’ll share more of my thoughts in a future post.

2011

Based on foundational work done in 2010 by the Technology Steering Committee (TSC), a group of nine campus presidents along with six Chancellor’s Office staff, a contract was awarded to a consultant (Richard Katz and Associates) to produce five reports on online learning (link will download zip file) and the Cal State University system’s work to date. TSC then produced an overview document for what would become CSO in June 2011, including 10 guiding principles and the first schedule estimate. An October 2011 update document further clarified the plans. Some key decisions made in 2011 included forming a separate 501(c)3 organization owned by Cal State University and funding the creation of CSO through a contribution of $50,000 from each of the 23 CSU campuses.

Two key decisions from this period are worth highlighting, as they explain much of the trajectory of CSO in retrospect. The first one defined the need for an Online Service Provider (ultimately chosen as Pearson).

A business partner for CSU Online might be needed in order to provide the necessary student support services, including, for example, advising, financial aid, career services, and tutoring. In addition, a business partner could provide the 24/7/365 help desk support absolutely critical for CSU Online. Market research and marketing of programs are other potential areas for the contributions of a business partner. Instructional design support for faculty is another potential area, as is technological support for the effort.

The second decision defined a strategy in terms of which types of online programs to add in which order.

Following from the bedrock of our Principles, the TSC supported a tactical entrance into CSU Online by focusing on those areas in which CSU campuses are already strong and proficient. We believe that it is imperative to start from a position of program strength rather than to straggle into the market in areas as yet not fully defined or ready for implementation. Accordingly, the TSC recommends that CSU Online address six areas, with two ready for immediate roll out.

  1. The 60 or so Masters level programs that exist throughout the CSU should comprise our initial effort with an eye toward serving the extensive mid-career professional and unemployed adults who are in need of this level of education to advance their careers.
  2. Our second focus should entail the presentation of two or three degree completion programs in an effort to enhance workforce development.

An important note on both of these areas: they are self-support programs, offered through continuing or extended education groups and not eligible for state funding. Self-support programs do not face the same constraints on setting tuition and tend to set it significantly higher than state-supported mainline programs.

The overview also estimated a timeline that included an RFP for a commercial partner (OSP) to be released in Fall 2011.

By late 2011 there were already signs of faculty discontent over the limited inclusion of faculty in CSO decision-making and over the planned use of a commercial partner. The Cal State Dominguez Hills faculty senate resolved in November:

Growing faculty concerns about the minimal faculty input in the development of the Online Initiative, as well as the direction the Initiative may be taking have led three Academic Senates (CSUSB, CSU Stanislaus, and Sonoma State) to pass resolutions calling for the suspension of the Initiative until basic issues are addressed and approved by campus senates. In addition a “CSU Online Faculty Task Force,” consisting of over 80 faculty across the CSU, has been actively questioning features of the Initiative and has written an open letter to Chancellor Reed expressing opposition to outsourcing to for‐profit online providers or attempts to circumvent collective bargaining.

The task force open letter can be found here.

2012

The RFP was actually released in April 2012. To my reading, the document was disorganized and lacked enough structure to let bidders know what to expect or what was needed. On schedule and enrollments, the RFP advised the following:

1.5 Cal State Online expects to officially launch in January 2013, with as many as ten degree programs. For the late fall 2012 term (beginning in late October 2012) Cal State Online anticipates offering two to three courses in several programs in a live beta test term.

1.6 ENROLLMENT PROJECTIONS Vendors should base proposals on 1,000 three unit course enrollments in year one and 3,000 three unit course enrollments in year two.

The RFP evaluation process was described in the first CSO Advisory Board meeting notes from June 2012, showing that the final decision came down to Pearson versus Academic Partnerships. Pearson was selected as the partner, and their contract[2] contains an unexplained change in enrollments.

The spending amounts detailed below (which may also be increased as appropriate, in Pearson’s discretion) are dependent on Cal State Online meeting the defined Enrollment thresholds for the prior calendar year. If Cal State Online does not meet such thresholds, the spending amounts for the then-current calendar year will be adjusted to reflect the actual number of enrollments achieved during the previous calendar year.

[Table: Pearson contract enrollment thresholds]

I do not know how the numbers went from an estimate of 1,000 course enrollments for 2013 in the RFP to a minimum of 16,701 course enrollments for 2013 in the contract. In retrospect, this huge increase can be described as wishful thinking, perhaps with the goal of making the financial case work for both CSO and Pearson.

The Advisory Board also decided in the June 2012 meeting to set standardized tuition for CSO at $500 per unit (compared with approximately $270 per unit for a traditional campus student taking 12 units per semester).

By October CSO had identified the specific campus programs interested in participating, documented in the Launch Programs Report. The first page called out two of the first programs bringing in 200 students and 20 students – in other words, CSO migrated several hundred existing students to get started.

[Image: Launch Programs Report, October 2012 (page 1 of 3)]

2013: Winter and Spring

In the Spring 2013 term, CSO kicked off with the Launch Programs described in the February 2013 Advisory Board meeting minutes.

Launch Programs: 6 Programs from 3 Campuses

  • CSU Fullerton launched 3 courses in their online Business BA program January 14th 2013; marketing and recruiting of next group of students in progress. 35 + 18 Existing Students.
  • CSU Dominguez Hills will launch their BA MBA and PA MPA online programs in spring 2013; marketing and recruiting students is in progress. BA Applied Studies will launch in summer 2013; first CSU reconnect program.
  • CSU Monterey Bay will launch two new masters programs, Technology and MS in IT Management in spring 2013 and MS in Instructional Science and Technology will launch in summer 2013. Marketing to begin ASAP.

The notes also call out a financial model (document not shared with Advisory Board but notes taken) with three scenarios.

Three scenarios:

  • Scenerio [sic] 1: Baseline Growth Modeling where projected enrollments grom [sic] from 188 to 7500; programs grom from 3 to 25; revenues from to over $11 million and additional investment required $2.2 million. Break even in FY 12/14.
  • Scenario 2: Break Even in fiscal year 2012/14 Modeling where enrollments from from 188 to 15,750, programs grom from 3 to 30, revenues grom to over 23 million and additional investment required is $1 million.
  • Scenario 3: Best/Strong Growth where enrollments grow from 254 to 36,250, programs grow from 3 to 50, revenues grow to over $54 million and additional investment required is $1 million.

The budget planning seems to fall on fiscal years (Jul 1 – Jun 30), whereas all other CSO planning was based on calendar years. Note that the best case scenario included an additional $1 million in CSU investment, and the baseline scenario estimated 7,500 course enrollments from Fall 13 thru Spring 14. Based on an email exchange with CSU Public Affairs, Fall 13 saw almost 1,200 course enrollments, which would have required a six-fold increase in Spring 14 just to make the baseline scenario.
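For what it’s worth, the revenue figures in these scenarios line up with simple arithmetic at the Advisory Board’s $500-per-unit tuition and three-unit courses (my own check, not a CSO calculation):

```python
# Revenue check: course enrollments x 3 units x $500 per unit.
UNITS_PER_COURSE, TUITION_PER_UNIT = 3, 500

scenarios = {
    "Baseline growth": 7_500,
    "Break even": 15_750,
    "Best/strong growth": 36_250,
}
for name, enrollments in scenarios.items():
    revenue = enrollments * UNITS_PER_COURSE * TUITION_PER_UNIT
    print(f"{name}: {enrollments:,} enrollments -> ${revenue / 1e6:.1f}M")

# Baseline growth: 7,500 enrollments -> $11.2M     (notes: "over $11 million")
# Break even: 15,750 enrollments -> $23.6M         (notes: "over 23 million")
# Best/strong growth: 36,250 enrollments -> $54.4M (notes: "over $54 million")
```

The consistency suggests the scenarios were straightforward tuition-times-enrollment projections rather than models accounting for discounting or attrition.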

Update: Also in February, CSO executive director Ruth Claire Black testified at the Little Hoover Commission (an independent state oversight board in California) describing the CSO initiative as part of discussion on state needs in higher education.

By the April Advisory Board meeting, CSO was seeing some positive interest from campuses, although the numbers were fairly modest compared to previous CSO estimations.

April Launch Report

  • Fullerton business degree completion program is making good progress; 83 applications pending, 17 admitted for fall. Heavily oversubscribed for Fullerton. Good review from students on coaching. 50% of inquiries are for Fullerton program.
  • Dominguez Hills BS Applied Studies program starts May 4. Large cohort of existing students. 13 students admitted for summer; fall 17 students admitted.
  • The next undergraduate program will be the Northridge Reconnect program. In the next 30 days website will be updated to reflect Reconnect.
  • Fresno MBA 60 inquiries; 1 applicant and 1 admission
  • Other 4 grad programs slow build; redirect marketing resources towards masters programs
  • Fresno Homeland Security Certificate website and Humboldt Golden Four are up on website. We are seeing equal demand across the courses (3 GE courses)
  • Interest list has grown significantly; campuses who are not currently participating Cal State Online is full for fall. If existing Cal State Online campus may have capacity. Sociology at Fullerton. Dominguez Hills QA for fall start. Taking advantage of launch financial model.

The notes showed the group watching new activity from the California state legislature regarding online education, including the infamous SB 520.[3] This raised the question of what Cal State Online’s role should be with this new emphasis. [emphasis added below]

Can Cal State Online fulfill the role of putting all online? Where should we focus? State side or Cal State Online. Chancellor wants this to happen. Ruth and Marge are working on a plan. Need to be cautious to not cause confusion to students and not diminish Cal State Online.

Requirement of bill is that courses must be articulated statewide. Makes sense for Cal State Online to take ownership.

In May the CSU faculty senate passed a resolution calling on Cal State Online to promote all online programs and not just the six run through CSO.

RESOLVED: That all online degree programs offered by CSU campuses be given the same degree of prominence on the Calstateonline.com and Calstateonline.net websites as the online degree programs offered through Cal State Online; and be it further

RESOLVED: That there should be no charge for listing state-support online degree programs on the Calstateonline.com and Calstateonline.net websites;

By the June Advisory Board meeting, there was some progress for Fall enrollments, and there was concern that the state legislature did not understand the bottleneck problem.

Legislature thinks that if students knew about online courses our bottleneck problem would be solved. State is not funding FTES. Enrolling students online will need state subsidy. There is a belief that we can educate students online cheaply. There is a disconnect in Sacramento. Enrollment caps are more the issue, not bottlenecks.

There was also an enrollment presentation for the June meeting:

Download (PDF, 221KB)

2013: Summer and Fall

Despite planned meetings every two months, the CSO Advisory Board did not meet again until October, and in this interim the decision was made to abandon the original concept and to change the Pearson contract. Advisory Board members were not pleased with the process.

In early summer Pearson requested changes in the CSU/Pearson contract; wanted to increase CSU costs for services. The quality of the marketing provided by Pearson was not adequate. There were multiple meetings between Pearson and Cal State Online to resolve concerns resulting in changes to the contract.

The new marketing firm for Cal State Online is DENT; replaces Pearson; started in July 2013. So far there is a high level of satisfaction.

A communication was distributed to the Advisory Board and CSU system stakeholders on October 17th regarding the Pearson/Cal State Online contract changes. The communication can be found on the Cal State Online CSYOU site [ed. no longer available].

Discussion/Comments: 

  • Members of the Advisory Board stated that there was little to no communication to them about the changes taking place. The last board meeting was a teleconference call in June and the August in-person meeting was cancelled.
    • There was a need to keep only a small number of people involved during the complicated negotiation process

The CSO entity was never formed as a 501(c)3 organization, and with the summer changes CSO would now report to Academic Affairs. The meeting notes further describe the changes.

The current Cal State Online business model will be in place until the end of 2013 and will then change. The Advisory Board will help identify opportunities and provide direction. It is anticipated that this will result in some changes in current program participation but hope that the current campuses will continue. Since campuses now have the option to use the LMS platform of their choice some campuses may elect to change to their own platform. [snip]

The Governor contributed $10 million to increase online education within the CSU. AB 386 Levine. Public postsecondary education: cross-enrollment: online education at the California State University was approved by the Governor on September 26, 2013 [emphasis added].

  • With the changes in the Pearson relationship and the passing of AB 386 we are now taking a much broader view of Cal State Online; will be used as a store front for CSU online courses. All online courses and programs in system will have Cal State Online as the store front.

The CSU faculty senate unanimously passed another resolution related to CSO in November. The resolution applauded the movement of CSO to report to Academic Affairs and the allowance for campus selection of LMS, but the real focus was the lack of faculty input in the decision-making.

RESOLVED: That the Academic Senate of the California State University (ASCSU) express its dismay that recent changes to Cal State Online were announced to system constituencies without review or input from the Cal State Online Advisory Board; and be it further [snip]

RESOLVED: That the ASCSU contend that the dissolution of the Cal State Online Board should not occur until a plan for a new governance structure that includes faculty is established, and be it further

RESOLVED: That the ASCSU recommend the establishment of a newly configured Cal State Online system-wide advisory committee to include at least 5 faculty members, and the creation of a charge, in a partnership between the ASCSU and the Academic Affairs division of the Chancellor’s Office;

This issue – involvement in decision-making – was continued at the final Advisory Board meeting just three days after the senate resolution.

Ephraim Smith (VP Academic Affairs): The Cal State Online Board was originally created for a 501c3 organization but there was a change in direction and did not pursue 501c3; board then acted as advisory. Now that Cal State Online has moved to Academic Affairs the question is how should it interact with constituencies; work through existing committees? Need to discuss.

There are three full pages of notes on the resultant discussion, ending in a plan to form a commission that looks broadly at online education across the CSU.

2014

Despite the decision being made in Fall 2013 on the major changes to Cal State Online, the systemwide communication listed in my July post was not made until June 2014. The above description is mostly based on CSO documentation, but I plan to add a few of my own thoughts of the lessons learned from this short-lived online initiative in a future post.

  1. CSU officials did not respond to requests to be interviewed for this story. The offer is still open if someone would like to comment.
  2. The contract is no longer available in public, so I will only share one excerpt here.
  3. Disclosure: Michael and I wrote a white paper for 20 Million Minds Foundation calling out how Cal State Online did not attempt to address relieving bottleneck courses for matriculated students, which was the purported goal of much of the state legislative debate.

The post Cal State Online: Public records shed light on what happened appeared first on e-Literate.

Numbers: Administrative Costs Soaring? Maybe not

Thu, 2014-08-28 09:19

August 27, 2014

There’s just a mind-boggling amount of money per student that’s being spent on administration

Andrew Gillen, quoted in “New Analysis Shows Problematic Boom in Higher Ed Administrators,” Huffington Post, August 26, 2014

 Administrative growth drives up costs at state-owned universities

Debra Erdley, TribLive, July 28, 2013

 Across U.S. higher education, nonclassroom costs have ballooned, administrative payrolls being a prime example.

Wall Street Journal as quoted by Phil Hill, e-Literate, January 2, 2013

 Administrative costs on college campuses are soaring.

J. Paul Robinson, quoted in “Bureaucrats Paid $250,000 Feed Outcry Over College Costs,” Bloomberg News, November 14, 2012

 Administrative Costs Mushrooming

George Leef, John William Pope Center for Higher Education Policy, September 15, 2010

 

Are these claims true, or generalizations that lack the rigor of research? What do the data say?

Since 2004, the National Center for Education Statistics (NCES) Integrated Postsecondary Education Data System (IPEDS) financial survey of colleges and universities has reported the costs of Institutional Support in a standard form. This broad category includes “general administrative services, central executive-level activities concerned with management, legal and fiscal operations, space management, employee personnel and records, … and information technology.” In business this is often called “administration.”

Data from NCES’s Digest of Education Statistics 2012 show decreases in cost per student from 2003-04 through 2010-11 in every sector except public 4-year colleges and universities, which increased expenses by 4.1%, as shown in Table 1.

| Institutional Support per Student | 2003-04 | 2010-11 | Change |
|---|---|---|---|
| Public 4 year | $2,212 | $2,302 | 4.1% |
| Private 4 year | $4,611 | $3,887 | -15.7% |
| Public 2 year | $1,045 | $875 | -16.3% |
| Private 2 year | $783 | $401 | -48.8% |

Table 1 – Cost of “administration” per enrolled student

These data are expressed in July 2014 dollars, adjusted using the Consumer Price Index (CPI-U), so the results are unaffected by inflation. The year 2003-04 was selected for comparison because it was the first year whose data definitions and formats were consistent with 2010-11. Because private colleges and universities do not report operation of plant, that cost was omitted from the percentage computations for both sectors. Headcount was used since administrative expenses are more closely related to enrollment of real students than to a mythical full-time equivalent (FTE).
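For readers who want to check the arithmetic, here is a minimal sketch of the computation behind the Change column, assuming the Table 1 figures are already in constant dollars. The cpi_adjust() helper illustrates the deflation step in general form only; its 1.26 ratio is a made-up placeholder, not the actual CPI-U factor.

```python
# Sketch of the Table 1 arithmetic. The constant-dollar figures below are
# taken from Table 1; cpi_adjust() shows the general deflation step, with
# an illustrative placeholder ratio (1.26 is not the real CPI-U factor).

def cpi_adjust(nominal_dollars, cpi_ratio=1.26):
    """Convert nominal dollars to constant (e.g., July 2014) dollars."""
    return nominal_dollars * cpi_ratio

def pct_change(old, new):
    """Percent change from old to new."""
    return (new - old) / old * 100

# Constant-dollar institutional support per enrolled student (Table 1).
table1 = {
    "Public 4 year":  (2212, 2302),
    "Private 4 year": (4611, 3887),
    "Public 2 year":  (1045, 875),
    "Private 2 year": (783, 401),
}

for sector, (y2004, y2011) in table1.items():
    print(f"{sector}: {pct_change(y2004, y2011):+.1f}%")
# Prints +4.1%, -15.7%, -16.3%, -48.8% -- matching the Change column above.
```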

These data are shown graphically in Figure 1.


Figure 1 – Comparative Administrative Expenses 2003-2004 and 2010-2011

Administration expenses as a percent of institutional expenses, omitting independent organizations, hospitals, and auxiliary enterprises, are shown in Figure 2.


Figure 2 – Administration Expenses as a Percent of Institutional Expenses

The percentages are nearly equal for the two years, even though administration expenses per student declined over this period everywhere except at public 4-year colleges and universities. This reduction, likely true for the cost of instruction as well, reflects increased enrollment against institutional budgets that were typically less than or about the same as in 2003-04.

The IPEDS revision introduced in the late 1970s and early 1980s was based on program budgeting. The mission of the college or university was considered to be a combination of instruction, research, and public service—sometimes called direct costs. The library and computing were consolidated into academic support on the belief that books would transition into electronic documents. Student services is another indirect category; it includes admissions, the registrar, and activities that contribute to students’ emotional and physical well-being, such as intramural athletics and student organizations. Intercollegiate athletics and student health services may be included “except when operated as self-supporting auxiliary enterprises.”

IPEDS tried to keep financial aid out of the institutional expenses of mission-based programs since it is, in effect, a transfer payment from one student (tuition paid) to another (tuition discount).

NCES now makes the data from these surveys available using several different statistical tools (software).

The NCES data are very useful for analysis and for communicating with a public that seems to be receiving more opinions than facts.

This analysis is an example of verifying assertions that administration expenses are mushrooming, soaring, or ballooning.

Are administrative expenses soaring? The evidence is “no.” But that doesn’t make a sensational headline.

The post Numbers: Administrative Costs Soaring? Maybe not appeared first on e-Literate.

Community Source Is Dead

Tue, 2014-08-26 11:21

As Phil noted in yesterday’s post, Kuali is moving to a for-profit model, and it looks like it is motivated more by sustainability pressures than by some grand affirmative vision for the organization. There has been a long-term debate in higher education about the value of “community source,” which is a particular governance and funding model for open source projects. This debate is arguably one of the reasons why Indiana University left the Sakai Foundation (as I will get into later in this post). At the moment, Kuali is easily the most high-profile and well-funded project that still identifies itself as Community Source. The fact that this project, led by the single most vocal proponent for the Community Source model, is moving to a different model strongly suggests that Community Source has failed.

It’s worth taking some time to talk about why it has failed, because the story has implications for a wide range of open-licensed educational projects. For example, it is very relevant to my recent post on business models for Open Educational Resources (OER).

What Is Community Source?

The term “Community Source” has a specific meaning and history within higher education. It was first (and possibly only) applied to a series of open source software projects funded by the Mellon Foundation, including Sakai, Kuali, Fedora, and DSpace (the latter two of which have merged). As originally conceived, Community Source was an approach that was intended to solve a perceived resource allocation problem in open source. As then-Mellon Foundation Associate Program Officer Chris Mackie put it,

For all that the OSS movement has produced some runaway successes, including projects like Perl, Linux, and Mozilla Firefox, there appear to be certain types of challenges that are difficult for OSS to tackle. Most notably, voluntaristic OSS projects struggle to launch products whose primary customers are institutions rather than individuals: financial or HR systems rather than Web servers or browsers; or uniform, manageable desktop environments rather than programming languages or operating systems. This limitation may trace to any of several factors: the number of programmers having the special expertise required to deliver an enterprise information system may be too small to sustain a community; the software may be inherently too unglamorous or uninteresting to attract volunteers; the benefits of the software may be too diffuse to encourage beneficiaries to collaborate to produce it; the software may be too complex for its development to be coordinated on a purely volunteer basis; the software may require the active, committed participation of specific firms or institutions having strong disincentives to participate in OSS; and so on. Any of these factors might be enough to prevent the successful formation of an OSS project, and there are many useful types of enterprise software—including much of the enterprise software needed by higher education institutions—to which several of them apply. In short, however well a standard OSS approach may work for many projects, there is little reason to believe that the same model can work for every conceivable software project.

This is not very different from the argument I made recently about OER:

In the early days of open source, projects were typically supported through individual volunteers or small collections of volunteers, which limited the kinds and size of open source software projects that could be created. This is also largely the state of OER today. Much of it is built by volunteers. Sometimes it is grant funded, but there typically is not grant money to maintain and update it. Under these circumstances, if the project is of the type that can be adequately well maintained through committed volunteer efforts, then it can survive and potentially thrive. If not, then it will languish and potentially die.

The Mellon Foundation’s answer to this problem was Community Source, again as described by Chris Mackie:

Under this new model, several institutions contract together to build software for a common need, with the intent of releasing that software as open source. The institutions form a virtual development organization consisting of employees seconded from each of the partners. This entity is governed cooperatively by the partners and managed as if it were an enterprise software development organization, with project and team leads, architects, developers, and usability specialists, and all the trappings of organizational life, including reporting relationships and formal incentive structures. During and after the initial construction phase, the consortial partners open the project and invite in anyone who cares to contribute; over time the project evolves into a more ordinary OSS project, albeit one in which institutions rather than individual volunteers usually continue to play a major role.

A good friend of mine who has been involved in Mellon-funded projects since the early days describes Community Source more succinctly as a consortium with a license. Consortial development is a longstanding and well understood method of getting things done in higher education. If I say to you, “Kuali is a consortium of universities trying to build an ERP system together,” you will probably have some fairly well-developed notions of what the pros and cons of that approach might be. The primary innovation of Community Source is that it adds an open source license to the product that the consortium develops, thus enabling another (outer) circle of schools to adopt and contribute to the project. But make no mistake: Community Source functions primarily like a traditional institutional consortium. This can be best encapsulated by what Community Source proponents refer to as the Golden Rule: “If you bring the gold then you make the rules.”[1]

Proponents of Community Source suggested even from the early days that Community Source is different from open source. Technically, that’s not true, since Community Source projects produce open source software. But it is fair to say that Community Source borrows the innovation of the open source license while maintaining traditional consortial governance and enterprise software management techniques. Indiana University CIO and Community Source proponent Brad Wheeler sometimes refers to Community Source as “the pub between the Cathedral and the Bazaar” (a reference to Eric Raymond’s seminal essay on open source development). More recently, Brad and University of Michigan’s Dean of Libraries James Hilton codified what they consider to be the contrasts between open source and Community Source in their essay “The Marketecture of Community,” which Brad elaborates on in his piece “Speeding Up On Curves.” They represent different models of procuring software in a two-by-two matrix, where the dimensions are “authority” and “influence”:

Note that both of these dimensions are about the degree of control that the purchaser has in deciding what goes into the software. It is fundamentally a procurement perspective. However, procuring software and developing software are very different processes.

A Case Study in Failure and Success

The Sakai community and the projects under its umbrella provide an interesting historical example of how Community Source has worked and where it has broken down. In its early days, Indiana University and the University of Michigan were primary contributors to Sakai and very much promoted the idea of Community Source. I remember a former colleague returning from a Sakai conference in the summer of 2005 commenting, “That was the strangest open source conference I have ever been to. I have never seen an open source project use the number of dollars they have raised as their primary measure of success.” The model was very heavily consortial in those days, and the development of the project reflected that model. Different schools built different modules, which were then integrated into a portal. As Conway’s Law predicts, this organizational decision led to a number of technical decisions. Modules developed by different schools were of differing quality and often integrated with each other poorly. The portal framework created serious usability problems, like breaking the “back” button on the browser. Some of the architectural consequences of this approach took many years to remediate. Nevertheless, Sakai did achieve a small but significant minority of U.S. higher education market share, which peaked a few years ago. Here’s a graph showing the growth of non-Blackboard LMSs in the US as of 2010, courtesy of data from the Campus Computing Project:

Meanwhile, around 2009, Cambridge University built the first prototype of what was then called “Sakai 3.” It was intended to be a ground-up rewrite of a next-generation system. Cambridge began developing it themselves as an experiment out of their Centre for Applied Research in Educational Technologies, but it was quickly seized upon by NYU and several other schools in the Sakai community as interesting and “the future.” A consortial model was spun up around it, and then spun up some more. Under pressure from Indiana University and University of Michigan, the project group created multiple layers of governance, the highest of which eventually required a $500K institutional commitment in order to participate. Feature requirements and deadlines proliferated, while project velocity slowed. The project hit technical hurdles, principally around scalability, that it was unable to resolve, particularly given ambitious deadlines for new functionality. In mid-2012, Indiana University and University of Michigan “paused investment” in the project. Shortly thereafter, they left the project altogether, taking with them monies that they had previously committed to invest under a Memorandum of Understanding. The project quickly collapsed after that, with several other major investors leaving. (Reread Phil’s post from yesterday with this in mind and you’ll see the implications for measuring Kuali’s financial health.)

Interestingly, the project didn’t die. Greatly diminished in resources but freed from the governance and management constraints of the consortial approach, the remaining team not only finally re-architected the platform to solve the scalability problems but also has managed seven major releases since that implosion in 2012. The project, now called Apereo OAE, has returned to its roots as an academic (including learning) collaboration platform and is not trying to be a direct LMS replacement. It has even begun to pick up significant numbers of new adoptees—a subject that I will return to in a future post.

It’s hard to look at the trajectory of this project and not conclude that the Community Source model was a fairly direct and significant cause of its troubles. Part of the problem was the complex negotiations that come along with any consortium. But a bigger part, in my opinion, was the set of largely obsolete enterprise software management attitudes and techniques that come along as a not-so-hidden part of the Community Source philosophy. In practice, Community Source is essentially a project management approach focused on maximizing the control and influence of the IT managers whose budgets are paying for the projects. But those people are often not the right people to make decisions about software development, and the waterfall processes that they often demand in order to exert that influence and control (particularly in a consortial setting) are antithetical to current best practices in software engineering. In my opinion, Community Source is dead primarily because the Gantt Chart is dead.

Not One Problem but Two

Community Source was originally developed to address one problem, which was the challenge of marshalling development resources for complex (and sometimes boring) software development projects that benefit higher education. It is important to understand that, in the 20 years since the Mellon Foundation began promoting the approach, a lot has changed in the world of software development. To begin with, there are many more open source frameworks and better tools for developing good software more quickly. As a result, the number of people needed for software products (including voluntaristic open source projects) has shrunk dramatically—in some cases by as much as an order of magnitude. Instructure is a great example of a software platform that reached first release with probably less than a tenth of the money that Sakai took to reach its first release. But also, we can reconsider that “voluntaristic” requirement in a variety of ways. I have seen a lot of skepticism about the notion of Kuali moving to a commercial model. Kent Brooks’ recent post is a good example. The funny thing about it, though, is that he waxes poetic about Moodle, which has a particularly rich network of for-profit companies upon which it depends for development, including Martin Dougiamas’ company at the center. In fact, in his graphic of his ideal world of all open source, almost every project listed has one or more commercial companies behind it without which it would either not exist or would be struggling to improve:

BigBlueButton is developed entirely by a commercial entity. The Apache web server gets roughly 80% of its contributions from commercial entities, many of which (like IBM) get direct financial benefit from the project. And Google Apps aren’t even open source. They’re just free. Some of these projects have strong methods for incorporating voluntaristic user contributions and taking community input on requirements, while others have weak ones. But across that spectrum of practices, community models, and sustainability models, they manage to deliver value. There is no one magic formula that is obviously superior to the others in all cases. This is not to say that shifting Kuali’s sustainability model to a commercial entity is inevitably a fine idea that will succeed in enabling the software to thrive while preserving the community’s values. It’s simply to say that moving to a commercially-driven sustainability model isn’t inherently bad or evil. The value (or lack thereof) will all depend on how the shift is done and what the Kuali-adopting schools see as their primary goals.

But there is also a second problem we must consider—one that we’ve learned to worry about in the last couple of decades of progress in the craft of software engineering (or possibly a lot earlier, if you want to go back as far as the publication of The Mythical Man Month). What is the best way to plan and execute software development projects in light of the high degree of uncertainty inherent in developing any software with non-trivial complexity and a non-trivial set of potential users? If Community Source failed primarily because consortia are hard to coordinate, then moving to corporate management should solve that problem. But if it failed primarily because it reproduces failed IT management practices, then moving to a more centralized decision-making model could exacerbate the problem. Shifting the main stakeholders in the project from consortium partners to company investors and board members does not require a change in this mindset. No matter who the CEO of the new entity is, I personally don’t see Kuali succeeding unless it can throw off its legacy of Community Source IT consortium mentality and the obsolete, 1990s-era IT management practices that undergird it.

  1. No, I did not make that up. See, for example, https://chronicle.com/article/Business-Software-Built-by/49147

The post Community Source Is Dead appeared first on e-Literate.

Kuali For-Profit: Change is an indicator of bigger issues

Mon, 2014-08-25 14:28

On Friday the Kuali Foundation announced the creation of a new for-profit entity to be led by the former CTO of Instructure, Joel Dehlin. Jeff Young at the Chronicle described the change:

Ten years ago, a group of universities started a collaborative software project touted as an alternative to commercial software companies, which were criticized as too costly. On Friday the project’s leaders made a surprising announcement: that it would essentially become a commercial entity. [snip]

The Kuali Foundation will continue to exist as a non-profit, but it will be an investor in a new commercial entity to back the Kuali software development. Leaders insisted that they would maintain the values of the project despite creating the kind of organization that they once criticized. For one thing, the source software will remain free and open, but the company will sell services, like software hosting. On Friday the group issued an FAQ with details about the change.

As Carl Straumsheim put it at Inside Higher Ed:

The Kuali Foundation, after a decade of fighting commercial software vendors as a community source initiative, will launch a commercial company to better fight… commercial software vendors.

Despite the positioning that this change is about innovating into the next decade, there is much more to this change than might be apparent on the surface. The creation of a for-profit entity to “lead the development and ongoing support” and to enable “an additional path for investment to accelerate existing and create new Kuali products” fundamentally moves Kuali away from the community source model. Member institutions will no longer have voting rights for Kuali projects but will instead be able to “sit on customer councils and will give feedback about design and priority”. Given such a transformative change to the underlying model, there are some big questions to address.

Financial Needs

Kuali, being a non-profit foundation, has its financial records available online, and its tax reporting Form 990s are easily obtained through sites such as GuideStar. Furthermore, instructional media + magic (im+m) has a public eLibrary where they have shared Kuali documentation over the years.[1] There does not appear to be a smoking gun in the financials to directly explain the need for such a significant change, but there are hints of issues that provide some insight. In a recent analysis of Kuali’s financials from these public sources, im+m noted that Kuali has reserves to survive between 8 and 34 months with no additional income, depending on the percentage of uncollectible accounts receivable.

In an article in the Chronicle this past spring, Kuali leaders described their apparent financial strength.

The foundation is in the best financial shape it has ever been, its officials say. Membership dues for small colleges start at a few thousand dollars; some big institutions contribute up to seven figures for specific software projects.

“We are about a $30-million net-asset organization,” says Ms. Foutty, the executive director. “There is not a concern that we are going to lack cash flow or net assets to do what we want to do.”

But what comprises these net assets? It turns out that the vast majority consists of accounts receivable, and more specifically, committed in-kind contributions of project resources from member institutions on the various projects. Looking at the financial report from last year (ended June 30, 2013 – see p. 3), we can see that Kuali had net assets of $26.4 million, of which $21.3 million were “contributions receivable”. I would assume that current assets have approximately the same ratios.

What this means is that a foundation such as Kuali is more dependent on member institutions keeping the faith and honoring contribution commitments than on pure dues and hard cash. Kuali cannot afford for too many institutions to pull out of the consortium and write off their commitments, and whether they stay depends on whether Kuali delivers the products that the institutions need.
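To make the dependence concrete, here is a rough sketch of the kind of runway estimate im+m appears to have made. Only the $26.4 million and $21.3 million figures come from the financial report; the monthly burn rate and the months_to_survive() helper are hypothetical placeholders of mine, not reported numbers.

```python
# Rough runway sketch. NET_ASSETS and RECEIVABLES come from the FY2013
# report cited above; MONTHLY_EXPENSES is a made-up placeholder burn rate.

NET_ASSETS = 26.4e6       # total net assets, June 30, 2013
RECEIVABLES = 21.3e6      # "contributions receivable" portion
MONTHLY_EXPENSES = 0.9e6  # hypothetical burn rate, not a reported figure

def months_to_survive(uncollectible_share):
    """Months of runway if a given share of receivables is never collected."""
    cash_like = NET_ASSETS - RECEIVABLES * uncollectible_share
    return cash_like / MONTHLY_EXPENSES

for share in (0.0, 0.5, 1.0):
    print(f"{share:.0%} uncollectible -> {months_to_survive(share):.0f} months")
```

The more of the receivables you assume will never be collected, the shorter the runway, which is the sensitivity that im+m’s 8-to-34-month range expresses.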

Timing

According to the Kuali web site, the addition of a for-profit entity was based on two community strategy meetings that were held June 25-26 and July 30-31 of this year. Brad Wheeler, chair of the Kuali Foundation and CIO at Indiana University, wrote his summary of the meetings on Aug 1, 2014, including these two prophetic notes:

  • We need to accelerate completion of our full suite of Kuali software applications, and to do so we need access to substantially more capital than we have secured to date to meet this need of colleges and universities.
  • Kuali should consider any applicable insights from a new breed of “professional open source” firms (ex. RedHat, MySQL, Instructure) that are succeeding in blending commercial, open source, and foundation models. This should include consideration of possibly creating a commercial arm of the Kuali community.

There were also direct notes about the need for cloud services and better project coordination and decision-making.

The changes announced on Friday come less than two months after the first community strategy meeting, so I have trouble seeing the meetings as the cause and the Friday changes as the effect. There is reason to believe that the changes had been in the works since before June of this year.

Change as an Indicator

When Kuali makes a change this radical (moving away from the community source model) within a timeframe this short (less than two months), I think the best way to view the change is as an indicator that there are bigger issues under the surface. I wrote in a post on Unizin about a key question for the community source model:

Community source has proven its ability to develop viable solutions for known product categories and generally based on existing solutions – consider Sakai as an LMS (heavily based on U Michigan’s CHEF implementation and to a lesser degree on Indiana University’s OnCourse), Kuali Financial System (based directly on IU’s financial system), and Kuali Coeus (based on MIT’s research administration system). When you get rid of a pre-existing solution, the results are less promising. Kuali Student, based on a known product category but designed from the ground up, is currently on track to take almost 8 years from concept to full functionality. Looking further, are there any examples where a new product in an ill-defined product category has successfully been developed in a community source model?

Kent Brooks, CIO of Casper College, wrote a post this morning and called out a critical aspect of why this challenge is so important.

My overall observation is that the 10 year old Kuali project seems to have hit a bit of a lull in new adoptions. Partly is because institutions such as mine provide the next ‘wave of growth’ potential and most are unwilling to listen to the Kuali talk when there is not a Kuali Walk…aka a complete suite of tools with which one can operate the entire institution. It is a deal breaker for the 4000ish small to mid sized institutions in the US alone.

In other words, the vision of Kuali requires the availability of Kuali Student in particular, but also of HR / Payroll. Both of these projects are based on future promises. I strongly suspect that the lack of the complete suite of tools that Kent mentions is the real driving issue behind the changes.

Kuali must have new investment in order to complete its suite of applications, and the for-profit entity is the vehicle that the Foundation needs to raise the capital. One model that certainly informs this approach is ANGEL Learning, a for-profit entity which was founded and partially owned by the non-profit Indiana University (IU). ANGEL was able to raise additional investment beyond IU, and when ANGEL was sold for $100 million in 2009, IU made approximately $23 million in proceeds from the sale.

Required Change

Although there is a lot still to learn, my view is that the creation of a for-profit entity is not just a choice for acceleration into the next decade but a change that the Kuali Foundation feels is required. Kuali can no longer bet that the community source model as currently implemented can successfully complete new products not based on pre-existing university applications, and it cannot rely on the current model to attract sufficient investment to finish the job.

Brad Wheeler was quoted at Inside Higher Ed summarizing the changes.

“What we’re really doing is gathering the good things a .com can do: stronger means of making decisions, looking broadly at the needs of higher education and maybe sharpening product offerings a bit more,” Wheeler said. “This is going to be a very values-based organization with patient capital, not venture capital.”

The foundation will fund the launch, Wheeler said. For future funding, the company won’t pursue venture capital or private equity, but money from “values-based investors” such as university foundations. That means Kuali won’t need to be run like a traditional ed-tech startup, he said, as the company won’t be “beholden to Wall Street.”

In a post from this afternoon, Chris Coppola from rSmart (a co-founder of Kuali) provided his summary:

The Kuali mission is unwavering, to drive down the cost of administration for colleges and universities to keep more money focused on the core teaching and research mission. Our (the Kuali community) mission hasn’t changed, but the ability to execute on it has improved dramatically. The former structure made it too difficult for colleges and universities to engage and benefit from Kuali’s work. This new model will simplify how institutions can engage. The former structure breeds a lot of duplicative (and even competitive) work. The new structure will be more efficient.

More to Come

There is a lot of news to unpack here, and Michael and I will report and provide analysis as we learn more. For now, there are some big questions to consider:

  1. If you read the rest of Kent Brooks’ blog, you’ll see that he is now delaying the decision for his school to join the Kuali community. How many other schools will rethink their membership in Kuali based on the new model? The Kuali FAQ acknowledges that they will lose members but also predicts they will gain new membership. Will this prediction prove to be accurate?
  2. More importantly, are there already current member institutions providing significant resources that are threatening to pull out of Kuali?
  3. Given the central need for new, significant investment, will Kuali and the new for-profit entity succeed in bringing in this investment?
  4. Will the new entity directly address the project challenges and complete the full suite of applications that is needed by the Kuali community?
  5. What effect will Kuali’s changes have on other community source initiatives such as Sakai / Apereo and Unizin (if it does get into software development)?

Update 8/26: Clarified language on voting rights from ‘customers’ to ‘member institutions’; added a qualifier in the last question re. Unizin (it would only be community source if it gets into software development).

  1. Disclosure: Jim Farmer from im+m has been a guest blogger at e-Literate for many years.

The post Kuali For-Profit: Change is an indicator of bigger issues appeared first on e-Literate.

On ECAR data and ed tech purgatory

Fri, 2014-08-22 09:24

Recently I wrote a post about many ed tech products being stuck in pilots without large-scale adoption.

In our consulting work Michael and I often help survey institutions to discover what technologies are being used within courses, and typically the only technologies that are used by a majority of faculty members or in a majority of courses are the following:

  • AV presentation in the classroom;
  • PowerPoint usage in the classroom (obviously connected with the projectors);
  • Learning Management Systems (LMS);
  • Digital content at lower level than a full textbook (through open Internet, library, publishers, other faculty, or OER); and
  • File sharing applications. [snip]

This stuck process ends up as an ed tech purgatory – with promises and potential of the heaven of full institutional adoption with meaningful results to follow, but also with the peril of either never getting out of purgatory or outright rejection over time.

With the Chronicle’s Almanac coming out this week, there is an interesting chart that on the surface might seem to contradict the above information, showing ~20 technologies with above-50% adoption.


Note: Data are drawn from responses by a subset of more than 500 of the nearly 800 institutions that participated in a survey conducted from June to October 2013. Reported statistics are either an estimated proportion of the population or an estimated median.
Source: Educause Center for Analysis and Research [ECAR]

The difference, however, is that ECAR (through The Chronicle) asked how many institutions have different ed tech products and our survey asked how many courses within an institution use different ed tech products.

There are plenty of technologies being piloted but few hitting the mainstream, and adoption within an institution is one of the key indicators to watch.

The post On ECAR data and ed tech purgatory appeared first on e-Literate.

OER and the Future of Knewton

Mon, 2014-08-18 11:41

Jose Ferreira, the CEO of Knewton, recently published a piece on EdSurge arguing that scaling OER cannot “break the textbook industry” because, according to him, it has low production values, no instructional design, and is not enterprise grade. Unsurprisingly, David Wiley disagrees. I also disagree, but for somewhat different reasons than David’s.

When talking about Open Educational Resources or, for that matter, open source software, it is important to distinguish between license and sustainability model, as well as distinguishing between current sustainability models and possible sustainability models. It all starts with a license. Specifically, it starts with a copyright license. Whether we are talking about Creative Commons or GPL, an open license grants copyright permission to anyone who wants it, provided that the people who want to reuse the content are willing to abide by the terms of the license. By granting blanket permission, the copyright owner of the resource chooses to give up certain (theoretical) revenue earning potential. If the resource is available for free, then why would you pay for it?

This raises a question for any resource that needs to be maintained and improved over time about how it will be supported. In the early days of open source, projects were typically supported through individual volunteers or small collections of volunteers, which limited the kinds and size of open source software projects that could be created. This is also largely the state of OER today. Much of it is built by volunteers. Sometimes it is grant funded, but there typically is not grant money to maintain and update it. Under these circumstances, if the project is of the type that can be adequately well maintained through committed volunteer efforts, then it can survive and potentially thrive. If not, then it will languish and potentially die.

But open resources don’t have to be supported through volunteerism. It is possible to build revenue models that can pay for their upkeep. For example, it is possible to charge for uses of materials other than those permitted by the open license. Khan Academy releases their videos under a Creative Commons Attribution-NonCommercial-ShareAlike (CC BY-NC-SA) license. Students and teachers can use them for free under normal classroom circumstances. But if a textbook publisher wants to bundle that content with copyrighted material and sell it for a fee, the license does not give them permission to do so. Khan Academy can (and, as far as I know, does) charge for commercial reuse of the content.

Another possibility is to sell services related to the content. In open source software, this is typically in the form of support and maintenance services. For education content, it might be access to testing or analytics software, or curriculum planning and implementation services. This is a non-exhaustive list. The point is that it is possible to generate revenue from open content. And revenue can pay for resources to support high production values, instructional design, and enterprise scaling, particularly when paired with grant funding and volunteer efforts. These other options don’t necessarily generate as much revenue as traditional copyright-based licensing, but that’s often a moot point. Business models based on open licenses generally get traction when the market for licensed product is beginning to commodify, meaning that companies are beginning to lose their ability to charge high prices for their copyrighted materials anyway.

That’s the revenue side. It’s also important to consider the cost side. On the one hand, the degree to which educational content needs high production values and “enterprise scaling” is arguable. Going back to Khan Academy for a moment, Sal Khan popularized the understanding that one need not have an expensive three-camera professional studio production to create educational videos that have reach and impact. That’s just one of the better-known of many examples of OER that are considered high-quality even though they don’t have what publishing professionals traditionally have thought of as “high production values.” On the other hand, it is important to recognize that a big portion of textbook revenues goes into sales and marketing, and for good reason. Despite multiple efforts by multiple parties to create portals through which faculty and students can find good educational resources, the adoption process in higher education remains badly broken. So far, with a few exceptions, the only good way to get widespread adoption of curricular materials still seems to be to hire an army of sales reps to go knock on faculty doors. It is unclear when or how this will change.

This brings us to the hard truth of why the question of whether OER can “win” is harder than it seems. Neither the OER advocates nor the textbook publishers have a working economic model right now. The textbook publishers were very successful for many years but have grown unsustainable cost structures which they can no longer prop up through appeals to high production values and enterprise support. But the OER advocates have not yet cracked the sales and marketing nut or proven out revenue models that enable them to do what is necessary to drive adoption at scale. If everybody is losing, then nobody is winning. At least at the moment.

This is where Knewton enters the picture. As you read Jose’s perspective, it is important to keep in mind that his company has a dog in this fight. (To be fair, at the risk of stating the obvious, so does David’s.) While Knewton is making noises about releasing a product that will enable end users to create adaptive content with any materials (including, presumably, OER), their current revenues come from textbook publishers and other educational content companies. Further, adaptive capabilities such as the ones Knewton offers add to the cost of an educational content product, both directly through the fees that the company charges and indirectly through the additional effort required to design, produce, and maintain adaptive products. To me, the most compelling argument David makes in favor of OER “winning” is that it is much easier to lower the price of educational materials than it is to increase their efficacy. So if you’re measuring the value of the product by standard deviations per dollar, then the smart thing is to aim for the denominator (while hopefully not totally ignoring the numerator). The weak link in this argument is that it works best in a relatively rational and low-friction market that limits the need for non-product-development-related expenses such as sales and marketing. In other words, it works best in the antithesis of the conditions that exist today. Knewton, on the other hand, needs there to be enough revenue for curricular materials to pay for the direct and indirect costs of their platform. This is not necessarily a bad thing for education if Knewton-enhanced products can actually raise the numerator as much as or more than OER advocates can lower the denominator. But their perspective—both in terms of how they think about the question of value in curricular materials and in terms of how they need to build a business capable of paying back $105 million in venture capital investment—tilts toward higher costs that one hopes would result in commensurately higher value.
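To make David’s ratio concrete, here is an illustrative sketch with entirely hypothetical numbers: a modestly more effective adaptive product at publisher prices versus plain OER at commodity prices.

```python
# Illustrative "standard deviations per dollar" comparison. Both the effect
# sizes and the prices are hypothetical, chosen only to show why lowering
# the denominator (price) moves the ratio faster than raising the numerator.

def value(effect_size_sd, price_dollars):
    """Learning gain (in standard deviations) per dollar spent."""
    return effect_size_sd / price_dollars

adaptive_product = value(0.25, 120.0)  # 0.25 SD gain at a $120 price point
plain_oer = value(0.20, 10.0)          # 0.20 SD gain at a $10 price point

print(f"adaptive product: {adaptive_product:.4f} SD/$")  # 0.0021
print(f"plain OER:        {plain_oer:.4f} SD/$")         # 0.0200, ~10x higher
```

On these made-up numbers, a 25% efficacy advantage is swamped by a 12x price difference, which is exactly the dynamic the next paragraph qualifies.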

All of this analysis assumes that in David’s ratio of standard deviations per dollar, all that matters is the ratio itself, independently of the individual numbers that make it up. But that cannot be uniformly true. Some students cannot afford educational resources above a certain price no matter how effective they are. (I would love to lower my carbon footprint by buying a Tesla. Alas….) In other cases, getting the most effective educational resources possible is most important and the extra money is not a big issue. This comes down to not only how much the students themselves can afford to pay but also how education is funded and subsidized in general. So there are complex issues in play here regarding “value.” But on the first-order question of whether OER can “break the textbook industry,” my answer is, “it depends.”

The post OER and the Future of Knewton appeared first on e-Literate.

D2L raises $85 million but growth claims defy logic

Fri, 2014-08-15 09:57

Yesterday D2L announced a second round of investment, this time raising $85 million (a mix of debt and equity) to go with their $80 million round two years ago (see EDUKWEST for a useful roundup of news and article links). While raising $165 million is an impressive feat, does this funding give us new information on the LMS market?

First, here are the claims by D2L as part of this round of financing, from EdSurge:

The deal comes on the heels of what the company calls “a year of record growth in the higher education, K-12 and corporate markets.” John Baker, founder and CEO, says the company currently serves 1,100 institutions and 15 million learners–up from 850 and 10 million, respectively, at this time last year. The company also recently opened offices in Latin America, Asia Pacific and Europe.

That’s 29% growth in the number of institutions and 50% growth in the number of learners in just one year (see the quick check below). Quite impressive, if accurate.
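As a quick sanity check on those percentages, using only the figures quoted from EdSurge:

```python
# Verify the year-over-year growth implied by the quoted figures.

def growth_pct(last_year, this_year):
    """Percent growth from last year's figure to this year's."""
    return (this_year - last_year) / last_year * 100

print(f"Institutions: {growth_pct(850, 1_100):.0f}%")  # ~29%
print(f"Learners:     {growth_pct(10e6, 15e6):.0f}%")  # 50%
```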

Yet the company went through a significant round of layoffs in late 2013, letting go more than 7% of its workforce, and according to both LinkedIn data and company statements it has had no significant growth in the number of employees over the past year. According to the EdSurge article, the company does plan to use the new money to hire more staff [emphasis added].

This time, the company says it will play it cool. “There are no planned acquisitions at this stage,” Baker tells EdSurge. “At this point, we’re primarily focused on building out our learning platform to support our clients and thousands of integration partners.” To do so, the company will grow its team of 783 full-time employees. “We are actively looking for dozens of new positions; over 60 in R&D alone,” shares Baker.

Note this slide from John Baker’s FUSION keynote one year ago:


John Baker keynote slide from FUSION conference July 2013

If you take the information above – 800+ employees last year and 783 today – at face value, D2L has actually dropped in employee headcount. Does it make sense that a company can grow 50% in terms of learners without growing company employment, especially between two massive funding rounds?

Second, what about the statement of “thousands of integration partners”? D2L is claiming to have more than twice as many integration partners as actual clients.

The other issue is market share. It is clear that D2L is planning to grow in the corporate (10% of their business according to the WSJ), K-12, and international higher ed markets; however, their largest business is still US higher ed. And here they have shown signs of no real growth, and among community colleges they have even lost market share.

For the first time in an LMS market survey that I am aware of, Desire2Learn has actually lost market share. In fact, Desire2Learn is now lower than both Moodle and Canvas for community colleges according to this survey. This is a topic worth exploring further, especially in relation to last year’s layoffs.

Edutechnica ran the numbers for US higher education in October 2013.[1]


Edutechnica data from Oct 13 for US institutions with more than 2,000 FTE

Edutechnica ran the numbers again at the end of June for institutions with 2,000 FTE and above (to allow an apples-to-apples comparison with Oct 2013), but they have not yet published the results. George did agree to share preliminary information with me, and D2L came out with 225 institutions and 2,084,089 enrollments.[2] The Edutechnica numbers lead to an increase of 3% in the number of US institutions and 2% in enrollment (number of learners) over the past 10 months. If D2L has grown its total number of learners by 50% over the past year, we would expect to see very different numbers for its largest market.

In another interview with local media outlet The Star, CEO John Baker described growth this way:

“We’re seeing very rapid growth in Europe, we’ve seen triple-digit growth in Latin America and Asia Pacific. In terms of new accounts we’re seeing great growth basically everywhere we look,” Baker said. Desire2Learn is prioritizing growth in “key hubs,” including Brazil, Mexico, the U.S. and Singapore, he said.

This raises some questions:

  • They mention growth everywhere they look, including the US. Where is this growth that is not showing up in market data?
  • What percentage of their business – in terms of revenue, customers or learner counts – comes from international markets? The company press releases mention their investments in international hubs but I can find no significant news on new accounts with huge numbers.

D2L did not respond to several requests for comment or clarification for this post.

My intention in this and previous posts is to explain what I am seeing in the market and challenge the marketing claims – education institutions need an accurate understanding of what is happening in the LMS market. It is worth noting that not a single media outlet listed by EDUKWEST or quoted above (WSJ, Reuters, Bloomberg, re/code, EdSurge, The Star) challenged or even questioned D2L’s bold claims. It would help if more media outlets didn’t view their job as paraphrasing press releases.

  1. Edutechnica also ran an update in May 2014, but that used a different criterion of ‘more than 1,000 FTE’.
  2. By the way, think of how useful the Edutechnica data approach is compared to annual surveys, with the ability to adjust variables and update results so quickly.

The post D2L raises $85 million but growth claims defy logic appeared first on e-Literate.

Pilots: Too many ed tech innovations stuck in purgatory

Tue, 2014-08-12 13:44

Steve Kolowich wrote an article yesterday in the Chronicle that described the use of LectureTools, a student engagement and assessment application created by faculty member Perry Samson at the University of Michigan. These two paragraphs jumped out at me.

The professor has had some success getting his colleagues to try using LectureTools in large introductory courses. In the spring, the software was being used in about 40 classrooms at Michigan, he says.

Adoption elsewhere has been scattered. In 2012, Mr. Samson sold LectureTools to Echo360[1], an education-technology company, which has started marketing it to professors at other universities. The program is being used in at least one classroom at 1,100 institutions, according to Mr. Samson, who has kept his title of chief executive of LectureTools. But only 80 are using the software in 10 or more courses.

In other words, 93% of LectureTools clients (1,020 of the 1,100 institutions) use the tool in fewer than 10 courses, meaning that the vast majority of customers are still running pilot projects almost two years after the company was acquired by a larger ed tech vendor.

We are not running out of ideas in the ed tech market – there are plenty of new products being introduced each year. What we are not seeing, however, are ed tech innovations that go beyond a few pilots in each school. Inside Higher Ed captured this sentiment when quoting a Gallup representative after the GSV+ASU EdInnovations conference this year:

“Every one of these companies has — at least most of them — some story of a school or a classroom or a student or whatever that they’ve made some kind of impact on, either a qualitative story or some real data on learning improvement,” Busteed said. “You would think that with hundreds of millions of dollars, maybe billions now, that’s been plowed into ed-tech investments … and all the years and all the efforts of all these companies to really move the needle, we ought to see some national-level movement in those indicators.”

In our consulting work Michael and I often help survey institutions to discover what technologies are being used within courses[2], and typically the only technologies that are used by a majority of faculty members or in a majority of courses are the following:

  • AV presentation in the classroom;
  • PowerPoint usage in the classroom (obviously connected with the projectors);
  • Learning Management Systems (LMS);
  • Digital content at lower level than a full textbook (through open Internet, library, publishers, other faculty, or OER); and
  • File sharing applications.

Despite the billions of dollars invested over the past several years, the vast majority of ed tech is used in only a small percentage of courses at most campuses.[3] Most ed tech applications or devices have failed to cross the barriers into mainstream adoption within an institution. This could be because the technology does not really address problems that faculty or students face, because of a lack of awareness and support for the technology, or because of faculty or student resistance to the innovation. Whatever the barrier, the situation we see far too often is technology failing to reach the majority of faculty or courses.

Diffusion of Innovations – Back to the basics

Everett Rogers literally wrote the book on the spread of innovations within an organization or cultural group: Diffusion of Innovations. Rogers’ work led to many concepts that we seem to take for granted, such as the S-curve of adoption:


Source: The Diffusion of Innovations, 5th ed, p. 11

leading to the categorization of adopters (innovators, early adopters, early majority, late majority, laggards), and the combined technology adoption curve.


Source: The Diffusion of Innovations, 5th ed., p. 281
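Rogers derives those adopter categories by assuming adoption times are roughly normally distributed and cutting the curve at fixed standard deviations from the mean adoption time. A minimal sketch of that partition:

```python
# Recompute Rogers' adopter-category shares from a standard normal curve.
# Boundaries sit at -2, -1, 0, and +1 standard deviations from the mean
# adoption time; Rogers rounds the shares to 2.5/13.5/34/34/16 percent.

from math import erf, sqrt

def phi(x):
    """Standard normal cumulative distribution function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

categories = [
    ("innovators", None, -2.0),
    ("early adopters", -2.0, -1.0),
    ("early majority", -1.0, 0.0),
    ("late majority", 0.0, 1.0),
    ("laggards", 1.0, None),
]

for name, lo, hi in categories:
    p_lo = 0.0 if lo is None else phi(lo)
    p_hi = 1.0 if hi is None else phi(hi)
    print(f"{name:>15}: {100.0 * (p_hi - p_lo):.1f}%")
# innovators 2.3%, early adopters 13.6%, early majority 34.1%,
# late majority 34.1%, laggards 15.9%
```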

But Rogers did not set out to describe the diffusion of innovations as an automatic process following a pre-defined path. The real origin of his work was trying to understand why some innovations end up spreading throughout a social group while others do not, somewhat independent of whether the innovation could be thought of as a “good idea”. From the first paragraph of the 5th edition:

Getting a new idea adopted, even when it has obvious advantages, is difficult. Many innovations require a lengthy period of many years from the time when they become available to the time when they are widely adopted. Therefore, a common problem for many individuals and organizations is how to speed up the rate of diffusion of an innovation.

Rogers defined diffusion as “a special type of communication in which the messages are about a new idea” (p. 6), and he focused much of the book on the Innovation-Decision Process. This gets to the key point that availability of a new idea is not enough; rather, diffusion depends more on the communication and decision process around whether and how to adopt the new idea. This process is shown below (p. 170):


Source: The Diffusion of Innovations, 5th ed., p. 170

What we are seeing in ed tech in most cases, I would argue, is that for institutions the new ideas (applications, products, services) are stuck in the Persuasion stage. There is knowledge and application amongst some early adopters in small-scale pilots, but the majority of faculty members either have no knowledge of the pilot or are not persuaded that the idea is to their advantage, and there is little support or structure to get the organization at large (i.e. the majority of faculty for a traditional institution, or perhaps the central academic technology organization) to make a considered decision. It’s important to note that in many cases the innovation should not be spread to the majority, whether because it is a poor solution or because of organizational dynamics based on how the innovation was introduced.

The Purgatory of Pilots

This stuck process ends up as an ed tech purgatory – with promises and potential of the heaven of full institutional adoption with meaningful results to follow, but also with the peril of either never getting out of purgatory or outright rejection over time.

Ed tech vendors can be too easily persuaded by simple adoption numbers such as 1,100 institutions or total number of end users (millions served), but meaningful adoption within an institution – actually affecting the majority of faculty or courses – is necessary in most cases before there can be any meaningful results beyond anecdotes or marketing stories. The reason for the extended purgatory is most often people issues and communications, and the ed tech market (and here I’m including vendors as well as campus support staff and faculty) has been very ineffective in dealing with real people at real institutions beyond the initial pilot audience.

Update: Added a parenthetical to the last sentence to clarify that I’m not just talking about vendors as key players in diffusion.

  1. Disclosure: Echo360 was a recent client of MindWires
  2. For privacy reasons I cannot share the actual survey results publicly.
  3. I’m not arguing against faculty prerogative in technology adoption and for a centralized, mandatory approach, but noting the disconnect.

The post Pilots: Too many ed tech innovations stuck in purgatory appeared first on e-Literate.

Federal Reserve Board backs up e-Literate in criticism of Brookings report on student debt

Sat, 2014-08-09 13:30

I have been very critical of the Brookings Institution report on student debt, particularly in my post “To see how illogical the Brookings Institution report on student loans is, just read the executive summary”.

D’oh! It turns out that real borrowers with real tax brackets paying off real loans are having real problems. The percentage at least 90 days delinquent has more than doubled in just the past decade. In fact, based on another Federal Reserve report, the problem is much bigger for the future: “44% of borrowers are not yet in repayment, and excluding those, the effective 90+ delinquency rate rises to more than 30%”.

More than 30% of borrowers who should be paying off their loans are at least 90 days delinquent? It seems someone didn’t tell them that their payment-to-income ratios (at least for their mythical average friends) are just fine and that they’re “no worse off”.

Well, now the Federal Reserve Board itself weighs in on the subject with a new survey, at least as described by an article in The Huffington Post. I have read the Fed report and concur with the HuffPost analysis – it does argue against the Brookings findings.

Among the emerging risks spotlighted by the survey is the nation’s $1.3 trillion in unpaid student debt, suggesting that high levels of student debt are crimping the broader economy. Nearly half of Americans said they had to curb their spending last year in order to make payments on student loans, adding weight to the fear among federal financial regulators that the burden of student debt on households will depress economic growth for years to come.

Some 35 percent of survey respondents who are paying back student loans said they had to reduce their spending by “a little” over the past year to keep up with their student debt payments. Another 11 percent said they had to cut back their spending by “a lot.”

The Fed’s findings appear to challenge recent research by a pair of economists at the Brookings Institution, highlighted in The New York Times and cited by the White House, that argues that households with student debt are no worse off today than they were two decades ago.

The full Fed report can be found here. Much of the survey focused on borrowers and their perceptions of how their student loans affect them, which is far more reliable than Brookings’ assumptions about how convoluted financial ratios should affect borrowers. In particular, consider this table:

Fed Table 11

Think about this situation: amongst borrowers who have completed their degrees, almost as many think the costs of a degree outweigh the financial benefits as think the opposite (41.5% benefits outweigh costs vs. 38.1% costs outweigh benefits). I don’t see this as an argument against getting a degree, but rather as clear evidence that the student loan crisis is real and will have a big impact on the economy and on future student decision-making.

Thanks to the Federal Reserve Board for helping us out.

Update: Clarified that this is Federal Reserve Board and not NY Fed.

The post Federal Reserve Board backs up e-Literate in criticism of Brookings report on student debt appeared first on e-Literate.

Update on 2U: First full quarterly earnings and insight into model

Wed, 2014-08-06 19:09

2U, the online service provider that went public in the spring, just released its financial report for the first full quarter of operations as a public company. The company beat estimates on total revenue and also lost less money than expected. Overall, it was a strong performance (see WSJ for basic summary or actual quarterly report for more details). The basics:

  • Revenue of $24.7 million for the quarter and $51.1 million for the past six months, representing year-over-year increases of 32% and 35%, respectively;
  • EBITDA losses of $7.1 million for the quarter and $10.9 million for the past six months, representing year-over-year changes of -2% and 12%, respectively; and
  • Enrollment growth of 31 – 34% year-over-year.
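As a quick sanity check, the implied year-ago baselines can be backed out from the bullets above with simple arithmetic (prior revenue = current revenue ÷ (1 + growth)). A short Python sketch using the reported numbers:

    # Back out implied 2013 baselines from 2U's reported Q2 2014 figures.
    # prior = current / (1 + year-over-year growth)
    q2_revenue, q2_growth = 24.7, 0.32   # $M and YoY growth for the quarter
    h1_revenue, h1_growth = 51.1, 0.35   # $M and YoY growth for the six months

    print(f"Implied Q2 2013 revenue: ${q2_revenue / (1 + q2_growth):.1f}M")  # ~$18.7M
    print(f"Implied H1 2013 revenue: ${h1_revenue / (1 + h1_growth):.1f}M")  # ~$37.9M

In other words, 2U added roughly $6 million in quarterly revenue year-over-year.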

Per the WSJ coverage of the conference call:

“I’m very pleased with our second quarter results, and that we have both the basis and the visibility to increase all of our guidance measures for 2014,” said Chip Paucek, 2U’s Chief Executive Officer and co-founder. “We’ve reached a turning point where, even with continued high investment for growth, our losses have stopped accelerating. At the midpoint of our new guidance range, we now expect our full year 2014 adjusted EBITDA loss to improve by 17% over 2013. Further, we’ve announced a schedule that meets our stated annual goal for new program launches through 2015.”

The company went public in late March at $14 per share and is still in that range ($14.21 before the quarterly earnings release – it might go up tomorrow). As one of only three ed tech companies to have gone public in the US over the past five years, 2U remains worth watching both for its own news and as a bellwether of the IPO market for ed tech.

Notes

The financials provide more insight into the world of Online Service Providers (OSP, aka Online Program Management, School-as-a-Service, Online Enablers, the market with no name). On the conference call 2U’s CEO Chip Paucek reminded analysts that they typically invest (money spent minus revenue) $4 – $9 million per program in the early years and do not start to break even until years 3 – 4. 2U might be on the high side of these numbers given their focus on small class sizes at big-name schools, but this helps explain why the OSP market typically focuses on long-term contracts of 10+ years. Without such a long-term revenue-sharing contract, it would be difficult for an OSP to ever break even.
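To see why, here is a hypothetical cumulative cash-flow sketch for a single program. The specific numbers (a $6 million net investment, treated as already sunk for simplicity, and $1.5 million per year in net margin starting in year 4) are assumptions chosen to be consistent with the $4 – $9 million range above, not 2U’s actual figures:

    # Hypothetical cumulative cash flow for a single OSP-run program.
    # All numbers are illustrative assumptions, not 2U's actual figures.
    upfront_investment = 6.0   # $M net invested before operational break-even
    annual_net_margin = 1.5    # $M/year to the OSP once the program matures

    cumulative = -upfront_investment
    for year in range(1, 11):
        if year >= 4:          # assume operations break even around years 3-4
            cumulative += annual_net_margin
        print(f"Year {year:2d}: cumulative cash flow = {cumulative:+.1f} $M")

Under these assumptions the program does not recover its investment until year 7 – a plain illustration of why an OSP needs a contract that runs well past the operational break-even point.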

As the market matures – with more competitors and with schools developing their own experience in online programs – it will become more and more difficult for companies to maintain these commitments from schools. We have already seen signs of changes in institutional expectations over the past year.

2U, meanwhile, has positioned itself at the high-end of the market, relying on high tuitions and brand-name elite schools with small classes. The company for the most part will not even compete in a Request for Proposal process, avoiding direct competition with Embanet, Deltak, Academic Partnerships and others. Their prospects seem much stronger than the more competitive mainstream of OSP providers.

See the posts here at e-Literate for more background.

2U has changed one aspect of its strategy, as noted by Donna Murdoch on G+. At least through 2012 the company positioned itself as planning to work with one school per discipline (or “vertical” in their language) – pick one school for a Master of Social Work, one for an MBA, etc. As described in January 2012:

“As we come into a new vertical, 2tor basically partners with one great school per vertical. We find one partner, one brand that is world-class. We partner with that brand over a long time period to create the market leader in that space for that discipline.”

2U now explicitly plans for second schools within the same verticals, as can be seen in the press release put out today:

Programs Aug 2014

Note the duplication of Social Work between USC and Simmons, Nursing between Georgetown and Simmons, and Data Science between Berkeley and SMU, as well as the new approach described on page 20 of the quarterly report:

As described above, we have added, and we intend to continue to add, degree programs in a number of new academic disciplines each year, as well as to expand the delivery of existing degree programs to new clients.

View Into Model

Along with the first-quarter release (which was not based on a full quarter of operations as a public company), 2U released some interesting videos that give a better view into their pedagogical approach and platform. In this video they describe their “Bi-directional Learning Tool (BLT)”:

This image is from a page on the 2U website showing their approach, with a view of the infamous Brady Bunch layout for live (synchronous) classes.

Live Courses

We’ll keep watching 2U and share significant developments as we see them.

The post Update on 2U: First full quarterly earnings and insight into model appeared first on e-Literate.

Full Disclosure

Sat, 2014-08-02 12:41

As you probably know, we run a consulting business (MindWires Consulting) and sometimes work with the companies and schools that we write about here. Consequently, we periodically remind you and update you on our conflict of interest policies. We do our best to avoid or minimize conflicts of interest where we can, but since our system isn’t perfect, we want you to understand how we handle them when they arise so that you can consider our analysis with the full context in mind. We value your trust and don’t take it for granted.

We talk a lot with each other about how to deal with conflicts of interest because we run into them a lot. On the one hand, we find that working with the vendors and schools that we write about provides us with insight that is helpful to a wide range of clients and readers. There just aren’t too many people who have the benefit of being able to see how all sides of the ed tech relationships work. But along with that perspective comes an inevitable and perpetual tension with objectivity. When we started our business together 18 months ago, we didn’t have a clear idea where these tensions would show up or how big of an issue they might turn out to be. We originally thought that our blogging was going to remain an addiction that was subsidized but somewhat disconnected from our consulting. But it turns out that more than 90% of our business comes from readers of the blog, and a significant portion of it comes out of conversations stimulated by a specific post. Now that we understand that relationship better, we’re getting a better handle on the kinds of conflict of interest that can arise and how best to mitigate them. Our particular approach in any given situation depends a lot on whether the client wants analysis or advice.

Disclosure

In many cases, clients want us to provide deeper, more heavily researched, and more tailored versions of the analysis that we’ve provided publicly on this blog. In this situation, there isn’t a strong or direct conflict of interest between providing them with what they are asking for and writing public analysis about various aspects of their business. That said, no matter how hard we try to write objectively about an organization that is, was, or could be a client, human nature being what it is, we can’t guarantee that we will never be even subconsciously influenced in our thinking. That is why we have a policy to always disclose when we are blogging about a client. We have done this in various ways in the past. Going forward, we are standardizing on an approach in which we will insert a disclosure footnote at the end of the first sentence in the post in which the client is named. It will look like this.[1] (We are not fully satisfied that the footnote is prominent enough, so we will be investigating ways to make it a little more prominent.) We will insert these notices in all future posts on the blog, whether or not we are the authors of those posts. In cases where the company in question is not currently a client but was recently and could be again in the near future, we will note that the company “was recently a client of MindWires Consulting”.

Recusal

Sometimes the client wants not only analysis but also strategic advice. Those situations can be trickier. We want to avoid cases in which we blog in praise (or condemnation) of a company for taking an action that they paid us to tell them to take. Our policy is that we don’t blog about any decisions that a company might make based on our advice. There are some theoretical situations in which we might consider making an exception to that rule, but if they ever do come up in reality, then the disclosure principle will apply. We will let you know if, when, and why we would make the exception. Aside from that currently theoretical exception, we recuse ourselves from blogging about the results of our own consulting advice. Furthermore, when potential clients ask us for advice that we think will put us into a long-term conflict of interest regarding one of our core areas of analysis, we turn down that work. Analysis takes precedence over advice.

Getting Better at This

We’re going to continue thinking about this and refining our approach as we learn more. We also have some ideas about business models that could further minimize potential conflicts in the future. We’ll share the details with you if and when we get to the point where we’re ready to move forward on them. In the meantime, we will continue to remind you of our current policy periodically so that you are in a better position to judge our analysis. And as always, we welcome your feedback.


  1. Full disclosure: Acme Ed Tech Company is a client of MindWires Consulting, the sponsor of e-Literate.

The post Full Disclosure appeared first on e-Literate.