
Michael Feldstein

What We Are Learning About Online Learning...Online

Coursera CEO Interview: Mike Caulfield nailed it two months ago

Tue, 2014-04-15 20:03

Two months ago Mike Caulfield lamented the inability of many people in online education, especially in massive online initiatives, to learn honestly from the past. In the post Mike referred to the failed AllLearn initiative and the seminal post-mortem written up in University Business.

How does that relate? A paragraph from the 2006 post-mortem of AllLearn really stuck out for me:

Oxford, Yale, and Stanford have kept quiet about the collapse of their joint e-learning venture…[h]owever, AllLearn’s closure could offer an unprecedented opportunity to step back and discuss the strengths and weaknesses of the business model… Further research into the series of collapsed online ventures may shed some light on what makes a successful distance education program, and enable some of the surviving online providers to redefine their business models and marketing strategies accordingly

Of course they don’t delve into these things honestly, and as a result most people in these institutions are unaware of them. Like Leonard, the institutions alter the record of the past. They wake up the next day with amnesia, consult a set of dramatically altered notes, and wonder why no one has tried massive Ivy League courses yet. The PR push to cover one’s tracks ends up erasing the institutional knowledge that could build a better initiative.

Little did Mike realize that he was writing a script.

One month later Coursera hired Richard Levin as its new CEO. As president of Yale, Levin was one of the key figures in the creation of AllLearn in 2000, and after the initiative’s collapse in 2006 he was one of the figures directly responsible for the Open Yale Courses initiative.

The consensus view is that AllLearn failed to generate enough interest in its non-credit elite courses, and subsequently the program closed due to economics (by Levin’s own previous admission). In 2005 AllLearn attempted to address this challenge by branching beyond alumni as related in this Yale Daily News post [emphasis added in all quotes below].

“I think we’ve learned a lot from the experiment,” Levin said. “While I believe we’ve produced some very high quality courses, we’ve learned that it’s hard to generate large audiences sufficiently from these courses from just the alumni of the three partner schools. So we’ve reached out to attract broader audiences through affiliating with universities and through finding other organizations that might have an interest in making courses available to members.”

Fast forward a year, and it is clear that the effort had failed economically despite the broadening of audiences, again from the Yale Daily News.

Yale President Richard Levin, who served as AllLearn’s chairman, said he thinks that while the participating institutions learned what is necessary to manage a successful distance learning program, they were unable to make the project financially viable.

“We are disappointed that we weren’t able to find a way to make this successful economically,” Levin said. “[But] we learned a lot, and I think it will serve us well in the future.”

Open Yale Courses also provides non-credit elite courses. The problem? You might have guessed it, as described by this 2012 report for the Committee on Online Education.

Open Yale Courses has been supported by generous grants from the Hewlett Foundation, but those grants are ending this semester; and there is no provision yet for the continuation of this program. There has been extensive planning, however, to keep the 42 existing courses on the Yale site as well as the iTunes U and YouTube platforms. All of the courses are being stored and preserved for future study. New visitors are discovering Open Yale Courses daily and global media coverage, which has been constant and enthusiastic since the start of the project, continues unabated.

The initiative is now attempting direct solicitation as a method of funding.

I don’t mean to question Levin’s good intentions or his considerable support of the mission of making education more accessible through online technology. However, I find it disingenuous to try to alter history. This week the New York Times interviewed Levin about his new role as Coursera CEO, and the reporter asked some good questions but lacked follow-up.

Q. Yale has not exactly been a mass institution.

A. No, but we were early in the on-line arena, with a venture back in 2000 called All-Learn.

Q. How much did you lose, and why didn’t that spoil this for you?

A. It was too early. Bandwidth wasn’t adequate to support the video. But we gained a lot of experience of how to create courses, and then we used it starting in 2007 to create very high quality videos, now supported by adequate bandwidth in many parts of the world, with the Open Yale courses. We’ve released over 40 of them, and they gained a wide audience.

So here we have yet another initiative offering non-credit elite courses, and one of the biggest challenges that Coursera faces is that it has yet to find a viable business model. The company is living on $85 million in venture capital investment and has not yet found revenue sources that go beyond a few million dollars per year (Signature Track). Levin called out this challenge in the same NYT interview.

Q. Doesn’t edX have an advantage in being not-for-profit, meaning they don’t have to worry about returning on investment so soon? Yesterday Andrew Ng said, “We’ve raised $85 million, so we’ve got some runway.” How much runway?

A. I think the principal investors in Coursera understand that this is a long term play. We’re fortunate to have patient investors; and as Andrew said, we’re quite adequately capitalized. I think we can become financially viable certainly within that five-year framework.

Q. You’re an economist. How do you get from here to there?

A. Right now courses are free and we’re charging for certification. We think that as the idea of using Coursera courses for professional advancement grows, the numbers seeking certificates will grow. And the price we charge probably can grow, too. A move from $50 or $60 for Signature Track to $100 is certainly imaginable. At $100 a pop, if you had two or three, or five million people. …

I would suggest that Coursera will not “get from here to there” by altering the record of the past. AllLearn failed to generate sufficient interest in its courses, and the proximate cause was not “insufficient bandwidth”. AllLearn in fact offered several approaches that alleviated bandwidth concerns, including CD-ROM delivery and the ability to turn off high-bandwidth features. AllLearn’s average learner was a 47-year-old Yale alumnus – hardly a case of low-income students lacking access to sufficient bandwidth. Plenty of online ventures started in 2000 or earlier have succeeded – Penn State’s World Campus, UMUC, most large for-profits, UMassOnline, University of Central Florida, etc. This was not a case of being “too early”.

Read the University Business post-mortem and the Yale article. The issue involved economics and insufficient revenue to offset expenses.

Coursera and all the xMOOC providers face this same long-term challenge of finding adequate business models. In a July 2012 post, I called out this challenge as one of the four key barriers that MOOCs face. I speak as someone who would like to see MOOCs succeed – not in their current form, but in a form that evolves to better meet learner needs. This healthy evolution won’t happen, however, unless the providers honestly evaluate the lessons of the past.


Update (4/17): It appears that Levin is doubling down on his new theory about AllLearn. From the Yale Daily News today:

“I knew from the beginning that [online ed] had great potential,” Levin said.

In fact, Yale’s experiment with online education began under Levin’s presidency. In 2000, Yale launched AllLearn, a joint venture with Stanford and Oxford that faltered after four years due to insufficient technology at the time. The Internet bandwidth in most homes was inadequate for properly sharing course material, Levin admitted.

For the record, I can find no explanations from the time of AllLearn’s demise in 2006 that insufficient bandwidth was the problem. What we do have are statements including from Levin himself that insufficient demand leading to unsustainable revenue was the problem.

The post Coursera CEO Interview: Mike Caulfield nailed it two months ago appeared first on e-Literate.

Links to External Articles and Interviews

Tue, 2014-04-15 11:41

Last week I was off the grid (not just lack of Internet but also lack of electricity), but thanks to publishing cycles I managed to stay artificially productive: two blog posts and one interview for an article.

Last week brought news of a new study on textbooks for college students, this time from a research arm of the National Association of College Stores. The report, “Student Watch: Attitudes and Behaviors toward Course Materials, Fall 2013,” seems to throw some cold water on the idea of digital textbooks based on the press release summary [snip]

While there is some useful information in this survey, I fear that the press release is missing some important context. Namely, how can students prefer something that is not really available?

March 28, 2014 may well go down as the turning point where Big Data lost its placement as a silver bullet and came down to earth in a more productive manner. Triggered by a March 14 article in Science Magazine that identified “big data hubris” as one of the sources of the well-known failures of Google Flu Trends,[1] there were five significant articles in one day on the disillusionment with Big Data. [snip]

Does this mean Big Data is over and that education will move past this over-hyped concept? Perhaps Mike Caulfield from the Hapgood Blog stated it best, including adding the education perspective . . .

This is the fun one for me, as I finally have my youngest daughter’s interest (you made Buzzfeed!). Buzzfeed has added a new education beat focusing on the business of education.

The public debut last week of education technology company 2U, which partners with nonprofit and public universities to offer online degree programs, may have looked like a harbinger of IPO riches to come for companies that, like 2U, promise to disrupt the traditional education industry. At least that’s what the investors and founders of these companies want to believe. [snip]

“We live in a post-Facebook era where startups have this idea that they can design a good product and then just grow, grow, grow,” said Phil Hill, an education technology consultant and analyst. “That’s not how it actually works in education.”

 

The post Links to External Articles and Interviews appeared first on e-Literate.

Head in the Oven, Feet in the Freezer

Mon, 2014-04-14 05:19

Some days, the internet gods are kind. On April 9th, I wrote,

We want talking about educational efficacy to be like talking about the efficacy of Advil for treating arthritis. But it’s closer to talking about the efficacy of various chemotherapy drugs for treating a particular cancer. And we’re really really bad at talking about that kind of efficacy. I think we have our work cut out for us if we really want to be able to talk intelligently and intelligibly about the effectiveness of any particular educational intervention.

On the very same day, the estimable Larry Cuban blogged,

So it is hardly surprising, then, that many others, including myself, have been skeptical of the popular idea that evidence-based policymaking and evidence-based instruction can drive teaching practice. Those doubts have grown larger when one notes what has occurred in clinical medicine with its frequent U-turns in evidence-based “best practices.” Consider, for example, how new studies have often reversed prior “evidence-based” medical procedures.

  • Hormone therapy for post-menopausal women to reduce heart attacks was found to be more harmful than no intervention at all.
  • Getting a PSA test to determine whether the prostate gland showed signs of cancer for men over the age of 50 was “best practice” until 2012, when advisory panels of doctors recommended that no one under 55 should be tested and those older might be tested if they had family histories of prostate cancer.
  • And then there are new studies that recommend women have annual mammograms not at age 50, as recommended for decades, but at age 40.
  • Or research syntheses (sometimes called “meta-analyses”) that showed anti-depressant pills worked no better than placebos.

These large studies done with randomized clinical trials–the current gold standard for producing evidence-based medical practice–have, over time, produced reversals in practice. Such turnarounds, when popularized in the press (although media attention does not mean that practitioners actually change what they do with patients), often diminished faith in medical research, leaving most of us–and I include myself–stuck as to which healthy practices we should continue and which we should drop. Should I, for example, eat butter or margarine to prevent a heart attack? In the 1980s, the answer was: Don’t eat butter, cheese, beef, and similar high-saturated-fat products. Yet a recent meta-analysis of those and subsequent studies reached an opposite conclusion.
Figuring out what to do is hard because I, as a researcher, teacher, and person who wants to maintain good health, have to sort out what studies say and how those studies were done from what the media report, and then how all of that applies to me. Should I take a PSA test? Should I switch from margarine to butter?

He put it much better than I did. While the gains in overall modern medicine have been amazing, anybody who has had even a moderately complex health issue (like back pain, for example) has had the frustrating experience of having a billion tests, being passed from specialist to specialist, and getting no clear answers.1 More on this point later.

Larry’s next post—actually a guest post by Francis Schrag—is an imaginary argument between an evidence-based education proponent and a skeptic. I won’t quote it here, but it is well worth reading in full. My own position is somewhere between the proponent and the skeptic, though leaning more in the direction of the proponent. I don’t think we can measure everything that’s important about education, and it’s very clear that pretending that we can has caused serious damage to our educational system. But that doesn’t mean I think we should abandon all attempts to formulate a science of education.

For me, it’s all about literacy. I want to give teachers and students skills to interpret the evidence for themselves and then empower them to use their own judgment. To that end, let’s look at the other half of Larry’s April 9 post, the title of which is “What’s The Evidence on School Devices and Software Improving Student Learning?”

Lies, Damned Lies, and…

The heart of the post is a study by John Hattie, a professor at the University of Auckland (NZ). He has done meta-analysis on an enormous number of education studies, looking at effect sizes, where an effect of roughly 0.1 is negligible and 1.0 represents a full standard deviation of improvement.

He found that the “typical” effect size of an innovation was 0.4. To compare how different classroom approaches shaped student learning, Hattie treated this “typical” effect size (0.4) as the threshold at which a practice meaningfully influences student learning (p. 5). From his meta-analyses, he found that class size had a .20 effect (slide 15) while direct instruction had a .59 effect (slide 21). Teacher feedback had an effect size of .72 (slide 32). Moreover, teacher-directed strategies of increasing student verbalization (.67) and teaching meta-cognition strategies (.67) had substantial effects (slide 32). What about student use of computers (p. 7)? Hattie included many “effect sizes” of computer use from distance education (.09), multimedia methods (.15), programmed instruction (.24), and computer-assisted instruction (.37). Except for “hypermedia instruction” (.41), all fell below the “typical” effect size (.40) of innovations improving student learning (slides 14-18). Across all studies of computers, then, Hattie found an overall effect size of .31 (p. 4).
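To make these numbers concrete, here is a minimal sketch of how a standardized effect size of this kind (Cohen’s d) is computed and compared against Hattie’s 0.4 “typical” threshold. All of the scores below are invented for illustration; they are not taken from Hattie’s meta-analyses.

```python
# Cohen's d: the difference in group means divided by the pooled
# standard deviation. Invented scores, purely for illustration.
from statistics import mean, stdev

def cohens_d(treatment, control):
    """Standardized mean difference between two groups."""
    n1, n2 = len(treatment), len(control)
    s1, s2 = stdev(treatment), stdev(control)
    # Pooled SD weights each group's variance by its degrees of freedom.
    pooled_sd = (((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2)) ** 0.5
    return (mean(treatment) - mean(control)) / pooled_sd

# Hypothetical exam scores: a redesigned section vs. a traditional one.
redesigned = [78, 85, 90, 72, 88, 81, 79, 93]
traditional = [70, 75, 82, 68, 77, 73, 71, 80]

d = cohens_d(redesigned, traditional)
print(f"effect size d = {d:.2f}")
print("above Hattie's 'typical' 0.4 threshold:", d > 0.4)
```

Read this way, an effect size of .59 for direct instruction means the average student receiving it scored about 0.59 standard deviations above the average student who did not.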

The conclusion is that changing a classroom practice can often produce a significant effect size while adding a technology rarely does. But as my father likes to say, if you stick your head in the oven and your feet in the freezer, on average you’ll be comfortable. Let’s think about introducing clickers to a classroom, for example. What class are you using them in? How often do you use them? When do you use them? What do you use them for? Clickers in and of themselves change nothing. No intervention is going to be educationally effective unless it gets students to perceive, act, and think differently. There are lots of ways to use clickers in the classroom that have no such effect. My guess is that, most of the time, they are used for formative assessments. Those can be helpful or not, but generally when done in this way are more about informing the teacher than they are directly about helping the student. But there are other uses of clicker technologies. For example, University of Michigan professor Perry Samson recently blogged about using clickers to compare students’ sense of their physical and emotional well-being with their test performance:

Figure 2. Example of results from a student wellness question for a specific class day. Note the general collinearity of physical and emotional wellness.

I have observed over the last few years that a majority of the students who were withdrawing from my course in mid-semester commented on a crisis in health or emotion in their lives. On a lark this semester I created an image-based question to ask students in LectureTools at the beginning of each class (example, Figure 2) that requested their self-assessment of their current physical and emotional state. Clearly there is a wide variation in students’ perceptions of their physical and emotional state. To analyze these data I performed cluster analysis on students’ reported emotional state prior to the first exam and found that temporal trends in this measure of emotional state could be clustered into six categories.

Figure 3. Trends in students’ self-reported emotional state prior to the first exam in class are clustered into six categories. The average emotional state for each cluster appears to be predictive of median first exam scores.

Perhaps not surprisingly, Figure 3 shows that student outcomes on the first exam were very much related to the students’ self-assessment of their emotional state prior to the exam. This result is hard evidence for the intuition that students perform better when they are in a better emotional state.
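For readers curious about the mechanics of the analysis step Samson describes, here is a minimal k-means sketch of that kind of clustering: grouping students’ self-reported emotional-state trajectories (one rating per class day) into groups with similar trends. Everything here (the `kmeans` helper, the ratings, and the choice of two clusters) is invented for illustration; Samson’s actual analysis clustered real LectureTools responses into six categories.

```python
# Toy k-means over emotional-state trajectories. Each tuple is one
# student's self-reported state (1 = low, 5 = high) over four class days.
import random

def kmeans(points, k, iters=50, seed=0):
    rng = random.Random(seed)
    centers = rng.sample(points, k)  # start from k of the trajectories
    for _ in range(iters):
        # Assign each trajectory to its nearest center (squared distance).
        clusters = [[] for _ in range(k)]
        for p in points:
            dists = [sum((a - b) ** 2 for a, b in zip(p, c)) for c in centers]
            clusters[dists.index(min(dists))].append(p)
        # Recompute each center as the mean trajectory of its cluster.
        centers = [
            tuple(sum(vals) / len(vals) for vals in zip(*cl)) if cl else centers[i]
            for i, cl in enumerate(clusters)
        ]
    return centers, clusters

students = [(4, 4, 5, 4), (5, 4, 4, 5), (2, 2, 1, 2), (1, 2, 2, 1), (3, 2, 2, 2)]
centers, clusters = kmeans(students, k=2)
print([len(cl) for cl in clusters])  # cluster sizes
```

In a real analysis, one would then compare each cluster’s average trajectory against median exam scores, as in Samson’s Figure 3.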

I don’t know what Perry will end up doing with this information in terms of a classroom intervention. Nor do I know whether any such intervention will be effective. But it seems common sense not to lump it in with a million billion professors asking quiz questions on their clickers and aggregate it all into an average of how effective clickers are.

To be fair, that’s not Larry’s point in quoting the Hattie study. He’s arguing against the reductionist argument that technology fixes everything—an argument which seems obviously absurd to everybody except, sadly, the people who seem to have the power to make decisions. But my point is that it is equally absurd to use this study as evidence that technology is generally not helpful. What I think it suggests is that it makes little sense to study the efficacy of educational technologies or products outside the context of the efficacy of the practices that they enable. More importantly, it’s a good example of how we all need to get much more sophisticated about reading the studies so we can judge for ourselves what they do and do not prove.

Of Back Mice and Men

I have had moderate to severe back pain for the past seven years. I have been to see orthopedists, pain specialists, rheumatologists, urologists, chiropractors, physical therapists, acupuncturists, and massage therapists. In many cases, I have seen more than one in any given category. I had X-rays, CAT scans, MRIs, and electrical probes inserted into my abdomen and legs. I had many needles of widely varying gauges stuck in me, grown humans walking on my back, gallons of steroids injected into me. I had the protective sheaths of my nerves fried with electricity. If you’ve ever had chronic pain, you know that you would probably go to a voodoo priest and drink goat urine if you thought it might help. (Sadly, there are apparently no voodoo priests in my area of Massachusetts—or at least none who have a web page.) Nobody I went to could help me.

Not too long ago, I had cause to visit my primary care physician, who is a good old country doctor. No specialist certificates, no Ivy League medical school degrees. Just a solid GP with some horse sense. In a state of despair, I explained my situation to him. He said, “Can I try something? Does it hurt when I touch you here?” OUCH!!!! It turns out that I have a condition called “back mice,” also called “episacral lipomas” when it is referred to in the medical literature, which, it turns out, happens rarely. I won’t go into the details of what they are, because that’s not important to the story. What’s important is what the doctor said next. “There’s hardly anything on them in the literature,” he said. “The thing is, they don’t show up on any scans. They’re impossible to diagnose unless you actually touch the patient’s back.”

I thought back to all the specialists I had seen over the years. None of the doctors ever once touched my back. Not one. My massage therapist actually found the back mice, but she didn’t know what they were, and neither of us knew that they were significant. It turns out that once my GP discovered that these things exist, he started finding them everywhere. He told me a story of an eighty-year-old woman who had been hospitalized for “non-specific back pain.” They doped her up with opiates and the poor thing couldn’t stand up without falling over. He gave her a couple of shots in the right place, and a week later she was fine. He has changed my life as well. I am not yet all better—we just started treatment two weeks ago—but I am already dramatically better.

The thing is, my doctor is an empiricist. In fact, he is one of the best diagnosticians I know. (And I have now met many.) He knew about back mice in the first place because he reads the literature avidly. But believing in the value of evidence and research is not the same thing as believing that only that which has been tested, measured, and statistically verified has value. Evidence should be a tool in the service of judgment, not a substitute for it. Isn’t that what we try to teach our students?

  1. But I’m not bitter.

The post Head in the Oven, Feet in the Freezer appeared first on e-Literate.

AAC&U GEMs: Exemplar Practice

Sat, 2014-04-12 06:04

A while back, I wrote about my early experiences as a member of the Digital Working Group for the AAC&U General Education Maps and Markers (GEMs) initiative and promised that I would do my homework for the group in public. Today I will make good on that promise. The homework is to write up an exemplar practice of how digital tools and practices can help support students in their journeys through GenEd.

As I said in my original post, I think this is an important initiative. I invite all of you to write up your own exemplars, either in the comments thread here or in your own blogs or other digital spaces.

The template for the exemplar is as follows:

Evocative Examples of Digital Resources and Strategies that can Improve General Education: What are these cases a case of?

Brief Description of practice:

  • In what ways is the practice effective or transformative for student learning? What’s the evidence? How do we know? (If you can tie the practice to any of the outcomes in the DQP and/or the LEAP Essential Learning Outcomes, that would be great.)
  • How does the practice reflect the digital world as lived student culture? What are the skills and content associated with the digital practice or environment? How does the practice deepen or shape behavior of students with digital tools and environments with which they may be variously familiar?
  • What does it take to make the practice work? What is the impact on faculty time? Does it take a team to design, implement, assess? What are the implications for organizational change?
  • How is it applicable to gen ed (if example doesn’t come from gen ed)?
  • Are there references or literature you can point to that are relevant to the practice?

I decided to base my exemplar on the MSU psychology class that I’ve written about recently.

Flipped and Blended Class with Homework Platform Support

In this practice, every effort is made to move both direct instruction and formative assessment outside of class time. The “flipped classroom” (or “flipped learning”) approach is employed, providing students with instructional videos and other supplemental content. In addition, a digital homework platform gives students regular formative assessments. To give students more time for these activities, the amount of in-class time is reduced, making the course effectively a blended or hybrid course. In-class time is devoted to class discussion, which is informed by the instructor’s knowledge of the students’ performance on the regular formative assessments, and to group work.

In what ways is the practice effective or transformative for student learning? What’s the evidence? How do we know?

This is a particular subset of a practice that the National Center for Academic Transformation (NCAT) calls “the replacement model”, and they have a variety of course redesign projects that demonstrated improved outcomes relative to the control. For example, a redesign of a psychology Gen Ed course at Missouri State University produced the following results:

  • On the 30-item comprehensive exam, students in the redesigned sections performed significantly better (84% improvement) compared to the traditional comparison group (54% improvement).
  • Students in the redesigned course demonstrated significantly more improvement from pre to post on the 50-item comprehensive exam (62% improvement) compared to the traditional sections (37% improvement).
  • Attendance improved substantially in the redesigned section. (Fall 2011 traditional mean percent attendance = 75% versus fall 2012 redesign mean percent attendance = 83%)
  • Over a three-semester period following the redesign, the course DFW rate improved from 24.6% to 18.4% (most of which was because of a significant drop in the withdrawal rate).

One of the investigators of the project, who also was a course instructor, indicated that the quality of class discussion improved significantly as well.

Possible reasons why the practice is effective include the following:

  • Teacher/student contact time is maximized for interactivity.
  • Regular formative assessments with instant feedback help students to be better prepared to maximize discussion time with the teacher and with peers.
  • Feedback from the homework system enables the instructor to walk into class knowing where students need the most help.
  • A reduced number of physical class meetings lowers the chances that a student will withdraw due to grade-damaging absences.

How does the practice reflect the digital world as lived student culture? What are the skills and content associated with the digital practice or environment? How does the practice deepen or shape behavior of students with digital tools and environments with which they may be variously familiar?

Students are used to getting their information online. They are also often very effective at “time slicing,” in which they use small increments of time (e.g., when they are on a bus or waiting for an appointment) to get things done. This exemplar practice enables students to do that with the portions of academic work that are suited to it while preserving and actually expanding room for long and deep academic discussion.

What does it take to make the practice work? What is the impact on faculty time? Does it take a team to design, implement, assess? What are the implications for organizational change?

The redesign effort is significant and, because the creation of significant digital resources is involved, is often best done by a team (although that is not strictly necessary). For the purposes of this design, the homework platform need not be cutting-edge adaptive, as long as it provides formative assessments that are consistent with the summative assessments and provides both students and instructors with good, regular feedback. That said, implementing the technology is often not seamless and may take several semesters to work the kinks out. The shift to a flipped classroom also puts new demands on students and may take several semesters for the campus culture to adjust to the new approach.

How is it applicable to gen ed (if example doesn’t come from gen ed)?

This model is often used in Gen Ed. It is particularly appropriate for larger classes where the DFW rate is high and where a significant percentage of the subject matter—at least the foundational knowledge on the lower rungs of Bloom’s taxonomy—can be assessed through software.

Are there references or literature you can point to that are relevant to the practice?

http://mfeldstein.com/efficacy-adaptive-learning-flipped-classroom/

http://mfeldstein.com/efficacy-adaptive-learning-flipped-classroom-part-ii/

http://www.thencat.org/PlanRes/R2R_Model_Rep.htm

http://www.thencat.org/PCR/R3/TCC/TCC_Overview.htm

http://www.flippedlearning.org/

The post AAC&U GEMs: Exemplar Practice appeared first on e-Literate.

Efficacy, Adaptive Learning, and the Flipped Classroom, Part II

Wed, 2014-04-09 10:45

In my last post, I described positive but mixed results of an effort by MSU’s psychology department to flip and blend their classroom:

  • On the 30-item comprehensive exam, students in the redesigned sections performed significantly better (84% improvement) compared to the traditional comparison group (54% improvement).
  • Students in the redesigned course demonstrated significantly more improvement from pre to post on the 50-item comprehensive exam (62% improvement) compared to the traditional sections (37% improvement).
  • Attendance improved substantially in the redesigned section. (Fall 2011 traditional mean percent attendance = 75% versus fall 2012 redesign mean percent attendance = 83%)
  • They did not get a statistically significant improvement in the number of failures and withdrawals, which was one of the main goals of the redesign, although they note that “it does appear that the distribution of A’s, B’s, and C’s shifted such that in the redesign, there were more A’s and B’s and fewer C’s compared to the traditional course.”
  • In terms of cost reduction, while they fell short of their 17.8% goal, they did achieve a 10% drop in the cost of the course….

It’s also worth noting that MSU expected to increase enrollment by 72 students annually but actually saw a decline of enrollment by 126 students, which impacted their ability to deliver decreased costs to the institution.

Those numbers were based on the NCAT report that was written up after the first semester of the redesigned course. But that wasn’t the whole story. It turns out that, after several semesters of offering the course, MSU was able to improve their DFW numbers after all:

MSU DFWThat’s a fairly substantial reduction. In addition, their enrollment numbers have returned to roughly what they were pre-redesign (although they haven’t yet achieved the enrollment increases they originally hoped for).

When I asked Danae Hudson, one of the leads on the project, why she thought it took time to see these results, here’s what she had to say:

I do think there is a period of time (about a full year) where students (and other faculty) are getting used to a redesigned course. In that first year, there are a few things going on 1) students/and other faculty are hearing about “a fancy new course” – this makes some people skeptical, especially if that message is coming from administration; 2) students realize that there are now a much higher set of expectations and requirements, and have all of their friends saying “I didn’t have to do any of that!” — this makes them bitter; 3) during that first year, you are still working out some technological glitches and fine tuning the course. We have always been very open with our students about the process of redesign and letting them know we value their feedback. There is a risk to that approach though, in that it gives students a license to really complain, with the assumption that the faculty team “doesn’t know what they are doing”. So, we dealt with that, and I would probably do it again, because I do really value the input from students.

I feel that we have now reached a point (2 years in) where most students at MSU don’t remember the course taught any other way and now the conversations are more about “what a cool course it is etc”.

Finally, one other thought regarding the slight drop in enrollment we had. While I certainly think a “new blended course” may have scared some students away that first year, the other thing that happened was there were some scheduling issues that I didn’t initially think about. For example, in the Fall of 2012 we had 5 sections and in an attempt to make them very consistent and minimize missed classes due to holidays, we scheduled all sections on either a Tuesday or a Wednesday. I didn’t think about how that lack of flexibility could impact enrollment (which I think it did). So now, we are careful to offer sections (Monday through Thursday) and in morning and afternoon.

To sum up, she thinks there were three main factors: (1) it took time to get the design right and the technology working optimally; (2) there was a shift in cultural expectations on campus that took several semesters; and (3) there was some noise in the data due to scheduling glitches.

There are a number of lessons one could draw from this story, but from the perspective of educational efficacy, I think it underlines how little the headlines (or advertisements) we get really tell us, particularly about components of a larger educational intervention. We could have read, "Pearson's MyPsychLab Course Substantially Increased Students' Knowledge, Study Shows." That would have been true, but we have little idea how much improvement there would have been had the course not been fairly radically redesigned at the same time. We also could have read, "Pearson's MyPsychLab Course Did Not Improve Pass and Completion Rates, Study Shows." That would have been true, but it would have told us nothing about the substantial gains over the semesters following the study. We want talking about educational efficacy to be like talking about the efficacy of Advil for treating arthritis. But it's closer to talking about the efficacy of various chemotherapy drugs for treating a particular cancer. And we're really, really bad at talking about that kind of efficacy. I think we have our work cut out for us if we really want to be able to talk intelligently and intelligibly about the effectiveness of any particular educational intervention.

The post Efficacy, Adaptive Learning, and the Flipped Classroom, Part II appeared first on e-Literate.

Efficacy, Adaptive Learning, and the Flipped Classroom

Wed, 2014-04-02 12:10

Speaking of efficacy and the complexity of measuring it, I had an interesting conversation the other day with Danae Hudson, a professor of Psychology at Missouri State University, about a course redesign effort that she participated in. The initial contact came from a P.R. firm hired by Pearson. Phil and I get a lot of these and turn down most of them. This one interested me for several reasons. First, it was an opportunity to talk directly to a faculty member who was involved in the project. (A tip to all you P.R. folks out there: You will have a better chance of getting our attention when the focus of the call is to put us in direct contact with your customers about what they are achieving.) Second, the project was facilitated by The National Center for Academic Transformation (NCAT). I am a big fan of NCAT's work, despite the fact that they seem to have an almost pathological urge to sabotage efforts to bring their work the attention that it deserves. Pearson's interest in the call was that MyPsychLab was an integral part of the course redesign. My interest was to see what I could learn about the interaction between educational technology products and educational practices in delivering educational efficacy.

What I heard tended to confirm my suspicions (and common sense): Educational technology products can produce significant learning gains, but they often do so by supporting changes in classroom practices.

The Goals

Like all NCAT redesign projects, this one has a complete write-up on the NCAT site. The document summarizes the redesign context and goals as follows:

Introductory Psychology is a semester-long, general education course at Missouri State University (MSU). The course falls within the self understanding/social behavioral perspective area of general education and is by far, the most popular choice for students within that area. Each academic year, at least 18 traditional face-to-face sections are offered with a total enrollment of 2,500-2,700 students. The course is lecture-based and typically taught by 65% full-time faculty and 35% adjunct instructors. While there are common general education goals across all sections, each instructor makes individual choices of content and delivery.

Despite being a popular choice among students, Introductory Psychology has traditionally experienced a high DFW rate (approximately 25%). The department wants to find ways to develop a more engaging course that will result in improved student learning outcomes and student satisfaction. Due to the large enrollment and numerous sections offered throughout the year, a significant number of adjunct instructors teach the course, which has contributed to some course drift and grade inflation. Currently, each section of 153 students is taught by one instructor, which significantly limits the type of activities that can be assigned and graded. The vast majority of the final course grade is derived from a series of multiple-choice exams. The goal is to redesign the course to be much more engaging and interactive, with an emphasis on true mastery of the course material.

To sum up: We have a popular Gen Ed course with a high failure and withdrawal rate. Danae also told me that the psychology department had long delivered a formative exam at the beginning of that class, and that they were unhappy with the level of improvement students were showing between the formative and summative exams. The faculty wanted to improve those numbers by making the course "more engaging and interactive, with an emphasis on true mastery of the course material."

This is typically where we start hearing that teaching effectively is expensive. But NCAT has a strong track record of proving that to be false. It turns out that ineffective teaching methods are often inefficient as well. Let's pause and think about the formulation of that last sentence for a moment. It's not always the case that effective teaching methods are cost-efficient. Of course we know that good seminars with low teacher/student ratios can be very effective but, to adopt the current parlance, "don't scale." In that situation, there is a tension between effectiveness and efficiency. But despite appearances, some traditional styles of teaching—most notably the classic ginormous lecture class—are both ineffective and inefficient. Why is that so? To start with, both the physical plant and the labor structure of the large lecture class limit its ability to scale. If you run out of lecture hall seats, or you run out of TAs, you have exhausted your ability to increase the number of students taught with the faculty that you have. The central innovation of video-based xMOOCs is that they remove this limitation without changing the underlying pedagogical model of the large lecture. But the larger point is that cost and effectiveness are a two-way street in education. In my last post, I discussed David Wiley's argument that the cost of curricular materials impacts effectiveness insofar as cost limits student access to those materials. But it goes the other way too. There is a cost for every student who fails or withdraws from a class and therefore has to retake it.
The direct cost is in the tuition paid for two classes rather than one—a cost paid by financial aid providers in addition to the student—but indirect costs include increased chances that the student might have to stay an extra semester or drop out altogether, as well as the knock-on effect of the student blocking a seat for another student in an enrollment-capped but graduation-required course. NCAT typically doesn't even look at these indirect costs, yet it is often able to find significant direct cost savings by restructuring courses away from ineffective pedagogical approaches toward more effective ones that also happen to be more scalable. In MSU's case, they projected that they would be able to lower the direct cost of the course by 17.8% while still achieving the primary goal of increasing effectiveness. The NCAT report notes,

The cost savings will remain in the psychology department and will be used to provide support for the redesigned course in the future, faculty wishing to take on additional course redesign projects and faculty travel to present at conferences related to the scholarship of teaching and learning.
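
As a rough illustration of the retake costs discussed above, here is a back-of-the-envelope sketch. The enrollment (roughly 2,600 students per year) and DFW rate (approximately 25%) come from the NCAT write-up quoted earlier; the per-seat cost is a purely hypothetical placeholder, not an MSU figure.

```python
# Back-of-the-envelope sketch of the direct cost of course retakes.
# Enrollment (~2,600/year) and DFW rate (~25%) are from the NCAT write-up;
# the per-seat course cost is a HYPOTHETICAL placeholder, not an MSU number.

def retake_cost(enrollment, dfw_rate, cost_per_seat):
    """Direct cost of paying tuition twice for students who must retake."""
    retakers = enrollment * dfw_rate
    return retakers * cost_per_seat

# ~2,600 students/year, 25% DFW, hypothetical $750 per course seat
print(retake_cost(2600, 0.25, 750))  # 487500.0
```

Even with a made-up per-seat figure, the point stands: a high-DFW Gen Ed course quietly burns six figures a year in direct retake costs before you even count the indirect ones.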

But How?

MSU decided to redesign its course around what NCAT calls "the Replacement Model," which can be thought of as a combination of flipped and hybrid. At this point most people have at least a basic idea of what "hybrid" means, but "flipped" is a lot less clear. The Chronicle of Higher Education recently published a column by Robert Talbert highlighting a group that is trying to establish a definition and best practices around what they call "flipped learning," which they describe as follows:

Flipped Learning is a pedagogical approach in which direct instruction moves from the group learning space to the individual learning space, and the resulting group space is transformed into a dynamic, interactive learning environment where the educator guides students as they apply concepts and engage creatively in the subject matter.

That’s it in a nutshell: Move direct instruction (i.e., lectures) out of class time so that there can be more direct student interaction time. Which sounds great, but it leads to a frequently asked question. If students have to do all the homework they were doing before plus watching all the lecture videos at home, isn’t that going to dramatically increase the amount of time they have to spend on the class? How can they do all of that work? NCAT’s answer is that you give them back some of that time by making the class “hybrid” in the sense that you reduce their in-class seat time by 50%. That’s why it’s called the “Replacement Model.”

While Danae never used the term "flipped learning," she did talk about the flipped classroom and made it clear that the point was to increase the amount of class time spent interacting with students and focusing on their particular needs. But the MSU plan called for decreasing class time by 50% while doubling the number of students per class from an average of 153 to 300. How was that supposed to work?

Part of the answer lies in using traditional techniques like group projects, but a lot of it is in using data to provide students with more feedback and fine tune the classroom experience. This is where Pearson comes in. I wrote a while back that the promise of adaptive learning programs is to transform the economics of tutoring:

The simplest way to think about adaptive learning products in their current state is as tutors. Tutors, in the American usage of the word, provide supplemental instruction and coaching to students on a one-on-one basis. They are not expected to know everything that the instructor knows, but they are good at helping to ensure that the students get the basics right. They might quiz students and give them tips to help them remember key concepts. They might help a student get unstuck on a particular step that he hasn’t quite understood.  And above all, they help each student to figure out exactly where she is doing well and where she still needs help.

Adaptive learning technologies are potentially transformative in that they may be able to change the economics of tutoring. Imagine if every student in your class could have a private tutor, available to them at any time for as long as they need. Imagine further that these tutors work together to give you a daily report of your whole class—who is doing well, who is struggling on which concepts, and what areas are most difficult for the class as a whole. How could such a capability change the way that you teach? What would it enable you to spend less of your class time doing, and what else would it enable you to spend more of your class time doing? How might it impact your students’ preparedness and change the kinds of conversations you could have with them? The answers to these questions are certainly different for every discipline and possibly even for every class. The point is that these technologies can open up a world of new possibilities.

This is exactly how MSU is using MyPsychLab. One of the biggest benefits that Danae cited was being able to walk into a class knowing what students were doing well with and what they were struggling with. This enables her and her colleagues to focus on the topics that those particular students need the most help with in class while simultaneously communicating to the students that their teacher is aware of how they are doing and what they need. Likewise, she said that the students are coming to class more engaged with better questions. MSU also uses clickers in class to augment the feedback loop that they are getting from the homework platform. This certainly was a critical enabler at a class size of 300 and would be useful in a significantly smaller lecture class as well.

Did it work? The results are overall very positive but mixed:

  • On the 30-item comprehensive exam, students in the redesigned sections performed significantly better (84% improvement) compared to the traditional comparison group (54% improvement).
  • Students in the redesigned course demonstrated significantly more improvement from pre to post on the 50-item comprehensive exam (62% improvement) compared to the traditional sections (37% improvement).
  • Attendance improved substantially in the redesigned section. (Fall 2011 traditional mean percent attendance = 75% versus fall 2012 redesign mean percent attendance = 83%)
  • They did not get a statistically significant improvement in the number of failures and withdrawals, which was one of the main goals of the redesign, although they note that “it does appear that the distribution of A’s, B’s, and C’s shifted such that in the redesign, there were more A’s and B’s and fewer C’s compared to the traditional course.”
  • In terms of cost reduction, while they fell short of their 17.8% goal, they did achieve a 10% drop in the cost of the course.

Intuitions and Open Questions

The study of the course redesign was intended to measure the overall impact of the effort rather than to research the components of efficacy, which means that we don’t have good data from which we can draw strong conclusions on the most interesting questions in this regard. But I’m not afraid to make some guesses and I asked Danae to do the same with me. To be clear, her first answer to any of the questions I’m going to bring up in this section of the post was consistently along the lines of, “I don’t have data that speaks to that question.” Which is the right answer. I want to be clear that wherever I reference her opinions here that it was in this context and that she was appropriately tentative.

First of all, what role did MyPsychLab have in the improvements? Here we have at least one hard number:

A significant portion of the redesigned course utilized publisher-customized digital learning technology. A correlation was calculated between the students’ online total score of assigned material and the total of five exam scores. This correlation was .68, p < .001 suggesting a strong relationship between the completion of online learning activities and exam performance.

But why? Obviously, practice is part of the equation. Students who do the homework tend to do better in classes in general. I asked Danae what she thought the big drivers were beyond that. She cited the feedback to faculty and student engagement. The product seemed to succeed in getting students engaged, from her perspective. When pressed about the degree to which the adaptive component of the product made a difference, she guessed that it wasn't as big a factor. "My gut tells me that it is less about the personalization," she said. But then she added that the personalization may have helped to drive student engagement by making the students feel like the content was tailored to their needs. "I think personalization is the part that appeals to the students." This raises the question of the degree to which any gains we see from an adaptive product may be due to a digital…er…analog of teaching presence, as opposed to the software's real ability to adapt to individual student needs and capabilities.

Second, I asked Danae to characterize how much she thinks adopting MyPsychLab would have driven improvements had it been added to the original class before the redesign. Her instinct was that it would not have helped nearly as much, which is my instinct too. We don't have numbers to separate the impact of the practice from the impact of the tailored instruction that resulted from having the student data in the product. Nor do we know how much student engagement with the product was affected by the fact that it was integrated into the whole course redesign. These would be important questions to answer before we can have a clear and nuanced answer to the question of the product's "efficacy." Efficacious under what circumstances?

Finally, I’d like to return to David Wiley’s question about cost as a proxy for access and its impact on efficacy. Danae was traveling and didn’t have access to the course materials cost information when I reached her by email today, but she was confident that the cost had not gone up significantly and thought it might have actually gone done post-redesign. (And no, Pearson did not subsidize the cost of MyPsychLab to the students.) So we have no before/after data from which we can make inferences regarding the impact of cost on student outcomes. But it’s possible that MSU could have had a more significant impact on its DFW rate had the total cost to the students been lower. It’s also worth noting that MSU expected to increase enrollment by 72 students annually but actually saw a decline of enrollment by 126 students, which impacted their ability to deliver decreased costs to the institution. Would they have seen different enrollments had the curricular materials been less expensive? Or free? We don’t know. But this raises the point that efficacy cannot be reduced to one aggregate number. Improving student aggregate test scores and reducing the number of students who fail or withdraw are two different goals which certainly need to be measured differently and probably need different sorts of interventions to achieve.

Postscript

After this post went live, Danae shared some data with me from the semesters after the NCAT report was published. As it turns out, the course did see significant reduction in its DFW rates and enrollments bounced back over the course of several semesters. You can read about the details, as well as possible explanations, here.

The post Efficacy, Adaptive Learning, and the Flipped Classroom appeared first on e-Literate.

Efficacy Math is Hard

Tue, 2014-04-01 10:12

David Wiley has a great post up on efficacy and OER in response to my original post about Pearson’s efficacy plan. He opens the piece by writing about Benjamin Bloom’s famous “2 sigma” problem:

The problem isn’t that we don’t know how to drastically increasing learning. The two-part problem is that we don’t know how to drastically increase learning while holding cost constant. Many people have sought to create and publish “grand challenges” in education, but to my mind none will ever be more elegant than Bloom’s from 30 years ago:

“If the research on the 2 sigma problem yields practical methods – which the average teacher or school faculty can learn in a brief period of time and use with little more cost or time than conventional instruction – it would be an educational contribution of the greatest magnitude.” (p. 6; emphasis in original)

So the conversation can’t focus on efficacy only – if there were no other constraints, we actually know how to do “effective.” But there are other constraints to consider, and to limit our discussions to efficacy is to remain in the ethereal imaginary realm where cost doesn’t matter. And cost matters greatly.

David then launches into a discussion of what he calls his “golden ratio,” or standard deviations per dollar. I have long been a fan of this formulation and quote it frequently. I’m not going to try to summarize his explication of it in his post; you really should go read it. But I would like to tease out a few implications here.

Cost/Effectiveness Analysis

By expressing cost and educational impact in a ratio, David is engaging in something called cost/effectiveness analysis. You may be more familiar with the closely related term "cost/benefit analysis." The main difference between the two is that in the latter the benefit is expressed in financial terms, while in the former it is expressed in non-financial terms (such as learning gains, in this case). This is a powerful tool which is unfortunately misapplied more often than not. When people invoke cost/benefit, what they often mean to invoke is cost, as in, "Do you really think this is worth it?" It is used to selectively question an expenditure that somebody doesn't like. (Note that I am not accusing David of making this error; I'm just talking about common usage.) In Congress, cost/benefit is often a requirement tacked onto a bill to decrease the likelihood that the thing the amendment's author doesn't like will actually get funding. Likewise in education, cost/benefit or cost/effectiveness is loosely invoked for things that the invokers don't think are worth the money up front, whether it's textbooks, LMSs, or teacher salaries.

But the better way to apply the tool is comparatively across the range of possible investment decisions. “Given X amount of money, do we get more standard deviations for our dollars by investing in A or B?” This moves us away from a focus on preventing spending on things we don’t like and toward a focus on maximizing utility, which is what David is after.  And this is where it gets complicated. A good part of David’s post is about the complexities of measuring and impacting the numerator in standard deviations per dollar. Unfortunately, we have a lot of trouble tracking the denominator as well. Even the institutional costs can be complex, as Phil’s recent response to Chris Newfield regarding the true cost of the UF/Pearson deal illustrates. It gets a lot more complicated when we start asking, “Cost to whom?” The controversy around the UF deal centers around the cost to the institution and ultimately to the state. Textbooks are paid for by students. Mostly. Sort of. Except when they spend university scholarship money on them. Or state or Federal financial aid on them. None of this argues against the framework that David is presenting. It just makes the practical application of it more challenging.

But It’s Worse Than That

So far, we’ve been talking about the ratio as if “efficacy” is represented in the numerator. David reinforces this impression when he writes,

So the conversation can’t focus on efficacy only – if there were no other constraints, we actually know how to do “effective.” But there are other constraints to consider, and to limit our discussions to efficacy is to remain in the ethereal imaginary realm where cost doesn’t matter.

But that’s not really his argument. His argument is cost impacts access which impacts efficacy. If students fail to use the prescribed product because they cannot afford to buy it, and they therefore do poorly in class, then the cost of the product is inextricable from the measure of its efficacy. This is an excellent example of what Mike Caulfield meant when he referred to the “last mile” problem. An educational product, technique, or intervention can only be said to be “effective” when it has an effect. It can only have an effect if it is actually used—and often only if it is actually used in the way it was designed to be used. Of course, if students can’t afford to buy the product, then they won’t use it and it therefore is not effective for them.

So maybe the entire ratio, including numerator and denominator, collectively expresses a measure of effectiveness, right? Not so fast. There are two colleges that are fairly close to where I live. One, Berkshire Community College, has a total non-residential cost of $5,850 per year for Massachusetts residents taking 15 credits per semester. The other, Simon's Rock College, has a total residential cost of $60,000 per year. A cost of $100 for curricular materials could have a dramatic impact on access (and therefore efficacy) in the former environment but a negligible one in the latter. Standard deviations per dollar does not capture this difference. We could instead express the denominator in terms of percentage of total cost, which would help somewhat for this particular purpose. But what we really need is empirical data quantifying the impact of cost on student access under different conditions. Doing so would enable us to separate the numerator and the denominator once again. If the impact of cost for a particular educational population is already factored into the numerator, then we can get back to a discussion of bang for the buck. We also can make more nuanced evaluations. It may be that, because of the access issue, a commercial product is more effective for Simon's Rock students than it is for BCC students. Further, we could (theoretically) perform a calculation to determine its effectiveness for University of Massachusetts students, which would presumably be different from either of the other two.
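
To make the argument concrete, here is a sketch of David's "standard deviations per dollar" ratio with a crude access adjustment bolted on. Every number here is hypothetical, including the access model itself, which simply assumes (with an arbitrary sensitivity factor) that the share of students priced out of the materials grows with the materials cost relative to total cost of attendance. The real version of that relationship is exactly the empirical data we don't yet have.

```python
# Sketch of David Wiley's "standard deviations per dollar" ratio, plus a crude
# access adjustment. All numbers are HYPOTHETICAL, and the access model is a
# pure assumption, chosen only to show why the same $100 materials cost can
# matter very differently at a $5,850/yr college versus a $60,000/yr college.

def sd_per_dollar(effect_size_sd, cost_dollars):
    """Wiley's ratio: learning gain (in standard deviations) per dollar."""
    return effect_size_sd / cost_dollars

def access_adjusted_effect(effect_size_sd, materials_cost, total_cost_of_attendance):
    """Discount the effect by the assumed share of students priced out of the
    materials, modeled here as proportional to materials cost relative to
    total cost of attendance (the factor of 10 is an arbitrary sensitivity)."""
    priced_out = min(1.0, materials_cost / total_cost_of_attendance * 10)
    return effect_size_sd * (1 - priced_out)

effect = 0.4      # hypothetical 0.4-sigma gain when the product is actually used
materials = 100   # the $100 curricular materials cost from the example above

# Total costs of attendance from the post: BCC ($5,850) vs Simon's Rock ($60,000)
for total in (5850, 60000):
    adjusted = access_adjusted_effect(effect, materials, total)
    print(total, round(sd_per_dollar(adjusted, materials), 5))
```

Under this (again, invented) model, the same product delivers more standard deviations per dollar at the expensive residential college, purely because fewer students there are priced out of using it.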

I guess what I’m trying to say is that efficacy is complicated. It’s a great goal, but teasing out what it means and how to measure it in authentic and useful ways is going to be very difficult.

The post Efficacy Math is Hard appeared first on e-Literate.

Clarifications on UF Online Payments to Pearson Embanet

Mon, 2014-03-31 15:35

I wrote a post over the weekend that included information from the Gainesville Sun about the University of Florida Online (UF Online) program and its expected payments to Pearson Embanet. Chris Newfield from Remaking the University also wrote on the subject today. Chris raises some very important issues in his post, including his point:

Universities may have a cost disease, but they now have a privatization disease that is even worse.

In his post, however, there seems to be a misunderstanding of how the revenue sharing agreement works. Given the importance of the questions that Chris raises, I think it is important to understand the payment model used by most Online Service Providers (OSPs), such as the one in place at UF Online.

The part of the blog post that is mistaken, in my understanding, is this [emphasis added]:

UF did admit that it had to pay Pearson cash up front: it just wouldn’t say how much. A week later, Mr. Schweers reported that through various documents he’d been able to show that UF would pay Pearson Embanet $186 million over the 11 year life of the contract. The business plan sounds much like the Udacity-Georgia Tech deal. It involved very large growth projections to 13,000 students paying full in-state or non-resident tuition for an all-online program by 2018, with Pearson getting, in addition to its fee, $19 million of $43 million in projected revenues. 13,000 is the size of UF’s first year class.

The revenue estimates are worth pondering. Even if Pearson fails, it will effectively pocket all of the state funding that was given to UF for online, and some internal UF money besides. Pearson is owed $186 million over time for getting involved, and the state provided $35 million. Pearson will contractually absorb all of the state money and then be entitled to another $151 million of UF’s internal funds. (UF Associate Provost Andy McDonough says that Pearson will get $9.5 million in the first five years, but it is not clear whether or how this reflects the still partially redacted contract.)

If somehow the Pearson dragnet finds thousands of students to pay full tuition for an all-online program with the University of Florida name, UF is slated to gross $24 million in 2019, which is projected to rise to $48 million five years later. In this best possible scenario, UF will get back its initial $151 million around ten years from now. The University will thus be ready to earn its first net dollar in 2025.

The basic idea is that the OSP provides up-front investment, spending far more money in the initial years of an online program than it makes from the school. This is why 2U is growing quickly ($83.1 million revenue on 49% growth) but is still losing big ($27.9 million last year, with unclear prospects of breaking even). Most of 2U's programs are in the early stages, when 2U is investing more in the online program than it is making in revenue.

In the UF Online case, they appear to be paying Pearson Embanet $9.5 million for the first five years as partial compensation for these up-front expenses. I believe that Pearson will internally spend far in excess of $9.5 million over the next five years, running a loss. During that same startup period, however, the Florida legislature will fund UF Online with $35 million. Pearson will only make 27% of this money if the Gainesville Sun is correct in its analysis of the contract.

After 2019, all payments shift to revenue from tuition and fees paid by students, as described by the Sun:

After 2018, UF will also stop paying Pearson directly and Pearson’s income will come entirely from its share of tuition revenue and any fees it charges. UF projects it will have over 13,000 students in UF Online generating $43 million in tuition revenue by 2019 — of which Pearson will get close to $19 million.

By 2024, with 24,000 students anticipated, revenues generated will be about $76 million, with $28 million going to Pearson, McCullough said.

OSPs typically take a percentage of the tuition revenue based on enrollment targets. What is important here is that the revenue for the OSP depends on enrollment – if UF Online does not hit the enrollment targets, Pearson Embanet will not get $186 million in revenue. They make a percentage of the revenue without guaranteed payments.

In the best possible scenario for UF Online and for Pearson Embanet, the school will start making money from students on day 1. In 2019, if UF Online hits its enrollment targets, it will net $24 million ($43 million of revenue, paying $19 million to Pearson Embanet). As enrollment grows (again, assuming that it does), UF Online will make more and more over time, an estimated net of $48 million in 2024 ($76 million of revenue, paying $28 million to Pearson Embanet). If UF Online does not hit targets, both UF Online and Pearson Embanet will make far less than the projections in the article.
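For those who want to check the arithmetic, here is a quick back-of-the-envelope calculation (my own math from the figures reported in the Sun, not anything official from UF or Pearson):

```python
# Net revenue to UF Online under the best-case projections reported
# by the Gainesville Sun (my arithmetic, not an official calculation).
revenue_2019, pearson_2019 = 43e6, 19e6  # 2019 projected tuition revenue / Pearson share
revenue_2024, pearson_2024 = 76e6, 28e6  # 2024 projected tuition revenue / Pearson share

uf_net_2019 = revenue_2019 - pearson_2019  # $24 million
uf_net_2024 = revenue_2024 - pearson_2024  # $48 million
```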

As mentioned before, Chris raises some important questions, but this is not a matter of a school paying all revenue to an OSP without seeing a dime of net revenue until 2025 and beyond.

Update (3/31): I found the spreadsheets in the business plan, and these contract numbers are directly derived from the plan. The key is that the plan labels Pearson Embanet (the OSP) as “P3” for Public / Private Partnership (see page 87 for an explanation).

As for the mechanism to pay Embanet, the plan uses a moving scale, with the revenue-split percentages varying by year and by in-state versus out-of-state tuition. In 2015 Pearson Embanet makes 40% of in-state tuition and 60% of out-of-state tuition; by 2022 those shares drop to 30% and 42%. The plan also shows the “Additional Fixed Fee” of $9.5 million broken up over the first five years. See here on page 84:

Revenue mechanism

On page 82 these numbers are applied to the estimated enrollment, with the resultant fee to Pearson Embanet labeled as “P3 Services”. This combines the tuition sharing with the additional fixed fee. For example, in 2016: ($3.2M × 0.4) + ($7.2M × 0.6) + $2.0M = $7.6 million. If you add up the row labeled “P3 Services” you get the total of $186 million.
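The fee mechanism can be sketched as a simple formula (this is my reading of the plan’s spreadsheet, using the 2016 figures as the worked example; the function name and structure are mine, not the plan’s):

```python
# Sketch of the "P3 Services" fee from the UF Online business plan (p. 84):
# a share of in-state tuition, plus a share of out-of-state tuition,
# plus that year's slice of the $9.5M "Additional Fixed Fee".
# All dollar figures are in millions.
def p3_services_fee(in_state, out_state, in_split, out_split, fixed_fee):
    return in_state * in_split + out_state * out_split + fixed_fee

# 2016 example from the plan's spreadsheet:
fee_2016 = p3_services_fee(in_state=3.2, out_state=7.2,
                           in_split=0.40, out_split=0.60, fixed_fee=2.0)
# fee_2016 ≈ 7.6, i.e. $7.6 million
```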

Revenue and costs

What is unknown from this analysis is Pearson Embanet’s internal costs. The document on page 87 includes the following language, which seems to quantify the Embanet “investment,” as the “direct cost savings realized from these transfers,” at $14 million per year.

There are some recognizable cost transfers in the service purchase, “partnership” plan. It is admittedly difficult to capture all of the services that are part of an external package in an internal matrix subject to per unit, per student, or per activity pricing. However, there are recognizable cost transfers in the market assessment, marketing services, recruitment, contact call center, production (on demand), program coordinators (retention), digital content and tutoring. The direct cost savings realized from these transfers is estimated at about $14 million per year. The present value of the P3 services annualized is approximately $15 million. The University believes the summation of the immediacy of the expertise, the on-request availability, the joint research opportunities, and the expanding innovative digital content represent greater value added than the differential.

Update (2): Note that these are projections that seem to be best-case scenarios.

Full disclosure: Pearson is a client of MindWires Consulting but not for OSP. All information here is from public sources.

The post Clarifications on UF Online Payments to Pearson Embanet appeared first on e-Literate.

Embanet and 2U: More financial insight into Online Service Providers

Sat, 2014-03-29 11:24

While I have written recently about UF Online and 2U, there is actually very little insight into the operations and finances of the market segment for Online Service Providers (OSP, also known as School-as-a-Service, Online Program Management). Thanks to 2U going public yesterday and the Gainesville Sun doing investigative work on UF Online, we have more information on one of the highest growth segments for educational technology and online learning.

2U’s IPO

2U went public yesterday, initially offered at $13.00 per share and closing the day at $13.98 (a 7.5% gain). The following is not intended to be a detailed stock market evaluation – just the basics to present the general scale of the company as insight into the market. While there is not a direct comparison, this IPO went much better than the most recent ed tech offering, Chegg (down 2.7% on its first day and down 26% to date). Based on 2U’s first day of trading and the IPO filing:

  • 2U’s market valuation is $547 million, and the company raised $120 million from the IPO;
  • 2U’s annual revenue for 2013 was $83.1 million with $28.0 million in net losses, representing a revenue growth of 49% per year;
  • 69% of this revenue ($57 million) came from one client, USC, with two programs – masters of education (Rossier Online) and social work;
  • Across all 9 customers, 2U makes $10,000 – $15,000 in revenue per student per year;
  • Across all 9 customers, 2U makes an average of $10 million in revenue per customer per year;
  • Across all 9 customers, 2U’s customers make an average of $10 million in net revenue per year; and
  • Across all 9 customers, 2U’s customers are charging $17,000 – $45,000 per student per year in tuition.
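As a quick sanity check on these bullets (my own arithmetic from the IPO figures, not how 2U reports them):

```python
# Back-of-the-envelope checks on 2U's reported 2013 figures.
revenue_2013 = 83.1e6   # total 2013 revenue
usc_revenue = 57e6      # revenue from USC's two programs
clients = 9

usc_share = usc_revenue / revenue_2013            # ≈ 0.69, the 69% cited above
avg_revenue_per_client = revenue_2013 / clients   # ≈ $9.2M, roughly the $10M average
```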

Pearson Embanet’s Contract with UF Online

Meanwhile, the Gainesville Sun has been doing some investigative work on the University of Florida Online (UF Online) contract with Pearson Embanet. Embanet is the largest OSP in the market and was purchased by Pearson for $650 million in 2012. From yesterday’s article in the Sun we get some specific information on the UF Online contract.

The University of Florida will pay Pearson Embanet an estimated $186 million over the life of its 11-year contract — a combination of direct payments and a share of tuition revenue — to help launch and manage the state’s first fully online, four-year degree program.

Initially the financial terms of the contract were hidden by University of Florida officials, who cited “trade secrets”, but the Sun was persistent, found a presentation with similar information, and eventually got UF to provide the contract with most redactions removed.

According to the article and its interview with Associate Provost Andy McCullough (who took over as executive director of UF Online when the first one resigned after just two and a half months), Pearson Embanet will be paid $9.5 million over the first five years to help with startup costs. After that, Pearson Embanet’s pay will come from revenue sharing (similar to 2U and most OSP contracts).

Gov. Rick Scott signed a bill last year tapping UF to create an online university that would offer a full, four-year degree program at 75 percent of the tuition that residential students pay. The Legislature gave UF $35 million in startup funds for the first five years, and also gave the university six months to get the program up and running.

The program started in January 2014 with 583 transfer students, with the first freshmen expected in September 2014. What we don’t know about the program startup is how much Pearson Embanet will invest in the program. Typically an OSP loses money for the first 3 – 5 years of a program startup ($9.5 million will not cover costs), which is one of the rationales for long-term contracts of 10 years or more. The model is that the provider loses money up front (see 2U’s losses for comparison) and makes a profit on the back end of the contract. For UF Online, the state legislature plans to stop subsidies by 2019, assuming the program will be self-sustaining.

For the fall term (first term not purely based on transfer students), UF Online is planning on 1,000 students, and so far 91 have signed up. I do not know if this is on target or not.

Under its new contract with UF, Pearson is responsible for creating “proprietary digital content,” providing admission and enrollment support, generating leads and signing new students, tracking retention rates, engaging in joint research and development, and providing on-demand student support.

Note that this set of services is not as comprehensive as what 2U provides. For example, UF Online will use the Canvas LMS from Instructure, like the rest of the University of Florida, whereas 2U provides its own learning platform built on top of Moodle and Adobe Connect.

After 2018, UF will also stop paying Pearson directly and Pearson’s income will come entirely from its share of tuition revenue and any fees it charges. UF projects it will have over 13,000 students in UF Online generating $43 million in tuition revenue by 2019 — of which Pearson will get close to $19 million.

By 2024, with 24,000 students anticipated, revenues generated will be about $76 million, with $28 million going to Pearson, McCullough said.

Based on the 2024 projections, UF Online expects to make approximately $3,167 per student in revenue, with Pearson Embanet making $1,167 of that per student.
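Those per-student figures fall out of the 2024 projection (24,000 students, $76 million in revenue, $28 million to Pearson); the arithmetic, for anyone following along:

```python
# Per-student arithmetic from the 2024 UF Online projections.
students = 24_000
revenue_per_student = 76e6 / students   # ≈ $3,167
pearson_per_student = 28e6 / students   # ≈ $1,167
uf_net_per_student = revenue_per_student - pearson_per_student  # ≈ $2,000
```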

Notes

Below are some additional notes on the 2U and Pearson Embanet examples.

  • It is important to recognize the difference in target markets here. 2U currently targets high-tuition master’s programs, and the UF Online example is an undergraduate program with the goal of charging students 75% of face-to-face UF costs.
  • While the total contract values seem high, the argument for this model is that without the massive investment and startup capability of OSP companies, the school either would not be able to create the online program by itself or at least would not have been able to do so as quickly.
  • Despite the difference in market and in services, the gap in revenue per student between 2U and Pearson Embanet is still remarkable – $10 – $15k for 2U vs. $1.2k for Pearson Embanet.

Full disclosure: Pearson is a client of MindWires Consulting but not for OSP. All information here is from public sources.

The post Embanet and 2U: More financial insight into Online Service Providers appeared first on e-Literate.

Two-Year Anniversary of Blackboard Acquisition of Moodlerooms and NetSpot

Wed, 2014-03-26 16:39

Two years ago today, Blackboard made a dramatic change of course with a series of public announcements:

At the time I described these changes:

Most of the discussion in articles and blogs follows the meme of Blackboard entering open source, or even the meme of Blackboard acquiring competitors. I think the news is more significant than either of these two memes.

Blackboard just did a 180-degree turn on their strategy for their core LMS business. They have moved from consolidating all customers into Learn 9.1 to providing products and services that are almost LMS-agnostic.

Archive of Statements

Given this dramatic turn of events, I wrote an additional post that captured the public statements (press releases, blog posts) from Blackboard, Moodlerooms, NetSpot, and even Blackboard competitors for the purpose of checking to see if the acquisitions really did signal a true change in strategy and support for open source. This two-year anniversary seems the perfect time to check up.

Bb’s Previous Open Source View

Just how big of a change did the announcements represent? Consider Blackboard’s moves regarding alternative LMS solutions in the previous six years.

  • Blackboard acquired WebCT, their biggest competitor, in 2006.
  • Blackboard sued Desire2Learn over patent infringement from 2006 to 2009.
  • Blackboard acquired ANGEL in 2009.
  • Blackboard argued publicly that Moodle merely met a minimum standard of competence as an LMS.

On that last point, note that less than three years prior to the acquisition of Moodlerooms and NetSpot, Blackboard publicly argued that Moodle was not a viable enterprise system, as described and quoted by Michael, in response to a study from the North Carolina Community College System (snippets below are quotes from Blackboard).

“The Moodle product roadmap offers only a short-term view into the direction of future product upgrades and is controlled by a single person, Martin Dougiamas.” [snip]

“Global innovation is by nature not static and moves at a rapid pace. Open source may meet some minimum standard of LMS competency, but only Blackboard can grow, adapt, and evolve with the customized and ever changing needs of your [NCCCS] students and faculty”

Bb’s Current Open Source View

Fast forward to March 2012, and Blackboard’s press release shows a remarkable change in its dealings with Dougiamas.

Leaders from each company recently traveled to Perth, Australia to meet with Martin Dougiamas, founder of Moodle and Managing Director of Moodle Pty Ltd, and present their plans. The meeting included Blackboard CEO Michael Chasen and Chief Technology Officer Ray Henderson, Moodlerooms CEO Lou Pugliese and Chief Architect Tom Murdock, and NetSpot Managing Director Allan Christie.

In a recent phone interview Mark Strassman called out the continued support for Moodle by stating:

It’s really clear that one size does not fit all, and we’re thrilled that we can help those who want Moodle, giving them a mechanism for support. Having different platforms for different types of customers makes sense.

Change in Management

In the past two years most of the senior leadership involved in the acquisitions has changed, yet the basic approach to open source seems to remain the same.

  • Michael Chasen and Ray Henderson are gone from Blackboard (at least operational management of products, as Ray is still on the board), replaced by Jay Bhatt, Mark Strassman and Gary Lang (the team formerly together at AutoDesk).
  • Lou Pugliese, David Mills, and Tom Murdock are gone from Moodlerooms, although Phill Miller remains.
  • The original NetSpot leadership, however, has actually grown in importance in the Blackboard organization. Allan Christie is now General Manager, Mark Drechsler is now in charge of consulting services, and Stephen Watt is in charge of sales & marketing — for most of Blackboard’s products and not just Moodle support. This support includes bringing Moodlerooms services into Australia.
  • Charles Severance recently announced his departure from Blackboard.

Statement of Principles

In terms of open source support, the most important document to review is Blackboard’s statement of principles, signed by Chasen, Henderson, Pugliese, Christie and Severance (of whom only Christie remains). This short document was directly aimed at countering skepticism about Blackboard’s actual intentions with the acquisitions.

We are committed to supporting the growth, development and use of open source technologies in education worldwide. We expect to make significant contributions to the community to help ensure that open source options remain strong, reliable and sustainable for all. Our work will be guided by the following principles:

1. We will work in a way that supports and honors the values of the communities we serve, and will continue to participate and support important community gatherings.

2. We will continue contributions of code from products we support and from in-kind development activities conducted in partnership with community organizations like Moodle Trust.

3. We will continue to focus on supporting open standards through organizations like the IMS Global Learning Consortium to ensure interoperability and eliminate vendor lock-in for all LMS options, commercial and open source.

4. We will provide financial support for Moodle Trust and other open source organizations as our strategy evolves.

5. We will work to deliver innovative, visually elegant, and technologically robust education solutions to clients regardless of whether they are open source, proprietary, or a blend between them.

On principles 1 — 4, I would argue that Blackboard has met their commitments, with support for MoodleMoots, investment in data centers, contribution of code (LTI plugin, common cartridge backup/restore, outcomes system), and development and support of LTI 2.0 code. Principle 5 in my opinion is really a marketing statement subject to evaluation by customers.

Severance, in his departure note, pointed out his view of Blackboard supporting open source communities even beyond Moodle.

Blackboard is a great supporter of open source, Sakai and open standards, and it’s investments helped move Sakai 2.9 towards its ultimate release. [snip]

Arguably the most significant effort supported by Blackboard was my work on IMS LTI 2.0.

Financial Contributions

On the financial contribution (principle #4), there are some indications that this support is quite significant. At a recent presentation to Moodle partners, it was claimed that the Moodlerooms and NetSpot combined contributions to Moodle Trust was more than 50% of the total partner contributions.

NOTE: I have not been able to confirm this information from additional sources, and every person I asked during interviews declined to speak on the subject. I went back to the Twitter stream, and it appears that the original tweets (and therefore retweets) have been deleted. Open source does not equal open books, apparently.

Whether this information is correct or not, Blackboard has certainly maintained its financial support of Moodle through the partners program. If the information is accurate, however, there is a risk that Moodle Trust is becoming too dependent on Blackboard’s financial support.

Change in Emphasis

One change worth noting with the new Blackboard executive team is the change in emphasis on international markets. Jay Bhatt has stated to me that Blackboard has under-penetrated international markets, especially in growth areas such as Latin America and China. The company has pegged much of its growth potential overseas, and according to Bhatt Moodle is a natural fit in this role. Moodle already has a strong international base of users, and many markets cannot afford or support a full-scale Learn LMS deployment. The same could be said for U.S. K-12 markets.

Blackboard has also kept their commitment (to date) in supporting the ANGEL platform, which I described in more detail here. Some customers have pointed to this June 2012 message from Blackboard as somewhat of a mixed message for ANGEL clients:

Previously we had announced an end of life target for Release 8.0 of the ANGEL Edition LMS for October 2014. In order to ensure that you have sufficient time to plan and execute a move to Blackboard Learn 9.1, we are extending that end of life window beyond the previously announced EOL date. Blackboard will evaluate the decision on an ongoing basis and provide sufficient notice of any future change to plans for support. We plan to make select enhancements to ANGEL 8.0 over time to ensure that the solution is a viable platform until you are ready to plan a migration.

Nevertheless, it is two years after the announcement, ANGEL is still supported, and Blackboard’s management still encourages the usage of ANGEL. In a recent phone interview Gary Lang stated:

Customers are happy with ANGEL, people at Blackboard are still working on the software, and there is no end-of-life (EOL) planned. For many customers the LMS is not just about feature parity — some are religious about their choices — and Blackboard is fine with this choice.

Does It Matter?

While Blackboard has kept their word and made a major change in strategy, the question arises of whether that matters. According to the Campus Computing Survey for 2011 and 2013, Blackboard’s market share (combining the Learn, WebCT, and ANGEL product lines) has continued to fall in the US over the past two years, from 51% of institutions to 41%. Moodle has risen from 19% to 23% (these numbers do not separate out the market share of Moodlerooms clients, just usage of Moodle). Within US higher education, at least, Blackboard has not succeeded in stopping the erosion of their market share, but they have tapped into a still-growing Moodle community.

It is difficult to evaluate the company’s fortunes in K-12, professional ed (corporate and for-profit clients) and international markets without reliable market numbers. Phill Miller and Allan Christie stated that Moodlerooms and NetSpot have continued to grow since the acquisition.

Bhatt, Strassman and Lang all emphasize that Blackboard is no longer primarily an LMS company. As Strassman indicated in our call, Blackboard is “expanding its purview beyond the LMS to the broader teaching and learning continuum”. As they do so, the company will focus on interoperability and providing choices in platforms.

At the end of the day, this story is not that dramatic. Blackboard has kept their word, followed their stated principles, and kept their support of multiple LMS solutions including open source over the past two years. What is more interesting is to step back and see this update as confirmation on just how big of a change Blackboard made two years ago.

The post Two-Year Anniversary of Blackboard Acquisition of Moodlerooms and NetSpot appeared first on e-Literate.

Coursera and edX Hire New Executives: What about online experience?

Mon, 2014-03-24 18:42

Today’s big news is the concurrent change in leadership at two of the big three MOOC providers. First, Coursera announced they had hired Richard Levin, former president of Yale University, to be the company’s new CEO. Besides being a big-name college executive, Levin also led (or at least was president during) the development of Open Yale Courses. As reported by the New York Times:

Mr. Levin, who has been an adviser to Coursera since January, has been experimenting with online education for years, beginning in 2000 in a partnership with Stanford and Oxford. In 2007, he started Open Yale Courses to make dozens of classes taught by Yale professors available without cost.

“The main thing we will work on is to establish this model so our partner universities feel that offering large-scale MOOCs is an important part of their mission that helps faculty expand their reach, and benefits the world,” Mr. Levin said.

Mr. Levin, who has extensive experience in China, will also work on expanding Coursera’s presence there. Already, he said, China is the second-biggest source of Coursera enrollment, after the United States.

Meanwhile, the Coursera founders and previous president are taking on new roles. Daphne Koller will become president, Andrew Ng will become chairman of the board of directors and chief evangelist, and Lila Ibrahim will become chief business officer.

On the same day, coincidentally, edX announced they had hired Wendy Cebula as the company’s new president and chief operating officer. According to the company’s web site:

EdX, the nonprofit online learning initiative, today announced the appointment of Wendy Cebula as the company’s president and chief operating officer. Ms. Cebula will assume the day-to-day management and oversight of the nonprofit’s operations, while current president Anant Agarwal will become edX’s CEO, focusing on its strategic direction, growth and partnerships.

The appointment of Ms. Cebula, a former Vistaprint executive, comes as edX continues to grow, expanding users, courses and partners. Ms. Cebula will help to build out and scale operations to accelerate edX’s next phase of growth. The company currently provides a world-class destination site for learning and an open sourced platform, embraced by countries, institutions and companies around the world.

Ms. Cebula’s LinkedIn page, however, shows that her most recent endeavor has been as an advisor to Gemvara, an online jewelry retailer with a sophisticated e-commerce approach.

Comments

It is somewhat interesting that Coursera, a VC-backed company in Silicon Valley, has hired a former president of an Ivy League university while edX, an Ivy League non-profit organization, has hired a professional marketing services executive.

I think the hiring of Levin by Coursera speaks volumes about their need to further integrate with and support elite universities while targeting international expansion. The hiring of Cebula by edX speaks volumes about their need to grow through marketing.

What is missing, however, from all of the big three MOOC providers, is extensive executive experience with online education. There’s value in not being overly tied to the models and history of online education over the past two decades, but there is also a significant imbalance in not leveraging the experience of those leaders who have created and shepherded online education in credit-bearing programs. It is somewhat baffling that we are not reading about these MOOC providers trying to hire executives from community colleges, where much of online education is focused, nor from OpenSUNY, University of Central Florida, Penn State World Campus, Rio Salado College, Colorado Community College Online, UMUC, any of the for-profits, etc. If you look at the top leadership of Coursera, edX and Udacity, there is only one person with extensive online education experience (Udacity’s Clarissa Shen from Apollo Group, the parent company of the University of Phoenix).

Below are the basic org charts for Coursera, edX, and Udacity – with labels including name, title, and most recent or significant previous employment. I have not added positions such as legal counsel or financial officers.

Coursera

(org chart image)

edX

(org chart image)

Udacity

(org chart image)

Stanford, MIT, and Yale are all excellent schools, but just 3.6%, 0.2% and 0.1% of their students take any courses online, according to 2012 IPEDS data. Compare this to approximately 26% of students taking any online courses across all US institutions.

I think that all three of these MOOC providers suffer from a huge lack of understanding of what it actually takes to make online education work based on two decades of experience.

The post Coursera and edX Hire New Executives: What about online experience? appeared first on e-Literate.

Why VCs Usually Get Ed Tech Wrong

Sat, 2014-03-22 10:26

I don’t often get to write these words, but there is a new must-read blog post on educational technology by a venture capitalist. Rethink Education’s Matt Greenfield argues that there is no generalized bubble in ed tech investment; rather, the problem is that the venture community has a habit of systematically betting on the wrong horses.

It’s worth noting that Matt is not your typical VC. For starters, he doesn’t live in the Valley echo chamber. Perhaps more importantly, he has a background as an academic. He has a PhD in English from Yale, taught at Bowdoin and CUNY, and taught graduate classes in literature to teachers from the New York City public schools. As such, he has an unusual perspective for an ed tech venture capitalist.

Matt uses digital textbook platforms as his example of the problem he wants to highlight:

What type of ed tech have venture capitalists approached with the greatest enthusiasm and the largest piles of cash? The answer is new textbook solutions, including digital textbook platforms like Kno and renters of physical textbooks like Chegg, which just went public. Venture capitalists have put over $500 million into just the top ten companies in this sector….

I talked to the CEO of an academic bookstore company recently. How many digital textbook platforms would you guess that his stores handle? Five? No, more than that. Ten? Nope, guess again. Twenty? Still too low. The answer is forty-two different digital textbook platforms. Forty-two. Now try to imagine each of those textbook platform companies pitching a book store. Or an author. Or a publisher. Or a venture capitalist. “Choose my platform, choose me! Our platform is totally different!” How many of those platforms does the world really need? How many of those platforms can make money? What do you think the meaningful differences between those forty-two platforms might be?…Meanwhile, even the century-old publishing incumbents are moving away from book-like things to adaptive courseware: learning objects that simply will not fit into the wrappers being built by companies like Kno.

So there is a bubble in venture funding for education ventures that are obsolete at birth. Meanwhile, there are large opportunities in areas where few venture capitalists will invest.

This is a fascinating case study. Why would VCs, with their much vaunted drive for innovation, be so taken with the idea of rebuilding an aging instructional modality (i.e., the textbook) on a digital platform—particularly when, as Matt spells out in detail in his blog post, it’s clearly a bad bet for a lot of reasons? It’s worth unpacking this a bit to get at the underlying pathology.

Disrupting Innovation

A big part of the problem is the Valley’s obsession with disruption. These days, “disruptive” and “innovation” seem to always come together in the same sentence. It’s a bit like “big galoot.” Theoretically, “big” is a modifier for “galoot.” But you never hear people talking about small galoots, average sized galoots, or galoots of indeterminate size. In modern common usage, galoots are always big. “Big galoot” has pretty much become an open form compound word, like “post office” or “real estate.” But “disruptive innovation” is not a compound word. Disruptive innovation is a particular kind of innovation, and a fairly narrow kind at that. Specifically, disruptive innovation is a phenomenon in which a new market entrant can overtake a leader in an established market by offering cheaper and simpler solutions. It’s important to remember that some of Clayton Christensen’s seminal examples of disruptive innovations were steam shovels and disk drives. This is not the same kind of innovation that produced the iPhone. It’s essentially about identifying the slow fat rich kid and taking his lunch money. To be fair, it’s not that inherently mean-spirited, because presumably one takes the rich kid’s lunch money (or market share) by providing solutions that consumers prefer. But the point is that disruptive innovation is generally not about solving new problems with brilliant out-of-the-box ideas. It’s primarily about solving old problems better because the old solutions have gotten overbuilt.

Who are the slow fat rich kids in the education markets? The textbook vendors. They make tons of money, they are clearly dysfunctional, and they are having trouble adjusting to change. It should be easy to take their lunch money, the theory goes. And since nobody really likes textbook vendors, you get to feel like a hero. Plus, it shouldn’t be that hard because disruption. Disruptive innovation, valid though it may be as a theory for explaining how established market leaders get upended, also encourages a certain intellectual laziness if you start to think that a disruptive innovation is like a big galoot. In that worldview, all you have to do, in any case, ever, is deliver a simpler or cheaper solution, and you win. That’s what Chegg was all about. The used book market was eating into the textbook publishers’ market; what if we could make buying and selling used textbooks easier? Disruptive innovation!

Ron Paul and Mrs. Paul Capitalism

Another reason that Chegg was attractive to VCs is that the used textbook market is targeted directly at students and doesn’t require any involvement with faculty unions, departmental committees, or (heaven forbid) governmental regulation. There tends to be a lot of libertarian chest thumping around this approach in the Valley. In Matt’s post, he quotes noted investor Marc Andreessen as saying,

I wouldn’t want to back a business that’s selling to public schools or characterized by public financing, unions, or government-run institutions. Those institutions are incredibly hostile to change.

The narrative here is that change equals innovation and therefore no self-respecting change agent (like a VC or a startup) would want to be held back by any institution that makes change slower or more difficult. But the truth is more mundane and less idealistic. The truth is that it’s just harder to run a company that sells to institutions with complex purchasing processes than it is to run a company that sells to individual consumers. Famed investor Peter Lynch once advised, “Invest in businesses any idiot could run, because someday one will.” Under this theory, it is safer to invest in a company that sells fish sticks than it is to put your money in one that sells airplane navigation and safety devices, a business that requires more skill to run well because the product has to be shepherded through an FAA approval process. This is a particularly useful maxim when you invest in startups. While I have met very few ed tech entrepreneurs who are idiots in the shouldn’t-handle-sharp-objects sense, I have met many who are idiots in the I-just-hit-puberty sense. They tend to be extremely smart young people who nevertheless lack critical experience for certain kinds of business-building. Finding a 21-year-old who understands how to sell to a university system with a state-mandated procurement process and no single clear decision-maker is hard. Finding a 21-year-old who understands how to sell to 19-year-olds seems easier.

A Toxic Combination

The combination of the obsession with disruption and the phobia of institutions is a particularly bad one for the education markets. When I say that, I don’t mean that it is harmful to education (although that is probably also true). What I mean is that it leads to consistently unprofitable investment decisions. For Exhibit A, let’s return to the case of Boundless, which I wrote several posts about a while ago. Now, despite what some in the industry think, I do not particularly enjoy overtly bashing companies, even when I think they deserve it. But since my diplomacy in those posts appears to have been lost on at least some who read them, I shall abandon all subtlety here:

I think that Boundless’ entire business is specifically designed to attract investment by appealing to investors’ prejudices. (Who’s the slow fat rich kid now?)

Update: Upon further consideration, I softened the language of the above paragraph a bit. My point is not that I think Boundless deliberately deceived anyone but rather that I think they designed their company around ideas they thought investors would like, instead of around a sound product concept.

This is a company whose pitch was that they are Cheggier than Chegg. “It’s like Chegg, but with no warehouses! We’re disrupting the disrupters!” They came up with a strategy that makes a show of punching the slow fat rich kid—the textbook publisher, not the VC—in the face. Yay disruption! They also made as conspicuous an end run around teachers and institutions as possible. They didn’t just sell a used version of the textbook that the teacher required (which the teacher probably wouldn’t care about). Nor did they attempt to persuade teachers that they offer less expensive but high quality alternatives the way, say, Lumen Learning does. Instead, they marketed what amounts to the CliffsNotes of the textbook directly to the students. Take that, you change-hostile, union-joining classroom bureaucrats! I don’t think it would be possible to come up with a business plan that is less subtle about playing to VC prejudices. It’s like the Sand Hill Road equivalent of an email from a Nigerian prince.

But what is their business, really? Boundless basically sells Rolex knockoffs in Times Square. OK, I’m not being fair. Most textbooks are hardly Rolexes. Really, Boundless is selling Timex knockoffs in Times Square. There is no innovation in this disruption. In a market that is overrun by e-book platforms and in which downward price pressures are causing low-cost options to proliferate, why in the world would any rational investor think that Boundless is a good bet? And yet, the company has received $9.7 million in venture funding.

A Better Idea

A while back, I made the following observation in the context of Udacity’s fall from grace:

Silicon Valley can’t disrupt education because, for the most part, education is not a product category. “Education” is the term we apply to a loosely defined and poorly differentiated set of public and private goods (where “goods” is meant in the broadest sense, and not just something you can put into your Amazon shopping cart). Consider the fact that John Adams included the right to an education in the constitution for the Commonwealth of Massachusetts. The shallow lesson to be learned from this is that education is something so integral to the idea of democracy that it never will and never should be treated exclusively as a product to be sold on the private markets. The deeper lesson is that the idea of education—its value, even its very definition—is inextricably tangled up in deeper cultural notions and values that will be impossible to tease out with A/B testing and other engineering tools. This is why education systems in different countries are so different from each other. “Oh yes,” you may reply, “Of course I’m aware that education in India and China are very different from how it is here.” But I’m not talking about India and China. I’m talking about Germany. I’m talking about Italy. I’m talking about the UK. All these countries have educational systems that are very substantially different from the U.S., and different from each other as well. These are often not differences that a product team can get around through “localization.” They are fundamental differences that require substantially different solutions. There is no “education.” There are only educations.

I just don’t see disruptive innovation as a good guide for investment in education. And for similar reasons, I think the notion of avoiding institutional entanglements is pretty much hopeless, since the very idea of education is inextricably bound up in those institutions. Disruptive innovation and direct-to-consumer are both investment strategies that are designed to avoid complexities that lead to investment risk. But in education, complexity is unavoidable, which means strategies that attempt to avoid it usually result in risk-increasing ignorance rather than risk-avoiding safety. And as Warren Buffett said, “When you combine ignorance and leverage, you get some pretty interesting results.”

Buffett also said, “I am a better investor because I am a businessman, and a better businessman because I am an investor.” Call me old-fashioned, but I believe that if you want to find a good ed tech investment, you have to understand what the company does. In the real world, not in some Ayn Randian fantasy where technology unleashes the power of the individual. How will this product or service impact real students? Who would want to buy it and how would it help them? Very, very often, that will mean dealing with companies that sell to institutions or deal with institutional politics, because that’s how education works today in America and around the globe. If you want to find a good business to invest in, then think like a consumer. Better yet, think like a parent. Ask yourself, “Would my kid benefit from this? Would I like to see her have this? Would I, as a parent, take steps to make sure she can have this?” These are often businesses that can’t be run by any idiot, which makes them risky. But they are less risky than giving your money to a Nigerian prince.

The post Why VCs Usually Get Ed Tech Wrong appeared first on e-Literate.

Proposed State Authorization: Dramatic increase in federal control of distance ed

Fri, 2014-03-21 13:02

The Department of Education (DOE) released their proposed State Authorization regulations this week as part of the negotiated rulemaking process that seeks to replace previous rules struck down by courts in 2011. While the new process is more transparent than before (which was the basis of the court rulings), the proposed rulings would represent a dramatic increase in federal control of distance education and compliance burden for institutions. Greg Ferenbach from Cooley LLP noted these changes in a listserv discussion at WCET [used by permission from author, emphasis added]:

What I don’t think many folks appreciate … is this proposal would be a huge change from the way things work today. From a quick read, it appears as though the proposal would require all states to authorize distance ed (with no exemptions for accreditation, etc.). Basically, this would be a mandate to either obtain specific state approval or participate in reciprocity as a condition for continuing to offer distance education with federal aid.

Note that this is quite different from the last rule, which only mandated that you need to meet state requirements, if any, and thus it appears to impose a huge new burden on states and institutions. Think at least double.

The gist of State Authorization is to force distance education and correspondence programs to comply not only with their home state’s regulations but also with the regulations of any state in which their students reside. While most readings of relevant rulings indicate that distance programs already have to comply with students’ resident-state regulations, compliance has been uneven – in a survey reported a year ago, 32% of programs had not even applied to other states for authorization, and many institutions are starting to turn away students from certain states. The federal ruling would formalize this compliance and tie it to Title IV federal financial aid.

The good news is that the new rulings would formally recognize and support state reciprocity agreements as a means to achieve authorization. The primary mechanism would be the State Authorization Reciprocity Agreement.

The bad news is the dramatic change in tone and expansion of federal power in the ruling. In the Chronicle story, both Greg and Russ Poulin were quoted on this subject.

Gregory Ferenbach, a lawyer with Cooley LLP who has followed the rule closely, said that change would require states to regulate online learning “whether they want to or not.” Currently, three-quarters of states don’t oversee purely online programs at all, he said.

“The whole federalism principle of leaving it to the states to regulate this activity certainly goes right out the window,” he said.

Russell Poulin, a negotiator, said he was “disappointed by the perceived change in tone from the department.”

“The original regulation was elegant and trusted the states,” said Mr. Poulin, who is deputy director for research and analysis at the Western Interstate Commission for Higher Education’s Cooperative for Educational Technologies. “The proposed language is prescriptive and will take great effort by the states to comply. Institutions that are currently approved in some states will need to reapply.”

Mandate on States

DOE released an issue paper to help explain the changes, and WCET has posted both the issue paper and the proposed ruling on its site. Emphasis added in all bold text below.

The original language from §600.9(c) that was vacated by the courts:

(c) If an institution is offering postsecondary education through distance or correspondence education to students in a State in which it is not physically located or in which it is otherwise subject to State jurisdiction as determined by the State, the institution must meet any State requirements for it to be legally offering postsecondary distance or correspondence education in that State. An institution must be able to document to the Secretary the State’s approval upon request.

In essence, this is the approach where DOE “trusted the states,” simply telling institutions to comply where states require. There was no direction on what the states must do. The new ruling would replace these 75 words with 1,086 words of new requirements. Under the proposal, the DOE would tell every state that it must regulate distance education and would tell them what those regulations must include.

The new language for §600.9(c):

[The new condition is for an institution offering distance education out of state] is considered to be legally authorized in that State if–

(i) The State has a process to review and appropriately act in a timely manner on complaints concerning the institution, including enforcing applicable State law, and has the final authority to resolve complaints and enforce applicable State law; and

(ii) The institution meets State requirements that it be approved or licensed by name–

Furthermore, any previous exceptions that allowed institutions to avoid explicit state authorization due to accreditation or time in operations would be disallowed:

(7)  An institution is not considered to be legally authorized to offer postsecondary distance or correspondence education in a State if it is exempt from State approval or licensure requirements based on accreditation, years in operation, or other comparable exemption.

While the federal government would now be telling the states what they have to do to regulate distance education, the proposed ruling specifies that the federal government itself can authorize schools and that the states must accept this authorization (this is my reading – someone correct me if I am wrong).

Proposed §600.9(c)(8) provides that an institution authorized by name to offer distance or correspondence education beyond secondary education by the Federal Government, or under certain conditions, an Indian tribe, is considered to be legally authorized.

The Net Effect

As Greg mentioned in his comments to the Chronicle, the roughly three-quarters of states that previously did not regulate distance education would now be forced to begin doing so, and given the new conditions, several states would have to revise their existing regulations.

Keep in mind that this is a proposed ruling that will go through further negotiations starting next week, so these rules do not yet apply. If they do go through, however, the compliance requirements for both states and distance education programs will increase dramatically, and the power in education policy-setting will shift further from the states toward the federal government.

The post Proposed State Authorization: Dramatic increase in federal control of distance ed appeared first on e-Literate.

MOOCs and Academic Exchanges Now Allowed for Iranian Students

Thu, 2014-03-20 14:08

One development covered here at e-Literate this year has been the US State Department and Treasury Department forcing MOOC providers to block access for students in Iran, Sudan, Cuba and Syria. Kris Olds has provided excellent coverage, as has Carl Straumsheim at Inside Higher Ed. In late January Coursera had to start blocking students in these four countries while edX continued working with them. Then in early March edX had to start blocking students in Iran, Sudan and Cuba.

Today’s news is that OFAC has granted a General License G to allow academic exchanges such as MOOCs to operate within Iran.

US Authorizes Academic Exchanges & MOOC access for Iran http://t.co/hlLln9QJJR | http://t.co/iIKjP8OvEI | http://t.co/kJeNSpYl0c

— GlobalHigherEd (@GlobalHigherEd) March 20, 2014

It appears that the main effect is that now Coursera will be able to allow Iranian students to access the majority (but not all) of their courses. The actual license applies to courses:

provided by U.S. academic institutions in the humanities, social sciences, law, or business provided that the courses are the equivalent of courses ordinarily required for the completion of undergraduate degree programs in the humanities, social sciences, law, or business, or are introductory undergraduate level science, technology, engineering, or math courses ordinarily required for the completion of undergraduate degree programs in the humanities, social sciences, law, or business.

From my reading of this license, edX will not see a change. They only had certain courses blocked such as “Flight Vehicle Aerodynamics”, and this course does not seem to fit under the new license.

We now have confirmation that one key determinant that triggered OFAC to force the blocking of MOOCs was specifically the grading available in MOOCs. From the license:

U.S. persons, wherever located, are authorized to administer professional certificate examinations and university entrance examinations, including, but not limited to, multiple choice standardized tests, and to provide those services that are necessary or required for admission to U.S. academic institutions, to individuals who are located in Iran or located outside Iran but who are ordinarily resident in Iran.

The license is broader than just for MOOCs:

Academic Exchanges. Except as provided in paragraph (c) of this general license, accredited graduate and undergraduate degree-granting academic institutions located in the United States (collectively, “U.S. academic institutions”), including their contractors, are authorized to enter into student academic exchange agreements with universities located in Iran (collectively, “Iranian universities”) related to undergraduate or graduate educational courses, and to engage in all activities related to such agreements, including, but not limited to, the provision of scholarships to students enrolled in Iranian universities to allow such students to attend U.S. academic institutions.

Erich Ferrari at LobeLog has some interesting thoughts on the politics that prompted this change in policy.

With Iran’s participation in on-going talks regarding its disputed nuclear program, the US appears to have found another concession that it can easily offer to the Iranians as a show of good faith. That said, what appears like an easing of sanctions is in reality merely the broadening of a current policy that benefits the Iranian people while also cutting down on the administrative paperwork OFAC will have to handle as it shifts from a specific licensing policy to a generally authorized one.

While this license does change requirements from a specific license such as what edX has in place to a general one that multiple programs can use, it should be noted that the US government is clearly saying that institutions and MOOC providers can operate in Iran (and the other three countries) only under a license. Permission is required, and OFAC has shown its ability to change course when politics or diplomacy get in the way.

In my opinion, this release of General License G is an improvement, but a very limited improvement. We are still in a situation requiring explicit government approval based on specific countries and specific areas of study.

The post MOOCs and Academic Exchanges Now Allowed for Iranian Students appeared first on e-Literate.

2U’s Upcoming IPO: Filing estimates $533 million company value

Wed, 2014-03-19 13:14

One month ago 2U filed its registration for an IPO in 2014. 2U is an online service provider that helps traditional universities develop fully-online programs, currently based on 9 customers at the master’s level (see here for summary of revenue per student and per customer). On Monday the company set the terms for the IPO, as described by Bill Flook in the Washington Business Journal.

2U Inc. on Monday set terms for its upcoming initial public offering. The Landover-based ed-tech company plans to sell 8 million shares at between $11 to $13 apiece. Selling stockholders plan to offload another 1.17 million shares in the offering.

Altogether, the IPO would raise a total $110 million, assuming the 2U prices at the mid-point of that range. The company plans to list on the Nasdaq under the ticker symbol “TWOU.”

The updated S-1 filing is here. Based on the document and per Nasdaq, the company would be worth $533 million if it hits the midpoint of pricing at $12 per share. As a reminder, the company currently makes $83 million in revenue at a loss of $28 million, with almost a 50% per year revenue growth rate.
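The arithmetic behind those figures is simple enough to sketch. This is a minimal back-of-the-envelope check, assuming the $12 midpoint price; the implied share count is inferred from the reported $533 million valuation, not taken from the S-1 itself.

```python
# Back-of-the-envelope check of the reported 2U IPO terms.
new_shares = 8_000_000             # shares 2U plans to sell
selling_holder_shares = 1_170_000  # shares offloaded by existing stockholders
low, high = 11, 13                 # proposed price range, $/share
midpoint = (low + high) / 2        # $12

# Gross offering proceeds at the midpoint (company + selling stockholders)
gross_raise = (new_shares + selling_holder_shares) * midpoint
print(f"Gross raise at midpoint: ${gross_raise / 1e6:.0f}M")  # ≈ $110M

# The reported $533M valuation implies roughly this many shares outstanding
implied_shares = 533_000_000 / midpoint
print(f"Implied shares outstanding: {implied_shares / 1e6:.1f}M")  # ≈ 44.4M
```

Note that the $533 million valuation counts all shares outstanding after the offering, not just the roughly 9.2 million shares actually being sold, which is why it is several times larger than the $110 million raise.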

2U financials


With all of the recent investment in ed tech, especially since 2010, I would expect more examples of investors looking for a payoff via IPO or corporate acquisition. The 2U IPO will be a real bellwether for how the market values ed tech companies, especially after Chegg’s disappointing IPO and subsequent trading.

The post 2U’s Upcoming IPO: Filing estimates $533 million company value appeared first on e-Literate.

WCET Post on Distance Education Misconceptions

Wed, 2014-03-19 10:37

It is gratifying to see WCET pick up the mantle with their analysis of distance learning based on the new IPEDS data. They have several posts up already, and today’s post is quite good and important. If only more people jumping into the fray on higher education history and reform would start with a grounding in facts, the public debate and resulting recommendations would be much more useful.

It is surprising how many times people conflate distance (or online) education with for-profit institutions. Often these are people who should know better, whether in Congress, the press, research universities, or other higher education pundits. Certainly, the for-profits have had a huge impact on the distance education world, but maintaining unfounded perceptions does not inform policy or practice. Some examples:

  • Scene 1: A fiscal analyst calls Russ Poulin saying that he is glad to see an article in the higher education press about public and non-profit institutions finally starting to get into distance education. Russ asked him if he knew that the majority of enrollments in distance education were in public and non-profit institutions. The analyst would not believe it at all. Heavy sigh.

  • Scene 2: At the opening press conference for one of the big-name MOOC providers, a member of the press asked if the MOOC leaders had sought advice from others already involved in distance education. One of the MOOC leaders responded that they saw no reason to consult with for-profit institutions. Did we mention this was someone from a research university? First, the leader thought that distance education equated with for-profit institutions. Second, even if only for-profits were involved, wouldn’t you want to learn from those with experience? Double sigh.

The post analyzes much of the same data that I covered here, but Russ and Terri have pulled out some important and little-understood findings, including the following:

  • For Fully Distance Students, Just Over One-Third Enroll in For-profit Institutions
  • Fully Distance Students are Nearly Half of For-Profit Enrollments; Around 10% for Other Sectors
  • Two-thirds of Students Enrolled in At Least One Distance Course Are at a Public Institution

There’s much more data and analysis at WCET’s blog – I recommend reading the entire post.

The post WCET Post on Distance Education Misconceptions appeared first on e-Literate.

Major Change at UF Online: Exec director abruptly resigns

Thu, 2014-03-13 10:22

I wrote a post several weeks ago about UF Online, partially based on a phone interview with the executive director Betty Phillips.

Several weeks ago the University of Florida Online program opened for the Spring 2014 semester, accepting 600 transfer students, and the new program will accept Freshmen starting August 2014. This announcement comes just 2 years after the Florida legislature commissioned a study from the Parthenon Group on how to best leverage online programs in the state, and this program is probably one of the highest profile new online programs in the US within the past few years (along with California’s online initiative, Open SUNY, SJSU / Udacity and GaTech / Udacity). [snip]

What UF Online is, however, is an exclusively-online baccalaureate program leading to a UF degree for lower costs than the traditional on-campus experience. This is about expanding capacity and access to a research university, and the program aims to meet the same academic standards as the traditional UF experience . . .

Well, it seems the situation in Florida changes rapidly and in multiple directions. Just two and a half months after she officially took the new position, Betty Phillips has abruptly resigned. By all appearances the move was unexpected, as the University of Florida appears to have been caught flat-footed. The Gainesville Sun reports:

But less than three months after she officially began as director of UF Online on Jan. 1, Phillips — the wife of former UF Senior Vice President and COO Win Phillips — is no longer in charge of that program.

In an administrative memo sent out to faculty after 9 p.m. Tuesday, UF Provost Joe Glover didn’t even mention Betty Capaldi Phillips by name.

“Due to changes in personnel, effective immediately, Associate Provost Andy McCollough will become responsible for the administration of UF Online. Consequently, issues related to this program should be directed to his attention,” Glover’s memo said.

The memo was not posted on the Administrative Memo web page until late Wednesday afternoon.

The article describes that Phillips plans to return to ASU in a teaching role, focusing on research into personalized learning. I’m not sure if personal or HR issues were involved, but the article described other details:

Phillips had given up her job as provost and executive vice president of ASU — a position she held since 2006. She took a substantial pay cut to return to UF — from the $425,000 she earned as provost at ASU to $285,000 salary to run the online program here.

Meanwhile, her husband, Win Phillips, stepped down as COO in December and took a position with Innovation Square, UF’s technology startup incubator downtown.

When I did my interview, I asked about the UF Online staff now that the program has started (they accepted 583 transfer students in January). Phillips replied that there were only two people, including herself. This resignation is much more significant than a single leader of a functioning program office departing – this is a complete change of the program office.

I’ll share more information as it becomes available.

Update (3/14): Carl Straumsheim has an article at IHE today that includes interviews with Andy McCollough from UF and Todd Hitchcock from Pearson. It’s well worth reading.

Until Florida can fill the position, W. Andrew McCollough, associate provost for teaching and technology, will lead UF Online. He said Phillips has chosen to return to ASU as a faculty member.

“She, in discussing the matter with me, had evinced after she had gotten herself immersed in the management of and the directing of UF Online that her first love at this point was functioning as a faculty member and doing research on online learning,” McCollough said. “It became evident to her that she would have difficulties fitting that preference into her waking hours here at the university.” [snip]

“The work doesn’t stop because the person in the director’s seat changes,” McCollough said. “The truth of the matter is in the work that is being done by faculty and instructional designers and videographers and web developers and marketing. The people that are doing the real work continue to do the real work.”

I personally think the explanations are only part of the story – something doesn’t add up here.

The post Major Change at UF Online: Exec director abruptly resigns appeared first on e-Literate.

First-hand experience moving to team-based course design

Wed, 2014-03-12 15:58

As Michael mentioned, we have posted the last episode in the pilot e-Literate TV series, on the topic of moving to team-based course design for a flipped classroom. This episode provides an excellent opportunity to hear the first-hand experience of a faculty member going through this transition for the first time. Vanessa Perry of GWU had no real expectations or understanding of what the transition would encompass.

The adoption of team-based course design presents a cultural barrier for traditional institutions, illustrated by the wall in the graphic that was the basis for a 2012 EDUCAUSE Review article.

The Wall

Vanessa describes what this barrier, or wall, feels like as you attempt to cross it, as seen in this snippet from the interview (full transcript and video and graphics available at the full episode on e-Literate TV).

Starting at 2:44, emphasis added.

Phil Hill:

Now, given those changes, which they sound great on paper, but one of the issues that’s very interesting that I’d like to get your thoughts on, is it’s fine to say, “There’s a reason for change,” but then there’s the issue of, “Well, we need to actually go through the change,” particularly for a faculty member where you’re really presenting the material, guiding students.

So, tell me a little bit about the change from your perspective of what you used to teach—how your method used to be of teaching—and then getting into this flipped classroom method, particularly with working with a team for design. How did that change things?

Vanessa Perry:

Well, you’re right, it’s one of those things that does sound good on paper. And so, I’d heard that it was really easy and “All you’ve got to do is take the stuff that you have in your PowerPoints and they’re just going to record it. It’ll be fine; it will be very much like what you have always done.” Well, that is so absolutely not the case. It’s nothing like what we’ve always done.

Traditionally, what I did was I sort of accumulated materials over time. I had my PowerPoint slides, and I would get up, and I would talk.

And I would talk for 60 minutes or 75 minutes or for however long, and I would stop, and I would ask students questions. And it was semi-interactive, but it was pretty much traditional talk, one-way kind of communication.

What I had to do was go back through those materials and figure out: What are the key elements that I’m best at delivering via lecture?

Versus: Now, what are the elements that are probably best delivered some other way? “Let’s put this package together.”

This is a production; it’s not like a class in the way that I had thought about it. And that was a real transformation—a real evolution—but it was certainly not easy at first.

Later on (starting at 6:12) Vanessa describes how she was very skeptical going into the process.

Vanessa Perry:

Oh, I was one of the people running around saying that this online instruction is never really going to work. And I thought it was sort of this low-quality alternative.

And I actually thought of it the same way that I thought about taking driver’s ed refresher courses; I had no real experience so all I could draw on were a couple of anecdotal situations, which were kind of these low-quality kinds of lecture capture, automated voiceover kinds of delivery mechanisms. And so, I was completely skeptical.

And when our school decided that we were going to take this sort of strategic step, I was totally against it.

In Vanessa’s case, eventually she saw the benefits and even applied the new methods to her face-to-face course (starting at 9:32).

Vanessa Perry:

I had to teach a face-to-face course, and I decided to use the digital community version in the face-to-face course, and it went so well. It went far better than, I think, teaching that course face-to-face has ever done before.

Vanessa’s case is but one example of this digital transformation in teaching, but I think her story describes several situations that many faculty members face. It is very useful to hear and understand these issues.

There’s much more to hear in this interview – see the full episode here.

The post First-hand experience moving to team-based course design appeared first on e-Literate.

New e-Literate TV Episode: Flipped Classrooms and Team-based Course Design

Wed, 2014-03-12 14:08

In our final episode of the pilot series, Phil interviews George Washington University Chair and Associate Professor of Marketing Vanessa Perry about her experiences developing a flipped class, with a focus on the team course development effort. Long-time followers of the blog know that Phil and I think the transition to team-based course design is both a major barrier and a major inflection point for institutions moving into digital education.

Check it out.

The post New e-Literate TV Episode: Flipped Classrooms and Team-based Course Design appeared first on e-Literate.

New post at 20MM: Two Approaches to Watch in Remedial Education Innovation

Mon, 2014-03-10 13:12

The 20 Million Minds Foundation is starting a new blog, and for topics of mutual interest, Michael and I will be posting articles several times a month on their site, occasionally cross-posting here. The first of my articles went up last week.

Two Approaches to Watch in Remedial Education Innovation

It is no secret that the current approach to remedial (or developmental) courses does not work. According to a study by Complete College America (see original for footnotes documenting sources):

“The numbers tell a dispiriting story. Half of all undergraduates and 70 percent of community college students take at least one remedial course. Too many of these students never overcome being placed into a remedial course. Only about a quarter of community college students who take a remedial course graduate within eight years. In fact, most students who are referred to remedial education do not even complete the remedial sequence: One study found 46 percent of students completed the sequence in reading and only 33 percent completed it in math.”

Following the maxim that “if it doesn’t work, try something else”, two community colleges on opposite coasts – Miami-Dade College in Florida and Cerritos College in California – are trying to change the game. What can we learn from these two divergent examples?

Read the whole article here.


The post New post at 20MM: Two Approaches to Watch in Remedial Education Innovation appeared first on e-Literate.