In the fall of 2011 I made the following argument:
We need more transparency in the LMS market, and clients should have access to objective measurements of the security of a solution. To paraphrase Michael Feldstein’s suggestions from a 2009 post:
- There is no guarantee that any LMS is more secure just because its provider says it is
- Customers should ask for, and LMS vendors should supply, detailed information on how the vendor or open source community has handled security issues in practice
- LMS providers should make public a summary of vulnerabilities, including resolution time
I would add to this call for transparency that LMS vendors and open source communities should share information from their third-party security audits and tests. All of the vendors that I talked to have some form of third-party penetration testing and security audits; however, how does this help the customer unless this information is transparent and available? Of course this transparency should not include details that would advertise vulnerabilities to hackers, but there should be some way to be open and transparent about what the audits are finding. [new emphasis added]
Inspired by fall events and this call for transparency, Instructure (maker of the Canvas LMS) decided to hold a public security audit using a white hat testing company, where A) the results of the testing would be shared publicly, and B) I would act as an independent observer to document the process. The results of this testing are described in two posts at e-Literate and in a post at Instructure.
Instructure has kept up the practice and just released their third public security audit.
To be clear, we are continually performing security audits on Canvas. Occasionally, our customers even call for their own third-party audits, which we fully support. But once a year, we bring in a third party for an annual public audit, which helps us remain objective and committed to the security of your information.
This year we retained the company Secure Ideas, a network security consulting firm based in Orange Park, Florida. Their security consultants have spent years researching various exploits and vulnerabilities, building toolsets, and helping organizations secure their networks.
This year’s audit started in November 2013. Secure Ideas spent three weeks doing penetration testing and conducting a general review of Canvas’ security architecture. They presented their findings in this Final Summary Report. In short, they found 0 critical, 1 high, 1 medium, and 2 low priority vulnerabilities. Details of fixes can be found in our Security Notes Forum.
No other LMS vendor has taken up this call for public security testing to my knowledge, and I attempted to describe some of the arguments against the practice here.
While I obviously have not had the same insight into the second and third annual public audits (you can review the results in the public report), I am impressed to see that the company has kept their word.
As such, we see no reason why all LMS providers in the market shouldn’t provide open security audits on an annual basis.
I still think it would help the market in general if more LMS providers adopted this practice of public security audits – it would be useful for higher ed clients and it would be good for the providers themselves.
On December 17th, the Boundless OER-based textbook startup issued a press release describing the settlement they had reached with Pearson, Cengage, and Macmillan in the lawsuit those three companies had filed against the company. (Full disclosure: Pearson has been a client of MindWires Consulting.) Actually, a lot of the press release wasn’t really about the lawsuit, and the description of the settlement consisted of the following:
Today, we’re excited to announce that we’ve settled the lawsuit. In agreeing to a confidential settlement agreement, along with a public judgment and injunction entered by the Court, the parties have resolved the dispute. The resolution allows the parties to move forward and focus on their mutually shared goal of helping students learn. Boundless now has a clear path for building and marketing its OER-driven textbook alternatives without treading upon the Plaintiffs’ rights, and it is confident that it is in compliance and will not have further legal issues with the Plaintiff publishers. In turn, Plaintiffs have reinforced the strong protection they have in and to their copyrighted works and the related goodwill that they and their authors have established, and look forward to Boundless operating its business within the agreed upon framework.
This seemed like a strangely muted ending to a strange story. It’s hard to tell from the press release what actually happened. But having read the consent decree and injunction, I have come to two conclusions. First, Boundless lost. Second, the suit and its outcome tell us very little about the future of OER but rather more about business strategy for ed tech startups.
Boundless Copyright?
Viewed at a distance, the lawsuit by the publishers looked preposterous. Boundless was publishing textbooks that competed with the publishers’ popular titles but were built using OER content. Everybody agreed that the words in Boundless’ books did not copy the words of the publishers’ products, and yet the publishers sued for copyright infringement. How could that be? Were they claiming that they owned copyright for the table of contents of, say, a standard calculus textbook? Aren’t these books structured by widely shared learning objectives? Were they claiming to own the very idea of a calculus course?
But the truth, as revealed in the court documents, is more complex than that. First, Boundless wasn’t just marketing their products as competitive with those of the other publishers. They were marketing their books as, for example, “the Boundless version” of Mankiw’s Principles of Economics (published by Cengage). And they used a picture of the Mankiw book cover when advertising the “Boundless version.” The textbook publishers also alleged that Boundless “copied the selection, coordination and/or arrangement of these textbooks, including with respect to topics, sub-topics, sub-sub-topics and photos, figures, illustrations, and examples.” In other words, the Boundless products were being designed and marketed as meticulous paraphrases of entire books. The fact that they happened to use OER as raw materials was incidental, except insofar as it helped keep their product costs down. There was no talk, for example, of the remixing value that OER advocates tout. To the contrary, the whole point of the Boundless products was that they were exactly like the name-brand products they were genericizing. The company’s business strategy was never about capitalizing on the values of Openness; it was about capitalizing on the valuation of Chegg.
“It’s like Thelma and Louise, only with aliens!”
According to CrunchBase, Boundless received its seed funding in April 2012, a month after Chegg received $25 million in their F round and at a time when it was clear that the used textbook insurgent was preparing for an IPO. The VC pitch pretty much writes itself. “Imagine Chegg, but with no warehouse to maintain and no physical books to ship!” VCs tend to love this kind of pitch, for several reasons. First, in a complex industry where it’s often very difficult to recognize a good bet, a safe strategy is to copy something that is already a hit. Second, VCs tend to be deeply suspicious of any business plan that comes within 100 miles of a bureaucratic process (or a union representative). By marketing direct to students, Boundless planned to avoid having to deal with the complexities of faculty adoption (including having to field an army of sales reps). This was a pure consumer play. And production costs were low. After all, they were taking content created by other people and fitting it into a detailed structure created by still other people. To be clear, I don’t have any problem at all with re-using openly licensed content in order to lower costs for students (and for the companies that distribute that content). My point is that Boundless was never intended to be a content company, so they weren’t burdened with high content creation costs.
So, to recap: Low product creation costs, no distribution costs and, because they were marketing directly to internet-savvy and price-sensitive students while drafting behind the textbook companies that were driving the faculty adoptions, low sales and marketing costs. Plus a sales model that’s based pretty heavily on one of the few education startup success stories of the time. Boundless was perfectly engineered to attract VC money. Unfortunately for the founders, it was less perfectly engineered to withstand a copyright suit from the publishers whose books they were openly imitating.
Live to fight another day (and way)?
The net result of the suit is the following:
- Boundless is enjoined from selling their products that are “aligned with” those of the plaintiffs and must destroy all copies of the books and the marketing materials.
- The company is enjoined from selling “aligned” products, using the images of their competitors’ products, or describing their own products as a “version,” “copy,” or “equivalent” to the plaintiffs’ products.
- Boundless will pay $200,000 to each of the plaintiffs.
That was the public part. There was a private settlement as well.
Boundless says they have “increased their usage and reach” to 3 million users of 21 titles (according to their press release), raised a total of $9.7 million in funding, and executed a pivot while all of this was going on. If you go to their website today, the headline on the home page reads, “Introducing Boundless for Educators.” Rather than doing an end run around faculty, they are now marketing directly to them. In fact, they look a lot like the current incarnation of FlatWorld Knowledge. With the cloud of the lawsuit removed, they can now focus on trying to drive that new strategy forward (and potentially raising more investment money). What they’ll be able to do with that start is not clear. But this is one “pivot” that was predictable, probably from the very beginning.
Like I said, I don’t think there’s much of a lesson here for the OER community, but there may very well be one for the VC community.
Here’s the full text of the settlement:
Update: The original version had incorrect totals for overall enrollment (including non-online students). I have removed those columns until I can verify that data. I apologize for the mistake and any confusion.
The National Center for Education Statistics (NCES) and its Integrated Postsecondary Education Data System (IPEDS) provide the most official data on colleges and universities in the United States. At long last they have started to include data fields for online education (technically distance education, the vast majority of which is online), beginning with the preliminary data released for the Fall 2012 term. Despite all of the talk about data and measuring online programs, we can only now start to get official information from a central source rather than relying on surveys or institutional data.
As an example, let’s look at the top 20 online programs (in terms of total number of students taking at least one online course) for various sectors. Some notes on the data:
- I have combined the categories ‘students exclusively taking distance education courses’ and ‘students taking some but not all courses as distance education’ into ‘total students taking online courses’ (see the sketch after these notes).
- IPEDS tracks data by the accredited institution, which can differ for multi-campus systems. For example, the University of Phoenix puts all of their online students into the Online Campus while DeVry, Kaplan and Heald assign their online students to a home campus. I manually added the DeVry, Kaplan and Heald totals, but I’m sure there are other examples where the data should be combined.
- I have not been able to set up WordPress to show these tables in a usable format while also allowing copy / paste / sort, so for now these are images.
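For readers who want to reproduce the combination step above, here is a minimal pandas sketch. The file name and the column names (`EXCLUSIVE_DE`, `SOME_DE`, `SYSTEM_NAME`) are hypothetical placeholders, not actual IPEDS field names, and the system roll-up simply expresses the manual DeVry/Kaplan/Heald addition described above as a groupby.

```python
import pandas as pd

# Hypothetical column names standing in for the IPEDS Fall 2012 distance
# education fields; the real IPEDS survey files use different field names.
df = pd.read_csv("ipeds_fall_2012_enrollment.csv")

# Combine 'exclusively distance education' and 'some but not all distance
# education' into a single 'total students taking online courses' figure.
df["total_online"] = df["EXCLUSIVE_DE"] + df["SOME_DE"]

# Roll up systems (e.g., DeVry, Kaplan, Heald) whose online students are
# reported under home campuses rather than a single online campus.
by_system = df.groupby("SYSTEM_NAME", as_index=False)["total_online"].sum()

# Top 20 institutions/systems by students taking at least one online course.
top20 = by_system.sort_values("total_online", ascending=False).head(20)
print(top20.to_string(index=False))
```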
Private 2-Year Institutions – Fall 2012
This is great data that should support much better analysis. Kudos to the Department of Education and NCES.
Last week, as expected, a California superior court judge ruled on whether to allow the Accrediting Commission for Community and Junior Colleges (ACCJC) to end accreditation for City College of San Francisco (CCSF) as of July 31, 2014. As reported in multiple news outlets, the judge granted an injunction preventing ACCJC from stripping CCSF’s accreditation at least until a court trial based on the City of San Francisco’s lawsuit, which would occur in summer 2014 at the earliest. This means that CCSF will stay open for at least another academic term (fall 2014), and it is possible that ACCJC would have to redo their accreditation review.
What was the actual decision and what are the implications for other schools?
The original issues found by ACCJC were raised in the 2006 review, leading to multiple follow-up reports and actions. By summer 2012 ACCJC issued a Show Cause ruling based on a new review – the one that is the crux of the lawsuits and injunction. The full 2012 report documented the new evaluation, finding that in order to “fully meet each ACCJC Accreditation Standard and Eligibility Requirements [sic]”, the college must follow 14 recommendations by March 2013 to keep its accreditation. CCSF did not meet this timeline, and in July 2013 ACCJC sent a letter stating that CCSF’s accreditation would be revoked as of July 31, 2014. For full background, read this post.
Despite the seven-year buildup, CCSF finally got serious about changes in summer 2013, and they replaced their Board of Trustees with a “special trustee” (Robert Agrella) “with unilateral powers to try and save the school from losing accreditation in one year”.
As CCSF is the largest college in California (85,000 students before 2012) and potentially the largest college ever to lose accreditation, the issue quickly became political. Three groups filed lawsuits seeking to force ACCJC to maintain CCSF’s accreditation – the City of San Francisco’s attorney Dennis Herrera, the American Federation of Teachers (AFT) Local 2121 and the California Federation of Teachers (CFT), and the Save City College Coalition (which was not part of last week’s ruling). Much of these three lawsuits’ arguments were based on a Department of Education notification from August 2013 that ACCJC was “out of compliance in several areas related to its sanctioning of City College”.
Both the CCSF administration and California Community College system officials accepted the ACCJC ruling and decided to work within the system, even arguing against the three lawsuits. As described in the San Francisco Chronicle:
“The ruling doesn’t affect me at all,” said Robert Agrella, the special trustee appointed by the state to replace the elected Board of Trustees last summer. “I was brought in to meet the accreditation standards, and that is exactly what we’re doing.”
Brice Harris, chancellor of the statewide community college system, agreed. In a letter to Herrera on Thursday, Harris expressed dismay that the courts had gotten involved at all.
“Court intervention is not necessary to keep City College open,” Harris wrote. “Characterizations that the cases before the court are a ‘last-ditch’ effort to ‘save’ City College are inaccurate and will do additional damage to the college’s enrollment.”
He then listed nine areas in which the college had made significant progress, including hiring a permanent chancellor, hiring a collection agency to recoup millions of dollars in student fees it never collected, and mapping out progress on each of the 357 steps needed to fully comply with accreditation standards.
In fact, CCSF has maintained a public spreadsheet detailing its efforts:
Nevertheless the city kept up the pressure through its lawsuit, as described by the San Francisco Chronicle:
The city’s suit says the commission allowed political bias and conflicts of interest to influence not only its decision to revoke the college’s accreditation next summer, but also its entire evaluation of the college that began in March 2012.
The suit alleges that the commission unfairly stacked its evaluation team with supporters of a statewide initiative called the Student Success Task Force that sought to limit college access for thousands of students whose academic goals did not include a degree or transfer to a four-year college. The commission’s president, Barbara Beno, also wrote letters to the state in support of the initiative, which became law. At the same time, City College students and faculty members were among the most outspoken critics of the idea.
The suit also claims the evaluation team had too few faculty members and should not have included Beno’s husband, Peter Crabtree.
What the judge ruled:
- To prevent ACCJC from finalizing its revocation of accreditation for CCSF until a trial can be held based on the city’s lawsuit;
- To deny the city’s request to block ACCJC from finalizing accreditation rulings for all the colleges it reviews (mostly California community colleges);
- To deny the CFT request for injunction based on alternate legal theories; and
- To deny two ACCJC requests to throw out the city and CFT lawsuits.
By reading the ruling itself it becomes apparent that the basis of the ruling was California’s Unfair Competition Law (UCL) governing “unfair”, “unlawful” and “fraudulent” practices. The City Attorney claimed in a press release that:
the court recognized that Herrera’s office is likely to prevail on the merits of his case when it proceeds to trial,
yet the actual language of the ruling was that (p. 41):
In short, I conclude there is some possibility that the City Attorney will ultimately prevail on the merits, because there is some possibility that he will establish some Commission practices (i) have zero utility and so demonstrating their unfairness, or others (ii) are illegal.
The injunction really is based on the harm done to CCSF by allowing revocation to proceed before the lawsuit goes to trial, but it does not give significant insight into whether the lawsuit might prevail.
Why did the judge grant an injunction based on the city lawsuit but not the faculty union? He explained that this was mostly a matter of the attorney’s role, as shown in the Plain English Summary (p. 54):
Back to this case. Some of the plaintiffs (the union, teachers and students) have a problem with their case. They have probably shown enough to conclude that the Commission imposed unfair procedures, but they have not shown that those procedures led to the Commission’s adverse decision. As far as the evidence presented to the court shows, the Commission might have issued exactly the same decisions with fair procedures. The plaintiffs have argued that they can win by just showing unfair procedures, and it doesn’t matter if the Commission would have done the same thing or not. But under the UCL, it does matter, at least as far as the union, teachers, and students are concerned. They have at least to show they were harmed by the specific acts they say were unfair or illegal under the UCL. They didn’t do that. It’s not good enough to argue that the Commission’s ultimate decisions (for example, threatening to terminate accreditation) causes harm.
The situation is different with respect to the case brought by the City Attorney. As a law enforcement officer he is empowered, along with other City Attorneys and the state’s Attorney General, to enforce the UCL without showing that any particular person was harmed.
As I mentioned last week:
What is remarkable for such a significant decision is that the CCSF deficiencies are not related to academic quality, and no one (CCSF, City of San Francisco, faculty union) has argued that the actual accreditation findings are in error. We’re facing the biggest accreditation shut down in history, and the issue is whether procedures were followed in evaluating non-academic management. Go figure.
Significance Inside and Outside California
The reason I am covering this case in so much detail is that it gives insight into the external pressures on higher education institutions. The real significance of the CCSF injunction is that it opens the door to direct political action to change the accreditation processes. Yes, there have been other cases where a court granted an injunction to delay revocation of accreditation, but to my knowledge all previous cases have involved motions coming from the affected institution itself (e.g. St Paul). With CCSF we have state and city politicians who went to court and prevailed (at least in their motion) despite the school and the system accepting the decision.
Accreditation is a hot political issue, and there is now blood in the water. Politicians can prevail through direct action on accreditation, not just indirect pressure, and without having to work through arcane accreditation procedures (the CCSF ruling was based on California law). As the San Francisco Bay Guardian described the political stakes in California:
The ACCJC has come under increasing fire from state education advocates, a bipartisan coalition of state legislators and U.S. Rep. Jackie Speier for its controversial advocacy to dramatically restrict the mission of California’s community colleges by focusing on degree completion to the detriment of vocational, remedial and non-credit education. The accrediting body’s political agenda — shared by conservative advocacy organizations, for-profit colleges and student lender interests — represents a significant departure from the abiding “open access” mission repeatedly affirmed by the California legislature and pursued by San Francisco’s Community College District since it was first established.
And there is great interest in changing accreditation processes even at the federal level. Just last month the President’s Council of Advisors on Science and Technology made a specific recommendation on the subject:
2. Encourage accrediting bodies to be flexible in response to educational innovation. College degrees in the United States are accredited primarily by regional nonprofit organizations whose members collaborate in accrediting one another. These organizations, on the whole, do a reasonably good job of quality assurance, but they have many standards (concerning the adequacy of physical facilities, library collections, etc.) that are irrelevant to providers of online courses and degrees. The Federal Government (and in particular, the U.S. Department of Education) should continue to encourage the regional accrediting bodies to be flexible in recognizing that many standards normally required for an accredited degree should be modified in the online arena; it should also encourage such flexibility in state oversight of education. If the bar for accreditation is set too high, the infant industry developing MOOC and related technology platforms may struggle to realize its full potential.
How would the Tiffin University / Ivy Bridge College case have turned out if Ohio politicians had taken a similar approach to San Francisco politicians? CCSF has had seven years’ warning to deal with its issues, yet Ivy Bridge College was shut down, and Altius Education was broken apart and sold off, based on a notice of several months from its accrediting agency HLC. From the press release [emphasis added]:
Today, Tiffin University announced to students a directive from the Higher Learning Commission (HLC) that the school must discontinue offering associate degree programs through Ivy Bridge College as of October 20, 2013. Ivy Bridge College, a college within Tiffin University, has offered online associate degree programs to students across the U.S. since its creation in 2008. The HLC directive, which was issued on July 25, was unexpected by Tiffin University, and Ivy Bridge College is now intensely focused on ensuring that its students’ progress towards a degree won’t be interrupted by the decision despite the very short timeline.
I agree with Audrey Watters’ take last week:
As I noted in one of my year-end review posts, I predict this and other accreditation battles will dominate the headlines in 2014.
I am amazed at the number of comments we have gotten already on the other day’s Pearson post. Don’t you people have better things to do on a holiday than read and comment on 7,000-word blog posts about textbook publishers (asks the man who spent his holiday writing a 7,000-word blog post about a textbook publisher)? Seriously, I am humbled by your commitment. For those of you who subscribe to comments on this blog by email, I’m afraid that is no longer a reliable way to track the conversation. We have integrated Google+ into our blog posts, which has the benefit of attracting more commenters and longer conversations at the cost of having two different commenting systems running simultaneously and no good way to track or integrate them. Unfortunately, we are back to the days where you have to go to the page periodically to see what is happening, at least for now.
Anyway, unsurprisingly, there is a lot of skepticism about Pearson and also the notion of “efficacy” among the commenters. There is also some discussion about the complexity of how we define (or fail to define) the goals of education and how the lack of clearly articulated goals makes any attempt to measure efficacy problematic. (Efficacious at what?) This is a point I’ve been trying to make in different ways and for different audiences in several recent posts. Larry Cuban has a timely blog post up on the history of this problem in math education.
Meanwhile, Carrie Saarinen has a more positive take on the idea of efficacy. I met Carrie at the NERCOMP LMS unConference, when she was working at Brown University. She has since been hired by Instructure (to their great credit). I highly recommend reading Carrie’s post and following her blog.
I’m generally conflicted about year-end lists of top blog posts because there is no single way to order the list that is truly reflective of the conversations that we’ve been having together on the blog. But after finding Audrey Watters’ list for Hack Education so interesting I thought, “Oh, what the heck.”
First, some general stats:
- We published 148 blog posts in 2013, bringing the total of posts on e-Literate to 1,116.
- We had about 310,000 visitors during the course of the year.
- The most common search terms that led people to the blog (excluding variations on the blog name) were “moocs”, “lms market size”, and “mcgraw hill education”.
- We had visitors from 193 countries around the world.
Here are the top five posts as measured by the number of views:
- “Six Ways the edX Automated Grading Announcement Gets It Wrong” in April 2013 by Elijah Mayfield (with a whopping 87 comments on it, I might add)
- “The Most Thorough Summary To Date of MOOC Completion Rates” in February 2013 by Phil Hill
- “A Response to the USA Today Article on Flipped Classroom Research” in October 2013 by Phil Hill
- “State of the Higher Education LMS Market: A Graphical View” in September 2012 by Phil Hill
- “Emerging Student Patterns in MOOCs: A Graphical View” in March 2013 by Phil Hill
If we look at the number of “thumbs up” and “thumbs down” ratings that readers gave to posts, the top five were as follows:
- “Six Ways the edX Automated Grading Announcement Gets It Wrong” in April 2013 by Elijah Mayfield
- “A Taxonomy of Adaptive Analytics Strategies” in March 2013 by Michael Feldstein
- “A Response to the USA Today Article on Flipped Classroom Research” in October 2013 by Phil Hill
- “Cengage MindTap and the Evolution of Courseware” in May 2013 by Michael Feldstein
- “Why Big Data (Mostly) Can’t Improve Teaching” in January 2013 by Michael Feldstein
And if we look at the number of social media mentions on Twitter, Facebook, Google+, LinkedIn, StumbleUpon, and Pinterest, then the top five are as follows:
- “Why the Google Art Project is Important” in May 2013 by Beth Harris and Steven Zucker
- “‘Can I Use This?’ How Museum Image Policies Undermine Education” in November 2012 by Beth Harris and Steven Zucker
- “Google Apps for Education: When Will It Replace the LMS?” in April 2012 by Audrey Watters
- “U.S. Claims Global Jurisdiction for .com and .net Web Sites: Is .edu Next?” in January 2012 by Jim Farmer
- “Why Higher Education Is In Trouble, In One Graph” in March 2011 by Michael Feldstein
I would like to provide the top five most commented posts, but since we have integrated Google+ into our post pages a lot of our readers are commenting through that system rather than through WordPress, and I have no way of combining those numbers accurately.
At any rate, by any measure you choose, Phil and I are thrilled with the year we’ve had. We are humbled by the wonderful featured bloggers who have graced our pages—people like Elijah Mayfield, Mike Caulfield, Bill Jerome, Audrey Watters, Jim Farmer, Beth Harris, and Steven Zucker—and deeply grateful for all of you who have taken the time to read, share, think about, and respond to what we have written.
We look forward to an even better year with you in 2014.
Love ’em or hate ’em, it’s hard to dispute that Pearson has an outsized impact on education in America. This huge company—they have a stock market valuation of $18 billion—touches all levels from kindergarten through career education, providing textbooks, homework platforms, high-stakes testing, and even helping to design entire online degree programs. So when they announce a major change in their corporate strategy, it is consequential.
That is one reason why I think that most everybody who is motivated to read this blog on a regular basis will also find it worthwhile to read Pearson’s startling publication, “The Incomplete Guide to Delivering Learning Outcomes” and, more generally, peruse their new efficacy web site. One of our goals for e-Literate is to explain what the industry is doing, why, and what it might mean for education. Finding the answers to these questions is often an exercise in reading the tea leaves, as Phil ably demonstrated in his recent posts on the Udacity/SJSU pilot and the layoffs at Desire2Learn. But this time is different. In all my years of covering the ed tech industry, I have never seen a company be so explicit and detailed about their strategy as Pearson is being now with their efficacy publications. Yes, there is plenty of marketing speak here. But there is also quite a bit about what they are actually doing as a company internally—details about pilots and quality reviews and hiring processes and M&A criteria. These are the gears that make a company go. The changes that Pearson is making in these areas are the best clues we can possibly have as to what the company really means when they say that they want efficacy to be at the core of their business going forward. And they have published this information for all the world to see.
These now-public details suggest a hugely ambitious change effort within the company. Phil and I have consulted for a few textbook publishers, including Pearson, and I worked for Cengage for a year and a half. We have a pretty good idea of the magnitude of the change management challenges these companies face right now and the strategies that various publishers are bringing to bear in an effort to meet them. I can say with absolute conviction that what Pearson has announced is no half-hearted attempt or PR window dressing, and I can say with equal conviction that what they are attempting will be enormously difficult to pull off. They are not screwing around. Whatever happens going forward, Pearson is likely to be a business school case study for the ages.
As if all of this drama weren’t enough, Pearson’s strategy raises another question which should be fascinating for educators; namely, can a rubric transform a multi-billion-dollar company?
Fair warning: This post is ridiculously long. Even by my standards.
Transforming U
Before we can really dig into the questions at hand, let’s talk about people’s beliefs about Pearson and how those beliefs may color their interpretations of the company’s actions. I’ve been thinking a lot lately about motivated cognition—the depressingly robust and pervasive process by which humans strongly tend to draw conclusions from new facts or arguments that are consistent with what they already believe (and want to believe), particularly about hot-button issues like politics. (As usual, Mike Caulfield has already written a thought-provoking post on this topic and some of its implications for education.) Now that Blackboard isn’t suing anybody anymore, Pearson is probably the single most emotionally charged brand in the education industry. Anything at all having to do with their actions or intentions is likely to function as something of a Rorschach test. It can be helpful in such situations to try to analyze the questions before us in a different, less emotionally charged context first, and then apply what we know about the context afterward to see how it might change our judgments.
To that end, I propose a thought experiment. Imagine if the president of U.C. Berkeley wrote the following:
The learning challenge is unrelenting in scale (by 2025 the world’s population will have doubled twice in the space of 75 years), and in its increasing complexity. Globally, at least 60 million children remain without access to primary school education, and an estimated quarter of a billion are lacking in basic reading and writing skills. The International Labor Organization estimates 200 million people are unemployed in 2013. The labor market demand for routine tasks has fallen rapidly over the past 50 years, meaning we can no longer rely on memorizing and reproducing knowledge acquired from a specific curriculum to support us in our careers.
At U.C. Berkeley, we are building our strategy around trying to make some impact on those big needs. Education remains very much a black box in which inputs are turned into outputs in ways that are difficult to quantify or predict consistently. But that’s no excuse for us or indeed for anyone else involved in education. We need to learn more. At Berkeley, our mission is to help people make progress in their lives through learning. So we had better be sure that we can demonstrate that progress and measure our impact in a meaningful way. Our commitment is to make improving learning outcomes the central driving force of the university, to report on progress and to use the findings to propel continuous innovation and improvement.
We borrowed the term ‘efficacy’ from healthcare. Just as a pharmacist would talk about relieving a tickly cough rather than listing the ingredients of the syrup, we want to be able to state the outcome we help to produce rather than describe the input we provide. Efficacy might be complex enough in healthcare; in education it is frequently many more times so. The process of learning is a social, dynamic, and interactive one. Context — culture, community, language — is central to the learning outcome. We may never be able to ‘prescribe’ an educational process to deliver a desired learning outcome with the same precision as a doctor or pharmacist. Yet all of us involved in education must take a lesson from that most famous phrase in school reports: we must do better.
As we transform our university, our aim is to ensure that our actions, our decisions, and our investments are driven by a clear sense of how we can make a measurable difference on learning outcomes. That idea will shape the choices that we make—what we do, what we stop doing, where we invest and innovate, who we partner with, how we engage with our students, and how we recruit, develop and reward our people.
To get started, we needed a definition. Keeping it simple, we chose to define efficacy as follows: a course has efficacy if it has “a measurable impact on improving people’s lives through learning.”
Note that, with this definition, it is the learning outcome that we are pursuing. Passing a test or an exam is good, but it is not an end in itself: what we really want to see is the benefit of doing so in somebody’s life. To give an example, if a student passes an ESL course, that is good. What really matters, though, is that their mastery of English helps them to make progress in their career. Or, as another example, achieving an “A” in a class is good, but what really matters is that, as a result, the student can progress to the degree or career of their choice, prepared for work, college, and citizenship.
Central to this effort, we designed our Efficacy Rubric which, instead of imposing a model, asked a set of questions which, once answered by the faculty, would set the course on a path to efficacy. The rubric is in four sections. The idea is that, for each of the four sections, a judgment is reached on a four-point scale. Those four judgments can be combined into a single overall judgment on the likely efficacy of the course. We should emphasize at the outset that the rubric does not claim to be scientific: its basis is in informed human judgment. Crucially, though, it is provocative and challenging. It asks difficult questions; it demands that people think deeply about what they do and why.
Each of the four sections of the rubric is intended to get at a specific vital angle of the course’s efficacy. The brief summary is as follows:
- Section 1, The Efficacy Goals: What are you trying to do?
- Section 2, The Evidence: Why should we believe you?
- Section 3, The Plan: How do you intend to achieve your efficacy goals?
- Section 4, Capacity: Do you, and those you depend on, have the knowledge, skills, and relationships to deliver the efficacy goals?
The idea is that if the faculty can answer the questions convincingly, then their courses are likely to either already be demonstrating efficacy, or at least to be on the path to efficacy.
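As an aside, here is a minimal sketch of the rubric’s structure in code. The four sections and the four-point scale come straight from the description above; the rule for combining the four judgments into one overall score is not specified anywhere, so the simple average below is purely an illustrative assumption.

```python
from dataclasses import dataclass
from typing import Dict

# The four rubric sections described above, each judged on a 1-4 scale.
SECTIONS = ("Efficacy Goals", "Evidence", "Plan", "Capacity")

@dataclass
class EfficacyReview:
    """One review: a judgment for each rubric section on a four-point scale."""
    scores: Dict[str, int]

    def overall(self) -> float:
        # Assumption: the source does not say how the four judgments are
        # combined, so a plain average stands in for the real rule.
        assert set(self.scores) == set(SECTIONS)
        assert all(1 <= s <= 4 for s in self.scores.values())
        return sum(self.scores.values()) / len(self.scores)

review = EfficacyReview(scores={
    "Efficacy Goals": 3,  # What are you trying to do?
    "Evidence": 2,        # Why should we believe you?
    "Plan": 4,            # How do you intend to achieve your efficacy goals?
    "Capacity": 3,        # Do you have the knowledge, skills, relationships?
})
print(f"Overall judgment: {review.overall():.2f} on a four-point scale")
```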
This would be a pretty provocative pronouncement for the president of a public university (or any university) to make. It would raise all sorts of questions. At the highest level we might ask, Is efficacy the right model for expressing our educational responsibility to our students? What kinds of conversations would adopting this model lead to among faculty, and how would it influence the way courses are designed and taught? What trade-offs would we make by adopting this particular framework, and do we do more good than harm, on balance, by picking one framework for everyone? Then there would be practical questions. How would faculty begin to apply such a framework as a group, across all departments and courses? Would they even accept the idea that they should? Do they have the skills to do so? What support might they need?
The words above come not from the president of U.C. Berkeley but from Pearson’s report, with some elisions and modifications on my part to make them fit with the thought experiment. I do think that we may ask different questions and, in some cases, answer the same questions differently when we think about the framework in terms of its appropriateness and usefulness for education separately from its context as vendor public relations materials.
Transforming (For? Because Of? Despite?) You
Of course, Pearson is not a public university, and their decision to pursue this strategy as what has historically been a textbook company also raises some different questions. As you think about Pearson declaring that they are now focused on evaluating all their products based on efficacy, one reaction that you may be having is something along the lines of, “Wait. You mean to tell me that, for all of those educational products you’ve been selling for all these years, your product teams are only now thinking about efficacy for the first time?” Another reaction might be, “Wait. You mean to tell me that you think that you, a textbook company, should be defining the learning outcomes and determining the effectiveness of a course rather than the faculty who teach the course?” These questions represent the rock and the hard place between which Pearson finds itself. I will address the former now and the latter in the last section of this post.
It’s impossible to unpack the meaning of Pearson’s move without putting it in the context of the historical relationship between the textbook industry and the teachers who adopt their products. Despite all of the complaints about how bad textbooks are and how clueless these companies are, the relationship between textbook publishers and faculty is unusually intimate. To begin with, I can’t think of any other kind of company that hires literally thousands of sales representatives whose job it is to go visit individual faculty, show them the company’s products, answer questions, and bring feedback on the products back to the company. And speaking of those products, the overwhelming majority of them are written by faculty—many with input from an advisory committee of faculty and pre-publication reviews by other faculty. You can fairly accuse the textbook publishers of many different faults and sins, but not taking faculty input seriously isn’t one of them. Historically, they have relied heavily on that faculty input to shape the pedagogical features of the textbooks. And they have had to, because most of the editors are not teachers themselves. More often than not, they started off as textbook sales reps. If they taught at all, it was typically ten or twenty years ago, and just for a few years—long enough for them to figure out that teaching and the academic life weren’t for them. This doesn’t mean that they don’t care about pedagogy or don’t know anything about it, but it does mean that most of what they know comes from talking with their authors and customers.
And by “customers,” I mean faculty, despite the fact that it is the students who actually buy the product. Pearson’s choice to build their learning outcomes effort around a term that comes from the pharmaceutical industry is an historically apt one for the textbook industry. In higher education in the United States, faculty prescribe textbooks and students purchase them. As a result, textbook publishers have generally designed their products to please faculty rather than students. One consequence of this is that they had no need to distinguish product features that offer faculty convenience from those that actually impact student learning. When faculty/customers said to the textbook publishers, “I want my book to come with slides, lecture notes, and a self-grading homework platform so that I don’t have to put as much work into that annoying survey course the department head is making me teach,” then that’s what they provided. Whether that collection of materials had positive impact, negative impact, or no impact on student outcomes was not a question that the textbook publisher had any particular reason to ask. For the most part, the publishers relied on their authors and customers to make good decisions for the students. As long as they provided the raw materials that the faculty said they needed, the companies’ work was done.
That relationship has been slowly breaking down over the past couple of decades and has now reached a crisis point, for a variety of reasons. The first and most obvious is that students are no longer meekly taking—or at least, and more importantly for the textbook publishers, buying—the “meds” prescribed by their teachers. The used textbook market has been growing since at least the 1980s when I was a student, meaning that the number of copies sold of each textbook edition has been going down. For many years, the textbook companies compensated for their losses by artificially creating reasons for new, incompatible editions every few years and by raising prices every year to make up in dollars per book sold what they were losing in number of books sold. This strategy worked for a surprisingly long time, but the publishers have finally hit the price ceiling in the market—aided somewhat by new ways for students to find used books, increasing student willingness to skip buying the books altogether, and increasing faculty willingness to include free or low-cost alternative curricular materials. The textbook industry is in trouble and no longer has the luxury of just trying to squeeze another year out of the old model. Revenues and profits are down. Even Pearson, which is in the best shape of the cohort, has had bad financial results and expects the problems to continue. Most of the major publishers have gone through very significant reorgs, downsizings, and in some cases recapitalization exercises in the past few years. Cengage declared bankruptcy. McGraw Hill sold off their education business to private equity. And so on.
By and large, the textbook publishers have adopted three new strategies to deal with their existential crisis. First, they are diversifying into services. They see that a lot of schools are having trouble, for example, launching online learning programs, so they provide outsourced services to those schools. Second, they are bundling homework platforms and curricular content into one digital platform. If students have to get graded in the homework platform in order to pass the class, then they have to buy the product. And if those homework platforms rely on digital access keys that expire at the end of the semester, then students can’t resell them into the used textbook market. And third, they are finally giving some thought to how they can design products that students actually want to use and buy. They are doing this mostly by trying to learn lessons from other industries about how to design products that make their customers happy. Cengage, for example, has recently undergone a major restructuring to emphasize product management teams in the style of software companies rather than traditional editorial teams.
There is a fourth strategy that is only beginning to emerge. Textbook publishers have noticed that products that have the word “analytics” or “adaptive learning” associated with them sell well. There is still a lot of whiz-bangery to the industry’s thinking, but it is starting to dawn on them that their products might sell better if they can prove that those products actually…you know…work. If they do work. If they provably help people to learn.
Pearson’s efficacy strategy is all about the lessons that they are learning from the success of their MyLabs products. They are trying to change the buying criteria for curricular materials. What if, instead of choosing a textbook based on the reputation of the author who wrote it, faculty were to choose the materials they prescribe to their students based on the evidence that those materials actually impact learning? What if students thought of those materials not just as that $150, 10-pound piece of garbage that their professor is making them buy and of which they will only read 10%, but as something that they have reason to believe will help them? What if university administrations and state governments began demanding evidence that students are getting value from the education that they are paying for? (Oh wait; that’s already happening.)
Pearson believes that this world is coming, they want to accelerate its arrival, and they believe that they can position themselves as the market leader in it. As I will outline in the next section, we have plenty of evidence to suggest that they are very, very serious about it. Whether they can actually pull it off is a vastly harder question.
Transforming P(earson)
Many academics who have never worked in a large corporation harbor a stereotype that whatever the CEO says goes (in stark contrast to the academic environment). And there are some companies for which that is mostly true. When I worked at Oracle, I was amazed at how, even in an organization of 120,000 people, when Larry Ellison wanted something to happen, it happened. Even more amazingly, if there was something that somebody in that company wanted to happen, and that somebody wasn’t Larry Ellison or one of his lieutenants, it rarely happened. But in my experience, Oracle is the exception that proves the rule. Most companies have their fiefdoms, their internecine squabbles, their cliques and their cultural inertia that a CEO cannot simply overrule. Let’s start with the sales force. In many companies that have a sales force (including textbook companies), sales reps get either most or all of their compensation not from salary but from commission. Which means that, in an important sense, they don’t really work for the company. They are like franchisees. They are bound by certain rules of their agreement with the company that provides the products that they sell, but they are fundamentally in business for themselves. In such a scenario, which products do salespeople sell? There are two answers to this question: (1) whichever ones they can and (2) whichever ones earn them the biggest commission. Suppose you’re the CEO of a textbook company and you want your sales force to emphasize a new product line that you think has a bright future or that takes your company in a new direction. But your sales rep finds it easier to sell the old product. Which product is she going to sell? Why, the one that’s easier. You can tip the scales a little by providing a bigger commission for the new product, but at the end of the day, your rep is going to weigh that commission against how hard the sale is and how many sales of each kind of product she thinks she can make. Even if you, as CEO, are willing to let the company’s revenues take a short-term hit by emphasizing the product that has better long-term prospects and losing some sales in the process, you can’t force your rep to go along with that decision because she doesn’t work for you. Not really.
So most typical companies that have substantial commission-driven sales forces have at least some significant limits on the CEO’s ability to turn the ship, so to speak. But in this regard, textbook companies are not typical companies. They are much, much worse. All the big textbook publishers got big because they are what business folks call “rollups“. Some physics and engineering textbook publisher somewhere had a really great couple of years and decided to buy a math textbook publisher. That company grew and eventually bought a biology publisher. And so on. But the main business purpose of these mergers and acquisitions was to save costs. Why run two distribution warehouses when you can run one? Why have two printing operations, or two marketing or HR departments? In terms of the core work of producing content, the physics editors have no more reason to talk to the biology editors now that they are under one roof than they did when they were separate. Each small publisher still ran mostly on its own. And that strategy worked just fine until publishers had to start doing big, whole-company projects like building digital platforms. Then it became a nightmare. But because each individual business unit has its own strong culture, processes, and internal loyalties, getting them all to line up behind one strategy and set of priorities—particularly in cases where the leader of one business unit had to sacrifice in the interest of helping another—was incredibly hard. Again, in a sense each business unit was a franchise, compensated through its own success and working in its own ways. The history of large textbook companies in the past decade is a history of new leadership trying to centralize power, failing, and then being replaced by new leadership that starts all over again.
So if you’re the CEO of major textbook publisher and you want to unite the entire 45,000-employee company around a plan to transform the way the company does business, what do you do? Surprisingly, Pearson’s CEO John Fallon’s answer was, “I’ll create a rubric.”
I’m not going to analyze Pearson’s rubric in detail here because (a) this post is already ridiculously long even without that analysis, and (b) there are plenty of others in the educational community who are incredibly well qualified to offer a critique, and I’m hoping that somebody will. (Pearson has an online tool for applying the rubric if you want to give it a whirl.) I’ll say this much about it: It’s nothing special. It’s not bad, but it’s not genius either. There are plenty of flaws and limitations you could find if you worked at it and applied it broadly enough. There is no magic in it.
But here’s the thing: There is never any magic in a rubric. The magic, when there is any, happens from the norming conversations that the rubric engenders. It happens when one colleague says to another, “What do you mean by ‘quality of evidence’?” Or “I scored that course a 2 on effectiveness. Why did you think it was a 4?” To the degree that the Efficacy Framework proves to have any magic for Pearson, it will be in the norming conversations that it engenders across the company. Like our hypothetical Berkeley president, Fallon is working with diverse groups within an institution that has a culture of independence and Balkanization. Some of this is for good reason; conversations about effectiveness in chemistry education should look very different from conversations about effectiveness in fine arts education. Some of the fractiousness is about lack of a common culture and language necessary to discuss what otherwise are common challenges. And some of it is just human territoriality and self-interest. The first two challenges might be addressed by having a deep and wide ongoing norming conversation about a rubric that is general enough to cover a wide range of disciplines and products but focused enough to provoke important discussions. The goal is for that conversation to become the basis for a new culture. The third challenge might be addressed by reinforcing that culture through your HR and other business practices.
The actions spelled out in Pearson’s document demonstrate that the company is making a substantial investment in activities that should be conversation- and culture-building:
- They recruited volunteer business leaders within the company who were interested in piloting the framework.
- They created a process by which the product teams under those business leaders conducted self-assessments of their products using the framework. Typically six to eight members of each team participated in the reviews.
- Each product team would norm their self-evaluations with the central efficacy review team.
- After the first few reviews, Pearson began running training sessions on the review process for volunteers—in fact, prospective attendees had to apply—in at least five locations around the globe.
- To date, the company has conducted 100 efficacy reviews across 15 countries and trained 600 reviewers across 25 countries.
- They have developed the capacity to conduct 150 new efficacy reviews per year.
- They have recruited a volunteer Efficacy Steering Committee of people who “lead efficacy for entire regions or business units on top of their day jobs.”
- They have developed a workshop to practice applying the efficacy framework which they have delivered to 5,500 participants.
- They are developing an e-learning module for training on the framework.
It’s hard to argue that internal activity of this scope is just PR. This is an internally focused effort, not just decorations for press releases. How much of it is genuine culture-building? It’s hard to know from the outside, but the company took pains in the document to talk about the importance of fostering “debate” and building enthusiasm that inspires employees to participate rather than forcing them.
In addition to the actions themselves, the document gives us some clues as to how Pearson executives characterize what all these efforts add up to:
To convene a company around a single idea, you need to do three essential things: realign the existing portfolio around the idea, set the routines in place to embed that idea in the future, and build a community to make the change irreversible.
Reviewing the existing portfolio will help steer the current business towards delivering better outcomes. But in order to have perpetual change, we also need to shift an idea from the periphery to the centre of the organisation. You need to ensure that it’s a recurring thought, day in, day out, and a part of every employee’s routine.
In fact, you need to deconstruct the essential routines that form the core business processes of a company, and reconstruct them around the idea you’re seeking to embed. It’s the only way to ensure that future generations of employees and leaders across the company hold the values on which you depend.
We will need to address each of these to deliver ‘institutionalisation’ of efficacy at Pearson, taking all the elements of the company that keep the core business engine running and ask ourselves: “What if we could redesign these processes from scratch to ensure they resulted in delivering learning outcomes?”
If we take them at their word, they are trying to rebuild the company from the ground up. Again, the scope of the effort is consistent with the rhetoric. In addition to the cultural work, they are making significant changes to their business processes:
- They have created a global educational research function in the company, as well as a research center for each of three major areas—Schools, Higher Education, and Professional Learning.
- They are developing a repository of internal and external educational research that will be available to “business leaders” (presumably this means within Pearson).
- They have developed a partnership with Nesta to define standards of evidence in educational efficacy research.
- They have built a formal efficacy evaluation into their business acquisition process which applies to every acquisition above $3 million.
- They apply an efficacy review to internal product investments that are over $1 million and to selected smaller projects that have strategic importance.
- They have developed efficacy review tools that have to be applied on both bids and post-delivery reviews for bids on large-scale projects.
- They have added statements about commitment to efficacy in their job advertisements, made efficacy a topic in interviews of prospective employees, and added efficacy training to employee orientation.
- They are adding efficacy criteria to employee performance reviews and, in some cases, linking efficacy results to compensation.
- They are including efficacy updates in their monthly progress reports to their board of directors.
So we have plenty of reason to believe that Pearson is quite serious about becoming a company whose mission is to deliver educational efficacy, whatever that may mean to them. But that leads us to several more questions. First, does Pearson’s notion of efficacy truly align with the academic community’s ideas of what a good education is supposed to accomplish? And second, will Pearson be successful as a company if they deliver “efficacious” educational products? To my mind, these are the same question. Pearson can succeed in the market with this strategy to the degree that they build products which accomplish educational goals that educational stakeholders agree are important, as proven by measures that educational stakeholders agree are valid and significant.
And this brings us to a huge gap in Pearson’s thinking about efficacy to date.
The Other Half of the Job
Let’s think some more about the analogy to efficacy in health care. Suppose Pfizer declared that they were going to define the standards by which efficacy in medicine would be measured. They would conduct internal research, cross-reference it with external research, come up with a rating system for the research, and define what it means for medicines to be effective. They would then apply those standards to their own medicines. And, after all is said and done, they would share their system with physicians and university researchers in the hopes that the medical community might be reassured about the quality of Pfizer’s products and maybe even contribute some ideas to the framework around the edges. How confident would we be that what Pfizer delivers would consistently be in the objective best interest of improving health? This is not entirely hypothetical; much of the drug research that happens today is sponsored by drug companies. Unsurprisingly, this state of affairs is viewed by many as deeply problematic, to say the least. It certainly doesn’t help the brand value of Pfizer. But at least much of that medical research is conducted by physicians and academic researchers and is subject to the scientific peer review process. Pearson is creating their framework largely on their own, selectively inviting in external participants here and there.
I get why they had to do this. The company is bleeding money, it will take some time to stop the flow of blood, and they couldn’t wait to build consensus before they tied the tourniquet. But it is not going to get them where they want to go. While there are obvious concerns about ethics and about whether such a company-driven approach is fundamentally compatible with progress on complex questions such as defining what an education is good for and how we know when we have achieved these ends, I want to focus on the business aspects of the problem. I want to focus on why continuing down this path is bad for Pearson. Or rather, why driving hard toward becoming facilitators rather than owners of efficacy research is good for Pearson.
I could give a number of examples, but one should hopefully suffice. In preparing to write this post, I asked Annie Cellini, Pearson’s Senior Vice President of Marketing and Strategy, whether Pearson intends to share the completed rubrics for their products with customers and prospects. This was her reply:
Though we don’t plan to share product efficacy scoring as part of our sales and marketing materials per se, where a product has a strong research and evidence base, we will communicate that to customers. It’s also worth saying that the most important output of an efficacy review isn’t a rubric score. We believe that much of a review’s value comes from the conversations that it prompts teams to have, which focus on the path forward, and on how to improve the product or service from a learner perspective. A poor score does not mean the product doesn’t work well. It often means that teams are not collecting the type of data needed in order to get a sufficiently robust view of the product’s efficacy, or they may not have a sufficiently practical plan to continuously enhance the product based on data. Their improvement plan will encourage them to start gathering new information, to start working in new ways, and to make sure that their customers are aligned with the outcomes they plan to achieve and understand their role in the product’s path to efficacy.
This is a perfectly sensible and responsible reply if you believe that the main value of the Efficacy Framework to customers is in the data that results from the work a product team does after an efficacy review. But remember, the magic of the rubric is in the norming conversations. Annie’s reply suggests that Pearson understands this in terms of the Pearson-internal processes but not yet in terms of their relationships with their customers. If Pearson were to say to faculty, “Here’s what we think we know about the efficacy of this product, here’s what we don’t know yet, and here is how we are thinking about the question,” they might get a number of responses. Maybe they would get, “Oh, well here’s how I know that it’s effective with my class.” Or “The reason that you don’t have a good answer on effectiveness yet is that your rubric doesn’t provide a way to capture the educational value that your product delivers for my students.” Or “I don’t use this product because it has direct educational effectiveness. It frees me up from some grunt work so that I can conduct activities with the class that have educational impact.” Most of all, if you’re John Fallon, you really want faculty to say to their sales reps, “Huh. I never thought about the product in quite those terms, and it makes me think a little differently about how I might use it going forward. What can you tell me about the effectiveness of this other product that I’m thinking about using, at least as Pearson sees it?” And you really want your sales reps to run back to the product teams, hair on fire, saying “Quick! Tell me everything you know about the effectiveness of this product!”
Pearson won’t get that conversation by just publishing end results of their internal analysis when they have them, which means that they have a high risk of failing to align their products with the needs and desires of their market if they think about the relationship between their framework and their customers in that way. I don’t think Pearson fully gets that yet. While the authors of The Incomplete Guide frequently invoke terms like “community” and “leaders” in the document, they generally seem to mean the community and leaders within Pearson. The company’s efforts to reach out to the academic community for feedback and participation are generally framed as an extension of their efforts rather than the very heart of them. And yet, Pearson’s brightest possible future is not as a company that designs educationally effective products, but as one that facilitates conversation and research about efficacy within the broader academic community (and in so doing is able to design products that their customers agree are effective for important educational goals as determined by meaningful measures).
There are a number of reasons why this part of the transformation will be at least as difficult as the part that Pearson is undertaking now. First, it is far from clear that the company has the trust of the academic community that would be necessary for them to take such a role. That would have to be built, in some cases from the ground (or even the basement) up. Pearson does have real strengths that are known within certain segments of the academic community—in data science, for example—but this does not transfer to a general reputation. Second (and relatedly), unlike the medical research community, the educational research community is still nascent and fragmented. Finding non-paternalistic but effective ways to bring that community together and facilitate useful conversations will be difficult to say the least. These two challenges are outside the company’s sphere of control, which means that Pearson will have to develop new ways to think about how to build their relationships with the broader educational community. Mike Caulfield wrote a post a while back about Eric von Hippel’s work on customer innovation, which is one potential font of inspiration for the company. But make no mistake; this will be a tough nut to crack.
Internally, changing the way they think about answering the questions that the framework asks them will entail as much subtle, difficult, and pervasive re-engineering of the corporate reflexes and business processes as the work being undertaken now. As I described earlier, all textbook companies that have been around for a while are wired for a particular relationship with faculty that is at the heart of how they design, produce, and sell their products. Their editors have gone through decades of tuning the way they think and work to this process, and so have their customers. When Pearson layers a discussion of efficacy onto these business processes, a tension is created between the old and new ways of doing things. Suddenly, authors and customers don’t necessarily get what they want from their products just because they asked for them. There are potentially conflicting criteria. The framework itself provides nothing to help resolve this tension. At best, it potentially scaffolds a norming conversation. But a product management methodology that can combine knowledge about efficacy, user desires, and usability requires more tools than that. And that problem is even worse in some ways now that product teams have multiple specialized roles. The editor, author, adopting teacher, instructional designer, cognitive science researcher, psychometrician, data scientist, and UX engineer may all work together to develop a unified vision for a product, but more often than not they are like the blind men and the elephant. Agreeing in principle on what attributes an effective product might have is not at all the same as being able to design a product to be effective, where “effective” is a shared notion between the company and the customers.
Pearson will need to create a new methodology and weave it into the fabric of the company. There are a number of sources from which they can draw. The Incomplete Guide mentions Lean Startup techniques, which are as good a place to start as any. But there is no methodology I know of that will work off-the-rack for education, and there certainly is no talent pool that has been trained in any such methodology. I have worked with multiple educational technology product teams in multiple companies on just this problem, and it is very, very hard. In fact, it may be the single hardest problem that the educational technology industry faces today, as well as one of the harder problems that the larger educational community faces.
We should not underestimate the scope of the effort that Pearson is making, but neither should they underestimate the scope of the challenge that is yet to come.
I have often said that corporations are amoral in the same sense that lawnmowers are amoral. They do what they are designed to do. Pearson has published a remarkably detailed blueprint of how they intend to rebuild their machine from the ground up. And in doing so, they have revealed just how hard a task they have in front of them.
Over the summer I covered the drama surrounding the impending shutdown of the largest college in California – City College of San Francisco, or CCSF – due to termination of accreditation. The short version is that the Accrediting Commission for Community and Junior Colleges (ACCJC) voted to end accreditation for CCSF as of July 31, 2014. Unless reversed, the loss of accreditation would likely force the 80,000-student college to shut down. CCSF would be the largest US college to date to lose accreditation.
- CCSF Accreditation Crisis: Seven Years in the Making
- CCSF Accreditation Crisis: The Dissenting Voices
- Major Twist in CCSF Accreditation Crisis: DOE Threatens Accrediting Agency
- Higher Ed Accrediting Commissions: Transparency for thee, not for me
- Postscript on accreditation transparency: Basic financials of two accrediting commissions
Since this summer there have been several lawsuits, most notably by the City of San Francisco and the CCSF faculty union (California Federation of Teachers), seeking an injunction to stop ACCJC’s removal of accreditation. The DOE findings (documented here) form much of the basis of these lawsuits, which will likely come to a head soon, as the superior court judge is expected to rule on the injunction this week.
Meanwhile, the CCSF administration is actually supporting the accrediting commission regarding the lawsuits. As also reported by the San Francisco Chronicle, the CCSF administration is attempting to reverse the decision by working within the ACCJC process to address the deficiencies.
The state has replaced City College’s elected trustees with a single decision-maker, Special Trustee Robert Agrella, who recently wrote to the commission’s president, Barbara Beno, in support of the process.
He said the school evaluations “have been found to be accurate and, unfortunately in some areas, even understated in the depth of problems the college faces.”
Agrella also said the evaluation process had “revealed problems that are now being addressed to assure the long-term viability of the college.”
He is asking the commission to reconsider its decision during an appeals process that is confidential under the commission’s rules.
This means there are two chances for the accreditation decision to be reversed – either by a judicial injunction or by CCSF demonstrating enough progress to cause ACCJC to delay its decision. For its part, ACCJC so far is sticking to its guns and showing no signs of backing down.
Meanwhile, enrollment at CCSF has plummeted.
So far this year, 14,870 students have signed up for credit classes in the spring compared with 19,289 by this time last year, a 23 percent decline. And registration is down 34 percent compared with two years ago, a difference of 7,524 students, according to a daily count of spring registrations that began more than two weeks ago.
What is remarkable for such a significant decision is that the CCSF deficiencies are not related to academic quality, and no one (CCSF, City of San Francisco, faculty union) has argued that the actual accreditation findings are in error. We’re facing the biggest accreditation shutdown in history, and the issue is whether procedures were followed in evaluating non-academic management. Go figure.
The post Ruling expected this week on court challenge to CCSF loss of accreditation appeared first on e-Literate.
From NPR this morning:
With 1 billion unique visitors per month, YouTube offers a glimpse of the online world’s tastes and interests. And this year, one notable trend — for better or worse — is that people are spending more time watching videos about video games. [snip]
In case this has you thinking, “Oh great, another way that YouTube has given us to waste time (as if cat videos weren’t enough),” here’s the good news: The number of people watching educational videos on YouTube has surpassed cats.
And with that premise, I found my ticket onto NPR (segment starting at 2:54).
The post Educational videos now outrank cat videos – my ticket onto NPR appeared first on e-Literate.
Though it is well hidden in a very carefully worded press release, Phil’s sharp eye has caught the details in SJSU’s announcement of the next phase in the Udacity pilot that suggest the partnership between the school and the company is winding down. When Carl Straumsheim of Inside Higher Ed asked an SJSU spokesperson point-blank whether Udacity would continue to be involved with the courses, the reply he got was “Good question for Udacity.”
The schadenfreude surrounding Sebastian Thrun’s fall from grace has been intense ever since the Fast Company article quoted the man whom the author labeled the “godfather of free online education” as saying that he realized he had a bad product, and noted that his company is “changing course” to focus on corporate training. Mike Caulfield captured the tone of the reaction in ed tech circles rather nicely when he wrote:
Thrun can’t build a bucket that doesn’t leak, so he’s going to sell sieves….Udacity dithered for a bit on whether it would be accountable for student outcomes. Failures at San José State put an end to that. The move now is to return to the original idea: high failure rates and dropouts are features, not bugs, because they represent a way to thin pools of applicants for potential employers. Thrun is moving to an area where he is unaccountable, because accountability is hard.
I imagine that it would be easy for somebody running or funding an ed tech startup to draw the wrong lessons from this sad story. Consider this blog post to be an open letter to my friends at ed tech startups with some advice about how to avoid the kind of disdain and ridicule that Thrun is receiving now.
Pride Goeth Before the Fall
In some ways, it’s hard to separate Thrun’s current problems from his biography. He’s the guy who invented the self-driving car. He’s been a research scientist at Google and a professor at Stanford. He’s a competitive cyclist. And now he is building a startup that will, he hopes, transform education. He is Silicon Valley’s own Buckaroo Banzai. None of which is something that Thrun is to be blamed for. The facts of his life are the facts of his life. But it all plays rather nicely with the story guys like Tom Friedman love to tell about how some technology genius is going to build the new gadget that will blow away all those stodgy old institutions that are holding back human potential and save the world. After all, Thrun invented the self-driving car! How hard could education be?
Thrun didn’t create these narratives, but neither did he discourage them. To the contrary, he did things like making himself the face of the California SB 520 bill by showing up as a featured speaker—and sometimes as the only featured speaker—whenever a legislator or the Governor staged an event about the bill. The unavoidable implication was that Udacity was expected somehow to “save” higher education in the state of California. Thrun was apparently either oblivious to or comfortable with that message.
And then there were the things that he has said.
Make no mistake: Sebastian Thrun is not being mocked because he said that his company had “a lousy product.” He is being mocked for the things he said before he said that. Like telling a reporter from Information Week last August that Udacity “has found the magic formula.” (Was that before or after he realized that his product was lousy?) Or like telling a Wired reporter that he thinks that in 50 years, “there will be only 10 institutions in the world delivering higher education and Udacity has a shot at being one of them.”
It is this last comment that I want to talk about, because it is particularly instructive.
A Lousy Product
It may sound so far like I am suggesting that Thrun’s mistakes were about marketing, but I am not. The problem I’m concerned with is product design in the deepest sense possible. I’m talking about how you conceive of the problem that your product is designed to solve.
Suppose somebody came to you and said, “I’ve solved it! I’ve solved the problem of data.” Or how about this: “In fifty years, there are only going to be ten apps in the iOS app store, and we have a shot at being one of them.” You would think that person is an idiot. If you want to tell yourself a story of the Silicon Valley hero riding in to save education from the hands of selfish and incompetent bureaucrats and union interests (to the cheering of the huddled masses yearning to be free), then it’s probably easy to convince yourself that the reason many educators scoffed at Thrun’s “ten universities” claim is that it threatened their livelihoods. And honestly, there was probably some of that. But mostly it was because the statement was nonsensical on its face.
Silicon Valley can’t disrupt education because, for the most part, education is not a product category. “Education” is the term we apply to a loosely defined and poorly differentiated set of public and private goods (where “goods” is meant in the broadest sense, and not just something you can put into your Amazon shopping cart). Consider the fact that John Adams included the right to an education in the constitution for the Commonwealth of Massachusetts. The shallow lesson to be learned from this is that education is something so integral to the idea of democracy that it never will and never should be treated exclusively as a product to be sold on the private markets. The deeper lesson is that the idea of education—its value, even its very definition—is inextricably tangled up in deeper cultural notions and values that will be impossible to tease out with A/B testing and other engineering tools. This is why education systems in different countries are so different from each other. “Oh yes,” you may reply, “Of course I’m aware that education in India and China are very different from how it is here.” But I’m not talking about India and China. I’m talking about Germany. I’m talking about Italy. I’m talking about the UK. All these countries have educational systems that are very substantially different from the U.S., and different from each other as well. These are often not differences that a product team can get around through “localization.” They are fundamental differences that require substantially different solutions. There is no “education.” There are only educations.
But maybe I’ve gotten a little too abstract and philosophical for a practical-minded engineer or entrepreneur. Let’s get real. I went to Rutgers, a large state university in New Jersey. If somebody asked me what attributes of the “product” that was my college education were the ones that made me think it was a good value, I would give them a list that looks something like this:
- I got a scholarship, so it was practically free for me.
- They had a really good philosophy program and a pretty good linguistics program, both of which were areas where I really learned to think.
- I loved the diversity of the people who went there, which contrasted with the bucolic rural/suburban neighborhood I grew up in.
- I found a handful of teachers and more than a handful of students who inspired me to be curious and think harder.
- I was able to explore my musical side, which had no impact on my career (sadly) but a deep and lasting impact on my happiness.
- It was close but not too close to home, and my sister went there.
Nowhere on that list is “get a degree.” This is not to say that a degree was unimportant, but it’s not something that I, as a relatively privileged and relatively well-educated 18-year-old, thought about. If you asked me which pieces were the most valuable, how much I would pay for each, and how I would react to “unbundled” services that met one or more of these needs, I wouldn’t know what to say. Moreover, I’m pretty sure that the working-class captain of his high school wrestling team with whom I shared a dorm room my freshman year would make a list of reasons why he went to Rutgers that would bear little or no resemblance to mine. Despite the fact that a college education has some consumer characteristics—you can shop for it and you can buy it—it is not an easily definable product. One college—heck, one class—can serve radically different needs for different students. So what, exactly, would you be disrupting when you disrupt college? What is the product that could replace what college does?
This is a big reason why Udacity failed in their SJSU pilot. Into an already experimental cohort, they threw some inner-city high school students at the last minute. There are lots of reasons why this was an irresponsible and stupid thing to do, but I want to focus on one in particular. The underlying assumption here is that those students need basically the same thing that the college students need, but maybe with a little more of something. Tutoring, maybe. This mentality is exactly the opposite of the promises we’ve heard about how technology is going to “personalize” education. News flash: Inner-city high school students don’t just have more educational needs than students who have successfully matriculated to SJSU; they have different needs (and different goals and motivations). Why would we assume that the same course would work for them?
What’s weird is that this mentality is also the opposite of what has made Silicon Valley great in recent years. The real revolution in software over the last decade has been in product design and development techniques that give us much better ways to understand the real and specific needs of well-defined classes of users. There is a range of these methodologies, but they all build from the common bedrock conviction that you don’t understand your users’ needs when you start. As Clay Shirky is reported to have said, “The first prototype isn’t meant to show a solution. It’s to show that you don’t yet understand the problem.”
If you want to help improve education as an entrepreneur, then start with that nugget of wisdom. Start by assuming that you don’t yet understand the problem, and that educators and students know more about the problems that need solving than you do. Use your skills to help them illuminate and elucidate the problems that they are trying to solve, and then work on your solution—not to “education”, but to a specific educational problem for specific actual humans. This is not to say that you can’t have a big impact. Education is in desperate need of help to untangle the mess of needs, goals, approaches, and institutional structures so that we can do a better job of helping more people. There are big challenges. But note the plural. There is not one hard problem. This is a complex of many intertwined and poorly defined hard problems. Improving education isn’t like designing a better way to order a taxi, or building a better smart phone, or even inventing a self-driving car. It’s harder than any of those, because it is far messier than any of those. This makes it an incredibly gratifying space to work in as long as you don’t do so out of a fantasy that you and your entrepreneur peers are the heroes who are going to “save” it, after which you will be greeted as liberators. That way lies madness. And failure.
This week the WICHE Cooperative for Educational Technologies (WCET) released its Managing Online Education survey results that were previewed at the WCET13 conference in November. Despite all of the talk about the potential of data-driven decision-making in higher education, it is remarkable how little we know.
Based on media coverage, these are the WCET survey results that are grabbing people’s attention:
- For institutions reporting both completion rates, the on-campus completion rate was better than the online rate by an average of less than 5 percent
- Institutions had trouble providing completion rates, with 65 percent unable to provide an on-campus rate and 55 percent not reporting an online rate
These factors were covered quite well at Inside Higher Ed:
Some respondents blamed the lack of data on course catalogs that don’t specify if a particular section of a course is online or not. Distance education providers have for years fought to eliminate the stigma of online courses’ implied lack of quality, and the shift toward an equal billing makes it difficult to distinguish between different forms of course delivery.
“If institutions wish to improve retention, they will need to collect these statistics,” the report reads. “It’s hard to improve what is not measured.”
Russell Poulin, deputy director of research and analysis for the WICHE Cooperative for Educational Technologies, offered another theory: What if institutions are intentionally withholding low completion rates?
As the WCET survey points out, “There is considerable mythology around completion rates for online courses.” Where do we get this information? To the best of my knowledge, the most well-known studies were targeted specifically at community and technical colleges. The WCET survey may actually be the first cross-institution study looking at online vs. on-campus completion rates across 2-year, 4-year, masters and research institutions.
The Community College Research Center (CCRC), which is part of Teachers College at Columbia University, performed two studies in the mid-2000s.
- They first looked at the 2004 cohort across the Virginia Community College System, finding a “13 percentage point difference in completion between face-to-face and online courses”, as reported in 2010.
- The same research team (Shanna Smith Jaggars and Di Xu) then looked at the 2004 and 2008 cohorts at Washington State Community and Technical Colleges, finding “online course completion rates were 8 percentage points lower than face-to-face completion rates” for the 2004 cohort and 6 percentage points lower in the 2008 cohort, as reported in 2011.
The Instructional Technology Council has captured data since 2004 in their Distance Education Survey results, also targeted at community and technical colleges [emphasis added]:
During the early years of distance education, retention and completion rates could easily fall below 50 percent. Studies consistently report that colleges have positively addressed this challenge, despite continued misconceptions. The ITC Survey participants reported that the gap between online and face-to-face student retention now averages only eight percent. In nine years of data, the trend in online retention continues to improve, but challenges remain, and addressing the gap is a major priority for many programs.
While many individual schools and even statewide systems have internally published their online and on-campus course completion rates, the WCET survey is a valuable addition for several reasons.
- The study looks across a broad array of institution types; of the 225 responding schools, 43% have associates as the highest degree, 9% bachelors, 18% masters and 31% doctorate;
- The study adds the important context that more than half of schools either cannot or will not share their course completion data; and
- The study is based on the 2011–12 academic year, helping to provide more up-to-date information.
There is a huge gap between the discussions of the power of data to improve education and the on-the-ground reality of missing or inconsistent data, even for the most basic of measurements such as course completion. This is true both at the institutional level, as evidenced by the high percentage of schools that could not provide the completion data for both online and on-campus courses, and at the cross-institutional level, as evidenced by the very small number of studies available.
Practice What We Preach, Not What We Do
There is a lot more to the WCET survey (and the CCRC and ITC studies, as well) than just course completion data, and it is well worth reading the whole report. One section that got my attention was on academic and student support services.
- Only about one quarter (22 percent) of respondents require their online students to take an orientation prior to their first online course, even though research suggests that experience aids in online course success.
- The vast majority of institutions offer library services and advising to online students. Fewer, but still a majority, offer tutoring services.
- More than three quarters of institutions have a policy on “academic integrity” (preventing cheating on assessments) for online learners. Only about 40 percent use technologies to authenticate the identity of online learners.
- Only about one-third (30 percent) of institutions offer 24/7 technical support for students. Given that students work all hours on online courses, the lack of support could hamper their success in the course.
- In meeting the needs of those with disabilities, it is alarming that sixteen percent have no policy on this subject and another thirty-six percent rely on the faculty to provide support. Therefore, at least half of the responding institutions have no systematic way to assure that students with disabilities are well-served.
It is fairly well known that students perform better in online courses once they have experience with the medium. The importance of student support services increases for low-income and underprepared students. Based on the survey, too many institutions are not providing the needed services.
The ITC survey documented what e-learning professionals identified as the “greatest challenges for students enrolled in distance education classes”, and the list reads like the flip side of the WCET findings:
- (biggest challenge) Orientation/preparation for taking distance education classes
- Providing equivalent student services virtually
- Assessing student learning and performance in distance education classes
- Computer problems/technical support
- Low student completion rate
- Completion of student evaluations
Here is another huge gap, this time between what we know is needed and what is provided. It makes you wonder what the online course completion rate would be if institutions could provide the needed academic and student support services.
After a great deal of publicity from their spring and summer pilots, San Jose State University has just announced that they will offer three of the courses again in Spring 2014 – but with a twist. On the surface, the announcement sounds like a continuation of the pilot.
This spring, San Jose State will offer three online courses that were developed with Udacity to SJSU and California State University students.
San Jose State students are registering now for Elementary Statistics, Introduction to Programming and General Psychology. In addition, the programming and statistics courses will be open to all CSU students through the CSU’s CourseMatch program.
But dig deeper, and this really appears to be an effort to separate without admitting failure or making either side look bad. There are some significant changes here:
- Udacity is no longer being paid for the courses;
- All the course content is free and open to SJSU and CSU faculty;
- The for-credit course content will be on the Udacity platform, but faculty interactions and assessments will be run on SJSU’s official LMS (Canvas); and
- SJSU will provide the teaching assistants.
Meanwhile, Udacity will keep the courses available on its platform for non-credit students.
The SJSU instructors who originally developed the programming and psychology courses with Udacity will continue to teach these classes to SJSU and CSU students this spring. The statistics course will be transitioned to a different SJSU instructor in the same department. SJSU will hire and train teaching assistants as needed. All faculty members and students will use SJSU’s learning management system, Canvas.
For anyone who has followed the Udacity story of late, the failure of the SJSU pilots was perhaps the biggest factor in Sebastian Thrun’s decision to pivot his company away from for-credit higher education, as described in Fast Company [emphasis added].
Viewed within this frame, the results were disastrous. Among those pupils who took remedial math during the pilot program, just 25% passed. And when the online class was compared with the in-person variety, the numbers were even more discouraging. A student taking college algebra in person was 52% more likely to pass than one taking a Udacity class, making the $150 price tag–roughly one-third the normal in-state tuition–seem like something less than a bargain. The one bright spot: Completion rates shot through the roof; 86% of students made it all the way through the classes, better than eight times Udacity’s old rate. (The program is supposed to resume this January; for more on the pilot, see “Mission Impossible.”)
But for Thrun, who had been wrestling over who Udacity’s ideal students should be, the results were not a failure; they were clarifying. “We were initially torn between collaborating with universities and working outside the world of college,” Thrun tells me. The San Jose State pilot offered the answer. “These were students from difficult neighborhoods, without good access to computers, and with all kinds of challenges in their lives,” he says. “It’s a group for which this medium is not a good fit.”
So the SJSU / Udacity saga has reached its end game.
Update (12/18): While I still believe this move is the end game for the SJSU / Udacity pilot, there is a detail that I got wrong. The course materials themselves will be hosted on Udacity, even for SJSU and CSU students, while all faculty interaction and testing will move to Canvas. I have corrected the bullet point above. This information is based on today’s IHE article:
The spring semester courses will be available to all students in the California State University System. San Jose State has reserved half of the seats in the statistics and programming courses for its own students. The courses will still be hosted on Udacity, but students will use Canvas, a learning management system created by Instructure, to communicate with instructors and take exams, said Clarissa Shen, Udacity’s vice president of strategic business and marketing. The MOOC provider will also collect data about how students engage with the courses. “So, no, not walking away,” Shen said in an email.
The post SJSU and Udacity End Game: 3 courses to be offered for-credit on Canvas LMS appeared first on e-Literate.
I was honored to be asked by the American Federation of Teachers to write an article on what their membership should know about adaptive learning technologies. That piece is running in this month’s issue of AFT On Campus. I am reprinting it here with their permission.
The phrase “adaptive learning” is an umbrella term that applies to an incredibly broad range of technologies and techniques with very different educational applications. The common thread is that they all involve software that observes some aspect of student performance and adjusts what it presents to each student based on those observations. In other words, all adaptive software tries to mimic some aspect of what a good teacher does, given that every student has individual needs.
Here are a few examples of adaptive learning in action:
- A student using a physics program answers quiz questions about angular momentum incorrectly, so the program offers supplemental materials and more practice problems on that topic.
- A history student answers questions about the Wars of the Roses correctly the first time, so the program waits an interval of time and then requizzes the student to make sure that she is able to remember the information.
- A math student makes a mistake with the specific step of factoring polynomials while attempting to solve a polynomial equation, so the program provides the student with extra hints and supplemental practice problems on that step.
- An ESL writing student provides incorrect subject/verb agreement in several places within her essay, so the program provides a lesson on that topic and asks the student to find and correct her mistakes.
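All of these examples share the same basic loop: observe a student response, diagnose what it implies, and adjust what the student sees next. Here is a minimal sketch of that loop; the topic names, remediation items, and mapping are invented for illustration and are not drawn from any actual product:

```python
# Minimal sketch of the observe-diagnose-adjust loop shared by the
# examples above. Topic names, content IDs, and the mapping are all
# hypothetical; real products use much richer student models.

# Remediation material to offer when a student misses a given topic.
REMEDIATION = {
    "angular_momentum": ["supplemental_reading_7.2", "practice_set_B"],
    "subject_verb_agreement": ["lesson_sv_agreement", "find_and_fix_exercise"],
}

def adapt(topic: str, answered_correctly: bool) -> list[str]:
    """Observe one response and decide what to present next."""
    if answered_correctly:
        # Nothing extra needed now; a real system might schedule a
        # later requiz here to reinforce long-term memory.
        return []
    return REMEDIATION.get(topic, ["generic_review"])

# A miss on angular momentum triggers the supplemental materials.
print(adapt("angular_momentum", answered_correctly=False))
```

Everything interesting lives in how rich the diagnosis is; this sketch only keys off right or wrong on a single topic.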
In most cases, the software is adapting to details of student performance that would be obvious to any good instructor if she had the time to observe closely enough. Occasionally, there may be some extra bit of cognitive science knowledge built into the program that the average instructor would not know. For example, most teachers probably don’t know the details of how frequently and at what intervals humans should be retested on a memorized fact in order to ensure that fact gets into long-term memory. (And even those teachers who do know generally do not have the time to work one-on-one with students and requiz them appropriately.)
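For the curious, here is a toy version of the kind of expanding-interval requiz schedule such a program might use. The doubling rule and the reset after a miss are illustrative assumptions, not the empirically tuned schedule a real system would use:

```python
from datetime import date, timedelta

def next_review(last_interval_days: int, recalled: bool) -> timedelta:
    """Toy expanding-interval scheduler: double the gap after each
    successful recall, drop back to one day after a miss. The doubling
    rule is illustrative; real systems tune intervals empirically."""
    if recalled:
        return timedelta(days=max(1, last_interval_days) * 2)
    return timedelta(days=1)

# A fact recalled after a 4-day gap is next requizzed 8 days out.
print(date.today() + next_review(4, recalled=True))
```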
What It’s Good For
The simplest way to think about adaptive learning products in their current state is as tutors. Tutors, in the American usage of the word, provide supplemental instruction and coaching to students on a one-on-one basis. They are not expected to know everything that the instructor knows, but they are good at helping to ensure that the students get the basics right. They might quiz students and give them tips to help them remember key concepts. They might help a student get unstuck on a particular step that he hasn’t quite understood. And above all, they help each student to figure out exactly where she is doing well and where she still needs help.
Adaptive learning technologies are potentially transformative in that they may be able to change the economics of tutoring. Imagine if every student in your class could have a private tutor, available to them at any time for as long as they need. Imagine further that these tutors work together to give you a daily report of your whole class—who is doing well, who is struggling on which concepts, and what areas are most difficult for the class as a whole. How could such a capability change the way that you teach? What would it enable you to spend less of your class time doing, and what else would it enable you to spend more of your class time doing? How might it impact your students’ preparedness and change the kinds of conversations you could have with them? The answers to these questions are certainly different for every discipline and possibly even for every class. The point is that these technologies can open up a world of new possibilities.
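As a toy illustration of what that daily class-wide report could look like under the hood (the names and data structures here are invented for the example, not taken from any real tutoring product):

```python
from collections import Counter

# Hypothetical per-student observations from the "tutors": which
# concepts each student is currently struggling with.
struggles = {
    "student_a": ["angular_momentum"],
    "student_b": ["angular_momentum", "torque"],
    "student_c": [],
}

def daily_report(struggles: dict[str, list[str]]) -> Counter:
    """Aggregate individual struggles into a class-wide picture."""
    return Counter(topic for topics in struggles.values() for topic in topics)

# Counter({'angular_momentum': 2, 'torque': 1}) tells the instructor
# where tomorrow's class time is best spent.
print(daily_report(struggles))
```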
What to Watch Out For
Despite the promise of adaptive technologies, and despite the liberal use of buzz phrases like “big data” and “brain science” by the vendors who create products based on these technologies, adaptive learning systems are not magic. They are tools that should be understood and employed appropriately by skilled educational practitioners. So while they are well worth exploring, there are questions you should ask and issues you should think about before making any big decisions.
To begin with, if you are thinking about trying an adaptive learning product in your class, it is important for you to understand in what ways the software adapts to the students. Before you hire a tutor, you want to know what that tutor can and cannot help your students with. You might even want to watch the tutor work so you can see how skilled she is and where her limitations are. The same is true with adaptive software. There is nothing these packages do that you, as an educator, are not capable of understanding from a pedagogical perspective. If the vendor cannot explain the software’s capabilities in what amounts to common-sense language about teaching and learning—if all you get is techno-babble—then you should think twice about adopting. There is no reason why you should have to accept a black box as a teaching product.
One reason that you need to understand how it works is that you need to decide how much you trust the software to do what it claims it can do. These are your students, and you are turning them over to the care of a tutor. Do you trust the tutor to teach the right concepts and, perhaps more importantly, not to give false or misleading guidance? How much you trust your adaptive technology depends a lot on what it is supposed to do. A multiple choice test question that links incorrect answers with supplemental content is easier to make work right than an essay assessment program that attempts to diagnose student writing problems. Context also matters. We can tolerate tools that are not perfectly accurate in some cases better than we can in others. Most students learn pretty quickly that a Google search will yield some results that aren’t helpful and adjust accordingly. Getting them to understand when to trust a grammar checker and when not to trust it is a lot harder.
More broadly, it is critical to develop a clear and well-articulated position on which teaching functions the software can fulfill and which it can’t in order to defend the value of a real college education and the faculty who deliver it. There is a cultural temptation, fed somewhat by eager vendors and a press that tends toward an excess of techno-optimism, to believe that adaptive learning platforms are the future of education and can be full replacements for teacher-facilitated classes. The root of the problem is not the adaptive technology itself so much as the belief that a “good” education is entirely quantifiable and therefore manageable by computer. When policies to hold schools accountable for student success get reduced to a handful of all-important metrics, there is danger. When the idea takes hold that machine-assessed competencies capture everything important that a student should learn in a class, there is danger. In these circumstances, the notion of adaptive learning technologies can be abused as a kind of magic incantation by the reductionists.
The countervailing temptation for faculty, then, is to reject adaptive learning itself as a fraud and a conspiracy to defund education. That temptation should be resisted. Adaptive technologies can have real value and are not going away. They can free up faculty to spend more time doing what they do best in the classroom—work that is not replicable by a machine. Rejecting these capabilities out-of-hand would risk damaging the credibility of faculty while denying students support that could improve their chances of success. The better approach, from both educational and labor perspectives, is to examine each tool on a case-by-case basis with an open mind, insist on demystifying explanations of how it works, embrace the tools that make educational sense, and think hard about how having them could empower you to be a better teacher and provide your students with richer educational experiences. Don’t be content to merely argue that you can’t be replaced by a machine. That’s a losing strategy. The winning strategy is to prove it.
Martin Weller has a great blog post up about course design responses to MOOC completion rates. He starts by arguing that, while completion rates are not everything in MOOCs, they are not nothing either. A lot depends on whether you think completion is an important metric to meet the course goals because, for example, the course is designed to help remedial students pass into a non-remedial track, or whether having students explore the content in a non-comprehensive way accomplishes your course goal. (Martin brings up Stephen Downes’s analogy that nobody complains about low newspaper completion rates, which I had never heard before and which I love.)
This is good stuff, but it starts us down the path toward a more radical re-examination of how we think about course design. Because while Martin is focusing primarily on course goals and how those should determine metrics, he’s beginning to raise the question of how individual learner goals should influence course design. And once you start asking that question, it changes everything.
The Myth of the Unified Course Goal
Pretty much all popular course design methodologies that I can think of start with the assumption that the goal of the course is to be able to certify that students in the class have learned a well-defined set of knowledge and skills (or, at least, 70% of that set, which is generally enough to pass the course). The truth of the matter is that it is never that simple. Students always have different reasons for taking the class and therefore different goals, different support needs for achieving those goals, and different behaviors that they adopt in order to achieve their goals. When I sign up for a class, I might be signing up for it because I am a major in the subject, want to pursue a related career, and need to learn everything I can about what is being taught. Or I might be taking it because I have never taken a course in the subject before and am curious. Maybe I have a passion for one of the subtopics covered in the syllabus—a particular poet, for example, or a theory that is related to a different discipline which is the one I really am passionate about. Or maybe I’m just trying to fill a prerequisite on my way to getting a diploma. Or I heard that the professor is great. Or easy. Or I have a crush on one of my classmates. Or I have a crush on the professor. Each of these motivations will impact my definition of success for the class and therefore my behavior in class.
We are able to paper over these differences, in part, because of the high barrier to entry for traditional university schooling. Courses cost money and take time. Whatever other motivations I may have as a student, the odds are pretty good that if I am willing to spend the time and the money then one of my goals is to get credit for the course. So in our design activities, we often pretend that this is the only or, at least, the most important goal that the students have. To the degree that good teachers distinguish among the different goals of their individual students, they generally don’t do so through course design. Most of the time, they adjust their teaching styles and work their personal relationships with those students. At design time, we assume homogeneity of goals, even as we (hopefully) work hard to account for heterogeneity of abilities. This is a convenient fiction because our tools for designing courses for diverse student goals are pretty limited in a traditional class. For one thing, the teacher can only be in one place at a time. Most traditional face-to-face course designs are pretty much single-threaded (although there are some exceptions). For another, the whole system of charging tuition in exchange for credits really pushes us to take seriously our responsibility to provide the certification that the tuition dollars supposedly buy.
But MOOCs explode these constraints, even in courses like the one that Martin describes where remediation of students on a for-credit path is the primary goal of the course design. Because the barrier to entry is so low—zero cost, zero travel or scheduling demands, and zero consequence for dropping out in the middle—you will get students in the class with substantially different goals, including many that do not care about certification at all. This is something that Bob Hoar from the University of Wisconsin taught me at the recent MOOC conference when we chatted about a remedial math MOOC that was similar to the one that Martin wrote about in his post. I made a comment to the effect that his MOOC would probably attract a much more traditional and homogenous group of students than, say, a MOOC about the science of cooking. Oh no, he replied. Actually, some of their registrants included parents of students taking the course who wanted to help their kids. Others were adults who were not UW students but had always struggled with math and wanted to finally get it right.
In an ideal world, we would design our courses to explicitly support goals like these. And we would design our analytics to account for the different goals when we try to measure the “success” of the course. The good news is that massive, technology-enabled courses not only enable us to create different paths for different students; they almost force us to do so. One of the most transformative aspects of distance education in general and MOOCs in particular is that these modalities challenge us to look for pedagogically effective alternatives to the control that faculty can assert in a face-to-face class and that they can’t assert online. But in order to really learn from this forcing function, we need to go beyond designing solely for course goals and explicitly design for student goals.
Differentiated Engagement
To start with, we need some common language to talk about this new design ethic. Mike Caulfield put me onto a term from feminist pedagogy called “differential participation.” While I found the language to be provocative, once I dug into the details behind the idea I found that the motivation for it is different from mine. Differential participation is about the different power relationships among participants. I’m more interested in talking about the goals of different learners. So the term I’m playing with now is “differentiated engagement.” For starters, I like “differentiated” rather than “differential” because this isn’t about more or less, better or worse. It’s about personalization. And I prefer “engagement” over “participation” for similar reasons. I’m not interested in how much or how well a student participates. I’m interested in how and why a student is engaged. We should have a course design methodology that creates courses which invite students to engage with the content and activities in ways that are consistent with their personal goals. Such a design should also, at least in theory, enable us to distinguish between a student whose non-participation is consistent with her goals for the course and one whose non-participation is in conflict with her goals for the course. Differentiated engagement.
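To make that distinction concrete, here is one hypothetical way such a goal-aware analytic could be expressed; the goal categories and the inactivity threshold are invented for illustration:

```python
# Hypothetical sketch of goal-aware engagement analytics: flag
# non-participation only when it conflicts with the student's own
# declared goal. Goal categories and the threshold are invented.

GOALS_REQUIRING_COMPLETION = {"earn_credit", "remediate_for_placement"}

def needs_outreach(declared_goal: str, weeks_inactive: int) -> bool:
    """A curious browser who goes quiet may be doing exactly what she
    came to do; a credit-seeker who goes quiet probably is not."""
    if declared_goal not in GOALS_REQUIRING_COMPLETION:
        return False  # non-participation is consistent with the goal
    return weeks_inactive >= 2  # illustrative threshold

print(needs_outreach("curious_browser", weeks_inactive=3))  # False
print(needs_outreach("earn_credit", weeks_inactive=3))      # True
```

The point is not the particular rule but that “success” and “risk” get computed relative to each student’s declared goal rather than to a single course-wide definition.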
The next thing we need is a methodology that enables us to do this sort of design work. And it turns out that we have a pretty good model in software design. Software designers often create personas representing particular types of users as one of the first steps in their product designs. But we need to use these tools correctly. While I have seen personas used in course design exercises before, they generally are deployed in order to identify what students need in order to achieve the course goals rather than to refine our notion of what the student’s goals are:
Dmitri is an avid soccer player who struggles with math and would prefer being on the field to doing his homework. He wants to complete his homework as quickly as possible, and is often a little sloppy about getting it done in his eagerness to get back outside.
While this little snapshot does tell us something about Dmitri’s goals, the focus is on getting him to do what he doesn’t want to do rather than starting by figuring out what he does want to do that is directly related to why he is in the class. We should start instead by identifying the student’s affirmative goals for the course, which entails acknowledging those goals as legitimate in some important sense and accepting that part of our responsibility is helping Dmitri to meet these goals. If his goal is to just get through the class with as little pain as possible so he can graduate, then whatever else we try to do for Dmitri, we should help him to get through the class with as little pain as possible so that he can graduate.
This does not mean we should abandon the idea of overarching course goals or the responsibilities that those goals are intended to meet, but it does mean that we should stop relying on the crutch of course credit to force students to embrace our course goals as their own. Phil and I have been thinking about this challenge a lot as we design e-Literate TV, which is somewhat MOOC-like in its ambitions. Our goal is to provoke conversations on campus that will lead to better, more consensus-driven decisions about how to deploy technology in the service of improving education. That said, while we don’t expect a lot of people to disagree with that goal in principle, we also don’t expect it to be the immediate motivator in a lot of cases. The immediate motivator is more likely to be the Board of Trustees telling the President that he has to get on the MOOC bandwagon (or get off it), or a faculty member with a labor concern preparing for a conversation at a faculty senate meeting, or a CET director feeling cut out of the decision loop. In other words, everybody has their own problems to solve. We can’t give course credit for e-Literate TV. The only way that we can bring people in is to provide them with something that will hopefully help them meet their goals. And then, through our course design, we hope to show them that the best, most satisfying way to meet their goals is to embrace ours as well.
I believe that we should be employing the same approach in traditional course designs. It seems obvious to say that students who actually have intrinsic motivation to engage with our courses will tend to learn better than the ones who are doing the work primarily because they are being told that they have to eat their vegetables or they can’t have any dessert. On a personal level, many teachers do work very hard to engage their students through their day-to-day interactions with them. But that work often stops where course design begins. There will always be a certain “eat your vegetables” aspect of schooling as long as schools are in the business of certifying knowledge, which is one reason why open courses are such an important addition to our tool set. But even within the bounds of a traditional college education, we can do much better at accomplishing our goals for our students by helping them to accomplish their goals for themselves.
In a little-reported event the week of Thanksgiving, Desire2Learn let go 28 employees. The only public report I’m aware of comes from The Record out of Desire2Learn’s hometown of Kitchener, Ontario in Canada.
E-learning company Desire2Learn has cut about 25 workers from its product development department.
Virginia Jamieson, spokesperson for the Kitchener-based company, said nine per cent of the 280-member product development section was let go. That represents about three per cent of the firm’s total workforce. [snip]
Since it was founded in 1999, the company has grown to more than 900 [sic] employees in several countries.
“So it is a small percentage of that, but it happens to be people in Kitchener, which is where our initial growth was, so it may seem bigger than it really is,” Jamieson said.
While this move might be harsh for those employees affected, it does not sound too significant. But is there more to the story?
According to Sources
Michael and I have talked to 10 off-the-record sources and reviewed Twitter, LinkedIn and Glassdoor as we looked into this issue, and the consistent story we heard was that the layoffs may be much more significant, both in number and motivation. Based on this research, we now believe:
- while 25 people in product development were let go, there were 28 people in total affected that week;
- a total of 56 people were let go in the past six months;
- product development is not the only group affected;
- the company now has ~750 employees, not ‘more than 900’; and
- 8 of the 10 sources indicated that the layoffs were related to the company not meeting sales growth targets.
Besides the cuts in product development, it appears that there have been quite a few people (~18) in marketing and several people in business development and project management who were also let go in the past six months.
What has me interested in this story is that Desire2Learn:
- Raised $80 million in August 2012 in the largest ever VC round for a Canadian company;
- Was profitable as of the funding round – the funds were not needed for operations at that company size;
- Has continued to grow in their core market (North American higher education), according to both Campus Computing and Edutechnica; and
- Seems to be growing in K-12, international and corporate markets.
Given this situation, why would Desire2Learn let go more than 7% of its workforce?
According to Desire2Learn
I asked Desire2Learn to comment on this story, and they provided the following response (at the time I had heard that ~100 people had been let go but now believe the number is 56).
Thanks for touching base and sharing the discussion from the field. To start, the sources stating that close to 100 people have been “laid off” over the past four months are simply not accurate. In fact, they are way off. There have been some incremental changes over the course of the year and the restructuring that occurred last week impacted 28 people.
The assumption that this reflects the company’s performance versus the recent investment is also incorrect. We’ve had a great year and the recent changes have nothing to do with the company’s performance – they have been strategic decisions to put the right structure in place to help Desire2Learn’s continued transformation into a global company.
This past spring, we brought in a new product team leader (Nick Oddson, who you met at FUSION) who has tremendous experience in growing global software companies. He restructured our R&D organization last week to align the department around our new markets and strategic directions. Other departments were reevaluated and reorganized earlier in the Fall.
As a result of these changes, Desire2Learn is in a great place for continued growth.
There is no story here other than the fact that D2L has set up a foundation to position itself for the next wave of growth. 2013 was an amazing year and we are looking to even more exciting things ahead in 2014!
I also talked to one of the lead investors, Jon Sakoda of NEA, for his perspective.
We are hiring very rapidly and have grown from ~500 people to ~750 people in less than 18 months. This is a lot of new people, and I think great companies always need to assess their talent and determine how to transition people who can’t be long term performers. [snip]
We added 140 people and churned 56 people since June 1. Forced churn is good and healthy when you are scaling. All of our companies do it – it’s a best practice.
Jon also pointed me to a blog post he had previously written on the subject of companies needing to ‘churn’ employees as they rapidly grow, which is consistent with his comments on Desire2Learn.
In a high growth company one of the hardest tests of leadership and loyalty is determining who can make the ascent and who will lag behind. Paradoxically, the bonds of friendship, camaraderie, and trust that make start-up teams strong in the early part of a company’s life become the hardest obstacles to overcome in making the tough decisions that set up companies to take on the challenges ahead. How can you lead your company through these transitions? Here are some best practices I’ve seen great leaders follow through the years: [snip]
Don’t Make “Churn” a Bad Word – in a scaling company, there is a relentless focus on hiring great talent to fill important roles. But it is equally important to assess overall quality, not just quantity, along the way and to be honest about hiring mistakes that are inevitable in a hyper growth environment. Make employee “churn” a metric that is measured every quarter, and don’t make “churn” a bad word.
What We Know
- 28 people were let go in November, 25 from product development;
- There have been additional rounds of people being let go since June, totaling 56 people;
- The company’s growth in the past year (in North American higher ed) appears to be in the ~6% range – its market share rose from 11.1% in 2012 to 11.8% in 2013, a relative gain of roughly 6%;
- The company is investing and most likely growing in K-12, international and corporate markets;
- The company has grown its workforce by 50% in the past 18 months, going from ~500 to ~750; and
- Since the Aug 2012 VC funding, the company has acquired three companies or platforms (Knowillage, Wiggio and Degree Compass) and opened four new offices (Boston, Melbourne, São Paulo, and Newfoundland) to join London and Singapore as their international offices.
What We Don’t Know
- How much has D2L grown in K-12, international, and non-education markets not measured by Campus Computing or Edutechnica;
- How much of the $80 million investment is still available for operations (SEC rules prevent the company or investors from commenting on financial matters); and
- Whether the massive system outages at the end of January 2013 or the problems with its Analytics engine have affected sales.
In the end, I have trouble believing that these recent cuts were purely strategic moves to position the company for growth rather than corrections for slower-than-expected growth. The arguments made by our sources that these are significant cuts driven by not hitting growth targets are compelling. However, I have found no smoking gun to back up these claims definitively.
What I do feel confident in saying regarding employee numbers is that there’s more to the story here than just ‘churn’ alongside aggressive hiring. Since the end of the Blackboard patent lawsuit, Desire2Learn has grown at an average of 13 employees per month (140 in Nov 09, 560 in Sep 12, 750 in Nov 13). Yet the numbers might have actually gone down since July 2013:
- At FUSION in July the company indicated they had more than 800 employees; yet
- Today the company has ~750 employees.
Unless the company employed more than 100 summer interns, it appears that the growth in headcount has stopped, if not reversed. I do not know how these public numbers relate to the comment about hiring 140 and letting go 56 since June.
Update (12/16): To be more direct on this point:
- Desire2Learn had “more than 800” employees as of July 2013 at the FUSION Conference;
- The company’s own figures of 140 added / 56 let go since June 1 imply a net gain of ~84;
- This should put headcount at roughly 884 today, less voluntary departures;
- But they claim ~750 employees today.
This discrepancy of more than 130 employees is non-trivial and indicates there is more to the story than we are being told.
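For readers who want to check the arithmetic, here is a minimal sketch (Python, purely illustrative) reconciling the public figures quoted above. The only assumptions are treating “more than 800” as exactly 800 and ignoring the slight overlap between the since-June hiring figures and the July baseline:

```python
# Back-of-the-envelope reconciliation of the public headcount figures
# quoted above; "more than 800" is treated as exactly 800 (an assumption).

reported_july_2013 = 800   # "more than 800" employees at FUSION, July 2013
added_since_june = 140     # hires since June 1, per Jon Sakoda
let_go_since_june = 56     # forced churn since June 1, per Jon Sakoda

implied_today = reported_july_2013 + added_since_june - let_go_since_june
claimed_today = 750        # the company's current estimate

print(f"Implied headcount: {implied_today}")                  # 884
print(f"Claimed headcount: {claimed_today}")                  # 750
print(f"Unexplained gap: {implied_today - claimed_today}")    # 134
```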
I suspect that the problems with the Analytics engine (described by Michael in this post) are having more of an impact than the fallout from the January system outages. Desire2Learn invested heavily in its analytics and student success system, yet I have not seen any significant customer wins based on these product lines (although Degree Compass is showing some promise). Conversely, I am not aware of any real problems with the Summer 2013 or Fall 2013 start-of-term system performance, so perhaps Desire2Learn has recovered from the January outages.
Furthermore, the changes being made to refocus product development and even pull back on some of the product release plans make sense to me. I think that Desire2Learn has overextended itself and would benefit from focusing more on its core product and making sure that its existing customers are happy.
The picture I get is that the truth is somewhere in the middle and yes, I believe there is more to the story than reported in the news article. I believe that Desire2Learn most likely had a difficult year in terms of failing to meet growth targets. Based on these results the company probably had to restructure and reduce middle management layers and headcount in several groups (mostly in product development and marketing). But at the same time, this is a company with financial resources to continue investing in capital projects and product improvements, and even in additional hiring.
This is a situation to keep watching over time, and we’ll keep you posted as we learn more.
With all of the great discussions spawned by the “greatest MOOC conference in the history of MOOCs” (MRI13), it seems a good time to share a segment of a keynote presentation I gave last year on MOOC history. This presentation was at the American Association of Colleges of Nursing (AACN) conference in April 2013. For context, I had just shared how online education was no longer an issue kept in the corner, away from mainstream higher ed, but was now affecting the traditional campus discussions.
A relevant graphic shown during this segment of the talk depicts the MOOC timeline (updated version shown below).
As Phil mentioned, he and I were both lucky to attend the MOOC Research Initiative conference, which was a real tour de force. Jim Groom observed that even the famously curmudgeonly Stephen Downes appeared to be enjoying himself, and I would make a similar observation about the famously curmudgeonly Jonathan Rees. If both of those guys can be simultaneously (relatively) pleased at a MOOC conference, then something is going either spectacularly right or horribly wrong. I believe it was the former in this case.
We are at one of those rare moments when there’s enough confusion that real conversation happens and possibilities open up. The sense I got is that everybody is really grappling with the questions of where we can take the concept of a “MOOC” and what MOOCishness might be good for. That is fun and hope-inducing. Phil and I spent a lot of the time interviewing folks for a future e-Literate TV series (coming to a computing device near you in March or April of 2014), so we were lucky to hear a lot of perspectives. There is some very good exploration happening now. George Siemens and his fellow conference organizers (as well as the Bill and Melinda Gates Foundation, which sponsored the event and the research) did a real service by bringing people together to talk about these issues at this pregnant moment.
One thing happened toward the end of the conference that has me puzzled, though. Jim mentioned it in his blog post:
At the same time[,] Bon Stewart’s admonitions for some kind of organized response to start filling the temporary void of direction with alternative narrative still rings in my ears—and it is very much the lesson I took away from Audrey Watters keynote at OpenEd.
There was a lot of conversation, really throughout the conference but coming to a head at the end, suggesting that the term “MOOC” is somehow damaged goods and that…something…should be done about it. Usually the word “narrative” was brought up. But this talk of “alternative narratives” or, as Bonnie put it, “changing the narrative”, confuses me. As far as I’m concerned, the connectivist/open ed crowd has been spectacularly, stunningly successful at “changing the narrative,” and I’m not at all clear what it would look like to somehow do it differently. I don’t understand what they mean here. Unfortunately, I had to rush out the door to try to catch a plane shortly after the panel discussion and didn’t have an opportunity to follow up with some of the attendees. So I’m going to try to express my confusion in this blog post and hope that somebody can help me figure out what I’m missing.
Warning: This post is long and lit-crit wonkish.
The Archeologies of Ed Tech Narratives
Before there was “MOOC,” there was “edupunk.” Jim coined this term in 2008 as a way of describing an anti-consumerist educational ethos. He was rejecting LMSs, course cartridges, PowerPoint decks, and other tools that tend to encourage (in his view) the notion of education as something that can be packaged and delivered. Journalist Anya Kamenetz picked up this term in her book DIY U: Edupunks, Edupreneurs, and the Coming Transformation of Higher Education. Despite the fact that Anya explicitly cited Jim and some of his peers as sources of inspiration for her book, the edupunk crowd was not amused. I didn’t follow this falling out closely, but my sense is that they didn’t like the book because it is, in part, consumerist in its recommendations to students about how they should think about their education. (Anya’s Gates-funded sequel, The Edupunks’ Guide to a DIY Credential, is essentially a consumers’ guide.) Anya’s use of the term and her impressive success at promoting the book and the ideas in it eventually prompted Jim and others to stop using the term edupunk.
And yet, I think it’s worthwhile for the DIY U critics to ask themselves what that narrative would have been like had it not been for the influence of their word on the book. Remember, Anya’s primary concern is the student debt crisis. Her goal is to show students that they don’t have to feel locked into the default path of a traditional college education that will plunge them deep into debt. There are other narratives that could have served her purpose. Consider, for example, libertarian billionaire Peter Thiel’s Ayn Randian exhortation that young people should drop out of college and create their own startups. Anya’s book title could have been simply DIY U: Edupreneurs and the Coming Transformation of Education. The addition of “edupunks” destabilizes the narrative that would have been implicit in that title. It raises questions for the reader: What is an edupunk? Where did that term come from? What do punks have to do with edupreneurs, or the coming transformation of higher education? You could say that the term “edupunk” was co-opted, and there would be some truth to that statement. You could also say that “edupunk” infected or informed the narrative about the student debt crisis. There would be some truth to that statement too.
The story of “MOOC” is different but it shares some important characteristics. In this case, I believe the xMOOC proponents were largely unaware of the connectivist work when they took up the term. Sebastian Thrun and Peter Norvig cited Salman Khan as their inspiration; I don’t recall them ever mentioning George Siemens, Stephen Downes, or David Cormier. I suspect that “MOOC” was a convenient term that they and others latched onto without giving it a lot of deep thought. (And for the Derrida fans in the crowd, somebody then had to create the term “SPOC” to position “private” as the absence of “open”.) But imagine if they had latched onto or made up a different term, like “Internet-scale Courses (ISC)”. In this post-pivot moment, what conversation would that have provoked? With “MOOC,” we can ask questions like, “Really, what do we mean by ‘massiveness’ and ‘openness’, and why (and how, and where) are those useful features of an educational experience?” No such possibility would exist in “Internet-scale Courses.”
Is there a world in which an original idea like “edupunk” or “MOOC” could both become dominant and remain true to its roots? One narrative we should be particularly careful of is the narrative of co-optation. The notion that some pure Idea is insidiously taken over by Forces and corrupted to their Evil Ends is both convenient enough to be almost inevitably wrong and simple enough to contradict the epistemological tenets that undergird the very idea of connectivism.
Writing and Diffidence
I have largely put away the theoretical tools that I learned as a graduate student in media studies, but one that has stayed with me is the notion of critique in the Derridian sense. Now, I will be honest: There are vast swathes of Derrida that I simply do not understand. In fact, I have always suspected that his works were partly jokes about the knowability of meaning at the expense of the reader, in somewhat the same way that Shelley’s “Ozymandias” can be read as a joke about the knowability of identity. But one thing that I did take away from Derrida (and Foucault, in a different way) is that there is an inherent, inevitable, and eternal tendency in human culture to develop simple stories about what is. These stories are always wrong, in part because they are simple. You can’t fix this. You can’t “change the narrative” to something that is “true.” We want easy answers but there are no easy answers. One can buy this much of the theory without buying the idea that meaning is radically relative, but connectivists in particular should grok this concept. Changing the narrative does not get us out of the fundamental problem that all narratives are, in some important sense, false (or, if you want to get all post-structuralist, that they can only be “true” in the sense and to the degree that they are consistent with the rest of a belief system). Nor does it solve the problem that any narrative will inevitably be warped by the powerful human tendency to make what they are hearing consistent with what they think they already know and, more importantly, with what they want to believe. The best you can do, according to this view of the world, is continually destabilize the dominant narrative—to challenge people to look, for a moment, beyond the easy and search for the true.
And this brings me back to the thing that I don’t get. Given this view of the world, what does it mean to “change the narrative” or “create alternative narratives”? What would success look like? How is it different from what has already happened with “edupunk” and “MOOC”? If those stories are failure stories, then how would a success story be different?
Phil and I aren’t thinking about e-Literate TV as a work of critique—we’re just not that smart—but I suppose you could say that one of our goals with it is to change, or at least destabilize, narratives. What we see happening on campuses is something like this:
- The campus president announces, “I just met with the very nice people at [insert commercial MOOC vendor]. We are making a MOOC. This is going to transform our university! Please make the MOOC by next week.”
- Somebody in the faculty senate declares, “I heard that MOOCs give you cancer and melt the polar ice caps.”
- Food fight.
We want to challenge both the president’s and the faculty member’s narratives, not because we want to replace them with a “better” or “truer” one, but because the most interesting conversations happen when people on both sides of the argument start realizing that the situation is more complicated than they thought it was. This is precisely what was so inspiring about the MOOC conference, and it’s the most that we know how to aspire to. If there is a more effective strategy or a higher goal for “changing the narrative,” I would like to understand what it is. But at the moment, I am having a failure of imagination.
Michael and I have been at the MOOC Research Initiative conference in Arlington, TX (#mri13) for the past three days. Actually, thanks to the ice storm it turns out MRI is the Hotel California of conferences.
credit: Bailey Carter assignment for Laura Gibbs’ class
While I’m waiting to find out which fine Texas hotel dinner I might enjoy tonight, I thought it would be worthwhile to share more information from the University of Pennsylvania research that seems to be the focus of media reports on the conference (see Chronicle, Inside Higher Ed, and eCampusNews, for example). Penn has tracked approximately one million students through their 17 first-generation MOOCs on Coursera, which provided the foundation for this research.
“Emerging data … show that massive open online courses (MOOCs) have relatively few active users, that user ‘engagement’ falls off dramatically especially after the first 1-2 weeks of a course, and that few users persist to the course end,” a summary of the study reads.
For anyone who has paid even the slightest bit of attention to the MOOC space over the past year, those conclusions hardly qualify as revelations. Yet some presenters said they felt the first day of the conference served as an opportunity to confirm some of those commonly held beliefs about MOOCs.
While it is accurate that these basic observations have been made in the past, there was some additional information from U Penn worth considering. The following slide images are courtesy of Laura Perna, a member of the research team.
The research team (but apparently not the faculty members) classified only two of the courses studied as targeted at college students (Single-variable Calculus and Principles of Microeconomics). There were seven courses targeted at “occupational” students (Cardiac Arrest, Gamification, Networked Life, Intro to Ops Management, Fundamentals of Pharmacology, Scarce Medical Resources, and Vaccines) and eight for “enrichment” (ADHD, Artifacts in Society, Health Policy and ACA, Genome Science, Modern American Poetry, Greek and Roman Mythology, Listening to World Music, and Growing Old). Update: I have changed the language in this paragraph based on commentary from one of the MOOC faculty; see clarification at end of article.
As the Chronicle pointed out, there was a wide variation in these courses.
The courses varied widely in topic, length, intended audience, amount of work expected, and other details. The largest, “Introduction to Operations Management,” enrolled more than 110,000 students, of whom about 2 percent completed the course. The course with the highest completion rate, “Cardiac Arrest, Resuscitation Science, and Hypothermia,” enrolled just over 40,000 students, of whom 13 percent stuck with it to the end.
This variation included the use of teaching assistants.
The research tracked several characteristics of the student population:
- Users – these are all students who registered for the course, regardless of time frame.
- Registrants – these are the subset of Users who registered any time from before the course began through its last week. The difference is interesting, as quite a few Users registered well after the course was over, essentially opting for a self-paced experience. We have seen very little analysis of this difference.
- Starters – these are the students who logged into the course and had some basic course activity.
- Active Users – these are the students who watched at least one video (I’m not 100% sure if this is accurate, but it is close).
- Persister – these are the students who were still active within the last week of the course.
Given their categories, the Penn team showed percentages across all the courses in question. The completion rate (% of Registrants who were Persisters) varied from 13% to 2%. More useful, in my opinion, was the view of all categories across all courses.
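To make these definitions concrete, here is a minimal sketch of how the categories nest and how the completion rate is computed. The counts below are invented for illustration (loosely echoing the scale of the larger courses quoted above); the real per-course numbers are in the research team’s slides:

```python
# Hypothetical counts for one course, illustrating the Penn funnel categories.

funnel = {
    "Users": 110_000,        # everyone who registered, regardless of time frame
    "Registrants": 95_000,   # registered before or during the course
    "Starters": 60_000,      # logged in with some basic course activity
    "Active Users": 40_000,  # watched at least one video
    "Persisters": 2_200,     # still active within the last week of the course
}

# Completion rate as defined by the research team: Persisters / Registrants.
completion_rate = funnel["Persisters"] / funnel["Registrants"]
print(f"Completion rate: {completion_rate:.1%}")  # ~2.3% with these numbers

# Each category as a share of all Users, mirroring the cross-course view.
for category, count in funnel.items():
    print(f"{category}: {count / funnel['Users']:.1%}")
```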
And finally, they showed the general pattern of MOOC activity over time, as illustrated by this view of quizzes in one course: a steep drop-off in week one, followed by a slower decrease. Three things stood out to me:
1) Which Categories - I think the team missed an opportunity to build on the work of the Stanford team, which identified different student patterns with more precision (see Stanford report here and my graphical mash-up here).
2) Self-Paced - As mentioned before, the separation between students who registered during the course’s official time frame (Registrants) and those who registered after the course was over is interesting. This latter group ranged from 2% to 23%, which is significant. Thousands and even tens of thousands of students are choosing to register and access course material when the course is not even “running”. They would have access to open material, quizzes, and presumably assignments on a self-paced basis, but likely have no interactions with other students or the faculty.
3) Learner Goals - As was discussed frequently at the conference (but not in news articles about the conference), when you open up a course’s enrollment, one result is that you get a variety of student types with different goals. Not everyone desires to “complete” a course, and it is a mistake to focus solely on “course completion” when referring to MOOCs. For future research, I would hope that U Penn and others find a way to determine learner goals near the beginning of the course and then measure whether students met those goals, whether they finished the course or dropped out.
Update (12/7): From the comments, one of the Penn professors who taught one of the MOOCs (Kevin Werbach) has provided some clarifications that I feel are important enough to include within the article.
I’m glad to see the Penn research getting so much attention, but it seems it primarily confirms what all other studies have shown.
As far as I know, the researchers didn’t have any contact with the faculty teaching the courses. So some of their statements are generalizations. E.g., I’m not sure what it means for a course to be “targeted at college students.” E.g., I teach the in-person version of my course (Gamification) to college students, and I would think most of the people who study modern poetry do so in college.
Also, I wouldn’t take the TA numbers too seriously. There’s a big difference between an undergrad and a PhD student in the field, for example, and those numbers don’t indicate how much time they worked or whether they were paid. And it looks like they confused the two sessions of my course. The first one (which seems to be what they looked at) had 1 TA. In the second session, I experimented with using two MBA students supervising 4 undergrads (hence the 6), which worked poorly.
Finally, including people who signed up after the course ended seems very odd, especially when one of the metrics is what percentage were in the course at the time it ended. Plus Coursera implemented their Watchlist feature somewhere in the middle of this process, which I think would significantly change the post-course registration behavior.
Full disclosure: Coursera has been a client of MindWires Consulting.
Two weeks ago I attended the WCET Conference in Denver. While much smaller than EDUCAUSE and some others, I find these conferences to be great learning experiences, especially as WCET supports open dialogue between academic leaders (provosts, deans, etc.), academic IT and ed tech support staff, and industry leaders. The combination of mindsets – especially the mix of academic and technology perspectives – leads to very strategic discussions.
This year the keynote was presented by Dr. Paul LeBlanc, the president of Southern New Hampshire University, which has made quite a name for itself with the College for America (CfA) program. College for America is probably the second best-known example of competency-based education (CBE) after Western Governors University (WGU), and in fact the CfA program was the first to gain Department of Education approval using the “direct assessment” rule that completely avoids seat time. See my previous post for a primer on CBE.
While you can see the entire keynote here (using the mediasite player), I wanted to highlight four key points that help illuminate the reality of competency-based education today. As CBE becomes more hotly debated, it is useful to have real examples to evaluate. I have very roughly paraphrased and taken notes on what I heard in the keynote on these points, not as a defense or endorsement of CfA, but as a real-world early example of CBE.
1) Competency-based education is typically targeted at working adults (14:11 – 20:40)
One of the things that muddies our own internal debates and policy maker debates is that we say things about higher education as if it’s monolithic. We say that ‘competency-based education is going to ruin the experience of 18-year-olds’. Well, that’s a different higher ed than the people we serve in College for America. There are multiple types of higher ed with different missions.
The one CfA is interested in is the world of working adults – this represents the majority of college students today. Working adults need credentials that are useful in the workplace, they need low cost, they need short completion times, and they need convenience. Education has to compete with work and family requirements.
CfA targets the bottom 10% of wage earners in large companies – these are the people not earning sustainable wages. They need stability and advancement opportunities.
CfA has two primary customers – the students and the employers who want to develop their people. In fact, CfA does not have a retail offering, and they directly work with employers to help employees get their degrees.
2) Competency-based education can require the unbundling of instruction (25:56 – 32:14)
One of the goals of CfA is to use technology to rethink its own business processes, which leads to disaggregation or unbundling. Higher ed does have experience with unbundling – food services and marketing services, for example – but it has typically resisted unbundling the core of what we do: instruction. Traditional faculty can fiercely hold on to these functions. This unbundling causes you to rethink the processes around course design, instruction, advising, and assessment, and there are a growing number of sources for these services (see slide below).
The most important change is to rethink the credit hour, which is the Higgs boson – the god particle – of higher ed. Amy Laitinen has a great article explaining the history of the credit hour. It’s great at telling you how long a student has sat in a seat, but not very good at telling you what the student knows. We know that employers trust higher ed less and less. This leads to the core concept of competency-based education – focus on competencies rather than seat time. Jobs for students are not the only goal for higher education, but employers don’t think we do a very good job.
credit: Paul LeBlanc slides at WCET13
3) Competency-based assessment does not equal testing (41:14 – 45:05)
Competencies are can-do statements: they’re measurable, they’re observable, and they are harder to define for some disciplines than for others.
CfA doesn’t do tests – they instead rely on project-based learning. Competencies never exist in isolation, as they end up in workplace situations. When students select which cluster of competencies to work on, they select an appropriate role to take and work through the projects.
The projects lead to filled-out rubrics that are evaluated by trained faculty, typically with a 48-hour turn-around time.
4) Competency-based education often requires custom IT systems such as the LMS (45:08 – 46:19)
Most higher ed IT systems have been designed with the traditional credit hour in mind, with defined start and end dates. CfA started using Blackboard but abandoned that effort right away. They then went to Canvas, but Canvas is still very credit-hour based. This is not a knock on those LMSs, since they were built to solve a different problem.
In the end, CfA developed their own LMS based on the Salesforce.com platform. The first iteration of the platform was a kludgy mashup, and they had to do a lot of work to simplify the user interface.
Full disclosure: Western Governors University has been a client of MindWires Consulting.
Long-time readers know that I have had a close affiliation with the Sakai Foundation at times and served on its Board of Directors relatively recently. This year, Sakai merged with the Jasig Foundation to form the Apereo Foundation. The purpose of the new organization is to become a sort of Apache Foundation of higher education, in the sense that it is an umbrella community where members of different open source projects can share best practices and find fellow travelers for higher ed-relevant open source software projects. In this merger, the whole is greater than the sum of its parts. For example, last week the foundation announced that the board of the Opencast project, which supports the Matterhorn lecture capture and video management software, voted to effectively merge with the Apereo Foundation. (The decision is subject to discussion and feedback by the Opencast community.) On the one hand, Opencast probably wouldn’t have joined the Sakai Foundation because they want to interoperate with all LMSs and wouldn’t want to be perceived as favoring one over another. On the other hand, had they joined Jasig prior to the merger, they would not have had access to the rich community of education-focused technologists that Sakai has to offer. (Jasig has historically focused mostly on tech-oriented solutions like portals and identity management.) The fact that the Opencast community is interested in joining Apereo is a strong indicator that the new foundation is achieving its goal of establishing an ecumenical brand.
And in that context, I am pleased to tell you that I will be involved with the foundation in a more ecumenical role. Specifically, I will be facilitating an advisory council.
About the Council
The goal of the council is to provide the Apereo Foundation Board and community with perspective. When you are running an open source software project, it’s easy to get a little near-sighted as you focus on the hard work of shipping code, gathering requirements, coordinating volunteer developers, finding additional volunteers, and so on. You can get lost in the details that are essential to the short-term viability of the project but that can distract you from issues of long-term sustainability, such as thinking about schools that have not yet adopted your project but might, or about important changes on campuses that are relevant to the ways in which your software will be perceived and used. The Advisory Council is meant to offer some of that perspective. We have invited participants who are one or several steps removed from the projects. They might work at a school that is heavily involved but not be personally heavily involved. They might be at a school that has adopted a community project but is not deeply involved with the community at the moment. They might have been active in the community in their previous job but be further removed from it at present. Or they might come from a school that has not adopted any of the projects but could be receptive to adopting the right project some time in the future.
The group will convene four times a year to provide feedback on presentations from the Board and from various project teams on their vision, goals, and plans. That’s it, really. Provide perspective. It’s a simple role, but an important one. We hope that if all goes well our council members will choose to act as informal ambassadors between the Apereo community and the broader community of higher education, but that would really be a byproduct of success rather than something that we’re asking our councilors to do.
The Members
I am absolutely delighted with the group that we have assembled:
- Kimberly Arnold, Evaluation Consultant, University of Wisconsin
- Lois Brooks, Vice Provost of Information Services, Oregon State University
- Laura Czerniewicz, Director of the Centre for Educational Technology, University of Cape Town
- Ted Dodds, CIO, Cornell University
- Kent Eaton, Provost, McPherson College
- Stuart Lee, Deputy CIO, IT Services, Oxford University
- Patrick Masson, General Manager, Open Source Initiative
- Lisa Ruud, Associate Provost, Empire Education Corporation
I feel privileged to be able to work with this group.
Diversity
If your goal is to provide perspective, then it is particularly important to get a diverse group together. Overall, I’m pleased to say that we have achieved diversity across a number of dimensions:
- Roles: I wanted to get a good balance of academic and IT stakeholders, as well as at least one ed tech researcher. We’ve achieved that.
- Institutions: We definitely achieved some diversity of institutions, particularly when you throw the Empire Education Corporation and the Open Source Initiative into the mix. In the future, though, I would like to get more representation from smaller schools that don’t typically get involved in open source projects.
- Geography: Apereo is a global community and ultimately needs global input. But we have to balance that against the need to have a workable spread of time zones among council members who will be meeting with each other mostly virtually. The compromise we struck this time around was to have one representative from Europe and one from Africa as a down payment toward that goal. Ultimately, we will probably need to have several regional advisory councils.
- Gender: In keeping with the Apereo Foundation Board’s stated goal of cultivating women leaders in the community, I am delighted that we have achieved gender balance in our council membership. By the way, this was not hard. Asking colleagues to recommend women who would be good isn’t any different from asking them to recommend academic deans or leaders from small schools.
- Race: I don’t know for certain how we did on this metric because I haven’t met some of the council members in person yet, but my sense is that it is either a near or a total failure. Going forward, I would like to shoot for more racial diversity.
* * * * *
So that’s the deal. I am looking forward to convening the first meeting of this group (probably in January) and getting to know them better. More generally, I am really happy with the direction that the new foundation is taking and am privileged to be able to play a small part in it.