Michael Feldstein

What We Are Learning About Online Learning...Online

Postscript on Student Textbook Expenditures: More details on data sources

Fri, 2015-03-27 12:20

By Phil Hill

There has been a fair amount of discussion around my post two days ago about what US postsecondary students actually pay for textbooks.

The shortest answer is that US college students spend an average of $600 per year on textbooks despite rising retail prices.

I would not use College Board as a source on this subject, as they do not collect their own data on textbook pricing or expenditures, and they only use budget estimates.

<wonk> I argued that the two best sources for rising average textbook price are the Bureau of Labor Statistics and the National Association of College Stores (NACS), and when you look at what students actually pay (including rental, non-consumption, etc) the best sources are NACS and Student Monitor. In this post I’ll share more information on the data sources and their methodologies. The purpose is to help people understand what these sources tell us and what they don’t tell us.

College Board and NPSAS

My going-in argument was that the College Board is not a credible source on what students actually pay:

The College Board is working to help people estimate the total cost of attendance; they are not providing actual source data on textbook costs, nor do they even claim to do so. Reporters and advocates just fail to read the footnotes.

Both the College Board and National Postsecondary Student Aid Study (NPSAS, official data for the National Center for Education Statistics, or NCES) currently use cost of attendance data created by financial aid offices of each institution, using the category “Books and Supplies”. There is no precise guidance from DOE on the definition of this category, and financial aid offices use very idiosyncratic methods for this budget estimate. Some schools like to maximize the amount of financial aid available to students, so there is motivation to keep this category artificially high.

The differences are three-fold:

  • NPSAS uses official census reporting from schools, while the College Board gathers data from a subset of institutions – their member institutions;
  • NPSAS reports the combined data point “Average net price” and not the sub-category “Books and Supplies”; and
  • College Board data is targeted at freshman full-time students.

From the NCES report just released today, based on 2012 data (footnote to figure 1):

The budget includes room and board, books and supplies, transportation, and personal expenses. This value is used as students’ budgets for the purposes of awarding federal financial aid. In calculating the net price, all grant aid is subtracted from the total price of attendance.

And the databook definition used, page 130:

The estimated cost of books and supplies for classes at NPSAS institution during the 2011–12 academic year. This variable is not comparable to the student-reported cost of books and supplies (CSTBKS) in NPSAS:08.

What’s that? It turns out that in 2008 NCES actually used a student survey – asking students what they spent rather than asking financial aid offices for net price budget calculations. NCES fully acknowledges that the current financial aid method “is not comparable” to student survey data.

As an example of how this data is calculated, see this guidance letter from the state of California [emphasis added].

The California Student Aid Commission (CSAC) has adopted student expense budgets, Attachment A, for use by the Commission for 2015-16 Cal Grant programs. The budget allowances are based on statewide averages from the 2006-07 Student Expenses and Resources Survey (SEARS) data and adjusted to 2015-16 with the forecasted changes in the California Consumer Price Index (CPI) produced by the Department of Finance.

The College Board asks essentially the same question of the same sources. I’ll repeat: the College Board is not claiming to be an actual data source for what students actually spend on textbooks.


NACS

NACS has two sources of data: bookstore financial reporting from member institutions and a Student Watch survey report put out in the fall and spring of each academic year. NACS started collecting student expenditure data in 2007, initially every two years, then every year, then twice a year.

NACS sends their survey through approximately 20 – 25 member institutions to distribute to the full student population for that institution or a representative sample. For the Fall 2013 report:

Student WatchTM is conducted online twice a year, in the fall and spring terms. It is designed to proportionately match the most recent figures of U.S. higher education published in The Chronicle of Higher Education: 2013/2014 Almanac. Twenty campuses were selected to participate based on the following factors: public vs. private schools, two-year vs. four-year degree programs, and small, medium, and large enrollment levels.

Participating campuses included:

  • Fourteen four-year institutions and six two-year schools; and
  • Eighteen U.S. states were represented.

Campus bookstores distributed the survey to their students via email. Each campus survey fielded for a two week period in October 2013. A total of 12,195 valid responses were collected. To further strengthen the accuracy and representativeness of the responses collected, the data was weighted based on gender using student enrollment figures published in The Chronicle of Higher Education: 2013/2014 Almanac. The margin of error for this study is +/- 0.89% at the 95% confidence interval.
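The reported margin of error is consistent with the standard worst-case formula for a proportion at a 95% confidence level. A quick sketch of that check (my own arithmetic, not from the Student Watch report):

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Worst-case margin of error for a proportion at ~95% confidence.

    p=0.5 maximizes the variance p*(1-p), giving the most conservative bound.
    """
    return z * math.sqrt(p * (1 - p) / n)

# 12,195 valid responses, as reported in the Fall 2013 study
moe = margin_of_error(12195)
print(f"{moe:.2%}")  # 0.89%, matching the reported +/- 0.89%
```

Note this treats the responses as a simple random sample; the actual study weighted responses by gender, but the back-of-envelope figure lines up with the published number.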

I interviewed Rich Hershman and Liz Riddle, who shared the specific definitions they use.

Required Course Materials: Professor requires this material for the class and has made this known through the syllabus, the bookstore, learning management system, and/or verbal instructions. These are materials you purchase/rent/borrow and may include textbooks (including print and/or digital versions), access codes, course packs, or other customized materials. Does not include optional or recommended materials.

The survey goes to students who report what they actually spent. This includes the categories of sharing materials, choosing not to acquire, rental, purchase new and purchase used.

The data is aggregated across full-time and part-time students, undergraduates and graduates. So the best way to read the data I shared previously ($638 per year) is as per-capita spending. The report breaks the data down further by institution type (2-year public, etc.) and acquisition type (purchase new, rental, etc.). The Fall 2014 data is being released next week, and I’ll share more breakdowns with this data.

In future years NACS plans to expand the survey to go through approximately 100 institutions.

Student Monitor

Student Monitor describes their survey as follows:

  • Conducted each Spring and Fall semester
  • On campus, one-on-one intercepts conducted by professional interviewers during the three-week period March 24th to April 14th, 2014 [Spring 2014 data] and October 13th–27th [Fall 2014 data]
  • 1,200 Four Year full-time undergrads (Representative sample, 100 campuses stratified by Enrollment, Type, Location, Census Region/Division)
  • Margin of error +/- 2.4%

In other words, this is an intercept survey conducted with live interviews on campus, targeting full-time undergraduates. This includes the categories of sharing materials, choosing not to acquire, rental, purchase new and purchase used.

In comparison to NACS, Student Monitor tracks more schools (100 vs. 20) but fewer students (1,200 vs. 12,000).

Despite the differences in methodology, Student Monitor and NACS report spending that is fairly consistent (both on the order of $600 per year per student).

New Data in Canada

Alex Usher from Higher Education Strategy Associates shared a blog post in response to my post that is quite interesting.

This data is a little old (2012), but it’s interesting, so my colleague Jacqueline Lambert and I thought we’d share it with you. Back then, when HESA was running a student panel, we asked about 1350 university students across Canada about how much they spent on textbooks, coursepacks, and supplies for their fall semester. [snip]

Nearly 85% of students reported spending on textbooks. What Figure 1 shows is a situation where the median amount spent is just below $300, and the mean is near $330. In addition to spending on textbooks, another 40% or so bought a coursepack (median expenditure $50), and another 25% reported buying other supplies of some description (median expenditure: also $50). Throw that altogether and you’re looking at average spending of around $385 for a single semester.

After subtracting out the “other supplies” that do not fit the NACS / Student Monitor definitions, and acknowledging that fall spending is typically higher than spring due to full-year courses, this data also lands in the same ballpark of $600 per year (slightly higher in this case).

Upcoming NPSAS Data

The Higher Education Act of 2008 required NCES to add student expenditures on course materials to the NPSAS database, but this has not been added yet. According to Rich Hershman from NACS, NCES is using a survey question quite similar to the NACS question and is field testing it this spring. The biggest difference is that NPSAS reports annual data, whereas NACS and Student Monitor field their surveys in the fall and spring and then combine the data.

Sometime in 2016 we should have better federal data on actual student expenditures.


Update: Mistakenly published without reference to California financial aid guidance. Now fixed.

Update 3/30: I mistakenly referred to the IPEDS database for NCES when this data is part of National Postsecondary Student Aid Study (NPSAS). All references to IPEDS have been corrected to NPSAS. I apologize for confusion.

The post Postscript on Student Textbook Expenditures: More details on data sources appeared first on e-Literate.

How Much Do College Students Actually Pay For Textbooks?

Wed, 2015-03-25 07:16

By Phil Hill

With all of the talk about the unreasonably high price of college textbooks, the unfulfilled potential of open educational resources (OER), and student difficulty in paying for course materials, it is surprising how little is understood about student textbook expenses. The following two quotes illustrate the most common problem.

Atlantic: “According to a recent College Board report, university students typically spend as much as $1,200 a year total on textbooks.”

US News: “In a survey of more than 2,000 college students in 33 states and 156 different campuses, the U.S. Public Interest Research Group found the average student spends as much as $1,200 each year on textbooks and supplies alone.”

While I am entirely sympathetic to the need and desire to lower textbook and course material prices for students, no one is served well by misleading information, and this information is misleading. Let’s look at the actual sources of data and what that data tells us, focusing on the aggregate measures of changes in average textbook pricing in the US and average student expenditures on textbooks. The data tells us that students spend on average $600 per year on textbooks, not $1,200.

First, however, let’s address the all-too-common College Board reference.

College Board Reference

The College Board positions itself as the source for the cost of college, and their reports look at tuition (published and net), room & board, books & supplies, and other expenses. This chart is the source of most confusion.

College Board Chart

The light blue “Books and Supplies” data, ranging from $1,225 to $1,328, leads to the often-quoted $1,200 number. But look at the note right below the chart:

Other expense categories are the average amounts allotted in determining total cost of attendance and do not necessarily reflect actual student expenditures.

That’s right – the College Board just adds budget estimates for the books & supplies category, and this is not at all part of their actual survey data. The College Board does, however, point people to one source that they use as a rough basis for their budgets.

According to the National Association of College Stores, the average price of a new textbook increased from $62 (in 2011 dollars) in 2006-07 to $68 in 2011-12. Students also rely on textbook rentals, used books, and digital resources.

The College Board is working to help people estimate the total cost of attendance; they are not providing actual source data on textbook costs, nor do they even claim to do so. Reporters and advocates just fail to read the footnotes. The US Public Interest Research Group is one of the primary reasons that journalists use the College Board data incorrectly, but I’ll leave that subject for another post.

The other issue is the combination of books and supplies. Let’s look at actual data and sources specifically for college textbooks.

Average Textbook Price Changes

What about the idea that textbook prices keep increasing?

BLS and Textbook Price Index

The primary source of public data for this question is the Consumer Price Index (CPI) from the Bureau of Labor Statistics (BLS). The CPI sets up a pricing index based on a complex regression model. The index is set to 100 for December, 2001 when they started tracking this category. Using this data tool for series CUUR0000SSEA011 (college textbooks), we can see the pricing index from 2002 – 2014[1].

CPI Annual

This data equates to roughly 6% year-over-year increases in the price index of new textbooks, which doubles the index roughly every 12 years. But note that this data is not inflation-adjusted, as the CPI is used to help determine the inflation rate. Since US inflation over 2002 – 2014 averaged roughly 2% per year, textbook prices have been rising at roughly three times the rate of inflation.
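The doubling-time arithmetic can be sanity-checked with the standard rule that an index growing at constant rate r doubles in ln(2)/ln(1+r) years (my own back-of-envelope check, not a BLS calculation):

```python
import math

def doubling_time(annual_growth):
    """Years for an index to double at a constant annual growth rate."""
    return math.log(2) / math.log(1 + annual_growth)

print(round(doubling_time(0.06), 1))  # 11.9 years at 6% per year
print(round(0.06 / 0.02))             # 3 – textbook growth vs. ~2% inflation
```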

NACS and Average Price Per Textbook

NACS, as its name implies, surveys college bookstores to determine what students spend on various items; the College Board uses NACS as a source. The following is the most concise summary, also showing rising textbook prices on a raw, non-inflation-adjusted basis, although at a lower rate of increase than the CPI.

The following graph for average textbook prices is based on data obtained in the annual financial survey of college stores. The most recent data for “average price” was based on the sale of 3.4 million new books and 1.9 million used books sold in 134 U.S. college stores, obtained in the Independent College Stores Financial Survey 2013-14.

NACS Avg Textbook Price

Other Studies

The Government Accountability Office (GAO) did a study in 2013 looking at textbook pricing, but their data source was the BLS. This chart, however, is popularly cited.

GAO Chart

There are several private studies done by publishers or service companies that give similar results, but by definition these are not public.

Student Expenditure on Books and Supplies

For most discussion of textbook pricing, the more relevant question is what students actually spend on textbooks, or at least on required course materials. Does the data above indicate that students are spending more and more every year? The answer is no, because there are far more options today for getting textbooks than there used to be, and one choice – not acquiring the course materials at all – is growing rapidly. According to Student Monitor, 30% of students choose not to acquire every college textbook.

Prior to the mid-2000s, the rough model for student expenditures was that roughly 65% of students purchased new textbooks and 35% bought used textbooks. Today, there are options for rentals, digital textbooks, and courseware, and the ratios are changing.

The two primary public sources for how much students spend on textbooks are the National Association of College Stores (NACS) and The Student Monitor.


NACS

NACS also measures average student expenditure for required course materials, which is somewhat broader than textbooks but does not include non-required course supplies.

The latest available data on student spending is from Student Watch: Attitudes & Behaviors toward Course Materials, Fall 2014. Based on survey data, students spent an average of $313 on their required course materials, including purchases and rentals, for that fall term. Students spent an average of $358 on purchases of “necessary but not required” technology, such as laptops and USB drives, over the same period.

NACS Course Material Expenditures

Note that by the nature of analyzing college bookstores, NACS is biased towards traditional face-to-face education and students aged 18-24.

Update: I should have described the NACS methodology in more depth (or probably need a follow-on post), but their survey is distributed through the bookstore to students. Purchasing through Amazon, Chegg, rental, and decisions not to purchase are all captured in that study. It’s not flawless, but it is not just for purchases through the bookstore. From the study itself:

Campus bookstores distributed the survey to their students via email. Each campus survey fielded for a two week period in October 2013. A total of 12,195 valid responses were collected. To further strengthen the accuracy and representativeness of the responses collected, the data was weighted based on gender using student enrollment figures published in The Chronicle of Higher Education: 2013/2014 Almanac. The margin of error for this study is +/- 0.89% at the 95% confidence interval.

Student Monitor

Student Monitor is a company that provides syndicated and custom market research, producing extensive research on college expenses in the spring and fall of each year. It interviews students directly for its data rather than analyzing college bookstore financials – a different methodology from NACS. Based on the Fall 2014 data specifically on textbooks, students spent an average of $320 per term, which is quite close to the $638 per year calculated by NACS. Based on information from page 126:

Average Student Acquisition of Textbooks by Format/Source for Fall 2014

  • New print: 59% of acquirers, $150 total mean
  • Used print: 59% of acquirers, $108 total mean
  • Rented print: 29% of acquirers, $38 total mean
  • eTextbooks (unlimited use): 16% of acquirers, $15 total mean
  • eTextbooks (limited use): NA% of acquirers, $9 total mean
  • eTextbooks (file sharing): 8% of acquirers, $NA total mean
  • Total for Fall 2014: $320 mean
  • Total on Annual Basis: $640 mean
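As a quick consistency check, the per-format means in the list above sum to the reported fall total (my own arithmetic, assuming the “total mean” figures are per-student averages across all respondents rather than averages among acquirers only):

```python
# Per-format mean spending, Fall 2014 (Student Monitor, page 126)
fall_means = {
    "new print": 150,
    "used print": 108,
    "rented print": 38,
    "etext (unlimited use)": 15,
    "etext (limited use)": 9,
}

fall_total = sum(fall_means.values())
print(fall_total)      # 320, matching the reported fall mean
print(fall_total * 2)  # 640, the annualized figure
```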

Note, however, that the Fall 2014 data ($640 annual) represents a steep increase from the previous trend as reported by NPR (but based on Student Monitor data). I have asked Student Monitor for commentary on the increase but have not heard back (yet).

NPR Student Monitor

Like NACS, Student Monitor is biased towards traditional face-to-face education and students aged 18-24.


I would summarize the data as follows:

The shortest answer is that US college students spend an average of $600 per year on textbooks despite rising retail prices.

I would not use College Board as a source on this subject, as they do not collect their own data on textbook pricing or expenditures, and they only use budget estimates.

I would like to thank Rob Reynolds from NextThought for his explanation and advice on the subject.

Update (3/25): See note on NACS above.

Update (3/27): See the postscript post for additional information on data sources.

  1. Note that BLS has a category CUSR0000SEEA (Educational Books & Supplies) that has been tracked far longer than the sub-category College Textbooks. We’ll use the college textbooks series to simplify comparisons.

The post How Much Do College Students Actually Pay For Textbooks? appeared first on e-Literate.

Austin Community College’s ACCelerator: Big bet on emporium approach with no pilots

Sun, 2015-03-22 14:55

By Phil Hill

While at SXSWedu, I was able to visit Austin Community College’s ACCelerator lab, which got a fair bit of publicity over the past month. While the centerpiece of ACCelerator usage is developmental math, the 600+ workstation facility spread over 32,000 square feet also supports tutoring in a variety of subjects, first-year experience, group advising, academic coaching, adult education, continuing education, college readiness assessment preparation, and student skills workshops.


But it is the developmental math course that has received the most coverage.

Austin Community College welcomed second lady Dr. Jill Biden and Under Secretary of Education Dr. Ted Mitchell on Monday, March 9, to tour the Highland Campus’ ACCelerator and meet with students and faculty of the college’s new developmental math course, MATD 0421. [snip]

“I teach a lot of developmental students,” says Dr. Biden. “The one stumbling block does seem to be math and math anxiety and ‘Can I do it?’. This (course) seems to be so empowering and so positive. Students can see immediate success.”

MATD 0421 is a self-paced, emporium-style course that encompasses all three levels of developmental math. Paul Fain at Inside Higher Ed had an excellent article that included a description of the motivation.

Dismal remedial success rates have been a problem at Austin, which enrolls 60,000 students. So faculty members from the college looked around for alternative approaches to teaching math.

“Really, there’s nothing to lose,” said [Austin CC president] Rhodes.

The Highland Campus, where the ACCelerator lab is located, is built in a former shopping mall. Students at Austin CC can choose courses at any of the system’s 8 campuses or 5 centers. All developmental math at the Highland Campus is run through MATD 0421, so students across the system can choose traditional approaches at other campuses or the emporium approach at Highland.

Austin CC picked this approach after researching several other initiatives (Fain describes Virginia Tech and Montgomery College examples). The IHE article then describes the design:

Austin officials decided to try the emporium method. They paired it with adaptive courseware, which adjusts to individual learners based on their progress and ability to master concepts. The college went with ALEKS, an adaptive software platform from McGraw-Hill Education.

Fain describes the personalization aspect:

The new remedial math course is offered at the ACCelerator. The computer stations are arranged in loose clusters of 25 or so. Faculty members are easy to spot in blue vests. Student coaches and staff wear red ones.

This creates a more personalized form of learning, said Stacey Güney, the ACCelerator’s director. That might seem paradoxical in computer lab that has a bit of a Matrix feel. But Güney said that instead of a class size of 25 students per instructor, the course features 25 classes of one student.

“In here there is no back of the classroom,” she said.

While the program is fairly new (second term), there are some initial results described by the official site:

In MATD 0421’s inaugural semester:

  • The withdrawal rate was less than half the rate for traditional developmental math courses.
  • 75 percent of the students completed the equivalent of one traditional course.
  • Nearly 45 percent completed the equivalent to a course and one-half.
  • Over 14 percent completed the equivalent to two courses.
  • 13 students completed all the equivalent of three courses.

Go read the full IHE article for a thorough description. I would offer the following observations.

  • Rather than a pilot program, which I have argued plagues higher ed and prevents diffusion of innovations, Austin CC has committed to A) a big program up front (~700 students in the Fall 2014 inaugural semester and ~1,000 students in Spring 2015), while B) offering students the choice of traditional or emporium. To me, this offers the best of both worlds, allowing a big bet that doesn’t get caught in the “purgatory of pilots” while preserving student choice.
  • While the computer lab and software make for easy headlines, I hope people don’t miss the heavy staffing that is a central feature of this lab – there are more than 90 faculty and staff working there, teaching the modular courses, roving the aisles to provide help, and working at help desks. The ACCelerator is NOT an exercise in replacing faculty with computers.
  • During my tour, instructor Christie Allen-Johnson and associate professor Ann P. Vance described their plans to perform a more structured analysis of the results. Expect to see more validated outcomes starting at the end of CY2015.
  • When and if Austin CC proves the value and results of the model, that would be the time to migrate most of the remaining developmental math courses into this emporium model.
  • The one area that concerns me is the lack of structured time for students away from the workstations. Developmental students in community colleges often have not experienced academic success – knowing how to succeed, learning how to learn, believing in their ability to succeed – and often this non-cognitive aspect of math is as important as the actual coursework. Allen-Johnson described the availability of coaching that goes beyond coursework, but that is different than providing structure for coaching and self-regulated learning.

The post Austin Community College’s ACCelerator: Big bet on emporium approach with no pilots appeared first on e-Literate.

Our Policy on Cookies and Tracking

Thu, 2015-03-19 10:00

By Michael Feldstein

In the wake of the Pearson social media monitoring controversy, edubloggers like Audrey Watters and D’arcy Norman have announced their policies regarding code that can potentially track users on their blogs. This is a good idea, so we are following their example.

We use Google Analytics and WordPress analytics on both e-Literate and e-Literate TV. The main reason we do so is that we believe the information these packages provide helps us create more useful content. Even after a decade of blogging, we are still surprised sometimes by which posts earn your attention and which ones don’t. We look at our analytics results fairly regularly to see what we can learn about writing more content that you find to be worth your time. This is by no means the only or even the main way that we decide what we will write, but we think of it as one of the relatively few clues we have to understand which posts and topics will have the most value to you. We do not run ads and have no intention of doing so in the future. In the case of e-Literate TV, where the content is expensive to make, we may also use information regarding the number of viewers of the episodes in the future to demonstrate to sponsors that our content is having an impact. We make no effort to track individuals and, in fact, have always had a policy of letting our readers comment on posts without registering on the site. But Google in particular is likely making more extensive use of the usage data that it gathers.

In addition to the two analytics packages mentioned above, we do embed YouTube videos and use social media buttons, which may carry their own tracking code with them from the companies that supply them. Unfortunately, this is just part of the deal with embedding YouTube videos or adding convenient “Tweet this” links. The tracking code (which usually, but not always, means the same thing as “cookies”) on our site is pretty typical for what you will find for any site that provides these sorts of conveniences.

But that doesn’t mean that you have to allow yourself to be tracked if you prefer not to be. There are a number of excellent anti-tracking plugins available for the mainstream browsers, including Ghostery and Disconnect. If you are concerned about being tracked (here or anywhere), then we recommend installing one or more of these plugins, and we also recommend spending a little time to learn how they work and what sorts of tracking code are embedded on the different sites you visit so that you can make informed and fine-grained decisions about what information you do and do not want to share. These tools often let you make service-by-service and site-by-site decisions, but they generally start with the default of protecting your privacy by blocking everything.

To sum up and clarify our privacy policies:

  • We do use Google Analytics and WordPress analytics.
  • We do embed social media tools that in some cases carry their own tracking code.
  • We do not make any effort to track individuals on our sites.
  • We do not use or plan to use analytics for ads, nor do we sell the information from our analytics to third parties.
  • We may in the future provide high-level summaries of site traffic and video views to e-Literate TV sponsors.
  • We do support commenting on blog posts without registration.[1]
  • We do provide our full posts in our RSS feed, which excludes most (but not all) tracking code.
  • We do provide CC-BY licensing on our content so that it can be used on other sites, including ones that do not have any tracking code.
  1. Note: We do require an email address from commenters for the sole purpose of providing us with a means of contacting the poster in the event that the person has written something uncivil or marginally inappropriate and we need to discuss the matter with that person privately before deciding what to do about moderation. In the 10-year history of e-Literate, this has happened about three or four times. There are two differences relevant to reader privacy between requiring the email address and requiring registration. First, we allow people to use multiple email addresses or even temporary email addresses if they do not wish that email to be personally identifiable. We only require that the email address be a working address. Second and probably more importantly, without registration, there is no mechanism to link comments to browsing behavior on the site.

The post Our Policy on Cookies and Tracking appeared first on e-Literate.

Back To The Future: Looking at LMS forecasts from 2011 – 2014

Wed, 2015-03-18 18:11

By Phil Hill

At today’s Learning Analytics and Knowledge 2015 conference (#LAK15), Charles Severance (aka Dr. Chuck) gave the morning keynote, organized around the theme of going back in time to see what people (primarily myself and Richard Katz) were forecasting for education. By looking at the reality of 2015, we can see which forecasts were on track and which were not. I like this concept, as it is useful to go back and see what we got right and wrong, so this post is meant to provide some additional context, particularly for the LMS market. Chuck’s keynote also gives cover for doing so without seeming too self-absorbed.

But enough about me. What do you think about me?

I use the term forecast since I tend to describe patterns and trends and then try to describe the implications. This is different from the Katz video, which aimed to make specific predictions as a thought-provoking device.


I introduced the LMS squid diagram in 2008 as a tool to help people see the LMS market holistically rather than focusing on detailed features. Too many campus evaluations then (and even now) missed the big picture that there were only a handful of vendors and some significant market dynamics at play.

A 2009 presentation, by the way, was the basis for Michael and me connecting for the first time. Bromance.


In early 2011 I wrote a post on Visigoths at the LMS Gates, noting:

I am less inclined to rely on straight-line projections of market data to look ahead, and am more inclined to think the market changes we are seeing are driven by outside forces with potentially nonlinear effects. Rome may have been weakened from within, but when real change happened, the Visigoths made it happen. [snip]

Today, there is a flood of new money into the educational technology market. In addition to the potential acquisition of Blackboard, Instructure just raised $8M in venture funding and is vying for the role of Alaric in their marketing position, Pearson has been heavily investing in Learning Studio (eCollege for you old-timers), and Moodlerooms raised $7+M in venture funding. Publishing companies, ERP vendors, private equity, venture funding – these are major disruptive forces. And there are still significant moves being made by technology companies such as Google.

In August I started blogging at e-Literate with this post on Emerging Trends in LMS / Ed Tech Market. The trends I described (summary here, see post for full description):

From my viewpoint in 2011, the market has essentially moved beyond Blackboard as the dominant player driving most of the market dynamics.

  • The market is more competitive, with more options, than it has been for years.
  • Related to the above, there is a trend towards software as a service (SaaS) models for new LMS solutions.
  • Also related to the above, the market is demanding and getting real Web 2.0 and Web 3.0 advances in LMS user interfaces and functionality. We are starting to see some real improvements in usability in the LMS market.
  • The lines are blurring between content delivery systems (e.g. Cengage MindTap, Pearson MyLabs, etc) and LMS.
  • Along those same lines, it is also interesting what is not being seen as a strategic blurring of lines – between LMS and student information systems.
  • Analytics and data reporting are not just aspirational goals for LMS deployments, but real requirements driven by real deadlines.

Looking back at the 2011 posts, I would note the following:

  • I think all of the basic trends have proven to be accurate, although I overstated the importance of analytics as “real requirements driven by real deadlines”. Analytics are important and some schools have real requirements, but for most schools analytics is not far beyond “aspirational goals”.
  • Chuck over-interpreted the graphic as saying “it’s all about MyLabs”. The real point is the blurring of lines between previously distinct categories of delivery platforms and digital content. I would argue that the courseware movement as well as most CBE platforms shows this impact in 2015. MyLabs was just an example in the graphic.
  • My main message about outside forces was that the internal players (Blackboard, Desire2Learn, Moodle, etc) were not going to be the source of change, rather “new competitors and new dynamics” would force change. Through the graphic, I over-emphasized the ERP and big tech players (Oracle, Google, Pearson & eCollege, etc) while I under-emphasized Instructure, which has proven to be the biggest source of change (although driven by VC funding).
  • I still like the Rome / Visigoths / Alaric metaphor.

In early 2012 I had a post Farewell to the Enterprise LMS, Greetings to the Learning Platform that formed the basis of the forecasts Chuck commented on in the LAK15 keynote.

In my opinion, when we look back on market changes, 2011 will stand out as the year when the LMS market passed the point of no return and changed forever. What we are now seeing are some real signs of what the future market will look like, and the actual definition of the market is changing. We are going from an enterprise LMS market to a learning platform market.

In a second post I defined the characteristics of a Learning Platform (or what I meant by the term):

  1. Learning Platforms are next-generation technology compared to legacy LMS solutions arising in the late 1990s / early 2000s. While many features are shared between legacy LMS and learning platforms, the core designs are not constrained by the course-centric, walled-garden approach pioneered by earlier generations.
  2. Learning Platforms tend to be SaaS (software as a service) offerings, based in a public or private cloud on multi-tenant designs. Rather than being viewed as an enterprise application to be set up as a customized instance for each institution, there is a shared platform that supports multiple customers, leveraging a shared technology stack, database, and application web services.
  3. Learning Platforms are intended to support and interoperate with multiple learning and social applications, and not just as extensions to the enterprise system, but as a core design consideration.
  4. Learning Platforms are designed around the learner, giving a sense of identity that is maintained throughout the learning lifecycle. Learners are not just pre-defined roles with access levels within each course, but central actors in the system design.
  5. Learning Platforms therefore are social in nature, supporting connections between learners and customization of content based on learner needs.
  6. Learning Platforms include built-in analytics based on the amalgamation of learner data across courses, across institutions, and even beyond institutions.
  7. Learning Platforms allow for the discovery of instructional content, user-generated content, and of other learners.

Going back to the Farewell post, the forecast was:

Another trend that is becoming apparent is that many of the new offerings are not attempting to fully replace the legacy LMS, at least all at once. Rather than competing with all of the possible features that are typical in enterprise LMS solutions, the new platforms appear to target specific institutional problems and offer only the features needed. Perhaps inspired by Apple’s success in offering elegant solutions at the expense of offering all the features, or perhaps inspired by Clayton Christensen’s disruptive innovation model, the new learning platform providers are perfectly willing to say ‘no – we just don’t offer this feature or that feature’.

Looking back at the 2012 posts, I would note the following:

  • I still see the move from enterprise LMS to learning platform, but it is happening slower than I might have thought and more unevenly. The attributes of SaaS and fewer features have happened (witness Canvas in particular), and the interoperability capabilities are occurring (with special thanks to Chuck and his work with IMS developing LTI). However, the adoption and true usage of multiple learning and social applications connected through the platform is quite slow.
  • The attributes of learner-centric design and built-in analytics can be seen in many of the CBE platforms, but not really in the general LMS market itself.
2013 / 2014

In 2013 and 2014 I updated the LMS squid graphic.


  • Chuck was right to point out that in the revision I no longer included the outside forces of ERP & big tech. The key point of the 2011 forecasts was outside forces making changes, but by 2013 it was clear that ERP & big tech were not part of this change.
  • There is also a big addition of homegrown solutions, or alternative learning platforms, that is worth noting. The entrance of so many new CBE platforms designed from the ground up for specific purposes is an example of this trend.
Overall Notes

Thanks to Chuck, this has been informative (to me, at least) to go back and review forecasts and see what I got right and what I got wrong. Chuck’s general point on my forecasts seems to be that I am over-emphasizing the emergence of learning platforms, at least as a distinct category from enterprise LMS, and that we’re still seeing an LMS market, although with changed internals (fewer features, more interoperability). I don’t disagree with this point (if I am summarizing accurately). However, if you read the actual forecasts above, I don’t think Chuck and I are too far apart. I may be more optimistic than he is and need to clarify my terminology somewhat, but we’re in the same ballpark.

Now let’s turn the tables. My main critique of Dr. Chuck’s keynote is that he just didn’t commit to the song. We know he is willing to boldly sing, after all (skip ahead to 1:29).


Update: Clarified language on LTI spec

The post Back To The Future: Looking at LMS forecasts from 2011 – 2014 appeared first on e-Literate.

Blackboard Brain Drain: One third of executive team leaves in past 3 months

Tue, 2015-03-17 10:02

By Phil HillMore Posts (302)

In August 2013 Michael described Ray Henderson’s departure from an operational role at Blackboard. As of the end of 2014, Ray is no longer on the board of directors at Blackboard either. He is focusing on his board activity (including In The Telling, our partner for e-Literate TV) and helping with other ed tech companies. While Ray’s departure from the board did not come as a surprise to me, I have been noting the surprising number of other high-level departures from Blackboard recently.

As of December 24, 2014, Blackboard listed 12 company executives in their About > Leadership page. Of those 12 people, 4 have left the company since early January. Below is the list of the leadership team at that time along with notes on changes:

  • Jay Bhatt, CEO
  • Maurice Heiblum, SVP Higher Education, Corporate And Government Markets (DEPARTED February, new job unlisted)
  • Mark Belles, SVP K-12 (DEPARTED March, now President & COO at Teaching Strategies, LLC)
  • David Marr, SVP Transact
  • Matthew Small, SVP & Managing Director, International
  • Gary Lang, SVP Product Development, Support And Cloud Services (DEPARTED January, now VP B2B Technology, Amazon Supply)
  • Katie Blot, SVP Educational Services (now SVP Corporate Strategy & Business Development)
  • Mark Strassman, SVP Industry and Product Management
  • Bill Davis, CFO
  • Michael Bisignano, SVP General Counsel, Secretary (DEPARTED February, now EVP & General Counsel at CA Technologies)
  • Denise Haselhorst, SVP Human Resources
  • Tracey Stout, SVP Marketing

Beyond the leadership team, there are three others worth highlighting.

  • Brad Koch, VP Product Management (DEPARTED January, now at Instructure)
  • David Ashman, VP Chief Architect, Cloud Architecture (DEPARTED February, now CTO at Teaching Strategies, LLC)
  • Mark Drechsler, Senior Director, Consulting (APAC) (DEPARTED March, now at Flinders University)

I already mentioned Brad’s departure and its significance in this post. Mark is significant in terms of his influence in the Australian market, as he came aboard from the acquisition of NetSpot.

David is significant as he was Chief Architect and had the primary vision for Blackboard’s impending move into the cloud. Michael described this move in his post last July.

Phil and I are still trying to nail down some of the details on this one, particularly since the term “cloud” is used particularly loosely in ed tech. For example, we don’t consider D2L’s virtualization to be a cloud implementation. But from what we can tell so far, it looks like a true elastic, single-instance multi-tenant implementation on top of Amazon Web Services. It’s kind of incredible. And by “kind of incredible,” I mean I have a hard time believing it. Re-engineering a legacy platform to a cloud architecture takes some serious technical mojo, not to mention a lot of pain. If it is true, then the Blackboard technical team has to have been working on this for a long time, laying the groundwork long before Jay and his team arrived. But who cares? If they are able to deliver a true cloud solution while still maintaining managed hosting and self-hosted options, that will be a major technical accomplishment and a significant differentiator.

This seems like the real deal as far as we can tell, but it definitely merits some more investigation and validation. We’ll let you know more as we learn it.

This rollout of new cloud architecture has taken a while, and I believe it is hitting select customers this year. Will David’s departure add risk to this move? I talked to David a few weeks ago, and he said that he was leaving for a great opportunity at Teaching Strategies, and that while he was perhaps the most visible face of the cloud at Blackboard, others behind the scenes are keeping the vision. He does not see added risk. While I appreciate the direct answers David gave me to my questions, I still cannot see how the departure of Gary Lang and David Ashman will not add risk.

So why are so many people leaving? From initial research and questions, the general answer seems to be ‘great opportunity for me professionally or personally, loved working at Blackboard, time to move on’. There is no smoking gun that I can find, and most departures are going to very good jobs.

Jay Bhatt, Blackboard’s CEO, provided the following statement based on my questions.

As part of the natural evolution of business, there have been some transitions that have taken place. A handful of executives have moved onto new roles, motivated by both personal and professional reasons. With these transitions, we have had the opportunity to add some great new executive talent to our company as well. Individuals who bring the experience and expertise we need to truly capture the growth opportunity we have in front of us. This includes Mark Gruzin, our new NAHE/ProEd GTM lead, Peter George, our new head of product development and a new general counsel who will be starting later this month. The amazing feedback we continue to receive from customers and others in the industry reinforces how far we’ve come and that we are on the right path. As Blackboard continues to evolve, our leaders remain dedicated to moving the company forward into the next stage of our transformation.

While Jay’s statement matches what I have heard, I would note the following:

  • The percentage of leadership changes within a 3 month period rises above the level of “natural evolution of business”. Correlation does not imply causation, but neither does it imply a coincidence.
  • The people leaving have a long history in educational technology (Gary Lang being the exception), but I have not seen the same in the reverse direction. Mark Gruzin comes from a background in worldwide sales and the federal software group at IBM. Peter George comes from a background in Identity & Access Management as well as Workforce Management companies. They both seem to be heavy hitters, but not in ed tech. Likewise, Jay himself along with Mark Strassman and Gary Lang had no ed tech experience when they joined Blackboard. This is not necessarily a mistake, as fresh ideas and approaches were needed, but it is worth noting the stark differences between the people leaving and the people coming in.
  • These changes come in the middle of Blackboard making huge bets on a completely new user experience and a move into the cloud. These changes were announced last year, but they have not been completed. This is the most important area to watch – whether Blackboard completes these changes and successfully rolls them out to the market.

We’ll keep watching and update where appropriate.

The post Blackboard Brain Drain: One third of executive team leaves in past 3 months appeared first on e-Literate.

Rutgers and ProctorTrack Fiasco: Impact of listening to regulations but not to students

Mon, 2015-03-16 13:07

By Phil HillMore Posts (302)

If you want to observe the unfolding impact of an institution ignoring the effect of policy decisions on students, watch the situation at Rutgers University. If you want to see the power of a single student saying “enough is enough”, go thank Betsy Chao and sign her petition. The current situation is that students are protesting Rutgers’ use of ProctorTrack software in online courses – software that costs students $32 in additional fees, accesses their personal webcams, automatically tracks face and knuckle video, and watches browser activity. Students seem to be outraged at the lack of concern over student privacy and the additional fees.

Prior to 2015, Rutgers already provided services for online courses to comply with federal regulations to monitor student identity. The rationale cited [emphasis added]:

The 2008 Higher Education Opportunity Act (HEOA) requires institutions with distance education programs to have security mechanisms in place that ensure that the student enrolled in a particular course is in fact the same individual who also participates in course activities, is graded for the course, and receives the academic credit. According to the Department of Education, accrediting agencies must require distance education providers to authenticate students’ identities through secure (Learning Management System) log-ins and passwords, proctored exams, as well as “new identification technologies and practices as they become widely accepted.”

This academic term, Rutgers added a new option – ProctorTrack:

Proctortrack is cost-effective and scalable for any institution size. Through proprietary facial recognition algorithms, the platform automates proctoring by monitoring student behavior and action for test policy compliance. Proctortrack can detect when students leave their space, search online for additional resources, look at hard notes, consult with someone, or are replaced during a test.

This occurred at the same time as the parent company Verificient received a patent for their approach, in January 2015.

A missing piece not covered in the media thus far is that Rutgers leaves the choice of student identity verification approach up to the individual faculty or academic program [emphasis added].

In face-to-face courses, all students’ identities are confirmed by photo ID prior to sitting for each exam and their activities are monitored throughout the exam period. To meet accreditation requirements for online courses, this process must also take place. Rutgers makes available electronic proctoring services for online students across the nation and can assist with on-site proctoring solutions. Student privacy during a proctored exam at a distance is maintained through direct communication and the use of a secure testing service. Students must be informed on the first day of class of any additional costs they may incur for exam proctoring and student authentication solutions.

The method of student authentication used in a course is the choice of the individual instructor and the academic unit offering the course. In addition to technology solutions such as Examity and ProctorTrack, student authentication can also be achieved through traditional on-site exam proctoring solutions. If you have any questions, talk to your course instructor.

As the use of ProctorTrack rolled out this term, at least one student – senior Betsy Chao – was disturbed, and on February 5th created a petition, writing:

However, I recently received emails from both online courses, notifying me of a required “Proctortrack Onboarding” assessment to set up Proctortrack software. Upon reading the instructions, I was bewildered to discover that you had to pay an additional $32 for the software on top of the $100 convenience fee already required of online courses. And I’m told it’s $32 per online class. $32 isn’t exactly a large sum, but it’s certainly not pocket change to me. Especially if I’m taking more than one online class. I’m sure there are many other college students who echo this sentiment. Not only that, but nowhere in either of the syllabi was there any inkling of the use of Proctortrack or the $32 charge. [snip]

Not only that, but on an even more serious note, I certainly thought that the delicate issue of privacy would be more gracefully handled, especially within a school where the use of webcams was directly involved in a student’s death. As a result, I thought Rutgers would be highly sensitive to the issue of privacy.

If accurate, this clearly violates the notification policy of Rutgers highlighted above. Betsy goes on to describe the alarming implications relating to student privacy.

On February 7th, New Brunswick Today picked up on the story.

Seven years ago, Congress passed the Higher Education Opportunity Act of 2008, authorizing the U.S Department of Education to outline numerous recommendations on how institutions should administer online classes.

The law recommended that a systemic approach be developed to ensure that the student taking exams and submitting projects is the same as the student who receives the final grade, and that institutions of higher education employ “secure logins and passwords, or proctored exams to verify a student’s identity.”

Other recommendations include the use of an identity verification process, and the monitoring by institutions of the evolution of identity verification technology.

Under these recommendations by the U.S Department of Education, Rutgers would technically be within its right to implement the use of ProctorTrack, or an alternative form of identity verification technology.

However, the recommendations are by no means requirements, and an institution can decide whether or not to take action.

The student newspaper at Rutgers, The Daily Targum, ran stories on February 9th and February 12th, both highly critical of the new software usage. All of this attention thanks to one student who refused to quietly comply.

The real problem in my opinion can be found in this statement from the New Brunswick Today article.

“The university has put significant effort into protecting the privacy of online students,” said the Rutgers spokesperson. “The 2008 Act requires that verification methods not interfere with student privacy and Rutgers takes this issue very seriously.”

The Rutgers Center for Online and Hybrid Learning and Instructional Technologies (COHLIT) would oversee the implementation and compliance with the usage of ProctorTrack, according to Rutgers spokesperson E.J. Miranda, who insisted it is not mandatory.

“ProctorTrack is one method, but COHLIT offers other options to students, faculty and departments for compliance with the federal requirements, such as Examity and ExamGuard,” said Miranda.

Rutgers has also put up a FAQ page on the subject.

The problem is that Rutgers is paying attention to federal regulations and assuming their solutions are just fine, yet:

  • Rutgers staff clearly spent little or no time asking students for their input on such an important and highly charged subject;
  • Rutgers policy leaves the choice purely up to faculty or academic programs, meaning that there was no coordinated decision-making and communication to students;
  • Now that students are complaining, the Rutgers spokesperson has been getting defensive, implying ‘there’s nothing to see here’ and not taking the student concerns seriously;
  • At no point that I can find has Rutgers acknowledged the problem of a lack of notification and new charges for students, nor have they acknowledged that students are saying that this solution goes too far.

That is why this is a fiasco. Student privacy is a big issue, and students should have some input into the policies shaped by institutions. The February 12th student paper put it quite well in conclusion.

Granted, I understand the University’s concern — if Rutgers is implementing online courses, there need to be accountability measures that prevent students from cheating. However, monitoring and recording our computer activity during online courses is not the solution, and failing to properly inform students of ProctorTrack’s payment fee is only a further blight on a rather terrible product. If Rutgers wants to transition to online courses, then the University needs to hold some inkling of respect for student privacy. Otherwise, undergraduates have absolutely no incentive to sign up for online classes.

If the Rutgers administration wants to defuse this situation, they will need to find a way to talk and listen to students on the subject. Pure and simple.

H/T: Thanks to Audrey Watters and to Jonathan Rees for highlighting this situation.

Update: Bumping comment from Russ Poulin into post itself [emphasis added]:

The last paragraph in the federal regulation regarding academic integrity (602.17) reads:

“(2) Makes clear in writing that institutions must use processes that protect student privacy and notify students of any projected additional student charges associated with the verification of student identity at the time of registration or enrollment.”

The privacy issue is always a tricky one when needing to meet the other requirements of this section. But, it does sound like students were not notified of the additional charges at the time of registration.

The post Rutgers and ProctorTrack Fiasco: Impact of listening to regulations but not to students appeared first on e-Literate.

Slides and Follow-up From Faculty Development Workshop at Aurora University

Fri, 2015-03-13 20:41

By Phil HillMore Posts (302)

Today I facilitated a faculty development workshop at Aurora University, sponsored by the Center for Excellence in Teaching and Learning and the IT Department. I always enjoy sessions like this, particularly with the ability to focus our discussions squarely on technology in support of teaching and learning. The session was titled “Emerging Trends in Educational Technology and Implications for Faculty”. Below are very rough notes, slides, and a follow-up.

Apparent Dilemma and Challenge

Building on previous presentations at ITC Network, I see an apparent dilemma:

  • On one hand, little has changed: Despite all the hype and investment in ed tech, there is only one new fully-established LMS vendor in the past decade (Canvas), and the top uses of the LMS are for course management (rosters, content sharing, grades). Plus, the MOOC movement fizzled out, at least for replacing higher ed programs or courses.
  • On the other hand, everything has changed: There are examples of redesigned courses such as Habitable Worlds at ASU that are showing dramatic results in the depth of learning by students.

The best lens to understand this dilemma is Everett Rogers’ Diffusion of Innovations and the technology adoption curve and its categories. Geoffrey Moore extended this work to call out a chasm between Innovators / Early Adopters on the left side (wanting advanced tech, OK with partial solutions they cobble together, pushing the boundary) and the Majority / Laggards on the right side (wanting a full solution – just make it work, make it reliable, make it intuitive). Whereas Moore described Crossing the Chasm for technology companies (moving from one side to the other), in most cases in education we don’t have that choice. The challenge in education is Straddling the Chasm (a concept I’m developing with a couple of consulting clients as well as observations from e-Literate TV case studies):

Straddling the Chasm

This view can help explain how advances in pedagogy and learning approaches generally fit on the left side and have not diffused into mainstream, whereas advances in simple course management generally fit on the right side and have diffused, although we want more than that. You can also view the left side as faculty wanting to try new tools and faculty on the right just wanting the basics to work.

The market trend away from the walled garden offers education the chance to straddle the chasm.

Implications for Faculty

1) The changes are not fully in place, and it’s going to be a bumpy ride. One example is the difficulty of protecting privacy and ensuring accessibility in tools that are not fully centralized. Plus, the LTI 2.0+ and Caliper interoperability standards and frameworks are still a work in progress.

2) While there are new possibilities to use commercial tools, there are new responsibilities, as the left side of the chasm and non-standard apps require faculty and local support (department, division) to pick up the support challenges.

3) There is a challenge in balancing innovation with the student need for consistency across courses, mostly in terms of navigation and course administration.

4) While there are new opportunities for student-faculty and student-student engagement, there are new demands on faculty to change their role and to be available on the students’ schedule.

5) Sometimes, simple is best. It amazes me how often the simple act of moving lecture or content delivery online is trivialized. What is enabled here is the ability for students to work at their own pace and replay certain segments without shame or fear of holding up their peers (or even jumping ahead and accelerating).


Emerging Trends in Educational Technology and Implications for Faculty from Phil Hill

Follow-Up

One item discussed in the workshop was how to take advantage of this approach in Aurora’s LMS, Moodle. While Moodle has always supported the open approach and has supported LTI standards, I neglected to mention a missing element. Commercial apps such as Twitter, Google+, etc, do not natively follow LTI standards, which are education-specific. The EduAppCenter was created to help with this challenge by creating a library of apps and wrappers around apps that are LTI-compliant.
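For readers curious what an LTI “wrapper” actually does under the hood: an LTI 1.x launch is an OAuth 1.0a-signed form POST from the LMS to the tool. The sketch below computes that signature with only the standard library; the URL, key, and secret are hypothetical, and real integrations should use a maintained OAuth/LTI library with a random per-request nonce.

```python
import base64, hashlib, hmac, time, urllib.parse

def sign_lti_launch(launch_url, consumer_key, shared_secret, params):
    """Sign an LTI 1.x launch (OAuth 1.0a HMAC-SHA1); returns the full POST body."""
    oauth = {
        "oauth_consumer_key": consumer_key,
        "oauth_signature_method": "HMAC-SHA1",
        "oauth_timestamp": str(int(time.time())),
        "oauth_nonce": "demo-nonce",  # must be random per request in practice
        "oauth_version": "1.0",
    }
    all_params = {**params, **oauth}
    # Signature base string: METHOD & encoded-URL & encoded sorted k=v pairs
    pairs = "&".join(
        f"{urllib.parse.quote(k, safe='')}={urllib.parse.quote(v, safe='')}"
        for k, v in sorted(all_params.items())
    )
    base = "&".join([
        "POST",
        urllib.parse.quote(launch_url, safe=""),
        urllib.parse.quote(pairs, safe=""),
    ])
    # Signing key is the consumer secret plus "&" (LTI has no token secret)
    key = urllib.parse.quote(shared_secret, safe="") + "&"
    sig = base64.b64encode(
        hmac.new(key.encode(), base.encode(), hashlib.sha1).digest()
    ).decode()
    all_params["oauth_signature"] = sig
    return all_params

body = sign_lti_launch(
    "https://tool.example.com/launch",
    "demo-key",
    "demo-secret",
    {
        "lti_message_type": "basic-lti-launch-request",
        "lti_version": "LTI-1p0",
        "resource_link_id": "course-101-widget",
        "user_id": "student-42",
        "roles": "Learner",
    },
)
print(body["oauth_signature_method"])
```

The wrapper libraries in EduAppCenter essentially hide this handshake (plus roles, context, and outcomes plumbing) so a tool like Twitter or Google+ can be dropped into an LTI-compliant LMS.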

The post Slides and Follow-up From Faculty Development Workshop at Aurora University appeared first on e-Literate.

Brian Whitmer No Longer in Operational Role at Instructure

Wed, 2015-03-11 09:17

By Phil HillMore Posts (302)

Just over a year and a half ago, Devlin Daley left Instructure, the company he co-founded. It turns out that both founders have now moved on: Brian Whitmer, the other company co-founder, left his operational role in 2014, though he remains on the board of directors. For some context from the 2013 post:

Instructure was founded in 2008 by Brian Whitmer and Devlin Daley. At the time Brian and Devlin were graduate students at BYU who had just taken a class taught by Josh Coates, where their assignment was to come up with a product and business model to address a specific challenge. Brian and Devlin chose the LMS market based on the poor designs and older architectures dominating the market. This design led to the founding of Instructure, with Josh eventually providing seed funding and becoming CEO by 2010.

Brian had a lead role until last year for Instructure’s usability design and for its open architecture and support for LTI standards.

The reason for Brian’s departure (based on both Brian’s comments and statements from Instructure) is his family. Brian’s daughter has Rett Syndrome:

Rett syndrome is a rare non-inherited genetic postnatal neurological disorder that occurs almost exclusively in girls and leads to severe impairments, affecting nearly every aspect of the child’s life: their ability to speak, walk, eat, and even breathe easily.

As Instructure grew, Devlin became the road show guy while Brian stayed mostly at home, largely due to family. Brian’s personal experiences have led him to create a new company: CoughDrop.

Some people are hard to hear — through no fault of their own. Disabilities like autism, cerebral palsy, Down syndrome, Angelman syndrome and Rett syndrome make it harder for many individuals to communicate on their own. Many people use Augmentative and Alternative Communication (AAC) tools in order to help make their voices heard.

We work to help bring out the voices of those with complex communication needs through good tech that actually makes things easier and supports everyone in helping the individual succeed.

This work sounds a lot like early Instructure, as Brian related to me this week.

Augmentative Communication is a lot like LMS space was, in need of a reminder of how things can be better.

By the middle of 2014, Brian had left all operational duties, although he remains on the board and plans to continue there as an adviser.

How will this affect Instructure? I would look at Brian’s key roles in usability and open platform to see if Instructure keeps up his vision. From my view the usability is just baked into the company’s DNA[1] and will likely not suffer. The question is more on the open side. Brian led the initiative for the App Center as I described in 2013:

The key idea is that the platform is built to easily add and support multiple applications. The apps themselves will come from EduAppCenter, a website that launched this past week. There are already more than 100 apps available, with the apps built on top of the Learning Tools Interoperability (LTI) specification from IMS global learning consortium. There are educational apps available (e.g. Khan Academy, CourseSmart, Piazza, the big publishers, Merlot) as well as general-purpose tools (e.g. YouTube, Dropbox, WordPress, Wikipedia).

The apps themselves are wrappers that pre-integrate and give structured access to each of these tools. Since LTI is the most far-reaching ed tech specification, most of the apps should work on other LMS systems. The concept is that other LMS vendors will also sign on to the edu-apps site, truly making them interoperable. Whether that happens in reality remains to be seen.

What the App Center will bring once it is released is the simple ability for Canvas end users to add the apps themselves. If a faculty member adds an app, it will be available for their courses, independent of whether any other faculty use that setup. The same applies for students, who might, for example, prefer to use Dropbox to organize and share files rather than native LMS capabilities.
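Under the hood, each such app launch is an LTI 1.1 basic launch: the LMS signs a form POST with a shared OAuth 1.0a key and secret so the external tool can trust the user and course context it is handed. Here is a minimal sketch of the signing step; the URL, key, secret, and IDs are all hypothetical:

```python
import base64
import hashlib
import hmac
import time
import uuid
from urllib.parse import quote


def sign_lti_launch(url, params, consumer_key, consumer_secret):
    """Attach an OAuth 1.0a HMAC-SHA1 signature to LTI 1.1 launch params."""
    oauth = {
        "oauth_consumer_key": consumer_key,
        "oauth_nonce": uuid.uuid4().hex,
        "oauth_signature_method": "HMAC-SHA1",
        "oauth_timestamp": str(int(time.time())),
        "oauth_version": "1.0",
    }
    all_params = {**params, **oauth}
    # Base string: HTTP method, URL, and the sorted percent-encoded parameters
    pairs = sorted((quote(k, safe=""), quote(v, safe="")) for k, v in all_params.items())
    param_str = "&".join(f"{k}={v}" for k, v in pairs)
    base = "&".join(["POST", quote(url, safe=""), quote(param_str, safe="")])
    key = quote(consumer_secret, safe="") + "&"  # LTI launches have no token secret
    digest = hmac.new(key.encode(), base.encode(), hashlib.sha1).digest()
    all_params["oauth_signature"] = base64.b64encode(digest).decode()
    return all_params


launch = sign_lti_launch(
    "https://tool.example.com/launch",
    {"lti_message_type": "basic-lti-launch-request",
     "lti_version": "LTI-1p0",
     "resource_link_id": "course-42-unit-3",
     "user_id": "student-7",
     "roles": "Learner"},
    consumer_key="my-key",
    consumer_secret="my-secret",
)
```

A real tool provider would recompute the signature server-side and reject mismatches; in practice an OAuth library handles the encoding details.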

The actual adoption by faculty and institutions of this capability takes far longer than people writing about it (myself included) would desire. It takes time and persistence to keep up the faith. The biggest risk that Instructure faces by losing Brian’s operational role is whether they will keep this vision and maintain their support for open standards and third-party apps – opening up the walled garden, in other words.

Melissa Loble, Senior Director of Partners & Programs at Instructure[2], will play a key role in keeping this open vision alive. I have not heard anything indicating that Instructure is changing, but this is a risk from losing a founder who internally ‘owned’ this vision.

I plan to share some other HR news from the ed tech market in future posts, but for now I wish Brian the best with his new venture – he is one of the truly good guys in ed tech.

Update: I should have given credit to Audrey Watters, who prompted me to get a clear answer on this subject.

  1. Much to Brian’s credit
  2. Formerly Associate Dean of Distance Ed at UC Irvine and key player in Walking Dead MOOC

The post Brian Whitmer No Longer in Operational Role at Instructure appeared first on e-Literate.

Dana Center and New Mathways Project: Taking curriculum innovations to scale

Tue, 2015-03-10 15:01

By Phil HillMore Posts (301)

Last week the University of Texas’ Dana Center announced a new initiative to digitize their print-based math curriculum and expand to all 50 community colleges in Texas. The New Mathways Project is ‘built around three mathematics pathways and a supporting student success course’, and they have already developed curriculum in print:

Tinkering with the traditional sequence of math courses has long been a controversial idea in academic circles, with proponents of algebra saying it teaches valuable reasoning skills. But many two-year college students are adults seeking a credential that will improve their job prospects. “The idea that they should be broadly prepared isn’t as compelling as organizing programs that help them get a first [better-paying] job, with an eye on their second and third,” says Uri Treisman, executive director of the Charles A. Dana Center at UT Austin, which spearheads the New Mathways Project. [snip]

Treisman’s team has worked with community-college faculty to create three alternatives to the traditional math sequence. The first two pathways, which are meant for humanities majors, lead to a college-level class in statistics or quantitative reasoning. The third, which is still in development, will be meant for science, technology, engineering, and math majors, and will focus more on algebra. All three pathways are meant for students who would typically place into elementary algebra, just one level below intermediate algebra.

At the outset, the problem was viewed as ‘fixing developmental math’. As they got into the design, the team restated the problem to be solved as ‘developing coherent pathways through gateway courses into modern degrees of study that lead to economic mobility’. The Dana Center worked with the Texas Association of Community Colleges to develop the curriculum, which is focused on active learning and group work that can be tied to the real world.

The Dana Center approach is based on four principles:

  • Courses students take in college math should be connected to their field of study.
  • The curriculum should accelerate or compress to allow students to move through developmental courses in one year.
  • Courses should align more closely with student support, with sophisticated learning support connected to campus support structures.
  • Materials should be connected to a context-sensitive improvement strategy.

What they have found is that there are multiple programs nationwide working along roughly the same principles, including the California improvement project, the Accelerated Learning Project at Baltimore City College, and work in Tennessee at Austin Peay College. In their view, the fact that independent groups have come to similar conclusions adds validity to the overall concept.

One interesting aspect of the project is that it is targeted at an entire state’s community college system – this is not a pilot approach. After winning a Request for Proposal selection, Pearson[1] will integrate the active-learning content into a customized mix of MyMathLab, Learning Catalytics, StatCrunch and CourseConnect tools. Given the Dana Center’s small size, one differentiator for Pearson was its size and ability to help a program move to scale.

Another interesting aspect is the partnership approach with TACC. As shared on the web site:

  • A commitment to reform: The TACC colleges have agreed to provide seed money for the project over 10 years, demonstrating a long-term commitment to the project.
  • Input from the field: TACC member institutions will serve as codevelopers, working with the Dana Center to develop the NMP course materials, tools, and services. They will also serve as implementation sites. This collaboration with practitioners in the field is critical to building a program informed by the people who will actually use it.
  • Alignment of state and institutional policies: Through its role as an advocate for community colleges, TACC can connect state and local leaders to develop policies to support the NMP goal of accelerated progress to and through coursework to attain a degree.

MDRC, the same group analyzing CUNY’s ASAP program, will provide independent reporting of the results. There should be implementation data available by the end of the year, with randomized controlled studies to be released in 2016.

To me, this is a very interesting initiative to watch. Given MDRC’s history of thorough documentation, we should be able to learn plenty of lessons from the state-wide deployment.

  1. Disclosure: Pearson is a client of MindWires Consulting.

The post Dana Center and New Mathways Project: Taking curriculum innovations to scale appeared first on e-Literate.

Blueprint for a Post-LMS, Part 5

Sun, 2015-03-08 17:38

By Michael FeldsteinMore Posts (1021)

In parts 1, 2, 3, and 4 of this series, I laid out a model for a learning platform that is designed to support discussion-centric courses. I emphasized how learning design and platform design have to co-evolve, which means, in part, that a new platform isn’t going to change much if it is not accompanied by pedagogy that fits well with the strengths and limitations of the platform. I also argued that we won’t see widespread changes in pedagogy until we can change faculty relationships with pedagogy (and course ownership), and I proposed a combination of platform, course design, and professional development that might begin to chip away at that problem. All of these ideas are based heavily on lessons learned from social software and from cMOOCs.

In this final post in the series, I’m going to give a few examples of how this model could be extended to other assessment types and related pedagogical approaches, and then I’ll finish up by talking about what it would take to make the peer grading system described in part 2 be (potentially) accepted by students as at least a component of a grading system in a for-credit class.

Competency-Based Education

I started out the series talking about Habitable Worlds, a course out of ASU that I’ve written about before and that we feature in the forthcoming e-Literate TV series on personalized learning. It’s an interesting hybrid design. It has strong elements of competency-based education (CBE) and mastery learning, but the core of it is problem-based learning (PBL). The competency elements are really just building blocks that students need in the service of solving the big problem of the course. Here’s course co-designer and teacher Ariel Anbar talking about the motivation behind the course:

Click here to view the embedded video.

It’s clear that the students are focused on the overarching problem rather than the competencies:

Click here to view the embedded video.

And, as I pointed out in the first post in the series, they end up using the discussion board for the course very much like professionals might use a work-related online community of practice to help them work through their problems when they get stuck:

Click here to view the embedded video.

This is exactly the kind of behavior that we want to see and that the analytics I proposed in part 3 are designed to measure. You could attach a grade to the students’ online discussion behaviors. But it’s really superfluous. Students get their grade from solving the problem of the course. That said, it would be helpful to the students if productive behaviors were highlighted by the system in order to make them easier to learn. And by “learn,” I don’t mean “here are the 11 discussion competencies that you need to display.” I mean, rather, that there are different patterns of productive behavior in a high-functioning group. It would be good for students to see not only the atomic behaviors but different patterns and even how different patterns complement each other within a group. Furthermore, I could imagine that some employers might be interested in knowing the collaboration style that a potential employee would bring to the mix. This would be a good fit for badges.

Notice that, in this model, badges, competencies, and course grades serve distinct purposes. They are not interchangeable. Competencies and badges are closer to each other than either is to a grade. They both indicate that the student has mastered some skill or knowledge that is necessary to the central problem. But they are different from each other in ways that I haven’t entirely teased out in my own head yet. And they are not sufficient for a good course grade. To get that, the student must integrate and apply them toward generating a novel solution to a complex problem.

The one aspect of Habitable Worlds that might not fit with the model I’ve outlined in this series is the degree to which it has a mandatory sequence. I don’t know the course well enough to have a clear sense, but I suspect that the lessons are pretty tightly scripted, due in part to the fact that the overarching structure of the course is based on an equation. You can’t really drop out one of the variables or change the order willy-nilly in an equation. There’s nothing wrong with that in and of itself, but in order to take full advantage of the system I’ve proposed here, the course design must have a certain amount of play in it for faculty teaching their individual classes to contribute additions and modifications back. It’s possible to use the discussion analytics elements without the social learning design elements, but then you don’t get the potential the system offers for faculty buy-in “lift.”

Adding Assignment Types

I’ve written this entire series talking about “discussion-based courses” as if that were a thing, but it’s vastly more common to have discussion and writing courses. One interesting consequence of the work that we did abstracting out the Discourse trust levels is that we created a basic (and somewhat unconventional) generalized peer review system in the process. As long as conversation is the metric, we can measure the conversational activity generated by any student-created artifact. For example, we could create a facility in OAE for students to claim the RSS feeds from their blogs. Remember, any integration represents a potential opportunity to make additional inferences. Once a post is syndicated into the system and associated with the student, it can generate a Discourse thread just like any other document. That discussion can be included in the assessment like any discussion that starts natively in the forum. With a little more work, you could have students apply direct ratings such as “likes” to the documents themselves. Making the assessment work for these different types isn’t quite as straightforward as I’m making it sound, either from a user experience design perspective or from a technology perspective. But the foundation is there to build on.
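The “conversation as the metric” idea can be sketched as a small index that seeds one thread per syndicated artifact and scores the discussion it attracts. This is an illustrative model only, with invented names; Discourse tracks these signals in its own way:

```python
from dataclasses import dataclass, field


@dataclass
class Thread:
    author: str                                   # student whose artifact seeded the thread
    title: str
    replies: list = field(default_factory=list)   # (responder, liked) pairs


class PeerReviewIndex:
    """Seed one discussion thread per student artifact, then score the
    conversation each artifact generates."""

    def __init__(self):
        self.threads = {}

    def syndicate(self, student, post_id, title):
        # called for each new item pulled from the student's claimed RSS feed
        self.threads[post_id] = Thread(student, title)

    def reply(self, post_id, responder, liked=False):
        self.threads[post_id].replies.append((responder, liked))

    def conversation_score(self, student):
        # distinct responders plus likes received, across the student's artifacts
        score = 0
        for t in self.threads.values():
            if t.author == student:
                score += len({r for r, _ in t.replies})
                score += sum(1 for _, liked in t.replies if liked)
        return score


idx = PeerReviewIndex()
idx.syndicate("ana", "post-1", "Week 2 reflections")
idx.reply("post-1", "ben")
idx.reply("post-1", "carla", liked=True)
print(idx.conversation_score("ana"))  # 3: two distinct responders + one like
```

The same index would work unchanged whether the artifact is a blog post, a shared document, or a Wikipedia draft — only the `syndicate` trigger differs.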

One of the commenters on part 1 of the series provided another interesting use case:

I’m the product manager for Wiki Education Foundation, a nonprofit that helps professors run Wikipedia assignments, in which the students write Wikipedia articles in place of traditional term papers. We’re building a system for managing these assignments, from building a week-by-week assignment plan that follows best practices, to keeping track of student activity on Wikipedia, to pulling in view data for the articles students work on, to finding automated ways of helping students work through or avoid the typical stumbling blocks for new Wikipedia editors.

Wikipedia is its own rich medium for conversation and interaction. I could imagine taking that abstracted peer review system and just hooking it up directly to student activity within Wikipedia itself. Once we start down this path, we really need to start talking about IMS Caliper and federated analytics. This has been a real bottom-up analysis, but we quickly reach the point where we want to start abstracting out the particular systems or even system types, and start looking at a general architecture for sharing learning data (safely). I’m not going to elaborate on it here—even I have to stop at some point—but again, if you made it this far, you might find it useful to go back and reread my original post on the IMS Caliper draft standard and the comments I made on its federated nature in my most recent walled garden post. Much of what I have proposed here from an architectural perspective is designed specifically with a Caliper implementation in mind.

Formal Grading

I suppose my favorite model so far for incorporating the discussion trust system into a graded, for-credit class is the model I described above where the analytics act as more of a coach to help students learn productive discussion behavior, while the class grade actually comes from their solution to the central problem, project, or riddle of the course. But if we wanted to integrate the trust analytics as part of the formal grading system, we’d have to get over the “Wikipedia objection,” meaning the belief that somehow vetting by a single expert more reliably generates accurate results than crowdsourcing does. Some students will want grades from their teachers and will tend to think that the trust levels are bogus as a grade. (Some teachers will agree.) To address their concerns, we need three things. First, we need objectivity, by which I mean that the scoring criteria themselves are being applied the same to everyone. “Objectivity” is often about as real in student evaluation as it is in journalism (which is to say, it isn’t), but people do want some sense of fairness, which is probably a better goal. Clear ratings criteria applied to everyone equally give some sense of fairness. Second, the trust scores themselves must be transparent, by which I mean that students should be able to see how they earned their trust scores. They should also be able to see various paths to improving their scores. And finally, there should be auditability, by which I mean that, in the event that a student is given a score by her peers that her teacher genuinely disagrees with (e.g., a group ganging up to give one student thumbs-downs, or a lot of conversation being generated around something that is essentially not helpful to the problem-solving effort), there is an ability for the faculty member to override that score.
This last piece can be a rabbit hole, both in terms of user interface design and in terms of eroding the very sense you’re trying to build of a trust network, but it is probably necessary to get buy-in. The best thing to do is to pilot the trust system (and the course design that is supposed to inspire ranking-worthy conversation) and refine it to the point where it inspires a high degree of confidence before you start using it for formal grading.
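Those three requirements — uniform criteria, transparency, and auditability — could be sketched as a score object that remembers every earning event and any instructor override. All names and point values here are invented for illustration:

```python
class AuditableScore:
    """A trust score that records how it was earned (transparency) and
    lets an instructor override it with a stated reason (auditability)."""

    def __init__(self):
        self.events = []       # (source, description, points)
        self.override = None   # (instructor, reason, value)

    def earn(self, source, description, points):
        self.events.append((source, description, points))

    def instructor_override(self, instructor, reason, value):
        self.override = (instructor, reason, value)

    @property
    def value(self):
        if self.override:
            return self.override[2]
        return sum(p for _, _, p in self.events)

    def explain(self):
        # the audit trail a student (or teacher) would see
        lines = [f"{src}: {desc} (+{p})" for src, desc, p in self.events]
        if self.override:
            who, why, val = self.override
            lines.append(f"overridden to {val} by {who}: {why}")
        return lines


score = AuditableScore()
score.earn("peers", "reply marked helpful", 2)
score.earn("peers", "reply marked helpful", 2)
print(score.value)  # 4
score.instructor_override("prof_k", "group ganged up on thumbs-downs", 6)
print(score.value)  # 6
```

Because every event and every override carries a source and a reason, the same record supports both the student-facing “how do I improve?” view and the faculty-facing audit view.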

That’s All

No, really. Even I run out of gas. Eventually.

For a while.

The post Blueprint for a Post-LMS, Part 5 appeared first on e-Literate.

Blueprint for a post-LMS, Part 4

Sat, 2015-03-07 18:17

By Michael FeldsteinMore Posts (1021)

In part 1 of this series, I talked about some design goals for a conversation-based learning platform, including lowering the barriers and raising the incentives for faculty to share course designs and experiment with pedagogies that are well suited for conversation-based courses. Part 2 described a use case of a multi-school faculty professional development course which would give faculty an opportunity to try out these affordances in a low-stakes environment. In part 3, I discussed some analytics capabilities that could be added to a discussion forum—I used the open source Discourse as the example—which would lead to richer and more organic assessments in conversation-based courses. But we haven’t really gotten to the hard part yet. The hard part is encouraging experimentation and cross-fertilization among faculty. The problem is that faculty are mostly not trained, not compensated, and otherwise not rewarded for their teaching excellence. Becoming a better teacher requires time, effort, and thought, just as becoming a better scholar does. But even faculty at many so-called “teaching schools” are given precious little in the way of time or resources to practice their craft properly, never mind improving it.

The main solution to this problem that the market has offered so far is “courseware,” which you can think of as a kind of course-in-a-box. In other words, it’s an attempt to move as much of the “course” as possible into the “ware”, or the product. The learning design, the readings, the slides, and the assessments are all created by the product maker. Increasingly, the students are even graded by the product.


This approach as popularly implemented in the market has a number of significant and fairly obvious shortcomings, but the one I want to focus on for this post is that these packages are still going to be used by faculty whose main experience is the lecture/test paradigm.[1] This means that, whatever the courseware learning design originally was, it will tend to be crammed into a lecture/test paradigm. In the worst case, the result is that we get neither the benefit of engaged, experienced faculty who feel ownership of the course nor the benefit of an advanced learning design, because the faculty member has not learned how to implement it.

One of the reasons that this works from a commercial perspective is that it relies on the secret shame that many faculty members feel. Professors were never taught to teach, nor are they generally given the time, money, and opportunities necessary to learn and improve, but somehow they have been made to feel that they should already know how. To admit otherwise is to admit one’s incompetence. Courseware enables faculty to keep their “shame” secret by letting the publishers do the driving. What happens in the classroom stays in the classroom. In a weird way, the other side of the shame coin is “ownership.” Most faculty are certainly smart enough to know that neither they nor anybody else is going to get rich off their lecture notes. Rather, the driver of “ownership” is fear of having the thing I know how to do in my classroom taken away from me as “mine” (and maybe exposing the fact that I’m not very good at this teaching thing in the process). So many instructors hold onto the privacy of their classrooms and the “ownership” of their course materials for dear life.

Obviously, if we really want to solve this problem at its root, we have to change faculty compensation and training. Failing that, the next best thing is to try to lower the barriers and increase the rewards for sharing. This is hard to do, but there are lessons we can learn from social media. In this post, I’m going to try to show how learning design and platform design in a faculty professional development course might come together toward this end.

You may recall from part 2 of this series that the use case I have chosen is a faculty professional development “course,” using our forthcoming e-Literate TV series about personalized learning as a concrete example. The specific content isn’t that important except to make the thought experiment a little more concrete. The salient details are as follows:

  1. The course is low-stakes; nobody is going to get mad if our grading scheme is a little off. To the contrary, because it’s a group of faculty engaged in professional development about working with technology-enabled pedagogy, the participants will hopefully bring a sense of curiosity to the endeavor.
  2. The course has one central, course-long problem or question: What, if anything, do we (as individual faculty, as a campus, and as a broader community of teachers) want to do with so-called “personalized learning” tools and approaches? Again, the specific question doesn’t matter so much as the fact that there is an overarching question where the answer is going to be specific to the people involved rather than objective and canned. That said, the fact that the course is generally about technology-enabled pedagogy does some work for us.
  3. Multiple schools or campuses will participate in the course simultaneously (though not in lock-step, as I will discuss in more detail later in this post). Each campus cohort will have a local facilitator who will lead some local discussions and customize the course design for local needs. That said, participants will also be able (and encouraged) to have discussions across campus cohorts.
  4. The overarching question naturally lends itself to discussion among different subgroups of the larger inter-campus group, e.g., teachers of the same discipline, people on the same campus, among peer schools, etc.

That last one is critical. There are natural reasons for participants to want to discuss different aspects of the overarching question of the course with different peer groups within the course. Our goal in both course and platform design is to make those discussions as easy and immediately rewarding as possible. We are also going to take advantage of the electronic medium to blur the distinction between contributing a comment, or discussion “post,” and longer contributions such as documents or even course designs.

We’ll need a component for sharing and customizing the course materials, or “design” and “curriculum,” for the local cohorts. Again, I will choose a specific piece of software in order to make the thought experiment more concrete, but as with Discourse in part 3 of this series, my choice of example is in no way intended to suggest that it is the only or best implementation. In this case, I’m going to use the open source Apereo OAE for this component in the thought experiment.

When multiple people teach their own courses using the same existing curricular materials (like a textbook, for example), there is almost always a lot of customization that goes on at the local level. Professor A skips chapters 2 and 3. Professor B uses her own homework assignments instead of the end-of-chapter problems. Professor C adds in special readings for chapter 7. And so on. With paper-based books, we really have no way of knowing what gets used and reused, what gets customized (and how it gets customized), and what gets thrown out. Recent digital platforms, particularly from the textbook publishers, are moving in the direction of being able to track those things. But academia hasn’t really internalized this notion that courses are more often customized than built from scratch, never mind the idea that their customizations could (and should) be shared for the sake of collective improvement. What we want is a platform that makes the potential for this virtuous cycle visible and easy to take advantage of without forcing participants to sacrifice any local control (including the control to take part or all of their local course private if that’s what they want to do).

OAE allows a user to create content that can be published into groups. But published doesn’t mean copied. It means linked. We could have the canonical copy of the ETV personalized learning MOOC (for example), which includes all the episodes from all the case studies plus any supplemental materials we think are useful. The educational technology director at Some State University (SSU) could create a group space for faculty and other stakeholders from her campus. She could choose to pull some, but not all, of the materials from the canonical course into her space. She could rearrange the order. You may recall from part 3 that Discourse can integrate with WordPress, spawning a discussion for every new blog post. We could easily imagine the same kind of integration with OAE. Since anything the campus facilitator pulls from the canonical course copy will be surfaced in her course space rather than copied into it, we would still have analytics on use of the curricular materials across the cohorts, and any discussions in Discourse that are related to the original content items would maintain their linkage (including the ability to automatically publish the “best” comments from the thread back into SSU’s course space). The facilitator could also add her own content, make her space private (from the default of public), and spawn private cohort-specific conversations. In other words, she could make it her own course.

I slipped the first bit of magic into that last sentence. Did you catch it? When the campus facilitator creates a new document, the system can automatically spawn a new discussion thread in Discourse. By default, new documents from the local cohort become available for discussion to all cohorts. And with any luck, some of that discussion will be interesting and rewarding to the person creating the document. The cheap thrill of any successful social media platform is having the (ideally instant) gratification of seeing somebody respond positively to something you say or do. That’s the feeling we’re trying to create. Furthermore, because of the way OAE shares documents across groups, if the facilitator in another cohort were to pull your document into her course design, it wouldn’t have to be invisible to you the way creating a copy is. We could create instant and continuously updated feedback on the impact of your sharing. Some documents (and discussions) in some cohorts might need to be private, and OAE supports that, but the goal is to make private, cohort- (or class-)internal sharing feel something like direct messaging feels on Twitter. There is a place for it, but it’s not what makes the experience rewarding.
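The publish-by-reference and auto-spawned-discussion behaviors described in the last two paragraphs can be sketched with a shared “hub” that cohort spaces link into rather than copy from. All names here are hypothetical; OAE’s actual data model differs:

```python
class Hub:
    """Shared backplane: canonical documents, cross-cohort view counts,
    and one auto-spawned discussion thread per document."""

    def __init__(self):
        self.docs, self.views, self.threads = {}, {}, {}

    def publish(self, doc_id, content):
        self.docs[doc_id] = content
        self.views[doc_id] = {}
        self.threads[doc_id] = []   # the "magic": every document gets a thread


class CohortSpace:
    """Holds references to hub documents, never copies, so local
    rearrangement still feeds one shared usage record."""

    def __init__(self, name, hub):
        self.name, self.hub = name, hub
        self.sequence = []          # locally ordered doc_ids

    def create_doc(self, doc_id, content):
        self.hub.publish(doc_id, content)   # visible for discussion to all cohorts
        self.sequence.append(doc_id)

    def pull(self, doc_id):                 # reuse another cohort's document
        self.sequence.append(doc_id)

    def view(self, doc_id):
        counts = self.hub.views[doc_id]
        counts[self.name] = counts.get(self.name, 0) + 1
        return self.hub.docs[doc_id]


hub = Hub()
ssu, msu = CohortSpace("SSU", hub), CohortSpace("MSU", hub)
ssu.create_doc("ep1-notes", "Local framing questions for episode 1")
msu.pull("ep1-notes")                       # linked, not copied
ssu.view("ep1-notes"); msu.view("ep1-notes"); msu.view("ep1-notes")
print(hub.views["ep1-notes"])               # {'SSU': 1, 'MSU': 2}
```

Because the document lives in the hub, the author can see its reuse in other cohorts, and analytics aggregate across every local rearrangement.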

To that end, we could even feed sharing behavior from OAE into the trust analytics I described in part 3 of this post series. One of the benefits of abstracting the trust levels from Discourse into an external system that has open APIs is that it can take inputs from different systems. It would be possible, for example, to make having your document shared into another cohort on OAE or having a lot of conversation generated from your document count toward your trust level. I don’t love the term “gamification,” but I do love the underlying idea that a well-designed system should make desired behaviors feel good. That’s also a good principle for course design.
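An externalized trust service with open inputs might look like the following sketch; the signal names, weights, and level thresholds are all invented for illustration:

```python
class TrustService:
    """Aggregate weighted signals from any integrated system (Discourse,
    OAE, or anything else with an API) into one per-user trust level."""

    WEIGHTS = {
        "discourse.reply_liked": 2,
        "discourse.topic_created": 1,
        "oae.doc_shared_into_other_cohort": 5,
        "oae.doc_generated_discussion": 3,
    }
    LEVELS = [(0, "new"), (10, "member"), (25, "regular"), (50, "leader")]

    def __init__(self):
        self.points = {}

    def record(self, user, signal, n=1):
        self.points[user] = self.points.get(user, 0) + self.WEIGHTS[signal] * n

    def level(self, user):
        pts = self.points.get(user, 0)
        name = "new"
        for threshold, label in self.LEVELS:
            if pts >= threshold:
                name = label
        return name


trust = TrustService()
trust.record("ana", "discourse.reply_liked", n=3)        # 6 points
trust.record("ana", "oae.doc_shared_into_other_cohort")  # 11 points total
print(trust.level("ana"))  # member
```

The point is architectural rather than numerical: because inputs are just named signals, adding a new source system means adding a weight, not rebuilding the analytics.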

I’m going to take a little detour into some learning design elements here, because they are critical success factors for the platform experience. First, the Problem-based Learning (PBL)-like design of the course is what makes it possible for individual cohorts to proceed at their own pace, in their own order, and with their own shortcuts or added excursions and still enable rich and productive discussions across cohorts. A course design that requires that units be released to the participants one week at a time will not work, because discussions will get out of sync as different cohorts proceed differently, and synchronization matters to the course design. If, on the other hand, synchronization across cohorts doesn’t matter because participants are going to the discussion authentically as needed to work out problems (the way they do all the time in online communities but much less often in typical online courses), then discussions will naturally wax and wane with participant needs and there will be no need to orchestrate them. Second, the design is friendly to participation through local cohorts but doesn’t require it. If you want to participate in the course as a “free agent” and have a more traditional MOOC-like experience, you could simply work off the canonical copy of the course materials and follow the links to the discussions.

End of detour. There’s one more technology piece I’d like to add to finish off the platform design for our use case. Suppose that all the participants could log into the system with their university credentials through an identity management scheme like InCommon. This may seem like a trivial implementation detail that’s important mainly for participant convenience, but it actually adds the next little bit of magic to the design. In part 3, I commented that integrating the discussion forum with a content source enables us to make new inferences because we now know that a discussion is “about” the linked content in some sense, and because content creators often have stronger motivations than discussion participants to add metadata like tags or learning objectives that tell us more about the semantics. One general principle that is always worth keeping in mind when designing learning technologies these days is that any integration presents a potential opportunity for new inferences. In the case of single sign-on, we can go to a data source like IPEDS to learn a lot about the participants’ home institutions and therefore about their potential affinities. Affinities are the fuel that provides any social platform with its power. In our use case, participants might be particularly interested in seeing comments from their peer institutions. If we know where they are coming from, then we can do that automatically rather than forcing them to enter information or find each other manually. In a course environment, faculty might want to prioritize the trust signals from students at similar institutions over those from very different institutions. We could even generate separate conversation threads based on these cohorts. Alternatively, people might want to find people with high trust levels who are geographically near them in order to form meetups or study groups.
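The affinity idea could be sketched as a lookup against IPEDS-style institution attributes; the records, attribute scheme, and names below are made up:

```python
# Hypothetical IPEDS-style records: sector and size band per institution
INSTITUTIONS = {
    "SSU": {"sector": "public-4yr", "size": "large"},
    "MSU": {"sector": "public-4yr", "size": "large"},
    "Elm": {"sector": "private-2yr", "size": "small"},
}


def peer_institutions(home, institutions=INSTITUTIONS):
    """Institutions sharing sector and size band with the user's home campus."""
    profile = institutions[home]
    return sorted(name for name, attrs in institutions.items()
                  if name != home and attrs == profile)


def prioritize_comments(comments, home):
    """Float comments from peer-institution participants to the top."""
    peers = set(peer_institutions(home))
    return sorted(comments, key=lambda c: c["institution"] not in peers)


comments = [
    {"author": "x", "institution": "Elm"},
    {"author": "y", "institution": "MSU"},
]
print(prioritize_comments(comments, "SSU"))  # the MSU (peer) comment comes first
```

Since the institution is known from the single sign-on, this grouping costs the participant nothing: no profile form, no manual search for peers.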

And that’s it, really. The platform consists of a discussion board, a content system, and a federated identity management system that have been integrated in particular ways and used in concert with particular course design elements. There is nothing especially new about either the technology or the pedagogy. The main innovation here, to the degree that there is one, is combining them in a way that creates the right incentives for the participants. When I take a step back and really look at it, it seems too simple and too much like other things I’ve seen and too little like other things I’ve seen and too demanding of participants to possibly work. Then again, I said the same thing about blogs, Facebook, Twitter, and Instagram. They all seemed stupid to me before I tried them. Facebook still seems stupid to me, and I haven’t tried Instagram, but the point remains that these platforms succeeded not because of any obvious feat of technical originality but because they got the incentive structures right in lots of little ways that added up to something big. What I’m trying to do here with this design proposal is essentially to turn the concept of courseware inside out, changing the incentive structures in lots of little ways that hopefully add up to something bigger. Rather than cramming as much of the “course” as possible into the “ware,” reinforcing the isolation of the classroom in the process, I’m trying to have the “ware” be generated by the living, human-animated course, making learning and curriculum design inherently social processes and hopefully thereby circumventing the shame reflex. And I’m trying to do that in the context of a platform and learning design that attempt to both reward and quantify social problem solving competencies in the class itself.

I don’t know if it will fly, but it might. Stranger things have happened.[2]

In the last post in this series, I will discuss some extensions that would probably have to be made in order to use this approach in a for-credit class as well as various miscellaneous considerations. Hey, if you’ve made it this far, you might as well read the last one and find out who dunnit.


  1. Of course, I recognize that some disciplines don’t do a lot of lecture/test (although they may do lecture/essay). These are precisely the disciplines in which courseware has been the least commercially successful.
  2. My wife agreeing to marry me, for instance.

The post Blueprint for a post-LMS, Part 4 appeared first on e-Literate.

Blueprint for a post-LMS, Part 3

Fri, 2015-03-06 16:03

By Michael FeldsteinMore Posts (1021)

In the first part of this series, I identified four design goals for a learning platform that supports conversation-based courses. In the second part, I brought up a use case of a kind of faculty professional development course that works as a distributed flip, based on our forthcoming e-Literate TV series on personalized learning. In the next two posts, I’m going to go into some aspects of the system design. But before I do that, I want to address a concern that some readers have raised. Pointing to my apparently infamous “Dammit, the LMS” post, they raise the question of whether I am guilty of a certain amount of techno-utopianism: whether I’m assuming that just building a new widget will solve a difficult social problem, and whether any system, even if it starts out relatively pure, will inevitably become just another LMS as the same social forces come into play.


I hope not. The core lesson of “Dammit, the LMS” is that platform innovations will not propagate unless the pedagogical changes that take advantage of those innovations also propagate, and pedagogical changes will not propagate without changes in the institutional culture in which they are embedded. Given that context, the use case I proposed in part 2 of this series is every bit as important as the design goals in part 1 because it provides a mechanism by which we may influence the culture. This actually aligns well with the “use scale appropriately” design goal from part 1, which included this bit:

Right now, there is a lot of value to the individual teacher of being able to close the classroom door and work unobserved by others. I would like to both lower barriers to sharing and increase the incentives to do so. The right platform can help with that, although it’s very tricky. Learning Object Repositories, for example, have largely failed to be game changers in this regard, except within a handful of programs or schools that have made major efforts to drive adoption. One problem with repositories is that they demand work on the part of the faculty while providing little in the way of rewards for sharing. If we are going to overcome the cultural inhibitions around sharing, then we have to make the barrier as low as possible and the reward as high as possible.

When we get to part 4 of the series, I hope to show how the platform, pedagogy, and culture might co-evolve through a combination of curriculum design, learning design, and platform design, all prepared for faculty as participants in a low-stakes environment. But before we get there, I have to first put some building blocks in place related to fostering and assessing educational conversation. That’s what I’m going to try to do in this post.

You may recall from part 1 of this series that trust, or reputation, has been the main proxy for expertise throughout most of human history. Credentials are a relatively new invention designed to solve the problem that person-to-person trust networks start to break down when population sizes get beyond a certain point. The question I raised was whether modern social networking platforms, combined with analytics, can revive something like the original trust network. LinkedIn is one example of such an effort. We want an approach that will enable us to identify expertise through trust networks based on expertise-relevant conversations of the type that might come up in a well facilitated discussion-based class.

It turns out that there is quite a bit of prior art in this area. Discussion board developers have been interested in ways to identify experts in the conversation ever since internet-based discussions grew large enough that people needed help figuring out who to pay attention to and who to ignore (and who to actively filter out). Keeping the signal-to-noise ratio high was a design goal, for example, in the early versions of the software developed to manage the Slashdot community in the late 1990s. (I suspect some of you have even earlier examples.) Since that design goal amounts to identifying community-recognized expertise and value in large-scale but authentic conversations (authentic in the sense that people are not participating because they were told to participate), it makes sense to draw on that accumulated experience in thinking through our design challenges. For our purposes, I’m going to look at Discourse, an open source discussion forum that was designed by some of the people who worked on the online community Stack Overflow.

Discourse has a number of features for scaling conversations that I won’t get into here, but their participant trust model is directly relevant. They base their model on one described by Amy Jo Kim in her book Community Building on the Web:

The progression, visitor > novice > regular > leader > elder, provides a good first approximation of levels for an expertise model. (The developers of Discourse change the names of the levels for their own purposes, but I’ll stick with the original labels here.) Achieving a higher level in Discourse unlocks certain privileges. For example, only leaders or elders can recategorize or rename discussion threads. This is mostly utilitarian, but it has an element of gamification to it. Your trust level is a badge certifying your achievements in the discussion community.

The model that Discourse currently uses for determining participant trust levels is pretty simple. For example, in order to get to the middle trust level, a participant must do the following:

  • visiting at least 15 days, not sequentially
  • casting at least 1 like
  • receiving at least 1 like
  • replying to at least 3 different topics
  • entering at least 20 topics
  • reading at least 100 posts
  • spending a total of at least 60 minutes reading posts

This is not terribly far from a very basic class participation grade. It is grade-like in the sense that it is a five-point evaluative scale, but it is simple, like the most basic of participation grades, in the sense that it mostly looks at quantity rather than quality of participation. The first hint of a difference is “receiving at least 1 like.” A “like” is essentially a micro-scale peer grade.
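The threshold logic above is simple enough to sketch directly. The counter names and cutoffs below mirror the list, but the code is an illustration, not Discourse’s actual implementation:

```python
# A minimal sketch of a threshold-based trust-level check like the one
# Discourse uses for its middle trust level. Field names and cutoffs are
# taken from the list above but are otherwise illustrative.

MEMBER_THRESHOLDS = {
    "days_visited": 15,
    "likes_given": 1,
    "likes_received": 1,
    "topics_replied_to": 3,
    "topics_entered": 20,
    "posts_read": 100,
    "minutes_reading": 60,
}

def meets_trust_level(stats, thresholds=MEMBER_THRESHOLDS):
    """True only if every activity counter meets or exceeds its threshold."""
    return all(stats.get(key, 0) >= minimum for key, minimum in thresholds.items())

active = {"days_visited": 20, "likes_given": 5, "likes_received": 2,
          "topics_replied_to": 4, "topics_entered": 30,
          "posts_read": 150, "minutes_reading": 90}
lurker = {"days_visited": 20, "posts_read": 300, "minutes_reading": 200}

print(meets_trust_level(active))  # True: all thresholds met
print(meets_trust_level(lurker))  # False: never posted, liked, or was liked
```

Note how purely quantitative this is: the lurker reads three times as much as the active participant but can never advance, because the model counts activity rather than assessing its quality.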

We could also imagine other, more sophisticated metrics that directly assess the degree to which a participant is considered to be a trusted community member. Here are a few examples:

  • The number of replies or quotes that a participant’s comments generate
  • The number of mentions the participant generates (in the @twitterhandle sense)
  • The number of either of the above from participants who have earned high trust levels
  • The number of “likes” a participant receives for posts in which they mention or quote another post
  • The breadth of the network of people with whom the participant converses
  • Discourse analysis of the language used in the participant’s post to see if they are being helpful or if they are asking clear questions (for example)

Some of these metrics use the trust network to evaluate expertise, e.g., “many participants think you said something smart here” or “trusted participants think you said something smart here.” But some directly measure actual competencies, e.g., the ability to find pre-existing information and quote it. You can combine these into a metric of the ability to find pre-existing relevant information and quote it appropriately by looking at posts that contain a quote and were liked by a number of participants or by trusted participants.

Think about these metrics as the basis for a grading system. Does the teacher want to reward students who show good teamwork and mentoring skills? Then she might increase the value of metrics like “posts rated helpful by a participant with less trust” or “posts rated helpful by many participants.” If she wants to prioritize information finding skills, then she might increase the weight of appropriate quoting of relevant information. Note that, given a sufficiently rich conversation with a sufficiently rich set of metrics, there will be more than one way to climb the five-point scale. We are not measuring fine-grained knowledge competencies. Rather, we are holistically assessing the student’s capacity to be a valuable and contributing member of a knowledge-building community. There should be more than one way to earn high marks at that. And again, these are high-order competencies that most employers value highly. They are just not broken down into itsy bitsy pass-or-fail knowledge chunks.
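A minimal sketch of such a teacher-tunable scoring function, with invented metric names and weights, shows how shifting the weights rewards different kinds of participation:

```python
# Sketch: a teacher-tunable score over richer conversation metrics.
# Metric names and weights are invented; the point is that changing the
# weights changes which kind of participation the course rewards.

def trust_score(metrics, weights):
    """Weighted sum of participation metrics; missing metrics count as 0."""
    return sum(weights[name] * metrics.get(name, 0) for name in weights)

# A teacher who wants to reward mentoring weights helpfulness to novices heavily.
mentoring_weights = {"replies_generated": 1.0, "likes_from_trusted": 2.0,
                     "rated_helpful_by_novice": 5.0, "apt_quotes": 1.0}
# A teacher who wants to reward information finding weights apt quoting heavily.
research_weights = {"replies_generated": 1.0, "likes_from_trusted": 2.0,
                    "rated_helpful_by_novice": 1.0, "apt_quotes": 5.0}

student = {"replies_generated": 4, "likes_from_trusted": 3,
           "rated_helpful_by_novice": 2, "apt_quotes": 6}
print(trust_score(student, mentoring_weights))  # 4 + 6 + 10 + 6 = 26.0
print(trust_score(student, research_weights))   # 4 + 6 + 2 + 30 = 42.0
```

The same student participation yields different scores under different weightings, which is exactly the “more than one way to climb the scale” property described above.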

Unfortunately, Discourse doesn’t have this rich array of metrics or options for combining them. So one of the first things we would want to do in order to adapt it for our use case is abstract Discourse’s trust model, as well as all the possible inputs, using IMS Caliper (or something based on the current draft of it, anyway). There are a few reasons for this. First, we’d want to be able to add inputs as we think of them. For example, we might want to include how many people start using a tag that a participant has introduced. You don’t want to have to hard code every new parameter and every new way of weighing the parameters against each other. Second, we’re eventually going to want to add other forms of input from other platforms (e.g., blog posts) that contribute to a participant’s expertise rating. So we need the ratings code in a form that is designed for extension. We need APIs. And finally, we’d want to design the system so that any vendor, open source, or home-grown analytics system could be plugged in to develop the expertise ratings based on the inputs.
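The abstraction argued for here can be sketched as a simple metric registry over a generic activity-event stream. The event shape below is invented (loosely in the spirit of IMS Caliper, not its actual schema), and the key property is that new metrics, like the tag-adoption example, can be registered without touching the scoring code:

```python
# Sketch: register metric derivations over a generic event stream instead of
# hard-coding each one. Event fields and verbs are invented for illustration.

METRIC_REGISTRY = {}

def metric(name):
    """Decorator: register a function that derives a metric from events."""
    def register(fn):
        METRIC_REGISTRY[name] = fn
        return fn
    return register

@metric("likes_received")
def likes_received(events, user):
    return sum(1 for e in events
               if e["verb"] == "liked" and e["object_author"] == user)

@metric("tags_adopted")
def tags_adopted(events, user):
    # A metric added later without touching profile(): how many tagging
    # events apply a tag that this user was the first to use.
    first_use = {}
    for e in events:
        if e["verb"] == "tagged" and e["tag"] not in first_use:
            first_use[e["tag"]] = e["actor"]
    return sum(1 for e in events
               if e["verb"] == "tagged" and e["actor"] != user
               and first_use.get(e["tag"]) == user)

def profile(events, user):
    """Compute every registered metric for one participant."""
    return {name: fn(events, user) for name, fn in METRIC_REGISTRY.items()}

events = [
    {"verb": "tagged", "actor": "ana", "tag": "mastery"},
    {"verb": "tagged", "actor": "ben", "tag": "mastery"},
    {"verb": "liked", "actor": "ben", "object_author": "ana"},
]
print(profile(events, "ana"))  # {'likes_received': 1, 'tags_adopted': 1}
```

An external analytics system would then consume these per-participant profiles through an API rather than reaching into the forum’s database, which is what makes the ratings engine pluggable.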

Discourse also has an integration with WordPress, which is interesting not so much because of WordPress itself but because the nature of the integration points toward more functionality that we can use, particularly for analytics purposes. The Discourse WordPress plugin can spawn a discussion thread in Discourse automatically for every new post in WordPress. This is interesting because it gives us a semantic connection between a discussion and a piece of (curricular) material. We automatically know what the discussion is “about.” It’s hard to get participants in a discussion to do a lot of tagging of their posts. But it’s a lot easier to get curricular materials tagged. If we know that a discussion is about a particular piece of content and we know details about the subjects or competencies that the content is about (and whether that content contains an explanation to be understood, a problem to be solved, or something else), then we can make some relatively good inferences about what it says about a person’s expertise when she makes several highly rated comments in discussions about content items that share the same competency or topic tag. In addition, Discourse has the ability to publish the comments on the content back to the post. This is a capability that we’re going to file away for use in the next part of this series.
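As a sketch of the inference this integration enables, assume an invented mapping from discussion threads to the competency tags on their linked content, plus some invented comment data. Highly rated comments can then be rolled up by tag into a per-participant expertise hint:

```python
# Sketch: if each thread is linked to a tagged content item, highly rated
# comments can be aggregated by tag as an expertise signal. All data shapes
# here are invented for illustration.

from collections import Counter

# Thread id -> competency tags on the linked content item.
THREAD_TAGS = {"t1": ["quadratics"], "t2": ["quadratics", "graphing"], "t3": ["essays"]}

def expertise_hints(comments, min_likes=2):
    """Count highly rated comments per (author, tag) pair."""
    hints = Counter()
    for c in comments:
        if c["likes"] >= min_likes:
            for tag in THREAD_TAGS.get(c["thread"], []):
                hints[(c["author"], tag)] += 1
    return hints

comments = [
    {"author": "dee", "thread": "t1", "likes": 3},
    {"author": "dee", "thread": "t2", "likes": 4},
    {"author": "dee", "thread": "t3", "likes": 0},
    {"author": "eli", "thread": "t3", "likes": 5},
]
hints = expertise_hints(comments)
print(hints[("dee", "quadratics")])  # 2: strong comments in two quadratics threads
```

Nobody tagged a single comment here; the tags ride along from the content, which is the whole value of the semantic link.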

If we were to abstract the ratings system from Discourse, add an API that lets it take different variables (starting with various metadata about users and posts within Discourse), and add a pluggable analytics dashboard that let teachers and other participants experiment with different types of filters, we would have a reasonably rich environment for a cMOOC. It would support large-scale conversations that could be linked to specific pieces of curricular content (or not). It would help people find more helpful comments and more helpful commenters. It could begin to provide some fairly rich community-powered but analytics-enriched evaluations of both of these. And, in our particular use case, since we would be talking about analytics-enriched personalized learning products and strategies, having some sort of pluggable analytics that are not hidden by a black box could give participants more hands-on experience with how analytics can work in a class situation, what they do well, what they don’t do well, and how you should manage them as a teacher. There are some additional changes we’d need to make in order to bring the system up to snuff for traditional certification courses, but I’ll save those details for part 5.

The post Blueprint for a post-LMS, Part 3 appeared first on e-Literate.

Blueprint for a Post-LMS, Part 2

Thu, 2015-03-05 13:25

By Michael FeldsteinMore Posts (1020)

In the first post of this series, I identified four design goals for a learning platform that would be well suited for discussion-based courses:

  1. Kill the grade book in order to get faculty away from concocting arcane and artificial grading schemes and more focused on direct measures of student progress.
  2. Use scale appropriately in order to gain pedagogical and cost/access benefits while still preserving the value of the local cohort guided by an expert faculty member, as well as to propagate exemplary course designs and pedagogical practices more quickly.
  3. Assess authentically through authentic conversations in order to give credit for the higher order competencies that students display in authentic problem-solving conversations.
  4. Leverage the socially constructed nature of expertise (and therefore competence) in order to develop new assessment measures based on the students’ abilities to join, facilitate, and get the full benefits from trust networks.

I also argued that platform design and learning design are intertwined. One implication of this is that there is no platform that will magically make education dramatically better if it works against the grain of the teaching practices in which it is embedded. The two need to co-evolve.

This last bit is an exceedingly tough nut to crack. If we were to design a great platform for conversation-based courses but it got adopted for typical lecture/test courses, the odds are that faculty would judge the platform to be “bad.” And indeed it would be, for them, because it wouldn’t have been designed to meet their particular teaching needs. At the same time, one of our goals is to use the platform to propagate exemplary pedagogical practices. We have a chicken and egg problem. On top of that, our goals suggest assessment solutions that differ radically from traditional ones, but we only have a vague idea so far of what they will be or how they will work. We don’t know what it will take to get them to the point where faculty and students generally agree that they are “fair,” and that they measure something meaningful. This is not a problem we can afford to take lightly. And finally, while one of our goals is to get teachers to share exemplary designs and practices, we will have to overcome significant cultural inhibitions to make this happen. Sometimes systems do improve sharing behavior simply by making sharing trivially easy—we see that with social platforms like Twitter and Facebook, for example—but it is not at all clear that just making it easy to share will improve the kind of sharing we want to encourage among faculty. We need to experiment in order to find out what it takes to help faculty become comfortable or even enthusiastic about sharing their course designs. Any one of these challenges could kill the platform if we fail to take them seriously.

When faced with a hard problem, it’s a good idea to find a simpler one you can solve that will get you partway to your goal. That’s what the use case I’m about to describe is designed to do. The first iteration of any truly new system should be designed as an experiment that can test hypotheses and assumptions. And the first rule of experimental design is to control the variables.

Of the three challenges I just articulated, the easiest one to get around is the assessment trust issue. The right use case should be an open, not-for-credit, not-for-certification course. There will be assessments, but the assessments don’t count. We would therefore be creating a situation somewhat like a beta test of a game. Participants would understand that the points system is still being worked out, and part of the fun of participation is seeing how it works and offering suggestions for improvement. The way to solve the problem of potential mismatches between platform and content is to test the initial release of the platform with content that was designed for it. As for the third problem, we need to pick a domain that is far enough away from the content and designs that faculty feel are “theirs” that the inhibitions regarding sharing are lower.

All of these design elements point toward piloting the platform with a faculty professional development cMOOC. Faculty can experience the platform as students in a low-stakes environment. And I find that even faculty who are resistant to talking about pedagogy in their traditional classes tend to be more open-minded when technology enters the picture because it’s not an area where they feel they are expected to be experts. But it can’t be a traditional cMOOC (if that isn’t an oxymoron). We want to model the distributed flip, where there are facilitators of local cohorts in addition to the large group participation. This suggests a kind of a “reading group” or “study group” structure. The body of material for the MOOC is essentially a library of content. Each campus-based group chooses to go through the content in their own way. They may cover all of it or skip some of it. They may add their own content. Each group will have its own space to organize its activities, but this space will be open to other groups. There will be discussions open to everyone, but groups and individual members can participate in those or not, as they choose. Presumably each group would have at least a nominal leader who would take the lead on organizing the content and activities for the local cohort. This would typically be somebody like the head of a Center for Educational Technology, but it could also be an interested faculty member, or the group could organize its activities by consensus.

To make the use case more concrete, let’s assume that the curriculum will revolve around the forthcoming e-Literate TV series on personalized learning. This is something that I would ideally like to do in the real world, but it also has the right characteristics for the current thought experiment. The heart of the series is five case studies of schools trying different personalized learning approaches:

  • Middlebury College, an elite New England liberal arts school in rural Vermont
  • Essex County College, a community college in Newark, NJ
  • Empire State College, a SUNY school that focuses on non-traditional students and has a heavy distance learning program
  • Arizona State University, a large public university with a largely top-down approach to implementing personalized learning
  • A large public university with a largely bottom-up approach to implementing personalized learning

These thirty-minute case studies, plus the wrapper content that Phil and I are putting together (including a recorded session at the last ELI conference), cover a number of cross-cutting issues. Here are a few:

  • What does “personalized” really mean? When (and how) does technology personalize, and when does it depersonalize?
  • How does the idea of “personalized” change based on the needs of different kinds of students in different kinds of institutions?
  • How do personalized learning technologies, implemented thoughtfully in these different contexts, change the roles of the teacher, the TA, and the students?
  • What kinds of pedagogy seem to work best with self-paced products that are labeled as providing personalized learning?
  • What’s hard about using these technologies effectively, and what are the risks?

That’s the content and the context. Since we’re going for something like a PBL design, the central problem that each cohort would need to tackle is, “What, if anything, should we be doing with personalized learning tools and pedagogical approaches in our school?” This question can be tackled in a lot of different ways, depending on the local culture. If it is taken seriously, there are likely to be internal discussions about politics, budgets, implementation issues, and so on. Cohorts might also be very interested to have conversations with other cohorts from peer schools to see what they are thinking and what their experiences have been. Not only that, they may also be interested in how their peers are organizing their campus conversations about personalized learning. This is the equivalent of sharing course designs in this model. And of course, there will hopefully also be very productive conversations across all cohorts, pooling expertise, experience, and insight. This sort of community “sharding” is consistent with the cMOOC design thinking that has come before. We’re simply putting some energy into both learning design and platform design to make that approach work with a facilitation structure that is closer to a traditional classroom setting. We’re grafting a cMOOC-like course design onto a distributed flip facilitation structure in the hopes of coming up with something that still feels like a traditional class in some ways but brings in the benefits of a global conversation (among teachers as well as students).

The primary goal of such a “course” wouldn’t be to certify knowledge or even to impart knowledge but rather to help participants build their intra- and inter-campus expertise networks on personalized learning, so that educators could learn from each other more and re-invent the wheel less. But doing so would entail raising the baseline level of knowledge of the participants (like a course) and could support the design goals. The e-Literate TV series provides us with a concrete example to work with, but any cross-cutting issue or change that academia is grappling with would work as a use case for attacking our design goals in an environment that is relatively lower-risk than for-credit classes. The learning platform necessary to make such a course work would need to both support the multi-layered conversations and provide analytics tools to help identify both the best posts and the community experts.

In the next two posts, I will lay out the basic design of the system I have in mind. Then, in the final post of the series, I will discuss ways of extending the model to make it more directly suitable for traditional for-credit class usage.

The post Blueprint for a Post-LMS, Part 2 appeared first on e-Literate.

Blueprint for a Post-LMS, Part 1

Wed, 2015-03-04 11:27

By Michael FeldsteinMore Posts (1020)

Reading Phil’s multiple reviews of Competency-Based Education (CBE) “LMSs”, one of the implications that jumps out at me is that you see a much more rapid and coherent progression of learning platform designs if you start with a particular pedagogical approach in mind. CBE is loosely tied to a family of pedagogical methods, perhaps the most important of which at the moment is mastery learning. In contrast, questions about why general LMSs aren’t “better” beg the question, “Better for what?” Since conversations of LMS design are usually divorced from conversations of learning design, we end up pretending that the foundational design assumptions in an LMS are pedagogically neutral when they are actually assumptions based on traditional lecture/test pedagogy. I don’t know what a “better” LMS looks like, but I am starting to get a sense of what an LMS that is better for CBE looks like. In some ways, the relationship between platform and pedagogy is similar to the relationship former Apple luminary Alan Kay claimed between software and hardware: “People who are really serious about software should make their own hardware.” It’s hard to separate serious digital learning design from digital learning platform design (or, for that matter, from physical classroom design). The advances in CBE platforms are a case in point.

But CBE doesn’t work well for all content and all subjects. In a series of posts starting with this one, I’m going to conduct a thought experiment of designing a learning platform—I don’t really see it as an LMS, although I’m also not allergic to that term as some are—that would be useful for conversation-based courses or conversation-based elements of courses. Because I like thought experiments that lead to actual experiments, I’m going to propose a model that could realistically be built with named (and mostly open source) software and talk a bit about implementation details like use of interoperability standards. But all of the ideas here are separable from the suggested software implementations. The primary point of the series is to address the underlying design principles.

In this first post, I’m going to try to articulate the design goals for the thought experiment.

When you ask people what’s bad about today’s LMSs, you often get either a very high-level answer—“Everything!”—or a litany of low-level answers about how archiving is a pain, the blog app is bad, the grade book is hard to use, and so on. I’m going to try to articulate some general goals for improvement that are in between those two levels. They will be general design principles. Some of them apply to any learning platform, while others apply specifically to the goal of developing a learning platform geared toward conversation-based classes.

Here are four:

1. Kill the Grade Book

One of the biggest problems with mainstream models of teaching and education is their artificiality. Students complete assignments to get grades. Often, they don’t care about the assignment, and the assignments often aren’t designed to entice students to care about them. To the contrary, they are often designed to test specific knowledge or competency goals, most of which would never be practically tested in isolation in the real world. In the real world, our lives or livelihoods don’t depend solely on knowing how to solve a quadratic equation or how to support an argument with evidence. We use these pieces to accomplish more complex real-world goals that are (usually) meaningful to us. That’s the first layer of artificiality. The second layer is what happens in the grade book. Teachers make up all kinds of complex weighting systems, dropping the lowest, assigning a percentage weight to different classes of assignments, grading on curves, and so on. Faculty often spend a lot of energy first creating and refining these schemes and then using them to assign grades. And they are all made up, artificial, and often flawed. (For example, many faculty who are not in mathematically heavy disciplines make the mistake at one time or another of mixing points with percentage grades, and then spend many hours experimenting with complex fudge factors because they don’t have an intuition of how those two grading schemes interact with each other.)

Some of this artificiality is fundamentally baked into the foundational structures of schooling and accreditation, but some of it is contingent. For example, while CBE approaches don’t, in and of themselves, do anything to get rid of the artificiality of the schooling tasks themselves (and may, in fact, exacerbate them, depending on the course design), they can reduce or eliminate a traditional grade book, particularly in mastery learning courses. With CBE in general, you have a series of binary gates: Either you did demonstrate competency or you didn’t. You can set different thresholds, and sometimes you can assess different degrees of competency. But at the end of the day, the fundamental grading unit in a CBE course is the competency, not the quiz or assignment. This simplifies grading tremendously. Rather than forcing teachers to answer questions like, “How many points should each in-class quiz be, and what percentage of the total grade should they count for,” teachers instead have to answer questions like, “How much should students’ ability to describe a minor seventh chord count toward their music theory course grade?” The latter question is both a lot more straightforward and more focused on teachers’ intuitions about what it means for a student to learn what a class has to teach.
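The contrast can be sketched in a few lines: in the CBE model each competency is a binary gate rather than a weighted entry in a grade book. The competency names and mastery threshold below are invented for illustration:

```python
# Sketch of the binary-gate grading model described above. In a CBE course
# the unit of grading is the competency, not the quiz; names and the
# threshold here are invented.

def cbe_grade(assessment_scores, threshold=0.8):
    """Each competency is a gate: mastered iff the best attempt >= threshold."""
    return {comp: max(scores) >= threshold
            for comp, scores in assessment_scores.items()}

scores = {
    "describe_minor_seventh_chord": [0.6, 0.85],  # mastered on second attempt
    "notate_simple_melody": [0.5, 0.7],           # not yet mastered
}
print(cbe_grade(scores))
# {'describe_minor_seventh_chord': True, 'notate_simple_melody': False}
```

There is no weighting scheme to tune at all; the only teacher-facing knobs are which competencies count and how high the mastery bar sits.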

[Figure: Master Scale, from the details of LoudCloud’s CBE platform]

Nobody likes a grade book, so let’s see how close we can get to eliminating the need for one. In general, we want a grading system that enables teachers to make adjustments to their course evaluation system based on questions that are closely related to their expertise—i.e., what students need to know and whether they seem to know it—rather than on their skills in constructing complex weighting schemes. The mechanism by which we do so will be different for discussion-based course components than for many typical implementations of CBE, particularly machine-graded CBE, but I believe that a combination of good course design and good software design can actually help reduce both layers of grading artificiality that I mentioned above.

2. Use scale appropriately

Most of the time, when the word “scale” is used in an educational context, it attaches to a monolithic, top-down model like MOOCs. It takes a simplistic view of Baumol’s Cost Disease (which is probably the wrong model of the problem to begin with) and boils down to asking, “How can we reduce the per-student costs by cramming more students into the same class?” I’m more interested in a different question: What new models can we develop that harness both the economic and the pedagogical benefits of large-scale classes without sacrificing the value of teacher-facilitated cohorts? Models like Mike Caulfield’s and Amy Collier’s distributed flip, or FemTechNet’s Distributed Open Collaborative Courses (DOCCs). There are almost certainly some gains to be made using these designs in increasing access by lowering cost. They might (or might not) be more incremental than the centralized scale-out model, but they should hopefully not come with the same trade-offs in educational quality. In fact, they will hopefully improve educational quality by harnessing global resources (including a global community of peers for both students and teachers) while still preserving the local support. And I think there’s actually a potential for some pretty significant scaling without quality loss when the model I have in mind is used in combination with a CBE mastery learning approach in a broader, problem-based learning course design. More on that later.

Another kind of scaling that interests me is scaling (or propagating) changes in pedagogical models. We know a lot about what works well in the classroom that never gets anywhere because we have few tools for educating faculty about these proven techniques and helping them to adopt them. I’m interested in creating an environment in which teachers share learning design customizations by default, and teachers who create content can see what other teachers are doing with it—and especially what students in other classes are doing with it—by default. Right now, there is a lot of value to the individual teacher of being able to close the classroom door and work unobserved by others. I would like to both lower barriers to sharing and increase the incentives to do so. The right platform can help with that, although it’s very tricky. Learning Object Repositories, for example, have largely failed to be game changers in this regard, except within a handful of programs or schools that have made major efforts to drive adoption. One problem with repositories is that they demand work on the part of the faculty while providing little in the way of rewards for sharing. If we are going to overcome the cultural inhibitions around sharing, then we have to make the barrier as low as possible and the reward as high as possible.

3. Assess authentically through authentic conversations

Circling back to the design goal of killing the grade book, what we want to be able to do is directly assess the student’s quality of participation, rather than mediate it through a complicated assignment grading and weighting scheme. Unfortunately, the minute you tell students they are getting a “class participation” grade, you immediately do massive damage to the likelihood of getting authentic conversation and completely destroy the chances that you can use the conversation as authentic assessment. People perform to the metrics. That’s especially true when the conversations are driven by prompts written by the teacher or textbook publisher. Students will have fundamentally different types of conversations if their conversations are not isolated graded assignments but rather integral steps on their way to accomplish some larger task. Problem-Based Learning (PBL) is a good example. If you have a course design in which students have to do some sort of project or respond to some sort of case study, and that project is hard and rich enough that students have to work with each other to pool their knowledge, expertise, and available time, you will begin to see students act as authentic experts in discussions centered around solving the course problem set before them.

A good example of this is ASU’s Habitable Worlds, which I have blogged about in the past and which will be featured in an episode of the aforementioned e-Literate TV series. Habitable Worlds is roughly in the pedagogical family of CBE and mastery learning. It’s also a PBL course. Students are given a randomly generated star field and a semester-long project to determine the likelihood that intelligent life exists in that star field. There are a number of self-paced adaptive lessons built on the Smart Sparrow platform. Students learn competencies through those lessons, but they are competencies that are necessary to complete the larger project, rather than simply a set of hoops that students need to jump through. In other words, the competency lessons are resources for the students. They also happen to be assessments, but that is no longer the only reason, and hopefully not the main reason, that students care about them. The class discussions can be positioned in the same way, given the right learning design. Here’s a student featured in our e-Literate TV episode talking about that experience:

[Embedded video: e-Literate TV episode clip]

The way the course is set up, students use the discussion board for authentic science-related problem solving. In doing so, they are exhibiting competencies necessary to be a good scientist (or a good problem solver, or a supportive member of a problem-solving team). They have to know when to search for information that already exists on the discussion board, how to ask for help when they are stuck, how to facilitate a problem-solving conversation, and so on. And these are, in fact, more valuable competencies for employers, society, and the students themselves than knowing the necessary ingredients for a planet to be habitable (for example). Yet we generally ignore these skills in our grading and pretend that the knowledge quizzes tell us what we need to know, because those are easier to assess. I would like for us to refuse to settle for that anymore.

This is a great example of how learning design and learning platform design can go hand-in-hand. If the platform and learning design work together to enable students to have their discussions within a very large (possibly global) group of learners who are solving similar problems, then there are richer opportunities to evaluate students’ respective abilities to demonstrate both expertise and problem-solving skills across a wide range of social interactions. Assuming a distributed flip model (where faculty are teaching their own classes on their own campuses with their own students but also using MOOC-like content and discussions that multiple other classes are also using), if you can develop analytics that help the local teachers directly and efficiently evaluate students’ demonstrated skills in these conversations, then you can feed the output of the analytics, tweaked by faculty based on which criteria for evaluating students’ participation they think are most important, into a simplified grading system. I’ll have a fair bit to say about what this could look like in practice in a later post in this series.

4. Leverage the socially constructed nature of expertise (and therefore competence)

Why do colleges exist? Once upon a time, if you went to a local blacksmith that you hadn’t been to before, you could ask your neighbor about his experience as a customer or look at the products the blacksmith produced. If you wanted to hire somebody you didn’t know to work in your shop, you would do the same. You’d generally get a holistic evaluation with some specific examples. “Oh, he’s great. My horse has five hooves. He figured out how to make a special shoe for that fifth hoof and didn’t even charge me extra!” You might gather a few of these stories and then make your decision. One thing you would not do is make a list of the 193 competencies that a blacksmith should have and check to see whether he’s been tested against them.

For a variety of reasons, it’s not that simple to evaluate expertise anymore. Credentialing institutions have therefore become proxies for these sorts of community trust networks. “I don’t know you, but you graduated from Harvard, and I believe Harvard is a good school.” There was some of that in the early days—“I don’t know you, but you apprenticed with Rolf, and I trust Rolf”—but the universities (and other guilds) took this proxy relationship to the next step by asking people to invest their trust in the institution rather than the particular teacher. The paradox is that, in order to justify their authority as reputation proxies, these institutions came under increasing pressure to produce objective-sounding assessments of their students’ expertise. As we go further and further down this road, these assessments look less and less like the original trust network assessment that the credential is supposed to be a proxy for. This may be one reason why a variety of measures show employers don’t pay much attention to where prospective employees get their degrees and don’t have a high opinion of the degree to which college is preparing students for their careers.

As somebody who has made hiring decisions in both big and small companies, I can tell you that I don’t remember even looking at the prospective employees’ college credentials. The first screening was based on what work they had done for whom. If the positions were entry-level, I might have looked at their college backgrounds, but even there, I probably would have looked more at the recommendations, extra-curricular activities, and any portfolio projects. In other words, who will vouch for you, what you are passionate about, and what work you can show. Except in a few specific fields, the college degree is at most a gateway requirement. You may have to have one in order to be considered for some jobs, but it doesn’t help you actually land those jobs.
And there is little evidence I am aware of that increasingly fine-grained competency assessments improve the value of the credential. This isn’t to say that there is no assessment mechanism better than the old ways. Nor is it to say anything about the value of CBE for either pedagogical purposes (e.g., the way it is used in the Habitable Worlds example above) or its value in increasing access to education (and educational credentials) through prior learning assessments and the ability to free students from the tyranny of seat time requirements. It’s just to say that it’s not clear to me that the path toward exhaustive assessment of fine-grained competencies leads us anywhere useful in terms of the value of the credential itself or in fostering the deep learning that a college degree is supposed to certify. In fact, it may be harmful in those respects.

If we could muster the courage to loosen our grip on the current obsession with objective, knowledge-based certification, we might discover that the combination of digital social networks and various types of analytics hold out the promise that we can recreate something akin to the original community trust network at scale. Participants—students, in our case—could be evaluated on their expertise based on whether people with good reputations in their community (or network) think that they have demonstrated expertise. Just as they always have been. And the demonstration of that expertise will be on full display for direct evaluation because the conversation(s) in which the demonstration(s) occurred and got judged by our trusted community members are on permanent digital display.[1] The learning design creates situations in which students are motivated to build trust networks in the pursuit of solving a difficult, college-level problem. The platform helps us to externalize, discover, and analyze these local trust networks (even if we don’t know any of the participants).
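The post does not specify any algorithm for externalizing these local trust networks, but the general idea can be sketched with a toy example. Everything below—the endorsement data, the damping factor, the PageRank-style propagation—is my own illustrative assumption, not anything proposed in the post:

```python
# Illustrative sketch only: a toy "trust network" built from peer
# endorsements in a discussion forum. The propagation scheme is a
# simplified PageRank-style iteration: endorsements from students
# who are themselves trusted count for more.

endorsements = {
    # endorser -> students whose contributions they marked helpful
    "ana": ["ben", "cho"],
    "ben": ["cho"],
    "cho": ["ana"],
    "dev": ["cho", "ana"],
}

students = {s for pair in endorsements.items() for s in (pair[0], *pair[1])}
trust = {s: 1.0 for s in students}

for _ in range(20):  # fixed-point iteration
    new = {s: 0.15 for s in students}  # damping term keeps scores bounded
    for endorser, endorsed in endorsements.items():
        share = 0.85 * trust[endorser] / len(endorsed)
        for s in endorsed:
            new[s] += share
    trust = new

# Students endorsed by well-trusted peers float to the top;
# "dev", whom nobody endorses, bottoms out at the damping floor.
for s, score in sorted(trust.items(), key=lambda kv: -kv[1]):
    print(s, round(score, 2))
```

The point of the sketch is only that analytics over discussion behavior can produce a relative ordering of demonstrated expertise without any quiz-style assessment; a real platform would need to weigh endorsement quality, guard against collusion, and keep the faculty member in the loop.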

* * *

Those are the four main design goals for the series. (Nothing too ambitious.) In my next post, I’ll lay out the use case that will drive the design.



  1. Hat tip to Patrick Masson, among others, for guiding me to this insight.

The post Blueprint for a Post-LMS, Part 1 appeared first on e-Literate.

Alternate Ledes for CUNY Study on Raising Graduation Rates

Sun, 2015-03-01 14:23

By Phil HillMore Posts (295)

Last week MDRC released a study on the City University of New York’s (CUNY) Accelerated Study in Associate Programs (ASAP), described in nearly breathless terms.

From the report’s summary:

  • ASAP was well implemented. The program provided students with a wide array of services over a three-year period, and effectively communicated requirements and other messages.
  • ASAP substantially improved students’ academic outcomes over three years, almost doubling graduation rates. ASAP increased enrollment in college and had especially large effects during the winter and summer intersessions. On average, program group students earned 48 credits in three years, 9 credits more than did control group students. By the end of the study period, 40 percent of the program group had received a degree, compared with 22 percent of the control group. At that point, 25 percent of the program group was enrolled in a four-year school, compared with 17 percent of the control group.
  • At the three-year point, the cost per degree was lower in ASAP than in the control condition. Because the program generated so many more graduates than the usual college services, the cost per degree was lower despite the substantial investment required to operate the program.

Accordingly, the media followed suit with breathless coverage[1]. Consider this from Inside Higher Ed and its article titled “Living Up to the Hype”:

Now that firm results are in, across several different institutions, CUNY is confident it has cracked the formula for getting students to the finish line.

“It doesn’t matter that you have a particularly talented director or a president who pays attention. The model works,” said John Mogulescu, the senior university dean for academic affairs and the dean of the CUNY School of Professional Studies. “For us it’s a breakthrough program.”

MDRC and CUNY also claim that “cracking the code” means that other schools can benefit, as described earlier in the article:

“We’re hoping to extend that work with CUNY to other colleges around the country,” said Michael J. Weiss, a senior associate with MDRC who coauthored the study.

Unfortunately . . .

If you read the report itself, the data doesn’t back up the bold claims in the executive summary and in the media. A more accurate summary might be:

For the declining number of young, living-with-parents community college students planning to attend full-time, CUNY has explored how to increase student success while avoiding any changes in the classroom. The study found that a package of interventions requiring full-time enrollment, increasing per-student expenditures by 63%, and providing aggressive advising as well as priority access to courses can increase enrollment by 22%, inclusive of term-to-term retention. At the 3-year mark these combined changes translate into an 82% increase in graduation rates, but it is unknown if any changes to the interventions would affect the results, and it is unknown what results would occur at the 4-year mark. Furthermore, it is unclear whether this program can scale due to priority course access and effects on the growing non-traditional student population. If a state sets performance-funding based on 3-year graduation rates and nothing else, this program could even reduce costs.

Luckily, the report is very well documented, so nothing is hidden. What are the problems that would lead to this alternate description?

  • This study covers only one segment of the population: first-time, low-income students willing to go full-time who have one or two developmental course requirements (not zero, not three or more). This targeted less than one-fourth of the CUNY 2-year student population, where 73% live at home with parents and 77% are younger than 22. For the rest, including the growing working-adult population:

(p. 92): It is unclear, however, what the effects might be with a different target group, such as low-income parents. It is also unclear what outcomes an ASAP-type program that did not require full-time enrollment would yield.

  • The study required full-time enrollment (12 credits attempted per term) and only evaluated 3-year graduation rates, which almost explains the results by itself. Do the math (24 credits per year over 3 years, minus 3 – 6 credits because developmental courses don’t count for degree credit) and you see that going “full-time” and earning roughly 66 degree-applicable credits is likely the only way to graduate with a 60-credit associate’s degree in 3 years. As the report itself states:

(p. 85): It is likely that ASAP’s full-time enrollment requirement, coupled with multiple supports to facilitate that enrollment, were central to the program’s success.
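The credit arithmetic above can be checked with a quick back-of-envelope sketch (the 60-credit degree and two-term academic year are standard assumptions of mine, not figures from the report):

```python
# Back-of-envelope check: can a student graduate in 3 years without
# "full-time" enrollment (12+ credits attempted per term)?
DEGREE_CREDITS = 60   # typical associate's degree requirement
TERMS_PER_YEAR = 2    # fall + spring (ignoring intersessions)
YEARS = 3
MAX_DEV_CREDITS = 6   # developmental credits that don't count toward the degree

for credits_per_term in (9, 12):
    attempted = credits_per_term * TERMS_PER_YEAR * YEARS
    degree_credits = attempted - MAX_DEV_CREDITS  # worst case
    print(credits_per_term, degree_credits, degree_credits >= DEGREE_CREDITS)
# Prints "9 48 False" then "12 66 True": a part-time student cannot
# reach 60 degree credits in 3 years, so the full-time requirement
# bakes much of the graduation-rate result in from the start.
```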

  • The study created a special class of students with priority enrollment. One of the biggest challenges of public colleges is for students to even have access to the courses they need. The ASAP students were given priority enrollment as the report itself states:

(p. 34): In addition, students were able to register for classes early in every semester they participated in the program. This feature allowed ASAP students to create convenient schedules and have a better chance of enrolling in all the classes they need. Early registration may be especially beneficial for students who need to enroll in classes that are often oversubscribed, such as popular general education requirements or developmental courses, and for students in their final semesters as they complete the last courses they need to graduate.

  • The study made no attempt to understand the many variables at play. There was a plethora of interventions – full-time enrollment requirement, priority enrollment, special seminars, reduced load on advisers, etc. Yet we have no idea which components led to which effects. From the report:

(p. 85): What drove the large effects found in the study and which of ASAP’s components were most important in improving students’ academic outcomes? MDRC’s evaluation was not designed to definitively answer that question. Ultimately, each component in ASAP had the potential to affect students’ experiences in college, and MDRC’s evaluation estimates the effect of ASAP’s full package of services on students’ academic outcomes.

  • The study made no changes at all to actual teaching and learning practices. It almost seems this was the point: to find out how we can change everything except teaching and learning to get students to enroll full-time. From the report:

(p. 34): ASAP did not make changes to pedagogy, curricula, or anything else that happened inside of the classroom.

What Do We Have Left?

In the end this was a study on pulling out all of the non-teaching stops to see if we can get students to enroll full-time. Target only students willing to go full-time, then constantly advise them to enroll full-time and stick with it, and remove as many financial barriers (funding the gap between cost and financial aid, free textbooks, gas cards, etc.) as is feasible. With all of this effort, the real result of the study is that they increased the number of credits attempted and credits earned by 22%.

We already know that full-time enrollment is the biggest variable for graduation rates in community colleges, especially if measured over 4 years or less. Look at the recent National Student Clearinghouse report at a national level (tables 11-13):

  • Community college 4-year completion rate for exclusively part-time students: 2.32%
  • Community college 4-year completion rate for mixed enrollment students (some terms FT, some PT): 14.25%
  • Community college 4-year completion rate for exclusively full-time students: 27.55%

And that data is for 4 years – 3 years would have been more dramatic simply due to the fact that it’s almost impossible to get 60 credits if you don’t take at least 12 credits per term over 3 years.

What About Cost Analysis?

The study showed that CUNY spent approximately 63% more per student for the program compared to the control group. The bigger claim, however, is that cost per graduate is actually lower (163% of the cost with 182% of the graduates). But what about the students who don’t graduate or transfer? What about the students who graduate in 4 years instead of 3? Colleges spend money on all their students, and most community college students (60%) can only go part-time and will never be able to graduate in 3 years.
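The cost-per-degree claim reduces to simple arithmetic on the study's headline numbers (the 63% cost increase and the 40% vs. 22% graduation rates quoted above are the only inputs):

```python
# Cost-per-degree comparison using the study's headline numbers.
cost_ratio = 1.63     # ASAP spent 63% more per student
grad_control = 0.22   # control-group 3-year graduation rate
grad_asap = 0.40      # ASAP 3-year graduation rate

# Cost per graduate, in units of control-group per-student spending
cpg_control = 1.0 / grad_control    # about 4.5 "student-costs" per degree
cpg_asap = cost_ratio / grad_asap   # about 4.1 -- lower, as the report claims

# The ratio 0.40 / 0.22 is the "182% of the graduates" figure in the text
graduate_ratio = grad_asap / grad_control
print(cpg_control, cpg_asap, graduate_ratio)
```

Note that this metric only works if 3-year graduation is the sole outcome that counts; spending on students who transfer, graduate in 4 years, or attend part-time disappears from the denominator, which is exactly the objection raised above.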

Even if you factor in performance-based funding, using a 3-year graduation basis is misleading. No state is considering funding only for 3-year successful graduation. If that were so, I have a much easier solution – refuse to admit any students seeking less than 12 credits per term. That will produce dramatic cost savings and dramatic increases in graduation rates . . . as long as you’re willing to completely ignore the traditional community college mission that includes:

serv[ing] all segments of society through an open-access admissions policy that offers equal and fair treatment to all students

Can It Scale?

Despite the claims that “the model works” and that CUNY has cracked the formula, does the report actually support this claim? Specifically, can this program scale?

First of all, the report only makes its claims for a small percentage of students that are predominantly young and live at home with their parents – we don’t know if it applies beyond the target group as the report itself calls out.

But within this target group, I think there are big problems with scaling. One of which is the priority enrollment in all courses, including oversubscribed courses and those available at convenient times. The control group was at a disadvantage as were all non-target students (including the growing working adult population and students going back to school). This priority enrollment approach is based on scarcity, and the very nature of scaling the program will reduce the benefits of the intervention.

I have Premier Silver status on United Airlines thanks to a few international trips. If this status gave me realistic priority access to first-class upgrades, then I would be more likely to fly United on a routine basis. As it is, however, I often show up at the gate and see myself #30 or higher in line for first-class upgrades when the cabin only has 5-10 first-class seats available. The priority status has lost most of its benefits as United has scaled such that more than a quarter of all passengers on many routes also have priority status.

CUNY plans to scale from 456 students in the ASAP study all the way up to 13,000 students in the next two years. Assuming even distribution over two years, this changes the group size from 1% of the entering freshman population to 19%. Won’t that make a dramatic difference in how easy it will be for ASAP students to get into the classes and convenient class times they seek? And doesn’t this program conflict with the goals of offering “equal and fair treatment to all students”?
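As a sanity check on those percentages (the roughly 34,000-student entering class is my own figure, back-solved from the post's 1% and 19% numbers rather than reported by CUNY):

```python
# Rough check on the priority-enrollment dilution argument.
entering_class = 34_000                       # assumed, back-solved from the post
pilot_share = 456 / entering_class            # study cohort share
scaled_share = (13_000 / 2) / entering_class  # planned rollout, per year

print(f"{pilot_share:.1%} -> {scaled_share:.1%}")  # 1.3% -> 19.1%
```

With nearly one in five entering freshmen holding priority registration, the advantage starts to resemble airline elite status: the more people who have it, the less it is worth.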

Alternate Ledes for Media Coverage of Study

I realize my description above is too lengthy for media ledes, so here are some others that might be useful:

  • CUNY and MDRC prove that enrollment correlates with graduation time.
  • Requiring full-time enrollment and giving special access to courses leads to more full-time enrollment.
  • What would it cost to double an artificial metric without asking faculty to change any classroom activities? 63% more per student.

Don’t Get Me Wrong

I’m all for spending money and trying new approaches to help students succeed, including raising graduation rates. I’m also for increasing the focus on out-of-classroom support services to help students. I’m also glad that CUNY is investing in a program to benefit its own students.

However, the executive summary of this report and the resultant media coverage are misleading. We have not cracked the formula, CUNY is not ready to scale this program or export to other colleges, and taking the executive summary claims at face value is risky at best. The community would be better served if CUNY:

  • Made some effort to separate variables and effect on enrollment and graduation rates;
  • Extended the study to also look at more realistic 4-year graduation rates in addition to 3-year rates;
  • Included an analysis of diminishing benefits from priority course access; and
  • Performed a cost analysis based on the actual or planned funding models for community colleges.
  1. And this article comes from a reporter for whom I have tremendous respect.

The post Alternate Ledes for CUNY Study on Raising Graduation Rates appeared first on e-Literate.


Sat, 2015-02-28 16:00

By Michael FeldsteinMore Posts (1020)

A little while back, e-Literate suddenly got hit by a spammer who was registering for email subscriptions to the site at a rate of dozens of new email addresses every hour. After trying a number of less extreme measures, I ended up removing the subscription widget from the site. Unfortunately, as a few of you have since pointed out to me, by removing the option to subscribe by email, I also inadvertently removed the option to unsubscribe. Once I realized there was a problem (and cleared some time to figure out what to do about it), I investigated a number of other email subscription plugins, hoping that I could find one that is more secure. After some significant research, I came to the conclusion that there is no alternative solution that I can trust more than the one we already have.

The good news is that I discovered the plugin we have been using has an option to disable the subscribe feature while leaving on the unsubscribe feature. I have done so. You can now find the unsubscribe capability back near the top of the right-hand sidebar. Please go ahead and unsubscribe yourself if that’s what you’re looking to do. If any of you need help unsubscribing, please don’t hesitate to reach out to me.

Sorry for the trouble. On a related note, I hope to reactivate the email subscription feature for new subscribers once I can find the right combination of spam plugins to block the spam registrations without getting in the way of actual humans trying to use the site.

The post Unsubscribe appeared first on e-Literate.

Greg Mankiw Thinks Greg Mankiw’s Textbook Is Fairly Priced

Fri, 2015-02-27 16:37

By Michael FeldsteinMore Posts (1020)

This is kind of hilarious.

Greg Mankiw has written a blog post expressing his perplexity[1] with The New York Times’ position that textbooks are overpriced:

To me, this reaction seems strange. After all, the Times is a for-profit company in the business of providing information. If it really thought that some type of information (that is, textbooks) was vastly overpriced, wouldn’t the Times view this as a great business opportunity? Instead of merely editorializing, why not enter the market and offer a better product at a lower price? The Times knows how to hire writers, editors, printers, etc. There are no barriers to entry in the textbook market, and the Times starts with a pretty good brand name.

My guess is that the Times business managers would not view starting a new textbook publisher as an exceptionally profitable business opportunity, which if true only goes to undermine the premise of its editorial writers.

It’s worth noting that Mankiw received a $1.4 million advance for his economics textbook from his original publisher Harcourt Southwestern, which was later acquired by the company now known as Cengage Learning. That was in 1997. Now in its seventh edition, the book exists in five different versions published by Cengage (not counting the five versions of the previous edition, which is still on the market). That said, he is probably right that the NYT would not view the textbook industry as a profitable business opportunity. But think about that. A newspaper finds the textbook industry unattractive economically. The textbook industry is imploding. Mankiw’s publisher just emerged from bankruptcy, and textbook sales are down and still dropping across the board.

One reason that textbook prices have not been responsive to market forces is that most faculty do not have strong incentives to search for less expensive textbooks and, to the contrary, have high switching costs. They have to both find an alternative that fits their curriculum and teaching approach—a non-trivial investment in itself—and then rejigger their course design to fit with the new book.

A second part of the problem is that the publishers really can’t afford to lower the textbook prices at this point without speeding up their slow-motion train crash because their unit sales keep dropping as students find more creative ways to avoid buying the book. Their way of dealing with falling sales is to raise the price on each book that they sell. It’s a vicious cycle—one that could potentially be broken by the market forces that Mankiw seems so sure are providing fair pricing if only the people making the adoption decisions had motivations that were aligned with the people making the purchasing decisions. The high cost of switching for faculty, coupled with their relative personal immunity to pricing increases, translates into a barrier to entry for potential competitors looking to underbid the established players.

Which brings me to the third reason. There are plenty of faculty who would like to believe that they could make money writing a textbook someday and that doing so would generate enough income to make a difference in their lives. Not all, and probably not even most, but enough to matter. As long as faculty can potentially get compensated for sales, there will be motivation for them to see high textbook prices that they don’t have to pay themselves as “fair” or, at least, tolerable. It’s a conflict of interest.
And Greg Mankiw, as a guy who’s made the big score, has the biggest conflict of interest of all and the least motivation of anyone to admit that textbook prices are out of hand, and that the textbook “market” he wants to believe in probably doesn’t even properly qualify as a market, never mind an efficient one.

  1. Hat tip to Stephen Downes for the link.

The post Greg Mankiw Thinks Greg Mankiw’s Textbook Is Fairly Priced appeared first on e-Literate.

Editorial Policy: Notes on recent reviews of CBE learning platforms

Fri, 2015-02-27 12:30

By Phil HillMore Posts (292)

Oh let the sun beat down upon my face, stars to fill my dream
I am a traveler of both time and space, to be where I have been
To sit with elders of the gentle race, this world has seldom seen
They talk of days for which they sit and wait and all will be revealed

- R Plant, Kashmir

Over the past half year or so I’ve provided more in-depth product reviews of several learning platforms than is typical – Helix, FlatWorld, LoudCloud, Bridge. Understanding that at e-Literate we are not a review site nor do we tend to analyze technology for technology’s sake, it’s worth asking ‘why the change?’. There has been a lot of worthwhile discussion in several blogs recently about whether the LMS is obsolete or critical to the future of higher ed, and this discussion even raised the subject of how we got to the current situation in the first place.

An interesting development I’ve observed is that the learning environment of the future might already be emerging on its own, but not necessarily coming from the institution-wide LMS market. Canvas, for all its market-changing power, is almost a half decade old. The area of competency-based education (CBE), with its hundreds of pilot programs, appears to be generating a new generation of learning platforms that are designed around the learner (rather than the course) and around learning (or at least the proxy of competency frameworks). It seems useful to get a more direct look at these platforms to understand the future of the market and to understand that the next generation environment is not necessarily a concept yet to be designed.

At the same time, CBE is a very important development in higher ed, yet there are plenty of signs that people assume CBE means students working in isolation, learning regurgitated facts assessed by multiple-choice questions. Yes, that does happen in some cases and is a risk for the field, but CBE is far richer. Criticize CBE if you will, but do so based on what’s actually happening[1].

Both Michael and I have observed and even participated in efforts that seek to explore CBE and the learning environment of the future.

Perhaps because I’m prone to visual communication, the best approach for me to work out my own thoughts on these subjects, as well as share more broadly through e-Literate, has been to do more in-depth product reviews with screenshots.

Bridge, from Instructure, is a different case. I frequently get into discussions about how Instructure might evolve as a company, especially given their potential IPO. The public markets will demand continued growth, so what will this change in terms of their support of Canvas as a higher education LMS? Will they get into adjacent markets? With the latest news of the company raising $40 million in what is likely the last pre-IPO VC funding round as well as their introduction of Bridge to move into the corporate learning space, we now have a pretty solid basis for answering these questions. Understanding that Bridge is a separate product and seeing how the company approaches both its design and lack of change to Canvas are the keys.

With this in mind, it’s worth noting some editorial policy stuff at e-Literate:

  • We do not endorse products; in fact, we generally focus on the academic or administrative need first as well as how a product is selected and implemented.
  • We do not take solicitations to review products, even if a vendor’s competitors have been reviewed. The reviews mentioned above were more about understanding market changes and understanding CBE as a concept than about the products per se.
  • We might accept a vendor’s offer of a demo at our own discretion, either online or at a conference, but even then we do not promise to cover within a blog post.

OK, the lead-in quote is a stretch, but it does tie in to one of the best videos I have seen in a while.

[Embedded video]

  1. And you would do well to read Michael’s excellent post on CBE meant for faculty trying to understand the subject.

The post Editorial Policy: Notes on recent reviews of CBE learning platforms appeared first on e-Literate.

LoudCloud Systems and FASTRAK: A non walled-garden approach to CBE

Thu, 2015-02-26 13:44

By Phil HillMore Posts (291)

As competency-based education (CBE) becomes more and more important to US higher education, it is worth exploring the learning platforms in use. While there are cases of institutions using their traditional LMS to support a CBE program, a new market is developing around learning platforms designed specifically for self-paced, fully online, competency-framework-based approaches.

Recently I saw a demo of the new CBE platform from LoudCloud Systems, a company whose traditional LMS I covered a few years ago. The company is somewhat confusing to me – I had expected a far larger market impact from them, based on their product design, than they have actually had. LoudCloud has recently entered the CBE market, not by adding features to their core LMS but by creating a new product called FASTRAK. Like Instructure with its creation of a new LMS for a different market (corporate learning), LoudCloud determined that CBE called for a new design and that the company can handle two platforms for two mostly distinct markets. In the cases of both Bridge and FASTRAK, I believe the creation of a new learning platform took approximately one year (thanks a lot, Amazon). LoudCloud did leverage several of the traditional LMS tools, such as rubrics, discussion forums and their LoudBook interactive eReader.

As with the descriptions of the Helix CBE-based learning platform and of FlatWorld’s learning platform, my interest here is not merely to review one company’s products, but rather to use the demo to illustrate aspects of the growing CBE movement.

LoudCloud’s premier CBE partner is the University of Florida’s Lastinger Center, a part of the College of Education that provides professional development for Florida’s 55,000 early learning teachers. They have or expect to have more than a dozen pilot programs for CBE in place during the first half of 2015.

Competency Framework

Part of the reason for developing a new platform is that FASTRAK appears to be designed around a fairly comprehensive competency framework embodied in LoudTrack – an authoring tool and competency repository. This framework allows the school to combine their own set of competencies along with externally-defined job-based competencies such as O*NET Online.

Competency Structure

The idea is to (roughly in order):

  • Develop competencies;
  • Align to occupational competencies;
  • Define learning objectives;
  • Develop assessments; and
  • Then design academic programs.

LoudTrack Editing

One question within CBE design is what the criterion for mastery of a specific competency should be – passing some, most, or all of the sub-competencies? FASTRAK allows this decision to be set by program configuration.

Master Scale

Many traditional academic programs have learning outcomes, but a key differentiator for a CBE program is having some form of this competency framework and up-front design.
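To make the configurable mastery decision concrete, here is a minimal sketch in Python. The names (`MasteryRule`, `is_mastered`) and the threshold values are invented for this illustration; they are not FASTRAK’s actual API, just one way a program-level mastery rule could be expressed.

```python
from dataclasses import dataclass

# Hypothetical illustration: a program-level rule deciding whether a
# competency is mastered based on its sub-competency results.
@dataclass
class MasteryRule:
    threshold: float  # fraction of sub-competencies that must be passed

    def is_mastered(self, subcompetency_results: list) -> bool:
        if not subcompetency_results:
            return False
        passed = sum(1 for ok in subcompetency_results if ok)
        return passed / len(subcompetency_results) >= self.threshold

# "All", "most", and "some" become different program configurations:
require_all = MasteryRule(threshold=1.0)
require_most = MasteryRule(threshold=0.75)

results = [True, True, True, False]
print(require_all.is_mastered(results))   # False
print(require_most.is_mastered(results))  # True
```

The point is that the pass rule is data, not code, so a program can change it without redesigning its assessments.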

A unique feature (at least among those I’ve seen so far) is FASTRAK’s ability to let faculty set competencies at the individual course level, provided in a safe area that stays outside the overall competency repository unless reviewed and approved.

The program or school can also group together specific competencies to define sub-degree certificates.
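A hedged sketch of what such a grouping might look like as a data structure; the certificate names and competency IDs below are hypothetical, not taken from the product.

```python
# Hypothetical sketch: a sub-degree certificate as a named bundle of
# competency IDs, earned once a student has mastered all of them.
certificates = {
    "early-literacy-cert": {"C1", "C2", "C5"},
    "assessment-cert": {"C3", "C4"},
}

def earned_certificates(mastered: set) -> list:
    """Return the certificates fully covered by a student's mastered set."""
    return [name for name, required in certificates.items() if required <= mastered]

print(earned_certificates({"C1", "C2", "C5", "C3"}))  # ['early-literacy-cert']
```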

Course Design Beyond Walled Garden

At the recent Instructional Technology Council (ITC) eLearning 2015 conference, I presented a view of the general ed tech market moving beyond the walled garden approach. As part of this move, however, I described how the walled garden will likely live on within the top-down designs of specific academic programs, such as many (if not most) of the CBE pilots underway.

Now it's clear what's the role @PhilOnEdTech gives to #LMS when he talks about a new "walled garden" age. #LTI +1

— Toni Soto (@ToniSoto_Vigo) February 22, 2015

What FASTRAK shows, however, is that CBE does not require a walled garden approach. Keep in mind the overall approach: start with the competency framework, move through assessments, and then design the academic program. In this last area FASTRAK allows several approaches to bringing in pre-existing content and separate applications.

Add Resource Type

The system, along with the current version of LoudBooks, is both LTI and SCORM compliant and uses this interoperability to give choices to faculty. Remember that FlatWorld prides itself on deeply integrating content, mostly its own, into its platform. While FlatWorld can bring in outside content like OER, it is the FlatWorld designers who have to do this work. LoudCloud, by contrast, puts this choice in the hands of faculty. Two very different approaches.
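For readers unfamiliar with what LTI compliance involves under the hood: an LTI 1.1 launch is an OAuth 1.0 signed HTTP POST from the platform to the external tool. The sketch below shows the standard HMAC-SHA1 signing step using only the Python standard library; the URL, key, and parameter values are made up for illustration, and a real launch would include additional user and context parameters.

```python
import base64
import hashlib
import hmac
from urllib.parse import quote

def sign_lti_launch(url: str, params: dict, consumer_secret: str) -> str:
    """Compute the OAuth 1.0 HMAC-SHA1 signature for an LTI 1.1 launch POST."""
    # 1. Percent-encode each key/value pair and sort them.
    encoded = sorted((quote(k, safe=""), quote(str(v), safe=""))
                     for k, v in params.items())
    param_str = "&".join(f"{k}={v}" for k, v in encoded)
    # 2. Build the signature base string: METHOD & URL & PARAMS, each encoded.
    base = "&".join(["POST", quote(url, safe=""), quote(param_str, safe="")])
    # 3. Sign with "consumer_secret&" as key (LTI 1.1 uses no token secret).
    key = quote(consumer_secret, safe="") + "&"
    digest = hmac.new(key.encode(), base.encode(), hashlib.sha1).digest()
    return base64.b64encode(digest).decode()

# Illustrative launch parameters (values are placeholders):
params = {
    "oauth_consumer_key": "demo-key",
    "oauth_nonce": "abc123",
    "oauth_timestamp": "1425000000",
    "oauth_signature_method": "HMAC-SHA1",
    "oauth_version": "1.0",
    "lti_message_type": "basic-lti-launch-request",
    "lti_version": "LTI-1p0",
    "resource_link_id": "course-42",
}
sig = sign_lti_launch("https://tool.example.com/launch", params, "demo-secret")
print(sig)
```

The tool on the receiving end recomputes the same signature with the shared secret and rejects the launch if it does not match, which is what lets faculty plug third-party apps into the platform without a separate login.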

LTI Apps

FASTRAK does provide a fairly impressive set of reports showing how students are doing against the competencies, which should help faculty and program designers see where students are having problems or where course designs need improving.

Competency Reporting
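The kind of aggregation behind such a report can be sketched in a few lines; the record format and function name here are assumptions for illustration, not LoudCloud’s schema.

```python
from collections import defaultdict

# Hypothetical illustration: given (student, competency, mastered) records,
# compute the share of students who have demonstrated each competency.
def competency_mastery_rates(records):
    totals = defaultdict(int)
    mastered = defaultdict(int)
    for student, competency, ok in records:
        totals[competency] += 1
        if ok:
            mastered[competency] += 1
    return {c: mastered[c] / totals[c] for c in totals}

records = [
    ("ana", "C1", True), ("ben", "C1", False),
    ("ana", "C2", True), ("ben", "C2", True),
]
print(competency_mastery_rates(records))  # {'C1': 0.5, 'C2': 1.0}
```

A competency with a low rate flags either struggling students or a course design that needs rework, which is exactly the distinction the reports are meant to surface.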


An interesting note from the demo and conversation is that LoudCloud claims that half of their pilots are CBE-light, where schools want to try out competencies at the course level but not at the program level. This approach allows them to avoid the need for regulatory approval.

While I have already called out the basics of what CBE entails in this primer, I have also seen a lot of watering down or alteration of the CBE terminology. Steven Mintz from the University of Texas recently published an article at Inside Higher Ed describing what he calls CBE 2.0, where schools are trying approaches that are not fully online or even self-paced. This will be a topic for a future post on what really qualifies as CBE and where people are just co-opting the terminology.

The post LoudCloud Systems and FASTRAK: A non walled-garden approach to CBE appeared first on e-Literate.