
Michael Feldstein

What We Are Learning About Online Learning...Online

Forbes Fantasies: Why Hillsdale College is not in the College Scorecard (hint, boring reasons)

Sun, 2015-09-20 13:00

By Phil Hill

Richard Vedder wrote a particularly uninformed article in Forbes on Friday about the Education Department (ED) not including Hillsdale College in the new College Scorecard. Freed from the burden of facts or research, Vedder let loose the dogs of conspiracy [emphasis in original].

The Obama Administration, with much hype, released its College Scorecard recently, designed to help students find the college that best fits their interest. The Scorecard includes some interesting information, such as data on student repayment of college loan debt, the average post-graduate earnings, et cetera. But as we delve more into it, something arguably sinister is revealed: the Scorecard excludes mention of several prominent colleges with a conservative or traditional academic orientation.

Vedder then speculates that the reasons for the omission are A) Hillsdale not accepting any form of federal financial assistance and the associated regulations and B) Hillsdale having a conservative orientation. At the end of the short post Vedder comes back to reality.

It may be my sinister thinking is unwarranted, that there were reasons unrelated to ideology or refusal to accept financial aid that figure in the exclusion of these schools. But even so, they are legitimate, even accredited, educational institutions, and their exclusion diminishes the utility and the perceived integrity of the new Scorecard.

Would that Richard of the First Paragraph would talk to Richard of the Last Paragraph before posting, because Richard the Last got it right. The sinister thinking is unwarranted.

Last week Russ Poulin was the first person I read who noted the missing colleges in the new College Scorecard. I subsequently did some research (with additional pointers from Russ) and discovered the “Brian criteria”:

In particular, the Education Department (ED) is using a questionable method of determining whether an institution is degree-granting rather than relying on the IPEDS data source. In a nutshell, if an institution awarded more certificates than degrees, then it is not labeled as “predominantly awarded 2-year or 4-year degrees” and is therefore excluded.

I even noted in the last article that fully one in three two-year institutions are excluded from the Scorecard even though they are degree-granting institutions.

Since I care about quality journalism, here are a few suggestions for Forbes’ editors to get to the underlying facts.

Listen to the ED Spokesperson

Richard Vedder was not the first person to write about the Hillsdale College exclusion, as his article came out 9/18 at 1:35pm. I have no idea if he saw these two earlier articles or not since he included no references or links.

In the Collegian, an answer was provided that confirms the Brian criteria above:

“Hillsdale does offer bachelor’s degrees,” Denise Horn, assistant press secretary for the U.S. Department of Education, told the Collegian. “However, because the plurality of degrees it awards are certificates, not two-year or four-year degrees, it was not included on the Scorecard at launch.”

Update 9/22: It should be noted, however, that the IPEDS instructions for reporting certificates are as follows:

Formal awards conferred by the institution as the result of completion of an academic or occupational program of study. The instructional activity completed as part of the program of study must be credit-bearing, but can be measured in credit hours, contact hours, or some other unit of measurement.

The certificates that Hillsdale College awards are for its online program, which is not credit-bearing. Think of it as outreach or public service. Therefore, Horn’s ED statement is incorrect in the case of Hillsdale College, as the college does not report any formal certificates.

The main point of this blog post still applies (missing from source data sets, not a conservative enemies list as Vedder states). I apologize for not catching this mistake.

Not In Source Data Sets

But it turns out that Hillsdale College is not only missing from the College Scorecard website, it is also missing from the underlying data dump. This is due to it not being in the two source data sets used for the Scorecard.

  • Integrated Postsecondary Education Data System (IPEDS): Hillsdale does not collect or report any data on student race, ethnicity, etc., and does not accept Title IV funding; therefore Hillsdale is not listed in IPEDS.
  • National Student Loan Data System (NSLDS): Since Hillsdale does not accept any form of federal financial aid or funding, none of its students’ data will appear in NSLDS.

A simple Google search uncovers the following statement:

This institution does not report to IPEDS but was added to the Classification Universe based on institution request.

In other words, Hillsdale College administrators are fully aware that they are not listed in IPEDS. I suspect that Richard Vedder already knows this to be true based on his understanding of the following:

I think the real reason for exclusion is two-fold. First, Hillsdale does not accept any form of federal financial assistance, and thus escapes the regulatory clutches of the Department of Education. It voluntarily offers to provide some data, but refuses to even collect data on race, which the Department insists on having.

I applaud Hillsdale College for being consistent – not reporting data they don’t collect but also not accepting any form of federal financial assistance, which is their right as a private college. But this public lack of aid and lack of reporting have consequences for federal government data listings.

30 – 60 Minutes Can Go A Long Way

Alas, the answers are quite boring, at least compared to sinister plots by the White House or the ED. Hillsdale College is not in the source IPEDS and NSLDS data sets, and even if it were, it would not pass the arbitrary Brian criteria (# degrees > # certificates) applied to all institutions.

I certainly fault the ED for releasing the Scorecard and underlying data dumps with such glaring omissions – remember, one in three two-year schools are excluded. I also fault the ED for using the misleading first-time full-time graduation rate as a centerpiece of reporting – see this post for an example of how misleading that data is. I will also note that any schools with philosophies similar to Hillsdale’s – not wanting to be tangled up with federal regulations – will also be excluded if they are not in the IPEDS or NSLDS data sources. I am in complete agreement with this part of Richard Vedder’s post:

But even so, they are legitimate, even accredited, educational institutions, and their exclusion diminishes the utility and the perceived integrity of the new Scorecard.

But that is a far cry from sinister views that the White House or ED is actively trying to exclude schools of a particular political viewpoint. It took me approximately 20 minutes to look up and find the actual answers to why Hillsdale College is not represented in the College Scorecard. Let’s pretend that I am an obsessed individual and that others might take 30 or 60 minutes to do the same. Journalists should spend some time doing research on their topics, and editors of national publications (Forbes, Daily Caller) should apply some standards to their articles.


The University As Ed Tech Startup: UMUC, Global Campus, Texas, and SNHU roll their own

Fri, 2015-09-18 13:08

By Phil Hill

Today the University of Maryland University College (UMUC) announced its plan to spin off its Office of Analytics into a separate for-profit ed tech company.

The University System of Maryland Board of Regents today approved a University of Maryland University College (UMUC) plan to spin off its Office of Analytics into a new company, HelioCampus, that will provide business intelligence products and services to universities nationwide. [snip]

The new company will provide a foundational analytics platform and data analysis services. This comprehensive offering will include all the tools needed to support or jumpstart an analytics program. HelioCampus will host a secure platform in the cloud that will include flexible data models and best-in-class visual analytics to accelerate analysis.

The technology will be complemented by a team of higher-education and business-intelligence experts. Each institution will be assigned a dedicated analyst that will partner with key stakeholders to interpret the data and highlight key trends.

The Chronicle quotes two UMUC execs – CIO Pete Young (staying at UMUC) and VP for Analytics Darren Catalano (leaving UMUC to become HelioCampus’ chief executive). It is worth noting, as Stephen deFilipo did on Twitter, that both Pete Young and Darren Catalano come from Rosetta Stone.

“The market as it currently stands has some very traditional software tools,” said Peter C. Young, UMUC’s senior vice president for analytics, planning, and technology. “You buy software licenses, you pay for very expensive implementation. Fundamentally, you’re doing your own thing.” [snip]

“We know what data is valuable and what data is not,” said Darren Catalano, UMUC’s vice president for analytics. Most colleges aren’t using their data effectively, he said, and the ability to understand your own data “is no longer optional in higher education.”

This story, however, is important beyond just UMUC and HelioCampus, as it is part of a growing trend for universities to act as ed tech startups. This can be done either with a for-profit spinoff company or with an internal incubator.

SNHU and Motivis Learning

SNHU, the university best known for its College for America competency-based education (CBE) program, spun off its learning platform into a separate company. As reported by Inside Higher Ed in September 2014:

Southern New Hampshire University, seeing an opening in the market for a learning management system designed around competency-based education, is spinning off the custom-made system it built to support College for America.

Before College for America launched in January 2013, the university considered building a platform to support the competency-based education subsidiary on top of the learning management system used on campus, Blackboard Learn. The university instead picked Canvas, created by Instructure, but after only a couple of months, “we decided we needed to build our own,” said Paul J. LeBlanc, president of the university. [snip]

On Oct. 1, one year after the system went live at College for America, the university is spinning it off as Motivis Learning and writing the for-profit subsidiary a $7 million check. In its first phase, LeBlanc said, the company will further develop its platform based on how other institutions are approaching competency-based learning.

University of Texas and UTx / TEx

The University of Texas System created its Institute for Transformational Learning (ITL) in 2012 with a $50 million kickstart, an office leading a series of ed tech initiatives. One of the ITL’s signature initiatives is a “next-generation educational ecosystem” called TEx that supports the U Texas CBE programs. From Inside Higher Ed in November 2014:

The institute is building its own competency-based platform, but is drawing pieces from several vendors to put it together.

Marni Baker Stein, the institute’s chief innovation officer, described a “stack” of elements, dubbed TEx, for Total Educational Experience. TEx will include customer relationship software, adaptive technology, a “cross-contextual” user interface and mobile technology. RealizeIt from CCKF, Big Tomorrow and Robots and Pencils are some of the firms the system has tapped.

CSU Global Campus

Like UMUC, the Colorado State University System’s online arm – the Global Campus – is also in the middle of creating a spin-off for-profit company partially based on analytics. It has not made public the selected name of the venture, so internal documents list it as “NewCo”. The February 2015 board of governors’ meeting minutes describe the goals.

President Takeda-Tinker explained the concept of NewCo as a tech transfer entity to leverage CSU-Global Campus’ expertise to serve market needs. Services would be offered to other educational institutions and business sectors, and would include consulting, technology licenses, and outsource services. NewCo would benefit the CSU System and CSU-Global Campus in multiple ways in planning for future financial viability. The suggestion was made to provide examples of Colorado tech transfer entities in the business plan.

From what I can read in the subsequent meeting notes, CSU has approved the creation of the entity, selected (but not made public) the name, and is currently setting it up as a legal entity owned by the CSU Foundation. The key elements of technology are portals (faculty, staff, and student) as well as a faculty management system. This last one is interesting: it is based on systems that verify how much time instructors actually spend on each course and what their level of interaction is with students, plus other monitoring software that can be controversial outside of large online programs.

There are more details in this slide presentation that was part of the December 2014 meeting.


More to Come

Are these efforts paving the way for universities that know their own business to create profitable ed tech and services offerings based on unique insights into how schools really work, or are they vehicles for star-struck administrators seeing glory and easy revenues? Or both? Only time will tell, but I would expect to see more announcements of a similar nature over the next year or two. Watch for more coverage here.


College Scorecard Problem Gets Worse: One in three associate’s degree institutions are not included

Wed, 2015-09-16 16:54

By Phil Hill

Late yesterday I posted about the Education Department’s (ED) new College Scorecard and how it omits a large number of community colleges based on an arbitrary metric.

In particular, the Education Department (ED) is using a questionable method of determining whether an institution is degree-granting rather than relying on the IPEDS data source. In a nutshell, if an institution awarded more certificates than degrees, then it is not labeled as “predominantly awarded 2-year or 4-year degrees” and is therefore excluded.

I am now quite confident that the explanation for the vast majority of missing schools is based on this finding. In short, if an institution awards more certificates than degrees, ED removes it from the public-facing website even if it is technically a degree-granting institution.

Originally it appeared this situation encompassed 17% of all community colleges, but further analysis shows it to be more significant. Rather than using the official IPEDS definition of sector (public 4-year, public 2-year, etc.), the College Scorecard looks at the largest number of degrees awarded and uses an Associate’s or Bachelor’s classification. This primarily affects community colleges that offer bachelor’s programs but where the majority of graduating students earn two-year Associate’s degrees. Think of the recent legislation in California allowing certain community colleges to offer new bachelor’s programs.

To account for this definition, I took the IPEDS data (2012-13 school year) and created new ‘sectors’ called Public 4-year AD, Private 4-year AD, and For-profit 4-year AD, with the AD standing for Associate’s Degrees. I then combined these AD schools with the corresponding Public 2-year, Private 2-year, and For-profit 2-year schools into Combined Public Associate’s, Combined Private Associate’s, and Combined For-profit Associate’s categories. The result: I can now come close to matching the results of the College Scorecard, meaning that this definition (called the Brian Criteria after the commenter who described it on Stack Exchange) does indeed account for a large majority of the missing institutions.
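To make that reclassification concrete, here is a minimal pandas sketch of the approach for the public sector only (the column names and sample rows are mine for illustration; the actual IPEDS extract uses different variable names):

```python
import pandas as pd

# Illustrative IPEDS-style extract: one row per institution, with its
# official sector and counts of degrees awarded in 2012-13.
df = pd.DataFrame({
    "institution": ["School A", "School B", "School C"],
    "sector":      ["Public 4-year", "Public 2-year", "Public 4-year"],
    "associates":  [900, 1200, 150],
    "bachelors":   [300, 0, 2000],
})

# Relabel 4-year schools whose predominant award is the associate's
# degree as "Public 4-year AD"...
is_ad = (df["sector"] == "Public 4-year") & (df["associates"] > df["bachelors"])
df.loc[is_ad, "sector"] = "Public 4-year AD"

# ...then fold those AD schools in with the 2-year sector to form the
# combined Associate's category used in the analysis above.
to_combined = {
    "Public 4-year AD": "Combined Public Associate's",
    "Public 2-year": "Combined Public Associate's",
}
df["combined_sector"] = df["sector"].map(to_combined).fillna(df["sector"])
print(df[["institution", "combined_sector"]])
```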

The numbers are even bigger than I thought.


  • IPEDS Listing = number of US, Title IV schools, with degree-seeking students and degrees awarded in 2012-13
  • Brian Criteria = number of schools passing Brian Criteria on IPEDS data
  • Scorecard = number of schools returned on College Scorecard using Advanced Search

Fully one in four Public Associate’s Degree institutions (mostly community colleges) are not listed on the College Scorecard.

More than four in ten For-profit Associate’s Degree institutions are not listed.

Overall, almost one in three Associate’s Degree institutions are not listed.

These categories account for almost half of all US postsecondary students.

Cali Morrison from WCET described a big problem with this approach by email.

In my opinion, where this hurts the most is in promoting to potential students the idea of stackable credentials. Many of these certificates are awarded as an interim step to an associate degree. They are high-quality certificates that lead to job potential and contribute to a student’s eventual degree. These institutions not appearing on this touted ‘good data’ site produced by the government may make some students shy away from what could be a really useful and employable credential.

Consider some of the worst examples, in terms of degree-granting institutions not included in the Scorecard:

[Chart: Top 30 Silly Factor]


Is Moodle “Bigger than Martin”?

Wed, 2015-09-16 10:27

By Michael Feldstein

In his recent post on why Moodle matters, Phil wrote,

For a large portion of our readers who deal mostly with US higher education, it could be easy to dismiss Moodle as an LMS and an idea past its prime.[…]And yet no other academic LMS solution comes close to Moodle in terms of worldwide deployments and learners enrolled.

Likewise, if you’re not embedded in the Moodle community, you may not know how central Moodle creator Martin Dougiamas is to that project, or even know who he is. And yet, he is huge. I can name maybe a handful of people who are relatively widely known and respected in educational technology. I can name only a few who are admired and even beloved. I can name very few indeed who are not working for universities (although that’s changing a little, now that Jim Groom and David Wiley have both joined commercial ventures). Within the circle that knows him, Martin’s many admirers have been fiercely loyal to and protective of him, trusting him absolutely to steer Moodle the product, Moodle the community, and Moodle the brand. When you add to that the size of Moodle’s adoption footprint, one can make the case that Martin Dougiamas is one of the most consequential figures in the history of educational technology.

Which is why it is so remarkable that Phil and I are hearing, for the first time ever, from a number of different, independent sources, the phrase, “Moodle is bigger than Martin now.” It is another indicator that Moodle is reaching an inflection point.

For starters, it is important to understand just how central Martin is to the Moodle ecosystem. Although Moodle is open source, all substantial development of Moodle core (as opposed to optional plugins) flows through Martin’s privately owned company, Moodle Pty (more commonly referred to in the Moodle community as “Moodle HQ” or just “HQ”). Here is a slide from Martin’s recent Moodle moot keynote:

[Slide screenshot from the keynote video]

(Sorry for the poor image quality; it’s a screen grab from the video.)

Moodle Partners sell support, services, and proprietary add-ons around the core open source platform. They then tithe about 10% of their Moodle-related gross revenues to Moodle Pty, which uses the money to hire developers, under Martin’s direction, to work on the next versions of Moodle.  In Phil’s recent interview with him, Martin shied away from using the term “benevolent dictator,” but if you read his comments closely, he is more objecting to the connotations of the word “dictator” than he is to the characterization of him as the guy in charge. (In the past, Martin has used the term to describe his role.)

For people inside the Moodle community, this has been considered a feature, not a bug, because they trust Martin. Former Moodlerooms CEO Lou Pugliese recently told us,

Working with Martin Dougiamas has been one of the highlights of my career… he’s a brilliant, honest broker in global eLearning whose values remain steadfastly uncompromised in continuing to advance open source alternative in the market.[1]

In my experience, that is not an unusual statement. With minimal effort, I could string together a dozen similar quotes from both major and minor figures in the Moodle community. Phil and I heard a number of equally glowing statements in our recent spade work for this post series. In the eyes of many folks in the community, even now, Martin is Moodle. Moodle is Martin.

In fact, the personal loyalty to Martin has been so fierce that it has made reporting on Moodle difficult at times. For example, in the past I have tried to write about Moodle’s financial sustainability model. It is unprecedented in ed tech and has, up until now, been pretty wildly successful. But I couldn’t get much real information about it. It wasn’t a secret, exactly. Martin gave an overview of the way the ecosystem works in my 2010 interview with him. It’s just that the financial side of it wasn’t talked about, and certainly not in the level of detail that would enable me to do meaningful analysis. Martin himself doesn’t like to talk about finances, as he admits in the keynote from which the slide above comes. He tells us this, tells us he is going to talk about it anyway now, and then launches into a 45-minute keynote in which he…mostly doesn’t talk about finances. There’s a little more than that slide, but really, not that much. It wasn’t any different back when I was trying to write about the Moodle sustainability model the first time. When I went to the Moodle Partners to ask them about the details, none of them would say much. What I really wanted to know was exactly how much money was flowing through the system and where it was going. I couldn’t come close to getting those numbers. They all deferred to Martin. I would know whatever he was comfortable with me knowing. And yet when I asked whether I should approach Martin directly, I received more than one (friendly) suggestion that it would probably be a bad idea. Martin doesn’t like to talk about money, I was told.

I thought about writing a piece highlighting how the financial information wasn’t available. I shared this thought with somebody who was not a Moodle Partner but who I knew was a mutual friend of Martin’s and mine. I raised the point that here is a substantial amount of money flowing through an organizational structure that we don’t fully understand, supporting the world’s most widely adopted LMS, into a private company owned by one guy, and we have no visibility into how it works, how much money we are talking about, or where it’s going. He replied, “I can understand why that would be concerning from an American point of view.” [Subtext: I don’t share your concern.] “But I worry that if you write that piece, Martin will just clam up even more. You won’t get anywhere. He doesn’t like to talk about money.”

In the end, I never wrote that story. I had no reason to believe that anything shady was going on, no information that would shed new insights, and no desire to take a swing at a project and a person that both seemed to be doing net good in ed tech. In effect, the unity of the Moodle community behind Martin became the more important story for me, and although the lack of transparency bothered me, it didn’t bother me enough to feel like I needed to insert myself into that story.

Which is why, when members of the Moodle community, including but not limited to current and past Moodle Partners, are beginning to talk to us about their concerns regarding the direction of Moodle, and the phrase “Moodle is bigger than Martin now” is popping up in different, independent conversations, I interpret these as indicators that something serious is afoot. They reinforce the sense of change we were already getting from seeing Totara fork from Moodle, Remote Learner leave the Partner Program and sell off its UK subsidiary, and regional Partners such as Nivel Siete sell themselves to Blackboard. The point of my focus on Martin in this post is not whether people think he’s a swell guy or not. (For the record, they still do. Even those folks who are saying that “Moodle is bigger than Martin” are careful to quickly follow that statement with a declaration that they know Martin has the best interests of the community at heart.) Rather, it speaks to Moodle’s sustainability model going forward. Right now, as the graphic above shows, the substantial majority of Moodle development resources flow through Moodle HQ. And Martin is Moodle HQ. Moodle HQ is Martin. When somebody says, “Moodle is bigger than Martin,” what they really mean is that Moodle is bigger than Moodle HQ. They are effectively questioning whether Moodle development resources should flow through Moodle HQ.

There is an interesting bifurcation in the Moodle sustainability model, in the sense that it depends on both a large grassroots community to drive interest and energy and a relatively small circle of commercial partners to generate revenue for development. In light of the above, that raises a few questions. Will commercial partners begin to move away from Moodle HQ in sufficient numbers to substantially impact the development resources available for Moodle core? And what are the forces that would drive this divergence? Also, if the commercial partners move away from Martin and Moodle HQ, will the Moodle-adopting schools be both willing and able to vote with their feet and leave their commercial partners in sufficient numbers to impact those companies? And finally, if these tensions play out as actions, what will happen to Moodle? There are some hints and possibilities of alternative sustainability models and alternative futures that Phil will play out in a future post.


  1. You can see my 2011 Skype interview with Lou, six months after he had taken the helm at Moodlerooms, here.


17% Of Community Colleges Are Not Included In College Scorecard

Tue, 2015-09-15 18:45

By Phil Hill

In addition to the highly misleading usage of the ‘first-time full-time’ qualification for official graduation rates reported in the College Scorecard, there appears to be another major issue with the data. In particular, the Education Department (ED) is using a questionable method of determining whether an institution is degree-granting rather than relying on the IPEDS data source. In a nutshell, if an institution awarded more certificates than degrees, then it is not labeled as “predominantly awarded 2-year or 4-year degrees” and is therefore excluded.

Russ Poulin noted in his WCET post that several community colleges were missing from the Scorecard based on a quick spot check:

  • Colorado – Aims, Front Range, Pueblo, and Otero Community Colleges.
  • Arizona – Rio Salado College.
  • California – Bakersfield College.

The consumer website itself offers no explanation that certain degree-granting schools are excluded. The Technical Paper that corresponds to the data release explains the selection of schools on page 28:

The data presented in this report cover the universe of all institutions of higher education that are active participants in the Title IV federal aid program, are located in the 50 states and the District of Columbia, and both have a positive number of degree-seeking undergraduates and awarded some undergraduate credentials in the 2012-2013 school year. The IPEDS definition of institution is used as the basic level for reporting data, though for many analyses in this report a higher level of aggregation (i.e., six-digit OPEID) is used due to data limitations.

OK, this indicates the following criteria:

  • Title IV participation
  • Located in US
  • Positive number of degree-seeking undergraduates
  • Awarded some undergraduate credentials in 2012-2013
  • Listed as institution under IPEDS

All schools mentioned by Russ (and others that I could find) are listed in the actual data files, but they are not on the consumer website. And all meet the criteria listed above. Something is wrong here.

There are others, such as Robert Kelchen, who have noted the missing colleges. In this Stack Exchange discussion I think we find the answer to the missing data in an answer from Brian, who works at the ED.

At the present time, the Scorecard site is limited to institutions that predominantly grant 2-year and/or 4-year degrees. The Department of Education is exploring whether and how to add predominantly less-than-two-year schools that offer associate’s or bachelor’s degree programs to the site in future iterations of Scorecard.

Brian further clarifies in a separate answer.

Predominant degree is calculated by the most common category of award conferred by the institution, calculated from counts of awards included in the IPEDS Completions component.

Sure enough, Front Range Community College awarded 1,771 certificates and 1,693 associate’s degrees in 2012-2013. Based on the private Brian criteria, this cuts Front Range out of the current Scorecard. I did a quick check of all public 2-year degree-granting institutions, and these omissions are significant. IPEDS lists 939 schools meeting these criteria while the College Scorecard lists only 780. That’s 17% of all degree-granting community colleges that are not included in the College Scorecard. In Colorado, only 6 of 13 community colleges are included. In California, only 94 of 112 are included.[1]
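For concreteness, the Brian criteria reduces to a one-line comparison; here is a minimal sketch (the function name is mine), using the Front Range numbers above:

```python
def passes_brian_criteria(degrees: int, certificates: int) -> bool:
    """A school is listed on the Scorecard only if the degrees it
    awards outnumber its certificates."""
    return degrees > certificates

# Front Range Community College, 2012-13: 1,693 associate's degrees
# vs. 1,771 certificates, so it is cut from the Scorecard.
print(passes_brian_criteria(degrees=1693, certificates=1771))  # False
```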

Furthermore, this private criteria explained by Brian is not mentioned in the technical documentation, and it is not mentioned on the website. Why did ED not simply use the well-known and accepted IPEDS classification of “degree-granting”? I can only assume that they wanted to exclude measurements of certificates, but they ended up eliminating a significant number of community colleges in the process. And they did so in an undocumented, arbitrary manner.

Again, I ask: is this how the ED is going to help consumers make informed choices?

Update 9/15: To do more extensive checking, I applied Brian’s criteria above to IPEDS data for the first 3 pages of College Scorecard results, alphabetically sorted. For IPEDS I listed all degree-granting public 2-year US-based institutions. For Scorecard I searched public, associate’s. The results:

  • College Scorecard = 60 results
  • IPEDS = 84 results
  • For every IPEDS result where # certificates > # degrees, the school was not on the Scorecard
  • There were 7 schools on the Scorecard that were not on the IPEDS 2-year list because IPEDS classifies them as public 4-year, even though the vast majority of their degrees are associate’s

In other words, Brian’s criteria (# degrees must be > # certificates to be listed) does seem to explain the missing colleges.

Update 9/16: For another sanity check, I downloaded IPEDS data and filtered on the public criteria listed above (Title IV, US, positive # of degree-seeking students, some awards). I then applied Brian’s criteria on my own in Excel to see if I could come close to matching results with the Scorecard. Due to the differences in how IPEDS classifies sectors as 2-year and 4-year whereas the Scorecard uses Associate’s and Bachelor’s, I combined Public 4-year and Public 2-year data together. The results:

  • 1,617 institutions that meet the public criteria
  • 1,355 institutions that pass Excel version of Brian’s criteria
  • 1,368 institutions listed in College Scorecard
  • That means 15 – 16% of all public institutions are missing from the Scorecard

I haven’t figured out all of the nuances of how ED has created its public-facing website (note that it does not allow downloading of results for independent analysis), but this is pretty close. I strongly believe that Brian’s private criteria explains the vast majority of missing public institutions in the College Scorecard.[2]
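For anyone who wants to reproduce the check, the whole pipeline amounts to two filters; here is a rough sketch assuming a flattened IPEDS extract (the file name and column names are illustrative, not actual IPEDS variable names):

```python
import pandas as pd

ipeds = pd.read_csv("ipeds_2012_13.csv")  # illustrative file name

# Public criteria from the Technical Paper: Title IV participation,
# US location, degree-seeking undergraduates, and some awards conferred.
public = ipeds[
    ipeds["title_iv"]
    & ipeds["located_in_us"]
    & (ipeds["degree_seeking_undergrads"] > 0)
    & (ipeds["credentials_awarded"] > 0)
]

# Brian's criteria: listed only if degrees outnumber certificates.
listed = public[public["degrees_awarded"] > public["certificates_awarded"]]

print(len(public))  # ~1,617 institutions meeting the public criteria
print(len(listed))  # ~1,355 passing Brian's criteria (Scorecard lists 1,368)
```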

  1. Note that I spot-checked some other missing schools to see if this private criteria is the explanation (and it was), but I did not exhaustively check all cases. I will correct and update the post as needed.
  2. I have not had time to do the same analysis on private and for-profit sectors.


College Scorecard: An example from UMUC on fundamental flaw in the data

Mon, 2015-09-14 13:33

By Phil Hill

Russ Poulin at WCET has a handy summary of the new College Scorecard produced by the Education Department (ED) and the White House. This is a “first read” given the scorecard’s Friday release, but it is quite valuable since Russ participated on an ED Data Panel related to the now-abandoned Ratings System, the precursor to the Scorecard. Russ describes the good, the “not so good”, and the “are you kidding me?” elements. One area in particular highlighted by Russ is the usage of the “dreaded first-time, full-time completion rates”:

I knew this would be the case, but it really irks me. Under current data collected by the Department’s IPEDS surveys, they define the group on which they base their “Graduation Rate” as: “Data are collected on the number of students entering the institution as full-time, first-time, degree/certificate-seeking undergraduate students in a particular year (cohort), by race/ethnicity and gender; the number completing their program within 150 percent of normal time to completion; the number that transfer to other institutions if transfer is part of the institution’s mission.”

This rate has long been a massive disservice to institutions focused on serving adults and community colleges. Here are some example rates: Empire State: 28%, Western Governors University: 26%, University of Maryland University College: 4%, Charter Oak College: no data, and Excelsior College: no data. The problem is that these numbers are based on incredibly small samples for these schools and do not reflect the progress of the bulk of the student body.

I won’t quote data for community colleges because they are all negatively impacted. They often serve a large number of students who are not “first-time” or define “success” in other ways.

I know that they are working on a fix to this problem in the future. Meanwhile, who atones for the damage this causes to these institutions’ reputations? This data display rewards colleges that shy away from non-traditional or disadvantaged students. Is this what we want?

Russ is not the only one noting this problem. Consider this analysis from Friday [emphasis added]:

The most commonly referenced completion rates are those reported to IPEDS and are included on the College Scorecard (measuring completion within 150 percent, or six years, for predominantly four-year colleges; and within four years for predominantly two- or less-than-two-year schools). However, they rely on a school’s population of full-time students who are enrolled in college for the first time. This is increasingly divergent from the profile of the typical college student, particularly at many two-year institutions and some four-year schools. For instance, Marylhurst University in Oregon, a four-year institution that has been recognized for serving adult students, reportedly had a 23 percent, six-year completion rate – namely because a very small subset of its students (just one percent) fall in the first-time, full-time cohort used to calculate completion rates. As with many schools that serve students who already have some college experience, this rate is, therefore, hardly representative of the school’s student body.

Who wrote this critical analysis, you ask? The Education Department, in its own Policy Paper on the College Scorecard (p 17). Further down the page:

The Department has previously announced plans to work with colleges and universities to improve the graduation rates measured by the IPEDS system. Beginning in 2016, colleges will begin reporting completion rates for the other subsets of their students: first-time, part-time students; non-first-time, full-time students; and non-first-time, part-time students. In the meantime, by using data on federal financial aid recipients that the Department maintains in the National Student Loan Data System (NSLDS) for the purposes of distributing federal grants and loans, we constructed completion rates of all students receiving Title IV aid at each institution. For many institutions, Title IV completion rates are likely more representative of the student body than IPEDS completion rates – about 70 percent of all graduating postsecondary students receive federal Pell Grants and/or federal loans.

Given concerns about the quality of historical data, these NSLDS completion rates are provided on the technical page, rather than on the College Scorecard itself.

In other words, ED is fully aware of the problems of using IPEDS first-time full-time completion data, and it has plans to help improve the data, yet it chose to make fundamentally flawed data a centerpiece of the College Scorecard.

Furthermore, the Policy Paper also addressed the need to understand transfer rates and not just graduation rates (p 18) [emphasis in original]:

The Administration also believes it is important that the College Scorecard address students who transfer to a higher degree program. Many students receive great value in attending a two-year institution first, and eventually transferring to a four-year college to obtain their bachelor’s degrees. In many cases, the transfer students do not formally complete the two-year program and so do not receive an associate degree prior to transferring. When done well, with articulation agreements that allow students to transfer their credits, this pathway can be an affordable and important way for students to receive four-year degrees. In particular, according to a recent report from the National Center for Education Statistics (NCES), students were best able to transfer credits when they moved from two-year to four-year institutions, compared with horizontal and reverse transfers.

To address this important issue, ED put the transfer data it has not on the consumer website but on the technical and data site (massive spreadsheets, data dictionaries, and crosswalks, all found here). Why did ED not make this data easier to find? The answer is in a footnote:

We hope to be able to produce those figures for consumers after correcting for the same reporting limitations as exist for the completion rates.

To their credit, ED does address these limitations thoroughly in the Policy Paper and the Technical Paper, but very few people will read them. The end result is a consumer website that is quite misleading. Knowing all the problems of the data, this is what you see for UMUC.


Consider what prospective students will think upon seeing this page: “UMUC sucks; I’m likely to never graduate.”

UMUC points out in this document that less than 2% of its student body is first-time full-time, and that the real results paint a different picture.

[Images from the UMUC report: retention and graduation data]

Consider the harm done to prospective UMUC students by seeing the flawed, over-simplified ED College Scorecard data, and consider the harm done to UMUC as it has to play defense and explain why prospects should see a different situation. Given the estimate that non-traditional students – those who would not be covered at all in IPEDS graduation rates – comprise more than 70% of all students, you can see how UMUC is not alone. Community colleges face an even bigger problem with the lack of transfer rate reporting.

And this is how the ED is going to help consumers make informed choices?

Count me as in agreement with Russ in his conclusions:

The site is a good beginning at addressing the needs of the traditional student leaving high school and seeking a college. It leaves much to be desired for the non-traditional students who now comprise a very large portion of the college-seeking population.

I applaud the consumer-focused vision and hope that feedback continues to improve the site. I actually think this could be a fantastic service. I just worry that in the haste to get it out we did not wait until we had the data to do it correctly.


Release of Analysis Episode for e-Literate TV Series on Personalized Learning

Thu, 2015-09-10 09:01

By Phil Hill

Today we are thrilled to release the final episode in our new e-Literate TV series on “personalized learning”. In this series, we examine how that term, which is heavily marketed but poorly defined, is implemented on the ground at a variety of colleges and universities. While today’s episode is the final one released due to its analysis of what we learned in the five case studies, it was designed to be used as an introduction to the series.

We have deliberately held back from providing a lot of analysis and commentary within each case study – letting faculty, students, administrators, and staff speak for themselves – but in today’s episode we share some of the key lessons we learned. We had a variety of schools profiled in the series, and our analysis addresses the commonalities and differences that we saw. You can see the analysis episode at this link or in the embed below.


Introduction: What Did We Learn In Our Personalized Learning Series?

This episode introduces a new feature in e-Literate TV. We can now embed other episodes as well as YouTube videos directly in this episode. As viewers watch us discuss different lessons, they can also watch additional video-based discussions in context for a deeper dive into the topic at hand. As we discuss specific case studies (Middlebury College, Essex County College, Arizona State University, Empire State College, or UC Davis), the actual case study episodes appear on the right side.

[Screenshot: embedded episodes]

We also had the chance to participate in a panel discussion on the series at the EDUCAUSE Learning Initiative (ELI) conference along with Malcolm Brown and Veronica Diaz. They made some of their own observations and asked some excellent questions. We have embedded the specific questions from the conference as YouTube videos with the analysis episode.

[Screenshot: embedded ELI questions]

e-Literate TV, owned and run by MindWires Consulting, is funded in part by the Bill & Melinda Gates Foundation. When we first talked about the series with the Gates Foundation, they agreed to give us the editorial independence to report what we find, whether it is good, bad, or indifferent.

As with the previous series, we are working in collaboration with In the Telling, our partners providing the platform and video production. Their Telling Story platform allows people to choose their level of engagement, from just watching the video to accessing synchronized transcripts and transmedia. We have added content directly to the timeline of each video, bringing up further references, like e-Literate blog posts or relevant scholarly articles, in context. With In The Telling’s help, we are crafting episodes that we hope will be appealing and informative to those faculty, presidents, provosts, and other important college and university stakeholders who are not ed tech junkies.

We welcome your feedback, either in comments or on Twitter using the hashtag #eLiterateTV. Enjoy!


Why Moodle Matters

Mon, 2015-09-07 13:20

By Phil Hill

For a large portion of our readers who deal mostly with US higher education, it could be easy to dismiss Moodle as an LMS and an idea past its prime. Market data from Edutechnica shows that since 2013 the number of adopting US institutions has hit a plateau, but the total number of students served (number of institutions using Moodle as primary LMS scaled by enrollment) has actually decreased. Gone are the days of UCLA, LSU, Athabasca, North Carolina Community Colleges, UNC Charlotte and other high-profile Moodle selections from the late 2000s.[1] Pop quiz: name the last significant LMS decision process in US higher ed that selected Moodle as a replacement for another system.

. . .

And yet no other academic LMS solution comes close to Moodle in terms of worldwide deployments and learners enrolled. Even if you’re a US college or university, consider several reasons why Moodle matters.

Most Common LMS For Small Schools

Again using Edutechnica data (this time from early 2014), for US institutions with enrollments of 2,500 FTE or less, Moodle is the market leader, surpassing Blackboard Learn.

[Chart: Edutechnica Spring 2014 data for small institutions]

Most Prevalent LMS In The World Based On Self-Reported Data

This is a tricky measurement as different solutions report their data differently and there is a motivation to inflate numbers, but it is still a useful data source to consider. The commercial LMSs release their number of worldwide clients and total number of students / learners in press releases, blog posts, or article references – typically combining data across all markets (higher ed, K-12, corporate learning). Open source solutions like Moodle and Sakai tend to list or summarize their actual adopting institutions on public web pages.

This data, however, has a large margin for error.


  • On one hand, a large number of instances are for 500 students or less (as shown above), indicating personal or ad hoc non-institutional usage.
  • On the other hand, system registration is voluntary and no one knows how many organizations just don’t report. For example, Blackboard’s Moodlerooms clients are not counted, as they use a SaaS model without separate installations.

We can eliminate much of the personal or ad hoc usage by excluding the 46,000 sites and 3.6 million students where enrollment is less than 500 per site. This leaves a conservative estimate of 14,600 sites (> 500 enrollment) and 73 million students.

Even with these conservative estimates, Moodle’s numbers are greater than those of any other academic LMS and are comparable to the largest corporate LMS (Skillsoft, after its acquisition of SumTotal, with 10,000 clients and 60 million end users). It is likely that even with the noise in the system, Moodle is still the most prevalent academic LMS in the world and on the same rough level as Skillsoft and Edmodo[2] for overall LMS.

Institutional Reporting

It is not easy to get worldwide LMS market data that does not come directly from the provider. The best one that I know of for academic usage is ListEdTech, which uses a wiki along with staff data collection and cleanup for higher education. This chart from 2013 shows the most commonly deployed LMS in each country, and Moodle has the most countries, by far.


Anecdotally, every time I have spot-checked a country – as I did in Colombia, talking to institutional staff and vendor resellers whose business it is to know their local market – I get similar results showing Moodle as the most common system outside of the US.

Online Service Providers

Two years ago I wrote about how Moodle had “quietly become the dominant LMS for online service providers”, noting:

It is this market where the Moodle LMS has seen its most impressive growth in the US in the past two years, as approximately half of these providers use Moodle as their core learning platform. Typically the companies customize the platform by extensions to Moodle and white labeling. This market is interesting, as the companies themselves are growing, leading to organic growth in LMS usage.

Series of Posts

What began as an idea for a standalone Moodle analysis post has now morphed into a series of posts that Michael and I will write over the next couple of weeks. The reason we are putting this much emphasis on Moodle is that it matters a great deal, even in the US higher education market. With this large worldwide presence, it is natural that Moodle has become big business – institutions and corporations rely on Moodle for their core learning platform. Perhaps because of this usage growth, there appear to be some significant changes that to me indicate an inflection point for the Moodle community and how it allocates resources.

What changes are going on? How sustainable is this particular open source model? How well is the system adapting to new LMS solutions and end user expectations? Stay tuned for more.

  1. Disclosure: I helped UCLA with their LMS selection.
  2. I don’t consider Edmodo a full-fledged LMS, but it does claim to have 51 million end users.


In Defense of the Lecture

Sun, 2015-09-06 10:28

By Michael Feldstein

Following the IHE piece on Essex County College’s struggles to get good outcomes from its personalized learning program in developmental math, and following my blog post on the topic, Phil and I had an interesting email exchange with ECC’s Vice President for Planning, Research, and Assessment Doug Walercz. With his permission, I’d like to share some of his observations with you. One of the big takeaways from the conversation, for me, is that our cultural notion of the pedagogical work that happens in a good lecture is pretty impoverished relative to the reality. We don’t have a clear understanding of all the things that a good lecture accomplishes, and therefore we often lose valuable elements of student support when we try to replace it. This has pretty serious implications for MOOCs, flipped classrooms, personalized learning, and a wide array of pedagogical approaches that replace a traditional in-person lecture with something else.

Here is part of Doug’s initial response to my blog post and the IHE article:

My understanding of the experience at Essex continues to evolve, and it is something like this:

  1. Good teachers deliver a multi-dimensional learning experience, and the experience usually revolves around the content. When I say multi-dimensional, I mean that while the teacher is delivering the content, she is also assessing prior knowledge, building a positive classroom climate, establishing (high) goals for student performance, developing metacognition, dividing complex knowledge into manageable pieces, providing motivation for learning, and helping organize knowledge around key features. And all of these threads are woven into the lecture and discussion that comprises the classroom experience.
  2. Adopting a system like ALEKS moves the content from the professor to the software, and effectively removes the primary vehicle that used to carry essential dimensions of the learning experience. The software is great at delivering content, assessing prior knowledge, and dividing complex knowledge into manageable pieces, but it is not good at classroom climate, goal setting, metacognition, motivation, and organizing knowledge around key features. And our instructors don’t know how to deliver these threads outside of a content-driven lesson.
  3. If you have students who are “good” students then they already have behaviors to establish a positive climate, they have metacognitive skills, they are self-motivated, etc. So, if your students are “college ready” they will not suffer significantly due to the absence of these threads in an adaptive-software driven course. However, if your students do not have the beliefs and behaviors of successful students, if they lack metacognition, if their primary motivation is driven by teacher approval (or lack thereof), if they don’t know how to set goals or organize knowledge, then the absence of these threads will have a critical impact on their performance.
  4. The potential for better learning is there. Adaptive software does a better job of delivering pure content than faculty, especially when the students have a wide array of prior knowledge. Adaptive software also gives faculty more time to devote to non-content-driven threads, so there is the potential for significant gains in learning, but it will only happen after faculty learn to deliver those threads outside of a content-driven lesson. So, I am trying to focus on how to conceptualize these threads and get faculty to understand that they can be taught even when they are not giving a content lecture.

His assertion in his last point that “[a]daptive software does a better job of delivering pure content than faculty, especially when the students have a wide array of prior knowledge” tends to generate some heated debate, particularly in math education circles, so I asked him to elaborate on it. Here’s what he had to say:

When I made the comment about delivering pure content, I was talking about the process of breaking complicated concepts or knowledge domains into component pieces, learning the pieces separately, and then integrating them into a whole. Faculty (and all experts) often suffer from “expert blind spot” when they try to teach a novice a skill that the expert mastered a long time ago. Even if the expert is tuned into some of his blind spots, it is very difficult to deconstruct a task and teach every step and not forget anything and do it reliably class after class. Adaptive software, on the other hand, is very well suited for this type of content delivery. A team of experts has done the deconstruction in painstaking detail, and data from thousands or millions of students have been analyzed to identify points where more (or less) explication is needed. Humans can’t compete with that combination of comprehensiveness, detail, reliability, and adaptivity. I will say that when it comes to integrating component skills into large, integrative, projects, software is usually not up to the task because these integrative projects are almost always open-ended, and computers don’t do “open-ended.”

This is a defensible, research-backed position. I wrote about expert blindness a while back in my post on the Pittsburgh Science of Learning Center. I am aware of (and sympathetic to) what we might call the Dan Meyer school of thought regarding math education, which advocates for different goals and measures of success in math education. There is a reasonable debate to be had among thoughtful and competent math education professionals—one that I do not have any interest in rehashing, much less adjudicating, in this post. My point for the current purpose is simply that Doug is not coming from a knee-jerk “computers are awesome, teachers suck” perspective. Rather, he and his colleagues are engaged in an empirical examination of the various pedagogical functions that are necessary to help their students succeed.

For now, I’m more interested in his hypothesis that ECC instructors do know how to deliver “classroom climate, goal setting, metacognition, motivation, and organizing knowledge around key features,” but only in the context of a “content-driven lesson.” This is an interesting assertion. What does it mean?

We tend to think of a traditional “content-driven lesson” as a “lecture,” and we tend to think of a “lecture” as a professor droning on for an hour and twenty minutes with no student interaction. But most lectures are not that, and no lectures are only that. Let’s start with the simple fact that lectures are live and in person. My wife and I recently watched a very odd miniseries on PBS called Big Blue Live. A lot of emphasis was placed on the “live” part. The various hosts and experts kept going on excitedly about how the show was happening “live.” But it didn’t feel live. It didn’t feel like we were actually there. Seeing a blue whale breach in person has to be a pretty dramatic experience. Watching it on TV did not feel anything like that. What it felt like was a prerecorded reality TV show. The hosts were excited because the whale was live for them, and their authentic in-the-moment reactions added to the viewing experience at home. But it was not at all like being there. And while I am not in any way suggesting that seeing a live lecture on, say, the role of prostitution in race relations in Reconstruction-era Memphis is anything like seeing a blue whale breach live, I would also say that it is inherently different from seeing the same lecture on videotape. It doesn’t activate our attention mechanism in the same way.

But the emotional impact of a live performance is really just a small part of the picture. Good teachers who lecture do a lot more than just deliver a canned speech and then walk out the door. There is often a lot going on during that talk. Some of the most basic pedagogical moves in a good lecture have been re-invented in video-based pedagogy, without any apparent awareness that they often happen in a physical classroom as well. For example, much is made of the fact that videos should be ten minutes long or less. First of all, there may be some difference in attention spans between a live and a recorded lecture. But more importantly, good lecturers have a rhythm to their presentations. They will signal a break in one way or another. They will pause. They will crack a joke. They will ask a question of the group. This brings us to the next “innovation” in video-based pedagogy. Asking a reinforcing question after a short lecture segment is not a new idea. It is true that it is easier to do with technology because you can give every student an opportunity to answer. But lacking that, rhetorical questions and questions asked to the class as a whole both work. And they are common.

Beyond that, good teachers who lecture are always scanning the room, seeing who is paying attention, who is drifting off, who is in trouble, and so on. Students can raise their hands and ask questions. And a teacher can do something different in response to any of these cues. She can change the lecture, or drop it for a moment to engage in a discussion.

In the popular narrative about lectures, either these moves don’t exist or they don’t matter. But they do. The thing is, since most faculty have received no training at all in teaching, even the truly great ones aren’t always fully conscious of what they are doing, and they don’t always know how to separate the pedagogical functions and apply them in different contexts. They learned to do what they do by watching other good teachers do the same thing. There is no particular reason to think that professors will spontaneously, without any training or modeling, be able to transfer those pedagogical functions into an environment in which they can’t weave them into the main mode of teaching that they have seen throughout their academic lives.

When I think about the things that scare me the most about big technology-driven changes to the way we teach, this kind of thing is right up near the top. We don’t fully understand what we are doing well now. Therefore, when we attempt to deconstruct it and then reconstruct it in a different environment, we don’t really know what we will miss or how we will need to retrain our instructors so that we won’t miss it. That’s why it is so important to undertake these sorts of experiments thoughtfully, self-critically, and iteratively.

The post In Defense of the Lecture appeared first on e-Literate.

Breaking: Totara LMS Forks From Moodle And Changes Relationship

Wed, 2015-09-02 13:43

By Phil HillMore Posts (365)

What interesting timing. Just as I published my interview with Martin Dougiamas, I was notified that Totara LMS, a Moodle derivative aimed at the corporate learning market, has forked from Moodle and is changing its relationship with the Moodle Community. From their newsletter released today (Sept 3 Australia time):

The relationship between Totara and Moodle is changing

We have made the carefully considered decision that from 2016 Totara LMS will no longer be in lockstep with Moodle. This will free the team at Totara Learning to focus on big leaps forward in usability and modernising the framework for our enterprise customers.

Further down in the newsletter, Richard Wyles adds a post of his own, starting with his long-term relationship with Moodle. He then explains:

Why are we forking?

From 2016 onwards we will no longer be in lockstep. Totara LMS will progressively diverge from its Moodle foundations.

Why have we made this decision? There are several factors;

  1. Innovation. A benefit of open source software is the ability to extend the code base of an application and develop it in a new direction. Over the past few years we have added more than 450,000 lines of code comprising a series of modular, interwoven extensions layered on top of a standard Moodle. All the additional features reflect the different needs of our user community and Totara LMS is now almost unrecognisable from a standard Moodle installation. We’ve taken a lot of care to achieve these results with minimal alterations to Moodle’s core codebase. That policy has been beneficial to both projects. However it also comes with constraints, particularly with some feature requests such as multi-tenancy. To do this well requires deep architectural changes. Overall, to continue, and accelerate our rate of innovation we need to start diverging the base platforms.
  2. Modernising the platform. It is our view, and we know it is a shared view with many Totara Partners, that the current product needs a significant investment in the overall UX. Due to the following point regarding collaboration we are unable to make this investment without diverging from Moodle. We are committed to doing the best by our Totara Partners, key stakeholders in our open source ecosystem, and our growing (collective) customer base. Our 2016 release (which will be tagged as Totara LMS version 9.0) will have a major focus on improving the UX design and overall quality assurance.

Richard goes on with other reasons and concludes:

The decision to forge a new direction is simply based on the need to deliver the best product we’re able – fit for purpose for modern workplace learning, guided by the needs of our partners and customers.

The Totara LMS home page links to a YouTube video introduction, and I note the lack of any reference to the “Moodle” name.

Wow. This is a significant move for several reasons, including the following:

  • The long-term relationship of Richard and others in Totara to the Moodle Community, which will now diverge;
  • The importance of corporate learning for many, if not most, Moodle Partners;
  • One of the reasons not quoted above in Richard’s post is that “The leadership of Moodle Pty Ltd has made it clear to us that it is their intent to clone recent Totara LMS versions to offer the market ‘Moodle for Workplace.’” (read Richard’s post in full); and
  • Totara has contributed a large amount of code to Moodle, including “with Moodle HQ incorporating Totara developed features; Learning Plans and Competencies”.

I will now extend my core argument from last week’s post on Blackboard’s Moodle strategy in Latin America.

The Moodle community at large appears to be at an inflection point. The inflection point I see comes from a variety of triggers:

  • Blackboard acquisitions causing Moodle HQ, other Moodle Partners, and some subset of users’ concerns about commercialization;
  • Creation of the Moodle Association as well as Moodle Cloud services as alternate paths to Moodle Partners for revenue and setup;
  • Remote-Learner leaving the Moodle Partner program and planning to join the Moodle Association, with its associated lost revenue and public questioning of the program’s value; and
  • Totara LMS forking and diverging from Moodle core.

Analysis post coming soon.

The post Breaking: Totara LMS Forks From Moodle And Changes Relationship appeared first on e-Literate.

Interview With Martin Dougiamas On Changes To Moodle Community This Year

Wed, 2015-09-02 12:59

By Phil HillMore Posts (364)

In my post last week on Blackboard’s Moodle strategy in Latin America, I made the following observation:

At the same time, this strategy and growth comes at a time when the Moodle community at large appears to be at an inflection point. The inflection point I see comes from a variety of triggers:

  • Blackboard acquisitions causing Moodle HQ, other Moodle Partners, and some subset of users’ concerns about commercialization;
  • Creation of the Moodle Association as well as Moodle Cloud services as alternate paths to Moodle Partners for revenue and setup; and
  • Remote-Learner leaving the Moodle Partner program and planning to join the Moodle Association, with its associated lost revenue and public questioning of the program’s value.

I’m working on a follow-up post that looks more deeply at these changes to the Moodle community, and as part of the research I’ve interviewed Martin Dougiamas, Moodle Founder and CEO, by email. Given Martin’s role, I wanted to avoid the risk of having his answers get buried within my upcoming analysis post; therefore, I’ve decided to publish the interview in full. The only changes I have made are for clarity: showing and correcting[1] full names instead of acronyms[2], correcting grammar, and reordering questions to show follow-up discussions in context.

Phil: Given Blackboard’s trend in acquisitions for Moodle (Remote-Learner UK, X-Ray Analytics, Nivel Siete), and assuming these are not the last, how do these moves affect the Moodle community and future (including roadmap, Moodle HQ funding, whatever)? What are the biggest benefits and / or what are the risks and downsides?

Martin: In any community there’s always going to be some concern about any one organisation trying to gain dominance. Our certified Moodle Partner program was designed specifically to avoid these kind of risks by building a large global network of different companies (currently 68 and growing, including Moonami and Elearning Experts recently in the US) who are committed to supporting Moodle HQ. The recent Blackboard acquisitions don’t bring any benefits to Moodle as a whole.

Phil: When you say “the recent Blackboard acquisitions don’t bring any benefits to Moodle as a whole”, I note that in Latin America the only other Moodle Partners are in Argentina (1) and Brazil (3). Would Blackboard / Nivel Siete expansion to service most of Latin America end up generating more official Moodle Partner revenue, thus helping fund more core development through HQ?

Martin: We have South American Moodle Partners in Argentina, Bolivia, Chile, Peru and several in Brazil, as well as Partners who work in South America from other locations. Our Partner program is all about supporting local businesses who are Moodle experts, and they support us by paying royalties.

There is always some talk around acquisitions which it’s good to be mindful of. From a Moodle point of view there’s no new “expansion” – it was already happening.

Nivel Siete, like Moodlerooms, was a tiny company of several people who grew to 20 or so people with our support over many years. Meanwhile, Blackboard has had offices and resellers selling Blackboard Learn in South America for many years. As you know, acquisitions usually happen to remove a competitor or to gain some capabilities that the buying company was not able to develop on their own.

Phil: Do you agree with my characterization that “Moodle community at large appears to be at an inflection point” this year, driven by the three examples listed?

Martin: Sorry, I don’t really agree with your characterization. Unlike nearly all other LMS companies, Moodle is not profit-focussed (all our revenue goes into salaries). We are an organisation that is completely focussed on supplying a true open source alternative for the world without resorting to venture capital and the profit-driven thinking that comes with that.

Of course we still want to grow our core development team significantly in order to help Moodle evolve faster. So some of the big new things you’re seeing from us this year have been in the pipeline for a while and are about driving that: the Moodle Association is a formalisation of crowd-funding for additional new core developments; and MoodleCloud is very much about supporting and strengthening the Moodle Partner brand (while helping those who want these new services).

Regarding our ex-Partner Remote-Learner, it’s a shame we’ve lost them as friends but they are driven by their own internal issues. Saying they have switched to the Association is a little like saying you switched to Kickstarter, it doesn’t mean much. In any case they cannot actually even join the Moodle Association as commercial LMS service providers are not eligible.

Phil: My note on “inflection point” is not based on a profit-driven assumption. The idea is that significant changes are underway that could change the future direction of Moodle. A lot depends on Blackboard’s acquisition strategy (assuming it goes beyond Remote-Learner UK and Nivel Siete), whether other Moodle Partners follow Remote-Learner’s decision, and whether Moodle Association shows signs of producing similar or larger revenues than the Moodle Partner program. What I don’t see happening is extension of the status quo.

Martin: Moodle’s mission is not changing at all, we are just expanding and improving how we do things in response to a shifting edtech world. We are starting the Moodle Association to fill a gap that our users have often expressed to us – they wanted a way to have some more direct input over major changes in core Moodle. There is no overlap between this and the Moodle Partners – in fact we are also doing a great deal to improve and grow the Moodle Partner program as well as the user experience for those who need Moodle services from them.

Phil: You have previously described the Moodle model as a ‘benevolent dictatorship’. Do you see that core model changing in the near future based on the three items I mentioned under inflection point (Moodle Association, Blackboard acquisitions, Remote-Learner leaving Moodle Partner program) or do you see roughly the same model but just with additional crowd-funding through Moodle Association? I think you’re answering the latter, but I want to make sure.

Martin: Yes, the latter.

I don’t use the ‘benevolent dictatorship’ term myself although it’s common in the open source world. Yes, I wrote everything in the first versions of Moodle, and my company continues to lead the project via Moodle Pty Ltd [aka Moodle HQ].

However, rather than any kind of dictatorship we see our mission as being *servants* to the community of teachers and learners who need Moodle and quality open source Free software. Our core duty is to give away the software we develop. Our values are to support educators with respect, integrity, openness and innovation. This is never going to change.

This is in contrast to multi-billion companies whose value is in increasing their EBITDA [earnings before interest, taxes, depreciation and amortization] before a sale, and whose mission is to expand by acquiring markets in other countries.

Phil: Could you comment on the deep penetration of Moodle worldwide into corporate learning (maybe equal to higher ed / K-12)?

Martin: Yes, Moodle is used a lot in corporate learning worldwide. In fact something like 40% of the many thousands of clients using Moodle Partners as service providers are using Moodle for company training, including some really huge ones. We have a few case studies on our website, if you’re interested.

  1. Changing references to “Remote Learner” to follow the proper “Remote-Learner” usage
  2. For example, replacing “BB” with “Blackboard”, “NS” with “Nivel Siete”, etc

The post Interview With Martin Dougiamas On Changes To Moodle Community This Year appeared first on e-Literate.

Personalized Learning is Hard

Sat, 2015-08-29 10:04

By Michael FeldsteinMore Posts (1046)

Paul Fain has written a really good, nuanced article at IHE covering the update that Essex County College gave of their developmental math adaptive learning pilot at a recent conference in Washington, DC. We did a two-part case study on ECC in our e-Literate TV series. The headline results are as follows:

  • In the first year, the pass rate was worse than  in the traditional classes. (The first semester was “disastrous.”)
  • This year—the second year—the pass rate is coming closer to the traditional class but is still underperforming.
  • The article seems to imply that students who earn a C in the personalized learning class do better than students who earn a C in the traditional class, but the article is not explicit about that.

There is no magic pill. As Phil and I have been saying all along—most recently in my last post, which mentioned ECC’s use of adaptive learning—the software is, at best, an enabler. It’s the work that the students and teachers do around the software that makes the difference. Or not. In ECC’s case, they are trying to implement a pretty radical change in pedagogy with an at-risk population. It’s worth digging into the details.

Let’s start by reviewing the basics of their situation:

  • ECC has a 50% pass rate in their lowest level developmental math class, and a 50% pass rate in the next developmental math class up. Since a substantial majority of ECC students place into developmental math, a big part of ECC’s college completion problem can be traced to students failing developmental math.
  • ECC believes that a big reason they have a high failure rate is that students come into that class with an incredibly wide range of prior skills and knowledge—wide enough that a traditional lecture-based class would not address the needs of a majority of the students.
  • They decided to try a radical change in the way the developmental math course was structured.
    • Students would work self-paced on a mastery learning curriculum in labs using McGraw Hill’s ALEKS adaptive learning software. Students could ask each other or the roving instructor for help.
    • Students also met with a teacher each week, separately from the lab sessions, to report their progress of the week, assess the success or failure of their learning strategies, and set new strategies and goals for the next week.

So why does ECC think that they are not getting the results that they hoped for? Doug Walcerz, ECC’s Vice President for Planning, Research, and Assessment, offered a few observations. From the article:

  • “[A]daptive courses provide less “accountability.” That’s because students move through content at different paces and it’s harder to make sure they master concepts by a certain point. ‘There is no classwide mile post.'”
  • “[T]he college leaned heavily on graduate students from nearby Rutgers University at Newark and the New Jersey Institute of Technology to teach parts of the adaptive courses during the first year.”
  • “’We underestimated the skill that you would need as a teacher to deliver that content,’ he said.”
  • “Faculty buy-in has also been a challenge. In adaptive courses, instructors do not give lectures or teach in the traditional format. Instead, they circulate among students who are working on computer-based courseware, offering help when needed, much like tutors. That feels like a job ‘below faculty status’ for some instructors, Walcerz said.”

Putting this all together, here is what I see:

  • ECC is starting with an at-risk population, a large portion of which probably has not been taught good meta-cognitive skills or help-seeking behaviors.
  • They are putting those students into a curriculum which, whatever its other virtues may be, puts a higher demand on those meta-cognitive and help-seeking behaviors than a traditional class would.
  • The burden of addressing that weakness in the course design falls on the faculty. But ECC has been working with untrained and inexperienced adjuncts—in fact, graduate students—as well as some faculty who may be hostile to the project. (ECC has since moved away from using graduate students, according to the article.)

There may or may not also be problems with the software. For what it’s worth, Walcerz seems to think highly of the software and doesn’t believe that it is contributing to the poor results. Personally, I think the mismatch between student skills and the course design is sufficient to explain the results. The kind of burden that a self-paced program like this puts on these students is somewhat analogous to the burden that an online course puts on them. We know that the type of population that would be enrolled in a developmental math course in a community college in Newark, NJ typically does not do well in online courses. The difference is that, in ECC’s design, there actually are faculty there to intervene and coach the students personally. It stands to reason that the quality of that coaching would be a critical success factor.

Does this mean that ECC’s approach was a bad idea? I don’t think so. Differentiated instruction is a logical pedagogical response to a heterogeneous class problem. But it can only work in their environment if they have appropriately skilled, trained, and motivated faculty. ECC made substantial investments in software and facilities, but this result highlights the fact that the critical success factors in many cases will be making a substantial investment in providing faculty with appropriate professional development and a motivating compensation and promotion plan. It sounds like they have come to realize that and are taking some steps in that direction.

Truly effective innovation in education is hard. As Phil likes to stress, it takes both brutal honesty regarding the results and a commitment to iterate when the results are almost inevitably not what we hoped for on the first try. A while back, I blogged about an interesting case study at MSU where they did exactly that with a psychology class. If you read the comments thread in the follow-on post, you’ll see that Mike Caulfield brought up a potential new insight: the course’s DWF pattern may be related to interactions between the course’s absence policy and the blended format. Course problems (and course successes) can be subtle and hard to tease out.

There. Is. No. Magic. Pill.

The post Personalized Learning is Hard appeared first on e-Literate.

The Fraught Interaction Design of Personalized Learning Products

Wed, 2015-08-26 12:49

By Michael FeldsteinMore Posts (1044)

David Wiley has a really interesting post up about Lumen Learning’s new personalized learning platform. Here’s an excerpt:

A typical high-level approach to personalization might include:

  • building up an internal model of what a student knows and can do,
  • algorithmically interrogating that model, and
  • providing the learner with a unique set of learning experiences based on the system’s analysis of the student model

Our thinking about personalization started here. But as we spoke to faculty and students, and pondered what we heard from them and what we have read in the literature, we began to see several problems with this approach. One in particular stood out:

There is no active role for the learner in this “personalized” experience. These systems reduce all the richness and complexity of deciding what a learner should be doing to – sometimes literally – a “Next” button. As these systems painstakingly work to learn how each student learns, the individual students lose out on the opportunity to learn this for themselves. Continued use of a system like this seems likely to create dependency in learners, as they stop stretching their metacognitive muscles and defer all decisions about what, when, and how long to study to The Machine.
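
To make the critique concrete, here is a minimal sketch of that three-step loop in Python. Everything in it — the StudentModel, the mastery threshold, the activity catalog — is a hypothetical illustration, not Lumen’s (or any vendor’s) actual code. The point is that the learner’s entire interface to all of this machinery is a single call that behaves like a “Next” button:

```python
from dataclasses import dataclass, field

MASTERY_THRESHOLD = 0.8  # hypothetical cutoff for "mastered"

@dataclass
class StudentModel:
    """Step 1: an internal model of what the student knows and can do."""
    skill_estimates: dict = field(default_factory=dict)  # skill -> P(mastered)

    def update(self, skill: str, correct: bool) -> None:
        # Toy update rule: nudge the estimate toward the observed outcome.
        prior = self.skill_estimates.get(skill, 0.5)
        self.skill_estimates[skill] = prior + 0.2 * ((1.0 if correct else 0.0) - prior)

# Hypothetical catalog mapping skills to learning experiences.
ACTIVITIES = {
    "fractions": "Watch video 3.2, then try practice set B",
    "ratios": "Read section 4.1, then try practice set C",
}

def next_activity(model: StudentModel) -> str:
    """Steps 2 and 3: interrogate the model, then serve the next experience.

    This function is the whole learner-facing surface -- the 'Next' button.
    """
    gaps = {s: p for s, p in model.skill_estimates.items() if p < MASTERY_THRESHOLD}
    if not gaps:
        return "Take the unit quiz"
    weakest = min(gaps, key=gaps.get)  # most serious gap first
    return ACTIVITIES.get(weakest, "Review " + weakest)

model = StudentModel()
model.update("fractions", correct=False)
model.update("ratios", correct=True)
print(next_activity(model))  # the learner sees only this output
```

The student never sees the skill estimates, the threshold, or the reasoning behind the choice; all of the decision-making that learners could be practicing for themselves is enclosed in the system.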

Instructure’s Jared Stein really likes Lumen’s approach, writing,

So much work in predictive analytics and adaptive learning seeks to relieve people from the time-consuming work of individual diagnosis and remediation — that’s a two-edged sword: Using technology to increase efficiency can too easily sacrifice humanness — if you’re not deliberate in the design and usage of the technology. This topic came up quickly amongst the #DigPedNetwork group when Jim Groom and I chatted about closed/open learning environments earlier this month, suggesting that we haven’t fully explored this dilemma as educators or educational technologist.

I would add that I have seen very little evidence that either instructors or students place a high value on the adaptivity of these products. Phil and I have talked to a wide range of folks using these products, both in our work on the e-Literate TV case studies and in our general work as analysts. There is a lot of interest in the kind of meta-cognitive dashboarding that David is describing. There is little interest in, and in some cases active hostility toward, adaptivity. For example, Essex County College is using McGraw Hill’s ALEKS, which has one of the more sophisticated adaptive learning approaches on the market. But when we talked to faculty and staff there, the aspects of the program that they highlighted as most useful were a lot more mundane, e.g.,

It’s important for students to spend the time, right? I mean learning takes time, and it’s hard work. Asking students to keep time diaries is a very difficult ask, but when they’re working in an online platform, the platform keeps track of their time. So, on the first class day of the week, that’s goal-setting day. How many hours are you going to spend working on your math? How many topics are you planning to master? How many classes are you not going to be absent from?

I mean these are pretty simple goals, and then we give them a couple goals that they can just write whatever they feel like. And I’ve had students write, “I want to come to class with more energy,” and other such goals. And then, because we’ve got technology as our content delivery system, at the end of the week I can tell them, in a very efficient fashion that doesn’t take up a lot of my time, “You met your time goal, you met your topic goal,” or, “You approached it,” or, “You didn’t.”

So one of the most valuable functions of this system in this context is to reflect back to the students what they have done in terms that make sense to them and are relevant to the students’ self-selected learning goals. The measures are fairly crude—time on task, number of topics covered, and so on—and there is no adaptivity necessary at all.
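
As a rough sketch of how little machinery that weekly feedback loop actually needs — the field names, the thresholds, and the “approached” band below are invented for illustration, not ALEKS’s real data model:

```python
from dataclasses import dataclass

@dataclass
class WeeklyGoals:
    hours: float   # self-selected time-on-task goal
    topics: int    # self-selected topics-to-master goal

@dataclass
class WeeklyActivity:
    hours_logged: float    # tracked automatically by the platform
    topics_mastered: int

def goal_report(goals: WeeklyGoals, actual: WeeklyActivity) -> list:
    """Reflect the week back to the student in their own terms."""
    def verdict(achieved, target, label):
        if achieved >= target:
            return f"You met your {label} goal ({achieved} of {target})."
        if achieved >= 0.8 * target:  # arbitrary 'close enough' band
            return f"You approached your {label} goal ({achieved} of {target})."
        return f"You didn't meet your {label} goal ({achieved} of {target})."
    return [
        verdict(actual.hours_logged, goals.hours, "time"),
        verdict(actual.topics_mastered, goals.topics, "topic"),
    ]

for line in goal_report(WeeklyGoals(hours=6, topics=5),
                        WeeklyActivity(hours_logged=6.5, topics_mastered=4)):
    print(line)
```

Everything the instructor quoted above — “met,” “approached,” “didn’t” — falls out of a simple comparison between self-selected goals and automatically logged activity, with no adaptive algorithm anywhere in sight.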

But I also think that David’s post hints at some of the complexity of the design challenges with these products.

You can think of the family of personalized learning products as having potentially two components: diagnostic and prescriptive. Everybody who likes personalized learning products in any form likes the diagnostic component. The foundational value proposition for personalization (which should not in any way be confused with “personal”) is having the system provide feedback to students and teachers about what the student does well and where the student is struggling. Furthermore, the perceived value of the product is directly related to the confidence that students and teachers have that the product is rendering an accurate diagnosis. That’s why I think products that provide black box diagnoses are doomed to market failure in the long term. As the market matures, students and teachers are going to want to know not only what the diagnosis is but also what the basis of the diagnosis is, so that they can judge for themselves whether they think the machine is correct.

Once the system has diagnosed the student’s knowledge or skill gaps—and it is worth calling out that many of these personalized learning systems work on a deficit model, where the goal is to get students to fill in gaps—the next step is to prescribe actions that will help students to address those gaps. Here again we get into the issue of transparency. As David points out, some vendors hide the rationale for their prescriptions, even going so far as to remove user choice and just hide the adaptivity behind the “next” button. Note that the problem isn’t so much with providing a prescription as it is with the way in which it is provided. The other end of the spectrum, as David argues, is to make recommendations. The full set of statements from a well-behaved personalized learning product to a student or teacher might be something like the following:

  1. This is where I think you have skill or knowledge gaps.
  2. This is the evidence and reasoning for my diagnosis.
  3. This is my suggestion for what you might want to do next.
  4. This is my reasoning for why I think it might help you.

It sounds verbose, but it can be done in fairly compact ways. Netflix’s “based on your liking Movie X and Movie Y, we think you would give Movie Z 3.5 stars” is one example of a compact explanation that provides at least some of this information. There are lots of ways that a thoughtful user interface designer can think about progressively revealing some of this information and providing “nudges” that encourage students on certain paths while still giving them the knowledge and freedom they need to make choices for themselves. The degree to which the system should be heavy-handed in its prescription probably depends in part on the pedagogical model. I can see something closer to “here, do this next” feeling appropriate in a self-paced CBE course than in a typical instructor-facilitated course. But even there, I think the Lumen folks are 100% right that the first responsibility of the adaptive learning system should be to help the learner understand what the system is suggesting and why so that the learner can gain better meta-cognitive understanding.
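
Here is a sketch of what that four-statement contract could look like as a data structure, with a Netflix-style compact rendering on top. The names and wording are hypothetical rather than any vendor’s API — the point is just that the evidence and rationale travel with the suggestion, so a UI can progressively reveal each layer:

```python
from dataclasses import dataclass

@dataclass
class Recommendation:
    """The four statements a well-behaved personalized product owes the learner."""
    diagnosis: str    # 1. where I think you have gaps
    evidence: str     # 2. the evidence and reasoning for the diagnosis
    suggestion: str   # 3. what you might want to do next
    rationale: str    # 4. why I think it might help

    def compact(self) -> str:
        # Netflix-style one-liner; evidence and rationale revealed on demand.
        return f"{self.suggestion} (because {self.diagnosis.lower()})"

rec = Recommendation(
    diagnosis="You seem shaky on solving two-step equations",
    evidence="You missed 4 of 6 items that required isolating the variable twice",
    suggestion="Try the worked examples in Unit 2.3",
    rationale="Worked examples target the isolate-twice step you're missing",
)
print(rec.compact())
```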

None of which is to say that the fancy adaptive learning algorithms themselves are useless. To the contrary. In an ideal world, the system will be looking at a wide range of evidence to provide more sophisticated evidence-based suggestions to the students. But the key word here is “suggestions.” Both because a critical part of any education is teaching students to be more self-aware of their learning processes and because faulty prescriptions in an educational setting can have serious consequences, personalized learning products need to evolve out of the black box phase as quickly as possible.



The post The Fraught Interaction Design of Personalized Learning Products appeared first on e-Literate.

Inside View Of Blackboard’s Moodle Strategy In Latin America

Wed, 2015-08-26 11:45

By Phil HillMore Posts (362)

One year ago Blackboard’s strategy for Moodle was floundering. After the 2012 acquisition of Moodlerooms and Netspot, Blackboard had kept its promises of supporting the open source community – and in fact, Blackboard pays much more than 50% of the total revenue going to Moodle HQ[1] – but that does not mean they had a strategy. Key Moodlerooms employees were leaving, and the management was frustrated. Last fall the remaining Moodlerooms management put together an emerging strategy to invest in (through corporate M&A) and grow the Moodle business, mostly outside of the US.

In just the past twelve months, Blackboard has acquired three Moodle-based companies – Remote-Learner UK (Moodle Partner in the UK), X-Ray Analytics (learning analytics for Moodle), and Nivel Siete (Moodle Partner in Colombia). When you add in organic growth on top of these acquisitions, Blackboard has added ~450 new clients using Moodle in this same time period, reaching a current total of ~1400.

This is a change worth exploring. To paraphrase Michael’s statements to me and in his recent BbWorld coverage:

If you want to understand Blackboard and their future, you have to understand what they’re doing internationally. If you want to understand what they’re doing internationally, you have to understand what they’re doing with Moodle.

Based on this perspective, I accepted an invitation from Blackboard to come visit Nivel Siete last week to get a first-hand view of what this acquisition means. I also attended the MoodleMoot Colombia #mootco15 conference and talked directly to Moodle customers in Latin America. Let’s first unpack that last phrase.

  • Note that due to the nature of this trip, I “talked directly” with Blackboard employees, Nivel Siete employees, Blackboard resellers, and Nivel Siete customers. They did give me free access to talk privately with whoever I wanted to, but treat this post as somewhat of an inside view rather than one that also includes perspectives from competitors.
  • “Moodle” is very significant in Latin America. It is the default LMS that dominates learning environments. The competition, or alternative solution, there is Blackboard Learn or . . . another route to get Moodle. In this market D2L and Canvas have virtually no presence – each company has just a couple of clients in Latin America and is not currently a factor in LMS decision-making. Schoology has one very large customer in Uruguay serving hundreds of thousands of students. Blackboard Learn serves the top of the market – e.g. the top 10% in terms of revenue of Colombian institutions, where they already serve the majority of that sub-market according to the people I talked to. For the remaining 90%, it is pretty much Moodle, Moodle, alternate applications that are not LMSs, or nothing.[2]
  • I chose “customers” instead of “schools” or “institutions” for a reason. What is not understood in much of the education community is that Moodle has a large footprint outside of higher ed and K-12 markets. Approximately 2/3 of Nivel Siete’s clients are in corporate learning, and several others are government. And this situation is quite common for Moodle. In the US, more than 1/3 of Moodlerooms’ and approximately 1/2 of Remote-Learner’s customers are corporate learning clients. Phill Miller, the VP of International for Moodlerooms, said that most of the Moodle hosting and service providers he has met also serve corporate clients in numbers similar to their education clients.
  • I chose “Latin America” instead of “Colombia” for a reason. While all but ~12 of Nivel Siete’s existing clients are in Colombia, Blackboard bought the company to act as a center of excellence or support service company for most of Latin America – Colombia, Mexico, Brazil, and Peru in particular. Cognos Online, their current local reseller in Latin America for core Blackboard products (Learn, Collaborate, etc.), will also become the reseller for their Moodle customers. Nivel Siete will support a broader set of clients. In other words, this is not a simple acquisition of customers – it is an expansion of international presence.

And while we’re at it, the conference reception included a great opera mini flash mob (make sure to watch past 0:37):

Nivel Siete

Nivel Siete (meaning Level 7, a reference from two of the founders’ college days when a professor talked about the need to understand deeper levels of the technology stack than just the top-level applications that customers see) is a company of just over 20 employees in Bogota. They have 237+ clients, and that number is growing. During the three days while I was there they signed several new contracts. They offer Moodle hosting and service in a cloud environment based on Amazon Web Services (AWS) – not true SaaS, as they allow multiple software versions in production and have not automated all provisioning or upgrade processes. What they primarily offer, according to the founders, is a culture of how to service and support using cloud services and specific marketing and sales techniques.

In Latin America, most customers care more about the local sales and support company than they do about the core software. As one person put it, they believe in skin-to-skin sales, where clients have relationships they trust as long as solutions are provided. Most LMS customers in Latin America do not care as much about the components of that solution as they do about relationships, service, and price. And yet, due to open source software and lightweight infrastructure needs, Moodle is dominant as noted above. The Moodle brand, code base, and code licensing do not matter as much as the Moodle culture and ecosystem. From a commercial standpoint, Nivel Siete’s competitors include a myriad of non-Moodle-Partner hosting providers – telcos bundling in hosting, mom-and-pop providers, self-hosting – or non-consumption. For a subset of the market, Nivel Siete has competed with Blackboard Learn.

Beyond Cognos Online, Blackboard has another ~9 resellers in Latin America, and Nivel Siete (or whatever they decide to name the new unit) will support all of these resellers. This is actually the biggest motivation other than cash for the company to sell – they were seeking methods to extend their influence, and this opportunity made the most sense.

Blackboard Learn and Ultra

What about that Learn sub-market? Most clients and sales people (resellers as well as the Blackboard channel manager) are aware of Learn Ultra, but the market seems to understand already that Ultra is not for them . . . yet. They appear to be taking a ‘talk to me when it’s done and done in Spanish’ approach and not basing current decisions on Ultra. In this sense, the timing for Ultra does not matter all that much, as the market is not waiting on it. Once Ultra is ready for Latin America, Blackboard sales (channel manager and resellers) expect the switchover to be quicker than in the US, as LMS major upgrades (involving major UI and UX changes) or adoptions tend to take weeks or months instead of the year or more we often see in the States. At least in the near term, Learn Ultra is not a big factor in this market.

What Blackboard is best known for in this market is the large SENA contract running on Learn. SENA (National Service for Learning) is a government organization that runs the majority of all vocational colleges – providing certificates and 2-year vocational degrees mostly for lower-income students, a genuine route into the rising middle class that is important in developing countries. Blackboard describes SENA as having 6+ million total enrollment, with ~80% in classrooms and ~20% in distance learning.


The challenge Blackboard faces is integrating its Learn and Moodle operations through the same groups – Nivel Siete internal group, Cognos Online and other resellers serving both lines – without muddling the message and go-to-market approach. Currently Learn is marketed and sold through traditional enterprise sales methods – multiple meetings, sales calls, large bids – while Nivel Siete’s offering of Moodle is marketed and sold with more of a subscription-based mentality. As described by ForceManagement:

A customer who has moved to a subscription-based model of consumption has completely different expectations about how companies are going to interact with them.

How you market to them, how you sell to them, how you bill them, how you nurture the relationship – it’s all affected by the Subscription Economy. The customer’s idea of value has changed. And, if the customer’s idea of value has changed, your value proposition should be aligned accordingly. [snip]

The subscription-based sales process relies less on the closing of a sale and more on the nurturing of a long-term relationship to create lifetime customer value.

One of Nivel Siete’s most effective techniques is their e-Learner Magazine, which highlights customers telling their own stories and lessons in a quasi-independent fashion. The company has relied on inbound calls and quick signups and service startups. There is quite a cultural difference between enterprise software and subscription-based approaches. While Blackboard itself is facing such changes due to Ultra and newly-offered SaaS models, the group in Latin America is facing the challenge of two different cultures served by the same organizations today.

To help address this challenge, Cognos Online is planning to have two separate teams selling and servicing mainline Blackboard products and Moodle products. But even then, CEO Fernery Morales said that their biggest risk is muddling the message while integrating the two lines appropriately.

Moodle Strategy and Risk

At the same time, this strategy and growth comes at a time when the Moodle community at large appears to be at an inflection point. The inflection point I see comes from a variety of triggers:

  • Blackboard acquisitions causing Moodle HQ, other Moodle Partners, and some subset of users’ concerns about commercialization;
  • Creation of the Moodle Association as well as Moodle Cloud services as alternate paths to Moodle Partners for revenue and setup; and
  • Remote-Learner leaving the Moodle Partner program and planning to join the Moodle Association, with its associated lost revenue and public questioning of the program’s value.

I don’t have time to fully describe these changes here, but Moodle itself is both an opportunity and a risk, mostly based on its own global success. More on that in a future post.

What Does This Mean Beyond Latin America?

It’s too early to fully know, but here are a few notes.

  • Despite the positioning in the US media, there is no “international” market. There are multiple local or regional markets outside of the US that have tremendous growth opportunities for US and other companies outside of those immediate markets. Addressing these markets puts a high premium on localization – having feet on the ground, with people who know the culture and can be trusted in the region, and shipping product customizations meant for those markets. Much of the ed tech investment boom is built on expectations of international growth, but how many ed tech companies actually know how to address local or regional non-US markets? This focus on localizing for international markets is one of Blackboard’s greatest strengths.
  • Based on the above, at least in Latin America Blackboard is building itself up as being the status quo before other learning platforms really get a chance to strategically enter the market. For example, Instructure has clearly not chosen to go after non English-speaking international markets yet, but by the time they do push Canvas into Latin America, and if Blackboard is successful integrating Nivel Siete, for example, it is likely Instructure will face an entrenched competitor and potential clients who by default assume Moodle or Learn as solutions.
  • Blackboard as a company has one big growth opportunity right now – the collection of non-US “international” markets that represent just under 1/4 of the company’s revenue. Domestic higher ed is not growing, K-12 is actually decreasing, but international is growing. These growing markets need Moodle and  traditional Learn 9.1 much more than Ultra. I suspect that this growing importance is creating more and more tension internal to Blackboard, as the company needs to balance Ultra with traditional Learn and Moodle development.
  • While I strongly believe in the mission of US community colleges and low-cost 4-year institutions, in Latin America the importance of education in building up an emerging middle class is much greater than in the US. We hear “importance of education” and “building of the middle class” used in generic terms regarding ed tech potential, but seeing this connection more closely by being in country is inspiring. This is a real global need that can and should drive future investment in people and technology.
  1. This information is based on a tweet last spring showing that Moodlerooms + Netspot combined were more than 50% of revenue, and that the next largest Moodle Partner, Remote-Learner, has left the program. Since last year I have confirmed this information through multiple sources.
  2. Again, much of this information is from people related to Blackboard, but it also matches my investigation of press releases and public statements about specific customers of D2L and Instructure.

The post Inside View Of Blackboard’s Moodle Strategy In Latin America appeared first on e-Literate.

Ed Tech Evaluation Plan: More problems than I initially thought

Mon, 2015-08-24 14:21

By Phil HillMore Posts (361)

Late last week I described the new plan from the US Department of Education (ED) and their Office of Educational Technology (OET) to “call for better methods for evaluating educational apps”. Essentially the ED is seeking proposals for new ed tech evaluation methods so that they can share the results with schools – helping them evaluate specific applications. My argument [updated DOE to be ED]:

Ed tech apps by themselves do not “work” in terms of improving academic performance. What “works” are pedagogical innovations and/or student support structures that are often enabled by ed tech apps. Asking if an app works is looking at the question inside out. The real question should be “Do pedagogical innovations or student support structures work, under which conditions, and which technology or apps support these innovations?”. [snip]

I could see that for certain studies, you could use the ED template and accomplish the same goal inside out (define the conditions as specific pedagogical usage or student support structures), thus giving valuable information. What I fear is that the pervasive assumption embedded in the program setup, asking over and over “does this app work” will prove fatal. You cannot put technology as the center of understanding academic performance.

Upon further thought as well as prompting from the comments and private notes, this ED plan has even more problems than I initially thought.

Advocate or Objective Evaluator

There is a real problem with this plan coming out of the Office of Educational Technology due to their mission.

The mission of the Office of Educational Technology (OET) is to provide leadership for transforming education through the power of technology. OET develops national educational technology policy and establishes the vision for how technology can be used to support learning.

The OET strongly advocates for the use of ed tech applications, which I think is a primary cause of their inside-out, technology-first view of the world. They are not an objective organization in terms of whether and when technology should be used, but rather an advocate whose stance is that technology should be used – just please make it effective. Consider these two statements, the first from the National Technology Plan and the second from the paper “Learning Technology Effectiveness” [emphasis added]:

  • The plan calls for applying the advanced technologies used in our daily personal and professional lives to our entire education system to improve student learning, accelerate and scale up the adoption of effective practices, and use data and information for continuous improvement.
  • While this fundamental right to technology access for learning is nonnegotiable, it is also just the first step to equitable learning opportunities.

I have no problem with these goals, per se, but it would be far more useful to not have advocates in charge of evaluations.

A Better View of Evaluation

Richard Hershman from the National Association of College Stores (NACS) shared with me an article that contained a fascinating section on just this subject.

Why Keep Asking the Same Questions When They Are Not the Right Questions?

There are no definitive answers to questions about the effectiveness of technology in boosting student learning, student readiness for workforce skills, teacher productivity, and cost effectiveness. True, some examples of technology have shown strong and consistent positive results. But even powerful programs might show no effects due to myriad methodological flaws. It would be most unfortunate to reject these because standardized tests showed no significant differences. Instead, measures should evaluate individual technologies against specific learning, collaboration, and communication goals.

The source of this excellent perspective on evaluating ed tech? An article called “Plugging In: Choosing and Using Educational Technology” from the North Central Regional Educational Laboratory and commissioned by the US Department of Education in 1995.

As Richard Parent commented in my recent post:

You’re exactly right to reframe this question. It’s distressing when the public demands to know “what works” as if there are a set of practices or tools that simply “are” good education. It’s downright depressing when those who should be in the know do so, too.

Update: This does not rise to the level of a full response, but Rolin Moe got Richard Culatta to respond to his tweet about the initial article.

Rolin Moe (@RMoeJo): Most important thing I have read all year – @philonedtech points out technocentric assumptions of US ED initiative

Richard Culatta (@rec54), August 25, 2015: it’s true. I believe research has to adapt to pace of tech or we will continue to make decisions about edu apps with no evidence

The post Ed Tech Evaluation Plan: More problems than I initially thought appeared first on e-Literate.

US Department of Education: Almost a good idea on ed tech evaluation

Fri, 2015-08-21 16:53

By Phil HillMore Posts (360)

Richard Culatta from the US Department of Education (DOE, ED, never sure of proper acronym) wrote a Medium post today describing a new ED initiative to evaluate ed tech app effectiveness.

As increasingly more apps and digital tools for education become available, families and teachers are rightly asking how they can know if an app actually lives up to the claims made by its creators. The field of educational technology changes rapidly with apps launched daily; app creators often claim that their technologies are effective when there is no high-quality evidence to support these claims. Every app sounds world-changing in its app store description, but how do we know if an app really makes a difference for teaching and learning?

He then describes the traditional one-shot studies of the past (control group, control variables, year or so of studies, get results) and notes:

This traditional approach is appropriate in many circumstances, but just does not work well in the rapidly changing world of educational technology for a variety of reasons.

The reasons?

  • Takes too long
  • Costs too much and can’t keep up
  • Not iterative
  • Different purpose

This last one is worth calling out in detail, as it underlies the assumptions behind this initiative.

Traditional research approaches are useful in demonstrating causal connections. Rapid cycle tech evaluations have a different purpose. Most school leaders, for example, don’t require absolute certainty that an app is the key factor for improving student achievement. Instead, they want to know if an app is likely to work with their students and teachers. If a tool’s use is limited to an after-school program, for example, the evaluation could be adjusted to meet this more targeted need in these cases. The collection of some evidence is better than no evidence and definitely better than an over-reliance on the opinions of a small group of peers or well-designed marketing materials.

The ED plan is good in terms of improving the ability to evaluate effectiveness in a manner that accounts for rapid technology evolution. The general idea of ED investing in the ability to provide better decision-making information is a good one. It’s also very useful to see ED recognize the context-dependence of effectiveness claims.

The problem I see, and it could be a fatal one, is that ED is asking the wrong question for any technology or apps related to teaching and learning. [emphasis added]

The important questions to be asked of an app or tool are: does it work? with whom? and in what circumstances? Some tools work better with different populations; educators want to know if a study included students and schools similar to their own to know if the tool will likely work in their situations.

Ed tech apps by themselves do not “work” in terms of improving academic performance[1]. What “works” are pedagogical innovations and/or student support structures that are often enabled by ed tech apps. Asking if an app works is looking at the question inside out. The real question should be “Do pedagogical innovations or student support structures work, under which conditions, and which technology or apps support these innovations?”.

Consider our e-Literate TV coverage of Middlebury College and one professor’s independent discovery of flipped classroom methods.

How do you get valuable information if you ask the question “Does YouTube work” to increase academic performance? You can’t. YouTube is a tool that the professor used. Now you could get valuable information if you ask the question “Does flipped classroom work for science courses, and which tools work in this context?” You could even ask “For the tools that support this flipped classroom usage, does the choice of tool (YouTube, Vimeo, etc) correlate with changes in student success in the course?”.

I could see that for certain studies, you could use the ED template and accomplish the same goal inside out (define the conditions as specific pedagogical usage or student support structures), thus giving valuable information. What I fear is that the pervasive assumption embedded in the program setup, asking over and over “does this app work” will prove fatal. You cannot put technology as the center of understanding academic performance.

I’ll post this as a comment to Richard’s Medium post as well. With a small change in the framing of the problem, this could be a valuable initiative from ED.

Update: Changed DOE to ED for accuracy.

Update: This does not rise to the level of a full response, but Rolin Moe got Richard Culatta to respond to his tweet about this article.

Rolin Moe (@RMoeJo): Most important thing I have read all year – @philonedtech points out technocentric assumptions of US ED initiative

Richard Culatta (@rec54), August 25, 2015: it’s true. I believe research has to adapt to pace of tech or we will continue to make decisions about edu apps with no evidence

  1. And yes, they throw in a line that it is not just about academic performance but also administrative claims. But the whole setup is on teaching and learning usage, which is the primary focus of my comments.

The post US Department of Education: Almost a good idea on ed tech evaluation appeared first on e-Literate.

68 Percent of Statistics Are Meaningless, Purdue University Edition

Fri, 2015-08-21 10:13

By Michael FeldsteinMore Posts (1044)

I don’t know of any other way to put this. Purdue University is harming higher education by knowingly peddling questionable research for the purpose of institutional self-aggrandizement. Purdue leadership should issue a retraction and an apology.

We have covered Purdue’s Course Signals extensively here at e-Literate. It is a pioneering program, and evidence does suggest that it helps at-risk students pass courses. That said, Purdue came out with a later study that is suspect. The study in question claimed that students who used Course Signals in consecutive classes were more likely to see improved performance over time, even in courses that did not use the tool. Mike Caulfield looked at the results and had an intuition that the result of the study was actually caused by selection bias. Students who stuck around to take courses in consecutive semesters were more likely to…stick around and take more courses in consecutive semesters. So students who stuck around to take more Course Signals courses in consecutive semesters would, like their peers, be more likely to stick around and take more courses. Al Essa ran a mathematical simulation confirming Mike’s intuition that Purdue’s results could be an artifact of selection bias. Mike wrote up a great explainer here on e-Literate that goes into all the details.

If there was indeed a mistake in the research, it was almost certainly an honest one. Nevertheless, there was an obligation on Purdue’s part to re-examine the research in light of the new critique. After all, the school was getting positive press from the research and had licensed the platform to SunGard (now Ellucian). Furthermore, as a pioneering and high-profile foray into learning analytics, Course Signals was getting a lot of attention and influencing future research and product development in the field. We needed a clearer answer regarding the validity of the findings.
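To see how a pattern like Purdue’s can appear with no treatment effect whatsoever, here is a toy simulation in the spirit of Mike and Al’s critique (the model and numbers are invented for illustration; Al’s actual simulation differed in its details). Each student gets a fixed “persistence” level that alone determines both how many Signals-enabled courses they accumulate and whether they graduate:

```python
import random

random.seed(42)

def simulate(n=100_000):
    by_courses = {}  # Signals course count -> list of graduation outcomes
    for _ in range(n):
        persistence = random.random()  # student's intrinsic persistence
        # Persistent students re-enroll semester after semester, so they
        # accumulate more Signals-enabled courses -- zero treatment effect.
        courses = sum(random.random() < persistence for _ in range(4))
        graduated = random.random() < persistence
        by_courses.setdefault(courses, []).append(graduated)
    for courses in sorted(by_courses):
        outcomes = by_courses[courses]
        rate = 100 * sum(outcomes) / len(outcomes)
        print(f"{courses} Signals courses: {rate:.0f}% graduation rate")

simulate()
```

In this model, students with more Signals courses graduate at substantially higher rates even though Signals does nothing at all — persistence drives both variables, which is exactly the selection-bias artifact Mike and Al described.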

Despite our calls here on the blog, our efforts to contact Purdue directly, and the attention the issue got in the academic press, Purdue chose to remain silent on the issue. Our sources informed us at the time that Purdue leadership was aware of the controversy surrounding the study and made a decision not to respond. Keep in mind that the research was conducted by Purdue staff rather than faculty. As a result, those researchers did not have the cover of academic freedom and were not free to address the study on their own without first getting a green light from their employer. To make matters more complicated, none of the researchers on that project still work at Purdue. So the onus was on the institution to respond. They chose not to do so.

That was bad enough. Today it became clear that Purdue is actively promoting that questionable research. In a piece published today in Education Dive, Purdue’s “senior communications and marketing specialist” Steve Tally said:

the initial five- and six-year raw data about the impact of Signals showed students who took at least two Signals-enabled courses had graduation rates that were 20% higher. Tally said the program is most effective in freshman and sophomore year classes.

“We’re changing students’ academic behaviors,” Tally said, “which is why the effect is so much stronger after two courses with Signals rather than one.” A second semester with Signals early on in students’ degree programs could set behaviors for the rest of their academic careers.

It’s hard to read this as anything other than a reference to the study that Mike and Al challenged. Furthermore, the comment about “raw data” suggests that Purdue has made no effort to control for the selection bias in question. Two years after the study was challenged, they have not responded, have not looked into it, and continue to use it to promote the image of the university.

This is unconscionable. If an academic scholar behaved that way, she would be ostracized in her field. And if a big vendor like Pearson or Blackboard behaved that way, it would be broadly vilified in the academic press and academic community. Purdue needs to come clean. They need to defend the basis on which they continue to make claims about their program the same way a scholar applying for tenure at their institution would be expected to be responsible for her claims. Purdue’s peer institutions likewise need to hold the school accountable and let them know that their reputation for integrity and credibility is at stake.

The post 68 Percent of Statistics Are Meaningless, Purdue University Edition appeared first on e-Literate.

Challenge Of Student Transition Between Active And Passive Learning Models

Thu, 2015-08-20 08:59

By Phil HillMore Posts (358)

Last week the Hechinger Report profiled an innovative charter school in San Diego called High Tech High (insert surfer jokes here) that follows an active, project-based learning (PBL) model. The school doesn’t use textbooks and doesn’t base its curriculum on testing. The question the article asks is whether this approach prepares students for college.

As a result, for [former HTH student Grace] Shefcik, college – with its large classes and lecture-based materials – came as a bit of a shock at first. At the University of California, Santa Cruz, she is one of more than 15,000 undergraduates, and her assignments now usually consist of essays and exams. At High Tech High, Shefcik had just 127 students in her graduating class, allowing her to form close relationships with peers and teachers.

The premise of the article is that PBL prepares students for life but maybe not for college. Grace described the big difference between high school, with its constant feedback and encouragement, and college, where you rarely get feedback. Other students described their frustration at not knowing how to study for tests once they got to college.

After a recent screening of “Most Likely to Succeed” at the New Schools Summit in Burlingame, California, High Tech High CEO Larry Rosenstock told an audience, “We actually find that many of our students find themselves bored when they get to college.”

Teachers and administrators at High Tech High don’t tell many stories about their students reporting boredom, but they do hear about experiences like Shefcik’s. They say students find themselves overwhelmed by the different environment at college and have a difficult time making the transition to lecture-hall learning.

Students do tend to adjust, but the process can take longer for them than it does for traditionally taught students.

But sometimes it takes High Tech High graduates a semester or a year at college or university before they feel like they’ve cracked the code.

“I had a harder time transitioning than other students,” said Mara Jacobs, a High Tech High graduate who just finished her second year at Cornell University in Ithaca, New York, and is the daughter of major donors Gary and Jerri-Ann Jacobs. “I couldn’t just do the work if I wasn’t bought into how I was being taught.”

My problem with the article is that it assumes every college outside of small private institutions bases its entire curriculum on passive lectures and testing, without acknowledging the many innovations and changes coming from those same colleges. We have profiled personalized learning approaches in our e-Literate TV series, including a PBL approach at Arizona State University for the Habitable Worlds course (see this episode for in-depth coverage).

Nevertheless, the general point remains that it is difficult for students to transition between active learning models and passive lecture and test models. The Hechinger Report calls out the example of K-12 students moving into college, but we talked to faculty and staff at UC Davis who saw the flip side of that coin – students used to passive learning at high school trying to adapt to an active learning science course in college.

Phil Hill: While the team at UC Davis is seeing some encouraging initial results from their course redesign, these changes are not easy. In our discussions, the faculty and staff provided insight into the primary barriers that they face when looking to build on their success and get other faculty members to redesign their courses.

Catherine Uvarov: Well, I have had some very interesting experiences with students. Last quarter, my class was mostly incoming freshmen, and it’s like their very first quarter at UC Davis, so they have never taken a UC Davis class before. My class is pretty different from either the classes they’ve taken in high school or the other classes that they were still taking in their first quarter at Davis, because these changes are not as widespread as they could be.

Some students push back at first, and they’re like, “Oh, my, gosh, I have to read the book. Oh, my, gosh, I have to open the textbook. Oh, my, gosh, I have to do homework every week. I have to do homework every day.” They kind of freaked out a little bit in the beginning, but as the quarter progressed, they realized that they are capable of doing this type of learning style.

There’s more info in both the Hechinger Report article and the ASU and UC Davis case studies, but taken together they point to the challenges students face when transitioning between pedagogical models. These transitions can occur between high school and college, but more often they occur from course to course. Active learning and PBL are not just minor variations on lecture-and-test models – they require a new mindset and a new set of habits from students.

The post Challenge Of Student Transition Between Active And Passive Learning Models appeared first on e-Literate.

Reuters: Instructure has filed for IPO later this year

Thu, 2015-08-13 10:29

By Phil HillMore Posts (358)

Reuters is on a breaking news roll lately with ed tech. This time it is about Instructure filing for an initial public offering (IPO).

Instructure is planning an initial public offering later this year that could value the education software company at $500 million to $800 million, according to people familiar with the matter.

Instructure, based in Salt Lake City, has hired Morgan Stanley (MS.N) and Goldman Sachs (GS.N) to help prepare for the IPO, which has been filed confidentially, the people said. They requested anonymity because the news of the IPO was not public.

Under the Jumpstart Our Business Startups Act, new companies that generate less than $1 billion in revenue can file for IPOs with the U.S. Securities and Exchange Commission without immediately disclosing details publicly.

Instructure has long stated its plans to eventually IPO, so the main question has been one of timing. Now we know that it is late 2015 (assuming the Reuters story is correct, and Reuters has been quite accurate with similar stories).

Michael and I have written recently about Instructure’s strong performance, including this note about expanding markets and their consistent growth in higher ed, K-12 and potentially corporate learning.

InstructureCon 2015 Growth Slide

Taken together, what we see is a company with a fairly straightforward strategy. Pick a market where the company can introduce a learning platform that is far simpler and more elegant than the status quo, then just deliver and go for happy customers. Don’t expand beyond your core competency, don’t add parallel product lines, don’t over-complicate the product, don’t rely on corporate M&A. Where you have problems, address the gap. Rinse. Repeat.

Instructure has now solidified their dominance in US higher ed (having the most new client wins), they have hit their stride with K-12, and they are just starting with corporate learning. What’s next? I would assume international education markets, where Instructure has already started to make inroads in the UK and a few other locations.

The other pattern we see is that the company focuses on the mainstream from a technology adoption perspective. That doesn’t mean they don’t want to serve early adopters with Canvas or Bridge, but Instructure more than any other LMS company knows how to say ‘No’. They don’t add features or change designs unless the result will help mainstream adoption – and the mainstream here is primarily instructors. Of course students care, but they don’t choose whether an LMS is used for their course; faculty and teachers do. For education markets, the ability to satisfy early adopters rests heavily on Canvas’s LTI-enabled integrations and acceptance of external applications, in contrast to relying on having every feature in one system.
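Since so much of that strategy hinges on LTI, it is worth sketching what an LTI 1.1 launch actually involves. This is an illustrative sketch only: the tool URL, consumer key, secret, and parameter values below are hypothetical, and in a real launch the LMS auto-submits the signed form from the user’s browser rather than posting it server-side.

# Illustrative LTI 1.1 launch (hypothetical URL, key, and values).
# The LMS signs a form POST with a shared OAuth 1.0 key/secret so the
# external tool can verify the launch really came from the LMS.
from urllib.parse import urlencode
from oauthlib.oauth1 import Client, SIGNATURE_TYPE_BODY

params = {
    "lti_message_type": "basic-lti-launch-request",
    "lti_version": "LTI-1p0",
    "resource_link_id": "course-101-assignment-7",  # hypothetical IDs
    "context_id": "course-101",
    "user_id": "abc123",
    "roles": "Learner",
}
client = Client("example-key", client_secret="example-secret",
                signature_type=SIGNATURE_TYPE_BODY)
uri, headers, body = client.sign(
    "https://tool.example.com/launch",  # hypothetical tool endpoint
    http_method="POST",
    body=urlencode(params),
    headers={"Content-Type": "application/x-www-form-urlencoded"},
)
print(body)  # form fields plus oauth_* fields, including oauth_signature

The design point is that the integration surface is a protocol rather than a feature list: any tool that can verify that signature can plug into Canvas, which lets Instructure satisfy early adopters without building everything itself.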

Combine this news with that of Blackboard being up for sale and changes in Moodle’s approach, and you have some big moves in the LMS market that should have long-term impacts on institutional decision-making. Watch this space for more coverage.

The post Reuters: Instructure has filed for IPO later this year appeared first on e-Literate.

Blackboard Acquires Large Latin American Moodle Provider

Thu, 2015-08-13 10:13

By Michael FeldsteinMore Posts (1043)

In my first post-BbWorld blog post, I noted that the international market is hugely important for Blackboard and that Moodle is hugely important for their international strategy. Nearly a quarter of the company’s revenue and much of their growth come from their international business, where they seem to be way ahead of their main North American competition in many countries. Learn has some really large contracts—whole-country contracts, in some cases—but Moodle has a lot of smaller ones. In some countries, you just can’t get traction in the LMS space unless you have a Moodle offering. In my post, I predicted that we would see continuing investments in Moodle, based on what we heard from senior management.

Today, Blackboard announced that they have acquired Nivel Siete, a Colombia-based Moodle hosting and services provider with over 200 customers in Latin America. This follows their acquisition of Remote Learner UK, a company that serviced about 100 UK- and Ireland-based Moodle schools at the time of the deal, and their acquisition of X-Ray, a learning analytics company currently focused on Moodle. All of these have come in the last year, on top of the original acquisitions of Moodlerooms and NetSpot, two of the biggest Moodle providers around. There are some interesting—and complicated—long-term implications here for the governance and financial structure of the Moodle ecosystem that Phil and I will eventually write about, but for now it’s worth noting that Blackboard is making serious investments in Moodle and international growth.

The post Blackboard Acquires Large Latin American Moodle Provider appeared first on e-Literate.