Michael Feldstein

What We Are Learning About Online Learning...Online

Fall 2012 US Distance Education Enrollment: Now viewable by each state

Wed, 2014-07-02 23:15

Starting in late 2013, the National Center for Education Statistics (NCES), through its Integrated Postsecondary Education Data System (IPEDS), began providing preliminary data for the Fall 2012 term that for the first time includes online education. Using Tableau (thanks to Justin Menard for prompting me to use this), we can now see a profile of online education in the US for degree-granting colleges and universities, broken out by sector and for each state.

Please note the following:

  • For the most part, the terms distance education and online education are interchangeable, but they are not equivalent: distance education (DE) can include courses delivered by media other than the Internet (e.g., correspondence courses).
  • There are three tabs below – the first shows totals for the US by sector and by level (grad, undergrad); the second also shows the data for each state (this is new); the third shows a map view.
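For readers who would rather poke at the raw IPEDS extract than the Tableau tabs, the two breakdowns described above amount to simple group-and-sum operations. The sketch below is a hypothetical illustration in pandas: the column names and the handful of rows are invented for demonstration and are not the actual IPEDS field names or enrollment figures.

```python
# Hypothetical sketch of the two Tableau breakdowns: national totals
# by sector and level, then the same measure broken out by state.
# Column names and numbers are illustrative, not real IPEDS data.
import pandas as pd

# A few made-up rows standing in for a Fall 2012 IPEDS extract.
df = pd.DataFrame({
    "state": ["AZ", "AZ", "IA", "IA"],
    "sector": ["Private for-profit", "Public", "Private for-profit", "Public"],
    "level": ["Undergrad", "Undergrad", "Grad", "Undergrad"],
    "exclusively_de": [120000, 15000, 40000, 9000],
})

# First tab: US totals by sector and by level (grad, undergrad).
by_sector_level = df.groupby(["sector", "level"])["exclusively_de"].sum()

# Second tab: the same breakdown for each state.
by_state = df.groupby(["state", "sector"])["exclusively_de"].sum()

print(by_sector_level)
print(by_state)
```

The same two `groupby` calls would work on the real extract once the actual IPEDS variable names are substituted in.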

The post Fall 2012 US Distance Education Enrollment: Now viewable by each state appeared first on e-Literate.

Is the DOE backing down on proposed State Authorization regulations?

Thu, 2014-06-26 08:25

Now witness the firepower of this fully written and delivered WCET / UPCEA / Sloan-C letter!

- D. Poulin

One of the policies that we’re tracking at e-Literate is the proposed State Authorization regulation that the US Department of Education (DOE) has been pushing. The latest DOE language represents a dramatic increase in federal control of distance education and in bureaucratic compliance required of institutions and states. In the most recent post we shared a letter from WCET, UPCEA and Sloan-C to Secretary Duncan at the DOE.

What does it take to get all of the higher education institutions and associations to agree? Apparently the answer is for the Department of Education to propose its new State Authorization regulations. [snip]

Here’s what is newsworthy – the idea and proposed language is so damaging to innovation in higher ed (which the DOE so fervently supports in theory) and so burdensome to institutions and state regulators that three higher ed associations have banded together to oppose the proposed rules. WCET (WICHE Cooperative on Educational Technologies), UPCEA (University Professional and Continuing Education Association) and Sloan-C (Sloan Consortium) wrote a letter to Secretary Arne Duncan calling for the DOE to reconsider their planned State Authorization regulations.

While it is unclear how direct an impact the letter had, yesterday brought welcome news from Ted Mitchell at the DOE: they have effectively paused their efforts to introduce new State Authorization regulations. As described at Inside Higher Ed:

The Obama administration is delaying its plan to develop a controversial rule that would require online programs to obtain approval from each and every state in which they enroll students, a top Education Department official said Wednesday.

Under Secretary of Education Ted Mitchell said that the administration would not develop a new “state authorization” regulation for distance education programs before its November 1 deadline.
“We, for all intents and purposes, are pausing on state authorization,” Mitchell said during remarks at the Council for Higher Education Accreditation conference. “It’s complicated, and we want to get it right.”

Mitchell said he wanted to make sure the regulation was addressing a “specific problem” as opposed to a general one. The goal, he said, should be to promote consumer protection while also allowing for innovation and recognizing that “we do live in the 21st century and boundaries don’t matter that much.”

It gets better. Mitchell made this statement while at a workshop for the Council for Higher Education Accreditation, and his speech mentioned his desire to clean up some of the regulatory burden on accrediting agencies. As described at the Chronicle:

Ted Mitchell, the under secretary of education, told attendees at a workshop held by the Council for Higher Education Accreditation that accreditors’ acceptance of more responsibility over the years for monitoring colleges had created “complicated expectations for institutions, regulators, politicians, and the public.”

Much of the work accreditors do to ensure that colleges comply with federal regulations is “less appropriate to accreditors than it may be to the state or federal government,” said Mr. Mitchell, who is the No. 2 official in the Department of Education and oversees all programs related to postsecondary education and federal student aid.

“If I could focus on a spot today,” he said, “it would be the compliance work and seeing if we could relieve accreditors of the burden of taking that on for us.”

This is just a speech, and we do not know what the DOE will eventually propose (or not) on State Authorization. But it is certainly a welcome sign that the department has heard the concerns of many in the higher education community.

Update: See Russ Poulin’s blog post at WCET with more context and inside info.

WCET joined with Sloan-C and UPCEA to write a letter to Education Secretary Arne Duncan and Under Secretary Mitchell about our concerns with the direction the Department was taking and to give recommendations on how the Department might proceed. I have also been talking with numerous groups and individuals that have been writing their own letters or have used their contacts.

On Tuesday of this week, Marshall Hill (Executive Director of the National Council on State Authorization Reciprocity Agreements) and some high-ranking members of the National Council leadership board met with Mr. Mitchell. According to Marshall, Mr. Mitchell was aware of many of the concerns that they raised and was very supportive of reciprocity. From that meeting, Mr. Mitchell indicated that more work needed to be done, but did not suggest the delay.

Mr. Mitchell’s reference in the Inside Higher Ed article about addressing a “specific problem” showed that our message was being heard.

WWW-based online education turns 20 this summer

Tue, 2014-06-24 17:01

I’m a little surprised that this hasn’t gotten any press, but Internet-based online education turns 20 this summer. There were previous distance education programs that used networks of one form or another as the medium (e.g. the University of Phoenix established its “online campus” in 1989), but the real breakthrough was the use of the World Wide Web (WWW), effectively creating what people most commonly know as “the Internet”.

To the best of my knowledge (correct me in comments if there are earlier examples), the first accredited school to offer a course over the WWW was the Open University in a pilot Virtual Summer School project in the summer of 1994. The first course was in Cognitive Psychology, offered to 12 students, as described in this paper by Marc Eisenstadt and others involved in the project (the HTML no longer renders):

In August and September 1994, a Virtual Summer School (VSS) for Open University undergraduate course D309 Cognitive Psychology enabled students to attend an experimental version of summer school ‘electronically’, i.e. from their own homes using a computer and a modem. VSS students were able to participate in group discussions, run experiments, obtain one-to-one tuition, listen to lectures, ask questions, participate as subjects in experiments, conduct literature searches, browse original journal publications, work in project teams, undertake statistical analyses, prepare and submit nicely formatted individual or joint written work, prepare plenary session presentations, and even socialize and chit-chat, all without ever leaving their homes. The term ‘Virtual Summer School’ was used to mean that the software packages supplied to students emulate many aspects of a residential summer school, but without requiring physical attendance. As with many other Open University activities, we feel that face-to-face tuition and peer group interaction would still be preferable if it could be achieved. However, there are sometimes circumstances which preclude physical attendance, so we want to provide the best possible alternative. Virtual Summer School was a first step in this direction. This year, it was only an experimental option for a dozen already-excused students, which gave us a low-risk entry in order to assess the viability of the approach.

There is even a concept video put together by the Open University at the end of 1994 that includes excerpts of the VSS course.

And now for your trip down memory lane, I have taken the paper, cleaned up the formatting, and fixed / updated / removed the links that no longer work. The modified paper is below for easier reading:


Virtual Summer School Project, 1994



One of the great strengths of the UK’s Open University is its extensive infrastructure, which provides face-to-face tuition through a network of more than 7000 part-time tutors throughout the UK and Europe. This support network, combined with in-house production of high-quality text and BBC-produced videos, provides students with much more than is commonly implied by the phrase ‘distance teaching’! Moreover, students on many courses must attend residential schools (e.g. a one-week summer school to gain experience conducting Biology experiments), providing an additional layer of support. About 10% of students have genuine difficulty attending such residential schools, and increasingly we have started to think about addressing the needs of students at a greater distance from our base in the UK. This is where the Virtual Summer School comes in.

The Cognitive Psychology Virtual Summer School

In August and September 1994, a Virtual Summer School (VSS) for Open University undergraduate course D309 Cognitive Psychology enabled students to attend an experimental version of summer school ‘electronically’, i.e. from their own homes using a computer and a modem. VSS students were able to participate in group discussions, run experiments, obtain one-to-one tuition, listen to lectures, ask questions, participate as subjects in experiments, conduct literature searches, browse original journal publications, work in project teams, undertake statistical analyses, prepare and submit nicely formatted individual or joint written work, prepare plenary session presentations, and even socialize and chit-chat, all without ever leaving their homes. The term ‘Virtual Summer School’ was used to mean that the software packages supplied to students emulate many aspects of a residential summer school, but without requiring physical attendance. As with many other Open University activities, we feel that face-to-face tuition and peer group interaction would still be preferable if it could be achieved. However, there are sometimes circumstances which preclude physical attendance, so we want to provide the best possible alternative. Virtual Summer School was a first step in this direction. This year, it was only an experimental option for a dozen already-excused students, which gave us a low-risk entry in order to assess the viability of the approach.

Below we describe the technology involved, evaluation studies, and thoughts about the future.

The Technology

Three main categories of technology were required: communications & groupwork tools, support & infrastructure software/hardware, and academic project software.

Communications and Groupwork
  • Email, Usenet newsgroups, live chat lines and low-bandwidth (keyboard) conferencing: this technology was provided by FirstClass v. 2.5 from SoftArc in Toronto, and gave students a nice-looking veneer for many of their day-to-day interactions. A ‘Virtual Campus’ map appeared on their desktops, and folder navigation relied on a ‘room’ metaphor to describe crucial meeting places and bulletin boards.
  • WWW access: NCSA Mosaic 1.0.3 for Macintosh was provided for this purpose [in the days before Netscape was released]. Students had customized Hotlists which pointed them to academically-relevant places (such as Cognitive & Psychological Sciences on The Internet), as well as some fun places.
  • Internet videoconferencing: Using Cornell University’s CU-SeeMe, students with ordinary Macs or Windows PCs (even over dial-up lines from home) were able to watch multiple participants around the world. Video transmission from slightly higher-spec Macs & PCs was used for several Virtual Summer School events, including a Virtual Guest Lecture by Donald A. Norman, formerly Professor of Psychology at the University of California at San Diego (founder of its Cognitive Science Programme), and now an Apple Fellow.
  • Remote presentation software: we used a product called ‘The Virtual Meeting’ (from RTZ in Cupertino), which allowed synchronized slide & movie presentations on remote Macs & PCs distributed across local, wide, or global (including dial-in) networks, displayed images of all remote ‘participants’, and facilitated moderated turn-taking, ‘hand-raising’, interactive whiteboard drawing & question/answer sessions.
  • Mobile telephone support and voice conferencing: every VSS student was supplied with an NEC P100 cellular phone, so that they could use it while their domestic phone was busy with their modem (some day they’ll have ISDN or fibre optic lines, but not this year). Audio discussions were facilitated by group telephone conference calls, run concurrently with CU-SeeMe and other items shown above. Our largest telephone conference involved 17 participants, and worked fine given that basic politeness constraints were obeyed.
  • Remote diagnostic support and groupwork: Timbuktu Pro from Farallon, running over TCP/IP, enabled us to ‘cruise in’ to our students’ screens while chatting to them on their mobile phones, and to help them sort out specific problems. Students could also work in small self-moderated groups this way, connecting as observers to one user’s Macintosh.
Support and infrastructure software/hardware
  • Comms Infrastructure: TCP/IP support was provided by a combination of MacTCP, MacPPP, VersaTerm Telnet Tool on each student’s machine, plus an Annex box at The Open University connecting to a Mac Quadra 950 running a FirstClass Server and 3 Suns running cross-linked CU-SeeMe reflectors.
  • Tutorial Infrastructure: each student was supplied with HyperCard, MoviePlay, and SuperCard 1.7 to run pre-packaged tutorial and demonstration programs, some of which were controlled remotely by us during group presentations. Pre-packaged ‘guided tour’ demos of all the software were also provided (prepared with a combination of MacroMind Director and CameraMan). To help any computer-naive participants ‘bootstrap’ to the point where they can at least send us an email plea for help, we also supplied a short video showing them how to unpack and connect all of their equipment, and how to run some of the demos and FirstClass.
  • Hardware: one of our aims was to foreshadow the day in the near future when we can presuppose that (a) most students will be computer-literate, (b) students will have their own reasonable-specification hardware, (c) bandwidth limitations will not be so severe, and (d) all of our software will be cross-platform (e.g. Mac or Windows). We could only approximate that in 1994, so we supplied each VSS student with a Macintosh LC-II with 8MB of RAM, a 14.4Kbps modem, a StyleWriter-II printer, 13″ colour monitor, mobile phone and extra mobile phone battery. Students were given a conventional video cassette showing how to set up all the equipment (see tutorial infrastructure above).
Academic project software

Our students had four main support packages to help them in their Cognitive Psychology studies:

  • a custom-built ‘Word Presentation Program’, which allowed them to create stimuli for presentation to other students and automatically record data such as reaction times and button presses (they could create a turnkey experiment package for emailing to fellow students, and then have results emailed back);
  • a HyperCard-based statistics package, for analysing their data;
  • MacProlog from Logic Programming Associates in the UK, for writing simple Artificial Intelligence and Cognitive Simulation programs;
  • ClarisWorks, for preparing reports and presentations, reading articles that we emailed to them as attachments, and doing richer data analyses.
Timetable and evaluation

Students had a three-week warmup period in order to become familiar with their new equipment and run some trial (fun) activities with every piece of software, and formal academic activities took place from August 27th – Sept. 9th, 1994, mostly in the evenings. Thus, the conventional one-week residential summer school was stretched out for two weeks to allow for part-time working. During week one the students concentrated on experimental projects in the area of “Language & Memory” (typically demonstrating inferences that “go beyond the information given”). During week two the students wrote simple AI programs in Prolog that illustrate various aspects of cognitive processing (e.g. simulating children’s arithmetic errors). They were supplied with Paul Mulholland’s version of our own Prolog trace package (see descriptions of our work on Program Visualization) to facilitate their Prolog debugging activities.

A detailed questionnaire was supplied both to the Virtual Summer School students and to conventional summer school students taking the same course. We looked at how students spent their time, which activities were beneficial for them, and many other facets of their Virtual Summer School experience.

[removed reference to Kim Issroff's paper and student interviews, as all links were broken]

The future

The Virtual Summer School finished on 9th September 1994 (following our Virtual Disco on 8th September 1994, incidentally…. we told students about music available on the World Wide Web for private use). What happens next? Here are several issues of importance to us:

  • We must lobby for ever-increasing ‘bandwidth’ [i.e. channel capacity, reflected directly in the amount and quality of full-colour full-screen moving images and quality sound that can be handled]. This is necessary not only for Open University students, but also for the whole of the UK, and indeed for the whole world. As capacity and technology improve, so does the public expectation and need [analogous to the way the M25 motorway was overfull with cars the first day it opened-- the technology itself helps stimulate demand]. Whatever the current ‘Information SuperHighway’ plans are [just like Motorway construction plans], there is a concern that they don’t go far enough.
  • We must RADICALLY improve both (i) the user interfaces and (ii) the underlying layers of communications tools. Even with the excellent software and vendor support that we had at our disposal, all the layers of tools needed (TCP/IP, PPP, Communications Toolbox, etc.) made a veritable house of cards. The layers of tools were (i) non-trivial to configure optimally in the first place (for us, not the students); (ii) non-trivial to mass-install as ‘turnkey’-ready systems for distribution to students; (iii) non-trivial for students to use straight ‘out of the box’ (naturally almost everything in the detailed infrastructure is hidden from the students, but one or two items must of necessity rear their ugly heads, and that gets tricky); and (iv) ‘temperamental’ (students could get interrupted or kicked off when using particular combinations of software). We were fully prepared for (iv), because that’s understandable in the current era of communicating via computers, but (i), (ii), and (iii) were more surprising. [If anyone doubts the nature of these difficulties, I hereby challenge them to use Timbuktu Pro, a wonderful software product, with 4 remotely-sited computer-naive students using TCP/IP over a dial-up PPP connection.] We can do better, and indeed we MUST do better in the future. Many vendors and academic institutions are working on these issues, and they need urgent attention.
  • We must obtain a better understanding of the nature of remote groupwork. Our students worked in groups of size 2, 3, or 4 (depending on various project selection circumstances). Yet even with pre-arranged group discussions by synchronous on-line chat or telephone conference calls, a lot of fast-paced activity would suddenly happen, involving just one student and one tutor. For example, student A might post a project idea to a communal reading area accessible only to fellow project-group students B and C and also tutor T. Tutor T might post a reply with some feedback, and A might read it and react to it before B and C had logged in again. Thus, A and T would have inadvertently created their own ‘shared reality’– a mini-dialogue INTENDED for B and C to participate in as well, yet B and C would get left behind just because of unlucky timing. The end result in this case would be that students A, B, and C would end up doing mostly individual projects, rather than a group project. Tutors could in future ‘hold back’, but this is probably an artificial solution. The ‘shared reality’ between A and T in the above scenario is no different from what would happen if A cornered T in the bar after the day’s activities had finished at a conventional Summer School. However, in that situation T could more easily ensure that B and C were brought up to date the next day. We may ultimately have to settle for project groups of size 2, but not before doing some more studies to try to make larger groups (e.g. size 4) much more cohesive and effective.
  • We need to improve ‘tutor leverage’ (ability to reach and influence more people). Let’s suppose that we have thoroughly researched and developed radical improvements for the three items above (more bandwidth, nice user interfaces with smooth computer/communications infrasture [sic], happy cohesive workgroups of size 4). It would be a shame if, after all that effort and achievement, each tutor could only deal with, say, 3 groups of 4 students anywhere in the world. The sensory overload for tutors at the existing Virtual Summer School was considerable… many simultaneous conversations and many pieces of software and technology running at once. The 1994 Virtual Summer School was (of necessity) run by a self-selecting group of tutors who were competent in both the subject matter and the technology infrastructure. Less technologically-capable tutors need to be able to deal with larger numbers of students in a comfortable fashion, or Virtual Summer School will remain quite a ‘niche’ activity.

The four areas above (more bandwidth, better computer/comms interfaces, larger workgroups, increased tutor leverage) are active areas of research for us…. stay tuned (and see what we’re now doing in KMi Stadium)!

Who made it work?
  • Marc Eisenstadt: VSS Course Director, Slave Driver, and Fusspot
  • Mike Brayshaw: VSS Tutor & Content Wizard
  • Tony Hasemer: VSS Tutor & FirstClass Wizard
  • Ches Lincoln: VSS Counsellor and FirstClass Guru
  • Simon Masterton: VSS Academic Assistant, Mosaic Webmaster, and Mobile Phone Guru
  • Stuart Watt: VSS Mac Wizard
  • Martin Le Voi: VSS Memory/Stats Advisor & Unix Guru
  • Kim Issroff: VSS Evaluation and Report
  • Richard Ross: VSS Talking Head Guided Tour
  • Donald A. Norman (Apple, Inc.): VSS Virtual Guest Lecturer
  • Blaine Price: Unix & Internet Guru & Catalyst
  • Adam Freeman: Comms & Networking Guru
  • Ian Terrell: Network Infrastructure Wizard
  • Mark L. Miller (Apple, Inc.): Crucial Guidance
  • Christine Peyton (Apple UK): Support-against-all-odds
  • Ortenz Rose: Admin & Sanity Preservation
  • Elaine Sharkey: Warehousing/Shipping Logistics

Update: Changed title and Internet vs. WWW language to avoid post-hoc flunking of Dr. Chuck’s IHTS MOOC.

Coursera shifts focus from ‘impact on learners’ to ‘reach of universities’

Mon, 2014-06-23 17:15

Richard Levin, the new CEO of Coursera, is getting quite clear about the new goals for the company. At first glance the changes might seem semantic in nature, but I believe the semantics are revealing. Consider this interview published today in the Washington Post [emphasis added in both cases below]:

Richard C. Levin, the new chief executive of Coursera, the most widely used MOOC platform, wants to steer the conversation back to what grabbed public attention in the first place: the wow factor.

Sure, Levin said, the emerging technology will help professors stimulate students on campus who are tired of old-school lectures. The talk of “flipped classrooms” and “blended learning” — weaving MOOCs into classroom experiences — is not mere hype.

“But that is not the big picture,” Levin said in a visit last week to The Washington Post. “The big picture is this magnifies the reach of universities by two or three orders of magnitude.”

Contrast this interview with Daphne Koller’s December article at EdSurge:

Among our priorities in the coming year, we hope to shift the conversation around these two dimensions of the learning experience, redefine what it means to be successful, and lay the groundwork for products, offerings, and features that can help students navigate this new medium of learning to meet their own goals, whether that means completing dozens of courses or simply checking out a new subject. [snip]

Still, we are deeply committed to expanding our impact on populations that have been traditionally underserved by higher education, and are actively working to broaden access for students in less-developed countries through a range of initiatives.

There are valid criticisms of how well Coursera has delivered on its goal of helping students meet their own learning goals, but now it is apparent that the focus of their efforts is shifting away from the learner and towards the institution. Below are a few notes based on these recent interviews.

Changing Direction From Founders’ Vision

This is the second interview where Levin contradicts the two Coursera founders. In the case above Levin shows the point of Coursera is not primarily impact on learners but is reach of great universities. In a New York Times interview from April he made similar points in contrast to Andrew Ng.

In a recent interview, Mr. Levin predicted that the company would be “financially viable” within five years. He began by disagreeing with Andrew Ng, Coursera’s co-founder, who described Coursera as “a technology company.”

Q. Why is the former president of Yale going to a technology company?

A. We may differ in our views. The technology is obviously incredibly important, but what really makes this interesting for me is this capacity to expand the mission of our great universities, both in the United States and abroad, to reach audiences that don’t have access to higher education otherwise.

Levin is signifying a change at Coursera, and he is not just a new CEO to manage the same business. Andrew Ng no longer has an operational role in the company, but he remains as Chairman of the Board (I’m not claiming a correlation here, but just noting the change in roles).

Reach Is Not Impact

@PhilOnEdTech Is "reach" the same as "impact"?

— Russell Poulin (@RussPoulin) June 23, 2014

The answer in my opinion is only ‘yes’ if the object of the phrase is the universities. Impact on learners is not the end goal. In Levin’s world there is a class of universities that are already “great”, and the end goal is to help these universities reach more people. This is about A) having more people understand the value of each university (branding, eyeballs) and B) getting those universities to help more people. I’m sure that B) is altruistic in nature, but Levin does not seem to focus on what that help actually comprises. Instead we get abstract concepts as we see in the Washington Post:

“That’s why I decided to do it,” Levin said. “Make the great universities have an even bigger impact on the world.”

Levin seems enamored of the scale of Coursera (8.2 million registered students, etc.), but I can find no concrete statements in his recent interviews that focus on actual learning results or improvements to the learning process (correct me in the comments if I have missed some key interview). This view is very different from the vision Koller was offering in December. In her vision, Koller attempts to improve impact on learners (the end) by using instruction from great universities (the means).

Other People’s Money

Given this view of expanding the reach of great universities, the candor about a lack of revenue model is interesting.

“Nobody’s breathing down our necks to start to turn a profit,” he said. Eventually that will change.

Levin said, however, that “a couple” universities are covering their costs through shared revenue. He declined to identify them.

This lack of priority on generating a viable revenue model is consistent with the pre-Levin era, but what if you take it to its logical end with the new focus of the company? What we now have is a consistent story with AllLearn and Open Yale Courses – spending other people’s money to expand the reach of great universities. Have we now reached the point where universities that often have billion-dollar endowments are using venture capital money to fund part of their branding activities? There’s a certain irony in that situation.

It is possible that Levin’s focus will indirectly improve the learning potential of Coursera’s products and services, but it is worth noting a significant change in focus from the largest MOOC provider.

“Personalized Learning” Is Redundant

Mon, 2014-06-23 11:13

Dan Meyer has just published a provocative post called “Don’t Personalize Learning,” inspired by an even more provocative post with the same title by Benjamin Riley (as well as being a follow-up to Meyer’s post “Tools for Socialized Instruction not Individualized Instruction“). Part of the confound here is sloppy terminology. Specifically, I think the term “personalized learning” doesn’t really mean anything, so it’s hard to have an intelligent conversation about it.

All learning is personalized by virtue of the fact that it is accomplished by a person for him or herself. This may seem like a pedantic point, but if the whole point of creating the term is to focus on fitting the education to the student rather than the other way around, then it’s important to be clear about agency. What we really want to talk about, I think, is “personalized education” or, more specifically, “personalized instruction.” Here too we need to be thoughtful about what we mean by “personalized.” To me, “personalized” means “to make more personal,” which has to do with the goals and desires of the person in question. If I let you choose what you want to learn and how you want to learn it, those are aspects of personalization. Riley argues that radical personalization, where students make all the choices, isn’t necessarily a good thing, for several reasons. One reason he gives is that learning is cumulative and students are not likely to stumble upon the correct ordering by themselves. He asserts that teaching was invented “largely to solve for that problem.” I agree that one of the main values of a teacher is to help students find good learning paths, but I disagree that students are unlikely to find good paths themselves. Teachers can help students optimize, but the truth is that people learn all sorts of things all the time on their own. Teaching is about the zone of proximal development; it’s about helping students learn (and discover) those things that they are not quite ready to learn on their own but can learn with a little bit of help. That’s not the same thing at all as saying that humans aren’t good at constructing good learning experiences for themselves (which is what you get if you take Riley’s argument to its logical conclusion). Also, I believe in the value of curriculum, but it’s a bit of a straw man to suggest that personalized learning must mean that students decide everything, for themselves and on their own.

And I vehemently disagree with him when he writes,

Second, the problem with the pace argument is that it too contradicts one of the key insights from cognitive science: our minds are not built to think. In fact, our brains are largely oriented to avoid thinking. That’s because thinking is hard. And often not fun, at least at first. As a result, we will naturally gravitate away from activities that we find hard and unpleasant.

Frankly, I think he draws exactly the wrong conclusion from the research he cites. I would say, rather, that we are most inclined to think about things that inspire a sense of fun. We like stories and puzzles. But which stories and which puzzles we like is…well…personal. If you want to get humans to think on a regular basis, then you have to make it personal to them. My own experience as both a teacher and a learner is that if a person is personally engaged then he or she can often learn quite quickly and eagerly. The same cannot often be said of somebody who is personally disengaged. Of course, one can be personally engaged without having a personalized learning experience, if by the latter you mean that the student chooses the work. But the point I made at the top of the post is that “personal” is inherent to the person. The student may not decide what work to do, but she and only she always decides whether or not to engage with that work. When the work is not personalized, a good teacher is always performing acts of persuasion, trying to help students find personal reasons to engage.

Meyer is latching onto something different. By “personal” he seems to mean “solitary,” and I interpret him to be responding specifically to adaptive systems, which are often labeled “personalized learning” (as well as “new and improved” and “99.44% pure”). First of all, in and of themselves, adaptive systems are often not personalized in the sense that I described above. They are customized, in that they respond to the individual learner’s knowledge and skill gaps, but they are not personalized. Customized solitary instruction has its place, as I described in my post about what teachers should know about adaptive systems. Customized instruction can also be personalized—for example, students can choose their path down a skill tree on Khan Academy. But I think Dan’s main point is that many of the more interesting and potent learning experiences tend to happen when humans talk with other intelligent humans. We learn from each other, traveling down paths that machines can’t take us yet (and probably won’t be able to for quite a while). It is possible for a learning experience to be simultaneously social and personalized, for example, when students individually work on problems they choose that are interesting to them but then discuss their ideas and solutions with their classmates.

So, to sum up:

  1. Humans are generally pretty good at learning what they want to learn (but can get stuck sometimes).
  2. Help from good teachers can enable humans to learn more effectively than they can on their own in many cases.
  3. Sometimes solitary study can be helpful, particularly for practicing weak skills.
  4. Conversations with other humans often lead to rich, powerful, and personal learning experiences that are difficult or impossible to have on one’s own.
  5. All learning is personal. Some instruction is personalized to a student’s individual interests and choices, and some is customized to a student’s individual skills and knowledge. Some is both and some is neither.
  6. Personalized instruction may or may not include social learning activities.
  7. Customized instruction may or may not include some personalization.

Why do we make this stuff so complicated?

The post “Personalized Learning” Is Redundant appeared first on e-Literate.

InstructureCon: Canvas LMS has different competition now

Thu, 2014-06-19 05:27

For the first few years of the Canvas LMS, Instructure’s core message was ‘Canvas is better than Blackboard’. This positioning was thinly veiled in the company’s 2011 spoof of the Apple / 1984 commercial and even rose to the level of gloating in a company blog post commenting on Blackboard’s strategy reversal in 2012. Instructure made their name by being the anti-Blackboard.

At InstructureCon 2014, there was hardly a mention of Blackboard or any of the other LMS providers. In fact, most of the general sessions avoided any direct or indirect comparison of LMS products. This year there were three observations that surprised me:

  • Snow in June;
  • Canvas growth in K-12 markets; and
  • Lack of mention of LMS competitors or product one-upmanship.
Snow in June


The weather eventually cleared up, however, with a high of 69 forecast for later today.

Company Growth, Even in K-12

Instructure is on a roll, and in the 3.5 years since the launch of Canvas LMS, they have grown to more than 800 customers and more than 12 million end users registered in their system. During Josh Coates’ keynote, he showed a chart of the growth, including breakouts per market. In the two years since I was last at InstructureCon (2012), the company has almost tripled its number of higher ed clients and more than quadrupled its total number of customers.

Beyond the impressive overall growth, I was surprised to see that Instructure now appears to have approximately 3/4 the number of K-12 customers as they do higher ed customers. I also noticed a large number of K-12 users at the conference.

Graphic presented by Instructure at 2014 InstructureCon


It is worth pointing out a few caveats:

  • The K-12 market is more of “the wild west” (term used at Instructure) than higher ed, with a large number of unconnected districts without consistent purchasing patterns;
  • There are far more K-12 schools and districts than there are higher ed institutions, and Canvas market percentages are much lower in K-12 than higher ed;
  • Typical customer sizes can be much smaller in K-12, so I doubt that Instructure makes 3/4 the revenue in K-12 as they do in higher ed; and
  • These are self-reported student and faculty registrations, which include newly-signed schools that have not yet migrated from their old system (this chart is more of a leading indicator than typical market share measures).
Canvas Moving to Next Stage

Given the significant growth over just 3.5 years, the change in tone from Instructure is striking. Call it a maturing process, or call it confidence from winning the majority of LMS selections in higher ed recently, but Instructure has subtly but significantly changed their assumed competition. Rather than focusing on being better than Blackboard or Desire2Learn or Moodle or Sakai, the real competition for Canvas now seems to be lack of meaningful adoption, whether the end users are working online or face-to-face.

Josh Coates’ keynote was close to 40 minutes in duration, and I estimate he mentioned Canvas for 4 minutes or less – and that was on system uptime and growth in adoption presented in an introspective manner. Rather than pitching the product, Josh spent the majority of the keynote talking about his fictional and real inspirations or heroes, including Katherine Switzer, Atticus Finch, Norman Borlaug, and Sophie Scholl. Is this the same Josh Coates from the 2013 Learning Impact Fight Club as described by Claude Vervoort?

LMS CEOs Panel: Leave all Political Correctness at the door, thanks! I was amazed. It all started smoothly but it did not take long for Instructure’s CEO Josh Coates to give a kick in the anthill. I could not believe my ears :) Not always constructive but surely entertaining!

For the general sessions and overriding conference themes, the primary product announcement was on “Lossless Learning” – combining “the ease and efficiency of online learning with the magic of a face-to-face environment”. The idea is to use online tools to augment the face-to-face experience, with four tools to support this idea:

  • Canvas Polls – a built-in response system using iOS or Android devices to replace clickers;
  • Magic Marker – an iPad app allowing the instructor to observe and evaluate individual performance of students in group environments, integrated with Canvas gradebook;
  • Quiz Stats – an improved visualization of multiple-choice quizzes and item analysis; and
  • Learning Mastery for Students – the student view of mastery-based gradebook.

Mike Caulfield describes more of the minimally invasive assessment angle in this blog post.

Instructure has a new announcement about Canvas, and it’s in an area close to my heart. They are rolling out a suite of tools that allow instructors to capture learning data from in-class activities.

But Mike, you say, the LMS is evil, and more LMS is eviler. Why you gotta be Satan’s Cheerleader?

Well, here’s my take on that. The LMS is not evil. What is evil is making the learning environment of your class serve the needs of the learning management system rather than serve the needs of the students.

Leading this Lossless Learning effort is Jared Stein, whose role is to connect the Canvas product team with actual classroom usage and vice versa. When co-founder Devlin Daley left Instructure last year, I made the following observation:

While that official explanation makes sense, it doesn’t mean that Devlin’s departure will not affect Instructure. The biggest challenge they will face, in my opinion, is having someone out on the road, working with customers, asking why and what if questions. Just naming a person or two to this role is not the same as having the original vision and skills from a co-founder, although I would expect Jared Stein to play a key role in this regard.

What I believe I am seeing at InstructureCon is just how important Jared is becoming to Instructure’s strategy.

Rather than Canvas vs. Blackboard or Desire2Learn or Moodle or Sakai, the message now has shifted to more meaningful implementations of Canvas vs. shallow usage of an LMS.


I used to think that the biggest risk that Instructure faced was the lack of focus on large online programs (University of Central Florida being the primary exception). No longer, as I see that the company has plenty of headroom to grow with their focus more on augmenting traditional face-to-face or hybrid programs, especially with K-12 markets and international markets being open. The biggest risks I now see:

  • Hubris – there is a fine line between confidence that allows a company to look beyond other LMS providers and cockiness; if the company becomes too comfortable in their growth and becomes cocky, then they can take a fall like other LMS providers have shown.
  • Focus – as the company grows and adds customers, it will be increasingly difficult to maintain the focus that has led them to have a clean, intuitive user interface and to avoid feature bloat.

We’ll keep watching Instructure and their Canvas product suite, but we’ll also look at other LMS providers and how they might change the market.

The post InstructureCon: Canvas LMS has different competition now appeared first on e-Literate.

Happy Birthday, e-Literate

Wed, 2014-06-18 14:17

Ten years ago today, I wrote my first blog post on e-Literate. At the time, my only real ambitions were to learn about blogging and use the fact that I was writing in public to force myself to think more clearly about what I believed about educational technologies. I never could have imagined that, not only would I still be writing a decade later, but that it would become such a large and important part of my professional and personal life. I have met many wonderful people and had unbelievable opportunities as a result of this blog.

At the beginning of my career as an educator, I was a middle school and high school teacher. I am the son of teachers, brother of teachers, husband of a teacher, and father-in-law of a teacher. I still consider myself to be a teacher. I particularly loved teaching 8th grade and plan to go back to it someday. Occasionally I wonder if I’m doing the right thing by doing what I do now instead of going back to the classroom immediately. But then I run into people like Jon and Chris Boggiano at someplace like the GSV conference. These two brothers enlisted in the army after 9/11, went to West Point, and served in active combat duty in places like Iraq, Kosovo, and Afghanistan. After they got out of the Army, they turned down lucrative careers in corporate America to create a startup training veterans and other folks in green jobs. They were recognized by the White House for their success. They then sold their startup, enrolled in graduate school in Stanford together, used their proceeds to become educational technology angel investors, and are busy planning their next startup. Here’s a video of them in action, presenting their thoughts on childhood and educational philosophy at a Stanford event:

Click here to view the embedded video.

One more thing: Jon and Chris are my former seventh and eighth grade students.

Their choices make me feel better about mine. If they think that developing technology in the service of education is a good way to try to change the world, then I am going to do everything I can to help them and people like them. I don’t always trust my judgment about what’s the best thing to do for our future, but I trust theirs.

They are also a reminder that we often don’t know the consequences of the actions we take. I was absolutely stunned to see their names on the attendee list for the ed tech conference, but I also felt incredibly proud. Jon and Chris deserve 100% credit for who they have become and what they have accomplished, but I feel privileged to have played a small part in their grand adventure.

And in that spirit, I want to ask you all if you might consider giving e-Literate a birthday present. When Phil and I write our blog posts, we are in our respective homes. Maybe we talk to each other a little bit about what we’re writing, but it’s mostly a solitary experience. We get sporadic feedback, often long after we have written our pieces. We get some idea of what people are thinking by the comments and the web traffic, but we don’t know if what we’re doing really matters. So if something we’ve written here has made a difference, if there was a decision you made differently or some post that influenced your thinking, we’d be grateful if you would let us know in comments or in email. It would mean a great deal to us to know that we’ve had even a small impact.

Thanks for sticking with us, and I hope that you will still find us worth reading ten years from now.

The post Happy Birthday, e-Literate appeared first on e-Literate.

WCET, UPCEA & Sloan-C call on DOE to change State Authorization proposal

Mon, 2014-06-16 20:46

What does it take to get all of the higher education institutions and associations to agree? Apparently the answer is for the Department of Education to propose its new State Authorization regulations.

As part of DOE’s negotiated rulemaking process over the past half year, representatives ranging from schools (Columbia University, Youngstown State University, Benedict College, Santa Barbara City College, Clemson University, MIT, Capella University) to higher ed associations (WCET) were unanimous in their rejection of the proposed State Authorization rules. As Russ Poulin wrote for WCET:

On Tuesday May 20, the Committee had its final vote on the proposed language. I voted “no.” I was joined in withholding consent by all the representatives of every higher education sector. Nine out of sixteen negotiators voting “no” is a high ratio.

Note that only one of the mentioned groups is a for-profit university – the purported offenders causing the need for the regulations. I wrote a post arguing that the proposed rules represented a dramatic increase in control over distance education that would cause a significant increase in compliance and administrative overhead for both colleges / universities and for states themselves.

Predictably, the rulemaking process ended in a lack of consensus, which allows the DOE to propose whatever language it desires. The latest proposal was from the DOE, and it would make sense for the final proposal to follow this language closely.

Here’s what is newsworthy – the idea and proposed language is so damaging to innovation in higher ed (which the DOE so fervently supports in theory) and so burdensome to institutions and state regulators that three higher ed associations have banded together to oppose the proposed rules. WCET (WICHE Cooperative on Educational Technologies), UPCEA (University Professional and Continuing Education Association) and Sloan-C (Sloan Consortium) wrote a letter to Secretary Arne Duncan calling for the DOE to reconsider their planned State Authorization regulations. As the intro states [emphasis added]:

The member institutions of our three organizations are leaders in the practice of providing quality postsecondary distance education to students throughout the nation and the world. Our organizations represent the vast majority of institutions that are passionate about distance education across the country and across all higher education sectors.

For the first time our organizations are joining with one voice to express our concern over the Department of Education’s “state authorization for distance education” proposal(1) that was recently rejected by most of the members of the Program Integrity and Improvement Negotiated Rulemaking Committee. Our comments are focused on the final draft proposal presented to the Committee. We believe the final draft represents the most current thinking of Department staff as they construct a regulation for public comment.

We are eager to promote policies and practices that protect consumers and improve the educational experience of the distance learner. Unfortunately, the final draft regulation would achieve neither of those goals.

The impact of the proposed regulations would be large-scale disruption, confusion, and higher costs for students in the short-term. In addition, there would be no long-term benefits for students. This letter briefly outlines our concerns and provides recommendations that achieve the Department’s goals without disrupting students enrolling in distance education programs across state lines.

As an example of the problems with the latest proposal:

Second, when pressed to define an “active review,” the Department provided a short list of criteria that states could use in the review, such as submitting a fiscal statement or a list of programs to be offered in the state. While it may sound simple to add a few review criteria, state regulators cannot act arbitrarily. Their authorization actions must be based on state laws and regulations. Therefore, state laws would need to be changed and the state regulators would need to add staff to conduct the necessary reviews. Our analysis estimates that 45 states would need to make these changes. This is a large amount of activity and added costs for what appears to be a “cursory” review. These reviews will likely not change a decision regarding an institution’s eligibility in a state. There is no benefit for the student.

The letter does not just list objections but also offers eight concrete recommendations that would help DOE achieve its stated goals.

Michael and I fully endorse this letter and also call on the DOE to rethink its position.

The full letter can be found at WCET’s site along with an explanatory blog post.

The post WCET, UPCEA & Sloan-C call on DOE to change State Authorization proposal appeared first on e-Literate.

Starbucks Paying for Employees Tuition at ASU Online

Mon, 2014-06-16 10:08

This is a big deal:

Starbucks will provide a free online college education to thousands of its workers, without requiring that they remain with the company, through an unusual arrangement with Arizona State University, the company and the university will announce on Monday.

The program is open to any of the company’s 135,000 United States employees, provided they work at least 20 hours a week and have the grades and test scores to gain admission to Arizona State. For a barista with at least two years of college credit, the company will pay full tuition; for those with fewer credits it will pay part of the cost, but even for many of them, courses will be free, with government and university aid.

Over the past few decades, America has slowly but surely been transitioning from a system in which college education was treated as a public good (and therefore subsidized by taxpayers) to being a private good (and therefore paid for entirely by students and their families). And while there is no substitute for that model, it is interesting and important that Starbucks is positioning college tuition the way companies position health insurance plans—as a benefit they use to compete for better workers.

This is not an entirely new idea. Many companies have tuition reimbursement, although it often comes with more restrictions and is typically aimed at white-collar workers. A while back, Wal-Mart made headlines by offering heavily subsidized (but not free) college credit in partnership with APU. Starbucks takes this to the next level. Since both Wal-Mart and Starbucks have reputations as union busters, it will be interesting to see how their respective college subsidization moves impact their struggles with their labor forces. Will tuition help them lower demand for unionization? Will it become another bargaining chip at the negotiating table?

I wrote a while back about the idea of reviving the apprenticeship for the digital age and gave an example of an Indian tech company that is doing it. I think we’re going to see a lot more of variations on the theme of employer-funded education in the future.

You can learn more about the Starbucks college program at their website.

The post Starbucks Paying for Employees Tuition at ASU Online appeared first on e-Literate.

Why Google Classroom won’t affect institutional LMS market … yet

Sun, 2014-06-15 16:54

Yesterday I shared a post about the new Google Classroom details that are coming out via YouTube videos, and as part of that post I made the following statement [emphasis added]:

I am not one to look at Google’s moves as the end of the LMS or a complete shift in the market (at least in the short term), but I do think Classroom is significant and worth watching. I suspect this will have a bigger impact on individual faculty adoption in higher ed or as a secondary LMS than it will on official institutional adoption, at least for the next 2 – 3 years.

The early analysis is based on this video that shows some of the key features:

There is also a new video showing side-by-side instructor and student views that is worth watching.

Here’s why I believe that Classroom will not affect the LMS market for several years. Google Classroom is a slick tool that appeals to individual instructors whose schools use Google Apps for Education (GAE) – primarily K-12 instructors but also higher ed faculty members. The tight integration of Google Drive, Google+ and GAE rosters allows for easy creation of course sites by the instructor, easy sharing of assignments and documents (particularly where the instructor creates the GDrive document and has students directly edit and add to that document), and easy feedback and grading of individual assignments. Working within the GAE framework, there are a lot of possibilities for individual instructors or instructional designers to expand the course tools. All of these features are faculty-friendly and help deliver on Google’s promise of “More time for teaching; more time for learning”.

But these features are targeted at innovators and early adopter instructors who are willing to fill in the gaps themselves.

  • The course creation, including setting up of rosters, is easy for an instructor to do manually, but it is manual. There has been no discussion that I can find showing that the system can automatically create a course, including roster, and update over the add / drop period.
  • There is no provision for multiple roles (student in one class, teacher in another) or for multiple teachers per class.
  • The integration with Google Drive, especially with Google Docs and Sheets, is quite intuitive. But there is no provision for PDF or MS Word docs or even publisher-provided courseware.
  • There does not appear to be a gradebook – just grading of individual assignments. There is a button to export grades, and I assume that you can combine all the grades into a custom Google Sheets spreadsheet or even pick a GAE gradebook app. But there is no consistent gradebook available for all instructors within an institution to use and for students to see consistently.
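On that last point: since grades live only on individual assignments, combining the exports into a single gradebook is left to the instructor. A minimal sketch of what that stitching might look like, assuming (hypothetically) that each assignment exports a CSV with `student` and `score` columns — the actual export format is not documented in the videos:

```python
import csv
import io

def merge_gradebook(csv_texts, assignment_names):
    """Merge per-assignment CSV exports into one gradebook.

    csv_texts: list of CSV strings, one per assignment, each with
    'student' and 'score' columns (a hypothetical export format).
    Returns {student: {assignment_name: score}}.
    """
    gradebook = {}
    for name, text in zip(assignment_names, csv_texts):
        for row in csv.DictReader(io.StringIO(text)):
            # Missing assignments simply leave a gap for that student.
            gradebook.setdefault(row["student"], {})[name] = float(row["score"])
    return gradebook

# Two hypothetical assignment exports
hw1 = "student,score\nalice@school.edu,9\nbob@school.edu,7\n"
hw2 = "student,score\nalice@school.edu,10\n"
book = merge_gradebook([hw1, hw2], ["HW1", "HW2"])
print(book["alice@school.edu"])  # {'HW1': 9.0, 'HW2': 10.0}
```

The point of the sketch is less the code than the gap it illustrates: every instructor who wants a consolidated view has to build (or find) something like this themselves, which is exactly the kind of work the majority of faculty will not do.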

For higher ed institutions in particular, we are just now getting to the stage where the majority of faculty use the institutional LMS. I am seeing more and more surveys of individual institutions where 70+% of faculty use the LMS for most of their courses. What this means, however, is that we now have different categories of adopters for the institutional LMS – the early majority (characterized by a pragmatic approach) and the late majority (characterized by a conservative approach) – as shown by the technology adoption curve. I am showing the version that Geoffrey Moore built on top of the Everett Rogers base model.


With adoption often above 50% or more of faculty, the institution has to serve both the group on the left (innovators and early adopters) and the larger group on the right (early and late majority more than laggards). As poorly designed as some of the institutional LMS solutions are, they typically allow automatic course and roster creation with updates, sharing of multiple document types, integrated standard gradebooks, and many others.

Institutions can (and really should) allow innovators and early adopters to try out new solutions and help create course designs not bound by the standard LMS implied pedagogy, but institutions cannot ignore the majority faculty who are typically unwilling to spend their own time to fill in the technology gaps – especially now that these faculty are just getting used to LMS usage.

None of this argues that Google Classroom is an inferior tool – it is just not designed to replace the full-featured LMS. Remember that Google is a technology-vision company that is comfortable putting out new tools before it understands how the tools will be used. Google is also comfortable playing the long game, getting more and more instructors and faculty using the toolset, giving feedback, and pushing it forward. This process will take some time to play out – at least 2 or 3 years in my opinion before a full institutional LMS may be available, if Google likes the direction Classroom usage is going.

Google Classroom does attempt to partially understand the instructor use cases, but it is not designed as a holistic product. Think of Classroom as ‘let’s see how to tie existing Google tools together to advance the ball in the general course site world’. It is still technology first – and more specifically Google technology first. The use cases are simple (e.g. one instructor sharing a GDrive-based assignment with students who edit and then submit it for feedback), but there are many possibilities for clever faculty to innovate.

In the near-term, Google Classroom will likely be a factor for individual faculty adoption (innovators, early adopters) at schools with GAE licenses, or even as a secondary LMS – but not as a replacement for the institutional LMS.

The post Why Google Classroom won’t affect institutional LMS market … yet appeared first on e-Literate.

Google Classroom: Early videos of their closest attempt at an LMS

Sat, 2014-06-14 12:51

For years the ed tech community has speculated about Google entering the LMS market, pointing to Wave (discontinued, but with some key features embedded in other tools), Apps for Education, and even, incorrectly, Pearson OpenClass. Each time there were some possibilities, but Google has not shown interest in fully replacing LMS functionality.

Google Classroom, announced in May and with new details coming out this week, is the closest that Google has come to fully providing an LMS. The focus is much more on K-12 than on higher education.

For background on the concept, see this video from May which emphasizes roster integration (with some instructor setup required), document sharing through Google Drive, server and security through the cloud, discussions tied to assignments, and a focus on ‘letting teachers and students teach or learn their way’.

Yesterday Google shared a new video that directly shows the user interface and some key features.

You can see more directly how to create a class, add students, create assignments, share resources (GDrive, YouTube, website), have students edit in G Docs, track student submissions, and grade individual assignments.

What I don’t see (and admittedly this is based on short videos) is a full gradebook. The grades are tied to each assignment, but they do not show any collection of grades into a course-based gradebook.


Teachers with early access to Google Classroom are starting to upload videos with their reviews. Dan Leighton has one from a UK perspective that has more details, including his concern that the system has a ‘sage on the stage’ design assumption.

I am not one to look at Google’s moves as the end of the LMS or a complete shift in the market (at least in the short term), but I do think Classroom is significant and worth watching. I suspect this will have a bigger impact on individual faculty adoption in higher ed or as a secondary LMS than it will on official institutional adoption, at least for the next 2 – 3 years.

The post Google Classroom: Early videos of their closest attempt at an LMS appeared first on e-Literate.

Unizin membership fee is separate from Canvas license fee

Thu, 2014-06-12 12:12

With Unizin going public yesterday, I’ve been looking over our three posts at e-Literate to see if there are any corrections or clarifications needed.

Based on yesterday’s press release, official web site release and media event, I have not found any corrections needed, but I do think there is a clarification needed on whether Unizin membership includes Canvas usage or not (it does not).

Common Infrastructure

During the media call, the Unizin team (representatives from the four founding schools plus Internet2 and Instructure) pushed the common infrastructure angle. Rather than being viewed as a buying club where schools can buy from a common set of pre-negotiated services, Unizin can more accurately be viewed as planned integration and operation of a common service to allow white-label outsourcing of the learning infrastructure. The idea is that a school can use Unizin’s infrastructure for LMS, content repository and analytics engine rather than hosting or maintaining their own, but the service will not be branded as Unizin for the faculty and student end users. From the Unizin FAQ page:

There is another sense in which Unizin is different from MOOC efforts. Unizin is about providing common infrastructure to support the missions of its university members. It is not a public-facing brand. It will not offer content, courses, or degrees in its own name. Unizin’s membership model is built on the premise that universities need to strengthen their influence in the use of data and content for the long run. They will accomplish this goal by working together and taking greater control over content, while also opening content up for selective sharing.

In this morning’s IHE article, the Unizin founders described the need for a common infrastructure:

The digital learning consortium, announced Wednesday morning, aims to simplify how universities share learning analytics, content and software platforms. But in order to do so, Unizin needs its members to use the same infrastructure. A common learning management system is the first part of that package.

“You don’t really have common infrastructure if you’re saying everything is heterogeneous,” said Brad Wheeler, the Unizin co-founder who serves as vice president for IT and chief information officer at Indiana University. “A lot of these different learning tools — Sakai, Blackboard, Canvas — they all do a bunch of really good stuff. But five universities picking five different ones — what’s the end value in that if they want to do something together?”

Brad Wheeler went on to describe the results that will occur from sharing infrastructure:

“This is a hard decision,” Wheeler said about picking Canvas. “I think the key point is enabling Unizin to do what it’s meant to do…. The path for Unizin is creating a dependence on Unizin — a university-owned entity — but creating potential for greater interdependence among our institutions.”

Instead of differentiating themselves based on what software tools they individually pick, Wheeler said, Unizin’s member institutions will stand out based on what they do with the common platform — in other words, the degrees they offer, the research they produce and the students they serve.

Clarification on Canvas Licensing

It has not been clear whether Unizin membership itself includes access to Canvas or merely the ability to purchase a Canvas license under the Unizin agreement. In other words, do Unizin member institutions pay $1 million for Unizin membership and also pay Instructure for Canvas, or do they just pay the membership fee? This question goes beyond Canvas – when a learning repository or analytics engine is in place, will schools have to pay extra for that solution, or is it already included in membership?

From the Unizin FAQ:

Will Unizin be a Canvas reseller?
No. Members of Unizin will be able to obtain the Canvas LMS product for campus use via their membership. If members already have Canvas, they can maintain that existing relationship (either via a direct contract with Instructure or via Internet2 Net+ services) or take advantage of the Unizin agreement.

What does “will be able to obtain” mean? Brad Wheeler clarified the situation through an email exchange.

The investment capitalizes Unizin for the integration and operation of the service including a Content and Analytics capability. The Canvas costs are pass-through as they vary ‎by institutional size.

So schools pay for Unizin membership and they pay for Canvas. Presumably there will be a similar arrangement for future learning repository and analytics engines, if these components end up being commercially-developed software.

As additional clarification, in answer to our previous open questions: Canvas usage runs on Instructure's current multi-tenant production site, just as it does for current customers. Unizin presumably will have outsized influence on the product roadmap and on open source code additions (through integrated apps or possibly to Canvas code itself).

We have also heard that there is a support fee (some percentage of the license fee) for Unizin's indirect support of the Canvas service. Treat this component as strong speculation rather than a confirmed detail.

In summary, this means that member institutions pay for the following for the current service:

  • Just over $1 million paid over 3 years (~$350 k per year according to media event) for Unizin membership to create the organization that integrates and operates the service (LMS + Content + Analytics);
  • Canvas license fee paid one of three ways: 1) pass-through using Unizin agreement, 2) Net+ agreement from Internet2, or 3) existing private license between school and Instructure; and
  • Potential (not confirmed) support fee paid to Unizin consortium for their indirect support of Canvas as a service.

The post Unizin membership fee is separate from Canvas license fee appeared first on e-Literate.

It’s Official: Unizin Is Real

Wed, 2014-06-11 10:24


A giant deity from the confines of space and time, Unizin has orbited the Earth like a comet, appearing once every twelve years of Christmas Eve. One man by the name of Dr. Kori wanted to study Unizin after seeing the monster himself when he was a boy. Even though he was called a con artist trying to capture Unizin he never gave up his search. On Christmas Eve day, he was working on his self made radar that would find Unizin when Elly came and was shocked by the device. After Dr. Kori repaired her the two made a trap to capture Unizin. That night, the dimensional kaiju made himself known as both DASH and Dr. Kori captures it in a mystical trap, but time around him began to disappear. Kaito turned into Ultraman Max and made a barrier to slow down the process. Not wanting the graceful giant to be hurt Dr. Kori let it go, restoring everything to normal. Unizin gave the doctor the branch of a tree he had not seen in some time, whether it was intended to be given to him or a reward for releasing him was unknown, but he was grateful nonetheless.

- The Ultraman Wiki

No, not that Unizin. This Unizin. The secret university consortium is no longer secret. Phil and I wrote a few posts about the consortium before the group went public:

So far, four of the ten universities we reported were considering joining have officially and publicly joined: Indiana University, University of Michigan, Colorado State University, and University of Florida.

Here’s a roundup of the news coverage:

Probably most important to read, in addition to IHE’s coverage and ours, is the “Why Unizin?” blog post on the Unizin website.

There was a press call this afternoon, so I expect we will be seeing more articles over the next few days. Of course, Phil and I will be providing some additional analysis as well. Stay tuned.

The post It’s Official: Unizin Is Real appeared first on e-Literate.

A response to new NCES report on distance education

Wed, 2014-06-11 08:30

By Phil Hill and Russ Poulin, cross-posted to WCET blog

Last week the National Center for Education Statistics (NCES) released a new report analyzing the new IPEDS data on distance education. The report, titled Enrollment in Distance Education Courses, by State: Fall 2012, is a welcome addition to those interested in analyzing and understanding the state of distance education (mostly as an online format) in US higher education.

The 2012 Fall Enrollment component of the Integrated Postsecondary Education Data System (IPEDS) survey collected data for the first time on enrollment in courses in which instructional content was delivered exclusively through distance education, defined in IPEDS as “education that uses one or more technologies to deliver instruction to students who are separated from the instructor and to support regular and substantive interaction between the students and the instructor synchronously or asynchronously.” These Web Tables provide a current profile of enrollment in distance education courses across states and in various types of institutions. They are intended to serve as a useful baseline for tracking future trends, particularly as certain states and institutions focus on MOOCs and other distance education initiatives from a policy perspective.

We have previously done our own analysis of the new IPEDS data at both the e-Literate and WCET blogs. While the new report is commendable for improving access to this important dataset, we feel that the missing analysis and the potentially misleading introductory narrative take away from its value.

Value of Report

The real value of this report in our opinion is the breakdown of IPEDS data by different variables such as state jurisdiction, control of institution, sector and student level. Most people are not going to go to the trouble of generating custom tables, so including such data in a simple PDF report will go a long way towards improving access to this important data. As an example of the data provided, consider this excerpt of table 3:

NCES Table 3 excerpt

The value of the data tables and the improved access to this information are precisely why we are concerned about the introductory text of the report. These reports matter.

Need for Better Analysis and Context

We were hoping to see some highlights or observations in the report, but the authors decided to present the results as “Web Tables” without any interpretation. From one standpoint, this is commendable because NCES is playing an important role in providing the raw data for pundits like us to examine. It is also understandable that since this was the first IPEDS survey regarding distance education in many years, there truly was no baseline data for comparison. Even so, a few highlights of significant data points would have been helpful.

There also is a lack of caveats. The biggest one has to do with the state-by-state analyses. Enrollments are attributed to the state where the institution is located, not where the student is located while taking the distance courses. Consider Arizona: the state has several institutions (Arizona State University, Grand Canyon University, Rio Salado College, and the University of Phoenix) with large numbers of enrolled students residing in other states. Those enrollments are all counted in Arizona, so the state-by-state comparisons have specific meanings that might not be apparent without some context provided.

Even though there are no highlights, the first two paragraphs contain a (sometimes odd) collection of references to prior research. These citations raise the question of what the tables in this report have to say on the same points of analysis.

Postsecondary enrollment in distance education courses, particularly those offered online, has rapidly increased in recent years (Allen and Seaman 2013).

This description cites the long-running Babson Survey Research Group report by Allen and Seaman. Since the current IPEDS survey provides baseline data, there is no prior work on which to judge growth; therefore, this reference makes sense to include. It would have made sense, however, to provide some explanation of the key differences between the IPEDS and Babson data. For example, Phil described in e-Literate a major discrepancy in the number of students taking at least one online course – 7.1 million for Babson versus 5.5 million for IPEDS. Jeff Seaman, one of the two Babson authors, is also quoted in e-Literate on his interpretation of the differences. The NCES report would have done well to at least refer to these significant differences.

Traditionally, distance education offerings and enrollment levels have varied across different types of institutions. For example, researchers have found that undergraduate enrollment in at least one distance education course is most common at public 2-year institutions, while undergraduate enrollment in online degree programs was most common among students attending for-profit institutions.

This reference indirectly cites a previous NCES survey that used a different methodology regarding students in 2007-08.

  • That survey found that enrollment in at least one distance education course was “most common” at public 2-year colleges and the new data reaffirms that finding.
  • Enrollment in fully distance programs was “most common” in students attending for-profit institutions and the new data reaffirms that finding. However, leaving the story there perpetuates the myth that “distance education” equals “for-profit education.” The new IPEDS data show (see Table 1 below from a WCET post by Russ) that 35% of students enrolled exclusively at a distance attend for-profit institutions and only 5% of those who enroll in some (not all) distance courses attend for-profits. People are often amazed at what a big portion of the distance education market is actually in the public sector.

WCET Table 1

A 2003 study found that historically black colleges and universities (HBCUs) and tribal colleges and universities (TCUs) offered fewer distance education courses compared with other institutions, possibly due to their smaller average size (Government Accountability Office 2003)

What a difference a decade makes. Both types of institutions still show few of their students enrolled completely at a distance, but they are now above the national average in the percentage of students enrolled in some distance courses in Fall 2012.

Rapidly changing developments, including recent institutional and policy focus on massive open online courses (MOOCs) and other distance education innovations, have changed distance education offerings.

Only a small number of MOOCs offer instruction that would be included in this survey. We’re just hoping that the uninformed will not think that the hyperbolic MOOC numbers have been counted in this report. They have not.

Upcoming Findings on Missing IPEDS Data

We are doing some additional research, but it is worth noting that we have found some significant cases of undercounting in the IPEDS data. In short, there has been confusion over which students get counted in IPEDS reporting and which do not. We suspect that the undercounting, which is independent of distance education status, is in the hundreds of thousands. We will describe these findings in an upcoming article.

In summary, the new NCES report is most welcome, but we hope readers do not make incorrect assumptions based on the introductory text of the report.

The post A response to new NCES report on distance education appeared first on e-Literate.

Learner-Centered Analytics: Example from UW La Crosse MOOC research

Sun, 2014-06-08 10:15

Last week I wrote a post What Harvard and MIT could learn from the University of Phoenix about analytics. As a recap, my argument was:

Beyond data aggregated over the entire course, the Harvard and MIT edX data provides no insight into learner patterns of behavior over time. Did the discussion forum posts increase or decrease over time, did video access change over time, etc? We don’t know. There is some insight we could obtain by looking at the last transaction event and number of chapters accessed, but the insight would be limited. But learner patterns of behavior can provide real insights, and it is here where the University of Phoenix (UoP) could teach Harvard and MIT some lessons on analytics.

Beyond the University of Phoenix, there are other examples of learner-centered analytics exploring usage patterns over time. While I was at a summit at the University of Wisconsin at La Crosse last week, Bob Hoar showed me some early results of their “UW-System College Readiness Math MOOC” research that is part of the MOOC Research Initiative. I interviewed Bob Hoar and Natalie Solverson as part of e-Literate TV, where they described their research project:

The results to date focus on capturing and visualizing the student patterns, and progress can be tracked at this project site (go to the site to see interactive graphics).

The Desire2Learn learning management system recorded a log containing over 1.2 million ‘events’ that occurred during the first few months of the MOOC. Each event corresponds to an action by a particular student and was recorded with a timestamp. The image below contains a graphical representation of the elements of the course. The items on the horizontal axis (y=0) represent information about the course (syllabus, FAQ, how to contact instructors, etc.). The other items in the chart relate to the mathematics content: the (WebWork) homework, the quizzes, the online office hours, and the live tutoring and lectures for the 9 math modules in the course. Hover over each bubble to see a short description of the item.

Course elements

The idea is that each element represents a learning object within a module. The enforced structure of the course was that students had to complete the quiz for each module before moving to the next module, but within a module, students could choose the order and timing of each learning object interaction and even when to go back to review previous modules. The next visualization tracks one student (anonymized data) over the course duration, with color coding for the type of element, the vertical axis capturing the specific module, and the horizontal axis capturing items visited in time order. To make the Google motion chart work, the research team used counts of items visited and mapped them artificially to years starting in 1900 (there are ~300 items visited over the course).
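The trick of mapping visit order onto artificial years can be sketched in a few lines. This is an illustration only, and the item names here are hypothetical, not taken from the actual course:

```python
def to_motion_chart_rows(visits, start_year=1900):
    # Map an ordered list of item visits onto artificial "years" so a
    # motion-chart tool (which requires a time axis) can animate pure
    # visit order. One visit = one fake year, starting at start_year.
    return [(start_year + i, item) for i, item in enumerate(visits)]

# Hypothetical item names; the real course log had ~300 visited items.
rows = to_motion_chart_rows(["syllabus", "quiz1", "video2"])
```

The resulting rows carry no real dates at all; the "year" is simply a visit counter, which is exactly what makes the chart show sequence rather than calendar time.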

Course activity

This view shows some interesting patterns, as described on the project site (they describe a bubble motion chart, but I find the visualization above more informative).

The motion map quickly illustrates that the student visited nearly every course element, and, after completing the first few modules, they did not need to return to the course information module. This indicates that the student quickly understood the design of the course. In addition, the video indicates that the student occasionally jumped back to earlier material in the course. Such movements may indicate that the learning materials in the location of the jump may need to be reviewed.

Now that we’re looking at student patterns over time, the analytics are much more meaningful from a learning perspective than they would be with just course completion rates or A/B testing results. Learning is a process that cannot be reduced down to independent events. With many online courses, students now can create their own learning pathways. In the example above, notice how the student frequently reviewed modules 0, 2 and 6. This information could be used to study how students learn and how to improve course designs. Research teams would do well to put more focus on learner patterns, and MOOC platforms would do well to make this research easier.
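As a rough illustration of this kind of pattern analysis, the sketch below flags "review jumps": points where a learner returns to an earlier module after having progressed further. The path data is hypothetical; the actual study worked from D2L event logs, not a precomputed module sequence.

```python
def review_jumps(module_sequence):
    # Find points where a learner jumps back to an earlier module.
    # Returns a list of (position, from_module, to_module) backward
    # jumps and a per-module count of review visits.
    jumps = []
    reviews = {}
    furthest = module_sequence[0] if module_sequence else None
    for i in range(1, len(module_sequence)):
        prev, cur = module_sequence[i - 1], module_sequence[i]
        furthest = max(furthest, prev)
        if cur < furthest and cur < prev:  # a genuine backward jump
            jumps.append((i, prev, cur))
            reviews[cur] = reviews.get(cur, 0) + 1
    return jumps, reviews

# A hypothetical learner who pushes forward but revisits modules 0 and 2:
path = [0, 1, 2, 3, 0, 3, 4, 2, 4, 5]
jumps, reviews = review_jumps(path)
```

Aggregating these review counts across learners would surface exactly the kind of signal the project describes: modules that attract unusually many backward jumps may need their materials reviewed.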

The UW La Crosse research team has not finished their analysis, but the early results show a much richer approach to analytics than focusing on single person-course measurements or aggregated analysis of a single event.

The post Learner-Centered Analytics: Example from UW La Crosse MOOC research appeared first on e-Literate.

Three Makes a Movement: Branson creates youth panel for student voice in ed tech

Sat, 2014-06-07 15:18

Based on my involvement in the Evolve conference sponsored by the 20 Million Minds Foundation, held in January, I wrote a series of posts covering the discussions around online education and educational technology. The three main posts:

During the conference I put out a call for other conferences to follow 20MM’s lead and work harder to directly include students in their discussions of ed tech – full post here and video below:

Before we get to the analyses, however, it is important to highlight once again how unique this format is in education or ed tech settings. There is plenty of discussion about needing course design and support services that are learner-centric, yet typically ed tech conferences don’t have learner-centric discussions. We need to stop just talking about students and add the element of talking with students.

While I do not believe there is a direct connection, this week Sir Richard Branson created a youth panel as part of the UK’s Generation Tech review, giving students a direct voice in educational technology. The panel’s focus is K-12 usage and is described in The Telegraph:

Young people will be given the chance to voice their ideas about how technology can support learning in the UK, thanks to a new council being created as part of the ‘Generation Tech’ review.

The new Digital Youth Council, a panel of students aged between 13 and 17, will share their experiences with technology and discuss ways in which education technology can be improved in a classroom setting. [snip]

The council is being created as part of a wider review, launched at the end of April and led by Sir Richard Branson, looking at what impact technology is having in schools and what the future holds for teachers and pupils alike.

As children become increasingly confident using new technology, schools have often struggled to keep up – however, many classrooms are now equipped with tablets, interactive white boards and online learning platforms which allow teachers to more effectively monitor pupils’ learning.

The wider Generation Tech review is set to analyse how these new technologies are impacting education.

This is welcome news, and I hope these two efforts, along with WCET’s commitment for a student panel in their fall conference, mark the start of a movement. Who else will join? Are there other examples people can share in the comments?

The post Three Makes a Movement: Branson creates youth panel for student voice in ed tech appeared first on e-Literate.

eCampus News Advisory Board and Gophers

Thu, 2014-06-05 20:25

I have recently accepted an offer from eCampus News to join their new advisory board. The idea is for me and the 10 other members to help their editors get a better handle on the industry while also providing useful information to readers through opinion, advice, or commentary. The other 10 members of the advisory board:

  • Brian Lukoff, Program Director for Learning Catalytics at Pearson Education
  • Crystal Sands, Director of the Online Writing Lab at Excelsior College
  • Connor Gray, Chief Strategy Officer at Campus Management
  • David J. Hinson, Executive Vice President & Chief Information Officer of Hendrix College
  • Joanna Young, Chief Information Officer and AVP for Finance & Budget at the University of New Hampshire
  • John Orlando, Northcentral University Associate Director of Faculty Training in the Center for Faculty Excellence
  • Mark Baker, Assistant Registrar at Whitworth University
  • Paige Francis, Chief Information Officer for Fairfield University
  • Roxann Riskin, Technology Specialist/Technology Student Assistant Service Supervisor at Fairfield University
  • Salwa Ismail, Head of the Department of Library Information Technology at the Georgetown University Library

There is an article in eCampus News introducing the advisory board, including bios, thoughts on trends and game-changers, and some personal notes. I’ve included my thoughts below (couldn’t help myself on the quote). Judging by others’ responses, this is an eclectic group with quite a broad array of interests, and I’m looking forward to this new role.

The game-changer: Despite the hype of adaptive learning as an automated, black-box, magic-bullet solution, the broader field of personalized learning is likely to be a game changer in higher ed. For the first generation of online learning, the tendency was to replicate the factory model of education (one size fits all) but just do it online. For the second generation, the ability to use online technologies to create multiple pathways for students and to personalize learning will be a strength that can even go beyond face-to-face methods (for any classes larger than 10 to 15 students). We’re already starting to see some real improvements in remedial coursework based on students’ use of personalized learning tools, but this has been in pilot programs to date. As this usage spreads over time, personalized learning, including adaptive data-driven systems, will present real change to our educational system.

Passion: Transparency in education. Like Laura Gibbs, I believe in the open syllabus concept where students should be able to see what is in a course without having to enroll; while ed-tech vendors and open source providers can be very supportive of education, we should have an open view of how well the products and companies are doing; when schools adopt strategic technology initiatives, the process should be open and inclusive; schools should have their results (including academic performance of students) open for others to view. I realize there are risks involved, such as the over-simplification of college scorecards, but the general need for transparency is one that I firmly support.

Hobby: Traveling with family and experiencing local cultures. Whether that is simply a different town or region of California, or different locations internationally, my wife and I enjoy seeing new places and trying to embed ourselves with locals.

Quote/Belief: “I have to laugh, because I’ve out-finessed myself. My foe, my enemy, is an animal. And in order to conquer an animal, I have to think like an animal, and—whenever possible—to look like one. I’ve gotta’ get inside this guy’s pelt and crawl around for a few days.” – C Spackler

Update: In what could be one of my biggest professional mistakes ever, I listed groundhogs instead of gophers in reference to the Carl Spackler quote (confusing Bill Murray movies). You cannot imagine my self-disappointment at this point. Mea culpa.

The post eCampus News Advisory Board and Gophers appeared first on e-Literate.

No, I don’t believe that Harvard or MIT are hiding edX data

Tue, 2014-06-03 12:58

Since my Sunday post What Harvard and MIT could learn from the University of Phoenix about analytics, there have been a few comments with a common theme about Harvard and MIT perhaps withholding any learner-centered analytics data. As a recap, my argument was:

Beyond data aggregated over the entire course, the Harvard and MIT edX data provides no insight into learner patterns of behavior over time. Did the discussion forum posts increase or decrease over time, did video access change over time, etc? We don’t know. There is some insight we could obtain by looking at the last transaction event and number of chapters accessed, but the insight would be limited. But learner patterns of behavior can provide real insights, and it is here where the University of Phoenix (UoP) could teach Harvard and MIT some lessons on analytics.

Some of the comments that are worth addressing:

“Non-aggregated microdata (or a “person-click” dataset, see ) are much harder (impossible?) to de-identify. So you are being unfair in comparing this public release of data with internal data analytic efforts.”

“Agreed. The part I don’t understand is how they still don’t realize how useless this all is. Unless they are collecting better data, but just not sharing it openly, hogging it to themselves until it ‘looks good enough for marketing’ or something.”

“The edX initiative likely has event-level data to analyze. I don’t blame them for not wanting to share that with the world for free though. That would be a very valuable dataset.”

The common theme seems to be that there must be learner-centered data over time, but Harvard and MIT chose not to release this data either due to privacy or selfish reasons. This is a valid question to raise, but I see no evidence to back up these suppositions.

Granted, I am arguing without definitive proof, but this is a blog post, after all. I base my argument on two points – there is no evidence of HarvardX or MITx pursuing learner-centered long-running data, and I believe there is great difficulty getting non-event or non-aggregate data out of edX, at least in current forms.

Update: See comments starting here from Justin Reich from HarvardX. My reading is that he agrees that Harvard is not pursuing learner-centered long-running data analysis (yet, and he cannot speak for Stanford or MIT), but that he disagrees about the edX data collection and extraction. This does not capture all of his clarifications, so read comments for more.

Evidence of Research

Before presenting my argument, I’d again like to point out the usefulness of the HarvardX / MITx approach to open data as well as the very useful interactive graphics. Kudos to the research teams.

The best places to see what Harvard and MIT are doing with their edX data are the very useful sites HarvardX Data & Research and MITx Working Papers. The best-known research, released as a summary report (much easier to digest than the released de-identified open dataset), is also based on data aggregated over a course, such as this graphic:


Even more useful is the presentation HarvardX Research 2013-2014 Looking Forward, Looking Back, which lays out the types of research HarvardX is pursuing.

Four kinds of MOOC research

None of these approaches (topic modeling, pre-course surveys, interviews, or A/B testing) looks at learners’ activities over time. They are all based either on specific events with many interactions (a discussion forum on a particular topic with thousands of entries, a video with many views, etc.) or on subjective analysis of an entire course. Useful data, but not based on a learner’s ongoing activities.

I’d be happy to be proven wrong, but I see no evidence of the teams currently analyzing or planning to analyze such learner data over time. The research team does get the concept (see the article on person-click data):

We now have the opportunity to log everything that students do in online spaces: to record their contributions, their pathways, their timing, and so forth. Essentially, we are sampling each student’s behavior at each instant, or at least at each instant that a student logs an action with the server (and to be sure, many of the things we care most about happen between clicks rather than during them).

Thus, we need a specialized form of the person-period dataset: the person-click dataset, where each row in the dataset records a student’s action in each given instant, probably tracked to the second or tenth of a second. (I had started referring to this as the person-period(instantaneous) dataset, but person-click is much better). Despite the volume of data, the fundamental structure is very simple. [snip]

What the “person-period” dataset will become is just a roll-up of person-click data. For many research questions, you don’t need to know what everyone did every second, you just need to know what they do every hour, day or week. So many person-period datasets will just be “roll-ups” of person-click datasets, where you run through big person-click datasets and sum up how many videos a person watched, pages viewed, posts added, questions answered, etc. Each row will represent a defined time period, like a day. The larger your “period,” the smaller your dataset.

All of these datasets use the “person” as the unit of analysis. One can also create datasets where learning objects are the unit of analysis, as I have done with wikis and Mako Hill and Andres Monroy-Hernandez have done with Scratch projects. These can be referred to as project-level and project-period datasets, or object-level and object-period datasets.
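The "roll-up" from person-click to person-period data described in the quoted passage can be sketched in a few lines. This is an illustration only; the tuple format, field order, and action names below are assumptions for the sketch, not the researchers' actual schema:

```python
from collections import Counter
from datetime import datetime

def roll_up(click_events, period="day"):
    # Roll person-click rows up into a person-period dataset.
    # click_events: (user_id, iso_timestamp, action) tuples, a stand-in
    # for the per-second rows described above. Returns a dict mapping
    # (user_id, period_key) -> Counter of action frequencies.
    fmt = "%Y-%m-%d" if period == "day" else "%Y-%W"  # day or ISO week
    rollup = {}
    for user_id, ts, action in click_events:
        when = datetime.fromisoformat(ts)
        key = (user_id, when.strftime(fmt))
        rollup.setdefault(key, Counter())[action] += 1
    return rollup

# Hypothetical events for two learners over two days:
events = [
    ("s1", "2013-02-04T09:12:00", "play_video"),
    ("s1", "2013-02-04T09:40:00", "forum_post"),
    ("s1", "2013-02-05T19:03:00", "play_video"),
    ("s2", "2013-02-04T11:00:00", "problem_check"),
]
daily = roll_up(events)
```

As the quoted passage notes, the larger the period (day vs. week), the smaller the resulting dataset; here that is just a matter of choosing a coarser key format.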

The problem is not with the research team, the problem is with the data available. Note how the article above is referencing future systems and future capabilities. And also notice that none of this “person period” research is referenced in current HarvardX plans.

edX Data Structure

My gut feel (somewhat backed up by discussions with researchers I trust) is that the underlying data model is the issue, as I called out in my Sunday post.

In edX, by contrast, the data appears to be organized as a series of log files structured around server usage. Such an organization allows aggregate data usage over a course, but it makes it extremely difficult to actually follow a student over time and glean any meaningful information.

If this assumption is correct, then the easiest approach to data analysis would be to look at server logs for specific events, pull out the volume of user data on that specific event, and see what you can learn; or, write big scripts to pull out aggregated data over the entire course. This is exactly what the current research seems to do.
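To make the contrast concrete, here is a minimal sketch of re-pivoting event-oriented log lines into per-learner timelines, which is the precondition for any analysis of behavior over time. The JSON field names are simplified stand-ins, not the actual edX tracking-log schema:

```python
import json
from collections import defaultdict

def learner_timelines(log_lines):
    # Re-pivot event-oriented server logs into per-learner timelines.
    # log_lines: JSON-lines strings with "username", "time", and
    # "event_type" fields (illustrative; real logs carry many more).
    # Returns {username: [(time, event_type), ...]} sorted by time.
    timelines = defaultdict(list)
    for line in log_lines:
        rec = json.loads(line)
        timelines[rec["username"]].append((rec["time"], rec["event_type"]))
    for events in timelines.values():
        events.sort()  # ISO timestamps sort correctly as strings
    return dict(timelines)

raw = [
    '{"username": "a", "time": "2013-03-02T10:05", "event_type": "play_video"}',
    '{"username": "b", "time": "2013-03-01T09:00", "event_type": "problem_check"}',
    '{"username": "a", "time": "2013-03-01T08:30", "event_type": "load_video"}',
]
by_learner = learner_timelines(raw)
```

Nothing here is hard at small scale; the difficulty the post points to is doing this pivot across terabytes of server-oriented logs when the platform was not designed to support it.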

Learner-Centered Data Analysis Over Time

It is possible to look at data over time, as was shown by two Stanford-related studies. The study Deconstructing Disengagement: Analyzing Learner Subpopulations in Massive Open Online Courses looked at specific learners over time and searched for patterns.

Stanford report

Mike Caulfield, Amy Collier and Sherif Halawa wrote an article for EDUCAUSE Review titled Rethinking Online Community in MOOCs Used for Blended Learning that explored learner data over time.

ERO Study

In both cases, the core focus was learner activity over time. I believe this focus is a necessary part of any learning analytics research program that seeks to improve teaching and learning.

What is interesting in the EDUCAUSE article is that the authors used Stanford’s Class2Go platform, which is now part of OpenEdX. Does this mean that such data analysis is possible with edX, or does it mean that it was possible with Class2Go but not with the current platform? I’m not sure (comments welcome).

I would love to hear from Justin Reich, Andrew Ho or any of the other researchers involved at HarvardX or MITx. Any insight, including corrections, would be valuable.

The post No, I don’t believe that Harvard or MIT are hiding edX data appeared first on e-Literate.

What Harvard and MIT could learn from the University of Phoenix about analytics

Sun, 2014-06-01 17:42

Last week Harvard and MIT released de-identified data from their edX-based MOOCs. Rather than just producing a summary report, the intent of this release was to open up the data and share it publicly. While it is good to see this approach to Open Data, unfortunately the data set is of limited value, and it actually illustrates a key problem with analytics in higher ed. From the MIT News description:

A research team from Harvard University and MIT has released its third and final promised deliverable — the de-identified learning data — relating to an initial study of online learning based on each institution’s first-year courses on the edX platform.

Specifically, the dataset contains the original learning data from the 16 HarvardX and MITx courses offered in 2012-13 that formed the basis of the first HarvardX and MITx working papers (released in January) and underpin a suite of powerful open-source interactive visualization tools (released in February).

At first I was eager to explore the data, but I am not sure how much useful insight is possible due to how the data was collected. The data is structured with one student per row for each course they took (taking multiple courses would lead to multiple rows of data). The data columns (pulled from the Person Course Documentation file) are shown below:

  • course_id: ID for the course
  • userid_DI: de-identified unique identifier of student
  • registered: 0/1 with 1 = registered for this course
  • viewed: 0/1 with 1 = anyone who accessed the ‘courseware’ tab
  • explored: 0/1 with 1 = anyone who accessed at least half of the chapters in the courseware
  • certified: 0/1 with 1 = anyone who earned a certificate
  • final_cc_name_DI: de-identified geographic information
  • LoE: user-provided highest level of education completed
  • YoB: year of birth
  • gender: self-explanatory
  • grade: final grade in course
  • start_time_DI: date of course registration
  • last_event_DI: date of last interaction with course
  • nevents: number of interactions with the course
  • ndays_act: number of unique days student interacted with course
  • nplay_video: number of play video events
  • nchapters: number of courseware chapters with which the student interacted
  • nforum_posts: number of posts to the discussion forum
  • roles: identifies staff and instructors
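As a rough illustration of the kind of analysis this format supports, here is how one might compute simple aggregate rates from rows shaped like the person-course file. The column names come from the documentation above; the sample rows are made up:

```python
# Made-up person-course rows using the 0/1 flag columns documented above.
rows = [
    {"userid_DI": "a", "viewed": 1, "explored": 1, "certified": 1},
    {"userid_DI": "b", "viewed": 1, "explored": 0, "certified": 0},
    {"userid_DI": "c", "viewed": 0, "explored": 0, "certified": 0},
]

def rate(rows, flag):
    """Share of registrants with the given 0/1 flag set."""
    return sum(r[flag] for r in rows) / len(rows)

viewed_rate = rate(rows, "viewed")        # share who accessed courseware
certified_rate = rate(rows, "certified")  # share who earned a certificate
```

Funnel-style summaries like this are about as deep as one row per student per course allows.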

The problem is that this data reveals only very shallow usage patterns aggregated over the entire course – did they look at courseware, how many videos they viewed, how many forum posts they made, their final grade, etc. I have described several times how open courses such as MOOCs have different student patterns, since not all students have the same goals for taking the course.



The Harvard and MIT data ignores student goals and any information giving a clue as to whether students wanted to complete the course, get a good grade, get a certificate, or just sample some material. Without this information on student goals, the actual aggregate behavior is missing context. We don’t know if a particular student intended just to audit a course, sample it, or attempt to complete it. We don’t know if students started the course intending to complete it but became frustrated and dropped down to just auditing, or even dropped out.

Beyond data aggregated over the entire course, the Harvard and MIT edX data provides no insight into learner patterns of behavior over time. Did discussion forum posts increase or decrease over time? Did video access change? We don’t know. There is some insight we could obtain by looking at the last transaction event and the number of chapters accessed, but it would be limited. Learner patterns of behavior can provide real insights, however, and it is here that the University of Phoenix (UoP) could teach Harvard and MIT some lessons on analytics.

Also last week, Mike Sajor, CIO of the Apollo Group (parent of UoP), gave an interview to Campus Technology in which he discussed their new learning platform (also see my previous post on the subject). In one segment Sajor explained how the analytics are being used.

Sajor: Another aspect: We leverage the platform to collect a vast amount of data about students as they traverse their learning journey. We know what they’re doing, when they’re doing it, how long it takes, anything they do along the journey that might not have been the right choice. We collect that data … and use it to create some set of information about student behaviors. We generate insight; and insight tells us an interesting fact about a student or even a cohort of students. Then we use that insight to create an intervention that will change the probability of the student outcome.

CT: Give an example of how that might work.

Sajor: You’re a student and you’re going along and submitting assignments, doing reading, doing all those things one would normally do in the course of a class. Assignments are generally due in your class Sunday night. In the first few weeks you turn your assignments in on Friday. And suddenly, you turn in an assignment on Saturday evening, and the next week you turn one in mid-day Sunday. Well, we’re going to notice that in our analytics. We’ll pick that up and say, “Wait a second. Sally Student now has perturbation in her behavior. She was exhibiting a behavioral pattern over time since she started as a student. Now her pattern has shifted.” That becomes an insight. What we do at that point is flag the faculty member or an academic adviser or enrollment adviser to contact Sally using her preferred mode — e-mail, phone call. And we’ll ask, “Hey Sally, we noticed you’re turning in your assignments a little bit later than you normally did. Is there anything we can do to help you?” You’d be amazed at the answers we get, like, “My childcare on Thursday and Friday night fell apart.” That gives us an opportunity to intervene. We can say, “You’re in Spokane. We know some childcare providers. We can’t recommend anybody; but we can give you a list that might help you.”

UoP recognizes the value of learner behavior patterns, which can only be learned by viewing data patterns over time. The student’s behavior in a course is a long-running transaction, with data sets organized around the learner.
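A toy version of the kind of perturbation check Sajor describes might look like this. The function, thresholds, and data are my own invention for illustration, not UoP’s actual system:

```python
def detect_shift(lead_times_hours, window=2, threshold=24.0):
    """Flag a student whose recent submission lead time (hours before the
    deadline) has dropped sharply versus her earlier baseline."""
    if len(lead_times_hours) <= window:
        return False  # not enough history to establish a baseline
    baseline = sum(lead_times_hours[:-window]) / (len(lead_times_hours) - window)
    recent = sum(lead_times_hours[-window:]) / window
    return baseline - recent > threshold

# Sally turned assignments in ~2 days early, then slid toward the deadline.
history = [50.0, 48.0, 52.0, 20.0, 6.0]
flag = detect_shift(history)  # True: recent average fell ~37 hours below baseline
```

The point is not the particular threshold but that this check is only possible when each student’s events are stored as an ordered series over time, not as course-level totals.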

In edX, by contrast, the data appears to be organized as a series of log files structured around server usage. Such an organization allows aggregate data usage over a course, but it makes it extremely difficult to actually follow a student over time and glean any meaningful information.

The MIT News article called out why this richer data set is so important:

Harvard’s Andrew Ho, Chuang’s co-lead, adds that the release of the data fulfills an intention — namely, to share best practices to improve teaching and learning both on campus and online — that was made with the launch of edX by Harvard and MIT in May 2012.

If you want to “share best practices to improve teaching and learning”, then you need data organized around the learner, with transactions captured over time – not just in aggregate. What we have now is an honest start, but a very limited data set.

I certainly wouldn’t advocate Harvard and MIT becoming the University of Phoenix, but in terms of useful learner analytics, they could learn quite a bit. I applaud Harvard and MIT for their openness, but I hope they develop better approaches to analytics and learn from others.

Note: The Harvard and MIT edX data is de-identified to fit within FERPA requirements, but after reading their process description, it does not appear that the learner patterns were removed due to privacy concerns.

Update: Based on private feedback, I should clarify that I have not validated that the UoP analytics claims actually work in practice. I am giving them credit for at least understanding the importance of learner-centered, behavior-based data to improve teaching and learning, but I do not know what has been fully implemented. If I find out more, I’ll share in a separate post.

On this point, there is an angle of ‘what University of Phoenix could learn from Harvard and MIT on analytics’ regarding Open Data and the ability to see real results.

The post What Harvard and MIT could learn from the University of Phoenix about analytics appeared first on e-Literate.

Unizin: What are the primary risks?

Thu, 2014-05-29 15:50

In Michael’s most recent post on Unizin, the new “learning ecosystem” initiative driven by Indiana University, he asked the question of who would be threatened by the proposed consortium (with the answer being edX). This question, of course, assumes that Unizin actually succeeds in large part, but what are the primary risks to the initiative succeeding in the first place? Based on the public information available to date (primarily in the two posts linked above), I see two near-term risks and one long-term risk that rise above the others.

Near-Term Risk: Getting Schools to Sign Up

The obvious question is whether there are enough schools willing to commit $1 million and adopt the proposed platforms to get the consortium off the ground. Based on the Colorado State University recording, it appears that the goal is to get 9 – 10 schools to commit $9 – $10 million in the initial phase. Beyond Indiana University, the most likely school to commit is the University of Michigan. Its leadership (dean of libraries, CIO) is fully behind the initiative, and from press reports the university is seeking final approval. I cannot find any evidence that any other schools have reached this point, however.

Slide from CSU Presentation


There are active debates in the Committee on Institutional Cooperation (CIC), primarily between provosts and CIOs, about Unizin and whether this approach works for member institutions. The provosts in fact already put out a position paper generally endorsing the same concept.

While new and cost effective technological capabilities make certain changes in higher education possible, it does not necessarily follow that such changes are desirable, or would be endorsed or utilized by our existing students, faculty, or community members. Nor does it mean that we fully grasp the costs and business models that might surround new strategies for broadly disseminating course content. University leaders committed to addressing the new opportunities in higher education need to recognize that the primary basis for motivating and inspiring faculty to engage these opportunities will not be the technologies themselves, but rather, the fundamental academic values and pedagogical principles that need to be infused in these emerging instructional technologies. For these reasons, we believe that the chief academic officers of our CIC member universities are in the best position—individually and collectively—to be leading these efforts.

Putting out a position paper is not the same as getting buy-in from a campus or contributing real money, and I suspect that most of the potential campuses will need some form of this discussion before signing up.

Near-Term Risk: Secretive decision process

On the subject of campus buy-in, the secretive process being pursued by Unizin and prospective schools is itself a significant risk, especially in the post-MOOC-hype environment. Institutions are considering this major investment and commitment in a deliberately opaque process. Provosts, CIOs and occasionally faculty groups are being briefed, but almost all documentation is being withheld. During the Colorado State University meeting, one faculty member asked about this process:

At the recorded CSU meeting, one of the presenters—it’s impossible to tell which is the speaker from the recording we have—acknowledges that the meetings were largely conducted in secret when challenged by a faculty member on the lack of faculty involvement. He cited sensitive negotiations among the ten universities and Instructure as the reason.

These same questions are being raised about the decision processes behind many of the MOOC adoptions. Consider the University of Texas, which committed $5 million to their involvement in edX. The Daily Texan has publicly started a debate on that campus about the motivation and benefits of that decision.

The MOOCs were, apparently, designed without revenue in mind, though the System invested $10 million to both develop the MOOCs and to host the courses on edX, an online platform created by Harvard and MIT. [snip]

Of course, the System has made large and unproven investments in online education platforms before — MyEdu rings a bell. The Tribune recently reported that the System will see no financial return on its $10 million investment in MyEdu, which was ultimately sold to Blackboard. Again, there was no long-term financial plan in mind, but there was a lot of money on the table.

The System should stop investing millions of dollars on gambles like these, which lack financial exit strategies and viable forms of revenue. If the founding structure of a project doesn’t include a business model for growth and profitability for the University, who is expected to fund it?

Now UT is considering another seven-figure investment in a very closed process. If it joins Unizin, UT could face pushback from faculty on campus over the decision, partially reaping what edX sowed.

Faculty groups nationwide are concerned about administrative decision-making that directly impacts academics without directly and transparently involving broad faculty input. Unizin involves not only an LMS adoption but also a learning content repository and a learning analytics platform. This gets into the difficult questions of how and whether to share learning content, as well as how to measure learning outcomes. Faculty will care.

And there is a hint of a typical university conflict embedded at the end of the CIC provosts’ position paper quote: “we believe that the chief academic officers of our CIC member universities are in the best position … to be leading these efforts”, perhaps with the unwritten phrase “as opposed to CIOs”.

It used to be that CIOs and their organizations would make most technology platform decisions, and quite often it was hard to get the provost’s office to participate. As can be seen in this statement, we now have situations where provosts and their offices want to be the driving force even for platform decisions. The better approach is collaborative: provosts and CIOs working together, generally with provosts taking a more active role in defining needs or problems and CIOs taking a more active role in defining solutions.

In the Unizin content repository case, what would be more natural is for the provosts to first help define what learning content should be shared – learning objects, courseware, courses, textbooks – and under what conditions. After defining goals, it would be appropriate to describe how a software platform would facilitate this content sharing, with CIOs taking a more active role in determining whether certain scenarios are feasible and which platforms are the best fit. Throughout the process, faculty would ideally have the opportunity to give input on needs, to give feedback on proposed solutions, and to have visibility into the decision process.

Whether this type of open, collaborative decision process is happening behind closed doors is not known, but the apparent need to keep the process quiet raises the risk of pushback on the consortium decision.

Long-Term Risk: Development of Content Repository and Learning Analytics

Even if Unizin succeeds in getting 9 – 10 schools to fund and start the consortium, and even if it successfully manages the faculty buy-in aspects, there is a longer-term risk in making the “learning ecosystem” a reality. Currently the three primary components are very uneven. The LMS is a no-brainer, as Canvas already exists and has broad acceptance as the most popular LMS on the market in terms of recent evaluations and new adoptions. The two other components are very different and might not be well-suited for a community-source development model.

Unizin Diagram of Services

The ed tech road is littered with unsuccessful and disappointing content repositories. The concept of making it easy to share learning content outside of a specific program has long looked beautiful in white papers and conference briefings, but the reality of actual adoption and usage is quite different. Whether the challenge is product design, product completion, or just plain faculty adoption, there are no indications that there is a demand for broad-based sharing of academic content. In essence, the product category is unproven, and it is not clear that we even know what to build in the first place.

Community source has proven its ability to develop viable solutions for known product categories, generally based on existing solutions – consider Sakai as an LMS (heavily based on the University of Michigan’s CHEF implementation and to a lesser degree on Indiana University’s OnCourse), the Kuali Financial System (based directly on IU’s financial system), and Kuali Coeus (based on MIT’s research administration system). Without a pre-existing solution, the results are less promising. Kuali Student, based on a known product category but designed from the ground up, is currently on track to take almost eight years from concept to full functionality. Looking further, are there any examples where a new product in an ill-defined product category has been successfully developed in a community-source model?

Learning analytics is similar to content repositories in the sense that the concept looks much better in a whitepaper than it does in reality. I remember in the late 2000s when the LMS user conferences came across as ‘we’re learning outcomes companies that happen to have an LMS also’. Remember Blackboard Outcomes System – its “most significant product offering”?

The difference between learning analytics and content repositories, however, is that there are much stronger examples of real adoption on the analytics side. Purdue has successfully implemented Course Signals and has succeeded in improving course retention (despite open questions about whether inter-course retention has improved). Blackboard Analytics (based on the iStrategy acquisition) has been implemented with real results at a growing number of schools.

More significant, perhaps, is the work done by the Predictive Analytics Reporting (PAR) Framework, which just today announced that it is becoming a separate organization spun off from WICHE. The Unizin slides explicitly reference PAR, and some of the analytics language closely mirrors PAR descriptions. This is significant because the PAR Framework goes a long way toward helping to define the product needs.

The question for analytics, therefore, is less on the product category and more on the ability of Unizin to deliver actual results.

If Unizin succeeds in addressing the above risks, then the state of the art for learning ecosystems will jump forward. If the proposed consortium does not succeed, the result will be a buyer’s club that makes Canvas a very expensive LMS. That result would be ironic, given some of the foundational concepts behind Unizin.

The post Unizin: What are the primary risks? appeared first on e-Literate.