
Michael Feldstein

What We Are Learning About Online Learning...Online

Three Makes a Movement: Branson creates youth panel for student voice in ed tech

Sat, 2014-06-07 15:18

Based on my involvement in the Evolve conference sponsored by the 20 Million Minds Foundation, held in January, I wrote a series of posts covering the discussions around online education and educational technology. The three main posts:

During the conference I put out a call for other conferences to follow 20MM’s lead and work harder to directly include students in their discussions of ed tech – full post here and video below:

Before we get to the analyses, however, it is important to highlight once again how unique this format is in education or ed tech settings. There is plenty of discussion about needing course design and support services that are learner-centric, yet typically ed tech conferences don’t have learner-centric discussions. We need to stop just talking about students and add the element of talking with students.

While I do not believe there is a direct connection, this week Sir Richard Branson created a youth panel as part of the UK’s Generation Tech review, giving students a direct voice in educational technology. The panel’s focus is K-12 usage and is described in The Telegraph:

Young people will be given the chance to voice their ideas about how technology can support learning in the UK, thanks to a new council being created as part of the ‘Generation Tech’ review.

The new Digital Youth Council, a panel of students aged between 13 and 17, will share their experiences with technology and discuss ways in which education technology can be improved in a classroom setting. [snip]

The council is being created as part of a wider review, launched at the end of April and led by Sir Richard Branson, looking at what impact technology is having in schools and what the future holds for teachers and pupils alike.

As children become increasingly confident using new technology, schools have often struggled to keep up – however, many classrooms are now equipped with tablets, interactive white boards and online learning platforms which allow teachers to more effectively monitor pupils’ learning.

The wider Generation Tech review is set to analyse how these new technologies are impacting education.

This is welcome news, and I hope these two efforts, along with WCET’s commitment for a student panel in their fall conference, mark the start of a movement. Who else will join? Are there other examples people can share in the comments?

The post Three Makes a Movement: Branson creates youth panel for student voice in ed tech appeared first on e-Literate.

eCampus News Advisory Board and Gophers

Thu, 2014-06-05 20:25

I have recently accepted an eCampus News offer to be part of their new advisory board. The idea is for me and the 10 other members to help their editors get a better handle on the industry while also providing useful information to readers through opinion, advice, or commentary. The other 10 members of the advisory board:

  • Brian Lukoff, Program Director for Learning Catalytics at Pearson Education
  • Crystal Sands, Director of the Online Writing Lab at Excelsior College
  • Connor Gray, Chief Strategy Officer at Campus Management
  • David J. Hinson, Executive Vice President & Chief Information Officer of Hendrix College
  • Joanna Young, Chief Information Officer and AVP for Finance & Budget at the University of New Hampshire
  • John Orlando, Northcentral University Associate Director of Faculty Training in the Center for Faculty Excellence
  • Mark Baker, Assistant Registrar at Whitworth University
  • Paige Francis, Chief Information Officer for Fairfield University
  • Roxann Riskin, Technology Specialist/Technology Student Assistant Service Supervisor at Fairfield University
  • Salwa Ismail, Head of the Department of Library Information Technology at the Georgetown University Library

There is an article in eCampus News introducing the advisory committee, including bios, thoughts on trends and game-changers, and some personal thoughts. I’ve included my thoughts below (couldn’t help myself on the quote). Judging by others’ responses, this is an eclectic group with quite a broad array of interests, and I’m looking forward to this new role.

The game-changer: Despite the hype of adaptive learning as an automated, black-box, magic-bullet solution, the broader field of personalized learning is likely to be a game changer in higher ed. For the first generation of online learning, the tendency was to replicate the factory model of education (one size fits all) but just do it online. For the second generation, the ability to use online technologies to create multiple pathways for students and to personalize learning will be a strength that can even go beyond face-to-face methods (for any classes larger than 10 to 15 students). We’re already starting to see some real improvements in remedial coursework based on students’ use of personalized learning tools, but this has been in pilot programs to date. As this usage spreads over time, personalized learning, including adaptive data-driven systems, will present real change to our educational system.

Passion: Transparency in education. Like Laura Gibbs, I believe in the open syllabus concept where students should be able to see what is in a course without having to enroll; while ed-tech vendors and open source providers can be very supportive of education, we should have an open view of how well the products and companies are doing; when schools adopt strategic technology initiatives, the process should be open and inclusive; schools should have their results (including academic performance of students) open for others to view. I realize there are risks involved, such as the over-simplification of college scorecards, but the general need for transparency is one that I firmly support.

Hobby: Traveling with family and experiencing local cultures. Whether that is simply a different town or region of California, or different locations internationally, my wife and I enjoy seeing new places and trying to embed ourselves with locals.

Quote/Belief: “I have to laugh, because I’ve out-finessed myself. My foe, my enemy, is an animal. And in order to conquer an animal, I have to think like an animal, and—whenever possible—to look like one. I’ve gotta’ get inside this guy’s pelt and crawl around for a few days.” – C Spackler

Update: In what could be one of my biggest professional mistakes ever, I listed groundhogs instead of gophers in reference to the Carl Spackler quote (confusing Bill Murray movies). You cannot imagine my self-disappointment at this point. Mea culpa.

The post eCampus News Advisory Board and Gophers appeared first on e-Literate.

No, I don’t believe that Harvard or MIT are hiding edX data

Tue, 2014-06-03 12:58

Since my Sunday post What Harvard and MIT could learn from the University of Phoenix about analytics, there have been a few comments with a common theme about Harvard and MIT perhaps withholding any learner-centered analytics data. As a recap, my argument was:

Beyond data aggregated over the entire course, the Harvard and MIT edX data provides no insight into learner patterns of behavior over time. Did the discussion forum posts increase or decrease over time, did video access change over time, etc? We don’t know. There is some insight we could obtain by looking at the last transaction event and number of chapters accessed, but the insight would be limited. But learner patterns of behavior can provide real insights, and it is here where the University of Phoenix (UoP) could teach Harvard and MIT some lessons on analytics.

Some of the comments that are worth addressing:

“Non-aggregated microdata (or a “person-click” dataset, see http://blogs.edweek.org/edweek/edtechresearcher/2013/06/the_person-click_dataset.html ) are much harder (impossible?) to de-identify. So you are being unfair in comparing this public release of data with internal data analytic efforts.”

“Agreed. The part I don’t understand is how they still don’t realize how useless this all is. Unless they are collecting better data, but just not sharing it openly, hogging it to themselves until it ‘looks good enough for marketing’ or something.”

“The edX initiative likely has event-level data to analyze. I don’t blame them for not wanting to share that with the world for free though. That would be a very valuable dataset.”

The common theme seems to be that there must be learner-centered data over time, but Harvard and MIT chose not to release this data either due to privacy or selfish reasons. This is a valid question to raise, but I see no evidence to back up these suppositions.

Granted, I am arguing without definitive proof, but this is a blog post, after all. I base my argument on two points – there is no evidence of HarvardX or MITx pursuing learner-centered long-running data, and I believe there is great difficulty getting non-event or non-aggregate data out of edX, at least in current forms.

Update: See the comments starting here from Justin Reich of HarvardX. My reading is that he agrees that Harvard is not pursuing learner-centered long-running data analysis (yet, and he cannot speak for Stanford or MIT), but that he disagrees about the edX data collection and extraction. This does not capture all of his clarifications, so read the comments for more.

Evidence of Research

Before presenting my argument, I’d again like to point out the usefulness of the HarvardX / MITx approach to open data as well as the very useful interactive graphics. Kudos to the research teams.

The best places to see what Harvard and MIT are doing with their edX data are the very useful sites HarvardX Data & Research and MITx Working Papers. The best-known research, released as a summary report (much easier to present than a released de-identified open dataset), is also based on data aggregated over a course, such as this graphic:

[Figure: World Map of Enrollment, HarvardX]

Even more useful is the presentation HarvardX Research 2013-2014: Looking Forward, Looking Back, which lays out the types of research HarvardX is pursuing.

[Figure: Four kinds of MOOC research]

None of these approaches (topic modeling, pre-course surveys, interviews, or A/B testing) looks at learners’ activities over time. They are all based on either specific events with many interactions (a discussion forum on a particular topic with thousands of entries, a video with many views, etc.) or subjective analysis of an entire course. Useful data, but not based on a learner’s ongoing activities.

I’d be happy to be proven wrong, but I see no evidence of the teams currently analyzing or planning to analyze such learner data over time. The research team does get the concept (see the article on person-click data):

We now have the opportunity to log everything that students do in online spaces: to record their contributions, their pathways, their timing, and so forth. Essentially, we are sampling each student’s behavior at each instant, or at least at each instant that a student logs an action with the server (and to be sure, many of the things we care most about happen between clicks rather than during them).

Thus, we need a specialized form of the person-period dataset: the person-click dataset, where each row in the dataset records a student’s action in each given instant, probably tracked to the second or tenth of a second. (I had started referring to this as the person-period(instantaneous) dataset, but person-click is much better). Despite the volume of data, the fundamental structure is very simple. [snip]

What the “person-period” dataset will become is just a roll-up of person-click data. For many research questions, you don’t need to know what everyone did every second, you just need to know what they do every hour, day or week. So many person-period datasets will just be “roll-ups” of person-click datasets, where you run through big person-click datasets and sum up how many videos a person watched, pages viewed, posts added, questions answered, etc. Each row will represent a defined time period, like a day. The larger your “period,” the smaller your dataset.

All of these datasets use the “person” as the unit of analysis. One can also create datasets where learning objects are the unit of analysis, as I have done with wikis and Mako Hill and Andres Monroy-Hernandez have done with Scratch projects. These can be referred to as project-level and project-period datasets, or object-level and object-period datasets.

The problem is not with the research team, the problem is with the data available. Note how the article above is referencing future systems and future capabilities. And also notice that none of this “person period” research is referenced in current HarvardX plans.
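To make the quoted roll-up idea concrete, here is a minimal sketch of turning person-click data into a person-period dataset. The column names (user_id, timestamp, event_type) are my assumptions for illustration, not an actual edX schema:

```python
# Minimal sketch: roll person-click data up into a person-period dataset.
# Columns (user_id, timestamp, event_type) are assumed for illustration.
import pandas as pd

clicks = pd.read_csv("person_click.csv", parse_dates=["timestamp"])

# One row per learner per day, with a count for each event type.
person_period = (
    clicks.assign(period=clicks["timestamp"].dt.floor("D"))
    .groupby(["user_id", "period", "event_type"])
    .size()
    .unstack("event_type", fill_value=0)
    .reset_index()
)

# Questions like "did forum posting rise or fall over time?" now become
# simple per-learner time series rather than course-wide totals.
print(person_period.head())
```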

edX Data Structure

My gut feel (somewhat backed up by discussions with researchers I trust) is that the underlying data model is the issue, as I called out in my Sunday post.

In edX, by contrast, the data appears to be organized as a series of log files oriented around server usage. Such an organization allows aggregate data usage over a course, but it makes it extremely difficult to actually follow a student over time and glean any meaningful information.

If this assumption is correct, then the easiest approach to data analysis would be to look at server logs for specific events, pull out the volume of user data on that specific event, and see what you can learn; or, write big scripts to pull out aggregated data over the entire course. This is exactly what the current research seems to do.
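For illustration, here is a sketch of what that event-log-centric analysis pattern looks like. The file name and fields (event_type, username) are assumptions on my part, not the actual edX tracking-log schema:

```python
# Sketch of the event-log-centric pattern: scan server logs and count
# event volumes aggregated over the whole course. The file name and
# fields (event_type, username) are assumptions, not the actual schema.
import json
from collections import Counter

event_counts = Counter()
users_seen = set()

with open("tracking.log") as f:
    for line in f:
        event = json.loads(line)
        event_counts[event.get("event_type", "unknown")] += 1
        users_seen.add(event.get("username"))

# Easy: course-wide totals. Hard: any one learner's trajectory, which
# would require grouping every event by user and ordering by time first.
print(len(users_seen), event_counts.most_common(10))
```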

Learner-Centered Data Analysis Over Time

It is possible to look at data over time, as was shown by two Stanford-related studies. The study Deconstructing Disengagement: Analyzing Learner Subpopulations in Massive Open Online Courses looked at specific learners over time and searched for patterns.

[Figure: Stanford report]

Mike Caulfield, Amy Collier and Sherif Halawa wrote an article for EDUCAUSE Review titled Rethinking Online Community in MOOCs Used for Blended Learning that explored learner data over time.

[Figure: ERO study]

In both cases, the core focus was learner activity over time. I believe this focus is a necessary part of any learning analytics research program that seeks to improve teaching and learning.

What is interesting in the EDUCAUSE article is that the authors used Stanford’s Class2Go platform, which is now part of Open edX. Does this mean that such data analysis is possible with edX, or does it mean that it was possible with Class2Go but is not with the current platform? I’m not sure (comments welcome).

I would love to hear from Justin Reich, Andrew Ho or any of the other researchers involved at HarvardX or MITx. Any insight, including corrections, would be valuable.

The post No, I don’t believe that Harvard or MIT are hiding edX data appeared first on e-Literate.

What Harvard and MIT could learn from the University of Phoenix about analytics

Sun, 2014-06-01 17:42

Last week Harvard and MIT released de-identified data from their edX-based MOOCs. Rather than just produce a summary report, the intent of this release was to open up the data and share it publicly. While it is good to see this approach to Open Data, unfortunately the data set is of limited value, and it actually illustrates a key problem with analytics in higher ed. From the MIT News description:

A research team from Harvard University and MIT has released its third and final promised deliverable — the de-identified learning data — relating to an initial study of online learning based on each institution’s first-year courses on the edX platform.

Specifically, the dataset contains the original learning data from the 16 HarvardX and MITx courses offered in 2012-13 that formed the basis of the first HarvardX and MITx working papers (released in January) and underpin a suite of powerful open-source interactive visualization tools (released in February).

At first I was eager to explore the data, but I am not sure how much useful insight is possible due to how the data was collected. The data is structured with one student per row for each course they took (taking multiple courses would lead to multiple rows of data). The data columns (pulled from the Person Course Documentation file) are shown below:

  • course_id: ID for the course
  • userid_DI: de-identified unique identifier of student
  • registered: 0/1 with 1 = registered for this course
  • viewed: 0/1 with 1 = anyone who accessed the ‘courseware’ tab
  • explored: 0/1 with 1 = anyone who accessed at least half of the chapters in the courseware
  • certified: 0/1 with 1 = anyone who earned a certificate
  • final_cc_name_DI: de-identified geographic information
  • LoE: user-provided highest level of education completed
  • YoB: year of birth
  • gender: self-explanatory
  • grade: final grade in course
  • start_time_DI: date of course registration
  • last_event_DI: date of last interaction with course
  • nevents: number of interactions with the course
  • ndays_act: number of unique days student interacted with course
  • nplay_video: number of play video events
  • nchapters: number of courseware chapters with which the student interacted
  • nforum_posts: number of posts to the discussion forum
  • roles: identifies staff and instructors
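To make the limitation concrete, here is a short sketch of the kind of whole-course aggregation this schema supports. The CSV file name is hypothetical; the column names come from the documentation above:

```python
# Sketch of the whole-course aggregation the person-course schema allows.
# The CSV file name is hypothetical; column names are from the docs above.
import pandas as pd

pc = pd.read_csv("person_course.csv")

# Certification rate by self-reported highest level of education.
cert_by_loe = pc.groupby("LoE")["certified"].mean()

# Median active days for certified vs. non-certified registrants.
days_by_cert = pc.groupby("certified")["ndays_act"].median()

print(cert_by_loe, days_by_cert, sep="\n\n")
# Not computable from these columns: how nforum_posts or nplay_video
# changed from week to week for any individual learner.
```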

The problem is that this data only tells us very shallow usage patterns aggregated over the entire course – did they look at courseware, how many video views, how many forum posts, final grade, etc. I have described several times how open courses such as MOOCs have different student patterns, since not all students have the same goals for taking the course.

[Figure: Student patterns in MOOCs]

The Harvard and MIT data ignores student goals or any information giving a clue as to whether students desired to complete the course, get a good grade, get a certificate, or just sample some material. Without this information on student goals, the actual aggregate behavior is missing context. We don’t know if a certain student intended to just audit a course, sample it, or attempt to complete it. We don’t know if students started the course intending to complete it but became frustrated and dropped down to just auditing or even dropped out.

Beyond data aggregated over the entire course, the Harvard and MIT edX data provides no insight into learner patterns of behavior over time. Did the discussion forum posts increase or decrease over time, did video access change over time, etc? We don’t know. There is some insight we could obtain by looking at the last transaction event and number of chapters accessed, but the insight would be limited. But learner patterns of behavior can provide real insights, and it is here where the University of Phoenix (UoP) could teach Harvard and MIT some lessons on analytics.

Also last week, the Apollo Group (parent of UoP) CIO Mike Sajor gave an interview to Campus Technology, and he discussed their new learning platform (also see my previous post on the subject). In one segment Sajor explained how the analytics are being used.

Sajor: Another aspect: We leverage the platform to collect a vast amount of data about students as they traverse their learning journey. We know what they’re doing, when they’re doing it, how long it takes, anything they do along the journey that might not have been the right choice. We collect that data … and use it to create some set of information about student behaviors. We generate insight; and insight tells us an interesting fact about a student or even a cohort of students. Then we use that insight to create an intervention that will change the probability of the student outcome.

CT: Give an example of how that might work.

Sajor: You’re a student and you’re going along and submitting assignments, doing reading, doing all those things one would normally do in the course of a class. Assignments are generally due in your class Sunday night. In the first few weeks you turn your assignments in on Friday. And suddenly, you turn in an assignment on Saturday evening, and the next week you turn one in mid-day Sunday. Well, we’re going to notice that in our analytics. We’ll pick that up and say, “Wait a second. Sally Student now has perturbation in her behavior. She was exhibiting a behavioral pattern over time since she started as a student. Now her pattern has shifted.” That becomes an insight. What we do at that point is flag the faculty member or an academic adviser or enrollment adviser to contact Sally using her preferred mode — e-mail, phone call. And we’ll ask, “Hey Sally, we noticed you’re turning in your assignments a little bit later than you normally did. Is there anything we can do to help you?” You’d be amazed at the answers we get, like, “My childcare on Thursday and Friday night fell apart.” That gives us an opportunity to intervene. We can say, “You’re in Spokane. We know some childcare providers. We can’t recommend anybody; but we can give you a list that might help you.”

UoP recognizes the value of learner behavior patterns, which can only be learned by viewing data patterns over time. The student’s behavior in a course is a long-running transaction, with data sets organized around the learner.
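As a toy illustration of this kind of pattern-shift detection (my own sketch, not UoP’s actual algorithm; the thresholds and data are arbitrary):

```python
# Toy sketch of a UoP-style "perturbation" flag: track each learner's
# assignment lead time (hours before the deadline) and flag a shift away
# from their own baseline. Thresholds and data are purely illustrative.
from statistics import mean, stdev

def lead_time_drift(lead_hours, window=3, z_threshold=2.0):
    """True if the recent average lead time drifts from the baseline."""
    if len(lead_hours) <= window + 1:
        return False  # not enough history to establish a baseline
    baseline, recent = lead_hours[:-window], lead_hours[-window:]
    spread = stdev(baseline) or 1.0  # avoid dividing by zero
    return abs(mean(recent) - mean(baseline)) / spread > z_threshold

# Sally submitted ~2 days early for weeks, then slid toward the deadline.
sally = [52, 49, 50, 51, 20, 8, 3]
print(lead_time_drift(sally))  # True -> flag an adviser to reach out
```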

In edX, by contrast, the data appears to be organized as a series of log files oriented around server usage. Such an organization allows aggregate data usage over a course, but it makes it extremely difficult to actually follow a student over time and glean any meaningful information.

The MIT News article called out why this richer data set is so important:

Harvard’s Andrew Ho, Chuang’s co-lead, adds that the release of the data fulfills an intention — namely, to share best practices to improve teaching and learning both on campus and online — that was made with the launch of edX by Harvard and MIT in May 2012.

If you want to “share best practices to improve teaching and learning”, then you need data organized around the learner, with transactions captured over time – not just in aggregate. What we have now is an honest start, but a very limited data set.

I certainly wouldn’t advocate Harvard and MIT becoming the University of Phoenix, but in terms of useful learner analytics, they could learn quite a bit. I applaud Harvard and MIT for their openness, but I hope they develop better approaches to analytics and learn from others.

Note: The Harvard and MIT edX data is de-identified to fit within FERPA requirements, but after reading their process description, it does not appear that the learner patterns were removed due to privacy concerns.

Update: Based on private feedback, I should clarify that I have not validated that the UoP analytics claims actually work in practice. I am giving them credit for at least understanding the importance of learner-centered, behavior-based data to improve teaching and learning, but I do not know what has been fully implemented. If I find out more, I’ll share in a separate post.

On this point, there is an angle of ‘what University of Phoenix could learn from Harvard and MIT on analytics’ regarding Open Data and the ability to see real results.

The post What Harvard and MIT could learn from the University of Phoenix about analytics appeared first on e-Literate.

Unizin: What are the primary risks?

Thu, 2014-05-29 15:50

In Michael’s most recent post on Unizin, the new “learning ecosystem” initiative driven by Indiana University, he asked the question of who would be threatened by the proposed consortium (with the answer of edX). That question assumes, of course, that Unizin actually succeeds in large part; but what are the primary risks to the initiative succeeding in the first place? Based on the public information we have available to date (primarily in the two posts linked above), I see two near-term risks and one long-term risk that rise above the others.

Near-Term Risk: Getting Schools to Sign Up

The obvious question is whether there are enough schools willing to commit $1 million and adopt the proposed platforms to get the consortium off the ground. Based on the Colorado State University recording, it appears that the goal is to get 9 – 10 schools to commit $9 – $10 million in the initial phase. Beyond Indiana University, the most likely school to commit is the University of Michigan. Their leadership (the dean of libraries and the CIO) is fully behind the initiative, and from press reports they are seeking final approval. I cannot find any evidence that any other schools have reached this point, however.

[Slide from CSU presentation]

There are active debates in the Committee on Institutional Cooperation (CIC), primarily between provosts and CIOs, about Unizin and whether this approach works for member institutions. The provosts in fact already put out a position paper generally endorsing the same concept.

While new and cost effective technological capabilities make certain changes in higher education possible, it does not necessarily follow that such changes are desirable, or would be endorsed or utilized by our existing students, faculty, or community members. Nor does it mean that we fully grasp the costs and business models that might surround new strategies for broadly disseminating course content. University leaders committed to addressing the new opportunities in higher education need to recognize that the primary basis for motivating and inspiring faculty to engage these opportunities will not be the technologies themselves, but rather, the fundamental academic values and pedagogical principles that need to be infused in these emerging instructional technologies. For these reasons, we believe that the chief academic officers of our CIC member universities are in the best position—individually and collectively—to be leading these efforts.

Putting out a position paper is not the same as getting buy-in from a campus or contributing real money, and I suspect that most of the potential campuses will need some form of this discussion before signing up.

Near-Term Risk: Secretive decision process

On the subject of campus buy-in, the secretive process being pursued by Unizin and prospective schools is itself a significant risk, especially in the post-MOOC-hype environment. Institutions are considering this major investment and commitment in a deliberately opaque process. Provosts, CIOs and occasionally faculty groups are being briefed, but almost all documentation is being hidden. During the Colorado State University meeting, one faculty member asked about this process:

At the recorded CSU meeting, one of the presenters—it’s impossible to tell which is the speaker from the recording we have—acknowledges that the meetings were largely conducted in secret when challenged by a faculty member on the lack of faculty involvement. He cited sensitive negotiations among the ten universities and Instructure as the reason.

These same questions are being raised about the decision processes behind many of the MOOC adoptions. Consider the University of Texas, which committed $5 million to their involvement in edX. The Daily Texan has publicly started a debate on that campus about the motivation and benefits of that decision.

The MOOCs were, apparently, designed without revenue in mind, though the System invested $10 million to both develop the MOOCs and to host the courses on edX, an online platform created by Harvard and MIT. [snip]

Of course, the System has made large and unproven investments in online education platforms before — MyEdu rings a bell. The Tribune recently reported that the System will see no financial return on its $10 million investment in MyEdu, which was ultimately sold to Blackboard. Again, there was no long-term financial plan in mind, but there was a lot of money on the table.

The System should stop investing millions of dollars on gambles like these, which lack financial exit strategies and viable forms of revenue. If the founding structure of a project doesn’t include a business model for growth and profitability for the University, who is expected to fund it?

Now UT is considering another seven-figure investment in a very closed process. If UT joins Unizin, it could face pushback from faculty on campus over the decision, partially reaping what edX sowed.

Faculty groups nationwide are concerned about administrative decision-making that directly impacts academics without directly and transparently involving broad faculty input. Unizin involves not only an LMS adoption but also a learning content repository and a learning analytics platform. This gets at the difficult questions of how and whether to share learning content as well as how to measure learning outcomes. Faculty will care.

And there is a hint of a typical university conflict embedded at the end of the CIC provosts’ position paper quote - “we believe that the chief academic officers of our CIC member universities are in the best position … to be leading these efforts”, perhaps with the unwritten phrase “as opposed to CIOs”.

It used to be that CIOs and their organizations would make most technology platform decisions, and quite often it was hard to get the provost office to participate. As can be seen in this statement, we now have situations where provosts and their offices want to be the driving force even for platform decisions. Ideally, the better approach is collaborative where the provosts and CIOs work together, generally with provosts taking a more active role in defining needs or problems and CIOs taking a more active role defining solutions.

In the Unizin content repository case, what would be more natural is for the provosts to first help define what learning content should be shared – learning objects, courseware, courses, textbooks – and under what conditions. After defining goals it would be appropriate to describe how a software platform would facilitate this content sharing, with CIOs taking a more active role in determining whether certain scenarios are feasible and which platforms are the best fit. Throughout the process faculty would ideally have the opportunity to give input on needs, to give feedback on proposed solutions, and to have visibility in the decision process.

Whether this type of open, collaborative decision process is happening behind closed doors is not known, but the apparent need to keep the process quiet raises the risk of pushback on the consortium decision.

Long-Term Risk: Development of Content Repository and Learning Analytics

Even if Unizin succeeds in getting 9 – 10 schools to fund and start the consortium, and even if they successfully manage the faculty buy-in aspects, there is a longer-term risk in making the “learning ecosystem” a reality. Currently the three primary components are very uneven. The LMS is a no-brainer: Canvas already exists and has broad acceptance as the most popular LMS on the market in terms of recent LMS evaluations and new adoptions. The two other components are very different and might not be well-suited for a community-source development model.

[Figure: Unizin diagram of services]

The ed tech road is littered with unsuccessful and disappointing content repositories. The concept of making it easy to share learning content outside of a specific program has long looked beautiful in white papers and conference briefings, but the reality of actual adoption and usage is quite different. Whether the challenge is product design, product completion, or just plain faculty adoption, there are no indications that there is a demand for broad-based sharing of academic content. In essence, the product category is unproven, and it is not clear that we even know what to build in the first place.

Community source has proven its ability to develop viable solutions for known product categories, generally based on existing solutions – consider Sakai as an LMS (heavily based on the University of Michigan’s CHEF implementation and to a lesser degree on Indiana University’s OnCourse), the Kuali Financial System (based directly on IU’s financial system), and Kuali Coeus (based on MIT’s research administration system). Without a pre-existing solution, the results are less promising. Kuali Student, based on a known product category but designed from the ground up, is currently on track to take almost 8 years from concept to full functionality. Looking further, are there any examples where a new product in an ill-defined product category has successfully been developed in a community source model?

Learning analytics is similar to content repositories in the sense that the concept looks much better in a whitepaper than it does in reality. I remember in the late 2000s when the LMS user conferences came across as ‘we’re learning outcomes companies that happen to have an LMS also’. Remember Blackboard Outcomes System – its “most significant product offering”?

The difference between learning analytics and content repositories, however, is that there are much stronger examples of real adoption on the analytics side. Purdue has successfully implemented Course Signals and has succeeded in improving course retention (despite open questions about whether inter-course retention has improved). Blackboard Analytics (based on the iStrategy acquisition) has been implemented with real results at a growing number of schools.

More significant, perhaps, is the work done by the Predictive Analytics Reporting (PAR) Framework, which just today announced that it is becoming a separate organization spun off from WICHE. The Unizin slides explicitly reference PAR, and some of the analytics language closely mirrors PAR descriptions. The reason this is significant is that the PAR framework goes a long way towards helping to define the product needs.

The question for analytics, therefore, is less on the product category and more on the ability of Unizin to deliver actual results.

If Unizin succeeds in addressing the above risks, then the state of the art for learning ecosystems will jump forward. If the proposed consortium does not succeed, the result will be a buyer’s club that makes Canvas a very expensive LMS. That result would be ironic, given some of the foundational concepts behind Unizin.

The post Unizin: What are the primary risks? appeared first on e-Literate.

New Internet Trends Presentation: Mary Meeker gets education partly right

Thu, 2014-05-29 10:50

Mary Meeker of Kleiner Perkins Caufield & Byers (KPCB) has released her annual Internet Trends presentation, which has taken on a life of its own. Her data is very useful for seeing macro trends and the significance of Internet-related technology. Even the non-education sections of the presentation should be very useful to education.

One interesting perspective is that tablets continue their massive growth. Anecdotally, I am not seeing such a strong uptick in tablet usage by students on campuses, but the overall growth is worth exploring further. It would be interesting to see more longitudinal data specifically on tablet usage on campus.

[Figure: Tablet growth]

Another mobile perspective to keep watching is the rising percentage of web usage by mobile devices (smart phones and tablets), which jumped from 14% to 25% in just the past year. If your campus or technology company does not have a mobile strategy, then you are missing what students (and even faculty and staff) are actually experiencing on the web.

[Figure: Mobile web usage percentage]

Once you get down to the education section (starting on slide 24), I found Meeker’s overall setup good, but her argument of a “tipping point” unpersuasive. She starts out by observing how important the education sector is in the US:

  • Education is Important
  • Education is Expensive
  • Education Results are Often Subpar
  • People Care About Education
  • Personalized Education Ramping
  • Distribution Expanding & Education Start-up Costs Declining

While these are not new observations, I think she is right in using them to explain why the sector is important and why it is different from other technology-driven sectors susceptible to bubbles and lost interest. I could see educational technology investment dropping in the near future, but the challenges and opportunities will not go away.

Where I have a problem is with her justification that “education may be at a tipping point”. The examples seem to cherry-pick technology adoption of consumer tools rather than showing any evidence of change based on the technologies. It’s not that I don’t see change happening, just that this slide is one of the weaker arguments in the deck.

[Figure: Education green shoots]

Later in the presentation is an interesting observation showing the low level of useful information that is actually tagged and analyzed, which is also true for education.

[Figure: Share of useful data that is tagged and analyzed]

You can see the entire presentation here:

KPCB Internet trends 2014 from Kleiner Perkins Caufield & Byers

The post New Internet Trends Presentation: Mary Meeker gets education partly right appeared first on e-Literate.

Why Unizin is a Threat to edX

Sat, 2014-05-24 15:49

In the week since we published our Unizin exposé, there has been nary a peep from the group to us, or apparently to the traditional news outlets either. When we ran the piece, we emailed Indiana University CIO Brad Wheeler to request comment or corrections. We have not heard back from him yet. Brad, if you’re reading this, our door is always open. Talk to us.

Luckily for us, we don’t need to rely on new pronouncements from Brad to give us more insight into what’s going on. It turns out that he’s been speaking in public about this idea for years. He just hasn’t called it Unizin. And from what I can tell, it should give edX cause to worry.

Understanding Brad

If you want to understand Unizin, you really have to understand Brad Wheeler. He is clearly leading this group and has a history of starting such coalitions (like Sakai, Kuali, and HathiTrust, to name a few). Unizin has his fingerprints all over it. And if you want to understand how Brad thinks, the first thing you need to know is that he is a professor of information systems at a business school. He thinks like a business school professor. For as long as I have been aware of his work, which is about nine years now, he has been focused on the problem of the ed tech ecosystem as an inefficient market. For example, in a 2012 EDUCAUSE Review piece co-authored with Charles Henry, Brad and Charles wrote,

Currently in higher education, we are seeing a growing misalignment between the buyers (higher education institutions) and sellers in these areas. The commercial side has swiftly adapted to scale, consolidating in every domain where there are efficiencies in digital distribution and reuse. Many firms have bought up former rivals in order to vastly consolidate the suppliers of critical software, content, and services. They did so to achieve better economics by scaling the combined revenues of their acquired firms and splitting operational costs over more products.

Colleges and universities, however, remain highly fragmented with mostly one-off “deals” that aptly illustrate the price and cost inefficiencies of skewed markets. A large number of uncoordinated buyers will have little effect on pricing terms when a few suppliers are near oligopolies in terms of industry structure. Beyond price, colleges and universities are also collectively and unintentionally ceding critical capabilities in many categories of essential software, content, and services, necessitating that they buy these capabilities back from the commercial markets.

In their view, the paucity of vendors and consolidation in the space has given the vendors unhealthy leverage over their customers. The solution to this problem, in their view, is for customers to stage their own consolidation in order to rebalance the power relationship with the vendors:

Within the academy, many have expressed outrage at the pricing and sales strategies of the aggregated commercial firms. We dismiss that point of view. The problem lies not with the commercial firms, who rightly endeavor to transfer money from higher education institutions to their shareholders. The problem lies in the behavior of those of us in higher education—the buyers who fail to aggregate at scale in the interests of our institutions. Achieving multi-institutional scale in order to level the playing field and become players in the new game in this era of digital distribution and reuse requires that we make behavioral changes. These changes are neither pro-commercial nor anti-commercial, and attempts to frame them as such are without merit. The changes are, rather, pro-institutional.

Brad thinks a lot about how universities as institutions can recover some of the money that they are currently giving over to vendors. Here he is in 2007, writing about open source Learning Management Systems:

Developing sustainable economics and advancing the frontiers of innovation are the dual challenges for application software in higher education. Sustainable economics means that an institution’s base budgets can support the licensing fees, developers, maintenance, training, and support required for application software. For example, it means that the viability of a course management system (CMS) is not dependent on the next grant or on a one-time budgetary accommodation. Since making changes to application software invokes cost, minimizing change is one tactic for achieving sustainable economics through lower IT costs. In higher education, however, the creative nature of colleges and universities motivates faculty and staff to innovate with new pedagogy and with the use of online resources. Application software that fails to evolve or to allow experimentation and innovation in teaching is unlikely to be well received.

Higher education is in search of a new model to address these dual challenges, and open source application development has been proffered as a solution. Open source software, which is usually obtained without paying any licensing fee to its creators, allows developers to modify the inner workings of the software. In contrast, commercial application software, which is usually licensed for an annual fee, does not allow the inner workings of the software to be modified. Open source software is not free, however, when properly viewed from a total cost of ownership (TCO) perspective. Like all other systems, it requires investments for hardware, user support staff, training, integration with other systems, and so forth. Thus licensing fees, technical support, and control of destiny in evolving the software features are the discriminating cost factors. But licensing fees are not trivial: some estimates place licensing at 20–25 percent of the TCO—in the hundreds of thousands of dollars for many institutions.

Here he is in 2010, writing about the problem of academic journal fees:

Imagine if an efficiency consultant uncovered the following situation. In 1984, an important business function in a university began using a convenient copy service for $0.10 per copy. The university staff provided most of the labor, and the copy service company provided the machine. Twenty-six years later, the university is still using the copy service and still providing most of the labor costs, but the price has changed—to $0.85 per copy. In addition, the copy service now imposes restrictions such that only certain members of the university can read the copied documents. If others want to read the copies, the university must pay extra for the number of readers of the document rather than just the per-copy fee.

Ridiculous? A fee that is 850 percent of the rate twenty-five years earlier? The fee should be approximately $0.21 per copy if it had tracked with the U.S. inflation rate and had gained no technology efficiencies in twenty-six years.  Surely, no efficiency expert would affirm continuing to rent the copier under these terms. The expert would undoubtedly point out that the university could own and operate a copier for a much lower cost and without any restrictive use rules that impede the university’s work. If this situation were found in a non-core administrative area, it would be viewed as outrageous and would be changed immediately.

In fact, a very similar situation exists today in a core area of research and education. This copier example is imperfectly illustrative of the cost and restrictive use imposed on most academic libraries by academic journals. For example, in 2006 a group of 275 doctoral/research universities paid a combined $1 billion to essentially rent the “journal copier” system that would provide their faculty, staff, and students with access to scholarly journals. In 2010, they are paying even more, and the real scale of the cost to colleges and universities spans globally to institutions of all sizes. A sweeping study from the United Kingdom estimated total annual expenditure for journals at £597.4 million (approximately U.S. $952 million) in 2006–7. The total estimate for scholarly communications—inclusive of faculty time for editing and reviewing—was £5.4 billion (approximately U.S. $ 8.6 billion).
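As a quick sanity check on the quoted copier figure: $0.10 compounded at roughly 3% average annual US inflation (the 3% rate is my assumption, not Brad’s) over 26 years does land near the $0.21 he cites:

```python
# $0.10 in 1984 compounded for 26 years at ~3% average annual inflation.
price_1984, years, avg_inflation = 0.10, 26, 0.03
print(round(price_1984 * (1 + avg_inflation) ** years, 2))  # ~0.22
```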

And here he is in 2012 writing about textbooks:

Over the years, students and content creators (authors and publishers) have been engaged in a self-reinforcing, negative economic loop for textbooks. Creators only get paid for their investment and work when a new textbook is sold, and students save money by purchasing a used textbook at a lower cost. Creators price higher as fewer students buy new, and students either seek used books or older editions, go without an assigned text, or turn to digital piracy in response to higher prices.

Early signs in the shift to digital were also troubling. Shrewd students who succeeded in buying a used textbook and selling it back had a net cost of about 35 percent of the book’s list price, but less than half of students generally succeeded in selling back. In 2010, e-text pricing was around 70–75 percent of a new paper book or roughly double the cost of the buy-sellback net cost for students. E-texts (naturally) had no option for sellback, and they were riddled with restrictions concerning printing, length of access, and so forth. In addition, publishers were employing a bridging strategy to kill the used-book market by combining single-use website codes with new textbooks for essential online materials. If a student bought a used book, he or she would then still need to pay retail price for a website code.

Thus, while the shift to digital provided new opportunities for students to save money and publishers to rethink their business models, the trend was heading in precisely the wrong direction for content pricing. Also, publishers, bookstores, and others were coming forward with clever new software and hardware platforms for students to read and annotate e-texts. In the absence of a university plan, it is not unreasonable to foresee that a freshman could, with five courses, have seven e-texts requiring four or five different types of software just to study! Obviously, that makes no sense.

As we will see, Brad has gone on record that Unizin is driven by the same concerns (although he doesn’t use the coalition’s name). Whatever else it is also about, you can bet money that it’s about controlling vendor costs.

Not Just the LMS

But if Unizin were only about controlling the cost of the LMS, it would be an odd way to do it. First of all, why invest $1 million per university for a 7-year payback? (And by the way, one thing we don’t know yet is what that $1 million covers. Does it cover migration support, for example? Licensing? What is the total cost of Unizin, including moving to Canvas, and how much of that money goes to Instructure in the end?) Also, if you’re trying to drive down prices, then you get more leverage from a larger buying club. Internet2, which will reportedly be the business entity that will host Unizin, already has an agreement with Instructure through its NET+ program. If you’re not familiar with it, NET+ is essentially a buyers’ club for all Internet2 members. Working through members who act as sponsors for a vendor, Internet2 develops a range of review criteria to ensure that the product in question is secure, scalable, offered under equitable contractual terms, and so on. And then they negotiate a price. This is a great approach if you want to get better pricing from vendors while at the same time encouraging a healthier vendor ecosystem. On the one hand, you are aggregating demand across many institutions, so you have a lot of leverage. On the other hand, you are also rationalizing the due diligence process for the vendors, so that purchasing decisions by Internet2 members will presumably not be painful tell-me-everything RFPs. (In fact, it may eliminate the need for RFPs altogether.) This reduces the cost of sales for the vendors which, in turn, makes it affordable for more vendors to be competitive. NET+ is a great program, and Brad was involved with its creation.

And it does not appear to be what Unizin is doing, despite the fact that Internet2 is the “fiscal agent” for Unizin.

I can think of several possibilities to account for this. First, it is possible that Unizin is, in fact, using NET+ and is negotiating some additional clauses needed by the coalition that will eventually be incorporated back into the NET+ program. The second possibility is that the coalition has needs that are different enough from those of the larger Internet2 community that they feel they need a separate arrangement. The third possibility is that the reason for negotiating separately has nothing to do with contractual needs. Rather, creating a club of attractive peer universities might fulfill a political need to provide cover for the CIOs at their home institutions.

Which of these is right? Or is there some other explanation? We don’t know. Unizin isn’t talking. But my guess is that it’s actually a bit of all three. I suspect that NET+ was probably the starting point of the contractual negotiations, that the coalition has needs beyond generic LMS use, and that some of their needs are political in nature. I base that conjecture on a paper and presentation in which Brad Wheeler makes the case for an approach that is uncannily similar to what we know about Unizin.

Speeding Up On Curves

If you want to understand Unizin, you must read Brad’s paper “Speeding Up on Curves” and view his presentation by the same name. Both are important. The article has the more cogent and complete argument, while the presentation contains some revealing improvisational riffs. The article in particular makes the case for the creation of a coalition of interdependent universities to provide an LMS, content repository, and learning analytics system.

Sound familiar?

Brad writes about four educational technology-enabled models, in decreasing order of cost and revenue generation:

  1. Residential education (flipped classrooms): $$$
  2. Online courses and degrees: $$
  3. Massive Online Courses (MOCs): $
  4. Massive Open Online Courses (MOOCs): Free [1]

He argues that these tools are necessary to scale institutional revenues in the face of declining public funding and rising operational costs. He envisions a new, digitally enabled marketplace.

It looks to me like Brad is trying to run the table. He wants to control costs from LMS vendors, MOOC platform vendors, analytics and content management vendors, and textbook vendors, all in one shot, while scaling up revenue through online offerings. In business school jargon, he wants to transform the Unizin cohort into a vertically integrated cartel that controls its entire value chain.

That’s a very big vision. In my experience, however, universities do not typically put up $1 million for vision. They need something more concrete and immediate. On the other hand, licensing an LMS doesn’t seem sufficient justification for a $1 million investment with a 7-year payback either. There must be something in the middle ground between the tactical LMS license and the big picture that is attracting the other Unizin schools. What is it? What’s the pitch? Again, we don’t know. Unizin isn’t talking. But I have a guess, and edX isn’t going to like it.

How Much Does Free Cost?

Suppose you’re an executive in a big R1 university or university system. Suppose you’re very well aware of and very worried about the economic trends that Brad has outlined in his “Speeding Up On Curves” article. (The second supposition is practically redundant to the first one.) Suppose you don’t know what the answer to this problem is but you feel like you have to be in the MOOC game in case some sort of partial solution should emerge out of it. What are your options?

  1. You could go to Coursera. This is the easiest option in some ways, and has the benefit of reaching the largest number of eyeballs. But it feels like you’re surrendering your brand, which you believe is one of your core assets, to Coursera’s brand. And beyond that, the whole Silicon Valley thing makes you uncomfortable, particularly when you don’t understand how they intend to make money (or don’t believe their answers when you ask them that question).
  2. You could go it alone and put out your own MOOCs on your own platform. But your LMS might not be set up well for MOOCs, and in any event, how would anyone know that you are putting them out? You aren’t confident in your ability to market courses in a way that will attract tens of thousands of students on your own.
  3. You could go to edX. It’s a non-profit run by universities, so it’s closer to your comfort level. And it is getting significant student traffic—more than you could probably get on your own. But it costs a lot of money to join that club—anywhere from $1 million to $5 million, from what’s leaked out into the public about the prices that particular schools have paid. And what you get for that money, beyond the brand recognition and the portal destination, is a very immature platform and some community that you could easily reproduce elsewhere.

Suppose somebody came to you and said, “For the same cost of an edX membership, I can get you a vastly more mature platform for MOOCs, which also happens to be the sexy new LMS that your faculty are bugging you to let them use for their more traditional courses, and a coalition that can build a brand to rival edX while still being run by your peers and not some icky VCs. I also can offer you a much grander vision of bigger benefits, but even if you don’t believe that vision will come to pass, you can think of getting the chance to be in on the ground floor as a free bonus.”

This pitch would be perfectly aligned with Instructure’s pitch for the Canvas Network (their MOOC support), as I wrote about previously. It also would be aligned with the positioning of the LMS in “Speeding Up On Curves”:

Distribution Platforms. The Learning Management System (LMS) and its variants have long been viewed as the online learning distribution platform. But now, as content becomes platform, Coursera is growing like a juggernaut, edX is expanding, and publishers are producing adaptive learning systems. If we back up fifteen years and look at the initial heterogeneity of the LMS space, we see that the path to scale for software platforms played out with mergers and acquisitions and yielded a few dominant options with high switching costs among them. The platforms become increasingly important as education becomes more digital, since they provide distribution for educational experiences, courses, and degrees to students who are on or off campus. In business terms, these platforms are how our educational services reach markets of students both near and far away. And just as happened with Internet commerce more generally, students are becoming far more discerning in their educational purchases that these platforms enable.

Brad has framed the LMS as the storefront, which is probably the principal innovation of xMOOC platforms.

What are the odds that the participants in the Unizin coalition are moderately to highly motivated by the MOOC question? Well, let’s see:

Slide from CSU Presentation

Yes, I believe there is a pretty good chance that an alternative edX-like coalition would be interesting to the executives from at least some of these schools. And if so, then going with the vanilla NET+ contract probably wouldn’t be enough. For starters, they would want to create an edX-like portal, separate from the generic Canvas.net portal, for this coalition. It would be their edY, or something. There might (or might not) be software-related steps beyond the contract that Instructure would have to do, like adding code to the Instructure instance of Canvas or spinning up a Unizin-specific instance. And there may be other, administrative-related functionality changes that the coalition would want (for example). Perhaps more importantly, though, if the Unizin stakeholders want to be able to go back to their campuses and make a case for edY, they will have to come with a group of schools that they believe will hold up well (in the eyes of their campus stakeholders) against the likes of Harvard and MIT. A big, splashy launch of a Unizin coalition with a number of peer schools who are all going to market together, supported by the LMS that all the cool kids dig, could be just the ticket. The group could conceivably use the NET+ contract as a starting point, but they would want to separate themselves from it somewhat for both substantive and political reasons.

Again, this is all just guesswork. But it fits what we know.

So if the medium-term ambition of Unizin is to create a MOOC portal, who is threatened by that? I don’t think it hurts Coursera too badly. They now have enough eyeballs that I think even schools in other coalitions are likely to hedge their bets and put a course or two on that platform. edX is another matter, though. Assuming that Unizin could succeed in making a big media splash and attracting students to its course catalog, I don’t see what edX offers that Unizin on Canvas couldn’t do better, and the value to administrators of getting all MOOC and non-MOOC courses on the same supported platform shouldn’t be underestimated. If I were Anant Agarwal, I would be very worried about Unizin poaching my customers.

But What About the Big Vision?

The LMS + MOOC pitch explains why these universities might be interested in a coalition, but it doesn’t fully explain the interest in the Learning Object Repository and analytics system. To explain that, you need to look at Brad’s bigger ambition. And I think both the goal and the strategies there get dicier. But more on that in a future post.

  1. Whether MOOCs are actually low-cost relative to the other options on Brad’s list depends a lot on how you’re measuring cost.

The post Why Unizin is a Threat to edX appeared first on e-Literate.

DOE Doubles Down on State Authorization: 25x increase in regulatory language

Wed, 2014-05-21 11:11

Now that the Kabuki Theatre of the Department of Education’s negotiated rulemaking process has finished its penultimate act, can we all act surprised that the likely result includes the proposed State Authorization regulations growing by a factor of 25 with no comments allowed by one of the groups most affected?

The gist of State Authorization is to force distance education and correspondence programs to comply not only with their home state’s regulations but also with the regulations of every state in which their students reside. The proposed regulations from 2010 (struck down by the courts in 2011) came in at 75 words:

§600.9(c) If an institution is offering postsecondary education through distance or correspondence education to students in a State in which it is not physically located or in which it is otherwise subject to State jurisdiction as determined by the State, the institution must meet any State requirements for it to be legally offering postsecondary distance or correspondence education in that State. An institution must be able to document to the Secretary the State’s approval upon request.

The most recent submission from DOE comes in at 1,912 words. You can read the full text here. This growth in regulatory language will have real costs. Today the president of Saint Leo University described how much of the administrative bloat on campuses is due to increasing federal regulations.

Let me be clear, not all of the increases in college tuition and administrative bloat are caused by regulation, but some are—and far more than you think.

The reason that the 2010 language was struck down by the courts was described in a WCET post:

As we’ve noted before, the distance education language was not included in the original language proposed by the USDOE in June 2010 as part of its rulemaking process.  The rule appeared in the final ‘Program Integrity’ regulations released in October 2010.  As a result, the court seems to have agreed that the USDOE did not follow proper procedures in implementing the rule.  If the language had been included in the June 2010 proposed rules, WCET and the distance education community could have provided comments to improve the final regulation.

What is disconcerting now is that once again the DOE has not included a major stakeholder group that will be directly impacted by the new regulations, in this case state regulators. I described in a previous post how the new regs would dramatically increase federal control over distance education and would significantly increase the required oversight by state regulators. It is this issue that appears to have been the breaking point in current negotiations, per the Chronicle today:

On state authorization, the department and several negotiators representing student and consumer groups remained insistent that distance-education programs could be eligible for federal student aid only if they were approved by a state with an “active process” for authorizing them—a process that went beyond passively approving programs merely if they were accredited or had been operating in the state for a long time.

“We’re not willing to walk away” from that position, said Pam Moran, representing the department.

And what do the state regulators have to say in this process?

Elizabeth Hicks, executive director of student financial services at the Massachusetts Institute of Technology, was one of several negotiators who asked the department to hold off on issuing a state-authorization rule until it could consult with state regulators, who were not represented among the 31 negotiators from outside the department.

“Many of us feel you are trying to leverage institutions to regulate states,” said Ms. Hicks. “That’s disingenuous.”

What do you get for this massive growth in regulatory language? Russ Poulin from WCET, who was one of the negotiators and has provided great insight into the process, stated in a comment to the Chronicle article:

The objection of most negotiators was not bureaucracy, but meaningless bureaucracy. The Department’s proposal would have forced institutions to lobby states (in which they serve students) to change their approval processes to a process that would not be substantially different than what they do now. This would not improve consumer protection. It would throw the status of hundreds of thousands of students into question while states decide if they would make the necessary changes and what changes they would make. Yes, there was a roll-out period, but uncertainty would remain. Why disrupt the lives of students to implement a process that would result in the same approval outcome?

I am sure that many individuals within the DOE truly wanted to reach consensus within negotiated rulemaking, but the process for doing so was flawed from the beginning. The process worked as designed, and no consensus was reached. Now the DOE has the discretion to introduce the proposed language it wants or even to drop the subject.

The shame of this is that there really is a problem to be addressed. eCampus News described today how Ashford University agreed to pay a $7.25 million settlement and to revise its marketing practices to resolve a lawsuit brought by the state of Iowa. One former student commented:

Pat Kean, a military veteran from Madrid and former Ashford College of Education student, said he used higher education funds earned through his service to pay for online classes, only to discover in his final days as a student that his teaching degree wasn’t valid and his credits weren’t transferrable. Even the land-based campus in Clinton wouldn’t accept his credits, he said.

Students do need to have pathways, including via states, to address misleading marketing practices and poor quality. The process seems to have worked here, even without State Authorization, partly because students in Ashford’s home state of Iowa complained. But where would students in Oregon complain? This is the rationale for State Authorization.

Despite the exclusion of state regulators from the negotiated rulemaking process and the last-minute changes to requirements on state regulators, the DOE has actually been forthright about its intention to force states to ramp up “active” oversight. From the Federal Register in 2010 (see page 34813, left column):

Upon further review, we believe the better approach is to view the State approval to offer postsecondary educational programs as a substantive requirement where the State is expected to take an active role in approving an institution and monitoring complaints from the public about its operations and responding appropriately.

In other words, there is a problem to be solved, and the intent to place new regulatory requirements on the states themselves has been known. What is unfortunate is the process that A) grew regulatory language by a factor of 25, B) still excluded key input from affected parties, and C) predictably failed to reach consensus.

The post DOE Doubles Down on State Authorization: 25x increase in regulatory language appeared first on e-Literate.

Some Real Reasons the “Two-speed Internet” Hurts Higher Education (Hint: impact on the “Next Facebook” is not one of them)

Mon, 2014-05-19 21:22

The debate around net neutrality so far has been almost as depressing as the set of judicial and administrative decisions that got us here. Central to the debate has been the obsession about how the two-speed internet will “stop the next Facebook/Google/Netflix” from being able to innovate.

Screenshot: “next Facebook” headlines

Save the Internet does a bit better than most at teasing out some of the other issues (privacy, freedom of speech), but states the business core of the argument like so:

Net Neutrality lowers the barriers of entry for entrepreneurs, startups and small businesses by ensuring the Web is a fair and level playing field. It’s because of Net Neutrality that small businesses and entrepreneurs have been able to thrive on the Internet. They use the Internet to reach new customers and showcase their goods, applications and services.

I’m not going to argue that this is wrong. Monopoly power on this scale is a dangerous thing. Until recently, there were decent laws preventing companies from owning all the media outlets in a single metro — we are now moving towards allowing one company to control most of America’s access to the Internet. It’s easy to adopt the world-weary pose of “It’s all just Google vs. Comcast, Goliath vs. Goliath, what do I care?” But, of course, this is the well-established point of anti-monopoly law — the world is a better place for David when Goliath fights Goliath than when Goliath stands unopposed. When Goliath stands unopposed, bad things happen. You don’t have to root for Goliath Number Two to understand the utility of that.

At the same time, these arguments have obscured some of the real threats to education that have nothing to do with the “next Facebook” scenario. Primary among these threats is the issue of what happens to traffic that is not from traditional content providers. I’d like to sketch out what that means for higher education, and why your institution should be talking about the dangers of creating a provider-paid express lane on the Internet.

The BitTorrent Roots of the Current Mess and the Problem of “No-Provider” and “Own-Provider” Services

How we got to the current policy is a bit convoluted, but it’s worthwhile to go back to the last great success in the fight for net neutrality. In 2008, the FCC ruled that Comcast had to stop throttling BitTorrent traffic. For those unfamiliar with BitTorrent, it is a peer-to-peer technology that is used to share files on the Internet. Key to its peer-to-peer design is that it is “providerless” — there is no content company that mediates the traffic — all users of a particular torrent connect directly to each other. Your content doesn’t come to me via Google or Dropbox — it comes to me directly from your computer, and from the computers of the others downloading or “seeding” it.
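To make the “providerless” point concrete, here is a toy sketch (in Python) of how a swarm hands out pieces of a file with no central server in the data path. The peer addresses and piece count are invented, and real BitTorrent adds trackers, handshakes, choking, and piece verification; this only models the topology:

# Toy model of a BitTorrent-style swarm: every downloader fetches pieces
# directly from other peers; no content company sits in the middle.
peers = ["10.0.0.2", "10.0.0.7", "10.0.0.9"]  # hypothetical fellow downloaders
num_pieces = 12                               # the file, split into pieces

# Spread piece requests across the swarm, round-robin style.
assignments = {piece: peers[piece % len(peers)] for piece in range(num_pieces)}

for piece, peer in sorted(assignments.items()):
    print(f"piece {piece:2d} <- fetched directly from peer {peer}")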

BitTorrent peer-to-peer network diagram (Wikimedia Commons)

Of course, most campus IT administrators are intimately familiar with the technology, as it was one of the things slowing campus Internet to a crawl several years back. And as such, I’m sure that at least some campus IT administrators sympathize with Comcast’s decision — after all, a number of campuses ended up throttling BitTorrent as well.

But consider the issue faced by a student who wanted to use BitTorrent on such a campus. No matter how much money they paid for Internet service, they could never significantly increase their BitTorrent speeds. Meanwhile, the speeds of everything else provided through campus pipes increased.

Under the new FCC rules, all applications and providers can be subject to the same sort of limits (the newer FCC rules apparently ban throttling, but there is little difference between throttling and the separation of traffic into fast and slow lanes). The difference here is that “content providers” can pay a fee to ISPs to get out of “throttling prison” and use the full bandwidth available to the consumer to deliver their service. So Netflix pays Comcast, and gets out of throttling prison. Netflix’s upstart competitor doesn’t have the money and so gets slower service.

Supporters of the proposal say this is where it ends — it’s just a matter of who owes money to whom, and setting up reasonable guidelines for that.

But what about the person using BitTorrent? The problem with BitTorrent is that there is no provider to pay the cable company to get fast lane access. This is not simply a case of how much Goliath owes Goliath. This is a case of David not even having access to the currency system. BitTorrent applications have no content provider status, and so will be relegated permanently to the slow lane.

This problem, that the proposed rules are built around assumptions of a “provider” negotiating with cable companies, is potentially more damaging to education than the actual details of what those negotiations are allowed to entail. To paraphrase Milton Friedman paraphrasing William Harcourt: We are all torrenters now. And that means we have little control over our future.

Beyond BitTorrent: Video Clips for a Media Class

If you think this doesn’t apply to your campus, think again. Because higher education deals quite a lot with services where there is no corporate provider.

First, consider a non-peer-to-peer example. On most campuses, media and communications faculty use clips from films for their classes, and quite often distribute them via the Internet. They are allowed to do this because of explicit protections granted to them by the U.S. government, but because they must show care in how they distribute content, they generally use a freestanding server on campus (such as Kaltura) to deliver the clips. In a class that deals with, say, cinematography, the quality of the clips could be essential to the activity. As it stands now, what you might say in your course description is “Students should have access to broadband in order to view the video clips for homework.”

“Charles Foster Kane is Dead” GIF

So here’s a question — how can you make sure your students at home can get “fast lane” access to these clips?

If you were to put them up on YouTube or Vimeo, then YouTube or Vimeo would negotiate the agreement. But in your case, you are just serving them up through a campus server. Who do you call? How much does it cost?

Ok, now that you have done that for Comcast, it’s time to ask yourself — what other cable internet providers do your students have? If it’s an online course, how do you deal with a local cable provider in Athens, GA when you are in Seattle, WA?

There’s no real answer to these questions. Or rather, the answer is clear — unless you are the University of Phoenix, you aren’t going to be able to negotiate this. Your institution is not set up for it. And so, as the fiber revolution rolls out across the nation, most of higher education will be stuck in the copper lane.

A Peer-to-Peer Example: Educational Videoconferencing

Videoconferencing is one area where the increasing quality of internet connections is poised to have great impact. Most of what sucks about videoconferencing comes down to latency (that ‘you-go-no-you-go’ seconds-long delay that makes you feel like you are conducting class over a walkie-talkie), stability of connection, and visual clarity (which allows you to see the microexpressions that signal to you important things, like ‘Is this student getting this at all?’).

All of these aspects improve with increased bandwidth. And it’s possible, of course, that your third-party video-conferencing provider will be able to pay the fee to Comcast and others that allows your students to tap into such things.

Assuming, of course, that you have such a provider. The recent trend in video-conferencing is toward peer-to-peer products which connect conference participants directly instead of through an intermediate server. This dramatically lowers latency, leading to a conversational flow that more closely resembles face-to-face discussion. As more remote students have access to high-quality connections, peer-to-peer video conferencing has the potential to increase the impact of online education substantially, and, just as importantly, make such models more humane by providing students and teachers access to the facial “microexpressions” and conversational cues that make such events emotionally meaningful experiences.
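A back-of-the-envelope sketch of why cutting out the intermediate server matters for latency. All of the distances below are illustrative, and real latency also includes queuing and processing delays; the only firm number is that light in fiber covers very roughly 1,000 km in about 5 ms one way:

# Relaying through a distant server adds path length, and path length is
# a floor on latency. All distances here are hypothetical.
ms_per_1000_km = 5           # rough propagation delay in fiber, one way
direct_km = 300              # student to teacher, peer-to-peer
via_server_km = 2400 + 2300  # student -> provider data center -> teacher

direct_ms = direct_km / 1000 * ms_per_1000_km
relayed_ms = via_server_km / 1000 * ms_per_1000_km
print(f"direct path: ~{direct_ms:.1f} ms one way")
print(f"via server:  ~{relayed_ms:.1f} ms one way, before any processing delay")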


Except — how will you ensure access to the bandwidth and latency you need to make this work? Your students can’t buy it — they may have the Comcast Super-Turbo-Extra-Boost plan, but that’s only going to increase the speed of prioritized traffic they receive, such as Netflix.

And your institution can’t buy it either, because there is no central server. When your student Jane from Twin Rivers talks, the traffic doesn’t come from an identifiable university computer. It comes from Jane’s computer in Twin Rivers, and goes directly to you and the five other students in the review session. Jane doesn’t have an option to call Comcast and get her traffic into the fast lane. So while Hulu will be able in 10 years to deliver multi-terabyte holographic versions of The Good Wife to your living room, the peer-to-peer video your campus is using will remain rooted in 2014, always on the verge of not sucking, but never quite making it to the next level.

Other Examples

These are just two examples from areas I’m deeply familiar with, but if you talk to other people at your institution, you can uncover other examples fairly quickly. Here’s what you ask:

Is there anything you do in your teaching or research that relies on connections to the Internet and is not delivered by a major third-party provider (such as YouTube, Dropbox, etc.)?

You’ll find out that quite a lot of things work like that. For example, there has been a major push to shut down computer labs on campuses as a cost-saving measure — after all, most students have laptops. As we’ve done that, we’ve pushed students into using virtualized software, often across consumer connections. In virtualized scenarios, students remotely tap into high-speed servers loaded with specialized software. This allows a department to make sure that all students have access to the computing power and software they need without requiring a computer lab.

There’s a lot of potential for virtualized software to reduce cost and increase student access. While having a good connection to the Internet to use it is costly, it’s considerably less costly for many students than having to drive to campus several nights a week to complete assignments, and far more convenient. As consumer bandwidth increases, the dream of virtualizing most of the software students need becomes an achievable reality.

Except… You see where this is going. How does your campus make sure that each student’s virtualized instance gets the maximum bandwidth that student’s connection can support? Short of hiring a full-time campus cable negotiator, it’s hard to see how this happens. Like the peer-to-peer video-conferencing revolution, the move to virtualization could be over before it has begun, and with it the potential decreases in cost and increases in access.

Once you start to look for this issue, you’ll find it everywhere. There are certain IT functions we keep on campus due to security and privacy issues, for example. We may be pushed into moving these into third party software if we cannot negotiate the same speed for on-campus functions as for off-campus third-party provided functions. Our students are increasingly working with large datasets as part of their research — how, exactly, does one get fast lane access for one’s 50 GB GIS homework?

These are small problems now, but without continued access to top-tier service they can become big problems soon.

But, Chairman Wheeler says….

Of course, the current FCC Chairman says that the fears are overblown. There are many great sites out there that debunk the FCC’s “Don’t Panic” rhetoric better than I could, but let me deal with three common objections quickly.

First, there is some confusion about whether the new rules allow providers to prioritize traffic to consumers. Wheeler says they don’t, but this is a bit of a word game. To vastly simplify the issue, Wheeler has guaranteed that the on- and off-ramps to the Information Superhighway won’t have slow and fast lanes. The actual highway? He’s determined that’s outside the FCC’s purview. And since any connection is only as strong as its weakest link, having no priority lanes on the ramps means very little if providers are carving up the highway into express lanes and economy ones.

Second, he’s guaranteed that providers won’t be able to “slow down” any traffic, only prioritize some traffic. A simple thought experiment demonstrates the ridiculousness of this claim. During peak hours, Netflix currently makes up about 34% of Internet traffic. The cable companies are now going to make Netflix pay to prioritize its content. Given that bandwidth is a finite resource, it doesn’t take a genius to realize that even if the cable companies just went after Netflix, it would adversely impact your university’s efforts. By definition, to prioritize one thing is to de-prioritize something else, and in this case that something else is your connection to your students.
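The zero-sum logic is easy to spell out in code. A minimal congestion sketch; the capacity and demand numbers are illustrative, not measurements:

# On a fixed-capacity link, a strict-priority class is served first and
# everyone else splits what is left. All numbers are made up for illustration.
capacity = 100.0
priority_demand = 34.0  # e.g., a paying video provider at ~34% of peak traffic
other_demand = 80.0     # everything else: campus servers, P2P, research data

priority_served = min(priority_demand, capacity)
leftover = capacity - priority_served
other_served = min(other_demand, leftover)

print(f"priority class: {priority_served:.0f} of {priority_demand:.0f} units delivered")
print(f"everyone else:  {other_served:.0f} of {other_demand:.0f} units delivered "
      f"({other_served / other_demand:.0%})")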

Finally, sitting here in 2014, it’s tempting to see the bandwidth and low-latency connections we have now as sufficient for our needs. This is part of the rhetoric of the cable companies. What do you use the Internet for now? Well, you’ll still get to do that!

But how much of what we do now could we do in 2004? Would any of our stakeholders — the students, the legislature, the taxpayers, the businesses we send our students into — be happy with us utilizing the Internet at a 2004 level?

Pundits often complain that the world of education does not adopt technology at the speed of business. That’s true, partially. And we could do better. But the currently proposed FCC rules all but guarantee that we won’t be allowed to.

To me, that’s a bigger issue than where the “next Facebook” comes from. And it’s one that we need to start talking about.

 

Image Credits:

Peer-to-Peer diagram courtesy of Wikimedia Commons: http://upload.wikimedia.org/wikipedia/commons/0/09/BitTorrent_network.svg. Modified by Michael Caulfield.

“Charles Foster Kane is Dead” GIF, by howtocatchamonster. Published at http://howtocatchamonster.tumblr.com/tagged/citizen-kane

Other images are screenshots by Michael Caulfield.

 

The post Some Real Reasons the “Two-speed Internet” Hurts Higher Education (Hint: impact on the “Next Facebook” is not one of them) appeared first on e-Literate.

The IMS Is More Important Than You Think It Is

Sun, 2014-05-18 08:21

I have long argued that the development of technical interoperability standards for education is absolutely critical for enabling innovation and personalized learning environments. Note that I usually avoid those sorts of buzzwords—”innovation” and “personalized learning”—so when I use them here, I really mean them. If there are two fundamental lessons we have learned in the last several decades of educational technology development, they are these:

  1. Building monolithic learning environments generally results in building impoverished learning environments. Innovation and personalization happen at the edges of the system.
  2. There are tensions between enabling innovation at the edges and creating a holistic view of student learning and a usable learning environment. Integration does matter.

To these two education-specific principles, I would add a general principle about software:

  • All software eventually grows old and dies. If you can’t get your data out easily, then everything you have done in the software will die with it (and quite possibly kill you in the process).

Together, these lessons make the case for strong interoperability standards. But arriving at those standards often feels like what Max Weber referred to as “the strong and slow boring of hard boards.” It is painful, frustratingly slow, and often lacking a feeling of accomplishment. It’s easy to give up on the process.

Having recently returned from the IMS Learning Impact Leadership Institute, I must say that the feeling was different this time. Some of this is undoubtedly because I no longer serve on any technical standards committees, so I am free to look at the big picture without getting caught up in the horrifying spectacle of the sausage making (to mix Germanic political metaphors). But it’s also because the IMS is just knocking the cover off the ball in terms of its current and near-term prospective impact. This is not your father’s standards body.

Community Growth

The first indicator that things are different at the IMS these days is the health of the community. When I first got involved with the organization eight years ago, it was in the early stages of recovery from a near-death experience. The IMS was dying because it had become irrelevant. It just wasn’t doing work that mattered. So people stopped coming and organizations stopped paying dues. Then Rob Abel took over and things started turning around.

Slide: IMS membership growth

Membership has quadrupled. Interestingly, there was also a very strong K12 contingent at the meeting this year, which is new. This trend is accelerating. According to Rob, the IMS has averaged adding about 20 new members a year for the last eight years but has added 25 so far in 2014.

Implementations of IMS standards are also way up:

Slide: IMS conformance certifications

Note that conformance certification is a new thing for IMS. One of the key changes in the organization was an effort to make sure that the specifications led to true interoperability, rather than kinda-sorta-theoretical interoperability. Close to 250 systems have now been certified as conforming to at least one IMS specification. (Note that there are also a number of systems that conform but have not yet applied for certification, so this number is not comprehensive.) And here again, the trend continues to accelerate. According to Rob, the IMS averaged two new conformance certifications a week in 2013 and is averaging four new certifications a week so far in 2014.

Keep in mind that these numbers are for systems. A lot of the things (for lack of a better word) that conform to IMS specifications are not systems but bundles of content. Here the numbers are also impressive:

Slide: conforming content numbers

So a lot more people are participating in IMS and a lot more products are conforming to IMS specification profiles.

Impact

One of the effects of all of this is that LMS switching has gotten a lot easier. I have noticed a significant decline in campus anxiety about moving from one LMS to another over the past few years. There are probably a number of reasons for this, but one is undoubtedly that switching has gotten significantly easier due to IMS interoperability specifications. For content, all popular LMSs in the US higher education market import Common Cartridge packages, and several of them export to Common Cartridge. (By the way, I will say it again: If you are in the process of selecting a new LMS, you should make export to Common Cartridge a buying criterion.) Hooking up the LMS to the administrative systems so that class information can be populated into the LMS and final grades can flow back to the SIS has gotten much easier thanks to the LIS specification. And third-party (including home-grown) tools that work in one LMS usually work in another without extra programming, thanks to the LTI standard.
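If you are evaluating export support, it is easy to peek inside a cartridge. A Common Cartridge package is just a zip archive (often with an .imscc extension) built around an imsmanifest.xml that describes the content; the file path below is a placeholder:

import zipfile

# Quick sanity check on an exported cartridge: every Common Cartridge
# package carries an imsmanifest.xml at its root.
def looks_like_cartridge(path):
    with zipfile.ZipFile(path) as cc:
        return "imsmanifest.xml" in cc.namelist()

print(looks_like_cartridge("exported_course.imscc"))  # hypothetical file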

But I think the IMS is still just warming up. LTI is leading the way for the next wave of progress. Under the stewardship of Chuck Severance, LTI is now supported by 25 learning platforms. edX and Coursera both recently announced that they support LTI, for example. It has become the undisputed standard for integrating learning tools into platforms. This means that new learning tool developers have a straightforward path to integrating with 25 learning platforms simply by supporting LTI. My guess is that a good portion of those four new conformance certifications a week are LTI certifications. I see signs that LTI is facilitating a proliferation of new learning tools.
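For developers wondering what “supporting LTI” means on the wire: an LTI 1.x launch is essentially an OAuth 1.0-signed HTML form POST from the platform to the tool. A rough Python sketch using the oauthlib package; the endpoint, key, secret, and parameter values are placeholders, and real launches carry many more fields:

from urllib.parse import urlencode
from oauthlib.oauth1 import Client, SIGNATURE_HMAC, SIGNATURE_TYPE_BODY

# Core launch parameters (values are placeholders, not from any real course).
params = {
    "lti_message_type": "basic-lti-launch-request",
    "lti_version": "LTI-1p0",
    "resource_link_id": "course-101-tool-1",  # this placement of the tool
    "user_id": "student-42",
    "roles": "Learner",
}

# The platform signs the form body with the key/secret it shares with the tool.
client = Client("consumer-key", client_secret="shared-secret",
                signature_method=SIGNATURE_HMAC,
                signature_type=SIGNATURE_TYPE_BODY)
uri, headers, body = client.sign(
    "https://tool.example.edu/lti/launch",  # hypothetical tool endpoint
    http_method="POST",
    body=urlencode(params),
    headers={"Content-Type": "application/x-www-form-urlencoded"})

print(body)  # now includes the oauth_* fields; POSTed as a browser form submit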

There is a lot of new work happening at the IMS now, but I want to highlight two specifications in development that I think will take things to the next level. The first is Caliper. I have waxed poetic about this specification in a previous post. In my opinion, the IMS is under-representing its value by billing it as an analytics specification. It is really a learning data interoperability specification. If you want loosely coupled learning tools to be able to exchange relevant data with each other so that they can work in concert, Caliper will enable that. It is as close to a Holy Grail as I can think of in terms of resolving the tension that I called out at the top of the post.
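Caliper was still in draft at the time of writing, but the working model is an actor-action-object event vocabulary serialized as JSON. A hypothetical sketch of such an event; the field names and URIs are illustrative, not the final spec:

import json
from datetime import datetime, timezone

# A hypothetical learning event in Caliper's actor-action-object spirit.
# Field names and values are illustrative; consult the final IMS documents.
event = {
    "actor": {"id": "https://example.edu/users/554433", "type": "Person"},
    "action": "Viewed",
    "object": {"id": "https://example.edu/readings/chapter-3",
               "type": "DigitalResource"},
    "eventTime": datetime.now(timezone.utc).isoformat(),
}

# Any tool that emits events in a shared shape like this can feed a common
# analytics store, which is the loose coupling described above.
print(json.dumps(event, indent=2))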

The second one is the Community App Sharing Architecture (CASA). Think of it as kind of a peer-to-peer replacement for an app store, allowing the decentralized sharing of learning apps. As the UCLA Education and Collaborative Technology Group (ECTG) puts it,

The World Wide Web is a vast, mildly curated repository of information. While search engines fairly accurately filter the Internet based on content, they are less effective at filtering based on functionality. For example, they lack options to identify mobile-capable sites, sites that provide certain interoperability mechanisms, or sites related to certain industries or with certain content rating levels. There is a space where such a model already exists: the “app stores” that pervade the native mobile app landscape. In addition to the app itself, these hubs have deep awareness of application metadata, such as mobile and/or tablet support. Another deficit of search engines is their inability to allow organization-based configuration, defining a worldview with trust relationships, filters and transformations to curate the results they present to end users. Native app stores use a star (hub-and-spoke) topology with a central hub for publishing, which lacks this fine-grain customizability, but an alternative peer-to-peer topology, as is used for autonomous systems across the Internet, restores this freedom.

CASA should facilitate the further proliferation of learning apps by making them more easily findable and sharable, drawing on affinity networks (e.g., English composition teachers or R1 universities). Caliper will enable these tools to talk to each other and create an ensemble learning environment without having to rely on vendor-specific infrastructure. And LTI will enable them to plug into a portal-like unifying environment if and when that is desirable.
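To picture what that decentralized sharing might look like mechanically, here is a purely hypothetical sketch in the spirit of the CASA description above. None of these names or fields come from the actual specification, which was still in development:

# Hypothetical peer-to-peer app discovery: each institution publishes a
# registry, and a consumer merges peers' listings through its own trust
# relationships and filters. Not the actual CASA protocol.
peer_registries = {
    "example-university": [{"title": "Essay Grader", "mobile": True}],
    "peer-college":       [{"title": "Lab Simulator", "mobile": False}],
}
trusted_peers = {"example-university", "peer-college"}  # local worldview

def discover(require_mobile=False):
    for peer, apps in peer_registries.items():
        if peer not in trusted_peers:
            continue  # filter by trust relationship
        for app in apps:
            if require_mobile and not app["mobile"]:
                continue  # filter on functionality metadata
            yield peer, app

for peer, app in discover(require_mobile=True):
    print(f"{app['title']} (shared by {peer})")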

Who says technical standards aren’t exciting?

The post The IMS Is More Important Than You Think It Is appeared first on e-Literate.

Unizin: Indiana University’s Secret New “Learning Ecosystem” Coalition

Fri, 2014-05-16 10:00

Indiana University has been the driving force behind the creation of a new organization to develop a “learning ecosystem”. At least ten schools are being quietly asked to contribute $1 million each over a three-year period to join the consortium. The details of what that $1 million buys are unclear at this point. The centerpiece in the short term appears to be a contract with Instructure for use of the Canvas LMS. But there are also hints of ambitious plans regarding learning object repositories and learning analytics.

What is remarkable is the level of secrecy surrounding the project. Several sources from involved schools have indicated that very few people have been informed regarding their institutions’ prospective involvement. When school discussions do take place, care is being taken to keep them quiet. For example, a video recording of a presentation to faculty about Unizin at Colorado State University has since been removed from public access after it received some attention on Twitter (although e-Literate downloaded a copy of the video before it was removed from public access).

Nevertheless, details of the project are beginning to leak out. In this post, I will share the facts that we have been able to confirm. Phil and I will both follow on with some analysis and inferences early next week. And of course, we will continue to bring you updates as news breaks.

The (Prospective) Members

As far as we can tell, there are no definite commitments to join this group yet other than from Indiana, although University of Michigan and Colorado State University are beginning to socialize the idea on their respective campuses. The recording obtained by e-Literate is a presentation by CSU Dean of Libraries Pat Burns and Associate Provost for Innovation Mike Palmquist to what was apparently a group of faculty members. The ten potential Unizin members listed on the presentation slides (along with their current LMSs) are as follows:

  • Colorado State University (Blackboard)
  • Indiana University (Sakai)
  • Oregon State University (Blackboard)
  • Purdue University (Blackboard)
  • University of Florida (Sakai, with some use of Canvas)
  • University of Maryland (Canvas)
  • University of Michigan (Sakai)
  • University of Texas (Blackboard, with at least one campus in the process of moving to Canvas)
  • University of Wisconsin (Desire2Learn)
  • University of Utah (Canvas)

Slide from CSU Presentation

An article in the University of Michigan campus newspaper about a briefing to the faculty senate corroborates Michigan, Indiana, and Florida as participants in Unizin discussions. There may be others involved in the discussions as well. For example, we have some evidence that Rice University is or was considering joining. A bond issuance document by the university refers to its participation in something called “Coalition for Online Teaching and Learning (COLT)”. A CV for a Vice Provost at Purdue University describes one of her job duties as representing the university at “the Coalition for Online Learning and Teaching (now referred to as UniZin, the online learning ecosystem consortium)”.1

While the group seems only loosely connected to the Big Ten’s Committee on Institutional Cooperation (CIC), it does share some key players with an earlier position paper by a CIC task force motivated by the perceived need to take back control of distance learning from MOOC providers such as Coursera.

At the recorded CSU meeting, one of the presenters—it’s impossible to tell which is the speaker from the recording we have—acknowledged that the meetings were largely conducted in secret when challenged by a faculty member on the lack of faculty involvement. He cited sensitive negotiations among the ten universities and Instructure as the reason.

The CSU presenters specifically state that CSU was “invited” to join Unizin—the name is apparently a portmanteau of “university” and “zinfandel”—and suggest that one of the reasons for their inclusion was their experience with distance education relative to the other members. According to the meeting notes of the CSU technology fee advisory board dated April 7, 2014, Pat Burns said,

They want similar universities and in a lot of ways we are very similar to these other universities and institutions. We are way ahead in some areas. We do about $38 million a year in online education. We are leaders in online education and how to do that right.

More on this later in the post.

As I mentioned before, each university will be expected to contribute $1 million. According to the CSU presentation, Internet2 is acting as the “fiscal agent” and the hiring of an executive director for the project is “imminent”.

Unizin Status

The LMS Deal

Although there are ambitious larger plans for the coalition, the immediate driver of the deal appears to be the formation of a buyers’ club to license Instructure Canvas. Indiana University, which is leading the discussions of the coalition, has already announced that they will be moving to Canvas after a selection process comparing that LMS to Blackboard and Desire2Learn (ironically, using product evaluation categories developed during the design process for Sakai OAE). According to the CSU presentation and documents, the Canvas license would cost the university $150,000 less per year than their Blackboard license under this deal. However, because the university would be paying in $1,000,000 over three years up front, they expect “about a seven-year payback,” according to the meeting minutes. (The presenter in the video recording said “seven to ten years.”)
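For what it is worth, the “about a seven-year payback” claim is consistent with a simple undiscounted calculation using only the numbers from the presentation:

# Simple (undiscounted) payback on the Unizin buy-in, per the CSU figures.
upfront = 1_000_000       # contribution, paid in over three years
annual_savings = 150_000  # Canvas license vs. current Blackboard license

payback_years = upfront / annual_savings
print(f"Simple payback: {payback_years:.1f} years")  # ~6.7, i.e., "about seven"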

Unizin Business Model

The level of due diligence each campus has put into the selection of Canvas seems to vary widely. Indiana has documented extensive research at their LMS selection site. A Canvas pilot was run at the University of Wisconsin, while Oregon State University is in the process of piloting Canvas. On the other hand, CSU seems mainly to rely on IU’s research as their due diligence. Some of the faculty members in the audience seemed to know more about Canvas functionality versus Blackboard and Blackboard-to-Canvas migration issues than the staff did.

We have no evidence that the current deal includes special provisions about opening the source code of the platform, much but not all of which Instructure releases under the AGPL open source license. However, the CSU presenters speculated that the coalition could self-host, stop paying fees to Instructure, and develop the LMS themselves in the future. No mention was made of the fact that the Sakai schools in the Unizin coalition would be leaving just such an open source coalition in favor of a hosted and externally developed solution.

Prospective: Learning Object Repository (LOR) and Analytics

In addition to the LMS, the coalition has ambitions to add a LOR and a learning analytics platform.

Unizin Diagram of Services

It is not clear whether Unizin intends to build or buy these components, or even whether that decision has been made at this time. However, the CSU presenters (correctly) mentioned that the University of Wisconsin has significant expertise and experience in learning analytics, suggesting that this is one reason why it was asked to join the coalition. Wisconsin is involved in the Open Learning Analytics initiative and has been one of a few early adopters of Desire2Learn’s analytics platform. Purdue is also famous for its Course Signals learning analytics platform (and infamous for its lack of transparency about problems with its research). Likewise, LORs have been developed by both the University of Utah and Rice. These pieces, along with the CSU comments about the coalition pooling development resources, suggest that consortium-based development (with or without an open source license) is an option being considered.

According to the CSU presenters, all content would be stored in a “monolithic” content repository for the coalition, “outside of vendor platforms” (which may be another indicator that Unizin intends to develop its own software, or may just mean “outside the LMS”). The repository would also act as a “reflector,” meaning that it could contain links to externally hosted content.

Unizin Content Repository

Unizin Content Repository 2

The CSU presenters gave various rationales for a centralized LOR. One was that it would make future LMS migrations easier if the content were stored externally. Another was that it would make inter-campus course sharing easier. On this last point, the possibility was mentioned several times, with the example that CSU might share engineering courses with IU in exchange for foreign language courses, but it does not appear that any concrete sharing arrangements have been worked out.

The scope of the analytics ambitions seems broad:

Unizin Predictive Analytics

Unizin Real-Time Analytics

These slides contain a menu of just about every kind of learning analytics in the market today. There is no indication that specific plans or priorities have been developed for Unizin analytics at this time.

Unanswered Questions

That sums up what we know about Unizin at this time. As I mentioned earlier, Phil and I will be writing next week about what we think we can infer from what we know. In the meantime, I want to close this post with some questions focusing on what we don’t know:

  • What does that $1 million per campus pay for?
  • Will the coalition be hosting Canvas itself, or will Instructure do that?
  • Internet2 already has a pricing agreement with Instructure through NET+; how is Unizin’s different and (presumably) better?
  • Both Apereo and Kuali are foundations that host open source software projects relevant to Unizin’s ambitions and of which at least a few of the Unizin schools are contributing members; why are neither of these foundations (apparently) involved in the project?
  • What are the real drivers for the content repository development? Ease of LMS migration seems like an inadequate explanation for an effort of this size, and the course sharing ambitions seem vague.
  • How does Unizin intend to solve the content re-use problem which has plagued pretty much every other LOR initiative in the long history of LOR initiatives?
  • What would the first goals of the learning analytics project be (given how many possible goals are articulated), and how would they interact with the content repository requirements?
  • How will faculty react when Unizin is made public on their campuses? And does this reflect a broader shift in the role of faculty governance?
  1. Pray that your spouse’s divorce lawyer never hires Phil Hill as a private investigator.

The post Unizin: Indiana University’s Secret New “Learning Ecosystem” Coalition appeared first on e-Literate.

Policy Updates on FERPA and Net Neutrality

Thu, 2014-05-15 18:01

Two policy debates that could have a significant impact on education – updates on FERPA and data privacy & FCC proposals on Net Neutrality – both entered the next stage this week.

FERPA Modernization

I recently wrote about the new federal moves to update FERPA to handle the age of Big Data (should I have used scare quotes there?).

Yesterday the White House released its report on big data and privacy implications. The focus was broadly on big data, but there will be implications for ed tech, with several key recommendations focused on the education sector. In particular, there will be a push to update and revise the Family Educational Rights and Privacy Act (FERPA, enacted in 1974) and the Children’s Online Privacy Protection Act (COPPA, enacted in 1998).

I was quite optimistic about the federal approach based on this report, and yesterday we got some more apparently good news – a bipartisan effort to improve data privacy and update FERPA in a reasonable fashion (dogs and cats, living together). As reported by THE Journal, Senators Markey (D-MA) and Hatch (R-UT) introduced the “Protecting Student Privacy Act”, which drew a warm reception even from the Software and Information Industry Association (SIIA), a group that had objected to the earlier version discussed in January. THE Journal summarized the key parts of the legislation (full text here):

  • Prohibiting the use of personally identifiable student data for advertising or marketing purposes;
  • Requiring certain safeguards be in place to protect the integrity of data in the hands of private companies;
  • Giving parents the explicit right to view their children’s data and make corrections to erroneous information;
  • Making available the names of every company that has access to a district’s student data;
  • Limiting the personally identifiable information that can be transferred between companies and schools; and
  • Ensuring that “private companies cannot maintain dossiers on students in perpetuity by requiring the companies to later delete personally identifiable information.”

FCC and Net Neutrality

Now that you’re in a good mood with renewed faith in federal policy-making, let’s bring back that cynicism. The FCC, despite significant protests, approved a plan to allow paid priority on the Internet, potentially killing the Net Neutrality concept. From the Washington Post:

The Federal Communications Commission on Thursday voted in favor of advancing a proposal that could dramatically reshape the way consumers experience the Internet, opening the possibility of Internet service providers charging Web sites for higher-quality delivery of their content to American consumers.

The plan, approved in a three-to-two vote along party lines, could unleash a new economy on the Web where an Internet service provider such as Verizon would charge a Web site such as Netflix for faster video streaming. The proposal would, though, prohibit telecom firms from outright blocking Web sites.

THE Journal ran a piece in January showing the debate about Net Neutrality and how it could affect education. Some Pollyanna argued in that piece that [emphasis added]:

However, not everybody agrees with a grim conclusion that education is destined to suffer under a “two tier” Internet. “I think the reaction to the net neutrality ruling is overwrought and distracting,” said Phil Hill, co-founder of MindWires, an education technology consulting practice. What’s important to remember, he said, is that the court “actually agreed” that the FCC has “legal authority over broadband.”

Hill asserted that net neutrality “is an important principle for education, as we need freedom for students and institutions to access Internet services that are becoming more and more important without having broadband carriers decide on which services have priority over others.” However, he pointed out, “This should allow the FCC to implement new rules that don’t step on the toes of the common carrier rules. In other words, as long as the FCC doesn’t screw up, it should be able to regulate and enforce net neutrality with future rules. But there will be a lot of overheated rhetoric in the meantime. I see short-term confusion for K-12 but long-term there should be little or no lasting impact.”

Oh wait, that was me, and I was completely wrong.

The post Policy Updates on FERPA and Net Neutrality appeared first on e-Literate.

Six Months After Layoffs: No signs of growth at Desire2Learn

Thu, 2014-05-15 13:23

Late last year I wrote a post about Desire2Learn’s laying off more than 7% of the workforce, speculating that something had changed at the company. In particular, I didn’t buy the company line about continued growth.

In the end, I have trouble believing that these recent cuts are solely based on improving Desire2Learn’s growth without needing to correct for slower-than-expected growth. The arguments made by our sources that these are significant cuts driven by not hitting growth targets are compelling. However, there is no smoking gun that I have found to back up these claims definitively.

What I do feel confident in saying regarding employee numbers is that there’s more to the story here than just ‘churn’ alongside aggressive hiring. Since the end of the Blackboard patent lawsuit, Desire2Learn has grown at an average of 13 employees per month (140 in Nov 09, 560 in Sep 12, 750 in Nov 13). Yet the numbers might have actually gone down since July 2013.

After that post, Desire2Learn (through an interview with a local media outlet) further explained the company line that it is growing despite the restructuring. [emphasis added]

The past year has included hirings and staff reductions as the developer of online learning software prepares for growth and expansion, says Dennis Kavelman, the Kitchener-based company’s chief operating officer.

Kavelman stresses that the company is growing, not shrinking, and restructuring to handle the growth resulted in some layoffs.

The past year alone saw new offices open on the East and West Coasts of Canada, another in Boston and significant expansion of business into Latin America, Europe and the Middle East.

Kavelman says he was hired to manage a fast-growing company, in part because of his experience at Research In Motion, now BlackBerry. Kavelman, RIM’s former chief financial officer, started at the smartphone maker when it had only 20 employees. When he left in 2010, it had about 14,000 employees.

“Because things change a lot when you grow from small, medium to large and you kind of have to do things the right way,” Kavelman says.

As some Desire2Learn employees were restructured out of a job in recent months, word of layoffs and cutbacks spread through blogs and emails. But Kavelman says the changes are necessary to ensure the company grows as efficiently as possible.

[snip]

The company expects continued growth in 2014, although Kavelman will not be specific about numbers.

“We are just going to keep growing for a while, and everybody there really wants to do this for quite a long time,” he says.

I believe that the e-Literate article was one of the main targets of the highlighted comment, so it seems worthwhile to provide an update. Now that it has been almost six full months since the layoffs, what does LinkedIn data show? First, some caveats:

  • Not every employee of a company lists their affiliation with the company on LinkedIn, and not every employee has a LinkedIn profile. So the best view of this data is to look for trends.
  • From my experience with technology companies, often 90 – 95% of employees are listed on LinkedIn, which at least provides a sanity check.

A few months after the layoffs, once employees or ex-employees had time to update their profiles, LinkedIn showed 711 employees. This seems to be in line with Desire2Learn’s updated claims of 750 employees (94.8% on LinkedIn connected to the company).

LinkedIn Feb 24

Today, six months after the layoffs and three months after the above LinkedIn view, they have 715 employees listed.

LinkedIn May 15

I don’t want to over-interpret small variations, but the message I get is that unless Desire2Learn has convinced all new employees to avoid LinkedIn, the company has not had any significant growth since last summer. As I described in the layoff post, prior to summer 2013 Desire2Learn had been adding approximately 14 employees per month. If, as the company claims, they are continuing their aggressive growth, then we should have seen something on the order of 84 employees added since the time of the layoffs and a significant increase since February. We have not seen this.
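To make the comparison explicit, here is the arithmetic using only the figures cited in this post (LinkedIn counts are, as noted above, an imperfect proxy):

# Expected growth at the pre-layoff hiring pace vs. observed LinkedIn counts.
hires_per_month = 14         # approximate pace prior to summer 2013
months_since_layoffs = 6
expected_added = hires_per_month * months_since_layoffs  # ~84 employees

feb_count, may_count = 711, 715  # LinkedIn profile counts from this post
observed_added = may_count - feb_count  # +4 over the most recent three months

print(f"Expected at the old pace: ~{expected_added} added since the layoffs")
print(f"Observed on LinkedIn (Feb to May): +{observed_added}")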

The more data I see lately, including the recent ITC study on community colleges, the more it seems that Desire2Learn has also stopped growing in terms of market share in US higher education. They do claim growth opportunities outside of this market (K-12, corporate learning, international), but even if they are growing in these other markets, it does not appear to be affecting the company size.

It is possible, however, that some of the international growth would include new employees who are less likely to have a LinkedIn profile. This might explain part of the discrepancy, but not all or even most of it.

We’ll keep you updated as we hear more public information.

Update: I forgot about this article, but Forbes did a profile of John Baker last month and included this description:

Today the approximately 750 employee company is one of the leaders in the burgeoning e-learning market.

750 is the number of employees listed by the company as of the layoffs last year. This is another source of information to add to the LinkedIn data backing up my point.

The post Six Months After Layoffs: No signs of growth at Desire2Learn appeared first on e-Literate.

ITC Survey: Four trends to watch in LMS market for community colleges

Tue, 2014-05-06 09:26

The Instructional Technology Council (ITC), an affiliated council of the American Association of Community Colleges (AACC), has conducted a distance education survey since 2004 focusing on online education trends among community colleges in the US and Canada. With this focus, the member colleges tend to be those with a stronger interest in online education programs and therefore are more progressive in technology usage than the general community college population. One part of the ITC survey is on general LMS usage at member schools, and this provides another source of data on the higher ed LMS market.

The survey this year was based on 142 responses out of the 375 member institutions (38%), and there are a few caveats that should be noted:

  • Prior to 2012 the survey was emailed to all AACC colleges, but due to uneven responses it changed to surveying just ITC member colleges; and
  • “No answer” responses are not listed in percentages, thus totals will not always equal 100%.

Despite those caveats, “70 percent of the annual submissions have come from the same campuses during the nine years of the survey”, which means that it is worth exploring some broad trends for the LMS market for community colleges. Here is the data presented on LMS usage this year on page 15 (April 2014 report of Fall 2013 survey data):

Table 3

 

This data is a little confusing due to Blackboard’s acquisitions of WebCT in 2005 and ANGEL in 2009. The survey changed this year from listing ANGEL as its own LMS to counting it as Blackboard. For simplicity’s sake, I’ve combined Bb Learn, WebCT and ANGEL data into one category called “Blackboard”.
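The recombination itself is just bookkeeping. A sketch of the idea in Python; the share numbers here are placeholders for illustration, not the survey’s actual figures:

# Fold Blackboard's acquired products into one category so that trend lines
# are comparable across survey years. Percentages below are placeholders.
raw_shares = {"Bb Learn": 40.0, "WebCT": 2.0, "ANGEL": 12.0,
              "Moodle": 18.0, "Canvas": 15.0, "Desire2Learn": 9.0}

blackboard_family = {"Bb Learn", "WebCT", "ANGEL"}
combined = {"Blackboard": sum(v for k, v in raw_shares.items()
                              if k in blackboard_family)}
combined.update({k: v for k, v in raw_shares.items()
                 if k not in blackboard_family})

print(combined)  # {'Blackboard': 54.0, 'Moodle': 18.0, 'Canvas': 15.0, ...}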

ITC LMS Market Share

 

Given the change in data collection (combining Blackboard solutions, changing listed LMS options, dropping the AACC mailing), I think it would be safer to concentrate on trends rather than absolute market share numbers. Here is a view of trend lines:

ITC LMS Trend Lines

 

With this data, there are four trends that seem important for the LMS market.

  • Despite the change in categories, Blackboard may be showing signs of organic market growth – taking customers away from competitors rather than just acquiring or retaining customers. Does this represent a change in Blackboard’s ongoing market share losses overall? I don’t think we can draw conclusions yet, but this is new data worth exploring (watch for Campus Computing Survey, Edutechnica, the next ITC survey).
  • Moodle and Canvas are consistently growing in the community college market – Canvas more so than Moodle. In the case of Moodle, the ITC survey shows fairly consistent growth since 2009 (the year Blackboard acquired ANGEL). In the case of Canvas, the ITC survey shows rapid growth over the past 2 years.
  • For the first time in an LMS market survey that I am aware of, Desire2Learn has actually lost market share. In fact, Desire2Learn is now lower than both Moodle and Canvas for community colleges according to this survey. This is a topic worth exploring further, especially in relation to last year’s layoffs.
  • If you look at the total responses, notice that the top 4 LMS solutions (Blackboard, Moodle, Canvas, Desire2Learn) accounted for 74% of the total in 2011 and 98.5% in 2013. I asked the researchers, and they said the survey lists 15 different LMSs plus a write-in option, but that this year “the respondents reported a much more narrow number of LMS solutions”.

Update (5/6): George Kroner was kind enough to run the data from the Edutechnica site, getting fairly similar numbers.

@PhilOnEdTech for Carnegie 2010 basic #1-14: 35% Bb, 11% ANGEL, 12% D2L, 12% Canvas, 15% Moodle, 3% Sakai, 3% Pearson, 1% WebCT, 8% Other

— George Kroner (@georgekroner) May 7, 2014

@georgekroner Thanks – seems the biggest disconnect is Bb / ANGEL / WebCT totals of 58% for ITC and 47% for your data.

— Phil Hill (@PhilOnEdTech) May 7, 2014

The post ITC Survey: Four trends to watch in LMS market for community colleges appeared first on e-Literate.

UCF and Temple IPEDS corrections lead to new Top 20 in online enrollment

Sun, 2014-05-04 12:12

In response to the new distance education (roughly equivalent to online education) data released by the National Center for Educational Statistics (NCES) and its Integrated Postsecondary Education Data System (IPEDS), I have written a series of posts showing different ways to view the data. In short, the data allows much greater insight into degree-granting online higher education in the US than we previously had. WCET also has a series of posts worth exploring.

One post in particular listed the top 20 institutions in terms of how many students take at least one online course, broken down by sector. The list for public 4-year showed the following (undergrad + grad, # students taking at least one online course in Fall 2012):

[Table: Public 4-year Top 20]

Two schools, upon viewing this data, discovered that the data they had provided to IPEDS was incorrect.

University of Central Florida

Thomas Cavanaugh from UCF was a little surprised to see his school listed with 21,782 students online when his group’s internal data showed higher numbers. For context, UCF has had one of the most proactive and strategic approaches to online education of any R1 university, including mandatory training and quality assurance standards for any course offered online or in hybrid format. What Cavanaugh and team discovered was that the school was not reporting every category of course that is actually considered distance education. UCF defines their modalities in five categories:

World Wide Web (W) — courses conducted via web-based instruction and collaboration. Some courses may require minimal campus attendance or in-person/proctored examinations.

Video Streaming (V) — courses delivered over the web via streaming digital video which may be supplemented by additional Web activity, projects or exams.

Video Streaming/ Reduced Seat Time (RV) — classroom-based content is available over the web via streaming video and classroom attendance is not required. Other required activities that substitute for video instruction may include any of the following elements: web activity, in-person or proctored examinations, and labs. See course notes for details.

Mixed Mode/Reduced Seat Time (M) — courses include both required classroom attendance and online instruction. Classes have substantial activity conducted over the web, which substitutes for some classroom meetings.

Face To Face Instruction (P) — courses have required classroom attendance and meet on a regularly scheduled basis. Students may encounter internet and/or computer requirements in these classes.

Meanwhile, the official IPEDS definitions:

Distance education – Education that uses one or more technologies to deliver instruction to students who are separated from the instructor and to support regular and substantive interaction between the students and the instructor synchronously or asynchronously.

Technologies used for instruction may include the following: Internet; one-way and two-way transmissions through open broadcasts, closed circuit, cable, microwave, broadband lines, fiber optics, satellite or wireless communication devices; audio conferencing; and video cassette, DVDs, and CD-ROMs, if the cassette, DVDs, and CD-ROMs are used in a course in conjunction with the technologies listed above.

Distance education course – A course in which the instructional content is delivered exclusively via distance education. Requirements for coming to campus for orientation, testing, or academic support services do not exclude a course from being classified as distance education.

The mistake made at UCF was that the Video Streaming categories (V and RV) were not originally reported to IPEDS as distance education, even though the V and RV categories deliver instructional content exclusively via “one or more technologies to deliver instruction to students who are separated from the instructor and to support regular and substantive interaction between the students and the instructor synchronously or asynchronously.” UCF has since sent corrections to NCES.
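One way to avoid this class of reporting error is to map each local modality code to the IPEDS definition explicitly and derive the headcount from that mapping. Here is a minimal sketch, assuming UCF’s published codes and a simple student-to-courses structure; the function and sample data are hypothetical, not UCF’s actual reporting process:

```python
# Which local modality codes satisfy the IPEDS "distance education course"
# definition (instructional content delivered exclusively via distance education)?
# W, V, and RV deliver all content online; M is hybrid and P is face-to-face,
# so neither counts as a distance education course under the IPEDS definition.
IPEDS_DISTANCE_ED = {"W": True, "V": True, "RV": True, "M": False, "P": False}

def distance_ed_headcount(enrollments):
    """Count students taking at least one IPEDS distance education course.

    `enrollments` maps a student ID to the modality codes of that
    student's courses.
    """
    return sum(
        1
        for codes in enrollments.values()
        if any(IPEDS_DISTANCE_ED[code] for code in codes)
    )

# Toy example: only s2 and s3 take a course delivered exclusively online.
sample = {"s1": ["P", "M"], "s2": ["W", "P"], "s3": ["RV"]}
print(distance_ed_headcount(sample))  # -> 2
```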

The result is that UCF moves from 21,782 students taking at least one online course to 28,954 (this may change slightly due to the definition of census date), a 33% increase: (28,954 − 21,782) / 21,782 ≈ 0.33.

Temple University

Several readers were, like me, surprised to see Temple University with such a high number of online students (35,248, or 96% of all students, taking at least one online course), when the school does not have a reputation for offering online courses and its website does not emphasize online programs.

I contacted the school’s Institutional Research and Assessment group to ask for clarification, and it turns out they also discovered that their IPEDS reporting data was incorrect – but in the opposite direction from UCF. In Temple’s case, the school appears to have simply made a mistake in entering the data, owing to new reporting standards and survey overload. According to my contact, one of the issues is that IPEDS notifies a school if there are any anomalies in the reported data, but the distance education reporting is brand new, so there were no prior years against which to flag anomalies. The mistake was not caught until the public blogging of the data.

The result is that Temple University moves from 35,248 students taking at least one online course to 1,809, a 95% decrease: (35,248 − 1,809) / 35,248 ≈ 0.95. Temple has graciously sent corrections to NCES.

Value of Open Data

I’d like to point out the value of Open Data in this case, as the public reporting and public blogging of data is leading to transparency of the data and reporting process as well as improvements in the data. This would not have happened using previous methods of surveys with closed data.

I’d also like to thank UCF and Temple personnel for their openness in noticing, correcting, and describing data entry mistakes. They provide excellent examples for other schools.

Changes in Top 20

I am not sure what the IPEDS process is to update the database based on corrections, so the resultant changes reported below will not match official records for a while (in other words, you read it here first at e-Literate). NOTE: I have also revised the view to focus just on degree or certificate-seeking students, so there are some other changes as well.

Top 20 US Public 4-year Institutions by Online Enrollment (Revised)

Listing of Top 20 US public 4-year institutions’ Fall 2012 online education enrollment for combined undergraduate and graduate degree- and certificate-seeking students, per IPEDS:

Rank | Institution | Students Taking At Least One Online Course | Percentage | Total Students
1 | University of Maryland-University College | 41,357 | 100% | 41,457
2 | Arizona State University | 36,050 | 49% | 73,179
3 | University of Central Florida | 28,954 | 49% | 59,212
4 | Florida International University | 24,403 | 54% | 44,940
5 | University of Florida | 22,773 | 46% | 49,160
6 | Thomas Edison State College | 20,456 | 99% | 20,606
7 | University of South Florida-Main Campus | 16,113 | 40% | 40,459
8 | Troy University | 15,444 | 69% | 22,407
9 | St Petersburg College | 15,044 | 55% | 27,574
10 | Pennsylvania State University-Main Campus | 13,128 | 29% | 45,138
11 | Northern Arizona University | 12,478 | 48% | 25,850
12 | Valencia College | 11,948 | 33% | 35,728
13 | College of Southern Nevada | 11,867 | 39% | 30,787
14 | Middle Tennessee State University | 11,379 | 45% | 25,144
15 | University of North Texas | 11,259 | 30% | 37,950
16 | University of Houston | 10,809 | 28% | 39,303
17 | Indiana University-Purdue University-Indianapolis | 10,534 | 36% | 29,559
18 | Florida State College at Jacksonville | 10,336 | 39% | 26,634
19 | University of Arizona | 10,316 | 26% | 39,875
20 | North Carolina State University at Raleigh | 10,085 | 31% | 32,798
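Since the percentage column is derived from the two count columns, it can be recomputed as a consistency check on corrections like UCF’s and Temple’s. A small sketch using two rows from the table above (the recomputation is my own check, not part of the IPEDS process):

```python
# Recompute the "Percentage" column from the count columns as a sanity check.
rows = [
    ("University of Maryland-University College", 41_357, 41_457),
    ("University of Central Florida", 28_954, 59_212),
]
for name, online, total in rows:
    print(f"{name}: {online / total:.0%}")
# -> 100% and 49%, matching the table's rounded values.
```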

The post UCF and Temple IPEDS corrections lead to new Top 20 in online enrollment appeared first on e-Literate.

White House report on big data will impact ed tech

Fri, 2014-05-02 14:56

Yesterday the White House released its report on big data and privacy implications. The focus was broadly on big data, but there will be implications for ed tech, with several key recommendations specifically focused on the education sector. Specifically, there will be a push to update and revise the Family Educational Rights and Privacy Act (FERPA, enacted in 1974) and Children’s Online Privacy Protection Act (COPPA, enacted in 2000). Education Week set the context quite well in its article:

FERPA, which was written before the Internet existed, is intended to protect disclosure of the personally identifiable information contained in children’s education records. And COPPA, which requires parental consent under certain conditions for the online collection of personal information from children under age 13, was written before the age of smartphones, tablets, apps, the cloud, and big data.

Think Progress, part of a group founded by John Podesta, who also led the White House study, summarized the key findings as follows:

  1. Give consumers more protection of and control over their private data with a consumer privacy bill of rights

  2. Pass a single national data breach law to prevent the next Target fiasco

  3. Strengthen outdated and archaic laws, such as the Electronic Communications Privacy Act (ECPA), which dictates how the government accesses emails

  4. Give non-citizens the same privacy protections as US citizens

  5. Ensure data collected on students is used only for educational purposes

Ed Tech Sections of the Report Itself:

First, there is a description of the situation in pages 24 – 26 that is too long to quote but worth highlighting:

Learning about Learning: Big Data and Education

Education at both the K-12 and university levels is now supported inside and outside the classroom by a range of technologies that help foster and enhance the learning process. Students now access class materials, watch instructional videos, comment on class activities, collaborate with each other, complete homework, and take tests online. Technology-based educational tools and platforms offer important new capabilities for students and teachers. After only a few generations of evolution, these tools provide real-time assessment so that material can be presented based on how quickly a student learns. Education technologies can also be scaled to reach broad audiences, enable continuous improvement of course content, and increase engagement among students.

Later in the conclusions section of the report, pages 62 – 63, there are the relevant recommendations (quoted in full):

2. Responsible Educational Innovation in the Digital Age

Big data offers significant opportunities to improve learning experiences for children and young adults. Big data intersects with education in two important ways. As students begin to share information with educational institutions, they expect that they are doing so in order to develop knowledge and skills, not to have their data used to build extensive profiles about their strengths and weaknesses that could be used to their disadvantage in later years. Educational institutions are also in a unique position to help prepare children, adolescents, and adults to grapple with the world of big data.

Ensure data protection while promoting innovation in learning

Substantial breakthroughs stand to be made using big data to improve education as personalized learning on network-enabled devices becomes more common. Over the next five years, under the President’s ConnectED initiative, American classrooms will receive a dramatic influx of technology—with substantial potential to enhance teaching and learning, particularly for disadvantaged communities. Internet-based education tools and … [snip]

Recognize digital literacy as an important 21st century skill. In order to ensure students, citizens, and consumers of all ages have the ability to adequately protect themselves from data use and abuse, it is important that they develop fluency in understanding the ways in which data can be collected and shared, how algorithms are employed and for what purposes, and what tools and techniques they can use to protect themselves. Although such skills will never replace regulatory protections, increased digital literacy will better prepare individuals to live in a world saturated by data. Digital literacy—understanding how personal data is collected, shared, and used—should be recognized as an essential skill in K-12 education and be integrated into the standard curriculum.

RECOMMENDATION: The federal government should ensure that data collected in schools is used for educational purposes and continue to support investment and innovation that raises the level of performance across our schools. To promote this innovation, it should explore how to modernize the privacy regulatory framework under the Family Educational Rights and Privacy Act and Children’s Online Privacy Protection Act to ensure two complementary goals: 1) protecting students against their data being shared or used inappropriately, especially when that data is gathered in an educational context, and 2) ensuring that innovation in educational technology, including new approaches and business models, have ample opportunity to flourish.

This new policy push will be important to watch over the next two years or so as it moves beyond white papers into proposed legislation or regulatory changes. My initial read is that this report is quite balanced and well-researched – it is difficult to find much to fault.

There were some cautious views voiced at EdWeek, however:

“When they say ‘modernize,’ we say, ‘build upon’ because I don’t want to get away from the privacy protections that current laws already afford,” said Khaliah Barnes, a lawyer for the Washington-based nonprofit Electronic Privacy Information Center, which filed 14 pages of comments on big data for the White House report before it was released.

Barnes viewed the report favorably, in general, but took issue with a statement that education technologies are already being deployed “with strong privacy and safety protections for students, inside and outside of the classroom.” She mentioned statewide longitudinal databases and cloud computing databases “being used every day that do not adequately protect students.”

The Software & Information Industry Association, a trade group for technology businesses, issued a statement concurring about the value of big data to support student learning. However, the group known as SIIA said, “modernizing privacy rules need not involve new legislation.”

The post White House report on big data will impact ed tech appeared first on e-Literate.

GSV+ASU EdInnovations Conference: A proxy for ed tech’s big challenge

Tue, 2014-04-29 21:01

Last week Michael and I attended the GSV+ASU EdInnovations conference, commonly known as “Davos in the Desert”. This conference is growing in importance, especially in its effect on ed tech investment, as can be seen in the rising attendance. Just five years ago there were fewer than 300 attendees; this year’s event had well over 2,000. Some of the notable articles covering the event:

  • Inside Higher Ed: “Let’s Make a Deal” – this article compares the rise of ed tech in general to the rise in the conference, but asks about lack of educators and lack of nationwide results (more on this later).
  • Huffington Post: “Is the American Dream a Dream Deferred?” – this article takes a positive spin on the conference, especially the focus on improving access and student success, but also questions the lack of diversity (more on gender lines than educator / non-educator lines).
  • EdSurge: “What Happened at ASU-GSV…” – this article presents an insider’s, somewhat boosterish view of the conference, occasionally accentuating the positive and defusing the negative, but it does describe the conference events quite well.
  • Getting Smart: “ASU+GSV: Education Dreaming” – ditto.
  • EdWeek: “Teachers at Summit Tell Ed. Companies What They Want–and What They Don’t” – this article covers one panel in particular, but this one is highly relevant to the following discussion on educator involvement.
  • Dusty World: “ASU/GSV Summit” – this article has a slightly humorous description of “the strangest education conference of my career”, noting that most discussions actually focused on politics and not pedagogical approaches.

As for myself, I live-tweeted some of my observations, particularly focusing on pro-immigration reform, foundation influence, and the difficulty of interviewing Don Graham (former owner of the Washington Post, whose family company currently owns Kaplan).

Lack of Big Results

Rather than re-hashing some of these discussions, I’d like to build on and clarify some comments from the Inside Higher Ed article where I was quoted.

As the billion-dollar education technology industry holds what has become its primary gathering here this week, the onus is on vendors to show they can produce not only profits, but also improved outcomes.

The best section of the article came in the interview from Gallup’s representative.

“At a national level, there is no evidence that educational technology has reduced the cost of education yet or improved the efficacy of education,” said Brandon Busteed, executive director of Gallup Education. “And that’s just as true as it gets. Maybe there will be some day, but that’s the question: How much longer do we think it will take before we can detect movement on the national needle?” [snip]

“Every one of these companies has — at least most of them — some story of a school or a classroom or a student or whatever that they’ve made some kind of impact on, either a qualitative story or some real data on learning improvement,” Busteed said. “You would think that with hundreds of millions of dollars, maybe billions now, that’s been plowed into ed-tech investments … and all the years and all the efforts of all these companies to really move the needle, we ought to see some national-level movement in those indicators.”

This hits it just right. Ed tech in general, and the conference as a proxy, focus very well on good intentions and successful pilot programs, but we are not seeing the results scale. The problem is not a lack of innovative ideas; the problem is a breakdown in the Diffusion of Innovations.

Good Intentions, Not Just Pursuit of the Dollar

On the positive side, I found that the conversations, at least in the plenary sessions, were refreshing in their focus not on making money, but on aiming to help societal challenges. Money really was secondary in importance for the most part in the official conference focus – the ‘doing well by doing good’ concept.

From IHE:

Hill, on the other hand, said the summit could also show the investment community in a good light, with a focus on money, to be sure, but also plenty of good intentions and a quest for results.

“I think there’s a lot of people in higher education who are concerned that this is all about people making a buck,” he said. “If there were more of them here, I think they would hear that there’s a lot of very legitimate concern for improving education.”

This sentiment was also captured by the Huffington Post article:

There was a lot of talk at the summit about the American dream and creating opportunity for all. In fact it was the theme of the conference. I heard a lot about fears that the American dream is dying. [snip]

The sessions were engaging and valuable. During one, Michael Crow, ASU’s president, championed the need for “culture change” in families, in societies and in education in order for us to see any real improvements in learning outcomes. Sentiments I am in agreement with.

These good intentions from the plenary sessions and panel discussions don’t necessarily extend to the hallway conversations. It was a pitch-fest, and there was too much exuberance.

Missing Conversations and Depth of Understanding

What I found most problematic from this conference was the lack of meaningful involvement by educators. From IHE:

Phil Hill, an education consultant, said the lack of faculty and teacher input has limited the conversation about how to address the challenges facing K-12 and higher education. Some speakers, he said, used data to make the case that the education system is broken, as opposed to showcasing how they plan to fix it.

“You end up with everybody pitching everybody,” Hill said. “Where are the schools, the teachers, the people who could actually use the innovations coming out of here to do something with it?”

EdSurge attempted to defuse this feedback (not mine specifically, but from general discussions), noting twice that there were more than 100 educators (including K-12 and higher education). GSV themselves put the numbers in their post-conference survey:

Brief 2014 Summit Fast Facts:

  • Over 2100 Registered Participants
  • Over 200 Investors
  • Over 535 Entrepreneurs
  • Over 80 Higher Education Institutions
  • Over 180 Educators
  • 230 Presenting Companies
  • 23 Countries Represented

From a pure numerical perspective, it is worth noting that the majority of the higher ed “educators” were from for-profit institutions (roughly 70) or from the host institution, ASU (roughly 34). Even with these caveats, educators made up less than 10% of attendees, and there was very little diversity among them (Update: To clarify, I’m calling out the need for significant educator input from a variety of institution types and a variety of adoption categories such as early adopters, early majority, etc.). The EdWeek article called out a panel discussion where faculty gave some welcome feedback:

The speakers at one panel during the conference were able to offer companies some insights, often in very blunt terms. The panelists included a pair of teachers, who spoke about their hopes for classroom technology, and their disappointments about what it doesn’t deliver.

Tanesha Dixon, a middle school social studies teacher, said that many of the classroom technologies she saw didn’t seem as if they’d been designed to make educators’ lives easier, and would most likely add to their workload.

The teacher at the Wheatley Education Campus, a public school in the District of Columbia, said she wasn’t convinced that developers were doing enough to seek out the opinions of educators.

“I always wonder, how do you develop a product?” Dixon asked the audience. “Are you sitting at home in your basement?…I feel like no one’s talking to us.”

This feedback really gets to my biggest complaint at the conference: too many well-intentioned conversations talking about educators without the effort of talking with educators (and yes, the panel in my mind is the exception that proves the rule). This criticism is really aimed at much of ed tech in general and not just the conference.

I doubt that the GSV and ASU organizers intend to exclude educators (see Dusty World for discussion of reduced rates available for non-profits), but that was the result. This is a comment more on reality, not on intentions.

Put Them All Together

And more importantly, I see these themes overlapping. Not having enough well-intentioned discussions with educators is one reason we are seeing a breakdown in the diffusion of innovations, partially caused by a lack of depth in understanding.

Michael called this situation out in his post on why VCs often get ed tech wrong.

The combination of the obsession with disruption and the phobia of institutions is a particularly bad one for the education markets. When I say that, I don’t mean that it is harmful to education (although that is probably also true). What I mean is that it leads to consistently unprofitable investment decisions. [snip]

I just don’t see disruptive innovation as a good guide for investment in education. And for similar reasons, I think the notion of avoiding institutional entanglements is pretty much hopeless, since the very idea of education is inextricably bound up in those institutions. Disruptive innovation and direct-to-consumer are both investment strategies that are designed to avoid complexities that lead to investment risk. But in education, complexity is unavoidable, which means strategies that attempt to avoid it usually result in risk-increasing ignorance rather than risk-avoiding safety. And as Warren Buffett said, “When you combine ignorance and leverage, you get some pretty interesting results.”

I call out this challenge not to criticize the GSV and ASU conference organizers, but rather to use the conference observations to help illustrate the big challenge we have in the education community – plenty of good ideas but frustrating inability to scale and diffuse innovations effectively.

The post GSV+ASU EdInnovations Conference: A proxy for ed tech’s big challenge appeared first on e-Literate.

New e-Literate TV Series on MOOCs

Fri, 2014-04-25 13:28

[Figure: Gartner hype cycle]

If 2013 was the year that MOOC mania peaked in the Gartner hype cycle, then surely 2014 is the year in which they have entered the trough of disillusionment. And just as it was important last year to push back on the inflated expectations, it will be equally important this year to make sure we don’t throw out the baby with the bath water.

With that in mind, I’m pleased to announce our new e-Literate TV series, “Massive and Open: What Are We Learning?” Filmed at the 2013 MOOC Research Initiative (MRI) conference, it delves into the current state of knowledge of the field. The interview with Stanford’s Amy Collier from our pilot series should really be considered Episode Zero for the series, as it explores the value of the first generation of MOOCs (particularly xMOOCs). To that, we have added three episodes from the new series:

  1. MOOC Research Initiative Grant Projects: After a brief introduction with MRI lead researcher George Siemens, Phil and I talk to a number of the grantees about the research they are undertaking. The range of research questions is eye-opening.
  2. MOOC Point/Counterpoint: We intercut interviews between MOOC enthusiast Keith Devlin and MOOC skeptic Jonathan Rees. The result is a rich and nuanced dialog between these two smart guys.
  3. Crowdsourced Answers to Popular MOOC Questions: We got MRI conference attendees to give their answers to frequently asked MOOC questions. The answers are varied and give a good sense of just how much our thinking about MOOCs is still all over the place.

Enjoy!

The post New e-Literate TV Series on MOOCs appeared first on e-Literate.

Partial Transcript: Richard Levin (new Coursera CEO) on Charlie Rose

Wed, 2014-04-23 19:53

I have written two posts recently about Coursera’s appointment of the former president of Yale as the company’s new CEO, with the implicit argument that this move represents a watershed moment for commercial MOOCs. In particular, Coursera seems likely to become the third generation of Richard Levin’s dream, following AllLearn and Open Yale Courses. I’ve also argued that Levin is embellishing the history by making Internet bandwidth a primary factor in the demise of AllLearn when the lack of a viable business model was the more important issue – a point Levin himself argued at the time.

Richard Levin was just interviewed by Charlie Rose, and I am including a transcript of most of the segment (starting around 3:15), highlighting some key points in bold. This interview should give us further insight into the future of commercial MOOCs, especially as we have the first non-founder CEO in one of the big three commercial MOOC providers. Follow this link to watch on CharlieRose.com and avoid the annoying Hulu ad.

Rose: You could have gotten a government job, as an ambassador or something; maybe been Secretary of the Treasury as far as I know . . . you could have done a lot of things. But you’re out running some online education company (laughs).

Levin: It’s a fantastic mission, it’s really the perfect job for me and for following a university president’s [job].

Rose: Why’s that?

Levin: One, I like running things, so it’s an opportunity to run something. But most important it’s so much an extension of what I’ve tried to do. It’s to take Yale to the world, and this is an opportunity to take 108 of the world’s greatest educational institutions (and there’ll probably be some more) and teach the planet.

Rose: Before you go there, let’s get the landscape. At Yale you tried some online education. A couple of others have had . . . and there is a checkered past.

Levin: Well, there was a time of experimentation in learning. When we started in 2000 with Stanford and Oxford as partners, we thought our market was our own alumni, so we sort of narrowcast over the Internet. Then we opened it to the public, but the bandwidth wasn’t there.

Rose: You mean the technical bandwidth?

Levin: Yes, this was still the era of your videos were jerking around, you remember that? So it had that problem, and we just didn’t have the right model for making it work. And also it didn’t have a high degree of interactivity. Basically you watched a lecturer give a lecture, and maybe there were some visuals, but that was it.

And then the next thing we did were “Open Yale Courses”, which basically were videos of 42 of our best lecture courses put out for free over the Internet, with support of the Hewlett Foundation. They were great, but very few people watched them from beginning to end. They were free. The materials of the course were distributed, but there were no quizzes, no exercises.

Now what Coursera has done has sort of recognized that first of all, we have greater bandwidth, we can support lots of people at once – taking quizzes, reacting to the material, getting feedback; having professors look at the data to see what parts students are having a hard time and improving their courses. So it’s a constant feedback loop.

It’s really terrific and the scale is immense. It’s amazing, we’ve had 7 million different people.

Rose: Separate the landscape for me. There’s edX, there’s Sebastian Thern’s [Thrun’s] thing, Udacity. How are the three different?

Levin: They’re all a little bit different, but those are three that are involved in this MOOC space [uses square quotes]. There’s lots of other things online. Many schools have had things online with closed enrollments for substantial tuition dollars for some time now.

What these three are trying to do is go to a wide open public, and putting courses out for free and getting hundreds of thousands of people to sign up for them.

Our approach and edX’s are pretty similar.

Rose: edX is Harvard and MIT?

Levin: edX is ‘Harvard and MIT want to do their own thing and not sign up with Coursera’ (laughs). At this time we have about three times as many partner institutions and three or four times the audience. It [edX] is a worthy effort, they’re doing a good job, and so are we, and we’re competing on the features offered for students. edX is open source software, which some of the computer science types like – it means they can play with it, they can add to the features on their own.

But we’re developing interfaces that will allow faculty to add features as well.

I think it’s good there’s competition. I’ve studied innovative industries; before I became president of Yale it was my field. Competition is good for innovation, the products will get better.

Rose: But is the mission the same?

Levin: I think that edX and Coursera have very similar missions. It’s to have great universities as the partners . . . the universities develop the courses. We’re not a university, Coursera’s not a university. Coursera is a platform and a technology company that serves universities.

Rose: Backed by venture capital?

Levin: Yeah, but I think the key lesson here is the scale.

The post Partial Transcript: Richard Levin (new Coursera CEO) on Charlie Rose appeared first on e-Literate.

Coursera CEO Interview: Mike Caulfield nailed it two months ago

Tue, 2014-04-15 20:03

Two months ago Mike Caulfield lamented the inability of many people in online education, especially those running massive online initiatives, to honestly learn from the past. In the post Mike referred to the failed AllLearn initiative and the seminal post-mortem written up in University Business.

How does that relate? A paragraph from the 2006 post-mortem of AllLearn really stuck out for me:

Oxford, Yale, and Stanford have kept quiet about the collapse of their joint e-learning venture…[h]owever, AllLearn’s closure could offer an unprecedented opportunity to step back and discuss the strengths and weaknesses of the business model… Further research into the series of collapsed online ventures may shed some light on what makes a successful distance education program, and enable some of the surviving online providers to redefine their business models and marketing strategies accordingly

Of course they don’t delve into these things honestly, and as a result most people in these institutions are unaware of them. Like Leonard, the institutions alter the record of the past. They wake up the next day with amnesia, consult a set of dramatically altered notes, and wonder why no one has tried massive Ivy League courses yet. The PR push to cover one’s tracks ends up erasing the institutional knowledge that could build a better initiative.

Little did Mike realize that he was writing a script.

One month later Coursera hired Richard Levin as its new CEO. As president of Yale, Levin was one of the key figures in the creation of AllLearn in 2000, and after the initiative’s 2006 collapse he was one of the figures directly responsible for the Open Yale Courses initiative.

The consensus view is that AllLearn failed to generate enough interest in its non-credit elite courses, and subsequently the program closed due to economics (by Levin’s own previous admission). In 2005 AllLearn attempted to address this challenge by branching beyond alumni as related in this Yale Daily News post [emphasis added in all quotes below].

“I think we’ve learned a lot from the experiment,” Levin said. “While I believe we’ve produced some very high quality courses, we’ve learned that it’s hard to generate large audiences sufficiently from these courses from just the alumni of the three partner schools. So we’ve reached out to attract broader audiences through affiliating with universities and through finding other organizations that might have an interest in making courses available to members.”

Fast forward a year, and it is clear that the effort had failed economically despite the broadening of audiences, again from the Yale Daily News.

Yale President Richard Levin, who served as AllLearn’s chairman, said he thinks that while the participating institutions learned what is necessary to manage a successful distance learning program, they were unable to make the project financially viable.

“We are disappointed that we weren’t able to find a way to make this successful economically,” Levin said. “[But] we learned a lot, and I think it will serve us well in the future.”

Open Yale Courses also provides non-credit elite courses. The problem? You might have guessed it, as described by this 2012 report for the Committee on Online Education.

Open Yale Courses has been supported by generous grants from the Hewlett Foundation, but those grants are ending this semester; and there is no provision yet for the continuation of this program. There has been extensive planning, however, to keep the 42 existing courses on the Yale site as well as the iTunes U and YouTube platforms. All of the courses are being stored and preserved for future study. New visitors are discovering Open Yale Courses daily and global media coverage, which has been constant and enthusiastic since the start of the project, continues unabated.

The initiative is now attempting direct solicitation as a method of funding.

I don’t mean to question Levin’s good intentions nor his considerable support of the mission of making education more accessible through online technology. However, I find it disingenuous to try and alter history. This week the New York Times interviewed Levin about his new role as Coursera CEO, and the reporter asked some good questions but lacked follow-up.

Q. Yale has not exactly been a mass institution.

A. No, but we were early in the on-line arena, with a venture back in 2000 called All-Learn.

Q. How much did you lose, and why didn’t that spoil this for you?

A. It was too early. Bandwidth wasn’t adequate to support the video. But we gained a lot of experience of how to create courses, and then we used it starting in 2007 to create very high quality videos, now supported by adequate bandwidth in many parts of the world, with the Open Yale courses. We’ve released over 40 of them, and they gained a wide audience.

So here we have yet another initiative offering non-credit elite courses, and one of the biggest challenges that Coursera faces is that it has yet to find a viable business model. The company is living on $85 million in venture capital investment and has not yet found revenue sources that go beyond a few million dollars per year (Signature Track). Levin called out this challenge in the same NYT interview.

Q. Doesn’t edX have an advantage in being not-for-profit, meaning they don’t have to worry about returning on investment so soon? Yesterday Andrew Ng said, “We’ve raised $85 million, so we’ve got some runway.” How much runway?

A. I think the principal investors in Coursera understand that this is a long term play. We’re fortunate to have patient investors; and as Andrew said, we’re quite adequately capitalized. I think we can become financially viable certainly within that five-year framework.

Q. You’re an economist. How do you get from here to there?

A. Right now courses are free and we’re charging for certification. We think that as the idea of using Coursera courses for professional advancement grows, the numbers seeking certificates will grow. And the price we charge probably can grow, too. A move from $50 or $60 for Signature Track to $100 is certainly imaginable. At $100 a pop, if you had two or three, or five million people. …

I would suggest that Coursera will not “get from here to there” by altering the record of the past. AllLearn failed to generate sufficient interest in its courses, and the proximate cause was not “insufficient bandwidth”. AllLearn in fact had several approaches that alleviated bandwidth concerns, including CD-ROM delivery and the ability to turn off high-bandwidth features. AllLearn’s average learner was a 47-year-old Yale alumnus – hardly a case of low-income lack of access to sufficient bandwidth. Plenty of online ventures started in 2000 or earlier have succeeded – Penn State’s World Campus, UMUC, most large for-profits, UMassOnline, University of Central Florida, etc. This was not a case of being “too early”.

Read the University Business post-mortem and the Yale article. The issue involved economics and insufficient revenue to offset expenses.

Coursera and all the xMOOC providers face this same long-term challenge of finding adequate business models. I called out this challenge as one of the four key barriers that MOOCs faced in a July 2012 post. I speak as someone who would like to see MOOCs succeed – not in their current form, but in a form that evolves to better meet learner needs. This healthy evolution won’t happen, however, unless the providers honestly evaluate the lessons of the past.

[Figure: the four key barriers for MOOCs, from the July 2012 post]

Update (4/17): It appears that Levin is doubling down on his new theory about AllLearn. From the Yale Daily News today:

“I knew from the beginning that [online ed] had great potential,” Levin said.

In fact, Yale’s experiment with online education began under Levin’s presidency. In 2000, Yale launched AllLearn, a joint venture with Stanford and Oxford that faltered after four years due to insufficient technology at the time. The Internet bandwidth in most homes was inadequate for properly sharing course material, Levin admitted.

For the record, I can find no explanations from the time of AllLearn’s demise in 2006 that insufficient bandwidth was the problem. What we do have are statements, including from Levin himself, that insufficient demand leading to unsustainable revenue was the problem.

The post Coursera CEO Interview: Mike Caulfield nailed it two months ago appeared first on e-Literate.