
Michael Feldstein

What We Are Learning About Online Learning...Online

WCET, UPCEA & Sloan-C call on DOE to change State Authorization proposal

Mon, 2014-06-16 20:46

What does it take to get all of the higher education institutions and associations to agree? Apparently the answer is for the Department of Education to propose its new State Authorization regulations.

As part of the DOE’s negotiated rulemaking process over the past half year, representatives ranging from schools (Columbia University, Youngstown State University, Benedict College, Santa Barbara City College, Clemson University, MIT, Capella University) to higher ed associations (WCET) were unanimous in their rejection of the proposed State Authorization rules. As Russ Poulin wrote for WCET:

On Tuesday May 20, the Committee had its final vote on the proposed language. I voted “no.” I was joined in withholding consent by all the representatives of every higher education sector. Nine out of sixteen negotiators voting “no” is a high ratio.

Note that only one of the mentioned groups is a for-profit university – the sector of purported offenders causing the need for the regulations. I wrote a post arguing that the proposed rules represented a dramatic increase in control over distance education that would cause a significant increase in compliance and administrative overhead both for colleges and universities and for the states themselves.

In the end, predictably, the rulemaking process concluded in the lack of consensus that allows the DOE to propose whatever language it desires. The latest draft came from the DOE itself, so it would make sense for the final proposal to follow this language closely.

Here’s what is newsworthy – the idea and proposed language are so damaging to innovation in higher ed (which the DOE so fervently supports in theory) and so burdensome to institutions and state regulators that three higher ed associations have banded together to oppose the proposed rules. WCET (WICHE Cooperative for Educational Technologies), UPCEA (University Professional and Continuing Education Association) and Sloan-C (Sloan Consortium) wrote a letter to Secretary Arne Duncan calling for the DOE to reconsider its planned State Authorization regulations. As the intro states [emphasis added]:

The member institutions of our three organizations are leaders in the practice of providing quality postsecondary distance education to students throughout the nation and the world. Our organizations represent the vast majority of institutions that are passionate about distance education across the country and across all higher education sectors.

For the first time our organizations are joining with one voice to express our concern over the Department of Education’s “state authorization for distance education” proposal(1) that was recently rejected by most of the members of the Program Integrity and Improvement Negotiated Rulemaking Committee. Our comments are focused on the final draft proposal presented to the Committee. We believe the final draft represents the most current thinking of Department staff as they construct a regulation for public comment.

We are eager to promote policies and practices that protect consumers and improve the educational experience of the distance learner. Unfortunately, the final draft regulation would achieve neither of those goals.

The impact of the proposed regulations would be large-scale disruption, confusion, and higher costs for students in the short-term. In addition, there would be no long-term benefits for students. This letter briefly outlines our concerns and provides recommendations that achieve the Department’s goals without disrupting students enrolling in distance education programs across state lines.

As an example of the problems with the latest proposal:

Second, when pressed to define an “active review,” the Department provided a short list of criteria that states could use in the review, such as submitting a fiscal statement or a list of programs to be offered in the state. While it may sound simple to add a few review criteria, state regulators cannot act arbitrarily. Their authorization actions must be based on state laws and regulations. Therefore, state laws would need to be changed and the state regulators would need to add staff to conduct the necessary reviews. Our analysis estimates that 45 states would need to make these changes. This is a large amount of activity and added costs for what appears to be a “cursory” review. These reviews will likely not change a decision regarding an institution’s eligibility in a state. There is no benefit for the student.

The letter does not just list objections but also offers eight concrete recommendations that would help DOE achieve its stated goals.

Michael and I fully endorse this letter and also call on the DOE to rethink its position.

The full letter can be found at WCET’s site along with an explanatory blog post.


Starbucks Paying for Employees’ Tuition at ASU Online

Mon, 2014-06-16 10:08

This is a big deal:

Starbucks will provide a free online college education to thousands of its workers, without requiring that they remain with the company, through an unusual arrangement with Arizona State University, the company and the university will announce on Monday.

The program is open to any of the company’s 135,000 United States employees, provided they work at least 20 hours a week and have the grades and test scores to gain admission to Arizona State. For a barista with at least two years of college credit, the company will pay full tuition; for those with fewer credits it will pay part of the cost, but even for many of them, courses will be free, with government and university aid.

Over the past few decades, America has slowly but surely been transitioning from a system in which college education was treated as a public good (and therefore subsidized by taxpayers) to one in which it is treated as a private good (and therefore paid for entirely by students and their families). And while this benefit is no substitute for that earlier model, it is interesting and important that Starbucks is positioning college tuition the way companies position health insurance plans—as a benefit they use to compete for better workers.

This is not an entirely new idea. Many companies have tuition reimbursement, although it often comes with more restrictions and is typically aimed at white-collar workers. A while back, Wal-Mart made headlines by offering heavily subsidized (but not free) college credit in partnership with APU. Starbucks takes this to the next level. Since both Wal-Mart and Starbucks have reputations as union busters, it will be interesting to see how their respective college subsidization moves impact their struggles with their labor forces. Will tuition help them lower demand for unionization? Will it become another bargaining chip at the negotiating table?

I wrote a while back about the idea of reviving the apprenticeship for the digital age and gave an example of an Indian tech company that is doing it. I think we’re going to see a lot more variations on the theme of employer-funded education in the future.

You can learn more about the Starbucks college program at their website.


Why Google Classroom won’t affect institutional LMS market … yet

Sun, 2014-06-15 16:54

Yesterday I shared a post about the new Google Classroom details that are coming out via YouTube videos, and as part of that post I made the following statement [emphasis added]:

I am not one to look at Google’s moves as the end of the LMS or a complete shift in the market (at least in the short term), but I do think Classroom is significant and worth watching. I suspect this will have a bigger impact on individual faculty adoption in higher ed or as a secondary LMS than it will on official institutional adoption, at least for the next 2 – 3 years.

The early analysis is based on this video that shows some of the key features:

There is also a new video showing side-by-side instructor and student views that is worth watching.

Here’s why I believe that Classroom will not affect the LMS market for several years. Google Classroom is a slick tool that appeals to individual instructors whose schools use Google Apps for Education (GAE) – primarily K-12 instructors, but also higher ed faculty members. The tight integration of Google Drive, Google+ and GAE rosters allows for easy creation of course sites by the instructor, easy sharing of assignments and documents (particularly where the instructor creates the GDrive document and has students directly edit and add to that document), and easy feedback and grading of individual assignments. Working within the GAE framework, there are a lot of possibilities for individual instructors or instructional designers to expand the course tools. All of these features are faculty-friendly and support Google’s promise of “More time for teaching; more time for learning”.

But these features are targeted at innovators and early adopter instructors who are willing to fill in the gaps themselves.

  • The course creation, including setting up of rosters, is easy for an instructor to do manually, but it is manual. There has been no discussion that I can find showing that the system can automatically create a course, including roster, and update over the add / drop period.
  • There is no provision for multiple roles (student in one class, teacher in another) or for multiple teachers per class.
  • The integration with Google Drive, especially with Google Docs and Sheets, is quite intuitive. But there is no provision for PDF or MS Word docs or even publisher-provided courseware.
  • There does not appear to be a gradebook – just grading of individual assignments. There is a button to export grades, and I assume that you can combine all the exported grades into a custom Google Sheets spreadsheet or even pick a GAE gradebook app (see the sketch after this list). But there is no consistent gradebook available for all instructors within an institution to use and for students to see consistently.
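
To make concrete how manual that workaround would be, here is a rough sketch assuming each assignment exports as its own CSV with student and grade columns – the folder, file names, and column names are hypothetical, not Classroom’s actual export format:

```python
# Hypothetical workaround: combine per-assignment grade exports into one
# gradebook. Assumes each CSV has "student" and "grade" columns; Classroom's
# real export format may differ.
import pandas as pd
from pathlib import Path

columns = []
for csv_file in sorted(Path("exports").glob("assignment_*.csv")):
    col = pd.read_csv(csv_file).set_index("student")["grade"]
    col.name = csv_file.stem  # one gradebook column per assignment
    columns.append(col)

gradebook = pd.concat(columns, axis=1)  # aligns students across assignments
gradebook.to_csv("gradebook.csv")       # import into Google Sheets by hand
```

Every instructor would have to maintain something like this on their own, which is exactly the kind of gap that majority-adopter faculty will not fill themselves.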

For higher ed institutions in particular, we are just now getting to the stage where the majority of faculty use the institutional LMS. I am seeing more and more surveys of individual institutions where 70+ % of faculty use the LMS for most of their courses. What this means, however, is that we now have different categories of adopters for the institutional LMS – the early majority (characterized by a pragmatic approach) and the late majority (characterized by a conservative approach), as shown by the technology adoption curve. I am showing the version that Geoffrey Moore built on top of the Everett Rogers base model.

[Image: technology adoption life cycle diagram with chasm labels]

With adoption often above 50% of faculty, the institution has to serve both the group on the left (innovators and early adopters) and the larger group on the right (early and late majority more than laggards). As poorly designed as some of the institutional LMS solutions are, they typically allow automatic course and roster creation with updates, sharing of multiple document types, integrated standard gradebooks, and many other features.

Institutions can (and really should) allow innovators and early adopters to try out new solutions and help create course designs not bound by the standard LMS implied pedagogy, but institutions cannot ignore the majority of faculty who are typically unwilling to spend their own time to fill in the technology gaps – especially now that these faculty are just getting used to LMS usage.

None of this argues that Google Classroom is an inferior tool – it is just not designed to replace the full-featured LMS. Remember that Google is a technology-vision company that is comfortable putting out new tools before it understands how the tools will be used. Google is also comfortable playing the long game, getting more and more instructors and faculty using the tools, giving feedback, and pushing the new toolset forward. This process will take some time to play out – at least 2 or 3 years, in my opinion, before a full institutional LMS may be available, if Google likes the direction Classroom usage is going.

Google Classroom does attempt to address part of the instructor use cases, but it is not designed as a holistic product. Think of Classroom as ‘let’s see how to tie existing Google tools together to advance the ball in the general course site world’. It is still technology-first – and more specifically, Google-technology-first. The use cases are simple (e.g. one instructor sharing a GDrive-based assignment with students, who edit and then submit it for feedback), but there are many possibilities for clever faculty to innovate.

In the near term, Google Classroom will likely be a factor in individual faculty adoption (innovators, early adopters) at schools with GAE licenses, or even as a secondary LMS – but not as a replacement for the institutional LMS.


Google Classroom: Early videos of their closest attempt at an LMS

Sat, 2014-06-14 12:51

For years the ed tech community has speculated about Google entering the LMS market – through Wave (discontinued, but with some key features embedded in other tools), Apps for Education, and even, incorrectly, Pearson OpenClass. Each time there were some possibilities, but Google has not shown interest in fully replacing LMS functionality.

Google Classroom, announced in May and with new details coming out this week, is the closest that Google has come to fully providing an LMS. The focus is much more on K-12 than on higher education.

For background on the concept, see this video from May, which emphasizes roster integration (with some instructor setup required), document sharing through Google Drive, servers and security through the cloud, discussions tied to assignments, and a focus on ‘letting teachers and students teach or learn their way’.

Yesterday Google shared a new video that directly shows the user interface and some key features.

You can see more directly how to create a class, add students, create assignments, share resources (GDrive, YouTube, website), have students edit in Google Docs, track student submissions, and grade individual assignments.

What I don’t see (and admittedly this is based on short videos) is a full gradebook. The grades are tied to each assignment, but the videos do not show any collection of grades into a course-based gradebook.

[Image: Grades view]

Teachers with early access to Google Classroom are starting to upload videos with their reviews. Dan Leighton has one from a UK perspective that has more details, including his concern that the system has a ‘sage on the stage’ design assumption.

I am not one to look at Google’s moves as the end of the LMS or a complete shift in the market (at least in the short term), but I do think Classroom is significant and worth watching. I suspect this will have a bigger impact on individual faculty adoption in higher ed or as a secondary LMS than it will on official institutional adoption, at least for the next 2 – 3 years.


Unizin membership fee is separate from Canvas license fee

Thu, 2014-06-12 12:12

With Unizin going public yesterday, I’ve been looking over our three posts at e-Literate to see if there are any corrections or clarifications needed.

Based on yesterday’s press release, official web site launch, and media event, I have not found any corrections needed, but I do think a clarification is needed on whether Unizin membership includes Canvas usage (it does not).

Common Infrastructure

During the media call, the Unizin team (representatives from the four founding schools plus Internet2 and Instructure) pushed the common infrastructure angle. Rather than being viewed as a buying club where schools can buy from a common set of pre-negotiated services, Unizin can more accurately be viewed as the planned integration and operation of a common service to allow white-label outsourcing of the learning infrastructure. The idea is that a school can use Unizin’s infrastructure for the LMS, content repository and analytics engine rather than hosting or maintaining their own, but the service will not be branded as Unizin for the faculty and student end users. From the Unizin FAQ page:

There is another sense in which Unizin is different from MOOC efforts. Unizin is about providing common infrastructure to support the missions of its university members. It is not a public-facing brand. It will not offer content, courses, or degrees in its own name. Unizin’s membership model is built on the premise that universities need to strengthen their influence in the use of data and content for the long run. They will accomplish this goal by working together and taking greater control over content, while also opening content up for selective sharing.

In this morning’s IHE article, the Unizin founders described the need for a common infrastructure:

The digital learning consortium, announced Wednesday morning, aims to simplify how universities share learning analytics, content and software platforms. But in order to do so, Unizin needs its members to use the same infrastructure. A common learning management system is the first part of that package.

“You don’t really have common infrastructure if you’re saying everything is heterogeneous,” said Brad Wheeler, the Unizin co-founder who serves as vice president for IT and chief information officer at Indiana University. “A lot of these different learning tools — Sakai, Blackboard, Canvas — they all do a bunch of really good stuff. But five universities picking five different ones — what’s the end value in that if they want to do something together?”

Brad Wheeler went on to describe the results that will occur from sharing infrastructure:

“This is a hard decision,” Wheeler said about picking Canvas. “I think the key point is enabling Unizin to do what it’s meant to do…. The path for Unizin is creating a dependence on Unizin — a university-owned entity — but creating potential for greater interdependence among our institutions.”

Instead of differentiating themselves based on what software tools they individually pick, Wheeler said, Unizin’s member institutions will stand out based on what they do with the common platform — in other words, the degrees they offer, the research they produce and the students they serve.

Clarification on Canvas Licensing

It has not been clear at all whether Unizin membership gives a school access to use Canvas or whether membership gives a school the ability to purchase a Canvas license using the Unizin agreement. In other words, do Unizin member institutions pay $1 million for Unizin membership and also pay Instructure for Canvas, or do they just pay for Unizin membership? This question goes beyond Canvas – when a learning repository or analytics engine is in place, will schools have to pay additionally for that solution, or is it already included in membership?

From the Unizin FAQ:

Will Unizin be a Canvas reseller?
No. Members of Unizin will be able to obtain the Canvas LMS product for campus use via their membership. If members already have Canvas, they can maintain that existing relationship (either via a direct contract with Instructure or via Internet2 Net+ services) or take advantage of the Unizin agreement.

What does “will be able to obtain” mean? Brad Wheeler clarified the situation through an email exchange.

The investment capitalizes Unizin for the integration and operation of the service including a Content and Analytics capability. The Canvas costs are pass-through as they vary ‎by institutional size.

So schools pay for Unizin membership and they pay for Canvas. Presumably there will be a similar arrangement for future learning repository and analytics engines, if these components end up being commercially-developed software.

As additional clarification, and as an answer to our previous open questions, Unizin’s usage of Canvas will be on Instructure’s current multi-tenant production site, just as it is for current customers. Unizin presumably will have outsized influence on the product roadmap and open source code additions (through integrated apps or possibly to the Canvas code itself).

We have also heard that there is a support fee – some percentage of the license fee – for indirect support of the Canvas service. Treat this component as strong speculation rather than a confirmed detail.

In summary, this means that member institutions pay for the following for the current service:

  • Just over $1 million paid over 3 years (~$350k per year, according to the media event) for Unizin membership to create the organization that integrates and operates the service (LMS + Content + Analytics);
  • Canvas license fee paid one of three ways: 1) pass-through using Unizin agreement, 2) Net+ agreement from Internet2, or 3) existing private license between school and Instructure; and
  • Potential (not confirmed) support fee paid to Unizin consortium for their indirect support of Canvas as a service.


It’s Official: Unizin Is Real

Wed, 2014-06-11 10:24

 

A giant deity from the confines of space and time, Unizin has orbited the Earth like a comet, appearing once every twelve years of Christmas Eve. One man by the name of Dr. Kori wanted to study Unizin after seeing the monster himself when he was a boy. Even though he was called a con artist trying to capture Unizin he never gave up his search. On Christmas Eve day, he was working on his self made radar that would find Unizin when Elly came and was shocked by the device. After Dr. Kori repaired her the two made a trap to capture Unizin. That night, the dimensional kaiju made himself known as both DASH and Dr. Kori captures it in a mystical trap, but time around him began to disappear. Kaito turned into Ultraman Max and made a barrier to slow down the process. Not wanting the graceful giant to be hurt Dr. Kori let it go, restoring everything to normal. Unizin gave the doctor the branch of a tree he had not seen in some time, whether it was intended to be given to him or a reward for releasing him was unknown, but he was grateful nonetheless.

- The Ultraman Wiki

No, not that Unizin. This Unizin. The secret university consortium is no longer secret. Phil and I wrote a few posts about the consortium before the group went public.

So far, four of the ten universities we reported were considering joining have officially and publicly joined: Indiana University, University of Michigan, Colorado State University, and University of Florida.


Probably most important to read, in addition to IHE’s coverage and ours, is the “Why Unizin?” blog post on the Unizin website.

There was a press call this afternoon, so I expect we will be seeing more articles over the next few days. Of course, Phil and I will be providing some additional analysis as well. Stay tuned.


A response to new NCES report on distance education

Wed, 2014-06-11 08:30

By Phil Hill and Russ Poulin, cross-posted to WCET blog

Last week the National Center for Education Statistics (NCES) released a new report analyzing the new IPEDS data on distance education. The report, titled Enrollment in Distance Education Courses, by State: Fall 2012, is a welcome addition for those interested in analyzing and understanding the state of distance education (mostly in an online format) in US higher education.

The 2012 Fall Enrollment component of the Integrated Postsecondary Education Data System (IPEDS) survey collected data for the first time on enrollment in courses in which instructional content was delivered exclusively through distance education, defined in IPEDS as “education that uses one or more technologies to deliver instruction to students who are separated from the instructor and to support regular and substantive interaction between the students and the instructor synchronously or asynchronously.” These Web Tables provide a current profile of enrollment in distance education courses across states and in various types of institutions. They are intended to serve as a useful baseline for tracking future trends, particularly as certain states and institutions focus on MOOCs and other distance education initiatives from a policy perspective.

We have previously done our own analysis of the new IPEDS data at both the e-Literate and WCET blogs. While the new report is commendable for its improved access to the important dataset, we feel the missing analysis and potentially misleading introductory narrative take away from the value of this report.

Value of Report

The real value of this report in our opinion is the breakdown of IPEDS data by different variables such as state jurisdiction, control of institution, sector and student level. Most people are not going to go to the trouble of generating custom tables, so including such data in a simple PDF report will go a long way towards improving access to this important data. As an example of the data provided, consider this excerpt of table 3:

[Image: NCES Table 3 excerpt]

The value of the data tables and the improved access to this information are precisely why we are concerned about the introductory text of the report. These reports matter.

Need for Better Analysis and Context

We were hoping to see some highlights or observations in the report, but the authors decided to present the results as “Web Tables” without any interpretation. From one standpoint, this is commendable because NCES is playing an important role in providing the raw data for pundits like us to examine. It is also understandable that since this was the first IPEDS survey regarding distance education in many years, there truly was no baseline data for comparison. Even so, a few highlights of significant data points would have been helpful.

There also is a lack of caveats. The biggest one has to do with the state-by-state analyses. Enrollments follow where the institution is located and not where the student is located while taking the distance courses. Consider Arizona: the state has several institutions (Arizona State University, Grand Canyon University, Rio Salado College, and the University of Phoenix) with large numbers of enrollments in other states. Those enrollments are all counted in Arizona, so the state-by-state comparisons have specific meanings that might not be apparent without some context provided.

Even though there are no highlights, the first two paragraphs contain a (sometimes odd) collection of references to prior research. These citations raise the question of what the tables in this report have to say on the same points of analysis.

Postsecondary enrollment in distance education courses, particularly those offered online, has rapidly increased in recent years (Allen and Seaman 2013).

This description cites the long-running Babson Survey Research Group report by Allen and Seaman. Since the current IPEDS survey provides baseline data, there is no prior work on which to judge growth; therefore, this reference makes sense to include. It would have made sense, however, to provide some explanation of the key differences between the IPEDS and Babson data. For example, Phil described in e-Literate the fact that there is a major discrepancy in the number of students taking at least one online course – 7.1 million for Babson and 5.5 million for IPEDS. Jeff Seaman, one of the two Babson authors, is also quoted in e-Literate on his interpretation of the differences. The NCES report would have done well to at least refer to the significant differences.

Traditionally, distance education offerings and enrollment levels have varied across different types of institutions. For example, researchers have found that undergraduate enrollment in at least one distance education course is most common at public 2-year institutions, while undergraduate enrollment in online degree programs was most common among students attending for-profit institutions.

This reference indirectly cites a previous NCES survey that used a different methodology regarding students in 2007-08.

  • That survey found that enrollment in at least one distance education course was “most common” at public 2-year colleges and the new data reaffirms that finding.
  • Enrollment in fully distance programs was “most common” in students attending for-profit institutions and the new data reaffirms that finding. However, leaving the story there perpetuates the myth that “distance education” equals “for-profit education.” The new IPEDS data show (see Table 1 below from a WCET post by Russ) that 35% of students enrolled exclusively at a distance attend for-profit institutions and only 5% of those who enroll in some (not all) distance courses attend for-profits. People are often amazed at what a big portion of the distance education market is actually in the public sector.

[Image: WCET Table 1]

A 2003 study found that historically black colleges and universities (HBCUs) and tribal colleges and universities (TCUs) offered fewer distance education courses compared with other institutions, possibly due to their smaller average size (Government Accountability Office 2003)

What a difference a decade makes. Both types of institutions show few of their students enrolled completely at a distance, but they are now above the national average in terms of the percentage of students enrolled in some distance courses in Fall 2012.

Rapidly changing developments, including recent institutional and policy focus on massive open online courses (MOOCs) and other distance education innovations, have changed distance education offerings.

Only a small number of MOOCs offer instruction that would be included in this survey. We’re just hoping that the uninformed will not think that the hyperbolic MOOC numbers have been counted in this report. They have not.

Upcoming Findings on Missing IPEDS Data

We are doing some additional research, but it is worth noting that we have found some significant cases of undercounting in the IPEDS data. In short, there has been confusion over which students get counted in IPEDS reporting and which do not. We suspect that the undercounting, which is independent of distance education status, is in the hundreds of thousands. We will describe these findings in an upcoming article.

In summary, the new NCES report is most welcome, but we hope readers do not make incorrect assumptions based on the introductory text of the report.


Learner-Centered Analytics: Example from UW La Crosse MOOC research

Sun, 2014-06-08 10:15

Last week I wrote a post What Harvard and MIT could learn from the University of Phoenix about analytics. As a recap, my argument was:

Beyond data aggregated over the entire course, the Harvard and MIT edX data provides no insight into learner patterns of behavior over time. Did the discussion forum posts increase or decrease over time, did video access change over time, etc? We don’t know. There is some insight we could obtain by looking at the last transaction event and number of chapters accessed, but the insight would be limited. But learner patterns of behavior can provide real insights, and it is here where the University of Phoenix (UoP) could teach Harvard and MIT some lessons on analytics.

Beyond the University of Phoenix, there are other examples of learner-centered analytics exploring usage patterns over time. While I was at a summit at the University of Wisconsin at La Crosse last week, Bob Hoar showed me some early results of their “UW-System College Readiness Math MOOC” research, which is part of the MOOC Research Initiative. I interviewed Bob Hoar and Natalie Solverson as part of e-Literate TV, where they described their research project.

The results to date focus on capturing and visualizing the student patterns, and progress can be tracked at this project site (go to the site to see interactive graphics).

The Desire2Learn learning management system recorded a log containing over 1.2 million ‘events’ that occurred during the first few months of the MOOC. Each event corresponds to the action of a particular student. Each action, as well as a timestamp, was recorded. The image below contains a graphical representation of the elements of the course. The items on the horizontal axis (y=0) represent information about the course (syllabus, FAQ, how to contact instructors, etc.). The other items in the chart relate to the mathematics content; the (WebWork) homework, the quizzes, the online office hours and the live tutoring and lectures for the 9 math modules in the course. Hover over each bubble to see a short description of the item.

[Image: course elements]

The idea is that each element represents a learning object within a module. The enforced structure of the course was that students have to complete the quiz for each module before moving to the next module, but once there, students can choose the order and timing of each learning object interaction and even when to go back to review previous modules. The next visualization tracks one student (anonymized data) over the course duration, with color coding for the type of element, the vertical axis capturing the specific module, and the horizontal axis capturing items visited in time order. To make the Google motion chart work, the research team used counts of items visited and mapped them artificially to years starting in 1900 (there are ~300 items visited over the course).
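
As a rough illustration of that mapping trick (my own sketch with hypothetical column names, not the research team’s code), each item visit is numbered in time order and the visit number becomes an artificial year starting at 1900, since Google motion charts animate over time values:

```python
# Sketch: turn an ordered sequence of item visits into artificial "years"
# starting at 1900 so a Google motion chart can animate the sequence.
import pandas as pd

visits = pd.DataFrame({
    "item":   ["syllabus", "module1_video", "module1_quiz", "module2_homework"],
    "module": [0, 1, 1, 2],
})  # rows are already in time order

visits["visit_order"] = range(1, len(visits) + 1)
visits["fake_year"] = 1899 + visits["visit_order"]  # visit 1 -> 1900

print(visits[["item", "module", "fake_year"]])
```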

[Image: course activity]

This view shows some interesting patterns, which are described on the project site (they describe a bubble motion chart, but I find the visualization above more informative).

The motion map quickly illustrates that the student visited nearly every course element, and, after completing the first few modules, they did not need to return to the course information module. This indicates that the student quickly understood the design of the course. In addition, the video indicates that the student occasionally jumped back to earlier material in the course. Such movements may indicate that the learning materials in the location of the jump may need to be reviewed.

Now that we’re looking at student patterns over time, the analytics are much more meaningful from a learning perspective than they would be with just course completion rates or A/B testing results. Learning is a process that cannot be reduced to independent events. With many online courses, students now can create their own learning pathways. In the example above, notice how the student frequently reviewed modules 0, 2 and 6. This information could be used to study how students learn and how to improve course designs. Research teams would do well to put more focus on learner patterns, and MOOC platforms would do well to make this research easier.

The UW La Crosse research team has not finished their analysis, but the early results show a much richer approach to analytics than focusing on single person-course measurements or aggregated analysis of a single event.


Three Makes a Movement: Branson creates youth panel for student voice in ed tech

Sat, 2014-06-07 15:18

Based on my involvement in the Evolve conference sponsored by the 20 Million Minds Foundation, held in January, I wrote a series of posts covering the discussions around online education and educational technology.

During the conference I put out a call for other conferences to follow 20MM’s lead and work harder to directly include students in their discussions of ed tech – full post here and video below:

Before we get to the analyses, however, it is important to highlight once again how unique this format is in education or ed tech settings. There is plenty of discussion about needing course design and support services that are learner-centric, yet typically ed tech conferences don’t have learner-centric discussions. We need to stop just talking about students and add the element of talking with students.

While I do not believe there is a direct connection, this week Sir Richard Branson created a youth panel as part of the UK’s Generation Tech review, giving students a direct voice in educational technology. The panel’s focus is K-12 usage and is described in The Telegraph:

Young people will be given the chance to voice their ideas about how technology can support learning in the UK, thanks to a new council being created as part of the ‘Generation Tech’ review.

The new Digital Youth Council, a panel of students aged between 13 and 17, will share their experiences with technology and discuss ways in which education technology can be improved in a classroom setting. [snip]

The council is being created as part of a wider review, launched at the end of April and led by Sir Richard Branson, looking at what impact technology is having in schools and what the future holds for teachers and pupils alike.

As children become increasingly confident using new technology, schools have often struggled to keep up – however, many classrooms are now equipped with tablets, interactive white boards and online learning platforms which allow teachers to more effectively monitor pupils’ learning.

The wider Generation Tech review is set to analyse how these new technologies are impacting education.

This is welcome news, and I hope these two efforts, along with WCET’s commitment for a student panel in their fall conference, mark the start of a movement. Who else will join? Are there other examples people can share in the comments?


eCampus News Advisory Board and Gophers

Thu, 2014-06-05 20:25

I have recently accepted an eCampus News offer to be part of their new advisory board. The idea is for me and the 10 other members to help their editors get a better handle on the industry while also providing useful information to readers through opinion, advice or commentary. The other 10 members of the advisory board:

  • Brian Lukoff, Program Director for Learning Catalytics at Pearson Education
  • Crystal Sands, Director of the Online Writing Lab at Excelsior College
  • Connor Gray, Chief Strategy Officer at Campus Management
  • David J. Hinson, Executive Vice President & Chief Information Officer of Hendrix College
  • Joanna Young, Chief Information Officer and AVP for Finance & Budget at the University of New Hampshire
  • John Orlando, Northcentral University Associate Director of Faculty Training in the Center for Faculty Excellence
  • Mark Baker, Assistant Registrar at Whitworth University
  • Paige Francis, Chief Information Officer for Fairfield University
  • Roxann Riskin, Technology Specialist/Technology Student Assistant Service Supervisor at Fairfield University
  • Salwa Ismail, Head of the Department of Library Information Technology at the Georgetown University Library

There is an article in eCampus News introducing the advisory committee, including bios, thoughts on trends and game-changers, and some personal thoughts. I’ve included my thoughts below (couldn’t help myself on the quote). Judging by others’ responses, this is an eclectic group with quite a broad array of interests, and I’m looking forward to this new role.

The game-changer: Despite the hype of adaptive learning as an automated, black-box, magic-bullet solution, the broader field of personalized learning is likely to be a game changer in higher ed. For the first generation of online learning, the tendency was to replicate the factory model of education (one size fits all) but just do it online. For the second generation, the ability to use online technologies to create multiple pathways for students and to personalize learning will be a strength that can even go beyond face-to-face methods (for any classes larger than 10 to 15 students). We’re already starting to see some real improvements in remedial coursework based on students’ use of personalized learning tools, but this has been in pilot programs to date. As this usage spreads over time, personalized learning, including adaptive data-driven systems, will present real change to our educational system.

Passion: Transparency in education. Like Laura Gibbs, I believe in the open syllabus concept where students should be able to see what is in a course without having to enroll; while ed-tech vendors and open source providers can be very supportive of education, we should have an open view of how well the products and companies are doing; when schools adopt strategic technology initiatives, the process should be open and inclusive; schools should have their results (including academic performance of students) open for others to view. I realize there are risks involved, such as the over-simplification of college scorecards, but the general need for transparency is one that I firmly support.

Hobby: Traveling with family and experiencing local cultures. Whether that is simply a different town or region of California, or different locations internationally, my wife and I enjoy seeing new places and trying to embed ourselves with locals.

Quote/Belief: “I have to laugh, because I’ve out-finessed myself. My foe, my enemy, is an animal. And in order to conquer an animal, I have to think like an animal, and—whenever possible—to look like one. I’ve gotta’ get inside this guy’s pelt and crawl around for a few days.” – C Spackler

Update: In what could be one of my biggest professional mistakes ever, I listed groundhogs instead of gophers in reference to the Carl Spackler quote (confusing Bill Murray movies). You cannot imagine my self-disappointment at this point. Mea culpa.


No, I don’t believe that Harvard or MIT are hiding edX data

Tue, 2014-06-03 12:58

Since my Sunday post What Harvard and MIT could learn from the University of Phoenix about analytics, there have been a few comments with a common theme about Harvard and MIT perhaps withholding any learner-centered analytics data. As a recap, my argument was:

Beyond data aggregated over the entire course, the Harvard and MIT edX data provides no insight into learner patterns of behavior over time. Did the discussion forum posts increase or decrease over time, did video access change over time, etc? We don’t know. There is some insight we could obtain by looking at the last transaction event and number of chapters accessed, but the insight would be limited. But learner patterns of behavior can provide real insights, and it is here where the University of Phoenix (UoP) could teach Harvard and MIT some lessons on analytics.

Some of the comments that are worth addressing:

“Non-aggregated microdata (or a “person-click” dataset, see http://blogs.edweek.org/edweek/edtechresearcher/2013/06/the_person-click_dataset.html ) are much harder (impossible?) to de-identify. So you are being unfair in comparing this public release of data with internal data analytic efforts.”

“Agreed. The part I don’t understand is how they still don’t realize how useless this all is. Unless they are collecting better data, but just not sharing it openly, hogging it to themselves until it ‘looks good enough for marketing’ or something.”

“The edX initiative likely has event-level data to analyze. I don’t blame them for not wanting to share that with the world for free though. That would be a very valuable dataset.”

The common theme seems to be that there must be learner-centered data over time, but Harvard and MIT chose not to release this data either due to privacy or selfish reasons. This is a valid question to raise, but I see no evidence to back up these suppositions.

Granted, I am arguing without definitive proof, but this is a blog post, after all. I base my argument on two points – there is no evidence of HarvardX or MITx pursuing learner-centered long-running data, and I believe there is great difficulty getting non-event or non-aggregate data out of edX, at least in current forms.

Update: See comments starting here from Justin Reich from HarvardX. My reading is that he agrees that Harvard is not pursuing learner-centered long-running data analysis (yet, and he cannot speak for Stanford or MIT), but that he disagrees about the edX data collection and extraction. This does not capture all of his clarifications, so read comments for more.

Evidence of Research

Before presenting my argument, I’d again like to point out the usefulness of the HarvardX / MITx approach to open data as well as the very useful interactive graphics. Kudos to the research teams.

The best places to see what Harvard and MIT are doing with their edX data are the very useful sites HarvardX Data & Research and MITx Working Papers. The best-known research, released as a summary report (much easier to present than a released de-identified open dataset), is also based on data aggregated over a course, such as this graphic:

[Image: World Map of Enrollment, HarvardX]

Even more useful is the presentation HarvardX Research 2013-2014 Looking Forward, Looking Back. In it, there is a useful summary of the types of research HarvardX is pursuing.

[Image: Four kinds of MOOC research]

None of these approaches (topic modeling, pre-course surveys, interviews, or A/B testing) look at learners’ activities over time. They are all based either on specific events with many interactions (a discussion forum on a particular topic with thousands of entries, a video with many views, etc.) or on subjective analysis of an entire course. Useful data, but not based on a learner’s ongoing activities.

I’d be happy to be proven wrong, but I see no evidence of the teams currently analyzing or planning to analyze such learner data over time. The research team does get the concept (see the article on person-click data):

We now have the opportunity to log everything that students do in online spaces: to record their contributions, their pathways, their timing, and so forth. Essentially, we are sampling each student’s behavior at each instant, or at least at each instant that a student logs an action with the server (and to be sure, many of the things we care most about happen between clicks rather than during them).

Thus, we need a specialized form of the person-period dataset: the person-click dataset, where each row in the dataset records a student’s action in each given instant, probably tracked to the second or tenth of a second. (I had started referring to this as the person-period(instantaneous) dataset, but person-click is much better). Despite the volume of data, the fundamental structure is very simple. [snip]

What the “person-period” dataset will become is just a roll-up of person-click data. For many research questions, you don’t need to know what everyone did every second, you just need to know what they do every hour, day or week. So many person-period datasets will just be “roll-ups” of person-click datasets, where you run through big person-click datasets and sum up how many videos a person watched, pages viewed, posts added, questions answered, etc. Each row will represent a defined time period, like a day. The larger your “period,” the smaller your dataset.

All of these datasets use the “person” as the unit of analysis. One can also create datasets where learning objects are the unit of analysis, as I have done with wikis and Mako Hill and Andres Monroy-Hernandez have done with Scratch projects. These can be referred to as project-level and project-period datasets, or object-level and object-period datasets.
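
To make the quoted roll-up idea concrete, here is a minimal sketch (my own illustration with hypothetical column names, not HarvardX’s pipeline) that collapses a person-click dataset into a person-period dataset with one row per learner per day:

```python
# Sketch: roll a person-click dataset up into a person-period dataset.
import pandas as pd

clicks = pd.DataFrame({
    "user_id": ["u1", "u1", "u1", "u2"],
    "timestamp": pd.to_datetime([
        "2013-03-01 10:00:05", "2013-03-01 10:02:11",
        "2013-03-02 09:15:00", "2013-03-01 12:00:00"]),
    "event_type": ["play_video", "forum_post", "play_video", "problem_check"],
})

# One row per (user, day), with a count column per event type.
person_period = (
    clicks
    .assign(day=clicks["timestamp"].dt.date)
    .groupby(["user_id", "day", "event_type"])
    .size()
    .unstack(fill_value=0)
    .reset_index()
)
print(person_period)
```

The point of the structure is that each row preserves who did what and when – which is exactly what the released person-course data cannot support.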

The problem is not with the research team, the problem is with the data available. Note how the article above is referencing future systems and future capabilities. And also notice that none of this “person period” research is referenced in current HarvardX plans.

edX Data Structure

My gut feel (somewhat backed up by discussions with researchers I trust) is that the underlying data model is the issue, as I called out in my Sunday post.

In edX, by contrast, the data appears to be organized as a series of log files oriented around server usage. Such an organization allows aggregate data usage over a course, but it makes it extremely difficult to actually follow a student over time and glean any meaningful information.

If this assumption is correct, then the easiest approach to data analysis would be to look at server logs for specific events, pull out the volume of user data on that specific event, and see what you can learn; or, write big scripts to pull out aggregated data over the entire course. This is exactly what the current research seems to do.
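
Here is a rough sketch of that path of least resistance, assuming a JSON-lines server log with course and event-type fields (the file name and field names are my assumptions, not edX’s actual schema):

```python
# Sketch: per-course aggregate counts fall out of a server log in one pass;
# reconstructing any single learner's sequence over time does not.
import json
from collections import Counter

course_totals = Counter()
with open("tracking.log") as f:  # hypothetical JSON-lines event log
    for line in f:
        event = json.loads(line)
        course_totals[(event["course_id"], event["event_type"])] += 1

for (course_id, event_type), count in sorted(course_totals.items()):
    print(course_id, event_type, count)
```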

Learner-Centered Data Analysis Over Time

It is possible to look at data over time, as was shown by two Stanford-related studies. The study Deconstructing Disengagement: Analyzing Learner Subpopulations in Massive Open Online Courses followed specific learners over time and looked for patterns.

[Image: Stanford report]

Mike Caulfield, Amy Collier and Sherif Halawa wrote an article for EDUCAUSE Review titled Rethinking Online Community in MOOCs Used for Blended Learning that explored learner data over time.

[Image: ERO Study]

In both cases, the core focus was learner activity over time. I believe this focus is a necessary part of any learning analytics research program that seeks to improve teaching and learning.

What is interesting in the EDUCAUSE article is that the authors used Stanford’s Class2Go platform, which is now part of OpenEdX. Does this mean that such data analysis is possible with edX, or does it mean that it was with Class2Go but not with the current platform? I’m not sure (comments welcome).

I would love to hear from Justin Reich, Andrew Ho or any of the other researchers involved at HarvardX or MITx. Any insight, including corrections, would be valuable.


What Harvard and MIT could learn from the University of Phoenix about analytics

Sun, 2014-06-01 17:42

Last week Harvard and MIT released de-identified data from their edX-based MOOCs. Rather than just produce a summary report, the intent of this release was to open up the data and share it publicly. While it is good to see this approach to Open Data, unfortunately the data set is of limited value, and it actually illustrates a key problem with analytics in higher ed. From MIT News description:

A research team from Harvard University and MIT has released its third and final promised deliverable — the de-identified learning data — relating to an initial study of online learning based on each institution’s first-year courses on the edX platform.

Specifically, the dataset contains the original learning data from the 16 HarvardX and MITx courses offered in 2012-13 that formed the basis of the first HarvardX and MITx working papers (released in January) and underpin a suite of powerful open-source interactive visualization tools (released in February).

At first I was eager to explore the data, but I am not sure how much useful insight is possible due to how the data was collected. The data is structured with one student per row for each course they took (taking multiple courses would lead to multiple rows of data). The data columns (pulled from the Person Course Documentation file) are shown below, with a short loading sketch after the list:

  • course_id: ID for the course
  • userid_DI: de-identified unique identifier of student
  • registered: 0/1 with 1 = registered for this course
  • viewed: 0/1 with 1 = anyone who accessed the ‘courseware’ tab
  • explored: 0/1 with 1 = anyone who accessed at least half of the chapters in the courseware
  • certified: 0/1 with 1 = anyone who earned a certificate
  • final_cc_name_DI: de-identified geographic information
  • LoE: user-provided highest level of education completed
  • YoB: year of birth
  • gender: self-explanatory
  • grade: final grade in course
  • start_time_DI: date of course registration
  • last_event_DI: date of last interaction with course
  • nevents: number of interactions with the course
  • ndays_act: number of unique days student interacted with course
  • nplay_video: number of play video events
  • nchapters: number of courseware chapters with which the student interacted
  • nforum_posts: number of posts to the discussion forum
  • roles: identifies staff and instructors
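
For a sense of what this schema supports, here is a minimal sketch (hypothetical file name; column names from the documentation list above) of loading the release and computing the kind of whole-course aggregates it allows – and nothing finer-grained:

```python
# Sketch: the person-course schema supports whole-course aggregates like
# these, but no per-week or per-event breakdown of learner behavior.
import pandas as pd

df = pd.read_csv("person_course.csv")  # hypothetical file name

summary = df.groupby("course_id").agg(
    registrants=("registered", "sum"),
    viewers=("viewed", "sum"),
    explorers=("explored", "sum"),
    certified=("certified", "sum"),
    median_events=("nevents", "median"),
)
print(summary)
```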

The problem is that this data only reveals very shallow usage patterns aggregated over the entire course – did they look at the courseware, how many video views, how many forum posts, what final grade, etc. I have described several times how open courses such as MOOCs have different student patterns, since not all students have the same goals for taking the course.

[Image: student patterns in MOOCs]

The Harvard and MIT data ignores student goals and any information giving a clue as to whether students desired to complete the course, get a good grade, get a certificate, or just sample some material. Without this information on student goals, the actual aggregate behavior is missing context. We don’t know if a certain student intended to just audit a course, sample it, or attempt to complete it. We don’t know if students started the course intending to complete it but became frustrated and dropped down to just auditing, or even dropped out.

Beyond data aggregated over the entire course, the Harvard and MIT edX data provides no insight into learner patterns of behavior over time. Did the discussion forum posts increase or decrease over time, did video access change over time, etc? We don’t know. There is some insight we could obtain by looking at the last transaction event and number of chapters accessed, but the insight would be limited. But learner patterns of behavior can provide real insights, and it is here where the University of Phoenix (UoP) could teach Harvard and MIT some lessons on analytics.

Also last week, the Apollo Group (parent of UoP) CIO Mike Sajor gave an interview to Campus Technology, and he discussed their new learning platform (also see my previous post on the subject). In one segment Sajor explained how the analytics are being used.

Sajor: Another aspect: We leverage the platform to collect a vast amount of data about students as they traverse their learning journey. We know what they’re doing, when they’re doing it, how long it takes, anything they do along the journey that might not have been the right choice. We collect that data … and use it to create some set of information about student behaviors. We generate insight; and insight tells us an interesting fact about a student or even a cohort of students. Then we use that insight to create an intervention that will change the probability of the student outcome.

CT: Give an example of how that might work.

Sajor: You’re a student and you’re going along and submitting assignments, doing reading, doing all those things one would normally do in the course of a class. Assignments are generally due in your class Sunday night. In the first few weeks you turn your assignments in on Friday. And suddenly, you turn in an assignment on Saturday evening, and the next week you turn one in mid-day Sunday. Well, we’re going to notice that in our analytics. We’ll pick that up and say, “Wait a second. Sally Student now has perturbation in her behavior. She was exhibiting a behavioral pattern over time since she started as a student. Now her pattern has shifted.” That becomes an insight. What we do at that point is flag the faculty member or an academic adviser or enrollment adviser to contact Sally using her preferred mode — e-mail, phone call. And we’ll ask, “Hey Sally, we noticed you’re turning in your assignments a little bit later than you normally did. Is there anything we can do to help you?” You’d be amazed at the answers we get, like, “My childcare on Thursday and Friday night fell apart.” That gives us an opportunity to intervene. We can say, “You’re in Spokane. We know some childcare providers. We can’t recommend anybody; but we can give you a list that might help you.”
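
As a toy sketch of the kind of perturbation detection Sajor describes (entirely my own illustration – the data, names, and threshold are made up, not UoP’s system), flag a student whose submission day drifts later than her established baseline:

```python
# Toy sketch: flag students whose assignment submissions drift later in the
# week than their established baseline. All data and thresholds are made up.
import pandas as pd

subs = pd.DataFrame({
    "student": ["sally"] * 6,
    "week": [1, 2, 3, 4, 5, 6],
    "submit_day": [4, 4, 4, 5, 6, 6],  # 4=Fri, 5=Sat, 6=Sun (due Sunday night)
})

for student, g in subs.sort_values("week").groupby("student"):
    baseline = g["submit_day"].iloc[:3].mean()  # first weeks set the norm
    recent = g["submit_day"].iloc[-2:].mean()   # latest behavior
    if recent - baseline >= 1.5:                # arbitrary drift threshold
        print(f"flag {student}: submissions drifting later "
              f"(baseline day {baseline:.1f} -> recent day {recent:.1f})")
```

The arithmetic is trivial; the point is the data shape. None of this is possible unless the underlying records are organized around the learner over time.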

UoP recognizes the value of learner behavior patterns, which can only be learned by viewing data patterns over time. The student’s behavior in a course is a long-running transaction, with data sets organized around the learner.

In edX, by contrast, the data appears to be organized as a series of log files oriented around server usage. Such an organization supports aggregate usage reporting over a course, but it makes it extremely difficult to actually follow a student over time and glean any meaningful information.
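
As a rough illustration of the difference, here is a sketch that pivots flat, server-oriented log rows into per-learner timelines. The column names are hypothetical, and the released data is aggregated over the whole course rather than event-level, so treat this as what one would want to be able to do with the data, not what the current release supports.

```python
# Hypothetical sketch: reorganize flat event-log rows into per-learner
# timelines. The column names are illustrative, not the released edX schema.
import csv
from collections import defaultdict

def learner_timelines(log_path):
    timelines = defaultdict(list)
    with open(log_path, newline="") as f:
        for row in csv.DictReader(f):  # columns: user_id, timestamp, event_type
            timelines[row["user_id"]].append((row["timestamp"], row["event_type"]))
    for events in timelines.values():
        events.sort()  # ISO-8601 timestamps sort chronologically as strings
    return timelines  # each learner's "long-running transaction"
```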

The MIT News article called out why this richer data set is so important:

Harvard’s Andrew Ho, Chuang’s co-lead, adds that the release of the data fulfills an intention — namely, to share best practices to improve teaching and learning both on campus and online — that was made with the launch of edX by Harvard and MIT in May 2012.

If you want to “share best practices to improve teaching and learning”, then you need data organized around the learner, with transactions captured over time – not just in aggregate. What we have now is an honest start, but a very limited data set.

I certainly wouldn’t advocate Harvard and MIT becoming the University of Phoenix, but in terms of useful learner analytics, they could learn quite a bit. I applaud Harvard and MIT for their openness, but I hope they develop better approaches to analytics and learn from others.

Note: The Harvard and MIT edX data set is de-identified to fit within FERPA requirements, but after reading their description of the process, it does not appear that the learner patterns were removed due to privacy concerns.

Update: Based on private feedback, I should clarify that I have not validated that the UoP analytics claims actually work in practice. I am giving them credit for at least understanding the importance of learner-centered, behavior-based data to improve teaching and learning, but I do not know what has been fully implemented. If I find out more, I’ll share in a separate post.

On this point, there is a reverse angle: regarding open data and the ability of outsiders to see real results, the University of Phoenix could learn something from Harvard and MIT about analytics.

The post What Harvard and MIT could learn from the University of Phoenix about analytics appeared first on e-Literate.

Unizin: What are the primary risks?

Thu, 2014-05-29 15:50

In Michael’s most recent post on Unizin, the new “learning ecosystem” initiative driven by Indiana University, he asked who would be threatened by the proposed consortium (his answer: edX). That question assumes, of course, that Unizin largely succeeds, but what are the primary risks to the initiative succeeding in the first place? Based on the public information available to date (primarily in the two posts linked above), I see two near-term risks and one long-term risk that rise above the others.

Near-Term Risk: Getting Schools to Sign Up

The obvious question is whether there are enough schools willing to commit $1 million and adopt the proposed platforms to get the consortium off the ground. Based on the Colorado State University recording, it appears that the goal is to get 9 – 10 schools to commit $9 – $10 million in the initial phase. Beyond Indiana University, the most likely school to commit is the University of Michigan. Its leadership (the dean of libraries and the CIO) is fully behind the initiative, and press reports indicate they are seeking final approval. I cannot find evidence that any other school has reached this point, however.

Slide from CSU Presentation

There are active debates in the Committee on Institutional Cooperation (CIC), primarily between provosts and CIOs, about Unizin and whether this approach works for member institutions. The provosts in fact already put out a position paper generally endorsing the same concept.

While new and cost effective technological capabilities make certain changes in higher education possible, it does not necessarily follow that such changes are desirable, or would be endorsed or utilized by our existing students, faculty, or community members. Nor does it mean that we fully grasp the costs and business models that might surround new strategies for broadly disseminating course content. University leaders committed to addressing the new opportunities in higher education need to recognize that the primary basis for motivating and inspiring faculty to engage these opportunities will not be the technologies themselves, but rather, the fundamental academic values and pedagogical principles that need to be infused in these emerging instructional technologies. For these reasons, we believe that the chief academic officers of our CIC member universities are in the best position—individually and collectively—to be leading these efforts.

Putting out a position paper is not the same as getting buy-in from a campus or contributing real money, and I suspect that most of the potential campuses will need some form of this discussion before signing up.

Near-Term Risk: Secretive Decision Process

On the subject of campus buy-in, the secretive process being pursued by Unizin and prospective schools is itself a significant risk, especially in the post-MOOC-hype environment. Institutions are considering this major investment and commitment in a deliberately opaque process. Provosts, CIOs, and occasionally faculty groups are being briefed, but almost all documentation is being kept hidden. During the Colorado State University meeting, one faculty member asked about this process:

At the recorded CSU meeting, one of the presenters—it’s impossible to tell which is the speaker from the recording we have—acknowledges that the meetings were largely conducted in secret when challenged by a faculty member on the lack of faculty involvement. He cited sensitive negotiations among the ten universities and Instructure as the reason.

These same questions are being raised about the decision processes behind many of the MOOC adoptions. Consider the University of Texas, which committed $5 million to their involvement in edX. The Daily Texan has publicly started a debate on that campus about the motivation and benefits of that decision.

The MOOCs were, apparently, designed without revenue in mind, though the System invested $10 million to both develop the MOOCs and to host the courses on edX, an online platform created by Harvard and MIT. [snip]

Of course, the System has made large and unproven investments in online education platforms before — MyEdu rings a bell. The Tribune recently reported that the System will see no financial return on its $10 million investment in MyEdu, which was ultimately sold to Blackboard. Again, there was no long-term financial plan in mind, but there was a lot of money on the table.

The System should stop investing millions of dollars on gambles like these, which lack financial exit strategies and viable forms of revenue. If the founding structure of a project doesn’t include a business model for growth and profitability for the University, who is expected to fund it?

Now UT is considering another seven-figure investment in a very closed process. If UT joins Unizin, it could face pushback from faculty over the decision, partially reaping what edX sowed.

Faculty groups nationwide are concerned about administrative decision-making that directly impacts academics without directly and transparently involving broad faculty input. Unizin involves not only an LMS adoption but also a learning content repository and a learning analytics platform. This gets to the difficult questions of how and whether to share learning content, as well as how to measure learning outcomes. Faculty will care.

And there is a hint of a typical university conflict embedded at the end of the CIC provosts’ position paper quote: “we believe that the chief academic officers of our CIC member universities are in the best position … to be leading these efforts”, perhaps with the unwritten phrase “as opposed to CIOs”.

It used to be that CIOs and their organizations would make most technology platform decisions, and quite often it was hard to get the provost’s office to participate. As can be seen in this statement, we now have situations where provosts and their offices want to be the driving force even for platform decisions. The better approach is a collaborative one, where provosts and CIOs work together, generally with provosts taking a more active role in defining needs and problems and CIOs taking a more active role in defining solutions.

In the Unizin content repository case, the more natural sequence would be for the provosts first to help define what learning content should be shared – learning objects, courseware, courses, textbooks – and under what conditions. After defining goals, it would be appropriate to describe how a software platform would facilitate this content sharing, with CIOs taking a more active role in determining whether certain scenarios are feasible and which platforms are the best fit. Throughout the process, faculty would ideally have the opportunity to give input on needs, to give feedback on proposed solutions, and to have visibility into the decision process.

Whether this type of open, collaborative decision process is happening behind closed doors is not known, but the apparent need to keep the process quiet raises the risk of pushback on the consortium decision.

Long-Term Risk: Development of Content Repository and Learning Analytics

Even if Unizin succeeds in getting 9 – 10 schools to fund and start the consortium, and even if they successfully manage the faculty buy-in aspects, there is a longer-term risk in making the “learning ecosystem” a reality. Currently the three primary components are very uneven. The LMS is a no-brainer, as Canvas already exists and has broad acceptance as the most frequently chosen LMS in recent evaluations and new adoptions. The two other components are very different and might not be well-suited to a community-source development model.

Unizin Diagram of Services

The ed tech road is littered with unsuccessful and disappointing content repositories. The concept of making it easy to share learning content outside of a specific program has long looked beautiful in white papers and conference briefings, but the reality of actual adoption and usage is quite different. Whether the challenge is product design, product completion, or just plain faculty adoption, there are no indications that there is a demand for broad-based sharing of academic content. In essence, the product category is unproven, and it is not clear that we even know what to build in the first place.

Community source has proven its ability to develop viable solutions for known product categories, generally by building on existing solutions: consider Sakai as an LMS (heavily based on the University of Michigan’s CHEF implementation and to a lesser degree on Indiana University’s OnCourse), the Kuali Financial System (based directly on IU’s financial system), and Kuali Coeus (based on MIT’s research administration system). Without a pre-existing solution to build on, the results are less promising. Kuali Student, based on a known product category but designed from the ground up, is currently on track to take almost eight years from concept to full functionality. Looking further, are there any examples where a new product in an ill-defined product category has been successfully developed in a community-source model?

Learning analytics is similar to content repositories in the sense that the concept looks much better in a whitepaper than it does in reality. I remember in the late 2000s when the LMS user conferences came across as ‘we’re learning outcomes companies that happen to have an LMS also’. Remember Blackboard Outcomes System – its “most significant product offering”?

The difference between learning analytics and content repositories, however, is that there are much stronger examples of real adoption on the analytics side. Purdue has successfully implemented Course Signals and has succeeded in improving course retention (despite open questions about whether inter-course retention has improved). Blackboard Analytics (based on the iStrategy acquisition) has been implemented with real results at a growing number of schools.

More significant, perhaps, is the work done by the Predictive Analytics Reporting (PAR) Framework, which just today announced that it is becoming a separate organization spun off from WICHE. The Unizin slides explicitly reference PAR, and some of the analytics language closely mirrors PAR descriptions. The reason this is significant is that the PAR Framework goes a long way towards helping to define the product needs.

The question for analytics, therefore, is less on the product category and more on the ability of Unizin to deliver actual results.

If Unizin succeeds in addressing the above risks, then the state of the art for learning ecosystems will jump forward. If the proposed consortium does not succeed, the result will be a buyers’ club that makes Canvas a very expensive LMS. That result would be ironic, given some of the foundational concepts behind Unizin.

The post Unizin: What are the primary risks? appeared first on e-Literate.

New Internet Trends Presentation: Mary Meeker gets education partly right

Thu, 2014-05-29 10:50

Mary Meeker of Kleiner Perkins Caufield & Byers (KPCB) has released her annual Internet Trends presentation, which has taken on a life of its own. Her data is very useful for seeing macro trends and the significance of Internet-related technology. Even the non-education sections should be of real value to educators.

One interesting perspective is that tablets continue their massive growth. Anecdotally, I am not seeing such a strong uptick in tablet usage by students on campuses, but the overall growth is worth exploring further. It would be interesting to see more longitudinal data specifically on tablet usage on campus.

Tablet growth

Another mobile perspective to keep watching is the rising percentage of web usage by mobile devices (smart phones and tablets), which jumped from 14% to 25% in just the past year. If your campus or technology company does not have a mobile strategy, then you are missing what students (and even faculty and staff) are actually experiencing on the web.

Mobile web percentage

Once you get to the education section (starting on slide 24), Meeker’s overall setup is good, but her argument that education is at a “tipping point” is unpersuasive. She starts out by observing how important the education sector is in the US:

  • Education is Important
  • Education is Expensive
  • Education Results are Often Subpar
  • People Care About Education
  • Personalized Education Ramping
  • Distribution Expanding & Education Start-up Costs Declining

While these are not new observations, I think she is right to use them to explain why the sector is important and why it is different from other technology-driven sectors susceptible to bubbles and lost interest. I could see educational technology investment dropping in the near future, but the challenges and opportunities will not go away.

Where I have a problem is with her justification that “education may be at a tipping point”. Her examples seem to cherry-pick technology adoption of consumer tools rather than showing any evidence of change driven by the technologies. It’s not that I don’t see change happening, just that this slide is one of the weaker arguments in the deck.

Ed Green Shoots

Later in the presentation is an interesting observation about how little potentially useful information is actually tagged and analyzed; the same is true in education.

Data useful analyzed

 

You can see the entire presentation here:

KPCB Internet trends 2014 from Kleiner Perkins Caufield & Byers

The post New Internet Trends Presentation: Mary Meeker gets education partly right appeared first on e-Literate.

Why Unizin is a Threat to edX

Sat, 2014-05-24 15:49

In the week since we published our Unizin exposé, there has been nary a peep from the group to us, or apparently to the traditional news outlets either. When we ran the piece, we emailed Indiana University CIO Brad Wheeler to request comment or corrections. We have not heard back from him yet. Brad, if you’re reading this, our door is always open. Talk to us.

Luckily for us, we don’t need to rely on new pronouncements from Brad to give us more insight into what’s going on. It turns out that he’s been speaking in public about this idea for years. He just hasn’t called it Unizin. And from what I can tell, it should give edX cause to worry.

Understanding Brad

If you want to understand Unizin, you really have to understand Brad Wheeler. He is clearly leading this group and has a history of starting such coalitions (like Sakai, Kuali, and HathiTrust, to name a few). Unizin has his fingerprints all over it. And if you want to understand how Brad thinks, the first thing you need to know is that he is a professor of information systems at a business school. He thinks like a business school professor. For as long as I have been aware of his work, which is about nine years now, he has been focused on the problem of the ed tech ecosystem as an inefficient market. For example, in a 2012 EDUCAUSE Review piece co-authored with Charles Henry, Brad and Charles wrote,

Currently in higher education, we are seeing a growing misalignment between the buyers (higher education institutions) and sellers in these areas. The commercial side has swiftly adapted to scale, consolidating in every domain where there are efficiencies in digital distribution and reuse. Many firms have bought up former rivals in order to vastly consolidate the suppliers of critical software, content, and services. They did so to achieve better economics by scaling the combined revenues of their acquired firms and splitting operational costs over more products.

Colleges and universities, however, remain highly fragmented with mostly one-off “deals” that aptly illustrate the price and cost inefficiencies of skewed markets. A large number of uncoordinated buyers will have little effect on pricing terms when a few suppliers are near oligopolies in terms of industry structure. Beyond price, colleges and universities are also collectively and unintentionally ceding critical capabilities in many categories of essential software, content, and services, necessitating that they buy these capabilities back from the commercial markets.

In their view, the paucity of vendors and consolidation in the space has given the vendors unhealthy leverage over their customers. The solution to this problem, in their view, is for customers to stage their own consolidation in order to rebalance the power relationship with the vendors:

Within the academy, many have expressed outrage at the pricing and sales strategies of the aggregated commercial firms. We dismiss that point of view. The problem lies not with the commercial firms, who rightly endeavor to transfer money from higher education institutions to their shareholders. The problem lies in the behavior of those of us in higher education—the buyers who fail to aggregate at scale in the interests of our institutions. Achieving multi-institutional scale in order to level the playing field and become players in the new game in this era of digital distribution and reuse requires that we make behavioral changes. These changes are neither pro-commercial nor anti-commercial, and attempts to frame them as such are without merit. The changes are, rather, pro-institutional.

Brad thinks a lot about how universities as institutions can recover some of the money that they are currently giving over to vendors. Here he is in 2007, writing about open source Learning Management Systems:

Developing sustainable economics and advancing the frontiers of innovation are the dual challenges for application software in higher education. Sustainable economics means that an institution’s base budgets can support the licensing fees, developers, maintenance, training, and support required for application software. For example, it means that the viability of a course management system (CMS) is not dependent on the next grant or on a one-time budgetary accommodation. Since making changes to application software invokes cost, minimizing change is one tactic for achieving sustainable economics through lower IT costs. In higher education, however, the creative nature of colleges and universities motivates faculty and staff to innovate with new pedagogy and with the use of online resources. Application software that fails to evolve or to allow experimentation and innovation in teaching is unlikely to be well received.

Higher education is in search of a new model to address these dual challenges, and open source application development has been proffered as a solution. Open source software, which is usually obtained without paying any licensing fee to its creators, allows developers to modify the inner workings of the software. In contrast, commercial application software, which is usually licensed for an annual fee, does not allow the inner workings of the software to be modified. Open source software is not free, however, when properly viewed from a total cost of ownership (TCO) perspective. Like all other systems, it requires investments for hardware, user support staff, training, integration with other systems, and so forth. Thus licensing fees, technical support, and control of destiny in evolving the software features are the discriminating cost factors. But licensing fees are not trivial: some estimates place licensing at 20–25 percent of the TCO—in the hundreds of thousands of dollars for many institutions.

Here he is in 2010, writing about the problem of academic journal fees:

Imagine if an efficiency consultant uncovered the following situation. In 1984, an important business function in a university began using a convenient copy service for $0.10 per copy. The university staff provided most of the labor, and the copy service company provided the machine. Twenty-six years later, the university is still using the copy service and still providing most of the labor costs, but the price has changed—to $0.85 per copy. In addition, the copy service now imposes restrictions such that only certain members of the university can read the copied documents. If others want to read the copies, the university must pay extra for the number of readers of the document rather than just the per-copy fee.

Ridiculous? A fee that is 850 percent of the rate twenty-five years earlier? The fee should be approximately $0.21 per copy if it had tracked with the U.S. inflation rate and had gained no technology efficiencies in twenty-six years.  Surely, no efficiency expert would affirm continuing to rent the copier under these terms. The expert would undoubtedly point out that the university could own and operate a copier for a much lower cost and without any restrictive use rules that impede the university’s work. If this situation were found in a non-core administrative area, it would be viewed as outrageous and would be changed immediately.

In fact, a very similar situation exists today in a core area of research and education. This copier example is imperfectly illustrative of the cost and restrictive use imposed on most academic libraries by academic journals. For example, in 2006 a group of 275 doctoral/research universities paid a combined $1 billion to essentially rent the “journal copier” system that would provide their faculty, staff, and students with access to scholarly journals. In 2010, they are paying even more, and the real scale of the cost to colleges and universities spans globally to institutions of all sizes. A sweeping study from the United Kingdom estimated total annual expenditure for journals at £597.4 million (approximately U.S. $952 million) in 2006–7. The total estimate for scholarly communications—inclusive of faculty time for editing and reviewing—was £5.4 billion (approximately U.S. $ 8.6 billion).

And here he is in 2012 writing about textbooks:

Over the years, students and content creators (authors and publishers) have been engaged in a self-reinforcing, negative economic loop for textbooks. Creators only get paid for their investment and work when a new textbook is sold, and students save money by purchasing a used textbook at a lower cost. Creators price higher as fewer students buy new, and students either seek used books or older editions, go without an assigned text, or turn to digital piracy in response to higher prices.

Early signs in the shift to digital were also troubling. Shrewd students who succeeded in buying a used textbook and selling it back had a net cost of about 35 percent of the book’s list price, but less than half of students generally succeeded in selling back. In 2010, e-text pricing was around 70–75 percent of a new paper book or roughly double the cost of the buy-sellback net cost for students. E-texts (naturally) had no option for sellback, and they were riddled with restrictions concerning printing, length of access, and so forth. In addition, publishers were employing a bridging strategy to kill the used-book market by combining single-use website codes with new textbooks for essential online materials. If a student bought a used book, he or she would then still need to pay retail price for a website code.

Thus, while the shift to digital provided new opportunities for students to save money and publishers to rethink their business models, the trend was heading in precisely the wrong direction for content pricing. Also, publishers, bookstores, and others were coming forward with clever new software and hardware platforms for students to read and annotate e-texts. In the absence of a university plan, it is not unreasonable to foresee that a freshman could, with five courses, have seven e-texts requiring four or five different types of software just to study! Obviously, that makes no sense.

As we will see, Brad has gone on record that Unizin is driven by the same concerns (although he doesn’t use the coalition’s name). Whatever else it is also about, you can bet money that it’s about controlling vendor costs.

Not Just the LMS

But if Unizin were only about controlling the cost of the LMS, it would be an odd way to do it. First of all, why invest $1 million per university for a 7-year payback? (And by the way, one thing we don’t know yet is what that $1 million covers. Does it cover migration support, for example? Licensing? What is the total cost of Unizin, including moving to Canvas, and how much of that money goes to Instructure in the end?) Also, if you’re trying to drive down prices, then you get more leverage from a larger buying club. Internet2, which will reportedly be the business entity that will host Unizin, already has an agreement with Instructure through its NET+ program. If you’re not familiar with it, NET+ is essentially a buyers’ club for all Internet2 members. Working through members who act as sponsors for a vendor, Internet2 develops a range of review criteria to ensure that the product in question is secure, scalable, offered under equitable contractual terms, and so on. And then they negotiate a price. This is a great approach if you want to get better pricing from vendors while at the same time encouraging a healthier vendor ecosystem. On the one hand, you are aggregating demand across many institutions, so you have a lot of leverage. On the other hand, you are also rationalizing the due diligence process for the vendors, so that purchasing decisions by Internet2 members will presumably not require painful tell-me-everything RFPs. (In fact, it may eliminate the need for RFPs altogether.) This reduces the cost of sales for the vendors which, in turn, makes it affordable for more vendors to be competitive. NET+ is a great program, and Brad was involved with its creation.

And that does not appear to be what Unizin is doing, despite the fact that Internet2 is the “fiscal agent” for Unizin.

I can think of several possibilities to account for this. First, it is possible that Unizin is, in fact, using NET+ and is negotiating some additional clauses needed by the coalition that will eventually be incorporated back into the NET+ program. The second possibility is that the coalition has needs that are different enough from those of the larger Internet2 community that they feel they need a separate arrangement. The third possibility is that the reason for negotiating separately has nothing to do with contractual needs. Rather, creating a club of attractive peer universities might fulfill a political need to provide cover for the CIOs at their home institutions.

Which of these is right? Or is there some other explanation? We don’t know. Unizin isn’t talking. But my guess is that it’s actually a bit of all three. I suspect that NET+ was probably the starting point of the contractual negotiations, that the coalition has needs beyond generic LMS use, and that some of their needs are political in nature. I base that conjecture on a paper and presentation in which Brad Wheeler makes the case for an approach that is uncannily similar to what we know about Unizin.

Speeding Up On Curves

If you want to understand Unizin, you must read Brad’s paper “Speeding Up on Curves” and view his presentation by the same name. Both are important. The article has the more cogent and complete argument, while the presentation contains some revealing improvisational riffs. The article in particular makes the case for the creation of a coalition of interdependent universities to provide an LMS, content repository, and learning analytics system.

Sound familiar?

Brad writes about four educational technology-enabled models, in decreasing order of cost and revenue generation:

  1. Residential education (flipped classrooms): $$$
  2. Online courses and degrees: $$
  3. Massive Online Courses (MOCs): $
  4. Massive Open Online Courses (MOOCs): Free [1]

He argues that these tools are necessary to scale institutional revenues in the face of declining public funding and rising operational costs. He envisions a new, digitally enabled marketplace.

It looks to me like Brad is trying to run the table. He wants to control costs from LMS vendors, MOOC platform vendors, analytics and content management vendors, and textbook vendors, all in one shot, while scaling up revenue through online offerings. In business school jargon, he wants to transform the Unizin cohort into a vertically integrated cartel that controls its entire value chain.

That’s a very big vision. In my experience, however, universities do not typically put up $1 million for vision. They need something more concrete and immediate. On the other hand, licensing an LMS doesn’t seem sufficient justification for a $1 million investment with a 7-year payback either. There must be something in the middle ground between the tactical LMS license and the big picture that is attracting the other Unizin schools. What is it? What’s the pitch? Again, we don’t know. Unizin isn’t talking. But I have a guess, and edX isn’t going to like it.

How Much Does Free Cost?

Suppose you’re an executive in a big R1 university or university system. Suppose you’re very well aware of and very worried about the economic trends that Brad has outlined in his “Speeding Up On Curves” article. (The second supposition is practically redundant to the first one.) Suppose you don’t know what the answer to this problem is but you feel like you have to be in the MOOC game in case some sort of partial solution should emerge out of it. What are your options?

  1. You could go to Coursera. This is the easiest option in some ways, and has the benefit of reaching the largest number of eyeballs. But it feels like you’re surrendering your brand, which you believe is one of your core assets, to Coursera’s brand. And beyond that, the whole Silicon Valley thing makes you uncomfortable, particularly when you don’t understand how they intend to make money (or don’t believe their answers when you ask them that question).
  2. You could go it alone and put out your own MOOCs on your own platform. But your LMS might not be set up well for MOOCs, and in any event, how would anyone know that you are putting them out? You aren’t confident in your ability to market courses in a way that will attract tens of thousands of students on your own.
  3. You could go to edX. It’s a non-profit run by universities, so it’s closer to your comfort level. And it is getting significant student traffic—more than you could probably get on your own. But it costs a lot of money to join that club—anywhere from $1 million to $5 million, from what’s leaked out into the public about the prices that particular schools have paid. And what you get for that money, beyond the brand recognition and the portal destination, is a very immature platform and some community that you could easily reproduce elsewhere.

Suppose somebody came to you and said, “For the same cost of an edX membership, I can get you a vastly more mature platform for MOOCs, which also happens to be the sexy new LMS that your faculty are bugging you to let them use for their more traditional courses, and a coalition that can build a brand to rival edX while still being run by your peers and not some icky VCs. I also can offer you a much grander vision of bigger benefits, but even if you don’t believe that vision will come to pass, you can think of getting the chance to be in on the ground floor as a free bonus.”

This pitch would be perfectly aligned with Instructure’s pitch for the Canvas Network (their MOOC support), as I wrote about previously. It also would be aligned with the positioning of the LMS in “Speeding Up On Curves”:

Distribution Platforms. The Learning Management System (LMS) and its variants have long been viewed as the online learning distribution platform. But now, as content becomes platform, Coursera is growing like a juggernaut, edX is expanding, and publishers are producing adaptive learning systems. If we back up fifteen years and look at the initial heterogeneity of the LMS space, we see that the path to scale for software platforms played out with mergers and acquisitions and yielded a few dominant options with high switching costs among them. The platforms become increasingly important as education becomes more digital, since they provide distribution for educational experiences, courses, and degrees to students who are on or off campus. In business terms, these platforms are how our educational services reach markets of students both near and far away. And just as happened with Internet commerce more generally, students are becoming far more discerning in their educational purchases that these platforms enable.

Brad has framed the LMS as the storefront, which is probably the principal innovation of xMOOC platforms.

What are the odds that the participants in the Unizin coalition are moderately to highly motivated by the MOOC question? Well, let’s see:

Slide from CSU Presentation

Yes, I believe there is a pretty good chance that an alternative edX-like coalition would be interesting to the executives from at least some of these schools. And if so, then going with the vanilla NET+ contract probably wouldn’t be enough. For starters, they would want to create an edX-like portal, separate from the generic Canvas.net portal, for this coalition. It would be their edY, or something. There might (or might not) be software-related steps beyond the contract that Instructure would have to do, like adding code to the Instructure instance of Canvas or spinning up a Unizin-specific instance. And there may be other, administrative-related functionality changes that the coalition would want (for example). Perhaps more importantly, though, if the Unizin stakeholders want to be able to go back to their campuses and make a case for edY, they will have to come with a group of schools that they believe will hold up well (in the eyes of their campus stakeholders) against the likes of Harvard and MIT. A big, splashy launch of a Unizin coalition with a number of peer schools who are all going to market together, supported by the LMS that all the cool kids dig, could be just the ticket. The group could conceivably use the NET+ contract as a starting point, but they would want to separate themselves from it somewhat for both substantive and political reasons.

Again, this is all just guesswork. But it fits what we know.

So if the medium-term ambition of Unizin is to create a MOOC portal, who is threatened by that? I don’t think it hurts Coursera too badly. They now have enough eyeballs that I think even schools in other coalitions are likely to hedge their bets and put a course or two on that platform. edX is another matter, though. Assuming that Unizin could succeed in making a big media splash and attract students to their course catalog, I don’t see what edX offers that Unizin on Canvas couldn’t do better, and the value to administrators of getting all MOOC and non-MOOC courses on the same supported platform shouldn’t be underestimated. If I were Anant Agarwal, I would be very worried about Unizin poaching my customers.

But What About the Big Vision?

The LMS + MOOC pitch explains why these universities might be interested in a coalition, but it doesn’t fully explain the interest in the Learning Object Repository and analytics system. To explain that, you need to look at Brad’s bigger ambition. And I think both the goal and the strategies there get dicier. But more on that in a future post.

  1. Whether MOOCs are actually low-cost relative to the other options on Brad’s list depends a lot on how you’re measuring cost.

The post Why Unizin is a Threat to edX appeared first on e-Literate.

DOE Doubles Down on State Authorization: 25x increase in regulatory language

Wed, 2014-05-21 11:11

Now that the Kabuki Theatre of the Department of Education’s negotiated rulemaking process has finished its penultimate act, can we all act surprised that the likely result includes the proposed State Authorization regulations growing by a factor of 25 with no comments allowed by one of the groups most affected?

The gist of State Authorization is to force distance education and correspondence programs to comply not only with their home state’s regulations but also with the regulations of any state in which their students reside. The proposed regulations from 2010 (struck down by the courts in 2011) came in at 75 words:

§600.9(c) If an institution is offering postsecondary education through distance or correspondence education to students in a State in which it is not physically located or in which it is otherwise subject to State jurisdiction as determined by the State, the institution must meet any State requirements for it to be legally offering postsecondary distance or correspondence education in that State. An institution must be able to document to the Secretary the State’s approval upon request.

The most recent submission from DOE comes in at 1,912 words. You can read the full text here. This growth in regulatory language will have real costs. Today the president of Saint Leo University described how much of the administrative bloat on campuses is due to increasing federal regulations.

Let me be clear, not all of the increases in college tuition and administrative bloat are caused by regulation, but some are—and far more than you think.

The reason that the 2010 language was struck down by the courts was described in a WCET post:

As we’ve noted before, the distance education language was not included in the original language proposed by the USDOE in June 2010 as part of its rulemaking process.  The rule appeared in the final ‘Program Integrity’ regulations released in October 2010.  As a result, the court seems to have agreed that the USDOE did not follow proper procedures in implementing the rule.  If the language had been included in the June 2010 proposed rules, WCET and the distance education community could have provided comments to improve the final regulation.

What is disconcerting now is that once again the DOE has not included a major stakeholder group that will be directly impacted by the new regulations, in this case state regulators. I described in a previous post how the new regs would dramatically increase federal control over distance education and would significantly increase the required oversight by state regulators. It is this issue that appears to have been the breaking point in current negotiations, per the Chronicle today:

On state authorization, the department and several negotiators representing student and consumer groups remained insistent that distance-education programs could be eligible for federal student aid only if they were approved by a state with an “active process” for authorizing them—a process that went beyond passively approving programs merely if they were accredited or had been operating in the state for a long time.

“We’re not willing to walk away” from that position, said Pam Moran, representing the department.

And what do the state regulators have to say in this process?

Elizabeth Hicks, executive director of student financial services at the Massachusetts Institute of Technology, was one of several negotiators who asked the department to hold off on issuing a state-authorization rule until it could consult with state regulators, who were not represented among the 31 negotiators from outside the department.

“Many of us feel you are trying to leverage institutions to regulate states,” said Ms. Hicks. “That’s disingenuous.”

What do you get for this massive growth in regulatory language? Russ Poulin from WCET, who was one of the negotiators and has provided great insight into the process, stated in a comment to the Chronicle article:

The objection of most negotiators was not bureaucracy, but meaningless bureaucracy. The Department’s proposal would have forced institutions to lobby states (in which they serve students) to change their approval processes to a process that would not be substantially different than what they do now. This would not improve consumer protection. It would throw the status of hundreds of thousands of students into question while states decide if they would make the necessary changes and what changes they would make. Yes, there was a roll-out period, but uncertainty would remain. Why disrupt the lives of students to implement a process that would result in the same approval outcome?

I am sure that many individuals within the DOE truly wanted to reach consensus within negotiated rulemaking, but the process for doing so was flawed from the beginning. The process worked as designed, and no consensus was reached. Now the DOE has the discretion to introduce the proposed language it wants or even to drop the subject.

The shame of this is that there really is a problem to be addressed. eCampusNews described today how Ashford University agreed to pay a $7.25 million settlement and revise its marketing practices to resolve a State of Iowa lawsuit. One former student commented:

Pat Kean, a military veteran from Madrid and former Ashford College of Education student, said he used higher education funds earned through his service to pay for online classes, only to discover in his final days as a student that his teaching degree wasn’t valid and his credits weren’t transferrable. Even the land-based campus in Clinton wouldn’t accept his credits, he said.

Students do need to have pathways, including via states, to address misleading marketing practices and poor quality. The process seems to have worked here, even without State Authorization, partially because students in Ashford’s home state of Iowa complained. But where would students in Oregon complain? This is the rationale for State Authorization.

Despite the exclusion of state regulators from the negotiated rulemaking process and the last-minute changes to requirements on state regulators, the DOE has actually been forthright in its intention to force states to ramp up “active” oversight. From the Federal Register in 2010 (see page 34813, left column):

Upon further review, we believe the better approach is to view the State approval to offer postsecondary educational programs as a substantive requirement where the State is expected to take an active role in approving an institution and monitoring complaints from the public about its operations and responding appropriately.

In other words, there is a problem to be solved, and the intent to place new regulatory requirements on the states themselves has been known. What is unfortunate is the process that A) grew regulatory language by a factor of 25, B) still excluded key input from affected parties, and C) predictably failed to reach consensus.

The post DOE Doubles Down on State Authorization: 25x increase in regulatory language appeared first on e-Literate.

Some Real Reasons the “Two-speed Internet” Hurts Higher Education (Hint: impact on the “Next Facebook” is not one of them)

Mon, 2014-05-19 21:22

The debate around net neutrality so far has been almost as depressing as the set of judicial and administrative decisions that got us here. Central to the debate has been the obsession about how the two-speed internet will “stop the next Facebook/Google/Netflix” from being able to innovate.


Save the Internet does a bit better than most at teasing out some of the other issues (privacy, freedom of speech), but states the business core of the argument like so:

Net Neutrality lowers the barriers of entry for entrepreneurs, startups and small businesses by ensuring the Web is a fair and level playing field. It’s because of Net Neutrality that small businesses and entrepreneurs have been able to thrive on the Internet. They use the Internet to reach new customers and showcase their goods, applications and services.

I’m not going to argue that this is wrong. Monopoly power on this scale is a dangerous thing. Until recently, there were decent laws preventing companies from owning all the media outlets in a single metro — we are now moving towards allowing one company to control most of America’s access to the Internet. It’s easy to put on the weary entitlement of “It’s all just Google vs. Comcast, Goliath vs. Goliath, what do I care?” But, of course, this is the well-established point of anti-monopoly law — the world is a better place for David when Goliath fights Goliath than when Goliath stands unopposed. When Goliath stands unopposed, bad things happen. You don’t have to root for Goliath Number Two to understand the utility of that.

At the same time, these arguments have obscured some of the real threats to education that have nothing to do with the “next Facebook” scenario. Primary among these threats is the issue of what happens to traffic that is not from traditional content providers. I’d like to sketch out what that means for higher education, and why your institution should be talking about the dangers of creating a provider-paid express lane on the Internet.

The BitTorrent Roots of the Current Mess and the Problem of “No-Provider” and “Own-Provider” Services

How we got to the current policy is a bit convoluted, but it’s worthwhile to go back to the last great success in the fight for net neutrality. In 2008, the FCC ruled that Comcast had to stop throttling BitTorrent traffic. For those unfamiliar with BitTorrent, it is a peer-to-peer technology that is used to share files on the Internet. Key to its peer-to-peer design is that it is “providerless” — there is no content company that mediates the traffic — all users of a particular torrent connect directly to each other. Your content doesn’t come to me via Google or Dropbox — it comes to me directly from your computer, and from the computers of the others downloading or “seeding” it.

[Image: BitTorrent peer-to-peer network diagram]
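
For readers who want the mechanics, here is a toy model of the “providerless” point (a deliberate simplification, not the real BitTorrent protocol): once the pieces of a file exist anywhere in the swarm, peers complete their downloads by trading with each other rather than with any central server.

```python
# Toy model of a providerless swarm; a simplification, not real BitTorrent.
def simulate_swarm(num_pieces, peers):
    """peers maps a peer name to the set of piece indices it already holds."""
    complete = set(range(num_pieces))
    rounds = 0
    while any(held != complete for held in peers.values()):
        rounds += 1
        for name, held in peers.items():
            for other_name, other_held in peers.items():
                missing = other_held - held
                if other_name != name and missing:
                    held.add(min(missing))  # fetch one missing piece per peer
    return rounds

# One seeder, two downloaders; the downloaders also serve each other.
swarm = {"seed": set(range(4)), "alice": set(), "bob": {0}}
print(simulate_swarm(4, swarm))  # converges with no content company involved
```

There is no line in that model where a “provider” could write a check to Comcast, which is exactly the problem.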

Of course, most campus IT administrators are intimately familiar with the technology, as it was one of the things slowing campus Internet to a crawl several years back. As such, I’m sure that at least some campus IT administrators sympathize with Comcast’s decision — after all, a number of campuses ended up throttling BitTorrent as well.

But consider the issue the student who wanted to use BitTorrent faced on such a campus. No matter how much money they paid for Internet service, they could never significantly increase their BitTorrent speeds. Meanwhile, the speeds of everything else provided through campus pipes increased.

Under the new FCC rules, all applications and providers can be subject to the same sort of limits (the newer FCC rules apparently ban throttling, but there is slim difference between throttling and the separation of traffic into fast and slow lanes). The difference here is that “content providers” can pay a fee to ISPs to get out of “throttling prison” and use the full bandwidth available to the consumer to deliver their service. So Netflix pays Comcast, and gets out of throttling prison. Netflix’s upstart competitor doesn’t have the money and so gets slower service.

Supporters of the proposal say this is where it ends — it’s just a matter of who owes money to whom, and setting up reasonable guidelines for that.

But what about the person using BitTorrent? The problem with BitTorrent is that there is no provider to pay the cable company to get fast lane access. This is not simply a case of how much Goliath owes Goliath. This is a case of David not even having access to the currency system. BitTorrent applications have no content provider status, and so will be relegated permanently to the slow lane.

This problem, that the proposed rules are built around assumptions of a “provider” negotiating with cable companies, is potentially more damaging to education than the actual details of what those negotiations are allowed to entail. To paraphrase Milton Friedman paraphrasing William Harcourt: We are all torrenters now. And that means we have little control over our future.

Beyond BitTorrent: Video Clips for a Media Class

If you think this doesn’t apply to your campus, think again. Because higher education deals quite a lot with services where there is no corporate provider.

First, consider a non-peer-to-peer example. On most campuses, media and communications faculty use clips from films in their classes, and quite often distribute them via the Internet. They are allowed to do this because of explicit protections granted to them by the U.S. government, but because they must show care in how they distribute content, they generally use a free-standing server on campus (such as Kaltura) to deliver the clips. In a class that deals with, say, cinematography, the quality of the clips could be essential to the activity. As it stands now, you might say in your course description: “Students should have access to broadband in order to view the video clips for homework.”

[GIF: “Charles Foster Kane is Dead”]

So here’s a question — how can you make sure your students at home can get “fast lane” access to these clips?

If you were to put them up on YouTube or Vimeo, then YouTube would negotiate the agreement. But in your case, you are just serving them up through a campus server. Who do you call? How much does it cost?

Ok, now that you have done that for Comcast, it’s time to ask yourself — what other cable internet providers do your students have? If it’s an online course, how do you deal with a local cable provider in Athens, GA when you are in Seattle, WA?

There’s no real answer to these questions. Or rather, the answer is clear — unless you are the University of Phoenix, you aren’t going to be able to negotiate this. Your institution is not set up for it. And so, as the fiber revolution rolls out across the nation, most of higher education will be stuck in the copper lane.

A Peer-to-Peer Example: Educational Videoconferencing

Videoconferencing is one area where the increasing quality of internet connections is poised to have great impact. Most of what sucks about videoconferencing comes down to latency (that ‘you-go-no-you-go’ seconds-long delay that makes you feel like you are conducting class over a walkie-talkie), stability of connection, and visual clarity (which allows you to see the microexpressions that signal to you important things, like ‘Is this student getting this at all?’).

All of these aspects improve with increased bandwidth. And it’s possible, of course, that your third-party video-conferencing provider will be able to pay the fee to Comcast and others that allows your students to tap into such things.

Assuming, of course, that you have such a provider. The recent trend in video-conferencing is toward peer-to-peer products which connect conference participants directly instead of through an intermediate server. This dramatically lowers latency, leading to a conversational flow that more closely resembles face-to-face discussion. As more remote students have access to high-quality connections, peer-to-peer video conferencing has the potential to increase the impact of online education substantially, and, just as importantly, make such models more humane by providing students and teachers access to the facial “microexpressions” and conversational cues that make such events emotionally meaningful experiences.
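
A back-of-the-envelope sketch shows why cutting out the intermediate server matters. The figures below are hypothetical, chosen only to illustrate the arithmetic, not measurements of any particular product.

```python
# Hypothetical latency arithmetic: relayed media crosses two network legs
# plus server processing, while a peer-to-peer connection crosses one leg.
def one_way_latency_ms(network_legs_ms, server_processing_ms=0):
    return sum(network_legs_ms) + server_processing_ms

relayed = one_way_latency_ms([40, 60], server_processing_ms=30)  # via server
direct = one_way_latency_ms([70])                                # peer to peer
print(f"relayed: {relayed} ms, direct: {direct} ms per direction")
# Round-trip, the gap roughly doubles, which is the walkie-talkie feel.
```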


Except — how will you ensure access to the bandwidth and latency you need to make this work? Your students can’t buy it — they may have the Comcast Super-Turbo-Extra-Boost plan, but that’s only going to increase the speed of prioritized traffic they receive, such as Netflix.

And your institution can’t buy it either, because there is no central server. When your student Jane from Twin Rivers talks, the traffic doesn’t come from an identifiable university computer. It comes from Jane’s computer in Twin Rivers, and goes directly to you and the five other students in the review session. Jane doesn’t have an option to call Comcast and get her traffic into the fast lane. So while Hulu will be able in 10 years to deliver multi-terabyte holographic versions of The Good Wife to your living room, the peer-to-peer video your campus is using will remain rooted in 2014, always on the verge of not sucking, but never quite making it to the next level.

Other Examples

These are just two examples from areas I’m deeply familiar with, but if you talk to other people at your institution, you can uncover other examples fairly quickly. Here’s what you ask:

Is there anything you do in your teaching or research that relies on connections to the Internet and is not delivered by a major third-party provider (such as YouTube, Dropbox, etc.)?

You’ll find there are quite a lot of things that work like that. For example, there has been a major push to shut down computer labs on campuses as a cost-saving measure — after all, most students have laptops. As we’ve done that, we’ve pushed students into using virtualized software, often across consumer connections. In virtualized scenarios, students remotely tap into high-speed servers loaded with specialized software. This allows a department to make sure that all students have access to the computing power and software they need without needing access to a computer lab.

There’s a lot of potential for virtualized software to reduce cost and increase student access. While having a good connection to the Internet is a cost, it’s considerably less costly for many students than having to drive to campus several nights a week to complete assignments, and far more convenient. As consumer bandwidth increases, the dream of virtualizing most of the software students need becomes an achievable reality.

Except… You see where this is going. How does your campus make sure that your students’ virtualized instances get the maximum bandwidth their connections can support? Short of having a full-time campus cable negotiator, it’s hard to see how this happens. Like the peer-to-peer video-conferencing revolution, the move to virtualization could be over before it has begun, and with it the potential decreases in cost and increases in access.

Once you start to look for this issue, you’ll find it everywhere. There are certain IT functions we keep on campus due to security and privacy issues, for example. We may be pushed into moving these to third-party software if we cannot negotiate the same speed for on-campus functions as for off-campus, third-party-provided functions. Our students are increasingly working with large datasets as part of their research — how, exactly, does one get fast lane access for one’s 50 GB GIS homework?

These are small problems now, but without continued access to top-tier service they can become big problems soon.

But, Chairman Wheeler says….

Of course, the current FCC Chairman says that the fears are overblown. There are many great sites out there that debunk the FCC’s “Don’t Panic” rhetoric better than I could, but let me deal with three common objections quickly.

First, there is some confusion about whether the new rules allow providers to prioritize traffic to consumers. Wheeler says they don’t, but this is a bit of a word game. To vastly simplify the issue, Wheeler has guaranteed that the on- and off-ramps to the Information Superhighway won’t have slow and fast lanes. The actual highway? He’s determined that’s outside the FCC’s purview. And since any connection is only as strong as its weakest link, having no priority lanes on the ramps means very little if providers are carving up the highway into express lanes and economy ones.

Second, he’s guaranteed that providers won’t be able to “slow down” any traffic, only prioritize some traffic. A simple thought experiment demonstrates the ridiculousness of this claim. During peak hours, Netflix currently makes up about 34% of Internet traffic. The cable companies are now going to make Netflix pay to prioritize their content. Given that bandwidth is a finite resource, it doesn’t take a genius to realize that even if the cable companies just went after Netflix, it would adversely impact your university’s efforts. By definition, to prioritize one thing is to de-prioritize something else, and in this case that something else is your connection to your students.
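If the zero-sum logic sounds abstract, here’s a toy model with entirely made-up numbers: a fixed-capacity link shared by three flows, first under best-effort sharing, then with one flow granted strict paid priority.

```python
# Toy model of a congested link: fixed capacity, three competing flows.
# All demand numbers are made up for illustration.
LINK_CAPACITY = 100.0

demands = {"netflix": 80.0, "campus_vdi": 30.0, "p2p_video": 30.0}

# Best effort: when demand exceeds capacity, everyone is scaled down
# proportionally.
total_demand = sum(demands.values())
best_effort = {k: v * LINK_CAPACITY / total_demand for k, v in demands.items()}

# Paid priority: the prioritized flow is served in full first; everyone
# else splits whatever capacity is left, in order.
def strict_priority(demands, order, capacity):
    remaining, share = capacity, {}
    for flow in order:
        share[flow] = min(demands[flow], remaining)
        remaining -= share[flow]
    return share

paid = strict_priority(demands, ["netflix", "campus_vdi", "p2p_video"], LINK_CAPACITY)

print(best_effort)  # ≈ {'netflix': 57.1, 'campus_vdi': 21.4, 'p2p_video': 21.4}
print(paid)         #   {'netflix': 80.0, 'campus_vdi': 20.0, 'p2p_video': 0.0}
```

Note that in the second scenario nothing was “slowed down”; one flow was merely “prioritized.” The educational traffic loses anyway, because capacity is finite.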

Finally, sitting here in 2014, it’s tempting to see the bandwidth and low-latency connections we have now as sufficient for our needs. This is part of the rhetoric of the cable companies. What do you use the Internet for now? Well, you’ll still get to do that!

But how much of what we do now could we do in 2004? Would any of our stakeholders — the students, the legislature, the taxpayers, the businesses we send our students into — be happy with us utilizing the Internet at a 2004 level?

Pundits often complain that the world of education does not adopt technology at the speed of business. That’s true, partially. And we could do better. But the currently proposed FCC rules all but guarantee that we won’t be allowed to.

To me, that’s a bigger issue than where the “next Facebook” comes from. And it’s one that we need to start talking about.

 

Image Credits:

Peer-to-Peer diagram courtesy of Wikimedia Commons: http://upload.wikimedia.org/wikipedia/commons/0/09/BitTorrent_network.svg. Modified by Michael Caulfield.

“Charles Foster Kane is Dead” GIF, by howtocatchamonster. Published at http://howtocatchamonster.tumblr.com/tagged/citizen-kane

Other images are screenshots by Michael Caulfield.

 

The post Some Real Reasons the “Two-speed Internet” Hurts Higher Education (Hint: impact on the “Next Facebook” is not one of them) appeared first on e-Literate.

The IMS Is More Important Than You Think It Is

Sun, 2014-05-18 08:21

I have long argued that the development of technical interoperability standards for education is absolutely critical for enabling innovation and personalized learning environments. Note that I usually avoid those sorts of buzzwords—”innovation” and “personalized learning”—so when I use them here, I really mean them. If there are two fundamental lessons we have learned in the last several decades of educational technology development, they are these:

  1. Building monolithic learning environments generally results in building impoverished learning environments. Innovation and personalization happen at the edges of the system.
  2. There are tensions between enabling innovation at the edges and creating a holistic view of student learning and a usable learning environment. Integration does matter.

To these two education-specific principles, I would add a general principle about software:

  • All software eventually grows old and dies. If you can’t get your data out easily, then everything you have done in the software will die with it (and quite possibly kill you in the process).

Together, these lessons make the case for strong interoperability standards. But arriving at those standards often feels like what Max Weber referred to as “the strong and slow boring of hard boards.” It is painful, frustratingly slow, and often lacking a feeling of accomplishment. It’s easy to give up on the process.

Having recently returned from the IMS Learning Impact Leadership Institute, I must say that the feeling was different this time. Some of this is undoubtedly because I no longer serve on any technical standards committees, so I am free to look at the big picture without getting caught up in the horrifying spectacle of the sausage making (to mix Germanic political metaphors). But it’s also because the IMS is just knocking the cover off the ball in terms of its current and near-term prospective impact. This is not your father’s standards body.

Community Growth

The first indicator that things are different at the IMS these days is the health of the community. When I first got involved with the organization eight years ago, it was in the early stages of recovery from a near-death experience. The IMS was dying because it had become irrelevant. It just wasn’t doing work that mattered. So people stopped coming and organizations stopped paying dues. Then Rob Abel took over and things started turning around.

[Slide: IMS membership growth]

Membership has quadrupled, and the trend is accelerating: according to Rob, the IMS has added an average of about 20 new members a year for the last eight years, but has added 25 so far in 2014. Interestingly, there was also a very strong K12 contingent at the meeting this year, which is new.

Implementations of IMS standards are also way up:

[Slide: growth in implementations of IMS standards]

Note that conformance certification is a new thing for the IMS. One of the key changes in the organization was an effort to make sure that the specifications led to true interoperability, rather than kinda-sorta-theoretical interoperability. Close to 250 systems have now been certified as conforming to at least one IMS specification. (There are also a number of systems that conform but have not yet applied for certification, so this number is not comprehensive.) And here again, the trend continues to accelerate. According to Rob, the IMS averaged two new conformance certifications a week in 2013 and is averaging four new certifications a week so far in 2014.

Keep in mind that these numbers are for systems. A lot of the things (for lack of a better word) that conform to IMS specifications are not systems but bundles of content. Here the numbers are also impressive:

[Slide: conforming content numbers]

So a lot more people are participating in IMS and a lot more products are conforming to IMS specification profiles.

Impact

One of the effects of all of this is that LMS switching has gotten a lot easier. I have noticed a significant decline in campus anxiety about moving from one LMS to another over the past few years. There are probably a number of reasons for this, but one is undoubtedly that switching has gotten significantly easier due to IMS interoperability specifications. For content, all popular LMSs in the US higher education market import Common Cartridge packages, and several of them export to Common Cartridge. (By the way, I will say it again: If you are in the process of selecting a new LMS, you should make export to Common Cartridge a buying criterion.) Hooking up the LMS to the administrative systems so that class information can be populated into the LMS and final grades can flow back to the SIS has gotten much easier thanks to the LIS specification. And third-party (including home-grown) tools that work in one LMS usually work in another without extra programming, thanks to the LTI standard.

But I think the IMS is still just warming up. LTI is leading the way for the next wave of progress. Under the stewardship of Chuck Severance, LTI is now supported by 25 learning platforms. edX and Coursera both recently announced that they support LTI, for example. It has become the undisputed standard for integrating learning tools into platforms. This means that new learning tool developers have a straightforward path to integrating with 25 learning platforms simply by supporting LTI. My guess is that a good portion of those four new conformance certifications a week are LTI certifications. I see signs that LTI is facilitating a proliferation of new learning tools.
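To make “straightforward path” concrete: on the wire, an LTI 1.1 launch is just an OAuth 1.0a HMAC-SHA1 signed form POST from the platform to the tool. Below is a minimal sketch in Python, standard library only; the tool URL and credentials are hypothetical, and a production launch would carry more context (course info, outcomes service URLs) plus nonce tracking on the tool side.

```python
# Minimal sketch of signing an LTI 1.1 launch (an OAuth 1.0a HMAC-SHA1
# signed form POST). Tool URL, key, and secret below are hypothetical.
import base64
import hashlib
import hmac
import time
import uuid
from urllib.parse import quote


def encode(s):
    # OAuth 1.0a requires strict RFC 3986 percent-encoding.
    return quote(str(s), safe="")


def sign_launch(url, params, consumer_key, consumer_secret):
    oauth = {
        "oauth_consumer_key": consumer_key,
        "oauth_nonce": uuid.uuid4().hex,
        "oauth_signature_method": "HMAC-SHA1",
        "oauth_timestamp": str(int(time.time())),
        "oauth_version": "1.0",
    }
    all_params = {**params, **oauth}
    # Signature base string: METHOD & encoded URL & encoded sorted params.
    normalized = "&".join(
        f"{encode(k)}={encode(v)}" for k, v in sorted(all_params.items())
    )
    base = "&".join(["POST", encode(url), encode(normalized)])
    # LTI 1.1 has no token secret, so the signing key ends with a bare "&".
    key = (encode(consumer_secret) + "&").encode()
    sig = hmac.new(key, base.encode(), hashlib.sha1).digest()
    all_params["oauth_signature"] = base64.b64encode(sig).decode()
    return all_params  # POST these as form fields to launch the tool


launch = sign_launch(
    "https://tool.example.com/launch",  # hypothetical tool endpoint
    {
        "lti_message_type": "basic-lti-launch-request",
        "lti_version": "LTI-1p0",
        "resource_link_id": "course101-widget-1",
        "user_id": "jane",
        "roles": "Learner",
    },
    consumer_key="campus-key",          # hypothetical credentials
    consumer_secret="campus-secret",
)
```

That is essentially the entire integration burden for a basic launch, which goes a long way toward explaining the proliferation of LTI tools.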

There is a lot of new work happening at the IMS now, but I want to highlight two specifications in development that I think will take things to the next level. The first is Caliper. I have waxed poetic about this specification in a previous post. In my opinion, the IMS is under-representing its value by billing it as an analytics specification. It is really a learning data interoperability specification. If you want loosely coupled learning tools to be able to exchange relevant data with each other so that they can work in concert, Caliper will enable that. It is as close to a Holy Grail as I can think of in terms of resolving the tension that I called out at the top of the post.
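For a feel of what that exchange could look like, here is an illustrative sketch. Caliper events are JSON-LD documents describing who did what, to which resource, and when; since the spec is still in development, the context URL, types, field names, and identifiers below are my approximation, not the final standard.

```python
# Illustrative shape of a Caliper-style learning event (JSON-LD). The
# context URL, types, field names, and IDs approximate a draft spec that
# is still in development; they are not the final standard.
import json
from datetime import datetime, timezone

event = {
    "@context": "http://purl.imsglobal.org/ctx/caliper/v1/Context",
    "@type": "NavigationEvent",
    "actor": {
        "@id": "https://university.example.edu/users/jane",  # hypothetical IDs
        "@type": "Person",
    },
    "action": "navigatedTo",
    "object": {
        "@id": "https://tool.example.com/readings/chapter-3",
        "@type": "DigitalResource",
        "name": "Chapter 3",
    },
    "eventTime": datetime.now(timezone.utc).isoformat(),
}

# Any two tools that agree on this vocabulary can exchange activity data
# directly, without sharing a vendor or a database.
print(json.dumps(event, indent=2))
```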

The second one is the Community App Sharing Architecture (CASA). Think of it as kind of a peer-to-peer replacement for an app store, allowing the decentralized sharing of learning apps. As the UCLA Education and Collaborative Technology Group (ECTG) puts it,

The World Wide Web is a vast, mildly curated repository of information. While search engines fairly accurately filter the Internet based on content, they are less effective at filtering based on functionality. For example, they lack options to identify mobile-capable sites, sites that provide certain interoperability mechanisms, or sites related to certain industries or with certain content rating levels. There is a space where such a model already exists: the “app stores” that pervade the native mobile app landscape. In addition to the app itself, these hubs have deep awareness of application metadata, such as mobile and/or tablet support. Another deficit of search engines is their inability to allow organization-based configuration, defining a worldview with trust relationships, filters and transformations to curate the results they present to end users. Native app stores use a star (hub-and-spoke) topology with a central hub for publishing, which lacks this fine-grain customizability, but an alternative peer-to-peer topology, as is used for autonomous systems across the Internet, restores this freedom.

CASA should facilitate the further proliferation of learning apps by making them more easily findable and sharable, drawing on affinity networks (e.g., English composition teachers or R1 universities). Caliper will enable these tools to talk to each other and create an ensemble learning environment without having to rely on vendor-specific infrastructure. And LTI will enable them to plug into a portal-like unifying environment if and when that is desirable.
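As a rough illustration of the topology difference the ECTG describes, here is a toy sketch of peer-to-peer app discovery with consumer-side trust and filtering. All URLs and metadata fields are hypothetical, not CASA’s actual schema.

```python
# Toy sketch of CASA-style peer-to-peer app sharing. Every peer publishes
# a feed of app metadata; each consumer decides which peers to trust and
# how to filter. All URLs and field names here are hypothetical.

SAMPLE_FEEDS = {
    "https://ucla.example.edu/apps.json": [
        {"title": "Essay Peer Review", "categories": ["composition"], "lti": True},
    ],
    "https://csu.example.edu/apps.json": [
        {"title": "GIS Sandbox", "categories": ["geography"], "lti": True},
        {"title": "Grade Gossip", "categories": ["composition"], "lti": False},
    ],
}

TRUSTED_PEERS = [
    "https://ucla.example.edu/apps.json",
    "https://csu.example.edu/apps.json",
]

def fetch_feed(peer_url):
    """Stub standing in for an HTTP GET of the peer's published feed."""
    return SAMPLE_FEEDS.get(peer_url, [])

def discover(category, require_lti=True):
    # Unlike a hub-and-spoke app store, the trust list and the filters
    # belong to the consuming campus, not to a central gatekeeper.
    for peer in TRUSTED_PEERS:
        for app in fetch_feed(peer):
            if category in app["categories"] and (app["lti"] or not require_lti):
                yield app

print(list(discover("composition")))
# [{'title': 'Essay Peer Review', 'categories': ['composition'], 'lti': True}]
```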

Who says technical standards aren’t exciting?

The post The IMS Is More Important Than You Think It Is appeared first on e-Literate.

Unizin: Indiana University’s Secret New “Learning Ecosystem” Coalition

Fri, 2014-05-16 10:00

Indiana University has been the driving force behind the creation of a new organization to develop a “learning ecosystem”. At least ten schools are being quietly asked to contribute $1 million each over a three-year period to join the consortium. The details of what that $1 million buys are unclear at this point. The centerpiece for the short-term appears to be a contract with Instructure for use of the Canvas LMS. But there are also hints of ambitious plans regarding learning object repositories and learning analytics.

What is remarkable is the level of secrecy surrounding the project. Several sources from involved schools have indicated that very few people have been informed regarding their institutions’ prospective involvement. When school discussions do take place, care is being taken to keep them quiet. For example, a video recording of a presentation to faculty about Unizin at Colorado State University was removed from public access after it received some attention on Twitter (although e-Literate downloaded a copy of the video before it was taken down).

Nevertheless, details of the project are beginning to leak out. In this post, I will share the facts that we have been able to confirm. Phil and I will both follow on with some analysis and inferences early next week. And of course, we will continue to bring you updates as news breaks.

The (Prospective) Members

As far as we can tell, there are no definite commitments to join this group yet other than from Indiana, although University of Michigan and Colorado State University are beginning to socialize the idea on their respective campuses. The recording obtained by e-Literate is a presentation by CSU Dean of Libraries Pat Burns and Associate Provost for Innovation Mike Palmquist to what was apparently a group of faculty members. The ten potential Unizin members listed on the presentation slides (along with their current LMSs) are as follows:

  • Colorado State University (Blackboard)
  • Indiana University (Sakai)
  • Oregon State University (Blackboard)
  • Purdue University (Blackboard)
  • University of Florida (Sakai, with some use of Canvas)
  • University of Maryland (Canvas)
  • University of Michigan (Sakai)
  • University of Texas (Blackboard, with at least one campus in the process of moving to Canvas)
  • University of Wisconsin (Desire2Learn)
  • University of Utah (Canvas)

Slide from CSU Presentation

An article in the University of Michigan campus newspaper about a briefing to the faculty senate corroborates Michigan, Indiana, and Florida as participants in Unizin discussions. There may be others involved in the discussions as well. For example, we have some evidence that Rice University is or was considering joining. A bond issuance document by the university refers to its participation in something called “Coalition for Online Teaching and Learning (COLT)”. A CV for a Vice Provost at Purdue University describes one of her job duties as representing the university at “the Coalition for Online Learning and Teaching (now referred to as UniZin, the online learning ecosystem consortium)”.1

While the group seems only loosely connected to the Big Ten’s Committee on Institutional Cooperation (CIC), it does share some key players with an earlier position paper by a CIC task force motivated by the perceived need to take back control of distance learning from MOOC providers such as Coursera.

At the recorded CSU meeting, one of the presenters—it’s impossible to tell which is the speaker from the recording we have—acknowledges that the meetings were largely conducted in secret when challenged by a faculty member on the lack of faculty involvement. He cited sensitive negotiations among the ten universities and Instructure as the reason.

The CSU presenters specifically state that CSU was “invited” to join Unizin—the name is apparently a portmanteau of “university” and “zinfandel”—and suggested that one of the reasons for their inclusion was their experience with distance education relative to the other members. According to the meeting notes of the CSU technology fee advisory board dated April 7, 2014, Pat Burns said,

They want similar universities and in a lot of ways we are very similar to these other universities and institutions. We are way ahead in some areas. We do about $38 million a year in online education. We are leaders in online education and how to do that right.

More on this later in the post.

As I mentioned before, each university will be expected to contribute $1 million. According to the CSU presentation, Internet2 is acting as the “fiscal agent” and the hiring of an executive director for the project is “imminent”.

Unizin Status

The LMS Deal

Although there are ambitious larger plans for the coalition, the immediate driver of the deal appears to be the formation of a buyers’ club to license Instructure Canvas. Indiana University, which is leading the discussions of the coalition, has already announced that they will be moving to Canvas after a selection process comparing that LMS to Blackboard and Desire2Learn (ironically, using product evaluation categories developed during the design process for Sakai OAE). According to the CSU presentation and documents, the Canvas license would cost the university $150,000 less per year than their Blackboard license under this deal. However, because the university would be paying in $1,000,000 over three years up front, they expect “about a seven-year payback,” according to the meeting minutes; that tracks, since the $1 million buy-in divided by $150,000 in annual savings is roughly 6.7 years. (The presenter in the video recording said “seven to ten years.”)

Unizin Business Model

The level of due diligence each campus has put into the selection of Canvas seems to vary widely. Indiana has documented extensive research at their LMS selection site. A Canvas pilot was run at the University of Wisconsin, while Oregon State University is in the process of piloting Canvas. On the other hand, CSU seems mainly to rely on IU’s research as their due diligence. Some of the faculty members in the audience seemed to know more about Canvas functionality versus Blackboard and Blackboard-to-Canvas migration issues than the staff did.

We have no evidence that the current deal includes special provisions about opening the source code of the platform, much but not all of which Instructure releases under the AGPL open source license. However, the CSU presenters speculated that the coalition could self-host, stop paying fees to Instructure, and develop the LMS themselves in the future. No mention was made of the fact that the Sakai schools in the Unizin coalition would be leaving just such an open source coalition in favor of a hosted and externally developed solution.

Prospective: Learning Object Repository (LOR) and Analytics

In addition to the LMS, the coalition has ambitions to add a LOR and a learning analytics platform.

Unizin Diagram of Services

It is not clear whether Unizin intends to build or buy these components, or even whether that decision has been made at this time. However, the CSU presenters (correctly) mentioned that the University of Wisconsin has significant expertise and experience in learning analytics, suggesting that this is one reason why they were asked to join the coalition. Wisconsin is involved in the Open Learning Analytics initiative and has been one of a few early adopters of Desire2Learn’s analytics platform. Purdue is also famous for its Course Signals learning analytics platform (and infamous for its lack of transparency about problems with that research). Likewise, LORs have been developed by both the University of Utah and Rice. These pieces, along with the CSU comments about the coalition pooling development resources, suggest that consortium-based development (with or without an open source license) is an option being considered.

According to the CSU presenters, all content would be stored in a “monolithic” content repository for the coalition, “outside of vendor platforms” (which may be another indicator that Unizin intends to develop its own software, or may just mean “outside the LMS”). The repository would also act as a “reflector,” meaning that it could contain links to externally hosted content.

Unizin Content Repository

Unizin Content Repository 2

Various rationales were given by the CSU presenters regarding the need for a centralized LOR. One was the claim that it would make future LMS migrations easier if the content were stored externally. Another was that it would make inter-campus course sharing easier. On this last point, while the possibility was mentioned several times (with the example that CSU might share engineering courses with IU in exchange for foreign language courses), it does not appear that any concrete sharing arrangements have been worked out.

The scope of the analytics ambitions seems broad:

Unizin Predictive Analytics

Unizin Real-Time Analytics

These slides contain a menu of just about every kind of learning analytics on the market today. There is no indication that specific plans or priorities have been developed for Unizin analytics at this time.

Unanswered Questions

That sums up what we know about Unizin at this time. As I mentioned earlier, Phil and I will be writing next week about what we think we can infer from what we know. In the meantime, I want to close this post with some questions focusing on what we don’t know:

  • What does that $1 million per campus pay for?
  • Will the coalition be hosting Canvas itself, or will Instructure do that?
  • Internet2 already has a pricing agreement with Instructure through NET+; how is Unizin’s different and (presumably) better?
  • Both Apereo and Kuali are foundations that host open source software projects relevant to Unizin’s ambitions and of which at least a few of the Unizin schools are contributing members; why are neither of these foundations (apparently) involved in the project?
  • What are the real drivers for the content repository development? Ease of LMS migration seems like an inadequate explanation for an effort of this size, and the course sharing ambitions seem vague.
  • How does Unizin intend to solve the content re-use problem which has plagued pretty much every other LOR initiative in the long history of LOR initiatives?
  • What would the first goals of the learning analytics project be (given how many possible goals are articulated), and how would they interact with the content repository requirements?
  • How will faculty react when Unizin is made public on their campuses? And does this reflect a broader shift in the role of faculty governance?
  1. Pray that your spouse’s divorce lawyer never hires Phil Hill as a private investigator.

The post Unizin: Indiana University’s Secret New “Learning Ecosystem” Coalition appeared first on e-Literate.

Policy Updates on FERPA and Net Neutrality

Thu, 2014-05-15 18:01

Two policy debates that could have a significant impact on education entered their next stages this week: updates on FERPA and data privacy, and FCC proposals on Net Neutrality.

FERPA Modernization

I recently wrote about the new federal moves to update FERPA to handle the age of Big Data (should I have used scare quotes there?).

Yesterday the White House released its report on big data and privacy implications. The focus was broadly on big data, but there will be implications for ed tech, with several key recommendations focused specifically on the education sector. In particular, there will be a push to update and revise the Family Educational Rights and Privacy Act (FERPA, enacted in 1974) and the Children’s Online Privacy Protection Act (COPPA, enacted in 1998).

I was quite optimistic about the federal approach based on this report, and yesterday we got some more apparently good news – a bipartisan effort to improve data privacy and update FERPA in a reasonable fashion (dogs and cats, living together). As reported by THE Journal, Senators Markey (D-MA) and Hatch (R-UT) introduced the “Protecting Student Privacy Act”, which even drew a warm reception from the Software and Information Industry Association (SIIA), an organization that had objected to the earlier version discussed in January. THE Journal summarized the key parts of the legislation (full text here):

  • Prohibiting the use of personally identifiable student data for advertising or marketing purposes;
  • Requiring certain safeguards be in place to protect the integrity of data in the hands of private companies;
  • Giving parents the explicit right to view their children’s data and make corrections to erroneous information;
  • Making available the names of every company that has access to a district’s student data;
  • Limiting the personally identifiable information that can be transferred between companies and schools; and
  • Ensuring that “private companies cannot maintain dossiers on students in perpetuity by requiring the companies to later delete personally identifiable information.”

FCC and Net Neutrality

Now that you’re in a good mood with renewed faith in federal policy-making, let’s bring back that cynicism. The FCC, despite significant protests, approved a plan to allow paid priority on the Internet, potentially killing the Net Neutrality concept. From the Washington Post:

The Federal Communications Commission on Thursday voted in favor of advancing a proposal that could dramatically reshape the way consumers experience the Internet, opening the possibility of Internet service providers charging Web sites for higher-quality delivery of their content to American consumers.

The plan, approved in a three-to-two vote along party lines, could unleash a new economy on the Web where an Internet service provider such as Verizon would charge a Web site such as Netflix for faster video streaming. The proposal would, though, prohibit telecom firms from outright blocking Web sites.

THE Journal ran a piece in January on the debate about Net Neutrality and how it could affect education. Some Pollyanna argued in that piece that [emphasis added]:

However, not everybody agrees with a grim conclusion that education is destined to suffer under a “two tier” Internet. “I think the reaction to the net neutrality ruling is overwrought and distracting,” said Phil Hill, co-founder of MindWires, an education technology consulting practice. What’s important to remember, he said, is that the court “actually agreed” that the FCC has “legal authority over broadband.”

Hill asserted that net neutrality “is an important principle for education, as we need freedom for students and institutions to access Internet services that are becoming more and more important without having broadband carriers decide on which services have priority over others.” However, he pointed out, “This should allow the FCC to implement new rules that don’t step on the toes of the common carrier rules. In other words, as long as the FCC doesn’t screw up, it should be able to regulate and enforce net neutrality with future rules. But there will be a lot of overheated rhetoric in the meantime. I see short-term confusion for K-12 but long-term there should be little or no lasting impact.”

Oh wait, that was me, and I was completely wrong.

The post Policy Updates on FERPA and Net Neutrality appeared first on e-Literate.