Michael Feldstein

What We Are Learning About Online Learning...Online

WCET, UPCEA & Sloan-C call on DOE to change State Authorization proposal

Mon, 2014-06-16 20:46

What does it take to get all of the higher education institutions and associations to agree? Apparently the answer is for the Department of Education to propose its new State Authorization regulations.

As part of DOE’s negotiated rulemaking process over the past half year, representatives ranging from schools (Columbia University, Youngstown State University, Benedict College, Santa Barbara City College, Clemson University, MIT, Capella University) to higher ed associations (WCET) were unanimous in their rejection of the proposed State Authorization rules. As Russ Poulin wrote for WCET:

On Tuesday May 20, the Committee had its final vote on the proposed language. I voted “no.” I was joined in withholding consent by all the representatives of every higher education sector. Nine out of sixteen negotiators voting “no” is a high ratio.

Note that only one of the mentioned groups is a for-profit university – the purported offenders causing the need for the regulations. I wrote a post arguing that the proposed rules represented a dramatic increase in control over distance education that would cause a significant increase in compliance and administrative overhead for both colleges / universities and for states themselves.

Predictably, the rulemaking process ended in a lack of consensus, which allows the DOE to propose whatever language it desires. The latest proposal came from DOE itself, and it would make sense for the final rule to follow this language closely.

Here’s what is newsworthy – the idea and proposed language are so damaging to innovation in higher ed (which the DOE so fervently supports in theory) and so burdensome to institutions and state regulators that three higher ed associations have banded together to oppose the proposed rules. WCET (WICHE Cooperative on Educational Technologies), UPCEA (University Professional and Continuing Education Association) and Sloan-C (Sloan Consortium) wrote a letter to Secretary Arne Duncan calling on the DOE to reconsider its planned State Authorization regulations. As the intro states [emphasis added]:

The member institutions of our three organizations are leaders in the practice of providing quality postsecondary distance education to students throughout the nation and the world. Our organizations represent the vast majority of institutions that are passionate about distance education across the country and across all higher education sectors.

For the first time our organizations are joining with one voice to express our concern over the Department of Education’s “state authorization for distance education” proposal(1) that was recently rejected by most of the members of the Program Integrity and Improvement Negotiated Rulemaking Committee. Our comments are focused on the final draft proposal presented to the Committee. We believe the final draft represents the most current thinking of Department staff as they construct a regulation for public comment.

We are eager to promote policies and practices that protect consumers and improve the educational experience of the distance learner. Unfortunately, the final draft regulation would achieve neither of those goals.

The impact of the proposed regulations would be large-scale disruption, confusion, and higher costs for students in the short-term. In addition, there would be no long-term benefits for students. This letter briefly outlines our concerns and provides recommendations that achieve the Department’s goals without disrupting students enrolling in distance education programs across state lines.

As an example of the problems with the latest proposal:

Second, when pressed to define an “active review,” the Department provided a short list of criteria that states could use in the review, such as submitting a fiscal statement or a list of programs to be offered in the state. While it may sound simple to add a few review criteria, state regulators cannot act arbitrarily. Their authorization actions must be based on state laws and regulations. Therefore, state laws would need to be changed and the state regulators would need to add staff to conduct the necessary reviews. Our analysis estimates that 45 states would need to make these changes. This is a large amount of activity and added costs for what appears to be a “cursory” review. These reviews will likely not change a decision regarding an institution’s eligibility in a state. There is no benefit for the student.

The letter does not just list objections but also offers eight concrete recommendations that would help DOE achieve its stated goals.

Michael and I fully endorse this letter and also call on the DOE to rethink its position.

The full letter can be found at WCET’s site along with an explanatory blog post.


Starbucks Paying for Employees’ Tuition at ASU Online

Mon, 2014-06-16 10:08

This is a big deal:

Starbucks will provide a free online college education to thousands of its workers, without requiring that they remain with the company, through an unusual arrangement with Arizona State University, the company and the university will announce on Monday.

The program is open to any of the company’s 135,000 United States employees, provided they work at least 20 hours a week and have the grades and test scores to gain admission to Arizona State. For a barista with at least two years of college credit, the company will pay full tuition; for those with fewer credits it will pay part of the cost, but even for many of them, courses will be free, with government and university aid.

Over the past few decades, America has slowly but surely been transitioning from a system in which college education was treated as a public good (and therefore subsidized by taxpayers) to one in which it is treated as a private good (and therefore paid for entirely by students and their families). And while this benefit is no substitute for that older model, it is interesting and important that Starbucks is positioning college tuition the way companies position health insurance plans – as a benefit they use to compete for better workers.

This is not an entirely new idea. Many companies offer tuition reimbursement, although it often comes with more restrictions and is typically aimed at white-collar workers. A while back, Wal-Mart made headlines by offering heavily subsidized (but not free) college credit in partnership with APU. Starbucks takes this to the next level. Since both Wal-Mart and Starbucks have reputations as union busters, it will be interesting to see how their respective college subsidization moves affect their struggles with their labor forces. Will tuition help them lower demand for unionization? Will it become another bargaining chip at the negotiating table?

I wrote a while back about the idea of reviving the apprenticeship for the digital age and gave an example of an Indian tech company that is doing it. I think we’re going to see a lot more variations on the theme of employer-funded education in the future.

You can learn more about the Starbucks college program at their website.


Why Google Classroom won’t affect institutional LMS market … yet

Sun, 2014-06-15 16:54

Yesterday I shared a post about the new Google Classroom details that are coming out via YouTube videos, and as part of that post I made the following statement [emphasis added]:

I am not one to look at Google’s moves as the end of the LMS or a complete shift in the market (at least in the short term), but I do think Classroom is significant and worth watching. I suspect this will have a bigger impact on individual faculty adoption in higher ed or as a secondary LMS than it will on official institutional adoption, at least for the next 2 – 3 years.

The early analysis is based on this video that shows some of the key features:

There is also a new video showing side-by-side instructor and student views that is worth watching.

Here’s why I believe that Classroom will not affect the LMS market for several years. Google Classroom is a slick tool that appeals to individual instructors whose schools use Google Apps for Education (GAE) – primarily K-12 instructors but also higher ed faculty members. The tight integration of Google Drive, Google+ and GAE rosters allows for easy creation of course sites by the instructor, easy sharing of assignments and documents (particularly where the instructor creates the GDrive document and has students directly edit and add to that document), and easy feedback and grading of individual assignments. Working within the GAE framework, there are a lot of possibilities for individual instructors or instructional designers to expand the course tools. All of these features are faculty-friendly and support Google’s promise of “More time for teaching; more time for learning”.

But these features are targeted at innovators and early adopter instructors who are willing to fill in the gaps themselves.

  • The course creation, including setting up of rosters, is easy for an instructor to do manually, but it is manual. There has been no discussion that I can find showing that the system can automatically create a course, including roster, and update over the add / drop period.
  • There is no provision for multiple roles (student in one class, teacher in another) or for multiple teachers per class.
  • The integration with Google Drive, especially with Google Docs and Sheets, is quite intuitive. But there is no provision for PDF or MS Word docs or even publisher-provided courseware.
  • There does not appear to be a gradebook – just grading of individual assignments. There is a button to export grades, and I assume that you can combine all the grades into a custom Google Sheets spreadsheet or even pick a GAE gradebook app. But there is no consistent gradebook available for all instructors within an institution to use and for students to see consistently.
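One consequence of that last point: an instructor who wants a course-wide gradebook would have to stitch the per-assignment exports together by hand. Here is a minimal sketch of that workaround, assuming each assignment exports a CSV with student_email and grade columns (an assumption for illustration, not Classroom’s documented export format):

```python
# Sketch: merge hypothetical per-assignment grade exports into one gradebook.
# The CSV layout (student_email, grade columns) is an assumption, not
# Classroom's documented export format.
import csv
import os
from collections import defaultdict

def build_gradebook(export_paths):
    """Return {student_email: {assignment_name: grade}} from CSV exports."""
    gradebook = defaultdict(dict)
    for path in export_paths:
        assignment = os.path.splitext(os.path.basename(path))[0]
        with open(path, newline="") as f:
            for row in csv.DictReader(f):
                gradebook[row["student_email"]][assignment] = row["grade"]
    return gradebook

# Usage: one exported CSV per assignment.
book = build_gradebook(["exports/essay1.csv", "exports/quiz1.csv"])
for student, grades in sorted(book.items()):
    print(student, grades)
```

Workable for an early adopter, but exactly the kind of manual glue work that the majority of faculty will not take on.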

For higher ed institutions in particular, we are just now getting to the stage where the majority of faculty use the institutional LMS. I am seeing more and more surveys at individual institutions where 70+% of faculty use the LMS for most of their courses. What this means, however, is that we now have different categories of adopters for the institutional LMS – the early majority (characterized by a pragmatic approach) and the late majority (characterized by a conservative approach), as shown by the technology adoption curve. I am showing the version that Geoffrey Moore built on top of the Everett Rogers base model.

[Figure: technology adoption lifecycle with Moore’s chasm]

With adoption often above 50% of faculty, the institution has to serve both the group on the left (innovators and early adopters) and the larger group on the right (the early and late majority, more so than laggards). As poorly designed as some institutional LMS solutions are, they typically allow automatic course and roster creation with updates, sharing of multiple document types, integrated standard gradebooks, and many other features.

Institutions can (and really should) allow innovators and early adopters to try out new solutions and help create course designs not bound by the implied pedagogy of the standard LMS, but institutions cannot ignore the majority of faculty, who are typically unwilling to spend their own time filling in the technology gaps – especially now that these faculty are just getting used to the LMS.

None of this argues that Google Classroom is an inferior tool – it is just not designed to replace the full-featured LMS. Remember that Google is a technology-vision company that is comfortable putting out new tools before it understands how they will be used. Google is also comfortable playing the long game, getting more and more instructors and faculty using the new toolset, giving feedback, and pushing it forward. This process will take some time to play out – at least 2 or 3 years in my opinion before a full institutional LMS may be available, and only if Google likes the direction Classroom usage is going.

Google Classroom does attempt to address some of the instructor use cases, but it is not designed as a holistic product. Think of Classroom as ‘let’s see how to tie existing Google tools together to advance the ball in the general course site world’. It is still technology-first – and more specifically, Google-technology-first. The use cases are simple (e.g., one instructor sharing a GDrive-based assignment with students who edit it and then submit it for feedback), but there are many possibilities for clever faculty to innovate.

In the near term, Google Classroom will likely be a factor in individual faculty adoption (innovators, early adopters) at schools with GAE licenses, or even as a secondary LMS – but not as a replacement for the institutional LMS.


Google Classroom: Early videos of their closest attempt at an LMS

Sat, 2014-06-14 12:51

For years the ed tech community has speculated about Google entering the LMS market – with Wave (discontinued, but with some key features embedded in other tools), with Apps for Education, and even, incorrectly, with Pearson OpenClass. Each time there were some possibilities, but Google has not shown interest in fully replacing LMS functionality.

Google Classroom, announced in May and with new details coming out this week, is the closest that Google has come to fully providing an LMS. The focus is much more on K-12 than on higher education.

For background on the concept, see this video from May, which emphasizes roster integration (with some instructor setup required), document sharing through Google Drive, hosting and security through the cloud, discussions tied to assignments, and a focus on ‘letting teachers and students teach or learn their way’.

Yesterday Google shared a new video that directly shows the user interface and some key features.

You can see more directly how to create a class, add students, create assignments, share resources (GDrive, YouTube, websites), have students edit in Google Docs, track student submissions, and grade individual assignments.

What I don’t see (and admittedly this is based on short videos) is a full gradebook. The grades are tied to each assignment, but the videos do not show any collection of grades into a course-based gradebook.

[Screenshot: assignment grades view]

Teachers with early access to Google Classroom are starting to upload videos with their reviews. Dan Leighton has one from a UK perspective that has more details, including his concern that the system has a ‘sage on the stage’ design assumption.

I am not one to look at Google’s moves as the end of the LMS or a complete shift in the market (at least in the short term), but I do think Classroom is significant and worth watching. I suspect this will have a bigger impact on individual faculty adoption in higher ed or as a secondary LMS than it will on official institutional adoption, at least for the next 2 – 3 years.


Unizin membership fee is separate from Canvas license fee

Thu, 2014-06-12 12:12

With Unizin going public yesterday, I’ve been looking over our three posts at e-Literate to see if there are any corrections or clarifications needed.

Based on yesterday’s press release, official web site release and media event, I have not found any corrections needed, but I do think there is a clarification needed on whether Unizin membership includes Canvas usage or not (it does not).

Common Infrastructure

During the media call, the Unizin team (representatives from the four founding schools plus Internet2 and Instructure) pushed the common infrastructure angle. Rather than a buying club where schools can buy from a common set of pre-negotiated services, Unizin is more accurately viewed as the planned integration and operation of a common service that allows white-label outsourcing of the learning infrastructure. The idea is that a school can use Unizin’s infrastructure for the LMS, content repository and analytics engine rather than hosting or maintaining its own, but the service will not be branded as Unizin for the faculty and student end users. From the Unizin FAQ page:

There is another sense in which Unizin is different from MOOC efforts. Unizin is about providing common infrastructure to support the missions of its university members. It is not a public-facing brand. It will not offer content, courses, or degrees in its own name. Unizin’s membership model is built on the premise that universities need to strengthen their influence in the use of data and content for the long run. They will accomplish this goal by working together and taking greater control over content, while also opening content up for selective sharing.

In this morning’s IHE article, the Unizin founders described the need for a common infrastructure:

The digital learning consortium, announced Wednesday morning, aims to simplify how universities share learning analytics, content and software platforms. But in order to do so, Unizin needs its members to use the same infrastructure. A common learning management system is the first part of that package.

“You don’t really have common infrastructure if you’re saying everything is heterogeneous,” said Brad Wheeler, the Unizin co-founder who serves as vice president for IT and chief information officer at Indiana University. “A lot of these different learning tools — Sakai, Blackboard, Canvas — they all do a bunch of really good stuff. But five universities picking five different ones — what’s the end value in that if they want to do something together?”

Brad Wheeler went on to describe the results that will occur from sharing infrastructure:

“This is a hard decision,” Wheeler said about picking Canvas. “I think the key point is enabling Unizin to do what it’s meant to do…. The path for Unizin is creating a dependence on Unizin — a university-owned entity — but creating potential for greater interdependence among our institutions.”

Instead of differentiating themselves based on what software tools they individually pick, Wheeler said, Unizin’s member institutions will stand out based on what they do with the common platform — in other words, the degrees they offer, the research they produce and the students they serve.

Clarification on Canvas Licensing

It has not been clear whether Unizin membership gives a school access to Canvas or whether membership gives a school the ability to purchase a Canvas license using the Unizin agreement. In other words, do Unizin member institutions pay $1 million for Unizin membership and also pay Instructure for Canvas, or do they just pay for Unizin membership? This question goes beyond Canvas – when a learning repository or analytics engine is in place, will schools have to pay extra for that solution, or is it already included in membership?

From the Unizin FAQ:

Will Unizin be a Canvas reseller?
No. Members of Unizin will be able to obtain the Canvas LMS product for campus use via their membership. If members already have Canvas, they can maintain that existing relationship (either via a direct contract with Instructure or via Internet2 Net+ services) or take advantage of the Unizin agreement.

What does “will be able to obtain” mean? Brad Wheeler clarified the situation through an email exchange.

The investment capitalizes Unizin for the integration and operation of the service including a Content and Analytics capability. The Canvas costs are pass-through as they vary ‎by institutional size.

So schools pay for Unizin membership and they pay for Canvas. Presumably there will be a similar arrangement for future learning repository and analytics engines, if these components end up being commercially-developed software.

As additional clarification, and in answer to our previous open questions: Canvas usage is on Instructure’s current multi-tenant production site, just as it is for current customers. Unizin presumably will have outsized influence on the product roadmap and on open source code additions (through integrated apps or possibly changes to the Canvas code itself).

We have also heard that there is a support fee for indirect support of the Canvas service of some percentage of the license fee. Treat this component as strong speculation rather than a confirmed detail.

In summary, this means that member institutions pay for the following for the current service:

  • Just over $1 million paid over 3 years (~$350k per year according to the media event) for Unizin membership to create the organization that integrates and operates the service (LMS + Content + Analytics);
  • Canvas license fee paid one of three ways: 1) pass-through using Unizin agreement, 2) Net+ agreement from Internet2, or 3) existing private license between school and Instructure; and
  • Potential (not confirmed) support fee paid to Unizin consortium for their indirect support of Canvas as a service.
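To put rough numbers on this structure: only the membership figure below comes from the media event; the license and support numbers are placeholders, since the Canvas pass-through varies by institutional size and the support fee is unconfirmed.

```python
# Back-of-the-envelope Unizin cost sketch. Only the membership figure comes
# from the media event; the other two numbers are hypothetical placeholders.
MEMBERSHIP_PER_YEAR = 350_000       # ~$1M commitment spread over 3 years
canvas_license = 200_000            # hypothetical; pass-through varies by size
support_fee_rate = 0.10             # speculative percentage of the license fee

annual = MEMBERSHIP_PER_YEAR + canvas_license + support_fee_rate * canvas_license
print(f"Hypothetical annual cost:  ${annual:,.0f}")
print(f"Hypothetical 3-year cost:  ${3 * annual:,.0f}")
```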


It’s Official: Unizin Is Real

Wed, 2014-06-11 10:24


A giant deity from the confines of space and time, Unizin has orbited the Earth like a comet, appearing once every twelve years of Christmas Eve. One man by the name of Dr. Kori wanted to study Unizin after seeing the monster himself when he was a boy. Even though he was called a con artist trying to capture Unizin he never gave up his search. On Christmas Eve day, he was working on his self made radar that would find Unizin when Elly came and was shocked by the device. After Dr. Kori repaired her the two made a trap to capture Unizin. That night, the dimensional kaiju made himself known as both DASH and Dr. Kori captures it in a mystical trap, but time around him began to disappear. Kaito turned into Ultraman Max and made a barrier to slow down the process. Not wanting the graceful giant to be hurt Dr. Kori let it go, restoring everything to normal. Unizin gave the doctor the branch of a tree he had not seen in some time, whether it was intended to be given to him or a reward for releasing him was unknown, but he was grateful nonetheless.

- The Ultraman Wiki

No, not that Unizin. This Unizin. The secret university consortium is no longer secret. Phil and I wrote a few posts about the consortium before the group went public.

So far, four of the ten universities we reported were considering joining have officially and publicly joined: Indiana University, University of Michigan, Colorado State University, and University of Florida.

Here’s a roundup of the news coverage.

Probably most important to read, in addition to IHE’s coverage and ours, is the “Why Unizin?” blog post on the Unizin website.

There was a press call this afternoon, so I expect we will be seeing more articles over the next few days. Of course, Phil and I will be providing some additional analysis as well. Stay tuned.


A response to new NCES report on distance education

Wed, 2014-06-11 08:30

By Phil Hill and Russ Poulin, cross-posted to WCET blog

Last week the National Center for Education Statistics (NCES) released a new report analyzing the new IPEDS data on distance education. The report, titled Enrollment in Distance Education Courses, by State: Fall 2012, is a welcome addition for those interested in analyzing and understanding the state of distance education (mostly in online formats) in US higher education.

The 2012 Fall Enrollment component of the Integrated Postsecondary Education Data System (IPEDS) survey collected data for the first time on enrollment in courses in which instructional content was delivered exclusively through distance education, defined in IPEDS as “education that uses one or more technologies to deliver instruction to students who are separated from the instructor and to support regular and substantive interaction between the students and the instructor synchronously or asynchronously.” These Web Tables provide a current profile of enrollment in distance education courses across states and in various types of institutions. They are intended to serve as a useful baseline for tracking future trends, particularly as certain states and institutions focus on MOOCs and other distance education initiatives from a policy perspective.

We have previously done our own analysis of the new IPEDS data at both the e-Literate and WCET blogs. While the new report is commendable in its improved access to this important dataset, we feel the missing analysis and the potentially misleading introductory narrative take away from its value.

Value of Report

The real value of this report in our opinion is the breakdown of IPEDS data by different variables such as state jurisdiction, control of institution, sector and student level. Most people are not going to go to the trouble of generating custom tables, so including such data in a simple PDF report will go a long way towards improving access to this important data. As an example of the data provided, consider this excerpt of table 3:

[Table 3 excerpt from the NCES report]

The value of the data tables and the improved access to this information are precisely why we are concerned about the introductory text of the report. These reports matter.

Need for Better Analysis and Context

We were hoping to see some highlights or observations in the report, but the authors decided to present the results as “Web Tables” without any interpretation. From one standpoint, this is commendable because NCES is playing an important role in providing the raw data for pundits like us to examine. It is also understandable that since this was the first IPEDS survey regarding distance education in many years, there truly was no baseline data for comparison. Even so, a few highlights of significant data points would have been helpful.

There also is a lack of caveats. The biggest one has to do with the state-by-state analyses. Enrollments follow where the institution is located and not where the student is located while taking the distance courses. Consider Arizona: the state has several institutions (Arizona State University, Grand Canyon University, Rio Salado College, and the University of Phoenix) with large numbers of enrollments in other states. Those enrollments are all counted in Arizona, so the state-by-state comparisons have specific meanings that might not be apparent without some context provided.

Even though there are no highlights, the first two paragraphs contain a (sometimes odd) collection of references to prior research. These citations raise the question of what the tables in this report have to say on the same points of analysis.

Postsecondary enrollment in distance education courses, particularly those offered online, has rapidly increased in recent years (Allen and Seaman 2013).

This description cites the long-running Babson Survey Research Group report by Allen and Seaman. Since the current IPEDS survey provides baseline data, there is no prior work against which to judge growth; therefore, this reference makes sense to include. It would have made sense, however, to provide some explanation of the key differences between IPEDS and Babson data. For example, Phil described in e-Literate the fact that there is a major discrepancy in the number of students taking at least one online course – 7.1 million for Babson and 5.5 million for IPEDS. Jeff Seaman, one of the two Babson authors, is also quoted in e-Literate on his interpretation of the differences. The NCES report would have done well to at least refer to these significant differences.

Traditionally, distance education offerings and enrollment levels have varied across different types of institutions. For example, researchers have found that undergraduate enrollment in at least one distance education course is most common at public 2-year institutions, while undergraduate enrollment in online degree programs was most common among students attending for-profit institutions.

This reference indirectly cites a previous NCES survey that used a different methodology regarding students in 2007-08.

  • That survey found that enrollment in at least one distance education course was “most common” at public 2-year colleges and the new data reaffirms that finding.
  • Enrollment in fully distance programs was “most common” in students attending for-profit institutions and the new data reaffirms that finding. However, leaving the story there perpetuates the myth that “distance education” equals “for-profit education.” The new IPEDS data show (see Table 1 below from a WCET post by Russ) that 35% of students enrolled exclusively at a distance attend for-profit institutions and only 5% of those who enroll in some (not all) distance courses attend for-profits. People are often amazed at what a big portion of the distance education market is actually in the public sector.

[Table 1 from Russ’s WCET post]

A 2003 study found that historically black colleges and universities (HBCUs) and tribal colleges and universities (TCUs) offered fewer distance education courses compared with other institutions, possibly due to their smaller average size (Government Accountability Office 2003)

What a difference a decade makes. Both types of institutions show few of their students enrolled completely at a distance, but they are now above the national average in terms of the percentage of students enrolled in some distance courses in Fall 2012.

Rapidly changing developments, including recent institutional and policy focus on massive open online courses (MOOCs) and other distance education innovations, have changed distance education offerings.

Only a small number of MOOCs offer instruction that would be included in this survey. We’re just hoping that the uninformed will not think that the hyperbolic MOOC numbers have been counted in this report. They have not.

Upcoming Findings on Missing IPEDS Data

We are doing some additional research, but it is worth noting that we have found some significant cases of undercounting in the IPEDS data. In short, there has been confusion over which students get counted in IPEDS reporting and which do not. We suspect that the undercounting, which is independent of distance education status, is in the hundreds of thousands. We will describe these findings in an upcoming article.

In summary, the new NCES report is most welcome, but we hope readers do not make incorrect assumptions based on the introductory text of the report.


Learner-Centered Analytics: Example from UW La Crosse MOOC research

Sun, 2014-06-08 10:15

Last week I wrote a post What Harvard and MIT could learn from the University of Phoenix about analytics. As a recap, my argument was:

Beyond data aggregated over the entire course, the Harvard and MIT edX data provides no insight into learner patterns of behavior over time. Did the discussion forum posts increase or decrease over time, did video access change over time, etc? We don’t know. There is some insight we could obtain by looking at the last transaction event and number of chapters accessed, but the insight would be limited. But learner patterns of behavior can provide real insights, and it is here where the University of Phoenix (UoP) could teach Harvard and MIT some lessons on analytics.

Beyond the University of Phoenix, there are other examples of learner-centered analytics exploring usage patterns over time. While I was at a summit at the University of Wisconsin at La Crosse last week, Bob Hoar showed me some early results of their “UW-System College Readiness Math MOOC” research, which is part of the MOOC Research Initiative. I interviewed Bob Hoar and Natalie Solverson as part of e-Literate TV, where they described their research project:

The results to date focus on capturing and visualizing the student patterns, and progress can be tracked at this project site (go to the site to see interactive graphics).

The Desire2Learn learning management system recorded a log containing over 1.2 million ‘events’ that occurred during the first few months of the MOOC. Each event corresponds to the action of a particular student. Each action, as well as a timestamp, was recorded. The image below contains a graphical representation of the elements of the course. The items on the horizontal axis (y=0) represent information about the course (syllabus, FAQ, how to contact instructors, etc.). The other items in the chart relate to the mathematics content; the (WebWork) homework, the quizzes, the online office hours and the live tutoring and lectures for the 9 math modules in the course. Hover over each bubble to see a short description of the item.

[Figure: course elements]

The idea is that each element represents a learning object within a module. The enforced structure of the course was that students have to complete the quiz for each module before moving to the next module, but once there, students can choose the order and timing of each learning object interaction and even when to go back to review previous modules. The next visualization tracks one student (anonymized data) over the course duration, with color coding for the type of element, the vertical axis capturing the specific module, and the horizontal axis capturing items visited in time order. To make the Google motion chart work, the research team used counts of items visited and mapped them artificially to years starting in 1900 (there are ~300 items visited over the course).
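As a rough illustration of the processing behind this kind of visualization: given an event log of (student, item, timestamp) rows, reconstructing each student’s time-ordered path through the course elements is a simple sort-and-group. The column names below are illustrative assumptions, not D2L’s actual log schema.

```python
# Sketch: reconstruct per-student, time-ordered paths through course items
# from an event log. Column names are illustrative, not D2L's actual schema.
import pandas as pd

events = pd.DataFrame({
    "student_id": ["s1", "s1", "s2", "s1", "s2"],
    "item": ["syllabus", "module0_quiz", "syllabus", "module1_hw", "module0_quiz"],
    "timestamp": pd.to_datetime([
        "2013-09-01 09:00", "2013-09-01 09:20", "2013-09-01 10:00",
        "2013-09-03 19:45", "2013-09-02 08:30",
    ]),
})

# Each student's items in visit order -- the raw material for a motion chart.
paths = events.sort_values("timestamp").groupby("student_id")["item"].agg(list)
print(paths)
```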

[Figure: course activity]

This view shows some interesting patterns, which are described on the project site (they described a bubble motion chart, but I find the visualization above more informative).

The motion map quickly illustrates that the student visited nearly every course element, and, after completing the first few modules, they did not need to return to the course information module. This indicates that the student quickly understood the design of the course. In addition, the video indicates that the student occasionally jumped back to earlier material in the course. Such movements may indicate that the learning materials in the location of the jump may need to be reviewed.

Now that we’re looking at student patterns over time, the analytics are much more meaningful from a learning perspective than they would be with just course completion rates or A/B testing results. Learning is a process that cannot be reduced to independent events. With many online courses, students can now create their own learning pathways. In the example above, notice how the student frequently reviewed modules 0, 2 and 6. This information could be used to study how students learn and how to improve course designs. Research teams would do well to put more focus on learner patterns, and MOOC platforms would do well to make this research easier.

The UW La Crosse research team has not finished their analysis, but the early results show a much richer approach to analytics than focusing on single person-course measurements or aggregated analysis of a single event.


Three Makes a Movement: Branson creates youth panel for student voice in ed tech

Sat, 2014-06-07 15:18

Based on my involvement in the Evolve conference sponsored by the 20 Million Minds Foundation, held in January, I wrote a series of posts – three main ones – covering the discussions around online education and educational technology.

During the conference I put out a call for other conferences to follow 20MM’s lead and work harder to directly include students in their discussions of ed tech – full post here and video below:

Before we get to the analyses, however, it is important to highlight once again how unique this format is in education or ed tech settings. There is plenty of discussion about needing course design and support services that are learner-centric, yet typically ed tech conferences don’t have learner-centric discussions. We need to stop just talking about students and add the element of talking with students.

While I do not believe there is a direct connection, this week Sir Richard Branson created a youth panel as part of the UK’s Generation Tech review, giving students a direct voice in educational technology. The panel’s focus is K-12 usage and is described in The Telegraph:

Young people will be given the chance to voice their ideas about how technology can support learning in the UK, thanks to a new council being created as part of the ‘Generation Tech’ review.

The new Digital Youth Council, a panel of students aged between 13 and 17, will share their experiences with technology and discuss ways in which education technology can be improved in a classroom setting. [snip]

The council is being created as part of a wider review, launched at the end of April and led by Sir Richard Branson, looking at what impact technology is having in schools and what the future holds for teachers and pupils alike.

As children become increasingly confident using new technology, schools have often struggled to keep up – however, many classrooms are now equipped with tablets, interactive white boards and online learning platforms which allow teachers to more effectively monitor pupils’ learning.

The wider Generation Tech review is set to analyse how these new technologies are impacting education.

This is welcome news, and I hope these two efforts, along with WCET’s commitment for a student panel in their fall conference, mark the start of a movement. Who else will join? Are there other examples people can share in the comments?


eCampus News Advisory Board and Gophers

Thu, 2014-06-05 20:25

I have recently accepted an eCampus News offer to be part of their new advisory board. The idea is for me and the 10 other members to help their editors get a better handle on the industry while also providing useful information to readers through opinion, advice or commentary. The other 10 members of the advisory board:

  • Brian Lukoff, Program Director for Learning Catalytics at Pearson Education
  • Crystal Sands, Director of the Online Writing Lab at Excelsior College
  • Connor Gray, Chief Strategy Officer at Campus Management
  • David J. Hinson, Executive Vice President & Chief Information Officer of Hendrix College
  • Joanna Young, Chief Information Officer and AVP for Finance & Budget at the University of New Hampshire
  • John Orlando, Northcentral University Associate Director of Faculty Training in the Center for Faculty Excellence
  • Mark Baker, Assistant Registrar at Whitworth University
  • Paige Francis, Chief Information Officer for Fairfield University
  • Roxann Riskin, Technology Specialist/Technology Student Assistant Service Supervisor at Fairfield University
  • Salwa Ismail, Head of the Department of Library Information Technology at the Georgetown University Library

There is an article in eCampus News introducing the advisory committee, including bios, thoughts on trends and game-changers, and some personal thoughts. I’ve included my thoughts below (couldn’t help myself on the quote). Judging by others’ responses, this is an eclectic group with quite a broad array of interests, and I’m looking forward to this new role.

The game-changer: Despite the hype of adaptive learning as an automated, black-box, magic-bullet solution, the broader field of personalized learning is likely to be a game changer in higher ed. For the first generation of online learning, the tendency was to replicate the factory model of education (one size fits all) but just do it online. For the second generation, the ability to use online technologies to create multiple pathways for students and to personalize learning will be a strength that can even go beyond face-to-face methods (for any classes larger than 10 to 15 students). We’re already starting to see some real improvements in remedial coursework based on students’ use of personalized learning tools, but this has been in pilot programs to date. As this usage spreads over time, personalized learning, including adaptive data-driven systems, will present real change to our educational system.

Passion: Transparency in education. Like Laura Gibbs, I believe in the open syllabus concept where students should be able to see what is in a course without having to enroll; while ed-tech vendors and open source providers can be very supportive of education, we should have an open view of how well the products and companies are doing; when schools adopt strategic technology initiatives, the process should be open and inclusive; schools should have their results (including academic performance of students) open for others to view. I realize there are risks involved, such as the over-simplification of college scorecards, but the general need for transparency is one that I firmly support.

Hobby: Traveling with family and experiencing local cultures. Whether that is simply a different town or region of California, or different locations internationally, my wife and I enjoy seeing new places and trying to embed ourselves with locals.

Quote/Belief: “I have to laugh, because I’ve out-finessed myself. My foe, my enemy, is an animal. And in order to conquer an animal, I have to think like an animal, and—whenever possible—to look like one. I’ve gotta’ get inside this guy’s pelt and crawl around for a few days.” – C Spackler

Update: In what could be one of my biggest professional mistakes ever, I listed groundhogs instead of gophers in reference to the Carl Spackler quote (confusing Bill Murray movies). You cannot imagine my self-disappointment at this point. Mea culpa.


No, I don’t believe that Harvard or MIT are hiding edX data

Tue, 2014-06-03 12:58

Since my Sunday post What Harvard and MIT could learn from the University of Phoenix about analytics, there have been a few comments with a common theme about Harvard and MIT perhaps withholding any learner-centered analytics data. As a recap, my argument was:

Beyond data aggregated over the entire course, the Harvard and MIT edX data provides no insight into learner patterns of behavior over time. Did the discussion forum posts increase or decrease over time, did video access change over time, etc? We don’t know. There is some insight we could obtain by looking at the last transaction event and number of chapters accessed, but the insight would be limited. But learner patterns of behavior can provide real insights, and it is here where the University of Phoenix (UoP) could teach Harvard and MIT some lessons on analytics.

Some of the comments that are worth addressing:

“Non-aggregated microdata (or a “person-click” dataset, see http://blogs.edweek.org/edweek/edtechresearcher/2013/06/the_person-click_dataset.html ) are much harder (impossible?) to de-identify. So you are being unfair in comparing this public release of data with internal data analytic efforts.”

“Agreed. The part I don’t understand is how they still don’t realize how useless this all is. Unless they are collecting better data, but just not sharing it openly, hogging it to themselves until it ‘looks good enough for marketing’ or something.”

“The edX initiative likely has event-level data to analyze. I don’t blame them for not wanting to share that with the world for free though. That would be a very valuable dataset.”

The common theme seems to be that there must be learner-centered data over time, but Harvard and MIT chose not to release this data either due to privacy or selfish reasons. This is a valid question to raise, but I see no evidence to back up these suppositions.

Granted, I am arguing without definitive proof, but this is a blog post, after all. I base my argument on two points – there is no evidence of HarvardX or MITx pursuing learner-centered long-running data, and I believe there is great difficulty getting non-event or non-aggregate data out of edX, at least in current forms.

Update: See comments starting here from Justin Reich from HarvardX. My reading is that he agrees that Harvard is not pursuing learner-centered long-running data analysis (yet, and he cannot speak for Stanford or MIT), but that he disagrees about the edX data collection and extraction. This does not capture all of his clarifications, so read comments for more.

Evidence of Research

Before presenting my argument, I’d again like to point out the value of the HarvardX / MITx approach to open data, as well as the very useful interactive graphics. Kudos to the research teams.

The best places to see what Harvard and MIT are doing with their edX data are the very useful sites HarvardX Data & Research and MITx Working Papers. The best-known research, released as a summary report (much easier to present than a de-identified open dataset), is also based on data aggregated over a course, such as this graphic:

[Figure: World Map of Enrollment, HarvardX]

Even more useful is the presentation HarvardX Research 2013-2014 Looking Forward, Looking Back, which includes a helpful overview of the types of research HarvardX is pursuing.

[Slide: Four kinds of MOOC research]

None of these approaches (topic modeling, pre-course surveys, interviews, or A/B testing) look at learners’ activities over time. They are all based either on specific events with many interactions (a discussion forum on a particular topic with thousands of entries, a video with many views, etc.) or on subjective analysis of an entire course. Useful data, but not based on a learner’s ongoing activities.

I’d be happy to be proven wrong, but I see no evidence of the teams currently analyzing or planning to analyze such learner data over time. The research team does get the concept (see the article on person-click data):

We now have the opportunity to log everything that students do in online spaces: to record their contributions, their pathways, their timing, and so forth. Essentially, we are sampling each student’s behavior at each instant, or at least at each instant that a student logs an action with the server (and to be sure, many of the things we care most about happen between clicks rather than during them).

Thus, we need a specialized form of the person-period dataset: the person-click dataset, where each row in the dataset records a student’s action in each given instant, probably tracked to the second or tenth of a second. (I had started referring to this as the person-period(instantaneous) dataset, but person-click is much better). Despite the volume of data, the fundamental structure is very simple. [snip]

What the “person-period” dataset will become is just a roll-up of person-click data. For many research questions, you don’t need to know what everyone did every second, you just need to know what they do every hour, day or week. So many person-period datasets will just be “roll-ups” of person-click datasets, where you run through big person-click datasets and sum up how many videos a person watched, pages viewed, posts added, questions answered, etc. Each row will represent a defined time period, like a day. The larger your “period,” the smaller your dataset.

All of these datasets use the “person” as the unit of analysis. One can also create datasets where learning objects are the unit of analysis, as I have done with wikis and Mako Hill and Andres Monroy-Hernandez have done with Scratch projects. These can be referred to as project-level and project-period datasets, or object-level and object-period datasets.
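The roll-up described in the quote is mechanically simple once person-click data exists. A minimal sketch, with assumed column names, of aggregating clicks into a person-period (here, person-day) dataset:

```python
# Sketch: roll a person-click dataset up to a person-period (daily) dataset,
# as described in the quoted article. Column names are assumptions.
import pandas as pd

clicks = pd.DataFrame({
    "user_id": ["u1", "u1", "u1", "u2"],
    "action": ["play_video", "forum_post", "play_video", "page_view"],
    "time": pd.to_datetime([
        "2013-10-01 08:01", "2013-10-01 08:30",
        "2013-10-02 21:10", "2013-10-01 12:00",
    ]),
})

person_period = (clicks.assign(day=clicks["time"].dt.date)
                       .groupby(["user_id", "day", "action"])
                       .size()
                       .unstack("action", fill_value=0))
print(person_period)  # one row per user per day, one column per action type
```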

The problem is not with the research team; the problem is with the data available. Note how the article above references future systems and future capabilities. Also notice that none of this “person-period” research is referenced in current HarvardX plans.

edX Data Structure

My gut feel (somewhat backed up by discussions with researchers I trust) is that the underlying data model is the issue, as I called out in my Sunday post.

In edX, by contrast, the data appears to be organized as a series of log files oriented around server usage. Such an organization allows aggregate data usage over a course, but it makes it extremely difficult to actually follow a student over time and glean any meaningful information.

If this assumption is correct, then the easiest approach to data analysis would be to look at server logs for specific events, pull out the volume of user data on that specific event, and see what you can learn; or, write big scripts to pull out aggregated data over the entire course. This is exactly what the current research seems to do.

Learner-Centered Data Analysis Over Time

It is possible to look at data over time, as was shown by two Stanford-related studies. The study Deconstructing Disengagement: Analyzing Learner Subpopulations in Massive Open Online Courses tracked specific learners over time and looked for patterns.

[Figure: Stanford report]

Mike Caulfield, Amy Collier and Sherif Halawa wrote an article for EDUCAUSE Review titled Rethinking Online Community in MOOCs Used for Blended Learning that explored learner data over time.

[Figure: EDUCAUSE Review study]

In both cases, the core focus was learner activity over time. I believe this focus is a necessary part of any learning analytics research program that seeks to improve teaching and learning.

What is interesting in the EDUCAUSE article is that the authors used Stanford’s Class2Go platform, which is now part of OpenEdX. Does this mean that such data analysis is possible with edX, or does it mean that it was possible with Class2Go but not with the current platform? I’m not sure (comments welcome).

I would love to hear from Justin Reich, Andrew Ho or any of the other researchers involved at HarvardX or MITx. Any insight, including corrections, would be valuable.


What Harvard and MIT could learn from the University of Phoenix about analytics

Sun, 2014-06-01 17:42

Last week Harvard and MIT released de-identified data from their edX-based MOOCs. Rather than just produce a summary report, the intent of this release was to open up the data and share it publicly. While it is good to see this approach to Open Data, unfortunately the data set is of limited value, and it actually illustrates a key problem with analytics in higher ed. From the MIT News description:

A research team from Harvard University and MIT has released its third and final promised deliverable — the de-identified learning data — relating to an initial study of online learning based on each institution’s first-year courses on the edX platform.

Specifically, the dataset contains the original learning data from the 16 HarvardX and MITx courses offered in 2012-13 that formed the basis of the first HarvardX and MITx working papers (released in January) and underpin a suite of powerful open-source interactive visualization tools (released in February).

At first I was eager to explore the data, but I am not sure how much useful insight is possible due to how the data was collected. The data is structured with one student per row for each course they took (taking multiple courses would lead to multiple rows of data). The data columns (pulled from the Person Course Documentation file) are shown below:

  • course_id: ID for the course
  • userid_DI: de-identified unique identifier of student
  • registered: 0/1 with 1 = registered for this course
  • viewed: 0/1 with 1 = anyone who accessed the ‘courseware’ tab
  • explored: 0/1 with 1 = anyone who accessed at least half of the chapters in the courseware
  • certified: 0/1 with 1 = anyone who earned a certificate
  • final_cc_name_DI: de-identified geographic information
  • LoE: user-provided highest level of education completed
  • YoB: year of birth
  • gender: self-explanatory
  • grade: final grade in course
  • start_time_DI: date of course registration
  • last_event_DI: date of last interaction with course
  • nevents: number of interactions with the course
  • ndays_act: number of unique days student interacted with course
  • nplay_video: number of play video events
  • nchapters: number of courseware chapters with which the student interacted
  • nforum_posts: number of posts to the discussion forum
  • roles: identifies staff and instructors
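Every one of these columns is either static registration information or a whole-course aggregate, which becomes obvious as soon as you load the file. Here is a quick exploratory sketch; the file name is a placeholder, while the column names come from the documentation above:

```python
# Quick look at the person-course release. The file name is a placeholder;
# the column names come from the Person Course Documentation file.
import pandas as pd

pc = pd.read_csv("harvardx_mitx_person_course.csv")

# One row per (student, course); every metric is aggregated over the whole
# course, so no over-time analysis is possible from this file alone.
summary = pc.groupby("course_id").agg(
    registrants=("registered", "sum"),
    certified=("certified", "sum"),
    median_events=("nevents", "median"),
)
summary["cert_rate"] = summary["certified"] / summary["registrants"]
print(summary.sort_values("cert_rate", ascending=False))
```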

The problem is that this data captures only very shallow usage patterns aggregated over the entire course – did they look at the courseware, how many video views, how many forum posts, the final grade, etc. I have described several times how open courses such as MOOCs have different student patterns, since not all students have the same goals for taking the course.

[Figure: student patterns in MOOCs]


The Harvard and MIT data ignores student goals and includes no information giving a clue as to whether students desired to complete the course, get a good grade, get a certificate, or just sample some material. Without this information on student goals, the actual aggregate behavior is missing context. We don’t know if a certain student intended to just audit a course, sample it, or attempt to complete it. We don’t know if students started the course intending to complete it but became frustrated and dropped down to just auditing, or even dropped out.

Beyond data aggregated over the entire course, the Harvard and MIT edX data provides no insight into learner patterns of behavior over time. Did the discussion forum posts increase or decrease over time, did video access change over time, etc? We don’t know. There is some insight we could obtain by looking at the last transaction event and number of chapters accessed, but the insight would be limited. But learner patterns of behavior can provide real insights, and it is here where the University of Phoenix (UoP) could teach Harvard and MIT some lessons on analytics.
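To make concrete what the released data cannot support: with event-level data (the schema below is hypothetical), asking whether each learner’s forum activity rose or fell over the course is a few lines of analysis; with only the aggregate nforum_posts column, the question is unanswerable.

```python
# Sketch of the over-time question the aggregate release cannot answer:
# did each learner's forum posting rise or fall? Assumes hypothetical
# event-level data with user_id / event_type / time columns.
import pandas as pd

events = pd.read_csv("edx_events.csv", parse_dates=["time"])  # hypothetical file

posts = events[events["event_type"] == "forum_post"]
weekly = (posts.set_index("time")
               .groupby("user_id")
               .resample("W")
               .size())

def trend(series):
    """Crude trend: posts in the second half of the course minus the first."""
    half = len(series) // 2
    return series.iloc[half:].sum() - series.iloc[:half].sum()

print(weekly.groupby(level="user_id").apply(trend).sort_values())
```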

Also last week, the Apollo Group (parent of UoP) CIO Mike Sajor gave an interview to Campus Technology, and he discussed their new learning platform (also see my previous post on the subject). In one segment Sajor explained how the analytics are being used.

Sajor: Another aspect: We leverage the platform to collect a vast amount of data about students as they traverse their learning journey. We know what they’re doing, when they’re doing it, how long it takes, anything they do along the journey that might not have been the right choice. We collect that data … and use it to create some set of information about student behaviors. We generate insight; and insight tells us an interesting fact about a student or even a cohort of students. Then we use that insight to create an intervention that will change the probability of the student outcome.

CT: Give an example of how that might work.

Sajor: You’re a student and you’re going along and submitting assignments, doing reading, doing all those things one would normally do in the course of a class. Assignments are generally due in your class Sunday night. In the first few weeks you turn your assignments in on Friday. And suddenly, you turn in an assignment on Saturday evening, and the next week you turn one in mid-day Sunday. Well, we’re going to notice that in our analytics. We’ll pick that up and say, “Wait a second. Sally Student now has perturbation in her behavior. She was exhibiting a behavioral pattern over time since she started as a student. Now her pattern has shifted.” That becomes an insight. What we do at that point is flag the faculty member or an academic adviser or enrollment adviser to contact Sally using her preferred mode — e-mail, phone call. And we’ll ask, “Hey Sally, we noticed you’re turning in your assignments a little bit later than you normally did. Is there anything we can do to help you?” You’d be amazed at the answers we get, like, “My childcare on Thursday and Friday night fell apart.” That gives us an opportunity to intervene. We can say, “You’re in Spokane. We know some childcare providers. We can’t recommend anybody; but we can give you a list that might help you.”

UoP recognizes the value of learner behavior patterns, which can only be learned by viewing data patterns over time. The student’s behavior in a course is a long-running transaction, with data sets organized around the learner.
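The behavior shift Sajor describes amounts to simple anomaly detection on a per-student time series. A toy sketch of the idea, with invented numbers and no claim to represent UoP’s actual implementation:

```python
# Toy sketch of the behavior-shift flagging Sajor describes: compare a
# student's recent submission lead time (hours before the Sunday deadline)
# to her earlier baseline. Invented data; not UoP's actual system.
hours_early = [52, 50, 49, 51, 20, 8]   # one student's weekly submissions

baseline, recent = hours_early[:-2], hours_early[-2:]
baseline_avg = sum(baseline) / len(baseline)

# Flag when recent submissions land much closer to the deadline than usual.
if min(recent) < baseline_avg / 2:
    print(f"Flag for outreach: baseline ~{baseline_avg:.0f}h early, "
          f"recent submissions at {recent} hours before the deadline")
```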

In edX, by contrast, the data appears to be organized as a series of log files structured around server usage. Such an organization supports aggregate usage reporting over a course, but it makes it extremely difficult to follow an individual student over time and glean any meaningful information.
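To illustrate the difference, here is a sketch of the pivot that learner-centered analysis requires when starting from server-style logs. The rows and field names are hypothetical (not edX’s actual log schema); the point is that per-learner, time-ordered histories have to be reconstructed before any of the over-time questions above can even be asked.

```python
from collections import defaultdict

# Hypothetical clickstream rows as a server log might record them:
# (timestamp, user_id, event_type). Illustrative only.
log_rows = [
    ("2014-03-01T10:02:11", "u42", "play_video"),
    ("2014-03-01T10:05:40", "u17", "forum_post"),
    ("2014-03-02T21:13:05", "u42", "problem_check"),
    ("2014-03-08T09:45:00", "u42", "play_video"),
]

# Pivot from server-centric rows to learner-centric histories:
# one time-ordered event sequence per student. ISO-8601 timestamps
# sort correctly as strings.
histories = defaultdict(list)
for timestamp, user_id, event_type in sorted(log_rows):
    histories[user_id].append((timestamp, event_type))

# Per-learner questions (weekly activity counts, gaps between sessions,
# forum posts over time) now become simple iterations.
for timestamp, event_type in histories["u42"]:
    print(timestamp, event_type)
```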

The MIT News article called out why this richer data set is so important:

Harvard’s Andrew Ho, Chuang’s co-lead, adds that the release of the data fulfills an intention — namely, to share best practices to improve teaching and learning both on campus and online — that was made with the launch of edX by Harvard and MIT in May 2012.

If you want to “share best practices to improve teaching and learning”, then you need data organized around the learner, with transactions captured over time – not just in aggregate. What we have now is an honest start, but a very limited data set.

I certainly wouldn’t advocate that Harvard and MIT become the University of Phoenix, but in terms of useful learner analytics they could learn quite a bit. I applaud their openness, and I hope they develop better approaches to analytics by learning from others.

Note: The Harvard and MIT edX data is de-identified to fit within FERPA requirements, but after reading their description of the process, it does not appear that the learner patterns were removed due to privacy concerns.

Update: Based on private feedback, I should clarify that I have not validated that the UoP analytics claims actually work in practice. I am giving them credit for at least understanding the importance of learner-centered, behavior-based data to improve teaching and learning, but I do not know what has been fully implemented. If I find out more, I’ll share in a separate post.

On this point, there is a reverse angle – what the University of Phoenix could learn from Harvard and MIT about analytics – regarding open data and the ability for others to see real results.

The post What Harvard and MIT could learn from the University of Phoenix about analytics appeared first on e-Literate.

Unizin: What are the primary risks?

Thu, 2014-05-29 15:50

In Michael’s most recent post on Unizin, the new “learning ecosystem” initiative driven by Indiana University, he asked who would be threatened by the proposed consortium (his answer: edX). That question assumes, of course, that Unizin actually succeeds in large part. But what are the primary risks to the initiative succeeding in the first place? Based on the public information available to date (primarily in the two posts linked above), I see two near-term risks and one long-term risk that rise above the others.

Near-Term Risk: Getting Schools to Sign Up

The obvious question is whether there are enough schools willing to commit $1 million each and adopt the proposed platforms to get the consortium off the ground. Based on the Colorado State University recording, the goal appears to be getting 9 – 10 schools to commit $9 – $10 million in the initial phase. Beyond Indiana University, the school most likely to commit is the University of Michigan. Its leadership (dean of libraries, CIO) is fully behind the initiative, and press reports indicate they are seeking final approval. I cannot find any evidence that any other schools have reached this point, however.

Slide from CSU Presentation

There are active debates in the Committee on Institutional Cooperation (CIC), primarily between provosts and CIOs, about Unizin and whether this approach works for member institutions. The provosts, in fact, have already put out a position paper generally endorsing the same concept.

While new and cost effective technological capabilities make certain changes in higher education possible, it does not necessarily follow that such changes are desirable, or would be endorsed or utilized by our existing students, faculty, or community members. Nor does it mean that we fully grasp the costs and business models that might surround new strategies for broadly disseminating course content. University leaders committed to addressing the new opportunities in higher education need to recognize that the primary basis for motivating and inspiring faculty to engage these opportunities will not be the technologies themselves, but rather, the fundamental academic values and pedagogical principles that need to be infused in these emerging instructional technologies. For these reasons, we believe that the chief academic officers of our CIC member universities are in the best position—individually and collectively—to be leading these efforts.

Putting out a position paper is not the same as getting buy-in from a campus or contributing real money, and I suspect that most of the potential campuses will need some form of this discussion before signing up.

Near-Term Risk: Secretive Decision Process

On the subject of campus buy-in, the secretive process being pursued by Unizin and prospective schools is itself a significant risk, especially in the post-MOOC-hype environment. Institutions are considering this major investment and commitment in a deliberately opaque process. Provosts, CIOs, and occasionally faculty groups are being briefed, but almost all documentation is being withheld. During the Colorado State University meeting, one faculty member asked about this process:

At the recorded CSU meeting, one of the presenters—it’s impossible to tell which is the speaker from the recording we have—acknowledges that the meetings were largely conducted in secret when challenged by a faculty member on the lack of faculty involvement. He cited sensitive negotiations among the ten universities and Instructure as the reason.

These same questions are being raised about the decision processes behind many of the MOOC adoptions. Consider the University of Texas, which committed $5 million to its involvement in edX. The Daily Texan has publicly started a debate on that campus about the motivation and benefits of that decision.

The MOOCs were, apparently, designed without revenue in mind, though the System invested $10 million to both develop the MOOCs and to host the courses on edX, an online platform created by Harvard and MIT. [snip]

Of course, the System has made large and unproven investments in online education platforms before — MyEdu rings a bell. The Tribune recently reported that the System will see no financial return on its $10 million investment in MyEdu, which was ultimately sold to Blackboard. Again, there was no long-term financial plan in mind, but there was a lot of money on the table.

The System should stop investing millions of dollars on gambles like these, which lack financial exit strategies and viable forms of revenue. If the founding structure of a project doesn’t include a business model for growth and profitability for the University, who is expected to fund it?

Now UT is considering another seven-figure investment through a very closed process. UT could face pushback from campus faculty over any decision to join Unizin, partially reaping what edX sowed.

Faculty groups nationwide are concerned about administrative decision-making that directly impacts academics without broad, transparent faculty input. Unizin involves not only an LMS adoption but also a learning content repository and a learning analytics platform. This raises the difficult questions of how and whether to share learning content, as well as how to measure learning outcomes. Faculty will care.

And there is a hint of a typical university conflict embedded at the end of the CIC provosts’ position paper quote – “we believe that the chief academic officers of our CIC member universities are in the best position … to be leading these efforts” – perhaps with the unwritten phrase “as opposed to CIOs”.

It used to be that CIOs and their organizations made most technology platform decisions, and it was often hard to get the provost’s office to participate. As this statement shows, we now have situations where provosts and their offices want to be the driving force even for platform decisions. The better approach is collaborative, with provosts and CIOs working together – generally with provosts taking a more active role in defining needs and problems and CIOs taking a more active role in defining solutions.

In the Unizin content repository case, the more natural sequence would be for the provosts first to help define what learning content should be shared – learning objects, courseware, courses, textbooks – and under what conditions. After defining goals, it would be appropriate to describe how a software platform could facilitate this content sharing, with CIOs taking a more active role in determining whether certain scenarios are feasible and which platforms are the best fit. Throughout the process, faculty would ideally have the opportunity to give input on needs, to give feedback on proposed solutions, and to have visibility into the decision process.

Whether this type of open, collaborative decision process is happening behind closed doors is not known, but the apparent need to keep the process quiet raises the risk of pushback on the consortium decision.

Long-Term Risk: Development of Content Repository and Learning Analytics

Even if Unizin succeeds in getting 9 – 10 schools to fund and start the consortium, and even if it successfully manages the faculty buy-in issues, there is a longer-term risk in making the “learning ecosystem” a reality. Currently the three primary components are very uneven. The LMS is a no-brainer: Canvas already exists and has broad acceptance, leading the market in recent LMS evaluations and new adoptions. The other two components are very different and might not be well-suited to a community-source development model.

Unizin Diagram of Services

The ed tech road is littered with unsuccessful and disappointing content repositories. The concept of making it easy to share learning content outside of a specific program has long looked beautiful in white papers and conference briefings, but the reality of actual adoption and usage is quite different. Whether the challenge is product design, product completion, or just plain faculty adoption, there are no indications that there is a demand for broad-based sharing of academic content. In essence, the product category is unproven, and it is not clear that we even know what to build in the first place.

Community source has proven its ability to develop viable solutions for known product categories, generally based on existing solutions – consider Sakai as an LMS (heavily based on the University of Michigan’s CHEF implementation and, to a lesser degree, on Indiana University’s OnCourse), the Kuali Financial System (based directly on IU’s financial system), and Kuali Coeus (based on MIT’s research administration system). Without a pre-existing solution to build on, the results are less promising. Kuali Student, based on a known product category but designed from the ground up, is currently on track to take almost eight years from concept to full functionality. Looking further, are there any examples where a new product in an ill-defined product category has been successfully developed in a community-source model?

Learning analytics is similar to content repositories in the sense that the concept looks much better in a white paper than it does in reality. I remember in the late 2000s when the LMS user conferences came across as ‘we’re learning outcomes companies that happen to have an LMS, too’. Remember the Blackboard Outcomes System – its “most significant product offering”?

The difference between learning analytics and content repositories, however, is that there are much stronger examples of real adoption on the analytics side. Purdue has successfully implemented Course Signals and has succeeded in improving course retention (despite open questions about whether inter-course retention has improved). Blackboard Analytics (based on the iStrategy acquisition) has been implemented with real results at a growing number of schools.

More significant, perhaps, is the work done by the Predictive Analytics Reporting (PAR) Framework, which just today announced that it is becoming a separate organization spun off by WICHE. The Unizin slides explicitly reference PAR, and some of the analytics language closely mirrors PAR descriptions. This matters because the PAR Framework goes a long way toward defining the product needs.

The question for analytics, therefore, is less about the product category and more about Unizin’s ability to deliver actual results.

If Unizin succeeds in addressing the above risks, then the state of the art for learning ecosystems will jump forward. If the proposed consortium does not succeed, the result will be a buyer’s club that makes Canvas a very expensive LMS. That result would be ironic, given some of the foundational concepts behind Unizin.

The post Unizin: What are the primary risks? appeared first on e-Literate.

New Internet Trends Presentation: Mary Meeker gets education partly right

Thu, 2014-05-29 10:50

Mary Meeker of Kleiner Perkins Caufield & Byers (KPCB) has released her annual Internet Trends presentation, which has taken on a life of its own. Her data is very useful for seeing macro trends and the significance of Internet-related technology. Even the non-education sections of the presentation should be very useful to those in education.

One interesting data point is that tablets continue their massive growth. Anecdotally, I am not seeing as strong an uptick in tablet usage by students on campus, but the overall growth is worth exploring further. It would be interesting to see longitudinal data specifically on campus tablet usage.

Tablet growth

Another mobile trend worth watching is the rising percentage of web usage from mobile devices (smartphones and tablets), which jumped from 14% to 25% in just the past year. If your campus or technology company does not have a mobile strategy, then you are missing how students (and even faculty and staff) actually experience the web.

Mobile web percentage

In the education section (starting on slide 24), I found Meeker’s overall setup good but her argument for a “tipping point” unpersuasive. She starts by observing how important the education sector is in the US:

  • Education is Important
  • Education is Expensive
  • Education Results are Often Subpar
  • People Care About Education
  • Personalized Education Ramping
  • Distribution Expanding & Education Start-up Costs Declining

While these are not new observations, I think she is right to use them to explain why the sector is important and why it is different from other technology-driven sectors susceptible to bubbles and lost interest. I could see educational technology investment dropping in the near future, but the challenges and opportunities will not go away.

Where I have a problem is with her justification that “education may be at a tipping point”. Her examples seem to cherry-pick consumer tool adoption rather than showing any evidence of change driven by the technologies. It’s not that I don’t see change happening; it’s that this slide is one of the weaker arguments in the deck.

Ed Green Shoots

Later in the presentation is an interesting observation about how little useful information is actually tagged and analyzed – a problem that also holds true in education.

Data useful analyzed

You can see the entire presentation here:

KPCB Internet trends 2014 from Kleiner Perkins Caufield & Byers

The post New Internet Trends Presentation: Mary Meeker gets education partly right appeared first on e-Literate.