Michael Feldstein

What We Are Learning About Online Learning...Online

Blueprint for a Post-LMS, Part 2

Thu, 2015-03-05 13:25

By Michael Feldstein

In the first post of this series, I identified four design goals for a learning platform that would be well suited for discussion-based courses:

  1. Kill the grade book in order to get faculty away from concocting arcane and artificial grading schemes and more focused on direct measures of student progress.
  2. Use scale appropriately in order to gain pedagogical and cost/access benefits while still preserving the value of the local cohort guided by an expert faculty member, as well as to propagate exemplary course designs and pedagogical practices more quickly.
  3. Assess authentically through authentic conversations in order to give credit for the higher order competencies that students display in authentic problem-solving conversations.
  4. Leverage the socially constructed nature of expertise (and therefore competence) in order to develop new assessment measures based on the students’ abilities to join, facilitate, and get the full benefits from trust networks.

I also argued that platform design and learning design are intertwined. One implication of this is that there is no platform that will magically make education dramatically better if it works against the grain of the teaching practices in which it is embedded. The two need to co-evolve.

This last bit is an exceedingly tough nut to crack. If we were to design a great platform for conversation-based courses but it got adopted for typical lecture/test courses, the odds are that faculty would judge the platform to be “bad.” And indeed it would be, for them, because it wouldn’t have been designed to meet their particular teaching needs. At the same time, one of our goals is to use the platform to propagate exemplary pedagogical practices. We have a chicken and egg problem. On top of that, our goals suggest assessment solutions that differ radically from traditional ones, but we only have a vague idea so far of what they will be or how they will work. We don’t know what it will take to get them to the point where faculty and students generally agree that they are “fair,” and that they measure something meaningful. This is not a problem we can afford to take lightly. And finally, while one of our goals is to get teachers to share exemplary designs and practices, we will have to overcome significant cultural inhibitions to make this happen. Sometimes systems do improve sharing behavior simply by making sharing trivially easy—we see that with social platforms like Twitter and Facebook, for example—but it is not at all clear that just making it easy to share will improve the kind of sharing we want to encourage among faculty. We need to experiment in order to find out what it takes to help faculty become comfortable or even enthusiastic about sharing their course designs. Any one of these challenges could kill the platform if we fail to take them seriously.

When faced with a hard problem, it’s a good idea to find a simpler one you can solve that will get you partway to your goal. That’s what the use case I’m about to describe is designed to do. The first iteration of any truly new system should be designed as an experiment that can test hypotheses and assumptions. And the first rule of experimental design is to control the variables.

Of the three challenges I just articulated, the easiest one to get around is the assessment trust issue. The right use case should be an open, not-for-credit, not-for-certification course. There will be assessments, but the assessments don’t count. We would therefore be creating a situation somewhat like a beta test of a game. Participants would understand that the points system is still being worked out, and part of the fun of participation is seeing how it works and offering suggestions for improvement. The way to solve the problem of potential mismatches between platform and content is to test the initial release of the platform with content that was designed for it. As for the third problem, we need to pick a domain that is far enough away from the content and designs that faculty feel are “theirs” that the inhibitions regarding sharing are lower.

All of these design elements point toward piloting the platform with a faculty professional development cMOOC. Faculty can experience the platform as students in a low-stakes environment. And I find that even faculty who are resistant to talking about pedagogy in their traditional classes tend to be more open-minded when technology enters the picture, because it’s not an area where they feel they are expected to be experts. But it can’t be a traditional cMOOC (if that isn’t an oxymoron). We want to model the distributed flip, where there are facilitators of local cohorts in addition to the large group participation. This suggests a kind of “reading group” or “study group” structure. The body of material for the MOOC is essentially a library of content. Each campus-based group chooses to go through the content in its own way. They may cover all of it or skip some of it. They may add their own content. Each group will have its own space to organize its activities, but this space will be open to other groups. There will be discussions open to everyone, but groups and individual members can participate in those or not, as they choose. Presumably each group would have at least a nominal leader who would take responsibility for organizing the content and activities for the local cohort. This would typically be somebody like the head of a Center for Educational Technology, but it could also be an interested faculty member, or the group could organize its activities by consensus.
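
To make that structure concrete, here is a minimal sketch in Python of the data model the paragraph above implies. All of the names and the example URLs are illustrative assumptions rather than any real system’s schema: a shared content library, local cohorts that curate their own paths through it, and discussions scoped either to a cohort (while still readable by everyone) or to the whole community.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class ContentItem:
    title: str
    url: str

@dataclass
class Discussion:
    topic: str
    scope: str  # "cohort": local but open for others to read; "global": everyone
    posts: List[str] = field(default_factory=list)

@dataclass
class Cohort:
    campus: str
    facilitator: str  # e.g., the head of a Center for Educational Technology
    playlist: List[ContentItem] = field(default_factory=list)  # this cohort's path
    discussions: List[Discussion] = field(default_factory=list)

# The MOOC's shared library; each cohort picks, skips, reorders, or adds to it.
library = [
    ContentItem("Middlebury College case study", "https://example.edu/ep1"),
    ContentItem("Essex County College case study", "https://example.edu/ep2"),
]

cohort = Cohort(campus="Example State U", facilitator="CET director")
cohort.playlist = [library[1], library[0]]  # their own order, their own subset
cohort.discussions.append(Discussion("Our campus pilot plans", scope="cohort"))
everyone = Discussion("What does 'personalized' actually mean?", scope="global")
```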

To make the use case more concrete, let’s assume that the curriculum will revolve around the forthcoming e-Literate TV series on personalized learning. This is something that I would ideally like to do in the real world, but it also has the right characteristics for the current thought experiment. The heart of the series is five case studies of schools trying different personalized learning approaches:

  • Middlebury College, an elite New England liberal arts school in rural Vermont
  • Essex County College, a community college in Newark, NJ
  • Empire State College, a SUNY school that focuses on non-traditional students and has a heavy distance learning program
  • Arizona State University, a large public university with a largely top-down approach to implementing personalized learning
  • A large public university with a largely bottom-up approach to implementing personalized learning

These thirty-minute case studies, plus the wrapper content that Phil and I are putting together (including a recorded session at the last ELI conference), cover a number of cross-cutting issues. Here are a few:

  • What does “personalized” really mean? When (and how) does technology personalize, and when does it depersonalize?
  • How does the idea of “personalized” change based on the needs of different kinds of students in different kinds of institutions?
  • How do personalized learning technologies, implemented thoughtfully in these different contexts, change the roles of the teacher, the TA, and the students?
  • What kinds of pedagogy seem to work best with self-paced products that are labeled as providing personalized learning?
  • What’s hard about using these technologies effectively, and what are the risks?

That’s the content and the context. Since we’re going for something like a PBL (problem-based learning) design, the central problem that each cohort would need to tackle is, “What, if anything, should we be doing with personalized learning tools and pedagogical approaches in our school?” This question can be tackled in a lot of different ways, depending on the local culture. If it is taken seriously, there are likely to be internal discussions about politics, budgets, implementation issues, and so on. Cohorts might also be very interested in having conversations with other cohorts from peer schools to see what they are thinking and what their experiences have been. Not only that, they may also be interested in how their peers are organizing their campus conversations about personalized learning. This is the equivalent of sharing course designs in this model. And of course, there will hopefully also be very productive conversations across all cohorts, pooling expertise, experience, and insight. This sort of community “sharding” is consistent with the cMOOC design thinking that has come before. We’re simply putting some energy into both learning design and platform design to make that approach work with a facilitation structure that is closer to a traditional classroom setting. We’re grafting a cMOOC-like course design onto a distributed flip facilitation structure in the hopes of coming up with something that still feels like a traditional class in some ways but brings in the benefits of a global conversation (among teachers as well as students).

The primary goal of such a “course” wouldn’t be to certify knowledge or even to impart knowledge but rather to help participants build their intra- and inter-campus expertise networks on personalized learning, so that educators could learn from each other more and re-invent the wheel less. But doing so would entail raising the baseline level of knowledge of the participants (like a course) and could support the design goals. The e-Literate TV series provides us with a concrete example to work with, but any cross-cutting issue or change that academia is grappling with would work as a use case for attacking our design goals in an environment that is relatively lower-risk than for-credit classes. The learning platform necessary to make such a course work would need to both support the multi-layered conversations and provide analytics tools to help identify both the best posts and the community experts.

In the next two posts, I will lay out the basic design of the system I have in mind. Then, in the final post of the series, I will discuss ways of extending the model to make it more directly suitable for traditional for-credit class usage.

The post Blueprint for a Post-LMS, Part 2 appeared first on e-Literate.

Blueprint for a Post-LMS, Part 1

Wed, 2015-03-04 11:27

By Michael Feldstein

Reading Phil’s multiple reviews of Competency-Based Education (CBE) “LMSs”, one of the implications that jumps out at me is that learning platform designs progress much more rapidly and coherently when you start with a particular pedagogical approach in mind. CBE is loosely tied to a family of pedagogical methods, perhaps the most important of which at the moment is mastery learning. In contrast, questions about why general LMSs aren’t “better” beg the question, “Better for what?” Since conversations about LMS design are usually divorced from conversations about learning design, we end up pretending that the foundational design assumptions in an LMS are pedagogically neutral when they are actually based on traditional lecture/test pedagogy. I don’t know what a “better” LMS looks like, but I am starting to get a sense of what an LMS that is better for CBE looks like. In some ways, the relationship between platform and pedagogy is similar to the relationship former Apple luminary Alan Kay claimed between software and hardware: “People who are really serious about software should make their own hardware.” It’s hard to separate serious digital learning design from digital learning platform design (or, for that matter, from physical classroom design). The advances in CBE platforms are a case in point.

But CBE doesn’t work well for all content and all subjects. In a series of posts starting with this one, I’m going to conduct a thought experiment of designing a learning platform—I don’t really see it as an LMS, although I’m also not allergic to that term as some are—that would be useful for conversation-based courses or conversation-based elements of courses. Because I like thought experiments that lead to actual experiments, I’m going to propose a model that could realistically be built with named (and mostly open source) software and talk a bit about implementation details like use of interoperability standards. But all of the ideas here are separable from the suggested software implementations. The primary point of the series is to address the underlying design principles.

In this first post, I’m going to try to articulate the design goals for the thought experiment.

When you ask people what’s bad about today’s LMSs, you often get either a very high-level answer—“Everything!”—or a litany of low-level answers about how archiving is a pain, the blog app is bad, the grade book is hard to use, and so on. I’m going to try to articulate some general goals for improvement that are in between those two levels. They will be general design principles. Some of them apply to any learning platform, while others apply specifically to the goal of developing a learning platform geared toward conversation-based classes.

Here are four:

1. Kill the Grade Book

One of the biggest problems with mainstream models of teaching and education is their artificiality. Students complete assignments to get grades. Often, they don’t care about the assignment, and the assignments often aren’t designed to be something that entices students to care about them. To the contrary, they are often designed to test specific knowledge or competency goals, most of which would never be practically tested in isolation in the real world. In the real world, our lives or livelihoods don’t depend solely on knowing how to solve a quadratic equation or how to support an argument with evidence. We use these pieces to accomplish more complex real-world goals that are (usually) meaningful to us. That’s the first layer of artificiality. The second layer is what happens in the grade book. Teachers make up all kinds of complex weighting systems, dropping the lowest score, assigning a percentage weight to different classes of assignments, grading on curves, and so on. Faculty often spend a lot of energy first creating and refining these schemes and then using them to assign grades. And they are all made up, artificial, and often flawed. (For example, many faculty who are not in mathematically heavy disciplines make the mistake at one time or another of mixing points with percentage grades, and then spend many hours experimenting with complex fudge factors because they don’t have an intuition for how those two grading schemes interact with each other.)
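
To see how the points-versus-percentages mistake bites, here is a toy illustration (a made-up example, not anyone’s actual grade book): two performances that should each count as 90% average out to an apparent failing grade when raw points are treated as percentages.

```python
# Two "90%" performances: a 9-out-of-10 quiz and a 90-out-of-100 exam.
quiz_points, quiz_max = 9, 10
exam_points, exam_max = 90, 100

# The mistake: averaging raw point values as if they were percentages.
wrong = (quiz_points + exam_points) / 2  # 49.5 -- reads as a failing grade

# The fix: normalize each item against its own maximum first.
right = ((quiz_points / quiz_max) + (exam_points / exam_max)) / 2  # 0.9

print(wrong, right)
```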

Some of this artificiality is fundamentally baked into the foundational structures of schooling and accreditation, but some of it is contingent. For example, while CBE approaches don’t, in and of themselves, do anything to get rid of the artificiality of the schooling tasks themselves (and may, in fact, exacerbate it, depending on the course design), they can reduce or eliminate a traditional grade book, particularly in mastery learning courses. With CBE in general, you have a series of binary gates: Either you demonstrated competency or you didn’t. You can set different thresholds, and sometimes you can assess different degrees of competency. But at the end of the day, the fundamental grading unit in a CBE course is the competency, not the quiz or assignment. This simplifies grading tremendously. Rather than forcing teachers to answer questions like, “How many points should each in-class quiz be worth, and what percentage of the total grade should they count for?”, teachers instead have to answer questions like, “How much should students’ ability to describe a minor seventh chord count toward their music theory course grade?” The latter question is both a lot more straightforward and more focused on teachers’ intuitions about what it means for a student to learn what a class has to teach.
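
Here is a minimal sketch of that binary-gate idea, with the mastery threshold as the only knob a teacher has to turn. The competency names and the 0.8 default are illustrative assumptions, not any particular platform’s design:

```python
def demonstrated(scores, threshold=0.8):
    """A competency is a binary gate: the student has either cleared the
    threshold across its assessments or has not yet done so."""
    return sum(scores) / len(scores) >= threshold

competencies = {
    "describe a minor seventh chord": [0.9, 0.85],
    "voice-lead a ii-V-I progression": [0.6, 0.7],
}

# No weighting scheme needed: the grading unit is the competency itself.
progress = {name: demonstrated(scores) for name, scores in competencies.items()}
print(progress)  # first gate cleared (True), second not yet (False)
```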

[Image: mastery scale detail from LoudCloud’s CBE platform]

Nobody likes a grade book, so let’s see how close we can get to eliminating the need for one. In general, we want a grading system that enables teachers to make adjustments to their course evaluation system based on questions that are closely related to their expertise—i.e., what students need to know and whether they seem to know it—rather than on their skills in constructing complex weighting schemes. The mechanism by which we do so will be different for discussion-based course components than for many typical implementations of CBE, particularly machine-graded CBE, but I believe that a combination of good course design and good software design can actually help reduce both layers of grading artificiality that I mentioned above.

2. Use scale appropriately

Most of the time, the word “scale” used in an educational context attaches to a monolithic, top-down model like MOOCs. It takes a simplistic view of Baumol’s Cost Disease (which is probably the wrong model of the problem to begin with) and boils down to asking, “How can we reduce the per-student costs by cramming more students into the same class?” I’m more interested in a different question: What new models can we develop that harness both the economic and the pedagogical benefits of large-scale classes without sacrificing the value of teacher-facilitated cohorts? I mean models like Mike Caulfield’s and Amy Collier’s distributed flip, or FemTechNet’s Distributed Open Collaborative Courses (DOCCs). There are almost certainly some gains to be made using these designs in increasing access by lowering cost. They might (or might not) be more incremental than the centralized scale-out model, but they should hopefully not come with the same trade-offs in educational quality. In fact, they will hopefully improve educational quality by harnessing global resources (including a global community of peers for both students and teachers) while still preserving the local support. And I think there’s actually a potential for some pretty significant scaling without quality loss when the model I have in mind is used in combination with a CBE mastery learning approach in a broader, problem-based learning course design. More on that later.

Another kind of scaling that interests me is scaling (or propagating) changes in pedagogical models. We know a lot about what works well in the classroom that never gets anywhere because we have few tools for educating faculty about these proven techniques and helping them to adopt them. I’m interested in creating an environment in which teachers share learning design customizations by default, and teachers who create content can see what other teachers are doing with it—and especially what students in other classes are doing with it—by default. Right now, there is a lot of value to the individual teacher of being able to close the classroom door and work unobserved by others. I would like to both lower barriers to sharing and increase the incentives to do so. The right platform can help with that, although it’s very tricky. Learning Object Repositories, for example, have largely failed to be game changers in this regard, except within a handful of programs or schools that have made major efforts to drive adoption. One problem with repositories is that they demand work on the part of the faculty while providing little in the way of rewards for sharing. If we are going to overcome the cultural inhibitions around sharing, then we have to make the barrier as low as possible and the reward as high as possible.

3. Assess authentically through authentic conversations

Circling back to the design goal of killing the grade book, what we want to be able to do is directly assess the student’s quality of participation, rather than mediate it through a complicated assignment grading and weighting scheme. Unfortunately, the minute you tell students they are getting a “class participation” grade, you immediately do massive damage to the likelihood of getting authentic conversation and completely destroy the chances that you can use the conversation as authentic assessment. People perform to the metrics. That’s especially true when the conversations are driven by prompts written by the teacher or textbook publisher. Students will have fundamentally different types of conversations if their conversations are not isolated graded assignments but rather integral steps on their way to accomplish some larger task. Problem-Based Learning (PBL) is a good example. If you have a course design in which students have to do some sort of project or respond to some sort of case study, and that project is hard and rich enough that students have to work with each other to pool their knowledge, expertise, and available time, you will begin to see students act as authentic experts in discussions centered around solving the course problem set before them.

A good example of this is ASU’s Habitable Worlds, which I have blogged about in the past and which will be featured in an episode of the aforementioned e-Literate TV series. Habitable Worlds is roughly in the pedagogical family of CBE and mastery learning. It’s also a PBL course. Students are given a randomly generated star field and a semester-long project to determine the likelihood that intelligent life exists in that star field. There are a number of self-paced adaptive lessons built on the Smart Sparrow platform. Students learn competencies through those lessons, but they are competencies that are necessary to complete the larger project, rather than simply a set of hoops that students need to jump through. In other words, the competency lessons are resources for the students. They also happen to be assessments, but that’s no longer the only reason, and hopefully not the main reason, that students care about them. The class discussions can be positioned in the same way, given the right learning design. Here’s a student featured in our e-Literate TV episode talking about that experience:

[Embedded video]

The way the course is set up, students use the discussion board for authentic science-related problem solving. In doing so, they are exhibiting competencies necessary to be a good scientist (or a good problem solver, or a supportive member of a problem-solving team). They have to know when to search for information that already exists on the discussion board, how to ask for help when they are stuck, how to facilitate a problem-solving conversation, and so on. And these are, in fact, more valuable competencies for employers, society, and the students themselves than knowing the necessary ingredients for a planet to be habitable (for example). Yet we generally ignore these skills in our grading and pretend that the knowledge quizzes tell us what we need to know, because those are easier to assess. I would like for us to refuse to settle for that anymore.

This is a great example of how learning design and learning platform design can go hand-in-hand. If the platform and learning design work together to enable students to have their discussions within a very large (possibly global) group of learners who are solving similar problems, then there are richer opportunities to evaluate students’ respective abilities to demonstrate both expertise and problem-solving skills across a wide range of social interactions. Assuming a distributed flip model (where faculty are teaching their own classes on their own campuses with their own students but also using MOOC-like content and discussions that multiple other classes are also using), if you can develop analytics that help the local teachers directly and efficiently evaluate students’ demonstrated skills in these conversations, then you can feed the output of the analytics, tweaked by faculty based on which criteria for evaluating students’ participation they think are most important, into a simplified grading system. I’ll have a fair bit to say about what this could look like in practice in a later post in this series.
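
In the meantime, here is a toy sketch of those mechanics: the platform emits per-student conversation metrics, and the instructor decides which criteria count and by how much. Every metric name here is an illustrative assumption, not the output of any real analytics engine.

```python
# Hypothetical per-student metrics emitted by the conversation analytics.
metrics = {
    "alice": {"helpful_answers": 12, "questions_asked": 4, "threads_facilitated": 3},
    "bob":   {"helpful_answers": 2,  "questions_asked": 9, "threads_facilitated": 0},
}

# The faculty lever: which criteria matter for this class, and how much.
faculty_weights = {
    "helpful_answers": 2.0,
    "questions_asked": 0.5,
    "threads_facilitated": 3.0,
}

def participation_score(student_metrics, weights):
    """Fold analytics output into a single, faculty-tuned participation score."""
    return sum(w * student_metrics.get(name, 0) for name, w in weights.items())

for student, m in metrics.items():
    print(student, participation_score(m, faculty_weights))
```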

4. Leverage the socially constructed nature of expertise (and therefore competence)

Why do colleges exist? Once upon a time, if you went to a local blacksmith that you hadn’t been to before, you could ask your neighbor about his experience as a customer or look at the products the blacksmith produced. If you wanted to hire somebody you didn’t know to work in your shop, you would do the same. You’d generally get a holistic evaluation with some specific examples. “Oh, he’s great. My horse has five hooves. He figured out how to make a special shoe for that fifth hoof and didn’t even charge me extra!” You might gather a few of these stories and then make your decision. One thing you would not do is make a list of the 193 competencies that a blacksmith should have and check to see whether he’s been tested against them.

For a variety of reasons, it’s not that simple to evaluate expertise anymore. Credentialing institutions have therefore become proxies for these sorts of community trust networks. “I don’t know you, but you graduated from Harvard, and I believe Harvard is a good school.” There was some of that in the early days—“I don’t know you, but you apprenticed with Rolf, and I trust Rolf”—but the universities (and other guilds) took this proxy relationship to the next step by asking people to invest their trust in the institution rather than the particular teacher. The paradox is that, in order to justify their authority as reputation proxies, these institutions came under increasing pressure to produce objective-sounding assessments of their students’ expertise. As we go further and further down this road, these assessments look less and less like the original trust network assessment that the credential is supposed to be a proxy for. This may be one reason why a variety of measures show employers don’t pay much attention to where prospective employees get their degrees and don’t have a high opinion of how well college is preparing students for their careers.

As somebody who has made hiring decisions in both big and small companies, I can tell you that I don’t remember even looking at prospective employees’ college credentials. The first screening was based on what work they had done for whom. If the positions had been entry-level, I might have looked at their college backgrounds, but even there, I probably would have looked more at the recommendations, extra-curricular activities, and any portfolio projects. In other words, who will vouch for you, what you are passionate about, and what work you can show. Except in a few specific fields, the college degree is at most a gateway requirement. You may have to have one in order to be considered for some jobs, but it doesn’t help you actually land those jobs. And there is little evidence I am aware of that increasingly fine-grained competency assessments improve the value of the credential.

This isn’t to say that there is no assessment mechanism better than the old ways. Nor is it to say anything about the value of CBE, either for pedagogical purposes (e.g., the way it is used in the Habitable Worlds example above) or for increasing access to education (and educational credentials) through prior learning assessments and the ability to free students from the tyranny of seat-time requirements. It’s just to say that it’s not clear to me that the path toward exhaustive assessment of fine-grained competencies leads us anywhere useful in terms of the value of the credential itself or in fostering the deep learning that a college degree is supposed to certify. In fact, it may be harmful in those respects.

If we could muster the courage to loosen our grip on the current obsession with objective, knowledge-based certification, we might discover that the combination of digital social networks and various types of analytics holds out the promise that we can recreate something akin to the original community trust network at scale. Participants—students, in our case—could be evaluated on their expertise based on whether people with good reputations in their community (or network) think that they have demonstrated expertise. Just as they always have been. And the demonstration of that expertise will be on full display for direct evaluation, because the conversations in which the demonstrations occurred and were judged by our trusted community members are on permanent digital display.[1] The learning design creates situations in which students are motivated to build trust networks in the pursuit of solving a difficult, college-level problem. The platform helps us to externalize, discover, and analyze these local trust networks (even if we don’t know any of the participants).
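
One plausible way to externalize such a network, sketched here under my own assumptions rather than any particular platform’s design, is to treat endorsements in problem-solving threads as a directed graph and let reputation flow along it, PageRank-style, so that an endorsement from an already-trusted member counts for more:

```python
# Who endorsed whose contributions in problem-solving threads.
endorsements = {
    "alice": ["bob", "carol"],
    "bob":   ["carol"],
    "carol": ["alice"],
    "dana":  ["carol"],
}
people = set(endorsements) | {p for ps in endorsements.values() for p in ps}

# Iterative reputation: an endorsement is worth more when it comes from
# someone the community already trusts (a simplified PageRank).
reputation = {p: 1.0 for p in people}
for _ in range(20):
    updated = {p: 0.15 for p in people}  # small baseline for everyone
    for endorser, endorsed in endorsements.items():
        share = reputation[endorser] / len(endorsed)
        for person in endorsed:
            updated[person] += 0.85 * share
    reputation = updated

print(sorted(reputation.items(), key=lambda kv: -kv[1]))  # carol ranks highest
```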

* * *

Those are the four main design goals for the series. (Nothing too ambitious.) In my next post, I’ll lay out the use case that will drive the design.


  1. Hat tip to Patrick Masson, among others, for guiding me to this insight.

The post Blueprint for a Post-LMS, Part 1 appeared first on e-Literate.

Alternate Ledes for CUNY Study on Raising Graduation Rates

Sun, 2015-03-01 14:23

By Phil Hill

Last week MDRC released a study on the City University of New York’s (CUNY) Accelerated Study in Associate Programs (ASAP) in nearly breathless terms.

[Image: report title page]

  • ASAP was well implemented. The program provided students with a wide array of services over a three-year period, and effectively communicated requirements and other messages.
  • ASAP substantially improved students’ academic outcomes over three years, almost doubling graduation rates. ASAP increased enrollment in college and had especially large effects during the winter and summer intersessions. On average, program group students earned 48 credits in three years, 9 credits more than did control group students. By the end of the study period, 40 percent of the program group had received a degree, compared with 22 percent of the control group. At that point, 25 percent of the program group was enrolled in a four-year school, compared with 17 percent of the control group.
  • At the three-year point, the cost per degree was lower in ASAP than in the control condition. Because the program generated so many more graduates than the usual college services, the cost per degree was lower despite the substantial investment required to operate the program.

Accordingly, the media followed suit with breathless coverage[1]. Consider this from Inside Higher Ed and their article titled “Living Up to the Hype”:

Now that firm results are in, across several different institutions, CUNY is confident it has cracked the formula for getting students to the finish line.

“It doesn’t matter that you have a particularly talented director or a president who pays attention. The model works,” said John Mogulescu, the senior university dean for academic affairs and the dean of the CUNY School of Professional Studies. “For us it’s a breakthrough program.”

MDRC and CUNY also claim that “cracking the formula” means that other schools can benefit, as described earlier in the article:

“We’re hoping to extend that work with CUNY to other colleges around the country,” said Michael J. Weiss, a senior associate with MDRC who coauthored the study.

Unfortunately . . .

If you read the report itself, the data doesn’t back up the bold claims in the executive summary and in the media. A more accurate summary might be:

For the declining number of young, living-with-parents community college students planning to attend full-time, CUNY has explored how to increase student success while avoiding any changes in the classroom. The study found that a package of interventions requiring full-time enrollment, increasing per-student expenditures by 63%, and providing aggressive advising as well as priority access to courses can increase enrollment by 22%, inclusive of term-to-term retention. At the 3-year mark these combined changes translate into an 82% increase in graduation rates, but it is unknown if any changes to the interventions would affect the results, and it is unknown what results would occur at the 4-year mark. Furthermore, it is unclear whether this program can scale due to priority course access and effects on the growing non-traditional student population. If a state sets performance-funding based on 3-year graduation rates and nothing else, this program could even reduce costs.

Luckily, the report is very well documented, so nothing is hidden. What are the problems that would lead to this alternate description?

  • This study covers only one segment of the population: first-time, low-income students willing to go full-time who have one or two developmental course requirements (not zero, not three or more). This target group – less than one-fourth of the CUNY 2-year student population – is predominantly young (77% are younger than 22) and living at home (73% with parents). For the rest, including the growing working-adult population:

(p. 92): It is unclear, however, what the effects might be with a different target group, such as low-income parents. It is also unclear what outcomes an ASAP-type program that did not require full-time enrollment would yield.

  • The study required full-time enrollment (12 credits attempted per term) and only evaluated 3-year graduation rates, which almost explains the results by itself. Do the math (24 credits per year over 3 years is 72 attempted credits, minus the 3-6 developmental credits that don’t count toward the degree, leaving 66-69) and you see that going “full-time” is practically the only way to earn a 60-credit associate’s degree in 3 years; the sketch after this list works out the arithmetic. As the report itself states:

(p. 85): It is likely that ASAP’s full-time enrollment requirement, coupled with multiple supports to facilitate that enrollment, were central to the program’s success.

  • The study created a special class of students with priority enrollment. One of the biggest challenges at public colleges is students simply getting access to the courses they need. The ASAP students were given priority enrollment, as the report itself states:

(p. 34): In addition, students were able to register for classes early in every semester they participated in the program. This feature allowed ASAP students to create convenient schedules and have a better chance of enrolling in all the classes they need. Early registration may be especially beneficial for students who need to enroll in classes that are often oversubscribed, such as popular general education requirements or developmental courses, and for students in their final semesters as they complete the last courses they need to graduate.

  • The study made no attempt to understand the many variables at play. There was a plethora of interventions – full-time enrollment requirement, priority enrollment, special seminars, reduced load on advisers, etc. Yet we have no idea which components led to which effects. From the report:

(p. 85): What drove the large effects found in the study and which of ASAP’s components were most important in improving students’ academic outcomes? MDRC’s evaluation was not designed to definitively answer that question. Ultimately, each component in ASAP had the potential to affect students’ experiences in college, and MDRC’s evaluation estimates the effect of ASAP’s full package of services on students’ academic outcomes.

  • The study made no changes at all to actual teaching and learning practices. It almost seems this was the point: to find out how we can change everything except teaching and learning to get students to enroll full-time. From the report:

(p. 34): ASAP did not make changes to pedagogy, curricula, or anything else that happened inside of the classroom.
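
The credit arithmetic from the second bullet above, worked out explicitly:

```python
credits_per_term = 12  # ASAP's full-time enrollment requirement
terms_per_year = 2     # fall and spring; intersessions would only add credits
years = 3

attempted = credits_per_term * terms_per_year * years  # 72 credits attempted
developmental = (3, 6)  # developmental courses don't count toward the degree
degree_credits = [attempted - d for d in developmental]  # 69 down to 66

# A 60-credit associate's degree in 3 years fits only with sustained
# full-time enrollment; anything less runs out of terms.
print(attempted, degree_credits)
```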

What Do We Have Left?

In the end, this was a study on pulling out all of the non-teaching stops to see if we can get students to enroll full-time. Target only students willing to go full-time, then constantly advise them to enroll full-time and stick with it, and remove as many financial barriers (funding the gap between tuition and financial aid, free textbooks, gas cards, etc.) as is feasible. With all of this effort, the real result of the study is that they increased the number of credits attempted and credits earned by 22%.

We already know that full-time enrollment is the biggest variable for graduation rates in community colleges, especially if measured over 4 years or less. Look at the recent National Student Clearinghouse report at a national level (tables 11-13):

  • Community college 4-year completion rate for exclusively part-time students: 2.32%
  • Community college 4-year completion rate for mixed enrollment students (some terms FT, some PT): 14.25%
  • Community college 4-year completion rate for exclusively full-time students: 27.55%

And that data is for 4 years – 3 years would have been more dramatic, simply because it’s almost impossible to get 60 credits if you don’t take at least 12 credits per term over 3 years.

What About Cost Analysis?

The study showed that CUNY spent approximately 63% more per student for the program compared to the control group. The bigger claim, however, is that cost per graduate is actually lower (163% of the cost with 182% of the graduates). But what about the students who don’t graduate or transfer? What about the students who graduate in 4 years instead of 3? Colleges spend money on all their students, and most community college students (60%) can only go part-time and will never be able to graduate in 3 years.
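
That cost-per-degree claim boils down to a single division over the report’s relative figures:

```python
relative_cost_per_student = 1.63  # ASAP spent 163% of control-group dollars
relative_graduates = 1.82         # and produced 182% of the 3-year graduates

relative_cost_per_degree = relative_cost_per_student / relative_graduates
print(round(relative_cost_per_degree, 2))  # ~0.90: about 10% less per degree,
# but only when "degree" is defined as a degree earned within 3 years
```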

Even if you factor in performance-based funding, using a 3-year graduation basis is misleading. No state is considering funding based only on 3-year graduation. If that were so, I have a much easier solution – refuse to admit any students planning to take fewer than 12 credits per term. That would produce dramatic cost savings and dramatic increases in graduation rates . . . as long as you’re willing to completely ignore the traditional community college mission that includes:

serv[ing] all segments of society through an open-access admissions policy that offers equal and fair treatment to all students

Can It Scale?

Despite the claims that “the model works” and that CUNY has cracked the formula, does the report actually support this claim? Specifically, can this program scale?

First of all, the report only makes its claims for a small percentage of students, who are predominantly young and live at home with their parents – we don’t know whether the results apply beyond the target group, as the report itself calls out.

But within this target group, I think there are big problems with scaling. One is priority enrollment in all courses, including oversubscribed courses and those available at convenient times. The control group was at a disadvantage, as were all non-target students (including the growing working-adult population and students going back to school). This priority enrollment approach is based on scarcity, and the very nature of scaling the program will reduce the benefits of the intervention.

I have Premier Silver status at United Airlines thanks to a few international trips. If this status gave me realistic priority access to first-class upgrades, then I would be more likely to fly United on a routine basis. As it is, however, I often show up at the gate and find myself #30 or worse in line for first-class upgrades when the cabin only has 5-10 first-class seats available. The priority status has lost most of its benefits as United has scaled such that more than a quarter of all passengers on many routes also have priority status.

CUNY plans to scale from 456 students in the ASAP study all the way up to 13,000 students in the next two years. Assuming even distribution over two years, this changes the group size from 1% of the entering freshman population to 19%. Won’t that make a dramatic difference in how easy it will be for ASAP students to get into the classes and convenient class times they seek? And doesn’t this program conflict with the goals of offering “equal and fair treatment to all students”?

Alternate Ledes for Media Coverage of Study

I realize my description above is too lengthy for media ledes, so here are some others that might be useful:

  • CUNY and MDRC prove that enrollment correlates with graduation time.
  • Requiring full-time enrollment and giving special access to courses leads to more full-time enrollment.
  • What would it cost to double an artificial metric without asking faculty to change any classroom activities? 63% more per student.

Don’t Get Me Wrong

I’m all for spending money and trying new approaches to help students succeed, including raising graduation rates. I’m also for increasing the focus on out-of-classroom support services to help students. I’m also glad that CUNY is investing in a program to benefit its own students.

However, the executive summary of this report and the resultant media coverage are misleading. We have not cracked the formula, CUNY is not ready to scale this program or export it to other colleges, and taking the executive summary’s claims at face value is risky at best. The community would be better served if CUNY:

  • Made some effort to separate the variables and their effects on enrollment and graduation rates;
  • Extended the study to also look at more realistic 4-year graduation rates in addition to 3-year rates;
  • Included an analysis of diminishing benefits from priority course access; and
  • Performed a cost analysis based on the actual or planned funding models for community colleges.

  1. And this article comes from a reporter for whom I have tremendous respect.

The post Alternate Ledes for CUNY Study on Raising Graduation Rates appeared first on e-Literate.

Unsubscribe

Sat, 2015-02-28 16:00

By Michael Feldstein

A little while back, e-Literate suddenly got hit by a spammer who was registering for email subscriptions to the site at a rate of dozens of new email addresses every hour. After trying a number of less extreme measures, I ended up removing the subscription widget from the site. Unfortunately, as a few of you have since pointed out to me, by removing the option to subscribe by email, I also inadvertently removed the option to unsubscribe. Once I realized there was a problem (and cleared some time to figure out what to do about it), I investigated a number of other email subscription plugins, hoping that I could find one that is more secure. After some significant research, I came to the conclusion that there is no alternate solution that I can trust more than the one we already have.

The good news is that I discovered the plugin we have been using has an option to disable the subscribe feature while leaving on the unsubscribe feature. I have done so. You can now find the unsubscribe capability back near the top of the right-hand sidebar. Please go ahead and unsubscribe yourself if that’s what you’re looking to do. If any of you need help unsubscribing, please don’t hesitate to reach out to me.

Sorry for the trouble. On a related note, I hope to reactivate the email subscription feature for new subscribers once I can find the right combination of spam plugins to block the spam registrations without getting in the way of actual humans trying to use the site.

The post Unsubscribe appeared first on e-Literate.

Greg Mankiw Thinks Greg Mankiw’s Textbook Is Fairly Priced

Fri, 2015-02-27 16:37

By Michael Feldstein

This is kind of hilarious.

Greg Mankiw has written a blog post expressing his perplexity[1] with The New York Times’ position that textbooks are overpriced:

To me, this reaction seems strange. After all, the Times is a for-profit company in the business of providing information. If it really thought that some type of information (that is, textbooks) was vastly overpriced, wouldn’t the Times view this as a great business opportunity? Instead of merely editorializing, why not enter the market and offer a better product at a lower price? The Times knows how to hire writers, editors, printers, etc. There are no barriers to entry in the textbook market, and the Times starts with a pretty good brand name.

My guess is that the Times business managers would not view starting a new textbook publisher as an exceptionally profitable business opportunity, which if true only goes to undermine the premise of its editorial writers.

It’s worth noting that Mankiw received a $1.4 million advance for his economics textbook from his original publisher, Harcourt Southwestern, which was later acquired by the company now known as Cengage Learning. That was in 1997. The book is now in its seventh edition, and Cengage publishes five different versions of it (not counting the five versions of the previous edition, which is still on the market). That said, he is probably right that NYT would not view the textbook industry as a profitable business opportunity. But think about that. A newspaper finds the textbook industry unattractive economically. The textbook industry is imploding. Mankiw’s publisher just emerged from bankruptcy, and textbook sales are down and still dropping across the board.

One reason that textbook prices have not been responsive to market forces is that most faculty do not have strong incentives to search for less expensive textbooks and, to the contrary, have high switching costs. They have to both find an alternative that fits their curriculum and teaching approach—a non-trivial investment in itself—and then rejigger their course design to fit with the new book.

A second part of the problem is that the publishers really can’t afford to lower textbook prices at this point without speeding up their slow-motion train crash, because their unit sales keep dropping as students find more creative ways to avoid buying the book. Their way of dealing with falling sales is to raise the price on each book that they do sell. It’s a vicious cycle—one that could potentially be broken by the market forces that Mankiw seems so sure are providing fair pricing, if only the people making the adoption decisions had motivations that were aligned with the people making the purchasing decisions. The high cost of switching for faculty, coupled with their relative personal immunity to price increases, translates into a barrier to entry for potential competitors looking to underbid the established players.

Which brings me to the third reason. There are plenty of faculty who would like to believe that they could make money writing a textbook someday and that doing so would generate enough income to make a difference in their lives. Not all, not most, maybe not even many, but enough to matter. As long as faculty can potentially get compensated for sales, there will be motivation for them to see high textbook prices that they don’t have to pay themselves as “fair” or, at least, tolerable. It’s a conflict of interest. And Greg Mankiw, as a guy who’s made the big score, has the biggest conflict of interest of all and the least motivation of anyone to admit that textbook prices are out of hand, and that the textbook “market” he wants to believe in probably doesn’t even properly qualify as a market, never mind an efficient one.

  1. Hat tip to Stephen Downes for the link.

The post Greg Mankiw Thinks Greg Mankiw’s Textbook Is Fairly Priced appeared first on e-Literate.

Editorial Policy: Notes on recent reviews of CBE learning platforms

Fri, 2015-02-27 12:30

By Phil Hill

Oh let the sun beat down upon my face, stars to fill my dream
I am a traveler of both time and space, to be where I have been
To sit with elders of the gentle race, this world has seldom seen
They talk of days for which they sit and wait and all will be revealed

- R Plant, Kashmir

Over the past half year or so, I’ve provided more in-depth product reviews of several learning platforms than is typical – Helix, FlatWorld, LoudCloud, Bridge. Given that e-Literate is not a review site, nor do we tend to analyze technology for technology’s sake, it’s worth asking why the change. There has been a lot of worthwhile discussion in several blogs recently about whether the LMS is obsolete or critical to the future of higher ed, and this discussion has even raised the subject of how we got to the current situation in the first place.

An interesting development I’ve observed is that the learning environment of the future might already be emerging on its own, but not necessarily from the institution-wide LMS market. Canvas, for all its market-changing power, is almost a half decade old. The area of competency-based education (CBE), with its hundreds of pilot programs, appears to be generating a new generation of learning platforms that are designed around the learner (rather than the course) and around learning (or at least the proxy of competency frameworks). It seems useful to get a more direct look at these platforms, both to understand the future of the market and to recognize that the next-generation environment is not necessarily a concept yet to be designed.

At the same time, CBE is a very important development in higher ed, yet there are plenty of signs that people assume CBE means students working in isolation to learn regurgitated facts assessed by multiple-choice questions. Yes, that does happen in some cases and is a risk for the field, but CBE is far richer. Criticize CBE if you will, but do so based on what’s actually happening[1].

Both Michael and I have observed and even participated in efforts that seek to explore CBE and the learning environment of the future.

Perhaps because I’m prone to visual communication approaches, the best way for me to work out my own thoughts on these subjects, as well as to share them more broadly through e-Literate, has been to do more in-depth product reviews with screenshots.

Bridge, from Instructure, is a different case. I frequently get into discussions about how Instructure might evolve as a company, especially given their potential IPO. The public markets will demand continued growth, so what will this change in terms of their support of Canvas as a higher education LMS? Will they get into adjacent markets? With the latest news of the company raising $40 million in what is likely the last pre-IPO VC funding round, as well as their introduction of Bridge to move into the corporate learning space, we now have a pretty solid basis for answering these questions. The keys are understanding that Bridge is a separate product and seeing how the company approached its design while leaving Canvas unchanged.

With this in mind, it’s worth noting some editorial policy stuff at e-Literate:

  • We do not endorse products; in fact, we generally focus on the academic or administrative need first as well as how a product is selected and implemented.
  • We do not take solicitations to review products, even if a vendor’s competitors have been reviewed. The reviews mentioned above were more about understanding market changes and understanding CBE as a concept than about the products per se.
  • We might accept a vendor’s offer of a demo at our own discretion, either online or at a conference, but even then we do not promise to cover it in a blog post.

OK, the lead-in quote is a stretch, but it does tie in to one of the best videos I have seen in a while.

[Embedded video]

  1. And you would do well to read Michael’s excellent post on CBE meant for faculty trying to understand the subject.

The post Editorial Policy: Notes on recent reviews of CBE learning platforms appeared first on e-Literate.

LoudCloud Systems and FASTRAK: A non walled-garden approach to CBE

Thu, 2015-02-26 13:44

By Phil Hill

As competency-based education (CBE) becomes more and more important to US higher education, it is worth exploring the learning platforms in use. While there are cases of institutions using their traditional LMS to support a CBE program, a new market is developing around learning platforms designed specifically for self-paced, fully online, competency-framework-based approaches.

Recently I saw a demo of the new CBE platform from LoudCloud Systems, a company whose traditional LMS I covered a few years ago. The company is somewhat confusing to me – I had expected a far larger market impact from them, based on their product design, than has happened in reality. LoudCloud has recently entered the CBE market, not by adding features to their core LMS but by creating a new product called FASTRAK. Like Instructure with their creation of a new LMS for a different market (corporate learning), LoudCloud determined that CBE called for a new design and that the company could handle two platforms for two mostly distinct markets. In the case of both Bridge and FASTRAK, I believe the creation of a new learning platform took approximately one year (thanks a lot, Amazon). LoudCloud did leverage several of their traditional LMS tools, such as rubrics, discussion forums, and their LoudBook interactive eReader.

As was the case for the description of the Helix CBE-based learning platform and the description of FlatWorld’s learning platform, my interest here is not merely to review one company’s products, but rather to illustrate aspects of the growing CBE movement using the demo.

LoudCloud’s premier CBE partner is the University of Florida’s Lastinger Center, a part of the College of Education that provides professional development for Florida’s 55,000 early learning teachers. They have or expect to have more than a dozen pilot programs for CBE in place during the first half of 2015.

Competency Framework

Part of the reason for developing a new platform is that FASTRAK appears to be designed around a fairly comprehensive competency framework embodied in LoudTrack – an authoring tool and competency repository. This framework allows the school to combine its own set of competencies with externally defined, job-based competencies such as those from O*NET Online.

[Screenshot: competency structure]

The idea is to (roughly in order):

  • Develop competencies;
  • Align to occupational competencies;
  • Define learning objectives;
  • Develop assessments; and
  • Then design academic programs.

[Screenshot: LoudTrack editing]

One question within CBE design is what the criteria for mastery of a specific competency should be – passing some, most, or all of the sub-competencies? FASTRAK allows this decision to be set by program configuration.

[Screenshot: mastery scale configuration]
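
Here is a guess at what “set by program configuration” could look like mechanically. The rule names and structure are my own illustration, not LoudCloud’s actual schema:

```python
# Program-configurable rules for rolling sub-competency results into mastery.
MASTERY_RULES = {
    "any":  lambda passed, total: passed >= 1,
    "most": lambda passed, total: passed / total > 0.5,
    "all":  lambda passed, total: passed == total,
}

def mastered(sub_results, rule="all"):
    """sub_results is one boolean per sub-competency assessment."""
    return MASTERY_RULES[rule](sum(sub_results), len(sub_results))

results = [True, True, False]
print(mastered(results, rule="most"))  # True: 2 of 3 sub-competencies passed
print(mastered(results, rule="all"))   # False: the program demands all 3
```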

Many traditional academic programs have learning outcomes, but a key differentiator for a CBE program is having some form of this competency framework and up-front design.

A unique feature (at least, one I have not seen elsewhere so far) is FASTRAK’s ability to let faculty set competencies at the individual course level, provided in a safe area that stays outside of the overall competency repository unless reviewed and approved.

The program or school can also group together specific competencies to define sub-degree certificates.

Course Design Beyond Walled Garden

At the recent Instructional Technology Council (ITC) eLearning 2015 conference, I presented a view of the general ed tech market moving beyond the walled garden approach. As part of this move, however, I described how the walled garden will likely live on within top-down designs of specific academic programs, such as many (if not most) of the CBE pilots underway.

Now it's clear what's the role @PhilOnEdTech gives to #LMS when he talks about a new "walled garden" age. #LTI +1 pic.twitter.com/DXgdjctHto

— Toni Soto (@ToniSoto_Vigo) February 22, 2015

What FASTRAK shows, however, is that CBE does not require a walled garden approach. Keep in mind the overall flow of starting with the competency framework, moving through assessments, and then designing the academic program. In this last area, FASTRAK allows several approaches to bringing in pre-existing content and separate applications.

[Screenshot: add resource type]

The system, along with the current version of LoudBooks, is LTI as well as SCORM compliant and uses this interoperability to give choices to faculty. Remember that FlatWorld prides itself on deeply integrating content, mostly its own, into the platform. While FlatWorld can bring in outside content like OER, it is their designers who have to do this work. LoudCloud, by contrast, puts this choice in the hands of faculty. Two very different approaches.

[Screenshot: LTI apps]
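
For context on what “LTI compliant” buys here: in LTI 1.1, the platform launches an external tool by auto-submitting a signed form POST to the tool’s URL. Below is a sketch of the core launch parameters; all values are placeholders, and a real launch must be signed with OAuth 1.0a (HMAC-SHA1) using the tool’s shared secret before posting.

```python
# Core fields of an LTI 1.1 basic launch (all values are placeholders).
launch_params = {
    "lti_message_type": "basic-lti-launch-request",
    "lti_version": "LTI-1p0",
    "resource_link_id": "unit-3-simulation",  # required: identifies the placement
    "user_id": "opaque-user-123",
    "roles": "Learner",
    "context_id": "course-section-042",       # the launching course context
    "oauth_consumer_key": "key-issued-by-tool-provider",
}
# The platform signs these parameters with the tool's shared secret
# (OAuth 1.0a, HMAC-SHA1) and auto-submits them as a browser form POST.
print(launch_params)
```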

FASTRAK does provide a fairly impressive set of reports showing how students are doing against the competencies, which should help faculty and program designers see where students are having problems or where course designs need improving.

[Screenshot: competency reporting]

CBE-Light

An interesting note from the demo and conversation is that LoudCloud claims that half of their pilots are CBE-light, where schools want to try out competencies at the course level but not at the program level. This approach allows them to avoid the need for regulatory approval.

While I have already called out the basics of what CBE entails in this primer, I have also seen a lot of watering down or alteration of the CBE terminology. Steven Mintz from the University of Texas recently published an article at Inside Higher Ed that calls out what he terms CBE 2.0, where programs are trying approaches that are not fully online or even self-paced. This will be a topic for a future post on what really qualifies as CBE and where people are just co-opting the terminology.

The post LoudCloud Systems and FASTRAK: A non walled-garden approach to CBE appeared first on e-Literate.

e-Literate TV Preview: Essex County College and changing role of faculty

Wed, 2015-02-25 17:58

By Phil HillMore Posts (291)

As we get closer to the release of the new e-Literate TV series on personalized learning, Michael and I will be posting previews highlighting some of the more interesting segments from the series. When we first talked about the series with its sponsors, the Bill & Melinda Gates Foundation, they agreed to give us the editorial independence to report what we find, whether it is good, bad, or indifferent.

In this video preview (about 4:18 in duration), we hear from two faculty members who have first-hand experience in using a personalized learning approach as well as a traditional approach to remedial math. We also hear from students on what they are learning about learning. In our case studies so far, the real faculty issue is not that software is being designed to replace faculty, but rather that successful implementation of personalized learning necessarily changes the role of faculty. One of our goals with e-Literate TV is to allow faculty, staff and students to describe direct experiences in their own words. Take a look.

Click here to view the embedded video.

Stay tuned for the full episodes to be released on the In The Telling platform[1]. You can follow me (@PhilOnEdTech), Michael (@mfeldstein67), or e-Literate TV (@eLiterateTV) to stay up to date. You can also follow the e-Literate TV YouTube channel. We will also announce the release here on e-Literate.

  1. ITT is our partner in developing this series, providing video production as well as the platform.

The post e-Literate TV Preview: Essex County College and changing role of faculty appeared first on e-Literate.

First View of Bridge: The new corporate LMS from Instructure

Tue, 2015-02-24 04:41

By Phil HillMore Posts (291)

Last week I covered the announcement from Instructure that they had raised another $40 million in venture funding and were expanding into the corporate learning market. Today I was able to see a demo of their new corporate LMS, Bridge. While Instructure has very deliberately designed a separate product from Canvas, their education-focused LMS, you can see the same philosophy of market strategy and product design embedded in the new system. In a nutshell, Bridge is designed to be a simple, intuitive platform that moves control of the learning design away from central HR or IT and closer to the end user.

While our primary focus at e-Literate is on higher ed and even some K-12 learning, the professional development and corporate training markets are becoming more important even in the higher ed context. At the least, this is important for those who are tracking Instructure and how the company’s plans might affect the future of education platforms.

The core message of Instructure regarding Bridge – just as with Canvas – is that it is focused on ease-of-use whereas the entrenched competition has fallen prey to feature bloat driven by edge cases. But setting aside this claim and Instructure’s track record with Canvas, what does ease-of-use actually mean? I’m pretty sure every vendor out there claims ease-of-use, whether their designs are elegant or terrible[1].

Based on the demo, Bridge appears to define ease-of-use in three distinct areas – streamlined, clutter-free interface for learners, simple tools for content creation by business units, and simple tools for managing learners and content.

Learner User Experience

Bridge has been designed over the past year, based on Instructure’s decision to avoid force-fitting Canvas into corporate learning markets. The core use cases of this new market are far simpler than education use cases, and the resultant product has fewer bells and whistles than Canvas. In Instructure’s view, the current market has such cumbersome products that learning platforms are mostly used just for compliance – take this course or you lose your job – and not at all for actual learning. The Bridge interface (shown below on both mobile and laptop screens) is simple.

Mobile_same_as_laptop

Learner_progress

While this is a clean interface, I don’t see it as being that big of a differentiator or rationale for a new product line.

Content Creation

The content creation tools, however, start to show Instructure’s distinctive approach. They have made their living on being able to say no – refusing to let user requests for additional features change their core design principle. The approach for Bridge is to assume that content creators have no web design or instructional design experience, providing them with simple formatting and suggestion-based tools to make content creation easy. The formatting looks to be on the level of Google Docs or basic WordPress, rather than Microsoft Word.

Content_authoring_tool

When creating new content, the Bridge LMS even puts up prompts for pre-formatted content types.

Content_prompts

When creating quizzes, they have an interesting tool that uses natural language processing to facilitate simple questions that can be randomized. The author writes a simple sentence expressing what they want to convey, such as “Golden Gate Bridge is in San Francisco”. The tool parses out each word and allows the author to add alternatives that can serve as quiz distractors, such as San Mateo or San Diego (it is not clear whether you can group words to replace the full “San Francisco” rather than just “Francisco”). The randomized quiz questions can then be created automatically.

Quiz Creation
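The underlying idea is simple to sketch – substitute author-supplied distractors for a chosen phrase and shuffle the options. This is my own illustration of the pattern, not Bridge's actual code (which, for instance, may or may not treat multi-word phrases as a unit):

    import random

    def make_question(sentence, answer, distractors):
        """Turn a declarative sentence into a randomized multiple-choice item."""
        stem = sentence.replace(answer, "_____")
        options = [answer] + distractors
        random.shuffle(options)
        return stem, options, options.index(answer)

    stem, options, correct = make_question(
        "The Golden Gate Bridge is in San Francisco.",
        answer="San Francisco",                  # treated here as one unit
        distractors=["San Mateo", "San Diego"],  # author-supplied alternatives
    )
    print(stem)     # The Golden Gate Bridge is in _____.
    print(options)  # e.g. ['San Diego', 'San Francisco', 'San Mateo']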

For content that is more complex, Instructure is taking the approach of saying ‘no’ – go get that content via a SCORM/AICC import from a more complex authoring tool.

Learner Administration Tools

Rather than relying on complex HR systems to manage employees, Bridge goes with a CSV import tool that reminds me of Tableau in that it pre-imports the file, shows the fields, and allows drag-and-drop selection and re-ordering of fields for the final import[2].

CSV_Learner_Import
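The pattern here – preview the file, then apply the user's chosen mapping and ordering of source columns to system fields – is straightforward. A minimal sketch with hypothetical field names:

    import csv

    def preview(path, n=5):
        """Show the header and first few rows so the user can inspect fields."""
        with open(path, newline="") as f:
            rows = list(csv.DictReader(f))
        return list(rows[0].keys()), rows[:n]

    def import_learners(path, mapping):
        """mapping: source column -> system field, in the order the user
        arranged the fields via drag-and-drop."""
        with open(path, newline="") as f:
            for row in csv.DictReader(f):
                yield {field: row[col] for col, field in mapping.items()}

    # The user keeps and re-orders only the columns they care about
    mapping = {"Work Email": "email", "Given Name": "first_name", "Surname": "last_name"}
    learners = list(import_learners("new_hires.csv", mapping))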

The system can also create or modify groups based on rules.

Group_creation_tool

To pull this together, Bridge attempts to automate as much of the background process as is feasible. To take one example, when you hire a new employee or change the definition of groups, the system retroactively adds the revised list of learners or groups to assigned courses.
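A sketch of how rule-based groups plus retroactive course assignment could fit together (hypothetical rules and field names, not Instructure's code):

    def group_members(learners, rule):
        """Evaluate a group-membership rule (a predicate) over all learners."""
        return {l["email"] for l in learners if rule(l)}

    def sync_enrollments(learners, groups, assignments, enrollments):
        """Re-evaluate every group and retroactively enroll any new members
        in the courses assigned to that group."""
        for name, rule in groups.items():
            for email in group_members(learners, rule):
                for course in assignments.get(name, []):
                    enrollments.setdefault(course, set()).add(email)
        return enrollments

    learners = [{"email": "ana@corp.example", "dept": "sales"}]
    groups = {"sales-team": lambda l: l["dept"] == "sales"}
    assignments = {"sales-team": ["compliance-101"]}
    print(sync_enrollments(learners, groups, assignments, {}))
    # {'compliance-101': {'ana@corp.example'}}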

For live training, you can see where Bridge takes the opposite approach to Canvas. In Canvas (as with most education LMSs), it is assumed that more time in the system means more time learning – the core job of learners. In Bridge, however, the assumption is that LMS time-on-task should be minimized. For compliance training in particular, you want the employee to spend as little time in training as is reasonable so they can get their real job done. Bridge focuses not on the live training itself but rather on the logistical tasks of setting up the course (scheduling, registering, taking attendance).

Live_training_tools_1

Prospects and Implications

Taken together, the big story here is that Instructure seeks to change the situation where learning management in corporations is cloistered within HR, IT and instructional design units. As they related today, they want to democratize content creation and center learning in the business units where the subject matter experts reside.

Their future plans focus on engagement – getting feedback and dialogue from employees rather than just one-way content dissemination and compliance. If they are successful, this is where they will gain lasting differentiation in the market.

What does this mean from a market perspective? Although I do not have nearly as much experience with corporate training as I do with higher education, this LMS seems like a real system and a real market entry into corporate learning. The primary competitors in this space are not Blackboard, as TechCrunch and BuzzFeed implied, but Saba, SumTotal, SuccessFactors, Cornerstone, and the like. Unlike education, this is a highly fragmented market. I suspect this means that the growth prospects for Instructure will be slower than in education, but real nonetheless. Lambda Solutions shared the Bersin LMS study to give a view of the market.

lms-market

This move is clearly timed to help with Instructure’s planned IPO that could happen as soon as November 2015[3]. Investors can now see potential growth in an adjacent market to ed tech where they have already demonstrated growth.

I mentioned in my last post that the biggest risk I see is management focus and attention. I suspect that, with their strong fund-raising ($90 million to date), the company has enough cash to hire staff for both product lines, but the same senior management team will have to oversee both the Canvas and Bridge product lines and markets.

  1. Although I would love to see the honest ad: “With a horrible, bloated user interface based on your 300-item RFP checklist!”
  2. I assume they can integrate with HR systems as well, but we did not discuss this aspect.
  3. Note this is based on my heuristic analysis and not from Instructure employees.

The post First View of Bridge: The new corporate LMS from Instructure appeared first on e-Literate.

ITC #eLearning2015 Keynote Video and Material

Sat, 2015-02-21 17:20

By Phil HillMore Posts (291)

This past week I had the opportunity to deliver the keynote at the Instructional Technology Council (ITC) eLearning2015 conference in Las Vegas. ITC is a great group that provides leadership and professional development to community and junior college faculty and staff working in online education and, increasingly, in hybrid course models. To save time on individual sharing, I have included most of the material below.

Here is the MediaSite recording of the keynote:

And here are the slides in SlideShare:

And here is the YouTube channel for e-Literate TV. The Essex County College clip is a sneak preview of an upcoming e-Literate TV case study on personalized learning (more on that in the next post).

Click here to view the embedded video.

Finally, here are the two clips from the WCET14 student panel:

Need for some level of standardization:

Click here to view the embedded video.

Need for interaction:

Click here to view the embedded video.

And last, but certainly not least, the infamous Underpants Gnome video:

Click here to view the embedded video.

The post ITC #eLearning2015 Keynote Video and Material appeared first on e-Literate.

What TechCrunch Got Wrong (and Right) About Instructure Entering Corporate Learning Market

Thu, 2015-02-19 17:08

By Phil HillMore Posts (291)

After yesterday’s “sources say” report from TechCrunch about Instructure – maker of the Canvas LMS – raising a new round of financing and entering the corporate LMS space, Instructure changed plans and moved their official announcement up to today. The funding will both expand the Canvas team and establish the new corporate LMS team. I’m not a fan of media attempts to get a scoop based purely on rumors, and in this case TechCrunch got a few items wrong that are worth correcting.

  • Instructure raised $40 million in new financing (series E), not “between $50 to $70 million”. TechCrunch did hedge their bets with “low end of the range at over $40 million”.
  • The primary competition in the corporate LMS space is Saba, SumTotal, Skillsoft, Cornerstone – and not Blackboard.
  • The Canvas LMS was launched in 2010, not 2011. (OK, I’ll give them this one, as even Instructure seems to use the 2011 date).

TechCrunch did get the overall story of fund-raising and new corporate product right, but these details matter.

Instructure’s new product for the corporate learning market is called Bridge, with its web site here. This is an entirely new product, although it shares a product architecture similar to that of Canvas, the LMS designed for the education market (including being based on Ruby on Rails). Unlike Canvas, Bridge was designed mobile-first, with all mobile capabilities embedded in the product rather than in separate applications. In an interview, Josh Coates, CEO of Instructure, described their motivation for this new product.

We like the idea of building software that helps people get smarter. Post education there is a void, with bad corporate software.

The design goal of Bridge is to make the creation and consumption of learning content easy, although future directions for the company will emphasize employee engagement and two-way conversations within companies. According to Coates, this focus on engagement parallels their research for future emphasis in the education market.

Bridge

The Bridge product line will have a separate sales team and product team. From the press release:

Foundation partners include CLEARLINK, OpenTable and Oregon State University.

Oregon State University is an interesting customer of both products – they are adopting Canvas as part of their Unizin membership, and they are piloting Bridge as an internal HR system for training staff. Other Canvas education customers will likely follow this move.

Given the self-paced nature of both Competency-Based Education (CBE) and corporate learning systems, I asked if Bridge is targeted to get Instructure into the CBE land grab. Coates replied that they are researching whether and how to get into CBE, but they are first exploring if this can be done with Canvas. In other words, Bridge truly is aimed at the corporate learning market.

While Instructure has excelled at maintaining product focus and simplicity of user experience, this move outside of education raises the question of whether they can maintain company focus. The corporate market is very different from the education market – different product needs, a fragmented vendor market, different buying patterns. Many companies have tried to cross over between education and corporate learning, but most have failed. Blackboard, D2L and Moodle have made a footprint in the corporate space using one product for both markets. Instructure’s approach is different.

As for the fund-raising aspects, Instructure has made it very clear they are planning to go public with an IPO sometime soon, as reported by Buzzfeed today.

CEO Josh Coates told BuzzFeed today that the company had raised an additional $40 million in growth funding ahead of a looming IPO, confirming a rumor that was first reported by Tech Crunch yesterday. The company has now raised around $90 million.

Given their cash, a natural question is whether Instructure plans to use this to acquire other companies. Coates replied that they get increasingly frequent inbound requests (for Instructure to buy other companies) that they evaluate, but they are not actively pursuing M&A as a key corporate strategy.

I have requested a demo of the product for next week, and I’ll share the results on e-Literate as appropriate.

Update: Paragraph on organization corrected to point out separate product team. Also added sentence on funding to go to both Canvas and Bridge.

The post What TechCrunch Got Wrong (and Right) About Instructure Entering Corporate Learning Market appeared first on e-Literate.

NGDLE: The quest to eat your cake and have it too

Tue, 2015-02-17 05:51

By Phil HillMore Posts (291)

And I’m going old school and sticking to the previous saying.

Google_Ngram_Viewer

Today I’m participating in the EDUCAUSE meeting on Next Generation Digital Learning Environments, funded by the Bill & Melinda Gates Foundation[1]. From the invitation:

The purpose of the panel is to identify potential investment strategies that are likely to encourage and hasten the arrival of “next-generation digital learning environments,” online learning environments that take us beyond the LMS to fully support the needs of today’s students and instructors. [snip]

It is clear that to meet the needs of higher education and today’s learner, the NGDLE must support a much wider range of functionality than today’s LMS, including different instructional modes, alternative credit models, personalized learning, robust data and content exchange, real-time and iterative assessment, the social web, and contemporary software design and usability practices. The policy and cultural context at our colleges and universities must also adapt to a world in which all learning has a digital component.

As I’m making an ill-timed trip from sunny California to snow-ravaged DC for a reduced-attendance meeting, I should at least lay down some of my thoughts on the subject in writing[2].

There is potential confusion of language here in implying that the NGDLE is an environment to replace today’s LMS. Are we talking about new, monolithic systems that replace today’s LMS but also have a range of functionality to support new needs, or are we talking about an environment that allows reasonably seamless integration and navigation between multiple systems? Put another way, investing in what?

To get at that question we should consider the current LMS market.

Current Market

Unlike five years ago, market dynamics are now leading to systems that better meet the needs of students. Primarily driven by the entrance of the Canvas LMS, the end of the Blackboard – Desire2Learn patent lawsuit, and new ed tech investment, today’s systems are lower in cost than previous systems and have much better usability. Canvas changed the standard of what an LMS can be for traditional courses – competitors that view it as just the shiny new object and not a material difference in usability have done so at their own peril. Blackboard is (probably / eventually / gosh I hope) releasing an entirely new user experience this year that seems to remove much of the multiple-click clunkiness of the past. Moodle has eliminated most of the scroll of death. Sakai 10 introduced a new user interface that is far better than what they had in the past.

It seems that at every school I visit and in every report I read, students are asking for consistency of usage and navigation along with more usable systems. This is, in fact, what the market is finally starting to deliver. It’s not a perfect market, but there are real changes occurring.

I have already written about the trend of the LMS – driven largely by IMS standards – to move from a walled garden approach:

walledgarden2

to an open garden approach that allows the coordination of the base system with external tools.

walledgarden5

Largely due to adoption of the Learning Tools Interoperability (LTI) specifications from IMS Global, it is far easier today to integrate different applications with an LMS. Perhaps more importantly, the ability to move the integration closer to end users (from central IT to departments and faculty) is getting closer and closer to reality. Michael has also written about the potential of the Caliper framework to be even more significant in expanding interoperability.

The LMS is not going away, but neither is it going to be the whole of the online learning experience anymore. It is one learning space among many now. What we need is a way to tie those spaces together into a coherent learning experience. Just because you have your Tuesday class session in the lecture hall and your Friday class session in the lab doesn’t mean that what happens in one is disjointed from what happens in the other. However diverse our learning spaces may be, we need a more unified learning experience. Caliper has the potential to provide that.

At the same time there is a new wave of learning platforms designed specifically for new educational delivery models. I have started to cover the CBE platforms recently, as Motivis, Helix, FlatWorld, LoudCloud Systems, and others have been introduced with radically different features and capabilities. At e-Literate TV we are learning more about adaptive and personalized systems such as ALEKS, Smart Sparrow, OLI, Cerego and others that are designed around the learning itself.

If you look at this new wave of learning environments, you’ll see that they are designed around the learner instead of the course and are focused on competencies or some other form of learning outcomes.

In a sense, the market is working. Better usability for traditional LMS, greater interoperability, and new learning platforms designed around the learner. There is a risk for NGDLE in that you don’t want to screw up the market when it’s finally moving in the right direction.

And Yet . . .

The primary benefit of today’s LMS remains administrative management of traditionally-designed courses. In last year’s ECAR report on the LMS, faculty and students rated their LMS satisfaction highest for the basic administrative functions.

Faculty satisfaction LMS

Student satisfaction LMS

Opponents of the traditional LMS are right to call out how its design can stifle creativity and prevent real classroom engagement. Almost all capabilities of the LMS are available on the free Internet, typically in better-designed tools.

This situation leads to three challenges:

  • The community has discussed the need for direct teaching and learning support for years, yet most courses only use the LMS for rosters, grade book and document sharing (syllabus, readings, assignments). Vendors changed en masse to calling their systems Learning Management Systems in the late 2000s, but the systems mostly remain Course Management Systems as previously named. Yes, some schools and faculty – innovators and early adopters – have found ways to get learning benefits out of the systems, but that is secondary to managing the course.
  • New educational delivery models such as competency-based education (CBE) and personalized learning require a learner-centric design that is not just based on adding some features on top of the core LMS. It is worth noting that the new learning platforms tend to be wholesale replacements for the LMS in specific programs rather than expansions of its capabilities.
  • The real gains in learner-specific functionality have arisen from applications that don’t attempt to be all things to all people. In today’s world it’s far easier to create a new web and mobile-based application than ever before, and many organizations are taking this approach. Any attempt to push platforms into broader functionality creates the risk of pushing the market backwards into more feature bloat.

Back to the NGDLE

I won’t go into investment strategies for NGDLE, as that is the topic for group discussions today. But I think it is worth calling out the need to address two seemingly incompatible demands.

  • Given the very real improvements in the LMS market, we should not abandon the gains made by institutions and faculty that have taken ~15 years to achieve.
  • The market should not just evolve – new educational models require new ground-up designs, and we need far more emphasis on learning support and student engagement.

Is it possible to eat your cake and have it, too? In my opinion, our best chance is through encouragement and support of interoperability frameworks that allow a course or learner hub / aggregator – providing consistent navigation and support for faculty not looking to innovate with technology – along with an ecosystem of true learning applications and environments. This is the move to learning platforms, not just as a marketing term but as true support for an integrated world of applications.

  1. Disclosure: Our upcoming e-Literate TV series has also received a grant from the Gates Foundation.
  2. Now that I’ve gone down for breakfast, the 2-inch snowfall would be somewhat embarrassing if not for the city being shut down.

The post NGDLE: The quest to eat your cake and have it too appeared first on e-Literate.

What Does Unizin Mean for Digital Learning?

Mon, 2015-02-16 13:41

By Michael FeldsteinMore Posts (1015)

Speaking of underpants gnomes sales pitches, Phil and I spent a fair amount of time hearing about Unizin at the ELI conference. Much of that time was spent hearing friends that I know, trust, and respect talk about the project. At length, in some cases. On the one hand, it is remarkable that, after these long conversations, I am not much clearer on the purpose of Unizin than I was the week before. On the other hand, being reminded that some of my friends really believe in this thing helped me refill my reservoir of patience for the project, which had frankly run dry.

Alas, that reservoir was largely drained away again during a Unizin presentation with the same title as this blog post. I went there expecting the presenters to answer that question for the audience.

Alack.

The main presentation, given by Anastasia Morrone of IUPUI, was probably the most straightforward and least hype-filled presentation about Unizin that I have heard so far. It was also short. Just when I was warming to it and figuring we’d get to the real meat, her last slide came up:

Split into groups of 5-7 people and discuss the following:

How can faculty, teaching center consultants, and learning technologists contribute to best practices with the evolving Unizin services?

Wait. What?

That’s right. They wanted us to tell them what Unizin means for digital learning. That might have been a good question to ask before they committed to spend a million dollars each on the initiative.

I joined one of the groups, resolving to try as hard as I could to keep my tongue in check and be constructive (or, at least, silent) for as long as I could. The very first comment in my group—not by me, I swear—was, “Before I can contribute, can somebody please explain to me what Unizin is?” It didn’t get any better from there. At the end of the breakout session, our group’s official answer was essentially, “Yeah, we don’t have any suggestions to contribute, so we’re hoping the other groups come up with something.” None of them did, really. The closest they came were a couple of vague comments on inclusive governance. I understand from a participant in one of the other groups that they simply refused to even try to answer the question. It was brutal.

Click here to view the embedded video.

Still, in the spirit of the good intentions behind their request for collaborative input, I will list here some possible ways in which Unizin could provide value, in descending order of credibility.

I’ll start with the moderately credible:

  • Provide a layer of support services on top of and around the LMS: This barely even gets mentioned by Unizin advocates but it is the one that makes the most sense to me. Increasingly, in addition to your LMS, you have a bunch of connected tools and services. It might be something basic like help desk support for the LMS itself. It might be figuring out how an external application like Voicethread works best with your LMS. As the LMS evolves into the hub of a larger ecosystem, it is putting increasing strain on IT departments in everything from procurement to integration to ongoing support. Unizin could be a way of pooling resources across institutions to address those needs. If I were a CIO in a big university with lots of demands for LMS plug-in services, I would want this.
  • Provide a university-controlled environment for open courses: Back when Instructure announced Canvas Network, I commented that the company had cannily targeted the issue that MOOC providers seemed to be taking over the branding, not to mention substantial design and delivery decisions, from their university “partners.” Canvas Network is marketed as “open courses for the rest of us.” By adopting Canvas as their LMS, Unizin gets this for free. Again, if I were a CIO or Provost at a school that was either MOOCing or MOOC-curious, I would want this.
  • Provide buying power: What vendor would not want to sew up a sales deal with ten large universities or university systems (and counting) through one sales process? So far it is unclear how much Unizin has gained in reality through group negotiations, but it’s credible that they could be saving significant money through group contracting.
  • Provide a technology-assisted vehicle for sharing course materials and possibly even course cross-registrations: The institutions involved are large, and most or all probably have specialty strengths in some curricular area or other. I could see them wanting to trade, say, an Arabic degree program for a financial technology degree program. You don’t need a common learning technology infrastructure to make this work, but having one would make it easier.
  • Provide a home for a community researching topics like learning design and learning analytics: Again, you don’t need a common infrastructure for this, but it would help, as would having courses that are shared between institutions.

Would all of this amount to a significant contribution to digital learning, as the title of the ELI presentation seems to ask? Maybe! It depends on what happens in those last two bullet points. But the rollout of the program so far does not inspire confidence that the Unizin leadership knows how to facilitate the necessary kind of community-building. Quite the opposite, in fact. Furthermore, the software has only ancillary value in those areas, and yet it seems to be what Unizin leaders want to talk about 90%+ of the time.

Would these benefits justify a million-dollar price tag? That’s a different question. I’m skeptical, but a lot depends on specific inter-institutional intentions that are not public. A degree program has a monetary value to a university, and some universities can monetize the value better than others depending on which market they can access with significant degrees of penetration. Throw in the dollar savings on group contracting, and you can have a relatively hard number for the value of the coalition to a member. I know that a lot of university folk hate to think like that, but it seems to be the most credible way to add the value of these benefits up and get to a million dollars.

Let’s see if we can sweeten the pot by adding in the unclear or somewhat dubious but not entirely absurd benefits that some Unizin folk have claimed:

  • Unizin will enable universities to “own” the ecosystem: This claim is often immediately followed by the statement that their first step in building that ecosystem was to license Canvas. The Unizin folks seem to have at least some sense that it seems contradictory to claim you are owning the ecosystem by licensing a commercial product, so they immediately start talking about how Canvas is open source and Unizin could take it their own way if they wanted to. Yet this flies in the face of Unizin’s general stated direction of mostly licensing products and building connectors and such when they have to. Will all products they license be open source? Do they seriously commit to forking Canvas should particular circumstances arise? If not, what does “ownership” really mean? I buy it in relation to the MOOC providers, because there they are talking about owning brand and process. But beyond that, the message is pretty garbled. There could be something here, but I don’t know what it is yet.
  • Unizin could pressure vendors and standards groups to build better products: In the abstract, this sounds credible and similar to the buying power argument. The trouble is that it’s not clear either that pressure on these groups will solve our most significant problems or that Unizin will ask for the right things. I have argued that the biggest reason LMSs are…what they are is not vendor incompetence or recalcitrance but that faculty always ask for the same things. Would Unizin change this? Indiana University used what I would characterize as a relatively progressive evaluation framework when they chose Canvas, but there is no sign that they were using the framework to push their faculty to fundamentally rethink what they want to do with a virtual learning environment and therefore what it needs to be. I don’t doubt the intellectual capacity of the stakeholders in these institutions to ask the right questions. I doubt the will of the institutions themselves to push for better answers from their own constituents. As for the standards, as I have argued previously, the IMS is doing quite well at the moment. They could always move faster, and they could always use more university members who are willing to come to the table with concrete use cases and a commitment to put in the time necessary to work through a standards development process (including implementation). Unizin could do that, and it would be a good thing if they did. But it’s still pretty unclear to me how much their collective muscle would be useful to solve the hard problems.

Don’t get me wrong; I believe that both of the goals articulated above are laudable and potentially credible. But Unizin hasn’t really made the case yet.

Instead, at least some of the Unizin leaders have made claims that are either nonsensical (in that they don’t seem to actually mean anything in the real world) or absurd:

  • “We are building common gauge rails:” I love a good analogy, but it can only take you so far. What rides on those rails? And please don’t just say “content.” Are we talking about courses? Test banks? Individual test questions? Individual content pages? Each of these has very different reuse characteristics. Content isn’t just a set of widgets that can be loaded up in rail cars and used interchangeably wherever they are needed. If it were, then reuse would have been a solved problem ten years ago. What problem are you really trying to solve here, and why do you think that what you’re building will solve it (and is worth the price tag)?
  • “Unizin will make migrating to our next LMS easier because moving the content will be easy.” No. No, no, no, no, no, no, no. This is the perfect illustration of why the “common gauge rails” statement is meaningless. All major LMSs today can import IMS Common Cartridge format, and most can export in that format. You could modestly enhance this capability by building some automation that takes the export from one system and imports it into the other. But that is not the hard part of migration. The hard part is that LMSs work differently, so you have to redesign your content to make best use of the design and features of the new platform. Furthermore, these differences are generally not ones that you want to stamp out—at least, not if you care about these platforms evolving and innovating. Content migration in education is inherently hard because context makes a huge difference. (And content reuse is exponentially harder for the same reason.) There are no widgets that can be neatly stacked in train cars. Your rails will not help here.
  • “Unizin will be like educational moneyball.” Again with the analogies. What does this mean? Give me an example of a concrete goal, and I will probably be able to evaluate the probability that you can achieve it, its value to students and the university, and therefore whether it is worth a million-dollar institutional investment. Unizin doesn’t give us that. Instead, it gives us statements like, “Nobody ever said that your data is too big.” Seriously? The case for Unizin comes down to “my data is bigger than yours”? Is this a well-considered institutional investment or a midlife crisis? The MOOC providers have gobs and gobs of data, but as HarvardX researcher Justin Reich has pointed out, “Big data sets do not, by virtue of their size, inherently possess answers to interesting questions….We have terabytes of data about what students clicked and very little understanding of what changed in their heads.” Tell us what kinds of research questions you intend to ask and how your investment will make it possible to answer them. Please. And also, don’t just wave your hands at PAR and steal some terms from their slides. I like PAR. It’s a Good Thing. But what new thing are you going to do with it that justifies a million bucks per institution?

I want to believe that my friends, who I respect, believe in Unizin because they see a clear justification for it. I want to believe that these schools are going to collectively invest $10 million or more doing something that makes sense and will improve education. But I need more than what I’m getting to be convinced. It can’t be the case that the people not in the inner circle have to convince themselves of the benefit of Unizin. One of my friends inside the Unizin coalition said to me, “You know, a lot of big institutions are signing on. More and more.” I replied, “That means that either something very good is happening or something very bad is happening.” Given the utter disaster that was the ELI session, I’m afraid that I continue to lean in the direction of badness.

The post What Does Unizin Mean for Digital Learning? appeared first on e-Literate.

Wanted – A Theory of Change

Sun, 2015-02-15 14:25

By Michael FeldsteinMore Posts (1014)

Phil and I went to the ELI conference this week. It was my first time attending, which is odd given that it is one of the best conferences that I’ve attended in quite a while. How did I not know this?

We went, in part, to do a session on our upcoming e-Literate TV series, which was filmed for use in the series. (Very meta.) Malcolm Brown and Veronica Diaz did a fantastic job of both facilitating and participating in the conversation. I can’t wait to see what we have on film. Phil and I also found that an unusually high percentage of sessions were ones that we actually wanted to go to and, once there, didn’t feel the urge to leave. But the most important aspect of any conference is who shows up, and ELI did not disappoint there either. The crowd was diverse, but with a high percentage of super-interesting people. On the one hand, I felt like this was the first time that there were significant numbers of people talking about learning analytics who actually made sense. John Whitmer from Blackboard (but formerly from CSU), Mike Sharkey from Blue Canary (but formerly from University of Phoenix), Rob Robinson from Civitas (but formerly from the University of Texas), Eric Frank of Acrobatiq (formerly of Flat World Knowledge)—these people (among others) were all speaking a common language, and it turns out that language was English. I feel like that conversation is finally beginning to come down to earth. At the same time, I got to meet Gardner Campbell for the first time and ran into Jim Groom. One of the reasons that I admire both of these guys is that they challenge me. They unsettle me. They get under my skin, in a good way (although it doesn’t always feel that way in the moment).

And so it is that I find myself reflecting disproportionately on the brief conversations that I had with both of them, and about the nature of change in education.

I talked to Jim for maybe a grand total of 10 minutes, but one of the topics that came up was my post on why we haven’t seen the LMS get dramatically better in the last decade and why I’m pessimistic that we’ll see dramatic changes in the next decade. Jim said,

Your post made me angry. I’m not saying it was wrong. It was right. But it made me angry.

Hearing this pleased me inordinately, but I didn’t really think about why it pleased me until I was on the plane ride home. The truth is that the post was intended to make Jim (and others) angry. First of all, I was angry when I wrote it. We should be frustrated at how hard and slow change has been. It’s not like anybody out there is arguing that the LMS is the best thing since sliced bread. Even the vendors know better than to be too boastful these days. (Most of them, anyway.) At best, conversations about the LMS tend to go like the joke about the old Jewish man complaining about a restaurant: “The food here is terrible! And the portions are so small!” After a decade of this, the joke gets pretty old. Somehow, what seemed like Jack Benny has started to feel more like Franz Kafka.

Second, it is an unattractive personal quirk of mine that I can’t resist poking at somebody who seems confident of a truth, no matter what that truth happens to be. Even if I agree with them. If you say to me, “Michael, you know, I have learned that I don’t really know anything,” I will almost inevitably reply, “Oh yeah? Are you sure about that?” The urge is irresistible. If you think I’m exaggerating, then ask Dave Cormier. He and I had exactly this fight once. This may make me unpopular at parties—I like to tell myself that’s the reason—but it turns out to be useful in thinking about educational reform because just about everybody shares some blame in why change is hard, and nobody likes to admit that they are complicit in a situation that they find repugnant. Faculty hate to admit that some of them reinforce the worst tendencies of LMS and textbook vendors alike by choosing products that make their teaching easier rather than better. Administrators hate to admit that some of them are easily seduced by vendor pitches, or that they reflexively do whatever their peer institutions do without a lot of thought or analysis. Vendors hate to admit that their organizations often do whatever they have to in order to close the sale, even if it’s bad for the students. And analysts and consultants…well…don’t get me started on those smug bastards. It would be a lot easier if there were one group, one cause that we could point to as the source of our troubles. But there isn’t. As a result, if we don’t acknowledge the many and complex causes of the problems we face, we risk having an underpants gnomes theory of change:

Click here to view the embedded video.

I don’t know what will work to bring real improvements to education, but here are a few things that won’t:

  • Just making better use of the LMS won’t transform education.
  • Just getting rid of the LMS won’t transform education.
  • Just bringing in the vendors won’t transform education.
  • Just getting rid of the vendors won’t transform education.
  • Just using big data won’t transform education.
  • Just busting the faculty unions won’t transform education.
  • Just listening to the faculty unions won’t transform education.

Critiques of some aspect of education or other are pervasive, but I almost always feel like I am listening to an underpants gnomes sales presentation, no matter who is pitching it, no matter what end of the political spectrum they are on. I understand what the speaker wants to do, and I also understand the end state to which the speaker aspires, but I almost never understand how the two are connected. We are sorely lacking a theory of change.

This brings me to my conversation with Gardner, which was also brief. He asked me whether I thought ELI was the community that could…. I put in an ellipse there both because I don’t remember Gardner’s exact wording and because a certain amount of what he was getting at was implied. I took him to mean that he was looking for the community that was super-progressive that could drive real change (although it is entirely possible that I was and am projecting some hope that he didn’t intend). It took me a while to wrap my head around this encounter too. On the one hand, I am a huge believer in the power of communities as networks for identifying and propagating positive change. On the other hand, I have grown to be deeply skeptical of them as having lasting power in broad educational reform. Every time I have found a community that I got excited about, one of two things inevitably happened: either so many people piled into it that it lost its focus and sense of mission, or it became so sure of its own righteousness that the epistemic closure became suffocating. There may be some sour grapes in that assessment—as Groucho Marx said, I don’t want to belong to any club that would have me as a member—but it’s not entirely so. I think communities are essential. And redeeming. And soul-nourishing. But I think it’s a rare community indeed—particularly in transient, professional, largely online communities, where members aren’t forced to work out their differences because they have to live with each other—that really provides transformative change. Most professional communities feel like havens, when I think we need to feel a certain amount of discomfort for real change to happen. The two are not mutually exclusive in principle—it is important to feel like you are in a safe environment in order to be open to being challenged—but in practice, I don’t get the sense that most of the professional communities I have been in have regularly encouraged  creative abrasion. At least, not for long, and not to the point where people get seriously unsettled.

Getting back to my reaction to Jim’s comment, I guess what pleased me so much is that I was proud to have provided a measure of hopefully productive and thought-provoking discomfort to somebody who has so often done me the same favor. This is a trait I admire in both Jim and Gardner. They won’t f**king leave me alone. Another thing that I admire about them is that they don’t just talk, and they don’t just play in their own little sandboxes. Both of them build experiments and invite others to play. If there is a way forward, that is it. We need to try things together and see how they work. We need to apply our theories and find out what breaks (and what works better than we could have possibly imagined). We need to see if what works for us will also work for others. Anyone who does that in education is a hero of mine.

So, yeah. Good conference.

The post Wanted – A Theory of Change appeared first on e-Literate.

e-Literate TV Case Study Preview: Middlebury College

Sun, 2015-02-15 10:48

By Michael FeldsteinMore Posts (1013)

As we get closer to the release of the new e-Literate TV series on personalized learning, Phil and I will be posting previews highlighting some of the more interesting segments from the series. Both our preview posts and the series itself start with Middlebury College. When we first talked about the series with its sponsors, the Bill & Melinda Gates Foundation, they agreed to give us the editorial independence to report what we find, whether it is good, bad, or indifferent. And as part of our effort to establish a more objective frame, we started the series by going not to a school that was a Gates Foundation grantee but to the kind of place that Americans probably think of first when they think of a high-quality personalized education outside the context of technology marketing. We decided to go to an elite New England liberal arts college. We wanted to use that ideal as the context for talking about personalizing learning through technology. At the same time, we were curious to find out how technology is changing these schools and their notion of what a personal education is.

We picked Middlebury because it fit the profile and because we had a good connection through our colleagues at IN THE TELLING.[1] We really weren’t sure what we would find once we arrived on campus with the cameras. Some of what we found there was not surprising. In a school with a student/teacher ratio of 8.6 to 1, we found strong student/teacher relationships and empowered, creative students. Understandably, we heard concerns that introducing technology into this environment would depersonalize education. But we also heard great dialogues between students and teachers about what “personalized” really means to students who have grown up with the internet. And, somewhat unexpectedly, we saw some signs that the future of educational technology at places like Middlebury College may not be as different from what we’re seeing at public colleges and universities as you might think, as you’ll see in the interview excerpt below.

Jeff Howarth is an Assistant Professor of Geography at Middlebury. He teaches a very popular survey-level course in Geographic Information Systems (GIS). But it’s really primarily a course about thinking about spaces. As Jeff pointed out to me, we typically provide little to no formal education on spatial reasoning in primary and secondary schooling. So the students walking into his class have a wide range of skills, based primarily on their natural ability to pick them up on their own. This broad heterogeneity is not so different from the wide spread of skills that we saw in the developmental math program at Essex County College in Newark, NJ. Furthermore, the difference between a novice and an expert within a knowledge domain is not just about how many competencies they have racked up. It’s also about how they acquire those competencies. Jeff did his own study of how students learn in his class, which confirmed broader educational research showing that novices in a domain tend to start with specific problems and generalize outward, while experts (like professors, but also like more advanced students) tend to start with general principles and apply them to the specific problem at hand. As Jeff pointed out to me, the very structure of the class schedule conspires against serving novice learners in the way that works best for them. Typically, students go to a lecture in which they are given general principles and then are sent to a lab to apply those principles. That order works for students who have enough domain experience to frame specific situations in terms of the general principles but not for the novices who are just beginning to learn what those general principles might even look like.

When Jeff thought about how to serve the needs of his students, the solution he came up with—partly still a proposal at this point—bears a striking resemblance to the basic design of commercial “personalized learning” courseware. I emphasize that he arrived at this conclusion through his own thought process rather than by imitating commercial offerings. Here’s an excerpt in which he describes deciding to flip his classroom before he had ever even heard of the term:

Click here to view the embedded video.

In the full ten-minute episode, we hear Jeff talk about his ideas for personalized courseware (although he never uses that term). And in the thirty-minute series, we have a great dialogue between students and faculty as well as some important context setting from the college leadership. The end result is that the Middlebury case study shows us that personalized learning software tools do not just have to be inferior substitutes for the real thing that are only for “other people’s children” while simultaneously reminding us of what a real personal education looks like and what we must be careful not to lose as we bring more technology into the classroom.

  1. Full disclosure: Since filming the case study, Middlebury has become a client of MindWires Consulting, the company that Phil and I run together.

The post e-Literate TV Case Study Preview: Middlebury College appeared first on e-Literate.

California Community College OEI Selects LMS Vendor

Thu, 2015-02-12 13:53

By Phil HillMore Posts (289)

The Online Education Initiative (OEI) for California’s Community College System has just announced its vendor selection for a Common Course Management System (CCMS)[1]. For various reasons I cannot provide any commentary on this process, so I would prefer to simply direct people to the OEI blog site. Update: To answer some questions, the reason I cannot comment is that CCC is a MindWires client, and I facilitated the meetings. Based on this relationship we have a non-disclosure agreement with OEI.

Here is the full announcement.

The California Community Colleges (CCC) Online Education Initiative (OEI) announced its intent to award Instructure Inc. the contract to provide an online course management system and related services to community colleges statewide.

Support for Instructure’s Canvas system was nearly unanimous among the OEI’s Common Course Management System (CCMS) Committee members, with overwhelming support from student participants, officials said. Canvas is a course management platform that is currently being used by more than 1,000 colleges, universities and school districts across the country.

“Both the students and faculty members involved believed that students would be most successful using the Canvas system,” said OEI Statewide Program Director Steve Klein. “The student success element was a consistent focus throughout.”

The announcement includes some information on the process as well.

A 55-member selection committee participated in the RFP review that utilized an extensive scoring rubric. The decision-making process was guided by and included the active involvement of the CCMS Committee, which is composed of the CCMS Workgroup of the OEI Steering Committee, the members of OEI’s Management Team, and representatives from the eight Full Launch Pilot Colleges, which will be the first colleges to test and deploy the CCMS tool.

The recommendation culminated an extremely thorough decision-making process that included input from multiple sources statewide, and began with the OEI’s formation of a CCMS selection process in early 2014. The selection process was designed to ensure that a partner would be chosen to address the initiative’s vision for the future.

  1. Note that this is an Intent to Award, not yet a contract.

The post California Community College OEI Selects LMS Vendor appeared first on e-Literate.

A Sneak Preview of e-Literate TV at ELI

Tue, 2015-02-10 00:58

By Michael FeldsteinMore Posts (1013)

Phil and I will be chatting with Malcolm Brown and Veronica Diaz about our upcoming e-Literate TV series on personalized learning in a featured session at ELI tomorrow. We’ll be previewing short segments of video case studies that we’ve done on an elite New England liberal arts college, an urban community college, and a large public university. Audience participation in the discussion is definitely encouraged. It will be tomorrow at 11:45 AM in California C for those of you who are here at the conference, and also webcast for those of you registered for the virtual conference.

We hope to see you there.

The post A Sneak Preview of e-Literate TV at ELI appeared first on e-Literate.

Flat World and CBE: Self-paced does not imply isolation

Mon, 2015-02-09 07:48

By Phil HillMore Posts (287)

As competency-based education (CBE) becomes more and more important to US higher education, it is worth exploring the learning platforms in use. While there are cases of institutions using their traditional LMS to support a CBE program, there is a new market developing around learning platforms designed specifically for self-paced, fully online, competency-framework-based approaches.

Several weeks ago Flat World announced their latest round of funding – $5 million of debt financing – bringing their total raised to $40.7 million. The company started out by offering full e-textbooks (and was previously named FlatWorld Knowledge), developing 110 titles that covered 25 of the 50 most-used lecture courses. The e-textbook market was not working out, however, and the company pivoted to competency-based education around the time that Chris Etesse became CEO two years ago. Now the company is developing a combined CBE learning platform with integrated course content – much of it repurposing the pre-existing e-textbook materials. Their first academic partner for CBE is Brandman University, a non-traditional part of the Chapman University system and currently a member of the CBEN network.

One central tenet of the Flat World approach follows from this history and pivot – a tight integration of content and platform. As Etesse describes it, content is a first-class citizen in their system, whereas loosely-coupled approaches that do not tie content and platform together can make it difficult to navigate and to collect learning analytics. In other words, this intentionally is a walled-garden approach. For Brandman, approximately 70% of the content comes from the pre-existing FlatWorld texts, 25% comes from various OER sources, and about 5% has been custom-designed for Brandman.

While there is support for outside content, I believe this integration must be done by Flat World designers.

As was the case for the description of the Helix CBE-based learning platform, my interest here is not merely to review one company’s products, but rather to illustrate aspects of the growing CBE movement using the demo.

CBE programs by their very nature tend to be self-paced. One criticism, or line of questioning, that I'm seeing more often deals with the nature of self-paced learning itself: are students just plugging through mindless e-text and multiple-choice assessments in isolation? What Flat World illustrates – as with other major CBE learning platforms – is that self-paced does not imply isolation, either from a student-teacher or from a student-student perspective. It does, however, require new approaches that go beyond simple discussion forums.

FlatWorld shows several learning activities:

  • Reading text and viewing multimedia content adaptively presented based on a pretest and progress against competencies (see the sketch after this list);
  • Taking formative assessments, primarily through multiple-choice quizzes;
  • Interacting with other students and with faculty;
  • Working through project-based assignments;
  • Taking summative assessments through a proctored, webcam-streamed approach.
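
Flat World has not published how its adaptive presentation works, so the following is only a minimal sketch in Python of the general gating pattern: content modules are tagged with competencies, and a module is presented only if the pretest or subsequent progress has not yet demonstrated mastery. All names and the mastery threshold here are my own assumptions for illustration, not Flat World's actual design.

    # Hypothetical sketch of adaptive content selection; not Flat World's code.
    MASTERY_THRESHOLD = 0.8  # assumed cutoff for "mastery already demonstrated"

    def modules_to_present(modules, competency_scores):
        """Return the modules a student still needs to work through.

        `modules` is a list of (module_name, competency) pairs;
        `competency_scores` maps competency -> best score so far, seeded
        by the pretest and updated by formative assessments.
        """
        return [
            name
            for name, competency in modules
            if competency_scores.get(competency, 0.0) < MASTERY_THRESHOLD
        ]

    # A pretest showing mastery of "ratios" skips that module entirely.
    modules = [("Intro to Ratios", "ratios"), ("Balance Sheets", "accounting")]
    scores = {"ratios": 0.9, "accounting": 0.4}
    assert modules_to_present(modules, scores) == ["Balance Sheets"]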

The activities and assessments do not have to be limited to students working in isolation on multiple-choice questions. For example, project-based work can be included, and assignments can require the submission of written reports or responses to short-form prompts. As can be seen below, assessments can be based on submitted written work, which faculty grade and use for feedback.

[Screenshot: FWK_Demo_Submit_Assessment]

On the communication side, the system tracks how active students are in communicating with faculty and even with other students (a category labeled 'social'), as seen below.

[Screenshot: FWK_Demo_7_Day_Breakdown]
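
The mechanics behind this tracking are not public either, but the seven-day breakdown shown above amounts to a simple windowed aggregation of communication events by type. A rough sketch in Python, with hypothetical event labels mirroring the categories visible in the demo:

    from collections import Counter
    from datetime import datetime, timedelta

    def seven_day_breakdown(events, now=None):
        """Count a student's communication events by type over the past week.

        `events` is a list of (timestamp, kind) pairs, where kind is
        'faculty' or 'social' (student-to-student).
        """
        now = now or datetime.utcnow()
        cutoff = now - timedelta(days=7)
        return Counter(kind for ts, kind in events if ts >= cutoff)

    # Events older than a week fall out of the window.
    now = datetime.utcnow()
    events = [
        (now - timedelta(days=1), "faculty"),
        (now - timedelta(days=2), "faculty"),
        (now - timedelta(days=3), "social"),
        (now - timedelta(days=10), "social"),
    ]
    assert seven_day_breakdown(events, now) == Counter(faculty=2, social=1)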

One challenge of a self-paced program such as a CBE approach is figuring out how to encourage students to interact with others. There is no simple cohort to work with; the interaction instead will often be organized around content: who else is working through the same material in roughly the same time period?

FlatWorld uses an approach that is very similar to Stack Overflow, where students can ask and answer questions over time, and the answers are voted up or down to allow the best ones to rise to the top. This Q&A board is moderated by faculty at Brandman. It not only allows students working on the same competencies at roughly the same time to interact, but it even allows interaction with students on similar competencies separated in time.

[Screenshot: FW_DiscussionBoards]

[Screenshot: FW_SocialInteraction]
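
To make the mechanism concrete, here is a minimal sketch in Python of such a competency-tagged Q&A board. The class and function names are mine, purely for illustration: answers are ranked by net votes, and questions are retrieved by competency tag rather than by cohort, which is what enables interaction across time.

    from dataclasses import dataclass, field

    @dataclass
    class Answer:
        author: str
        text: str
        votes: int = 0  # net upvotes minus downvotes

    @dataclass
    class Question:
        author: str
        text: str
        competency: str  # the competency code the question is tagged with
        answers: list = field(default_factory=list)

        def ranked_answers(self):
            # Best answers rise to the top regardless of when they were
            # posted, so later students benefit from earlier exchanges.
            return sorted(self.answers, key=lambda a: a.votes, reverse=True)

    def questions_for_competency(questions, competency):
        # A student entering a competency sees all prior Q&A tagged with it,
        # not just activity from a fixed cohort.
        return [q for q in questions if q.competency == competency]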

There certainly is a tendency in many CBE programs to stick to multiple-choice assignments and quizzes and to avoid much social interaction. That method is a whole lot easier to design, and with several hundred new programs under development, I think the overall quality can be quite low in many programs, particularly those looking for a quick-win CBE introduction, essentially trying to jump on the bandwagon. You can see the tendency towards multiple-choice in the FlatWorld system as well.

But self-paced does not imply isolation, and the Flat World implementation of the Brandman University program shows how CBE can support project-based work, written assignments and assessments, and interaction between students and faculty as well as between multiple students.

The post Flat World and CBE: Self-paced does not imply isolation appeared first on e-Literate.

Instructure Releases 4th Security Audit, With a Crowd-sourcing Twist

Sat, 2015-02-07 12:17

By Phil Hill | More Posts (286)

In the fall of 2011 I made the following argument:

We need more transparency in the LMS market, and clients should have access to objective measurements of the security of a solution. To paraphrase Michael Feldstein’s suggestions from a 2009 post:

  • There is no guarantee that any LMS is more secure just because they say they are more secure
  • Customers should ask for, and LMS vendors should supply, detailed information on how the vendor or open source community has handled security issues in practice
  • LMS providers should make public a summary of vulnerabilities, including resolution time

I would add to this call for transparency that LMS vendors and open source communities should share information from their third-party security audits and tests.  All of the vendors that I talked to have some form of third-party penetration testing and security audits; however, how does this help the customer unless this information is transparent and available?  Of course this transparency should not include details that would advertise vulnerabilities to hackers, but there should be some manner to be open and transparent on what the audits are saying. [new emphasis added]

Inspired by fall events and this call for transparency, Instructure (maker of the Canvas LMS) decided to hold a public security audit using a white hat testing company, where A) the results of the testing would be shared publicly, and B) I would act as an independent observer to document the process. The results of that testing are described in two posts at e-Literate and in a post at Instructure.

Instructure has kept up the process, this year with a crowd-sourcing twist:

What was so special about this audit? For starters, we partnered with Bugcrowd to enlist the help of more than 60 top security researchers. To put that number in context, typical third-party security audits are performed by one or two researchers, who follow standard methodologies and use “tools of the trade.” Their results are predictable, consistent, and exactly what you’d want and expect from this type of service. This year, we wanted an audit that would produce “unexpected” results by testing our platform in unpredictable ways. And with dozens of the world’s top experts, plus Bugcrowd’s innovative and scrappy crowdsourcing approach, that’s exactly what we got.

So while last year’s audit found six issues, this year’s process unearthed a startling 59. (Yeah, you read that right. Fifty-nine.) Witness the power of crowdsourcing an open security audit.

The blog post goes on to state that all 59 issues have been fixed with no customer impacts.

I harp on this subject not just to congratulate Instructure on keeping up the process, but to maintain that the ed tech world would benefit from transparent, open security audits. Back in 2011 there were ed tech executives who disagreed with the approach of open audits.

There are risks, however, to this method of public security testing. Drazen Drazic, the managing director of Securus Global, indicated that, in talking to people around the world through security-related social networks, he had found no other companies that chose to use an independent observer for this kind of testing. This is not to argue that no one should do it, but clearly we are breaking new ground here and need to be cautious.

One downside of public security assessments is that publicizing results can increase the likelihood that hackers will exploit the vulnerabilities identified. As one executive from a competing LMS vendor put it to me, we need to focus on security consistently and not as a once-a-year exercise. The trick, then, is to be open about what the audits are finding without disclosing specific pathways to exploitation. In our case, I described the category of each vulnerability found, and I avoided disclosing any information on the critical and high-risk vulnerabilities until after they had been remediated. Still, this is a tricky area.

Two competing LMS vendors have criticized these tests as a marketing ploy that could be dangerous. In their opinion, student and client data is best protected by keeping the testing process out of the public domain. I cannot speak to Instructure's motivations regarding marketing, but I did want to share these criticisms.

We are now in the fourth year of Instructure providing transparent security audits, and I would note the following:

  • The act of publicizing the results has not in fact enabled hackers to exploit the security vulnerabilities identified.
  • While I am sure there is marketing value to this process, I would argue that the primary benefits have been enhanced security of the product and, more importantly, better information for the institutions evaluating or already using Canvas.

I repeat my call for more ed tech vendors to follow this type of process. I would love to cover similar stories.

The post Instructure Releases 4th Security Audit, With a Crowd-sourcing Twist appeared first on e-Literate.

Babson Study of Online Learning Released

Wed, 2015-02-04 23:52

By Phil Hill | More Posts (285)

Babson Survey Research Group (BSRG) just released its annual survey of online learning in US higher education (press release here). This year they have moved from using survey methodology for the online enrollment section to using IPEDS distance education data. Russ Poulin from WCET and I provided commentary on the two data sources as an appendix to the study.

The report highlights the significant drop in growth of online education in the US (which I covered previously in this e-Literate post). Some of the key findings:

  • Previous reports in this series noted that the proportion of institutions that believe online education is a critical component of their long-term strategy had shown small but steady increases for a decade, followed by a retreat in 2013.
  • After years of a consistently growing majority of chief academic officers rating the learning outcomes for online education “as good as or better” than those for face-to-face instruction, the pattern reversed itself last year.
  • This report series has used its own data to chronicle the continued increases in the number of students taking at least one online course. Online enrollments have increased at rates far in excess of those of overall higher education. The pattern, however, has been one of decreasing growth rates over time. This year marks the first use of IPEDS data to examine this trend.
  • While the number of students taking distance courses has grown by the millions over the past decade, it has not come without considerable concerns. Faculty acceptance has lagged, concerns about student retention linger, and leaders continue to worry that online courses require more faculty effort than face-to-face instruction.

BSRG looked at the low growth (which I characterized as 'no discernible growth' due to noise in the data) and broke down the trends by sector.

[Chart: Growth by sector]

The report also found that more institutions are viewing online education as ‘critical to the long term strategy of my institution’.

[Chart: Strategic online]

There’s lots of good data and analysis available – read the whole report here.

I’ll write more about the critique of data sources that Russ and I provided in the next few days.

From the report:

"We are especially pleased that Phil Hill and Russ Poulin have contributed their analysis of the transition issues of moving to IPEDS data. Their clear and insightful description will be of value for all who track distance education."

I want to personally thank Jeff Seaman for the opportunity that he and his team gave us to provide this analysis.

The post Babson Study of Online Learning Released appeared first on e-Literate.