By Phil Hill
During the most recent quarterly earnings call for Instructure, an analyst asked an interesting question (despite starting off from the Chris Farley Show format).
Corey Greendale (First Analysis Securities Corporation): Awesome. A couple of other things on the, primarily on the Higher Ed space but I guess on education space, there’s a whole lot of couple questions about the competitive environment. When you’re and I don’t know if you will ever get into this level of granularity but when you got competitive wins against the Blackboard, are those predominantly from legacy ANGEL, are you getting those wins as much from Learn as well.
Josh Coates (CEO of Instructure): A lot of them are from Learn. Most, you know, I don’t have the stats right off the top of my head. A lot of the ANGEL and WebCT stuff has been mopped up in the previous years and so, what’s left, the majority of what’s left, is Learn, and our win rate against Blackboard it continues to be incredibly high, not just domestically but internationally as well.
In fact, I think most of three out of the four international schools that we announced in this earnings were Blackboard Learn replacements, so yes Learn’s getting it.
The question gets to the issue of whether Canvas is just picking up higher education clients coming off of discontinued LMSs (ANGEL, WebCT, etc.) or whether it is picking up clients from ongoing platforms such as Blackboard Learn. Beyond the obvious interest of investors and other ed tech vendors, this issue affects higher education institutions going through a vendor selection – for the system under consideration, are there many other schools considering the same migration path?
Thanks to the work we’ve been doing with LISTedTECH and our new subscription service, we can answer this question in a little more detail. One of the charts we share shows higher education migrations over the past five years in the US and Canada.
Looking at the bottom right of the chart, you can see that Canvas has picked up clients previously using WebCT, Learn, ANGEL, Moodle, LearningStudio, Sakai, Homegrown, and Brightspace (from D2L).
Josh could have answered that Canvas actually has picked up more clients formerly using Learn than those using ANGEL, but a small portion of Learn includes those using the discontinued Basic edition. Nevertheless, there are quite a few wins coming from systems that have not been discontinued, which I think was the main point of the question.
As you can see, there is interesting data on other systems as well. Some notes from this view:
- New Blackboard Learn implementations have mostly come from the company’s own discontinued WebCT and ANGEL platforms, with some small contributions from LearningStudio and Moodle.
- D2L Brightspace has an impressive client retention rate, with very few clients migrating off of their system.
What other trends do you see in this data?
The post Previous LMS For Schools Moving to Canvas in US and Canada appeared first on e-Literate.
By Michael Feldstein
I was delighted that we were able to publish Mike Caulfield’s post on how ed tech gets personalization backwards, partly because Mike is such a unique and inventive thinker, but also because he provided such a great example of how “personalized learning” teaching techniques are different from adaptive content and other product capabilities.
The heart of his post is two stories about teachable moments he had with his daughters. In one, he helped his middle school-aged daughter understand why an Iranian author was worried that people in the Western world have harmful stereotypes of Iranians. In the other, he helped his high school-aged daughter see how her knowledge of the history of rocket science could be useful in answering a question she was asked about Churchill’s Iron Curtain speech. Mike’s stories show truly significant learning of the kind that changes students’ perspectives and, if we’re lucky, their lives. It is not just personalized but deeply personal. He was able to reach his daughters because he understood them as humans, well beyond the boundaries of a list of competencies they had or had not mastered within the disciplines they were studying.
For now and the foreseeable future, no robot tutor in the sky is going to be able to take Mike’s place in those conversations. This is the kind of personal teaching that humans are good at and robots are not. But neither are the tools we have today useless for this sort of teaching. Vendors, administrators, and faculty alike have broadly misunderstood their role and potential. In this post, I’m going to talk about both how these tools are useful for the kind of education that Mike cares about (and I care about) as well as, perhaps more importantly, why we are so prone to getting that role wrong.

Robot Tutors in the Weeds
One fact that shines through about Mike’s daughters is that they are both pretty smart. His middle schooler clearly read and understood the book by the Iranian author. She was stuck not on a “what” question but on a “why” question. His high schooler knew a lot about the space race—not just the what, but also the why. She just hadn’t yet seen the relevance of what she knew to a “why” question she was being asked in a different domain. Both girls were working on higher-order thinking skills. One of the reasons that Mike could teach them is that they already had a lot of foundational knowledge.
Not all students do. They don’t all have strong reading comprehension or study skills. Not everybody is good at remembering facts or distinguishing causal connections from co-occurring but irrelevant details. For example, some readers could benefit from being stopped every few paragraphs or pages and asked questions to help them check themselves to see if they’ve understood what they read (and become more self-aware about their reading comprehension in general). This is the sort of thing that computers are good at.
What happens in classrooms where students don’t have this sort of comprehension tutoring help (robot or otherwise)? Sometimes the students who need that help don’t get it, and they fail. They are never able to answer the “why” questions because they don’t know the “what.” Other times, the teacher slows down to cover the “what” in class in order to help the students who are struggling. This teaching strategy has a few side effects. First, it takes a lot of class time, which means that there is little or no time to discuss the “why.” This leaves kids like Mike’s daughters, who are ready and hungry for the “why,” bored. Second, all the students quickly learn that they don’t have to read the book because the teacher will go over the important parts in class. I hear complaints from teachers all the time that they have trouble getting “kids today” to read. I believe them. But I’m skeptical of the explanations that I hear for why this is so. I don’t think it’s primarily because of TV or YouTube or mobile phones. All of those factors fall under the larger umbrella cause that students don’t have to read anymore. Nowhere is that more true than in the classroom. If you were asked to read something in advance of a meeting, and you knew the person running the meeting would take almost all the meeting time reviewing the aspects of the reading that she thought were important for you to know, would you read in advance? Or would you find a better use for that time?

How Homework Got Broken and How Not to Fix It
Most teachers—especially middle school and above—have a passion for their subject. (Elementary school teachers, who are generalists, more often have a passion for the students, although the two are not mutually exclusive.) They love the “why” and want to talk about it. But they end up spending most of their time talking about the “what” because if they don’t they will leave some students behind. As we have seen, a side effect of this understandable behavior is that students learn not to do the homework, which means that they increasingly come into class not knowing the what. And the vicious cycle continues.
So teachers grade the homework, hoping that the grades will force the students to come to class prepared to discuss the “why.” To be clear, there are different reasons why teachers might want to count homework toward a course grade. One is when the homework is carefully constructed to incrementally build skills, so each homework assignment is essentially a summative assessment of the next small step on the hill the teacher is trying to get the students to climb. We see this most often in math, engineering, or other subjects where there is a strong emphasis on increasingly sophisticated application of procedural knowledge. But more often than not, teachers count day-to-day homework toward a course grade primarily because they are trying to motivate students to learn the “what” at home.
This approach has side effects of its own. Students are motivated by grades, but only to a point. They quickly become quite sophisticated at calculating how much of the homework they have to do in order to get the minimum grade that they want to achieve. Excellent students and weak ones alike make this calculation. Unfortunately, weak students often miscalculate, undershoot, and fail. Meanwhile, good students may be getting good grades, but they are not necessarily learning all that they could be. And, of course, the more the homework counts toward the course grade, the more incentive students have to cheat.
Part of my day job as a consultant is to help companies who design educational technology products understand teachers and students better. In the course of doing that work over the past few years, I have spoken to a lot of students and teachers about the homework problem. Many of the best teachers either don’t count homework toward the course grade or count it just a little—enough to communicate to the students that the homework matters, but not enough to trigger the what’s-the-minimum-I-have-to-do calculation. They use the grade as just one tool in an overall strategy designed to help students see that the “what” questions they are learning to answer in their homework are relevant to the far more interesting “why” questions about which the teachers are passionate and would like their students to become passionate about too. They pose mysteries at the end of class that the students can only solve with the knowledge they gain from doing the homework. Or they have little verbal in-class quizzes to keep the students on their toes, in the context of a discussion of how the tidbit in the verbal quiz matters to the larger topic being discussed.
Interestingly, a number of students have told me that they like in-class verbal quizzes. Well, “like” probably isn’t quite the right word. Appreciate. Value. Are grateful for. But they will only tell you this if you ask the right question, which is the following:
“How can you tell that your teacher cares about you?”
The story that we often tell ourselves about other people’s children is that they are lazy. They don’t like to work or to learn. But the first question that most students are trying to answer for themselves when they start a new class, particularly if that class is about something they don’t already care about, is “Does this teacher care if I learn?” If the answer is “no,” if the relationship is purely transactional, then most students will try to figure out the minimum cost they have to pay in order to get a satisfactory grade. Think back to your own school days. Didn’t you do that sometimes? I did. The less I thought the teacher respected me or cared about me, the harder I played the I’m-going-to-do-almost-nothing-and-still-ace-your-stupid-class-you-arrogant-ass game. And the more I could get away with it, the more I was convinced that the teacher didn’t care about me. “After all,” I reasoned, “nobody who really cares about me as a student would let me get away with being so damned lazy.”
This is a problem that grading homework won’t fix, robo-grading won’t fix, and adaptive robo-grading won’t fix. In fact, those strategies often make the situation worse. Two things enrage students more than almost anything else:
- Making them buy expensive books that they never actually need and that the teacher never even mentions in class discussion
- Making them do hundreds of stupid homework problems that seem to have no obvious connection to anything on the tests (or, really, anything in the world) and that the teacher never even bothers to talk about in class
Luckily, these selfsame robo-homework tools actually can help avoid the cascade of course design failures I have traced in this post, if only they are designed and deployed a little differently.

Personalized Learning
Let’s review some of the things that educational software can do well:
- Check students’ mastery of low-level cognitive skills such as memory of facts and application of procedural knowledge
- Provide feedback to students on their progress toward mastery
- Provide feedback to the teachers on the students’ progress
- In some (but not all) cases, help the students when they get stuck on low-level mastery skills
When I say “low-level,” that is not a value judgment. Mike’s daughter had to know basic facts about the U.S. and Soviet space programs in order to make inferences about their consequences for the broader political climate. The teacher clearly wanted to spend time with the students discussing the “why” question, and Mike’s story illustrated how humans are much better suited than robots for helping students learn to answer those sorts of questions. But we also know that teachers get stuck spending all their time on the “what” because some students get stuck there. Students don’t want to spend any more time on the “what” than their teachers do. It is in everybody’s interest for students to learn as much of the basic factual and procedural knowledge as possible outside of class so that the teacher can spend class time on the intellectually challenging aspects of the subject.
Software can help solve this problem by giving both students and teachers feedback on how the students are doing with the “what.” Some students, like Mike’s daughters, won’t need help, but it’s not an all-or-nothing thing. Most people are better at grasping the basics of some subjects than others, or better at some low-level cognitive tasks than others. Personally, my reading comprehension is very good, but I am horrible at memorizing. I was interested in science but ended up dropping every college science course I registered for because the memorization killed me every time. Tutoring software might have helped me. In high school, I scored very well on the physics Achievement test because I could derive most of the physics needed to answer the questions based on the “why” I had absorbed in class. But I did poorly in the class itself because I was bad at remembering and applying simple formulas.
I could have become good at physics. I could have learned to love it. When I was a kid, I used to write letters to NASA to request pictures from their telescopes. My high school teacher knew that about me, because I was in a small class, and because he was the kind of teacher who made a point of knowing that sort of thing about his students. But my college professors had no way of knowing given the contact that they had with me in their large lectures. If they could have seen my results on formative assessments, and if they had more time in class to help students like me with the sticking points rather than repeating the reading that almost nobody did because they knew the professor would repeat it, then I might have had a different relationship with the subject. I took every philosophy of science course I could but avoided the actual science classes because I was afraid of them.
Tools that can help students like me exist today. But more often than not, two common mistakes currently get in the way of them being used in ways that actually would have helped me get through physics (or biology, or art history). The first is grading. The minute the homework becomes high-stakes, it breaks the ability to help students who are stuck. Rather than reducing student anxiety about the course, it raises it. Rather than motivating students to do the best they can for the teacher and themselves, it motivates them to calculate the impact of each assignment on their grade. Students need to believe that mastering the “what” matters, but this is not the way to convince them.
This brings me to the second and related mistake, which is failing to make explicit connections between the “what” and the “why” for students. They need to understand the point of learning all that low-level stuff. I didn’t care about solving physics problems, but I did care about understanding physics. I might have been more motivated to take on the scary work that was hard for me if I had seen a clearer connection between the two. This is all about course design. It’s about using the homework tool in context. It’s about reclaiming classroom time to have discussions like the ones that Mike had with his daughters, and maybe sometimes to review the specific “what” problems that students are getting stuck on.
Putting all this together, fixing the problem of broken homework requires the three personalized learning strategies that Phil and I have been writing about:
- Moving content broadcast—especially lectures about the “what”—out of the classroom to make room for discussions about the “why”
- Making homework time contact time, so that students can get help from the teacher when they are stuck with the “what” and also see that the teacher cares about whether they are learning
- Providing a tutor for the “what”: software, in cases where it can help the student directly, or the human teacher, enabled by the software to see where students are stuck and to focus class time on getting them unstuck
The term of art for using homework this way is “continuous formative assessment.”
You don’t need technology to do this. It’s just a feedback loop that could be accomplished by manually marking up students’ work or otherwise guiding them as they work. Technology just provides the ability to amplify that feedback loop and make it less labor-intensive to implement. But most vendors aren’t optimizing their homework products for this kind of use. Instead, they spend all their time adding gradebook features and increasingly complex ways for instructors to customize problem sets and reduce cheating. And they do this, more often than not, because their customers ask them to. (Of course, they don’t often hear from non-customers who aren’t interested in graded homework but might be interested in continuous formative assessment.)
The fundamental problem isn’t the tool or the vendor. It’s the cascade of unintended consequences caused by students who come into the class with different levels of skill and motivation, and the coping mechanisms teachers have employed to deal with the challenges of teaching a heterogeneous group. Right now, we are mostly getting products that are designed to minimize the pain caused by that cascade or tools that are designed to replicate the failures in a more automated and therefore cheaper way. But we could easily be getting products that help teachers to create that positive feedback loop between themselves and their students. If we want that to happen, then we have to start asking a different question:
How can the capabilities afforded by educational technologies empower teachers to learn and implement teaching strategies that work better for them and their students?
The post What Homework and Adaptive Platforms Are (and Aren’t) Good For appeared first on e-Literate.
By Michael Feldstein
Today we are pleased to announce that our LMS subscription service is available for purchase. As promised, we have kept the promotion on the blog to a minimum, providing details instead to people who specifically sign up for the mailing list. But since today is the big unveiling, it seems appropriate to tell you all a little bit about it and what it means for where we are going with the blog and our work.
The subscription is called “e-Literate Big Picture: LMS.” As you might guess from the title, there will be other “e-Literate Big Picture” subscriptions. Each of these will provide regular updates and in-depth analysis that go beyond the free analysis that we will continue to provide here on the blog, focused particularly on helping campuses make good decisions about how to evaluate, acquire, implement, and support educational technology in ways that will have the most positive impact for students and faculty. They will extend the work of the blog with analysis that is more actionable.
In the case of this first subscription, we have been able to augment our analysis of the LMS market with data and analytics support from our partner LISTedTECH. Their database contains information on the LMS selections of more than 4,000 colleges and universities in the U.S. and Canada, plus thousands more worldwide. The database goes well beyond who is using which LMS today, as it also contains information on LMS migrations (decommissions, implementations, pilots, etc.) over the last 15 years. This information has given us an unprecedented chance both to test hypotheses we’ve had about the LMS market and to form new ones. The bottom line is that we have gained insights into how the LMS market is changing that we believe are new.
One of the big findings of the report is that we are seeing signs that the LMS market may be leaving a decade-long period of relative stasis and entering a period of rapid change. Since we are mindful of the perennial predictions of the imminent death of the LMS, we try to be cautious and data-driven in our analysis. But it sure looks like the market could be very different a few years from now than it is today, and there are several different ways that the trend could break.
The subscription, which is annual and institution-wide, will provide regular updates on these trends that will be helpful to schools making (or thinking about making) adoption decisions (as well as to vendors and investors who are trying to understand how campus needs and decisions are evolving). We will be adding various levels of additional support for schools going forward.
Because we want this support to be available and useful to all schools that want it, we have kept the price low and have allowed the report to be shared within organizations under fairly generous terms relative to other subscriptions we have seen.
More information is available on the subscription page.
By Mike Caulfield
[Note – an earlier version of the first half of this post was first published at Mike’s Hapgood site. We asked him to make some alterations for the e-Literate audience and republish here. – ed]

Indie Rock and Donald Trump
I drive my oldest daughter to high school every day. She goes to a magnet STEM school in the district that’s on the campus where I work. I’ve been brainwashing her into liking indie rock one car ride at a time using carefully planned mix CDs.
Last week she tells me I need to put more Magnetic Fields songs in the mix. Why? I ask.
“Physics homework,” she says.
It turns out that there are a number of principles of physics that she remembers through a complex set of associations she’s developed referencing indie rock songs. I don’t pretend to get them all, but the 69 Love Songs hit “Meaningless” plays an apparently crucial role.
Later that day, my youngest daughter is asking me about the book Persepolis, a book about growing up Iranian during and after the 1979 Islamic Revolution. The author of that book spends the preface talking about the reasons she wrote it, and how she felt the understanding of her native country of Iran was too narrow, and in a way, too exotic. My daughter tells me that she doesn’t quite get what the author is talking about. After all, there’s a lot of fundamentalism in the early parts of the book — and people really are in a revolution in 1978, so what are we getting wrong in the West?
I know that this daughter, a middle schooler, has had some stress about Donald Trump. She has people in her class who like him, and she can’t understand why when he’s so mean. It worries her.
I ask her if Trump gets elected, how would she feel if everyone assumed all Americans were like Donald Trump. Well, we wouldn’t be, she says.
Oh, she says.

We Have Personalization Backwards
When we talk personalization, we tend to talk about targeting. You learn a certain set of things, you get tested, the personalization software finds knowledge gaps and runs you through the set of canned explanations that you need. (There are other, related meanings, for which Feldstein and Hill provide a partial taxonomy here).
The idea seems to be that there is a wide variety in what concepts students struggle with, but there is one perfect explanation per concept. Personalization gets the explanation of that concept to the student.
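The targeting model described above (test the student, find the knowledge gaps, serve the one canned explanation per gap) can be sketched in a few lines. This is a hypothetical illustration, not any real product’s code; the concept names, data, and function names are all invented.

```python
# A toy sketch of "personalization as targeting": one canned explanation
# per concept, served to whoever shows a gap. All data here is invented.

CANNED = {
    "confirmation_bias": "The single stock explanation of confirmation bias.",
    "anchoring": "The single stock explanation of anchoring.",
}

def find_gaps(quiz_results):
    """Concepts the student answered incorrectly on the diagnostic quiz."""
    return [concept for concept, correct in quiz_results.items() if not correct]

def remediate(quiz_results):
    # Every student with the same gap gets the identical remedy.
    return {concept: CANNED[concept] for concept in find_gaps(quiz_results)}

plan = remediate({"confirmation_bias": False, "anchoring": True})
print(list(plan))  # only the missed concept is targeted
```

Note that the student appears in this sketch only as a vector of right and wrong answers; nothing about who they are influences which explanation they get.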
That’s part of the story, but it’s not the most important half.
When tutors work with students they certainly alter what they work on based on student need. They personalize what skills they target.
But the biggest advantage of a tutor is not that they personalize the task, it’s that they personalize the explanation. They look into the eyes of the other person and try to understand what material the student has locked in their head that could be leveraged into new understandings. When they see a spark of insight, they head further down that path. When they don’t, they try new routes.
If you find yourself teaching people something — anything — you’ll see this at work. How many times do you begin with the phrase “So have you heard of X?” There you are, looking for the way into the explanation. It could be from point X, a Magnetic Fields song. Or from point Y, a Trump analogy. For a Trump-supporting indie-rock-hater it’s going to be a completely different entry point, and a different explanation.
I see little talk about this in the marketing of personalization. Yes, we talk “playlists” and “customized paths”, but it’s as if we’re addressing vitamin deficiencies. The assumption is that everyone is deficient in different ways, but given any individual’s deficiency there is one remedy. You’re uniquely deficient in folate and iron; here’s the standard five milligrams of folic acid and an iron supplement. In a personal playlist!
While not entirely useless, this conception doesn’t fit the bulk of my experience as either a teacher or a learner. In my experience, students often have very similar skill gaps, but the remedy for each student may be radically different.
A truly personalized system would not merely find the questions the student needs explanations for: it would find the explanations most useful to the students. And by most useful, I don’t mean “learning styles” or “friendly robots in the sky” or anything like that. I mean keying into the fact that an adult student who is struggling in Introductory Psychology might in fact be an expert in business negotiation, and finding an explanation of “confirmation bias” that makes use of that student’s existing knowledge and talents rather than treating it as irrelevant.
I mean giving one kind of explanation to students who have a lot of vocabulary they can use to grasp new concepts quickly, while giving other students less concise explanations that avoid specialized vocabulary for the moment. At some level, I mean that the kid who loves indie rock will find the indie-rock-inflected explanation of something when it’s out there.
I’m not talking about a system that stores three publisher-produced versions (image-based! textual! kinesthetic!) of content directed to students on the basis of a multiple-choice test. I’m talking about something much more radical than that: a system that has hundreds of explanations and examples for any concept the student wishes to learn.

What Real Personalization Would Look Like
With the right system of connected explanations and examples we could serve students the individual content they need on numerous dimensions:
- The “traditional” student could get the explanation that taps into dorm life and high school drama, while the older student could get an example that resonates more with their life and doesn’t make them feel unwelcome every time they read the textbook.
- Students who find one example too difficult could “dial-down” to something more introductory.
- Students with a special area of expertise would have opportunities to leverage that expertise to understand new things.
- Students with accessibility issues could get accessible content, and rather than universal design meaning the best possible path for everybody it could mean the material and platform out of which anyone could construct a viable and unique path, regardless of strengths and challenges.
- As students came to understand things, they would write their own explanations and examples which could be fed back into the system to be used by others.
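Under the (entirely hypothetical) assumption of a store holding many tagged explanations per concept, the selection step the bullets above describe might look like the following sketch. The class name, tags, sample data, and scoring rule are all invented for illustration; a real system would learn these weights rather than hard-code them.

```python
# Toy sketch of a many-explanations-per-concept store: pick the explanation
# whose analogies overlap most with what the student already knows and loves.
from dataclasses import dataclass, field

@dataclass
class Explanation:
    concept: str
    text: str
    level: int                                   # 1 = introductory, 3 = advanced
    interests: set = field(default_factory=set)  # domains the analogy draws on

STORE = [
    Explanation("confirmation_bias", "Negotiators anchor on early offers...",
                level=2, interests={"business", "negotiation"}),
    Explanation("confirmation_bias", "You replay only the reviews that praise your favorite band...",
                level=1, interests={"indie_rock"}),
    Explanation("confirmation_bias", "A generic textbook definition...",
                level=1, interests=set()),
]

def pick_explanation(concept, student_interests, max_level=3):
    """Return the in-level explanation that overlaps most with the student's
    interests; ties go to the simpler explanation. Assumes the store always
    holds at least one generic explanation per concept."""
    candidates = [e for e in STORE if e.concept == concept and e.level <= max_level]
    return max(candidates, key=lambda e: (len(e.interests & student_interests), -e.level))

best = pick_explanation("confirmation_bias", {"business", "negotiation"})
print(best.interests)  # the business-negotiation flavored explanation wins
```

The same lookup with a lower max_level implements the “dial-down” bullet: the student gets a simpler explanation of the same concept rather than being routed away from it.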
If we in the OER community were pursuing this dream, here’s some things we’d be working on:
- Modular content instead of textbooks. Tons of the stuff.
- Massively varied content instead of massively generic content.
- Recommendation engines that track what sort of content works for individual people.
- Technology that allows individual students to share and curate the material that works for them.
- Systems for students and faculty to create, fork, and improve content they use.
- Pedagogy that allows multiple approaches to (and ultimately multiple interpretations of) course goals.
Now, here’s what we seem to be working on instead:
- Textbooks (but open).
- End-to-end courseware (but free).
- Personalization technologies that map the thin content of textbooks and courseware to testable student needs.
I understand why this is (I really do!). We have to pave the existing cowpaths, to some extent, and that means content swapped in at the level of the course, not the individualized need. The fact we are even at the table suggesting open substitutes is the result of a decade and a half of focused effort. I get it.
But as we build these linear, centralized systems, the rest of the world is moving beyond them. It’s time we take notice and build the kind of learning environment that open resources might make uniquely possible.
On today’s indie rock car ride my daughter mentions she has to write a piece for her college history course relating Churchill’s Iron Curtain speech to the ensuing Cold War. She absolutely hates history (which kills a history lover like me, but there you go). She’s an engineering geek, an astronomy nut who loves to tell you the physics of how the first probe got to Mars, but couldn’t distinguish John Adams from Thomas Jefferson.
So an essay about the Iron Curtain? Ugh.
Well, I venture, one thing to remember about that speech is that the Soviets were really frightening at the time. They were a technologically advanced society that seemed willing to sacrifice anything for the national interest. It scared the bejeezus out of us.
“Tell me about it,” she says. And then she launches into this detailed story of how the Americans were in awe of the early Soviet space program, and couldn’t figure out how they were doing this stuff. But the thing about the Soviets was they would never announce a mission until it was successful, so they could fail multiple times where the Americans couldn’t. And even with small things they would control the narrative. Laika, the famous dog the Russians sent into space, died a brutal death in the first hours of her flight and was never meant to be retrieved, but the full story of that flight wasn’t known until the 2000s. Even the Russian focus on Venus vs. Mars was interesting, in that failure on Venus was much less public.
Huh, I say. You think you could relate some of that to the Iron Curtain speech?
“Maybe,” she laughs.
By Michael Feldstein
So here we go again. Another terminology war. First there was the battle for open. Then the battle for MOOCs. Somewhere in there was the battle for edupunk.
I stay out of terminology wars because, even though they are often about very real and important issues, the emphasis on finding a single correct definition tends to distract rather than focus the conversation.
It’s different with “personalized learning” because there is no fight over its meaning right now. Rather, it seems to have no specific meaning at all. Sometimes it is used interchangeably with “adaptive learning.” But not always. And not exactly. More often it means, roughly, “robot tutor in the sky.” Shorter version: “WHEEEEEE!!!” Or maybe, “Wingardium leviosa!”
Phil and I have decided to claim this prime piece of linguistic real estate. We are asserting squatters’ rights.
We hereby decree, by the power vested in us by nobody at all, that “personalized learning” shall henceforth refer to a family of teaching practices that are intended to help reach students in the metaphorical back row. The ones who are bored, or confused, or tuned out, or feeling stupid. Personalized learning practices are almost always ones that teachers have been using for a very long time but that digital tools can support or enhance. Here are a few that we have identified so far:
Move content broadcast out of the classroom: In many disciplines, the ideal teaching format is a seminar, in which students spend class time engaged in conversation with a professor. In others, it is a lab. Both models have students actively engaged in academic practice during class time, when the professor, as the expert practitioner, is present to coach them. Every class spent lecturing is a wasted coaching opportunity.
Many disciplines have traditionally used assigned readings to move content broadcast out of the classroom, and some still do. But it is not always possible to find readings that capture what you want to cover, and in any case, it is becoming harder to persuade students to read. Luckily, there are tools that can help with this problem. You can record and post your lectures as videos, which students can watch as many times as they need to absorb what you’re trying to tell them. You can assign podcasts that they can listen to on the go, or find interactive content that keeps them more engaged.
Make homework time contact time: Good teachers help students see the direct connection between the work they do at home and the overall purpose of the class. They do this in a variety of ways. Sometimes they mark up and comment on the student work. Sometimes they ask the students questions in class that require them to build on the work they did at home. For a variety of reasons, which often boil down to professors’ having less available time per student, this has become harder to do. The great crutch that is now being used to limp along without actually solving this problem is robo-graded homework assignments. By itself, automated practice might help some students drag themselves through to the end of the semester. But it doesn’t often inspire them to think that maybe they are not destined to be the student in the back row forever. (There are important exceptions to this rule, which I address below.)
On the other hand, these same automated homework tools can also give teachers an easy view into how their students are doing and create opportunities to engage with those students. “Analytics” in these tools are roughly analogous to your ability to scan the classroom visually and see, at a glance, who is paying attention, who looks confused, who has a question. Nor are these the only tools available for making homework time feel less isolated and pointless. Any homework activity that is done electronically can be socially connected. Group work done on a discussion board can be read over by the professor when she has time. Highlights and margin notes on readings can be shared and discussed in class. This sort of effort on the professor’s part doesn’t have to be exhaustive (or exhausting). Sometimes a small gesture to show a student that you see her is all it takes.
Hire a tutor: You know what tutors are typically good for in your particular discipline. You also know that there generally aren’t enough good ones available, and that even when there are, it’s tough to get students to come into the tutoring center. One of the best uses of machine-graded homework systems, especially when they are “adaptive,” is to treat them as personal tutors that are available to students whenever they need them and wherever they are. They aren’t perfect, but what tutors are? Sometimes getting students out of the back row means helping them to believe that they are capable of learning. And sometimes students are willing to pose a question to a computer that they would be embarrassed to ask in person. In those cases, a little extra practice and feedback on the basics, without judgment, can make all the difference — even if the feedback comes from a machine. And if adaptive learning robo-tutors don’t fit the needs of your students and your discipline, technology also makes it possible to connect students with actual human tutors, who are available online to help them get through the rough spots.
We have written about this idea briefly in The Chronicle as well as EdSurge and in detail in EDUCAUSE Review. We will keep beating this drum everywhere we can. We do not want or expect to create a definitive list of teaching techniques that define personalized learning. But higher ed needs a general term that focuses attention on student needs and teaching practices that can support those needs rather than on product features and computational plumbing. Since “personalized learning” is both widely known and nearly perfectly meaningless—all we know is that it has something to do with individual persons, learning, and technology—it seems like a good fit.
Furthermore, the vendors who have adopted the “personalized learning” mantle need this change as much as teachers and students do. While it is apparently possible to raise more than $150 million in venture funding on the strength of “robot tutor in the sky,” generating actual revenue from paying customers is another matter entirely. Faculty want to know what these “personalized learning” thingamabobs do, and they want to control how the products integrate into their classes. The evidence that this is so can be found in the massive, industry-wide pivot away from the xMOOC dream that one super-professor and big data can make all other versions of a course obsolete. Acrobatiq, a firm spun out of Carnegie Mellon University’s work on cognitive tutoring courseware, each course of which required a great deal of expert labor to hand-craft, is now touting an authoring platform for faculty. McGraw Hill Education, which has been promoting SmartBooks, has now released the authoring platform behind those products so that anybody can create their own. SmartSparrow, which has always been a DIY tool, is touting its “free learning design tool” that “tries to make it possible for faculty to incorporate their own course materials in an adaptive learning environment.” Meanwhile, the Washington Post has declared, “‘Big data’ was supposed to fix education. It didn’t. It’s time for ‘small data.'”
Where have you gone, Joe DiMaggio Thomas Friedman?
The nation turns its lonely eyes to you.
Woo woo woo!
What’s that you say, Professor Robinson?
Flat World Tom has left and gone away.
Hey hey hey!
Hey hey hey.
So I will say it again, not for the last time: Personalized learning is not a product. It is not a thing you can buy. If the term means anything at all, it must mean a set of teaching practices for reaching struggling or alienated students that many good teachers have known and employed for a very long time and that sometimes can be supported or improved through the use of educational technologies.
What do you know? It turns out the world is round after all.
- In what I can only imagine is a deliberate attempt to demonstrate that they have not over-invested in marketing whiz-bangery, the company has named the product the Learning Science Platform and promoted it nowhere I can find to nobody I can find. But it does exist, and they do talk about it, at least to me. I will have more to say, and show, about the interesting product with the sad name at some point soon.
By Phil Hill
At e-Literate we mostly avoid blogging about our consulting work through MindWires Consulting, but we have an opportunity with our work for California’s Online Education Initiative (OEI) to share information with the higher education community on a topic of growing importance. The OEI is the California Community College System’s approach to helping individual colleges collaborate on their online courses and programs, including an OEI Course Exchange to be launched this Fall in pilot mode that will “allow students to register for online courses across participating colleges without requiring students to complete separate application and matriculation processes”.
Last year we at MindWires helped OEI select a Common Course Management System (yes, they use the CMS language instead of LMS) by providing market analysis and facilitating the group decision-making process. This year they asked us to review similar efforts at other consortia in the US and Canada. The point of a CCMS is not the technology platform itself but rather what the common e-learning infrastructure could allow a consortium to do – address issues such as course redesign, professional development, student support, etc. Even though the OEI was based on selecting a common system from the beginning and has experienced significant adoption already, there is still a great deal of value in learning from others that have gone before. We are releasing the results of this work in the report “A Retrospective on Implementing Course Management Systems: Motivations, Benefits, Drawbacks and Recommendations”.
While I contributed to the report, O’Neal Spicer from MindWires and Michelle Pilati from OEI are the primary authors. They interviewed staff and leadership from 10 different organizations that have implemented a common CMS, have deliberately chosen not to do so, or support such groups. The interviewed groups include:
- The State University of New York
- Utah Education Network
- Mississippi Virtual Community College
- University of Wisconsin System
- Virginia’s Community Colleges
- Colorado Community Colleges Online
- WICHE Cooperative for Education Technologies (WCET)
- British Columbia Campus (BC Campus)
- Great Plains Interactive Distance Education Alliance (IDEA)
- Connecticut Distance Learning Consortium
The report explores what the initial motivations were for these groups to select a common platform, the benefits that have been realized, the drawbacks consortia face, the challenges in implementation, and recommendations for those groups considering similar choices.
You can view and download the full report here:
Update: Fixed page number error
The post A Retrospective on Implementing Common Course Management Systems appeared first on e-Literate.
By Phil Hill
Last week I shared a static view of the US institutions with the 30 highest enrollments of students taking at least one online (distance ed, or DE) course. But we can do better than that, thanks to some help from Justin Menard at LISTedTECH and his Tableau guidance.
The following interactive chart allows you to see the full rankings based on undergraduate, graduate and combined enrollments. And it has two views – one for students taking at least one online course and one for exclusive online students. Note the following:
- (1) shows how you can change views by selecting the appropriate tab.
- (2) shows how you can sort on any of the three measures (hover over the column header).
- (3) shows the sector for each institution next to the institution name.
Have at it! You can also go directly to Tableau, allowing a wider view of the table.
The post Fall 2014 IPEDS Data: Interactive table ranking DE programs by enrollment appeared first on e-Literate.
By Michael Feldstein
ASU’s Lou Pugliese was kind enough to invite me to participate in a panel discussion on “Next-Generation Digital Platforms,” which was really about a soup of adaptive learning, CBE, and other stuff that the industry likes to lump under the heading “personalized learning” these days. One of the reasons the panel was interesting was that we had some smart people on the stage who were often talking past each other a little bit because the industry wants to talk about the things that it can do something about—features and algorithms and product design—rather than the really hard and important parts that it has little influence over—teaching practices and culture and other messy human stuff. I did see a number of signs at the conference (and on the panel) that ed tech businesses and investors are slowly getting smarter about understanding their respective roles and opportunities. But this particular topic threw the panel right into the briar patch. It’s hard to understand a problem space when you’re focusing on the wrong problems. I mean no disrespect to the panelists or to Lou; this is just a tough nut to crack.
I admit, I have few filters under the best of circumstances and none left at all by the second afternoon of an ASU/GSV conference. I was probably a little disruptive, but I prefer to think of it as disruptive innovation.
Here’s the video of the panel:
The post No Filters: My ASU/GSV Conference Panel on Personalized Learning appeared first on e-Literate.
By Phil Hill
The National Center for Educational Statistics (NCES) and its Integrated Postsecondary Education Data System (IPEDS) provide the most official data on colleges and universities in the United States. This is the third year that IPEDS has included distance education data, following the initial Fall 2012 and the Fall 2013 datasets.
Let’s look at the top 30 online programs for Fall 2014 (in terms of total number of students taking at least one online course). Some notes on the data source:
- I have combined the categories ‘students exclusively taking distance education courses’ and ‘students taking some but not all distance education courses’ to obtain the ‘at least one online course’ category;
- Each sector is listed in its own column;
- IPEDS tracks data based on the accrediting body, which can differ for systems – I manually combined most for-profit systems into one institution entity, as well as Arizona State University;
- See this post for Fall 2013 Top 30 data and see this post for Fall 2014 profile by sector and state.
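As a minimal sketch of the first note above, combining the two IPEDS distance education categories and ranking institutions by the combined measure might look like this (the field names and enrollment figures are made up for illustration; they are not actual IPEDS variable names or real data):

```python
# Hypothetical sketch: IPEDS reports two DE enrollment categories per
# institution; the "at least one online course" figure is their sum.

def at_least_one_online(exclusive_de, some_de):
    """Combine the two IPEDS distance education enrollment categories."""
    return exclusive_de + some_de

# Illustrative records, not real IPEDS data
institutions = [
    {"name": "Example State U", "exclusive_de": 12000, "some_de": 8000},
    {"name": "Sample College", "exclusive_de": 3000, "some_de": 1500},
]

# Rank institutions by the combined "at least one online course" measure
ranked = sorted(
    institutions,
    key=lambda i: at_least_one_online(i["exclusive_de"], i["some_de"]),
    reverse=True,
)
```

The actual Top 30 table is produced the same way in spirit: sum the two categories per institution entity (after manually merging multi-campus systems), then sort descending.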
The post Fall 2014 IPEDS Data: Top 30 largest online enrollments per institution appeared first on e-Literate.
By Phil Hill
The National Center for Educational Statistics (NCES) and its Integrated Postsecondary Education Data System (IPEDS) provide the most official data on colleges and universities in the United States. I have been analyzing and sharing the data in the initial Fall 2012 dataset and for the Fall 2013 dataset. Both WCET and the Babson Survey Research Group also provide analysis of the IPEDS data for distance education. I highly recommend the following analysis in addition to the profile below (we have all worked together behind the scenes to share data and analyses).
- WCET’s initial analysis of Fall 2014 data
- WCET’s comparison of Fall 2014 to past years
- BSRG’s annual report on distance education using Fall 2014 data
- WCET’s update on the data quality issues with IPEDS data
Below is a profile of online education in the US for degree-granting colleges and universities, broken out by sector and for each state.
Please note the following:
- For the most part the terms distance education and online education are used interchangeably, but they are not equivalent, as DE can include courses delivered by a medium other than the Internet (e.g. a correspondence course).
- I have provided some flat images as well as an interactive graphic at the bottom of the post. The interactive graphic has much better image resolution than the flat images.
- There are three tabs below in the interactive graphic – the first shows totals for the US by sector and by level (grad, undergrad); the second also shows the data for each state; the third shows a map view.
- Yes, I know I’m late this year in getting to the data.
If you select the middle tab, you can view the same data for any selected state. As an example, here is data for Virginia in table form.
There is also a map view of state data colored by number of, and percentage of, students taking at least one online class for each sector. If you hover over any state you can get the basic data. As an example, here is a view highlighting Virginia private 4-year institutions.
For those of you who have made it this far, here is the interactive graphic. Enjoy the data.
The post Fall 2014 IPEDS Data: New Profile of US Higher Ed Online Education appeared first on e-Literate.
By Phil Hill
The following excerpt is based on a post first published at The Chronicle of Higher Education.
With all of the discussion around the role of online education for traditional colleges and universities, over the past month we have seen reminders that key concerns are about people and pedagogy, not technology. And we can thank two elite universities that don’t have large online populations — MIT and George Washington University — for this clarity.
On April 1, the MIT Online Education Policy Initiative released its report, “Online Education: A Catalyst for Higher Education Reforms.” The Carnegie Corporation-funded group was created in mid-2014, immediately after an earlier initiative looked at the future of online education at MIT. The group’s charter emphasized a broader policy perspective, however, exploring “teaching pedagogy and efficacy, institutional business models, and global educational engagement strategies.”
While it would be easy to lament that this report comes from a university with few online students and yet dives into how online learning fits in higher education, it would be a mistake to dismiss the report itself. This lack of “in the trenches” experience with for-credit online education helps explain the report’s overemphasis on MOOCs and its underemphasis on access and nontraditional learner support. Still, the MIT group did an excellent job of getting to some critical questions that higher-education institutions need to address. Chief among them is the opportunity to use online tools and approaches to instrument and enable enhanced teaching approaches that aren’t usually possible in traditional classrooms.
The core of the report, in fact, is based on the premise that online education and online tools can enable advances in effective pedagogical approaches, including constructivism, active learning, flipped classrooms, problem-based learning, and student-centered education. It argues that the right way to use technology is to help professors teach more effectively:
“Technology can support teachers in the application of the relevant principles across a group of students with high variability. In fact, technology can help tailor lessons to the situation in extremely powerful ways.
The instrumentation of the online learning environment to sense the student experience and the ability to customize content on a student-by-student basis may be the key to enabling teachers to provide differentiated instruction, informed by a solid foundation in cognitive science. Modern online courses and delivery platforms already implement some of these concepts, and provide a framework for others.”
But there is value in seeing what happens when that advice is ignored. And that’s where an incident at George Washington University comes in. If technology is just thrown at the problem with no consideration of helping educators to adopt sound pedagogical design, then we can see disasters.
On April 7, four students who took an online program for a master’s degree in security and safety leadership from George Washington’s College of Professional Studies filed a class-action lawsuit against the university for negligence and misleading claims. As reported by The GW Hatchet, a student newspaper:
For a non-paywall version of the full article, good through 4/26, follow this link.
Update: What interesting timing! See Michelle Pacansky-Brock’s post on a very similar topic.
The nature of online classes varies dramatically, much like face-to-face classes. But, in both scenarios, the teacher matters and the teaching matters. When an online class is taught by an engaged and empathetic instructor who seeks to be aware of the needs of her students, the asynchronous nature of online learning may become a benefit to students, not a disadvantage. This is contingent upon the design of the course, which is where instructional designers or “learning engineers” can play an important role. Many instructors, however, play both roles — and those who do are often the professors who experience deep transformations in their face-to-face classes as a result of what they learned from teaching online.
The post A Moment of Clarity on the Role of Technology in Teaching appeared first on e-Literate.
By Phil Hill
Just over four years after Providence Equity Partners acquired Blackboard and three years after they brought in Jay Bhatt to replace co-founder Michael Chasen, the company hired Bill Ballhaus as its new CEO at the beginning of January. 100 days in, Ballhaus is starting to make changes to the organization and providing some insights into future corporate directions.
The most significant change is a reorganization that combines strategy, product management and marketing in one group under Katie Blot. In an interview Michael and I had with Ballhaus and Blot earlier this week, they described the primary motivation for the organizational change as the need to more tightly align those functions. Also significant is that this change means the departure of Mark Strassman, SVP Product Marketing & Management, and Tracey Stout, SVP of Marketing & Sales Effectiveness. Blackboard provided the following statement.
We are deeply grateful for the many contributions both Mark and Tracey have made at Blackboard. Both of these individuals have been critical to driving the transformation and evolution of our PMM and Marketing organizations.
Katie Blot joined Blackboard in 2008 as President of Global Services and has been SVP of Corporate & Industry Strategy since early 2015. Her long experience at Blackboard is worth considering, as is the fact that both departing executives worked with Jay Bhatt at Autodesk earlier in their careers and were brought into Blackboard as part of his new management team. I am not suggesting that the purpose of the move was based on corporate pedigree, but I am suggesting that the move effectively changes the balance in how much ed tech experience and even Blackboard experience rests with the top company executives.
When we asked Ballhaus about lessons learned after his listening tour with customers, he told us that the company must do a few things very well. And the top of his priority list is the Learn LMS product family. This focus on products stands in contrast to Bhatt’s broader and more vague focus on solutions. Michael noted the change in tone back in the 2013 BbWorld keynote:
The big corporate keynote had to be one of the strangest I’ve ever seen. CEO Jay Bhatt ran through a whole long list of accomplishments for the year, but he only gave each one a few seconds as he rattled through the checklist. He mentioned that the company has a new mission statement but didn’t bother to explain it. It took nearly an hour of mostly talking about big macro trends in education and generalities about the categories of goals that the company has set before he finally got around to new product announcements. And then commenced what I can only describe as a carpet bombing run of announcements—a series of explosions that were over by the time you realized that they had started, leaving you to wonder what the heck had just happened.
At that same 2013 keynote (and in Michael’s post) Blackboard announced a major UX overhaul for Learn (the Ultra part) and a move to the cloud (the SaaS part). By the 2015 BbWorld conference Michael shared how Ultra was a year late and not yet ready for schools to test. The company has tripped over itself in not getting product out the door and not being able to create effective messaging. Just what are Learn Ultra and Learn SaaS, and when will real colleges and universities get to evaluate them?
When we asked when Learn Ultra would be available for schools to actively pilot (real courses, real students, with major integrations to student rosters, etc), it was interesting to hear both Ballhaus and Blot take a very different approach and give what appear to be much more conservative estimates. Learn Ultra should be available for limited-functionality pilots for specific faculty (e.g. for courses not using the LMS heavily) by Fall 2016 and more broadly for institutions in Spring 2017, leading to general availability in Summer or Fall 2017.
It is encouraging that Blackboard appears to be increasing its focus on getting core LMS product updates out the door, and we have also noticed a tighter message about Ultra over the past two months. There is now a Learn Ultra preview for educators, where people can sign up and play around with courses in both Original View (what you know as Learn 9.1) and Ultra View (the new UX). Part of the purpose of this preview is to enable customers to get a better feel for Learn SaaS and also to help them determine whether a Fall 2016 or a Spring 2017 Learn Ultra pilot makes sense for them.
We will bring you more analysis of the Learn Ultra preview and of the broader organizational changes at Blackboard in future posts. Stay tuned, and you can also sign up for more information on our upcoming e-Literate LMS subscription service.
"2016 is going to be an eventful year for the LMS" ® by @mfeldstein67
— Phil Hill (@PhilOnEdTech) February 19, 2016
The post Blackboard CEO’s First 100 Days: Reorganization and Learn Ultra Updates appeared first on e-Literate.
By Michael Feldstein
Which CEO has recently said or done all of the following:
- Suggested to an audience of VCs and ed tech entrepreneurs at the GSV conference that the importance of big data in education has been overstated
- Told that same audience that the biggest gains from adaptive learning come when it is wrapped in good pedagogy delivered by good teachers
- Asked former CIOs from Harvard and MIT, both of whom are senior company employees, to develop collaborations with the academic learning science community
- Accurately described Benjamin Bloom’s two-sigma research, with special attention to the implications for the bottom half of the bell curve
- When asked a question by an audience member about an IMS technical interoperability standard in development, correctly described both the goals of the standard and its value to educators in plain English
Answer: David Levin of McGraw Hill.
Yes yes, those are just words. But I have gotten a good look at some of what their ed tech product and data science groups have been up to lately, and I have spoken to Levin at length on a few occasions (and grilled him at length on two of them).
My advice: Pay attention to this company. They are not screwing around.
By Phil Hill
This is the eighth year I have shared the LMS market share graphic, commonly known as the squid graphic, for (mostly) US higher education. The original idea remains – to give a picture of the LMS market in one page, highlighting the story of the market over time. The key to the graphic is that the width of each band represents the percentage of institutions using a particular LMS as its primary system.
This year marks a significant change based on our upcoming LMS subscription service. We are working with LISTedTECH to provide market data and visualizations. This data source provides historical and current measures of institutional adoptions, allowing new insights into how the market has worked and into current trends. The current graphic gets all of its data from LISTedTECH. Previous versions of the graphic used an anchoring technique, combining data from different sources in different years, with interpolation where data was unavailable. Now every year’s data is based on this single data source.
This graphic has been in the public domain for years, however, and we think it best to keep it that way. In this way we hope that the new service will provide valuable insight for subscribers but also improve what we continue to share here on the e-Literate blog.
Since we have data over time now and not just snapshots, we have picked the end of each year for that data. For this reason, the data goes through the end of 2015. We have 2016 data but chose not to share partial-year results in an effort to avoid confusion.
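For readers curious about the mechanics behind the band widths, the share calculation can be thought of as a simple count over institution records at each year-end snapshot (this is an illustrative sketch under my own assumptions, not the actual LISTedTECH pipeline; the record fields and numbers are hypothetical):

```python
# Sketch: band width for an LMS in a given year = its percentage share of
# institutions reporting it as the primary system at that year's end.
from collections import Counter

def market_share(records, year):
    """Return {lms: percent of institutions using it as primary} for a year."""
    primaries = [r["lms"] for r in records if r["year"] == year]
    counts = Counter(primaries)
    total = sum(counts.values())
    return {lms: 100.0 * n / total for lms, n in counts.items()}

# Toy year-end snapshot, not real market data
records = [
    {"year": 2015, "lms": "Blackboard Learn"},
    {"year": 2015, "lms": "Canvas"},
    {"year": 2015, "lms": "Canvas"},
    {"year": 2015, "lms": "Moodle"},
]
shares = market_share(records, 2015)
```

Running the same calculation at each year-end and stacking the results over time is, conceptually, what produces the squid graphic's bands.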
A few items to note:
- As noted in previous years, the fastest-growing LMS is Canvas. There is no other solution close in terms of matching the Canvas growth.
- Blackboard continues to lose market share, although the vast majority of that reduction over the past two years has been from customers leaving ANGEL. Blackboard Learn lost only a handful of clients in the past year.
- Pearson has announced LearningStudio’s end-of-life for the end of 2017, which is next year.
- With the new data set, the rapid rise and market strength of WebCT become much more apparent than in previous graphics.
- There is a growing line for “Other”, capturing the growth of those systems with fewer than 50 active implementations as primary systems: systems like Jenzabar, Edvance360, LoudCloud Systems, WebStudy, Schoology, and CampusCruiser.
- While we continue to show Canvas in the Open Source area, we have noted that a more precise description is an Open Core model.
The post State of Higher Ed LMS Market for US and Canada: Spring 2016 Edition appeared first on e-Literate.
By Phil Hill
As we roll out our upcoming LMS subscription service here at e-Literate (see Michael’s post for initial description), we suspect that many of the e-Literate readers will be interested, but not all. We value the community here at e-Literate and want to ensure that the blog site itself remains as it’s always been – ad free, uncluttered, and with the same rough amount and breadth of content and discussions.
To help maintain the blog site’s feel, we have created a second email subscription for those people who would like more information on the LMS subscription service – when it’s going to be available, what the reports will look like, summaries of LMS analysis from the report and curated from the blog site, etc. You should see this new signup on the top of the right column in the desktop view, right under the signup for e-Literate posts. We will also include the form within posts that are relevant to analysis of the LMS market.
We are not going to automatically add current e-Literate subscribers to this new list, so if you’re interested in learning more on the service and getting content updates, sign up for the new subscription at this link.
Update: I apologize for any confusion as we test the signup button. The fields embedded in the post were not working in all cases, so we have replaced them with a link to a web signup page. Thank you for your patience as we fix any remaining issues.
The post Signup For More Information On LMS Subscription Service appeared first on e-Literate.
By Michael Feldstein
Not too long ago, Phil and I wrote a post about our long, slow process of realization that our blogging at e-Literate and our consulting at MindWires are not two mostly unrelated things but really two halves of a whole. And we teased the idea that these two worlds would be coming together soon.
Today we’re ready to pull back the curtain a little bit on what we’ve been working on for the short term and offer some hints about what we’re thinking about for the medium term. We have some fairly audacious ambitions for the long term, but we don’t expect to get there overnight. In fact, we are going to start with a humble and somewhat unlikely (but hopefully useful) first paid subscription offering, which we will be making available in just a couple of weeks under the e-Literate brand. It will provide information and analysis above and beyond our continuing free content here on the blog (to which we remain strongly committed).
We’re going to be releasing an LMS market dynamics report, in partnership with LISTedTECH. We’d like to explain why we’re starting there, what we hope the report will accomplish, and where we will go from there.

The Much Maligned Minivan
Phil has called the LMS “the minivan of ed tech.” I remember back around 2000, I saw a Harvard Kennedy School executive education course start with a clicker question:
Which are you more likely to buy: a minivan or an SUV?
It turned out that there was a pretty interesting correlation between participant reactions to different case studies and their minivan/SUV split. Their car preferences provided a window into their larger views of the world and themselves.
But a lot has changed in the last sixteen years. While a Dodge Caravan is still nobody’s answer to a mid-life crisis, and while there are certainly still people who wouldn’t be caught dead in a minivan, the current generation of young parents, the ones who grew up in Dodge Caravans, are often unembarrassed to admit how much they like their family vehicle. It’s convenient. It solves a lot of problems. It’s not life-changing, it’s far from perfect, but it mostly works. Some of that change in attitude has come from innovation in the product category, but we’re talking about innovation on the level of sliding doors on both sides that can close by remote control. Most of the change is really about people growing up with the product, getting used to it as a feature in the environment, and getting better at figuring out how to take advantage of what it does well and work around what it doesn’t.
We have reached a similar stage with the LMS. We have entered the late majority phase. Most faculty that we speak to these days take the LMS for granted and, while they will often grumble about some aspect that they are unhappy with, more and more of them are making significant use of the platform—more than just posting a syllabus and some announcements. More of them will use adjectives like “useful,” unprompted, when talking about their particular LMS. I even heard one faculty member describe his school’s particular LMS as “humane” recently.
This is a more profound change than may be immediately obvious. One of the big reasons that I originally got interested in LMS design (back in 2000, around the same time as the minivan/SUV poll) was that I wanted to make it easier for more faculty to try teaching online, and the main reason I wanted to see that was that I knew from experience that moving to online teaching forces an instructor to think about pedagogy. I wanted to have that conversation with faculty. I wanted to help them think about how they could teach differently. The LMS is a kind of gateway drug for ed tech and, to a certain degree, for course redesign. Many faculty who end up teaching on WordPress or Mediawiki started on an LMS and then got passionate enough and clear enough about what they wanted that it launched them on a quest. Slowly, more and more faculty are beginning to have those pedagogy conversations. And the LMS is very often the catalyst.
At the same time, the LMS market remains mystifying. There was a time when everybody was sure that Blackboard would buy all the major LMS vendors and become a monopoly. Then a lot of people thought that the market would react against Blackboard’s looming monopoly by moving en masse to open source and staying there. Now people are wondering how long it will take for Instructure to completely own the market. And in the middle of all that, there are persistent predictions that the LMS will die any time now, either killed off by WordPress or disrupted by some startup or made free by Google or something else. We have gotten our sliding doors on both sides that close by remote control, but mainly, the LMS remains a stubbornly persistent category overall, and it has been hard to predict how people will move from one product to another—even if you follow the news obsessively and talk to a lot of people about it, like we do.
In some ways, maybe that’s OK. Maybe the old-timers and hard-liners are just going to have to make room in their hearts for the much maligned minivan. But at the same time, if we really are at a moment when change is creating new possibilities for conversations about improving teaching, then we don’t want to miss the opportunity to take maximum advantage. As Phil and I have both written about recently, there is currently a very poor connection between the real faculty and student needs that are surfacing on campuses and the ways in which those needs get translated, prioritized, and communicated to the folks who make the enabling technologies. This is one major reason that the market behavior is both hard to understand and sometimes frustratingly slow to respond. Education may not be “broken,” but the ed tech market most definitely is.

Increasing Capacity for Change
We have had an increasingly strong intuition for maybe the last six months or so, based on various bits and pieces that we have observed in the course of our daily blogging and consulting work, that the LMS landscape is far more ripe for dramatic change than it has been in a long time—maybe since the market reached saturation around 2003 or 2004. More importantly, we think this change is partly driven by changes on campus and, depending on how the LMS market evolution goes, can either support and amplify positive change or fail to do so. There is an opportunity to increase capacity for pedagogical sophistication on campuses, if only we can get educators and ed tech developers working together in the right ways. This is exactly the kind of problem that Phil and I most love to take on, both in our blogging and in our consulting.
We decided to start with an “outside in” look at the market data—who is moving to what, how quickly campuses are moving, and so on. There are some folks who came before us in this regard who are worth calling out. Campus Computing Project was the only source of this kind of data for a long time, and the market has benefited from its long-term analysis. Edutechnica expanded the universe of available data by building a web crawler. After much thought, we decided to work with LISTedTECH, partly because their unique collection of historic and current data allowed us to ask and answer new questions. Some of what we learned in the process of that partnership work confirmed our previous intuitions, and some of it surprised us. We’re going to hold off on sharing the details for just a couple more weeks until the report is ready to be released, but one thing we can say now is that the process of compiling the report increased our conviction that we are at an inflection point, both for the LMS market and for the broader positive use of technology in education.
We will be issuing a new report with updated analysis every six months, with regular updates (including the same level of free coverage that we have always given on e-Literate) more frequently.
In the fall, we will take the next step by offering a more “inside out” analysis aimed at supporting colleges and universities through the often painful process of selecting an LMS. We do a lot of consulting work of this type—not picking the LMS for the schools, but helping them to make better decisions for themselves by getting high-quality input from all their important campus stakeholder groups, organizing a rational process, asking good questions of the vendors, and so on. Part of this work is our analysis of what is going on with each of the LMS platforms—the development strategies, differentiators, and above all what we hear from customers—and part of it is the advising on the decision process elements that I just mentioned. We are going to boil as much of that as we can down to subscription content and tools that campuses can use either with or without our facilitation help.
We’ll have more to say about the report in the next couple of weeks as we get closer to its release, and we’ll also have more to say in the coming months about our larger ambitions, both for helping with LMS selection and for helping more broadly to improve the pace of positive educational change on campuses, supported by appropriate enabling technologies.
Update (PH): To sign up for more information on the upcoming service, go to this page.
- Not even the Grand Caravan.
By Michael Feldstein
My latest Chronicle column is on how inherently difficult it is to evaluate learning science claims, particularly when they get boiled down to marketing claims about product efficacy, and how deep academic distrust of vendors makes this already incredibly difficult challenge nearly impossible.
Here’s where I stand on vendor participation in ed tech and learning science research:
On the one hand, vendors have access to data and resources that, for a variety of reasons, are difficult or even impossible for universities to access. They sometimes have millions of students using their products and relatively few internal barriers to doing certain kinds of large-scale effectiveness research (although to a certain degree, territoriality, bureaucracy, and poorly designed data architectures are universal problems). On the other hand, vendors should not under any circumstances be allowed to define the research agenda and arbitrate the validity of learning sciences claims for all of education.

Theoretically, there is a simple solution to this. Higher ed knows how to do science. It knows how to review scientific claims. Vendor research should be put through the peer review process. Vendors should publish enough information that their results can be evaluated and, when feasible, replicated. Their reputations should be based in part on whether they are doing (or, at least, properly using) good scientific research. After all, most of the researchers at these companies are PhDs who are trained in the academic research process. There is no reason in the world why these companies can’t contribute to real, rigorous, peer-reviewed dialog and progress in the field.
Unfortunately, the current state of the learning sciences research community is weak and fragmented. There is incredibly good work going on in pockets here and there, but overall, it’s a mess. This is bad for many reasons. It lets vendors get away with junk science claims while failing to reward good behavior. Worse, it effectively abdicates the central role that academia should be playing in driving the research agenda.
The post We Need a More Robust Learning Sciences Research Community appeared first on e-Literate.
By Phil Hill
I have previously written a primer on competency-based education (CBE) using SPT Malan’s seminal article as the basis for understanding the key elements. Chris Mallett, formerly associate provost at Western Governors University (WGU) and currently VP for online programs at Northeastern University, has just posted a broader historical survey on CBE that is well worth reading. His extensive first-hand knowledge of the development of CBE in higher ed adds another reason to read the article.
In “What’s old is new again . . . a CBE long read”, Chris traces the origins of CBE back further than does Malan.
The earliest American competency-based education initiatives are said to have emerged through the development of the training programs used to quickly prepare soldiers, airmen, and others who were needed in support of the nation’s efforts in World War II (Joyce, 1971). The ability to deliver “precise and rapid training which considered the learner chiefly in terms of his capacity to respond to the training” was of paramount concern at the time (Joyce, 1971, p. 21). According to Gagne (as cited in Joyce, 1971), training programs deployed for these purposes were developed in four phases:
1. Program goals were identified with particular emphasis on behavioral elements and competencies to be achieved.
2. Behavioral elements and competencies were organized into coherent units.
3. Training exercises that aligned with desired behaviors and competencies were developed.
4. An evaluation system to assess acquisition of the desired behaviors and competencies was developed. Feedback from the system was provided to trainees and their instructors.
As seen in this excerpt, the article provides useful summaries of key concepts to help readers understand what CBE is and what it isn’t. Jumping ahead to the 1980s, we get to a point that is crucial to understand – the focus on adult learners.
Early 1980s competency-based education programs used development methods and service practices similar to those used by the competency-based practitioners of the 1960s and 1970s. According to Kasworm (1980), the programs identified specific learning outcomes and used both pre- and post-assessment instruments to determine if competencies had been achieved and mastered. Course content, instructional strategies, and processes all varied by program and were deployed consistent with the needs of students. Most programs employed an adult-learner orientation and aspired to achieve the certification of mastery, not just minimal competence (1980).
All of the programs studied by Kasworm were designed with the realities of adult learners in mind and centered on prescribed objectives and outcomes (i.e., the competencies). Most offered flexibility of time and participation so that “students may begin their learning at any time, progress in their learning of competencies at their own pace, and have opportunities to return to inadequately learned concepts of skills until mastery” (Kasworm, 1980, p. 19). Many programs offered personalized instruction. Pre-assessments were often used to diagnose skills and knowledge gaps. Post-learning assessment instruments provided for the certification of mastery. Most programs allowed for variable instruction, allowing students to select learning resources and experiences that would best meet their specific needs. Some competency-based education programs provided advisement or counseling. Some featured established competencies with an aligned, standard curriculum. Others directed students to curricular resources but left it to the learner to choose an appropriate path on his or her own (Kasworm, 1980).
Given the wide range of programs trying some flavor of CBE in the past few years, it is useful to see acknowledgement of the diversity of approaches.
The competency-based education practices Klein-Collins (2012) examined varied dramatically by institution. Some institutions emphasized competencies within traditional, instructor-led, credit-hour based systems. Klein-Collins described these institutions as offering “competency-focused programs” (2012, p. 31) in that leaders had applied a competencies framework to their existing, credit hour-based programs. Other institutions, she said, used “purely competency-based programs” (2012, p. 31) in lieu of traditional systems, creating efficiencies, learning flexibility, and economic advantages in the process. Among the latter group, the institutions Klein-Collins examined all relied on the use of assessments to verify students’ competencies and awarded credits and credentials strictly according to students’ performance with such instruments (2012).
Chris indicates that there will be future posts on the subject. I hope that he addresses two in particular:
- What are the limits of CBE, or under what conditions should CBE be attempted?
- What are examples of assignments and assessments within CBE programs that go beyond simple quizzes and multiple-choice assessments?
For those interested in CBE, go read the full article.
By Michael Feldstein
Those of you subscribed to the site by email may have noticed that you didn’t get anything in the last 24 hours. (Or maybe you didn’t notice, since the email never came.) We are aware of the problem. The new system sends a message once a day and is next scheduled to send an email digest every morning at 4 AM Eastern Time. For whatever reason, it did not engage this morning, but we believe that it is working correctly now. You should get something in your inbox that includes this post tomorrow morning. Obviously, if you are reading this by email, then all is well.
Thanks for your patience as we work out the kinks in the new system.
By Michael Feldstein
For those of you who subscribe to e-Literate by email, be aware that we’re switching over to a new system for handling emails today. Among other things, we’ve had complaints that a few people had trouble unsubscribing.
Which is bad.
The new plugin, Mailchimp, will hopefully solve this problem while enabling us to do some new things we’ve been thinking about as well. (More on that soon.)
We will switch over to the new plugin shortly after I post this message and will be publishing one or two new posts in the next 24 hours. So if you don’t receive any emails by this time tomorrow, or if you have any other email-related problems, then please let us know.