
Michael Feldstein

What We Are Learning About Online Learning...Online

Release of Empire State College Case Study on e-Literate TV

Fri, 2015-06-26 15:03

By Phil Hill

Today we are thrilled to release the fourth case study in our new e-Literate TV series on “personalized learning”. In this series, we examine how that term, which is heavily marketed but poorly defined, is implemented on the ground at a variety of colleges and universities.

We are adding two episodes from Empire State College (ESC), a school founded in 1971 as part of the State University of New York and designed, largely through one-on-one student-faculty interaction, to serve students who don't do well at traditional colleges. What problems are they trying to solve? How do students view some of the changes? What role does the practice of granting credit through prior learning assessment (PLA) play in non-traditional students' education?

You can see all the case studies (either two or three episodes per case study) at the series link, and you can access individual episodes below.

ESC Case Study: Personalized Prior Learning Assessments

ESC Case Study: Personalizing Personalization

e-Literate TV, owned and run by MindWires Consulting, is funded in part by the Bill & Melinda Gates Foundation. When we first talked about the series with the Gates Foundation, they agreed to give us the editorial independence to report what we find, whether it is good, bad, or indifferent.

As with the previous series, we are working in collaboration with In The Telling, our partners providing the platform and video production. Their Telling Story platform allows people to choose their level of engagement, from just watching the video to reading synchronized transcripts and exploring transmedia. We have added content directly to the timeline of each video, bringing up further references, like e-Literate blog posts or relevant scholarly articles, in context. With In The Telling's help, we are crafting episodes that we hope will be appealing and informative to those faculty, presidents, provosts, and other important college and university stakeholders who are not ed tech junkies.

We will release one more case study in early July, and we also have two episodes discussing the common themes we observed on the campuses. We welcome your feedback, either in comments or on Twitter using the hashtag #eLiterateTV.

Enjoy!


68% of Statistics Are Meaningless, D2L Edition

Wed, 2015-06-24 17:27

By Michael Feldstein

Two years ago, I wrote that D2L's analytics package looked ambitious and potentially ground-breaking, but that serious architectural issues with the underlying platform were preventing the product from working properly for customers. Since then, we've been looking for signs that the company has dealt with these issues and is ready to deliver something interesting and powerful. And what we've seen is…uh…

…uh…

Well, the silence has ended. I didn’t get to go to FUSION this year, but I did look at the highlights of the analytics announcements, and they were…

…they were…

OK, I’ll be honest. They were incredibly disappointing in almost every way possible, and good examples of a really bad pattern of hype and misdirection that we’ve been seeing from D2L lately.

You can see a presentation of the “NEW Brightspace Insights(TM) Analytics Suite” here. I would embed the video for you but, naturally, D2L uses a custom player from which they have apparently stripped embedding capabilities. Anyway, one of the first things we learn from the talk is that, with their new, space-age, cold-fusion-powered platform, they “deliver the data to you 20 times faster than before.” Wow! Twenty times faster?! That’s…like…they’re giving us the data even before the students click or something. THEY ARE READING THE STUDENTS’ MINDS!

Uh, no. Not really.

A little later on in the presentation, if you listen closely, you'll learn that D2L had been running a batch process to update the data once every 24 hours. Now, two years after announcing their supposed breakthrough data analytics platform, they are proud to tell us that they can run a batch process every hour, which, going from a 24-hour cycle to a one-hour cycle, is presumably where "20 times faster" comes from. As I write this, I am looking at my real-time analytics feed on my blog, watching people come and go. Which I've had for a while. For free. Of course, saying it that way, a batch process every hour, doesn't sound quite as awesome as

TWENTY TIMES FASTER!!!!!

So they go with that.
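To make the batch-versus-streaming distinction concrete, here is a minimal sketch in Python. It is purely illustrative and assumes nothing about D2L's actual implementation; fetch_events_since and update_dashboard are hypothetical callbacks.

    import time

    def run_batch_analytics(fetch_events_since, update_dashboard, interval=3600):
        """Batch model: pull everything from the last window, then publish.
        Dashboards are always up to `interval` seconds stale."""
        last_run = time.time()
        while True:
            time.sleep(interval)                   # wait out the window (1 hour here, 24 for a daily job)
            events = fetch_events_since(last_run)  # one big pull per window
            last_run = time.time()
            update_dashboard(events)

    def on_event(event, update_dashboard):
        """Streaming model: publish each click or view as it arrives.
        Staleness is just network latency, which is what a real-time blog feed gives you."""
        update_dashboard([event])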

There was an honest way in which they could have made the announcement and still sounded great. They could have said something like this:

You know, when LMSs were first developed, nobody was really thinking about analytics, and the technology to do analytics well really wasn't at a level where it was practical for education anyway. Times have changed, and so we have had to rebuild Brightspace from the inside out to accommodate this new world. This is an ongoing process, but we're here to announce a milestone. By being able to deliver regular, intra-day updates, we can now make a big difference in the value of your analytics. You can respond more quickly to student needs. We are going to show you a few examples of it today, but the bigger deal is that we have this new structural capability that will enable us to provide you with more timely analytics as we go.

That’s not a whole lot different in substance than what they actually said. And they really needed to communicate in a hype-free way, because what was the example that they gave for this blazing fast analytics capability? Why, the ability to see if students had watched a video.

Really. That was it.

Now, here again, D2L could have scored real points for this incredibly underwhelming example if they had talked honestly about Caliper and its role in this demo. The big deal here is that they are getting analytics not from Brightspace but from a third-party tool (Kaltura) using IMS Caliper. Regular readers know that I am a big fan of this standard-in-development. I think it's fantastic that an LMS company has made an early commitment to implement the standard and is pushing it hard as a differentiator. That can make the difference between a standard getting traction or remaining an academic exercise. How does D2L position this move? From their announcement:

With our previous analytics products, D2L clients received information on student success even before they took their first test. This has helped them improve student success in many ways, but the data is limited to Brightspace tools. The new Brightspace Insights is able to aggregate student data, leveraging IMS Caliper data, across a wide variety of learning tools within an institution’s technology ecosystem.

We’ve seen explosive growth in the use of external learning tools hooked into Brightspace over the past eighteen months. In fact, we are trending toward 200% growth over 2014. [Emphasis added.] That’s a lot of missing data.

This helps create a more complete view of the student. All of their progress and experiences are captured and delivered through high performance reports, comprehensive data visualizations, and predictive analytics.

Let’s think about an example like a student’s experiences with publisher content and applications. Until now, Brightspace was able to capture final grades but wouldn’t track things like practice quizzes or other assessments a student has taken. It wouldn’t know if a student didn’t get past the table of contents in a digital textbook. Now, the new Brightspace Insights captures all of this data and creates a more complete, living, breathing view of a student’s performance.

This is a big milestone for edtech. No other LMS provider is able to capture data across the learning technology ecosystem like this. [Emphasis added.]

I have no problem with D2L crowing about being early to market with a Caliper implementation. But let's look at how they positioned it. First, they talked about 200% growth in use of external learning tools in 2015. But what does that mean? Going from one tool to three tools? And what kind of tools are they? And what do we know about how they are being used? OK, on that last question, maybe analytics are needed to answer it. But the point is that D2L has a pattern of punctuating every announcement or talk with an impressive-sounding but meaningless statistic to emphasize how awesome they are. Phil recently caught John Baker using…questionable retention statistics in a speech he gave. In that case, the problem wasn't that the statistic itself was meaningless but rather that there was no reason to believe that D2L had anything to do with the improvement in the case being cited. And then there's the sleight-of-hand that Phil just called out regarding their LeaP marketing. It's not as bad as some of the other examples, in my opinion, but it is still disturbingly consistent with the pattern we are seeing. I am starting to suspect that somebody in the company literally made a rule: Every talk or announcement must have a statistic in it. Doesn't matter what the statistic is, or whether it means anything. Make one up if you have to, but get it in there.

But back to analytics. The more egregious claim in the quote above is that "no other LMS provider is able to capture data across the learning technology ecosystem like this [example that we just gave]," because D2L can't either yet. They have implemented a pre-final draft of a standard that requires implementation on both sides in order to work. I don't know of any publishers who have announced they are ready to provide data in the way described in D2L's example. In fact, there are darned few app providers of any kind who are there yet. (Apparently, Kaltura is one of them.) Again, this could have been presented honestly in a way that made D2L look fantastic. Implementing first puts them in a leadership position, even if that leadership will take a while to pay practical dividends for the customer. But they went for hype instead.

I can’t remember the last time I read one of D2L’s announcements without rolling my eyes. I used to have respect for the company, but now I have to make a conscious effort not to dismiss any of their pronouncements out-of-hand. Not because I think it’s impossible that they might be doing good work, but because they force me to dive into a mountain of horseshit in the hopes of finding a nugget of gold at the bottom. Every. Single. Time. I’m not sure how much of the problem is that they have decided that they need to be disingenuous because they are under threat from Instructure or under pressure from investors and how much of it is that they are genuinely deluding themselves. Sadly, there have been some signs that at least part of the problem is the latter situation, which is a lot harder to fix. But there is also a fundamental dishonesty in the way that these statistics have been presented.

I don’t like writing this harshly about a company—particularly one that I have had reason to praise highly in the past. I don’t do it very often. But enough is enough already.

 


About The D2L Claim Of BrightSpace LeaP And Academic Improvements

Wed, 2015-06-24 16:07

By Phil Hill

Recently I wrote a post checking up on a claim by D2L that seems to imply that their learning platform leads to measurable improvements in academic performance. The genesis of this thread is a panel discussion at the IMS Global conference where I argued that LMS usage in aggregate has not improved academic performance but is important, or even necessary, infrastructure with a critical role. Unfortunately, I found that D2L’s claim from Lone Star was misleading:

That’s right – D2L is taking a program where there is no evidence that LMS usage was a primary intervention and using the results to market and strongly suggest that using their LMS can “help schools go beyond simply managing learning to actually improving it”. There is no evidence presented[2] of D2L’s LMS being “foundational” – it happened to be the LMS during the pilot that centered on ECPS usage.

Subsequently I found a press release at D2L with a claim that appeared to be more rigorous and credible (delivered on an awful protected web page that prevents selecting, copying, and pasting).

D2L Launches the Next Generation of BrightSpace and Strives to Accelerate the Nation’s Path to 60% Attainment

D2L, the EdTech company that created Brightspace, today announces the next generation of its learning platform, designed to develop smarter learners and increase graduation rates. By featuring a new faculty user interface (UI) and bringing adaptive learning to the masses, Brightspace is more flexible, smarter, and easier to use. [snip]

D2L is changing the EdTech landscape by enabling students to learn more with Brightspace LeaP adaptive learning technology that brings personalized learning to the masses, and will help both increase graduation rates and produce smarter learners. The National Scientific Research Council of Canada (NSERC) produced a recent unpublished study that states: “After collating and processing the results, the results were very favourable for LeaP; the study demonstrates, with statistical significance, a 24% absolute gain and a 34% relative gain in final test scores over a traditional LMS while shortening the time on task by 30% all while maintaining a high subjective score on perceived usefulness.”

I asked the company to provide more information on this “unpublished study”, and I got no response.

Hello, Internet search and phone calls – time to do some investigation to see if there is real data to back up the claims.

Details on the Study

The Natural Sciences and Engineering Research Council of Canada (NSERC) is somewhat similar to the National Science Foundation in the US – it is a funding agency. When I called them, they made it perfectly clear that they don't produce any studies as claimed; they only fund them. I would have to find the appropriate study and contact the lead researcher. Luckily they shared the link to their awards database, and I did some searching on relevant terms. I eventually found some candidate studies and contacted the lead researchers. It turns out that the study in question was led by none other than Dragan Gasevic, founding program co-chair of the International Conference on Learning Analytics & Knowledge (LAK) in 2011 and 2012; he is now at the University of Edinburgh.

The grant was one of NSERC's Engage grants, which fund researchers teaming up with companies, and the partner was Knowillage, maker of an adaptive learning platform. D2L acquired Knowillage in the middle of the study and currently offers the technology as LeaP. LeaP is integrated into the main D2L learning platform (LMS).

The reason the study was not published was simply that Dragan was too busy, in part because of his move to Edinburgh, to complete and publish it, but he was happy to share information by Skype.

The study was done on an Introduction to Chemistry course at an unnamed Canadian university. Following roughly 130 students, the study looked at test scores and time to complete, with results reported for two objectives: the class midterm and the class final. This was a controlled experiment looking at three groupings:

  • A control group with no LMS, using just search tools and loosely organized content;
  • A group using Moodle as an LMS with no adaptive learning; and
  • A group using Moodle as an LMS with Knowillage / LeaP integrated following LTI standards.

Of note, this study did not even use D2L's core learning platform, now branded as BrightSpace. It used Moodle as the LMS, but the study was not about the LMS – it was about the pedagogical usage of the adaptive engine on top of Moodle. It is important to call out that, to date, LeaP has been an add-on application that works with multiple LMSs. I have noticed that D2L now redirects the web pages that called out such integrations (e.g. this one showing integration with Canvas and this one with Blackboard) to new marketing just talking about BrightSpace. I do not know whether this means D2L no longer allows LeaP integration with other LMSs. Update 6/25: Confirmed that LeaP is still being actively marketed to customers of other LMS vendors.

The study found evidence that Knowillage / LeaP allows students to have better test scores than students using just Moodle or no learning platform. This finding was significant even when controlling for students’ prior knowledge and for students’ dispositions (using a questionnaire commonly used in Psychology for motivational strategies and skills). The majority of the variability (a moderate effect size) was still explained by the test condition – use of adaptive learning software.
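We don't know the research team's actual statistical model, but a minimal sketch of the kind of analysis described (test scores regressed on the experimental condition while controlling for prior knowledge and dispositions) might look like the following Python; the file and column names are hypothetical.

    import pandas as pd
    import statsmodels.formula.api as smf

    # One row per student. condition is "control", "moodle", or "moodle_leap";
    # prior is a measure of prior knowledge; msq is the motivational-strategies
    # questionnaire score used to capture dispositions.
    df = pd.read_csv("chem_study.csv")

    # Final test score modeled on condition plus covariates; the coefficient on
    # the moodle_leap condition estimates the adaptive-learning effect.
    model = smf.ols("final_score ~ C(condition) + prior + msq", data=df).fit()
    print(model.summary())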

Dragan regrets the research team’s terminology of “absolute gain” and “relative gain”, but the research did clearly show increased test score gains by use of the adaptive software.
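To illustrate the terminology with made-up numbers: if the control group averaged 70% on the final and the LeaP group averaged 94%, the 24-point difference would be the "absolute gain," while the "relative gain" would be 24/70, or roughly 34%.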

The results were quite different between the mid-term (no significant difference between Moodle+LeaP group and Moodle only group or control group) and the final (significant improvements for Moodle+LeaP well over other groups). Furthermore, the Moodle only group and control group with no LMS reversed gains between midterms and finals. To Dragan, these are study limitations and should be investigated in future research. He still would like to publish these results soon.

Overall, this is an interesting study, and I hope we get a published version soon – it could tell us a bit about adaptive learning, at least in the context of Intro to Chemistry usage.

Back to D2L Claim

Like the Lone Star example, I find a real problem with misleading marketing. D2L could have been more precise and said something like the following:

We acquired a tool, LeaP, that when integrated with another LMS was shown to improve academic performance in a controlled experiment funded by NSERC. We are now offering this tool with deep integration into our learning platform, BrightSpace, as we hope to see similar gains with our clients in the future.

Instead, D2L chose to use imprecise marketing language that implies, or at least allows the reader to conclude, that their next-generation LMS has been proven to work better than a traditional LMS. They never come out and say "it was our LMS," but they also don't say enough for the reader to understand the context.

What is clear is that D2L’s LMS (the core of the BrightSpace learning platform) had nothing to do with the study, the actual gains were recorded by LeaP integrated with Moodle, and that the study was encouraging for adaptive learning and LeaP but limited in scope. We also have no evidence that the BrightSpace integration gives any different results than Moodle or Canvas or Blackboard Learn integrations with LeaP. For all we know given the scope of the study, it is entirely possible that there was something unique about the Moodle / LeaP integration that enabled the positive results. We don’t know that, but we can’t rule it out, either.

Kudos to D2L for acquiring Knowillage and for working to make it more available to customers, but once again the company needs to be more accurate in their marketing claims.


An Example Why LMS Should Not Be Only Part of Learning Ecosystem

Tue, 2015-06-23 11:51

By Phil Hill

In Michael’s initial post on the Post-LMS, he built on this central theme:

Reading Phil’s multiple reviews of Competency-Based Education (CBE) “LMSs”, one of the implications that jumps out at me is that we see a much more rapid and coherent progression of learning platform designs if you start with a particular pedagogical approach in mind.

The idea here is not that the traditional LMS has no value (it can be critical infrastructure, particularly for mainstream faculty adoption), but rather that we both expect to see more learning platform designs tied to specific pedagogies in the future. This idea is quite relevant given the ongoing LMS users' conferences (InstructureCon last week, D2L Fusion this week, BbWorld next month, and Apereo / Sakai as well as iMoot in the past two months).

Later in the post Michael mentions ASU’s Habitable Worlds as an example of assessing the quality of students’ participation instead of direct grading.

A good example of this is ASU’s Habitable Worlds, which I have blogged about in the past and which will be featured in an episode of the aforementioned e-Literate TV series. Habitable Worlds is roughly in the pedagogical family of CBE and mastery learning. It’s also a PBL [problem-based learning] course. Students are given a randomly generated star field and are given a semester-long project to determine the likelihood that intelligent life exists in that star field. There are a number of self-paced adaptive lessons built on the Smart Sparrow platform. Students learn competencies through those lessons, but they are competencies that are necessary to complete the larger project, rather than simply a set of hoops that students need to jump through. In other words, the competency lessons are resources for the students.

In our recent case study on ASU, Lev Horodyskyj shared his experiences helping to design the course. He specifically called out the difficulties they faced when initially attempting this pedagogical approach with a traditional LMS.

Phil Hill: But the team initially found that the traditional technologies on campus were not suited to support this new personalized learning approach.

Lev Horodyskyj: Within a traditional system it was fairly difficult. Traditional learning management systems aren’t really set up to allow a lot of interactivity. They’re more designed to let you do things that you would normally do in a traditional classroom: multiple choice tests; quizzes; turning in papers; uploading, downloading things.

Especially when you’re teaching science, a range of possibilities are viable answers, and oftentimes when we teach science, we’re more interested in what you’re not allowed to do rather than what you’re allowed to do.

Traditional LMS’s don’t allow you to really program in huge parameter spaces that you can work with. They’re basically looking for, “What are the exact correct answers you are allowed to accept?”

I was brought into the picture once Ariel decided that this could be an interesting way to go, and I started playing around with the system. I instantly fell in love with it because it was basically like PowerPoint. I could drop whatever I wanted wherever I wanted, and then wire it up to behave the way I wanted it to behave.

Now, instead of painstakingly programming all the 60 possible answers that a student might write that are acceptable, I can all of a sudden set up a page to take any answer I want and evaluate it in real time. I no longer have to program those 60 answers; I could just say, "Here is the range of answers that are acceptable," and it would work with that.

Phil Hill: And this was the Smart Sparrow system?

Lev Horodyskyj: This was the Smart Sparrow system, correct. It was really eye-opening because it allowed so many more possibilities. It was literally a blank canvas where I could put whatever I wanted.
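Lev's point about parameter spaces is easy to see in code. Here is a minimal sketch (mine, not Smart Sparrow's) contrasting the enumerate-every-acceptable-answer pattern he describes in traditional LMSs with range-based evaluation:

    # Enumerating exact acceptable answers (the traditional-LMS pattern):
    ACCEPTED = {"6.0e23", "6.02e23", "6.022e23"}  # ...and 57 more variants

    def check_enumerated(answer):
        return answer.strip() in ACCEPTED

    # Evaluating against a parameter range (the pattern Lev describes):
    def check_range(answer, low=5.9e23, high=6.1e23):
        try:
            return low <= float(answer) <= high  # any numeric answer in range passes
        except ValueError:
            return False  # non-numeric input is simply wrong, not unanticipated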

This pedagogical approach, supported by appropriate learning platform design, seems to lead to conceptual understanding.

Eric Berkebile: My experiences were very similar. What amazed me the most about it was how the course was centered upon building concepts. It wasn't about hammering in detail. They weren't trying to test you on, "How much can you remember out of what we're feeding you?"

You go through the slides, you go through the different sections, and you are building conceptual knowledge while you are doing it. Once you’ve demonstrated that you can actually apply the concept that they are teaching you, then you can move forward. Until that happens, you’re going to be stuck exactly where you are, and you’re going to have to ask help from other students in the class; you’re going to have to use the resources available.

They want you to learn how to solve problems, they want you to learn how to apply the concepts, and they want you to do it in a way that’s going to work best for you.

Phil Hill: So, it’s multidisciplinary for various disciplines but all held together by project problem-solving around Drake’s equation?

Todd Gilbert: Yeah. One concept really ties it all together, and if you want to answer those questions around that kind of problem, like, “Is there life out there? Are we alone?” you can’t do that with just astronomy, you can’t do that with just biology. It touches everything, from sociology down to physics. Those are very, very different disciplines, so you have to be adaptable.

But I mean if you rise to that kind of a challenge—I can honestly say, this is not hyperbole or anything. It is my favorite class I've taken at this college, and it's a half-semester online course.

Eric Berkebile: By far the best course I’ve taken, and I’ve recommended it to everybody I’ve talked to since.

This approach is not mainstream in the sense that the vast majority of courses are not designed as problem-based learning, so I am not arguing that all LMSs should change accordingly or that Smart Sparrow is a superior product. I do, however, think that this episode gives a concrete example of how the traditional LMS should not be the only platform available in a learning ecosystem and how we will likely see more development of platforms tied to specific pedagogical approaches.


The EDUCAUSE NGDLE and an API of One’s Own

Sun, 2015-06-14 14:04

By Michael Feldstein

I have been meaning for some time to get around to blogging about the EDUCAUSE Learning Initiative’s (ELI’s) paper on a Next-Generation Digital Learning Environment (NGDLE) and Tony Bates’ thoughtful response to it. The core concepts behind the NGDLE are that a next-generation digital learning environment should have the following characteristics:

  • Interoperability and Integration
  • Personalization
  • Analytics, Advising, and Learning Assessment
  • Collaboration
  • Accessibility and Universal Design

The paper also suggests that the system should be modular. They draw heavily on an analogy to LEGOs and make a call for more robust standards. In response, Bates raises three concerns:

  1. He is suspicious of a potentially heavy and bureaucratic standards-making process that is vulnerable to undue corporate influence.
  2. He worries that LEGO is a poor metaphor that suggests an industrialized model.
  3. He is concerned that, taken together, the ELI requirements for an NGDLE will push us further in the direction of computer-driven rather than human-driven classes.

As it happens, ELI's vision for the NGDLE bears a significant resemblance to a vision that some colleagues and I came up with ten years ago, when we were trying to help the SUNY system find an LMS that would fit the needs of all 64 campuses,[1] ranging from small, rural community colleges to R1 universities to medical and optometry schools to a school of fashion. We got pretty deep into thinking about the implementation details, so it's been on my mind to write my own personal perspective on the answers to Tony's questions, based in large part on that previous experience. In the meantime, Jim Groom, who has made a transition from working at a university to working full-time at Reclaim Hosting, has written a series of really provocative and, to me, exciting posts on the future of the digital learning environment from his own perspective. Jim shares the starting assumption of the ELI and SUNY that a learning environment should be "learner-centric," but he has a much more fully developed (and more radical) idea of what that really means, based on his previous work with A Domain of One's Own. He also, in contrast to the ELI and SUNY teams, does not start from the assumption that "next-generation" means evolving the LMS. Rather, the questions he seems to be asking are "What is the minimum amount of technical infrastructure required to create a rich digital learning environment?" and "Of that minimal amount of infrastructure we need, what is the minimal amount that needs to be owned by the institution rather than the learner?" I see these trains of thought emerging in his posts on a university API, a personal API, and a syndication bus. What's exciting to me about these posts is that, even though Jim is starting from a very different set of assumptions, he is also converging on something like the vision we had for SUNY.

In this post, I’m going to try to respond to both Tony and Jim. One of the challenges of this sort of conversation is that the relationship between the technical architecture and the possibilities it creates for the learners is complex. It’s easy to oversimplify or even conflate the two if we’re not very careful. So one of the things that I’m going to try to do here is untangle the technical talk from the functional talk.

I’ll start with Tony Bates’ concerns.

The Unbearable Heaviness of Standards

This is the most industry-talky part of the post, but it’s important for the later stuff. So if talk of Blackboard and Pearson sitting around a technical standards development table turns you off, please bear with me.

Bates writes,

First, this seems to be much too much of a top-down approach to developing technology-based learning environments for my taste. Standards are all very well, but who will set these standards? Just look at the ways standards are set in technology: international committees taking many years, with often powerful lobby groups and ‘rogue’ corporations trying to impose new or different standards.

Is that what we want in education? Or will EDUCAUSE go it alone, with the rest of the world outside the USA scrambling to keep up, or worse, trying to develop alternative standards or systems? (Just watch the European Commission on this one.) Attempts to standardize learning objects through meta-data have not had much success in education, for many good reasons, but EDUCAUSE is planning something much more ambitious than this.

Let me start by acknowledging, as somebody who has been involved in the sausage-making, that the technical standards development process is inherently difficult and fraught and that, because it is designed to produce a compromise that everybody can live with, it rarely produces a specification that anybody is thrilled with. Technical standards-making sucks, and its output often sucks as well. In fact, both process and output generally suck so badly that they collectively raise the question: Why would anyone ever do it? The answer is simple: Standards are usually created when the pain of not having a standard exceeds the pain of creating and living with one.

One of the biggest pains driving technical standards-making in educational technology has been the pain of vendor lock-in. Back in the days when Blackboard owned the LMS market and the LMS product category pretty much was the educational technology market, it was hard to get anyone developing digital learning tools or digital content to integrate with any other platform. Because there were no integration standards, anyone who wanted to integrate with both Blackboard and Moodle would have to develop that integration twice. Add in D2L and Sakai—this was pre-Canvas—and you had four times the effort. This is a problem in any field, but it’s particularly a problem in education because neither students nor courses are widgets. This means that we need a ton of specialized functionality, down to a very fine level. For example, both art historians and oncologists need image annotation tools to teach their classes digitally, but they use those tools very differently and therefore need different features. Ed tech is full of tiny (but important) niches, which means that there are needs for many tools that will make nobody rich. You’re not going to see a startup go to IPO with their wicked good art history image annotation tool. And so, inevitably, the team that develops such a tool will start small and stay small, whether they are building a product for sale, an open source project, or some internal project for a university or for their own classes. Having to develop for multiple platforms is just not feasible for a small team, which means the vast majority of teaching functionality will be available only on the most widely adopted platform. Which, in turn, makes that platform very hard to leave, because you’d also have to give up all those other great niche capabilities developed by third parties.

But there was a chicken-and-egg problem. To Tony's point about the standards process being prone to manipulation, Blackboard had nothing to gain and a lot to lose from interoperability standards back when they dominated the market. They had a lot to gain from talking about standards, but nothing to gain (and a lot to lose) by actually implementing good standards. In those days, the kindest interpretation of their behavior in the IMS (which is the main technical standards body for ed tech) is that standards-making was not a priority for them. A more suspicious mind might suspect that there were times when they actively sabotaged those efforts. And they could, because a standard that wasn't implemented by the platform used by 70% of the market was not one that would be adopted by those small tool makers. They would still have to build at least two integrations—one for Blackboard and one for everyone else. Thankfully, two big changes in the market disrupted this dynamic. First, Blackboard lost its dominance, thanks in part to the backlash among customers against just such anti-competitive behavior. It is no coincidence that then-CEO Michael Chasen chose to retain Ray Henderson, who was known for his long-standing commitment to open standards (and…um…actually caring about customer needs), right at the point when Blackboard backlash was at its worst and the company faced the probability of a mass exodus as they killed off WebCT. Second, content-centric platforms became increasingly sophisticated, with consequently sophisticated needs for integrating other tools. This was driven by the collapse of the textbook publishers' business model and their need to find some other way to justify their existence, but it was a welcome development for standards, both because it brought more players to the table and because the world desperately needed (and still needs) alternative visions to the LMS for a digital learning environment, and the textbook publishers have the muscle to actually implement and drive adoption of their own visions. It doesn't matter so much whether you like those visions or the players who are pushing them (although, honestly, almost anything would be a welcome change from the bento box that was and, to a large degree, still is the traditional LMS experience). What mattered from the standards-making perspective is that there were more players who had something to prove in the market and whose ideas about how niche functionality should integrate with the larger learning experience their platforms afford were not all the same. As a result, we are getting substantially richer and more polished ed tech integration standards more quickly from the IMS than we were getting a decade ago.

Unfortunately, the change in the market only helps with one of the hard problems of technical standards-making in ed tech. Another one, which Bates alludes to with his comment about failed efforts to standardize metadata for learning objects, is finding the right level of abstraction. There are a lot of reasons why learning objects have failed to gain the traction that advocates had hoped, but one good one is that there is no such thing as a learning object. At least, not one that we can define generically. What is it that syllabi, quizzes, individual quiz questions, readings, videos, simulations, week-long collections of all these things (or “modules”), and 15-week collections of these things (or “courses”) have in common? It is tempting to pretend that all of these things are alike in some fundamental way so that we can easily reuse them and build new things with them. You know…like LEGOs. If they were, then it would make sense to have one metadata standard to describe them all, because it would mean that the main challenge of building a new course out of old pieces would be finding the right pieces, and a metadata standard can help with that.

Alas.

Folks who are non-technical tend to think of software as a direct implementation of their functional needs, and their understanding of technical standards flows from that view of the world. As a result, it's easy to overgeneralize the lesson of the learning object metadata standards failures. But the history of computing is one of building up successive layers of abstraction. For example, TCP/IP is a low-level technical standard that enables internet servers to connect to and communicate with each other, whether that communication takes the form of sending email, transferring a file, or looking up the address of a web site. Most of us don't know or care about what sorts of connections TCP/IP allows or doesn't allow. At our level, it is indistinguishable from LEGOs in the sense that we see these pieces fitting together generically and we don't see a need for them to do anything else. But the programmers who built TCP/IP implemented it on top of the C programming language (which was standard in the informal sense before it eventually became a Standard(TM) in the formal sense), which compiled to a number of different machine languages for different computer chips, making those chips more like LEGOs. Then other programmers created HTML and Javascript as abstraction layers on top of TCP/IP, making web pages like LEGOs in the sense that any web server can serve any standards-conformant web page and any browser can read any such web page. From here, higher layers of abstraction get dicier, which is probably why we don't have many higher-level Standards(TM). Instead, we start getting into things called "libraries" and "frameworks". These are bits of code that are re-usable by enough developers that they are worth sharing and adopting, but not so much that they are worth going through the pain of formal standards development or become universal through some other means. And then, of course, there is just a vast amount of development on the web that is individual to the project and cannot be standardized, whether formally or informally. If you try to standardize that which is not standard, chances are that your "standard" will remain pretty non-standard.

So there is a generic danger that if we try to build a standard at the wrong level of abstraction, we will fail. But in education there is also the danger that we will try to build at the wrong level of abstraction and succeed. What I mean by this is we will enshrine a limited or even stunted vision of what kinds of teaching and learning a digital learning environment should support into the fundamental building blocks that we use to create new learning environments and learning experiences.

In What Sense Like LEGOs?

To wit, Bates writes:

A next generation digital learning environment where all the bits fit nicely together seems far too restrictive for the kinds of learning environments we need in the future. What about teaching activities and types of learning that don’t fit so nicely?

We need actually to move away from the standardization of learning environments. We have inherited a largely industrial and highly standardized system of education from the 19th century designed around bricks and mortar, and just as we are able to start breaking away from rigid standardization EDUCAUSE wants to provide a digital educational environment based on standards.

I have much more faith in the ability of learners, and less so but still a faith in teachers and instructors, to be able to combine a wide range of technologies in the ways that they decide makes most sense for teaching and learning than a bunch of computer specialists setting technical standards (even in consultation with educators).

Audrey Watters captured the subtlety of this challenge beautifully in her piece on the history of LEGO Mindstorms:

In some ways, the educational version of Mindstorms faces a similar problem as it struggles to balance imagination with instructions. As the product has become more popular in schools, Lego Education has added new features that make Mindstorms more amenable to the classroom, easier for teachers to use: portfolios, curriculum, data-logging and troubleshooting features for teachers, and so on.

“Little by little, the subversive features of the computer were eroded away. Instead of cutting across and challenging the very idea of subject boundaries, the computer now defined a new subject; instead of changing the emphasis from impersonal curriculum to excited live exploration by students, the computer was now used to reinforce School’s ways. What had started as a subversive instrument of change was neutralized by the system and converted into an instrument of consolidation.” – Seymour Papert, The Children’s Machine

That constructionist element is still there, of course – in Lego the toy and in Lego Mindstorms. Children of all ages continue to build amazing things. Yet as Mindstorms has become a more powerful platform – in terms of its engineering capabilities and its retail and educational success – it has paradoxically perhaps also become a less playful one.

There is a fundamental tension between making something more easily adoptable for a broad audience and making it challenging in the way that education should be challenging, i.e., that it is generative and encourages creativity (a quality that Amy Collier, Jen Ross, and George Veletsianos have started calling "not-yetness"). I don't know about you, but when I was a kid, my LEGO kits didn't look like this:

[Image: a boxed LEGO Star Wars Millennium Falcon kit, with step-by-step instructions for building one specific model.]

If I wanted to build the Millennium Falcon, I would have to figure out how to build it from scratch, which meant I was more likely to decide that it was too hard and that I couldn’t do it. But it also meant I was much more likely to build my own idea of a space ship rather than reproducing George Lucas’ idea. This is a fundamental and inescapable tension of educational technology (as well as the broad reuse or mass production of curricular materials), and it increases exponentially when teachers and administrators and parents are added as stakeholders in the mix of end users. But notice that, even with the real, analog-world LEGO kits, there are layers of abstraction and standardization. Standardizing the pin size on the LEGO blocks is generative because it suggests more possibilities for building new stuff out of the LEGOs. Standardizing the pieces to build one specialized model is reductive because it suggests fewer possibilities for building new stuff out of the LEGOs. To find ed tech interoperability standards that are generative rather than reductive, we need to first find the right level of abstraction.

What Does Your Space Ship Look Like?

This brings us to Tony Bates’ third concern:

I am becoming increasingly disturbed by the tendency of software engineers to force humans to fit technology systems rather than the other way round (try flying with Easyjet or Ryanair for instance). There may be economic reasons to do this in business enterprises, but we need in education, at least, for the technology to empower learners and teachers, rather than restrict their behaviour to fit complex technology systems. The great thing about social media, and the many software applications that result from it, is its flexibility and its ability to be incorporated and adapted to a variety of needs, despite or maybe even because of its lack of common standards.

When I look at EDUCAUSE’s specifications for its ‘NGDLE-conformant standards’, each on its own makes sense, but when combined they become a monster of parts. Do I want teaching decisions influenced by student key strokes or time spent on a particular learning object, for instance? Behind each of these activities will be a growing complexity of algorithms and decision-trees that will take teachers and instructors further way from knowing their individual students and making intuitive and inductive decisions about them. Although humans make many mistakes, they are also able to do things that computers can’t. We need technology to support that kind of behaviour, not try to replace it.

I read two interrelated concerns here. One is that, generically speaking, humans have a tendency to move too far in the direction of standardizing that which should not be standardized, in an effort to achieve scalability or efficiency or one of those other words that would have impressed the steel and railroad magnates of a hundred years ago. This results in systems that are user-unfriendly at best and inhumane at worst. The second, more education-specific concern I'm hearing is that the NGDLE as ELI envisions it would feed the beast that is our cultural mythology that education can and should be largely automated, which is pretty much where you arrive if you follow the road of standardization ad absurdum. So again, it comes down to standardizing the right things at the right levels of abstraction so that the standards are generative rather than reductive.

I’ll give an example of a level of ed tech interoperability that achieves a good level of LEGOicity.[2] Whatever digital learning environment you choose, whether it’s next-generation, this-generation, last-generation, or whatever-generation, there’s a good chance that you are going to want it to have some sense of “class-ness”, by which I mean that you will probably want to define a group of people who are in a class. This isn’t always true, but it is often true. And once you decide that you need that, you then need to specify who is in the class. That means, for every single class section that needs a sense of group, you need to register those users in the new system. If the system supports multiple classes that the students might be in (like an LMS, for example), then you’ll need unique identifiers for the class groups so that the system doesn’t get them mixed up, and you will also need human-readable identifiers (which may or may not be unique) so that the humans don’t get them mixed up and get lost in the system. Depending on the system, you may also want it to know when the class starts and ends, when it meets, who the teacher is, and so on. Again, not all digital learning environments require this information, but many do, including many that work very differently from each other. Furthermore, trying to move this information manually by, for example, asking your students to register themselves and then join a group themselves is…challenging. It makes sense to create a machine-to-machine method for sharing this information (a.k.a. an application programming interface, or API) so that the humans don’t have to do the tedious and error-prone manual work, and it makes sense to have this API be standard so that anybody developing a digital learning environment or learning tool anywhere can write one set of integration code and get this information from the relevant university system that has it, regardless of the particular brand or version of the system that the particular university is using. The IMS actually has two different standards—LIS and LTI—that do subsets of this sort of thing in different ways. Each one is useful for a particular and different set of situations, so it’s rare that you would be in a position of having to pick between the two. In most cases, one will be obviously better for you than the other. The existence and adoption of these standards are generative, because more people can build their own tools, or next-generation digital learning environments, or whatever, and easily make them work well for teachers and students by saving them from that tedious and frustrating registration and group creation workflow.

Notice the level of abstraction we are at. We are not standardizing the learning environment itself. We are standardizing the tools necessary for developers to build a learning environment. But even here, there are layers. Think about your mobile phone. It takes a lot of people with a lot of technical expertise a lot of time to build a mobile phone operating system. It takes a single 12-year-old a day to build a simple mobile phone app. This is one reason why there are only a few mobile phone operating systems, which all tend to be similar, while there are many, many mobile apps that are very different from each other. Up until now, building digital learning environments has been more like building operating systems than like building mobile apps. When my colleagues and I were thinking about SUNY's digital learning environment needs back in 2005, we wanted to create something we called a Learning Management Operating System (LMOS), but not because we thought that either learning management or operating systems were particularly sexy. To the contrary, we wanted to standardize the unsexy but essential foundations upon which a million billion sexy learning apps could be built by others. Try to remember what your smart phone was like before you installed any apps on it. Pretty boring, right? But it was just the right kind of standardized boring stuff that enabled such miracles of modern life as Angry Birds and Instagram. That's what we wanted, but for teaching and learning.

Toward University APIs

Let’s break this down some more. Have you ever seen one of these sorts of prompts on your smart phone?

I bet that you have. This is one of those incredibly unsexy layers of standardization that makes incredibly sexy things happen. It enables my LinkedIn Connected app to know who I just met with and offer to make a connection with them. It lets any new social service I join know who I already know and therefore who I might want to connect with on that service. It lets the taxicab I’m ordering know where to pick me up. It lets my hotel membership apps find the nearest hotel for me. And so on. But there’s something weird going on in this screen grab. Fantastical, which is a calendar app, is asking permission to access my calendar. What’s up with that?

Apple provides a standard Calendar app that is…well…not terribly impressive. But that’s not what this dialog box is referring to. Apple also has an underlying calendaring API and data store, which is confusingly also named Calendar. It is this latter piece of unsexy but essential infrastructure that Fantastical is asking to access. It is also the unsexy piece of infrastructure that makes all the scheduling-related sexiness happen across apps. It’s the lingua franca for scheduling.

Now imagine a similar distinction between a rather unimpressive Discussions app within an LMS and a theoretical Discussions API in an LMOS. Most discussion apps have certain things in common. There are posts by authors. There are subjects and bodies and dates and times to those posts. Sometimes there are attachments. There are replies which form threads. Sometimes those threads branch. Imagine that you have all of that abstracted into an API or service. You could do a lot of things with it. For starters, you could build a different or better discussion board, the way Fantastical has done on top of Apple’s Calendar API. It could be a big thing that has all kinds of cool extra features, or it could be a little thing that, for example, just lets you attach a discussion thread anywhere on any page. Maybe you’re building an art history image annotation app and want to be able to hang a discussion thread off of particular spots on the image. Wouldn’t it be cool if you didn’t have to build all that discussion stuff yourself, but could just focus on the parts that are specific to your app? Maybe you’re not building something that needs a discussion thread at all but rather something that could use the data from the discussions app. Maybe you want to build a “Find a Study Buddy” app, and you want that app to suggest people in your class that you have interacted with frequently in class discussions. Or maybe you’re building an analytics app that looks at how often and how well students are using the class discussions. There’s a lot you could do if this infrastructure were standardized and accessible via an API. An LMOS is really a university API for teaching- and learning-relevant data and functionality, with a set of sample apps built on top of that API.
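Here is a minimal sketch, again in Python, of what such a Discussions service might expose. Every name is hypothetical; the point is only that one shared store can serve a discussion board, an image annotation app, a study-buddy finder, and an analytics tool alike.

    from dataclasses import dataclass
    from datetime import datetime
    from typing import Optional

    @dataclass
    class Post:
        post_id: str
        author_id: str
        subject: str
        body: str
        created: datetime
        parent_id: Optional[str] = None   # replies point at a parent, forming threads

    class DiscussionService:
        """The shared layer that any app can read from and write to."""
        def __init__(self):
            self._posts = {}

        def add_post(self, post):
            self._posts[post.post_id] = post

        def thread(self, root_id):
            """All direct replies under a post, e.g. a thread hung off an image region."""
            return [p for p in self._posts.values() if p.parent_id == root_id]

        def interaction_counts(self):
            """Posts per author: raw material for a 'Find a Study Buddy' or analytics app."""
            counts = {}
            for p in self._posts.values():
                counts[p.author_id] = counts.get(p.author_id, 0) + 1
            return counts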

What’s valuable about this approach is that it can support and enable many different kinds of digital learning environments. If you want to build a super-duper adaptive-personalized-watching-every-click thing, an LMOS should make that easier to do. If you want to build a post-edupunk-open-ed-only-nominally-institutional thing, then an LMOS should make it easier to do that too. You can build whatever you need more quickly and easily, which means that you are more likely to build it. Done right, an LMOS should also support the five attributes that ELI is calling for:

  • Interoperability and Integration
  • Personalization
  • Analytics, Advising, and Learning Assessment
  • Collaboration
  • Accessibility and Universal Design

An LMOS-like infrastructure doesn't require any of these things. It doesn't require you to build analytics, for example. But by making the learning apps programmatically accessible via APIs, it makes analytics feasible if analytics are what you want. It is roughly the right level of abstraction.

It is also roughly where we are headed, at least from a technical perspective. Returning to the earlier question of “at what price standards,” I believe that we have most or all of the essential technical interoperability standards we need to build an LMOS right now. Yes, there are a couple of interesting standards-in-development that may add further value, and yes, we will likely discover further holes that need to be filled here and there, but I think we have all the basic parts that we need. This is in part due to the fact that, with IMS’s new Caliper standard, we have yet another level of abstraction that makes it very flexible. Building on the previous discussion service example, Caliper lets you define a profile for a discussion, which is really just a formalization of all the pieces that you want to share—subject, body, author, time stamp, reply, thread, etc. You can also define a profile for, say, a note-taking app that re-uses the same Caliper infrastructure. If you come up with a new kind of digitally mediated learning interaction in a new app, you can develop a new Caliper profile for it. You might start by developing it just for your own use and then eventually submit it to the IMS for ratification as an official standard when there is enough demand to justify it. This also dramatically reduces the size of the negotiation that has to happen at the standards-making table and therefore improves both speed and quality of the output.
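To give a feel for what a profile is, here is a hypothetical discussion event sketched in Caliper's general actor/action/object style. The field names are illustrative, not quoted from the specification.

    # A hypothetical "discussion profile" event, loosely in Caliper's
    # actor/action/object shape (illustrative, not the normative spec).
    discussion_event = {
        "actor":  "https://university.example.edu/users/12345",
        "action": "Posted",
        "object": {
            "type":    "DiscussionPost",
            "subject": "Week 3: image annotation question",
            "thread":  "https://artapp.example.com/threads/987",
        },
        "eventTime": "2015-06-14T14:04:00Z",
    }

    # A new app defines its own profile by re-using the same envelope:
    notetaking_event = {
        "actor":  "https://university.example.edu/users/12345",
        "action": "Highlighted",
        "object": {"type": "Note", "source": "chapter-2.html"},
        "eventTime": "2015-06-14T14:05:00Z",
    }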

Toward a Personal API

I hope that I have addressed Tony Bates’ concerns, but I’m pretty sure that I haven’t gotten to the core of Jim Groom’s yet. Jim wants students to own their learning infrastructure, content, and identity as much as possible. And by “own,” he means that quite literally. He wants them to have their own web domains where the substantial majority of their digital learning lives resides permanently. To that end, he has started thinking about what he calls a Personal API:

[W]hat if one’s personal domain becomes the space where students can make their own calls to the University API? What if they have a personal API that enables them to decide what they share, with whom, and for how long. For example, what if you had a Portfolio site with a robust API (which was the use case we were discussing) that was installed on student’s personal domain at portfolio.mydomain.com, and enabled them to do a few basic things via API:

  • It called the University API and populated the students classes for that semester.
  • It enabled them to pull in their assignments from a variety of sources (and even version them).
  • it also let them “submit” those assignment to the campus LMS.
  • This would effectively be enabling the instructor to access and provide feedback that the student would now have as metadata on that assignment in their portfolio.
  • It can also track course events, discussions, etc.

This is very consistent with the example I gave in my 2005 blog post about how a student’s personal blog could connect bi-directionally with an LMOS:

Suppose that, in addition to having students publish information into the course, the service broker also let the course publish information out to the student’s personal data store (read “portfolio”). Imagine that for every content item that the student creates and owns in her personal area–blog posts, assignment drafts in her online file storage, etc.–there is also a data store to which courses could publish metadata. For example, the grade book, having recorded a grade and a comment about the student’s blog post, could push that information (along with the post’s URL as an identifier) back out to the student’s data store. Now the student has her professor’s grade and comment (in read-only format, of course), traveling with her long after the system administrator closed and archived the Psych 101 course. She can publish that information to her public e-portfolio, or not, as she pleases.

Fortuitously, this vision is also highly consistent with the fundamental structure that underlies IMS Caliper. Caliper is federated. That is, it assumes that there are going to be different sources of authority for different (but related) types of content, and that there will be different sharing models. So it is very friendly to a world in which students own some data and universities own other data, and it can provide the “facade” necessary for communication between the two worlds. So again, we have roughly the right level of abstraction to be generative rather than reductive. Caliper can support both a highly scaffolded and data-driven adaptive environment and a highly decentralized and extra-institutional environment. And, perhaps best of all, it lets us get to either incrementally by growing an ecosystem piece by piece rather than engineering a massive and monolithic platform.
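
For the technically inclined, here is a rough sketch of what the student’s side of that federation might look like in code. Everything here is hypothetical—the endpoints, the field names, and the University API itself are invented for illustration and do not exist yet:

    # Hypothetical sketch of a student-owned portfolio app talking to a
    # University API. All URLs, endpoints, and fields are invented for
    # illustration; no such API currently exists.
    import requests

    UNIVERSITY_API = "https://api.university.example.edu"
    TOKEN = "credential-issued-to-and-controlled-by-the-student"
    AUTH = {"Authorization": "Bearer " + TOKEN}

    # 1. Pull this semester's classes into the student's own domain.
    classes = requests.get(
        UNIVERSITY_API + "/students/me/classes?term=fall2015",
        headers=AUTH,
    ).json()

    # 2. "Submit" an assignment by handing the LMS a pointer to the
    #    canonical copy, which stays on the student's own domain.
    requests.post(
        UNIVERSITY_API + "/courses/psych101/assignments/essay-1/submissions",
        headers=AUTH,
        json={"url": "https://portfolio.mydomain.com/essays/essay-1"},
    )

    # 3. Later, the gradebook pushes its grade and comment back to the
    #    portfolio as read-only metadata keyed to that same URL, so the
    #    feedback travels with the student after the course is archived.

The key design point is the direction of ownership: the assignment lives at the student’s URL, and the institution’s systems annotate it rather than absorb it.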

Nifty, huh?

Believe it or not, none of this is the hard part. The hard part is the cultural and institutional barriers that prevent people from demanding the change that is very feasible from a technical perspective. But that’s another blog post (or fifty) for another time.

  1. I understand that SUNY has since added a 65th campus
  2. Yes, that is a word.

The post The EDUCAUSE NGDLE and an API of One’s Own appeared first on e-Literate.

Personalized Learning Changes: Effect on instructors and coaches

Fri, 2015-06-12 17:03

By Phil HillMore Posts (332)

Kate Bowles left an interesting comment at my previous post about an ASU episode on e-Literate TV, where I argued that there is a profound change in the instructor role. Her comment:

Phil, I’m interested to know if you found anything out about the pay rates for coaches v TAs. I’m also interested in what coaches were actually paid to do — how the parameters of their employable hours fit what they ended up doing. Academics are rarely encouraged to think of their work in terms of billable increments, because this would sink the ship. But still I’m curious. Did ASU really just hike up their staffing costs in moving to personalised learning, or was there some other cost efficiency here? If the overall increase in students paid off, how did this happen? I’m grappling with how this worked for ASU in budgetary terms, as the pedagogical gain is so clear.

This comment happened to coincide with my participation in WCET’s Leadership Summit on Adaptive Learning, where similar subjects were being discussed. For the purposes of this blog post, we’ll use the “personalized learning” language, which includes use of adaptive software as a subset. Let’s first address the ASU-specific questions.

ASU

The instructor in the ASU episode was Sue McClure, who was kind enough to help answer these questions by email. Sue is a lecturer at ASU Online, which is a full-time salaried position with a teaching load of four courses per semester. Typical loads include 350–400 students across those four courses, and the MAT 110 personalized learning course (using Khan Academy) did not change this ratio. Sue added these observations:

During the Fall Semester of 2014 we offered our first MAT 110 courses using Khan. There was a great deal of work in the planning of the course, managing the work, working with students, hiring and managing the coaches, tracking student progress, and more. Of course, our main responsibility to help our students to be successful in our course overshadowed all of this. The work load during the first semester of our pilot was very much increased compared to previous semesters teaching MAT 110.

By the time that we reached Spring Semester of 2015 we had learned much more about methods that work best for student success, our coaches were more experienced, and our technology to track student progress and work was improved. During the second semester my work load was very much more in line with teaching MAT 110 before the pilot was begun.

The TAs (coaches) also had the same contracts as before the personalized learning approach, but they are paid on an hourly basis. I do not know if they ended up working more hours than expected in this course, but I did already note that there were many more coaches in the new course than is typical. Unfortunately, I cannot answer Kate’s follow-up question about TA / coach hourly pay in more detail, at least for now.

Putting it together, ASU is clearly investing in personalized learning – including investing in instructional resources – rather than trying to find cost efficiencies up front. Adrian Sannier in episode 1 described the “payoff” or goal for ASU.

Adrian Sannier: So, we very much view our mission as helping those students to find their way past the pastiche of holes that they might have and then to be able to realize their potential.

So, take math as an example. Math is, I think, a very easy place for most people to understand because I think almost everybody in the country has math deficits that they’re unaware of because you get a B in third-grade math. What that means is there were a couple of things you didn’t understand. Nobody tells you what those things are—you don’t have a very clear idea—but for the rest of your life, all the things that depend on those things that you missed you will have a rocky understanding of.

So, year over year you accumulate these holes. Then finally, somebody in an admissions exam or on the SAT or the ACT faces you with a comprehensive survey of your math knowledge, and you suddenly realize, “Wow, I’m under-prepared. I might even have gotten pretty good grades, but there are places where I have holes.”

We very much view our mission as trying to figure how it is that we can serve the student body. Even though our standards haven’t changed, our students certainly have because the demographics of the country have changed, the character of the country has changed, and the things we’re preparing students for have changed.

We heard several times in episode 1 that ASU wants to scale the number of students served (with the same standards) without increasing faculty at the same rate, and to do this they need to help more of today’s students succeed in math. The payoff is retention, which is how the budget will work out if they succeed (remember, this is a new program).

WCET Adaptive Learning Summit

The WCET summit allowed for a more generalized response. In one panel moderated by Tom Cavanaugh from University of Central Florida (UCF), panelists were asked about the Return on Investment (ROI) of personalized learning[1]. Some annoying person in the audience[2] further pressed the panel during Q&A time to more directly address the issue raised by Kate. All the panelists view personalized / adaptive learning as an investment, where the human costs in instructors / faculty / TAs / coaches actually go up, at least in early years. They do not see this as cost efficiency, at least for the foreseeable future.

Santa Fe Rainbow

(My photos from inside the conference stunk, so I’ll use a better one from dinner instead.)

David Pinkus from Western Governors University answered that the return was three words: retention, retention, retention. Tom Cavanaugh added that UCF invested in additional staff for their personalized / adaptive learning program, specifically as a method to reduce the “friction” of faculty time investment.

I should point out that e-Literate TV case studies are not exhaustive. As Michael and I described:

We did look for schools that were being thoughtful about what they were trying to do and worked with them cooperatively, so it was not the kind of journalism that was likely to result in an exposé. We went in search of the current state of the art as practiced in real classrooms, whatever that turned out to be and however well it is working.

Furthermore, the panelists at the WCET Summit tended to be from schools that were leading the pack in thoughtful personalized learning implementations. In other words, the perspective I’m sharing in this post is for generally well-run programs that consciously considered student and faculty support as the key drivers.[3] When these programs have developed enough to allow independent reviews of effectiveness, student retention – both within the course and ideally within a program – should be one of the key metrics to evaluate.

Investment vs. Sustainability

There is another side to this coin, however, as pointed out by someone at the WCET Summit[4]. With so many personalized learning programs funded by foundations and even institutional investments above normal operations, there is a question of sustainability. It’s all well and good to demonstrate that a school is investing in new programs, including investments in faculty and TA support, but I do not think that many programs have considered the sustainability of these initiatives. If the TA quoted in the previous blog is accurate, ASU went from 2 to 11 TAs for the MAT 110 course. Essex County College invested $1.2 million in an emporium remedial math program. Even if the payoff is “retention”, will there be enough improvement in retention to justify an ongoing expenditure to support a program? Sustainability should be another key metric as groups evaluate the effectiveness of personalized learning approaches.

  1. specifically adaptive learning
  2. OK, me
  3. There will be programs that do seek to use personalized / adaptive learning as a cost-cutting measure or as primarily technology-driven. But I would be willing to bet that those programs will not succeed in the long run.
  4. I apologize for forgetting who this was.

The post Personalized Learning Changes: Effect on instructors and coaches appeared first on e-Literate.

Instructor Replacement vs. Instructor Role Change

Tue, 2015-06-09 07:53

By Phil HillMore Posts (329)

Two weeks ago I wrote a post about faculty members’ perspective on student-centered pacing within a course. What about the changing role of faculty members – how do their lives change with some of the personalized learning approaches?

In the video below, I spoke with Sue McClure, who teaches a redesigned remedial math course at Arizona State University (ASU) that is based on the use of Khan Academy videos. There are plenty of questions about whether this approach works and is sustainable, but for now let’s just get a first-hand view of how Sue’s role changed in this specific course. You’ll see that it took some prodding to get her to talk about her personal experience, and I did have to reflect back what I was hearing. Note that the “coaches” she described are teaching assistants.

Phil Hill: Let’s get more of a first-hand experience as the instructor for the course. What is a typical week for you as the course is running? What do you do? Who do you interact with?

Sue McClure: I interact by e-mail, and sometimes Google Hangouts, with the coaches and with some of the students. Now, not all of the students are going to contact me about a problem they might have because many of them don’t have any problems, and that’s wonderful. But quite a few of them do have problems either with understanding what they’re supposed to be doing or how to do what they’re supposed to be doing or how to contact somebody about something, and then they’ll send me an e-mail.

Phil Hill: So, as you go through this, it sounds like there’s quite a change in the role of the faculty member from a traditional course, and since you just got involved several months ago in the design and in instructing it, describe for me the difference in that role. What’s changed, and how does it affect you as a professor?

Sue McClure: Before I did this course, the way it’s being done now, I had taught [Math 110] online a few other semesters, and the main difference between those experiences and this experience is that with this experience our students have far more help, far more assistance, far more people willing to step up when they need help with anything to try to make them be successful.

Phil Hill: What about the changes for you personally?

Sue McClure: Partly because I think ASU is growing so much, my class sizes are getting bigger and bigger. That probably would have happened even if we were teaching these the way that we taught them before. That’s one big change—more and more students. So, having these coaches that we have working with us and for us has just been priceless. We couldn’t do it without them.

Phil Hill: It seems your role comes into more of an overseeing the coaches for their direct support of the students. Plus it sounds like you step in to directly talk to students where needed as well.

Sue McClure: Right. I think that explains it very well.

From what Michael and I have seen in the e-Literate TV case studies as well as other on-campus consulting experiences, the debate over adaptive software or personalized learning being used to replace faculty members is a red herring. Faculty replacement does happen in some cases, but that debate masks a more profound issue – how faculty members have to change roles to adapt to a student-centered personalized learning course design. [updated to clarify language]

For this remedial math course, the faculty member’s role changes from one of content delivery to one of oversight, intervention, and coaching. This change is not the same for all disciplines, as we’ll see in upcoming case studies, but it is quite consistent with the experience at Essex County College.

As mentioned by Sue, however, these instructional changes do not just impact faculty members – they also affect teaching assistants. Below is a discussion with some TAs from the same course.

Phil Hill: Beyond the changes to the role of faculty, there are also changes to the role of teaching assistants.

Namitha Ganapa: Basically, in a traditional course there’s one instructor, maybe two TAs, and a class of maybe 175 students. So, it’s pretty hard for the instructor to go to each and every student. Now, we are 11 coaches for Session C. Each coach is having a particular set of students, so it’s much easier to focus on the set of students, and that helps for the progress.

We should stop here and note the investment being made by ASU – moving from 2 TAs to 11 for this course. There are two sides to this coin, however. On one side, not all schools can afford this investment in a new course design and teaching style. On the other side, it is notable that instructional resources are increasing (same number of faculty members, more TAs).

Jacob Cluff: I think, as a coach, it’s a little more involved with the students on a day-to-day basis. Every day I keep track of all the students, their progress, and if they’re struggling on a skill I make a video, send it to them, ask them if they need help understanding it—that sort of thing.

Phil Hill: So, Jacob, it sounds like this is almost an intervention model—that your role is looking at where students are and figuring out where to intervene and prompt them. Is that an accurate statement?

Jacob Cluff: I think that’s a pretty fair statement because most of the students (a lot of students)—they’re fine on their own and don’t really need help at all. They kind of just get off and run. So, I spend most of my time helping the students that actually need help, and I also spend time and encourage students that are doing well at the same time.

Phil Hill: So, Namitha, describe what is the typical week for you, and is it different? Any differences in how you approach the coaching role than from what we’ve heard from Jacob?

Namitha Ganapa: It’s pretty much the same, but my style of teaching is I make notes. I use different colors to highlight the concept, the formula, and how does the matter go. Many of my students prefer notes, so that is how I do it.

Phil Hill: So, there’s sort of a personal style to coaches that’s involved.

This aspect of the changing role of both faculty members and TAs is too often overlooked, and it’s helpful to hear from them first-hand.

The post Instructor Replacement vs. Instructor Role Change appeared first on e-Literate.

Moodle Association: New pay-for-play roadmap input for end users

Mon, 2015-06-08 12:27

By Phil HillMore Posts (329)

As long as we’re on the subject of changes to open source LMS models . . .

Moodle is in the midst of rolling out a fairly significant change to its community: a new not-for-profit entity called the Moodle Association. The idea is to get end users more directly involved in setting the product roadmap, as explained by Martin Dougiamas in this discussion thread and in his recent keynotes (the one below is from early March in Germany).

[After describing new and upcoming features] So that’s the things we have going now, but going back to this – this is the roadmap. Most people agree those things are pretty important right now. That list came from mostly me, getting feedback from many, many, many places. We’ve got the Moots, we’ve got the tracker, we’ve got the community, we’ve got Moodle partners who have many clients (and they collect a lot of feedback from their paying clients). We have all of that, and somehow my job is to synthesize all of that into a roadmap for 30 people to work on. It’s not ideal because there’s a lot, a lot of stuff going on in the community.

So I’m trying to improve that, and one of the things – this is a new thing that we’re starting – is a Moodle Association. And this will be starting in a couple of months, maybe 3 or 4 months. It will be at moodleassociation.org, and it’s a full association. It’s a separate legal organization, and it’s at arm’s length from Moodle [HQ, the private company that develops Moodle Core]. It’s for end users of Moodle to become members, and to work together to decide what the roadmap should be. At least part of the roadmap, because there will be other input, too. A large proportion, I hope, will be driven by the Moodle Association.

They’ll become members, sign up, put money every year into the pot, and then the working groups in there will be created according to what the brainstorming sessions work out, what’s important, create working groups around those important things, work together on what the specifications of that thing should be, and then use the money to pay for that development, to pay us (Moodle HQ), to make that stuff.

It’s our job to train developers, to keep the organization of the coding and review processes, but the Moodle Association is telling us “work on this, work on that”. I think we’ll become a more cohesive community with the community driving a lot of the Moodle future.

I’m very excited about this, and I want to see this be a model of development for open source. Some other projects have something like this thing already, but I think we can do it better.

In the forum, Martin shared two slides on the funding model. The before model:

moodle-model-before

 

The model after:

moodle-model-after

 

One obvious change is that Moodle partners (companies like Blackboard / Moodlerooms, Remote-Learner, etc.) will no longer be the primary input to development of core Moodle. This part is significant, especially as Blackboard became the largest contributing member of Moodle with its acquisition of Moodlerooms in 2012. This situation became more important after Blackboard also bought Remote-Learner UK this year. It’s worth noting that Martin Dougiamas, founder of Moodle, was on the board of Remote-Learner’s parent company in 2014 but not this year.

A less obvious change, however, is that the user community – largely composed of schools and individuals using Moodle for free – has to contend with another pay-for-play source of direction. End users can pay to join the association, and the clear message is that this is the best way to have input. In a slide shown at the recent iMoot conference and shared at MoodleNews, the membership for the association was called out more clearly.

massociation2

What will this change do to the Moodle community? We have already seen the huge changes to the Kuali open source community caused by the creation of KualiCo. While the Moodle Association is not as big of a change, I cannot imagine that it won’t affect the commercial partners.

There are already grumblings from the Moodle end user community (labeled as Moodle.org, as this is where you can download code for free), as indicated by the discussion forum started just a month ago.

I’m interested to note that Moodle.org inhabitants are not a ‘key stakeholder’, but maybe when you say ‘completely separate from these forums and the tracker’ it is understandable. Maybe with the diagram dealing only with the money connection, not the ideas connection, if you want this to ‘work’ then you need to talk to people with $$. ie key = has money.

I’ll be interested how the priorities choice works: do you get your say dependent on how much money you put in?

This to me is the critical issue with the future.

Based on MoodleNews coverage of the iMoot keynote, the answer to this question is that the say is dependent on money.

Additionally, there will be levels of membership based on the amount you contribute. The goal is to embrace as many individuals from the community but also to provide a sliding scale of membership tiers so that larger organizations, like a university, large business, or non-Moodle Partner with vested interested in Moodle, (which previously could only contribute through the Moodle Partner arrangement, if at all) can be members for much larger annual sums (such as AU$10k).

The levels will provide votes based on dollars contributed (potentially on a 1 annual dollar contributed = 1 vote).

This is why I use the phrase “pay-for-play”. And a final thought – why is it so hard to get public information (slides, videos, etc) from the Moodle meetings? The community would benefit from more openness.

Update 6/10: Corrected statement that Martin Dougiamas was on the Remote Learner board in 2014 but not in 2015.

The post Moodle Association: New pay-for-play roadmap input for end users appeared first on e-Literate.

rSmart to Asahi to Scriba: What is happening to major Sakai partner?

Mon, 2015-06-08 11:16

By Phil HillMore Posts (329)

It looks like we have another name and ownership change for one of the major Sakai partners, but this time the changes have a very closed feel to them. rSmart, led by Chris Coppola at the time, was one of the original Sakai commercial affiliates, and the LMS portion of the company was eventually sold to Asahi Net International (ANI) in 2013. ANI had already been involved in the Sakai community as a Japanese partner and also as a partial investor in rSmart, so that acquisition was not seen as a huge change other than setting the stage for KualiCo to acquire the remainder of rSmart.

In late April, however, ANI was acquired by a private equity firm out of Los Angeles (Vert Capital), and this move is different. Vert Capital did not just acquire ANI; they also changed the company name to Scriba and took the company off the grid for now. No news items explaining intentions, no web site, no changes to the Apereo project page, etc. Japanese press coverage of the acquisition mentions the parent company’s desire to focus on the Japanese market.

What is going on?

A rudimentary search for “Scriba education learning management” brings up no news or web sites, but it does bring up a recent project on freelancer.com to create the new company logo. By the way, paying $90 gets 548 entries from 237 freelancers – and adjuncts are underpaid?! The winning logo has a certain “we’re like Moodle, but our hat covers two letters” message that I find quite original.

Furthermore, neither scriba.com nor scriba.org is registered by the company (both are owned by keyword naming companies that pre-purchase domains for later sale). The ANI website mentions nothing about the sale and in fact has had no news since October 2014. The Sakai project page has no update, but the sponsorship page for last week’s Open Apereo conference did have the new logo. This sale has the appearance of a last-minute acquisition under financial distress[1].

Vert Capital is a “private investment firm that provides innovative financing solutions to lower/middle market companies globally”. The managing director who is leading this deal, Adam Levin, has a background in social media and general media companies. Does Vert Capital plan on making further ed tech acquisitions? I wouldn’t be surprised, as ed tech is a fast-changing market and yet more companies are in need of “innovative financing”.

I have asked Apereo for comment, and I will share that or any other updates as I get them. If anyone has more information, feel free to share in the comments or send me a private note.

H/T: Thanks to a reader, who wishes to remain anonymous, for some pointers to public information for this post.

  1. Note, that is conjecture.

The post rSmart to Asahi to Scriba: What is happening to major Sakai partner? appeared first on e-Literate.

Pilots? We don’t need no stinkin’ pilots!

Thu, 2015-06-04 19:33

By Phil HillMore Posts (329)

Timothy Harfield commented on Arizona State University’s approach to pilots and scaling innovation.

.@philonedtech excellent comment on the problem of scaling innovation in #HigherEd. This is a central concern for @UIAinnovation.

— Timothy D. Harfield (@tdharfield) June 4, 2015


The University Innovation Alliance is “a consortium of 11 large public research universities committed to making high-quality college degrees accessible to a diverse body of students”. I wrote about this “central concern” last summer in a post titled “Pilots: Too many ed tech innovations stuck in purgatory”, using the frame of Everett Rogers’ Diffusion of Innovations model. While the trigger for that post was ed tech products, the same situation applies to course design.

5 Stages of Adoption

What we are seeing in ed tech in most cases, I would argue, is that for institutions the new ideas (applications, products, services) are stuck in the Persuasion stage. There is knowledge and application amongst some early adopters in small-scale pilots, but the majority of faculty members either have no knowledge of the pilot or are not persuaded that the idea is to their advantage, and there is little support or structure to get the organization at large (i.e., the majority of faculty at a traditional institution, or perhaps the central academic technology organization) to make a considered decision. It’s important to note that in many cases the innovation should not be spread to the majority, whether due to being a poor solution or due to organizational dynamics based on how the innovation is introduced.

The Purgatory of Pilots

This stuck process ends up as an ed tech purgatory – with promises and potential of the heaven of full institutional adoption with meaningful results to follow, but also with the peril of either never getting out of purgatory or outright rejection over time.

Back to Timothy’s comment. He was specifically commenting on Phil Regier’s interview in the e-Literate TV case study on ASU.

Phil Hill: There are plenty of institutions experimenting with new technology-based pedagogical approaches, but pilots often present a challenge to scale with quality. ASU’s vision, however, centers on scale and access. One observation I’ve seen from what’s happening in the US is there are a lot of pilots, but that never scale to go across a school. You sound confident that you will be scaling.

Philip Regier: We kind of don’t pilot stuff here. When we did the math program, we actually turned it on in August 2012 after all of nine months of preparation working with Knewton. We turned it on, and it applied to every seat in every freshman math course at the university. And there’s a reason for that. My experience—not just mine, but the university’s experience with pilots is that they have a very difficult time getting to scale.

Part of the reason is because, guess what? It doesn’t work the first time. It doesn’t work the first time, maybe not the second. It takes multiple iterations before you understand and are able to succeed. If you start with a pilot and you go a semester or two and it’s, “Hey, this isn’t as good as what we were doing,” you’ll never get to scale.

In our case, the experience with math is a very good example of that because working with a new technology is not a silver bullet. It’s not like we’re going to use this technology, and now all of the grades are going to go up by 15 percent. What you have to do is work with the technology and develop the entire learning ecosystem around it, and that means training faculty.

That’s one approach to the scaling innovation challenge that affects not just the University Innovation Alliance institutions but most schools. This approach also raises some questions. While Phil Regier stated in further comments (not included in the episode) that faculty were fully involved in the decision to implement new programs, are they also fully involved in evaluating whether new programs are working and whether changes are needed? Does this no-pilot approach lead to the continuation of programs that have fatal flaws and should be ended rather than changed?

It is, however, an approach that directly addresses the structural barriers to diffusing the innovations. Based on Phil Regier’s comments, this approach also leads to investment in and professional development of faculty members involved.

The post Pilots? We don’t need no stinkin’ pilots! appeared first on e-Literate.

NYT Michael Crow Condensed Interview: More Info needed . . . and available

Thu, 2015-06-04 09:49

By Phil HillMore Posts (328)

The New York Times ran an “edited and condensed” interview with Arizona State University (ASU) president Michael Crow, titled “Reshaping Arizona State, and the Public Model”.

Michael M. Crow sees Arizona State as the model of a public research university that measures itself by inclusivity, not exclusivity. In his 13 years as its president, he has profoundly reshaped the institution — hiring faculty stars from across the country, starting a bevy of interdisciplinary programs, growing the student body to some 83,000 and using technology to bring his ideas to scale, whether with web-based introductory math classes or eAdvisor, which monitors students’ progress toward their major. Last year, Dr. Crow made headlines when the university partnered with Starbucks to offer students the chance to complete their degree online for free. His new book, written with the historian William B. Dabars, is called, appropriately, “Designing the New American University.”

The problem is that the interview was so condensed that it lost a lot of context. Since Michael and I just released an e-Literate TV case study on ASU, the first episode could serve as a companion to the NYT article by calling out a lot more information from ASU executives about their mission. We would like this information to be useful for others as they decide what they think about this model.

ASU Case Study: Ambitious Approach to Change in R1 University

The post NYT Michael Crow Condensed Interview: More Info needed . . . and available appeared first on e-Literate.

Release of ASU Case Study on e-Literate TV

Mon, 2015-06-01 06:55

By Phil HillMore Posts (327)

Today we are thrilled to release the third case study in our new e-Literate TV series on “personalized learning”. In this series, we examine how that term, which is heavily marketed but poorly defined, is implemented on the ground at a variety of colleges and universities.

We are adding three episodes from Arizona State University (ASU), a school that is frequently in the news. Rather than just talking about the ASU problems, we are talking with the ASU people involved. What problems are they trying to solve? How do students view some of the changes? Are faculty being replaced by technology or are they changing roles? For that matter, how are faculty members involved in designing some of these changes?

You can see all the case studies (either 2 or 3 episodes per case study) at the series link, and you can access individual episodes below.

ASU Case Study: Ambitious Approach to Change in R1 University

ASU Case Study: Rethinking General Education Science for Non-Majors

ASU Case Study: The Changing Role of Faculty and Teaching Assistants

e-Literate TV, owned and run by MindWires Consulting, is funded in part by the Bill & Melinda Gates Foundation. When we first talked about the series with the Gates Foundation, they agreed to give us the editorial independence to report what we find, whether it is good, bad, or indifferent.

As with the previous series, we are working in collaboration with In the Telling, our partners providing the platform and video production. Their Telling Story platform allows people to choose their level of engagement, from just watching the video to accessing synchronized transcripts and accessing transmedia. We have added content directly to the timeline of each video, bringing up further references, like e-Literate blog posts or relevant scholarly articles, in context. With In The Telling’s help, we are crafting episodes that we hope will be appealing and informative to those faculty, presidents, provosts, and other important college and university stakeholders who are not ed tech junkies.

We will release two more case studies over the next month, and we also have two episodes discussing the common themes we observed on the campuses. We welcome your feedback, either in comments or on Twitter using the hashtag #eLiterateTV.

Enjoy!

The post Release of ASU Case Study on e-Literate TV appeared first on e-Literate.

UF Online and Enrollment Warning Signs

Thu, 2015-05-28 19:33

By Phil HillMore Posts (327)

The University of Florida Online (UF Online) program is one of the highest profile online initiatives to be started over the past few years (alongside other public institution programs such as California’s Online Education Initiative, OpenSUNY, Cal State Online, and Georgia Tech / Udacity). UF Online, which I first described in this blog post, is an exclusively-online baccalaureate program leading to a UF degree for lower costs than the traditional on-campus experience.

As part of a new program augmenting UF Online, qualified students that are not admitted to the University of Florida due to space constraints can be accepted to UF Online’s PaCE program, although the Washington Post in April called out that these students had not asked to be part of UF Online.

Some 3,100 students accepted as freshman by the University of Florida for the fall got a big surprise along with their congratulations notices: They were told that the acceptance was contingent on their agreement to spend their first year taking classes online as part of a new program designed to attract more freshmen to the flagship public university.

The 3,118 applicants accepted this way to the university — above and beyond the approximately 12,000 students offered traditional freshman slots — did not apply to the online program. Nor were they told that there was a chance that they would be accepted with the online caveat. They wound up as part of an admissions experiment.

Fast forward to this week’s news from the Gainesville Sun.

Fewer than 10 percent of 3,118 high school students invited to sign up for a new online program after their applications were rejected for regular admission to the University of Florida have accepted the offer.

The 256 students who signed up for the Pathway to Campus Enrollment [PaCE] program will be guaranteed a spot at UF after they complete the minimum requirements: two semesters and at least 15 hours of online course work. [snip]

The PACE program was created as a way to boost the numbers of first-time-in-college students enrolling in UF Online, to provide an alternate path to residential programs, and to populate major areas of study that have been under-enrolled in recent years.

The fact that fewer than 10% of students accepted the offer is not necessarily news, as the campus provost predicted this situation last month (see the Washington Post article). What is more troubling is the hubris exhibited in how UF Online is reacting to enrollment problems. Administrators at the university seem to view UF Online as a mechanism to serve institutional needs and are not focused on meeting student needs. This distorted lens is leading to some poor decision-making that is likely making the enrollment situation worse in the long run. Rather than asking “which students need UF Online and what support do they need”, the institution is asking “what do we need and how can we use UF Online to fill any gaps”.

Let’s step back from PaCE and look at the bigger picture. The following chart shows the targeted enrollment numbers that formed the basis for the UF Online strategic plan, compared to actual and currently estimated enrollment (click to enlarge).

Enrollments vs Plan Spring 2015

As of this term, they are off by ~23% (1,000 students against a target of 1,304), which is not unreasonable for a program that started so quickly. What is troubling, however, is that the targets rise quickly (3,698 next spring, 6,029 the year after) while the actuals have not yet shown significant growth. Note that UF Online is estimating that enrollment will double, from 1,000 to 2,000, for fall 2015 – that is a bold assumption. To make the challenge even more difficult (from a March article in the Gainesville Sun):

That growth in revenue also depends largely on a growing number of out-of-state online students who would pay four to five times higher tuition rates, based on market conditions.

Specifically, the business plan assumes a mix of 43% out-of-state students in UF Online by year 10, yet currently there are only 9% out-of-state students. How realistic is it to attract large numbers of out-of-state students given the increasing options for online programs?
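
To see how much is riding on that 43% assumption, here is a rough back-of-the-envelope calculation. In-state tuition is normalized to 1.0 (a placeholder, not UF’s actual rate), and 4.5x is the midpoint of the “four to five times” multiplier reported by the Sun:

    # Back-of-the-envelope sensitivity of per-student revenue to the
    # out-of-state mix. In-state tuition is normalized to 1.0; the 4.5x
    # multiplier is the midpoint of the reported "four to five times".
    IN_STATE, OUT_OF_STATE = 1.0, 4.5

    def revenue_per_student(out_of_state_share):
        return (1 - out_of_state_share) * IN_STATE + out_of_state_share * OUT_OF_STATE

    current = revenue_per_student(0.09)  # ~1.3x the in-state rate
    planned = revenue_per_student(0.43)  # ~2.5x the in-state rate
    print(round(planned / current, 2))   # ~1.9

In other words, the business plan needs average revenue per student to nearly double relative to today’s mix – on top of the aggressive headcount growth – before the numbers work.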

In the midst of the challenging startup, UF Online had to deal with the premature departure of the initial executive director. After a one-year search process, UF Online chose a new leader who has absolutely no experience in online education.

UF Online is welcoming Evangeline Cummings as its new director, and she has the task of raising the program’s enrollment. [snip]

Cummings starts July 1 with a salary of $185,000. She is currently a director with the U.S. Environmental Protection Agency.

UF spokesman Steve Orlando wrote in an email that she showed skills desirable for the position. “The search committee and the provost were looking for someone with the ability to plan strategically and to manage a large and complex operation,” he said.

At this point, it might have been worth stepping back and challenging some of the original assumptions. Specifically, is UF Online targeting the right students and addressing an unmet need? The plan assumes there are many students who want a U of Florida degree but just can’t get in or want to earn one from out of state. This is different from asking what types of students need an anywhere, anytime online program from an R1 university and then figuring out what to provide in an academic program.

Instead, the administrators came up with the PaCE program as a way to augment enrollment. Which academic majors are allowed under PaCE?

The PACE program was created as a way to boost the numbers of first-time-in-college students enrolling in UF Online, to provide an alternate path to residential programs, and to populate major areas of study that have been under-enrolled in recent years.

The school didn’t ask “what are the majors that students need once they transfer to the residential program”; they asked “how can we use these online students to fill some gaps we already have”. And students who sign the PaCE contract (yes, it is a contractual agreement) cannot change majors even after they move to a campus program.

And while the students are in UF Online:

PACE students can’t live in student dormitories, and their tuition doesn’t cover meals, health services, the recreation center and other student activities because they aren’t paying the fees for those services. They can’t get student tickets to UF cultural and sporting events.

They also can’t ride for free on Regional Transportation Service buses or get student parking passes.

PACE students also will not be able to participate in intercollegiate athletics or try out for the Gator Marching Band. They can use the libraries on campus but can’t check out books.

U of Florida seems to have spent plenty of time figuring out what not to provide these students.

One additional challenge that UF Online will face is student retention. The Instructional Technology Council (ITC) described the problem in this year’s Distance Education report:

Nationally, student retention in online courses tends to be eight percentage points lower than that of face-to-face instruction. Online students need to be self-disciplined to succeed. Many underestimate how much time online coursework requires. Others fall behind or drop out for the same reasons they enrolled in online courses in the first place—they have other responsibilities and life challenges, such as work and/or family, and are too busy to prepare for, or complete, their online coursework.

Yet UF Online is targeting the students who might have the most trouble with online courses. First-time entering freshmen, particularly students who actually want a residential program and might not even understand online programs, are not the students most likely to succeed in a fully-online program. San Jose State University and Udacity learned this lesson the hard way, although they threw MOOCs and remedial math into the mix as well.

UF Online seems to be institutionally-focused rather than student-focused, and the initiative is shaping up to be a case study in hubris. Without major changes in how the program is managed, including the main campus input into decisions, UF Online risks becoming the new poster child of online education failures. I honestly hope they succeed, but the current outlook is not encouraging.

The post UF Online and Enrollment Warning Signs appeared first on e-Literate.

Worth Considering: Faculty perspective on student-centered pacing

Tue, 2015-05-26 11:43

By Phil HillMore Posts (326)

Over the weekend I wrote a post based on the comment thread at Friday’s Chronicle article on e-Literate TV.

One key theme coming through from comments at the Chronicle is what I perceive as an unhealthy cynicism that prevents many people from listening to students and faculty on the front lines (the ones taking redesigned courses) on their own merits.

Sunday’s post highlighted two segments of students describing their experiences with redesigned courses, but we also need to hear directly from faculty. Too often the public discussion of technology-enabled initiatives focuses on the technology itself, often assuming that the faculty involved are bystanders or technophiles. But what about the perspectives of faculty members – you know, those who are in the classrooms working with real students – on what challenges they face and what changes are needed from an educational perspective? There is no single perspective from faculty, but we could learn a great deal through their unique, hands-on experiences.

Consider the specific case of why students might need to work at their own pace.

The first example is from a faculty member at Middlebury College describing the need for a different, more personalized approach for his geographic information system (GIS) course.

Jeff Howarth: And what I would notice is that there would be some students who would like me to go a little bit faster but had to wait and kind of daydream because they were just waiting. And then there were some students that desperately wanted me slow down. Then you get into that kind of slowest-car-on-the-freeway, how-fast-can-you-really-go type of thing. So, I would slow down, which would lose part of the group.

Then there would be some folks that probably would want me to slow down but would never ask because they don’t want to call attention to themselves as being the kind of—the slow car on the freeway.

Michael Feldstein: At this point, Jeff realized that even his small class might not be as personalized as it could be with the support of a little more technology.

Jeff Howarth: What I realized is that, if I just started packaging that instruction, the worked example, I could deliver the same content but allow students to first—if I made videos and posted it on something like YouTube, I was putting out the same content, but students could now watch it at their own pace and in the privacy of being able to go as slow as they need to without the social hang-ups of being considered different.

So, that was really the first step of—I did all of this, and then I told another colleague in languages what I was doing. And he said, “Well, that’s called ‘flipping the classroom.’” And I thought, “OK.” I mean, but that’s not really why. I did it without knowing that I was flipping the classroom, but then that’s how it happened.

Compare this description with an example from an instructor at Essex County College teaching developmental math.

Pamela Rivera: When I was teaching the traditional method, I’ll have students coming in and they didn’t know how to multiply. They didn’t know how to add and subtract. Rarely would those students be able to stay throughout the semester, because after the third—no, even after the second week, everyone else was already in division and they’re still stuck.

And the teacher can’t stop the class and say, “OK, let’s continue with multiplication,” because you have a syllabus to stick to. You have to continue teaching, and so those students will be frustrated, and so they drop the class.

At the same time, you had students who—the first couple of weeks they’ll be extremely bored because they already know all of that. And so, unfortunately, what would happen is eventually you would get to a point in the content that—they don’t know that, but because they have been zoning out for weeks, they don’t get that “OK, now, I actually have to start paying attention.” And so, yes, they should have been able to do that, but they still are not very successful because they were used to not paying attention.

Remarkably Similar Descriptions

Despite coming from two very different settings, the descriptions these faculty members offer of why course designs need to let students control their own pacing are remarkably similar. These isolated examples are not meant to end debate on personalized learning or on what role technology should play (rather, they should encourage debate), but it is very useful to listen to faculty members describe the challenges they face on an educational level.

The post Worth Considering: Faculty perspective on student-centered pacing appeared first on e-Literate.

Worth Considering: Students can have their own perspectives on edtech initiatives

Sun, 2015-05-24 15:06

By Phil HillMore Posts (326)

Triggered by Friday’s article on e-Literate TV, there have been some very interesting conversations both in the Chronicle comment thread and on the e-Literate TV site. The most, um, intense conversations have centered on the application of self-regulated learning (SRL) in combination with adaptive software (ALEKS) to redesign a remedial math course at Essex County College. Michael has been wading in very deep waters in the comment threads, trying to emphasize variations of the following point.

But that debate should be in the context of what’s actually happening in real classrooms with real students, what the educational results are, and what the teachers and students involved think of their experiences.

Right now, the “sides” are having a fight–it’s not really a debate because the sides aren’t really talking to each other–in near total absence of any rational, educator-evaluated, evidence-based conversation about what these approaches are good for. One side says they will “fix” a “broken” education system, while the other side says they will “destroy” the education system. Well, what are the students saying?

One key theme coming through from comments at the Chronicle is what I perceive as an unhealthy cynicism that prevents many people from listening to students and faculty on the front lines (the ones taking redesigned courses) on their own merits. Michael called out this situation in the same comment:

What bothers me is the seemingly complete lack of interest among the commenters in this thread about actually hearing what these teachers and students have to say, and the disregard for the value of their perspectives. It is possible to raise legitimate concerns about techno-solutionism, anti-labor practices, and other serious abuses while simultaneously acknowledging that so-called “personalized learning” approaches can have real educational value when properly applied in the appropriate context by competent and concerned educators and serious students.

One of our primary goals for e-Literate TV is to give additional access to those on the front lines, thus allowing debates and conversations about the role of ed tech and personalized learning approaches. However, it is important to recognize that students can have their own perspectives and are not just robots who are told what to say and do. Consider the following panel discussion with students. To me, the students are quite well-spoken and have real insights.

Sade: A typical day is, like, you basically come in—you go and you log on and you do your ALEKS. You do it at your own pace. Every individual works at their own pace. That’s why I like it. Because some people are ahead, and if you’re in a typical, a regular class, then you have to go with the pace of everybody else. Even if you don’t understand, you have to be—you have to try to catch up. Here, you work at your own pace.

Viviane: It’s been a very good experience for basically the same reasons. Where you just sit and you work and if you can solve 10 problems in one hour, it’s better for you if you keep working at your own pace.

And there’s also—the professor that helps you, or you can even bother one of your classmates and say, “Hey, can you help me out over here with this problem?” or something like that. I mean it’s—I feel as if it’s a very interactive and open classroom.

As per other classes, I don’t think that a regular math class would be able—I mean you wouldn’t be able to sit and ask another classmate for help or anything like that. You would have to just wait for your professor.

Most students we talked to appreciated the self-paced nature of the lab portion (working on computers emporium-style, with faculty roaming the room for one-on-one support), but it is very clear that the technology itself was only one component of the solution. Students are reflecting back that it is the combination of self-paced design and interactive support that is critical to success. Not only that, but note how students value the opportunity for peer support – students helping students. That design element of courses is often overlooked.

In another segment, students explored this concept in more depth with an additional element of ownership of the learning process.

Phil: Most of the students we talked to seem to have internalized the lessons of self-regulated learning and feel empowered to learn.

Sade: It’s really good because, for example, say I’m doing a topic, and I’m stalling. Vivian is faster than I am. I could work by my own pace and then it’s a professor there that I could raise my hand. “Excuse me. I don’t understand this. Could you help me with it?”—because everybody learns at their own pace.

Khalid: Yeah, we are typically just sitting down on the computer screen, but we’re sitting next to our classmates, so if there’s a problem on it, I could ask my classmate. Like, that’s actually the best thing about ALEKS, is that there’s an explain button right there.

We would do well to listen to students more often, judging their input on its own merits.

Update: Fixed first video link.

The post Worth Considering: Students can have their own perspectives on edtech initiatives appeared first on e-Literate.

LMS Observations: You had me until you went nihilist

Wed, 2015-05-20 16:20

By Phil HillMore Posts (325)

Mark Drechsler has a fascinating post in response to my recent “LMS as minivan” post about D2L’s retention claims, mostly playing off of this theme:

I answered another question by saying that the LMS, with multiple billions invested over 17+ years, has not “moved the needle” on improving educational results. I see the value in providing a necessary academic infrastructure that can enable real gains in select programs or with new tools (e.g. adaptive software for remedial math, competency-based education for working adults), but the best the LMS itself can do is get out of the way – do its job quietly, freeing up faculty time, giving students anytime access to course materials and feedback. In aggregate, I have not seen real academic improvements directly tied to the LMS.

In response, Mark gives “a personal view of my own journey towards LMS nihilism” in a post titled “How I lost my faith in the LMS” that has some excellent points (first go read his whole post, I’ll wait).

Mark Nihilist.001

Mark describes how the LMS market in Australia changed dramatically – mostly towards Moodle – due to the Bb / D2L lawsuit, the end-of-life of WebCT, and the release of Moodle 1.9, noting:

There were, I believe, a variety of reasons that Moodle was so successful during this time, but one of the most common things that I would hear during this period was that, compared to incumbent LMS, Moodle simply ‘got out of the way’ and let academic staff do their thing. It helped the LMS stop being a barrier, and moved it closer to being an enabler, which is exactly what it should have been.

During this time Moodle was booming in popularity, and the transitions I was involved in by and large went as well as any other campus-wide technology platform change can, but one big question (and I must send out a thank you my friend and sounding board James Hamilton for planting this seed) was lurking in the background – how do we measure the success of the implementation? How do we know that the LMS in and of itself is making any difference whatsoever?

The answer he comes to is that no, the LMS in and of itself does not change outcomes and that:

The specific LMS that was in use paled into insignificance next to the innovation, dedication and craftiness of the person using it.

Here Mark makes one of the best points I’ve seen in the LMS discussions of late [emphasis added].

In a commodity market, the argument often turns to cost. In the case of the LMS, like any piece of campus-wide technology, the cost of the service in technology terms often pales into insignificance when compared with the cost in terms of time spent (wasted?) by academic and administrative staff being forced to use a system designed to try and satisfy a large set of complex requirements. Perhaps this was one of the most compelling things about Moodle back in its heyday – the perception that it simply ‘got out of the way’ of teachers wanting to do their job – and the significant ‘switching cost’ in terms of managing a large-scale change program that is needed to swap out an LMS was deemed worth it in terms of the longer term reduction of burden on users.

That was then, however, not now.

Where we slightly differ is in the conclusions. The lack of evidence of LMS usage directly impacting academic results does not make the LMS a commodity and is no reason to go nihilist[1].

Two measures of value in a traditional LMS can be thought of as how well it ‘gets out of the way’ and how well it enables apps that can directly affect student learning. From my experience, the various LMS options differ greatly in these two attributes. I have seen campuses where moving to a new LMS produced one so much more intuitive, reliable, and easy to adopt that training resources were diverted away from ‘here’s how to migrate a course and which button to push’ to ‘here are some pedagogical improvements to consider using online tools’. I have seen schools benefit simply from having reliable systems that don’t go down during exams. In other words, an LMS solution can significantly reduce the “cost in terms of time spent by academic and administrative staff”. And by the way, that choice might not always be the same LMS – it depends greatly on course design and pedagogical models.

While I detest most RFP processes, there are examples (typically involving creative compliance with purchasing rules or active support from an enlightened purchasing guru) where the planning process itself leads to increased collaboration among academic and administrative staff. If done well, a vendor selection process can enable greater focus on teaching and learning effectiveness and cross-pollination of ideas.

Update (hit publish too soon): I have also seen situations where an LMS is so painful to use that faculty don’t take advantage of tools that are appropriate or useful. While there is risk in broadly looking at depth of LMS adoption as a net positive, the wrong LMS choice or implementation can prevent faculty or instructional designers from doing what they’d like.

While I might be misreading Mark’s nihilism reference, he makes some great observations based on his personal journey. In the end, however, I do not see the LMS as a commodity.

  1. Mark’s conclusion is “So then, in my mind, while the LMS may not quite yet be considered a commodity in terms of features and functions, it might as well be a commodity in terms of the overall impact it has on student learning outcomes.”

The post LMS Observations: You had me until you went nihilist appeared first on e-Literate.

Miami, Harvard and MIT: Disability discrimination lawsuits focused on schools as content providers

Wed, 2015-05-20 11:52

By Phil HillMore Posts (324)

In the discussions at Google+ based on last week’s post about the Miami University of Ohio disability discrimination lawsuit[1], George Station made two important points that deserve more visibility.

It’s been a-coming for several years now. Cal State has some pretty strong rules in place for compliance with ADA and state-level disability laws. Still, [Universal Design for Learning] UDL is a little-known acronym on any campus you care to visit, and staff support is probably one person in an office, except for Miami of Ohio as of this week, I guess…

Add the recent edX settlement with the US Department of Justice, and the whole direction of edtech changes…

Put another way, it should come as no surprise that the US Department of Justice is ramping up its enforcement of disability discrimination regulations in the education world. Captioning service provider CaptionSync has an excellent summary of the field, written before the DOJ intervention at Miami.

Accessibility laws applicable to higher education have been in place in the United States for decades, but many schools are still not fully compliant with the laws. Part of the lag in compliance can be attributed to lenient enforcement in the early years of these laws; the Rehabilitation Act was enacted in 1973 and the Americans with Disabilities Act was enacted in 1990, but initially there were very few government investigations or enforcement actions. Over time both government agencies (such as the Office for Civil Rights) and advocacy groups (such as the National Federation for the Blind and the National Association for the Deaf) have increasingly been making efforts to enforce the provisions of these laws. Recent civil suits filed by the National Association for the Deaf (NAD) and other advocacy organizations against both Harvard and MIT suggest that now is a good time to take a hard look at your accessibility compliance efforts if you work with video in a college or university setting.

The Department of Justice (DOJ) sent a letter to all college and university presidents on the topic of accessibility for emerging technologies in 2010; it contained a useful summary of various accessibility regulations and how they apply to the education community.

In February, the National Association of the Deaf (NAD) filed suit against Harvard and MIT based on their MOOCs using edX. It is worth noting that the lawsuit is against the schools, not the MOOC provider. In the announcement:

Many videos simply aren’t captioned at all.  For example, a Harvard program on the 50th anniversary of Brown v. Board of Education, a 2013 Harvard Q&A with Bill Gates and a 2013 MIT discussion with MIT professor Noam Chomsky about the leaks attributable to Chelsea (formerly Bradley) Manning all lack closed captions.

“Worse still,” said attorney Timothy Fox, “a sampling of the videos available illustrates the problem with inaccurate captioning, making them confusing and sometimes completely unintelligible.”

The issue is not that there is no capability for captioning, but that those producing the content (Harvard and MIT) either do not provide captions or provide ones with many errors. Subsequently, the DOJ and edX settled out of court based on the following:

5. Following the compliance review, the United States determined that www.edx.org and the Platform were not fully accessible to some individuals with disabilities in violation of Title III of the ADA.

6. EdX disputes the findings set forth above and denies that www.edx.org, its mobile applications, and the Platform are covered by or are in violation of Title III of the ADA.

In the settlement, both parties go out of their way to clarify that edX is a software provider and that the schools are content providers. The DOJ settlement calls on edX to conform, within 18 months, with the Web Content Accessibility Guidelines (“WCAG”) 2.0 AA, published by the Web Accessibility Initiative of the World Wide Web Consortium (“W3C”). More importantly, however, the agreement stipulates that edX provide guidance to content providers (schools) within 90 days.

27. Develop a guide for Content Providers entitled Accessibility Best Practices Guidance for Content Providers (“Accessibility Best Practices Guidance”) and distribute a copy to each Content Provider with instructions for redistribution among individuals involved in producing Course Content. The Accessibility Best Practices Guidance shall describe steps and resources on how Course Content may be made to conform with WCAG 2.0 AA for Participants with disabilities using the CMS and inform Content Providers that the following resources may assist them in producing accessible Course Content: UAAG 1.0, ATAG 2.0, WAI-ARIA, WCAG2ICT, EPUB3, DAISY, and MathML.

The DOJ insists not only that software include capabilities for accommodation of students with disabilities but also that schools actually include the content and related metadata that is required for compliance. It is no longer enough for schools to buy software that is “ADA compliant”. Faculty or instructional designers need to include captions, alt-texts and alternate pathways for students to have equal access.
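To make that concrete, here is a minimal sketch – my own illustration, not anything mandated by the settlement – of the kind of automated pre-publication check a school acting as a content provider could run over its HTML course pages: flag images with no alt text and videos with no caption track. The file name is a hypothetical placeholder.

```python
# A minimal sketch of an accessibility spot-check for HTML course content.
# It flags two of the failures at issue here: <img> tags with no alt text
# and <video> tags with no captions/subtitles track.
from html.parser import HTMLParser

class AccessibilityAudit(HTMLParser):
    def __init__(self):
        super().__init__()
        self.problems = []
        self._in_video = False
        self._video_has_captions = False

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "img" and not (attrs.get("alt") or "").strip():
            self.problems.append(f"<img src={attrs.get('src')!r}> lacks alt text")
        elif tag == "video":
            self._in_video = True
            self._video_has_captions = False
        elif tag == "track" and self._in_video:
            # <track kind="captions"> (or "subtitles") is how HTML5 video
            # attaches timed text for deaf and hard-of-hearing viewers.
            if attrs.get("kind") in ("captions", "subtitles"):
                self._video_has_captions = True

    def handle_endtag(self, tag):
        if tag == "video" and self._in_video:
            if not self._video_has_captions:
                self.problems.append("a <video> element lacks a caption track")
            self._in_video = False

# Hypothetical usage against one course page:
audit = AccessibilityAudit()
with open("week1-lecture.html", encoding="utf-8") as f:
    audit.feed(f.read())
for problem in audit.problems:
    print(problem)
```

A crude check like this falls well short of full WCAG 2.0 AA conformance, which also covers contrast, keyboard access, reading order, and much more, but it catches exactly the kinds of omissions cited in the NAD announcement.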

The Miami U lawsuit and the DOJ intervention originated with blind students rather than deaf students, but the issues are the same. The DOJ repeatedly referred to edtech “as implemented by Miami University”. As noted by reader Brian Richwine, the original lawsuit does reference Sakai (the LMS at the time of the lawsuit), but the focus is still on how the content was provided.

Going back to the CaptionSync blog post:

Some schools have pointed out that in the summer of 2015 the DOJ is expected to release new guidance on how accessibility for websites is to be handled and they are awaiting that guidance before they step up their accessibility efforts.

The big lesson is that higher education institutions themselves had better get ready to understand their role as content providers that must conform to disability standards. Just letting individual faculty members figure out what to do is a recipe for future lawsuits. Faculty need support, guidance and (gasp) appropriate oversight to get this right.

Beyond the regulations and frameworks listed in the DOJ documents, schools should also increase their understanding and use of the UDL framework and guidelines that George referenced in his comments.

The goal of education in the 21st century is not simply the mastery of content knowledge or use of new technologies. It is the mastery of the learning process. Education should help turn novice learners into expert learners—individuals who want to learn, who know how to learn strategically, and who, in their own highly individual and flexible ways, are well prepared for a lifetime of learning. Universal Design for Learning (UDL) helps educators meet this goal by providing a framework for understanding how to create curricula that meets the needs of all learners from the start.

The UDL Guidelines, an articulation of the UDL framework, can assist anyone who plans lessons/units of study or develops curricula (goals, methods, materials, and assessments) to reduce barriers, as well as optimize levels of challenge and support, to meet the needs of all learners from the start. They can also help educators identify the barriers found in existing curricula.

  1. Insert joke here about G+ and its hundreds of active users.

The post Miami, Harvard and MIT: Disability discrimination lawsuits focused on schools as content providers appeared first on e-Literate.

About Those D2L Claims of LMS Usage Increasing Retention Rates

Thu, 2015-05-14 09:43

By Phil HillMore Posts (322)

In my post last week on the IMS Global Consortium conference #LILI15, I suggested that LMS usage in aggregate has not improved academic performance and noted that John Baker from D2L disagreed.

John Baker from D2L disagreed on this subject, and he listed off internal data of 25% or more (I can’t remember detail) improved retention when clients “pick the right LMS”. John clarified after the panel the whole correlation / causation issue, but I’d love to see that data backing up this and other claims.

After the conference I did some checking based on prompts from some helpful readers, and I’m fairly certain that John’s comments referred to Lone Star College – University Park (LSC-UP) and its 24% increase in retention. D2L has been pushing this story recently, first in a blog post and then in a paid webinar hosted by Inside Higher Ed. From the blog post titled “Can an LMS improve retention?” [footnotes and emphasis in original]:

Can an LMS help schools go beyond simply managing learning to actually improving it?

Pioneering institutions like Lone Star College-University Park and Oral Roberts University are using the Brightspace platform to leverage learner performance data in ways that help guide instruction. Now, they’re able to provide students with more personalized opportunities to master content and build self-confidence. The results of their student-centered approach have been nothing short of amazing: For students coming in with zero credits, Lone Star estimates that persistence rates increased 19% between spring 2014 and fall 2014[3] and Oral Roberts University estimates a persistence rate of 75.5% for online programs, which is an all-time high.[4]

Then in the subsequent IHE webinar page [emphasis added]:

The results have been nothing short of amazing. Lone Star has experienced a 19% increase in persistence and Oral Roberts University has achieved a 75.5% persistence rate for online programs—an all-time high. Foundational to these impressive results is Brightspace by D2L—the world’s first Integrated Learning Platform (ILP)— which has moved far beyond the traditional LMS that, for years, has been focused on simply managing learning instead of improving it.

Then from page 68 of the webinar slides, as presented by LSC-UP president Shah Ardalan:

[Slide image: LSC-UP results]

By partnering with D2L, using the nationally acclaimed ECPS, the Bill & Melinda Gates Foundation, and students who want to innovate, LSC-UP increased retention by 24% after the pilot of 2,000 students was complete.

ECPS and the Pilot

For now let’s ignore the difference between 19%, 24% and my mistake on 25%. I’d take any of those results as institutional evidence of (the right) LMS usage “moving the needle” and improving results[1]. This description of ECPS got my attention, so I did some more research on ECPS:

The Education and Career Positioning System is a suite of leading web and mobile applications that allow individuals to own, design, and create their education-to-career choices and pathways. The ability to own, design, and create a personal experience is accomplished by accessing, combining and aggregating lifelong personal info, educational records, career knowledge, and labor statistics …

I also called up the LSC-UP Invitation to Innovate program office to understand the pilot. ECPS is an advising and support system created by LSC-UP, and the pilot was partially funded by the Gates Foundation’s Integrated Planning and Advising Services (IPAS) program. The idea is that students do better by understanding their career choices and academic pathways up front rather than being faced with a broad set of options. LSC-UP integrated ECPS into a required course that all entering freshmen (not transfers) take. Students used ECPS to identify their skills, explore careers, see what these careers would require, etc. While there is no published report, LSC-UP reports an increase in term-to-term persistence of 19+% between Spring 2014 and Fall 2014. Quite interesting and encouraging, and kudos to everyone involved. You can find more background on ECPS here.

In the meantime, Lone Star College (the entire system of 92,000+ students) selected D2L and is now using Brightspace as its LMS; however, the ECPS pilot had little to do with LMS usage. The primary intervention was an advising system and course redesign to focus students on understanding career options and related academic pathways.

The Problem Is Marketing, Not Product

To be fair, what if D2L enabled LSC-UP to do the pilot in the first place by some unique platform or integration capabilities? There are two problems with this possible explanation:

  • ECPS follows IMS standards (LTI), meaning that any major LMS could have integrated with it; and
  • ECPS was not even integrated with D2L during the pilot.

That’s right – D2L is taking a program where there is no evidence that LMS usage was a primary intervention and using the results to market and strongly suggest that using their LMS can “help schools go beyond simply managing learning to actually improving it”. There is no evidence presented[2] of D2L’s LMS being “foundational” – it happened to be the LMS during the pilot that centered on ECPS usage.
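To make the first bullet above concrete: LTI is an open IMS specification, and a basic LTI 1.1 launch is just an OAuth 1.0a-signed form POST from the LMS to the tool. Below is a minimal sketch in Python using the oauthlib package; the endpoint, credentials, and identifiers are hypothetical placeholders. Any LMS holding the tool’s key/secret pair can construct the same launch, which is the point: the integration rides on the standard, not on any one vendor’s platform.

```python
# A minimal sketch of an IMS LTI 1.1 "basic launch", signed with oauthlib.
# The URL, key, secret, and identifiers below are hypothetical placeholders.
from urllib.parse import urlencode
from oauthlib.oauth1 import Client, SIGNATURE_TYPE_BODY

LAUNCH_URL = "https://ecps.example.edu/lti/launch"  # hypothetical tool endpoint
CONSUMER_KEY = "lms-consumer-key"                   # issued by the tool
SHARED_SECRET = "lms-shared-secret"                 # known to LMS and tool

params = {
    "lti_message_type": "basic-lti-launch-request",
    "lti_version": "LTI-1p0",
    "resource_link_id": "entry-course-module-1",
    "user_id": "student-42",
    "roles": "Learner",
}

# With SIGNATURE_TYPE_BODY the oauth_* parameters (including the signature)
# are appended to the form body, which is where LTI 1.1 expects them.
client = Client(CONSUMER_KEY, client_secret=SHARED_SECRET,
                signature_type=SIGNATURE_TYPE_BODY)
uri, headers, body = client.sign(
    LAUNCH_URL,
    http_method="POST",
    body=urlencode(params),
    headers={"Content-Type": "application/x-www-form-urlencoded"},
)
# POSTing `body` to `uri` with these headers completes the launch;
# the tool verifies the signature using the same shared secret.
print(body)
```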

I should be clear that D2L should rightly be proud of their selection as the Lone Star LMS, and from all appearances the usage of D2L is working for the school. At the very least, D2L is not getting in the way of successful pilots. It’s great to see D2L highlight the excellent work by LSC-UP and their ECPS application as they recently did in another D2L blog post extensively quoting Shah Ardalan:

Lone Star College-University Park’s incoming students are now leveraging ECPS to understand their future career path. This broadens the students’ view, allows them to share and discuss with family and friends, and takes their conversation with the academic and career advisors to a whole new level. “Data analytics and this form of ‘intentional advising’ has become part of our culture,” says Ardalan. “Because the students who really need our help aren’t necessarily the ones who call, this empowers them to make better decisions” he adds.

LSC-UP is also planning to start using D2L’s analytics package Insights, and they may eventually get to the point where they can take credit for improving performance.

The problem is in misleading marketing. I say misleading because D2L and LSC-UP never come out and say “D2L usage increased retention”. They achieve their goal through clever marketing: they set up the topic as whether D2L and their LMS can increase performance, and then they share the LSC-UP success story. The reader or listener has to read the fine print or do additional research to understand the details, and most people will not do so.

The higher ed market deserves better.

I Maintain My Position From Conference Panel

After doing this research, I still stand by my statement at the IMS panel and from my blog post.

I answered another question by saying that the LMS, with multiple billions invested over 17+ years, has not “moved the needle” on improving educational results. I see the value in providing a necessary academic infrastructure that can enable real gains in select programs or with new tools (e.g. adaptive software for remedial math, competency-based education for working adults), but the best the LMS itself can do is get out of the way – do its job quietly, freeing up faculty time, giving students anytime access to course materials and feedback. In aggregate, I have not seen real academic improvements directly tied to the LMS.

I’m still open to looking at programs that contradict my view, but the D2L claim from Lone Star doesn’t work.

  1. Although my comments refer to improvements in aggregate, going beyond pilots at individual schools, this claim would nonetheless be impressive.
  2. Evidence is based on blog posts, webinar, and articles as well as interview of LSC-UP staff; if D2L can produce evidence supporting their claim I will share it here.

The post About Those D2L Claims of LMS Usage Increasing Retention Rates appeared first on e-Literate.

Ed Tech World on Notice: Miami U disability discrimination lawsuit could have major effect

Wed, 2015-05-13 11:53

By Phil HillMore Posts (322)

This week the US Department of Justice, citing Title II of the ADA, decided to intervene in a private lawsuit filed against Miami University of Ohio regarding disability discrimination based on ed tech usage. Call this a major escalation, and just ask the for-profit industry how big an effect DOJ intervention can be. From the complaint:

Miami University uses technologies in its curricular and co-curricular programs, services, and activities that are inaccessible to qualified individuals with disabilities, including current and former students who have vision, hearing, or learning disabilities. Miami University has failed to make these technologies accessible to such individuals and has otherwise failed to ensure that individuals with disabilities can interact with Miami University’s websites and access course assignments, textbooks, and other curricular and co-curricular materials on an equal basis with non-disabled students. These failures have deprived current and former students and others with disabilities a full and equal opportunity to participate in and benefit from all of Miami University’s educational opportunities.

The complaint then calls out the assistive technologies that should be supported, including screen readers, Braille displays, audio descriptions, captioning, and keyboard navigation. It specifies that much of the technology and content Miami U uses is incompatible with these assistive technologies.

The complaint is very specific about which platforms and tools are incompatible:

  • The main website www.miamioh.edu
  • Vimeo and YouTube
  • Google Docs
  • TurnItIn
  • LearnSmart
  • WebAssign
  • MyStatLab
  • Vista Higher Learning
  • Sapling

Update: It is worth noting the use of the phrase “as implemented by Miami University” in most of these examples.

Despite the complaint listing the last 6 examples as LMSs, it is notable that the complaint does not call out the school’s previous LMS (Sakai) nor its current LMS (Canvas). Canvas was selected last year to replace Sakai, and I believe both are in use. Does this mean that Sakai and Canvas pass ADA muster? That’s my guess, but I’m not 100% sure.

The complaint is also quite specific about the Miami U services that are at fault. For example:

When Miami University has converted physical books and documents into digital formats for students who require such conversion because of their disabilities, it has repeatedly failed to do so in a timely manner. And Miami University has repeatedly provided these students with digitally-converted materials that are inaccessible when used with assistive technologies. This has made the books and documents either completely unusable, or very difficult to use, for the students with these disabilities.

Miami University has a policy or practice by which it converts physical texts and documents into electronic formats only if students can prove they purchased (rather than borrowed) the physical texts or documents. Miami University will not convert into digital formats any physical texts or documents from its library collections and it will not seek to obtain from other libraries existing copies of digitally-converted materials. This has rendered many of the materials that Miami University provides throughout its library system and which it makes available to its students unavailable to students who require that materials be converted into digital formats because of a disability.

The complaint also specifies the required use of clickers and content within PowerPoint.

This one seems to be a very big deal, both because of the DOJ intervention and because of the specific naming of multiple technologies and services.

Thanks to Jim Julius for alerting me on this one.

.@PhilOnEdTech have you seen the Miami of Ohio accessibility complaint? This is going to generate shock waves. http://t.co/STA6Rw6nrR

— Jim Julius (@jjulius) May 13, 2015

The post Ed Tech World on Notice: Miami U disability discrimination lawsuit could have major effect appeared first on e-Literate.

Worth Reading: Use of adjuncts and one challenge of online education

Mon, 2015-05-11 12:36

By Phil HillMore Posts (321)

There is a fascinating essay today at Inside Higher Ed giving an inside, first-person view of being an adjunct professor.

2015 is my 25th year of adjunct teaching. In the fall I will teach my 500th three-credit college course. I have put in many 14- to 16-hour days, with many 70- to 80-hour weeks. My record is 27 courses in one year, although I could not do that now.

I want to share my thoughts on adjunct teaching. I write anonymously to not jeopardize my precarious positions. How typical is my situation?

The whole essay is worth reading, as it gives a great view into the modern university and the implications of its reliance on adjuncts. But I want to highlight one paragraph in particular that captures the challenge of understanding online education.

I have taught many online courses. We have tapped about 10 percent of the potential of online courses for teaching. But rather than exploring the untapped 90 percent, the college where I taught online wanted to standardize every course with a template designed by tech people with no input from instructors.

I want to design amazing online courses: courses so intriguing and intuitive and so easy to follow no one would ever need a tutorial. I want to design courses that got students eager to explore new things. Let me be clear, I am not talking about gimmicks and entertainment; I am talking about real learning. Is anyone interested in this?

It is naive to frame the debate over online education as solely, or primarily, an issue of faculty resistance. Yes, there are faculty members who are against online education, but one reason for this resistance is a legitimate concern for the quality of courses. What the essay reminds us is that part of the quality issue arises from structural issues within the university and not from the actual potential of well-designed and well-taught online courses.

David Dickens at Google+ had an interesting comment based on the “tech people” reference that points to the other side of the same coin.

As a tech guy I can tell you, we’d love to have the time and tools to work with motivated adjuncts (or anyone else), but often times we have to put out something that will work for everyone, will scale, and will be complete and tested before the end of the week.

It is endlessly frustrating to know that there is so much more that could be done. After all, we tech folks are completely submerged in our personal lives with much more awesome tech than we can include in these sorts of “products” as we are constrained to publish them.

There is an immense difference between A) the quality of online education as typically implemented, B) the quality of well-designed and well-taught online education, and C) the potential of online education. It is a mistake to conflate A), B), and C).

Update: David is on a roll while in discussion with George Station. This clarification builds on the theme of this post.

My point is that IT isn’t the barrier, but rather we are the mask behind which all the real barriers like to hide. We’d love to do more but can’t, and we get put in the position of taking the blows that should be directed towards the underlying issues.

The post Worth Reading: Use of adjuncts and one challenge of online education appeared first on e-Literate.