Michael Feldstein

What We Are Learning About Online Learning...Online

Personalized Learning Changes: Effect on instructors and coaches

Fri, 2015-06-12 17:03

By Phil Hill

Kate Bowles left an interesting comment at my previous post about an ASU episode on e-Literate TV, where I argued that there is a profound change in the instructor role. Her comment:

Phil, I’m interested to know if you found anything out about the pay rates for coaches v TAs. I’m also interested in what coaches were actually paid to do — how the parameters of their employable hours fit what they ended up doing. Academics are rarely encouraged to think of their work in terms of billable increments, because this would sink the ship. But still I’m curious. Did ASU really just hike up their staffing costs in moving to personalised learning, or was there some other cost efficiency here? If the overall increase in students paid off, how did this happen? I’m grappling with how this worked for ASU in budgetary terms, as the pedagogical gain is so clear.

This comment happened to coincide with my participation in WCET’s Leadership Summit on Adaptive Learning, where similar subjects were being discussed. For the purposes of this blog post, we’ll use the “personalized learning” language, which includes use of adaptive software as a subset. Let’s first address the ASU-specific questions.


The instructor in the ASU episode was Sue McClure, who was kind enough to help answer these questions by email. Sue is a lecturer at ASU Online, a full-time salaried position with a teaching load of four courses per semester. Typical loads include 350–400 students across those four courses, and the MAT 110 personalized learning course (using Khan Academy) did not change this ratio. Sue added these observations:

During the Fall Semester of 2014 we offered our first MAT 110 courses using Khan. There was a great deal of work in the planning of the course, managing the work, working with students, hiring and managing the coaches, tracking student progress, and more. Of course, our main responsibility to help our students to be successful in our course overshadowed all of this. The work load during the first semester of our pilot was very much increased compared to previous semesters teaching MAT 110.

By the time that we reached Spring Semester of 2015 we had learned much more about methods that work best for student success, our coaches were more experienced, and our technology to track student progress and work was improved. During the second semester my work load was very much more in line with teaching MAT 110 before the pilot was begun.

The TAs (coaches) also kept the same contracts as before the personalized learning approach, but they are paid on an hourly basis. I do not know whether they ended up working more hours than expected in this course, but I did already note that there were many more coaches in the new course than is typical. Unfortunately, I cannot answer Kate’s follow-up question about TA / coach hourly pay in more detail, at least for now.

Putting it together, ASU is clearly investing in personalized learning – including investing in instructional resources – rather than trying to find cost efficiencies up front. Adrian Sannier in episode 1 described the “payoff” or goal for ASU.

Adrian Sannier: So, we very much view our mission as helping those students to find their way past the pastiche of holes that they might have and then to be able to realize their potential.

So, take math as an example. Math is, I think, a very easy place for most people to understand because I think almost everybody in the country has math deficits that they’re unaware of because you get a B in third-grade math. What that means is there were a couple of things you didn’t understand. Nobody tells you what those things are—you don’t have a very clear idea—but for the rest of your life, all the things that depend on those things that you missed you will have a rocky understanding of.

So, year over year you accumulate these holes. Then finally, somebody in an admissions exam or on the SAT or the ACT faces you with a comprehensive survey of your math knowledge, and you suddenly realize, “Wow, I’m under-prepared. I might even have gotten pretty good grades, but there are places where I have holes.”

We very much view our mission as trying to figure how it is that we can serve the student body. Even though our standards haven’t changed, our students certainly have because the demographics of the country have changed, the character of the country has changed, and the things we’re preparing students for have changed.

We heard several times in episode 1 that ASU wants to scale the number of students served (with same standards) without increasing faculty at the same rate, and to do this they need to help more of today’s students succeed in math. The payoff is retention, which is how the budget will work if they succeed (remember this is a new program).

WCET Adaptive Learning Summit

The WCET summit allowed for a more generalized response. In one panel moderated by Tom Cavanaugh from University of Central Florida (UCF), panelists were asked about the Return on Investment (ROI) of personalized learning[1]. Some annoying person in the audience[2] further pressed the panel during Q&A time to more directly address the issue raised by Kate. All the panelists view personalized / adaptive learning as an investment, where the human costs in instructors / faculty / TAs / coaches actually go up, at least in early years. They do not see this as cost efficiency, at least for the foreseeable future.

Santa Fe Rainbow

(My photos from inside the conference stunk, so I’ll use a better one from dinner instead.)

David Pinkus from Western Governors University answered that the return was three words: retention, retention, retention. Tom Cavanaugh added that UCF invested in additional staff for their personalized / adaptive learning program, specifically as a method to reduce the “friction” of faculty time investment.

I should point out that e-Literate TV case studies are not exhaustive. As Michael and I described:

We did look for schools that were being thoughtful about what they were trying to do and worked with them cooperatively, so it was not the kind of journalism that was likely to result in an exposé. We went in search of the current state of the art as practiced in real classrooms, whatever that turned out to be and however well it is working.

Furthermore, the panelists at the WCET Summit tended to be from schools that were leading the pack in thoughtful personalized learning implementations. In other words, the perspective I’m sharing in this post is for generally well-run programs that consciously considered student and faculty support as the key drivers.[3] When these programs have developed enough to allow independent reviews of effectiveness, student retention – both with the course and ideally within a program – should be one of the key metrics to evaluate.

Investment vs. Sustainability

There is another side to this coin, however, as pointed out by someone at the WCET Summit[4]. With so many personalized learning programs funded by foundations and even institutional investments above normal operations, there is a question of sustainability. It’s all well and good to demonstrate that a school is investing in new programs, including investments in faculty and TA support, but I do not think that many programs have considered the sustainability of these initiatives. If the TA quoted in the previous blog is accurate, ASU went from 2 to 11 TAs for the MAT 110 course. Essex County College invested $1.2 million in an emporium remedial math program. Even if the payoff is “retention”, will there be enough improvement in retention to justify an ongoing expenditure to support a program? Sustainability should be another key metric as groups evaluate the effectiveness of personalized learning approaches.

  1. specifically adaptive learning
  2. OK, me
  3. There will be programs that do seek to use personalized / adaptive learning as a cost-cutting measure or as primarily technology-driven. But I would be willing to bet that those programs will not succeed in the long run.
  4. I apologize for forgetting who this was.

The post Personalized Learning Changes: Effect on instructors and coaches appeared first on e-Literate.

Instructor Replacement vs. Instructor Role Change

Tue, 2015-06-09 07:53

By Phil HillMore Posts (329)

Two weeks ago I wrote a post about faculty members’ perspective on student-centered pacing within a course. What about the changing role of faculty members – how do their lives change with some of the personalized learning approaches?

In the video below, I spoke with Sue McClure, who teaches a redesigned remedial math course at Arizona State University (ASU) that is based on the use of Khan Academy videos. There are plenty of questions about whether this approach works and is sustainable, but for now let’s just get a first-hand view of how Sue’s role changed in this specific course. You’ll see that it took some prodding to get her to talk about her personal experience, and I did have to reflect back what I was hearing. Note that the “coaches” she described are teaching assistants.

Phil Hill: Let’s get more of a first-hand experience as the instructor for the course. What is a typical week for you as the course is running? What do you do? Who do you interact with?

Sue McClure:I interact by e-mail, and sometimes Google Hangouts, with the coaches and with some of the students. Now, not all of the students are going to contact me about a problem they might have because many of them don’t have any problems, and that’s wonderful. But quite a few of them do have problems either with understanding what they’re supposed to be doing or how to do what they’re supposed to be doing or how to contact somebody about something, and then they’ll send me an e-mail.

Phil Hill: So, as you go through this, it sounds like there’s quite a change in the role of the faculty member from a traditional course, and since you just got involved several months ago in the design and in instructing it, describe for me the difference in that role. What’s changed, and how does it affect you as a professor?

Sue McClure: Before I did this course, the way it’s being done now, I had taught [Math 110] online a few other semesters, and the main difference between those experiences and this experience is that with this experience our students have far more help, far more assistance, far more people willing to step up when they need help with anything to try to make them be successful.

Phil Hill: What about the changes for you personally?

Sue McClure: Partly because I think ASU is growing so much, my class sizes are getting bigger and bigger. That probably would have happened even if we were teaching these the way that we taught them before. That’s one big change—more and more students. So, having these coaches that we have working with us and for us has just been priceless. We couldn’t do it without them.

Phil Hill: It seems your role becomes more one of overseeing the coaches in their direct support of the students. Plus it sounds like you step in to talk directly to students where needed as well.

Sue McClure: Right. I think that explains it very well.

From what Michael and I have seen in the e-Literate TV case studies as well as other on-campus consulting experiences, the debate over adaptive software or personalized learning being used to replace faculty members is a red herring. Faculty replacement does happen in some cases, but that debate masks a more profound issue – how faculty members have to change roles to adapt to a student-centered personalized learning course design. [updated to clarify language]

For this remedial math course, the faculty member changes from one of content delivery to one of oversight, intervention, and coaching. This change is not the same for all disciplines, as we’ll see in upcoming case studies, but it is quite consistent with the experience at Essex County College.

As mentioned by Sue, however, these instructional changes do not just impact faculty members – they also affect teaching assistants. Below is a discussion with some TAs from the same course.

Phil Hill: Beyond the changes to the role of faculty, there are also changes to the role of teaching assistants.

Namitha Ganapa: Basically, in a traditional course there’s one instructor, maybe two TAs, and a class of maybe 175 students. So, it’s pretty hard for the instructor to go to each and every student. Now, we are 11 coaches for Session C. Each coach is having a particular set of students, so it’s much easier to focus on the set of students, and that helps for the progress.

We should stop here and note the investment being made by ASU – moving from 2 TAs to 11 for this course. There are two sides to this coin, however. On one side, not all schools can afford this investment in a new course design and teaching style. On the other side, it is notable that instructional staffing is increasing (same number of faculty members, more TAs), not decreasing.

Jacob Cluff: I think, as a coach, it’s a little more involved with the students on a day-to-day basis. Every day I keep track of all the students, their progress, and if they’re struggling on a skill I make a video, send it to them, ask them if they need help understanding it—that sort of thing.

Phil Hill: So, Jacob, it sounds like this is almost an intervention model—that your role is looking at where students are and figuring out where to intervene and prompt them. Is that an accurate statement?

Jacob Cluff: I think that’s a pretty fair statement because most of the students (a lot of students)—they’re fine on their own and don’t really need help at all. They kind of just get off and run. So, I spend most of my time helping the students that actually need help, and I also spend time and encourage students that are doing well at the same time.

Phil Hill: So, Namitha, describe what is the typical week for you, and is it different? Any differences in how you approach the coaching role than from what we’ve heard from Jacob?

Namitha Ganapa: It’s pretty much the same, but my style of teaching is I make notes. I use different colors to highlight the concept, the formula, and how does the matter go. Many of my students prefer notes, so that is how I do it.

Phil Hill: So, there’s sort of a personal style to coaches that’s involved.

This aspect of the changing role of both faculty members and TAs is too often overlooked, and it’s helpful to hear from them first-hand.

The post Instructor Replacement vs. Instructor Role Change appeared first on e-Literate.

Moodle Association: New pay-for-play roadmap input for end users

Mon, 2015-06-08 12:27

By Phil Hill

As long as we’re on the subject of changes to open source LMS models . . .

Moodle is in the midst of releasing a fairly significant change to the community with a new not-for-profit entity called the Moodle Association. The idea is to get end users more directly involved in setting the product roadmap, as explained by Martin Dougiamas in this discussion thread and in his recent keynotes (the one below from early March in Germany).

[After describing new and upcoming features] So that’s the things we have going now, but going back to this – this is the roadmap. Most people agree those things are pretty important right now. That list came from mostly me, getting feedback from many, many, many places. We’ve got the Moots, we’ve got the tracker, we’ve got the community, we’ve got Moodle partners who have many clients (and they collect a lot of feedback from their paying clients). We have all of that, and somehow my job is to synthesize all of that into a roadmap for 30 people to work on. It’s not ideal because there’s a lot, a lot of stuff going on in the community.

So I’m trying to improve that, and one of the things – this is a new thing that we’re starting – is a Moodle Association. And this will be starting in a couple of months, maybe 3 or 4 months. It will be a full association. It’s a separate legal organization, and it’s at arm’s length from Moodle [HQ, the private company that develops Moodle Core]. It’s for end users of Moodle to become members, and to work together to decide what the roadmap should be. At least part of the roadmap, because there will be other input, too. A large proportion, I hope, will be driven by the Moodle Association.

They’ll become members, sign up, put money every year into the pot, and then the working groups in there will be created according to what the brainstorming sessions work out, what’s important, create working groups around those important things, work together on what the specifications of that thing should be, and then use the money to pay for that development, to pay us (Moodle HQ), to make that stuff.

It’s our job to train developers, to keep the organization of the coding and review processes, but the Moodle Association is telling us “work on this, work on that”. I think we’ll become a more cohesive community with the community driving a lot of the Moodle future.

I’m very excited about this, and I want to see this be a model of development for open source. Some other projects have something like this thing already, but I think we can do it better.

In the forum, Martin shared two slides on the funding model, showing the model before and after the change.

One obvious change is that Moodle partners (companies like Blackboard / Moodlerooms, RemoteLearner, etc.) will no longer be the primary input to development of core Moodle. This part is significant, especially as Blackboard became the largest contributing member of Moodle with its acquisition of Moodlerooms in 2012. The situation became more important after Blackboard also bought Remote-Learner UK this year. It’s worth noting that Martin Dougiamas, founder of Moodle, was on the board of Remote-Learner’s parent company in 2014 but not this year.

A less obvious change, however, is that the user community – largely composed of schools and individuals using Moodle for free – has to contend with another pay-for-play source of direction. End users can pay to join the association, and the clear message is that this is the best way to have input. In a slide shown at the recent iMoot conference and shared at MoodleNews, the membership for the association was called out more clearly.


What will this change do to the Moodle community? We have already seen the huge changes to the Kuali open source community caused by the creation of KualiCo. While the Moodle Association is not as big of a change, I cannot imagine that it won’t affect the commercial partners.

There are already grumblings from the Moodle end user community – the users who download and run the code for free – as indicated by a discussion forum thread started just a month ago.

I’m interested to note that inhabitants are not a ‘key stakeholder’, but maybe when you say ‘completely separate from these forums and the tracker’ it is understandable. Maybe with the diagram dealing only with the money connection, not the ideas connection, if you want this to ‘work’ then you need to talk to people with $$. ie key = has money.

I’ll be interested how the priorities choice works: do you get your say dependent on how much money you put in?

This to me is the critical issue with the future.

Based on MoodleNews coverage of the iMoot keynote, the answer to this question is that the say is dependent on money.

Additionally, there will be levels of membership based on the amount you contribute. The goal is to embrace as many individuals from the community but also to provide a sliding scale of membership tiers so that larger organizations, like a university, large business, or non-Moodle Partner with vested interest in Moodle (which previously could only contribute through the Moodle Partner arrangement, if at all), can be members for much larger annual sums (such as AU$10k).

The levels will provide votes based on dollars contributed (potentially on a 1 annual dollar contributed = 1 vote).

This is why I use the phrase “pay-for-play”. And a final thought – why is it so hard to get public information (slides, videos, etc) from the Moodle meetings? The community would benefit from more openness.

Update 6/10: Corrected statement that Martin Dougiamas was on the Remote Learner board in 2014 but not in 2015.

The post Moodle Association: New pay-for-play roadmap input for end users appeared first on e-Literate.

rSmart to Asahi to Scriba: What is happening to major Sakai partner?

Mon, 2015-06-08 11:16

By Phil Hill

It looks like we have another name and ownership change for one of the major Sakai partners, but this time the changes have a very closed feel to them. rSmart, led by Chris Coppola at the time, was one of the original Sakai commercial affiliates, and the LMS portion of the company was eventually sold to Asahi Net International (ANI) in 2013. ANI had already been involved in the Sakai community as a Japanese partner and as a partial investor in rSmart, so that acquisition was not seen as a huge change other than setting the stage for KualiCo to acquire the remainder of rSmart.

In late April, however, ANI was acquired by a private equity firm out of Los Angeles (Vert Capital), and this move is different. Vert Capital did not just acquire ANI; they also changed the company name to Scriba and took the company off the grid for now. No news items explaining intentions, no web site, no changes to Apereo project page, etc. Japanese press coverage of the acquisition mentions the parent company’s desire to focus on the Japanese market.

What is going on?

A rudimentary search for “Scriba education learning management” brings up no news or web sites, but it does bring up a recent crowdsourced project to create the new company logo. By the way, paying $90 gets 548 entries from 237 freelancers – and adjuncts are underpaid?! The winning logo has a certain “we’re like Moodle, but our hat covers two letters” message that I find quite original.

Furthermore, the obvious Scriba domain names are not registered by the company (they are owned by keyword naming companies that pre-purchase domains for later sale). The ANI website mentions nothing about the sale and in fact has had no news since October 2014. The Sakai project page has no update, but the sponsorship page for last week’s Open Apereo conference did have the new logo. This sale has the appearance of a last-minute acquisition under financial distress[1].

Vert Capital is a “private investment firm that provides innovative financing solutions to lower/middle market companies globally”. The managing director leading this deal, Adam Levin, has a background in social media and general media companies. Does Vert Capital plan on making further ed tech acquisitions? I wouldn’t be surprised, as ed tech is a fast-changing market and more companies are in need of “innovative financing”.

I have asked Apereo for comment, and I will share that or any other updates as I get them. If anyone has more information, feel free to share in the comments or send me a private note.

H/T: Thanks to a reader who wishes to remain anonymous for some pointers to public information for this post.

  1. Note, that is conjecture.

The post rSmart to Asahi to Scriba: What is happening to major Sakai partner? appeared first on e-Literate.

Pilots? We don’t need no stinkin’ pilots!

Thu, 2015-06-04 19:33

By Phil Hill

Timothy Harfield commented on Arizona State University’s approach to pilots and scaling innovation.

.@philonedtech excellent comment on the problem of scaling innovation in #HigherEd. This is a central concern for @UIAinnovation.

— Timothy D. Harfield (@tdharfield) June 4, 2015


The University Innovation Alliance is “a consortium of 11 large public research universities committed to making high-quality college degrees accessible to a diverse body of students”. I wrote about this “central concern” last summer in a post titled “Pilots: Too many ed tech innovations stuck in purgatory”, using the frame of Everett Rogers’ Diffusion of Innovations model. While the trigger for that post was ed tech products, the same situation applies to course design.

5 Stages of Adoption

What we are seeing in ed tech in most cases, I would argue, is that for institutions the new ideas (applications, products, services) are stuck at the Persuasion stage. There is knowledge and application among some early adopters in small-scale pilots, but the majority of faculty members either have no knowledge of the pilot or are not persuaded that the idea is to their advantage, and there is little support or structure to get the organization at large (i.e., the majority of faculty for a traditional institution, or perhaps the central academic technology organization) to make a considered decision. It’s important to note that in many cases the innovation should not be spread to the majority, whether because it is a poor solution or because of organizational dynamics arising from how the innovation was introduced.

The Purgatory of Pilots

This stuck process ends up as an ed tech purgatory – with promises and potential of the heaven of full institutional adoption with meaningful results to follow, but also with the peril of either never getting out of purgatory or outright rejection over time.

Back to Timothy’s comment. He was specifically commenting on Phil Regier’s interview in the e-Literate TV case study on ASU.

Phil Hill: There are plenty of institutions experimenting with new technology-based pedagogical approaches, but pilots often present a challenge to scale with quality. ASU’s vision, however, centers on scale and access. One observation I’ve made from what’s happening in the US is that there are a lot of pilots that never scale to go across a school. You sound confident that you will be scaling.

Philip Regier: We kind of don’t pilot stuff here. When we did the math program, we actually turned it on in August 2012 after all of nine months of preparation working with Knewton. We turned it on, and it applied to every seat in every freshman math course at the university. And there’s a reason for that. My experience—not just mine, but the university’s experience with pilots is that they have a very difficult time getting to scale.

Part of the reason is because, guess what? It doesn’t work the first time. It doesn’t work the first time, maybe not the second. It takes multiple iterations before you understand and are able to succeed. If you start with a pilot and you go a semester or two and it’s, “Hey, this isn’t as good as what we were doing,” you’ll never get to scale.

In our case, the experience with math is a very good example of that because working with a new technology is not a silver bullet. It’s not like we’re going to use this technology, and now all of the grades are going to go up by 15 percent. What you have to do is work with the technology and develop the entire learning ecosystem around it, and that means training faculty.

That’s one approach to the scaling innovation challenge that affects not just the University Innovation Alliance institutions but most schools. This approach also raises some questions. While Phil Regier stated in further comments not included in the episode that faculty were fully involved in the decision to implement new programs, are they also fully involved in evaluating whether new programs are working and whether changes are needed? Does this no-pilot approach lead to the continuation of programs that have fatal flaws and should be ended rather than changed?

It is, however, an approach that directly addresses the structural barriers to diffusing the innovations. Based on Phil Regier’s comments, this approach also leads to investment in and professional development of faculty members involved.

The post Pilots? We don’t need no stinkin’ pilots! appeared first on e-Literate.

NYT Michael Crow Condensed Interview: More Info needed . . . and available

Thu, 2015-06-04 09:49

By Phil Hill

The New York Times ran an “edited and condensed” interview with Arizona State University (ASU) president Michael Crow, titled “Reshaping Arizona State, and the Public Model”.

Michael M. Crow sees Arizona State as the model of a public research university that measures itself by inclusivity, not exclusivity. In his 13 years as its president, he has profoundly reshaped the institution — hiring faculty stars from across the country, starting a bevy of interdisciplinary programs, growing the student body to some 83,000 and using technology to bring his ideas to scale, whether with web-based introductory math classes or eAdvisor, which monitors students’ progress toward their major. Last year, Dr. Crow made headlines when the university partnered with Starbucks to offer students the chance to complete their degree online for free. His new book, written with the historian William B. Dabars, is called, appropriately, “Designing the New American University.”

The problem is that the interview was so condensed that it lost a lot of context. Since Michael and I just released an e-Literate TV case study on ASU, the first episode could serve as a companion to the NYT article by calling out a lot more information from ASU executives about their mission. We would like this information to be useful for others deciding what they think about this model.

ASU Case Study: Ambitious Approach to Change in R1 University

The post NYT Michael Crow Condensed Interview: More Info needed . . . and available appeared first on e-Literate.

Release of ASU Case Study on e-Literate TV

Mon, 2015-06-01 06:55

By Phil Hill

Today we are thrilled to release the third case study in our new e-Literate TV series on “personalized learning”. In this series, we examine how that term, which is heavily marketed but poorly defined, is implemented on the ground at a variety of colleges and universities.

We are adding three episodes from Arizona State University (ASU), a school that is frequently in the news. Rather than just talking about ASU, we are talking with the ASU people involved. What problems are they trying to solve? How do students view some of the changes? Are faculty being replaced by technology, or are they changing roles? For that matter, how are faculty members involved in designing some of these changes?

You can see all the case studies (either two or three episodes each) at the series link, and you can access the individual episodes below.

ASU Case Study: Ambitious Approach to Change in R1 University

ASU Case Study: Rethinking General Education Science for Non-Majors

ASU Case Study: The Changing Role of Faculty and Teaching Assistants

e-Literate TV, owned and run by MindWires Consulting, is funded in part by the Bill & Melinda Gates Foundation. When we first talked about the series with the Gates Foundation, they agreed to give us the editorial independence to report what we find, whether it is good, bad, or indifferent.

As with the previous series, we are working in collaboration with In the Telling, our partners providing the platform and video production. Their Telling Story platform allows people to choose their level of engagement, from just watching the video to accessing synchronized transcripts and transmedia. We have added content directly to the timeline of each video, bringing up further references, like e-Literate blog posts or relevant scholarly articles, in context. With In The Telling’s help, we are crafting episodes that we hope will be appealing and informative to those faculty, presidents, provosts, and other important college and university stakeholders who are not ed tech junkies.

We will release two more case studies over the next month, and we also have two episodes discussing the common themes we observed on the campuses. We welcome your feedback, either in comments or on Twitter using the hashtag #eLiterateTV.


The post Release of ASU Case Study on e-Literate TV appeared first on e-Literate.

UF Online and Enrollment Warning Signs

Thu, 2015-05-28 19:33

By Phil HillMore Posts (327)

The University of Florida Online (UF Online) program is one of the highest-profile online initiatives launched in the past few years (alongside other public institution programs such as California’s Online Education Initiative, OpenSUNY, Cal State Online, and Georgia Tech / Udacity). UF Online, which I first described in this blog post, is an exclusively online baccalaureate program leading to a UF degree at a lower cost than the traditional on-campus experience.

As part of a new program augmenting UF Online, qualified students who are not admitted to the University of Florida due to space constraints can be accepted to UF Online’s PaCE program, although the Washington Post noted in April that these students had not asked to be part of UF Online.

Some 3,100 students accepted as freshman by the University of Florida for the fall got a big surprise along with their congratulations notices: They were told that the acceptance was contingent on their agreement to spend their first year taking classes online as part of a new program designed to attract more freshmen to the flagship public university.

The 3,118 applicants accepted this way to the university — above and beyond the approximately 12,000 students offered traditional freshman slots — did not apply to the online program. Nor were they told that there was a chance that they would be accepted with the online caveat. They wound up as part of an admissions experiment.

Fast forward to this week’s news from the Gainesville Sun.

Fewer than 10 percent of 3,118 high school students invited to sign up for a new online program after their applications were rejected for regular admission to the University of Florida have accepted the offer.

The 256 students who signed up for the Pathway to Campus Enrollment [PaCE] program will be guaranteed a spot at UF after they complete the minimum requirements: two semesters and at least 15 hours of online course work. [snip]

The PACE program was created as a way to boost the numbers of first-time-in-college students enrolling in UF Online, to provide an alternate path to residential programs, and to populate major areas of study that have been under-enrolled in recent years.

The fact that fewer than 10% of students accepted the offer is not necessarily news, as the campus provost predicted this situation last month (see the Washington Post article). What is more troubling is the hubris exhibited in how UF Online is reacting to enrollment problems. Administrators at the university seem to view UF Online as a mechanism to serve institutional needs and are not focused on meeting student needs. This distorted lens is leading to some poor decision-making that is likely making the enrollment situation worse in the long run. Rather than asking “which students need UF Online and what support do they need”, the institution is asking “what do we need and how can we use UF Online to fill any gaps”.

Let’s step back from PaCE and look at the bigger picture. The following chart shows the targeted enrollment numbers that formed the basis for the UF Online strategic plan, compared to actual and currently estimated enrollment (click to enlarge).

Enrollments vs Plan Spring 2015

As of this term, they are off by ~23% (1000 out of a target of 1304 students), which is not unreasonable for a program that started so quickly. What is troubling, however, is that the targets rise quickly (3698 next spring, 6029 the year after) while the actuals have not shown significant growth yet. Note that UF Online is estimating enrollment to double, from 1000 to 2000, for fall 2015 – that is a bold assumption. To make the challenge even more difficult (from March article in Gainesville Sun):

That growth in revenue also depends largely on a growing number of out-of-state online students who would pay four to five times higher tuition rates, based on market conditions.

Specifically, the business plan assumes a mix of 43% out-of-state students in UF Online by year 10, yet currently there are only 9% out-of-state students. How realistic is it to attract large numbers of out-of-state students given the increasing options for online programs?

In the midst of the challenging startup, UF Online had to deal with the premature departure of the initial executive director. After a one-year search process, UF Online chose a new leader who has absolutely no experience in online education.

UF Online is welcoming Evangeline Cummings as its new director, and she has the task of raising the program’s enrollment. [snip]

Cummings starts July 1 with a salary of $185,000. She is currently a director with the U.S. Environmental Protection Agency.

UF spokesman Steve Orlando wrote in an email that she showed skills desirable for the position. “The search committee and the provost were looking for someone with the ability to plan strategically and to manage a large and complex operation,” he said.

At this point, it might have been worth stepping back and challenging some of the original assumptions. Specifically, is UF Online targeting the right students and addressing an unmet need? The plan assumes there are many students who want a U of Florida degree but just can’t get in or want to earn it from out of state. This is different from asking what types of students need an anywhere, anytime online program from an R1 university and then figuring out what to provide in an academic program.

Instead, the administrators came up with the PaCE program as a way to augment enrollment. Which academic majors are allowed under PaCE?

The PACE program was created as a way to boost the numbers of first-time-in-college students enrolling in UF Online, to provide an alternate path to residential programs, and to populate major areas of study that have been under-enrolled in recent years.

The school didn’t ask “what are the majors that students need once they transfer to the residential program”; it asked “how can we use these online students to fill some gaps we already have”. And students who sign the PaCE contract (yes, it is a contractual agreement) cannot change majors even after they move to a campus program.

And while the students are in UF Online:

PACE students can’t live in student dormitories, and their tuition doesn’t cover meals, health services, the recreation center and other student activities because they aren’t paying the fees for those services. They can’t get student tickets to UF cultural and sporting events.

They also can’t ride for free on Regional Transportation Service buses or get student parking passes.

PACE students also will not be able to participate in intercollegiate athletics or try out for the Gator Marching Band. They can use the libraries on campus but can’t check out books.

U of Florida seems to have spent plenty of time figuring out what not to provide these students.

One additional challenge that UF Online will face is student retention. The Instructional Technology Council (ITC) described the problem in this year’s Distance Education report:

Nationally, student retention in online courses tends to be eight percentage points lower than that of face-to-face instruction. Online students need to be self-disciplined to succeed. Many underestimate how much time online coursework requires. Others fall behind or drop out for the same reasons they enrolled in online courses in the first place—they have other responsibilities and life challenges, such as work and/or family, and are too busy to prepare for, or complete, their online coursework.

Yet UF Online is targeting the students who might have the most trouble with online courses. First-time entering freshmen, particularly students who actually want a residential program and might not even understand online programs, are not the students most likely to succeed in a fully-online program. San Jose State University and Udacity learned this lesson the hard way, although they threw MOOCs and remedial math into the mix as well.

UF Online seems to be institutionally-focused rather than student-focused, and the initiative is shaping up to be a case study in hubris. Without major changes in how the program is managed, including the main campus input into decisions, UF Online risks becoming the new poster child of online education failures. I honestly hope they succeed, but the current outlook is not encouraging.

The post UF Online and Enrollment Warning Signs appeared first on e-Literate.

Worth Considering: Faculty perspective on student-centered pacing

Tue, 2015-05-26 11:43

By Phil HillMore Posts (326)

Over the weekend I wrote a post based on the comment thread at Friday’s Chronicle article on e-Literate TV.

One key theme coming through from comments at the Chronicle is what I perceive as an unhealthy cynicism that prevents many people from listening to students and faculty on the front lines (the ones taking redesigned courses) on their own merits.

Sunday’s post highlighted two segments of students describing their experiences with re-designed courses, but we also need to hear directly from faculty. Too often the public discussion of technology-enabled initiatives focuses on the technology itself, often assuming that the faculty involved are bystanders or technophiles. But what about the perspectives of faculty members – you know, the ones who are in the classrooms working with real students – on what challenges they face and what changes are needed from an educational perspective? There is no single faculty perspective, but we could learn a great deal from their unique, hands-on experiences.

Consider the specific case of why students might need to work at their own pace.

The first example is from a faculty member at Middlebury College describing the need for a different, more personalized approach for his geographic information system (GIS) course.

Jeff Howarth: And what I would notice is that there would be some students who would like me to go a little bit faster but had to wait and kind of daydream because they were just waiting. And then there were some students that desperately wanted me slow down. Then you get into that kind of slowest-car-on-the-freeway, how-fast-can-you-really-go type of thing. So, I would slow down, which would lose part of the group.

Then there would be some folks that probably would want me to slow down but would never ask because they don’t want to call attention to themselves as being the kind of—the slow car on the freeway.

Michael Feldstein: At this point, Jeff realized that even his small class might not be as personalized as it could be with the support of a little more technology.

Jeff Howarth: What I realized is that, if I just started packaging that instruction, the worked example, I could deliver the same content but allow students to first—if I made videos and posted it on something like YouTube, I was putting out the same content, but students could now watch it at their own pace and in the privacy of being able to go as slow as they need to without the social hang-ups of being considered different.

So, that was really the first step of—I did all of this, and then I told another colleague in languages what I was doing. And he said, “Well, that’s called ‘flipping the classroom.’” And I thought, “OK.” I mean, but that’s not really why. I did it without knowing that I was flipping the classroom, but then that’s how it happened.

Compare this description with an example from an instructor at Essex County College teaching developmental math.

Pamela Rivera: When I was teaching the traditional method, I’ll have students coming in and they didn’t know how to multiply. They didn’t know how to add and subtract. Rarely would those students be able to stay throughout the semester, because after the third—no, even after the second week, everyone else was already in division and they’re still stuck.

And the teacher can’t stop the class and say, “OK, let’s continue with multiplication,” because you have a syllabus to stick to. You have to continue teaching, and so those students will be frustrated, and so they drop the class.

At the same time, you had students who—the first couple of weeks they’ll be extremely bored because they already know all of that. And so, unfortunately, what would happen is eventually you would get to a point in the content that—they don’t know that, but because they have been zoning out for weeks, they don’t get that “OK, now, I actually have to start paying attention.” And so, yes, they should have been able to do that, but they still are not very successful because they were used to not paying attention.

Remarkably Similar Descriptions

Although these two examples come from very different contexts, the descriptions that faculty offer of the need for course designs that allow students to control their own pacing are remarkably similar. These isolated examples are not meant to end debate on personalized learning or on what role technology should play (rather, they should encourage it), but it is very useful to listen to faculty members describe the challenges they face on an educational level.

The post Worth Considering: Faculty perspective on student-centered pacing appeared first on e-Literate.

Worth Considering: Students can have their own perspectives on edtech initiatives

Sun, 2015-05-24 15:06

By Phil HillMore Posts (326)

Triggered by Friday’s article on e-Literate TV, there have been some very interesting conversations both in the Chronicle comment thread and on the e-Literate TV site. The most, um, intense conversations have centered on the application of self-regulated learning (SRL) in combination with adaptive software (ALEKS) to redesign a remedial math course at Essex County College. Michael has been wading in very deep waters in the comment threads, trying to emphasize variations of the following point.

But that debate should be in the context of what’s actually happening in real classrooms with real students, what the educational results are, and what the teachers and students involved think of their experiences.

Right now, the “sides” are having a fight–it’s not really a debate because the sides aren’t really talking to each other–in near total absence of any rational, educator-evaluated, evidence-based conversation about what these approaches are good for. One side says they will “fix” a “broken” education system, while the other side says they will “destroy” the education system. Well, what are the students saying?

One key theme coming through from comments at the Chronicle is what I perceive as an unhealthy cynicism that prevents many people from listening to students and faculty on the front lines (the ones taking redesigned courses) on their own merits. Michael called out this situation in the same comment:

What bothers me is the seemingly complete lack of interest among the commenters in this thread about actually hearing what these teachers and students have to say, and the disregard for the value of their perspectives. It is possible to raise legitimate concerns about techno-solutionism, anti-labor practices, and other serious abuses while simultaneously acknowledging that so-called “personalized learning” approaches can have real educational value when properly applied in the appropriate context by competent and concerned educators and serious students.

One of our primary goals for e-Literate TV is to give additional access to those on the front lines, thus allowing debates and conversations about the role of ed tech and personalized learning approaches. However, it is important to recognize that students can have their own perspectives and are not just robots who are told what to say and do. Consider the following panel discussion with students. To me, the students are quite well-spoken and have real insights.

Sade: A typical day is, like, you basically come in—you go and you log on and you do your ALEKS. You do it at your own pace. Every individual works at their own pace. That’s why I like it. Because some people are ahead, and if you’re in a typical, a regular class, then you have to go with the pace of everybody else. Even if you don’t understand, you have to be—you have to try to catch up. Here, you work at your own pace.

Viviane: It’s been a very good experience for basically the same reasons. Where you just sit and you work and if you can solve 10 problems in one hour, it’s better for you if you keep working at your own pace.

And there’s also—the professor that helps you, or you can even bother one of your classmates and say, “Hey, can you help me out over here with this problem?” or something like that. I mean it’s—I feel as if it’s a very interactive and open classroom.

As per other classes, I don’t think that a regular math class would be able—I mean you wouldn’t be able to sit and ask another classmate for help or anything like that. You would have to just wait for your professor.

Most students we talked to appreciated the self-paced nature of the lab portion (working on the computers emporium-style with faculty roaming the room for one-on-one support), but it is very clear that the technology itself was only one component of the solution. Students are reflecting back that it is the combination of self-paced design and interactive support that is critical to success. Note, too, how students value peer support – students helping students – a design element of courses that is often overlooked.

In another segment, students explored this concept in more depth with an additional element of ownership of the learning process.

Phil: Most of the students we talked to seem to have internalized the lessons of self-regulated learning and feel empowered to learn.

Sade: It’s really good because, for example, say I’m doing a topic, and I’m stalling. Vivian is faster than I am. I could work by my own pace and then it’s a professor there that I could raise my hand. “Excuse me. I don’t understand this. Could you help me with it?”—because everybody learns at their own pace.

Khalid: Yeah, we are typically just sitting down on the computer screen, but we’re sitting next to our classmates, so if there’s a problem on it, I could ask my classmate. Like, that’s actually the best thing about ALEKS, is that there’s an explain button right there.

We would do well to listen to students more often, judging their input on its own merits.

Update: Fixed first video link.

The post Worth Considering: Students can have their own perspectives on edtech initiatives appeared first on e-Literate.

LMS Observations: You had me until you went nihilist

Wed, 2015-05-20 16:20

By Phil HillMore Posts (325)

Mark Drechsler has a fascinating post responding to my recent “LMS as minivan” post and my post about D2L’s retention claims, mostly playing off of this theme:

I answered another question by saying that the LMS, with multiple billions invested over 17+ years, has not “moved the needle” on improving educational results. I see the value in providing a necessary academic infrastructure that can enable real gains in select programs or with new tools (e.g. adaptive software for remedial math, competency-based education for working adults), but the best the LMS itself can do is get out of the way – do its job quietly, freeing up faculty time, giving students anytime access to course materials and feedback. In aggregate, I have not seen real academic improvements directly tied to the LMS.

In response, Mark gives “a personal view of my own journey towards LMS nihilism” in a post titled “How I lost my faith in the LMS” that has some excellent points (first go read his whole post, I’ll wait).


Mark describes how the LMS market in Australia changed dramatically – mostly towards Moodle – due to the Blackboard / D2L lawsuit, the end-of-life of WebCT, and the release of Moodle 1.9, noting:

There were, I believe, a variety of reasons that Moodle was so successful during this time, but one of the most common things that I would hear during this period was that, compared to incumbent LMS, Moodle simply ‘got out of the way’ and let academic staff do their thing. It helped the LMS stop being a barrier, and moved it closer to being an enabler, which is exactly what it should have been.

During this time Moodle was booming in popularity, and the transitions I was involved in by and large went as well as any other campus-wide technology platform change can, but one big question (and I must send out a thank you my friend and sounding board James Hamilton for planting this seed) was lurking in the background – how do we measure the success of the implementation? How do we know that the LMS in and of itself is making any difference whatsoever?

The answer he comes to is that no, the LMS in and of itself does not change outcomes and that:

The specific LMS that was in use paled into insignificance next to the innovation, dedication and craftiness of the person using it.

Here Mark makes one of the best points I’ve seen in the LMS discussions of late [emphasis added].

In a commodity market, the argument often turns to cost. In the case of the LMS, like any piece of campus-wide technology, the cost of the service in technology terms often pales into insignificance when compared with the cost in terms of time spent (wasted?) by academic and administrative staff being forced to use a system designed to try and satisfy a large set of complex requirements. Perhaps this was one of the most compelling things about Moodle back in its heyday – the perception that it simply ‘got out of the way’ of teachers wanting to do their job – and the significant ‘switching cost’ in terms of managing a large-scale change program that is needed to swap out an LMS was deemed worth it in terms of the longer term reduction of burden on users.

That was then, however, not now.

Where we slightly differ is in the conclusions. The lack of evidence of LMS usage directly impacting academic results does not make the LMS a commodity and is no reason to go nihilist[1].

Two measures of value in a traditional LMS are how well it ‘gets out of the way’ and how well it enables apps that can directly affect student learning. From my experience, the various LMS options differ greatly in these two attributes. I have seen campuses where an LMS transition led to a system so much more intuitive, reliable, and easy to adopt that training resources were diverted away from ‘here’s how to migrate a course and which button to push’ to ‘here are some pedagogical improvements to consider using online tools’. I have seen schools benefit simply from having reliable systems that don’t go down during exams. In other words, an LMS solution can significantly reduce the “cost in terms of time spent by academic and administrative staff”. And by the way, that choice might not always be the same LMS – it depends greatly on course design and pedagogical models.

While I detest most RFP processes, there are examples (typically involving creative compliance with purchasing rules or active support from an enlightened purchasing guru) where the planning process itself leads to increased collaboration among academic and administrative staff. If done well, a vendor selection process can enable greater focus on teaching and learning effectiveness and cross-pollination of ideas.

Update (hit publish too soon): I have also seen situations where an LMS is so painful to use that faculty don’t take advantage of tools that are appropriate or useful. While there is risk in broadly looking at depth of LMS adoption as a net positive, the wrong LMS choice or implementation can prevent faculty or instructional designers from doing what they’d like.

While I might be misreading Mark’s nihilism reference, he makes some great observations based on his personal journey. In the end, however, I do not see the LMS as a commodity.

  1. Mark’s conclusion is “So then, in my mind, while the LMS may not quite yet be considered a commodity in terms of features and functions, it might as well be a commodity in terms of the overall impact it has on student learning outcomes.”

The post LMS Observations: You had me until you went nihilist appeared first on e-Literate.

Miami, Harvard and MIT: Disability discrimination lawsuits focused on schools as content providers

Wed, 2015-05-20 11:52

By Phil HillMore Posts (324)

In the discussions at Google+ based on last week’s post about the Miami University of Ohio disability discrimination lawsuit[1], George Station made two important points that deserve more visibility.

It’s been a-coming for several years now. Cal State has some pretty strong rules in place for compliance with ADA and state-level disability laws. Still, [Universal Design for Learning] UDL is a little-known acronym on any campus you care to visit, and staff support is probably one person in an office, except for Miami of Ohio as of this week, I guess…

Add the recent edX settlement with the US Department of Justice, and the whole direction of edtech changes…

Put another way, it should come as no surprise that the US Department of Justice is ramping up its enforcement of disability discrimination regulations in the education world. Captioning service provider CaptionSync has an excellent summary of the field, written before the DOJ intervention at Miami.

Accessibility laws applicable to higher education have been in place in the United States for decades, but many schools are still not fully compliant with the laws. Part of the lag in compliance can be attributed to lenient enforcement in the early years of these laws; the Rehabilitation Act was enacted in 1973 and the Americans with Disabilities Act was enacted in 1990, but initially there were very few government investigations or enforcement actions. Over time both government agencies (such as the Office for Civil Rights) and advocacy groups (such as the National Federation for the Blind and the National Association for the Deaf) have increasingly been making efforts to enforce the provisions of these laws. Recent civil suits filed by the National Association for the Deaf (NAD) and other advocacy organizations against both Harvard and MIT suggest that now is a good time to take a hard look at your accessibility compliance efforts if you work with video in a college or university setting.

The Department of Justice (DOJ) sent a letter to all college and university presidents on the topic of accessibility for emerging technologies in 2010; it contained a useful summary of various accessibility regulations and how they apply to the education community.

In February, the National Association of the Deaf (NAD) filed suit against Harvard and MIT based on their MOOCs using edX. It is worth noting that the lawsuit is against the schools, not the MOOC provider. In the announcement:

Many videos simply aren’t captioned at all.  For example, a Harvard program on the 50th anniversary of Brown v. Board of Education, a 2013 Harvard Q&A with Bill Gates and a 2013 MIT discussion with MIT professor Noam Chomsky about the leaks attributable to Chelsea (formerly Bradley) Manning all lack closed captions.

“Worse still,” said attorney Timothy Fox, “a sampling of the videos available illustrates the problem with inaccurate captioning, making them confusing and sometimes completely unintelligible.”

The issue is not that there is no capability for captioning, but that those producing the content (Harvard and MIT) either do not provide captions or provide them with many errors. Subsequently, the DOJ and edX settled out of court based on the following:

5. Following the compliance review, the United States determined that and the Platform were not fully accessible to some individuals with disabilities in violation of Title III of the ADA.

6. EdX disputes the findings set forth above and denies that, its mobile applications, and the Platform are covered by or are in violation of Title III of the ADA.

In the settlement, both parties go out of their way to clarify that edX is a software provider and that the schools are content providers. The DOJ settlement calls on edX within 18 months to conform with the Web Content Accessibility Guidelines (“WCAG”) 2.0 AA, published by the Web Accessibility Initiative of the World Wide Web Consortium (“W3C”). More importantly, however, the agreement stipulates that edX within 90 days provide guidance to content providers (schools).

27. Develop a guide for Content Providers entitled Accessibility Best Practices Guidance for Content Providers (“Accessibility Best Practices Guidance”) and distribute a copy to each Content Provider with instructions for redistribution among individuals involved in producing Course Content. The Accessibility Best Practices Guidance shall describe steps and resources on how Course Content may be made to conform with WCAG 2.0 AA for Participants with disabilities using the CMS and inform Content Providers that the following resources may assist them in producing accessible Course Content: UAAG 1.0, ATAG 2.0, WAI-ARIA, WCAG2ICT, EPUB3, DAISY, and MathML.

The DOJ insists not only that software include capabilities for accommodating students with disabilities but also that schools actually provide the content and related metadata required for compliance. It is no longer enough for schools to buy software that is “ADA compliant”. Faculty or instructional designers need to include captions, alt text, and alternate pathways so that students have equal access.

The Miami U lawsuit and the DOJ intervention originated with blind students rather than deaf students, but the issues are the same. The DOJ repeatedly referred to ed tech “as implemented by Miami University”. As noted by reader Brian Richwine, the original lawsuit does reference Sakai (the LMS at the time of the lawsuit), but the focus is still on how the content was provided.

Going back to the CaptionSync blog post:

Some schools have pointed out that in the summer of 2015 the DOJ is expected to release new guidance on how accessibility for websites is to be handled and they are awaiting that guidance before they step up their accessibility efforts.

The big lesson is that higher education institutions themselves had better get ready to understand their role as content providers that must conform to disability standards. Just letting individual faculty members figure out what to do is a recipe for future lawsuits. Faculty need support, guidance and (gasp) appropriate oversight to get this right.

Beyond the regulations and frameworks listed in the DOJ documents, schools should also increase their understanding and use of the UDL framework and guidelines that George referenced in his comments.

The goal of education in the 21st century is not simply the mastery of content knowledge or use of new technologies. It is the mastery of the learning process. Education should help turn novice learners into expert learners—individuals who want to learn, who know how to learn strategically, and who, in their own highly individual and flexible ways, are well prepared for a lifetime of learning. Universal Design for Learning (UDL) helps educators meet this goal by providing a framework for understanding how to create curricula that meets the needs of all learners from the start.

The UDL Guidelines, an articulation of the UDL framework, can assist anyone who plans lessons/units of study or develops curricula (goals, methods, materials, and assessments) to reduce barriers, as well as optimize levels of challenge and support, to meet the needs of all learners from the start. They can also help educators identify the barriers found in existing curricula.

  1. Insert joke here about G+ and its hundreds of active users.

The post Miami, Harvard and MIT: Disability discrimination lawsuits focused on schools as content providers appeared first on e-Literate.

About Those D2L Claims of LMS Usage Increasing Retention Rates

Thu, 2015-05-14 09:43

By Phil Hill

In my post last week on the IMS Global Consortium conference #LILI15, I suggested that LMS usage in aggregate has not improved academic performance and noted that John Baker from D2L disagreed.

John Baker from D2L disagreed on this subject, and he listed off internal data of 25% or more (I can’t remember detail) improved retention when clients “pick the right LMS”. John clarified after the panel the whole correlation / causation issue, but I’d love to see that data backing up this and other claims.

After the conference I did some checking based on prompts from some helpful readers, and I’m fairly certain that John’s comments referred to Lone Star College – University Park (LSC-UP) and its 24% increase in retention. D2L has been pushing this story recently, first in a blog post and then in a paid webinar hosted by Inside Higher Ed. From the blog post titled “Can an LMS improve retention?” [footnotes and emphasis in original]:

Can an LMS help schools go beyond simply managing learning to actually improving it?

Pioneering institutions like Lone Star College-University Park and Oral Roberts University are using the Brightspace platform to leverage learner performance data in ways that help guide instruction. Now, they’re able to provide students with more personalized opportunities to master content and build self-confidence. The results of their student-centered approach have been nothing short of amazing: For students coming in with zero credits, Lone Star estimates that persistence rates increased 19% between spring 2014 and fall 2014[3] and Oral Roberts University estimates a persistence rate of 75.5% for online programs, which is an all-time high.[4]

Then in the subsequent IHE webinar page [emphasis added]:

The results have been nothing short of amazing. Lone Star has experienced a 19% increase in persistence and Oral Roberts University has achieved a 75.5% persistence rate for online programs—an all-time high. Foundational to these impressive results is Brightspace by D2L—the world’s first Integrated Learning Platform (ILP)— which has moved far beyond the traditional LMS that, for years, has been focused on simply managing learning instead of improving it.

Then from page 68 of the webinar slides, as presented by LSC-UP president Shah Ardalan:


By partnering with D2L, using the nationally acclaimed ECPS, the Bill & Melinda Gates Foundation, and students who want to innovate, LSC-UP increased retention by 24% after the pilot of 2,000 students was complete.

ECPS and the Pilot

For now let’s ignore the difference between 19%, 24% and my mistake on 25%. I’d take any of those results as institutional evidence of (the right) LMS usage “moving the needle” and improving results[1]. This description of ECPS got my attention, so I did some more research on ECPS:

The Education and Career Positioning System is a suite of leading web and mobile applications that allow individuals to own, design, and create their education-to-career choices and pathways. The ability to own, design, and create a personal experience is accomplished by accessing, combining and aggregating lifelong personal info, educational records, career knowledge, and labor statistics …

I also called up the LSC-UP Invitation to Innovate program office to understand the pilot. ECPS is an advising and support system created by LSC-UP, and the pilot was partially funded by the Gates Foundation's Integrated Planning and Advising Services (IPAS) program. The idea is that students do better by understanding their career choices and academic pathways up front rather than being faced with a broad set of options. LSC-UP integrated ECPS into a required course that all entering freshmen (but not transfers) take. Students used ECPS to identify their skills, explore careers, see what those careers would require, and so on. While there is no published report, LSC-UP reports an increase in term-to-term persistence of 19+% between Spring 2014 and Fall 2014. Quite interesting and encouraging, and kudos to everyone involved. You can find more background on ECPS here.
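A side note on the percentages: a "19% increase in persistence" is ambiguous between a relative change and a percentage-point change, and the two can diverge widely. The numbers below are hypothetical, chosen only to illustrate the distinction; they are not Lone Star's actual data.

```python
# Hypothetical rates to show why "a 19% increase" is ambiguous.
before = 0.62   # assumed baseline term-to-term persistence rate
after = 0.74    # assumed post-pilot rate (illustrative only)

relative_increase = (after - before) / before  # relative change
point_increase = after - before                # percentage-point change

print(f"relative: {relative_increase:.1%}, points: {point_increase:.1%}")
# → relative: 19.4%, points: 12.0%
```

Without a published report stating the baseline, there is no way to tell which of the two the 19% figure refers to.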

In the meantime, Lone Star College (the entire system of 92,000+ students) selected D2L and is now using Brightspace as its LMS; however, the ECPS pilot had little to do with LMS usage. The primary intervention was an advising system and course redesign to focus students on understanding career options and related academic pathways.

The Problem Is Marketing, Not Product

To be fair, what if D2L enabled LSC-UP to do the pilot in the first place by some unique platform or integration capabilities? There are two problems with this possible explanation:

  • ECPS follows IMS standards (LTI), meaning that any major LMS could have integrated with it; and
  • ECPS was not even integrated with D2L during the pilot.
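On the first bullet: an LTI 1.1 launch is just an OAuth 1.0a-signed form POST, which is why any standards-compliant LMS could have integrated ECPS. Here is a minimal sketch of the signing step; the URL, key, and secret are hypothetical, and the required oauth_nonce and oauth_timestamp fields are omitted for brevity.

```python
# Sketch of OAuth 1.0a HMAC-SHA1 signing as used by LTI 1.1 launches.
# All identifiers below are hypothetical, not ECPS's actual endpoint.
import base64
import hashlib
import hmac
from urllib.parse import quote

def sign_lti_launch(url, params, consumer_secret):
    """Build the OAuth 1.0a HMAC-SHA1 signature for an LTI 1.1 launch."""
    # 1. Percent-encode and sort all launch parameters.
    pairs = sorted((quote(k, safe=""), quote(str(v), safe=""))
                   for k, v in params.items())
    param_str = "&".join(f"{k}={v}" for k, v in pairs)
    # 2. Signature base string: METHOD & encoded URL & encoded params.
    base = "&".join(["POST", quote(url, safe=""), quote(param_str, safe="")])
    # 3. Key is "consumer_secret&" (no token secret in an LTI launch).
    key = quote(consumer_secret, safe="") + "&"
    digest = hmac.new(key.encode(), base.encode(), hashlib.sha1).digest()
    return base64.b64encode(digest).decode()

sig = sign_lti_launch(
    "https://ecps.example.edu/launch",  # hypothetical tool URL
    {"lti_message_type": "basic-lti-launch-request",
     "lti_version": "LTI-1p0",
     "resource_link_id": "course-101",
     "oauth_consumer_key": "lsc-demo-key"},
    "demo-secret",
)
print(sig)  # the oauth_signature the LMS would POST alongside the params
```

The point is that nothing in this handshake is vendor-specific: the tool only needs the consumer key and secret, regardless of which LMS sits on the other end.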

That’s right – D2L is taking a program where there is no evidence that LMS usage was a primary intervention and using the results to market and strongly suggest that using their LMS can “help schools go beyond simply managing learning to actually improving it”. There is no evidence presented[2] of D2L’s LMS being “foundational” – it happened to be the LMS during the pilot that centered on ECPS usage.

I should be clear that D2L should rightly be proud of their selection as the Lone Star LMS, and from all appearances the usage of D2L is working for the school. At the very least, D2L is not getting in the way of successful pilots. It’s great to see D2L highlight the excellent work by LSC-UP and their ECPS application as they recently did in another D2L blog post extensively quoting Shah Ardalan:

Lone Star College-University Park’s incoming students are now leveraging ECPS to understand their future career path. This broadens the students’ view, allows them to share and discuss with family and friends, and takes their conversation with the academic and career advisors to a whole new level. “Data analytics and this form of ‘intentional advising’ has become part of our culture,” says Ardalan. “Because the students who really need our help aren’t necessarily the ones who call, this empowers them to make better decisions” he adds.

LSC-UP is also planning to start using D2L's analytics package Insights, and D2L may eventually get to the point where it can take credit for improving performance.

The problem is the misleading marketing. I say misleading because D2L and LSC-UP never come out and say "D2L usage increased retention". They achieve their goal through clever framing: the topic is set up as whether D2L and its LMS can increase performance, and then they share the LSC success story. The reader or listener has to read the fine print or do additional research to understand the details, and most people will not do so.

The higher ed market deserves better.

I Maintain My Position From Conference Panel

After doing this research, I stand by my statement from the IMS panel and from my blog post.

I answered another question by saying that the LMS, with multiple billions invested over 17+ years, has not “moved the needle” on improving educational results. I see the value in providing a necessary academic infrastructure that can enable real gains in select programs or with new tools (e.g. adaptive software for remedial math, competency-based education for working adults), but the best the LMS itself can do is get out of the way – do its job quietly, freeing up faculty time, giving students anytime access to course materials and feedback. In aggregate, I have not seen real academic improvements directly tied to the LMS.

I’m still open to looking at programs that contradict my view, but the D2L claim from Lone Star doesn’t work.

  1. Although my comments refer to improvements in aggregate, going beyond pilots at individual schools, this claim would nonetheless be impressive.
  2. Evidence is based on blog posts, webinar, and articles as well as interview of LSC-UP staff; if D2L can produce evidence supporting their claim I will share it here.


Ed Tech World on Notice: Miami U disability discrimination lawsuit could have major effect

Wed, 2015-05-13 11:53

By Phil Hill

This week the US Department of Justice, citing Title II of ADA, decided to intervene in a private lawsuit filed against Miami University of Ohio regarding disability discrimination based on ed tech usage. Call this a major escalation and just ask the for-profit industry how big an effect DOJ intervention can be. From the complaint:

Miami University uses technologies in its curricular and co-curricular programs, services, and activities that are inaccessible to qualified individuals with disabilities, including current and former students who have vision, hearing, or learning disabilities. Miami University has failed to make these technologies accessible to such individuals and has otherwise failed to ensure that individuals with disabilities can interact with Miami University’s websites and access course assignments, textbooks, and other curricular and co-curricular materials on an equal basis with non-disabled students. These failures have deprived current and former students and others with disabilities a full and equal opportunity to participate in and benefit from all of Miami University’s educational opportunities.

The complaint then calls out the nature of assistive technologies that should be available, including screen readers, Braille displays, audio descriptions, captioning, and keyboard navigation. It specifies that Miami U uses many technologies and much content that are incompatible with these assistive technologies.

The complaint is very specific about which platforms and tools are incompatible:

  • The main website
  • Vimeo and YouTube
  • Google Docs
  • TurnItIn
  • LearnSmart
  • WebAssign
  • MyStatLab
  • Vista Higher Learning
  • Sapling

Update: It is worth noting the usage of phrase “as implemented by Miami University” in most of these examples.

Although the complaint lists the last six examples as LMSs, it is notable that it does not call out the school's previous LMS (Sakai) or its current LMS (Canvas). Canvas was selected last year to replace Sakai, and I believe both are in use. Does this mean that Sakai and Canvas pass ADA muster? That's my guess, but I'm not 100% sure.

The complaint is also quite specific about the Miami U services that are at fault. For example:

When Miami University has converted physical books and documents into digital formats for students who require such conversion because of their disabilities, it has repeatedly failed to do so in a timely manner. And Miami University has repeatedly provided these students with digitally-converted materials that are inaccessible when used with assistive technologies. This has made the books and documents either completely unusable, or very difficult to use, for the students with these disabilities.

Miami University has a policy or practice by which it converts physical texts and documents into electronic formats only if students can prove they purchased (rather than borrowed) the physical texts or documents. Miami University will not convert into digital formats any physical texts or documents from its library collections and it will not seek to obtain from other libraries existing copies of digitally-converted materials. This has rendered many of the materials that Miami University provides throughout its library system and which it makes available to its students unavailable to students who require that materials be converted into digital formats because of a disability.

The complaint also specifies the required use of clickers and content within PowerPoint.

This one seems to be a very big deal by nature of the DOJ intervention and the specifics of multiple technologies and services.

Thanks to Jim Julius for alerting me on this one.

.@PhilOnEdTech have you seen the Miami of Ohio accessibility complaint? This is going to generate shock waves.

— Jim Julius (@jjulius) May 13, 2015


Worth Reading: Use of adjuncts and one challenge of online education

Mon, 2015-05-11 12:36

By Phil Hill

There is a fascinating essay today at Inside Higher Ed giving an inside, first-person view of being an adjunct professor.

2015 is my 25th year of adjunct teaching. In the fall I will teach my 500th three-credit college course. I have put in many 14- to 16-hour days, with many 70- to 80-hour weeks. My record is 27 courses in one year, although I could not do that now.

I want to share my thoughts on adjunct teaching. I write anonymously to not jeopardize my precarious positions. How typical is my situation?

The whole essay is worth reading, as it gives a great view into the modern university and the implications of its reliance on adjuncts. But I want to highlight one paragraph in particular that captures the challenge of understanding online education.

I have taught many online courses. We have tapped about 10 percent of the potential of online courses for teaching. But rather than exploring the untapped 90 percent, the college where I taught online wanted to standardize every course with a template designed by tech people with no input from instructors.

I want to design amazing online courses: courses so intriguing and intuitive and so easy to follow no one would ever need a tutorial. I want to design courses that got students eager to explore new things. Let me be clear, I am not talking about gimmicks and entertainment; I am talking about real learning. Is anyone interested in this?

It is naive to frame the debate over online education as solely, or even primarily, an issue of faculty resistance. Yes, there are faculty members who are against online education, but one reason for this resistance is a legitimate concern for the quality of courses. What the essay reminds us is that part of the quality problem arises from structural issues within the university, not from the actual potential of well-designed and well-taught online courses.

David Dickens at Google+ had an interesting comment based on the “tech people” reference that points to the other side of the same coin.

As a tech guy I can tell you, we’d love to have the time and tools to work with motivated adjuncts (or anyone else), but often times we have to put out something that will work for everyone, will scale, and will be complete and tested before the end of the week.

It is endlessly frustrating to know that there is so much more that could be done. After all, we tech folks are completely submerged in our personal lives with much more awesome tech than we can include in these sorts of “products” as we are constrained to publish them.

There is an immense difference between A) the quality of online education and B) the quality of well-designed and well-taught online education, and that is even different than C) the potential of online education. It is a mistake to conflate A), B), and C).

Update: David is on a roll while in discussion with George Station. This clarification builds on the theme of this post.

My point is that IT isn’t the barrier, but rather we are the mask behind which all the real barriers like to hide. We’d love to do more but can’t, and we get put in the position of taking the blows that should be directed towards the underlying issues.
