
Michael Feldstein

What We Are Learning About Online Learning...Online

D2L Again Misusing Academic Data For Brightspace Marketing Claims

Thu, 2015-07-02 05:56

By Phil Hill

At this point I’d say that we have established a pattern of behavior.

Michael and I have been quite critical of D2L and their pattern of marketing behavior that is misleading and harmful to the ed tech community. Michael put it best:

I can’t remember the last time I read one of D2L’s announcements without rolling my eyes. I used to have respect for the company, but now I have to make a conscious effort not to dismiss any of their pronouncements out-of-hand. Not because I think it’s impossible that they might be doing good work, but because they force me to dive into a mountain of horseshit in the hopes of finding a nugget of gold at the bottom. Every. Single. Time. I’m not sure how much of the problem is that they have decided that they need to be disingenuous because they are under threat from Instructure or under pressure from investors and how much of it is that they are genuinely deluding themselves. Sadly, there have been some signs that at least part of the problem is the latter situation, which is a lot harder to fix. But there is also a fundamental dishonesty in the way that these statistics have been presented.

Well, here’s the latest. John Baker put out a blog called This Isn’t Your Dad’s Distance Learning Program with this theme:

But rather than talking about products, I think it’s important to talk about principles. I believe that if we’re going to use education technology to close the attainment gap, it has to deliver results. That — as pragmatic as it is — is the main guiding principle.

The link about “deliver results” leads to this page (excerpted as it existed prior to June 30th, for reasons that will become apparent).

Why Brightspace

Why Brightspace? Results.

So the stage is set – use ed tech to deliver results, and Brightspace (D2L’s learning platform, or LMS) delivers results. Now we come to the proof, including these two examples.

[Screenshot: CSU Long Beach and UW-Milwaukee results claims]

According to California State University-Long Beach, retention has improved 6% year-over-year since they adopted Brightspace.[snip]

University of Wisconsin-Milwaukee reported an increase in the number of students getting A’s and B’s in Brightspace-powered courses by over 170%

Great results, no? Let’s check the sources. Ah . . . clever marketing folks – no supporting data or even hyperlinks to learn more. Let’s just accept their claims and move along.

. . .

OK, that was a joke.

CSU Long Beach

I contacted CSU Long Beach to learn more, but I could find no one who knew where this data came from or even that D2L was making this claim. I shared the links and context, and they went off to explore. Today I got a message saying that the issue has been resolved, but that CSU Long Beach would make no public statements on the matter. Fair enough – the observations below are my own.

If you look at that Results page now, the CSU Long Beach claim is no longer there – down the memory hole[1] with no explanation, replaced by a new claim about Mohawk College.

[Screenshot: Mohawk College and UW-Milwaukee results claims]

While CSU Long Beach would not comment further on the situation, there are only two plausible explanations for the issue being resolved by D2L taking down the data. Either D2L was using legitimate data that they were not authorized to use (best case scenario) or D2L was using data that doesn’t really exist. I could speculate further, but the onus should be on D2L since they are the ones who made the claim.

UW Milwaukee

I also contacted UW Milwaukee to learn more, and I believe the data in question is from the U-Pace program which has been fully documented.[2][3]

The U-Pace instructional approach combines self-paced, mastery-based learning with instructor-initiated Amplified Assistance in an online environment.

The control group was a traditionally-taught (read: large lecture) Intro to Psychology course.

From the EDUCAUSE Quarterly article on U-Pace, for disadvantaged students the number of A’s and B’s increased 163%. This is the closest data I can find to back up D2L’s claim of 170% increase.

[Screenshot: U-Pace results table from the EDUCAUSE Quarterly article]

There are three immediate problems here (ignoring the fact that I can’t find improvements of more than 170% – I’ll take 163%).

  1. The data claim is missing the context that these results are for “underprepared students,” who exhibited much higher gains than prepared students. That’s a great result for the U-Pace program, but it is also important context to include.
  2. The program is an instructional change, moving from large lecture classes to a self-paced, mastery-learning approach. That is the intervention, not the use of the LMS. In fact, D2L was the LMS used in both the control group and the U-Pace treatment group.
  3. The program goes out of its way to call out the minimal technology needed to adopt the approach, and they even list Blackboard, Desire2Learn, and Moodle as examples of LMS’s that work with the following conditions:

[Screenshot: U-Pace LMS requirements]

This is an instructional approach that claims to be LMS neutral with D2L’s Brightspace used in both the control group and treatment group, yet D2L positions the results as proof that Brightspace gets results! It’s wonderful that Brightspace LMS worked during the test and did not get in the way, but that is a far cry from Brightspace “delivering results”.

The Pattern

We have to now add these two cases to the Lone Star College and LeaP examples. In all cases, there is a pattern.

  1. D2L makes a marketing claim implying that their LMS Brightspace delivers results, citing academic outcomes data without supporting data or references.
  2. I contact school or research group to learn more.
  3. Data is either misleading (treatment group is not LMS usage but instead instructional approach, adaptive learning technology, or student support software) or just plain wrong (with data taken down).
  4. In all cases, the results could have been presented honestly, showing the appropriate context, links for further reading, and explanation of the LMS role. But they were not presented honestly.
  5. e-Literate blog post almost writes itself.
  6. D2L moves on to make their next claim, with no explanations.

I understand that other ed tech vendors make marketing claims that cannot always be tied to reality, but these examples cross a line. They misuse and misrepresent academic outcomes data – whether from public research or internal research – and essentially take credit for their technology “delivering results”.

This is the misuse of someone else’s data for corporate gain. Institutional data. Student data. That is far different than using overly-positive descriptions of your own data or subjective observations. That is wrong.

The Offer

For D2L company officials, I have an offer.

  1. If you have answers or even corrections about these issues, please let us know through your own blog post or comments to this blog.
  2. If you find any mistakes in my analysis, I will write a correction post.
  3. We are happy to publish any reply you make here on e-Literate.
Notes:

  1. Their web page does not allow archiving with the Wayback Machine, but I captured screenshots in anticipation of this move.
  2. While I assume this claim derives from U-Pace, I am not sure. It is the closest example of real data that I could find, thanks to a helpful tip from UW-M staff. I’ll give D2L the benefit of the doubt despite their lack of reference.
  3. And really, D2L marketing staff should learn how to link to external sources. It’s good Internet practice.


U of Phoenix: Losing hundreds of millions of dollars on adaptive-learning LMS bet

Tue, 2015-06-30 09:17

By Phil Hill

It would be interesting to read (or write) a post mortem on this project some day.

Two and a half years ago I wrote a post describing the University of Phoenix’s investment of a billion dollars in new IT infrastructure, including hundreds of millions of dollars spent on a new, adaptive-learning LMS. In another post I described a ridiculous patent awarded to Apollo Group, parent company of U of Phoenix, that claimed ownership of adaptive activity streams. Beyond the patent, Apollo Group also purchased Carnegie Learning for $75 million as part of this effort.

And that’s all going away, as described in this morning’s Chronicle article on the company’s plan to shrink to just 150,000 students (from a high of 460,000 several years ago).

And after spending years and untold millions on developing its own digital course platform that it said would revolutionize online learning, Mr. Cappelli said the university would drop its proprietary learning systems in favor of commercially available products. Many Apollo watchers had long expected that it would try to license its system to other colleges, but that never came to pass.

I wonder what the company will do with the patent and with Carnegie Learning assets now that they’re going with commercial products. I also wonder who is going to hire many of the developers. I don’t know the full story, but it is pretty clear that even with a budget of hundreds of millions of dollars and adjunct faculty with centralized course design, the University of Phoenix did not succeed in building the next generation learning platform.

Update: Here is the full quote from the earnings call:

Fifth. We plan to move away from certain proprietary and legacy IT systems to more efficiently meet student and organizational needs over time. This means transitioning an increased portion of our technology portfolio to commercial software providers, allowing us to focus more of our time and investment on educating and student outcomes. While Apollo was among the first to design an online classroom and supporting system, in today’s world it’s simply not as efficient to continue to support complicated, custom-designed systems particularly with the newer quality systems we have more recently found with of the self providers that now exist within the marketplace. This is expected to reduce costs over the long term, increase operational efficiency and effectiveness while still very much supporting a strong student experience.


ASU Is No Longer Using Khan Academy In Developmental Math Program

Mon, 2015-06-29 17:37

By Phil Hill

In these two episodes of e-Literate TV, we shared how Arizona State University (ASU) started using Khan Academy as the software platform for a redesigned developmental math course[1] (MAT 110). The program was designed in Summer 2014 and ran through Fall 2014 and Spring 2015 terms. Recognizing the public information shared through e-Literate TV, ASU officials recently informed us that they had made a programmatic change and will replace their use of Khan Academy software with McGraw-Hill’s LearnSmart software that is used in other sections of developmental math.

To put this news in context, here is the first episode’s mention of Khan Academy usage.

Phil Hill: The Khan Academy program that you’re doing, as I understand, it’s for general education math. Could you give just a quick summary of what the program is?

Adrian Sannier: Absolutely. So, for the last three-and-a-half years, maybe four, we have been using a variety of different computer tutor technologies to change the pedagogy that we use in first-year math. Now, first-year math begins with something we call “Math 110.” Math 110 is like if you don’t place into either college algebra, which has been the traditional first-year math course, or into a course we call “college math,” which is your non-STEM major math—if you don’t place into either of those, then that shows you need some remediation, some bolstering of some skills that you didn’t gain in high school.

So, we have a course for that. Our first-year math program encompasses getting you to either the ability to follow a STEM major or the ability to follow majors that don’t require as intense of a math education. What we’ve done is create an online mechanism to coach students. Each student is assigned a trained undergraduate coach under the direction of our instructor who then helps that student understand how to use the Khan Academy and other tools to work on the skills that they show deficit in and work toward being able to satisfy the very same standards and tests that we’ve always used to ascertain whether a student is prepared for the rest of their college work.

Luckily, the episode on MAT 110 focused mostly on the changing roles of faculty members and TAs when using an adaptive software approach, rather than focusing on Khan Academy itself. After reviewing the episode again, I believe that it stands on its own and is relevant even with the change in software platform. Nevertheless, I appreciate that ASU officials were proactive in letting me know about this change, so that we can document it here and in the e-Literate TV transmedia.

The Change

Since the change has not been shared outside of this notification (limiting my ability to do research and analysis), I felt the best approach would be to again interview Adrian Sannier, Chief Academic Technology Officer at ASU Online. Below is the result of an email interview, followed by a short commentary [emphasis added].

Phil Hill: Thanks for agreeing to this interview to update plans on the MAT 110 course featured in the recent e-Literate TV episode. Could you describe the learning platforms used by ASU in the new math programs (MAT 110 and MAT 117 in particular) as well as describe any changes that have occurred this year?

Adrian Sannier: Over the past four years, ASU has worked with a variety of different commercially available personalized math tutors from Knewton, Pearson, McGraw Hill and the Khan Academy applied to 3 different courses in Freshman Math at ASU – College Algebra, College Math and Developmental Math. Each of these platforms has strengths and weaknesses in practice, and the ASU team has worked closely with the providers to identify ways to drive continuous improvement in their use at ASU.

This past year ASU used a customized version of Pearson’s MyMathLab as the instructional platform for College Algebra and College Math. In Developmental Math, we taught some sections using the Khan Academy Learning Dashboard and others using McGraw Hill’s LearnSmart environment.

This Fall, ASU will be using the McGraw Hill platform for Developmental Math and Pearson’s MyMathLab for College Algebra and College Math. While we also achieved good results with the Khan Academy this past year, we weren’t comfortable with our current ability to integrate the Khan product at the institutional level.

ASU is committed to the personalized adaptive approach to Freshman mathematics instruction, and we are continuously evaluating the product space to identify the tools that we feel will work best for our students.

Phil Hill: I presume this means that ASU’s usage of McGraw Hill’s LearnSmart for Developmental Math will continue and also expand to essentially replace the usage of Khan Academy. Is this correct? If so, what do you see as the impact on faculty and students involved in the course sections that previously used Khan Academy?

Adrian Sannier: That’s right Phil. Based on our experience with the McGraw Hill product we don’t expect any adverse effects.

Phil Hill: Could you further explain the comment “we weren’t comfortable with our current ability to integrate the Khan product at the institutional level”? I believe that Khan Academy’s API approach is more targeted to B2C [business-to-consumer] applications, allowing individual users to access information rather than B2B [business-to-business] enterprise usage, whereas McGraw Hill LearnSmart and others are set up for B2B usage from an API perspective. Is this the general issue you have in mind?

Adrian Sannier: That’s right Phil. We’ve found that the less cognitive load an online environment places on students the better results we see. Clean, tight integrations into the rest of the student experience result in earlier and more significant student engagement, and better student success overall.

Notes

Keep in mind that ASU is quite protective of its relationships with multiple software vendors and goes out of its way not to publicly complain or put its partners in a bad light, even if a change is required as in MAT 110. Adrian does make it clear, however, that the key issue is the ability to integrate reliably between multiple systems. As noted in the interview, I think a related issue here is a mismatch of business models. ASU wants enterprise software applications that it can integrate with deeply through a reliable API, allowing a student experience without the undue “cognitive load” of navigating between applications. Khan Academy’s core business model relies on people navigating to the portal on its website, and that does not fit the enterprise software model. I have not interviewed Khan Academy, but this is how it looks from the outside.

There is another point to consider here. While I can see Adrian’s argument that “we don’t expect any adverse effects” in the long run, I do think there are switching costs in the short term. As Sue McClure told me via email, as an instructor she spent significantly more time than usual on this course due to course design and ramping up the new model. In addition, ASU added 11 TAs for the course sections using Khan Academy.  These people have likely learned important lessons about supporting students in an adaptive learning setting, but a great deal of their Khan-specific time is now gone. Plus, they will need to spend time learning LearnSmart before getting fully comfortable in that environment.

Unfortunately, with the quick change, we might not see hard data to determine if the changes were working. I believe ASU’s plan was to analyze and publish the results from this new program after the third term, which now will not happen.

If I find out more information, I’ll share it here.

  1. The terms remedial math and developmental math are interchangeable in this context.


Google Classroom Addresses Major Barrier To Deeper Higher Ed Adoption

Mon, 2015-06-29 11:28

By Phil Hill

A year ago I wrote about Google Classroom, speculating whether it would affect the institutional LMS market in higher education. My initial conclusion:

I am not one to look at Google’s moves as the end of the LMS or a complete shift in the market (at least in the short term), but I do think Classroom is significant and worth watching. I suspect this will have a bigger impact on individual faculty adoption in higher ed or as a secondary LMS than it will on official institutional adoption, at least for the next 2 – 3 years.

And my explanation [emphasis added]:

But these features are targeted at innovators and early adopter instructors who are willing to fill in the gaps themselves.

  1. The course creation, including setting up of rosters, is easy for an instructor to do manually, but it is manual. There has been no discussion that I can find showing that the system can automatically create a course, including roster, and update over the add / drop period.
  2. There is no provision for multiple roles (student in one class, teacher in another) or for multiple teachers per class.
  3. The integration with Google Drive, especially with Google Docs and Sheets, is quite intuitive. But there is no provision for PDF or MS Word docs or even publisher-provided courseware.
  4. There does not appear to be a gradebook – just grading of individual assignments. There is a button to export grades, and I assume that you can combine all the grades into a custom Google Sheets spreadsheet or even pick a GAFE gradebook app. But there is no consistent gradebook available for all instructors within an institution to use and for students to see consistently.

Well, today Google announced a new Google Classroom API that directly addresses the limitation in bullet #1 above and indirectly addresses #4.

The Classroom API allows admins to provision and manage classes at scale, and lets developers integrate their applications with Classroom. Until the end of July, we’ll be running a developer preview, during which interested admins and developers can sign up for early access. When the preview ends, all Apps for Education domains will be able to use the API, unless the admin has restricted access.

By using the API, admins will be able to provision and populate classes on behalf of their teachers, set up tools to sync their Student Information Systems with Classroom, and get basic visibility into which classes are being taught in their domain. The Classroom API also allows other apps to integrate with Classroom.

Google directly addresses the course roster management in their announcement; in fact, this appears to be the primary use case they had in mind. I suspect this by itself will have a big impact in the K-12 market (would love to hear John Watson’s take on this if he addresses it on his blog), making it far more manageable for district-wide and school-wide Google Classroom adoptions.
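To give a concrete sense of what that admin-side provisioning could look like, here is a minimal sketch using the Classroom API’s Python client. The domain, addresses, and course details are placeholders, and the credential setup is simplified (a real deployment would rely on domain-wide delegation and pull the roster from the SIS rather than a hard-coded list); this is an illustration of the flow Google describes, not a recommended implementation.

```python
# Minimal sketch (placeholders throughout) of provisioning a Classroom course
# on behalf of a teacher and syncing a roster, per the admin use case above.
from google.oauth2 import service_account
from googleapiclient.discovery import build

SCOPES = [
    "https://www.googleapis.com/auth/classroom.courses",
    "https://www.googleapis.com/auth/classroom.rosters",
]

# Service-account credentials with domain-wide delegation, acting as an admin.
creds = service_account.Credentials.from_service_account_file(
    "admin-service-account.json", scopes=SCOPES
).with_subject("admin@example.edu")

service = build("classroom", "v1", credentials=creds)

# Create the course on behalf of the teacher who will own it.
course = service.courses().create(body={
    "name": "Introductory Biology",
    "section": "Section 01",
    "ownerId": "teacher@example.edu",
}).execute()

# Enroll students; in practice this list would come from an SIS query.
for student_email in ["student1@example.edu", "student2@example.edu"]:
    service.courses().students().create(
        courseId=course["id"],
        body={"userId": student_email},
    ).execute()
```

The point is less the specific calls than the shift in who drives them: with the API, the institution (or an SIS sync tool) can do this centrally instead of each teacher building rosters by hand.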

The potential is also there for a third party to develop and integrate a viable grade book application available to an entire institution. While this could partially be done by the Google Apps for Education (GAFE) ecosystem, that is a light integration that doesn’t allow deep connection between learning activities and grades. The new API should allow for deeper integrations, although I am not sure how much of the current Google Classroom data will be exposed.

I still do not see Google Classroom as a current threat to the higher ed institutional LMS market, but it is getting closer. Current ed tech vendors should watch these developments.

Update: Changed Google Apps for Education acronym from GAE to GAFE.


How Student and Faculty Interviews Were Chosen For e-Literate TV Series

Mon, 2015-06-29 06:47

By Phil Hill

As part of our e-Literate TV set of case studies on personalized learning, Michael and I were fully aware that Arizona State University (ASU) was likely to generate the most controversy due to ASU’s aggressive changes to the concept of a modern research university. As we described in this introductory blog post:

Which is one reason why we’re pretty excited about the release of the first two case studies in our new e-Literate TV series on the trend of so-called “personalized learning.” We see the series as primarily an exercise in journalism. We tried not to hold onto any hypothesis too tightly going in, and we committed to reporting on whatever we found, good or bad. We did look for schools that were being thoughtful about what they were trying to do and worked with them cooperatively, so it was not the kind of journalism that was likely to result in an exposé. We went in search of the current state of the art as practiced in real classrooms, whatever that turned out to be and however well it is working.

As part of the back-and-forth discussions with the ASU case study release, John Warner brought up a good point in response to my description that our goal was “Basically to expose, let you form own opinions”.

@PhilOnEdTech Can't form opinion without a more thorough accounting. Ex. How did you choose students and fac. to talk to?

— John Warner (@biblioracle) June 1, 2015

Can’t form opinion without a more thorough accounting. Ex. How did you choose students and fac. to talk to?

Let’s explore this subject for the four case studies already released. Because the majority of interviewees shared positive experiences in our case studies, I’ll highlight some of the skeptical, negative or cautionary views that were captured in these case studies.

Our Approach To Lining Up Interviews


When we contacted schools to line up interviews on campus, it was natural to expect that the staff would tend to find the most positive examples of courses, faculty, and students to share. As described above, we admit that we looked for schools with thoughtful approaches (and therefore courses), but we needed to try to expose some contrary or negative views as well. This is not to play gotcha journalism nor to create a false impression of equally good / equally bad perspectives. But it is important to capture that not everyone is pleased with the changes, and these skeptics are a good source for exposing risks and issues to watch. Below is the key section of the email sent to each school we visited.

The Case Study Filming Process
Each case study will include a couple of parts. First, we will interview the college leadership—whoever the school deems appropriate—to provide an overview of the school, its mission and history, its student body, and how “personalized education” (however that school defines the term) fits into that picture. If there are particular technology-driven initiatives related to personalized learning, then we may talk about those a bit. Second, we will want to talk with some teachers and students, probably in a mixed group. We want to get some sample reactions from them about what they think is valuable about the education they get (or provide) at the school, how “personalization” fits into that, and how, when, and why they use or avoid technology in the pursuit of the educational goals. We’re not trying either to show “best/worst” here or to provide an “official” university position, but rather to present a dialog representing some of the diverse views present on the campus.

Campus Input on the Filming
In order for the project to have integrity, MindWires must maintain editorial independence. That said, our goal for the case studies is to show positive examples of campus communities that are authentically engaged in solving difficult educational challenges. We are interested in having the participants talk about both successes and failures, but our purpose in doing so is not to pass judgment on the institution but rather to enable viewers to learn from the interviewees’ experiences. We are happy to work closely with each institution in selecting the participants and providing a general shape to the conversation. While we maintain editorial control over the final product, if there are portions of the interviews that make the institution uncomfortable then we are open to discussing those issues. As long as the institution is willing to allow an honest reflection of their own challenges and learning experiences as an educational community, then we are more than willing to be sensitive to and respectful of concerns that the end product not portray the institution in a way that might do harm to the very sort of campus community of practice that we are trying to capture and foster with our work.

As an example of what “willing to be sensitive to and respectful of concerns” means in practice, one institution expressed a concern that they did not want their participation in this personalized learning series to be over-interpreted as a full-bore endorsement of pedagogical change by the administration. The school was at the early stages of developing a dialog with faculty on where they want to go with digital education, and the administration did not want to imply that they already knew the direction and answers. We respected this request and took care to not imply any endorsement of direction by the administration.

Below are some notes on how this played out at several campuses.

Middlebury College

As described in our introductory blog post:

Middlebury College, the first school we went to when we started filming, was not taking part in any cross-institutional (or even institutional) effort to pilot personalized learning technologies and not the kind of school that is typically associated with the “personalized learning” software craze. Which is exactly why we wanted to start there. When most Americans think of the best example of a personalized college education, they probably think of an elite New England liberal arts college with a student/teacher ratio of under nine to one. We wanted to go to Middlebury because we wanted a baseline for comparison. We were also curious about just what such schools are thinking about and doing with educational technologies.

Middlebury College staff helped identify one faculty member who is experimenting with technology use in his class with some interesting student feedback, which we highlighted in Middlebury Episode 2. They also found two faculty members for a panel discussion along with two students who have previously expressed strong opinions on where technology does and does not fit in their education. The panel discussion was highlighted in Middlebury Episode 3.

As this case study did not have a strong focus on a technology-enabled program, we did not push the issue of finding skeptical faculty or students and instead showed that technology was not missing from the campus’s consideration of how to improve education.

The administration did express some cautionary notes on the use of technology to support “personalized learning” as captured in this segment:

Essex County College

By way of contrast, our second case study was at Essex County College, an urban community college in Newark, New Jersey. This school has invested approximately $1.2 million of its own money along with a $100 thousand Gates Foundation grant to implement an adaptive learning remedial math course designed around self-regulated learning. Our case study centered on this program specifically.

Of course, the place where you really expect to see a wide range of incoming skills and quality of previous education is in public colleges and universities, and at community colleges in particular. At Essex County College, 85% of incoming students start in the lowest level developmental math course. But that statistic glosses over a critical factor, which is that there is a huge range of skills and abilities within that 85%. Some students enter almost ready for the next level, just needing to brush up on a few skills, while others come in with math skills at the fourth grade level. On top of that, students come in with a wide range of metacognitive skills. Some of them have not yet learned how to learn, at least not this subject in this context.

Given the controversial nature of using adaptive learning software in a class, we decided to include a larger number of student voices in this case study. Douglas Walcerz, the faculty and staff member who designed the course, gave us direct access to the entire class. We actively solicited students to participate in interviews, as one class day was turned over to e-Literate TV video production and interviews, with the rest of the class watching their peers describe their experiences.

As we did the interviews, almost all students had a very positive view of the new class design, particularly the self-regulated learning aspect with the resultant empowerment they felt. What was missing were the voices of students who were not comfortable with the new approach. For the second day we actively solicited students who could provide a negative view. The result was shared in this interview:

As for faculty, it was easier to find some skeptical or cautionary voices, which we highlighted here.

As described above, our intent was not to present a false balance but rather to include diverse viewpoints to help other schools know the issues to explore.

Arizona State University

At ASU we focused on two courses in particular, Habitable Worlds highlighted in episode 2 and remedial math (MAT 110) using Khan Academy software highlighted in episode 3.

We did have some difficulty getting on-campus student interviews due to both of these being online courses. For MAT 110 we did find one student who expressed both positive and negative views on the approach, as shown in this episode.

Empire State College

Like ASU, Empire State College presented a challenge for on-campus video production given the nature of its all-online courses. We worked with ESC staff to line up students for interviews, with the best stories coming from the effects of prior learning assessment on students.

It was easier and more relevant to explore the different perspectives on personalized learning from faculty and staff themselves, as evidenced by the following interview. ESC offered him up–proudly–knowing that he would be an independent voice. They understood what we meant in that email and were not afraid to show the tensions they are wrestling with on-camera. Not every administration will be as brave as ESC’s, but we are finding that spirit to be the norm rather than the exception.

Upcoming Episodes

It’s also worth pointing out the role of selecting colleges in the first place, which is not just about diversity. We know that different schools are going to have different perspectives, and we pick them carefully to set up a kind of implicit dialog. We know, for example, that ASU is going to give a full-throated endorsement of personalized learning software used to scale. So we balance them against Empire State College, which has always been about one-on-one mentoring in their design.

Hopefully this description of our process will help people like John Warner who need more information before forming their own opinion. At the least, consider this further documentation of the process. We are planning to release one additional case study – the University of California at Davis in early July – as well as two analysis episodes. We’ll share more information once new episodes are released.


Prior Learning Assessments Done Right

Sun, 2015-06-28 21:53

By Michael Feldstein

This post has nothing to do with educational technology but everything to do with the kind of humane and truly personal education that we should be talking about when we throw around phrases like “personalized education.” Prior Learning Assessments (PLAs) go hand-in-glove with the trendy Competency-Based Education (CBE). The basic idea is that you test students on what they have learned in their own lives and give them credit toward their degrees based on what they already know. But it is often executed in a fairly mechanical way. Students are tested against the precise curriculum or competencies that a particular school has chosen for a particular class. Not too long ago, I heard somebody say, “We don’t need more college-ready students; we need more student-ready colleges.” In a logical and just world, we would start with what the student knows, rather than with what one professor or group of professors decided one semester would be “the curriculum,” and we would give the student credit for whatever college-level knowledge she has.

It turns out that’s exactly what Empire State College (ESC) does. When we visited the college for an e-Literate TV case study, we learned quite a bit about this program and, in particular, about their PLA program for women of color.

But before we get into that, it’s worth backing up and looking at the larger context of ESC as an institution. Founded in 1971, the school was focused from the very beginning on “personalized learning”—but personalized in a sense that liberal intellectuals from the 1960s and 1970s would recognize and celebrate. Here’s Alan Mandell, who was one of the pioneering members of the faculty at ESC, on why the school has “mentors” rather than “professors”:

Alan Mandell: Every single person is called a mentor.

It’s valuable because of an assumption that is pretty much a kind of critique of the hierarchical model of teaching and learning that was the norm and remains the norm where there is a very, very clear sense of a professor professing to a student who is kind of taking in what one has to say.

Part of the idea of Empire State, and other institutions, more and more, is that there was something radically wrong with that. A, that students had something to teach us, as faculty, and that faculty had to learn to engage students in a more meaningful way to respond to their personal, academic, professional interests. It was part of the time. It was a notion of a kind of equality.

This was really interesting to me actually because I came here, and I was 25 years old. Every single student was older than I was, so the idea of learning from somebody else was actually not very difficult at all. It was just taken for granted. People would come with long professional lives, doing really interesting things, and I was a graduate student.

I feel, after many years, that this is still very much the case—that this is a more equal situation of faculty serving as guides to students who bring in much to the teaching and learning situation.

Unlike some of the recent adoptions of PLA, which are tied to CBE and the idea of getting students through their degree programs quickly, Empire State College approaches prior learning assessment in very much the spirit that Alan describes above. Here’s Associate Dean Cathy Leaker talking about their approach:

Cathy Leaker: What makes Empire State College unique, even in the prior learning assessment field, is that many institutions that do prior learning assessment do what’s called a “course match.” In other words, a student would have to demonstrate—for example, if they wanted to claim credit for Introduction to Psychology, they would look at the learning objectives of the Introduction to Psychology course, and they would match their learning to that. We are much more open-ended, and as an institution, we really believe that learning happens everywhere, all the time. So, we try to look at learning organically, and we don’t assume that we already know exactly what might be required.

One of my colleagues, Elana Michelson, works on prior learning assessment. She started working in South Africa where they were—there it’s called “recognition for prior learning.” And she gives the example of some of the people who were involved in bringing down Apartheid, and how they, sort of as an institution working with the government, thought it might be ridiculous to ask those students to demonstrate problem solving skills, right? How the institution might look at problem-solving skills, and then if there was a strict match, they would say, “Well, wait a second. You don’t have it,” and yet, they’re activists that brought down the government and changed the world.

Those are some examples of why we really think we need to look at learning organically.

Students like Melinda come to us, talk about their learning, and then we try to help them identify it, come up with a name for it, and determine an amount of credit before submitting it for evaluation.

This is not personalized in the sense of trying to figure out which institution-defined competencies you can check off on your way to an institution-defined collection of competencies that they call a “degree.” Rather, it’s an effort to have credentialed experts look at what you’ve done and what you know to find existing strengths that deserve to be recognized and credentialed. The Apartheid example is a particularly great one because it shows that traditional academic institutions may be poorly equipped to recognize and certify real-world demonstrations of competencies, particularly among people who come from disadvantaged or “marked” backgrounds. Here’s ESC faculty member Frances Boyce talking about why the school recognized a need to develop a particular PLA program for women of color:

Frances Boyce: Our project, Women of Color and Prior Learning Assessment, is based on a 2010 study done by Rebecca Klein-Collins and Richard Olson, “Fueling the Race to Success.” That found that students who do prior learning assessments are two and a half times more likely to graduate. When you start to unpack that data and you look at the graduation rates for students of color, for African American students the graduation rate increases fourfold. For Latino students it increases eightfold. Then, when you look at it in terms of gender, a woman who gets one to six credits in prior learning assessment will graduate more quickly than her male counterpart given the same amount of credit.

That seemed very important to us, and we decided, “Well, let’s see what we could do to improve the uptake rate for women of color.” So, we designed four workshops to help women of color, not only identify their learning—the value of their learning—but identify what they bring with them to the institution.

What’s going on here? Why is PLA more impactful than average for women and people of color? In addition to the fact that our institutions are not always prepared to recognize real-world knowledge and skills, as in the Apartheid example, people in non-privileged positions in our society are tacitly taught that college is not “for them.” That they don’t have what it takes to succeed there. By recognizing that they have, in fact, already acquired college-level skills and knowledge, PLA helps them get past the insults to their self-image and dignity and helps them to envision themselves as successful college graduates. Listen to ESC student Melinda Wills-Stallings’ story:

Michael Feldstein: I’m wondering if you can tell me, do you remember a particular moment, early on, when the lightbulb went off and you said to yourself, “Oh, that thing that’s part of my life counts”?

Melinda Wills-Stallings: I think when I was talking to my sons about the importance of their college education and how they couldn’t be successful without it and them saying to me, “But, Mom, you are successful. You run a school. You run a business.” To be told on days that I wasn’t there, the business wasn’t running properly or to be told by parents, “Oh, my, God. We’re so glad you’re back because we couldn’t get a bill, we couldn’t get a statement,” or, “No one knew how to get the payroll done.”

That’s when I knew, OK, but being told by an employer who said I wasn’t needed and I wasn’t relied on, I came to realize that it flipped on me. And I realized that’s what I had been told to keep me in my place, to keep me from aspiring to do the things that I knew that I was doing or I could do.

The lightbulb for me was when we were doing the interviews and Women of Color PLA, and Frances said to me, “That’s your navigational capital.” We would do these roundtables where you would interview with one mentor, and then you would go to another table. Then I went to another table, and she said, “Well, what do you hope to do with your college degree?” And I said, “I hope to pay it forward: to go continue doing what I love to do, but to come back to other women with like circumstances and inspire them and encourage them and support them to also getting their college degrees and always to be better today than I was yesterday, so that’s your aspirational capital.” And I went, “Oh, OK.” So, I have aspirational capital also, and then go to the next table and then I was like, I couldn’t wait to get to the next table because every table I went to, I walked away with one or two prior learning assessments.

And then to go home and to be able to put it into four- or five-page papers to submit that essay and to have it recognized as learning.

I was scared an awful lot of times from coming back to school because I felt, after I graduated high school and started college and decided I wanted to get married and have a family, I had missed the window to come back and get my college education. The light bulb was, “It’s never too late,” and that’s what I tell women who ask me, and I talk to them all the time about our school and our program. Like, “It’s never too late. You can always come back and get it done.”

Goals and dreams don’t have caps on them even though where I was, my employer had put a cap on where I could go on my salary and my position. Your goals and dreams don’t have a cap on it, so I think that was the light bulb for me—that it wasn’t too late.

It’s impossible to hear Melinda speak about her journey and not feel inspired. She built up the courage to walk into the doors of the college, despite being told repeatedly by her employer that she was not worthy. The PLA process quickly affirmed for her that she had done the right thing. At the same time, I recognize that traditionalists may feel uncomfortable with all this talk of “navigational capital” and “aspirational capital” and so on. Is there a danger of giving away degrees like candy and thus devaluing them? First, I don’t think there’s anything wrong with giving a person a degree certification if they have become genuine experts in a college-appropriate subject through their life experience. In some ways, we are all the Scarecrow, the Tin Man, or the Cowardly Lion, waiting for some wizard to magically convey upon us a symbol that confers legitimacy upon our hard-won skills and attributes and thus somehow makes them more real. But also, a funny thing happens when you treat a formal education as a tool for helping an individual reach her goals rather than a set of boxes that must be checked. Students start thinking about the work that education entails as something that is integral to them achieving those goals rather than a set of obstacles they have to get around in order to get the piece of paper that is the “real” value of college. Listen to ESC student Jessi Colón, a professional dancer who chose not to get all the credits she could have gotten for her dance knowledge because she wanted to focus on what she needed to learn for her next career working in animal welfare:

Jessi Colón: It was little bit tricky especially because I had really come here with the intention of maximizing and capitalizing on all this experience that I had. Part of the prior learning assessment and degree planning process is looking at other schools that may have somewhat relevant programs and trying to match what your learning is to those. As I was looking at other programs outside of New York or at other small, rural schools that do these little animal programs, I found that there were a lot of classes that I really wanted to take.

One of the really amazing things about Empire State is that they can also give you individualized courses, and I did a lot of those. So, once I saw these at other schools, I was like, “Man, I really want to take a class in animal-assisted therapy, and would I like to really, really indulge myself and do that or should I write another essay on jazz dance composition?” I knew that one would be more of a walk in the park than the other, but I was really excited about my degree and having this really personal degree allowed me to get excited about it. So, it made sense, though hard to let go of that prior learning in order to opt for the classes.

I could’ve written 20 different dance essays, but I wanted to really take a lot of classes. So, I filled that with taking more classes relevant to my degree, and then ended up only writing, I think, one or two dance-relevant essays.

It turns out that if you start from the assumption that the education they are coming for—not the certification, but the learning process itself—can and should have intrinsic value to them as a tool toward pursuing their own ambitions, then people step up. They aspire to be more. They take on the work. If the education is designed to help them by recognizing how far they have come before they walk in the door and focusing on what they need to learn in order to do whatever it is they aspire to do after they leave, then students often come to see that gaming the system is just cheating themselves.

There are many ways to make schooling more personal but, in my opinion, what we see here is one of the deepest and most profound. This is what a student-ready college looks like. And in order to achieve it, there must be an institutional commitment to it that precedes the adoption of any educational technology. The software is just an enabler. If a college community collectively commits to true personalization, then technology can help with that. If the community does not make such a commitment, then “personalized learning” software might help achieve other educational ends, but it will not personalize education in the sense that we see here.

I’m going to write a follow-up post about how ESC is using that personalized learning software in their context, but you don’t have to wait to find out; you can just watch the second episode of the case study. While you’re at it, you should go back and watch the full ETV episode from which the above clips were excerpted. In addition to watching more great interview content, you can find a bunch of great related links to content that will let you dig deeper into many of the topics covered in the discussions.


Release of Empire State College Case Study on e-Literate TV

Fri, 2015-06-26 15:03

By Phil Hill

Today we are thrilled to release the fourth case study in our new e-Literate TV series on “personalized learning”. In this series, we examine how that term, which is heavily marketed but poorly defined, is implemented on the ground at a variety of colleges and universities.

We are adding two episodes from Empire State College (ESC), a school that was founded in 1971 as part of the State University of New York. Through a lot of one-on-one, student-faculty interactions, the school was designed to serve the needs of students who don’t do well at traditional colleges. What problems are they trying to solve? How do students view some of the changes? What role does the practice of granting prior-learning assessments (PLA) play in non-traditional students’ education?

You can see all the case studies (each with two or three episodes) at the series link, and you can access individual episodes below.

ESC Case Study: Personalized Prior Learning Assessments

ESC Case Study: Personalizing Personalization

e-Literate TV, owned and run by MindWires Consulting, is funded in part by the Bill & Melinda Gates Foundation. When we first talked about the series with the Gates Foundation, they agreed to give us the editorial independence to report what we find, whether it is good, bad, or indifferent.

As with the previous series, we are working in collaboration with In the Telling, our partners providing the platform and video production. Their Telling Story platform allows people to choose their level of engagement, from just watching the video to accessing synchronized transcripts and transmedia. We have added content directly to the timeline of each video, bringing up further references, like e-Literate blog posts or relevant scholarly articles, in context. With In The Telling’s help, we are crafting episodes that we hope will be appealing and informative to those faculty, presidents, provosts, and other important college and university stakeholders who are not ed tech junkies.

We will release one more case study in early July, and we also have two episodes discussing the common themes we observed on the campuses. We welcome your feedback, either in comments or on Twitter using the hashtag #eLiterateTV.

Enjoy!


68% of Statistics Are Meaningless, D2L Edition

Wed, 2015-06-24 17:27

By Michael Feldstein

Two years ago, I wrote about how D2L’s analytics package looked serious and potentially ground-breaking, but that there were serious architectural issues with the underlying platform that were preventing the product from working properly for customers. Since then, we’ve been looking for signs that the company has dealt with these issues and is ready to deliver something interesting and powerful. And what we’ve seen is…uh…

…uh…

Well, the silence has ended. I didn’t get to go to FUSION this year, but I did look at the highlights of the analytics announcements, and they were…

…they were…

OK, I’ll be honest. They were incredibly disappointing in almost every way possible, and good examples of a really bad pattern of hype and misdirection that we’ve been seeing from D2L lately.

You can see a presentation of the “NEW Brightspace Insights(TM) Analytics Suite” here. I would embed the video for you but, naturally, D2L uses a custom player from which they have apparently stripped embedding capabilities. Anyway, one of the first things we learn from the talk is that, with their new, space-age, cold-fusion-powered platform, they “deliver the data to you 20 times faster than before.” Wow! Twenty times faster?! That’s…like…they’re giving us the data even before the students click or something. THEY ARE READING THE STUDENTS’ MINDS!

Uh, no. Not really.

A little later on in the presentation, if you listen closely, you’ll learn that D2L was running a batch process to update the data once every 24 hours. Now, two years after announcing their supposed breakthrough data analytics platform, they are proud to tell us that they can run a batch process every hour. As I write this, I am looking at my real-time analytics feed on my blog, watching people come and go. Which I’ve had for a while. For free. Of course, saying it that way, a batch process every hour, doesn’t sound quite as awesome as

TWENTY TIMES FASTER!!!!!

So they go with that.

There was an honest way in which they could have made the announcement and still sounded great. They could have said something like this:

You know, when LMSs were first developed, nobody was really thinking about analytics, and the technology to do analytics well really wasn’t at a level where it was practical for education anyway. Times have changed, and so we have had to rebuild Brightspace from the inside out to accommodate this new world. This is an ongoing process, but we’re here to announce a milestone. By being able to deliver you regular, intra-day updates, we can now make a big difference in their value to you. You can respond more quickly to student needs. We are going to show you a few examples of it today, but the bigger deal is that we have this new structural capability that will enable us to provide you with more timely analytics as we go.

That’s not a whole lot different in substance than what they actually said. And they really needed to communicate in a hype-free way, because what was the example that they gave for this blazing fast analytics capability? Why, the ability to see if students had watched a video.

Really. That was it.

Now, here again, D2L could have scored real points for this incredibly underwhelming example if they had talked honestly about Caliper and its role in this demo. The big deal here is that they are getting analytics not from Brightspace but from a third-party tool (Kaltura) using IMS Caliper. Regular readers know that I am a big fan of the standard-in-development. I think it’s fantastic that an LMS company has made an early commitment to implement the standard and is pushing it hard as a differentiator. That can make the difference between a standard getting traction or remaining an academic exercise. How does D2L position this move? From their announcement:

With our previous analytics products, D2L clients received information on student success even before they took their first test. This has helped them improve student success in many ways, but the data is limited to Brightspace tools. The new Brightspace Insights is able to aggregate student data, leveraging IMS Caliper data, across a wide variety of learning tools within an institution’s technology ecosystem.

We’ve seen explosive growth in the use of external learning tools hooked into Brightspace over the past eighteen months. In fact, we are trending toward 200% growth over 2014. [Emphasis added.] That’s a lot of missing data.

This helps create a more complete view of the student. All of their progress and experiences are captured and delivered through high performance reports, comprehensive data visualizations, and predictive analytics.

Let’s think about an example like a student’s experiences with publisher content and applications. Until now, Brightspace was able to capture final grades but wouldn’t track things like practice quizzes or other assessments a student has taken. It wouldn’t know if a student didn’t get past the table of contents in a digital textbook. Now, the new Brightspace Insights captures all of this data and creates a more complete, living, breathing view of a student’s performance.

This is a big milestone for edtech. No other LMS provider is able to capture data across the learning technology ecosystem like this. [Emphasis added.]

I have no problem with D2L crowing about being early to market with a Caliper implementation. But let’s look at how they positioned it. First, they talked about 200% growth in use of external learning tools in 2015. But what does that mean? Going from one tool to three tools? And what kind of tools are they? And what do we know about how they are being used? OK, on that last question, maybe analytics are needed to answer it. But the point is that D2L has a pattern of punctuating every announcement or talk with an impressive-sounding but meaningless statistic to emphasize how awesome they are. Phil recently caught John Baker using…questionable retention statistics in a speech he gave. In that case, the problem wasn’t that the statistic itself was meaningless but rather that there was no reason to believe that D2L had anything to do with the improvement in the case being cited. And then there’s the sleight of hand that Phil just called out regarding their LeaP marketing. It’s not as bad as some of the other examples, in my opinion, but still disturbingly consistent with the pattern we are seeing. I am starting to suspect that somebody in the company literally made a rule: Every talk or announcement must have a statistic in it. Doesn’t matter what the statistic is, or whether it means anything. Make one up if you have to, but get it in there.

But back to analytics. The more egregious claim in the quote above is that “no other LMS provider is able to capture data across the learning technology ecosystem like this [example that we just gave],” because D2L can’t either yet. They have implemented a pre-final draft of a standard that requires implementation on both sides in order to work. I don’t know of any publishers who have announced they are ready to provide data in the way described in D2L’s example. In fact, there are darned few app providers of any kind who are there yet. (Apparently, Kaltura is one of them.) Again, this could have been presented honestly in a way that made D2L look fantastic. Implementing first puts them in a leadership position, even if that leadership will take a while to pay practical dividends for the customer. But they went for hype instead.
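To make the plumbing a little more concrete, here is a rough sketch of the kind of event a third-party video tool could emit over a Caliper-style interface for an LMS analytics store to consume. The field names are simplified approximations of the draft specification, and every value is invented; this is not what D2L or Kaltura actually implemented.

    # Simplified approximation of a Caliper-style media event emitted by a
    # third-party video tool and consumed by the LMS's analytics store.
    # Field names and values are illustrative only; they do not reproduce
    # the draft specification that D2L and Kaltura were implementing.
    video_viewed_event = {
        "actor": "https://example.edu/users/4821",           # the student
        "action": "Watched",
        "object": {
            "type": "VideoObject",
            "id": "https://video.example.com/media/limits-week2",
            "duration": "PT9M30S",
        },
        "target": {"type": "MediaLocation", "currentTime": "PT9M30S"},  # watched to the end
        "edApp": "https://video.example.com",                 # the tool, not the LMS
        "group": "https://example.edu/sections/chem-101-03",  # ties the event back to a course
        "eventTime": "2015-07-02T05:56:00Z",
    }

The value is not the video-watching example itself; it is that the LMS did not have to know anything about Kaltura's internals to receive it.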

I can’t remember the last time I read one of D2L’s announcements without rolling my eyes. I used to have respect for the company, but now I have to make a conscious effort not to dismiss any of their pronouncements out-of-hand. Not because I think it’s impossible that they might be doing good work, but because they force me to dive into a mountain of horseshit in the hopes of finding a nugget of gold at the bottom. Every. Single. Time. I’m not sure how much of the problem is that they have decided that they need to be disingenuous because they are under threat from Instructure or under pressure from investors and how much of it is that they are genuinely deluding themselves. Sadly, there have been some signs that at least part of the problem is the latter situation, which is a lot harder to fix. But there is also a fundamental dishonesty in the way that these statistics have been presented.

I don’t like writing this harshly about a company—particularly one that I have had reason to praise highly in the past. I don’t do it very often. But enough is enough already.

 

The post 68% of Statistics Are Meaningless, D2L Edition appeared first on e-Literate.

About The D2L Claim Of BrightSpace LeaP And Academic Improvements

Wed, 2015-06-24 16:07

By Phil HillMore Posts (333)

Recently I wrote a post checking up on a claim by D2L that seems to imply that their learning platform leads to measurable improvements in academic performance. The genesis of this thread is a panel discussion at the IMS Global conference where I argued that LMS usage in aggregate has not improved academic performance but is important, or even necessary, infrastructure with a critical role. Unfortunately, I found that D2L’s claim from Lone Star was misleading:

That’s right – D2L is taking a program where there is no evidence that LMS usage was a primary intervention and using the results to market and strongly suggest that using their LMS can “help schools go beyond simply managing learning to actually improving it”. There is no evidence presented[2] of D2L’s LMS being “foundational” – it happened to be the LMS during the pilot that centered on ECPS usage.

Subsequently I found a press release at D2L with a claim that appeared to be more rigorous and credible (published on an awful protected web page that prevents select-copy-paste).

D2L Launches the Next Generation of BrightSpace and Strives to Accelerate the Nation’s Path to 60% Attainment

D2L, the EdTech company that created Brightspace, today announces the next generation of its learning platform, designed to develop smarter learners and increase graduation rates. By featuring a new faculty user interface (UI) and bringing adaptive learning to the masses, Brightspace is more flexible, smarter, and easier to use. [snip]

D2L is changing the EdTech landscape by enabling students to learn more with Brightspace LeaP adaptive learning technology that brings personalized learning to the masses, and will help both increase graduation rates and produce smarter learners. The National Scientific Research Council of Canada (NSERC) produced a recent unpublished study that states: “After collating and processing the results, the results were very favourable for LeaP; the study demonstrates, with statistical significance, a 24% absolute gain and a 34% relative gain in final test scores over a traditional LMS while shortening the time on task by 30% all while maintaining a high subjective score on perceived usefulness.”

I asked the company to provide more information on this “unpublished study”, and I got no response.

Hello, Internet search and phone calls – time to do some investigation to see if there is real data to back up claims.

Details on the Study

The Natural Sciences and Engineering Research Council of Canada (NSERC) is somewhat similar to the National Science Foundation in the US – they are a funding agency. When I called them, they made it perfectly clear that they don’t produce any studies as claimed; they only fund them. I would have to find the appropriate study and contact the lead researcher. Luckily they shared the link to their awards database, and I did some searching on relevant terms. I eventually found some candidate studies and contacted the lead researchers. It turns out that the study in question was led by none other than Dragan Gasevic, founding program co-chair of the International Conference on Learning Analytics & Knowledge (LAK) in 2011 and 2012, who is now at the University of Edinburgh.

The grant was one of NSERC’s Engage grants, which pair researchers with companies, and Knowillage was the partner – they have an adaptive learning platform. D2L acquired Knowillage in the middle of the study, and they currently offer the technology as LeaP. LeaP is integrated into the main D2L learning platform (LMS).

The reason the study was not published was simply that Dragan was too busy, including his move to Edinburgh, to complete and publish, but he was happy to share information by Skype.

The study was done in an Introduction to Chemistry course at an unnamed Canadian university. Following ~130 students, it looked at test scores and time to complete, with results reported at two points – the class midterm and the class final. This was a controlled experiment with three groupings:

  • A control group with no LMS, using just search tools and loosely organized content;
  • A group using Moodle as an LMS with no adaptive learning; and
  • A group using Moodle as an LMS with Knowillage / LeaP integrated following LTI standards.

Of note, this study did not even use D2L’s core learning platform, now branded as BrightSpace. It used Moodle as the LMS, but the study was not about the LMS – it was about the pedagogical usage of the adaptive engine used on top of Moodle. It is important to call out that to date, LeaP has been an add-on application that works with multiple LMSs. I have noticed that D2L now redirects their web pages that called out such integrations (e.g. this one showing integration with Canvas and this one with Blackboard) to new marketing just talking about BrightSpace. I do not know whether this means D2L no longer allows LeaP integration with other LMSs. Update 6/25: Confirmed that LeaP is still being actively marketed to customers of other LMS vendors.

The study found evidence that Knowillage / LeaP allows students to have better test scores than students using just Moodle or no learning platform. This finding was significant even when controlling for students’ prior knowledge and for students’ dispositions (using a questionnaire commonly used in Psychology for motivational strategies and skills). The majority of the variability (a moderate effect size) was still explained by the test condition – use of adaptive learning software.

Dragan regrets the research team’s terminology of “absolute gain” and “relative gain”, but the research did clearly show increased test score gains by use of the adaptive software.
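For readers wondering how a 24-point “absolute gain” squares with a 34% “relative gain,” here is a quick back-of-the-envelope illustration. The control-group mean below is my own assumption, back-solved so that the two reported figures hang together; it is not a number from the study.

    # Hypothetical numbers: the study's group means were not shared with me,
    # so the control mean is back-solved from the two reported figures rather
    # than taken from the study itself.
    control_mean = 70.0                       # assumed average final score without LeaP
    absolute_gain = 24.0                      # reported gain, in percentage points
    leap_mean = control_mean + absolute_gain  # 94.0 under this assumption

    relative_gain = absolute_gain / control_mean * 100
    print(f"absolute gain: {absolute_gain:.0f} points")  # 24 points
    print(f"relative gain: {relative_gain:.1f}%")         # ~34.3%, matching the reported figure

In other words, the relative figure is just the same gain expressed as a percentage of the comparison group’s mean, which is why the two numbers differ.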

The results were quite different between the mid-term (no significant difference between Moodle+LeaP group and Moodle only group or control group) and the final (significant improvements for Moodle+LeaP well over other groups). Furthermore, the Moodle only group and control group with no LMS reversed gains between midterms and finals. To Dragan, these are study limitations and should be investigated in future research. He still would like to publish these results soon.

Overall, this is an interesting study, and I hope we get a published version soon – it could tell us a bit about adaptive learning, at least in the context of Intro to Chemistry usage.

Back to D2L Claim

As with the Lone Star example, I find a real problem here with misleading marketing. D2L could have been more precise and said something like the following:

We acquired a tool, LeaP, that when integrated with another LMS was shown to improve academic performance in a controlled experiment funded by NSERC. We are now offering this tool with deep integration into our learning platform, BrightSpace, as we hope to see similar gains with our clients in the future.

Instead, D2L chose to use imprecise marketing language that implies, or allows the reader to conclude, that their next-generation LMS has been proven to work better than a traditional LMS. They never come out and say “it was our LMS”, but they also don’t say enough for the reader to understand the context.

What is clear is that D2L’s LMS (the core of the BrightSpace learning platform) had nothing to do with the study, that the actual gains were recorded by LeaP integrated with Moodle, and that the study was encouraging for adaptive learning and LeaP but limited in scope. We also have no evidence that the BrightSpace integration gives any different results than Moodle or Canvas or Blackboard Learn integrations with LeaP. For all we know given the scope of the study, it is entirely possible that there was something unique about the Moodle / LeaP integration that enabled the positive results. We don’t know that, but we can’t rule it out, either.

Kudos to D2L for acquiring Knowillage and for working to make it more available to customers, but once again the company needs to be more accurate in their marketing claims.

The post About The D2L Claim Of BrightSpace LeaP And Academic Improvements appeared first on e-Literate.

An Example Why LMS Should Not Be Only Part of Learning Ecosystem

Tue, 2015-06-23 11:51

By Phil HillMore Posts (333)

In Michael’s initial post on the Post-LMS, he built on this central theme:

Reading Phil’s multiple reviews of Competency-Based Education (CBE) “LMSs”, one of the implications that jumps out at me is that we see a much more rapid and coherent progression of learning platform designs if you start with a particular pedagogical approach in mind.

The idea here is not that the traditional LMS has no value (it can be critical infrastructure, particularly for mainstream faculty adoption), but rather that both of us expect to see more learning platform designs tied to specific pedagogies in the future. This idea is quite relevant given the ongoing LMS users’ conferences (InstructureCon last week, D2L Fusion this week, BbWorld next month, Apereo / Sakai as well as iMoot in the past two months).

Later in the post Michael mentions ASU’s Habitable Worlds as an example of assessing the quality of students’ participation instead of direct grading.

A good example of this is ASU’s Habitable Worlds, which I have blogged about in the past and which will be featured in an episode of the aforementioned e-Literate TV series. Habitable Worlds is roughly in the pedagogical family of CBE and mastery learning. It’s also a PBL [problem-based learning] course. Students are given a randomly generated star field and are given a semester-long project to determine the likelihood that intelligent life exists in that star field. There are a number of self-paced adaptive lessons built on the Smart Sparrow platform. Students learn competencies through those lessons, but they are competencies that are necessary to complete the larger project, rather than simply a set of hoops that students need to jump through. In other words, the competency lessons are resources for the students.

In our recent case study on ASU, Lev Horodyskyj shared his experiences helping to design the course. He specifically called out the difficulties they faced when initially attempting this pedagogical approach with a traditional LMS.

Phil Hill: But the team initially found that the traditional technologies on campus were not suited to support this new personalized learning approach.

Lev Horodyskyj: Within a traditional system it was fairly difficult. Traditional learning management systems aren’t really set up to allow a lot of interactivity. They’re more designed to let you do things that you would normally do in a traditional classroom: multiple choice tests; quizzes; turning in papers; uploading, downloading things.

Especially when you’re teaching science, a range of possibilities are viable answers, and oftentimes when we teach science, we’re more interested in what you’re not allowed to do rather than what you’re allowed to do.

Traditional LMSs don’t allow you to really program in huge parameter spaces that you can work with. They’re basically looking for, “What are the exact correct answers you are allowed to accept?”

I was brought into the picture once Ariel decided that this could be an interesting way to go, and I started playing around with the system. I instantly fell in love with it because it was basically like PowerPoint. I could drop whatever I wanted wherever I wanted, and then wire it up to behave the way I wanted it to behave.

Now, instead of painstakingly programming all the 60 possible answers that a student might write that are acceptable, I can all of a sudden set up a page to take any answer I want and evaluate it in real time. I no longer have to program those 60 answers; I could just say, “Here is the range of answers that are acceptable,” and it would work with that.

Phil Hill: And this was the Smart Sparrow system?

Lev Horodyskyj: This was the Smart Sparrow system, correct. It was really eye-opening because it allowed so many more possibilities. It was literally a blank canvas where I could put whatever I wanted.

This pedagogical approach, supported by appropriate learning platform design, seems to lead to conceptual understanding.

Eric Berkebile: My experiences were very similar. What amazed me the most about it was more how the course was centered upon building concept. It wasn’t about hammering in detail. They weren’t trying to test you on, “How much can you remember out of what we’re feeding you?”

You go through the slides, you go through the different sections, and you are building conceptual knowledge while you are doing it. Once you’ve demonstrated that you can actually apply the concept that they are teaching you, then you can move forward. Until that happens, you’re going to be stuck exactly where you are, and you’re going to have to ask for help from other students in the class; you’re going to have to use the resources available.

They want you to learn how to solve problems, they want you to learn how to apply the concepts, and they want you to do it in a way that’s going to work best for you.

Phil Hill: So, it’s multidisciplinary for various disciplines but all held together by project problem-solving around Drake’s equation?

Todd Gilbert: Yeah. One concept really ties it all together, and if you want to answer those questions around that kind of problem, like, “Is there life out there? Are we alone?” you can’t do that with just astronomy, you can’t do that with just biology. It touches everything, from sociology down to physics. Those are very, very different disciplines, so you have to be adaptable.

But I mean if you rise to that kind of a challenge—I can honestly say, this is not hyperbole or anything. It is my favorite class I’ve taken at this college, and it’s a half-semester online course.

Eric Berkebile: By far the best course I’ve taken, and I’ve recommended it to everybody I’ve talked to since.

This approach is not mainstream in the sense that the vast majority of courses are not designed as problem-based learning, so I am not arguing that all LMSs should change accordingly or that Smart Sparrow is a superior product. I do, however, think that this episode gives a concrete example of how the traditional LMS should not be the only platform available in a learning ecosystem and how we will likely see more development of platforms tied to specific pedagogical approaches.

The post An Example Why LMS Should Not Be Only Part of Learning Ecosystem appeared first on e-Literate.

The EDUCAUSE NGDLE and an API of One’s Own

Sun, 2015-06-14 14:04

By Michael FeldsteinMore Posts (1033)

I have been meaning for some time to get around to blogging about the EDUCAUSE Learning Initiative’s (ELI’s) paper on a Next-Generation Digital Learning Environment (NGDLE) and Tony Bates’ thoughtful response to it. The core concepts behind the NGDLE are that a next-generation digital learning environment should have the following characteristics:

  • Interoperability and Integration
  • Personalization
  • Analytics, Advising, and Learning Assessment
  • Collaboration
  • Accessibility and Universal Design

The paper also suggests that the system should be modular. Its authors draw heavily on an analogy to LEGOs and call for more robust standards. In response, Bates raises three concerns:

  1. He is suspicious of a potentially heavy and bureaucratic standards-making process that is vulnerable to undue corporate influence.
  2. He worries that LEGO is a poor metaphor that suggests an industrialized model.
  3. He is concerned that, taken together, the ELI requirements for an NGDLE will push us further in the direction of computer-driven rather than human-driven classes.

As it happens, ELI’s vision for NGDLE bears a significant resemblance to a vision that some colleagues and I came up with ten years ago when we were trying to help the SUNY system find an LMS that would fit the needs of all 64 campuses,[1] ranging from small, rural community colleges to R1 universities to medical and ophthalmology schools to a school of fashion. We got pretty deep into thinking about the implementation details, so it’s been on my mind to write my own personal perspective on the answers to Tony’s questions, based in large part on that previous experience. In the meantime, Jim Groom, who has made a transition from working at a university to working full-time at Reclaim Hosting, has written a series of really provocative and, to me, exciting posts on the future of the digital learning environment from his own perspective. Jim shares the starting assumption of the ELI and SUNY that a learning environment should be “learner-centric,” but he has a much more fully developed (and more radical) idea of what that really means, based on his previous work with A Domain of One’s Own. He also, in contrast to the ELI and SUNY teams, does not start from the assumption that “next-generation” means evolving the LMS. Rather, the questions he seems to be asking are “What is the minimum amount of technical infrastructure required to create a rich digital learning environment?” and “Of that minimal amount of infrastructure we need, what is the minimal amount that needs to be owned by the institution rather than the learner?” I see these trains of thought emerging in his posts on a university API, a personal API, and a syndication bus. What’s exciting to me about these posts is that, even though Jim is starting from a very different set of assumptions, he is also converging on something like the vision we had for SUNY.

In this post, I’m going to try to respond to both Tony and Jim. One of the challenges of this sort of conversation is that the relationship between the technical architecture and the possibilities it creates for the learners is complex. It’s easy to oversimplify or even conflate the two if we’re not very careful. So one of the things that I’m going to try to do here is untangle the technical talk from the functional talk.

I’ll start with Tony Bates’ concerns.

The Unbearable Heaviness of Standards

This is the most industry-talky part of the post, but it’s important for the later stuff. So if talk of Blackboard and Pearson sitting around a technical standards development table turns you off, please bear with me.

Bates writes,

First, this seems to be much too much of a top-down approach to developing technology-based learning environments for my taste. Standards are all very well, but who will set these standards? Just look at the ways standards are set in technology: international committees taking many years, with often powerful lobby groups and ‘rogue’ corporations trying to impose new or different standards.

Is that what we want in education? Or will EDUCAUSE go it alone, with the rest of the world outside the USA scrambling to keep up, or worse, trying to develop alternative standards or systems? (Just watch the European Commission on this one.) Attempts to standardize learning objects through meta-data have not had much success in education, for many good reasons, but EDUCAUSE is planning something much more ambitious than this.

Let me start by acknowledging, as somebody who has been involved in the sausage-making, that the technical standards development process is inherently difficult and fraught and that, because it is designed to produce a compromise that everybody can live with, it rarely produces a specification that anybody is thrilled with. Technical standards-making sucks, and its output often sucks as well. In fact, both process and output generally suck so badly that they collectively raise the question: Why would anyone ever do it? The answer is simple: Standards are usually created when the pain of not having a standard exceeds the pain of creating and living with one.

One of the biggest pains driving technical standards-making in educational technology has been the pain of vendor lock-in. Back in the days when Blackboard owned the LMS market and the LMS product category pretty much was the educational technology market, it was hard to get anyone developing digital learning tools or digital content to integrate with any other platform. Because there were no integration standards, anyone who wanted to integrate with both Blackboard and Moodle would have to develop that integration twice. Add in D2L and Sakai—this was pre-Canvas—and you had four times the effort. This is a problem in any field, but it’s particularly a problem in education because neither students nor courses are widgets. This means that we need a ton of specialized functionality, down to a very fine level. For example, both art historians and oncologists need image annotation tools to teach their classes digitally, but they use those tools very differently and therefore need different features. Ed tech is full of tiny (but important) niches, which means that there are needs for many tools that will make nobody rich. You’re not going to see a startup go to IPO with their wicked good art history image annotation tool. And so, inevitably, the team that develops such a tool will start small and stay small, whether they are building a product for sale, an open source project, or some internal project for a university or for their own classes. Having to develop for multiple platforms is just not feasible for a small team, which means the vast majority of teaching functionality will be available only on the most widely adopted platform. Which, in turn, makes that platform very hard to leave, because you’d also have to give up all those other great niche capabilities developed by third parties.

But there was a chicken-and-egg problem. To Tony’s point about the standards process being prone to manipulation, Blackboard had nothing to gain and a lot to lose from interoperability standards back when they dominated the market. They had a lot to gain from talking about standards, but nothing to gain (and a lot to lose) by actually implementing good standards. In those days, the kindest interpretation of their behavior in the IMS (which is the main technical standards body for ed tech) is that standards-making was not a priority for them. A more suspicious mind might suspect that there were times when they actively sabotaged those efforts. And they could, because a standard that wasn’t implemented by the platform used by 70% of the market was not one that would be adopted by those small tool makers. They would still have to build at least two integrations—one for Blackboard and one for everyone else. Thankfully, two big changes in the market disrupted this dynamic. First, Blackboard lost its dominance, thanks in part to the backlash among customers against just such anti-competitive behavior. It is no coincidence that then-CEO Michael Chasen chose to retain Ray Henderson, who was known for his long-standing commitment to open standards (and…um…actually caring about customer needs) right at the point when Blackboard backlash was at its worst and the company faced the probability of a mass exodus as they killed off WebCT. Second, content-centric platforms became increasingly sophisticated with consequent increasingly sophisticated needs for integrating other tools. This was driven by the collapse of the textbook publishers’ business model and their need to find some other way to justify their existence, but it was a welcome development for standards both because it brought more players to the table and because the world desperately needed (and still needs) alternative visions to the LMS for a digital learning environment, and the textbook publishers have the muscle to actually implement and drive adoption of their own visions. It doesn’t matter so much whether you like those visions or the players who are pushing them (although, honestly, almost anything would be a welcome change from the bento box that was and, to a large degree, still is the traditional LMS experience). What mattered from the standards-making perspective is that there were more players who had something to prove in the market and whose ideas about how niche functionality should integrate with the larger learning experience that their platform affords were not all the same. As a result, we are getting substantially richer and more polished ed tech integration standards more quickly from the IMS than we were getting a decade ago.

Unfortunately, the change in the market only helps with one of the hard problems of technical standards-making in ed tech. Another one, which Bates alludes to with his comment about failed efforts to standardize metadata for learning objects, is finding the right level of abstraction. There are a lot of reasons why learning objects have failed to gain the traction that advocates had hoped, but one good one is that there is no such thing as a learning object. At least, not one that we can define generically. What is it that syllabi, quizzes, individual quiz questions, readings, videos, simulations, week-long collections of all these things (or “modules”), and 15-week collections of these things (or “courses”) have in common? It is tempting to pretend that all of these things are alike in some fundamental way so that we can easily reuse them and build new things with them. You know…like LEGOs. If they were, then it would make sense to have one metadata standard to describe them all, because it would mean that the main challenge of building a new course out of old pieces would be finding the right pieces, and a metadata standard can help with that.

Alas.

Folks who are non-technical tend to think of software as a direct implementation of their functional needs, and their understanding of technical standards flows from that view of the world. As a result, it’s easy to overgeneralize the lesson of the learning object metadata standards failures. But the history of computing is one of building up successive layers of abstraction. For example, TCP/IP is a low-level technical standard that enables internet servers to connect to and communicate with each other, whether that communication takes the form of sending email, transferring a file, or looking up the address of a web site. Most of us don’t know about or care about what sorts of connections TCP/IP allows or doesn’t allow. At our level, it is indistinguishable from LEGOs in the sense that we see these pieces fitting together generically and we don’t see a need for them to do anything else. But the programmers who built TCP/IP implemented it on top of the C programming language (which was standard in the informal sense that eventually became a Standard(TM) in the formal sense), which compiled to a number of different machine languages for different computer chips, making those chips more like LEGOs. Then other programmers created HTML and Javascript as abstraction layers on top of TCP/IP, making web pages like LEGOs in the sense that any web server can serve any standards-conformant web page and any browser can read any such web page. From here, higher layers of abstraction get dicier, which is probably why we don’t have many higher-level Standards(TM). Instead, we start getting into things called “libraries” and “frameworks”. These are bits of code that are re-usable by enough developers that they are worth sharing and adopting, but not so much that they are worth going through the pain of formal standards development or become universal through some other means. And then, of course, there is just a vast amount of development on the web that is individual to the project and cannot be standardized, whether formally or informally. If you try to standardize that which is not standard, chances are that your “standard” will remain pretty non-standard.

So there is a generic danger that if we try to build a standard at the wrong level of abstraction, we will fail. But in education there is also the danger that we will try to build at the wrong level of abstraction and succeed. What I mean by this is that we will enshrine a limited or even stunted vision of what kinds of teaching and learning a digital learning environment should support into the fundamental building blocks that we use to create new learning environments and learning experiences.

In What Sense Like LEGOs?

To wit, Bates writes:

A next generation digital learning environment where all the bits fit nicely together seems far too restrictive for the kinds of learning environments we need in the future. What about teaching activities and types of learning that don’t fit so nicely?

We need actually to move away from the standardization of learning environments. We have inherited a largely industrial and highly standardized system of education from the 19th century designed around bricks and mortar, and just as we are able to start breaking away from rigid standardization EDUCAUSE wants to provide a digital educational environment based on standards.

I have much more faith in the ability of learners, and less so but still a faith in teachers and instructors, to be able to combine a wide range of technologies in the ways that they decide makes most sense for teaching and learning than a bunch of computer specialists setting technical standards (even in consultation with educators).

Audrey Watters captured the subtlety of this challenge beautifully in her piece on the history of LEGO Mindstorms:

In some ways, the educational version of Mindstorms faces a similar problem as it struggles to balance imagination with instructions. As the product have become more popular in schools, Lego Education has added new features that make Mindstorms more amenable to the classroom, easier for teachers to use: portfolios, curriculum, data-logging and troubleshooting features for teachers, and so on.

“Little by little, the subversive features of the computer were eroded away. Instead of cutting across and challenging the very idea of subject boundaries, the computer now defined a new subject; instead of changing the emphasis from impersonal curriculum to excited live exploration by students, the computer was now used to reinforce School’s ways. What had started as a subversive instrument of change was neutralized by the system and converted into an instrument of consolidation.” – Seymour Papert, The Children’s Machine

That constructionist element is still there, of course – in Lego the toy and in Lego Mindstorms. Children of all ages continue to build amazing things. Yet as Mindstorms has become a more powerful platform – in terms of its engineering capabilities and its retail and educational success – it has paradoxically perhaps also become a less playful one.

There is a fundamental tension between making something more easily adoptable for a broad audience and making it challenging in the way that education should be challenging, i.e., that it is generative and encourages creativity (a quality that Amy Collier, Jen Ross, and George Veletsianos have started calling “not-yetness”). I don’t know about you, but when I was a kid, my LEGO kits didn’t look like this:

If I wanted to build the Millennium Falcon, I would have to figure out how to build it from scratch, which meant I was more likely to decide that it was too hard and that I couldn’t do it. But it also meant I was much more likely to build my own idea of a space ship rather than reproducing George Lucas’ idea. This is a fundamental and inescapable tension of educational technology (as well as the broad reuse or mass production of curricular materials), and it increases exponentially when teachers and administrators and parents are added as stakeholders in the mix of end users. But notice that, even with the real, analog-world LEGO kits, there are layers of abstraction and standardization. Standardizing the pin size on the LEGO blocks is generative because it suggests more possibilities for building new stuff out of the LEGOs. Standardizing the pieces to build one specialized model is reductive because it suggests fewer possibilities for building new stuff out of the LEGOs. To find ed tech interoperability standards that are generative rather than reductive, we need to first find the right level of abstraction.

What Does Your Space Ship Look Like?

This brings us to Tony Bates’ third concern:

I am becoming increasingly disturbed by the tendency of software engineers to force humans to fit technology systems rather than the other way round (try flying with Easyjet or Ryanair for instance). There may be economic reasons to do this in business enterprises, but we need in education, at least, for the technology to empower learners and teachers, rather than restrict their behaviour to fit complex technology systems. The great thing about social media, and the many software applications that result from it, is its flexibility and its ability to be incorporated and adapted to a variety of needs, despite or maybe even because of its lack of common standards.

When I look at EDUCAUSE’s specifications for its ‘NGDLE-conformant standards’, each on its own makes sense, but when combined they become a monster of parts. Do I want teaching decisions influenced by student key strokes or time spent on a particular learning object, for instance? Behind each of these activities will be a growing complexity of algorithms and decision-trees that will take teachers and instructors further away from knowing their individual students and making intuitive and inductive decisions about them. Although humans make many mistakes, they are also able to do things that computers can’t. We need technology to support that kind of behaviour, not try to replace it.

I read two interrelated concerns here. One is that, generically speaking, humans have a tendency to move too far in the direction of standardizing that which should not be standardized in an effort to achieve scalability or efficiency or one of those other words that would have impressed the steel and railroad magnates of a hundred years ago. This results in systems that are user-unfriendly at best and inhumane at worst. The second, more education-specific concern I’m hearing is that NGDLE as ELI envisions it would feed the beast that is our cultural mythology that education can and should be largely automated, which is pretty much where you arrive if you follow the road of standardization ad absurdum. So again, it comes down to standardizing the right things at the right levels of abstraction so that the standards are generative rather than reductive.

I’ll give an example of a level of ed tech interoperability that achieves a good level of LEGOicity.[2] Whatever digital learning environment you choose, whether it’s next-generation, this-generation, last-generation, or whatever-generation, there’s a good chance that you are going to want it to have some sense of “class-ness”, by which I mean that you will probably want to define a group of people who are in a class. This isn’t always true, but it is often true. And once you decide that you need that, you then need to specify who is in the class. That means, for every single class section that needs a sense of group, you need to register those users in the new system. If the system supports multiple classes that the students might be in (like an LMS, for example), then you’ll need unique identifiers for the class groups so that the system doesn’t get them mixed up, and you will also need human-readable identifiers (which may or may not be unique) so that the humans don’t get them mixed up and get lost in the system. Depending on the system, you may also want it to know when the class starts and ends, when it meets, who the teacher is, and so on. Again, not all digital learning environments require this information, but many do, including many that work very differently from each other. Furthermore, trying to move this information manually by, for example, asking your students to register themselves and then join a group themselves is…challenging. It makes sense to create a machine-to-machine method for sharing this information (a.k.a. an application programming interface, or API) so that the humans don’t have to do the tedious and error-prone manual work, and it makes sense to have this API be standard so that anybody developing a digital learning environment or learning tool anywhere can write one set of integration code and get this information from the relevant university system that has it, regardless of the particular brand or version of the system that the particular university is using. The IMS actually has two different standards—LIS and LTI—that do subsets of this sort of thing in different ways. Each one is useful for a particular and different set of situations, so it’s rare that you would be in a position of having to pick between the two. In most cases, one will be obviously better for you than the other. The existence and adoption of these standards are generative, because more people can build their own tools, or next-generation digital learning environments, or whatever, and easily make them work well for teachers and students by saving them from that tedious and frustrating registration and group creation workflow.
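To make that concrete, here is roughly the kind of context an LTI 1.1 launch passes from an LMS to an external tool. The parameter names follow the LTI 1.1 convention, but the list is abridged and every value is invented for illustration.

    # A rough sketch of the course and user context that an LTI 1.1 launch
    # passes from an LMS to an external tool. Parameter names follow the
    # LTI 1.1 spec, but the list is abridged and the values are made up.
    lti_launch = {
        "lti_message_type": "basic-lti-launch-request",
        "lti_version": "LTI-1p0",
        "resource_link_id": "unit-3-image-annotation",    # unique per placement in the course
        "context_id": "arh-101-fall-2015-sec-02",         # the "class-ness": a unique section id
        "context_title": "Introduction to Art History",   # the human-readable name
        "user_id": "opaque-student-id-4821",
        "roles": "Learner",                                # or "Instructor"
        "lis_person_name_full": "Pat Example",
    }
    # The launch is signed with OAuth so the tool can trust that the LMS,
    # not the student, asserted this roster and role information.

Roughly speaking, LIS covers similar ground as a bulk, back-channel provisioning service, while LTI passes the context at the moment of launch; which one fits depends on whether you need the roster ahead of time or only when someone clicks through.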

Notice the level of abstraction we are at. We are not standardizing the learning environment itself. We are standardizing the tools necessary for developers to build a learning environment. But even here, there are layers. Think about your mobile phone. It takes a lot of people with a lot of technical expertise a lot of time to build a mobile phone operating system. It takes a single 12-year-old a day to build a simple mobile phone app. This is one reason why there are only a few mobile phone operating systems which all tend to be similar while there are many, many mobile apps that are very different from each other. Up until now, building digital learning environments has been more like building operating systems than like building mobile apps. When my colleagues and I were thinking about SUNY’s digital learning environment needs back in 2005, we wanted to create something we called a Learning Management Operating System (LMOS), but not because we thought that either learning management or operating systems were particularly sexy. To the contrary, we wanted to standardize the unsexy but essential foundations upon which a million billion sexy learning apps could be built by others. Try to remember what your smart phone was like before you installed any apps on it. Pretty boring, right? But it was just the right kind of standardized boring stuff that enabled such miracles of modern life as Angry Birds and Instagram. That’s what we wanted, but for teaching and learning.

Toward University APIs

Let’s break this down some more. Have you ever seen one of these sorts of prompts on your smart phone?

I bet that you have. This is one of those incredibly unsexy layers of standardization that makes incredibly sexy things happen. It enables my LinkedIn Connected app to know who I just met with and offer to make a connection with them. It lets any new social service I join know who I already know and therefore who I might want to connect with on that service. It lets the taxicab I’m ordering know where to pick me up. It lets my hotel membership apps find the nearest hotel for me. And so on. But there’s something weird going on in this screen grab. Fantastical, which is a calendar app, is asking permission to access my calendar. What’s up with that?

Apple provides a standard Calendar app that is…well…not terribly impressive. But that’s not what this dialog box is referring to. Apple also has an underlying calendaring API and data store, which is confusingly also named Calendar. It is this latter piece of unsexy but essential infrastructure that Fantastical is asking to access. It is also the unsexy piece of infrastructure that makes all the scheduling-related sexiness happen across apps. It’s the lingua franca for scheduling.

Now imagine a similar distinction between a rather unimpressive Discussions app within an LMS and a theoretical Discussions API in an LMOS. Most discussion apps have certain things in common. There are posts by authors. There are subjects and bodies and dates and times to those posts. Sometimes there are attachments. There are replies which form threads. Sometimes those threads branch. Imagine that you have all of that abstracted into an API or service. You could do a lot of things with it. For starters, you could build a different or better discussion board, the way Fantastical has done on top of Apple’s Calendar API. It could be a big thing that has all kinds of cool extra features, or it could be a little thing that, for example, just lets you attach a discussion thread anywhere on any page. Maybe you’re building an art history image annotation app and want to be able to hang a discussion thread off of particular spots on the image. Wouldn’t it be cool if you didn’t have to build all that discussion stuff yourself, but could just focus on the parts that are specific to your app? Maybe you’re not building something that needs a discussion thread at all but rather something that could use the data from the discussions app. Maybe you want to build a “Find a Study Buddy” app, and you want that app to suggest people in your class that you have interacted with frequently in class discussions. Or maybe you’re building an analytics app that looks at how often and how well students are using the class discussions. There’s a lot you could do if this infrastructure were standardized and accessible via an API. An LMOS is really a university API for teaching- and learning-relevant data and functionality, with a set of sample apps built on top of that API.
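To make the idea a little more tangible, here is a minimal sketch of the kind of data model such a Discussions service might expose. It is purely hypothetical (not an existing LMS or IMS API), but it shows how a discussion board, an image annotation tool, and a "Find a Study Buddy" app could all read and write the same threads.

    # A minimal, hypothetical sketch of a shared "Discussions" service's data
    # model. Nothing here corresponds to an existing LMS or IMS API; it only
    # illustrates the pieces described above: posts, authors, bodies,
    # timestamps, attachments, and branching replies.
    from dataclasses import dataclass, field
    from datetime import datetime
    from typing import List, Optional

    @dataclass
    class Post:
        post_id: str
        author_id: str
        subject: str
        body: str
        created_at: datetime
        parent_id: Optional[str] = None     # None for a thread starter; set for replies
        attachments: List[str] = field(default_factory=list)

    @dataclass
    class Thread:
        thread_id: str
        context_id: str                     # the class, or any page/object the thread hangs off
        posts: List[Post] = field(default_factory=list)

        def replies_to(self, post_id: str) -> List[Post]:
            """Return direct replies, which is enough to reconstruct branching threads."""
            return [p for p in self.posts if p.parent_id == post_id]

The point is not this particular shape; it is that once the threads live behind a service rather than inside one app's database, every one of those hypothetical apps can get at them.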

What’s valuable about this approach is that it can support and enable many different kinds of digital learning environments. If you want to build a super-duper adaptive-personalized-watching-every-click thing, an LMOS should make that easier to do. If you want to build a post-edupunk-open-ed-only-nominally-institutional thing, then an LMOS should make it easier to do that too. You can build whatever you need more quickly and easily, which means that you are more likely to build it. Done right, an LMOS should also support the five attributes that ELI is calling for:

  • Interoperability and Integration
  • Personalization
  • Analytics, Advising, and Learning Assessment
  • Collaboration
  • Accessibility and Universal Design

An LMOS-like infrastructure doesn’t require any of these things. It doesn’t require you to build analytics, for example. But by making the learning apps programmatically accessible via APIs, it makes analytics feasible if analytics are what you want. It is roughly the right level of abstraction.

It is also roughly where we are headed, at least from a technical perspective. Returning to the earlier question of “at what price standards,” I believe that we have most or all of the essential technical interoperability standards we need to build an LMOS right now. Yes, there are a couple of interesting standards-in-development that may add further value, and yes, we will likely discover further holes that need to be filled here and there, but I think we have all the basic parts that we need. This is in part due to the fact that, with IMS’s new Caliper standard, we have yet another level of abstraction that makes it very flexible. Building on the previous discussion service example, Caliper lets you define a profile for a discussion, which is really just a formalization of all the pieces that you want to share—subject, body, author, time stamp, reply, thread, etc. You can also define a profile for, say, a note-taking app that re-uses the same Caliper infrastructure. If you come up with a new kind of digitally mediated learning interaction in a new app, you can develop a new Caliper profile for it. You might start by developing it just for your own use and then eventually submit it to the IMS for ratification as an official standard when there is enough demand to justify it. This also dramatically reduces the size of the negotiation that has to happen at the standards-making table and therefore improves both speed and quality of the output.
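As a sketch of what I mean, here is one way the same event envelope could carry two different profiles. The field names are simplified approximations of Caliper's structure rather than the ratified spec, and both profiles (and all of the URLs) are hypothetical.

    # Hedged sketch of the "profile" idea: the event envelope stays the same
    # and only the per-profile vocabulary changes. Field names are simplified
    # approximations of Caliper's JSON-LD structure, not the ratified spec.
    def caliper_event(actor, action, obj, group, event_time):
        return {"actor": actor, "action": action, "object": obj,
                "group": group, "eventTime": event_time}

    # Hypothetical discussion profile: the pieces you want to share become the object.
    discussion_post = caliper_event(
        actor="https://example.edu/users/4821",
        action="Posted",
        obj={"type": "Message", "subject": "Week 3 question",
             "body": "How does the Drake equation handle...?",
             "thread": "https://example.edu/threads/88", "isReplyTo": None},
        group="https://example.edu/sections/arh-101-02",
        event_time="2015-06-14T14:04:00Z",
    )

    # A new kind of app (say, note-taking) reuses the same envelope with its own vocabulary.
    note_taken = caliper_event(
        actor="https://example.edu/users/4821",
        action="Annotated",
        obj={"type": "Note", "about": "https://example.edu/readings/ch2", "body": "..."},
        group="https://example.edu/sections/arh-101-02",
        event_time="2015-06-14T14:09:00Z",
    )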

Toward a Personal API

I hope that I have addressed Tony Bates’ concerns, but I’m pretty sure that I haven’t gotten to the core of Jim Groom’s yet. Jim wants students to own their learning infrastructure, content, and identity as much as possible. And by “own,” he means that quite literally. He wants them to have their own web domains where the substantial majority of their digital learning lives resides permanently. To that end, he has started thinking about what he calls a Personal API:

[W]hat if one’s personal domain becomes the space where students can make their own calls to the University API? What if they have a personal API that enables them to decide what they share, with whom, and for how long. For example, what if you had a Portfolio site with a robust API (which was the use case we were discussing) that was installed on student’s personal domain at portfolio.mydomain.com, and enabled them to do a few basic things via API:

  • It called the University API and populated the students classes for that semester.
  • It enabled them to pull in their assignments from a variety of sources (and even version them).
  • it also let them “submit” those assignment to the campus LMS.
  • This would effectively be enabling the instructor to access and provide feedback that the student would now have as metadata on that assignment in their portfolio.
  • It can also track course events, discussions, etc.

This is very consistent with the example I gave in my 2005 blog post about how a student’s personal blog could connect bi-directionally with an LMOS:

Suppose that, in addition to having students publish information into the course, the service broker also let the course publish information out to the student’s personal data store (read “portfolio”). Imagine that for every content item that the student creates and owns in her personal area–blog posts, assignment drafts in her online file storage, etc.–there is also a data store to which courses could publish metadata. For example, the grade book, having recorded a grade and a comment about the student’s blog post, could push that information (along with the post’s URL as an identifier) back out to the student’s data store. Now the student has her professor’s grade and comment (in read-only format, of course), traveling with her long after the system administrator closed and archived the Psych 101 course. She can publish that information to her public e-portfolio, or not, as she pleases.
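Here is a minimal sketch of those two directions of flow, assuming a hypothetical university API and an equally hypothetical personal portfolio endpoint. None of these URLs or payloads exist; they are only meant to show which way the data moves.

    # Purely hypothetical endpoints and payloads illustrating the two flows
    # described above: the student's domain pulls enrollment data from the
    # university API, and the course pushes grade metadata back out to the
    # student's own data store. Nothing here corresponds to a real service.
    import requests

    UNIVERSITY_API = "https://api.example.edu"
    PERSONAL_API = "https://portfolio.mydomain.com/api"
    TOKEN = {"Authorization": "Bearer <student-issued-token>"}

    # 1. The portfolio populates the student's classes for the semester.
    classes = requests.get(f"{UNIVERSITY_API}/students/4821/enrollments",
                           headers=TOKEN).json()

    # 2. The course pushes a grade and comment back to the student's data
    #    store as metadata attached to work the student already owns.
    requests.post(f"{PERSONAL_API}/items/blog-post-17/feedback", json={
        "course": "psych-101",
        "grade": "A-",
        "comment": "Strong argument; cite the second study.",
        "source": f"{UNIVERSITY_API}/courses/psych-101/gradebook",
    }, headers=TOKEN)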

Fortuitously, this vision is also highly consistent with the fundamental structure that underlies IMS Caliper. Caliper is federated. That is, it assumes that there are going to be different sources of authority for different (but related) types of content, and that there will be different sharing models. So it is very friendly to a world in which students own some data and universities own other data, and it could provide the “facade” necessary for the communication between the two worlds. So again, we have roughly the right level of abstraction to be generative rather than reductive. Caliper can support both a highly scaffolded and data-driven adaptive environment and a highly decentralized and extra-institutional environment. And, perhaps best of all, it lets us get to either incrementally by growing an ecosystem piece by piece rather than engineering a massive and monolithic platform.

Nifty, huh?

Believe it or not, none of this is the hard part. The hard part is the cultural and institutional barriers that prevent people from demanding the change that is very feasible from a technical perspective. But that’s another blog post (or fifty) for another time.

  1. I understand that SUNY has since added a 65th campus
  2. Yes, that is a word.

The post The EDUCAUSE NGDLE and an API of One’s Own appeared first on e-Literate.

Personalized Learning Changes: Effect on instructors and coaches

Fri, 2015-06-12 17:03

By Phil HillMore Posts (332)

Kate Bowles left an interesting comment at my previous post about an ASU episode on e-Literate TV, where I argued that there is a profound change in the instructor role. Her comment:

Phil, I’m interested to know if you found anything out about the pay rates for coaches v TAs. I’m also interested in what coaches were actually paid to do — how the parameters of their employable hours fit what they ended up doing. Academics are rarely encouraged to think of their work in terms of billable increments, because this would sink the ship. But still I’m curious. Did ASU really just hike up their staffing costs in moving to personalised learning, or was there some other cost efficiency here? If the overall increase in students paid off, how did this happen? I’m grappling with how this worked for ASU in budgetary terms, as the pedagogical gain is so clear.

This comment happened to coincide with my participation in WCET’s Leadership Summit on Adaptive Learning, where similar subjects were being discussed. For the purposes of this blog post, we’ll use the “personalized learning” language, which includes use of adaptive software as a subset. Let’s first address the ASU-specific questions.

ASU

The instructor in the ASU episode was Sue McClure who was kind enough to help answer these questions by email. Sue is a lecturer at ASU Online, which is a full-time salaried position with a teaching load of four courses per semester. Typical loads include 350 – 400 students over those four courses, and the MAT 110 personalized learning course (using Khan Academy) did not change this ratio. Sue added these observations:

During the Fall Semester of 2014 we offered our first MAT 110 courses using Khan. There was a great deal of work in the planning of the course, managing the work, working with students, hiring and managing the coaches, tracking student progress, and more. Of course, our main responsibility to help our students to be successful in our course overshadowed all of this. The work load during the first semester of our pilot was very much increased compared to previous semesters teaching MAT 110.

By the time that we reached Spring Semester of 2015 we had learned much more about methods that work best for student success, our coaches were more experienced, and our technology to track student progress and work was improved. During the second semester my work load was very much more in line with teaching MAT 110 before the pilot was begun.

For the TAs (coaches), they also had the same contracts as before the personalized learning approach, but they are paid on an hourly basis. I do not know if they ended up working more hours than expected in this course, but I did already note that there were many more coaches in the new course than is typical. Unfortunately, I cannot answer Kate’s follow-up question about TA / coach hourly pay issues in more detail, at least for now.

Putting it together, ASU is clearly investing in personalized learning – including investing in instructional resources – rather than trying to find cost efficiencies up front. Adrian Sannier in episode 1 described the “payoff” or goal for ASU.

Adrian Sannier: So, we very much view our mission as helping those students to find their way past the pastiche of holes that they might have and then to be able to realize their potential.

So, take math as an example. Math is, I think, a very easy place for most people to understand because I think almost everybody in the country has math deficits that they’re unaware of because you get a B in third-grade math. What that means is there were a couple of things you didn’t understand. Nobody tells you what those things are—you don’t have a very clear idea—but for the rest of your life, all the things that depend on those things that you missed you will have a rocky understanding of.

So, year over year you accumulate these holes. Then finally, somebody in an admissions exam or on the SAT or the ACT faces you with a comprehensive survey of your math knowledge, and you suddenly realize, “Wow, I’m under-prepared. I might even have gotten pretty good grades, but there are places where I have holes.”

We very much view our mission as trying to figure how it is that we can serve the student body. Even though our standards haven’t changed, our students certainly have because the demographics of the country have changed, the character of the country has changed, and the things we’re preparing students for have changed.

We heard several times in episode 1 that ASU wants to scale the number of students served (with same standards) without increasing faculty at the same rate, and to do this they need to help more of today’s students succeed in math. The payoff is retention, which is how the budget will work if they succeed (remember this is a new program).

WCET Adaptive Learning Summit

The WCET summit allowed for a more generalized response. In one panel moderated by Tom Cavanaugh from University of Central Florida (UCF), panelists were asked about the Return on Investment (ROI) of personalized learning[1]. Some annoying person in the audience[2] further pressed the panel during Q&A time to more directly address the issue raised by Kate. All the panelists view personalized / adaptive learning as an investment, where the human costs in instructors / faculty / TAs / coaches actually go up, at least in early years. They do not see this as cost efficiency, at least for the foreseeable future.

Santa Fe Rainbow

(My photos from inside the conference stunk, so I’ll use a better one from dinner instead.)

David Pinkus from Western Governors University answered that the return was three words: retention, retention, retention. Tom Cavanaugh added that UCF invested in additional staff for their personalized / adaptive learning program, specifically as a method to reduce the “friction” of faculty time investment.

I should point out that e-Literate TV case studies are not exhaustive. As Michael and I described:

We did look for schools that were being thoughtful about what they were trying to do and worked with them cooperatively, so it was not the kind of journalism that was likely to result in an exposé. We went in search of the current state of the art as practiced in real classrooms, whatever that turned out to be and however well it is working.

Furthermore, the panelists at the WCET Summit tended to be from schools that were leading the pack in thoughtful personalized learning implementations. In other words, the perspective I’m sharing in this post is for generally well-run programs that consciously considered student and faculty support as the key drivers.[3] When these programs have developed enough to allow independent reviews of effectiveness, student retention – both within the course and ideally within a program – should be one of the key metrics to evaluate.

Investment vs. Sustainability

There is another side to this coin, however, as pointed out by someone at the WCET Summit[4]. With so many personalized learning programs funded by foundations and by institutional investments above normal operations, there is a question of sustainability. It’s all well and good to demonstrate that a school is investing in new programs, including investments in faculty and TA support, but I do not think that many programs have considered the sustainability of these initiatives. If the TA quoted in the previous post is accurate, ASU went from 2 to 11 TAs for the MAT 110 course. Essex County College invested $1.2 million in an emporium remedial math program. Even if the payoff is “retention”, will there be enough improvement in retention to justify an ongoing expenditure to support a program? Sustainability should be another key metric as groups evaluate the effectiveness of personalized learning approaches.

  1. specifically adaptive learning
  2. OK, me
  3. There will be programs that do seek to use personalized / adaptive learning as a cost-cutting measure or as primarily technology-driven. But I would be willing to bet that those programs will not succeed in the long run.
  4. I apologize for forgetting who this was.

The post Personalized Learning Changes: Effect on instructors and coaches appeared first on e-Literate.

Instructor Replacement vs. Instructor Role Change

Tue, 2015-06-09 07:53

By Phil HillMore Posts (329)

Two weeks ago I wrote a post about faculty members’ perspective on student-centered pacing within a course. What about the changing role of faculty members – how do their lives change with some of the personalized learning approaches?

In the video below, I spoke with Sue McClure, who teaches a redesigned remedial math course at Arizona State University (ASU) that is based on the use of Khan Academy videos. There are plenty of questions about whether this approach works and is sustainable, but for now let’s just get a first-hand view of how Sue’s role changed in this specific course. You’ll see that it took some prodding to get her to talk about her personal experience, and I did have to reflect back what I was hearing. Note that the “coaches” she described are teaching assistants.

Phil Hill: Let’s get more of a first-hand experience as the instructor for the course. What is a typical week for you as the course is running? What do you do? Who do you interact with?

Sue McClure: I interact by e-mail, and sometimes Google Hangouts, with the coaches and with some of the students. Now, not all of the students are going to contact me about a problem they might have because many of them don’t have any problems, and that’s wonderful. But quite a few of them do have problems either with understanding what they’re supposed to be doing or how to do what they’re supposed to be doing or how to contact somebody about something, and then they’ll send me an e-mail.

Phil Hill: So, as you go through this, it sounds like there’s quite a change in the role of the faculty member from a traditional course, and since you just got involved several months ago in the design and in instructing it, describe for me the difference in that role. What’s changed, and how does it affect you as a professor?

Sue McClure: Before I did this course, the way it’s being done now, I had taught [Math 110] online a few other semesters, and the main difference between those experiences and this experience is that with this experience our students have far more help, far more assistance, far more people willing to step up when they need help with anything to try to make them be successful.

Phil Hill: What about the changes for you personally?

Sue McClure: Partly because I think ASU is growing so much, my class sizes are getting bigger and bigger. That probably would have happened even if we were teaching these the way that we taught them before. That’s one big change—more and more students. So, having these coaches that we have working with us and for us has just been priceless. We couldn’t do it without them.

Phil Hill: It seems your role becomes more one of overseeing the coaches in their direct support of the students. Plus it sounds like you step in to talk directly with students where needed as well.

Sue McClure: Right. I think that explains it very well.

From what Michael and I have seen in the e-Literate TV case studies as well as other on-campus consulting experiences, the debate over adaptive software or personalized learning being used to replace faculty members is a red herring. Faculty replacement does happen in some cases, but that debate masks a more profound issue – how faculty members have to change roles to adapt to a student-centered personalized learning course design. [updated to clarify language]

For this remedial math course, the faculty member changes from one of content delivery to one of oversight, intervention, and coaching. This change is not the same for all disciplines, as we’ll see in upcoming case studies, but it is quite consistent with the experience at Essex County College.

As mentioned by Sue, however, these instructional changes do not just impact faculty members – they also affect teaching assistants. Below is a discussion with some TAs from the same course.

Phil Hill: Beyond the changes to the role of faculty, there are also changes to the role of teaching assistants.

Namitha Ganapa: Basically, in a traditional course there’s one instructor, maybe two TAs, and a class of maybe 175 students. So, it’s pretty hard for the instructor to go to each and every student. Now, we are 11 coaches for Session C. Each coach is having a particular set of students, so it’s much easier to focus on the set of students, and that helps for the progress.

We should stop here and note the investment being made by ASU – moving from 2 TAs to 11 for this course. There are two sides to this coin, however. On one side, not all schools can afford this investment in a new course design and teaching style. On the other side, it is notable that instructional staffing is increasing (same number of faculty members, more TAs).

Jacob Cluff: I think, as a coach, it’s a little more involved with the students on a day-to-day basis. Every day I keep track of all the students, their progress, and if they’re struggling on a skill I make a video, send it to them, ask them if they need help understanding it—that sort of thing.

Phil Hill: So, Jacob, it sounds like this is almost an intervention model—that your role is looking at where students are and figuring out where to intervene and prompt them. Is that an accurate statement?

Jacob Cluff: I think that’s a pretty fair statement because most of the students (a lot of students)—they’re fine on their own and don’t really need help at all. They kind of just get off and run. So, I spend most of my time helping the students that actually need help, and I also spend time and encourage students that are doing well at the same time.

Phil Hill: So, Namitha, describe what a typical week is like for you, and is it different? Any differences in how you approach the coaching role from what we’ve heard from Jacob?

Namitha Ganapa: It’s pretty much the same, but my style of teaching is I make notes. I use different colors to highlight the concept, the formula, and how does the matter go. Many of my students prefer notes, so that is how I do it.

Phil Hill: So, there’s sort of a personal style to coaches that’s involved.

This aspect of the changing role of both faculty members and TAs is too often overlooked, and it’s helpful to hear from them first-hand.

The post Instructor Replacement vs. Instructor Role Change appeared first on e-Literate.

Moodle Association: New pay-for-play roadmap input for end users

Mon, 2015-06-08 12:27

By Phil HillMore Posts (329)

As long as we’re on the subject of changes to open source LMS models . . .

Moodle is in the midst of rolling out a fairly significant change to the community with a new not-for-profit entity called the Moodle Association. The idea is to get end users more directly involved in setting the product roadmap, as explained by Martin Dougiamas in this discussion thread and in his recent keynotes (the one below is from early March in Germany).

[After describing new and upcoming features] So that’s the things we have going now, but going back to this – this is the roadmap. Most people agree those things are pretty important right now. That list came from mostly me, getting feedback from many, many, many places. We’ve got the Moots, we’ve got the tracker, we’ve got the community, we’ve got Moodle partners who have many clients (and they collect a lot of feedback from their paying clients). We have all of that, and somehow my job is to synthesize all of that into a roadmap for 30 people to work on. It’s not ideal because there’s a lot, a lot of stuff going on in the community.

So I’m trying to improve that, and one of the things – this is a new thing that we’re starting – is a Moodle Association. And this will be starting in a couple of months, maybe 3 or 4 months. It will be at moodleassociation.org, and it’s a full association. It’s a separate legal organization, and it’s at arm’s length from Moodle [HQ, the private company that develops Moodle Core]. It’s for end users of Moodle to become members, and to work together to decide what the roadmap should be. At least part of the roadmap, because there will be other input, too. A large proportion, I hope, will be driven by the Moodle Association.

They’ll become members, sign up, put money every year into the pot, and then the working groups in there will be created according to what the brainstorming sessions work out, what’s important, create working groups around those important things, work together on what the specifications of that thing should be, and then use the money to pay for that development, to pay us (Moodle HQ), to make that stuff.

It’s our job to train developers, to keep the organization of the coding and review processes, but the Moodle Association is telling us “work on this, work on that”. I think we’ll become a more cohesive community with the community driving a lot of the Moodle future.

I’m very excited about this, and I want to see this be a model of development for open source. Some other projects have something like this thing already, but I think we can do it better.

In the forum, Martin shared two slides on the funding model. The before model:

Moodle funding model: before

The model after:

Moodle funding model: after

One obvious change is that Moodle partners (companies like Blackboard / Moodlerooms, Remote-Learner, etc.) will no longer be the primary input into development of core Moodle. This part is significant, especially as Blackboard became the largest contributing member of Moodle with its acquisition of Moodlerooms in 2012. This situation became more important after Blackboard also bought Remote-Learner UK this year. It’s worth noting that Martin Dougiamas, founder of Moodle, was on the board of Remote-Learner’s parent company in 2014 but not this year.

A less obvious change, however, is that the user community – largely composed of schools and individuals using Moodle for free – has to contend with another pay-for-play source of direction. End users can pay to join the association, and the clear message is that this is the best way to have input. In a slide shown at the recent iMoot conference and shared at MoodleNews, the membership for the association was called out more clearly.

Moodle Association membership levels

What will this change do to the Moodle community? We have already seen the huge changes to the Kuali open source community caused by the creation of KualiCo. While the Moodle Association is not as big of a change, I cannot imagine that it won’t affect the commercial partners.

There are already grumblings from the Moodle end user community (labeled as Moodle.org, since this is where you can download the code for free), as indicated in a discussion forum thread started just a month ago.

I’m interested to note that Moodle.org inhabitants are not a ‘key stakeholder’, but maybe when you say ‘completely separate from these forums and the tracker’ it is understandable. Maybe with the diagram dealing only with the money connection, not the ideas connection, if you want this to ‘work’ then you need to talk to people with $$. ie key = has money.

I’ll be interested how the priorities choice works: do you get your say dependent on how much money you put in?

This to me is the critical issue with the future.

Based on MoodleNews coverage of the iMoot keynote, the answer to this question is that the amount of say does indeed depend on money.

Additionally, there will be levels of membership based on the amount you contribute. The goal is to embrace as many individuals from the community but also to provide a sliding scale of membership tiers so that larger organizations, like a university, large business, or non-Moodle Partner with vested interested in Moodle, (which previously could only contribute through the Moodle Partner arrangement, if at all) can be members for much larger annual sums (such as AU$10k).

The levels will provide votes based on dollars contributed (potentially on a 1 annual dollar contributed = 1 vote).
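To make the implication concrete, here is a small illustrative sketch of how a dollar-weighted voting scheme like the hypothetical “1 annual dollar contributed = 1 vote” model described above would play out. The member types and contribution amounts are invented for illustration; they are not Moodle Association figures.

```python
# Illustrative only: hypothetical vote weights under a "1 annual dollar = 1 vote" scheme.
# The member types and contribution amounts below are invented for illustration.
members = {
    "individual teacher": 100,      # e.g. AU$100 per year
    "small school": 1_000,          # e.g. AU$1,000 per year
    "large university": 10_000,     # the AU$10k tier mentioned above
}

total_votes = sum(members.values())
for name, dollars in members.items():
    share = dollars / total_votes
    print(f"{name}: {dollars} votes ({share:.0%} of the total say)")
```

Run with these invented numbers, the single AU$10k institution would hold roughly 90 percent of the votes.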

This is why I use the phrase “pay-for-play”. And a final thought – why is it so hard to get public information (slides, videos, etc.) from the Moodle meetings? The community would benefit from more openness.

Update 6/10: Corrected statement that Martin Dougiamas was on the Remote Learner board in 2014 but not in 2015.

The post Moodle Association: New pay-for-play roadmap input for end users appeared first on e-Literate.

rSmart to Asahi to Scriba: What is happening to major Sakai partner?

Mon, 2015-06-08 11:16

By Phil HillMore Posts (329)

It looks like we have another name and ownership change for one of the major Sakai partners, but this time the changes have a very closed feel to them. rSmart, led by Chris Coppola at the time, was one of the original Sakai commercial affiliates, and the LMS portion of the company was eventually sold to Asahi Net International (ANI) in 2013. ANI had already been involved in the Sakai community as a Japanese partner and also as a partial investor in rSmart, so that acquisition was not seen as a huge change other than setting the stage for KualiCo to acquire the remainder of rSmart.

In late April, however, ANI was acquired by a private equity firm out of Los Angeles (Vert Capital), and this move is different. Vert Capital did not just acquire ANI; they also changed the company name to Scriba and took the company off the grid for now. No news items explaining intentions, no web site, no changes to the Apereo project page, etc. Japanese press coverage of the acquisition mentions the parent company’s desire to focus on the Japanese market.

What is going on?

A rudimentary search for “Scriba education learning management” brings up no news or web sites, but it does bring up a recent project on freelancer.com to create the new company logo. By the way, paying $90 gets 548 entries from 237 freelancers – and adjuncts are underpaid?! The winning logo has a certain “we’re like Moodle, but our hat covers two letters” message that I find quite original.

Furthermore, neither scriba.com nor scriba.org is registered to the company (both are owned by keyword naming companies that pre-purchase domains for later sale). The ANI website mentions nothing about the sale and in fact has had no news since October 2014. The Sakai project page has no update, but the sponsorship page for last week’s Open Apereo conference did have the new logo. This sale has the appearance of a last-minute acquisition under financial distress[1].

Vert Capital is a “private investment firm that provides innovative financing solutions to lower/middle market companies globally”. The managing director leading this deal, Adam Levin, has a background in social media and general media companies. Does Vert Capital plan on making further ed tech acquisitions? I wouldn’t be surprised, as ed tech is a fast-changing market and more companies are in need of “innovative financing”.

I have asked Apereo for comment, and I will share that or any other updates as I get them. If anyone has more information, feel free to share it in the comments or send me a private note.

H/T: Thanks to a reader, who wishes to remain anonymous, for some pointers to public information for this post.

  1. Note, that is conjecture.

The post rSmart to Asahi to Scriba: What is happening to major Sakai partner? appeared first on e-Literate.

Pilots? We don’t need no stinkin’ pilots!

Thu, 2015-06-04 19:33

By Phil HillMore Posts (329)

Timothy Harfield commented on the approach to pilots and scaling innovation at Arizona State University (ASU).

.@philonedtech excellent comment on the problem of scaling innovation in #HigherEd. This is a central concern for @UIAinnovation.

— Timothy D. Harfield (@tdharfield) June 4, 2015


The University Innovation Alliance is “a consortium of 11 large public research universities committed to making high-quality college degrees accessible to a diverse body of students”. I wrote about this “central concern” last summer in a post titled “Pilots: Too many ed tech innovations stuck in purgatory”, using the frame of Everett Rogers’ Diffusion of Innovations model. While the trigger for that post was ed tech products, the same situation applies to course design.

5 Stages of Adoption

What we are seeing in ed tech in most cases, I would argue, is that for institutions the new ideas (applications, products, services) are stuck in the Persuasion stage. There is knowledge and application amongst some early adopters in small-scale pilots, but the majority of faculty members either have no knowledge of the pilot or are not persuaded that the idea is to their advantage, and there is little support or structure to get the organization at large (i.e. the majority of faculty for a traditional institution, or perhaps the central academic technology organization) to make a considered decision. It’s important to note that in many cases the innovation should not be spread to the majority, either because it is a poor solution or because of organizational dynamics based on how the innovation is introduced.

The Purgatory of Pilots

This stuck process ends up as an ed tech purgatory – with promises and potential of the heaven of full institutional adoption with meaningful results to follow, but also with the peril of either never getting out of purgatory or outright rejection over time.

Back to Timothy’s comment. He was specifically commenting on Phil Regier’s interview in the e-Literate TV case study on ASU.

Phil Hill: There are plenty of institutions experimenting with new technology-based pedagogical approaches, but pilots often present a challenge to scale with quality. ASU’s vision, however, centers on scale and access. One observation I’ve seen from what’s happening in the US is there are a lot of pilots, but they never scale to go across a school. You sound confident that you will be scaling.

Philip Regier: We kind of don’t pilot stuff here. When we did the math program, we actually turned it on in August 2012 after all of nine months of preparation working with Knewton. We turned it on, and it applied to every seat in every freshman math course at the university. And there’s a reason for that. My experience—not just mine, but the university’s experience with pilots is that they have a very difficult time getting to scale.

Part of the reason is because, guess what? It doesn’t work the first time. It doesn’t work the first time, maybe not the second. It takes multiple iterations before you understand and are able to succeed. If you start with a pilot and you go a semester or two and it’s, “Hey, this isn’t as good as what we were doing,” you’ll never get to scale.

In our case, the experience with math is a very good example of that because working with a new technology is not a silver bullet. It’s not like we’re going to use this technology, and now all of the grades are going to go up by 15 percent. What you have to do is work with the technology and develop the entire learning ecosystem around it, and that means training faculty.

That’s one approach to the scaling innovation challenge that affects not just the University Innovation Alliance institutions but most schools. This approach also raises some questions. While Phil Regier stated in further comments not included in the episode that faculty were fully involved in the decision to implement new programs, are they also fully involved in evaluating whether new programs are working and whether changes are needed? Does this no-pilot approach lead to the continuation of programs that have fatal flaws and should be ended rather than changed?

It is, however, an approach that directly addresses the structural barriers to diffusing the innovations. Based on Phil Regier’s comments, this approach also leads to investment in and professional development of faculty members involved.

The post Pilots? We don’t need no stinkin’ pilots! appeared first on e-Literate.

NYT Michael Crow Condensed Interview: More Info needed . . . and available

Thu, 2015-06-04 09:49

By Phil HillMore Posts (328)

The New York Times ran an “edited and condensed” interview with Arizona State University (ASU) president Michael Crow, titled “Reshaping Arizona State, and the Public Model”.

Michael M. Crow sees Arizona State as the model of a public research university that measures itself by inclusivity, not exclusivity. In his 13 years as its president, he has profoundly reshaped the institution — hiring faculty stars from across the country, starting a bevy of interdisciplinary programs, growing the student body to some 83,000 and using technology to bring his ideas to scale, whether with web-based introductory math classes or eAdvisor, which monitors students’ progress toward their major. Last year, Dr. Crow made headlines when the university partnered with Starbucks to offer students the chance to complete their degree online for free. His new book, written with the historian William B. Dabars, is called, appropriately, “Designing the New American University.”

The problem is that the interview was so condensed that it lost a lot of context. Since Michael and I just released an e-Literate TV case study on ASU, the first episode can serve as a companion to the NYT article by calling out a lot more information from ASU executives about their mission. We would like this information to be useful for others as they decide what they think about this model.

ASU Case Study: Ambitious Approach to Change in R1 University

The post NYT Michael Crow Condensed Interview: More Info needed . . . and available appeared first on e-Literate.

Release of ASU Case Study on e-Literate TV

Mon, 2015-06-01 06:55

By Phil HillMore Posts (327)

Today we are thrilled to release the third case study in our new e-Literate TV series on “personalized learning”. In this series, we examine how that term, which is heavily marketed but poorly defined, is implemented on the ground at a variety of colleges and universities.

We are adding three episodes from Arizona State University (ASU), a school that is frequently in the news. Rather than just talking about ASU’s problems, we are talking with the ASU people involved. What problems are they trying to solve? How do students view some of the changes? Are faculty being replaced by technology, or are they changing roles? For that matter, how are faculty members involved in designing some of these changes?

You can see all the case studies (with either 2 or 3 episodes per case study) at the series link, and you can access the individual episodes below.

ASU Case Study: Ambitious Approach to Change in R1 University

ASU Case Study: Rethinking General Education Science for Non-Majors

ASU Case Study: The Changing Role of Faculty and Teaching Assistants

e-Literate TV, owned and run by MindWires Consulting, is funded in part by the Bill & Melinda Gates Foundation. When we first talked about the series with the Gates Foundation, they agreed to give us the editorial independence to report what we find, whether it is good, bad, or indifferent.

As with the previous series, we are working in collaboration with In the Telling, our partners providing the platform and video production. Their Telling Story platform allows people to choose their level of engagement, from just watching the video to accessing synchronized transcripts and transmedia. We have added content directly to the timeline of each video, bringing up further references, like e-Literate blog posts or relevant scholarly articles, in context. With In The Telling’s help, we are crafting episodes that we hope will be appealing and informative to those faculty, presidents, provosts, and other important college and university stakeholders who are not ed tech junkies.

We will release two more case studies over the next month, and we also have two episodes discussing the common themes we observed on the campuses. We welcome your feedback, either in comments or on Twitter using the hashtag #eLiterateTV.

Enjoy!

The post Release of ASU Case Study on e-Literate TV appeared first on e-Literate.

UF Online and Enrollment Warning Signs

Thu, 2015-05-28 19:33

By Phil HillMore Posts (327)

The University of Florida Online (UF Online) program is one of the highest profile online initiatives to be started over the past few years (alongside other public institution programs such as California’s Online Education Initiative, OpenSUNY, Cal State Online, and Georgia Tech / Udacity). UF Online, which I first described in this blog post, is an exclusively-online baccalaureate program leading to a UF degree for lower costs than the traditional on-campus experience.

As part of a new program augmenting UF Online, qualified students who are not admitted to the University of Florida due to space constraints can be accepted into UF Online’s PaCE program, although the Washington Post noted in April that these students had not asked to be part of UF Online.

Some 3,100 students accepted as freshman by the University of Florida for the fall got a big surprise along with their congratulations notices: They were told that the acceptance was contingent on their agreement to spend their first year taking classes online as part of a new program designed to attract more freshmen to the flagship public university.

The 3,118 applicants accepted this way to the university — above and beyond the approximately 12,000 students offered traditional freshman slots — did not apply to the online program. Nor were they told that there was a chance that they would be accepted with the online caveat. They wound up as part of an admissions experiment.

Fast forward to this week’s news from the Gainesville Sun.

Fewer than 10 percent of 3,118 high school students invited to sign up for a new online program after their applications were rejected for regular admission to the University of Florida have accepted the offer.

The 256 students who signed up for the Pathway to Campus Enrollment [PaCE] program will be guaranteed a spot at UF after they complete the minimum requirements: two semesters and at least 15 hours of online course work. [snip]

The PACE program was created as a way to boost the numbers of first-time-in-college students enrolling in UF Online, to provide an alternate path to residential programs, and to populate major areas of study that have been under-enrolled in recent years.

The fact that fewer than 10% of students accepted the offer is not necessarily news, as the campus provost predicted this situation last month (see the Washington Post article). What is more troubling is the hubris evident in how UF Online is reacting to its enrollment problems. Administrators at the university seem to view UF Online as a mechanism to serve institutional needs and are not focused on meeting student needs. This distorted lens is leading to some poor decision-making that is likely making the enrollment situation worse in the long run. Rather than asking “which students need UF Online and what support do they need”, the institution is asking “what do we need and how can we use UF Online to fill any gaps”.

Let’s step back from PaCE and look at the bigger picture. The following chart shows the targeted enrollment numbers that formed the basis for the UF Online strategic plan, compared to actual and currently estimated enrollment (click to enlarge).

Enrollments vs Plan Spring 2015

As of this term, they are off by ~23% (roughly 1,000 students enrolled against a target of 1,304), which is not unreasonable for a program that started so quickly. What is troubling, however, is that the targets rise quickly (3,698 next spring, 6,029 the year after) while the actuals have not yet shown significant growth. Note that UF Online is estimating enrollment will double, from 1,000 to 2,000, for fall 2015 – that is a bold assumption. To make the challenge even more difficult (from a March article in the Gainesville Sun):

That growth in revenue also depends largely on a growing number of out-of-state online students who would pay four to five times higher tuition rates, based on market conditions.

Specifically, the business plan assumes a mix of 43% out-of-state students in UF Online by year 10, yet currently there are only 9% out-of-state students. How realistic is it to attract large numbers of out-of-state students given the increasing options for online programs?
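To put the plan’s assumptions in perspective, here is a short back-of-the-envelope sketch using only the figures cited above and in the enrollment chart. It is illustrative arithmetic, not an analysis of the actual business plan, and the fall 2015 figure is the program’s own estimate rather than an actual.

```python
# Back-of-the-envelope check using the figures cited in this post.
# Spring 2015: strategic plan target vs. approximate actual enrollment.
target_spring_2015 = 1304
actual_spring_2015 = 1000

shortfall = (target_spring_2015 - actual_spring_2015) / target_spring_2015
print(f"Spring 2015 shortfall vs. plan: {shortfall:.0%}")               # ~23%

# Out-of-state mix: 9% today vs. the 43% assumed by year 10 of the business plan.
current_out_of_state_share = 0.09
planned_out_of_state_share = 0.43
growth_factor = planned_out_of_state_share / current_out_of_state_share
print(f"Required growth in out-of-state share: {growth_factor:.1f}x")   # ~4.8x
```

In other words, even setting aside total enrollment growth, the plan assumes the out-of-state share of students will grow nearly fivefold.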

In the midst of the challenging startup, UF Online had to deal with the premature departure of the initial executive director. After a one-year search process, UF Online chose a new leader who has absolutely no experience in online education.

UF Online is welcoming Evangeline Cummings as its new director, and she has the task of raising the program’s enrollment. [snip]

Cummings starts July 1 with a salary of $185,000. She is currently a director with the U.S. Environmental Protection Agency.

UF spokesman Steve Orlando wrote in an email that she showed skills desirable for the position. “The search committee and the provost were looking for someone with the ability to plan strategically and to manage a large and complex operation,” he said.

At this point, it might have been worth stepping back and challenging some of the original assumptions. Specifically, is UF Online targeting the right students and addressing an unmet need? The plan assumes there are many students who want a University of Florida degree but just can’t get in, or who want to earn one from out of state. This is different from asking what types of students need an anywhere, anytime online program from an R1 university and then figuring out what to provide in an academic program.

Instead, the administrators came up with the PaCE program as a way to augment enrollment. Which academic majors are allowed under PaCE?

The PACE program was created as a way to boost the numbers of first-time-in-college students enrolling in UF Online, to provide an alternate path to residential programs, and to populate major areas of study that have been under-enrolled in recent years.

The school didn’t ask “which majors will students need once they transfer to the residential program”; it asked “how can we use these online students to fill some gaps we already have”. And students who sign the PaCE contract (yes, it is a contractual agreement) cannot change majors even after they move to a campus program.

And while the students are in UF Online:

PACE students can’t live in student dormitories, and their tuition doesn’t cover meals, health services, the recreation center and other student activities because they aren’t paying the fees for those services. They can’t get student tickets to UF cultural and sporting events.

They also can’t ride for free on Regional Transportation Service buses or get student parking passes.

PACE students also will not be able to participate in intercollegiate athletics or try out for the Gator Marching Band. They can use the libraries on campus but can’t check out books.

U of Florida seems to have spent plenty of time figuring out what not to provide these students.

One additional challenge that UF Online will face is student retention. The Instructional Technology Council (ITC) noted in this year’s Distance Education report:

Nationally, student retention in online courses tends to be eight percentage points lower than that of face-to-face instruction. Online students need to be self-disciplined to succeed. Many underestimate how much time online coursework requires. Others fall behind or drop out for the same reasons they enrolled in online courses in the first place—they have other responsibilities and life challenges, such as work and/or family, and are too busy to prepare for, or complete, their online coursework.

Yet UF Online is targeting the students who might have the most trouble with online courses. First-time entering freshmen, particularly students who actually want a residential program and might not even understand online programs, are not the ideal students to succeed in a fully online program. San Jose State University and Udacity learned this lesson the hard way, although they threw MOOCs and remedial math into the mix as well.

UF Online seems to be institutionally-focused rather than student-focused, and the initiative is shaping up to be a case study in hubris. Without major changes in how the program is managed, including the main campus input into decisions, UF Online risks becoming the new poster child of online education failures. I honestly hope they succeed, but the current outlook is not encouraging.

The post UF Online and Enrollment Warning Signs appeared first on e-Literate.

Worth Considering: Faculty perspective on student-centered pacing

Tue, 2015-05-26 11:43

By Phil HillMore Posts (326)

Over the weekend I wrote a post based on the comment thread at Friday’s Chronicle article on e-Literate TV.

One key theme coming through from comments at the Chronicle is what I perceive as an unhealthy cynicism that prevents many people from listening to students and faculty on the front lines (the ones taking redesigned courses) on their own merits.

Sunday’s post highlighted two segments of students describing their experiences with redesigned courses, but we also need to hear directly from faculty. Too often the public discussion of technology-enabled initiatives focuses on the technology itself, often assuming that the faculty involved are bystanders or technophiles. But what about the perspectives of faculty members – you know, those who are in the classrooms working with real students – on what challenges they face and what changes are needed from an educational perspective? There is no single faculty perspective, but we can learn a great deal from their unique, hands-on experiences.

Consider the specific case of why students might need to work at their own pace.

The first example is from a faculty member at Middlebury College describing the need for a different, more personalized approach for his geographic information system (GIS) course.

Jeff Howarth: And what I would notice is that there would be some students who would like me to go a little bit faster but had to wait and kind of daydream because they were just waiting. And then there were some students that desperately wanted me to slow down. Then you get into that kind of slowest-car-on-the-freeway, how-fast-can-you-really-go type of thing. So, I would slow down, which would lose part of the group.

Then there would be some folks that probably would want me to slow down but would never ask because they don’t want to call attention to themselves as being the kind of—the slow car on the freeway.

Michael Feldstein: At this point, Jeff realized that even his small class might not be as personalized as it could be with the support of a little more technology.

Jeff Howarth: What I realized is that, if I just started packaging that instruction, the worked example, I could deliver the same content but allow students to first—if I made videos and posted it on something like YouTube, I was putting out the same content, but students could now watch it at their own pace and in the privacy of being able to go as slow as they need to without the social hang-ups of being considered different.

So, that was really the first step of—I did all of this, and then I told another colleague in languages what I was doing. And he said, “Well, that’s called ‘flipping the classroom.’” And I thought, “OK.” I mean, but that’s not really why. I did it without knowing that I was flipping the classroom, but then that’s how it happened.

Compare this description with an example from an instructor at Essex County College teaching developmental math.

Pamela Rivera: When I was teaching the traditional method, I’ll have students coming in and they didn’t know how to multiply. They didn’t know how to add and subtract. Rarely would those students be able to stay throughout the semester, because after the third—no, even after the second week, everyone else was already in division and they’re still stuck.

And the teacher can’t stop the class and say, “OK, let’s continue with multiplication,” because you have a syllabus to stick to. You have to continue teaching, and so those students will be frustrated, and so they drop the class.

At the same time, you had students who—the first couple of weeks they’ll be extremely bored because they already know all of that. And so, unfortunately, what would happen is eventually you would get to a point in the content that—they don’t know that, but because they have been zoning out for weeks, they don’t get that “OK, now, I actually have to start paying attention.” And so, yes, they should have been able to do that, but they still are not very successful because they were used to not paying attention.

Remarkably Similar Descriptions

Despite coming from very different contexts, the descriptions these two faculty members offer of why course designs should allow students to control their own pacing are remarkably similar. These isolated examples are not meant to end the debate on personalized learning or on what role technology should play (rather, they should encourage it), but it is very useful to listen to faculty members describe the challenges they face on an educational level.

The post Worth Considering: Faculty perspective on student-centered pacing appeared first on e-Literate.