
Michael Feldstein

What We Are Learning About Online Learning...Online

e-Literate Top 20 Posts For 2014

Sat, 2014-12-20 12:17

I typically don’t write year-end reviews or top 10 (or 20) lists, but I need to work on our consulting company finances. At this point, any distraction seems more enjoyable than working in QuickBooks.

We’ve had a fun year at e-Literate, and one recent change is that we are now more willing to break stories when appropriate. We typically comment on ed tech stories a few days after the news comes out, providing analysis and commentary, but there have been several cases where we felt a story needed to go public. In such cases (e.g. the creation of Unizin, the demise of Cal State Online, management changes at Instructure and Blackboard) we tend to break the news objectively, providing mostly descriptions and explanations and allowing others to provide commentary.

The following list is based on Jetpack stats on WordPress, which do not capture people who read posts through RSS feeds (we send out full articles through the feed). So the stats have a bias towards people who come to e-Literate for specific articles rather than our regular readers. We also tend to get longer-term readership of articles over many months, so this list also has a bias toward articles posted a while ago.

With that in mind, here are the top 20 most read articles on e-Literate in terms of page views for the past 12 months along with publication date.

  1. Can Pearson Solve the Rubric’s Cube? (Dec 2013) – This article proves that people are willing to read a 7,000 word post published on New Year’s Eve.
  2. A response to USA Today article on Flipped Classroom research (Oct 2013) – This article is our most steady one, consistently getting around 100 views per day.
  3. Unizin: Indiana University’s Secret New “Learning Ecosystem” Coalition (May 2014) – This is the article where we broke the story about Unizin, based largely on a presentation at Colorado State University.
  4. Blackboard’s Big News that Nobody Noticed (Jul 2014) – This post commented on the Blackboard users’ conference and some significant changes that got buried in the keynote and much of the press coverage.
  5. Early Review of Google Classroom (Jul 2014) – Meg Tufano got pilot access to the new system and allowed me to join the testing; this article mostly shares Meg’s findings.
  6. Why Google Classroom won’t affect institutional LMS market … yet (Jun 2014) – Before we had pilot access to the system, this article described the likely market effects of Google’s new system.
  7. Competency-Based Education: An (Updated) Primer for Today’s Online Market (Dec 2013) – Given the sudden rise in interest in CBE, this article updated a 2012 post explaining the concept.
  8. The Resilient Higher Ed LMS: Canvas is the only fully-established recent market entry (Feb 2014) – Despite all the investment in ed tech and market entries, this article noted how stable the LMS market is.
  9. Why VCs Usually Get Ed Tech Wrong (Mar 2014) – This post combined references to “selling Timex knockoffs in Times Square” with a challenge to the application of disruptive innovation.
  10. New data available for higher education LMS market (Nov 2013) – This article called out the Edutechnica and ListEdTech sites with their use of straight data (not just sampling surveys) to clarify the LMS market.
  11. InstructureCon: Canvas LMS has different competition now (Jun 2014) – This was based on the Instructure users’ conference and the very different attitude from past years.
  12. Dammit, the LMS (Nov 2014) – This rant called out how the LMS market is largely following consumer demand from faculty and institutions.
  13. Why Unizin is a Threat to edX (May 2014) – This follow-on commentary tried to look at what market effects would result from Unizin introduction.
  14. State of the Anglosphere’s Higher Education LMS Market: 2013 Edition (Nov 2013) – This was last year’s update of the LMS squid graphic.
  15. Google Classroom: Early videos of their closest attempt at an LMS (Jun 2014) – This article shared early YouTube videos showing people what the new system actually looked like.
  16. State of the US Higher Education LMS Market: 2014 Edition (Oct 2014) – This was this year’s update of the LMS squid graphic.
  17. About Michael – How big is Michael’s fan club?
  18. What is a Learning Platform? (May 2012) – The old post called out and helped explain the general move from monolithic systems to platforms.
  19. What Faculty Should Know About Adaptive Learning (Dec 2013) – This was a reprint of an invited article for the American Federation of Teachers.
  20. Instructure’s CTO Joel Dehlin Abruptly Resigns (Jul 2014) – Shortly after the Instructure users’ conference, Joel resigned from the company.

Well, that was more fun than financial reporting!


Helix Education puts their competency-based LMS up for sale

Thu, 2014-12-18 17:05

Back in September I wrote about the Helix LMS providing an excellent view into competency-based education and how learning platforms would need to be designed differently for this mode. The traditional LMS – based on a model of grades, seat time, and synchronous cohorts of students – is not easily adapted to serve CBE needs such as the following (a hypothetical data-model sketch follows the list):

  1. Explicit learning outcomes with respect to the required skills and concomitant proficiency (standards for assessment)
  2. A flexible time frame to master these skills
  3. A variety of instructional activities to facilitate learning
  4. Criterion-referenced testing of the required outcomes
  5. Certification based on demonstrated learning outcomes
  6. Adaptable programs to ensure optimum learner guidance
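To make the contrast concrete, here is a minimal, hypothetical sketch in Python. The class and field names are invented for illustration and are not drawn from the Helix LMS or any other product; the point is only how a competency/mastery record differs from the grade/seat-time record a traditional LMS is built around.

```python
# Hypothetical sketch: grade/seat-time record vs. competency/mastery record.
# Names are invented for illustration; they do not describe the Helix LMS.
from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass
class TraditionalEnrollment:
    """Roughly what a conventional LMS gradebook is organized around."""
    student_id: str
    course_id: str
    term_start: date                     # fixed term...
    term_end: date                       # ...and fixed seat time
    final_grade: Optional[str] = None    # single summative grade

@dataclass
class CompetencyRecord:
    """Roughly what a CBE platform needs to track instead."""
    student_id: str
    competency_id: str                   # explicit learning outcome
    proficiency_threshold: float         # criterion for mastery, e.g. 0.8
    best_score: float = 0.0              # criterion-referenced, not curved
    attempts: int = 0
    mastered_on: Optional[date] = None   # no fixed time frame

    def record_attempt(self, score: float, on: date) -> None:
        """Certification follows demonstrated mastery, whenever it happens."""
        self.attempts += 1
        self.best_score = max(self.best_score, score)
        if self.mastered_on is None and score >= self.proficiency_threshold:
            self.mastered_on = on
```

The code itself is trivial; the point is that mastery dates, criterion thresholds, and open-ended attempts replace the fixed term and final grade around which traditional LMS features (gradebooks, due dates, cohort rosters) are designed.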

In a surprise move, Helix Education is putting the LMS up for sale.  Helix Education provided e-Literate the following statement to explain the changes, at least from a press release perspective.

With a goal of delivering World Class technologies and services, a change we are making is with Helix LMS. After thoughtful analysis and discussion, we have decided to divest (sell) Helix LMS. We believe that the best way for Helix to have a positive impact on Higher Education is to:

  • Be fully committed and invest properly in core “upstream” technologies and services that help institutions aggregate, analyze and act upon data to improve their ability to find, enroll and retain students and ensure their success
  • Continue to build and share our thought leadership around TEACH – program selection, instructional design and faculty engagement for CBE, on-campus, online and hybrid delivery modes.
  • Be LMS neutral and support whichever platform our clients prefer. In fact, we already have experience in building CBE courses in the top three LMS solutions.

There are three aspects of this announcement that are quite interesting to me.

Reversal of Rebranding

Part of the surprise is that Helix rebranded the company based on their acquisition of the LMS – this was not just a simple acquisition of a learning platform – and just over a year after this event Helix Education is reversing course, selling the Helix LMS and going LMS-neutral. From the earlier blog post [emphasis added]:

In 2008 Altius Education, started by Paul Freedman, worked with Tiffin University to create a new entity called Ivy Bridge College. The goal of Ivy Bridge was to help students get associate degrees and then transfer to a four-year program. Altius developed the Helix LMS specifically for this mission. All was fine until the regional accrediting agency shut down Ivy Bridge with only three months’ notice.

The end result was that Altius sold the LMS and much of the engineering team to Datamark in 2013. Datamark is an educational services firm with a focus on leveraging data. With the acquisition of the Helix technology, Datamark could expand into the teaching and learning process, leading them to rebrand as Helix Education – a sign of the centrality of the LMS to the company’s strategy. Think of Helix Education now as an OSP (a la carte services that don’t require tuition revenue sharing) with an emphasis on CBE programs.

Something must have changed in their perception of the market to cause this change in direction. My guess is that they are getting pushback from schools that insist on keeping their institutional LMS, even for new CBE programs. Helix states they have worked with the “top three LMS solutions”, but as seen in the demo (read the first post for more details), capabilities such as embedding learning outcomes throughout a course and providing a flexible time frame fall well outside the core design assumptions of a traditional LMS. I have yet to see an elegant design for CBE with a traditional LMS. I’m open to being convinced otherwise, but count me as skeptical.

Upstream is Profitable

The main component of this move sounds like the “upstream” element. To be more accurate, it’s less a move upstream than a matter of staying “upstream” and choosing not to move downstream. It’s difficult, and not always profitable, to implement academic programs. Services built on enrollment and retention are, quite honestly, much more profitable. Witness the recent sale of the enrollment consulting firm Royall & Company for $850 million.

The Helix statement describes their TEACH focus as one of thought leadership. To me this sounds like the core business will be on enrollment, retention and data analysis while they focus academic efforts not on direct implementation products and services, but on white papers and presentations.

Meaning for Market

Helix Education was not the only company building CBE-specific learning platforms to replace the traditional LMS. FlatWorld Knowledge built a platform that is being used at Brandman University. LoudCloud Systems built a new CBE platform, FASTrak – and they already have a traditional LMS (albeit one designed with a modern architecture). Perhaps most significantly, the CBE pioneers Western Governors University and Southern New Hampshire University’s College for America (CfA) built custom platforms on CRM technology (i.e. Salesforce) after determining that the traditional LMS market did not suit their specific needs. CfA even spun off their learning platform as a new company – Motivis Learning.

If Helix Education is feeling the pressure to be LMS-neutral, does that mean that these other companies are or will be facing the same? Or, is Helix Education’s decision really based on company profitability and capabilities that are unique to their specific situation?

The other side of the market effect will be determined by which company buys the Helix LMS. Will a financial buyer (e.g. private equity) choose to create a standalone CBE platform company? Will a traditional LMS company buy the Helix LMS to broaden their reach in the quickly-growing CBE space (350 programs in development in the US)? Or will an online service provider and partial competitor of Helix Education buy the LMS? It will be interesting to see which companies bid on this product line and who wins.

Overall

If I find out more about what this change in direction means for Helix Education or for competency-based programs in general, I’ll share in future posts.


Blackboard’s SVP of Product Development Gary Lang Resigns

Tue, 2014-12-16 12:46

Gary Lang, Blackboard’s senior vice president in charge of product development and cloud operations, has announced his resignation and plans to join Amazon. Gary took the job with Blackboard in June 2013 and, along with CEO Jay Bhatt and SVP of Product Management Mark Strassman, formed the core management team that had worked together previously at AutoDesk. Gary led the reorganization effort to bring all product development under one organization, a core component of Blackboard’s recent strategy.

Michael described Blackboard’s new product moves toward cloud computing and an entirely new user experience (UX) for the Learn LMS, and Gary was the executive in charge of these efforts. These significant changes have yet to fully roll out to customers (public cloud in pilot, new UX about to enter pilot). Gary was also added to the IMS Global board of directors in July 2014 – I would expect this role to change as well given the move to Amazon.

At the same time, VP Product Management / VP Market Development Brad Koch has also resigned from Blackboard.[1] Brad came to Blackboard from the ANGEL acquisition. Given his long-term central role leading product definition and being part of Ray Henderson’s team[2], Brad’s departure will also have a big impact. Brad’s LinkedIn page shows that he has left Blackboard, but it does not yet show his new company. I’m holding off reporting until I can get public confirmation.

Blackboard provided the following statement from CEO Jay Bhatt.

The decision to leave Blackboard for an opportunity with Amazon was a personal one for Gary that allows him to return home to the West Coast. During his time here, Gary has made significant contributions to the strategic direction of Blackboard and the technology we deliver to customers. The foundation he has laid, along with other leaders on our product development team, will allow us to continue to drive technical excellence for years to come. We thank him for his leadership and wish him luck as he embarks on this new endeavor.

  1. The two resignations are unrelated as far as I can tell.
  2. Starting at Pearson, then at ANGEL, finally at Blackboard


Vendors as Traditional Revolutionaries

Sun, 2014-12-14 09:40

In a post titled “The LMS for Traditional Revolutionaries,” Instructure’s VP of Research and Education for Canvas Jared Stein responded to my LMS rant with some numbers and some thoughts about the role of the vendor in encouraging progressive teaching practices. First, the numbers on the use of open education features in Canvas:

  • 3.8% of courses are “public”; you don’t need a login to see them.
  • 0.6% of courses are Creative Commons-licensed.
  • 4.0% of assignments are URL submissions (suggesting that students are completing their assignments on their blogs or elsewhere on the open web).

On the one hand, as Jared acknowledges, these percentages are very low. On the other hand, as he points out, 4% of assignments is close to 250,000 assignments, which is non-trivial as an absolute number. And all of this raises the question: What is the role of the vendor in promoting progressive educational practices?

Let’s take the best-case scenario. Suppose you’re a good person and a thoughtful educator who happens to work for a vendor at the moment. (For those of you who don’t know him, Jared enjoys just such a reputation, having spent a number of years as an excellent academic ed tech blogger and practitioner before joining Instructure.) What can you do? What is your role? On the one hand, you will get criticized by educators who want more and faster change for being too conventional. I certainly have leveled that sort of criticism at vendors before. And maybe those criticisms will sting particularly hard if you were one of those educators yourself before you joined the company (and maybe still are, in your heart of hearts). On the other hand, you are likely to be criticized as arrogant, high-handed, and unwilling to listen to your customers if you put yourself in the position of lecturing to educators (or, at worst, bullying them) about what you, as a vendor, define as best teaching practices. I certainly have leveled this sort of criticism as well.

So what’s a vendor to do? Jared writes,

These [open education features] are just a few examples of capabilities in Canvas that we believe add flexibility and encourage different approaches to teaching and learning. I recognize that sharing this data is a little risky; some may use it to argue that Canvas shouldn’t worry so much about the small percentage of educators who may take advantage of these fringe capabilities. After all, won’t teachers who are actually invested in open educational practices just eschew the LMS for their own platforms anyway?

Focusing only on “users like us” and ignoring the others may work in the short-term, but for long-term success you have to build bridges, not walls.

To help education improve itself for all teachers and learners we have to try to connect with those teachers who aren’t comfortable with radical shifts in pedagogy or technology. We believe that the best way to encourage positive change in educational practices across the broad landscape of content areas, learning objectives, and teaching philosophies is by providing tools that are easy-to-use, flexible, and comfortable to the majority of teachers and learners. The door to change must be open and the doorkeeper must be deposed.

Some of the ways we do this is by having an open community, engaging with people who disagree with us, and investing in the open platform aspect of Canvas. We need both traditionalists, critical pedagogues, progressive researchers, and open educators to contribute to Canvas. That doesn’t have to be done through pull requests or by building LTI apps or integrations, though that’s a brilliant way to build solutions that are right for your context. But by dialoging what works in teaching and learning and what doesn’t. By debating what technology is best for, and when it leads us away from our shared goals of teaching and learning better in an open and connected world.

Shorter Jared: We put capabilities to support progressive practices in our product in the hopes that our users will discover, adopt, and promote them, but it’s not our place to push our preferred educational practices on our customers.

In many cases—particularly with a platform that serves a large and heterogeneous swath of the campus community—that’s the best attitude you can get from your vendor. That’s the most they can do without rightly pissing off (more) people.

All of which brings me back to a single point: If you want better educational technology, then work to make sure that your colleagues in your campus community are asking for the things that you think would make educational technology better. If 40% rather than 4% of assignments created by your colleagues were on the open web, then learning platforms like LMSs would look and work differently. I guarantee it. Likewise, as long as most educators tend to use the technology to reproduce existing classroom practices, LMSs will look the same. I guarantee that too. And that’s not a vendor thing. That’s a software development thing. Community-developed open source learning platforms generally haven’t broken the mold, and the few that have tend to be the ones that you probably have never heard of because they don’t get adopted. They build what their community members ask for and what they think will attract other community members. So if you want better tech, then the best thing you can do to get it is to create demand for it among your colleagues.


The Battle for Open and MOOC Completion Rates

Wed, 2014-12-10 23:21

Yesterday I wrote a post on the 20 Million Minds blog about Martin Weller’s new book The Battle for Open: How openness won and why it doesn’t feel like victory. Exploring different aspects of open in higher education – open access, MOOCs, open education resources and open scholarship – Weller shows how far the concept of openness has come, to the point where “openness is now such a part of everyday life that it seems unworthy of comment”. If you’re interested in OER, open courses, open journals, or open research in higher education – get the book (it’s free and available in a variety of formats).

Building on the 20MM post about the ability to reuse or repurpose the book itself, I would like to expand on a story from early 2013 where I happen to play a role. I’ll mix in Weller’s description (MW) from the book with Katy Jordan’s data (KJ) and my own description (PH) from this blog post.

(MW) I will end with one small example, which pulls together many of the strands of openness. Katy Jordan is a PhD student at the OU focusing on academic networks on sites such as Academia. edu. She has studied a number of MOOCs on her own initiative to supplement the formal research training offered at the University. One of these was an infographics MOOC offered by the University of Texas. For her final visualisation project on this open course she decided to plot MOOC completion rates on an interactive graph, and blogged her results (Jordan 2013).

(KJ) [Katy Jordan’s interactive visualization of MOOC completion rates appeared here.]

(MW) This was picked up by a prominent blogger, who wrote about it being the first real attempt to collect and compile completion data for MOOCs (Hill 2013), and he also tweeted it.

(PH) How many times have you heard the statement that ‘MOOCs have a completion rate of 10%’ or ‘MOOCs have a completion rate of less than 10%’? The meme seems to have developed a life of its own, but try to research the original claim and you might find a bunch of circular references or anecdotes of one or two courses. Will the 10% meme hold up once we get more data?

While researching this question for an upcoming post, I found an excellent resource put together by Katy Jordan, a graduate student at The Open University of the UK. In a blog post from Feb 13, 2013, Katy described a new effort of hers to synthesize MOOC completion rate data – from xMOOCs in particular and mostly from Coursera.

(MW) MOOC completion rates are a subject of much interest, and so Katy’s post went viral, and became the de-facto piece to link to on completion rates, which almost every MOOC piece references. It led to further funding through the MOOC Research Initiative and publications. All on the back of a blog post.

This small example illustrates how openness in different forms spreads out and has unexpected impact. The course needed to be open for Katy to take it; she was at liberty to share her results and did so as part of her general, open practice. The infographic and blog relies on open software and draws on openly available data that people have shared about MOOC completions, and the format of her work means others can interrogate that data and suggest new data points. The open network then spreads the message because it is open access and can be linked to and read by all.

(PH) Once I found and shared Katy’s blog post, the natural move seemed to be to build on this data. What was interesting to me was that there seemed to be different patterns of student behavior within MOOCs, leading to this initial post and culminating (for now) in a graphical view of MOOC student patterns.

 

(PH) With a bit of luck or serendipity, this graphical view of patterns nicely fit together with research data from Stanford.

(MW) It’s hard to predict or trigger these events, but a closed approach anywhere along the chain would have prevented it. It is in the replication of small examples like this across higher education that the real value of openness lies.

Weller has a great point on the value of openness, and I appreciate the mention in the book.

Source: Weller, M. 2014. The Battle for Open: How openness won and why it doesn’t feel like victory. London: Ubiquity Press. DOI: http://dx.doi.org/10.5334/bam


Is Kuali Guilty of “Open Washing”?

Wed, 2014-12-03 14:05

Phil and I don’t write a whole lot about Student Information Systems (SISs) and the larger Enterprise Resource Planning (ERP) suites that they belong to, not because they’re unimportant but because the stories about them often don’t fit with our particular focus in educational technology. Plus, if I’m being completely honest, I find them to be mostly boring. But the story of what’s going on with Kuali, the open source ERP system for higher education, is interesting and relevant for a couple of reasons. First, while we hear a lot of concern from folks about the costs of textbooks and LMSs, those expenses pale in comparison to what colleges and universities pay in SIS licensing and support costs. A decent-sized university can easily pay a million dollars or more for their SIS, and a big system like UC or CUNY can pay tens or even hundreds of millions. One of the selling points of Kuali has been to lower costs, thus freeing up a lot of that money to meet other needs that are more closely related to the core university mission. So what happens in the SIS space has a potentially huge budgetary impact on teaching and learning. Second, there’s been a lot of conversation about what “open” means in the context of education and, more specifically, when that label is being abused. I am ambivalent about the term “open washing” because it encourages people to flatten various flavors of openness into “real” and “fake” open when, in fact, there are often legitimately different kinds of open with different value (and values). That said, there certainly exist cases where use of the term “open” stretches beyond the bounds of even charitable interpretation.

When Kuali, a Foundation-developed project released under an open source license, decided to switch its model to a company-developed product where most but not all of its code would be released under a different open source license, did they commit an act of “open washing”? Finding the answer to that question turns out to be a lot more complicated than you might expect. As I started to dig into it, I found myself getting deep into details of both open source licensing and cloud computing, since the code that KualiCo will be withholding is related to cloud offerings. It turned out to be way too long and complex to deal with both of those topics in one post, so I am primarily going to follow up on Phil’s earlier posts on licensing here and save the cloud computing part for a future post.

Let’s start by recapping the situation:

  • The Kuali Foundation is a non-profit entity formed by a group of schools that wanted to pool their resources to build a suite of open source components that together could make up an ERP system suitable for colleges and universities.
  • One of the first modules, Kuali Financials, has been successfully installed and run by a small number of schools, which have reported dramatic cost savings, both in the obvious licensing fees and in the amount of work it took to get the system running.
  • Other elements of the suite have been more problematic. For example, the registrar module, Kuali Student, has been plagued with multi-year delays and members dropping out of the project.
  • Kuali code has historically been released under an open source license called the “Educational Community License (ECL)”, which is a slight variation of the Apache license.
  • The Kuali Foundation announced that ownership of the code would be transferred to a new commercial entity, KualiCo. Their reason for doing so is to accelerate the development of the suite, although they have not been entirely clear about what they think the causes of their development speed problem are and why moving to a commercial structure will solve the problems.
  • KualiCo intends to release most, but not all, of future code under a different open source license, the Affero Gnu Public License (AGPL).
  • Some of the new code to be developed by KualiCo will not be released as open source. So far the only piece they have named in this regard is their multitenant code.
  • These changes happened fairly quickly, driven by the board, without a lot of community discussion.

So is this “open washing”? Is it a betrayal of trust of an open source community, or deceptively claiming to be open and proving to be otherwise? In one sense, the answer to that question depends on the prior expectations of the community members. I would argue that the biggest and clearest act of open washing may have occurred over a decade ago with the creation of the term “community source.” Back in the earlier days of Sakai—another “community source” project—I made a habit of asking participants what they thought “community source” meant. Very often, the answer I got was something along the lines of, “It’s open source, but with more emphasis on building a community.” But that’s not what Brad Wheeler meant at all when he popularized the term. As I have discussed in an earlier post, “community source” was intended to be a consortial model of development with an open source license tacked onto the end product. Far from emphasizing community, it was explicitly designed to maximize the control and autonomy of the executive decision-makers from the consortial partners—and to keep a lid on the decision-making power of other community participants. Remember the motto: “If you’ve got the gold, then you make the rules.” Community source, as defined by those who coined the term, is a consortium with a license. But community source was always marketed as an improvement on open source. “It’s the pub between the cathedral and the bazaar where all the real work gets done,” Brad liked to say.

Different “community source” projects followed the centralized, financially-driven model to different degrees. Kuali, for example, was always explicitly and deliberately more centralized in its decision-making processes than Sakai. As an outside observer of Kuali’s culture and decision-making processes, and as a close reader of Brad Wheeler’s articles and speeches about community source, I can’t say that the move to KualiCo surprised me terribly. Nor can I say that it is inconsistent with what Brad has said all along about how community source works and what it is for. The consortial leaders, whose membership I assume was roughly defined by their financial contributions to the consortium, made a decision that supports what they believe is in their interest. All code that was previously released under an open source license will remain under an open source license. Presumably the Kuali Foundation and KualiCo will be clear going forward about which consortial contributions go toward creating functionality that will be open source or private source in the future. I am not privy to the internal politics of the foundation and therefore am not in a position to say whether some of those who brought the gold were left out of the rule-making process. To the degree that Brad’s golden rule was followed, the move to KualiCo is consistent with the clearly stated (if craftily marketed) philosophy of community source.

The question of how much these changes practically affect open source development of Kuali is a more complicated one to answer. It is worth stating that another tenet of community source was that it was specifically intended to be commercial-friendly, meaning that the consortia tried to avoid licensing or other practices that discouraged the development of a healthy and diverse ecosystem of support vendors. (Remember, community source frames problems with the software ecosystem as procurement problems. As such, its architects are concerned with maintaining a robust range of support contracting options.) Here the balancing act is more delicate. On the one hand, to the degree that the changes give KualiCo advantages over potential competitors, Kuali will be violating the commercial-friendly principle of community source. On the other hand, to the degree that the changes do not give KualiCo advantages over potential competitors, it’s not clear why one would think that KualiCo will be a viable and strong enough company to move development faster than the way it has been until now.

The first thing to point out here is that, while KualiCo has only said so far that it will keep the multitenant code private source, there is nothing to prevent them from keeping more code private in the future. Instructure Canvas, which started out with only the multitenant code as private source, currently has the following list of features that are not in the open source distribution:

  • Multi-tenancy extensions
  • Mobile integration
  • Proprietary SIS integrations
  • Migration tools for commercial LMSs
  • Other minor customizations that only apply to our hosted environment
  • Chat Tool
  • Attendance Tool (Roll Call)

I don’t think there is a clear and specific number of private source features that marks the dividing line between good faith open source practices and “open washing”; nor am I arguing that Instructure is open washing here. Rather, my point is that, once you make the decision to be almost-all-but-not-completely open source, you place your first foot at the top of a slippery slope. By saying that they are comfortable withholding code on any feature for the purposes of making their business viable, KualiCo’s leadership opens the door to private sourcing as much of the code as they need to in order to maintain their competitive advantage.

Then there’s the whole rather arcane but important question about the change in open source licenses. Unlike the ECL license that Kuali has used until now, AGPL is “viral,” meaning that anybody who combines AGPL-licensed code with other code must release that other code under the AGPL as well. Anybody, that is, except for the copyright holder. Open source licenses are copyright licenses. If KualiCo decides to combine  open source code to which they own the copyright with private source code, they don’t have to release the private source code under the AGPL. But if a competitor, NOTKualiCo, comes along and combines KualiCo’s AGPL-licensed code with their own proprietary code, then NOTKualiCo has to release their own code under the AGPL. This creates two theoretical problems for NOTKualiCo. First, NOTKualiCo does not have the option of making the code they develop a proprietary advantage over their competitors. They have to give it away. Second, while NOTKualiCo has to share its code with KualiCo, KualiCo doesn’t have the same obligation to NOTKualiCo. So theoretically, it would be very hard for any company to compete on product differentiators when they are building upon AGPL-licensed code owned by another company.

I say “theoretically” because, in practice, it is much more complicated than that. First, there is the question of what it means to “combine” code. The various GPL licenses recognize that some software is designed to work with other software “at arm’s length” and therefore should not be subject to the viral clause. For example, it is permissible under the license to run AGPL applications on a Microsoft Windows or Apple Mac OS X operating system without requiring that those operating systems also be released under the GPL. Some code combinations fall clearly into this category, while others fall clearly into the category of running as part of the original open source program and therefore subject to the viral clause of the GPL. But there’s a vast area in the murky middle. Do tools that use APIs specifically designed for integration fall under the viral clause? It depends on the details of how they integrate as well as who you ask. It doesn’t help that the language used to qualify what counts as “combining” in Gnu’s documentation uses terms that are specific to the C programming language.

KualiCo has said that they will specifically withhold multitenant capabilities from future open source distributions. If competitors developed their own multitenant capabilities, would they be obliged to release that code under the AGPL? Would such code be “combining” with Kuali, or could it be sufficiently arm’s-length that it could be private source? It depends on how it’s developed. Since KualiCo’s CEO is the former CTO of Instructure, let’s assume for the sake of argument that Kuali’s multitenant capabilities will be developed similarly to Canvas’. Zach Wily, Instructure’s Chief Architect, described their multitenant situation to me as follows:

[O]ur code is open-source, but only with single-tenancy. The trick there is that most of our multi-tenancy code is actually in the open source code already! Things like using global identifiers to refer to an object (instead of tenant-local identifiers), database sharding, etc, are all in the code. It’s only a couple relatively thin libraries that help manage it all that are kept as closed source. So really, the open-source version of Canvas is more like a multi-tenant app that is only convenient to run with a single tenant, rather than Canvas being a single-tenant app that we shim to be multi-tenant.

The good news from NOTKualiCo’s (or NOTInstructureCo’s) perspective is that it doesn’t sound like there’s an enormous amount of development required to duplicate that multitenant functionality. Instructure has not gone through contortions to make the development of multitenant code harder for competitors; I will assume here that KualiCo will follow a similar practice. The bad news is that the code would probably have to be released under the AGPL, since it’s a set of libraries that are intended to run as a part of Kuali. That’s far from definite, and it would probably require legal and technical experts evaluating the details to come up with a strong conclusion. But it certainly seems consistent with the guidance provided by the Gnu Foundation.
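To make Zach’s description a bit more concrete, here is a minimal, hypothetical Python sketch of what such a “thin” multitenant layer could look like. None of this is Instructure’s or KualiCo’s actual code; it simply illustrates the pattern he describes, where global identifiers and shard-awareness live in the open code and only a small routing helper is kept private.

```python
# Hypothetical sketch of a thin multitenant shim over shard-aware open code.
# All names, bit layouts, and connection strings are invented for illustration.

SHARD_BITS = 13  # assume the high bits of a 64-bit ID identify the shard

def global_id(shard: int, local_id: int) -> int:
    """Encode a tenant-local row ID into a globally unique identifier."""
    return (shard << (64 - SHARD_BITS)) | local_id

def shard_of(gid: int) -> int:
    """Recover the shard number from a global identifier."""
    return gid >> (64 - SHARD_BITS)

# The open-source, single-tenant build can ship with a trivial router...
SINGLE_TENANT_DSN = "postgresql://localhost/erp"

def route_single_tenant(gid: int) -> str:
    return SINGLE_TENANT_DSN

# ...while the withheld "thin library" is little more than a per-customer
# map from shard numbers to database connection strings.
HOSTED_SHARD_MAP = {
    0: "postgresql://db-a.internal/tenant_alpha",
    1: "postgresql://db-b.internal/tenant_beta",
}

def route_hosted(gid: int) -> str:
    return HOSTED_SHARD_MAP[shard_of(gid)]
```

If the withheld piece really is this small, duplicating it would not be a major engineering hurdle, which is why the licensing question of whether such a shim “combines” with the AGPL code matters more than the code itself.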

OK, so how much of a practical difference does this make for NOTKualiCo to be able to compete with KualiCo? Probably not a huge amount, for several reasons. First, we’re not talking about an enormous amount of code here; nor is it likely to be highly differentiated. But also, NOTKualiCo owns the copyright on the libraries that they release. While anybody can adopt them under the AGPL, if KualiCo wanted to incorporate any of NOTKualiCo’s code, then the viral provision would have to apply to KualiCo. The practical net effect is that KualiCo would almost certainly never use NOTKualiCo’s code. A third competitor—call them ALSONOTKualiCo—could come in and use NOTKualiCo’s code without incurring any obligations beyond those that they already assumed by adopting KualiCo’s AGPL code, so there’s a disadvantage there for NOTKualiCo. But overall, I don’t think that withholding multitenant code from KualiCo’s open source releases—assuming that it’s done the way Instructure has done it—is a decisive barrier to entry for competitors. Unfortunately, that may just mean that KualiCo will end up having to withhold other code in order to maintain a sustainable advantage.

So overall, is Kuali guilty of “open washing” or not? I hope this post has helped make clear why I don’t love that term. The answer is complicated and subjective. I believe that “community source” was an overall marketing effort that entailed some open washing, but I also believe that (a) Brad has been pretty clear about what he really meant if you listened closely enough, and (b) not every project that called itself community source followed Brad’s tenets to the same degree or in the same way. I believe that KualiCo’s change in license and withholding of code are a violation of the particular flavor of openness that community source promised, but I’m not sure how much of a practical difference that makes to the degree that one cares about the “commercial friendliness” of the project. Would I participate in work with the Kuali Foundation today if I were determined to work only on projects that are committed to open source principles and methods? No I would not. But I would have given you the same answer a year ago. So, after making you wade through all of those arcane details, I’m sorry to say that my answer to the question of whether Kuali is guilty of open washing is, “I’m not sure that I know what the question means, and I’m not sure how much the answer matters.”


Pearson, Efficacy, and Research

Mon, 2014-12-01 08:03

A while back, I mentioned that MindWires, the consulting company that Phil and I run, had been hired by Pearson in response to an earlier post of mine expressing concerns about the possibility of the company trying to define “efficacy” in education for educators (or to them) rather than with them. The heart of the engagement was us facilitating conversations with different groups of educators about how they think about learning outcomes—how they define them, how they know whether students are achieving them, how the institution does or doesn’t support achieving them, and so on. As a rule, we don’t blog about our consulting work here on e-Literate. But since we think these conversations have broader implications for education, we asked for and received permission to blog about what we learn under the following conditions:

  • The blogging is not part of the paid engagement. We are not obliged to blog about anything in particular or, for that matter, to blog at all.
  • Pearson has no editorial input or prior review of anything we write.
  • If we write about specific schools or academics who participated in the discussions, we will seek their permission before blogging about them.

I honestly wasn’t sure what, if anything, would come out of these conversations that would be worth blogging about. But we got some interesting feedback. It seems to me that the aspect I’d like to cover in this post has implications not only for Pearson, and not only for ed tech vendors in general, but for open education and maybe for the future of education in general. It certainly is relevant to my recent post about why the LMS is the way it is and the follow-up post about fostering better campus conversations. It’s about the role of research in educational product design. It’s also about the relationship of faculty to the scholarship of teaching.

It turns out that one of the aspects of Pearson’s efficacy work that really got the attention of the folks we talked with was their research program. Pearson has about 40 PhDs doing educational research of different kinds throughout the company. They’ve completed about 300 studies and have about another 100 currently in progress. Given that educational researchers were heavily represented in the groups of academics we talked to, it wasn’t terribly surprising that the reactions of quite a few of them were variations of “Holy crap!” (That is a direct quote from one of the researchers.) And it turns out that the more our participants knew about learning outcomes research, the more they were interested in talking about how little we know about the topic. For example, even though we have had course design frameworks for a long time now, we don’t know a whole lot about which course design features will increase the likelihood of achieving particular types of learning outcomes. Also, while we know that helping students develop a sense of community in their first year at school increases the likelihood that they will stay on in school and complete their degrees, we know very little about which sorts of intra-course activities are most likely to help students develop that sense of connectedness in ways that will measurably increase their odds of completion. And to the degree that research on topics like these exists, it’s scattered throughout various disciplinary silos. There is very little in the way of a pool of common knowledge. So the idea of a well-funded organization conducting high volumes of basic research was exciting to a number of the folks that we talked to.

But how to trust that research? Every vendor out there is touting their solutions based on “brain science” and “big data.” How can the number of PhDs a vendor employs or the number of “studies” that it conducts yield more credible value than a bullet point in the marketing copy?

In part, the answer is surprisingly simple: Vendors can demonstrate the credibility and value of their research using the same mechanisms that any other researcher would. The first step is transparency. It turns out that Pearson already publishes a library of their studies on their “research and innovation network” site. Here is a sample of some of their more recent titles that will give you a sense of the range of topics:

Pearson also has a MyLabs- and Mastering-specific site that is more marketing-oriented but still has some research-based reports in it.

How good is this research? I don’t know. My guess is that, like any large body of research conducted by a reasonably large group of people, it probably varies in quality. Some of these studies have been published in academic journals or presented in academic conferences. Many have not. One thing we heard from a number of the folks we spoke to was that they’d like to see Pearson submit as much of their research as possible to blind peer-reviewed journals. Ultimately, how does an academic typically judge the quality of any research? The number of citations it gets is a good place to start. So the folks that we talked to wanted to see Pearson researchers participate as peers in the academic research community, including submitting their work to the same scrutiny that academic research undergoes.

This approach isn’t perfect, of course. We’ve seen in industries like pharmaceuticals that deep-pocketed industry players can find various ways to warp the research process. But pharmaceuticals are particularly bad because (a) the research studies are incredibly expensive to conduct, and (b) they require access to the proprietary drugs being tested, which can be difficult in general and particularly so before the product is released to the market. Educational research is much less vulnerable to these problems, but it has one of its own. By and large, replicability of experiments (and therefore confirmation or disconfirmation of results) is highly difficult or even impossible in many educational situations for both logistical and ethical reasons. So evaluating vendor-conducted or vendor-sponsored educational research would have its challenges, even with blind peer review. That said, the opinion of many of the folks we talked to, particularly those who are involved in conducting, reviewing, and publishing academic educational research, was that the challenges are manageable and the potential value generated could be considerable.

Even more interesting to me were the discussions about what to do with that research besides just publishing it. There was a lot of interest in getting faculty engaged with the scholarship of teaching, even in small ways. Take, for example, the case of an adjunct instructor, running from one school to the next to cobble together a living, spending many, many hours grading papers and exams. That person likely doesn’t have time to do a lot of reading on educational research, never mind conducting some. But she might appreciate some curricular materials that say, “there are at least three different ways to teach this topic, but the way we’re recommending is consistent with research on building a sense of class community that encourages students to feel like they belong at the school and reduces dropout rates.” She might even find some precious time to follow the link and read that research if it’s on a topic that’s important enough to her.

This is pretty much the opposite of how most educational technology and curricular materials products are currently designed. The emphasis has historically been on making things easier for the instructors by having them cede more control to the product and vendor. “Don’t worry. It’s brain science! It’s big data! You don’t have to understand it. Just buy it and the product will do the work for you.” Instead, these products could be educating and empowering faculty to try more sophisticated pedagogical approaches (without forcing them to do so). Even if most faculty pass up these opportunities most of the time, simply providing them with ready contextual access to relevant research could be transformative in the sense that it constantly affords them new opportunities to incorporate the scholarship of teaching into their daily professional lives. It also could encourage a fundamentally different relationship between the teachers and third-party curricular materials, whether they are vendor-provided or OER. Rather than being a solitary choice made behind closed doors, the choice of curricular materials could include, in part, the choice of a community of educational research and practice that the adopting faculty member wants to join. Personally, I think this is a much better selection criterion for curricular materials than the ones that are often employed by faculty today.

These ideas came out of conversations with just a couple of dozen people, but the themes were pretty strong and consistent. I’d be interested to hear what you all think.


Upcoming EDUCAUSE Webinars on Dec 4th and Dec 8th

Sun, 2014-11-30 20:39

Michael and I will be participating in two upcoming EDUCAUSE webinars.

Massive and Open: A Flipped Webinar about What We Are Learning

On Thursday, December 4th from 1:00–2:00 p.m. ET we will be joined by George Siemens for an EDUCAUSE Live! webinar:

In 2012, MOOCs burst into public consciousness with course rosters large enough to fill a stadium and grand promises that they would disrupt higher education. Two years later, after some disappointments, setbacks, and not a small amount of schadenfreude, MOOCs seem almost passé. And yet, away from the sound and the fury, researchers and teachers have been busy finding answers to some basic questions: What are MOOCs good for, and what can we learn from them? Phil Hill and Michael Feldstein will talk about what we’re learning so far.

This will be a flipped webinar. We strongly encourage you to watch the 14-minute video of interviews with MOOC Research Initiative (MRI) grantees before the webinar begins. If you would like more background on MOOCs, feel free to watch the other two videos (parts 2 and 3) in the series as well.

More information on this page. Registration is free.

UPDATE: The recording is now available, including both the Adobe Connect session and the chat transcript.

Teaching and Learning: 2014 in Retrospect, 2015 in Prospect

On Monday, December 8th from 1:00 – 2:00 p.m. ET we will be joined by Audrey Watters for an EDUCAUSE Learning Initiative (ELI) webinar:

Join Malcolm Brown, EDUCAUSE Learning Initiative director, and Veronica Diaz, ELI associate director, as they moderate this webinar with Phil Hill, Michael Feldstein, and Audrey Watters.

The past year has been an eventful one for teaching and learning in higher education. There have been developments in all areas, including analytics, the LMS, online education, and learning spaces. What were the key developments in 2014, and what do they portend for us in 2015? For this ELI webinar, we will welcome a trio of thought leaders to help us understand the past year and what it portends for the coming year. Come and share your own insights, and join the discussion.

More information on this page. This webinar is available for ELI member institutions, but it will be publicly available 90 days later.

We hope you can join us for these discussions.


WCET14 Student Panel: What do students think of online education?

Fri, 2014-11-21 15:37

Yesterday at the WCET14 conference in Portland I had the opportunity along with Pat James to moderate a student panel.[1] I have been trying to encourage conference organizers to include more opportunities to let students speak for themselves – becoming real people with real stories rather than nameless aggregations of assumptions. WCET stepped up with this session. And my new favorite tweet[2]:

@kuriousmind @lukedowden @wcet_info @PhilOnEdTech Best. Panel. Ever. Massively insightful experience.

— Matthew L Prineas (@mprineas) November 20, 2014

As I called out in my introduction, we talk about students, we characterize students, we listen to others talk about students, but we don’t do a good job in edtech talking with students.  There is no way that a student panel can be representative of all students, even for a single program or campus[3]. We’re not looking for statistical answers, but we can hear stories and gain understanding.

The four students were all working adults (I’m including a stay-at-home mom in this category) taking undergraduate online programs. They were quite well-spoken and self-aware, which made for a great conversation, including comments on the potential for faculty-student interaction that might surprise some:

A very surprising (to me) comment on class size:

And specific feedback on what doesn’t work well in online courses:

To help with viewing of the panel, here are the primary questions / topics of discussion:

The whole student panel is available on the Mediasite platform:

Thanks to the help of the Mediasite folks, I have also uploaded a Youtube video of the full panel:


  1. Pat is the executive director of the California Community College Online Education Initiative (OEI) – see her blog here for program updates.
  2. I’m not above #shameless.
  3. As can be seen from this monochromatic panel, which might make sense for Portland demographics but not from a nationwide perspective.


In Which I (Partially) Disagree with Richard Stallman on Kuali’s AGPL Usage

Wed, 2014-11-19 18:32

Since Michael is making this ‘follow-up blog post’ week, I guess I should jump in.

In my latest post on Kuali and the usage of the AGPL license, the central argument is that this license choice is key to understanding the Kuali 2.0 strategy – protecting KualiCo, as a new for-profit entity, in its future work to develop multi-tenant cloud hosting code.

What I have found interesting is that in most of my conversations with Kuali community people, even those who are disillusioned seem to think the KualiCo creation makes some sense. The real frustration and pushback has been on how decisions are made, how decisions have been communicated, and how the AGPL license choice will affect the community.

In the comments, Richard Stallman chimed in.

As the author of the GNU General Public License and the GNU Affero General Public License, and the inventor of copyleft, I would like to clear up a possible misunderstanding that could come from the following sentence:

“Any school or Kuali vendor, however, that develops its own multi-tenant cloud-hosting code would have to relicense and share this code publicly as open source.”

First of all, thinking about “open source” will give you the wrong idea about the reasons why the GNU AGPL and the GNU GPL work as they do. To see the logic, you should think of them as free software licenses; more specifically, as free software licenses with copyleft.

The idea of free software is that users of software deserve freedom. A nonfree program takes freedom away from its users, so if you want to be free, you need to avoid it. The aim of our copyleft licenses is to make sure all users of our code get freedom, and encourage release of improvements as free software. (Nonfree improvements may as well be discouraged since we’d need to avoid them anyway.) See http://gnu.org/philosophy/free-software-even-more-important.html.

I don’t use the term “open source”, since it rejects these ethical ideas. (http://gnu.org/philosophy/open-source-misses-the-point.html.) Thus I would say that the AGPL requires servers running modified versions of the code to make the source for the running version available, under the AGPL, to their users.

The license of the modifications themselves is a different question, though related. The author of the modifications could release the modifications under the AGPL itself, or under any AGPL-compatible free software license. This includes free licenses which are pushovers, such as the Apache 2.0 license, the X11 license, and the modified BSD license (but not the original BSD license — see http://gnu.org/licenses/license-list.html).

Once the modifications are released, Kuali will be able to get them and use them under whatever license they carry. If it is a pushover license, Kuali will be able to incorporate those modifications even into proprietary software. (That’s what makes them pushover licenses.)

However, if the modifications carry the AGPL, and Kuali incorporates them into a version of its software, Kuali will be bound by the AGPL. If it distributes that version, it will be required to do so under the AGPL. If it installs that version on a server, it will be required by the AGPL to make the whole of the source code for that version available to the users of that server.

To avoid these requirements, Kuali would have to limit itself to Kuali’s own code, others’ code released under pushover licenses, plus code for which it gets special permission. Thus, Kuali will not have as much of a special position as some might think.

See also http://gnu.org/philosophy/assigning-copyright.html
and http://gnu.org/philosophy/selling-exceptions.html.

Dr Richard Stallman
President, Free Software Foundation (gnu.org, fsf.org)
Internet Hall-of-Famer (internethalloffame.org)
MacArthur Fellow

I appreciate this clarification and Stallman’s participation here at e-Literate, and it is useful to understand the rationale and ethics behind AGPL. However, I disagree with the statement “Thus, Kuali will not have as much of a special position as some might think”. I do not think he is wrong, per se, but the combination of both the AGPL license and the Contributor’s License Agreement (CLA) in my view does ensure that KualiCo has a special position. In fact, that is the core of the Kuali 2.0 strategy, and their approach would not be possible without the AGPL usage.

Note: I have had several private conversations that have helped me clarify my thinking on this subject. Besides Michael with his comment to the blog, Patrick Masson and three other people have been very helpful. I also interviewed Chris Coppola from KualiCo to understand and confirm the points below. Any mistakes in this post, however, are my own.

It is important to understand two different methods of licensing at play – distributing code under the AGPL license and contributing code to KualiCo through a CLA (Kuali has a separate CLA for partner institutions and a Corporate CLA for companies).

  • Distribution – Anyone can download the Kuali 2.0 code from KualiCo and make modifications as desired. If the code is used privately, there is no requirement for distributing the modified code. If, however, a server runs the modified code, the reciprocal requirements of AGPL kick in and the code must be distributed (made available publicly) with the AGPL license or a pushover license. This situation is governed by the AGPL license.
  • Contribution – Anyone who modifies the Kuali 2.0 code and contributes it to KualiCo for inclusion into future releases of the main code grants a license with special permission to KualiCo to do with the code as they see fit. This situation is governed by the CLA and not AGPL.

I am assuming that the future KualiCo multi-tenant cloud-hosting code is not separable from the Kuali code. In other words, the Kuali code would need modifications to allow multi-tenancy.

For a partner institution, its work is governed by the CLA. For a company, however, the choice of whether to contribute code is mutual between that company and KualiCo, in that both would have to agree to sign a CLA. A company may choose to do this to ensure that bug fixes or Kuali enhancements get into the main code and do not have to be reimplemented with each new release.

For any contributed code, KualiCo can still keep their multi-tenant code proprietary as their special sauce. For distributed code under AGPL that is not contributed under the CLA, the code would be publicly available and it would be up to KualiCo whether to incorporate any such code. If KualiCo incorporated any of this modified code into the main code base, they would have to share all of the modified code as well as their multi-tenant code. For this reason, KualiCo will likely never accept any code that is not under the CLA – they do not want to share their special sauce. Chris Coppola confirmed this assumption.

This setup strongly discourages any company from directly competing with KualiCo (vendor protection) and is indeed a special situation.

The post In Which I (Partially) Disagree with Richard Stallman on Kuali’s AGPL Usage appeared first on e-Literate.

A Weird but True Fact about Textbook Publishers and OER

Wed, 2014-11-19 13:44

As I was perusing David Kernohan’s notes on Larry Lessig’s keynote at the OpenEd conference, one statement leapt out at me:

Could the department of labour require that new education content commissioned ($100m) be CC-BY? There was a clause (124) that suggested that the government should check that no commercial content should exist in these spaces. Was argued down. But we were “Not important” enough to be defeated.

It is absolutely true that textbook publishers do not currently see OER as a major threat. But here’s a weird thing that is also true:

These days, many textbook publishers like OER.

Let me start with the full disclosure. For 18 months, I was an employee of Cengage Learning, one of the Big Three textbook publishers in US higher education. Since then, I have consulted for textbook publishers on and off. Pearson is a current client, and there have been others. Make of that what you will in terms of my objectivity on this subject, but I have been in the belly of the beast. I have had many conversations with textbook publisher employees at all levels about OER, and many of them truly, honestly like it. They really, really like it. As a rule, they don’t understand it. But some of them actually see it as a way out of the hole that they’re in.

This is a relatively recent thing. Not so very long ago, you’d get one of two reactions from employees at these companies, depending on the role of the person you were talking to. Editors would tend to dismiss OER immediately because they had trouble imagining that content that didn’t go through their traditional editorial vetting process could be good (fairly similarly to the way academics would dismiss Wikipedia as something that couldn’t be trusted without traditional peer review). There were occasional exceptions to this, but always for very granular content. Videos, for example. Sometimes editors saw (or still see) OER as extra bits—or “ancillary materials,” in their vernacular—that could be bundled with their professionally edited product. That’s the most that editors typically thought about OER. At the executive level, every so often they would trot out OER on their competitive threat list, look at it for a bit, and decide that no, they don’t see evidence that they are losing significant sales to OER. Then they would forget about it for another six months or so. Publishers might occasionally fight OER at a local level, or even at a state level in places like Washington or California where there was legislation. But in those cases the fight was typically driven by the sales divisions that stood to lose commissions, and they were treated like any other local or regional competition (such as home-grown content development). It wasn’t viewed as anything more than that. For the most part, OER was just not something publishers thought a lot about.

That has changed in US higher education as it has become clear that textbook profits are collapsing as students find more ways to avoid buying the new books. The traditional textbook business is clearly not viable in the long term, at least in that market, at least at the scale and margins that the bigger publishers are used to making. So these companies want to get out of the textbook business. A few of them will say that publicly, but many of them say it among themselves. They don’t want to be out of business. They just want to be out of the textbook business. They want to sell software and services that are related to educational content, like homework platforms or course redesign consulting services. But they know that somebody has to make the core curricular content in order for them to “add value” around that content. As David Wiley puts it, content is infrastructure. Increasingly, textbook publishers are starting to think that maybe OER can be their infrastructure. This is why, for example, it makes sense for Wiley (the publisher, not the dude) to strike a licensing deal with OpenStax. They’re OK with not making a lot of money on the books as long as they can sell their WileyPlus software. Which, in turn, is why I think that Wiley (the dude, not the publisher) is not crazy at all when he predicts that “80% of all US general education courses will be using OER instead of publisher materials by 2018.” I won’t be as bold as he is to pick a number, but I think he could very well be directionally correct. I think many of the larger publishers hope to be winding down their traditional textbook businesses by 2018.

How particular OER advocates view this development will depend on why they are OER advocates. If your goal is to decrease curricular materials costs and increase the amount of open, collaboratively authored content, then the news is relatively good. Many more faculty and students are likely to be exposed to OER over the next four or five years. The textbook companies will still be looking to make their money, but they will have to do so by selling something else, and they will have to justify the value of that something else. It will no longer be the case that students buy closed textbooks because it never occurs to faculty that there is another viable option. On the other hand, if you are an OER advocate because you want big corporations to stay away from education, then Larry Lessig is right. You don’t currently register as a significant threat to them.

Whatever your own position might be on OER, George Siemens is right to argue that the significance of this coming shift demands more research. There’s a ton that we don’t know yet, even about basic attitudes of faculty, which is why the recent Babson survey that everybody has been talking about is so important. And there’s a funny thing about that survey which few people seem to have noticed:

It was sponsored by Pearson.

The post A Weird but True Fact about Textbook Publishers and OER appeared first on e-Literate.

Better Ed Tech Conversations

Tue, 2014-11-18 09:44

This is another follow-up to the comments thread on my recent LMS rant. As usual, Kate Bowles has insightful and empathetic comments:

…From my experience inside two RFPs, I think faculty can often seem like pretty raucous bus passengers (especially at vendor demo time) but in reality the bus is driven by whoever’s managing the RFP, to a managed timetable, and it’s pretty tightly regulated. These constraints are really poorly understood and lead to exactly the predictable and conservative outcomes you observe. Nothing about the process favours rethinking what we do.

Take your focus on the gradebook, which I think is spot on: the key is how simply I can pull grades in, and from where. The LMS we use is the one with the truly awful, awful gradebook. Awful user experience, awful design issues, huge faculty development curve even to use it to a level of basic competence.

The result across the institution is hostility to making online submission of assignments the default setting, as overworked faculty look at this gradebook and think: nope.

So beyond the choosing practice, we have the implementation process. And nothing about this changes the mind of actual user colleagues. So then the institutional business owner group notices underuse of particular features—oh hey, like online submission of assignments—and they say to themselves: well, we need a policy to make them do it. Awfulness is now compounding.

But then a thing happens. Over the next few years, faculty surreptitiously develop a workable relationship with their new LMS, including its mandated must-use features. They learn how to do stuff, how to tweak and stretch and actually enjoy a bit. And that’s why when checklist time comes around again, they plead to have their favourite corner left alone. They only just figured it out, truly.

If institutions really want to do good things online, they need to fund their investigative and staff development processes properly and continuously, so that when faculty finally meet vendors, all can have a serious conversation together about purpose, before looking at fit.

This comment stimulated a fair bit of conversation, some of which continued on the comments thread of Jonathan Rees’ reply to my post.

The bottom line is that there is a vicious cycle. Faculty, who are already stretched to the limit (and beyond) with their workloads, are brought into a technology selection process that tends to be very tactical and time-constrained. Their response, understandably, tends to be to ask for things that will require less time from them (like an easier grade book, for example). When administrators see that they are not getting deep and broad adoption, they tend to mandate technology use. Which makes the problem worse rather than better because now faculty are forced to use features that take up more of their time without providing value, leaving them with less time to investigate alternatives that might actually add value. Round and round it goes. Nobody stops and asks, “Hey, do we really need this thing? What is it that we do need, and what is the most sensible way of meeting our needs?”

The only way out of this is cultural change. Faculty and administrators alike have to work together toward establishing some first principles around which problems the technology is supposed to help them solve and what a good solution would look like. This entails investing time and university money in faculty professional development, so that they can learn what their options are and what they can ask for. It entails rewarding faculty for their participation in the scholarship of teaching. And it entails faculty seeing educational technology selection and policy as something that is directly connected to their core concerns as both educational professionals and workers.

Sucky technology won’t fix itself, and vendors won’t offer better solutions if customers can’t define “better” for them. Nor will open source projects fare better. Educational technology only improves to the extent that educators develop a working consensus regarding what they want. The technology is a second-order effect of the community. And by “community,” I mean the group that collectively has input on technology adoption decisions. I mean the campus community.

The post Better Ed Tech Conversations appeared first on e-Literate.

Walled Gardens, #GamerGate, and Open Education

Sat, 2014-11-15 08:41

There were a number of interesting responses to my recent LMS rant. I’m going to address a couple of them in short posts, starting with this comment:

…The training wheels aren’t just for the faculty, they’re for the students, as well. The idea that the internet is a place for free and open discourse is nice, of course, but anyone who pays attention knows that to be a polite fiction. The public internet is a relatively safe place for straight, white, American males, but freedom of discourse is a privilege that only a small minority of our students (and faculty, for that matter) truly enjoy. If people didn’t understand that before, #notallmen/#yesallmen and GamerGate should certainly have driven that home.

As faculty and administrators we have an obligation–legal, and more importantly moral–to help our students understand the mechanisms, and unfortunately, often the consequences, of public discourse, including online communications. This is particularly true for the teenagers who make up the bulk of the undergrad population. Part of transformative teaching is giving people a safe space to become vulnerable and open to change. For those of us who still think of the “‘net” in terms of its early manifestations that were substantially open and inclusive research networks and BBS of largely like-minded people (someone else mentioned The Well, although The Well, of course, has always been a walled garden), open access seems tempting. But today’s internet is rarely that safe space for growth and learning. Just because students can put everything on the internet (YikYak, anyone?) doesn’t mean that they should.

In many, if not most, situations, a default stance of a walled garden with easy-to-implement open access options for chosen and curated content makes a great deal of sense….

There are lots of legitimate reasons why students might not want to post on the public internet. A few years back, when I was helping my wife with a summer program that exposed ESL high schoolers to college and encouraged them to feel like it could be something for them, we had a couple of students who did not want to blog. We didn’t put them on the spot by asking why, but we suspected that their families were undocumented and that they were afraid of getting in trouble.

This certainly doesn’t mean that everybody has to use an LMS or lock everything behind a login, but it does mean that faculty teaching open courses need to think about how to accommodate students who won’t or can’t work on the open web. I don’t think this sort of accommodation in any way compromises the ethic of open education. To the contrary, ensuring access for everyone is part of what open education is all about.

The post Walled Gardens, #GamerGate, and Open Education appeared first on e-Literate.

APLU Panel: Effects of digital education trends on teaching faculty

Wed, 2014-11-12 18:47

Last week I spoke on a panel at the Association of Public and Land-grant Universities (APLU) annual conference. Below are the slides and abridged notes on the talk.

It is useful to look across many of the technology-driven trends affecting higher education and ask what they tell us about the faculty of the future. Distance education (DE) is of course not new; the first DE course, a shorthand course taught from London, dates to the mid-1800s. These distance, or often correspondence, courses have expanded over time, but with the rise of the Internet, online education (today’s version of DE) has been accelerating over the past 20 years to become quite common in our higher education system.

For the first time, IPEDS has been collecting data on DE, starting with Fall 2012 data. We finally have some real data to show us what is happening state-by-state and by different measures. We’re talking numbers from 20 to 40+% of students taking at least one online course at public 4-year institutions. This is no longer just a fringe condition for our students – it’s hitting the mainstream.

[Slide 18: Hill APLU slides, Nov 2014]

[Slide 19: Hill APLU slides, Nov 2014]

We’re now in an era where online courses are becoming a standard part of our students’ educational experience. The student demographics and experience are changing. Much of this is driven by working adults, people coming back into college to get a degree, and what used to be called non-traditional students. What we know, of course, is that non-traditional students are now in the majority – we need new terminology.

The numbers we’re discussing with distance education really understate the change. There is no longer a simple dualism of traditional vs. online education. We’re seeing an emerging landscape of educational delivery models. What does this emerging landscape of educational delivery models look like? I have categorized the models not just in terms of modality—ranging from face-to-face to fully online—but also in terms of the method of course design. These two dimensions allow a richer understanding of the new landscape of educational delivery models. Within this landscape, the following models have emerged: ad hoc online courses and programs, fully online programs, School-as-a-Service or Online Service Providers, competency-based education, blended/hybrid courses and the flipped classroom, and MOOCs.

[Slide 23: Hill APLU slides, Nov 2014]

The vertical axis of course design gets at a core assumption that underlies much of the higher education system – the one-to-one relationship between a faculty member and a course. With many of the new models, we’re getting into multi-disciplinary faculty team designs and even team-based course designs that include faculty, subject matter experts, instructional designers, and multimedia experts. These models raise a lot of questions over ownership of content and the ability or permission to duplicate course sections.

These new models change the assumptions of who owns the course, and they lead to different processes for designing, delivering, and updating courses – processes that just don’t exist in traditional education. The implications of this approach are significant. These differences create a barrier that very few institutions can cross.

It is culturally difficult to cross the barrier into team-based course design, and yet this is precisely what many of the new technology-enabled models involve.

There is another case of seeing the Course as a Product. Previously we had three separate domains: content (typically provided by publishers), platforms (typically provided by LMS vendors), and course and curriculum design (typically provided by faculty and academic departments). What we’re seeing more recently is the breakdown, or merging, of these domains, with various products and services overlapping. Digital content includes both content and platform. Courseware, however, takes this to the next level and organizes the content and delivery around learning outcomes. In other words, Courseware actually overlaps into the domain of course and curriculum design.

 

[Slide 24: Hill APLU slides, Nov 2014]

From an organizational change perspective, however, we are just now starting to see how digital education is affecting the mainstream of higher education. We’re not just dealing with niche programs but also having to grapple with how these changes are affecting our institutions as a whole.

Another way of viewing this situation is that we had been used to people experimenting with digital education as a group quietly playing in the corner.

[Slide 21: Hill APLU slides, Nov 2014]

But these people are contained no more and are loose in the house, often causing chaos but also having fun.

[Slide 22: Hill APLU slides, Nov 2014]

These moves raise many questions that need to be addressed at a policy and faculty governance level.

  • How broadly are we applying these initiatives? There are big questions in figuring out which pilot programs to start and whether and when to expand the new models beyond an isolated program.
  • Who owns the course when a team works on the design from start to finish?
  • Who needs to give permission to take a master course and duplicate it into multiple shells, or course sections, taught by others?
  • How should faculty be credited for team-based course design and how should professional development opportunities adjust?

The late family therapist Virginia Satir created a model that can describe much of the change arising from technology-based innovations. The model shows how social systems or cultures react to a transformative event through various stages (see Steve Smith’s post for more information).

The issue for our discussion is that a foreign element – the change or innovation – is the key event that triggers the move away from the late status quo. This change typically leads to resistance, and eventually to a period of chaos. During these two phases, the performance of the social system fluctuates to a large degree and actually is often worse than during the status quo phase, as the social system wrestles with how to integrate the change in a manner that produces benefits. The second key event is the transforming idea, when people determine how to integrate the innovation into the core of the social system. This integration phase leads to real performance improvements as well as less fluctuation. As the innovation reaches a critical mass, a new status quo develops.

[Chart: Satir change model – performance over time through late status quo, resistance, chaos, integration, and new status quo]

It is not a given that the innovation actually takes hold; there are cases where the social system never benefits from the innovation.

Some of the implications for faculty during these times of change:

  • With all of these changes, it’s not just that change will be difficult, but also that performance will fluctuate wildly and often our outcomes will get worse as the system adapts to an innovation.
  • The foreign element that dismantles the status quo is not necessarily the form of the technology that ultimately gets adopted. The transforming idea is typically related to the foreign element, but it is not equivalent. Faculty ideally will have the time and opportunity to help “find” the transforming idea.
  • It would be a mistake to add accountability measures prematurely, when the system has not had a chance to figure out how to successfully improve outcomes.

Many of these digital education models also raise the question of whether faculty members need to be on campus, and if not, what support structures should be in place to help these distance faculty. What about professional development opportunities? Beyond that, how do you include distance faculty in governance processes?

Other changes, such as competency-based education, can move beyond seat time as a core design element. But how does this change faculty compensation and faculty workload?

We also see assumptions about faculty age – in particular, that older faculty are more resistant to change. What I’m seeing lately is more and more evidence that this assumption is incorrect – older faculty in general are not more resistant to change than younger faculty – and this could have implications for ed tech initiatives struggling to get faculty buy-in.

In a recent post here at 20MM, I pointed out an interesting finding from a recent survey on the use of Open Educational Resources (OER) by the Babson Survey Research Group.

It has been hypothesized that it is the youngest faculty that are the most digitally aware, and have had the most exposure to and comfort in work with digital resources. Older faculty are sometimes assumed to be less willing to adopt the newest technology or digital resources. However, when the level of OER awareness is examined by age group, it is the oldest faculty (aged 55+) that have the greatest degree of awareness, while the youngest age group (under 35) trail behind. The youngest faculty do show the greatest proportion claiming to be “very aware” (6.7%), but have lower proportions reporting that they are “aware” or “somewhat aware.”

[Slide 30: Hill APLU slides, Nov 2014]

Combine this finding with one from another recent survey by Gallup, sponsored and reported by Inside Higher Education.

The doubt extends across age groups and most academic disciplines. Tenured faculty members may be the most critical of online courses, with an outright majority (52 percent) saying online courses produce results inferior to in-person courses, but that does not necessarily mean opposition rises steadily with age. Faculty respondents younger than 40, for example, are more critical of online courses (38 percent) than are those between the ages of 50 and 59 (34 percent).

These findings challenge the predominant assumption about older faculty being more resistant to change, but I would not consider it proof of the reverse. For now, I think the safest assumption is to stop assuming that age is a determining factor for ed tech and pedagogical changes from faculty members. What are the implications?

  • I have heard informal comments at schools about instituting change by waiting it out – letting the resistant older faculty retire over time and allowing innovative younger faculty to change the culture. This approach and assumption could be a mistake.
  • Everett Rogers has found that opinion leaders play a crucial role in the change process. There could be key advantages in actively reaching out to older faculty who might be established opinion leaders to include them directly in change initiatives.
  • We should not assume that older faculty would not want additional support and professional development. These ‘senior’ faculty members may need additional opportunities to learn new technologies, but you might be surprised to find they are more receptive to experimentation and participation in change initiatives.

I would not presume to be able to answer these questions for you, but I think it is important to highlight how technology changes will have faculty support and management implications that go well beyond niche programs and could change the faculty of the future. These innovations are having a broader effect.

Q (audience). Another dimension is that we’re seeing more need for interaction, seeking greater impact with students. We need more meaningful interactions between faculty and students. How do these changes apply to interaction?

A. One of the most encouraging aspects of the Inside Higher Ed faculty survey mentioned above is that the biggest marker of quality in online learning (and hopefully f2f learning) is mentorship. The quality of an online course or program depends on the design and implementation. There are a lot of bad online courses with poor engagement. But at the same time there are many well-designed online courses with more interaction between faculty and student than is even possible in traditional face-to-face courses. For example, online tools can increase the ability to reach out to introverts and bring them into group discussions. Well-designed learning analytics can act as a teacher’s eyes and ears to see more directly how different students are doing in the class. Moving forward, this is one of the biggest opportunities to enhance interaction. You raise a good point, though – it’s a challenge and cannot just be solved automatically.

If faculty or an institution fall back on traditional course design just being placed online,  there will be problems. Some of the best-designed courses, however, go beyond the official LMS tools and use social media, blogs, and various interactive tools to enhance creativity and interaction. Long and short – it’s not a matter of if a course or program is put online, it’s a matter of how the course is designed, the faculty role in actively creating opportunities for interaction, and adequate support for students and faculty.

The post APLU Panel: Effects of digital education trends on teaching faculty appeared first on e-Literate.

Dammit, the LMS

Mon, 2014-11-10 16:07

Count De Monet: I have come on the most urgent of business. It is said that the people are revolting!

King Louis: You said it; they stink on ice.

- History of the World, Part I

Jonathan Rees discovered a post I wrote about the LMS in 2006 and, in doing so, discovered that I was writing about LMSs in 2006. I used to write about the future of the LMS quite a bit. I hardly ever do anymore, mostly because I find the topic to be equal parts boring and depressing. My views on the LMS haven’t really changed in the last decade. And sadly, LMSs themselves haven’t changed all that much either. At least not in the ways that I care about most. At first I thought the problem was that the technology wasn’t there to do what I wanted to do gracefully and cost-effectively. That excuse doesn’t exist anymore. Then, once the technology arrived as Web 2.0 blossomed[1], I thought the problem was that there was little competition in the LMS market and therefore little reason for LMS providers to change their platforms. That’s not true anymore either. And yet the pace of change is still glacial. I have reluctantly come to the conclusion that the LMS is the way it is because a critical mass of faculty want it to be that way.

Jonathan seems to think that the LMS will go away soon because faculty can find everything they need on the naked internet. I don’t see that happening any time soon. But the reasons why seem to get lost in the perennial conversations about how the LMS is going to die any day now. As near as I can remember, the LMS has been about to die any day now since at least 2004, which was roughly when I started paying attention to such things.

And so it comes to pass that, with great reluctance, I take up my pen once more to write about the most dismal of topics: the future of the LMS.

In an Ideal World…

I have been complaining about the LMS on the internet for almost as long as there have been people complaining about the LMS on the internet. Here’s something I wrote in 2004:

The analogy I often make with Blackboard is to a classroom where all the seats are bolted to the floor. How the room is arranged matters. If students are going to be having a class discussion, maybe you put the chairs in a circle. If they will be doing groupwork, maybe you put them in groups. If they are doing lab work, you put them around lab tables. A good room set-up can’t make a class succeed by itself, but a bad room set-up can make it fail. If there’s a loud fan drowning out conversation or if the room is so hot that it’s hard to concentrate, you will lose students.

I am a first- or, at most, second-generation internet LMS whiner. And that early post captures an important aspect of my philosophy on all things LMS and LMS-like. I believe that the spaces we create for fostering learning experiences matter, and that one size cannot fit all. Therefore, teachers and students should have a great deal of control in shaping their learning environments. To the degree that it is possible, technology platforms should get out of the way and avoid dictating choices. This is a really hard thing to do well in software, but it is a critical guiding principle for virtual learning environments. It’s also the thread that ran through the 2006 blog post that Jonathan quoted:

Teaching is about trust. If you want your students to take risks, you have to create an environment that is safe for them to do so. A student may be willing to share a poem or a controversial position or an off-the-wall hypothesis with a small group of trusted classmates that s/he wouldn’t feel comfortable sharing with the entire internet-browsing population and having indexed by Google. Forever. Are there times when encouraging students to take risks out in the open is good? Of course! But the tools shouldn’t dictate the choice. The teacher should decide. It’s about academic freedom to choose best practices. A good learning environment should enable faculty to password-protect course content but not require it. Further, it should not favor password-protection, encouraging teachers to explore the spectrum between public and private learning experiences.

Jonathan seems to think that I was supporting the notion of a “walled garden” in that post—probably because the title of the post is “In Defense of Walled Gardens”—but actually I was advocating for the opposite at the platform level. A platform that is a walled garden is one that forces particular settings related to access and privacy on faculty and students. Saying that faculty and students have a right to have private educational conversations when they think those are best for the situation is not at all the same as saying that it’s OK for the platform to dictate decisions about privacy (or, for that matter, that educational conversations should always be private). What I have been trying to say, there and everywhere, is that our technology needs to support and enable the choices that humans need to make for themselves regarding the best conditions for their personal educational needs and contexts.

Regarding the question of whether this end should be accomplished through an “LMS,” I am both agnostic and utilitarian on this front. I can imagine a platform we might call an “LMS” that would have quite a bit of educational value in a broad range of circumstances. It would bear no resemblance to the LMS of 2004 and only passing resemblance to the LMS of 2014. In the Twitfight between Jonathan and Instructure co-founder Brian Whitmer that followed Jonathan’s post, Brian talked about the idea of an LMS as a “hub” or an “aggregator.” These terms are compatible with what my former SUNY colleagues and I were imagining in 2005 and 2006, although we didn’t think of it in those terms. We thought of the heart of it as a “service broker” and referred to the whole thing in which it would live as a “Learning Management Operating System (LMOS).” You can think of the broker as the aggregator and the user-facing portions of the LMOS as the hub that organized the aggregated content and activity for ease-of-use purposes.

By the way, if you leave off requirements that such a thing should be “institution-hosted” and “enterprise,” the notion that an aggregator or hub would be useful in virtual learning environments is not remotely contentious. Jim Groom’s ds106 uses a WordPress-based aggregation system, the current generation of which was built by Alan Levine. Stephen Downes built gRSShopper ages ago. Both of these systems are RSS aggregators at heart. That second post of mine on the LMOS service broker, which gives a concrete example of how such a thing would work, mainly focuses on how much you could do by fully exploiting the rich metadata in an RSS feed and how much more you could do with it if you just added a couple of simple supplemental APIs. And maybe a couple of specialized record types (like iCal, for example) that could be syndicated in feeds similarly to RSS. While my colleagues and I were thinking about the LMOS as an institution-hosted enterprise application, there’s nothing about the service broker that requires it to be so. In fact, if you add some extra bits to support federation, it could just as easily form the backbone of a distributed network of personal learning environments. And that, in fact, is a pretty good description of the IMS standard in development called Caliper, which is why I am so interested in it. In my recent post about walled gardens from the series that Jonathan mentions in his own post, I tried to spell out how Caliper could enable either a better LMS, a better world without an LMS, or both simultaneously.
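To make the aggregator/broker idea more concrete, here is a minimal sketch of the simplest version of the pattern: poll the RSS feeds of a set of student blogs and collect the posts tagged for a particular assignment that were published by the due date. This is my own illustration, not code from the LMOS posts; the feed URLs, the tag convention, and the function name are hypothetical, and it assumes the commonly used feedparser library.

```python
# A sketch of the "aggregator" idea: poll student blog feeds and collect the
# posts for one assignment that were published by the due date. The feed URLs,
# the tag convention, and the function name are hypothetical illustrations.
from datetime import datetime, timezone

import feedparser  # widely used RSS/Atom parsing library

STUDENT_FEEDS = {
    "alice": "https://alice.example.edu/blog/feed",
    "bob": "https://bob.example.edu/blog/feed",
}

def collect_assignment_posts(feeds, assignment_tag, due):
    """Return {student: [post dicts]} for entries tagged with the assignment
    and published on or before the due date."""
    submissions = {}
    for student, url in feeds.items():
        parsed = feedparser.parse(url)
        for entry in parsed.entries:
            tags = {(t.get("term") or "").lower() for t in entry.get("tags", [])}
            if assignment_tag.lower() not in tags:
                continue
            published = entry.get("published_parsed")  # struct_time or None
            if published is None:
                continue
            when = datetime(*published[:6], tzinfo=timezone.utc)
            if when <= due:
                submissions.setdefault(student, []).append(
                    {"title": entry.get("title", ""),
                     "link": entry.get("link", ""),
                     "published": when}
                )
    return submissions

if __name__ == "__main__":
    due = datetime(2014, 11, 21, 23, 59, tzinfo=timezone.utc)
    for student, posts in collect_assignment_posts(STUDENT_FEEDS, "essay-2", due).items():
        print(student, [p["link"] for p in posts])
```

The grade passback, calendar records, and custom apps described in the LMOS posts would then be supplemental services layered on top of the same feed data.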

Setting aside all the technical gobbledygook, here’s what all this hub/aggregator/broker stuff amounts to:

  • Jonathan wants to “have it all,” by which he means full access to the wide world of resources on the internet. Great! Easily done.
  • The internet has lots of great stuff but is not organized to make that stuff easy to find or reduce the number of clicks it takes you to see a whole bunch of related stuff. So it would be nice to have the option of organizing the subset of stuff that I need to look at for a class in ways that are convenient for me and make minimal demands on me in terms of forcing me to go out and proactively look to see what has changed in the various places where there might be activity for my class.
  • Sometimes the stuff happening in one place on the internet is related to stuff happening in another place in ways that are relevant to my class. For example, if students are writing assignments on their blogs, I might want to see who has gotten the assignment done by the due date and collect all those assignments in one place that’s convenient for me to comment on them and grade them. It would be nice if I had options of not only aggregating but also integrating and correlating course-related information.
  • Sometimes I may need special capabilities for teaching my class that are not available on the general internet. For example, I might want to model molecules for chemistry or have a special image viewer with social commenting capabilities for art history. It would be nice if there were easy but relatively rich ways to add custom “apps” that can feed into my aggregator.
  • Sometimes it may be appropriate and useful (or even essential) to have private educational conversations and activities. It would be nice to be able to do that when it’s called for and still have access to the whole public internet, including the option to hold classes mostly “in public.”

In an ideal world, every class would have its own unique mix of these capabilities based on what’s appropriate for the students, teacher, and subject. Not every class needs all of these capabilities. In fact, there are plenty of teachers who find that their classes don’t need any of them. They do just fine with WordPress. Or a wiki. Or a listserv. Or a rock and a stick. And these are precisely the folks who complain the loudest about what a useless waste the LMS is. It’s a little like an English professor walking into a chemistry lab and grousing, “Who the hell designed this place? You have these giant tables which are bolted to the floor in the middle of the room, making it impossible to have a decent class conversation. And for goodness sake, the tables have gas jets on them. Gas jets! Of all the pointless, useless, preposterous, dangerous things to have in a classroom…! And I don’t even want to know how much money the college wasted on installing this garbage.”

Of course, today’s LMS doesn’t look much like what I described in the bullet points above (although I do think the science lab analogy is a reasonable one even for today’s LMS). It’s fair to ask why that is the case. Some of us have been talking about this alternative vision for something that may or may not be called an “LMS” for a decade or longer now. And there are folks like Brian Whitmer at LMS companies (and LMS open source projects) saying that they buy into this idea. Why don’t our mainstream platforms look like this yet?

Why We Can’t Have Nice Things

Let’s imagine another world for a moment. Let’s imagine a world in which universities, not vendors, designed and built our online learning environments. Where students and teachers put their heads together to design the perfect system. What wonders would they come up with? What would they build?

Why, they would build an LMS. They did build an LMS. Blackboard started as a system designed by a professor and a TA at Cornell University. Desire2Learn (a.k.a. Brightspace) was designed by a student at the University of Waterloo. Moodle was the project of a graduate student at Curtin University in Australia. Sakai was built by a consortium of universities. WebCT was started at the University of British Columbia. ANGEL at Indiana University.

OK, those are all ancient history. Suppose that now, after the consumer web revolution, you were to get a couple of super-bright young graduate students who hate their school’s LMS to go on a road trip, talk to a whole bunch of teachers and students at different schools, and design a modern learning platform from the ground up using Agile and Lean methodologies. What would they build?

They would build Instructure Canvas. They did build Instructure Canvas. Presumably because that’s what the people they spoke to asked them to build.

In fairness, Canvas isn’t only a traditional LMS with a better user experience. It has a few twists. For example, from the very beginning, you could make your course 100% open in Canvas. If you want to teach out on the internet, undisguised and naked, making your Canvas course site just one class resource of many on the open web, you can. And we all know what happened because of that. Faculty everywhere began opening up their classes. It was sunlight and fresh air for everyone! No more walled gardens for us, no sirree Bob.

That is how it went, isn’t it?

Isn’t it?

I asked Brian Whitmer the percentage of courses on Canvas that faculty have made completely open. He didn’t have an exact number handy but said that it’s “really low.” Apparently, lots of faculty still like their gardens walled. Today, in 2014.

Canvas was a runaway hit from the start, but not because of its openness. Do you know what did it? Do you know what single set of capabilities, more than any other, catapulted it to the top of the charts, enabling it to surpass D2L in market share in just a few years? Do you know what the feature set was that had faculty from Albany to Anaheim falling to their knees, tears of joy streaming down their faces, and proclaiming with cracking, emotion-laden voices, “Finally, an LMS company that understands me!”?

It was Speed Grader. Ask anyone who has been involved in an LMS selection process, particularly during those first few years of Canvas sales.

Here’s the hard truth: While Jonathan wants to think of the LMS as “training wheels” for the internet (like AOL was), there is overwhelming evidence that lots of faculty want those training wheels. They ask for them. And when given a chance to take the training wheels off, they usually don’t.

Let’s take another example: roles and permissions.[2] Audrey Watters recently called out inflexible roles in educational software (including but not limited to LMSs) as problematic:

Ed-tech works like this: you sign up for a service and you’re flagged as either “teacher” or “student” or “admin.” Depending on that role, you have different “privileges” — that’s an important word, because it doesn’t simply imply what you can and cannot do with the software. It’s a nod to political power, social power as well.

Access privileges in software are designed to enforce particular ways of working together, which can be good if and only if everybody agrees that the ways of working together that the access privileges are enforcing are the best and most productive for the tasks at hand. There is no such thing as “everybody agrees” on something like the one single best way for people to work together in all classes. If the access privileges (a.k.a. “roles and permissions”) are not adaptable to the local needs, if there is no rational and self-evident reason for them to be structured the way they are, then they end up just reinforcing the crudest caricatures of classroom power relationships rather than facilitating productive cooperation. Therefore, standard roles and permissions often do more harm than good in educational software. I complained about this problem in 2005 when writing about the LMOS and again in 2006 when reviewing an open source LMS from the UK called Bodington. (At the time, Stephen Downes mocked me for thinking that this was an important aspect of LMS design to consider.)

Bodington had radically open permissions structures. You could attach any permissions (read, write, etc.) to any object in the system, making individual documents, discussions, folders, and what have you totally public, totally private, or somewhere in between. You could collect sets of permissions and define them as any roles that you wanted. Bodington also, by the way, had no notion of a “course.” It used a geographical metaphor. You would have a “building” or a “floor” that could house a course, a club, a working group, or anything else. In this way, it was significantly more flexible than any LMS I had seen before.
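To illustrate how different that is from hard-coded “teacher/student/admin” roles, here is a minimal sketch of an object-level permission model along roughly those lines. It is my own toy example, not Bodington’s actual design or code, and every name in it is hypothetical. The point is simply that roles are locally defined bundles of permissions, and any bundle can be attached to any object.

```python
# A toy sketch of object-level permissions in the spirit described above
# (not Bodington's actual design): a "role" is just a named, locally defined
# bundle of permissions, and any bundle can be granted on any object.
from dataclasses import dataclass, field

@dataclass
class Role:
    name: str               # e.g. "facilitator", "peer-reviewer", "guest"
    permissions: frozenset  # e.g. {"read", "write", "grade"}

@dataclass
class Resource:
    """Any object in the system: a document, discussion, folder, 'floor'..."""
    name: str
    public: frozenset = frozenset()              # permissions open to everyone
    grants: dict = field(default_factory=dict)   # role name -> set of permissions

    def allow(self, role: Role) -> None:
        self.grants.setdefault(role.name, set()).update(role.permissions)

    def can(self, role_name: str, perm: str) -> bool:
        return perm in self.public or perm in self.grants.get(role_name, set())

# Locally defined roles: nothing in the platform hard-codes teacher/student.
facilitator = Role("facilitator", frozenset({"read", "write", "grade"}))
peer_reviewer = Role("peer-reviewer", frozenset({"read", "write"}))

# A discussion that is publicly readable, writable by peer reviewers,
# and gradable only by facilitators.
discussion = Resource("week-3-discussion", public=frozenset({"read"}))
discussion.allow(peer_reviewer)
discussion.allow(facilitator)

print(discussion.can("guest", "read"))           # True  (public)
print(discussion.can("peer-reviewer", "grade"))  # False
print(discussion.can("facilitator", "grade"))    # True
```

The names are invented, but the shape of the model (arbitrary permission bundles attachable to arbitrary objects, public or private as needed) is the kind of flexibility being described.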

Of course, I’m sure you’ve all heard of Bodington, its enormous success in the market, and how influential it’s been on LMS design.[3]

What’s that? You haven’t?

Huh.

OK, but surely you’re aware of D2L’s major improvements in the same area. If you recall your LMS patent infringement history, then you’ll remember that roles and permissions were exactly the thing that Blackboard sued D2L over. The essence of the patent was this: Blackboard claimed to have invented a system where the same person could be given the role of “instructor” in one course site and the role of “student” in another. That’s it. And while Blackboard eventually lost that fight, there was a court ruling in the middle in which D2L was found to have infringed on the patent. In order to get around it, the company ripped out its predefined roles, making it possible (and necessary) for every school to create its own. As many as they want. Defined however they want. I remember Ken Chapman telling me that, even though it was the patent suit that pushed him to think this way, in the end he felt that the new way was a significant improvement over the old way of doing things.

And the rest, as you know, was history. The Chronicle and Inside Higher Ed wrote pieces describing the revolution on campuses as masses of faculty demanded flexible roles and permissions. Soon it caught the attention of Thomas Friedman, who proclaimed it to be more evidence that the world is indeed flat. And the LMS market has never been the same since.

That is what happened…right?

No?

Do you want to know why the LMS has barely evolved at all over the last twenty years and will probably barely evolve at all over the next twenty years? It’s not because the terrible, horrible, no-good LMS vendors are trying to suck the blood out of the poor universities. It’s not because the terrible, horrible, no-good university administrators are trying to build a panopticon in which they can oppress the faculty. The reason that we get more of the same year after year is that, year after year, when faculty are given an opportunity to ask for what they want, they ask for more of the same. It’s because every LMS review process I have ever seen goes something like this:

  • Professor John proclaims that he spent the last five years figuring out how to get his Blackboard course the way he likes it and, dammit, he is not moving to another LMS unless it works exactly the same as Blackboard.
  • Professor Jane says that she hates Blackboard, would never use it, runs her own Moodle installation for her classes off her computer at home, and will not move to another LMS unless it works exactly the same as Moodle.
  • Professor Pat doesn’t have strong opinions about any one LMS over the others except that there are three features in Canvas that must be in whatever platform they choose.
  • The selection committee declares that whatever LMS the university chooses next must work exactly like Blackboard and exactly like Moodle while having all the features of Canvas. Oh, and it must be “innovative” and “next-generation” too, because we’re sick of LMSs that all look and work the same.

Nobody comes to the table with an affirmative vision of what an online learning environment should look like or how it should work. Instead, they come with this year’s checklists, which are derived from last year’s checklists. Rather than coming with ideas of what they could have, they come with their fears of what they might lose. When LMS vendors or open source projects invent some innovative new feature, that feature gets added to next year’s checklist if it avoids disrupting the rest of the way the system works and mostly gets ignored or rejected to the degree that it enables (or, heaven forbid, requires) substantial change in current classroom practices.

This is why we can’t have nice things. I understand that it is more emotionally satisfying to rail against the Powers That Be and ascribe the things that we don’t like about ed tech to capitalism and authoritarianism and other nasty isms. And in some cases there is merit to those accusations. But if we were really honest with ourselves and looked at the details of what’s actually happening, we’d be forced to admit that the “ism” most immediately responsible for crappy, harmful ed tech products is consumerism. It’s what we ask for and how we ask for it. As with our democracy, we get the ed tech that we deserve.

In fairness to faculty, they don’t always get an opportunity to ask good questions. For example, at Colorado State University, where Jonathan works, the administrators, in their infinite wisdom, have decided that the best course of action is to choose their next LMS for their faculty by joining the Unizin coalition. But that is not the norm. In most places, faculty do have input but don’t insist on a process that leads to a more thoughtful discussion than compiling a long list of feature demands. If you want to agitate for better ed tech, then changing the process by which your campus evaluates educational technology is the best place to start.

There. I did it. I wrote the damned “future of the LMS” post. And I did it mostly by copying and pasting from posts I wrote 10 years ago. I am now going to go pour myself a drink. Somebody please wake me again in another decade.

  1. Remember that term?
  2. Actually, it’s more of an extension of the previous example. Roles and permissions are what make a garden walled or not, which is another reason why they are so important.
  3. The Bodington project community migrated to Sakai, where some, but not all, of its innovations were transplanted to the Sakai platform.

The post Dammit, the LMS appeared first on e-Literate.

Kuali, Ariah and Apereo: Emerging ed tech debate on open source license types

Mon, 2014-11-10 08:13

With the annual Kuali conference – Kuali Days – starting today in Indianapolis, the big topic should be the August decision to move from a community source to a professional open source model, moving key development to a commercial entity, the newly-formed KualiCo. Now there are two new announcements for the community to discuss, both centering on an esoteric license choice that could have far-reaching implications. Both the announcement of the Ariah Group as a new organization to support Kuali products and the statement from the Apereo Foundation center on the difference between Apache-style and AGPL licenses.

AGPL and Vendor Protection

Kuali previously licensed its open source code under the Educational Community License (ECL), a derivative of the standard Apache license that is designed to be permissive, allowing organizations to contribute modified open source code while mixing it with code under different licenses – including proprietary ones. This license is ‘permissive’ in the sense that the derived, remixed code may be licensed in different manners. It is generally thought that this license type gives the most flexibility for developing a community of contributors.

With the August pivot to Kuali 2.0 / KualiCo, the decision was made to fork and relicense any Kuali code that moves to KualiCo under the Affero General Public License (AGPL), a derivative of the original GPL license and a form of “copyleft” licensing that allows derivative works but requires the derivatives to use the same license. The idea is to ensure that open source code remains open: no commercial entity can create derivative works and license them under different terms.

The problem comes when you have asymmetric AGPL licensing – where the copyright holder, such as KualiCo, does not have the same restrictions as all other users or developers of the code. Kuali has already announced that the multi-tenant cloud-hosting code to be developed by KualiCo will be proprietary and not open source. As the copyright holder, this is their right. Any school or Kuali vendor, however, that develops its own multi-tenant cloud-hosting code would have to relicense and share this code publicly as open source. If you want to understand how this choice might create vendor lock-in, even using an open source license, go read Charles Severance’s post. Update: fixed wording about sharing requirements.

To their credit, the Kuali Foundation and KualiCo are very open about the intention of this license change, as described at Inside Higher Ed from a month ago.

[Barry] Walsh, who has been dubbed the “father of Kuali,” issued that proclamation after a back-and-forth with higher education consultant Phil Hill, who during an early morning session asked the Kuali leadership to clarify which parts of the company’s software would remain open source.

The short answer: everything — from the student information system to library management software — but the one thing institutions that download the software for free won’t be able to do is provide multi-tenant support (in other words, one instance of the software accessed by multiple groups of users, a feature large university systems may find attractive). To unlock that feature, colleges and universities need to pay KualiCo to host the software in the cloud, which is one way the company intends to make money.

“I’ll be very blunt here,” Walsh said. “It’s a commercial protection — that’s all it is.”

My post clarifying this interaction can be found here.

Enter Ariah Group

On Friday of last week, the newly formed Ariah Group sent out an email announcing a new support option for Kuali products.

Strong interest has been expressed in continuing to provide open source support for Kuali® products therefore The Ariah Group, a new nonprofit entity, has been established for those who wish to continue and enhance that original open source vision.

We invite you to join us. The community is open to participants of all kinds with a focus on making open source more accessible. The goal will be to deliver quality open source products for Finance, Human Resources, Student, Library, Research, and Continuity Planning. The Ariah Group will collaborate to offer innovative new products to enhance the suite and support the community. All products will remain open source and use the Apache License, Version 2.0 (http://opensource.org/licenses/Apache-2.0) for new contributions. A number of institutions and commercial vendors will be announcing their support in the coming days and weeks.

To join or learn more visit The Ariah Group at http://ariahgroup.org/

Who is the Ariah Group? While details are scarce, this new organization seems to be built around two or three current and former Kuali vendors. As can be seen from their incomplete website, the details have not been worked out. Based on an email exchange I had with the organization, the group has identified an Executive Director.

The only vendor that I can confirm is part of Ariah is Moderas, the former Kuali Commercial Affiliate that was removed as an official vendor in September (left or kicked out, depending on which side you believe; I’d say it was a mutual decision). I talked to Chris Thompson, co-founder of Moderas, who said that he understood the business rationale for the move to the Professional Open Source model but had a problem with the community aspects. The Kuali Foundation made a business decision to adopt AGPL and shift development to KualiCo, which makes sense in his telling, but the decision did not include real involvement from the Kuali community. In his view, the situation has changed Kuali from a collaborative environment to a competitive one, with KualiCo holding most of the cards.

This is the type of thinking behind the Ariah Group announcement – going back to the future. As described on the website:

We’ve been asked if we’re “branching the code” as we’ve discussed founding Ariah and our response has been that we feel that in fact the Kuali Foundation is branching with their new structure that includes a commercial entity who will set the development priorities and code standards that may deviate from the current Java technology stack in use. At Ariah our members will set the priorities as it was and as it should be in any truly open source environment. Java will always be our technology stack as we understand the burden that changing could cause a massive impact to our members.

This is an attempt to maintain some of the previous Kuali model, including an Apache license (very close to ECL) and the same technology stack. But this approach raises two questions: How serious is this group (including whether it plans to raise investment capital)? And why would Ariah expect to succeed where Kuali was unable to deliver on this model?

While this move by Ariah would have to be considered high risk, at least in its current form without funding secured or details worked out, it also adds a new set of risks for Kuali itself as the Kuali Days conference begins. Kuali is in a critical period in which the Foundation is seeking to get partner institutions to sign agreements supporting KualiCo, contributing both cash and project staff. Based on input from multiple sources, only the University of Maryland has signed a Memorandum of Understanding and agreed to this move for the Kuali Student project. Will the Ariah Group announcement cause schools to reconsider upcoming decisions, or simply to delay them? Will the Kuali project functional councils be influenced by this announcement when deciding whether to move to the AGPL license?

I contacted Brad Wheeler, chair of the Kuali Foundation board, who added this comment:

Unlike many proprietary software models, Kuali was established with and continues with a software model that has always enabled institutional prerogative. Nothing new here.

Apereo Statement

In a separate but related announcement, this morning the Apereo Foundation (parent organization for Sakai, uPortal and other educational open source projects) released a statement on open source licenses.

Apereo supports the general ideas behind “copyleft” and believes that free software should stay free. However, Apereo is more interested in promoting widespread adoption and collaboration around its projects, and copyleft licenses can be a barrier to this. Specifically, the required reciprocity of copyleft licenses (like the GPL and AGPL) is viewed negatively by many potential adopters and contributors. Apereo also has a number of symbiotic relationships with other open source communities and projects with Apache-style licensing that would be hurt by copyleft licensing.

Apereo strongly encourages anyone who improves upon an Apereo project to contribute those changes back to the community. Contributing is mutually beneficial since the community gets a better project and the contributor does not have to maintain a diverging codebase. Apereo project governance bodies that feel licensing under the GPL or AGPL is necessary in their context can request permission from the Licensing & Intellectual Property Committee and the Apereo Foundation Board of Directors to select this copyleft approach to outbound licensing.

Apereo believes that the reciprocity in a copyleft open source software project should be symmetrical for everyone, specifically that all individuals and organizations involved should share any derivative works as defined in the selected outbound license. Apereo sponsored projects that adopt a copyleft approach to outbound licensing will be required to maintain fully symmetric reciprocity for all parties, including Apereo itself.

Those seeking further information on copyleft licensing, including potential pitfalls of asymmetric application, should read chapter 11 of the “Copyleft and the GNU General Public License: A Comprehensive Tutorial and Guide – Integrating the GPL into Business Practices”. This can be found at –

http://www.copyleft.org/guide/comprehensive-gpl-guidech12.html#x15-10400011.2

While Kuali would appear to be one of the triggers for this statement, there are other changes in educational open source to consider, such as the Open edX switch from AGPL to Apache (the reverse of Kuali’s move) for its XBlock code. From the edX blog post describing this change:

The XBlock API will only succeed to the extent that it is widely adopted, and we are committed to encouraging broad adoption by anyone interested in using it. For that reason, we’re changing the license on the XBlock API from AGPL to Apache 2.0.

The Apache license is permissive: it lets adopters and extenders do what they want with their changes. They can release them under a copyleft license like AGPL, or a permissive license like Apache, or even keep them closed-source.

Methods Matter

I’ll be interested to see any news or outcomes from the Kuali Days conference, and these two announcements should affect the license discussions there. What I have found interesting is that in most of my conversations with Kuali community members, even those who are disillusioned seem to think the creation of KualiCo makes some sense. The real frustration and pushback has been over how decisions are made, how they have been communicated, and how the AGPL license choice will affect the community.

It’s too early to tell whether the Ariah Group will have any significant impact on the Kuali community, but the issue of license types should take on growing importance in educational technology discussions moving forward.

The post Kuali, Ariah and Apereo: Emerging ed tech debate on open source license types appeared first on e-Literate.

A New e-Literate TV Series is in Production

Sat, 2014-11-08 13:09

We have been quiet about e-Literate TV lately, but that doesn’t mean we have been idle. In fact, we’ve been hard at work filming our next series. In addition to working with our old friends at IN THE TELLING—naturally—we’re also collaborating with the EDUCAUSE Learning Initiative (ELI) and getting funding and support from the Bill & Melinda Gates Foundation.

As we have discussed both here and elsewhere, we think the term “personalized learning” carries a lot of baggage that needs to be unpacked, as does the related concept of “adaptive learning.” The field in general is grappling with these broad concepts and approaches; an exploration of specific examples and implementations should sharpen our collective understanding of their promise and risks. The Gates Foundation has funded the development of an ETV series and given us a free editorial hand to explore the topics of personalization and adaptive learning.

The heart of the series will be case studies at a wide range of different schools. Some of these schools will be Gates Foundation grantees, piloting and studying the use of a “personalized learning” technology or product, while others will not. (For more info about some of the pilots that Gates is funding in adaptive learning, including which schools are participating and the evaluation process the foundation has set up to ensure an objective review of the results, see Phil’s post about the ALMAP program.) Each ETV case study will start by looking at who the students are at a particular school, what they’re trying to accomplish for themselves, and what they need. In other words, who are the “persons” for whom we are trying to “personalize” learning? Hearing from the students directly through video interviews will be a central part of this series. We then take a look at how each school is using technology to support the needs of those particular students. We’re not trying to take a position for or against any of these approaches. Rather, we’re trying to understand what personalization means to the people in these different contexts and how they are using tools to help them grapple with it.

Because many Americans have an idealized notion of what a personalized education means, one that may or may not resemble what “personalized learning” technologies deliver, we wanted to start the series by looking at that ideal. We filmed our first case study at Middlebury College, an elite New England liberal arts college with an 8-to-1 student/teacher ratio. They do not use the term “personalized learning” at Middlebury, and some stakeholders there expressed the concern that technology, if introduced carelessly, could depersonalize education for Middlebury students. That said, we heard both students and teachers talk about ways in which technology can support more powerful and personalized learning even in an eight-person seminar.

The second school on our list was Essex County College in Newark, New Jersey, where we are filming as of this writing (but will be finished by publication time). As you might imagine, the students, their needs, and their goals and aspirations are different, and the school’s approach to supporting them is different. Here again, we’ll be asking students and teachers for their stories and their views rather than imposing ours. We intend to visit a diverse handful of schools, with the goal of releasing a few finished case studies by the end of this year and more early next year.

With the help of the good folks at ELI, we will also be bringing together a panel at the ELI 2015 conference, consisting of people from the various case studies, to have a conversation about what we can learn about the idea of personalized learning by looking across these different contexts and experiences. This will be a “flipped” panel in the sense that the panelists (and the audience) will have had the opportunity to watch the case study videos before sitting down and talking to each other. The discussion will be filmed and included in the ETV series.

We’re pretty excited about the series and grateful, as always, for the support of our various partners. We’ll have more to say—and show—soon.

Stay tuned.

The post A New e-Literate TV Series is in Production appeared first on e-Literate.

Michael’s Keynote at Sakai Virtual Conference

Thu, 2014-11-06 18:51

Michael is giving the keynote address at tomorrow’s (Friday, Nov 7) Sakai Virtual Conference #SakaiVC14. Registration for the virtual conference is only $50, with more information and a registration link here. The schedule at a glance is available as a PDF here.

Michael’s keynote, titled “Re-Imagining Educational Content for a Digital World”, is at 10:00am EDT. At 4:30pm, there will be a Q&A session based on the keynote.

The post Michael’s Keynote at Sakai Virtual Conference appeared first on e-Literate.

Flipped Classrooms: Annotated list of resources

Thu, 2014-10-30 17:03

I was recently asked by a colleague if I knew of a useful article or two on flipped classrooms – what they are, what they aren’t, and when they started. I was not looking for simple advocacy or rejection posts, but for explainers that allow people to understand the subject and make up their own minds about the value of flipping.

While I had a few in mind, I put out a bleg on Google+ and got some great responses from Laura Gibbs, George Station, and Bryan Alexander. Once they were mentioned, Robert Talbert and Michelle Pacansky-Brock jumped into the conversation with additional material. It seemed like a useful exercise to compile the results and share a list here at e-Literate. This list is not meant to be comprehensive, but rather a top-level selection of the articles I have found useful.

There are other useful articles out there, but this list is a good starting place for balanced, non-hyped descriptions of the flipped classroom concept.[1] Let me know in the comments if there are others to include in this list.

  1. I did not include any directly commercial sites or articles in the list above. Michelle’s book was included because its introduction is freely available.

The post Flipped Classrooms: Annotated list of resources appeared first on e-Literate.

Significant Milestone: First national study of OER adoption

Tue, 2014-10-28 22:02

For years we have heard anecdotes and case studies about OER adoption based on a single institution (or a handful). There are many things we think we know, but we have lacked hard data on the adoption process to back up these assumptions, which have significant policy and ed tech market implications.

The Babson Survey Research Group (BSRG) – the same group that administers the annual Survey of Online Learning – has released a faculty survey titled “Opening the Curriculum” on the decision process and criteria for choosing teaching resources, with an emphasis on Open Educational Resources (OER). While their funding from the Hewlett Foundation and from Pearson[1] covers the current survey only, there are proposals to continue the Faculty OER surveys annually to get the same type of longitudinal study that BSRG provides for online learning.

While there will be other posts (including my own) covering the immediate findings of this survey, I think it would be worthwhile to first provide context on why this is a significant milestone. Most of the following background and author findings are based on my interview with Dr. Jeff Seaman, one of the two lead researchers and authors of the report (the other is Dr. I. Elaine Allen).

Background

Three years ago, when the Survey of Online Learning was in its 9th iteration, the Hewlett Foundation approached BSRG about creating reports on OER adoption. Jeff did a meta study to see what data was already available and was disappointed with the results, so the group started to compile surveys and augment their own survey questionnaires.

The first effort, titled Growing the Curriculum and published two years ago, was a combination of results derived from four separate studies. The section on Chief Academic Officers was “taken from the data collected for the 2011 Babson Survey Research Group’s online learning report”. This report was really a test of the survey methodology and types of questions that needed to be asked.

The Hewlett Foundation is planning to develop an OER adoption dashboard, and there has been internal debate on what to measure and how. This process took some time, but once the groups came to agreement, the current survey was commissioned.

Pearson came in as a sponsor later in the process and provided additional resources to expand the scope of the survey, augment the questions asked, and help with infographics, marketing, and distribution.

A key issue in OER adoption is that the primary decision-makers are faculty members. Thus the current study is based on responses from teaching faculty “(defined as having at least one course code associated with their records)”.

A total of 2,144 faculty responded to the survey, representing the full range of higher education institutions (two-year, four-year, all Carnegie classifications, and public, private nonprofit, and for-profit) and the complete range of faculty (full- and part-time, tenured or not, and all disciplines). Almost three-quarters of the respondents report that they are full-time faculty members. Just under one-quarter teach online, and they are evenly split between male and female, and 28% have been teaching for 20 years or more.

Internal Lessons

I asked Jeff what his biggest lessons were from analyzing the results. He replied that the key meta findings are the following:

  • We have had a lot of assumptions in place (e.g. faculty are primary decision-makers on OER adoption, cost is not a major driver of the decision), but we have not had hard data to back up these assumptions, at least beyond several case studies.
  • The decision process for faculty is not about OER – it is about selecting teaching resources. The focus of studies should be on this general resource selection process with OER as one of the key components rather than just asking about OER selection.

Thus the best way to view this report is not to look for earth-shaking findings or to be disappointed if there are no surprises, but rather to see data-backed answers on the teaching resource adoption process.

Most Surprising Finding

Given this context, I pressed Jeff on which findings surprised him relative to his prior assumptions. The two answers are encouraging from an OER perspective.

  • Once you present OER to faculty, there’s a real affinity and alignment of OER with faculty values. Jeff found more potential in OER than he had expected going in. Unlike other technology-based subjects of BSRG studies, there is almost no suspicion of OER. Everything else BSRG has measured has drawn strong minority views from faculty against the topic (online learning in particular), with incredible resentment detected. That resistance or resentment is just not there with OER. It is interesting that OER, with no organized marketing plan per se, faces no natural barriers in faculty perceptions.[2]
  • In the fundamental components of OER adoption – such as perceptions of quality, discoverability, and currency – there is no significant difference between publisher-provided content and OER.

Notes on Survey

This is a valuable survey, and I hope that BSRG succeeds in getting funding (hint, hint, Hewlett and Pearson) to make this an annual report with longitudinal data. Ideally the base demographics will increase in scope so that we get a better understanding of the differences between institution types and program types. Currently the report separates 2-year and 4-year institutions, but it would be useful to compare 4-year public vs. private institutions and even to compare by program type (e.g. competency-based programs vs. gen ed vs. fully online traditional programs).

There is much to commend in the appendices of this report, which include basic data tables, the survey methodology, and even the questionnaire itself. Too many survey reports neglect to include these basics.

You can download the full report here or read it below. I’ll have more analysis of the specific findings in an upcoming post or two.

Download (PDF, 1.89MB)

  1. Disclosure: Pearson is a client of MindWires Consulting – see this post for more details.
  2. It’s no bed of roses for OER, however, as the report documents issues such as lack of faculty awareness and the low priority placed on cost as a criterion in selecting teaching resources.

The post Significant Milestone: First national study of OER adoption appeared first on e-Literate.