
Michael Feldstein

What We Are Learning About Online Learning...Online

LISTedTECH: New wiki site and great visualizations

Thu, 2014-10-23 18:06

Last year I wrote about a relatively new site offering very interesting data and visualizations in the ed tech world. LISTedTECH was created by Justin Menard, a Senior Business Intelligence Analyst at the University of Ottawa. First of all, the site is broader in scope than just the LMS – there is a rich source of data and visualizations on MOOCs, university rankings, and IPEDS data. Most of the visualizations are built with Tableau and are therefore interactive, allowing the user to filter data, zoom in on geographic data, and so on. Since e-Literate is not set up for full-page visualizations, I have included screenshots below; clicking on an image will take you to the appropriate LISTedTECH page.

[Screenshot: Top Learning Management System (LMS) by State or Province – LISTedTECH]

Justin created the LISTedTECH site based on his frustration with getting valuable market information while working on an ERP project at the University of Ottawa. After taking a year-long travel sabbatical, he added a programmer to his team this past summer. Justin does not have immediate plans to monetize the site beyond hoping to pay for server time.

LISTedTECH is a wiki. Anyone can sign up and contribute data on institutions and products. Justin gets notifications of data added and verifies data.[1] One of the key benefits of a wiki model is the ability to get user-defined data and even user ideas on useful data to include. Another benefit is the ability to scale. One of the key downsides of the wiki model is the need to clean out bad data, which can grow over time. Another downside is the selective sampling in data coverage.

LISTedTECH puts a priority on North America, and currently all ~140 Canadian schools are included. Justin and team are currently working to get complete, or near complete, US coverage. The one below could be titled If Ed Tech Were a Game of Risk, Moodle Wins.

[Screenshot: World Map of Learning Management Systems, 08/2013 – LISTedTECH]

As of today the site includes:

  • Companies (511)
  • Products (1,326)
  • Institutions (27,595)
  • Listed products used by institutions (over 18,000)
  • Product Categories (36)
  • Countries (235)
  • World Rankings (9)

The biggest change since I wrote last year is that LISTedTECH has moved to a new site.

We have (finally) launched our new website wiki.listedtech.com. As you might remember, our old Drupal-based site had been wikified to try to make contributions easier and to build a community around higher ed tech data. Even though it was possible to edit and share information, it was difficult to get all the steps down, and in the right order.

With the new version of the site, we knew that we needed a better tool. The obvious choice was the MediaWiki platform. To attain our goal of better data, we souped it up with semantic extensions. This helps by structuring the data on the pages so that it can be queried like a database.
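To make the "queried like a database" point concrete: semantic extensions such as Semantic MediaWiki let a page declare typed properties that any other page can then query inline. A minimal sketch of such a query follows; the category and property names here are hypothetical, not necessarily the ones LISTedTECH actually uses.

```wikitext
{{#ask: [[Category:Institution]] [[Uses product::Moodle]]
 |?Located in country
 |?Full-time enrollment
 |format=table
}}
```

Rendered on a wiki page, a query like this would produce a live, automatically updated table of every institution recorded as using Moodle, which is what distinguishes a semantic wiki from a plain collection of free-text pages.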

Another example shows the history of commercial MOOCs based on the number of partner institutions:

[Screenshot: MOOCs, a short history – LISTedTECH]

I’m a sucker for great visualizations, and there is a lot to see at the site. One example is on blended learning and student retention, using official IPEDS data in the US. “Blended” in this case means that the institution offers a mix of face-to-face and online courses.

[Screenshot: Blended Learning and Student Retention – LISTedTECH]

This is interesting – for 4-year institutions there is a negative correlation between student retention and the percentage of courses available online, while for 2-year institutions the story is very different. That data invites additional questions and exploration.

All of the data for the website is available for download as XML files.
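Since the export format is XML, the data lends itself to simple scripted analysis. Here is a hypothetical sketch using only Python's standard library; the element and attribute names below are invented for illustration, and the real LISTedTECH export schema may differ.

```python
# Sketch: tallying LMS adoption by country from a LISTedTECH-style XML export.
# The schema here (institution/product elements and their attributes) is an
# assumption for illustration, not the documented export format.
import xml.etree.ElementTree as ET

SAMPLE = """
<institutions>
  <institution name="University of Ottawa" country="Canada">
    <product category="LMS">Brightspace</product>
  </institution>
  <institution name="Example State University" country="USA">
    <product category="LMS">Moodle</product>
  </institution>
</institutions>
"""

def lms_by_country(xml_text):
    """Count (country, LMS) pairs from an export shaped like SAMPLE."""
    counts = {}
    root = ET.fromstring(xml_text)
    for inst in root.iter("institution"):
        country = inst.get("country")
        for product in inst.iter("product"):
            if product.get("category") == "LMS":
                key = (country, product.text)
                counts[key] = counts.get(key, 0) + 1
    return counts

print(lms_by_country(SAMPLE))
```

With a real export in hand, the same loop would run over the downloaded file instead of the inline sample.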

  1. He asks for people to include a link to source data to help in the QA process.

The post LISTedTECH: New wiki site and great visualizations appeared first on e-Literate.

What Faculty Should Know About Competency-Based Education

Thu, 2014-10-23 16:26

I loved the title of Phil’s recent post, “Competency-Based Education: Not just a drinking game” because it acknowledges that, whatever else CBE is, it is also a drinking game. The hype is huge and still growing. I have been thinking a lot lately about Gartner’s hype cycle and how it plays out in academia. In a way, it was really at the heart of the Duke keynote speech I posted the other day. There are a lot of factors that amplify it and make it more pernicious in the academic ecosystem than it is elsewhere. But it’s a tough beast to tackle.

I got some good responses to the “what faculty should know…” format that I used for a post about adaptive learning, so I’m going to try it again here in somewhat modified form. Let me know what you think of the format.

What Competency-Based Education (CBE) Is

The basic idea behind CBE is that what a student learns to pass a course (or program) should be fixed while the time it takes to do so should be variable. In our current education system, a student might have 15 weeks to master the material covered in a course and will receive a grade based on how much of the material she has mastered. CBE takes the position that the student should be able to take either more or less time than 15 weeks but should only be certified for completing the course when she has mastered all the elements. When a student registers for a course, she is in it until she passes the assessments for the course. If she comes in already knowing a lot and can pass the assessments in a few weeks—or even immediately—then she gets out quickly. If she is not ready to pass the assessments at the end of 15 weeks, she keeps working until she is ready.

Unfortunately, the term “CBE” is used very loosely and may have different connotations in different contexts. First, when “competency-based education” was first coined, it was positioned explicitly against similar approaches (like “outcomes-based education” and “mastery learning”) in one key respect: CBE was intended to be vocationally oriented. In other words, one of the things that CBE was intended to accomplish by specifying competencies was to ensure that what students are learning is relevant to job skills. CBE has lost that explicit meaning in popular usage, but a vocational focus is often (but not always) present in the subtext.

Also, competencies increasingly feature prominently even in classes that do not have variable time. This is particularly true with commercial courseware. Vendors are grouping machine-graded assessment questions into “learning objectives” or competencies that are explicitly tied to instructional readings, videos, and so on. Rather than reporting that the student got quiz questions 23 through 26 wrong, the software is reporting that the student is not able to answer questions on calculating angular momentum, which was covered in the second section of Chapter 3. Building on this helpful but relatively modest innovation, courseware products are providing increasingly sophisticated support to both students and teachers on areas of the course (or “competencies”) where students are getting stuck. This really isn’t CBE in the way the term was originally intended but is often lumped together with CBE.

What It’s Good For

Because the term “CBE” is used for very different approaches, it is important to distinguish among them in terms of their upsides and downsides. Applying machine-driven competency-based assessments within a standard, time-based class is useful and helpful largely to the extent that machine-based assessment is useful and helpful. If you already are comfortable using software to quiz your students, then you will probably find competency-based assessments to be an improvement in that they provide improved feedback. This is especially true for skills that build on each other. If a student doesn’t master the first skill in such a sequence, she is unlikely to master the later skills that depend on it. A competency-based assessment system can help identify this sort of problem early so that the student doesn’t suffer increasing frustration and failure throughout the course just because she needs a little more help on one concept.

Thinking about your (time-based) course in terms of competencies, whether they are assessed by a machine or by a teacher, is also a useful tool in terms of helping you as a teacher shift your thinking from what it is you want to teach to what it is you want your students to learn—and how you will know that they have learned it. Part of defining a competency is defining how you will know when a student has achieved it. Thinking about your courses this way can not only help you design your courses better but also help when it is time to talk to your colleagues about program-level or even college-level goals. In fact, many faculty encounter the word “competency” for the first time in their professional context when discussing core competencies on a college-wide basis as part of the general education program. If you have participated in these sorts of conversations, then you may well have found them simultaneously enlightening and incredibly frustrating. Defining competencies well is hard, and defining them so that they make sense across disciplines is even harder. But if faculty are engaged in thinking about competencies on a regular basis, both as individual teachers and as part of a college or disciplinary community, then they will begin to help each other articulate and develop their skills at working with competencies.

Assuming that the competencies and assessments are defined well, then moving from a traditional time- or term-based structure to full go-at-your-own-pace CBE can help students by enabling those students who are especially bright or come in with prior knowledge and experience to advance quickly, while giving students who just need a little more time the chance they need to succeed. Both of these aspects are particularly important for non-traditional students[1] who come into college with life experience but also need help making school work with their work and life schedules—and who may very well have dropped out of college previously because they got stuck on a concept here or there and never got help to get past it.

What To Watch Out For

All that said, there are considerable risks attached to CBE. As with just about anything else in educational technology, one of the biggest has more to do with the tendency of technology products to get hyped than with the underlying ideas or technologies themselves. Schools and vendors alike, seeing a huge potential market of non-traditional students, are increasingly talking about CBE as a silver bullet. It is touted as more “personalized” than traditional courses in the sense that students can go at their own pace, and it “scales”—if the assessments are largely machine graded. This last piece is where CBE goes off the tracks pretty quickly. Along with the drive to serve a large number of students at lower cost comes a strong temptation to dumb down competencies to the point where they can be entirely machine graded. Again, while this probably doesn’t do much damage to traditional courses or programs that are already machine graded, it can do considerable damage in cases where the courses are not. And because CBE programs are typically aimed at working-class students who can’t afford to go full-time, CBE runs the risk of making what is already a weaker educational experience in many cases (relative to expensive liberal arts colleges with small class sizes) worse by watering down standards for success and reducing the human support, all while advertising itself as “personalized.”

A second potential problem is that, even if the competencies are not watered down, creating a go-at-your-own-pace program makes social learning more of a challenge. If students are not all working on the same material at the same time, then they may have more difficulty finding peers they can work with. This is by no means an insurmountable design problem, but it is one that some existing CBE programs have failed to surmount.

Third, there are profound labor implications for moving from a time-based structure to CBE, starting with the fact that most contracts are negotiated around the number of credit hours faculty are expected to teach in a term. Negotiating a move from a time-based program to full CBE is far from straightforward.

Recommendations

CBE offers the potential to do a lot of good where it is implemented well and a lot of harm where it is implemented poorly. There are steps faculty can take to increase the chances of a positive outcome.

First, experiment with machine-graded competency-based programs in your traditional, time-based classes if and only if you are persuaded that the machine is capable of assessing the students well at what it is supposed to assess. My advice here is very similar to the advice I gave regarding adaptive learning, which is to think about the software as a tutor and to use, supervise, and assess its effectiveness accordingly. If you think that a particular software product can provide your students with accurate guidance regarding which concepts they are getting and which ones they are not within a meaningful subset of what you are teaching, then it may be worth trying. But there is nothing magical about the word “competency.” If you don’t think that software can assess the skills that you want to assess, then competency-based software will be just as bad at it.

Second, try to spend a little time as you prepare for a new semester to think about your course in terms of competencies and refine your design at least a bit with each iteration. What are you trying to get students to know? What skills do you want them to have? How will you know if they have succeeded in acquiring that knowledge and those skills? How are your assessments connected to your goals? How are your lectures and course materials connected to them? To what degree are the connections clear and explicit?

Third, familiarize yourself with CBE efforts that are relevant to your institution and discipline, particularly if they are driven by organizations that you respect. For example, the American Association of Colleges and Universities (AAC&U) has created a list of competencies called the Degree Qualifications Profile (DQP) and a set of assessment rubrics called Valid Assessment of Learning in Undergraduate Education (VALUE). While these programs are consistent with and supportive of designing a CBE program, they focus on defining competencies students should receive from a high-quality liberal arts education and emphasize the use of rubrics applied by expert faculty for assessment over machine grading.

And finally, if your institution moves in the direction of developing a full CBE program, ask the hard questions, particularly about quality. What are the standards for competencies and assessments? Are they intended to be the same as for the school’s traditional time-based program? If so, then how will we know that they have succeeded in upholding those standards? If not, then what will the standards be, and why are they appropriate for the students who will be served by the program?

 

  1. The term “non-traditional” is really out of date, since at many schools students who are working full-time while going to school are the rule rather than the exception. However, since I don’t know of a better term, I’m sticking with non-traditional for now.

The post What Faculty Should Know About Competency-Based Education appeared first on e-Literate.

Keynote: The Year After the Year of the MOOC

Wed, 2014-10-22 15:23

Here’s a talk I gave recently at the CIT event at Duke. In addition to being very nice and gracious, the Duke folks impressed me with how many faculty they have who are not only committed to improving their teaching practices but are dedicating significant time to it as a core professional activity.

Anyway, for what it’s worth, here’s the talk:

[Embedded video]

The post Keynote: The Year After the Year of the MOOC appeared first on e-Literate.

State of the US Higher Education LMS Market: 2014 Edition

Wed, 2014-10-22 06:18

I shared the most recent graphic summarizing the LMS market in November 2013, and thanks to new data sources it’s time for an update. As with all previous versions, the 2005 – 2009 data points are based on the Campus Computing Project and therefore reflect US adoption at non-profit institutions. This set of longitudinal data provides an anchor for the summary.

The primary data source for 2013 – 2014 is Edutechnica, which not only takes a more direct measurement of a larger number of schools (covering all schools in the IPEDS database with more than 800 FTE enrollments) but also allows scaling based on enrollment per institution. This means that the latter years now more accurately represent how many students use a particular LMS.
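To see why enrollment weighting matters, here is a small illustrative sketch with made-up numbers (none of these figures come from Edutechnica): a system adopted mostly by small schools can dominate an institution count while representing a modest share of students.

```python
# Illustrative sketch (invented numbers): market share by institution count
# versus by enrollment. Weighting by FTE enrollment changes the picture when
# some systems are adopted disproportionately by large or small schools.
schools = [
    {"lms": "Blackboard", "fte": 30000},
    {"lms": "Moodle", "fte": 2000},
    {"lms": "Canvas", "fte": 15000},
    {"lms": "Moodle", "fte": 3000},
]

def share(schools, weight=lambda s: 1):
    """Percentage share per LMS under a given weighting function."""
    totals = {}
    for s in schools:
        totals[s["lms"]] = totals.get(s["lms"], 0) + weight(s)
    grand = sum(totals.values())
    return {lms: round(100 * n / grand, 1) for lms, n in totals.items()}

by_count = share(schools)                           # each school counts once
by_enrollment = share(schools, lambda s: s["fte"])  # weighted by FTE

print(by_count)       # Moodle holds 50.0% of institutions here...
print(by_enrollment)  # ...but only 10.0% of enrollments
```

The same adoption data yields very different "market share" stories depending on the weighting, which is why the enrollment-scaled figures are the more meaningful measure of how many students actually use each LMS.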

A few items to note:

  • Despite the addition of the new data source and its inclusion of enrollment measures, the basic shape and story of the graphic have not changed. My confidence has gone up in the past few years, but the heuristics were not far off.
  • The 2013 inclusion of Anglosphere (US, UK, Canada, Australia) numbers caused more confusion and questions than clarity, so this version goes back to being US only.
  • The Desire2Learn branding has been changed to Brightspace by D2L.
  • The eCollege branding has been changed to Pearson LearningStudio.
  • There is a growing area of “Alternative Learning Platforms” that includes University of Phoenix, Coursera, edX and OpenEdX, 2U, Helix and Motivis (the newly commercialized learning platform from College for America).
  • While the data is more solid than 2012 and prior years, keep in mind that you should treat the graphic as telling a story of the market rather than being a chart of exact data.

[Chart: LMS Market Share, 2014-10-21]

Some observations on the new data, taken from the September post on Edutechnica:

  • Blackboard’s BbLearn and ANGEL continue to lose market share in the US[1] – Using the 2013 to 2014 tables (> 2000 enrollments), BbLearn has dropped from 848 to 817 institutions and ANGEL has dropped from 162 to 123. Using the revised methodology, Blackboard market share for > 800 enrollments now stands at 33.5% of institutions and 43.5% of total enrollments.
  • Moodle, D2L, and Sakai show essentially no change in the US – Using the 2013 to 2014 tables (> 2000 enrollments), D2L added only 2 schools, Moodle none, and Sakai 2.
  • Canvas is the fastest-growing LMS and has overtaken D2L – Using the 2013 to 2014 tables (> 2000 enrollments), Canvas grew ~40% in one year (from 166 to 232 institutions). For the first time, Canvas appears to have a larger US market share than D2L (13.7% vs. 12.2% of total enrollments, using the table above).

The post State of the US Higher Education LMS Market: 2014 Edition appeared first on e-Literate.

Competency-Based Education: Not just a drinking game

Thu, 2014-10-16 13:48

Ray Henderson captured the changing trend of the past two EDUCAUSE conferences quite well.

The #Edu14 drinking game: sure inebriation in 13 from vendor claims of "mooc" "cloud" or "disrupting edu". In 2014: "competency based."

— Ray Henderson (@readmeray) October 3, 2014


Two years ago, the best-known competency-based education (CBE) initiatives were at Western Governors University (WGU), Southern New Hampshire University’s College for America (CfA), and Excelsior College. In an article this past summer describing the US Department of Education’s focus on CBE, Paul Fain noted [emphasis added]:

The U.S. Department of Education will give its blessing — and grant federal aid eligibility — to colleges’ experimentation with competency-based education and prior learning assessment.

On Tuesday the department announced a new round of its “experimental sites” initiative, which waives certain rules for federal aid programs so institutions can test new approaches without losing their aid eligibility. Many colleges may ramp up their experiments with competency-based programs — and sources said more than 350 institutions currently offer or are seeking to create such degree tracks.

One issue I’ve noticed, however, is that many schools are looking to duplicate the solution of CBE without understanding the problems and context that allowed WGU, CfA, and Excelsior to thrive. Looking at the three main CBE initiatives, at least three lessons stand out as significant factors in their success to date; these lessons are readily available but perhaps not well understood.

Lesson 1: CBE as means to address specific student population

None of the main CBE programs were designed to target a general student population or to offer just another modality. In all three cases, their first consideration was how to provide education to working adults looking to finish a degree, change a career, or advance a career.

As described by WGU’s website:

Western Governors University is specifically designed to help adult learners like you fit college into your already busy lives. Returning to college is a challenge. Yet, tens of thousands of working adults are doing it. There’s no reason you can’t be one of them.

As described by College for America’s website:

We are a nonprofit college that partners with employers nationwide to make a college degree possible for their employees. We help employers develop their workforce by offering frontline workers a competency-based degree program built on project-based learning that is uniquely applicable in the workplace, flexibly scheduled to fit in busy lives, and extraordinarily affordable.

As described by Excelsior’s website:

Excelsior’s famously-flexible online degree programs are created for working adults.

SNHU’s ubiquitous president Paul Leblanc described the challenge of not understanding the target for CBE at last year’s WCET conference (from my conference notes):

One of the things that muddies our own internal debates and policy maker debates is that we say things about higher education as if it’s monolithic. We say that ‘competency-based education is going to ruin the experience of 18-year-olds’. Well, that’s a different higher ed than the people we serve in College for America. There are multiple types of higher ed with different missions.

The one CfA is interested in is the world of working adults – this represents the majority of college students today. Working adults need credentials that are useful in the workplace, they need low cost, they need short completion times, and they need convenience. Education has to compete with work and family requirements.

CfA targets the bottom 10% of wage earners in large companies – these are the people not earning sustainable wages. They need stability and advancement opportunities.

CfA has two primary customers – the students and the employers who want to develop their people. In fact, CfA does not have a retail offering, and they directly work with employers to help employees get their degrees.

Lesson 2: Separate organizations to run CBE

In all three cases the use of CBE to serve working adults necessitated entirely new organizations that were designed to provide the proper support and structure based on this model.

WGU was conceived as a separate non-profit organization in 1995 and incorporated in 1997 specifically to design and enable the new programs. College for America was spun out of SNHU in 2012. Excelsior College started 40 years ago as Regents College, focused on both mastery and competency-based programs. The CBE nursing program was founded in 1975.

CBE has some unique characteristics that do not fit well within traditional educational organizations. From a CBE primer I wrote in 2012 and updated in 2013:

I would add that the integration of self-paced programs not tied to credit hours into existing higher education models presents an enormous challenge. Colleges and universities have built up large bureaucracies – expensive administrative systems, complex business processes, large departments – to address financial aid and accreditation compliance, all based on fixed academic terms and credit hours. Registration systems, and even state funding models, are tied to the fixed semester, quarter or academic year – largely defined by numbers of credit hours.

It is not an easy task to allow transfer credits coming from a self-paced program, especially if a student is taking both CBE courses and credit-hour courses at the same time. The systems and processes often cannot handle this dichotomy.

Beyond the self-paced student-centered scheduling issues, there are also different mentoring roles required to support students, and these roles are not typically understood or available at traditional institutions. Consider the mentoring roles at WGU as described in EvoLLLutions:

Faculty mentors (each of whom has at least a master’s degree) are assigned a student caseload, and their full-time role is to provide student support. They may use a variety of communication methods that, depending on student preferences, include calling — but also Skype, email, and even snail mail for encouraging notes.

Course mentors are the second type of WGU mentor. These full-time faculty members hold their Ph.D. and serve as content experts. They are also assigned a student caseload. Responsibilities of course mentors include creating a social community among students currently enrolled in their courses and teaching webinars focused specifically on competencies students typically find difficult. Finally, they support students one-on-one based on requests from the student or referral from the student’s faculty mentor.

Lesson 3: Competency is not the same as mastery

John Ebersole, the president of Excelsior College, called out the distinction between competency and mastery in an essay this summer at Inside Higher Ed.

On close examination, one might ask if competency-based education (or CBE) programs are really about “competency,” or are they concerned with something else? Perhaps what is being measured is more closely akin to subject matter “mastery.” The latter can be determined in a relatively straightforward manner, using various forms of examinations, projects and other forms of assessment.

However, an understanding of theories, concepts and terms tells us little about an individual’s ability to apply any of these in practice, let alone doing so with the skill and proficiency which would be associated with competence.

Deeming someone competent, in a professional sense, is a task that few competency-based education programs address. While doing an excellent job, in many instances, of determining mastery of a body of knowledge, most fall short in the assessment of true competence.

Ebersole goes on to describe the need for true competency measurement and an observation that I share: many programs confuse the two concepts.

A focus on learning independent of time, while welcome, is not the only consideration here. We also need to be more precise in our terminology. The appropriateness of the word competency is questioned when there is no assessment of the use of the learning achieved through a CBE program. Western Governors University, Southern New Hampshire, and Excelsior offer programs that do assess true competency.

Unfortunately, the vast majority of the newly created CBE programs do not. This conflation of terms needs to be addressed if employers are to see value in what is being sold. A determination of “competency” that does not include an assessment of one’s ability to apply theories and concepts cannot be considered a “competency-based” program.

Whither the Bandwagon

I don’t think that the potential of CBE is limited to the existing models, nor do I think WGU, CfA, and Excelsior are automatically the best initiatives. But an aphorism variously attributed to Pablo Picasso, Dalai Lama XIV, or bassist Jeff Berlin might provide guidance to the new programs:

Know the rules well, so you can break them effectively

How many new CBE programs are being attempted that target the same student population as the parent institutions? How many new CBE programs are being attempted in the same organization structure? And how many new CBE programs are actually based on testing only of masteries and not competencies?

Judging by media reports and observations at EDUCAUSE, I think there are far too many programs treating this new educational model of CBE as a silver bullet. They are moving beyond the model and lessons of WGU, College for America, and Excelsior without first understanding why those initiatives have been successful. I don’t intend to name names here, but the 350 new programs cited in Paul Fain’s article would do well to ground themselves in a solid foundation that understands and builds off of successful models.

The post Competency-Based Education: Not just a drinking game appeared first on e-Literate.

Kuali Student Sunsetting $40 million project, moving to KualiCo

Thu, 2014-10-09 13:00

The changes with Kuali are accelerating, and there are some big updates on the strategy.

Earlier this week the Kuali Foundation distributed an Information Update obtained by e-Literate on many of the details of the transition to Kuali 2.0 and the addition of the for-profit KualiCo. Some of the key clarifications:

  • KualiCo will be an independent C Corporation with a board of directors. KualiCo will not be a subsidiary of Kuali Foundation. Capital structure, equity allocations, and business plans are confidential and will not be shared publicly for the same reasons these things are rarely shared by private companies. The board of directors will start out with three members and will move to five or seven over time. Directors will include the CEO and an equal number of educational administrators and outside directors. One of the educational administrators will be appointed by the Kuali Foundation. Outside directors will be compensated with equity. Educational administrators will not be compensated in any way and could only serve as a director with the explicit permission of their university administration with attention to all relevant institutional policies.
  • KualiCo’s only initial equity investor is the Kuali Foundation. The Kuali Foundation will invest up to $2M from the Foundation’s cash reserves. [snip] For its equity investment, the Kuali Foundation will have the right to designate a director on the KualiCo Board of Directors. The Kuali Foundation, through its director, will have an exceptional veto right to block the sale of the company, an IPO of the company or a change to the open source license. This helps ensure that KualiCo will stay focused on marketplace-winning products and services rather than on flipping the company on Wall Street.
  • The Kuali Foundation is not licensing the Kuali software code for Kuali products to KualiCo as Kuali software is already fully open source and could be used by anyone for any purpose — as is already being done today. No license transfer or grant is needed by KualiCo or anyone else.
  • The copyright for the AGPL3 software will be copyright KualiCo for the open source distribution that is available to everyone. It would very quickly become untenable to even try to manage multiple copyright lines as various sections of code evolve through the natural enhancement processes of an open source community.

One key point the document describes at length is the lack of financial interest from individuals in the Kuali Foundation and KualiCo, including the uncompensated director position, the lack of equity held by individuals outside of KualiCo, etc.

Two other key points that are particularly relevant to yesterday’s news:

  • Each project board will decide if, when, to what extent, and for what term to engage with KualiCo. Project boards could decide to continue on as they currently do, to engage KualiCo in a limited way, or to allow KualiCo to help drive substantial change to the software approach to that product. If a project chooses not to engage KualiCo, KualiCo will have less initial funding to invest in enhancing the product, but will slowly build up those funds over time by hosting the product and enhancing the product for its customers. Choosing to engage with KualiCo in any fashion requires code to be reissued under the AGPL3 license (see Open Source section).
  • KualiCo will be working with the Kuali community to make improvements to current Kuali products. In addition to enhancing the current codebase, KualiCo is beginning the re-write of Kuali products with a modern technology stack. The initial focus will be on Kuali Student and then HR. Complete rewrites of KFS and KC will likely not begin for 3-5 years.
Kuali Student Changes

With this in mind, yesterday the Kuali Student (KS) Project Board met and made the decision to sunset their current project and to transition to KualiCo development. Bob Cook, CIO at the University of Toronto and chair of the KS Project Board, confirmed the decision by email.

I can say that the Board adopted its resolution because it is excited about the opportunity that KualiCo presents for accelerating the delivery of high quality administrative services for use in higher education, and is eager to understand how to best align our knowledgeable project efforts to achieve that goal. [snip]

In recognition of the opportunity presented by the establishment of KualiCo as a new facet in the Kuali community, the Kuali Student Board has struck a working group to develop a plan for transitioning future development of Kuali Student by the KualiCo. The plan will be presented to the Board for consideration.

While Bob did not confirm the additional details I asked about (“It would be premature to anticipate specific outcomes from a planning process that has not commenced”), my understanding is that it is safe to assume:

  • Kuali Student will transition to AGPL license with KualiCo holding copyright;
  • KualiCo will develop a new product roadmap based on recoding / additions for multi-tenant framework; and
  • Some or all of the current KS development efforts will be shut down over the next month or two.

KS Project Director Rajiv Kaushik sent a note to the full KS team with more details:

KS Board met today and continued discussions on a transition to Kuali 2.0. That thread is still very active with most current investors moving in the Kuali 2.0 direction. In the meantime, UMD announced its intent to invest in Kuali 2.0 and to withdraw in 2 weeks from the current KS effort. Since this impacts all product streams, Sean, Mike and I are planning work over the next 2 weeks while we still have UMD on board. More to come on that tomorrow at the Sprint demo meeting.

I will update or correct this information as needed.

Kuali Student (KS) is the centerpiece of Kuali – it is the largest and most complex project and the one most central to higher education. KS was conceived in 2007. Unlike KFS, Coeus and Rice, Kuali Student was designed from the ground up. The full suite of modules within Kuali Student had been scheduled for release between 2012 and 2015 in a single-tenant architecture. With the transition, a new roadmap will be developed around a multi-tenant architecture and an updated technology stack.

Just how large has this project been? According to a financial analysis of 2009-2013 performed by instructional media + magic inc.,[1] Kuali Student had $30 million in expenditures in that 5-year span. Neither the 2014 records nor the 2007-8 records are yet available, but an educated guess is that the total is closer to $40 million.[2]
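As a back-of-the-envelope sketch of that guess, the extrapolation might look like the following; only the $30M figure comes from the im+m analysis, and the 60% factor for the three missing years is purely an illustrative assumption:

```python
# Rough extrapolation behind the "~$40 million" estimate.
# Only the $30M figure for 2009-2013 comes from the im+m analysis;
# everything else here is an illustrative assumption.
spend_2009_2013 = 30_000_000            # im+m analysis, 5-year total
avg_per_year = spend_2009_2013 / 5      # $6M/year average

# Assume the three missing years (2007, 2008, 2014) ran at roughly
# 60% of the average -- early ramp-up plus a partial final year.
estimated_missing = 3 * avg_per_year * 0.6

total_estimate = spend_2009_2013 + estimated_missing
print(f"Estimated total: ${total_estimate / 1e6:.0f}M")
```

Under these assumptions the total lands just above $40M, consistent with the conservative estimate in the footnote.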

Kuali Project Finances 2009-13

I mention this to show the scope of Kuali Student to date as well as its relative size compared to other Kuali projects. I wrote a post on cloud computing around the LMS that might be relevant to future KualiCo development, calling out how cloud technologies and services are driving down the cost and time of product development. In the case of the LMS, the difference has been close to an order of magnitude compared to the first generation:

Think about the implications – largely due to cloud technologies such as Amazon web services (which underpins Lore as well as Instructure and LoudCloud), a new learning platform can be designed in less than a year for a few million dollars. The current generation of enterprise LMS solutions often cost tens of millions of dollars (for example, WebCT raised $30M prior to 2000 to create its original LMS and scale to a solid market position, and raised a further $95M in 2000 alone), or product redesigns take many years to be released (for example, Sakai OAE took 3 years to go from concept to release 1.0). It no longer takes such large investments or extended timeframes to create a learning platform.

Cloud technologies are enabling a rapid escalation in the pace of innovation, and they are lowering the barriers to entry for markets such as learning platforms. Lore’s redesign in such a short timeframe gives a concrete example of how quickly systems can now be developed.

How will these dynamics apply to student information systems? Given the strong emphasis on workflow and detailed user functionality, I suspect that the differences will be smaller than for the LMS, but still significant. In other words, I would not expect the redevelopment of Kuali Student to take anywhere close to $40 million or seven years, but I will be interested to see the new roadmap when it comes out.

This decision – moving Kuali Student to KualiCo – along with the foundation’s ability to hold on to the current community members (both institutions and commercial affiliates) represents the make-or-break bets that the Kuali Foundation has made with the move to Kuali 2.0. Stay tuned for more updates before the Kuali Days conference in November.

Say what you will about the move away from Community Source, but Kuali is definitely not resting on its laurels or playing it safe. This redevelopment of Kuali Student under a new structure is bold and high-risk.

  1. Disclosure: Jim Farmer from im+m has been a guest blogger at e-Literate for many years.
  2. It’s probably more than that, but let’s use a conservative estimate to set general scope.

The post Kuali Student Sunsetting $40 million project, moving to KualiCo appeared first on e-Literate.

LinkedIn Releases College Ranking Service

Fri, 2014-10-03 09:57

I have long thought that LinkedIn has the potential to be one of the most transformative companies in ed tech for one simple reason: They have far more cross-institutional longitudinal outcomes data than anybody else—including government agencies. Just about anybody else who wants access to career path information of graduates across universities would face major privacy and data gathering hurdles. But LinkedIn has somehow convinced hundreds of millions of users to voluntarily enter that information and make it available for public consumption. The company clearly knows this and has been working behind the scenes to make use of this advantage. I have been waiting to see what they will come up with.

I have to say that I’m disappointed with their decision that their first foray would be a college ranking system. While I wouldn’t go so far as to say that these sorts of things have zero utility, they suffer from two big and unavoidable problems. First, like any standardized test—and I mean this explicitly in the academic meaning of the term “test”—they are prone to abuse through oversimplification of their meaning and overemphasis on their significance. (It’s not obvious to me that they would be subject to manipulation by colleges the way other surveys are, given LinkedIn’s ranking method, so at least there’s that.) Second, and more importantly, they are not very useful even when designed well and interpreted properly. Many students change their majors and career goals between when they choose their college and when they graduate. According to the National Center for Education Statistics, 80% of undergraduates change their majors at least once, and the average student changes majors three times. Therefore, telling high school students applying to college which school is ranked best for, say, a career in accounting has less potential impact on the students’ long-term success and happiness than one might think.

It would be more interesting and useful to have LinkedIn tackle cross-institutional questions that could help students make better decisions once they are in a particular college. What are the top majors for any given career? For example, if I want to be a bond trader on Wall Street, do I have to major in finance? (My guess is that the answer to this question is “no,” but I would love to see real data on it.) Or how about the other way around: What are the top careers for people in my major? My guess is that LinkedIn wanted to start off with something that (a) they had a lot of data on (which means something coarse-grained) and (b) was relatively simple to correlate. The questions I’m suggesting here would fit that bill while being more useful than a college ranking system (and less likely to generate institutional blow-back).

The post LinkedIn Releases College Ranking Service appeared first on e-Literate.

Kuali Foundation: Clarification on future proprietary code

Thu, 2014-10-02 08:35

Well that was an interesting session at Educause as described at Inside Higher Ed:

It took the Kuali leadership 20 minutes to address the elephant in the conference center meeting room.

“Change is ugly, and change is difficult, and the only difference here is you’re going to see all the ugliness as we go through the change because we’re completely transparent,” said John F. (Barry) Walsh, a strategic adviser for the Kuali Foundation. “We’re not going to hide any difficulty that we run into. That’s the way we operate. It’s definitely a rich environment for people who want to chuck hand grenades. Hey, have a shot — we’re wide open.” [snip]

Walsh, who has been dubbed the “father of Kuali,” issued that proclamation after a back-and-forth with higher education consultant Phil Hill, who during an early morning session asked the Kuali leadership to clarify which parts of the company’s software would remain open source.

While the article describes the communication and pushback issues with Kuali’s creation of a for-profit entity quite well (go read the whole article), I think it’s worth digging into what Carl generously describes as a “back-and-forth”. What happened was that there was a slide describing the relicensing of Kuali code as AGPL, and the last bullet caught my attention:

  • AGPL > GPL & ECL for SaaS
  • Full versions always downloadable by customers
  • Only feature “held back” is multi-tenant framework

If you need a read on the change of open source licenses and why this issue is leading to some of the pushback, go read Chuck Severance’s blog post.

Does ‘held back’ mean that the multi-tenant framework to enable cloud hosting partially existed but is not moving to AGPL, or does it mean that the framework would be AGPL but not downloadable by customers, or does it mean that the framework is not open source? That was the basis of my question.

Several Kuali Foundation representatives attempted to indirectly answer the question without addressing the license.

“I’ll be very blunt here,” Walsh said. “It’s a commercial protection — that’s all it is.”

The back-and-forth involved trying to get a clear answer, and the answer is that the multi-tenant framework to be developed / owned by KualiCo will not be open source – it will be proprietary code. I asked Joel Dehlin for additional context after the session, and he explained that all Kuali functionality will be open source, but the infrastructure to allow cloud hosting is not open source.

This is a significant clarification of the future model. While Kuali has always supported an ecosystem with commercial partners that can offer proprietary code, this is the first time that Kuali itself will offer proprietary, non-open-source code.[1]

What is not clear is whether any of the “multi-tenant framework” already exists and will be converted to a proprietary license or if all of this code will be created by KualiCo from the ground up. If anyone knows the answer, let me know in the comments.

From IHE:

“Unfortunately some of what we’re hearing is out of a misunderstanding or miscommunication on our part,” said Eric Denna, vice president of IT and chief information officer at the University of Maryland at College Park. “Brad [Wheeler, chair of the foundation’s board of directors,] and I routinely are on the phone saying, ‘You know, we have day jobs.’ We weren’t hired to be communications officers.”

Suggestion: Simple answers such as “What ‘held back’ means is that the framework will be owned by KualiCo and not open source and therefore not downloadable” would avoid some of the perceived need for communication officers.

  1. Kuali Foundation is partial owner and investor in KualiCo.

The post Kuali Foundation: Clarification on future proprietary code appeared first on e-Literate.

Investigation of IPEDS Distance Education Data Highlights System Not Ready for Modern Trends

Fri, 2014-09-26 00:00

This article is cross-posted to the WCET blog.

After billions of dollars spent on administrative computer systems and billions more invested in ed tech companies, the U.S. higher education system is woefully out of date and unable to cope with major education trends such as online & hybrid education, flexible terms, and the expansion of continuing and extended education. Based on an investigation of the recently released distance education data for IPEDS, the primary national education database maintained by the National Center for Education Statistics (NCES), we have found significant confusion over basic definitions of terms, manual gathering of data outside of the computer systems designed to collect it, and, due to confusion over which students to include, the systematic non-reporting of large numbers of degree-seeking students.

In Fall 2012, the IPEDS (Integrated Postsecondary Education Data System) data collection for the first time included distance education – primarily for online courses and programs. This data is important for policy makers and institutional enrollment management as well as for the companies serving the higher education market.

We first noticed the discrepancies based on feedback from analyses that we have published on the e-Literate and WCET blogs. One of the most troubling calls came from a state university representative who said that the school has never reported any students who took credit-bearing courses through its self-supported, continuing education program. Since they did not include the enrollments in reporting to the state, they did not report those enrollments to IPEDS. These were credits toward degrees and certificate programs offered by the university and therefore should have been included in IPEDS reporting based on the following instructions.

Include all students enrolled for credit (courses or programs that can be applied towards the requirements for a postsecondary degree, diploma, certificate, or other formal award), regardless of whether or not they are seeking a degree or certificate.

Unfortunately, the instructions call out this confusing exclusion (one example out of four):

Exclude students who are not enrolled for credit. For example, exclude: Students enrolled exclusively in Continuing Education Units (CEUs).

How many schools have interpreted this continuing education exclusion to apply to all continuing education enrollments? To do an initial check, we contacted several campuses in the California State University system and were told that all IPEDS reporting was handled at the system level. Prompted by the introduction of the Fall 2012 distance education changes, Cal State re-evaluated whether to change its reporting policy. A system spokesman explained:

I’ve spoken with our analytic studies staff and they’ve indicated that the standard practice for data reporting has been to share only data for state-supported enrollments. We have not been asked by IPEDS to do otherwise so when we report distance learning data next spring, we plan on once again sharing only state-supported students.

Within the Cal State system, this means that more than 50,000 students taking for-credit self-support courses will not be reported, and this student group has never been reported.

One of the reasons for the confusion, as well as the significance of this change, is that continuing education units have moved past their roots of offering CEUs and non-credit courses for the general public (hence the name continuing education) and have taken up a new role of offering courses not funded by the state (hence self-support). Since these courses and programs are not state funded, they are not subject to the same oversight and restrictions as state-funded equivalents, such as maximum tuition per credit hour.

This situation allows continuing education units in public schools to become laboratories and innovators in online education. The flip side is that, given the non-state-funded nature of these courses and programs, it appears that schools may not be reporting these for-credit enrollments through IPEDS, whether or not the students were in online courses. However, the changes in distance education reporting may actually trigger changes in this practice.

Do Other Colleges Also Omit Students from Their IPEDS Report?

Given what was learned from the California State University System, we were interested in learning if other colleges were having similar problems with reporting distance education enrollments to IPEDS. WCET conducted a non-scientific canvassing of colleges to get their feedback on what problems they may have encountered. Twenty-one institutions were selected through a non-scientific process of identifying colleges that reported enrollment figures that seemed incongruous with their size or distance education operations. See the “Appendix A: Methodology” for more details.

From early August to mid-September, we sought answers regarding whether the colleges reported all for-credit distance education and online enrollments for Fall 2012. If they did not, we asked about the size of the undercount and why some enrollments were not reported.

Typically, the response included some back-and-forth between the institutional research and distance education units at each college. Through these conversations, we quickly realized that we should have asked a question about the U.S. Department of Education’s definition of “distance education.”   Institutions were very unclear about what activities to include or exclude in their counts. Some used local definitions that varied from the federal expectations. As a result, we asked that question as often as we could.

The Responses

Twenty institutions provided useable responses. We agreed to keep responses confidential. Table 1 provides a very high level summary of the responses to the following two questions:

  • Counts Correct? – Do the IPEDS data reported include all for-credit distance education and online enrollments for Fall 2012?
  • Problem with “Distance Education” Definition? – Although we did not specifically ask this question, several people volunteered that they had trouble applying the IPEDS definition.
Table 1: Counts for Institutional Responses

           Counts Correct?   Problem with “Distance Education” Definition?
Yes              11                               3
Maybe             5                               5
No                4                              12

Of those that assured us that they submitted the correct distance education counts, some also reported having used their own definitions or processes for distance education. This would make their reported counts incomparable to the vast majority of others reporting.

One institution declined to respond. Given that its website advertises many hundreds of online courses, the distance education counts it reported would lead us to believe that it either: a) under-reported, or b) averages one or two students per online class. The second scenario seems unlikely.

Findings

This analysis found several issues that call into question the usability of IPEDS distance education enrollment counts and, more broadly and more disturbingly, of IPEDS statistics in general.

There is a large undercount of distance education students

While only a few institutions reported an undercount, one was from the California State University System and another from a large university system in another populous state. Since the same procedures were used within each system, there are a few hundred thousand students who were not counted in just those two systems.

In California, they have never reported students enrolled in Continuing Education (self-support) units to IPEDS. A source of the problem may be in the survey instructions. Respondents are asked to exclude: “Students enrolled exclusively in Continuing Education Units (CEUs).” The intent of this statement is to exclude those taking only non-credit courses. It is conceivable that some might misinterpret this to mean excluding those in the campus’s continuing education division. What was supposed to be reported was the number of students taking for-credit courses regardless of which college or institutional unit was responsible for offering the course.

In the other large system, they do not report out-of-state students as they do not receive funding from the state coffers.

It is unclear what the full numeric scope would be if we knew the actual numbers across all institutions. Given that the total number of “students enrolled exclusively in distance education courses” for Fall 2012 was 2,653,426, an undercount of a hundred thousand students just from these two systems would be roughly a 4% error. That percentage is attention-getting on its own.
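The arithmetic behind that 4% figure can be sketched as follows (the 100,000 undercount is an illustrative estimate drawn from the discussion above, not a verified count):

```python
# Share of the reported Fall 2012 exclusively-distance-ed total that a
# 100,000-student undercount would represent. Both numbers come from the
# discussion above; the undercount is an estimate, not a verified figure.
reported_total = 2_653_426      # Fall 2012 "exclusively distance ed" students
estimated_undercount = 100_000  # from just the two university systems

error_pct = estimated_undercount / reported_total * 100
print(f"Undercount as a share of the reported total: {error_pct:.1f}%")
```

That works out to roughly 3.8%, which rounds to the 4% figure cited here.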

The IPEDS methodology does not work for innovative programs…and this will only get worse

Because its courses use as many as 28 start dates, one institutional respondent estimated that there was approximately a 40% undercount in its reported enrollments. A student completing a full complement of courses in a 15-week period might not be enrolled in all of those courses on the census date. With the increased use of competency-based programs, adaptive learning, and innovations still on the drawing board, it is conceivable that the census dates used by an institution (IPEDS gives some options) might not serve every type of educational offering.

The definition of ‘distance education’ is causing confusion

It is impossible to get an accurate count of anything if there is not a clear understanding of what should or should not be included in the count. The definition of a “distance education course” from the IPEDS Glossary is:

A course in which the instructional content is delivered exclusively via distance education.  Requirements for coming to campus for orientation, testing, or academic support services do not exclude a course from being classified as distance education.

Even with that definition, colleges faced problems with counting ‘blended’ or ‘hybrid’ courses. What percentage of a course needs to be offered at a distance to be counted in the federal report? Some colleges had their own standard (or one prescribed by the state), and the percentage required to be labeled a “distance education” course varied greatly. One reported that it included all courses with more than 50% of the content offered at a distance.

To clarify the federal definition, one college said they called the IPEDS help desk. After escalating the issue to a second line manager, they were still unclear on exactly how to apply the definition.

The Online Learning Consortium is updating its distance education definitions. Its current work could inform IPEDS on possible definitions, but probably contains too many categories for such widespread data gathering.

There is a large overcount of distance education students

Because many colleges used their own definitions, there is a massive overcount of distance education enrollments – at least relative to the current IPEDS definition. This raises the question: is the near-100% standard imposed by that definition useful in interpreting activity in this mode of instruction? Is it the correct standard, given that no one else seems to use it?

In addressing the anomalies, IPEDS reporting becomes burdensome or the problems ignored

In decentralized institutions, or in institutions with “self-support” units that operate independently from the rest of campus, data systems are often not connected. Institutional researchers are also faced with simultaneously reconciling differing “distance education” definitions. One choice is to knit together numbers from incompatible data systems and/or with differing definitions – often by hand. To their credit, institutional researchers overcome many such obstacles. Whether through misunderstanding the requirements or not having the capacity to perform the work, some colleges did not tackle this burdensome task.

Conclusions – We Don’t Know

While these analyses have shed light on the subject, the biggest finding is that we do not know what we do not know – which brings to mind former Secretary of Defense Donald Rumsfeld’s famous rambling:

There are known knowns. These are things we know that we know. We also know there are known unknowns. That is to say, there are things that we know we don’t know. But there are also unknown unknowns. There are ones we don’t know we don’t know.

The net effect is not known

Some institutions reported accurately, some overcounted, some undercounted, some did both at the same time. What should the actual count be?

We don’t know.

The 2012 numbers are not a credible baseline

The distance education field looked forward to the Fall 2012 enrollment statistics as a welcome baseline for the size and growth of this mode of instruction. That is not possible, and the problems will persist with the Fall 2013 enrollment report when those numbers are released. These problems can be fixed, but it will take work. When can we get a credible baseline?

We don’t know.

A large number of students have not been included on ANY IPEDS survey, EVER.

A bigger issue for the U.S. Department of Education goes well beyond the narrow issue of distance education enrollments. Our findings indicate that there are hundreds of thousands of students who have never been reported on any IPEDS survey ever conducted. What is the impact on IPEDS? What is the impact on the states where large numbers of students were systematically underreported?

We don’t know.

Who is at fault?

Everybody and nobody. IPEDS is faced with institutional practices that vary greatly and often change from year to year as innovations are introduced. Institutional researchers are faced with reporting requirements that vary depending on the audience – state oversight agencies, IPEDS, accrediting agencies, external surveys and ranking services, and internal pressures from marketing and public relations staffs. They do the best they can in a difficult situation. Meanwhile, we are in an environment in which innovations may no longer fit into classic definitional measurement boxes.

What to expect?

In the end, this expansion of data from NCES through the IPEDS database is a worthwhile effort in our opinion, and we should see greater usage of real data to support policy decisions and market decisions thanks to this effort. However, we recommend the following:

  • The data from the Fall 2012 and Fall 2013 reporting periods reflect significant differences in methodology among participating institutions. Assuming that definitions improve over time, there will also be changes in reporting methodology at least through Fall 2015. We therefore recommend that analysts and policy-makers not put too much credence in year-over-year changes for the first two or three years.
  • The most immediate improvement available is for NCES to clarify and gain broader consensus on the distance education definitions. This process should include working with accrediting agencies, whose own definitions influence school reporting, as well as leading colleges and universities with extensive online experience.
Appendix: Methodology

The Process for Selecting Institutions to Survey

The selection process for institutions to survey was neither random nor scientific. A multi-step process of identifying institutions that might have had problems in reporting distance education enrollments was undertaken. The goal was to identify twenty institutions to be canvassed. The steps included:

  • A first cut was created by an “eyeball” analysis of the Fall 2012 IPEDS Fall Enrollment database to identify institutions that may have had problems in responding to the distance education enrollment question.
    • Colleges were included if their reported distance education enrollments did not appear to be in line with the size of the institution (i.e., a large institution with very low distance education enrollments) or with what we knew about their distance education operations.
    • Special attention was paid to land grant colleges as they are likely to have self-funded continuing or distance education units.
    • Institutions in the California State University system were excluded.
    • This resulted in a list of a little more than 100 institutions.
  • The second cut was based upon:
    • Including colleges across different regions of the country.
    • Including a private college and an HBCU as indicators as to whether this problem might be found in colleges from those institutional categories.
    • Twenty institutions were identified.
  • In side discussions, a distance education leader at a public university agreed to participate in the survey. This brought the total to twenty-one institutions.

Questions Asked in the Survey

  1. Do the IPEDS data reported include all for-credit distance education and online enrollments for Fall 2012?
  2. If the IPEDS data reported does not include all for-credit distance education and online enrollments for Fall 2012, approximately how many enrollments are under-counted?
  3. If the IPEDS data reported does not include all for-credit distance education and online enrollments for Fall 2012, why did you not report some enrollments?

The post Investigation of IPEDS Distance Education Data Highlights System Not Ready for Modern Trends appeared first on e-Literate.

New LMS Market Data: Edutechnica provides one-year update

Tue, 2014-09-23 04:58

In Fall 2013 we saw a rich source of LMS market data emerge.

George Kroner, a former engineer at Blackboard who now works for University of Maryland University College (UMUC), has developed what may be the most thorough measurement of LMS adoption in higher education at Edutechnica (OK, he’s better at coding and analysis than site naming). This side project (not affiliated with UMUC) started two months ago based on George’s ambition to unite various learning communities with better data. He said that he was inspired by the Campus Computing Project (CCP) and that Edutechnica should be seen as complementary to the CCP.

The project is based on a web crawler that checks against national databases as a starting point to identify the higher education institution, then goes out to the official school web site to find the official LMS (or multiple LMSs officially used). The initial data is all based on the Anglosphere (US, UK, Canada, Australia), but there is no reason this data could not expand.

There is new data available in Edutechnica’s one-year update, with year-over-year comparisons available as well as improvements to the methodology. Note that the methodology has improved both in terms of setting the denominator and in terms of how many schools are included in the data collection.

The Fall 2014 data, which now includes all schools with more than 800 enrollments:

There’s more data available on the site, including measures of the Anglosphere (combining US, UK, Canada and Australia data) as well as comparison tables for 2013 to 2014. Go read the whole post.

LMS Anglo 2014

In the meantime, here are some initial notes on this data. Given the change in methodology, I will focus on major changes.

  • Blackboard’s BbLearn and ANGEL continue to lose market share in the US[1] – Using the 2013 to 2014 tables (> 2000 enrollments), BbLearn has dropped from 848 to 817 institutions and ANGEL has dropped from 162 to 123. Using the revised methodology, Blackboard market share for > 800 enrollments now stands at 33.5% of institutions and 43.5% of total enrollments.
  • Moodle, D2L, and Sakai show essentially no change in the US – Using the 2013 to 2014 tables (> 2000 enrollments), D2L added only 2 schools, Sakai added 2, and Moodle none.
  • Canvas is the fastest-growing LMS and has overtaken D2L – Using the 2013 to 2014 tables (> 2000 enrollments), Canvas grew ~40% in one year (from 166 to 232 institutions). For the first time, Canvas appears to have larger US market share than D2L (13.7% vs. 12.2% of total enrollments using the table above).
  • BbLearn is popular in the UK while Moodle is the largest provider in Canada and Australia – The non-US numbers are worth reviewing, even without the same amount of detail as we have for US numbers.

While this data is very useful, I will again point out that no one to my knowledge has independently verified the accuracy of the data at this site. I have done sanity checks against Campus Computing and ITC data, but I do not have access to Edutechnica’s specific mechanism for counting systems. For these data sets to gain longer-term acceptance, we will need some method of independent verification.

In the meantime, enjoy the new market data.

Update: Allan Christie has a post up questioning the source data for Australia. I hope this information is used to improve the Edutechnica data set or at least leads to clarifications.

Put simply, it is generally accepted that there are 39 universities (38 public, 1 private) in Australia. Given the small number of universities and my knowledge of the sector I know that there are 20 (51%) universities which use Blackboard as their enterprise LMS, 16 (41%) use Moodle, and 3 (8%) use D2L. It is acknowledged that there are some departments within universities that use another LMS but according to Edutechnica’s methodology these were excluded from their analysis.

  1. Disclosure: Blackboard is a client of MindWires Consulting.

The post New LMS Market Data: Edutechnica provides one-year update appeared first on e-Literate.

On False Binaries, Walled Gardens, and Moneyball

Sat, 2014-09-20 10:08

D’Arcy Norman started a lively inter-blog conversation – the likes of which we haven’t seen in the edublogosphere in quite a while – with his post on the false binary between LMS and open. His main point is that, even if you think that the open web provides a better learning environment, an LMS provides a better-than-nothing learning environment for faculty who can’t or won’t go through the work of using open web tools, and in some cases may be perfectly adequate for the educational need at hand. The institution has an obligation to provide the least-common-denominator tool set in order to help raise the baseline, and the LMS is it. This provoked a number of responses, but I want to focus on Phil’s two, which talk at a conceptual level about building a bridge between the “walled garden” of the LMS and the open web (or, to draw on his analogy, keeping the garden but removing the walls that demarcate its border). There are some interesting implications from this line of reasoning that could be explored. What would be the most likely path for this interoperability to develop? What role would the LMS play when the change is complete? For that matter, what would the whole ecosystem look like?

Seemingly separately from this discussion, we have the new Unizin coalition. Every time that Phil or I write a post on the topic, the most common response we get is, “Uh…yeah, I still don’t get it. Tell me again what the point of Unizin is, please?” The truth is that the Unizin coalition is still holding its cards close to its vest. I suspect there are details of the deals being discussed in back rooms that are crucial to understanding why universities are potentially interested. That said, we do know a couple of broad, high-level ambitions that the Unizin leadership has discussed publicly. One of those is to advance the state of learning analytics. Colorado State University’s VP of Information Technology Pat Burns has frequently talked about “educational Moneyball” in the context of Unizin’s value proposition. And having spoken with a number of stakeholders at Unizin-curious schools, it is fair to say that there is a high level of frustration with the current state of play in commercial learning analytics offerings that is driving some of the interest. But the dots have not been connected for us. What is the most feasible path for advancing the state of learning analytics? And how could Unizin help in this regard?

It turns out that the walled garden questions and the learning analytics questions are related.

The Current State of Interoperability

Right now, our LMS gardens still have walls and very few doors, but they do have windows, thanks to the IMS LTI standard. You can do a few things with LTI, including the following:

  • Send a student from the LMS to someplace elsewhere on the web with single sign-on
  • Bring that “elsewhere” place inside the LMS experience by putting it in an iframe (again, with single sign-on)
  • Send assessment results (if there are any) back from that “elsewhere” to the LMS gradebook.
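To make the mechanics concrete: under LTI 1.1, the “window” is an OAuth 1.0a-signed form POST from the LMS to the tool. Below is a rough Python sketch of how an LMS might sign a basic launch. The URLs, keys, and IDs are hypothetical, and a production system would use a vetted OAuth library rather than hand-rolling the signature.

```python
import base64
import hashlib
import hmac
import time
import uuid
from urllib.parse import quote

def percent_encode(s: str) -> str:
    # OAuth 1.0a requires RFC 3986 percent-encoding (only unreserved chars unescaped)
    return quote(str(s), safe="~")

def sign_lti_launch(launch_url, params, consumer_key, shared_secret):
    """Return form parameters for a basic LTI 1.1 launch, signed with HMAC-SHA1."""
    oauth_params = {
        "oauth_consumer_key": consumer_key,
        "oauth_nonce": uuid.uuid4().hex,
        "oauth_signature_method": "HMAC-SHA1",
        "oauth_timestamp": str(int(time.time())),
        "oauth_version": "1.0",
    }
    all_params = {**params, **oauth_params}
    # Signature base string: METHOD & encoded URL & encoded sorted parameter string
    param_str = "&".join(
        f"{percent_encode(k)}={percent_encode(v)}"
        for k, v in sorted(all_params.items())
    )
    base_string = "&".join(["POST", percent_encode(launch_url), percent_encode(param_str)])
    key = percent_encode(shared_secret) + "&"  # no token secret in the LTI case
    digest = hmac.new(key.encode(), base_string.encode(), hashlib.sha1).digest()
    all_params["oauth_signature"] = base64.b64encode(digest).decode()
    return all_params

# The LMS would POST these fields to the tool's launch URL:
launch = sign_lti_launch(
    "https://tool.example.edu/launch",           # hypothetical tool URL
    {
        "lti_message_type": "basic-lti-launch-request",
        "lti_version": "LTI-1p0",
        "resource_link_id": "course-101-week-3",  # identifies the placement
        "user_id": "student-42",
        "roles": "Learner",
    },
    consumer_key="campus-lms",
    shared_secret="s3cret",
)
```

The tool verifies the signature with the same shared secret, which is how single sign-on works without the tool ever seeing the student’s institutional password.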

The first use case for LTI was to bring a third-party tool (like a web conferencing app or a subject-specific test engine) into the LMS, making it feel like a native tool. The second use case was to send students out to a tool that needed full control of the screen real estate (like an eBook reader or an immersive learning environment) but to make that process easier for students (through single sign-on) and teachers (through grade return). This is nice, as far as it goes, but it has some significant limitations. From a user experience perspective, it still privileges the LMS as “home base.” As D’Arcy points out, that’s fine for some uses and less fine for others. Further, when you go from the LMS to an LTI tool and back, there’s very little information shared between the tools. For example, you can use LTI to send a student from the LMS to a WordPress multiuser installation, have WordPress register that student and sign that student in, and even provision a new WordPress site for that student. But you can’t have it feed back information on all the student’s posts and comments into a dashboard that combines it with the student’s activity in the LMS and in other LTI tools. Nor can you use LTI to aggregate student posts from their respective WordPress blogs that are related to a specific topic. All of that would have to be coded separately (or, more likely, not done at all). This is less than ideal from both user experience and analytics perspectives.
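The grade return travels in the opposite direction: the tool POSTs an LTI 1.1 Basic Outcomes “replaceResult” message (also OAuth-signed) back to the LMS’s outcome service URL. Here is a sketch of the message body in Python; the sourcedId and score are hypothetical.

```python
# Sketch of the LTI 1.1 Basic Outcomes "replaceResult" message a tool sends
# back to the LMS gradebook. The HTTP POST carrying it must be OAuth-signed.
def replace_result_xml(sourced_id: str, score: float) -> str:
    assert 0.0 <= score <= 1.0  # Basic Outcomes scores are normalized to 0..1
    return f"""<?xml version="1.0" encoding="UTF-8"?>
<imsx_POXEnvelopeRequest xmlns="http://www.imsglobal.org/services/ltiv1p1/xsd/imsoms_v1p0">
  <imsx_POXHeader>
    <imsx_POXRequestHeaderInfo>
      <imsx_version>V1.0</imsx_version>
      <imsx_messageIdentifier>msg-001</imsx_messageIdentifier>
    </imsx_POXRequestHeaderInfo>
  </imsx_POXHeader>
  <imsx_POXBody>
    <replaceResultRequest>
      <resultRecord>
        <sourcedGUID><sourcedId>{sourced_id}</sourcedId></sourcedGUID>
        <result><resultScore>
          <language>en</language>
          <textString>{score}</textString>
        </resultScore></result>
      </resultRecord>
    </replaceResultRequest>
  </imsx_POXBody>
</imsx_POXEnvelopeRequest>"""

# The sourcedId is an opaque token the LMS handed the tool at launch time,
# which is the only context the tool has about the gradebook cell.
xml = replace_result_xml("lis-result-abc123", 0.85)
```

Note how little the message carries: a single normalized score tied to an opaque token, which is exactly the “windows, not doors” limitation described above.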

Enter Uniz…Er…Caliper

There is an IMS standard in development called Caliper that is intended to address this problem (among many others). I have described some of the details of it elsewhere, but for our current purposes the main thing you need to know is that it is based on the same concepts (although not the same technical standards) as the semantic web. What is that? Here’s a high-level explanation from the Man Himself, Mr. Tim Berners-Lee:

[Embedded video: Tim Berners-Lee explains the semantic web]

The basic idea is that web sites “understand” each other. The LMS would “understand” that a blog provides posts and comments, both of which have authors and tags and categories, and some of which have parent/child relationships with others. Imagine if, during the LTI initial connection, the blog told the LMS about what it is and what it can provide. The LMS could then reply, “Great! I will send you some people who can be ‘authors’, and I will send you some assignments that can be ‘tags.’ Tell me about everything that goes on with my authors and tags.” This would allow instructors to combine blog data with LMS data in their LMS dashboard, start LMS discussion threads off of blog posts, and probably a bunch of other nifty things I haven’t thought of.
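For illustration, here is roughly what such an event might look like on the wire. Caliper is still a draft as of this writing, so the types and field names below are an assumed sketch of the JSON-LD shape, not the final specification.

```python
import json

# Illustrative only: a Caliper-style JSON-LD event describing a student
# commenting on a blog post. All identifiers and type names are hypothetical.
blog_comment_event = {
    "@context": "http://purl.imsglobal.org/ctx/caliper/v1/Context",  # assumed context URI
    "type": "AnnotationEvent",
    "actor": {
        "id": "https://university.example.edu/users/student-42",
        "type": "Person",
    },
    "action": "Commented",
    "object": {
        "id": "https://blogs.example.edu/econ101/posts/supply-and-demand",
        "type": "BlogPost",            # hypothetical type declared by the blog
        "tags": ["assignment-3"],      # the tag the LMS asked the blog to track
    },
    "eventTime": "2014-09-20T10:08:00Z",
}

# Because the event is self-describing, any Caliper-aware consumer (an LMS
# dashboard, an analytics store) could interpret it without custom
# point-to-point integration code.
payload = json.dumps(blog_comment_event)
```

The key design point is that the semantics ride along with the data, which is what makes federation possible without a hub in the middle.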

But that’s not the only way you could use Caliper. The thing about the semantic web is that it is not hub-and-spoke in design and does not have to have a “center.” It is truly federated. Perhaps the best analogy is to think of your mobile phone. Imagine if students had their own private learning data wallets, the same way that your phone has your contact information, location, and so on. Whenever a learning application—an LMS, a blog, a homework product, whatever—wanted to know something about you, you would get a warning telling you which information the app was asking to access and asking you to approve that access. (Goodbye, FERPA freakouts.) You could then work in those individual apps. You could authorize apps to share information with each other. And you would have your own personal notification center that would aggregate activity alerts from those apps. That notification center could become the primary interface for your learning activities across all the many apps you use. The PLE prototypes that I have seen basically tried to do a basic subset of this capability set using mostly RSS and a lot of duct tape. Caliper would enable a richer, more flexible version of this with a lot less point-to-point hand coding required. You could, for example, use any Caliper-enabled eBook reader that you choose on any device that you choose to do your course-related reading. You could choose to share your annotations with other people in the class and have their annotations appear in your reader. You could share information about what you’ve read and when you’ve read it (or not) with the instructor or with a FitBit-style analytics system that helps recommend better study habits. The LMS could remain primary, fade into the background, or go away entirely, based on the individual needs of the class and the students.

Caliper is being marketed as a learning analytics standard, but because it is based on the concepts underlying the semantic web, it is much more than that.

Can Unizin Help?

One of the claims that Unizin stakeholders make is that the coalition can accelerate the arrival of useful learning analytics. We have very few specifics to back up this claim so far, but there are occasionally revealing tidbits. For example, University of Wisconsin CIO Bruce Maas wrote, “…IMS Global is already working with some Unizin institutions on new standards.” I assume he is primarily referring to Caliper, since it is the only new learning analytics standard that I know of at the IMS. His characterization is misleading, since it suggests a peer-to-peer relationship between the Unizin institutions and IMS. That is not what is happening. Some Unizin institutions are working in IMS on Caliper, by which I mean that they are participating in the working group. I do not mean to slight or denigrate their contributions. I know some of these folks. They are good smart people, and I have no doubt that they are good contributors. But the IMS is leading the standards development process, and the Unizin institutions are participating side-by-side with other institutions and with vendors in that process.

Can Unizin help accelerate the process? Yes they can, in the same ways that other participants in the working group can. They can contribute representatives to the working groups, and those representatives can suggest use cases. They can review documents. They can write documents. They can implement working prototypes or push their vendors to do so. The latter is probably the biggest thing that anyone can do to move a standard forward. Sitting around a table and thinking about the standard is good and useful, but it’s not a real standard until multiple parties implement it. It’s pretty common for vendors to tell their customers, “Oh yes, of course we will implement Caliper, just as soon as the specification is finalized,” while failing to mention that the specification cannot be finalized until there are implementers. What you end up with is a bunch of kids standing around the pool, each waiting for somebody else to jump in first. In other words, what you end up with is paralysis. If Unizin can accelerate the rate of implementation and testing of the proposed specification by either implementing themselves or pushing their vendor(s) to implement, then they can accelerate the development of real market solutions for learning analytics. And once those solutions exist, then Unizin institutions (along with everyone else) can use them and try to discover how to use all that data to actually improve learning. These are not unique and earth-shaking contributions that only Unizin could make, but they are real and important ones. I hope that they make them.

The post On False Binaries, Walled Gardens, and Moneyball appeared first on e-Literate.

Helix: View of an LMS designed for competency-based education

Thu, 2014-09-18 11:42

Within higher education, we tend to talk about LMS solutions based on an institutional perspective – which systems can serve as the official LMS for an entire institution. While this view is important and forms the basis for my LMS graphics, the emergence of new educational delivery models has led to the development of some interesting program-specific LMS models. One example that I have already written about is 2U’s platform (built on top of Moodle and Adobe Connect) for their specific Online Service Provider (OSP) business.

One educational model that is becoming more and more important is competency-based education (CBE). One of the challenges for this model is that the traditional LMS – based on a model using grades, seat time and synchronous cohorts of students – is not easily adapted to serve CBE needs. As described in this CBE primer:

OBE [Outcome-based education] can be implemented in various modalities, including face-to-face, online and hybrid models.

Competency-based education (CBE) is a narrower concept, a subset or instance of OBE, where the outcomes are more closely tied to job skills or employment needs, and the methods are typically self-paced. Again based on the Malan article, the six critical components of CBE are as follows:

  1. Explicit learning outcomes with respect to the required skills and concomitant proficiency (standards for assessment)
  2. A flexible time frame to master these skills
  3. A variety of instructional activities to facilitate learning
  4. Criterion-referenced testing of the required outcomes
  5. Certification based on demonstrated learning outcomes
  6. Adaptable programs to ensure optimum learner guidance
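A minimal data-model sketch (hypothetical names, not Helix’s actual design) may help show how these components differ from a grades-and-seat-time model: mastery is criterion-referenced against an explicit threshold, and progress is driven by demonstrated competence rather than the calendar.

```python
from dataclasses import dataclass, field

@dataclass
class Competency:
    """Component 1: an explicit outcome with a proficiency standard."""
    skill: str
    mastery_threshold: float  # criterion-referenced cutoff, e.g. 0.8

@dataclass
class StudentProgress:
    """Components 2, 4, and 5: self-paced progress measured against criteria
    rather than seat time; certification follows demonstrated mastery."""
    scores: dict = field(default_factory=dict)  # skill -> best demonstrated score

    def record(self, competency: Competency, score: float) -> None:
        # Keep the best demonstration; there is no averaging over a "term"
        prev = self.scores.get(competency.skill, 0.0)
        self.scores[competency.skill] = max(prev, score)

    def has_mastered(self, competency: Competency) -> bool:
        return self.scores.get(competency.skill, 0.0) >= competency.mastery_threshold

# A pretest (as in the Helix screenshots below) can seed starting competencies:
algebra = Competency("linear-equations", mastery_threshold=0.8)
student = StudentProgress()
student.record(algebra, 0.85)  # pretest result
print(student.has_mastered(algebra))  # a passing pretest lets the student skip the unit
```

Note there is no date anywhere in the model; the “schedule” becomes a projection derived from remaining competencies, which is exactly the flexible-time-frame component.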

Enter Helix (again)

In 2008 Altius Education, started by Paul Freedman, worked with Tiffin University to create a new entity called Ivy Bridge College. The goal of Ivy Bridge was to help students get associate degrees and then transfer to a four-year program. Altius developed the Helix LMS specifically for this mission. All was fine until the regional accrediting agency shut down Ivy Bridge with only three months’ notice.[1]

The end result was that Altius sold the LMS and much of the engineering team to Datamark in 2013. Datamark is an educational services firm with a focus on leveraging data. With the acquisition of the Helix technology, Datamark could expand into the teaching and learning process, leading them to rebrand as Helix Education – a sign of the centrality of the LMS to the company’s strategy. Think of Helix Education now as an OSP (a la carte services that don’t require tuition revenue sharing) with an emphasis on CBE programs.

I asked the Helix team to give me a demo of their tool, with permission to do screen grabs, to get a better sense of the system design and inter-relationship with CBE. The goal is to understand some of the nuances brought about by a CBE design focus, based on 3 of the 6 components from SPT Malan. I am not trying to judge better or worse in terms of LMS design but rather to get a better visualization of the implications of CBE.

First – the main page:

classroom

1) Explicit Learning Outcomes

The learning outcomes are embedded throughout the course, including the ability to offer a pretest that determines the starting set of competencies.

pretest_feedback

From a faculty and / or course designer standpoint, there is also a view to determine the coverage of competencies within the course.

OutcomesCoverageReport

2) Flexible Time Frame

The system gives estimates of the time required for most tasks and allows the student to choose their own course plan. The “schedule” then adjusts based on the course plan.

CreateCoursePlan

Because the course is essentially self-paced, Helix adds some features to allow peer assessment and discussions, often in an asynchronous manner (based on current and past discussions of students at the same point in the course, but allowing real-time faculty interaction).

Answer Forum - student

Students can ‘like’ and choose to ‘follow’ different discussions. I asked if students can ‘follow’ other students, and the answer was no (though it is a possibility on the roadmap).

AnswerForum

6) Adaptable Programs / Learner Guidance

This one was interesting, as Helix allows students to select different levels of guidance – hints, examples, step-by-step, etc.

practice

Notes
  • I am doing some research to see if I can get other examples from CBE programs, but newer educational models, including CBE, seem increasingly likely to develop program-specific LMS solutions.
  • While the Helix team stated that the assessments (formative and summative) can be set up with essays or other non-multiple-choice formats, the overall design of this platform gives a good view of the fairly prescriptive, objectivist nature of CBE.[2]
  • Helix stated several times during the demo that the LMS could be applied to non-CBE programs, but my focus was on the CBE nature of the system.
  • Helix is building up its management team, including several key hires with a background at eCollege (acquired by Pearson in 2007), Western Governors University and DeVry University.
  • Finally, this feature is not CBE-based, but I do like the ability for students to provide feedback on lessons within the course itself.

LessonFeedback

  1. More on that sad tale in this post as well as this Chronicle article.
  2. See Tony Bates article for more on this topic.

The post Helix: View of an LMS designed for competency-based education appeared first on e-Literate.

Opening Up the LMS Walled Garden

Wed, 2014-09-17 08:45

In yesterday’s post I described where I (and many others) see the LMS market heading in terms of interoperability.

At the same time, the LMS does a very poor job at providing a lot of the learning technologies desired by faculty and students. There is no way that a monolithic LMS can keep up with the market – it cannot match functionality of open internet tools especially without adding feature bloat.

I would add that part of the cause of the “false binary position” that D’Arcy points out is that much of the public commentary focuses on where the LMS has been rather than where it is going. There is a significant movement based on interoperability that is leading, perhaps painfully and slowly, to a world where the LMS can coexist with open educational tools, with even end users (faculty and students) eventually having the ability to select their tools that can share rosters and data with the institutional LMS.

Coexistence and interoperability, however, should not imply merely having links from the LMS to external tools as is too often the case.

The Walled Garden

The LMS (which George Station rightly points out was really called the Course Management System in the early years) started out as a walled garden with basic functionality of syllabus sharing, announcements, gradebook, email, and a few other tools.

walledgarden

Things changed over time, as Jared Stein points out in his blog post:

Flash forward to 2005(ish), when “Web 2.0” was on many educators’ minds as a new wave of services that made it easier for anyone to express themselves to anyone who was interested in participating. New web services and social media made the legacy LMS look like what it was: A slow-moving cruise ship that locked passengers in their cabins. It didn’t care about user experience. It didn’t care about integrating with social media. It didn’t care about encouraging novel practices or experimentation. But those were really just symptoms; the sickness was that the LMS vendors didn’t care about what was happening in our culture and in our communities as connectivity and multimedia exploded through the open web.

The LMS vendors did not simply ignore these new services, however. They tried to have their cake and eat it too by creating poor imitations of the external tools and stuffing them inside the LMS.

walledgarden2

As Web 2.0 tools proliferated, this approach of maintaining the walled garden was one of the primary causes of feature bloat and poorly-designed learning tools within the LMS.

walledgarden3

False Binary – A Choice

This situation – a walled-garden LMS with feature bloat and inelegant tools even as external tools multiplied – represents the bad side of the ed tech market as it has existed. Despite the weakness of this design approach, the vendors themselves were not the only ones at fault. As Mike Caulfield points out in his description of the “elegant and extensible” Prometheus:

A number of years later I asked a person I knew who worked at Prometheus why Prometheus failed. Did Blackboard crush them?

His answer was interesting. No, it wasn’t Blackboard at all. It was the educational institutions. With the slow, resource-intensive and state-mandated RFP processes, the interminable faculty committees, and the way that even after the deal was signed the institution would delay payment and implementation as long as possible (or suddenly throw it into an unanticipated ‘final review’) it was just not possible to grow a stable business. The process institutions followed was supposed to ensure equitable access to contracts, but what it did was make it impossible for any company not sitting on a pile of cash to stay in business. (I’m extrapolating a bit here, but not much).

I would add that the RFP process also encourages a feature checklist mentality, elevating the importance of being able to say “we have that feature” and minimizing the ability to say “this design doesn’t suck”.

Many institutions have reacted slowly to the proliferation of tools and officially support only the enterprise LMS – often due to FERPA / student privacy concerns but also due to perceived inability of central units to provide support to faculty and students on multiple tools.

But this is a choice, even in the current market with limited interoperability. There are other institutions that support not only the official enterprise LMS but also multiple learning tools. While institutions have a responsibility to provide baseline LMS services for faculty, there is a strong argument that they also have a responsibility to support the innovators and early adopters who want to experiment with different learning tools, whether or not those tools integrate with the LMS within a course.

Moving Beyond the Wall

But can the market progress such that the enterprise LMS can coexist with open tools even at the course level? The answer in my mind is yes, and the work to move in this direction has been in progress for years. Thanks to the LTI specification, and in the future the Caliper interoperability framework, the vision that George Kroner describes is getting closer and closer.

But the LMSs today won’t be the LMSs of tomorrow. Rather than being a “dumping ground” for content, maybe one possible future for LMSs is as Learning Management Scaffolding – metaphorically supporting learning no matter its shape or form – with content being viewed and activities taking place inside and outside of the LMS. Maybe content will be seamlessly navigable around the LMS and the web – and perhaps in other types of systems like LCMSs – Learning Content Management Systems. Maybe learning tools of all types and sizes – but external to the LMS – will support every long-tail instructional desire imaginable while assessment results feed back into the LMS gradebook. Maybe the LMS will be the storage mechanism for learning analytics as well, but it is more likely that it will become only one source of data feeding into another system better suited for the task. But try as I might, I fail to imagine a future in which some centrally-managed, instructor-accessible system does not store rosters and grades, enforce privacy and security policies, and provide some form of starting-off point for students.

In this developing future market, coexistence of LMS and Open will include not just links or grudging institutional support; it will also include sharing of rosters, data, and context. Open tools will start with the class roster in place, user-activity data will be shared between apps, and external apps will run in the context of the course design and recent class activities.

walledgarden5

There will be painful implementations – caused both by LMS vendors and by institutions – that will prevent a smooth transition to this breakdown of the walled garden, but it will become increasingly difficult for LMS solutions to survive over time if they don’t adapt. There will also be market niches (e.g. specific online programs) that will retain the walled garden LMS approach, but in general the markets should change.

I personally see the realistic future as having more of a choice of tools rather than a minimal LMS. LMS vendors will continue to have reasons to develop (or acquire) their own internal tools, and there will even be cases where the tight integration and focused development will lead to better tools in the LMS than outside. The key change will be the ability for integration decisions – which tools to use in specific classes or at specific institutions – to be made closer to the faculty and student end users: from LMS vendor to central IT to academic program to even individual faculty, moving ever closer to those who know the specific needs of the class. Central IT and the institution will remain important in setting policies and permissions to protect student privacy and provide guidance to faculty and course designers who are more conservative in their ed tech usage. But either way (minimal LMS or swappable-tool LMS), I think the long-term trend is moving in this direction of LMS and Open tool coexistence.

Update 9/19: Updated graphics to add LMS label, CC license and logo to facilitate sharing outside of blog.

The post Opening Up the LMS Walled Garden appeared first on e-Literate.

LMS and Open: The false binary is based on past, not future markets

Mon, 2014-09-15 12:46

D’Arcy Norman has an excellent blog post up titled “On the false binary of LMS vs. Open” that captures a false framing issue.

We’re pushed into a false binary position – either you’re on the side of the evil LMS, working to destroy all that is beautiful and good, or you’re on the side of openness, love, and awesomeness. Choose. There is no possible way to teach (or learn) effectively in an LMS! It is EVIL and must be rooted out before it sinks its rotting tendrils into the unsuspecting students who are completely and utterly defenseless against its unnatural power!

While D’Arcy is a proponent of open tools, he rightly calls out the need to understand institutional responsibilities.

But. We can’t just abdicate the responsibility of the institution to provide the facilities that are needed to support the activities of the instructors and students. That doesn’t mean just “hey – there’s the internet. go to it.” It means providing ways for students to register in courses. For their enrolment to be automatically processed to provision access to resources (physical classrooms, online environments, libraries, etc…). For students’ grades and records to be automatically pushed back into the Registrar’s database so they can get credit for completing the course. For integration with library systems, to grant access to online reserve reading materials and other resources needed as part of the course.

This is an important point: the institutional LMS is important and will not, and should not, go away anytime soon. I have pointed out recently that the LMS is one of the very few technologies now used in a majority of courses within an institution, and the institutional responsibility described above helps to explain why.

In our consulting work Michael and I often help survey institutions to discover what technologies are being used within courses, and typically the only technologies that are used by a majority of faculty members or in a majority of courses are the following:

  • AV presentation in the classroom;
  • PowerPoint usage in the classroom (obviously connected with the projectors);
  • Learning Management Systems (LMS);
  • Digital content at lower level than a full textbook (through open Internet, library, publishers, other faculty, or OER); and
  • File sharing applications.

At the same time, the LMS does a very poor job at providing a lot of the learning technologies desired by faculty and students. There is no way that a monolithic LMS can keep up with the market – it cannot match functionality of open internet tools especially without adding feature bloat.

I would add that part of the cause of the “false binary position” that D’Arcy points out is that much of the public commentary focuses on where the LMS has been rather than where it is going. There is a significant movement based on interoperability that is leading, perhaps painfully and slowly, to a world where the LMS can coexist with open educational tools, with even end users (faculty and students) eventually having the ability to select their tools that can share rosters and data with the institutional LMS.

Below is a modified presentation I gave at the Apereo Mexico conference in the spring (with a few changes to explain slides without audio). The key point is that there are subtle changes to the LMS market that are significant, and the coexistence of the LMS with open tools will be central to the market’s future.

Will all LMS vendors move in this direction? In marketing, yes, but in reality, no. There are different approaches to this coexistence issue from the LMS vendors, ranging from lip service to outright support, and several points in between. But the overall trend is clearly moving in this direction, even if some solutions lose out over time.

Download (PDF, 4.21MB)

The post LMS and Open: The false binary is based on past, not future markets appeared first on e-Literate.

Pearson’s Efficacy Listening Tour

Thu, 2014-09-11 14:06

Back around New Year, Michael wrote a post examining Pearson’s efficacy initiative and calling on the company to engage in active discussions with various communities within higher education about defining “efficacy” with educators rather than for educators. It turns out that post got a fair bit of attention within the company. It was circulated in a company-wide email from CEO John Fallon, and the blog post and all the comments were required reading for portions of the company leadership. After a series of discussions with the company, we, through our consulting company, have been hired by Pearson to facilitate a few of these conversations. We also asked for and received permission to blog about them. Since this is an exception to our rule that we don’t blog about our paid engagements, we want to tell you a little more about the engagement, our rationale for blogging about it, and the ground rules.

The project itself is fairly straightforward. We’re facilitating conversations with a few different groups of educators in different contexts. The focus of each conversation is how they define and measure educational effectiveness in their respective contexts. There will be some discussion of Pearson’s efficacy efforts at a high level, but mainly for the purpose of trying to map what the educators are telling us about their practices to how Pearson is thinking about efficacy in the current iteration of their approach. After doing a few of these, we’ll bring together the participants along with other educators in a culminating event. At this meeting, the participants will hear a summary of the lessons learned from the earlier conversations, learn a bit more about Pearson’s efficacy work, and then break up into mixed discussion groups to provide more feedback on how to move the efficacy conversation forward and how Pearson’s own efforts can be improved to make them maximally useful to educators.

Since both e-Literate readers and Pearson seemed to get a lot of value from our original post on the topic, we believe there would be value in sharing some of the ongoing conversation here as well. So we asked for and received permission from Pearson to blog about it. Here are the ground rules:

  • We are not getting paid to blog and are under no obligation to blog.
  • Our blog posts do not require prior editorial review by Pearson.
  • Discussions with Pearson during the engagement are considered fair game for blogging unless they are explicitly flagged as otherwise.
  • On the other hand, we will ask Pearson customers for approval prior to writing about their own campus initiatives (and, in fact, will extend that courtesy to all academic participants).

The main focus of these posts, like the engagement itself, is likely to be on how the notion of efficacy resonates (or doesn’t) with various academic communities in various contexts. Defining and measuring the effectiveness of educational experiences—when measurement is possible and sensible—is a subject with much broader application than Pearson’s product development, which is why we are making an exception to our blogging recusal policy for our consulting engagements and why we appreciate Pearson giving us a free hand to write about what we learn.

The post Pearson’s Efficacy Listening Tour appeared first on e-Literate.

GAO Report: Yes, student debt is a growing problem

Thu, 2014-09-11 10:57

In case anyone needed additional information to counter the Brookings-fed meme that “Americans who borrowed to finance their education are no worse off today than they were a generation ago”, the U.S. Government Accountability Office (GAO) released a report yesterday with some significant findings. As reported at Inside Higher Ed by Michael Stratford:

More than 700,000 households headed by Americans 65 or older now carry student debt, according to a report released Wednesday by the U.S. Government Accountability Office. And the amount of debt owed by borrowers 65 and older jumped from $2.8 billion in 2005 to $18.2 billion last year. [snip]

Between 2004 and 2010, for instance, the number of households headed by individuals 65 to 74 with student loan debt more than quadrupled, going from 1 percent to 4 percent of all such families. During that same period, the rate of borrowing among Americans under 44 years old increased between 40 and 80 percent, even though borrowing among that age group is far more prevalent than it is among senior citizens.

I have been highly critical of the Brookings Institution and its report and update. This new information from the GAO goes outside the selective Brookings data set of households headed by people aged 20 – 40, but it should be considered by anyone trying to draw conclusions about student debt holders.

Noting that the Brookings analysis is based on “Americans who borrowed to finance their education” while the GAO report covers student debt holders, it is worth asking whether the two use a similar definition. For the most part, yes, as explained at IHE:

While some of the debt reflects loans taken out by parents on behalf of their children, the vast majority — roughly 70 to 80 percent of the outstanding debt — is attributable to the borrowers’ own education. Parent PLUS loans accounted for only about 27 percent of the student debt held by borrowers 50 to 64 years old, and an even smaller share for borrowers over 65.

Go read at least the entire IHE article, if not the entire GAO report.

Student debt is a growing problem in the US, and the Brookings Institution conclusions are misleading at best.

The post GAO Report: Yes, student debt is a growing problem appeared first on e-Literate.

Brookings Institution analysis on student debt becoming a farce

Wed, 2014-09-10 12:39

I have previously written about the deeply flawed Brookings Institution analysis on student debt with its oft-repeated lede:

These data indicate that typical borrowers are no worse off now than they were a generation ago …

Their data is based on the triennial Survey of Consumer Finances (SCF) by the Federal Reserve Board, with the report based on 2010 data. With the release of the 2013 SCF data, the Brookings Institution put out an update to the report this week, and they continue with the lede:

The 2013 data confirm that Americans who borrowed to finance their educations are no worse off today than they were a generation ago. Given the rising returns to postsecondary education, they are probably better off, on average. But just because higher education is still a good investment for most students does not mean that high and rising college costs should be left unquestioned.

This conclusion is drawn despite the following observations of changes from 2010 – 2013 in their own update:

  • The share of young (age 20 – 40) households with student debt rose from 36% to 38%;
  • The average amount of debt per household rose 14%;
  • The distribution of debt holders rose by 50% for debt levels of $20k – $75k and dropped by 19% for debt levels of $1k – $10k; and
  • Wage income is stagnant, at the same level as ~1999, yet debt amounts have risen by ~50% in that same time period (see below).

Wage and borrowing over time

Brookings’ conclusion from this chart?

The upshot of the 2013 data is that households with education debt today are still no worse off than their counterparts were more than 20 years ago. Even though rising debt continued to cut into stagnant incomes, the average household with debt is better off than it used to be.

The strongest argument that Brookings presents is that the median monthly payment-to-income ratios have stayed fairly consistent at ~4% over the past 20 years. What they fail to mention is that households are taking much longer to pay off student loans now.

More importantly, the Brookings analysis ignores the simple and direct measurement of loan delinquency. See this footnote from the original report [emphasis added]:

These statistics are based on households that had education debt, annual wage income of at least $1,000, and that were making positive monthly payments on student loans. Between 24 and 36 percent of borrowers with wage income of at least $1,000 were not making positive monthly payments, likely due to use of deferment and forbearance …

That’s what I call selective data analysis. In the same SCF report that Brookings used for its update:

Delinquencies

The delinquency rate for student loans has gone up ~50% from 2010 to 2013!

How can anyone claim that Americans with student debt are no worse off when:

  • More people have student debt;
  • The average amount of debt has risen;
  • Wage income has not risen; and
  • The delinquency rate for student loans has risen.

None of the secondary spreadsheet jockeying from Brookings counters these basic facts. This ongoing analysis by Brookings on student debt is a farce.

The post Brookings Institution analysis on student debt becoming a farce appeared first on e-Literate.

What the Apple Watch Tells Us About the Future of Ed Tech

Wed, 2014-09-10 12:20

Nothing.

So please, if you’re thinking about writing that post or article, don’t.

I’m begging you.

The post What the Apple Watch Tells Us About the Future of Ed Tech appeared first on e-Literate.

Say What? Buzzfeed follows up on D2L story with solid reporting

Sun, 2014-09-07 13:14

In a post last month I questioned the growth claims that D2L was pushing to the media based on their recent massive funding round. A key part of the article was pointing out the lack of real reporting from news media.

It is worth noting that not a single media outlet listed by EDUKWEST or quoted above (WSJ, Reuters, Bloomberg, re/code, edSurge, TheStar) challenged or even questioned D2L’s bold claims. It would help if more media outlets didn’t view their job as paraphrasing press releases.

I should give credit where it’s due: Education reporter Molly Hensley-Clancy at Buzzfeed has done some solid reporting with her article out today.

In response to detailed questions from BuzzFeed News about figures to back up its claims of record growth in higher education and internationally, the company released a statement to BuzzFeed News, saying “As a private company, D2L does not publicly disclose these details. The past year has been one of record growth for D2L, culminating in the recent $85 million round of financing.” A representative declined to make the company’s CEO, or any other executive, available for an interview related to the company’s growth.

The stonewalling didn’t come as a surprise to former employees with whom BuzzFeed News spoke.

“The picture they’re painting of growth is not accurate,” said one former employee, who left the company within the last year and asked to remain anonymous, citing his confidentiality agreement with the company. “If you look at actual metrics, they tell a different story. They’re very likely not seeing growth in higher education.”

Molly’s article included discussions with three former D2L employees, an interview with CSU Channel Islands CIO Michael Berman, and a D2L official response (in a manner of speaking). Who would have thought that Buzzfeed would be the source of valuable reporting that challenges the all-too-easy headlines provided through press releases?

Me, for one. If you follow the Buzzfeed education articles, you’ll notice a pattern of this type of reporting, mostly focused on the business of education.

In her recent articles, Molly challenges public perceptions, digs up unique information through interviews and document research, and describes the findings in hard-hitting but balanced pieces. Buzzfeed is becoming an important source for education news and a welcome addition.

The post Say What? Buzzfeed follows up on D2L story with solid reporting appeared first on e-Literate.

Unizin Updates: Clarification on software development and potential new members

Wed, 2014-09-03 15:11

In a recent post on Kuali, I characterized Unizin as a community source initiative. Brad Wheeler, CIO at Indiana University and co-founder of Kuali and Unizin, responded via email (with permission to quote):

Unizin is not a Community Source effort in the way that I understand Community Source as we started applying the label 10+ years ago. Unizin is better understood, as you have reported, as a cloud-scale service operator somewhat like I2. It does not plan to do lots of software development other than as needed for integrations. No biggie, just a nuanced observation from the end of the story.

Brad is correct, and I note that Unizin’s documentation has been fairly consistent about its lack of plans for software development, as seen in the Unizin FAQs:

Is Unizin another open- or community-source project like Sakai or Kuali?
No – those endeavors focus on building software as a community, for use by individual institutions. Unizin strives to foster a community more concerned with creating and sharing content and improving outcomes.

I have already revised the Kuali post to add a clarification on this point. I asked Brad whether this means that Unizin is ruling out software development. His reply:

Unizin is working on its roadmap for each area. If we do need to head down some development approach that is more than integration, we’ll give thought to the full range of options for best achieving that, but there is no plan to begin an open/community source effort at this time.

All public indications are that Unizin plans to source existing technologies (as they have done with Canvas as the LMS) for content repository and learning analytics functionality, focusing any software development on integrations.

Potential New Consortium Members

Based on two articles about the University of Iowa (D2L) exploration of Unizin[1], the University of Illinois (Blackboard), University of Wisconsin (D2L), University of Minnesota (Moodle), and Ohio State University (D2L) are listed as schools that are “close to joining” Unizin, quoting the Iowa CIO and an internal Illinois email.

The original four consortium members are Indiana University, University of Michigan, University of Florida and Colorado State University. Unizin materials originally indicated the following set of potential partners:

  • Colorado State University (Blackboard)
  • Indiana University (Sakai)
  • Oregon State University (Blackboard)
  • Purdue University (Blackboard)
  • University of Florida (Sakai, with some use of Canvas)
  • University of Maryland (Canvas)
  • University of Michigan (Sakai)
  • University of Texas (Blackboard, with at least one campus in the process of moving to Canvas)
  • University of Wisconsin (Desire2Learn)
  • University of Utah (Canvas)

Of the new schools mentioned, only Wisconsin appears on the earlier list.

From the first article:

Lon Moeller, the associate provost for undergraduate education, will present the idea to the council. He said he is not yet sure whether the UI will commit to Unizin.

“We’re talking to faculty and administrators across campus to see if there’s interest in Unizin as a program,” he said. “We’re evaluating Unizin as a pilot project.”

From the second article:

“We are discussing doing a pilot project for next spring, but we need to decide if the project is worth our time,” Fleagle [CIO] said.

The articles make it sound like UI officials are pushing for the campus to join Unizin:

If the UI decided to join Unizin, it could have input into the new software as well as help to develop it.

“There is no software yet. The four partners [Colorado State, Indiana, Florida, and Michigan] are working on it,” Fleagle said. “They have groups working on it now. If we wanted to do it, we need to get in so that Iowa’s needs can get met … The disadvantage of waiting is that you don’t get access to those discussions right away. I think we have as good of people at Iowa as anyone else involved influencing it.”

I talked to Maggie Jesse, Senior IT Director for the Office of Teaching, Learning, & Technology, to get a better understanding of their decision process. The picture that emerges is a campus that has been invited to join Unizin, is potentially willing to do a pilot program, but is not in marketing mode.

To set the stage, Iowa recently performed an assessment to get campus feedback on ICON, its branded version of the LMS. The message coming back from the assessment was that faculty were not motivated to make a major change: they were not unhappy and did not need the burden of an LMS switch. Furthermore, Iowa just completed an upgrade to ICON, changing the D2L version and reconfiguring the campus hosting and support structure.

In the article quoted above, a faculty member had asked whether Iowa should just wait for others to join Unizin and let Iowa decide later. Steve Fleagle answered that the argument for deciding early was to influence Unizin’s direction. The pilot option allows Iowa to work within its usual culture of campus decision making: piloting a system and then taking the outcomes to the campus for a decision. Iowa has developed a communications plan to discuss a possible Unizin pilot with over 20 different campus groups to determine if there is interest in Unizin, and yesterday’s meeting was just one of these.

Illinois Confusion

The first article quoted sections of an email at the University of Illinois that reflected a poor understanding of Unizin [emphasis added].

Charles Tucker, the vice provost for undergraduate education and innovation at the University of Illinois, wrote in an email that the benefits of many universities joining Unizin include being able to band together to “influence the companies that provide software for teaching and learning.” [snip]

Tucker said the system can streamline different technologies to make things simpler for professors and students.

“The Unizin approach is to promote a standards-based approach where the same digital course materials can work in any learning-management system,” he said. “Faculty spend less time making the technology work, and students get a better course.”

This statement is at odds with the description at Inside Higher Ed from July:

The digital learning consortium, announced Wednesday morning, aims to simplify how universities share learning analytics, content and software platforms. But in order to do so, Unizin needs its members to use the same infrastructure. A common learning management system is the first part of that package.

“You don’t really have common infrastructure if you’re saying everything is heterogeneous,” said Brad Wheeler, the Unizin co-founder who serves as vice president for IT and chief information officer at Indiana University. “A lot of these different learning tools — Sakai, Blackboard, Canvas — they all do a bunch of really good stuff. But five universities picking five different ones — what’s the end value in that if they want to do something together?” [snip]

“The presumption is if I join Unizin, I’m going to be using Canvas sooner or later,” said Joel Dehlin, Instructure’s chief technology officer [ed. now CEO of Kuali's for-profit arm]. Joining Unizin without using Canvas, he added, is “like joining a country club and paying for the country club and not using the golf course.”

The goals of Unizin include promoting institutional control over digital course materials and even reuse of content in various ways. These goals are not based on standards-driven LMS neutrality, but rather on a common infrastructure that would sit on top of, and benefit from, standards. Note that these are goals and not yet realities. However, Illinois might benefit from making its decision based on a more accurate understanding of Unizin.

Update: See the comment below from Rob Abel, CEO of the IMS Global Learning Consortium, regarding standards.

  1. Disclosure: I was directly involved as a consultant in the original Iowa decision to adopt D2L. See this link for details.

The post Unizin Updates: Clarification on software development and potential new members appeared first on e-Literate.