This is a quick note on behalf of my friends at the Apereo Foundation: their reception will be at 6:30 PM tonight at the Hilton at EDUCAUSE. For some reason, it got left off the program. Details are here. I, unfortunately, will not be able to join, but if you’re interested in talking to good people doing good work in higher ed open source, then drop on by.
Josh Kim wrote three predictions at Inside Higher Ed for the EDUCAUSE 2013 conference, and I particularly agree with the basis of #2:
Prediction 2: Adaptive Learning Platforms Will Be the Toast of the Party
Everyone will want to talk to Knewton. The ASU / Pearson / Knewton partnership is a huge deal. Knewton has the technology, relationships, funding, and management team to make a huge impact.
I’ll be looking at EDUCAUSE at the other adaptive learning players. Where are they focusing their platform work? What deals and relationships do they currently have? How big is their market penetration? What is the quality of their leadership team and the employees they have at EDUCAUSE?
I’m betting we will see at least one major adaptive learning vendor announcement. A purchase, a big collaboration deal, or a new huge round of funding.
I also expect much of the discussion this year to be on adaptive learning. But one risk of this zeitgeist (if it comes to pass) is that terminology becomes fuzzy and often devoid of meaning. Hey, get your adaptive here. You want to be adaptive, don’t you? We are the adaptive makers… and we are the dreamers of dreams.
What does adaptive learning mean? I don’t believe anyone can describe all the concepts accurately and thoroughly in one place, but I did see a video from Knewton that is helpful. They have a “Knerds on the Board” blog series that includes various Knewton staff giving short video explanations of key concepts. In a recent post, Jess Nepom described the differences among differentiated learning, personalized learning, and adaptive learning, which I have paraphrased below.
- Differentiated Learning describes the case where there are different pathways that students can take within a learning environment, typically organized as pre-set categories.
- Personalized Learning describes the case where there is a different pathway for each individual student, often implemented in a rules-based method with a decision tree. Students might take a diagnostic test on the first day that will be fed into a rules engine to lay out that individual’s path and content.
- Adaptive Learning is data-driven and continually takes data from students and adapts their learning pathway to “change and improve over time for each student”.
In Knewton’s world, these three are steps towards the ideal – Differentiated is step 1, Personalized is step 2, and Adaptive is step 3. I suspect that many other platform vendors share this view of the world.
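To make the distinction concrete, here is a minimal sketch contrasting a rules-based “personalized” pathway with a data-driven “adaptive” one. This is purely my own illustration; it does not reflect Knewton’s or any other vendor’s actual algorithms.

```python
# Hypothetical illustration of the personalized-vs-adaptive distinction.
# Nothing here reflects any vendor's actual algorithm.

def personalized_path(diagnostic_score):
    """Personalized: a fixed rules engine (decision tree) run once,
    typically against a day-one diagnostic, to lay out the whole path."""
    if diagnostic_score < 50:
        return ["remedial unit", "core unit", "review unit"]
    elif diagnostic_score < 80:
        return ["core unit", "review unit"]
    else:
        return ["core unit", "enrichment unit"]

class AdaptivePath:
    """Adaptive: the pathway is re-estimated continually as new
    performance data arrives, rather than fixed up front."""
    def __init__(self):
        self.mastery_estimate = 0.5  # prior belief about mastery

    def update(self, item_correct, weight=0.2):
        # Blend each new observation into the running estimate.
        observation = 1.0 if item_correct else 0.0
        self.mastery_estimate += weight * (observation - self.mastery_estimate)

    def next_activity(self):
        if self.mastery_estimate < 0.4:
            return "scaffolded practice"
        elif self.mastery_estimate < 0.8:
            return "core practice"
        return "challenge problem"

# Usage: the recommended activity shifts as evidence accumulates.
learner = AdaptivePath()
for correct in [False, False, True, True, True]:
    learner.update(correct)
    print(learner.next_activity(), round(learner.mastery_estimate, 2))
```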
While this video is helpful for basic clarity on adaptive learning and related concepts, it makes the implicit assumption that the machine should select learning pathways for students, and that algorithms relying on big data are the way to go. But this is only one version of how to design learning effectively around the student.
Another approach is to empower students to select their own learning pathway from pre-set categories (described above as differentiated learning), or even to create their own pathway that adjusts over time based on the learning process and interactions with other learners. This gets close to the Connectivism model behind cMOOCs.
It will be interesting to see if the various vendor demos and conference sessions include descriptions of what is meant by differentiated, personalized or adaptive learning, and if presenters describe the key issue of who selects the pathway – the instructor, the student, or the machine.
The post Differentiated, Personalized & Adaptive Learning: some clarity for EDUCAUSE appeared first on e-Literate.
This week is the week of the big annual EDUCAUSE conference which, among other things, is the world’s largest ed tech fashion show. In the next five days, we will learn that earth tones are the latest style for adaptive personalized learning systems, and that hemlines and license fees are both going way up this year. And by the end of the week, more than one conference-goer will be heard to mutter, “If I have to sit through one more MOOC presentation, I am going to shoot myself.”
And yet, outside this bubble, the noise dissipates surprisingly quickly. When Phil and I visit campuses, we routinely meet faculty who have literally never heard of MOOCs. When new ed tech products do come to campus, they are often either uncritically embraced as the savior of higher education or reflexively reviled as its destruction—and often the latter by one stakeholder group in quick reaction to the former by another group. We have real and serious challenges in education, some of which can genuinely be helped through the judicious application of technology by thoughtful and concerned educators. But we are not having the right conversations to match problems with solutions.
That is why Phil and I are delighted and excited to announce a new initiative we’re calling e-Literate TV, in collaboration with a company called In the Telling. Using some of the lessons that we’re learning from the MOOC community about differential engagement, our goal is to create multiple entry points into a conversation about the issues. The first entry point is a series of 10-minute video episodes providing overviews of each new topic. One of the two great assets that In the Telling brings to the table is their experience at telling stories in film. With their help, we are crafting segments that we hope will be appealing and informative to those faculty, presidents, provosts, and other important college and university stakeholders who are not ed tech junkies. The series will introduce the topics in what we hope will be an engaging and provocative manner. All episodes will be released under a Creative Commons license on YouTube.
For those who wish to dive deeper, we will be taking advantage of In the Telling’s second asset, which is their Telling Story platform. We can tie content directly to the timeline of each video, bringing up further references, like e-Literate blog posts or relevant scholarly articles, in context. And finally, we’ll be integrating that platform into WordPress, where we will be posing questions that are intended to be discussion starters across campus stakeholder groups. We are particularly interested in community- and conversation-building features and how to add to these capabilities over time. Access to that content will also be free, and in cases where it is e-Literate blog posts, it is already Creative Commons licensed.
The pilot series will consist of five episodes (not counting the introduction) on the following topics:
- The big issues in higher education that are driving interest in and need for educational technology
- The landscape of online learning
- The state of play with MOOCs
- The rise of courseware
- Learning analytics and adaptive learning
Filming of these episodes has already begun and will be finished up at EDUCAUSE. We’ll have a longer trailer showing off some of this new material soon and expect to launch the pilot series in December or January.
Beyond that, we are seeking sponsorship to fund e-Literate TV as an ongoing project. We think we can do four series the size of the pilot every year. Truth to tell, each one of the pilot episodes could be a topic of its own series. We’re also interested in digging into topics like the role of open education, how to get beyond the factory model in the classroom, and how the ed tech ecosystem works.
We’ll be sharing more about the project as it develops. In the meantime, here’s a taste of the video aspect of the series:
And here’s an overview of the Telling Story platform:
The IMS has announced the initial public release of something they call Caliper, which they characterize as a learning analytics interoperability framework. But it’s actually much, much more than that. In fact, it represents the functional core of something that my SUNY colleagues and I used to refer to as a Learning Management Operating System (LMOS), and is something that I have been hoping to see for eight years, because it promises to resolve the tension between the flexibility of lots of separately developed, specialized learning tools and the value and convenience of an integrated system.
Let’s take a peek at the framework to see why I’m so hopeful about it. But before we do that, you should fasten your seat belt and strap on your aviator goggles. It’s going to get geeky in here.

The LMOS
Back in 2005, when I worked at the SUNY Learning Network, some colleagues and I were asked to evaluate the options for the next SUNY-wide LMS. It is important to understand just how diverse SUNY is. There are 64 campuses in the system, ranging from tiny rural Adirondack Community College to giant urban Suffolk County Community College to R1 universities like SUNY Stony Brook to specialty colleges like the Fashion Institute of Technology and the SUNY College of Optometry. These schools have radically different teaching needs from each other. We concluded that no LMS that existed at the time could serve all the needs of this diverse group of institutions equally well.
Now, 2005 was the peak of the Web 2.0 hype cycle, which meant that it was also the beginning of the “LMS is dead” meme. Creative, motivated teachers were starting to do really good online education outside of the LMS using tools like blogs and wikis. But our job at the SUNY Learning Network was to help campuses grow their online education programs at scale, and it was clear to us that a majority of faculty simply did not have the skills (or time, or passion) to cobble together decentralized tools and incur the extra management required to run a class that way. Furthermore, with learning analytics in their infancy, our newborn hope of actually being able to gather enough data on student behavior to learn from them and help them achieve their goals would never come to fruition in a radically decentralized environment. There would be no way to get all the data into one place to analyze it.
To solve this dilemma, we proposed a concept that we called the Learning Management Operating System. Like an operating system on a desktop computer, it would offer low-level services upon which many specialized applications written by many different developers could run and operate. Patrick Masson and I articulated the educational imperative for such a system in an article for eLearn Magazine called “Unbolting the Chairs,” which started with the following argument:
In the physical world, it goes without saying that not all classrooms look the same. A room that is appropriate for teaching physics is in no way set up for teaching art history. A large lecture hall with stadium seating is not well-suited to a small graduate seminar. And even within a particular class space, most rooms are substantially configurable. You can move the chairs into rows, small groups, or one big circle. You can choose to have a projection screen or a whiteboard at the front of the room. You can bring equipment in and out. Most of the time, we take these affordances for granted; yet they are critical factors for teaching and learning. When faculty members don’t have what they need in their rooms, they tend to complain loudly.
The situation is starkly different in most virtual classrooms. In the typical Learning Management System (LMS), the virtual rooms are fairly generic. Almost all have discussion forums, calendars, test engines, group work spaces, and gradebooks. (The Edutools Web site lists 26 LMSs that have all of these features.) Many have chat capabilities and some ability to move the chairs around the room using instructional templates. (Edutools lists 12 products with these additional capabilities.) Beyond these common features, LMSs tend to differentiate themselves with fine-grained features. Does the chat feature have a searchable archive? Can I download the discussion posts for offline reading? These features may be very useful but they are also fairly generic in the sense that they are merely enhancements of general-purpose accoutrements that already exist. Our virtual classrooms may be getting smarter, but they are still pretty much one-size-fits-all. They aren’t especially tailored to teach particular subjects to particular students in a particular way.
This is not as it should be. Virtual classrooms should be more flexible than their physical counterparts rather than less so. Do you teach art history? Then you need an image annotation tool. But probably a different one than the image annotation tool needed to teach histology. Foreign language teachers may want voice discussion boards to check student accents. Writing teachers should have peer editing tools. History teachers should have interactive maps. And so on.
Granted, some of these applications exist today and can be included in an LMS. But there are not nearly as many of them as there can and should be. We contend that the current technical design philosophy of today’s Learning Management Systems is substantially retarding progress toward the kind of flexible virtual classrooms that teachers need to provide quality education. In order to have substantial development of specialized teaching tools at an acceptable rate, LMSs need to be designed from the ground up to make development and integration of new tools as easy as possible.
We also recommended to SUNY that the system should build an LMOS, and we set about trying to define what that would mean. One central architectural concept that we worked with was something that we called a “service broker.” The basic idea is that tools would plug into it and share information with other tools. (One commenter helpfully pointed out that a more appropriate term for this idea was actually a service bus.) I wrote a series of blog posts trying to unpack the idea, including one that described integrating an external blog tool into an LMS environment. The gist of the scenario I described was as follows:
- The service broker would take the RSS feed as input.
- There would be some sort of single sign-on mechanism to verify that the author of the blog is the same student that the LM(O)S knows about.
- The LMS would be able to publish class and assignment information which the blog would be able to read as post categories.
- When a student published a blog post with the appropriate class and assignment categories, the broker would pick it up and make it available to other applications.
- The activity tracker would note that the student had submitted a blog post blah on date blah for assignment blah in class blah.
- The course grade book would add a line item for the student’s submission for the assignment and display the text of the post.
- An aggregator in the course space would display the blog posts from various students for the assignment.
- Later, an ePortfolio app could ask the grade book for the student’s blog post along with the instructor’s grade and comment.
The idea was that an LMOS service broker would have different adapters to accept data from different kinds of learning applications and pass that data to whatever other apps needed it. These adapters would ideally be standards-based so that it would be easy to plug in new applications from different sources.
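To make the service broker idea a little more concrete, here is a toy sketch of the pattern: adapters publish events onto a shared bus, and other tools subscribe to the event types they care about. This is my own illustration, not the actual SUNY design; all the names are hypothetical.

```python
# Toy sketch of the "service broker" (service bus) idea: adapters publish
# events, and other learning tools subscribe to the event types they need.
from collections import defaultdict

class ServiceBroker:
    def __init__(self):
        self.subscribers = defaultdict(list)

    def subscribe(self, event_type, handler):
        self.subscribers[event_type].append(handler)

    def publish(self, event_type, payload):
        for handler in self.subscribers[event_type]:
            handler(payload)

broker = ServiceBroker()

# A gradebook tool listens for homework submissions...
broker.subscribe("blog_post_submitted",
                 lambda e: print(f"Gradebook: add line item for {e['student']} "
                                 f"on assignment {e['assignment']}"))
# ...and so does an activity tracker.
broker.subscribe("blog_post_submitted",
                 lambda e: print(f"Tracker: {e['student']} posted '{e['title']}' "
                                 f"on {e['date']}"))

# An RSS adapter turns an incoming feed item into a bus event.
def rss_adapter(feed_item):
    broker.publish("blog_post_submitted", {
        "student": feed_item["author"],
        "title": feed_item["title"],
        "assignment": feed_item["category"],
        "date": feed_item["published"],
    })

rss_adapter({"author": "student_jane", "title": "My Homework Post",
             "category": "week-one-assignment", "published": "2013-10-15"})
```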
In the end, SUNY decided that it did not have the risk tolerance to build a new platform. And to be honest, it would have been challenging to pull off with the technology of the time. But eight years later, I still think the vision was a good one. And now I think that Caliper has a chance of fulfilling it.
Have you strapped on those aviator goggles yet? OK. Here we go.

Triple Your Pleasure, Triple Your Fun
Of course, we weren’t the first people to think about creating a web of data that could link disparate applications. Big shots like Tim Berners-Lee had been talking about a “semantic web,” where sites could talk to each other and automagically interoperate, since the late 1990s. One of the foundational technologies in the semantic web effort was something called the Resource Description Framework, or RDF. And a core idea in RDF was something called a triple. A triple can really be boiled down to a plain English sentence structure: subject, phrase that characterizes a relationship, object. Here are a few examples:
- Joe | is the author of | http://www.themusicalfruit.com/
- http://www.themusicalfruit.com/ | is about | legumes
Note that there is a kind of transitive property possible here. If Joe is the author of TheMusicalFruit.com and TheMusicalFruit.com is a website about legumes, then we can infer that Joe is the author of a website about legumes. Theoretically, you can create long chains and complex clusters of these inferences in something that a mathematician might call a graph.
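If it helps to see the mechanics, here is a tiny sketch of triples as plain tuples, with a one-step inference over them. This is just an illustration of the idea, not RDF syntax or Caliper’s actual format.

```python
# Minimal sketch of triples and a one-step inference; illustrative only.
triples = [
    ("Joe", "is the author of", "http://www.themusicalfruit.com/"),
    ("http://www.themusicalfruit.com/", "is about", "legumes"),
]

def objects_of(subject, predicate):
    """Return every object linked to a subject by the given relationship."""
    return [o for s, p, o in triples if s == subject and p == predicate]

# Chain two relationships: Joe -> authored site -> site topic.
for site in objects_of("Joe", "is the author of"):
    for topic in objects_of(site, "is about"):
        print(f"Joe is the author of a website about {topic}")
```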
Let’s look at some triples that are relevant to the student blog example above:
- Ann | is a student in | Intro to Linguistics
- “Whorf hypothesis assignment” | is an assignment in | Intro to Linguistics
- “Beam Me Up, Whorf” | is a blog post by | Ann
- “Beam Me Up, Whorf” | is a homework submission for | “Whorf hypothesis assignment”
Using triples like these, you could accomplish a lot of what I described in that use case in 2005. You could see, for example, that “Beam Me Up, Whorf” is Ann’s submission for the Intro to Linguistics assignment called “Whorf hypothesis assignment.”
Ultimately, RDF never took off, for a variety of reasons. But triples are extremely useful and have been employed in a variety of other technologies, including both IMS’s Caliper and ADL’s Tin Can API (from the stewards of SCORM). They provide a grammar for the semantic web.

Grammar Isn’t Everything
The great thing about triples is that they can express just about any relationship. The bad thing about triples is that they can express just about any relationship. Consider the following triple:
- Fribble | is a frogo of | Framizan.
This is a grammatically valid triple, but it tells us nothing, because we don’t know what the words mean. OK, it’s true, I cheated by using made up words. Let’s see if we can make the situation clearer by adding some English:
- Fribble | is a parent of | Framizan.
Huh. Not much better. Are Fribble and Framizan people? Are they subfolders in a file directory? It turns out that human languages aren’t terribly precise. And if you want disparate computer programs to be able to understand each other without constant human intervention, then you need to be very precise. In addition to a grammar, you need a lexicon. Or, in computer terms, you need an entity model. You need to tell the computer things like this:
- There is an entity type that we call a “person.”
- A person has a first name and a last name.
- A person (in your world) has a unique ID.
- A person (in your world) will always have an email address.
- A person (in your world) might have a phone number.
With this, we can have the computer say something like:
- The person entity with ID “Ann” | has relationship “is a student in” | to the class entity with ID “Intro to Linguistics”
That may sound clunky to you and me, but it’s poetry to a machine.
This is essentially what Caliper adds to a triple structure. It adds a collection of “entities,” or things, that all interoperating computers agree have certain properties. And that, my friends, is what makes time travel work. With both a grammar and a lexicon, learning applications can start talking to each other.
And it turns out that the IMS had a bunch of entity definitions lying around already from their previous standards work. For example, the LIS standard (which is designed to integrate LMSs with SISs) has definitions for a person, a course section, and an outcome. What will be interesting to see is the development of new entities for learning activity types (beyond those that are already specified in QTI). For example, what would we want to know about a reading? A video? A simulation? A note-taking app? We could generate a list of such things pretty easily, and for each thing that we want to know about, we could generate a short list of what we want to know about it. That short list would be the core of the entity model for the thing, and it would be the information that developers would have to expose in their apps in order to be able to plug into Caliper. The downside of adding an entity model is that it’s more work for developers to implement, increasing the chances that any particular developer won’t do it. The upside is more assurance of interoperability and a richer information flow.
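Here is a rough sketch of what an entity model buys you, using Python classes as stand-ins. These definitions are my own illustration, not the actual Caliper entity types, which are richer than this.

```python
# Illustrative sketch of an entity model; these classes are hypothetical
# stand-ins, not the actual Caliper entity definitions.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Person:
    id: str                      # unique ID, required in this toy model
    first_name: str
    last_name: str
    email: str                   # always present in this toy model
    phone: Optional[str] = None  # optional

@dataclass
class CourseSection:
    id: str
    title: str

@dataclass
class Triple:
    subject: object
    predicate: str
    object: object

ann = Person(id="ann", first_name="Ann", last_name="Example",
             email="ann@example.edu")
intro_ling = CourseSection(id="ling-101", title="Intro to Linguistics")

# "The person entity with ID 'Ann' has relationship 'is a student in'
#  to the class entity with ID 'Intro to Linguistics'."
enrollment = Triple(ann, "is a student in", intro_ling)
print(f"{enrollment.subject.first_name} {enrollment.predicate} "
      f"{enrollment.object.title}")
```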
So again, to plug into the Caliper LMOS, an app would have to be able to read and/or write some subset of these entities and understand the triple relationships. Apps would also have to establish a communication channel. Luckily, LTI essentially already does that. The LTI standard was always intended as a kind of wrapper. It provides single sign-on between learning apps and then enables the two apps to talk to each other. Right now, the standards-based communication over LTI is pretty limited. But Caliper would open up a whole new world of possible communications.

Analytics
It’s pretty easy to see why the IMS has latched onto Caliper as an analytics interoperability standard. The graph lets you crawl relationships among pieces of data to get to the relationships that you want. Assuming that entities have reasonable metadata (like creation dates, for example), we can ask a bunch of questions in the blogging example above:
- Has Ann submitted all her blog post homework assignments for Intro to Linguistics?
- How close to an assignment deadline does Ann typically complete her blog post assignments?
- How well, on average, does Ann do on her blog post assignments?
- Does Ann have a pattern of completion or performance in her blog posts across all of her classes?
- What is the class average on blog post homework assignments?
None of this sounds particularly earth-shattering until you remember that all of this data is being gathered from the students’ own weblogs. This is not software provided by the LMS vendor. It may not even be software hosted or contracted by the university. This could be from the students’ own blogs, with a plugin installed (which in WordPress, at least, is super simple to do).
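To see how questions like these become simple traversals of the graph, here is a toy example in plain Python over a list of triples. A real Caliper consumer would work against the standard’s actual entity and event formats; the dates here are made up.

```python
# Toy graph of triples for the blogging example; dates are invented.
from datetime import date

triples = [
    ("Ann", "is a student in", "Intro to Linguistics"),
    ("Whorf hypothesis assignment", "is an assignment in", "Intro to Linguistics"),
    ("Beam Me Up, Whorf", "is a blog post by", "Ann"),
    ("Beam Me Up, Whorf", "is a homework submission for", "Whorf hypothesis assignment"),
    ("Beam Me Up, Whorf", "was submitted on", "2013-10-14"),
    ("Whorf hypothesis assignment", "is due on", "2013-10-15"),
]

def one(subject, predicate):
    """Return the first object linked to a subject by the given relationship."""
    return next(o for s, p, o in triples if s == subject and p == predicate)

# "Has Ann submitted her blog post homework for this assignment?"
submissions = [s for s, p, o in triples
               if p == "is a homework submission for"
               and o == "Whorf hypothesis assignment"
               and one(s, "is a blog post by") == "Ann"]
print("Submitted:", bool(submissions))

# "How close to the deadline did Ann submit?"
due = date.fromisoformat(one("Whorf hypothesis assignment", "is due on"))
posted = date.fromisoformat(one(submissions[0], "was submitted on"))
print("Days before deadline:", (due - posted).days)
```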
Let’s see what happens if we extend the learning graph one step further. Suppose we have a relationship in the triple that we call “is a response to.” If you write a post and I write a comment on your post, then my comment “is a response to” your post. If you write a post and I write a post on my own blog referring to yours, my blog post also “is a response to” your blog post. (We might also create an entity called “comment,” so that we can distinguish between a response that is a blog post on another site and a response that is a comment on the same site: “My comment | is a response to | your blog post.”) Interestingly, most blogs have a feature called “pingbacks,” which detect when other blogs have pointed to your blog post by embedding a URL. Suppose that our Caliper WordPress plugin translates that pingback into a triple that can be read by any Caliper-compliant system. Now we can start asking questions like the following:
- How many responses did Ann’s blog posts for Intro to Linguistics generate?
- Which students in the class write the posts that generate the most responses?
- What percentage of responses occurred as comments on the same page as the blog posts, and what percentage were posts on the commenters’ own sites?
- Which classes have the most activity of students responding to other students?
- What is the correlation between levels of student responses to each other and outcomes?
- What is the moment in the course when students started responding more to each other and less to the teacher?
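Here is a hypothetical sketch of what that pingback translation might look like, and how the resulting triples could be queried. Again, this is illustrative only, not an actual Caliper plugin or payload format.

```python
# Hypothetical sketch of translating a WordPress pingback into an
# "is a response to" triple; not an actual Caliper plugin or payload format.

def pingback_to_triple(pingback):
    """A pingback tells a blog that some other post linked to it."""
    return (pingback["source_post"], "is a response to", pingback["target_post"])

triples = [
    ("Beam Me Up, Whorf", "is a blog post by", "Ann"),
    pingback_to_triple({"source_post": "Re: Beam Me Up, Whorf",
                        "target_post": "Beam Me Up, Whorf"}),
    ("Nice post, but...", "is a response to", "Beam Me Up, Whorf"),  # a comment
]

# "How many responses did Ann's post generate?"
responses = [s for s, p, o in triples
             if p == "is a response to" and o == "Beam Me Up, Whorf"]
print(len(responses), "responses")
```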
As the size of the graph grows, the number of questions you can answer grows exponentially.

From Learning Management Operating System to Learning Cloud
But let’s suppose that you want to do more than gather data on students’ use of blogs in a class that is otherwise managed in a central LMS. Let’s suppose that you want the students’ blogs to be the LMS (for the most part). Suppose you want to build a course like ds106, where all student work happens out on their own blogs, and the hub of the course is really just an aggregation point. Right now, ds106 accomplishes this goal through a Frankenstein’s monster of WordPress plugins and custom hacks. I don’t mean to denigrate the technical work that they’ve done. To the contrary, I’m astonished by what they’ve been able to accomplish with chewing gum and duct tape. Caliper could potentially provide them with better tools for richer integration in a more elegant way. It could, for example, create a visualization of the conversation across the various blogs, and make that visualization clickable so that students could see the thread and then jump directly to the posts involved. In a real way, those disparate blogs would function as one distributed piece of software. Each piece would be independent. Students could use the blogging platform they want on the host that they want. But the data would flow freely, easily aggregatable, sortable, visualizable, and analyzable. Forget about the Learning Management Operating System. The future is the Learning Cloud.
In our eLearn article, Patrick and I referred to a Flickr social markup of the Mérode Altarpiece created by an art history class taught by our friend and colleague Beth Harris. We wrote:
What is the learning object here? Or, to put it another way, what is the locus of educational value? Is it the picture itself? Is it the picture plus the comments of the students? Or is it both of these plus the action potential for students to continue to exchange ideas through the commenting system? A learning object-centric view of the world would place the emphasis on the content, ignoring the value of the ongoing educational dialog as something extraneous. But that view clearly doesn’t allow us to encapsulate the locus of educational value in this case. Sometimes people will try to fudge the difference by tacking the word “interactive” in front of “learning object.” This obscures the problem rather than solving it. “Object” is just a longer word for “thing.” It inherently focuses on artifacts rather than activities. It emphasizes content to be learned rather than the actions on the part of students that lead to learning.
To take a more familiar example, consider the spreadsheet. What is it that you share when you email a spreadsheet to a colleague? Is it the content, the interaction potential, or both? Are you simply sharing a “tabular data object,” or is the potential for the recipient to plug in new data and get new results an inextricable part of the thing we’re calling a “spreadsheet?” There is no one right answer to this question; it is entirely context-dependent. Sometimes what we mean by “spreadsheet” is a set of completed calculations and how they were derived. In this case, content is king. However, at other times a “spreadsheet” means a tool for plugging in new numbers to make calculations and run “what-if” scenarios. Sometimes its locus of value is as a “tabular data object,” sometimes it is as a “tabular data-processing application,” and sometimes it is as an inextricable fusion of the two.
So what is the distinction between a learning object and a learning application? What is the difference between the domain of content (and therefore content experts) and the domain of functionality (and therefore programming experts)? We contend that there is no clean separation of concerns. The world does not divide neatly between functionality packages that can be integrated as Blackboard Building Blocks or WebCT Powerlinks on the one hand, and self-contained content packages that can be tied up in a bow and listed in MERLOT on the other hand. The division between learning objects and learning environments is a false dichotomy. Students need both the functionality and the content—the verbs and the nouns—in order to have a coherent learning experience. They learn when they do things with information. They discuss paintings. They correlate news with its location in the world. They run financial scenarios in a business case study. Consequently, managing the learning content or managing the learning environment in isolation doesn’t get the job done. We need to manage learning affordances. We need to focus on providing faculty and students with a rich array of content-focused learning activities that they can organize to maximum benefit for each student’s learning needs.
That article was published in January 2006. One year later, on January 9th, 2007, Apple unveiled the first iPhone. We live in an appy world now. The LMS is not going away, but neither is it going to be the whole of the online learning experience anymore. It is one learning space among many now. What we need is a way to tie those spaces together into a coherent learning experience. Just because you have your Tuesday class session in the lecture hall and your Friday class session in the lab doesn’t mean that what happens in one is disconnected from what happens in the other. However diverse our learning spaces may be, we need a more unified learning experience. Caliper has the potential to provide that.
The post The IMS’s New “Caliper” Learning Analytics Interoperability Framework Is Deeply Interesting appeared first on e-Literate.
As you may know, Phil and I started a consulting practice in January. Throughout my years of blogging here, I have made it a practice to update readers on how changes in professional life may affect the writing that I (and now we) do on the blog. I don’t believe there is any such thing as “objective” analysis. Analysis is all about perspective, and perspective is derived from your point of view—where you stand, and so on. If we are going to be fortunate enough to continue earning your trust, then it’s important for you to understand our perspective so that you can anticipate and account for any limitations that are inherent in it. As it turns out, running a consulting business creates both some complex ethical challenges as well as some exciting opportunities for the blog. Now that Phil and I have had some time to understand the landscape better, this seems like a good time to give you an update.
One of the compliments about e-Literate that I treasure most is when people tell me that our writing is “fair.” Often this praise comes attached to some acknowledgement that maintaining “fairness” can be a challenge in certain employment circumstances. A common one I get is, “I can’t believe how you managed to stay so fair and independent when you were working for Oracle [or Cengage].” The truth is that maintaining some distance in those situations wasn’t so difficult, in part because in both cases I had great managers who protected me from internal pressures. Also, there was only a subset of issues I wrote about that Oracle or Cengage even cared about. It was fairly easy to anticipate what the conflicts of interest might be and how I would have to deal with them. Being a consultant is much more complex. Just about any university or company that we write about could be a future client. Some of them may be past or present clients. This is further complicated by the fact that we are also now categorized as analysts by many vendors and therefore receive a kind of courtship from them. For example, when I worked for Oracle, I could not accept an offer for somebody to pay for my travel. Either Oracle paid or I didn’t go. But it’s increasingly common practice for companies to pay for analysts’ expenses to visit conferences, and now that Larry Ellison isn’t writing the checks, that travel reimbursement can mean the difference between us going and not going.
Phil and I end up talking about ethics a lot, both in our blogging and in our consulting. We’re still learning as we go, but here are a few of the rules that we apply to handle conflicts of interest:
- When we first engage with clients, we let them know that we will not blog on e-Literate about the area that we are consulting on with them, either during or for some period after. (We may, in the future, blog about it on the MindWires site, where our relationship to the client and the work are less murky and where people are explicitly coming to learn about what we do commercially.)
- If we blog about a current customer—not the area that we are consulting on, but maybe some other aspect of the company—we disclose in the post that they are a client of MindWires Consulting.
- If a company pays for our expenses to go to their event, we do not bother to disclose that in any blog post, but we will disclose if we are paid for our time or paid speaker fees.
But the bottom line is that there is no set of rules that will guarantee freedom from conflict of interest, or even reliable guidance on when to disclose. For example, what if we’re blogging about a university that isn’t a client now but was recently? Or a company that we’re talking to about potentially becoming a client? At the end of the day, Phil and I are going to have to rely on our own judgment on a case-by-case basis as the best tool we have for staying transparent and fair. You deserve to know that as you read our work and decide for yourselves how to weigh and filter our analysis.
As always, we welcome your feedback, now and as we go forward.
Update: Mike has written another post clarifying the intuitions behind his math.
The spectacular Mike Caulfield casts a skeptical eye on the Course Signals data:
Only a portion of Purdue’s classes are Course Signals classes, so the chance any course a freshman takes is a Course Signals course can be expressed as a percentage, say 25%. In an overly dramatic simplification of this model, a freshman who takes four classes the first semester and drops out has about a 16% chance of having taken two Course Signals courses (as always, beware my math here, but I think I’m right). Meanwhile they have a 74% chance of having taken 1 or fewer, and a 42% chance of having taken exactly one.
What about a student who does *not* drop out first semester, and takes a full load of five courses each semester? Well, the chance of that student having two or more Course Signals courses is 75%. That’s right — just by taking a full load of classes and not dropping out first semester you’re likely to be tagged as a CS 2+ student.
In other words, each class you take is like an additional coin flip. A lot of what Course Signals “analysis” is measuring is how many classes students are taking.
Are there predictions this model makes that we can test? Absolutely. As we saw in the above example, at a 25% CS adoption rate, the median dropout has a 42% chance of having taken exactly one CS course. So it’s quite normal for a dropout to have had a CS course. But early on in the program the adoption rate would have been much lower. What are the odds of a first semester dropout having a CS course in those early pilots? For the sake of argument let’s say adoption at that point was 5%. In that case, the chance our 4-course-semester dropout would have exactly one CS course drops from 42% to 17%. In other words, as adoption grows, having had one course in CS will cease to be a useful predictor of first to second-year persistence.
Is that what we see? Assuming adoption grew between 2007 and 2009, that’s *exactly* what we see.
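For readers who want to check the arithmetic, Mike’s percentages fall straight out of a binomial calculation. Here is a quick sketch, assuming each course is independently a Course Signals course at a fixed adoption rate:

```python
# Sketch of the binomial arithmetic behind the quoted percentages; assumes
# each course independently has probability p of being a Course Signals course.
from math import comb

def prob_exactly(k, n, p):
    return comb(n, k) * p**k * (1 - p)**(n - k)

def prob_at_least(k, n, p):
    return sum(prob_exactly(i, n, p) for i in range(k, n + 1))

# Freshman dropout with 4 courses at 25% adoption:
print(round(prob_exactly(1, 4, 0.25), 2))   # ~0.42 chance of exactly one CS course
# Persister with 10 courses over two semesters:
print(round(prob_at_least(2, 10, 0.25), 2)) # ~0.76 chance of two or more CS courses
# At 5% adoption in the early pilots, exactly one CS course is much rarer:
print(round(prob_exactly(1, 4, 0.05), 2))   # ~0.17
```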
I’d like to see somebody at Purdue (or Ellucian) respond to the questions that Mike raises. Matt Pistilli, are you listening?
ACCJC, the accrediting commission behind the City College of San Francisco crisis, issued a warning to Honolulu Community College in February of this year, with a report required by October 15. As described in Hawai’i News Now:
Honolulu Community College has been placed on warning accreditation status by the Accrediting Commission for Community and Junior Colleges, the only of the University of Hawaii’s ten campuses to get such a warning.
The accrediting panel gave the 4,400-student campus the warning after an evaluation visit to the Kalihi school last fall.
What is interesting in this action is that one of the primary drivers of the warning was HCC’s lack of evaluation of the effectiveness of their online courses versus the comparable face-to-face courses.
More than 100 HCC faculty members teach courses online and that’s where the accreditation panel leveled its most serious criticism.
“The college should compare the instructional quality of face-to-face and distance education courses and develop a strategic plan for distance education,” the accrediting panel wrote.
There has been a lack of publicly-available information across higher education documenting the relative results of online courses and traditional courses, and this move by the accrediting commission could have a big impact. For its part, HCC will likely have this information in its report next month. But I would expect other schools, especially those accredited by ACCJC (mostly community and junior colleges on the west coast), to now be highly motivated to collect and report on similar data.
From the accreditation report on HCC:
The team recommends that the college develop a formal assessment process in order to evaluate the effectiveness of its Distance Education program in meeting the institutional mission. The process should include a systematic evaluation, analysis, communication, and improvement of the program, including assessment of how well each online course is satisfying its learning outcomes, support for staff development, and technical assistance for faculty. [snip]
The previous visiting team recommended that the college develop a formal assessment process for distance education courses. There are at least two assessment reports based on student surveys, suggesting the college pays attention to student learning outcomes and satisfaction with various services and programs. However, the college was unable to provide evidence on comparative successful completion data for online learning versus face-to-face classes (no data are cited in the Self Evaluation Report on disaggregated success rates, and no data either on the DE Assessment Web site). [emphasis added]
I have previously criticized accrediting commissions for their lack of transparency. A welcome aspect of this news is that ACCJC is now encouraging schools to publicly share their accrediting information – the full HCC report is here, and the cover letter includes this blurb:
Please note that in response to public interest in disclosure, the Commission now requires institutions to post accreditation information on a page no farther than one click from the institution’s home page.
The post Lack of online course evaluation leads to accreditation warning appeared first on e-Literate.
In two apparently unrelated announcements, MIT and Wharton both said they are moving beyond individual courses and putting significant parts of their curricula onto MOOC platforms, in both cases with identity verification. MIT is putting several undergraduate sequences online through MITx (its implementation of edX), while the Wharton business school is putting a “foundation series” of first-year courses online through Coursera.
The Massachusetts Institute of Technology will this fall package some of its online courses into more cohesive sequences, just as edX prepares to roll out certificates of completion using identity verification. Seen together, the two announcements may provide a glimpse at what the future holds for the massive open online course provider.
The “XSeries” sequences add a new layer of structure to MITx, the institution’s section of the edX platform. The first of seven courses in the Foundations of Computer Science XSeries will be offered this fall, with one or more new courses being rolled out each semester until the fall of 2015. The Supply Chain Management XSeries, consisting of three courses, will begin in the fall of 2014. The two sequences will target undergraduates and working professionals, respectively. [snip]
EdX allows students either to audit courses or complete assignments to earn a certificate of completion. Right now, the gateway to earning a certificate is guarded only by an honor code. Beginning next spring, instructors can choose to implement an identity verification process that prompts students to present government-issued identification at specific milestones, like a midterm or final exam. Their identities are then verified by Software Secure, which offers online proctoring services.
Apart from the security benefits, edX officials said the verified certificates are intended for students who enroll in online courses to further their careers.
And the parallel Wharton news from Bloomberg Businessweek:
Getting a Wharton MBA involves taking off from work for two years, moving to Philadelphia, and spending about $200,000 on tuition and expenses. Now, with the addition of three new courses on the online learning platform Coursera, you can get much of the course content for free.
While you won’t get the full Wharton on-campus experience—or an internship, career services, or alumni network, for that matter—the new courses in financial accounting, marketing, and corporate finance duplicate much of what you would learn during your first year at the elite business school, says Don Huesman, managing director of the innovation group at Wharton.
A fourth course in operations management that’s been offered since September rounds out the “foundation series.” Along with five existing electives, which include courses on sports business and health care, the new offerings make it possible to learn much of what students in Wharton’s full-time MBA program learn, and from the same professors. All nine courses are massive open online courses, or MOOCs, expected to attract students from around the world. [snip]
Students in all four courses are eligible, for a $49 fee, to receive a verified electronic certificate indicating that they’ve completed the course requirements.
Huesman says Wharton has no plans to accept the certificates for course credit should students subsequently enroll at Wharton, adding that “there’s a very different experience that happens in a two-year immersion in a community of scholars that culminates in a degree.” But he says what students learn in the online classes can be used to “test out” of required courses just as those with knowledge of the subject matter can do now.
In both cases the schools go out of their way to emphasize that the MOOC curricula do not replace the immersive experience of face-to-face courses, but that professors are experimenting with flipped classroom approaches using the MOOC materials.
Update: eCampusNews has an article on the MIT announcement, focusing on the usage of webcams for the identity verification.
Bill Flook, who covers the DC technology scene for Business Journals, just interviewed Blackboard CEO Jay Bhatt about last week’s layoff. The full article can be found here. From a quick read, it looks like Blackboard is executing on two key priorities:
- Trimming the fat caused by years of acquisitions and redundant operations; and
- Completing the integration of those acquisitions by centralizing core functions, particularly under new management.
From the article:
Blackboard Inc. carried out a round of layoffs last week as part of a broader reorganization by CEO Jay Bhatt, the latest in a string of actions aimed at revitalizing the 16-year-old ed-tech behemoth.
Bhatt, in an interview Tuesday evening, confirmed the job cuts, which he described as “a very small action we took to take some costs out of the business, primarily on things that don’t allow us to get where we need to go.” He declined to specify the number of layoffs.
As for the priority of trimming the fat, it is now fairly clear that Blackboard is not set to divest any major product lines, but rather will follow a path of centralization. The new management team is a key part of the reorganization plans.
The reorganization is, in many ways, the culmination of what Bhatt has been talking about for the past few months. Blackboard is a quilt of a company, the product of years of stitching together lines of business through acquisitions, some integrated more seamlessly than others.
Bhatt’s recent mission has been consolidation, which began with the realignment of product management and product development under two executives, Mark Strassman and Gary Lang, both former colleagues of Bhatt at 3D design firm Autodesk.
There’s more there in the full article.
Much of the reorganization, including the layoffs, seems to reinforce the plans that Michael and I recently described based on interviews with Blackboard management. What is not clear, however, is whether there will be future layoffs and how the multiple cuts are affecting company morale. Corporate turnarounds can be quite painful, and it is difficult during these transitions to avoid losing key people who were not part of the layoffs.
It will also be interesting to see if Blackboard’s renewed emphasis on the core learning management system, and de-emphasis of the multiple semi-independent product lines, will be part of their core message at the EDUCAUSE conference next month.
Robert McGuire wrote an article for Campus Technology, Building a Sense of Community in MOOCs, that touches on an important topic – is the centralized discussion forum a barrier to student engagement?
But more students can also mean more isolation within the crowd. “Online classes can be really lonely places for students if they don’t feel like there’s a community,” notes Maria Andersen, director of learning and research at Instructure, which runs Canvas Network, an open repository where participating schools can deliver their own MOOCs.
Ironically, the biggest obstacle preventing MOOC students from forming relationships is the feature most relied on to encourage them. Discussion forums are the number one complaint by readers and contributors of MOOC News and Reviews, an online publication devoted to critiquing individual MOOC courses and the evolving MOOC landscape. Most MOOC discussion forums have dozens of indistinguishable threads and offer no way to link between related topics or to other discussions outside the platform. Often, they can’t easily be sorted by topic, keyword, or author. As a result, conversations have little chance of picking up steam, and community is more often stifled than encouraged.
There are several studies that appear to show that MOOC discussion forums have few students participating and that the forums are dominated by a small number of students.
However, we know that, on average, only 3% of all students participated in the discussion forum. Figure 10 below illustrates the small number of posts the vast majority of students actually made. But we know that certificate earners used the forum at a much higher rate than other students: 27.7% asked a question, 40.6% answered a question, and 36% made a comment. In total, 52% of the certificate earners were active on the forum. We are analyzing the number of comments individual students posted to see if it is predictive of that individual’s level of achievement or persistence.
More recently, there is a study from Stanford looking at discussion forum activity across 23 separate MOOCs on the Coursera platform. Across all registered students, no MOOC had more than 10% of students posting on a forum, and most were below 5%. Note that they measured students having only one forum post (typically the introduction forum) and those with more than one.
The team then excluded all students getting less than 10% as a grade, which removed 86% of registered students. The rate of students posting to the forums rose significantly.
The Stanford study also found a reverse correlation between the course size and the percentage of students posting.
Several people whom we discussed this data with asked whether there might be an inverse relationship between the size of a class and the % of students who post. For instance, if students mainly use the forum to answer questions and check for existing answers before posting, they may find that in a larger class there’s less need to post, since their questions are already addressed.
Excluding two outlier classes, and looking again only at students who scored at least 10% in a class, and only at students who posted more than once (so that introductions are excluded), we see that there’s only a very small inverse correlation between size of class and % of posters (correlation value is -0.4):
A third source on the low engagement of MOOC students in discussion forums comes from the University of Edinburgh and their study of 6 MOOCs. Here we see that students are far less likely to engage in discussion forums than in videos or assessments.
What we are seeing here matches the lessons from the early cMOOCs, as described by Stephen Downes.
It’s interesting that this article [a post based on the Campus Tech article] addresses a lesson we learned in the first few weeks of our MOOC in 2008 – the centralized discussion forum is not a good tool for a course of thousands of people.
Indeed, Robert McGuire also commented on this same post with similar conclusions:
I’m surprised at how many classes rely uncritically on discussion forums when ten minutes of experience reveals how inadequate they can be, at least without more thoughtful management of them.
While MOOCs are still quite young, I think it is becoming quite clear that certain elements can scale quite effectively (videos, quizzes), but that centralized discussion forums do not scale. For MOOCs to be more effective, we need to see different approaches to student engagement.
Outlier at Duke
I should mention that the Duke report on the Bioelectricity MOOC could be an outlier, given students’ positive reviews of its discussion forums. This data is from a voluntary survey at the end of the course, so there is obviously a self-selection bias, but it is worth noting these results.
In addition to overall course satisfaction, students reported that they were satisfied with the forums and the instructor (1=strongly disagree to 5=strongly agree):
• Forum discussions with my peers enhanced my understanding of the material (m=4.16)
• The forums were a safe, supportive place to post (m=4.19)
• The organization of the forum was conducive to communicating with my peers (m=4.07)
• The instructor enhanced my understanding of the material (m=4.38)
• I would take another course from this instructor (m=4.25)
Updates 9/17: Corrected Campus Technology reference.
Also, here is related information from Vanderbilt based on their MOOC reporting (this snippet from their first MOOC):
Of those 23,313 active students, 20,933 of them (90%) watched at least one lecture video, 5,702 (24%) took at least one quiz, 2,072 (9%) submitted at least one assignment for peer grading, and 942 (4%) posted at least once in the discussion forums. [snip]
Across their other three MOOCs, the forum participation of active students was 9%, 22% and 6%. Some relevant commentary from Derek Bruff:
Why so much participation in the LSIO forums [the one with 22%] compared with the other two courses? David Owens, LSIO instructor, encouraged forum participation, building it into the completion criteria and seeding the forums each week with a question that permitted multiple perspectives.
Sometime guest blogger and friend of e-Literate Elijah Mayfield has another great post up on using machine learning tools in the service of improving student writing over at his company blog. However you may feel about the technology, the exploration that he’s doing raises some important questions about what good feedback on writing is. This aspect of educational technology—the fact that it forces us to examine our tacit knowledge of teaching and make it explicit—is one of the things that I value most about the field.
I’ve been thinking a little more this morning about the language used by the researchers in the SJSU Udacity report. They focus a lot on student “effort.” But it’s also pretty common in education to talk about “engagement.” From a technical perspective, the researchers chose the better word. “Effort” is meant to be an observable behavior, e.g., how many minutes students put into watching videos or how many homework problems they solved. “Engagement” is a non-observable attitude that might be a cause for differences in effort that we observe between students. But the connotations of these words tend to encourage different sorts of questions. When we talk about a problem with student effort, we tend to ask how we can get students to do more work. When we talk about a problem with student engagement, we tend to ask how we can get students to want to do more work. The former might lead us to solutions such as student reminders and alerts when they are falling behind or changes in schedule to accommodate students with jobs, while the latter might lead to ideas about increased interactivity or changes to the content.
Just a thought.
As Phil noted in his analysis of the SJSU report, one of the main messages of the report seems to be that some of what we already know about performance and critical success factors for more traditional online courses also seems to apply to xMOOCs. But how good is the ed tech industry at taking advantage of what we already know?
Not very good, as far as I can tell.
One of the points that the report writers emphasize is that—no surprise—student effort is by far the biggest predictor of student success:
The primary conclusion from the model, in terms of importance to passing the course, is that measures of student effort eclipse all other variables examined in the study, including demographic descriptions of the students, course subject matter and student use of support services. Although support services may be important, they are overshadowed in the current models by students’ degree of effort devoted to their courses. This overall finding may indicate that accountable activity by students—problem sets for example—may be a key ingredient of student success in this environment.
We also know that many of the students in the SJSU MOOCs were at-risk students. They were traditional students who had failed the course for the first time, high school students in an economically disadvantaged neighborhood, and non-traditional students. What do we know about at-risk students? We know that they often need help, and we also know that they are not good at knowing when to get help. They aren’t good at knowing when they are not doing enough and they are also not good at knowing when they are underperforming and are in danger of failing.
We certainly see signs of the latter problem in the SJSU report:
The statistical model pointed to the critical importance of effort. In Survey 3 students indicated that they recognized the need for a sustained effort and the danger of falling behind. In fact, when asked what they would change if starting the semester over knowing what they know now, one of the top choices was to “make sure I don’t fall behind.” Almost two-thirds of survey respondents (65%) in Survey 3 pointed to this change, including 82% of matriculated students in Math 6L and 75% of matriculated students in Math 8. In Stat 95, where students were less likely to fall seriously behind because of stricter adherence to deadlines (see below), 60% of both matriculated and non-matriculated students identified “not falling behind” as a change they would make.
So students in these classes had trouble staying on top of the work, and the evidence (from the statistics class) is that at least part of the problem is an inability to self-regulate rather than pure lack of time. We also see evidence that students in these courses were not good at seeking help:
[O]ne of the top-rated changes students identified in Survey 3 was “more help with course content”. In this area, there was almost no difference between survey responses from matriculated and non-matriculated students with 80% of respondents from both groups rating “more help with content” as a “very important” or “important” change they would like to see Udacity and SJSU make.
This finding, when corroborated with input from faculty (see below) points to an area that may require additional attention. Because while students did not use the opportunity to video-conference with instructors, many students e-mailed their professor, but not with questions about content. Instead, student email communications focused, across the three courses, on questions related to course requirements, assignments and other technical or process-related issues.
Some of the problem could be attributed to students’ lack of awareness that help is available, but you can’t say that for the students who actually e-mailed their professors and still failed to ask for help with content. That is a failure of help-seeking behavior.
As I have written about here before, this is exactly the problem that Purdue’s (and now Ellucian’s) Course Signals retention early warning system was designed to address. The whole thing is designed to prod students who are falling behind to get help. Basically, if students are falling behind (or failing), they get increasingly insistent messages pushing them to get help. At its heart, it is really that simple. And the results that Purdue has gotten are impressive. For example, they have been able to drive much higher utilization of their biology resource center:
Does an increase in help-seeking behavior yield greater student success? The answer is a resounding yes:
They are seeing solid double-digit improvements in most cases. What is most impressive to me, though, is that these results persist over time, even after the students stop using the system:
Students who took just one Signals-based course are 17% more likely to still be in school in their fourth year of college than those who didn’t. Students who took two or more Signals-based courses are a whopping 24% more likely to be in school in their fourth year than those who didn’t have any. In other words, the technology actually teaches the students skills that are vital to their success. Once they learn those skills, they no longer need the tool in order to succeed. So there is no mystery about what at-risk students need or how technology can help provide it to them. Purdue’s first presentation on Course Signals was in 2006.
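For readers who like to see the mechanics spelled out, here is a minimal sketch of the kind of escalating, rule-based nudge described above. To be clear, the thresholds, field names, and message tiers are my own illustrative assumptions, not Purdue’s actual model, which draws on many more signals.

```python
# A minimal, hypothetical sketch of an escalating early-warning rule in the
# spirit of Course Signals. Thresholds, field names, and message tiers are
# illustrative assumptions, not Purdue's actual (more sophisticated) model.

from dataclasses import dataclass
from typing import Optional

@dataclass
class StudentStatus:
    name: str
    current_grade: float        # running course grade, 0-100
    missed_problem_sets: int    # count of missed accountable activities
    weeks_since_last_login: int

def risk_level(s: StudentStatus) -> str:
    """Map simple effort and performance signals to a traffic-light risk level."""
    if s.current_grade < 60 or s.missed_problem_sets >= 3:
        return "red"
    if s.current_grade < 70 or s.weeks_since_last_login >= 2:
        return "yellow"
    return "green"

def intervention_message(level: str) -> Optional[str]:
    """Escalate the nudge as risk grows; on-track students get no message."""
    messages = {
        "yellow": ("You are starting to fall behind. Here are this week's "
                   "tutoring and resource center hours."),
        "red": ("You are at serious risk of failing this course. Please contact "
                "your instructor and visit the resource center this week."),
    }
    return messages.get(level)

# Example run with made-up students.
for s in [StudentStatus("A", 82, 0, 0),
          StudentStatus("B", 68, 1, 2),
          StudentStatus("C", 55, 4, 3)]:
    msg = intervention_message(risk_level(s))
    if msg:
        print(f"To student {s.name}: {msg}")
```

Again, the intervention logic can be this simple in spirit; the hard part, and the part Purdue got right, is delivering the message directly and persistently to the student rather than burying it in a report.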
This is why I expressed disappointment in my posts about the analytics products from Desire2Learn and Blackboard when, despite obviously following in the footsteps of Course Signals, both products focused on dashboards for teachers. We know that direct and timely intervention is what at-risk students need. We know that this intervention can have substantial and lasting effects. The Purdue model works. I’m sure that either company could sell lots of product if they could credibly claim that they could increase their customers’ four-year retention rates by double digits. But in order to do that, they need to follow Purdue’s example and support direct feedback to the students. This approach, by the way, fits rather well with the MOOC model, where direct instructor intervention is far less likely (or even impossible) on a per-student basis. But Purdue has shown that it also works quite well in a traditional course model and that faculty can get direct benefit and insight from this approach, even if they are not the primary audience.
Can we expect better from the MOOC providers? The early indications are not good. The company and the university, perhaps egged on by the Gates Foundation, rushed courses out for at-risk students with a clearly inadequate focus on getting students to ask for and find the support that they need despite the fact that it is a huge known risk. Anybody with any experience teaching these populations would cite this challenge as a critical success factor—indeed, as the critical success factor. “Fail fast” may be a great mantra for a software startup, but it’s not such a good approach to teaching those students who need our help the most. (Mike Caulfield has a very timely and interesting post on just this subject.) “Measure twice and cut once” might be a better idea. Or even better than that is Steve Jobs’ favorite quote by Igor Stravinsky: “Good artists copy; great artists steal.” Of course, in order to steal like great artists do, you have to start by recognizing that you yourself are not inventing art.
The post What Blackboard, Desire2Learn, and Udacity Should Learn from SJSU appeared first on e-Literate.
Reading the SJSU research report (download the actual report here), the item that really hits me is that however different the scaling model is for MOOCs, they are still online courses and have similar success factors. I am not trying to minimize the value of the report with the title of this post, because there is real value in letting objective data lead you to conclusions, even if hunches and assumptions would have led to similar results.
First, however, it would be useful to understand the data and key terms – AOLE stands for Augmented Online Learning Environment, which refers to the additional support structures and for-credit status for these MOOCs.
Out of a total of 274 students enrolled in AOLE courses, 249 students remained in the sample after data cleaning, which included removal of students enrolled in multiple courses and of those with no course activity. In addition, 36 students were removed who withdrew from the course or received a final grade of Incomplete. This left 213 students for the deeper analysis. This data set will be referenced throughout this report as a “research data file”.
So this data does not include no-shows, withdrawals, or incompletes, making it more akin to analyzing a traditional college course after the add / drop period.
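To make that cleaning step concrete, here is a minimal sketch of how such a research data file might be produced with pandas. The column names are hypothetical stand-ins, since the report does not publish its data schema.

```python
# Hypothetical sketch of the report's data-cleaning step. Column names
# ("courses_enrolled", "activity_events", "final_grade") are illustrative only.

import pandas as pd

def build_research_file(enrollments: pd.DataFrame) -> pd.DataFrame:
    # Remove students enrolled in multiple AOLE courses and those with no activity.
    cleaned = enrollments[
        (enrollments["courses_enrolled"] == 1)
        & (enrollments["activity_events"] > 0)
    ]
    # Then drop withdrawals and incompletes, leaving the set used for the
    # deeper pass/fail analysis (213 of the original 274 students in the report).
    return cleaned[~cleaned["final_grade"].isin(["W", "I"])]
```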
Table 6 shows the pass rate (getting a grade of C or above) broken down by course and by matriculated / non-matriculated status.
We can see that matriculated students performed much better as a group, with pass rates of 30%, 50% and 54% for each course. Non-matriculated students (45% of whom were high school students from an urban environment) had pass rates of 18%, 12% and 49%.
It is also worth noting that MATH 6L was a remedial math course, MATH 8 was college-level algebra and STAT 95 was elementary statistics. MATH 6L deserves special consideration:
Matriculated students who do not pass Math 6L during their first semester will be allowed to repeat it once. If they do not pass this course by the end of their first year, they will need to complete this course at a community college before they are eligible to enroll at SJSU. However, due to budget restrictions, MATH 6L has only been offered at SJSU in fall semesters since Fall 2009, so students who don’t pass Math 6L in the fall do not have the option of retaking it in the spring semester at SJSU. The Udacity-SJSU 6L course offered an alternative for students in this situation. All matriculated students in the course had failed Math 6L before.
The report studies much more than just completion and pass rates; it also examines student activities and tests for the “significance of associations between individual predictor variables and pass/fail”.
The primary conclusion from the model, in terms of importance to passing the course, is that measures of student effort eclipse all other variables examined in the study, including demographic descriptions of the students, course subject matter and student use of support services. Although support services may be important, they are overshadowed in the current models by students’ degree of effort devoted to their courses. This overall finding may indicate that accountable activity by students—problem sets for example—may be a key ingredient of student success in this environment. [snip]
Students who work harder do better in the courses.
Table 12 shows the five variables that are highly significant in their relationship to pass/fail across six different groupings of the students. Four of the variables are measures of effort, and the fifth measures use of support services. Their functional behaviors relative to pass/fail follow the table.
An interesting finding was that support usage was not a strong predictor of success, but that does not mean the issue was unimportant. The real issue was not the total amount of support students used, but whether they were aware that the support structures existed at all.
While the regression analysis did not find a positive relationship between use of online support and positive outcomes, this should not be interpreted to mean that online support cannot increase student engagement and success. As students, Udacity service providers and faculty members explained, several factors complicated students’ ability to fully use the support services, including their limited online experience, their lack of awareness that these services were available and the difficulties they experienced interacting with some aspects of the online platform. It is thus the advice of the research team that additional investigations be conducted into the role that online and other support can play in the delivery of AOLE courses once the initial technical and other complications have been addressed.
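For a rough feel of the kind of model behind these findings, the sketch below fits a logistic regression of pass/fail on a few effort and support measures. The variable names and synthetic data are invented purely for illustration; they are not the study’s actual variables, data, or model specification.

```python
# Illustrative logistic regression of pass/fail on effort and support measures.
# Feature names and synthetic data are assumptions for demonstration only.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 200

# Synthetic predictors: two effort measures plus one support-usage measure.
problem_sets = rng.integers(0, 30, n)       # problem sets submitted
video_minutes = rng.integers(0, 600, n)     # time spent on lecture videos
support_visits = rng.integers(0, 10, n)     # uses of online support

# Synthetic outcome in which effort dominates, echoing the report's finding.
logit = -4 + 0.25 * problem_sets + 0.004 * video_minutes + 0.05 * support_visits
passed = rng.random(n) < 1 / (1 + np.exp(-logit))

X = np.column_stack([problem_sets, video_minutes, support_visits])
model = LogisticRegression().fit(X, passed)

for name, coef in zip(["problem_sets", "video_minutes", "support_visits"],
                      model.coef_[0]):
    print(f"{name}: coefficient {coef:+.3f}")
```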
The big comparison that has some value is how the students within the MOOCs performed compared to historical rates for traditional face-to-face courses. Keep in mind, however, that MATH 6L has not been offered in spring semesters since 2009, so there is an argument that for these students the real comparison is a MOOC version of MATH 6L versus nothing, at least at SJSU. Table 7 summarizes this comparison.
The key measures for matriculated students are MOOC vs. face-to-face pass rates of 30% vs. 34 – 50% for MATH 6L, 50% vs. 52 – 73% for MATH 8 and 54% vs. 71 – 80% for STAT 95. The MOOC pilot led to pass rates below those of the face-to-face courses, but not dramatically lower rates.
So where does this leave us?
There is more information available, and I look forward to reading other analysis posts. As I mentioned earlier, however, what really strikes me is that many of the findings match what we already know about online courses in general.
- Remedial students have the toughest time self-regulating and performing well in an online environment. If these students are targeted for a program, there needs to be significant effort spent on support structures that proactively help students and don’t rely on their passive use (push vs. pull).
- Without significant design and support, online courses tend to have lower pass rates than their counterpart face-to-face courses (although in many online programs with real history, the pass rates can be roughly equivalent).
- Students need to participate in online courses to succeed, just as is true for face-to-face courses.
MOOCs are online courses, and many, if not most, of the known best practices for online education also apply to MOOCs.
I commend the research team and SJSU for providing this report.
The post SJSU research report confirms MOOCs are online courses appeared first on e-Literate.
Back in late July we found out that San Jose State University was pausing their SJSU Plus pilot program using Udacity for-credit MOOCs due to low passing rates. While there was fairly extensive media coverage of the story, which was broken by Inside Higher Ed, there was also the promise of a National Science Foundation (NSF) funded research report based on the spring 2013 courses. I described why the research report was important, despite indications that the lessons learned aligned with well-known practices for online education.
With this many known issues, which have also been documented in several excellent blog posts, why is it valuable for SJSU to use the external NSF-funded report?
In my opinion, there will still be value in A) releasing the full data set and analysis, and B) getting others to understand key points.
Many specialists might understand why the program failed, but many others do not. Campus leaders and policy makers need to learn many of the lessons that are already known in ed tech / higher ed community, and SJSU Plus will help in this area. Plus, I wouldn’t discount some unexpected findings (e.g. need for self-pacing in math courses as mentioned in student surveys). [snip]
It is important for some of the new participants (and I would add foundations, state governments, university presidents, etc) to fully understand what worked and what didn’t work in this application of online education. The official report from SJSU will provide a valuable service to help with this learning process.
There is also value in this very public case establishing a precedent of pausing pilots that don’t work, evaluating results early in the process, adjusting the course and support design based on findings, and focusing on student learning outcomes as the primary measure of success.
The research team included the Research and Planning Group for California Community Colleges (RP Group), with team members Rob Firmin, Eva Schiorring, and John Whitmer from the RP Group and Sutee Sujitparapitaya from SJSU.
The project underlying the report was titled “Experiments in student mentoring, tutoring and guided peer interaction in Massive Open Online Courses (MOOCs)”, with the following description:
This project assesses the effectiveness of human mentorship and guided peer interaction in the context of Massive Open Online Courses (MOOCs). It is based on the observations that
(a) MOOCs are presently more successful for highly self-motivated individuals
(b) there is a nearly complete absence of interactive human mentoring in MOOCs.
This project will investigate the effectiveness of six forms of human mentoring, including group and individual mentoring as well as instructor-guided peer interaction in small groups. In pursuing this project, the PIs seek to characterize the effects of these different types of mentorship on the collection of variables that measure course completion and learning outcomes.
The overall objective of this project is to find ways in which MOOCs can be made successful among a much broader segment of students.
As can be seen from this description, the report focuses on augmenting traditional MOOCs with human mentoring, which leads to its description of Augmented Online Learning Environments (AOLE). The study addressed three research questions:
1. Who engaged and who did not engage in a sustained way and who passed or failed in the remedial and introductory AOLE courses?
2. What student background and characteristics and use of online material and support services are associated with success and failure?
3. What do key stakeholders (students, faculty, online support services, coordinators, leaders) tell us they have learned?
This is an information-rich 44-page report (plenty to chew on here), but here are the key findings:
Findings: The research found that matriculated students performed better than non-matriculated students and that, in particular, students from the partner high school were less successful than the other AOLE students. Pass rates varied significantly with course taken and by persistence of student effort as seen in the following table and figure.
The statistical model found that measures of student effort trump all other variables tested for their relationships to student success, including demographic descriptions of the students, course subject matter and student use of support services. The clearest predictor of passing a course is the number of problem sets a student submitted. The relationship between completion of problem sets and success is not linear; rather the positive effect increases dramatically after a certain baseline of effort has been made. Video Time, another measure of effort, was also found to have a strong positive relationship with passing, particularly for Stat 95 students. The report graphs these and other relationships between variables examined by the logistic-regression models and pass/fail.
While the regression analysis did not find a positive relationship between use of online support and positive outcomes, this should not be interpreted to mean that online support cannot increase student engagement and success. As students, Udacity service providers and faculty members explained, several factors complicated students’ ability to fully use the support services, including their limited online experience, their lack of awareness that these services were available and the difficulties they experienced interacting with some aspects of the online platform. It is thus the advice of the research team that additional investigations be conducted into the role that online and other support can play in the delivery of AOLE courses once the initial technical and other complications have been addressed.
Inside Higher Ed has more coverage of the report release, including updates on the timing of the report.
We plan to add a few posts analyzing the report here on e-Literate, so stay tuned.
The post SJSU releases NSF-funded research report on Udacity pilot appeared first on e-Literate.
File this under “you read it first on e-Literate”.
In previous posts from spring 2013 I provided a graphical view of MOOC student patterns based on observed retention over time as well as differing student types. This graphic was based on anecdotal observations of multiple MOOCs, mostly through Coursera.
A recent study of the edX Circuits and Electronics MOOC includes an interesting chart of student patterns, based on actual data analysis of the 155,000 students from the spring 2012 offering of the course. The full report is worth reading, by the way, with some real insights into student patterns.
I took this chart and overlaid it on the MOOC student patterns graphic, scaling for 0% / 100% of enrollment vertically and start / stop of course horizontally.
It’s good to see real data analysis validate the overall retention pattern from the original graphic’s model.
Despite all the media hype on MOOCs over the past two years, perhaps the most important recent market entry for ed tech has been Canvas, the LMS from Instructure. Instructure was founded in 2008 by Brian Whitmer and Devlin Daley. At the time, Brian and Devlin were graduate students at BYU who had just taken a class taught by Josh Coates, in which their assignment was to come up with a product and business model addressing a specific challenge. Brian and Devlin chose the LMS market because of the poor designs and older architectures dominating it. That class project led to the founding of Instructure, with Josh eventually providing seed funding and becoming CEO by 2010.
Michael covered the burst of news in January 2011 that served as the launch of Canvas.
Instructure has just announced that they will be releasing an open source version of their Canvas LMS product. Between this announcement, the winning of the Utah Education Network contract (109,000 college students and 40,000 K12 students), and the oh-so-ever-brief lawsuit by Desire2Learn about that win, Instructure has been making quite a splash lately.
Since that time Instructure has grown to 7M+ users, 500 customers and 240 employees while raising $50M in total VC funding. With that much early success, it was a surprise to see Devlin Daley leave the company last week, announcing his departure with a Douglas Adams reference.
What does the departure of this co-founder mean to Instructure and its future path? For some perspective, it would help to understand the change in roles that occurred in early 2011.
From the founding of Instructure through the big release of Canvas in January 2011, Brian and Devlin shared tasks primarily based on their backgrounds – Brian’s in usability design and Devlin’s in architecture and software security – although their work overlapped, as can be expected in any small team. They both led development of the product as well as business development based on their specialties. Through their BYU background, they were heavily influenced by David Wiley and Jared Stein (now working for Instructure) and by their visions of more open architectures and learning networks.
In early 2011, however, the company made a conscious decision to have Devlin hit the road as lead evangelist and sales closer while Brian stayed mostly at company headquarters, leading usability design as well as the move to an LTI-based open platform. In other words, they became an inside-outside team, with Devlin on the road for most of the past two years.
The official word on Devlin’s reason for departure is that he still has the entrepreneurial bug to find clever technology solutions, whereas Instructure is now a growing company with a maturing feature set. He has been talking to people and getting ideas, but some of these ideas don’t make sense in an established LMS product.
While that official explanation makes sense, it doesn’t mean that Devlin’s departure will not affect Instructure. The biggest challenge they will face, in my opinion, is keeping someone out on the road, working with customers and asking the why and what-if questions. Just naming a person or two to this role is not the same as having the original vision and skills of a co-founder, although I would expect Jared Stein to play a key role in this regard.
This transition comes at a critical time for Instructure. The growing number of customers acquired in late 2011 through 2012 are now entering or in production, and there is a growing demand for more and more features. The honeymoon is over and marriage requires hard work to last. System administrators tend to focus on the LMS power users and advanced feature requests when they work with vendors, often trying to mold the company’s products into what they are used to or how they would design the product themselves. Canvas was designed with simplicity in mind. In particular, most legacy LMSs have been designed with institutional control in mind – how do I prevent students from doing X, how do I allow students to do Y when Z occurs. Canvas focuses more on faculty and student control – how do I want to receive my communications, how do I bring in material from outside the LMS, how do I add apps at my control. This is not a trivial distinction, and maintaining the right balance requires listening to customers, asking deeper questions to understand needs, and also the ability to either say ‘no’ or present an alternative solution.
Higher education institutions will need to carefully evaluate if Instructure stays true to its founding values of simplicity and modern design while avoiding the trap of feature-bloat that has plagued the ed tech market. At the same time, there are missing features that customers need, as can be seen in the discussion forums such as this one on uploading content to multiple courses at once. This type of balance between feature requests and simplicity is the area for customers to watch to understand how well Instructure handles Devlin’s departure.
For their part, Instructure put together a party for Devlin last Friday and posted a YouTube video on the event.
The post What does Devlin Daley’s departure mean for Instructure? appeared first on e-Literate.
Alternate Headline: “Our Long National Nightmare is Over – SJSU and Udacity solve problem of college graduates being able to pass remedial math”
The more I read about SJSU’s announcement on the pilot program, the more troubled I am by the lack of a clear description of the change in student population (I wrote briefly about the change in student populations yesterday). In a nutshell, the spring 2013 pilot was completely different from the summer 2013 pilot in the major demographic variables. That’s good, right, showing that SJSU and Udacity are learning their lessons? It would be good if SJSU clearly described the student differences and avoided any implication that the numbers could be compared. Further, it would be good to avoid misleading comparisons to face-to-face courses at SJSU.
But that is not what is going on. SJSU, in particular, is going out of its way to compare the spring and summer pilots alongside SJSU on-campus courses in its media blitz. And the strategy is working, based on the articles that came directly from SJSU / Udacity interviews and information releases.
Inside Higher Ed: university officials on Wednesday touted results from the summer cohort as “significantly better”
Chronicle: But now the pilot program appears to be back on course, buoyed by encouraging data from this summer’s trials
TechCrunch: But the university and its platform partner, Udacity, bounced back on their second try, improving students’ outcomes… [snip] Turns out, the failure was premature.
More distressing is that SJSU and Udacity have put out a table, used by most media outlets, that shows direct comparisons.
Here’s the trouble which I described yesterday. The student populations between these three groups are completely different, to the point where other comparisons, such as passing rates or completion rates, should not be made.
Below is my summary of the student demographics based on various interviews and articles.
That’s right – the summer pilot includes 53% of students already having a college degree, 48% with a bachelor’s or higher. In the spring, none of the students had a college degree.
Note: there are conflicting reports on the spring pilot demographics. Most accounts show that it was approximately 50% active high school students (many from Oakland) and 50% matriculated SJSU or CSU students. The Wall Street Journal, however, lists the totals as 20% active high school students. What is troubling is that all of these accounts are based on SJSU or Udacity interviews. I have chosen to use the 50% numbers, for two reasons:
- Udacity lists these numbers (50% high school, 50% SJSU) for spring, and Udacity is the holder of the data.
- The actual contract documents called for a 50 / 50 split with SJSU students (courtesy Ry Rivard at IHE):
In these initial three Courses, each section per Course will have 50 students, for a total of 100 students enrolled for-credit, not including unlimited non-credit students as described in Section 2.2. Half of the for-credit students will be matriculated University students (50 in each Course); the other half will be non-University students (50 students in each Course).
Need More Data
Furthermore, we don’t know the breakdown per course. The remedial math course has the worst pass rates, but does it have a higher percentage of high school students vs. college students vs. college graduates in either spring or summer? We have no idea.
With the dramatically different student populations, we also need to know who completed vs. dropped out, who passed (C or above) and who failed.
The only viable comparison across all three groups would be for matriculated SJSU or CSU students. That comparison might tell us a lot.
Effect of Credit and Fees and Proctored Exam
And there is another key issue with this program – it is one of the first attempts to allow credit for a MOOC-style course (although the spring pilot was not massive). The idea is that for-credit students pay $150 per course, and if they get a C or above, verified through a proctored exam, they receive academic credit at a CSU campus. This is a bold program pushing the envelope. How does the potential for academic credit affect student performance in a MOOC? How does the $150 (skin in the game) affect student performance, even if the National Science Foundation covered the fees for the spring pilot?
And one other big question to consider: with the opening of enrollment between spring and summer, going from 300 easy-to-identify students to 2091 students mostly out of state or country, did all of the summer students take a proctored exam?
Change in Retention Rate
We also see that SJSU changed the definition of retention rate. From their post:
The overall retention rate dropped to 60 percent this summer, compared with 83 percent this spring, reflecting SJSU’s decision to be more flexible when students signaled to instructors that they needed to drop the course.
Clearly SJSU allowed more course drops (we don’t know exactly what the policy change was), but in a standard course, once students drop they are not counted in the overall pass rates. So this policy change would make the summer pass rates seem higher than they actually are. This was noted in the IHE article yesterday.
While student performance is up, the retention rate dropped from 83 percent this spring to 60 percent over the summer, which Taiz [president of the California Faculty Association] said may have inflated the pass rates, as students who would have received a poor grade in a course instead decided to drop it. In comparison, data provided by SJSU showed similar on-campus classes have retained no less than 94.3 percent of students since the 2010 spring semester.
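To see why a looser drop policy can inflate pass rates even when student learning is unchanged, here is a small worked example with made-up numbers (the actual per-course drop counts have not been released).

```python
# Made-up numbers to illustrate the mechanism; not actual SJSU data.

def pass_rate(enrolled: int, eventual_passers: int, drops: int) -> float:
    """Pass rate among retained students, assuming drops come from would-be failers."""
    retained = enrolled - drops
    return eventual_passers / retained

# The same 50 students learn enough to pass in both scenarios.
strict_drop_policy = pass_rate(enrolled=100, eventual_passers=50, drops=10)
lenient_drop_policy = pass_rate(enrolled=100, eventual_passers=50, drops=40)

print(f"Strict drop policy:  {strict_drop_policy:.0%}")   # 56%
print(f"Lenient drop policy: {lenient_drop_policy:.0%}")  # 83%
```

Same students, same learning outcomes, very different headline number.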
The Biggest Offender: Official SJSU Post
Ironically (or depressingly), the best information comes from Udacity and not from SJSU. Inside Higher Ed, the Chronicle and even TechCrunch have much better descriptions of the student population differences than SJSU does. The only references in the SJSU official announcement to the student demographic changes are these nuggets:
This summer, 89 percent of our SJSU Plus students were not California State University students. [snip]
Over the summer, there were many comparisons made between our SJSU Plus and face-to-face courses. What many people failed to realize is this was not an apples-to-apples comparison.
The announcement then goes further to actually call out lessons learned:
Meanwhile, we would like to share some lessons learned.
Here’s what worked:
Learning by doing works. Online video allows us to stop every few minutes and offer students the opportunity to try what they’ve learned with an online exercise. Instructors have found this so effective that some are incorporating SJSU Plus materials into their campus-based courses.
Student interaction remains strong. Does online learning stifle conversation? We found the opposite. Students are connecting with each other, instructors and instructional assistants through every means available: text, email, phone calls, chats and meetings.
Here’s where we’ve improved:
Students need help preparing for class. With SJSU Plus reaching well beyond the SJSU campus, we are enrolling a growing number of students who are unfamiliar with the demands of college courses. This summer, 89 percent of our SJSU Plus students were not California State University students. So SJSU Plus now offers orientation in various forms in all five courses.
Students need help keeping up. Everyone needs a little encouragement to stay on track. So we’ve added tools that help students gauge their progress and we’re checking in with individual students more often.
We need to communicate better with students. Although SJSU and Udacity try to be as clear as possible with our online instruction, we know we can do better. Student feedback has been immensely helpful in refining SJSU Plus materials. We’re also sending less email and more messages while students are “in class” online.
These findings may have some merit (and in fact should have been understood before designing the courses), but it is premature to declare lessons learned unless the student population is taken into account.
Update: Fixed minor wording in first two paragraphs for clarity; no change in meaning.
The post SJSU Plus Udacity Pilots: Lack of transparency in describing data appeared first on e-Literate.
San Jose State University (SJSU) and Udacity have announced the results of their summer pilot, and the headlines cover the big improvements (text from IHE article, table from Udacity blog).
Thrun recently hinted that the summer pilot’s results would be more positive, and that Udacity was getting close to finding the “magic formula” to deliver high-quality, low-cost education.
The lone holdout among the SJSU Plus courses is entry-level math, which saw the smallest increase in students who received a passing grade, from 23.8 to 29.8 percent. That places the pass rate almost 40 percentage points below the closest SJSU Plus course, and about 15 percentage points below the pass rate of the on-campus course.
I am not trying to throw cold water here, but it is very important to look at the student populations. To his credit, Sebastian Thrun describes the big differences in his blog post.
This summer, we ran the second instance of our pilot. While in the Spring, we actively sought out underserved high schools from low-income areas in California, this time we simply opened up enrollment to anyone. As predicted, with 2,091 students who enrolled, we mainly reached students who would not ordinarily attend college. Only 11% of the summer students who took the for-credit courses from SJSU were matriculated students in one of the California State Universities. 71% of our students came from out of state or foreign countries. And while the total number of high school students went up, their proportion in the total student body went down. [snip]
One key difference between Spring and Summer was that we opened our Summer session to everyone. This led to a substantial difference in student body. Among the student body, 53% reported that they already hold a post-secondary degree (5% Associate, 28% Bachelor’s, 16% Master’s, and 4% Doctorate). Only 12% of the students had a high school graduate diploma or equivalent, and 15% were active high school students. This is very different from the Spring Pilot, in which approximately 50% of the student body were active high school students, and the other 50% were matriculated SJSU students. [emphasis added]
One of the key success factors in online education is targeting student populations who can succeed in this environment. If you instead target groups that typically need more help (remedial students, underserved students, high school students), then there must be tremendous student support.
Pay attention to the student populations – this might be the real story. If you ignore the difference in student populations, you might draw the wrong conclusions.
The post SJSU Plus / Udacity Update: Different student populations appeared first on e-Literate.
Michael and I have written about California’s efforts to leverage online education to address the challenge of students having access to needed courses, but it would help to hear what students have to say. Towards that end, I am sharing a student newspaper article about Cal State’s new online concurrent enrollment program. The student is my daughter, Hillary Hill, who is in her third year at Sonoma State University. You can find the original article here.
As a student attending a school that is part of the California State University system, specifically Sonoma State University, I know the dread and frustration associated with registering for classes.
In some ways, it seems like we are being set up to fail; we have a 16 unit cap, registration times get mixed up, and it seems like every class you need is full after five minutes.
This especially applies to general education, or GE, classes. It makes sense, as everyone has to fill the same GE requirements, and there can only be so many different classes offered. However, wouldn’t you like to be fully informed about any possible way to make filling those GE requirements easier and faster? I know that I would.
That is why I was thrilled to learn about a new program that would make completing these classes easier. My dad works in the online higher education field and sent me a link in late July to a website for the California State University Intrasystem Concurrent Enrollment program.
It is a new program that allows students currently enrolled at a CSU campus to take fully online GE classes administered at other CSU campuses. The only requirement is that students at schools on the semester system take classes from campuses that are also on the semester system and likewise for quarter system campuses.
The classes are automatically transferred to your home campus, there are both upper division and lower division courses, and they all fulfill a GE area.
Sounds pretty perfect, right? I bet you’re wondering why you might not have heard of it. The answer to that lies with both the CSU system and our own campus.
It has been obvious for a while that the needs of the students far surpass the University’s ability to accommodate them.
Online classes eliminate the need to add more classroom time and don’t require a professor to be on campus each week to teach, so it seemed like untapped potential. The California State University system recognized this and joined other universities, such as the University of California campuses, in the endeavor to alleviate physical and financial burdens by hosting classes online.
The problem is that for the 2013-2014 school year, this was too little too late. The Intrasystem Concurrent Enrollment program was announced at the very end of July, mere weeks before we came back to school.
That’s not a lot of time to rearrange your schedule and figure out what classes would benefit you. That is the fault of the CSU system. It feels like the program was rushed to be put into action and wasn’t promoted nearly enough. It could really change the lives of CSU students, but no one really knows about it.
At Sonoma State, we had even less time to learn about this program. Part of the reason why not a lot of people at our school know about this is that we only received one email about it, and that email was sent about one week before classes started.
That’s right – one of the many emails that the University sends to all of its students, which you probably delete without much thought, contained all of the information about this program and how to enroll in it.
We get bombarded about who’s playing at the Green Music Center at least once a week, but we only got one unassuming email that could potentially save students time and money.
I feel that over the next few years the Intrasystem Concurrent Enrollment program can produce the “radical” changes that CSU spokesman Mike Uhlenkamp believes it can.
It’s a little too late for it to make any real difference this semester, but I encourage Sonoma State students to do research on it and be prepared to use it to your advantage in the coming semesters.
We have a lot working against us, but I believe if we’re proactive in bettering our futures, we will be successful.
The post Cal State’s New Online Concurrent Enrollment Program: A Student’s View appeared first on e-Literate.