
Michael Feldstein

What We Are Learning About Online Learning...Online

ITC Survey: Four trends to watch in LMS market for community colleges

Tue, 2014-05-06 09:26

The Instructional Technology Council (ITC), an affiliated council of the American Association of Community Colleges (AACC), has conducted a distance education survey since 2004 focusing on online education trends among community colleges in the US and Canada. Given this focus, the member colleges tend to be those with a stronger interest in online education programs, and they are therefore more progressive in technology usage than the general community college population. One part of the ITC survey covers general LMS usage at member schools, which provides another source of data on the higher ed LMS market.

The survey this year was based on 142 responses out of the 375 member institutions (39%), and there are a few caveats that should be noted:

  • Prior to 2012 the survey was emailed to all AACC colleges, but due to uneven responses it has since been sent only to ITC member colleges; and
  • “No answer” responses are not listed in percentages, thus totals will not always equal 100%.

Despite those caveats, “70 percent of the annual submissions have come from the same campuses during the nine years of the survey”, which makes it worth exploring some broad trends in the community college LMS market. Here is the LMS usage data presented on page 15 of this year’s report (April 2014 report of Fall 2013 survey data):

Table 3

This data is a little confusing due to Blackboard’s acquisitions of WebCT in 2005 and ANGEL in 2009. This year the survey stopped listing ANGEL as a separate LMS and folded it into Blackboard. For simplicity’s sake, I’ve combined the Bb Learn, WebCT, and ANGEL data into one category called “Blackboard”.
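For anyone reproducing this from the survey tables, the recoding is a simple column merge. Here is a minimal sketch, assuming the yearly percentages have been transcribed into a CSV with one column per LMS (the file name and column labels are my own hypothetical choices, not the survey’s):

```python
import pandas as pd

# One row per survey year, one column per reported LMS, values in percent.
usage = pd.read_csv("itc_lms_usage.csv", index_col="year")

# Fold Blackboard's acquired products into a single "Blackboard" category.
acquired = ["Bb Learn", "WebCT", "ANGEL"]
usage["Blackboard"] = usage[acquired].sum(axis=1)
usage = usage.drop(columns=acquired)

print(usage[["Blackboard", "Moodle", "Canvas", "Desire2Learn"]])
```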

ITC LMS Market Share

Given the change in data collection (combining Blackboard solutions, changing listed LMS options, dropping the AACC mailing), I think it would be safer to concentrate on trends rather than absolute market share numbers. Here is a view of trend lines:

ITC LMS Trend Lines

With this data, there are four trends that seem important for the LMS market.

  • Despite the change in categories, Blackboard may be showing signs of organic market growth – taking customers away from competitors rather than just acquiring or retaining customers. Does this represent a reversal of Blackboard’s ongoing overall market share losses? I don’t think we can draw conclusions yet, but this is new data worth exploring (watch for the Campus Computing Survey, Edutechnica, and the next ITC survey).
  • Moodle and Canvas are consistently growing in the community college market – Canvas more so than Moodle. In the case of Moodle, the ITC survey shows fairly consistent growth since 2009 (the year Blackboard acquired ANGEL). In the case of Canvas, the ITC survey shows rapid growth over the past 2 years.
  • For the first time in an LMS market survey that I am aware of, Desire2Learn has actually lost market share. In fact, Desire2Learn is now lower than both Moodle and Canvas for community colleges according to this survey. This is a topic worth exploring further, especially in relation to last year’s layoffs.
  • If you look at the total responses, notice that the top 4 LMS solutions (Blackboard, Moodle, Canvas, Desire2Learn) accounted for 74% total in 2011 and 98.5% total in 2013. I asked the researchers, and they said the survey lists 15 different LMSs plus the ability to write-in, but that this year “the respondents reported a much more narrow number of LMS solutions”.

Update (5/6): George Kroner was kind enough to run data from the Edutechnica site, getting fairly similar numbers.

@PhilOnEdTech for Carnegie 2010 basic #1-14: 35% Bb, 11% ANGEL, 12% D2L, 12% Canvas, 15% Moodle, 3% Sakai, 3% Pearson, 1% WebCT, 8% Other

— George Kroner (@georgekroner) May 7, 2014

@georgekroner Thanks – seems the biggest disconnect is Bb / ANGEL / WebCT totals of 58% for ITC and 47% for your data.

— Phil Hill (@PhilOnEdTech) May 7, 2014


UCF and Temple IPEDS corrections lead to new Top 20 in online enrollment

Sun, 2014-05-04 12:12

In response to the new distance education (roughly equivalent to online education) data released by the National Center for Education Statistics (NCES) through its Integrated Postsecondary Education Data System (IPEDS), I have written a series of posts showing different ways to view the data. In short, the data allows much greater insight into degree-granting online higher education in the US than we previously had. WCET also has a series of posts worth exploring.

One post in particular listed the top 20 institutions in terms of how many students take at least one online course, broken down by sector. The list for public 4-year showed the following (undergrad + grad, # students taking at least one online course in Fall 2012):

Public 4-year

Upon viewing this data, two schools discovered that the data they had provided to the IPEDS database was incorrect.

University of Central Florida

Thomas Cavanagh from UCF was a little surprised to see his school listed with 21,782 students online when his group’s internal data showed higher numbers. For context, UCF has had one of the most proactive and strategic approaches to online education of any R1 university, including mandatory training and quality assurance standards for any course offered online or in hybrid format. What Cavanagh and his team discovered was that the school was not reporting every category of course that actually qualifies as distance education. UCF defines its modalities in five categories:

World Wide Web (W) — courses conducted via web-based instruction and collaboration. Some courses may require minimal campus attendance or in-person/proctored examinations.

Video Streaming (V) — courses delivered over the web via streaming digital video which may be supplemented by additional Web activity, projects or exams.

Video Streaming/ Reduced Seat Time (RV) — classroom-based content is available over the web via streaming video and classroom attendance is not required. Other required activities that substitute for video instruction may include any of the following elements: web activity, in-person or proctored examinations, and labs. See course notes for details.

Mixed Mode/Reduced Seat Time (M) — courses include both required classroom attendance and online instruction. Classes have substantial activity conducted over the web, which substitutes for some classroom meetings.

Face To Face Instruction (P) — courses have required classroom attendance and meet on a regularly scheduled basis. Students may encounter internet and/or computer requirements in these classes.

Meanwhile, the official IPEDS definitions:

Distance education – Education that uses one or more technologies to deliver instruction to students who are separated from the instructor and to support regular and substantive interaction between the students and the instructor synchronously or asynchronously.

Technologies used for instruction may include the following: Internet; one-way and two-way transmissions through open broadcasts, closed circuit, cable, microwave, broadband lines, fiber optics, satellite or wireless communication devices; audio conferencing; and video cassette, DVDs, and CD-ROMs, if the cassette, DVDs, and CD-ROMs are used in a course in conjunction with the technologies listed above.

Distance education course - A course in which the instructional content is delivered exclusively via distance education. Requirements for coming to campus for orientation, testing, or academic support services do not exclude a course from being classified as distance education.

The mistake made at UCF was that the Video Streaming categories (V and RV) were not originally reported to IPEDS as distance education. You can see that the V and RV categories deliver instructional content exclusively via “one or more technologies to deliver instruction to students who are separated from the instructor and to support regular and substantive interaction between the students and the instructor synchronously or asynchronously.” UCF has since sent corrections to NCES.

The result is that UCF moves from 21,782 students taking at least one online course to 28,954 (this may change slightly due to the definition of the census date), a 33% increase.

Temple University

Several readers were, like me, surprised to see Temple University with such a high rate of online courses (35,248, or 96% of all students, taking at least one online course), when the school does not have a reputation for offering online courses and its web site does not emphasize online programs.

I contacted the school’s Institutional Research and Assessment group to ask for clarifications, and it turns out they also discovered that their IPEDS reporting data was incorrect – but in the opposite direction from UCF. In their case, Temple appears to have simply made a mistake in entering the data, due to new reporting standards and survey overload. According to my contact, one of the issues is that IPEDS notifies a school if there are any anomalies in the reported data, but the distance education reporting is brand new. The mistake was not caught until the data was publicly blogged.

The result is that Temple University moves from 35,248 students taking at least one online course to 1,809, a 95% decrease. Temple has graciously sent corrections to NCES.
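For clarity, here is the arithmetic behind both percentages (figures taken from the corrections above):

```python
def pct_change(old, new):
    """Relative change from the originally reported figure, in percent."""
    return (new - old) / old * 100

print(f"UCF:    {pct_change(21_782, 28_954):+.1f}%")  # +32.9%, i.e. roughly a 33% increase
print(f"Temple: {pct_change(35_248, 1_809):+.1f}%")   # -94.9%, i.e. roughly a 95% decrease
```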

Value of Open Data

I’d like to point out the value of Open Data in this case: the public reporting and blogging of the data is producing transparency in the data and reporting process, as well as improvements in the data itself. This would not have happened using the previous approach of surveys with closed data.

I’d also like to thank UCF and Temple personnel for their openness in noticing, correcting, and describing data entry mistakes. They provide excellent examples for other schools.

Changes in Top 20

I am not sure what the IPEDS process is to update the database based on corrections, so the resultant changes reported below will not match official records for a while (in other words, you read it here first at e-Literate). NOTE: I have also revised the view to focus just on degree or certificate-seeking students, so there are some other changes as well.

Top 20 US Public 4-year Institutions by Online Enrollment (Revised)

Listing of Top 20 US public 4-year institutions by Fall 2012 online education enrollment for combined undergraduate and graduate degree- and certificate-seeking students, per IPEDS:

Rank | Institution | Students Taking At Least One Online Course | Percentage | Total Students
1 | University of Maryland-University College | 41,357 | 100% | 41,457
2 | Arizona State University | 36,050 | 49% | 73,179
3 | University of Central Florida | 28,954 | 49% | 59,212
4 | Florida International University | 24,403 | 54% | 44,940
5 | University of Florida | 22,773 | 46% | 49,160
6 | Thomas Edison State College | 20,456 | 99% | 20,606
7 | University of South Florida-Main Campus | 16,113 | 40% | 40,459
8 | Troy University | 15,444 | 69% | 22,407
9 | St Petersburg College | 15,044 | 55% | 27,574
10 | Pennsylvania State University-Main Campus | 13,128 | 29% | 45,138
11 | Northern Arizona University | 12,478 | 48% | 25,850
12 | Valencia College | 11,948 | 33% | 35,728
13 | College of Southern Nevada | 11,867 | 39% | 30,787
14 | Middle Tennessee State University | 11,379 | 45% | 25,144
15 | University of North Texas | 11,259 | 30% | 37,950
16 | University of Houston | 10,809 | 28% | 39,303
17 | Indiana University-Purdue University-Indianapolis | 10,534 | 36% | 29,559
18 | Florida State College at Jacksonville | 10,336 | 39% | 26,634
19 | University of Arizona | 10,316 | 26% | 39,875
20 | North Carolina State University at Raleigh | 10,085 | 31% | 32,798


White House report on big data will impact ed tech

Fri, 2014-05-02 14:56

Yesterday the White House released its report on big data and privacy implications. The focus was broadly on big data, but there will be implications for ed tech, with several key recommendations focused specifically on the education sector. In particular, there will be a push to update and revise the Family Educational Rights and Privacy Act (FERPA, enacted in 1974) and the Children’s Online Privacy Protection Act (COPPA, enacted in 1998). Education Week set the context quite well in its article:

FERPA, which was written before the Internet existed, is intended to protect disclosure of the personally identifiable information contained in children’s education records. And COPPA, which requires parental consent under certain conditions for the online collection of personal information from children under age 13, was written before the age of smartphones, tablets, apps, the cloud, and big data.

Think Progress, part of a group founded by John Podesta, who also led the White House study, summarized the key findings as follows:

  1. Give consumers more protection and control over their private data with a bill of rights

  2. Pass a singular data breach law to prevent the next Target fiasco

  3. Strengthen outdated and archaic laws, such as the Electronic Communications Privacy Act (ECPA), which dictates how the government accesses emails

  4. Give non-citizens the same privacy protections

  5. Ensure data collected on students is used only for educational purposes

Ed Tech Sections of the Report Itself:

First, there is a description of the situation on pages 24 – 26 that is too long to quote in full but is worth highlighting:

Learning about Learning: Big Data and Education

Education at both the K-12 and university levels is now supported inside and outside the classroom by a range of technologies that help foster and enhance the learning process. Students now access class materials, watch instructional videos, comment on class activities, collaborate with each other, complete homework, and take tests online. Technology-based educational tools and platforms offer important new capabilities for students and teachers. After only a few generations of evolution, these tools provide real-time assessment so that material can be presented based on how quickly a student learns. Education technologies can also be scaled to reach broad audiences, enable continuous improvement of course content, and increase engagement among students.

Later in the conclusions section of the report, pages 62 – 63, there are the relevant recommendations (quoted in full):

2. Responsible Educational Innovation in the Digital Age

Big data offers significant opportunities to improve learning experiences for children and young adults. Big data intersects with education in two important ways. As students begin to share information with educational institutions, they expect that they are doing so in order to develop knowledge and skills, not to have their data used to build extensive profiles about their strengths and weaknesses that could be used to their disadvantage in later years. Educational institutions are also in a unique position to help prepare children, adolescents, and adults to grapple with the world of big data.

Ensure data protection while promoting innovation in learning

Substantial breakthroughs stand to be made using big data to improve education as personalized learning on network-enabled devices becomes more common. Over the next five years, under the President’s ConnectED initiative, American classrooms will receive a dramatic influx of technology—with substantial potential to enhance teaching and learning, particularly for disadvantaged communities. Internet-based education tools and …

Recognize digital literacy as an important 21st century skill.

In order to ensure students, citizens, and consumers of all ages have the ability to adequately protect themselves from data use and abuse, it is important that they develop fluency in understanding the ways in which data can be collected and shared, how algorithms are employed and for what purposes, and what tools and techniques they can use to protect themselves. Although such skills will never replace regulatory protections, increased digital literacy will better prepare individuals to live in a world saturated by data. Digital literacy—understanding how personal data is collected, shared, and used—should be recognized as an essential skill in K-12 education and be integrated into the standard curriculum.

RECOMMENDATION: The federal government should ensure that data collected in schools is used for educational purposes and continue to support investment and innovation that raises the level of performance across our schools. To promote this innovation, it should explore how to modernize the privacy regulatory framework under the Family Educational Rights and Privacy Act and the Children’s Online Privacy Protection Act to ensure two complementary goals: 1) protecting students against their data being shared or used inappropriately, especially when that data is gathered in an educational context, and 2) ensuring that innovation in educational technology, including new approaches and business models, has ample opportunity to flourish.

This new policy push will be important to watch over the next two years or so as it moves beyond white papers into proposed legislation or regulatory changes. My initial read is that this report is quite balanced and well-researched – it is difficult to find much to fault.

There were some cautious views voiced at EdWeek, however:

“When they say ‘modernize,’ we say, ‘build upon’ because I don’t want to get away from the privacy protections that current laws already afford,” said Khaliah Barnes, a lawyer for the Washington-based nonprofit Electronic Privacy Information Center, which filed 14 pages of comments on big data for the White House report before it was released.

Barnes viewed the report favorably, in general, but took issue with a statement that education technologies are already being deployed “with strong privacy and safety protections for students, inside and outside of the classroom.” She mentioned statewide longitudinal databases and cloud computing databases “being used every day that do not adequately protect students.”

The Software & Information Industry Association, a trade group for technology businesses, issued a statement concurring about the value of big data to support student learning. However, the group known as SIIA said, “modernizing privacy rules need not involve new legislation.”


GSV+ASU EdInnovations Conference: A proxy for ed tech’s big challenge

Tue, 2014-04-29 21:01

Last week Michael and I attended the GSV+ASU EdInnovations conference, commonly known as “Davos in the Desert”. This conference is growing in importance, especially in its effect on ed tech investment, as can be seen in the rising attendance. Just five years ago there were fewer than 300 attendees, and this year’s event had well over 2,000. Some of the notable articles covering the event:

  • Inside Higher Ed: “Let’s Make a Deal” – this article compares the rise of ed tech in general to the rise in the conference, but asks about lack of educators and lack of nationwide results (more on this later).
  • Huffington Post: “Is the American Dream a Dream Deferred?” – this article takes a positive spin on the conference, especially the focus on improving access and student success, but also questions the lack of diversity (more on gender lines than educator / non-educator lines).
  • EdSurge: “What Happened at ASU-GSV…” – this article presents an insider’s booster view of the conference, occasionally accentuating the positive and defusing the negative, but it does describe the conference events quite well.
  • Getting Smart: “ASU+GSV: Education Dreaming” – ditto.
  • EdWeek: “Teachers at Summit Tell Ed. Companies What They Want–and What They Don’t” – this article covers one panel in particular, but this one is highly relevant to the following discussion on educator involvement.
  • Dusty World: “ASU/GSV Summit” – this article has a slightly humorous description of “the strangest education conference of my career”, noting that most discussions actually focused on politics and not pedagogical approaches.

As for myself, I live-tweeted some of my observations, particularly focusing on pro-immigration reform, foundation influence, and the difficulty of interviewing Don Graham (former owner of the Washington Post and current family owner of Kaplan).

Lack of Big Results

Rather than rehashing some of these discussions, I’d like to build on and clarify some comments from the Inside Higher Ed article in which I was quoted.

As the billion-dollar education technology industry holds what has become its primary gathering here this week, the onus is on vendors to show they can produce not only profits, but also improved outcomes.

The best section of the article came in the interview with Gallup’s representative.

“At a national level, there is no evidence that educational technology has reduced the cost of education yet or improved the efficacy of education,” said Brandon Busteed, executive director of Gallup Education. “And that’s just as true as it gets. Maybe there will be some day, but that’s the question: How much longer do we think it will take before we can detect movement on the national needle?” [snip]

“Every one of these companies has — at least most of them — some story of a school or a classroom or a student or whatever that they’ve made some kind of impact on, either a qualitative story or some real data on learning improvement,” Busteed said. “You would think that with hundreds of millions of dollars, maybe billions now, that’s been plowed into ed-tech investments … and all the years and all the efforts of all these companies to really move the needle, we ought to see some national-level movement in those indicators.”

This hits it just right. Ed tech in general, and the conference as a proxy for it, focus very well on good intentions and successful pilot programs, but we are not seeing the results scale. The problem is not a lack of innovative ideas; the problem is a breakdown in the Diffusion of Innovations.

Good Intentions, Not Just Pursuit of the Dollar

On the positive side, I found the conversations, at least in the plenary sessions, refreshing in their focus not on making money but on addressing societal challenges. Money really was secondary in importance for the most part in the official conference focus – the ‘doing well by doing good’ concept.

From IHE:

Hill, on the other hand, said the summit could also show the investment community in a good light, with a focus on money, to be sure, but also plenty of good intentions and a quest for results.

“I think there’s a lot of people in higher education who are concerned that this is all about people making a buck,” he said. “If there were more of them here, I think they would hear that there’s a lot of very legitimate concern for improving education.”

This sentiment was also captured by the Huffington Post article:

There was a lot of talk at the summit about the American dream and creating opportunity for all. In fact it was the theme of the conference. I heard a lot about fears that the American dream is dying. [snip]

The sessions were engaging and valuable. During one, Michael Crow, ASU’s president, championed the need for “culture change” in families, in societies and in education in order for us to see any real improvements in learning outcomes. Sentiments I am in agreement with.

These good intentions from the plenary sessions and panel discussions don’t necessarily extend to the hallway conversations. It was a pitch-fest, and there was too much exuberance.

Missing Conversations and Depth of Understanding

What I found most problematic from this conference was the lack of meaningful involvement by educators. From IHE:

Phil Hill, an education consultant, said the lack of faculty and teacher input has limited the conversation about how to address the challenges facing K-12 and higher education. Some speakers, he said, used data to make the case that the education system is broken, as opposed to showcasing how they plan to fix it.

“You end up with everybody pitching everybody,” Hill said. “Where are the schools, the teachers, the people who could actually use the innovations coming out of here to do something with it?”

EdSurge attempted to defuse this feedback (not mine specifically, but from general discussions), noting twice that there were more than 100 educators (including K-12 and higher education). GSV themselves put the numbers in their post-conference survey:

Brief 2014 Summit Fast Facts:

  • Over 2100 Registered Participants
  • Over 200 Investors
  • Over 535 Entrepreneurs
  • Over 80 Higher Education Institutions
  • Over 180 Educators
  • 230 Presenting Companies
  • 23 Countries Represented

From a pure numerical perspective, it is worth noting that the majority of the higher ed “educators” were from for-profit institutions (roughly 70) or were hosts at ASU (roughly 34). Even counting these groups, educators made up less than 10% of attendees, and there was very little diversity (Update: To clarify, I’m calling out the issue of including significant educator input from a variety of institution types and a variety of adoption categories such as early adopters, early majority, etc.). The EdWeek article called out a panel discussion where faculty gave some welcome feedback:

The speakers at one panel during the conference were able to offer companies some insights, often in very blunt terms. The panelists included a pair of teachers, who spoke about their hopes for classroom technology, and their disappointments about what it doesn’t deliver.

Tanesha Dixon, a middle school social studies teacher, said that many of the classroom technologies she saw didn’t seem as if they’d been designed to make educators’ lives easier, and would most likely add to their workload.

The teacher at the Wheatley Education Campus, a public school in the District of Columbia, said she wasn’t convinced that developers were doing enough to seek out the opinions of educators.

“I always wonder, how do you develop a product?” Dixon asked the audience. “Are you sitting at home in your basement?…I feel like no one’s talking to us.”

This feedback really gets to my biggest complaint at the conference: too many well-intentioned conversations talking about educators without the effort of talking with educators (and yes, the panel in my mind is the exception that proves the rule). This criticism is really aimed at much of ed tech in general and not just the conference.

I doubt that the GSV and ASU organizers intend to exclude educators (see Dusty World for discussion of reduced rates available for non-profits), but that was the result. This is a comment more on reality, not on intentions.

Put Them All Together

And more importantly, I see these themes overlapping. Not having enough well-intentioned discussions with educators is one reason we are seeing a breakdown in the diffusion of innovations, partially caused by a lack of depth in understanding.

Michael called this situation out in his post on why VCs often get ed tech wrong.

The combination of the obsession with disruption and the phobia of institutions is a particularly bad one for the education markets. When I say that, I don’t mean that it is harmful to education (although that is probably also true). What I mean is that it leads to consistently unprofitable investment decisions. [snip]

I just don’t see disruptive innovation as a good guide for investment in education. And for similar reasons, I think the notion of avoiding institutional entanglements is pretty much hopeless, since the very idea of education is inextricably bound up in those institutions. Disruptive innovation and direct-to-consumer are both investment strategies that are designed to avoid complexities that lead to investment risk. But in education, complexity is unavoidable, which means strategies that attempt to avoid it usually result in risk-increasing ignorance rather than risk-avoiding safety. And as Warren Buffett said, “When you combine ignorance and leverage, you get some pretty interesting results.”

I call out this challenge not to criticize the GSV and ASU conference organizers, but rather to use the conference observations to help illustrate the big challenge we have in the education community – plenty of good ideas but frustrating inability to scale and diffuse innovations effectively.


New e-Literate TV Series on MOOCs

Fri, 2014-04-25 13:28


If 2013 was the year that MOOC mania peaked in the Gartner hype cycle, then surely 2014 is the year in which they have entered the trough of disillusionment. And just as it was important last year to push back on the inflated expectations, it will be equally important this year to make sure we don’t throw out the baby with the bath water.

With that in mind, I’m pleased to announce our new e-Literate TV series, “Massive and Open: What Are We Learning?” Filmed at the 2013 MOOC Research Initiative (MRI) conference, it delves into the current state of knowledge of the field. The interview with Stanford’s Amy Collier from our pilot series should really be considered Episode Zero for the series, as it explores the value of the first generation of MOOCs (particularly xMOOCs). To that, we have added three episodes from the new series:

  1. MOOC Research Initiative Grant Projects: After a brief introduction with MRI lead researcher George Siemens, Phil and I talk to a number of the grantees about the research they are undertaking. The range of research questions is eye-opening.
  2. MOOC Point/Counterpoint: We intercut interviews between MOOC enthusiast Keith Devlin and MOOC skeptic Jonathan Rees. The result is a rich and nuanced dialog between these two smart guys.
  3. Crowdsourced Answers to Popular MOOC Questions: We got MRI conference attendees to give their answers to frequently asked MOOC questions. The answers are varied and give a good sense of just how much our thinking about MOOCs is still all over the place.

Enjoy!


Partial Transcript: Richard Levin (new Coursera CEO) on Charlie Rose

Wed, 2014-04-23 19:53

I have written two posts recently about Coursera’s appointment of the former president of Yale as the company’s new CEO, with the implicit argument that this move represents a watershed moment for commercial MOOCs. In particular, Coursera seems likely to become the third generation of Richard Levin’s dream, following AllLearn and Open Yale Courses. I’ve also argued that Levin is embellishing the history by making Internet bandwidth a primary factor in the demise of AllLearn when the lack of a viable business model was the more important issue – a point Levin himself argued at the time.

Richard Levin was just interviewed by Charlie Rose, and I am including a transcript of most of the segment (starting around 3:15), highlighting some key points in bold. This interview should give us further insight into the future of commercial MOOCs, especially as we have the first non-founder CEO in one of the big three commercial MOOC providers. Follow this link to watch on CharlieRose.com and avoid the annoying Hulu ad.

Rose: You could have gotten a government job, as an ambassador or something; maybe been Secretary of the Treasury as far as I know . . . you could have done a lot of things. But you’re out running some online education company (laughs).

Levin: It’s a fantastic mission, it’s really the perfect job for me and for following a university president’s [job].

Rose: Why’s that?

Levin: One, I like running things, so it’s an opportunity to run something. But most important it’s so much an extension of what I’ve tried to do. It’s to take Yale to the world, and this is an opportunity to take 108 of the world’s greatest educational institutions (and there’ll probably be some more) and teach the planet.

Rose: Before you go there, let’s get the landscape. At Yale you tried some online education. A couple of others have had . . . and there is a checkered past.

Levin: Well, there was a time of experimentation in learning. When we started in 2000 with Stanford and Oxford as partners, we thought our market was our own alumni, so we sort of narrowcast over the Internet. Then we opened it to the public, but the bandwidth wasn’t there.

Rose: You mean the technical bandwidth?

Levin: Yes, this was still the era when your videos were jerking around, you remember that? So it had that problem, and we just didn’t have the right model for making it work. And also it didn’t have a high degree of interactivity. Basically you watched a lecturer give a lecture, and maybe there were some visuals, but that was it.

And then the next thing we did were “Open Yale Courses”, which basically were videos of 42 of our best lecture courses put out for free over the Internet, with support of the Hewlett Foundation. They were great, but very few people watched them from beginning to end. They were free. The materials of the course were distributed, but there were no quizzes, no exercises.

Now what Coursera has done has sort of recognized that first of all, we have greater bandwidth, we can support lots of people at once – taking quizzes, reacting to the material, getting feedback; having professors look at the data to see what parts students are having a hard time and improving their courses. So it’s a constant feedback loop.

It’s really terrific and the scale is immense. It’s amazing, we’ve had 7 million different people.

Rose: Separate the landscape for me. There’s edX, there’s Sebastian Thrun’s thing, Udacity. How are the three different?

Levin: They’re all a little bit different, but those are three that are involved in this MOOC space [uses square quotes]. There’s lots of other things online. Many schools have had things online with closed enrollments for substantial tuition dollars for some time now.

What these three are trying to do is go to a wide open public, and putting courses out for free and getting hundreds of thousands of people to sign up for them.

Our approach and edX’s are pretty similar.

Rose: edX is Harvard and MIT?

Levin: edX is ‘Harvard and MIT want to do their own thing and not sign up with Coursera’ (laughs). At this time we have about three times as many partner institutions and three or four times the audience. It [edX] is a worthy effort, they’re doing a good job, and so are we, and we’re competing on the features offered for students. edX is open source software, which some of the computer science types like – it means they can play with it, they can add to the features on their own.

But we’re developing interfaces that will allow faculty to add features as well.

I think it’s good there’s competition. I’ve studied innovative industries; before I became president of Yale it was my field. Competition is good for innovation, the products will get better.

Rose: But is the mission the same?

Levin: I think that edX and Coursera have very similar missions. It’s to have great universities as the partners . . . the universities develop the courses. We’re not a university, Coursera’s not a university. Coursera is a platform and a technology company that serves universities.

Rose: Backed by venture capital?

Levin: Yeah, but I think the key lesson here is the scale.


Coursera CEO Interview: Mike Caulfield nailed it two months ago

Tue, 2014-04-15 20:03

Two months ago Mike Caulfield lamented the inability of many people in online education, especially massive online initiatives, to honestly learn from the past. In that post Mike referred to the failed AllLearn initiative and the seminal post-mortem written up in University Business.

How does that relate? A paragraph from the 2006 post-mortem of AllLearn really stuck out for me:

Oxford, Yale, and Stanford have kept quiet about the collapse of their joint e-learning venture…[h]owever, AllLearn’s closure could offer an unprecedented opportunity to step back and discuss the strengths and weaknesses of the business model… Further research into the series of collapsed online ventures may shed some light on what makes a successful distance education program, and enable some of the surviving online providers to redefine their business models and marketing strategies accordingly

Of course they don’t delve into these things honestly, and as a result most people in these institutions are unaware of them. Like Leonard, the institutions alter the record of the past. They wake up the next day with amnesia, consult a set of dramatically altered notes, and wonder why no one has tried massive Ivy League courses yet. The PR push to cover one’s tracks ends up erasing the institutional knowledge that could build a better initiative.

Little did Mike realize that he was writing a script.

One month later, Coursera hired Richard Levin as its new CEO. As president of Yale, Levin was one of the key figures in the creation of AllLearn in 2000, and after the initiative’s 2006 collapse he was one of the figures directly responsible for the Open Yale Courses initiative.

The consensus view is that AllLearn failed to generate enough interest in its non-credit elite courses, and the program subsequently closed due to economics (by Levin’s own previous admission). In 2005 AllLearn attempted to address this challenge by branching out beyond alumni, as related in this Yale Daily News post [emphasis added in all quotes below].

“I think we’ve learned a lot from the experiment,” Levin said. “While I believe we’ve produced some very high quality courses, we’ve learned that it’s hard to generate large audiences sufficiently from these courses from just the alumni of the three partner schools. So we’ve reached out to attract broader audiences through affiliating with universities and through finding other organizations that might have an interest in making courses available to members.”

Fast forward a year, and it is clear that the effort had failed economically despite the broadening of audiences, again from the Yale Daily News.

Yale President Richard Levin, who served as AllLearn’s chairman, said he thinks that while the participating institutions learned what is necessary to manage a successful distance learning program, they were unable to make the project financially viable.

“We are disappointed that we weren’t able to find a way to make this successful economically,” Levin said. “[But] we learned a lot, and I think it will serve us well in the future.”

Open Yale Courses also provides non-credit elite courses. The problem? You might have guessed it, as described by this 2012 report for the Committee on Online Education.

Open Yale Courses has been supported by generous grants from the Hewlett Foundation, but those grants are ending this semester; and there is no provision yet for the continuation of this program. There has been extensive planning, however, to keep the 42 existing courses on the Yale site as well as the iTunes U and YouTube platforms. All of the courses are being stored and preserved for future study. New visitors are discovering Open Yale Courses daily and global media coverage, which has been constant and enthusiastic since the start of the project, continues unabated.

The initiative is now attempting direct solicitation as a method of funding.

I don’t mean to question Levin’s good intentions or his considerable support for the mission of making education more accessible through online technology. However, I find it disingenuous to try to alter history. This week the New York Times interviewed Levin about his new role as Coursera CEO; the reporter asked some good questions but lacked follow-up.

Q. Yale has not exactly been a mass institution.

A. No, but we were early in the on-line arena, with a venture back in 2000 called All-Learn.

Q. How much did you lose, and why didn’t that spoil this for you?

A. It was too early. Bandwidth wasn’t adequate to support the video. But we gained a lot of experience of how to create courses, and then we used it starting in 2007 to create very high quality videos, now supported by adequate bandwidth in many parts of the world, with the Open Yale courses. We’ve released over 40 of them, and they gained a wide audience.

So here we have yet another initiative offering non-credit elite courses, and one of the biggest challenges that Coursera faces is that it has yet to find a viable business model. The company is living on $85 million in venture capital investment and has not yet found revenue sources that go beyond a few million dollars per year (Signature Track). Levin called out this challenge in the same NYT interview.

Q. Doesn’t edX have an advantage in being not-for-profit, meaning they don’t have to worry about returning on investment so soon? Yesterday Andrew Ng said, “We’ve raised $85 million, so we’ve got some runway.” How much runway?

A. I think the principal investors in Coursera understand that this is a long term play. We’re fortunate to have patient investors; and as Andrew said, we’re quite adequately capitalized. I think we can become financially viable certainly within that five-year framework.

Q. You’re an economist. How do you get from here to there?

A. Right now courses are free and we’re charging for certification. We think that as the idea of using Coursera courses for professional advancement grows, the numbers seeking certificates will grow. And the price we charge probably can grow, too. A move from $50 or $60 for Signature Track to $100 is certainly imaginable. At $100 a pop, if you had two or three, or five million people. …
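Back-of-envelope, the scale Levin is gesturing at is easy to compute (my own arithmetic, not a company projection):

```python
# Hypothetical certificate revenue at the price point Levin floats above.
price = 100  # dollars per certificate
for buyers in (2_000_000, 3_000_000, 5_000_000):
    print(f"{buyers:,} certificates x ${price} = ${buyers * price / 1e6:,.0f}M per year")
```

Even the low end of that range would dwarf the few million dollars per year that Signature Track currently generates.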

I would suggest that Coursera will not “get from here to there” by altering the record of the past. AllLearn failed to generate sufficient interest in its courses, and the proximate cause was not “insufficient bandwidth”. AllLearn in fact had several approaches that alleviated bandwidth concerns, including CD-ROM delivery and the ability to turn off high-bandwidth features. AllLearn’s average learner was a 47-year-old Yale alumnus – hardly a case of low-income lack of access to sufficient bandwidth. Plenty of online ventures started in 2000 or earlier have succeeded – Penn State’s World Campus, UMUC, most large for-profits, UMassOnline, University of Central Florida, etc. This was not a case of being “too early”.

Read the University Business post-mortem and the Yale article. The issue involved economics and insufficient revenue to offset expenses.

Coursera and all the xMOOC providers face this same long-term challenge of finding adequate business models. I called out this challenge as one of the four key barriers that MOOCs faced in a July 2012 post. I speak as someone who would like to see MOOCs succeed – not in their current form, but in a form that evolves to better meet learner needs. This healthy evolution won’t happen, however, unless the providers honestly evaluate the lessons of the past.


Update (4/17): It appears that Levin is doubling down on his new theory about AllLearn. From the Yale Daily News today:

“I knew from the beginning that [online ed] had great potential,” Levin said.

In fact, Yale’s experiment with online education began under Levin’s presidency. In 2000, Yale launched AllLearn, a joint venture with Stanford and Oxford that faltered after four years due to insufficient technology at the time. The Internet bandwidth in most homes was inadequate for properly sharing course material, Levin admitted.

For the record, I can find no explanations from the time of AllLearn’s demise in 2006 that insufficient bandwidth was the problem. What we do have are statements including from Levin himself that insufficient demand leading to unsustainable revenue was the problem.


Links to External Articles and Interviews

Tue, 2014-04-15 11:41

Last week I was off the grid (not just lack of Internet but also lack of electricity), but thanks to publishing cycles I managed to stay artificially productive: two blog posts and one interview for an article.

Last week brought news of a new study on textbooks for college students, this time from a research arm of the National Association of College Stores. The report, “Student Watch: Attitudes and Behaviors toward Course Materials, Fall 2013”, seems to throw some cold water on the idea of digital textbooks, based on the press release summary [snip]

While there is some useful information in this survey, I fear that the press release is missing some important context. Namely, how can students prefer something that is not really available?

March 28, 2014 may well go down as the turning point where Big Data lost its placement as a silver bullet and came down to earth in a more productive manner. Triggered by a March 14 article in Science Magazine that identified “big data hubris” as one of the sources of the well-known failures of Google Flu Trends,[1] there were five significant articles in one day on the disillusionment with Big Data. [snip]

Does this mean Big Data is over and that education will move past this over-hyped concept? Perhaps Mike Caulfield from the Hapgood Blog stated it best, including adding the education perspective . . .

This is the fun one for me, as I finally have my youngest daughter’s interest (you made Buzzfeed!). Buzzfeed has added a new education beat focusing on the business of education.

The public debut last week of education technology company 2U, which partners with nonprofit and public universities to offer online degree programs, may have looked like a harbinger of IPO riches to come for companies that, like 2U, promise to disrupt the traditional education industry. At least that’s what the investors and founders of these companies want to believe. [snip]

“We live in a post-Facebook era where startups have this idea that they can design a good product and then just grow, grow, grow,” said Phil Hill, an education technology consultant and analyst. “That’s not how it actually works in education.”


Head in the Oven, Feet in the Freezer

Mon, 2014-04-14 05:19

Some days, the internet gods are kind. On April 9th, I wrote,

We want talking about educational efficacy to be like talking about the efficacy of Advil for treating arthritis. But it’s closer to talking about the efficacy of various chemotherapy drugs for treating a particular cancer. And we’re really really bad at talking about that kind of efficacy. I think we have our work cut out for us if we really want to be able to talk intelligently and intelligibly about the effectiveness of any particular educational intervention.

On the very same day, the estimable Larry Cuban blogged,

So it is hardly surprising, then, that many others, including myself, have been skeptical of the popular idea that evidence-based policymaking and evidence-based instruction can drive teaching practice. Those doubts have grown larger when one notes what has occurred in clinical medicine with its frequent U-turns in evidence-based “best practices.” Consider, for example, how new studies have often reversed prior “evidence-based” medical procedures.

  • Hormone therapy for post-menopausal women to reduce heart attacks was found to be more harmful than no intervention at all.
  • Getting a PSA test to determine whether the prostate gland showed signs of cancer for men over the age of 50 was “best practice” until 2012, when advisory panels of doctors recommended that no one under 55 should be tested and those older might be tested if they had family histories of prostate cancer.

And then there are new studies that recommend women to have annual mammograms, not at age 50 as recommended for decades, but at age 40. Or research syntheses (sometimes called “meta-analyses”) that showed anti-depressant pills worked no better than placebos.

These large studies done with randomized clinical trials–the current gold standard for producing evidence-based medical practice–have, over time, produced reversals in practice. Such turnarounds, when popularized in the press (although media attention does not mean that practitioners actually change what they do with patients), often diminished faith in medical research, leaving most of us–and I include myself–stuck as to which healthy practices we should continue and which we should drop.

Should I, for example, eat butter or margarine to prevent a heart attack? In the 1980s, the answer was: Don’t eat butter, cheese, beef, and similar high-saturated fat products. Yet a recent meta-analysis of those and subsequent studies reached an opposite conclusion. Figuring out what to do is hard because I, as a researcher, teacher, and person who wants to maintain good health, have to sort out what studies say and how those studies were done from what the media report, and then how all of that applies to me. Should I take a PSA test? Should I switch from margarine to butter?

He put it much better than I did. While the gains in overall modern medicine have been amazing, anybody who has had even a moderately complex health issue (like back pain, for example) has had the frustrating experience of having a billion tests, being passed from specialist to specialist, and getting no clear answers.[1] More on this point later.

Larry’s next post—actually a guest post by Francis Schrag—is an imaginary argument between an evidence-based education proponent and a skeptic. I won’t quote it here, but it is well worth reading in full. My own position is somewhere between the proponent and the skeptic, though leaning more in the direction of the proponent. I don’t think we can measure everything that’s important about education, and it’s very clear that pretending that we can has caused serious damage to our educational system. But that doesn’t mean I think we should abandon all attempts to formulate a science of education.

For me, it’s all about literacy. I want to give teachers and students skills to interpret the evidence for themselves and then empower them to use their own judgment. To that end, let’s look at the other half of Larry’s April 9 post, the title of which is “What’s The Evidence on School Devices and Software Improving Student Learning?”

Lies, Damned Lies, and…

The heart of the post is a study by John Hattie, a professor at the University of Auckland (NZ). He’s done meta-analysis on an enormous number of education studies, looking at effect sizes measured on a scale where 0.1 is negligible and 1.0 is a full standard deviation.

He found that the “typical” effect size of an innovation was 0.4. To compare how different classroom approaches shaped student learning, Hattie used the “typical” effect size (0.4) to mean that a practice reached the threshold of influence on student learning (p. 5). From his meta-analyses, he then found that class size had a .20 effect (slide 15) while direct instruction had a .59 effect (slide 21). Again and again, he found that teacher feedback had an effect size of .72 (slide 32). Moreover, teacher-directed strategies of increasing student verbalization (.67) and teaching meta-cognition strategies (.67) had substantial effects (slide 32). What about student use of computers (p. 7)? Hattie included many “effect sizes” of computer use from distance education (.09), multimedia methods (.15), programmed instruction (.24), and computer-assisted instruction (.37). Except for “hypermedia instruction” (.41), all fell below the “typical” effect size (.40) of innovations improving student learning (slides 14-18). Across all studies of computers, then, Hattie found an overall effect size of .31 (p. 4).
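As context for these numbers: the “effect size” being averaged here is typically a standardized mean difference (Cohen’s d) – the gap between treatment and control group means divided by the pooled standard deviation. A minimal sketch with made-up exam scores (not Hattie’s data):

```python
import statistics as stats

def cohens_d(treatment, control):
    """Standardized mean difference: (mean_t - mean_c) / pooled standard deviation."""
    n_t, n_c = len(treatment), len(control)
    pooled_var = ((n_t - 1) * stats.variance(treatment) +
                  (n_c - 1) * stats.variance(control)) / (n_t + n_c - 2)
    return (stats.mean(treatment) - stats.mean(control)) / pooled_var ** 0.5

# Made-up scores sized so the effect lands near Hattie's "typical" 0.4.
control   = [62, 70, 78, 66, 74, 58, 82, 70]   # mean 70, sd 8
treatment = [65, 73, 81, 69, 77, 61, 85, 73]   # mean 73, sd 8
print(f"d = {cohens_d(treatment, control):.2f}")  # d = 0.38
```

An effect of 0.4 means the average treated student scores about 0.4 standard deviations above the average control student; whether that matters still depends entirely on what was measured and on whom.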

The conclusion is that changing a classroom practice can often produce a significant effect size while adding a technology rarely does. But as my father likes to say, if you stick your head in the oven and your feet in the freezer, on average you’ll be comfortable.

Let’s think about introducing clickers to a classroom, for example. What class are you using them in? How often do you use them? When do you use them? What do you use them for? Clickers in and of themselves change nothing. No intervention is going to be educationally effective unless it gets students to perceive, act, and think differently. There are lots of ways to use clickers in the classroom that have no such effect. My guess is that, most of the time, they are used for formative assessments. Those can be helpful or not, but generally, when done in this way, they are more about informing the teacher than they are directly about helping the student.

But there are other uses of clicker technologies. For example, University of Michigan professor Perry Samson recently blogged about using clickers to compare students’ sense of their physical and emotional well-being with their test performance:

Figure 2. Example of results from a student wellness question for a specific class day. Note the general collinearity of physical and emotional wellness.

I have observed over the last few years that a majority of the students who were withdrawing from my course in mid-semester commented on a crisis in health or emotion in their lives. On a lark this semester I created an image-based question to ask students in LectureTools at the beginning of each class (example, Figure 2) that requested their self-assessment of their current physical and emotional state. Clearly there is a wide variation in students’ perceptions of their physical and emotional state. To analyze these data I performed cluster analysis on students’ reported emotional state prior to the first exam and found that temporal trends in this measure of emotional state could be clustered into six categories.

Figure 3. Trends in students’ self-reported emotional state prior to the first exam in class are clustered into six categories. The average emotional state for each cluster appears to be predictive of median first exam scores.

Perhaps not surprisingly, Figure 3 shows that student outcomes on the first exam were very much related to the students’ self-assessment of their emotional state prior to the exam. This result is hard evidence for the intuitive: students perform better when they are in a better emotional state.
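Perry’s post doesn’t specify his clustering method, but the analysis he describes can be sketched in a few lines. Everything below – the choice of k-means, the data shapes, the rating scale – is my assumption, not his actual pipeline:

```python
# Hypothetical sketch: group per-student trajectories of emotional self-ratings
# (one rating per class day before the first exam) into six clusters.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
n_students, n_class_days = 200, 12
# Simulated self-ratings on a 1-5 scale; real data would come from the clicker tool.
ratings = rng.integers(1, 6, size=(n_students, n_class_days)).astype(float)

kmeans = KMeans(n_clusters=6, n_init=10, random_state=0).fit(ratings)
for k in range(6):
    members = ratings[kmeans.labels_ == k]
    print(f"cluster {k}: {len(members):3d} students, mean rating {members.mean():.2f}")
```

The step that makes Figure 3 interesting – relating each cluster’s average emotional state to median first-exam scores – would then be a simple group-by on the exam results.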

I don’t know what Perry will end up doing with this information in terms of a classroom intervention. Nor do I know whether any such intervention will be effective. But it seems common sense not to lump it in with a million billion professors asking quiz questions on their clickers to aggregate it into an average of how effective clickers are.

To be fair, that’s not Larry’s point in quoting the Hattie study. He’s arguing against the reductionist argument that technology fixes everything—an argument which seems obviously absurd to everybody except, sadly, the people who seem to have the power to make decisions. But my point is that it is equally absurd to use this study as evidence that technology is generally not helpful. What I think it suggests is that it makes little sense to study the efficacy of educational technologies or products outside the context of the efficacy of the practices that they enable. More importantly, it’s a good example of how we all need to get much more sophisticated about reading the studies so we can judge for ourselves what they do and do not prove.

Of Back Mice and Men

I have had moderate to severe back pain for the past seven years. I have been to see orthopedists, pain specialists, rheumatologists, urologists, chiropractors, physical therapists, acupuncturists, and massage therapists. In many cases, I have seen more than one in any given category. I had X-rays, CAT scans, MRIs, and electrical probes inserted into my abdomen and legs. I had many needles of widely varying gauges stuck in me, grown humans walking on my back, gallons of steroids injected into me. I had the protective sheaths of my nerves fried with electricity. If you’ve ever had chronic pain, you know that you would probably go to a voodoo priest and drink goat urine if you thought it might help. (Sadly, there are apparently no voodoo priests in my area of Massachusetts—or at least none who have a web page.) Nobody I went to could help me.

Not too long ago, I had cause to visit my primary care physician, who is a good old country doctor. No specialist certificates, no Ivy League medical school degrees. Just a solid GP with some horse sense. In a state of despair, I explained my situation to him. He said, “Can I try something? Does it hurt when I touch you here?” OUCH!!!!

It turns out that I have a condition called “back mice,” also called “episacral lipomas” when it is referred to in the medical literature, which, it turns out, happens rarely. I won’t go into the details of what they are, because that’s not important to the story. What’s important is what the doctor said next. “There’s hardly anything on them in the literature,” he said. “The thing is, they don’t show up on any scans. They’re impossible to diagnose unless you actually touch the patient’s back.”

I thought back to all the specialists I had seen over the years. None of the doctors ever once touched my back. Not one. My massage therapist actually found the back mice, but she didn’t know what they were, and neither of us knew that they were significant. It turns out that once my GP discovered that these things exist, he started finding them everywhere. He told me a story of an eighty-year-old woman who had been hospitalized for “non-specific back pain.” They doped her up with opiates and the poor thing couldn’t stand up without falling over. He gave her a couple of shots in the right place, and a week later she was fine. He has changed my life as well. I am not yet all better—we just started treatment two weeks ago—but I am already dramatically better.

The thing is, my doctor is an empiricist. In fact, he is one of the best diagnosticians I know. (And I have now met many.) He knew about back mice in the first place because he reads the literature avidly. But believing in the value of evidence and research is not the same thing as believing that only that which has been tested, measured, and statistically verified has value. Evidence should be a tool in the service of judgment, not a substitute for it. Isn’t that what we try to teach our students?

  1. But I’m not bitter.

The post Head in the Oven, Feet in the Freezer appeared first on e-Literate.

AAC&U GEMs: Exemplar Practice

Sat, 2014-04-12 06:04

A while back, I wrote about my early experiences as a member of the Digital Working Group for the AAC&U General Education Maps and Markers (GEMs) initiative and promised that I would do my homework for the group in public. Today I will make good on that promise. The homework is to write up an exemplar practice of how digital tools and practices can help support students in their journeys through GenEd.

As I said in my original post, I think this is an important initiative. I invite all of you to write up your own exemplars, either in the comments thread here or in your own blogs or other digital spaces.

The template for the exemplar is as follows:

Evocative Examples of Digital Resources and Strategies that can Improve General Education: What are these cases a case of?

Brief Description of practice:

  • In what ways is the practice effective or transformative for student learning? What’s the evidence? How do we know? (If you can tie the practice to any of the outcomes in the DQP and/or the LEAP Essential Learning Outcomes, that would be great.)
  • How does the practice reflect the digital world as lived student culture? What are the skills and content associated with the digital practice or environment? How does the practice deepen or shape behavior of students with digital tools and environments with which they may be variously familiar?
  • What does it take to make the practice work? What is the impact on faculty time? Does it take a team to design, implement, assess? What are the implications for organizational change?
  • How is it applicable to gen ed (if example doesn’t come from gen ed)?
  • Are there references or literature to which you can point that is relevant to the practice?

I decided to base my exemplar on the MSU psychology class that I’ve written about recently.

Flipped and Blended Class with Homework Platform Support

In this practice, every effort is made to move both direct instruction and formative assessment outside of class time. The “flipped classroom” (or “flipped learning”) approach provides students with instructional videos and other supplemental content, while a digital homework platform gives them regular formative assessments. In order to give students more time for these activities, the amount of in-class time is reduced, making the course effectively a blended or hybrid course. In-class time is devoted either to class discussion, which is informed by the instructor’s knowledge of the students’ performance on the regular formative assessments, or to group work.

In what ways is the practice effective or transformative for student learning? What’s the evidence? How do we know?

This is a particular subset of a practice that the National Center for Academic Transformation (NCAT) calls “the replacement model”, and NCAT has a variety of course redesign projects that demonstrated improved outcomes relative to controls. For example, a redesign of a psychology Gen Ed course at Missouri State University (MSU) produced the following results:

  • On the 30-item comprehensive exam, students in the redesigned sections showed significantly greater pre-to-post improvement (84%) than the traditional comparison group (54%).
  • On the 50-item comprehensive exam, students in the redesigned course showed significantly greater pre-to-post improvement (62%) than the traditional sections (37%).
  • Attendance improved substantially in the redesigned section (Fall 2011 traditional mean percent attendance = 75% versus Fall 2012 redesign mean percent attendance = 83%).
  • Over a three-semester period following the redesign, the course DFW rate improved from 24.6% to 18.4%, most of which was due to a significant drop in the withdrawal rate. (See the sketch after this list for how these metrics are computed.)
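To make these metrics concrete, here is a minimal sketch in Python, assuming the standard definitions: the “DFW rate” is the share of enrolled students who earn a D, an F, or a W (withdrawal), and “improvement” is the pre-test-to-post-test gain expressed as a fraction of the pre-test score. The counts and scores below are hypothetical, chosen only to reproduce the reported figures; the NCAT report gives the rates themselves, not the underlying numbers.

    def dfw_rate(d: int, f: int, w: int, enrolled: int) -> float:
        """Share of enrolled students who earned a D, an F, or a W."""
        return (d + f + w) / enrolled

    def improvement(pre: float, post: float) -> float:
        """Pre-to-post exam gain as a fraction of the pre-test score."""
        return (post - pre) / pre

    # Hypothetical numbers chosen to reproduce the reported figures:
    print(f"DFW rate:  {dfw_rate(d=30, f=25, w=18, enrolled=297):.1%}")  # 24.6%
    print(f"Exam gain: {improvement(pre=15.0, post=27.6):.0%}")          # 84%

One caveat worth noting: because “improvement” is measured relative to the pre-test baseline, the 84% and 54% figures are gains rather than raw exam scores, so they are most comparable when the two groups start from similar baselines.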

One of the investigators of the project, who also was a course instructor, indicated that the quality of class discussion improved significantly as well.

Possible reasons why the practice is effective include the following:

  • Teacher/student contact time is maximized for interactivity.
  • Regular formative assessments with instant feedback help students come to class better prepared, maximizing discussion time with the teacher and with peers.
  • Feedback from the homework system enables the instructor to walk into class knowing where students need the most help.
  • A reduced number of physical class meetings lowers the chances that a student will withdraw due to grade-damaging absences.

How does the practice reflect the digital world as lived student culture? What are the skills and content associated with the digital practice or environment? How does the practice deepen or shape behavior of students with digital tools and environments with which they may be variously familiar?

Students are used to getting their information online. They are also often very effective at “time slicing,” in which they use small increments of time (e.g., when they are on a bus or waiting for an appointment) to get things done. This exemplar practice enables students to do that with the portions of academic work that are suited to it while preserving and actually expanding room for long and deep academic discussion.

What does it take to make the practice work? What is the impact on faculty time? Does it take a team to design, implement, assess? What are the implications for organizational change?

The redesign effort is substantial and, because it involves creating significant digital resources, is often best done by a team (although that is not strictly necessary). For the purposes of this design, the homework platform need not be a cutting-edge adaptive system, as long as it provides formative assessments that are consistent with the summative assessments and gives both students and instructors good, regular feedback. That said, implementing the technology is often not seamless and may take several semesters of working out the kinks. The shift to a flipped classroom also puts new demands on students, and it may take several semesters for the campus culture to adjust to the new approach.

How is it applicable to gen ed (if example doesn’t come from gen ed)?

This model is often used in Gen Ed. It is particularly appropriate for larger classes where the DFW rate is high and where a significant percentage of the subject matter—at least the foundational knowledge on the lower rungs of Bloom’s taxonomy—can be assessed through software.

Are there references or literature to which you can point that is relevant to the practice?

http://mfeldstein.com/efficacy-adaptive-learning-flipped-classroom/

http://mfeldstein.com/efficacy-adaptive-learning-flipped-classroom-part-ii/

http://www.thencat.org/PlanRes/R2R_Model_Rep.htm

http://www.thencat.org/PCR/R3/TCC/TCC_Overview.htm

http://www.flippedlearning.org/

The post AAC&U GEMs: Exemplar Practice appeared first on e-Literate.

Efficacy, Adaptive Learning, and the Flipped Classroom, Part II

Wed, 2014-04-09 10:45

In my last post, I described positive but mixed results of an effort by MSU’s psychology department to flip and blend their classroom:

  • On the 30-item comprehensive exam, students in the redesigned sections showed significantly greater pre-to-post improvement (84%) than the traditional comparison group (54%).
  • On the 50-item comprehensive exam, students in the redesigned course showed significantly greater pre-to-post improvement (62%) than the traditional sections (37%).
  • Attendance improved substantially in the redesigned section (Fall 2011 traditional mean percent attendance = 75% versus Fall 2012 redesign mean percent attendance = 83%).
  • They did not get a statistically significant improvement in the number of failures and withdrawals, which was one of the main goals of the redesign, although they note that “it does appear that the distribution of A’s, B’s, and C’s shifted such that in the redesign, there were more A’s and B’s and fewer C’s compared to the traditional course.”
  • In terms of cost reduction, while they fell short of their 17.8% goal, they did achieve a 10% drop in the cost of the course….

It’s also worth noting that MSU expected to increase enrollment by 72 students annually but actually saw enrollment decline by 126 students, which undercut their ability to deliver the expected cost savings to the institution.

Those numbers were based on the NCAT report that was written up after the first semester of the redesigned course. But that wasn’t the whole story. It turns out that, after several semesters of offering the course, MSU was able to improve their DFW numbers after all:

[Chart: MSU DFW rates over the semesters following the redesign]

That’s a fairly substantial reduction. In addition, their enrollment numbers have returned to roughly what they were pre-redesign (although they haven’t yet achieved the enrollment increases they originally hoped for).

When I asked Danae Hudson, one of the leads on the project, why she thought it took time to see these results, here’s what she had to say:

I do think there is a period of time (about a full year) where students (and other faculty) are getting used to a redesigned course. In that first year, there are a few things going on 1) students/and other faculty are hearing about “a fancy new course” – this makes some people skeptical, especially if that message is coming from administration; 2) students realize that there are now a much higher set of expectations and requirements, and have all of their friends saying “I didn’t have to do any of that!” — this makes them bitter; 3) during that first year, you are still working out some technological glitches and fine tuning the course. We have always been very open with our students about the process of redesign and letting them know we value their feedback. There is a risk to that approach though, in that it gives students a license to really complain, with the assumption that the faculty team “doesn’t know what they are doing”. So, we dealt with that, and I would probably do it again, because I do really value the input from students.

I feel that we have now reached a point (2 years in) where most students at MSU don’t remember the course taught any other way and now the conversations are more about “what a cool course it is etc”.

Finally, one other thought regarding the slight drop in enrollment we had. While I certainly think a “new blended course” may have scared some students away that first year, the other thing that happened was there were some scheduling issues that I didn’t initially think about. For example, in the Fall of 2012 we had 5 sections and in an attempt to make them very consistent and minimize missed classes due to holidays, we scheduled all sections on either a Tuesday or a Wednesday. I didn’t think about how that lack of flexibility could impact enrollment (which I think it did). So now, we are careful to offer sections (Monday through Thursday) and in morning and afternoon.

To sum up, she thinks there were three main factors: (1) it took time to get the design right and the technology working optimally; (2) there was a shift in cultural expectations on campus that took several semesters; and (3) there was some noise in the data due to scheduling glitches.

There are a number of lessons one could draw from this story, but from the perspective of educational efficacy, I think it underlines how little the headlines (or advertisements) we get really tell us, particularly about components of a larger educational intervention. We could have read, “Pearson’s MyPsychLabs Course Substantially Increased Students’ Knowledge, Study Shows.” That would have been true, but we would have little idea how much improvement there would have been had the course not been fairly radically redesigned at the same time. We also could have read, “Pearson’s MyPsychLabs Course Did Not Improve Pass and Completion Rates, Study Shows.” That would have been true too, but it would have told us nothing about the substantial gains over the semesters following the study. We want talking about educational efficacy to be like talking about the efficacy of Advil for treating arthritis. But it’s closer to talking about the efficacy of various chemotherapy drugs for treating a particular cancer. And we’re really, really bad at talking about that kind of efficacy. I think we have our work cut out for us if we really want to be able to talk intelligently and intelligibly about the effectiveness of any particular educational intervention.

The post Efficacy, Adaptive Learning, and the Flipped Classroom, Part II appeared first on e-Literate.