
Michael Feldstein

What We Are Learning About Online Learning...Online

University of California’s $220 million payroll project reboot

Mon, 2014-07-21 16:11

Chris Newfield has an excellent post at Remaking the University about the University of California’s budget situation and how it relates to the recent Moody’s negative outlook on higher education finances. The whole article is worth reading, but one section jumped off the page for me [emphasis added].

The sadder example of ongoing debt is the request for “external financing for the UCPath project.” UC Path was UCOP’s flagship solution to UC inefficiencies that were allegedly wasting taxpayers’ money–in other words, new enterprise software for the systemwide consolidation of payroll and human resources functions. This is boring, important back office stuff, hardly good material for a political campaign to show the state “UC means business,” but that’s what it became. Rather than funding each campus’s decades-old effort to upgrade its systems on its own, UCOP sought centralization, which predictably introduced new levels of cost, complexity, and inefficiency, since centralization is often not actually efficient.

I had heard nothing good about UC Path from people trying to implement it on campuses, and have tried to ignore it, but this week it has resurfaced as a problem at the Regental level. The project timeline has grown from 48 to 72 months, and its costs are said to be $220 million (it had spent $131 million by May 2014). Worse, the repayment schedule has mushroomed from seven to twenty years. Annual payments are to be something like $25 million. Campuses are to be taxed to pay for 2015-era systems until 2035, which is like taking out a twenty year mortgage to pay for your refrigerator, except that your fridge will be working better in 2035 than next year’s PeopleSoft product. Since the concurrent budget document notes efficiency savings of $30 million per year (top of page 4), UCOP may be spending $220 million to save a net $5 million per year over a couple of decades–and going into debt to do it. In the end, an efficiency measure has turned into a literal liability.

What the hell – a $220 million project to save money? How did this project get in this much trouble?

The UCPath project concept originated in 2009 with the project announcement coming in late 2011. The goal is to replace the Payroll Personnel System (PPS) that runs separately for each of the 11 UC locations with Oracle’s PeopleSoft payroll and HR systems. PPS is over 30 years old, and there are major risk issues with such an old system as well as a host of inefficient processes. The original project plans were based on a $170 million budget1 with the first wave of go-live for the Office of the President and 3 campuses scheduled for early 2013. All campuses would be live on the new system by late 2014.2

From a presentation to the Board of Regents in January 2012:

Over the same period, cost reductions are expected to be approximately $750 million from technology efficiency gains, process standardization and consolidation of transactional activities into a UC-wide shared services center. Overall, the project has a net present value of approximately $230 million (at a nine percent discount rate) with breakeven in year 5.

Subsequent promises were made in March of 2012:

“We think this project is likely to pay for itself within five years, and UC could be accruing over $100 million in annual savings by the eighth year,” said Peter Taylor, UC’s chief financial officer. “We also expect to deliver HR and payroll services with increased efficiency, accuracy and quality.”

At the Board of Regents’ meeting last week, the project team gave the first update to the regents since January 2012 (itself a troubling sign). See this Sharestream video from 2:56:10 – 3:22:40.

By Fall 2013 the project was in trouble, and UC brought in new leadership: Mark Cianca as Deputy CIO and Sabu Varghese as Program Director. Their first act was to do a health check on the project, and the results were not pretty (as described in last week’s Board of Regents’ meeting).

  • The project team and implementation partner (Oracle) had treated the project as a software replacement rather than a fundamental business transformation initiative.
  • The individual campuses had not been consulted on changes in business processes, and in fact they had not even been asked to sign off on future state business processes that each campus would have to run to stay in operation.
  • The new project team had to go through more than 100 future state processes with campuses and get agreement on how to proceed.

The result, as described by UC President Janet Napolitano at last week’s meeting, was the team having to “reboot the entire project”.

Based on the reboot, the current plan is $220 million, with the first wave complete by February 2016 and all campuses live by mid-2017. That’s $50 million over budget and 24 months behind schedule.

Deployment Schedule Jul 2014

But the planning is not complete. They are working up their “final” replan of budget and timeline, which they will present in January 2015.

Topics for Jan 2015

How solid is the current estimate? The implementation schedule is listed as the highest risk, even with the delays.

Major Risks Jul 2014

The project financing has changed so much that UC is now facing the need to use external financing over a much longer term, as described in the material for last week’s board meeting.

Therefore, this item seeks approval to refinance the UCPath loan out of CapEquip and into external financing to achieve the financing customization required. As indicated above, the original repayment plan based on the $220.5 million budget was expected to have been repaid with annual debt service of $25 million. This would have resulted in a 12-year loan term once principal was to be repaid. In January 2015, UCPath project leadership plans to present a revised project timeline, a revised project budget and a revised estimated loan repayment schedule. Project leadership will work with the campus budget officers (and campus budget department staff) to develop: (1) an appropriate campus cost allocation strategy; (2) an estimated repayment schedule that will reflect commencement of principal repayments in conjunction with the final campus deployment (estimated to be early 2017); and (3) an estimated 15-20 year loan repayment period.

Notes
  • The new project team seems quite credible, and for the most part they addressed the right points during the briefing. Kudos to UC for making this change in leadership.
  • This is a major project turnaround (or reboot, in Napolitano’s words), but I’m not sure that UC had communicated the significance of the project changes to system campuses (and certainly not to the media).
  • I would view the current plan of $220 million and Q1 2017 full deployment as a best-case situation – the team told the regents that they were going to update the plan, and ERP projects almost never come in earlier than planned.
  • The actual amount is much higher than $220 million, based on this footnote: “The $10 million in tenant improvements approved for the UCPath Center Riverside site as well as the $17.4 million purchase of the facility (UCPath is currently projected to use no more than 50 percent of the building) are not included in the figures above.”
  • How do you go 2.5 years between updates from what is now a quarter billion dollar project?
  • What about the current estimate of benefits – is it $30 million per year as Chris described or closer to $100 million per year? One big concern I have is that the information on project benefits was not updated or presented to the regents, nor did the regents ask about it. While I question the $25 million financing and $30 million benefits numbers, I think Chris got it exactly right by noting how UC administration is failing to ask hard questions:

Moving forward, I’m afraid that officials are going to have to get much better at admitting mistakes like UCPath, and then actually undoing them. I couldn’t listen to the recording of the UCPath conversation, but Cloudminder made it sound like a lot of restrained finger-pointing with no solution in sight. Did anyone say, “well, this seemed like a good idea at the time, but it’s not. Let’s just cancel it, figure out where we went wrong, and come up with something better”?

It is possible that continuing with the rebooted project is the right answer, but UC is not even asking the question. Failing to ask whether 15-20 year financing of a new ERP makes sense seems like a major oversight. Won’t this lock UC into an Oracle system that is already antiquated for another two decades or more? It seems stunning to me that UC is planning to commit to $220 million of external financing without asking some basic questions.
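To put rough numbers on that last question, here is a minimal sketch in TypeScript that uses only the round figures quoted above: roughly $30 million per year in claimed efficiency savings, roughly $25 million per year in debt service, the nine percent discount rate from the January 2012 NPV claim, and an assumed 20-year repayment horizon. The actual cash-flow schedule has not been published, so treat this as illustrative arithmetic rather than an official projection.

```typescript
// Illustrative only: round figures from the post ($30M/year claimed savings,
// $25M/year debt service, a 20-year horizon, and the 9% discount rate cited
// in the January 2012 NPV claim). The real schedule is not public.

// Present value of a constant annual cash flow received for `years` years.
function presentValueOfAnnuity(annual: number, rate: number, years: number): number {
  return (annual * (1 - Math.pow(1 + rate, -years))) / rate;
}

const annualSavings = 30;     // $M per year (the budget document's efficiency figure)
const annualDebtService = 25; // $M per year (assumed flat, per the original plan)
const years = 20;             // assumed repayment horizon
const discountRate = 0.09;

const netAnnual = annualSavings - annualDebtService; // $5M per year
const pvOfNetStream = presentValueOfAnnuity(netAnnual, discountRate, years);

console.log(`Net annual benefit: $${netAnnual}M`);
console.log(`Present value of that stream over ${years} years: ~$${pvOfNetStream.toFixed(0)}M`);
// Roughly $46M in present-value terms under these assumptions.
```

On those assumptions the net benefit is about $5 million a year, or roughly $46 million in present-value terms, which is exactly why the updated benefits estimate promised for January 2015 matters.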

  1. One regent last week stated the original request was actually $156 million.
  2. All public projects should fear the Wayback Machine for checking old web pages.

The post University of California’s $220 million payroll project reboot appeared first on e-Literate.

Desire2Wha?

Sun, 2014-07-20 08:16

It would be deeply unfair of me to mock Blackboard for having a messy but substantive keynote presentation and not give equal time to D2L’s remarkable press release, pithily entitled “D2L Supercharges Its Integrated Learning Platform With Adaptive Learning, Robust Analytics, Game-Based Learning, Windows® 8 Mobile Capabilities, And The Newest Education Content All Delivered In The Cloud.” Here’s the first sentence:

D2L, the EdTech company that created the world’s first truly integrated learning platform (ILP), today announces it is supercharging its ILP by providing groundbreaking new features and partnerships designed to personalize education and eliminate the achievement gap.

I was going to follow that quote with a cutting remark, but really, I’m not sure that I have anything to say that would be equal to the occasion. The sentence speaks for itself.

For a variety of reasons, Phil and I did not attend D2L FUSION this year, so it’s hard to tell from afar whether there is more going on at the company than meets the eye. I’ll do my best to break down what we’re seeing in this post, but it won’t have the same level of confidence that we have in our Blackboard analysis.

Let me get to the heart of the matter first. Does it look to us like D2L has made important announcements this year? No, it does not. Other than, you know, supercharging its ILP by providing groundbreaking new features and partnerships designed to personalize education and eliminate the achievement gap. They changed their product name to “Brightspace” and shortened their company name to D2L. The latter strikes me as a particularly canny PR move. If they are going to continue writing press releases like their last one, it is probably wise to remove the temptation of the endless variety of potential “Desire2” jokes. Anyway, THE Journal probably does the best job of summarizing the announcements. For an on-the-ground account of the conference and broader observations about shifts in the company’s culture, read D’Arcy Norman’s post. I’ve been following D’Arcy since I got into blogging ten years ago and have learned to trust his judgment as a level-headed on-the-ground observer.

From a distance, a couple of things jump out at me. First, it looks to me like D2L is trying to become a kind of a content player. Having acquired the adaptive platform in Knowillage, they are combining it with the standards database that they acquired with the Achievement Standards Network. They are also making a lot of noise about enhancements to and content partnerships for their Binder product, which is essentially an eBook platform. Put all of this together, and you get something that conceptually is starting to look (very) vaguely like CogBooks. It wants to be an adaptive courseware container. If D2L pulls this off it will be significant, but I don’t see signs that they have a coherent platform yet—again, acknowledging that I wasn’t able to look at the strategy up close at FUSION this year and could easily be missing critical details.

Second, their announcement that they are incorporating IBM’s Cognos into their Insights learning analytics platform does not strike me as a good sign for Insights. As far as we have been able to tell from our sources, that product has languished since Al Essa left the company for McGraw Hill. One problem has been that their technical team was unable to deliver on the promise of the product vision. There were both data integrity and performance issues. This next bit is speculation on my part, but the fact that D2L is announcing that they plan to use the Cognos engine suggests to me that the company has thus far failed to solve those problems and now is going to a third party to solve them. That’s not necessarily a bad strategy, but it reinforces our impression that they’ve lost another year on a product that they hyped to the heavens and raises questions about the quality of their technical leadership.

The post Desire2Wha? appeared first on e-Literate.

It’s The End of Cal State Online As We Know It . . .

Sat, 2014-07-19 08:48

In a letter to campus leaders last month, the Cal State University system office announced that Cal State Online will no longer operate as originally conceived. Emphasis added below.

As the CSU continues to expand its online education strategies, Cal State Online will evolve as a critical component. An early Cal State Online goal will continue: to increase the quality and quantity of fully online education offerings to existing and prospective CSU students, resulting in successful completion of courses and graduation.

The re-visioning of Cal State Online was recommended by the Council of Presidents and approved by the chancellor. This will include a shift to a communication, consultation and services’ strategy for fully online campus degree programs, credentials, certificates and courses supported by opt-in shared services. Cal State Online’s shared services will be designed, delivered and managed to:

1. Make it easy for prospective and existing students to discover, decide, enroll and successfully complete their CSU online education opportunities.

2. Make it more cost-effective for CSU campuses to develop, deliver and sustain their high- quality fully online degree, credential and certificate programs and courses.

Background in a nutshell

In early 2010 a subset of the Cal State presidents – the Technology Steering Committee (TSC) – came up with a plan to push online education aggressively across the system. In fall 2011 the group commissioned a set of consultant reports to help them pick an operating model, with the reports delivered in February 2012. This study led to the creation of CSU Online, conceived as a separate 501(c)3 non-profit group1 run by the system, with the plan to use a for-profit Online Service Provider (OSP).2 Early on they realized that Colorado State University was already using the CSU Online name, and the initiative was renamed Cal State Online. The idea was to provide a one-stop shop for fully online programs offered by individual campuses. Following an RFP process, Cal State Online selected Pearson as its OSP partner in August 2012.

Some media coverage of the initiative:

The March IHE article quoted official Cal State documents to describe the initiative.

“The goal of Cal State Online is to create a standardized, centralized, comprehensive business, marketing and outreach support structure for all aspects of online program delivery for the Cal State University System,” says the draft RFP. In the open letter, the executive director offers assurances that “participation is optional” for each of the system’s nearly two dozen campuses, “all programs participating in Cal State Online are subject to the same approval processes as an on-campus program,” and “online courses will meet or exceed the quality standards of CSU face-to-face courses.”

What has changed?

This change is significant and recent, meaning that Cal State likely does not yet have full plans for what will happen next. For now:

  • Cal State Online will no longer be a separate operating entity, and the remnant, or “re-visioned” services will be run by the existing Academic Technology Services department within the Chancellor’s Office.

The team re-visioning Cal State Online will be led by Gerry Hanley (Assistant Vice Chancellor for Academic Technology Services) with Sheila Thomas (State University Dean, Extended and Continuing Education).

  • Pearson is no longer the OSP, and in fact, they had already changed their role many months ago3 to remove the on-site team and become more of a platform provider for the LearningStudio (aka eCollege) LMS and supporting services.
  • Cal State is no longer attempting to provide a centralized, comprehensive support structure “for all aspects of online program delivery” but instead will centrally provide select services through the individual campuses.
  • It is clear that Cal State is positioning this decision to show as much continuity as possible. They will continue to provide some of the services started under Cal State Online and will continue to support the programs that have already been offered through the group.

Some services will continue and CSU may keep the name, but it’s the end of Cal State Online as we know it.

I am working on a longer post to explain what happened, including (hopefully) some interviews for supporting information . . . stay tuned.

Update: Changed description of Pearson change and added footnote.

  1. I have not independently verified that the organization truly was set up as a 501(c)3.
  2. Pearson had a team in place at Cal State providing LMS, implementation and integration services, enrollment management & marketing, course design support, analytics and reporting, learning object repository, help desk and technical support, training and faculty support.
  3. I believe this occurred Feb 2014 but am not sure.

The post It’s The End of Cal State Online As We Know It . . . appeared first on e-Literate.

Blackboard’s Big News that Nobody Noticed

Fri, 2014-07-18 16:57

This week was both D2L’s FUSION conference and Blackboard’s BbWorld. The conventional wisdom going around is that there was no big news out of either conference. In Blackboard’s case, that’s just not true. In fact, there was an astonishing amount of very significant news. It’s just that Blackboard didn’t do a very good job of explaining it to people. And that, by itself, is also news.

The big corporate keynote had to be one of the strangest I’ve ever seen. CEO Jay Bhatt ran through a whole long list of accomplishments for the year, but he only gave each one a few seconds as he rattled through the checklist. He mentioned that the company has a new mission statement but didn’t bother to explain it. It took nearly an hour of mostly talking about big macro trends in education and generalities about the categories of goals that the company has set before he finally got around to new product announcements. And then commenced what I can only describe as a carpet bombing run of announcements—a series of explosions that were over by the time you realized that they had started, leaving you to wonder what the heck had just happened. Vice President of User Experience Stephanie Weeks gave a 10-minute talk that was mostly platitudes and generalities about goals for students while some truly significant UX work that her team had done played on the video screen in the background, largely unexplained. There was something mentioned about cloud. Collaborate without a Java plugin! A new mobile app. Wait, another new mobile app, but something about jobs. Wait! Go back to the last slide! I think that was…. Is it over already? It seemed like simultaneously the longest and shortest keynote ever.

Phil and I had a chance to talk to Jay about it later in the day and asked him (politely) what he was thinking. He said, “I don’t view BbWorld as a selling conference. At all.”

Wait. What? This is the Blackboard conference, right?

Apparently it was. This executive team is nothing if not earnest about wanting to talk about the real issues in education. In fact, they’re so earnest about it that they’d rather talk about that than sell you their product. As a result, what was announced in Vegas stayed in Vegas. They made a serious mistake with their keynote plan. But as far as serious mistakes go, it was kind of awesome. And revealing. In and of itself, it is a strong indicator that, having begun a major cultural shift under Ray Henderson, the Blackboard of today under Jay Bhatt is a very different beast from the Blackboard of five or six years ago. Many of your assumptions about what the company is and what you can expect from them probably aren’t safe ones to make anymore.

Anyway, it’s not surprising that people observing the conference from afar (and even from anear) missed the announcements. So what were they?

Major UX Overhaul

In the past, a “major UX overhaul” for Blackboard typically meant “we moved around some stuff in the admin panel and put on a skin that looks 5 years out of date rather than 15.” Not this time. The new UX is very different. It takes a lot of design cues from iOS (and, to a certain degree, from Windows Mobile). Forget about the 15 different submenus. They’re moving everything to a single-page model with contextual overlays that fly in when you need them. Workflows have been greatly simplified, and many of them rethought. As I sat in on a demo later in the day, I’m pretty sure that the woman in the row in front of me started crying when she saw how much easier it is to import content from an old course.

To be fair, this isn’t shipping code. “Oh, Michael,” you’re thinking about now, “How can you be such a sucker as to fall for the old vaporware bait and switch?” Well, Phil and I spent some time in their UX lab. We were given access to what was clearly a live system (as was anyone else who came to the UX lab). The UX guy managing the lab gave us a script and warned us that this is still a system in development so if we wanted to see what is actually working today we should stick to the script. But of course, we didn’t. The workflows covered by the script were significant, and a lot that wasn’t on the script was also actually already working. This is real, folks. It may not be done yet, but it’s credible. And if the alpha we saw was any indication, it’s not crazy to imagine that Blackboard could raise the bar on LMS UX design by the time that they release. I kid you not.

Underneath all of this, some serious technical work has been done. Blackboard UX is now 100% separated from the business logic, using Node.js to deliver it and putting presentation code in the browser. Also, the new UX is fully responsive. It dynamically adjusts to the size of the browser window (and device).
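To illustrate what that separation typically looks like in practice, here is a minimal sketch in TypeScript. This is not Blackboard’s code; the port, route, and course data are invented for illustration. The idea is simply that a thin Node.js layer serves presentation assets, the rendering happens in the browser, and the business logic sits behind an API that the page calls.

```typescript
// Not Blackboard's code -- a minimal sketch of the pattern described above:
// a thin Node.js layer serves the presentation page, the browser renders it,
// and the page talks to a separate API route for data. Port, route, and
// course data are placeholders.
import * as http from "http";

const PRESENTATION_PAGE = `<!doctype html>
<html>
  <body>
    <div id="courses"></div>
    <script>
      // Presentation logic runs in the browser and only talks to the API.
      fetch("/api/courses")
        .then((res) => res.json())
        .then((courses) => {
          document.getElementById("courses").textContent =
            courses.map((c) => c.name).join(", ");
        });
    </script>
  </body>
</html>`;

http
  .createServer((req, res) => {
    if (req.url === "/api/courses") {
      // In a real deployment this would call the back-end business logic;
      // here it is stubbed with static data.
      res.writeHead(200, { "Content-Type": "application/json" });
      res.end(JSON.stringify([{ name: "Biology 101" }, { name: "History 210" }]));
    } else {
      res.writeHead(200, { "Content-Type": "text/html" });
      res.end(PRESENTATION_PAGE);
    }
  })
  .listen(3000);
```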

Even more impressive was the overhaul of Blackboard Collaborate. The Java plugin is gone.1 It’s been replaced by a simple—dare I say elegant?—WebRTC-based UX. We saw a live demo of it. If Google had designed Hangouts specifically for education, they probably would have built something like what Blackboard is showing off. And it works. We saw it in action.
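For readers who have not followed the move away from browser plugins, here is a minimal sketch in TypeScript of what a WebRTC-based client does using the standard browser APIs. This is not Blackboard’s code, the STUN server URL is a placeholder, and the signaling exchange is omitted; the point is that media capture, encoding, and transport are handled natively by the browser, with nothing to install.

```typescript
// Not Blackboard's code -- just a minimal sketch of what "WebRTC instead of a
// Java plugin" means in practice: the browser itself captures media and sets
// up the peer connection, with no client-side install required.

async function startWebConference(): Promise<RTCPeerConnection> {
  // Ask the browser for camera and microphone access (replaces the old applet).
  const localStream = await navigator.mediaDevices.getUserMedia({
    video: true,
    audio: true,
  });

  // A peer connection handles encoding, transport, and NAT traversal natively.
  const pc = new RTCPeerConnection({
    iceServers: [{ urls: "stun:stun.example.org" }], // placeholder STUN server
  });
  localStream.getTracks().forEach((track) => pc.addTrack(track, localStream));

  // Remote media arrives as plain MediaStream tracks the page can render.
  pc.ontrack = (event) => {
    const video = document.createElement("video");
    video.srcObject = event.streams[0];
    video.autoplay = true;
    document.body.appendChild(video);
  };

  // The offer/answer (signaling) exchange would go through the vendor's own
  // service; that part is product-specific and omitted here.
  const offer = await pc.createOffer();
  await pc.setLocalDescription(offer);
  return pc;
}
```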

The UX overhaul would be a pretty significant development all by itself. But it wasn’t all by itself.

Blackboard Learn Is Going to the Cloud

Wait. What?

Phil and I are still trying to nail down some of the details on this one, especially since the term “cloud” is used particularly loosely in ed tech. For example, we don’t consider D2L’s virtualization to be a cloud implementation. But from what we can tell so far, it looks like a true elastic, single-instance multi-tenant implementation on top of Amazon Web Services. It’s kind of incredible. And by “kind of incredible,” I mean I have a hard time believing it. Re-engineering a legacy platform to a cloud architecture takes some serious technical mojo, not to mention a lot of pain. If it is true, then the Blackboard technical team has to have been working on this for a long time, laying the groundwork long before Jay and his team arrived. But who cares? If they are able to deliver a true cloud solution while still maintaining managed hosting and self-hosted options, that will be a major technical accomplishment and a significant differentiator.

This seems like the real deal as far as we can tell, but it definitely merits some more investigation and validation. We’ll let you know more as we learn it.

Bundled Products

This one may sound like a trivial improvement (unless you’ve ever actually dealt with Blackboard’s sales force) and trivial to implement (unless you’ve ever worked in a big software company with lots of business units), but Blackboard has ended the practice of separately licensing 57 different products, each with its own sales rep and price sheet. In some cases—like xpLOR and myEDU—they’re merging the functionality into the core product. In others, they’re creating tiers of service.

Here’s how their website currently describes the tiers:

  • Learning Core: Bb Learn. (But remember, they’re merging previously separate offerings into it.)
  • Learning Essentials: Everything in Core plus Collaborate.
  • Learning Insight: Everything in Essentials plus Analytics for Learn.
  • Learning Insight & Student Retention: Everything in Insight plus “retention services.” I didn’t catch this at the conference, but if it’s what it sounds like then the company is beginning to move away from differentiating between products and services and toward integrated solutions.

This should deliver more value to customers with less hassle.

Other Stuff

Those were the big announcements, but there was a lot of other stuff that floated by. It seems like they’re doing significant work on their mobile app, separate from the responsive UX work. I didn’t get a chance to even see what that is about. They’re working on a content store in partnership with MBS Books that could be more significant than it looks at a glance. There was some sort of jobs or career mobile app that whizzed by in the keynote. And who knows what else.

When I take a step back and look at this as a whole, a few thoughts run through my head. First comes, “Yeah, they had to do most of this in order to compete with Instructure. The holes they are filling are fairly clear.” Next comes, “I really didn’t believe they could pull some of this off at all, never mind as quickly and well as they seem to be doing it. Time will tell but…wow.” Then comes, “How the hell did they manage to get through a keynote with all of this in it and not blow people out of their chairs?” And finally, “Who would have thought in a million years that the LMS space could become interesting again?”

But there you have it. This is just a news post; the implications for Blackboard and the market are many and significant. Phil and I will have more to say about it in the days and weeks ahead. For now, the take-home message can be summed up thusly:

Game on.

  1. Many Bothans died to bring you this enhancement.

The post Blackboard’s Big News that Nobody Noticed appeared first on e-Literate.

NPR and Missed (Course) Signals

Mon, 2014-07-14 12:45

Anya Kamenetz has a piece up on NPR about learning analytics, highlighting Purdue’s Course Signals as its centerpiece. She does a good job of introducing the topic to a general audience and raising some relevant ethical questions. But she missed one of the biggest ethical questions surrounding Purdue’s product—namely, that some of its research claims are likely false. In particular, she repeats the following claim:

Course Signals…has been shown to increase the number of students earning A’s and B’s and lower the number of D’s and F’s, and it significantly raises the chances that students will stick with college for an additional year, from 83% to 97%. [Emphasis added.]

Based on the work of Mike Caulfield and Al Essa summarized in the link above, it looks like that latter claim is probably the result of selection bias rather than a real finding. So who is at fault for this questionable claim being repeated without challenge in a popular venue many months after it has been convincingly challenged?

For starters, Purdue is. They never responded to the criticism, despite confirmation that they are aware of it—for one thing, they got contacted by us and by Inside Higher Ed—and despite the fact that they apparently continue to make money off the sales of the product through a licensing deal with Ellucian. And the uncorrected paper is still available on their web site. This is unconscionable.

Anya clearly bears some responsibility too. Although it’s easy to assume from the way the article is written that the dubious claim was repeated to her in an interview by Purdue researcher Matt Pistilli, she confirmed for me via email that she took the claim from the previously published research paper and did not discuss it with Pistilli. Given that this is her central example of the potential of learning analytics, she should have interrogated this a little more, particularly since she had Matt on the phone. Mike Caulfield also commented to me that any claim of such a dramatic increase in year-to-year retention should automatically be subject to additional scrutiny.

I have to put some blame on the higher ed press as well. Inside Higher Ed covered the story (and, through them, the Times Higher Education). In fact, Carl Straumsheim actually advanced the story a bit by putting the question to researcher Matt Pistilli (who gave a non-answer). The Chronicle of Higher Education did not cover it, despite having run a puff piece on Purdue’s claims the same day that Mike Caulfield wrote his original piece challenging the results. It is very clear to Phil and me that we are read by the Chronicle staff, in part because they periodically publish stories that have been obviously influenced by our earlier coverage. Sometimes without attribution. I don’t care that much about the credit, but if they thought Purdue’s claims were newsworthy enough to cover in the first place then they should have done their own reporting on the fact that those claims have been called into question. If they had been more aggressive in their coverage then the mainstream press reporters who find Course Signals would be more likely to find the other side(s) of the story as well. Outside of IHE, I’m having trouble finding any coverage, never mind any original reporting, in the higher ed or ed tech press.

I have a lot of respect for news reporters in general, and I think that most people grossly underestimate how hard the job is. I think highly of Anya as a professional. I like the reporters I interact with most at the Chronicle as well. Nor will I pretend that we are perfect here at e-Literate. We miss important angles and get details wrong our fair share of the time. For example, I doubt that I would have caught the flaw in Purdue’s research if Mike hadn’t brought it to my attention. But collectively, we have to do a better job of providing critical coverage of topics like learning analytics, particularly at a time when so much money is being spent and our entire educational system is starting to be remade on the premise that this stuff will work. And there is absolutely no excuse whatsoever for a research university to not take responsibility for their published research on a topic that is so critical to the future of universities.

The post NPR and Missed (Course) Signals appeared first on e-Literate.

A response to Bloomberg article on UCLA student fees

Sat, 2014-07-12 13:56

Megan McArdle has an article that was published in Bloomberg this week about the growth of student fees. The setup of the article was based on a new “$4 student fee to pay for better concerts”.

To solve this problem, UCLA is introducing a $4 student fee to pay for better concerts. That illuminates a budgeting issue in higher education — and indeed among human beings more generally.

That $4 is not a large fee. Even the poorest student can probably afford it. On the other hand, collectively, UCLA’s student fees are significant: more than $3,500, or about a quarter of the mandatory cost of attending UCLA for a year.

Those fees are made up of many items, each trivial individually. Only collectively do they become a major source of costs for students and their families and potentially a barrier to college access for students who don’t have an extra $3,500 lying around.

I’m sympathetic to the argument that college often costs too much and that institutions can play revenue games to avoid the appearance of raising tuition. I also think that Megan is one of the better national journalists on the topic of the higher education finances.

UCLA Fees

However, this article is somewhat sloppy in a way that harms the overall message. I would like to clarify the student fees data to help show the broader point.

Let’s look at the actual data from UCLA’s web site. I assume that Megan is basing this analysis on in-state undergraduate full-time students. The data is listed per quarter, and UCLA has three quarters for a full academic year. I have summarized the data below, summing the three quarters into yearly totals, and in the interactive graphic you can:

  • Hover over each measure to see the fee description from UCLA’s fee description page;
  • Click on each category that I added to see the component fees;
  • Sort either column; and
  • Choose which rows to keep or exclude.
  • NOTE: Static image above if you cannot see interactive graphics

UCLA Fees for In-State Undergrads (Total $3,749.97)

Some Clarifications Needed
  • The total of non-tuition fees is $3,750 per year, not $3,500; however, Megan is right that this represents “about a quarter of the mandatory cost of attending UCLA for a year” ($3,750 out of $14,970).
  • The largest single fee is the UC health insurance fee (UC-SHIP), which is more than half of the total non-tuition fees. This fact (noted by Michael Berman on Twitter) should have been pointed out, given the significant percentage of the total.
  • With UC-SHIP at $1,938 and the student services fee at $972, I hardly consider these fees “trivial individually” (see the quick check below).
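Here is the quick check mentioned above, in TypeScript, using only the yearly totals quoted in this post (the authoritative per-quarter breakdown is on UCLA’s fee pages):

```typescript
// Figures as quoted above: yearly totals for a full-time in-state undergraduate.
const totalNonTuitionFees = 3749.97;     // sum of the non-tuition fee categories
const mandatoryCostOfAttendance = 14970; // total mandatory cost for the year
const ucShip = 1938;                     // UC health insurance fee (UC-SHIP)
const studentServicesFee = 972;

const feeShareOfCost = totalNonTuitionFees / mandatoryCostOfAttendance;
const shipShareOfFees = ucShip / totalNonTuitionFees;

console.log(`Fees as a share of mandatory cost: ${(feeShareOfCost * 100).toFixed(1)}%`); // prints ~25.0%
console.log(`UC-SHIP as a share of total fees:  ${(shipShareOfFees * 100).toFixed(1)}%`); // prints ~51.7%
console.log(`UC-SHIP plus student services fee: $${ucShip + studentServicesFee}`); // 2910, i.e. $2,910 of the ~$3,750 total
```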
Broader Point on Budgeting

The article’s broader point is that using extraneous fees to create additional revenue leads to a flawed budgeting process.

As I’ve written before, this is a common phenomenon that you see among people who have gotten themselves into financial trouble — or, for that matter, people who are doing OK but complain that they don’t know where the money goes and can’t save for the big-ticket items they want. They consider each purchase individually, rather than in the context of a global budget, which means that they don’t make trade-offs. Instead of asking themselves “Is this what I want to spend my limited funds on, or would I rather have something else?” they ask “Can I afford this purchase on my income?” And the answer is often “Yes, I can.” The problem is that you can’t afford that purchase and the other 15 things that you can also, one by one, afford to buy on your income. This is how individual financial disasters occur, and it is also one way that college tuition is becoming a financial disaster for many families.

This point is very important. Look at the Wooden Center fee, described here (or by hovering over the chart):

Covers repayment of the construction bond plus the ongoing maintenance and utilities costs for the John Wooden Recreation Center. It was approved by student referendum. The fee is increased periodically based on the Consumer Price Index.

To take Megan’s point, this fee “was approved by student referendum”, which means that UCLA has moved budgeting responsibility away from a holistic approach to saying “the students voted on it”. This makes no financial sense, nor does it make sense to shift bond repayment, maintenance, and utilities costs onto student fees.

While this article had some sloppy reporting in terms of accurately describing the student fees, it does highlight an important aspect of the budget problems in higher education and how the default method is to shift the costs to students.

The post A response to Bloomberg article on UCLA student fees appeared first on e-Literate.

Blackboard’s Perceptis Acquisition Offers Clues into Company’s Strategy

Wed, 2014-07-09 08:47

Yesterday Blackboard announced that they acquired Perceptis, a provider of help desk and financial aid support services for colleges and universities. In and of itself, this is not a huge acquisition. Perceptis has 33 clients, offers services that Blackboard was already offering, and has no substantial new technology. But as we approach BbWorld next week, the move provides some early hints into the strategic direction that the company may highlight at the conference.

I had the opportunity to talk with Blackboard’s Vice President of Education Services Katie Blot about the move.

There are a couple of different ways to frame help desk services, so I was curious to hear how Blackboard would position it. Katie talked about being “very, very focused on end-to-end learner-centric support” and “supporting learner pathways” for “non-traditional and post-traditional students.” And in the acquisition announcement, Jay Bhatt is quoted as saying,

By combining the Blackboard and Perceptis teams, we will enhance a service model that the industry needs: one that fully supports students from the first moment they are interested in a school to the day they graduate. This is yet another way Blackboard is reimagining education.

While “reimagining education” may be laying it on a little thick in the context of acquiring a help desk service, the reframing of the company mission as supporting students from orientation to graduation is a significant change. I always got the feeling that former CEO Michael Chasen’s role model was Oracle’s Larry Ellison. If you need a piece of software to help you do something important, Ellison will get it and sell it to you. It doesn’t matter too much what kind of software it is, as long as you’re the kind of customer he wants to have. There’s nothing wrong with that per se, but it leads to particular types of business decisions. A friend who used to work at Georgetown University liked to joke that Blackboard probably had some useful insights about his bowel health because he had to swipe his Blackboard-vended key card every time he used the faculty bathroom. Barring an uncommonly expansive definition of what it means to “fully support students,” this is just not the kind of business that the company Jay Bhatt is describing would be likely to get into (although, for the record, Blackboard currently still owns this business).

Interestingly, this is a point that Katie brought up unprompted. She took great pains to emphasize how they are building a “new Blackboard” (which, by implication, is importantly not like the old Blackboard). In the old days, she said, the company made acquisition decisions based primarily on the financial case. “We bought a lot of companies that were not closely aligned with the core.” I would put it slightly differently. I would say that Blackboard did not have the same core that the company leadership is articulating today.

And what is that core? What is the company trying to become? We will likely know more after next week, but by doubling down on the support services and positioning it the way they are, the company is trying to move up the value chain, away from being perceived as a software vendor and toward being perceived as a student success-related services vendor. According to Katie, their services business has tripled in the three years since Chasen got Blackboard into the call center support business by acquiring Presidium. The Perceptis move can be seen as doubling down. This puts them in an increasingly crowded space, particularly in online education, with competitors that range from Pearson to 2U to Hobsons. When I asked Katie how the company intends to differentiate itself, she cited two factors. First, they provide an a la carte approach and are avoiding making moves that they believe would potentially either put them in direct competition with their customers or otherwise cannibalize the schools’ core competencies. They are staying out of certain services businesses—she didn’t specify, but I imagine that curriculum development is a good example of what she means—while in others she said they take a “teach to fish” approach, moving more toward the consulting than the outsourcing range of the spectrum. This is not terribly different from the marketing message that Instructure deployed against the MOOC providers when announcing the Canvas Network and may be effective against the textbook publishers and more full-service outsourcing companies.

The second differentiator was interesting too. While Katie emphasized the a la carte message and specifically mentioned that Perceptis was attractive to the company because it served non-Blackboard customers and reinforced the message that they want to provide services to schools using other LMSs, she also said that Blackboard’s knowledge of the learning technology stack and, more importantly, the learning data, gives them an edge helping support their customers in making data-driven decisions. There aren’t many service providers who can make that claim right now. To be honest, I’m not sure that Blackboard can either yet. As I have written previously, the heritage of Blackboard’s analytics product is not really with learning analytics and they are still in the early stages of moving into this space. That said, Phil and I are impressed with their decision to hire John Whitmer as Director for Platform Analytics and Educational Research. As Phil has observed, Instructure has gotten strong benefits from hiring academic Jared Stein. Likewise, Al Essa led some pretty major conceptual work on analytics at Desire2Learn before they lost him to McGraw Hill. John is a solid researcher in the field of learning analytics and just the sort of guy that Blackboard needs to help them figure out how to deliver on their claims that they understand how educational data can provide insights enabling better student support.

Obviously, I’m reading tea leaves here. Speaking of data, Phil and I will both be at BbWorld next week and should have more concrete moves by Blackboard to analyze.

The post Blackboard’s Perceptis Acquisition Offers Clues into Company’s Strategy appeared first on e-Literate.

Roundtable Discussion on Integrative Education July 9th at 2pm EDT

Tue, 2014-07-08 16:19

Tomorrow, July 9th at 2:00pm EDT, I’ll join a great cast to discuss Reinvent the University for the Whole Person: Principles Driving Policy, and I hope many of you can watch. The other participants:

  • Randy Bass (Vice Provost for Education and Professor of English at Georgetown University)
  • Martha Kanter (Distinguished Visiting Professor of Higher Education at New York University & former U.S. Under Secretary of Education)
  • Robert Groves (Provost at Georgetown University)
  • Jeffrey Selingo (Author of College (Un)Bound: The Future of Higher Education and What It Means for Students)
  • Tia Brown McNair (Senior Director for Student Success at the Association of American Colleges & Universities)
  • Anthony Carnevale (Director of the Center on Education & the Workforce at Georgetown University)

Reinventors

The core idea for the series:

American higher education rarely has been more in the national spotlight than with the arrival of new digital technologies and new for-profit education businesses, among other big trends. In this rapidly changing landscape, the old model looks increasingly outmoded and many efforts are underway to begin to transform the system for the 21st century. Most efforts are focusing on making the system more efficient and producing a larger number of graduates to fit in a changing economy.

Very little thought is going into other valuable contributions that universities have provided in the past. Universities also produce future citizens, problem-solvers, leaders – not to mention knowledge that can drive innovation and economic growth. How do we ensure that these other critical outcomes will continue in the future? How can we build on new insights about learning and invent new ways to deliver and measure education that matters for a lifetime? How can we use new tools and approaches that are only available now to carry out the mission of educating for the whole person even better than before?

For the roundtable tomorrow, we’ll discuss:

What are the opportunities for shaping public policy for integrative education in a world that also needs more access, lower costs and workplace preparation? How do we ensure this focus is not elitist?

You can access the discussion on the Reinventors website here.

You can access the discussion within Google+ here.

The post Roundtable Discussion on Integrative Education July 9th at 2pm EDT appeared first on e-Literate.

Instructure’s CTO Joel Dehlin Abruptly Resigns

Mon, 2014-07-07 16:32

One week after the conclusion of Instructure’s Users’ Conference, CTO Joel Dehlin abruptly resigned from the company for a new job. Joel took the CTO job with Instructure in summer 2013, around the same time as Devlin Daley’s departure (Devlin was co-founder). Joel’s resignation comes as a surprise, especially given his prominent placement as the technology lead for the Canvas LMS. As recently as InstructureCon on June 27th, Joel gave the product update presentation.

I noticed the change while viewing the new Instructure leadership page (nice page design, btw): Joel was not included. I contacted Devin Knighton, Director of Public Relations for Instructure, who confirmed that the resignation was unexpected and was Joel’s decision. I am not sure how significant this resignation is for the company. What we do know is that Joel has not been replaced as CTO, but that Jared Stein (VP of Research and Education), Trey Bean (VP of Product), David Burggraaf (VP of Engineering), and Zach Willy (Chief Architect) will cover the CTO responsibilities in the near term. I would have more details, but Devin is on family vacation, and I did not want to push for him to send me an official email.

We’ll keep you posted if we find out more information (assuming it is newsworthy).

Update: Corrected second paragraph on VP of Product and VP of Engineering per Devin Knighton comment below.

The post Instructure’s CTO Joel Dehlin Abruptly Resigns appeared first on e-Literate.

Adaptive Learning Market Acceleration Program (ALMAP) Summer Meeting Notes

Mon, 2014-07-07 05:04

I recently attended the ALMAP Summer Meeting. ALMAP is a program funded by the Gates Foundation, with the goals described in this RFP webinar presentation from March 2013:

We believe that well implemented personalized & adaptive learning has the potential to dramatically improve student outcomes

Our strategy to accelerate the adoption of Adaptive Learning in higher education is to invest in market change drivers… …resulting in strong, healthy market growth

As the program is at its midpoint (without real results to speak of yet), I’ll summarize it Tony Bates style, with a summary of the program and some notes at the end. Consider this my more-than-140-character response to Glenda Morgan:

@PhilOnEdTech was the agenda of the Gates Summit online at all?

— Glenda Morgan (@morganmundum) June 30, 2014

Although the program was originally planned for 10 institutions, the Gates Foundation funded 14 separate grantees at a level of ~$100,000 each. The courses must run for 3 sequential semesters with more than 500 students total per school, and the program will take 24 months total (starting June 2013). The awards were given to the following schools:

Gates has also funded SRI International to provide independent research on the results of each grant.

The accelerator concept, as used by the Gates Foundation, is to push adaptive learning past the innovators’ adoption category and into the majority category (see the RFP webinar).

ALMAP accelerator

The meeting was organized around quick updates from most of the grantees along with panels of their partner software providers (Knewton, ALEKS, CogBooks, Cerego, OLI, ASSISTments, Smart Sparrow), faculty, and several local students. Here is a summary of the meeting agenda.

ALMAP Agenda

Notes

Adaptive Learning has recently become a hotter topic in higher education, and I expect that we will hear more from ALMAP as the results come in. In the meantime, here are some preliminary notes from the meeting (some are my own, some are group discussions that struck me as very important).

  • Despite the potential importance of this funding program, I can only find one full article (outside of Gates publications) about the program. Campus Technology had an article in April titled “The Great Adaptive Learning Experiment”. David Wiley referred to the program in his take on the risks of adaptive learning. Scientific American (among a few others) described ALMAP in one paragraph of a larger story on Adaptive Learning.
  • We really need a taxonomy to describe Adaptive Learning and Personalized Learning as both terms are moving into buzzword and marketing-speak territory. During the break out groups, it seemed there was unanimous agreement on this problem of a lack of precise terminology. While the Gates Foundation also funded two white papers on Adaptive Learning, I did not hear the ALMAP participants using the embedded taxonomy (see below) to improve language usage. I’m not sure why. I provided a short start in this post before EDUCAUSE, but I think Michael and I will do some more analysis on the field and terminology soon. Michael also has a post that was published in the American Federation of Teachers publication AFT On Campus, titled “What Faculty Should Know About Adaptive Learning”, that is worth reading.
  • The above problem (lack of accepted taxonomy, different meanings of adaptive), along with faculty flexibility in determining how to use the software, will make the research challenging, at least in terms of drawing conclusions across the full set of experiments. SRI has its work cut out for them.
  • There appears to be a divide in the vendor space between publisher models, where the content is embedded with the platform, and a platform-only model, where content is provided from external sources. Examples of the former include ALEKS, Adapt Courseware and OLI. Examples of the latter include ASSISTments, Smart Sparrow, CogBooks, Cerego. Cerego might be the only example where they provide “starter” content but also allow the user to provide or integrate their own content. Credit to Neil Heffernan from WPI and ASSISTments for this observation over drinks.
  • Programs of this type (pushing innovation and driving for changes in behavior) should not be judged by the first semester of implementation, when faculty are figuring out how to work with the new approach. Real results should be judged starting in the second semester, and one attendee even recommended avoiding publication of results until the third semester. This is the primary reason I am choosing to not even describe the individual programs or early results yet.
  • Kudos to the Gates Foundation for including a student panel (like 20MM Evolve and upcoming WCET conference). Below are a few tweets I sent during this panel.

Student on panel: Profs matter a lot – could tell the ones who don't like teaching. Ones who love teaching are contagious, her best classes.

— Phil Hill (@PhilOnEdTech) June 27, 2014

Conversely, fac who use tech poorly – don't understand, no instructions, no effort to use well – have very negative impact on students

— Phil Hill (@PhilOnEdTech) June 27, 2014

Whether it's from prof or from adaptive sw (or both), student panel wants clear instructions on assignments, timely feedback

— Phil Hill (@PhilOnEdTech) June 27, 2014

Expect to hear more from e-Literate as well as e-Literate TV not only on the ALMAP awardees and their progress, but also from the general field of personalized and adaptive learning.

Below is the taxonomy provided as part of the Gates-funded white paper from Education Growth Advisors.

AL Whitepaper Taxonomy

 

Update: I did not mention the elephant in the room for adaptive learning – whether software will replace faculty – because it was not an elephant in this room; however, this is an important question in general.

@ricetopher Good point. Unclear if gates funded automation would eliminate teachers… Are we becoming the machine? @PhilOnEdTech

— Whitney Kilgore (@whitneykilgore) July 7, 2014

At the ALMAP meeting, I believe that most grantees had faculty members present. From these faculty members (including a panel specifically on faculty experiences), there were discussions about changing roles (“role is facilitator, coach, lifeguard in a sense”), the fact that faculty were requested to participate rather than initiate the change, and the challenge of getting students to come to class for hybrid models. One faculty member mentioned that the adaptive software allowed more instruction on real writing and less on skill-and-drill activities.

But the way the grantees implemented adaptive learning software was not based on replacing faculty, at least for this program.

The post Adaptive Learning Market Acceleration Program (ALMAP) Summer Meeting Notes appeared first on e-Literate.

Fall 2012 US Distance Education Enrollment: Now viewable by each state

Wed, 2014-07-02 23:15

In late 2013, the National Center for Education Statistics (NCES) and its Integrated Postsecondary Education Data System (IPEDS) began providing preliminary data for the Fall 2012 term that includes online education for the first time. Using Tableau (thanks to Justin Menard for prompting me to use this), we can now see a profile of online education in the US for degree-granting colleges and universities, broken out by sector and for each state.

Please note the following:

  • For the most part the terms distance education and online education are interchangeable, but they are not equivalent, as DE can include courses delivered by a medium other than the Internet (e.g. correspondence courses).
  • There are three tabs below – the first shows totals for the US by sector and by level (grad, undergrad); the second also shows the data for each state (this is new); the third shows a map view.


The post Fall 2012 US Distance Education Enrollment: Now viewable by each state appeared first on e-Literate.

Is the DOE backing down on proposed State Authorization regulations?

Thu, 2014-06-26 08:25

Now witness the firepower of this fully written and delivered WCET / UPCEA / Sloan-C letter!

- D. Poulin

One of the policies that we’re tracking at e-Literate is the proposed State Authorization regulation that the US Department of Education (DOE) has been pushing. The latest DOE language represents a dramatic increase in federal control of distance education and in bureaucratic compliance required of institutions and states. In the most recent post we shared a letter from WCET, UPCEA and Sloan-C to Secretary Duncan at the DOE.

What does it take to get all of the higher education institutions and associations to agree? Apparently the answer is for the Department of Education to propose its new State Authorization regulations. [snip]

Here’s what is newsworthy – the idea and proposed language are so damaging to innovation in higher ed (which the DOE so fervently supports in theory) and so burdensome to institutions and state regulators that three higher ed associations have banded together to oppose the proposed rules. WCET (WICHE Cooperative on Educational Technologies), UPCEA (University Professional and Continuing Education Association) and Sloan-C (Sloan Consortium) wrote a letter to Secretary Arne Duncan calling for the DOE to reconsider their planned State Authorization regulations.

While it is unclear how direct an impact the letter had, yesterday brought welcome news from Ted Mitchell at the DOE: they have effectively paused their efforts to introduce new State Authorization regulations. As described at Inside Higher Ed:

The Obama administration is delaying its plan to develop a controversial rule that would require online programs to obtain approval from each and every state in which they enroll students, a top Education Department official said Wednesday.

Under Secretary of Education Ted Mitchell said that the administration would not develop a new “state authorization” regulation for distance education programs before its November 1 deadline.

“We, for all intents and purposes, are pausing on state authorization,” Mitchell said during remarks at the Council for Higher Education Accreditation conference. “It’s complicated, and we want to get it right.”

Mitchell said he wanted to make sure the regulation was addressing a “specific problem” as opposed to a general one. The goal, he said, should be to promote consumer protection while also allowing for innovation and recognizing that “we do live in the 21st century and boundaries don’t matter that much.”

It gets better. Mitchell made this statement while at a workshop for the Council for Higher Education Accreditation, and his speech mentioned his desire to clean up some of the regulatory burden on accrediting agencies. As described at the Chronicle:

Ted Mitchell, the under secretary of education, told attendees at a workshop held by the Council for Higher Education Accreditation that accreditors’ acceptance of more responsibility over the years for monitoring colleges had created “complicated expectations for institutions, regulators, politicians, and the public.”

Much of the work accreditors do to ensure that colleges comply with federal regulations is “less appropriate to accreditors than it may be to the state or federal government,” said Mr. Mitchell, who is the No. 2 official in the Department of Education and oversees all programs related to postsecondary education and federal student aid.

“If I could focus on a spot today,” he said, “it would be the compliance work and seeing if we could relieve accreditors of the burden of taking that on for us.”

This is just a speech, and we do not know what the DOE will eventually propose (or not) on State Authorization. But it is certainly a welcome sign that the department has heard the concerns of many in the higher education community.

Update: See Russ Poulin’s blog post at WCET with more context and inside info.

WCET joined with Sloan-C and UPCEA to write a letter to Education Secretary Arne Duncan and Under Secretary Mitchell about our concerns with the direction the Department was taking and to give recommendations on how the Department might proceed. I have also been talking with numerous groups and individuals that have been writing their own letters or have used their contacts.

On Tuesday of this week, Marshall Hill (Executive Director of the National Council on State Authorization Reciprocity Agreements) and some high-ranking members of the National Council leadership board met with Mr. Mitchell. According to Marshall, Mr. Mitchell was aware of many of the concerns that they raised and was very supportive of reciprocity. From that meeting, Mr. Mitchell indicated that more work needed to be done, but did not suggest the delay.

Mr. Mitchell’s reference in the Inside Higher Ed article about addressing a “specific problem” showed that our message was being heard.

The post Is the DOE backing down on proposed State Authorization regulations? appeared first on e-Literate.

WWW-based online education turns 20 this summer

Tue, 2014-06-24 17:01

I’m a little surprised that this hasn’t gotten any press, but WWW-based online education turns 20 this summer. There were previous distance education programs that used networks of one form or another as the medium (e.g. University of Phoenix established its “online campus” in 1989), but the real breakthrough was the use of the World Wide Web (WWW), effectively creating what people most commonly know as “the Internet”.

To the best of my knowledge (correct me in comments if there are earlier examples), the first accredited school to offer a course over the WWW was the Open University in a pilot Virtual Summer School project in the summer of 1994. The first course was in Cognitive Psychology, offered to 12 students, as described in this paper by Marc Eisenstadt and others involved in the project (the HTML no longer renders):

In August and September 1994, a Virtual Summer School (VSS) for Open University undergraduate course D309 Cognitive Psychology enabled students to attend an experimental version of summer school ‘electronically’, i.e. from their own homes using a computer and a modem. VSS students were able to participate in group discussions, run experiments, obtain one-to-one tuition, listen to lectures, ask questions, participate as subjects in experiments, conduct literature searches, browse original journal publications, work in project teams, undertake statistical analyses, prepare and submit nicely formatted individual or joint written work, prepare plenary session presentations, and even socialize and chit-chat, all without ever leaving their homes. The term ‘Virtual Summer School’ was used to mean that the software packages supplied to students emulate many aspects of a residential summer school, but without requiring physical attendance. As with many other Open University activities, we feel that face-to-face tuition and peer group interaction would still be preferable if it could be achieved. However, there are sometimes circumstances which preclude physical attendance, so we want to provide the best possible alternative. Virtual Summer School was a first step in this direction. This year, it was only an experimental option for a dozen already-excused students, which gave us a low-risk entry in order to assess the viability of the approach.

There is even a concept video put together by the Open University at the end of 1994 that includes excerpts of the VSS course.

And now for your trip down memory lane, I have taken the paper, cleaned up the formatting, and fixed / updated / removed the links that no longer work. The modified paper is below for easier reading:

*************

Virtual Summer School Project, 1994

(source: http://faculty.education.ufl.edu/Melzie/Distance/Virtual%20Summer%20School%20Project)

Background

One of the great strengths of the UK’s Open University is its extensive infrastructure, which provides face-to-face tuition through a network of more than 7000 part-time tutors throughout the UK and Europe. This support network, combined with in-house production of high-quality text and BBC-produced videos, provides students with much more than is commonly implied by the phrase ‘distance teaching’! Moreover, students on many courses must attend residential schools (e.g. a one-week summer school to gain experience conducting Biology experiments), providing an additional layer of support. About 10% of students have genuine difficulty attending such residential schools, and increasingly we have started to think about addressing the needs of students at a greater distance from our base in the UK. This is where the Virtual Summer School comes in.

The Cognitive Psychology Virtual Summer School

In August and September 1994, a Virtual Summer School (VSS) for Open University undergraduate course D309 Cognitive Psychology enabled students to attend an experimental version of summer school ‘electronically’, i.e. from their own homes using a computer and a modem. VSS students were able to participate in group discussions, run experiments, obtain one-to-one tuition, listen to lectures, ask questions, participate as subjects in experiments, conduct literature searches, browse original journal publications, work in project teams, undertake statistical analyses, prepare and submit nicely formatted individual or joint written work, prepare plenary session presentations, and even socialize and chit-chat, all without ever leaving their homes. The term ‘Virtual Summer School’ was used to mean that the software packages supplied to students emulate many aspects of a residential summer school, but without requiring physical attendance. As with many other Open University activities, we feel that face-to-face tuition and peer group interaction would still be preferable if it could be achieved. However, there are sometimes circumstances which preclude physical attendance, so we want to provide the best possible alternative. Virtual Summer School was a first step in this direction. This year, it was only an experimental option for a dozen already-excused students, which gave us a low-risk entry in order to assess the viability of the approach.

Below we describe the technology involved, evaluation studies, and thoughts about the future.

The Technology

Three main categories of technology were required: communications & groupwork tools, support & infrastructure software/hardware, and academic project software.

Communications and Groupwork
  • Email, Usenet newsgroups, live chat lines and low-bandwidth (keyboard) conferencing: this technology was provided by FirstClass v. 2.5 from SoftArc in Toronto, and gave students a nice-looking veneer for many of their day-to-day interactions. A ‘Virtual Campus’ map appeared on their desktops, and folder navigation relied on a ‘room’ metaphor to describe crucial meeting places and bulletin boards.
  • WWW access: NCSA Mosaic 1.0.3 for Macintosh was provided for this purpose [in the days before Netscape was released] . Students had customized Hotlists which pointed them to academically-relevant places (such as Cognitive & Psychological Sciences on The Internet), as well as some fun places.
  • Internet videoconferencing: Using Cornell University’s CU-SeeMe, students with ordinary Macs or Windows PCs (even over dial-up lines from home) were able to watch multiple participants around the world. Video transmission from slightly higher-spec Macs & PCs was used for several Virtual Summer School events, including a Virtual Guest Lecture by Donald A. Norman, formerly Professor of Psychology at the University of California at San Diego (founder of its Cognitive Science Programme), and now an Apple Fellow.
  • Remote presentation software: we used a product called ‘The Virtual Meeting’ (from RTZ in Cupertino), which allowed synchronized slide & movie presentations on remote Macs & PCs distributed across local, wide, or global (including dial-in) networks, displayed images of all remote ‘participants’, and facilitated moderated turn-taking, ‘hand-raising’, interactive whiteboard drawing & question/answer sessions.
  • Mobile telephone support and voice conferencing: every VSS student was supplied with an NEC P100 cellular phone, so that they could use it while their domestic phone was busy with their modem (some day they’ll have ISDN or fibre optic lines, but not this year). Audio discussions were facilitated by group telephone conference calls, run concurrently with CU-SeeMe and other items shown above. Our largest telephone conference involved 17 participants, and worked fine given that basic politeness constraints were obeyed.
  • Remote diagnostic support and groupwork: Timbuktu Pro from Farallon, running over TCP/IP, enabled us to ‘cruise in’ to our students’ screens while chatting to them on their mobile phones, and to help them sort out specific problems. Students could also work in small self-moderated groups this way, connecting as observers to one user’s Macintosh.
Support and infrastructure software/hardware
  • Comms Infrastructure: TCP/IP support was provided by a combination of MacTCP, MacPPP, VersaTerm Telnet Tool on each student’s machine, plus an Annex box at The Open University connecting to a Mac Quadra 950 running a FirstClass Server and 3 Suns running cross-linked CU-SeeMe reflectors.
  • Tutorial Infrastructure: each student was supplied with HyperCard, MoviePlay, and SuperCard 1.7 to run pre-packaged tutorial and demonstration programs, some of which were controlled remotely by us during group presentations. Pre-packaged ‘guided tour’ demos of all the software were also provided (prepared with a combination of MacroMind Director and CameraMan). To help any computer-naive participants ‘bootstrap’ to the point where they can at least send us an email plea for help, we also supplied a short video showing them how to unpack and connect all of their equipment, and how to run some of the demos and FirstClass.
  • Hardware: one of our aims was to foreshadow the day in the near future when we can presuppose that (a) most students will be computer-literate, (b) students will have their own reasonable-specification hardware, (c) bandwidth limitations will not be so severe, and (d) all of our software will be cross-platform (e.g. Mac or Windows). We could only approximate that in 1994, so we supplied each VSS student with a Macintosh LC-II with 8MB of RAM, a 14.4Kbps modem, a StyleWriter-II printer, 13″ colour monitor, mobile phone and extra mobile phone battery. Students were given a conventional video cassette showing how to set up all the equipment (see tutorial infrastructure above).
Academic project software

Our students had four main support packages to help them in their Cognitive Psychology studies:

  • a custom-built ‘Word Presentation Program’, which allowed them to create stimuli for presentation to other students and automatically record data such as reaction times and button presses (they could create a turnkey experiment package for emailing to fellow students, and then have results emailed back);
  • a HyperCard-based statistics package, for analysing their data;
  • MacProlog from Logic Programming Associates in the UK, for writing simple Artificial Intelligence and Cognitive Simulation programs;
  • ClarisWorks, for preparing reports and presentations, reading articles that we emailed to them as attachments, and doing richer data analyses.
Timetable and evaluation

Students had a three-week warmup period in order to become familiar with their new equipment and run some trial (fun) activities with every piece of software, and formal academic activities took place from August 27th – Sept. 9th, 1994, mostly in the evenings. Thus, the conventional one-week residential summer school was stretched out for two weeks to allow for part-time working. During week one the students concentrated on experimental projects in the area of “Language & Memory” (typically demonstrating inferences that “go beyond the information given”). During week two the students wrote simple AI programs in Prolog that illustrate various aspects of cognitive processing (e.g. simulating children’s arithmetic errors). They were supplied with Paul Mulholland’s version of our own Prolog trace package (see descriptions of our work on Program Visualization) to facilitate their Prolog debugging activities.

A detailed questionnaire was supplied both to the Virtual Summer School students and to conventional summer school students taking the same course. We looked at how students spent their time, which activities were beneficial for them, and many other facets of their Virtual Summer School experience.

[removed reference to Kim Issroff's paper and student interviews, as all links were broken]

The future

The Virtual Summer School finished on 9th September 1994 (following our Virtual Disco on 8th September 1994, incidentally…. we told students about music available on the World Wide Web for private use). What happens next? Here are several issues of importance to us:

  • We must lobby for ever-increasing ‘bandwidth’ [i.e. channel capacity, reflected directly in the amount and quality of full-colour full-screen moving images and quality sound that can be handled]. This is necessary not only for Open University students, but also for the whole of the UK, and indeed for the whole world. As capacity and technology improve, so does the public expectation and need [analagous to the way the M25 motorway was overfull with cars the first day it opened-- the technology itself helps stimulate demand]. Whatever the current ‘Information SuperHighway’ plans are [just like Motorway construction plans], there is a concern that they don’t go far enough.
  • We must RADICALLY improve both (i) the user interfaces and (ii) the underlying layers of communications tools. Even with the excellent software and vendor support that we had at our disposal, all the layers of tools needed (TCP/IP, PPP, Communications Toolbox, etc.) made a veritable house of cards. The layers of tools were (i) non-trivial to configure optimally in the first place (for us, not the students); (ii) non-trivial to mass-install as ‘turnkey’-ready systems for distribution to students; (iii) non-trivial for students to use straight ‘out of the box’ (naturally almost everything in the detailed infrastructure is hidden from the students, but one or two items must of necessity rear their ugly heads, and that gets tricky); and (iv) ‘temperamental’ (students could get interrupted or kicked off when using particular combinations of software). We were fully prepared for (iv), because that’s understandible in the current era of communicating via computers, but (i), (ii), and (iii) were more surprising. [If anyone doubts the nature of these difficulties, I hereby challenge them to use Timbuktu Pro, a wonderful software product, with 4 remotely-sited computer-naive students using TCP/IP over a dial-up PPP connection.] We can do better, and indeed we MUST do better in the future. Many vendors and academic institutions are working on these issues, and they need urgent attention.
  • We must obtain a better understanding of the nature of remote groupwork. Our students worked in groups of size 2, 3, or 4 (depending on various project selection circumstances). Yet even with pre-arranged group discussions by synchronous on-line chat or telephone conference calls, a lot of fast-paced activity would suddenly happen, involving just one student and one tutor. For example, student A might post a project idea to a communal reading area accessible only to fellow project-group students B and C and also tutor T. Tutor T might post a reply with some feedback, and A might read it and react to it before B and C had logged in again. Thus, A and T would have inadvertently created their own ‘shared reality’– a mini-dialogue INTENDED for B and C to participate in as well, yet B and C would get left behind just because of unlucky timing. The end result in this case would be that students A, B, and C would end up doing mostly individual projects, rather than a group project. Tutors could in future ‘hold back’, but this is probably an artificial solution. The ‘shared reality’ between A and T in the above scenario is no different from what would happen if A cornered T in the bar after the day’s activities had finished at a conventional Summer School. However, in that situation T could more easily ensure that B and C were brought up to date the next day. We may ultimately have to settle for project groups of size 2, but not before doing some more studies to try to make larger groups (e.g. size 4) much more cohesive and effective.
  • We need to improve ‘tutor leverage’ (ability to reach and influence more people). Let’s suppose that we have thoroughly researched and developed radical improvements for the three items above (more bandwidth, nice user interfaces with smooth computer/communications infrasture [sic], happy cohesive workgroups of size 4). It would be a shame if, after all that effort and achievement, each tutor could only deal with, say, 3 groups of 4 students anywhere in the world. The sensory overload for tutors at the existing Virtual Summer School was considerable… many simultaneous conversations and many pieces of software and technology running at once. The 1994 Virtual Summer School was (of necessity) run by a self-selecting group of tutors who were competent in both the subject matter and the technology infrastructure. Less technologically-capable tutors need to be able to deal with larger numbers of students in a comfortable fashion, or Virtual Summer School will remain quite a ‘niche’ activity.

The four areas above (more bandwidth, better computer/comms interfaces, larger workgroups, increased tutor leverage) are active areas of research for us…. stay tuned (and see what we’re now doing in KMi Stadium)!

Who made it work?
  • Marc Eisenstadt: VSS Course Director, Slave Driver, and Fusspot
  • Mike Brayshaw: VSS Tutor & Content Wizard
  • Tony Hasemer: VSS Tutor & FirstClass Wizard
  • Ches Lincoln: VSS Counsellor and FirstClass Guru
  • Simon Masterton: VSS Academic Assistant, Mosaic Webmaster, and Mobile Phone Guru
  • Stuart Watt: VSS Mac Wizard
  • Martin Le Voi: VSS Memory/Stats Advisor & Unix Guru
  • Kim Issroff: VSS Evaluation and Report
  • Richard Ross: VSS Talking Head Guided Tour
  • Donald A. Norman (Apple, Inc.): VSS Virtual Guest Lecturer
  • Blaine Price: Unix & Internet Guru & Catalyst
  • Adam Freeman: Comms & Networking Guru
  • Ian Terrell: Network Infrastructure Wizard
  • Mark L. Miller (Apple, Inc.): Crucial Guidance
  • Christine Peyton (Apple UK): Support-against-all-odds
  • Ortenz Rose: Admin & Sanity Preservation
  • Elaine Sharkey: Warehousing/Shipping Logistics

Update: Changed title and Internet vs. WWW language to avoid post-hoc flunking of Dr. Chuck’s IHTS MOOC.

The post WWW-based online education turns 20 this summer appeared first on e-Literate.

Coursera shifts focus from ‘impact on learners’ to ‘reach of universities’

Mon, 2014-06-23 17:15

Richard Levin, the new CEO of Coursera, is getting quite clear about the new goals for the company. At first glance the changes might seem semantic in nature, but I believe the semantics are revealing. Consider this interview that was published today in the Washington Post [emphasis added in both cases below]:

Richard C. Levin, the new chief executive of Coursera, the most widely used MOOC platform, wants to steer the conversation back to what grabbed public attention in the first place: the wow factor.

Sure, Levin said, the emerging technology will help professors stimulate students on campus who are tired of old-school lectures. The talk of “flipped classrooms” and “blended learning” — weaving MOOCs into classroom experiences — is not mere hype.

“But that is not the big picture,” Levin said in a visit last week to The Washington Post. “The big picture is this magnifies the reach of universities by two or three orders of magnitude.”

Contrast this interview with Daphne Koller’s December article at EdSurge:

Among our priorities in the coming year, we hope to shift the conversation around these two dimensions of the learning experience, redefine what it means to be successful, and lay the groundwork for products, offerings, and features that can help students navigate this new medium of learning to meet their own goals, whether that means completing dozens of courses or simply checking out a new subject. [snip]

Still, we are deeply committed to expanding our impact on populations that have been traditionally underserved by higher education, and are actively working to broaden access for students in less-developed countries through a range of initiatives

There are valid criticisms of how well Coursera has delivered on its goal of helping students meet their own learning goals, but now it is apparent that the focus of their efforts is shifting away from the learner and towards the institution. Below are a few notes based on these recent interviews.

Changing Direction From Founders’ Vision

This is the second interview in which Levin contradicts the two Coursera founders. In the case above, Levin frames the point of Coursera not primarily as impact on learners but as the reach of great universities. In a New York Times interview from April he made similar points in contrast to Andrew Ng.

In a recent interview, Mr. Levin predicted that the company would be “financially viable” within five years. He began by disagreeing with Andrew Ng, Coursera’s co-founder, who described Coursera as “a technology company.”

Q. Why is the former president of Yale going to a technology company?

A. We may differ in our views. The technology is obviously incredibly important, but what really makes this interesting for me is this capacity to expand the mission of our great universities, both in the United States and abroad, to reach audiences that don’t have access to higher education otherwise.

Levin is signifying a change at Coursera, and he is not just a new CEO to manage the same business. Andrew Ng no longer has an operational role in the company, but he remains as Chairman of the Board (I’m not claiming a correlation here, but just noting the change in roles).

Reach Is Not Impact

@PhilOnEdTech Is "reach" the same as "impact"?

— Russell Poulin (@RussPoulin) June 23, 2014

The answer in my opinion is only ‘yes’ if the object of the phrase is the universities. Impact on learners is not the end goal. In Levin’s world there is a class of universities that are already “great”, and the end goal is to help these universities reach more people. This is about A) having more people understand the value of each university (branding, eyeballs) and B) getting those universities to help more people. I’m sure that B) is altruistic in nature, but Levin does not seem to focus on what that help actually comprises. Instead we get abstract concepts as we see in the Washington Post:

“That’s why I decided to do it,” Levin said. “Make the great universities have an even bigger impact on the world.”

Levin seems enamored of the scale of Coursera (8.2 million registered students, etc.), but I can find no concrete statements in his recent interviews that focus on actual learning results or improvements to the learning process (correct me in the comments if I have missed some key interview). This view is very different from the vision Koller was offering in December. In her vision, Koller attempts to improve the impact on learners (the end) by using instruction from great universities (the means).

Other People’s Money

Given this view of expanding the reach of great universities, the candor about the lack of a revenue model is interesting.

“Nobody’s breathing down our necks to start to turn a profit,” he said. Eventually that will change.

Levin said, however, that “a couple” universities are covering their costs through shared revenue. He declined to identify them.

This lack of priority on generating a viable revenue model is consistent with the pre-Levin era, but what if you take it to its logical end with the new focus of the company? What we now have is a consistent story with AllLearn and Open Yale Courses – spending other people’s money to expand the reach of great universities. Have we now reached the point where universities that often have billion-dollar endowments are using venture capital money to fund part of their branding activities? There’s a certain irony in that situation.

It is possible that Levin’s focus will indirectly improve the learning potential of Coursera’s products and services, but it is worth noting a significant change in focus from the largest MOOC provider.

The post Coursera shifts focus from ‘impact on learners’ to ‘reach of universities’ appeared first on e-Literate.

“Personalized Learning” Is Redundant

Mon, 2014-06-23 11:13

Dan Meyer has just published a provocative post called “Don’t Personalize Learning,” inspired by an even more provocative post with the same title by Benjamin Riley (as well as being a follow-up to Meyer’s post “Tools for Socialized Instruction not Individualized Instruction“). Part of the confound here is sloppy terminology. Specifically, I think the term “personalized learning” doesn’t really mean anything, so it’s hard to have an intelligent conversation about it.

All learning is personalized by virtue of the fact that it is accomplished by a person for him or herself. This may seem like a pedantic point, but if the whole point of creating the term is to focus on fitting the education to the student rather than the other way around, then it’s important to be clear about agency. What we really want to talk about, I think, is “personalized education” or, more specifically, “personalized instruction.” Here too we need to be thoughtful about what we mean by “personalized.” To me, “personalized” means “to make more personal,” which has to do with the goals and desires of the person in question. If I let you choose what you want to learn and how you want to learn it, those are aspects of personalization. Riley argues that radical personalization, where students make all the choices, isn’t necessarily a good thing, for several reasons. One reason he gives is that learning is cumulative and students are not likely to stumble upon the correct ordering by themselves. He asserts that teaching was invented “largely to solve for that problem.” I agree that one of the main values of a teacher is to help students find good learning paths, but I disagree that students are unlikely to find good paths themselves. Teachers can help students optimize, but the truth is that people learn all sorts of things all the time on their own. Teaching is about the zone of proximal development; it’s about helping students learn (and discover) those things that they are not quite ready to learn on their own but can learn with a little bit of help. That’s not the same thing at all as saying that humans aren’t good at constructing good learning experiences for themselves (which is what you get if you take Riley’s argument to its logical conclusion). Also, I believe in the value of curriculum, but it’s a bit of a straw man to suggest that personalized learning must mean that students decide everything, for themselves and on their own.

And I vehemently disagree with him when he writes,

Second, the problem with the pace argument is that it too contradicts one of the key insights from cognitive science: our minds are not built to think. In fact, our brains are largely oriented to avoid thinking. That’s because thinking is hard. And often not fun, at least at first. As a result, we will naturally gravitate away from activities that we find hard and unpleasant.

Frankly, I think he draws exactly the wrong conclusion from the research he cites. I would say, rather, that we are most inclined to think about things that inspire a sense of fun. We like stories and puzzles. But which stories and which puzzles we like is…well…personal. If you want to get humans to think on a regular basis, then you have to make it personal to them. My own experience as both a teacher and a learner is that if a person is personally engaged then he or she can often learn quite quickly and eagerly. The same cannot often be said of somebody who is personally disengaged. Of course, one can be personally engaged without having a personalized learning experience, if by the latter you mean that the student chooses the work. But the point I made at the top of the post is that “personal” is inherent to the person. The student may not decide what work to do, but she and only she always decides whether or not to engage with that work. When the work is not personalized, a good teacher is always performing acts of persuasion, trying to help students find personal reasons to engage.

Meyer is latching onto something different. By “personal” he seems to mean “solitary,” and I interpret him to be responding specifically to adaptive systems, which are often labeled “personalized learning” (as well as “new and improved” and “99.44% pure”). First of all, in and of themselves, adaptive systems are often not personalized in the sense that I described above. They are customized, in that they respond to the individual learner’s knowledge and skill gaps, but they are not personalized. Customized solitary instruction has its place, as I described in my post about what teachers should know about adaptive systems. Customized instruction can also be personalized—for example, students can choose their path down a skill tree on Khan Academy. But I think Dan’s main point is that many of the more interesting and potent learning experiences tend to happen when humans talk with other intelligent humans. We learn from each other, traveling down paths that machines can’t take us yet (and probably won’t be able to for quite a while). It is possible for a learning experience to be simultaneously social and personalized, for example, when students individually work on problems they choose that are interesting to them but then discuss their ideas and solutions with their classmates.

So, to sum up:

  1. Humans are generally pretty good at learning what they want to learn (but can get stuck sometimes).
  2. Help from good teachers can enable humans to learn more effectively than they can on their own in many cases.
  3. Sometimes solitary study can be helpful, particularly for practicing weak skills.
  4. Conversations with other humans often lead to rich, powerful, and personal learning experiences that are difficult or impossible to have on one’s own.
  5. All learning is personal. Some instruction is personalized to a student’s individual interests and choices, and some is customized to a student’s individual skills and knowledge. Some is both and some is neither.
  6. Personalized instruction may or may not include social learning activities.
  7. Customized instruction may or may not include some personalization.

Why do we make this stuff so complicated?

The post “Personalized Learning” Is Redundant appeared first on e-Literate.

InstructureCon: Canvas LMS has different competition now

Thu, 2014-06-19 05:27

For the first few years of the Canvas LMS, Instructure’s core message was ‘Canvas is better than Blackboard’. This positioning was thinly veiled in the company’s 2011 spoof of the Apple / 1984 commercial and even reached the level of gloating in a company blog post commenting on Blackboard’s strategy reversal in 2012. Instructure made their name by being the anti-Blackboard.

At InstructureCon 2014, there was hardly a mention of Blackboard or any of the other LMS providers. In fact, most of the general sessions avoided any direct or indirect comparison of LMS products. Three observations surprised me this year:

  • Snow in June;
  • Canvas growth in K-12 markets; and
  • Lack of mention of LMS competitors or product one-upmanship.
Snow in June

[Photo: snow at the conference venue]

The weather eventually cleared up, however, with a high of 69 forecast for later today.

Company Growth, Even in K-12

Instructure is on a roll: in the 3.5 years since the launch of the Canvas LMS, the company has grown to more than 800 customers and more than 12 million end users registered in its system. During his keynote, Josh Coates showed a chart of that growth, including breakouts per market. In the two years since I was last at InstructureCon (2012), the company has almost tripled its number of higher ed clients and more than quadrupled its total number of customers.

Beyond the impressive overall growth, I was surprised to see that Instructure now appears to have approximately 3/4 the number of K-12 customers as they do higher ed customers. I also noticed a large number of K-12 users at the conference.

Graphic presented by Instructure at 2014 InstructureCon

It is worth pointing out a few caveats:

  • The K-12 market is more of “the wild west” (term used at Instructure) than higher ed, with a large number of unconnected districts without consistent purchasing patterns;
  • There are far more K-12 schools and districts than there are higher ed institutions, and Canvas market percentages are much lower in K-12 than higher ed;
  • Typical customer sizes can be much smaller in K-12, so I doubt that Instructure makes 3/4 the revenue in K-12 as they do in higher ed; and
  • These are self-reported student and faculty registrations, which include newly-signed schools that have not yet migrated from their old systems (this chart is more of a leading indicator than typical market share measures).
Canvas Moving to Next Stage

Given the significant growth over just 3.5 years, the change in tone from Instructure is striking. Call it a maturing process, or call it confidence from winning the majority of recent LMS selections in higher ed, but Instructure has subtly yet significantly changed its assumed competition. Rather than focusing on being better than Blackboard or Desire2Learn or Moodle or Sakai, the real competition for Canvas now seems to be the lack of meaningful adoption, whether the end users are working online or face-to-face.

Josh Coates’ keynote was close to 40 minutes in duration, and I estimate he mentioned Canvas for 4 minutes or less – and that was on system uptime and growth in adoption presented in an introspective manner. Rather than pitching the product, Josh spent the majority of the keynote talking about his fictional and real inspirations or heroes, including Katherine Switzer, Atticus Finch, Norman Borlaug, and Sophie Scholl. Is this the same Josh Coates from the 2013 Learning Impact Fight Club as described by Claude Vervoort?

LMS CEOs Panel: Leave all Political Correctness at the door, thanks! I was amazed. It all started smoothly but it did not take long for Instructure’s CEO Josh Coates to give a kick in the anthill. I could not believe my ears :) Not always constructive but surely entertaining!

For the general sessions and overriding conference themes, the primary product announcement was “Lossless Learning” – combining “the ease and efficiency of online learning with the magic of a face-to-face environment”. The idea is to use online tools to augment the face-to-face experience, with four tools to support this idea:

  • Canvas Polls – a built-in response system using iOS or Android devices to replace clickers;
  • Magic Marker – an iPad app allowing the instructor to observe and evaluate individual performance of students in group environments, integrated with Canvas gradebook;
  • Quiz Stats – an improved visualization of multiple-choice quizzes and item analysis; and
  • Learning Mastery for Students – the student view of mastery-based gradebook.

Mike Caulfield describes more of the minimally invasive assessment angle in this blog post.

Instructure has a new announcement about Canvas, and it’s in an area close to my heart. They are rolling out a suite of tools that allow instructors to capture learning data from in-class activities.

But Mike, you say, the LMS is evil, and more LMS is eviler. Why you gotta be Satan’s Cheerleader?

Well, here’s my take on that. The LMS is not evil. What is evil is making the learning environment of your class serve the needs of the learning management system rather than serve the needs of the students.

Leading this Lossless Learning effort is Jared Stein, whose role is to connect the Canvas product team with actual classroom usage and vice versa. When co-founder Devlin Daley left Instructure last year, I made the following observation:

While that official explanation makes sense, it doesn’t mean that Devlin’s departure will not affect Instructure. The biggest challenge they will face, in my opinion, is having someone out on the road, working with customers, asking why and what if questions. Just naming a person or two to this role is not the same as having the original vision and skills from a co-founder, although I would expect Jared Stein to play a key role in this regard.

What I believe I am seeing at InstructureCon is just how important Jared is becoming to Instructure’s strategy.

Rather than Canvas vs. Blackboard or Desire2Learn or Moodle or Sakai, the message now has shifted to more meaningful implementations of Canvas vs. shallow usage of an LMS.

Caution

I used to think that the biggest risk Instructure faced was its lack of focus on large online programs (University of Central Florida being the primary exception). No longer: the company has plenty of headroom to grow with its focus on augmenting traditional face-to-face or hybrid programs, especially with the K-12 and international markets open. The biggest risks I now see are:

  • Hubris – there is a fine line between the confidence that allows a company to look beyond other LMS providers and cockiness; if the company gets too comfortable with its growth and becomes cocky, it could take a fall, as other LMS providers have shown.
  • Focus – as the company grows and adds customers, it will be increasingly difficult to maintain the focus that has led them to have a clean, intuitive user interface and to avoid feature bloat.

We’ll keep watching Instructure and their Canvas product suite, but we’ll also look at other LMS providers and how they might change the market.

The post InstructureCon: Canvas LMS has different competition now appeared first on e-Literate.

Happy Birthday, e-Literate

Wed, 2014-06-18 14:17

Ten years ago today, I wrote my first blog post on e-Literate. At the time, my only real ambitions were to learn about blogging and use the fact that I was writing in public to force myself to think more clearly about what I believed about educational technologies. I never could have imagined that, not only would I still be writing a decade later, but that it would become such a large and important part of my professional and personal life. I have met many wonderful people and had unbelievable opportunities as a result of this blog.

At the beginning of my career as an educator, I was a middle school and high school teacher. I am the son of teachers, brother of teachers, husband of a teacher, and father-in-law of a teacher. I still consider myself to be a teacher. I particularly loved teaching 8th grade and plan to go back to it someday. Occasionally I wonder if I’m doing the right thing by doing what I do now instead of going back to the classroom immediately. But then I run into people like Jon and Chris Boggiano at someplace like the GSV conference. These two brothers enlisted in the Army after 9/11, went to West Point, and served in active combat duty in places like Iraq, Kosovo, and Afghanistan. After they got out of the Army, they turned down lucrative careers in corporate America to create a startup training veterans and other folks in green jobs. They were recognized by the White House for their success. They then sold their startup, enrolled in graduate school at Stanford together, used their proceeds to become educational technology angel investors, and are busy planning their next startup. Here’s a video of them in action, presenting their thoughts on childhood and educational philosophy at a Stanford event:

[Video embedded in the original post]

One more thing: Jon and Chris are my former seventh and eighth grade students.

Their choices make me feel better about mine. If they think that developing technology in the service of education is a good way to try to change the world, then I am going to do everything I can to help them and people like them. I don’t always trust my judgment about what’s the best thing to do for our future, but I trust theirs.

They are also a reminder that we often don’t know the consequences of the actions we take. I was absolutely stunned to see their names on the attendee list for the ed tech conference, but I also felt incredibly proud. Jon and Chris deserve 100% credit for who they have become and what they have accomplished, but I feel privileged to have played a small part in their grand adventure.

And in that spirit, I want to ask you all if you might consider giving e-Literate a birthday present. When Phil and I write our blog posts, we are in our respective homes. Maybe we talk to each other a little bit about what we’re writing, but it’s mostly a solitary experience. We get sporadic feedback, often long after we have written our pieces. We get some idea of what people are thinking by the comments and the web traffic, but we don’t know if what we’re doing really matters. So if something we’ve written here has made a difference, if there was a decision you made differently or some post that influenced your thinking, we’d be grateful if you would let us know in comments or in email. It would mean a great deal to us to know that we’ve had even a small impact.

Thanks for sticking with us, and I hope that you will still find us worth reading ten years from now.

The post Happy Birthday, e-Literate appeared first on e-Literate.

WCET, UPCEA & Sloan-C call on DOE to change State Authorization proposal

Mon, 2014-06-16 20:46

What does it take to get all of the higher education institutions and associations to agree? Apparently the answer is for the Department of Education to propose its new State Authorization regulations.

As part of the DOE’s negotiated rulemaking process over the past half year, representatives ranging from schools (Columbia University, Youngstown State University, Benedict College, Santa Barbara City College, Clemson University, MIT, Capella University) to higher ed associations (WCET) were unanimous in their rejection of the proposed State Authorization rules. As Russ Poulin wrote for WCET:

On Tuesday May 20, the Committee had its final vote on the proposed language. I voted “no.” I was joined in withholding consent by all the representatives of every higher education sector. Nine out of sixteen negotiators voting “no” is a high ratio.

Note that only one of the mentioned groups is a for-profit university – the sector purportedly causing the need for the regulations. I wrote a post arguing that the proposed rules represented a dramatic increase in control over distance education that would cause a significant increase in compliance and administrative overhead both for colleges and universities and for the states themselves.

In the end, predictably, the rulemaking process ended in a lack of consensus that allows the DOE to propose whatever language they desire. The latest proposal was from DOE, and it would make sense for the final proposal to follow this language closely.

Here’s what is newsworthy – the idea and proposed language is so damaging to innovation in higher ed (which the DOE so fervently supports in theory) and so burdensome to institutions and state regulators that three higher ed associations have banded together to oppose the proposed rules. WCET (WICHE Cooperative on Educational Technologies), UPCEA (University Professional and Continuing Education Association) and Sloan-C (Sloan Consortium) wrote a letter to Secretary Arne Duncan calling for the DOE to reconsider their planned State Authorization regulations. As the intro states [emphasis added]:

The member institutions of our three organizations are leaders in the practice of providing quality postsecondary distance education to students throughout the nation and the world. Our organizations represent the vast majority of institutions that are passionate about distance education across the country and across all higher education sectors.

For the first time our organizations are joining with one voice to express our concern over the Department of Education’s “state authorization for distance education” proposal(1) that was recently rejected by most of the members of the Program Integrity and Improvement Negotiated Rulemaking Committee. Our comments are focused on the final draft proposal presented to the Committee. We believe the final draft represents the most current thinking of Department staff as they construct a regulation for public comment.

We are eager to promote policies and practices that protect consumers and improve the educational experience of the distance learner. Unfortunately, the final draft regulation would achieve neither of those goals.

The impact of the proposed regulations would be large-scale disruption, confusion, and higher costs for students in the short-term. In addition, there would be no long-term benefits for students. This letter briefly outlines our concerns and provides recommendations that achieve the Department’s goals without disrupting students enrolling in distance education programs across state lines.

As an example of the problems with the latest proposal:

Second, when pressed to define an “active review,” the Department provided a short list of criteria that states could use in the review, such as submitting a fiscal statement or a list of programs to be offered in the state. While it may sound simple to add a few review criteria, state regulators cannot act arbitrarily. Their authorization actions must be based on state laws and regulations. Therefore, state laws would need to be changed and the state regulators would need to add staff to conduct the necessary reviews. Our analysis estimates that 45 states would need to make these changes. This is a large amount of activity and added costs for what appears to be a “cursory” review. These reviews will likely not change a decision regarding an institution’s eligibility in a state. There is no benefit for the student.

The letter does not just list objections but also offers eight concrete recommendations that would help DOE achieve its stated goals.

Michael and I fully endorse this letter and also call on the DOE to rethink its position.

The full letter can be found at WCET’s site along with an explanatory blog post.

The post WCET, UPCEA & Sloan-C call on DOE to change State Authorization proposal appeared first on e-Literate.

Starbucks Paying for Employees’ Tuition at ASU Online

Mon, 2014-06-16 10:08

This is a big deal:

Starbucks will provide a free online college education to thousands of its workers, without requiring that they remain with the company, through an unusual arrangement with Arizona State University, the company and the university will announce on Monday.

The program is open to any of the company’s 135,000 United States employees, provided they work at least 20 hours a week and have the grades and test scores to gain admission to Arizona State. For a barista with at least two years of college credit, the company will pay full tuition; for those with fewer credits it will pay part of the cost, but even for many of them, courses will be free, with government and university aid.

Over the past few decades, America has slowly but surely been transitioning from a system in which college education was treated as a public good (and therefore subsidized by taxpayers) to one in which it is treated as a private good (and therefore paid for entirely by students and their families). And while an employer benefit is no substitute for that public model, it is interesting and important that Starbucks is positioning college tuition the way companies position health insurance plans—as a benefit they use to compete for better workers.

This is not an entirely new idea. Many companies have tuition reimbursement, although it often comes with more restrictions and is typically aimed at white-collar workers. A while back, Wal-Mart made headlines by offering heavily subsidized (but not free) college credit in partnership with APU. Starbucks takes this to the next level. Since both Wal-Mart and Starbucks have reputations as union busters, it will be interesting to see how their respective college subsidization moves impact their struggles with their labor forces. Will tuition help them lower demand for unionization? Will it become another bargaining chip at the negotiating table?

I wrote a while back about the idea of reviving the apprenticeship for the digital age and gave an example of an Indian tech company that is doing it. I think we’re going to see a lot more variations on the theme of employer-funded education in the future.

You can learn more about the Starbucks college program at their website.

The post Starbucks Paying for Employees’ Tuition at ASU Online appeared first on e-Literate.

Why Google Classroom won’t affect institutional LMS market … yet

Sun, 2014-06-15 16:54

Yesterday I shared a post about the new Google Classroom details that are coming out via YouTube videos, and as part of that post I made the following statement [emphasis added]:

I am not one to look at Google’s moves as the end of the LMS or a complete shift in the market (at least in the short term), but I do think Classroom is significant and worth watching. I suspect this will have a bigger impact on individual faculty adoption in higher ed or as a secondary LMS than it will on official institutional adoption, at least for the next 2 – 3 years.

The early analysis is based on this video that shows some of the key features:

There is also a new video showing side-by-side instructor and student views that is worth watching.

Here’s why I believe that Classroom will not affect the LMS market for several years. Google Classroom is a slick tool that appeals to individual instructors whose schools use Google Apps for Education (GAE) – primarily K-12 instructors but also higher ed faculty members. The tight integration of Google Drive, Google+ and GAE rosters allows for easy creation of course sites by the instructor, easy sharing of assignments and documents (particularly where the instructor creates the GDrive document and has students directly edit and add to that document), and easy feedback and grading of individual assignments. Working within the GAE framework, there are a lot of possibilities for individual instructors or instructional designers to expand the course tools. All of these features are faculty-friendly and help deliver on Google’s promise of “More time for teaching; more time for learning”.

But these features are targeted at innovators and early adopter instructors who are willing to fill in the gaps themselves.

  • The course creation, including setting up of rosters, is easy for an instructor to do manually, but it is manual. There has been no discussion that I can find showing that the system can automatically create a course, including roster, and update over the add / drop period.
  • There is no provision for multiple roles (student in one class, teacher in another) or for multiple teachers per class.
  • The integration with Google Drive, especially with Google Docs and Sheets, is quite intuitive. But there is no provision for PDF or MS Word docs or even publisher-provided courseware.
  • There does not appear to be a gradebook – just grading of individual assignments. There is a button to export grades, and I assume you could combine the exported grades into a custom Google Sheets spreadsheet or pick a GAE gradebook app (a rough sketch of that manual stitching follows this list). But there is no consistent gradebook available for all instructors within an institution to use and for all students to see.
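
As a rough illustration of the manual workaround mentioned in the last bullet, here is a minimal sketch that stitches per-assignment grade exports into a single gradebook. It assumes each assignment is exported as its own CSV with “Student” and “Grade” columns; the folder layout and column names are hypothetical, since Classroom itself defines no such consolidated format.

    # Sketch of the manual gradebook workaround: merge per-assignment grade
    # exports into one table. The file layout and column names ("Student",
    # "Grade") are assumptions for illustration, not a real Classroom export.
    import glob
    from pathlib import Path

    import pandas as pd

    gradebook = None
    for path in sorted(glob.glob("exports/*.csv")):  # one hypothetical CSV per assignment
        assignment = Path(path).stem
        df = pd.read_csv(path)[["Student", "Grade"]].rename(columns={"Grade": assignment})
        gradebook = df if gradebook is None else gradebook.merge(df, on="Student", how="outer")

    if gradebook is not None:
        gradebook.to_csv("combined_gradebook.csv", index=False)  # import into Sheets by hand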

For higher ed institutions in particular, we are just now getting to the stage where the majority of faculty use the institutional LMS. I am seeing more and more surveys of individual institutions where 70+ % of faculty use the LMS for most of their courses. What this means, however, is that we have different categories of adopters for the institutional LMS – the early majority (characterized by a pragmatic approach) and the late majority (characterized by a conservative approach) – as shown by the technology adoption curve. I am showing the version that Geoffrey Moore built on top of the Everett Rogers base model.

[Diagram: technology adoption curve (Moore’s version of the Rogers model)]

With adoption often at 50% or more of faculty, the institution has to serve both the group on the left (innovators and early adopters) and the larger group on the right (the early and late majority more than laggards). As poorly designed as some institutional LMS solutions are, they typically allow automatic course and roster creation with updates, sharing of multiple document types, integrated standard gradebooks, and many other features.

Institutions can (and really should) allow innovators and early adopters to try out new solutions and help create course designs not bound by the standard LMS implied pedagogy, but institutions cannot ignore the majority faculty who are typically unwilling to spend their own time to fill in the technology gaps – especially now that these faculty are just getting used to LMS usage.

None of this argues that Google Classroom is an inferior tool – it is just not designed to replace the full-featured LMS. Remember that Google is a technology-vision company that is comfortable putting out new tools before it understands how the tools will be used. Google is also comfortable playing the long game, getting more and more instructors and faculty using the toolset, giving feedback, and pushing it forward. This process will take some time to play out – at least 2 or 3 years, in my opinion, before a full institutional LMS may be available, if Google likes the direction Classroom usage is going.

Google Classroom does attempt to address some of the instructor use cases, but it is not designed as a holistic product. Think of Classroom as ‘let’s see how to tie existing Google tools together to advance the ball in the general course site world’. It is still technology first, and more specifically Google technology first. The use cases are simple (e.g. one instructor sharing a GDrive-based assignment with students, who edit it and then submit it for feedback), but there are many possibilities for clever faculty to innovate.

In the near term, Google Classroom will likely be a factor for individual faculty adoption (innovators, early adopters) at schools with GAE licenses, or even as a secondary LMS, but not as a replacement for the institutional LMS.

The post Why Google Classroom won’t affect institutional LMS market … yet appeared first on e-Literate.