
Oracle AppsLab

Driving Innovation

Amazon Echo, The Future or Fad?

Thu, 2014-12-18 16:10

Update: I have now “hacked” the API to control Hue lights and initiate a phone call with Twilio. Check it out here: https://www.youtube.com/watch?v=r58ERvxT0qM
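
The post doesn’t include the code behind that demo, but as a rough illustration, here is a minimal Python sketch of the two pieces: switching a Hue light through the bridge’s local REST API and starting a call through Twilio’s REST API. The bridge IP, Hue API username, Twilio credentials, and phone numbers are placeholder assumptions, not values from the video.

```
# Minimal sketch (not the demo's actual code): turn on a Hue light
# and place an outbound call with Twilio. All credentials, the bridge
# IP, and the phone numbers below are placeholder assumptions.
import requests
from twilio.rest import Client

HUE_BRIDGE = "192.168.1.10"          # local IP of the Hue bridge (assumed)
HUE_USER = "your-hue-api-username"   # username created via the bridge's /api endpoint

def lights_on(light_id=1):
    # The Hue bridge exposes a local REST API; a PUT to the light's
    # state resource switches it on.
    url = "http://{}/api/{}/lights/{}/state".format(HUE_BRIDGE, HUE_USER, light_id)
    return requests.put(url, json={"on": True}).json()

def place_call(to_number, from_number):
    # Twilio's REST API initiates an outbound call; the url parameter
    # points at TwiML telling Twilio what to do when the call connects.
    client = Client("ACCOUNT_SID", "AUTH_TOKEN")
    return client.calls.create(
        to=to_number,
        from_=from_number,
        url="http://demo.twilio.com/docs/voice.xml",
    )

if __name__ == "__main__":
    lights_on()
    place_call("+15551234567", "+15557654321")
```

In the video, those calls would presumably be triggered by whatever intercepts the Echo voice command; the snippet only covers the lights and the phone call themselves.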

Last November Amazon announced a new kind of device, part speaker, part personal assistant, called the Amazon Echo. If you saw the announcement, you might have also seen their quirky infomercial.

The parodies came hours after the announcement, and they were funny. But dismissing this as just a Siri/Cortana/Google Now copycat might miss the potential of this “always listening” device. To be fair, this is not the first device that can do this. I have a Moto X with an always-on chip waiting for a wake word (“OK Google”), and Google Glass does the same thing (“OK Glass”). But the fact that I don’t have to hold the device, be near it, or push a button (Siri) makes this cylinder kind of magical.

It is also worth noting that NONE of these devices is really “always-listening-and-sending-all-your-conversations-to-the-NSA”; the “always listening” part happens locally. Once you say the wake word, you had better make sure you don’t spill the beans for the next few seconds, which is the window during which the device listens and does an STT (speech-to-text) operation in the cloud.

We can all start seeing through Amazon’s motives and why this is good for them. Right off the bat you can buy songs with a voice command. You can also add “stuff” to your shopping list, which reminds me of a similar product Amazon introduced last year, Amazon Dash, which unfortunately is only available in select markets. The fact is that Amazon wants us to buy more from them, and for some of us that is awesome, right? Prime, two-day shipping, drone delivery, etc.

I have been eyeing these “always listening” devices for a while. The Ubi ($300) and the Ivee ($200) were my two other choices. Both have had mixed reviews, and both have yet to deliver on the promise of an SDK or API. Amazon Echo doesn’t have an SDK yet either, but Amazon has posted a link where you can show the Echo team your interest in developing apps for it.

The promise of a true artificial intelligence assistant, or personal contextual assistant (PCA), is coming soon to a house or office near you. Which brings me to my true interest in Amazon Echo: the possibility of creating a “Smart Office” where the assistant will anticipate my day-to-day tasks, set up meetings, remind me of upcoming events, and analyze and respond to email and conversations, all tied to my Oracle Cloud of course. The assistant will also control physical devices in my house/office: “Alexa, turn on the lights,” “Alexa, change the temperature to X,” etc.

All in all, it has been fun to request holiday songs around the kitchen and dining room (“Alexa, play Christmas music”). My kids are having a field day trying to ask the most random questions. My wife, on the other hand, is getting tired of the constant interruption of music, but I guess it’s the novelty. We shall see if my kids are still friendly to Alexa in the coming months.

In my opinion, the people dismissing Amazon Echo will be the same people who said: “Why do I need a music player on my phone? I already have ALL my music collection on my iPod” (iPhone naysayers circa 2007), or “Why do I need a bigger iPhone? That ’Pad thing is ridiculously huge!” (iPad naysayers circa 2010). And now I have already heard, “Why do I want a device that is always connected and listening? I already have Siri/Cortana/Google Now” (Amazon Echo naysayers circa 2014).

Agree, disagree? Let me know.

New Adventures in Virtual Reality

Tue, 2014-12-16 18:49

Back in the early 90s I ventured into virtual reality and was sick for a whole day afterwards.

We have since learned that people become queasy when their visual systems and vestibular systems get out of sync. You have to get the visual response lag below a certain threshold. It’s a very challenging technical problem which Oculus now claims to have cracked. With ever more sophisticated algorithms and ever faster processors, I think we can soon put this issue behind us.

Anticipating this, there has recently been a resurgence of interest in VR. Google’s Cardboard project (and Unity SDK for developers) makes it easy for anyone to turn their smartphone into a VR headset just by placing it into a cheap cardboard viewer. VR apps are also popping up for iPhones and 3D side-by-side videos are all over YouTube.

Image from Ultan’s Instagram

Some of my AppsLab colleagues are starting to experiment with VR again, so I thought I’d join the party. I bought a cheap cardboard viewer at a bookstore. It was a pain to put together, and my iPhone 5S rattles around in it, but it worked well enough to give me a taste.

I downloaded an app called Roller Coaster VR and had a wild ride. I could look all around while riding and even turn 180 degrees to ride backwards! To start the ride I stared intently at a wooden lever until it released the carriage.

My first usability note: between rides it’s easy to get turned around so that the lever is directly behind you. The first time I handed it to my wife she looked right and left but couldn’t find the lever at all. So this is a whole new kind of discoverability issue to think about as a designer.

Despite appearances, my roller coaster ride (and subsequent zombie hunt through a very convincing sewer) is research.  We care about VR because it is an emerging interaction that will sooner or later have significant applications in the enterprise.  VR is already being used to interact with molecules, tumors, and future buildings, use cases that really need all three dimensions.  We can think of other use cases as well; Jake suggested training for service technicians (e.g. windmills) and accident re-creation for insurance adjusters.

That said, both Jake and I remain skeptical.  There are many problems to work through before new technology like this can be adopted at an enterprise scale. Consider the idea of immersive virtual meetings.  Workers from around the world, in home offices or multiple physical meeting rooms, could instantly meet all together in a single virtual room, chat naturally with each other, pick up subtle facial expressions, and even make presentations appear in mid air at the snap of a finger.  This has been a holy grail for decades, and with Oculus being acquired by Facebook you might think the time has finally come.

Not quite yet.  There will be many problems to overcome first, not all of them technical.  In fact VR headsets may be the easiest part.

A few of the other technical problems:

  • Bandwidth.  I still can’t even demo simple animations in a web conference because the U.S.  internet system is too slow.  I could do it in Korea or Sweden or China or Singapore, but not here anytime soon.  Immersive VR will require even more bandwidth.
  •  Cameras. If you want to see every subtle facial expression in the room, you’ll need cameras pointing at every face from every angle (or at least one 360 camera spinning in the center of the table).  For those not in the room you’ll need more than just a webcam pointing at someone’s forehead, especially if you want to recreate them as a 3D avatar.  (You’ll need better microphones too, which might turn out to be even harder.)  This is technically possible now – Hollywood can do it – but it will be a while before it’s cheap, effortless, and ubiquitous.
  •  Choreography.  Movie directors make it look easy, and even as individuals we’re pretty good about scanning a crowded room and following a conversation.  But in a 3-dimensional meeting room full of 3-dimensional people there are many angles to choose from every second.  We will expect our virtual eyes to capture at least as much detail as our real eyes, which instinctively turn to catch words and expressions before they happen.  Even if we accept that any given participant will see a limited subset of what the overall system can see, creating a satisfying immersive presence will require at least some artificial intelligence.  There are probably a lot of subtle challenges like this.

And a non-technical problem:

  • Privacy.  Any virtual meeting which can be transmitted can also be recorded and viewed by others not in the meeting.  This includes off-color remarks (now preserved for the ages, or at least for future office parties), unflattering camera angles, surreptitious nose picking, etc.  We’ve learned from our own research that people *love* the idea of watching other people but are often uncomfortable about being watched themselves.  Many people are just plain camera shy – and even less fond of microphones.  Some coworkers are uncomfortable with our team’s weekly video conferences.  “Glasshole” is now a word in the dictionary – and glassholes sometimes get beaten up.

So for virtual meetings to happen on an enterprise scale, all of the above problems will have to be solved and some of our attitudes will have to change.  We’ll have to find the right balance as a society – and the lawyers will have to sign off on it.  This may take a while.

But that doesn’t mean our group won’t keep pushing the envelope (and riding a few virtual roller coasters).  We just have to balance our nerdish enthusiasm with a healthy dose of skepticism about the speed of enterprise innovation.

What are your thoughts about the viability of virtual reality in the enterprise?  Your comments are always appreciated!