
Oracle AppsLab

Driving Innovation

Put Glance on It

Fri, 2016-02-05 02:59

Because I live in Portland, I’m often asked if “Portlandia” is accurate.

It is, mostly, and so it seems appropriate to channel an early episode to talk about Glance, our wearables framework.

Actually, Glance has grown beyond wearables to support cars and other devices, the latest of which is Noel’s (@noelportugal) gadget du jour, the LaMetric Time (@smartatoms).

Insert mildly amusing video here.

And of course Noel had to push Glance notifications to LaMetric, because Noel. Pics, it happened.


The text is truncated, and I tried to take a video of the scrolling notification, but it goes a bit fast for the camera. Beyond just the concept, we’ll have to break up the notification to fit LaMetric’s model better, but this was only a few minutes of effort from Noel. I know, because he was sitting next to me while he coded it.
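For the curious, pushing a Glance-style notification to the LaMetric Time can be done through the device’s local notification API. Below is a minimal, hedged sketch in Python; the device IP, API key, icon ID, and even the exact endpoint and port are assumptions to verify against LaMetric’s developer documentation, and this is not Noel’s actual code.

    import requests

    # Assumptions: device IP and API key are placeholders; double-check the exact
    # endpoint, port, and auth against LaMetric's local device API documentation.
    DEVICE_IP = "192.168.1.50"
    API_KEY = "device-api-key-from-the-lametric-app"

    def push_notification(text):
        """Push a simple text frame to a LaMetric Time on the local network."""
        url = "http://%s:8080/api/v2/device/notifications" % DEVICE_IP
        body = {
            "priority": "info",
            "model": {
                # Long Glance messages would be split across several frames
                # to fit LaMetric's frame-based display model.
                "frames": [{"icon": "i120", "text": text}]
            },
        }
        resp = requests.post(url, json=body, auth=("dev", API_KEY), timeout=5)
        resp.raise_for_status()

    push_notification("Expense report approved")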

In case you need a refresher, here’s Glance on a bunch of other devices.


I didn’t have a separate camera so I couldn’t show the Android notification.


We haven’t updated the framework for them, but if you recall, Glance also worked on Google Glass and Pebble in its 1.0 version.



Come Visit the OAUX Gadget Lab

Thu, 2016-02-04 11:28

In September 2014, Oracle Applications User Experience (@usableapps) opened a brand new lab that showcases Oracle’s Cloud Applications, specifically the many innovations that our organization has made to and around Cloud Applications in the past handful of years.

We call it the Cloud User Experience Lab, or affectionately, the Cloud Lab.

Our team has several projects featured in the Cloud Lab, and many of our team members have presented our work to customers, prospects, partners, analysts, internal groups, press, media and even schools and Girl and Boy Scout troops.

In 2015, the Cloud Lab hosted more than 200 such tours, actually quite a bit more, but I don’t have the exact number in front of me.

Beyond the numbers, Jeremy (@jrwashely), our group vice president, has been asked to replicate the Cloud Lab in other places, on the HQ campus and abroad at other Oracle campuses.

People really like it.

In October 2015, we opened an adjoining space to the Cloud Lab that extends the experience to include more hands-on projects. We call this new lab the Gadget Lab, and it features many more of our projects, including several you’ve seen here.

In the Gadget Lab, we’re hoping to get people excited about the possible and give them a glimpse of what our team does because saying “we focus on emerging technologies” isn’t nearly as descriptive as showing our work.


So, the next time you’re at Oracle HQ, sign up for a tour of the Cloud and Gadget Labs and let us know what you think.

The MagicBand

Wed, 2016-02-03 21:59

Editor’s note: Here’s the first post from our new-ish researcher, Tawny. She joined us back in September, just in time for OpenWorld. After her trip to Disney World, she talked eagerly about the MagicBand experience, and if you read here, you know I’m a fan of Disney’s innovative spirit.

Enjoy.

Planning a Disney World trip is no small feat. There are websites that display crowd calendars to help you find the best week to visit and the optimal parks to visit on each of those days so you can take advantage of those magic hours. Traveling with kids? Visiting during the high season? Not sure which FastPass+ to make reservations for?

There are annually updated guidebooks dedicated to providing you the most optimal attraction routes and FastPass+ reservations, based on thousands of data points for each park. Beginning in 2013, Disney introduced the MagicBand, a waterproof bracelet that acts as your entry ticket, FastPass+ holder, hotel key and payment method. The bands are part of the MyMagic+ platform, consisting of four main components: MagicBands, FastPass+, My Disney Experience, and PhotoPass Memory Maker. PassPorter Boards lists everything you can do with a MagicBand.

I got my chance to experience the MagicBand early this January.


 

These are both open edition bands. This means that they do not have customized lights or sounds at FP+ touchpoints. We bought them at the kiosk at Epcot after enviously looking on as other guests conveniently accessed park attractions without having to take out their tickets! It was raining, and the idea of having to take anything out of our bags under our ponchos was not appealing.

The transaction was quick and the cashier kindly linked our shiny new bands to our tickets. Freedom!!!

The band made it easy for us to download photos and souvenirs across all park attractions without having to crowd around the photo kiosk at the end of the day. It was great being able to go straight back to our hotel room while looking through our Disney photo book in the mobile app!

Test Track at Epcot made the most use of the personalization aspect of these bands. Before the ride, guests could build their own cars with the goal of outperforming other cars in 4 key areas: power, turn handling, environmental efficiency and responsiveness.


After test driving our car on the ride, there were still many things we could do with our car such as join a multiplayer team race…we lost :(

What was really interesting was watching the guests fortunate enough to have personalized entry colors and sounds show them off, a coveted status symbol amongst MagicBand collectors. The noise and colors were a mini attraction on their own! I wish our badge scanners said hi to us like this every morning…

 

 

When used in conjunction with the My Disney Experience app, there is a lot of potential:

  • Order food ahead + scan to pick up, or have food delivered while waiting in a long line.
  • Heart sensor + head-facing camera to take pictures within an attraction to capture happy moments.
  • Haptic feedback to tell you that your table is ready at a restaurant. Those pagers are bulky.

So what about MagicBands for the enterprise context?

Hospitals may benefit, but some argue that the MagicBand model works exclusively for Disney because of its unique ecosystem and the heavy cost it would take to implement it. The concept of the wearable is no different from the badges workers have now.

Depending on the permissions given to the badgeholder, she can badge into any building around the world.

What if we extend our badge capabilities to allow new/current employees to easily find team members to collaborate and ask questions?

What if the badge carried all of your desktop and environmental preferences from one flex office to the desk so you never have to set up or complain about the temperature ever again?

What if we could get a push notification that it’s our cafeteria cashier’s birthday as we’re paying and make their day with a “Happy Birthday?”

That’s something to think about.

M2M, the Other IoT

Thu, 2016-01-28 11:50

Before IoT became ‘The’ buzzword, there was M2M (machine to machine). Some industries still refer to IoT as M2M, but overall the term Internet of Things has become the norm. I like the term M2M because it better describes what IoT is meant to do: machines talking to other machines.

This year our team once again participated in the AT&T Developer Summit 2016 hackathon. With M2M in our minds, we created a platform to allow machines and humans to report extraordinary events in their neighborhood. Whenever a new event was reported (by machine or human), devices and people (notified by an app) connected to the platform could react accordingly. We came up with two possible use cases to showcase our idea.


Virtual Gated Community

Gated communities are a great amenity for those wanting privacy and security. The problem is that these communities usually come with a high price tag. So we came up with a turnkey solution for a virtual gate using M2M. We created a device using the Qualcomm DragonBoard 410c board with wifi and bluetooth capabilities. We used a common motion sensor and a camera to detect cars and people not belonging to the neighborhood. Then, we used Bluetooth beacons that could be placed on residents’ keychains. When a resident drove (or walked) by the virtual gate, it would not trigger the automated picture and report to the system, but if someone without the Bluetooth beacon drove by, the system would log and report it.
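To make the idea concrete, here is a minimal sketch of that virtual-gate logic as a Python loop; the motion sensor, camera, BLE scanning, and reporting helpers are hypothetical stand-ins for the code that actually ran on the DragonBoard.

    import time

    # Hypothetical whitelist of beacon addresses on residents' keychains.
    RESIDENT_BEACONS = {"aa:bb:cc:dd:ee:01", "aa:bb:cc:dd:ee:02"}

    def motion_detected():
        return False                  # stand-in for the motion sensor read

    def nearby_beacons():
        return set()                  # stand-in: BLE addresses currently in range

    def take_photo():
        return "/tmp/capture.jpg"     # stand-in: capture an image, return its path

    def report_event(kind, photo=None):
        print("report:", kind, photo) # stand-in: post the event to the platform

    while True:
        if motion_detected():
            if RESIDENT_BEACONS & nearby_beacons():
                pass                  # a resident's beacon is in range: no report
            else:
                # Unknown car or person: snap a photo and report it.
                report_event("unknown_visitor", photo=take_photo())
        time.sleep(1)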

We also created an app, so residents could get notifications as well as report different events, which brings me to our second use case.

Devices reacting to events

We used the AT&T Flow Designer and M2X platform to create event workflows with notifications. A user or a device could subscribe to receive only the events they care about, such as a lost dog or cat, water leaks, etc. The really innovative idea here is that devices can also react to certain events. For example, a user could configure their porch lights to automatically turn on when someone nearby reported suspicious activity. If everyone on the street did the same, reporting such an event could effectively turn on all the porch lights on the street at once, which could be a pretty good crime deterrent.
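As a sketch of a device reacting to events, the loop below shows how a porch-light subscriber might behave; the event-polling and light-control helpers are hypothetical placeholders for the Flow Designer / M2X plumbing we actually used.

    import time

    SUBSCRIBED = {"suspicious_activity", "lost_pet"}

    def fetch_new_events():
        # Hypothetical: poll the Neighborhood platform for events since the last
        # check, e.g. [{"type": "suspicious_activity", "street": "Elm St"}]
        return []

    def porch_light_on():
        print("porch light on")       # hypothetical: drive a relay or smart bulb

    while True:
        for event in fetch_new_events():
            if event["type"] not in SUBSCRIBED:
                continue
            if event["type"] == "suspicious_activity":
                porch_light_on()      # every subscribed house on the street reacts
            # human subscribers would get a push notification instead
        time.sleep(10)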

We called our project “Neighborhood”, and we are still amazed at how much we were able to accomplish in a mere 20+ hours.


SafeDrop – Part 2: Function and Benefits

Mon, 2016-01-25 02:14

SafeDrop is a secure box for receiving a physical package delivery without the need for the recipient to be present. If you recall, it was my team’s project at the AT&T Developer Summit Hackathon earlier this month.

SafeDrop is implemented with an Intel Edison board at its core, coordinating various peripheral devices to produce a secure receiving product, and it won second place for “Best Use of Intel Edison” at the hackathon.

SafeDrop box with scanner

Components built in the SafeDrop

While many companies have focused on the online security of eCommerce, last-mile package delivery is still very much insecure. eCommerce is ubiquitous, and people need some way to receive the physical goods.

The delivery company tells you the order will be delivered on a particular day. You can wait at home all day to receive the package, or let it sit in front of your house and risk someone stealing it.

Every year there are reports of package theft during the holiday season, but more importantly, the inconvenience of staying home to accept goods and the lack of peace of mind really annoy many people.

With SafeDrop, your package is delivered, securely!

How SafeDrop works:

1. When a recipient is notified of a package delivery with a tracking number, he enters that tracking number in a mobile app, which registers with the SafeDrop box that it should expect a package with that tracking number.

2. When the delivery person arrives, he just scans the tracking number barcode on the SafeDrop box; that barcode is the unique key to open the SafeDrop. Once the tracking number is verified, the SafeDrop opens.

Delivery person scans the package in front of SafeDrop

3. When the SafeDrop is opened, a video is recorded for the entire time the door is open, as a security measure. If the SafeDrop is not closed, a loud buzzer sounds until it is closed properly.

Inside of SafeDrop: Intel Edison board, sensor, buzzer, LED, servo, and webcam

4. Once the package is inside the SafeDrop, a notification is sent to the recipient’s mobile app, indicating the expected package has been delivered, along with a link to the recorded video.
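Putting those four steps together, here is a rough sketch of the control flow on the Edison; the hardware and notification helpers below are placeholders, not the code we actually ran at the hackathon.

    expected = set()                  # tracking numbers registered via the mobile app

    # Hypothetical helpers standing in for the servo, webcam, buzzer, and the
    # push-notification service used by the mobile app.
    def unlock_door(): print("unlocked")
    def lock_door(): print("locked")
    def start_recording(): return "video-001"
    def stop_recording(video): print("saved", video)
    def wait_for_door_close_with_buzzer(): pass
    def notify_recipient(message, video): print(message, video)

    def register_package(tracking_number):
        """Step 1: the recipient enters a tracking number in the mobile app."""
        expected.add(tracking_number)

    def handle_scan(tracking_number):
        """Steps 2-4: the delivery person scans the barcode on the box."""
        if tracking_number not in expected:
            return                    # not an expected package: stay locked
        video = start_recording()     # record the whole time the door is open
        unlock_door()
        wait_for_door_close_with_buzzer()
        lock_door()
        stop_recording(video)
        expected.discard(tracking_number)
        notify_recipient("Your package has been delivered", video)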

In the future, SafeDrop could be integrated with USPS, UPS and FedEx to verify the package tracking number automatically. When the delivery person scans the tracking number on the SafeDrop, the status would also be updated to “delivered” in the tracking record in the delivery company’s database. That way, the entire delivery process is automated in a secure fashion.

This SafeDrop design highlights three advantages:

1. Tracking number barcode as the key to the SafeDrop.
That barcode is tracked during the entire delivery, it is always with the package, and it is fitting to use it as the “key” to open its destination. We do not introduce anything “new” or “additional” to the process.

2. Accurate delivery, which eliminates human mistakes.
Human error sometimes causes deliveries to the wrong address. With SafeDrop integrated into the shipping system, the focus is on a package (with tracking number as package ID) going to a target (SafeDrop ID associated with an address).

In a sense, the package (package ID) has its intended target (SafeDrop ID). The package can only be deposited into one and only one SafeDrop, which eliminates the wrong delivery issue.

3. Non-disputable delivery.
This dispute can happen: the delivery company says a package has been delivered, but the recipient says it never arrived. The possible reasons: a) the delivery person didn’t really deliver it; b) the delivery person dropped it at a wrong address; c) a thief came by and took it; d) the recipient got it but is making a false claim.

SafeDrop makes things clear! If the package is really delivered to the SafeDrop, the delivery is recorded, and the delivery company has done its job. If it is in the SafeDrop, the recipient has it. There is really no dispute.

I will be showing SafeDrop at Modern Supply Chain Experience in San Jose, January 26 and 27 in the Maker Zone in the Solutions Showcase. If you’re attending the show, come by and check out SafeDrop.

My Joyful Consumption of Data

Thu, 2016-01-21 21:47

I love data, always have.

To feed this love and to compile data sets for my quantified self research, I recently added the Netatmo Weather Station to the other nifty devices that monitor and quantify my everyday life, including the Fitbit Aria, Automatic and Nest.

I’ve been meaning to restart my fitness data collection too, after spending most of last year with the Nike+ Fuelband, the Basis Peak, the Jawbone UP24, the Fitbit Surge and the Garmin Vivosmart.

FWIW I agree with Ultan (@ultan) about the Basis Peak, now simply called Basis, as my favorite among those.

Having so many data sets and visualizations, I’ve observed my interest peak and wane over time. On Day 1, I’ll check the app several times, just to see how it’s working. Between Day 2 and Week 2, I’ll look once a day, and by Week 3, I’ve all but forgotten the device is collecting data.

This probably isn’t ideal, and I’ve noticed that even something I expected would be useful, like notifications, I tend to ignore. For example, the Netatmo app can send notifications on indoor carbon dioxide levels, outside temperature and rain accumulation, if you have the rain gauge.

These seem useful, but I tend to ignore them, a very typical smartphone behavior.

Unexpectedly, I’ve come to love the monthly emails many devices send me and find them much more valuable than shorter-interval updates.

Initially, I thought I’d grow tired of these and unsubscribe, but it turns out they’re a happy reminder of those hard-working devices that are tirelessly quantifying my life for me and adding a dash of data visualization, another of my favorite things for many years.

Here are some examples.


My Nest December Home Report

December Monthly Driving Insight from Automatic

Although it’s been a while, I did enjoy the weekly summary emails some of the fitness trackers would send. Seems weekly is better in some cases, at least for me.

Weekly Progress Report from Fitbit


Basis Weekly Sleep Report

A few years ago, Jetpack, the WordPress analytics plugin, began compiling a year in review report for this blog, which I also enjoy annually.

If I had to guess about my reasons, I’d suspect that I’m not interested enough to maintain a daily velocity, and a month (or a week for fitness trackers) is just about the right amount of data to form good and useful data visualizations.

Of course, my next step is dumping all these data into a thinking pot, stirring and seeing if any useful patterns emerge. I also need to reinvigorate myself about wearing fitness trackers again.

In addition to Ultan’s new fave, the Xiaomi Mi Band, which seems very interesting, I have the Moov and Vector Watch waiting for me. Ultan talked about the Vector in his post on #fashtech.

Stay tuned.

Any Port in a Storm: Novel Ways of Interacting with Our Devices

Wed, 2016-01-20 12:20

With smartwatches, sometimes your fingers just aren’t good enough for the task at hand. Fortunately, some ingenious users have found a suitable alternative for when those digits just won’t do: their nose.

That thing sticking out from your face is enough like a fingertip to act as one in situations where your hands might be wet, dirty, or separated from your device by a layer of gloves.

A nose tap reenactment on the Apple Watch.

Our own research, as well as that of Apple Watch research firm Wristly.co, has found users have occasionally resorted to their nose, and in reasonable numbers, too. Wristly found in one of their surveys that 46% of respondents had used their nose on their watch, and another 28% hadn’t, but were willing to try.

While users are probably not opting to use their nose when their fingers will do, this usage pattern fits into a larger question of how we interact with our devices: What’s the best way to interact with a device at any given time? When do we or should we use touch versus voice, gesture versus mouse, nose versus finger?

What I love about the nose tap is that it’s something that happened with users out in the real world, with real world situations. It’s doubtful this sort of usage would have been found in the lab, and may not have been considered when the Apple Watch was being designed. After all, with California’s beautiful weather, one might not consider what a glove-wearing population has to go through.

But now with this knowledge, designers should be asking themselves, “will my users ever need to nose tap? If so, how do I make sure it will work for them?” It sounds a little silly, but it could make an app or feature more useful to some users. And researchers should also be asking the same questions.

This goes for any novel, unexpected way users interact with any product, software or hardware: why are they doing it that way, and what is it telling us about their needs or underlying problems?

And the best way to find those novel, unexpected interactions? By seeing (or at least asking) how people use these products in the real world.

Seven Days with the Xiaomi Mi Band: A Model of Simple Wearable Tech UX for Business

Mon, 2016-01-18 02:47

Worn Out With Wearables

That well-worn maxim about keeping it simple, stupid (KISS) now applies as much to wearable tech (see what I did there?) user experience as it does to mobile or web apps.

The challenge is to keep on keeping “it” simple as product managers and nervous C-types push for more bells and whistles in a wearable tech market going ballistic. Simplicity is a relative term in the fast changing world of technology. Thankfully, the Xiaomi Mi Band has been kept simple and the UX relates to me.


The Mi Band worn alongside Apple Watch (42mm version) for size.

I first heard about the Mi Band with a heads-up from OAUX AppsLab chief Jake Kuramoto (@jkuramot) last summer. It took me nearly six months to figure out a way to order this Chinese device in Europe: when it turned up on Amazon UK.


We both heard about the Mi from longtime Friend of the ‘Lab, Matt Topper (@topperge).

I’ve become jaded with the current deluge of wearable tech and the BS washing over it. Trying to make sense of wearable tech now makes my head hurt. The world and its mother are doing smartwatches and fitness trackers. Smartglasses are coming back. Add the wellness belts, selfie translators that can get you a date or get you arrested, and ingestibles into the mix; well it’s all too much to digest. There are signals the market is becoming tired too, as the launch of the Fitbit Blaze may indicate.

On a winning streak: Mi Band (All app images are on iOS)

But after 7 days of wearing the Mi Band, I have to say: I like it.

Mi User Experience Es Tu User Experience

My Mi Band came in a neat little box, complete with Chinese language instructions.

Inside the little box: A big UX emerges.

Setup was straightforward. I figured out that the QR code in the little booklet was my gateway to installing the parent App (iOS and Android are supported) on my iPhone and creating an account. Account verification requires an SMS text code to be sent and entered. This made me wonder where my data was stored and its security. Whatever.

I entered the typical body data to get the Mi Band set up for recording my activity (by way of steps) and sleep automatically, reporting progress in the mobile app or at a glance via the LEDs on the sensor (itself somewhat underwhelming in appearance; this ain’t no Swarovski Misfit Shine).


Enter your personal data. Be honest.

Metric, Imperial, and Jin locale units are supported.

I charged up the sensor using yet another unique USB cable to add to my ever-growing pile of Kabelsalat, slipped the sensor into the little bracelet (black only, boo!), and began tracking my step, sleep and weight progress (the latter requires the user to enter data manually).


I was impressed by the simplicity of operation, balanced by attention to detail and a friendly style of UX. The range of locale settings, the quality of the visualizations, and the very tone of the communications (telling me I was on a “streak”) were something I did not expect from a Chinese device. But then Xiaomi is one of the world’s biggest wearable tech players, so shame on me, I guess.


The data recorded seemed to be fairly accurate. The step count seemed to be a little high for my kind of exertion and my sleep stats seemed reasonable. The Mi Band is not for the 100 miles-a-week runners like me or serious quantified self types who will stick with Garmin, Suunto, Basis, and good old Microsoft Excel.


For a more in-depth view of my activity stats, I connected the Mi Band to Apple Health and liked what I saw on my iPhone (Google Fit is also supported). And of course, the Mi Band app is now enabled for social. You can share those bragging rights like the rest of them.

But, you guessed it. I hated the color of the wristband. Only black was available, despite Xiaomi illustrations showing other colors. WTF? I retaliated by ordering a Hello Kitty version from a third party.

The Mi Band seems ideal for casual to committed fitness types and budding gym bunnies embarking on New Year resolutions to improve their fitness, who need the encouragement to keep going. At a cost of about 15 US dollars, the Mi Band takes some beating. It’s most easily compared with the Fitbit Flex, and that costs a lot more.

Beyond Getting Up To Your Own Devices

I continue to enjoy the simple, glanceable UX and reporting of my Mi Band. It seems to me that its low price is hinting at an emergent business model that is tailor-made for the cloud: Make the devices cheap or even free, and use the data in the cloud for whatever personal or enterprise objectives are needed. That leaves the fanatics and fanbois to their more expensive and complex choices and to, well, get up to their own devices.

So, for most, keeping things simple wins out again. But the question remains: how can tech players keep on keeping it simple?

Mi Band Review at a Glance

Likes

  • Simplicity
  • Price
  • Crafted, personal UX
  • Mobile app visualizations and Apple and Google integration

Dislikes

  • Lack of colored bands
  • Personal data security
  • Unique USB charging cable
  • Underwhelming #fashtech experience

Your thoughts are welcome in the comments.

2016 AT&T Developer Summit Hackathon: SafeDrop

Fri, 2016-01-15 14:51

It has become tradition now for us, AppsLab, the OAUX emerging technologies team, that the first thing we do in a new year is fly to Las Vegas, not solely to test our luck on the casino floor (though some guys did get lucky), but also to attend to the real business–participating in the AT&T Developer Summit Hackathon to build something meaningful, useful, and future-oriented, and hopefully get lucky and win some prizes.

Noel (@noelportugal), who participated in 2013, 2014 and last year, and his team were playing a grand VR trick–casting a virtual gate on a real neighborhood–not exactly VR as you know it, but rather, several steps deep into the future. Okay, I will just stop here and not spoil that story.

On the other side of the desk, literally, David, Osvaldo (@vaini11a), Juan Pablo and I (@yuhuaxie) set our sights on countering Mr. Grinch, with the Christmas package theft reports still fresh in people’s minds. We were team SafeDrop, and we carefully forged a safe box for receiving Christmas gift packages. Unsurprisingly, we called it SafeDrop! And of course, you can use it to receive package deliveries at other times of the year too, not just at Christmas.

Team SafeDrop (l-r), Os, David, me, Juan Pablo

In terms of the details of how the SafeDrop was built, and how you would use it, I will explain each in future posts, so stay tuned. Or shall I really explain the inner mechanism of SafeDrop and let Mr. Grinch have an upper hand?

By the way, team SafeDrop won 2nd place for “Best Use of Intel Edison.” Considering the large scale of the AT&T Hackathon, with over 1,400 participants forming 200+ teams, we felt that we were pretty lucky to win prizes. Each team member received a Basis Peak, a fitness watch you’ve read about here, and a Bluetooth headset, the SMS In-Ear Wireless Sport.

SafeDrop won 2nd place for Best Use of Intel Edison, with prizes

We know there are other approaches to prevent Mr. Grinch from stealing gifts, such as this video story shows: Dog poop decoy to trick holiday package thief. Maybe after many tries, Mr. Grinch would just get fed up with the smell and quit.

But we think SafeDrop would do the job on the first try.

Stay tuned for more details of what we built.

VR Skeptic: 1994

Sat, 2016-01-09 12:52

Here is a blast from the past: a letter I wrote to some friends back in 1994 about my very first VR experience.

VR enjoyed a brief spin as the next big thing that year. Jaron Lanier had been featured in the second issue of Wired magazine and virtual reality arcades began to appear in the hipper shopping malls. In short, it was as hyped and inevitable then as it is again today.

So without any further ado, here is my unedited account of what virtual reality was like in 1994:

A view of the Dactyl Nightmare playing field

 

For my birthday last weekend, Janine, Betsy, Tom and I tried an interesting experiment: a virtual reality arcade in San Francisco called Cyber Mind.

It was a pleasant enough little boutique, not overrun by pimply-faced hordes as I had expected. They had a total of ten machines, two sit-down, one-person flight simulator contraptions, and two sets of four networked platforms. We chose to play one of the four-person games called Dactyl Nightmare.

I stepped up on a sleek-looking platform and an attendant lowered a railing over my head so that I would not wander off while my mind was in other worlds. I then strapped on a belt with more equipment and cables and donned a six pound Darth-Vader-on-steroids helmet. The attendant placed a gun in my hand.

When they pulled the switch I found myself standing on a little chessboard floating in space. There were a total of four such platforms with stairs leading down to a larger chessboard, all decorated by arches and columns. I could look in any direction and if I held out my arm I could see a computer-generated rendition of my arm flailing around, holding a gun. If I pushed the thumb switch on top of the gun I began to walk in whatever direction I was looking in.

I began bumping into columns and stumbling down stairs. It wasn’t long before I saw Janine, Betsy, and Tom also stumbling around, walking with an odd gait, and pointing guns at me. The game, as old as childhood itself, was to get them before they could get me. Usually, by the time I could get my bearings and take careful aim, someone else (usually Betsy) had sneaked up behind me and blasted me into computer-generated smithereens. After a few seconds, I reformed and rejoined the hunt.

This happy situation was somewhat complicated by a large green Pterodactyl with the bad habit of swooping down and carrying off anyone who kept firing their guns into the ground (which was usually where I tended to fire). If you were true of heart and steady of aim you could blast the creature just before its claws sank in. I managed this once, but the other three or four times I was carried HIGH above the little chessboard world and unceremoniously dropped.

After a quick six minutes it was all over. The total cost was $20 for the four of us, about a dollar per person per minute. I found the graphics interesting but not compelling and resolved to come back in a few years when the technology had improved.

I was not dizzy or disoriented during the game itself, but I emerged from my helmet slightly seasick, especially after the second round. This feeling persisted for the rest of the day. But it was a worthy experiment. Twenty dollars and a dizzy day: a small price to pay for my first glimpse at virtual reality.

VR Skeptic: Immersion and Serendipity

Sat, 2016-01-09 12:42

 

John staring boldly into the future of VR

Look, I’m as fond of holodecks and the matrix as the next nerd. I was having queasy VR experiences back in 1994. That’s me just last month strapped into a cheap plastic viewer, staring boldly into the future. I’ve been thinking and writing about virtual reality for over twenty years now.

But are we there yet? Is VR ready for mainstream adoption? And, aside from a few obvious niche cases, does it have any significant relevance for the enterprise?

This is the first of a series of “VR Skeptic” blog posts that will explore these questions. The AppsLab has already started to acquire some VR gear and is hunting for enterprise use cases. I’ll share what I find along the way.

So why am I a skeptic? Despite all the breathless reviews of the Oculus Rift over the last few years and the current hype storm at CES, the VR industry still faces many serious hurdles:

  • Chicken and egg: developers need a market, the market needs developers
  • Hand controls remain awkward
  • The headsets are still bulky
  • Most PCs will need an upgrade to keep up
  • People are still getting sick

To this I would add what I’ve noticed about the user experience while sampling Google Cardboard VR content:

  • Limited range of view unless you’re sitting in a swivel chair
  • Viewer fatigue after wearing the goggles for a few minutes
  • Likelihood of missing key content and affordances behind you
  • Little or no interactivity
  • Low quality resolution for reading text

But every time I’m ready to give up on VR, something like this pulls me back: Google Cardboard Saves Baby’s Life  (hat tip to my colleague Cindy Fong for finding this).

Immersion and Serendipity

I think VR does have two unique qualities that might help us figure out where it could make an impact: immersion and serendipity.

Immersion refers to the distinct feeling of being “inside” an experience. When done well, this can be a powerful effect, the difference between watching a tiger through the bars of its cage and being in the cage with the tiger. Your level of engagement rises dramatically. You are impelled to give the experience your full attention – no multi-tasking! And you may feel a greater sense of empathy with people portrayed in whatever situation you find yourself in.

Serendipity refers to the potential for creative discovery amidst the jumbled overflow of information that tends to come with VR. A VR experience typically shows you more content than you can easily absorb, including many things you won’t even see unless you happen to be looking in the right direction at the right moment. This makes it harder to guide users through a fixed presentation of information. But it might be an advantage in situations where users need to explore vast spaces, each one using his or her instincts to find unique, unpredictable insights.

It might be fruitful, then, to look for enterprise use cases that require either immersion or serendipity. For immersion this might include sales presentations or job training. Serendipity could play a role in ideation (e.g. creating marketing campaigns) or investigation (e.g. audits or budget reconciliations where you don’t know what you’re looking for until you find it).

Collaboration

Because VR content and viewers are not yet ubiquitous, VR today tends to be a solitary experience. There are certainly a number of solitary enterprise use cases, but the essence of “enterprise” is collaboration: multiple people working together to achieve things no single person could. So if there is a killer enterprise VR app, my guess is that it will involve rich collaboration.

The most obvious example is virtual meetings. Business people still fill airports because phone and even video conferences cannot fully replace the subtleties and bonding opportunities that happen when people share a physical space. If VR meetings could achieve enough verisimilitude to close a tricky business deal or facilitate a delicate negotiation, that would be a game changer. But this is a very hard problem. The AppsLab will keep thinking about this, but I don’t see a breakthrough anytime soon.

Are there any easier collaborations that could benefit from VR? Instead of meetings with an unlimited number of participants, perhaps we could start with use cases involving just two people. And since capturing subtle facial expressions and gestures is hard, maybe we could look for situations which are less about personal interactions and more about a mutual exploration of some kind of visualized information space.

One example I heard about at last year’s EyeO conference was the Mars Rover team’s use of Microsoft’s HoloLens. Two team members in remote locations could seem to be standing together in a Martian crater as they decide where to send the rover next. One could find an interesting rock and call the other over to look at it. One could stand next to a distant feature to give the other a better sense of scale.

Can we find a similar (but more mundane) situation? Maybe an on-site construction worker with an AR headset sharing a live 360 view with a remote supervisor with a VR headset. In addition to seeing what the worker sees, the supervisor could point to elements that the worker would then see highlighted in his AR display. Or maybe two office managers taking a stroll together through a virtual floor plan allocating cubicle assignments.

These are some of the ideas I hope to explore in future installments. Stay tuned and please join the conversation.

The Time I Ran Out of Gas

Wed, 2016-01-06 02:08

Before Christmas, I ran out of gas for the first time.

All things considered, I was very lucky. It was just me in the car, and the engine died in a covered parking structure, in a remote corner with few cars. Plus, it was the middle of the day, during the week before Christmas, so not a lot of people were out and about anyway.

Could have been a lot worse.

The reason why I ran out of gas is more germane, and as a result of my mishap, I found another interesting experience.

I’ll start with the why. If you read here, you’ll know I’ve been researching the quantified self, which I understand roughly as the collection of data related to me and the comparison of these data sets to identify efficiencies.

As an example, I tracked fitness data with a variety of wearables for most of last year.

About a year ago, Ben posted his impressions of Automatic, a device you plug into your car’s OBD-II diagnostics port that will quantify and analyze your driving data and track your car’s overall health, including its fuel consumption and range.

I have since added Automatic to my car as another data set for my #QS research. I’ve found the data very useful, especially with respect to the real cost of driving.

Since Automatic knows when you fill the tank and can determine the price you paid, it can provide a very exact cost for each trip you make. This adds a new dimension to mundane tasks.

Suddenly, running out for a single item has a real cost, driving farther for a sale can be accurately evaluated, splitting the cost of a trip can be exact, etc.
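The arithmetic behind that per-trip cost is simple; here is a hypothetical example with made-up numbers.

    # Hypothetical numbers: fuel used per trip comes from the OBD-II data, and the
    # price per gallon comes from the last fill-up Automatic detected.
    price_per_gallon = 2.45     # dollars paid at the last fill-up (assumed)
    trip_fuel_used = 0.6        # gallons burned on this trip (assumed)

    trip_cost = trip_fuel_used * price_per_gallon
    print("This trip cost about $%.2f" % trip_cost)   # about $1.47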

One feature I appreciate is the range. From the wayback archives, in 2010, I discussed the experience and design challenges of a gas gauge. My Jeep warns of low fuel very early and won’t report the range once the low fuel indicator has been tripped.

However, Automatic always reports an estimated range, which I really like.


Maybe too much, given this is how I ran out of gas. The low fuel indicator had been on for a day, but I felt confident I could get to a gas station with plenty of time, based on the estimated range reported by Automatic.

Armed with my false confidence, I stopped to eat on the way to the gas station, and then promptly ran out of gas.

To be clear, this was my fault, not Automatic’s.

In my 2010 musings on the gas gauge, I said:

It does seem to be nigh impossible to drive a car completely out of gas, which seems to be a good thing, until you realize that people account for their experiences with the gauges when driving on E, stretching them to the dry point.

Yeah, I’m an idiot, but I did discover an unforeseen negative: over-reliance on data. I knew it was there, like in every other data vs. experience point-counterpoint. I know better than to rely on data alone, but I still failed.

Something I’ll need to consider more deeply as my #QS investigation continues.

Now for the interesting experience I found.

After calling roadside assistance for some gas, my insurance company texted me a link to monitor “the progress of my roadside event.” Interesting copy-writing.

That link took me to an experience you might recognize.


Looks like Uber, doesn’t it? Because it’s a web page, the tow truck icon wasn’t animated like the Uber cars are, but overall, it’s the same, real-time experience.

I like the use here because it gives a tangible sense that help is on the way, nicely done.

So, that’s my story. I hope to avoid repeating this in the future, both the running-out-of-gas and the over-reliance on data.

Find the comments and share your thoughts.

Selfies. Social. And Style: Smartwatch UX Trends

Tue, 2016-01-05 02:04

From Antiques to Apple

“I don’t own a watch myself,” a great parting shot by Kevin of Timepiece Antique Clocks in the Liberties, Dublin.

I had popped in one rainy day in November to discover more about clock making and to get an old school perspective on smartwatches. Kevin’s comment made sense. “Why would he need to own a watch?” I asked myself, surrounded by so many wonderful clocks from across the ages, all keeping perfect time.

This made me consider what might influence people to use smartwatches. Such devices offer more than just telling the time.

From antiques to Apple: UX research in the Liberties, Dublin

2015 was very much the year of the smartwatch. The arrival of the Apple Watch earlier in 2015 sparked much press excitement and Usable Apps covered the enterprise user experience (UX) angle with two much-read blog pieces featuring our Group Vice President, Jeremy Ashley (@jrwashley).

Although the Apple Watch retains that initial consumer excitement (at the last count about 7 million units have shipped), we need to bear in mind that the Oracle Applications User Experience cloud strategy is not about one device. The Glance UX framework runs just as well on Pebble and Android Wear devices, for example.


It’s not all about the face. Two exciting devices came my way in 2015 for evaluation against the cloud user experience: The Basis (left) and Vector Watch.

Overall, the interest in wearable tech and what it can do for the enterprise is stronger than ever. Here’s my (non-Oracle endorsed) take on what’s going to be hot and why in 2016 for smartwatch UX.

Trending Beyond Trendy

There were two devices that came my way in 2015 for evaluation that for me captured happening trends in smartwatch user experience.

First there was the Basis Peak (now just Basis). I covered elsewhere my travails in setting up the Basis and how my perseverance eventually paid off.


Basis: The ultimate fitness and sleep tracker. Quantified self heaven for those non-fans of Microsoft Excel and notebooks. Looks great too!

Not only does the Basis look good, but its fitness functionality, range of activity and sleep monitoring “habits,” data gathering, and visualizations matched and thrilled my busy work/life balance. Over the year, the Basis added new features that reflected a more personal quantified self angle (urging users to take a “selfie”) and then acknowledged that fitness fans might be social creatures (or at least in need of friends) by prompting them to share their achievements, or “bragging rights,” to put it the modern way.


Your bragging rights are about to peak: Notifications on Basis (middle).

Second there was the Vector Watch, which came to me by way of a visit to Oracle EPC in Bucharest. I was given a device to evaluate.

A British design, with development and product operations in Bucharest and Palo Alto too, the Vector looks awesome. The sophisticated, stylish appearance of the watch screams class and quality. It is easily worn by the most fashionable people around and yet packs a mighty user experience.

Vector Watch: Fit executive meets fashion.

I simply love the sleek, subtle, How To Spend It positioning, the range of customized watch faces, notifications integration, activity monitoring capability, and the analytics of the mobile app that it connects with via Bluetooth. Having to charge the watch battery only 12 times (or fewer) each year means one less strand to deal with in my traveling Kabelsalat.

The Vector Watch affordance for notifications is a little quirky, and sure it’s not the Garmin or Suunto that official race pacers or the hardcore fitness types will rely on, and maybe the watch itself could be a little slimmer. But it’s an emerging story, and overall this is the kind of device for me, attracting positive comments from admirers (of the watch, not me) worldwide, from San Francisco to Florence, mostly on its classy looks alone.

I’m so there with the whole #fitexecutive thing.

Perhaps the Vector Watch exposes that qualitative self to match the quantified self needs of our well-being that the Basis delivers on. Regardless, the Vector Watch tells us that wearable tech is coming of age in the fashion sense. Wearable tech has to. These are deeply personal devices, and as such, continue the evolution of wristwatches looking good and functioning well while matching the user’s world and responding to what’s hot in fashion.

Heck, we are now even seeing the re-emergence of pocket watches as tailoring adapts and facilitates their use. Tech innovation keeps time and keeps up, too, and so we have Kickstarter wearable tech solutions for pocket watches appearing, designed for the Apple Watch.

The Three “Fs”

Form and function is a mix that doesn’t always quite gel. Sometimes compromises must be made trying to make great-looking, yet useful, personal technology. Such decisions can shape product adoption. The history of watch making tells us that.

Whereas the “F” of the smartwatch era of 2014–2015 was “Fitness,” it’s now apparent that the “F” that UX pros need to empathize with in 2016 will be “Fashion.” Fashionable technology (#fashtech) in the cloud, the device’s overall style and emotional pull, will be as powerful a driver of adoption as the mere outer form and the inner functionality of the watch.

The Beauty of Our UX Strategy

The Oracle Applications Cloud UX strategy—device neutral that it is—is aware of such trends, ahead of them even.

The design and delivery of beautiful things has always been at the heart of Jeremy Ashley’s group. Watching people use those beautiful things in a satisfied way and hearing them talk passionately about them is a story that every enterprise UX designer and developer wants the bragging rights to.

So, what will we see on the runway from Usable Apps in 2016 in this regard?

Stay tuned, fashtechistas!

Editor’s note: Cross-posted from Usableapps (@usableapps), thanks to our old mate Ultan (@ultan), a guy who knows both fashion and tech.

AppsLab Research in 2015

Mon, 2016-01-04 11:20

As we exit 2015 and enter 2016, I’m reflecting on all that happened in AppsLab and looking forward to the future. Our 2015 research spanned the spectrum – from attitudinal to behavioral, domestic to international, controlled to ad hoc, low to high tech, and many more research tactics. I won’t bore you with stats and an exhaustive list of studies. Rather, here is a brief recap of some of our research and interests.

We studied smartwatches a bit this year. We ran focus groups to gauge interest and identify use cases. We ran user journal studies to learn about user adoption and behavior patterns. We ran guerrilla usability studies with prototypes to evaluate features and interactions. We used stars and stickies to gather feedback. We used Oracle Social Research Management (SRM) to glean insight from social media.


Thao and Ben leading a focus group at HCM World in March.


Lo-fi research at the OAUX Exchange during OpenWorld in October.


Ben, Tawny and Guido, our guerrilla testing team at OpenWorld in October.

We designed and built a Smart Office, which we used to spark conversations and perspectives on the future of work and user experience. Ironically, we used low tech methods (with posters, stickies and stickers) to gather feedback on the high tech office.


Smart Office demonstration at the OAUX Exchange during OpenWorld in October.

We also got out of the labs and headed to customers and partners in Europe and Asia for global perspectives.


Anthony Lai showing OAUX extensibility to a group of partners in Beijing in April.

To close out 2015 and start 2016, we opened the OAUX Gadget Lab, a hands-on lab where visitors will be able to come in and experience the latest technologies with us.


The new Gadget Lab at Oracle HQ.

Stick around with us in 2016 to see what we are up to.

Your UKOUG Apps15 and Tech15 Conference Explorer

Wed, 2015-11-25 13:38


Are you attending UKOUG Apps15 (#ukoug_apps15) or Tech15 (#ukoug_tech15)? If so, you are in luck! Once again we will run our ever-popular scavenger hunt with a twist. From December 7-9 we will be at the ICC in Birmingham, UK, working with the UKOUG team to give you a fun way to explore the conference and win prizes along the way.


If you’re not familiar with this game, it is as if we are stamping a card (or your arm!) every time you do a task. But instead, we are using IoT technologies: a Raspberry Pi, NFC stickers, and social media to give you points. This time we will hold a daily drawing. You only need to enter once, but the more tasks you complete, the more chances you have to win. Each day has different tasks.

This is a great way to discover new things, make friends, and enjoy the conference from a different perspective.

You can pre-register here: http://bit.ly/UKOUG15Explorer or come over during registration so we can set you up. See you there!

OpenWorld 2015 Highlights

Thu, 2015-11-19 12:36

It’s been nearly three weeks, and I’m finally getting around to sharing the highlights of our OpenWorld 2015. Enjoy.

Keynotes

Last year, Steve Miranda showed some of our project work in his keynote. This year, our Glance framework on the Apple Watch made an appearance in Larry Ellison’s first keynote, in a video showcasing the evolution of Oracle’s User Experience over the last 30 years.

OTN Community Quest

Noel (@noelportugal) and I oversaw the OTN Community Quest during OOW and JavaOne this year.

Pic, it happened.

We registered more than 300 players. Of those, more than half completed more than one task, and we had 18 players finish all nine tasks.

One major change we made to the game after the Kscope15 Scavenger Hunt was to switch from a points system to a drawing, allowing anyone to play at any time during the game and still have a good shot at winning.

An unintentional proof point, the Grand Prize winner completed only one task.

Here’s a short video explaining what it was, how to play and why we did it.

I know Noel would like me to mention a cool feature for this iteration of the game.  He wrote some code to have the Amazon Echo draw the winning entries, which was a pretty sweet demo.


Big thanks to our longtime and very good friends at OTN for letting us do this.

OAUX Exchange

Speaking of Alexa, the Amazon Echo’s persona and our favorite toy lately,  it featured prominently in the IoT Smart Office our team built to show at the OAUX Exchange.

Alexa and the smart badge were only parts of the whole experience, which included proximity using BLE beacons, voice-controlled environment settings and voice-controlled Cloud Applications navigation, just to hit the highlights.

The Smart Office wow feature was this Leap Motion interaction, grabbing a record from one screen and throwing it to another.

But wait, there was more. We showed the Glance Framework running on Android Auto, on an actual car head unit.


And for our annual fun demo, Myo gestural armbands were used to control Anki race cars, because why not? Special thanks to Guanyi for running this demo for us.

Research

Thao (@thaobnguyen) and her team were busy throughout the conference conducting research, running a focus group and doing ad hoc interviews, a.k.a. guerrilla testing.

Ben, Tawny and Guido, our crack team of researchers.

Miscellaneous

Our team had a very busy and very successful OpenWorld. For the complete story on all the OAUX OpenWorld 2015 happenings, check out the Storify.

And finally two other noteworthy, but not OOW-related happenings I should mention.

Mark (@mvilrokx) and Raymond (@yuhuaxie) were interviewed about MakerFaire by Profit magazine. Check out what the veteran makers had to say.

Speaking of Mark and making, Business Insider covered his IoT Nerf gun project. Check out his instructions if you want to make your own.

A Smart Badge for a Smart Office

Mon, 2015-11-09 02:53

Editor’s note: Maybe you got a chance to check out our IoT Smart Office demo at the OAUX Exchange during OpenWorld. If not, don’t fret, we’ll describe its many cool bits here, beginning with this post from Raymond (@yuhuaxie) on his smart badge build. A Smart Office needs an equally smart badge.

We showcased the Smart Office at the OAUX Exchange.

You put on a badge and walked into an office of the future, and everything came alive. You may have been dazzled by the lights, screens, voice commands and gestures, and may not have paid any attention to the badge. Well, the badge is the key that starts all the magical things when you approach the office, and its build is worth some mention.

The classical version:

If you tried it at the Exchange event, you could feel that it is just like a normal badge, except it is a little heavier and has a vivid display.

Inside that home-made leather pouch, there is a 3.5” TFT display, a large recycled LiPo battery (taken out of a thin power bank we got at Kscope15), and one controller called the LightBlue Bean (@punchthrough), which happened to be the same controller inside the “Smart Bracelet” we showcased a year earlier, which lit up different colors to guide you along an expo path.

The Smart Badge needs to be programmable and to announce its presence, so that we can program it to be a particular persona, and so it can indicate to a listener (a Raspberry Pi server acting as the IoT brain) that it is present or approaching the Smart Office.

The LightBlue Bean does the job perfectly, as it has two personalities: a BLE radio and an Arduino-compatible controller. We created an iOS app to talk to the Bean over Bluetooth to set up the persona, and the Arduino side of the Bean controls the TFT display to show the proper badge image.


The Smart Badge

When the person wearing the Smart Badge approaches the office, the Raspberry Pi server detects its arrival and sends out a remote notification to the iPhone and Apple Watch, so that you can “check in” to the office from the Apple Watch and start the whole sequence of the Smart Office coming alive.

At the OAUX Exchange event, due to limited space, we did not use the BLE presence detection we built; instead, we used a time-delay mechanism to send out the notification to the Apple Watch.
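For reference, the presence detection we built is conceptually simple. Here is a hedged Python sketch of what the Raspberry Pi side might look like, assuming the bluepy library; the badge’s BLE address, the RSSI threshold, and the notification helper are placeholders, not our production code.

    import time
    from bluepy.btle import Scanner   # assumes the bluepy library is installed

    BADGE_ADDRESS = "d0:39:72:aa:bb:cc"    # placeholder for the Bean's BLE address

    def send_checkin_notification():
        # placeholder: push the "check in" prompt to the iPhone / Apple Watch
        print("badge detected, sending check-in notification")

    scanner = Scanner()
    badge_present = False
    while True:
        found = any(d.addr.lower() == BADGE_ADDRESS and d.rssi > -70
                    for d in scanner.scan(3.0))
        if found and not badge_present:
            send_checkin_notification()    # badge just arrived: start the sequence
        badge_present = found
        time.sleep(1)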


Inside the Smart Badge

For those curious minds, we prepared another version, to show the components inside the Smart Badge. I would call it . . .

The techno version:

Instead of stacking up the components, we spread them out so people can see the parts and wiring. We designed and laser-cut acrylic sheets to make a transparent case, and it resembles the typical laminated badge you see at conferences.

As you can see, the wiring between the Bean and the TFT is pretty much for display control over the SPI protocol, plus one wire to access the SD card on the TFT display board and another wire to control the backlight. The BLE part is all wireless.


By the way, we made another version using ESP8266 (NodeMCU) instead of the Bean, because we just love the little ESP8266.

We connected the TFT display to the NodeMCU over hardware SPI, with a similar approach for SD card access and backlight control. Since the NodeMCU has so many more pins available (some PWM pins too), we can actually dim the backlight to many brightness levels, instead of just turning it on and off, as in the Bean case.

The NodeMCU is flashed as an Arduino variant and set up as a web server with its built-in wifi capability. The iOS app mentioned earlier can program this Smart Badge using an HTTP request over wifi. In fact, it can be programmed from any browser or CLI, without the need to tether to an iPhone.
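From the client side, programming the badge might look something like the request below; the badge’s IP address, the endpoint path, and the parameter names are hypothetical, since they depend on how the web server sketch on the NodeMCU was written. A browser or curl could issue the same request, which is what makes the badge programmable without an iPhone.

    import requests

    BADGE_IP = "192.168.1.72"    # placeholder: the badge's address on the wifi

    # Hypothetical endpoint and parameters for setting the persona and backlight.
    resp = requests.get("http://%s/badge" % BADGE_IP,
                        params={"persona": "raymond", "backlight": 80},
                        timeout=5)
    print(resp.status_code)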

The NodeMCU is such a recent toy on the block that it took some time to get it to work with the TFT display. For example, at first the image just did not look right when controlled by the NodeMCU. After some perseverance, it was straightened out.


The card version:

We were considering making the Smart Badge as small as possible. This design makes it just a bit larger than the TFT display, by stacking the NodeMCU behind the TFT and using a tiny proto-board to wire everything neatly. It is about the size of a deck of cards, housed in a transparent acrylic case.

Granted, it does not look much like a badge, but it is a nice little Internet-connected display that can sit on a desktop. So I decided to fit in a 3 x AAA everyday battery pack, instead of the LiPo battery, which usually uses a JST connector and is a hassle to recharge.


Currently, this version can show a badge and also run slide shows. But it can do much more – with the NodeMCU (ESP8266), it is a web server that can listen for instructions, it is a web client that can pull information from outside, it runs MQTT to react to outside events, and it has many pins to hook up sensors and controllers. Plus, since the TFT is a touch display, we can use touch input to switch between different modes, etc.

It could be a real functional Smart Office monitor/notification center.

I guess this is just the starting point of a little toy.

OTN Community Quest

Sat, 2015-10-24 21:54

Quest

After a very successful Scavenger Hunt at Kscope15, we are back with an Oracle OpenWorld and JavaOne edition. This time we partnered with the Oracle Technology Network (@oracleotn) folks to give Oracle OpenWorld (@oracleopenworld) and JavaOne (@javaoneconf) attendees a fun experience, with even more chances to win.

The OTN Community Quest was designed to be a win-win experience. We ask attendees to complete certain tasks; you learn along the way, have fun, and get a chance to win great prizes.

There are two types of tasks during the Quest:

  1. Tweet a “selfie” along with two hashtags: #otnquest and [#taskshashtag], i.e. the hashtag of the task.
  2. Scan a “Smart Sticker” on the Raspberry Pi scanner to validate a task. Stop by the OTN Lounges to get your “Smart Sticker.”

12045451_1090435214334864_263213618862046835_o

You can do just the Twitter tasks, just the Raspberry Pi tasks, or both, but the more tasks you complete, the greater your chance of winning one of these prizes:

Grand prize (GoPro Hero4), first prize (Basis Peak), second, third, and fourth prizes (Amazon Echo).

Prizes

The prize drawing will take place on Wednesday, October 28 at 4:45 PM at the OTN Lounge in Moscone South. You need not be present to win, but you must pick up prizes at the OTN Lounge in Moscone South by 2 PM on Thursday, October 29.

Register here or stop by the OTN Lounge in the Moscone South lobby or the Info Desk in the OTN Community Cafe in the Java Hub at JavaOne. If you register online and want to complete the Raspberry Pi tasks with the “Smart Sticker,” you’ll need to come by either of those locations to get the sticker.

Good luck and see you at the show.

Connect All the Things: An IoT Nerf Gun – Part 2: The Software

Thu, 2015-10-22 12:09
Introduction

In the first part of this series, I showed you how to mod a Nerf gun so it can connect to the internet, and you can electronically trigger it.  In this second part, I will show you some software I created to actually remotely control the Nerf gun over the internet using various input devices.

CRR9q6sUwAI6Bgo

ESP8266

First you will have to prepare your ESP8266 so it connects to the wifi and the internet, and it can send and receive data through this connection.  With the Lua framework on the chip, there are 2 ways to achieve this.

HTTP Server

You can run an HTTP server on the chip, which looks very similar to creating an HTTP server with Node.js, and then have it listen on a specific port.  You can then forward incoming internet traffic to this port and have the server perform whatever you want when it receives traffic, e.g. launch a Nerf dart when it receives a POST with a URL = “/launch”.
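
Here is a minimal sketch of that approach using the NodeMCU Lua net module; the port, the /launch path and the launchDarts() helper are placeholders I chose for illustration, not necessarily what the original build used.

-- Minimal HTTP server sketch (assumed port and path; launchDarts() is defined elsewhere)
srv = net.createServer(net.TCP)
srv:listen(80, function(conn)
  conn:on("receive", function(sck, payload)
    -- very crude request parsing: only look at the request line
    if string.find(payload, "POST /launch") then
      launchDarts(1)                                -- fire a single dart
      sck:send("HTTP/1.1 200 OK\r\n\r\nlaunched")
    else
      sck:send("HTTP/1.1 404 Not Found\r\n\r\n")
    end
  end)
  conn:on("sent", function(sck) sck:close() end)    -- close once the response has been sent
end)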

I hit a few snags with this approach though.  First, it requires access to the router, which is not always possible.  You can’t just take the gun to your friend’s house and have it work out of the box; he would have to configure his router, etc.  Also, at work, I don’t get access to the router.

Second, it requires quite a bit of “framework” code before this works.  I actually created a framework to help me with this, and you are free to use it for this or other projects, but ultimately I realized I don’t need all this.  Also, I found that the connection was extremely unstable when running an HTTP server.  I could never figure out why this was happening and so I abandoned this approach, although I still use the framework for other purposes.

An alternative approach is to set the ESP in AP (Access Point) mode.  That way you can connect any wifi device straight to the Nerf gun. However, this means it is not really connected to the internet, and for me, it exhibited the same connectivity issues.

MQTT Protocol

The Lua framework for ESP also supports MQTT, a protocol that is specifically designed for IoT devices.  It is an extremely lightweight publish/subscribe messaging transport.  You can turn the ESP into an MQTT client which can then receive (subscribe to) messages or send (publish) messages.

These messages are then relayed by an MQTT broker.  So yeah, you need one of those too.

Luckily there are many implementations of MQTT brokers for every imaginable language and platform and if you don’t want to run your own, there are SaaS providers too, which is what I ended up doing.

This proved to be extremely stable and fast for me: messages get relayed almost instantaneously and the ESP reacts immediately to them.  The other advantage is that you can create MQTT clients for every imaginable platform, even in the browser, which then allows you to control the Nerf gun from every conceivable device, as long as you connect them all to the same broker.  I will show a few examples of these in later sections.

These are the steps you have to perform in your Lua script for all of this to work (a consolidated sketch follows the list):

  1. Set your ESP in STATION mode (wifi.setmode(wifi.STATION)) and connect it to your wifi (wifi.sta.config(<networkName>, <password>)).
  2. Create your MQTT client (mqtt.Client(<clientId>, <keepAliveSeconds>, <cloudMqttUserName>, <cloudMqttPwd>)) and connect it to your MQTT broker (<mqttClient>:connect(host, port, 0, callback)).
  3. Publish messages (<mqttClient>:publish) and/or subscribe (<mqttClient>:subscribe) to topics.
  4. Listen for incoming messages (<mqttClient>:on("message", callback)) and act on them (launchDarts()).
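
Pulled together, the connection setup looks roughly like this; the broker host, credentials and topic layout are placeholders of my own choosing:

-- Minimal connection sketch (placeholder wifi and broker credentials; the topic layout is my own)
wifi.setmode(wifi.STATION)
wifi.sta.config("<networkName>", "<password>")

gunId = wifi.sta.getmac()                  -- the ESP's MAC address doubles as the gun's unique ID
m = mqtt.Client(gunId, 120, "<cloudMqttUserName>", "<cloudMqttPwd>")

m:connect("<mqttBrokerHost>", 1883, 0, function(client)
  -- announce ourselves as soon as we are connected
  client:publish("nerf/" .. gunId .. "/state", "online", 0, 0)
end)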

I have my Nerf gun publish a message when it comes online.  Besides the state of the Nerf gun (“online”), this message also contains the unique ID of the Nerf gun (the ESP’s MAC Address) which is later used by other MQTT clients to address messages straight to the Nerf gun (see below).

It also publishes a message when it starts firing, when it fires a dart, when it stops firing and when it goes offline.

The Nerf gun also subscribes to a topic that contains its unique ID.  This allows other MQTT clients to address messages (“commands”) to a specific Nerf gun.  Whenever the Nerf gun receives a message on this topic, it verifies the topic and acts accordingly, e.g. if the topic contains “command/launch” and the message contains {"nrOfDarts": 2}, the Nerf gun will launch 2 darts.
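
A sketch of that handler might look like this; the topic layout and the quick-and-dirty string matching used to pull nrOfDarts out of the payload are simply the easiest thing that works, not necessarily how you would do it in production.

-- Minimal command-handling sketch (assumes the gunId and client m from the connection sketch above)
m:on("message", function(client, topic, data)
  if topic == "nerf/" .. gunId .. "/command/launch" then
    -- pull the dart count out of a payload like {"nrOfDarts": 2}
    local n = tonumber(string.match(data or "", '"nrOfDarts"%s*:%s*(%d+)')) or 1
    launchDarts(n)
  end
end)

-- subscribe to every command addressed to this particular gun
-- (do this inside the connect callback, once the client is online)
m:subscribe("nerf/" .. gunId .. "/command/#", 0)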

Launching a dart

The actual launching of the dart is an interplay between the flywheel motors and the servo that now controls the trigger.

This is because the flywheel motors take some time to spin up to optimal speed, about 1-2 seconds, and also to spin down.  This is the sequence of events that get triggered in the Nerf Gun when it receives a command to launch:

  1. Turn on the flywheel motors, i.e. switch the relay on.
  2. After 1.5 seconds, set the servo to the “fire position.”
  3. After 0.5 seconds, set the servo back to the “neutral position.”
  4. Turn off the flywheel motors, i.e. switch the relay off.

At each stage, MQTT messages are being sent to inform other MQTT clients what is happening in the Nerf gun.

When the user wants to launch multiple darts, the only change is that the servo goes from neutral to fire and back to neutral as many times as there are darts to be fired.  The flywheel motors stay on for the duration of this sequence because it takes too long to spin them up and down (see the sketch after the list):

  1. Turn on the flywheel motors, i.e. switch the relay on.
  2. After 1.5 seconds, set the servo to the “fire position.”
  3. After .5 seconds, set the servo back to the “neutral position.”
  4. If more darts to fire, go back to 2.
  5. Turn off the flywheel motors, i.e. switch the relay off.
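
Here is a minimal sketch of that sequence; the pin assignments, servo duty cycles and the extra half-second pause between darts are assumptions that would need calibrating against the real hardware, and the MQTT status messages are omitted for brevity.

-- Minimal launch-sequence sketch (pin numbers and servo duty values are assumptions)
RELAY_PIN = 1         -- NodeMCU IO index wired to the flywheel relay
SERVO_PIN = 2         -- NodeMCU IO index wired to the servo signal line
NEUTRAL   = 77        -- roughly a 1.5 ms pulse at 50 Hz
FIRE      = 102       -- roughly a 2.0 ms pulse at 50 Hz

gpio.mode(RELAY_PIN, gpio.OUTPUT)
pwm.setup(SERVO_PIN, 50, NEUTRAL)
pwm.start(SERVO_PIN)

local function fireNext(remaining)
  pwm.setduty(SERVO_PIN, FIRE)                 -- set the servo to the fire position
  tmr.alarm(2, 500, 0, function()
    pwm.setduty(SERVO_PIN, NEUTRAL)            -- back to the neutral position
    if remaining > 1 then
      tmr.alarm(3, 500, 0, function()          -- give the servo time to return before the next dart
        fireNext(remaining - 1)
      end)
    else
      gpio.write(RELAY_PIN, gpio.LOW)          -- all darts fired: switch the relay off
    end
  end)
end

function launchDarts(nrOfDarts)
  gpio.write(RELAY_PIN, gpio.HIGH)             -- switch the relay on
  tmr.alarm(1, 1500, 0, function()             -- give the flywheels ~1.5 seconds to come up to speed
    fireNext(nrOfDarts or 1)
  end)
end
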
Control the Nerf Gun from the CLI

Once your Nerf Gun is prepped, publishing information and subscribed to commands, you can start using it with other MQTT Clients.

One of the simplest ways to start is to use the command line to publish MQTT messages.  As I mentioned before, you need an MQTT broker to make this all work; Mosquitto is one such broker, and when you install it, it actually comes with an MQTT version 3.1/3.1.1 client for publishing simple messages from the command line called mosquitto_pub (there is also an equivalent mosquitto_sub, an MQTT version 3.1 client for subscribing to topics).

Using it would look like this:

$ mosquitto_pub -h <MQTTBrokerHost> -p <Port> -u <user> -P <pwd> \
    -t nerf/5c:cf:7f:f1:31:45/command/launch -m '{"nrOfDarts":2}'

This will publish a message to your MQTT broker. This can be a different broker than Mosquitto itself, e.g. I use it to publish to my SaaS MQTT broker, but it has to be the one your Nerf gun is also connected to.  If a Nerf gun is listening on this topic, it will pick the message up and launch 2 darts.

Web Application: The Nerf Center

In order to make this a bit more visually appealing and add some more useful functionality, you could create a web application, e.g. I created one that shows the different Nerf guns that are available, their status, and a button that allows me to launch darts.

In order for this to work, you need an MQTT client that runs in the browser, i.e. a JavaScript implementation.  I used MQTT.js (and Webpack to make it browser friendly); there are others that you can use.

Screen Shot 2015-10-21 at 12.21.30 PM

I used React.js for the front-end code and Twitter Bootstrap for visual appeal, but of course you can use whatever floats your boat.

All I do is use MQTT.js to listen (subscribe) to topics from the Nerf Guns.  As I explained earlier, they all publish messages to a common topic that includes their unique ID (the MAC Address of the ESP in the Nerf Gun) and their state.

Whenever I receive such a message, I upsert the corresponding row in the table.  Also, as I now have the unique ID of each Nerf gun, I can publish (using the “Launch” button) a message to a specific Nerf gun, allowing me to launch any number of darts from a particular Nerf gun.  Here’s a Vine with this in action (slightly older UI, same principle though):

Amazon Echo Integration

And once you’ve gotten this far, why not add voice control!  I used the Amazon Echo, but I imagine you can do the same with Google Now (probably not Siri though, at least not without some hacking).  All this required was adding a Skill to the Echo that publishes a message to a Nerf gun’s topic whenever I say “Alexa, ask Nerf gun to launch 5 darts.”  Here’s a Vine showing how this works in practice:

From here you can go on to add Twitter integration, having the Nerf gun tweet every shot, or vice versa, having it launch when you send it a tweet.  The possibilities are literally endless.

Summary

This is, of course, a whimsical example of an IoT device, but hopefully it shows you what happens when something that isn’t normally connected to the internet, suddenly is.

Apart from being able to remotely control it, you get remote access to its “state” and possibly its surroundings.  This in turn allows you to react to this state and influence it if needed.

Furthermore, the device itself has access to the vast resources of the internet, i.e. computing power, data, services etc., and can use that to influence its own state.

As more and more devices come online, it will also have access to those, all providing more and more context to each other, allowing them to become more capable than they could ever be on their own, truly becoming greater than the sum of their parts.


Connect All the Things: An IoT Nerf Gun

Wed, 2015-10-21 05:39
Introduction

As part of my foray into the Internet of Things (IoT), I was looking for a project I could sink my teeth into, because I find it easier to learn something by doing it rather than just reading about it.

It just so happened that JavaOne this year has a Maker Zone, and they were looking for participants who could build something interesting to show to the attendees.  As a regular visitor of the Maker Faire in my back yard, this immediately piqued my interest, and after rummaging through my 9-year-old son’s toy boxes, I decided that I wanted to mod one of his Nerf guns.

This blog post will detail what I did and how I did it.

My plan was to turn the Nerf gun itself into an internet-connected device that would then allow me to poll its status (online/offline/launching) and to launch darts remotely, just using an internet connection.

To make my life a little bit easier, I started with a semi-automatic Nerf gun called NERF N-STRIKE MODULUS ECS-10 BLASTER.  All the user has to do is start the flywheels and then pull a trigger which pushes the foam darts (using a lever and push rod) between the 2 fast-spinning flywheels, which ensure the speedy exit of the dart from the barrel.

Internals of an unmodified Modulus Nerf Gun

Instead of having the user start the flywheels and pull the trigger, I had to use some electromechanical solution for both.  Let’s talk about each solution individually.

Flywheel Control: Relay

This component is, strictly speaking, already electromechanical: when the user pulls the flywheel control trigger, s/he is actually physically pressing a button (hidden under the orange lid right behind the flywheel control trigger) that starts up the flywheels.

All I had to do was replace the mechanical button with one that I could control electronically, e.g. a relay. I could have opted for a transistor as well but decided to go for the relay as I like the satisfying “click” sound they make when activated :-)

I will show later how this was actually done.

Trigger: Servo

The trigger mechanism on the other hand is completely mechanical, and I had to replace this with some sort of electronic component in order to be able to control it electronically.  The actual trigger movement is very small, but it is “amplified” by the lever it is connected to.

This lever then pushes a rod forward, which is what makes the dart squeeze between the flywheels.

My initial thought was to replace the rod with some sort of push solenoid, but those typically have a very small stroke, too small for this purpose.  I also looked at very small actuators but they suffered from the same drawback, plus they were also very expensive and relatively slow.

So instead, I decided to replace the trigger with a servo that would control the lever.  The axis of the servo would sit inline with the axis of the lever so when the servo turns, it turns the lever, exactly what happens when you pull the trigger.

Internet Connection: ESP8266

The final component of the build was to put the Nerf Gun on the internet.

For this I decided to settle on the ESP8266 chip, more precisely the ESP8266 ESP-12 variant. Besides being a wifi chip, this also has several GPIO pins that I use to control both the flywheel relay and the servo, with some to spare for other components I might want to add later, e.g. a darts counter, range finder, etc.
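
To give a flavor of what that control looks like in the Lua firmware, here is a tiny sketch; the pin numbers and servo pulse widths are assumptions that would need calibrating against the actual wiring (the full firing logic is covered in part 2).

-- Tiny GPIO/PWM sketch (pin numbers and duty values are assumptions)
gpio.mode(1, gpio.OUTPUT)        -- flywheel relay on NodeMCU IO index 1
gpio.write(1, gpio.HIGH)         -- spin the flywheels up
pwm.setup(2, 50, 77)             -- servo on IO index 2: 50 Hz, ~1.5 ms neutral pulse
pwm.start(2)
pwm.setduty(2, 102)              -- swing the servo to the fire position (~2 ms pulse)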

Unfortunately the chip runs on 3.3V, and the Nerf gun supplies 6V (4 x 1.5V AA batteries) to the flywheel motors. So I either had to use another battery that supplies 3.3V or tap into the 6V and step it down to 3.3V.

I actually tried both, but in the end opted for the latter, as it is simpler to replace just 1 set of batteries when they run out than 2.

Also, if 1 set of batteries runs out, the other part becomes completely unusable as well, so there is no benefit to having separate power sources for that either.  This complicated the build, but certainly benefited the UX. Hey, I am in UX after all.

Breadboard Schema

Breadboard layout of IoT Nerf Gun

I hope this is rather self-explanatory.  Note that you have to connect CH_PD to Vin/Vcc, otherwise the chip doesn’t power up.

Also GPIO15 has to be connected to GND. If the ESP module doesn’t come preinstalled with Lua (mine didn’t), then you have to flash it first with Lua.

This is outside the scope of this article, and there are plenty of articles on the internet explaining how to do it, but be aware that to flash the chip you have to pull GPIO0 (that’s GPIO zero) to GND and pull GPIO2 HIGH.

Then, once you have flashed it with Lua, you have to disconnect these connections, and you can use both pins for controlling other things.  Also, in order to upload anything to the ESP8266 you need to use the TX and RX pins and a USB to TTL (Serial) adapter, e.g. FTDI.

First Build

For my first build, I actually used a NodeMCU board, which is a breakout board for the ESP8266-12 that includes a 5V-to-3.3V power converter, reset and flash buttons, and breadboard-compatible pins. The bare ESP8266 has a 2mm pin pitch rather than 2.54mm, so the NodeMCU board is much easier to breadboard with.

However, it is much larger than the naked ESP chip and I had a hard time containing it all in the Nerf gun. One of my objectives was to keep the Nerf gun “plain” looking.

IMG_5954

NodeMCU Dev Board on the left, ESP8266-12 on the right

I started with the servo integration as I figured that was the hardest.  Integrating the relay should be relatively easy compared to that.  Here is a Vine video with my first working version of the ESP8266 and the Servo motor attached, using a crude web service:

I then modded the gun to accommodate the servo. As the servo was too big to fit in the gun, I had to cut a hole in the side of the gun and hot-glue the servo in place.

I tried a few micro servos first, but none of them were powerful enough to push the dart, and they didn’t fit properly either.

I then modified the lever so that the servo could control it.  This meant I had to shorten the axis of the lever and cut 2 slits which could then be gripped by the servo.  As the servo turns, so does the lever, and the dart gets pushed between the flywheels:

ModdifiedGripper

At this point, I was also using all sorts of connectors to connect the battery to the different motors and ESP.

I thought that this would make it easier to route all the cables in the Nerf gun and later disconnect everything if I need to open the gun.

However, this turned out not to be the case, and in later iterations I just soldered the connections straight to the necessary components, with as short a cable as possible. The exception was the servo connection, as the servo was the only component physically connected to the other half of the Nerf gun.

This way, if I did have to open up the Nerf gun, I could still disconnect the halves from one another.

Final Build

As you can see, this yields far less cable to deal with and it is easy to close the Nerf gun this way, with room to spare.

You can also see that at this stage, I added an externally accessible connector that connects to RX, TX and GND of the ESP chip.

This allows me to debug and reprogram the ESP chip without opening the Nerf gun!  This has proven to be exceptionally useful, as you can imagine. There are 14 tiny screws holding the Nerf gun together.

I also disabled one of the safety mechanisms on the Nerf gun so I can fire it without closing the jam door on top of the Nerf gun.

This made it easier to debug some issues I had with the servo not pushing the darts correctly through the flywheels and jamming the Nerf gun.  Here are a few more Vine videos of the IoT Nerf gun in action.

This one shows the servo pushing the rod from the inside:

And a view from the top with the jam door open:

In the second part of this blog post, I will go into more detail about the software I wrote to control the Nerf gun.