
Oracle AppsLab

Driving Innovation

Asteroid Hackathon

Mon, 2014-11-17 09:49

A couple of weeks ago, Jeremy Ashley (@jrwashley), Bill Kraus, Raymond Xie and I participated in the Asteroid Hackathon hosted by @EchoUser. The main focus was “to engage astronomers, other space nerds, and the general public, with information, not just data.”


As you might already know, we here at the AppsLab are big fans of hackathons, as well as ShipIt days and FedEx days. The ability to get together, pool our collective minds and create something in a short amount of time is truly amazing. It also helps keep us on our toes, technically and creatively.

Our team built what we called “The Daily Asteroid.” The idea behind our project was to highlight the current date’s closest-approach near-Earth object (NEO) data, in other words, to show which asteroid comes closest to Earth today. A user could “favorite” today’s asteroid and start a conversation with other users about it on a social network like Twitter.
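If you want to play with the same idea, the sketch below pulls today’s closest approach from NASA’s public NeoWs feed. It’s a minimal illustration, not our hackathon code; the endpoint, the DEMO_KEY placeholder and the JSON field names are assumptions based on the public API documentation.

```python
import datetime
import requests  # pip install requests

API_KEY = "DEMO_KEY"  # placeholder; get a real key at api.nasa.gov
FEED_URL = "https://api.nasa.gov/neo/rest/v1/feed"

def daily_asteroid(api_key: str = API_KEY) -> dict:
    """Return the near-Earth object with today's closest approach."""
    today = datetime.date.today().isoformat()
    resp = requests.get(FEED_URL, params={
        "start_date": today, "end_date": today, "api_key": api_key})
    resp.raise_for_status()
    neos = resp.json()["near_earth_objects"].get(today, [])
    if not neos:
        raise RuntimeError("No close approaches listed for today")
    # Pick the object with the smallest miss distance in kilometers.
    return min(neos, key=lambda neo: float(
        neo["close_approach_data"][0]["miss_distance"]["kilometers"]))

if __name__ == "__main__":
    neo = daily_asteroid()
    km = float(neo["close_approach_data"][0]["miss_distance"]["kilometers"])
    print(f"Today's asteroid: {neo['name']}, missing Earth by {km:,.0f} km")
```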


We also added the ability to change the asteroid properties (size, type, velocity, angle) and play out a scenario to see what damage it could cause if it hit the Earth. To finish up, we created an Asteroid Hotline using Twilio (@twilio), which you can call to get the latest NEO info from your phone!
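For the curious, a hotline like that boils down to a tiny web app that answers Twilio’s voice webhook with TwiML. Here’s a hedged sketch; the route name and the spoken text are made up for illustration, and the real service would first look up the day’s NEO (see the feed sketch above).

```python
from flask import Flask, Response  # pip install flask

app = Flask(__name__)

@app.route("/asteroid-hotline", methods=["POST"])
def asteroid_hotline():
    """Answer Twilio's voice webhook with spoken NEO info as TwiML."""
    # A real implementation would fetch today's NEO here and build the message.
    message = "Hello from the Daily Asteroid hotline. Today's asteroid data is on its way."
    twiml = ("<?xml version='1.0' encoding='UTF-8'?>"
             f"<Response><Say>{message}</Say></Response>")
    return Response(twiml, mimetype="text/xml")

if __name__ == "__main__":
    app.run(port=5000)
```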

We were lucky to be awarded third place, or “Best Engagement,” and we had a blast doing it. Considering the small amount of time we had, we came away really proud of our results.

The Cloud UX Lab

Mon, 2014-11-10 09:57

There’s a post over on VoX about a new OAUX lab at Oracle HQ, the Cloud UX Lab.


Jeremy Ashley, VP, in the new lab. Image used with permission.

Finished just before OOW in September, this lab is a showcase for OAUX projects, including a few of ours.

The lab reminds me of a spacecraft from the distant future, the medical bay or the flight deck. It’s a very cool place, directly inspired and executed by our fearless leader, Jeremy Ashley (@jrwashley), an industrial designer by trade.

I actually got to observe the metamorphosis of this space from something that felt like a doctor’s office waiting room into the new hotness. Looking back on those first meetings, I never expected it would turn out so very awesome.

Anyway, the reason I got to tag along on this project is that our team will be filling the control room for this lab with our demos. Noel (@noelportugal) and Jeremy have a shared vision for that space, which will be a great companion piece to the lab and equally awesome.

So, if you’re at Oracle HQ, book a tour and stop by the new Cloud UX Lab, experience the new hotness, and speculate on what Noel is cooking up behind the glass.

Pseudo-Philosophical Observations on Wearables, Part 1

Wed, 2014-11-05 11:53

Jawbone announced the Up3 today, reportedly its most advanced fitness tracker to date.

As with all fitness trackers, the Up3 has an accelerometer, but it also has sensors for measuring skin and ambient temperature, as well as something called bioimpedance. The data collected by the Up3 feed a new feature called Smart Coach.

You can imagine what the Smart Coach does. It sounds like a cool, possibly creepy, feature.

This post is not about the Up3.

This post is about my journey into the dark heart of the quantified self. The Up3 has just reminded me to coalesce my thoughts.

Earlier this year, I started wearing my first fitness tracker, the Misfit Shine. I happily wore it for about two months before the battery died, and then I realized it had control of me.

Misfit calculates activity based on points, and my personal goal of 1,000 points was relatively easy to reach every day, even for someone who works from home. What I realized quickly was that the Shine pushed me to chase points, not activity.


My high score.

 

The Shine uses its accelerometer to measure activity, so depending on where I wore it on my person, a run could be worth more points. This isn’t unique to the Shine. I’ve seen people spinning at the gym wearing their fitness trackers on their ankles.

As the weeks passed, I found myself avoiding activities that didn’t register a lot of points, definitely not good behavior, and even though my goal was 1,000 points, I avoided raising it for fear of missing my daily goal-achievement dopamine high.

Then, mid-summer, Misfit dropped an update that added some new game mechanics, and one day, my Shine app happily informed me that I’d hit my goal 22 days in a row.

This streak was the beginning of the end for me.

On the 29th day of my streak, the battery died. I replaced it, crisis averted, streak intact. Then, later that day, the Shine inexplicably died. I tried several new batteries and finally had to contact support.

All the while, I worried about my streak. I went to the gym, but it felt hollow and meaningless without the tangible representation, the coaching, as it were, from my Shine.

This is not a good look.

Misfit replaced my Shine, but in the days that elapsed, during my detox, I decided to let it go. Turns out the quantified self isn’t for obsessive, overly-competitive personality types like me.

And I’m not the only one in this group.

In September, I read an article called Stepping Out: Living the Fitbit Life, in which the author, David Sedaris, describes a similar obsession with his Fitbit. As I read it, I commiserated, but I also felt a little jealous of the level of his commitment. This dude makes me look like a rank amateur.

Definitely worth a read.

Anyway, this is not in any way meant to be an indictment of the Shine, Fitbit, Jawbone or any fitness tracker. Overall, these devices offer people a positive and effective way to reinforce healthy behavior and habits.

But for people like me, they can lead to unanticipated side effects. As I read about the Up3, its sensors and Smart Coach, all of which sound very cool, I had to remind myself of the bad places I went with the Shine.

And of the colloquial, functionally incorrect, but very memorable definition of insanity.

In Part 2, when I get around to it, I’ll discuss the flaws in the game mechanics these companies use.

Find the comments.

Google Glass, Android Wear, and Apple Watch

Tue, 2014-10-28 15:43

I have both Google Glass and Android Wear devices (Samsung Gear Live, Moto 360), and oftentimes I wear them together. People always come up with the same question: “How do you compare Google Glass and Android watches?” Let me address a couple of viewpoints here. I would like to talk about the Apple Watch, but since it has not been officially released yet, let’s just say that shape-wise it is square and looks like a Gear Live, and its features seem pretty similar to Android Wear, with the exception of an attempt at more playful colors and features. Let’s discuss it more once it is out.


I was in the first batch of Google Glass Explorers and got my Glass in mid-2013. In the middle of this year, I first got the Gear Live, then later the Moto 360. I always find it peculiar that Glass is the older technology while Wear is the newer one. Shouldn’t it have been easier to design a smartwatch before a glassware device?

I do find a lot of similarities between Glass and Wear. The fundamental similarity is that both are Android devices. They are voice-input enabled and show you notifications. You can install additional Android applications to personalize your experience and maximize your usage. I see these as the true values of wearables.

Differences? Glass has a lot of capabilities that Android Wear lacks at the moment. The ones that probably matter most to people are sound, phone calls, video recording, picture taking, a hands-free heads-up display, GPS and wifi. Unlike Android Wear, Glass can be used standalone; Android Wear is only a companion gadget and has to be paired with a phone.

Is Glass superior, then? Android Wear provides better touch-based interaction compared to swiping on the side of the Glass frame. You can also play simple games like Flopsy Droid on your watch. Pedometers and heart-rate sensors are also commonly included. Glass, meanwhile, tends to overheat easily. Water resistance plays a role here too: you would almost never want to get your Glass wet at all, while Android Wear is water-resistant to a certain degree. And when you are charging your watch at night, it also serves as a bedside clock.


For me personally, although I have owned Glass longer than Wear, I have to say I prefer Android Wear over Glass for a couple of reasons. First, there is the significant price gap ($1,500 vs. $200). Second, especially when you add prescription lenses to Glass, it gets heavy and hurts the ear when worn for an extended period of time. Third, I do not personally find the additional features offered by Glass useful to my daily activities; I do not normally take pictures other than at specific moments or while I am traveling.

I also find that even though Glass is now publicly available within the US, it is still perceived as an anti-social gadget; the term is defined in the Urban Dictionary as well. Most of the people I know who own Glass do not wear it themselves, for various reasons. I believe improving the marketing and advertising strategy for Glass may help.

Gadget preference is personal. What’s yours?

Glorious Data Visualizations for Your Friday

Fri, 2014-10-24 09:00

If you’ve read here for more than a hot minute, you’ll know that I love me some data visualization.

This love affair dates back to when Paul (@ppedrazzi) pointed me to Hans Rosling’s (@hansrosling) first TED talk. I’m sure Hans has inspired an enormous city of people by now, judging by the 8 million plus views his TED talk has garnered. Sure, those aren’t unique views, but even so.

There’s an interesting meta-project: visualize the people influenced by various visualization experts, like a coaching tree or something.


Classic comic from xkcd, used under CC 2.5

Back on track, if you haven’t yet, watch the BBC documentary on him, “The Joy of Stats,” fantastic stuff, or if you have seen it, watch it again.

As luck would have it, one area of specialization of our newest team members is, wait for it, data visualization.

Last week, I got to see them in action in a full-day workshop on data visualization, which was eye-opening and very informative.

I’m hoping to get a few blog posts out of them on the subject, and while we wait, I wanted to share some interesting examples we’ve been throwing around in email.

I started the conversation with xkcd because, of course I did. Randall Munroe’s epic comic isn’t usually mentioned as a source for data visualizations, but if you read it, you’ll know that he has a knack for exactly that. Checking out the Google Image search for “xkcd data visualization” reminded me of just how many graphs, charts, maps, etc. Randall has produced over the years.

I also discovered that someone has created a D3 chart library as an homage to the xkcd style.

Anyway, two of my favorite xkcd visualizations are recent, possibly a function of my failing memory and not coincidence: Pixels and Click and Drag.

I probably spent 10 minutes zooming into Pixels, trying to find the bottom; being small-minded, I gave up pretty early on Click and Drag, assuming it was small. It’s not.

How much time did you spend, cough, waste, on these?

During our conversation, a couple of interesting examples came back to me, both worth sharing.

First is Art of the Title, dedicated to the opening credits of various films. In a very specific way, opening credits are data visualizations; they set the mood for the film and name the people responsible for it.

Second is Scale of the Universe, which is self-explanatory and addictive.

So, there you go. Enjoy investigating those two and watch this space for more visualization content.

And find the comments.

Mind Control?

Mon, 2014-10-13 16:37

Editor’s note: Hey look, a new author. Here’s the first post from Raymond Xie, who joined us nearly a year ago. You may remember him from such concept demos as geo-fencing or Pebble watchface. Raymond has been busy at work and wants to share the work he did with telekinesis. Or something, you decide. Enjoy.

You put on a headband, stare at a ball, tilt your head back-forth and left-right . . . the ball navigates through a simple maze, rushing, wavering, changing colors, and finally hitting the target.

That is the latest creation out of the AppsLab: the Muse Sphero Driver. When it was first shown at the OAUX Exchange during OOW, it amused many people, who called it a “mind control” game.

The setup consists of Muse, a brain-sensing headband; Sphero, a robotic ball; and a tablet to bridge the two.

Technically, it is your brainwave data (electroencephalography, or EEG) driving the Sphero, adjusting its speed and changing its color along a spectrum from red (fast, active) to blue (slow, calm), while head gestures (from the 3D accelerometer, or ACC) control the direction of the Sphero’s movement. Whether or not you call that “mind control” is up to your own interpretation.
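To make that mapping concrete, here’s a small sketch of the kind of translation involved. The function, the linear calm-to-speed mapping and the four-way heading logic are illustrative assumptions, not the actual Muse Sphero Driver code.

```python
def eeg_to_drive(calm_score: float, pitch: float, roll: float):
    """Map a normalized EEG calmness score (0 = active, 1 = calm) and head
    tilt angles (degrees) to a Sphero-style speed, heading and RGB color."""
    calm_score = max(0.0, min(1.0, calm_score))
    speed = int((1.0 - calm_score) * 255)            # active mind -> fast ball
    red, blue = int((1.0 - calm_score) * 255), int(calm_score * 255)
    color = (red, 0, blue)                           # red = active, blue = calm
    # Head tilt picks one of four headings: forward, back, left, right.
    if abs(pitch) >= abs(roll):
        heading = 0 if pitch > 0 else 180
    else:
        heading = 90 if roll > 0 else 270
    return speed, heading, color

# Example: a fairly calm reading with the head tilted to the right.
print(eeg_to_drive(calm_score=0.8, pitch=2.0, roll=15.0))
```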

You kind of drive the ball with your mind, but mostly it is brainwave noise rather than conscious thought. It is still too early to derive accurate “mind control” from the EEG data of any regular person, for two reasons:

1. At the scalp level, the EEG signal-to-noise ratio is very poor.
2. The correlation between EEG readings and specific mental activity still has to be established.

But it does open up a dialog in HCI, such as voice control vs. mind control (silence); or in robotics, where instead of asking the machine to “see” and “understand,” we can “see” and “understand” for it and impersonate it with our mind and soul.

While it is difficult to read out the “mind” (any mind activity) transparently, we think it is quite doable to map your mind into certain states and use that “state” as a command indirectly.

We may do something around this area. So stay tuned.

Meanwhile, you can start practicing yoga or Zen, to get a better signal-to-noise ratio and to set your mind into a certain state with ease.

Here We Grow Again

Mon, 2014-10-13 12:18

Cheesy title aside, the AppsLab (@theappslab) is growing again, and this time, we’re branching out into new territory.

As part of the Oracle Applications User Experience (@usableapps) team, we regularly work with interaction designers, information architects and researchers, all of whom are pivotal to ensuring that what we build is what users want.

Makes sense, right?

So, we’re joining forces with the Emerging Interactions team within OAUX to formalize a collaboration that has been ongoing for a while now. In fact, if you read here, you’ll already recognize some of the voices, specifically John Cartan and Joyce Ohgi, who have authored posts for us.

For privacy reasons (read, because Jake is lazy), I won’t name the entire team, but I’m encouraging them to add their thoughts to this space, which could use a little variety. Semi-related, Noel (@noelportugal) was on a mission earlier this week to add content here and even rebrand this old blog. That seems to have run its course quickly.

One final note: another author has also joined the fold, Mark Vilrokx (@mvilrokx); Mark brings a long and decorated history of development experience with him.

So, welcome everyone to the AppsLab team.

Did You See Our Work in Steve Miranda’s Keynote?

Fri, 2014-10-10 09:28

Last week at OpenWorld, a few of our projects were featured in Steve Miranda’s (@stevenrmiranda) keynote session.

Jeremy (@jrwashley) tweeted the evidence.


Debra (@debralilley) noticed too. I wasn’t able to attend the keynote, so I found out thanks to the Usable Apps (@usableapps) Storify, which chronicled “Our OpenWorld 2014 Journey.”

And today, I finally got to see the video, produced by Friend of the ‘Lab, Martin Taylor, who you might remember from other awesome videos like “A Smart Holster for Law Enforcement.”

Noel (@noelportugal) and Anthony (@anthonyslai) both play developers in the short film. Noteworthy: the expression on Noel’s face as he drives the Sphero ball with the Muse brain-sensing headband.

Thanks to Martin for making this video, thanks to Steve for including it in his keynote, and thanks to you for watching it.

ESP8266 – Cheap WiFi for your IoT

Thu, 2014-10-09 21:14

About a month ago, hackaday.com broke the news of a new wifi chip called the ESP8266 that costs about $5. This wireless system on a chip (SoC) took all the IoT heads (including me) by surprise. Until now, if you wanted to integrate wifi into any DIY project, you had to use more expensive solutions. To put this into perspective, my first wifi Arduino shield was about $99!


So I ordered a few of them (I think I’m up to 10 now!) and set out to test the possibilities. I came up with a simple Instructable to show how you can log a room’s temperature to the cloud. I used an Arduino to do this, but one of the most amazing things about this chip is that you can use it standalone! Right now documentation is sparse, but I was able to compile source code using a gcc compiler toolchain created by the new esp8266 community.
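If you want a feel for how the module is driven in its simplest form, the stock firmware speaks plain AT commands over a serial line. Below is a rough Python sketch using pyserial from a laptop; the port name, baud rate, timings and even the AT command set vary by firmware version, so treat it purely as an illustration of the flow (join wifi, open a TCP connection, send an HTTP request carrying a temperature reading).

```python
import time
import serial  # pip install pyserial

# Port name and baud rate are assumptions; early modules shipped at various rates.
esp = serial.Serial("/dev/ttyUSB0", 115200, timeout=2)

def at(cmd: str, wait: float = 1.0) -> str:
    """Send one AT command and return whatever the module echoes back."""
    esp.write((cmd + "\r\n").encode())
    time.sleep(wait)
    return esp.read(esp.in_waiting or 1).decode(errors="ignore")

print(at("AT"))                                      # sanity check, expect "OK"
print(at("AT+CWMODE=1"))                             # station (client) mode
print(at('AT+CWJAP="my-ssid","my-password"', 8))     # join the wifi network
print(at('AT+CIPSTART="TCP","example.com",80', 4))   # open a TCP connection
request = "GET /log?temp=22.5 HTTP/1.0\r\nHost: example.com\r\n\r\n"
print(at(f"AT+CIPSEND={len(request)}"))              # announce payload length
print(at(request, 4))                                # send the HTTP request
```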

But why is this important to you, even if you haven’t dabbled with DIY electronics? Well, this chip comes from China, and even though it doesn’t have an FCC stamp of approval (yet), it signals what’s about to come. This is what I call the Internet of Things r(evolution). Prices of these chips are at a historic low, and soon we will see more and more products connecting to the Internet and the cloud: light switches, light bulbs, washing machines, dishwashers. Anything that needs to be turned on or off could potentially have one of these. Anything that collects data, like thermostats, smoke detectors, etc., could also potentially have one.

So, are you scared, or will you welcome our new internet overlords?

iBeacons or The Physical Web?

Tue, 2014-10-07 06:55

For the past year at the AppsLab we have been exploring the possibilities of advanced user interactions using BLE beacons. A couple of days ago, Google (unofficially) announced that one of their Chrome teams is working on what I’m calling the gBeacon. They are calling it the Physical Web.
This is how they describe it:

“The Physical Web is an approach to unleash the core superpower of the web: interaction on demand. People should be able to walk up to any smart device – a vending machine, a poster, a toy, a bus stop, a rental car – and not have to download an app first. Everything should be just a tap away.

The Physical Web is not shipping yet nor is it a Google product. This is an early-stage experimental project and we’re developing it out in the open as we do all things related to the web. This should only be of interest to developers looking to test out this feature and provide us feedback.”

Here is a short rundown of how iBeacon works vs. the Physical Web beacons:

iBeacon

The iBeacon profile advertises a 30-byte packet containing three values that combined make a unique identifier: UUID, Major and Minor. The mobile device actively listens for these packets. When it gets close to one of them, it queries a database (in the cloud) or uses hard-coded values to determine what it needs to do or show for that beacon. Generally, the UUID identifies a common organization, the Major value identifies an asset within that organization, and the Minor identifies a subset of assets belonging to the Major.
For example, if I’m close to the Oracle campus, and I have an Oracle application that is actively listening for beacons, then as I get within reach of any beacon, my app can trigger certain interactions related to the whole organization (“Hello Noel, welcome to Oracle.”). The application had to query a database to know what that UUID represents. As I reach building 200, my application picks up another beacon that contains a Major value of, let’s say, 200. My app will do the same and query to see what it represents (“You are in building 200.”). Finally, when I get close to our new Cloud UX Lab, a beacon inside the lab will broadcast a Minor ID that represents the lab (“This is the Cloud UX Lab, want to learn more?”).

iBeacons are designed to work as a fully closed ecosystem where only the deployed pieces (app + beacons + db) know what a beacon represents. Today I can walk into the Apple Store and use a Bluetooth app to “sniff” BLE devices, but unless I know what their UUID/Major/Minor values represent, I cannot do anything with that information. Only the official Apple Store app knows what to do when it is near the beacons around the store (“Looks like you are looking for a new iPhone case.”)
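To ground the UUID/Major/Minor idea, here’s a small sketch that unpacks those three values from the manufacturer-specific data in an iBeacon advertisement. The byte layout follows the commonly documented format (Apple company ID, type, length, then UUID, Major, Minor, TX power); the sample campus beacon at the bottom is entirely hypothetical.

```python
import struct
import uuid

def parse_ibeacon(mfg_data: bytes):
    """Parse iBeacon manufacturer data: 4C 00 02 15 | UUID(16) | Major(2) | Minor(2) | TxPower(1)."""
    if len(mfg_data) < 25 or mfg_data[0:4] != b"\x4c\x00\x02\x15":
        return None  # not an iBeacon frame
    proximity_uuid = uuid.UUID(bytes=mfg_data[4:20])
    major, minor = struct.unpack(">HH", mfg_data[20:24])  # big-endian 16-bit values
    tx_power = struct.unpack("b", mfg_data[24:25])[0]     # signed dBm at 1 meter
    return proximity_uuid, major, minor, tx_power

# Hypothetical campus beacon: one UUID for the org, Major = building, Minor = room.
sample = (b"\x4c\x00\x02\x15"
          + uuid.UUID("11111111-2222-3333-4444-555555555555").bytes
          + struct.pack(">HH", 200, 42) + struct.pack("b", -59))
print(parse_ibeacon(sample))
```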

As you can see, the iBeacon approach is a “push” method, where the device proactively pushes actions to you. In contrast, the Physical Web beacon proposes to act as a “pull,” or on-demand, method.

Physical Web

The Physical Web gBeacon advertises a 28-byte packet containing an encoded URL. Google wants to use the familiar and established mechanism of URLs to tell an application, or an OS, where to find information about physical objects. They plan to use context (physical and virtual) to rank what might be most important to you at the current time and display it.
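Because the advertised packet is so small, the URL is squeezed with a lookup table of common prefixes and suffixes. Here’s a hedged Python sketch of that style of encoding; the code tables below follow the UriBeacon/Eddystone-URL drafts and may not match Google’s final spec exactly.

```python
# Common prefix/suffix expansion codes (as published in the UriBeacon /
# Eddystone-URL drafts; treat the exact table as an assumption here).
PREFIXES = {"http://www.": 0x00, "https://www.": 0x01, "http://": 0x02, "https://": 0x03}
SUFFIXES = {".com/": 0x00, ".org/": 0x01, ".edu/": 0x02, ".net/": 0x03,
            ".info/": 0x04, ".biz/": 0x05, ".gov/": 0x06, ".com": 0x07}

def encode_url(url: str) -> bytes:
    """Squeeze a URL into the handful of bytes a beacon frame allows."""
    out = bytearray()
    for prefix, code in PREFIXES.items():
        if url.startswith(prefix):
            out.append(code)
            url = url[len(prefix):]
            break
    while url:
        # Try the longest matching suffix first; otherwise emit one plain byte.
        for suffix, code in sorted(SUFFIXES.items(), key=lambda kv: -len(kv[0])):
            if url.startswith(suffix):
                out.append(code)
                url = url[len(suffix):]
                break
        else:
            out.append(ord(url[0]))
            url = url[1:]
    return bytes(out)

encoded = encode_url("https://www.oracle.com/appslab")
print(len(encoded), encoded.hex())  # a long URL shrinks to a handful of bytes
```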


Image from https://github.com/google/physical-web/blob/master/documentation/introduction.md

The Physical Web approach is designed to be a “pull” discovery service where the user will most likely initiate the interaction. For example, when I arrive at the Oracle campus, I can start an application that scans for nearby gBeacons, or I can open my Chrome browser and do a search. The application or browser will use context to rank nearby objects alongside regular results. It can also use calendar data, email or Google Now to narrow down interests. A background process with “push” capabilities could also be implemented; it could have filters that alert the user to nearby objects of interest. These interest rules could be predefined or inferred using Google’s intelligence-gathering systems like Google Now.

The main difference between the two approaches is that iBeacon is a closed ecosystem (app + beacons + db), while the Physical Web is intended to be a public, self-discovering (app/OS + beacons + www) physical extension of the web, although the Physical Web could also be restricted by using protected websites and encrypted URLs.

Both approaches account for the common misconception about these technologies: “Am I going to be spammed as soon as I walk inside a mall?” The answer is no. iBeacon is an opt-in service within an app, and Physical Web beacons will mostly work on demand or through filter subscriptions.

So there you have it. Which method do you prefer?