
Oracle AppsLab

Driving Innovation

Mind Control?

Mon, 2014-10-13 16:37

Editor’s note: Hey look, a new author. Here’s the first post from Raymond Xie, who joined us nearly a year ago. You may remember him from concept demos like geo-fencing or the Pebble watchface. Raymond has been busy at work and wants to share the work he did with telekinesis. Or something, you decide. Enjoy.

You put on a headband, stare at a ball, tilt your head back and forth and left and right . . . the ball navigates through a simple maze, rushing, wavering, changing colors, and finally hitting the target.

That is the latest creation out of the AppsLab: the Muse Sphero Driver. When it was first shown at the OAUX Exchange during OOW, it amused many people, who called it a “mind control” game.

The setup consists of Muse, a brain-sensing headband; Sphero, a robotic ball; and a tablet to bridge the two.

Technically, it is your brainwave data (electroencephalography, or EEG) driving the Sphero, adjusting its speed and changing its color along a spectrum from RED to BLUE, where RED means fast and active and BLUE means slow and calm, while your head gestures (3-axis accelerometer, or ACC) control the direction of the Sphero’s movement. Whether you call that “mind control” is up to your own interpretation.
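
Here is a minimal sketch of that kind of mapping in Python, assuming you already have relative alpha/beta band power from the headband and raw accelerometer readings; the function names, thresholds, and the final Sphero calls are illustrative, not the actual app code.

# Illustrative mapping only: band-power inputs and the Sphero calls are placeholders.
import math

def eeg_to_speed_and_color(alpha_power, beta_power):
    """Map relative EEG band power to a speed (0..1) and an RGB color.
    More beta than alpha is treated as active (fast, RED); more alpha as calm (slow, BLUE)."""
    total = alpha_power + beta_power
    activity = beta_power / total if total > 0 else 0.5   # 0 = calm, 1 = active
    speed = activity
    color = (int(255 * activity), 0, int(255 * (1 - activity)))  # RED..BLUE blend
    return speed, color

def head_tilt_to_heading(acc_x, acc_y):
    """Map left/right (x) and forward/back (y) tilt to a heading in degrees."""
    return math.degrees(math.atan2(acc_x, acc_y)) % 360

speed, color = eeg_to_speed_and_color(alpha_power=0.7, beta_power=0.3)
heading = head_tilt_to_heading(acc_x=0.2, acc_y=0.9)
# sphero.roll(speed, heading); sphero.set_color(*color)   # hypothetical driver calls
print(speed, color, heading)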

You kind of drive the ball with your mind, but mostly with brainwave noise rather than conscious thought. It is still too early to derive accurate “mind control” from the EEG data of any regular person, for two reasons:

1. EEG measured at the scalp has a very poor signal-to-noise ratio;
2. The correlation between EEG signals and specific mental activity still needs to be established.

But it does open up a dialog in HCI, such as voice control vs. mind control (silent control); or in robotics, where instead of asking the machine to “see” and “understand,” we can “see” and “understand” for it, impersonating the machine with our own mind and soul.

While it is difficult to read out the mind (arbitrary mental activity) transparently, we think it is quite doable to map your mind onto certain states and use those states as indirect commands.

We may do more work in this area, so stay tuned.

Meanwhile, you can start practicing yoga or Zen to get a better signal-to-noise ratio and to settle your mind into a given state with ease.

Here We Grow Again

Mon, 2014-10-13 12:18

Cheesy title aside, the AppsLab (@theappslab) is growing again, and this time, we’re branching out into new territory.

As part of the Oracle Applications User Experience (@usableapps) team, we regularly work with interaction designers, information architects and researchers, all of whom are pivotal to ensuring that what we build is what users want.

Makes sense, right?

So, we’re joining forces with the Emerging Interactions team within OAUX to formalize a collaboration that has been ongoing for a while now. In fact, if you read here, you’ll already recognize some of the voices, specifically John Cartan and Joyce Ohgi, who have authored posts for us.

For privacy reasons (read, because Jake is lazy), I won’t name the entire team, but I’m encouraging them to add their thoughts to this space, which could use a little variety. Semi-related, Noel (@noelportugal) was on a mission earlier this week to add content here and even rebrand this old blog. That seems to have run its course quickly.

One final note: another author has joined the fold, Mark Vilrokx (@mvilrokx), who brings a long and decorated history of development experience with him.

So, welcome everyone to the AppsLab team.

Did You See Our Work in Steve Miranda’s Keynote?

Fri, 2014-10-10 09:28

Last week at OpenWorld, a few of our projects were featured in Steve Miranda’s (@stevenrmiranda) keynote session.

Jeremy (@jrwashley) tweeted the evidence.


Debra (@debralilley) noticed too. I wasn’t able to attend the keynote, so I found out thanks to the Usable Apps (@usableapps) Storify, which chronicled “Our OpenWorld 2014 Journey.”

And today, I finally got to see the video, produced by Friend of the ‘Lab, Martin Taylor, who you might remember from other awesome videos like “A Smart Holster for Law Enforcement.”

Noel (@noelportugal) and Anthony (@anthonyslai) both play developers in the short film. Noteworthy: the expression on Noel’s face as he drives the Sphero ball with the Muse brain-sensing headband.

Thanks to Martin for making this video, thanks to Steve for including it in his keynote, and thanks to you for watching it.

ESP8266 – Cheap WiFi for your IoT

Thu, 2014-10-09 21:14

About a month ago, hackaday.com broke the news of a new WiFi chip called the ESP8266 that costs about $5. This wireless system on a chip (SoC) took all the IoT heads (including me) by surprise. Until now, if you wanted to integrate WiFi into any DIY project, you had to use more expensive solutions. To put this into perspective, my first WiFi Arduino shield was about $99!


So I ordered a few of them (I think I’m up to 10 now!) and went to test the possibilities. I came up with a simple Instructable to show how you can log a room’s temperature to the cloud. I used an Arduino to do this, but one of the most amazing things about this chip is that you can use it standalone! Right now documentation is sparse, but I was able to compile the source code using a gcc compiler toolchain created by the new esp8266 community.
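
For the curious, here is roughly what the AT-command conversation with the chip looks like, sketched in Python over a USB-serial adapter rather than the Arduino I actually used; the serial port, WiFi credentials, host, and path are placeholders.

# Sketch only: port, SSID, host and path are made up; adapt to your own setup.
import time
import serial  # pyserial

ser = serial.Serial('/dev/ttyUSB0', 115200, timeout=2)

def at(cmd, wait=2):
    """Send one AT command to the ESP8266 and return whatever it answers."""
    ser.write((cmd + '\r\n').encode())
    time.sleep(wait)
    return ser.read(ser.in_waiting or 1).decode(errors='ignore')

at('AT+CWMODE=1')                                   # station mode
at('AT+CWJAP="my-ssid","my-password"', wait=8)      # join the WiFi network
at('AT+CIPSTART="TCP","logger.example.com",80', wait=4)

temperature = 22.5  # would come from a real sensor
request = 'GET /log?temp={} HTTP/1.1\r\nHost: logger.example.com\r\n\r\n'.format(temperature)
at('AT+CIPSEND={}'.format(len(request)))
ser.write(request.encode())
print(at('AT+CIPCLOSE'))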

But why is this important to you, even if you haven’t dabbled in DIY electronics? Well, this chip comes from China, and even though it doesn’t have an FCC stamp of approval (yet), it signals things to come. This is what I call the Internet of Things r(evolution). Prices of these chips are at a historic low, and soon we will see more and more products connecting to the Internet/cloud: light switches, light bulbs, washing machines, dishwashers. Anything that needs to be turned on or off could potentially have one of these. Anything that collects data, like thermostats, smoke detectors, etc., could also potentially have one.

So, are you scared, or will you welcome our new internet overlords?

iBeacons or The Physical Web?

Tue, 2014-10-07 06:55

For the past year at the AppsLab, we have been exploring the possibilities of advanced user interactions using BLE beacons. A couple of days ago, Google (unofficially) announced that one of their Chrome teams is working on what I’m calling the gBeacon. They are calling it the Physical Web.
This is how they describe it:

“The Physical Web is an approach to unleash the core superpower of the web: interaction on demand. People should be able to walk up to any smart device – a vending machine, a poster, a toy, a bus stop, a rental car – and not have to download an app first. Everything should be just a tap away.

The Physical Web is not shipping yet nor is it a Google product. This is an early-stage experimental project and we’re developing it out in the open as we do all things related to the web. This should only be of interest to developers looking to test out this feature and provide us feedback.”

Here is a short rundown of how iBeacon works vs. the Physical Web beacons:

iBeacon

The iBeacon profile advertises a 30-byte packet containing three values that combined make a unique identifier: UUID, Major, and Minor. The mobile device actively listens for these packets. When it gets close to one of them, it queries a database (in the cloud) or uses hard-coded values to determine what it needs to do or show for that beacon. Generally, the UUID identifies a common organization, the Major value an asset within that organization, and the Minor a subset of assets belonging to the Major.
For example, if I’m close to the Oracle campus, and I have an Oracle application that is actively listening for beacons, then as I get within reach of any beacon my app can trigger certain interactions related to the whole organization (“Hello Noel, Welcome to Oracle.”). The application had to query a database to know what that UUID represents. As I reach building 200, my application picks up another beacon that contains a Major value of, let’s say, 200. Then my app will do the same and query to see what it represents (“You are in building 200.”). Finally, when I get close to our new Cloud UX Lab, a beacon inside the lab will broadcast a Minor ID that represents the lab (“This is the Cloud UX lab, want to learn more?”).
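
To make the example concrete, here is a toy version of that closed lookup in Python; it assumes you already have the manufacturer-specific payload from the advertisement (16-byte UUID, Major, Minor, signed TX power), and the UUID and messages are invented to mirror the story above.

# Toy closed-ecosystem lookup: the UUID, Minor value, and messages are invented.
import struct
import uuid

ORACLE_UUID = uuid.UUID('f7826da6-0000-0000-0000-000000000000')  # made-up organization UUID

MESSAGES = {
    (ORACLE_UUID, None, None): 'Hello Noel, Welcome to Oracle.',
    (ORACLE_UUID, 200, None):  'You are in building 200.',
    (ORACLE_UUID, 200, 42):    'This is the Cloud UX lab, want to learn more?',  # 42 = made-up lab Minor
}

def parse_ibeacon(payload):
    """Unpack UUID (16 bytes), Major, Minor (big-endian) and signed TX power."""
    raw_uuid, major, minor, tx_power = struct.unpack('>16sHHb', payload)
    return uuid.UUID(bytes=raw_uuid), major, minor, tx_power

def message_for(beacon_uuid, major, minor):
    # Most specific match wins: Minor, then Major, then organization-wide.
    for key in ((beacon_uuid, major, minor), (beacon_uuid, major, None), (beacon_uuid, None, None)):
        if key in MESSAGES:
            return MESSAGES[key]
    return None  # unknown beacon: without the app's database it means nothing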

iBeacons are designed to work as a fully closed ecosystem, where only the deployed pieces (app+beacons+db) know what a beacon represents. Today I can walk into the Apple Store and use a Bluetooth app to “sniff” BLE devices, but unless I know what their UUID/Major/Minor values represent, I cannot do anything with that information. Only the official Apple Store app knows what to do when it is near the beacons around the store (“Looks like you are looking for a new iPhone case.”).

As you can see, the iBeacon approach is a “push” method, where the device proactively pushes actions to you. In contrast, the Physical Web beacon proposes to act as a “pull,” or on-demand, method.

Physical Web

The Physical Web gBeacon will advertise a 28-byte packet containing an encoded URL. Google wants to use the familiar and established mechanism of URLs to tell an application, or an OS, where to find information about physical objects. They plan to use context (physical and virtual) to rank what might be most important to you at the moment and display it. A small decoding sketch of that URL encoding follows below.


Image from https://github.com/google/physical-web/blob/master/documentation/introduction.md
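
To give a feel for how a URL fits in so few bytes, here is a small decoder sketch based on the UriBeacon encoding the project published: one byte selects the URL scheme, and certain byte values expand to common endings. The table is abbreviated here and the example bytes are made up.

# Abbreviated UriBeacon-style URL expansion tables; example bytes are made up.
SCHEMES = {0x00: 'http://www.', 0x01: 'https://www.', 0x02: 'http://', 0x03: 'https://'}
EXPANSIONS = {0x00: '.com/', 0x01: '.org/', 0x02: '.edu/', 0x03: '.net/',
              0x07: '.com', 0x08: '.org'}

def decode_uri(encoded):
    url = SCHEMES.get(encoded[0], '')
    for b in encoded[1:]:
        url += EXPANSIONS.get(b, chr(b))  # expansion byte or plain ASCII character
    return url

print(decode_uri(bytes([0x02]) + b'example' + bytes([0x07])))  # http://example.com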

The Physical Web approach is designed to be a “pull” discovery service, where most likely the user will initiate the interaction. For example, when I arrive at the Oracle campus, I can start an application that will scan for nearby gBeacons, or I can open my Chrome browser and do a search. The application or browser will use context to rank nearby objects alongside the results. It can also use calendar data, email, or Google Now to narrow down interests. A background process with “push” capabilities could also be implemented. This process could have filters that alert the user to nearby objects of interest. These interest rules could be predefined or inferred by using Google’s intelligence-gathering systems like Google Now.

The main difference between the two approaches is that iBeacon is a closed ecosystem (app+beacons+db), while the Physical Web is intended to be a public, self-discoverable (app/OS+beacons+www) physical extension of the web, although the Physical Web could also be restricted by using protected websites and encrypted URLs.

Both approaches take steps to prevent the common misconception about these technologies: “Am I going to be spammed as soon as I walk inside a mall?” The answer is NO. iBeacon is an opt-in service within an app, and Physical Web beacons will mostly work on demand or have filter subscriptions.

So there you have it. Which method do you prefer?

Oracle OpenWorld and JavaOne 2014 Cometh

Mon, 2014-09-22 11:28

This time next week, we’ll be in the thick of the Oracle super-conference, the combination of Oracle OpenWorld and JavaOne.

This year, our team and our larger organization, Oracle Applications User Experience, will have precisely a metric ton of activities during the week.

For the first time, our team will be doing stuff at JavaOne too. Anthony (@anthonyslai) will be talking on Monday about the IFTTPi workshop we built for the Java team for Maker Faire back in May, and Tony will be showing those workshop demos in the JavaOne OTN Lounge at the Hilton all week.

If you’re attending either show or both, stop by, say hello and ask about our custom wearable.

Speaking of wearables, Ultan (@ultan) will be hosting a Wearables Meetup a.k.a. Dress Code 2.0 in the OTN Lounge at OpenWorld on Tuesday, September 30 from 4-6 PM. We’ll be there, and here’s what to expect:

  • Live demos of wearables proof-of-concepts integrated with the Oracle Java Cloud.
  • A wide selection of wearable gadgets available to try on for size.
  • The OAUX team chatting about use cases, APIs, integrations, UX design, fashion, and how you can use OTN resources to build your own solutions.

Update: Here are Bob (@OTNArchBeat) and Ultan talking about the meetup.

Here’s the list of all the OAUX sessions:

Oracle Applications Cloud User Experiences: Trends, Tailoring, and Strategy

Presenters: Jeremy Ashley, Vice President, Applications User Experience; Jatin Thaker, Senior Director, User Experience; and Jake Kuramoto, Director, User Experience

The Oracle Applications Cloud user experience design strategy is about simplicity, mobility, and extensibility. See what we mean by simplicity as we demo our latest cloud user experiences and show you only the essential information you need for your work. Learn how we are addressing mobility, by delivering the best user experience for each device as you access your enterprise data in the cloud. We’ll also talk about the future of enterprise experiences and the latest trends we see emerging in the consumer market. And finally, understand what we mean by extensibility after hearing a high-level overview of the tools designed for tailoring the cloud user experience. With this team, you will always get a glimpse into the future, so we know you will be inspired about the future of the cloud.

Session ID: CON7198
Date: Monday, Sept. 29, 2014
Time: 2:45 p.m. – 3:30 p.m.
Location: Moscone West – 3007

Learn How to Create Your Own Java and Internet of Things Workshop

Presenter: Anthony Lai, User Experience Architect, Oracle

This session shows how the Applications User Experience team created an interactive workshop for the Oracle Java Zone at Maker Faire 2014. Come learn how the combination of the Raspberry Pi and Embedded Java creates a perfect platform for the Internet of Things. Then see how Java SE, Raspi, and a sprinkling of user experience expertise engaged Maker Faire visitors of all ages, enabling them to interact with the physical world by using Java SE and the Internet of Things. Expect to play with robots, lights, and other Internet-connected devices, and come prepared to have some fun.

Session ID: JavaOne 2014, CON7056
Date: Monday, Sept. 29, 2014
Time: 4 p.m. – 5 p.m.
Location: Parc 55 – Powell I/II

Oracle HCM Cloud User Experiences: Trends, Tailoring, and Strategy

Presenters: Jeremy Ashley, Vice President, Applications User Experience, Oracle; Aylin Uysal, Director, Human Capital Management User Experience, Oracle

The Oracle Applications Cloud user experience design strategy is about simplicity, mobility, and extensibility. See what we mean by simplicity as we demo our latest cloud user experiences and show you only the essential information you need for your work. Learn how we are addressing mobility, by delivering the best user experience for each device as you access your enterprise data in the cloud. We’ll also talk about the future of enterprise experiences and the latest trends we see emerging in the consumer market. And finally, understand how you can extend with the Oracle tools designed for tailoring the cloud user experience. With this team, you will always get a glimpse into the future. Come and get inspired about the future of the Oracle HCM Cloud.

Session ID: CON8156
Date: Tuesday, Sept. 30, 2014
Time: 12:00 p.m. – 12:45 p.m.
Location: Palace – Presidio

Oracle Sales Cloud: How to Tailor a Simple and Efficient Mobile User Experience

Presenters: Jeremy Ashley, Vice President, Applications User Experience, Oracle; Killian Evers, Senior Director, Applications User Experience, Oracle

The Oracle Applications Cloud user experience design strategy is about simplicity, mobility, and extensibility. In this session, learn how Oracle is addressing mobility by delivering the best user experience for each device as you access your enterprise data in the cloud. Hear about the future of enterprise experiences and the latest trends Oracle sees emerging in the consumer market. You’ll understand what Oracle means by extensibility after getting a high-level overview of the tools designed for tailoring the cloud user experience, and you’ll also get a glimpse into the future of Oracle Sales Cloud.

Session ID: CON7172
Date: Wednesday, Oct. 1, 2014
Time: 4:30 p.m. – 5:15 p.m.
Location: Moscone West – 2003

Oracle Applications Cloud: First-Time User Experience

Presenters: Laurie Pattison, Senior Director, User Experience; and Mindi Cummins, Principal Product Manager, both of Oracle

So you’ve bought and implemented Oracle Applications Cloud software. Now you want to get your users excited about using it. Studies show that one of the biggest obstacles to meeting ROI objectives is user acceptance. Based on working directly with thousands of real users, this presentation discusses how Oracle Applications Cloud is designed to get your users excited to try out new software and be productive on a new release ASAP. Users say they want to be productive on a new application without spending hours and hours of training, experiencing death by PowerPoint, or reading lengthy manuals. The session demos the onboarding experience and even shows you how a business user, not a developer, can customize it.

Session ID: CON7972
Date: Thursday, Oct. 2, 2014
Time: 12 p.m. – 12:45 p.m.
Location: Moscone West – 3002

Using Apple iBeacons to Deliver Context-Aware Social Data

Presenters: Anthony Lai, User Experience Architect, Oracle; and Chris Bales, Director, Oracle Social Network Client Development

Apple’s iBeacon technology enables companies to deliver tailored content to customers, based on their location, via mobile applications. It will enable social applications such as Oracle Social Network to provide more relevant information, no matter where you are. Attend this session to see a demonstration of how the Oracle Social Network team has augmented the mobile application with iBeacons to deliver more-context-aware data. You’ll get firsthand insights into the design and development process in this iBeacon demonstration, as well as information about how developers can extend the Oracle Social Network mobile applications.

Session ID: Oracle OpenWorld 2014, CON8918
Date: Thursday, Oct. 2, 2014
Time: 3:15 p.m. – 4 p.m.
Location: Moscone West – 2005

Hope to see you next week.

Our Very Own Wearable

Wed, 2014-09-17 16:33

Noel (@noelportugal) and Raymond have been hard at work building a custom wearable, a.k.a. the secret OpenWorld project. The finished product is ready for a closeup.


The components are:

The Bean is an amazing little board: Arduino-compatible with a Bluetooth Low Energy module, plus an RGB LED and a 3-axis accelerometer.

I can’t tell you what we’re doing with this custom wearable, yet, but it will happen during OpenWorld. If you’ll be at the big show, OpenWorld or JavaOne, you’ll have a chance to see it in action and chat with the guys who built it.

Oh, and Noel will be writing up the details of the build, the story behind it and the journey, as well as all the nerdy bits. Stay tuned for that.

Autonomous Quadcopters Playing Some Catch

Wed, 2014-09-17 16:04

Tony recently went to a talk by Salim Ismail (@salimismail), the Founding Executive Director of Singularity University. He may or may not post his thoughts on the talk, which sounds fascinating, but this video is worth sharing either way, and not just because we have quadcopter fever.

Yeah, that’s autonomous flight. So refer to the list of horrifying things that should not be allowed.

Filler or Curated Content?

Wed, 2014-09-17 15:30

I consider these types of posts to be filler, but I suppose you could look at it as curated content or something highbrow like that. Take your pick.

10 Horrifying Technologies That Should Never Be Allowed

I scanned this post first, thought it would be interesting and left it to read later. Then I read it, and now, I’m terrified. Here’s the list, make sure to hit the link and read all about the sci-fi horrors that aren’t really sci-fi anymore.

  • Weaponized Nanotechnology
  • Conscious Machines
  • Artificial Superintelligence
  • Time Travel
  • Mind Reading Devices
  • Brain Hacking Devices
  • Autonomous Robots Designed to Kill Humans
  • Weaponized Pathogens
  • Virtual Prisons and Punishment
  • Hell Engineering

xkcd on watches

This is exactly how I feel about watches.

This is Phil Fish

I only know who Phil Fish is because I watched Indie Game: The Movie. This short documentary by Ian Danskin is quite good and is newsworthy this week thanks to Markus Persson’s reference to it in his post about why he’s leaving Mojang, the makers of Minecraft, after Microsoft completes its acquisition of the company (h/t Laurie for sharing).

I have often wondered why so many people hate Nickelback, and now I have a much better understanding of why, thanks to Ian. Embedded here for your viewing pleasure.

https://www.youtube.com/watch?v=PmTUW-owa2w

Wearables Should be Stylish

Tue, 2014-09-09 13:18

To no one’s surprise, Apple announced the Apple Watch today.

Very apropos because I just read Sandra Lee’s (@SandraLee0415) post over on Usable Apps about fashionable tech, one of Ultan’s (@ultan) main talking points about wearables.

Ultan, our wearables whisperer, has style and flair; if you’ve ever met him, you know this. His (and Sandra’s) point about wearable tech needing to be stylish is one that Apple has made, again, to precisely no one’s surprise. Appearance matters to people, and smartwatches and other wearables are accessories that should be stylish and functional.

The market has spoken on this. To the point, the Android Wear smartwatch people want is the round Moto 360, which sold out in less than a day earlier this week.

The Apple Watch looks very sleek, and if nothing else, the array of custom bands alone differentiate it from smartwatches like the Samsung Gear Live and the LG G, both of which are also glass rectangles, but with boring rubber wristbands.

I failed to act quickly enough to get a Moto 360 and settled instead on a Gear Live, which is just as well, given I really don’t like wearing watches. We’ve been building for the Pebble for a while now, and since the announcement of Android Wear earlier this year, we’ve been building for it as well, comparing the two watches and their SDKs.


Like Google Glass, the Gear Live will be a demo device, not a piece of personal tech. However, for Anthony, his Android Wear watch has replaced Glass as his smartphone accessory of choice. Stay tuned for the skinny on that one.

I haven’t read much about the Apple Watch yet, but I’m sure there will be coverage aplenty as people get excited for its release early in 2015. Now that Apple’s in the game, wearables are surely even more of a thing than they were yesterday.

And they’re much more stylish.

Find the comments.

On Disney Parks, Data Science, Drones and Wearables

Fri, 2014-09-05 09:16

As the parent of a toddler, I have no choice but to pay attention to Disney and its myriad of products and services.

Case in point, this Summer we took our daughter to Disneyland for the first time, which was a whole thing. Pause to h/t Disneyland expert, Friend of the ‘Lab and colleague Kathy for all her park and travel protips.

Being who I am, I found myself wandering around Disneyland and California Adventure thinking about how many hardcore analytics geeks they must employ to come up with systems like FASTPASS.

For the unfamiliar, FASTPASS is a system that allows you to skip some, if not all, of the line-standing for the most popular attractions in the parks. Although it’s difficult to explain in words, the system is rather simple once you get your first pass.

Being in the park, you can feel all the thought and craft that has gone into the experience. Disney is a $45 billion company, and it’s no surprise their R&D is cutting edge. But what makes it so successful?

Attendees of Disney parks are in a position very similar to that of employees of an enterprise, in that they will gladly opt in to new technologies because the value they receive in return is clear and quantifiable.

To put that into examples: if Google Glass helps me do my job more effectively, I’ll wear it. If I receive discounted benefits for wearing a fitness tracker, I’ll do it.

If a MagicBand allows me to leave my wallet in my room, not worry about losing the room keycards, and use FastPass+, I’ll wear it, even though it will allow Disney World to track my location at a very fine-grained level. Who cares? FastPass+ is worth it, right?

Odd branding note: the official ways to write these two terms are indeed FASTPASS and FastPass+, according to Disney’s website.

If you’re interested in reading more about the MagicBand, what’s inside it and how Disney uses it at Disney World, check out Welcome to Dataland. Imagine all the data science that goes into creating and iterating on these enormous data sets; this is embiggened Big Data when you consider that Disney parks occupied the top eight spots in the 2012 Theme Park Index, comprising well over 100 million visits.

It boggles my mind, although for someone like Bill, it would be Christmas every day.

The post also recounts Walt Disney’s futurist vision, which seems to drive their R&D today. It also encompasses my point nicely:

Rather, because Disney’s theme parks don’t have the same relationship to reality that Google and Costco and the NSA do. They are hybrids of fantasy and reality.

I read Welcome to Dataland only because I’d just been to Disneyland myself. Then came news that Disney had filed several patents concerning the use of drones for its park shows: one for floating pixels, one for flying projection screens, and one for transporting characters, h/t Business Insider.


We’ve been experimenting (ahem, playing) with quadcopters, and it struck me that Ultan (@ultan) had sent me a Disney video about customized wearables. This one:

That was posted in August 2012.

So beyond casual interest as the father of a daughter who loves Disney Princesses, suddenly it’s obvious that I need to watch Disney much more carefully to see how they’re adopting emerging technologies.

Oh and become a willing data point in their data set.

Find the comments.

Behold: The Simplified UI Rapid Development Kit

Wed, 2014-09-03 14:49

Editor’s note: The recent release of the Oracle Applications Cloud Simplified User Interface Rapid Development Kit represents the culmination of a lot of hard work from a lot of people. The kit was built, in large part, by Friend of the ‘Lab, Rafa Belloni (@rafabelloni), and although I tried to get him to write up some firsthand commentary on the ADF-fu he did to build the kit, he politely declined. 

We’re developers here, so I wanted to get that out there before cross-posting (read, copying) the detailed post on the kit from the Usable Apps (@usableapps) blog. I knew I couldn’t do better, so why try? Enjoy.

Simplified UI Rapid Development Kit Sends Oracle Partners Soaring in the Oracle Applications Cloud

A glimpse into the action at the Oracle HCM Cloud Building Simplified UIs workshop with Hitachi Consulting by Georgia Price (@writeprecise)

Building stylish, modern, and simplified UIs just got a whole lot easier. That’s thanks to a new kit developed by the Oracle Applications User Experience (OAUX) team that’s now available for all from the Usable Apps website.

The Oracle Applications Cloud Simplified User Interface Rapid Development Kit is a collection of code samples from the Oracle Platform Technology Solutions (PTS) Code Accelerator Kit, coded page templates and Oracle ADF components, wireframe stencils and examples, coding best practices, and user experience design patterns and guidance. It’s designed to help Oracle partners and developers quickly build—in a matter of hours—simplified UIs for their Oracle Applications Cloud use cases using Oracle ADF page types and components.


A key component of the simplified UI Rapid Development Kit—the Simplified User Experience Design Patterns for the Oracle Applications Cloud Service eBook—in use. Pic: Sasha Boyko, all rights reserved.

The kit was put to the test last week by a group of Hitachi Consulting Services team members at an inaugural workshop on building simplified UIs for the Oracle HCM Cloud that was hosted by the OAUX team in the Oracle headquarters usability labs.

The results: impressive.

During the workshop, a broad range of participants—Hitachi Consulting VPs, senior managers, developers, designers, and architects—learned about the simplified UI design basics of glance, scan, commit and how to identify use cases for their business. Then, they collaboratively designed and built—from wireframe to actual code—three lightweight, tablet-first, intuitive solutions that simplify common, everyday HCM tasks.

Sona Manzo (@sonajmanzo), Hitachi Consulting VP leading the company’s Oracle HCM Cloud practice, said, “This workshop was a fantastic opportunity for our team to come together and use the new Rapid Development Kit’s tools and techniques to build actual solutions that meet specific customer use cases. We were able to take what was conceptual to a whole different level.”


Great leadership. Hitachi Consulting’s Sona Manzo gets the whole team into the spirit of building simplified UIs. Pic: Martin Taylor, all rights reserved.

Workshop organizer and host Ultan O’Broin (@ultan), Director, OAUX, was pleased with the outcome as well: “That a key Oracle HCM Cloud solution partner came away with three wireframed or built simplified UIs and now understands what remains to be done to take that work to completion as a polished, deployed solution is a big win for all.”


OAUX Principal Interaction Designer Anna Budovsky (left) and Ultan O’Broin (right) facilitate Hitachi Consulting team members in working out solutions for customer use cases. Pics: Martin Taylor, all rights reserved.

Equally importantly, said Ultan, is what the OAUX team learned about “what such an Oracle partner needs to do or be able to do next to be successful.”

According to Misha Vaughan (@mishavaughan), Director of the OAUX Communications and Outreach team, folks are lining up to attend other building simplified UI workshops.

“The Oracle Applications Cloud partner community is catching wind of the new simplified UI rapid development kit. I’m delighted by the enthusiasm for the kit. If a partner is designing a cloud UI, they should be building with this kit,” said Misha.

Ultan isn’t surprised by the response. “The workshop and kit respond to a world that’s demanding easy ways to build superior, flexible, and yet simple enterprise user experiences using data in the cloud.”

The Oracle Applications Cloud Simplified User Interface Rapid Development Kit will now be featured at Oracle OpenWorld 2014 OAUX events and in OAUX communications and outreach worldwide.

Context in UX – What It Is, What It Isn’t, and Why It’s Important

Mon, 2014-09-01 19:54
Big Brown Bat (Eptesicus fuscus) in Flight

Copyright © 2012 Bill Kraus. All rights reserved.

Our location is relentlessly tracked by our mobile devices. Our online transactions – both business and social – are recorded and stored in the cloud. And reams of biometric data will soon be collected by wearables. Mining this contextual data offers a significant opportunity to enhance the state of human computer interaction. But this begs the question: what exactly is ‘context’?

Consider the following sentence:

“As Michael was walking, he observed a bat lying on the ground.”

Now take a moment and imagine this scene in your mind.

Got it? Good.

Now a few questions. First, does the nearby image influence your interpretation of this sentence? Suppose I told you that Michael was a biologist hiking through the Amazonian rain forest. Does this additional information confirm your assumptions?

Now, suppose I told you that the image has nothing to do with the sentence, but instead it’s just a photograph I took in my own backyard and inserted into this post because I have a thing for flying mammals.  Furthermore, what if I told you that Michael actually works as a ball boy at Yankee stadium? Do these additional facts alter your interpretation of the sentence? Finally, what if I confessed that I have been lying to you all along, that Michael is actually in Australia, his last name is Clarke, and that he was carrying a ball gauge? Has your idea of what I meant by ‘bat’ changed yet again? (Hint – Michael Clarke is a star cricket player.)

The point here is that contextual information – the who, what, where, and when of a situation – provides critical insights into how we interpret data. In pondering the sentence above, providing you with context – either as additional background statements or through presumed associations with nearby content – significantly altered how you interpreted that simple sentence.

At its essence, context allows us to resolve ambiguities. What do I mean by this? Think of the first name of someone you work with. Chances are good that there are many other people in the world (or at your company, if your company is as big as Oracle) with that same first name. But if I know who you are (and ideally where you are) and what you are working on, and I have similar information about your colleagues, then I can make a reasonably accurate guess as to the identity of the person you are thinking of without you having to explicitly tell me anything other than their first name. Furthermore, if I am wrong, my error is understandable to you, precisely because my selection was the logical choice. Were you thinking of the Madhuri in Mumbai whom you worked with remotely on a project six months ago, while I guessed the Madhuri who has an office down the hall from you in Redwood City and with whom you are currently collaborating? OK, I was wrong, but my error makes sense, doesn’t it? (In intelligent human computer interactions, the machine doesn’t always need to be right as long as any errors are understandable. In fact, Chris Welty of IBM’s Watson team has argued that intelligent machines will do very well to be right 80% of the time – which of course was more than enough to beat human Jeopardy champions.)

So why is the ability to use context to resolve ambiguities important? Because – using our example – I can now take the information derived from context and provide you with a streamlined, personalized user experience that does not require you to explicitly specify the full name of your colleague – in fact, you might not need to enter any name at all if I have enough contextual background about you and what you are trying to do.
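
As a toy illustration of that idea, here is a sketch that scores candidate colleagues against a few contextual signals to pick the most likely “Madhuri”; the signals, weights, and data are invented for illustration, not any real algorithm.

# Invented signals and weights, purely to illustrate context-based disambiguation.
def score_candidate(candidate, context):
    score = 0.0
    if candidate['project'] in context['current_projects']:
        score += 2.0                                  # currently collaborating
    if candidate['office'] == context['user_office']:
        score += 1.0                                  # sits nearby
    months = context['months_since_contact'].get(candidate['id'], 12)
    score += 1.0 / (1 + months)                       # recency of contact
    return score

def resolve(candidates, context):
    # Best guess first; an "understandable" wrong guess is still useful.
    return max(candidates, key=lambda c: score_candidate(c, context))

colleagues = [
    {'id': 'm1', 'first_name': 'Madhuri', 'project': 'Old Project', 'office': 'Mumbai'},
    {'id': 'm2', 'first_name': 'Madhuri', 'project': 'Current Project', 'office': 'Redwood City'},
]
context = {'current_projects': ['Current Project'], 'user_office': 'Redwood City',
           'months_since_contact': {'m1': 6, 'm2': 0}}
print(resolve(colleagues, context)['id'])  # m2, the colleague down the hall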

When it comes to UX, context is actually a two-way street. Traditionally, context has flowed from the machine to the user, where layout and workflow – the consequence of both visual and interaction design – has been used to inform the user as to what something means and what to do next.  But as the availability of data and the complexity of systems have grown to the point of overwhelming the user, visualizations and interactions alone are not sufficient to stem the tide. Rather, context – this time emanating from the user to the machine – is the key for achieving  a more simplified, personalized user experience.

Context allows us to ask the right questions and infer the correct intentions. But the retrieval of the actual answers – or the execution of the desired task – is not part of context per se. For example, using context based on user identity and past history (demographic category, movies watched in the past) can help a recommendation engine provide a more targeted search result. But context is simply used to identify the appropriate user persona – the retrieval of recommendations is done separately. Another way to express this is that context is used to decide which view to put on the data, but it is not the data itself.

Finally, how contextual information is mapped to appropriate system responses can be divided into two (not mutually exclusive) approaches, one empirical, the other deductive. First, access to Big Data allows the use of machine learning and predictive analytics to discern patterns of behavior across many people, mapping those patterns back to individual personas and transaction histories. For example, if you are browsing Amazon.com for a banana slicer and Amazon’s analytics show that people who spend a lot of time on the banana slicer page also tend to buy bread slicers, then you can be sure you will see images of bread slicers.

But while Big Data can certainly be useful, it is not required for context to be effective. This is particularly true in enterprise, where reasonable assumptions can be made from a semantic understanding of the underlying business model, and where information-rich employee data can be mined directly by the company. Are you a salesperson in territory A with customers X, Y, and Z? Well then it is safe to assume that you are interested in the economic climate in A as well as news about X, Y, and Z without you ever having to explicitly say so.
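
A tiny rule-based sketch of that kind of inference, with invented field names, might look like this:

# Invented field names; the point is that simple business rules, not Big Data, drive the inference.
def inferred_interests(sales_rep):
    interests = ['economic climate in territory {}'.format(sales_rep['territory'])]
    interests += ['news about {}'.format(account) for account in sales_rep['accounts']]
    return interests

print(inferred_interests({'territory': 'A', 'accounts': ['X', 'Y', 'Z']}))
# ['economic climate in territory A', 'news about X', 'news about Y', 'news about Z']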

So in closing, the use of context is essential for creating simple yet powerful user experiences – and like the term ‘user experience’ itself, there is no one single implementation of context – rather, it is a concept that should pervade all aspects of human computer interaction in its myriad of forms.

Personal Assistant or Creepy Stalker? The Rise of Cognitive Computing

Wed, 2014-08-20 23:44

I just got back to my hotel room after attending the first of a two-day Cognitive Computing Forum, a conference running in parallel to the Semantic Technology (SemTech) Business Conference and the NoSQL Conference here in San Jose. Although the forum attracts fewer attendees and has only a single track, I cannot remember attending a symposium where so many stimulating ideas and projects were presented.

What is cognitive computing? It refers to computational systems that are modeled on the human brain – either literally by emulating brain structure or figuratively through using reasoning and semantic associations to analyze data. Research into cognitive computing has become increasingly important as organizations and individuals attempt to make sense of the massive amount of data that is now commonplace.

The first forum speaker was Chris Welty, who was an instrumental part of IBM’s Watson project (the computer that beat the top human contestants on the gameshow Jeopardy). Chris gave a great overview of how cognitive computing changes the traditional software development paradigm. Specifically, he argued that rather than focus on perfection, it is ok to be wrong as long as you succeed often enough to be useful (he pointed to search engine results as a good illustration of this principle). Development should focus on incremental improvement – using clearly defined metrics to measure whether new features have real benefit. Another important point he made was that there is no one best solution – rather, often the most productive strategy is to apply several different analytical approaches to the same problem, and then use a machine learning algorithm to mediate between (possibly) conflicting results.

There were also several interesting – although admittedly esoteric – talks by Dave Sullivan of Ersatz Labs (@_DaveSullivan) on deep learning, Subutai Ahmad of Numenta on cortical computing (which attempts to emulate the architecture of the neocortex) and Paul Hofmann (@Paul_Hofmann) of Saffron Technology on associative memory and cognitive distance. Kristian Hammond (@KJ_Hammond) of Narrative Science described technology that can take structured data and use natural language generation (NLG) to automatically create textual narratives, which he argued are often much better than data visualizations and dashboards in promoting understanding and comprehension.

However, the highlight of this first day was the talk entitled ‘Expressive Machines’ by Mark Sagar from the Laboratory for Animate Technologies. After showing some examples of facial-tracking CGI from the movies ‘King Kong’ and ‘Avatar,’ Mark described a framework modeled on human physiology that emulates human emotion and learning. I’ve got to say that even though I have a solid appreciation and understanding of the underlying science and technology, Mark’s BabyX – who is now really more a virtual toddler than an infant – blew me away. It was amazing to see Mark elicit various emotions from BabyX. Check out this video about BabyX from TEDxAuckland 2013.

At the end of the day, the presentations helped crystallize some important lines of thought in my own carbon-based ‘computer’.

First, it is no surprise that human computer interactions are moving towards more natural user interfaces (NUIs), where a combination of artificial intelligence, fueled by semantics and machine learning and coupled with more natural ways of interacting with devices, result in more intuitive experiences.

Second, while the back end analysis is extremely important, what is particularly interesting to me is the human part of the human computer interaction. Specifically, while we often focus on how humans manipulate computers, an equally  interesting question is how computers can be used to ‘manipulate’ humans in order to enhance our comprehension of information by leveraging how our brains are wired. After all, we do not view the world objectively, but through a lens that is the result of idiosyncrasies from our cultural and evolutionary history – a fact exploited by the advertising industry.

For example, our brains are prone to anthropomorphism and will recognize faces even when faces aren’t there. Furthermore, we find symmetrical faces more attractive than unsymmetrical faces. We are also attracted to infantile features – a fact put to good use by Walt Disney animators, who made Mickey Mouse appear more infant-like over the years to increase his popularity (as documented by paleontologist Stephen Jay Gould). In fact, we exhibit a plethora of cognitive biases (ever experience the Baader-Meinhof phenomenon?), including the “uncanny valley,” which describes a rapid drop-off in comfort level as computer agents become almost – but not quite perfectly – human-looking. And as Mark Sagar’s work demonstrates, emotional, non-verbal cues are extremely important. (The most impressive part of Sagar’s demo was not the A.I. – after all, there is a reason why BabyX is a baby and not a fully conversant adult – but rather the emotional response it elicited in the audience.)

The challenge in designing intelligent experiences is to build systems that are informative and predictive but not presumptuous, tending towards the helpful personal assistant rather than the creepy stalker. Getting it right will depend as much on understanding human psychology as it will on implementing the latest machine learning algorithms.

More First World Problems

Mon, 2014-08-18 18:36

I’ve been traveling a lot lately, which is bad. I’ve been consuming a lot of in-flight wifi, which is good, because there really should be no place on Earth where I’m unable to work.

Plus, it’s internets at 35,000 feet. How cool is that?

Today, I found myself in the throes of a decidedly first world problem. Of the many devices I carry, I couldn’t decide which one to use for the airplane wifi, which is, naturally, charged per-device.

Normally, I’d go with the tablet, since it’s a nice mix of form factors. The laptop is my preference, but I end up doing in-seat yoga to use it, not a good look.

But, horror of horrors, the tablet’s battery was only 21%. Being an Android tablet, that wouldn’t be enough to make it to my destination. I do carry a portable battery, but it won’t charge the Nexus 7 tablet, for some odd reason.

Recursive, first world problems.

I debated smartphone vs. laptop for a minute or two before I realized what an awful, self-replicating, first world problem this was. So, I made a call and immediately did what anyone would do, tweeted about it.


What has become of me.

Quadcopters and the Internet of Things

Sun, 2014-08-17 14:44

Low-tech attachment of an 808 keychain camera to the underside of a Syma X1 quadcopter.

Editor’s note: Hey, a new author! Here’s the first post, of many I hope, from Bill Kraus, who joined us back in February. Enjoy.

One of the best aspects of working in the emerging technologies team here in Oracle’s UX Apps group is that we have the opportunity to ‘play’ with new technology. This isn’t just idle dawdling, but rather play with a purpose – a hands-on exercise exploring new technologies and brainstorming on how such technologies can be incorporated into future enterprise user experiences.

Some of this technology, such as beacons and wearables, has obvious applications. The relevance of other technologies, such as quadcopters and drones, is less obvious (notwithstanding their possible use as a package delivery mechanism for an unnamed online retail behemoth).


Video still taken from the quadcopter hundreds of feet above my home on Bainbridge Island, looking north to the Puget Sound and Point Monroe.

As an amateur wildlife and nature photographer, I’ve dabbled in everything from digiscoping to infrared imaging to light painting to underwater photography. I’ve also played with strapping lightweight keychain cameras to inexpensive quadcopters (yes, I know I could get a DJI Phantom and a GoPro, but at the moment I prefer to test my piloting skills on something that won’t make me shed tears – and incur the wrath of my spouse – if it crashes).

After recently telling my colleagues over lunch about my quadcopter adventures (I have already lost several in the trees and waters of the Puget Sound), Tony, Luis, and Osvaldo decided to purchase their own, and we had a blast at our impromptu ‘flight school’ at Oracle. The guys did great, and Osvaldo’s copter even had a tête-à-tête with a hummingbird, which seemed a bit confused over just what was hovering before it.


Luis flying his quadcopter in the hallway.


Osvaldo flying his quadcopter.

This is all loads of fun, but what do flying quadcopters have to do with the Internet of Things? Well, just as a quadcopter allows a photographer to get a perspective previously thought impossible, mobile technology combined with embedded sensors and the cloud has allowed us to break the bonds of the desktop and view data in new ways. No longer do we interact with digital information at a single point in time and space; rather, we are now enveloped by it every waking (and non-waking) moment – and we have the ability to view this data from many different perspectives. How this massive flow of incoming data is converted into useful information will depend in large part on context (you knew I’d get that word in here somehow) – analogous to how the same subject can appear dramatically different depending on the photographer’s (quadcopter-assisted) point of view.

In fact, the Internet of Things is as much about space as it is about things – about sensing, interacting with and controlling the environment around us using technology to extend what we can sense and manipulate. Quadcopters are simply a manifestation of this idea – oh, and they are also really fun to fly.

The Secret Project Emerges

Fri, 2014-08-15 07:56

Noel (@noelportugal) and Raymond have been working on a secret project. Here’s the latest:

Thanks to AUX colleague and Friend of the ‘Lab, Rob Hernandez, for the 3D modeling.

So now you know why Noel bought the slap bands, but what goes in the case?


If you’ve been watching, you might know already.


LightBlue Beans from Punch Through Design

Those are LightBlue Beans from Punch Through Design (@punchthrough), h/t @colin_k.

Stay tuned.

Injecting JavaScript into Simplified UI

Thu, 2014-08-14 11:04

Extensibility is one of the themes we here in Oracle Applications User Experience (@usableapps) advocate, along with simplicity and mobility.

Simplified UI provides a ton of extensible features, from themes, colors and icons to interface and content changes made by Page Composer.

But sometimes you need to inject some JavaScript into Simplified UI, and you just can’t figure out how, like last week for example. Tony and Osvaldo are building one of Noel’s (@noelportugal) crazy ideas, and they needed to do just that. The project? Yeah, it’s a secret for now, but stay tuned.

Anyway, they had been trying for a couple of days, unsuccessfully, to find a way to inject some JS, until I finally decided to ask AUX colleague and extensibility guru Tim DuBois. As I hoped, Tim had a method, a sneaky roundabout one, but one that sounded promising.

Tim couldn’t recall the source of the method; it might have come from Angelo Santagata (@AngeloSantagata) or possibly from a Cloud partner, but as you’ll see, it’s ingenious.

Whoever discovered this method was clever and tenacious and should get kudos. It’s a nice, easy way to get JS into a Simplified UI page without changing the shell.

Here we go.

From the Simplified UI springboard, Sales Cloud in this example, navigate to a page like Leads and expand the menu next to your username.


At this point, you should create a sandbox to keep your changes isolated, just in case. For more about how and why you want to use sandboxes, check out the documentation.

I didn’t create one in this instance because I’m that confident it works. However, we did use a sandbox when we were testing this.

So, from the expanded menu choose Customize User Interface and pick Site as the target layer.


Click Select from the edit options and choose a component on the page, like a label, in this case “Leads.”


For this exercise, the component you choose doesn’t really matter because we’re just making a placeholder change. All you need is one with an Edit Component option.


Choose Edit Component and modify the value. In this case, we’ll change the text by choosing Select Text Resource from the Value menu and then picking a random key value and entering new label text to display.



Make sure to click Create before leaving this dialog. Upon returning to Page Composer, you’ll see the Leads label has changed. Exit Page Composer.


Once again, expand the menu by your username and choose Manage Customizations.

From the All Layers column, download the XML file.


Edit the XML file and include your JavaScript.


For the record, we found the correct syntax in this forum post. The code should be similar to:

<mds:insert after="outputText1" parent="g1">
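  <!-- inserts an inline JavaScript resource after the outputText1 component inside container g1 -->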
  <af:resource xmlns:af="http://xmlns.oracle.com/adf/faces/rich" 
  type="javascript">alert("HELLO WORLD!");</af:resource>
</mds:insert> 

Finally, upload your updated XML using the same Manage Customizations dialog, close and reload the page.


And there you go.

Find the comments if you like.


OTN Latin America Tour 2014 – Mexico

Wed, 2014-08-13 12:19


OTN is designed to help Oracle users with community-generated resources. Every year the OTN team organizes worldwide tours that allow local users to learn from subject matter experts in all things Oracle. For the past few years, the UX team has been participating in the OTN Latin America Tour, as well as tours in other regions. This year I was happy to accept their invitation to deliver the opening keynote for the Mexico City tour stop.

The keynote title was “Wearables in the Enterprise: From Internet of Things to Google Glass and Smart Watches.” Given the AppsLab charter and reputation for cutting-edge technologies and innovation, it was really easy to put together a presentation deck on our team’s findings on these topics. The presentation was a combination of the keynote given by our VP, Jeremy Ashley, during MakerCon 2014 at Oracle HQ this past May and our proofs of concept using wearable technologies.


I also had a joint session with my fellow UX team member Rafael Belloni titled “Designing Tablet UIs Using ADF.” Here we had the chance to share how users can leverage two great resources freely available from our team:

  1. Simplified User Experience Design Patterns for the Oracle Applications Cloud Service (register to download the e-book here)
  2. A starter kit with templates used to build Simplified UI interfaces (download the kit here)
    *Look for “Rich UI with Data Visualization Components and JWT UserToken validation extending Oracle Sales Cloud – 1.0.1”

These two resources are the result of extensive research done by our whole UX organization, and we are happy to share them with the Oracle community. Overall it was a great opportunity to reach out to the Latin American community, especially my fellow Mexican friends.

Here are some pictures of the event and of Mexico City. Enjoy!

 

Photo credits to Pablo Ciccarello, Plinio Arbizu, and me.

Oracle Voice Debuts on the App Store

Mon, 2014-08-11 16:05

Editor’s note: I meant to blog about this today, but looks like my colleagues over at VoX have beat me to it. So, rather than try to do a better job, read do any work at all, I’ll just repost it. Free content w00t!

Although I no longer carry an iOS device, I’ve seen Voice demoed many times in the past. Projects like Voice and Simplified UI are what drew me to Applications User Experience, and it’s great to see them leak out into the World.

Enjoy.

Oracle Extends Investment in Cloud User Experiences with Oracle Voice for Sales Cloud
By Vinay Dwivedi and Anna Wichansky, Oracle Applications User Experience

Oracle Voice for the Oracle Sales Cloud, officially called “Fusion Voice Cloud Service for the Oracle Sales Cloud,” is available now on the Apple App Store. This first release is intended for Oracle customers using the Oracle Sales Cloud, and is specifically designed for sales reps.


The home screen of Fusion Voice Cloud Service for the Oracle Sales Cloud is designed for sales reps.

Unless people record new information they learn (e.g., write it down or repeat it aloud), they forget a high proportion of it in the first 20 minutes. The Oracle Applications User Experience team has learned through its research that when sales reps leave a customer meeting with insights that can move a deal forward, it’s critical to capture important details before they are forgotten. We designed Oracle Voice so that the app allows sales reps to quickly enter notes and activities on their smartphones right after meetings, no matter where they are.

Instead of relying on slow typing on a mobile device, sales reps can enter information three times faster (pdf) by speaking to the Oracle Sales Cloud through Voice. Voice takes a user through a dialog similar to a natural spoken conversation to accomplish this goal. Since key details are captured precisely and follow-ups are quicker, deals are closed faster and more efficiently.

Oracle Voice is also multi-modal, so sales reps can switch to touch-and-type interactions for situations where speech interaction is less than ideal.

Oracle sales reps tried it first, to see if we were getting it right.

We recruited a large group of sales reps in the Oracle North America organization to test an early version of Oracle Voice in 2012. All had iPhones and spoke American English; their predominant activity was field sales calls to customers. Users had minimal orientation to Oracle Voice and no training. We were able to observe their online conversion and usage patterns through automated testing and analytics at Oracle, through phone interviews, and through speech usage logs from Nuance, which is partnering with Oracle on Oracle Voice.

Users were interviewed after one week in the trial; over 80% said the product exceeded their expectations. Members of the Oracle User Experience team working on this project gained valuable insights into how and where sales reps were using Oracle Voice, which we used as requirements for features and functions.

For example, we learned that Oracle Voice needed to recognize product- and industry-specific vocabulary, such as “Exadata” and “Exalytics,” and we requested a vocabulary enhancement tool from Nuance that has significantly improved the speech recognition accuracy. We also learned that connectivity needed to persist as users traveled between public and private networks, and that users needed easy volume control and alternatives to speech in public environments.

We’ve held subsequent trials, with more features and functions enabled, to support the 10 workflows in the product today. Many sales reps in the trials have said they are anxious to get the full version and start using it every day.

“I was surprised to find that it can understand names like PNC and Alcoa,” said Marco Silva, Regional Manager, Oracle Infrastructure Sales, after participating in the September 2012 trial.

“It understands me better than Siri does,” said Andrew Dunleavy, Sales Representative, Oracle Fusion Middleware, who also participated in the same trial.

This demo shows Oracle Voice in action.

What can a sales rep do with Oracle Voice?

Oracle Voice allows sales reps to efficiently retrieve and capture sales information before and after meetings. With Oracle Voice, sales reps can:

Prepare for meetings

  • View relevant notes to see what happened during previous meetings.
  • See important activities by viewing previous tasks and appointments.
  • Brush up on opportunities and check on revenue, close date and sales stage.

Wrap up meetings

  • Capture notes and activities quickly so they don’t forget any key details.
  • Create contacts easily so they can remember the important new people they meet.
  • Update opportunities so they can make progress.

These screenshots show how to create tasks and appointments using Oracle Voice.

Our research showed that sales reps entered more sales information into the CRM system when they enjoyed using Oracle Voice, which makes Oracle Voice even more useful because more information is available to access when the same sales reps are on the go. With increased usage, the entire sales organization benefits from access to more current sales data, improved visibility on sales activities, and better sales decisions. Customers benefit too — from the faster response time sales reps can provide.

Oracle’s ongoing investment in User Experience

Oracle gets the idea that cloud applications must be easy to use. The Oracle Applications User Experience team has developed an approach to user experience that focuses on simplicity, mobility, and extensibility, and these themes drive our investment strategy. The result is key products that refine particular user experiences, like we’ve delivered with Oracle Voice.

Oracle Voice is one of the most recent products to embrace our developer design philosophy for the cloud of “Glance, Scan, & Commit.” Oracle Voice allows sales reps to complete many tasks at what we call glance and scan levels, which means keeping interactions lightweight, or small and quick.

Are you an Oracle Sales Cloud customer?

Oracle Voice is available now on the Apple App Store for Oracle customers using the Oracle Sales Cloud. It’s the smarter sales automation solution that helps you sell more, know more, and grow more.

Will you be at Oracle OpenWorld 2014? So will we! Stay tuned to the VoX blog for when and where you can find us. And don’t forget to drop by and check out Oracle Voice at the Smartphone and Nuance demo stations located at the CX@Sales Central demo area on the second floor of Moscone West.