Oracle AppsLab

The Emerging Technologies team of Oracle Applications User Experience

Real Time Ambient Display at OpenWorld: The Software (for the Hardware)

Thu, 2016-10-20 03:45

This is part 2 of my blog post series about the Ambient Visualization project (part 1 covered the hardware). Also, please read John’s post for details about the creation of the actual visualization, from concept to build. In the first part, I focused on the hardware, a sonar sensor connected to a NodeMCU. In this second part, the focal point will be the software.

When I started working with ESPs a few years ago, I was all gaga about the fact that you could use Lua to program these chips. However, over the last year, I have revised my thinking as I ran into stability issues with Lua. I now exclusively code in C/C++ for the ESPs using the Arduino library for the ESP8266. This has led to much more stable firmware and, with the advent of the likes of PlatformIO, a much better development experience (and I’m all for better DX!).

As I was not going to be present at the Exchange myself to help with the setup of the devices, I had to make them as easy as possible to set up and use. I could not assume that the person setting up the NodeMCUs had any knowledge about the NodeMCU, sonars, C++, etc. Ideally, they could just place one in a suitable location, switch on the NodeMCU and that would be it! There were a few challenges I had to overcome to get to this ideal scenario.

First, the sonars needed to be “calibrated.” The sonar just measures the time it takes for a “ping” to come back as it bounces off an object … any object. If I place the sonar on one side of the room and point it to the opposite wall, it will tell me how long it takes (in µs) for a “ping” to come back as it bounces off that wall. (You can then use the speed of sound to calculate how far away that wall is.) However, I want to know when somebody walks by the sensor, i.e. when the ping that comes back is not from the opposite wall but from something in between the wall and the sensor. In order to be able to do this, I have to know how far away the wall is (or whatever fixed object the sonar is pointed at when it is placed down). Since I didn’t know where these sensors were going to be placed, I did not know in advance where these walls would be, so this could not be coded upfront; it had to be done on-site. And since I could not rely on anybody being able to just update the code on the fly, as mentioned earlier, the solution was to have the sonars “self-calibrate.”

As soon as you turn on the NodeMCU, it goes into “calibration mode.” For the first few seconds it takes a few hundred samples, under the assumption that whatever it “sees” initially is the wall opposite the device. It then stores this information for as long as the NodeMCU is powered on. After this, any ping that is close to the wall is assumed to be coming from the wall, and discarded. Whenever a ping is received off an object that is closer to the sonar than the wall, we assume that this is a person walking by the sensor (between the wall and the sensor) and we flag this. If you want to put the NodeMCU in a different location (presumably with the opposing wall at a different distance from it), you just switch it off, move it, and switch it back on. The calibration will make sure it works anywhere you place it. For the people setting up the sonars, this meant that all they’d have to do was place the sensors, switch them on and make sure that in the first 1-2 seconds nothing was in between the sensor and the opposite side (and if there was something in between by accident, they could just “reset” the NodeMCU, which would recalibrate it). This turned out to work great: some sensors had a small gap (~2 meters), others had a much larger gap (5+ meters), and all worked just fine using the same code.
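The firmware itself isn’t reproduced here, but the self-calibration logic boils down to something like the following Arduino-style sketch. The sample count, tolerance and readSonarMicros() helper are illustrative assumptions, not the actual code:

```
// Illustrative self-calibration sketch, not the actual firmware.
const int CALIBRATION_SAMPLES = 300;   // "a few hundred samples" at boot
const float TOLERANCE = 0.9;           // pings under 90% of the wall distance count as a walk-by

unsigned long wallEchoMicros = 0;      // baseline echo time for the opposite wall

unsigned long readSonarMicros() {
  return 0;                            // placeholder for the actual sensor read (see part 1)
}

void setup() {
  // Calibration mode: assume whatever we "see" in the first seconds is the opposite wall.
  unsigned long total = 0;
  for (int i = 0; i < CALIBRATION_SAMPLES; i++) {
    total += readSonarMicros();
    delay(10);
  }
  wallEchoMicros = total / CALIBRATION_SAMPLES;
}

void loop() {
  unsigned long echo = readSonarMicros();
  // Pings close to the wall are discarded; anything clearly closer is flagged as a person.
  if (echo > 0 && echo < wallEchoMicros * TOLERANCE) {
    // somebody walked between the sensor and the wall -> flag/publish the event
  }
  delay(50);
}
```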

Second, the NodeMCU needs to be configured to connect to a WiFi network. Typically this is hard-coded in the firmware, but again, this was not an option as I didn’t know what the WiFi SSID or password would be. And even if I did, conference WiFi is notoriously bad (the Achilles’ heel of all IoT), so there was a distinct possibility that we would have to switch WiFi networks on-site to a better alternative (e.g. a local hotspot). And as with the calibration, I could not rely on anybody being able to fix this in the code, on-site. Also, unlike the calibration, connecting to WiFi requires human interaction; somebody has to enter the password. The solution I implemented was for the NodeMCU to come with its own configuration web application. Let me explain…

The NodeMCU is powerful enough to run its own web server, serving HTML, CSS and/or JS. The NodeMCU can also be an Access Point (AP), so you can connect to it like you connect to your router. It exposes an SSID, and when you connect your device to this network, you can request HTML pages and the NodeMCU web server will serve them to you. Note that this does not require any WiFi network to be present; the NodeMCU basically “brings its own” WiFi that you connect to.

NodeMCU Access Point (called ESP8266-16321847)
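To give a rough idea of what that looks like in code, here is a minimal sketch of the idea (not the actual firmware; the SSID and page content are made up):

```
#include <ESP8266WiFi.h>
#include <ESP8266WebServer.h>

// Minimal access point + web server sketch (illustrative only, not the project firmware).
ESP8266WebServer server(80);

void handleRoot() {
  // In the real firmware the pages live in SPIFFS; a hard-coded page keeps the sketch short.
  server.send(200, "text/html", "<h1>NodeMCU configuration</h1>");
}

void setup() {
  WiFi.mode(WIFI_AP);
  WiFi.softAP("ESP8266-config");   // the NodeMCU "brings its own" WiFi network (SSID is made up)
  server.on("/", handleRoot);      // serve a page at http://192.168.4.1/
  server.begin();
}

void loop() {
  server.handleClient();
}
```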

So I created a web server on the NodeMCU and built a few HTML pages, which I stored on the NodeMCU (in SPIFFS). Whenever you connect to a NodeMCU running this firmware and point your browser to 192.168.4.1, it will serve up those pages, which allow you to configure that very same NodeMCU. The main page allows you to set the WiFi SSID and password (you can also configure the MQTT setup). This information then gets stored on the NodeMCU in flash (EEPROM) so it is persistent; even if you switch off the NodeMCU it will “remember” the WiFi credentials.

NodeMCU Config Screen
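Persisting those settings on the ESP8266 looks roughly like the sketch below (the ESP8266 Arduino core emulates EEPROM in a flash sector; the struct layout and sizes here are illustrative assumptions, not the actual firmware):

```
#include <EEPROM.h>

// Illustrative persistence of WiFi/MQTT settings in the ESP8266's emulated EEPROM.
struct Settings {
  char ssid[32];
  char password[64];
  char mqttHost[64];
};

void saveSettings(const Settings &s) {
  EEPROM.begin(sizeof(Settings));   // reserve the emulated EEPROM area
  EEPROM.put(0, s);                 // write the whole struct at offset 0
  EEPROM.commit();                  // flush to flash so it survives power cycles
  EEPROM.end();
}

Settings loadSettings() {
  Settings s;
  EEPROM.begin(sizeof(Settings));
  EEPROM.get(0, s);                 // read the struct back at boot
  EEPROM.end();
  return s;
}
```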

This makes it very easy for novice users on-site to configure the NodeMCU to connect to any WiFi that is available.  As soon as you restart the NodeMCU it will attempt to connect to the WiFi as configured, which brings me to the final UX challenge.

Since the NodeMCU does not have a screen, how do users know if it is even working? It needs to calibrate itself, it needs to connect to WiFi and to MQTT; how do I convey this information to the users? Luckily the NodeMCU has a few onboard LEDs, which I decided to use for that purpose. To show the user that the NodeMCU is calibrating the sonar, it flashes the red LED (remember, this happens at every boot). As soon as the sonar is successfully calibrated, the red LED stays on. If for whatever reason the calibration failed – this can happen if the wall is too far away (6+ meters), doesn’t reflect any sound (e.g. stealth bombers) or no sonar is attached to the NodeMCU – the red LED switches off. A similar sequence happens when the NodeMCU is trying to connect to the WiFi. As it tries, it blinks the blue onboard LED. If it connects successfully to the WiFi, the blue LED stays on; if it fails, however, the board automatically switches to AP mode, assuming you want to (re)configure the board to connect to a different WiFi, and the blue LED still stays on (indicating you can connect to the NodeMCU AP) but very faintly. With these simple interactions, I can let the user know exactly what is happening and whether the device is ready to go (both blue and red LEDs are on) or not (one or both of the LEDs are off).
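The connection-plus-LED logic amounts to something like this (pins, timeout and PWM level are assumptions; the onboard LEDs on many NodeMCU boards are active-low, so the polarity may need flipping):

```
#include <ESP8266WiFi.h>

// Illustrative WiFi-connect sequence with LED feedback and AP fallback.
const int BLUE_LED = 2;   // assumed onboard LED pin; may be active-low on your board

void connectWiFi(const char *ssid, const char *password) {
  pinMode(BLUE_LED, OUTPUT);
  WiFi.mode(WIFI_STA);
  WiFi.begin(ssid, password);

  unsigned long start = millis();
  while (WiFi.status() != WL_CONNECTED && millis() - start < 15000) {
    digitalWrite(BLUE_LED, !digitalRead(BLUE_LED));   // blink while trying
    delay(250);
  }

  if (WiFi.status() == WL_CONNECTED) {
    digitalWrite(BLUE_LED, HIGH);   // solid: connected
  } else {
    WiFi.mode(WIFI_AP);             // fall back to AP mode for reconfiguration
    WiFi.softAP("ESP8266-config");
    analogWrite(BLUE_LED, 32);      // faint glow: waiting to be configured
  }
}
```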

This setup worked remarkably well, and I did not get a single question during the Exchange on how these things work or need to be set up. All that needed to be done was set them down, boot them up, and make sure all the lights were on. If they were not, try again (reboot) or reconfigure.

The actual capturing of data was pretty easy as well: the NodeMCU would send a signal to our MQTT broker every time it detected a person walking by. The MQTT broker then broadcast this to its subscribers, one of which was a small (node.js) server that I wrote, which would forward the message to APEX using a REST API made available by Noel. He would then store this information where it could be accessed by John (using another REST API) for his visualization.
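On the NodeMCU side, publishing a detection over MQTT only takes a few lines with a library like PubSubClient. This is a sketch of the idea with an assumed broker address and topic name, not the actual code:

```
#include <ESP8266WiFi.h>
#include <PubSubClient.h>

// Illustrative MQTT publish on detection; broker, topic and payload are assumptions.
WiFiClient wifiClient;
PubSubClient mqtt(wifiClient);

void setupMqtt() {
  mqtt.setServer("broker.example.com", 1883);
}

void publishWalkBy(const char *sensorId) {
  if (!mqtt.connected()) {
    mqtt.connect(sensorId);                          // (re)connect if the connection dropped
  }
  mqtt.publish("oow16/ambient/walkby", sensorId);    // one message per detected walk-by
  mqtt.loop();                                       // let the client process network traffic
}
```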

Cheers,

Mark.

 


Real Time Ambient Display at OpenWorld: The Hardware

Wed, 2016-10-19 03:44

As John mentioned in his post, one of the projects I worked on for OOW16 was the devices that provide the data to his Ambient Display. Unlike previous years, where we recorded attendance and then produced a report a few days or weeks after OOW, Jake proposed that we somehow visualize the data in real time and show it to the attendees as they were producing the data themselves.

In order to produce the data, we wanted to strategically place “sensors” in the OAUX Exchange tent that could sense when somebody walks by them. Whenever this happened, the device should send a signal to John so that he could consume it and show it on his visualization.

I considered several designs, and my first thought was to build a system using a laser diode on one side and a photo-resistor as a receiver on the other side: when somebody “breaks the beam,” I would know somebody walked by, basically a laser tripwire like you find in many other applications. Unfortunately, photo-resistors are fairly small; the largest affordable model I could find was half the size of my pinkie’s fingernail, which meant that the area for the laser to hit was really small, especially as the distance increases. To add to this, we couldn’t attach the sensors to walls (i.e. an immovable object) because the OAUX Exchange is held in a tent. The best we could hope to attach our sensors to was a tent pole or a table leg. Any movement in those would misalign the laser or the sensor and would get registered as a “walk by.” So I quickly abandoned the idea of lasers (I’ll keep that one in the bag for when we finally get those sharks).

Noel suggested using an ultrasonic sensor instead. These work just like a sonar: they send out inaudible “pings” of sound and then listen for the sound to come back when it bounces off an object. With some simple math you can then work out how far away that object is from the sonar sensor. I tested a few sonar sensors, but I finally settled on the LV-MaxSonar-EZ1, which had the right combination of sensitivity at the distances we needed (2+ meters) and ease of use.

Next I had to figure out what to attach the sensor to, i.e. what was going to be my “Edge” device. Initially I tested with a Raspberry Pi, because we always have a few of those around the office; however, this turned out to have several disadvantages. For one, the LV-MaxSonar-EZ1 is an analog ultrasonic sensor. Since the RPi does not support analog input, I had to use an ADC chip to convert the signal from analog to digital. Although this gave me very accurate readings, it complicated the build. Also, we weren’t guaranteed power at each station, so the end solution would have to be able to run on battery power all day long, something that is hard with an RPi.

Next I used an Arduino (Uno) as my Edge device. Since it has analog inputs, it was much easier to build, but the problem is that it needs an additional WiFi Shield to be able to connect to the internet (remember, I needed to get the sensor data to John somehow), which is pretty pricey; combined, we are now talking $100+. I wanted a cheaper solution.

As is customary now with me when I work on IoT solutions, I turned to the ESP8266/NodeMCU. It’s cheap (< $10), has lots of GPIOs (~10) and has WiFi built in. Also, we had a few lying around :-):

NodeMCUs

I hooked up the sonar to the NodeMCU (using PWM on a digital GPIO) and within a few minutes I had accurate readings and was sending the data to the backend over the internet: IoT FTW! Furthermore, it’s pretty easy to run a NodeMCU off battery power for a whole day (as it turned out, they all ran the whole three days of the Exchange on a single charge, with plenty of battery power to spare!). It was really a no-brainer, so I settled on the NodeMCU with the LV-MaxSonar-EZ1 attached to it, all powered by a ~6000mAh battery:

NodeMCU with Sonar

First iteration for initial testing.

Three of the ultrasonic sensors we used to detect movement in the tent
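Reading the sensor boils down to measuring the pulse width on the sonar’s PW output. Here is a minimal sketch of that (the GPIO pin is an assumption; the ~147 µs-per-inch scale factor is the figure from the LV-MaxSonar datasheet):

```
// Minimal read of the LV-MaxSonar-EZ1 PW output on a NodeMCU digital pin (illustrative).
const int SONAR_PW_PIN = D1;   // assumed wiring: sonar PW pin -> NodeMCU D1

void setup() {
  Serial.begin(115200);
  pinMode(SONAR_PW_PIN, INPUT);
}

void loop() {
  // The sensor encodes range as pulse width, roughly 147 us per inch per the datasheet.
  unsigned long pulseMicros = pulseIn(SONAR_PW_PIN, HIGH, 50000);   // 50 ms timeout
  if (pulseMicros > 0) {
    float inches = pulseMicros / 147.0;
    Serial.println(inches * 2.54);   // print the distance in centimeters
  }
  delay(100);
}
```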

Once I settled on the hardware, it was on to the software, which I will explain in detail in a second post.

Cheers,

Mark.

My First Oracle OpenWorld

Tue, 2016-10-18 13:44

This year I had the great opportunity to attend Oracle OpenWorld 2016 and JavaOne 2016 in person. Ever since I was a student, I had heard how fantastic and big this conference is, but you cannot realize it until you are in it.

All of San Francisco is taken over by a bunch of personalities from companies around the world, and it’s a great space to talk about Oracle, show off our projects and, of course, our vision as a team and organization.


At this conference you can see a big contrast between attendee profiles. If you walk near Moscone Center, you will probably see attendees wearing suits and ties and talking about business all the time. In contrast, if you walk a couple of blocks toward downtown, you will see a more casual dress code (shirts and jeans), meaning that you are entering the developer zone.


Either way, the whole city is all about Oracle. A couple of main streets are even closed to set up a lounge area, booths and entertainment. You can see posters hanging and glued up around the entire city. It’s awesome.

The conference is divided in two, OpenWorld and JavaOne. So, as I said, this conference covers a lot of interesting areas of technology.


I attended this year to polish our demos before the conference and to help the Oracle Technology Network (@oracleotn) with our IoT Workshops. The workshop ran at both the OpenWorld and JavaOne conferences; I helped at JavaOne.

The idea behind the IoT Workshop was to introduce technical and non-technical people to the IoT world: show them how easy it is to start, and teach them the very basic tools, hardware and, of course, code to connect things to the internet.

From the beginning, we were skeptical about the results. This was the first time we ran this workshop at a big conference three days in a row. Our schedule was five sessions per day, one hour each. The start was slow, but we got a lot of traction on the following days. The response from attendees was awesome. During the last two days, pretty much all sessions were packed. At some point we had a long waitlist, and everybody wanted to get the IoT Starter Kit.

Speaking of the Starter Kit, we were giving away the kit to all attendees at the end of the session. The kit includes one NodeMCU with an ESP8266 WiFi microcontroller, a push button, a buzzer, a resistor, an LED and some cables to wire the components. Attendees could take the workshop in two ways: from scratch, meaning that they had to use their own computer, install all required tools and libraries, compile the Arduino code, wire the components and flash the NodeMCU; or the expedited way, meaning that we gave them a pre-flashed microcontroller and they just wired the components.

It was very surprising that many attendees decided to take the long path, which showed us that they were very interested in learning and potentially keeping up work on their own projects. During part of the session, we spent some minutes talking about how OAUX is using IoT to see how it will affect user experience and to propose projects that can help Oracle users and partners in their daily lives.


At JavaOne specifically, we had many conversations about how attendees could potentially find a niche for IoT in their companies, and they came up with pretty cool ideas. It was fun and interesting to have direct contact with both technical and non-technical people.

Java is one of my preferred programming languages so far, yet I had never had the chance to attend a conference about it. This time was awesome: I had the chance to present and at the same time be an attendee.


The rest of the team was working at the OAUX Exchange. We presented all our demos, and I didn’t miss the opportunity to see how excited people got about them.

And to close with a flourish, some OOW attendees were invited to visit our Gadget Lab, where we showed more about our vision and the new integrations with gadgets we have gotten lately.


Overall, OOW is the result of our teamwork and collaboration during the year. It’s where we see all our work reflected in smiles, wows and people’s enthusiasm. It’s a feeling that cannot be described.

Now we are rolling again, getting ready for the next OOW. So stay tuned for what we are cooking up to surprise you.

My Life as a (Telepresence) Robot

Mon, 2016-10-03 15:24

Left: Double 2. Right: Beam

We have been quietly observing and evaluating our options before we finally decided to get a telepresence robot. Telepresence technology dates back to 1993 (Human Productivity Lab) and telepresence robots are not completely new.

There is a growing array of telepresence robot options (see comparison), and the list is bound to get cheaper and better. Before we settled on getting the Double Robotics robot, we tested the Suitable Technologies Beam. The Beam robot is a pretty solid solution, but it lacked one of our primary requirements: an SDK. We wanted a platform that we could “hack” to explore different scenarios. So we got the Double 2 robot, which does have an SDK, and promptly gave it a name: Elliot, after the main character in Mr. Robot.

As far as usability goes, driving around is not difficult at all. The Double 2 does lack a wide-angle camera or foot camera, since it uses the camera from the iPad. (Edit: It was pointed out to me that the Double 2 standard set includes an attachable 150-degree wide-angle camera and an always-on downward-facing camera. We just didn’t buy the standard set.) But driving the Double 2 feels really smooth, so moving around to look and moving side to side is not a problem. The iPad housing has a mirror pointing toward the bottom, so you can switch to the back camera and see what’s below. There is an Audio Kit with an external mic and speaker that helps you hear and be heard better. Overall the experience is good as long as you have good internet connectivity.

I have been virtually attending some of our Cloud Lab tours, and the reaction is always positive. I also attended a couple of meetings and felt a bit more integrated. Maybe that would wear off with time, but that is one of the reasons we have it: to research the human aspect of these devices.

I am eagerly working on making Elliot a little smarter. Thanks to the SDK I can automate movement, but sadly the Double 2 doesn’t have any external sensors. So we are working on retrofitting some sonar sensors, similar to the ones we used for this project, to give Elliot a little more independence. So stay tuned to see more coolness coming from Elliot.

Telepresence Robot in The Big Bang Theory (Sheldon)


Our Real Time Ambient Display at OpenWorld

Fri, 2016-09-30 16:16

One month before we entered the OAUX Exchange tent at OpenWorld, Jake (@jkuramot) challenged us to come up with a visualization “that would ambiently show data about the people in the space.”

A view of the OAUX Exchange Tent at OpenWorld 2016

Mark (@mvilrokx), Noel (@noelportugal) and I accepted the challenge. Mark put together the Internet of Things ultrasonic sensors, Noel created a cloud database to house the data, and it fell to me to design and create the ambient display.

An ambient display is the opposite of a dashboard. A dashboard displays an array of data in a comprehensive and efficient way so that you can take appropriate actions. Like the dashboard of a car or airplane, it is designed to be closely and continuously monitored.

Ambient displays, in contrast, are designed to sit in the background and become part of the woodwork, only drawing your attention when something unusual happens. They are simple instead of complex, unified instead of diverse, meant for glancing, not for scanning.

This project was not only a chance to design an ambient display, but also a chance to work with master makers like Mark and Noel, get my feet wet in the Internet of Things, and visualize data in real time. I’ve also long wanted to make an art installation, which this sort of is: an attractive and intriguing display for an audience with all the risks of not really knowing what will happen till after the curtain goes up.

My basic concept was to represent the sensors as colored lines positioned on a simplified floor plan and send out ripples of intersecting color whenever someone “broke the beam.” Thao (@thaobnguyen) suggested that it would be even better if we could see patterns emerge over time, so I added proportion bars and a timeline.

Since we only had a few weeks we had to work in parallel. While Mark and the rest of the team debated what kind of sensor to use, my first task was to come up with some visuals in order to define and sell the basic concept, and then refine it. Since I didn’t yet have any data, I had to fake some.

So step one was to create a simulation, which I did using a random number generator weighted to create a rising crescendo of events for four colored sensor beams. I first tried showing the ripples against a white background and later switched to black. The following video shows the final concept.

Once Mark built the sensors and we started to get real data, I no longer needed the simulation, but kept it anyway. That turned out to be a good decision. When it came to do the final implementation in the Exchange tent, I had to make adjustments before all four sensors were working. The simulation was perfect for this kind of calibration; I made a software switch so that I could easily change between real and simulated data.

The software for this display did not require a single line of code. I used NodeBox, an open source visual programming tool designed for artists. It works by connecting a series of nodes. One node receives raw cloud data from a JSON file, the next refines each event time, subtracts it from the current time, uses the difference to define the width of an expanding ellipse, etc. Here is what my NodeBox network looks like:

The NodeBox program that produced the ambient video display

One challenge was working in real time. In a perfect world, my program would instantly detect every event and instantly respond. But in the real world it took about a second for a sensor to upload a new row of data to the cloud, and another second for my program to pull it back down. Also, I could not scan the cloud continuously; I had to do a series of distinct queries once every x seconds. The more often I queried, the slower the animation became.

I finally settled on doing queries once every five seconds. This caused an occasional stutter in the animation, but was normally not too noticeable. Sometimes, though, there would be a sudden brief flash of color, which happened when an event fired early in that five-second window. By the time I sensed it the corresponding ripple had already expanded to a large circle like a balloon about to pop, so all I saw was the pop. I solved this problem by adjusting my clock to show events five seconds in the past.
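The real display was a NodeBox network rather than a program, but in code form the delayed-clock trick looks something like this (the expansion rate is an assumption):

```
// Illustration of the delayed clock: ripples grow with the age of an event,
// but the display runs five seconds in the past so an event is never first
// seen as an already-huge circle about to "pop".
const double DISPLAY_DELAY_SECONDS = 5.0;
const double GROWTH_PIXELS_PER_SECOND = 60.0;   // assumed expansion rate

double rippleRadius(double nowSeconds, double eventSeconds) {
  double displayedTime = nowSeconds - DISPLAY_DELAY_SECONDS;
  double age = displayedTime - eventSeconds;    // how old the event appears on the delayed clock
  if (age < 0) return 0.0;                      // not visible yet
  return age * GROWTH_PIXELS_PER_SECOND;
}
```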

Testing was surprisingly easy despite the fact that Mark was located in Redwood Shores and Noel in Austin, while I worked from home or from my Pleasanton office. This is one of the powerful advantages of the Internet of Things. Everyone could see the data as soon as it appeared regardless of where it came from.

We did do one in-person dry run in an Oracle cafeteria. Mark taped some sensors to various doorways while I watched from my nearby laptop. We got our proof of concept and took the sensors down just before Oracle security started getting curious.

On the morning of the big show, we did have a problem with some of the sensors. It turned out to be a poor internet connection especially in one corner of the tent; Noel redirected the sensors to a hotspot and from then on they worked fine. Jake pitched in and packaged the sensors with hefty battery packs and used cable ties to place them at strategic spots. Here is what they looked like:

Three of the ultrasonic sensors we used to detect movement in the tent

The ambient display ran for three straight days and was seen by hundreds of visitors. It was one of the more striking displays in the tent and the simple design was immediately understood by most people. Below is a snapshot of the display in action; Jake also shot a video just before we shut it down.

It was fun to watch the patterns change over time. There would be a surge of violet ripples when a new group of visitors flooded in, but after that the other colors would dominate; people entered and exited only once but passed across the other sensors multiple times as they explored the room. The most popular sensor was the one by the food table.

One of our biggest takeaways was that ambient displays work great at a long distance. All the other displays had to be seen up close, but we could easily follow the action on the ambient display from across the room. This was especially useful when we were debugging the internet problem. We could adjust a sensor on one side of the room and look to the far corner to see whether a ripple for that sensor was appearing and whether or not it was the right color.

A snapshot of the ambient display in action

It was a bit of a risk to conduct this experiment in front of our customers, but they seemed to enjoy it and we all learned a lot from it. We are starting to see more applications for this type of display and may set up sensors in the cloud lab at HQ to further explore this idea.

Fun, Games and Work: Telepresence Robots

Wed, 2016-09-28 13:46

Companies talk about “Gamification,” but the first time I felt like I was playing a game at work was driving our Double telepresence robot around the office floor, rolling down the hallway and poking into cubicles. With a few simple controls—forward, backward, left, and right—it took me back to the D-pad on my NES, trying to maneuver some creature or robot on the screen and avoid obstacles.


It’s really a drone, but so much less stressful than controlling a quadcopter. For one, you can stay put without issue. Two, it’s not loud. And three, there aren’t any safety precautions preventing us from driving this around inside Oracle buildings.

Of course, this isn’t the intended use. It’s a telepresence robot, something that allows you to be more “present” in a meeting or at some remote site than you would be if you were just a face on a laptop—or even more invisibly—a (mostly silent) voice on a conference call. You can instead be a face on a robot, one that you control.

That initial drive wouldn’t have been nearly as fun (or funny) if I were just cruising around the floor and no one else was there. A lot of the enjoyment was from seeing how people reacted to the robot and talking to them about it.

It is a little disruptive, though that may wear off over time. Fellow AppsLab member Noel (@noelportugal) drove it into a meeting, and the whole crowd got a kick out of it. I could see throughout the meeting others gazing at the robot with a bit of wonder. And when Noel drove the robot behind someone, they noted how it felt like they were being watched. But no one forgot Noel was in the meeting—there was an actual presence that made it feel he was much more a part of the group than if we were just on the phone.


On another virtual walkaround, Noel met up with Mark (@mvilrokx) and they had a real work conversation about some hardware they had been emailing back and forth about, and being able to talk “face” to “face” made it much more productive.

All this provokes many interesting questions—is a telepresence robot better than video conferencing? How so, and by how much? How long does it take for the robot to seem “normal” and just become a part of a standard meeting?

And of course—what would a meeting be like that consisted solely of telepresence robots?

IoT Workshop Guide – part 2

Wed, 2016-09-14 03:50

In the last post, we set up the development environment for coding and uploading sketches to the NodeMCU, an IoT device.

In this post, we will upload and run two examples to demonstrate how an IoT device sends data into the cloud and receives commands from the cloud. You can find the source code and the MQTT library requirement on GitHub.

4. Architecture Diagram

Several tiers and components are involved in making the whole IoT loop work. However, you will just focus on device communication over MQTT; all the other components have already been set up properly.


5. Wiring Diagram

For the two testing examples, you can just use the following diagram:

Wiring diagram for the test examples

And here is an example of actual wiring used to test the example code:

Actual wiring example

 

6. Test Sample #1

This sample demonstrates the IoT device interacting with the Internet over MQTT. You can get the source code from GitHub: https://github.com/raymondxie/iotws/blob/master/iotws_mqtt.ino

Please note, you need to modify the code by supplying the necessary connection parameters for the WiFi network and the MQTT broker. Check the parameter values with your instructor.

The example lets you press a button; the event is sent to the MQTT broker in the cloud, and since the NodeMCU board is also listening on that channel for input, the information essentially comes right back to the board. Based on the button press count (even or odd), the board plays a different tune for you.
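The real sketch is in the GitHub repo linked above; the round trip has roughly this shape (pins, topic, broker and credentials below are placeholders, not the workshop values):

```
#include <ESP8266WiFi.h>
#include <PubSubClient.h>

// Rough shape of the button -> MQTT -> callback -> tune round trip (see the repo for the real code).
const int BUTTON_PIN = D2;   // assumed wiring
const int BUZZER_PIN = D5;   // assumed wiring

WiFiClient wifiClient;
PubSubClient mqtt(wifiClient);
int pressCount = 0;

void onMessage(char *topic, byte *payload, unsigned int length) {
  // The board subscribes to the same channel it publishes to, so the event comes right back.
  pressCount++;
  int frequency = (pressCount % 2 == 0) ? 440 : 880;   // even vs. odd press -> different tune
  tone(BUZZER_PIN, frequency, 500);
}

void setup() {
  pinMode(BUTTON_PIN, INPUT_PULLUP);
  WiFi.begin("your-ssid", "your-password");            // placeholder credentials
  while (WiFi.status() != WL_CONNECTED) delay(250);

  mqtt.setServer("broker.example.com", 1883);          // placeholder broker
  mqtt.setCallback(onMessage);
  mqtt.connect("iotws-demo");
  mqtt.subscribe("iotws/button");
}

void loop() {
  if (digitalRead(BUTTON_PIN) == LOW) {                // button pressed
    mqtt.publish("iotws/button", "pressed");
    delay(300);                                        // crude debounce
  }
  mqtt.loop();
}
```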

Have fun playing the tunes!

7. Test Sample #2

This sample sends a message into Oracle IoT Cloud Service (IoTCS) at the press of a button. You can get the source code from GitHub: https://github.com/raymondxie/iotws/blob/master/iotws_iotcs.ino

Please note, you need to modify the code by supplying the necessary connection parameters for the WiFi network and the MQTT broker. Check the parameter values with your instructor.

This sample lets you press a button, and a message along with your name is sent to the MQTT broker. A Raspberry Pi is listening for input on that particular MQTT channel; it acts as a gateway to IoTCS and relays the message to it. You can then verify the message with your name in the IoTCS console.


IoT Workshop Guide – part 1

Wed, 2016-09-14 03:49

AppsLab and OTN will jointly host an IoT Workshop at the Oracle OpenWorld and JavaOne conferences in 2016. We look forward to seeing you at the Workshop.

Here are some details about the Workshop, with step-by-step instructions. Our goal is that you will learn some basics and get a glimpse of Oracle IoT Cloud Service at the workshop, and that you can continue playing with the IoT package after going home. So be sure to bring your computer so we can set up the proper software for you.

Before we get into the step-by-step guide, here is the list of hardware parts we will use at the IoT Workshop.

Board and parts

1. Download and install software

We use the popular Arduino IDE to write code and upload it to the IoT device. You may download it from the Arduino website even before coming to the workshop:
https://www.arduino.cc/en/Main/Software

Arduino IDE download page

Just make sure you get the proper version for your platform; e.g. if you have a Windows machine, get the “Windows installer.”

The installation is straightforward, as it is a very typical installation for your platform. If needed, here are the instructions: https://www.arduino.cc/en/Guide/HomePage

2. Set up the Arduino IDE to use the NodeMCU board

We use an IoT device board called the NodeMCU. Like the Arduino Uno board, it has many pins to connect sensors and LED lights, but it also has a built-in WiFi chip which we can use to send input data into the IoT Cloud.

You installed the Arduino IDE in step 1. Now open the Arduino IDE.

Go to File -> Preferences, and get to a page like this:

Arduino IDE Preferences screen

Add “http://arduino.esp8266.com/stable/package_esp8266com_index.json” to the “Additional Boards Manager URLs” field, and then hit the “OK” button.

Restart the Arduino IDE, go to “Tools” -> “Board” -> “Boards Manager”, and select “esp8266 by ESP8266 Community”. Click it and install it.

Boards Manager showing the esp8266 package

Restart the Arduino IDE, go to “Tools” -> “Board”, and select the “NodeMCU 1.0” board.

Board selection showing NodeMCU 1.0

Also set the corresponding parameters for CPU Frequency, Flash Size, etc., according to the above screenshot.

3. Quick Blink Test

To verify that we have set up the Arduino IDE for the NodeMCU properly, connect the board to your computer using a USB-to-micro-USB cable.

Then go to “File” -> “New” and copy & paste this example code into the coding window: https://github.com/raymondxie/iotws/blob/master/iotws_led.ino

Sample code loaded in the Arduino IDE

Select the proper port where the board is connected via USB:

Port selection

Click the “Upload” icon at the top left of the Arduino IDE, and observe that the sample code is loaded onto the board. The on-board LED should blink once per second.

On some MacBooks, if you don’t see the proper “USBtoUART” port, you need to install an FTDI driver – you can download it from here.

On a Windows machine, you will see certain “COM” ports. You need to install this driver.

You can also play around and connect an external LED to a pin, similar to the following wiring diagram, and modify the code to use that pin to blink the LED.

Wiring diagram for an external LED
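The workshop’s actual code is the iotws_led.ino file linked above; a minimal blink sketch along those lines, with an assumed pin for the external LED, would be:

```
// Minimal blink sketch (illustrative; the workshop's real code is iotws_led.ino on GitHub).
const int LED_PIN = D1;   // assumed pin for the external LED; use LED_BUILTIN for the onboard one

void setup() {
  pinMode(LED_PIN, OUTPUT);
}

void loop() {
  digitalWrite(LED_PIN, HIGH);   // LED on
  delay(500);
  digitalWrite(LED_PIN, LOW);    // LED off
  delay(500);                    // 500 ms on + 500 ms off = one blink per second
}
```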

By now, you have completed the setup of the Arduino development environment for the NodeMCU – an IoT device – and can upload and execute code on the device.

Continue to part 2: Load and Test IoT Code >>

For OpenWorld and JavaOne 2016, An Internet of Things Workshop

Wed, 2016-09-14 00:48


Want to learn more about the Internet of Things?

Are you attending Oracle OpenWorld 2016 or JavaOne 2016? Then you are in luck! Once again we have partnered with the Oracle Technology Network (OTN) team to give OOW16 and JavaOne attendees an IoT hands-on workshop.

We will provide a free* IoT Cloud Kit so you can get your feet wet with one of the hottest emerging technologies. You don’t have to be an experienced electronics engineer to participate. We will go through the basics and show you how to connect a WiFi microcontroller to the Oracle Internet of Things Cloud.

All you need to do is sign-up for a spot using the OpenWorld (Android, iOS) or JavaOne (Android, iOS) conference mobile apps. Look under Info Booth, and you’ll find an IoT Workshop Signup section.

Plus, brand-new this year, check out the Gluon JavaOne conference app (Android, iOS), look for the OTN Experiences and hit the IoT Workshop.

Note: OK, so that Gluon JavaOne app, 1) isn’t new this year and 2) I posted the wrong links. This year’s app is called JavaOne16, so look carefully. You can find the IoT Workshop signups under OTN Experiences.

Or find us at the OTN Lounge on Sunday afternoon. Workshops run all day, Monday through Wednesday of both conferences. Space is limited, and we may not be able to accommodate walkups, so do sign up if you plan to attend.

Then come to the OTN Lounge in Moscone South or the Java Hub at Hilton Union Square with your laptop and a micro-usb cable.

The kit includes a NodeMCU, buzzer, button, and an LED

 

*Free? Yes free, while supplies last. Please make sure you read the Terms & Conditions (pdf).

Oracle Volunteers and the Daily Minor Planet

Thu, 2016-08-11 13:58

“Supercharged Perseid Meteor Shower Peaks This Month” – as the very first edition of Daily Minor Planet brought us the news on August 4th, 2016.

First edition of Daily Minor Planet

Daily Minor Planet is a digital newspaper about asteroids and planetary systems. It features an asteroid that might fly by Earth that day, or one of particular significance to the day. It also features a section of news from different sources on the topics of asteroids and planets. And most interestingly, it has a dynamic orbit diagram embedded, showing real-time positions of the planets and the daily asteroid in the sky. You can drag the diagram to see them from different angles.

You can read the live daily edition on the Minor Planet Center website. Better yet, subscribe with your email address and get your daily dose of asteroid news in your inbox.

Daily Minor Planet is the result of a collaboration between Oracle Volunteers and the Minor Planet Center. Following the Asteroid Hackathon in 2014, we did a Phase I project, Asteroid Explorer, in 2015, which focused on asteroid data processing and visualization. This is the Phase II project, which focuses on public awareness and engagement.

The Oracle Volunteers on this phase consisted of Chan Kim, Raymond Xie (me!), Kristine Robison, DJ Ursal and Jeremy Ashley. We have been working with Michael Rudenko and J.L. Galache from the Minor Planet Center for the past several months, and created a newspaper sourcing, editing, publishing and archiving system, with user subscription and daily email delivery functionality. During the first week of August, the Oracle volunteer team was on site to prepare and launch the Daily Minor Planet.

Check out the video of the launch event, which was hosted in Phillips Auditorium at the Harvard-Smithsonian Center for Astrophysics and live-streamed on YouTube. The volunteers’ speech starts around the 29:00 minute mark:

It was quite an intense week, as we were trying to get everything ready for launch. In the end, as a reward, we got a chance to tour the Great Refractor at the Harvard College Observatory, located in the next building over.

The Great Refractor, at Harvard College Observatory

By the way, the Perseid meteor shower this year will peak on August 12, and it is in outburst mode, with potentially over 200 meteors per hour. So get yourself ready and catch some shooting stars!

The AppsLab’s Latest Inter-State Adventure: A Site Visit to Micros

Mon, 2016-07-18 16:17

Probably the best way to get to know your users is to watch them work, in their typical environment. That, and getting to talk to them right after observing them. It’s from that perspective that you can really see what works, what doesn’t, and what people don’t like. And this is exactly what we want to learn about in our quest to improve our users’ experience using Oracle software.

That said, we’ve been eager to get out and do some site visits, particularly for learning more about supply chain management (SCM). For one, SCM is an area most of us on the team haven’t spent too much time working on. But two, at least for me–working mostly in the abstract, or at least the virtual—there’s something fascinating and satisfying about how physical products and materials move throughout the world, starting as one thing and being manufactured or assembled into something else.

We had a contact at Micros, so we started there. Also, they’re an Oracle company, so that made it much easier. You’ve probably encountered Micros products, even if you haven’t noticed them—Micros does point-of-sale (POS) systems for retail and hospitality, meaning lots of restaurants, stadiums, and hotels.

Micros point-of-sales terminals throughout the years. This is in Micros’s corporate office in Columbia, Maryland.

For this particular adventure, we teamed up with the SCM team within OAUX, and went to Hanover, Maryland, where Micros has its warehouse operations, and where all of its orders are put together and shipped out across the world.

We observed and talked to a variety of people there: the pickers, who grab all the pieces for an order; the shippers, who get the orders ready to ship out and load them on the trucks; receiving, who takes in all the new inventory; QA, who have to make sure incoming parts are OK, as well as items that are returned; and cycle counters, who count inventory on a nightly basis. We also spoke to various managers and people involved in the business end of things.

A view inside the Micros warehouse.

In addition to following along and interviewing different employees, the SCM team ran a focus group, and the AppsLab team ran something like a focus group, but which is called a User Journey Map. With this research method, you have users map out their tasks (using sticky notes, a UX researcher’s best friend), while also including associated thoughts and feelings corresponding to each step of each task. We don’t just want to know what users are doing or have to do, but how they feel about it, and the kinds of questions they may have.

In an age where we’re accustomed to pressing a button and having something we want delivered in two days (or less), it’s helpful on a personal level to see how this sort of thing actually happens, and all the people involved in the background. On a professional level, you see how software plays a role in all of it—keeping it all together, but also imposing limits on what can be done and what can be tracked.

This was my first site visit, though I hope there are plenty more in the future. There’s no substitute for this kind of direct observation, where you can also ask questions. You come back tired, but with lots of notes, and lots of new insights.

Blast from the Past: Gesture-Controlled Robot Arm

Sun, 2016-07-17 18:31

Hard to believe it’s been nearly three years since we debuted the Leap Motion-controlled robot arm. Since then, it’s been a mainstay demo for us, combining a bit of fun with the still-emergent interaction mechanism, gesture.

Anthony (@anthonyslai) remains the master of the robot arm, and since we lost access to the original video, Noel (@noelportugal) shot a new one in the Gadget Lab at HQ where the robot arm continues to entertain visitors.

Interesting note, Amazon showed a very similar demo when they debuted AWS IoT. We nerds love robots.

We continue to investigate gesture as an interaction; in addition to our work with the Leap Motion as a robot arm controller and as a feature in the Smart Office, we’ve also used the Myo armband to drive Anki race cars, a project Thalmic Labs featured on their developer blog.

Gesture remains a Wild West, with no standards and different implementations, but we think there’s something to it. And we’ll keep investigating and having some fun while we do.

Stay tuned.

The Nextbit Robin

Thu, 2016-07-14 19:46

For a couple of months, I’ve been using the Nextbit Robin as my main phone. It’s a $299 Android phone that started as a Kickstarter campaign and got 3,611 backers, including Jake (@jkuramot).

I previously had a Nexus 5, but over time its Bluetooth stopped working, and that was a good excuse to try this phone.

I was also excited because at SXSW I had a long talk with the Nextbit (@nextbitsys) development team about all the technology behind this phone; more details below.


So Nextbit is a new company that wants to revolutionize handheld storage, and this first attempt is really good.

They came up with the Robin phone; it is rectangular with tight corners that look uncomfortable at first, but it has a soft-touch finish and a decent balance of weight. People tend to ask me if this is the modular phone (Project Ara) by Google or a new Lego phone. Either way, the conclusion is that it has a pretty cool and minimalistic design, and people like it a lot.

Talking about its design, the power button on the right-hand side is also a fingerprint reader, and there are tiny volume buttons on the left-hand side. That’s probably the worst part of the build; the buttons are small and round and, of course, kinda hard to press.

The power button does not protrude at all, so it’s hard to press too. The fingerprint reader is actually really good, though; accuracy and speed are on point. The side placement of the fingerprint reader actually makes a lot of sense, as you can register your left index finger and right thumb for the way you grip the phone and unlock it as soon as you pick it up.

It has a USB Type-C port at the bottom left corner with quick charging, and dual front-facing stereo speakers that are loud and clear. Quick charging is awesome.

It runs the latest version of Android 6 with a custom Nextbit skin, but all elements feel pretty stock.

Specifications are pretty good too: Snapdragon 808, 3 GB of RAM and a 2,680 mAh battery, which make the phone pretty smooth. The 13 MP camera on the back has decent colors and details, but dynamic range is weak.

I noticed that it is very slow to actually take the photos, but they have just released a new software update that solves the shutter lag.


But let’s focus on the main spec of this phone: storage. All the magic is in the Nextbit skin. Every Robin comes with 32 GB of on-board storage but also 100 GB of free cloud storage. Now, you’ll be asking: why do you need cloud storage instead of on-board storage?

What happens is that Robin is supposed to be smart enough to offload the oldest and least frequently used stuff from internal storage straight to the cloud. So when you start to run out of local storage, old apps and old photos that haven’t been opened in a while are moved to the cloud to make room in your local storage, seamlessly, almost without you ever noticing.


In the application drawer you will notice that some app icons are grayed out; these are the apps that are offloaded and stored in the cloud, no longer on the device. If you want to use one of them, it takes a minute or so to download everything in the state you last left it in, and then it opens up right where you left off. So it’s a process of archiving and restoring.

You can also keep apps from getting archived by swiping the app icon down to pin them, and they will never go to the cloud. If you use some apps all the time, you shouldn’t even need to pin them, as Robin will notice that you use them a lot.

In order to save battery and not waste your carrier data, the backup process happens only when the phone is on WiFi and charging.

The problem is that all restoring depends on the internet, so if you are out there with no data and want to use an app that is archived in the cloud, you’re pretty much out of luck.

In deeper detail, it has machine learning algorithms, cloud storage integrated into the Android OS, and on-board storage merged seamlessly with the cloud. The machine learning mechanism learns from your app and photo usage. It can also think ahead, so months before you ever run out of storage, Robin anticipates that you will need more space and continually synchronizes apps and photos. Pictures are downsampled to screen resolution, but the full-size versions remain linked in the cloud.
As for security concerns, all data stored in cloud storage is encrypted with Android’s built-in encryption.

I like the idea behind the Robin system, but the cool thing is that you can use it like a normal phone: you can use your launcher of choice, even root it. The bootloader is actually unlocked out of the box, and doing so doesn’t void the warranty.

It’s a pretty good phone for the price, even outside of the storage solution, but if you are looking for a phone focused on having lots of local storage, I’d look for something with a microSD card slot. Otherwise, it’s definitely worth considering. I would definitely use it as my main phone.

It’s cool to see this type of cloud-based storage solution in action.


Ultra Subjective Space

Thu, 2016-07-14 06:37

Architects design space. A building is just a way to create spaces. Information architects at Oracle design relationships with abstract concepts. So far the main way we have to create visible spaces for our users is by projecting pixels onto glass screens.

This may change someday. If the promise of virtual reality is ever achieved, we may be able to sculpt entirely new realities and change the very way that people experience space.


The AppsLab R&D Team visits the teamLab exhibition

One sneak peek into this possible future is now on display at Pace Gallery in Menlo Park. Last week the AppsLab research and design team toured the Living Digital Space and Future Parks exhibit by the renowned Japanese art collective teamLab.

Still photographs do not do this exhibit justice. Each installation is a space which surrounds you with moving imagery. Some of these spaces felt like VR without the goggles – almost like being on a holodeck.

Various teamLab installations

The artwork has a beautiful Japanese aesthetic. The teamLab artists are exploring a concept they call ultra subjective space. Their theory is that art shapes the way people of different cultures experience space.

Since the renaissance, people in the west have been taught to construct their experience of spatial reality like perspective paintings with themselves as a point observer. Premodern Japanese art, in contrast, might have taught people to experience a very different flattened perspective which places them inside each space: subjective instead of objective.

To explore this idea, teamLab starts with three dimensional computer models and uses mathematical techniques to create flattened perspectives which then form the basis for various animated experiences. I can’t say that the result actually changed my perception of reality, but the experience was both sublime and thought-provoking.

More teamLab installations

Their final installation was kid-centric. In one area, visitors were given paper and crayons and were asked to draw spaceships, cars, and sea creatures. When you placed your drawing under a scanner it became animated and was immediately projected onto one of two giant murals. We made an AppsLab fish and an AppsLab flying saucer.

Another area lets you hop across virtual lillypads or build animated cities with highways, rivers, and train tracks by moving coded wooden blocks around a tabletop. I could imagine using such a tabletop to do supply chain management.

Kids having fun – including us

Ultra subjective space is a pretty high brow concept. It’s interesting to speculate that ancient Japanese people may have experienced space in a different way than we do now, though I don’t see any way of proving it. But the possibility of changing something that fundamental is certainly an exciting idea. If virtual reality ever lets us do this, the future may indeed be not just stranger than we imagine, but stranger than we can imagine.

Living Digital Space and Future Parks will be on display at the Pace Gallery in Menlo Park through December 18, 2016.

From BI to AI: Whose Intelligence is Better?

Tue, 2016-07-12 08:54

Numbers are a property of the universe. Once Earthians figured that out, there was no stopping them. They went as far as the Moon.

We use numbers in business and life. We measure, we look for oddities, we plan. We think of ourselves as rational.

I, for one, like to look at the thermometer before deciding if I shall go out in flip-flops or uggs. But I cannot convince my daughter to do the same. She tosses a coin.


More often than we like to think, business decisions are made the way my daughter decides what to wear. I need an illustration here, so let me pick on workers’ compensation. If you have workers, you want to reward them for good work, and by doing that, encourage the behaviors you want to see more of – you want them to work harder, better, quicker, and be happy. You can measure productivity by the amount of, say, shoes sold. You can measure quality by, say, the number of customers who came back to buy more shoes. You can measure happiness by, say . . . okay, let’s not measure happiness. How do you calculate what the worker compensation shall be based on these two measures?

50/50? 75/25? 25/75? Why? Why not? This is where most businesses toss a coin.

Here is an inventory of types of questions people tend to answer by tossing a coin:

  • Should you monitor the dollar amount of sales, or the percentage of sale increase?
  • Which of the two measures lets you better predict future performance?
  • Why would it?
  • How accurate are the predictions?
  • How big shall the errors be until you feel the measure doesn’t make accurate predictions? Why?
  • Which measures shall be combined and looked at together?
  • In which way?
  • Where would you set up thresholds between good, bad, and ugly?
  • Why? Why not?
  • If some numbers are way off, how do you know it is an exception and not part of some pattern that you don’t see?

If not by tossing a coin, it is common practice to answer these kinds of questions based on a gut feeling. To answer them based on evidence instead, there needs to be a way to evaluate the gut feeling, together with a bunch of other hypotheses, in order to choose a hypothesis that is actually true and works. This is hard for humans, and not only because it requires a lot of modeling and computation.

Conceptually, as humans, we tend to look for reasons and explain things. It is hard for us to see a pattern if we don’t see why it works. “I wouldn’t have seen it if I hadn’t believed it” as one wise person said. Admit it, we are biased. We won’t even consider evaluating a hypothesis that looks like a complete nonsense.

Computers, on the other hand, don’t have such a problem. Machine learning can create and test thousands of crazy hypotheses for you and select the best one. That is, the best one in predicting, not explaining. They can also keep updating the hypotheses as conditions change.

That’s why I believe AI is the new BI. It is more thorough and less biased than we humans are. Therefore, it is often more rational.

I am fascinated to learn about ML algorithms and what they can do for us. Applying the little I learned about Decision Trees to the workers’ compensation dilemma above, this is what I get. Let’s pretend the workers get a bonus at the end of the year. The maximum amount of the bonus is based on their salary, but the exact amount is a percentage of the maximum based on performance – partially on the amount of sales, partially on the number of returning customers. These are your predictors. Your goal in paying out the bonus is that next year your workers have an increased amount of sales AND an increased number of returning customers at the same time. That’s your outcome.


The Decision Tree algorithm will look at each possible combination of your predictors and measure which one best divides your outcomes into categories. (They say it is the division that minimizes entropy and maximizes information gain.)
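For reference, these are the standard textbook definitions behind that statement (general formulas, not something taken from this post): the entropy of a set S whose classes occur in proportions p_i, and the information gain from splitting S on a predictor A:

```
H(S) = -\sum_{i} p_i \log_2 p_i

IG(S, A) = H(S) - \sum_{v \in \mathrm{values}(A)} \frac{|S_v|}{|S|} \, H(S_v)
```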


Had we tried to do that “by hand,” it would have taken so much time. But here we have the most effective bonus recipes figured out for us. Some of the recipes may look counter-intuitive; we may find out that the largest bonus is not the best encouragement, or some such. But, again, figuring out the “whys” is a different problem.

And here is my little classification of business intelligence tasks that I believe AI can take over and improve upon.


As a human and a designer who welcomes our machine learning overlords, I see their biggest challenge in overcoming our biggest bias: that of our own superior rationality.

ODTUG Kscope16

Tue, 2016-07-05 10:05

Just like last year, a few members (@jkuramot, @noelportugal, @YuhuaXie, Tony and myself) of @theappslab attended Kscope16 to run a Scavenger Hunt, speak and enjoy one of the premier events for Oracle developers. It was held in Chicago this time around, and here are my impressions.

Lori and Tony Blues

Since our Scavenger Hunt was quite a success the previous year, we were asked to run it again to spice up the conference a bit. This is the 4th time we have run the Scavenger Hunt (if you want to learn more about the game itself, check out Noel’s post on the mechanics), and by now it runs like a well-oiled machine. The competition was even fiercer than last year with a DJI Phantom at stake, but in the end @alanarentsen prevailed; congratulations to Alan. @briandavidhorn was the runner-up and walked away with an Amazon Echo, and in 3rd place, @GoonerShre got a Raspberry Pi for his efforts.


Sam Hetchler and Noel engage in a very deep IoT discussion.

There were also consolation prizes for the next 12 places; they each got both a Google Chromecast and a Tile.  All in all, it was another very successful run of the Scavenger Hunt, with over 170 participants and a lot of buzz surrounding the game.  Here’s a quote from one of the players:

“I would not have known so many things, and tried them out, if there were not a Scavenger Hunt. It is great.”

Better than Cats. We haven’t decided yet if we are running the Scavenger Hunt again next year; if we do, it will probably be in a different format. Our brains are already racing.

Our team also had a few sessions: Noel talked broadly about OAUX, and I gave a presentation about Developer Experience, or DX.  As is always the case at Kscope, the sessions are pretty much bi-directional, with the audience participating as you deliver your presentation.  Some great questions were asked during my talk, and I was even able to record a few requirements for API Maker, a tool we are building for DX.

Judging by the participation of the attendees, there seems to be a lot of enthusiasm in the developer community for both API Maker and 1CSS, another tool we are creating for DX.  As a result of the session, we picked up a few contacts within Oracle whom we will follow up with to push these tools forward and get them out sooner rather than later.

In addition to all those activities, Raymond ran a preview of an IoT workshop we plan to replicate at OpenWorld and JavaOne this year. I won’t give away too much, but it involves a custom PCB.

The makings of our IoT workshop, coming to OOW and J1.

Unfortunately, my schedule (Scavenger Hunt, presentation) didn’t really allow me to attend any sessions but other members of our team attended a few, so I will let them talk about that. I did, however, get a chance to play some video games.

I really, really like video games.

And have some fun, as is customary at Kscope.

A traditional Chicago dog.

Cheers,

Mark.

Kscope16 Scavenger Hunt

Thu, 2016-06-16 17:34

Are you attending Kscope16? If so, you are in luck: the @theappslab team will be back this year (by popular demand) to run a Scavenger Hunt. This year there are even more chances to win, plus check out these prizes:

  • First place: DJI Phantom Drone
  • Second place: Amazon Echo
  • Third place: Raspberry Pi

Our first scavenger hunt took place last year at Kscope15. Here’s a quick video detailing the whats, whys and wherefores of the game from our fearless leader and Group Vice President, Jeremy Ashley (@jrwashley) and me.

After that, we replicated the experience for an OTN Community Quest at OpenWorld and JavaOne 2015, and then for the UKOUG Apps15 and Tech15 Conference Explorer. We have had great fun seeing participants engaged. We are very proud of the game engine we built for the scavenger hunt, bringing together software and IoT. If you are interested in seeing how it all works, check out our post “Game Mechanics of a Scavenger Hunt.”

Check the Kscope16 Scavenger Hunt site for more information on how to join and play during the annual ODTUG user group shindig. You can even sign up to play during your registration process.

We have some interesting twists in store, and we’re hoping for an even larger group of engaged players this year.

See you there!

Twilio Signal Conference – $Bash night

Tue, 2016-06-14 11:35

Twilio Signal Conference ended with an after-party called $Bash night.  Twilio set up booths with geeky games like programming, program debugging, computer building, etc.  They also had a foosball table for 16 people.  I think it is one of the nicest parties for geeks I have attended so far.  It was a fun night with music, drinks, food and games, tuned for developers.

During that morning’s keynote, Jeff Lawson (Twilio founder) had a virtual meeting with Rony Abovitz (Magic Leap founder), and they announced that the winner of $Bash night would get access to Magic Leap.  Magic Leap is so mysterious, and I had a great urge to win $Bash night to be able to play and do something with it.

It turned out that by competing with other developers during $Bash night you could win raffle tickets, and the person with the most raffle tickets by the end of the night would be the winner.  So I went all out all night, playing and competing.  The environment was too dark to take good quality pictures, but you can find some info here.

There were two games I did quite well at and enjoyed: a program debugging competition among 6 developers, and pairing up to move Jenga blocks with a robot arm.  At the end of the night, although I tried my best, I only came out second.  At first I was quite disappointed; however, I was told there is still a very good chance that a second Magic Leap spot will be offered to me.  I shall keep my hopes up and wait and see.

Twilio Signal Conference – Sessions

Tue, 2016-06-14 11:34

Let’s dive into the Twilio sessions.

The sessions were generally divided into the following four tracks:

  • Learn
    See the latest progress in software and cloud communications, talk shop with the Twilio engineers who developed it, and get into the details of how to use the software.
  • Inspire
    Hear from industry experts shaping the future of tech with the latest software.
  • Equip
    Get details on hurdles, tricks, and solutions from Twilio customers on building communications with software APIs.
  • Action
    Define business plans for modern communications with real-life ROI and before-and-after stories.

My interest was mostly in the Inspire track, and with AI and virtual assistants being the hot topic nowadays, those were the sessions I targeted at the conference.

Twilio messaging integration with other platforms

This half year has been the “half year of virtual assistants,” with the announcements of the controversial Tay and Cortana from Microsoft, the Messenger bot platform from Facebook, Allo from Google I/O, and Siri from WWDC yesterday.  Every giant wants to squeeze into the same space and get a share of it.  There were a lot of sessions about bots at Signal, and I had a feeling that Twilio carefully hand-picked them to suit the audience.  IBM, Microsoft and Slack all presented their views and technologies on bots, and I learned a lot from them.  It is a bit odd that api.ai sponsored the lunch and had a booth at the conference, but did not present in any sessions (afaik).

In the schedule, there was a session called Terrible Ideas in Git by Corey Quinn.  I love Git, and when I saw the topic, my immediate reaction was: how can anyone say Git is terrible (and be right)?  I just had to go there and take a look.  To my surprise, it was a very fun talk, and I had a good laugh and enjoyed it a lot.  I am glad I did not miss that session.

Yep, deal with it!

Twilio Signal Conference – All about Twilio

Tue, 2016-06-14 11:33

This year I attended the Twilio Signal Conference.  As in its first year, it was held at Pier 27 in San Francisco.  It was a two-day, action-packed conference with a keynote session in the morning and sessions afterwards until 6 pm.

The developer experience provided by the conference was superb compared to a lot of other developer conferences nowadays.  Chartered buses with wifi were provided for commuters using different transit options.  Snacks were served all day.  There were six 30-minute sessions to choose from in every time slot, so there was no need to wait in line and you could always attend the sessions you wanted (sorry, Google I/O).  For developers, or at least for me, the most important thing was a special coffee stall that opened every morning to serve you a freshly brewed coffee to wake you up and energize you for the rest of the day.  With the CEO, among others, coding right in front of you in a keynote session to show you demos, it is as true a developer conference as you could hope for.

The whole conference lit up by the Internet of Blings, with a live code demonstration on stage.

There were a lot of new products and features Twilio announced at Signal, and I will not spend time recapping them here.  You can read more here and here.  The interesting thing to note is how Twilio got so huge.  It started off with a text messaging service; it now also provides services for video, authentication, phone and routing.  It is the engine under the hood for fast-growing companies like Lyft and Uber.  It now offers the most complete messaging platform for developers to connect to their users.  It has capabilities to reroute your numbers and tap into phone conversations.  It partners with T-Mobile to get into the IoT domain.  Twilio’s ambition and vision are not small at all.  The big question is: how does Twilio achieve all this?  The answer can be controversial, but for me, it all boils down to simplicity: making things really easy, really good, and making them just work.  The Twilio APIs are very easy to use and do exactly what they say, no more, no less.  Their reliability is superb.  That is what developers want and rely on.

Jeff Lawson talks about Twilio’s infrastructure

Twilio as a messaging hub

But wait, there’s more. Check out my thoughts on the sessions at Signal and my $Bash night experience. I almost won a chance to play with the mysterious Magic Leap, and I might yet get access for finishing second. Stay tuned.
