Your UKOUG Apps15 and Tech15 Conference Explorer

November 25th, 2015 Leave a Comment


Are you attending UKOUG Apps15 (#ukoug_apps15) or Tech15 (#ukoug_tech15)? If so, you are in luck! Once again we will run our ever popular scavenger hunt with a twist. From December 7-9 we will be at the ICC in Birmingham, UK working with the UKOUG team to give you a fun way to explore the conference and win prizes along the way.


If you’re not familiar with this game, it is as if we are stamping a card (or your arm!) every time you do a task. But instead of stamps, we use IoT technologies: Raspberry Pi, NFC stickers, and social media to give you points. This time we will hold a daily drawing. You only need to enter once, but the more tasks you complete, the more chances you have to win. Each day has different tasks.

This is a great way to discover new things, make friends, and enjoy the conference from a different perspective.

You can pre-register here or come over during registration so we can set you up. See you there!

OpenWorld 2015 Highlights

November 19th, 2015 Leave a Comment

It’s been nearly three weeks, and I’m finally getting around to sharing the highlights of our OpenWorld 2015. Enjoy.


Last year, Steve Miranda showed some of our project work in his keynote. This year, our Glance framework on the Apple Watch made an appearance in Larry Ellison’s first keynote, in a video showcasing the evolution of Oracle’s User Experience over the last 30 years.

OTN Community Quest

Noel (@noelportugal) and I oversaw the OTN Community Quest during OOW and JavaOne this year.

Pic, it happened.

We registered more than 300 players. Of those, more than half completed more than one task, and we had 18 players finish all nine tasks.

One major change we made after the Kscope15 Scavenger Hunt was switching from a points system to a drawing, allowing anyone to play at any time during the game and still have a good shot at winning.

As an unintentional proof point, the Grand Prize winner completed only one task.

Here’s a short video explaining what it was, how to play and why we did it.

I know Noel would like me to mention a cool feature for this iteration of the game.  He wrote some code to have the Amazon Echo draw the winning entries, which was a pretty sweet demo.


Big thanks to our longtime and very good friends at OTN for letting us do this.

OAUX Exchange

Speaking of Alexa, the Amazon Echo’s persona and our favorite toy lately,  it featured prominently in the IoT Smart Office our team built to show at the OAUX Exchange.

Alexa and the smart badge were only parts of the whole experience, which included proximity using BLE beacons, voice-controlled environment settings and voice-controlled Cloud Applications navigation, just to hit the highlights.

The Smart Office wow feature was this Leap Motion interaction, grabbing a record from one screen and throwing it to another.

But wait, there was more. We showed the Glance Framework running on Android Auto, on an actual car head unit.


And for our annual fun demo, Myo gestural armbands used to control Anki race cars, because why not? Special thanks to Guanyi for running this demo for us.


Thao (@thaobnguyen) and her team were busy throughout the conference conducting research, running a focus group and doing ad hoc interviews, a.k.a. guerrilla testing.

Ben, Tawny and Guido, our crack team of researchers.


Our team had a very busy and very successful OpenWorld. For the complete story on all the OAUX OpenWorld 2015 happenings, check out the Storify.

And finally, two other noteworthy, but not OOW-related, happenings I should mention.

Mark (@mvilrokx) and Raymond (@yuhuaxie) were interviewed about MakerFaire by Profit magazine. Check out what the veteran makers had to say.

Speaking of Mark and making, Business Insider covered his IoT Nerf gun project. Check out his instructions if you want to make your own.

A Smart Badge for a Smart Office

November 9th, 2015 Leave a Comment

Editor’s note: Maybe you got a chance to check out our IoT Smart Office demo at the OAUX Exchange during OpenWorld. If not, don’t fret, we’ll describe its many cool bits here, beginning with this post from Raymond (@yuhuaxie) on his smart badge build. A Smart Office needs an equally smart badge.

We showcased the Smart Office at the OAUX Exchange.

You put on a badge and walked into an office of the future, and everything came alive. You may have been dazzled by the lights, screens, voice commands and gestures, and paid no attention to the badge. But the badge is the key that starts all the magical things as you approach the office, and its build is worth a mention.

The classical version:

If you tried it at the Exchange event, it felt just like a normal badge, except a little heavier and with a vivid display.

Inside that home-made leather pouch, there is a 3.5” TFT display, a large recycled LiPo battery (taken out of a thin power bank we got at Kscope15), and a controller called the LightBlue Bean (@punchthrough), which happens to be the same controller inside the “Smart Bracelet” we showcased a year earlier, which lit up in different colors to guide you along an expo path.

The Smart Badge needs to be programmable and to announce its presence, so that we can program it to be a particular persona, and so it can tell a listener (a Raspberry Pi server acting as the IoT brain) that it is approaching the Smart Office.

The LightBlue Bean does the job perfectly, as it has two personalities: a BLE device and an Arduino-compatible controller. We created an iOS app that talks to the Bean over Bluetooth to set up the persona, and the Arduino side of the Bean controls the TFT display to show the proper badge image.


The Smart Badge

When the person wearing the Smart Badge approaches the office, the Raspberry Pi server detects the approach and sends a remote notification to the iPhone and Apple Watch, so that you can “check in” to the office from the Apple Watch and start the whole sequence of the Smart Office coming alive.

At the OAUX Exchange event, due to limited space, we did not use the BLE presence detection we built; instead, we used a time-delay mechanism to send the notification to the Apple Watch.


Inside the Smart Badge

For those curious minds, we prepared another version, to show the components inside the Smart Badge. I would call it . . .

The techno version:

Instead of stacking up the components, we spread them out so people can see the parts and wiring. We designed and laser-cut acrylic sheets to make a transparent case, which resembles the typical laminated badge you see at conferences.

As you can see, the wiring between the Bean and the TFT is mostly for display control over the SPI protocol, plus one wire to access the SD card on the TFT display board and another to control the backlight. The BLE side is, of course, entirely wireless.



By the way, we made another version using ESP8266 (NodeMCU) instead of the Bean, because we just love the little ESP8266.

We connected the TFT display to the NodeMCU over hardware SPI, with a similar approach for SD card access and backlight control. Since the NodeMCU has many more pins available (some PWM-capable too), we can actually dim the backlight to many brightness levels, instead of just turning it on and off as in the Bean case.

The NodeMCU is flashed as an Arduino variant and set up as a web server with its built-in wifi capability. The iOS app mentioned earlier can program this Smart Badge using HTTP requests over wifi. In fact, it can be programmed using any browser or CLI, without tethering to an iPhone.
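To give a flavor of that programmability, here is a minimal sketch of such an HTTP endpoint. Note that the badge runs the Arduino core; this sketch uses the NodeMCU’s native Lua firmware instead, and the /persona route and showBadge() helper are illustrative assumptions, not the actual badge code.

-- Hypothetical persona endpoint for the badge, in NodeMCU Lua.
local function showBadge(name)
  print("displaying badge for: " .. name)  -- real code would redraw the TFT
end

srv = net.createServer(net.TCP)
srv:listen(80, function(conn)
  conn:on("receive", function(sck, request)
    -- e.g. from any browser or CLI: http://<badge-ip>/persona?name=raymond
    local name = string.match(request, "GET /persona%?name=(%w+)")
    if name then
      showBadge(name)
      sck:send("HTTP/1.1 200 OK\r\n\r\npersona set to " .. name)
    else
      sck:send("HTTP/1.1 404 Not Found\r\n\r\n")
    end
    sck:on("sent", function(s) s:close() end)
  end)
end)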

The NodeMCU is such a recent toy on the block that it took some time to get it working with the TFT display. For example, at first the image just did not look right when controlled by the NodeMCU. After some perseverance, it was straightened out.


The card version:

We wanted to make the Smart Badge as small as possible. This design makes it just a bit larger than the TFT display, by stacking the NodeMCU behind the TFT and using a tiny proto-board to neatly wire everything. It is about the size of a deck of cards, housed in a transparent acrylic case.

I figured it doesn’t really look like a badge, but it is a nice little Internet-connected display that can sit on a desktop. So I decided to fit it with three everyday AAA batteries instead of the LiPo battery, which usually uses a JST connector and is a hassle to recharge.



Currently, this version can show a badge and also run slide shows. But it can do much more: with the NodeMCU (ESP8266), it is a web server that can listen for instructions, a web client that can pull information from outside, and an MQTT client that can react to outside events, and it has many pins to hook up sensors and controllers. Plus, the TFT is a touch display, so we can use touch input to switch between modes, etc.

It could become a really functional Smart Office monitor/notification center.

I guess this is just the starting point of a little toy.

OTN Community Quest

October 24th, 2015 Leave a Comment


After a very successful Scavenger Hunt at Kscope15, we are back with an Oracle OpenWorld and JavaOne edition. This time we partnered with the Oracle Technology Network (@oracleotn) folks to give Oracle OpenWorld (@oracleopenworld) and JavaOne (@javaoneconf) attendees a fun experience, and with even more chances of winning.

The OTN Community Quest was designed to be a win-win experience. We ask you to complete certain tasks; you learn along the way, have fun and get a chance to win great prizes.

There are two types of tasks during the Quest:

  1. Tweet a “selfie” along with two hashtags: #otnquest and [#taskshashtag], i.e. the hashtag of the task.
  2. Scan a “Smart Sticker” on the Raspberry Pi scanner to validate a task. Stop by the OTN Lounges to get your “Smart Sticker.”


You can do just the Twitter tasks, just the Raspberry Pi tasks, or both, but the more tasks you do, the greater your chance to win one of these prizes:

Grand prize (GoPro Hero4), first prize (Basis Peak), second, third, and fourth prizes (Amazon Echo).


The prize drawing will take place on Wednesday, October 28 at 4:45 PM in the OTN Lounge in Moscone South. You need not be present to win, but you must pick up prizes at the OTN Lounge in Moscone South by 2 PM on Thursday, October 29.

Register here or stop by the OTN Lounge in the Moscone South lobby or the Info Desk in the OTN Community Cafe in the Java Hub at JavaOne. If you register online and want to complete the Raspberry Pi tasks with the “Smart Sticker,” you’ll need to come by either of those locations to get the sticker.

Good luck and see you at the show.

Connect All the Things: An IoT Nerf Gun – Part 2: The Software

October 22nd, 2015 5 Comments


In the first part of this series, I showed you how to mod a Nerf gun so it can connect to the internet, and you can electronically trigger it.  In this second part, I will show you some software I created to actually remotely control the Nerf gun over the internet using various input devices.



First you will have to prepare your ESP8266 so it connects to your wifi and the internet, and can send and receive data over this connection.  With the Lua framework on the chip, there are two ways to achieve this.

HTTP Server

You can run an HTTP server on the chip, which looks very similar to creating an HTTP server with Node.js, and then have it listen on a specific port.  You can then forward incoming internet traffic to this port and have the server perform whatever you want when it receives traffic, e.g. launch a Nerf dart when it receives a POST with a URL = “/launch”.
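To give a flavor of this approach, here is a minimal sketch in NodeMCU Lua; the /launch route and the launchDarts() helper are illustrative assumptions, not the actual framework code.

-- Hypothetical HTTP-server approach, NodeMCU Lua.
srv = net.createServer(net.TCP)
srv:listen(80, function(conn)
  conn:on("receive", function(sck, request)
    if string.find(request, "POST /launch") then
      launchDarts(1)  -- fire one dart (assumed helper)
      sck:send("HTTP/1.1 200 OK\r\n\r\nlaunched")
    else
      sck:send("HTTP/1.1 404 Not Found\r\n\r\n")
    end
    sck:on("sent", function(s) s:close() end)
  end)
end)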

I hit a few snags with this approach though.  First, it requires access to the router, which is not always possible.  You can’t just take the gun to your friend’s house and have it work out of the box; they would have to configure their router, etc.  Also, at work, I don’t get to access the router.

Second, it requires quite a bit of “framework” code before this works.  I actually created a framework to help me with this, and you are free to use it for this or other projects, but ultimately I realized I don’t need all this.  Also, I found that the connection was extremely unstable when running an HTTP server.  I could never figure out why this was happening and so I abandoned this approach, although I still use the framework for other purposes.

An alternative approach is to set the ESP in AP (Access Point) mode.  That way you can connect any wifi device straight to the Nerf gun. However, this means it is not really connected to the internet, and for me, it exhibited the same connectivity issues.

MQTT Protocol

The Lua framework for ESP also supports MQTT, a protocol that is specifically designed for IoT devices.  It is an extremely lightweight publish/subscribe messaging transport.  You can turn the ESP into an MQTT client which can then receive (subscribe to) messages or send (publish) messages.

These messages are then relayed by an MQTT broker.  So yeah, you need one of those too.

Luckily there are many implementations of MQTT brokers for every imaginable language and platform and if you don’t want to run your own, there are SaaS providers too, which is what I ended up doing.

This proved to be extremely stable and fast for me, messages get relayed almost instantaneously and the ESP reacts immediately to them.  The other advantage is that you can create MQTT clients for every imaginable platform, even in the browser, which then allows you to control the Nerf gun from every conceivable device, as long as you connect them all to the same broker.  I will show a few examples of these in later sections.

These are the steps you have to perform in your Lua script for this all to work (a minimal end-to-end sketch follows the list):

  1. Set your ESP in STATION mode (wifi.setmode(wifi.STATION)) and connect it to your wifi (wifi.sta.config(<ssid>, <password>))
  2. Create your MQTT Client (mqtt.Client(<clientId>, <keepAliveSeconds>, <cloudMqttUserName>, <cloudMqttPwd>)) and connect it to your MQTT Broker (<mqttClient>:connect(host, port, 0, callback))
  3. Publish messages to topics (<mqttClient>:publish) and/or subscribe to topics (<mqttClient>:subscribe)
  4. Listen for incoming messages (<mqttClient>:on("message", callback)) and act on them (launchDarts())
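Put together, a minimal version of these steps might look like this; the topic names, credentials and the launchDarts() helper are illustrative assumptions, not my exact script.

-- Minimal end-to-end sketch, NodeMCU Lua.
wifi.setmode(wifi.STATION)
wifi.sta.config("<ssid>", "<password>")

local id = wifi.sta.getmac()  -- the ESP's MAC address, used as unique ID
m = mqtt.Client(id, 120, "<cloudMqttUserName>", "<cloudMqttPwd>")

m:on("message", function(client, topic, message)
  if string.find(topic, "command/launch") then
    -- message is expected to look like {"nrOfDarts": 2}
    local n = tonumber(string.match(message or "", "%d+")) or 1
    launchDarts(n)
  end
end)

m:connect("<mqttBrokerHost>", 1883, 0, function(client)
  -- announce ourselves, then listen for commands addressed to this gun
  client:publish("nerf/" .. id .. "/state", "online", 0, 0)
  client:subscribe("nerf/" .. id .. "/command/#", 0)
end)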

I have my Nerf gun publish a message when it comes online.  Besides the state of the Nerf gun (“online”), this message also contains the unique ID of the Nerf gun (the ESP’s MAC Address) which is later used by other MQTT clients to address messages straight to the Nerf gun (see below).

It also publishes a message when it starts firing, when it fires a dart, when it stops firing and when it goes offline.

The Nerf gun also subscribes to a topic that contains its unique ID.  This allows other MQTT Clients to address messages (“commands”) to a specific Nerf gun.  Whenever the Nerf gun receives a message on this topic, it verifies the topic and acts accordingly, e.g. if the topic contains “command/launch” and the message contains “{‘nrOfDarts’: 2}”, the Nerf gun will launch 2 darts.

Launching a dart

The actual launching of the dart is an interplay between the flywheel motors and the servo that now controls the trigger.

This is because the flywheel motors take some time, about 1-2 seconds, to spin up to optimal speed, and also to spin down.  This is the sequence of events that gets triggered in the Nerf gun when it receives a command to launch:

  1. Turn on the flywheel motors, i.e. switch the relay on.
  2. After 1.5 seconds, set the servo to the “fire position.”
  3. After 0.5 seconds, set the servo back to the “neutral position.”
  4. Turn off the flywheel motors, i.e. switch the relay off.

At each stage, MQTT messages are being sent to inform other MQTT clients what is happening in the Nerf gun.

When the user wants to launch multiple darts, the only change is that the servo goes from neutral to fire and back to neutral as many times as there are darts to be fired.  The flywheel motors stay on for the duration of this sequence because it takes too long to spin them up and down (a sketch of this sequence follows the list):

  1. Turn on the flywheel motors, i.e. switch the relay on.
  2. After 1.5 seconds, set the servo to the “fire position.”
  3. After 0.5 seconds, set the servo back to the “neutral position.”
  4. If more darts to fire, go back to 2.
  5. Turn off the flywheel motors, i.e. switch the relay off.
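Here is a hypothetical sketch of that sequence using NodeMCU timers; the pin numbers and servo duty values are assumptions, not the actual build’s.

-- Launch sequence sketch, NodeMCU Lua.
local RELAY_PIN, SERVO_PIN = 1, 2
local NEUTRAL_DUTY, FIRE_DUTY = 60, 90  -- ~1.2ms / ~1.8ms pulses at 50 Hz

gpio.mode(RELAY_PIN, gpio.OUTPUT)

local function setServo(duty)
  pwm.setup(SERVO_PIN, 50, duty)  -- 50 Hz hobby-servo signal
  pwm.start(SERVO_PIN)
end

function launchDarts(n)
  local fired = 0
  gpio.write(RELAY_PIN, gpio.HIGH)      -- 1. flywheels on
  tmr.alarm(0, 1500, 0, function()      -- 2. wait 1.5s for spin-up
    tmr.alarm(1, 500, 1, function()     -- toggle the servo every 0.5s
      if fired % 2 == 0 then
        setServo(FIRE_DUTY)             -- push a dart into the flywheels
      else
        setServo(NEUTRAL_DUTY)          -- 3. back to neutral
        if (fired + 1) / 2 >= n then    -- 4. no darts left?
          tmr.stop(1)
          gpio.write(RELAY_PIN, gpio.LOW)  -- 5. flywheels off
        end
      end
      fired = fired + 1
    end)
  end)
end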

Control the Nerf Gun from the CLI

Once your Nerf Gun is prepped, publishing information and subscribed to commands, you can start using it with other MQTT Clients.

One of the simplest ways to start is to use the command line to publish MQTT messages.  As I mentioned before, you need an MQTT Broker to make this all work.  Mosquitto is one such broker, and when you install it, it comes with mosquitto_pub, an MQTT version 3.1/3.1.1 client for publishing simple messages from the command line (there is also an equivalent mosquitto_sub for subscribing to topics).

Using it would look like this:

$ mosquitto_pub -h <MQTTBrokerHost> -p <Port> -u <user> -P <pwd> \
  -t nerf/5c:cf:7f:f1:31:45/command/launch -m '{"nrOfDarts":2}'

This will publish a message to your MQTT Broker. This can be a different one than the Mosquitto broker, e.g. I use it to publish to my SaaS MQTT Broker, but it has to be the one your Nerf gun is also logged on to.  If a Nerf gun is listening to this topic, it will pick this up and launch 2 darts.

Web Application: The Nerf Center

In order to make this a bit more visually appealing and add some more useful functionality, you could create a web application, e.g. I created one that shows different Nerf Guns that are available, their status, and a button that allows me to launch darts.

In order for this to work, you need an MQTT client that runs in the browser, i.e. a JavaScript implementation.  I used MQTT.js (and Webpack to make it browser friendly); there are others that you can use.

I used React.js for the front end code and Twitter Bootstrap for visual appeal, but of course you can use whatever floats your boat.

All I do is use MQTT.js to listen (subscribe) to topics from the Nerf Guns.  As I explained earlier, they all publish messages to a common topic that includes their unique ID (the MAC Address of the ESP in the Nerf Gun) and their state.

Whenever I receive such a message, I upsert the table.  Also, as I now have the unique Id of each Nerf gun, I can publish (using the “Launch” button) a message to a specific Nerf gun, allowing me to launch any number of darts from a particular Nerf gun.  Here’s a Vine with this in action (slightly older UI, same principle though):

Amazon Echo Integration

And once you got this far, why not add voice control!  I used the Amazon Echo but I imagine you can do the same with Google Now (probably not Siri though, at least not without some hacking).  All this required was to add a Skill to the Echo that published a message to a Nerf gun’s topic whenever I say “Alexa, ask Nerf gun to launch 5 darts.”  Here’s a Vine showing how this works in practice:

From here you can go on to add Twitter integration, having the Nerf gun tweet every shot, or vice versa, having it launch when you send it a tweet.  The possibilities are literally endless.


This is, of course, a whimsical example of an IoT device, but hopefully it shows you what happens when something that isn’t normally connected to the internet, suddenly is.

Apart from being able to remotely control it, you get remote access to its “state” and possibly its surroundings.  This in turn allows you to act on this state and influence it if needed.

Furthermore, the device itself has access to the vast resources of the internet, i.e. computing power, data, services etc., and can use that to influence its own state.

As more and more devices come online, it would also have access to those, all providing more and more context to each other, allowing them to become more capable than they could ever be on their own, truly becoming greater than the sum of their parts.


Connect All the Things: An IoT Nerf Gun

October 21st, 2015 Leave a Comment


As part of my foray into the Internet of Things (IoT), I was looking for a project I could sink my teeth into, because I find it easier to learn something by doing it rather than just reading about it.

It just so happened that JavaOne this year would have a Maker Zone, and they were looking for participants who could build something interesting to show the attendees.  As a regular visitor of the MakerFaire in my back yard, I was immediately intrigued, and after rummaging through my 9-year-old son’s toy boxes, I decided that I wanted to mod one of his Nerf guns.

This blog post will detail what I did and how I did it.

My plan was to turn the Nerf gun itself into an internet-connected device that would then allow me to poll its status (online/offline/launching) and to launch darts remotely, just using an internet connection.

To make my life a little bit easier, I started with a semi-automatic Nerf gun called the NERF N-STRIKE MODULUS ECS-10 BLASTER.  All the user has to do is start the flywheels and then pull the trigger, which pushes a foam dart (using a lever and push rod) between the two fast-spinning flywheels, ensuring the speedy exit of the dart from the barrel.

Internals of an unmodified Modulus Nerf Gun

Instead of having the user start the flywheels and pull the trigger, I had to use some electromechanical solution for both.  Let’s talk about each solution individually.

Flywheel Control: Relay

This component is, strictly speaking, already electromechanical: when the user pulls the flywheel control trigger, s/he is actually physically pressing a button (hidden under the orange lid right behind the flywheel control trigger) that starts up the flywheels.

All I had to do was replace the mechanical button with one that I could control electronically, e.g. a relay. I could have opted for a transistor as well but decided to go for the relay as I like the satisfying “click” sound they make when activated :-)

I will show later how this was actually done.

Trigger: Servo

The trigger mechanism on the other hand is completely mechanical, and I had to replace this with some sort of electronic component in order to be able to control it electronically.  The actual trigger movement is very small, but it is “amplified” by the lever it is connected to.

This lever then pushes a rod forward, which is what makes the dart squeeze between the flywheels.

My initial thought was to replace the rod with some sort of push solenoid, but those typically have a very small stroke, too small for this purpose.  I also looked at very small actuators but they suffered from the same drawback, plus they were also very expensive and relatively slow.

So instead, I decided to replace the trigger with a servo that would control the lever.  The axis of the servo would sit inline with the axis of the lever so when the servo turns, it turns the lever, exactly what happens when you pull the trigger.

Internet Connection: ESP8266

The final component of the build was to put the Nerf Gun on the internet.

For this I decided to settle on the ESP8266 chip, more precisely the ESP8266-12 variant. Besides being a wifi chip, it also has several GPIO pins that I use to control both the flywheel relay and the servo, with some to spare for other components I might want to add later, e.g. a darts counter, range finder, etc.

Unfortunately the chip runs on 3.3V, and the Nerf gun supplies 6V (4 x 1.5V AA batteries) to the flywheel motors. So I either had to use another battery that supplies 3.3V or tap into the 6V and step it down to 3.3V.

I actually tried both, but in the end opted for the latter, as it is simpler to replace one set of batteries when they run out than two.

Also, if one set of batteries runs out, the other part becomes completely unusable as well, so there is no benefit to having separate power sources either.  This complicated the build, but certainly benefited the UX. Hey, I am in UX after all.

Breadboard Schema

Breadboard layout of IoT Nerf Gun

Breadboard layout of IoT Nerf Gun

I hope this is rather self-explanatory.  Note that you have to connect CH_PD to Vin/Vcc, otherwise the chip doesn’t power up.

Also GPIO15 has to be connected to GND. If the ESP module doesn’t come preinstalled with Lua (mine didn’t), then you have to flash it first with Lua.

This is outside the scope of this article, and there are plenty of articles on the internet explaining how to do it, but be aware that if you need to do this, you have to pull GPIO0 (that’s GPIO zero) to GND and pull GPIO2 HIGH.

Then, once you have flashed it with Lua, you have to disconnect these connections, and you can use both pins for controlling other things.  Also, in order to upload anything to the ESP8266 you need to use the TX and RX pins and a USB to TTL (Serial) adapter, e.g. FTDI.

First Build

For my first build, I actually used a NodeMCU board, which is a breakout board for the ESP8266-12 that includes a 5V->3.3V power converter and reset and flash buttons, and is breadboard compatible (unlike the naked ESP8266, which has a 2mm pitch rather than 2.54mm), making it much easier to prototype with.

However, it is much larger than the naked ESP chip and I had a hard time containing it all in the Nerf gun. One of my objectives was to keep the Nerf gun “plain” looking.


NodeMCU Dev Board on the left, ESP8266-12 on the right

I started with the servo integration as I figured that was the hardest; integrating the relay would be relatively easy compared to that.  Here is a Vine video with my first working version of the ESP8266 and the servo motor attached, using a crude web service:

I then modded the gun to accommodate the servo. As the servo was too big to fit in the gun, I had to cut a hole in the side of the gun and hot-glue the servo in place.

I tried a few micro servos first but none of them were powerful enough to push the dart, and also, they didn’t fit either.

I then modified the lever so that the servo could control it.  This meant I had to shorten the axis of the lever and cut two slits that could then be gripped by the servo.  As the servo turns, so does the lever, and the dart gets pushed between the flywheels:


At this point, I was also using all sorts of connectors to connect the battery to the different motors and ESP.

I thought that this would make it easier to route all the cables in the Nerf gun and later disconnect everything if I need to open the gun.

However, this turned out to not be the case and in later iterations I just soldered the connections straight to the necessary components, with as short a cable as possible, with the exception of the servo connection as the servo was the only component that was physically connected to the other part of the Nerf gun.

This way, if I did have to open up the Nerf gun, I could still disconnect the halves from one another.

Final Build

As you can see, this yields far less cable to deal with and it is easy to close the Nerf gun this way, with room to spare.

You can also see that at this stage, I added an externally accessible connector that connects to RX, TX and GND of the ESP chip.

This allows me to debug and reprogram the ESP chip without opening the Nerf gun!  This has proven to be exceptionally useful, as you can imagine; there are 14 tiny screws holding the Nerf gun together.

I also disabled one of the safety mechanisms on the Nerf gun so I can fire it without closing the jam door on top of the Nerf gun.

This made it easier to debug some issues I had with the servo not pushing the darts correctly through the flywheels and jamming the Nerf gun.  Here are a few more Vine videos of the IoT Nerf gun in action.

This one shows the servo pushing the rod from the inside:

And a view from the top with the jam door open:

In the second part of this blog post, I will go into more detail about the software I wrote to control the Nerf gun.

The Glance Framework on Android Auto

October 20th, 2015 Leave a Comment

A while back, I told you about Glance, a framework for wearables and other devices.

Quick history review, Glance was born out of our frustration with rebuilding basic Oracle Cloud Applications notification functionality each time a hot new wearable device dropped, e.g. Google Glass, Pebble, Android Wear, etc.

So, by the time the Apple Watch launched, we had a solution ready, and we designed Glance to be a framework so whenever the next, new hotness came around, we’d have most of the work already done.

Enter the connected car, specifically, Android Auto, with which Noel (@noelportugal) has been tinkering since its launch. Maybe the connected car won’t be the next, new hotness, but it’s our passion to investigate, build and research everything we can to benefit our users.

Best. Job. Ever. Am I right?

Anyway, connected car solutions like Auto and Apple CarPlay are similar to smartwatch ones, purposefully limited interactions, designed to minimize distractions and maximize the glance in the OAUX design philosophy, glance-scan-commit.

Noel had some basic Oracle Cloud Applications notifications flowing to the Auto simulator way back in the Spring, but we put that effort on hold to focus on the Apple Watch. Plus, without a head unit, Auto just looked like a tablet app.



Waiting paid off in that department because over the past six months, several car electronics manufacturers have begun selling after-market head units that support Auto and/or CarPlay.

Heading into Oracle OpenWorld (@oracleopenworld), which is next week (eek!), Noel and I wanted to get Glance working on a head unit to show off at the OAUX Exchange. There will be lots of Glance shown on the Apple Watch, but we want to emphasize it’s a framework, not an Apple Watch-specific solution.

Noel did some research at his local car-modding store and chose the Pioneer AVH-4100NEX, which supports both Auto and CarPlay. This is the model down from the top-of-the-line Pioneer AVIC-8100NEX, recently reviewed by The Verge, if you’re interested.

Powering the head unit was an adventure. Luckily, Noel knows what he’s doing because ultimately, outside the car, the head unit requires a 10 amp power supply. You read that right; that’s about five times more juice than your average tablet power supply pushes.

Of course, once he got it running, Noel and I kicked the tires, checking out the basic features of CarPlay and Auto. Turns out they’re quite different. CarPlay is very much like basic iOS, just bigger and less functional, by design.


Auto follows a Google Now card-based approach, providing very little beyond basic information, again, by design.


And of course, Noel had to get his new Chromecast working on the Pioneer’s seven-inch resistive touchscreen because, well, he’s Noel.  The head unit has an HDMI input on the back for in-car entertainment, but using the input while the car is not in park is impossible for obvious safety reasons.


Unless you’re Noel, and you figure out how to fool the unit into thinking the car is in park.


Just yesterday, Luis (@lsgaleana) and Osvaldo (@vaini11a) got Glance running on the Pioneer, and we’re now ready to show it next week.



Cool, right?

If you want to see Glance in action on Android Auto or on the Apple Watch, sign up for the OAUX Exchange on Monday, and stop by and say hi.

Oh wait, Noel and I won’t be there. Did you hear we’re running a Quest with OTN?

See you in less than a week.

See the IoT Nerf Gun at JavaOne

October 14th, 2015 Leave a Comment

So, Mark (@mvilrokx) built an internet-connected Nerf gun, so that happened. If you read here, you’ll have seen the early stages of the build.

Heading into JavaOne (@javaoneconf), where he’ll be giving a talk, Mark wanted to build something cool to show.

Look for a technical post from him soon. The short version is that he added the ESP8266 to connect the toy to the interwebs and then rigged the internal mechanism to fire without pulling the trigger.

Here’s the evolution of the project.

Of course, lots of Vines showing the progress:

Of course, the Nerf gun needs an Amazon Echo integration because you know how much we love Alexa.

Here’s more from Mark:

The Internet of Things (IoT) is all the rage these days, but what is IoT exactly and how will it affect our daily lives? The IoT Nerf gun is a whimsical example used to illustrate some of the key concepts of IoT and explain the possibilities of this exciting new technology.

We modified a standard Nerf blaster to be able to connect itself to the Internet. Once connected, the Nerf gun can be queried over the internet about its status. We can also send commands to the Nerf gun in order to launch any number of darts from anywhere in the world. The Nerf gun can also talk to other internet-enabled services, like Twitter. We set up the Nerf gun to tweet every time it launches a dart; it has its own Twitter account, @IoTNerf!

These are really all just examples of what is possible once you connect a “thing” to the Internet. The Nerf gun has effectively become a service endpoint that can be reached from anywhere in the world over the Internet. As a result, it can be integrated (“mashed up”) with other internet services to do whatever you want.

Want to get notified when an important email comes in from your boss by getting a foam dart in the back? No problem! Want to replace your kids’ alarms with something more . . . effective? Done! Want to get a notification from your Nerf gun every time it launches a dart? Easy!

Last week, during one of the Oracle Education Foundation’s (@ORCLCitizenship) quarterly workshops, Mark took the Nerf gun on a dry run to gauge the response.


Photo by Kellyn PotVin-Gorman, used with permission.

It went over really well, and we got some really solid questions from the Design Tech High School (@dTechHS) students in the workshop, and hey, kismet, Friend of the ‘Lab Kellyn (@dbakevlar) was a volunteer mentor.

If you’re going to JavaOne, make sure to come and test out the IoT Nerf gun in person. It will be in the MakerZone in the Java Hub. Mark will be there if you have questions, want to sign up and play the OTN Community Quest we’ll be running, or just want to hang out and talk nerdy with him.

See you there.

Go on a Quest at Oracle OpenWorld and JavaOne

October 13th, 2015 Leave a Comment

The Scavenger Hunt we ran at Kscope15 was a huge success. The players had fun, the game was mostly bug-free and our gracious hosts, ODTUG (@odtug), were happy.

The Hunt went so well that we’ve been asked by the OTN (@oracleotn) to run a similar game at this year’s Oracle OpenWorld (@oracleopenworld) and JavaOne (@javaoneconf).


I can’t tell you much about it yet, but the name has changed. During Kscope15, we realized that “scavenger hunt” doesn’t really translate into other languages. So, we’re rebranding the game as the OTN Community Quest.

It should be a lot of fun, so make a note if you’re attending OpenWorld or JavaOne this year and stay tuned.

We’re stoked to renew our collaboration with the good folks at OTN. It’s been three years since we ran the first developer challenge at OpenWorld in the OTN Lounge back in 2012, and we’re looking forward to this year’s event.

In addition to the Quest, our team and OAUX will be very busy during OpenWorld and JavaOne. If you’re attending, check out the OAUX schedule for OOW and JavaOne.

Watch this space for what we’ll be doing and where you can find us during the big shows.

A ShipIt at Oracle’s Mexico Development Center

October 6th, 2015 1 Comment

Editor’s note: Our team participated in many of the Summer of Innovation events held by Laurie (@lsptahoe) and her team. Here’s a report from Luis (@lsgaleana) on the Mexico Development Center (MDC) ShipIt held in mid-August. Enjoy.

Oracle Mexico Development Center (MDC) has grown to bring together some 700 employees, and it continues to grow weekly. However, there is only a single point of entry for every non-employee: the receptionist.

Every visitor that comes to MDC goes through roughly the same process. The person talks to the receptionist. The receptionist goes to the corporate directory to find the employee being visited. The employee meets the person in the lobby. This is a pretty straightforward job, and it is similar for regular visitors, delivery personnel and interview candidates, which are the user roles we picked for our ShipIt project.


In the days before the ShipIt event, Rafael Belloni (Rafa) gathered the Mariachis team: Osvaldo Villagrana (Os), Oscar Vargas, Juan Pablo Martinez (Juampi) and myself. He talked to us about his idea of a virtual kiosk that would serve as an entry point for every visitor. It would consist of a screen with a simple but elegant user interface that, in the background, would take care of the tedious and repetitive job of finding people in the corporate directory, contacting them, printing badges, saving the information into a log and even entertaining visitors with a video about Oracle and all its wonders.


Technically, we wanted to have an Android app loaded onto a Nexus Player, which Oscar owned, connected to a touch-screen monitor that Rafa had recently bought (with the ShipIt in mind, of course). This 3-part device would represent the kiosk. In the background, we would have web services to scrape the corporate directory, notify employees via e-mail, IM and text, print a badge for the visitor, and save all of the information into a log. One final part was a web panel, where HR personnel could schedule interviews and assign interviewers.

The day of the event, we started putting things together. However, as is common in these kinds of events, issues started arising quickly.

To enable touch on the touch-screen monitor, a USB cable was needed between the monitor and the device controlling it. But the Nexus Player has no USB input. We debated our options and concluded that we would try to make it work (somehow). In the meantime, we would try to get Android running on a Raspberry Pi connected to the monitor. If all failed, we would make a web app running on a laptop. In the end, Oscar came up with BlueStacks, an Android emulator for the PC, on which we could easily install an Android app. We went with that.

Rafa worked on the web panel for the interviews; Os worked on the web services, the pdf conversion, and IM and e-mail communication; Juampi did the design; and Oscar and I made the Android app. We each had our own series of problems, but we discussed them as a group. We all proposed solutions, and it always came down to the easiest and quickest.

The day went by and we called it for the night around 3am.


The next day, after breakfast, the kiosk was coming together. The design was ready, we had screens to show, the web panel was working and most of the web services were in place. We just needed to put everything together and polish some features.

We rehearsed the presentation about an hour before the scheduled time. Rafa did all the talking, I controlled the demo, and Os showed the e-mails, IM messages and pdf. We were number 3 of 5 to present. All the teams voted, and the time came to announce the winners: 3rd, UAE Tacklers; 2nd, Team Roller; 1st, Mariachis.


In the end, we slimmed down the project. We decided that it was a bit too much to print the badge; generating a pdf was cool enough. We also discarded the texting, saving information to a log and the Oracle video. However, these features can easily be added in production.

This is how it looked:

IoT Hackathon: Team Waterlytics’ Entry – Part 1/2

October 5th, 2015 Leave a Comment

Editor’s note: Our team participated in many of the Summer of Innovation events held by Laurie (@lsptahoe) and her team. Here’s Part 1 of Mark’s (@mvilrokx) project from the IoT Hackathon held at Oracle HQ, July 30-31. Enjoy.

Last week I participated in the IoT Hackathon at HQ which is part of “The Summer of Innovation,” a series of events that Laurie Pattison’s team is organizing to stimulate ideas and creative thinking.

Unfortunately our entry didn’t win any prizes, but of course that wasn’t the reason I entered (no really, it wasn’t!).  I am trying to learn more about the Internet of Things, and what better way to improve my skills than to test them against my peers.  In this post I’ll delve into what I learned and how I plan to use this knowledge going forward.

A few weeks before the Hackathon, I was approached by Diane Boross with an idea which intrigued me because it was simple but could have far-reaching consequences.  California is suffering its worst drought in over a century, and we are all asked to reduce our water usage by an average of 20% over previous years.

Save Our Water Logo

The problem is that, as a consumer, be that an individual or a company, you have no clue how much water each individual appliance is using; you only know the total amount of water your house or company is consuming.  So how can you know what to use less, or turn off completely, in order to save 20%?

The answer is you can’t; it’s a process of trial and error.  Oh sure, there are lots of tips and suggestions from the state and the water utility companies, but those are very generic and not at all tailored to individual situations.

A very concrete example: I stopped watering my lawn a few months ago, and it is now very dead indeed. However, this turned out to be nowhere near enough to save 20% as I thought it would be; in fact, it barely made a dent (I have a very small lawn). It’s quite possible I let it die for no good reason.

Image from Kevin Cortopassi from flickr, used under CC.

And now, I have to figure out where else to save water and I don’t know where to start.

Diane’s idea was to create a device that can measure the water usage of each individual appliance, something you can already do for electricity usage, but for some reason not for water.

It would be a smart device that connects to the internet and transmits this data in real time to a central hub that would collect this data and present it in a user friendly manner to the consumer.  This would allow that person to then make much more informed decisions about how to save water.

Once you have this data, you can conjure up many other use cases, e.g. you could pit individuals or even whole neighborhoods against each other in a friendly competition of “Who can save the most water in August?”

You could also give much more tailored advice to consumers, e.g. if their lawn watering system is using 50% of their overall water usage, you can tell them to turn it down a bit, preserving the lawn but still saving a bunch of water.  Or in my case, they could have told me not to even bother and instead shorten my shower routine.

Unusually high water usage by an individual appliance, e.g. due to a leak, would also be much easier to detect, instead of being drowned in the noise – pardon the pun.

You could create a water budget per device and provide a feedback mechanism at that device that tells you, in real time, where you are relative to your budget.  As you can see, once you have this data at your disposal, the possibilities are endless.

My initial thought was to use a Water Flow meter to measure the flow through the pipe, connect it to a Raspberry Pi which would in turn connect to the Internet and send the data to a backend server for processing and eventually a database for storage.

The problem with that solution, however, was that it required plumbing; each device would have to be installed by a plumber, inline with the water pipe that runs to the appliance you want to measure.  I consulted with Joe Goldberg in the office next door, who agreed that this would never scale and proposed instead to “listen” for water running through the pipe using a simple and very cheap piezo.

A piezo measures vibrations, kinda like a microphone, and since water that runs through a pipe causes that pipe to vibrate, putting a piezo on that pipe should, in theory, allow us to not only verify that water is flowing through the pipe, but also how much water is flowing through it.

The latter turned out to be a bit harder than we theorized and was the subject of many experiments over the course of a few weeks leading up to the Hackathon.  For his efforts, Joe was recruited by our team which by now consisted of Diane, Joe and myself.

During these experiments, we used regression analysis to try to correlate the vibrations of the piezo with the actual flow rate through the pipe, which we measured precisely using a flow meter.

Once we established this correlation, we could then use it as a model to extrapolate flow rates using just a piezo stuck to a pipe, no plumbing needed.  We assumed that we would probably need several models for different environmental circumstances, e.g. the type of pipe used, the length of the pipe, etc., but for the Hackathon, obviously, proving the concept would be enough.

Unfortunately, we never found a significant correlation using the cheap piezos; all we were able to do was determine whether water was running through the pipe or not, i.e. whether the device was on or off.  It turns out that in practice, in most cases, this can actually be used to measure flow rate as well.

Most appliances that use water either use it all (on) or none (off).  Think about it, when you flush your toilet, it fills up again as quickly as it can, basically at the maximum flow capacity the pipe can handle.  The same is true for a dishwasher, a washing machine and even most showers. I cannot turn down my shower, it always flows at the same rate, all I can control is the temperature.

The only exception to this rule is a tap where you can manually control the flow.  Considering this and our failure to establish a good model, we fell back on assuming that if we measure vibrations, the device is using the maximum amount of water that can flow through a 1/2 inch pipe, which was about 23 liters per minute (or for the decimally challenged, about 6 gallons per minute).

We would simply measure the duration the appliance is “on” and then calculate the actual flow, e.g. if the appliance would be on for a minute, we would calculate that it used 23 liters.
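A minimal sketch of that accounting in Lua on the NodeMCU might look like the following; the ADC threshold, sample interval and reporting are assumptions for illustration, not our actual hackathon code.

-- On/off flow accounting sketch, NodeMCU Lua.
local THRESHOLD = 100        -- piezo ADC reading that counts as "water on"
local MAX_FLOW  = 23 / 60    -- liters per second through a 1/2 inch pipe

local onSeconds = 0

tmr.alarm(0, 1000, 1, function()        -- sample the piezo once per second
  if adc.read(0) > THRESHOLD then
    onSeconds = onSeconds + 1
  end
end)

tmr.alarm(1, 60000, 1, function()       -- report estimated usage every minute
  print(string.format("estimated usage: %.1f liters", onSeconds * MAX_FLOW))
  onSeconds = 0
end)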

We then had to figure out how to connect the piezo to the internet.  Initially I was thinking of using the provided Raspberry Pis, but on the margins of the Scavenger Hunt we ran for Kscope15, Noel (@noelportugal) mentioned to me the existence of the ESP8266 chip.

This is essentially a dirt cheap wifi chip that comes with a bunch of GPIO pins and is powerful enough to be programmed, right on the chip.

There are a bunch of breakout boards that use this chip as their basis and make development even easier.  I went for the NodeMCU board, partly because I thought it would allow me to code in node.js (which is incorrect, it uses Lua, a language that has a lot in common with JS, but is not JS).

More important, though, was the price: the breakout development boards cost ~$10, but the actual chip costs less than $3 (and dropping).  Given that our piezo cost 20 cents, we would have a potential IoT product that costs less than $4 to produce.

Once we settled on the sensor and the board, we started to focus on our use case.

We contacted Oracle Facilities, explained what we were trying to do and asked them if they could give us some real numbers of water usage at Oracle to use in our presentation and as a base line for our product.

To our surprise, not only did they provide the information we needed, they also wanted 1,500 devices from us to test themselves.  We had to disappoint them at that point, explaining what a Hackathon is; however, this clearly demonstrated to us that this would be a useful product, regardless of what happened during the Hackathon.

We also got another interesting use case from them, essentially A/B testing for water-using appliances: they are in the process of installing low-flow toilets and faucets on some floors, and they immediately realized that this would be the perfect device to confirm whether these expensive upgrades are cost effective compared to regular appliances.

This is very hard to do right now as, again, sometimes the savings get lost in the overall usage, which is all they can currently measure.

That concluded all the investigative work we did for the Hackathon.  In the second part of this blog post I will drill down into the technical aspects of the solution we eventually presented to the judges.

On the Necessity of Flight

October 2nd, 2015 Leave a Comment

Editor’s note: Our team participated in many of the Summer of Innovation events held by Laurie (@lsptahoe) and her team. Here’s a recap of the IoT Hackathon held at Oracle HQ, July 30-31. Enjoy.

“The wind was a torrent of darkness among the gusty trees,
The moon was a ghostly galleon tossed upon cloudy seas…”
The Highwayman, Alfred Noyes

To Luis, Osvaldo, and me this beautiful poetry calls to mind one thing. Quadcopters. We started flying these machines about a year ago. The Syma X1, for example, costs ≅$30 and I cannot recommend it enough. Guilty feelings come with this amount of fun and exhilaration, at this price. Relax. Order now!

Syma X1: AA batteries not included.

So for a recent Oracle Internet of Things Hackathon, our first idea was to stick sensors on a quadcopter. We could have used the Syma X1 but we were ready to graduate to something more–programmable with the potential for autonomy–like the Parrot AR Drone 2.0. At an almost affordable $300, you get a very fun and very hackable quadcopter. You can telnet right into the thing. I mean, come on! There are even JavaScript libraries available to control it and grab live video from its two cameras, or you can use any Android or iOS device to fly while you see what it sees.

Screenshot of Parrot app shows streaming live video + controls.

Trust me here. You can justify the use cases and guarantee the safe, professional operation of a quadcopter in a corporate environment. And you can do it with less than 100 emails, discussions, and offhand comments executed strategically over the course of a year or so. A quadcopter seed will, if properly nurtured, sprout a seedling of official approval. Only minutes after approval, that seedling grows propellers.

Parrot AR Drone 2.0 + masking tape + LightBlue Bean.

The Parrot hatches. I hear its first peeps–beeps–and 2 seconds later it flies the nest. I close my front door behind me, the Parrot hovers outside. Surreal excitement, I fly. Flight! Flying! Crashing, fixing, caring, charging. Repeat, repeat,…

Now to the Hackathon. In a nutshell, our idea automates warehouse data collection: find merchandise, identify malfunctioning temperature/humidity sensors, go where people cannot, and such. Tape a LightBlue Bean and maybe some sensors onto the Parrot’s back, write some JavaScript to fly the quad around, do a little dance, and fly home. One problem: upon arriving the morning of the hackathon we were informed that due to safety concerns no drones would be allowed to fly in the venue. We had prepared for this so we went off to our already-booked private meeting room, GoPro in hand, ready to code and create a demo video.

But there was another problem. We could barely control the Parrot. We did not shell out for a Parrot with GPS, and we naively thought that a magnetometer would provide the data needed to precisely control flight. But that’s not the way the world works, baby. How many times did our code raise the Parrot into the air only to flip it over on its back or careen it into a wall? So many times, it was truly disheartening. At first we didn’t even have a clue what the problem might be. In our minds, as software guys, you tell the drone to go forward for 500ms and it goes directly forward as commanded. But what’s “forward” mean? The compass was not accurate and our other efforts…

This did not work.

…no, the checkerboard does not help the Parrot go forward, does not help the flipping, nor the careening, nor the crashing. Something needed to be done and we did not have much time left. What about the Parrot’s eyes? The JavaScript libraries access video and images from the Parrot’s cameras. Grab an image, follow a line halfway down the image, and check 10% of the pixels for something close to the color red. If the red target is to the right, go right. If the red target is to the left, go left. If the red target takes up 75% or more of the image, stop.
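In rough pseudocode (sketched here in Lua for consistency with our other projects; the actual code used the Parrot’s JavaScript libraries, and getPixel() and the color thresholds are assumptions):

-- Follow-the-red-target steering sketch.
local function steer(image)
  local y = math.floor(image.height / 2)   -- scan a line halfway down
  local redLeft, redRight, redTotal = 0, 0, 0
  for x = 1, image.width, 10 do            -- sample ~10% of the pixels
    local r, g, b = getPixel(image, x, y)
    if r > 180 and g < 80 and b < 80 then  -- "close to the color red"
      redTotal = redTotal + 1
      if x < image.width / 2 then redLeft = redLeft + 1
      else redRight = redRight + 1 end
    end
  end
  if redTotal == 0 then
    return "hover"                         -- no target in sight
  elseif redTotal * 10 >= image.width * 0.75 then
    return "stop"                          -- target fills 75%+ of the line
  elseif redRight > redLeft then
    return "right"
  else
    return "left"
  end
end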

Will you believe me if I tell you it worked? It did! When carrying our red target, the Parrot would follow us around the room. Our simple code performed smoothly. Amazement, disbelief, and joy! Strangely, surreally, the Parrot seemed alive. All by itself, like a puppy coming to play, it knew how to get around.

The drone loved the color red so much, we had to cover up a few things.

Then it stopped working. We had split the code into parts: control, vision, sensors, UI. At one point, the code worked properly but then we got fancy. In our confidence we quickly added frills like a little dance, and a flip! Then the Parrot kept losing sight of the target. We only had two batteries and we did not have enough charge to debug. In retrospect, I think it started acting poorly when I tweaked the RGB color identifier function. We needed a video and had just enough charge for this…

See how jerky it acts? That’s me controlling it. When the code works, it runs much more smoothly. Given these complications, and some stiff competition, we did not win anything at the hackathon. I think Luis, Osvaldo, and I are OK with it though. The Parrot works well and if we get some more time to mess with it, it will fly autonomously once again.

Another Home for Our Stuff

September 29th, 2015 Leave a Comment

I’m very pleased to announce that our colleague and friend Kathy recently pushed live a page under the OAUX section, dedicated to our Emerging Technologies team and our work.

Our new home, circa 2015.

Big moment for us. I feel like a made guy.

Don’t worry. This little blog will continue to be our home and stream of consciousness, but now, we have another, more structured home, focused on our projects and thinking.

Like that’s our home, all neat and organized, and this is our garage, where we tinker and talk about interesting stuff, all disorganized, with exposed wires and cracked open cases.

Anyway, the content there won’t be as fluid as what you read here, but it won’t be a static collection. We’ll keep it updated as we progress on our projects and find new shiny objects to chase.

Huzzah for us. Thanks to our GVP Jeremy (@jrwashley), to Misha (@mishavaughan), head of the OAUX Communications and Outreach team, and to everyone who has made this possible. Enjoy.

An IoT Hackathon in Utrecht

September 28th, 2015 1 Comment

I recently attended a hackathon in Utrecht organized by Laurie Pattison (@lsptahoe) and her team as part of their “Summer of Innovation” series.

The theme was Internet of Things (IoT) and this marked the first time that they organized a hackathon specifically for an outside partner, eProseed. All the previous hackathons were internal Oracle events.  Initially the plan was for us Oracle folks to go over and mentor teams as needed, but later on, the decision was made to place us as technical resources in a team and actually participate.  After some initial hiccups with my own team, I ended up in a team with Lonneke Dikmans (@lonnekedikmans), Karel Bruckman, DJ Ursal and Antonio Aguilar.  Here’s what happened next …

If you have ever been to the Netherlands, you probably noticed they like bikes … a lot!  This is the first thing you see when you get off the train in Utrecht:

Bikes at Utrecht Train Station

Not exactly organized.

Lonneke’s team’s idea was to solve this with some IoT ingenuity and tackle the issues this bike chaos creates.

The tools at our disposal were a Raspberry Pi with a fully loaded GrovePi kit and an Oracle Mobile Cloud Service (MCS) account.  We were free to use any other tools as needed, but we decided to stick with these as they were freely available. Plus, we had direct access to domain experts on site.

We used sensors in the GrovePi kit to detect a bike’s presence in the bike rack.  As soon as we detected a bike being put into the rack, we used a Raspberry Pi camera to take a picture of the (presumably) bike owner and identified the person via her/his own phone. Users of the parking system had to register themselves so we could identify and charge them, but we did not build this part for the hackathon.  We then sent a notification to the person’s phone using MCS.  This notification contained the picture we took, the location of the bike and the time it was parked.

The location of the bike could be traced using a phone app and a web application. This app could also be used to keep track of how long the bike had been parked and how much this was going to cost the user.

As soon as the bike was removed from the bike rack, another notification would be sent to the bike’s owner through MCS informing her/him of the usage time and how much the charge would be, and the system would automatically charge the user.
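Stripped of the MCS specifics, the detection loop was conceptually that simple. Here’s a minimal sketch, assuming an ultrasonic ranger on GrovePi port D4 and a placeholder notification endpoint; real MCS calls need your mobile backend ID and auth headers:

import time
import grovepi    # GrovePi sensor library
import picamera
import requests

ULTRASONIC_PORT = 4    # D4 is an assumption; our port may have differed
BIKE_DISTANCE_CM = 30  # anything closer counts as a parked bike
NOTIFY_URL = "https://mcs.example.com/notify"  # placeholder, not a real MCS URL

def bike_present():
    return grovepi.ultrasonicRead(ULTRASONIC_PORT) < BIKE_DISTANCE_CM

camera = picamera.PiCamera()
parked = False
while True:
    if bike_present() and not parked:
        parked = True
        camera.capture("owner.jpg")  # snap the (presumable) owner
        requests.post(NOTIFY_URL, json={"rack": "utrecht-01",
                                        "event": "parked",
                                        "time": time.time()})
    elif parked and not bike_present():
        parked = False
        requests.post(NOTIFY_URL, json={"rack": "utrecht-01",
                                        "event": "removed",
                                        "time": time.time()})
    time.sleep(1)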

Besides the app for end users, we also had a dashboard that could be used by the parking management company.  This could be a municipality or a for-profit company.  The dashboard web application gave an overview of bike distribution throughout the company’s territory, e.g. the city.

This would allow the company to direct cyclists to places where there were free bike racks.  Over time, we would also collect bike rack usage data that could be used to enhance parking infrastructure and overall usage, e.g. with this data you can predict usage peaks and proactively redirect cyclists, plan where more parking is needed or inform city planners on how to avoid “parking hot spots.”
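The dashboard math is straightforward once those parking events are flowing. A toy version of the occupancy calculation, with made-up rack IDs and capacities:

from collections import defaultdict

CAPACITY = {"utrecht-01": 200, "utrecht-02": 150}  # made-up capacities

def occupancy(events):
    """events: iterable of (rack_id, "parked" or "removed") tuples."""
    counts = defaultdict(int)
    for rack, kind in events:
        counts[rack] += 1 if kind == "parked" else -1
    return {rack: counts[rack] / CAPACITY[rack] for rack in CAPACITY}

def freest_rack(events):
    occ = occupancy(events)
    return min(occ, key=occ.get)  # where to send the next cyclist

From there, pointing a cyclist at the freest rack, or feeding the history into a peak-usage prediction, is the fun part.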

In the end, our entry was good for third place.

Image from Laurie Pattison (@lsptahoe), used with permission.

You can see our presentation to the judges, together with all the other entries, on eProseed’s website.



Android Users, You Need Vysor

September 18th, 2015 6 Comments

Despite the amount of iOS and Apple Watch chatter here lately, we still have a dedicated Android user base on the team, including me.

If you read here, you’ll know that projecting an Android device screen to an audience has always been important for us, and usually we have more than one screen to project. Over the years, I’ve used several tools for this: BBQScreen, Android Screen Monitor and lately, Chromecast’s screen mirroring feature, which is handy because it cuts out the laptop middleman.

Enter a Chrome app called Vysor, created by Android luminary Koushik Dutta, better known as koush.


You may remember him from such projects as CyanogenMod, AllCast, Helium, ROM Manager, etc. He’s kind of a big deal in the Android modding community because he’s constantly filling Android gaps and improving the Android experience.

And Vysor doesn’t disappoint. Just install the Chrome app from the Chrome Web Store, set the developer options on your Android device and plug it into your machine.

Vysor opens your device’s screen and, sit down for this one, allows you to control it via mouse and keyboard. You read that right. This is a huge feature that none of the other options I’ve tried offer.

For a full how-to, read Lifehacker’s post.

As an aside, Chrome apps are the bee’s knees. They are truly cross-platform, can run in their own windows outside Chrome and can run when Chrome itself is closed. Seriously, I can quit Chrome and launch the Vysor app on its own. I just did this on OS X and again on Ubuntu.

Did you know that? Although I did, and I shared the enthusiasm of the tech press back in late 2013, I had forgotten. Google hasn’t done a very good job promoting this awesome capability of the Chrome ecosystem.

Not to wander too far off topic, but it is a shame to see Chrome take a back seat to Android when the two are equally useful.

Moving on, Vysor is in beta. It works really well with one device from what I’ve seen. However, once you get two devices connected, it gets confused.


I might have something set wrong, or maybe it’s that annoying Android File Transfer app that’s required (is it still?) for OS X.

Might be user error.

Anyway, if you’re an Android user, check out Vysor. It’s awesome.

Find the comments.

Celebrating 5 Years in Oracle’s Mexico Development Center

September 17th, 2015 Leave a Comment

Editor’s note: If you read here, you might recall that we have two brothers in Guadalajara, Luis (@lsgaleana) and Osvaldo (@vaini11a). Last month, the Mexico Development Center (MDC) celebrated its fifth anniversary. Here’s to many more. Reposted from VoX.

By Sarahi Mireles, (@sarahimireles), Oracle Applications User Experience (@usableapps)

As you may know, Oracle has a couple Development Centers around the globe, and one of them is in Guadalajara, México. The Oracle Mexico Development Center, aka Oracle MDC (where I work), was 5 years old on Aug. 18, and the celebration was just as tech-y and fun as it can be for a development center.


Oracle staff hang out at the event before lunch.

Staff from the 9th floor of Oracle MDC have fun and celebrate 5 years of Oracle in Mexico (hurray!)

The celebration was split into two events, an open event called “Plug in” and a private event (just Oracle staff). Topics were related to what we love: Database, Cloud and, of course, User Experience. Some of the guest speakers were Hector García Molina, former chairman of the Computer Science Department at Stanford University; Javier Cordero, Managing Director of Oracle México; Jeremy Ashley (@jrwashley), Group Vice President, Applications User Experience, and Erik Peterson, General Manager of Oracle MDC.

Hector García Molina starts his talk, “Thoughts on the Future Recommendation Systems,” with students and Oracle staff.

Andrew Mendelsohn, Executive Vice President, Database Server Technologies, gives a talk at the event.

Cheers at the conference. It was a really fun event. Geeks know how to have fun!

Late in the afternoon, the real celebration started! We got to celebrate with all of our friends, colleagues, mates and the whole staff of Oracle MDC, and we all got to be in the anniversary picture of this awesome team, team Oracle!

Members of different teams (UX, UAE) hang out at the celebration.

This year, we all received this fun, handmade airplane as a gift to remember the 5th anniversary of Oracle MDC.

All of the crew of Oracle MDC pose in the annual photo taken at the event.

If you want to know more about life at Oracle MDC, check our Facebook page! And if you’re a student, don’t miss our post about student visits on the Usable Apps blog.

Reducing Friction with Amazon Prime Now

September 9th, 2015 Leave a Comment

A couple days after Noel (@noelportugal) wrote about reducing user friction, I got a chance to try one of Amazon’s latest friction-reducing services, Amazon Prime Now.

The likes of Webvan pioneered grocery delivery back in the dot com days, and fun fact, Amazon owns and continues to operate what is left of Webvan. In those halcyon days, I used Webvan quite a bit. Sounds a bit silly, but being able to order groceries online and have them delivered within the hour felt like living in the future.

Alas, like jetpacks, the future had to wait a bit.

Well, the future is now, erm, again, at least in some cities. Amazon Prime Now launched in Manhattan in December, and I’ve followed its progress eagerly awaiting the happy day when Prime Now launched in Portland. That day came on August 26, and last week, I tried out Prime Now for the first time.

Prime Now offers Amazon Prime members in participating cities free two-hour delivery of various items, not just groceries, from Amazon and several other local stores. You can pay $7.99 for one-hour delivery.

Read that again, it’s not just groceries. I browsed the Amazon store and found plenty of other items, e.g. Amazon Echo, one of our favorite gadgets.


Perfect for scratching that itch for (near) instant gratification, or (almost) last-minute gift ideas.

So, my wife and I decided to order some groceries for dinner from our local market, New Seasons (@newseasons), one of the participating local stores in Portland.



Prime Now only has a mobile app for now, which makes browsing a bit cumbersome, and item availability for delivery depends on the stores. Eventually, we found all the stuff we needed and upon checking out were presented with several delivery times, most of which were taken. Makes sense, it’s a very new service, so I’m sure the stores are measuring demand before committing too many resources.

We chose the 6-8 PM delivery window, and around 6, the person collecting the order texted us to say two items we’d chosen were out of stock. He recommended substitutions, and the order was on its way.


Similar to Uber and Lyft, Prime Now has a map showing the store and your location as a means for status updates on the order, although it doesn’t have a moving icon to show the driver’s progress.

The order arrived easily within the 6-8 PM window, and everything was as expected. One nice feature, the Prime Now app allows for a tip on checkout, so no awkwardness at the door.


There were some unexpected hiccups, but we agreed that Prime Now was worth future tries. I expect that as usage grows, demand will prompt Amazon to iron out the kinks we encountered.

Like Domino’s, Amazon is moving fast to remove friction between its goods and services and its customers, testing drone delivery and restaurant food delivery, growing the list of features the Echo offers, and expanding the Dash restocking buttons and its Fresh service.

These services overlap each other, which speaks to Amazon’s culture, but presumably, we, the consumers, will benefit.

Find the comments.

The Basis Peak: Take a #selfie. See How Exercise Changes You

September 7th, 2015 2 Comments

Editor’s Note: Reposted from Ultan’s (@ultan) Tumblr, a great read. Ultan knows his fitness (and fashion), so his rousing endorsement of the Peak is legit. Read my impressions of the Basis Peak for more. Since I wore it last, Basis has updated the watch’s firmware to add some pretty cool features. Enjoy.

“You’ve been running! Take a selfie, see how exercise changes you!” I smile when that message pops into the notifications list on my Android smartphone after using the Basis Peak. All part of what endears me to using it even more to track my activity and sleep patterns.



This “smile-o-meter” approach of the Basis Peak Photo Finish feature is a great mix of analog and digital, leveraging familiar smartphone functionality to let me add even more “in the moment” context that makes for a better user experience.

Not that I need encouragement to take selfies. But the qualitative self, fun, and motivational power of selfies, even if you do not want to share them, should not be underestimated in today’s fitness world. On the other hand, there is evidence of less than attractive dimensions to the phenomenon.


I’ve documented my earlier pains in setting up the Basis Peak, now resolved with Basis Help team support through Twitter and an onsite visit. Now up and running (#seewhatIdidthere), I really like the thing. I am glad I stuck with it. Would others persevere after my initial experience?

I love the look of the device itself, its shape, sleek finish, and the option to add other colored sporty straps (I have the green and white SportVent straps now). The device UI is compact, easy to use, glanceable, and supports simple gesture interaction. Although I think the lack of a color UI takes away from the #fashtech aesthetic, most people remark on how great the Basis Peak looks on.





The phrasing of some of the messages, that shouty “CHARGE ME!” in particular, seems out of step with the crafted look and a modern UX for the mobile, selfie, visual, fit’n’finish world. Nothing major though.

I also enjoy the visualizations presented from the sensed data of my activity, though some may be a little dense for some people to grasp. These visualizations allow me plenty of insight to track my progress and are an easy way to explore personal habits and data of all useful sorts.


The sleep analytics are awesome and for the first time enabled me to see a relationship between the nature of my sleep (or lack of it) and my fitness. I track everything diligently and rely on unfolding habits and patterns to progress things. I am never bored by the Basis Peak.



That sleep summary email at the end of the week reminds me of how well I am doing (or not!). A pause for thought as I enter another week.



Although most of my activity is running and walking, this ability to unlock more, diverse “habits”, to mix it up, to try new things, and to explore variants of activities is both motivationally challenging and rewarding. “Torch More Calories”? Hell, yeah! An “Evening Lap”? Go for it. “Run Club”? I’m there. My kind of gamification (personal exhortation, really).

I am sure the lack of built-in social and community features won’t work for everyone, and there are merits to sharing, but what the Basis Peak offers works for me. Fitness isn’t a Facebook-level activity for a lot of us. As my Basis Peak-using colleague Jake Kuramoto (@jkuramot) might say: “This is about quantified self, not quantified community.”

Other things I have noticed about my usage of the Basis Peak are that I am more inclined to rely on the mobile dashboard and activity stream than I did with other fitness bands, turning to the desktop dashboard only to obtain more data and analysis at the weekend. I dig that activity feed and glanceable style of the dashboard, and those little messages again:





In all, a great user experience. It’s my favourite piece of wearable tech right now. I wear my Basis Peak all the time (another “habit” unlocked) and the battery life works great for my global, mobile lifestyle.

The Basis Peak works for me. The technology and sensors rock. And it looks great. An emerging innovation story I guess, and I’m excited as to where the Basis Peak is going.

Even my 11-year-old wants one. And in preference to… that other Watch. I prefer it too.

That says something of the appeal and potential of the Basis Peak.

Four Weeks with the Garmin Vivosmart

September 3rd, 2015 2 Comments

The Year of Data continues for me, and yesterday, I finished a four-week relationship with the Garmin Vivosmart.

I use relationship purposefully here because if you use a wearable to track fitness and sleep, you’re wearing it a lot, and it actually becomes a little friend (or enemy) that’s almost always with you. Wearables are very personal devices.

If you’re scoring at home, 2015 has been a parade of wearables for me, capped most recently by a month of wearing nothing at all.

After that month of nothing, I nearly ended the experimentation. However, I already had two more wearables new and still in the box. So, next up was the Vivosmart.

I didn’t know Garmin made wearables at all until OHUG 2014 where I met a couple people wearing Garmin devices. Turns out, Garmin makes an impressive array of wearable devices, running the gamut from casual to hardcore athlete.

I chose the Vivosmart, at the casual end of the spectrum, because of its display and notification capabilities.

As always, before I launch into my impressions, you might want to read real reviews from Engadget and The Verge.

The Band

Finally, a wearable that doesn’t require a laptop to configure. The setup was all mobile: download the app and pair. Very nice for a change.


After the initial setup, however, I did need to tether the Vivosmart to my laptop, but I don’t think my case is common.

The firmware version that came out-of-the-box was 2.60, and after reading the Engadget review, I decided to update to the latest version. Specifically, I wanted the notification actions that came in 3.40. There didn’t seem to be a way to get this update over-the-air, so I had to install Garmin Express on my Mac and tether the Vivosmart to install the update, a very quick and painless process.

This must have been because I was jumping through several firmware versions, since the Vivosmart did get an over-the-air update at some point later without Garmin Express.

Like all the rest, the Vivosmart has a custom cable for charging and tethering, and this one looks like a mouthguard.


Looks aside, getting the contacts to line up just right was a learning process, but happily, I didn’t charge it very often.

The low power, touch display is pretty cool. The band feels rubbery, and the display is completely integrated with no visible bezel, pretty impressive bit of industrial design. The display is surprisingly bright, easily visible in full sunlight and useful as a flashlight in the dark.

There are several screens you swipe to access, and they can be configured from the mobile app, e.g. I quickly ended up hiding the music control, more on that in a minute. Long-pressing opens another set of options and menus.

The Vivosmart has sleep tracking, one thing I actually missed during my device cleanse. Like the Jawbone UP24, it provides a way to track sleep manually. I tried this and failed miserably because somehow during the night the sleep tracking ended.

The reason? The display activates when anything touches it. So, while I slept, the display touched the sheets, the pillow, etc. registering each touch as an interaction, which finally resulted in turning off sleep mode.

This is exactly how I discovered the find phone option. While using my laptop, I wore the Vivosmart upside down to prevent the metal Garmin clasp on the underside of the device from scratching the aluminum, a very common problem with wrist-worn accessories.

During a meeting my phone started blinking its camera flash and blaring a noise. A notification from Garmin Connect declared it had found my phone. I looked at the band, and sure enough, it was in one of the nested menus.

So, the screen is cool, but it tends to register everything it touches, even water activated it. Not to mention the rather unnerving experience of the display coming on in a dark room while partially awake, definitely not cool.

Luckily, I found the band and app auto-detect sleep, a huge save.

Functionally, the battery life was about five days, which is nice. When the battery got low, a low battery icon appeared on the time and date screen. You can see it in the picture. Once full, that icon disappeared, also nice.

The Vivosmart can control audio playing on the phone, a nice feature for running I guess. I run with Bluetooth headphones, and having two devices paired for audio confused my phone, causing it to play through its own speakers. So, I disabled the playback screen via the app.

Like most fitness bands, this one is water resistant to 5 ATM (50 meters), and I wore it in the shower with no ill effects, except for the random touches when water hit the device’s screen. I actually tested this by running water on it and using the water to navigate through the screens.

Syncing the band with the phone was an adventure. Sometimes, it was immediate. Other times, I had to toggle Bluetooth off/on. Could be my impatience, but the band would lose connectivity sometimes when it was clearly within range, so I don’t think it was me.

The Vivosmart has a move indicator which is nice as a reminder. However, I quickly disabled it because its times weren’t configurable, and it would go off while I was moving. Seriously, that happened a few times.

The App and Data

As with most fitness trackers, Garmin provides both a mobile app and a web app. Both are cleanly designed and easy to use, although I didn’t use the web app much at all. Garmin Connect has a nice array of features, to match the range of athletes to which they cater, I suppose.


I probably only used 25% of the total features, and I liked what I used.

I did find the mobile app a bit tree-based, meaning I found myself backing up to the main dashboard and then proceeding into another section.

Garmin tracks the usual activity data, steps, calories, miles, etc. There’s a wide array of activities you can choose from, but I’m a boring treadmill runner so I used none of that.

For sleep, it tracks deep and light sleep and awake time, and I found something called “Sleep Mood,” though I have no idea what that is.

One feature I don’t recall seeing anywhere else is the automatic goal setting for steps, which increases incrementally as you meet your daily goal. The starting default was 7,500 steps, and each day, the goal rose a little, I assume based on how much I had surpassed it the previous day. It topped out at 13,610.

I passed the goal every day I wore the Vivosmart, so I don’t know what happens if you fail to meet it.

You can set the goal to be fixed, but I liked this daily challenge approach. There were days I worried I wouldn’t make the step number, and it actually did spur me to be more active. I guess I’m easily manipulated.
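Garmin hasn’t published the formula as far as I know, but the behavior felt like something along these lines; this is pure guesswork on my part, including the miss branch:

def next_goal(goal, steps, floor=7500, ceiling=13610):
    """Guess at the auto-goal logic: creep the goal up by a fraction
    of yesterday's surplus, and presumably ease off after a miss."""
    if steps >= goal:
        goal += (steps - goal) * 0.1
    else:
        goal -= (goal - steps) * 0.1
    return int(min(max(goal, floor), ceiling))

goal = 7500
for day_steps in [9000, 10500, 12000]:  # a good week
    goal = next_goal(goal, day_steps)   # 7650, then 7935, then 8341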

Possibly the biggest win for Garmin Connect is its notification capabilities. It supports call, text and calendar notifications, like some others do, but in addition, there is also a nice range of other apps from which you can get notifications.

And there’s the feature I mentioned earlier, taking actions from the band. I tried this with little success, but I only turned on notifications for text messages.

One possible reason why Garmin has such robust notifications may be its developer ecosystem. There’s a Garmin Connect API and a store for third party apps. I didn’t use any, mostly because I’m lazy.

That, and one of the kind volunteers for our guerrilla Apple Watch testing at OHUG warned me that some apps had borked his Garmin. He had the high-end fenix 3, quite a nice piece of technology in an Ultan-approved design.

Finally, Garmin Connect offers exports and integrations with other fitness services like RunKeeper, Strava, etc. They’re definitely developer-friendly, which we like.

Overall, I found the Vivosmart to be an average device, some stuff to like, some stuff to dislike. The bland black version I chose didn’t help; Ultan (@ultan) would hate it, but Garmin does offer some color options.

I like the apps and the ecosystem, and I think the wide range of devices Garmin offers should make them very sticky for people who move from casual running to higher level fitness.

If I end up going back to Garmin, I’ll probably get a different device. If only I could justify the fenix 3, I’m just not serious enough, would feel like a poseur.

Find the comments.

Reducing User Friction

September 2nd, 2015 3 Comments

A few nights ago a Domino’s Pizza commercial got my attention. It is called “Sarah Loves Emoji.”

At the end, the fictional character Sarah finishes by simply saying “only Domino’s gets me.”

The idea of texting an emoji, tweeting, or using a smart TV or a smartwatch to automagically order pizza fascinates me. What Domino’s is attempting to do here is reduce user friction, which is defined as anything that prevents a user from accomplishing a goal. After researching Domino’s Anywhere user accounts, I found a negative post from a frustrated user, of course! This proves that even if a system is designed to reduce friction, the human element in the process is bound to fail at some point. Regardless, I think it’s pretty cool that consumer-oriented companies are thinking “outside the box.”


As a longtime fan of building instant messaging (XMPP/Jabber) and SMS (Twilio) bots, I understand how these technologies can increase productivity and reduce user friction. Even single-button devices (think Amazon Dash, or my Staples Easy Button hack) can serve a useful purpose.
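The plumbing for such a bot is genuinely small, which is part of the appeal. Here’s a minimal sketch of a Twilio SMS responder, with a hypothetical trigger word and hand-rolled TwiML so it doesn’t depend on any helper library version:

from flask import Flask, request

app = Flask(__name__)

TWIML = """<?xml version="1.0" encoding="UTF-8"?>
<Response><Message>{}</Message></Response>"""

@app.route("/sms", methods=["POST"])
def sms():
    # Twilio delivers the inbound text in the "Body" form parameter.
    body = request.form.get("Body", "").strip().lower()
    if body == "pizza":  # hypothetical trigger word
        reply = "Got it. Your usual order is on the way."
    else:
        reply = "Text PIZZA to reorder your usual."
    return TWIML.format(reply), 200, {"Content-Type": "application/xml"}

if __name__ == "__main__":
    app.run(port=5000)

Point a Twilio number’s messaging webhook at /sms and you have the skeleton of a “Sarah Loves Emoji” experience.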

I believe we will start to see more use cases where input is no longer tied to a single web UI or mobile app. Instead, we will see how more ubiquitous input channels, like text, Twitter, etc., can be used to start or complete a process. After all, email and text seem to be here to stay for a while, but that’s the subject of a different post.

I think we should all strive to have our customers ultimately say that we “get them.”