Holiday Project, A DIY CNC Adventure

January 16th, 2017

This past holiday break, I mostly stayed in the garage, setting up a CNC machine and cutting everything I deemed cuttable by it.

Raymond is busy on CNC jobs

Owning a CNC machine has been on my list for a long time: a high-quality, professional-grade machine is out of my price range, and a "toy"-level DIY one may not be useful enough. But every time I built an electronic toy, like Pac-Man on a String, I wished I had a precision CNC machine to build a case for it. Recently, a casual chat with Jeremy (@jrwashley) brought up the topic of DIY CNC, and I was encouraged to build one now; otherwise, it would just remain a spot on the wish list.

I researched the CNC landscape, focusing primarily on entry-level to mid-level DIY types. You can find lots of "CNC routers" on Amazon in the $400 – $800 range, but be aware that the controllers are antiques that require an old desktop PC with a parallel port. You can also find many CNC DIY builds on Instructables.com, which give you good ideas about all the elements that make up a machine, but sourcing or custom-making the parts can take a while, with possible fitting issues when parts come from different places. Many entry-level CNC machines can do light engraving and cutting on softer materials, but will have difficulty with precision cutting of harder materials. Ready-made mid-level CNC machines, such as the ShopBot ones, handle most jobs better, but they can easily run over $5,000.

Choose it.

Machine capabilities and quality usually correlate with the price tag, so I had to ask what I really needed in order to keep the price in a reasonable range while still getting a machine that can do a decent job. If you are in the market for a CNC machine, you should probably think along the same lines. Here is what I laid out:

  1. Mostly work with acrylic and plastic, some decent-size woodwork, and occasionally aluminum sheet;
  2. A good-sized work area for a desktop/bench-top machine, roughly 10″ to 20″ on the X and Y axes;
  3. A solid structure to ensure the precision and strength of cuts, and the durability of the machine itself;
  4. The controller is less of a concern for me, as I will likely replace it later with my own experimental controller, but I do want it to work with my current computer instead of going back to old parallel ports;
  5. Controller, CAD and CAM software: what are their capabilities, and are they open source?
  6. For now, I want a capable CNC machine that I can use immediately, rather than figuring out how a CNC machine is built, so I can learn to operate it without the complication of build issues. After that, I may come back and make an inexpensive DIY CNC machine, or some other installation, from the same control mechanism.

With the above considerations, it was clear that I should get a kit where all the parts come from the same vendor (so no mismatches), but where I still get the chance to examine every part (to understand its purpose) and assemble the machine myself (to understand its composition). That would let me build my first CNC machine with fewer potential problems (and help isolate machine issues from operation issues during the learning stage), while still learning the ins and outs of a CNC machine well enough to potentially design my own version in the future. Two roughly equal candidates surfaced: the Shapeoko 3 from Carbide 3D and the X-Carve from Inventables. I like Inventables because it is an open system with great resources, but the solid X/Y rails of the Shapeoko 3 won out.

Build it.

Shapeoko-3 Parts

If you have prior experience assembling a CNC machine, the Shapeoko 3 kit may take you several hours to put together. I really took it slow, examining each part to understand why and how it is designed and how it functions in the final machine.

CNC controller wiring

The controller takes G-code commands from your design and moves the router precisely along the X, Y and Z axes to make the cuts. If I want to do further experiments, the controller board can be replaced with an Arduino running Grbl.
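Grbl's side of the conversation is refreshingly simple: the host sends one G-code line over serial, waits for Grbl to answer "ok" (or "error"), then sends the next line. Here is a minimal host-side sketch of that loop; the port name, baud rate and file name are assumptions, and a real sender would add status polling and proper error handling:

    // Minimal Grbl streaming sketch. Assumptions: controller on /dev/ttyUSB0 at Grbl's
    // default 115200 baud, G-code from the CAM step saved as job.nc.
    #include <fcntl.h>
    #include <termios.h>
    #include <unistd.h>
    #include <fstream>
    #include <iostream>
    #include <string>

    static std::string readReply(int fd) {
        std::string line;
        char c;
        while (read(fd, &c, 1) == 1) {        // blocking read, one byte at a time
            if (c == '\n') break;
            if (c != '\r') line += c;
        }
        return line;
    }

    int main() {
        int fd = open("/dev/ttyUSB0", O_RDWR | O_NOCTTY);
        if (fd < 0) { std::cerr << "cannot open serial port\n"; return 1; }

        termios tio{};
        tcgetattr(fd, &tio);
        cfmakeraw(&tio);                      // raw 8N1, no echo
        tio.c_cc[VMIN] = 1;                   // block until at least one byte arrives
        tio.c_cc[VTIME] = 0;
        cfsetispeed(&tio, B115200);
        cfsetospeed(&tio, B115200);
        tcsetattr(fd, TCSANOW, &tio);

        sleep(2);                             // opening the port resets Grbl; let its banner print
        tcflush(fd, TCIFLUSH);

        std::ifstream job("job.nc");
        std::string gcode;
        while (std::getline(job, gcode)) {
            if (gcode.empty()) continue;
            gcode += '\n';
            write(fd, gcode.c_str(), gcode.size());
            std::cout << gcode << "-> " << readReply(fd) << "\n";  // wait for "ok" before the next line
        }
        close(fd);
        return 0;
    }

Hobby senders like Universal Gcode Sender, or Carbide 3D's own Carbide Motion, do essentially this, just with a nicer UI on top.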

Operate it.

The basic workflow from idea to build:

  1. Sketch up on a napkin 🙂
  2. Transfer your drawing from the napkin to a CAD (Computer-Aided Design) program, where you do the engineering or artistic design,
  3. Export the CAD drawing and import it into a CAM (Computer-Aided Manufacturing) program, where you specify the material (type, size, thickness, etc.), the mill bits (type, size, etc.), the operation parameters (feed rate, plunge rate, step depth), and the tool paths and cutting order. You should imagine how the machine will cut from start to end, because that is exactly what you are instructing it to do here,
  4. Generate G-code from the CAM program to feed to the CNC controller software,
  5. CNC cutting and milling exert real force on the material, so hold your stock securely and take safety precautions,
  6. Start the job and observe – various issues can occur if you are a beginner or experimenting,
  7. If everything goes well, you get the job done. Polish it up: cleaning, sanding, buffing, etc.

CNC in operation

During the first week, I ran into a lot of issues at step 6 and had to hit the emergency STOP more than once. The job run is when all the parameters and factors come together, and you have to experiment to find the right combination of operational parameters for your design, on a particular material, using various bits for engraving, milling and cutting. That is a large number of combinations, and experience definitely helps.

For example, not all acrylics are equal – cast acrylic is generally good for cutting and even milling, but extruded acrylic tends to melt and form a blob around the bit. You can slow down the spindle and increase the feed rate to reduce the rubbing, but go too far in that direction and you could break the bit. Larger bits work well for milling wood (followed by sanding and polishing), but on acrylic it is probably better to stick to engraving, as milling may dull the luster.
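A rule of thumb that helped me reason about the melt-or-break trade-off is chip load: how much material each cutting edge removes per spindle revolution. Too small and the bit rubs and melts the acrylic; too big and the bit snaps. The arithmetic is trivial (the numbers below are only an illustration, not a recommendation):

    // Chip load per tooth = feed rate / (spindle speed x number of flutes).
    #include <iostream>

    double chipLoad(double feedInPerMin, double spindleRpm, int flutes) {
        return feedInPerMin / (spindleRpm * flutes);
    }

    int main() {
        // Illustrative numbers only: 60 in/min feed, 10,000 RPM, single-flute bit.
        std::cout << chipLoad(60.0, 10000.0, 1) << " in per tooth\n";   // prints 0.006
        return 0;
    }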

Some CAD/CAM software usually comes with a CNC machine – free or paid, open source or proprietary – to get you started. Once you get a handle on the CNC workflow, though, it is worth exploring different CAD and CAM packages, as each provides different capabilities your designs may require.

Show samples.

The first set of successful cuts and engravings on acrylic: 1. Abstract flower; 2. Springtime – little girl, kitten and butterfly.

Engraving V-cut on two types of acrylic material

After almost a week of experimenting, I had dialed in the operational parameters (spindle speed, feed rate, plunge rate, material, cut/engrave/mill) and the proper bits (size and shape) for making a batch of gifts for the AppsLab team:

Gifts to AppsLab team members

As apprentices in the OAUX organization, we all know the UX design philosophy: Glance, Scan and Commit. And that is represented here in CNC-cut "elements":

UX design “elements”

If you still remember your high school chemistry, they are elements from the periodic table: Gallium (Ga for Glance), Scandium (Sc for Scan), and Curium (Cm for Commit). Here is a closeup of the Gallium piece, showing the element symbol, name, atomic number and atomic weight:

Element: Gallium under pink light

Yeah, engravings on acrylic look better under light. Should I mill a wood channel and install an LED strip for the display?

Google Home Notifier (Google Home Push Notifications)

December 23rd, 2016

Working in an emerging technologies team has a lot of perks. One of them is kicking the tires of technologies that are about to become mainstream. It also comes with the somewhat fun privilege of gaining interwebz bragging rights, the equivalent of the emblematic/annoying "first" comments that plagued web forums just a few years ago. Today I have another "first": a Google Home Notifier to send push notifications.

Among my list of "firsts"* (afaik) are the following:

Ok, ok. I know. It looks like a sad bragging list. It is, but nevertheless I’m proud of it. This brings me to the latest “first,” but let me back up before I dig into how and why I built this Google Home Notifier.

Two years ago I was lucky enough to be in the first batch of Amazon Echo buyers. That led me to be one of the first to “hack” (here and here) with it, which in turn got me some notoriety with the Amazon Echo team. So I was asked under NDA to review their Alpha SDK. I happily agreed. After playing with it for a few weeks, I had some feedback.  One of my main requests was exactly this: I wanted to be able to send a push notification to the device. Why? Here are some use cases:

The list can grow with more compelling use cases, but when I spoke with the Amazon Echo team they had valid concerns. What if this feature got abused by marketers or others? Alexa would become a marketing/PR nightmare, with stories of constant blabbing, inopportune announcements, etc. So far it seems they don't have a roadmap for this feature.

Fair enough. Two years ago, at the AT&T Developer Summit Hackathon, our team used the Amazon Echo. We wanted to send a notification to it, so our half-baked solution was to use the Echo as a Bluetooth speaker connected to a Mac and use the "say" command. The voice didn't quite match the Echo's sultry voice, and the whole thing was far from ideal from many technical standpoints.

Now the Amazon Echo finally has a worthwhile competitor: the Google Home. So while we wait to see whether Google will implement push notifications, I created my own, and it's available for anyone. I tested it with IFTTT and it works like a charm.

One drawback of my implementation is that you have to run it as a server/service inside your house; it can run on a Raspberry Pi, a PC or a Mac. Finally, I want to acknowledge that I found out this was possible thanks to this article, which describes how to send a notification using Android. The reason I'm still claiming a "first" with my solution is that it accepts external events from sites like IFTTT, or from any other server that can POST a notification.

*I'm happy to stand corrected on all my "first" claims! If you happen to read this and know of someone (yourself included) who did it first, please let me know so I can take a humble pill.

 

Emotion Recognition at Oracle Maker Faire

December 21st, 2016

Emotibot, an emotion sensing robot.


A few weeks before the first-ever Oracle-sponsored Maker Faire, I was experimenting with some of the cognitive (vision) recognition APIs available. The Google Vision API, Watson Visual Recognition and the Microsoft Computer Vision API are some of the biggest players in this field right now.

After testing all of them, I found the idea behind Microsoft's CaptionBot really compelling: upload an image to the CaptionBot and it will try to come up with a coherent caption based on a mashup of three of their cognitive services (Computer Vision API + Emotion API + Bing Image API). I wrote an iOS app (with its own Swift framework) to consume this mashup and took it for a spin.

I gave my phone to my kids to test the app. They ran around the house and were truly amused by pointing the camera at an object and getting a description.

So when the call came to create a project for the Oracle Maker Faire with my kids, we sat down and started brainstorming. The concept was still fresh in their minds: a computer could guess, as closely as possible, what an object is, and even guess a facial expression.

Brainstorming ideas

They came up with a plan and a name: Emotibot, an emotion-sensing robot. We drove to the closest Home Depot to find materials. We found an LED glow ball lamp that worked perfectly for the head of our emotion-sensing robot.

Staging parts before drilling holes

 

We used the following materials to build our robot:

The robot worked as follows:

  1. An ultrasonic Ping sensor detected when someone was close to it (about 10 inches); see the sketch after this list.
  2. The robot started to talk using festival-lite (flite). The mouth servo was synchronized with flite by counting the words and moving the mouth for each word spoken.
  3. A picture was snapped and submitted to the Microsoft Emotion API. The JSON result was parsed and then spoken by Emotibot using flite.
  4. Using the blink(1) USB LEDs, the robot changed colors based on the detected emotion.
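That proximity check in step 1 takes only a few lines on an Arduino-class board. This is not our exact build; it assumes an HC-SR04-style trigger/echo sensor on pins 9 and 10, so adjust for whatever ultrasonic module and wiring you actually have:

    // Proximity-trigger sketch. Assumptions: HC-SR04-style ultrasonic sensor,
    // TRIG on pin 9, ECHO on pin 10; the real Emotibot wiring may differ.
    const int TRIG_PIN = 9;
    const int ECHO_PIN = 10;
    const float TRIGGER_INCHES = 10.0;   // "someone is close" threshold from step 1

    void setup() {
      pinMode(TRIG_PIN, OUTPUT);
      pinMode(ECHO_PIN, INPUT);
      Serial.begin(9600);
    }

    float readInches() {
      digitalWrite(TRIG_PIN, LOW);
      delayMicroseconds(2);
      digitalWrite(TRIG_PIN, HIGH);      // a 10 microsecond pulse starts one measurement
      delayMicroseconds(10);
      digitalWrite(TRIG_PIN, LOW);
      unsigned long echoUs = pulseIn(ECHO_PIN, HIGH, 30000UL);  // round-trip echo time, 30 ms timeout
      if (echoUs == 0) return 999.0;     // timeout: nothing in range
      return echoUs / 148.0;             // roughly 148 us of round-trip time per inch of distance
    }

    void loop() {
      if (readInches() < TRIGGER_INCHES) {
        Serial.println("visitor");       // the host side would kick off flite and the Emotion API call here
        delay(5000);                     // don't retrigger while the robot is still talking
      }
      delay(100);
    }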

 

Overall the project was a success. I was able to involve my kids, and they learned some concepts along the way. If anyone is interested in seeing the code, hit me up in the comments and I might put it on GitHub.

Color Workshop at the Oracle Maker Faire

December 9th, 2016

Workshop preparations

For the last six years, each Friday, I have been bringing a mobile painting studio to my kids' school. Among other things, we work on what I call "color studies" – a set of formal exercises aimed at sharpening one's sense of color and developing painting skill, just as musical scales, chords and arpeggios sharpen one's sense of harmony and develop performance skills. I was originally inspired to offer this level of formalism by the work of Josef Albers, and my students and I have been playing with the color studies for two years at the school.

Here are some of the students’ color studies:


I attended the Oracle Maker Faire with the upper echelon of the students (5th and 6th graders), and our contribution was three-fold: we had an exhibit of our color studies; we taught the color workshop to about eighteen self-selected participants; and I got to rattle on from the main stage about the Maker Movement and art while the kids kept me company up there.

Exhibit stands ready for a trip from Santa Cruz to Redwood Shores


At the workshop

It was big fun to bring the kids, their paintings, and our workshop to the Oracle Maker Faire. Many people talk about work/life balance; I am grateful I get to live it 🙂 My thanks go to Oracle for the event, to friends and family for swinging by to support me, and to the workshop participants big and small who showed up and painted their hearts out.

Pac-Man on a String

December 8th, 2016

Oracle hosted the first-ever company co-sponsored Maker Faire event on November 17, 2016.

When I heard about the event, I had it in mind to encourage a group of middle-schoolers to make a game for it. I had been teaching them Arduino, NodeMCU and MQTT concepts occasionally over weekends, and this was the perfect opportunity to put it all together in a build.

After some brainstorming, we came up with the "Pac-Man on a String" concept. Because of the way the game plays (the player needs to rush to a location and perform a quick action), I picked "JET – React" as the team name to describe the gameplay (and if you write JavaScript at Oracle, you know JET and React).

There are some interesting design points we have put into the build:

We adapted the game plot of the well-known Pac-Man, changed the narrative to collecting gems (as Emily likes sparkling gems), and changed the ghost to a wasp, so the player can either flee or attack on the spot with precise timing. Coding-wise, it turns out a "wasp" is very much like a "gem," except that it chases the player (represented as a 5-LED segment).

The modified game story and play mechanics fit very well with the limited controls and interface we had. Making an interesting game with just one joystick and one LED strip presents serious constraints on what you can work with, but in the end, I think we achieved excellent results.

To keep it simple, we coded a 5-LED segment (we call it the Saber) that the player can move up and down the LED strip and use to collect a gem or attack a wasp. Most people can align a 5-LED segment with a sparkling dot on the strip, right?

To make it challenging (that is, to make high scores hard to earn), we designed the scoring logic to demand good timing and quick reflexes. First, gems and wasps can show up at any location, at any moment, for any duration (all of these parameters are randomly generated), so the player has to move fast and decide quickly what to do when multiple gems and wasps are showing. Second, the Saber is a 5-LED segment: collecting a gem with the middle LED earns 100 points, the two LEDs next to it earn 50 points, and the two outer LEDs earn 25 points. There is a strong incentive to use the middle LED, but moving the Saber to exactly the right spot takes real timing and anticipation. Finally, a wasp can sting you, but if you are quick enough you can knock it out before it stings and gain points, which again requires great timing. Overall, it takes practice to play this game well.
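The scoring rule boils down to how far the gem lands from the Saber's center LED. A quick sketch of that logic (the names are mine, not the actual game code):

    // 100 points on the Saber's center LED, 50 on its two neighbors, 25 on the two edge LEDs.
    // saberCenter and gemPos are LED indices along the strip.
    int scoreForHit(int saberCenter, int gemPos) {
      int offset = abs(gemPos - saberCenter);
      if (offset == 0) return 100;   // dead center
      if (offset == 1) return 50;    // one LED off
      if (offset == 2) return 25;    // caught with the very edge of the 5-LED Saber
      return 0;                      // missed entirely
    }

Wasp hits can reuse the same idea; the extra twist is the timing window before the sting lands.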

So here is the team info and game play description:

These are the components and flow for the game build.

The game code runs on the Arduino Mega board, where the player moves the Saber to collect and attack. The code controls the state of every LED at all times (representing empty space, gems and wasps), detects player actions, and determines when the Saber collides with a gem or wasp. It applies the scoring logic when a collision is detected, and sends game events to the NodeMCU for reporting.

The code on the NodeMCU relays game events to CloudMQTT, so that the app on the tablet can receive them, start and stop games, and accumulate points. Once a game is over, the score is recorded in an Oracle APEX database that keeps the leaderboard.
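A relay like that fits comfortably in a small NodeMCU sketch. The one below is a simplified illustration using the common PubSubClient library; the broker host, port, topic and credentials are placeholders, not our actual CloudMQTT settings:

    // Assumption: the Arduino Mega sends one game event per line over serial, e.g. "gem:100".
    #include <ESP8266WiFi.h>
    #include <PubSubClient.h>

    WiFiClient wifi;
    PubSubClient mqtt(wifi);

    void setup() {
      Serial.begin(115200);                                   // serial link from the Arduino Mega
      WiFi.begin("your-ssid", "your-password");               // placeholder WiFi credentials
      while (WiFi.status() != WL_CONNECTED) delay(500);
      mqtt.setServer("your-instance.cloudmqtt.com", 12345);   // placeholder CloudMQTT host and port
    }

    void loop() {
      if (!mqtt.connected()) mqtt.connect("pacman-relay", "user", "pass");  // placeholder credentials
      if (Serial.available()) {
        String event = Serial.readStringUntil('\n');
        mqtt.publish("jetreact/game", event.c_str());         // the tablet app subscribes to this topic
      }
      mqtt.loop();
    }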

First prototype build, using a wooden tea box:

This is the cleaned-up final build, hosting all the components in an acrylic box:

This is a close-up look at the inside of the box:

Scoreboard and leaderboard in an Android app:

The game as played out at the Maker Faire. Both adults and kids had a blast!

Update: Here’s a video of the gameplay.

Trip to Black (W)holes

November 28th, 2016


Last week my kids' school went on a field trip to the University of California, Santa Cruz to see a black hole multimedia exhibition. We were invited there by Enrico Ramirez-Ruiz, an astrophysicist and a fellow parent at the school. When Enrico is not busy pushing the frontiers of science (he is partial to violent explosions), he teaches astrophysics to children ages 4 to 12.

The exhibition combined visualized data from a recent Extreme Mass Ratio Inspiral (look it up) event, projected onto a round screen on the floor, with sound mapped to the acceleration of the stellar matter spiraling into the black hole, and an auxiliary animation of Einstein's scribbles projected onto the walls. It was an immersive experience.

The reality of being INSIDE the installation, together with friends and the teacher, stimulated thinking and collaboration. The kids started asking questions, and there was no stopping them. Enrico is awesome at understanding the underlying questions children ask, no matter how well or poorly they put them into words.

There were certain abstractions in the visualization – it was rendered on a logarithmic scale, the perpendicular rays had to be "flattened" onto the projection plane, and the meaning of color was reversed to red for hot and blue for cold. Interestingly, these abstractions provoked more thinking and more discussion.

Enrico explained that it is a balancing act to find a happy medium between scientific accuracy and the intuitiveness of the visualization.

Where the visual props fall short, Enrico switches to explaining with his hands; he is as good at it as Richard Feynman was, creating a kind of single-actor science visualization theatre.

I was fascinated to hear from Enrico that, as a scientist, not only does he use imagery for explanations, he also thinks in images.

I’ll use this as a good excuse to break into quoting my favorite parallel quotes.

Enjoy.

A Personal Assistant Technologies or PAT Hackathon

November 7th, 2016

When the tech media started proclaiming 2016 the year of the bots, they seem to have nailed it. At Oracle we have at least three groups working on bots, OAUX included.

One of the latest forays into bots was a Personal Assistant Technologies (PAT) hackathon, organized by Laurie Pattison’s (@lsptahoe) Apps UX Innovation Events team, open to people across Oracle. The goal? Create a great use case for bots with a great user experience.


Because I’ve done a fair amount of research on bots recently, I was selected as a mentor, though the MVM (most valuable mentor) prizes definitely went to Anthony Lai (@anthonyslai) and Noel Portugal (@noelportugal), who provided all the technical assistance for the teams.


The most interesting part of a hackathon is, of course, at the end. Each team has three short minutes to show what they built and why it’s awesome. There were a lot of teams, covering use cases from sales, service, supply chain, finance, developer tools, and project management. It was a pleasure just to see all the creativity across groups that came from distant parts of Oracle—including a few who traveled all the way from India and Armenia just to participate.


The teams had to use an NLP system and a bot framework to interact with Oracle systems to actually do something—some were more transactional, others more about querying information. The most important thing (to me, at least) about a bot use case is that it needs to be better than the existing way you’d do something. Why would a user want to use a bot—something new they have to learn, even if it is easy—instead of doing it the old fashioned way?

A big part of the potential value of bots is that it's easy to use them from a variety of devices—if all you need to do is type or speak, you can easily use a text message from a mobile device, an Amazon Echo, IM on your desktop, or maybe even a smartwatch. The teams used a variety of input methods, pointing out the real value someone can unlock with the ability to be productive while on the go or in contexts we don't normally associate with work.


Also represented in the mentor and judge crowd were the Oracle Virtual Assistant (part of the RightNow team) and the Chatbot Cloud Service, which Larry Ellison introduced at OpenWorld this year. Some teams leveraged the Oracle Virtual Assistant for their submissions, but it wasn't required.

It’s an exciting time, now that natural language technology is finally enabling some wonderful user experiences. I, for one, am looking forward to seeing all the upcoming cycles of design-build-test-repeat in the quest for a useful and productive bot experience.

Mixed Reality Demo – The Physical Parts

October 31st, 2016

I have always been intrigued by how deeply people get attached to characters in games (e.g. Second Life) or to virtual pets. With sufficient advances in technology, virtual characters may eventually cross the boundary and become attached to real-life people (as in sci-fi movies such as "Her"). While that is still some way off, I've been looking to explore two-way communication and interaction between the virtual and real worlds.

At AppsLab, we have enough skills to build physical toys that we can communicate with and control, but we were missing a game or virtual environment that is appealing and communicative. I tried interacting with the Minecraft environment but stopped when it was sold. So Jake's casual mention of MindWurld from Ed Jones (@edhjones) sparked great interest!

MindWurld is a fantastic game. You choose a virtual character (an avatar) to walk around the island of Hawaii, freeing and rescuing pigs, collecting treasure, and playing the trick of spawning pigs and catching them with a Pokeball. And yes, we have full access to the source code (see Ed's post for details).

So we came up with a game plot quickly, as manifested in the final build:


Real controller – Virtual avatar – Real robot

  1. A player in the real world communicates with a virtual character in MindWurld;
  2. Each virtual game character and object has a mirrored object in the real world;
  3. Events and actions happen in sync between the real and virtual objects.

This is how we put things together:

Step 1 – Toy guitar as controller

We thought of having the player use their own cellphone to call a number to reach the avatar (the virtual character in the game) and simply tell it what to do over the phone. But the voice service provider was not responsive enough and OpenWorld was approaching fast, so we ditched that approach and went with a customized controller.

Ed is a guitar player, and the virtual avatar would be attending OpenWorld on his behalf, so it was fitting to use a toy guitar to represent him.


A toy guitar mod as controller

The toy guitar essentially provides many buttons that I can use to convey various commands and intentions, but the mod itself is a little more complex: each button produces a set of signals that feeds into a chip for playing music, so it is not a simple one-push, one-line reading.

I used an Arduino Pro Mini to read the signal pattern for each button push, did some noise filtering and processing, and then translated the result into a "player command," which is fed into a Bluefruit EZ-Key HID chip. The EZ-Key connects to a computer as an HID device, so each player command becomes a simple keystroke that controls the game.
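Stripped of the pattern decoding (which depends entirely on the toy's own wiring), the controller side looks roughly like the sketch below. It assumes the EZ-Key's RX is fed from a SoftwareSerial pin at 9600 baud and that the EZ-Key types out the ASCII it receives as keystrokes; the pins and key mapping are placeholders, not our actual build:

    #include <SoftwareSerial.h>

    SoftwareSerial ezkey(10, 11);   // RX, TX; only pin 11 (TX) actually goes to the EZ-Key's RX (assumed pins)

    void setup() {
      ezkey.begin(9600);            // the EZ-Key listens at 9600 baud
    }

    // Map a debounced, de-noised button pattern to a single game keystroke.
    // The real decoding of the guitar's signal lines is omitted here.
    char commandFor(int buttonPattern) {
      switch (buttonPattern) {
        case 1:  return 'w';        // e.g. walk forward
        case 2:  return 'p';        // e.g. throw the ball
        default: return 0;          // unknown pattern: do nothing
      }
    }

    void loop() {
      int pattern = 0;              // ...read and filter the guitar's signal lines here...
      char key = commandFor(pattern);
      if (key) ezkey.write(key);    // becomes a Bluetooth HID keystroke on the computer
      delay(20);
    }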

Step 2 – MiP robot as mirrored avatar

The MiP robot from WowWee is an inexpensive but very capable little robot. It balances itself on two wheels, moves back and forth, and spins on the spot, which lets it travel accurately along any path.

Oh, and it is quite a character. It makes happy, grumpy and lots of other noises, and shows many light patterns, to express a full range of emotions!


MiP robot as buddy in real world

The best part for us developers: it has an API in many languages, so we can program and control its movement, sounds and lights.

Whatever events happen in the MindWurld game (the avatar walking around, opening treasure boxes, spawning pigs, freeing and rescuing them) are sent over a socket to my robot controller program, which in turn asks the robot to perform the corresponding movement and act in suitably cheerful ways.

Originally, I made the MiP robot a mirror of the virtual character, in the sense that it walked the same way as its virtual counterpart in the game. That requires a large area for it to roam around in, so for the OAUX Exchange at OpenWorld, due to space limitations, I reprogrammed it to be a buddy of the virtual character: it does not move much, but it makes noises and blinks its lights to cheer for its virtual friend.

By now, we could test out the full cast of the game!

Step 3 – Juiced it up with a Pokeball

Meanwhile, Mark (@mvilrokx) had been busy printing Pokeballs: 3D-printed shells, polished and painted, outfitted with an unbalanced motor for vibration, an LED for color effects, and a NodeMCU for network connectivity, hooked up to an MQTT broker and ready for action.


Pokeball used to catch pig in virtual world.

Ed quickly outfitted the virtual character with a ball in hand, throwing at pigs to rescue them.

I quickly added some MQTT client code and relayed the ball-thrown and pig-rescued events to the MQTT broker, so the Pokeball in the real world would vibrate and flash whenever the virtual character threw the ball and caught pigs in MindWurld.
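On the Pokeball's NodeMCU, that client is a small subscriber that reacts to the two events. A rough sketch with PubSubClient; the broker, topic, pins and event names are placeholders rather than Mark's actual firmware:

    #include <ESP8266WiFi.h>
    #include <PubSubClient.h>

    const int MOTOR_PIN = D1;   // unbalanced motor for the rumble (pin choice is an assumption)
    const int LED_PIN   = D2;   // effect LED (also an assumption)

    WiFiClient wifi;
    PubSubClient mqtt(wifi);

    void onEvent(char* topic, byte* payload, unsigned int length) {
      char msg[32];
      if (length > sizeof(msg) - 1) length = sizeof(msg) - 1;
      memcpy(msg, payload, length);
      msg[length] = '\0';
      if (strcmp(msg, "ball_thrown") == 0) {         // virtual throw -> real rumble
        digitalWrite(MOTOR_PIN, HIGH); delay(300); digitalWrite(MOTOR_PIN, LOW);
      } else if (strcmp(msg, "pig_rescued") == 0) {  // virtual catch -> real light show
        for (int i = 0; i < 5; i++) { digitalWrite(LED_PIN, HIGH); delay(100); digitalWrite(LED_PIN, LOW); delay(100); }
      }
    }

    void setup() {
      pinMode(MOTOR_PIN, OUTPUT);
      pinMode(LED_PIN, OUTPUT);
      WiFi.begin("your-ssid", "your-password");       // placeholders
      while (WiFi.status() != WL_CONNECTED) delay(500);
      mqtt.setServer("broker.example.com", 1883);     // placeholder broker
      mqtt.setCallback(onEvent);
      mqtt.connect("pokeball");
      mqtt.subscribe("mindwurld/events");             // placeholder topic the game publishes to
    }

    void loop() {
      if (!mqtt.connected()) { mqtt.connect("pokeball"); mqtt.subscribe("mindwurld/events"); }
      mqtt.loop();
    }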


Play it out at OAUX Exchange

Oh, and that's the mixed reality game setup at the OAUX Exchange. Anthony had three days of fun playing rock star, together with "real" people, a "virtual" avatar, a "real" avatar, "virtual" balls and "real" balls.

Real Time Ambient Display at OpenWorld: The Software (for the Hardware)

October 20th, 2016

This is part 2 of my blog post series about the ambient visualization devices (part 1 covered the hardware). Also, please read John's post for details about the creation of the actual visualization, from concept to build. In the first part, I focused on the hardware: a sonar sensor connected to a NodeMCU. In this second part, the focal point is the software.

When I started working with ESPs a few years ago I was all gaga about the fact that you could use Lua to program these chips. However, over the last year, I have revised my thinking as I ran into stability issues with Lua. I now exclusively code in C/C++ for the ESPs, using the Arduino library for the ESP8266. This has led to much more stable firmware and, with the advent of the likes of PlatformIO, a much better development experience (and I'm all for better DX!).

As I was not going to be present at the Exchange myself to help with the setup of the devices, I had to make it as easy as possible to set up and use.  I could not assume that the person setting up the NodeMCUs had any knowledge about the NodeMCU, Sonars, C++ etc.  Ideally, they could just place it in a suitable location, switch on the NodeMCU and that would be it!  There were a few challenges I had to overcome to get to this ideal scenario.

First, the sonars needed to be "calibrated." The sonar just measures the time it takes for a "ping" to come back as it bounces off an object … any object. If I place the sonar on one side of the room and point it at the opposite wall, it will tell me how long it takes (in µs) for a "ping" to come back as it bounces off that wall. (You can then use the speed of sound to calculate how far away that wall is.) However, I want to know when somebody walks by the sensor, i.e. when the ping that comes back is not from the opposite wall but from something in between the wall and the sensor. In order to do this, I have to know how far away the wall is (or whatever fixed object the sonar is pointed at when it is placed down). Since I didn't know where these sensors were going to be placed, I did not know in advance where these walls would be, so this could not be coded upfront; it had to be done on-site. And since I could not rely on anybody being able to update the code on the fly, as mentioned earlier, the solution was to have the sonars "self-calibrate."

As soon as you turn on the NodeMCU, it goes into "calibration mode." For the first few seconds it takes a few hundred samples, under the assumption that whatever it "sees" initially is the wall opposite the device. It then stores this information for as long as the NodeMCU is powered on. After this, any ping that is close to the wall is assumed to be coming from the wall and is discarded. Whenever a ping is received from an object that is closer to the sonar than the wall, we assume that this is a person walking by the sensor (between the wall and the sensor) and we flag it. If you want to put the NodeMCU in a different location (presumably with the opposing wall at a different distance), you just switch it off, move it, and switch it back on; the calibration makes sure it works anywhere you place it. For the people setting up the sonars, this meant that all they had to do was place the sensors, switch them on, and make sure that for the first 1-2 seconds nothing was between the sensor and the opposite side (and if something was in the way by accident, they could just reset the NodeMCU, which would recalibrate it). This turned out to work great: some sensors had a small gap (~2 meters), others a much larger gap (5+ meters), all working just fine with the same code.
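In code, the self-calibration boils down to averaging a burst of readings at boot and treating anything noticeably closer than that baseline as a walk-by. A simplified sketch of the idea (the pin, sample count and margin are assumptions, and the actual firmware also drives the status LEDs and MQTT reporting described later in this post):

    const int SONAR_PIN = D5;           // PWM output of the sonar (pin choice is an assumption)
    const int SAMPLES   = 300;          // "a few hundred samples" right after boot
    const float MARGIN  = 0.8;          // closer than 80% of the baseline counts as a person

    long baselineUs = 0;                // echo time of whatever the sonar sees at boot (the opposite wall)

    long readRangeUs() {
      return pulseIn(SONAR_PIN, HIGH, 50000UL);   // one echo-time reading in microseconds
    }

    void calibrate() {
      long sum = 0;
      for (int i = 0; i < SAMPLES; i++) {
        long r = readRangeUs();
        if (r == 0) { i--; continue; }  // ignore timeouts (a production version would give up eventually)
        sum += r;
      }
      baselineUs = sum / SAMPLES;       // remember where the "wall" is while we stay powered on
    }

    void setup() {
      pinMode(SONAR_PIN, INPUT);
      calibrate();                      // the real firmware blinks the red LED during this phase
    }

    void loop() {
      long rangeUs = readRangeUs();
      if (rangeUs > 0 && rangeUs < baselineUs * MARGIN) {
        // Something is between the sensor and the wall: flag a walk-by
        // (the real firmware publishes this over MQTT, as described later).
      }
      delay(50);
    }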

Second, the NodeMCU needs to be configured to connect to a WiFi.  Typically this is hard-coded in the firmware, but again, this was not an option as I didn’t know what the WiFi SSID or password would be.  And even if I did, conference WiFi is notoriously bad (the Achilles heel of all IoT) so there was a distinct possibility that we would have to switch WiFi networks on-site to a better alternative (e.g. a local hotspot).  And as with the calibration, I could not rely on anybody being able to fix this in the code, on-site. Also, unlike the calibration, connecting to a WiFi requires human interaction; somebody has to enter the password.  The solution I implemented was for the NodeMCU to come with its own configuration web application.  Let me explain…

The NodeMCU is powerful enough to run its own web server, serving HTML, CSS and/or JS. The NodeMCU can also be an Access Point (AP), so you can connect to it like you connect to your router. It exposes an SSID, and when you connect your device to this network, you can request HTML pages and the NodeMCU web server will serve them to you. Note that this does not require any existing WiFi; the NodeMCU basically "brings its own" WiFi that you connect to.


NodeMCU Access Point (called ESP8266-16321847)

So I created a web server on the NodeMCU and built a few HTML pages, which I stored on the NodeMCU (in SPIFFS). Whenever you connect to a NodeMCU running this firmware and point your browser to 192.168.4.1, it serves up those pages, which allow you to configure that very same NodeMCU. The main page allows you to set the WiFi SSID and password (you can also configure the MQTT setup). This information then gets stored on the NodeMCU in flash (EEPROM) so it is persistent; even if you switch off the NodeMCU, it will "remember" the WiFi credentials.
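A bare-bones version of that configuration portal fits in a short sketch. This is a simplified stand-in for the real thing: the actual pages live in SPIFFS, whereas here a single hard-coded form is served, and the AP name and EEPROM layout are assumptions:

    #include <ESP8266WiFi.h>
    #include <ESP8266WebServer.h>
    #include <EEPROM.h>

    ESP8266WebServer server(80);        // reachable at 192.168.4.1 once you join the NodeMCU's own network

    void handleRoot() {
      server.send(200, "text/html",
        "<form action='/save' method='POST'>"
        "SSID <input name='ssid'> Password <input name='pass'>"
        "<input type='submit' value='Save'></form>");
    }

    void handleSave() {
      String ssid = server.arg("ssid");
      String pass = server.arg("pass");
      EEPROM.begin(128);                // persist credentials in flash so they survive a power cycle
      for (unsigned int i = 0; i < 64; i++) EEPROM.write(i, i < ssid.length() ? ssid[i] : 0);
      for (unsigned int i = 0; i < 64; i++) EEPROM.write(64 + i, i < pass.length() ? pass[i] : 0);
      EEPROM.commit();
      server.send(200, "text/plain", "Saved. Restart the NodeMCU to connect.");
    }

    void setup() {
      WiFi.softAP("ESP8266-sensor");    // the NodeMCU "brings its own" WiFi as an access point
      server.on("/", handleRoot);
      server.on("/save", HTTP_POST, handleSave);
      server.begin();
    }

    void loop() {
      server.handleClient();
    }

On the next boot, the firmware can read those EEPROM slots back and hand the stored credentials to WiFi.begin().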


NodeMCU Config Screen

This makes it very easy for novice users on-site to configure the NodeMCU to connect to any WiFi that is available.  As soon as you restart the NodeMCU it will attempt to connect to the WiFi as configured, which brings me to the final UX challenge.

Since the NodeMCU does not have a screen, how do users know if it is even working? It needs to calibrate itself, and it needs to connect to WiFi and to MQTT; how do I convey this information to the users? Luckily the NodeMCU has a few onboard LEDs, which I decided to use for that purpose. To show the user that the NodeMCU is calibrating the sonar, it flashes the red LED (remember, this happens at every boot). As soon as the sonar is successfully calibrated, the red LED stays on. If for whatever reason the calibration fails – this can happen if the wall is too far away (6+ meters), doesn't reflect any sound (stealth bombers, say) or no sonar is attached to the NodeMCU – the red LED switches off. A similar sequence happens when the NodeMCU is trying to connect to the WiFi: as it tries, it blinks the blue onboard LED. If it connects successfully, the blue LED stays on. If it fails, the board automatically switches to AP mode, assuming you want to (re)configure it to connect to a different WiFi, and the blue LED still stays on (indicating you can connect to the NodeMCU AP) but very faintly. With these simple interactions, I can let the user know exactly what is happening and whether the device is ready to go (both blue and red LEDs are on) or not (one or both LEDs are off).

This setup worked remarkably well, and I did not get a single question during the Exchange about how these things work or need to be set up. All that needed to be done was to set them down, boot them up, and make sure all the lights were on. If they were not, try again (reboot) or reconfigure.

The actual capturing of data was pretty easy as well: the NodeMCU would send a signal to our MQTT broker every time it detected a person walking by. The MQTT broker then broadcast this to its subscribers, one of which was a small Node.js server I wrote that forwarded the message to APEX using a REST API made available by Noel. He would then store this information where it could be accessed by John (using another REST API) for his visualization.

Cheers,

Mark.

 

 

Real Time Ambient Display at OpenWorld: The Hardware

October 19th, 2016

As John mentioned in his post, one of the projects I worked on for OOW16 was the set of devices that provided the data for his ambient display. Unlike previous years, where we recorded attendance and then produced a report a few days or weeks after OOW, Jake proposed that we somehow visualize the data in real time and show it to the attendees as they were producing it themselves.

In order to produce the data, we wanted to strategically place "sensors" in the OAUX Exchange tent that could sense when somebody walked by them. Whenever that happened, the device should send a signal to John so that he could consume it and show it on his visualization.

I considered several designs and my first thought was to build a system using a laser-diode on one side and a photo-resistor as a receiver on the other side: when somebody “breaks the beam” I would know somebody walked by, basically a laser-tripwire you can find in many other applications.  Unfortunately, photo-resistors are fairly small, the largest affordable model I could find was half the size of my pinkie’s fingernail and so this meant that the area for the laser to hit was really small, especially as the distance increases.  To add to this, we couldn’t attach the sensors to walls (i.e. an immovable object) because the OAUX Exchange is held in a tent.  The best we could hope for to attach our sensors to was a tent pole or a table leg.  Any movement in those would misalign the laser or the sensor and would get registered as a “walk by.”  So I quickly abandoned the idea of lasers (I’ll keep that one in the bag for when we finally get those sharks).

Noel suggested using an ultrasonic sensor instead. These work just like sonar: they send out inaudible "pings" of sound and then listen for the sound to come back when it bounces off an object. With some simple math you can then work out how far away that object is from the sensor. I tested a few sonar sensors and finally settled on the LV-MaxSonar-EZ1, which had the right combination of sensitivity at the distances we needed (2+ meters) and ease of use.

Next I had to figure out what to attach the sensor to, i.e. what was going to be my "edge" device. Initially I tested with a Raspberry Pi, because we always have a few of those around the office, but this turned out to have several disadvantages. For one, the LV-MaxSonar-EZ1 is an analog ultrasonic sensor, and since the RPi does not support analog input I had to use an ADC chip to convert the signal from analog to digital. Although this gave me very accurate readings, it complicated the build. Also, we weren't guaranteed power at each station, so the end solution would have to run on battery power all day long, something that is hard with an RPi.

Next I used an Arduino (Uno) as my edge device. Since it has analog inputs, this was a much easier build, but the Uno needs an additional WiFi shield to connect to the internet (remember, I somehow needed to get the sensor data to John), and that shield is pretty pricey; combined, we are now talking $100+. I wanted a cheaper solution.

As is customary now with me when I work on IoT solutions, I turned to the ESP8266/NodeMCU.  It’s cheap (< $10), has lots of GPIOs (~10) and has Wifi built in.  Also, we had a few lying around :-):


NodeMCUs

I hooked up the sonar to the NodeMCU (using PWM on a digital GPIO), and within a few minutes I had accurate readings and was sending the data to the backend over the internet: IoT FTW! Furthermore, it's pretty easy to run a NodeMCU off battery power for a whole day (as it turned out, they all ran the whole three days of the Exchange on a single charge, with plenty of battery power to spare!). It was really a no-brainer, so I settled on the NodeMCU with the LV-MaxSonar-EZ1 attached to it, all powered by a ~6000 mAh battery:


NodeMCU with Sonar


First iteration for initial testing.


Three of the ultrasonic sensors we used to detect movement in the tent
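Reading the EZ1's PWM output really is just a pulseIn() call. A small sketch of the idea; the pin is an assumption, and the ~147 microseconds-per-inch scale factor comes from the LV-MaxSonar-EZ datasheet:

    const int SONAR_PIN = D5;            // the EZ1's PWM pin wired to a digital GPIO (assumed pin)

    void setup() {
      pinMode(SONAR_PIN, INPUT);
      Serial.begin(115200);
    }

    void loop() {
      unsigned long pulseUs = pulseIn(SONAR_PIN, HIGH, 50000UL);  // width of one range pulse, 50 ms timeout
      if (pulseUs > 0) {
        float inches = pulseUs / 147.0;                           // ~147 us per inch of range
        Serial.print(inches);
        Serial.println(" in");
      }
      delay(100);                                                 // the EZ1 free-runs; no trigger needed
    }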

Once I settled on the hardware, it was on to the software, which I will explain in detail in a second post.

Cheers,

Mark.

My First Oracle OpenWorld

October 18th, 2016

This year I had the great opportunity to attend Oracle OpenWorld 2016 and JavaOne 2016 in person. Ever since I was a student, I had heard how fantastic and big this conference is, but you cannot realize it until you are there.

All of San Francisco is taken over by people from companies around the world, and it's a great space to talk about Oracle, show off our projects and, of course, share our vision as a team and organization.


At this conference you can see a big contrast between attendee profiles. If you walk near Moscone Center, you will probably see attendees wearing suits and ties and talking about business all the time. In contrast, if you walk a couple of blocks toward downtown, you will see a more casual dress code (shirts and jeans), meaning that you are entering the developer zone.


Either way, the whole city is all about Oracle. A couple of main streets are even closed off to set up a lounge area, booths and entertainment. You can see posters hanging and glued up around the entire city. It's awesome.

The conference is divided in two, OpenWorld and JavaOne, so as I said, it covers a lot of interesting areas of technology.


I attended this year to polish our demos before the conference and to help the Oracle Technology Network (@oracleotn) with our IoT workshops. The workshop ran at both the OpenWorld and JavaOne conferences; I helped at JavaOne.

The idea behind the IoT workshop was to introduce technical and non-technical people to the IoT world: show them how easy it is to start, and teach them the very basic tools, hardware and, of course, code to connect things to the internet.

From the beginning, we were unsure about the results. This was the first time we had run this workshop at a big conference three days in a row. Our schedule was five sessions per day, one hour each. The start was slow, but we got a lot of traction on the following days, and the response from attendees was awesome. Over the last two days, pretty much every session was packed. At one point we had a long waitlist, and everyone wanted to get the IoT Starter Kit.

Speaking of the Starter Kit, we gave one away to every attendee at the end of the session. The kit includes a NodeMCU with an ESP8266 WiFi microcontroller, a push button, a buzzer, a resistor, an LED and some cables to wire the components. Attendees could take the workshop in two ways: from scratch, meaning they used their own computer to install all the required tools and libraries, compile the Arduino code, wire the components and flash the NodeMCU; or the expedited way, meaning we gave them a pre-flashed microcontroller and they just wired the components.

It was very surprising that many attendees decided to take the long path, which showed us they were genuinely interested in learning and in potentially continuing to work on their own projects. During part of the session, we spent some minutes talking about how OAUX is using IoT to see how it will affect user experience and to propose projects that can help Oracle users and partners in their daily lives.


At JavaOne specifically, we had many conversations about how attendees could potentially find a niche for IoT in their companies, and they came up with pretty cool ideas. It was fun and interesting to have direct contact with both technical and non-technical people.

Java is one of my favorite programming languages so far, and I had never had the chance to attend a conference about Java. This time was awesome: I had the chance to present and, at the same time, be an attendee.


The rest of the team was working at the OAUX Exchange. We presented all our demos, and I didn't miss the opportunity to see how excited people got about them.

And to close with a flourish, some OOW attendees were invited to visit our Gadget Lab, where we showed more of our vision and the new integrations with gadgets we have acquired lately.


Overall, OOW is the result of our teamwork and collaboration during the year. It's where we see all our work reflected in smiles, wows and people's enthusiasm. It's a feeling that cannot be described.

And now we are rolling again, getting ready for the next OOW. So stay tuned for what we are cooking up to surprise you.

How I Attended Oracle OpenWorld 2016

October 10th, 2016

OK, it wasn’t me exactly. It was more like some of my software-based representatives.

Hi, I'm Ed (@edhjones). And I'm not one of the AppsLab folks. But I'm always interested in the work they do, so I try to hang out with them whenever we're co-located. I was intrigued, then, when, a couple of months prior to OOW16, I got a mail from Jake (@jkuramot) CC'ing Raymond (@yuhuaxie). It said, simply:

“Ed did something cool w APEX and Minecraft that he showed at Kscope15 … you two should talk and share notes.”

What was this cool something that I did? For starters, whilst undoubtedly cool (even though I say so myself!) it wasn’t really Minecraft. Although, to be fair, it did look quite a bit like it, thanks to my rather basic 3D modeling skills, and because I borrowed some textures from BDCraft. It’s actually something that I whipped up for the APEX Open Mic night at the Kscope15 conference. It was just an experiment at the time, so I was very excited that the AppsLab wizards might be able to put it to some use.

The Original

It’s a web application running on Oracle’s internally hosted APEX (#orclapex) instance. It uses Three.js to present an interactive 3D visualization of information in the Oracle database. And it just so happens that that visualization looks somewhat like a blocky character walking around in a low-poly world. The data in question is provided by back-end service calls to the US Geological Survey’s point query service, which is then cached in the database and provided to clients as streamed chunks of JSON. In the case of the demo, the elevation data was used to simulate a scaled down version of Hawaii.


Other service calls reach out to the Clara.io browser-based 3D modeling and animation software, from where some of the character models are loaded on-the-fly. Other scenery data, like rocks and trees, are generated procedurally based on pseudo-random seeds calculated from the object’s geographical location in the virtual world. No Man’s Sky, eat your heart out.

The “game” aspect of the demo is implemented as (yet more!) service calls to Oracle’s Social Network. Conversations which you create in the Social Network, and tag in a certain way, appear in the visualization as chests. You open the chest, and you see a “parchment” containing the related OSN conversation, and that gives you a clue as to where the next chest might be, and so on, until you complete the treasure hunt.

It’s also multi-player so that many people can be hunting together. And they can see each other, unlike No Man’s Sky. And it’s integrated with our internal OraTweet micro-blogging platform, built many years ago by Noel (@noelportugal) and still going strong, to allow those players to talk to each other from within the game.

But, why? As I say, it was an experiment; an experiment into the amazing capabilities that today's modern browsers provide, specifically in the way of hardware-accelerated 3D graphics and HTML5 audio, and it's a demonstration of how seamlessly Oracle Application Express (APEX) is able to interface with a multitude of external services and efficiently handle large volumes of data. There are a lot of data points in a map of Hawaii. It was (IMHO) a cool experiment; I'd moved on to other things, but now it was about to get a new lease of life.

The Remix

The discussions kicked off with Jake and Raymond mentioning that they were investigating some interesting experimental control schemes and devices, but they needed something (fun, preferably) to control. Exactly what those control schemes are will be the subject of a future post from Raymond but, suffice to say, if we could resurrect my experiment and connect it up to these devices, then that surely would be a cool demo for Oracle OpenWorld.

Since I didn’t know what environment I’d be running in (it might not have access to Oracle’s internal network, or any network at all, for example) I wanted to make it a bit easier to move the application around and I wanted to reduce the dependencies upon other systems. So, here’s what I did:

All fun and games. But we still needed some kind of controls. And, at this point, I had no concrete idea of exactly what scheme Raymond was dreaming up. So, we needed a “loose” way of providing bi-directional communications between the game and something.

The browser client was then connected up to the server using socket.io to facilitate real-time communication between the two. When certain events happen in the client (you rescue a pig, for example), messages are sent to the server; when the server sends certain messages (for example, a command to "push" something), the client performs a pre-determined action, like pushing a barrier out of the way.

At the server end, I added functionality to listen for messages sent to specific MQTT channels, interpret them and pass appropriate actions on through the websocket connection to the browser client. The theory being: we can now connect up any input device, even remote ones and multiple different ones, as long as they're able to send the right messages to the right channel on an MQTT broker somewhere.

To test this out, before we had the real control devices available, I simply used jQuery Mobile to whip up a quick interface for my phone (served from the same Node.js server as the main application) which sends messages to the broker which then get passed on to the client. It's a pretty cool experiment to be able to control a 3D world that's hosted on my MacBook (but deployable to any Node.js application container platform; I used Modulus) running in Chrome, on a gaming PC displayed on your TV in your lounge, from an interface on your phone, whilst standing on the sidewalk at the opposite end of the street, through messages being bounced from my tiny home town in Australia via an MQTT channel on the other side of the planet.

At this point, I made a final push to GitHub and was done. Now it was up to Raymond to weave his Maker Magic and connect up his innovative control devices. Happy that my little 3D people and pigs would be off on their own to Oracle OpenWorld 2016, I simply left it in the more than capable hands of our beta-testers.


My Life as a (Telepresence) Robot

October 3rd, 2016


Left: Double 2. Right: Beam

We had been quietly observing and evaluating our options before we finally decided to get a telepresence robot. Telepresence technology dates back to 1993 (Human Productivity Lab), and telepresence robots are not completely new.

There is a growing array of telepresence robot options (see comparison), and the list is bound to get cheaper and better. Before we settled on the Double Robotics robot, we tested the Suitable Technologies Beam. The Beam is a pretty solid solution, but it lacked one of our primary requirements: an SDK. We wanted a platform that we could "hack" to explore different scenarios. So we got the Double 2 robot, which does have an SDK, and promptly gave it a name: Elliot, after the main character in Mr. Robot.

As far as usability goes, driving around is not difficult at all. Our Double 2 lacks a wide-angle camera and a foot camera, since it uses the camera from the iPad. (Edit: It was pointed out to me that the Double 2 standard set includes an attachable 150-degree wide-angle camera and an always-on downward-facing camera; we just didn't buy the standard set.) But driving the Double 2 feels really smooth, so moving around to look and moving side to side is not a problem. The iPad housing has a mirror pointing downward, so you can switch to the back camera and see the floor. There is an Audio Kit with an external mic and speaker that helps you hear and be heard better. Overall the experience is good as long as you have good internet connectivity.

I have been virtually attending some of our Cloud Lab tours, and the reaction is always positive. I also attended a couple of meetings and felt a bit more integrated. Maybe that will wear off with time, but that is one of the reasons we have it: to research the human aspect of these devices.

I am eagerly working on making Elliot a little smarter. Thanks to the SDK I can automate movement, but sadly the Double 2 doesn't have any external sensors, so we are working on retrofitting some sonar sensors, similar to the ones we used for this project, to give Elliot a little more independence. Stay tuned to see more coolness coming from Elliot.

Telepresence Robot in The Big Bang Theory (Sheldon)

Our Real Time Ambient Display at OpenWorld

September 30th, 2016

One month before we entered the OAUX Exchange tent at OpenWorld, Jake (@jkuramot) challenged us to come up with a visualization “that would ambiently show data about the people in the space.”


A view of the OAUX Exchange Tent at OpenWorld 2016

Mark (@mvilrokx), Noel (@noelportugal) and I accepted the challenge. Mark put together the Internet of Things ultrasonic sensors, Noel created a cloud database to house the data, and it fell to me to design and create the ambient display.

An ambient display is the opposite of a dashboard. A dashboard displays an array of data in a comprehensive and efficient way so that you can take appropriate actions. Like the dashboard of a car or airplane, it is designed to be closely and continuously monitored.

Ambient displays, in contrast, are designed to sit in the background and become part of the woodwork, only drawing your attention when something unusual happens. They are simple instead of complex, unified instead of diverse, meant for glancing, not for scanning.

This project was not only a chance to design an ambient display, but also a chance to work with master makers like Mark and Noel, get my feet wet in the Internet of Things, and visualize data in real time. I’ve also long wanted to make an art installation, which this sort of is: an attractive and intriguing display for an audience with all the risks of not really knowing what will happen till after the curtain goes up.

My basic concept was to represent the sensors as colored lines positioned on a simplified floor plan and send out ripples of intersecting color whenever someone “broke the beam.” Thao (@thaobnguyen) suggested that it would be even better if we could see patterns emerge over time, so I added proportion bars and a timeline.

Since we only had a few weeks we had to work in parallel. While Mark and the rest of the team debated what kind of sensor to use, my first task was to come up with some visuals in order to define and sell the basic concept, and then refine it. Since I didn’t yet have any data, I had to fake some.

So step one was to create a simulation, which I did using a random number generator weighted to create a rising crescendo of events for four colored sensor beams. I first tried showing the ripples against a white background and later switched to black. The following video shows the final concept.

Once Mark built the sensors and we started to get real data, I no longer needed the simulation, but I kept it anyway. That turned out to be a good decision. When it came time to do the final implementation in the Exchange tent, I had to make adjustments before all four sensors were working. The simulation was perfect for this kind of calibration; I made a software switch so that I could easily change between real and simulated data.

The software for this display did not require a single line of code. I used NodeBox, an open source visual programming tool designed for artists. It works by connecting a series of nodes. One node receives raw cloud data from a JSON file, the next refines each event time, subtracts it from the current time, uses the difference to define the width of an expanding ellipse, etc. Here is what my NodeBox network looks like:


The NodeBox program that produced the ambient video display

One challenge was working in real time. In a perfect world, my program would instantly detect every event and instantly respond. But in the real world it took about a second for a sensor to upload a new row of data to the cloud, and another second for my program to pull it back down. Also, I could not scan the cloud continuously; I had to do a series of distinct queries once every x seconds. The more often I queried, the slower the animation became.

I finally settled on doing queries once every five seconds. This caused an occasional stutter in the animation, but was normally not too noticeable. Sometimes, though, there would be a sudden brief flash of color, which happened when an event fired early in that five-second window. By the time I sensed it the corresponding ripple had already expanded to a large circle like a balloon about to pop, so all I saw was the pop. I solved this problem by adjusting my clock to show events five seconds in the past.
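The display itself was wired together from NodeBox nodes rather than written in code, but the underlying calculation is easy to sketch. The fragment below is only an illustration of the idea: the lag matches the five-second query interval described above, while the expansion rate and the names are my own placeholders, not values from the real network.

```cpp
#include <iostream>

// Illustrative sketch only: the real display was built from NodeBox nodes.
// A ripple's radius is derived from the event's age on a clock that runs
// a fixed lag behind real time, so every ripple is seen growing from zero.
const double LAG_SECONDS = 5.0;          // matches the five-second query interval
const double PIXELS_PER_SECOND = 120.0;  // assumed expansion rate

double rippleRadius(double eventTime, double now) {
    double displayTime = now - LAG_SECONDS;  // shift the display clock into the past
    double age = displayTime - eventTime;    // how long the ripple has been visible
    return (age > 0) ? age * PIXELS_PER_SECOND : 0.0;
}

int main() {
    // An event that fired 1 second ago has not yet appeared on the delayed clock...
    std::cout << rippleRadius(99.0, 100.0) << "\n";  // prints 0
    // ...but at now = 106 it shows as a ripple with a 240-pixel radius.
    std::cout << rippleRadius(99.0, 106.0) << "\n";  // prints 240
    return 0;
}
```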

Testing was surprisingly easy despite the fact that Mark was located in Redwood Shores and Noel in Austin, while I worked from home or from my Pleasanton office. This is one of the powerful advantages of the Internet of Things. Everyone could see the data as soon as it appeared regardless of where it came from.

We did do one in-person dry run in an Oracle cafeteria. Mark taped some sensors to various doorways while I watched from my nearby laptop. We got our proof of concept and took the sensors down just before Oracle security started getting curious.

On the morning of the big show, we did have a problem with some of the sensors. It turned out to be a poor internet connection especially in one corner of the tent; Noel redirected the sensors to a hotspot and from then on they worked fine. Jake pitched in and packaged the sensors with hefty battery packs and used cable ties to place them at strategic spots. Here is what they looked like:

Three of the ultrasonic sensors we used to detect movement in the tent

The ambient display ran for three straight days and was seen by hundreds of visitors. It was one of the more striking displays in the tent and the simple design was immediately understood by most people. Below is a snapshot of the display in action; Jake also shot a video just before we shut it down.

It was fun to watch the patterns change over time. There would be a surge of violet ripples when a new group of visitors flooded in, but after that the other colors would dominate; people entered and exited only once but passed across the other sensors multiple times as they explored the room. The most popular sensor was the one by the food table.

One of our biggest takeaways was that ambient displays work great at a long distance. All the other displays had to be seen closeup, but we could easily follow the action on the ambient display from across the room. This was especially useful when we were debugging the internet problem. We could adjust a sensor on one side of the room and look to the far corner to see whether a ripple for that sensor was appearing and whether or not it was the right color.

A snapshot of the ambient display in action

It was a bit of a risk to conduct this experiment in front of our customers, but they seemed to enjoy it and we all learned a lot from it. We are starting to see more applications for this type of display and may set up sensors in the cloud lab at HQ to further explore this idea.

Fun, Games and Work: Telepresence Robots

September 28th, 2016 3 Comments

Companies talk about “Gamification,” but the first time I felt like I was playing a game at work was driving our Double telepresence robot around the office floor, rolling down the hallway and poking into cubicles. With a few simple controls—forward, backward, left, and right—it took me back to the D-pad on my NES, trying to maneuver some creature or robot on the screen and avoid obstacles.


It’s really a drone, but so much less stressful than controlling a quadcopter. For one, you can stay put without issue. Two, it’s not loud. And three, there aren’t any safety precautions preventing us from driving this around inside Oracle buildings.

Of course, this isn’t the intended use. It’s a telepresence robot, something that allows you to be more “present” in a meeting or at some remote site than you would be if you were just a face on a laptop—or even more invisibly—a (mostly silent) voice on a conference call. You can instead be a face on a robot, one that you control.

That initial drive wouldn’t have been nearly as fun (or funny) if I were just cruising around the floor and no one else was there. A lot of the enjoyment was from seeing how people reacted to the robot and talking to them about it.

It is a little disruptive, though that may wear off over time. Fellow AppsLab member Noel (@noelportugal) drove it into a meeting, and the whole crowd got a kick out of it. I could see throughout the meeting others gazing at the robot with a bit of wonder. And when Noel drove the robot behind someone, they noted how it felt like they were being watched. But no one forgot Noel was in the meeting—there was an actual presence that made it feel he was much more a part of the group than if we were just on the phone.


On another virtual walkaround, Noel met up with Mark (@mvilrokx) and they had a real work conversation about some hardware they had been emailing back and forth about; being able to talk "face" to "face" made it much more productive.

All this provokes many interesting questions—is a telepresence robot better than video conferencing? How so, and by how much? How long does it take for the robot to seem “normal” and just become a part of a standard meeting?

And of course—what would a meeting be like that consisted solely of telepresence robots?

IoT Workshop Guide – part 2

September 14th, 2016 Leave a Comment

In the last post, we set up the development environment for writing code and uploading sketches to the NodeMCU, an IoT device.

In this post, we will upload and run two examples that demonstrate how an IoT device sends data to the cloud and receives commands from it. You can find the source code and the required MQTT library on GitHub.

4. Architecture Diagram

Several tiers and components are involved in completing the whole IoT loop. However, you will focus only on the device's communication over MQTT; all the other components have already been set up.

Architecture diagram of the workshop IoT setup

 

5. Wiring Diagram

For the two test examples, you can use the following wiring diagram:

Wiring diagram for the two test examples

And here is an example of actual wiring used to test the example code:

The actual wiring used to test the example code

 

6. Test Sample #1

This sample demonstrates how the IoT device interacts with the Internet over MQTT. You can get the source code from GitHub: https://github.com/raymondxie/iotws/blob/master/iotws_mqtt.ino

Please note that you need to modify the code to supply the connection parameters for your WiFi network and the MQTT broker. Check the parameter values with your instructor.

In this example, you press a button and the event is sent to the MQTT broker in the cloud. The NodeMCU board is also listening to that channel for input, so the information comes right back to the board. Based on the button press count (even or odd), the board plays a different tune for you.
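To see the shape of that round trip before opening the full sketch, here is a stripped-down illustration using the ESP8266WiFi and PubSubClient libraries. Treat it as a sketch of the idea only: the WiFi credentials, broker address, topic name and pin assignments are placeholders, not the workshop values, so use the parameters from iotws_mqtt.ino and your instructor.

```cpp
// Illustrative only: button press -> MQTT broker -> back to the board -> tune.
// All connection values and pin choices below are placeholders.
#include <ESP8266WiFi.h>
#include <PubSubClient.h>

const char* WIFI_SSID = "your-ssid";          // from your instructor
const char* WIFI_PASS = "your-password";
const char* MQTT_HOST = "broker.example.com"; // placeholder broker address
const char* TOPIC     = "iotws/button";       // channel we both publish and subscribe to

const int BUTTON_PIN = D2;                    // assumed wiring
const int BUZZER_PIN = D5;

WiFiClient wifi;
PubSubClient mqtt(wifi);
int pressCount = 0;

// Runs every time a message arrives on the subscribed channel.
void onMessage(char* topic, byte* payload, unsigned int length) {
  pressCount++;
  int freq = (pressCount % 2 == 0) ? 440 : 880;  // even/odd count -> different tune
  tone(BUZZER_PIN, freq, 300);
}

void setup() {
  pinMode(BUTTON_PIN, INPUT_PULLUP);
  WiFi.begin(WIFI_SSID, WIFI_PASS);
  while (WiFi.status() != WL_CONNECTED) delay(200);

  mqtt.setServer(MQTT_HOST, 1883);
  mqtt.setCallback(onMessage);
  while (!mqtt.connected()) { mqtt.connect("nodemcu-mqtt-demo"); delay(500); }
  mqtt.subscribe(TOPIC);
}

void loop() {
  mqtt.loop();                               // keep the MQTT connection alive
  if (digitalRead(BUTTON_PIN) == LOW) {      // button pressed
    mqtt.publish(TOPIC, "pressed");          // event goes up to the broker...
    delay(500);                              // ...and comes back via onMessage()
  }
}
```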

Have fun playing the tunes!

7. Test Sample #2

This sample sends a message to Oracle IoT Cloud Service (IoTCS) at the press of a button. You can get the source code from GitHub: https://github.com/raymondxie/iotws/blob/master/iotws_iotcs.ino

Please note that you need to modify the code to supply the connection parameters for your WiFi network and the MQTT broker. Check the parameter values with your instructor.

In this sample, you press a button and a message with your name is sent to the MQTT broker. A Raspberry Pi is listening to that particular MQTT channel; it acts as a gateway to IoTCS and relays the message to it. You can then verify that your message, with your name, shows up in the IoTCS console.
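The device side of this sample is nearly identical; the main difference is the payload, which carries your name so you can spot it in the IoTCS console after the Raspberry Pi relays it. A minimal one-way sketch might look like the following, again with placeholder values rather than the real parameters from iotws_iotcs.ino.

```cpp
// Illustrative only: publish a named message on each button press.
// The Raspberry Pi gateway relays messages on this channel to IoTCS.
#include <ESP8266WiFi.h>
#include <PubSubClient.h>

const char* WIFI_SSID  = "your-ssid";           // from your instructor
const char* WIFI_PASS  = "your-password";
const char* MQTT_HOST  = "broker.example.com";  // placeholder broker address
const char* NAME_TOPIC = "iotws/iotcs";         // assumed channel the gateway listens on
const char* MY_NAME    = "Raymond";             // put your own name here
const int   BUTTON_PIN = D2;                    // assumed wiring

WiFiClient wifi;
PubSubClient mqtt(wifi);

void setup() {
  pinMode(BUTTON_PIN, INPUT_PULLUP);
  WiFi.begin(WIFI_SSID, WIFI_PASS);
  while (WiFi.status() != WL_CONNECTED) delay(200);
  mqtt.setServer(MQTT_HOST, 1883);
  while (!mqtt.connected()) { mqtt.connect("nodemcu-iotcs-demo"); delay(500); }
}

void loop() {
  mqtt.loop();
  if (digitalRead(BUTTON_PIN) == LOW) {
    String msg = String("{\"name\":\"") + MY_NAME + "\"}";  // simple JSON payload
    mqtt.publish(NAME_TOPIC, msg.c_str());  // the gateway relays this to IoTCS
    delay(500);                             // crude debounce
  }
}
```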

 

IoT Workshop Guide – part 1

September 14th, 2016 1 Comment

AppsLab and OTN will jointly host an IoT Workshop at the Oracle OpenWorld and JavaOne conferences in 2016. We look forward to seeing you at the workshop.

Here are some details about the workshop, with step-by-step instructions. Our goal is for you to learn some basics and get a glimpse of Oracle IoT Cloud Service at the workshop, and to be able to keep playing with the IoT kit after you go home. Be sure to bring your computer so we can set up the proper software for you.

Before we get into the step-by-step guide, here is the list of hardware parts we will use at the IoT Workshop.


Board and parts

1. Download and install software

We use the popular Arduino IDE to write code and upload it to the IoT device. You may download it from the Arduino website even before coming to the workshop:
https://www.arduino.cc/en/Main/Software

The Arduino IDE download page

Just make sure you get the proper version for your platform, e.g., if you have a Windows machine, get the "Windows installer."

The installation is straightforward, as it is a typical installation for your platform. If needed, the instructions are here: https://www.arduino.cc/en/Guide/HomePage

2. Setup Arduino IDE to use NodeMCU board

We use an IoT device board called NodeMCU. Like the Arduino Uno board, it has many pins for connecting sensors and LEDs, but it also has a built-in WiFi chip, which we can use to send input data to the IoT cloud.

You installed the Arduino IDE in step 1; now open it.

Go to File -> Preferences, and get to a page like this:

The Arduino IDE Preferences dialog

Add "http://arduino.esp8266.com/stable/package_esp8266com_index.json" to the "Additional Boards Manager URLs" field, then hit the "OK" button.

Restart the Arduino IDE, go to "Tools" -> "Board" -> "Boards Manager", and find "esp8266 by ESP8266 Community". Click it and install.

Installing the esp8266 package from the Boards Manager

Restart the Arduino IDE again, go to "Tools" -> "Board", and select the "NodeMCU 1.0" board.

Selecting the NodeMCU 1.0 board

Also set the corresponding parameters for CPU Frequency, Flash Size, etc., according to the screenshot above.

3. Quick Blink Test

To verify that we have set up the Arduino IDE for the NodeMCU properly, connect the board to your computer using a USB-to-micro-USB cable.

Then go to "File" -> "New", and copy and paste this example code into the editor window: https://github.com/raymondxie/iotws/blob/master/iotws_led.ino

The example code pasted into a new sketch window

Select the proper port where the board is connected via USB:

Selecting the port under the Tools menu

Click the "Upload" icon at the top left of the Arduino IDE and watch the sample code get loaded onto the board. The on-board LED should blink once per second.

On some MacBooks, if you don't see the proper "USBtoUART" port, you need to install an FTDI driver – you can download it from here.

On a Windows machine, you will see "COM" ports instead. You need to install this driver.

You can also play around: connect an external LED to a pin as shown in the following wiring diagram, and modify the code to blink the LED on that pin (a minimal sketch follows the diagram).

Wiring diagram for an external LED
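If you try that, a minimal blink sketch along these lines should work; the pin choice is an assumption, and the version used in the workshop is the iotws_led.ino file linked above.

```cpp
// Minimal blink for the NodeMCU: flashes an LED once per second.
// LED_BUILTIN is the on-board LED; change LED_PIN to D1 (or whichever
// pin you wired) for an external LED.
const int LED_PIN = LED_BUILTIN;

void setup() {
  pinMode(LED_PIN, OUTPUT);
}

void loop() {
  digitalWrite(LED_PIN, LOW);   // the on-board LED is active-low on the NodeMCU
  delay(500);
  digitalWrite(LED_PIN, HIGH);
  delay(500);
}
```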

By now, you have completed the setup of the Arduino development environment for the NodeMCU IoT device, and you can upload and execute code on it.

Continue to part 2: Load and Test IoT Code >>

For OpenWorld and JavaOne 2016, An Internet of Things Workshop

September 13th, 2016 1 Comment


Want to learn more about the Internet of Things?

Are you attending Oracle OpenWorld 2016 or JavaOne 2016? Then you are in luck! Once again we have partnered with the Oracle Technology Network (OTN) team to give OOW16 and JavaOne attendees an IoT hands-on workshop.

We will provide a free* IoT Cloud Kit so you can get your feet wet with one of the hottest emerging technologies. You don't have to be an experienced electronics engineer to participate. We will go through the basics and show you how to connect a WiFi microcontroller to the Oracle Internet of Things Cloud.

All you need to do is sign up for a spot using the OpenWorld (Android, iOS) or JavaOne (Android, iOS) conference mobile apps. Look under Info Booth, and you'll find an IoT Workshop Signup section.

Plus, brand-new this year, check out the Gluon JavaOne conference app (Android, iOS), look for the OTN Experiences and hit the IoT Workshop.

Note: OK, so that Gluon JavaOne app, 1) isn’t new this year and 2) I posted the wrong links. This year’s app is called JavaOne16, so look carefully. You can find the IoT Workshop signups under OTN Experiences.

Or find us at the OTN Lounge on Sunday afternoon. Workshops run all day, Monday through Wednesday of both conferences. Space is limited, and we may not be able to accommodate walkups, so do sign up if you plan to attend.

Then come to the OTN Lounge in Moscone South or the Java Hub at the Hilton Union Square with your laptop and a micro-USB cable.


The kit includes a NodeMCU, buzzer, button, and an LED

 

*Free? Yes free, while supplies last. Please make sure you read the Terms & Conditions (pdf).

Oracle Volunteers and the Daily Minor Planet

August 11th, 2016 12 Comments

“Supercharged Perseid Meteor Shower Peaks This Month” – as the very first edition of Daily Minor Planet brought us the news on August 4th, 2016.


First edition of Daily Minor Planet

The Daily Minor Planet is a digital newspaper about asteroids and planetary systems. Each day it features an asteroid that might fly by Earth, or one of particular significance to that date. It also includes a section of news from different sources on asteroids and planets. Most interestingly, it has a dynamic orbit diagram embedded, showing the real-time positions of the planets and the day's asteroid in the sky; you can drag the diagram to view them from different angles.

You can read the live daily edition on the Minor Planet Center website. Better yet, subscribe with your email address and get your daily dose of asteroid news delivered to your inbox.

The Daily Minor Planet is the result of a collaboration between Oracle Volunteers and the Minor Planet Center. Following the Asteroid Hackathon in 2014, we completed a Phase I project, Asteroid Explorer, in 2015, which focused on asteroid data processing and visualization. This is the Phase II project, which focuses on public awareness and engagement.

The Oracle Volunteers on this phase were Chan Kim, Raymond Xie (me!), Kristine Robison, DJ Ursal and Jeremy Ashley. We worked with Michael Rudenko and J.L. Galache from the Minor Planet Center over the past several months and created a newspaper sourcing, editing, publishing and archiving system with user subscription and daily email delivery. During the first week of August, the Oracle volunteer team was on site to prepare and launch the Daily Minor Planet.

Check out the video of the launch event, which was hosted in Phillips Auditorium at the Harvard-Smithsonian Center for Astrophysics and live-streamed on its YouTube channel. The volunteers' speech starts around the 29:00 mark:

It was quite an intense week as we worked to get everything ready for launch. In the end, as a reward, we got a chance to tour the Great Refractor at the Harvard College Observatory, located in the next building over.


The Great Refractor, at Harvard College Observatory

By the way, the Perseid meteor shower this year will peak on August 12, and it is in outburst mode, with potentially over 200 meteors per hour. So get yourself ready and catch some shooting stars!

Pokemon GO || Ramblings from a UX Perspective

July 26th, 2016 7 Comments

The gym at Oracle HQ is currently held by Team Valor; it switches hands three times a day on average.

Crowd at a park playing Pokemon. Most of us have been here for at least 2 hours.

By now, I have played Pokemon GO for two weeks and have reached the coveted level 22. Pokemon GO (POGO) is a massively popular mobile game and a genuine viral hit. It became the top-grossing app in the iOS App Store within a few days, received more than 10 million downloads within its first week, surpassed Twitter in daily active users (DAU), and has already generated $14.04 million in sales, which is 47% of the total mobile gaming market. It's so popular that players are deciding where to eat based on a restaurant's proximity to a Pokestop; in light of this, Yelp added a Pokestop filter to its web and mobile apps. POGO's popularity has led to the massive amount of server issues that is common among the initial launches of massively multiplayer online role-playing games (MMORPGs) like Sim City, Diablo 3 and Guild Wars 2. To be fair, POGO wasn't expected to be this popular outside of its already established fan base and is technically still in beta (version 0.29.3). I'm happy to say that the servers have stabilized and I rarely encounter issues during peak times. Despite the lack of communication and transparency that should be part of any customer experience, Niantic worked hard and proved with the launch in Japan that server issues will be a thing of the past.

Success Factors

It’s safe to say that POGO has transformed how we experience reality through our phones. It did it so smoothly in a matter of days! There are a lot of game play dynamics at play, but before I get into that, there are a few external dimensions that led to the success of POGO.

My Experience and the UI

A few others have written about the easy on-boarding of the game, so I will not rehash that.

I found that the game incentivizes things we already do when on the go (Pokemon GO :]). We go to work, we go out to eat, we walk the dog, we go to school, and we hang around in groups at parks and meetups. Walking helps us hatch eggs for rare or high-IV Pokemon and find other Pokemon hiding in the tall grass, while also getting us from point A to point B. Walking also gets us to gyms and Pokestops for much-needed supplies. Best of all, walking takes us to new places and outside to meet new people who share the common interest of playing POGO.

As a game that's meant to be played quickly and on the go, everything happens within minutes. The traditional Pokemon gameplay is replaced with a minimum viable feature list. We don't spend 5+ minutes battling a Pokemon or agonizing over move sets, IVs (Individual Values) or natures anymore. I mean…unless you want to. Instead, we spend at most 90 seconds trying to take over a gym, move sets are randomly selected for us, and catching a Pokemon is done with a quick flick of a finger.

LEFT: I like the ability to turn AR on and off as needed. The bottom of the screen is the primary area for interaction, e.g. throwing a ball and using items from your bag. CENTER: This is what battling at a gym with AR turned on looks like! You just continuously tap on your opponent to attack and swipe left or right to dodge. No menus needed. RIGHT: Customizing your own character makes you feel more attached to the game. It helps you believe that your avatar is you.

The third-person avatar view and map overlay is just real enough to be believed. Our brains are harsh judges of things that try to be realistic but fall short. This is a challenge with virtual reality: when something is presented to us as real, we look for the smallest discrepancies, which prevents us from suspending our disbelief; yet when we are presented with something that is clearly not "real," our brains easily accept that representation and fill in the gaps cognitively. As I walk around, I find myself amazed at how accurately my avatar seems to follow me. Even when I am standing still and turning to face different directions, my avatar follows smoothly.

Strangely, I felt connected to the real world as I was walking around zombified with my peers. Just the other day, a fellow player yelled “Starmie!” and the location of that Pokemon. Unsurprisingly, we all slowly got up and walked in a huge group to the other end of the park. Though we are still staring at our phones, we are really watching our avatar walk in a map overlay of the real world. We are aware of the river to the left of us and the road that is coming up ahead as depicted on the map. The Pokestops are easter eggs of wall murals, sculptures and other places of interest that I never would have stopped to look at before.

Caught the Starmie! The jewel of the sea.

The game is as addicting as Facebook and Instagram. There is a constant need to check whether a new Pokemon has spawned around the corner. Feedback is constant and instantaneous. Catching a Pokemon or spinning a Pokestop is just as satisfying as clicking on the red notification bubble or pulling to refresh a feed. Despite the server issues, there were intrinsic rewards tied to every game mechanic.

A list of pokemon caught. Since the game is meant to be played on the go, there is a focus on coffee shop interactions. This means I can hold a cup of coffee in one hand and interact with a mobile app in the other hand. All the main action icons can be reached with my right thumb!

Since we have to walk around to play an augmented reality game, we are encouraged to interact with strangers. On the game map you can find high-activity hot spots. Many times, in and out of work, I have had no problem befriending others, whether finding an elusive Dratini at the Oracle lagoon or picking a random picnic spot between high-traffic lures at Guadalupe Park and chatting about our latest Pokemon catches, know-how and food. Last week, I walked across the street to have lunch at my friend's workplace. In the many times I had been there before, I had talked to only two or three of her co-workers. Now when I walk over, we have lunch and then join a group of her Pokemon-hunting colleagues; we found a rare Hitmonlee together, they shared the location of an Electabuzz spawn, and they taught my friend and me how to hunt nearby creatures using the on-screen compass.

Interestingly enough, there is no need to share your activity with your virtual social network, nor is there a push for you to purchase in-app items. How many times have you been asked to invite your friends in order to progress further in a game? How many times is an advertisement for an in-app purchase persistent through every screen? I'm never bombarded with such eyesores in POGO. Everything feels more organic. It's a refreshing change from the usual Facebook and Candy Crush-style games that gamify your network to make you level up further.

Life has definitely changed as illustrated by this meme (credit: http://www.dorkly.com/post/79726/life-before-pokemon-go-and-after).

My dog has noticed that we’ve been stopping a lot more than usual.

One night when it got dark at the park, we were asked by our local police to leave. I suspiciously watched as the gym turned from red to blue 😡

Thoughts on the Future of Pokemon Go

Having gone this far in the game, it's hard to stay motivated. Catching Pokemon and gaining experience gets harder unless you are willing to shell out money for more Pokeballs, Lucky Eggs and incubators. Other than the group meetups and the need to "catch 'em all," the rewards for reaching the next level aren't incentive enough. For those who have caught 'em all or have reached the grinding stages of the game (level 22+), Pokemon GO's announcement at San Diego Comic-Con 2016 gave current players a reason to keep playing.

Since the goal of the game is for players to go out and explore and/or be on the move, what if there were a Fitbit-like band that counts your steps so you could hatch eggs while running on a treadmill? Could Google Glass make a comeback, so that instead of staring at the phone I could hunt for Pokemon while keeping my eyes on my real surroundings? How about a wearable like an Apple Watch that vibrates when a Pokemon comes up, so I could just tap to throw a ball without looking while riding a bike?

The game will definitely spur more AR games that may or may not have as huge an impact as POGO, but it will increase adoption of AR as a social media and marketing platform. McDonald's is the first to partner with the game, turning every location in Japan into a gym. Eateries, museums and police stations have all been inserting themselves into the game by purchasing lures to attract crowds of players into their establishments.

Beautiful lures! Waves of people are usually found camping out at these high activity spots left by other players or establishments. In this case, a popular BBQ joint dropped lures. I saw many people enjoying sausages and other meats on the lawn as they played the game.

So far I've only seen glimmers of what the game hopes to be. It's not polished, and it lacks the social features that its Nintendo counterparts have woven in seamlessly. Despite all these setbacks, it has still made a huge impact on our culture and technology. I'm excited to see how Niantic plays this out, especially as mixed-reality devices like the Microsoft HoloLens and Magic Leap come to market.