Real Time Ambient Display at OpenWorld: The Software (for the Hardware)

This is part 2 of my blog post series about the Ambient Visualization hardware (part 1).  Also, please read John’s post for details on the creation of the actual visualization, from concept to build.  In the first part, I focused on the hardware: a sonar sensor connected to a NodeMCU.  In this second part, the focal point will be the software.

When I started working with ESPs a few years ago, I was all gaga about the fact that you could use Lua to program these chips.  However, over the last year, I have revised my thinking as I ran into stability issues with Lua.  I now exclusively code in C/C++ for the ESPs using the Arduino library for the ESP8266.  This has led to much more stable firmware and, with the advent of the likes of PlatformIO, a much better development experience (and I’m all for better DX!).

As I was not going to be present at the Exchange myself to help with the setup of the devices, I had to make them as easy as possible to set up and use.  I could not assume that the person setting up the NodeMCUs had any knowledge of the NodeMCU, sonars, C++, etc.  Ideally, they could just place a device in a suitable location, switch on the NodeMCU and that would be it!  There were a few challenges I had to overcome to get to this ideal scenario.

First, the sonars needed to be “calibrated.”  The sonar just measures the time it takes for a “ping” to come back as it bounces off an object … any object.  If I place the sonar on one side of the room and point it at the opposite wall, it will tell me how long it takes (in µs) for a “ping” to come back as it bounces off that wall.  (You can then use the speed of sound to calculate how far away that wall is.)  However, I want to know when somebody walks by the sensor, i.e. when the ping that comes back is not from the opposite wall but from something in between the wall and the sensor.  In order to do this, I have to know how far away the wall is (or whatever fixed object the sonar is pointed at when it is placed down).  Since I didn’t know where these sensors were going to be placed, I did not know in advance where these walls would be, so this could not be coded upfront; it had to be done on-site.  And since I could not rely on anybody being able to update the code on the fly, as mentioned earlier, the solution was to have the sonars “self-calibrate.”
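
To make that timing concrete, here is a minimal sketch of how a single ping can be measured with the Arduino core for the ESP8266, assuming an HC-SR04-style sonar; the pin choices and timeout are illustrative assumptions, not necessarily what I used.

```cpp
// Minimal ping-timing sketch for an HC-SR04-style sonar on a NodeMCU.
// TRIG/ECHO pins and the timeout are assumptions for illustration.
const int TRIG_PIN = D1;
const int ECHO_PIN = D2;

void setup() {
  Serial.begin(115200);
  pinMode(TRIG_PIN, OUTPUT);
  pinMode(ECHO_PIN, INPUT);
}

// Returns the round-trip time of a ping in microseconds (0 on timeout).
unsigned long pingMicros() {
  digitalWrite(TRIG_PIN, LOW);
  delayMicroseconds(2);
  digitalWrite(TRIG_PIN, HIGH);             // a 10 µs pulse triggers a ping
  delayMicroseconds(10);
  digitalWrite(TRIG_PIN, LOW);
  // pulseIn measures how long the echo pin stays HIGH, i.e. the round trip
  return pulseIn(ECHO_PIN, HIGH, 30000UL);  // ~30 ms timeout, roughly 5 m one way
}

void loop() {
  unsigned long us = pingMicros();
  // speed of sound is ~343 m/s; halve the round trip for the one-way distance
  unsigned long cm = us * 343UL / 2 / 10000;  // µs to cm
  Serial.printf("round trip: %lu us, distance: ~%lu cm\n", us, cm);
  delay(100);
}
```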

As soon as you turn on the NodeMCU, it goes into “calibration mode”.  For the first few seconds it takes a few hundred samples, under the assumption that whatever it “sees” initially is the wall opposite the device.  It then stores this information for as long as the NodeMCU is powered on.  After this, any ping that is close to the wall is assumed to be coming from the wall and is discarded.  Whenever a ping comes back off an object that is closer to the sonar than the wall, we assume that this is a person walking by the sensor (between the wall and the sensor) and we flag it.  If you want to put the NodeMCU in a different location (presumably with the opposing wall at a different distance from it), you just switch it off, move it, and switch it back on.  The calibration makes sure it works anywhere you place it.  For the people setting up the sonars, this meant that all they had to do was place the sensors, switch them on and make sure that in the first 1-2 seconds nothing was in between the sensor and the opposite side (and if something was in between by accident, they could just “reset” the NodeMCU, which would recalibrate it).  This turned out to work great: some sensors had a small gap (~2 meters), others had a much larger gap (5+ meters), all working just fine using the same code.
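
A rough sketch of that self-calibration idea, building on the pingMicros() helper above; the sample count and the “closer than the wall” tolerance are made-up values for illustration, not the ones from the actual firmware.

```cpp
// Self-calibration sketch: average a few hundred pings at boot to establish
// the "wall" distance, then flag anything that comes back noticeably closer.
// SAMPLES and TOLERANCE are illustrative values, not the real firmware's.
const int SAMPLES = 300;
const float TOLERANCE = 0.85;     // readings below 85% of the baseline = person

unsigned long wallMicros = 0;     // baseline round-trip time, set at boot

bool calibrate() {
  unsigned long sum = 0;
  int valid = 0;
  for (int i = 0; i < SAMPLES; i++) {
    unsigned long us = pingMicros();      // helper from the sketch above
    if (us > 0) { sum += us; valid++; }
    delay(5);
  }
  if (valid < SAMPLES / 2) return false;  // mostly timeouts: wall too far or no sonar
  wallMicros = sum / valid;
  return true;
}

bool personDetected() {
  unsigned long us = pingMicros();
  // anything near the baseline is the wall; anything clearly closer is a person
  return us > 0 && us < wallMicros * TOLERANCE;
}
```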

Second, the NodeMCU needs to be configured to connect to a WiFi network.  Typically this is hard-coded in the firmware, but again, this was not an option as I didn’t know what the WiFi SSID or password would be.  And even if I did, conference WiFi is notoriously bad (the Achilles’ heel of all IoT), so there was a distinct possibility that we would have to switch WiFi networks on-site to a better alternative (e.g. a local hotspot).  And as with the calibration, I could not rely on anybody being able to fix this in the code, on-site.  Also, unlike the calibration, connecting to a WiFi network requires human interaction; somebody has to enter the password.  The solution I implemented was for the NodeMCU to come with its own configuration web application.  Let me explain…

The NodeMCU is powerful enough to run its own web server, serving HTML, CSS and/or JS.  The NodeMCU can also be an Access Point (AP), so you can connect to it like you connect to your router.  It exposes an SSID, and when you connect your device to this network, you can request HTML pages and the NodeMCU web server will serve them to you.  Note that this does not require any existing WiFi; the NodeMCU basically “brings its own” WiFi that you connect to.
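
For the curious, this is roughly what the AP plus web server combination looks like with the ESP8266WiFi and ESP8266WebServer libraries; the AP name and the inline form below are placeholders (the real pages lived in SPIFFS, as described next).

```cpp
#include <ESP8266WiFi.h>
#include <ESP8266WebServer.h>

// Bare-bones access point + web server sketch. The AP name and the inline
// HTML form are placeholders; the real pages were served from SPIFFS.
ESP8266WebServer server(80);

void setup() {
  WiFi.mode(WIFI_AP);
  WiFi.softAP("ESP8266-config");     // default AP address is 192.168.4.1
  server.on("/", []() {
    server.send(200, "text/html",
                "<form method='POST' action='/save'>"
                "SSID: <input name='ssid'><br>"
                "Password: <input name='pass' type='password'><br>"
                "<input type='submit' value='Save'></form>");
  });
  server.begin();
}

void loop() {
  server.handleClient();             // serve any connected clients
}
```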

NodeMCU Access Point (called ESP8266-16321847)

So I created a web server on the NodeMCU and built a few HTML pages, which I stored on the NodeMCU (in SPIFFS).  Whenever you connect to a NodeMCU running this firmware and point your browser to 192.168.4.1, it serves up those pages, which allow you to configure that very same NodeMCU.  The main page allows you to set the WiFi SSID and password (you can also configure the MQTT setup).  This information then gets stored on the NodeMCU in flash (emulated EEPROM) so it is persistent; even if you switch off the NodeMCU it will “remember” the WiFi credentials.
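
Persisting that configuration looks roughly like this with the ESP8266 Arduino core’s EEPROM emulation (a reserved flash sector); the struct layout here is just an illustrative assumption.

```cpp
#include <EEPROM.h>

// Sketch of persisting the submitted settings in emulated EEPROM (flash).
// The field sizes are assumptions; the real firmware may store more or less.
struct Config {
  char ssid[32];
  char pass[64];
  char mqttHost[64];
};

void saveConfig(const Config &cfg) {
  EEPROM.begin(sizeof(Config));   // map the flash sector into RAM
  EEPROM.put(0, cfg);
  EEPROM.commit();                // write the sector back to flash
  EEPROM.end();
}

Config loadConfig() {
  Config cfg;
  EEPROM.begin(sizeof(Config));
  EEPROM.get(0, cfg);
  EEPROM.end();
  return cfg;
}
```

On boot, the firmware can then read this struct back and try to join the stored network, falling back to AP mode if that fails (as described below).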

NodeMCU Config Screen

This makes it very easy for novice users on-site to configure the NodeMCU to connect to any WiFi that is available.  As soon as you restart the NodeMCU it will attempt to connect to the WiFi as configured, which brings me to the final UX challenge.

Since the NodeMCU does not have a screen, how do users know if it is even working?  It needs to calibrate itself and it needs to connect to WiFi and to MQTT; how do I convey this information to the users?  Luckily the NodeMCU has a few onboard LEDs, which I decided to use for that purpose.  To show the user that the NodeMCU is calibrating the sonar, it flashes the red LED (remember, this happens at every boot).  As soon as the sonar is successfully calibrated, the red LED stays on.  If for whatever reason the calibration fails, the red LED switches off; this can happen if the wall is too far away (6+ meters), does not reflect any sound (stealth bombers, say) or no sonar is attached to the NodeMCU.  A similar sequence happens when the NodeMCU is trying to connect to the WiFi.  While it tries, it blinks the blue onboard LED.  If it connects successfully to the WiFi, the blue LED stays on.  If it fails, the board automatically switches to AP mode (assuming you want to (re)configure the board to connect to a different WiFi), and the blue LED still stays on, but very faintly, indicating you can connect to the NodeMCU AP.  With these simple interactions, I can let the user know exactly what is happening and whether the device is ready to go (both blue and red LEDs are on) or not (one or both of the LEDs are off).
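
As an illustration, here is a simplified version of the calibration part of that LED feedback, using the calibrate() helper from the earlier sketch; the GPIO numbers and which onboard LED plays the “red” or “blue” role depend on the exact board, so treat them as assumptions (both onboard LEDs are active-LOW on a typical NodeMCU dev board).

```cpp
// Simplified LED feedback for the calibration phase. Pin numbers are
// assumptions (GPIO2 and GPIO16 are the usual onboard LEDs, active-LOW).
const int CAL_LED  = 2;    // plays the "red" role from the post
const int WIFI_LED = 16;   // plays the "blue" role from the post

void blinkLed(int pin, int times, int ms) {
  for (int i = 0; i < times; i++) {
    digitalWrite(pin, LOW);  delay(ms);   // LOW = on (active-LOW)
    digitalWrite(pin, HIGH); delay(ms);   // HIGH = off
  }
}

void setup() {
  pinMode(CAL_LED, OUTPUT);
  pinMode(WIFI_LED, OUTPUT);

  blinkLed(CAL_LED, 10, 100);              // blinking = calibrating
  bool ok = calibrate();                   // from the calibration sketch above
  digitalWrite(CAL_LED, ok ? LOW : HIGH);  // solid on = OK, off = failed
  // ...followed by the same blink-while-trying pattern on WIFI_LED for the
  // WiFi connection attempt...
}

void loop() {}
```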

This setup worked remarkably well, and I did not get a single question during the Exchange about how these things work or need to be set up.  All that needed to be done was set them down, boot them up, and make sure all lights were on.  If they were not: try again (reboot) or reconfigure.

The actual capturing of data was pretty easy as well; the NodeMCU would send a signal to our MQTT broker every time it detected a person walking by.  The MQTT broker then broadcast this to its subscribers, one of which was a small Node.js server I wrote, which would forward the message to APEX using a REST API made available by Noel.  He would then store this information where it could be accessed by John (using another REST API) for his visualization.
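
The detection-to-broker part is only a few lines with a standard MQTT client such as PubSubClient; the broker address, client id and topic below are placeholders, not the actual values used at the Exchange, and personDetected() is the helper from the calibration sketch above.

```cpp
#include <ESP8266WiFi.h>
#include <PubSubClient.h>

// Publish a message to the MQTT broker whenever a person is detected.
// Broker, client id and topic are placeholders.
WiFiClient wifiClient;
PubSubClient mqtt(wifiClient);

void setup() {
  // ... connect to WiFi and calibrate the sonar as shown earlier ...
  mqtt.setServer("broker.example.com", 1883);   // placeholder broker
}

void loop() {
  if (!mqtt.connected()) {
    mqtt.connect("sonar-sensor-1");              // placeholder client id
  }
  mqtt.loop();

  if (personDetected()) {                        // helper from the calibration sketch
    mqtt.publish("openworld/sonar/1", "1");      // placeholder topic
    delay(500);                                  // crude debounce
  }
}
```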

Cheers,

Mark.
