Real Time Ambient Display at OpenWorld: The Hardware

As John mentioned in his post, one of the projects I worked on for OOW16 was the devices that provide the data for his Ambient Display.  Unlike previous years, where we recorded attendance and then produced a report a few days or weeks after OOW, Jake proposed that we somehow visualize the data in real time and show it to the attendees as they were producing it themselves.

In order to produce the data, we wanted to strategically place “sensors” in the OAUX Exchange tent that could sense when somebody walked by them.  Whenever that happened, the device should send a signal to John so that he could consume it and show it on his visualization.

I considered several designs.  My first thought was to build a system with a laser diode on one side and a photo-resistor as a receiver on the other: when somebody “breaks the beam,” I know somebody walked by, basically the laser tripwire you find in many other applications.  Unfortunately, photo-resistors are fairly small; the largest affordable model I could find was about half the size of my pinkie’s fingernail, which meant the target area for the laser was tiny, especially as the distance increased.  To add to this, we couldn’t attach the sensors to walls (i.e. an immovable object) because the OAUX Exchange is held in a tent.  The best we could hope to attach our sensors to was a tent pole or a table leg, and any movement in those would misalign the laser or the sensor and get registered as a “walk by.”  So I quickly abandoned the idea of lasers (I’ll keep that one in the bag for when we finally get those sharks).

Noel suggested using an ultrasonic sensor instead.  These work just like sonar: they send out inaudible “pings” of sound and then listen for the sound to come back when it bounces off an object.  With some simple math you can then work out how far away that object is from the sensor.  I tested a few sonar sensors and finally settled on the LV-MaxSonar-EZ1, which had the right combination of sensitivity at the distances we needed (2+ meters) and ease of use.
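To give a sense of the numbers: sound travels at roughly 343 m/s at room temperature, so an echo that comes back after about 12 milliseconds puts the object at (343 × 0.012) / 2 ≈ 2 meters away (you divide by two because the ping has to travel there and back).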

Next I had to figure out what to attach the sensor to, i.e. what was going to be my “Edge” device.  Initially I tested with a Raspberry Pi because we always have a few of those around the office, but this turned out to have several disadvantages.  For one, the LV-MaxSonar-EZ1 is an analog ultrasonic sensor, and since the RPi does not support analog input, I had to use an ADC chip to convert the signal from analog to digital.  Although this gave me very accurate readings, it complicated the build.  Also, we weren’t guaranteed power at each station, so the end solution would have to run on battery power all day long, something that is hard to do with an RPi.

Next I tried an Arduino (Uno) as my Edge device.  Since it has analog inputs, it was much easier to build, but it needs an additional WiFi Shield to connect to the internet (remember, I somehow needed to get the sensor data to John), which is pretty pricey; combined we are now talking $100+.  I wanted a cheaper solution.

As is now customary for me when I work on IoT solutions, I turned to the ESP8266/NodeMCU.  It’s cheap (< $10), has lots of GPIOs (~10) and has WiFi built in.  Also, we had a few lying around :-):

NodeMCUs

I hooked up the sonar to the NodeMCU (using PWM on a digital GPIO) and within a few minutes I had accurate readings and was sending the data to the backend over the internet: IoT FTW!  Furthermore, it’s pretty easy to run a NodeMCU off battery power for a whole day (as it turned out, they all ran the whole three days of the Exchange on a single charge, with plenty of battery power to spare!).  It was really a no-brainer, so I settled on the NodeMCU with the LV-MaxSonar-EZ1 attached to it, all powered by a ~6000mAh battery:

NodeMCU with Sonar

First iteration for initial testing.

Three of the ultrasonic sensors we used to detect movement in the tent
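To give an idea of what the reading side looks like, here is a minimal sketch of how the EZ1’s pulse-width (PW) output can be read on a NodeMCU.  The pin assignment, the ~147 µs-per-inch scale factor (from the EZ1 datasheet) and the trigger threshold are my own illustrative assumptions, not the exact code that ran at OOW:

    // Minimal reading sketch (illustrative, not the exact OOW code).
    // Assumes the EZ1's PW pin is wired to NodeMCU pin D2.
    #include <Arduino.h>

    const uint8_t PW_PIN     = D2;            // assumed wiring: EZ1 PW -> D2
    const float   US_PER_CM  = 147.0 / 2.54;  // datasheet: ~147 µs per inch
    const float   TRIGGER_CM = 150.0;         // assumed "walk by" threshold

    void setup() {
      Serial.begin(115200);
      pinMode(PW_PIN, INPUT);
    }

    void loop() {
      // Length of the HIGH pulse in microseconds (0 if nothing is seen before the timeout).
      unsigned long pulseUs = pulseIn(PW_PIN, HIGH, 60000UL);
      if (pulseUs > 0) {
        float distanceCm = pulseUs / US_PER_CM;
        if (distanceCm < TRIGGER_CM) {
          Serial.print("Walk-by at ~");
          Serial.print(distanceCm);
          Serial.println(" cm");
          // In the real build, this is where a message goes to the backend
          // over WiFi; that part is covered in the software post.
        }
      }
      delay(100);
    }

Reading the PW output like this is what made the NodeMCU build so much simpler than the Raspberry Pi setup: no ADC chip is needed because the measurement arrives as a pulse length on a digital GPIO.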

Once I settled on the hardware, it was on to the software, which I will explain in detail in a second post.

Cheers,

Mark.

One comment

  1. Thanks, Mark.

    From my vantage point as a lifelong software person it’s fascinating to hear how you worked through all the real-world hardware constraints. I’m impressed at how quickly you were able to deliver a practical solution that held up for three days in a crowded, windy tent, with power to spare.

    I look forward to hearing about the software part of your solution. It was a blast working with you on this project!

    John
