Four Weeks with the Garmin Vivosmart

September 3rd, 2015 1 Comment

The Year of Data continues for me, and yesterday, I finished a four-week relationship with the Garmin Vivosmart.

I use relationship purposefully here because if you use a wearable to track fitness and sleep, you’re wearing it a lot, and it actually becomes a little friend (or enemy) that’s almost always with you. Wearables are very personal devices.

If you’re scoring at home, 2015 has gone thusly for me:

After that month of nothing, I nearly ended the experimentation. However, I already had two more wearables, new and still in the box. So, next up was the Vivosmart.

I didn’t know Garmin made wearables at all until OHUG 2014 where I met a couple people wearing Garmin devices. Turns out, Garmin makes an impressive array of wearable devices, running the gamut from casual to hardcore athlete.

I chose the Vivosmart, at the casual end of the spectrum, because of its display and notification capabilities.

As always, before I launch into my impressions, you might want to read real reviews from Engadget and The Verge.

The Band

Finally, a wearable that doesn’t require a laptop to configure. The setup was all mobile, download the app and pair, very nice for a change.

IMG_20150902_081908

After the initial setup, however, I did need to tether the Vivosmart to my laptop, but I don’t think my case is common.

The firmware version that came out-of-the-box was 2.60, and after reading the Engadget review, I decided to update to the latest version. Specifically, I wanted the notification actions that came in 3.40. There didn’t seem to be a way to get this update over-the-air, so I had to install Garmin Express on my Mac and tether the Vivosmart to install the update, a very quick and painless process.

This was probably only because I was jumping through several firmware versions at once; the Vivosmart did later get an over-the-air update without Garmin Express.

Like all the rest, the Vivosmart has a custom cable for charging and tethering, and this one looks like a mouthguard.

IMG_20150903_093246

Looks aside, getting the contacts to line up just right was a learning process, but happily, I didn’t charge it very often.

The low-power touch display is pretty cool. The band feels rubbery, and the display is completely integrated with no visible bezel, a pretty impressive bit of industrial design. The display is surprisingly bright, easily visible in full sunlight and useful as a flashlight in the dark.

There are several screens you swipe to access, and they can be configured from the mobile app, e.g. I quickly ended up hiding the music control, more on that in a minute. Long-pressing opens another set of options and menus.

The Vivosmart has sleep tracking, one thing I actually missed during my device cleanse. Like the Jawbone UP24, it provides a way to track sleep manually. I tried this and failed miserably because somehow during the night the sleep tracking ended.

The reason? The display activates when anything touches it. So, while I slept, the display touched the sheets, the pillow, etc., registering each touch as an interaction, which eventually turned off sleep mode.

This is exactly how I discovered the find phone option. While using my laptop, I wore the Vivosmart upside down to prevent the metal Garmin clasp on the underside of the device from scratching the aluminum, a very common problem with wrist-worn accessories.

During a meeting my phone started blinking its camera flash and blaring a noise. A notification from Garmin Connect declared it had found my phone. I looked at the band, and sure enough, it was in one of the nested menus.

So, the screen is cool, but it tends to register everything it touches, even water activated it. Not to mention the rather unnerving experience of the display coming on in a dark room while partially awake, definitely not cool.

Luckily, I found that the band and app auto-detect sleep, a huge save.

Functionally, the battery life was about five days, which is nice. When the battery got low, a low battery icon appeared on the time and date screen. You can see it in the picture. Once full, that icon disappeared, also nice.

The Vivosmart can control audio playing on the phone, a nice feature for running I guess. I run with Bluetooth headphones, and having two devices paired for audio confused my phone, causing it to play through its own speakers. So, I disabled the playback screen via the app.

Like most fitness bands, this one is water resistant to 5 ATM (50 meters), and I wore it in the shower with no ill effects, except for the random touches when water hit the device’s screen. I actually tested this by running water on it and using the water to navigate through the screens.

Syncing the band with the phone was an adventure. Sometimes, it was immediate. Other times, I had to toggle Bluetooth off/on. Could be my impatience, but the band would lose connectivity sometimes when it was clearly within range, so I don’t think it was me.

The Vivosmart has a move indicator which is nice as a reminder. However, I quickly disabled it because its times weren’t configurable, and it would go off while I was moving. Seriously, that happened a few times.

The App and Data

As with most fitness trackers, Garmin provides both a mobile app and a web app. Both are cleanly designed and easy to use, although I didn’t use the web app much at all. Garmin Connect has a nice array of features, to match the range of athletes to which they cater, I suppose.

Garmin Connect

Garmin Connect2

I probably only used 25% of the total features, and I liked what I used.

I did find the mobile app a bit tree-based, meaning I found myself backing up to the main dashboard and then proceeding into another section.

Garmin tracks the usual activity data, steps, calories, miles, etc. There’s a wide array of activities you can choose from, but I’m a boring treadmill runner so I used none of that.

For sleep, it tracks deep and light sleep and awake time, and I found something called "Sleep Mood," though I have no idea what that is.

One feature I don’t recall seeing anywhere else is the automatic goal setting for steps which increases incrementally as you meet your daily goal. The starting default was 7,500 steps, and each day, the goal rose a little, I assume based on how much I had surpassed it the previous day. It topped out at 13,610.

I passed the goal every day I wore the Vivosmart, so I don’t know what happens if you fail to meet it.

You can set the goal to be fixed, but I liked this daily challenge approach. There were days I worried I wouldn’t make the step number, and it actually did spur me to be more active. I guess I’m easily manipulated.
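For the curious, Garmin doesn't publish how the adaptive goal is calculated, but the behavior feels roughly like the toy Python sketch below, which nudges the goal up by a slice of the previous day's surplus. The numbers and the cap are made up for illustration; this is a guess at the behavior, not Garmin's actual formula.

def next_step_goal(current_goal, steps_today, cap=15000):
    # Toy model of an adaptive daily step goal. Garmin's real formula is not
    # documented; this just rises by a fraction of yesterday's surplus and
    # eases off after a miss, matching the behavior described above.
    surplus = steps_today - current_goal
    if surplus >= 0:
        return min(current_goal + int(0.1 * surplus) + 100, cap)
    return max(current_goal - 100, 5000)

# Example: beat a 7,500-step goal with 10,000 steps
print(next_step_goal(7500, 10000))  # 7850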

Possibly the biggest win for Garmin Connect is its notification capabilities. It supports call, text and calendar notifications, like some others do, but in addition, it offers a nice range of other apps from which you can get notifications.

And there’s the feature I mentioned earlier, taking actions from the band. I tried this with little success, but I only turned on notifications for text messages.

One possible reason why Garmin has such robust notifications may be its developer ecosystem. There’s a Garmin Connect API and a store for third party apps. I didn’t use any, mostly because I’m lazy.

That, and one of the kind volunteers for our guerrilla Apple Watch testing at OHUG warned me that some apps had borked his Garmin. He had the high-end fenix 3, quite a nice piece of technology in an Ultan-approved design.

Finally, Garmin Connect offers exports and integrations with other fitness services like RunKeeper, Strava, etc. They’re definitely developer-friendly, which we like.

Overall, I found the Vivosmart to be an average device, some stuff to like, some stuff to dislike. The bland black version I chose didn’t help; Ultan (@ultan) would hate it, but Garmin does offer some color options.

I like the apps and the ecosystem, and I think the wide range of devices Garmin offers should make them very sticky for people who move from casual running to higher level fitness.

If I end up going back to Garmin, I'll probably get a different device. If only I could justify the fenix 3; I'm just not serious enough and would feel like a poseur.

Find the comments.

Reducing User Friction

September 2nd, 2015 3 Comments

A few nights ago a Domino’s Pizza commercial got my attention. It is called “Sarah Loves Emoji.”

At the end, the fictional character Sarah simply says, "only Domino's gets me."

The idea of texting an emoji, tweeting, or using a Smart TV or a smartwatch to automagically order pizza fascinates me. What Domino's is attempting to do here is reduce user friction, which is defined as anything that prevents a user from accomplishing a goal. After researching Domino's Anywhere user experiences, I found a negative post from a frustrated user, of course! This proves that even if the system is designed to reduce friction, the human element in the process is bound to fail at some point. Regardless, I think it's pretty cool that consumer-oriented companies are thinking "outside the box."

Screen Shot 2015-09-02 at 2.30.45 PM

As a long-time fan of building Instant Messaging (XMPP/Jabber) and SMS (Twilio) bots, I understand how these technologies can actually increase productivity and reduce user friction. Even single-button devices (think Amazon Dash, or my Staples Easy Button hack) can serve some useful purpose.
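To give a sense of how little code a bot like that needs, here's a minimal SMS bot sketch using Flask and the Twilio Python helper library. The route, keyword, and replies are made up for illustration; point a Twilio number's incoming-message webhook at the /sms URL to try something like it.

from flask import Flask, request
from twilio.twiml.messaging_response import MessagingResponse

app = Flask(__name__)

@app.route("/sms", methods=["POST"])
def sms_reply():
    # Twilio POSTs the inbound message as form data; "Body" is the text itself.
    body = request.form.get("Body", "").strip().lower()
    resp = MessagingResponse()
    if body == "pizza":  # hypothetical keyword
        resp.message("Got it, your usual order is on its way!")
    else:
        resp.message("Text PIZZA to place your usual order.")
    return str(resp)

if __name__ == "__main__":
    app.run(port=5000)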

I believe we will start to see more use cases where input is no longer tied to a single Web UI or mobile app. Instead, we will see how more ubiquitous input channels like text, Twitter, etc. can be used to start or complete a process. After all, it seems like email and text are here to stay for a while, but that's content for a different post.

I think we should all strive to have our customers ultimately say that we "get them."

OAUX Emerging Technologies in Profit Magazine

September 1st, 2015 Leave a Comment

The August 2015 edition of Profit Magazine (@OracleProfit) includes a nice piece called “The Explorers” highlighting the work of our team and that of JD Edwards Labs.

oracle

This article is a nice companion piece to our strategic approach to emerging technologies and how we apply the "Glance, Scan, Commit" design philosophy to our work.

I’m honored to be quoted in the article and proud to see our little team getting this level of recognition.

If you want to learn more about the R&D projects mentioned, you’re in luck. You can read about the Glance framework and approach and see a quick video of it in action on several smartwatches, including the Apple Watch.

Be sure to read the sidebar, "Moon Shots," which mentions our Muse (@ChooseMuse) research and our Leap Motion (@LeapMotion) investigations and development projects.

If you want to see some of these emerging technologies projects in person, register to visit the OAUX Cloud Exchange at Oracle OpenWorld 2015, or come tour the new Cloud UX Lab at Oracle HQ.

Emerging Technologies and the ‘Glance, Scan, Commit’ Design Philosophy

August 31st, 2015 Leave a Comment

Cross-posted from VoX.

Behind the Oracle user experience goals of designing for simplicity, mobility, and extensibility is a core design philosophy guiding the Oracle Applications User Experience (OAUX) team’s work in emerging technologies: “Glance, Scan, Commit.”

It nicely boils down a mountain of research and a design experience that shapes the concepts you can see from us.

The philosophy of “Glance, Scan, Commit” permeates all of our work in the Oracle Applications Cloud user experience, especially when investigating emerging technologies.

On your wrist

Some projects fit the “Glance, Scan, Commit” philosophy like a glove. The smaller screens of smartwatches like the Apple Watch and Android Wear watches require the distillation of content to fit.

RS2953_IMG_7909_2-scr

Consumers demand glance and scan interactions on their wearable devices. The Oracle user experiences provide just the right amount of information on wearable devices and enable the ability to commit to more detail via the accompanying smartphone app.

On your ‘Things’

How else does the OAUX team apply the “Glance, Scan, Commit” design philosophy?

Let’s look at another example: The “Things” in the Internet of Things (IoT) represent a very broad category of Internet-connected devices, and generally speaking, consumers can’t rely on these things to have large screens, or even screens at all. This reduces the experience down to the lightest “glance” of proximity, and in some cases a sonic “glance.”

Sometimes we tap into the user’s context such as micro-location, provided by Bluetooth beacons or Near Field Communications (NFC) tags, to capture a small chunk of information. The “glance” here is the lightest touch of a beacon coming within range or a near field tag brushing up against a sensor.

In some cases, we use the philosophy to build sound “glances,” by capturing chunks of information that are then dictated by a personal assistant, like Amazon Echo. These are simple, small, discrete tasks powered by the human voice and Internet-connected devices.

For the eyes

We are also actively exploring and building visualizations to provide “glance” and “scan” experiences that allow users to consume report data quickly and easily, without poring over tables of information.

Video Storytelling, for example, permits complicated and detailed reports to be animated and delivered via audio and video. Think about the intricacies of a quarterly financial statement; video storytelling does the thinking for you by producing the information in very scannable, organized buckets of audio and video.

The “Glance, Scan, Commit” philosophy becomes even more important when building new experiences. As users are exposed to new experiences, data from the Oracle Applications Cloud provides a constant that helps them embrace these new technologies. Delivering the data in a particular way, using designs shaped by “Glance, Scan, Commit,” increases that consistency.

If the Oracle user experience can provide customers with the information they need to do work every day, in a meaningful way, then new technologies are tools to increase user participation, not barriers.

In the not-so-distant past, “walk up and use” was the bar for experiences, meaning that the interactions should be easy enough to support use without any prior knowledge or training. The user would simply walk up and use it.

The rise of smartphones, ubiquitous connectivity, and IoT — and the emerging technology that enables their use — make our new goal as close to simply “walk up” as possible. Workers can use the system without interacting with it directly, because context collected from phones, combined with smart things around them and enterprise data in the cloud, allow the environment to pass useful information to users without any interactions. This removes more barriers and also works to increase user participation. The more users are engaging with an enterprise system, the more data goes in – and the more value our customers can get out of their investment.

And that, in the end, is the overarching goal of the Oracle user experience.

See it for yourself

If you want to get hands-on with what we do, we will be at Oracle OpenWorld participating in the OAUX Cloud Exchange. Attendance requires a non-disclosure agreement, so please register early.

Amazon Dash — It’s Dinner Time!

August 28th, 2015 1 Comment

Yesterday I received an Amazon Dash for ordering IZZE juice.

61KC+ua8uYL._SL1000_

I think it is a great device. Not that I would order tons of IZZE from Amazon, but at $5 it has a wifi module + a microcontroller + an LED + a battery + a nice enclosure, and it's usually in deep sleep, which means the battery can last for years. That's a bargain – a similar device would cost $20 – $40, at least before the ESP8266 became available.

The first thing I tried was to re-purpose it to toggle the poor man's Nest screen on and off. The PiTFT screen is quite bright and gets warm, so I wanted to turn it off without unplugging the cord and turn it back on instantly when needed. And here it is. Note how the IZZE sticker's color coordinates well with the Nest's warm yellow :)

IMAG7128

IMAG7129

The signal cycle goes through PubNub, which has MQTT at its core, and the response time is less than half a second. So it is remotely controlled – if I leave the poor man's Nest at the office, I can push the button at home to turn it off.

While my daughter was excited about the IZZE toggle button controlling the PiTFT screen, my wife asked me to do something more meaningful :) So for my second try, I turned the IZZE button into a "dinner time" call button.

Every time dinner is ready, I have to shout upstairs to get my kids, and more often than not, they can't hear me because they have headphones on.

So I modified the code a little bit to listen for the IZZE button waking up and trying to connect to my router, then used that signal to ask the Philips Hue lights to blink 3 times.
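I won't paste the whole thing here, but the pattern is simple enough to sketch in Python: watch for the button's ARP probe on the network, then hit the Hue bridge's REST API. The MAC address, bridge IP, API username, and light IDs below are placeholders, and this is one common way to do it rather than my exact code.

import time
import requests
from scapy.all import ARP, sniff  # sniffing needs root privileges

BUTTON_MAC = "aa:bb:cc:dd:ee:ff"  # the IZZE button's wifi MAC (placeholder)
HUE = "http://192.168.1.10/api/yourhueusername"  # bridge IP + API username (placeholders)
LIGHT_IDS = [4, 5]  # the lights at the kids' desks

def blink(light_id):
    # Blink a Hue light 3 times via the bridge's REST API.
    url = "{0}/lights/{1}/state".format(HUE, light_id)
    for _ in range(3):
        requests.put(url, json={"on": True, "bri": 254})
        time.sleep(0.5)
        requests.put(url, json={"on": False})
        time.sleep(0.5)

def on_packet(pkt):
    # A Dash-style button sends an ARP probe when it wakes up and joins wifi.
    if pkt.haslayer(ARP) and pkt[ARP].hwsrc.lower() == BUTTON_MAC:
        for light in LIGHT_IDS:
            blink(light)

sniff(prn=on_packet, filter="arp", store=0)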

Now my wife can just press the IZZE button in the kitchen, the Hue lights at my kids' desks start to blink, and that's the dinner time call.

I guess that is more meaningful; at least I don't have to shout upstairs anymore :)

Controlling NodeBox from an Apple Watch

August 28th, 2015 Leave a Comment

We are always on the hunt for interesting new uses of the Apple Watch, so when my colleague Ben Bendig alerted me to AstroPad’s new iPhone/Apple Watch app, I downloaded it immediately.

The app, AstroPad Mini, is intended to let you use your iPhone as a graphics tablet and controls Photoshop nicely right out of the box. But it will work with any Mac app; it lets you map any area of your Mac screen to the iPhone and map up to eight keyboard commands to buttons in an Apple Watch app. I reprogrammed it to work with NodeBox.

Although you can zoom and pan the Mac screen from your iPhone, this seems awkward for precision work (the iPad app would work better for that). It was more useful to map a small control area of the screen to the iPhone instead. For Photoshop you could arrange palettes (tools, layers, history) and dialogs (e.g. color picker) into a corner somewhere (maybe on a second monitor), map the iPhone to that, and use the iPhone as an auxiliary screen so you don’t have to keep moving your mouse back and forth. This worked particularly well for the color picker.

iPhone display when controlling a typical NodeBox node

For NodeBox I mapped the node pane, a small area which displays properties of the currently selected node. I could then select any node on my ginormous screen using a mouse or trackpad and then scrub its properties from the phone (without having to relocate the mouse).

Some of the Apple Watch buttons I use to control NodeBox

Even more fun: I mapped common actions to Apple Watch buttons: Save, Full Screen, Escape, New Node, Undo, Redo, Play, and Rewind. When creating animations, it's pleasant to lean back in my chair, put the display in full screen, and play and rewind to my heart's content, all from my watch.

I was also able to focus the iPhone on the slider of my transforming table (running as a web app) and could then stand back from the display and move the slider back and forth from my phone. You could do the same thing by just running the table app on the phone and mirroring it via AirPlay, but AstroPad let me focus the entire iPhone screen on the slider so that it was easier to manipulate.

The app did occasionally lose its wifi connection for a few moments, but otherwise worked fine.

I think with a little thought and practice this setup could speed my workflow somewhat. The benefits are marginal, though, not revolutionary. One tip: if you use the Apple Watch, be sure to set "Activate on Wrist Raise" to "Resume Previous Activity" instead of "Show Watch Face" so that you don't have to keep relaunching the AstroPad app.

We could conceivably use this app in some of the concept demos our group does. It would be a quick and dirty way of controlling some features from an iPhone or Apple Watch without having to write any special code. The catch is that the demo would have to run on a Mac. One advantage: they have an option for controlling the Mac via USB cable instead of wifi, a handy workaround at HQ or demo grounds when sharing a local wifi router is problematic.

Hmmm. I wonder if I could aim and fire my USB Rocket Launcher from my watch. Now THAT might be a killer app.

Intel Compute Stick: Nowhere and Back Again

August 28th, 2015 Leave a Comment

The Intel Compute Stick provides a full desktop experience in an ultra-portable HDMI dongle form factor. It’s like Google Chromecast, but an entire PC instead of just a web browser. I tested both the $150 Windows 8 version and the new $110 Ubuntu version.

The Intel Compute Stick (L) alongside an apple product (R).

TAP TAP TAP. IS THIS THING ON?

The HDMI end goes into a display, the power goes into an outlet, and a blue light comes on but the Stick does not boot. Either tap or long press the power button, then switch the display input source after a few seconds. Just by looking at the Stick you cannot tell if it’s off, on, or booting. Long press the power button and you may end up at the boot menu or the blue light may go off—I suppose making the Stick even more off than previously.

power

Power on? Try • • • – – – • • •

SEBCAC

It boots. This is where you need to find a keyboard, then a little later find a mouse. See, there’s only one USB port on the Stick so we ended up swapping peripherals during the setup. This gets old instantly so either get a USB hub or some bluetooth peripherals. Unsurprisingly, the Microsoft bluetooth keyboard we got from our local StaplesMax Depot did not like the Ubuntu version of the Stick so we needed a hub.

You will want to plug the included HDMI extension cable into the Stick or your wifi will be—at least in my experience—absent. Use the micro-USB charger that came with the Stick if you want it to boot at all. It’s a proprietary charger masquerading as non-proprietary. It’s better to find this out now rather than on the road. All of these non-moving parts make for something…squiddy.

rig

My rig: Squid Exists Between Computer and Chair

GETTING THAT FOR WHICH YOU PAID

I type this now with the intestines. The quick brown fox jumps over the lazy dog. It performs well for the tasks that most people perform most often, but then again we live at a time where my $35 MP3 player has a word processor, plays chess, and even runs the game DOOM.

popup

Regularly scheduled popup, even after emptying. I’m sure we can safely ignore it from now on…

Web pages load with a small delay but I have little complaint there. YouTube, for example, runs smoothly and overall the Stick is fine for common tasks.

Lag is lag. While typing I get periodic freezes. No words appear and then all of a sudden abracadabra. Opening a folder in the file manager sometimes takes a few beats. There’s a 64-bit quad-core Atom processor inside® but it sometimes feels like Mac OS 8 or Windows 3.1 on 20 year old hardware. Fun fact: the Ubuntu Stick has 1GB RAM / 8GB storage while the Windows Stick has 2GB RAM / 32GB storage. The internet says you can install Linux on the Windows version.

Leap Motion: seems harmless enough.

Let’s push things a bit. The Leap Motion is a cool USB device which tracks your hands’ motions and provides an API to do things with that data. Even though the Stick doe—oops, freeze-up—does not meet the minimum requirements, why not give it a try? I’m sure it’s fine and no harm will come of it.

It all had been going so not smoothly too…

PART NUMBER II

The Leap Motion did not work so I tried rebooting the Stick. And tried. And tried. And tried… Sure, I had not mastered the power button but this was different. The Stick would show the Ubuntu splash screen and then go endlessly dark. Luckily, others had faced the same issue. I simply had to hold the power button for just less than 4 seconds—not 4 seconds, mind you—to get into the boot menu, then choose to recover the BIOS. BIOS recovery did not fix the black screen. Update the BIOS then. That went smoothly, but did not fix it. There were other trials too.

At this point, I just wanted the beginnings of this very blog post off of the Stick. I decided to make a bootable USB drive so I could at least grab the document. I've only made "live" CDs/DVDs before, and making a live USB stick was more challenging and time-consuming than I had anticipated. I was able to get GParted installed but then decided Puppy Linux with persistence would be easier. I tried doing this on my Linux machine at home, but in the end the easiest thing I found was LinuxLive USB Creator on Windows. Prepared, I hoped that the next day I would be able to grab those words up there from the borked Stick.

When I got to work I decided to try the Stick again: same problem, of course. I had a meeting so I left it plugged in. When I got back I tried rebooting, ever the naïve optimist.

It boots!!! And an error message popped up, perhaps the cause of all of this: The volume “Filesystem root” has only 156.5 MB disk space remaining. Where had I seen that before?

My confidence restored along with the bootability, I am continuing this blog post on the Stick. It is behaving well with little lag although Firefox crashed a couple times with one tab open. I’m not entirely sure if this is “normal” or if the trials and tribulations took their toll.

Firefox the gray

If the self-healing mini-miracle had not happened, would I have been able to boot from the USB stick? No. There’s a Catch-22 because of the sole USB port. The keyboard needs that port to use the boot menu. Using the hub or switching to the hub when at that menu ends all input from then on. There is a micro SD slot, and if I wasn’t exhausted from all of this I would try to boot from it.

The End

Summer Projects and a Celebration

August 27th, 2015 Leave a Comment

If you follow us on Twitter (@theappslab) or on Facebook, you’ve seen some of the Summer projects coming together.

If not, here’s a recap of some of the tinkering going on in OAUX Emerging Technologies land.

Mark (@mvilrokx) caught the IoT bug from Noel (@noelportugal), and he’s been busy destroying and rebuilding a Nerf gun, which is a thing. Search for “nerf gun mods” if you don’t believe me.

11878916_1055356384509414_2509613851779629347_o

Mark installed our favorite chip, the ESP8266, to connect the gun to the internets, and he’s been tinkering from there.

Meanwhile, Raymond (@yuhuaxie) has been busy building a smart thermostat.

11880462_1056118014433251_3359065287035416341_n

11896214_1056118017766584_838064281034754658_n

And finally, completely unrelated to IoT tinkering, earlier this month the Oracle Mexico Development Center (MDC) in Guadalajara celebrated its fifth anniversary. As you know, we have two dudes in that office, Os (@vaini11a) and Luis (@lsgaleana), as well as an extended OAUX family. Congratulations.

11838559_950231585020447_942122106329896825_o

 

11218984_950231691687103_4011713548176238985_n

New Adventures with Raspberry Pi

August 18th, 2015 4 Comments

If you read here, you’ll recall that Noel (@noelportugal) and I have been supporters of the Raspberry Pi for a long time, Noel on the build side, many, many times, me on the talking-about-how-cool-and-useful-it-is side.

And we’ve been spreading the love through internal hackdays and lots of projects.

So, yeah, we love us some Raspi.

The little guy has become our go-to choice to power all our Internet of Things (IoT) projects.

Since its launch in early 2012, the little board that could has come a long way. The latest model, the Raspberry Pi 2 B, can even run a stripped down Windows 10 build to do IoT stuff.

Given all that we do with Raspis, I’ve always meant to get one for my own tinkering. However, Noel scared me off long ago with stories about how long it took to get one functional and the risks.

For example, I remember reading a long post early in the Pi's history about how choosing a Micro USB power supply was critical: too much amperage burned out the board, too little and it wouldn't run.

The information was out there, contributed by a huge and generous community. I just never had the time to invest.

Recently, I’ve been talking the good people at the Oracle Education Foundation (@ORCLcitizenship) about ways our team can continue to help them with their workshops, and one of their focus areas is the Raspberry Pi.

After all, the mission of the Raspi creators was to teach kids about computers, so yeah.

I figured it was finally time to overcome my fears and get dirty, and thanks to Noel, I found a kit that included everything I would need, this Starter Kit from Vilros.

IMG_20150812_094443

Vilros Raspberry Pi 2 Complete Starter Kit

Armed with this kit, I took a day and hoped that would be enough to get the little guy running. About an hour after starting, I was done.

Going from zero to functional is now ridiculously easy, thanks to these kits that include all the necessities.

So, now I have a functioning Pi running Raspbian. All I need is a project, any ideas?

Coda: Happy coincidence, as I wrote this post, I got a DM from Kellyn Pot’Vin-Gorman (@dbakevlar) asking if I knew any ways for her to use her Raspberry Pi skills in an educational capacity. Yay kismet.

Get a Look at the Future Oracle Cloud User Experience at Oracle OpenWorld 2015

August 18th, 2015 Leave a Comment

Here’s the first of many OpenWorld-related posts, this one cross-posted from our colleagues and friends at VoX, the Voice of Experience for Oracle Cloud Applications. Enjoy.

Are you all set for Oracle OpenWorld 2015 (@oracleopenworld)? Even if you think you’re already booked for the event, you’ll want to squeeze in a chance to experience the future of the Oracle Applications User Experience (OAUX) — and maybe even make a few UX buddies along the way — with these sessions, demos, and speakers. We loved OOW 2014, and couldn’t wait to get ready for this year.

lucasjakeoow2015

Lucas Jellema, AMIS & Oracle ACE Director (left), Anthony Lai, Oracle (center), Jake Kuramoto, Oracle (right) at OOW 2015 during our strategy day. Photo by Rob Hernandez.

Save the Date: Oracle Applications Cloud User Experience Strategy & Roadmap Day

The OAUX team is hosting a one-day interactive seminar ahead of Oracle OpenWorld 2015 to get select partners and customers ready for the main event. This session will focus on Oracle’s forward-looking investment in the Oracle Applications Cloud user experience.

You’ll get the opportunity to share feedback about the Oracle Applications Cloud UX in the real world. How is our vision lining up with what needs to happen in your market?

Speaking of our vision, we’ll start the session with the big-picture perspective on trends and emerging technologies we are watching and describe their anticipated effect on your end-user experiences. Attendees will take a deeper dive into specific focus areas of the Oracle Applications Cloud and learn about our impending investments in the user experience including HCM Cloud, CX Cloud, and ERP Cloud.

The team will also share with you the plans for Cloud user experience tools, including extensibility and user experience in the Platform-as-a-Service (PaaS4SaaS) world (get the latest here). We’ll close out the day with a “this-town-ain’t-big-enough” event that was extremely popular last year: the ACE Director Speaker Showdown.

Want to go?

When: 9 a.m. to 5 p.m. Wednesday, Oct. 21, 2015
Where: Oracle Conference Center, Room 202, 350 Oracle Pkwy, Redwood City, CA 94065
Who: Applications Cloud partners and customers (especially HCM, CX, or ERP Cloud), Oracle ACE Directors, and Oracle-internal Cloud thought leaders in product development, sales, or Worldwide Alliances and Channels

Register Now!

To get on our waitlist.

Active confidential disclosure agreement required.

Chloe Arnold and Mindi Cummins, Oracle, during OOW 2014 at the OAUX Cloud Exchange.

Save the Date: Oracle Applications User Experience Cloud Exchange

Speakers and discussions are all well and good, but what is the future of the Oracle Applications UX really like? The OAUX team is providing a daylong, demo-intensive networking event at Oracle OpenWorld 2015 to show you what the results of Oracle’s UX strategy will look like.

User experience is a key differentiator for the Oracle Applications Cloud, and Oracle is investing heavily in its future. Come see what our recently released and near-release user experiences look like, and check out our research and development user experience concepts, then let us know what you think.

These experience experiments for the modern user will delve even deeper into the OAUX team’s guiding principles of simplicity, mobility, and extensibility and come from many different product areas. This is cutting-edge stuff, folks. And, since we know you’re worn out from these long, interactive days, this event will also feature refreshments.

Want to go?

When: Monday, October 26, 2015
Where: InterContinental Hotel, San Francisco
Who: Oracle Applications Cloud Partners, Customers, Oracle ACEs and ACE Directors, Analysts, Oracle-internal Cloud thought leaders in product development, sales, or Worldwide Alliances and Channels.

Register Now!

To get on our waitlist.

Active confidential disclosure agreement required.

IFTTT Maker Channel

August 17th, 2015 Leave a Comment

large

A couple of months ago, IFTTT added a much-needed feature: a custom channel for generic URLs. They called it the Maker Channel. If you saw my previous post, I used it to power an IoT Staples Easy Button.

On closer look, this is a very powerful feature. Now you can basically make and receive web requests (webhooks) from any possible connected device to any accessible web service (API, public server, etc.). It is important to highlight that requests "may" be rate limited, so don't start going crazy with Big Data-style pushing.

if-maker-then-maker

You can also combine the Maker Channel with any of the existing Channels. Either you can trigger any of the existing Channels when you POST/GET to the Maker Channel:

curl https://maker.ifttt.com/trigger/remote_trigger/with/key/${secret_key}
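The trigger can also carry up to three values in a JSON body, which IFTTT exposes as ingredients in the recipe. Here's the equivalent call in Python; the event name and key are whatever you configured in the Maker Channel:

import requests

event = "remote_trigger"  # your Maker Channel event name
key = "your_secret_key"   # from the Maker Channel settings

requests.post(
    "https://maker.ifttt.com/trigger/{0}/with/key/{1}".format(event, key),
    json={"value1": "button pressed", "value2": "kitchen", "value3": "42"},
)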

if-maker-then-hue if-maker-then-lifx if-maker-then-gdrive

Or you could have IFTTT POST/GET/PUT to your server when any of the existing Channels are triggered.
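On the receiving end, all you need is a small endpoint for IFTTT to hit. Here's a minimal sketch of such a receiver; the route and payload fields depend entirely on how you set up the recipe:

from flask import Flask, request

app = Flask(__name__)

@app.route("/ifttt/new-email", methods=["POST"])
def new_email():
    # IFTTT sends whatever body you configure in the recipe; JSON is easiest.
    payload = request.get_json(silent=True) or {}
    print("IFTTT says:", payload)
    return "", 204

if __name__ == "__main__":
    app.run(port=8080)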

if-gmail-then-maker
There seem to be hundreds of possible combinations, or "recipes."

Do you use IFTTT? Do you find it useful? Let me know in the comments.

IFTTT Easy Button

August 15th, 2015 Leave a Comment

IMG_2798

The Amazon Dash button is all the buzz lately. Regardless of whether you think it is the greatest invention or just a passing fad, it is a nice little IoT device. There is already work underway to try to make it work with custom code.

There are a couple of crowdfunding projects (flic and btn) attempting to create custom IoT buttons as well. But these often come with a high price tag (around $100).

This is where the up-and-coming ESP8266 MCU can shine. For under $3 you can have a wifi chip plus a programmable microcontroller. You just need to add a cheap button (like the Staples Easy Button for around $7). Add the good ol' IFTTT Maker Channel and you will be set with your custom IoT button for about $10.
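To give a flavor of how little code that takes, here's roughly what the button boils down to if you run MicroPython on the ESP8266. The wifi credentials, event name, key, and pin number are placeholders, and the actual build (linked below) may differ in the details.

# Rough MicroPython sketch for an ESP8266 IFTTT button; placeholders throughout.
import time
import network
import urequests
from machine import Pin

SSID, PASSWORD = "my-wifi", "my-password"
IFTTT_URL = "https://maker.ifttt.com/trigger/easy_button/with/key/your_secret_key"

button = Pin(0, Pin.IN, Pin.PULL_UP)  # Easy Button switch wired to GPIO0

# Join the home network.
sta = network.WLAN(network.STA_IF)
sta.active(True)
sta.connect(SSID, PASSWORD)
while not sta.isconnected():
    time.sleep(0.1)

while True:
    if button.value() == 0:  # pressed (active low)
        urequests.get(IFTTT_URL)  # fire the Maker Channel event
        time.sleep(1)  # crude debounce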

IMG_2771IMG_2784

Check out my hackster.io project (https://www.hackster.io/noelportugal/esp8266-ifttt-easy-button) to learn how to make your own.

What Kids Tell Us about Touch and Voice

August 14th, 2015 2 Comments

Recently, my four year-old daughter and her little bestie were fiddling with someone’s iPhone. I’m not sure which parent had sacrificed the device for our collective sanity.

Anyway, they were talking to Siri. Her bestie was putting Siri through its paces, and my daughter asked for a joke, because that’s her main question for Alexa, a.k.a. the Amazon Echo.

AmazonEcho

Siri failed at that, and my daughter remarked something like “Our Siri knows the weather too.”

Thus began an interesting comparison of what Siri and “our Siri” i.e. the Echo can do, a pretty typical four year-old topping contest. You know, mine’s better, no mine is, and so forth.

After resolving that argument, I thought about how natural it was for them to talk to devices, something that I’ve never really liked to do, although I do find talking to Alexa more natural than talking to Google Now or Siri.

I’m reminded of a post, which I cannot find, Paul (@ppedrazzi) wrote many years ago about how easily a young child, possibly one of his daughters, picked up and used an iPhone. This was in 2008 or 2009, early days for the iPhone, and the child was probably two, maybe three, years old. Wish I could find that post.

From what I recall, Paul mused on how natural touch was as an input mechanism for humans, as displayed by how a child could easily pick up and use an iPhone. I’ve seen the same with my daughter, who has been using iOS on one device or another since she was much younger.

I’m observing that speech as equally natural to her.

Kids provide great anecdotal research for me because they’re not biased by what they already know about technology.

When I use something like gesture or voice control, I can’t help but compare it to what I know already, i.e. keyboard, mouse, which colors my impressions.

Watching kids use touch and voice input, the interactions seem very natural.

This is obvious stuff that’s been known forever, but it took how long for someone, Apple, to get touch right? Voice is in an earlier phase, advancing, but not completely natural.

One point Noel (@noelportugal) makes about voice input is that having a wake word is awkward, i.e. "Alexa" or "OK Google," but given privacy concerns, this is the best solution for the moment. Noel wants to customize that wake word, but that's only incrementally better.

When commanding the Amazon Echo, it’s not very natural to say “Alexa” and pause to ensure she’s listening. My daughter tends to blurt out a full sentence without the pause, “Alexa tell us a joke” which sometimes works.

That pause creates awkward usability, at least I think it does.

Since its release, Noel has led the charge for Amazon Echo research, testing and hacking (lots of hacking) on our team, and we've got some pretty cool projects brewing to test our theories. I've been using it around my home for a while, and I'm liking it a lot, especially the regular updates Amazon pushes to enhance it, e.g. IFTTT integration, smart home control, Google Calendar integration, reordering items from Amazon and a lot more.

Amazon is expanding its voice investment too, providing Alexa as a service, VaaS or AVS as they call it.

I fully believe the not-so-distant future will feature touch and speech, and maybe gestures, at the glance and scan layers of interaction, with the old school keyboard and mouse for heavy duty commit interactions.

Quick review, glance, scan, commit is our strategic design philosophy. Check out Ultan (@ultan) explaining it if you need a refresher.

So, what do you think? Thank you Captain Obvious, or pump the brakes Jake?

Find the comments.

Biohacking, Here Come the Cyborgs

August 11th, 2015 1 Comment

For me, 2015 has been the year of the quantified self.

I’ve been tracking my activity using various wearables: Nike+ Fuelband, Basis Peak, Jawbone UP24, Fitbit Surge, and currently, Garmin Vivosmart. I just set up Automatic to track my driving; check out Ben’s review for details. I couldn’t attend QS15, but luckily, Thao (@thaobnguyen) and Ben went and provided a complete download.

And, naturally, I’m fascinated by biohacking because, at its core, it’s the same idea, i.e. how to improve/modify the body to do more, better, faster.

index.1

Professor Kevin Warwick of the University of Reading

Ever since I read about RFID chip implanting in the early 00s, I've been curiously observing from the fringe. This post on The Verge today included a short video about biohacking that was well worth its 13 and a half minutes.

If you like that, check out the long-form piece, Cyborg America: inside the strange new world of basement body hackers.

This stuff is fascinating to me. People like Kevin Warwick and Steve Mann have modified themselves for the better, but I’m guessing the future of biohacking lies in healthcare and military applications, places where there’s big money to be made.

My job is to look ahead, and I love doing that. At some point during this year, Tony asked me what the future held; what were my thoughts on the next big things in technology.

I think the human body is the next frontier for technology. It’s an electrical source that could solve the modern battery woes we all have; it’s an enormous source for data collection, and you can’t forget it in a cab or on a plane. At some point, because we’ll be so dependent on it, technology will become parasitic.

And I for one, welcome the cyborg overlords.

Find the comments.

Jeremy and Noel Talk IoT at Kscope15

August 10th, 2015 Leave a Comment

By now, you know all about the Scavenger Hunt we ran at Kscope15 in partnership with our good friends at ODTUG and YCC.

Noel (@noelportugal) talked about the technical bits in a post last week, and today, ODTUG posted an interview featuring our fearless leader, Jeremy Ashley (@jrwashley), and Noel from the conference wherein they talk about Internet of Things (IoT) and the IoT bits included in the Hunt.

If you read here, you’ll know that IoT has been a long-time passion of Noel’s, dating back to well before Internet-connected devices were commonplace and way before they had an acronym.

Thanks to ODTUG for giving us the opportunity to do something cool and fun using our nerdy passion, IoT.

Guerrilla Testing at OHUG

August 10th, 2015 Leave a Comment

The Apple Watch came out, and we had a lot of questions: What do people want to do on it? What do they expect to be able to do on it? What are they worried about? And more importantly, what are they excited about?

But we had a problem—we wanted to ask a lot of people about the Apple Watch, but nobody had it, so how could we do any research?

Our solution was to do some guerrilla testing at the OHUG conference in June, which took place in Las Vegas. We had a few Apple Watches at that time, so we figured we could let people play around with the watch, and then ask them some targeted questions. This was the first time running a study like this, so we weren’t sure how hard it would be to get people to participate by just asking them while they were at the conference.

It turned out the answer was “not very.” We should have known—people both excited and skeptical were curious about what the watch was really like.

research1

Friend of the ‘Lab and Oracle ACE Director Gustavo Gonzalez and Ben enjoy some Apple humor.

Eventually we had to tell the people at our recruiting desk to stop asking people if they want to participate! Some sessions went on for over 45 minutes, with conference attendees chatting about different possibilities and concerns, brainstorming use cases that would work for them or their customers.

research2

The activity was a great success, generating some valuable insights not only about how people would like to use a smartwatch (Apple or not), but how they want notifications to work in general. Which, of course, is an important part of how people get their work done using Oracle applications.

research3

Our method was pretty simple: We had them answer some quick survey questions, then we put the watch on them and let them explore and ask questions. While they were exploring, we sent them some mock notifications to see what they thought, and then finished up asking them more in depth about what they want to be able to accomplish with the watch.

At the end, they checked off items from a list of notifications that they’d like to receive on the watch. We recorded everything so we didn’t have to have someone taking notes during the interviews. It took some time to transcribe everything, but it was extremely valuable to have actual quotes bringing to life the users’ needs and concerns with notifications and how they want things to work on a smartwatch.

Most usability activities we run at conferences involve 5–10 people, whether it’s a usability test or a focus group, and usually they all have similar roles. It was valuable here to get a cross-section of people from different roles and levels of experience, talking about their needs for not only a new technology, but also some core functionality of their systems.

In retrospect, we were a little lucky. It would probably be a lot more difficult to talk to the same number of people for an appreciable amount of time just about notifications, and though we did learn a good deal about wants and needs for developing for the watch, it was also a lot broader than that.

So one takeaway is to find a way to take advantage of something people will be excited to try out—not just in learning about that specific new technology, but other areas that technology can impact.

Game Mechanics of a Scavenger Hunt

August 7th, 2015 1 Comment

This year we organized a scavenger hunt for Kscope15 in collaboration with the ODTUG board and YCC.

As we found out, scavenger hunts are a great way to get people to see your content, create buzz and have fun along the way.  We also used the scavenger hunt as a platform to try some of the latest technologies. The purpose was to have conference attendees complete tasks using Internet of Things (IoT), Twitter hashtags and pictures and compete for a prize.

Here is a short technical overview of the technologies we used.

Registration

We opted to use a Node.js back-end and a React front-end to do a clever Twitter name autocomplete. As you typed your Twitter handle, the first and last name fields were completed for you. Once you filled in all your information, the form submitted to a REST endpoint based on Oracle APEX. This piece was built by Mark Vilrokx (@mvilrokx), and we were all very happy with the results.

registration

Smart Badge

We researched two possible technologies, Bluetooth Low Energy (BLE) beacons and Near Field Communication (NFC) stickers, and settled on NFC. The reason behind choosing NFC was the natural tendency we have to touch something (an NFC scanner) and get something in return (notifications + points). When we tested with BLE beacons, the "check-in" experience was more transparent but not as obvious when trying to complete a task.

We added an NFC sticker to each scavenger hunt participant's badge so they could get points by scanning their badge on our Smart Scanners. To provision each NFC badge, we built an Android app that took the tag ID and associated it with the user profile.

Circus_NFC

Smart Scanner

The Smart Scanner was a great way to showcase IoT. We used the beloved Raspberry Pis to host an NFC reader and the awesome blink(1) USB LED light to indicate whether the scan was successful or not. We also added a mini USB Wi-Fi dongle and a high-capacity battery to ensure complete freedom from wires.

Raymond Xie (@YuhuaXie) did a great job using Java 8 to read the NFC stickers and send the information to our REST server. The key part for these scanners was creating a failover system in case of internet disconnection. In that case, we would still read the NFC tag and record it locally, then post it to our server as soon as connectivity was restored.
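The scanner itself was Java 8, but the store-and-forward idea is easy to sketch in Python. The endpoint and the NFC read are stubbed out here; this shows the failover pattern, not the actual scanner code.

import collections
import requests

REST_ENDPOINT = "https://example.com/scavenger/api/scans"  # placeholder
pending = collections.deque()  # scans waiting to be delivered

def read_tag_id():
    # Stand-in for the blocking NFC read on the Raspberry Pi.
    return input("scan> ")

def post_scan(tag_id):
    requests.post(REST_ENDPOINT, json={"tag": tag_id}, timeout=5).raise_for_status()

def handle_scan(tag_id):
    pending.append(tag_id)
    while pending:  # flush everything we can, oldest first
        try:
            post_scan(pending[0])
            pending.popleft()
        except requests.RequestException:
            break  # offline; keep the queue and retry on the next scan

while True:
    handle_scan(read_tag_id())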

IMG_1625

Twitter and SMS Bots

Another key component was creating Twitter and SMS bots. Once again, Mark used Node.js to consume the Twitter stream. We looked for tweets mentioning #kscope15 and the hashtag for each task, then posted to our REST server, which made sure that points were given to the right person for the right task. Again, we were pleasantly surprised by the flexibility and power of Node.js. Similarly, we deployed a Twilio SMS server that listened for SMS subscriptions and sent SMS notifications.
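Mark's bot was Node.js, but the same idea in Python, using tweepy's 3.x streaming API, looks roughly like this. The keys, task hashtags, and endpoint are placeholders, not the actual ones we used.

import requests
import tweepy

auth = tweepy.OAuthHandler("CONSUMER_KEY", "CONSUMER_SECRET")
auth.set_access_token("ACCESS_TOKEN", "ACCESS_SECRET")

TASK_TAGS = {"#scanthebadge", "#selfieatthebooth"}  # hypothetical task hashtags

class HuntListener(tweepy.StreamListener):
    def on_status(self, status):
        # Award points when a #kscope15 tweet also carries a task hashtag.
        tags = {w.lower() for w in status.text.split() if w.startswith("#")}
        if tags & TASK_TAGS:
            requests.post("https://example.com/scavenger/api/tweets",
                          json={"user": status.user.screen_name, "text": status.text})

stream = tweepy.Stream(auth=auth, listener=HuntListener())
stream.filter(track=["#kscope15"])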

Leaderboards

We just didn’t settled by creating a web client to keep track of points. We created a web mobile client (React), an iOS app and an Android app. This was part of our research to see how people used each platform. As a bonus we created Apple Watch and Android Wear companion apps. One of the challenges we had was to create a similar experience across platforms.

leaderboards

Administration

We needed a way to manage all task and player administration. Since we used APEX and PL/SQL to create our REST interface, it was a no-brainer to use APEX for our admin front-end. The added bonus was that APEX has user authentication and session management, so all we had to do was create admin users with different roles.

Screen Shot 2015-08-07 at 1.54.25 PM

Conclusion

Creating a scavenger hunt for a tech conference is no easy task. You have to take many factors into consideration, from choosing the right tasks for the conference attendees to having an optimal wifi connection. An easy registration and provisioning process is also paramount for easy uptake.

We really had fun using the latest technologies, and we feel we successfully showcased what good UX can do for you across different devices and platforms. Stay tuned to see if we end up doing another similar activity. You won't want to miss it!

Royal High School Students Visit Oracle

August 6th, 2015 Leave a Comment

Last week a group of high school students from Royal High School visited Oracle Headquarters in Redwood Shores, California.

Royal High School, a public school in Simi Valley, California, is launching an International Business Pathway program. This program is part of California’s Career Pathways Trust (CCPT), which was established in 2013 by the California State Legislature to better prepare students for the 21st Century workplace.

The goal of the visit was to introduce students to real life examples of what they will be studying in the year ahead, which include Business Organization and Environment, Marketing, Human Resources, Operations, and Finance.

I was honored to be invited to be on a career panel with three other Oracle colleagues and share our different careers and career paths.

L-R Chris Kite, VP Finance A&C/NSG; Jessica Moore, Sr Director Corporate Communications; Thao Nguyen, Director Research & Design; Kym Flaigg, College Recruiting Manager

While Oracle is known as a technology company, it is made up of many different functional areas beyond engineering. The panel shared our diverse backgrounds and education, our different roles within the organization, the different cultures within Oracle, and more.

Since these are students in an international business program, we also discussed Oracle as a global business. The panelists shared our individual involvement and impact on Oracle’s international business – from working with Oracle colleagues located throughout the world to engaging with global customers, partners, and journalists.

By the end, the students had heard stories of our professional and personal journeys to where we are now. The common themes were to be authentic and true to yourself, that change is inevitable, and that learning never stops. All of the panelists started on one path but ultimately found new interests and directions.

The students learned there are many different opportunities in companies and many different paths to achieve career and life goals. Bring your passion to work and you’ll succeed.

On a personal note, I grew up in the same area as these students, the San Fernando Valley in Southern California. I moved from the San Fernando Valley to the Silicon Valley years ago, but thanks to Oracle Giving, I am able to give back to my roots and am proud to participate in Oracle's community outreach.

Usability of Text Analytics

August 5th, 2015 Leave a Comment

User experience design as a career fell largely within the era of the GUI. Thus most people in my profession are visual thinkers, if not by birth then by experience. When it comes to presenting information, we think visualization. Times are changing, and with that we are challenged to present information verbally. This is where text analytics meets UX. I have only worked on a handful of projects that are about text, and only with a handful of text technologies, but the experience has been worth mentioning.

Text analytics, more or less meaning the same as text mining, is “devising of patterns and trends from text through means such as statistics…” (Oh, Wikipedia!)

There are many areas of text analytics – text summarization, information retrieval, sentiment analysis, named entity recognition, and so on. The tools and techniques are constantly getting better; it is exciting. I get the impression that text mining companies are intoxicated with the coolness of the technologies they build, so they think of the technology first and of possible industry applications later. As I am conditioned to think in the opposite direction, it was interesting for me to see how the same technique can be so useful in one case and completely irrelevant in another.

Here is my use case inventory. Take a brand manager versus a sales representative. A brand manager might like daily sentiment analysis of her brands and those of her competitor. On the other hand, the sales representatives we have interviewed are not at all into sentiment analysis. What they look for is highly tuned searches that would brief them daily on what’s happening with their top clients. They also search for industry news that they can retweet with a hope to influence the clients. A money manager might need to use text analytics to contextualize the jump in a stock price, while a marketer would rather have a predictive text mining tool to target customers for a purchasing recommendation. I often research different design topics and am interested in text analytics that would make me see at a glance what a collection of papers or articles is about. I also like to see daily summaries of trending topics in design and technology.

So the first lesson I’ve learned is how all text analytics use cases are different.

The second lesson is how the devil is in detail.

For one of my projects, I wanted a condensed representation of press coverage for the new release of HCM applications, specifically its user experience. For my purposes, I wanted it as a cloud of words. I collected a number of press releases and reviews and fed them through four text analytics tools I could get my hands on, namely Semantria, Open Calais, TagCrowd, and Oracle's own Social Relationship Management (SRM) Listen and Analyze.

Here are the results.

results

For fairness of comparison, I stripped the lists of their original formatting (the products have drastically different interfaces) and limited the results to 20 items. Moreover, some packages categorize the results into "themes," "entities," etc., so I kind of had to either pick or merge. SRM doesn't allow me to feed a corpus of text to it to analyze, so I had to create a search query about OAUX instead.

You can see that the differences are dramatic. I believe some differences are the results of subtle choices made by the product designers – frequency thresholds, parts of speech included, the choice of 1-, 2- or 3-word phrases, etc. Other differences are the results of the actual algorithms beneath – bag of words, word vectors, neural nets, skip-grams, chaining, deep learning… At first, I was determined to figure them all out. I quickly realized that there was no way I could get through the math of it. So I decided to approach it the way I approach chocolate tasting. If I like the taste, I'll make an effort to read the ingredients.

Semantria I liked the most. I liked the combination of themes and entities; I thought the length of the phrases was well balanced. I read the ingredients. Instead of plain word frequencies, Semantria uses something called “lexical chaining” to score themes. “The algorithm takes context and noun-phrase placement into account when scoring themes.” I put “lexical chaining” high on my list of likes.

OpenCalais looked totally solid, though heavy on terms and nouns, and light on themes and adjectives. This is no surprise, as Named Entity Recognition is OpenCalais' core competency, and there it is unsurpassed. The new "generic relations" feature, in the shape of a "subject-predicate-object" triple, is amazing.

TagCrowd’s was definitely too plain to represent what the collection is about. This is a very simple well-meaning word frequency tool, with the stop words (the and a removed) being its only “lexical analysis” feature. From TagCrowd I’ve learned that the word frequencies can take you only that far.

Finally, there is SRM. SRM uses latent semantic analysis, a vector-space technique that infers latent topics from a term-document matrix.
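
Latent semantic analysis itself is easy to sketch: build a term-document matrix, weight it with TF-IDF, then use a truncated SVD to project it onto a few latent “topics.” Here is a toy version with scikit-learn; the three documents are made up, and SRM’s actual pipeline is, of course, its own.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD

# Toy stand-ins for the collected press coverage
docs = [
    "the new HCM user experience is simpler and cloud ready",
    "reviewers praised the simplified HCM cloud user experience",
    "analysts focused on mobile design and the cloud",
]

tfidf = TfidfVectorizer(stop_words="english")
X = tfidf.fit_transform(docs)              # term-document matrix

svd = TruncatedSVD(n_components=2, random_state=0)
svd.fit(X)                                 # latent "topics" via truncated SVD

terms = tfidf.get_feature_names_out()
for i, topic in enumerate(svd.components_):
    top = topic.argsort()[-4:][::-1]       # four highest-weighted terms per topic
    print(f"topic {i}:", [terms[j] for j in top])
```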

And what’s your favorite?

NodeBox

August 4th, 2015 Leave a Comment

In my previous post I argued that the hunt is on for a better way to code, one better suited to a designer’s need to test new interactions. I said I wanted a process less like solving a Rubik’s cube and more like throwing a pot. What does this actually mean?

“I want to grab a clump of clay and just continuously shape it with my hands until I am satisfied.”

There are two key concepts here: “continuously shape” and “with my hands.”

Code that is continuously shaped is called reactive programming. A familiar example is the spreadsheet: change a single cell and the rest of the sheet automatically updates. There is no need to write a series of instructions and then “run” them to see what happens; instead every change you make instantly affects the outcome.
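
To make that concrete, here is a toy sketch of the idea in Python (my own illustration, not any particular product): two “cells” and a formula that recomputes the moment an input changes, with no separate “run” step.

```python
class Cell:
    """A toy reactive cell: assigning .value re-runs every dependent formula."""
    def __init__(self, value=None):
        self._value = value
        self._listeners = []

    @property
    def value(self):
        return self._value

    @value.setter
    def value(self, new):
        self._value = new
        for recompute in self._listeners:
            recompute()

def formula(target, inputs, fn):
    """Keep target equal to fn(*inputs), recomputing on every input change."""
    def recompute():
        target.value = fn(*(c.value for c in inputs))
    for c in inputs:
        c._listeners.append(recompute)
    recompute()

a, b, total = Cell(2), Cell(3), Cell()
formula(total, [a, b], lambda x, y: x + y)
print(total.value)   # 5
a.value = 10         # the change propagates immediately, like editing a spreadsheet cell
print(total.value)   # 13
```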

“With my hands” refers to a kinesthetic or visuospatial style of thinking which leverages our ability to perceive and manipulate spatial relationships. Traditional programming languages are frustrating for visual thinkers; they rely on a phonological style which uses hands only to type and eyes only to read.

In theory, any written language can instead be represented as a collection of elements arranged and connected in space; this is the idea behind visual programming languages. Instead of typing instructions, you drag objects around and connect them together to express ideas.

Clockwise from upper left: Origami (Quartz Composer), Coral, Scratch, Form

The image above includes some typical examples. Block style IDEs (e.g. Scratch) let you snap together commands like Lego bricks. The others let you drag boxes around and string wires between them.

I think it’s easy to see at a glance the problem with this approach: it doesn’t scale. Stringing wires or snapping bricks gets really messy really fast. Reaching elbow deep into a rat’s nest of wires is not anything like shaping clay.

But it doesn’t have to be this bad. The problem these examples have is that, although visual, they slavishly adhere to an imperative style of coding where instructions are listed in order and even the words within each instruction must follow a specific syntax. This forces connections into arbitrary knots and loops, creating more tangles and going against the overall flow. A visual style demands a simpler, more fluid kind of logic.

Enter an old idea in computer science which has seen a recent resurgence: functional programming. In place of a sequence of instructions which focus on how to do things, functional programming languages use chains of transformations that focus on the desired result at each point. Loops are banished and each node can have only one output so everything naturally flows in the same direction. A classic example is Lisp; a more modern functional language now gaining traction is Clojure.  Don’t be scared.
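
In Python terms (just an illustration, not NodeBox), the shift looks something like this: the same job written as step-by-step instructions that mutate state, and then as a one-way chain of transformations.

```python
scores = [72, 95, 48, 88, 61]

# Imperative style: a sequence of instructions that mutates state as it goes
curved = []
for s in scores:
    if s >= 60:
        curved.append(round(s * 1.1, 1))

# Functional style: a chain of transformations; data flows one way, nothing is mutated in place
curved_fn = [round(s * 1.1, 1) for s in filter(lambda s: s >= 60, scores)]

print(curved == curved_fn)  # True: same result, different shape of thinking
```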

So what we need is a functional reactive programming language with a responsive, fun-to-use visual IDE, designed specifically for artists. Extra bonus points if it includes natural scrubbing interactions for setting values à la Bret Victor.

Meet NodeBox. NodeBox is an open source, cross-platform GUI originally developed for generative artists. I first encountered it at the OpenVis conference in 2013. The video of that presentation is a great introduction; you can skip to 22:00 to see a demo of NodeBox in action which shows how quickly and easily you can shape a visualization. This is what I mean by shaping clay.

A simple NodeBox network: Recursive Pentagons

This NodeBox “network” draws a set of nested pentagons. The structure is so simple you can see how it works just by looking at it. Make a pentagon node, color it, hook it to a “nextChild” subnetwork that makes a smaller copy, repeat three more times, then combine all five pentagons into a single display.

You can double-click on any node to render it on the main screen; a white triangle in the lower right corner indicates the currently rendered node. You can then single-click any other node to adjust its parameters – in this case the original pentagon node. By scrubbing (dragging the mouse across) the radius field I can increase or decrease its size; making the top pentagon bigger will automatically make all its children bigger. In this way I can quickly scrub values to get the result I want.
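
For readers who think better in text, here is roughly the same idea as a chain of pure Python functions. This is an analogy, not NodeBox code; the radii, colors, and function names are made up. Each “node” is a function, and nextChild just returns a smaller, lighter copy of whatever it is given.

```python
import math

def pentagon(radius, color):
    """A 'pentagon node': five (x, y) points on a circle plus a fill color."""
    points = [(radius * math.cos(2 * math.pi * i / 5 - math.pi / 2),
               radius * math.sin(2 * math.pi * i / 5 - math.pi / 2)) for i in range(5)]
    return {"points": points, "color": color}

def next_child(shape, scale=0.6, shade=0.15):
    """The 'nextChild' subnetwork: a smaller, lighter copy of its input."""
    radius = math.hypot(*shape["points"][0]) * scale
    return pentagon(radius, tuple(min(1.0, c + shade) for c in shape["color"]))

# Chain the transformations: one parent pentagon, four nested children
shapes = [pentagon(100, (0.2, 0.4, 0.8))]
for _ in range(4):
    shapes.append(next_child(shapes[-1]))

# The 'combine' step: here we just report each pentagon's size and color
for s in shapes:
    print(round(math.hypot(*s["points"][0]), 1), s["color"])
```

Scrubbing the radius field in NodeBox is the equivalent of changing that first `pentagon(100, ...)` call and watching every child update.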

A NodeBox network which can draw itself

Another (somewhat mind-bending) example: a NodeBox network which can draw itself. On the right is a set of nodes that opens a JSON file, analyzes the contents, and plots it as a series of rectangles and connecting lines. On the left is what happens when that JSON file happens to contain this network’s own structure (taken directly from its .ndbx file).

I’ve been playing with NodeBox for about six months now and have created over forty networks which let me play with and try out various visualizations and data-driven animations. I find that some things which are easy to do in other languages are hard to do in NodeBox (or just hard for me to figure out how to do). But the reverse is also true: some things that are difficult or time-consuming to do in any other language are spectacularly easy in NodeBox.

Debugging, in particular, is much less time-consuming and almost fun. I catch most bugs immediately since every change I make is instantly rendered. When something unexpected does happen, I can just click on each node in turn to follow the steps of the process. When something is too big, too small, or in the wrong place, I can simply scrub a parameter or even grab the offending object and drag it where it needs to go.

Scaling up to large projects is manageable but remains problematic. If you think clearly enough, you can encapsulate everything into a handful of subnetworks and sub-subnetworks. But this only goes so far. NodeBox’s functional approach eliminates “side effects”: a change made to one function cannot affect distant functions unless those two functions are physically linked. This prevents the nasty, hard-to-trace bugs which plague procedural languages, but it also means there are no global variables, which in turn means that if you want a variable to affect twenty different functions, you will need to create at least twenty separate links.
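
A tiny illustration of the trade-off, in plain Python with made-up functions: once there is no global to lean on, the shared value has to ride along every link.

```python
# With a global, any function could silently read (or change) SPACING.
# In a side-effect-free network that option is gone: the value must be
# passed explicitly to every function that needs it -- one link per consumer.

def layout_row(items, spacing):
    return [index * spacing for index, _ in enumerate(items)]

def layout_grid(rows, spacing):
    return [layout_row(row, spacing) for row in rows]

print(layout_grid([["a", "b"], ["c", "d", "e"]], spacing=20))
# [[0, 20], [0, 20, 40]]
```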

You can alleviate this somewhat by using Null nodes as cable ties. If two clumps of nodes have many interlinkages, you can physically separate them, lay one cable across the void to a Null node, and then distribute its output from there. After I get something working in NodeBox I usually spend some more time “tidying up,” rearranging nodes into related clumps and positioning nodes to reduce the number of crossing lines. I regard this not as a nuisance, but as a pleasant, almost meditative ritual that helps me optimize my code.

NodeBox does have one major limitation: it doesn’t do input. It was designed to produce intricate still images and animations, not to facilitate end-user interactions. So there are no input fields, no buttons, no sliders, no checkboxes – no way to create a standalone interactive prototype. These things could all be done in theory; it’s just that NodeBox does not currently provide any *nodes* to do them.

This is ironic because the NodeBox IDE itself is richly interactive. Its vector-based ZUI (zoomable user interface) is a joy to use. So as a designer I can experience wonderful interactions by scrubbing node parameters and zooming in and out, but I can’t create a similar experience for my end users.

My use of NodeBox, therefore, is limited to creating sketches and animations. This is no small thing – it allows me to play and try and then convey the essence of ideas which are inherently hard to test and demonstrate. But for now I will still have to move to other languages if I need to create stand-alone interactives.

I think the deeper value of NodeBox is that it shows what is possible. There are better ways of imagining, better ways of coding. If we hope to create ever better experiences for our users, we need to keep searching for these better ways.