In my previous post I argued that the hunt is on for a better way to code, a way more suited for a designer’s need to test new interactions. I said I wanted a process less like solving a Rubik’s cube and more like throwing a pot. What does this actually mean?
“I want to grab a clump of clay and just continuously shape it with my hands until I am satisfied.”
There are two key concepts here: “continuously shape” and “with my hands.”
Code that is continuously shaped is called reactive programming. A familiar example is the spreadsheet: change a single cell and the rest of the sheet automatically updates. There is no need to write a series of instructions and then “run” them to see what happens; instead every change you make instantly affects the outcome.
“With my hands” refers to a kinesthetic or visuospatial style of thinking which leverages our ability to perceive and manipulate spatial relationships. Traditional programming languages are frustrating for visual thinkers; they rely on a phonological style which uses hands only to type and eyes only to read.
In theory, any written language can instead be represented as a collection of elements arranged and connected in space; this is the idea behind visual programming languages. Instead of typing instructions, you drag objects around and connect them together to express ideas.
The image above includes some typical examples. Block-style IDEs (e.g. Scratch) let you snap together commands like Lego bricks. The others let you drag boxes around and string wires between them.
I think it’s easy to see at a glance the problem with this approach: it doesn’t scale. Stringing wires or snapping bricks gets really messy really fast. Reaching elbow-deep into a rat’s nest of wires is nothing like shaping clay.
But it doesn’t have to be this bad. The problem these examples have is that, although visual, they slavishly adhere to an imperative style of coding where instructions are listed in order and even the words within each instruction must follow a specific syntax. This forces connections into arbitrary knots and loops, creating more tangles and going against the overall flow. A visual style demands a simpler, more fluid kind of logic.
Enter an old idea in computer science which has seen a recent resurgence: functional programming. In place of a sequence of instructions which focus on how to do things, functional programming languages use chains of transformations that focus on the desired result at each point. Loops are banished and each node can have only one output so everything naturally flows in the same direction. A classic example is Lisp; a more modern functional language now gaining traction is Clojure. Don’t be scared.
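The difference between the two styles is easy to show in miniature. Here is one toy task, computing the areas of circles above a minimum radius, written both ways in Python (the task itself is just an example I made up). The functional version is a single chain of transformations, exactly the kind of one-directional flow that maps cleanly onto nodes and wires:

```python
import math

radii = [1.0, 2.5, 4.0, 0.5]

# Imperative: step-by-step instructions that mutate state.
areas_imperative = []
for r in radii:
    if r >= 1.0:
        areas_imperative.append(math.pi * r * r)

# Functional: a chain of transformations, no mutation, one output.
areas_functional = list(
    map(lambda r: math.pi * r * r,
        filter(lambda r: r >= 1.0, radii)))

assert areas_imperative == areas_functional
```

Read the functional version right to left: filter, then transform, then collect. Each stage has exactly one output feeding the next, so the "wiring diagram" of the computation can never tangle back on itself.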
So what we need is a functional reactive programming language with a responsive, fun-to-use visual IDE, designed specifically for artists. Extra bonus points if it includes natural scrubbing interactions for setting values à la Bret Victor.
Meet NodeBox. NodeBox is an open source, cross-platform GUI originally developed for generative artists. I first encountered it at the OpenVis conference in 2013. The video of that presentation is a great introduction; you can skip to 22:00 to see a demo of NodeBox in action which shows how quickly and easily you can shape a visualization. This is what I mean by shaping clay.
This NodeBox “network” draws a set of nested pentagons. The structure is so simple you can see how it works just by looking at it. Make a pentagon node, color it, hook it to a “nextChild” subnetwork that makes a smaller copy, repeat three more times, then combine all five pentagons into a single display.
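For readers who think better in text, the same logic can be sketched as ordinary Python. The names `pentagon` and `next_child` and the 0.6 scale factor are my own stand-ins for the nodes described above, not NodeBox’s actual API:

```python
import math

def pentagon(radius, color):
    """Five vertices of a regular pentagon centered at the origin."""
    points = [(radius * math.cos(2 * math.pi * i / 5 - math.pi / 2),
               radius * math.sin(2 * math.pi * i / 5 - math.pi / 2))
              for i in range(5)]
    return {"points": points, "radius": radius, "color": color}

def next_child(shape, scale=0.6):
    """The 'nextChild' subnetwork: a smaller copy of its input."""
    return pentagon(shape["radius"] * scale, shape["color"])

# Make the parent pentagon, then four successively smaller children.
shapes = [pentagon(100, "teal")]
for _ in range(4):
    shapes.append(next_child(shapes[-1]))

# The 'combine' node just merges all five into a single display list.
combined = shapes
```

Note how the nesting falls out of the structure: each child is computed from the previous shape, so changing the parent’s radius changes every descendant, which is exactly what makes scrubbing so satisfying.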
You can double-click on any node to render it on the main screen; a white triangle in the lower right corner indicates the currently rendered node. You can then single-click any other node to adjust its parameters – in this case the original pentagon node. By scrubbing (dragging the mouse across) the radius field I can increase or decrease its size; making the top pentagon bigger will automatically make all its children bigger. In this way I can quickly scrub values to get the result I want.
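The scrubbing interaction itself is conceptually tiny: horizontal mouse movement becomes a parameter delta. A one-function sketch, where `scrub` and the `step` sensitivity are invented names rather than anything from NodeBox’s internals:

```python
def scrub(value, dx, step=0.5):
    """Map a horizontal drag of dx pixels onto a parameter change."""
    return value + dx * step

radius = 10.0
radius = scrub(radius, 4)              # drag right: radius grows to 12.0
radius = scrub(radius, -20, step=0.1)  # gentle drag left: back to 10.0
```

Combined with reactive evaluation, this is what turns parameter tweaking into continuous shaping: every pixel of drag immediately re-renders the result.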
Another (somewhat mind-bending) example: a NodeBox network which can draw itself. On the right is a set of nodes that opens a JSON file, analyzes the contents, and plots it as a series of rectangles and connecting lines. On the left is what happens when that JSON file happens to contain this network’s own structure (taken directly from its .ndbx file).
I’ve been playing with NodeBox for about six months now and have created over forty networks which let me play with and try out various visualizations and data-driven animations. I find that some things which are easy to do in other languages are hard to do in NodeBox (or just hard for me to figure out how to do). But the reverse is also true: some things that are difficult or time-consuming to do in any other language are spectacularly easy in NodeBox.
Debugging, in particular, is much less time-consuming and almost fun. I catch most bugs instantly since every change I make is instantly rendered. When something unexpected does happen I can just click on each node in turn to follow the steps of a process. When something is too big or too small or in the wrong place I can simply scrub a parameter or even just grab the offending object and drag it where it needs to go.
Scaling up to large projects is manageable, but remains problematic. If you think clearly enough you can encapsulate everything into a handful of subnetworks and sub-subnetworks. But this can only go so far. NodeBox’s functional approach eliminates “side effects”; a change made to one function cannot affect distant functions unless those two functions are physically linked. This prevents the nasty hard-to-trace bugs which plague procedural languages, but it also means there are no global variables, which in turn means that if you want a variable to affect twenty different functions you will need to create at least twenty separate links.
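The trade-off shows up even in two lines of Python. In a procedural style one global can silently influence any function; in a functional style the same value must be passed in explicitly, once per function that needs it, which is the textual equivalent of running one wire from a value node to every node that uses it (the function names here are just illustrative):

```python
scale = 2.0                      # global state

def grow_procedural(r):
    return r * scale             # hidden dependency on `scale`

def grow_functional(r, scale):
    return r * scale             # dependency is visible in the signature

assert grow_procedural(3.0) == grow_functional(3.0, 2.0)
```

The functional version is more honest, and more traceable, but if twenty functions need `scale`, all twenty signatures (or wires) must say so.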
You can alleviate this somewhat by using Null nodes as cable ties. If two clumps of nodes have many interlinkages, you can physically separate them, lay one cable across the void to a Null node, and then distribute its output from there. After I get something working in NodeBox I usually spend some more time “tidying up,” rearranging nodes into related clumps and positioning nodes to reduce the number of crossing lines. I regard this not as a nuisance, but as a pleasant, almost meditative ritual that helps me optimize my code.
NodeBox does have one major limitation: it doesn’t do input. It was designed to produce intricate still images and animations, not to facilitate end user interactions. So there are no input fields, no buttons, no sliders, no checkboxes – no way to create a standalone interactive prototype. These things could all be done in theory, it’s just that NodeBox does not currently provide any *nodes* to do them.
This is ironic because the NodeBox IDE itself is richly interactive. Its vector-based ZUI (zoomable user interface) is a joy to use. So as a designer I can experience wonderful interactions by scrubbing node parameters and zooming in and out, but I can’t create a similar experience for my end users.
My use of NodeBox, therefore, is limited to creating sketches and animations. This is no small thing – it allows me to play, experiment, and then convey the essence of ideas which are inherently hard to test and demonstrate. But for now I will still have to move to other languages if I need to create stand-alone interactives.
I think the deeper value of NodeBox is that it shows what is possible. There are better ways of imagining, better ways of coding. If we hope to create ever better experiences for our users, we need to keep searching for these better ways.