Personal Assistant or Creepy Stalker? The Rise of Cognitive Computing

I just got back to my hotel room after attending the first day of the two-day Cognitive Computing Forum, a conference running in parallel with the Semantic Technology (SemTech) Business Conference and the NoSQL Conference here in San Jose. Although the forum attracts fewer attendees and has only a single track, I cannot remember attending a symposium where so many stimulating ideas and projects were presented.

What is cognitive computing? It refers to computational systems that are modeled on the human brain – either literally by emulating brain structure or figuratively through using reasoning and semantic associations to analyze data. Research into cognitive computing has become increasingly important as organizations and individuals attempt to make sense of the massive amount of data that is now commonplace.

The first forum speaker was Chris Welty, who was an instrumental part of IBM’s Watson project (the computer that beat the top human contestants on the game show Jeopardy!). Chris gave a great overview of how cognitive computing changes the traditional software development paradigm. Specifically, he argued that rather than striving for perfection, it is okay to be wrong as long as the system succeeds often enough to be useful (he pointed to search engine results as a good illustration of this principle). Development should focus on incremental improvement, using clearly defined metrics to measure whether new features deliver real benefit. Another important point he made was that there is no single best solution; often the most productive strategy is to apply several different analytical approaches to the same problem and then use a machine learning algorithm to mediate between (possibly) conflicting results.
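That last idea, combining several imperfect analyzers and letting a learned model arbitrate among them, is essentially what the machine learning community calls stacking. The sketch below is a minimal illustration in Python using scikit-learn; the synthetic dataset and the three base models are stand-ins of my own, not anything from the Watson pipeline.

```python
# Minimal stacking sketch: several different analytical approaches are
# applied to the same problem, and a logistic-regression "mediator" learns
# how much to trust each one. Data and models are illustrative stand-ins.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC

X, y = make_classification(n_samples=500, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Three independent approaches to the same classification problem.
base_learners = [
    ("naive_bayes", GaussianNB()),
    ("svm", SVC(probability=True, random_state=0)),
    ("forest", RandomForestClassifier(n_estimators=100, random_state=0)),
]

# The mediator arbitrates among their (possibly conflicting) predictions
# rather than trusting any single model.
mediator = StackingClassifier(
    estimators=base_learners,
    final_estimator=LogisticRegression(),
)
mediator.fit(X_train, y_train)
print("held-out accuracy: %.2f" % mediator.score(X_test, y_test))
```

None of the individual models needs to be right all the time; the ensemble only needs to be useful often enough, which is exactly Welty’s point.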

There were also several interesting – although admittedly esoteric – talks by Dave Sullivan of Ersatz Labs (@_DaveSullivan) on deep learning, Subutai Ahmad of Numenta on cortical computing (which attempts to emulate the architecture of the neocortex) and Paul Hofmann (@Paul_Hofmann) of Saffron Technology on associative memory and cognitive distance. Kristian Hammond (@KJ_Hammond) of Narrative Science described technology that can take structured data and use natural language generation (NLG) to automatically create textual narratives, which he argued are often much better than data visualizations and dashboards in promoting understanding and comprehension.
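To make the Narrative Science idea concrete, here is a toy sketch of turning a structured record into a sentence of narrative text. It is only a rule-based illustration of the input/output shape (the field names are hypothetical); real NLG systems add content selection, aggregation, and far more sophisticated surface realization.

```python
# Toy narrative generation: a structured record in, a sentence of prose out.
# Purely illustrative; not how Narrative Science's technology works internally.
quarterly = {"region": "EMEA", "revenue": 4.2, "prior": 3.7, "unit": "M USD"}

def narrate(record: dict) -> str:
    change = record["revenue"] - record["prior"]
    direction = "rose" if change > 0 else "fell" if change < 0 else "held steady"
    return (
        f"{record['region']} revenue {direction} to {record['revenue']} "
        f"{record['unit']}, a change of {change:+.1f} versus the prior quarter."
    )

print(narrate(quarterly))
```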

However, the highlight of this first day was the talk entitled ‘Expressive Machines’ by Mark Sagar from the Laboratory for Animate Technologies. After showing some examples of facial-tracking CGI from the movies ‘King Kong’ and ‘Avatar’, Mark described a framework modeled on human physiology that emulates human emotion and learning. I’ve got to say that even though I have a solid appreciation and understanding of the underlying science and technology, Mark’s BabyX – who is now really more a virtual toddler than an infant – blew me away. It was amazing to see Mark elicit various emotions from BabyX. Check out this video about BabyX from TEDxAuckland 2013.

At the end of the day, the presentations helped crystallize some important lines of thought in my own carbon-based ‘computer’.

First, it is no surprise that human-computer interaction is moving towards more natural user interfaces (NUIs), where artificial intelligence, fueled by semantics and machine learning and coupled with more natural ways of interacting with devices, results in more intuitive experiences.

Second, while the back-end analysis is extremely important, what is particularly interesting to me is the human part of human-computer interaction. Specifically, while we often focus on how humans manipulate computers, an equally interesting question is how computers can be used to ‘manipulate’ humans, leveraging how our brains are wired to enhance our comprehension of information. After all, we do not view the world objectively, but through a lens shaped by the idiosyncrasies of our cultural and evolutionary history – a fact exploited by the advertising industry.

For example, our brains are prone to anthropomorphism and will recognize faces even when faces aren’t there. Furthermore, we find symmetrical faces more attractive than asymmetrical ones. We are also attracted to infantile features – a fact put to good use by Walt Disney animators, who made Mickey Mouse appear more infant-like over the years to increase his popularity (as documented by paleontologist Stephen Jay Gould). In fact, we exhibit a plethora of cognitive biases (ever experience the Baader-Meinhof phenomenon?), including the “uncanny valley”, which describes a rapid drop-off in comfort level as computer agents become almost – but not quite perfectly – human-looking. And as Mark Sagar’s work demonstrates, emotional, non-verbal cues are extremely important (the most impressive part of Sagar’s demo was not the A.I. – after all, there is a reason why BabyX is a baby and not a fully conversant adult – but rather the emotional response it elicited in the audience).

The challenge in designing intelligent experiences is to build systems that are informative and predictive but not presumptuous, tending towards the helpful personal assistant rather than the creepy stalker. Getting it right will depend as much on understanding human psychology as it will on implementing the latest machine learning algorithms.

8 comments

  1. Ultimately, what is the benefit of more human-like computers? Isn’t that displacing jobs, just as machines have already replaced blue-collar workers?

    What makes interacting with a computer rather than a human so much better? I mean, don’t get me wrong; I use the ATM exclusively, and the only time I go into a bank is to exchange foreign currency. But is the benefit of efficiency worth the lack of real human interaction? Will we end up as supremely lonely people surrounded by our machine “friends” and “babies”? It sounds ludicrous, but isn’t this the path that will lead to a future like the Matrix or the Terminator movies?

    Curious to hear Bill’s (and John’s) thoughts.

  2. Bill,

    I accept that in some situations people will share private thoughts aloud with computers or pets or houseplants that they wouldn’t share with an actual therapist. But I am skeptical that virtual therapists will “enable intimacy” in a deep way. The same claims were made about Eliza back in the 70s but proved to be mostly hype.

    Like Joyce, my first thought was that I would soon be facetiming with the Geico lizard for customer support or ordering fast food from a virtual (but empathetic) Ronald McDonald or watching Max Headroom deliver the evening news.

    Assuming that this could actually work (and that people would not be freaked out by uncanny clowns), I do wonder about the effect on jobs. Here is a thoughtful link on this subject:

    http://kottke.org/14/08/humans-need-not-apply

    For me this is about finding the right balance. I don’t think technology necessarily has to lead to alienation or massive unemployment. FaceTime allows my mother to see her granddaughter whenever she wants. But as the singularity approaches I think we have to ask not just what we *can* do but what we *should* do. Will this make our lives better?

    This is why user experience becomes more important with each passing year.

    John

  3. Thanks for your comments!

    Certainly no one knows what the future holds. But there are a couple of points I’d like to throw out there to think about.

    First, a virtual animated agent – human or otherwise – is only one manifestation of machine intelligence. In fact, I think we initially will be much more likely to interact with machine intelligence through ‘smart’ but simple devices that are part of the Internet of Things – appliances that can make intelligent, context-based decisions without human intervention.

    At the same time, I think our propensity for anthropomorphism combined with more natural, non-intrusive interfaces will enable interactions that are more ‘human’ – in some cases incarnated as an animated entity.

    And while I also agree that AI has historically been overhyped, the landscape is fundamentally different now than when Eliza was created. Specifically, the massive amount of data available combined with orders of magnitude more processing power has qualitatively changed what is possible. While significant challenges remain, there have also been significant breakthroughs, such as IBM’s success with using cognitive computing to win at Jeopardy against human champions.

    Of course, the question of whether and to what extent machine intelligence should be deployed is a different one from whether it can be deployed. Personally, I feel that cognitive computing will ultimately and fundamentally transform our lives for the better. But it will also force us to wrestle with core philosophical and ethical questions, such as what it means to be human.

  4. I thought about it some more on my drive to work, and I came up with a situation in which I think the “computer baby” could actually be helpful. Warning: this use case is extremely sad. When parents lose a child to SIDS or another tragic accident, the incidence of divorce is extremely high. Everyone grieves differently, and it’s so hard to support each other when both spouses are in intense pain. But maybe having a machine baby, as weird as that may sound, would help them focus not solely on their loss but on caring for another being, even if it’s a machine. I know this sounds really weird, but I wonder if something like that would work. Obviously the machine baby cannot replace the tragic loss of the real baby, but maybe the parents don’t want a replacement; they can cherish the memories of their real baby while the machine baby requires them to “parent” as a team and care for something rather than dwelling on their own pain.

    Oookay, now the whole internet knows about the weird stuff I think about during my commute….signing off for now!

  5. Oxford philosophy professor Nick Bostrom has made a strong argument outlining the risks of superhuman artificial intelligence to the very existence of mankind in his book, Superintelligence: Paths, Dangers, Strategies.

    It made me think that, given the likelihood of life on other worlds à la the Drake equation, human civilization may be much more likely to encounter an alien artificial intelligence than a biological one. Sure enough, SETI astronomer Seth Shostak argues just that point (http://www.bbc.co.uk/news/science-environment-11041449).

    Sobering to contemplate.

  6. For me, scifi is pretty much as good as your “facts” and “well-researched articles from legitimate news sources”. I read the book Robopocalypse and that’s pretty much what I think of when it comes to AI and machine learning, at least when I’m feeling pessimistic.
