Automatic: Nice, but Not Necessary

February 20th, 2015 1 Comment

Editor’s note: Here’s the first post from one of our newish team members, Ben. Ben is a usability engineer with a PhD in Cognitive Psychology, and by his own account, he’s also a below average driver. Those two factoids are not necessarily related; I just don’t know what his likes and dislikes are so I’m spit-balling.

Ben applied his research chops to himself and his driving using Automatic (@automatic), a doodad that measures your driving and claims to make you a better driver. So, right up his alley.

Aside from the pure research, I’m interested in this doodad as yet another data collector for the quantified self. As we generate mounds of data through sensors, we should be able to generate personal annual reports, a la Nicholas Felton, that have recommended actions and tangible benefits.

Better living through math.

Anyway, enjoy Ben’s review.

When I first heard about Automatic (@automatic), I was quite excited—some cool new technology that could help me become a better driver. The truth is, I’m actually not a big fan of driving. That’s partly because I know I’m not as good a driver as I could be, so Automatic was a glimmer of hope that might lead me on the way to improving my skills.

Though I will eagerly adopt automated cars once they’re out and safe, the next best thing is to get better so I no longer mildly dread driving, especially when I’m conveying others. And one issue with trying to improve is knowing what and when you’re doing something wrong, so with that in mind (and for enterprise research purposes), I tried out Automatic.

Automatic is an app for your phone plus a gadget (called the Link) that plugs into your car’s diagnostics port; together they give you feedback on your driving and provide various ways to look at your trip data.

Automatic Link

The diagnostics port the Link plugs into is the same one that your mechanic uses to see what might be wrong when your check engine light is ominously glaring on your dashboard. Most cars after 1996 have these, but not all data is available for all cars. Mine is a 2004 Honda Civic, which doesn’t put out gas tank level data, meaning that MPG calculations may not be as accurate as they could be. But it still calculates MPG, and it seems to be reasonably accurate. I don’t, however, get the benefit of “time to fuel up” notifications, though I do wonder how much of a difference those notifications make.

The Link has its own accelerometer, so combined with the data from the port and paired with your phone via Bluetooth, it can tell you about your acceleration, distance driven, speed, and location. It can also tell you what your “Check Engine” light means, and send out messages in the event of a crash.

It gives three points of driving feedback: if you accelerate too quickly, brake too hard, or go over 70 mph. Each driving sin is relayed to you with its own characteristic tones emitted from the Link. It’s a delightful PC speaker, taking you way back to the halcyon DOS days (for those of you who were actually alive at the time). It also lets you know when it links up with your phone, and when it doesn’t successfully connect it outputs a sound much like you just did something regrettable in a mid-’80s Nintendo game.

App screenshot

One of the main motivators for the driving feedback is to save gas—though you can change the top speed alert if you’d like. From their calculations, Automatic says 70 mph is about as fast as you want to go, given the gas-spent/time-it-will-take-to-get-there tradeoff.

Automatic web dashboard

Another cool feature is that it integrates with IFTTT (@ifttt), so you can set it up to do things like: when you get home, turn the lights on (if you have smart lights); or when you leave work, send a text to your spouse; or any number of other things—useful or not!

Is It Worth It?

The big question is, is it worth $99? It’s got a great interface, a sleek little device, and a good number of features, but for me, it hasn’t been that valuable (yet). For those whose check engine light does come on, it could conceivably save a lot of money by preventing unnecessary service on your car. Fortunately, my Civic has never shown me the light (knock on wood), though I’ll probably be glad I have something like Automatic when it does.

I had high hopes for the driver feedback, until I saw that it’s actually pretty limited. For the most part, the quick acceleration and braking are things I already avoided, and when it told me I did them, I usually had already realized it. (Or it was a situation out of my control that called for it.) A few times it beeped at me for accelerating where it didn’t feel all that fast, but perhaps it was.

I was hoping the feedback would be more nuanced and could allow me to improve further. The alerts would be great for new drivers, but don’t offer a whole lot of value to more experienced drivers—even those of us who consider ourselves below average in driving skill (putting me in an elite group of 7% of Americans).

The Enterprise Angle

Whether it’s Automatic, or what looks like it might be a more promising platform, Mojio (@getmojio), there are a few potentially compelling business reasons to check out car data-port devices.

One of the more obvious ones is to track mileage for work purposes—it gives you nice readouts of all your trips, and allows you to easily keep records. But that’s just making it a little easier for an employee to do their expense reports.

The most intriguing possibility (for me) is for businesses that manage fleets of regularly driven vehicles. An Automatic-like device could conceivably track the efficiency of cars/trucks and drivers, and let a business know if a driver needs better training, or if a vehicle is underperforming or might have some other issues. This could be done through real-time fuel efficiency, or tracking driving behavior, like what Automatic already does: hard braking and rapid acceleration.
If a truck seems to be getting significantly less mpg than it should, they can see if it needs maintenance or if the driver is driving too aggressively. Though trucks probably get regular maintenance, this kind of data may allow for preventive care that could translate to savings.

This kind of tracking could also be interesting for driver training, examining the most efficient or effective drivers and adopting an “Identify, Codify, Modify” approach.

Overall

I’d say this technology has some interesting possibilities, but may not be all that useful yet for most people. It’s fun to have a bunch of data, and to get some gentle reminders on driving practices, but the driver improvement angle from Automatic hasn’t left me feeling like I’m a better driver. It really seems that this kind of technology (though not necessarily Automatic, per se) lends itself more to fleet management, improving things at a larger scale.

Stay tuned for a review of Mojio, which is similar to Automatic, but features a cellular connection and a development platform, and hence more possibilities.

Fun with an Android Wear Watch

February 3rd, 2015 2 Comments

A couple days ago, I was preparing to show some development work Luis (@lsgaleana) did for Android Wear using the Samsung Gear Live.

One of the interesting problems we’ve encountered lately is projecting our device work onto larger screens to show to an audience. I know, bit of a first world problem, which is why I said “interesting.”

At OpenWorld last year, I used an IPEVO camera to project two watches, the Gear Live and the Pebble, using a combination of felt jewelry displays. That worked OK, but the contrast differences between the watches made it a bit tough to see them equally well through the camera.

Plus, any slight movement of the table, and the image shook badly. Not ideal.

Lately, we haven’t been showing the Pebble much, which actually makes the whole process much easier because . . . it’s all Android. An Android Wear watch is just another Android device, so you can project its image to your screen using tools like Android Screen Monitor (ASM) or Android Projector.

Of course, as with any other Android device, you’ll have to put the watch into debugging mode first. If you’re developing for Android Wear, you already know all this, and for the rest of us, the Android Police have a comprehensive how-to hacking guide.

For my purposes, all I needed to do was get adb to recognize the watch; the steps are in the Android Police guide mentioned above (h/t Android Police).

Now, when I need to show a tablet app driving the Wear watch, I can use adb and ASM to show both screens on my Mac, which I can then project. Like so.

Bonus points, the iPod Touch in that screen is projected using a new QuickTime feature in OS X Yosemite that works with iOS 8 devices.

Stories Are the Best, Plus News on Nest!

January 28th, 2015 2 Comments

Friend of the ‘Lab, Kathy, has been using Storify for a while now to compile easy-to-consume, erm, stories about the exploits of Oracle Applications User Experience (@usableapps).

You might remember Storify from past stories such as the In the U.K.: Special events and Apps 14 with UKOUG and Our OpenWorld 2014 Journey.

Anyway, Kathy has a new story, The Internet of Things and the Oracle user experience, which just so happens to feature some of our content. If you read here with any regularity or know Noel (@noelportugal), you’ll know we love our internet-connected things.

So, check out Kathy’s story to get the bigger picture, and hey, why not read all the stories on the Usableapps Storify page.

And bonus content on IoT!

Google keeps making the Nest smarter and marginally, depending on your perspective, more useful. In December, a Google Now integration rolled out, pairing a couple of my favorite products.

It’s more gimmick than useful feature, at least for me; I ran into issues with the NLP on commands, as you can see:

Saying “set the temperature to 70 degrees” frequently results in an interpretation of 270 degrees. It works fine if you don’t say “to” there. As Google Now becomes a more effective assistant, this integration will become more useful, I’ve no doubt.

Then, at CES, Nest announced partnerships that form a loose alliance of household appliances. It may take a big player like Nest (ahem, Google) to standardize the home IoT ecosystem.

And just this week, Misfit announced a partnership with Nest to allow their fitness tracker, the one I used to wear, to control the Nest. I’m tempted to give the Shine another go, but I’m worried about falling back into a streak-spiral.

Thoughts on IoT? Nest? Ad-supported world domination? You know what to do.

BusinessTown

January 23rd, 2015 2 Comments

Maybe you remember Busytown, Richard Scarry’s famous town, from your childhood or from reading it to your kids.

Tony Ruth has created the Silicon Valley equivalent, BusinessTown, (h/t The Verge) populated by the archetypes we all know and sometimes love. What do the inhabitants of BusinessTown do? “What Value-Creating Winners Do All Day,” natch.

Who’s up for a Silicon Valley marathon?

Mash up Oracle Cloud Application Web Services with Web APIs and HTML5 APIs

January 22nd, 2015 Leave a Comment

No more an “honorary” but now a full-blown member of the AppsLab team, I gave a presentation at the Chicago & Dubai Oracle Usability Advisory Board in November on REST and Web APIs and how they can facilitate the transition from on-premise software to cloud-based solutions (the content of which can be fodder for a future post).

As we all are transitioning from on-premise implementations to cloud-based solutions, there seems to be a growing fear among customers and partners (ISV, OEM) alike that they will lose the capability to extend these cloud-based applications.  After all, they do not have access to the server anymore to deploy and run their own reports/forms/scripts.

I knocked up a very simple JavaScript client-side application as part of my presentation to prove my point, which was that (well-designed) REST APIs and these JavaScript frameworks make it trivial to create new applications on top of existing backend infrastructure and add functionality that is not present in the original application.

My example application is based on existing Oracle Sales Cloud Web Services.  I added the capability to tweet, send text messages (SMS) and make phone calls straight from my application, and speech-enabled the UI.  Although you can debate the usefulness of how I am using some of these features, that was obviously not the purpose of this exercise.

Instead, I wanted to show that, with just a few lines of code, you can easily add these extremely complex features to an existing application. When was the last time you wrote a bridge to the Public Switched Telephone Network or a Speech synthesizer that can speak 25 different languages?

Here’s a 40,000 foot view of the architecture:

High level view of Demo APP Architecture

The application itself is written as a Single Page Application (SPA) in plain JavaScript.  It relies heavily on open source JavaScript libraries that are available for free to add functionality like declarative DOM binding and templating (knockout.js), ES6 style Promises (es6-promise.js), AMD loading (require.js) etc.  I didn’t have to do anything to add all this functionality (other than including the libraries).
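
If you’ve never used knockout.js, here is a tiny sketch of what that declarative binding buys you; the view model and property names below are made up for illustration, not the demo’s actual code:

    <table>
      <tbody data-bind="foreach: opportunities">
        <tr>
          <td data-bind="text: name"></td>
          <td data-bind="text: status"></td>
        </tr>
      </tbody>
    </table>

    <script src="knockout.js"></script>
    <script>
      // The view model drives the table: changing the observable array
      // re-renders the rows, with no manual DOM manipulation.
      var viewModel = {
        opportunities: ko.observableArray([
          { name: 'Pinnacle Technologies', status: 'Open' },
          { name: 'Acme Corp', status: 'Won' }
        ])
      };
      ko.applyBindings(viewModel);
    </script>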

It makes use of the HTML5 Speech Synthesis API, which is now available in most modern browsers to add Text-to-Speech functionality to my application.  I didn’t have to do anything to add all this functionality.

I also used the Twitter APIs to be able to send tweets from my application and the Twilio APIs to be able to make phone calls and send SMS text messages from my application.  I didn’t have to do anything to add all this functionality.  Can you see a theme emerging here?

Finally I used the Oracle Sales Cloud Web Services to display all the Business Objects I wanted to be present in my application: Opportunities, Interactions and Customers.  As with the other pieces of functionality, I didn’t have to do anything to add this functionality!

You basically get access to all the functionality of your CRM system through these web services where available, i.e. not every piece of functionality is exposed through web services.


Note that I am not accessing the Web Services directly from my JS but go through a proxy server in order to adhere to the browser’s same-origin policy restrictions.  The proxy also decorates the Oracle Applications SOAP Services as REST end-points.  If you are interested in how to do this, you can have a look at mine; it’s freely available.
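
To make that pattern concrete, here is a rough sketch of such a proxy in Node.js, using Express and the request module.  The host, endpoint path and SOAP envelope are placeholders rather than the actual Sales Cloud service definitions, and a real proxy would also translate the SOAP response into clean JSON:

    // proxy.js - a sketch of the proxy pattern, not the actual proxy.
    // It exposes a REST-style endpoint to the browser and forwards the
    // request to a (placeholder) SOAP service, keeping the SPA same-origin.
    var express = require('express');
    var request = require('request');  // npm install express request
    var app = express();

    var SOAP_ENDPOINT = 'https://my-sales-cloud-host/crmService/OpportunityService'; // placeholder

    app.get('/api/opportunities', function (req, res) {
      // Placeholder envelope; the real service defines its own operations.
      var envelope =
        '<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">' +
          '<soap:Body><findOpportunities/></soap:Body>' +
        '</soap:Envelope>';

      request.post({
        url: SOAP_ENDPOINT,
        body: envelope,
        headers: { 'Content-Type': 'text/xml' },
        auth: { user: process.env.CRM_USER, pass: process.env.CRM_PASS }
      }, function (err, response, body) {
        if (err) { return res.status(502).json({ error: String(err) }); }
        // A real proxy would parse the SOAP XML and return clean JSON here.
        res.json({ raw: body });
      });
    });

    // Serving the SPA from the same server keeps everything same-origin.
    app.use(express.static('public'));
    app.listen(3000);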


For looks I am using some CSS that makes the application look like a regular ADF application.  Of course you don’t have to do this; you can, e.g., use Bootstrap if you prefer.  The point is that you can make this application look however you want.  As I am trying to present this as an extension to an Oracle Cloud Application, I would like it to look like any other Oracle Cloud Application.

With all these pieces in place, it is now relatively easy to create a new application that makes use of all this functionality.  I created a single index.html page that bootstraps the JS application on first load.  Depending on the menu item that is clicked, a list of Customers, Opportunities or Interactions is requested from Oracle Sales Cloud, and on return, those are laid out in a simple table.
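
Stripped down, that flow is little more than a promise-wrapped XHR against the proxy, with the result handed to knockout.  The endpoint below is a placeholder, and viewModel is the kind of object sketched earlier:

    // Fetch a list from the REST proxy and hand it to knockout; a table
    // bound with foreach re-renders itself automatically.
    function getJSON(url) {
      return new Promise(function (resolve, reject) {
        var xhr = new XMLHttpRequest();
        xhr.open('GET', url);
        xhr.onload = function () {
          if (xhr.status === 200) {
            resolve(JSON.parse(xhr.responseText));
          } else {
            reject(new Error('HTTP ' + xhr.status));
          }
        };
        xhr.onerror = function () { reject(new Error('Network error')); };
        xhr.send();
      });
    }

    getJSON('/api/opportunities').then(function (items) {
      viewModel.opportunities(items);
    }).catch(function (err) {
      console.error('Could not load opportunities', err);
    });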

For demonstration purposes, I provided switches to enable or disable each feature.  Whenever a feature is enabled and the user clicks on something in the table, I trigger the phone call, SMS sending, speech or tweet, whichever is enabled.  For example, here is the code to do Text-to-Speech using the HTML5 Speech Synthesis API, currently available in WebKit browsers, so use Safari or Chrome (mobile or desktop).  And yes, I do have feature detection in the original code; I just left it out here to keep the code simple:
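
In sketch form, it boils down to a few lines like these (the spoken text is just an example):

    // Text-to-Speech with the HTML5 Speech Synthesis API; feature detection
    // omitted, as noted above.
    function speak(text, lang) {
      var utterance = new SpeechSynthesisUtterance(text);
      utterance.lang = lang || 'en-US';  // the API supports many languages
      window.speechSynthesis.speak(utterance);
    }

    // e.g. read out the row the user just clicked
    speak('Opportunity Pinnacle Technologies, status Open');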

Ditto for the SMS sending using the Twilio API:
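
In sketch form again, the browser just posts to the proxy, which holds the Twilio credentials and calls Twilio’s Messages API (POST /2010-04-01/Accounts/{AccountSid}/Messages.json with To, From and Body).  The /api/sms endpoint is a placeholder:

    // Send an SMS via the proxy; the Twilio call happens server-side.
    function sendSms(to, body) {
      var xhr = new XMLHttpRequest();
      xhr.open('POST', '/api/sms');
      xhr.setRequestHeader('Content-Type', 'application/json');
      xhr.send(JSON.stringify({ to: to, body: body }));
    }

    sendSms('+15551234567', 'Following up on the Pinnacle Technologies opportunity.');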

And calling somebody, using the Phone Call API from Twilio, using the same user and twilio object from above:
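
The click-to-call piece follows the same shape; server-side, Twilio’s Calls resource (POST .../Calls.json with To, From and a TwiML URL) does the dialing.  The user and twilio objects from the original code aren’t reproduced here, so a placeholder proxy endpoint stands in for them:

    // Start an outbound call via the proxy; again, Twilio is invoked server-side.
    function callContact(to) {
      var xhr = new XMLHttpRequest();
      xhr.open('POST', '/api/call');
      xhr.setRequestHeader('Content-Type', 'application/json');
      xhr.send(JSON.stringify({ to: to }));
    }

    callContact('+15551234567');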

The tweeting is done by adding the tweet button to the HTML, dynamically filling in the tweet’s content with some text from the Opportunity or Interaction.
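
Something along these lines, using Twitter’s widgets.js; the link and tweet text are just examples:

    <div id="tweet-container"></div>
    <script src="https://platform.twitter.com/widgets.js"></script>
    <script>
      // Once widgets.js is ready, create a Tweet button on the fly, with the
      // tweet text filled in from the selected Opportunity or Interaction.
      twttr.ready(function (twttr) {
        twttr.widgets.createShareButton(
          'https://example.com/opportunities/42',  // placeholder link
          document.getElementById('tweet-container'),
          { text: 'Opportunity Pinnacle Technologies just moved to Won!' }
        );
      });
    </script>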

Here is a screencast of the application in action:

As I mentioned earlier, how I am using the APIs might not be particularly useful, but the point is to show how easy it is to integrate this functionality with Oracle Cloud Applications and extend them beyond what is delivered out of the box.  It probably makes more sense to use Twilio to actually call or text a contact attached to the opportunity or interaction, rather than me.  Or to tweet when an opportunity moves to a “win” status.  The possibilities are endless, but I leave that up to you.

Happy Coding!

Mark.

Dowsing for Smarties

January 21st, 2015 Leave a Comment

Editor’s note: John and Noel (@noelportugal) need to chat about Google’s Physical Web gBeacons.

I have been a tad skeptical about the usefulness of smart watches, but my colleague Julia Blyumen has changed my thinking.

Woodblock of a dowser

In her recent blog post, Julia noted that a smart watch could become both a detector and a universal remote control for all IoT “smart things”. She backed this up with a link to an excellent academic paper (pdf), “User Interfaces for Smart Things: A Generative Approach with Semantic Interaction Descriptions.”

I strongly encourage anyone interested in the Internet of Things to read this paper. In it the authors lay the foundations for a general purpose way of interacting with “smart things”, interactive components that can sense and report on current conditions (counters, thermometers), or respond to commands (light switches, volume knobs).

These smarties (as I like to call them) will have much to tell us and will be eager to accept our commands. But how will we interact with them? Will they adapt to us or must we adapt to them? How will we even find them?

The authors propose a brilliant solution: let each smartie emit a description of what it can show or do. Once we have that description, we can devise whatever graphical user interface (or voice command or gesture) we want. And we could display that interface anywhere: on a webpage or a smartphone – or a watch!

Another one of my AppsLab colleagues, Raymond Xie, immediately saw a logical division of labor: use a phone or tablet for complex interactions, use a watch for simple monitoring and short command bursts.

Another way a watch could work in concert with a phone would be as a “smartie detector.”  It will be a long time (if ever) before every thing is smart.  Until then it will often not be obvious whether the nearby refrigerator, copy machine, projector, or lamp is controllable.

Watches could fill this gap nicely.  Every time your watch comes within a few feet of a smartie it could vibrate or display an icon or show the object’s state or whatever.  You could then just glance at your wrist to see if the object is smart instead of pulling out your phone and using it as a dowsing rod.

One way of implementing this would be for objects or fixed locations (room doors, cubicles, etc.) to simply emit a short-range bluetooth ID beacon.  The watch or its paired phone could constantly scan for such signals (as long as its battery holds out).  If one was detected it would use local wifi to query for the ID and pull up an associated web page.  Embedded code in the web page would provide enough information to display a simple readout or controller. The watch could either display it automatically or just show an indicator to let the user know she could tap or speak for quick interactions or pull out her phone to play with a complete web interface.

An example I would find useful would be meeting room scheduling.  I often arrive at a meeting room to find someone else is already using it.  It would be nice to wave my watch at the door and have it confirm who had reserved the room or when it would next be free. Ideally, I could reserve it myself just by tapping my watch. If I realized that I was in the wrong place or needed to find another room, I could then pull out my phone or tablet with a meeting room search-and-reserve interface already up and running.

But that’s just the beginning.

One of the possibilities that excites me the most about this idea is the ability to override all the confusing and aggravating UIs that currently assault me from every direction and replace them with my own UIs, customized to my tastes.  So whenever I am confronted with a mysterious copy machine or the ridiculously complicated internet phone we use at work, or a pile of TV remote controls with 80 buttons apiece, or a BART ticket machine with poorly marked slots and multiple OK buttons, or a rental car with diabolically hidden wiper controls, I could pull out my phone (or maybe even just glance at my watch) to see a more sane and sensible UI.

Designers could perfect and sell these replacement UIs, thus freeing users from the tyranny of having to rely on whatever built-in UI is provided.  This would democratize the user experience in a revolutionary way.  It would also be a boon for accessibility. Blind users or old people or children or the wheelchair-bound could replace any UI they encounter in the wild with one specially adapted for them.

Virtual interfaces could also end the tedium of waiting in lines. Lines tend to form in parking garages and conference registration because only one person can use a kiosk at a time. But if you could tap into a kiosk from your smart watch, dozens of people could complete their transactions at the same time.

Things get even more interesting if people start wearing their own beacons.  You could then use your watch to quickly capture contact information or create reminders; during a hallway conversation, a single tap could “set up meeting with Jake.” Even automatically displaying the full name of the person next to you would be helpful to those of us who sometimes have trouble remembering names.

If this capability was ubiquitous and the range was a bit wider you could see and interact with a whole roomful of people or even make friends during a plane ride. Even a watch could display avatars for nearby people and let you bring any one into focus. You could then take a quick action from the watch or pass the selected avatar to your phone/tablet/laptop to initiate something more complex like transferring a file.

Of course this could get creepy pretty fast.  People should have some control over the information they are willing to share and the kind of interactions they wish to permit. It’s an interesting design question: “What interaction UIs should a person emit?”

We are still at the dawn of the Internet of Things, of course, so it will be a while before all of this comes to pass. But after reading this paper I now look at the things (and people) around me with new eyes. What kind of interfaces could they emit? Suddenly the idea of using a watch to dowse for smarties seems pretty cool.

Dear Julia: SmartWatch Habits and Preferences

January 13th, 2015 Leave a Comment

Julia’s recent post about her experiences with the Samsung Gear watches triggered a lively conversation here at the AppsLab. I’m going to share my response here and sprinkle in some of Julia’s replies.  I’ll also make a separate post about the interesting paper she referenced.

Dear Julia,

You embraced the idea of the smart watch as a fully functional replacement for the smart phone (nicely captured by your Fred Flintstone image). I am on the other end of the spectrum. I like my Pebble precisely because it is so simple and limited.

I wonder if gender-typical fashion and habit is a partial factor here. One reason I prefer my phone to my watch is that I always keep my phone in my hip pocket and can reliably pull it out in less than two seconds. My attitude might change if I had to fish around for it in a purse which may or may not be close at hand.

Julia’s response:

I don’t do much on the watch either. I use it on the go to:

  • read and send SMS
  • make and receive a call
  • read email headlines
  • receive alerts when meetings start
  • take small notes

and with Gear Live:

  • get driving directions
  • ask for factoids

I have two modes to my typical day. One is when I am moving around with hands busy. Second is when I have 5+ minutes of still time with my hands free. In the first mode I would prefer to use a watch instead of a phone. In the second mode I would prefer to use a tablet or a desktop instead of a phone. I understand that some people find it useful to have just one device – the phone – for both modes. From Raymond’s description of Gear S, it sounds like reading on a watch is also okay.

Another possible differentiator, correlated with gender, is finger size. For delicate tasks I sometimes ask my wife for help. Her small, nimble fingers can do some things more easily than my big man paws. Thus I am wary of depending too heavily on interactions with the small screen of a watch. Pinch-zooming a map is delightful on a phone but almost impossible on a watch. Even pushing a virtual button is awkward because my finger obscures almost the entire surface of the watch. I am comfortable swiping the surface of the watch, and tapping one or two button targets on it, but not much more. For this reason I actually prefer the analog side buttons of the Pebble.

Julia’s response:

Gear has a very usable interface. It is controlled by tap, swipe, a single analog button, and voice. Pinch-zoom of images was enabled on the old Gear, but there were no interactions that depended on pinch-zoom.

How comfortable are you talking to your watch in public? I have become a big fan of dictation, and do ask Siri questions from time to time, but generally only when I am alone (in my car, on a walk, or after everyone else has gone to bed). I am a bit self-conscious about talking to gadgets in public spaces. When other people do it near me I sometimes wonder if they are talking to me or are crazy, which is distracting or alarming, so I don’t want to commit the same offense.

I can still remember watching Noel talking to his Google Glass at a meeting we were in. He stood in a corner of the room, facing the wall, so that other people wouldn’t be distracted or think he was talking to them. An interesting adaption to this problem, but I’m not sure I want a world in which people are literally driven into corners.

Julia’s Response:

I am not at all comfortable talking to my watch. We should teach lipreading to our devices (wouldn’t that be a good Kickstarter project?). But I would speak to the watch out of safety or convenience. Speaking to a watch is not as bad as speaking to glasses. I am holding the watch to my mouth, looking at it, and, in the case of the Gear Live, first saying “Okay, Google.” I don’t think many people think I am talking to them. I must say most look at me with curiosity and, yes, admiration.

What acrobatics did you have to go through to use your watch as a camera? Did you take it off your wrist? Or were you able to simultaneously point your watch at your subject while watching the image on the watch? Did tapping the watch to take the photo jiggle the camera? Using the watch to take pictures of wine bottles and books and what-not is a compelling use case but often means that you have to use your non-watch hand to hold the object. If you ever expand your evaluation, I would love it if you could have someone else video you (with their smart watch?) as you take photos of wine bottles and children with your watch.

Julia’s Response:

No acrobatics at all. The camera was positioned in the right place. As a piece of industrial design it looked awful. My husband called it the “carbuncle” (I suspect it might be the true reason for the camera’s disappearance in the Gear Live). But it worked great. See my reflection in the mirror as I was taking the picture below? No acrobatics. The screen of the watch worked well as a viewfinder. I didn’t have to hold these “objects” in my hands. Tapping didn’t jiggle the screen.

Thanks again for a thought-provoking post, Julia.  I am also not sure how typical I am. But clearly there is a spectrum of how much smart watch interaction people are comfortable with.

John

An Interaction Designer’s Perspective: Samsung Gear vs. Samsung Gear Live

January 12th, 2015 Leave a Comment

Editor’s note: In January of 2014, our team held a wearables summit of sorts, test-driving five popular watches, fitness bands and head-mounted displays to collect experiential evidence of each form factor, initial experience, device software and ecosystem and development capabilities.

Julia drew the original Samsung Galaxy Gear smartwatch, and she’s been using it ever since. A few months ago, she began using the new Android Wear hotness, the Samsung Gear Live, which several of us have.

What follows are Julia’s impressions and opinions of the two watches. Enjoy.

Original Galaxy Gear versus Gear Live

When I had to keep track of time, I used to wear my Skagen watch, and I loved my little Skagen. Last year it ran out of battery. Coincidentally, that happened just when Thao (@thaobnguyen) ordered the then just-released Samsung Galaxy Gear for me to “test.”

Life is busy, and it took me some ten months to get a new battery for my Skagen.

In the meantime, I wore Gear. When I got my Skagen back, I had a “Lucy of Prince Caspian” moment. I felt my watch was bewitched – I couldn’t talk to it (I tried), and it couldn’t talk back to me. Mute and dumb. That’s how I realized I am hooked on smart watches.

Back to Narnia, Lucy Pevensie tries to wake up a lethargic tree that forgot how to speak. Skagen watch doesn’t speak to me either.

This is just a preface; the write-up is about the original Gear versus the Gear Live, which I’ve been testing for a few months. In a nutshell, I have mixed feelings about the Gear Live. Though there are some improvements over the original watch, I find many setbacks.

Typography

Left, original Gear, right, Gear Live, note the minimalistic typography of original Gear versus decorative typography of Android Wear.

The original Samsung Galaxy Gear featured clean, bold typography. I could read a notification at a glance, even when driving. In the Gear Live, the minimalistic typography of the Samsung Gear was replaced by the smaller fonts and decorative backgrounds of Android Wear. Not only are those decorations useless, they make the watch unusable in the situations where it could’ve been most helpful. (And yes, I understand Samsung had to showcase the impressive display.)

Speaker

Left, original Gear, right Gear Live, 
I can take a call AND talk on the original Gear. With Gear Live I can take the call, but then, unless I am connected to car speakers, I need to pick up the phone to talk.

Getting calls on the Gear in awkward situations was my main use of it. As clunky as the placement of the speaker and mic was on the original Gear, I was still able to take calls safely while driving, or while walking with my hands full. The Gear Live has no speaker. It can initiate a call hands-free, but what is the use if I still need to get to my phone to speak?

Camera

Left, original Gear, right, Gear Live, which has no camera.

Location, voice-to-text, AND image-to-text are the three most logical input methods for a watch. I got very used to taking image notes with the original Gear. Did you know that Evernote can search for text in images? For me, the flagship demo application for the original Gear was Vivino. With Vivino, one can take a picture of a wine label at a store with the watch camera, and get the rating/pricing back on the watch. This application was a great demonstration of smart watch retail potential. The Gear Live has no camera, eliminating all such use cases.

Vivino application on original Gear, no longer supported.
Point a watch camera to a label, take a picture and submit to Vivino server, receive wine rating on the watch.

Google Speech Recognition

Google Speech Recognition is superbly usable technology, way beyond S-Voice or Siri. Big Data in real action! Voice Search, Voice Commands, and dictation work fabulously. The only issue I found is with recognizing email contacts from speech.

Smart Watch

Google Voice Search makes the smart watch smart. It brings the knowledge base of the world – the Internet – to the tip of your tongue, and it is MAGIC!

Google Now

I must confess I am annoyed by Google Now cards. I know it tries really hard, but the recommendations are wrong about 50% of the time. The other 49% they are irrelevant. Given that, I feel that Now should stick to the back rows. Instead, it puts itself on center stage. Lesson learned – for a smart watch, the precision/recall balance needs to be skewed heavily toward precision.

Google Now on Gear Live Ah? I am at home, silly!

Conclusions

These opinions are my own. At least half of my day is spent on the go – driving kids around, in classrooms or lessons, and doing family errands. I rarely have idle hands or idle time.

You be the judge of whether I am an atypical user. In addition, I do not subscribe to the school of thought that a smart watch is a phone satellite and a fetish. I believe it can be a useful gadget way beyond that.

Yes, it is a given that no one will use the watch to write or read a novel, or even a long email. Apart from that, I don’t see why a good smart watch cannot do everything a person on the go needs to do, replacing the phone and giving us back our other hand.

Therefore, I feel that a good smart watch should aspire to:

If that is your typical day, then this is your gadget.

Last Thought: Smart Watch and IoT

Last but not least, I believe that a smart watch naturally lends itself to becoming a universal remote control for all IoT “smart things” – it can be your ID, it can sense “smart things,” it can output small chunks of information as voice or text, and it can take commands. As you walk next to (your) refrigerator, it can remind you via your watch to buy more milk, and you can adjust the refrigerator’s temperature from the watch. This assumes that a “smart thing” can beam a description of all the knobs and buttons you need to control it.

I am surprised there is not much written on that, but here is a very good paper (pdf): “User Interfaces for Smart Things: A Generative Approach with Semantic Interaction Descriptions,” by Simon Mayer, Andreas Tschofen, Anind K. Dey, and Friedemann Mattern; Institute for Pervasive Computing, ETH Zurich, and HCI Institute, Carnegie Mellon University; April 4, 2014.

2015 AT&T Developer Summit & Hackathon

January 9th, 2015 2 Comments

Editor’s Note: Noel did it! After competing in 2013 and 2014, he broke through and won a prize at the annual AT&T Developer Summit Hackathon (@attdeveloper). Congrats to the whole team.

The whole team, minus Anthony, who was too sick to enjoy the moment.

 

This year, Anthony (@anthonyslai), Raymond, Osvaldo (@vaini11a), Luis (@lsgaleana), Tony and I (@noelportugal) participated in the AT&T Developer Summit & Hackathon.

From the beginning we realized we had too much brain power for just one project, so we decided to split the group. The first group would attempt to go for the first overall prize, and the second group would focus on just one accelerator prize from a sponsor.

“Your Voice”  – First Overall Prize Entry:

We knew we only had three minutes to present, and we had to leave an impression on the judges. So, we opted to build our solution around a specific use case with only one persona in mind. The use case was to use our voice to control AT&T Digital Life, AT&T WebRTC and the AT&T Speech APIs. The persona was an older gentleman going about his daily life around the house. We opted to use our latest toy, the Amazon Echo, as the way to interface with AT&T services. We went to work, found a couple of limitations, but in the end we overcame them and felt pretty confident in our solution.

Here is our use case:

Tony is an 89-year-old man who lives alone. He is pretty self-sufficient, but his daughter (Cindy) worries about his well-being. So she bought AT&T Digital Life to make sure her dad was safe and sound. Tony doesn’t want to be bothered to learn all the new mumbo-jumbo that comes with new technology, like a mobile app, a fancy remote, etc. Instead he prefers to use “Your Voice” to make sure all the doors are locked, the garage door is closed, the lights are on or off, etc. “Your Voice” also works as a personal assistant that can take care of reminding Tony of important things, reading email, initiating video calls (WebRTC), etc.

So that’s it! We pre-programmed sequences to identify actions. When Tony said “Alexa, I’m tired. I’m going to bed,” the system started a series of actions, not just one. When Tony said “Alexa, call my grandson,” the system automatically started the projector and did a video conference.

And finally we created a video introduction for our presentation:

 “Sensus” – Accelerator Entry:

Raymond and Anthony decided to enter the “MediaTek Labs IoT and Wearables Challenge.” MediaTek (@MediaTekLabs) has a very nice multipurpose development board called LinkIt ONE that includes an array of connectivity options (BLE, Wifi, GSM, GPRS, etc.), plus access to a lot of plug-and-play sensors.

They built a sensor station to monitor environmental safety metrics (temperature, fire hazard) and environmental health metrics (noise, dust, UV). They used Android Wear as the wearable platform to notify users when things happen, using an IFTTT model.

Their solution was end-to-end, using only the MediaTek LinkIt ONE and MediaTek’s cloud platform, which gave them an edge since it was a pure MediaTek solution. Its value became clear when the judges came to talk to them. Our room had A/C issues and constantly overheated, so we had to chase the maintenance guys quite often to get it fixed. Raymond talked to them about the opportunity to solve the issue by giving a wearable device to the head of maintenance so he would know what’s going on in the building by just “glancing.”

“Sensus” got the first prize for the accelerator entry.  And as a team we could not be happier!

Conclusion:

Hackathons or developer challenges are a great way to work as a team, learn new technologies and push the limits of what can be accomplished in such a short time. As a team we have proven to be consistently ahead of the curve with our solutions, e.g. last year we built a Smart Holster for Law Enforcement, and if you have been following CES 2015, there are now companies doing similar implementations.

There is no doubt that voice control will be huge this year. The technology is maturing at a very fast rate and we are bound to see a lot more great implementations.

Finally, winning is not everything at these events. The journey, and what we learned along the way, is what matters. I find it very apt to hold this competition in Las Vegas, since this place is full of chance, probability and, ultimately, pure luck.

Here Are Your First Links of 2015

January 7th, 2015 4 Comments

Our team has been busy since the New Year, competing in the AT&T Developer Summit hackathon, which is Noel’s (@noelportugal) Everest, i.e. he tries to climb it every year, see 2013 and 2014.

If you follow our Twitter (@theappslab) or Facebook page, you might have seen the teaser. If not, here it is:

Image courtesy of AT&T Developer Program’s Facebook page

Look for details later this week.

While you wait for that, enjoy these tidbits from our Oracle Applications User Experience colleagues.

Fit for Work: A Team Experience of Wearable Technology

Wearables are a thing, just look at the CES 2015 coverage, so Misha (@mishavaughan) decided to distribute Fitbits among her team to collect impressions.

Good idea, get everyone to use the same device, collect feedback, although it seems unfair, given Ultan (@ultan) is perhaps the fittest person I know. Luckily, this wasn’t a contest of fitness or of most-wrist-worn-gadgets. Rather, the goal was to gather as much anecdotal experience as possible.

Bonus, there’s a screenshot of the Oracle HCM Cloud Employee Wellness prototype.

¡Viva Mexico!

Fresh off a trip to Jolly Old England, the OAUX team will be in Santa Fe, Mexico in late February. Stay tuned for details.

Speaking of, one of our developers in Oracle’s Mexico Development Center, Sarahi Mireles (@sarahimireles) wrote up her impressions and thoughts on the Shape and ShipIt we held in November, en español.

And finally, OAUX and the Oracle College Hire Program

Oracle has long had programs for new hires right out of college. Fun fact, I went through one myself many eons ago.

Anyway, we in OAUX have been graciously invited to speak to these new hires several times now, and this past October, Noel, several other OAUX luminaries and David (@dhaimes) were on a Morning Joe panel titled “Head in the Clouds,” focused loosely around emerging technologies, trends and the impact on our future lives.

Ackshaey Singh (from left to right), DJ Ursal (@djursal), Misha Vaughan (@mishavaughan), Joe Goldberg, Noel Portugal (@noelportugal), and David Haimes (@dhaimes)

 

Interesting discussion to be sure, and after attending three of these Morning Joe panels now, I’m happy to report that the attendance seems to grow with each iteration, as does the audience interaction.

Good times.

Another Echo Hack from Noel

January 6th, 2015 Leave a Comment

Noel (@noelportugal) spent a lot of time during his holidays geeking out with his latest toy, Amazon Echo. Check out his initial review and his lights hack.

For a guy whose name means Christmas, seems it was a logical leap to use Alexa to control his Christmas tree lights too.

Let’s take a minute to shame Noel for taking portrait video. Good, moving on. Oddly, I found out about this from a Wired UK article about Facebook’s acquisition of Wit.ai, an interesting nugget in its own right.

If you’re interested, check out Noel’s code on GitHub. Amazon is rolling out another batch of Echos to those who signed up back when the device was announced in November.

How do I know this? I just accepted my invitation and bought my very own Echo.

With all the connected home announcements coming out of CES 2015, I’m hoping to connect Alexa to some of the IoT gadgets in my home. Stretch goal for sure, given all the different ecosystems, but maybe this is finally the year that IoT pushes over the adoption hump.

Fingers crossed. The comments you must find.

Chromecast Guest Mode Rules

January 5th, 2015 Leave a Comment

If you read here regularly, you’ll know I’m a huge fan of the Google Chromecast.

It’s helped me cut the cable, I gave it as a Christmas gift two years in a row (to different people), I have several in my home, and I carry one in my laptop bag to stream content on the road.

And if you’ve seen any of us on the road, you may have seen some cool stuff we’ve built for the Chromecast.

Back in June, Google announced a killer feature for the little HDMI gizmo, ultrasonic pairing, which promised to remove the necessity for a device to be connected to the same wifi network as the Chromecast to which it was casting.

That feature, guest mode, rolled out in December for Android devices running 4.3 and higher, and it is as awesome as expected.

It’s very easy to set up and use.

First, you need to enable guest mode for your Chromecast. I tried this initially in the Mac Chromecast app, but alas, it has not yet been updated to include this option, same with iOS. So, you’ll need to use the Android Chromecast mobile app, like so:

Once enabled, the PIN is displayed on the Chromecast’s backdrop, and anyone in the room can cast to it via the PIN or by audio pairing.


When attempting to connect, the Chromecast first tries the audio method; the Chromecast app asks to use the device’s microphone, and Chromecast broadcasts the PIN via audio tone.

Failing that (or if you skip the audio pairing), the user is prompted by the Chromecast app to enter the PIN manually.

Easy stuff, right? In case you’re worried that someone not in the room could commandeer your Chromecast, they can’t, at least according to Google.  Being a skeptic, I tested this myself, and sure enough, the audio method won’t work if there are walls separating the device from the Chromecast. The app fails to pair via audio and asks for the PIN, which you can only get from the TV screen itself.

Not entirely foolproof, but good enough.

So why is this a cool feature? In a word, collaboration. Guest mode allows people to share artifacts and collaborate (remember, Chromecast has a browser) on a big screen without requiring them all to join the same wifi network.

Plus, it’s a modern way to torture your friends and family with your boring vacation pictures and movies.

More and more apps now support Chromecast, making it all the more valuable, e.g. the Disney Movies app, a must-have for me. Bonus for that app, it’s among the first that I know of to bridge the Google and Apple ecosystems, i.e. it consolidates all the Disney movies I’ve bought on iTunes and Google Play into a single app.

Thoughts? Find the comments.

Noel’s Amazon Echo Hack

January 5th, 2015 2 Comments

Noel (@noelportugal) is one of a handful of early adopters to get his hands on the Amazon Echo, Amazon’s in-home personal assistant, and being the curious hacker that he is, of course he used an unpublished API to bend Alexa (that’s the Echo’s personality) to his will.

Video, because it happened:

And look, Noel’s hack got picked up by Hackaday (@hackaday), kudos. You can grab his code on GitHub.

We’re hoping Amazon releases official APIs for the Echo soon, lots of great ideas on deck.

Our Week at UKOUG

December 26th, 2014 2 Comments

Earlier this month, Noel (@noelportugal) and I (@joybot12) represented the AppsLab crew at the UKOUG Apps 14 and Tech 14 conferences in Liverpool.

I conducted customer feedback sessions with users who fit the “c-level executive” user profile, to collect feedback on some of our new interactive data visualizations. Unfortunately, I can’t share any of these design concepts just yet, but I can share a bunch of pics of Noel, who gave several talks over the course of the 3-day conference.

This first photo is a candid taken after Noel’s talk on Monday about “Wearables at Work.”

Photo by Joyce Ohgi

I was thrilled to see so many conference attendees sticking around afterwards to pepper Noel with questions; usually at conferences, people leave promptly to get to their next session, but in this case, they stuck around to chat with Noel (and try on Google Glass for the first time).

Here’s another of Noel taken by Misha Vaughan (@mishavaughan) with his table of goodies.

Photo by Misha Vaughan

The next photo is from Tuesday, when Noel and Vivek Naryan hosted a roundtable panel on UX. Because this was a more intimate, round-table style talk, the conference attendees felt comfortable speaking up and adding to the conversation. They raised concerns about data privacy, shared their thoughts on where technology is headed, and generally chatted about the future of UX and technology.

Photo by Joyce Ohgi

This last photo is from Monday afternoon, when I made Noel take a break from his grueling schedule to play table tennis with me. The ACC Liverpool conference center thoughtfully provided table tennis in their Demo Grounds as a way to relieve stress and get some exercise (was a bit too cold to run around outside).

I put up a valiant effort, but Noel beat me handily. In my defense, I played the first half of the game in heels; once I took those off my returns improved markedly. I’ll get him next time! :) A special thank-you to Gustavo Gonzalez (@ggonza4itc), CTO at IT Convergence, for the great action shot, and also for giving excellent feedback and thoughtful input about the design concepts I showed him the following day.

Photo by Gustavo Gonzalez

All-in-all, we enjoyed the Apps 14 and Tech 14 conferences. It’s always great to get out among the users of our products to collect real feedback.

For more on the OAUX team’s activities at the 2014 editions of the UKOUG’s annual conferences, check out the Storify thread.

Amazon Echo, The Future or Fad?

December 18th, 2014 4 Comments

Update: I have now “hacked” the API to control Hue lights and initiate a phone call with Twilio.  Check it out here: https://www.youtube.com/watch?v=r58ERvxT0qM

Last November Amazon announced a new kind of device, part speaker, part personal assistant, and called it the Amazon Echo. If you saw the announcement, you might have also seen their quirky infomercial.

The parodies came hours after the announcement, and they were funny. But dismissing this as just a Siri/Cortana/Google Now copycat might miss the potential of this “always listening” device. To be fair, this is not the first device that can do this. I have a Moto X with an always-on chip waiting for a wake word (“OK Google”), and Google Glass does the same thing (“OK Glass”). But the fact that I don’t have to hold the device, be near it, or push a button (Siri) makes this cylinder kind of magical.

It is also worth noting that NONE of these devices are really “always-listening-and-sending-all-your-conversations-to-the-NSA”; in fact, the “always listening” part is local. Once you say the wake word, you’d better make sure you don’t spill the beans for the next few seconds, which is the period during which the device listens and does an STT (speech-to-text) operation in the cloud.

We can all start seeing through Amazon and why this is good for them. Right off the bat you can buy songs with a voice command. You can also add “stuff” to your shopping list, which reminds me of a similar product Amazon had last year, Amazon Dash, which unfortunately is only available in selected markets. The fact is that Amazon wants us to buy more from them, and for some of us that is awesome, right? Prime, two-day shipping, drone delivery, etc.

I have been eyeing these “always listening” devices for a while. The Ubi ($300) and Ivee ($200) were my two other choices. Both have had mixed reviews, and both have yet to deliver on the promise of an SDK or API. The Amazon Echo doesn’t have an SDK yet either, but they placed a link to show the Echo team your interest in developing apps for it.

The promise of a true artificial intelligence assistant, or personal contextual assistant (PCA), is coming soon to a house or office near you. Which brings me to my true interest in the Amazon Echo: the possibility of creating a “Smart Office” where the assistant will anticipate my day-to-day tasks, set up meetings, remind me of upcoming events, and analyze and respond to email and conversations, all tied to my Oracle Cloud of course.  The assistant will also control physical devices in my house/office: “Alexa, turn on the lights,” “Alexa, change the temperature to X,” etc.

All in all, it has been fun to request holiday songs around the kitchen and dining room (“Alexa, play Christmas music”). My kids are having a field day trying to ask the most random questions. My wife, on the other hand, is getting tired of the constant interruptions of the music, but I guess it’s the novelty. We shall see if my kids are still friendly to Alexa in the coming months.

In my opinion, people dismissing the Amazon Echo will be the same people who said: “Why do I need a music player on my phone? I already have ALL my music collection on my iPod” (iPhone naysayers circa 2007), or “Why do I need a bigger iPhone? That `pad thing is ridiculously huge!” (iPad naysayers circa 2010). And now I have already heard “Why do I want a device that is always connected and listening? I already have Siri/Cortana/Google Now” (Amazon Echo naysayers circa 2014).

Agree, disagree?  Let me know.

New Adventures in Virtual Reality

December 16th, 2014 6 Comments

Back in the early 90s I ventured into virtual reality and was sick for a whole day afterwards.

We have since learned that people become queasy when their visual systems and vestibular systems get out of sync. You have to get the visual response lag below a certain threshold. It’s a very challenging technical problem, which Oculus now claims to have cracked. With ever more sophisticated algorithms and ever faster processors, I think we can soon put this issue behind us.

Anticipating this, there has recently been a resurgence of interest in VR. Google’s Cardboard project (and Unity SDK for developers) makes it easy for anyone to turn their smartphone into a VR headset just by placing it into a cheap cardboard viewer. VR apps are also popping up for iPhones and 3D side-by-side videos are all over YouTube.

Image from Ultan’s Instagram

Some of my AppsLab colleagues are starting to experiment with VR again, so I thought I’d join the party. I bought a cheap cardboard viewer at a bookstore. It was a pain to put together, and my iPhone 5S rattles around in it, but it worked well enough to give me a taste.

I downloaded an app called Roller Coaster VR and had a wild ride. I could look all around while riding and even turn 180 degrees to ride backwards! To start the ride I stared intently at a wooden lever until it released the carriage.

My first usability note: between rides it’s easy to get turned around so that the lever is directly behind you. The first time I handed it to my wife she looked right and left but couldn’t find the lever at all. So this is a whole new kind of discoverability issue to think about as a designer.

Despite appearances, my roller coaster ride (and subsequent zombie hunt through a very convincing sewer) is research.  We care about VR because it is an emerging interaction that will sooner or later have significant applications in the enterprise.  VR is already being used to interact with molecules, tumors, and future buildings, use cases that really need all three dimensions.  We can think of other use cases as well; Jake suggested training for service technicians (e.g. windmills) and accident re-creation for insurance adjusters.

That said, both Jake and I remain skeptical.  There are many problems to work through before new technology like this can be adopted at an enterprise scale. Consider the idea of immersive virtual meetings.  Workers from around the world, in home offices or multiple physical meeting rooms, could instantly meet all together in a single virtual room, chat naturally with each other, pick up subtle facial expressions, and even make presentations appear in mid air at the snap of a finger.  This has been a holy grail for decades, and with Oculus being acquired by Facebook you might think the time has finally come.

Not quite yet.  There will be many problems to overcome first, not all of them technical.  In fact VR headsets may be the easiest part.

A few of the other technical problems:

And a non-technical problem:

So for virtual meetings to happen on an enterprise scale, all of the above problems will have to be solved and some of our attitudes will have to change.  We’ll have to find the right balance as a society – and the lawyers will have to sign off on it.  This may take a while.

But that doesn’t mean our group won’t keep pushing the envelope (and riding a few virtual roller coasters).  We just have to balance our nerdish enthusiasm with a healthy dose of skepticism about the speed of enterprise innovation.

What are your thoughts about the viability of virtual reality in the enterprise?  Your comments are always appreciated!

Magical Links for a Tuesday in December

December 2nd, 2014 3 Comments

It’s difficult to make a link post seem interesting. Anyway, I have some nuggets from the Applications User Experience desk plus bonus robot video because it’s Tuesday.

Back to Basics. Helping You Phrase that Alta UI versus UX Question

Always entertaining bloke and longtime Friend of the 'Lab, Ultan (@ultan) answers a question we get a lot: what's the difference between UI and UX?

From Coffee Table to Cloud at a Glance: Free Oracle Applications Cloud UX eBook Available

Next up, another byte from Ultan on a new and free eBook (registration required) produced by OAUX called “Oracle Applications Cloud User Experiences: Trends and Strategy.” If you’ve seen our fearless leader, Jeremy Ashley (@jrwashley), present recently, you might recognize some of the slides.

OAUX_ebook

Oh and if you like eBooks and UX, make sure to download the Oracle Applications Cloud Simplified User Interface Rapid Development Kit.

Today, We Are All Partners: Oracle UX Design Lab for PaaS

And hey, another post from Ultan about an event he ran a couple weeks ago, the UX Design Lab for PaaS.

Friend of the ‘Lab, David Haimes (@dhaimes), and several of our team members, Anthony (@anthonyslai), Mark (@mvilrokx), and Tony, participated in this PaaS4SaaS extravaganza, and although I can’t discuss details, they built some cool stuff and had oodles of fun. Yes, that’s a specific unit of fun measurement.

IMG_1147

Mark (@mvilrokx) and Anthony (@anthonyslai) juggle balloons for science.

Amazon’s robotic fulfillment army

From kottke.org, a wonderful ballet of Amazon’s fulfillment robots.

About Proofs and Stories, in 4 Parts

December 1st, 2014 3 Comments

Editor's note: Here's another new post from a new team member. Shortly after the 'Lab expanded to include research and design, I attended a workshop on visualizations hosted by a few of our new team members, Joyce, John, and Julia.

The event was excellent. John and Julia have done an enormous amount of critical thinking about visualizations, and I immediately started bugging them for blog posts. All the work and research they’ve done needs to be freed into the World so anyone can benefit from it. This post includes the first three installments, and I hope to get more. Enjoy.

Part 1

I still haven't talked anyone into reading Proofs and Stories, and God knows I've tried. If you read it, let me know. It is written by the author of Logicomix, Apostolos Doxiadis, if that makes the idea of reading Proofs and Stories more enticing. If not, I can offer you my summary:

H.C. Berann. Yellowstone National Park panorama

1. Problem solving is like a quest. As in a quest, you might set off thinking you are bound for Ithaka only to find yourself on Ogygia years later. Or, in Apostolos' example, you might set off to prove Fermat's Last Theorem only to find yourself studying elliptic curves for years. The seeker walks down many paths, wanders in circles, reverses steps, and encounters dead ends.

2. The quest has a starting point = what you know, a destination = the hypothesis you want to prove, and points in between = statements of fact. A graph, in the mathematical sense, is a great way to represent this. A is the starting point, B is the destination, F is a transitive point, and C is a choice.

graph1

A story is a path through the graph, defined by the choices a storyteller makes on behalf of his characters.

graph_2
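To make the graph idea a bit more concrete, here is a minimal sketch in Python. The node names echo the toy example above, but the edges and the helper function are my own invention, not anything from Doxiadis: the quest is a directed graph, and a story is one path from the starting point to the destination.

```python
# A toy quest graph: keys are points (statements of fact),
# values are the points reachable from them (choices).
# The edges here are invented purely for illustration.
quest = {
    "A": ["C"],       # A: the starting point (what you know)
    "C": ["F", "D"],  # C: a choice between two branches
    "F": ["B"],       # F: a transitive point on the way to B
    "D": [],          # D: a dead end
    "B": [],          # B: the destination (the hypothesis to prove)
}

def stories(graph, start, goal, path=None):
    """Enumerate every path from start to goal -- each one is a possible story."""
    path = (path or []) + [start]
    if start == goal:
        return [path]
    found = []
    for nxt in graph.get(start, []):
        if nxt not in path:  # avoid wandering in circles forever
            found.extend(stories(graph, nxt, goal, path))
    return found

print(stories(quest, "A", "B"))  # [['A', 'C', 'F', 'B']]
```

Running it on this tiny graph yields a single story, A → C → F → B; the branch through D is one of those dead ends the seeker has to back out of.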

Frame P5 below shows Snowy's dilemma. Snowy's choice determines what happens to Tintin in Tibet. If only Snowy had not gone for the bone, the story would be different.

swnoys_dilemma

Image from Tintin in Tibet by Hergé

Even though its own nature dictates that a story be linear, there is always a notion of alternative paths. Deciding how to linearize the forks and branches of the path so that the story is most interesting is the art of storytelling.

3. A certain weight, or importance, can be assigned to a point based on the number of choices leading to it, or resulting from it.

graph_weights
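One (admittedly naive) way to read "weight" here is simply as degree: count the choices leading into a point plus the choices resulting from it. A quick sketch, reusing the same hypothetical toy graph as above:

```python
from collections import Counter

# The same toy quest graph as in the earlier sketch (edges invented for illustration).
quest = {"A": ["C"], "C": ["F", "D"], "F": ["B"], "D": [], "B": []}

def weights(graph):
    """Weight of a point = choices leading to it (in-degree)
    plus choices resulting from it (out-degree)."""
    in_degree = Counter(dst for dsts in graph.values() for dst in dsts)
    return {node: in_degree[node] + len(dsts) for node, dsts in graph.items()}

print(weights(quest))  # {'A': 1, 'C': 3, 'F': 2, 'D': 1, 'B': 1}
```

C, the point with the most choices running through it, comes out heaviest, which matches the intuition that it is the point most likely to survive a summary.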

When a story is summarized, each storyteller is likely to come up with a different outline. However, the most important points usually survive the majority of summarizations.

Stories can be similar. The practitioners of both narrative and problem solving rely on patterns to reduce choice and complexity.

So what does this have to do with anything?

Part 2

Another book I cannot make anyone but myself read is called "Interaction Design for Complex Problem Solving: Developing Useful and Usable Software" by Barbara Mirel. The book is as voluminous as its title suggests, 397 pages, of which I have made it through page 232 in four years. This probably doesn't entice you to read the book. Luckily, there is a one-page paper, "Visualizing complexity: Getting from here to there in ill-defined problem landscapes," from the same author on the very same subject. If that is still too much to read, may I offer you my summary?

Mainly cut and pasted from Mirel's text:

1. Complex problem solving is an exploration across rugged and at times uncharted problem terrains. In that terrain, analysts have no way of knowing in advance all moves, conditions, constraints, or consequences. Problem solvers take circuitous routes through "tracts" of tasks toward their goals, sometimes crisscrossing the landscape and jumping across foothills to explore distant knowledge, to recover from dead ends, or to reinvigorate inquiry.

2. Mountainscapes are effective ways to model and visualize complex inquiry. These models stress relationships among parts and do not reduce problem solving to linear, rule-based procedures or workflows. In a mountainscape, the spaces are as important to coherence as the paths. Selecting the right model affects the design of the software and whether complex problem solvers experience useful support. Models matter.

B. Mirel, L. Allmendinger. Analyzing sleep products visualized as a mountain climb

Complex problems can neither be solved nor supported with linear or pre-defined methods. Complex problems have many possible heuristics, indefinite parameters, and ranges of outcomes rather than one single right answer or stopping point.

3. Certain types of complex problems recur in various domains and, for each type, analysts across organizations perform similar patterns of inquiry. Patterns of inquiry are the regularly repeated sets of actions and knowledge that have a successful track record in resolving a class of problems in a specific domain.

And so what does this have to do with anything?

Part 3

A colleague of mine, Dan Workman, once commented on a sales demo of a popular visual analytics tool. "Somehow," he said, "the presenter drills down here, pivots there, zooms out there, and, miraculously, arrives at the view of the report where the answer to his question lies. But how did he know to go there? How would anyone know where the insight hides?"

His words stuck with me.

Imagine a simple visualization that shows the revenue trend of a business by region, by product, and by time. Let's pretend the business operates in 4 regions, sells 4 products, and has been in business for 4 years. The combination of these parameters results in 64 views of the sales data. Now imagine that each region is made up of hundreds of countries. If the visualization allows the user to view sales by country, there will be thousands and thousands of additional views. In the real world, a business might also have many more products. The number of possible views could easily exceed what a human being can manually look at, and only some views (alone or in combination) possibly contain insight. But which ones?
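The arithmetic is worth spelling out, because it runs away quickly. A throwaway sketch, using the hypothetical counts from the paragraph above plus some invented larger numbers:

```python
from itertools import product

# The toy business from the paragraph above.
num_regions, num_products, num_years = 4, 4, 4
print(num_regions * num_products * num_years)  # 64 views -- still browsable by hand

# Each view is just one combination of parameter values.
views = list(product(range(num_regions), range(num_products), range(num_years)))
assert len(views) == 64

# Swap regions for, say, 150 countries and a catalog of 40 products
# (purely hypothetical numbers) and manual inspection becomes hopeless.
print(150 * 40 * num_years)  # 24,000 views
```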

I have yet to see an application that supports users in finding the insightful views of a visualization. Often users won't even know where to start.

So here is the connection between Part 1, Part 2, and Part 3. It's the model. Visualization exploration can be represented as a graph (in the mathematical sense), where the points are the views and the connections are the navigations between views. Users then trace a path through the graph as they explore new results.

J. Blyumen Navigating Interactive Visualizations
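Here is a minimal sketch of that model (the class and the view names are hypothetical, not from any real product): each view is a point, each navigation is a connection, and the user's exploration is the path traced so far, with the ability to back out of a dead end.

```python
class Exploration:
    """Track a user's path through the graph of visualization views."""

    def __init__(self, start_view):
        self.path = [start_view]  # the route taken so far
        self.edges = set()        # every navigation step taken (view -> view)

    def navigate(self, view):
        """Move to a new view: a drill-down, a pivot, a zoom, etc."""
        self.edges.add((self.path[-1], view))
        self.path.append(view)

    def back(self):
        """Reverse a step, e.g. after hitting a dead end."""
        if len(self.path) > 1:
            self.path.pop()
        return self.path[-1]


# Hypothetical views, named by the parameter values that define them.
trip = Exploration("revenue/all-regions/all-products/2014")
trip.navigate("revenue/EMEA/all-products/2014")
trip.navigate("revenue/EMEA/product-x/2014")  # no insight here, back up
trip.back()
trip.navigate("revenue/EMEA/product-y/2014")
print(trip.path)  # the linear route; trip.edges holds every place visited
```

The set of edges is the record a "places visited" interface would need; the path is the story waiting to be linearized.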

From here, a certain design research agenda comes to mind:

1. The world needs interfaces to navigate these problem mountainscapes: keeping track of places visited, representing branches and loops in the path, enabling the user to reverse steps, etc.

2. The world needs an interface for linearizing a completed quest into a story (research into presentation), and for outlining stories.

3. The world needs software smarts that can collect patterns of inquiry and use them to guide problem solvers through the mountainscapes.

So I hope that Part 4 will eventually follow from this agenda . . . .

Happy Thanksgiving

November 26th, 2014 3 Comments

Editor’s note: Here’s a first post from one of our new team members, Thao Nguyen (@thaobnguyen), who runs our Emerging Interactions team, the Research and Design part of the R, D & D.

That last D is Development, if that's unclear. Anyway, like Thao says, Happy Thanksgiving to those who celebrate it; for those who don't, enjoy the silence in our absence. To Thao's question, I'm going with the Internet. Yes, it's a gadget, because it's a series of tubes, not a big truck.

Find the comments to add the gadget for which you are most thankful.

Tomorrow is Thanksgiving and this seems like a good time to put my voice out on The AppsLab (@theappslab). I’m Thao, and my Twitter (@thaobnguyen) tagline is “geek mom.” I’m a person of few words and those two words pretty much summarize my work and home life. I manage The AppsLab researchers and designers. Jake welcomed us to the AppsLab months ago here, so I’m finally saying “Thank you for welcoming us!”

Photo by floodllama on Flickr used under Creative Commons

As we reflect on all the wonderful things in our lives, personal and professional, I sincerely want to say I am very thankful for having the best work family ever. I was deeply reminded of that early this week, when I had a little health scare at work and was surrounded by so much care and support from my co-workers. Enough of the emotional stuff, and on to the fun gadget stuff . . . .

My little health scare led me to a category of devices that hadn't hit my radar before – potentially life-saving personal medical apps. I've been looking at wearables, fitness devices, healthcare apps, and the like for a long time now, but there is a class of medical-grade devices (at least, recommended by my cardiologist) that is potentially so valuable in my life, as well as to those dear to me . . . AliveCor. It essentially turns your smartphone into an ECG device so you can monitor your heart health anytime and share it with your physician. Sounds so cool!

Back to giving thanks, I'm so thankful for all the technology and gadgets of today – from the iPhone and iPad that let me have a peaceful dinner out with the kids to these medical devices that I'll be exploring now. I want to leave you with a question: "What gadget are you most thankful for?"

Look What We Made

November 20th, 2014 5 Comments

As a team-building activity for our newly merged team of research, design and development, someone, who probably wishes to remain nameless, organized a glass mosaic and welding extravaganza at The Crucible in Oakland.

We split into two teams, one MIG welding, the other glass breaking, and here’s the result.

Original image, glass before firing.

Finished product, including frame.

All in all, an interesting and entertaining activity. Good times were had by all, and no one was cut or burned, so bonus points for safety.