Personal Assistant or Creepy Stalker? The Rise of Cognitive Computing

August 20th, 2014 7 Comments

I just got back to my hotel room after attending the first day of the two-day Cognitive Computing Forum, a conference running in parallel with the Semantic Technology (SemTech) Business Conference and the NoSQL Conference here in San Jose. Although the forum attracts fewer attendees and has only a single track, I cannot remember attending a symposium where so many stimulating ideas and projects were presented.

What is cognitive computing? It refers to computational systems that are modeled on the human brain – either literally, by emulating brain structure, or figuratively, by using reasoning and semantic associations to analyze data. Research into cognitive computing has become increasingly important as organizations and individuals attempt to make sense of the massive amount of data that is now commonplace.

The first forum speaker was Chris Welty, who was an instrumental part of IBM’s Watson project (the computer that beat the top human contestants on the gameshow Jeopardy). Chris gave a great overview of how cognitive computing changes the traditional software development paradigm. Specifically, he argued that rather than focusing on perfection, it is OK to be wrong as long as you succeed often enough to be useful (he pointed to search engine results as a good illustration of this principle). Development should focus on incremental improvement – using clearly defined metrics to measure whether new features have real benefit. Another important point he made was that there is no one best solution – rather, often the most productive strategy is to apply several different analytical approaches to the same problem, and then use a machine learning algorithm to mediate between (possibly) conflicting results.
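To make that last idea concrete, here is a tiny, purely illustrative sketch of my own (not anything from Watson) of what ‘mediating between approaches’ can look like in Python with scikit-learn: two very different classifiers tackle the same problem, and a simple logistic regression learns when to trust which one.

# Illustrative only: two "analytical approaches" plus a mediator, on synthetic data.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB

X, y = make_classification(n_samples=3000, n_features=20, random_state=0)
X_train, X_rest, y_train, y_rest = train_test_split(X, y, test_size=0.5, random_state=0)
X_blend, X_test, y_blend, y_test = train_test_split(X_rest, y_rest, test_size=0.5, random_state=0)

# Two different approaches applied to the same problem.
approaches = [RandomForestClassifier(random_state=0), GaussianNB()]
for model in approaches:
    model.fit(X_train, y_train)

def votes(data):
    # Each approach casts a probability "vote" for the positive class.
    return np.column_stack([m.predict_proba(data)[:, 1] for m in approaches])

# The mediator learns, on held-out data, how to weigh conflicting votes.
mediator = LogisticRegression().fit(votes(X_blend), y_blend)
print("mediated accuracy:", mediator.score(votes(X_test), y_test))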

There were also several interesting – although admittedly esoteric – talks by Dave Sullivan of Ersatz Labs (@_DaveSullivan) on deep learning, Subutai Ahmad of Numenta on cortical computing (which attempts to emulate the architecture of the neocortex) and Paul Hofmann (@Paul_Hofmann) of Saffron Technology on associative memory and cognitive distance. Kristian Hammond (@KJ_Hammond) of Narrative Science described technology that can take structured data and use natural language generation (NLG) to automatically create textual narratives, which he argued are often much better than data visualizations and dashboards in promoting understanding and comprehension.

However, the highlight of this first day was the talk entitled ‘Expressive Machines’ by Mark Sagar from the Laboratory for Animate Technologies. After showing some examples of facial tracking CGI from the movies ‘King Kong’ and ‘Avatar’, Mark described a framework modeled on human physiology that emulates human emotion and learning. I’ve got to say that even though I have a solid appreciation and understanding of the underlying science and technology, Mark’s BabyX – who is now really more a virtual toddler than an infant – blew me away. It was amazing to see Mark elicit various emotions from BabyX. Check out this video about BabyX from TEDxAuckland 2013.

At the end of the day, the presentations helped crystallize some important lines of thought in my own carbon-based ‘computer’.

First, it is no surprise that human-computer interactions are moving towards more natural user interfaces (NUIs), where a combination of artificial intelligence, fueled by semantics and machine learning and coupled with more natural ways of interacting with devices, results in more intuitive experiences.

Second, while the back-end analysis is extremely important, what is particularly interesting to me is the human part of the human-computer interaction. Specifically, while we often focus on how humans manipulate computers, an equally interesting question is how computers can be used to ‘manipulate’ humans in order to enhance our comprehension of information by leveraging how our brains are wired. After all, we do not view the world objectively, but through a lens that is the result of idiosyncrasies from our cultural and evolutionary history – a fact exploited by the advertising industry.

For example, our brains are prone to anthropomorphism, and will recognize faces even when faces aren’t there. Furthermore, we find symmetrical faces more attractive than asymmetrical ones. We are also attracted to infantile features – a fact put to good use by Walt Disney animators, who made Mickey Mouse appear more infant-like over the years to increase his popularity (as documented by paleontologist Stephen Jay Gould). In fact, we exhibit a plethora of cognitive biases (ever experience the Baader-Meinhof phenomenon?), including the “uncanny valley”, which describes a rapid drop-off in comfort level as computer agents become almost – but not quite perfectly – human-looking. And as Mark Sagar’s work demonstrates, emotional, non-verbal cues are extremely important (the most impressive part of Sagar’s demo was not the A.I. – after all, there is a reason why BabyX is a baby and not a fully conversant adult – but rather the emotional response it elicited in the audience).

The challenge in designing intelligent experiences is to build systems that are informative and predictive but not presumptuous, tending towards the helpful personal assistant rather than the creepy stalker. Getting it right will depend as much on understanding human psychology as it will on implementing the latest machine learning algorithms.

More First World Problems

August 18th, 2014 5 Comments

I’ve been traveling a lot lately, which is bad. I’ve been consuming a lot of in-flight wifi, which is good, because there really should be no place on Earth where I’m unable to work.

Plus, it’s internets at 35,000 feet. How cool is that?

Today, I found myself in the throes of a decidedly first world problem. Of the many devices I carry, I couldn’t decide which one to use for the airplane wifi, which is, naturally, charged per-device.

Normally, I’d go with the tablet, since it’s a nice mix of form factors. The laptop is my preference, but I end up doing in-seat yoga to use it, not a good look.

But, horror of horrors, the tablet’s battery was at only 21%. Since it’s an Android tablet, that wouldn’t be enough to make it to my destination. I do carry a portable battery, but for some odd reason, it won’t charge the Nexus 7 tablet.

Recursive, first world problems.

I debated smartphone vs. laptop for a minute or two before I realized what an awful, self-replicating, first world problem this was. So, I made a call and immediately did what anyone would do: tweeted about it.

What has become of me.

Quadcopters and the Internet of Things

August 17th, 2014 5 Comments

Low-tech attachment of an 808 keychain camera to the underside of a Syma X1 quadcopter.

Editor’s note: Hey a new author! Here’s the first one, of many I hope, from Bill Kraus, who joined us back in February. Enjoy.

One of the best aspects of working in the emerging technologies team here in Oracle’s UX Apps group is that we have the opportunity to ‘play’ with new technology. This isn’t just idle dawdling, but rather play with a purpose – a hands-on exercise exploring new technologies and brainstorming on how such technologies can be incorporated into future enterprise user experiences.

Some of these technologies, such as beacons and wearables, have obvious applications. The relevance of others, such as quadcopters and drones, is less obvious (notwithstanding their possible use as a package delivery mechanism for an unnamed online retail behemoth).

Video still taken from the quadcopter hundreds of feet above my home on Bainbridge Island, looking north to the Puget Sound and Point Monroe.

As an amateur wildlife and nature photographer, I’ve dabbled in everything from digiscoping to infrared imaging to light painting to underwater photography. I’ve also played with strapping lightweight keychain cameras to inexpensive quadcopters (yes, I know I could get a DJI Phantom and a GoPro, but at the moment I prefer to test my piloting skills on something that won’t make me shed tears – and incur the wrath of my spouse - if it crashes).

After telling my colleagues recently over lunch about my quadcopter adventures (I’ve already lost several in the trees and waters of the Puget Sound), Tony, Luis, and Osvaldo decided to purchase their own, and we had a blast at our impromptu ‘flight school’ at Oracle. The guys did great, and Osvaldo’s copter even had a tête-à-tête with a hummingbird, who seemed a bit confused over just what was hovering before it.

Luis flying his quadcopter in the hallway.

Osvaldo flying his quadcopter.

This is all loads of fun, but what do flying quadcopters have to do with the Internet of Things? Well, just as a quadcopter allows a photographer to get a perspective previously thought impossible, mobile technology combined with embedded sensors and the cloud have allowed us to break the bonds of the desktop and view data in new ways. No longer do we interact with digital information at a single point in time and space, but rather we are now enveloped by it every waking (and non-waking) moment – and we have the ability to view this data from many different perspectives. How this massive flow of incoming data is converted into useful information will depend in large part on context (you knew I’d get that word in here somehow) – analogous to how the same subject can appear dramatically different depending on the photographer’s (quadcopter assisted) point-of-view.

In fact, the Internet of Things is as much about space as it is about things – about sensing, interacting with and controlling the environment around us using technology to extend what we can sense and manipulate. Quadcopters are simply a manifestation of this idea – oh, and they are also really fun to fly.

The Secret Project Emerges

August 15th, 2014 6 Comments

Noel (@noelportugal) and Raymond have been working on a secret project. Here’s the latest:

Thanks to AUX colleague and Friend of the ‘Lab, Rob Hernandez, for the 3D modeling.

So now you know why Noel bought the slap bands, but what goes in the case?

If you’ve been watching, you might know already.

LightBlue Beans from Punch Through Design

Those are LightBlue Beans from Punch Through Design (@punchthrough), h/t @colin_k.

Stay tuned.

OTN Latin America Tour 2014 – Mexico

August 13th, 2014 2 Comments

OTN is designed to help Oracle users with community-generated resources. Every year the OTN team organizes worldwide tours that allow local users to learn from subject matter experts in all things Oracle. For the past few years the UX team has been participating in the OTN Latin America Tour, as well as tours in other regions. This year I was happy to accept their invitation to deliver the opening keynote for the Mexico City tour stop.

The keynote title was “Wearables in the Enterprise: From Internet of Things to Google Glass and Smart Watches.” Given the AppsLab charter and our reputation for cutting-edge technologies and innovation, it was easy to put together a presentation deck on our team’s findings on these topics. The presentation was a combination of the keynote given by our VP, Jeremy Ashley, during MakerCon 2014 at Oracle HQ this past May and our proof-of-concepts using wearable technologies.

I also had a joint session with my fellow UX team member Rafael Belloni titled “Designing Tablet UIs Using ADF.” Here we had the chance to share how users can leverage two great resources freely available from our team:

  1. Simplified User Experience Design Patterns for the Oracle Applications Cloud Service (register to download e-book here)
  2. A starter kit with templates used to build Simplified UI interfaces (download kit here)
    *Look for “Rich UI with Data Visualization Components and JWT UserToken validation extending Oracle Sales Cloud – 1.0.1”

These two resources are the result of extensive research done by our whole UX organization, and we are happy to share them with the Oracle community. Overall it was a great opportunity to reach out to the Latin American community, especially my fellow Mexican friends.

Here are some pictures of the event and of Mexico City. Enjoy!

Photo credits to Pablo Ciccarello, Plinio Arbizu, and me.

Oracle Voice Debuts on the App Store

August 11th, 2014 Leave a Comment

Editor’s note: I meant to blog about this today, but looks like my colleagues over at VoX have beat me to it. So, rather than try to do a better job, read: do any work at all, I’ll just repost it. Free content, w00t!

Although I no longer carry an iOS device, I’ve seen Voice demoed many times in the past. Projects like Voice and Simplified UI are what drew me to Applications User Experience, and it’s great to see them leak out into the World.

Enjoy.

Oracle Extends Investment in Cloud User Experiences with Oracle Voice for Sales Cloud
By Vinay Dwivedi, and Anna Wichansky, Oracle Applications User Experience

Oracle Voice for the Oracle Sales Cloud, officially called “Fusion Voice Cloud Service for the Oracle Sales Cloud,” is available now on the Apple App Store. This first release is intended for Oracle customers using the Oracle Sales Cloud, and is specifically designed for sales reps.

The home screen of Fusion Voice Cloud Service for the Oracle Sales Cloud is designed for sales reps.

Unless people record new information they learn (e.g., write it down or repeat it aloud), they forget a high proportion of it in the first 20 minutes. The Oracle Applications User Experience team has learned through its research that when sales reps leave a customer meeting with insights that can move a deal forward, it’s critical to capture important details before they are forgotten. We designed Oracle Voice so that the app allows sales reps to quickly enter notes and activities on their smartphones right after meetings, no matter where they are.

Instead of relying on slow typing on a mobile device, sales reps can enter information three times faster (pdf) by speaking to the Oracle Sales Cloud through Voice. Voice takes a user through a dialog similar to a natural spoken conversation to accomplish this goal. Since key details are captured precisely and follow-ups are quicker, deals are closed faster and more efficiently.

Oracle Voice is also multi-modal, so sales reps can switch to touch-and-type interactions for situations where speech interaction is less than ideal.

Oracle sales reps tried it first, to see if we were getting it right.

We recruited a large group of sales reps in the Oracle North America organization to test an early version of Oracle Voice in 2012. All had iPhones and spoke American English; their predominant activity was field sales calls to customers. Users had minimal orientation to Oracle Voice and no training. We were able to observe their online conversion and usage patterns through automated testing and analytics at Oracle, through phone interviews, and through speech usage logs from Nuance, which is partnering with Oracle on Oracle Voice.

Users were interviewed after one week in the trial; over 80% said the product exceeded their expectations. Members of the Oracle User Experience team working on this project gained valuable insights into how and where sales reps were using Oracle Voice, which we used as requirements for features and functions.

For example, we learned that Oracle Voice needed to recognize product- and industry-specific vocabulary, such as “Exadata” and “Exalytics,” and we requested a vocabulary enhancement tool from Nuance that has significantly improved the speech recognition accuracy. We also learned that connectivity needed to persist as users traveled between public and private networks, and that users needed easy volume control and alternatives to speech in public environments.

We’ve held subsequent trials, with more features and functions enabled, to support the 10 workflows in the product today. Many sales reps in the trials have said they are anxious to get the full version and start using it every day.

“I was surprised to find that it can understand names like PNC and Alcoa,” said Marco Silva, Regional Manager, Oracle Infrastructure Sales, after participating in the September 2012 trial.

“It understands me better than Siri does,” said Andrew Dunleavy, Sales Representative, Oracle Fusion Middleware, who also participated in the same trial.

This demo shows Oracle Voice in action.

What can a sales rep do with Oracle Voice?

Oracle Voice allows sales reps to efficiently retrieve and capture sales information before and after meetings. With Oracle Voice, sales reps can:

Prepare for meetings

Wrap up meetings

These screenshots show how to create tasks and appointments using Oracle Voice.

Our research showed that sales reps entered more sales information into the CRM system when they enjoyed using Oracle Voice, which makes Oracle Voice even more useful because more information is available to access when the same sales reps are on the go. With increased usage, the entire sales organization benefits from access to more current sales data, improved visibility on sales activities, and better sales decisions. Customers benefit too — from the faster response time sales reps can provide.

Oracle’s ongoing investment in User Experience

Oracle gets the idea that cloud applications must be easy to use. The Oracle Applications User Experience team has developed an approach to user experience that focuses on simplicity, mobility, and extensibility, and these themes drive our investment strategy. The result is key products that refine particular user experiences, like we’ve delivered with Oracle Voice.

Oracle Voice is one of the most recent products to embrace our developer design philosophy for the cloud of “Glance, Scan, & Commit.” Oracle Voice allows sales reps to complete many tasks at what we call glance and scan levels, which means keeping interactions lightweight, or small and quick.

Are you an Oracle Sales Cloud customer?

Oracle Voice is available now on the Apple App Store for Oracle customers using the Oracle Sales Cloud. It’s the smarter sales automation solution that helps you sell more, know more, and grow more.

Will you be at Oracle OpenWorld 2014? So will we! Stay tuned to the VoX blog for when and where you can find us. And don’t forget to drop by and check out Oracle Voice at the Smartphone and Nuance demo stations located at the CX@Sales Central demo area on the second floor of Moscone West.

We Have Slap Bands

August 8th, 2014 6 Comments

As part of a secret project Noel (@noelportugal) and Raymond are cooking up, Noel ordered some AppsLab-branded slap bands.

The bands were produced by Amazing Wristbands (@AMZG_Wristbands), and Noel has nothing but good things to say about them, in case you’re looking for your own slap bands.

Anyway, I’m sure we’ll have some left over after the double-secret project. So, if you want one, let us know.

Find the comments.

A Smart Holster for Law Enforcement

July 24th, 2014 8 Comments

So, back in January, Noel (@noelportugal) took a team of developers to the AT&T Developer Summit Hackathon in Las Vegas.

Although they didn’t win, they built some very cool stuff, combining Google Glass, Philips Hue, the Internet of Things, and possibly a kitchen sink in there somewhere, into what can only be described as a smart holster. You know, for guns.

You read that right. This project was way out of our usual wheelhouse, which is what made it so much fun, or so I’m told.

Friend of the ‘Lab Martin Taylor was kind enough to produce, direct and edit the following video, in which Noel describes and demonstrates the holster’s capabilities.

Did you catch the bit at 3:06? That’s Raymond behind the mask.

Enjoy.

So, a Researcher and Six Developers Join a Coding Challenge

July 16th, 2014 10 Comments

Editor’s Note: Hey, a new author! Colleague and Friend of the ‘Lab, Joyce Ohgi, a principal usability researcher here at Oracle Applications User Experience, joined several of our guys and tall man, all-around good dude and Friend of the ‘Lab, Rafa Belloni (@rafabelloni), to form a super-powered team last week.

This is her story, as told from the inside. Enjoy.

I earned $600 in a coding challenge without writing a single line of code.

Well, strictly speaking, $600/7 = $85.71, 7 being the number of members on our team. The challenge in question? The Oracle Applications User Experience Beacons Developer Challenge, a contest between internal Oracle teams to devise a creative solution using Estimote’s beacons and Oracle Facilities data provided by Oracle Spatial.

We were given: the beacons, some sample data, icons, and images, an example app, a pack of poster gum to stick the beacons on walls, and the freedom to do whatever we could: 1) dream up and 2) execute in 48 hours.

Fast forward: Anthony Lai (@anthonslai) and I are standing in front of a room of developers and five judges about to give a presentation on our app, whose back end I still did not fully grasp. How did I get there?

My journey started two days before the official challenge start date. I ate lunch with Tony, one of the developers, and he suggested I join the team because “Why not? It’ll be fun.”

I had heard of the challenge but thought it wasn’t for someone like me, as my now-rusty coding skills were last used for an Intro to C programming class in college; what could I contribute to a contest whose purpose is literally to generate code? But I like Tony, and he promised me it would be fun. So I decided, well, if the team will have me, I’d like to try it out. So I signed up.

One day before the challenge: the team decides to meet in order to: 1) learn each other’s names and 2) come up with a list of ideas, which would be narrowed down once the contest started.

After we all introduced ourselves, the brainstorming began immediately and organically. But, to my surprise, not a single dev was taking notes. How were we going to remember all the ideas and organize ourselves?

As a researcher, one of the basic rules of my job is to always observe and always take notes.

I could be useful! I whipped out my handy iPad with keyboard case and typed away. But some of the ideas didn’t make sense to me, and for the good of the team, I realized I also should be voicing my questions and opinions, not just acting as the scribe.

So, I asked questions. It was scary. I was worried they would tease me for not knowing the back-end stuff they were talking about, or for speaking about ideas in terms of users’ needs, instead of the system constraints or technology features.

But the team listened to me. They even agreed with me. Okay, they also disagreed with me sometimes. But they treated me with the same respect they treated each other.

Day of the challenge - final code check-in: Honestly, the whole coding challenge experience is a blur. As a researcher, I’m trained not just to always take notes, but also to take photos whenever possible to retain key details that could be otherwise forgotten.

I got so wrapped up in our project that I didn’t take a single photo of our group. I did take several pictures of our competition though.

Luckily Kathy Miedema dropped by to wish us luck and also snapped a picture.

Photo by Kathy Miedema, used with permission

As for the experience itself, I can only attempt to describe it by painting a picture in words.

We are all seated in the AUX Team’s little Design Room. Although all the chairs are occupied, silence reigns, interrupted only by the soft clicking of keyboards, and the occasional low conversation.

Usually, the mental image of collaboration is a group of people talking together. But in this case, even though it looked like we were all doing our own separate thing, it was intensely collaborative.

Each of our parts would need to come together by the deadline, so we did constant, impromptu, little check-ins to make sure the pieces we were building would integrate quickly.

I checked in constantly as well, seeking confirmation that, of the many research methodologies I could use, the ones I chose gave the team the data they needed, i.e. user interviews to capture wants, needs and task flows of the current processes and feedback sessions with key stakeholders.

By the way, if you are interested in learning more about research methodology, you can find more info at UX Direct.

So, back to Anthony and me, standing in front of a crowd, about to launch into our demo.

It was crazy; we didn’t have time to do a run-through before; we had some weird display lags using the projector and the Samsung Gear Live smartwatch; the script was too long, and we ran out of time.

Believe me, I have a list of things that we can improve upon for the next challenge, but our idea was good.

Technically, it was solid because of the deep expertise of the team, which, aggregated, probably comes close to 100 years of total development experience, and it was based on real users’ needs because of my research.

Happily, we won 2nd place, and $600. Next year, we’ll be gunning for 1st and the cool $1000 prize, which would net $142.86 for each of us.

All kidding aside, it’s not about the prize money or the recognition. It’s about people using their unique skill sets to build something better than any of them could have built on their own.

I will close with a text exchange between Anthony and me, post-challenge:

Me: Thx for letting me participate. I enjoyed seeing “your world” aka development.
Anthony: Uh oh. We are a test species to you.
Me: Don’t worry. A good researcher observes to understand, not to pass judgment.

And later, when I was fretting that I cost our team the win by not contributing any code, Anthony wrote to me:

Contributing code does not mean contributing; contributing does not mean contributing code.

Editor again: Joyce thought the post needing a closing. Thanks to Joyce, Rafa and our guys, Anthony, Luis, Osvaldo, Raymond and Tony for all their hard work. Consider the post closed. Oh, and find the comments.

Taleo Interview Evaluations, Part 2

July 3rd, 2014 7 Comments

So, if you read Part 1, you’re all up to speed. If not, no worries. You might be a bit lost, but if you care, you can bounce over and come back for the thrilling conclusion.

I first showed the Taleo Interview Evaluation Glass app and Android app at a Taleo and HCM Cloud customer expo in late April, and as I showed it, my story evolved.

Demos are living organisms; the more you show them, the more you morph the story to fit the reactions you get. As I showed the Taleo Glass app, the demo became more about Glass and less about the story I was hoping to tell, which was about completing the interview evaluation more quickly to move along the hiring process.

So, I began telling that story in context of allowing any user, with any device, to complete these evaluations quickly, from the heads-up hotness of Google Glass, all the way down the technology coolness scale to a boring old dumbphone with just voice and text capabilities.

I used the latter example for two reasons. First, the juxtaposition of Google Glass and a dumbphone sending texts got a positive reaction and focused the demo around how we solved the problem vs. “is that Google Glass?”

And second, I was already designing an app to allow a user with a dumbphone to complete an interview evaluation.

Noel (@noelportugal) introduced me to Twilio (@twilio) years ago when he built the epic WebCenter Rock ‘em Sock ‘em Robots. Those robots punched based on text and voice input collected by Twilio.

Side note, Noel has long been a fan of Twilio’s, and happily, they are an Oracle Partner. Ultan (@ultan) is hard at work dreaming up cool stuff we can do with Twilio, so stay tuned.

Anyway, Twilio is the perfect service to power the app I had in mind. Shortly after the customer expo ended, I asked Raymond to build out this new piece, so I could have a full complement of demos to show that fit the full story.

In about a week, Raymond was done, and we now have a holistic story to tell.

The interface is dead simple. The user simply sends text messages to a specific number, using a small set of commands. First, sending “Taleo help” returns a list of the commands. Next, the user sends “Taleo eval requests” to retrieve a list of open interview evaluations.

The user then sends a command to start one of the numbered evaluations, e.g. "Start eval 4," and each question is sent as a separate message.

When the final question has been answered, a summary of the user's answers is sent, and the user can submit the evaluation by sending "Confirm submit."

And that’s it. Elegant and simple and accessible to any manager, e.g. field managers who spend their days traveling between job sites. Coupled with the Glass app and the Android app, we’ve covered all the bases not already covered by Taleo’s web app and mobile apps.
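We haven’t shared Raymond’s actual code, but if you’re curious how little plumbing an SMS command interface like this needs, here’s a rough sketch of a Twilio webhook in Python using Flask. The command names follow the demo; everything else – the endpoint, the canned evaluation data, the stand-in lookup – is made up for illustration.

# Hypothetical sketch of the SMS command flow described above; not Raymond's code.
from flask import Flask, request
from twilio.twiml.messaging_response import MessagingResponse

app = Flask(__name__)

HELP = ('Commands: "Taleo help", "Taleo eval requests", '
        '"Start eval <n>", "Confirm submit"')

def open_evaluations(phone_number):
    # Stand-in for a real Taleo lookup keyed off the sender's number.
    return ["1. Candidate A - Sr. Developer", "2. Candidate B - UX Researcher"]

@app.route("/sms", methods=["POST"])
def incoming_sms():
    body = request.form.get("Body", "").strip().lower()
    reply = MessagingResponse()
    if body == "taleo eval requests":
        reply.message("\n".join(open_evaluations(request.form.get("From"))))
    elif body.startswith("start eval"):
        # In the real demo, each question then arrives as its own message.
        reply.message("Q1 of 5: How would you rate the candidate's technical depth?")
    elif body == "confirm submit":
        reply.message("Evaluation submitted. Thanks!")
    else:  # "taleo help" or anything unrecognized
        reply.message(HELP)
    return str(reply)

Point Twilio’s messaging webhook at the /sms route, and the rest is just string handling.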

As always, the disclaimer applies. This is not product. It’s simply a concept demo, built to show people the type of R&D we, Oracle Applications User Experience and this team, do. Not product, only research.

Find the comments.

Taleo Interview Evaluations, Part 1

July 2nd, 2014 6 Comments

Time to share a new concept demo, built earlier this Spring by Raymond and Anthony (@anthonyslai), both of whom are ex-Taleo.

Back in April, I got my first exposure to Taleo during a sales call. I was there with the AUX contingent, talking about Oracle HCM Cloud Release 8, featuring Simplified UI, our overall design philosophies and approaches, i.e. simplicity-mobility-extensibility, glance-scan-commit, and our emerging technologies work and future cool stuff.

I left that meeting with an idea for a concept demo, streamlining the interview evaluation process with a Google Glass app.

The basic pain point here is that recruiters have trouble urging the hiring managers they support through the hiring process because these managers have other job responsibilities.

It’s the classic Catch-22 of hiring; you need more people to help do work, but you’re so busy doing the actual work, you don’t have time to do the hiring.

Anyway, Taleo Recruiting has the standard controls, approvals and gating tasks that any hiring process does. One of these gating tasks is completing the interview evaluation; after interviewing a candidate, the interviewer, typically the hiring manager and possibly others, completes an evaluation of the candidate that determines her/his future path in the process.

Good evaluation, the candidate moves on in the process. Poor evaluation, the candidate does not.

Both Taleo’s web app and mobile app provide the ability to complete these evaluations, and I thought it would be cool to build a Glass app just for interview evaluations.

Having a hands-free way to complete an evaluation would be useful for a hiring manager walking between meetings on a large corporate campus or driving to a meeting. The goal here is to bring the interview evaluation closer to the actual end of the interview, while the chat is still fresh in the manager’s mind.

Imagine you’re the hiring manager. Rather than delaying the evaluation until later in the day (or week), walk out of an interview, command Glass to start the evaluation, have the questions read directly into your ear, dictate your responses and submit.

Since the Glass GDK dropped last Winter, Anthony has been looking for a new Glass project, and I figured he and Raymond would run with a Taleo project. They did.

The resulting concept demo is a Glass app and an accompanying Android app that can also be used as a dedicated interview evaluation app. Raymond and Anthony created a clever way to transfer data using the Bluetooth connection between Glass and its parent device.

Here’s the flow, starting with the Glass app. The user can either say “OK Glass” and then say “Start Taleo Glass,” or tap the home card, swipe through the cards and choose the Start Taleo Glass card.

The Glass app will then wait for its companion Android app to send the evaluation details.

Next, the user opens the Android app to see all the evaluations s/he needs to complete, and then selects the appropriate one.

Tapping Talk to Google Glass sends the first question to the Glass over the Bluetooth connection. The user sees the question in a card, and Glass also dictates the question through its speaker.

Tapping Glass’ touchpad turns on the microphone so the user can dictate a response, either choosing an option for a multiple choice question or dictating an answer for an open-ended question. As each answer is received by the Android app, the evaluation updates, which is pretty cool to watch.

The Glass app goes through each question, and once the evaluation is complete, the user can review her/his answers on the Android app and submit the evaluation.

The guys built this for me to show at a Taleo and HCM Cloud customer expo, similar to the one AMIS hosted in March. After showing it there, I decided to expand the concept demo to tell a broader story. If you want to read about that, stay tuned for Part 2.

Itching to sound off on this post, find the comments.

Update: The standard disclaimer applies here. This is not product of any kind. It’s simply a concept demo, built to show people the type of R&D we, Oracle Applications User Experience and this team, do. Not product, only research.

I Guess Wearables Are a Thing

June 20th, 2014 5 Comments

For what seems like ages, the noise around wearable technology has been building, but until recently, I’ve been skeptical about widespread adoption.

Not anymore, wearables are a thing, even without an Apple device to lead the way.

Last week, Noel (@noelportugal) and I attended the annual conference of the Oracle HCM Users Group (@ohugupdates); the Saturday before the conference, we showed off some of our wearable demos to a small group of customers in a seminar hosted by Oracle Applications User Experience.

As usual, we saturated the Bluetooth spectrum with our various wearables.

This doesn’t even include Noel’s Glass and Pebble.

The questions and observations of the seminar attendees showed a high level of familiarity with wearables of all types, not just fitness bands, but AR glasses and other, erm, wearable gadgets. A quick survey showed that several of them had their own wearables, too.

Later in the week, chatting up two other customers, I realized that one use case I’d thought was bogus is actually real, the employee benefits plus fitness band story.

In short, employers give out fitness bands to employees to promote healthy behaviors and sometimes competition; the value to the organization comes from an assumption that the overall benefit cost goes down for a healthier employee population. Oh, and healthy people are presumably happier, so there’s that too.

At a dinner, I sat between two people, who work for two different employers, in very different verticals; they both were wearing company-provided fitness trackers, one a Garmin device, the other a FitBit. And they both said the devices motivated them.

So, not a made-up use case at all.

My final bit of anecdotal evidence from the week came during Jeremy’s (@jrwashley) session. The room was pretty packed, so I decided to do some Bluetooth wardriving using the very useful Bluetooth 4.0 Scanner app, which has proven to be much more than a tool for finding my lost Misfit Shine.

From a corner of the room, I figured my scan covered about a third of the room.

The scan turned up at least six wearables, five of which weren’t mine. I can’t tell what some of the devices are, e.g. One, and devices like Google Glass and the Pebble watch won’t be detected by this method. We had about 40 or so people in the room, so even without scanning the entire room, that’s a lot of people rocking wearables.
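If you want to run your own informal census and don’t have the Android app handy, a few lines of Python will do roughly the same thing from a laptop with Bluetooth 4.0. This sketch uses the bleak library, which arrived well after this post, so treat it as an assumption rather than a recipe.

# Rough desktop equivalent of the Bluetooth 4.0 Scanner app: list nearby BLE advertisers.
# Requires Python 3.7+ and `pip install bleak`.
import asyncio
from bleak import BleakScanner

async def census():
    devices = await BleakScanner.discover(timeout=10.0)
    for d in devices:
        print(d.address, d.name or "(unnamed)")
    print(len(devices), "BLE devices in range")

asyncio.run(census())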

If you’re not impressed by my observations, maybe some fuzzy app-related data will sway you. From a TechCrunch post:

A new report from Flurry Analytics shows that health and fitness apps are growing at a faster rate than the overall app market so far in 2014. The analytics firm looked at data from more than 6,800 apps in the category on the iPhone and iPad and found that usage (measured in sessions) is up 62% in the last six months compared to 33% growth for the entire market, an 87% faster pace.

This data comes just as Apple and Google aim to boost the ecosystem for fitness apps and wearables with HealthKit and Google Fit, both of which aim to make it easy for wearable device manufacturers to share their data and app developers to use that data to make even better apps.

Of course, if/when Apple and Google make their plays, wearables will only get more prevalent.

So, your thoughts, about wearables, your own and other people’s, corporate wellness initiatives, your own observations, belong in the comments.

Fourfecta! Java, IoT, Making and Raspi

June 18th, 2014 Leave a Comment

Here comes more Maker content for your reading pleasure, this time it’s an OTN piece on Java and the Internet of Things:

A Perfect Match: Java and the Internet of Things

The piece features lots of Noel (@noelportugal) wisdom, on making, on IoT, on the Raspi and on Java, his own personal fourfecta. If you’re scanning (shame on you), look for the User Experience and the Internet of Things section.

Here’s a very Noel quote:

“Java powers the internet, our banks, and retail enterprises—it’s behind the scenes everywhere,” remarks Portugal. “So we can apply the same architectures, security, and communication protocols that we use on the enterprise to an embedded device. I have used Arduino, but it would be hard to start a web server with it. But with Raspberry Pi, I can run a server, or, for example, use general-purpose I/O where I can connect sensors. The point is that developers can translate their knowledge of Java from developing on the enterprise to embedded things.”

For those attending Kscope14 next week, come find us for the in-person version and see IFTTPi, the fourfecta in action.

Find Us at Kscope14 Next Week in Seattle

June 17th, 2014 3 Comments

Kscope14 (#kscope14), the annual conference of the Oracle Development Tools User Group, affectionately ODTUG (@odtug), will happen next week in beautiful Seattle, June 22 to June 26.

The good people at ODTUG have graciously invited me back as a speaker for the 2014 vintage, and Anthony (@anthonyslai) will be my wingman for our session, Oracle Cloud and the New Frontier of User Experience.

Here are the particulars:

Title: Oracle Cloud and the New Frontier of User Experience
When: Monday, June 23, 2014, Session 1, 10:45 am – 11:45 am
Abstract
A wristband that can unlock and start your car based on a unique cardiac rhythm. Head-mounted displays that layer digital information over reality. Computers, robots, drones, and more controlled with a wave of the hand or a flick of the wrist. Everyday objects connected to the Internet that convey information in an ambient way. Fully functional computers on tiny sticks. Invisible fences that control the flow of data. Science fiction isn’t fiction anymore, and people aren’t tied to PCs and desks. Everything is a device, everything is connected, everything is smart, and everything is an experience. Come see the R&D work of Oracle’s Applications User Experience team and explore new devices, trends, and platforms.

Noel (@noelportugal) will be tagging along as well, and I think we’ll have a scaled-down, but still fun, version of the IFTTPi activity the guys showed at the Maker Faire last month.

So, if you want to hear about and see the emerging technologies R&D coming out of Oracle Applications User Experience (@usableapps), try out Google Glass, Leap Motion, various wearables, play with the Sphero, or just say hi, come find us in Seattle.

Maker Movement Fuels the Internet of Things

June 16th, 2014 Leave a Comment

The Java Team recently released a short video compiling selected moments from last month’s MakerCon and Maker Faire. If you recall, we were lucky to be invited to participate in both events, both of which were tons of fun, enlightening and inspiring.

At 0:33 you’ll see the some of the guys hamming it up for the camera, and Jeremy’s (@jrwashley) keynote at MakerCon is featured prominently as a voiceover.

Enjoy.

Google Glass and First World Problems

June 16th, 2014 5 Comments

If you have Google Glass, you’ve probably seen the “Glass must cool down” card a few times.

After a while, you begin to expect the card when your right temple starts to get uncomfortably warm. Apparently, Anthony (@anthonyslai), our resident Glass expert and long-time Glass Explorer, has a protip to handle this problem: two cans of cold soda.

Ultan’s (@ultan) Glass cooling off

I guess Ultan (@ultan) first encountered this clever solution while Anthony was presenting at the EchoUser Wearables Design Jam a couple weeks ago.

Now I have an efficient way to solve this decidedly First World Problem.

The Qualcomm Toq

May 29th, 2014 15 Comments

Editor’s Note: Here’s a post from newish ‘Lab member, Tony. Enjoy, and maybe if you’re nice in comments, he’ll write more. Or not, we won’t know until we know.

The ideas flying, crawling, walking, and slithering around us in the sunny, windy San Francisco Bay setting made for an enjoyable, educational, and truly inspirational experience. The O’Reilly Solid conference: Software/Hardware/Everywhere was last week, and with it, the future finally materialized. Wearables, robots, new materials, new methods, and new software have arrived to change . . . everything.

This spirit was interrupted – no, augmented – for a few hours at the beginning of Solid by some hushed mumbles: O’Reilly was giving away 30 smartwatches at lunchtime!

I will spare the details of finding and analyzing the official rules, staking out and running reconnaissance around the giveaway area, listening in, photographing, and hunkering down. I created my first personal twitter account and opened 10 identical tabs on my smartphone, ready to spawn the required golden tweet. I proudly whispered this strategy to a colleague who responded: “OK, you’ve gone too far now.” I agreed and then quietly, though not completely unabashedly, created two more tabs.

The Toq Smartwatch by Qualcomm features Qualcomm’s Mirasol display technology, which delivers a sizable, always-on, color touch screen without consuming much power. The screen is readable even in direct sunlight. In darkness, double-tap the secret spot on the upper band to toggle the happy backlight. The screen snappily responds to touch, and the battery lasted a full week in my test. Given that the display stays on for so long between charges, I find it difficult to overly criticize the often washed out, blurred colors.

The watch face is so much bigger than the band that the screen overlaps my hand a bit. The watch often digs in when the wrist is bent, say when using an armrest to get up from a chair. Tightening the band to prevent the discomfort is not an option. The Toq band is cut to fit, and careful with those scissors: a battery and sensors in the band mean you cannot replace it. The design of the band does not permit an analog fit as there are a finite number of slots. If you are one of the lucky ones with a blessed wrist size then you should be able to use Toq without frequent pain. Got pain? Regularly shove the watch up to where your arm is thicker, or sell it to someone with a wrist of equal or lesser circumference to your own.

The software, both on the Toq itself and on the required Android-only device, is adequate. Devices stayed paired and notifications were timely. Range was around 30 feet. What more do you want in a smartwatch? How about using your voice to dictate a text!? Pretty cool, Toq! An SDK is also available for you to make your own Android apps that communicate with Toq. I tried downloading it, but they wanted me to create an account, so I didn’t. I was also discouraged by the quiet, small dev forum.

I seldom wear a watch, but I am never without my smartphone. So will I use a smartwatch regularly? I really like being able to casually look down and immediately read a new email/chat/text. Quick access to stocks, weather, calendar, and basic music controls come in handy sometimes. Overall though, Toq leaves me wanting more: a true smartphone experience, always on, on my wrist. But then maybe Toq has done its job. I think I have seen the light, the conversion has been made, and I am enthusiastically on board for next time.

Bottom line: Qualcomm Toq is OK for a free gift but I want more.

The Narrative Clip

May 28th, 2014 4 Comments

Editor’s note: Here’s another post from friend of the ‘Lab and colleague, John Cartan. When John reached out, offering a review of the Narrative Clip (née Memento), I jumped at the opportunity to read and publish his thoughts, and not just because I value his insights.

When Noel (@noelportugal) and I were in the Netherlands for the awesome event hosted by AMIS in March, we ran into Sten Vesterli (@stenvesterli), Ace Director and OAUX Advocate, who was sporting the very same Narrative Clip. We both quizzed Sten about it and were intrigued to explore future uses and cool demos for the little life-logging camera.

Anyway, John’s review reminded me, and now we have more anecdotal usage on which to draw if/when we get to building for the Narrative Clip.

Enjoy.

For several weeks now I’ve been wearing a small orange gadget clipped to my shirt – a “lifelogging” camera called the “Narrative Clip”. We thought we might be able to use it for ethnographic field studies (following users around to see how they do their job), or maybe for recording whiteboards during brainstorming meetings. But I was especially curious to see how other people would react to it.

From L-R: the Narrative Clip’s box, John, the Narrative Clip

Usage

The device itself is small (about the size of a Triscuit) and easy to use: just clip it onto your shirt or collar and forget it. It takes a photo once every 30 seconds without flashing lights or any visible indication. At the end of the day you hook it to a Mac or PC with a 3-inch USB cable to both upload the day’s photos and recharge the device.

The camera can be temporarily deactivated by putting it face down on a table or in a purse or pocket. In practice I found that my pocket wasn’t dark enough, so I made a small carrying case out of a box of mints.

Once the photos are transferred (which takes only a minute or two) you can either leave them on your hard disk, upload them to a cloud server, or both. The server upload and processing takes anywhere from ten minutes to six hours or more. Once uploaded, the images are straightened, cropped, sorted to remove blurry photos, organized into groups, and made available to a free iPhone or Android browser app.

The cloud storage is effortless and requires no local storage but sometimes over-crops (it once chopped the heads off all the people in a meeting I monitored) and provides only limited access to the photos (you have to mail yourself reduced photos from the phone app one at a time).

So I think that for full control you have to enable the local storage option. This works fine, but creates more work. You can easily generate over a thousand photos a day, which all have to be sorted and rotated. The photos consume a gig or more each day, which may eventually overwhelm your local hard drive; for long-term usage I would recommend a dedicated external drive.
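If you do go the local-storage route, a small script can at least keep the daily flood organized. Here’s a rough sketch – the paths are made up, and it sorts by file timestamp rather than anything clever – that sweeps each day’s photos into dated folders on an external drive.

# Hypothetical housekeeping for a day's worth of Clip photos; adjust paths to taste.
import shutil
from datetime import datetime
from pathlib import Path

INBOX = Path.home() / "Pictures" / "NarrativeClip"   # wherever the uploader drops photos (assumed)
ARCHIVE = Path("/Volumes/ClipArchive")               # the dedicated external drive (assumed)

for photo in INBOX.glob("*.jpg"):
    day = datetime.fromtimestamp(photo.stat().st_mtime).strftime("%Y-%m-%d")
    dest = ARCHIVE / day
    dest.mkdir(parents=True, exist_ok=True)
    shutil.move(str(photo), str(dest / photo.name))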

Photo Quality

Each raw photo is 2592 x 1944 (5 megapixels). The quality is acceptable in full light, grainy in low light (there is no flash). But because the photos are taken mindlessly while clipped to a shirt that may bounce or sag, the results are generally poor: mostly shots of the ceiling or someone’s elbow. There is no way to check the images as they are taken, so if the lens is blocked by a droopy collar you may not discover this until the end of the day (as happened to me once). And the camera generally won’t be pointed in the direction you are looking unless you glue it to your forehead or wear it on a hat. You can force a photo by double-tapping, but this doesn’t work well.

For all these reasons the Narrative Clip is not a replacement for a normal camera. But the random nature of the photo stream does have some redeeming qualities: it notices things you do not (a passing expression on someone’s face, an interesting artifact in an odd corner of someone’s cube, etc.) and it creates a record of small moments during the course of a day which would otherwise be quickly forgotten. Even if most of the photos are unusable, they do tend to jog your memory about the actual sequence of events. And because the photos are un-posed they can sometimes capture more authentic moments than a more obvious camera usually would.

Possible Applications

The key to designing a great user experience for enterprise software is to first understand your user: what her job is, how she does it, what challenges she has to overcome each day, etc. One way of doing this is an “ethnographic field study” – the researcher follows the user around and documents a typical day.

Our original idea was that the Narrative Clip could enhance ethnographic field studies. Either the researcher could wear it while following a user, or you could ask the user to wear it for a day and then meet later to review the photos.

I think both of these ideas are worth trying. The Narrative Clip would not replace a normal camera; its main value would be to jog the memory when writing up reports at the end of the day. Similarly, if the user wears the clip herself, the researcher should schedule time the next day to step through the photos together and answer questions (“What were you doing here? Who is that? It looks like you stepped briefly onto the shop floor after lunch – how often do you do that?”).

There are other applications as well. I set up the camera in a meeting room to take a photo of the whiteboard every 30 seconds. This could be a quick and easy way to capture drawings during the course of a brainstorming session. Placing the camera far enough back to capture the entire board meant the writing was hard to discern; it might work with good lighting and strong marking pens.

John conducting at the whiteboard

Setting the clip on a table during an interview allowed me to collect a collage of un-posed portraits which, in total, gave a more accurate reflection of the subject’s personality than any single posed photo could provide.

Logging an interview with one of our OAUX colleagues

Another possible application is using the camera to take photos from the dashboard of a moving car. For optimal results the camera needs to be placed near the windshield and high enough to avoid photographing the hood of the car. I achieved a stable mount by clipping the camera to a placard holder (from an office supply store) and placing that on a dashboard sticky pad (from an auto supply store).

Personal Reactions

As we enter the age of wearable sensors and the Internet of Things, we are starting to ask a new question during our design sessions: “is that creepy?” As technologists we are naturally excited by the new applications and the bounty of data made available. But as we think about the user experience of our customers, it is important to consider what it’s like being on the other end of the camera. Wearing the Narrative Clip was a great way to explore personal reactions to this brave new world.

I found that in general people didn’t notice the device (or were too polite to ask about it) unless I brought it up. But once they realized it was a camera, some people were uncomfortable (at first). Most people didn’t seem to mind too much once they understood how it worked, but some people were definitely shy about having their photos taken. Some changed positions so as not to be in my normal field of vision. One person requested that I destroy any photos it might take of her. It helps to explain what you’re doing and ask permission first.

Here is what one acquaintance of mine confessed:

“What I think is that I value one-to-one time that is ephemeral. Not recorded. Felt in the heart. I feel threatened when recorded without permission. Sigh. I know. That sounds dumb. I mean, with cell phones everywhere, I don’t even have privacy in the gym locker room. Then the flip side of my brain starts blabbing: “What are you worrying about? Who would want to see your body or record your thoughts anyway?” Am I just prejudiced? I would not want to hire someone I interviewed if they wore one. I would leave the dinner table if a date wore one.”

I feel that it is very important to respect attitudes like this. If people are uncomfortable with a new technology, they will find ways to bypass or subvert it. Sensor-based enterprise applications will only succeed if we strike the right balance between convenience and privacy, are upfront about exactly what data we are collecting and how it will be used, and show respect by asking permission and letting people opt in as much as possible.

Conclusions

The Narrative Clip is a solid, easy to use device that could be helpful for tasks like ethnographic fieldwork, but culling through the flood of random images requires time and effort. Further experimentation is needed to determine if the trade-off would be worthwhile.

Recording entire days – and being recorded by others – was an illuminating experience. Sensor-based technologies can provide treasure troves of data, but it’s always worth asking what it would be like to be on the other end of the camera. A reasonable balance can be struck if we are transparent about what we are doing and show respect by asking permission.

The Misfit Shine

May 27th, 2014 16 Comments

Over the past 12 months, the chatter about wearables (glasses, watches, bands, clothing, material) has become too loud to ignore. It almost seems like manufacturers will force wearables on consumers, like it or not.

There are good uses for wearables, and one of the most common is the fitness tracker.

Although I haven’t worn one myself until recently, I’ve been around lots of people who have, e.g. my wife had an early FitBit, Noel (@noelportugal) was an early adopter of the Nike+ Fuelband and has a Jawbone UP, Ultan (@ultan) has at least a dozen different fitness trackers, etc.

I finally made the jump and bought the Misfit Wearables Shine, and after wearing it for a week, I’m impressed. I do wonder how long it will keep my attention though.

Pros

Of all the fitness bands and smartwatches (and smartphone apps) that track activity, I chose the Shine because I love the small form factor and the flexible ways to wear it. The Shine is about the diameter of a quarter, and guessing here, about the thickness of two or three quarters stacked.

So, yeah, it’s small. It comes with a wristband and a magnetic clasp, and you can buy other, erm, Shine holders including necklaces, leather wristbands and even socks and t-shirts, specifically designed to hold the little guy.

Another plus for the Shine is that it takes a standard watch battery, no need to charge it or tether it for syncing, a common complaint about other fitness trackers.

The Shine uses Bluetooth 4.0 (a.k.a. Bluetooth Low Energy) to communicate with the phone. BLE uses less power than the older spec, but keeping the Bluetooth receiver on all the time runs down the battery noticeably.

Even though its design is minimalist, the Shine can tell you the time, if you learn its indicators and ensure you know which side is 12 o’clock. Easier than a binary clock, but it requires some learning.

My experience so far has been pretty positive. I like the little guy, but I’m not sure how long I’ll stay engaged. This isn’t a Misfit problem though.

Cons

There are some noteworthy negatives.

Misfit only provides a mobile app for the Shine, no accompanying web app, which I actually don’t mind, yet. This does limit the metrics and analytics a bit, which I know other people like, especially as they accumulate data over time. So, this will eventually bug me.

I’m a fan of the quantified self, to a fault; I used to carry a workout journal with eight years of handwritten data in it.

I’m *that* guy.

Misfit has no publicly-available developer options, no APIs, no SDK. They have been promising an API for a while now, so I assume it’s coming soon. An SDK would be nice, e.g. to allow developers to access the Shine for glanceable notifications. Not sure if that’s in the cards or not.

Finally, one of the positives can be a negative. I like the different options for wearing the Shine, and I’ve tested out both the sports band and the magnetic clasp. The latter leads me to a con; it’s easy to lose the Shine.

Case in point, I was wearing the Shine attached to my shorts. I went about my day and suddenly realized it was missing. Looking at the last time I had synced, I retraced my steps to no avail, using the Bluetooth scanning feature as a BLE dowsing rod of sorts.

As a last resort, I pinged Noel, BLE master. He pointed me to an Android app called simply Bluetooth 4.0 Scanner and within minutes, I had found it.

Huzzah for Noel! Huzzah for Bluetooth 4.0 Scanner! Reading the comments on that app shows that my use case is not unique. Perhaps the developer should rename it, Fitness Band Finder, or some such.
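For the terminally curious, the same dowsing-rod trick works from a laptop too. Here’s a rough sketch in Python using the bleak library (not the app Noel recommended, and a much later arrival than this post): scan repeatedly and watch the signal strength climb as you close in on the lost tracker.

# Hypothetical Shine-finder; needs a recent bleak (0.19+) and Python 3.7+.
import asyncio
from bleak import BleakScanner

TARGET = "Shine"  # assumed advertised name; run a plain scan first to confirm it

async def dowse():
    while True:
        found = await BleakScanner.discover(timeout=5.0, return_adv=True)
        for device, adv in found.values():
            if device.name and TARGET.lower() in device.name.lower():
                # RSSI climbs toward zero as you get closer to the device.
                print(device.name, device.address, adv.rssi, "dBm")

asyncio.run(dowse())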

Anyway, that’s my week or so with the Misfit Shine.

Find the comments.

Saying Wearables in Spanish

May 23rd, 2014 11 Comments

Friend of the ‘Lab, Bob Rhubart (@otnarchbeat) recently recorded a segment with our own Noel (@noelportugal) and Sarahi Mireles (@sarahimireles), a UX developer from our Mexico Development Center.

The topic was wearables, but I only know this because they told me. Google Translate wasn’t very helpful, unless “Manos libres y vista al frente: Con el futuro puesto” means “Handsfree and front view: With the future since.”

Anyway, enjoy.

Update: Noel pointed me to an English version on the same topic.