I’ve been traveling a lot lately, which is bad. I’ve been consuming a lot of in-flight wifi, which is good, because there really should be no place on Earth where I’m unable to work.
Plus, it’s internets at 35,000 feet. How cool is that?
Today, I found myself in the throes of a decidedly first world problem. Of the many devices I carry, I couldn’t decide which one to use for the airplane wifi, which is, naturally, charged per-device.
Normally, I’d go with the tablet, since it’s a nice mix of form factors. The laptop is my preference, but I end up doing in-seat yoga to use it, not a good look.
But, horror of horrors, the tablet’s battery was only 21%. Being an Android tablet, that wouldn’t be enough to make it to my destination. I do carry a portable battery, but it won’t charge the Nexus 7 tablet, for some odd reason.
Recursive, first world problems.
I debated smartphone vs. laptop for a minute or two before I realized what an awful, self-replicating, first world problem this was. So, I made a call and immediately did what anyone would do: tweeted about it.
What has become of me.
Editor’s note: Hey a new author! Here’s the first one, of many I hope, from Bill Kraus, who joined us back in February. Enjoy.
One of the best aspects of working in the emerging technologies team here in Oracle’s UX Apps group is that we have the opportunity to ‘play’ with new technology. This isn’t just idle dawdling, but rather play with a purpose – a hands-on exercise exploring new technologies and brainstorming on how such technologies can be incorporated into future enterprise user experiences.
Some of this technology, such as beacons and wearables, has obvious applications. The relevance of other technologies, such as quadcopters and drones, is less obvious (notwithstanding their possible use as a package delivery mechanism for an unnamed online retail behemoth).
As an amateur wildlife and nature photographer, I’ve dabbled in everything from digiscoping to infrared imaging to light painting to underwater photography. I’ve also played with strapping lightweight keychain cameras to inexpensive quadcopters (yes, I know I could get a DJI Phantom and a GoPro, but at the moment I prefer to test my piloting skills on something that won’t make me shed tears – and incur the wrath of my spouse – if it crashes).
After telling my colleagues recently over lunch about my quadcopter adventures (I’ve already lost several in the trees and waters of the Puget Sound), Tony, Luis, and Osvaldo decided to purchase their own, and we had a blast at our impromptu ‘flight school’ at Oracle. The guys did great, and Osvaldo’s copter even had a tête-à-tête with a hummingbird, which seemed a bit confused over just what was hovering before it.
This is all loads of fun, but what do flying quadcopters have to do with the Internet of Things? Well, just as a quadcopter allows a photographer to get a perspective previously thought impossible, mobile technology combined with embedded sensors and the cloud has allowed us to break the bonds of the desktop and view data in new ways. No longer do we interact with digital information at a single point in time and space; rather, we are now enveloped by it every waking (and non-waking) moment, and we have the ability to view this data from many different perspectives. How this massive flow of incoming data is converted into useful information will depend in large part on context (you knew I’d get that word in here somehow), analogous to how the same subject can appear dramatically different depending on the photographer’s (quadcopter-assisted) point of view.
In fact, the Internet of Things is as much about space as it is about things – about sensing, interacting with and controlling the environment around us using technology to extend what we can sense and manipulate. Quadcopters are simply a manifestation of this idea – oh, and they are also really fun to fly.
Noel (@noelportugal) and Raymond have been working on a secret project. Here’s the latest:
So now you know why Noel bought the slap bands, but what goes in the case?
If you’ve been watching, you might know already.
OTN (the Oracle Technology Network) is designed to help Oracle users with community-generated resources. Every year the OTN team organizes worldwide tours that allow local users to learn from subject matter experts in all things Oracle. For the past few years the UX team has been participating in the OTN Latin America Tour, as well as tours in other regions. This year I was happy to accept their invitation to deliver the opening keynote for the Mexico City tour stop.
The keynote title was “Wearables in the Enterprise: From Internet of Things to Google Glass and Smart Watches.” Given the AppsLab’s charter and reputation for cutting-edge technologies and innovation, it was really easy to put together a presentation deck on our team’s findings on these topics. The presentation was a combination of the keynote given by our VP, Jeremy Ashley, during MakerCon 2014 at Oracle HQ this past May and our proof-of-concepts using wearable technologies.
I also had a joint session with my fellow UX team member Rafael Belloni titled “Designing Tablet UIs Using ADF.” Here we had the chance to share how users can leverage two great resources freely available from our team:
- Simplified User Experience Design Patterns for the Oracle Applications Cloud Service (register to download e-book here)
- A starter kit with templates used to build Simplified UI interfaces (download kit here)
*Look for “Rich UI with Data Visualization Components and JWT UserToken validation extending Oracle Sales Cloud – 1.0.1”
These two resources are the result of extensive research done by our whole UX organization, and we are happy to share them with the Oracle community. Overall it was a great opportunity to reach out to the Latin American community, especially my fellow Mexican friends.
Here are some pictures of the event and of Mexico City. Enjoy!
Editor’s note: I meant to blog about this today, but it looks like my colleagues over at VoX have beaten me to it. So, rather than try to do a better job (read: do any work at all), I’ll just repost it. Free content, w00t!
Although I no longer carry an iOS device, I’ve seen Voice demoed many times in the past. Projects like Voice and Simplified UI are what drew me to Applications User Experience, and it’s great to see them leak out into the world.
Oracle Extends Investment in Cloud User Experiences with Oracle Voice for Sales Cloud
By Vinay Dwivedi, and Anna Wichansky, Oracle Applications User Experience
Oracle Voice for the Oracle Sales Cloud, officially called “Fusion Voice Cloud Service for the Oracle Sales Cloud,” is available now on the Apple App Store. This first release is intended for Oracle customers using the Oracle Sales Cloud, and is specifically designed for sales reps.
Unless people record new information they learn (e.g., write it down or repeat it aloud), they forget a high proportion of it in the first 20 minutes. The Oracle Applications User Experience team has learned through its research that when sales reps leave a customer meeting with insights that can move a deal forward, it’s critical to capture important details before they are forgotten. We designed Oracle Voice so that sales reps can quickly enter notes and activities on their smartphones right after meetings, no matter where they are.
Instead of relying on slow typing on a mobile device, sales reps can enter information three times faster (pdf) by speaking to the Oracle Sales Cloud through Voice. Voice takes a user through a dialog similar to a natural spoken conversation to accomplish this goal. Since key details are captured precisely and follow-ups are quicker, deals are closed faster and more efficiently.
Oracle Voice is also multi-modal, so sales reps can switch to touch-and-type interactions for situations where speech interaction is less than ideal.
Oracle sales reps tried it first, to see if we were getting it right.
We recruited a large group of sales reps in the Oracle North America organization to test an early version of Oracle Voice in 2012. All had iPhones and spoke American English; their predominant activity was field sales calls to customers. Users had minimal orientation to Oracle Voice and no training. We were able to observe their online conversion and usage patterns through automated testing and analytics at Oracle, through phone interviews, and through speech usage logs from Nuance, which is partnering with Oracle on Oracle Voice.
Users were interviewed after one week in the trial; over 80% said the product exceeded their expectations. Members of the Oracle User Experience team working on this project gained valuable insights into how and where sales reps were using Oracle Voice, which we used as requirements for features and functions.
For example, we learned that Oracle Voice needed to recognize product- and industry-specific vocabulary, such as “Exadata” and “Exalytics,” and we requested a vocabulary enhancement tool from Nuance that has significantly improved the speech recognition accuracy. We also learned that connectivity needed to persist as users traveled between public and private networks, and that users needed easy volume control and alternatives to speech in public environments.
We’ve held subsequent trials, with more features and functions enabled, to support the 10 workflows in the product today. Many sales reps in the trials have said they are eager to get the full version and start using it every day.
“I was surprised to find that it can understand names like PNC and Alcoa,” said Marco Silva, Regional Manager, Oracle Infrastructure Sales, after participating in the September 2012 trial.
“It understands me better than Siri does,” said Andrew Dunleavy, Sales Representative, Oracle Fusion Middleware, who also participated in the same trial.
This demo shows Oracle Voice in action.
What can a sales rep do with Oracle Voice?
Oracle Voice allows sales reps to efficiently retrieve and capture sales information before and after meetings. With Oracle Voice, sales reps can:
Prepare for meetings
- View relevant notes to see what happened during previous meetings.
- See important activities by viewing previous tasks and appointments.
- Brush up on opportunities and check on revenue, close date and sales stage.
Wrap up meetings
- Capture notes and activities quickly so they don’t forget any key details.
- Create contacts easily so they can remember the important new people they meet.
- Update opportunities so they can make progress.
Our research showed that sales reps entered more sales information into the CRM system when they enjoyed using Oracle Voice, which in turn makes Oracle Voice even more useful, because more information is available when those same sales reps are on the go. With increased usage, the entire sales organization benefits from access to more current sales data, improved visibility into sales activities, and better sales decisions. Customers benefit too, from the faster response time sales reps can provide.
Oracle’s ongoing investment in User Experience
Oracle gets the idea that cloud applications must be easy to use. The Oracle Applications User Experience team has developed an approach to user experience that focuses on simplicity, mobility, and extensibility, and these themes drive our investment strategy. The result is key products that refine particular user experiences, like we’ve delivered with Oracle Voice.
Oracle Voice is one of the most recent products to embrace our developer design philosophy for the cloud of “Glance, Scan, & Commit.” Oracle Voice allows sales reps to complete many tasks at what we call glance and scan levels, which means keeping interactions lightweight, or small and quick.
Are you an Oracle Sales Cloud customer?
Oracle Voice is available now on the Apple App Store for Oracle customers using the Oracle Sales Cloud. It’s the smarter sales automation solution that helps you sell more, know more, and grow more.
Will you be at Oracle OpenWorld 2014? So will we! Stay tuned to the VoX blog for when and where you can find us. And don’t forget to drop by and check out Oracle Voice at the Smartphone and Nuance demo stations located at the CX@Sales Central demo area on the second floor of Moscone West.
As part of a secret project Noel (@noelportugal) and Raymond are cooking up, Noel ordered some AppsLab-branded slap bands.
Anyway, I’m sure we’ll have some left over after the double-secret project. So, if you want one, let us know.
Find the comments.
So, back in January, Noel (@noelportugal) took a team of developers to the AT&T Developer Summit Hackathon in Las Vegas.
Although they didn’t win, they built some very cool stuff, combining Google Glass, Philips Hue, Internet of Things, and possibly a kitchen sink in there somewhere, into what can only be described as a smart holster. You know, for guns.
You read that right. This project was way out of our usual wheelhouse, which is what made it so much fun, or so I’m told.
Friend of the ‘Lab Martin Taylor was kind enough to produce, direct and edit the following video, in which Noel describes and demonstrates the holster’s capabilities.
Did you catch the bit at 3:06? That’s Raymond behind the mask.
Editor’s Note: Hey, a new author! Colleague and Friend of the ‘Lab, Joyce Ohgi, a principal usability researcher here at Oracle Applications User Experience, joined several of our guys, plus tall man, all-around good dude, and Friend of the ‘Lab Rafa Belloni (@rafabelloni), to form a super-powered team last week.
This is her story, as told from the inside. Enjoy.
I earned $600 in a coding challenge without writing a single line of code.
Well, strictly speaking, $600/7 = $85.71, 7 being the number of members on our team. The challenge in question? The Oracle Applications User Experience Beacons Developer Challenge, a contest between internal Oracle teams to devise a creative solution using Estimote’s beacons and Oracle Facilities data provided by Oracle Spatial.
We were given: the beacons, some sample data, icons, and images, an example app, a pack of poster gum to stick the beacons on walls, and the freedom to do whatever we could: 1) dream up and 2) execute in 48 hours.
Fast forward: Anthony Lai (@anthonyslai) and I are standing in front of a room of developers and five judges, about to give a presentation on our app, whose back end I still did not fully grasp. How did I get there?
My journey started two days before the official challenge start date. I ate lunch with Tony, one of the developers, and he suggested I join the team because “Why not? It’ll be fun.”
I had heard of the challenge but thought it wasn’t for someone like me, as my now-rusty coding skills were last used for an Intro to C programming class in college; what could I contribute to a contest whose purpose is literally to generate code? But I like Tony, and he promised me it would be fun. So I decided, well, if the team will have me, I’d like to try it out. So I signed up.
One day before the challenge: the team decides to meet in order to: 1) learn each other’s names and 2) come up with a list of ideas, which would be narrowed down once the contest started.
After we all introduced ourselves, the brainstorming began immediately and organically. But, to my surprise, not a single dev was taking notes. How were we going to remember all the ideas and organize ourselves?
As a researcher, one of the basic rules of my job is to always observe and always take notes.
I could be useful! I whipped out my handy iPad with keyboard case and typed away. But some of the ideas didn’t make sense to me, and for the good of the team, I realized I should also be voicing my questions and opinions, not just acting as the scribe.
But the team listened to me. They even agreed with me. Okay, they also disagreed with me sometimes. But they treated me with the same respect they treated each other.
Day of the challenge – final code check-in: Honestly, the whole coding challenge experience is a blur. As a researcher, I’m trained not just to always take notes, but also to take photos whenever possible to retain key details that could be otherwise forgotten.
I got so wrapped up in our project that I didn’t take a single photo of our group. I did take several pictures of our competition though.
Luckily Kathy Miedema dropped by to wish us luck and also snapped a picture.
As for the experience itself, I can only attempt to describe it by painting a picture in words.
We are all seated in the AUX Team’s little Design Room. Although all the chairs are occupied, silence reigns, interrupted only by the soft clicking of keyboards, and the occasional low conversation.
Usually, the mental image of collaboration is a group of people talking together. But in this case, even though it looked like we were all doing our own separate thing, it was intensely collaborative.
Each of our parts would need to come together by the deadline, so we did constant, impromptu, little check-ins to make sure the pieces we were building would integrate quickly.
I checked in constantly as well, seeking confirmation that, of the many research methodologies I could use, the ones I chose gave the team the data they needed: user interviews to capture wants, needs, and task flows of the current processes, and feedback sessions with key stakeholders.
By the way, if you are interested in learning more about research methodology, you can find more info at UX Direct.
So, back to Anthony and me, standing in front of a crowd, about to launch into our demo.
It was crazy; we didn’t have time to do a run-through before; we had some weird display lags using the projector and the Samsung Gear Live smartwatch; the script was too long, and we ran out of time.
Believe me, I have a list of things that we can improve upon for the next challenge, but our idea was good.
Technically, it was solid, thanks to the deep expertise of the team, which probably adds up to close to 100 years of total development experience; and it was based on real users’ needs, thanks to my research.
Happily, we won 2nd place, and $600. Next year, we’ll be gunning for 1st and the cool $1000 prize, which would net $142.86 for each of us.
All kidding aside, it’s not about the prize money or the recognition. It’s about people using their unique skill sets to build something better than any of them could have built on their own.
I will close with a text exchange between Anthony and me, post-challenge:
Me: Thx for letting me participate. I enjoyed seeing “your world” aka development.
Anthony: Uh oh. We are a test species to you.
Me: Don’t worry. A good researcher observes to understand, not to pass judgment.
And later, when I was fretting that I cost our team the win by not contributing any code, Anthony wrote to me:
Contributing code does not mean contributing; contributing does not mean contributing code.
Editor again: Joyce thought the post needed a closing. Thanks to Joyce, Rafa and our guys, Anthony, Luis, Osvaldo, Raymond and Tony for all their hard work. Consider the post closed. Oh, and find the comments.
So, if you read Part 1, you’re all up to speed. If not, no worries. You might be a bit lost, but if you care, you can bounce over and come back for the thrilling conclusion.
I first showed the Taleo Interview Evaluation Glass app and Android app at a Taleo and HCM Cloud customer expo in late April, and as I showed it, my story evolved.
Demos are living organisms; the more you show them, the more you morph the story to fit the reactions you get. As I showed the Taleo Glass app, the demo became more about Glass and less about the story I was hoping to tell, which was about completing the interview evaluation more quickly to move along the hiring process.
So, I began telling that story in context of allowing any user, with any device, to complete these evaluations quickly, from the heads-up hotness of Google Glass, all the way down the technology coolness scale to a boring old dumbphone with just voice and text capabilities.
I used the latter example for two reasons. First, the juxtaposition of Google Glass and a dumbphone sending texts got a positive reaction and focused the demo around how we solved the problem vs. “is that Google Glass?”
And second, I was already designing an app to allow a user with a dumbphone to complete an interview evaluation.
Side note, Noel has long been a fan of Twilio’s, and happily, they are an Oracle Partner. Ultan (@ultan) is hard at work dreaming up cool stuff we can do with Twilio, so stay tuned.
Anyway, Twilio is the perfect service to power the app I had in mind. Shortly after the customer expo ended, I asked Raymond to build out this new piece, so I could have a full complement of demos to show that fit the full story.
In about a week, Raymond was done, and we now have a holistic story to tell.
The interface is dead simple. The user sends text messages to a specific number, using a small set of commands. First, sending “Taleo help” returns a list of the commands. Next, the user sends “Taleo eval requests” to retrieve a list of open interview evaluations.
The user then sends a command to start one of the numbered evaluations, e.g. “Start eval 4”, and each question is sent as a separate message.
When the final question has been answered, a summary of the user’s answers is sent, and the user can submit the evaluation by sending “Confirm submit.”
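For the curious, the command flow described above could be sketched as a small state machine behind an SMS webhook. This is purely illustrative: the command names mirror the demo, but the `EvalSession` class, the sample questions, and the data are all invented here, and the actual Twilio webhook plumbing is omitted.

```python
# Hypothetical sketch of the SMS command layer described above.
# Only the command names come from the post; everything else is invented.
import re

# Stand-in data; the real demo pulled open evaluations from Taleo.
OPEN_EVALS = {
    4: ["Rate the candidate's technical depth (1-5).",
        "Would you advance this candidate? (yes/no)"],
}

class EvalSession:
    """Tracks one texter's progress through an interview evaluation."""

    def __init__(self):
        self.questions = []        # questions not yet asked
        self.answers = []          # answers collected so far
        self.awaiting_submit = False

    def handle(self, text):
        """Return the reply to send back for one inbound SMS."""
        msg = text.strip().lower()
        if msg == "taleo help":
            return ("Commands: 'Taleo eval requests', 'Start eval N', "
                    "'Confirm submit'")
        if msg == "taleo eval requests":
            ids = ", ".join(str(i) for i in sorted(OPEN_EVALS))
            return f"Open evaluations: {ids}. Text 'Start eval N' to begin."
        m = re.match(r"start eval (\d+)", msg)
        if m and int(m.group(1)) in OPEN_EVALS:
            self.questions = list(OPEN_EVALS[int(m.group(1))])
            self.answers = []
            return self.questions.pop(0)   # send the first question
        if self.awaiting_submit and msg == "confirm submit":
            self.awaiting_submit = False
            self.answers = []
            return "Evaluation submitted. Thanks!"
        if self.questions or (self.answers and not self.awaiting_submit):
            # Treat the message as an answer to the current question.
            self.answers.append(text.strip())
            if self.questions:
                return self.questions.pop(0)
            self.awaiting_submit = True
            summary = "; ".join(self.answers)
            return (f"Your answers: {summary}. "
                    "Text 'Confirm submit' to finish.")
        return "Sorry, I didn't understand. Text 'Taleo help' for commands."
```

In a real deployment, each inbound message would arrive via a Twilio webhook and the session would be looked up by the sender’s phone number; here a single session object stands in for all of that.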
And that’s it. Elegant and simple and accessible to any manager, e.g. field managers who spend their days traveling between job sites. Coupled with the Glass app and the Android app, we’ve covered all the bases not already covered by Taleo’s web app and mobile apps.
As always, the disclaimer applies. This is not product. It’s simply a concept demo, built to show people the type of R&D we, Oracle Applications User Experience and this team, do. Not product, only research.
Find the comments.
Back in April, I got my first exposure to Taleo during a sales call. I was there with the AUX contingent, talking about Oracle HCM Cloud Release 8, featuring Simplified UI, our overall design philosophies and approaches, i.e. simplicity-mobility-extensibility, glance-scan-commit, and our emerging technologies work and future cool stuff.
I left that meeting with an idea for a concept demo, streamlining the interview evaluation process with a Google Glass app.
The basic pain point here is that recruiters have trouble moving the hiring managers they support through the hiring process, because those managers have other job responsibilities.
It’s the classic Catch-22 of hiring; you need more people to help do work, but you’re so busy doing the actual work, you don’t have time to do the hiring.
Anyway, Taleo Recruiting has the standard controls, approvals and gating tasks that any hiring process does. One of these gating tasks is completing the interview evaluation; after interviewing a candidate, the interviewer, typically the hiring manager and possibly others, completes an evaluation of the candidate that determines her/his future path in the process.
Good evaluation, the candidate moves on in the process. Poor evaluation, the candidate does not.
Both Taleo’s web app and mobile app provide the ability to complete these evaluations, and I thought it would be cool to build a Glass app just for interview evaluations.
Having a hands-free way to complete an evaluation would be useful for a hiring manager walking between meetings on a large corporate campus or driving to a meeting. The goal here is to bring the interview evaluation closer to the actual end of the interview, while the chat is still fresh in the manager’s mind.
Imagine you’re the hiring manager. Rather than delaying the evaluation until later in the day (or week), walk out of an interview, command Glass to start the evaluation, have the questions read directly into your ear, dictate your responses and submit.
Since the Glass GDK dropped last winter, Anthony has been looking for a new Glass project, and I figured he and Raymond would run with a Taleo project. They did.
The resulting concept demo is a Glass app and an accompanying Android app that can also be used as a dedicated interview evaluation app. Raymond and Anthony created a clever way to transfer data using the Bluetooth connection between Glass and its parent device.
Here’s the flow, starting with the Glass app. The user can either say “OK Glass” and then say “Start Taleo Glass,” or tap the home card, swipe through the cards and choose the Start Taleo Glass card.
The Glass app will then wait for its companion Android app to send the evaluation details.
Next, the user opens the Android app to see all the evaluations s/he needs to complete, and then selects the appropriate one.
Tapping Talk to Google Glass sends the first question to the Glass over the Bluetooth connection. The user sees the question in a card, and Glass also dictates the question through its speaker.
Tapping Glass’ touchpad turns on the microphone so the user can dictate a response, either choosing an option for a multiple choice question or dictating an answer for an open-ended question. As each answer is received by the Android app, the evaluation updates, which is pretty cool to watch.
The Glass app goes through each question, and once the evaluation is complete, the user can review her/his answers on the Android app and submit the evaluation.
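As a rough illustration of that flow, here is a toy Python model of the exchange: two plain objects stand in for Glass and the companion Android app, and method calls stand in for the Bluetooth messages. All names and questions are invented for illustration; the actual demo was built with the Glass GDK on Android.

```python
# Toy model of the Glass/companion exchange described above.
# Method calls stand in for the Bluetooth channel used in the real demo.

class GlassApp:
    """Receives a question 'card' and returns the user's dictated answer."""

    def __init__(self, dictated_answers):
        # In the demo, answers come from the microphone; here, from a list.
        self._dictation = iter(dictated_answers)

    def show_question(self, question):
        # In the demo: display the card, speak the question, open the mic.
        return next(self._dictation)

class CompanionApp:
    """Holds the evaluation and feeds questions to Glass one at a time."""

    def __init__(self, questions):
        self.questions = questions
        self.completed = {}        # question -> answer, updated live

    def run_evaluation(self, glass):
        for q in self.questions:
            # Send the question over; record the answer as it comes back,
            # which is what makes the evaluation update live on the phone.
            self.completed[q] = glass.show_question(q)
        return self.completed      # reviewed and submitted from the phone

glass = GlassApp(["4", "Strong communicator"])
phone = CompanionApp(["Technical rating (1-5)?", "General comments?"])
result = phone.run_evaluation(glass)
```

The real apps are event-driven rather than a simple loop, but the shape is the same: the phone owns the evaluation, Glass only ever sees one question at a time, and each dictated answer flows back before the next question goes out.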
The guys built this for me to show at a Taleo and HCM Cloud customer expo, similar to the one AMIS hosted in March. After showing it there, I decided to expand the concept demo to tell a broader story. If you want to read about that, stay tuned for Part 2.
Itching to sound off on this post, find the comments.
Update: The standard disclaimer applies here. This is not product of any kind. It’s simply a concept demo, built to show people the type of R&D we, Oracle Applications User Experience and this team, do. Not product, only research.
For what seems like ages, the noise around wearable technology has been building, but until recently, I’ve been skeptical about widespread adoption.
Not anymore; wearables are a thing, even without an Apple device to lead the way.
Last week, Noel (@noelportugal) and I attended the annual conference of the Oracle HCM Users Group (@ohugupdates); the Saturday before the conference, we showed off some of our wearable demos to a small group of customers in a seminar hosted by Oracle Applications User Experience.
As usual, we saturated the Bluetooth spectrum with our various wearables.
The questions and observations of the seminar attendees showed a high level of familiarity with wearables of all types, not just fitness bands, but AR glasses and other, erm, wearable gadgets. A quick survey showed that several of them had their own wearables, too.
Later in the week, chatting up two other customers, I realized that one use case I’d thought was bogus is actually real: the employee-benefits-plus-fitness-band story.
In short, employers give out fitness bands to employees to promote healthy behaviors and sometimes competition; the value to the organization comes from an assumption that the overall benefit cost goes down for a healthier employee population. Oh, and healthy people are presumably happier, so there’s that too.
At a dinner, I sat between two people, who work for two different employers, in very different verticals; they both were wearing company-provided fitness trackers, one a Garmin device, the other a Fitbit. And they both said the devices motivated them.
So, not a made-up use case at all.
My final bit of anecdotal evidence from the week came during Jeremy’s (@jrwashley) session. The room was pretty packed, so I decided to do some Bluetooth wardriving using the very useful Bluetooth 4.0 Scanner app, which has proven to be much more than a tool for finding my lost Misfit Shine.
From a corner of the room, I figured my scan covered about a third of the room.
That’s at least six wearables, five that weren’t mine. I can’t tell what some of the devices are, e.g. One, and devices like Google Glass and the Pebble watch won’t be detected by this method. We had about 40 or so people in the room, so even without scanning the entire room, that’s a lot of people rocking wearables.
If you’re not impressed by my observations, maybe some fuzzy app-related data will sway you. From a TechCrunch post:
A new report from Flurry Analytics shows that health and fitness apps are growing at a faster rate than the overall app market so far in 2014. The analytics firm looked at data from more than 6,800 apps in the category on the iPhone and iPad and found that usage (measured in sessions) is up 62% in the last six months compared to 33% growth for the entire market, an 87% faster pace.
This data comes just as Apple and Google aim to boost the ecosystem for fitness apps and wearables with HealthKit and Google Fit, both of which aim to make it easy for wearable device manufacturers to share their data and app developers to use that data to make even better apps.
Of course, if/when Apple and Google make their plays, wearables will only get more prevalent.
So, your thoughts, about wearables, your own and other people’s, corporate wellness initiatives, your own observations, belong in the comments.
Here comes more Maker content for your reading pleasure, this time it’s an OTN piece on Java and the Internet of Things:
The piece features lots of Noel (@noelportugal) wisdom, on making, on IoT, on the Raspi and on Java, his own personal fourfecta. If you’re scanning (shame on you), look for the User Experience and the Internet of Things section.
Here’s a very Noel quote:
“Java powers the internet, our banks, and retail enterprises—it’s behind the scenes everywhere,” remarks Portugal. “So we can apply the same architectures, security, and communication protocols that we use on the enterprise to an embedded device. I have used Arduino, but it would be hard to start a web server with it. But with Raspberry Pi, I can run a server, or, for example, use general-purpose I/O where I can connect sensors. The point is that developers can translate their knowledge of Java from developing on the enterprise to embedded things.”
The good people at ODTUG have graciously invited me back as a speaker for the 2014 vintage, and Anthony (@anthonyslai) will be my wingman for our session, Oracle Cloud and the New Frontier of User Experience.
Here are the particulars:
Title: Oracle Cloud and the New Frontier of User Experience
When: Monday, June 23, 2014, Session 1, 10:45 am – 11:45 am
A wristband that can unlock and start your car based on a unique cardiac rhythm. Head-mounted displays that layer digital information over reality. Computers, robots, drones, and more controlled with a wave of the hand or a flick of the wrist. Everyday objects connected to the Internet that convey information in an ambient way. Fully functional computers on tiny sticks. Invisible fences that control the flow of data. Science fiction isn’t fiction anymore, and people aren’t tied to PCs and desks. Everything is a device, everything is connected, everything is smart, and everything is an experience. Come see the R&D work of Oracle’s Applications User Experience team and explore new devices, trends, and platforms.
Noel (@noelportugal) will be tagging along as well, and I think we’ll have a scaled-down, but still fun, version of the IFTTPi activity the guys showed at the Maker Faire last month.
So, if you want to hear about and see the emerging technologies R&D coming out of Oracle Applications User Experience (@usableapps), try out Google Glass, Leap Motion, and various wearables, play with the Sphero, or just say hi, come find us in Seattle.
The Java Team recently released a short video compiling selected moments from last month’s MakerCon and Maker Faire. If you recall, we were lucky to be invited to participate in both events, which were tons of fun, enlightening, and inspiring.
If you have Google Glass, you’ve probably seen this card a few times.
After a while, you begin to expect the card when your right temple starts to get uncomfortably warm. Apparently, Anthony (@anthonyslai), our resident Glass expert and long-time Glass Explorer, has a protip for this problem: two cans of cold soda.
Now I have an efficient way to solve this decidedly First World Problem.
Editor’s Note: Here’s a post from newish ‘Lab member, Tony. Enjoy, and maybe if you’re nice in comments, he’ll write more. Or not, we won’t know until we know.
The O’Reilly Solid conference: Software/Hardware/Everywhere was last week, and with it, the future finally materialized. The ideas flying, crawling, walking, and slithering around us in the sunny, windy San Francisco Bay setting made for an enjoyable, educational, and truly inspirational experience. Wearables, robots, new materials, new methods, and new software have arrived to change . . . everything.
This spirit was interrupted–no, augmented–for a few hours at the beginning of Solid by some hushed mumbles: O’Reilly was giving away 30 smartwatches at lunchtime!
I will spare you the details of finding and analyzing the official rules, staking out and running reconnaissance around the giveaway area, listening in, photographing, and hunkering down. I created my first personal Twitter account and opened 10 identical tabs on my smartphone, ready to spawn the required golden tweet. I proudly whispered this strategy to a colleague, who responded: “OK, you’ve gone too far now.” I agreed and then quietly, though not completely unabashedly, created two more tabs.
The Toq Smartwatch by Qualcomm features Qualcomm’s Mirasol display technology, which delivers a sizable, always-on, color touch screen without consuming much power. The screen is readable even in direct sunlight. In darkness, double tap the secret spot on the upper band to toggle the happy backlight. The screen snappily responds to touch, and the battery lasted a full week in my test. Given that the display stays on for so long between charges, I find it difficult to overly criticize the often washed-out, blurred colors.
The watch face is so much bigger than the band that the screen overlaps my hand a bit. The watch often digs in when the wrist is bent, say when using an armrest to get up from a chair. Tightening the band to prevent the discomfort is not an option: the Toq band is cut to fit, and careful with those scissors, because a battery and sensors in the band mean you cannot replace it. The design of the band does not permit an analog fit, as there are a finite number of slots. If you are one of the lucky ones with a blessed wrist size, then you should be able to use Toq without frequent pain. Got pain? Regularly shove the watch up to where your arm is thicker, or sell it to someone with a wrist of equal or lesser circumference than your own.
The software, both on the Toq itself and on the required Android-only device, is adequate. Devices stayed paired, and notifications were timely. Range was around 30 feet. What more do you want in a smartwatch? How about using your voice to dictate a text!? Pretty cool, Toq! An SDK is also available for you to make your own Android apps which communicate with Toq. I tried downloading it, but they wanted me to create an account, so I didn’t. I was also discouraged by the quiet, small dev forum.
I seldom wear a watch, but I am never without my smartphone. So will I use a smartwatch regularly? I really like being able to casually look down and immediately read a new email/chat/text. Quick access to stocks, weather, calendar, and basic music controls come in handy sometimes. Overall though, Toq leaves me wanting more: a true smartphone experience, always on, on my wrist. But then maybe Toq has done its job. I think I have seen the light, the conversion has been made, and I am enthusiastically on board for next time.
Bottom line: Qualcomm Toq is OK for a free gift but I want more.
Editor’s note: Here’s another post from friend of the ‘Lab and colleague, John Cartan. When John reached out, offering a review of the Narrative Clip (née Memoto), I jumped at the opportunity to read and publish his thoughts, and not just because I value his insights.
When Noel (@noelportugal) and I were in the Netherlands for the awesome event hosted by AMIS in March, we ran into Sten Vesterli (@stenvesterli), Ace Director and OAUX Advocate, who was sporting the very same Narrative Clip. We both quizzed Sten about it and were intrigued to explore future uses and cool demos for the little life-logging camera.
Anyway, John’s review reminded me, and now we have more anecdotal usage on which to draw if/when we get to building for the Narrative Clip.
For several weeks now I’ve been wearing a small orange gadget clipped to my shirt – a “lifelogging” camera called the “Narrative Clip”. We thought we might be able to use it for ethnographic field studies (following users around to see how they do their job), or maybe for recording whiteboards during brainstorming meetings. But I was especially curious to see how other people would react to it.
The device itself is small (about the size of a Triscuit) and easy to use: just clip it onto your shirt or collar and forget it. It takes a photo once every 30 seconds without flashing lights or any visible indication. At the end of the day you hook it to a Mac or PC with a 3-inch USB cable to both upload the day’s photos and recharge the device.
The camera can be temporarily deactivated by putting it face down on a table or in a purse or pocket. In practice I found that my pocket wasn’t dark enough, so I made a small carrying case out of a box of mints.
Once the photos are transferred (which takes only a minute or two) you can either leave them on your hard disk, upload them to a cloud server, or both. The server upload and processing takes anywhere from ten minutes to six hours or more. Once uploaded, the images are straightened, cropped, sorted to remove blurry photos, organized into groups, and made available to a free iPhone or Android browser app.
The cloud storage is effortless and requires no local storage, but it sometimes over-crops (it once chopped the heads off all the people in a meeting I monitored) and provides only limited access to the photos (you have to mail yourself reduced-size photos from the phone app one at a time).
So I think that for full control you have to enable the local storage option. This works fine, but creates more work. You can easily generate over a thousand photos a day, which all have to be sorted and rotated. The photos consume a gig or more each day, which may eventually overwhelm your local hard drive; for long-term usage I would recommend a dedicated external drive.
Each raw photo is 2592 x 1944 (5 megapixels). The quality is acceptable in full light, grainy in low light (there is no flash). But because the photos are taken mindlessly while clipped to a shirt that may bounce or sag, the results are generally poor: mostly shots of the ceiling or someone’s elbow. There is no way to check the images as they are taken, so if the lens is blocked by a droopy collar you may not discover this until the end of the day (as happened to me once). And the camera generally won’t be pointed in the direction you are looking unless you glue it to your forehead or wear it on a hat. You can force a photo by double-tapping, but this doesn’t work well.
For all these reasons the Narrative Clip is not a replacement for a normal camera. But the random nature of the photo stream does have some redeeming qualities: it notices things you do not (a passing expression on someone’s face, an interesting artifact in an odd corner of someone’s cube, etc.) and it creates a record of small moments during the course of a day which would otherwise be quickly forgotten. Even if most of the photos are unusable, they do tend to jog your memory about the actual sequence of events. And because the photos are un-posed they can sometimes capture more authentic moments than a more obvious camera usually would.
The key to designing a great user experience for enterprise software is to first understand your user: what her job is, how she does it, what challenges she has to overcome each day, etc. One way of doing this is an “ethnographic field study” – the researcher follows the user around and documents a typical day.
Our original idea was that the Narrative Clip could enhance ethnographic field studies. Either the researcher could wear it while following a user, or you could ask the user to wear it for a day and then meet later to review the photos.
I think both of these ideas are worth trying. The Narrative Clip would not replace a normal camera; its main value would be to jog the memory when writing up reports at the end of the day. Similarly, if the user wears the clip herself, the researcher should schedule time the next day to step through the photos together and answer questions (“What were you doing here? Who is that? It looks like you stepped briefly onto the shop floor after lunch – how often do you do that?”).
There are other applications as well. I set up the camera in a meeting room to take a photo of the whiteboard every 30 seconds. This could be a quick and easy way to capture drawings during the course of a brainstorming session. Placing the camera far enough back to capture the entire board meant the writing was hard to discern; it might work with good lighting and strong marking pens.
Setting the clip on a table during an interview allowed me to collect a collage of un-posed portraits which, in total, gave a more accurate reflection of the subject’s personality than any single posed photo could provide.
Another possible application is using the camera to take photos from the dashboard of a moving car. For optimal results the camera needs to be placed near the windshield and high enough to avoid photographing the hood of the car. I achieved a stable mount by clipping the camera to a placard holder (from an office supply store) and placing that on a dashboard sticky pad (from an auto supply store).
As we enter the age of wearable sensors and the Internet of Things, we are starting to ask a new question during our design sessions: “is that creepy?” As technologists we are naturally excited by the new applications and the bounty of data made available. But as we think about the user experience of our customers, it is important to consider what it’s like being on the other end of the camera. Wearing the Narrative Clip was a great way to explore personal reactions to this brave new world.
I found that in general people didn’t notice the device (or were too polite to ask about it) unless I brought it up. But once they realized it was a camera, some people were uncomfortable (at first). Most people didn’t seem to mind too much once they understood how it worked, but some people were definitely shy about having their photos taken. Some changed positions so as not to be in my normal field of vision. One person requested that I destroy any photos it might take of her. It helps to explain what you’re doing and ask permission first.
Here is what one acquaintance of mine confessed:
“What I think is that I value one-to-one time that is ephemeral. Not recorded. Felt in the heart. I feel threatened when recorded without permission. Sigh. I know. That sounds dumb. I mean, with cell phones everywhere, I don’t even have privacy in the gym locker room. Then the flip side of my brain starts blabbing: “What are you worrying about? Who would want to see your body or record your thoughts anyway?” Am I just prejudiced? I would not want to hire someone I interviewed if they wore one. I would leave the dinner table if a date wore one.”
I feel that it is very important to respect attitudes like this. If people are uncomfortable with a new technology, they will find ways to bypass or subvert it. Sensor-based enterprise applications will only succeed if we strike the right balance between convenience and privacy, are upfront about exactly what data we are collecting and how it will be used, and show respect by asking permission and letting people opt in as much as possible.
The Narrative Clip is a solid, easy to use device that could be helpful for tasks like ethnographic fieldwork, but culling through the flood of random images requires time and effort. Further experimentation is needed to determine if the trade-off would be worthwhile.
Recording entire days – and being recorded by others – was an illuminating experience. Sensor-based technologies can provide treasure troves of data, but it’s always worth asking what it would be like to be on the other end of the camera. A reasonable balance can be struck if we are transparent about what we are doing and show respect by asking permission.
Over the past 12 months, the chatter about wearables (glasses, watches, bands, clothing, material) has become too loud to ignore. It almost seems like manufacturers are determined to make consumers embrace wearables, like it or not.
There are good uses for wearables, and one of the most common is the fitness tracker.
Although I haven’t worn one myself until recently, I’ve been around lots of people who have, e.g. my wife had an early FitBit, Noel (@noelportugal) was an early adopter of the Nike+ Fuelband and has a Jawbone UP, Ultan (@ultan) has at least a dozen different fitness trackers, etc.
I finally made the jump and bought the Misfit Wearables Shine, and after wearing it for a week, I’m impressed. I do wonder how long it will keep my attention though.
Of all the fitness bands and smartwatches (and smartphone apps) that track activity, I chose the Shine because I love the small form factor and the flexible ways to wear it. The Shine is about the diameter of a quarter, and guessing here, about the thickness of two or three quarters stacked.
So, yeah, it’s small. It comes with a wristband and a magnetic clasp, and you can buy other, erm, Shine holders including necklaces, leather wristbands and even socks and t-shirts, specifically designed to hold the little guy.
Another plus for the Shine is that it takes a standard watch battery, no need to charge it or tether it for syncing, a common complaint about other fitness trackers.
The Shine uses Bluetooth 4.0 (a.k.a. Bluetooth Low Energy) to communicate with the phone. BLE uses less power than the older spec, but keeping the Bluetooth receiver on all the time runs down the battery noticeably.
Even though its design is minimalist, the Shine can tell you the time, if you learn its indicators and ensure you know which side is 12 o’clock. Easier than a binary clock, but requires some learning.
My experience so far has been pretty positive. I like the little guy, but I’m not sure how long I’ll stay engaged. This isn’t a Misfit problem though.
There are some noteworthy negatives.
Misfit only provides a mobile app for the Shine, no accompanying web app, which I actually don’t mind, yet. This does limit the metrics and analytics a bit, which I know other people like, especially as they accumulate data over time. So, this will eventually bug me.
I’m a fan of the quantified self, to a fault; I used to carry a workout journal with eight years of handwritten data in it.
I’m *that* guy.
Misfit has no publicly-available developer options, no APIs, no SDK. They have been promising an API for a while now, so I assume it’s coming soon. An SDK would be nice, e.g. to allow developers to access the Shine for glanceable notifications. Not sure if that’s in the cards or not.
Finally, one of the positives can be a negative. I like the different options for wearing the Shine, and I’ve tested out both the sports band and the magnetic clasp. The latter leads me to a con; it’s easy to lose the Shine.
Case in point, I was wearing the Shine attached to my shorts. I went about my day and suddenly realized it was missing. Looking at the last time I had synced, I retraced my steps to no avail, using the Bluetooth scanning feature as a BLE dowsing rod of sorts.
As a last resort, I pinged Noel, BLE master. He pointed me to an Android app called simply Bluetooth 4.0 Scanner and within minutes, I had found it.
Huzzah for Noel! Huzzah for Bluetooth 4.0 Scanner! Reading the comments on that app shows that my use case is not unique. Perhaps the developer should rename it, Fitness Band Finder, or some such.
Anyway, that’s my week or so with the Misfit Shine.
Find the comments.
The topic was wearables, but I only know this because they told me. Google Translate wasn’t very helpful, unless “Manos libres y vista al frente: Con el futuro puesto” really means “Handsfree and front view: With the future since.” (A closer translation: “Hands free and eyes forward: wearing the future.”)
Update: Noel pointed me to an English version on the same topic.
Editor’s note: Here comes a guest post from our old pal and colleague, Ultan O’Broin (@ultan). You can read more of his thoughts and musing at User Experience Assistance and Not Lost in Translation. Enjoy.
Whereable Are the Killer Wearable Translation Use Cases? Glass, Word Lens and UX
By Ultan O’Broin
I just can’t escape the Word Lens-AppsLab vortex. I blogged about the Quest Visual Word Lens augmented reality (AR) mobile translation app for Jake (@jkuramot) a while ago. I’m involved in user experience (UX) outreach with Noel (@noelportugal) or Anthony (@anthonyslai) demoing the Google Glass version of Word Lens. Blogged about that, too.
Now, Google have just announced an acquisition of Word Lens. That’s good news for the Glass version. The current version “works” but hardly at the level UX aspires to. The AR translation is impacted by stuff like how much the Glass wearer’s head moves, the fonts used in the target image, and so on. Likely this acquisition will mean Google Translate’s overall UX improves, offering an upped experience in terms of optimized UIs on different devices, all pivoting around a cloud-based real-time translation of a wide range of language combinations.
Up to now, these mobile translation apps (there’s a ton) seem like a hammer in search of a nail. Finding a consumer nail, let alone an enterprise one, often leaves me scratching at the bottom of the toolbox. Besides the translation quality, contextual factors are forgotten. Stuff like the cost of operation or of the device, or the very environment you’ve got to work in.
Take Word Lens on Glass. Great for wearables ideation, the promise of an immersive UX, the AR potential, and people just love the awesome demos. But would you ever use it for real, right now?
Consider this: I’m a Glass Explorer and a runner. I recently did a 10 miler in Tel Aviv using Strava Run Glassware. Yeah, more of our global experiment to see if normal people would notice or care about Glass being in their faces (they didn’t).
It was a great user experience, sure. But the cost of using my Glass tethered to my EU-registered Samsung Galaxy SIII for data on the move forced me back to reality: nearly 33 EUR (45 USD, today) in roaming charges. Over 3 Euros a mile.
Of course, there’s also the cost of Glass itself. Effectively, with taxes and bits added, it’s 1700 USD (1250 EUR). Not cheap. So, consider adding another real world problem: running around sweating on your 1700 bucks. Nothing that some handy tape and a plastic bag can’t deal with, in a sort of Nerdy 2.0 duct-tape eyeglasses repair way. But, not a UX for the stylish.
I’ve no idea what Word Lens on Glass would cost to translate a foreign-language dinner menu, billboard, or sign when away on vacation. But I’d bet that if you’re going to try more serious translation tasks and stay connected during them, it’ll likely be cheaper and a lot easier to just man up and ask someone local. Unless the app is usable offline … and works outside in the rain.
Time will tell where Google Glass and Word Lens ends up. The message from all this is that in the enterprise, when it comes to gathering user requirements for wearable (and other) tech, it’s about more than just the end user and about taking technology at face value. Context is king.
Oh, we’ve a course on that, too.