Editorial note: Here comes another guest post. This one comes from Ultan O’Broin (@ultan), a Director in the Oracle Applications User Experience Team, based in Dublin, Ireland. He’s worked in and around globalized applications and user assistance issues for nearly two decades. Check out his blog about user assistance and user experience. Enjoy.
Lots of mobile translation apps out there; now and then one of them takes center stage and we all sit up (Google Goggles for Android, for example). One that’s really caught the attention of user experience (UX) and translation folks alike—judging by how fast it went viral through the tubes in December—is Word Lens (it made TechCrunch’s top 40 iPhone apps of 2010 list too). What’s really fired people up is that augmented reality (a hot topic in UX) combined with translation is now out there for everybody in an iPhone app. We’re used to innovation in translation, but this time it seems almost sci-fi.
To use the app, point the iPhone camera at an object with words on it. Word Lens not only detects and translates the text, it replaces the original text with the translated equivalent on the image of the object shown on the phone. Incredible!
Natch, I had to test it out myself. The translations (English to/from Spanish are the only ones supported now) that Word Lens produces are as good (or bad) as you could expect from a word-for-word substitution that’s unable to draw any context from the image itself. Word Lens doesn’t handle stylized fonts or large amounts of text, but still, it’s good enough for tourists, the curious, and other interested users’ purposes. Getting people into the gisting ballpark of understanding something in another language—when the alternative is not an excellent translation but no translation at all—seems a major value-add to me. And it looks very cool too.
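To see why word-for-word substitution can only get you into the gisting ballpark, here’s a minimal sketch of the approach in Python. (The mini-dictionary and sample phrase are purely illustrative—this is not Word Lens’s actual data or code, just the general technique.)

```python
# Illustrative Spanish-to-English lookup table; a real app would
# ship a far larger dictionary.
ES_TO_EN = {
    "empuje": "push",
    "la": "the",
    "puerta": "door",
    "salida": "exit",
}

def gist_translate(text: str) -> str:
    """Replace each word with its dictionary equivalent, leaving
    unknown words untouched. No grammar, no word reordering, no
    context from the surrounding image or sentence -- just gisting."""
    return " ".join(ES_TO_EN.get(word.lower(), word) for word in text.split())

print(gist_translate("Empuje la puerta"))  # push the door
```

Because each word is swapped in isolation, anything that depends on context—idioms, word order, gendered articles—comes out wrong or wooden, which is exactly the good-enough-for-tourists quality level described above.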
The Word Lens augmented reality translation could go a long way. Hooking it up to Google Translate would allow users access to many more language options through statistical machine translation, for example. Maybe there’s even an opportunity to translate user-generated content images on the tubes in real time, far more efficiently than a clunky manual editing process. Furthermore, the technology offers possibilities in the accessibility space for turning text on images into audio output for people with visual impairments, going way past what ALT text could do.
But for me, as a UX professional working in the applications space, I always need to think about what possibilities these kinds of innovations might offer for enterprise users. How could an augmented reality and OCR-based translation app be used by our customers? Who would use it, and for what? Would it fit with a CRM task of some sort? Maybe for rapid global guerrilla marketing campaigns? Or for on-the-spot analysis of international sales pitches? What about other areas?
Help me out here! If you have ideas or comments about augmented reality translation opportunities in the enterprise computing space, I’d love to hear them!