Here’s John on Firmness, Commodity, and Delight

July 31st, 2013 Leave a Comment

Editor’s Note: Here’s another post from John Cartan, some heavy philosophical musing for your Wednesday. Personally, I cringe when people throw around the word “delight” when talking about software. Usually, it’s just noise; considering how low the bar has been, “delight” framed in software terms means a lot less than it does in real-life terms.

Consider the following statements you might have heard. “Apple makes software that delights users.” “Apple makes software that just works.” So, yeah, that.

An interesting nugget about John: he’s been writing code since the era of punch cards. Think that sounds hard? He now writes entirely via the touch keyboard on his iPad.

Yup, he pounded out his Leap Motion review, more than 1,000 words, without an external keyboard. That’s dedication to the intended experience. 

Anyway, enjoy and find the comments.

Firmness, Commodity, and Delight

John Cartan, Applications User Experience, Emerging Interactions

At a recent conference, a young developer asked me a really thoughtful question, one that has come up many times over my career as a user experience architect:

Is there a difference between a beautiful interface and an interface with a great user experience?

References to beautiful or pretty interfaces get at something that architects of buildings call “delight”. The first century Roman architect Vitruvius said that for a building to succeed it needed three essential qualities: firmness, commodity, and delight. I think the same is true for software architecture.

By firmness, he meant that the building should not fall down. By analogy, an app should not crash, freeze, expose bugs, or lose data. A firm app is consistently reliable; users will quickly learn to avoid apps that are not firm.

Commodity means that the building allows and facilitates whatever use is intended for it. A commodious app helps the user efficiently accomplish the task at hand without ever getting in the way. Commodity in an app is only possible if you deeply understand that task – and the user who is trying to perform it.

A delightful building is attractive to enter and pleasant to move around in. A delightful car is beautiful and fun to drive; a delightful hand tool exudes craftsmanship and feels good to the touch. In the same way, a delightful app is a pleasure to use, not just because it has pretty icons, but because the attention to detail, clean layout, and thoughtful design empowers the user and makes her feel in full control.

The key thing to notice about these three qualities is that they are interdependent. A house that falls down on you is not delightful. The same attention to detail that creates a delightful user experience also tends to create a more firm and commodious experience.

There is a common misconception in the software industry that delight is an optional feature that you can bolt on at the end of a release cycle if there’s time. UX people are sometimes called in at the last minute to “make it pretty”. But this never works; delight is something that has to be baked in from the beginning.

Another misconception is that it is somehow frivolous or even unprofessional for a serious business application to be delightful. But delight contributes directly to productivity. An app that is fun to use will be used more often, empowering each user, increasing efficiency, and improving morale.

For me, then, a beautiful interface and an interface with a great user experience are one and the same.

Guest Post: A Leap Motion Review

July 27th, 2013 6 Comments

Editor’s note: Today, I present to you a detailed review of the recently released Leap Motion controller from one of my colleagues in Applications User Experience, John Cartan. You can read about John’s design work and background on his personal website, which has been live since 1995.

John’s recent work has focused on design for the iPad, which he uses almost exclusively, even for content creation, including this review. I hope he uses an external keyboard.

I’ve had the pleasure of speaking with (some might say, arguing with) John on a usability panel, and I’m pleased to present his review here. Look for my own Leap Motion review sometime soon. Anthony might chime in with one of his own too. So, stay tuned.

Initial Impressions of Leap Motion Device

John Cartan, Applications User Experience, Emerging Interactions

Summary: A fascinating, well-built device but often frustrating and confusing to use. Leap Motion is worthy of continuing study but is not yet a paradigm-shifting device.

I have now spent one day with the Leap Motion device, both at work (in our San Francisco office on a Windows machine) and at home (using my Mac laptop). I downloaded and tested five apps from the Airspace Store, played with about a dozen apps altogether, and read through reviews of many others. I also went through Leap’s tips and training videos and read their first newsletter.

General Impressions

Leap Motion is fun to try, but often frustrating to use – more of an interesting gimmick than a solid working tool that could replace a mouse or touchpad 24-7. Barriers to adoption include confusing gestures, invisible boundaries around the sensory space, unreliable responses to input, and poor ergonomics.

Gestures and responses

As David Pogue reported in his review, every app employs a completely different set of gestures that cannot be discovered or intuited without training and which are hard to remember from session to session. The variety of gestures was impressive; the New York Times app uses a twirling motion to scroll a carousel reminiscent of old hand-cranked microfilm viewers. A Mac macro app called BetterTouchTool claims to support 25 different gestures.

One recurring problem with many gestures was that they rely on the invisible boundaries (or invisible mid-point) of the volume of space the sensor can detect. The general-purpose Touchless app (which lets you substitute your hands for a mouse to browse the web, etc.) uses half of this invisible volume for hovering or moving the cursor and the other half for clicking and dragging; even with good visual feedback, it’s hard to find the dividing line between the two. It’s also easy to inadvertently wander outside the boundary when you change positions or accidentally move the sensor.
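
For illustration, here is one way an app developer could soften that invisible dividing line: add a hysteresis band so a finger has to travel clearly past the boundary before its state flips. This is a minimal sketch with made-up values and names, not Leap’s actual SDK:

```python
# Hypothetical sketch: split the sensor volume into a hover zone and a touch
# zone, with a hysteresis band so the invisible boundary is harder to cross
# by accident. Values and names are illustrative, not Leap's API.

HOVER_TOUCH_BOUNDARY = 0.5   # normalized depth; 0.0 = closest to the screen
HYSTERESIS = 0.05            # dead band around the boundary

def is_touching(z, was_touching):
    """Map a normalized finger depth to touch (True) or hover (False).

    A finger must travel HYSTERESIS past the boundary before its state
    flips, which prevents flip-flopping right at the dividing line.
    """
    if was_touching:
        return z < HOVER_TOUCH_BOUNDARY + HYSTERESIS
    return z < HOVER_TOUCH_BOUNDARY - HYSTERESIS

# A finger drifting near the boundary holds its state instead of jittering.
touching = False
for z in (0.60, 0.52, 0.48, 0.44, 0.52, 0.56):
    touching = is_touching(z, touching)
    print(f"z={z:.2f} -> {'touch' if touching else 'hover'}")
```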

Even when you remember the gestures, an app’s response can be unreliable. Although the device can theoretically detect the positions of all ten fingers, in practice it often confuses two fingers with three, or fails to register some details due to bad lighting, one part of the hand obscuring another, etc. In almost every app, even after repeated practice, my gestures were frequently misinterpreted.

Part of the problem lies not in the device but in the software written for it. There are countless ways of using the data it generates, and there’s an art to interpreting that data reliably, in ways that account for the things people do with their hands without even realizing it.

A good case in point is Google Earth. At its best, the experience of flying over the earth with your hands is magical: different from, and more intuitive than, using a mouse or even a good tablet app (which simulates moving a large, expandable map, not actually flying over a surface). But Google Earth’s controls for Leap were far too touchy, even when I set the sensitivity to its slowest setting. The tiniest movement of a finger would rocket me a thousand miles up and send the earth spinning like a top. This is a problem that could be fixed with better algorithms. The current app seems to use absolute, not relative, positioning, with little or no attempt to dampen sudden changes or limit the earth’s spin.
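
To make that concrete, here is a rough sketch of the suggested fix: track frame-to-frame deltas instead of absolute position, smooth them, and clamp the result. The numbers and structure are hypothetical, not Google Earth’s or Leap’s actual code:

```python
# Hypothetical sketch of relative positioning with damping. A raw hand
# position becomes a rotation step via deltas, exponential smoothing, and a
# clamp, so a misread finger cannot send the earth spinning.

SMOOTHING = 0.15   # 0..1; lower means heavier damping
MAX_STEP = 5.0     # cap on degrees of rotation per frame

class DampedControl:
    def __init__(self):
        self.last_x = None
        self.velocity = 0.0

    def update(self, hand_x):
        """Turn a raw hand position into a damped, clamped rotation step."""
        if self.last_x is None:
            self.last_x = hand_x
            return 0.0
        delta = hand_x - self.last_x        # relative, not absolute
        self.last_x = hand_x
        # Exponential smoothing damps sudden spikes from misread fingers.
        self.velocity += SMOOTHING * (delta - self.velocity)
        # Clamp so no single twitch produces a thousand-mile jump.
        return max(-MAX_STEP, min(MAX_STEP, self.velocity))

control = DampedControl()
for x in (0.0, 0.5, 40.0, 41.0, 41.2):  # third sample simulates a sensor spike
    print(round(control.update(x), 2))
```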

Ergonomics

When sitting at a desk using Leap Motion, my arms began to hurt within five minutes; other volunteers reported this as well. This is the same problem we found with our smart board testing: supporting your arms in space is tiring – and soon becomes surprisingly painful.

Recognizing this, Leap Motion’s first newsletter contained a brief tutorial on ergonomics. But it seemed more focused on the needs of the device (to avoid obstructions to the signal) than on the needs of the user. For example, Leap says to avoid the most natural and comfortable positions (e.g. resting on your elbows) in favor of holding your arms straight out into space.

I did experiment with using Leap Motion from a reclining position, as shown in the photo.


Positioning the device on the front part of my laptop worked fine. By projecting my display to a large TV and using a third-party app, I could even close the laptop and position the sensor on its lid, or remove the laptop altogether and position the sensor on a side table so that my hand could rest on a chair arm and remain lower than my heart (which reduces pain and fatigue over time). With a little fiddling, this worked well during an extended session (as long as I stayed in one place). You can also use the Leap control panel to adjust the required height of your hand(s) over the device. Even so, ergonomics will remain a significant concern.

Recommendations

The best experience I had was with the free Molecules app. The Leap is well-suited to true three-dimensional input like turning a complex molecule in space. This app used clear, simple gestures like a clenched fist to pause, and tolerated random movements without completely disrupting the display. This gives me some hope that with the right gestural algorithms and a good use case, Leap can actually provide some added benefits.

For the most part, though, using Leap Motion was an exercise in frustration. The initial productivity apps, like photo manipulation or PowerPoint presentations, are more likely to produce rage than improved productivity. David Pogue summed up the problem nicely:

“Word of advice to potential app writers: If your app involves only two-dimensional movements, of the sort you’d make with a mouse, then the Leap adds nothing to it. Rethink your plan. Find something that requires 3-D control — that’s what the Leap is good for.”

Leap Motion is a solution looking for a problem and has yet to find its killer app. For most enterprise applications it would be a gimmick that would quickly grow annoying and would be unsuited for daily usage.

There may, however, be some niche applications that might benefit from Leap Motion or something like it. To succeed, these use cases would require, or at least benefit from, true 3-D control; would not rely solely on the Leap to perform destructive or non-reversible actions; and could be performed from a sitting or standing position near a desk or lectern (not mobile). I could also see other forms of this technology supplementing rather than replacing current sensing systems (e.g. detecting hover positions above a tablet surface).

We should keep our eyes out for niche use cases and continue to monitor future developments. But there is no need to toss the mouse, trackpad, or touch surfaces just yet.

Update: John has an update from Leap’s PR team in comments pointing to their ergonomic support guide. The net: Tyrannosaurus Rex arms, not zombie arms.

The Story of the Oracle People App

July 24th, 2013 4 Comments

When we started blogging back in 2007, we had grand visions of this as a place to share our innovative work and insights.

That hasn’t panned out exactly as we drew it up back then, but there are some stories that stand the test of time. Today, I give you one of them, the story of the Oracle People app.

History

Since its first version dropped in early 2009, the People app has been one of the top traffic drivers to this space. About every other day, on average, I get a request to provide the download and installation details, and before Apple changed the yearly provisioning file expiration, every year I’d see a flurry of panicked requests about updating.

Clayton built the People app back in 2009 as a fun project to test-drive Apple’s then-new SDK for iPhone. If you recall, Apple debuted the App Store and its SDK back in 2008, along with the iPhone 3G. This was the tipping point for the iPhone, as developers could fully embrace it as a new development platform.

This was new thinking for mobile apps, and it presented an evolutionary step for mobile experiences. Apps were new and different, and we wanted to jump on board.

I’m guessing many people don’t even remember this, given that by the time the iPhone really got going, apps were already an integral piece of the device.

Anyway, Clayton, like many developers, wanted to scratch an itch. He wanted to search the corporate directory from his iPhone, not in the browser, but in a self-contained way.

Clayton initially approached me to use the APIs we provided for Oracle Connect to add some social features into his app. His timing was perfect, since at the time, we were looking for ways to experiment with the iPhone SDK.

It was a great collaboration, since he did most of the work and provided Rich (@rmanalan) and Anthony (@anthonyslai) with enhancements to the APIs. Clayton was one of the first consumers of our APIs, so it was nice to get real feedback from an active developer.

After the initial release, Clayton dropped a second version to support the iPad and newer versions of iOS, but since then, he hasn’t had to do much to enhance it, which is good, what with his day job and all.

Luckily, the app is rock solid and has transitioned well into newer releases of iOS.

At some point, I hope IT will take over the People app. Requests for an Android version have been increasing over time, as Android has taken market share from iOS, and none of us has the time to build and support an Android version.

Plus, the app’s user base has grown large enough to warrant more investment.

So what?

Besides making a nice little story, it turns out Clayton’s app is a critical tool for everyday work.

I don’t have reliable metrics to calculate how many people actually use the app or how often, but the two posts on the first and second version of the app get lots of traffic and have a couple hundred comments, which is a lot for this blog.

Search referrals bear out this assumption, as we get a hefty number of keyword referrals for terms like “oracle people app” and “people 2.”

Even though the People app was eventually included as an “official” app for iOS users, IT doesn’t support it. Clayton has always done the support and development work, and I do minimal support triage.

Aside from the posts, the only other promotion is from IT’s internal site for mobile users. Even so, I’d guess that the app is installed on at least a thousand devices, putting its penetration at maybe 1% of all the iOS devices at the company.

I like telling stories, and maybe someday, if I write a book, I’ll include this one. It’s a nice little object lesson in tapping hidden demand and the value of scratching an itch.

So, thanks to Clayton for the People app. His work has helped thousands of users and has driven lots of interest our way over the years.

Not bad for a little side project.

It’s Monday, and I Have Links

July 15th, 2013 Leave a Comment

Happy Monday.

Tech that is always listening

Fresh on the heels of the news that the Xbox One will always be listening comes a rumor that the upcoming Moto X will also have passive listening capabilities. Add this to the long list of technology that feels invasive and creepy for consumers but will be extremely useful for work.

Since the rumor slipped that Google Glass would passively listen for commands, I’ve been thinking that kind of interaction would be perfect for enterprise users. One failing of Siri and Google’s voice features is that they are too purposeful, i.e. you have to remember to use them and make them part of your flow. Although the same is true for passive listening devices, they remove barriers, i.e. picking up the device and opening the app you plan to use.

Back to enterprise users: imagine you’re a mobile worker, like live-in-your-car mobile. I think you’d find a passively listening assistant that can do stuff for you while you’re driving to be very helpful. This is why so many cars offer hands-free features for phones and why Apple is interested in the car dashboard. But this solution one-ups those integrations by skipping the car entirely.

Creepy for everyday consumers, but useful at work.

IFTTT for iPhone

One of my favorite services, IFTTT, launched an iPhone app. When I first saw the headline, I skipped it because I don’t carry any iOS devices. Love IFTTT, don’t really care about iOS. At some point, I skimmed the coverage.

The reason this matters is that IFTTT didn’t just create an app to manage your recipes on your phone; they built recipes tied to phone actions in three standard iOS apps: Reminders, Photos, and Contacts.

This type of automation has always been a differentiator for Android, e.g. apps like Llama and Tasker, and it’s great to see it come to iOS, albeit in a watered-down version. Check out this review of Tasker if you want an overview; it does pretty cool stuff.

Smartphones are huge parts of our lives, and automation based on their activity and sensors is an easy way to chart what you do and tie together various services.

OK Maps

I still don’t have the new-and-improved version of Google Maps, but this seems important. To save a map offline, type “OK Maps” into search. Keep that one in your back pocket.

Chromebooks defy analysts

Despite the gutting of the PC market by tablets, shipments of Chromebooks have grown and taken market share.

A Wednesday Collection

July 10th, 2013 Leave a Comment

After Kscope 13 and a week of staycation, I suppose I should get back to blogging. Not really much on my mind lately, so I figured a collection of links would do.

You’ve been quiet lately too. Real comments have been scarce since I moved away from Disqus. I hope those aren’t related. Say hi or something.

The Post-Reader World

Now that Google Reader is gone, I’ve settled into NewsBlur as my replacement. NewsBlur’s feature set is pretty close to what I used Reader to do, and so far, I’ve been happy with it. The mobile apps are good, and I found an IFTTT recipe to transfer shared items to Evernote, something I definitely missed.

NewsBlur has a shared item feed, a blurblog, which functions exactly like the old Reader Shared Items feed did.

I did briefly test Feedbin, but found it a bit slow. Its unread counts were always different from Reader’s and NewsBlur’s, too, which was odd. To its credit, Feedbin has serious third-party app support from Press and Reeder, but I’m not in love with how they handle credentials right now. I might go back and give it another shot at some point.

What are you using to fill the gaping void Reader left?

More Hollywood UI

Some topics seem to rotate through the interest cycle for odd reasons. Since I mentioned film interfaces, I’ve read another piece on the Verge about what designers can learn from sci-fi interfaces and found a compendium of various film and TV interfaces (h/t the Verge), the IMDb of FUIs, fictional user interfaces.

Enjoy.

Nitrous IO

A while back, I speculated that Google would soon deploy a cloud-based IDE for its myriad of developers. Turns out there’s a startup doing this already, Nitrous.IO (h/t O’Reilly Radar). Using Nitrous.IO, you can spin up Python, Ruby, Rails, Node.js, Go, and Django environments in less than a minute, according to the site.

This fully cloud-hosted solution, including the IDE, makes that Chromebook you got from IO a lot more useful to do real work, something Anthony (@anthonyslai) has been experimenting with since he got his Chromebook Pixel. Anthony put Ubuntu on his Pixel, but presumably, Nitrous.IO would make Chrome OS a lot more valuable by providing an IDE.

FWIW I checked out Anthony’s Pixel recently. The display is amazing, although at its full resolution, 2560 x 1700 at 239 PPI, I can’t imagine how he gets through a day without a splitting headache.

That’s it for now. Anything on your mind? Find the comments.

The Long Goodbye to Reader

July 1st, 2013 Leave a Comment

Google Reader will be gone when today ends, not sure what time zone Google is using for the apocalypse.

Turns out Google Takeout doesn’t extract everything you could ever want, so some resourceful folks have whipped up a Python utility that will get everything out of the dying service.

Check it out at readerisdead.com. I did this yesterday, and it took several hours. However, I’m a voracious consumer; my archive was more than 1.1 million objects, totaling 10.19 GB. Your mileage will vary.

Do it now. There is no tomorrow.

Messing around with Glass and Fusion CRM for Kscope 13

June 20th, 2013 Leave a Comment

As Anthony (@anthonyslai) mentioned yesterday, we’ve been experimenting with his new toy, Google Glass.

It’s our job as a research team to investigate emerging technologies and explore how they might affect the users of Fusion Applications in the near and long-term future. So, don’t get ahead of yourself; this is a research project, not a product. We’re just having fun with our new toy.

Jeremy and I will be speaking at Kscope 13 next week on Monday morning, and like Noel (@noelportugal) did with the WebCenter Rock ‘em Sock ‘em Robots at last year’s Kscope, we wanted to do something interesting and entertaining to get people’s attention.

And Google Glass tends to get attention, both positive and negative. I’ve seen the reactions up close and personal, walking around Silicon Valley with Anthony, and I expect that outside the vacuum of the Valley, we’ll get even more sideways looks, especially from the highly technical audience at Kscope.

So, Glass seemed like a good fit. For the demo, we decided to tell a story about a Glass-toting sales rep using Fusion CRM. What would she use Glass to do? How could we fit Glass into her daily tasks?

The former was pretty easy to answer, given how limited Glass’ feature set is at the moment. The story goes like this:

Our sales rep, let’s call her Lucy, attends lots of customer meetings. Prior to a meeting, Glass reminds Lucy of the meeting. Using Glass, she can get turn-by-turn directions and browse news about the company she’s visiting.

She can see who is attending the meeting and can call or text these contacts to do last minute preparation or to let them know if she’s running late.

During the meeting, she can dictate notes with Glass to capture action items or to-dos, and take pictures of the white board or hand-written notes which she can share with her sales team on G+ and upload to Fusion CRM to attach to the customer record or an opportunity.

After the meeting, Lucy can search for the customer record or opportunity to make basic updates, and since the meeting was a success, she can search for nearby restaurants, call one, and use Glass to navigate there.

Super basic, I know, but Anthony put all this together in about two weeks, working around the limited feature set of Glass, including the insane 1,000 requests-per-day limitation.

Anyway, to reiterate, this isn’t product. It’s just a fun project that Anthony threw together for me to show at Kscope. So, if you’re at Kscope and want to check out Google Glass and this demo, find Jeremy and me Monday at 8:30 AM. Here are our session details:

Oracle Fusion & Cloud Applications: A Platform for Building New User Experiences

Monday, June 24, 2013, Session 1, 8:30 am – 9:30 am

Do your users keep bringing new gadgets to work? Not just the latest tablets and smartphones, but wearable smart devices, and even cyborg-like glasses? Your users are no longer tied to their PCs, and neither are enterprise applications. See how Oracle’s Applications User Experience team is bringing Fusion Applications data to new devices and platforms. Find out what technology trends we are paying attention to, and what our own applications developers are exploring for the future.

Over the next few months, Anthony will work on refining the demo, and if you’re attending UKOUG Tech 13 in December, he might be there to show off what he’s done, assuming his session is accepted.

Find the comments.

Oracle Fusion Glass App

June 19th, 2013 Leave a Comment

Lately, I have been working on a Glass app for Fusion CRM as a research project for my own personal edification. For developers, the normal way to create a Glass app is to use HTML5 and JSON to construct timeline cards; building for Glass is like building a web application. The Mirror API developer’s guide makes most things quite clear, but some details are not fully explained.

The good news is there are resources out there to assist you:

1.  The Glass Quick Start project came in handy. It is great to use as a base, as some interactions with Glass are already implemented. Plus, OAuth 2 authentication is already baked into the project itself.

2.  The Mirror API Playground. It gives developers a way to experiment with how pages will come out on Glass before actually deploying an application. Changes can be made to the JSON, the raw text, and the actual UI itself. Pretty slick. If you own Glass, you can also see the cards currently on your timeline. A minimal card-insertion sketch follows.
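
For a sense of the moving parts, here is a minimal sketch of inserting a timeline card through the Mirror API’s REST endpoint with plain HTTP. The access token is a placeholder; in a real app, the Quick Start’s OAuth 2 flow supplies it:

```python
# Minimal sketch: insert a timeline card via the Mirror API REST endpoint.
# Assumes a valid OAuth 2 access token; the one below is a placeholder.
import json
import urllib.request

ACCESS_TOKEN = "ya29.EXAMPLE"  # placeholder, obtained via OAuth 2

card = {
    "text": "Hello from Fusion CRM",
    "notification": {"level": "DEFAULT"},  # buzz the wearer
}

req = urllib.request.Request(
    "https://www.googleapis.com/mirror/v1/timeline",
    data=json.dumps(card).encode("utf-8"),
    headers={
        "Authorization": "Bearer " + ACCESS_TOKEN,
        "Content-Type": "application/json",
    },
)
print(urllib.request.urlopen(req).read().decode("utf-8"))
```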

Requirement:

Someone on your team must be a Glass Explorer who has been whitelisted by Google to use the Mirror API. The Mirror API can be enabled in the Google API console; the Glass Explorer can create a project with the Mirror API enabled and share it with teammates.

A couple of notes worth pointing out:

1.  You need to do some tweaking to use all the capabilities of the Mirror API in a development environment. In particular, you cannot receive location and timeline notifications from Glass without SSL enabled (see the sketch after these notes).

2.  There is a 1,000 requests/day limit on the Mirror API. While developing the application, I managed to exceed the limit in a couple of hours. If you exceed the quota, you are out of luck and have to wait until the next day to continue your work. I hope Google increases the quota sometime soon.

3.  Some provided functions do not really work. For example, even though the Mirror API includes a call to pin a card, it does not actually work. You need to test the features you plan to use.

4.  All JavaScript will be sanitized and removed before a card is displayed on Glass. No need to get too creative.
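
To make note 1 concrete, here is a sketch of the subscription call that needs SSL. Google only delivers notifications to an https callback URL, which is why a plain development box receives nothing. The service URL and user token below are made up:

```python
# Sketch: subscribe to location updates. The callbackUrl must be https;
# an http:// URL is rejected. Token and URL are placeholders.
import json
import urllib.request

ACCESS_TOKEN = "ya29.EXAMPLE"  # placeholder OAuth 2 token

subscription = {
    "collection": "locations",              # or "timeline"
    "userToken": "opaque-user-id",          # echoed back in notifications
    "callbackUrl": "https://myservice.example.com/notify",
}

req = urllib.request.Request(
    "https://www.googleapis.com/mirror/v1/subscriptions",
    data=json.dumps(subscription).encode("utf-8"),
    headers={
        "Authorization": "Bearer " + ACCESS_TOKEN,
        "Content-Type": "application/json",
    },
)
urllib.request.urlopen(req)
```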

Moving on from My Precious Reader

June 13th, 2013 7 Comments

Google Reader will be gone for good in less than three weeks.

Since the announcement, I’ve continued to use Reader, the denial stage, but this week, I finally decided to investigate replacements.

I used this crowdsourced list of alternatives as my starting place and narrowed it down against my criteria.

I finally settled on two options, NewsBlur and Feedbin. NewsBlur has a free option, so I can see how it works, while simultaneously testing Feedbin.

NewsBlur

NewsBlur has a free option, but my reading will hit its limits very quickly, so for my purposes it costs $24/year. It’s open source, which is nice, and it’s visually appealing, with a lot of nice, thoughtful work done well in performant JavaScript. For sharing, I can use IFTTT.

NewsBlur might be a bit too feature-rich for me. It has sharing, statistics, and recommendations, making it a nice Reader replacement. But, I never used those much, and I’m really just focused on consumption.

NewsBlur has a handy Google Reader import feature, but that drove me a bit mad since I couldn’t get the unread counts to match. I ended up scrapping that and starting over with my Reader OPML file.

Feedbin

Feedbin costs $2/month, so the same as NewsBlur. I’m not entirely sure, given they sent me a receipt for $0.00, but my guess is they bill for a year rather than by month; monthly billing would mean excessive transaction fees for each charge.

Update: I heard back from Feedbin that they offer a free, three-day trial before charging. So, that’s nice.

Feedbin has an API, and I like the developer-friendly model. I’m not thrilled about using basic authentication over SSL, but I get that this is a small project. At least I know how they handle it, so I can choose a client wisely. For Android, I’ve got Press, which I test-drove as a Google Reader client back in December, based on several positive reviews.
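
For the curious, here is roughly what that looks like from a client’s point of view; a sketch, assuming the v2 entries endpoint from Feedbin’s API docs, with placeholder credentials. The password rides along base64-encoded in every request, which is why how a client stores credentials matters:

```python
# Sketch of Feedbin's basic-auth-over-SSL model. The credentials are only
# encoded, not encrypted, so TLS and the client's storage do all the work.
import base64
import urllib.request

EMAIL, PASSWORD = "me@example.com", "hunter2"  # placeholders

token = base64.b64encode(f"{EMAIL}:{PASSWORD}".encode()).decode()
req = urllib.request.Request(
    "https://api.feedbin.com/v2/entries.json",  # endpoint per the v2 docs
    headers={"Authorization": "Basic " + token},
)
print(urllib.request.urlopen(req).read()[:200])
```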

Press announced they would support Feedbin, and that update is now available. Another plus for Feedbin is that I can use Reeder with it. I love Reeder, and even though I no longer carry any iOS devices, I have the Mac app.

As for the web version of Feedbin, I like its sparse interface, very Reeder-esque in fact. As with NewsBlur, it’s easy to use and fast.

So, for the foreseeable future, I plan to use both services. This will be confusing, and that’s the idea. When I’m in a hurry to consume some information, I’ll gravitate to the one I prefer, which will help me make a choice.

No matter which one wins in the end, I’ll pay for both to support the post-Reader World, which is one that still needs a great feed reader. Today, as part of tearing off the Band-Aid, I unpinned the Google Reader tab from Chrome. This is more than symbolic because it was the first tab, and now all the other pinned tabs have moved. I’ve already aimed for a tab based on muscle memory and missed, a sad reminder of the hole that Reader will leave.

Sniff.

Have you picked a Reader replacement?

Find the comments.

How Movies Have Shaped UI

June 13th, 2013 4 Comments

Last week, in a meeting, we got on the subject of Terminator vision. For the uninitiated, here’s what that looked like in Terminator 2: Judgment Day:

terminator_vision-600x224

Image from TriStar Pictures, Terminator 2: Judgment Day

If you recall, RoboCop had a similar overlay readout:

Image from Orion Pictures, RoboCop

So, for about 25 years, Hollywood has been seeding this vision (ha, pun) of augmented reality, i.e. an overlay of real-time information about visible objects in a heads-up display. Technology has been realizing this vision over the past decade, with Google Glass being the most recent, and most polarizing, iteration.

This post isn’t about Glass, though, it’s about how Hollywood, essentially art, has been driving our interfaces for a really long time.

And then came Minority Report, which upped the ante by showing a (dystopian) view where technology dances and jumps around the room at the flick of a wrist.

Kinect-Minority-Report-UI-2

Image from Twentieth Century Fox, Minority Report

Not too dissimilar from the experience provided by Microsoft’s Kinect, minus the gloves.

At this point, I need to backtrack and mention another film, The Matrix, which goes even further by providing training directly to the brain (“I know kung fu.”) and altering human perception to see the non-augmented (and more dystopian) reality and manipulate it.

keanu-reeves-action-star

Image from Warner Brothers, The Matrix

Earlier this year, I read an interesting post about this very effect and its impact on modern interface design, How ‘Minority Report’ Trapped Us In A World Of Bad Interfaces. An interesting read, with a money quote:

The reality is, there’s a huge gap between what looks good on film and what is natural to use.

As we close the gap between art and technology, real physiological and usability issues emerge. Arms and shoulders get tired when they’re constantly in motion. Try shadow-boxing for a minute with no breaks and see how you feel. Then strap on 16-ounce gloves (competition weight) and compare. It gives you a new respect for boxing.

This is why, aside from chin down, hands up, the most common coaching mantra is, elbows in, relax your shoulders. Even then, it’s tiring. Floyd (@fteter) knows.

It seems odd, but one of the biggest challenges for interface designers is reeducating people on how interfaces should work, since we can no longer say they can’t work that way.

Anyway, food for thought. I’m sure I’ve missed a bunch of other influential movies, and let’s leave the discussion of what Hollywood thinks hackers can do for another day.

Find the comments.

See You at Kscope 13

June 11th, 2013 2 Comments

Kscope 13 (#kscope13), ODTUG’s (@odtug) annual conference, begins in less than two weeks. I know, it sneaked up on me too.

If you happen to be making the trip to New Orleans for the show, which runs June 23-27, and you feel like getting up early on Monday morning, I’ll be speaking with Jeremy Ashley, the head of Applications User Experience.

So, why not come by our session, sip your coffee and hear about Fusion Applications as a platform for new experiences?

Here’s the skinny:

Oracle Fusion & Cloud Applications: A Platform for Building New User Experiences

Monday, June 24, 2013, Session 1, 8:30 am – 9:30 am

Do your users keep bringing new gadgets to work? Not just the latest tablets and smartphones, but wearable smart devices, and even cyborg-like glasses? Your users are no longer tied to their PCs, and neither are enterprise applications. See how Oracle’s Applications User Experience team is bringing Fusion Applications data to new devices and platforms. Find out what technology trends we are paying attention to, and what our own applications developers are exploring for the future.

Yeah, it’s at 8:30 AM, but you’ll be starting your Kscope 13 with a hot topic.

Thanks to Sten Vesterli (@stenvesterli) for inspiring the title. Sten attended a super-secret AUX event at HQ last month, organized and hosted by Misha (@mishavaughan), and wrote the following:

Yesterday, the Oracle UX team hosted a confidential (strictly no photography!) event demoing some of the new stuff they are working on. If I told you the details I’d have to kill you, but what I can say is this: The future of ERP is as a platform, not an application.

I think that nicely encapsulates a lot of what I’ve been doing lately, so I borrowed it. Thanks Sten.

Anyway, if you’re planning your schedule for Kscope 13 and need a reason to get up early on Monday, come by our session, and even if you don’t make it, find me at the show and say hi.

Always enjoy meeting readers IRL.

Control Center, So Now You Don’t Have to Jailbreak

June 10th, 2013 Leave a Comment

Apple announced a lot today as WWDC began. Chief among the announcements is a long-overdue redesign for iOS, which looks essentially the same in iOS 6 as it did in iPhone OS 1, or whatever they versioned it back in 2007.

The redesign is very slick and, as expected by many, focuses on paring visual elements back to a flatter design.

I’m not an iOS user and haven’t been one for quite some time, but one feature jumped out as surprisingly refreshing, the Control Center, which allows the user quick access to device settings among other things. Quoting the Verge:

Adding quick-access controls like these have been a popular jailbreak tweak, and clearly Apple has taken that to heart.

Over the years, when I’ve gotten into discussions with developers about why they carry iPhones, this topic has come up frequently. The developer will say something like “I only jailbroke my phone so I could get quick access to settings” via some app or another. So, unlike many Android-phone-carrying developers, these people don’t really want to tamper with their devices.

controlCenter

Android has always been a playground for the curious, and iOS has always had an antagonistic relationship with the jailbreaking community. I asked my boss why he still carries an iPhone, despite his recent drift to various Android devices. His comments boiled down to something like “I just need it to work, and with Android, because I can muck about with it, I do. And sometimes that means I break it. I need the phone to work.”

By extension, he needs the phone to protect him from himself and his muck-about tendencies.

I’m in the same place, albeit on a different phone OS, and I’ve stuck with Nexus devices specifically because I wanted them to just work and avoid falling down the rabbit hole.

Back to Control Center, it’s significant that Apple has listened to its users. This might be a new direction. Apple is famous for its design and execution and for aggressively making decisions, regardless of what users may think.

I’m probably reading into it a bit too much, since I have to assume that Apple’s employees all carry iPhones and have probably been screaming for this feature for years. Still, I wonder why it took so many years to make it into base iOS.

Anyway, iOS 7 looks very nice and has quite a few new features. I’m also interested to see how well the WebOS-style multi-tasking works.

Thoughts? Find the comments.

First 3 days as a Glass Explorer (Day 2)

June 4th, 2013 2 Comments

Editor’s note: Read Anthony’s (@anthonyslai) full Glass adventure, starting with the prologue, the week before, and Day 1 posts.

I met with Jake and Noel the day after I got Glass. Everyone who tried it seemed pleasantly surprised by the current features. At that time, there was no Twitter or Facebook integration at all; just having Google search, photo taking, video recording, and GPS navigation already made it a great device. Of course, it is still arguable whether it is worth such a high price tag; on the other hand, it is all about economies of scale, and the price will come down, although probably not this year. To me, yes, it is slightly expensive, but I think it is still worth it.

As for the glass piece, the technology behind it is apparently a Google secret; they were not willing to disclose it during the fireside chat session at Google I/O. I think this makes sense, as the glass piece is probably the hardest part to replicate and copy. I get a funny feeling whenever I am asked, “Is that… Google Glasses?” Well, technically, it is Glass, not Glasses, although I had the same misconception myself.

The other question I find interesting is, “Did you get it from Amazon?” Although I answered the question seriously, maybe I should have said, “Yes, I got it from Best Buy with a promotional discount.” The questions I received normally fell between two extremes: either people knew the technology really well, or they had no idea what it was. As Glass is not readily available to the general public, only people who are passionate about technology have looked into it.

Until Glass becomes widely available, it can serve as an excellent icebreaker. People were genuinely interested in the Glass experience; they would approach you and ask about it. In the afternoon, I went to Health 2.0 Refactored, and that was how I met a couple of people at the conference and got to know them.

There have been a lot of reports about Glass haters. People mostly have privacy concerns, worried that you are secretly taking pictures or recording videos without their permission. In an era where almost everyone has a smartphone with a high-resolution camera, I do not think this is avoidable. We will just have to live with it.

The other concern is that Glass can be used to pull up all your personal information just by looking at you. My question would be, “Where and how would they find your information?” Your name, maybe. Social Security number, probably not. And if they did know your Social Security number, would they need to look at you to find it? As with all technologies, it can serve both good and evil; for example, it could be useful when interviewing candidates. Hopefully, people can have peace of mind for now, knowing that facial recognition is banned by Google. No more seeing your opponent’s stats like in Dragon Ball Z.
20756_dragon_ball_z

In my case, no one I met disliked my wearing Glass, and my experience so far has been great and positive.

Another Go with the Chromebook

June 3rd, 2013 Leave a Comment

Anthony (@anthonyslai) has been using his Chromebook lately, by necessity, and a combination of recent speculation and my own gut tells me that I should try to work my Chromebook back into the regular rotation.

Actually, the speculation isn’t recent. Anton Wahlmann predicted a Chrome OS smartphone two years ago; he’s just updated his prediction based on the happenings at I/O.

He makes an interesting argument, and given the consolidation of Android and Chrome OS under Sundar Pichai, it makes sense. Also compelling to nearly everyone, the sudden and unexplained appearance of a chrome-plated Android robot at Google HQ.

So, I’m giving the little guy another chance to find a home in my gadget arsenal.

I’ve had a Chromebook for a few years, the Samsung Series 5 3G Chromebook, but like many gadgets, it’s gone unused. Picking it up again for the first time in a while, the first striking feature is how many updates I’ve missed.

Chrome OS has undergone some major changes since I last used this device. Yes, I’m using it now. It now feels like an OS, rather than just a Chrome browser, with a desktop, a listing of “apps” and a system bar.

It also doesn’t feel dated, unlike most two-year-old devices. For example, the Nexus S, which I got new at about the same time, is woefully underpowered running the most recent version of Android, Jelly Bean. Google has obviously taken care to architect Chrome OS to make the most of low-spec hardware.

The features of Google’s web apps, like Hangouts, make it seem more functional, and the speed Anthony references is still there, providing instant-on and snappy reboots for updates.

So, not surprisingly, the Chromebook is an ecosystem device. Google stuff looks good and runs well. Web stuff is mostly the same. Beyond that, you have to wade into deep water. Check Anthony’s post for examples.

One early show-stopper for me still exists, VPN support. It’s baked-in, but not for the VPN I need for work. That won’t change anytime soon.

Overall, it’s not bad to use. Unlike the Pixel, my Chromebook feels like exactly what it is, a cheap laptop. Not necessarily a bad thing, just an observation.

Actually, the device’s cost and its lack of local data make it a perfect traveling machine. Don’t want to carry your fancy laptop and its accessories on vacation? Just throw a Chromebook into your luggage. It’s fairly rugged, and the mental cost of losing it is much lower.

I did this equation in my head during a trip last year, and found it quite liberating to leave behind the Macbook Pro and laptop bag.

The size is nice, but it’s a bit too small for my lap and the keyboard feels a bit cramped after extended use. Google has rearranged a few keys, specifically the ctrl one, which has caused some relearning.

The trackpad also tends to catch my fingers as I type, moving the cursor around magically.

Otherwise, the device works very well. So, why wouldn’t Google create tablets and phones for it? Firefox OS is proving there is room at the other end of the smartphone market, where apps are less important than portable internet connectivity. They may have competition soon.

Find the comments.

First 3 days as a Glass Explorer (Day 1)

May 29th, 2013 Leave a Comment

Editor’s note: For the full Glass experience, check out Anthony’s (@anthonyslai) prologue and week before Glass posts.

I was finally able to pick up Glass the day before Jake and Noel came to the Bay Area; it was also the Sunday before Google I/O. I got out of the Glass garage at Google in 10 minutes; the Google product manager who fitted me thought I might be the fastest person ever out the door, as I already knew everything. When I got back home, I discovered an issue: there were a couple of dead pixels in the display. Dead pixels are hard to detect, as they are white, and you can only see them in dark places. I called Glass support right away, and they were nice enough to offer me a replacement. It seems to be a fairly common issue with Glass, as other people have had the same problem as well.

The first night with Glass was a bit sleepless. I went ahead and did all kinds of experiments with it. Glass does not come with a launcher for native Android applications installed, and there is no Play Store either. To start working on native apps, I installed an app called Launchy, built by Mike DiGiovanni. I then tested building some simple native apps for Glass. As Glass provides 12 GB of free space at your disposal, you can pretty much do anything with it. As expected, I was able to do almost everything I could on any other Android device.

I also created my own Glass service using Google App Engine. With the starter project Google provided, it was a breeze. However, in order to allow Glass users to access it, you must be a Glass Explorer and be whitelisted by Google to turn on the Mirror API service in the Google API Console. With the starter project, you can interact with Glass by sending cards to the timeline of a specific user or broadcasting to all users of your service. A contact card for your service is also created, so a user can interact with your service by replying or posting to the contact. Your service can also receive location updates about your users. There are quite a lot of things you can do with these features. At the same time, these are currently the only things you can do with the Mirror API, so it can be quite limiting, depending on your application.

When I woke up the next morning, somehow I could see everything clearly. Apparently, I had forgotten to take off my contact lenses before going to bed; it had been a long time since I’d done that. It reminded me of this image, which somewhat reflects reality.

22dBD

Reading the Tea Leaves

May 28th, 2013 2 Comments

Anthony (@anthonyslai) has been on a roll lately, and his latest post reminded me to put words behind a hunch I have.

When the Chromebook Pixel was announced, a lot of head-scratching ensued. What’s the point of a fantastic piece of expensive, high-end hardware that runs an internet-tethered OS like Chrome OS? After all, Chromebooks have settled into a niche at the bottom of the device market, one where netbooks were once aimed.

Then, I heard that Google was giving Pixels to its employees. Most of Google’s employees are developers, and to do their work, those Pixels would have to support development.

Prior to I/O, I fully expected that Google would announce a cloud-based IDE, which would explain why employees were given Pixels.

No such announcement came.

However, every I/O attendee received a Chromebook Pixel, which only strengthens my hunch that a cloud-based IDE is coming.

There were several announcements at I/O that contributed to this line of thinking.

First, Android Studio, a custom IDE for Android development based on IntelliJ IDEA, which will include hooks for Cloud Messaging and other Google services. Speaking of Google services, the Play Services APIs and Hangouts were also announced.

The former adds several features that are self-contained, allowing developers to upgrade apps without requiring newer versions of Android. The latter consolidates several messaging products and effectively removes support for XMPP in favor of the Hangouts API.

Google has been slowly backing away from open standards recently in favor of their own APIs, e.g. lost in the most recent Spring-cleaning announcement, which included Google Reader, was an announcement that the CalDAV API would be limited to a whitelist in favor of the Calendar API. Details are scant, but the loss of CalDAV support for calendar applications like iCal is kind of a big deal.

I/O this year focused squarely on developers, not on product. No new Android version was unveiled, but rather, existing Android apps, like Maps, Plus, Play Services and Hangouts, received major, version-independent updates. This points to alleviating a huge developer concern, fragmentation.

So, why give attendees an expensive paperweight in the Pixel?

The other shoe has to be a browser-based IDE that leverages the computing power of the Pixel with Google’s network infrastructure, while providing all Google’s APIs and services in one package, the ultimate ecosystem package for Android and possibly Chrome OS developers.

This isn’t all that crazy, given that services like App Engine already use Google’s infrastructure.

Should this IDE materialize, the key will be distributing the compilation workload between the machine and the server while minimizing bandwidth consumption.

But if you happen to live in a Google Fiber city, you could develop on a Pixel and use Google’s bandwidth and server infrastructure, allowing Google to control even the transport of your code.

So, Google could have all the bases covered, providing an unprecedented development experience.

Anyway, I’m not a developer, but I play one sometimes. I’ve run this idea past a few developers, and it seems plausible, even a little bit desirable.

What do you think?

Find the comments.

Ubuntu/Chrome OS revisited

May 27th, 2013 3 Comments

Diverting away from Glass a bit. I will update more soon.

I became a Mac OS fan when I started using my MBP in 2006. Windows never worked for me, as I am a developer. Microsoft, just give me a terminal. Cygwin, to me, is just something used to alleviate the issue; it does not solve the fundamental problem. Handling “/” and “\” was already complicated enough, not to mention the actual development. Ubuntu at that time was good, but there were a lot of issues with package dependencies and distro upgrades. Some essential packages tended to break and put your machine into an almost unusable state. Mac, on the other hand, got ahead of the competition. It is simple, elegant, and most importantly, it is based on Unix and always works. The development community embraced the power of the Mac, and many development tools were available only on Mac OS X.

A couple of things made me revisit Ubuntu this year. First, I would never consider getting the current new MBP model. I would not buy anything that cannot be customized and upgraded. It seems to me that this is the direction Apple is heading, and if this continues, I will stay away from it.

Second, I bought a 7200 rpm 1 TB drive and attempted to put it into the optical bay of my early-2011 MacBook Pro. It was a complete failure. Long story short, the MacBook Pro and the hard drive negotiated a 6 Gb/s link, but the MacBook Pro firmware does not actually support 6 Gb/s. My beloved MBP, please stop pretending to support 6 Gb/s and just negotiate a 3 Gb/s link; I could have lived with that. The final result was that I had this extra hard drive and did not know what to do with it. I finally decided to put it into a Dell machine and, for fun, tri-boot it with Windows 8, Ubuntu, and Hackintosh. Although installing Hackintosh was a long, tedious process, amazingly, everything worked like a charm in the end. I liked the Ubuntu Unity desktop and its user-friendliness. No more touching xorg.conf to deal with drivers.

Third, I have been working on Android development. Something not advertised about Android AOSP is that there can be strange issues with your build if you do not compile it on Ubuntu, which is the official platform. I learned this lesson the hard way and spent quite some time debugging in the wrong direction before realizing it. So I need an Ubuntu machine anyway.

Fourth, some development tools, such as GitLab, are available on Linux but not on the Mac.

Today, my MBP’s fan started making a loud humming noise. I am sensitive to these noises while I am concentrating, and it is nerve-wracking. I decided to stop using the machine until it gets serviced. In the meantime, I need a machine, so I took out the first Chromebook, a CR-48, given to me by a friend at Google years ago, and started using it again (sorry, my friend, your machine just was not powerful enough for my daily usage, and I only needed one machine). I went ahead and switched it to developer mode, then installed Ubuntu with the Unity desktop using Crouton. I then installed necessary tools like Java, IntelliJ, and Thunderbird. It is great to be able to switch between Chrome OS and Ubuntu; I am using it to write this post. My complaint with this machine is that I keep hitting the trackpad while typing. Even though this is an old machine, and not very well designed, it works, and you cannot hear a bit of noise even if you put your ear to it.

CR48-preview

I am surprised that a low-end, old machine like a CR-48 can still run smoothly at an acceptable speed. Such hardware would not be able to support running Mac OS X 10.8 or Windows 8, but it can run Chrome OS and Ubuntu 12.04 concurrently. Google is trying to unleash all its power through the Internet, and it builds dumb terminals like the Chromebook. While this may still be a bit hard to achieve, given that a lot of productivity software can only be run natively, and it is certainly not there yet for developers, I do believe it is the future of computing. This is how we can afford to put computing into the hands of every child in the world.

I may eventually replace Mac OS X with Ubuntu in the near future.

First 3 Days as a Glass Explorer (Day -7)

May 24th, 2013 Leave a Comment

Editor’s note: FYI, here’s another post by Anthony (@anthonyslai). If you’re interested in his Glass odyssey, make sure to read the prologue.

Day -7

My friend got the invitation to pick up her Glass. At that time, I had not gotten any updates about mine yet. All Glass Explorers can bring a guest to the Glass event, and she asked me if I wanted to tag along. Of course, that was a yes.

We got to the Google campus, and there was already a Glass product manager waiting outside the building for us. Not surprisingly, he was wearing Glass himself. He kindly led us into the Glass garage, where the Glass fitting is done. They even offered champagne at the event; it felt like the treatment you get when shopping for a BMW.

20130504_164337_239 20130512_125605_159

It was the first time I had touched Glass. It is made of titanium, and it is as light as it gets. There were rumors about Glass being just a phone accessory; that is not true. It is a full Android 4.0.4 device, capable of running everything on its own. The Glass product manager explained everything about Glass to us patiently, and he even guided us around the Google campus. We left after staying two hours.

As an Android developer, I could not resist experimenting with it, but with care. After all, it was not mine. Here are a couple of things you can and cannot do:

1.  You can turn on debug mode.

2.  You can adb into it as a user, but not as root.

3.  You can adb push photos/videos, but they will not show up in the timeline, even if you use the correct naming convention. Apparently, Google stores the timeline entries in SQLite, which makes perfect sense. (I did later manage to push photos/videos and get them to show in the timeline.)

4.  You cannot just plug a mouse and keyboard into the USB port; it will not work.

IMG_0694

After getting a taste of Glass, I was even more eager to get mine.

What We Need Is More Robots

May 24th, 2013 2 Comments

If you’ve seen intermittent connectivity issues this week, apologies, should be resolved now.

I think we can all agree that robots are fascinating. So, when Misha (@mishavaughan) mentioned a robot project, I was immediately interested.

Ludovic Vignals has an Aldebaran Robotics NAO humanoid robot. I know nothing about robotics, but NAO sounds pretty cool. Ludovic decided to experiment with his NAO and has programmed it to read the news.

Why? Because he can. I have to say, I found the robot’s snarky commentary about being ignored to be funny, which makes sense, given the Nao robot also does stand-up comedy.

Enough with the background, how did Ludovic do this?

In his words:

USA Today offers a free (capped volume) API access to some of their daily content. I think they mentioned somewhere on their site that the SOAP API has or will be discontinued so using their REST API is the best choice. Before getting access to the API you will need to go to USAToday.com to request an API access as they don’t allow anonymous connections and they want to know what you are using the API for and also get you to agree to their usage policy.

After receiving an access key, the steps are pretty standard (a rough code sketch follows the list):

1. Request credentials to connect to USA Today
2. Establish a connection to api.usatoday.com
3. Decide what information you want to retrieve
4. Format the query string and issue a GET command, e.g. /open/articles/topnews/home?count=10&days=0&page=0&encoding=json&api_key=foo
5. Parse string into JSON object or parse JSON formatted string directly into native object of the client programming language you are using
6. Iterate through the news and related fields as needed
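
Translated into a rough Python sketch, using the example query string above; the API key is the placeholder “foo”, and the story field names are assumptions, since the feed’s exact schema isn’t shown here:

```python
# Rough sketch of steps 2-6. The key is a placeholder, and the "stories"/
# "title"/"link" field names are assumptions about the feed's schema.
import json
import urllib.request

API_KEY = "foo"  # issued by USA Today after registration
query = (
    "http://api.usatoday.com/open/articles/topnews/home"
    f"?count=10&days=0&page=0&encoding=json&api_key={API_KEY}"
)

# Steps 2-4: connect to api.usatoday.com and issue the GET.
with urllib.request.urlopen(query) as resp:
    # Step 5: parse the JSON payload into a native Python object.
    payload = json.loads(resp.read().decode("utf-8"))

# Step 6: iterate through the news and related fields as needed.
for story in payload.get("stories", []):
    print(story.get("title"), "-", story.get("link"))
```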

Ah the beauty of RESTful APIs. Next, Ludovic used Choregraphe, the IDE provided by Aldebaran, to program the robot to read the news.

image001

image002

Beyond the cool factor, the NAO could serve as a personal assistant, reading notifications, reminding you of appointments and to-dos, processing commands and taking actions. Think Google Now type features but with an audible reminder.

I really like Google Now, but I struggle to remember to use it. If the NAO simply reminded me that I have to leave now to make an appointment, based on traffic conditions, that would be useful. Have you ever been late to a call because you were heads-down on something and forgot the time? Me too. Sure, calendar sent an email reminder, but you ignored that because you were busy.

Add a service like Twilio into the mix, and you could use the NAO to send texts and place calls for you.

The thing about notifications is they typically arrive via email and tend to get lost among all the other email you get. Sure, they’re important, but you might not need to take immediate action, or you’re busy ignoring email. Having the NAO audibly remind you to approve expense reports or take action on transactions waiting for you would be valuable.

The whole point of an assistant is to remind you to do stuff, and the NAO opens new possibilities for automating reminders.

Obviously, there are a lot of other, more compelling use cases for a humanoid robot, and we’ve only scratched the surface here. Kudos and thanks to Ludovic for sharing his work. I’m stoked to see what else he can build. Stay tuned.

Find the comments.

First 3 Days as a Glass Explorer (Prologue)

May 22nd, 2013 Leave a Comment

Editor’s note: FYI, this post is by Anthony (@anthonyslai).

I have decided to write up some posts about my first couple of days’ experience with Glass. I am not planning to go deep into the technical details of Glass in these posts, but I may do so if there is enough interest.

Prologue

When Google first announced Google Glass at Google I/O 2012, I signed up immediately to become a Glass Explorer. Without knowing even a single bit of the Glass specification, 2,000 people still waited in line and signed up for it. Google gave all Glass Explorers a glass with a number on it, claiming that each Google Glass would be carved with its explorer’s unique number. Mine was 1109.

I believed such technology could lead us quite far into the future. The original release date for beta testing was set for the end of 2012. That did not happen, and there were almost no status updates from Google. To me, that was quite a disappointment.

Screen Shot 2013-05-20 at 9.55.56 PM

Shared on Google+ after signing up as a Glass Explorer.

Early this year, to draw more diverse beta testers to Glass, Google started the #ifIhadGlass competition on Twitter. The reaction was enormous, and 8,000 more people would get hold of Glass through the competition. Still, no status updates.

For Google, things move along fastest around Google I/O. As expected, Google finally sent out an email update at the end of April this year. My long wait had finally ended, and I received my Glass during the week of Google I/O. My Glass does not have my unique number on it, but nonetheless, I am happy.

Screen Shot 2013-05-20 at 10.13.19 PM