My wife just bought an app for her iPhone. I asked her if she liked it, and she replied that it looked good, but she was still trying to figure out how to use it.
Given how distilled the feature set of most apps is, this is a disastrous statement. There should be no figuring-out period. Granted, she was productive quickly, so this was a word-choice problem, not a value judgement.
Right after that, I read this:
The dangerous gap between those who make software, and those who use it
Jakob Nielsen refers to this succinctly as the Usability Divide.
I’ve covered this several times in the past, but it’s still a big deal. I hate doing family tech support as much as the next nerd, but I don’t hate the players. I hate the game.
This topic keeps coming up, more frequently every year. How long until someone fixes it? Can we fix it?
On an interesting, semi-related note, Rian points out that what average people think of as the “browser” is shifting from Google to Facebook, a fact that is not lost on Facebook’s leadership.
Thoughts? Find the comments.
In an attempt to be contrarian, my first thought was this: while it is certainly beneficial to solicit feedback from actual users, how good can this feedback be if you’re trying to introduce an entirely new concept? Aren’t these users going to look at the new concept in light of what came before and immediately object that it’s not like Microsoft Works or whatever?
But then I thought some more, and realized that those of us who are supposedly technically advanced have the same blinders. If a new concept is distressingly different from our favorite toy, we will object as much as the non-professional user.
Of course, you and I are lucky. My company’s software is primarily used by a small segment of the population (police officers and forensic analysts). The Labs’ software is primarily used within a single technology company, where the users presumably have above-average computer skills. The challenges are much greater for people who build consumer products.
I was wondering about one other thing: how much does computer literacy vary with the age of the user? Anecdotal evidence suggests that 30-year-olds are, on average, more literate than 50-year-olds, and that 15-year-olds are, on average, more literate than 30-year-olds. Does actual data bear this out?
Your point is completely valid for disruptive innovation: it’s new, so users either won’t understand its value or don’t want it yet. The overarching point here is about sustaining or incremental innovation, i.e. what makes an existing product better.
If you went behind the scenes of any development shop, even a small one-person operation, the decision process for what goes into a new release would be eye-opening for most users.
It’s very difficult to choose what to include, what to postpone, and what to drop entirely, and every shop does it slightly differently, which is another reason why software is hard.
Computer literacy is a moving target and cannot really be quantified, but there is some related evidence. In that vein, I saw this today:
http://radar.oreilly.com/2012/07/the-web-as-legacy-technology.html
“Average age of @guardian Facebook audience is 29. Website is 37, print paper 44. Amazing channel effect, really. #newsrw”
Not exactly what you want, but related.
Context of use in requirements gathering. The challenge lies in educating stakeholders to do it. See the CISU-R (which included Oracle input): http://zing.ncsl.nist.gov/iusr/documents/CISU-R-IR7432.pdf
Sure, this is a good step, but so many gaps remain. At some point, you hope the rising tide of consumer technology will raise everyone’s skill level, but that creates new problems, e.g. the fallacy that development is easy, propagated by mobile apps.