Over the years, I’ve put a fair amount of thought into the ways that game mechanics can improve software usability. I’m not alone, since there’s now a name for this process: gamification.
The biggest problem designers face when attempting to create engagement through game mechanics is “doing it right.” Every system is different, as are its users, so even when you do it right for some, others will hate it.
I’m a huge fan of Easter Eggs, and I’ve come to think this is the best way to sprinkle game mechanics into existing software.
First off, unexpected discovery creates surprise, e.g. Loren Brichter’s pull to refresh, which debuted in his awesome Twitter client, Tweetie. You could use Tweetie without ever noticing this feature, but I’m guessing very few did because once it was discovered, people wanted to share it. It didn’t hurt that Tweetie was a great app, but pull to refresh became its hallmark feature.
This is the second reason I like Easter Eggs: people share them because discovery feels like an accomplishment, something to brag about and show your friends.
Some gestures are like this. The iPhone has always been a pick-up-and-use device, by design, so many gesture features aren’t obvious. Find a gesture-based action accidentally? Share it. For example, swipe-to-delete for individual messages in iOS Mail has existed for many releases, but you don’t need it to use the app.
The designer in me dislikes features with no affordances, but the player in me likes to discover new ways to do things.
Another great example is shake to report feedback in Google Maps. Facebook is said to have used this in their internal, employee-only builds of Facebook for Android.
I love this because it taps the human emotion of frustration. Phone isn’t working right? Shake that sucker and teach it a lesson. I wonder how many people found that accidentally.
Most recently, enabling the developer options in Android 4.2 is a nice Easter Egg. To be fair, I didn’t find this on my own; I had to search for it. I’m not alone. The post I did on this has been the leading traffic driver recently, which tells me that a lot of people got Nexus devices for the holidays.
Google does a nice job by adding a message, counting the taps and then declaring that you are now an Android developer, as if it were that easy.
As with any game mechanic, you run a risk if you inject discovery into the wrong process, but adding Easter Eggs is a relatively low cost, high gain method. Of course, once you do, you can layer on more traditional mechanics like questing.
Find the comments.
Hope you all enjoyed the holidays and closing out 2012. Continuing something I started doing last year, here are some topics I’ll be following with interest in 2013. You may notice some are repeats of last year, but that’s not to say any have dropped off my radar. I just felt like repeating them for one reason or another.
As always, find the comments to add your thoughts and to clue me in on anything you think I’m missing.
Fringe Mobile OSes
Today’s announcement of Ubuntu for Phones was a timely reminder that I’m always interested in new mobile operating systems. I’ve always enjoyed tinkering with my phone, one reason why I can’t commit to returning to iOS, and as projects like Firefox OS, Tizen, and Jolla Sailfish mature, it’s interesting to get a new look at what a smartphone OS can do.
These projects don’t have scads of existing users, and as underdogs they can experiment in ways that Android and iOS cannot. That’s no guarantee of success, but it could produce some interesting results.
2013 looks to be a year of releases for these projects, in one form or another, and I’m keenly interested to see their takes on the mobile OS.
Sharing Everything as a Business Model
By sharing, I mean physical objects or tasks, not personal status or pictures. Companies like Uber, Airbnb, Zipcar, Getaround, TaskRabbit, Exec and numerous others are disrupting traditional supply chains and creating economies of scale that large businesses cannot match.
This hasn’t gone totally unnoticed, given that Avis has announced it will acquire Zipcar.
Bit of a weird thought, but it’s happening, thanks largely to smartphones, ubiquitous connectivity and the trend toward moderation.
Interestingly, many of these companies fill a need that Craigslist also meets, which leaves me wondering.
Security, Every Year, Forever
2012 brought more stories of hacks and more stories of how people’s entire lives can be pwned with relatively little effort through password reuse, phishing and misrepresentation.
As connected devices continue to invade our lives, creating more attack points, it’s more important than ever to take security as a personal responsibility. Use unique passwords (ahem, 1Password), keep your devices fully patched, and accept that being secure will be annoying, but is essential.
Tell your friends and family, and rest assured, I doubt anyone is posting nasty videos about you online.
Maker Culture

I love the maker culture, and even if I never get hands-on and finally make something myself, a distinct possibility, I’ll always be impressed by what people make.
The average person’s experience with electronics is as a finished product, made in factories by robots and people in facemasks. So, there’s something wonderful about remixing and bending the world to do something more or something different.
Or maybe it’s just me.
Aside from my child-like wonder, I like the fact that makers don’t care about looks. Wires and circuit boards are exposed, soldering is visible, and many projects look like what they are: mad scientist experiments.
Aesthetics are undeniably important, but it’s refreshing to see function over form every so often.
New Ways to Interact

Two words: Leap Motion.
I’m a skeptic by nature, but Leap Motion looks fantastic. I’m excited to see what it can do, despite the killer footnote:
The Leap Motion controller works with Leap-enabled software only. Functionality may vary depending on software.
Saw that one coming. Still, I’m always interested in the evolution of how we interact with technology. I’m looking forward to seeing more from the Google Glass project, although we all should check with Steve Mann before pre-ordering.
Whether it’s personal assistants like Google Now or data mashing, e.g. Sitegeist, Field Trip, I want data to work for me. Everyone does, until it gets creepy.
The Disconnection Movement
Netflix went down on Christmas Eve. As I cursed the TV and then the tablet, it occurred to me that this was an odd problem to have. Had you told me this story on Christmas Eve in 2003, 1993, 1983, I would have had a much different reaction.
After Christmas, I went searching for a recent movie, released in 2011, to watch with the family. It wasn’t available to stream, so I had to find a physical copy to rent or buy. After trying three options, I hit pay dirt on the fourth.
How very 2010, or something.
Disconnection is the new black.
Happy 2013. Find the comments.
Until today, when I got the crazy idea that Evernote would be a great place to store and search the 5,698 Google Reader shared items I accumulated before Google yanked that feature last year.
Aside from other people’s positive reviews and the desire to test drive the service, Evernote has search and is portable across all my devices. Plus, even though there is a free option, you can pay for Evernote Premium. Part of the reason I’m doing this at all is because I over-invested in a free service, and from experience with Posterous and Delicious, I know that getting data out of a free service can be difficult.
So, I’m starting out with free Evernote, but as I consolidate the myriad of stuff that interests me into one place, I’m likely to need an upgrade. Paying should get me better data portability if I need it, right?
I should note that what follows is a consolidation of work done by others that you could find by searching. I’m documenting it here for my own use later and to provide some link love to the people who helped. Thanks people.
Maybe someday I’ll do some real work myself.
Since Google recently added Reader to its Takeout data portability project, I had a 28 MB JSON file as a starting point.
As a quick caveat, one reason I put off joining Evernote so long is that it’s a very fully-featured service, and it’s very possible that I overlooked some great feature that could have saved me effort. Feel free to share anything I missed in the comments.
After some digging, it became apparent that the process wouldn’t be as easy as an import into Evernote, whose desktop client only supports a proprietary .enex file format.
I decided to start by creating a bookmarks-style HTML page for the Reader Shares, something I’d been planning to do anyway to give me a quick list for searching. It was pretty clear that Evernote wouldn’t be able to do anything with JSON, so converting to HTML seemed like a step in the right direction.
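For the curious, that conversion step can be sketched in a few lines of Ruby. Note that the JSON field names here (“items,” “title,” and the “alternate”/“href” nesting) are assumptions about the Takeout export format; check them against your own file:

```ruby
require 'json'

# Rough sketch: convert a Google Reader Takeout JSON export into a
# Netscape-style bookmarks HTML file. The field names ("items",
# "title", "alternate"/"href") are assumptions about the export
# format; adjust to match your file.
def reader_json_to_bookmarks(json_text)
  items = JSON.parse(json_text)['items'] || []
  links = items.map do |item|
    href  = item.dig('alternate', 0, 'href')
    title = item['title'] || href
    %(<DT><A HREF="#{href}">#{title}</A>) if href
  end.compact
  <<~HTML
    <!DOCTYPE NETSCAPE-Bookmark-file-1>
    <TITLE>Bookmarks</TITLE>
    <DL><p>
    #{links.join("\n")}
    </DL><p>
  HTML
end
```

From there, any bookmarks-aware tool can take over.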
Evernote can import an HTML file as a note, but it won’t create individual notes for each link, which is what I wanted.
After quite a bit more digging, I found a Perl script written by Thomas Schädler that takes a bookmarks file and converts it to the .enex file format for import into Evernote. Bueno.
After installing ActivePerl, I ran Thomas’ script and poof, I had a file ready for Evernote import. I imported all 5,700 links successfully (somehow I think I gained two extra, but meh), synced them to Evernote, and now I can finally close the book on Google Reader Shared Items.
Bonus, if Google pulls the plug on Reader Starred Items, I’ll know where to put them.
Now that I have a home for all my Reader Shares, the next step will be to move my Chrome browser bookmarks over and reclaim my Delicious links, assuming I can do that. With Twitter finally rolling out an archive feature, maybe I’ll add all my tweets too.
I’ve been using Pocket, formerly Read It Later, a lot lately as a replacement store for all the links I find. I love the ease it provides, and happily, Evernote and Pocket provide an integration between the two services.
So, it looks like I’ll be sticking with Evernote, given that it’s become the home for all my links and such. Frankly, this is the best way to force myself to use something new, i.e. make it very necessary.
Find the comments with any Evernote tips and tricks or any general thoughts you’d like to add.
This time of year, bloggers are guaranteed fodder, posting predictions for the upcoming year and analyzing the past year’s predictions.
I gave up on that last year and have no plans to do it again. I’m just not smart or industrious enough.
Anyway, check out their predictions, very interesting stuff.
For some reason, there was a box full of freebie covers for various devices in the office, and I took this opportunity to quiz my colleagues about the same argument that the old ‘Lab had way back when.
I was mildly surprised that everyone valued device safety over the unadulterated form of the device. These are design-conscious folks too.
I suppose this utilitarian viewpoint makes sense. Drop a few bills on a smartphone or tablet, and you’ll feel protective of it.
For years, I covered my original iPhone with a case, and when I switched to the HTC EVO 4G, I added a rather chunky case to protect it, a retroactive move after I’d accidentally drop-kicked it while fumbling it out of my pocket; too little, too late.
At some point, probably swayed by that discussion we had and by the additional cost, I stopped buying cases. I carried the Nexus S without one, and yeah, it got dropped and scratched a little.
After recently getting a Nexus 4, which is a fantastic phone by the way, I’m faced with this question again: should I sully the beautiful device’s form to protect it?
I love design of all kinds, and it’s really a shame to hide the lines of a device like an iPhone or a Nexus 4. Of course, part of the beauty of these devices comes from glass, used liberally for the screen and aesthetic purposes. Nearly all of the Nexus 4 is glass, including the back, which I know will eventually be its undoing.
It’s an opinion, but cases tend to ruin the design lines and make devices bulkier. Again, it’s all relative, but these devices are designed to be used as-is, not in a case. It’s possible I convinced myself that my old iPhone and EVO felt better without cases, or maybe they really did; impossible to tell.
On the other hand, when presented with an object of considerable value, people are generally willing to spend a little on a case to avoid a costly repair or a constant reminder of how they failed to protect it.
A bit odd when contrasted with many people’s hesitancy to spend on apps that make the device incrementally more useful. Of course, that could be attributed to placing higher value on physical goods, but it’s interesting nonetheless.
Anyway, I probably won’t buy a case for the Nexus 4, even though I expect to drop it at some point. That will hurt, but I guess I’m committed to the original design, even if that ends in tears.
Where do you stand on this? Find the comments.
I rarely use iTunes anymore, but after OS X offered up the iTunes 11 update, I figured I’d take it for a quick spin, given all the coverage I’d read about the drastic redesign.
After a few minutes, I noticed something was missing: Cover Flow.
As an unabashed design nerd wannabe, this made me somewhat sad. I sometimes use Cover Flow, more in the Finder than in iTunes, but its removal signals the end of an era. Whether you used Cover Flow or not, you’ve probably used software influenced by it.
Cover Flow was an influential interaction with far-reaching impact, giving users a fresh, pleasing way to browse through objects. Even though it’s essentially a nice-to-have, I’ll miss it, kind of like the loss of the Start button in Windows 8, on a smaller scale.
I suppose the removal of Cover Flow could be a signal that Apple is moving toward a more flat design aesthetic. If so, it seems odd, given that Cover Flow has always been more natural for touch devices, even though it debuted before the iPhone.
Whatever the reason, I’ll miss Cover Flow and hope it will remain in OS X.
Find the comments.
Let’s follow Mark as he tampers with Siri and find the comments here or on his original post.
A few months ago, just after Siri was released to the world, I got the uncontrollable urge to see what Siri could do for me. I was planning to leverage the Siri APIs and slap some Ruby code on top to scratch my itch, but my enthusiasm was quickly doused when I realized that Apple didn’t release any public APIs and probably won’t for the foreseeable future.
Siri Proxy to the rescue!
Siri Proxy is a proxy server for Siri, written in Ruby. It sits between Apple’s servers and your iPhone, allowing you to intercept all traffic to and from the phone, no jailbreak required. It also has a handy plugin framework based on standard Ruby gems, which means I could use it for my own nefarious purposes.
Setting up Siri Proxy is well documented in the README of the Git repo, including several YouTube videos, so it’s outside the scope of this post. Instead, I’ll delve deeper into a plugin I wrote for Siri Proxy and what I learned in the process about Siri and a voice-driven user experience.
The plan is to control my Logitech Squeezebox Radio from Siri, using just my voice. Given that the Squeezebox server controlling the radio accepts Telnet connections and command-line queries, this shouldn’t be too hard. I’ll write a plugin for Siri Proxy that listens for certain words and triggers calls to the radio. On with the show.
I first create an object that represents my radio, allowing me to connect and talk to it:

The constructor connects to the radio’s default server and port (both configurable) using the Telnet protocol. Once instantiated, we can call any method we want on the object. Unknown calls get caught by method_missing, which passes the method name and parameters over Telnet to the radio. This simple construct lets us call any known Squeezebox server command on the Squeezebox object with very little code; as long as the server understands the command, i.e. as long as the method we call is a known command, it works. Now that we have this object, writing the actual plugin is peanuts.
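A minimal sketch of such an object (the class layout is my own simplification; the defaults assume the Squeezebox CLI’s conventional localhost:9090, and a plain TCP socket stands in for the Telnet connection, since the CLI is just newline-terminated text):

```ruby
require 'socket'

# Sketch of a radio wrapper. Host/port defaults are assumptions
# (the Squeezebox server CLI conventionally listens on port 9090).
class Squeezebox
  def initialize(host = 'localhost', port = 9090)
    @host, @port = host, port
  end

  # Any unknown method becomes a CLI command string,
  # e.g. radio.power(1) sends "power 1".
  def method_missing(name, *args)
    send_command(([name] + args).map(&:to_s).join(' '))
  end

  def respond_to_missing?(_name, _include_private = false)
    true
  end

  private

  def connection
    @connection ||= TCPSocket.new(@host, @port)
  end

  def send_command(command)
    connection.puts(command)
    connection.gets&.chomp # the server echoes a response line
  end
end
```

The connection is created lazily, so the object can be instantiated even before the radio is reachable.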
On initialization of the plugin framework, we initialize our Squeezebox object, which connects us to the radio. We then listen for certain commands coming from Siri and trigger the appropriate calls on the radio object, which in turn passes them to the radio itself for execution. The plugin only supports three types of commands (radio on, radio off, and playing music by a particular artist), but you get the gist of it. It could easily be extended with many more commands, like forward, backward, etc.
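Stripped of the Siri Proxy wiring (the real plugin registers phrases through the framework’s listen_for hooks), the dispatch boils down to a table mapping phrase patterns to radio calls. The patterns and the playlist syntax below are illustrative guesses, not the exact ones from my plugin:

```ruby
# Toy version of the plugin's dispatch: phrase patterns mapped to
# calls on the radio object. Patterns and playlist syntax are
# illustrative only.
class RadioPlugin
  COMMANDS = {
    /\bradio on\b/i              => ->(radio, _m) { radio.power 1 },
    /\bradio off\b/i             => ->(radio, _m) { radio.power 0 },
    /\bplay (?:music by )?(.+)/i => ->(radio, m)  { radio.playlist 'play', m[1] },
  }.freeze

  def initialize(radio)
    @radio = radio
  end

  # Returns true if the utterance matched a known command.
  def handle(utterance)
    COMMANDS.each do |pattern, action|
      if (m = utterance.match(pattern))
        action.call(@radio, m)
        return true
      end
    end
    false
  end
end
```

Adding a command is just another pattern/lambda pair, which hints at both the appeal and the brittleness of this approach.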
So, what have we learned? Well, it turns out it’s ridiculously easy to listen in on traffic to and from Apple’s servers, borderline dangerous I’d say. If you control the DNS server, you can listen in on ALL Siri traffic. You can also issue commands to the phone (e.g. send an SMS on the user’s behalf) without the user being any the wiser. Fun for you, not so much for the user. In an enterprise setting, this would obviously be completely unacceptable, and I therefore cannot recommend this approach to anybody trying to build a business around the idea. However, for home use, it’s a lot of fun and quite useful.
As for user experience, I think voice has a bright future, but the example I created exposes the Achilles’ heel of the whole concept: understanding exactly what the user is trying to do. “Radio on” and “Radio off” are simple commands, but even those could be expressed by users in an infinite number of ways. A polite user might ask “Could you please turn the radio off?” while a not-so-polite user might shout “Shut up!” Some defensive coding and clever regexing might help here, but you can easily see that as the vocabulary expands, this will become very difficult very quickly indeed. What makes for a good user experience is not the conversion of voice into text, but extracting intent from that text. If you cannot do that, your application will fail, no matter how good it is at converting voice to text. I will delve a bit deeper into this processing of (natural) language in a future post.
Here is a video of the plugin in action on my own radio.
All the code, including installation instructions, is available on GitHub.
I would have added this to my Friday aggregate post, but I didn’t see it until just now.
The ever-helpful Fat Bloke has a post on Growing your VirtualBox Virtual Disk that I highly recommend you Pocket, bookmark or whatever you use to keep useful links handy for later use.
I actually went through this very process myself to upgrade my Windows 8 Consumer Preview VM to the released version, which needed additional space to perform the upgrade.
The process is pretty straightforward. With the VM powered off, run:

VBoxManage modifyhd <uuid>|<filename> [--resize <megabytes>|--resizebyte <bytes>]

For example, VBoxManage modifyhd Win8.vdi --resize 40960 would grow a hypothetical Win8.vdi to 40 GB, since --resize takes megabytes. Then boot the VM and don’t forget to expand the partition to include the newly allocated space, e.g. via Disk Management in Windows 8.
Anyway, save this for later because I guarantee you’ll need it at some point.
Update: While Fat Bloke mentions installing GParted from Ubuntu’s LiveCD to resize a partition, Bill Taroli (@btaroli) suggests, based on his experience, going straight to the source and using the GParted LiveCD as another course of action.
The more you know.
Seems like Friday has become my day to clean up all the open browser tabs I accumulate during the week, add some context and content and, ideally, get your thoughts. Here we go.
Mobile OS and the Back Feature
The Back button originated in the browser, and it has migrated over to native apps in very unpredictable ways. Frankly, as the web evolved from static pages to functional applications, back lost its focus.
Obviously, an app developer is constrained by the capabilities provided by a given SDK and influenced by personal design ideas and financial motivations.
This is one of many discussions that tend to degenerate into an Android vs. iOS holy war, but I’ll take my chances.
Whenever I use iOS, I encounter one of the pain points that Lukas mentions, i.e. each app embeds its own WebKit instance via UIWebView to handle links. This is detrimental for a number of reasons: UIWebView doesn’t perform like Mobile Safari, and browsing history and browser metadata are locked away in each app.
Android does a better job by surfacing back as an OS feature and by providing intents, allowing back to navigate to the app that originated the request, e.g. if I click through on a link in the Google Reader app, I’m sent to Chrome and returned to Reader when I choose back. Even so, the progression can get confusing because, again, the implementations aren’t always consistent.
The net is that users lose because of inconsistent implementations.
As a bonus, check out Guy Kawasaki’s thoughts on Android. They may surprise you.
Sitegeist is a mobile application that helps you to learn more about your surroundings in seconds. Drawing on publicly available information, the app presents solid data in a simple at-a-glance format to help you tap into the pulse of your location. From demographics about people and housing to the latest popular spots or weather, Sitegeist presents localized information visually so you can get back to enjoying the neighborhood. The application draws on free APIs such as the U.S. Census, Yelp! and others to showcase what’s possible with access to data.
Although Sitegeist is one of the many apps that I’ll probably only use once or twice, I like the information design and presentation. I can see Google adding this type of contextual data to Google Now, which gets better every day.
Anil Dash on the Web We Lost
Although Anil’s post is ultimately positive, it depressed me to read and remember how the intertubes were only a handful of years ago. He’s right that the pendulum will swing back, but like a pendulum, it will come back incrementally, meaning we will have lost some of the web’s freedom irrevocably.
we’re going to face a big challenge with re-educating a billion people about what the web means, akin to the years we spent as everyone moved off of AOL a decade ago, teaching them that there was so much more to the experience of the Internet than what they know.
Waiting for Movable Type
Interesting insight from Ryan at 37signals about how tablets are waiting for their Movable Type. I’d posit that mobile generally is waiting for that moment, i.e. the one where self-publishing to phones and tablets becomes as easy as it is with a blog. Refer to Anil’s point about what we’ve lost.
And Because I Can
Finally, here are Dieter Rams’ Ten Principles for Good Design, easier said than done, unless of course, you are Dieter Rams.
As always, find the comments.
This session will be on Sunday of the show, hosted by the IOUG’s WebCenter Special Interest Group.
I like this idea because it helps close the loop between attendees and presenters. If you’re like me, when you map out what sessions to attend at a conference, you start with ones that definitely interest you and then fill in the time around them. Usually, the Sunday before a big show starts is sparsely attended, so if you’re planning to attend that day, this is your chance to have input into what’s presented.
Interested? Here’s how to give suggestions:
If you always wanted to know something crazy about how WebCenter works, please take our survey so we know what to present. You can also leave a comment, email us at info at bezzotech.com, or send it to me directly. We’ll tally up the requests and let the WebCenter faithful decide what our talk will be!
Related, if you ever needed more reason to attend a chat given by Floyd Teter (@fteter), he did this for a talk he gave at Alliance 2012, ad hoc, during the session. What a great way to reward people for attending your session: give them exactly what they want at that exact minute.
Too bad I missed that one.
Editor’s note: The following post comes from Mark Vilrokx (@mvilrokx), one of my colleagues in Apps UX. Mark has been working on the Voice project that Ultan (@ultan) mentioned in his recap of the OUAB meeting last week, and as part of his work on Voice, Mark has done some serious hacking that I wanted to publicize. Happily, he agreed, and here we are.
Mark blogs at All Things Rails, and maybe he’ll reignite my Rails interest, which has waned since we stopped working on Connect. Anyway, enjoy Mark’s adventure, Build your own Siri Application … in the browser! and find the comments here or on his original post.
I have immersed myself recently in voice-driven applications and was asked to knock up a quick prototype of something “that looks and acts like Siri.” That’s a pretty tall order, I thought, but after some research I came up with the following…
The first problem to solve is speech recognition, i.e. converting the voice data into text. The data has to be streamed to a server, which performs the actual recognition and sends back a string of what it thinks you said. That’s some complicated stuff right there. Voice recognition is a science in itself, and I also didn’t want to deal with the server setup. Luckily for me, it turns out that Google has already built all of this into the Chrome browser, courtesy of the HTML5 Speech Input API. All you have to do is add a special attribute to an <input> and users can simply:
“click on an icon and then speak into your computer’s microphone. The recorded audio is sent to speech servers for transcription, after which the text is typed out for you.”
Sounds about right to me, first problem solved!
The second challenge is to extract meaningful information from the text to understand what the user wants you to do. When the user says “What is the weather forecast for tomorrow,” you have to figure out, from this string, that the user … well … wants to see the weather forecast for tomorrow. If this is the only case your application has to handle, it’s pretty easy:
if utterance =~ /weather forecast/i
  show_weather_forecast  # hypothetical handler for the matched request
else
  return "I do not know what you mean, try asking again (e.g. what is the weather forecast for tomorrow)"
end
But also pretty useless.
Clearly, you could never write a case statement big enough to handle all possible scenarios, or even a fairly limited one; what happens if the user asks “Show me the forecast of the weather,” not to mention “Is it going to rain tomorrow?” You can see that this processing of natural language can get fairly complicated very quickly. As it turns out, this is another field of science (Natural Language Processing, or NLP) that people much smarter than me have worked on for decades. One example of a website that uses NLP to answer questions is WolframAlpha. And guess who uses WolframAlpha … that’s right: Siri. So if it is good enough for Apple, it’s certainly good enough for my prototype, so I registered for a developer license with them and that was it (I suggest you do the same if you want to follow this article). Now I just needed to hook everything up, so I created a Rails application to do just that.
It will be a very simple application with one page that has one form on it. This form in turn will have one field that the user can use to “enter” their question. To support voice entry, I will add the required attribute (“x-webkit-speech”) to this input field. To further emphasize the fact that this is a voice-driven application, I am going to style the input field:
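The essential markup is tiny; this is my own minimal example of such a field, using the non-standard x-webkit-speech attribute Chrome supported at the time (since removed from the browser):

```html
<!-- Minimal example: the x-webkit-speech attribute told Chrome to
     offer microphone input on this text field -->
<input type="text" x-webkit-speech />
```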
Using the following CSS:
Furthermore, that same page will have an area that displays the data: what the user says and what WolframAlpha returns as the answer. We call this the stream and represent it as an ordered list, which gives us the following (using HAML):
Incredibly simple! The user presses the microphone and starts talking. When they stop, Google processes the voice data and returns a text representation (actually it returns several, ranked in decreasing order of “correctness”; we always use the top result). Chrome inserts the text into the text field that triggered the voice input, essentially filling in the form for us with the transcribed voice data. This is all handled by Chrome; we don’t have to do anything for it to work.
When the result comes back from Google, Chrome also raises a JS event that we can listen for. We use this to trigger an AJAX call to WolframAlpha, passing in the received text, i.e. we automatically submit the form to process_speech, a controller method that handles the call to WolframAlpha (I am using the Faraday gem). When we receive an answer from WolframAlpha, we attach it to the stream (in CoffeeScript):
And that is really it: some more CSS and CoffeeScript to make it look pretty, and you are good to go, Siri in the browser in less than 150 lines of code. I haven’t had a chance yet to clean up the code, so it’s not public on GitHub yet, but here’s a video showing the end result.
I’m positively giddy playing around with the face unlock feature on my new Nexus 4. This was announced for ICS a long time ago, but it never made it to my Nexus S, not sure why.
Thanks to Apple, there’s a lot of talk about delighting users, and I’m a very jaded user. So, I’m finding it crazy that this feature has me giggling. It’s weird.
Anyway, one noteworthy item already about Android 4.2.1. The developer options have been hidden. The stock OS has no cues as to how they can be unlocked. I spent a fair amount of time combing through the menus looking for them.
Turns out they’re now hidden behind the Build Number in About. You have to tap Build Number seven times to unlock them; a countdown appears as you tap.
Maybe I’m overly excited about face unlock, but I found this a bit exciting too. It’s an interesting way to hide developer features as an Easter Egg. Kudos to Google for that.
Update: Changobarrocho points out in comments how to disable and re-hide the Developer Options after they’ve been enabled, i.e. by toggling them off and by clearing the Settings app data, as described in this thread, which also includes the steps to turn on Daydreams, a screensaver-like feature hidden in Android 4.2. I love Easter Eggs.
Been a while, but frankly, I haven’t had much to say lately. So, I rolled a few updates into a single post. Enjoy.
Over the weekend, I watched “Urbanized,” the final installment in Gary Hustwit’s design trilogy. If you read here, you’ll recall that I thoroughly enjoyed the first two parts, “Objectified” about industrial design and “Helvetica” about typeface design.
“Urbanized” is about city design and planning, a topic about which I know very little. As with the other parts, this one was fascinating. I’ve always thought city planning would be a fun job, even considering all the outside pressures, i.e. politics, costs, contractors, etc. “Urbanized” is a great collection of stories showcasing modern city planning and the many constraints it must balance.
Coincidentally, I also assembled a train table for my daughter over the weekend, an early Christmas present. It’s a city and almost immediately, I noticed all the problems. She and I will have to redesign it, especially the very sharp turns that none of the trains can make without crashing.
If you like documentaries and design, definitely watch all three films. They’re very thought-provoking and highly interesting.
Macbook Pro Wifi Woes
The ongoing saga of my Macbook Pro’s wifi issues has a couple more chapters now. I’ve been working with a senior tech at Apple Care, and the last piece of advice was to reinstall OS X, not a small task. I did some more digging before committing to that and found a couple other tips that have helped others.
First, I turned off the hard drive’s sleep mode, which kept the problem at bay for several days. This probably isn’t a good long-term solution, given that hard drives do fail, and I suspect running all the time might accelerate that failure.
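For anyone fighting the same thing, the setting lives in pmset, OS X’s standard power-management knob; a value of 0 means never spin the drive down:

```shell
# Check the current disk sleep setting (in minutes; 0 = never)
pmset -g | grep disksleep

# Disable disk sleep on all power sources (requires admin rights)
sudo pmset -a disksleep 0
```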
I also found what seems to be a credible diagnosis of the problem, i.e. the Broadcom network adapter drivers may be to blame. Most people report possible fixes without any indication of the root cause, so this was an interesting find. Apparently, a similar issue occurred with Atheros drivers in the past and downgrading to the last version of the drivers that worked fixed the problem.
I followed the instructions in the post, downgrading back to the Broadcom drivers in OS X 10.6.4, but after a reboot, the OS doesn’t report any wifi adapters, borking wifi completely.
Color me very disappointed. Anyway, I’m still leaning toward this as the root cause, and if it is, reinstalling OS X is unlikely to fix the problem. I’ve reached out to the tech at AppleCare to get his thoughts.
Fun with Android
I treated myself to a new phone, the Nexus 4, and yes, I was one of the poor saps who got caught in the fiasco last week. Thanks to a stubborn will and lots of refreshes, I finally got my device, which arrived yesterday.
So far, I’m liking it a lot, especially Android 4.2. Now, I just need to get service for it. Yeah, I know it doesn’t have LTE (or at least, the LTE radio isn’t active), but I can live with HSPA+ and no contract.
The Nexus 4 isn’t my first brush with Android 4.2. Since I got the Xoom back from Noel (@noelportugal), it’s been an early Christmas for me. The Xoom is a great device, and I’ve had a ton of fun modding it. I normally would wait for Google to update it. After all, the Xoom is essentially the first Android developer tablet, but sadly, it won’t get Jelly Bean 4.2.
Actually, the one I have is locked to Verizon, and it may not even get Jelly Bean 4.1.
So, I unlocked it, rooted it and then had some fun. After some research, I found many reports that the Stingray (Verizon Xoom) works with the Wingray (Wifi Xoom) image, minus of course the Verizon bit. So, I flashed it to 4.1.2 using this guide and ran Jelly Bean for a few days.
Then, I decided to try to run JDeveloper on the Xoom, which should be possible using a Linux VM. I quickly discovered, however, that my root was borked.
After some unsuccessful attempts to fix it, I somehow got onto the idea that I’d flash Jelly Bean 4.2.1, having run across this guide. I was stoked to find that this worked too, and I enjoyed running 4.2.1 for a while, until I hit some nasty bugs. One of the things I’m really looking forward to with the Nexus 4 is using the improved Google Now in 4.2.1.
I don’t know about you, but when I get deep into a project like this, I tend to fall into deep ratholes and find myself wondering hours or days later how I even got there. I know Chet (@oraclenerd) has this issue too. Anyway, this was one of those times.
By now, I had a buggy mess on my hands, so I decided to start all over, just like when I’d first unboxed the Xoom. To do this, I relocked the bootloader, reapplied the original stock image of Honeycomb and then walked through the half dozen or so OTA updates.
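For reference, the return to stock boils down to a handful of fastboot commands. This is a rough sketch, not a guide; the image file names here are the ones from the stock Motorola package I used, and yours may differ by build:

```shell
# With the Xoom booted into fastboot mode and the stock
# Motorola image package unzipped (file names vary by build):
fastboot flash boot boot.img
fastboot flash system system.img
fastboot flash recovery recovery.img
fastboot flash userdata userdata.img
fastboot erase cache
fastboot oem lock        # relock the bootloader to finish the return to stock
```

Note the order: flash first, relock last, since a locked bootloader won’t accept new images.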
After that, I unlocked and rooted again to try the VM install of Linux to get JDev running. I hit a few snags downloading the Ubuntu image, so stay tuned for the rest of that story.
After all that modding, I’m reminded of why I’ve stayed with Android; there’s something rewarding about being able to bend a device to your will. In the past, I’ve lost patience with custom ROMs, which is why I always gravitate to Nexus devices. Still, if you have the time and patience, modding is fun.
Ice Cream Sandwich and Jelly Bean are very capable on tablets, and the Xoom has replaced my original iPad as my tablet of choice. Sadly, the OG iPad is stuck on iOS 5, and as apps update to iOS 6, I’m seeing more and more random crashes.
Something funny I’ve noticed: even though Android is the largest smartphone OS by share, very few people I work with carry Android phones. Even Noel, a maker at heart, insists on iOS. He barely even turned on the Xoom.
Food for thought as to why.
Noel’s Business Card Hack
If any of this strikes you as comment-worthy, you know what to do.
It’s Friday, so why not share some video? Doesn’t really make sense to me either, but sometimes the words just won’t flow.
Moving on, here are a couple of interesting talks, one an official TED talk, the other a TEDx, i.e. independently produced. These two speakers are so different, it’s quite amazing that their ideas and messages actually dovetail.
Matt Ridley is a British scientist and academic, oh and a viscount. Rodney Mullen is an American skateboarder, former member of the famous Bones Brigade, skating pioneer and general innovator. It’s a bit eerie to watch as these two speakers migrate toward the same conclusion, i.e. ideas are better when combined, and communities that exchange ideas improve the overall good.
Check out Matt Ridley’s TED talk, “When Ideas Have Sex.”
Now watch Rodney Mullen’s TEDx talk at USC “How content shapes context.”
h/t to Jeremy for putting me on to Ridley’s talk and to Jason Fried for recommending Rodney’s.
Not-all-that-related, but I’m stoked to watch Stacy Peralta’s Bones Brigade. Having spent a lot of my youth falling off a board, I’ll watch pretty much any documentary about ’80s-era skating or biking. Some of my favorites for your viewing pleasure:
- Dogtown and the Z-Boys
- Rising Son: The Legend of Christian Hosoi
- ESPN 30 for 30: The Birth of Big Air
- Stoked: The Rise and Fall of Gator
The first edition was a couple of Google devices, a Chromebook and a Google TV. Today’s edition is a couple of Android devices, the venerable Motorola Xoom and the Nexus S.
It’s been a while since I’ve used the Xoom, the Granddaddy of Android tablets; it went to Anthony and then to Noel for development purposes and only recently came back home to me. I have to say, I’m liking it quite a bit. It’s running Ice Cream Sandwich, 4.0.4, a major improvement over the Honeycomb build it had the last time I used it.
ICS handles tablet operations very smoothly, and even though the Xoom is nearly two years old, like 50 in tablet years, it’s very snappy. One nice thing about Android is true multi-tasking, which is great when you’re downloading 700 items into Pocket and feel like doing something else.
Mea culpa, multi-tasking does matter, especially if you’ve been teased by Apple’s bogus version of it.
Anyway, on to Fuse, which ran beautifully on the Xoom, which doesn’t really qualify as a low-powered device. The Xoom rocks an Nvidia Tegra 2 T20, 1 GHz dual-core processor with 1 GB of RAM, making it very capable. The screen is 10.1 inches at a resolution of 1280 x 800.
I also tested Fuse from my GSM Nexus S, which runs Jelly Bean, 4.1.2. Even though Fuse isn’t yet intended for use on a mobile phone, I wanted to see how it performed. The Nexus S sports a 1 GHz single-core ARM Cortex-A8 with only 512 MB of RAM, so it’s definitely slower than the high-end phones you’d buy today.
Fuse ran well enough on the Nexus S, which is impressive considering the specs of the device.
On to the pics:
You may notice in the pictures that I’m using the Dolphin browser and not the Android stock browser or Chrome for Android. My preference would have been to use Chrome for all these tests, but alas, the environment I’ve been testing does UA sniffing to ensure that users’ browsers are current.
Fun fact: not all Chromes are equal, e.g. the Google TV Chrome version is 15, while Chrome for Android is 18.
Anyway, to avoid the UA sniffer on the Google TV, I used the string for Chrome 24. Another fun fact, Chrome on the Google TV has an advanced setting allowing for a custom UA, but Chrome on Android does not. In Chrome for Android, you can choose to request the desktop version from the Settings, but the UA reported is still Chrome 18.
Long story long, Dolphin allows for custom UA strings, which I used to bypass the check.
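For the curious, the kind of check I was working around is easy to picture. This is a hypothetical sketch of a UA sniffer, not the actual one in that environment; the function names and the minimum version are my own inventions:

```python
import re

def chrome_major_version(user_agent):
    """Extract the major Chrome version from a User-Agent string, or None."""
    match = re.search(r"Chrome/(\d+)\.", user_agent)
    return int(match.group(1)) if match else None

def is_current_enough(user_agent, minimum=24):
    """Hypothetical sniffer: reject browsers older than the minimum version."""
    version = chrome_major_version(user_agent)
    return version is not None and version >= minimum

# The stock Google TV browser reports Chrome 15 and fails the check...
google_tv_ua = ("Mozilla/5.0 (X11; Linux i686) AppleWebKit/535.2 "
                "(KHTML, like Gecko) Chrome/15.0.874.121 Safari/535.2")
# ...while a spoofed string claiming Chrome 24 passes.
spoofed_ua = ("Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.17 "
              "(KHTML, like Gecko) Chrome/24.0.1312.52 Safari/537.17")

print(is_current_enough(google_tv_ua))  # False
print(is_current_enough(spoofed_ua))    # True
```

Setting a custom UA string in Dolphin is just feeding the second string to checks like this one.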
So, that’s it for this edition. Next, I’m going old school, OG iPad and OG iPhone.
Stay tuned. Oh, and find the comments.
Sometimes I get ideas in my head that I can’t shake until I execute them, like last week when I suddenly decided I would give Windows 8 another try, after reading Jakob Nielsen’s review of the OS. I know, counterintuitive, given the review.
Somewhere about halfway through that process, another idea wiggled into my head, I should try to run Fuse, codename for the new face of Fusion Apps, on as many devices as I could find. Sounds like a fun project, and it should be pretty straightforward. After all, any device with a browser should be able to run Fuse.
So, after I finished upgrading Windows 8 on my Consumer Preview VM, I set about putting Fuse on it, channeling Portlandia.
The Chromebook sports a 1.66 GHz Intel Atom N570 processor with 2 GB of RAM, specs more or less comparable to the original Kindle Fire. The Logitech Revue Google TV has even less horsepower with a 1.2 GHz Intel Atom CE4150 processor and only 1 GB of RAM, close to the Apple TV.
My first few tests were failures, due to VPN issues, but happily, I did find an environment I could use outside the corporate firewall.
So, I give you pics to prove it happened, Fuse on a Chromebook and on a Google TV:
The pictures aren’t that great, but that is, in fact, Fuse running on a 60″ LED TV mounted to my living room wall. It was dark when I took the shots, and I didn’t want to include all my daughter’s toy clutter, etc. in the shot. So, I guess you’ll have to take my word for it.
The only oddity I did notice was that the images in the Directory were 404ing for some reason, but aside from that, Fuse ran as expected on both devices, about as fast as other web applications.
This may become an ongoing series. Misha (@mishavaughan) definitely seemed interested, asking in a very Dr. Seuss way, i.e. can you run it on a . . . ?
I plan to try Fuse on the Xoom, once I get it flashed to Jelly Bean, and on my Nexus S for a look at how it handles a small screen. I suppose I could also try it on my OG iPad, and there’s always the venerable and iconic OG iPhone.
Beyond that I’m searching for other devices with browsers. Ideas?
Find the comments.
Evan Roth’s Angry Birds All Levels is an art exhibit showing the ink tracings of the finger swipes required to complete all the levels of Angry Birds.
[Angry Birds All Levels] comments on the rise of casual gaming, identity and our relationship with mobile devices. Consisting of 300 sheets of tracing paper and black ink, it’s a visualization of every finger swipe needed to complete the popular mobile game of the same name. The gestures exist on a sheet of paper that’s the same size as the iPhone on which it was originally created. Angry Birds is part of a larger series that Roth has been working on over the last year called Multi-Touch Paintings. These compositions are created by performing simple routine tasks on multi-touch handheld computing devices [ranging from unlocking the device to checking Twitter] with inked fingers. The series is a comment on computing and identity, but also creates an archive of this moment in history where we have started to manipulate pixels directly through gestures that we were unfamiliar with just over 5 years ago. In the end, the viewer is presented with a black and white representation of the gestures that have been prescribed to us in the form of user interaction design.
Here are all the tracings set to that familiar tune:
Art is subjective, natch, but I like the meta-nature of this, i.e. creating ink and paper renditions of an almost completely virtual activity, boiling down an emotional reaction and the result of millions of lines of code into a few smudges on paper. Cool.
I’m working through a Windows 8 upgrade right now and plan to report back on that after it finishes.
In the meantime, I wanted to share some recent newsworthy items from my new team, Applications User Experience.
At OpenWorld 2012, the Apps UX team showed off a new interface for Fusion Applications, a custom shell built in ADF, designed to be portable across disparate devices and platforms (including devices like SMART Boards, kiosks, internet-capable TVs) and distilled down to surface the most common functions.
Note to self, try this on my Google TV, you know, for the lulz.
The recent launch of ADF Mobile has Oracle developers stoked to extend their ADF chops to mobile devices, and your friendly neighborhood Apps UX team wants to help you get started by providing design patterns for mobile to make life easier.
These are the very same design patterns used internally to build the latest release of the Fusion Mobile Expenses iPhone app.
If you like this kind of content, keep reading here and add these blogs to your reader:
If you’ve used Google Reader for a long time, like I have, this is huge news.
A year ago, Google revamped Reader’s UI and removed the sharing features in favor of G+ sharing. I’d been using Reader shares for several years, both to share and consume content, as well as to bookmark interesting content for later retrieval.
Shares were still available on a public page, and there were several hacks out there that people built to extract their Reader content into useful archives. Reader also provided an export to JSON, but that never pulled my complete history of shares, hitting a hard limit that I couldn’t seem to hack.
Every few months, I’d comb the web for solutions, but nothing worked. So, I’m very happy to report that I’ve successfully used Google Takeout to reclaim a nicely-formatted JSON file of all my Reader shares, as well as all my Starred items, subscriptions and all the old social data that was removed. I think Google promised this at some point, and I’m glad they kept their word.
Now, I can make a nice local webpage of links, or maybe I’ll trust another service like Pocket to keep all my links safe.
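In case it’s useful to anyone else, that local page of links is a few lines of Python. This sketch assumes the Takeout file follows the structure mine did, an “items” list where each entry has a “title” and an “alternate” list of hrefs; your export may differ:

```python
import html
import json

def links_page(takeout_json_path, output_path="reader-shares.html"):
    """Turn a Reader shares JSON export into a simple local page of links.

    Assumes each entry has a "title" and an "alternate" list with an
    "href" -- adjust the key names if your Takeout file differs.
    """
    with open(takeout_json_path) as f:
        data = json.load(f)
    rows = []
    for item in data.get("items", []):
        title = html.escape(item.get("title", "(untitled)"))
        alternates = item.get("alternate", [])
        href = alternates[0].get("href", "#") if alternates else "#"
        rows.append('<li><a href="%s">%s</a></li>'
                    % (html.escape(href, quote=True), title))
    page = "<html><body><ul>\n%s\n</ul></body></html>" % "\n".join(rows)
    with open(output_path, "w") as f:
        f.write(page)
    return len(rows)  # number of links written
```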
Find the comments.
I toyed with some cool adverbs to add to insightful, e.g. surprisingly, inspiringly, but they all seemed to cheapen his work. So, I chose delightfully because everything is better with a side of delight.
Matt writes and draws comics, much in the same way that Randall Munroe and Chris Onstad do. You know, the kind of stuff you’d never get to read if it weren’t for internets because no publisher in her/his right mind would allow them to go to print.
That is until there was a rabid audience who would pay for a book.
Matt Groening’s Life in Hell is the exception to that rule, natch.
Anyway, as you were.