I love a good pun. This is way off-topic, even for me, but I can’t pass up not one, but two Sriracha stories.
From the rare interview with Quartz:
His dream, Tran tells Quartz, “was never to become a billionaire.” It is “to make enough fresh chili sauce so that everyone who wants Huy Fong can have it. Nothing more.”
Well worth a read.
Then today, there’s news that the city of Irwindale is suing to stop production of Sriracha due to the smell produced by the Huy Fong factory, which sounds like it might be similar to, if not exactly, aerosolized capsaicin, a.k.a. pepper spray (h/t Foodspin).
Of course, if this suit is successful, production will be impacted, which means higher prices.
Tying it all together, Tran says he has never raised the wholesale price of his hot sauce.
I like Google Now. Although it’s not a fixture in my daily device life, I use it pretty often, and it just keeps getting smarter and more useful.
Case in point, a couple weeks ago, I got my first Activity summary card, or at least, I noticed it for the first time.
Pretty interesting stuff I suppose. My biggest takeaways were: wow, if I could track older months this might be useful, the health tracking market is getting very crowded, and holy crap, Google could pwn that entire, admittedly nascent, market if they want.
To that last point, I use so many Google services that they could offer me a truly valuable summary of my activity, including the reasons, e.g. because of the events in my calendar, I walked more, or because of my searches for walking directions, I walked more.
From what I’ve seen so far, most fitness trackers are marginally useful at best. However, just by tracking my month-over-month walking, Google could offer cards that give me a decent incentive to be more healthy, e.g. the Travel Time card that shows time to work or to make an appointment on time could easily recommend that I walk, or walk a longer route, to spur healthy choices.
Plus, that card could create a competitive incentive by comparing my miles walking month-over-month.
If nothing else, it’s a reminder that I should at least consider walking.
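To make that incentive concrete, here’s a toy sketch in Python of the kind of month-over-month comparison such a card could show. The function name and numbers are entirely hypothetical, not anything Google Now actually exposes:

```python
# A toy month-over-month walking card; the function name and data
# are made up for illustration, not a real Google Now API.
def walking_card(miles_by_month):
    """Compare the latest month of walking to the month before it."""
    prev, latest = miles_by_month[-2], miles_by_month[-1]
    change = (latest - prev) / prev * 100
    direction = "more" if change >= 0 else "less"
    return f"You walked {abs(change):.0f}% {direction} than last month."

print(walking_card([30, 36]))  # two months of miles walked
```

Even a one-liner like that gives you a nudge and a number to beat.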
I read about an internet of things startup today, Greenbox, that is building a smarter sprinkler system. In classic pitch terms, they’re calling it Nest for the garden. Sounds like a good pitch, e.g. using weather data to shut off the system when it’s raining.
There’s no reason to think Google Now couldn’t use weather data to produce more useful cards; Now already shows you the weather conditions and forecasts for cities to which you’re traveling in the near future, based on itineraries in your Gmail account.
So, cards that alert you to severe weather conditions or even changes in weather, e.g. to let you know to pack a jacket if the weather is drastically different, would be welcome.
And if micro-weather ever becomes a thing, like Waze for weather, Google could just throw money around to buy whatever startup seems right, just like they did with Waze. Incidentally, I’ve used the combination of Waze and Maps several times to avoid traffic jams.
There are lots of examples rattling around in my head, and I don’t even have all the Now cards enabled.
All this is possible because Google has access to so much data and has so much computing power, and it certainly doesn’t hurt that they also have access to tons of mobile devices that we carry around all the time.
Google Now could easily lay claim to both the future and past, all provided in easy to consume cards. It’s too bad they’re just an advertising company.
Find the comments.
For the nostalgic readers out there, I found the install disks for Windows NT 4.0 in my desk last week.
I thought it would be fun to spin up a VM for old time’s sake, but that idea will take some doing. Stay tuned.
If nothing else, I’ll keep these as a history lesson to amaze children who wonder about the Save icon or to demonstrate what a floppy disk is.
I don’t watch many commercials, but thanks to Hulu Plus, I now know about the Kohler Moxie, a showerhead that is also a portable, Bluetooth speaker. You know, for listening to music in the shower.
Beyond my surprise that such a thing exists at all, I wondered how I had missed this gadget, given how much gadget and industrial design news I read. The Moxie is a beautiful piece of design that combines utility with luxury in a modern package.
I say utility because it’s a showerhead, and also because shower radios are a market segment, also much to my surprise. Amazon has a product category for Shower Radios with more than 250 results.
Rounding out my surprise was the lack of coverage about the Moxie, which has been on the market since late 2012. Here’s one of the few reviews I found.
Given the list price of $199, this is a niche market to be sure, but the connected home is an interest area of mine. So, I’m curious to see how devices like this evolve into the mainstream.
Speaking of, erm, speakers, one of my favorite listening gadgets, Sonos, announced the PLAY:1 today, just in time to add to my holiday wish list. I’ve had a PLAY:3 for several years, which I love, and I’ve been pondering how to justify adding another one. The PLAY:1’s price makes that justification easier.
And finally, MIT is developing self-assembling robots for your amazement and possibly to haunt your dreams. So enjoy.
Traveling a lot lately, which means I’ve had time to sit and observe people in airports. In today’s world, this is a great research opportunity for mobile devices. I’m a huge fan of anecdotal evidence, and I like to watch users in the wild to see what devices they use.
Numbers can tell you a lot about the rise of the phablet, you know, phone + tablet. I liked the term better when I thought it was “phab,” as in phat or fab. Something about phablet in that sense suggested a blinged-out device, all sparkly and trimmed in 14K gold; relatedly, the rise of phablets in Asia led me to this assumption.
Why? Gold iPhone.
Generally speaking, a phablet is a phone whose screen is larger than five inches and smaller than seven, e.g. the Galaxy Note, which could be said to have created this segment. I remember the negative reviews when the Note debuted; I remember thinking, “That will never sell.”
Win some, lose some, am I right?
Lately, I’ve been seeing more and more phablets, and their growing popularity leads me to wonder why.
Beyond the allure of a big screen, I’m not sure why people want such a big device. While in an airport, I got an interesting clue; I saw a person talking on a phablet, which looked comically uncomfortable until she went to the shoulder-chin cradle.
Physiologically, that makes sense. Holding any phone up to your ear for a prolonged period of time tires out the arm, which is why this plastic doodad exists. I think it’s called a telephone shoulder rest or something.
While the telephone shoulder rest has gone the way of the audio cassette tape and floppy diskette, the problem persists. Sure, headphones work, but sometimes, it’s just faster to put the phone up to your ear and go, assuming you even make phone calls.
In this instance, the phablet creates a bigger object to squeeze between your chin and shoulder.
Otherwise, I really don’t have a good understanding about the popularity of phablets. Do you?
Here are some other random observations from my recent time in airports.
- I saw a netbook, and I’m still shocked.
- The Blackberry diehards are disappearing. It used to be common to see them in airports, but not anymore, which can’t be good for RIM, erm, Blackberry.
- On the whole, people love cases for their devices, and surprisingly, at least to me, cases seem to be more for self-expression than device protection.
- I sat across from two 20-somethings for a good long time. Even though they buried their faces into their phones, the content on each device created a social interaction, like reverse sharing. Facebook and Twitter were created to share IRL activity online, but in a twist, online activity is creating IRL interactions. This is weird, but interesting, to me.
- I saw a dude reading a newspaper on his iPad. It looked like a digital copy, probably a PDF, of the print edition. I have no idea what to think of this; it boggles my mind.
Care to chime in or add your strange device observations? Your anecdote research is welcome here.
Find the comments.
This is just plain awesome. From kottke.org:
Stephen Hawking came up with a simple and clever way of seeing if time travel is possible. On June 28, 2009, he threw a party for time travellers from the future…but didn’t advertise it until after the party was already over.
Usually, I’d be all over this type of story, but it somehow escaped my attention. Tracing the coverage, Ars mentioned this party last year but before that, this Hawking piece in The Daily Mail in 2010 seems to be the first mention of this time travel party. And that mention is purely conditional.
I’m wondering if this trickling out of coverage is all part of the experiment. Anyway, even though no one arrived at this party, assuming it really happened, that’s not empirical evidence to disprove the possibility of time travel.
For some reason, I’m reminded of a lottery commercial that ran in my hometown in the 80s. The refrain was “Look, there goes another winner,” over and over; I guess the subliminal messaging has attached itself to other things in my mind.
Anyway, been busy lately, so here’s a recap of some stuff that I’ve found interesting, a curated list of cruft or something.
Ambient technology is evolving, but it seems to fall somewhere between gentle, environmental signals that let you know to pay attention and yet another notification mechanism to ignore. I like the intersection of real things and digital ones, made possible by IoT, and the opportunity to couple industrial and digital design.
It’s an interesting space. Stay tuned for more.
In the vein of WTF Visualizations, I give you Why We Hate Infographics (And Why You Should), a thoughtful look at the infographics fad. As an old school bar-chart-line-graph dude, I got a huge kick out of this.
Big Brother and User Experience
How Monitoring Can Affect User Experience is an interesting read. The Prius example resonates for me, since I’ve ridden in many a Prius cab. I’ve always wondered how distracting the UI is; Paul (@ppedrazzi) told me years ago that the UI dramatically changed his driving habits simply by showing the impact of a tire-burning start.
All interesting stuff, presented for comment.
Even though sailing isn’t my thing, it was impossible to avoid getting swept up by last week’s boat race. Possibly lost in all the drama is the amount of technology these boats and race teams use to help them win.
Check out this video that highlights all the technology out on the water and how the race team uses it.
Even if you’re not into sailing or the competition aspects, these are very cool uses for technology that can be applied to everyday life improvements.
Bonus, there’s a 92-minute behind-the-scenes feature from Make you can check out if you want more.
In the midst of preparing for Oracle OpenWorld, I spent a couple hours building a micro app for our marketing team to keep attendees up-to-date with the latest OOW news and announcements. Head over to http://oow2013.appspot.com to start using it if you have Glass.
I’m looking forward to a fun and exciting week starting today. You can find us at the Fusion UX DEMOground and the OTN lounge. See you all at OOW!
You may have 99 problems, but finding a contest to show off your coding chops during Oracle OpenWorld won’t be one of them. You’ll have two from which to choose, the Oracle Fusion Cloud Developer Challenge or the Applications Express Developer Challenge.
Or hey, why not do both?
I guess because of our challenge exploits last year, Noel (@noelportugal) and I have become the unofficial hackathon guys. So, it’s fitting that Noel will be one of the judges for both these challenges.
Even if you can’t join the fun, stop by the judging events, check out what people have built and chat with Noel. Judging for both challenges is Wednesday, September 25 in the OTN Lounge, Moscone Center South lobby; Apex runs 3-4 PM, Cloud 4-5 PM.
Sounds like a line from some dystopian future movie, but surprise, it’s a real question, i.e. where can I see the fantastic and amazing dueling robot arms?
If you’re attending Oracle OpenWorld next week, you can find the robotic arms, controlled by the Leap Motion, at the following venues and times:
2013 OPN Exchange & OAUX Expo
We’ll be one of several Apps UX teams attending the Oracle Partner Network’s Expo on Monday at 1:30 PM at the Marriott Marquis Mission Grill. If you’re an OPN member, check out the skinny from OPN and Misha (@mishavaughan) and definitely plan to attend.
In addition to the robotic arms, we’ll have some other cool stuff on tap. Can’t tell you what, but I think it’s cool.
As an eleventh-hour thing, we’ll also be at the OpenWorld OTN Lounge, in the Moscone South lobby, on Tuesday at 3:30 PM, and the JavaOne OTN Lounge in the Hilton, on Wednesday at 1 PM.
We’ll also be showing them to the Ace Directors in about an hour, so there’s that. Even the real-time web might not be fast enough to help you make that one though.
I told you, we need more robots. A while back, Misha (@mishavaughan) asked us to build something fun. So, we decided to put our new-ish Leap Motion controllers to good use to control robotic arms, specifically the OWI 535 with the USB interface, because why not.
Although Anthony won this round, they dueled again later, and Noel won. They’ll be dueling all week, so we’ll see who prevails. I’ve tried this myself, and it’s harder than it looks.
Beyond a fun exercise, this was a chance to dig into the Leap SDK, which we plan to use for other projects. Stay tuned for some technical thoughts on that.
Even though the Leap hasn’t been GA for long, there are a lot of hacks like this out there, including this exact one. The Leap is a fun tool to work with, and we’re excited to kick its tires some more.
I keep saying this, but if you read here enough, you’ll recognize that I tend to repeat myself. Noel (@noelportugal) pointed me to this “60 Minutes” segment from January of this year called “Are robots hurting job growth?”
It’s an interesting segment to me, mostly because it showcases new (to me) uses for robots in the workplace. I have a thing for robots, and I’m always interested to see how they’re being used.
Not surprisingly, I like to indulge in the SkyNet/Hal-robot-overlord plot lines too.
Anyway, I think I speak for all of the team when I say that robots fascinate us. The guys are hacking away at an interesting robot-related demo that we hope to show in the coming weeks.
Stay tuned for that and find the comments to share your insights on robots. Or economics or ethics, although those might go unanswered.
For those so-inclined, here are some design-related goodies I found recently. Enjoy.
This is a great read. We’ve all run into counter-intuitive features that seem designed to confuse. Although I like to think the best of humanity, in some cases, confusion is intentional, and this post cites some possible examples.
In my career as a PM, I’ve seen many checkboxes and toggles labeled with confusing double-speak; checking a box next to a negative label seems odd. “Yes, don’t do A” is much less intuitive than “Yes, do B,” but it’s not always possible to convey what’s meant in a succinct label.
In the case of Apple’s ad-tracking feature, you wonder about the motivation. From the post:
If you haven’t been here before, the only option in the advertising menu, “Limit Ad Tracking” is probably selected “Off.”
But let’s take a closer look at the way this is worded. It doesn’t say “Ad Tracking – Off” it says “Limit Ad Tracking – Off”. So it’s a double negative. It’s not being limited, so when this switch is off, ad tracking is actually on.
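The double negative is easier to see if you write the setting out as booleans. This tiny Python sketch mirrors the wording of Apple’s switch (the variable names are mine, just for illustration):

```python
# "Limit Ad Tracking" is the switch shown in Settings; when it's off,
# nothing is limiting the tracking, so tracking itself is on.
limit_ad_tracking = False  # the default position of the switch

ad_tracking_enabled = not limit_ad_tracking
print(ad_tracking_enabled)  # True: ads are tracking you
```

Flipping the switch to “On” is the only way to make the second boolean False, which is exactly backwards from what most users expect.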
Purposefully misleading or not, it’s difficult to understand without some thinking. Anyway, the post is a good read.
This is an interesting read. It centers on the fingers and hands as input devices for “Pictures Under Glass” touch interfaces and the shortcomings of said interfaces. Essentially, touch interfaces ignore many of the physiological advantages of the hand and fingers.
This is a rant without any real guidance for other options, which is fine, and going forward, I’m hoping three-dimensional technologies like the Leap Motion can begin to open new gestural interfaces that make more sense than single-finger swiping.
That’s little more than promise right now, but I have high hopes. As an OpenWorld teaser, the guys are cooking up a Leap-enabled, fun project that will give them experience with the Leap’s SDK. Stay tuned for more.
And finally, by way of FlowingData, check out WTF Visualizations, a great collection of what-were-they-thinking visualizations, some impossible to read or interpret, some humorously not-to-scale, some just wacky, all entertaining.
RWW has dubbed this flood of devices, the “arm race,” and given the persistent iWatch rumors, you’d expect Apple to join the race very soon.
While many are watching and waiting, we’ve decided to jump in with a watch that’s already (sort of) shipping, the Pebble.
Why? First, Jeremy, our fearless leader, has one by virtue of backing the initial Kickstarter project. Plus, Pebble has an SDK and a small but dedicated group of developers already pushing the device and probing its capabilities.
I got my Pebble a few weeks ago, quickly set it up, and got down to business adding watchfaces and pushing watch apps to the device. There are differences between the two.
I quickly hit my functional limit with the SDK, which requires C, a skill I only developed minimally back in the first Clinton administration, so my impressions are basic at best. Even so, here they are.
I’m not a watch guy, so take that into account. The Pebble doesn’t do much, and that’s OK. It’s solidly constructed, with a rubbery band, which can be replaced, and a display that is big enough to read but not too big for the average wrist. It’s chunky in a cool way and very easy to read, thanks to its display, which also contributes to its hefty battery life.
In my unscientific testing, the battery life has been outstanding. It’s been on my desk for at least a week without a charge.
The watch body has four buttons, three on the right side, one on the left, for basic navigation. This design is decidedly right-hand friendly, depending on how much you decide to manipulate the watch. If I were using it left-handed, I might find it difficult to press the buttons.
Again, not a watch guy, but the overall weight seems comparable to other watches.
Initial setup of the Pebble requires a smartphone app, iOS or Android. Although I didn’t try, I think the Pebble would function just fine as a basic watch without the smartphone.
The Pebble uses Bluetooth to communicate with your phone, and the app controls the notifications you receive on the Pebble, which include new emails, texts, calls. There might be more, but again, not a watch guy, limited testing.
I did test the mail notifications from the Android Gmail and stock Email apps; the Gmail app shows new mail, which you can open and scroll through via the buttons. The Email app only shows a general notification of how many new messages you have, with no ability to view the message.
OOTB, Pebble includes three watchfaces, and via the smartphone app, you can find a handful of others, including Big Time, which I’ve been using.
Beyond that, there’s a large community at My Pebble Faces, where you can find a surprisingly wide array of watchfaces.
Installing watchfaces (and watch apps) from your smartphone is the easiest path. For example, if you hit My Pebble Faces from Chrome on Android while running the Pebble app, each watchface has an Install button. After a quick update, the Pebble app pushes the watchface to your Pebble, where you can immediately use it.
Overall, the software experience is good. I’d expect the Pebble app to include more watchfaces over time as third-party developers are vetted, evolving into an official Pebble store, but for now, you have to trust the at-large communities like My Pebble Faces to get variety.
We’re an R&D outfit, so obviously, the SDK matters. After you register a developer account, you can download and install the SDK. The install instructions are very complete. On OS X, you need Xcode or at least the command-line utilities, which is a bit of a bummer. In retrospect, I should have gone the Linux route to avoid that annoyance.
I breezed through the install and all the dependencies and then the Hello World example. Beyond this, I’m useless. So, now it’s time to hand off to Anthony (@anthonyslai), who used to teach C apparently, for the real work.
Deploying apps to your Pebble requires an HTTP server, which isn’t a big deal. It was a bit of a surprise to me; I was expecting communication between the watch and computer via USB, but again, I’m functionally useless, so there’s that.
. . . is a scheme for communicating with the internet from the Pebble, using a generic protocol and without any application-specific code running on the phone. It also provides a mechanism for storing persistent data, reading timezone information, and getting the user’s approximate location.
Nice little workaround.
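To illustrate the generic-relay idea (this is not httpebble’s actual API; the names and request shape here are my own invention), the phone-side piece just forwards whatever URL the watch asks for, so no app-specific code has to live on the phone:

```python
# A sketch of the generic-relay idea: the watch hands over a generic
# {url, payload} request, and the relay forwards it to the web and
# returns the raw response. Hypothetical, not httpebble's real interface.
import json
from urllib import request as urlrequest

def relay(watch_request, opener=urlrequest.urlopen):
    """Forward a generic request from the watch and return the raw body."""
    req = urlrequest.Request(
        watch_request["url"],
        data=json.dumps(watch_request.get("payload", {})).encode(),
        headers={"Content-Type": "application/json"},
    )
    with opener(req) as resp:  # opener is injectable for testing
        return resp.read()
```

Because the relay never inspects the URL or payload, every watch app gets internet access through the same dumb pipe.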
Although I’m not a watch guy, I like the Pebble. I’m definitely interested in the coming tsunami of wearable devices and how they can enrich the overall user experience.
It’s actually refreshing to design for a device like Pebble, which has very limited functionality. These limits actually clear the mind of all the noise of what could be done and focus it on only what matters to the user.
And once I get the Pebble to Anthony, we’ll start working with Jeremy and any other smartwatch users we can find to build some representative cool stuff.
Thoughts about smartwatches generally or the Pebble specifically?
Find the comments.
It’s nearly September, and OpenWorld is coming up quickly.
If you’re attending and are interested in user experience, Misha (@mishavaughan) has a rundown of all the Applications User Experience goings-on during the mega-show.
We’ve got a bunch of sessions, the on-site usability labs, a demopod featuring the simplified UI and mobile goodies, and for those staying through Thursday, a charter bus tour to corporate HQ for a tour of the mighty usability labs, which I highly recommend because, eye-tracking!
Feel free to sign up for the bus tour of the HQ usability labs, if you’re interested in going. Space is limited, etc.
So, here’s to hoping I’ve either helped you fill out your schedule or at least created some difficult decisions.
Don’t think I’ve ever used so many exclamation marks, ever. File this one under brand promotion and feel free to ignore.
The voting period for SXSW Interactive 2014 has begun and continues through September 6, and we have a session proposal that could use your support.
Joining Anthony to handle the speaking duties and provide the color commentary will be friend of the ‘Lab, Ultan O’Broin (@ultan), sometime contributor here, full-time rabble-rouser and all-around good dude.
Check out the supporting videos if you want a taste of what these two will bring to the show.
Voting counts for 30% of the selection criteria, so even if you’re not going to SXSW, we’d appreciate a vote. You’ll have to create an account to vote, but hey, you can use that to vote for some other worthy and interesting topics.
Last week, while I was procrastinating about writing up a review of the Leap Motion, the Chromecast arrived. Nothing like a new shiny object to distract me.
For the unfamiliar, the Chromecast was announced a couple weeks ago at Google’s Android and Chrome OS event. It’s a small HDMI dongle that plugs into a TV and supports “casting” of video to said TV from a computer running Chrome with the Google Cast extension or from a mobile device running Android or iOS.
The latter is accomplished via native apps that have been updated to include the casting option, specifically Netflix and YouTube. For Android, there is a Chromecast app, but for now, it just manages the Chromecast’s settings and lists apps that support it.
Sounds a lot like Apple’s AirPlay, but there are a few differences that caught my attention.
- It’s $35 vs. $99 for an Apple TV to manage AirPlay.
- It can cast video playing in a browser. AirPlay can’t.
- Since it depends on the browser, it’s OS agnostic. AirPlay requires OS X on the desktop.
Also noteworthy, Chromecast can play video stored locally and do full screen sharing, something very useful for Linux users.
The box says “plug-in, connect, watch” which is pretty much the entire setup experience. I had the little guy on my wifi and ready to go within five minutes.
After that, I installed the Chrome extension and was casting YouTube in another five minutes. Netflix was similarly easy from the Xoom.
After a few minutes of experimenting, I ordered another one. Too bad they’re on backorder now.
Overall, it’s a very cool little device with a ton of potential, especially at $35. Read on for more details.
Browser Streaming with Google Cast Extension
Through the Google Cast extension, you can cast any tab, whether it’s streaming video or not, which is a nice feature for demos, more on that later. YouTube now includes a casting button in its player, so you might not even need the extension. Since I refuse to install Silverlight, I didn’t test Netflix in the browser. Can’t wait for them to roll out HTML5.
YouTube gracefully plays video in full screen mode on the TV, but if you’re casting other tabs, you’ll have to toggle into full screen to get the best experience. Unless you like tiny video on a big TV.
I tested several sources, Hulu, TED, MTV, some random Italian TV broadcast, AMC, South Park, Academic Earth, and they all worked in varying degrees. This isn’t a surprise, given how far the bits are traveling over the air. Just in my house alone, the cable modem sends them over wifi to my laptop, which casts them over-the-air to the Chromecast. That much data sent through the air is bound to degrade after each hop.
The extension streams video at 720p by default, and if you don’t change that to 480p in the options, you’ll get jittery and laggy playback. When I switched to 480p, everything played well enough.
I did get a friendly network performance warning from the extension at one point, but I’m not exactly near the router. So, that wasn’t a surprise, and it didn’t affect playback.
One nice feature of casting from the browser is that you can use other tabs and other applications without interrupting the cast, e.g. if you’re watching a TED talk, you can take notes, or if you’re watching a demo and need another screen.
Local Video and Screen-Sharing
In addition to browser casting, the Chromecast supports local media playback. I don’t have much local video, but this is a nice feature for anything you have stored on your hard drive. All you have to do is drag and drop the media file onto Chrome, then cast it.
Screen-sharing is the one feature that caught my eye when Chromecast was initially announced. This is accomplished via the Google Cast extension, as a beta feature, and that’s an accurate description.
When I attempted this on my Mac, it crashed the browser, probably because I run the developer channel build of Chrome. I tested from my Ubuntu box, and it worked exactly as advertised, with a second or two of lag between actions on the desktop and the cast.
For Linux users, this is a sweet feature, given the smaller number of options for screen-sharing for that OS.
There’s huge potential here for demos given that HDTVs are becoming commonplace. You could cast to a TV in a hotel room or a trade show, no cords or signal adapters.
The mobile device experience is similarly awesome, although support is limited for now, since app developers have to add casting support to their apps. For now, only YouTube and Netflix support it. Hulu, HBO and a few other content providers have announced they plan to add support.
As I mentioned, there is a Chromecast app to manage the settings. As a side note, it’s too bad Chrome for Android doesn’t support extensions because if it did, the Google Cast extension would open up a lot more mobile content.
I tested Netflix on the Xoom and the Nexus 4, and both performed really well.
One unexpected perk is that casting opens up true multi-tasking on Android, i.e. the stream continues even when you navigate to other apps on the device, providing a self-contained second-screen type experience.
I also tested switching between devices. I was casting Netflix, then I switched to the Mac to cast YouTube, which the dongle handled gracefully, ending Netflix and starting YouTube.
I did hit a quirky bug on the Nexus 4, which I upgraded to 4.3 last week. During a cast of Netflix, if the display goes to sleep, the device shuts down and locks. Anecdotally, this seems to affect only Nexus devices running 4.3, although very few devices have upgraded to 4.3 yet. Seems very likely that Google will push a fix for that soon.
It’s not all bad though; even though the device shut down, the stream continued.
Apparently, there’s a Netflix bug affecting Android users, presumably non-Nexus ones, since I haven’t seen it, that hides the all-important casting button in the Netflix app. Luckily, there’s a hack to get it working.
Google isn’t pitching the Chromecast as a set-top box replacement, but that’s exactly what it is. Even though it’s very new, I’ve had a couple non-technical people ask me about it, hoping to get exactly that information, i.e. can they use it to avoid or replace a Roku or AppleTV or as a functional replacement for cable/satellite.
The answer is maybe, depending on what you want to watch.
If you already have a Roku or AppleTV, Chromecast is really just overlap. Unless of course there’s web content you can’t get through those devices. That’s where the Chromecast shines, by unlocking all that web-only content for viewing on your HDTV. No more waiting for content providers to build an app or strike an agreement.
It’s the same story for would-be cord-cutters, all about what you watch.
Other Miscellaneous Observations
I’m surprised that non-technical people know about the Chromecast, but given how quickly Google ran through the initial supply, I guess that makes sense. Given the questions I’ve gotten so far, people see value, especially at $35.
Chromecast is another attempt to invade the living room; first came Google TV, then the Nexus Q. It’s clear that Google has learned from their mistakes, and the Chromecast is potentially very disruptive, even if it’s not accepted universally by content providers.
Chromecast runs Android and has already been rooted, and there are hacks popping up around the web. I’m very interested to see these hacks progress as the device trickles out to developers and hobbyists.
Speaking of, there’s a Python package called Leapcast that fools Chrome into thinking it’s Chromecast, allowing casting from mobile devices to a browser. Not sure how valuable it is, but pretty cool.
Anyway, I love the Chromecast, and I’m excited to see how it evolves. Did you get one? Did you order one? Thoughts?
Find the comments.
Remember when I went to Mexico earlier this year? Remember the innovation challenge that combined Profit Magazine data with Endeca Information Discovery?
Well, it wasn’t by accident that Profit’s archive of articles dating back to 2005 was a component because Profit’s editor, Aaron Lazenby (@alazenby) was looking ahead to the August issue of Profit, which focuses on innovation. See what we did there.
The issue is out now, and therein you’ll find a photoessay documenting the Endeca-Profit innovation challenge. I don’t know how Aaron pared down the mountain of shots; the photographer was all over the office, contorting his body to get the shot he wanted. As the sun went down on the first day of the challenge, I remember him crouching in the rooms where people were working to catch the light of the sunset through the floor-to-ceiling windows to capture developers deep in thought. Dude was legit, and I’m sure he got scads of awesome shots.
So, why did we run an innovation challenge?
“The idea was to drive innovation,” says Jeremy Ashley, vice president of Oracle Applications User Experience. “Now, we’ll see if we can turn some of this work into an Oracle product.”
Check it out to get a feel for the two-day event, a peek into the Guadalajara Oracle office, pictures of the winning team, and a shot of our fearless leader, Jeremy Ashley, rocking a sombrero.
Editor’s Note: Here’s another post from John Cartan, some heavy philosophical musing for your Wednesday. Personally, I cringe when people throw around the word “delight” when talking about software. Usually, it’s just noise, considering how low the bar has been, so delight when framed in software terms means a lot less than it does in real life terms.
Consider the following statements you might have heard. “Apple makes software that delights users.” “Apple makes software that just works.” So, yeah, that.
Interesting nuggets about John: he’s been writing code since the era of punch cards. Think that sounds hard? These days, he writes entirely via the touch keyboard on his iPad.
Yup, he pounded out his Leap Motion review, more than 1,000 words, without an external keyboard. That’s dedication to the intended experience.
Anyway, enjoy and find the comments.
Firmness, Commodity, and Delight
John Cartan, Applications User Experience, Emerging Interactions
At a recent conference, a young developer asked me a really thoughtful question, one that has come up many times over my career as a user experience architect:
Is there a difference between a beautiful interface and an interface with a great user experience?
References to beautiful or pretty interfaces get at something that architects of buildings call “delight”. The first-century Roman architect Vitruvius said that for a building to succeed it needed three essential qualities: firmness, commodity, and delight. I think the same is true for software architecture.
By firmness, he meant that the building should not fall down. By analogy, an app should not crash, freeze, expose bugs, or lose data. A firm app is consistently reliable; users will quickly learn to avoid apps that are not firm.
Commodity means that the building allows and facilitates whatever use is intended for it. A commodious app helps the user efficiently accomplish the task at hand without ever getting in the way. Commodity in an app is only possible if you deeply understand that task – and the user who is trying to perform it.
A delightful building is attractive to enter and pleasant to move around in. A delightful car is beautiful and fun to drive; a delightful hand tool exudes craftsmanship and feels good to the touch. In the same way, a delightful app is a pleasure to use, not just because it has pretty icons, but because the attention to detail, clean layout, and thoughtful design empowers the user and makes her feel in full control.
The key thing to notice about these three qualities is that they are interdependent. A house that falls down on you is not delightful. The same attention to detail that creates a delightful user experience also tends to create a more firm and commodious experience.
There is a common misconception in the software industry that delight is an optional feature that you can bolt on at the end of a release cycle if there’s time. UX people are sometimes called in at the last minute to “make it pretty”. But this never works; delight is something that has to be baked in from the beginning.
Another misconception is that it is somehow frivolous or even unprofessional for a serious business application to be delightful. But delight contributes directly to productivity. An app that is fun to use will be used more often, empowering each user, increasing efficiency, and improving morale.
For me, then, a beautiful interface and an interface with a great user experience are one and the same.
Editor’s note: Today, I present to you a detailed review of the recently released Leap Motion controller from one of my colleagues in Applications User Experience, John Cartan. You can read about John’s design work and background on his personal website, which has been live since 1995.
John’s recent work has focused on design for the iPad, which he uses almost exclusively, even for content creation, including this review. I hope he uses an external keyboard.
I’ve had the pleasure of speaking with (some might say, arguing with) John on a usability panel, and I’m pleased to present his review here. Look for my own Leap Motion review sometime soon. Anthony might chime in with one of his own too. So, stay tuned.
Initial Impressions of Leap Motion Device
John Cartan, Applications User Experience, Emerging Interactions
Summary: A fascinating, well-built device but often frustrating and confusing to use. Leap Motion is worthy of continuing study but is not yet a paradigm-shifting device.
I have now spent one day with the Leap Motion device, both at work (in our San Francisco office on a Windows machine) and at home (using my Mac laptop). I downloaded and tested five apps from the Airspace Store, played with about a dozen apps altogether, and read through reviews of many others. I also went through Leap’s tips and training videos and read their first newsletter.
Leap Motion is fun to try, but often frustrating to use – more of an interesting gimmick than a solid working tool that could replace a mouse or touchpad 24/7. Barriers to adoption include confusing gestures, invisible boundaries around the sensory space, unreliable responses to input, and poor ergonomics.
Gestures and responses
As David Pogue reported in his review, every app employs a completely different set of gestures that cannot be discovered or intuited without training and that are hard to remember from session to session. The variety of gestures was impressive; the New York Times app uses a twirling motion to scroll a carousel reminiscent of old hand-cranked microfilm viewers. A Mac macro app called BetterTouchTool claims to support 25 different gestures.
One recurring problem with many gestures was that they rely on the invisible boundaries (or invisible mid-point) of the volume of space the sensor can detect. The general-purpose Touchless app (which lets you substitute your hands for a mouse to browse the web, etc.) uses half of this invisible volume for hovering or moving the cursor, and the other half for clicking and dragging; even with good visual feedback, it’s hard to find the dividing line between the two. It’s also easy to inadvertently wander outside the boundary when you change positions, or accidentally move the sensor.
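The hover/click split described above is essentially a single threshold on hand depth, which is exactly why the boundary feels twitchy. One common fix is hysteresis: use two thresholds with a dead band between them, so small tremors near the line don’t flip the state. Here’s a minimal sketch in plain Python – the zone names, normalized depths, and thresholds are all made up for illustration, not taken from the Leap SDK:

```python
# Sketch: splitting a sensing volume into "hover" and "touch" zones along
# the depth axis, with hysteresis so the state doesn't flicker when the
# hand hovers right at the invisible boundary. Numbers are illustrative.

HOVER_ENTER = 0.55   # must pull back past this depth to return to hovering
TOUCH_ENTER = 0.45   # must push forward past this depth to start "clicking"

def next_state(state, z):
    """z is a normalized depth: 0.0 = closest to the screen, 1.0 = farthest."""
    if state == "hover" and z < TOUCH_ENTER:
        return "touch"
    if state == "touch" and z > HOVER_ENTER:
        return "hover"
    return state  # inside the dead band: keep the current state

state = "hover"
trace = []
for z in [0.8, 0.5, 0.44, 0.5, 0.56, 0.48]:
    state = next_state(state, z)
    trace.append(state)
# trace: hover, hover, touch, touch, hover, hover
```

Note that the reading of 0.5 produces a different state depending on history – that’s the dead band doing its job, and it’s the kind of small algorithmic courtesy the early Leap apps seemed to lack.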
Even when you remember the gestures, an app’s response can be unreliable. Although the device can theoretically detect the positions of all ten fingers, in practice it often confuses two fingers with three, or fails to register some details due to bad lighting, one part of the hand obscuring another, etc. In almost every app, even after repeated practice, my gestures were frequently misinterpreted.
Part of the problem lies not in the device, but in the software written for it. There are countless ways to use the data the device generates, and an art to interpreting it reliably in ways that account for the things people do with their hands without even realizing it.
A good case in point is Google Earth. At its best, the experience of flying over the earth with your hands is magical – different and more intuitive than using a mouse or even a good tablet app (which simulates moving a large, expandable map, not actually flying over a surface). But Google Earth’s controls for Leap were far too touchy – even when I set the sensitivity to its slowest setting. The tiniest movement of a finger would rocket me a thousand miles up and send the earth spinning like a top. This is a problem that could be fixed with better algorithms. The current app seems to use absolute, not relative, positioning, with little or no attempt to dampen sudden changes or limit the earth’s spin.
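The dampening I’m describing isn’t exotic. A sketch of the general technique – relative deltas, per-axis clamping, and exponential smoothing – in plain Python, with made-up coordinates standing in for sensor frames (this illustrates the idea, not Leap’s actual API):

```python
# Sketch: taming jittery absolute 3-D input by reacting to movement
# (relative deltas), clamping spikes, and low-pass filtering the result.
# Frame tuples below are invented stand-ins for real sensor data.

class SmoothedController:
    def __init__(self, alpha=0.2, max_step=5.0):
        self.alpha = alpha          # 0..1; lower = heavier smoothing
        self.max_step = max_step    # clamp so one twitch can't spin the globe
        self.prev_raw = None
        self.smoothed = (0.0, 0.0, 0.0)

    def update(self, raw_pos):
        """Turn an absolute finger position into a dampened relative motion."""
        if self.prev_raw is None:          # first frame: nothing to compare to
            self.prev_raw = raw_pos
            return (0.0, 0.0, 0.0)
        # Relative positioning: respond to movement, not absolute location.
        delta = tuple(c - p for c, p in zip(raw_pos, self.prev_raw))
        self.prev_raw = raw_pos
        # Clamp each axis to limit sudden jumps.
        delta = tuple(max(-self.max_step, min(self.max_step, d)) for d in delta)
        # Exponential smoothing: a simple low-pass filter over recent motion.
        self.smoothed = tuple(
            self.alpha * d + (1 - self.alpha) * s
            for d, s in zip(delta, self.smoothed)
        )
        return self.smoothed

ctrl = SmoothedController()
# The third frame is a 49-unit spike; the clamp and filter absorb it,
# so the output motion never exceeds max_step on any axis.
for frame in [(0, 0, 0), (1, 0, 0), (50, 0, 0), (51, 0, 0)]:
    motion = ctrl.update(frame)
```

With tuning like this, the “earth spinning like a top” failure mode becomes a gentle drift instead, at the cost of a little responsiveness – a trade-off the app, not the hardware, has to make.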
When sitting at a desk using Leap Motion, my arms began to hurt within five minutes; other volunteers reported this as well. This is the same problem we found with our smart board testing: supporting your arms in space is tiring – and soon becomes surprisingly painful.
Recognizing this, Leap Motion’s first newsletter contained a brief tutorial on ergonomics. But it seemed more focused on the needs of the device (to avoid obstructions to the signal) than on the needs of the user. For example, Leap says to avoid the most natural and comfortable positions (e.g. resting on your elbows) in favor of holding your arms straight out into space.
I did experiment with using Leap Motion from a reclining position, as shown in the photo.
Positioning the device on the front part of my laptop worked fine. By projecting my display to a large TV and using a third-party app, I could even close the laptop and position the sensor on its lid or remove the laptop altogether and position the sensor on a side table so that my hand could rest on a chair arm and always remain lower to the ground than my heart (which reduces pain and effort over time). With a little fiddling this worked well during an extended session (as long as I stayed in one place). You can also use the Leap control panel to adjust the required height of your hand(s) over the device. Even so, ergonomics will remain a significant concern.
The best experience I had was with the free Molecules app. The Leap is well-suited to true three-dimensional input like turning a complex molecule in space. This app used clear, simple gestures like a clenched fist to pause, and tolerated random movements without completely disrupting the display. This gives me some hope that with the right gestural algorithms and a good use case, Leap can actually provide some added benefits.
For the most part, though, using Leap Motion was an exercise in frustration. The initial productivity apps, like photo manipulation or PowerPoint presentation, are more likely to produce rage than improved productivity. David Pogue summed up the problem nicely:
“Word of advice to potential app writers: If your app involves only two-dimensional movements, of the sort you’d make with a mouse, then the Leap adds nothing to it. Rethink your plan. Find something that requires 3-D control — that’s what the Leap is good for.”
Leap Motion is a solution looking for a problem and has yet to find its killer app. For most enterprise applications it would be a gimmick that would quickly grow annoying and would be unsuited for daily usage.
There may, however, be some niche applications that might benefit from Leap Motion or something like it. To succeed, these use cases would require, or at least benefit from, true 3-D control, would not rely solely on the Leap to perform destructive or non-reversible actions, and could be performed from a sitting or standing position near a desk or lectern (not mobile). I could also see other forms of this technology supplementing rather than replacing current sensing systems (e.g. detecting hover positions above a tablet surface).
We should keep our eyes out for niche use cases and continue to monitor future developments. But there is no need to toss the mouse, trackpad, or touch surfaces just yet.