When the Chromebook Pixel was announced, a lot of head-scratching ensued. What’s the point of a fantastic piece of expensive, high-end hardware that runs an internet-tethered OS like Chrome OS? After all, Chromebooks have settled into a niche at the bottom of the device market, the space netbooks once targeted.
Then, I heard that Google was giving Pixels to its employees. Obviously, most of Google’s employees are developers, and to do their jobs, they’d need to do their development work on these machines.
Prior to I/O, I fully expected that Google would announce a cloud-based IDE, which would explain why employees were given Pixels.
No such announcement came.
However, every I/O attendee received a Chromebook Pixel, which only strengthens my hunch that a cloud-based IDE is coming.
There were several announcements at I/O that contributed to this line of thinking.
First, there was Android Studio, a custom IDE for Android development based on IntelliJ IDEA, which will include hooks for Cloud Messaging and other Google services. Speaking of Google services, the Play Services APIs and Hangouts were also announced.
The former adds several features that are self-contained, allowing developers to upgrade apps without requiring newer versions of Android. The latter consolidates several messaging products and effectively removes support for XMPP in favor of the Hangouts API.
Google has been slowly backing away from open standards recently in favor of its own APIs. For example, lost in the most recent spring-cleaning announcement, which included Google Reader, was news that the CalDAV API would be limited to a whitelist in favor of the Calendar API. Details are scant, but the loss of CalDAV support for calendar applications like iCal is kind of a big deal.
I/O this year focused squarely on developers, not on product. No new Android version was unveiled, but rather, existing Android apps, like Maps, Plus, Play Services and Hangouts, received major, version-independent updates. This points to alleviating a huge developer concern, fragmentation.
So, why give attendees an expensive paperweight in the Pixel?
The other shoe has to be a browser-based IDE that pairs the computing power of the Pixel with Google’s network infrastructure, while providing all of Google’s APIs and services in one place: the ultimate ecosystem package for Android and possibly Chrome OS developers.
This isn’t all that crazy, given that services like App Engine already use Google’s infrastructure.
Should this IDE materialize, the key will be distributing the compilation workload between the machine and Google’s servers while minimizing bandwidth consumption.
But if you happen to live in a Google Fiber city, you could develop on a Pixel and use Google’s bandwidth and server infrastructure, allowing Google to control even the transport of your code.
So, Google could have all the bases covered, providing an unprecedented development experience.
Anyway, I’m not a developer, but I play one sometimes. I’ve run this idea past a few developers, and it seems plausible, even a little bit desirable.
What do you think?
Find the comments.
Diverting away from Glass a bit. I will update more soon.
I became a Mac OS fan when I started using my MBP in 2006. Windows never worked for me, as I am a developer. Microsoft, just give me a terminal. Cygwin, to me, is just something used to alleviate the issue; it does not solve the fundamental problem. Handling “/” and “\” was already complicated enough, not to mention the actual development. Ubuntu at that time was good, but there were a lot of issues with package dependencies and distro upgrades. Some essential packages tended to break and put your machine into an almost unusable state. The Mac, on the other hand, got ahead of the competition. It is simple, elegant, and most importantly, it is based on Unix and always works. The development community embraced the power of the Mac, and many development tools were only available on Mac OS X.
A couple of things made me revisit Ubuntu this year. First, I would never consider getting the current new MBP model. I would not buy anything that cannot be customized or upgraded. It seems to me that this is the direction Apple is heading, and if this continues, I will stay away from it.
Second, I bought a 7200 rpm 1 TB drive and attempted to put it into the optical bay of my early-2011 MacBook Pro. It was a complete failure. Long story short, the MacBook Pro and the hard drive negotiated a 6 Gb/s link, but the MacBook Pro firmware does not actually support 6 Gb/s. My beloved MBP, please stop pretending to support 6 Gb/s and just negotiate a 3 Gb/s link. I could have lived with that. The end result was that I had this extra hard drive and did not know what to do with it. I finally decided to put it into a Dell machine and, for fun, tri-boot it with Windows 8, Ubuntu and Hackintosh. Although installing Hackintosh was a long, tedious process, amazingly, everything worked like a charm in the end. I liked the Ubuntu Unity desktop and its user-friendliness. No more touching xorg.conf to deal with drivers.
Third, I have been working on Android development. Something not advertised about Android AOSP is that you can hit strange build issues if you do not compile it on Ubuntu, which is the official platform. I learned a hard lesson and spent quite some time debugging in the wrong direction without realizing this. So I need an Ubuntu machine anyway.
Fourth, some development tools, such as GitLab, are available on Linux but not on the Mac.
Today, my MBP fan started making a loud humming noise. I am sensitive to these noises while I am concentrating, and it is nerve-wracking to me. I decided to stop using the machine until it gets serviced. In the meantime, I need a machine. I took out the first Chromebook, a CR-48, given to me by a friend at Google years ago, and started using it again (sorry, my friend, your machine just was not powerful enough for my daily usage, and I only needed one machine). I went ahead and switched it to developer mode, then installed Ubuntu with the Unity desktop on it using Crouton. I then installed the necessary tools, like Java, IntelliJ and Thunderbird. It is great to be able to switch between Chrome OS and Ubuntu. I am using it to write this blog post. My one complaint with this machine is that I keep hitting the trackpad while typing. Even though this is an old machine, and not very well designed, it works, and you cannot hear a bit of noise even if you put your ear to it.
I am surprised that a low-end, old machine like the CR-48 can still run smoothly at an acceptable speed. Such hardware would not be able to support running Mac OS X 10.8 or Windows 8, but it can run Chrome OS and Ubuntu 12.04 concurrently. Google is trying to unleash all the power through the Internet, building dumb terminals like the Chromebook. While this may still be a bit hard to achieve, given that a lot of productivity software can only be run natively, and certainly not for developers, I do believe it is the future of computing. This is how we can afford to put computing into the hands of every child in the world.
I may eventually replace Mac OS X with Ubuntu in the near future.
My friend got the invitation to pick up her Glass. At that time, I had not gotten any updates for mine yet. All Glass Explorers can bring a guest to the Glass event, and she asked me if I wanted to tag along. Of course, that was a yes.
We got to the Google campus, and there was already a Glass product manager waiting outside the building for us. Not a big surprise: he was wearing Glass himself. He kindly led us into the Glass garage, where the Glass fitting is done. They even offered champagne at the event. It felt like the same treatment you get when shopping for a BMW.
It was the first time I had touched Glass. It is made of titanium, and it is as light as it gets. There were rumors about Glass being just a phone accessory; that is not true. It runs Android 4.0.4 and is fully capable of running everything on its own. The Glass product manager patiently explained everything about Glass to us, and he even guided us around the Google campus. We left after staying there for two hours.
As an Android developer, I could not resist experimenting on it, but with care. After all, it is not mine. Here are a couple of things you can and cannot do:
1. You can turn on debug mode.
2. You can adb to it as a user, but not as root.
3. You can adb push photos/videos, but they will not show up in the timeline, even if you use the correct naming convention. Apparently, Google stores the timeline entries in SQLite, which makes perfect sense. (I did eventually manage to push photos/videos and get them to show up in the timeline.)
4. You cannot just plug a mouse and keyboard into the USB port. It will not work.
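For the curious, the adb experiments above look roughly like this from a host machine. This is only a sketch: it assumes adb from the Android SDK is on your PATH, and the on-device destination path is my assumption for illustration, not a documented Glass internal.

```python
import subprocess

# Hypothetical destination; Glass's real camera path may differ.
GLASS_CAMERA_DIR = "/sdcard/DCIM/Camera/"

def adb_cmd(*args):
    """Build an adb command line (assumes adb is on PATH)."""
    return ["adb", *args]

def push_photo(local_photo):
    """Item 3 above: push a photo to the device. The file lands on
    storage, but it won't appear in the timeline without more work."""
    return subprocess.run(
        adb_cmd("push", local_photo, GLASS_CAMERA_DIR), check=True
    )
```

Calling `push_photo("pic.jpg")` with Glass attached copies the file over; item 2 above is why anything requiring root fails, since the adb shell runs as an unprivileged user.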
After getting a taste of Glass, I am more eager to get mine.
If you’ve seen intermittent connectivity issues this week, apologies; they should be resolved now.
Ludovic Vignals has an Aldebaran Robotics NAO humanoid robot. I know nothing about robotics, but NAO sounds pretty cool. Ludovic decided to experiment with his NAO and has programmed it to read the news.
Why? Because he can. I have to say, I found the robot’s snarky commentary about being ignored to be funny, which makes sense, given the Nao robot also does stand-up comedy.
Enough with the background, how did Ludovic do this?
In his words:
USA Today offers a free (capped volume) API access to some of their daily content. I think they mentioned somewhere on their site that the SOAP API has or will be discontinued so using their REST API is the best choice. Before getting access to the API you will need to go to USAToday.com to request an API access as they don’t allow anonymous connections and they want to know what you are using the API for and also get you to agree to their usage policy.
After receiving an access key, the steps are pretty standard:
1. Request credentials to connect to USA Today
2. Establish a connection to api.usatoday.com
3. Decide what information you want to retrieve
4. Format the query string and issue a GET command, e.g. /open/articles/topnews/home?count=10&days=0&page=0&encoding=json&api_key=foo
5. Parse the string into a JSON object, or parse the JSON-formatted string directly into a native object of the client programming language you are using
6. Iterate through the news and related fields as needed
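In Python, the steps above might look something like this. It's a sketch: the endpoint and parameters come straight from the example query string, but the shape of the response (a `stories` list with `title` fields) is my assumption, not documented here.

```python
import json
import urllib.parse
import urllib.request

API_HOST = "http://api.usatoday.com"

def build_query(api_key, count=10, days=0, page=0):
    """Step 4: format the query string for the top-news endpoint."""
    params = urllib.parse.urlencode({
        "count": count, "days": days, "page": page,
        "encoding": "json", "api_key": api_key,
    })
    return f"{API_HOST}/open/articles/topnews/home?{params}"

def fetch_headlines(api_key):
    """Steps 2, 4-6: issue the GET, parse the JSON, iterate the news."""
    with urllib.request.urlopen(build_query(api_key)) as resp:
        data = json.load(resp)  # step 5: JSON string -> native dict
    # Step 6: the "stories"/"title" keys are assumptions for illustration.
    return [story.get("title") for story in data.get("stories", [])]
```

From there, handing each headline to the NAO's text-to-speech is Choregraphe's job.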
Ah the beauty of RESTful APIs. Next, Ludovic used Choregraphe, the IDE provided by Aldebaran, to program the robot to read the news.
Beyond the cool factor, the NAO could serve as a personal assistant, reading notifications, reminding you of appointments and to-dos, processing commands and taking actions. Think Google Now type features but with an audible reminder.
I really like Google Now, but I struggle to remember to use it. If the NAO simply reminded me that I have to leave now to make an appointment, based on traffic conditions, that would be useful. Have you ever been late to a call because you were heads-down on something and forgot the time? Me too. Sure, calendar sent an email reminder, but you ignored that because you were busy.
Add a service like Twilio into the mix, and you could use the NAO to send texts and place calls for you.
The thing about notifications is they typically arrive via email and tend to get lost among all the other email you get. Sure, they’re important, but you might not need to take immediate action, or you’re busy ignoring email. Having the NAO audibly remind you to approve expense reports or take action on transactions waiting for you would be valuable.
The whole point of an assistant is to remind you to do stuff, and the NAO opens new possibilities for automating reminders.
Obviously, there are a lot of other, more compelling use cases for a humanoid robot, and we’ve only scratched the surface here. Kudos and thanks to Ludovic for sharing his work. I’m stoked to see what else he can build. Stay tuned.
Find the comments.
Editor’s note: FYI, this post is by Anthony (@anthonyslai).
I have decided to write up some posts on my first couple of days’ experience with Glass. I am not planning to go deep into the technical details of Glass in these posts, but I may do so if there is enough interest.
When Google first announced Google Glass at Google I/O 2012, I signed up immediately to become a Glass Explorer. Even without knowing a single bit about the Glass specification, 2,000 people waited in line and signed up for it. Google gave all Glass Explorers a glass with a number on it, claiming that each Google Glass would be engraved with its explorer’s unique number. Mine was 1109.
I believed such technology could lead us quite far into the future. The original release date for beta testing was set for the end of 2012. That did not happen, and there were almost no status updates from Google. To me, that was quite a disappointment.
Shared on Google+ after signing up as a Glass Explorer.
Early this year, to draw more diverse beta testers to Glass, Google started the #ifIhadGlass competition on Twitter. The reaction was enormous, and 8,000 more people would be able to get hold of Glass through the competition. Still, no status updates.
For Google, things move along fastest around Google I/O. As expected, Google finally sent out an email update at the end of April this year. My long wait had finally ended, and I received my Glass during the week of Google I/O. My Glass does not have my unique number on it, but nonetheless, I am happy.
Note the serious demeanor, with Glass power comes great responsibility, or something.
Backstory, Anthony (@anthonyslai) finally got his Explorer Series Glass unit on Sunday. Funny story, its display had a few dead pixels, three actually. He counted. Google replaced the unit, so all’s well.
Anyway, Anthony has generously been allowing people to test-drive his Glasses, and boy, do they get attention. People wherever we go are curious. Noel (@noelportugal) and I each took a turn, and despite the bare feature set, they’re pretty amazing.
Anthony says he’s been wearing them non-stop since Sunday, and that he can’t live without them. Pretty strong endorsement. I know he’s been using them heavily because texts from him have the latest in gadgety signatures appended “Sent through Glass.”
Look for a post from him on his adventures soon. He and Noel are attending Google I/O this week, and I’m sure there will be lots of Glass news.
It’s a very simple, but difficult game. GeoGuessr drops you into a random place that Google has mapped with Street View, but without any metadata, just the images Google captured. You can navigate around, using the usual Street View controls, and the object is simple: figure out where in the World you are.
Sometimes, yes. If you land in a populated area, with signs and businesses.
Not so simple if you land somewhere remote, like this round, where I landed on a tiny island. You’ll notice there weren’t any controls. This island was small enough to map with one look around, no walking needed. Off to the left, there’s a helicopter, indicating how Google managed to get to this remote island.
That’s pretty much it. There are points awarded for guesses, but without leaderboards or any other traditional game mechanics, it’s just a fun test of your detective and geographic skills.
This blog has been around for six years, and given how varied and banal a lot of what I write is, I’m stunned it’s lasted that long.
While at Collaborate in April, John (@jpiwowar) mentioned something about the blog that resonated with me. He said he appreciated that I replied to his comments. That struck me as a bit odd, because from my perspective, I’m glad anyone reads at all, and comments are gravy.
Plus, if you take the time to comment, I should take the time to reply, even if the comments are negative. Actually, those can be fun.
John was attempting to elaborate on a nice comment from Pythian’s head honcho, Paul Vallée (@paulvallee) about this space and its community. Community is a funny word, especially when comparing this little space to an entity as large as Pythian, but over the years, I’ve had the pleasure of meeting many of you IRL, which, I suppose makes us a community.
Anyway, thanks for reading and commenting. As long as you do those things, I’ll probably keep writing.
And if we ever happen to occupy the same meatspace, say hi to me.
As always, find the comments, and I’ll reply.
Edit: I forgot to thank all you silent readers, who read, but don’t comment. I’ve met a few of you IRL too, always a pleasure, and no worries, I know you’re out there.
One of the aspects I like about my newish team, Applications User Experience, is access to real research. Through eye-tracking, the usability labs, ethnographic research, focus groups and a host of other tools, AUX collects data from real users to help us understand how to build better software.
This is perfect for me, since I’ve always been an anecdotal designer, relying on relatively small data sets and what I observe from those around me. Now, I have access to a wealth of data to balance what I see on my own.
Of course, the users around me most often are my family members, so I do a fair amount of observation and experimentation on them.
For me, this is a side benefit of being the family’s technical support.
My family presents a nice mix of personas too. My wife is a savvy user, whose technical knowledge has grown as I add gadgets. She also hates to rely on me for support, which is awesome, because a) I don’t have to do as much and b) I get to observe how she approaches and adapts to new gadgets.
My parents came to computing and the interwebs relatively recently. So, with them, I see the challenges faced by inexperienced users. They too have experienced technology creep, slowly adding new gadgets to their home, and they also prefer to rely on their own wits vs. calling me for support. Another win.
I now have a new user in the family, my daughter, and watching her absorb technology is fascinating. She got a Leap Pad for Christmas, one with a stylus, and it’s amazing how immersed she gets. Using the stylus is a learned behavior, and it was interesting to watch her default to touching everything, even the drawing apps, which you’d think are better with a stylus.
At some point, we tried to get her using a laptop, but the keyboard and mouse presented too big a challenge. She kept trying to touch the screen, and the mouse was nothing more than a toy, unrelated to the actions on the screen.
A couple of weeks ago, I decided to wipe my OG iPad for her. It’s forever stuck on iOS 5, and I’ve replaced all its uses with Android tablets, the Xoom and Nexus 7. So, why not put it to good use?
We had already bought a few apps for children, so we had content for her. Ironically, the App Store is super buggy on iOS 5, so all the time I spent trying to find more good apps for her was completely wasted.
Not that it mattered, because as soon as we identified the device as hers, she wouldn’t let me use it without complaining. Toddlers exist to possess.
The biggest observation for me is how natural touch UI is for humans. As much as it limits me personally, it’s hard to argue that touch makes more sense than keyboard and mouse.
Back to my wife, a week ago, I asked her to use Facebook Home on my Nexus 7, since she’s a heavier Facebook user than I am. She played with it for about ten minutes with no direction from me, and when I asked her about it, she said, it’s pretty much the same as on her iPad.
Turns out she immediately touched the notifications displayed on Home, which went directly into the Facebook app, bypassing the browsing capabilities of Home.
I explained the launcher concept, and she browsed some updates. She liked what she saw, but it’s clear that Facebook Home is targeted at heavy browsing of the News Feed and drive-by liking. In fact, the up-front nature of notifications actually detracts from Facebook Home usage.
A Single Thread
Whether I’m asking my family to test something, supporting their usage or just observing them use technology, there’s a recurrent theme, frustration. Technology creates frustration in two big ways, first by breaking (or not working as designed) and second by creating work.
My recent focus has been on the latter. Over the past decade, Wintel dominance has been replaced by siloed ecosystems, thanks in large part to Apple’s devices and cloud services. These ecosystems tend to create inefficiencies for consumers. Simple stuff like sharing pictures is now dictated by who uses which service and who else uses that service. By who else, I mean privacy concerns.
I don’t see these ecosystems going away, but as they expand to offer as much as possible to everyone, we’ll probably see another dominant platform emerge in the next decade. That should eliminate some of the work, but there will always be frustration.
Thoughts? Find the comments.
WebKey is an Android app and accompanying service that allows you to manage your device from a browser.
It’s actually a nifty little tool. All you do is install the app, then visit webkey.cc or navigate directly to the supplied IP address on your local network to view your device’s screen. Actually, it’s a bit more involved, but that’s the gist.
WebKey is both a useful development and demonstration tool.
For development, it exposes pretty much everything you can do on the device in the browser interface. Clicking on the screen executes taps, you can launch apps, change settings, input text from a more familiar keyboard, all that. I’ve tested it over wifi, but it claims to work over 3G connections too.
For demonstration purposes, especially remote ones, it’s much quicker and easier than firing up the Android emulator.
All this in a browser window, no plugins required.
WebKey does require root, so there’s that. For the security conscious, it is fully open sourced under the GPL and supports SSL over direct connections via IP. Seems there is an issue encrypting connections through WebKey’s servers.
Not bad for a free app built by two guys.
Aside from the recent, top-of-mind examples (Google Glass, Pebble), I’m amazed at how functional smart garments have become. Innovation has been happening in both the fashion and DIY circles, but since I don’t pay very close attention to those areas, I’ve missed some nuggets.
Thinking about smart garments, I’m reminded of Ben Heck’s wind-up Android charger. Why shouldn’t your phone charge itself while in your pocket or purse? Extending the Glass/Pebble-as-smartphone-accessory idea, why couldn’t your clothes leverage the information from your phone’s many sensors to guide you through the World and collect information about you and your surroundings?
Or even include the sensors themselves, e.g. GPS, as described at the 4:04 mark in the video.
Fitness applications are mainstream by now, e.g. FitBit, Nike+ Fuelband, Jawbone Up, but imagine how useful professional athletes would find the data collected from wearable gear, before, during and after exercise? I’ve heard that some athletes will use hyperbaric chambers to speed recovery time, so yeah, they’d be very interested.
Of course, that means data science and lots of software to draw conclusions from the data.
Everyday applications exist too, e.g. rolling emergency calls into clothing similar to the DIY ice cubes that monitor drinking and can text a friend if you’re over your limit, or the DIY pepper spray that also takes a picture. Tracking temperature and heart rate fluctuations collected by clothing could trigger emergency calls, or produce loud noises to deter attackers, like a safety whistle.
It’s a brave new world.
Find the comments.
When Facebook launched Home earlier this month, it marked the first time in quite a while that I was excited to use Facebook.
What excited me wasn’t using Facebook per se, but exploring the possibilities of moving beyond the app.
Despite only being officially supported on a handful of phones, the Samsung Galaxy S III, Galaxy Note II, HTC One X and One X+, and the HTC First, it is possible to get Facebook Home running on other Android devices.
My device of choice was my newish Nexus 7, which is basically a development device, so no worries about borking it seriously.
There has been rumbling about the negative reception to Facebook Home, based on a relatively low number of downloads, only 500,000 in the first five days, and the very high number of one-star reviews Home has garnered. One-star reviews outnumber all the others by a wide margin.
It’s a bit early to brand Home a success or failure, given its limited number of supported devices. Those reviewing it are primarily curious, early adopters, like me, who don’t fit the prime Facebook demographic anymore. Home has always seemed like a tool for avid Facebook users, and it fits that mold very well.
After a little more than a week with Facebook Home, here are my impressions.
Right off the bat, Facebook Home puts your friends’ content all up in your face, pun intended. In fact, you really can’t escape it without some effort.
Initially, Home made Facebook more interesting. The full-screen photos drew me in to read updates, which then reminded me why I don’t use Facebook much anymore. It’s the same content, just nicely presented. This is especially true if you have Instagram-happy friends.
So, Home is really just a steady stream of imagery, which, as you’d expect, is a mixed bag. However, Home does a good job mitigating your friends’ poor content, e.g. for updates without associated photos, Facebook adds the person’s Timeline picture, a smart move.
I did find some interesting tidbits just paging through photos though, e.g. this update from Anthony (@anthonyslai) about a car that ran into the Oracle Dublin office.
I should also mention Chat Heads, the new Facebook Messenger, even though it’s a standalone update and not part of Facebook Home. When it’s not active, Chat Heads minimizes itself into a small, circular version of your friend’s profile picture which you can move around the screen as you do other things. Tapping it reopens the Messenger interface. It’s nicely done, with snazzy animations.
I tested Chat Heads out with Noel (@noelportugal) and my wife, and I really like the way it handles ongoing chats.
I don’t really have any other positive impressions to add, but that’s not to say Facebook Home is bad. It’s a very slick piece of development.
It’s just not for me, or really for anyone who doesn’t spend a lot of time using, not just browsing, Facebook. If you browse the one-star reviews on the Play Store, this impression resonates with a lot of other people.
I suspect someone like my wife, an avid Facebook user, would find it more valuable. I plan to get her to test drive it soon, so stay tuned for some ultra-scientific research.
Design-wise, Facebook Home doesn’t bother with a lock screen. You can page through updates, and I think, like and comment, without unlocking the screen. You can also see what’s installed on the device, although not open any apps.
This seems risky.
The handling of likes is rather annoying too. You can accidentally like content from Home, which could lead to rogue likes and Facebook etiquette blunders from liking and then unliking content.
Also, I don’t really care for the launcher panel Home includes. Recently installed and used apps appear in there, but it feels like a throw-in feature.
Finally, it’s predictably difficult to find your way to the standard Android desktop, which is a usage problem if you have widgets.
Most of these are design considerations that speak to the purpose for Home, i.e. making Facebook ubiquitous on the device, but Home is easy to turn off and remove. I suspect future iterations will offer more granular options that will encourage users to keep some features enabled.
Facebook Home showcases some great user experience, and you can immediately tell if you’re the target user or not. It’s clear Facebook has done extensive research to find ways to make itself more sticky for its heaviest mobile users.
I just don’t happen to be one of them, but I can appreciate the design-thinking that went into Home.
One bonus feature, at least for HTC First, is that the base OS is vanilla Android 4.1.2, without any carrier or OEM software, which is a plus.
Have you tried Facebook Home? What do you think?
Find the comments.
My trip to Guadalajara a month ago was dual-purpose. First, we’re hiring there, so we had interviews. Second, we were assisting with a hackathon.
When we joined Applications UX, Laurie (@lsptahoe) asked for our help organizing an internal hackathon, erm, developer challenge, at the Mexico Development Center (MDC) in Guadalajara. The challenge she had in mind would focus on Endeca Information Discovery and use Profit Magazine data. Those were the only two requirements.
Beyond that, entrants could add any other data they wanted and build any front-end to showcase the data discovery.
Laurie’s use case for the challenge was:
I’m an IT Manager interested in moving some of our applications to the Cloud. I need to learn about the costs and benefits of doing this to see if this would really benefit my company.
The goal was to use Endeca, Profit and other data sources to answer questions like:
- Who are the thought leaders in my company and in my industry?
- What are they saying?
- What kind of information can I find?
- What enterprises have done this already, and what problems have they run into?
- Are there security concerns about moving to the Cloud?
Once we had the tools and a problem to solve, Noel built a VM running OEL with Endeca Studio, Integrator and Server installed, and we decided to provide Twitter as another data source to augment Profit’s content.
Next, Laurie’s MDC team built an API converter for Endeca that transforms the SOAP output into JSON, making it a bit easier for entrants to build web apps and native mobile apps.
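I don't know the details of the MDC team's converter, but the core idea, flattening a SOAP response's XML payload into JSON, can be sketched like this. The element names in any real Endeca response would differ; this is a minimal illustration, not their actual code.

```python
import json
import xml.etree.ElementTree as ET

def element_to_dict(elem):
    """Recursively convert an XML element into plain Python data."""
    children = list(elem)
    if not children:
        return elem.text
    out = {}
    for child in children:
        tag = child.tag.split("}")[-1]  # strip any XML namespace prefix
        out.setdefault(tag, []).append(element_to_dict(child))
    # Unwrap single-item lists so the JSON reads naturally.
    return {k: v[0] if len(v) == 1 else v for k, v in out.items()}

def soap_to_json(soap_xml):
    """Turn a SOAP-style XML string into a JSON string."""
    root = ET.fromstring(soap_xml)
    return json.dumps(element_to_dict(root), indent=2)
```

A converter like this sits between Endeca's SOAP endpoint and the hackathon apps, so the web and mobile front ends only ever see JSON.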
The challenge ran March 22-23, giving the entrants only about 36 hours. We encouraged teams, given the large scope of the challenge, and 22 people organized into five teams completed the challenge.
I had to miss the final judging, but yesterday, I got a recap of the top two entries.
First, the winning entry, called, for some reason unknown to me, Squeedily Spooch.
Next, here are some shots of the runner-up, a native iPad app designed to showcase the serendipity of the data Endeca captured.
The challenge judges were Jeremy Ashley, head of Applications UX, Mark Burrell, Director of Endeca User Experience, and Erik Peterson, General Manager of the MDC. FYI check out Ultan’s (@ultan) post about Endeca Information Discovery and Mark’s story about its origins.
The judges saw these two entries as very even, but Mark said that Squeedily Spooch illustrated “how Endeca facilitates insight and discovery, through intuitive dialogue with and progressive exploration of diverse information.”
I know you’re wondering what the prize was. Each member of the winning team received a Raspberry Pi Model B, an appropriately geeky prize, and I’ve already seen some hacking come out of those Raspis.
But wait, there’s more. The challenge featured all the online content from Profit Magazine, dating back to 2005, so naturally, Profit’s editor, Aaron Lazenby (@alazenby) attended with a photographer and chronicled the event for a feature that will appear in Profit Magazine’s August issue.
So, look for that.
And hey, if you need a hackathon, I know some guys.
Find the comments.
The N7 ships locked, which seems a bit odd, but no biggie. This will be a development device for me, and once I got it unlocked, the fun began with MultiROM, which allows the N7 to boot from different ROMs stored locally or on attached media like SD cards or USB drives.
As I’ve said before, Android modding is a bit of a dark-art hobby; you always run the risk of bricking your device or losing all your data. So, the ability to boot experimental ROMs side by side with each other and your base ROM is huge, especially given how many mature Android ROM projects (e.g. CyanogenMod) and new projects (e.g. Ubuntu Touch, Firefox OS, Tizen, Jolla Sailfish) are spinning up this year.
Installing MultiROM is a relative breeze, despite how the xda thread looks. In my experience, all xda threads look that scary. Anyway, you just download and flash a recovery image, MultiROM itself and whatever kernel your base ROM is running.
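To make those steps concrete, here’s a rough sketch of the flashing flow for a Nexus 7 (“grouper”). The image file name is a placeholder I made up; use the exact recovery image, MultiROM zip and kernel zip linked from the xda thread for your device and base ROM.

```shell
# Hedged sketch of the MultiROM recovery-flash step. The file name below
# is a placeholder, not a real download -- grab the actual files from xda.
flash_multirom_recovery() {
  recovery_img="$1"
  if command -v fastboot >/dev/null 2>&1; then
    # Device must be connected and booted into the bootloader.
    fastboot flash recovery "$recovery_img" && fastboot reboot
  else
    echo "fastboot not found; install the Android platform-tools first" >&2
    return 1
  fi
}
# Usage (hypothetical file name):
#   flash_multirom_recovery TWRP_multirom_grouper.img
```

After rebooting into the new recovery, the MultiROM zip and the kernel zip matching your base ROM get flashed from the recovery menu itself, not from fastboot.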
Once you’ve got MultiROM, you can add other ROMs, i.e. the cool part.
I went for the Ubuntu Touch 13.04 preview because it seems the least buggy. Firefox OS looks like it needs some time, and the others are well behind that.
The great thing is that xda’s members will eventually get around to whatever it is you want to try.
Anyway, for Ubuntu Touch, I followed the xda thread instructions, and soon was kicking the tires. If you’re wondering, Ubuntu Touch is still rough, but it definitely looks and behaves like Ubuntu, which I’ve run on a laptop for many years. All the swiping takes some experimentation and learning, and on the first boot, the keypad wouldn’t open.
It looks nice though, and I’m stoked to try future builds.
Back to a sweet feature of MultiROM, it supports external media, key for the N7, which does not have an SD card. The secret is a USB-OTG cable like this one, very cheap and super useful. OTG apparently means “on the go”, if you’re wondering. It’s just a micro USB adapter into which you can plug a USB stick or your SD card of choice, via any card reading adapter.
You’ll need the right format on the USB partitions, and although I haven’t tried this yet, Anthony (@anthonyslai) showed me MultiROM running a ROM directly from the OTG cable. I didn’t notice any lag in performance, but there probably is a degradation, albeit a small one.
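For the formatting step, here’s a hedged sketch, assuming a Linux box and assuming your MultiROM build can boot from ext4 (check the xda thread for the filesystems your version actually supports). The device node is a placeholder; mkfs destroys whatever is on the partition, so triple-check it.

```shell
# Assumed sketch: format a USB stick's first partition for MultiROM use.
# /dev/sdX1 is a placeholder device node -- verify yours before running.
format_stick() {
  dev="$1"
  if command -v mkfs.ext4 >/dev/null 2>&1; then
    mkfs.ext4 -L multirom "$dev"
  else
    echo "mkfs.ext4 not available on this machine" >&2
    return 1
  fi
}
# Usage: format_stick /dev/sdX1
```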
So, now I can not only test out new ROMs and save space on my N7, but I have a portable image that I can boot from Anthony’s N7 or really anyone’s N7 with MultiROM installed.
Find the comments.
Here’s another installment in the never-ending documentation of stuff for posterity.
This chapter concerns a change made to Android 4.2.2 that could cause you some headaches if you’re an Android developer, modder or hobbyist. Google added a whitelist for USB debugging in 4.2.2, which adds another layer of security to your phone: now you must grant the connected computer permission to tinker with the attached device, from the device itself.
Makes sense as a feature, but in practice, it’s a bit of a hassle.
Why? Because you must have the latest version of adb (the Android Debug Bridge, used by the SDK to communicate with the device), which currently is 1.0.31, in order to force the device to ask for approval.
Run adb version to check.
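If you want to script that check, here’s a small sketch. The helper function is my own, not part of the SDK, and it assumes a POSIX shell with GNU sort available; 1.0.31 is the version that triggers the authorization prompt.

```shell
# adb_new_enough VERSION -> exit 0 if VERSION is at least 1.0.31.
# GNU `sort -V` sorts version strings numerically; if the smaller of the
# two is 1.0.31, the supplied version is new enough.
adb_new_enough() {
  [ "$(printf '%s\n%s\n' "$1" 1.0.31 | sort -V | head -n1)" = "1.0.31" ]
}
# Example (run against your local adb):
#   adb_new_enough "$(adb version | grep -Eo '[0-9]+\.[0-9]+\.[0-9]+')" \
#     && echo "new enough" || echo "update your platform-tools"
```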
If you have an older version of adb, like I did, your connected device will report itself as offline, and no amount of toggling developer options, connecting/disconnecting and restarting will change that.
Believe me, I tried a fair amount of that because I’ve recently rebuilt my Mac and don’t want to clutter it up unnecessarily. I don’t need the full Android SDK so I searched long and hard for other options.
There aren’t any, but you can escape without bringing down everything the SDK requires. Setting up to build for Android takes two parts: first, download the SDK and decompress it; next, you’d typically run the android tool and pull down all the files for your version of choice, which takes a while.
If you just need adb and fastboot to do some tinkering, you can skip the second part and save yourself some time and disk space.
First, download and decompress the SDK. Then, navigate to the directory where you decompressed it, find the /sdk/platform-tools directory, and from your CLI of choice, run adb.
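In shell terms, those steps look roughly like this. The install path is an assumption, and the bundle file name is a placeholder; check developer.android.com for the current SDK download for your platform.

```shell
# Sketch of grabbing just adb/fastboot without the full SDK setup.
SDK_DIR="$HOME/android-sdk"              # assumed install location
# unzip adt-bundle-linux.zip -d "$SDK_DIR"   # step 1: decompress the SDK
# Step 2 (running the 'android' tool to fetch platform files) is skipped
# entirely; the bundled platform-tools directory already has adb/fastboot.
export PATH="$SDK_DIR/sdk/platform-tools:$PATH"
# adb version    # should now report 1.0.31 or later
```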
Now that you have the latest version, your device should pop up the “Allow USB Debugging?” message. Easy peasy.
Getting back to why this feature is a bit of a hassle, given how large the SDK is, not everyone continuously keeps it updated, or at least not the Android tools. Plus, there’s no official documentation on this that I could find, and I couldn’t readily find a way to revoke a computer’s access on the device.
Anyway, hope this helps you and my future self who will likely forget all this.
Find the comments.
Even so, he was excited at the prospect of receiving it soon.
He’ll be even more stoked now that details of the Glass API and device specs are hitting the interwebs. Plus, Martin Mißfeldt has a beautiful infographic detailing how the technology inside Glass manages to augment your reality, erm, vision. I’m assuming Martin compiled his information from the many patents spawned from Glass. If you’ve ever seen one of the low-fi diagrams in a patent filing, you’ll appreciate what Martin has done.
Anyway, as soon as Anthony gets his Glass unit, I’ll start bugging him to post some thoughts.
Last week while I was traveling, I was reading a story from Wired about Firepad, a collaborative text editor that can be easily added to any web page to allow, well, collaboration. Think pair programming meets Google Docs.
A couple of sentences in, I noticed a quote from none other than ‘Lab alumnus, old friend and current Atlassian developer advocate, Rich Manalang (@rmanalan). Atlassian has added Firepad to its Git repo management tool, Stash, which I had to read twice hoping it was ‘Stach. Stash makes more sense, but is less funny.
Anyway, turns out Rich was also quoted in TechCrunch. Not a bad day, making Wired and TechCrunch.
For the full story on Stash and Firepad, which also has offline capabilities, check out Rich’s post.
Last week was a bit of a whirlwind.
I spent Sunday through Wednesday in Denver attending Collaborate 13 and then jetted to the Bay Area to speak at a new hire bootcamp on Thursday.
As always, Collaborate 13 was a great chance to catch up with people I only see a couple times a year, if not less often, as well as a chance to meet new people and people I know only from online personas.
Aside from that, I got a chance to see the Applications User Experience team in action with customers, including the onsite usability lab and the eye-tracking demo, as well as the usual sessions.
I got a chance to test out the eye-tracker, made by Tobii, at the AUX demo booth. Thanks to John Rogers for walking me through the demo, which involved calibrating the IR sensors and then answering basic questions by looking at a demo page, e.g. who is a person’s manager based on a profile page.
After finishing the questions, I got to see the path my eyes took and the aggregated data of all the people who had participated. The aggregation of data points was incredibly interesting, as it begins to show patterns in your software, both good and bad.
Here’s a quick video of John walking me through the results.
I love stuff like this and the usability labs because they produce hard numbers to compare to assertions.
The Tobii is a neat little gadget. The naked eye can’t see the IR sensors, but my phone’s camera picked them up, which nicely personifies the little guy into a friendly robot or something.
After leaving Collaborate, I went to HQ to sit on a panel about “User Experience and Breaking the Status Quo.” Panel is a stretch, since this was more a conversation on stage with several of my colleagues, all of whom know way more about user experience than I do. The only reason I was invited was to stir the pot, which I did happily.
Funny aside, during a recap, I heard that Jeff Smith (@thatjeffsmith), yes *that* one, had presented earlier in the week, something I wish I could have seen. Jeff would have enjoyed the way I figured out people were talking about him. They didn’t initially mention his name, but I figured it out by attributes alone.
Anyway, if you’re attending an Oracle-themed conference later this year, I highly recommend checking out the Apps UX activities, the eye-tracker, the usability labs, the sessions, at your show of choice.
Find the comments.
Not that it’s a tradition or anything, but here are some links to brighten your Friday. Or whatever day it happens to be when you see them.
Google Glass SXSW video
Google released the video of their Glass session at SXSW. Noel (@noelportugal) was in the room, and Anthony (@anthonyslai) has an Explorer set on preorder. So yeah, we’re definitely interested.
It’s all speculation at this point, but it seems like Glass will essentially be a smartphone accessory, leveraging an app on the phone and piggybacking on the phone’s connectivity.
Facebook Home and your friends
The Verge humorously asks “Facebook Home is beautiful, but what if your friends aren’t?” A nice hook, and a good point about the value derived from Home and its dependency on the quality of your friends’ content.
I’ve seen this before with services that show beautiful pictures pulled from external content sources. The result looks great if the source content is high resolution and beautiful. Android’s contacts are an example; Android pulls the contact’s picture from the Google+ profile. If that photo is a nice, high resolution shot (e.g. Noel’s), it will look sharp on the phone. If not, it’s an annoying, grainy photo. The resolution of your phone obviously matters, e.g. some photos that looked OK on my Nexus S are now grainy on my Nexus 4.
If you’re building a service that showcases photos, you really should do some pre-processing on the images to ensure bad photos from the source don’t make your UI look janky.
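As a hypothetical sketch of that pre-processing step, you could gate images on a minimum width before using them full-bleed. This assumes ImageMagick’s identify is installed; any image library would do, and the 256-pixel floor is an arbitrary number I picked, not a recommendation from the article.

```shell
# image_usable FILE -> exit 0 only if FILE is wide enough to display
# cleanly. Uses ImageMagick's identify (an assumption on my part).
min_width=256   # pick a floor that matches your largest rendered size
image_usable() {
  w="$(identify -format '%w' "$1" 2>/dev/null)" || return 1
  [ "${w:-0}" -ge "$min_width" ]
}
# Usage sketch: image_usable avatar.jpg || fall back to a placeholder
```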
Google’s Knowledge Graph on tablet videos
Yes, it’s only for the Play Movies & TV app and only on tablets, but this is kind of a big deal. In short, Google is now adding metadata from its Knowledge Graph to supported video playing on tablets running the Play Movies & TV app.
When paused, tapping an actor’s face will pull up related cards, e.g. the actor’s filmography and biography, and offer to take actions for you, like search.
This fits a flow that everyone has had at one point or another, i.e. what’s that actor’s name, what was that other movie he was in, how tall is he, etc. Usually, you’d pause and hit IMDB.
I love this combination of internet information with video. It’s useful and a smart way for Google to get even more information about you. Think about the possibilities for analytics on this feature. Twitter should take note, given its rise as a medium for celebrities.
Lots of user experience goodies
Introduce Design Thinking Into Your Enterprise Implementations: From Misha’s (@mishavaughan) VoX blog, I give you Oracle UX Direct. Here’s the skinny:
The Oracle Applications User Experience team has created a program called Oracle UX Direct to provide customers, partners, and consultants in the enterprise industry with design best-practices and tools that they can leverage to make their enterprise implementations more successful. By introducing design thinking during the implementation stage, our customers have the opportunity to create a solution that best fits the needs of their users from the beginning.
Hit the source for details.
The Next Big UI Idea: Gadgets That Adapt To Your Skill: I’ve been noodling on this idea, i.e. level design applied to software, since I watched Indie Game: The Movie. It raises concerns for me though, given that I would hate to buy a TV that hand-held me through all its features.
Still, there is value in learning by doing, especially for training on new enterprise software. Upgrades often require retraining of users, which means those users aren’t doing their jobs, a double whammy.
No to NoUI: This is an interesting read and a cautionary tale about hiding too much from the user.
4 Surprising App-Design Principles, From The Instagram Of Quick Quizzes: Almost a companion piece for the previous post, these four principles target the mobile, distracted user. The distracted part was an a-ha for me, since thanks to Apple, we’re conditioned to think in terms of an immersive experience. That’s just not true. The smartphone itself is immersive, but its various features and apps, with the exception of games, really aren’t.
Facebook introduced Facebook Home today, and unlike most Facebook news, I’m actually quite interested in this announcement.
Setting aside the features, the shift away from apps to focus on a holistic user experience is one that I’ve been investigating for several months.
But from a user experience standpoint, perhaps the most significant thing about Home is simply the way it thinks beyond the “app” in a broader sense. It’s something Zuckerberg harped on continually: moving beyond apps. And that’s a big departure.
The idea of mobile apps as discrete, cordoned-off experiences is something Apple entrenched with the iPhone very early on. Build whatever you want on your own rectangular plots, Apple told developers, but this phone is ours, and we’re the ones responsible for how it looks, feels, and functions.
Being surrounded by iOS users, I often struggle to explain the benefits of the Android experience, since they tend to think linearly in terms of apps as discrete experiences, separate from the overall OS. This isn’t a fault; it’s just a byproduct of usage.
One of the most common knocks on Android is that it doesn’t have the same quality of apps that iOS does, but what Android allows apps to do provides for a much deeper integration with the OS itself and for a stronger connection with the user. I’m excited to see Facebook Home take advantage of this because, inevitably, it will lead other Android developers to expand their scope beyond the app.
Plus, Facebook Home will give me a nice example of what I mean the next time I try to explain the benefits of developing a holistic experience for Android users.
Although, since Facebook says they have no plans to port Home to iOS (and realistically, how could they?), I’ll still struggle to explain the user experience benefits. Even so, this is a step in the right direction.
As with many consumer technologies, expanding this paradigm for the enterprise may be easier. While we don’t know how successful Home will be, it’s easy to see how a similar work experience might be valuable. Your employer knows a lot about you, and that’s mostly OK because of the implicit trust relationship that exists.
So, while Google Now might be creepy and Facebook Home invasive, similar work-related functions could be much more successful, e.g. visual representations of your colleagues providing easy ways to communicate with them or assistant-type features based on your job role and functions tied to your calendar and email.
There are a lot of ways this type of experience could make working from a mobile device much easier, and maybe even pleasant.
Anyway, I’m interested to see how consumers react to Facebook Home, although it seems that at least initially, availability will be relatively limited.
Find the comments.