The Apple Watch Arrives

April 26th, 2015 Leave a Comment

Ho hum, Apple released a Watch on Friday. I haven’t seen this kind of nerd frenzy since Google Glass finally reached its first Explorers.

The Watch has transcended nerd-only fandom to reach regular people’s consciousness too, e.g. when I wore the Basis Peak, several people asked if it was *the* Watch.

Well, Noel (@noelportugal) and Thao (@thaobnguyen) are both test-driving their Watches now, so look for their initial impressions soon.

Here’s a sneak peek from Noel:

We’re definitely ready for the Watch, since our approach to wearables and other devices has been set for a while. Stay tuned for pictures when we get our stuff rolling on the actual Watch.

We’re not alone squashing Watch bugs and ironing out inconsistencies. Lots of Watch developers are scrambling now that the actual device is in-hand because the Watch Simulator can only take you so far.

And don’t worry, we still have lots of love for Android Wear. I just got a Moto 360, which was on sale for $179 the day everyone was preordering the Apple Watch.

Our philosophy is harmony. Check it, Noel’s Samsung Gear Live happily announced that his Apple Watch was shipping.

Lots of love for all our gadgets.

Find the comments.

OAUX Emerging Technologies RD&D Strategy

April 16th, 2015 2 Comments

Speaking of strategies, Misha (@mishavaughan) asked me to write up an article–not a post, there’s a difference–describing how this team goes about its business, i.e. researching, designing and developing solutions for the emerging technologies that will affect our users in the near and not-so-near future.

You can, and should, read the resulting article over at the mothership, Usableapps (@usableapps). Check it out:

New emphasis on emerging technology shapes Oracle’s user experience strategy

Floyd (@fteter) read it, and so should you because why not?

Surprise, there’s method to the madness. It may look like we just play with toys, and while that’s partially true, we’ve always played with purpose.

Thinking back on the eight years I’ve been doing this, I don’t recall ever outlining and presenting a strategy at this level, and the whole exercise of putting the strategy I have in my head into words and slides was enlightening.

Point of fact, we’ve always had a strategy, and it hasn’t changed much, although the technologies we investigate have.

Serious h/t to Paul (@ppedrazzi) in the early years, and Jeremy (@jrwashley) more recently, for shaping, advancing, and fostering the AppsLab vision.

Anyway, now you know where we invest our time and why, or if you knew that already, you now have a handy article to refer to, should you need a refresher or should you be enlightening someone new to the party.

Enjoy.

Are We Ready for the Apple Watch?

April 13th, 2015 2 Comments

So, apparently Apple is launching a watch soon, which has people asking us, i.e. Oracle Applications User Experience (@usableapps), what our strategy is for the watch of watches.

If you read here, you probably already know we’ve been looking at smart watches, super watches, and wearables of all kinds for a few years. So, the strategy is already in place.

Build a framework that does most of the work and plug in new wearables as they come along: Google Glass, Android Wear watches, Pebble, Apple Watch, whatever. Then, create glanceable experiences that fit what users want from each device.

Maybe you saw the Glance demo at OpenWorld 2014 in Jeremy’s (@jrwashley) session or at the OAUX Exchange.

Glance for Oracle Applications Cloud proof-of-concept apps on Android Wear (Samsung Gear Live) and Pebble

Ultan (@ultan) has an excellent writeup that will give you the whole scoop. I’ll cherry-pick the money quote:

This is not about designing for any one specific smartwatch. It’s a platform-agnostic approach to wearable technology that enables Oracle customers to get that awesome glanceable, cloud-enabled experience on their wearable of choice.

So, yeah, we have a strategy.

And boom goes the dynamite.

Find the comments.

Cool Machines: RU 800-S Rail Refresher

April 9th, 2015 Leave a Comment

The kid in me loves this behemoth so very much, h/t to Kottke. This rail refresher is a Swietelsky RU 800-S.

There are lots of videos showing this dream machine at work.

Four Weeks with the Basis Peak

April 9th, 2015 2 Comments

Right after I finished wearing the Nike+ Fuelband for three weeks, I moved straight on to another wearable device, the Basis Peak.

The Peak falls into a category Ultan (@ultan, @usableapps) calls the “super watch,” a term that nicely differentiates watches like the Peak (and the Fitbit Surge) from fitness bands (e.g. Jawbone UP24, Nike+ Fuelband, Fitbit Flex), smartwatches (e.g. Android Wear, Apple Watch, Pebble), and serious athletic training gadgets (e.g. Garmin, Polar).

Look at that list and tell me it doesn’t need differentiation. Wearables are definitely a thing.

I’ve been curious about the Basis since before the company was acquired by Intel. Lab alumna, Joyce (@joybot12), had lots of good things to say about the Peak’s ancestor, the Basis B1, and the device collects a lot of data. And I love data, especially data about me.

Unfortunately, the company doesn’t offer any developer integrations, just an export feature.

Anyway, here are some real reviews by people who do reviews for a living; check those out before reading my ramblings: Engadget, PC Magazine, and this guy.

The watch

Basis bills the Peak as the “ultimate fitness and sleep tracker,” and the device packs an impressive array of technology into a small package. For sensors, it has an optical heart rate monitor, an accelerometer, and skin temperature and perspiration sensors.

Plus, the Peak has a nifty gray-scale touchscreen display and a backlight that I eventually discovered, which is nice, albeit not terribly intuitive. I found all the gestures a bit clunky OOTB, and I can’t be the only one, because Basis sent me a how-to email right after I created my account. But, like anything, once I learned them, all was good.

Fun fact, the little guy is water resistant up to 5 ATM or 50 meters of pressure, and I took it swimming without any leakage.

The optical heart rate sensor is pretty cool (lasers!), and it makes for some spooky lighting in the dark, which is a fun way to creep out your spouse a la Blair Witch/Cloverfield.

I read somewhere that this type of heart rate monitor isn’t real time; I did notice that it was frequently searching for a heart rate, but the charts would show a continuous number. So, if you’re into constant heart rate monitoring, this isn’t for you, but it was good enough for me.

After wearing the Fuelband, the Peak felt bulky, and its rubber band wasn’t terribly comfortable. Actually, it was uncomfortable, especially since to get the best sensor data, you’re supposed to keep it tight.

On Day 2, I was positive I wouldn’t make it a week, let alone three, but I got used to it. Plus, the data kept me going, more on that in a bit.

The housing on the underside of the band did leave a nice mark after a few days, but that disappeared shortly after I stopped wearing the Peak.

Like every other device, it uses Bluetooth Low Energy (BLE) to sync with a smartphone, and Basis has apps for both iOS and Android.

Syncing the Peak with my smartphone was often an adventure. The watch would frequently lose its BLE connection with the phone, and I learned quickly that trying to reset that connection was futile. I tried and tried and eventually had to remove and re-associate the watch with the app to get the data flowing.

Because syncing was such a chore, I missed the instant gratification after a run, quantifying the steps, calories, etc. At one point, I accidentally confused the watch and lost about a day’s worth of data: I changed the date on the phone (cough, Candy Crush), and during that one-minute window, the watch synced.

The date change confused the watch, and I couldn’t reset it without dumping all its data and starting from scratch. Definitely my bad, but given how often the Peak wouldn’t sync, it seemed a bit ironic.

The Peak’s battery is solid, even with all the tech onboard; in practice, I saw about five days on the first charge, not too shabby.

And finally, the Peak gets attention. Maybe it was the white band, or more likely the Apple Watch buzz, but several people asked me about it, a few assuming it was the Watch itself. If nothing else, the rising tide of interest in the Watch has raised the collective consciousness about technology on the wrist.

The app and data

The only reason I soldiered through the discomfort of wearing the Peak is because it produces an impressive array of data, and I love me some data about me.

I’ll start with the smartphone app, which I didn’t use much except for glance-scan type information because Basis doesn’t follow Jeremy’s (@jrwashley) 10-90-90 rule for their mobile app, i.e. they cram all the graphs and information into the small viewport.

For reference, 10-90-90 refers to 10% of the tasks that 90% of users need 90% of the time. This provides a baseline to scale experiences down to less-capable devices in a thoughtful way.

I get why Basis built their smartphone app this way though; it allows the user to get full information in a mobile way. The My Basis web app provides all the data in a very appealing set of visualizations, and this is where I went to pore over the data I’d generated.

As with the other wearables we’ve tested (Misfit Shine, Nike+ Fuelband, Fitbit Surge), the Peak has game elements to encourage activity, called Habits. One of the first Habits that comes unlocked OOTB is called “Wear It,” which you can achieve by simply wearing the band for 12 hours.

This tells you a lot about the comfort of the device.

Unlocked Habits are pretty basic (burn 2,500 calories, take 10,000 steps), and as you achieve them, more difficult Habits can be unlocked and added, e.g. run for 30 minutes, move every hour, get more sleep, etc.

The thresholds for these Habits are configurable, but none of them is overly challenging. As you progress, you’ll find yourself working on half a dozen or more Habits every day. Habits can be paused, which I found valuable when I went on vacation last week.

Overall, the game seems targeted at casual users vs. athletes, but oddly, the data collected seems like the detail that athletes would find valuable. Maybe I didn’t play long enough.

Ah the data.

The Peak collects the usual stuff, calories and steps, and also heart rate. Additionally, it measures skin temperature and perspiration level, although I’m not sure what to do with those.

On the sleep side, the Peak measures light sleep, REM sleep, and deep sleep, and it tracks interruptions and tossing and turning.

While the Shine and Fuelband made me nutty-compulsive about activity data, the Peak turned my compulsion toward the sleep data. I found myself studying the numbers, questioning them and trying to sleep-hack.

Not that any of that mattered, sadly, I live a poor-sleep lifestyle.

In other news, I finally found my personal killer use case for smart/super watches: glanceable phone and text notifications. Because I carry my phone in my back pocket, I often miss calls and texts, but not with the Peak on my wrist.

My wife especially loved this feature.

In the past with the Pebble and Samsung Gear Live, I had too many notifications turned on, email, calendar, etc., and I didn’t wear these devices long enough to modify the settings.

Finally, the Peak helped me realize what a sadly inactive life I lead. 10,000 steps was a challenge for me every day, unless I went to the gym for a run.

I felt a twinge when it came time to take off the Nike+ Fuelband, and despite the discomfort, I pondered wearing the Basis Peak for longer too, specifically for the data it collected.

Maybe I’ve stumbled onto something, like wearables-detachment disorder. These are very personal devices, and I wonder if people develop an emotional attachment to them.

We’ll see when I’m done testing the next wearable.

Find the comments.

The Fitbit Surge: Watching Where the Super Watch Puck Goes

April 6th, 2015 2 Comments

Editor’s note: Here’s a review of the Fitbit Surge from Ultan (@ultan, @usableapps); if anyone can road-test a fitness tracker, it’s him. As luck would have it, the Surge is on my list of wearables to test as well. So, watch this space for a comparison review from a much less active person. Enjoy.

I’ve upgraded my Fitbit experience to the Fitbit Surge, the “Fitness Super Watch.”

Why?

I’ve been a Fitbit Flex user for about 18 months. I’ve loved its simplicity, unobtrusiveness, colourful band options, and general reliability. I’ve sported it constantly, worldwide. I’ve worn out bands and exhausted the sensor until it was replaced with the help of some awesome Fitbit global support. I’ve integrated it with the Aria Wi-Fi scales, syncing diligently. I’ve loved the Fitbit analytics, visualization, the badges, and comparing experiences with others.

The human body makes more sense for me as a dashboard than a billboard, as Chris Dancy (@servicesphere) would say.

But I wanted more.

The Flex didn’t tell me very much on its own—or in the moment—other than when a daily goal was reached or the battery needed attention. I had to carry a smartphone to see any real information.

I am also a user of other fitness (mostly running) apps: Strava, MapMyRun, Runcoach, Runkeeper, and more. All have merits, but again, I still need to carry a smartphone with me to actually record or see any results. This also means that I need to run through a tiresome checklist daily to ensure the whole setup is functioning correctly. And given the increasing size of smartphones, I am constantly in need of new carrying accessories. I’m a mugger’s dream with twinkling phablets strapped to my arms at night, not to mention asking for technical grief running around in European rain.

The Surge seemed like a good move to make.

Spinning up the Fitbit Surge in the gym

Onboarding the Superwatch Experience

I tested my new Fitbit Surge right out of the box in Finland on long snowy runs around Helsinki and have hammered it for weeks now with activities out in the Irish mist and in gyms, too. My impressions:

On the downside:

Relative glance: Fitbit Surge versus Motorola Moto 360

Thoughts on the Surge and Super Watch Approach

An emerging position among wearable technology analysts is that upgraded smartwatches such as the Fitbit Surge, or “super watches,” will subsume the market for dedicated fitness bands. I think that position is broadly reasonable but requires nuance.

Fitness bands (Flex, Jawbone Up, and so on), as they stand, are fine for the casual fitness type, or for someone who wants a general picture of how they’re doing, wellness-wise. They’ll become cheaper, giveaways even. More serious fitness types, such as hardcore runners and swimmers, will keep buying the upper-end Garmin-type devices and yes, will still export data and play with it in Microsoft Excel. In the middle of the market, there’s that large, broad set of serious amateurs, Quantified Self fanbois, tech heads, and the more competitive or jealous wannabe types who will take to the “super watches.”

And yet, even then, I think we will still see people carrying smartphones when they run or work out in the gym. These devices still have richer functionality. They carry music. They have a camera. They have apps to use during your workout or run (be they for Starbucks or Twitter). And you can connect to other people with them by voice, text, and so on.

I like the Fitbit Surge. Sure, it’s got flaws. But overall, the “super watch” approach is a sound one. The Surge eliminates a lot of complexity from my overall wearable experience, offers more confidence about data reliability, and I get to enjoy the results of my activity efforts faster, at a glance. It’s a more “in the moment” experience. It’s not there on context and fashion, but it will be, I think.

Anyone wanna buy some colored Fitbit Flex bands?

Conference Recaps and Such

March 27th, 2015 Leave a Comment

I’m currently in Washington D.C. at Oracle HCM World. It’s been a busy conference; on Wednesday, Thao and Ben ran a brainstorming session on wearables as part of the HCM product strategy council’s day of activities.

Then yesterday, the dynamic duo ran a focus group around emerging technologies and their impact on HCM, specifically wearables and the Internet of Things (IoT). I haven’t gotten a full download of the session yet, but I hear the discussion was lively. They didn’t even get to IoT, sorry Noel (@noelportugal).

I’m still new to the user research side of our still-kinda-new house, so it was great to watch these two in action as a proverbial fly on the wall. They’ll be doing similar user research activities at Collaborate 15 and OHUG 15.

If you’re attending Collaborate and want to hang out with the OAUX team and participate in a user research or usability testing activity, hit this link. The OHUG 15 page isn’t up yet, but if you’re too excited to wait, contact Gozel Aamoth, gozel dot aamoth at oracle dot com.

Back to HCM World, in a short while, I’ll be presenting a session with Aylin Uysal called Oracle HCM Cloud User Experiences: Trends, Tailoring, and Strategy, and then it’s off to the airport.

Earlier this week, Noel was in Eindhoven for OBUG Experience 2015. From the pictures I’ve seen, it was a fun event. Jeremy (@jrwashley) not only gave the keynote, but he found time to hang out with some robot footballers.

Check out the highlights:

Busy week, right? Next week is more of the same as Noel and Tony head to Modern CX in Las Vegas.

Maybe we’ll run into you at one of these conferences? Drop a comment.

In other news, as promised last week, I updated the feed name. Doesn’t look like that affected anything, but tell your friends just in case.

Update: Nope, changing the name totally borks the old feed, so update your subscription if you want to keep getting AppsLab goodness delivered to your feed reader or inbox.

Time to Update the Feed

March 19th, 2015 Leave a Comment

For those of you who enjoy our content via the feed (thank you), I have news.

Next week, I’ll be changing the feed’s name, so if you want to continue to receive AppsLab goodness in your feed reader of choice or in your inbox, you’ll need to come back here and subscribe again.

Or maybe it’s time to switch over to our Twitter (@theappslab) or Facebook Page, if that’s your thing. I did nuke the Google+ Page, but I doubt anyone will notice it’s gone.

Nothing else has changed.

OAUX Tidbits

March 18th, 2015 Leave a Comment

Here come some rapid fire tidbits about upcoming and recently past Oracle Applications User Experience (@usableapps) events.

Events of the Near Past

Laurie Pattison’s (@lsptahoe) team (@InnovateOracle) has been organizing events focused around stimulating and fostering innovation for quite some time now.

I’ve always been a big fan of group-think-and-work exercises, e.g. design jams, hackathons, ShipIts, code sprints, etc.

Our team frequently participates in and supports these events, e.g. Tony O was on a team that won a couple of awards at the Future of Information design jam back in early February, and John and Julia served as mentors at the Visualizing Information design jam a few weeks ago.

You may recall Julia’s visualization analysis and thinking; John has an equally informative presentation, not yet shared here, but we can hope.

Watch Laurie’s blog for information about more innovation events.

Events of the Near Future

It’s conference season again, and we’ll be bouncing around the globe spreading our emerging technologies user experience goodness.

Fresh off a hot session at UTOUG (h/t OG Friend of the ‘Lab Floyd) and gadget-hounding at SXSW Interactive, Noel (@noelportugal) will be in Eindhoven, the Netherlands for the Oracle Benelux User Group Experience Day, March 23 and 24.

Our fearless leader, Jeremy Ashley (@jrwashley) will be there as well giving the opening keynote. Bob Rhubart (@OTNArchBeat) recorded a video to tell you all about that. Check it out here:

While Noel enjoys Europe, I’ll be in Washington D.C. speaking at Oracle HCM World, along with Thao and Ben.

After that, we’ll have boots on the ground at Oracle Modern CX and Collaborate 15 in Las Vegas. Stay tuned for more, or if you’ll be at any conferences during Conference Season 2015 and wonder if OAUX will be there, check out our Events page.

Update: Here’s what OAUX will be doing at Collaborate 15. If you’re attending, come by and say hello.

Three Weeks with the Nike+ Fuelband SE

March 11th, 2015 7 Comments

I don’t like wearing stuff on my wrist, but in my ongoing quest to learn more about the wearables our users wear, I have embarked on a journey.

For science! And for better living through math, a.k.a. the quantified self.

And because I’ll be at HCM World later this month talking about wearables, and because wearables are a thing, and we have a Storify to prove it, and we need to understand them better, and the Apple Watch is coming (squee!) to save us all from our phones and restore good old face time (not that Facetime) and and and. Just keep reading.

Moving on, I just finished wearing the Nike+ Fuelband SE for three weeks, and today, I’m starting on a new wearable. It’s a surprise, just wait three weeks.

Now that I’ve compiled a fair amount of anecdotal data, I figured a loosely organized manifest of observations (not quite a review) was in order.

The band

The Fuelband isn’t my first fitness tracker; you might recall I wore the Misfit Shine for a few months. Unlike the minimalist Shine, the Fuelband has quite a few more bells and whistles, starting with its snazzy display.

Check out a teardown of the nifty little bracelet, some pretty impressive stuff inside there, not bad for a shoe and apparel company.

I’ve always admired the design aspects of Nike’s wearables, dating back to 2012 when Noel (@noelportugal) first started wearing one. So, it was a bit sad to hear about a year ago that Nike was closing that division.

Turns out the Fuelband wasn’t dead, and when Nike finally dropped an Android version of the Nike+ Fuelband app, I sprang into action, quite literally.

Anyway, the band didn’t disappoint. It’s lightweight and can be resized using a nifty set of links that can be added or removed.

The fit wasn’t terribly tight, and the band is surprisingly rigid, which eventually caused a couple areas on my wrist to rub a little raw, no biggie.

The biggest surprise was the first pinch I got closing the clasp. After a while, it got easier to close and less pinchy, but man that first one was a zinger.

The battery life was good, something that I initially worried about, lasting about a week per full charge. Nike provides an adapter cord, but the band’s clasp can be plugged directly into a USB port, which is a cool feature, albeit a bit awkward looking.

It’s water-resistant too, which is a nice plus.

Frankly, the band is very much the same one that Noel showed me in 2012, and the lack of advancement is one of the complaints users have had over the years.

The app and data

Entering into this, I fully expected to be sucked back into the statistical vortex that consumed me with the Misfit Shine, and yeah, that happened again. At least, I knew what to expect this time.

Initial setup of the band requires a computer and a software download, which isn’t ideal. Once that was out of the way, I could do everything using the mobile app.

The app worked flawlessly, and it looks good, more good design from Nike. I can’t remember any sync issues or crashes during the three-week period. Surprising, considering Nike resisted Android for so long. I guess I expected their foray into Android to be janky.

I did find one little annoyance. The app doesn’t support the Android Gallery for adding a profile picture, but that’s the only quibble I have.

Everything on the app is easily figured out; there’s a point system, NikeFuel. The band calculates steps and calories too, but NikeFuel is Nike’s attempt to normalize effort for specific activities, which also allows for measurement and competition among participants.

The default NikeFuel goal for each day is 2,000, a number that can be configured. I left it at 2,000 because I found that to be easy to reach.

The app includes Sessions too, which allow the wearer to specify the type of activity s/he is doing. I suppose this makes the NikeFuel calculation more accurate. I used Sessions as a way to categorize and compare workouts.

I tested a few Session types and was stunned to discover that the elliptical earned me less than half the NikeFuel that running on a treadmill did for the same duration.

Update: Forgot to mention that the app communicates in real time with the band (vs. periodic syncing), so you can watch your NikeFuel increase during a workout, pretty cool.

Overall, the Android app and the web app at nikeplus.com are both well-done and intuitive. There’s a community aspect too, but that’s not for me. Although I did enjoy watching my progress vs. other men my age in the web app.

One missing feature of the Fuelband, at least compared to its competition, is the lack of sleep tracking. I didn’t really miss this at first, but now that I have it again, with the surprise wearable I’m testing now, I’m realizing I want it.

Honestly, I was a bit sad to take off the Fuelband after investing three weeks into it. Turns out, I really liked wearing it. I even pondered continuing its testing and wearing multiple devices to do an apples-to-apples comparison, but Ultan (@ultan) makes that look good. I can’t.

So, stay tuned for more wearable reviews, and find me at HCM World if you’re attending.

Anything to add? Find the comments.

Development on Windows 8.1 Phone and Tablet

March 9th, 2015 Leave a Comment

This is a follow-up to my previous post (“Where are the Mobile Windows Devices?“), in which I gave my initial impressions of mobile Windows devices.  As part of our assessment of these devices, we also developed a few apps, and this post details how that went.

Getting Started

Windows Phone 8.1 applications have to be developed on Windows 8.1.  I am using a Mac so I installed Windows 8.1 Enterprise Trial (90-day Free Trial) in a Parallels VM.  In order to run the Phone Emulator (which is also a VM and so I was running a VM in a VM), I had to enable Nested Virtualization in Parallels.

Development is done in Visual Studio; I don’t think you can use any other IDE. You can download a version of Visual Studio Express for free.

Finally, you’ll need a developer license to develop and test a Windows Store app before the Store can certify it. When you run Visual Studio for the first time, it prompts you to obtain a developer license. Read the license terms and then click I Accept if you agree. In the User Account Control (UAC) dialog box, click Yes if you want to continue. It was $19 for a developer license.

Development

There are 2 distinct ways to develop applications on the Windows Platform.

Using the Windows Runtime (WinRT)

Applications built with WinRT are called “Windows Runtime apps”; again, there are 2 types of these: Windows Store apps and Windows Phone Store apps.

What’s really cool is that Visual Studio provides a universal Windows app template that lets you create a Windows Store app (for PCs, tablets, and laptops) and a Windows Phone Store app in the same project. When your work is finished, you can produce app packages for the Windows Store and Windows Phone Store with a single action to get your app out to customers on any Windows device. These applications can share a lot of their code, both business logic and presentation layer.

Even better, you can create Windows Runtime apps using the programming languages you’re most familiar with, like JavaScript, C#, Visual Basic, or C++. You can even write components in one language and use them in an app that’s written in another language.  Windows Runtime apps can use the Windows Runtime, a native API built into the operating system. This API is implemented in C++, and bindings (called “projections”) are created for JavaScript, C#, Visual Basic, and C++ in a way that feels natural for each language.
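
To give a feel for what those projections look like from JavaScript, here is a small sketch that pops a toast notification through the WinRT API. It follows the standard template-based toast pattern; treat it as illustrative rather than as code from our apps.

```js
// Show a toast notification from a Windows Runtime app written in JavaScript.
// Windows.UI.Notifications is the WinRT namespace projected into JS, which is
// why the method and enum names come through camel-cased.
var notifications = Windows.UI.Notifications;

var toastXml = notifications.ToastNotificationManager.getTemplateContent(
    notifications.ToastTemplateType.toastText01);   // simple one-line text template
toastXml.getElementsByTagName("text")[0]
        .appendChild(toastXml.createTextNode("Hello from WinRT and JavaScript"));

var toast = new notifications.ToastNotification(toastXml);
notifications.ToastNotificationManager.createToastNotifier().show(toast);
```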

Note that this is very different from the PhoneGap/Cordova approach, which also lets you write apps in JavaScript. Universal Windows Apps do not run in a UIWebView/WebView; they are native applications for which (some of) the application logic gets run through the JavaScript engine. This means that they do not suffer from the challenges we face with PhoneGap/Cordova (you can’t use cutting-edge features, performance issues, etc.), yet you still get the benefits of using the language you are already familiar with.

This also allows you to use existing JavaScript libraries and CSS templates, no porting required. You can even write one app using multiple languages, leveraging the dynamic nature of JavaScript for app logic while leveraging languages like C# and C++ for more computationally intensive tasks.

Traditional (Not using the WinRT)

Applications that do not use the WinRT are called Windows desktop apps and are executables or browser plug-ins that run in the Windows desktop environment. These apps are typically written against the Win32 and COM, .NET, WPF, or Direct3D APIs. There are also Windows Phone Silverlight apps, which are Windows Phone apps that use the Windows Phone Silverlight UI Framework instead of the Windows Runtime and can be sold in the Windows Phone Store.

Deployment

To deploy to my device I had to first “developer unlock” my phone (instructions).

Deployment is a breeze from Visual Studio: just hook up your phone, select your device, and hit deploy. The application gets saved to your phone and it opens. It appears in the apps list like all other apps.  You can also “side-load” applications onto other Windows machines for testing purposes; just package your application up in Visual Studio, put it on a USB stick, stick it in the other tablet/PC, and run the install script created by the packaging process.

I created 2 simple applications: one was a C# Universal Application and one was a JavaScript/CSS3/HTML5 Universal Application. I was able to deploy and run both on a tablet, desktop, and phone without any problem. They were very simple applications, but I could not see any performance difference between the C# application and the JS application.

Additional Findings

For the best user experience when developing Universal Apps using JS/HTML5/CSS3, you should develop Single Page Applications (SPAs).  This ensures there are no weird “page loads” in the middle of your app running.  Users will not expect this from their application; remember, these are universal apps and could be run by a user on a desktop.

State can be easily shared by automatically roaming app settings and state, along with Windows settings, between trusted devices on which the user is logged in with the same Microsoft account.
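
That roaming behavior is exposed through the WinRT application data API; here is a minimal sketch from JavaScript (the setting name is just an example):

```js
// Roaming app data: values written here sync to the user's other trusted
// devices signed in with the same Microsoft account.
var roamingSettings = Windows.Storage.ApplicationData.current.roamingSettings;

roamingSettings.values["lastOpenedItem"] = "opportunity-42";  // write a setting
var lastItem = roamingSettings.values["lastOpenedItem"];      // read it back, here or on another device
```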

Applications on the Windows App Store come with built-in crash analytics: this is one of the valuable services you get in exchange for your annual registration with the Store, no need to build it yourself.
Conclusion

As a JavaScript developer myself I am extremely excited by the fact that I can develop native applications on the Windows Platform using tools that I am already familiar with.  Furthermore, with Windows 10 it seems that Microsoft is doubling down on Universal Apps and with that OS Upgrade, my JavaScript apps can soon also be deployed to the HoloLens, Surface Hub, and IoT devices like the Raspberry Pi 2!

Where are the Mobile Windows Devices?

March 9th, 2015 Leave a Comment

That was one of the questions one of Oracle’s executives asked when we presented our new Cloud UX Lab.  The short answer was that there were none.  As far as I am aware, we never did any testing of any of our prototypes and applications on Windows Phones or tablets because, frankly, we thought it didn’t matter.  Windows Phones (and tablets) are a distant third to the 2 behemoths in this space, Android and iOS, and even lost market share in the year just wrapped up (2.7%) compared to 2013 (3.3%), according to IDC.  However, they are predicted to do better in the years ahead (although these predictions have been wildly off in the past), and it seems that there is some pressure from our Enterprise Apps customers to look at the Windows Mobile platform, hence the question.  Never afraid of a challenge, we ordered a Surface Pro 3 and a Nokia Lumia 1520, used them for a few weeks, ran some tests, wrote some apps, and jotted down our findings, leading to this blog post.

Initial impressions

Surface Pro 3

I’m going to be brief about the Surface Pro 3: it’s basically a PC without a physical keyboard (although you can get one if you want) but with a touch screen and a stylus.  It even runs the same version of Windows 8.1 as your PC.  I must admit that the Tiles seem more practical on the tablet than on a PC, but I could do without the constant reminders to “upgrade Windows” and “upgrade Defender,” complete with mandatory reboots, just like on your PC.  The most infuriating part is that the virtual keyboard does not automatically pop up when you tap on an input field, just like on your PC, which doesn’t have the concept of a virtual keyboard.  Instead, you have to explicitly open it to be able to type anything.

Fortunately, there are some advantages too, e.g. anything that runs on your Windows PC probably will run fine on the Windows tablet, confirmed by our tests.  It has a USB 3.0 port that works just like … a USB port.  Plug in a USB Drive and you can instantly access it, just like on your PC, quite handy for when you have to side-load applications (more on that in a later post).

The whole package is also quite pricey, similar to a premium laptop.  It’s more of a competitor for the likes of Apple’s MacBook Air than the iPad, I think.  I’m thinking people who try to use their iPads as little laptops are probably better off with this.

Lumia 1520

The phone on the other hand is a different beast.  The Windows 8.1 Phone OS, unlike the tablet version, is a smartphone OS.  As such, it has none of the drawbacks that the tablet displayed.  My first impression of the phone was that it is absolutely huge.  It measures 6 inches across and dwarfs my iPhone 6, which I already thought was big.  It’s even bigger than the iPhone 6+ and the Samsung Galaxy Note 4.  My thumb can reach less than 50% of the screen; this is not a phone you can handle with one hand.

iPhone 4S vs iPhone 6 vs Lumia 1520

Initial setup was relatively quick. It comes “preinstalled” with a bunch of apps, although they are not really installed on the phone yet; they get installed on first boot.  It took about 10-15 minutes for all the “preinstalled” phone apps to be installed.

The screen is absolutely gorgeous with bright colors and supremely fine detail, courtesy of a 367ppi AMOLED ClearBlack screen.  It also performs very well outside, in bright light.  It has an FM Radio which uses your headphone cable as the antenna (no headphones, no radio), a USB port and a microSD port.  It also has a dedicated, two-stage camera shutter button.  There’s no physical mute button though.  The Tiles work really well on the phone.  They are much easier to tap than the app icons on either Android or iOS, and you can resize them.

I tried installing the same apps as I have on my iPhone, but this was unfortunately where I hit my first giant snag.  I knew the ecosystem was underdeveloped compared to Android and iOS, but I didn’t know it was this bad.  Staples on my iPhone like Feedly, Flickr, VLC, Instapaper and Pocket don’t exist on the Windows Phone platform.  You also won’t find a dedicated app to listen to your Amazon Prime music or watch your movies.  If you want to watch the latest exploits of the Lannisters, you are also going to have to do it on another device, no HBO Go or XFinity on the Windows Phone.  There is also no version of Cisco VPN, which means it’s a non-starter for Oracle employees as that is the only way to access our intranet.  Weirder still, there is no Chrome or Firefox available on Windows Phones, which means I had to do all my testing on the version of IE that came with the phone (gulp!).

Impressions after a week of usage

I used the Lumia as my main phone for a week (poor pockets); I just popped the micro SIM card from my iPhone into the Lumia and it worked.  I really got hooked on the constantly updating Live Tiles.  News, stock prices, weather, calendar notifications, Facebook notifications, etc. get pushed straight to my main screen without my having to open any apps.  I can glance and drill down if I want to, or just ignore them.  They are a little bit of a distraction with their constant flipping motion, but overall very cool.

The other thing that was very noticeable was that the top notification bar is actually transparent, so it doesn’t seem like you lose that part of your screen; I liked that.

The Windows Store has a try-before-you-buy feature, something that would be a godsend on the iPhone: my kids love to buy games and then drop them within a day never to be used again.  You can also connect the Windows Phone to your XBox One and use it as an input device/remote control.

Another feature that I highly appreciated, especially as a newbie to the Windows Phone, was the smart learning notifications (not sure if that is the official name).  Rather than dumping all the help information on you when you open the app for the first time, the phone seems to monitor what you do and how you do it.  If there is a better/easier way of doing that task, after repeated use, it will let you know, in a completely non-condescending way, that “You are doing it wrong.” This seems to be a much better approach, because if you tell me the first time I use the app how to use all its features, I will forget by the time I actually want to use that feature, or worse, I might never use that feature, so now you’ve wasted my time telling me about it.

As for overall performance, there was some noticeable “jank” in the phone’s animations; it just didn’t feel as buttery smooth as the iPhone 6.

The camera

The camera really deserves its own chapter.  The 1520 is the sister phone of the Lumia 1020, which has a whopping 41 megapixel image sensor.  The 1520 has to make do with 20 megapixels, but that is still at least double what you find in most smartphones.  Megapixel count isn’t everything, but it does produce some wonderful pictures.  One of the reasons that Nokia went with these large sensors is that they wanted to support better zooming.  Because you can’t optically zoom with a phone camera (you’d need a much bigger lens for that), a phone does digital zooming, which typically leads to a pixelated mess when you zoom in.  Unless, of course, you start with a very high resolution image, which is what Nokia did.

One of the interesting features of the photo app is that it supports “lenses.”  These are plugins you can install in the photo app that add features not available out-of-the-box.  There are dozens of these lenses; it’s basically an app store within an app, adding features like (Instagram-style) filters, 360 shots, panoramic pictures, etc.  One lens promises to make you look better in selfies (it didn’t work on me).  One really neat lens is Nokia’s “Refocus” lens, which brings a Lytro-like variable depth of field to your phone, and it works great too.

In the same lens app you can also filter out all colors except for the object you click on, called “color pop,” so you get this effect:

Color pop in action

In the app, you can keep clicking on other objects (e.g. the table) to pop their color.

Other than the 20 megapixel sensor, the phone is also equipped with a top-notch Carl Zeiss lens.  The phone has a physical, dedicated, two-stage shutter button: half-press for focus and full press for taking the picture.  It also has a larger-than-usual degree of manual control. You’ll find the usual settings for flash mode, ISO, white balance and exposure compensation, but also parameters for shutter speed and focus. The latter two are not usually available on mobile phones.  The camera also performs really well in low light conditions.

Summary

I like the phone and its OS, and I really like the camera. The Tiles also work really well on a phone. I dislike the performance, the size, and the lack of applications; the latter is a deal-breaker for me. I had some trepidation about going cold turkey on Windows Phone for the week, but it turned out alright. However, I was happy to switch back to my iPhone 6 at the end of the week.
I’m a bit more on the fence about the tablet. If you get the physical keyboard, it might work out better, but then you basically have a laptop, so I’m not sure what the point is. The fact that it runs Windows has its advantages (everything runs just as it does on Windows) and disadvantages (keyboard issues).

I can’t wait to get my hands on Windows 10 and a HoloLens :-)

Happy Coding!

Mark.

Automatic: Nice, but Not Necessary

February 20th, 2015 1 Comment

Editor’s note: Here’s the first post from one of our newish team members, Ben. Ben is a usability engineer with a PhD in Cognitive Psychology, and by his own account, he’s also a below average driver. Those two factoids are not necessarily related; I just don’t know what his likes and dislikes are so I’m spit-balling.

Ben applied his research chops to himself and his driving using Automatic (@automatic), a doodad that measures your driving and claims to make you a better driver. So, right up his alley.

Aside from the pure research, I’m interested in this doodad as yet another data collector for the quantified self. As we generate mounds of data through sensors, we should be able to generate personal annual reports, a la Nicholas Felton, that have recommended actions and tangible benefits.

Better living through math.

Anyway, enjoy Ben’s review.

When I first heard about Automatic (@automatic), I was quite excited—some cool new technology that will help me become a better driver. The truth is, I’m actually not a big fan of driving. Which is partly because I know I’m not as good of a driver as I could be, so Automatic was a glimmer of hope that would lead me on the way to improving my skills.

Though I will eagerly adopt automated cars once they’re out and safe, the next best thing is to get better so I no longer mildly dread driving, especially when I’m conveying others. And one issue with trying to improve is knowing what and when you’re doing something wrong, so with that in mind (and for enterprise research purposes), I tried out Automatic.

Automatic is an app for your phone plus a gadget (called the Link) that plugs into your car’s diagnostics port, which together gives you feedback on your driving and provides various ways to look at your trip data.

Automatic Link

The diagnostics port the Link plugs into is the same one that your mechanic uses to see what might be wrong when your check engine light is ominously glaring on your dashboard. Most cars after 1996 have these, but not all data is available for all cars. Mine is a 2004 Honda Civic, which doesn’t put out gas tank level data, meaning that MPG calculations may not be as accurate as they could be. But it still calculates MPG, and it seems to be reasonably accurate. I don’t, however, get the benefit of “time to fuel up” notifications, though I do wonder how much of a difference those notifications make.

The Link has its own accelerometer, so combined with the data from the port and paired with your phone via Bluetooth, it can tell you about your acceleration, distance driven, speed, and location. It can also tell you what your “Check Engine” light means, and send out some messages in the event of a crash.

It gives three points of driving feedback: if you accelerate too quickly, brake too hard, or go over 70 mph. Each driving sin is relayed to you with its own characteristic tones emitted from the Link. It’s a delightful PC speaker, taking you way back to the halcyon DOS days (for those of you who were actually alive at the time). It also lets you know when it links up with your phone, and when it doesn’t successfully connect it outputs a sound much like you just did something regrettable in a mid-’80s Nintendo game.

App screenshot

One of the main motivators for the driving feedback is to save gas—though you can change the top speed alert if you’d like. From their calculations, Automatic says 70 mph is about as fast as you want to go, given the gas-spent/time-it-will-take-to-get-there tradeoff.

Automatic web dashboard

Another cool feature is that it integrates with IFTTT (@ifttt), so you can set it up to do things like: when you get home, turn the lights on (if you have smart lights); or when you leave work, send a text to your spouse; or any other number of things—useful or not!

Is It Worth It?

The big question is, is it worth $99? It’s got a great interface, a sleek little device, and a good number of features, but for me, it hasn’t been that valuable (yet). For those with the check engine light coming up, it could conceivably save a lot of money if you can prevent unnecessary service on your car. Fortunately, my Civic has never shown me the light (knock on wood), though I’ll probably be glad I have something like Automatic when it does.

I had high hopes for the driver feedback, until I saw that it’s actually pretty limited. For the most part, the quick acceleration and braking are things I already avoided, and when it told me I did them, I usually had already realized it. (Or it was a situation out of my control that called for it.) A few times it beeped at me for accelerating where it didn’t feel all that fast, but perhaps it was.

I was hoping the feedback would be more nuanced and could allow me to improve further. The alerts would be great for new drivers, but don’t offer a whole lot of value to more experienced drivers—even those of us who would consider ourselves below average in driving skill (putting me in an elite group of 7% of Americans).

The Enterprise Angle

Whether it’s Automatic, or what looks like might be a more promising platform, Mojio (@getmojio), there are a few potentially compelling business reasons to check out car data-port devices.

One of the more obvious ones is to track mileage for work purposes—it gives you nice readouts of all your trips, and allows you to easily keep records. But that’s just making it a little easier for an employee to do their expense reports.

The most intriguing possibility (for me) is for businesses that manage fleets of regularly driven vehicles. An Automatic-like device could conceivably track the efficiency of cars/trucks and drivers, and let a business know if a driver needs better training, or if a vehicle is underperforming or might have some other issues. This could be done through real-time fuel efficiency, or tracking driving behavior, like what Automatic already does: hard braking and rapid acceleration.
If a truck seems to be getting significantly less mpg than it should, the business can see if it needs maintenance or if the driver is driving too aggressively. Though trucks probably get regular maintenance, this kind of data may allow for preventive care that could translate to savings.

This kind of tracking could also be interesting for driver training, examining the most efficient or effective drivers and adopting an “Identify, Codify, Modify” approach.

Overall

I’d say this technology has some interesting possibilities, but may not be all that useful yet for most people. It’s fun to have a bunch of data, and to get some gentle reminders on driving practices, but the driver improvement angle from Automatic hasn’t left me feeling like I’m a better driver. It really seems that this kind of technology (though not necessarily Automatic, per se) lends itself more to fleet management, improving things at a larger scale.

Stay tuned for a review of Mojio, which is similar to Automatic, but features a cellular connection and a development platform, and hence more possibilities.

Fun with an Android Wear Watch

February 3rd, 2015 2 Comments

A couple days ago, I was preparing to show some development work Luis (@lsgaleana) did for Android Wear using the Samsung Gear Live.

One of the interesting problems we’ve encountered lately is projecting our device work onto larger screens to show to an audience. I know, bit of a first world problem, which is why I said “interesting.”

At OpenWorld last year, I used an IPEVO camera to project two watches, the Gear Live and the Pebble, using a combination of jewelry felt displays. That worked OK, but the contrast differences between the watches made it a bit tough to see them equally well through the camera.

Plus, any slight movement of the table, and the image shook badly. Not ideal.

Lately, we haven’t been showing the Pebble much, which actually makes the whole process much easier because . . . it’s all Android. An Android Wear watch is just another Android device, so you can project its image to your screen using tools like Android Screen Monitor (ASM) or Android Projector.

Of course, as with any other Android device, you’ll have to put the watch into debugging mode first. If you’re developing for Android Wear, you already know all this, and for the rest of us, the Android Police have a comprehensive how-to hacking guide.

For my purposes, all I needed to do was get adb to recognize the watch. Here are the steps (h/t Android Police):

Now, when I need to show a tablet app driving the Wear watch, I can use adb and ASM to show both screens on my Mac, which I can then project. Like so.

Bonus points, the iPod Touch in that screen is projected using a new feature for QuickTime in Mavericks that works with iOS 8 devices.

Stories Are the Best, Plus News on Nest!

January 28th, 2015 2 Comments

Friend of the ‘Lab, Kathy, has been using Storify for a while now to compile easy-to-consume, erm, stories about the exploits of Oracle Applications User Experience (@usableapps).

You might remember Storify from past stories such as the In the U.K.: Special events and Apps 14 with UKOUG and Our OpenWorld 2014 Journey.

Anyway, Kathy has a new story, The Internet of Things and the Oracle user experience, which just so happens to feature some of our content. If you read here with any regularity or know Noel (@noelportugal), you’ll know we love our internet-connected things.

So, check out Kathy’s story to get the bigger picture, and hey, why not read all the stories on the Usableapps Storify page.

And bonus content on IoT!

Google keeps making the Nest smarter and marginally, depending on your perspective, more useful. In December, a Google Now integration rolled out, pairing a couple of my favorite products.

More gimmick than useful feature, at least for me, I ran into issues with the NLP on commands, as you can see:

Saying “set the temperature to 70 degrees” frequently results in an interpretation of 270 degrees; it works fine if you don’t say “to” there. As Google Now becomes a more effective assistant, this integration will become more useful, I’ve no doubt.

Then, at CES, Nest announced partnerships that form a loose alliance of household appliances. It may take a big player like Nest (ahem, Google) to standardize the home IoT ecosystem.

And just this week, Misfit announced a partnership with Nest to allow their fitness tracker, the one I used to wear, to control the Nest. I’m tempted to give the Shine another go, but I’m worried about falling back into a streak-spiral.

Thoughts on IoT? Nest? Ad-supported world domination? You know what to do.

BusinessTown

January 23rd, 2015 2 Comments

Maybe you remember Busytown, Richard Scarry’s famous town, from your childhood or from reading it to your kids.

Tony Ruth has created the Silicon Valley equivalent, BusinessTown, (h/t The Verge) populated by the archetypes we all know and sometimes love. What do the inhabitants of BusinessTown do? “What Value-Creating Winners Do All Day,” natch.

Who’s up for a Silicon Valley marathon?

Mash up Oracle Cloud Application Web Services with Web APIs and HTML5 APIs

January 22nd, 2015 Leave a Comment

No longer an “honorary” but now a full-blown member of the AppsLab team, I gave a presentation at the Chicago & Dubai Oracle Usability Advisory Board in November on REST and Web APIs and how they can facilitate the transition from on-premise software to cloud-based solutions (the content of which can be fodder for a future post).

As we all are transitioning from on-premise implementations to cloud-based solutions, there seems to be a growing fear among customers and partners (ISV, OEM) alike that they will lose the capability to extend these cloud-based applications.  After all, they do not have access to the server anymore to deploy and run their own reports/forms/scripts.

I knocked up a very simple JavaScript client side application as part of my presentation to prove my point, which was that (well-designed) REST APIs and these JavaScript frameworks make it trivial to create new applications on top of existing backend infrastructure and add functionality that is not present in the original application.

My example application is based on existing Oracle Sales Cloud Web Services.  I added the capability to tweet, send text messages (SMS), and make phone calls straight from my application, and to speech-enable the UI.  Although you can debate the usefulness of how I am using some of these features, that was obviously not the purpose of this exercise.

Instead, I wanted to show that, with just a few lines of code, you can easily add these extremely complex features to an existing application. When was the last time you wrote a bridge to the Public Switched Telephone Network or a Speech synthesizer that can speak 25 different languages?

Here’s a 40,000 foot view of the architecture:

High level view of Demo APP Architecture

The application itself is written as a Single Page Application (SPA) in plain JavaScript.  It relies heavily on open source JavaScript libraries that are available for free to add functionality like declarative DOM binding and templating (knockout.js), ES6 style Promises (es6-promise.js), AMD loading (require.js) etc.  I didn’t have to do anything to add all this functionality (other than including the libraries).

It makes use of the HTML5 Speech Synthesis API, which is now available in most modern browsers to add Text-to-Speech functionality to my application.  I didn’t have to do anything to add all this functionality.

I also used the Twitter APIs to be able to send tweets from my application and the Twilio APIs to be able to make phone calls and send SMS text messages from my application.  I didn’t have to do anything to add all this functionality.  Can you see a theme emerging here?

Finally, I used the Oracle Sales Cloud Web Services to display all the Business Objects I wanted to be present in my application: Opportunities, Interactions, and Customers.  As with the other pieces of functionality, I didn’t have to do anything to add this functionality!

You basically get access to all the functionality of your CRM system through these web services where available, i.e. not every piece of functionality is exposed through web services.


Note that I am not accessing the Web Services directly from my JS; I go through a proxy server in order to adhere to the browser’s same-origin policy restrictions.  The proxy also decorates the Oracle Applications SOAP Services as REST end-points.  If you are interested in how to do this, you can have a look at mine; it’s freely available.


For looks, I am using some CSS that makes the application look like a regular ADF application.  Of course you don’t have to do this; you can, e.g., use Bootstrap if you prefer.  The point is that you can make this application look however you want.  As I am trying to present this as an extension to an Oracle Cloud Application, I would like it to look like any other Oracle Cloud Application.

With all these pieces in place, it is now relatively easy to create a new application that makes use of all this functionality.  I created a single index.html page that bootstraps the JS application on first load.  Depending on the menu item that is clicked, a list of Customers, Opportunities or Interactions is requested from Oracle Sales Cloud, and on return, those are laid out in a simple table.
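
To give a feel for that pattern, here is a rough sketch using the libraries mentioned above; the /proxy/opportunities end-point and the field handling are placeholders of mine, not the actual Sales Cloud payload.

```js
// Fetch a collection from the proxy's REST end-point and hand it to a
// knockout observableArray that a simple table template is bound to.
define(['knockout'], function (ko) {
  var viewModel = {
    opportunities: ko.observableArray([])   // the table's data source
  };

  function loadOpportunities() {
    return fetch('/proxy/opportunities')                          // proxy exposes the SOAP services as REST
      .then(function (response) { return response.json(); })
      .then(function (rows) { viewModel.opportunities(rows); });  // bound table re-renders automatically
  }

  ko.applyBindings(viewModel);   // wire up the declarative bindings in index.html
  loadOpportunities();
  return viewModel;
});
```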

For demonstration purposes, I provided switches to enable or disable each feature.  Whenever a feature is enabled and the user clicks something in the table, I trigger the phone call, SMS, speech, or tweet, whichever is enabled.  For example, here is the code to do text-to-speech using the HTML5 Speech Synthesis API, currently available in WebKit browsers, so use Safari or Chrome (mobile or desktop).  And yes, I have feature detection in the original code; I just left it out to keep the code simple.
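
In sketch form (the speakText helper name and the hard-coded language are my own choices, not the exact code from the demo), it amounts to this:

// Sketch: speak a row's summary with the HTML5 Speech Synthesis API.
// This is the feature detection mentioned above, plus the actual call.
function speakText(text, lang) {
  if (!('speechSynthesis' in window)) {
    console.warn('Speech Synthesis API not supported in this browser');
    return;
  }
  var utterance = new SpeechSynthesisUtterance(text);
  utterance.lang = lang || 'en-US';   // any language the browser's voices support
  window.speechSynthesis.speak(utterance);
}

// e.g. called from the row click handler when the speech switch is on
speakText('Opportunity: New data center, status: Negotiation');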

Ditto for the SMS sending using the Twilio API:
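
Because Twilio credentials should never live in the browser, the sketch below posts to the proxy instead of calling Twilio directly; the /twilio/sms route is hypothetical, and the proxy would relay it to Twilio's Messages endpoint:

// Sketch: send an SMS via a proxy route that relays to Twilio's Messages API.
// The '/twilio/sms' route and its payload are hypothetical.
function sendSms(to, body) {
  return new Promise(function (resolve, reject) {
    var xhr = new XMLHttpRequest();
    xhr.open('POST', '/twilio/sms');
    xhr.setRequestHeader('Content-Type', 'application/json');
    xhr.onload = function () {
      if (xhr.status < 300) { resolve(xhr.responseText); }
      else { reject(new Error(xhr.statusText)); }
    };
    xhr.onerror = function () { reject(new Error('Network error')); };
    xhr.send(JSON.stringify({ to: to, body: body }));
  });
}

sendSms('+15551234567', 'Opportunity "New data center" moved to Negotiation');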

And calling somebody with the Phone Call API from Twilio, using the same user and twilio objects as above.
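
In rough form, going through a hypothetical /twilio/call proxy route rather than the demo's own objects, the idea is the same: the proxy relays the request to Twilio's Calls endpoint, and Twilio fetches its call instructions (TwiML) from a URL once the call connects:

// Sketch: place an outbound call through the same proxy pattern as the SMS example.
// The '/twilio/call' route and the TwiML URL are hypothetical.
function callContact(to) {
  var xhr = new XMLHttpRequest();
  xhr.open('POST', '/twilio/call');
  xhr.setRequestHeader('Content-Type', 'application/json');
  xhr.onload = function () {
    console.log('Call queued:', xhr.responseText);
  };
  xhr.send(JSON.stringify({ to: to, twimlUrl: 'https://example.com/voice.xml' }));
}

callContact('+15551234567');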

The tweeting is done by adding the tweet button to the HTML, dynamically filling in the tweet’s content with some text from the Opportunity or Interaction.
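
One simple way to do that is with Twitter's Web Intent URL; the element id and the opportunity fields below are illustrative:

// Sketch: point a Tweet link at Twitter's Web Intent URL and fill the text
// from the selected row. The 'tweet-button' element id is illustrative.
function updateTweetButton(opportunity) {
  var text = 'Working on opportunity: ' + opportunity.Name +
             ' (' + opportunity.StatusCode + ')';
  document.getElementById('tweet-button').href =
    'https://twitter.com/intent/tweet?text=' + encodeURIComponent(text);
}

updateTweetButton({ Name: 'New data center', StatusCode: 'Negotiation' });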

Here is a screencast of the application in action:

As I mentioned earlier, how I am using the APIs might not be particularly useful, but the point is to show how easy it is to integrate this functionality with Oracle Cloud Applications and extend them beyond what is delivered out of the box.  It probably makes more sense to use Twilio to actually call or text a contact attached to the opportunity or interaction, rather than me.  Or to tweet when an opportunity moves to a “win” status.  The possibilities are endless, but I leave that up to you.

Happy Coding!

Mark.

Dowsing for Smarties

January 21st, 2015 Leave a Comment

Editor’s note: John and Noel (@noelportugal) need to chat about Google’s Physical Web gBeacons.

I have been a tad skeptical about the usefulness of smart watches, but my colleague Julia Blyumen has changed my thinking.

Woodblock of a dowser

In her recent blog post, Julia noted that a smart watch could become both a detector and a universal remote control for all IoT “smart things”. She backed this up with a link to an excellent academic paper (pdf), “User Interfaces for Smart Things: A Generative Approach with Semantic Interaction Descriptions.”

I strongly encourage anyone interested in the Internet of Things to read this paper. In it, the authors lay the foundations for a general-purpose way of interacting with “smart things”: interactive components that can sense and report on current conditions (counters, thermometers), or respond to commands (light switches, volume knobs).

These smarties (as I like to call them) will have much to tell us and will be eager to accept our commands. But how will we interact with them? Will they adapt to us or must we adapt to them? How will we even find them?

The authors propose a brilliant solution: let each smartie emit a description of what it can show or do. Once we have that description, we can devise whatever graphical user interface (or voice command or gesture) we want. And we could display that interface anywhere: on a webpage or a smartphone – or a watch!

Another one of my AppsLab colleagues, Raymond Xie, immediately saw a logical division of labor: use a phone or tablet for complex interactions, use a watch for simple monitoring and short command bursts.

Another way a watch could work in concert with a phone would be as a “smartie detector.”  It will be a long time (if ever) before every thing is smart.  Until then it will often not be obvious whether the nearby refrigerator, copy machine, projector, or lamp is controllable.

Watches could fill this gap nicely.  Every time your watch comes within a few feet of a smartie it could vibrate or display an icon or show the object’s state or whatever.  You could then just glance at your wrist to see if the object is smart instead of pulling out your phone and using it as a dowsing rod.

One way of implementing this would be for objects or fixed locations (room doors, cubicles, etc.) to simply emit a short-range bluetooth ID beacon.  The watch or its paired phone could constantly scan for such signals (as long as its battery holds out).  If one was detected it would use local wifi to query for the ID and pull up an associated web page.  Embedded code in the web page would provide enough information to display a simple readout or controller. The watch could either display it automatically or just show an indicator to let the user know she could tap or speak for quick interactions or pull out her phone to play with a complete web interface.
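
To make the last step concrete, here is a small JavaScript sketch: the lookup returns a little description of the smartie’s controls, and the watch or phone renders whatever widgets the description calls for. The descriptor format is invented for illustration; the paper defines its own, much richer semantic descriptions.

// Hypothetical smartie descriptor, fetched after resolving a beacon ID.
var descriptor = {
  name: 'Conference Room 3 Projector',
  controls: [
    { id: 'power', kind: 'toggle',  label: 'Power' },
    { id: 'lamp',  kind: 'readout', label: 'Lamp hours', value: 412 }
  ]
};

// Render a tiny UI from the description: readouts become text, toggles become buttons.
function renderSmartie(desc, container) {
  desc.controls.forEach(function (control) {
    var el;
    if (control.kind === 'readout') {
      el = document.createElement('div');
      el.textContent = control.label + ': ' + control.value;
    } else {
      el = document.createElement('button');
      el.textContent = control.label;
      el.onclick = function () {
        // A real client would send the command back to the smartie here.
        console.log('Toggling ' + control.id + ' on ' + desc.name);
      };
    }
    container.appendChild(el);
  });
}

renderSmartie(descriptor, document.body);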

An example I would find useful would be meeting room scheduling.  I often arrive at a meeting room to find someone else is already using it.  It would be nice to wave my watch at the door and have it confirm who had reserved the room or when it would next be free. Ideally, I could reserve it myself just by tapping my watch. If I realized that I was in the wrong place or needed to find another room, I could then pull out my phone or tablet with a meeting room search-and-reserve interface already up and running.

But that’s just the beginning.

One of the possibilities that excites me the most about this idea is the ability to override all the confusing and aggravating UIs that currently assault me from every direction and replace them with my own UIs, customized to my tastes.  So whenever I am confronted with a mysterious copy machine or the ridiculously complicated internet phone we use at work, or a pile of TV remote controls with 80 buttons apiece, or a BART ticket machine with poorly marked slots and multiple OK buttons, or a rental car with diabolically hidden wiper controls, I could pull out my phone (or maybe even just glance at my watch) to see a more sane and sensible UI.

Designers could perfect and sell these replacement UIs, thus freeing users from the tyranny of having to rely on whatever built-in UI is provided.  This would democratize the user experience in a revolutionary way.  It would also be a boon for accessibility. Blind users or old people or children or the wheelchair-bound could replace any UI they encounter in the wild with one specially adapted for them.

Virtual interfaces could also end the tedium of waiting in lines. Lines tend to form in parking garages and at conference registration because only one person can use a kiosk at a time. But if you could tap into a kiosk from your smart watch, dozens of people could complete their transactions at the same time.

Things get even more interesting if people start wearing their own beacons.  You could then use your watch to quickly capture contact information or create reminders; during a hallway conversation, a single tap could “set up meeting with Jake.” Even automatically displaying the full name of the person next to you would be helpful to those of us who sometimes have trouble remembering names.

If this capability were ubiquitous and the range a bit wider, you could see and interact with a whole roomful of people, or even make friends during a plane ride. Even a watch could display avatars for nearby people and let you bring any one of them into focus. You could then take a quick action from the watch or pass the selected avatar to your phone/tablet/laptop to initiate something more complex, like transferring a file.

Of course this could get creepy pretty fast.  People should have some control over the information they are willing to share and the kind of interactions they wish to permit. It’s an interesting design question: “What interaction UIs should a person emit?”

We are still at the dawn of the Internet of Things, of course, so it will be a while before all of this comes to pass. But after reading this paper I now look at the things (and people) around me with new eyes. What kind of interfaces could they emit? Suddenly the idea of using a watch to dowse for smarties seems pretty cool.

Dear Julia: SmartWatch Habits and Preferences

January 13th, 2015 Leave a Comment

Julia’s recent post about her experiences with the Samsung Gear watches triggered a lively conversation here at the AppsLab. I’m going to share my response here and sprinkle in some of Julia’s replies.  I’ll also make a separate post about the interesting paper she referenced.

Dear Julia,

You embraced the idea of the smart watch as a fully functional replacement for the smart phone (nicely captured by your Fred Flintstone image). I am on the other end of the spectrum. I like my Pebble precisely because it is so simple and limited.

I wonder if gender-typical fashion and habit is a partial factor here. One reason I prefer my phone to my watch is that I always keep my phone in my hip pocket and can reliably pull it out in less than two seconds. My attitude might change if I had to fish around for it in a purse which may or may not be close at hand.

Julia’s response:

I don’t do much on the watch either. I use it on the go to:

  • read and send SMS
  • make and receive a call
  • read email headlines
  • receive alerts when meetings start
  • take small notes

and with Gear Live:

  • get driving directions
  • ask for factoids

I have two modes to my typical day. One is when I am moving around with hands busy. Second is when I have 5+ minutes of still time with my hands free. In the first mode I would prefer to use a watch instead of a phone. In the second mode I would prefer to use a tablet or a desktop instead of a phone. I understand that some people find it useful to have just one device – the phone – for both modes. From Raymond’s description of Gear S, it sounds like reading on a watch is also okay.

Another possible differentiator, correlated with gender, is finger size. For delicate tasks I sometimes ask my wife for help. Her small, nimble fingers can do some things more easily than my big man paws. Thus I am wary of depending too heavily on interactions with the small screen of a watch. Pinch-zooming a map is delightful on a phone but almost impossible on a watch. Even pushing a virtual button is awkward because my finger obscures almost the entire surface of the watch. I am comfortable swiping the surface of the watch, and tapping one or two button targets on it, but not much more. For this reason I actually prefer the analog side buttons of the Pebble.

Julia’s response:

Gear has a very usable interface. It is controlled by tap, swipe, a single analog button, and voice. Pinch-zoom of images was enabled on the old Gear, but there were no interactions that depended on pinch-zoom.

How comfortable are you talking to your watch in public? I have become a big fan of dictation, and do ask Siri questions from time to time, but generally only when I am alone (in my car, on a walk, or after everyone else has gone to bed). I am a bit self-conscious about talking to gadgets in public spaces. When other people do it near me I sometimes wonder if they are talking to me or are crazy, which is distracting or alarming, so I don’t want to commit the same offense.

I can still remember watching Noel talking to his Google Glass at a meeting we were in. He stood in a corner of the room, facing the wall, so that other people wouldn’t be distracted or think he was talking to them. An interesting adaptation to this problem, but I’m not sure I want a world in which people are literally driven into corners.

Julia’s Response:

I am not at all comfortable talking to my watch. We should teach lipreading to our devices (wouldn’t that be a good Kickstarter project?). But I would speak to the watch out of safety or convenience. Speaking to a watch is not as bad as to glasses. I am holding the watch to my mouth, looking at it, and, in the case of the Gear Live, first saying “Okay, Google.” I don’t think many people think I am talking to them. I must say most look at me with curiosity and, yes, admiration.

What acrobatics did you have to go through to use your watch as a camera? Did you take it off your wrist? Or were you able to simultaneously point your watch at your subject while watching the image on the watch? Did tapping the watch to take the photo jiggle the camera? Using the watch to take pictures of wine bottles and books and what-not is a compelling use case but often means that you have to use your non-watch hand to hold the object. If you ever expand your evaluation, I would love it if you could have someone else video you (with their smart watch?) as you take photos of wine bottles and children with your watch.

Julia’s Response:

No acrobatics at all. The camera was positioned in the right place. As a piece of industrial design it looked awful. My husband called it the “carbuncle” (I suspect it might be the true reason for the camera’s disappearance in the Gear Live). But it worked great. See my reflection in the mirror as I was taking the picture below? No acrobatics. The screen of the watch worked well as a viewfinder. I didn’t have to hold these “objects” in my hands. Tapping didn’t jiggle the screen.

Thanks again for a thought-provoking post, Julia.  I am also not sure how typical I am. But clearly there is a spectrum of how much smart watch interaction people are comfortable with.

John

An Interaction Designer’s Perspective: Samsung Gear vs. Samsung Gear Live

January 12th, 2015 Leave a Comment

Editor’s note: In January of 2014, our team held a wearables summit of sorts, test-driving five popular watches, fitness bands and head-mounted displays to collect experiential evidence of each form factor, initial experience, device software and ecosystem, and development capabilities.

Julia drew the original Samsung Galaxy Gear smartwatch, and she’s been using it ever since. A few months ago, she began using the new Android Wear hotness, the Samsung Gear Live, which several of us have.

What follows are Julia’s impressions and opinions of the two watches. Enjoy.

Original Galaxy Gear versus Gear Live

When I had to keep track of time, I used to wear my Skagen watch, and I loved my little Skagen. Last year it ran out of battery. Coincidentally, that happened when Thao (@thaobnguyen) ordered the then just-released Samsung Galaxy Gear for me to “test.”

Life is busy, and it took me some ten months to get a new battery for my Skagen.

In the meantime, I wore Gear. When I got my Skagen back, I had a “Lucy of Prince Caspian” moment. I felt my watch was bewitched – I couldn’t talk to it (I tried), and it couldn’t talk back to me. Mute and dumb. That’s how I realized I am hooked on smart watches.

Back in Narnia, Lucy Pevensie tries to wake up a lethargic tree that forgot how to speak. My Skagen watch doesn’t speak to me either.

This is just a preface; the write-up is about the original Gear versus the Gear Live, which I’ve been testing for a few months. In a nutshell, I have mixed feelings about the Gear Live. Though there are some improvements over the original watch, I find many setbacks.

Typography

Left: original Gear. Right: Gear Live. Note the minimalistic typography of the original Gear versus the decorative typography of Android Wear.

The original Samsung Galaxy Gear featured clean, bold typography. I could read a notification at a glance, even when driving. In the Gear Live, the minimalistic typography of the Samsung Gear has been replaced by the smaller fonts and decorative backgrounds of Android Wear. Not only are those decorations useless, they make the watch unusable in the very situations where it could have been most helpful. (And yes, I understand Samsung had to showcase the impressive display.)

Speaker

Left: original Gear. Right: Gear Live. I can take a call AND talk on the original Gear. With the Gear Live I can take the call, but then, unless I am connected to car speakers, I need to pick up the phone to talk.

Getting calls on the Gear in awkward situations was my main use of it. As clunky as the placement of the speaker and mic was on the original Gear, I was still able to take calls safely while driving, or while walking with my hands full. The Gear Live has no speaker. It can initiate a call hands-free, but what is the use if I still need to get to my phone to speak?

Camera

Left: original Gear. Right: Gear Live, which has no camera.

Location, voice-to-text, AND image-to-text are the three most logical input methods for a watch. I got very used to taking image notes with the original Gear. Did you know that Evernote can search for text in images? For me, the flagship demo application of the original Gear was Vivino. With Vivino, one can take a picture of a wine label at a store with the watch camera and get the rating and pricing back on the watch. This application was a great demonstration of the smart watch’s retail potential. The Gear Live has no camera, dismissing all such use cases.

Vivino application on the original Gear (no longer supported): point the watch camera at a label, take a picture and submit it to the Vivino server, and receive the wine rating on the watch.

Google Speech Recognition

Google Speech Recognition is a superbly usable technology, way beyond S-Voice or Siri. Big Data in real action! Voice Search, Voice Commands, and dictation work fabulously. The only issue I found is with recognizing email contacts from speech.

Smart Watch

Google Voice Search makes the smart watch smart. It brings the knowledge base of the world, the Internet, to the tip of your tongue, and it is MAGIC!

Google Now

I must confess I am annoyed by Google Now cards. I know it tries really hard, but the recommendations are wrong about 50% of the time. The other 49% they are irrelevant. Given that, I feel that Now should stick to the back rows. Instead, it puts itself on center stage. Lesson learned: for a smart watch, the precision/recall balance needs to be skewed heavily toward precision.

Google Now on the Gear Live. Ah? I am at home, silly!

Conclusions

These opinions are my own. At least half of my day is spent on the go – driving kids around, in classrooms or lessons, and doing family errands. I rarely have idle hands or idle time.

You’ll be the judge of whether I am an atypical user. In addition, I do not subscribe to the school of thought that a smart watch is a phone satellite and a fetish. I believe it can be a useful gadget well beyond that.

Yes, it is a given that no one will use the watch to write or read a novel, or even a long email. Apart from that, I don’t see why a good smart watch cannot do everything a person on the go needs to do, replacing the phone and giving us back our other hand.

Therefore, I feel that a good smart watch shall aspire to:

If that is your typical day, then this is your gadget.

Last Thought: Smart Watch and IoT

Last but not least, I believe that a smart watch naturally lends itself to becoming a universal remote control for all IoT “smart things”: it can be your ID, it can sense “smart things,” it can output small chunks of information as voice or text, and it can take commands. As you walk past (your) refrigerator, the refrigerator can remind you via your watch to buy more milk, and you can adjust the refrigerator’s temperature from the watch. This assumes that a “smart thing” can beam out a description of all the knobs and buttons you need to control it.

I am surprised there is not much written on that, but here is a very good paper (pdf): “User Interfaces for Smart Things: A Generative Approach with Semantic Interaction Descriptions,” Simon Mayer, Andreas Tschofen, Anind K. Dey, and Friedemann Mattern, Institute for Pervasive Computing, ETH Zurich, and HCI Institute, Carnegie Mellon University, April 4, 2014.