Look for details later this week.
While you wait for that, enjoy these tidbits from our Oracle Applications User Experience colleagues.
Fit for Work: A Team Experience of Wearable Technology
Wearables are a thing, just look at the CES 2015 coverage, so Misha (@mishavaughan) decided to distribute Fitbits among her team to collect impressions.
Good idea: get everyone to use the same device and collect feedback, although it seems a bit unfair, given that Ultan (@ultan) is perhaps the fittest person I know. Luckily, this wasn’t a contest of fitness or of most-wrist-worn-gadgets. Rather, the goal was to gather as much anecdotal experience as possible.
Bonus, there’s a screenshot of the Oracle HCM Cloud Employee Wellness prototype.
Fresh off a trip to Jolly Old England, the OAUX team will be in Santa Fe, Mexico in late February. Stay tuned for details.
Speaking of, one of our developers in Oracle’s Mexico Development Center, Sarahi Mireles (@sarahimireles) wrote up her impressions and thoughts on the Shape and ShipIt we held in November, en español.
And finally, OAUX and the Oracle College Hire Program
Oracle has long had programs for new hires right out of college. Fun fact, I went through one myself many eons ago.
Anyway, we in OAUX have been graciously invited to speak to these new hires several times now, and this past October, Noel, several other OAUX luminaries and David (@dhaimes) were on a Morning Joe panel titled “Head in the Clouds,” focused loosely around emerging technologies, trends and the impact on our future lives.
Interesting discussion to be sure, and after attending three of these Morning Joe panels now, I’m happy to report that the attendance seems to grow with each iteration, as does the audience interaction.
For a guy whose name means Christmas, it seems a logical leap to use Alexa to control his Christmas tree lights too.
Let’s take a minute to shame Noel for shooting portrait video. Good, moving on. Oddly, I found out about this from a Wired UK article about Facebook’s acquisition of Wit.ai, an interesting nugget in its own right.
If you’re interested, check out Noel’s code on GitHub. Amazon is rolling out another batch of Echos to those who signed up back when the device was announced in November.
How do I know this? I just accepted my invitation and bought my very own Echo.
With all the connected home announcements coming out of CES 2015, I’m hoping to connect Alexa to some of the IoT gadgets in my home. Stretch goal for sure, given all the different ecosystems, but maybe this is finally the year that IoT pushes over the adoption hump.
Fingers crossed. The comments you must find.
It’s helped me cut the cable, I gave it as a Christmas gift two years in a row (to different people), I have several in my home, and I carry one in my laptop bag to stream content on the road.
And if you’ve seen any of us on the road, you may have seen some cool stuff we’ve built for the Chromecast.
Back in June, Google announced a killer feature for the little HDMI gizmo, ultrasonic pairing, which promised to remove the necessity for a device to be connected to the same wifi network as the Chromecast to which it was casting.
That feature, guest mode, rolled out in December for Android devices running 4.3 and higher, and it is as awesome as expected.
It’s very easy to set up and use.
First, you need to enable guest mode for your Chromecast. I tried this initially in the Mac Chromecast app, but alas, it has not yet been updated to include this option, same with iOS. So, you’ll need to use the Android Chromecast mobile app, like so:
Once enabled, the PIN is displayed on the Chromecast’s backdrop, and anyone in the room can cast to it via the PIN or by audio pairing.
When attempting to connect, the Chromecast first tries the audio method; the Chromecast app asks to use the device’s microphone, and Chromecast broadcasts the PIN via audio tone.
Failing that (or if you skip the audio pairing), the user is prompted by the Chromecast app to enter the PIN manually.
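That fallback boils down to a tiny decision chain. Here’s a minimal sketch of the flow (the function names are hypothetical stand-ins, not the real Chromecast app internals):

```python
def pair_with_chromecast(hear_audio_pin, prompt_for_pin, verify_pin):
    """Sketch of the guest-mode pairing flow: try the audio-encoded
    PIN first, then fall back to manual PIN entry."""
    pin = hear_audio_pin()          # listen for the PIN broadcast as an audio tone
    if pin is None:                 # out of earshot, e.g. a wall in the way
        pin = prompt_for_pin()      # ask the user to type the on-screen PIN
    return verify_pin(pin)

# Simulated run: audio pairing fails, so the user types the PIN from the TV.
result = pair_with_chromecast(
    hear_audio_pin=lambda: None,
    prompt_for_pin=lambda: "1234",
    verify_pin=lambda pin: pin == "1234",
)
```

Either way the secret never leaves the room: the PIN only exists on the TV screen and in the audio tone.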
Easy stuff, right? In case you’re worried that someone not in the room could commandeer your Chromecast, they can’t, at least according to Google. Being a skeptic, I tested this myself, and sure enough, the audio method won’t work if there are walls separating the device from the Chromecast. The app fails to pair via audio and asks for the PIN, which you can only get from the TV screen itself.
Not entirely foolproof, but good enough.
So why is this a cool feature? In a word, collaboration. Guest mode allows people to share artifacts and collaborate (remember, Chromecast has a browser) on a big screen without requiring them all to join the same wifi network.
Plus, it’s a modern way to torture your friends and family with your boring vacation pictures and movies.
More and more apps now support Chromecast, making it all the more valuable, e.g. the Disney Movies app, a must-have for me. Bonus for that app, it’s among the first that I know of to bridge the Google and Apple ecosystems, i.e. it consolidates all the Disney movies I’ve bought on iTunes and Google Play into a single app.
Thoughts? Find the comments.
Noel (@noelportugal) is one of a handful of early adopters to get his hands on the Amazon Echo, Amazon’s in-home personal assistant, and being the curious hacker that he is, of course he used an unpublished API to bend Alexa (that’s the Echo’s personality) to his will.
We’re hoping Amazon releases official APIs for the Echo soon, lots of great ideas on deck.
I conducted customer feedback sessions with users who fit the “c-level executive” user profile, to collect feedback on some of our new interactive data visualizations. Unfortunately, I can’t share any of these design concepts just yet, but I can share a bunch of pics of Noel, who gave several talks over the course of the 3-day conference.
This first photo is a candid taken after Noel’s talk on Monday about “Wearables at Work.”
I was thrilled to see so many conference attendees sticking around afterwards to pepper Noel with questions; usually at conferences, people leave promptly to get to their next session, but in this case, they stuck around to chat with Noel (and try on Google Glass for the first time).
The next photo is from Tuesday, where Noel and Vivek Naryan hosted a roundtable panel on UX. Because this was a more intimate, round-table style talk, the conference attendees felt comfortable speaking up and adding to the conversation. They raised concerns about data privacy, their thoughts on where technology is headed in the future, and generally chatted about the future of UX and technology.
This last photo is from Monday afternoon, when I made Noel take a break from his grueling schedule to play table tennis with me. The ACC Liverpool conference center thoughtfully provided table tennis in their Demo Grounds as a way to relieve stress and get some exercise (it was a bit too cold to run around outside).
I put up a valiant effort, but Noel beat me handily. In my defense, I played the first half of the game in heels; once I took those off, my returns improved markedly. I’ll get him next time! A special thank-you to Gustavo Gonzalez (@ggonza4itc), CTO at IT Convergence, for the great action shot, and also for giving excellent feedback and thoughtful input about the design concepts I showed him the following day.
All-in-all, we enjoyed the Apps 14 and Tech 14 conferences. It’s always great to get out among the users of our products to collect real feedback.
For more on the OAUX team’s activities at the 2014 editions of the UKOUG’s annual conferences, check out the Storify thread.
Update: I’ve now “hacked” the API to control Hue lights and initiate a phone call with Twilio. Check it out here: https://www.youtube.com/watch?v=r58ERvxT0qM
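For flavor, here’s a rough sketch of the kind of command dispatch a hack like this implies. The trigger phrases and handlers below are made up for illustration; the unpublished Echo API itself isn’t reproduced here:

```python
def dispatch(command, actions):
    """Map a transcribed voice command to a handler by keyword.
    `actions` maps a trigger phrase to a callable."""
    text = command.lower()
    for phrase, handler in actions.items():
        if phrase in text:
            return handler()
    return None  # no trigger matched; ignore the command

# Hypothetical handlers; real ones might call the phue and twilio libraries.
log = []
actions = {
    "turn on the lights": lambda: log.append("hue: lights on") or "hue",
    "call my phone": lambda: log.append("twilio: dialing") or "twilio",
}
picked = dispatch("Alexa, turn on the lights", actions)
```

The hard part of the real hack is getting the transcribed command out of the Echo in the first place; once you have the text, the dispatch side is this simple.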
Last November, Amazon announced a new kind of device, part speaker, part personal assistant, and called it the Amazon Echo. If you saw the announcement, you might have also seen their quirky infomercial.
The parodies came hours after the announcement, and they were funny. But dismissing this as just a Siri/Cortana/Google Now copycat might miss the potential of this “always listening” device. To be fair, this is not the first device that can do this. I have a Moto X that has an always-on chip waiting for a wake word (“OK Google”), and Google Glass does the same thing (“OK Glass”). But the fact that I don’t have to hold the device, be near it, or push a button (Siri) makes this cylinder kind of magical.
It is also worth noting that NONE of these devices is really “always-listening-and-sending-all-your-conversations-to-the-NSA”; in fact, the “always listening” part is local. Once you say the wake word, you’d better make sure you don’t spill the beans for the next few seconds, which is the window during which the device listens and performs an STT (speech-to-text) operation in the cloud.
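A toy model of that split, just to make the privacy claim concrete: only the few seconds of audio after a locally matched wake word ever leave the device, everything else is discarded on-device (the frame-by-frame representation is a simplification, of course):

```python
def listening_loop(frames, wake_word="alexa", window=3):
    """Toy always-listening loop: wake-word matching happens locally;
    only a short window after a match is sent to the cloud for STT."""
    sent_to_cloud = []
    i = 0
    while i < len(frames):
        if frames[i] == wake_word:                         # local, on-device match
            sent_to_cloud.extend(frames[i + 1:i + 1 + window])  # brief STT window
            i += 1 + window
        else:
            i += 1                                         # dropped, never uploaded
    return sent_to_cloud

frames = ["chatter", "alexa", "play", "music", "now", "chatter"]
uploaded = listening_loop(frames)
```

Everything before and after the window stays on the device, which is why the wake word can run continuously without streaming your whole day to a server.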
We can all see through Amazon and why this is good for them. Right off the bat, you can buy songs with a voice command. You can also add “stuff” to your shopping list, which reminds me of a similar product Amazon launched last year, Amazon Dash, which unfortunately is only available in select markets. The fact is that Amazon wants us to buy more from them, and for some of us that is awesome, right? Prime, two-day shipping, drone delivery, etc.
I have been eyeing these “always listening” devices for a while. The Ubi ($300) and the Ivee ($200) were my two other choices. Both have had mixed reviews, and both have yet to deliver on the promise of an SDK or API. The Amazon Echo doesn’t have an SDK yet either, but Amazon has posted a link where you can show the Echo team your interest in developing apps for it.
The promise of a true artificial intelligence assistant, or personal contextual assistant (PCA), is coming soon to a house or office near you. Which brings me to my true interest in the Amazon Echo: the possibility of creating a “Smart Office” where the assistant will anticipate my day-to-day tasks, set up meetings, remind me of upcoming events, and analyze and respond to email and conversations, all tied to my Oracle Cloud, of course. The assistant will also control physical devices in my house/office: “Alexa, turn on the lights,” “Alexa, change the temperature to X,” etc.
All in all, it has been fun to request holiday songs around the kitchen and dining room (“Alexa, play Christmas music”). My kids are having a field day trying to ask the most random questions. My wife, on the other hand, is getting tired of the constant interruption of music, but I guess it’s the novelty. We shall see if my kids are still friendly to Alexa in the coming months.
In my opinion, people dismissing the Amazon Echo will be the same people who said: “Why do I need a music player on my phone? I already have ALL my music collection on my iPod” (iPhone naysayers, circa 2007), or “Why do I need a bigger iPhone? That ’Pad thing is ridiculously huge!” (iPad naysayers, circa 2010). And now I have already heard, “Why do I want a device that is always connected and listening? I already have Siri/Cortana/Google Now” (Amazon Echo naysayers, circa 2014).
Agree, disagree? Let me know.
Back in the early 90s I ventured into virtual reality and was sick for a whole day afterwards.
We have since learned that people become queasy when their visual systems and vestibular systems get out of sync. You have to get the visual response lag below a certain threshold. It’s a very challenging technical problem, which Oculus now claims to have cracked. With ever more sophisticated algorithms and ever faster processors, I think we can soon put this issue behind us.
Anticipating this, there has recently been a resurgence of interest in VR. Google’s Cardboard project (and Unity SDK for developers) makes it easy for anyone to turn their smartphone into a VR headset just by placing it into a cheap cardboard viewer. VR apps are also popping up for iPhones and 3D side-by-side videos are all over YouTube.
Some of my AppsLab colleagues are starting to experiment with VR again, so I thought I’d join the party. I bought a cheap cardboard viewer at a bookstore. It was a pain to put together, and my iPhone 5S rattles around in it, but it worked well enough to give me a taste.
I downloaded an app called Roller Coaster VR and had a wild ride. I could look all around while riding and even turn 180 degrees to ride backwards! To start the ride I stared intently at a wooden lever until it released the carriage.
My first usability note: between rides it’s easy to get turned around so that the lever is directly behind you. The first time I handed it to my wife she looked right and left but couldn’t find the lever at all. So this is a whole new kind of discoverability issue to think about as a designer.
Despite appearances, my roller coaster ride (and subsequent zombie hunt through a very convincing sewer) is research. We care about VR because it is an emerging interaction that will sooner or later have significant applications in the enterprise. VR is already being used to interact with molecules, tumors, and future buildings, use cases that really need all three dimensions. We can think of other use cases as well; Jake suggested training for service technicians (e.g. windmills) and accident re-creation for insurance adjusters.
That said, both Jake and I remain skeptical. There are many problems to work through before new technology like this can be adopted at an enterprise scale. Consider the idea of immersive virtual meetings. Workers from around the world, in home offices or multiple physical meeting rooms, could instantly meet all together in a single virtual room, chat naturally with each other, pick up subtle facial expressions, and even make presentations appear in mid air at the snap of a finger. This has been a holy grail for decades, and with Oculus being acquired by Facebook you might think the time has finally come.
Not quite yet. There will be many problems to overcome first, not all of them technical. In fact VR headsets may be the easiest part.
A few of the other technical problems:
- Bandwidth. I still can’t even demo simple animations in a web conference because the U.S. internet system is too slow. I could do it in Korea or Sweden or China or Singapore, but not here anytime soon. Immersive VR will require even more bandwidth.
- Cameras. If you want to see every subtle facial expression in the room, you’ll need cameras pointing at every face from every angle (or at least one 360 camera spinning in the center of the table). For those not in the room you’ll need more than just a web cam pointing at someone’s forehead, especially if you want to recreate them as a 3D avatar. (You’ll need better microphones too, which might turn out to be even harder.) This is technically possible now, Hollywood can do it, but it will be a while before it’s cheap, effortless, and ubiquitous.
- Choreography. Movie directors make it look easy, and even as individuals we’re pretty good about scanning a crowded room and following a conversation. But in a 3-dimensional meeting room full of 3-dimensional people there are many angles to choose from every second. We will expect our virtual eyes to capture at least as much detail as our real eyes that instinctively turn to catch words and expressions before they happen. Even if we accept that any given participant will see a limited subset of what the overall system can see, creating a satisfying immersive presence will require at least some artificial intelligence. There are probably a lot of subtle challenges like this.
And a non-technical problem:
- Privacy. Any virtual meeting which can be transmitted can also be recorded and viewed by others not in the meeting. This includes off-color remarks (now preserved for the ages or at least for future office parties), unflattering camera angles, surreptitious nose picking, etc. We’ve learned from our own research that people *love* the idea of watching other people but are often uncomfortable about being watched themselves. Many people are just plain camera shy – and even less fond of microphones. Some coworkers are uncomfortable with our team’s weekly video conferences. “Glasshole” is now a word in the dictionary – and glassholes sometimes get beaten up.
So for virtual meetings to happen on an enterprise scale, all of the above problems will have to be solved, and some of our attitudes will have to change. We’ll have to find the right balance as a society – and the lawyers will have to sign off on it. This may take a while.
But that doesn’t mean our group won’t keep pushing the envelope (and riding a few virtual roller coasters). We just have to balance our nerdish enthusiasm with a healthy dose of skepticism about the speed of enterprise innovation.
What are your thoughts about the viability of virtual reality in the enterprise? Your comments are always appreciated!
It’s difficult to make a link post seem interesting. Anyway, I have some nuggets from the Applications User Experience desk plus bonus robot video because it’s Tuesday.
Back to Basics. Helping You Phrase that Alta UI versus UX Question
From Coffee Table to Cloud at a Glance: Free Oracle Applications Cloud UX eBook Available
Next up, another byte from Ultan on a new and free eBook (registration required) produced by OAUX called “Oracle Applications Cloud User Experiences: Trends and Strategy.” If you’ve seen our fearless leader, Jeremy Ashley (@jrwashley), present recently, you might recognize some of the slides.
Oh and if you like eBooks and UX, make sure to download the Oracle Applications Cloud Simplified User Interface Rapid Development Kit.
Today, We Are All Partners: Oracle UX Design Lab for PaaS
And hey, another post from Ultan about an event he ran a couple weeks ago, the UX Design Lab for PaaS.
Friend of the ‘Lab, David Haimes (@dhaimes), and several of our team members, Anthony (@anthonyslai), Mark (@mvilrokx), and Tony, participated in this PaaS4SaaS extravaganza, and although I can’t discuss details, they built some cool stuff and had oodles of fun. Yes, that’s a specific unit of fun measurement.
Amazon’s robotic fulfillment army
From kottke.org, a wonderful ballet of Amazon’s fulfillment robots.
Editor’s note: Here’s another new post from a new team member. Shortly after the ‘Lab expanded to include research and design, I attended a workshop on visualizations hosted by a few of our new team members: Joyce, John and Julia.
The event was excellent. John and Julia have done an enormous amount of critical thinking about visualizations, and I immediately started bugging them for blog posts. All the work and research they’ve done needs to be freed into the World so anyone can benefit from it. This post includes the first three installments, and I hope to get more. Enjoy.
I still haven’t talked anyone into reading Proofs and Stories, and god knows I tried. If you read it, let me know. It is written by the author of Logicomix, Apostolos Doxiadis, if that makes the idea of reading Proofs and Stories more enticing. If not, I can offer you my summary:
1. Problem solving is like a quest. As in a quest, you might set off thinking you are bound for Ithaka only to find yourself on Ogygia years later. Or, in Apostolos’ example, you might set off to prove Fermat’s Last Theorem only to find yourself studying elliptic curves for years. The seeker walks through many paths, wanders in circles, reverses steps, and encounters dead ends.
2. The quest has a starting point (what you know), a destination (the hypothesis you want to prove), and points in between (statements of fact). A graph, in the mathematical sense, is a great way to represent this: A is the starting point, B is the destination, F is a transitive point, C is a choice.
A story is a path through the graph, defined by the choices a storyteller makes on behalf of his characters.
Frame P5 below shows Snowy’s dilemma. Snowy’s choice determines what happens to Tintin in Tibet. Had Snowy not gone for the bone, the story would have been different.
Even though a story is by nature linear, there is always a notion of alternative paths. Linearizing the forks and branches of the path so that the story is most interesting is the art of storytelling.
3. A certain weight, or importance, can be assigned to a point based on the number of choices leading to it, or resulting from it.
When a story is summarized, each storyteller is likely to come up with a different outline. However, the most important points usually survive the majority of summarizations.
Stories can be similar. The practitioners of both narrative and problem solving rely on patterns to reduce choice and complexity.
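The quest-as-graph idea above is easy to play with in code. Here’s a minimal sketch in which every route from the starting point to the destination is one possible telling of the story; the node names follow the A/B/C/F example, and the tiny graph itself is invented for illustration:

```python
def all_paths(graph, start, goal, path=None):
    """Enumerate every route through the quest graph; each route is
    one possible linearization of the story."""
    path = (path or []) + [start]
    if start == goal:
        return [path]
    routes = []
    for nxt in graph.get(start, []):
        if nxt not in path:          # no retracing steps within one telling
            routes.extend(all_paths(graph, nxt, goal, path))
    return routes

# A = what we know, B = the hypothesis, C and F = intermediate statements.
quest = {"A": ["C", "F"], "C": ["B", "F"], "F": ["B"]}
paths = all_paths(quest, "A", "B")   # three distinct tellings of the same quest
```

Which of those paths makes the best story is exactly the storyteller’s choice the summary describes.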
So what does this have to do with anything?
Another book I cannot make anyone but myself read is “Interaction Design for Complex Problem Solving: Developing Useful and Usable Software” by Barbara Mirel. The book is as voluminous as its title suggests, 397 pages, of which I made it through page 232 in four years. This probably doesn’t entice you to read the book. Luckily, there is a one-page paper, “Visualizing complexity: Getting from here to there in ill-defined problem landscapes,” from the same author on the very same subject. If this is still too much to read, may I offer you my summary?
Mainly, cut and paste from Mirel’s text:
1. Complex problem solving is an exploration across rugged and at times uncharted problem terrains. In that terrain, analysts have no way of knowing in advance all moves, conditions, constraints or consequences. Problem solvers take circuitous routes through “tracts” of tasks toward their goals, sometimes crisscrossing the landscape and jumping across foothills to explore distant knowledge, to recover from dead ends, or to reinvigorate inquiry.
2. Mountainscapes are effective ways to model and visualize complex inquiry. These models stress relationships among parts and do not reduce problem solving to linear, rule-based procedures or workflows. In a mountainscape, the spaces are as important to coherence as the paths. Selecting the right model affects the design of the software and whether complex problem solvers experience useful support. Models matter.
Complex problems can neither be solved nor supported with linear or pre-defined methods. Complex problems have many possible heuristics, indefinite parameters, and ranges of outcomes rather than one single right answer or stopping point.
3. Certain types of complex problems recur in various domains and, for each type, analysts across organizations perform similar patterns of inquiry. Patterns of inquiry are the regularly repeated sets of actions and knowledge that have a successful track record in resolving a class of problems in a specific domain.
And so what does this have to do with anything?
A colleague of mine, Dan Workman, once commented on a sales demo of a popular visual analytics tool. “Somehow,” he said, “the presenter drills down here, pivots there, zooms out there, and, miraculously, arrives at that view of the report where the answer to his question lies. But how did he know to go there? How would anyone know where the insight hides?”
His words stuck with me.
Imagine a simple visualization that shows the revenue trend of a business by region, by product, and by time. Let’s pretend the business operates in 4 regions, sells 4 products, and has been in business for 4 years. The combination of these parameters results in 64 views of the sales data. Now imagine that each region is made up of hundreds of countries. If the visualization allows users to view sales by country, there will be thousands and thousands of additional views. In the real world, a business might also have many more products. The number of possible views could easily exceed what a human being can manually look at, and only some views (alone or in combination) possibly contain insight. But which ones?
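The arithmetic here is just a product of dimension sizes, which is why the view count explodes so quickly. A quick sketch (the numbers beyond the 4 × 4 × 4 example are illustrative, not from any real business):

```python
from math import prod

def views(*dimension_sizes):
    """Each view is one combination of parameter values, so the
    count of views is the product of the dimension sizes."""
    return prod(dimension_sizes)

small = views(4, 4, 4)      # 4 regions x 4 products x 4 years = 64 views
large = views(150, 40, 4)   # drill to ~150 countries and 40 products: 24,000 views
```

Adding one more dimension, or refining an existing one, multiplies rather than adds, so no amount of patience lets a user inspect every view by hand.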
I have yet to see an application that supports users in finding the insightful views of a visualization. Often users won’t even know where to start.
So here is the connection between Part 1, Part 2, and Part 3. It’s the model. Visualization exploration can be represented as a graph (in the mathematical sense), where the points are the views and the connections are navigations between views. Users then trace a path through the graph as they explore new results.
From here, a certain design research agenda comes to mind:
1. The world needs interfaces to navigate the problem mountainscapes: keeping track of places visited, representing branches and loops in the path, enabling users to reverse steps, etc.
2. The world needs an interface for linearizing a completed quest into a story (research into presentation), and outlining stories.
3. The world needs software smarts that can collect the patterns of inquiry and use them to guide problem solvers through the mountainscapes.
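Item 1 of this agenda can be sketched very simply: record each visited view as a node in a tree, so that backtracking and branching aren’t flattened away the way they are in a linear browser-style history. A minimal sketch, with invented view names:

```python
class ExplorationTrail:
    """Record each visited view as a tree node; backtracking and
    branching are preserved instead of flattened into a list."""
    def __init__(self, root):
        self.parent = {root: None}
        self.current = root

    def visit(self, view):
        if view not in self.parent:
            self.parent[view] = self.current   # new branch off the current view
        self.current = view

    def back(self):
        """Reverse a step, without losing the abandoned branch."""
        if self.parent[self.current] is not None:
            self.current = self.parent[self.current]
        return self.current

    def path_to_current(self):
        """Linearize the trail from the start to where we are now."""
        path, node = [], self.current
        while node is not None:
            path.append(node)
            node = self.parent[node]
        return path[::-1]

trail = ExplorationTrail("overview")
trail.visit("by-region")
trail.visit("by-region/by-product")
trail.back()                          # dead end; reverse a step
trail.visit("by-region/by-year")      # branch off the same view
```

The `path_to_current` linearization is exactly the story-outlining problem from item 2: one path through the graph, chosen after the fact.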
So I hope that from this agenda, Part 4 will eventually follow . . . .
That last D is Development, if that’s unclear. Anyway, like Thao says, Happy Thanksgiving to those who celebrate it, and for those who don’t, enjoy the silence in our absence. To Thao’s question, I’m going with the Internet. Yes, it’s a gadget because it’s a series of tubes, not a big truck.
Find the comments to add the gadget for which you are most thankful.
Tomorrow is Thanksgiving and this seems like a good time to put my voice out on The AppsLab (@theappslab). I’m Thao, and my Twitter (@thaobnguyen) tagline is “geek mom.” I’m a person of few words and those two words pretty much summarize my work and home life. I manage The AppsLab researchers and designers. Jake welcomed us to the AppsLab months ago here, so I’m finally saying “Thank you for welcoming us!”
As we reflect on all the wonderful things in our lives, personal and professional, I sincerely want to say I am very thankful for having the best work family ever. I was deeply reminded of that early this week, when I had a little health scare at work and was surrounded by so much care and support from my co-workers. Enough of the emotional stuff, and onto the fun gadget stuff . . . .
My little health scare led me to a category of devices that hadn’t hit my radar before: potentially life-saving personal medical apps. I’ve been looking at wearables, fitness devices, healthcare apps, and the like for a long time now, but there is a class of medical-grade devices (at least, recommended by my cardiologist) that is potentially so valuable in my life, as well as the lives of those dear to me . . . AliveCor. It essentially turns your smartphone into an ECG device so you can monitor your heart health anytime and share it with your physician. Sounds so cool!
Back to giving thanks, I’m so thankful for all the technology and gadgets of today – from the iPhone and iPad that lets me have a peaceful dinner out with the kids to these medical devices that I’ll be exploring now. I want to leave you with a question, “What gadget are you most thankful for?”
As a team-building activity for our newly merged team of research, design and development, someone, who probably wishes to remain nameless, organized a glass mosaic and welding extravaganza at The Crucible in Oakland.
We split into two teams, one MIG welding, the other glass breaking, and here’s the result.
All-in-all an interesting and entertaining activity. Good times were had by all, and no one was cut or burned, so bonus points for safety.
Editor’s note: Here’s a repost of a wonderful write-up of an event we did a couple weeks ago, courtesy of Friend of the ‘Lab Karen Scipi (@KarenScipi).
What Karen doesn’t mention is that she organized, managed and ran the event herself. Additional props to Ultan (@ultan) on the idea side, including the naming, Sandra Lee (@SandraLee0415) on the execution side and to Misha (@mishavaughan) for seeing the value. Without the hard work of all these people, I’m still just talking about a great idea in my head that I’m too lazy to execute. You guys all rock.
Enjoy the read.
By Karen Scipi
It was an exciting event here at Oracle Headquarters as our User Experience AppsLab (@theappslab) Director Jake Kuramoto (@jkuramot) recently hosted an internal design jam called Shape and ShipIt. Fifteen top-notch members of the newly expanded team got together for two days with a packed schedule to research and innovate cutting-edge enterprise solutions, write use cases, create wireframes, and build and code solutions. They didn’t let us down.
The goal: Collaborate and rapidly design practical, contextual, mobile Oracle Applications Cloud solutions that address real-world user needs and deliver enterprise solutions that are streamlined, natural, and intuitive user experiences.
The result: Success! Four new stellar user experience solutions were delivered to take forward to product development teams working on future Oracle Application Cloud simplified user interface releases.
While I cannot share the concepts or solutions with you as they are under strict lock and key, I can share our markers of the event’s success with you.
The event was split into two days:
Day 1: A “shape” day during which participants received invaluable guidance from Bill Kraus on the role of context and user experience, then researched and shaped their ideas through use cases and wireframes.
Day 2: A “ship” day during which participants coded, reviewed, tested, and presented their solutions to a panel of judges that included Jeremy Ashley (@jrwashley), Vice President of the Oracle Applications User Experience team.
It was a packed two days full of ideas, teamwork, and impressive presentations. The participants formed four small teams that comprised managers, architects, researchers, developers, and interaction designers whose specific perspectives proved to be invaluable to the tasks at hand. Their blend of complementary skills enabled the much-needed collaboration and innovation. Although participants were charged with a short timeframe for such an assignment, they were quick to adapt and refine their concepts and produce solutions that could be delivered and presented in two days. Individual team agility was imperative for designing and delivering solutions within a two-day timeframe.
Participants were encouraged to brainstorm and design in ways that suited them. Whether it was sitting at tables with crayons, paper, notebooks and laptops, or hosting walking meetings outside, the participants were able to discuss concepts and ideate in their own, flexible ways.

As with all of our simplified user interface design efforts, participants kept a “context produces magic” perspective front and center throughout their activities. In the end, team results yielded responsive, streamlined, context-driven user experience solutions that were simple yet powerful.
Healthy “brain food” and activity breaks were encouraged, and both kept participants engaged and focused on the important tasks at hand. Salads, veggies, dips, pastas, wraps, and sometimes a chocolate chip cookie (for the much needed sugar high) were on the menu. The activity break of choice was an occasional competitive game of table tennis at the Oracle Fitness Center, just a stone’s throw from the event location. The balance of think-mode and break-mode worked out just right for participants.

Our biggest marker of success, though, was how wrong we were. Yes. Wrong. While we expected one team’s enterprise solution to clearly stand out from among all of the others, we were pleasantly surprised as all four were equally impressive, viable, and well-received by the design jam judges. Four submissions, four winners. Nice job! Stay tuned to the Usable Apps Blog to learn more about such events and what happens to the innovative user experiences that emerge!
This year, some of us at the AppsLab attended the Samsung Developer Conference, aka #SDC2014. Last year was Samsung’s first attempt, and we were there, too. The quality and caliber of the presentations increased tenfold from last year. Frankly, Samsung is making it really hard to resist joining their ecosystem.
Here are some of the trends I observed:
Wearables and Health:
There was a huge emphasis on Samsung’s commitment to wearable technology. They released a new Tizen-based smartwatch (Samsung Gear S), as well as a biometric reference design, hardware and software, called SIMBAND. Along with their wearable strategy, they also released S.A.M.I, a cloud repository to store all this data. It all ties together with their vision of “Voice of the Body.”
During the second-day keynote we got to hear from Mounir Zok, Senior Sports Technologist of the United States Olympic Committee. He told us how wearable technology is changing the way Olympic athletes train. Only a couple of years ago, athletes still had to go to a lab and “fake” actual activities to get feedback. Now they can get real data in the field, thanks to wearable technology.
Virtual Reality:
Samsung released the Gear VR in partnership with Oculus. These goggles only work with a Galaxy Note 4 mounted in the front. The gaming experiences with this VR device are amazing, but they are also exploring other use cases like virtual tourism and virtual movie experiences. They also released “Project Beyond,” a 3D, 360-degree spherical-view camera.
IoT – Home Automation:
Samsung is betting big on IoT and home automation, and they are putting their money where their mouth is by acquiring SmartThings. The SmartThings platform is open source and can integrate with a myriad of other home automation products. They showcased a smart home powered by the SmartThings platform.
I actually really like their new Galaxy Note Edge phablet. Samsung is showing true innovation with the “edge” part of the device. It has its own SDK, and it feels great in the hand!
Overall, I’m pretty impressed with what Samsung is doing. It seems like their spaghetti-on-the-wall approach (throwing a bunch of spaghetti and seeing what sticks) is starting to pay off. Their whole UX across devices looks seamless. And in my humble opinion, they are getting ready to take off on their own, without having to use Android for their mobile devices. Tizen keeps maturing, but I shall leave that for another post!
Please feel free to share your experience with Samsung devices as well!
Editorial Note: This is a guest post by friend of the ‘Lab and colleague DJ Ursal. Also be sure to check out our Hackathon entry here:
EchoUser (@EchoUser), in partnership with SpaceGAMBIT, Maui Makers, the Minor Planet Center, NASA, the SETI Institute, and Further by Design, hosted an Asteroid Hackathon. The event was in response to the NASA Grand Challenge, “focused on finding all asteroid threats to human populations and knowing what to do about them.”
I had a wonderful opportunity to participate in the Asteroid Hackathon last week. My team’s name was NOVA. Our team comprised four members: DJ Ursal, Kris Robison, Daniel Schwartz, and Raj Krishnamurthy.
We were given live data from NASA and the Minor Planet Center site and had literally just five hours to put together a working prototype and solution to the asteroid big data problem. We created a web application (which works not only on your Mac or PC but also on your iPad and the latest Nexus 7 Android devices) to help scientists, astronomers, and anyone who is interested in asteroids discover, learn, and share information in a fun and interactive way.
Our main theme was Finding Asteroids Before They Find Us. The goal was to help discover, learn about, and share asteroid information to increase awareness within the community. We created an interactive web app that let users apply chart filters to explore the risk of future impact with Earth, as well as each asteroid’s distance from Earth, absolute brightness, and rotation. Users could click and drag on any chart to filter, combining filters across multiple dimensions to explore, discover interesting facts, and share asteroid data with friends and the community. We also made use of Major Tom, the astronaut from David Bowie’s song “Space Oddity,” which depicts an astronaut who casually slips the bonds of the world to journey beyond the stars. Users could post questions to Major Tom and could also play his song.
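The click-and-drag cross-filtering behind those charts boils down to intersecting range filters across several numeric dimensions. Here's a minimal Python sketch of that idea; the field names and sample values are hypothetical, not the actual NASA or Minor Planet Center schema:

```python
# Each asteroid record carries several numeric dimensions to filter on.
asteroids = [
    {"name": "2014 AA", "distance_au": 0.002, "brightness": 22.1, "rotation_hrs": 4.2},
    {"name": "2013 TX68", "distance_au": 0.015, "brightness": 19.5, "rotation_hrs": 12.7},
    {"name": "99942 Apophis", "distance_au": 0.098, "brightness": 19.7, "rotation_hrs": 30.4},
]

def apply_filters(records, filters):
    """Keep records whose values fall inside every active (lo, hi) range.

    `filters` maps a dimension name to a (lo, hi) pair, mimicking a
    click-and-drag brush on that dimension's chart.
    """
    return [
        r for r in records
        if all(lo <= r[dim] <= hi for dim, (lo, hi) in filters.items())
    ]

# Brushing two charts at once narrows the selection multidimensionally.
close_and_bright = apply_filters(
    asteroids, {"distance_au": (0.0, 0.02), "brightness": (18.0, 20.0)}
)
```

Dragging a new brush on a third chart would just add another entry to the `filters` dict; clearing a brush removes its entry.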
The single most important element in WINNING this hackathon was team composition: having a team that works well together. Collaboration and communication were the two most critical personal skills demanded of all members, as time was limited and coordination of the utmost importance.
Winning Team NOVA: DJ Ursal, Kris Robison, Daniel Schwartz, Raj Krishnamurthy
A couple weeks ago Jeremy Ashley (@jrwashley), Bill Kraus, Raymond Xie and I participated in the Asteroid Hackathon hosted by @EchoUser. The main focus was “to engage astronomers, other space nerds, and the general public, with information, not just data.”
As you might already know, we here at the AppsLab are big fans of hackathons, as well as ShipIt days or FedEx days. The ability to put our collective minds together and create something in a short amount of time is truly amazing. It also helps keep us on our toes, technically and creatively.
Our team built what we called “The Daily Asteroid.” The idea behind our project was to highlight the profile of the asteroid with the current date’s closest approach to Earth, using near Earth object (NEO) data; in other words, to show which asteroid is closest to Earth today. A user could “favorite” today’s asteroid and start a conversation with other users about it, using a social network like Twitter.
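Picking the day's closest asteroid from a NEO feed is essentially a min() over that day's close-approach records. A small sketch of the selection step, with made-up record fields standing in for the real feed format:

```python
from datetime import date

# Hypothetical close-approach records for a given day; distances in
# lunar distances (LD), a common unit for near-Earth misses.
neo_feed = {
    "2014-11-20": [
        {"name": "2014 WC", "miss_distance_ld": 1.3},
        {"name": "2014 WU200", "miss_distance_ld": 0.4},
        {"name": "2014 WX4", "miss_distance_ld": 8.9},
    ],
}

def daily_asteroid(feed, day):
    """Return the NEO with the smallest miss distance for the given day,
    or None if the feed has no close approaches that day."""
    approaches = feed.get(day.isoformat(), [])
    if not approaches:
        return None
    return min(approaches, key=lambda a: a["miss_distance_ld"])

today = daily_asteroid(neo_feed, date(2014, 11, 20))
```

The “favorite and discuss” piece would then just key the Twitter conversation off the selected asteroid's name.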
We also added the ability to change the asteroid’s properties (size, type, velocity, angle) and play out a scenario to see what damage it could cause if it hit the Earth. And to finish up, we created an Asteroid Hotline using Twilio (@twilio), which you can call to get the latest NEO info from your phone!
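A Twilio hotline like that is typically a small web endpoint that answers Twilio's incoming-call callback with TwiML, the XML Twilio reads aloud to the caller via its `<Say>` verb. A sketch of just the TwiML-building step, with a hard-coded asteroid standing in for the live NEO lookup:

```python
from xml.sax.saxutils import escape

def neo_twiml(asteroid_name, miss_distance_ld):
    """Build a TwiML <Response> that speaks today's closest-approach info.

    Twilio fetches this XML when a call comes in and reads the <Say>
    text to the caller. The asteroid values here are placeholders for
    a real NEO feed lookup.
    """
    message = (
        f"Today's closest asteroid is {asteroid_name}, "
        f"passing at {miss_distance_ld} lunar distances from Earth."
    )
    return (
        '<?xml version="1.0" encoding="UTF-8"?>'
        f"<Response><Say>{escape(message)}</Say></Response>"
    )

print(neo_twiml("2014 WU200", 0.4))
```

In a real deployment this string would be returned from the webhook URL configured on the Twilio phone number.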
We were lucky to be awarded third place, or “Best Engagement,” and we had a blast doing it. Considering the small amount of time we had, we came away really proud of our results.
There’s a post over on VoX about a new OAUX lab at Oracle HQ, the Cloud UX Lab.
Finished just before OOW in September, this lab is a showcase for OAUX projects, including a few of ours.
The lab reminds me of a spacecraft from the distant future, the medical bay or the flight deck. It’s a very cool place, directly inspired and executed by our fearless leader, Jeremy Ashley (@jrwashley), an industrial designer by trade.
I actually got to observe the metamorphosis of this space from something that felt like a doctor’s office waiting room into the new hotness. Looking back on those first meetings, I never expected it would turn out so very awesome.
Anyway, the reason I got to tag along on this project is that our team will be filling the control room for this lab with our demos. Noel (@noelportugal) and Jeremy have a shared vision for that space, which will be a great companion piece to the lab and equally awesome.
So, if you’re at Oracle HQ, book a tour and stop by the new Cloud UX Lab, experience the new hotness and speculate on what Noel is cooking up behind the glass.
Jawbone announced the Up3 today, reportedly its most advanced fitness tracker to date.
As with all fitness trackers, the Up3 has an accelerometer, but it also has sensors for measuring skin and ambient temperature, as well as something called bioimpedance. The data collected by the Up3 are used by a new feature called Smart Coach.
You can imagine what the Smart Coach does. It sounds like a cool, possibly creepy, feature.
This post is not about the Up3.
This post is about my journey into the dark heart of the quantified self. The Up3 has just reminded me to coalesce my thoughts.
Misfit calculates activity based on points, and my personal goal of 1,000 points was relatively easy to reach every day, even for someone who works from home. What I realized quickly was that the Shine pushed me to chase points, not activity.
The Shine uses its accelerometer to measure activity, so depending on where I wore it on my person, a run could be worth more points. This isn’t unique to the Shine. I’ve seen people spinning at the gym wearing their fitness trackers on their ankles.
As the weeks passed, I found myself avoiding activities that didn’t register a lot of points, definitely not good behavior, and even though my goal was 1,000 points, I avoided raising it for fear of missing my daily goal-achievement dopamine high.
Then, in mid-summer, Misfit dropped an update that added some new game mechanics, and one day my Shine app happily informed me that I’d hit my goal 22 days in a row.
This streak was the beginning of the end for me.
On the 29th day of my streak, the battery died. I replaced it; crisis averted, streak intact. Then, later that day, the Shine inexplicably died. I tried several new batteries and finally had to contact support.
All the while, I worried about my streak. I went to the gym, but it felt hollow and meaningless without the tangible representation, the coaching as it were, from my Shine.
This is not a good look.
Misfit replaced my Shine, but in the days that elapsed, during my detox, I decided to let it go. Turns out the quantified self isn’t for obsessive, overly-competitive personality types like me.
And I’m not the only one in this group.
In September, I read an article called Stepping Out: Living the Fitbit Life, in which the author, David Sedaris, describes a similar obsession with his Fitbit. As I read it, I commiserated, but I also felt a little jealous of the level of his commitment. This dude makes me look like a rank amateur.
Definitely worth a read.
Anyway, this is not in any way meant to be an indictment of the Shine, Fitbit, Jawbone, or any fitness tracker. Overall, these devices offer people a positive and effective way to reinforce healthy behavior and habits.
But for people like me, they lead to unanticipated side effects. As I read about the Up3, its sensors, and Smart Coach, all of which sound very cool, I had to remind myself of the bad places I went with the Shine.
And the colloquial, functionally-incorrect but very memorable, definition of insanity.
In Part 2, when I get around to it, I’ll discuss the flaws in the game mechanics these companies use.
Find the comments.
I have both Google Glass and Android Wear (Samsung Gear Live, Moto 360), and oftentimes I wear them together. People always ask: “How do you compare Google Glass and Android watches?” Let me address a couple of viewpoints here. I would like to talk about the Apple Watch too, but since it has not been officially released yet, let’s just say that shape-wise it is square and looks like a Gear Live, and its features seem pretty similar to Android Wear, with the exception of an attempt at more playful colors and features. Let’s discuss it more once it is out.
I was in the first batch of Google Glass Explorers and got my Glass in mid-2013. In the middle of this year, I first got the Gear Live, then later the Moto 360. I always find it peculiar that Glass is the older technology while Wear is the newer one. Should it not be easier to design a smartwatch before glassware?
I do find a lot of similarities between Glass and Wear. The fundamental similarity is that both are Android devices. They are voice-input enabled and show you notifications. You may install additional Android applications to personalize your experience and maximize your usage. I see these as the true values of wearables.
Differences? Glass shows a lot of capabilities that Android Wear lacks at the moment. The things that probably matter most to people are sound, phone calls, video recording, picture taking, a hands-free heads-up display, GPS, and Wi-Fi. Unlike Android Wear, Glass can be used standalone; Android Wear is only a companion gadget and has to be paired with a phone.
Is Glass superior, then? Android Wear does provide a better touch-based interaction, compared to swiping on the side of the Glass frame. You can also play simple games like Flopsy Droid on your watch. Pedometers and heart rate sensors are also commonly included. Glass, meanwhile, tends to overheat easily. Water resistance also plays a role here: you would almost never want to get your Glass wet at all, while Android Wear is water-resistant to a certain degree. And when you are charging your watch at night, it also serves as a bedside clock.
For me personally, although I have owned Glass longer than Wear, I have to say I prefer Android Wear over Glass, for a couple of reasons. First, there is the significant price gap ($1500 vs. $200). Second, especially when you add prescription lenses to Glass, it gets heavy and hurts the ear when worn for an extended period of time. Third, I do not personally find the additional features offered by Glass useful in my daily activities; I do not normally take pictures other than at specific moments or while I am traveling.
I also find that even though Glass is now publicly available within the US, it is still perceived as an anti-social gadget; the term is defined in the Urban Dictionary as well. Most of the people I know who own Glass do not wear it themselves, for various reasons. I believe improving the marketing and advertising strategy for Glass may help.
Gadget preference is personal. What’s yours?
If you’ve read here for more than a hot minute, you’ll know that I love me some data visualization.
This love affair dates back to when Paul (@ppedrazzi) pointed me to Hans Rosling’s (@hansrosling) first TED talk. I’m sure Hans has inspired an enormous number of people by now, judging by the eight million plus views his TED talk has garnered. Sure, those aren’t unique views, but even so.
There’s an interesting meta-project: visualize the people influenced by various visualization experts, like a coaching tree or something.
As luck would have it, one area of specialization of our newest team members is, wait for it, data visualization.
Last week, I got to see them in action in a full-day workshop on data visualization, which was eye-opening and very informative.
I’m hoping to get a few blog posts out of them on the subject, and while we wait, I wanted to share some interesting examples we’ve been throwing around in email.
I started the conversation with xkcd because, of course I did. Randall Munroe’s epic comic isn’t usually mentioned as a source for data visualizations, but if you read it, you’ll know that he has a knack for exactly that. Checking out the Google Image search for “xkcd data visualization” reminded me of just how many graphs, charts, maps, etc. Randall has produced over the years.
I also discovered that someone has created a D3 chart library as an homage to the xkcd style.
I probably spent 10 minutes zooming into Pixels, trying to find the bottom; being small-minded, I gave up pretty early on Click and Drag, assuming it was small. It’s not.
How much time did you spend, cough, waste, on these?
During our conversation, a couple of other interesting examples came back to me, both worth sharing.
First is Art of the Title, dedicated to the opening credits of various films. In a very specific way, opening credits are data visualizations; they set the mood for the film and name the people responsible for it.
Second is Scale of the Universe, which is self-explanatory and addictive.
So, there you go. Enjoy investigating those two and watch this space for more visualization content.
And find the comments.
Editor’s note: Hey look, a new author. Here’s the first post from Raymond Xie, who joined us nearly a year ago. You may remember him from such concept demos as geo-fencing or Pebble watchface. Raymond has been busy at work and wants to share the work he did with telekinesis. Or something, you decide. Enjoy.
You put on a headband, stare at a ball, tilt your head back and forth and left and right . . . and the ball navigates through a simple maze, rushing, wavering, changing colors, and finally hitting the target.
That is the latest creation out of the AppsLab: the Muse Sphero Driver. When it was first shown at the OAUX Exchange during OOW, it amused many people, who called it a “mind control” game.
Technically, it is your brainwave data (electroencephalography, or EEG) driving the Sphero, adjusting its speed and changing its color on a spectrum from RED to BLUE (RED: fast, active; BLUE: slow, calm), while your head gestures (via a 3D accelerometer, or ACC) control the direction of the Sphero’s movement. Whether or not you call that “mind control” is up to your own interpretation.
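The speed-and-color side of that mapping can be sketched as a simple interpolation from an EEG "activity" score to the red-blue spectrum, with the accelerometer tilt folded into a heading angle. The normalized 0-to-1 activity score is an assumption for illustration; deriving it from the raw Muse EEG bands is the hard part:

```python
import math

def eeg_to_sphero(activity):
    """Map a normalized EEG activity score (0.0 calm .. 1.0 active)
    to a Sphero speed byte and an RGB color between blue (calm)
    and red (active)."""
    activity = max(0.0, min(1.0, activity))  # clamp to [0, 1]
    speed = int(activity * 255)              # more active -> faster
    red = int(activity * 255)                # more active -> redder
    blue = 255 - red                         # calmer -> bluer
    return speed, (red, 0, blue)

def acc_to_heading(tilt_x, tilt_y):
    """Turn left/right (x) and back/forth (y) head tilt into a
    0-359 degree heading for the Sphero's roll command."""
    return int(math.degrees(math.atan2(tilt_x, tilt_y))) % 360

# A calm reading rolls the ball slowly, tinted toward blue.
speed, color = eeg_to_sphero(0.1)
heading = acc_to_heading(0.0, 1.0)  # head tilted forward -> roll ahead
```

Each sensor frame would feed one (speed, color, heading) triple to the Sphero's roll and color APIs.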
You kind of drive the ball with your mind, but mostly with brainwave noise rather than conscious thought. It is still too early to derive accurate “mind control” from the EEG data of any regular person, for two reasons:
1. For EEG at the scalp level, the signal-to-noise ratio is very poor;
2. The correlation between EEG and mind activity has yet to be established.
But it does open up a dialog in HCI, such as voice control vs. mind control (silence); or in robotics: instead of asking the machine to “see”/“understand,” we can “see”/“understand” and impersonate it with our mind and soul.
While it is difficult to read out the “mind” (any mind activity) transparently, we think it is quite doable to map your mind into certain states, and use the “state” as a command indirectly.
We may do something around this area. So stay tuned.
Meanwhile, you can start practicing Yoga or Zen, to get a better signal-to-noise ratio and to set your mind into a certain state with ease.