Update: I have now “hacked” the API to control Hue lights and initiate a phone call with Twilio. Check it out here: https://www.youtube.com/watch?v=r58ERvxT0qM
Last November Amazon announced a new kind of device. Part speaker, part personal assistant, and they called it Amazon Echo. If you saw the announcement, you might have also seen their quirky infomercial.
The parodies came hours after the announcement, and they were funny. But dismissing this as just a Siri/Cortana/Google Now copycat might miss the potential of this “always listening” device. To be fair, this is not the first device that can do this. I have a Moto X that has an always-on chip waiting for a wake word (“OK Google”), and Google Glass does the same thing (“OK Glass.”) But the fact that I don’t have to hold the device, be near it, or push a button (Siri) makes this cylinder kind of magical.
It is also worth noting that NONE of these devices are really “always-listening-and-sending-all-your-conversations-to-the-NSA”; in fact, the “always listening” part is local. Once you say the wake word, though, you had better make sure you don’t spill the beans for the next few seconds, which is the period during which the device listens and does an STT (speech-to-text) operation in the Cloud.
We can all see what Amazon is up to and why this is good for them. Right off the bat you can buy songs with a voice command. You can also add “stuff” to your shopping list, which reminds me of a similar product Amazon had last year, Amazon Dash, which unfortunately is only available in selected markets. The fact is that Amazon wants us to buy more from them, and for some of us that is awesome, right? Prime, two-day shipping, drone delivery, etc.
I have been eyeing these “always listening” devices for a while. The Ubi ($300) and Ivee ($200) were my two other choices. Both have had mixed reviews, and both have yet to deliver on the promise of an SDK or API. Amazon Echo doesn’t have an SDK yet either, but they placed a link to show the Echo team your interest in developing apps for it.
The promise of a true artificial intelligence assistant, or personal contextual assistant (PCA), is coming soon to a house or office near you. Which brings me to my true interest in Amazon Echo: the possibility of creating a “Smart Office” where the assistant will anticipate my day-to-day tasks, set up meetings, remind me of upcoming events, and analyze and respond to email and conversations, all tied to my Oracle Cloud of course. The assistant will also control physical devices in my house/office: “Alexa, turn on the lights,” “Alexa, change the temperature to X,” etc.
All in all, it has been fun to request holiday songs around the kitchen and dining room (“Alexa, play Christmas music.”) My kids are having a field day trying to ask the most random questions. My wife, on the other hand, is getting tired of the constant interruptions to the music, but I guess that’s the novelty. We shall see if my kids are still friendly with Alexa in the coming months.
In my opinion, the people dismissing Amazon Echo will be the same people who said: “Why do I need a music player on my phone? I already have ALL my music collection on my iPod” (iPhone naysayers circa 2007), or “Why do I need a bigger iPhone? That ’pad thing is ridiculously huge!” (iPad naysayers circa 2010.) And now I have already heard, “Why do I want a device that is always connected and listening? I already have Siri/Cortana/Google Now” (Amazon Echo naysayers circa 2014.)
Agree, disagree? Let me know.
Back in the early 90s I ventured into virtual reality and was sick for a whole day afterwards.
We have since learned that people become queasy when their visual and vestibular systems get out of sync. You have to get the visual response lag below a certain threshold. It’s a very challenging technical problem which Oculus now claims to have cracked. With ever more sophisticated algorithms and ever faster processors, I think we can soon put this issue behind us.
Anticipating this, there has recently been a resurgence of interest in VR. Google’s Cardboard project (and Unity SDK for developers) makes it easy for anyone to turn their smartphone into a VR headset just by placing it into a cheap cardboard viewer. VR apps are also popping up for iPhones and 3D side-by-side videos are all over YouTube.
Some of my AppsLab colleagues are starting to experiment with VR again, so I thought I’d join the party. I bought a cheap cardboard viewer at a bookstore. It was a pain to put together, and my iPhone 5S rattles around in it, but it worked well enough to give me a taste.
I downloaded an app called Roller Coaster VR and had a wild ride. I could look all around while riding and even turn 180 degrees to ride backwards! To start the ride I stared intently at a wooden lever until it released the carriage.
My first usability note: between rides it’s easy to get turned around so that the lever is directly behind you. The first time I handed it to my wife she looked right and left but couldn’t find the lever at all. So this is a whole new kind of discoverability issue to think about as a designer.
Despite appearances, my roller coaster ride (and subsequent zombie hunt through a very convincing sewer) is research. We care about VR because it is an emerging interaction that will sooner or later have significant applications in the enterprise. VR is already being used to interact with molecules, tumors, and future buildings, use cases that really need all three dimensions. We can think of other use cases as well; Jake suggested training for service technicians (e.g. windmills) and accident re-creation for insurance adjusters.
That said, both Jake and I remain skeptical. There are many problems to work through before new technology like this can be adopted at an enterprise scale. Consider the idea of immersive virtual meetings. Workers from around the world, in home offices or multiple physical meeting rooms, could instantly meet all together in a single virtual room, chat naturally with each other, pick up subtle facial expressions, and even make presentations appear in mid air at the snap of a finger. This has been a holy grail for decades, and with Oculus being acquired by Facebook you might think the time has finally come.
Not quite yet. There will be many problems to overcome first, not all of them technical. In fact VR headsets may be the easiest part.
A few of the other technical problems:
- Bandwidth. I still can’t even demo simple animations in a web conference because the U.S. internet system is too slow. I could do it in Korea or Sweden or China or Singapore, but not here anytime soon. Immersive VR will require even more bandwidth.
- Cameras. If you want to see every subtle facial expression in the room, you’ll need cameras pointing at every face from every angle (or at least one 360 camera spinning in the center of the table). For those not in the room you’ll need more than just a web cam pointing at someone’s forehead, especially if you want to recreate them as a 3D avatar. (You’ll need better microphones too, which might turn out to be even harder.) This is technically possible now, Hollywood can do it, but it will be a while before it’s cheap, effortless, and ubiquitous.
- Choreography. Movie directors make it look easy, and even as individuals we’re pretty good about scanning a crowded room and following a conversation. But in a 3-dimensional meeting room full of 3-dimensional people there are many angles to choose from every second. We will expect our virtual eyes to capture at least as much detail as our real eyes that instinctively turn to catch words and expressions before they happen. Even if we accept that any given participant will see a limited subset of what the overall system can see, creating a satisfying immersive presence will require at least some artificial intelligence. There are probably a lot of subtle challenges like this.
And a non-technical problem:
- Privacy. Any virtual meeting which can be transmitted can also be recorded and viewed by others not in the meeting. This includes off-color remarks (now preserved for the ages or at least for future office parties), unflattering camera angles, surreptitious nose picking, etc. We’ve learned from our own research that people *love* the idea of watching other people but are often uncomfortable about being watched themselves. Many people are just plain camera shy – and even less fond of microphones. Some coworkers are uncomfortable with our team’s weekly video conferences. “Glasshole” is now a word in the dictionary – and glassholes sometimes get beaten up.
So for virtual meetings to happen on an enterprise scale, all of the above problems will have to be solved and some of our attitudes will have to change. We’ll have to find the right balance as a society – and the lawyers will have to sign off on it. This may take a while.
But that doesn’t mean our group won’t keep pushing the envelope (and riding a few virtual roller coasters). We just have to balance our nerdish enthusiasm with a healthy dose of skepticism about the speed of enterprise innovation.
What are your thoughts about the viability of virtual reality in the enterprise? Your comments are always appreciated!
It’s difficult to make a link post seem interesting. Anyway, I have some nuggets from the Applications User Experience desk plus bonus robot video because it’s Tuesday.
Back to Basics. Helping You Phrase that Alta UI versus UX Question
From Coffee Table to Cloud at a Glance: Free Oracle Applications Cloud UX eBook Available
Next up, another byte from Ultan on a new and free eBook (registration required) produced by OAUX called “Oracle Applications Cloud User Experiences: Trends and Strategy.” If you’ve seen our fearless leader, Jeremy Ashley (@jrwashley), present recently, you might recognize some of the slides.
Oh and if you like eBooks and UX, make sure to download the Oracle Applications Cloud Simplified User Interface Rapid Development Kit.
Today, We Are All Partners: Oracle UX Design Lab for PaaS
And hey, another post from Ultan about an event he ran a couple weeks ago, the UX Design Lab for PaaS.
Friend of the ‘Lab, David Haimes (@dhaimes), and several of our team members, Anthony (@anthonyslai), Mark (@mvilrokx), and Tony, participated in this PaaS4SaaS extravaganza, and although I can’t discuss details, they built some cool stuff and had oodles of fun. Yes, that’s a specific unit of fun measurement.
Amazon’s robotic fulfillment army
From kottke.org, a wonderful ballet of Amazon’s fulfillment robots.
Editor’s note: Here’s another new post from a new team member. Shortly after the ‘Lab expanded to include research and design, I attended a workshop on visualizations hosted by a few of our new team members, Joyce, John and Julia.
The event was excellent. John and Julia have done an enormous amount of critical thinking about visualizations, and I immediately started bugging them for blog posts. All the work and research they’ve done needs to be freed into the World so anyone can benefit from it. This post includes the first three installments, and I hope to get more. Enjoy.
I still haven’t talked anyone into reading Proofs and Stories, and god knows I’ve tried. If you read it, let me know. It is written by Apostolos Doxiadis, the author of Logicomix, if that makes the idea of reading Proofs and Stories more enticing. If not, I can offer you my summary:
1. Problem solving is like a quest. As in a quest, you might set off thinking you are bound for Ithaka only to find yourself on Ogygia years later. Or, in Apostolos’ example, you might set off to prove Fermat’s Last Theorem only to find yourself studying elliptic curves for years. The seeker walks many paths, wanders in circles, retraces steps, and encounters dead ends.
2. The quest has a starting point = what you know, a destination = the hypothesis you want to prove, and points in between = statements of fact. A graph, in the mathematical sense, is a great way to represent this. A is the starting point, B is the destination, F is a transitive point, and C is a choice.
A story is a path through the graph, defined by the choices a storyteller makes on behalf of his characters.
Frame P5 below shows Snowy’s dilemma. Snowy’s choice determines what happens to Tintin in Tibet. If only Snowy had not gone for the bone, the story would be different.
Even though a story’s very nature dictates that it be linear, there is always a notion of alternative paths. How to linearize the forks and branches of the path so that the story is most interesting is the art of storytelling.
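To make the graph idea concrete, here is a minimal sketch. The node names A, B, C, and F follow the labels above, but the specific edges are my own invented example. Enumerating every path from the starting point A to the destination B yields all the possible linearizations of one quest:

```python
def all_paths(graph, start, goal, path=None):
    """Depth-first enumeration of every path from start to goal."""
    path = (path or []) + [start]
    if start == goal:
        return [path]
    paths = []
    for nxt in graph.get(start, []):
        if nxt not in path:  # avoid wandering in circles forever
            paths.extend(all_paths(graph, nxt, goal, path))
    return paths

# A toy quest graph: A = start, B = destination, C = a choice point,
# F = a transitive point. The edges are invented for illustration.
story_graph = {
    "A": ["C"],
    "C": ["F", "B"],  # at C the seeker chooses: go via F, or straight to B
    "F": ["B"],
}

for p in all_paths(story_graph, "A", "B"):
    print(" -> ".join(p))  # prints: A -> C -> F -> B, then A -> C -> B
```

Each printed path is one possible telling of the same quest; choosing which one to tell, and in what order, is the storyteller’s art.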
3. A certain weight, or importance, can be assigned to a point based on the number of choices leading to it, or resulting from it.
When a story is summarized, each storyteller is likely to come up with a different outline. However, the most important points usually survive the majority of summarizations.
Stories can be similar. The practitioners of both narrative and problem solving rely on patterns to reduce choice and complexity.
So what does this have to do with anything?
Another book I cannot make anyone but myself read is “Interaction Design for Complex Problem Solving: Developing Useful and Usable Software” by Barbara Mirel. The book is as voluminous as its title suggests, 397 pages, of which I made it through page 232 in four years. This probably doesn’t entice you to read the book. Luckily, there is a one-page paper, “Visualizing complexity: Getting from here to there in ill-defined problem landscapes,” from the same author on the very same subject. If this is still too much to read, may I offer you my summary?
Mostly cut and pasted from Mirel’s text:
1. Complex problem solving is an exploration across rugged and at times uncharted problem terrains. In that terrain, analysts have no way of knowing in advance all moves, conditions, constraints or consequences. Problem solvers take circuitous routes through “tracts” of tasks toward their goals, sometimes crisscrossing the landscape and jumping across foothills to explore distant knowledge, to recover from dead ends, or to reinvigorate inquiry.
2. Mountainscapes are effective ways to model and visualize complex inquiry. These models stress relationships among parts and do not reduce problem solving to linear and rule-based procedures or work flows. Mountainscapes treat the spaces as being as important to coherence as the paths. Selecting the right model affects the design of the software and whether complex problem solvers experience useful support. Models matter.
Complex problems can neither be solved nor supported with linear or pre-defined methods. Complex problems have many possible heuristics, indefinite parameters, and ranges of outcomes rather than one single right answer or stopping point.
3. Certain types of complex problems recur in various domains and, for each type, analysts across organizations perform similar patterns of inquiry. Patterns of inquiry are the regularly repeated sets of actions and knowledge that have a successful track record in resolving a class of problems in a specific domain.
And again, what does this have to do with anything?
A colleague of mine, Dan Workman, once commented on a sales demo of a popular visual analytics tool. “Somehow,” he said, “the presenter drills down here, pivots there, zooms out there, and, miraculously, arrives at that view of the report where the answer to his question lies. But how did he know to go there? How would anyone know where the insight hides?”
His words stuck with me.
Imagine a simple visualization that shows the revenue trend of a business by region, by product, and by time. Let’s pretend the business operates in 4 regions, sells 4 products, and has been in business for 4 years. The combination of these parameters results in 64 views of the sales data. Now imagine that each region is made up of hundreds of countries. If the visualization allows the user to view sales by country, there will be thousands and thousands of additional views. In the real world, a business might also have many more products. The number of possible views could easily exceed what a human being can manually look at, and only some views (alone or in combination) possibly contain insight. But which ones?
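The combinatorics are easy to verify. A quick sketch, with dimension labels invented for illustration:

```python
from itertools import product

regions = ["NA", "EMEA", "APAC", "LATAM"]  # invented labels
products = ["P1", "P2", "P3", "P4"]
years = [2011, 2012, 2013, 2014]

# Every (region, product, year) combination is one possible view.
views = list(product(regions, products, years))
print(len(views))  # 4 * 4 * 4 = 64 views

# Drill from 4 regions down to a couple hundred countries and the
# view space explodes well past what anyone can inspect by hand:
countries = range(200)  # hypothetical country count
print(len(list(product(countries, products, years))))  # 200 * 4 * 4 = 3200
```

And that is before adding more products, finer time grains, or combinations of views.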
I have yet to see an application that supports users in finding the insightful views of a visualization. Often users won’t even know where to start.
So here is the connection between Part 1, Part 2, and Part 3. It’s the model. Visualization exploration can be represented as a graph (in the mathematical sense), where the points are the views and the connections are navigations between views. Users then trace a path through the graph as they explore new results.
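As a minimal sketch of that model (all names here are invented, not from any product): each view is a node, each drill/pivot/zoom is an edge, and a user’s session is a recorded path through the graph, including reversed steps:

```python
class ExplorationTrace:
    """Records the path a user traces through the graph of views."""

    def __init__(self, start_view):
        self.path = [start_view]
        self.edges = []  # (from_view, action, to_view) navigations

    def navigate(self, action, new_view):
        """Follow an edge: apply a drill/pivot/zoom to reach a new view."""
        self.edges.append((self.path[-1], action, new_view))
        self.path.append(new_view)

    def backtrack(self):
        """Reverse a step, as explorers of rugged terrain often must."""
        if len(self.path) > 1:
            self.path.pop()
        return self.path[-1]

# A hypothetical session over (region, product, year) views:
trace = ExplorationTrace(("all regions", "all products", "all years"))
trace.navigate("drill", ("EMEA", "all products", "all years"))
trace.navigate("pivot", ("EMEA", "P1", "all years"))
trace.backtrack()  # the pivot was a dead end; back to the EMEA view
print(trace.path)
```

A record like this is what would let software represent branches and loops, support reversing steps, and eventually mine recurring patterns of inquiry across users.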
From here, a certain design research agenda comes to mind:
1. The world needs interfaces to navigate the problem mountainspaces: keeping track of places visited, representing branches and loops in the path, enabling users to reverse steps, etc.
2. The world needs an interface for linearizing a completed quest into a story (research into presentation), and outlining stories.
3. The world needs software smarts that can collect the patterns of inquiry and use them to guide the problem solvers through the mountainspaces.
So I hope that from this agenda, Part 4 will eventually follow . . . .
That last D is Development if that’s unclear. Anyway, like Thao says, Happy Thanksgiving for those who celebrate it, and for those who don’t enjoy the silence in our absence. To Thao’s question, I’m going with Internet. Yes, it’s a gadget because it’s a series of tubes, not a big truck.
Find the comments to add the gadget for which you are most thankful.
Tomorrow is Thanksgiving and this seems like a good time to put my voice out on The AppsLab (@theappslab). I’m Thao, and my Twitter (@thaobnguyen) tagline is “geek mom.” I’m a person of few words and those two words pretty much summarize my work and home life. I manage The AppsLab researchers and designers. Jake welcomed us to the AppsLab months ago here, so I’m finally saying “Thank you for welcoming us!”
As we reflect on all the wonderful things in our lives, personal and professional, I sincerely want to say I am very thankful for having the best work family ever. I was deeply reminded of that early this week, when I had a little health scare at work and was surrounded by so much care and support from my co-workers. Enough of the emotional stuff, and onto the fun gadget stuff . . . .
My little health scare led me to a category of devices that hadn’t hit my radar before – potentially life-saving, personal medical apps. I’ve been looking at wearables, fitness devices, healthcare apps, and the like for a long time now, but there is a class of medical-grade devices (at least, recommended by my cardiologist) that is potentially so valuable in my life, as well as to those dear to me . . . AliveCor. It essentially turns your smartphone into an ECG device so you can monitor your heart health anytime and share it with your physician. Sounds so cool!
Back to giving thanks, I’m so thankful for all the technology and gadgets of today – from the iPhone and iPad that let me have a peaceful dinner out with the kids to these medical devices that I’ll be exploring now. I want to leave you with a question, “What gadget are you most thankful for?”
As a team-building activity for our newly merged team of research, design and development, someone, who probably wishes to remain nameless, organized a glass mosaic and welding extravaganza at The Crucible in Oakland.
We split into two teams, one MIG welding, the other glass breaking, and here’s the result.
All-in-all an interesting and entertaining activity. Good times were had by all, and no one was cut or burned, so bonus points for safety.
Editor’s note: Here’s a repost of a wonderful write-up of an event we did a couple weeks ago, courtesy of Friend of the ‘Lab Karen Scipi (@KarenScipi).
What Karen doesn’t mention is that she organized, managed and ran the event herself. Additional props to Ultan (@ultan) on the idea side, including the naming, Sandra Lee (@SandraLee0415) on the execution side and to Misha (@mishavaughan) for seeing the value. Without the hard work of all these people, I’m still just talking about a great idea in my head that I’m too lazy to execute. You guys all rock.
Enjoy the read.
By Karen Scipi
It was an exciting event here at Oracle Headquarters as our User Experience AppsLab (@theappslab) Director Jake Kuramoto (@jkuramot) recently hosted an internal design jam called Shape and ShipIt. Fifteen top-notch members of the newly expanded team got together for two days with a packed schedule to research and innovate cutting-edge enterprise solutions, write use cases, create wireframes, and build and code solutions. They didn’t let us down.
The goal: Collaborate and rapidly design practical, contextual, mobile Oracle Applications Cloud solutions that address real-world user needs and deliver enterprise solutions that are streamlined, natural, and intuitive user experiences.
The result: Success! Four new stellar user experience solutions were delivered to take forward to product development teams working on future Oracle Application Cloud simplified user interface releases.
While I cannot share the concepts or solutions with you as they are under strict lock and key, I can share our markers of the event’s success with you.
The event was split into two days:
Day 1: A “shape” day during which participants received invaluable guidance from Bill Kraus on the role of context and user experience, then researched and shaped their ideas through use cases and wireframes.
Day 2: A “ship” day during which participants coded, reviewed, tested, and presented their solutions to a panel of judges that included Jeremy Ashley (@jrwashley), Vice President of the Oracle Applications User Experience team.
It was a packed two days full of ideas, teamwork, and impressive presentations. The participants formed four small teams comprising managers, architects, researchers, developers, and interaction designers, whose specific perspectives proved invaluable to the tasks at hand. Their blend of complementary skills enabled the much-needed collaboration and innovation. Although participants were given a short timeframe for such an assignment, they were quick to adapt, refine their concepts, and produce solutions that could be delivered and presented within the two-day window. Individual team agility was imperative.
Participants were encouraged to brainstorm and design in ways that suited them. Whether it was sitting at tables with crayons, paper, notebooks, and laptops, or hosting walking meetings outside, the participants were able to discuss concepts and ideate in their own, flexible ways. As with all of our simplified user interface design efforts, participants kept a “context produces magic” perspective front and center throughout their activities. In the end, team results yielded responsive, streamlined, context-driven user experience solutions that were simple yet powerful.
Healthy “brain food” and activity breaks were encouraged, and both kept participants engaged and focused on the important tasks at hand. Salads, veggies, dips, pastas, wraps, and sometimes a chocolate chip cookie (for the much-needed sugar high) were on the menu. The activity break of choice was an occasional competitive game of table tennis at the Oracle Fitness Center, just a stone’s throw from the event location. The balance of think-mode and break-mode worked out just right for participants.

Our biggest marker of success, though, was how wrong we were. Yes. Wrong. While we expected one team’s enterprise solution to clearly stand out from among all of the others, we were pleasantly surprised as all four were equally impressive, viable, and well-received by the design jam judges. Four submissions, four winners. Nice job! Stay tuned to the Usable Apps Blog to learn more about such events and what happens to the innovative user experiences that emerge!
This year some of us at the AppsLab attended the Samsung Developer Conference, aka #SDC2014. Last year was Samsung’s first attempt, and we were there too. The quality and caliber of the presentations increased tenfold from last year. Frankly, Samsung is making it really hard to resist joining their ecosystem.
Here are some of the trends I observed:
Wearables and Health:
There was a huge emphasis on Samsung’s commitment to wearable technology. They released a new Tizen-based smartwatch (Samsung Gear S) as well as a biometric reference design, hardware and software, called SIMBAND. Along with their wearable strategy, they also released S.A.M.I, a cloud repository to store all this data. All of this ties together with their vision of “Voice of the Body.”
During the second-day keynote we got to hear from Mounir Zok, Senior Sports Technologist of the United States Olympic Committee. He told us how wearable technology is changing the way Olympic athletes train. Only a couple of years ago, athletes still had to go to a lab and “fake” actual activities to get feedback. Now they can get real data on the field thanks to wearable technology.
Virtual Reality:
Samsung released the Gear VR in partnership with Oculus. These goggles only work with a Galaxy Note 4 mounted in the front. The gaming experiences with this VR device are amazing, but Samsung is also exploring other use cases like virtual tourism and virtual movie experiences. They also released a 3D, 360-degree spherical-view camera called “Project Beyond.”
IoT – Home Automation:
Samsung is betting big on IoT and home automation, and they are putting their money where their mouth is by acquiring SmartThings. The SmartThings platform is open source and has the ability to integrate with a myriad of other home automation products. They showcased a smart home powered by the SmartThings platform.
Mobile:
I also really like their new Galaxy Note Edge phablet. Samsung is showing true innovation here with the “edge” part of the device. It has its own SDK, and it feels great in the hand!
Overall, I’m pretty impressed with what Samsung is doing. It seems like their spaghetti-on-the-wall approach (throwing a bunch of spaghetti and seeing what sticks) is starting to pay off. Their whole UX across devices looks seamless. And in my humble opinion, they are getting ready to take off on their own without having to use Android for their mobile devices. Tizen keeps maturing, but I shall leave that for another post!
Please feel free to share your experience with Samsung devices as well!
Editorial Note: This is a guest post by friend of the ‘Lab and colleague DJ Ursal. Also be sure to check out our Hackathon entry here:
EchoUser (@EchoUser), in partnership with SpaceGAMBIT, Maui Makers, the Minor Planet Center, NASA, the SETI Institute, and Further by Design, hosted an Asteroid Hackathon. The event was in response to the NASA Grand Challenge, “focused on finding all asteroid threats to human populations and knowing what to do about them.”
I had a wonderful opportunity to participate in the Asteroid Hackathon last week. My team name was NOVA. Our team comprised four members: DJ Ursal, Kris Robison, Daniel Schwartz, and Raj Krishnamurthy.
We were given live data from NASA and the Minor Planet Center site and had literally just 5 hours to put together a working prototype and solution to the asteroid big data problem. We created a web application (which works not only on your Mac or PC but also on your iPad and the latest Nexus 7 Android devices) to help scientists, astronomers, and anyone who is interested in asteroids discover, learn, and share information in a fun and interactive way.
Our main theme was Finding Asteroids Before They Find Us. The goal was to help discover, learn, and share asteroid information to increase awareness within the community. We created an interactive web app that let users apply chart filters to find out about the risk of possible future impacts with Earth, as well as the distance of each asteroid from Earth, its absolute brightness, and its rotation. It allowed users to click and drag on any chart to filter, so that they could combine filters in a multidimensional way to explore, discover interesting facts, and share data on asteroids with friends and the community. We also made use of Major Tom, the astronaut referenced in David Bowie’s song “Space Oddity,” which depicts an astronaut who casually slips the bonds of the world to journey beyond the stars. Users could post questions to Major Tom and could also play his song.
Strategically, the single most important element in WINNING this hackathon was team composition: having a team that works together effectively. Collaboration and communication were the two most critical personal skills demanded of all members, as time was limited and coordination was of the utmost importance.
Winning Team NOVA: DJ Ursal, Kris Robison, Daniel Schwartz, Raj Krishnamurthy
A couple weeks ago Jeremy Ashley (@jrwashley), Bill Kraus, Raymond Xie and I participated in the Asteroid Hackathon hosted by @EchoUser. The main focus was “to engage astronomers, other space nerds, and the general public, with information, not just data.”
As you might already know, we here at the AppsLab are big fans of hackathons as well as ShipIt days or FedEx days. The ability to get together, pool our collective minds, and create something in a short amount of time is truly amazing. It also helps to keep us on our toes, technically and creatively.
Our team built what we called “The Daily Asteroid.” The idea behind our project was to highlight the asteroid profile of the current date’s closest approach to Earth, using near Earth object (NEO) data. In other words, it shows which asteroid is closest to Earth today. A user could “favorite” today’s asteroid and start a conversation with other users about it, using a social network like Twitter.
We also added the ability to change the asteroid’s properties (size, type, velocity, angle) and play out a scenario to see what damage it could cause if it hit the Earth. And to finish up, we created an Asteroid Hotline using Twilio (@twilio), where you can call to get the latest NEO info using your phone!
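For the curious, the core of a phone hotline like this is surprisingly small. Twilio answers an incoming call by fetching TwiML (a simple XML dialect) from your webhook, and since TwiML is plain XML it can even be built with the standard library alone. This is a hypothetical sketch, not our actual hackathon code; the `latest_neo` data is an invented placeholder for a real Minor Planet Center / NASA lookup:

```python
# Minimal sketch of a TwiML voice response for an asteroid hotline.
from xml.sax.saxutils import escape

def latest_neo():
    """Placeholder: a real app would query the NASA / Minor Planet feed."""
    return {"name": "2014 UF56", "distance_ld": 0.4}

def twiml_response():
    """Build the XML Twilio reads aloud to the caller."""
    neo = latest_neo()
    message = (f"Today's closest near Earth object is {neo['name']}, "
               f"passing at {neo['distance_ld']} lunar distances.")
    return ("<?xml version='1.0' encoding='UTF-8'?>"
            f"<Response><Say>{escape(message)}</Say></Response>")

print(twiml_response())
```

In practice you would serve `twiml_response()` from a small web endpoint (Flask, for instance) and point a Twilio phone number’s voice webhook at its URL.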
We were lucky to be awarded 3rd place, or “Best Engagement,” and we had a blast doing it. Considering the small amount of time we had, we came away really proud of our results.
There’s a post over on VoX about a new OAUX lab at Oracle HQ, the Cloud UX Lab.
Finished just before OOW in September, this lab is a showcase for OAUX projects, including a few of ours.
The lab reminds me of a spacecraft from the distant future, the medical bay or the flight deck. It’s a very cool place, directly inspired and executed by our fearless leader, Jeremy Ashley (@jrwashley), an industrial designer by trade.
I actually got to observe the metamorphosis of this space from something that felt like a doctor’s office waiting room into the new hotness. Looking back on those first meetings, I never expected it would turn out so very awesome.
Anyway, the reason why I got to tag along on this project is because our team will be filling the control room for this lab with our demos. Noel (@noelportugal) and Jeremy have a shared vision for that space, which will be a great companion piece to the lab and equally awesome.
So, if you’re at Oracle HQ, book a tour and stop by the new Cloud UX Lab, experience the new hotness and speculate on what Noel is cooking up behind the glass.
Jawbone announced the Up3 today, reportedly its most advanced fitness tracker to date.
As with all fitness trackers, the Up3 has an accelerometer, but it also has sensors for measuring skin and ambient temperature, as well as something called bioimpedance. All these data collected by the Up3 are used by a new feature called Smart Coach.
You can imagine what the Smart Coach does. It sounds like a cool, possibly creepy, feature.
This post is not about the Up3.
This post is about my journey into the dark heart of the quantified self. The Up3 has just reminded me to coalesce my thoughts.
Misfit calculates activity based on points, and my personal goal of 1,000 points was relatively easy to reach every day, even for someone who works from home. What I realized quickly was that the Shine pushed me to chase points, not activity.
The Shine uses its accelerometer to measure activity, so depending on where I wore it on my person, a run could be worth more points. This isn’t unique to the Shine. I’ve seen people spinning at the gym wearing their fitness trackers on their ankles.
As the weeks passed, I found myself avoiding activities that didn’t register a lot of points, definitely not good behavior, and even though my goal was 1,000 points, I avoided raising it for fear of missing my daily goal-achievement dopamine high.
Then, mid-Summer, Misfit dropped an update that added some new game mechanics, and one day, my Shine app happily informed me that I’d hit my goal 22 days in a row.
This streak was the beginning of the end for me.
On the 29th day of my streak, the battery died. I replaced it, crisis averted, streak intact. Then, later that day, the Shine inexplicably died. I tried several new batteries and finally had to contact support.
All the while, I worried about my streak. I went to the gym, but it felt hollow and meaningless without the tangible representation, the coaching, as it were, from my Shine.
This is not a good look.
Misfit replaced my Shine, but in the days that elapsed, during my detox, I decided to let it go. Turns out the quantified self isn’t for obsessive, overly-competitive personality types like me.
And I’m not the only one in this group.
In September, I read an article called Stepping Out: Living the Fitbit Life, in which the author, David Sedaris, describes a similar obsession with his Fitbit. As I read it, I commiserated, but I also felt a little jealous of the level of his commitment. This dude makes me look like a rank amateur.
Definitely worth a read.
Anyway, this is not in any way meant to be an indictment of the Shine, Fitbit, Jawbone or any fitness tracker. Overall, these devices offer people a positive and effective way to reinforce healthy behavior and habits.
But for people like me, they lead to unanticipated side effects. As I read about the Up3, its sensors and Smart Coach, all of which sound very cool, I had to remind myself of the bad places I went with the Shine.
And the colloquial, functionally-incorrect but very memorable, definition of insanity.
In Part 2, when I get around to it, I’ll discuss the flaws in the game mechanics these companies use.
Find the comments.
I have both Google Glass and Android Wear devices (Samsung Gear Live, Moto 360), and oftentimes I wear them together. People always ask: “How do you compare Google Glass and Android watches?” Let me address a couple of viewpoints here. I would like to talk about the Apple Watch too, but since it has not been officially released yet, let’s just say that shape-wise it is square and looks like a Gear Live, and its features seem pretty similar to Android Wear’s, except for the attempt to add more playful colors and features. Let’s discuss it more once it is out.
I was in the first batch of Google Glass Explorers and got my Glass in mid-2013. In the middle of this year, I first got the Gear Live, then later the Moto 360. I always find it peculiar that Glass is the older technology while Wear is the newer one. Shouldn’t it have been easier to design a smart watch before glassware?
I do find a lot of similarities between Glass and Wear. The fundamental similarity is that both are Android devices. They are voice-input enabled and show you notifications. You may install additional Android applications to personalize your experience and maximize your usage. I see these as the true values of wearables.
Differences? Glass shows a lot of capabilities that Android Wear lacks at the moment. The ones that probably matter most to people are sound, phone calls, video recording, picture taking, a hands-free heads-up display, GPS and wifi. Unlike Android Wear, Glass can be used standalone; Android Wear is only a companion gadget and has to be paired with a phone.
Is Glass superior, then? Android Wear does provide better touch-based interaction, compared to swiping at the side of the Glass frame. You can also play simple games like Flopsy Droid on your watch. Also commonly included are pedometers and heart activity sensors. Glass also tends to overheat easily. Water resistance plays a role here too: you would almost never want to get your Glass wet at all, while Android Wear is water-resistant to a certain degree. And when you are charging your watch at night, it also serves as a bedtime clock.
For me personally, although I have owned Glass longer than Wear, I have to say I prefer Android Wear over Glass, for a couple of reasons. First, there is the significant price gap ($1,500 vs. $200). Second, especially when you add prescription lenses to Glass, it gets heavy and hurts the ear when worn for an extended period of time. Third, I do not personally find the additional features offered by Glass useful in my daily activities; I do not normally take pictures other than at specific moments or while I am traveling.
I also find that even though Glass is now publicly available within the US, it is still perceived as an anti-social gadget; the term is defined in the Urban Dictionary as well. Most of the people I know who own Glass do not wear it themselves, for various reasons. I believe improving the marketing and advertising strategy for Glass may help.
Gadget preference is personal. What’s yours?
If you’ve read here for more than a hot minute, you’ll know that I love me some data visualization.
This love affair dates back to when Paul (@ppedrazzi) pointed me to Hans Rosling’s (@hansrosling) first TED talk. I’m sure Hans has inspired an enormous number of people by now, judging by the 8 million plus views his TED talk has garnered. Sure, those aren’t unique views, but even so.
There’s an interesting meta-project: visualize the people influenced by various visualization experts, like a coaching tree or something.
As luck would have it, one area of specialization of our newest team members is, wait for it, data visualization.
Last week, I got to see them in action in a full-day workshop on data visualization, which was eye-opening and very informative.
I’m hoping to get a few blog posts out of them on the subject, and while we wait, I wanted to share some interesting examples we’ve been throwing around in email.
I started the conversation with xkcd because, of course I did. Randall Munroe’s epic comic isn’t usually mentioned as a source for data visualizations, but if you read it, you’ll know that he has a knack for exactly that. Checking out the Google Image search for “xkcd data visualization” reminded me of just how many graphs, charts, maps, etc. Randall has produced over the years.
I also discovered that someone has created a D3 chart library as an homage to the xkcd style.
I probably spent 10 minutes zooming into Pixels, trying to find the bottom; being small-minded, I gave up pretty early on Click and Drag, assuming it was small. It’s not.
How much time did you spend, cough, waste, on these?
During our conversation, a couple of interesting examples came back to me, both worth sharing.
First is Art of the Title, dedicated to the opening credits of various films. In a very specific way, opening credits are data visualizations; they set the mood for the film and name the people responsible for it.
Second is Scale of the Universe, which is self-explanatory and addictive.
So, there you go. Enjoy investigating those two and watch this space for more visualization content.
And find the comments.
Editor’s note: Hey look, a new author. Here’s the first post from Raymond Xie, who joined us nearly a year ago. You may remember him from such concept demos as geo-fencing or a Pebble watchface. Raymond has been busy at work and wants to share the work he did with telekinesis. Or something, you decide. Enjoy.
You put on a headband, stare at a ball, tilt your head back-forth and left-right . . . the ball navigates through a simple maze, rushing, wavering, changing colors, and finally hitting the target.
That is the latest creation out of the AppsLab: the Muse Sphero Driver. When it was first shown at the OAUX Exchange during OOW, it amused many people, who called it a “mind control” game.
Technically, it is your brainwave data (electroencephalography, or EEG) driving the Sphero (adjusting its speed and changing its color along a spectrum from RED to BLUE, where RED means fast and active, BLUE slow and calm), while head gestures (from the 3D accelerometer, or ACC) control the direction of the Sphero’s movement. Whether or not you call that “mind control” is up to your own interpretation.
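The mapping from brainwave data to Sphero behavior can be sketched roughly like this. This is only an illustration, not the actual driver code; the normalized “calm” score and the linear red-to-blue interpolation are my assumptions:

```python
def eeg_to_sphero(calm, max_speed=255):
    """Map a normalized calm score (0.0 = active, 1.0 = calm) to a
    Sphero speed and an RGB color on the red-to-blue spectrum."""
    calm = max(0.0, min(1.0, calm))               # clamp to [0, 1]
    speed = int(round((1.0 - calm) * max_speed))  # active -> fast
    red = int(round((1.0 - calm) * 255))          # active -> red
    blue = int(round(calm * 255))                 # calm -> blue
    return speed, (red, 0, blue)

# A fully "active" reading drives fast and red; a fully calm one slow and blue.
print(eeg_to_sphero(0.0))  # (255, (255, 0, 0))
print(eeg_to_sphero(1.0))  # (0, (0, 0, 255))
```

In the real demo, the calm score would come from the Muse headband’s EEG band-power readings, and the accelerometer would supply the heading separately.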
You kind of drive the ball with your mind, but mostly with brainwave noise rather than conscious thought. It is still too early to derive accurate “mind control” from the EEG data of any regular person, for two reasons:
1. At the scalp level, EEG has a very poor signal-to-noise ratio;
2. The correlation between EEG and mind activity still needs to be established.
But it does open up a dialog in HCI, such as voice control vs. mind control (silence); or in robotics, where instead of asking a machine to “see”/“understand,” we can “see”/“understand” ourselves and impersonate it with our mind and soul.
While it is difficult to read out the “mind” (any mind activity) transparently, we think it is quite doable to map your mind onto certain states, and use that “state” as a command indirectly.
We may do something around this area. So stay tuned.
Meanwhile, you can start practicing Yoga or Zen, to get a better signal-to-noise ratio and to set your mind into a certain state with ease.
As part of the Oracle Applications User Experience (@usableapps) team, we regularly work with interaction designers, information architects and researchers, all of whom are pivotal to ensuring that what we build is what users want.
Makes sense, right?
So, we’re joining forces with the Emerging Interactions team within OAUX to formalize a collaboration that has been ongoing for a while now. In fact, if you read here, you’ll already recognize some of the voices, specifically John Cartan and Joyce Ohgi, who have authored posts for us.
For privacy reasons (read, because Jake is lazy), I won’t name the entire team, but I’m encouraging them to add their thoughts to this space, which could use a little variety. Semi-related, Noel (@noelportugal) was on a mission earlier this week to add content here and even rebrand this old blog. That seems to have run its course quickly.
So, welcome everyone to the AppsLab team.
Last week at OpenWorld, a few of our projects were featured in Steve Miranda’s (@stevenrmiranda) keynote session.
Thanks to Martin for making this video, thanks to Steve for including it in his keynote, and thanks to you for watching it.
About a month ago, hackaday.com broke the news of a new Wifi chip called ESP8266 that costs about $5. This wireless system on a chip (SoC) took all the IoT heads (including me) by surprise. Until now, if you wanted to integrate wifi into any DIY project, you had to use more expensive solutions. To put this into perspective, my first wifi Arduino shield was about $99!
So I ordered a few of them (I think I’m up to 10 now!) and went to test the possibilities. I came up with a simple Instructable to show how you can log a room temperature to the Cloud. I used an Arduino to do this, but one of the most amazing things about this chip is that you can use it standalone! Right now documentation is sparse, but I was able to compile the source code using a gcc compiler toolchain created by the new esp8266 community.
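To give a feel for what driving the chip looks like, the stock ESP8266 firmware speaks an AT command set over its serial port. The sketch below only builds that command sequence as strings; the SSID, host and path are placeholders, and the exact commands vary by firmware version:

```python
def esp8266_log_temp(ssid, password, host, path, temp_c):
    """Build the AT command sequence the stock ESP8266 firmware expects
    over serial: join a Wi-Fi network, open a TCP socket, send an HTTP
    request carrying the temperature reading, then close the socket."""
    request = (
        f"GET {path}?temp={temp_c} HTTP/1.1\r\n"
        f"Host: {host}\r\n\r\n"
    )
    return [
        "AT+CWMODE=1",                      # station (client) mode
        f'AT+CWJAP="{ssid}","{password}"',  # join the access point
        f'AT+CIPSTART="TCP","{host}",80',   # open a TCP connection
        f"AT+CIPSEND={len(request)}",       # announce the payload size
        request,                            # the HTTP request itself
        "AT+CIPCLOSE",                      # close the connection
    ]

for cmd in esp8266_log_temp("MyWifi", "secret", "example.com", "/log", 21.5):
    print(cmd)
```

On an Arduino, you would write these same strings to the software serial port wired to the ESP8266 and wait for its “OK” responses between commands.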
But why is this important to you, even if you haven’t dabbled in DIY electronics? Well, this chip comes from China, and even though it doesn’t have an FCC stamp of approval (yet), it signals things to come. This is what I call the Internet of Things r(evolution). Prices of these chips are at a historic low, and soon we will see more and more products connecting to the Internet/Cloud. From light switches and light bulbs to washing machines and dishwashers: anything that needs to be turned on or off could potentially have one of these. Anything that collects data, like thermostats, smoke detectors, etc., could also potentially have one.
So, are you scared, or will you welcome our new internet overlords?
For the past year at the AppsLab we have been exploring the possibilities of advanced user interactions using BLE beacons. A couple days ago, Google (unofficially) announced that one of their Chrome teams is working on what I’m calling the gBeacon. They are calling it the Physical Web.
This is how they describe it:
“The Physical Web is an approach to unleash the core superpower of the web: interaction on demand. People should be able to walk up to any smart device – a vending machine, a poster, a toy, a bus stop, a rental car – and not have to download an app first. Everything should be just a tap away.
The Physical Web is not shipping yet nor is it a Google product. This is an early-stage experimental project and we’re developing it out in the open as we do all things related to the web. This should only be of interest to developers looking to test out this feature and provide us feedback.”
Here is a short run down of how iBeacon works vs The Physical Web beacons:
The iBeacon profile advertises a 30-byte packet containing three values that combined make a unique identifier: UUID, Major and Minor. The mobile device actively listens for these packets. When it gets close to one, it queries a database (cloud) or uses hard-coded values to determine what it needs to do or show for that beacon. Generally, the UUID identifies a common organization, the Major value an asset within that organization, and the Minor a subset of assets belonging to the Major.
For example, if I’m close to the Oracle campus, and I have an Oracle application that is actively listening for beacons, then as I get within reach of any beacon my app can trigger certain interactions related to the whole organization (“Hello Noel, welcome to Oracle.”) The application had to query a database to learn what that UUID represents. As I reach building 200, my application picks up another beacon that contains a Major value of, let’s say, 200. My app will then do the same query to see what it represents (“You are in building 200.”) Finally, when I get close to our new Cloud UX Lab, a beacon inside the lab will broadcast a Minor ID that represents the lab (“This is the Cloud UX Lab, want to learn more?”)
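To make the packet layout concrete, here is a rough Python sketch that parses the manufacturer-specific portion of an iBeacon advertisement. The UUID and the building-200/lab values are hypothetical, and real over-the-air packets carry additional BLE header bytes around this payload:

```python
import struct
import uuid

def parse_ibeacon(payload: bytes):
    """Parse an iBeacon manufacturer payload: 2-byte Apple company ID
    (0x004C, little-endian), beacon type 0x02, length 0x15, then the
    16-byte UUID and big-endian Major, Minor and TX power."""
    company, btype, length = struct.unpack_from("<HBB", payload, 0)
    if company != 0x004C or btype != 0x02 or length != 0x15:
        raise ValueError("not an iBeacon payload")
    beacon_uuid = uuid.UUID(bytes=payload[4:20])
    major, minor, tx_power = struct.unpack_from(">HHb", payload, 20)
    return beacon_uuid, major, minor, tx_power

# Hypothetical campus beacon: Major 200 (the building), Minor 5 (the lab).
payload = (
    bytes([0x4C, 0x00, 0x02, 0x15])
    + uuid.UUID("f7826da6-4fa2-4e98-8024-bc5b71e0893e").bytes
    + struct.pack(">HHb", 200, 5, -59)
)
print(parse_ibeacon(payload))
```

The app would then look up that (UUID, Major, Minor) triple in its database to decide what to show, exactly as described above.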
iBeacons are designed to work as a fully closed ecosystem, where only the deployed pieces (app+beacons+db) know what a beacon represents. Today I can walk into the Apple Store and use a Bluetooth app to “sniff” BLE devices, but unless I know what their UUID/Major/Minor values represent, I cannot do anything with that information. Only the official Apple Store app knows what to do when it is near the beacons around the store (“Looks like you are looking for a new iPhone case.”)
As you can see the iBeacon approach is a “push” method where the device will proactively push actions to you. In contrast the Physical Web beacon proposes to act as a “pull” or on-demand method.
The Physical Web gBeacon advertises a 28-byte packet containing an encoded URL. Google wants to use the familiar and established method of URLs to tell an application, or an OS, where to find information about physical objects. They plan to use context (physical and virtual) to rank what might be most important to you at the moment and display it.
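The encoding Google published for this (UriBeacon) compresses well-known URL prefixes and suffixes into single bytes so a full URL fits in the small advertisement field. Below is a simplified sketch of that idea; the code tables are from the spec as I recall them, so treat them as illustrative rather than authoritative:

```python
SCHEMES = {"http://www.": 0x00, "https://www.": 0x01,
           "http://": 0x02, "https://": 0x03}
SUFFIXES = {".com/": 0x00, ".org/": 0x01, ".edu/": 0x02, ".net/": 0x03,
            ".info/": 0x04, ".biz/": 0x05, ".gov/": 0x06}

def encode_url(url: str) -> bytes:
    """Compress a URL UriBeacon-style: the scheme prefix becomes one
    byte, well-known suffixes become one byte, everything else is
    copied through as plain ASCII."""
    # Match the longest scheme prefix first.
    for scheme, code in sorted(SCHEMES.items(), key=lambda s: -len(s[0])):
        if url.startswith(scheme):
            out = bytearray([code])
            url = url[len(scheme):]
            break
    else:
        raise ValueError("unsupported scheme")
    i = 0
    while i < len(url):
        for suffix, code in sorted(SUFFIXES.items(), key=lambda s: -len(s[0])):
            if url.startswith(suffix, i):
                out.append(code)
                i += len(suffix)
                break
        else:
            out.append(ord(url[i]))
            i += 1
    return bytes(out)

encoded = encode_url("https://example.com/lab")
print(len(encoded), encoded)  # 23 characters squeezed into 12 bytes
```

The receiving phone simply reverses the table lookups to recover the URL, then fetches it like any other web page.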
The Physical Web approach is designed to be a “pull” discovery service, where most likely the user will initiate the interaction. For example, when I arrive at the Oracle campus, I can start an application that scans for nearby gBeacons, or I can open my Chrome browser and do a search. The application or browser will use context to rank nearby objects alongside search results. It can also use calendar data, email or Google Now to narrow down interests. A background process with “push” capabilities could also be implemented. This process could have filters that alert the user to nearby objects of interest. These interest rules could be predefined or inferred by using Google’s intelligence-gathering systems like Google Now.
The main difference between the two approaches is that iBeacons is a closed ecosystem (app+beacons+db) and the Physical Web is intended to be a public self discovered (app/os+beacons+www) physical extension of the web. Although the Physical Web could also be restricted by using protected websites and encrypted URLs.
Both approaches account for the common misconception about these technologies: “Am I going to be spammed as soon as I walk inside a mall?” The answer is NO. iBeacons are an opt-in service within an app, and Physical Web beacons will mostly work on demand or through filtered subscriptions.
So there you have it. Which method do you prefer?
This year, our team and our larger organization, Oracle Applications User Experience, will have precisely a metric ton of activities during the week.
For the first time, our team will be doing stuff at JavaOne too. Anthony (@anthonyslai) will be talking on Monday about the IFTTPi workshop we built for the Java team for Maker Faire back in May, and Tony will be showing those workshop demos in the JavaOne OTN Lounge at the Hilton all week.
If you’re attending either show or both, stop by, say hello and ask about our custom wearable.
Speaking of wearables, Ultan (@ultan) will be hosting a Wearables Meetup a.k.a. Dress Code 2.0 in the OTN Lounge at OpenWorld on Tuesday, September 30 from 4-6 PM. We’ll be there, and here’s what to expect:
- Live demos of wearables proof-of-concepts integrated with the Oracle Java Cloud.
- A wide selection of wearable gadgets available to try on for size.
- OAUX team chatting about use cases, APIs, integrations, UX design, fashion and how you can use OTN resources to build your own solutions.
Update: Here are Bob (@OTNArchBeat) and Ultan talking about the meetup.
Here’s the list of all the OAUX sessions:
Presenter: Jeremy Ashley, Vice President, Applications User Experience; Jatin Thaker, Senior Director, User Experience; and Jake Kuramoto, Director, User Experience
The Oracle Applications Cloud user experience design strategy is about simplicity, mobility, and extensibility. See what we mean by simplicity as we demo our latest cloud user experiences and show you only the essential information you need for your work. Learn how we are addressing mobility, by delivering the best user experience for each device as you access your enterprise data in the cloud. We’ll also talk about the future of enterprise experiences and the latest trends we see emerging in the consumer market. And finally, understand what we mean by extensibility after hearing a high-level overview of the tools designed for tailoring the cloud user experience. With this team, you will always get a glimpse into the future, so we know you will be inspired about the future of the cloud.
Session ID: CON7198
Date: Monday, Sept. 29, 2014
Time: 2:45 p.m. – 3:30 p.m.
Location: Moscone West – 3007
Presenter: Anthony Lai, User Experience Architect, Oracle
This session shows how the Applications User Experience team created an interactive workshop for the Oracle Java Zone at Maker Faire 2014. Come learn how the combination of the Raspberry Pi and Embedded Java creates a perfect platform for the Internet of Things. Then see how Java SE, Raspi, and a sprinkling of user experience expertise engaged Maker Faire visitors of all ages, enabling them to interact with the physical world by using Java SE and the Internet of Things. Expect to play with robots, lights, and other Internet-connected devices, and come prepared to have some fun.
Session ID: JavaOne 2014, CON7056
Date: Monday, Sept. 29, 2014
Time: 4 p.m. – 5 p.m.
Location: Parc 55 – Powell I/II
Presenters: Jeremy Ashley, Vice President, Applications User Experience, Oracle; Aylin Uysal, Director, Human Capital Management User Experience, Oracle
The Oracle Applications Cloud user experience design strategy is about simplicity, mobility, and extensibility. See what we mean by simplicity as we demo our latest cloud user experiences and show you only the essential information you need for your work. Learn how we are addressing mobility, by delivering the best user experience for each device as you access your enterprise data in the cloud. We’ll also talk about the future of enterprise experiences and the latest trends we see emerging in the consumer market. And finally, understand how you can extend with the Oracle tools designed for tailoring the cloud user experience. With this team, you will always get a glimpse into the future. Come and get inspired about the future of the Oracle HCM Cloud.
Session ID: CON8156
Date: Tuesday, Sept. 30, 2014
Time: 12:00 p.m. – 12:45 p.m.
Location: Palace – Presidio
Presenters: Jeremy Ashley, Vice President, Applications User Experience, Oracle; Killian Evers, Senior Director, Applications User Experience, Oracle
The Oracle Applications Cloud user experience design strategy is about simplicity, mobility, and extensibility. In this session, learn how Oracle is addressing mobility by delivering the best user experience for each device as you access your enterprise data in the cloud. Hear about the future of enterprise experiences and the latest trends Oracle sees emerging in the consumer market. You’ll understand what Oracle means by extensibility after getting a high-level overview of the tools designed for tailoring the cloud user experience, and you’ll also get a glimpse into the future of Oracle Sales Cloud.
Session ID: CON7172
Date: Wednesday, Oct. 1, 2014
Time: 4:30 p.m. – 5:15 p.m.
Location: Moscone West – 2003
Presenters: Laurie Pattison, Senior Director, User Experience; and Mindi Cummins, Principal Product Manager, both of Oracle
So you’ve bought and implemented Oracle Applications Cloud software. Now you want to get your users excited about using it. Studies show that one of the biggest obstacles to meeting ROI objectives is user acceptance. Based on working directly with thousands of real users, this presentation discusses how Oracle Applications Cloud is designed to get your users excited to try out new software and be productive on a new release ASAP. Users say they want to be productive on a new application without spending hours and hours of training, experiencing death by PowerPoint, or reading lengthy manuals. The session demos the onboarding experience and even shows you how a business user, not a developer, can customize it.
Session ID: CON7972
Date: Thursday, Oct. 2, 2014
Time: 12 p.m. – 12:45 p.m.
Location: Moscone West – 3002
Presenters: Anthony Lai, User Experience Architect, Oracle; and Chris Bales, Director, Oracle Social Network Client Development
Apple’s iBeacon technology enables companies to deliver tailored content to customers, based on their location, via mobile applications. It will enable social applications such as Oracle Social Network to provide more relevant information, no matter where you are. Attend this session to see a demonstration of how the Oracle Social Network team has augmented the mobile application with iBeacons to deliver more-context-aware data. You’ll get firsthand insights into the design and development process in this iBeacon demonstration, as well as information about how developers can extend the Oracle Social Network mobile applications.
Session ID: Oracle OpenWorld 2014, CON8918
Date: Thursday, Oct. 2, 2014
Time: 3:15 p.m. – 4 p.m.
Location: Moscone West – 2005
Hope to see you next week.