The Expo opened today and will be open until the end of Friday! There was a lot to see and do! I managed to explore about a third of the space. Walking in, we have the GDC Store to the left and the main floor below the stairs. Upon entering the main floor, Unity was smack dab in the center. It had an impressive setup, but not as impressive as the Oculus area or the Clash of Kings booth.
There were a lot of demos you could play, with many different types of controllers. Everyone was definitely drinking the VR Kool-Aid. Because of the popularity of some of the sessions, reservations for a play session are strongly encouraged. Most, if not all, of the sessions were already booked for the whole day by noon. I managed to reserve the PS VR play session for tomorrow afternoon by scanning a QR code to their scheduling app!
The main floor was broken up into pavilions with games grouped by their respective countries. It was interesting to overhear others calling their friends to sync up and saying “I’m in Korea.” Haha.
I spent the rest of the time walking around the floor and observing others play.
— Tawny (@iheartthannie) March 16, 2016
I did get a chance to get in line for an arcade ride! My line buddy and I decided to get chased by a T-Rex! We started flying in the air as a Pterodactyl. The gleeful flight didn’t last long. The T-Rex was hungry and apparently really wanted us for dinner. It definitely felt like we were running quickly, trying to get away.
Another simulation others tried that we didn’t was a lala land roller coaster. In this demo, players can actually see their hand on screen.
Sessions & Highlights
PlayStation VR. Sony discussed development concepts, design innovations and what PS VR is and is not. I personally liked the direction they are going for collaboration.
- Design with 2 screens in mind. For console VR, you may be making 2 games in 1. One in VR and one on TV. You should consider doing this to avoid having one headset per player and to allow for multiplayer cooperation. Finding an art direction for both is hard. Keep it simple for good performance.
- Make VR a fun and social experience. In a cooperative environment, you get 2 separate viewpoints of the same environment (mirroring mode) or 2 totally different screen views (separate mode). This means that innovation between competitive and co-op modes is possible.
The AppsLab team and I have considered this possibility of a VR screen and TV screen experience as well. It’s great that this idea is validated by one of the biggest console makers.
A year of user engagement data. A year’s worth of game industry data, patterns and trends was the theme of all the sessions I attended today.
- There are 185 million gamers in the US. Half are women.
- 72 million are console gamers. Of those console owners, the average age is ~30 years old.
- There are 154 million mobile gamers. This is thanks to the rise of free-2-play games. Mobile accessibility has added diversity to the market and brought a new group of players. Revenues grew because of broad expansion. The average age for the mobile group is ~39.4 years old.
- There are 61 million PC gamers thanks to the rise of Steam. These gamers tend to be younger, at an average age of ~29.5 years old.
- There are different motivations as to why people play games. There are two broad groups of players: core vs. casual. Universally, casual players play primarily to pass time while waiting and as a relaxing activity.
- There is great diversity within the mobile market. There is an obvious gender split in what females and males play casually. Females tend to like matching puzzle (Candy Crush), simulation and casino games, while males tend to like competitive games like sports, shooter and combat city-builder games.
- When we look internationally, players in Japan have less desire to compete when playing games. Successful games there tend to be built around cooperation.
- Most homes have a game console. In 2015, 51% of homes owned at least 2 game consoles. At the start of 2016, there was an increase of 40% in sales for current 8th generation game consoles (PS4, Xbox One, etc minus the Wii).
- Just concentrating on mobile gamers, 71% play games on both their smart phone and tablet, 10% play only on their tablet.
- Top factors leading to churn are lack of interest, failure to meet expectation and too much friction.
- Aside from Netflix and maybe YouTube, Twitch gobbles up more prime-time viewers than anyone else, at almost 700K concurrent viewers as of March 2016. Its viewership is increasing despite competition from the launch of YouTube Gaming.
Day 1 — User research round table. This was my first round table during GDC, and it’s nice to be among those within the same profession. We covered user research for VR, preventing bias and testing on kids! Experts shared their failures on these topics and offered suggestions.
- Testing for Virtual Reality.
- Provide players with enough time warming up in the new environment before asking them to perform tasks. Use the initial immersive exposure to calibrate them.
- Be ready to pull them out at any indication of nausea.
- Use questionnaires to screen out individuals who easily get motion sickness.
- It’s important to remember that people experience sickness for different reasons. It’s hard to eliminate all the variables. Some people can have vertigo or claustrophobia that’s not necessarily the fault of the VR demo. There is a bias toward that in media. People think they are going to be sick so they feel sick.
- Do not ask people if they feel sick before the experience else you are biasing them to be sick.
- Individuals are only more likely to feel sick if your game experience does not match their expectations. Some people feel sick no matter what.
- One researcher tested 700–800 people in VR. Only 2 people said that they felt sick. 7–8 said they felt uncomfortable.
- An important question to ask is “At what point do they feel sick?” If you get frequent reports at a specific point vs. generalized reports, then you can do something to make the game better.
- Avoid bragging language. Keep questions neutral.
- Separate yourself from the product.
- Remember participants think that you are an authority. Offload instructions to the survey, rather than relay the instructions yourself. It’s going to bias the feedback.
- Standardize the experiment. Give the same spiel.
- The order of questions is important.
- Any single geographic region is going to introduce bias. Only screen out regions if you think culture is going to be an issue.
- Testing with kids.
- It’s better to test with 2 kids in a room. Kids are not good at verbalizing what they know and do not know. Having 2 kids allows you to see them verbalize their thoughts to each other as they ask questions and help each other through the game.
- When testing a group of kids at once, assign the kids their station and accessories. Allowing them to pick will end up in a fight over who gets the pink controller.
- Younger kids can’t give granular ratings, so allow for 2 clear options on surveys. A thumbs up and thumbs down works.
- Limit kids to one sugary drink or you’ll regret it.
Just like yesterday, the VR sessions were very popular. Even with the change to bigger rooms, lines for popular VR talks would start at least 20 minutes before the session started. The longest line I was in snaked up and down the hallway at least 4 times. The wait was well worth it though!
Today was packed. Many sessions overlapped one another. Wish I could have cloned 3 of myself 🙁
Throughout each session, I noticed points that have been repeated from yesterday’s daily roundup. There are definitely trends and general practices that the game industry has picked up on, especially in virtual reality. I’ll talk more about these trends later in this post.
PlayStation revealed the price of its new VR headset: $399! It’s said that PlayStation VR has over 230 developers on board and 160 diverse titles in development. 50 of those games will be available this October. More info at the PS VR launch event tomorrow 🙂
There is a game called Rez Infinite developed for the PS VR. The line to try out the game was long! I wanted to take a picture of someone playing the game, but they asked kindly for no film or photography. Instead, here is a picture of the Day of the Dev banner!
Most popular VR demo so far
Aside from Eagle Flight, also built for PS VR, Everest VR lets you climb Mt. Everest from the comfort (and warmth) of your living room. I overheard that the chance to experience the climb with the HTC Vive controllers was booked out for the rest of the week!
Check out previews for both. Here’s Eagle Flight:
And Everest VR:
Immersive cinema with Lucasfilm. The entire session was a dream come true for fans of Star Wars and cinematic film, as well as audiophiles. Anyone who’s watched Season 4 of Arrested Development on Netflix is familiar with the ability to watch parallel storylines within the same episode. Lucasfilm allowed us to experience that same interactive narrative with VR and Star Wars Episode 7!
They also let us in on their creative process for Star Wars: Trials on Tatooine. They reiterated the creative process espoused in many other game-making sessions: (a) Define the desired experience first, then test it. (b) Simplify the interaction. VR is still new; right now we are trying to get players to believe they are in another world. Slow the pacing at the beginning and allow them to explore the world. We don’t want complicated interactions to distract them from what’s happening around them. Let them enjoy the immersion. (c) Apply positive fail-throughs. If the player does something wrong in-game, don’t let the game script make them feel bad by telling them they did something wrong.
What “affordance” really means. Since The Design of Everyday Things by Don Norman, the term “affordance” has been overused and misused. He updated the book with some clarification of the terminology. Affordances are not signifiers. Affordances define what actions are possible. What we think those objects can do can be right or wrong. To ensure that affordances are clear, we use signifiers as clues to indicate what we can do. For example, a door with no doorknob or handle is an affordance: it can open or close. Placing a pull bar, a signifier, on the door clues us into the notion that we can pull it open.
Virtual World Fair. The team behind the first 3D theme park ride for Universal Studios talked about how brands and other consumer products can take advantage of VR. They introduced the Virtual World’s Fair, a theme park in VR that is eerily similar in concept to Disney World’s Epcot.
Brands, countries and organizations can own a pavilion in the world, like shops in a mall, where players can explore and shop for the latest and greatest.
Film vs. Games vs. VR. Repeated in many sessions today was that the rules that guide films and games are not applicable in VR. We have to create our own language and build best practices specific to it. For example, close up shots in movies will not work. In VR, we would end up invading the player’s personal space. In VR, we are the camera.
Ambisonic vs. binaural audio. Use an ambisonic mic to capture sound and use binaural audio in VR. Ambisonics is a full-surround sound capture technique; it’s the audio equivalent of light fields, capturing sound pressure from all directions. Binaural audio is the equivalent of stereoscopic video. A common mistake is conflating binaural with spatialized audio. Binaural is for headphone playback; ambisonics are for specialized speaker arrays. Binaural has issues with coloration and rotation, and works best if the player’s head is static; ambisonics have a flatter frequency response.
“Presence”. The biggest buzzword since the “cloud.” Presence is hard to achieve. There was a study done on rats wearing VR, and they had trouble too! To achieve presence, we should think about how our world absorbs the player and what might distract them:
- Use diegetic cues to nudge their attention. If something is too interesting, the player has no reason to look away or try anything else.
- Design with the vestibular system in mind. Nausea sucks. We do not want dizziness to be associated with VR.
- Flow. What we’re doing should perfectly match the skills required to do it.
- Immersion. Give the brain a reason to feel what it is feeling. We do not have to feel like we’re somewhere in order to be engaged.
Redirected walking. Redirected walking came up again in 3 sessions I was in today! With the hype surrounding room-scale tracking, it is important that we implement illusions that keep users safe and nausea-free! Vision dominates vestibular sensation. 3 types of redirected walking were introduced:
- Rotational gains. The player’s rate of rotation can be greater or less than their physical rotation, e.g. turning 90 degrees in real life = turning 80 degrees in VR.
- Curvature gains. The virtual world rotates as the player walks in a straight line. With this technique, players can walk in a complete circle in the real world while perceiving themselves to walk a straight line in VR.
- Translation gains. The player can walk faster or slower in VR compared to real life, e.g. walking 9 meters in the real world can translate to 6 meters in the virtual world.
For anyone interested, this 2010 study discusses thresholds for each type of redirected walking. Because the study predates today’s VR devices, a follow-up study is needed; the thresholds may have shifted.
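The three gain types boil down to simple scalings of the tracked motion. Here’s a rough sketch of each (my own illustration, not code from any session; function names and gain values are hypothetical):

```javascript
// Rotational gain: scale the user's physical rotation, so turning 90
// degrees in the room can read as e.g. 80 degrees in VR.
function applyRotationalGain(physicalRotationDeg, gain) {
  return physicalRotationDeg * gain; // gain < 1 turns less in VR, > 1 turns more
}

// Translation gain: scale the user's physical displacement, so 9 meters
// walked in the room can read as e.g. 6 meters in VR.
function applyTranslationGain(physicalMeters, gain) {
  return physicalMeters * gain;
}

// Curvature gain: inject a small virtual yaw per meter walked, so a
// physically curved path is perceived as a straight line in VR.
function applyCurvatureGain(physicalMeters, degreesPerMeter) {
  return physicalMeters * degreesPerMeter; // extra virtual yaw to apply
}
```

The 2010 study mentioned above is about finding how large these gains can get before players notice, which is why re-running it on modern headsets matters.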
Enabling hands in VR. Hands are the most important input for interaction. A large proportion of your sensory and motor cortex is devoted to the hands. The dominant hand is used for precise control, while the non-dominant hand can be used as a point of reference or for gross movement. Hands can be used synchronously to pull a heavy lever and asynchronously to climb a ladder. Currently, simple virtual hands are somewhat useful for selecting small targets, targets in cluttered regions and moving targets. Ray extenders (extensions of our hands in VR) are better for distant targets.
Hello everyone! I wrapped up the first day at the Game Developers Conference (GDC) in San Francisco! It’s the first Monday after the switch to daylight saving time, so a morning cup of joe in Moscone West was a welcome sight!
Wow! All of the VR sessions were very popular and crowded. In the morning, I was seated in the overflow room for the HTC Vive session. Attendees were lucky to be able to go to 2 VR sessions back-to-back. There would be lines wrapping around the halls and running into other lines. By the afternoon, when foot traffic was at its highest, it was easy to get confused as to which line belonged to which session. Luckily, the organizers took into account the popularity of the VR sessions and moved them to larger rooms for the next 4 days!
On the third floor, there was a board game area where everyone could play the latest board game releases like Pandemic Legacy and Mysterium as well as a VR play area where everyone could try out the Vive and other VR games.
Sessions & Takeaways
I sat in on 6 sessions:
- A Year in Roomscale: Design Lessons from the HTC Vive and Beyond.
- You are not building a game, but an experience. Players are actually doing something actively with their hands vs. a game controller.
- There are 3 questions that players ask when they are starting a VR experience that should be addressed:
- (a) Who am I?
- (b) What am I supposed to do?
- (c) How do I interact with the environment?
- Permissibility. New players always ask when they are allowed to interact with something, but there are safety issues when they get too comfortable. One developer told a story about how a player actually tried to dive headfirst into a pool while wearing a VR headset!
- Don’t have music automatically playing when they enter the game. It’s not natural in the real world. It’s better to have a boom box and have them turn on the music instead. In addition, audio is still hard to do perfectly. Players expect perfect audio by default. If they pick up a phone, they expect to hear it out of 1 ear, not both.
- Social Impact: Leveraging Community for Monetization, User Acquisition and Design.
- Social whales (SWs) have high social value, thus have the highest connection to other players, and are key to a high ROI. SWs make it easy for other players to connect with one another.
- There are 3 standard profiles that players fall under:
- (a) The atypical social whales that always want the best things.
- (b) The trendsetter, the one who wants to unite and lead.
- (c) The trend spotter, the players who want to be a part of something.
- When a social whale leaves a game, ROI falls and other players leave. This is because that 2nd-degree connection is gone. To keep players from leaving, it’s important to have game mechanics that address the following player needs:
- (a) Players want to belong.
- (b) Players want recognition as a valuable member.
- (c) Players want their in-game group to be recognized as the best vs. other groups.
- Menus Suck.
- A very interesting talk on rethinking how players access key menu items in VR.
- Have a following object like a cat! Touching different parts of the object will allow you to select different things. It’s much easier than walking back and forth between a menu and what you have to do.
- Job Simulator uses retro cartridges for menu selection.
- Create menu shortcuts with the player’s body. Have the user pull things out of different parts of their head.
- Eating as an interaction. In Job Simulator you can eat a cake marked “Exit” to exit the game. The cake changes to another dessert item marked “Are you sure?” to confirm the exit.
- Improving Playtesting through Workshops Focusing on Exploring.
- For games, we are experience testing (playtesting) not performing a usability test.
- For games, especially for VR, comfort comes first. Right after that it’s ease of use.
- When exploring desired experiences for a game, create a composition box to ensure you get ideas from all views of your development team.
- When observing play, look for actions (e.g. vocalizations, gestures) as well as for changes in posture and focus.
- The Tower of Want.
- Learn critical questions our designs must answer to engage players over the long term.
- Follow the “I want to…” and “so I can…” framework to unearth players’ short-term and long-term goals. Instead of asking why 5 times like we do in user research, we ask them to complete the framework’s “so I can…” sentence (e.g. I want to get good grades so I can get into college…so I can get a good job…so I can make a lot of money…so I can buy a house).
- The framework creates a ladder of motivations that incentivizes a player to complete each game level in that ladder daily.
- Cognitive Psychology of Virtual Reality: Basics, Problems and Tips.
- Psychology is the physics of VR.
- Use redirected walking to keep players within the same space.
- Design for optical flow. Put shadows over areas users are not concentrating on. It’ll help with dizziness.
- Players underestimate depth by up to 50%.
- Add depth by adding transitional rooms (portals). This helps ease the players into their new environment.
- Players can see a maximum of 6 meters ahead of them for 3D.
- In their peripherals, they can only see 2D.
- Design keeping in mind that 20–30% of the population has problems with stereoscopic vision.
We will be following closely all things UX, IoT, VR, AI. Our schedules are getting full with some great sessions and workshops. Check back in a week or so to read some of our impressions!
At the end of 2015, our team was wrapping up projects that would be shown at the main Oracle conference, Oracle OpenWorld.
As with every OOW, we like to come up with a fun project that shows attendees our spirit of innovation by building cool projects within Oracle.
The team was thinking about building something cool with kids’ racetracks. We all were collectively in charge of looking for alternatives, so we visited a toy store to get ideas and see products that already existed out there.
We saw some pretty cool racetracks, but none of them suited our needs for functionality, and of course we didn’t have enough time to invest in modifying one of them.
So, searching the internet, someone came up with the Anki OVERDRIVE cars, yes, the product announced back in 2013 during the Apple WWDC keynote. To sum up, Anki provides a racetrack of flexible plastic magnetic track pieces that can be chained together to allow for any racetrack configuration; rechargeable cars with an optical sensor underneath to keep each car on the track; a lot of fun features like virtual weapons and car upgrades; a companion app for both Android and iOS to operate the cars; and a software development kit (SDK).
For us, it was exactly what we were looking for. But now we needed to find a way to control the cars without using the companion app because, you know, that was boring and we wanted more action and go one step further.
So after discussing different approaches, I suggested controlling the cars with the Myo gesture control armband, which is basically a wireless, touch-free wearable gesture and motion control device. We had a Myo armband already, but we hadn’t played with it much. Good thing the Myo band has an SDK too, so we had everything we needed to build a cool demo 🙂
So it was time to get my hands dirty and start coding! The general idea was to build an Android app that receives Myo gestures and motion data, translates and maps those values, and sends them to the Anki car to control its speed, while also receiving messages from the car to know its status and take action. As a plus, we wanted to count laps to run a real racing contest with the attendees.
I started investigating how to control the Anki cars with the SDK, and I found that they just provide a C implementation of the message protocols and data-parsing routines necessary for communicating with the Anki Drive vehicles. The initial release has only the minimal number of functions required to use the message protocol and parse information sent by vehicles; in other words, it provides a limited subset of the messaging protocol. Knowing that, I was sure that tracking the number of laps would be a hard task, so I decided to table that for another day.
I dove into the SDK to understand the message protocols, how to translate them to the Android SDK to gain full control of the vehicle, and how to pair the cars with the Android device over Bluetooth. I have to admit, it was difficult at the beginning, as documentation is very limited. Also, I found that nothing you do will work until you send a message setting the SDK_MODE flag, so make a note of that if you want to do any Anki builds.
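For anyone hitting the same wall, here is a sketch of what that SDK-mode message looks like on the wire, based on my reading of the structures in Anki’s open-source C drive-sdk (ANKI_VEHICLE_MSG_C2V_SDK_MODE and the override-localization option flag). Treat the exact byte values as an assumption to verify against the SDK headers:

```javascript
// Hedged sketch: enabling SDK mode on an Anki Drive vehicle. Until this
// message is sent, the car ignores other SDK commands. Byte layout is my
// reading of the C drive-sdk's set-SDK-mode message; verify before use.
const ANKI_VEHICLE_MSG_C2V_SDK_MODE = 0x90;
const SDK_OPTION_OVERRIDE_LOCALIZATION = 0x01;

function buildSdkModeMessage(on) {
  // [payload size, message id, on flag, option flags]
  return Buffer.from([
    0x03,
    ANKI_VEHICLE_MSG_C2V_SDK_MODE,
    on ? 0x01 : 0x00,
    SDK_OPTION_OVERRIDE_LOCALIZATION,
  ]);
}
```

In the Android app, a buffer like this would be written to the vehicle’s BLE write characteristic right after connecting.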
Myo integration was more straightforward, as they already provide an Android SDK with cool examples. So I just had to code the Bluetooth pairing and map and translate gestures and motions into a valid speed for the Anki cars.
I set up two gestures/motions to control speed. The first was rotating the wrist to the right to increase speed or to the left to decrease it; the second was moving the arm up to increase speed or down to decrease it.
I’m sure there are a lot of gestures or motions we could have used and implemented, but those were enough for our proof of concept and demo.
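The mapping itself is straightforward. Here’s an illustrative sketch (function name, speed range, step size and dead zone are all my own invented values, not the demo’s actual code, which was Android/Java):

```javascript
// Illustrative only: map a Myo wrist-roll angle (degrees, relative to a
// calibrated neutral position) to a new car speed. A dead zone ignores
// small jitters; the result is clamped to the car's valid speed range.
function rollToSpeed(currentSpeed, rollDeg, opts) {
  const { step = 50, maxSpeed = 1000, minSpeed = 0, deadZone = 15 } = opts || {};
  if (Math.abs(rollDeg) < deadZone) return currentSpeed; // ignore jitter
  const delta = rollDeg > 0 ? step : -step; // right roll speeds up, left slows down
  return Math.min(maxSpeed, Math.max(minSpeed, currentSpeed + delta));
}
```

The arm up/down motion would feed the same kind of function with pitch instead of roll.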
Here you can see some development testing.
I’ve been to two conferences where this demo has been shown, and the attendees’ “wow” reaction is very gratifying, and that, my friends, is the whole idea of this demo and this team: the “wow” moment 🙂
Of course, it was shown at OOW 2015, and you can read more about it here.
Also, this demo has its own spot in OAUX Gadget Lab, so come by and have some fun time with it and our others demos that live in the lab.
So, here’s a new thing I’ve noticed lately, customizable wearables, specifically the Xiaomi Mi Band (#MiBand), which is cheap and completely extensible.
This happens to be Ultan’s (@ultan) new fitness band of choice, and coincidentally, Christina’s (@ChrisKolOrcl) as well. Although both are members of Oracle Applications User Experience (@usableapps), neither knew the other was wearing the Mi Band until they read Ultan’s post.
Since then, they’ve shared pictures of their custom bands.
The Mi Band already comes in a wider array of color options than most fitness bands, and a quick search of Amazon yields many pages of wristbands and other non-Xiaomi accessories. So there’s already a market for customizing the $20 device.
And why not, given it’s the price of a nice pedometer with more bells and whistles and a third the cost of the cheapest Fitbit, the Zip, leaving plenty of budget left over for making it yours.
Both Christina and Ultan have been tracking fitness for a long time and are early adopters, so I’m ready to declare this a trend, i.e. super-cheap, completely-customizable fitness bands.
Of course, as with anything related to fashion (#fashtech), I’m the last to know. Much like a broken clock, my wardrobe is fashionable every 20 years or so. However, Ultan has been beating the #fashtech drum for a while now, and it seems the time has come to throw off the chains of the dull, black band and embrace color again.
Or something like that. Anyway, find the comments and share your Mi Bands or opinions. Either, both, all good.
Editor’s note: Our team participated in many of the Summer of Innovation events held by Laurie (@lsptahoe) and her team. Here’s Part 2 of Mark’s (@mvilrokx) project from the IoT Hackathon held at Oracle HQ, July 30-31. Enjoy.
After all the prep work for the IoT hackathon, where we agreed on our use case and settled on the hardware we would be using, it was time to start the actual fun part: designing and coding!
Diane provided all the designs for the application, Joe built all the hardware and provided the “smarts” of the application and I was responsible for bringing it all together.
Let’s start with the sketch of the whole hardware setup:
Basically, the piezo is connected to the one analog input on the ESP8266 (A0) and to ground. In parallel with the piezo we put a 1MΩ pull-down resistor to reduce possible signal noise, a 1µF capacitor to smooth out the piezo’s signal over time and improve readings, and a 5.1V Zener diode to prevent the piezo from frying the ESP8266 (piezos can spike at 20–40V). The piezo can then simply be attached to a pipe with some plumber’s putty, ready to start sensing vibrations.
For our test setup, we created a closed circuit water flow with some copper pipes, a simple garden pump and a bucket of water, simulating an actual water system.
This turned out to work great for research and the actual demonstration during the hackathon.
ESP8266 WiFi Client
The whole reason for using the ESP8266 was to be able to connect the piezo to the cloud. The ESP8266 can be flashed with a Lua-based firmware called NodeMCU, which makes this whole process remarkably simple. Just set the ESP8266 in WiFi STATION mode and then connect to an available WiFi network. That’s it, 2 lines of (Lua) code:
```lua
wifi.setmode(wifi.STATION)
wifi.sta.config(<ssid>, <pwd>)
```
The board is now connected to the internet and you can perform e.g. REST API calls from it. In practice, we implemented this slightly differently because this isn’t very user friendly, but that’s outside the scope of this post.
All we had to do now was read the data from the piezo and send it via the internet to be processed and stored somewhere, e.g. on the Oracle Cloud IoT platform. Unfortunately, we didn’t have access to that platform, so we had to build something ourselves; see below.
Initially the plan was to use REST API calls to send the data to our backend, but this turned out to be too heavy for the little ESP8266 board, and we could only perform a few calls per second before it would freeze and reboot.
Instead, we opted to use the MQTT protocol (quote from http://mqtt.org/):
“MQTT is a machine-to-machine (M2M)/”Internet of Things” connectivity protocol. It was designed as an extremely lightweight publish/subscribe messaging transport. It is useful for connections with remote locations where a small code footprint is required and/or network bandwidth is at a premium…”
Sounds like a perfect match, and the Lua firmware has built-in support for MQTT!
We turned our ESP8266 into an MQTT client and used CloudMQTT as our broker. Once we switched to MQTT, we were able to send data at a much higher frequency, but still not fast enough compared to the number of samples we wanted to collect from the piezo (100s/sec).
So instead of sending all the data we collected from the piezo to a backend, we decided to do some of the processing on the ESP8266 chip itself. We would collect 100 samples, then calculate their mean, median and standard deviation and send those to the backend instead, as fast as we could. In the end it turned out that calculating the standard deviation was too much for the board and it started affecting our sampling frequency, so we just dropped that altogether.
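The summarization step looks something like this, sketched in JavaScript for readability (the real firmware was Lua on NodeMCU, and the names here are my own):

```javascript
// Collect a batch of raw piezo readings and reduce them to a small
// summary before publishing. Standard deviation was dropped in our build
// because it was too slow on the ESP8266.
function summarize(samples) {
  const n = samples.length;
  const mean = samples.reduce((a, b) => a + b, 0) / n;
  const sorted = [...samples].sort((a, b) => a - b); // sort a copy for the median
  const median = n % 2
    ? sorted[(n - 1) / 2]
    : (sorted[n / 2 - 1] + sorted[n / 2]) / 2;
  return { n, mean, median };
}
```

Publishing one small summary per 100 samples is what let us keep the 100s/sec sampling rate while staying within what the board and broker could handle.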
Piezo data is now being pushed from our little board to the broker; next we need to store that data in a database. For this we need another MQTT client that listens to the messages from the ESP8266 and stores them as they arrive. We used node.js and the MQTT.js package to create our client and hooked it up to MongoDB.
This gave us the flexibility to change the data that gets sent from the board (which was just some JSON) without having to deal with DDL. For example, we managed to cram extra attributes into the data sent from the ESP8266 that contained information about the pipe’s material (“copper” or “PVC”) and the location of the piezo on the pipe (“close to bend”) all without changing anything other than the code on the ESP8266 that captures this information.
This information would be useful in our model as different pipe materials or other characteristics could have an effect on it, although we didn’t get to use it for the Hackathon.
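The bridge between the broker and MongoDB can be sketched roughly as below, using the MQTT.js package the post describes. The topic name, payload fields and Mongo wiring are assumptions for illustration, not the hackathon’s exact code:

```javascript
// Turn an incoming MQTT payload (JSON from the ESP8266) into a MongoDB
// document, preserving whatever extra attributes the board crammed in
// (pipe material, sensor location, ...) and stamping an arrival time.
function toDocument(payloadBuffer, receivedAt) {
  const reading = JSON.parse(payloadBuffer.toString());
  // reading might look like: { sensor: "sensor1", mean: 12.3, median: 11.8,
  //   material: "copper", location: "close to bend" }
  return { ...reading, receivedAt };
}

// Subscribe to the readings topic and insert each message as it arrives.
function startBridge(brokerUrl, collection) {
  const mqtt = require('mqtt'); // MQTT.js, as used in the project
  const client = mqtt.connect(brokerUrl);
  client.on('connect', () => client.subscribe('piezo/readings'));
  client.on('message', (topic, payload) => {
    collection.insertOne(toDocument(payload, Date.now()));
  });
  return client;
}
```

Because the document is just the parsed JSON plus a timestamp, adding a new attribute on the ESP8266 side requires no schema change on the backend, which is the flexibility described above.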
The final piece of the puzzle was to display all this information in a useful manner to the end user.
For this we had a web application that would connect to the MongoDB and process the information available there. Users could monitor usage per device, or aggregated in any number of ways: devices could be grouped by restroom, floor, building, campus or company-wide.
You could also aggregate by device type: shower, toilet, urinal, etc. A budget could be allocated, again at any of these levels and over any timeline, e.g. 100 gallons/day/toilet or 100,000 gallons/quarter/building. Notifications would get sent out when you go over a budget or when “unusual” activity gets noticed, e.g. a device is nowhere near its budget BUT it has used much more today than it normally does, which could indicate a leak.
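The “unusual activity” check is the interesting part: a device can be well under budget yet far above its own historical norm. A hedged sketch of that logic (the 3x threshold and function name are invented for illustration):

```javascript
// Flag a device as over-budget and/or a possible leak. A device using
// several times its own historical daily mean may be leaking even while
// comfortably under its allocated budget.
function checkDevice(todayGallons, budgetGallons, historicalDailyMean) {
  const alerts = [];
  if (todayGallons > budgetGallons) alerts.push('over-budget');
  if (todayGallons > 3 * historicalDailyMean) alerts.push('possible-leak');
  return alerts;
}
```

Comparing each device to its own baseline rather than only to the budget is what makes the leak case detectable at all.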
Those were all the software parts that made up our final solution, here’s an overview:
Notice how this architecture allows us to easily scale the individual parts without affecting one another: we can scale up the number of sensors, scale the MQTT broker, scale the node.js server and scale the MongoDB.
One final component that we did not really get to highlight in the presentation during the hackathon is that the MQTT client on the ESP8266 is configured both to send messages (the piezo data, as shown before) and to receive messages.
This allowed us to control the sensors remotely by sending them commands from the broker (through another MQTT client). As soon as you switched on an ESP8266 module, say “sensor1”, it would announce itself. The node.js server would be listening for these messages and indicate in the database that “sensor1” is online and ready to accept commands.
From another MQTT client, controlled from an admin web application, we could then send a command to this specific ESP8266 and tell it to either start sensing and sending data, or stop. This was done for practical purposes, because we were producing so much data that we were in danger of running out of free capacity on CloudMQTT 🙂 but it turned out to be a very useful feature that we are planning to investigate further and implement in future versions.
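On the device side, handling those commands reduces to a tiny dispatcher. A sketch (shown in JavaScript; the real handler was Lua on the board, and the topic layout and command names are assumptions):

```javascript
// Apply a remote command received over MQTT to the sensor's state.
// e.g. the device subscribes to "sensors/sensor1/commands" and the admin
// app publishes "start" or "stop" to that topic.
function handleCommand(state, command) {
  switch (command) {
    case 'start':
      return { ...state, sensing: true };  // begin sampling and publishing
    case 'stop':
      return { ...state, sensing: false }; // stop publishing to save broker capacity
    default:
      return state; // ignore unknown commands
  }
}
```

Keeping the handler this dumb means new commands can be added broker-side first and rolled out to devices later without breaking older firmware.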
In the end, we didn’t win any prizes at the hackathon, but I did learn a lot about IoT and plan to use that in future projects here at the AppsLab. Stay tuned for more blog posts on those projects.
2016 has been a whirlwind so far, and February kept up the pace. Here’s a quick rundown of what we’ve been doing.
As we did last year, OAUX made a trip to APAC again this year to meet partners, customers and Oracle people, show our Expo of goodies and talk simplicity-mobility-extensibility, Glance, Scan, Commit and our overall goal of increasing user participation.
This year, Noel (@noelportugal) and I made the trip to Australia and spent an awesome, warm week in beautiful Sydney. Noel showed the portable Smart Office demo, including the Smart Badge, that we debuted at OpenWorld in October, and I showed a couple visualizations and Glance on a handful of devices.
Don’t let the picture fool you. It was taken after the end of Jeremy’s (@jrwashley) talk and before the official start of the Expo, during lunch. Once people finished eating, the room filled up quickly with 80 or so partner attendees.
On the second day in the Sydney office, we got the chance to chat with local Oracle sales reps, consultants and architects, and I was lucky enough to meet Stuart Coggins (@ozcoggs) and Scott Newman (@lamdadsn) who read this blog.
It’s always invigorating to meet readers IRL. I’ve poured nine years into this blog, writing more than 2,000 posts, and sometimes the silence is deafening. I wonder who’s out there reading, so it’s always a treat.
Oh, and Stuart had a board with blinky lights, an IoT demo he’d shown a customer that day. So, I was immediately intrigued.
Turns out Stuart and Scott create demos similar to our own and share the same nerdy passions we do. To wit, check out the Anki car hack Stuart and some colleagues did for Pausefest 2016 in Melbourne earlier this month.
You’ll recall the Anki cars are one of our latest favorite shiny toys to hack.
Overall, it was a great week. Special thanks to John P for hosting us and making the trip all the more fun.
While we were away, Thao (@thaobnguyen) and the HQ team hosted a group of analysts in the Cloud and Gadget Labs, including Vinnie Mirchandani (@dealarchitect), who posted a nice writeup, including the money quote:
A walkthrough of their UX lab was like that of Q’s workshop in Bond movies. An Amazon Echo, smart watches, several gestural devices point to further changes in interfaces. Our expectations are being shaped by rapidly evolving UX in our cars and homes.
Somewhere Noel feels warm and fuzzy because the Q-meets-Tony Stark aesthetic is exactly what he was hoping to capture in his initial designs for our Gadget Lab.
Anyway, given all the excitement lately, I’m only now getting a chance to encourage you to read a post by Sarah Smart over on VoX about stuff, “Wearables, IoT push Oracle’s emerging tech development.”
So yeah, a lot going on, and conference season is just beginning. Stay tuned for more.
Because I live in Portland, I’m often asked if “Portlandia” is accurate.
Actually, Glance has grown beyond wearables to support cars and other devices, the latest of which is Noel’s (@noelportugal) gadget du jour, the LaMetric Time (@smartatoms).
Insert mildly amusing video here.
And of course Noel had to push Glance notifications to LaMetric, because Noel. Pics, it happened.
The text is truncated, and I tried to take a video of the scrolling notification, but it goes a bit fast for the camera. Beyond just the concept, we’ll have to break up the notification to fit LaMetric’s model better, but this was only a few minutes of effort from Noel. I know, because he was sitting next to me while he coded it.
In case you need a refresher, here’s Glance of a bunch of other things.
I didn’t have a separate camera so I couldn’t show the Android notification.
We haven’t updated the framework for them, but if you recall, Glance also worked on Google Glass and Pebble in its 1.0 version.
In September 2014, Oracle Applications User Experience (@usableapps) opened a brand new lab that showcases Oracle’s Cloud Applications, specifically the many innovations that our organization has made to and around Cloud Applications in the past handful of years.
We call it the Cloud User Experience Lab, or affectionately, the Cloud Lab.
Our team has several projects featured in the Cloud Lab, and many of our team members have presented our work to customers, prospects, partners, analysts, internal groups, press, media and even schools and Girl and Boy Scout troops.
In 2015, the Cloud Lab hosted more than 200 such tours, actually quite a bit more, but I don’t have the exact number in front of me.
Beyond the numbers, Jeremy (@jrwashley), our group vice president, has been asked to replicate the Cloud Lab in other places, on the HQ campus and abroad at other Oracle campuses.
People really like it.
In October 2015, we opened an adjoining space to the Cloud Lab that extends the experience to include more hands-on projects. We call this new lab, the Gadget Lab, and it features many more of our projects, including several you’ve seen here.
In the Gadget Lab, we’re hoping to get people excited about the possible and give them a glimpse of what our team does because saying “we focus on emerging technologies” isn’t nearly as descriptive as showing our work.
So, the next time you’re at Oracle HQ, sign up for a tour of the Cloud and Gadget Labs and let us know what you think.
Editor’s note: Here’s the first post from our new-ish researcher, Tawny. She joined us back in September, just in time for OpenWorld. After her trip to Disney World, she talked eagerly about the MagicBand experience, and if you read here, you know I’m a fan of Disney’s innovative spirit.
Planning a Disney World trip is no small feat. There are websites that display crowd calendars to help you find the best week to visit and the optimal parks to visit on each of those days so you can take advantage of those magic hours. Traveling with kids? Visiting during the high season? Not sure which FastPass+ to make reservations for?
There are annually updated guidebooks dedicated to providing you the most optimal attraction routes and FastPass+ reservations, based on thousands of data points for each park. Beginning in 2013, Disney introduced the MagicBand, a waterproof bracelet that acts as your entry ticket, FastPass+ holder, hotel key and credit card holder. The bands are part of the MyMagic+ platform, consisting of four main components: MagicBands, FastPass+, My Disney Experience, and PhotoPass Memory Maker. Passborterboard lists everything you can do with a MagicBand.
I got my chance to experience the MagicBand early this January.
These are both open edition bands. This means that they do not have customized lights or sounds at FP+ touchpoints. We bought them at the kiosk at Epcot after enviously looking on at other guests who were conveniently accessing park attractions without having to take out their tickets! It was raining, and the idea of having to take out anything from our bags under our ponchos was not appealing.
The transaction was quick and the cashier kindly linked our shiny new bands to our tickets. Freedom!!!
The band made it easy for us to download photos and souvenirs across all park attractions without having to crowd around the photo kiosk at the end of the day. It was great being able to go straight home to our hotel room while looking through our Disney photo book through their mobile app!
Test Track at Epcot made the most use of the personalization aspect of these bands. Before the ride, guests could build their own cars with the goal of outperforming other cars in 4 key areas: power, turn handling, environmental efficiency and responsiveness.
After test driving our car on the ride, there were still many things we could do with our car such as join a multiplayer team race…we lost 🙁
What was really interesting was watching the guests who were fortunate enough to show off personalized entry colors and sounds, a coveted status symbol amongst MagicBand collectors. The noise and colors were a mini attraction on their own! I wish our badge scanners said hi to us like this every morning…
When used in conjunction with the My Disney Experience app, the band has a lot of potential:
- Order ahead food + scan to pick up or food delivery while waiting in a long line.
- Heart sensor + head-facing camera to take pictures within an attraction to capture happy moments.
- Haptic feedback to tell you that your table is ready at a restaurant. Those pagers are bulky.
So what about MagicBands for the enterprise context?
Hospitals may benefit, but some argue that the MagicBand model works exclusively for Disney because of its unique ecosystem and the heavy cost it would take to implement it. The concept of the wearable is no different from the badges workers have now.
Depending on the permissions given to the badgeholder, she can badge into any building around the world.
What if we extend our badge capabilities to allow new/current employees to easily find team members to collaborate and ask questions?
What if the badge carried all of your desktop and environmental preferences from one flex office to the desk so you never have to set up or complain about the temperature ever again?
What if we could get a push notification that it’s our cafeteria cashier’s birthday as we’re paying and make their day with a “Happy Birthday?”
That’s something to think about.
Before IoT became ‘The’ buzzword, there was M2M (machine to machine). Some industries still refer to IoT as M2M, but overall the term Internet of Things has become the norm. I like the term M2M because it describes better what IoT is meant to do: Machines talking to other machines.
This year our team once again participated in the AT&T Developer Summit 2016 hackathon. With M2M on our minds, we created a platform to allow machines and humans to report extraordinary events in their neighborhood. Whenever a new event was reported (by machine or human), devices and people (notified by an app) connected to the platform could react accordingly. We came up with two possible use cases to showcase our idea.
Virtual Gated Community
Gated communities are a great commodity for those wanting privacy and security. The problem is that these communities usually come with a high price tag. So we came up with a turnkey solution for a virtual gate using M2M. We created a device using the Qualcomm DragonBoard 410c board with wifi and bluetooth capabilities. We used a common motion sensor and a camera to detect cars and people not belonging to the neighborhood. Then, we used Bluetooth beacons that could be placed on residents’ keychains. When a resident drove (or walked) by the virtual gate, it would not trigger the automated picture and report to the system, but if someone without the Bluetooth beacon drove by, the system would log and report it.
We also created an app, so residents could get notifications as well as report different events, which brings me to our second use case.
Devices reacting to events
We used the AT&T Flow Designer and M2X platforms to create event workflows with notifications. A user or a device could subscribe to receive only the events they care about, such as a lost dog/cat, water leaks, etc. The really innovative idea here is that devices can also react to certain events. For example, a user could configure their porch lights to turn on automatically when someone nearby reported suspicious activity. If everyone on the street did the same, reporting such an event would effectively turn on all the porch lights on the street at once, which could be a pretty good crime deterrent.
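The subscribe-and-react idea boils down to a simple dispatch loop. This is an illustrative sketch only (the actual implementation used AT&T Flow Designer and M2X workflows, not this code), with made-up event types:

```javascript
// Each subscriber (a resident's app or a device like a porch light)
// registers the event types it cares about and a reaction to run.
// Dispatch returns the actions triggered by a reported event.
function dispatch(subscribers, event) {
  return subscribers
    .filter(sub => sub.types.includes(event.type))
    .map(sub => sub.onEvent(event));
}

// Example: a porch light reacts to suspicious activity; a neighbor's
// app only cares about lost pets, so it stays quiet for this event.
const porchLight = { types: ['suspicious-activity'], onEvent: () => 'lights-on' };
const neighborApp = { types: ['lost-pet'], onEvent: () => 'push-notification' };
const actions = dispatch([porchLight, neighborApp], { type: 'suspicious-activity' });
// actions is ['lights-on']
```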
We called our project “Neighborhood”, and we are still amazed at how much we were able to accomplish in merely 20+ hours.
SafeDrop is a secure box for receiving a physical package delivery without the recipient needing to be present. If you recall, it was my team’s project at the AT&T Developer Summit Hackathon earlier this month.
SafeDrop is implemented with an Intel Edison board at its core, coordinating various peripheral devices to produce a secure receiving product, and it won second place for “Best Use of Intel Edison” at the hackathon.
While many companies have focused on the online security of eCommerce, last-mile package delivery is still very much insecure. ECommerce is ubiquitous, and people somehow need to receive the physical goods.
The delivery company would tell you the order will be delivered on a particular day. You can wait at home all day to receive the package, or let the package sit in front of your house and risk someone stealing it.
Every year there are reports of package theft during holiday season, but more importantly, the inconvenience of staying home to accept goods and the lack of peace of mind, really annoys many people.
With SafeDrop, your package is delivered, securely!
How SafeDrop works:
1. When a recipient is notified of a package delivery with a tracking number, he enters that tracking number in a mobile app, which registers it with the SafeDrop box so the box knows it is expecting a package with that tracking number.
2. When the delivery person arrives, he just scans the tracking number barcode on the SafeDrop box; that barcode is the unique key to open the SafeDrop. Once the tracking number is verified, the SafeDrop will open.
3. While the SafeDrop is open, a video is recorded for the entire time the door is open, as a security measure. If the SafeDrop is not closed, a loud buzzer sounds until it is closed properly.
4. Once the package is inside the SafeDrop, a notification is sent to the recipient’s mobile app, indicating the expected package has been delivered, along with a link to the recorded video.
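Steps 1 and 2 amount to a one-time-key check: the tracking number is registered, then consumed on a successful scan. Here is a minimal sketch with hypothetical names (a real SafeDrop would also drive the door motor, camera and buzzer):

```javascript
// Step 1: the recipient registers an expected tracking number with the box.
function registerPackage(box, trackingNumber) {
  box.expected[trackingNumber] = { delivered: false };
}

// Step 2: the scanned barcode opens the box only if it matches a
// registered, not-yet-delivered tracking number.
function tryOpen(box, scannedBarcode) {
  const entry = box.expected[scannedBarcode];
  if (!entry || entry.delivered) return false;  // unknown or already used
  entry.delivered = true;  // the barcode is a one-time key
  // a real SafeDrop would now open the door, start video recording,
  // and queue the step-4 notification to the recipient's app
  return true;
}
```

Because the entry is marked delivered on the first successful scan, a reused barcode (or any unknown one) keeps the box shut, which is what makes the delivery non-disputable.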
In the future, SafeDrop could be integrated with USPS, UPS and FedEx to verify the package tracking number automatically. When the delivery person scans the tracking number on the SafeDrop, it would automatically update the status to “delivered” in that tracking record in the delivery company’s database too. That way, the entire delivery process is automated in a secure fashion.
This SafeDrop design highlights three advantages:
1. Tracking number barcode as the key to the SafeDrop.
That barcode is tracked during the entire delivery and always stays with the package, so it is fitting to use it as the “key” to open its destination. We do not introduce anything “new” or “additional” to the process.
2. Accurate delivery, which eliminates human mistakes.
Human error sometimes causes deliveries to the wrong address. With SafeDrop integrated into the shipping system, the focus is on a package (with tracking number as package ID) going to a target (SafeDrop ID associated with an address).
In a sense, the package (package ID) has its intended target (SafeDrop ID). The package can only be deposited into one and only one SafeDrop, which eliminates the wrong delivery issue.
3. Non-disputable delivery.
This dispute could happen: delivery company says a package has been delivered, but the recipient says that it has not arrived. The possible reasons: a) delivery person didn’t really deliver it; b) delivery person dropped it to a wrong address; c) a thief came by and took it; d) the recipient got it but was making a false claim.
SafeDrop makes things clear! If it is really delivered to SafeDrop, it is being recorded, and delivery company has done its job. If it is in the SafeDrop, the recipient has it. Really there is no dispute.
I love data, always have.
To feed this love and to compile data sets for my quantified self research, I recently added the Netatmo Weather Station to the other nifty devices that monitor and quantify my everyday life, including Fitbit Aria, Automatic and Nest.
Having so many data sets and visualizations, I’ve observed my interest peak and wane over time. On Day 1, I’ll check the app several times, just to see how it’s working. Between Day 2 and Week 2, I’ll look once a day, and by Week 3, I’ve all but forgotten the device is collecting data.
This probably isn’t ideal, and I’ve noticed that even something I expected would work, like notifications, I tend to ignore, e.g. the Netatmo app can send notifications on indoor carbon dioxide levels, outside temperature and rain accumulation, if you have the rain gauge.
These seem useful, but I tend to ignore them, a very typical smartphone behavior.
Unexpectedly, I’ve come to love the monthly emails many devices send me and find them much more valuable than shorter-interval updates.
Initially, I thought I’d grow tired of these and unsubscribe, but turns out, they’re a happy reminder about those hard-working devices that are tirelessly quantifying my life for me and adding a dash of data visualization, another of my favorite things for many years.
Here are some examples.
Although it’s been a while, I did enjoy the weekly summary emails some of the fitness trackers would send. Seems weekly is better in some cases, at least for me.
A few years ago, Jetpack, the WordPress analytics plugin, began compiling a year in review report for this blog, which I also enjoy annually.
If I had to guess about my reasons, I’d suspect that I’m not interested enough to maintain a daily velocity, and a month (or a week for fitness trackers) is just about the right amount of data to form good and useful data visualizations.
Of course, my next step is dumping all these data into a thinking pot, stirring and seeing if any useful patterns emerge. I also need to reinvigorate myself about wearing fitness trackers again.
With smartwatches, sometimes your fingers just aren’t good enough for the task at hand. Fortunately, some ingenious users have found a suitable alternative for when those digits just won’t do: their nose.
That thing sticking out from your face is enough like a fingertip to act as one in situations where your hands might be wet, dirty, or separated from your device by a layer of gloves.
Our own research, as well as that of Apple Watch research firm Wristly.co, has found users have occasionally resorted to their nose, and at reasonable numbers, too. Wristly found in one of their surveys that 46% of respondents had used their nose on their watch, and another 28% hadn’t, but were willing to try.
While users are probably not opting to use their nose when their fingers will do, this usage pattern fits into a larger question of how we interact with our devices: What’s the best way to interact with a device at any given time? When do we or should we use touch versus voice, gesture versus mouse, nose versus finger?
What I love about the nose tap is that it’s something that happened with users out in the real world, with real world situations. It’s doubtful this sort of usage would have been found in the lab, and may not have been considered when the Apple Watch was being designed. After all, with California’s beautiful weather, one might not consider what a glove-wearing population has to go through.
But now with this knowledge, designers should be asking themselves, “will my users ever need to nose tap? If so, how do I make sure it will work for them?” It sounds a little silly, but it could make an app or feature more useful to some users. And researchers should also be asking the same questions.
This goes for any novel, unexpected way users interact with any product, software or hardware: why are they doing it that way, and what is it telling us about their needs or underlying problems?
And the best way to find those novel, unexpected interactions? By seeing (or at least asking) how people use these products in the real world.
Worn Out With Wearables
That well-worn maxim about keeping it simple, stupid (KISS) now applies as much to wearable tech (see what I did there?) user experience as it does to mobile or web apps.
The challenge is to keep on keeping “it” simple as product managers and nervous C-types push for more bells and whistles in a wearable tech market going ballistic. Simplicity is a relative term in the fast changing world of technology. Thankfully, the Xiaomi Mi Band has been kept simple and the UX relates to me.
I first heard about the Mi Band with a heads-up from OAUX AppsLab chief Jake Kuramoto (@jkuramot) last summer. It took me nearly six months to figure out a way to order this Chinese device in Europe: when it turned up on Amazon UK.
I’ve become jaded with the current deluge of wearable tech and the BS washing over it. Trying to make sense of wearable tech now makes my head hurt. The world and its mother are doing smartwatches and fitness trackers. Smartglasses are coming back. Add the wellness belts, selfie translators that can get you a date or get you arrested, and ingestibles into the mix; well it’s all too much to digest. There are signals the market is becoming tired too, as the launch of the Fitbit Blaze may indicate.
But after 7 days of wearing the Mi Band, I have to say: I like it.
Mi User Experience Es Tu User Experience
My Mi Band came in a neat little box, complete with Chinese language instructions.
Setup was straightforward. I figured out that the QR code in the little booklet was my gateway to installing the parent App (iOS and Android are supported) on my iPhone and creating an account. Account verification requires an SMS text code to be sent and entered. This made me wonder where my data was stored and its security. Whatever.
I entered the typical body data to get the Mi Band set up for recording my activity (by way of steps) and sleep automatically, reporting progress on the mobile app or by glancing at the LEDs on the sensor (itself somewhat underwhelming in appearance. This ain’t no Swarovski Misfit Shine).
I charged up the sensor using yet another unique USB cable to add to my ever-growing pile of Kabelsalat, slipped the sensor into the little bracelet (black only, boo!), and began tracking step, sleep and weight progress (the latter requires the user to enter data manually).
I was impressed by the simplicity of operation, balanced by attention to detail and a friendly style of UX. The range of locale settings, the quality of the visualizations, and the very tone of the communications (telling me I was on a “streak”) were something I did not expect from a Chinese device. But then Xiaomi is one of the world’s biggest wearable tech players, so shame on me, I guess.
The data recorded seemed to be fairly accurate. The step count seemed to be a little high for my kind of exertion and my sleep stats seemed reasonable. The Mi Band is not for the 100 miles-a-week runners like me or serious quantified self types who will stick with Garmin, Suunto, Basis, and good old Microsoft Excel.
For a more in-depth view of my activity stats, I connected the Mi Band to Apple Health and liked what I saw on my iPhone (Google Fit is also supported). And of course, the Mi Band app is now enabled for social. You can share those bragging rights like the rest of them.
But, you guessed it. I hated the color of the wristband. Only black was available, despite Xiaomi illustrations showing other colors. WTF? I retaliated by ordering a Hello Kitty version from a third party.
The Mi Band seems ideal for the casual-to-committed fitness type and budding gym bunnies embarking on New Year’s resolutions to improve their fitness who need the encouragement to keep going. At a cost of about 15 US dollars, the Mi Band takes some beating. It’s most easily compared with the Fitbit Flex, and that costs a lot more.
Beyond Getting Up To Your Own Devices
I continue to enjoy the simple, glanceable UX and reporting of my Mi Band. It seems to me that its low price is hinting at an emergent business model that is tailor-made for the cloud: Make the devices cheap or even free, and use the data in the cloud for whatever personal or enterprise objectives are needed. That leaves the fanatics and fanbois to their more expensive and complex choices and to, well, get up to their own devices.
So, for most, keeping things simple wins out again. But the question remains: how can tech players keep on keeping it simple?
Mi Band Review at a Glance
- Crafted, personal UX
- Mobile app visualizations and Apple and Google integration
- Lack of colored bands
- Personal data security
- Unique USB charging cable
- Underwhelming #fashtech experience
Your thoughts are welcome in the comments.
It has become tradition now for us, AppsLab, the OAUX emerging technologies team, that the first thing in a New Year is to fly to Las Vegas, not solely testing our luck on the casino floor (though some guys did get lucky), but also attending to the real business–participating in the AT&T Developer Summit Hackathon to build something meaningful, useful, and future-oriented, and hopefully get lucky and win some prizes.
Noel (@noelportugal), who participated in 2013, 2014 and last year, and his team were playing a grand VR trick–casting a virtual gate on a Real neighborhood–not exactly the VR as you know it, but rather, several steps deep into the future. Okay, I will just stop here and not spoil that story.
On the other side of the desk, literally, David, Osvaldo (@vaini11a), Juan Pablo and I (@yuhuaxie) set our sights on countering Mr. Grinch, with the Christmas package theft reports still fresh in people’s minds. We were team SafeDrop, and we carefully forged a safe box for receiving Christmas gift packages. Unsurprisingly, we called it SafeDrop! And of course, you can use it to receive package deliveries at other times of the year too, not just at Christmas.
In terms of the details of how the SafeDrop was built, and how you would use it, I will explain each in future posts, so stay tuned. Or shall I really explain the inner mechanism of SafeDrop and let Mr. Grinch have an upper hand?
By the way, team SafeDrop won 2nd place for “Best Use of Intel Edison.” Considering the large scale of the AT&T Hackathon, with over 1,400 participants forming 200+ teams, we felt that we were pretty lucky to win prizes. Each team member received a Basis Peak, a fitness watch you’ve read about here, and a Bluetooth headset, the SMS In-Ear Wireless Sport.
We know there are other approaches to prevent Mr. Grinch from stealing gifts, such as this video story shows: Dog poop decoy to trick holiday package thief. Maybe after many tries, Mr. Grinch would just get fed up with the smell and quit.
But we think SafeDrop would do the job on the very first try.
Stay tuned for more details of what we built.
Here is a blast from the past: a letter I wrote to some friends back in 1994 about my very first VR experience.
VR enjoyed a brief spin as the next big thing that year. Jaron Lanier had been featured in the second issue of Wired magazine and virtual reality arcades began to appear in the hipper shopping malls. In short, it was as hyped and inevitable then as it is again today.
So without any further ado, here is my unedited account of what virtual reality was like in 1994:
For my birthday last weekend, Janine, Betsy, Tom and I tried an interesting experiment: a virtual reality arcade in San Francisco called Cyber Mind.
It was a pleasant enough little boutique, not overrun by pimply-faced hordes as I had expected. They had a total of ten machines: two sit-down, one-person flight simulator contraptions, and two sets of four networked platforms. We chose to play one of the four-person games called Dactyl Nightmare.
I stepped up on a sleek-looking platform and an attendant lowered a railing over my head so that I would not wander off while my mind was in other worlds. I then strapped on a belt with more equipment and cables and donned a six pound Darth-Vader-on-steroids helmet. The attendant placed a gun in my hand.
When they pulled the switch I found myself standing on a little chessboard floating in space. There were a total of four such platforms with stairs leading down to a larger chessboard, all decorated by arches and columns. I could look in any direction and if I held out my arm I could see a computer-generated rendition of my arm flailing around, holding a gun. If I pushed the thumb switch on top of the gun I began to walk in whatever direction I was looking in.
I began bumping into columns and stumbling down stairs. It wasn’t long before I saw Janine, Betsy, and Tom also stumbling around, walking with an odd gait, and pointing guns at me. The game, as old as childhood itself, was to get them before they could get me. Usually, by the time I could get my bearings and take careful aim, someone else (usually Betsy) had sneaked up behind me and blasted me into computer-generated smithereens. After a few seconds, I reformed and rejoined the hunt.
This happy situation was somewhat complicated by a large green Pterodactyl with the bad habit of swooping down and carrying off anyone who kept firing their guns into the ground (which was usually where I tended to fire). If you were true of heart and steady of aim you could blast the creature just before its claws sank in. I managed this once, but the other three or four times I was carried HIGH above the little chessboard world and unceremoniously dropped.
After a quick six minutes it was all over. The total cost was $20 for the four of us, about a dollar per person per minute. I found the graphics interesting but not compelling and resolved to come back in a few years when the technology had improved.
I was not dizzy or disoriented during the game itself, but I emerged from my helmet slightly seasick, especially after the second round. This feeling persisted for the rest of the day. But it was a worthy experiment. Twenty dollars and a dizzy day: a small price to pay for my first glimpse at virtual reality.
Look, I’m as fond of holodecks and the matrix as the next nerd. I was having queasy VR experiences back in 1994. That’s me just last month strapped into a cheap plastic viewer, staring boldly into the future. I’ve been thinking and writing about virtual reality for over twenty years now.
But are we there yet? Is VR ready for mainstream adoption? And, aside from a few obvious niche cases, does it have any significant relevance for the enterprise?
This is the first of a series of “VR Skeptic” blog posts that will explore these questions. The AppsLab has already started to acquire some VR gear and is hunting for enterprise use cases. I’ll share what I find along the way.
So why am I a skeptic? Despite all the breathless reviews of the Oculus Rift over the last few years and the current hype storm at CES, the VR industry still faces many serious hurdles:
- Chicken and egg: developers need a market, the market needs developers
- Hand controls remain awkward
- The headsets are still bulky
- Most PCs will need an upgrade to keep up
- People are still getting sick
To this I would add what I’ve noticed about the user experience while sampling Google Cardboard VR content:
- Limited range of view unless you’re sitting in a swivel chair
- Viewer fatigue after wearing the goggles for a few minutes
- Likelihood of missing key content and affordances behind you
- Little or no interactivity
- Low quality resolution for reading text
But every time I’m ready to give up on VR, something like this pulls me back: Google Cardboard Saves Baby’s Life (hat tip to my colleague Cindy Fong for finding this).
Immersion and Serendipity
I think VR does have two unique qualities that might help us figure out where it could make an impact: immersion and serendipity.
Immersion refers to the distinct feeling of being “inside” an experience. When done well, this can be a powerful effect, the difference between watching a tiger through the bars of its cage and being in the cage with the tiger. Your level of engagement rises dramatically. You are impelled to give the experience your full attention – no multi-tasking! And you may feel a greater sense of empathy with people portrayed in whatever situation you find yourself in.
Serendipity refers to the potential for creative discovery amidst the jumbled overflow of information that tends to come with VR. A VR experience typically shows you more content than you can easily absorb, including many things you won’t even see unless you happen to be looking in the right direction at the right moment. This makes it harder to guide users through a fixed presentation of information. But it might be an advantage in situations where users need to explore vast spaces, each one using his or her instincts to find unique, unpredictable insights.
It might be fruitful, then, to look for enterprise use cases that require either immersion or serendipity. For immersion this might include sales presentations or job training. Serendipity could play a role in ideation (e.g. creating marketing campaigns) or investigation (e.g. audits or budget reconciliations where you don’t know what you’re looking for until you find it).
Because VR content and viewers are not yet ubiquitous, VR today tends to be a solitary experience. There are certainly a number of solitary enterprise use cases, but the essence of “enterprise” is collaboration: multiple people working together to achieve things no single person could. So if there is a killer enterprise VR app, my guess is that it will involve rich collaboration.
The most obvious example is virtual meetings. Business people still fill airports because phone and even video conferences cannot fully replace the subtleties and bonding opportunities that happen when people share a physical space. If VR meetings could achieve enough verisimilitude to close a tricky business deal or facilitate a delicate negotiation, that would be a game changer. But this is a very hard problem. The AppsLab will keep thinking about this, but I don’t see a breakthrough anytime soon.
Are there any easier collaborations that could benefit from VR? Instead of meetings with an unlimited number of participants, perhaps we could start with use cases involving just two people. And since capturing subtle facial expressions and gestures is hard, maybe we could look for situations which are less about personal interactions and more about a mutual exploration of some kind of visualized information space.
One example I heard about at last year’s EyeO conference was the Mars Rover team’s use of Microsoft’s HoloLens. Two team members in remote locations could seem to be standing together in a Martian crater as they decide where to send the rover next. One could find an interesting rock and call the other over to look at it. One could stand next to a distant feature to give the other a better sense of scale.
Can we find a similar (but more mundane) situation? Maybe an on-site construction worker with an AR headset sharing a live 360 view with a remote supervisor with a VR headset. In addition to seeing what the worker sees, the supervisor could point to elements that the worker would then see highlighted in his AR display. Or maybe two office managers taking a stroll together through a virtual floor plan allocating cubicle assignments.
These are some of the ideas I hope to explore in future installments. Stay tuned and please join the conversation.
Before Christmas, I ran out of gas for the first time.
All things considered, I was very lucky. It was just me in the car, and the engine died in a covered parking structure, in a remote corner with few cars. Plus, it was the middle of the day, during the week before Christmas, so not a lot of people were out and about anyway.
Could have been a lot worse.
The reason I ran out of gas is more germane here, and as a result of my mishap, I stumbled onto another interesting experience.
I’ll start with the why. If you read here, you’ll know I’ve been researching the quantified self, which I understand roughly as the collection of data related to me and the comparison of these data sets to identify efficiencies.
As an example, I tracked fitness data with a variety of wearables for most of last year.
About a year ago, Ben posted his impressions of Automatic, a device you plug into your car’s OBD-II diagnostics port that will quantify and analyze your driving data and track your car’s overall health, including its fuel consumption and range.
I have since added Automatic to my car as another data set for my #QS research. I’ve found the data very useful, especially with respect to the real cost of driving.
Since Automatic knows when you fill the tank and can determine the price you paid, it can provide a very exact cost for each trip you make. This adds a new dimension to mundane tasks.
Suddenly, running out for a single item has a real cost, driving farther for a sale can be accurately evaluated, splitting the cost of a trip can be exact, etc.
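The per-trip math behind this is simple enough to sketch. A minimal example of what that cost calculation looks like (the function and the figures here are hypothetical illustrations, not Automatic’s actual API or data):

```python
def trip_cost(miles, mpg, price_per_gallon):
    """Estimate what a single trip costs in fuel.

    Automatic can do this precisely because it knows the actual
    price paid at the last fill-up; here we just pass a price in.
    """
    gallons_used = miles / mpg
    return gallons_used * price_per_gallon

# A 6-mile round trip for one grocery item, at 20 MPG and $3.50/gal:
print(f"${trip_cost(6, 20, 3.50):.2f}")  # → $1.05
```

Seeing that a quick errand costs a dollar in gas is exactly the kind of small, concrete number that changes how you think about driving.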
One feature I appreciate is the range estimate. From the wayback archives, in 2010, I discussed the experience and design challenges of a gas gauge. My Jeep warns of low fuel very early and won’t report the range once the low fuel indicator has been tripped.
However, Automatic always reports an estimated range, which I really like.
Maybe too much, given this is how I ran out of gas. The low fuel indicator had been on for a day, but I felt confident I could get to a gas station with plenty of time, based on the estimated range reported by Automatic.
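Presumably the estimate is just remaining fuel multiplied by a recent fuel-economy average (my guess at the method, not anything Automatic documents), which is exactly why it can mislead: if your recent miles were efficient highway driving and your next leg is stop-and-go, the real range falls well short of the number on the screen. The numbers below are made up to show the gap:

```python
def estimated_range(gallons_left, recent_mpg):
    # A naive range estimate: assume the recent average economy holds.
    return gallons_left * recent_mpg

gallons_on_e = 1.0  # roughly a gallon left when the low-fuel light trips
print(estimated_range(gallons_on_e, 28))  # after highway driving: 28.0 miles
print(estimated_range(gallons_on_e, 15))  # in city stop-and-go: 15.0 miles
```
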
Armed with my false confidence, I stopped to eat on the way to the gas station, and then promptly ran out of gas.
To be clear, this was my fault, not Automatic’s.
In my 2010 musings on the gas gauge, I said:
It does seem to be nigh impossible to drive a car completely out of gas, which seems to be a good thing, until you realize that people account for their experiences with the gauges when driving on E, stretching them to the dry point.
Yeah, I’m an idiot, but I did discover an unforeseen negative: over-reliance on data. I knew it was there, like in every other data vs. experience point-counterpoint. I know better than to rely on data alone, but I still failed.
Something I’ll need to consider more deeply as my #QS investigation continues.
Now for the interesting experience I found.
After calling roadside assistance for some gas, my insurance company texted me a link to monitor “the progress of my roadside event.” Interesting copywriting.
That link took me to an experience you might recognize.
Looks like Uber, doesn’t it? Because it’s a web page, the tow truck icon wasn’t animated like the Uber cars are, but overall, it’s the same, real-time experience.
I like the use here because it gives a tangible sense that help is on the way, nicely done.
So, that’s my story. I hope to avoid repeating this in the future, both the running-out-of-gas and the over-reliance on data.
Find the comments and share your thoughts.