It's 2016, and it seems this is the year of VR. Of course, we at the AppsLab can't miss the beat!
While Osvaldo (@vaini11a) started to look into Unity-based VR capabilities and prototypes, I wanted to take a look at the WebVR-based approach. The prospect of delivering a VR experience in a browser, and over the web, suddenly makes VR much more accessible – WebVR can be designed to work with or without a VR headset. In a sense, it is an extension of responsive web design, adjusting to different renderers and viewers gracefully.
The first thing that came to my mind was to VR-enable some of our visualization demos, and I picked John's Transforming Table for the first try. After a series of hacks, I got a half-baked, non-functional result; if we had an Oculus Rift, we might have been able to get it working. A-Frame is at a very early stage, and there is still a lot to be desired.
I realized I needed a perspective change – instead of fitting the existing presentation and behavior into VR, WebVR/A-Frame is better suited to create a new presentation and behavior that blends with VR naturally.
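For readers curious what the A-Frame approach looks like, here is a minimal sketch of a 360-photo scene along the lines of the Gadget Lab tour. The image file name is hypothetical, and the library version shown is simply one current at the time of writing:

```html
<html>
  <head>
    <!-- A-Frame library (a version current as of early 2016) -->
    <script src="https://aframe.io/releases/0.2.0/aframe.min.js"></script>
  </head>
  <body>
    <!-- a-scene sets up the camera, the renderer and, if present, the headset -->
    <a-scene>
      <!-- a-sky wraps an equirectangular 360 photo around the viewer -->
      <a-sky src="gadget-lab-360.jpg"></a-sky>
    </a-scene>
  </body>
</html>
```

The same page works in a desktop browser with mouse-look, on a phone with tilt, or inside a headset – the graceful degradation across viewers mentioned above.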
Jake (@jkuramot) and Noel (@noelportugal) had just come back from a trip to Sydney, and told us a story about someone following our team and reading our posts from far, far away – the other side of the planet 🙂
And wouldn’t it be nice if they could see our Gadget Lab too?
So after a couple of days, here it is – our Gadget Lab in VR.
You may step into the Gadget Lab, and step out to the Oracle campus. While you are in the virtual Gadget Lab, you can see several stations I have labelled to show some of the human-machine interfaces we are investigating.
I don't have a fancy stereoscopic 360 camera rig, so I just used an old Android phone to produce the scenes of the Oracle campus and the Gadget Lab interior. The phone is a little under-powered, so you may see some blanked-out areas, because the phone would reboot when I tried to capture the entire scene.
Let me know what you think in the comments.
Editors’ update: To experience this virtual tour of our Gadget Lab, simply navigate to http://theappslab.com/vr/appslab in any browser. Even if you don’t have a VR headset, you can get a feel for the space and see a little bit about the demos.
In a desktop browser, you can use your mouse to look around the room. On a mobile browser, you can tilt your device to pan around the lab, and if you want the full VR experience, tap the glasses icon in the lower right-hand corner, and pop your phone into a compatible viewer, e.g. Google Cardboard.
The future is now.
Yesterday, I talked, in part, about how we can, if we choose, work all the time, from the moment we open our eyes, until we close them for sleep.
So today’s as good a Friday as any to remind you to balance all that work with some fun.
And here’s Anthony showing our Gadget Lab demos to some kids on Spring Break.
We make a conscious effort to take time out and show our fun stuff to kids. Why? Because fun is important.
Back in 2012, I pondered what I could show at Kscope12 that would spice up my presentation. I wanted to add something fun to my session, and after chatting with Noel, we settled on the Rock ’em Sock ’em robots controlled by text/phone as a fun way to keep people’s attention.
During the conference, I stumbled upon something. Turns out when you show people something fun, their creative juices flow, and you get lots of cool ideas. Like this:
I’m not an expert in every functional area of business. I may know some Financials from my years as an E-Business Suite product manager, but I don’t know much about sales or HR or supply chain.
So, when we have something we think will be important for users in the not-so-distant future, i.e. an emerging technology, we often build a fun demo to help those domain-specific uses rise to the surface.
People play with the demo, they have some fun with it, and we talk about ways that particular technology could apply to their everyday work.
This works, believe it or not, and we’ve been repeating that formula with great results for several years. It’s part of our strategy.
For example, robot arms and race cars to investigate gesture as an input device, Internet of Things demos for when things happen and for connecting dumb things to the Internet, and using your mind to drive a robotic ball.
It’s become an annual team activity to come up with the year’s fun demo, and everyone loves the fun demos, building them, showing them, playing with them.
So have fun out there. We will.
Now, if only I could talk Noel into resurrecting those Rock ’em Sock ’em robots.
Earlier this month, our strategy and roadmap eBook was released. In it, you’ll find all the whys, wherefores, whats and hows that drive the Simplicity-Mobility-Extensibility design philosophy we follow for Oracle Cloud Applications.
The eBook is free, as in beer, and it’s a great resource if you find yourself wondering why we do what we do. Download it now.
In said (free) eBook, you’ll find this slide.
Guessing I’ve seen our fearless leader and GVP Jeremy Ashley (@jrwashley) present this slide 20-some times around the World, and each time he asks, “What’s the first thing you do in the morning?”
Inevitably, 90% of the audience says pick up my phone. He’ll then ask how many people in the audience have only one computing device, two, three or more? Overwhelmingly, audiences have three or more.
These are international audiences, so there’s no geographical bias.
I love this slide because it succinctly portrays the modern work experience, spent across devices, all day long. As Jeremy says, we have the ability to work from the moment we open our eyes to wake to the moment we close them for sleep.
You can debate whether that is a good thing or not, but the fact is our users are mobile and device-happy. They use whatever device fits their needs at any given time.
And devices keep changing. For instance, this slide had a head-mounted display glyph at one point to represent a Google Glass-like device, and the smartwatch looked like a Pebble, not an Apple Watch.
That’s where we (@theappslab) come in; we’re always reading the tea leaves, leaning into the future, trying to anticipate what users will want next so we can skate to where the puck will be.
Mixing metaphors is fun.
Anyway, download the free eBook and learn about the OAUX strategy and roadmap and keep reading here to see where we fit.
VR is big and is going to be really big for the game industry, and you could feel it in the air at GDC 2016. For the first time, GDC added two days of VR development-focused events and sessions, and most of the VR sessions were packed – the lines were long even 30 minutes before the sessions started, and many people were turned away. The venue for VR sessions had to be changed to double the capacity for day 2.
There was lots of interest and enthusiasm among game designers, developers and business people, as VR represents a brand new direction, a new category, and a new genre for games!
We are still at the dawn of VR games, with hardware, software, content, approaches, etc. starting to come together. Based on what I learned during GDC, I'd like to summarize the state of various aspects of VR development.
1. VR Headset
This is the first thing that comes to mind when we talk about VR, right? After all, the immersive experience begins the moment we cover our eyes with the VR headset. There are a couple of VR headsets available on the market, and a slew of VR headsets debuting very soon.
From the $10 Google Cardboard, to the $100 Samsung Gear VR, to >$1,000 custom rigs, VR headset prices span a wide spectrum, and so do capability and performance. Most people who want to get hold of VR will likely choose among the Samsung Gear VR, PlayStation VR, Oculus Rift, and HTC Vive. Here is a brief comparison so you have an idea of what you can get.
Samsung Gear VR
It uses specific Samsung phones to show VR content, so performance is relatively low, limited by the phone hardware, usually at 60fps. It has a built-in touchpad for input, but you may also use an optional gamepad. It has no wire connecting to a PC, so you can spin around in a chair and not worry about tangling yourself. It has no position tracking.
If you own a Samsung S6/S7, or the Edge version, why not get the Gear VR to experience the magic? $99 seems really inexpensive for a new gadget. Even if you have a non-Samsung phone, you can still slip it into the rig and use the Gear VR as an advanced version of a Cardboard viewer. Of course, you will not have the touchpad capability.
PlayStation VR
It uses a PS4 to run VR games, so it has real game-grade hardware to run VR content at 120fps, with consistently high performance. For input, it has a gamepad and tracked controllers, like holding a beacon with a light bulb. It has limited position tracking.
The unique part of PSVR is that it is meant to be played alongside regular gamers on the TV screen, making it a party game in your living room. The person wearing the PSVR has an immersive view of the game, while the others watching the TV can fight against or play along with that player (a game character with a VR headset). If you have a PS4 at home, shelling out another $399 seems reasonable for a decent VR game experience. But you'll have to wait until October 2016 to buy one, right before the holiday season.
Oculus Rift
This is expected to be a high-end VR headset, with games running on a powerful Oculus-ready computer. It will have very high performance, showing VR content at 90fps. It will have a wire connected to the computer, which limits how far you can spin around. It has limited position tracking too. It does not come cheap at $599, but you can get it pretty much now, in March.
HTC Vive
It is considered to be even higher spec than the Oculus Rift. It requires a muscular PC, with motion sensors and motion controllers attached, and it delivers very high performance for VR games. It has tracked hand controllers for input, and provides room-scale position tracking, which puts it above everyone else. To designers and developers, this room-scale tracking capability may offer another dimension for experiments.
It costs $799, because it is high-end hardware bundled with a bunch of bells and whistles. You can expect to get it in April if you pre-order one now.
The HoloLens is always another interesting device for VR/AR. Also, rumor has it that Google is building a VR headset too – one that will be much more powerful than its Cardboard viewer.
2. Game Engine for VR
Recent trends indicate that game engine companies are making it easier (or free) for people to access game engine software and develop games with it. There were quite a number of sessions covering detailed topics on specific game engines; based on my impressions, here is a list to try out.
Unity 5.3 by Unity Technologies – It has a free version (Personal Edition) with full features. I believe it is the most popular and widely used game engine, with cross-platform deployment to the full range of mobile, VR, desktop, web, console and TV. Many of the alt.ctrl.GDC exhibits also used Unity to create the games their custom controllers interact with.
Unreal Engine 4 by Epic Games – It is a sophisticated game engine used to develop some AAA games. Epic also showcased two VR games, Bullet Train and Showdown. The graphics and visual effects look astonishing.
Lumberyard by Amazon – It is a new entry in the engine game, but it is free with full source, meaning you can tweak the engine if necessary. It would be a good choice for developing online games, with no need to worry about hosting a robust game backend; I guess that's where Amazon wants to get its share of the game. It does not support VR yet, but such support is coming very soon.
3. Capture Device
For many VR games, designers and developers can simply create the virtual game world using a game engine and other graphics software. But in order to show real-world events inside a VR world, you need a special video camera that can take 360-degree, or spherical, photos and videos.
Most of us have probably never seen or used this type of camera, me included, so I don't have opinions on specific models. I did use the native camera app on an Android device to capture spherical photos, but it was difficult to take many shots and stitch them together.
A step further is stereoscopic video capture, which takes two images of the same object at slightly different angles to produce depth. These are high-end professional rigs, with many custom-built versions, and prices can easily go above $10k.
This area is still quite fluid, and I am not sure it will ever go mainstream. I hope consumer versions at reasonable prices become available, so we can produce some VR videos too.
4. Conventions and Best Practices
With fewer than 100 real VR game titles in total, people in the VR field are still figuring things out, and no clear conventions have yet surfaced for designers, developers or players.
In some sessions, VR game designers and developers shared the lessons they learned while producing their first VR games: interaction patterns, reality trade-offs (representational, experiential, and interaction fidelity), and the fidelity contract in terms of physics rules, affordances, and narrative expectations. Audio (binaural audio) and visual effects also help realize an immersive experience.
As more research goes into VR psychology and UX, we shall see more and more "best practices" converge, and conventions will emerge to put designers and players on the same page.
5. Area of Use
By far, games are the most natural fit for the VR experience, and the entire game industry is driving toward it. Cinematic VR will be another great fit; as ILM X Lab demonstrated with "Star Wars," viewers may "attach" to different characters to experience various viewpoints in the movie.
People are also exploring VR as a new way of storytelling in journalism, a new way of exercising for sports (e.g. riding a stationary bike in the gym while feeling like you are driving a Humvee in a war zone), and a new way of education, e.g. going inside a machine and looking at the inner mechanism of an engine.
VR brings another avenue of artistic expression as a new art medium, challenges us to advance technology to a new frontier, and at the same time provides us with great opportunities.
Things are just getting started!
We are still in the early days of virtual reality. Just as in the early days of manned flight, this is a time of experimentation.
What do we wear on our heads? Helmets? Goggles? Contact lenses? Or do we simply walk into a cave or dome or tank? What do we wear or hold in our hands? Game controllers? Wands? Glowing microphones? Bracelets, armbands, and rings? Or do we just flap our arms in the breeze? Do we sit? Stand? Walk on a treadmill? Ride a bike? Or do we wander about bumping into furniture and each other?
As a person who prefers to go through life in a reclining position, most of these options seem like too much bother. I have a hard time imagining how VR could become ubiquitous in the enterprise if employees have to constantly pull on complicated headgear, or tether themselves to some contraption, or fight for access to an expensive VR cave. VR in the workplace must be ergonomic, safe, and easy to use even before you’ve had your morning coffee.
Lately I’ve been enjoying VR content, goggle-free, from the comfort of my lazyboy using an Apple TV app called Littlstar. Instead of craning my head back and forth, I just slide my thumb to and fro on the Apple remote. I can fly through the air and swim with the dolphins without working up a sweat or stepping on a cat.
To be clear: watching VR content on TV is NOT real VR. It’s nowhere near as immersive. But the content is the same and the experience is surprisingly good. Navigation is actually better: because it is effortless I am more inclined to keep looking around.
The Apple remote strikes me as the perfect VR controller. It is light as a feather, easy to hold, lets you pan and drag and click and zoom, and you can operate it blindfolded.
Watching VR content on TV also makes it easier to share. Small groups of people can navigate a virtual space together in comfort. One drawback: it's fun to be the person "driving," but abrupt movements can make everyone else a tad queasy.
What works in the living room might also work well at a desk – or in a meeting room. TVs are already replacing whiteboards and projection screens in many workplaces. And the central innovation of the fourth-generation Apple TV, the TV app, creates a marketplace in which new forms of group interaction can evolve. I expect there will be a whole class of enterprise TV apps someday.
For all these reasons, I have been pushing to create Apple TV app counterparts to the VR apps we are starting to build in the AppsLab. TV counterparts could make it easier to show prototypes in design meetings and customer demos. I feel validated by Tawny’s (@iheartthannie) report from GDC that Sony has adopted a similar philosophy.
Thanks to one of our talented developers, Os (@vaini11a), we already have one such prototype. It doesn’t do much yet; we are just figuring out how to display desktop screens in a VR environment. With goggles on I can use the VR app to spin from screen to screen in my office chair and look down at my feet to change settings. With the Apple TV counterpart app, I can do exactly the same thing without moving anything other than my thumb.
It’s still too early to predict how ubiquitous VR might become in the workplace or how we will interact with it. But TV apps, or something like them, may become one way to view virtual worlds in comfort.
Tawny (@iheartthannie) and I attended the 30th edition of GDC – the Game Developers Conference. As shown in Tawny's daily posts, there were lots of fun events, engaging demos, and interesting sessions – so many that we simply could not cover them all. With 10 to 30 sessions going on in any time slot, I wished for multiple "virtual mes" to attend some of them simultaneously. However, with only one "real me," I still managed to attend a large number of sessions, mostly 30-minute ones, to cover more topics at a faster pace.
Unlike Tawny's posts, which give you in-depth looks into many of the sessions, I will try to summarize the information and takeaways in two posts: Part 1 – Event and Impression; Part 2 – The State of VR. This post will cover the event overview and general impressions.
1. Flash Backward
After two days of VR sessions, this flashback kicked off the game portion of GDC with a sense of nostalgia, flashing through games like Pac-Man and Minesweeper, then evolving into console games, massively multiplayer games, social games (FarmVille), mobile games (Angry Birds), and on to VR games.
GDC has been running for 30 years, and many of the attendees were not even born back then. The Flashback started with Chris Crawford, the founder of GDC, and concluded with Palmer Luckey, the Oculus dude, who is 23 – with not much to flash back on, only looking forward to the new generation of games in VR. He will be back in 20 years for the retrospective 🙂
2. Awards Ceremony
On 3/16/2016, two award ceremonies were hosted in recognition of creativity, artistry and technological genius – the Independent Games Festival Awards, and the Game Developers Choice Awards. I believe they are the equivalent of the Oscars for the movie industry, and they ran in the exact same format as the Oscars.
As you can see, the big winner of the night was "Her Story" (by Sam Barlow), which won 5 of its 6 nominations. It is an indie title, but it also took 3 awards while competing with big producers, because it created a fresh way of storytelling through a game. And "The Witcher 3: Wild Hunt" took the honor of "Game of the Year." Gamers: check out the list, and check out the games if you have not played them.
The ceremony also honored Todd Howard with the "Lifetime Achievement Award." He is a designer, developer, director and producer of the award-winning titles "Oblivion," "Fallout 3," and "Skyrim," among others. Markus "Notch" Persson, the programmer and developer of Minecraft, took the honor of the "Pioneer Award." Yeah!
3. alt.ctrl.GDC
As a maker myself at the AppsLab, I found the alt.ctrl.GDC interactive exhibits to be extremely satisfying – just some insane ideas of how controllers can be made for games.
I tried most of the controllers: licking popsicles to suck up planet resources in a game; mutating a controller to change the object in a game so it flies, swims or crawls; cranking handles to drive tanks in a game.
"Keep Talking and Nobody Explodes" must have been one of the favorites at alt.ctrl.GDC 2015, and the mechanical box still stood out! It has turned into a real game – nominated in three categories, it won "Excellence in Design" at the IGF Awards ceremony. It is a fun game; check it out!
"Please Stand By" is my favorite from alt.ctrl.GDC 2016. What do you do when you find a vintage TV in a junkyard? Well, it has all its controls, even though they don't work anymore. After a wizardry spin, it came back to life – I overheard the secrets, if you are ever intrigued to know how they did it.
Now it shows many channels of game TV – of course, you have to tune it carefully with all the knobs and rabbit ears. Oh, there are some buttons on the back too that do some tricks. If it ever freezes on you, pound it or shake it, like you would an old TV.
4. Game Making and Animation
This is too big of a topic for a section in one blog post, so I am not going into any details.
I just want to appreciate how much work and thought people put into making a game. As an example, just look at this one slide from the UFC 2 session:
That is just one grappling position change, and it branches into so many permutations depending on how players control it. Now imagine working on the animation for each of those permuted position changes. So in UFC 2, the technical geniuses found procedural ways of automating some areas of animation.
Of course, there are many other aspects of game making – as indicated by the many categories of awards. In addition to the creative side, there is also the technological side of running massive online games, or dealing with all forms of devices.
As much as technological advances drive game development, game making drives technological advances! People are pushing the edge of the envelope in making the next generation of games in VR. Speaking of VR, stay tuned for my next post on "The State of VR."
I’ve been doing this job for various different organizations at Oracle for nine years now, and we’ve always existed on the fringe. So, having our own home for content within the Oracle.com world is a major deal, further underlining Oracle’s increased investment in and emphasis on innovation.
Today, I’m excited to launch new content in that space, which, for the record is here:
We have a friendly, short URL too:
The new content focuses on the methodologies we use for research, design and development. So you can read about why we investigate emerging technologies and the strategy we employ, and then find out how we go about executing that strategy, which can be difficult for emerging technologies.
Sometimes, there are no users yet, making standard research tactics a challenge. Equally challenging is designing an experience from scratch for those non-existent users. And finally, building something quickly requires agility, lots of iterations and practice.
All-in-all, I’m very happy with the content, and I hope you find it interesting.
The IoT Smart Office just happens to be the first project we undertook as an expanded team in late 2014, and we're all very pleased with the results of our blended research, design and development team.
I hope you agree.
In the coming months, we’ll be adding more content to that space so stay tuned.
When I first came to GDC, I didn't know what to expect. I was delightfully surprised to use my first gender-neutral restroom. The restroom had urinals and toilet seats. There was no fuss, other than a few people standing to take a picture of the sign above. It felt surreal using the restroom next to a stranger who was not the same gender as me. The idea is a positive new way of thinking and fits perfectly with one of the themes of the conference: diversity.
In my last games user research round table, one of the topics we spent a lot of time on was sexism and how we could do our part to include underrepresented groups in our testing. One researcher began with a story about a female contractor he worked with to perform a market test on a new game. One screener question surprised him the most:
What gender do you identify as?
Male [Next question]
Female [Thank her for her time. Dismiss]
O-M-G. The team went back and forth with the contractor for 4 iterations before she agreed to change that question in the screener. Her reasons were:
- Females are not representative of his game's audience. Wrong: females made up half of his previous game's total audience.
- Females are distracting; the males will flirt with the females during testing. Her solution: have one day to test all female testers and another day to test all male testers.
- Females don't like competitive shooting games. Wrong: see the first bullet point. As of March 2016, female preference for competitive games overlaps with male preference by 85%.
If your group of testers is randomly chosen, but they are all straight white males, is that a truly random sample? To build a successful game, it is important to test with a diverse group of people. Make sure that most, if not all, groups in your audience are represented in the sample. This will yield more diverse and insightful findings. You may have to change the language of your recruitment email to target different types of users.
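One lightweight way to keep a panel from turning out accidentally homogeneous is to recruit by quota per group rather than drawing blindly from the whole pool. A hypothetical JavaScript sketch (function, field names and quotas are all illustrative, not from any research tool):

```javascript
// Quota sampling sketch: recruit up to a fixed number of testers
// from each audience group instead of sampling the pool as one blob.
function quotaSample(pool, groupOf, perGroup) {
  // Bucket candidates by group (e.g. gender identity, age band).
  const byGroup = new Map();
  for (const person of pool) {
    const group = groupOf(person);
    if (!byGroup.has(group)) byGroup.set(group, []);
    byGroup.get(group).push(person);
  }
  // Take up to `perGroup` candidates from every bucket.
  // A real screener would randomize within each bucket first.
  const sample = [];
  for (const members of byGroup.values()) {
    sample.push(...members.slice(0, perGroup));
  }
  return sample;
}
```

With this shape, every group present in the pool shows up in the sample, so a demographic can no longer vanish from a test by chance – or by a biased screener question.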
For example, another researcher wanted a diverse pool of gamers with little experience. His only screener requirement was that they play games on a console for at least 6 hours a week. No genre of games was specified. He got a 60-year-old grandma who played Uno over Xbox Live with her grandkids for 6–8 hours every Saturday and Sunday. She took hours to get past level one, but because she was so meticulous and wanted to explore every aspect of the demo, she pointed out trouble spots in the game that most testers speeding through would miss!
Recently, on our own screeners at The AppsLab, we ask participants what gender they identify with instead of bucketing them into male or female. It's a small change, but a big step in the right direction toward equality.
The presence of UX
The presence of UX and user research has grown since last year. Developers and publishers recognize the importance of testing iteratively, early and often. In the "Design of Everyday Games" talk the other day, Christina Wodtke told the packed room that there were just 8 people in the same talk the year before. Going from 8 to a packed room of hundreds is huge growth, and a win for the user and for the industry!
Epic Games spoke about misconceptions that make it difficult to incorporate user experience into the pipeline. UX practitioners are like hedgehogs: we want to help by giving the product the extra hug it needs, but our quills aren't perceived as soft enough. Our goal is to deliver the intended experience to the targeted audience, not to change the design intent.
- Misconception #1: UX is common sense. Actually, the human brain is filled with perceptual, cognitive and social biases that affect both developers and users.
- Misconception #2: UX is another opinion. UX experts don’t give opinions. We provide an analysis based on our knowledge of the brain, past experience and available test data.
- Misconception #3: There aren't enough resources for UX. We find resources for QA testing to ensure there are no technical bugs; can we afford not to test for critical UX issues before shipping?
To incorporate UX into the pipeline, address these misconceptions. Don't be afraid of each other; just talk. Open communication is the key to creativity and collaboration. Start with small wins to show your value, working with those who show some interest in the process. Don't play UX police and jump on every UX issue just to start a test pipeline. Work together and measure the process.
Overall, I loved the conference. The week flew by quickly, and I was able to get great insights from industry thought leaders. The GDC activity feed was bursting with notes from parallel talks. I fell in love with the community, and I am in awe that a conference of this size grew from a meeting in a basement 30 years ago. I sure hope there is a UX track next year! I decided to end my week with a scary VR experience, Paranormal Activity VR. The demo focused on music and sound to drive the suspenseful narrative. Needless to say, I screamed and fell to my knees. It beats paying to go to a haunted maze every Halloween.
It's official: all demos are booked for the week. Anyone not on the list is subject to the standby line. I was lucky enough to score a 5:30pm demo for Bullet Train at the NVIDIA booth early this morning. When I walked by the line late in the evening, I found out that a lady had been waiting for at least an hour for her turn.
Raymond (@yuhuaxie), one of our developers, tried his luck at the "no reservations accepted" Oculus store-like booth 30 minutes before the expo opened, and still had to wait almost an hour before leaving the line for other session talks. Is it worth the hype? The wait? The fact that you're crouching and screaming at something no one else can see?
Apparently so! One common sentiment I heard from others who finished playing the demo was that the experience was so amazing that they didn't care about the friction to enjoy 10–15 minutes in virtual reality! For Bullet Train, there were several repeat visitors playing the fast-paced shooting game again and again!
Today, I had my chance to demo London Heist on the PS VR and Bullet Train on the Oculus Rift. Both are fast-paced shooting games. The head-mounted display (HMD) for the PS VR is much more forgiving for those who wear glasses. The HMD wears similarly to a bike helmet, but with no straps to mess with; to adjust, you simply slide the viewer forward and back, separate from the mounting. It's much lighter compared to the other HMDs and breathes better. Here's another play-through of the demo I went through.
London Heist has simple interactions for a shooting game. The game first eases you in as you ride as a passenger with your buddy through the streets of London. You can sit there and get a chance to orient yourself with your new surroundings. Instead of practicing how to grab guns, I gulped down a 7up instead 😡
Finally, a car chase ensues and there are bullets flying at you. The controls were simple. Pull the trigger to grab the gun. Once done, the gun is attached to you the entirety of the game. Just keep pulling the trigger to shoot for the rest of the game. When you run out of bullets, just grab the magazine right next to you with your free hand to reload! Easy peasy!
Bullet Train's controls have a slightly higher learning curve, but the experience is fulfilling. In the game you can teleport by creating a portal toward the destination you want, grab multiple guns to shoot, slow-mo the game (discoverability!), and grab bullets flying toward you in the air to throw them back at enemies.
There are so many things you can do that you forget how to do them all. I personally stumbled near the end: I was so busy trying to grab bullets in the air and throw them back that I forgot how to grab new guns! After the short demo, I felt myself begin to sweat. A change in mental model is needed, since typical shooter games allow you to press shortcut keys to perform those actions. In VR, you DO those actions. Luckily, that does not detract from the immersion at all. It was fun, and I heard that a few attendees came back to replay the demo with improved execution.
The change in mental model was mentioned in day 2 of the user research round table. We focused on mental models for game control patterns. All control schemes are inherently non-intuitive. The game industry has been lucky that developers used the same control patterns for first person shooters aka the Halo Scheme.
When we look at control schemes for other game genres, it’s a bit of a mess. This may be the same for VR, since controls depend on each game’s mechanics. Generally, players prefer gaze-based direction. This means that the direction you are looking in is the direction you expect to turn toward in the game.
In real life, you typically turn your torso to change direction. The preference for gaze-based direction is part of the Counter-Strike effect: those who are used to first-person shooter games are accustomed to looking to turn rather than rotating their torso to turn.
It’s definitely a new mental model to learn. We have to remember what technologies and experiences users are coming from and what platform and core experiences we are developing for, then make judgment calls based on that.
Look at these players actually turning! It was easy and turning was quick! Worked up a bit of a sweat here too.
— Tawny (@iheartthannie) March 17, 2016
The above is why the on-boarding experience for games is so important. Tutorials are necessary to ensure that players understand the core game mechanics. Players tend to overestimate themselves and skip tutorials when given the option to do so.
Rather than giving them the option to skip, the installed game should know whether it is your first time playing. First timers go through the tutorial. Everyone else who’s reinstalled the game on another device does not have to go through the tutorial again, but can still have the option to do so.
Space out tutorials evenly, or players will suffer information overload. Leave room for discoverability. If players can discover a mechanic within 10 minutes of playing after the core tutorial, it leads to greater user satisfaction. Induce information-seeking behavior and bring up the tutorial when they need it. Avoid front-loading the player.
More on Motivation
To better understand the psychology behind gamers’ motivations, Quantic Foundry looked at 2000 data points and found 12 unique motivations that fall into 6 themes:
- Action (Boom!) — destruction and excitement.
- Social (Let’s play together) — competition and community.
- Mastery (Let me think) — challenge and strategy.
- Achievement (I want more…) — completion and power.
- Immersion (Once upon a time) — fantasy, meaning to be another character or in another place, and story, to be caught up in a plot.
- Creativity (What if?) — design and discovery.
At a high level, there are 3 motivational clusters.
- Action — Social
- Mastery — Achievement
- Immersion — Creativity
Discovery is a bridge between Mastery — Achievement as well as Immersion — Creativity. Design is a bridge between Action — Social. These results were consistent across all geographic regions.
Not surprisingly, these game motivations mapped to personality traits. In psychological personality theory, there are the Big 5 personality traits.
When we drill down from the Big 5 to examine each trait, we find that it changes with context. For example, extraversion is typically associated with people who are social and energetic. Examining extraversion in the context of game motivations, we find that it is associated with people who are social, cheerful, thrill-seeking and assertive, and therefore likely to be motivated by games that fall into the Action — Social cluster.
Conscientiousness is associated with the Mastery — Achievement cluster. Openness is associated with the Immersion — Creativity cluster. Game motivations align with personality traits. Games are an identity management tool, so people play games that align with their personality traits.
There are some gender differences. Females are motivated by Fantasy, Design and Completion while males are motivated by Destruction, Competition and Fantasy. However, that difference is strongly dwarfed by age differences. Rather than designing for men and women, we should think about how games should be designed for different age groups.
The Action — Social cluster is the most age volatile group. As players grow older, Competition and Excitement drops. For females, story also drops. For males, Challenge also drops.
Imagine a game that changes its mechanics with you as you grow. Imagine if we could drive the health and wellness of our teams by employing the motivational UX strategy that is intrinsic to each person. That would be pretty cool!
The Expo opened today and will be open until the end of Friday! There was a lot to see and do! I managed to explore 1/3 of the space. Walking in, we have the GDC Store to the left and the main floor below the stairs. Upon entering the main floor, Unity was smack dab in the center. It had an impressive setup, but not as impressive as the Oculus area or Clash of Kings.
There were a lot of demos you could play, with many different types of controllers. Everyone was definitely drinking the VR Kool-Aid. Because of the popularity of some of the demos, reservations for a play session are strongly encouraged. Most, if not all, of the sessions were already booked for the whole day by noon. I managed to reserve the PS VR play session for tomorrow afternoon by scanning a QR code to their scheduling app!
The main floor was broken up into pavilions with games grouped by their respective countries. It was interesting to overhear others call their friends to sync up and say “I’m in Korea.” Haha.
I spent the rest of the time walking around the floor and observing others play.
— Tawny (@iheartthannie) March 16, 2016
I did get a chance to get in line for an arcade ride! My line buddy and I decided to get chased by a T-Rex! We started flying in the air as a Pterodactyl. The gleeful flight didn’t last long. The T-Rex was hungry and apparently really wanted us for dinner. It definitely felt like we were running quickly, trying to get away.
Another simulation others tried that we didn’t was a lala land roller coaster. In this demo, players can actually see their hand on screen.
— Tawny (@iheartthannie) March 16, 2016
Sessions & Highlights
Playstation VR. Sony discusses development concepts, design innovations and what PS VR is and is not. I personally liked the direction they are going for collaboration.
- Design with 2 screens in mind. For console VR, you may be making 2 games in 1. One in VR and one on TV. You should consider doing this to avoid having one headset per player and to allow for multiplayer cooperation. Finding an art direction for both is hard. Keep it simple for good performance.
- Make VR a fun and social experience. In a cooperative environment, you get 2 separate viewpoints of the same environment (mirroring mode) or 2 totally different screen views (separate mode). This means that innovation between competitive and Co-op mode is possible.
The AppsLab team and I have considered this possibility of a VR screen and TV screen experience as well. It’s great that this idea is validated by one of the biggest console makers.
A year of user engagement data. A year’s worth of game industry data, patterns and trends was the theme of all the sessions I attended today.
- There are 185 million gamers in the US. Half are women.
- 72 million are console gamers. Of those console owners the average age is ~30 years old.
- There are 154 million mobile gamers. This is thanks to the rise of free-2-play games. Mobile accessibility has added diversity to the market and brought a new group of players. Revenues grew because of broad expansion. The average age for the mobile group is ~39.4 years old.
- There are 61 million PC gamers thanks to the rise of Steam. These gamers tend to be younger at an average age of ~29.5yrs.
- There are different motivations as to why people play games. There are two groups of players: core vs. casual players. Universally, casual players primarily play games to pass time while waiting and as a relaxing activity.
- There is great diversity within the mobile market. There is an obvious gender split between what females and males play casually. Females tend to like matching puzzle (Candy Crush), simulation and casino games while males tend to like competitive games like sport, shooter and combat city builder games.
- When we look internationally, players in Japan have less desire to compete when playing games. Successful games there tend to be built on cooperative play.
- Most homes have a game console. In 2015, 51% of homes owned at least 2 game consoles. At the start of 2016, there was an increase of 40% in sales for current 8th generation game consoles (PS4, Xbox One, etc minus the Wii).
- Just concentrating on mobile gamers, 71% play games on both their smart phone and tablet, 10% play only on their tablet.
- Top factors leading to churn are lack of interest, failure to meet expectations and too much friction.
- Aside from Netflix and maybe YouTube, Twitch gobbles up the most prime-time viewers, almost 700K concurrent viewers as of March 2016. Its viewership is increasing despite competition from the launch of YouTube Gaming.
Day 1 — User research round table. This was my first round table at GDC, and it’s nice to be among those in the same profession. We covered user research for VR, preventing bias and testing with kids! Experts shared their failures on these topics and offered suggestions.
- Testing for Virtual Reality.
- Provide players with enough time warming up in the new environment before asking them to perform tasks. Use the initial immersive exposure to calibrate them.
- Be ready to pull them out at any indication of nausea.
- Use questionnaires to screen out individuals who easily get motion sickness.
- It’s important to remember that people experience sickness for different reasons. It’s hard to eliminate all the variables. Some people can have vertigo or claustrophobia that’s not necessarily the fault of the VR demo. The media has a bias toward that narrative: people think they are going to be sick, so they feel sick.
- Do not ask people if they feel sick before the experience else you are biasing them to be sick.
- Individuals are only more likely to feel sick if your game experience does not match their expectations. Some people feel sick no matter what.
- One researcher tested 700–800 people in VR. Only 2 persons said that they felt sick. 7–8 said they felt uncomfortable.
- An important question to ask is “At what point do they feel sick?” If you get frequent reports at a specific point vs. generalized reports, then you can do something to make the game better.
- Avoid bragging language. Keep questions neutral.
- Separate yourself from the product.
- Remember that participants see you as an authority. Offload instructions to the survey rather than relaying them yourself; relaying them yourself will bias the feedback.
- Standardize the experiment. Give the same spiel.
- The order of question is important.
- Any single geographic region is going to introduce bias. Only screen out regions if you think culture is going to be an issue.
- Testing with kids.
- It’s better to test with 2 kids in a room. Kids are not good at verbalizing what they know and do not know. Having 2 kids allows you to watch them verbalize their thoughts to each other as they ask questions and help each other through the game.
- When testing a group of kids at once, assign the kids their station and accessories. Allowing them to pick will end up in a fight over who gets the pink controller.
- Younger kids don’t think in granular scales, so allow for 2 clear options on surveys. A thumbs up and thumbs down works.
- Limit kids to one sugary drink or you’ll regret it.
Just like yesterday, the VR sessions were very popular. Even with the change to bigger rooms, lines for popular VR talks would start at least 20 minutes before the session started. The longest line I was in snaked up and down the hallway at least 4 times. The wait was well worth it though!
Today was packed. Many sessions overlapped one another. Wish I could have cloned 3 of myself 🙁
Throughout each session, I noticed points that have been repeated from yesterday’s daily roundup. There are definitely trends and general practices that the game industry has picked up on, especially in virtual reality. I’ll talk more about these trends later in this post.
PlayStation revealed the price of their new VR headset at $399! It’s said that PlayStation VR has over 230 developers on board and 160 diverse titles in development. 50 of those games will be available this October. More info at the PS VR launch event tomorrow 🙂
There is a game called Rez Infinite developed for the PS VR. The line to try out the game was long! I wanted to take a picture of someone playing the game, but they kindly asked for no filming or photography. Instead, here is a picture of the Day of the Dev banner!
Most popular VR demo so far
Aside from Eagle’s Flight, also built for PS VR, EverestVR lets you climb Mt. Everest from the comfort (and warmth) of your living room. I overheard that the chance to experience the climb with the HTC Vive controllers was booked out for the rest of the week!
Check out previews for both. Here’s Eagle’s Flight:
And Everest VR:
Immersive cinema with Lucasfilm. The entire session was a dream come true for fans of Star Wars and cinematic film, as well as audiophiles. Anyone who’s watched Season 4 of Arrested Development on Netflix is familiar with the ability to watch parallel storylines within the same episode. Lucasfilm let us experience that same interactive narrative with VR and Star Wars Episode 7!
They also let us in on their creative process for Star Wars: Trials on Tatooine. They reiterated the creative process espoused in many other game-making sessions: (a) Define the desired experience first, then test it. (b) Simplify the interaction. VR is still new; right now we are trying to get players to believe they are in another world. Slow the pacing at the beginning and allow them to explore the world. We don’t want complicated interactions to distract them from what’s happening around them. Let them enjoy the immersion. (c) Apply positive fail-throughs. If the player does something wrong in-game, don’t let the game script make them feel bad by telling them they did something wrong.
What “affordance” really means. Since The Design of Everyday Things by Don Norman, the term “affordance” has been overused and misused. He updated the book in a revised edition with some clarification on the terminology. Affordances are not signifiers. Affordances define what actions are possible. What we think those objects can do can be right or wrong. To ensure that affordances are clear, we use signifiers as clues to indicate what we can do. For example, a door with no doorknob or handle is an affordance: it can open or close. Placing a pull bar, a signifier, on the door clues us into the notion that we can pull it open.
Virtual World Fair. The team behind the first 3D theme park ride for Universal Studios talked about how brands and other consumer products can take advantage of VR. They introduced the Virtual World’s Fair, a theme park in VR that is eerily similar in concept to Disney World’s Epcot.
Brands, Countries and Organizations can own a pavilion in the world, like shops in a mall, where players can explore and shop the latest and greatest.
Film vs. Games vs. VR. Repeated in many sessions today was that the rules that guide films and games are not applicable in VR. We have to create our own language and build best practices specific to it. For example, close up shots in movies will not work. In VR, we would end up invading the player’s personal space. In VR, we are the camera.
Ambisonic vs. binaural audio. Use an ambisonic mic to capture sounds and use binaural audio for VR playback. Ambisonics is a full surround sound capture technique; it’s the audio equivalent of lightfields, capturing sound pressure from all directions. Binaural audio is the equivalent of stereoscopic video. A common mistake is treating binaural as the same thing as spatialized audio; it is not. Binaural is for headphone playback; ambisonics is for specialized speakers. Binaural has issues with coloration and rotation. Ambisonics has a flatter frequency response and works if the player’s head is static.
“Presence”. The biggest buzzword since “the cloud.” Presence is hard to achieve. There was a study done on rats wearing VR, and they had trouble too! To achieve presence, we should think about how our world absorbs the player and what might distract them:
- Use diegetic cues to nudge their attention. If something is too interesting, the player has no reason to look away or try anything else.
- Design with the vestibular system in mind. Nausea sucks. We do not want dizziness to be associated with VR.
- Flow. What we’re doing should perfectly match with the skills required to do it.
- Immersion. Give the brain a reason to feel what it is feeling. We do not have to feel like we’re somewhere in order to be engaged.
Redirected walking. Redirected walking came up in 3 sessions I was in again today! With the hype surrounding room tracking, it is important that we implement illusions that keep users safe and nausea-free! Vision dominates vestibular sensation. 3 types of redirected walking were introduced:
- Rotational gains. The player’s rate of rotation can be greater or less than their physical rotation, e.g. turning 90 degrees in real life = turning 80 degrees in VR.
- Curvature gains. The virtual world rotates as the player walks in a straight line. With this technique, players can walk in a complete circle in the real world while perceiving themselves to walk a straight line in VR.
- Translation gains. The player can walk faster or slower in VR compared to real life, e.g. walking 9 meters in the real world can translate to 6 meters in the virtual world.
For anyone interested, this 2010 study discusses thresholds for each type of redirected walking. Because the study predates the VR devices we have today, a follow-up study is needed, since the thresholds may have changed.
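To make the gains concrete, here is a minimal sketch in JavaScript; the function names and the per-frame framing are mine, not from any of the talks, so treat this as an illustration of the idea rather than any engine's actual API:

```javascript
// Rotational gain: the virtual rotation is the physical rotation scaled
// by a gain factor. A gain below 1 means the player must physically turn
// farther than they appear to turn in VR; above 1 means less.
function applyRotationGain(physicalDeltaDeg, gain) {
  return physicalDeltaDeg * gain;
}

// Translation gain: the virtual displacement is the physical displacement
// scaled by a gain factor, e.g. 9 m walked physically -> 6 m in VR.
function applyTranslationGain(physicalDeltaMeters, gain) {
  return physicalDeltaMeters * gain;
}

// Curvature gain would additionally inject a small virtual rotation per
// meter walked, so a physically curved path feels straight (omitted here).

// Example: a 90-degree physical turn rendered as an 80-degree virtual turn.
const virtualTurn = applyRotationGain(90, 80 / 90);

// Example: walking 9 meters physically maps to 6 meters virtually.
const virtualWalk = applyTranslationGain(9, 6 / 9);

console.log(virtualTurn, virtualWalk);
```

A real implementation would apply these scalings every frame to the tracked head pose, keeping each gain under the perceptual thresholds the study measured.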
Enabling hands in VR. Hands are the most important input for interaction. A large proportion of your sensory and motor cortex is devoted to the hands. The dominant hand is used for precise control, while the non-dominant hand can be used as a point of reference or for gross movement. Hands can be used synchronously to pull a heavy lever and asynchronously to climb a ladder. Currently, simple virtual hands are somewhat useful for selecting small targets, targets in cluttered regions and moving targets. Ray extenders (extensions of our hands in VR) are better for distant targets.
Hello everyone! I wrapped up the first day at the Game Developers Conference (GDC) in San Francisco! It’s the first Monday after daylight saving time, so a morning cup of joe in Moscone West was a welcome sight!
Wow! All of the VR sessions were very popular and crowded. In the morning, I was seated in the overflow room for the HTC Vive session. Attendees were lucky to be able to go to 2 VR sessions back-to-back. There would be lines wrapping around the halls and running into other lines. By the afternoon, when foot traffic was at its highest, it was easy to get confused as to which line belonged to which session. Luckily, the organizers took into account the popularity of the VR sessions and moved them to larger rooms for the next 4 days!
On the third floor, there was a board game area where everyone could play the latest board game releases like Pandemic Legacy and Mysterium as well as a VR play area where everyone could try out the Vive and other VR games.
Sessions & Take Aways
I sat in on 6 sessions:
- A Year in Roomscale: Design Lessons from the HTC Vive and Beyond.
- You are not building a game, but an experience. Players are actually doing something actively with their hands vs. a game controller.
- There are 3 questions that players ask when they are starting a VR experience that should be addressed:
- (a) Who am I?
- (b) What am I supposed to do?
- (c) How do I interact with the environment?
- Permissibility. New players always ask when they are allowed to interact with something, but there are safety issues when they get too comfortable. One developer told a story about how a player actually tried to dive headfirst into a pool while wearing a VR device!
- Don’t have music automatically playing when they enter the game. It’s not natural in the real world. It’s better to have a boom box and have them turn on the music instead. In addition, audio is still hard to do perfectly. Players expect perfect audio by default. If they pick up a phone, they expect to hear it out of 1 ear, not both.
- Social Impact: Leveraging Community for Monetization, User Acquisition and Design.
- Social Whales (SWs) have high social value, and thus the highest connection to other players, and are key to a high ROI. SWs make it easy for other players to connect with one another.
- There are 3 standard profiles that players fall under:
- (a) The atypical social whales that always want the best things.
- (b) The trendsetter, the one who wants to unite and lead.
- (c) The trend spotter, the players who want to be a part of something.
- When a social whale leaves a game, ROI falls and other players leave, because that 2nd-degree connection is gone. To keep players from leaving, it’s important to have game mechanics that address the following player needs:
- (a) Players want to belong.
- (b) Players want recognition as a valuable member.
- (c) Players want their in-game group to be recognized as the best vs. other groups.
- Menus Suck.
- A very interesting talk on rethinking how players access key menu items in VR.
- Have a following object like a cat! Touching different parts of the object will allow you to select different things. It’s much easier than walking back and forth between a menu and what you have to do.
- Job Simulator uses retro cartridges for menu selection.
- Create menu shortcuts with the player’s body. Have the user pull things out of different parts of their head (below).
- Eating as an interaction. In job simulator you can eat a cake marked with an “Exit” to exit the game. The cake changes to another dessert item marked with an “Are you sure?” to ensure the exit.
- Improving Playtesting through Workshops Focusing on Exploring.
- For games, we are experience testing (playtesting) not performing a usability test.
- For games, especially for VR, comfort comes first. Right after that it’s ease of use.
- When exploring desired experiences for a game, create a composition box to ensure you get ideas from all views of your development team.
- When observing play, look for actions (e.g. vocalizations, gestures) as well as for changes in posture and focus.
- The Tower of Want.
- Learn critical questions our designs must answer to engage players over the long term.
- Follow the “I want to…” and “so I can…” framework to unearth players’ short-term and long-term goals. Instead of asking why 5 times like we do in user research, we ask them to complete the framework’s “so I can…” sentence (e.g. I want to get good grades so I can get into college… so I can get a good job… so I can make a lot of money… so I can buy a house).
- The framework creates a ladder of motivations that incentivizes a player to complete each game level in that ladder daily.
- Cognitive Psychology of Virtual Reality: Basics, Problems and Tips.
- Psychology is the physics of VR.
- Use redirected walking to keep players within the same space.
- Design for optical flow. Put shadows over areas that users are not concentrating on. It’ll help with dizziness.
- Players underestimate depth by up to 50%.
- Add depth by adding transitional rooms (portals). This helps ease the players into their new environment.
- Players can see a maximum of 6 meters ahead of them for 3D.
- In their peripherals, they can only see 2D.
- Design with the mind that 20–30% of the population has problems with stereoscopic vision.
We will be following closely all things UX, IoT, VR, AI. Our schedules are getting full with some great sessions and workshops. Check back in a week or so to read some of our impressions!
At the end of 2015, our team was wrapping up projects that would be shown at the main Oracle conference, Oracle OpenWorld.
As with every OOW, we like to come up with a fun project that shows attendees our spirit of innovation by building cool projects within Oracle.
The team was thinking about building something cool with kids’ racetracks. We all were collectively in charge of looking for alternatives, so we visited a toy store to get ideas and see products that already existed out there.
We saw some pretty cool racetracks, but none of them suited our needs for functionality, and of course, we didn’t have enough time to invest in modifying one of them.
So, searching the internet, someone came up with the Anki OVERDRIVE cars, yes, the product announced back in 2013 at the Apple WWDC keynote. To sum up, Anki provides a racetrack made of flexible plastic magnetic track pieces that can be chained together to allow any racetrack configuration; rechargeable cars with an optical sensor underneath to keep them on the track; a lot of fun features like all kinds of virtual weapons, car upgrades, etc.; a companion app for both Android and iOS to operate the cars; and a software development kit (SDK).
For us, it was exactly what we were looking for. But now we needed to find a way to control the cars without using the companion app because, you know, that was boring and we wanted more action and go one step further.
So after discussing different approaches, I suggested controlling the cars with the Myo gesture control armband, which is basically a wireless, touch-free, wearable gesture and motion control device. We already had a Myo armband, but we hadn’t played with it much. Good thing the Myo band has an SDK too, so we had everything we needed to build a cool demo 🙂
So it was time to get my hands dirty and start coding! The general idea was to build an Android app that receives Myo gestures and motion data, translates and maps those values, and sends them to the Anki car to control its speed, while also receiving messages from the car to know its status and take action. As a plus, we wanted to count laps to run a real racing contest with the attendees.
I started investigating how to control the Anki cars with the SDK and found that they just provide a C implementation of the message protocols and data parsing routines necessary for communicating with Anki Drive vehicles. The initial release has only the minimal number of functions required to use the message protocol and parse information sent by vehicles, and it exposes a limited subset of the messaging protocol. Knowing that, I was sure that tracking the number of laps would be a hard task, so I decided to table that for another day.
I dove into the SDK to understand the message protocols and how to translate them to the Android SDK to gain full control of the vehicle, plus how to pair the cars with the Android device over Bluetooth. I have to admit, it was difficult at the beginning as the documentation is very limited. Also, I found that nothing you do will work until you send a message setting the SDK_MODE flag, so make a note of that if you want to do any Anki builds.
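For anyone curious what that looks like, here is a sketch (in node.js rather than the Android code) of building that SDK-mode message. The constants are assumptions based on the published C SDK's protocol definitions; verify them against the headers in your SDK version before relying on them:

```javascript
// Anki Drive vehicles ignore most commands until SDK mode is enabled.
// Message layout (per the C SDK's protocol headers): [size, msgId, payload...].
// These constant values are assumptions from the published C SDK; check
// them against your SDK version.
const MSG_C2V_SDK_MODE = 0x90;                // "client to vehicle" set-SDK-mode
const SDK_OPTION_OVERRIDE_LOCALIZATION = 0x01;

function buildSdkModeMessage(on) {
  // The size byte counts the bytes that follow it: msgId + 2 payload bytes.
  return Buffer.from([
    0x03,
    MSG_C2V_SDK_MODE,
    on ? 0x01 : 0x00,
    SDK_OPTION_OVERRIDE_LOCALIZATION,
  ]);
}

// This buffer would be written to the vehicle's BLE write characteristic
// before sending any speed or lane-change commands.
const msg = buildSdkModeMessage(true);
console.log(msg);
```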
Myo integration was more transparent, as they already provide an Android SDK with cool examples. So I just had to code the Bluetooth pairing and then map and translate gestures and motions into a valid speed for the Anki cars.
I set up two gestures and motions to control speed. The first was rotating the wrist to the right to increase speed or to the left to decrease it, and the second was moving the arm up to increase speed or down to decrease it.
I’m sure there are a lot of gestures or motions we could have used and implemented, but those were enough for our proof of concept and demo.
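As an illustration of the kind of mapping involved, here is a small sketch in JavaScript. The function name, constants and ranges are all hypothetical, not taken from the Myo or Anki SDKs:

```javascript
// Map a wrist-roll angle (radians, derived from the Myo's orientation data)
// to a car speed. Names and ranges here are illustrative only: rolling
// right of neutral speeds up; rolling left of neutral means zero speed.
const MAX_SPEED = 1000;       // assumed speed ceiling for the car
const MAX_ROLL = Math.PI / 2; // a quarter turn of the wrist = full scale

function rollToSpeed(rollRadians) {
  // Normalize to [0, 1] and clamp, so extreme wrist angles stay in range.
  const normalized = Math.max(0, Math.min(1, rollRadians / MAX_ROLL));
  return Math.round(normalized * MAX_SPEED);
}

console.log(rollToSpeed(Math.PI / 4)); // half roll -> 500
console.log(rollToSpeed(-0.3));        // rolled left of neutral -> 0
console.log(rollToSpeed(2.0));         // past full scale, clamped -> 1000
```

The clamping matters in practice: raw orientation data is noisy, and without it a flick of the wrist could command a speed the car protocol would reject.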
Here you can see some development testing.
I’ve been to two conferences where this demo has been shown, and the attendees’ “wow” reaction is very gratifying, and that, my friends, is the whole idea of this demo and this team: the “wow” moment 🙂
Of course, it was shown at OOW 2015, and you can read more about it here.
Also, this demo has its own spot in OAUX Gadget Lab, so come by and have some fun time with it and our others demos that live in the lab.
So, here’s a new thing I’ve noticed lately, customizable wearables, specifically the Xiaomi Mi Band (#MiBand), which is cheap and completely extensible.
This happens to be Ultan’s (@ultan) new fitness band of choice, and coincidentally, Christina’s (@ChrisKolOrcl) as well. Although both are members of Oracle Applications User Experience (@usableapps), neither knew the other was wearing the Mi Band until they read Ultan’s post.
Since then, they’ve shared pictures of their custom bands.
The Mi Band already comes in a wider array of color options than most fitness bands, and a quick search of Amazon yields many pages of wristbands and other non-Xiaomi accessories. So, there’s already a market for customizing the $20 device.
And why not, given it’s the price of a nice pedometer with more bells and whistles and a third the cost of the cheapest Fitbit, the Zip, leaving plenty of budget left over for making it yours.
Both Christina and Ultan have been tracking fitness for a long time as early adopters, so I’m ready to declare this a trend: super-cheap, completely-customizable fitness bands.
Of course, as with anything related to fashion (#fashtech), I’m the last to know. Much like a broken clock, my wardrobe is fashionable every 20 years or so. However, Ultan has been beating the #fashtech drum for a while now, and it seems the time has come to throw off the chains of the dull, black band and embrace color again.
Or something like that. Anyway, find the comments and share your Mi Bands or opinions. Either, both, all good.
Editor’s note: Our team participated in many of the Summer of Innovation events held by Laurie (@lsptahoe) and her team. Here’s Part 2 of Mark’s (@mvilrokx) project from the IoT Hackathon held at Oracle HQ, July 30-31. Enjoy.
After all the prep work for the IoT hackathon, where we agreed on our use case and settled on the hardware we would be using, it was time to start the actual fun part: designing and coding!
Diane provided all the designs for the application, Joe built all the hardware and provided the “smarts” of the application and I was responsible for bringing it all together.
Let’s start with the sketch of the whole hardware setup:
Basically, the piezo is connected to the one analog input on the ESP8266 (A0) and to ground. In parallel with the piezo we put a 1MΩ pull-down resistor to reduce possible signal noise, a 1µF capacitor to smooth out the piezo’s signal over time and improve readings, and a 5.1V Zener diode to prevent the piezo from frying the ESP8266 (piezos can spike at 20–40V). The piezo can then simply be attached to a pipe with some plumber’s putty, ready to start sensing vibrations.
For our test setup, we created a closed circuit water flow with some copper pipes, a simple garden pump and a bucket of water, simulating an actual water system.
This turned out to work great for research and the actual demonstration during the hackathon.
ESP8266 WiFi Client
The whole reason for using the ESP8266 was to be able to connect the piezo to the cloud. The ESP8266 can be flashed with a Lua-based firmware called NodeMCU, which makes this whole process remarkably simple. Just set the ESP8266 to WiFi STATION mode and then connect to an available WiFi network. That’s it, 2 lines of (Lua) code:
wifi.setmode(wifi.STATION)
wifi.sta.config(<ssid>, <pwd>)
The board is now connected to the internet and you can perform e.g. REST API calls from it. In practice, we implemented this slightly differently because this isn’t very user friendly, but that’s outside the scope of this post.
All we had to do now was read the data from the piezo and send it via the internet to be processed and stored somewhere, e.g. on the Oracle Cloud IoT platform. Unfortunately we didn’t have access to that platform so we had to build something ourselves, see next.
Initially the plan was to use REST API calls to send the data to our backend, but this turned out to be too heavy for the little ESP8266 board, and we could only perform a few calls per second before it would freeze and reboot.
Instead, we opted to use the MQTT protocol (quote from http://mqtt.org/):
“MQTT is a machine-to-machine (M2M)/”Internet of Things” connectivity protocol. It was designed as an extremely lightweight publish/subscribe messaging transport. It is useful for connections with remote locations where a small code footprint is required and/or network bandwidth is at a premium…”
Sounds like a perfect match, and the Lua firmware has built-in support for MQTT!
We turned our ESP8266 into an MQTT client and used CloudMQTT as our broker. Once we switched to MQTT, we were able to send data at a much higher frequency, but still not fast enough compared to the number of samples we wanted to collect from the piezo (hundreds per second).
So instead of sending all the data we collected from the piezo to a backend, we decided to do some of the processing on the ESP8266 chip itself. We would collect 100 samples, calculate their mean, median and standard deviation, and send those to the backend instead, as fast as we could. In the end, calculating the standard deviation turned out to be too much for the board and started affecting our sampling frequency, so we dropped it altogether.
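To make the per-batch reduction concrete, here is a sketch of the idea in JavaScript (the board itself runs Lua, and the function name and shape are ours, not from the actual firmware): collect a batch of raw ADC samples, then send only the summary values upstream.

```javascript
// Reduce a batch of raw piezo samples to summary statistics, so only
// two numbers per batch travel over MQTT instead of 100 raw readings.
function summarize(samples) {
  const sorted = [...samples].sort((a, b) => a - b);
  const mean = samples.reduce((sum, v) => sum + v, 0) / samples.length;
  const mid = Math.floor(sorted.length / 2);
  const median = sorted.length % 2
    ? sorted[mid]                          // odd count: middle value
    : (sorted[mid - 1] + sorted[mid]) / 2; // even count: average the two middle values
  return { mean, median };
}
```

For example, `summarize([3, 1, 2, 4])` yields a mean and median of 2.5. Sorting a 100-element batch is cheap even on a constrained chip, which is why the mean and median survived while the standard deviation did not.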
Piezo data is now being pushed from our little board to the broker; next we need to store that data in a database. For this we need another MQTT client that listens for the messages from the ESP8266 and stores them as they arrive. We used node.js and the MQTT.js package to create our client and hooked it up to a MongoDB.
This gave us the flexibility to change the data that gets sent from the board (which was just some JSON) without having to deal with DDL. For example, we managed to cram extra attributes into the data sent from the ESP8266 that contained information about the pipe’s material (“copper” or “PVC”) and the location of the piezo on the pipe (“close to bend”) all without changing anything other than the code on the ESP8266 that captures this information.
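The schema-less part can be sketched as a small message handler. This is a hypothetical shape, not our actual bridge code: MQTT.js delivers (topic, payload) pairs, and whatever JSON the board sends, extra attributes and all, becomes the MongoDB document as-is. The field and topic names here are illustrative assumptions.

```javascript
// Turn an incoming MQTT message into a MongoDB document without any
// fixed schema: every field the board sends is stored verbatim.
function toDocument(topic, payload, now = new Date()) {
  const data = JSON.parse(payload.toString());
  return {
    sensor: topic.split('/').pop(), // e.g. "sensors/sensor1" -> "sensor1"
    receivedAt: now,
    ...data,                        // mean, median, material, location, ...
  };
}

// In the real bridge this would run inside the MQTT.js message event, e.g.:
//   client.on('message', (topic, payload) =>
//     collection.insertOne(toDocument(topic, payload)));
```

If the ESP8266 starts sending a new attribute like `"material": "copper"`, it simply shows up in the stored documents; no DDL, no migration.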
This information would be useful in our model as different pipe materials or other characteristics could have an effect on it, although we didn’t get to use it for the Hackathon.
The final piece of the puzzle was to display all this information in a useful manner to the end user.
For this we had a web application that would connect to the MongoDB and process the information available there. Users could monitor usage per device, or aggregated in any number of ways: devices could be grouped by restroom, floor, building, campus or company wide.
You could also aggregate by device type: shower, toilet, urinal, etc. A budget could be allocated, again at any of these levels and over any timeline, e.g. 100 gallons/day/toilet or 100,000 gallons/quarter/building. Notifications would get sent out when you went over a budget or when “unusual” activity was noticed, e.g. a device is nowhere near its budget BUT it has used much more today than it normally does, which could indicate a leak.
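The notification rule above can be sketched as a small function. This is our own naming and the 3x-the-norm leak threshold is an assumption for illustration, not a value from the actual application:

```javascript
// Decide whether a device's water usage warrants a notification:
// over its budget, or suspiciously above its own historical norm.
function checkUsage(usedToday, dailyBudget, typicalDaily) {
  if (usedToday > dailyBudget) return 'over-budget';
  // Well under budget, yet far above this device's usual draw:
  // the pattern that could indicate a leak. 3x is an assumed threshold.
  if (usedToday > 3 * typicalDaily) return 'possible-leak';
  return 'ok';
}
```

So a toilet at 90 gallons against a 1,000-gallon budget raises no budget alarm, but if it normally uses 20 gallons a day, the leak check still fires.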
Those were all the software parts that made up our final solution, here’s an overview:
Notice how this architecture allows us to easily scale the individual parts without affecting one another: we can scale up the number of sensors, scale the MQTT broker, scale the node.js server and scale the MongoDB.
One final component that we did not really get to highlight in the presentation during the hackathon is that the MQTT client on the ESP8266 is configured both to send messages (the piezo data, as shown before) and to receive messages.
This allowed us to control the sensors remotely by sending them commands from the broker (through another MQTT client). As soon as you switched on an ESP8266 module, say “sensor1”, it would announce itself. The node.js server would be listening for these messages and indicate in the database that “sensor1” was online and ready to accept commands.
From another MQTT client, controlled from an admin web application, we could then send a command to this specific ESP8266 and tell it either to start sensing and sending data, or to stop. This was done for practical purposes, because we were producing so much data that we were in danger of running out of free capacity on CloudMQTT 🙂 but it turned out to be a very useful feature that we are planning to investigate further and implement in any future versions.
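The start/stop command handling amounts to a tiny state machine on the device. Sketched here in JavaScript for illustration (the real firmware is Lua on the ESP8266, and the command names are assumptions):

```javascript
// Apply a remote command to the sensor's state. Unknown commands are
// ignored, so a misbehaving admin client can't wedge the device.
function applyCommand(state, command) {
  switch (command) {
    case 'start': return { ...state, sensing: true };
    case 'stop':  return { ...state, sensing: false };
    default:      return state;
  }
}
```

On the device, the sampling loop would simply check the `sensing` flag before publishing each batch, so a single “stop” from the admin app silences the firehose of data.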
In the end, we didn’t win any prizes at the hackathon, but I did learn a lot about IoT and plan to use that in future projects here at the AppsLab. Stay tuned for more blog posts on those projects.
2016 has been a whirlwind so far, and February kept up the pace. Here’s a quick rundown of what we’ve been doing.
As we did last year, OAUX made a trip to APAC again this year to meet partners, customers and Oracle people, show our Expo of goodies and talk simplicity-mobility-extensibility, Glance, Scan, Commit and our overall goal of increasing user participation.
This year, Noel (@noelportugal) and I made the trip to Australia and spent an awesome, warm week in beautiful Sydney. Noel showed the portable Smart Office demo, including the Smart Badge, that we debuted at OpenWorld in October, and I showed a couple visualizations and Glance on a handful of devices.
Don’t let the picture fool you. It was taken after the end of Jeremy’s (@jrwashley) talk and before the official start of the Expo, during lunch. Once people finished eating, the room filled up quickly with 80 or so partner attendees.
On the second day in the Sydney office, we got the chance to chat with local Oracle sales reps, consultants and architects, and I was lucky enough to meet Stuart Coggins (@ozcoggs) and Scott Newman (@lamdadsn) who read this blog.
It’s always invigorating to meet readers IRL. I’ve poured nine years into this blog, writing more than 2,000 posts, and sometimes the silence is deafening. I wonder who’s out there reading, so it’s always a treat.
Oh, and Stuart had a board with blinky lights, an IoT demo he’d shown a customer that day. So, I was immediately intrigued.
Turns out Stuart and Scott create demos similar to our own and share the same nerdy passions we do. To wit, check out the Anki car hack Stuart and some colleagues did for Pausefest 2016 in Melbourne earlier this month.
You’ll recall the Anki cars are one of our latest favorite shiny toys to hack.
Overall, it was a great week. Special thanks to John P for hosting us and making the trip all the more fun.
While we were away, Thao (@thaobnguyen) and the HQ team hosted a group of analysts in the Cloud and Gadget Labs, including Vinnie Mirchandani (@dealarchitect), who posted a nice writeup, including the money quote:
A walkthrough of their UX lab was like that of Q’s workshop in Bond movies. An Amazon Echo, smart watches, several gestural devices point to further changes in interfaces. Our expectations are being shaped by rapidly evolving UX in our cars and homes.
Somewhere Noel feels warm and fuzzy because the Q-meets-Tony Stark aesthetic is exactly what he was hoping to capture in his initial designs for our Gadget Lab.
Anyway, given all the excitement lately, I’m only now getting a chance to encourage you to read a post by Sarah Smart over on VoX about stuff, “Wearables, IoT push Oracle’s emerging tech development.”
So yeah, a lot going on, and conference season is just beginning. Stay tuned for more.
Actually, Glance has grown beyond wearables to support cars and other devices, the latest of which is Noel’s (@noelportugal) gadget du jour, the LaMetric Time (@smartatoms).
Insert mildly amusing video here.
And of course Noel had to push Glance notifications to LaMetric, because Noel. Pics, it happened.
The text is truncated, and I tried to take a video of the scrolling notification, but it goes a bit fast for the camera. Beyond just the concept, we’ll have to break up the notification to fit LaMetric’s model better, but this was only a few minutes of effort from Noel. I know, because he was sitting next to me while he coded it.
In case you need a refresher, here’s Glance of a bunch of other things.
I didn’t have a separate camera so I couldn’t show the Android notification.
We haven’t updated the framework for them, but if you recall, Glance also worked on Google Glass and Pebble in its 1.0 version.
In September 2014, Oracle Applications User Experience (@usableapps) opened a brand new lab that showcases Oracle’s Cloud Applications, specifically the many innovations that our organization has made to and around Cloud Applications in the past handful of years.
We call it the Cloud User Experience Lab, or affectionately, the Cloud Lab.
Our team has several projects featured in the Cloud Lab, and many of our team members have presented our work to customers, prospects, partners, analysts, internal groups, press, media and even schools and Girl and Boy Scout troops.
In 2015, the Cloud Lab hosted more than 200 such tours, actually quite a bit more, but I don’t have the exact number in front of me.
Beyond the numbers, Jeremy (@jrwashely), our group vice president, has been asked to replicate the Cloud Lab in other places, on the HQ campus and abroad at other Oracle campuses.
People really like it.
In October 2015, we opened an adjoining space to the Cloud Lab that extends the experience to include more hands-on projects. We call this new lab the Gadget Lab, and it features many more of our projects, including several you’ve seen here.
In the Gadget Lab, we’re hoping to get people excited about the possible and give them a glimpse of what our team does because saying “we focus on emerging technologies” isn’t nearly as descriptive as showing our work.
So, the next time you’re at Oracle HQ, sign up for a tour of the Cloud and Gadget Labs and let us know what you think.
Editor’s note: Here’s the first post from our new-ish researcher, Tawny. She joined us back in September, just in time for OpenWorld. After her trip to Disney World, she talked eagerly about the MagicBand experience, and if you read here, you know I’m a fan of Disney’s innovative spirit.
Planning a Disney World trip is no small feat. There are websites that display crowd calendars to help you find the best week to visit and the optimal parks to visit on each of those days so you can take advantage of those magic hours. Traveling with kids? Visiting during the high season? Not sure which FastPass+ to make reservations for?
There are annually updated guidebooks dedicated to providing you the most optimal attraction routes and FastPass+ reservations, based on thousands of data points for each park. Beginning in 2013, Disney introduced the MagicBand, a waterproof bracelet that acts as your entry ticket, FastPass+ holder, hotel key and credit card. The bands are part of the MyMagic+ platform, consisting of four main components: MagicBands, FastPass+, My Disney Experience, and PhotoPass Memory Maker. PassPorter Boards lists everything you can do with a MagicBand.
I got my chance to experience the MagicBand early this January.
These are both open edition bands. This means that they do not have customized lights or sounds at FP+ touchpoints. We bought them at the kiosk at Epcot after enviously looking on at other guests who were conveniently accessing park attractions without having to take out their tickets! It was raining, and the idea of having to take out anything from our bags under our ponchos was not appealing.
The transaction was quick and the cashier kindly linked our shiny new bands to our tickets. Freedom!!!
The band made it easy for us to download photos and souvenirs across all park attractions without having to crowd around the photo kiosk at the end of the day. It was great being able to go straight home to our hotel room while looking through our Disney photo book through their mobile app!
Test Track at Epcot made the most use of the personalization aspect of these bands. Before the ride, guests could build their own cars with the goal of outperforming other cars in four key areas: power, turn handling, environmental efficiency and responsiveness.
After test driving our car on the ride, there were still many things we could do with our car such as join a multiplayer team race…we lost 🙁
What was really interesting was watching guests who were fortunate enough to show off personalized entry colors and sounds, a coveted status symbol amongst MagicBand collectors. The noise and colors were a mini attraction on their own! I wish our badge scanners said hi to us like this every morning…
When used in conjunction with the My Disney Experience app, there is a lot of potential:
- Order food ahead + scan to pick up, or food delivery while waiting in a long line.
- Heart sensor + head-facing camera to take pictures within an attraction to capture happy moments.
- Haptic feedback to tell you that your table is ready at a restaurant. Those pagers are bulky.
So what about MagicBands for the enterprise context?
Hospitals may benefit, but some argue that the MagicBand model works exclusively for Disney because of its unique ecosystem and the heavy cost it would take to implement it. The concept of the wearable is no different from the badges workers have now.
Depending on the permissions given to the badgeholder, she can badge into any building around the world.
What if we extend our badge capabilities to allow new/current employees to easily find team members to collaborate and ask questions?
What if the badge carried all of your desktop and environmental preferences from one flex office to the desk so you never have to set up or complain about the temperature ever again?
What if we could get a push notification that it’s our cafeteria cashier’s birthday as we’re paying and make their day with a “Happy Birthday?”
That’s something to think about.