Hi there, remember me? Wow, April was a busy month for us, and looking ahead, it’s getting busy again.
Busy is good, and so is the emergence of new voices here at the ‘Lab. They’ve done a great job holding down the fort. Since my last post in late March, you’ve heard from Raymond (@yuhuaxie), Os (@vaini11a), Tawny (@iheartthannie), Ben (@goldenmean1618) and Mark (@mvilrokx).
Because it’s been a while, here comes an update post on what we’ve been doing, what we’re going to be doing in the near future, and some nuggets you might have missed.
What we’ve been doing
Conference season, like tax season in the US, consumes the Spring. April kicked off for me at Oracle HCM World in Chicago, where Aylin (@aylinuysal) and I had a great session. We showed a couple of our cool voice demos, powered by Noel’s (@noelportugal) favorite gadget, the Amazon Echo, and the audience was visibly impressed.
— Gozel Aamoth (@gozelaamoth) April 7, 2016
I like that picture. Looks like I’m wearing the Echo as a tie.
Collaborate 16 was next, where Ben and Tawny collected VR research and ran a focus group on bots. VR is still very much a niche technology. Many Collaborate attendees hadn’t heard of VR at all and were eager to take the Samsung Gear VR for a test drive.
During the bots focus group, Ben and Tawny tried out some new methods, like Business Origami, which fostered some really interesting ideas among the group.
— The AppsLab (@theappslab) April 12, 2016
Next, Ben headed out directly for the annual Oracle Benelux User Group (OBUG) conference in Arnhem to do more VR research. Our research needs to include international participants, and Ben found more of the same reactions we’ve seen Stateside. With something as new and different as VR, we cast a wide net to get as many perspectives and collect as much data as possible before moving forward with the project.
Oracle Modern Customer Experience was next for us, where we showed several of our demos to a group of students from the Lee Business School at UNLV (@lbsunlv), who then discussed those demos and a range of other topics in a panel session hosted by Rebecca Wettemann (@rebeccawettemann) of Nucleus Research.
— Geet (@geet_s) April 28, 2016
The feedback we got on our demos was very interesting. These students belong to a demographic we don’t typically get to hear from, and their commentary gave me some lightning bolts of insight that will be valuable to our work.
As with VR, some of the demos we showed were on devices they had not seen or used yet, and it’s always nice to see someone enjoy a device or demo that has become old hat to me.
Because we live and breathe emerging technologies, we tend to get jaded about new devices far too quickly. So, a reset is always welcome.
What we’re going to be doing in the near future
— AMIS, Oracle & Java (@AMISnl) May 9, 2016
Then, June 2-3, we’re returning to the Netherlands to attend and support AMIS 25. The event celebrates the 25th anniversary of AMIS (@AMISnl), and they’ve decided to throw an awesome conference at what sounds like a sweet venue, “Hangaar 2” at the former military airport Valkenburg in Katwijk outside Amsterdam.
Our GVP, Jeremy Ashley (@jrwashley) will be speaking, as will Mark. Noel will be showing the Smart Office, Mark will be showing his Developer Experience (DX) tools, and Tawny will be conducting some VR research, all in the Experience Zone.
I’ve really enjoyed collaborating with AMIS in the past, and I’m very excited for this conference/celebration.
After a brief stint at home, we’re on the road again in late June for Kscope16, which is both an awesome conference and, happily, the last show of the conference year. OpenWorld doesn’t count.
We have big fun plans this year, as always, so stay tuned for details.
Stuff you might have missed
Finally, here are some interesting tidbits I collected in my absence from blogging.
- The bots are coming! We love bots, and in October, we’re partnering with the Apps UX Innovation team to run an internal bots-focused hackathon.
- New ways of input still on the verge of the enterprise. Over on VoX, you can read about our work in voice and gesture input and how these technologies are shaping future experiences.
- Smart user experiences: Machine learning and the future of enterprise applications. Check out what Bill has to say about how “smart” experiences are shaping our thinking.
Last week, several of my colleagues and I had the privilege of attending the Samsung Developers Conference (SDC) in San Francisco. It was the fifth time Samsung has held a developers conference in San Francisco but the first time I attended, although some in our party had been to previous editions, so I had some idea of what to expect. Here are some impressions and thoughts on the conference.
After an hour of walking around, my first thought was: is there anything Samsung doesn’t have a hand in? I knew, of course, that they produce smartphones, tablets, smartwatches and TVs, and I’ve seen a laptop here and there, but vacuum cleaners, air conditioning units and ranges? Semiconductors (did you know there are Samsung chips inside the iPhone?), smart fridges, security cameras, and now VR gear and IoT. Pretty crazy. Interestingly enough, I think this smorgasbord of technology gives Samsung some distinct advantages over more focused companies (like, say, Apple); more on that later.
As with all of these events, Samsung’s motivation for organizing this conference is of course not entirely altruistic; as I mentioned in the intro, they have a huge hardware footprint and almost all of that needs software, which gets developed by … developers.
They need to attract outside developers to their platforms to make them interesting for potential buyers; I mean, what would the iPhone be without apps? There is nothing wrong with that, and it’s one of the reasons we have Oracle OpenWorld, but I thought the sessions on the “Innovation Track” were a bit light on technical details (at least the ones I attended).
In fact, I feel some of them wouldn’t have been out of place in a “Marketing Track.” To be fair, I didn’t get to attend any of the hands-on sessions on day zero; maybe those were more useful. But as a hard-core developer, I felt a bit … underwhelmed by the sessions.
That doesn’t mean, though, that the sessions were not interesting, and probably none more so than “How to Put Magic in a Magical Product” by Moe Tanabian, Chief Design Officer at Samsung, which took us on a “design and technical journey to build an endearing home robot”; basically, how they created this fella:
That is Otto, a personal assistant robot, similar to the Amazon Echo but with a personality. Tanabian explained in the session how they got from idea and concept to production using a process remarkably similar to how we develop here at the AppsLab: fail fast, iterate quickly, get it in front of users as quickly as possible, measure, etc. I just wish we had the same hardware tooling available as they do (apparently they used what I can only imagine are very expensive 3D printers to produce the end result).
Samsung also seems to be making a big push in the IoT space, and for good reason. The IoTivity project is a joint open-source connectivity framework sponsored by the Open Interconnect Consortium (OIC), of which Samsung is a member, and one of the sessions I attended was about this project.
The whole Samsung ARTIK IoT platform supports this standard, which should make it easy and secure to discover and connect ARTIK modules to each other. The question, as always, is: will other vendors adopt this standard so that it works cross-vendor, i.e. can my ESP8266s talk to an ARTIK module, which then talks to a Particle, my Philips Hue lights, etc.?
Without that, a new standard is fairly useless and just adds to the confusion.
As mentioned in the intro, though, because Samsung makes pretty much everything, they could start by enabling all their own “things” to talk to each other over the internet. Their smart fridge could command their robotic vacuum to clean up the milk that just got spilled in the kitchen. The range could check what is in the fridge and suggest what’s for dinner. ARTIK modules could then be used as customizations and extensions for the few things not built by Samsung (like IoT Nerf guns :-), all tied together by Otto, which can relay information to and from the users.
This is an advantage they have over e.g. Google (with Brillo) or Apple (with HomeKit) who have to ask hardware vendors to implement their standard; Samsung has both hardware and the IoT platform, no need for an outside party, at least to get started.
Personally, I’m hoping that in the near future I get to experiment with some of the ARTIK modules; they look pretty cool!
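The “everything talks to everything” scenario above boils down to devices publishing events that other devices can discover and react to. Here is a minimal in-process sketch of that publish/subscribe idea; the Hub class, topic names and devices are hypothetical illustrations, not actual IoTivity or ARTIK APIs:

```python
from collections import defaultdict

class Hub:
    """Toy message hub standing in for a shared connectivity framework."""
    def __init__(self):
        self.subscribers = defaultdict(list)  # topic -> list of callbacks

    def subscribe(self, topic, callback):
        """Register a device's handler for a topic."""
        self.subscribers[topic].append(callback)

    def publish(self, topic, payload):
        """Deliver a payload to every device listening on the topic."""
        for callback in self.subscribers[topic]:
            callback(payload)

hub = Hub()
events = []

# The robotic vacuum listens for spill events from any vendor's device...
hub.subscribe("kitchen/spill", lambda msg: events.append(f"vacuum: cleaning up {msg}"))

# ...and the smart fridge reports one.
hub.publish("kitchen/spill", "milk near the fridge")
```

The point of a shared standard is that the fridge and the vacuum never need to know about each other, only about the common topic; swap the in-process hub for a network protocol and the same pattern works across vendors.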
And then, of course, there was VR: Gear VRs, VR vendors, VR cameras, even a VR rollercoaster ride (which I tried and which of course made me sick, same as with the Oculus Rift demo at UKOUG last year); maybe I’m just not cut out for VR. One of the giveaways was actually a Gear 360 camera, which lets you take 360-degree footage that you can then experience in the Gear VR, nicely tying up the whole Samsung VR experience.
All in all it was a great conference with cool technology showing off Samsung’s commitment to VR and IoT.
Oh, and I got to meet Flo Rida at an AMA session 🙂
VR was the big thing at the Samsung Developer Conference, and one of the points that got driven across, both in the keynotes and in other talks throughout the day, was that VR is a fundamentally new medium—something we haven’t seen since the motion picture.
Injong Rhee, the executive VP of R&D for Software and Services, laid out some of VR’s main application areas: Gaming, Sports, Travel, Education, Theme Parks, Animation, Music, and Real Estate. Nothing too new here, but it is a good summary of the major use cases, and they echo what we’ve heard in our own research.
He also mentioned some of their biggest areas for innovation: Weight, dizziness, image quality, insufficient computing power, restricted mobility, limited input control. For anyone who’s tried the Gear VR and had to use the control pad on the side of the visor, I think we can agree it’s not ideal for long periods of time. And while some VR apps leave me and others with no nausea at all, other apps, where you’re moving around and stepping up and down, can certainly cause some discomfort. I’m curious to see how some of those problems of basic human physiology can be overcome.
A fascinating session after the keynote was with Brett Leonard, who many years ago directed Lawnmower Man, a cautionary tale about VR which, despite the bleak dystopian possibilities it portrayed, inspired many of today’s VR pioneers. Leonard appeared with his brother Greg, a composer, and Frank Serafine, an Oscar-winning sound designer who did the sound for Lawnmower Man.
Brett, Greg, and Frank made a solid case for VR as a new medium that has yet to be even partially explored, and that will surely develop a plethora of new conventions for storytellers to work with. We’ve become familiar with many aspects of the language of film, such as action that happens off screen but is implied. But with the 360-degree experience of VR, there’s no longer the same framing of shots, or anything happening off screen. The viewer chooses where to look.
Brett also listed his five laws of VR, which cover some of his concerns, given that it is a powerful medium that could have real consequences for people’s minds and physiology, particularly those of developing children. His laws, heavily paraphrased, are:
- Take it seriously.
- VR should promote interconnecting with humanity, not further reinforcing all the walls we already have, and that technology so far has helped to create.
- VR is its own reality.
- VR should be a safe space—there are a huge amount of innovations possible, things that we haven’t been able to consider before. This may be especially so for medical and psychological treatments.
- VR is the medium of the global human.
Another interesting part of the talk was about true 360-degree sound, which Serafine said hadn’t really been done well before, but with the upcoming Dolby Atmos theaters, finally has.
Good 360-degree sound, not just stereo like we’re used to, will be a big part of VR feeling increasingly real, and will pose a challenge for VR storytelling, because it means recording becomes more complex, and consequently editing and mixing.
Samsung also announced their effort for the connected car, with a device that looks a lot like the Automatic (previously blogged about here) or the Mojio. It will offer all the features of those other devices—driving feedback that can become a driver score (measuring hard braking, fast accelerating, hard turns, and the like), as well as an LTE connection that allows it to stay connected all the time and serve as a WiFi hotspot. But Samsung adds a little more interest to the game with vendor collaborations, like with Fiat, where you can unlock the car, or open the trunk from your app. This can’t currently be done with other devices.
It should come out later this year, and will also have a fleet offering, which should appeal to enterprise companies. If they add more of these exclusive offerings through Samsung’s relationships with various vendors, maybe it will do better than its competitors.
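Samsung hasn’t published how the driver score is computed, but conceptually it is just weighted counts of the events the dongle detects. A hypothetical sketch, with made-up weights and a per-100-miles normalization:

```python
def driver_score(hard_brakes, fast_accels, hard_turns, miles):
    """Hypothetical driver score: start from 100 and subtract weighted
    penalties per detected event, normalized per 100 miles driven."""
    if miles <= 0:
        return None  # no driving, no score
    per_100_miles = 100.0 / miles
    penalty = (5 * hard_brakes + 3 * fast_accels + 4 * hard_turns) * per_100_miles
    return max(0, round(100 - penalty))

# A fairly calm 200-mile week: 2 hard brakes, 1 fast acceleration, no hard turns.
print(driver_score(hard_brakes=2, fast_accels=1, hard_turns=0, miles=200))  # 94
```

The interesting product question is less the arithmetic than the weights: how hard a penalty a hard brake deserves relative to a fast start is exactly the kind of thing these vendors tune (and keep to themselves).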
After a whirlwind day at Modern CX, I hurried my way back up to San Francisco for the last day of the Samsung Developers Conference 2016. The morning started out exciting with a giveaway gift of the Samsung Gear 360 Camera.
— Tawny (@iheartthannie) April 30, 2016
Raymond (@yuhuaxie) took a bunch of photos with it and found it very convenient to get a stitched 360 photo with one click of a button. Previously in the making of our virtual gadget lab, he had to use an Android phone camera to capture 20+ shots before stitching the photos together to produce one spherical photo.
The automatic stitching is seamless at a glance, but you can still tell where the stitching happened by looking more closely.
The quality of photos taken with the Gear 360 still leaves something to be desired. The door frame and structural beam of my house both appear curvy, the depth of the photo looks very shallow, etc. Maybe it is the fish-eye lenses that cause the lack of depth and the distortion away from the focus center. The distortion can be avoided by keeping subjects and objects a few meters from the camera, or by using more cameras in a high-end rig, such as Project Beyond.
A Secure & Connected Future
- Security – Knox, which delivers mobile enterprise security, and Samsung Pay, which uses MST and NFC to make mobile payments “simple, secure, virtually anywhere.” A group of panelists from MasterCard, Visa and American Express agreed that mobile payments need to be as easy as (or easier than) pulling out your credit card, and Samsung Pay’s MST and NFC enable a “frictionless” experience.
- The internet of things – Currently there is a fragmented ecosystem of connected devices and manufacturers that needs to be democratized. In a keynote, Curtis Sasaki pushed the idea of making connections, not silos. The ARTIK chip is one way to exchange open data among devices that were not originally designed to work together.
Sensors can be used to provide all kinds of information and status, and to trigger actions. Thanks to ARTIK, we got to meet Otto, Samsung’s adorable smart home robotic personal assistant, who was methodically turning off lights and taking pictures. Otto is not a consumer-ready product, but it functions much like Amazon’s Alexa while adding an HD camera and display, which offers an opportunity to test image and face recognition in home environments.
- The connected car is launching at the end of this year. A new dongle gives owners of older cars an LTE connection. Samsung Connect Auto uses real-time alerts to help consumers improve their driving behavior while offering a Wi-Fi hotspot to turn the car into a multimedia center.
There is also a smart TV, and a connected fridge that lets you identify missing ingredients, compose a grocery list and order groceries from the doors of the fridge.
- Smart healthcare – This is about empowerment, connectivity and health data security. There was local intelligence for health monitoring with the Samsung Simband, and a virtual reality relaxation pod provided by Cigna Healthcare. Simband pairs with a cloud-based health analytics service that can collect health data from wearables and health monitors.
- Virtual reality – The highlight was the 4D Escape Room VR, an amazing virtual experience where you can touch and move real objects in virtual reality!
- And of course, there was the obligatory roller coaster experience.
— Tawny (@iheartthannie) April 30, 2016
I managed to catch the tail-end of the inspiration keynote which featured a panel of 4 women change-makers from Baobab Studios, Cisco, Intel and Summit Public Schools chatting about how we can change the world and make history.
One of the common themes was that innovation comes from anywhere, in the organization and among users, whether from the lab or from another person’s scratchpad.
Innovation is questioning the status quo. You have to reject “what is,” and THAT is hard. Your moral obligation to make a better quality world should be your guiding principle. You have to make IT happen by putting in the time. Challenge the way the world is now and bring people into the dialogue, even if they do not want to be there.
There will always be a constant steady heartbeat of new technology. To build meaningful tools, we need to ask new questions:
- What would people do with our technology, rather than what are we doing with it?
- What do people care about? What are they passionate about?
- What do people find frustrating?
- What role could technology play in their lives?
We need to bring sensibility to our future tech. Our future objects will not make sense if they don’t make us happy or more efficient. We need to focus on telling a future story that is not about how we are seduced by technology; instead, we tell a story about what it will mean for all of us if we do use it.
Overall, we had a lot of fun and learned a lot about what technologies are currently available, where the industry is headed and what emerging technologies are around the corner. Our ideas flourish when we are surrounded by fellow developers, designers, thinkers and makers.
Last week, we were back in Las Vegas for the Oracle Modern Customer Experience Conference! Instead of talking to customers and partners, we had the honor of chatting with UNLV Lee graduate students (@lbsunlv) and getting their feedback on how we envision the future of work, customer experience, marketing and data security.
We started off with Noel (@noelportugal) showing the portable Smart Office demo, including the Smart Badge, that we debuted at OpenWorld in October, followed by a breakout session for the graduates to experience Glance and Virtual Reality at their leisure.
The event was a hit! The two-hour session flew by quickly. The same group of graduates who came in for the demos at the start of our session stayed until the very last minute, when we had to close down.
Experiencing these demos led to some exciting discussions the following day between the UNLV Lee Business School panelists and Rebecca Wettemann (@rebeccawettemann) from Nucleus Research (@NucleusResearch) on the future of work:
- How will sales, marketing, customer service, and commerce change for the next generation?
- What does the next generation expect from their employers?
- Are current employers truly modern and using the latest technology solutions?
— Erin Killian Evers (@keversca) April 28, 2016
— Gozel Aamoth (@gozelaamoth) April 28, 2016
While all of this was going on, a few of the developers and I were at the Samsung Developers Conference in SF discussing how we could build a more connected future. More on that in coming posts!
As part of our push to do more international research, I hopped over to Europe to show some customers VR and gather their impressions and thoughts on use cases. This time it was at OBUG, the Oracle Benelux User Group, which was held in Arnhem, a refreshing city along the Rhine.
Given that VR is one of the big technologies of 2016, and is poised to play a major role in the future of user experience, we want to know how our users would like VR to help them in their jobs. But first we just need to know what they think about VR after actually using it.
The week prior, Tawny and I showed some VR demos to customers and fellow Oracle employees at Collaborate in Las Vegas, taking them to the Arctic to see whales and other denizens of the deep (link) and, for the few with some extra time, defusing some bombs in the collaborative game “Keep Talking and Nobody Explodes” (game; Raymond’s blog post from GDC).
The reaction to the underwater scenes is now predictable: pretty much everyone loves it, just some more than others. There’s a sense of wonder, of amazement that the technology has progressed to this point, and that it’s all done with a smartphone. Several people have reached out to try to touch the sea creatures swimming by, only to realize they’ve been tricked.
Our European customers were no different from the ones we met at Collaborate, with similar ideas of how VR could be used in their businesses.
It’s certainly a new technology, and we’ll continue to seek out use cases, while thinking up our own. In the meantime, VR is lots of fun.
Last week, Ben (@goldenmean1618) and I were in Las Vegas for COLLABORATE. We ran two studies, which focused on two trending topics in tech: bots and virtual reality!
Bot Focus Group
— The AppsLab (@theappslab) April 12, 2016
Our timing for the bot study was perfect! The morning we were to run our focus group on bots in the workplace, Facebook launched its bot platform for Messenger. They are not the only ones with a platform: Microsoft, Telegram, and Slack have their own platforms too.
The goal of our focus group was to generate ideas on useful bots in the workplace. This can range from the concierge bot that Facebook has to workflow bots that Slack has. To generate as many ideas as we could, without groupthink, we had everyone silently write down their ideas using the “I WANT [PAT] TO…SO I CAN…” Tower of Want framework I stumbled upon at the GDC16 conference last March.
Not only do you distill the participants’ motivations, intents and needs, but you also acquire soft goals to guide the bot’s development. Algorithms are extremely literal. The Harvard Business Review notes how social media sites were once “quickly filled with superficial and offensive material.”
The algorithm was simple: find the articles with the most clicks and feed them to the users. Somewhere along the way, the goal of quality, highly engaging articles was lost to highly engaging articles at the expense of quality. Intention is everything.
“Algorithms don’t understand trade-offs; they pursue objectives single-mindedly.”
Soft goals are in place to steer a bot away from unintended actions.
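That failure mode is easy to reproduce in a few lines. A toy ranker that maximizes clicks alone surfaces the clickbait; adding a quality term as a soft goal flips the outcome (the articles, scores and weights here are all illustrative):

```python
articles = [
    {"title": "You won't BELIEVE this", "clicks": 900, "quality": 0.2},
    {"title": "In-depth bot platform review", "clicks": 400, "quality": 0.9},
]

def rank(articles, quality_weight=0.0):
    # Score = clicks, optionally tempered by a quality soft goal.
    # quality_weight=0 reproduces the single-minded click objective.
    return sorted(articles,
                  key=lambda a: a["clicks"] * a["quality"] ** quality_weight,
                  reverse=True)

print(rank(articles)[0]["title"])                      # the clickbait wins
print(rank(articles, quality_weight=2.0)[0]["title"])  # the quality article wins
```

The objective function is the intention: with no quality term, 900 clicks beats 400 every time; with the soft goal in place, 400 high-quality clicks outscore 900 low-quality ones.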
After the ideas were generated and shared, we had them place their bot tasks on a pain/frequency chart: How painful is this task for you to do? and How frequently do you need to do this task?
— The AppsLab (@theappslab) April 12, 2016
Then it was time for the business origami! Business Origami is similar to a task flow analysis that uses folded paper cutouts as memory nudges. We now have our bot tasks, but we do not know (a) what triggers the task, (b) what the bot needs to know to do its job and (c) what the desired output is. We modified the Business Origami activity with the inputs and outputs that a Resource Flow activity demands.
Before our customers created their own flows based on their best bot task idea, we did a group warm-up. The flow below illustrates scheduling and booking meeting rooms. Everyone was involved as they talked through the myriad ways the act of scheduling a meeting could be triggered, the mediums of communication used, what they would need to know in order to schedule it, and what feedback is needed when the task is done.
— The AppsLab (@theappslab) April 12, 2016
Virtual Reality Guerrilla Test
For three days, Ben and I ran a guerrilla study to get customers’ and partners’ thoughts on VR and where they might find it useful in their work or industry.
— The AppsLab (@theappslab) April 12, 2016
Our customers experienced virtual reality through the Samsung Gear VR. It relies on our Samsung Note 5 to deliver the immersive experience.
Because of the makeup of our audience at the demo pod, we had to ensure that our study took approximately five minutes. We had two experiences to show them: an underwater adventure with the blue whale in the Arctic Ocean (theBlu) and the heart-pounding task of defusing a bomb (Keep Talking and Nobody Explodes).
Everyone really wanted to reach out and touch the sea animals. Two reached out, accidentally touched Ben and me, and freaked out at how realistic the experience was! Another case for haptic gloves? 🙂
One of our participants had tears in her eyes after she experienced TheBlu Arctic while another participant wanted to play 3+ games of Keep Talking and Nobody Explodes!
Overall, no one felt nauseous. The game controls came easily to those who had experience playing Xbox and PlayStation, while others were able to learn through the gamepad tutorial. PlayStation VR makes learning even easier for newcomers, since you can see a ghostlike view of your gamepad in VR.
Mostly, our participants confirmed use cases that we found in our first VR study at Modern Supply Chain back in January 2016. We ran 20 participants that month in an onsite guerrilla study, each through two applications in a 30-minute session: swimming with dolphins in Ocean Rift and exploring a car show in Relay Cars.
One of our participants had a fear of being underwater. Even though she felt a bit nauseous, she did not want to take the headset off!
The tutorial was a breeze to get through. Unlike Ocean Rift, where you navigate and swim using the trackpad on the headset, Relay Cars used gaze control for selection: looking at a navigation button for 2–3 seconds makes the selection automatically, without having to reach for the trackpad.
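Dwell-based gaze selection like that is straightforward to sketch: track how long the gaze stays on one target each frame, and fire the selection once a dwell threshold (2–3 seconds in Relay Cars’ case) accumulates. A minimal, hypothetical version, not taken from any real VR SDK:

```python
class DwellSelector:
    """Fires a selection once gaze rests on a single target long enough."""
    def __init__(self, dwell_seconds=2.5):
        self.dwell_seconds = dwell_seconds
        self.target = None     # target currently under the gaze
        self.elapsed = 0.0     # seconds of continuous dwell on it

    def update(self, gazed_target, dt):
        """Call once per frame; returns the selected target, or None."""
        if gazed_target != self.target:  # gaze moved: restart the timer
            self.target = gazed_target
            self.elapsed = 0.0
            return None
        if gazed_target is None:
            return None
        self.elapsed += dt
        if self.elapsed >= self.dwell_seconds:
            self.elapsed = 0.0           # reset so it doesn't re-fire every frame
            return gazed_target
        return None

selector = DwellSelector(dwell_seconds=2.5)
# Four 1-second frames staring at the same button: fires on the fourth.
fired = [selector.update("menu_button", dt=1.0) for _ in range(4)]
# fired == [None, None, None, "menu_button"]
```

Resetting the timer whenever the gaze moves is what keeps a casual glance around the scene from accidentally pressing buttons.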
The goal of our initial guerrilla VR study was to find out whether people would actually wear a headset at work (the majority said yes) and what VR could be useful for (many, many ideas). We have since shortlisted those ideas and are developing a demo; more to come.
Austin: a beautiful city with a river crossing downtown, a music scene, a young population, cycling, brisket, and the home of SXSW, a big multicultural conference with something for every taste: Film, Interactive and Music.
This was my first time attending the conference, but Noel (@noelportugal) is a year-after-year attendee. It’s well known that this conference is not only a springboard for small companies and startups to show the whole world what they are cooking up, but also big exposure for new services, products, trends, you name it. That’s why we are very interested in conferences like this; they are very aligned with our team’s spirit.
I mean it.
Since Google I/O 2014, I’ve been following the progress of VR and AR. At that time, Google released Google Cardboard, inexpensive goggles for viewing VR content, and Project Tango for AR. Yes, I know you can argue VR has been around for quite a long time, but I believe they exposed the right development tools and a cheap way to develop and consume the technology, so a lot of people got engaged. However, some others remained very skeptical about use cases.
But now, two years later, guess what? VR is on everyone’s lips, and SXSW was no exception.
I have to say, I’m very impressed by how many companies have adopted this technology so fast. Of course, we all saw this wave coming with announcements of products like Oculus Rift, HTC Vive, Noon VR, Microsoft HoloLens and so on. And of course, as an emerging technology team, we were already prepared to be hit by the wave.
I still can’t get used to seeing people with a headset over their eyes and headphones on, 100% isolated from reality. I tried most of the VR demos presented, and my brain/body is still not prepared for many VR experiences; I had a headache and felt weird after so many demos.
Also, I could see people with red marks all around their faces from wearing the headsets all day. Even so, this helped me analyze and sum up that pretty much all the demos follow the same use case: advertising and promoting products.
It’s really interesting that retail and product companies are investing in this technology to attract more buyers and to better convey how it feels to hold their product. This can be applied, for example, to automobiles, houses, travel agencies, etc. Funnily enough, this technology is sometimes combined with motion platforms for a complete experience.
Note: don’t ever try a selfie while wearing a VR headset, almost impossible 🙂
I would like to bring up a story from one of the panelists talking about VR that I found very interesting: his best friend’s rock band was in town for a concert while he was experimenting with VR. He suggested that they record one of his favorite songs in a way that could be post-produced to be seen in VR.
The band accepted, and he set up all the recording production in front of the stage, but he remained backstage monitoring the production while they played his favorite song. All went fine, and although he missed the opportunity to see and hear his favorite song live, he could watch the VR video several times over the following couple of weeks.
Then, when they met again and his friend asked him about the concert, he could almost say that he had been virtually in the front row, enjoying the concert like everyone else, singing and jumping.
Examples like this are very impressive. They make our brains believe things we didn’t actually live. In the end, that’s what we are doing to our brains: cheating them.
This could be good or bad, depending on the point of view and circumstances. But just think about health and medicine: helping people with Alzheimer’s recover lost thoughts and memories with VR. That’s huge.
Another common use case is training people or showing how to perform procedures. Imagine medical students being trained in surgical techniques, or how to react under stressful circumstances. Or how to work in risky areas like mines, radioactive plants or even places as simple as a warehouse.
VR is also being used as a medium for human expression, like painting in 3D.
Companies are also taking advantage of VR to sell products on top of it or to complement it, like this microphone for recording 3D audio.
Speaking of AR, I saw an awesome concept that combines virtual objects with physical objects. Interactions between them are very natural; it’s possible to turn a physical object into a virtual object, and the interaction is hands-free. Check out the video.
Also, we witnessed the beta release of the Metal Glass device, a combination of VR and AR. They claim that by next year monitors won’t be necessary anymore, as VR + AR will replace them. Check out the video.
VR is the thing and we, as a team, are working to come up with cool use cases.
Internet of Things.
This is a hot topic too. We saw a lot of startups and companies offering products and services for office and home automation, security, etc.
Noel and I attended a couple of IoT workshops from companies and startups that are making this hardware revolution very affordable and simple.
We are convinced that there are still a lot of use cases out there, and we’re going to continue investigating.
Also, a big concern in IoT for big companies is data privacy, as small companies are not paying much attention to it. IoT generates big data, which can in turn be analyzed and reduced to analytics, behaviors, forecasts and needs. Just imagine: where is all the data collected from your Internet-connected fridge going? And your thermostat data? And your lights data? This is where regulations may help protect your data and privacy.
Humanoids, robots, AI and machine learning.
This is a hot topic too, not as big as VR, but, believe me, it’s the next big thing. Apps cannot be the same as they were ~4 years ago; people’s needs have changed, raising the bar for app developers, and one way to reach that bar is with AI.
Users want to accomplish tasks quickly, with minimal effort and almost no interaction at all. Our team has raised the bar too, which is why I decided to attend the Machine Learning sessions. They were very basic and not too technical, but it was great to see that we are looking in the right direction.
It was very interesting to see all these concepts combined, and companies investing in research to make futuristic concepts more affordable.
Here’s Noel losing to a robot at rock-paper-scissors:
And here’s a video of Pepper, a humanoid robot.
We saw all kinds of wearables: health trackers, pillow sensors, light-based eye therapy to recover from jet lag or insomnia, pet wearables that read your pet’s feelings, ergonomic wearables, and gloves used as a digital interface controller that can interact with laptops, iPads, etc.
Sony unveiled a wearable device, currently referred to as N, that has a hands-free user interface and lets users receive audio without putting on a headset. It has a virtual assistant that helps answer questions. A camera is also positioned on one end of the device, so we can ask it to take a photo via voice.
At SXSW, there are a lot of things to do including talks, sessions, lounge demos, expo and workshops.
When I was putting together my schedule, I noticed there were a lot of JS workshops, so I decided to join them. Most of them were self-paced and very introductory, but it was good to see new programming paradigms like React and functional programming making their way into JS.
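To give a flavor of the functional style those workshops covered, here is a tiny sketch of my own (not code from any session): pure functions chained over data with map, filter, and reduce, instead of loops that mutate state. The data itself is made up for illustration.

```javascript
// Illustrative only: a functional-style pipeline over some made-up data.
const sessions = [
  { title: 'Intro to React', hours: 2 },
  { title: 'Functional JS', hours: 3 },
  { title: 'Hardware Hacking', hours: 4 },
];

// A pure predicate: no side effects, same input always gives same output.
const isShort = (s) => s.hours <= 3;

const totalShortHours = sessions
  .filter(isShort)                 // keep only the short sessions
  .map((s) => s.hours)             // project each session to its hours
  .reduce((sum, h) => sum + h, 0); // fold the hours into one total

console.log(totalShortHours); // 5
```

The appeal is that each step is a small, testable function, and the pipeline reads top to bottom like a description of the data transformation.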
Time to wrap up and say “until next year.”
As a plus, Pi Day was celebrated during SXSW, and for the occasion we saw a cool implementation using Raspberry Pis as a distributed system for search queries.
And that’s it. I love this kind of conference, where you can observe where technology and new concepts are going. And more importantly, it helps us innovate and improve what we do as a team.
This is 2016, and it seems this is the year of VR. Of course, we at the AppsLab can’t miss a beat!
While Osvaldo (@vaini11a) started to look into Unity-based VR capability and prototypes, I wanted to take a look at the WebVR-based approach. The prospect of delivering a VR experience in a browser, over the web, suddenly makes VR so much more accessible – WebVR can be designed to work with or without a VR headset. In a sense it is an extension of the responsive web, adjusting to different renderers/viewers gracefully.
The first thing that came to my mind was to VR-enable some of our visualization demos, and I picked John’s Transforming Table for the first try. After a series of hacks, I got a half-baked, non-functional result; if we had the Oculus Rift, we could get it working. A-Frame is at a very early stage, and there is still a lot to be desired.
I realized I needed a perspective change – instead of fitting the existing presentation and behavior into VR, WebVR/A-Frame is better suited to create a new presentation and behavior that blends with VR naturally.
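To show why the A-Frame approach is so accessible, here is a minimal, hypothetical scene sketch (not our actual Gadget Lab code; the release version and image file name are assumptions). The whole VR scene is just declarative markup that renders in any browser:

```html
<!-- Hypothetical minimal A-Frame scene, for illustration only.
     The script version and asset file names are assumptions. -->
<html>
  <head>
    <script src="https://aframe.io/releases/1.5.0/aframe.min.js"></script>
  </head>
  <body>
    <a-scene>
      <!-- A 360-degree photosphere as the backdrop -->
      <a-sky src="lab-photosphere.jpg"></a-sky>
      <!-- A simple labelled station built from primitives -->
      <a-box position="0 1 -3" color="#4CC3D9"></a-box>
      <a-text value="Demo station" position="-1 2.2 -3"></a-text>
    </a-scene>
  </body>
</html>
```

Without a headset, A-Frame falls back to drag-to-look on desktop and device-tilt on mobile; with one attached, the same markup renders stereoscopically. That graceful fallback is the "responsive web" quality mentioned above.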
Jake (@jkuramot) and Noel (@noelportugal) had just come back from a trip to Sydney, and told us a story about someone following our team and reading our posts from far, far away – the other side of the planet 🙂
And wouldn’t it be nice if they could see our Gadget Lab too?
So after a couple of days, here it is – our Gadget Lab in VR.
You can step into the Gadget Lab, and step out onto the Oracle campus. While you are in the virtual Gadget Lab, you can see several stations I have labelled to show some of the human-machine interfaces we are investigating.
I don’t have a fancy stereoscopic 360 camera rig, so I just used an old Android phone to produce the scenes of the Oracle campus and the Gadget Lab interior. The phone is a little underpowered, so you may see some blanked-out areas, because the phone would reboot when I tried to capture the entire scene.
Let me know what you think in the comments.
Editors’ update: To experience this virtual tour of our Gadget Lab, simply navigate to http://theappslab.com/vr/appslab in any browser. Even if you don’t have a VR headset, you can get a feel for the space and see a little bit about the demos.
In a desktop browser, you can use your mouse to look around the room. On a mobile browser, you can tilt your device to pan around the lab, and if you want the full VR experience, tap the glasses icon in the lower right-hand corner, and pop your phone into a compatible viewer, e.g. Google Cardboard.
The future is now.
Yesterday, I talked, in part, about how we can, if we choose, work all the time, from the moment we open our eyes, until we close them for sleep.
So today’s as good a Friday as any to remind you to balance all that work with some fun.
And here’s Anthony showing our Gadget Lab demos to some kids on Spring Break.
We make a conscious effort to take time out and show our fun stuff to kids. Why? Because fun is important.
Back in 2012, I pondered what I could show at Kscope12 that would spice up my presentation. I wanted to add something fun to my session, and after chatting with Noel, we settled on the Rock ’em Sock ’em robots controlled by text/phone as a fun way to keep people’s attention.
During the conference, I stumbled upon something. Turns out when you show people something fun, their creative juices flow, and you get lots of cool ideas. Like this:
I’m not an expert in every functional area of business. I may know some Financials from my years as an E-Business Suite product manager, but I don’t know much about sales or HR or supply chain.
So, when we have something we think will be important for users in the not-so-distant future, i.e. an emerging technology, we often build a fun demo to help those domain-specific use cases rise to the surface.
People play with the demo, they have some fun with it, and we talk about ways that particular technology could apply to their everyday work.
This works, believe it or not, and we’ve been repeating that formula with great results for several years. It’s part of our strategy.
For example, robot arms and race cars to investigate gesture as an input device, the Internet of Things for when things happen and for connecting dumb things to the Internet, and using your mind to drive a robotic ball.
It’s become an annual team activity to come up with the year’s fun demo, and everyone loves the fun demos, building them, showing them, playing with them.
So have fun out there. We will.
Now, if only I could talk Noel into resurrecting those Rock ’em Sock ’em robots.
Earlier this month, our strategy and roadmap eBook was released. In it, you’ll find all the whys, wherefores, whats and hows that drive the Simplicity-Mobility-Extensibility design philosophy we follow for Oracle Cloud Applications.
The eBook is free, as in beer, and it’s a great resource if you find yourself wondering why we do what we do. Download it now.
In said (free) eBook, you’ll find this slide.
Guessing I’ve seen our fearless leader and GVP Jeremy Ashley (@jrwashley) present this slide 20-some times around the World, and each time he asks, “What’s the first thing you do in the morning?”
Inevitably, 90% of the audience says pick up my phone. He’ll then ask how many people in the audience have only one computing device, two, three or more? Overwhelmingly, audiences have three or more.
These are international audiences, so there’s no geographical bias.
I love this slide because it succinctly portrays the modern work experience, spent across devices, all day long. As Jeremy says, we have the ability to work from the moment we open our eyes to wake to the moment we close them for sleep.
You can debate whether that is a good thing or not, but the fact is our users are mobile and device-happy. They use whatever device fits their needs at any given time.
And devices keep changing. For instance, this slide had a head-mounted display glyph at one point to represent a Google Glass-like device, and the smartwatch looked like a Pebble, not an Apple Watch.
That’s where we (@theappslab) come in; we’re always reading the tea leaves, leaning into the future, trying to anticipate what users will want next so we can skate to where the puck will be.
Mixing metaphors is fun.
Anyway, download the free eBook and learn about the OAUX strategy and roadmap and keep reading here to see where we fit.
VR is big and is going to be really big for the game industry, and you could feel it in the air at GDC 2016. For the first time, GDC added two days of VR development-focused events and sessions, and most of the VR sessions were packed – the lines were long even 30 minutes before the sessions, and many people were turned away. The venue for VR sessions had to be changed to double the capacity for day 2.
There was lots of interest and enthusiasm among game designers, developers and business guys, as VR represents a brand new direction, new category, and new genre for games!
These are still the early days of VR games, with hardware, software, content, approaches, etc. starting to come together. Based on what I learned during GDC, I’d like to summarize the state of various aspects of VR development.
1. VR Headset
This is the first thing that comes to mind when we talk about VR, right? After all, the immersive experience is cast into our minds while we cover ourselves with the VR headset. There are a couple of VR headsets available on the market, and a slew of VR headsets debuting very soon.
From the $10 Google Cardboard, to the $100 Samsung Gear VR, to >$1000 custom rigs, VR headset prices span a wide spectrum, and so do capability and performance. Most people who want to get hold of VR will likely choose among the Samsung Gear VR, PlayStation VR, Oculus Rift, and HTC Vive. Here is a brief comparison so you have some idea of what you can get.
Samsung Gear VR
It uses specific Samsung phones to show VR content, so the performance is low as it is limited by the phone hardware, usually at 60fps. It has a built-in touchpad for input, but you may also use an optional gamepad. It has no wire to connect to PC, so you can spin around on a chair and not worry about tangling yourself. It has no position tracking.
If you own a Samsung S6/S7, or the Edge version, why not get the Gear VR to experience the magic? $99 seems really inexpensive for any new gadget. Even if you have a non-Samsung phone, you can still slip it into the rig and use the Gear VR as an advanced version of the Cardboard viewer. Of course, you will not have the control pad capability.
PlayStation VR
It uses a PS4 to run VR games, so it has real game-grade hardware to run VR content at 120fps, with consistently high performance. For input, it has a gamepad and tracked controllers, like holding a beacon with a light bulb. It has limited position tracking.
The unique part of PSVR is that it is meant to be played alongside regular gamers on TV screens, making it a party game in your living room. The person wearing the PSVR gets the immersive feeling in the game, while others on the TV can fight with or play along with that person (a game character with a VR headset). If you have a PS4 at home, shelling out another $399 seems reasonable for a decent VR game experience. But you’ll have to wait until October 2016 to buy one, right before the holiday season.
Oculus Rift
This is expected to be a high-end VR headset, with games running on a powerful Oculus-ready computer. It will have very high performance, showing VR content at 120fps or higher. It will have a wire connected to the computer, which limits how far you can spin around. It has limited position tracking too. It does not come cheap at $599, but you can get it pretty much now, in March.
HTC Vive
It is considered to be of even higher spec than the Oculus Rift. It requires a muscular PC, with motion sensors and motion controllers attached, and it delivers very high performance for VR games. It has tracked hand controllers for input and provides room-scale position tracking, which puts it above everyone else. For designers/developers, this room-scale tracking capability may open another dimension for experiments.
It costs $799, because it is high-end hardware and bundled with a bunch of bells and whistles. And you can expect to get it in April if you pre-order one now.
HoloLens is another interesting device for VR/AR. Rumor also has it that Google is building a VR headset too – one much more powerful than its Cardboard viewer.
2. Game Engine for VR
Recent trends indicate that game engine companies are making it easier (or free) for people to access game engine software and develop games on it. There were quite a number of sessions covering detailed topics on specific game engines, but based on my impressions, here is the list to try out.
Unity 5.3 by Unity Technologies – It has a free version (Personal Edition) with full features. I believe it is the most popular and most widely used game engine, with cross-platform deployment to a full range of mobile, VR, desktop, Web, console and TV. Many of the alt.ctrl.GDC exhibits also used Unity to create the games their controllers interact with.
Unreal Engine 4 by Epic Games – It is a sophisticated game engine used to develop some AAA games. They also showcased two VR games, Bullet Train and Showdown. The graphics and visual effects look astonishing.
Lumberyard by Amazon – It is a new entry to the engine game, but it is free with full source, meaning you can tweak the engine if necessary. It would be a good choice for developing online games, with no need to worry about hosting a robust game backend. I guess that’s where Amazon wants its share of the game. It does not support VR yet, but will add such support very soon.
3. Capture Device
For many VR games, designers/developers just create a virtual game world using a game engine and other graphical software. But in order to show real-world events inside a VR world, you need a special video camera that can take 360-degree, or spherical, photos and videos.
Well, most of us may not have seen or used this type of camera, me included, so I don’t have any opinions on them. I did use the native Camera app on an Android device to capture spherical photos, but it was difficult to take many shots and stitch them together.
A step further is stereoscopic video capture, which takes two photographs of the same object at slightly different angles to produce depth. These are high-end professional rigs, with many custom-built versions, and the price can easily go above $10k.
This area is still quite fluid, and I am not sure it will ever go mainstream. I hope consumer versions in a reasonable price range become available, so we can produce some VR videos too.
4. Convention and Best Practice
With fewer than 100 real VR game titles in total, people in the VR field are still trying to figure things out, and no clear conventions have yet surfaced for designers, developers and players.
In some sessions, VR game designers and developers did share the lessons they learned while producing their first several VR games, like interaction patterns, reality trade-offs (representational, experiential, and interaction fidelity), and fidelity contracts in terms of physics rules, affordances, and narrative expectations. Audio (binaural audio) and visual effects also help realize an immersive experience.
We shall see more and more “best practices” converge; with more research into VR psychology and UX, conventions will emerge to put designers and players on the same page.
5. Area of Use
By far, games are the most natural fit for the VR experience, and the entire game industry is driving toward it. Cinematic VR will be another great fit; as ILM X Lab demonstrated with “Star Wars,” the viewer may “attach” to different characters to experience various viewpoints in the movie.
People also explored VR as a new way of storytelling in journalism, a new way of exercising for sports (e.g., riding a stationary bike in the gym that feels much like driving a Humvee through a war zone), and a new way of education, e.g., going inside a machine and looking at the inner mechanism of an engine.
VR brings another aspect of artistic expression as new art media, challenges us to advance technology to a new frontier, and at the same time, provides us with great opportunities.
Things are just getting started!
We are still in the early days of virtual reality. Just as in the early days of manned flight, this is a time of experimentation.
What do we wear on our heads? Helmets? Goggles? Contact lenses? Or do we simply walk into a cave or dome or tank? What do we wear or hold in our hands? Game controllers? Wands? Glowing microphones? Bracelets, armbands, and rings? Or do we just flap our arms in the breeze? Do we sit? Stand? Walk on a treadmill? Ride a bike? Or do we wander about bumping into furniture and each other?
As a person who prefers to go through life in a reclining position, most of these options seem like too much bother. I have a hard time imagining how VR could become ubiquitous in the enterprise if employees have to constantly pull on complicated headgear, or tether themselves to some contraption, or fight for access to an expensive VR cave. VR in the workplace must be ergonomic, safe, and easy to use even before you’ve had your morning coffee.
Lately I’ve been enjoying VR content, goggle-free, from the comfort of my lazyboy using an Apple TV app called Littlstar. Instead of craning my head back and forth, I just slide my thumb to and fro on the Apple remote. I can fly through the air and swim with the dolphins without working up a sweat or stepping on a cat.
To be clear: watching VR content on TV is NOT real VR. It’s nowhere near as immersive. But the content is the same and the experience is surprisingly good. Navigation is actually better: because it is effortless I am more inclined to keep looking around.
The Apple remote strikes me as the perfect VR controller. It is light as a feather, easy to hold, lets you pan and drag and click and zoom, and you can operate it blindfolded.
Watching VR content on TV also makes it easier to share. Small groups of people can navigate a virtual space together in comfort. One drawback: it’s fun to be the person “driving,” but abrupt movements can make everyone else a tad queasy.
What works in the living room might also work well at a desk – or in a meeting room. TVs are already replacing whiteboards and projection screens in many workplaces. And the central innovation of the fourth-generation Apple TV, the TV app, creates a marketplace to evolve new forms of group interaction. I expect there will be a whole class of enterprise TV apps someday.
For all these reasons, I have been pushing to create Apple TV app counterparts to the VR apps we are starting to build in the AppsLab. TV counterparts could make it easier to show prototypes in design meetings and customer demos. I feel validated by Tawny’s (@iheartthannie) report from GDC that Sony has adopted a similar philosophy.
Thanks to one of our talented developers, Os (@vaini11a), we already have one such prototype. It doesn’t do much yet; we are just figuring out how to display desktop screens in a VR environment. With goggles on I can use the VR app to spin from screen to screen in my office chair and look down at my feet to change settings. With the Apple TV counterpart app, I can do exactly the same thing without moving anything other than my thumb.
It’s still too early to predict how ubiquitous VR might become in the workplace or how we will interact with it. But TV apps, or something like them, may become one way to view virtual worlds in comfort.
Tawny (@iheartthannie) and I attended the 30th edition of GDC – the Game Developers Conference. As shown in Tawny’s daily posts, there were lots of fun events, engaging demos, and interesting sessions – so many that we simply could not cover them all. With 10 to 30 sessions going on in any time slot, I wished for multiple “virtual mes” to attend some of them simultaneously. However, with only one “real me,” I still managed to attend a large number of sessions, mostly 30-minute ones, to cover more topics at a faster pace.
Unlike Tawny’s posts that give you in-depth looks into many of the sessions, I will try to summarize the information and take-aways in two posts: Part 1 – Event and Impression; Part 2 – The State of VR. This post will cover event overview and general impression.
1. Flash Backward
After two days of VR sessions, this flashback kicked off the game portion of GDC with a sense of nostalgia, flashing through games like Pac-Man and Minesweeper, evolving into console games, massively multiplayer games, social games (FarmVille), mobile games (Angry Birds), and on to VR games.
GDC has been running for 30 years, and many of the attendees were not even born back then. The Flashback started with Chris Crawford, the founder of GDC, and concluded with Palmer Luckey, the Oculus dude, who at 23 did not have much to flash back on, only looking forward to the new generation of games in VR. He will be back in 20 years for the retrospective 🙂
2. Awards Ceremony
On 3/16/2016, two awards ceremonies were hosted in recognition of creativity, artistry and technological genius – the Independent Games Festival Awards and the Game Developers Choice Awards. I believe they are the equivalent of the Oscars for the movie industry, and they ran the exact same format as the Oscars.
As you can see, the big winner of the night was “Her Story” (by Sam Barlow), which won 5 of its 6 nominations. It is an indie title, but it also took 3 wins competing against big producers, because it created a fresh way of storytelling through a game. And “The Witcher 3: Wild Hunt” took the honor of “Game of the Year.” Gamers: check out the list, and check out the games if you have not played them.
The ceremony also honored Todd Howard for “Lifetime Achievement Award.” He is a designer, developer, director and producer for award-winning titles “Oblivion,” “Fallout 3,” and “Skyrim,” etc. Markus “Notch” Persson, the programmer and developer of Minecraft, took the honor of “Pioneer Award.” Yeah!
3. alt.ctrl.GDC
As a maker myself with the AppsLab, I found the alt.ctrl.GDC interactive exhibits extremely satisfying – just some insane ideas of how controllers can be made for games.
I tried most of the controllers, such as licking popsicles to suck up planet resources in a game; mutating a controller to make the object in the game fly, swim or crawl; cranking handles to drive tanks.
“Keep Talking and Nobody Explodes” must have been one of the favorites at alt.ctrl.GDC 2015, and the mechanical box still stood out! It has turned into a real game – nominated in three categories, it won “Excellence in Design” at the IGF Awards ceremony. It is a fun game; check it out!
“Please Stand By” is my favorite from alt.ctrl.GDC 2016. What do you do when you find a vintage TV box in a junkyard? Well, it has all the controllers, even though they do not work anymore. After a wizardly spin, it came back to life – I overheard the secrets, if you are ever intrigued to know how it was done.
Now it shows many channels of game TV – of course, you have to tune it carefully with all the knobs and rabbit ears. Oh, there are some buttons on the back too that do some tricks. If it ever freezes on you, pound it or shake it, like you would an old TV box.
4. Game Making and Animation
This is too big of a topic for a section in one blog post, so I am not going into any details.
I just want to appreciate how much work and thought people put into making a game. As an example, just look at this one slide from the UFC 2 session:
That is just one grappling position change, and it derives into so many permutations depending on how players control it. Now imagine working on the animation for each of those permuted position changes. So in UFC 2, the technical geniuses tried to find a procedural way of automating some areas of animation.
Of course, there are so many other aspects of game making – as indicated by the many categories of awards. In addition to the creative side, there is also the technological side of running massive online games, or dealing with all manner of devices.
As much as technological advances drive game development, game making drives technological advances! People are pushing the edge of the envelope to make the next generation of games in VR. Speaking of VR, stay tuned for my next post on “The State of VR.”
I’ve been doing this job for various different organizations at Oracle for nine years now, and we’ve always existed on the fringe. So, having our own home for content within the Oracle.com world is a major deal, further underlining Oracle’s increased investment in and emphasis on innovation.
Today, I’m excited to launch new content in that space, which, for the record is here:
We have a friendly, short URL too:
The new content focuses on the methodologies we use for research, design and development. So you can read about why we investigate emerging technologies and the strategy we employ, and then find out how we go about executing that strategy, which can be difficult for emerging technologies.
Sometimes, there are no users yet, making standard research tactics a challenge. Equally challenging is designing an experience from scratch for those non-existent users. And finally, building something quickly requires agility, lots of iteration and practice.
All-in-all, I’m very happy with the content, and I hope you find it interesting.
The IoT Smart Office just happens to be the first project we undertook as an expanded team in late 2014, and we’re all very pleased with the results of our blended research, design and development team.
I hope you agree.
In the coming months, we’ll be adding more content to that space so stay tuned.
When I first came to GDC, I didn’t know what to expect. I was delightfully surprised to use my first gender neutral restroom. The restroom had urinals and toilet seats. There was no fuss other than others who were standing to take a picture of the sign above. It felt surreal using the restroom next to a stranger who was not the same gender as I. The idea is a positive new way of thinking and fits perfectly with one of the themes of the conference: diversity.
In my last games user research round table, one of the topics we spent a lot of time on was sexism and how we could do our part to include underrepresented groups in our testing. One researcher began with a story about a female contractor he worked with to perform a market test on a new game. One screener question surprised him the most:
What gender do you identify as?
Male [Next question]
Female [Thank her for her time. Dismiss]
O-M-G. The team went back and forth with the contractor for 4 iterations before she agreed to change that question in the screener. Her reasons were:
- Females are not representative of his game’s audience. Wrong: females made up half of his previous game’s total audience.
- Females are distracting; the males will flirt with the females during testing. Her solution: have one day to test all the female testers and another day to test all the male testers.
- Females don’t like competitive shooting games. Wrong, see the first bullet point. As of March 2016, female preference for competitive games overlaps with male preference by 85%.
If your group of testers is randomly chosen, but they are all straight white males, is that a truly random sample? To build a successful game, it is important to test with a diverse group of people. Make sure that most, if not all, groups in your audience are represented in the sample. This will yield more diverse and insightful findings. You may have to change the language of your recruitment email to target different types of users.
For example, another researcher wanted a diverse pool of gamers with little experience. His only screener was that they play games on a console for at least 6 hours a week. No genre of games was specified. He got a 60-year-old grandma who played Uno over Xbox Live with her grandkids for 6–8 hours every Saturday and Sunday. She took hours to get past level one, but because she was so meticulous and wanted to explore every aspect of the demo, she pointed out trouble spots in the game that most testers speeding through would miss!
Recently, on our own screeners at The AppsLab, we ask participants what gender they identify with instead of bucketing them into male or female. It’s a small change, but a big step in the right direction toward equality.
The presence of UX
The presence of UX and user research has grown since last year. Developers and publishers recognize the importance of testing iteratively, early and often. In the “Design of Everyday Games” talk with Christina Wodtke the other day, she told the packed room that there were just 8 people at the same talk the year before. From 8 to a packed room of hundreds is huge growth, and a win for the user and for the industry!
Epic Games spoke about product misconceptions that make it difficult to incorporate user experience into the pipeline. UX practitioners are like hedgehogs: we want to help by giving the product the extra hug it needs, but our quills aren’t perceived as soft enough. Our goal is to deliver the intended experience to the targeted audience, not to change the design intent.
- Misconception #1: UX is common sense. Actually, the human brain is filled with perceptual, cognitive and social biases that affect both the developers and the users.
- Misconception #2: UX is another opinion. UX experts don’t give opinions. We provide an analysis based on our knowledge of the brain, past experience and available test data.
- Misconception #3: There aren’t enough resources for UX. We have resources for QA testing to ensure there are no technical bugs; can we afford not to test for critical UX issues before shipping?
To incorporate UX into the pipeline, address these product misconceptions. Don’t be afraid of each other; just talk. Open communication is the key to creativity and collaboration. Start with small wins to show your value by working with those who show some interest in the process. Don’t be the UX police, jumping on every UX issue to start a test pipeline. Work together and measure the process.
Overall, I loved the conference. The week flew by quickly, and I got great insights from industry thought leaders. The GDC activity feed was bursting with notes from parallel talks. I fell in love with the community, and I’m in awe that a conference of this size grew from a meeting in a basement 30 years ago. I sure hope there is a UX track next year! I decided to end my week with a scary VR experience, Paranormal Activity VR. The experience focused on music and sound to drive the suspenseful narrative. Needless to say, I screamed and fell to my knees. It beats paying to go to a haunted maze every Halloween.
It’s official. All demos are booked for the week. Anyone not on the list is subject to the standby line. I was lucky enough to score a 5:30pm demo of Bullet Train at the NVIDIA booth early this morning. When I walked by the line late in the evening, I found out that one lady had been waiting at least an hour for her turn.
Raymond (@yuhuaxie), one of our developers, tried his luck playing games at the “no reservations accepted” Oculus store-like booth 30 minutes before the expo opened, and still had to wait almost an hour before leaving the line for other session talks. Is it worth the hype? The wait? The fact that you’re crouching and screaming at something no one else can see?
Apparently so! One common sentiment I heard from others who finished playing the demo was that the experience was so amazing that they didn’t care about the friction to enjoy the 10–15 min in virtual reality! For Bullet Train, there had been several repeat visitors to play the fast-paced shooting game again and again!
Today, I had my chance to demo London Heist on the PS VR and Bullet Train on the Oculus Rift. Both are fast-paced shooting games. The head-mounted display (HMD) for the PS VR is much more forgiving for those who wear glasses. It wears similarly to a bike helmet, but with no straps to mess with. To adjust, you simply slide the viewer forward and back, separate from the mounting. It’s much lighter than the other HMDs and breathes better. Here’s gameplay footage of the demo I went through.
London Heist has simple interactions for a shooting game. The game first eases you in as you ride as a passenger with your buddy on the streets of London. You can sit there and get a chance to orient yourself with your new surroundings. Instead of practicing how to grab guns, I gulped down a 7up instead 😡
Finally, a car chase ensues and bullets come flying at you. The controls were simple: pull the trigger to grab the gun, and it stays attached to you for the rest of the game; just keep pulling the trigger to shoot. When you run out of bullets, grab the magazine right next to you with your free hand to reload! Easy peasy!
Bullet Train’s controls have a slightly higher learning curve, but the experience is fulfilling. In the game you can teleport by creating a portal at the destination you want to reach, grab multiple guns to shoot, slow the game down to slow motion (discoverability) and grab bullets flying toward you out of the air to throw them back at enemies.
There are so many things you can do that you forget how to do them all. I personally stumbled near the end: I was so busy trying to grab bullets out of the air and throw them back that I forgot how to grab new guns! After the short demo, I felt myself begin to sweat. A change in mental model is needed, since typical shooter games allow you to press shortcut keys to perform those actions. In VR, you DO those actions. Luckily, it does not detract from the immersion at all. It was fun, and I heard that a few attendees came back to replay the demo with improved execution.
The change in mental model came up on day 2 of the user research round table, where we focused on mental models for game control patterns. All control schemes are inherently non-intuitive. The game industry has been lucky that developers converged on the same control pattern for first-person shooters, aka the Halo scheme.
When we look at control schemes for other game genres, it’s a bit of a mess. The same may hold for VR, since controls depend on each game’s mechanics. Generally, players prefer gaze-based direction. This means that the direction you are looking is the direction you expect to turn toward in the game.
Typically, when you want to change direction in real life, you turn your torso. The preference for gaze-based direction is part of the Counter-Strike Effect: those who are used to first-person shooter games are accustomed to looking to turn rather than rotating their torso.
It’s definitely a new mental model to learn. We have to remember what technologies and experiences users are coming from, and what platform and core experiences we are developing for, then make judgment calls based on that.
Look at these players actually turning! It was easy and turning was quick! Worked up a bit of a sweat here too.
— Tawny (@iheartthannie) March 17, 2016
The above is why the on-boarding experience for games is so important. Tutorials are necessary to ensure that players understand the core game mechanics. Players tend to overestimate themselves and skip tutorials when given the option to do so.
Rather than giving them the option to skip, the installed game should know whether it is your first time playing. First-timers go through the tutorial. Everyone else who has reinstalled the game on another device does not have to go through the tutorial again, but can still choose to replay it.
Space out tutorials evenly, or else players will suffer information overload. Leave room for discoverability: if players can discover a mechanic within 10 minutes of play after the core tutorial, it leads to greater satisfaction. Induce information-seeking behavior and bring up the tutorial when they need it. Avoid front-loading the player.
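The first-run gating described above can be sketched in a few lines. This is a hypothetical illustration, not any engine’s actual API; the key assumption is that `profile` is account-level state that syncs across devices, so a reinstall on new hardware doesn’t retrigger the tutorial.

```python
# Sketch: gate the tutorial on the player's account, not the local
# install. All names here are illustrative.

class TutorialGate:
    def __init__(self, profile):
        # `profile` is assumed to be synced, account-level state.
        self.profile = profile

    def should_run_tutorial(self):
        # First-timers always get the tutorial; everyone else skips it
        # by default but can still replay it on request.
        return not self.profile.get("tutorial_completed", False)

    def complete_tutorial(self):
        self.profile["tutorial_completed"] = True

# A fresh profile runs the tutorial; a synced veteran profile skips it.
new_player = TutorialGate({})
veteran = TutorialGate({"tutorial_completed": True})
print(new_player.should_run_tutorial())  # True
print(veteran.should_run_tutorial())     # False
```

The same flag could also drive the “bring up the tutorial when they need it” advice, by checking it lazily the first time a mechanic is encountered rather than front-loading everything.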
More on Motivation
To better understand the psychology behind gamers’ motivations, Quantic Foundry looked at 2000 data points and found 12 unique motivations that fall into 6 themes:
- Action (Boom!) — destruction and excitement.
- Social (Let’s play together) — competition and community.
- Mastery (Let me think) — challenge and strategy.
- Achievement (I want more…) — completion and power.
- Immersion (Once upon a time) — fantasy, meaning to be another character or in another place, and story, to be caught up in a plot.
- Creativity (What if?) — design and discovery.
At a high level, there are 3 motivational clusters.
- Action — Social
- Mastery — Achievement
- Immersion — Creativity
Discovery is a bridge between Mastery — Achievement as well as Immersion — Creativity. Design is a bridge between Action — Social. These results were consistent across all geographic regions.
Not surprisingly, these game motivations mapped to personality traits. In psychological personality theory, there are the Big 5 personality traits.
When we drill down from the Big 5 to examine each trait, we find that it changes with context. For example, extraversion is typically associated with people who are social and energetic. Examined in the context of game motivations, it is associated with people who are social, cheerful, thrill-seeking and assertive, and therefore likely to be motivated by games in the Action — Social cluster.
Conscientiousness is associated with the Mastery — Achievement cluster, and openness with Immersion — Creativity. Game motivations align with personality traits. Games are an identity management tool, so people play games that align with their personality traits.
There are some gender differences: females are motivated by Fantasy, Design and Completion, while males are motivated by Destruction, Competition and Fantasy. However, that difference is dwarfed by age differences. Rather than designing for men and women, we should think about how games should be designed for different age groups.
The Action — Social cluster is the most age volatile group. As players grow older, Competition and Excitement drops. For females, story also drops. For males, Challenge also drops.
Imagine a game that changes its mechanics with you as you grow. Imagine if we could drive the health and wellness of our teams by employing the motivational UX strategies intrinsic to them. That would be pretty cool!
The Expo opened today and will be open until the end of Friday! There was a lot to see and do! I managed to explore 1/3 of the space. Walking in, we have the GDC Store to the left and the main floor below the stairs. Upon entering the main floor, Unity was smack dab in the center. It had an impressive setup, but not as impressive as the Oculus area or Clash of Kings’.
There were a lot of demos you could play, with many different types of controllers. Everyone was definitely drinking the VR Kool-Aid. Because of the popularity of some of the demos, reservations for a play session are strongly encouraged. Most, if not all, of the sessions were already booked for the whole day by noon. I managed to reserve the PS VR play session for tomorrow afternoon by scanning a QR code into their scheduling app!
The main floor was broken up into pavilions with games from their respective countries. It was interesting to overhear others calling their friends to sync up and saying “I’m in Korea.” Haha.
I spent the rest of the time walking around the floor and observing others play.
— Tawny (@iheartthannie) March 16, 2016
I did get a chance to get in line for an arcade ride! My line buddy and I decided to get chased by a T-Rex! We started flying in the air as a Pterodactyl. The gleeful flight didn’t last long. The T-Rex was hungry and apparently really wanted us for dinner. It definitely felt like we were running quickly, trying to get away.
Another simulation others tried that we didn’t was a lala land roller coaster. In this demo, players can actually see their hand on screen.
— Tawny (@iheartthannie) March 16, 2016
Sessions & Highlights
Playstation VR. Sony discussed development concepts, design innovations and what PS VR is and is not. I personally liked the direction they’re taking with collaboration.
- Design with 2 screens in mind. For console VR, you may be making 2 games in 1. One in VR and one on TV. You should consider doing this to avoid having one headset per player and to allow for multiplayer cooperation. Finding an art direction for both is hard. Keep it simple for good performance.
- Make VR a fun and social experience. In a cooperative environment, you get 2 separate viewpoints of the same environment (mirroring mode) or 2 totally different screen views (separate mode). This means that innovation between competitive and Co-op mode is possible.
The AppsLab team and I have considered this possibility of a VR screen and TV screen experience as well. It’s great that this idea is validated by one of the biggest console makers.
A year of user engagement data. A year’s worth of game industry data, patterns and trends was the theme of all the sessions I attended today.
- There are 185 million gamers in the US. Half are women.
- 72 million are console gamers. Of those console owners the average age is ~30 years old.
- There are 154 million mobile gamers. This is thanks to the rise of free-2-play games. Mobile accessibility has added diversity to the market and brought a new group of players. Revenues grew because of broad expansion. The average age for the mobile group is ~39.4 years old.
- There are 61 million PC gamers thanks to the rise of Steam. These gamers tend to be younger at an average age of ~29.5yrs.
- There are different motivations for why people play games. There are two groups of players: core vs. casual. Universally, casual players primarily play games to pass time while waiting and as a relaxing activity.
- There is great diversity within the mobile market. There is an obvious gender split between what females and males play casually. Females tend to like matching puzzle games (Candy Crush), simulation and casino games, while males tend to like competitive games like sports, shooter and combat city-builder games.
- When we look internationally, players in Japan have less desire to compete when playing games. Successful games there tend to be built on cooperation.
- Most homes have a game console. In 2015, 51% of homes owned at least 2 game consoles. At the start of 2016, there was an increase of 40% in sales for current 8th generation game consoles (PS4, Xbox One, etc minus the Wii).
- Just concentrating on mobile gamers, 71% play games on both their smart phone and tablet, 10% play only on their tablet.
- Top factors leading to churn are lack of interest, failure to meet expectation and too much friction.
- Aside from Netflix and maybe YouTube, Twitch gobbles up the most prime-time viewers, with almost 700K concurrent viewers as of March 2016. Its viewership is increasing despite competition from the launch of YouTube Gaming.
Day 1 — User research round table. This was my first round table at GDC, and it was nice to be among those in the same profession. We covered user research for VR, preventing bias and testing with kids! Experts shared their failures on these topics and offered suggestions.
- Testing for Virtual Reality.
- Give players enough time to warm up in the new environment before asking them to perform tasks. Use the initial immersive exposure to calibrate them.
- Be ready to pull them out at any indication of nausea.
- Use questionnaires to screen out individuals who easily get motion sickness.
- It’s important to remember that people experience sickness for different reasons. It’s hard to eliminate all the variables; some people have vertigo or claustrophobia that’s not necessarily the fault of the VR demo. Media coverage has also created a bias: people think they are going to be sick, so they feel sick.
- Do not ask people if they feel sick before the experience else you are biasing them to be sick.
- Individuals are only more likely to feel sick if your game experience does not match their expectations. Some people feel sick no matter what.
- One researcher tested 700–800 people in VR. Only 2 people said that they felt sick; 7–8 said they felt uncomfortable.
- An important question to ask is “At what point do they feel sick?” If you get frequent reports at a specific point vs. generalized reports, then you can do something to make the game better.
- Avoid bragging language. Keep questions neutral.
- Separate yourself from the product.
- Remember that participants see you as an authority. Offload instructions to the survey rather than relaying them yourself, since relaying them in person will bias the feedback.
- Standardize the experiment. Give the same spiel.
- The order of question is important.
- Any single geographic region is going to introduce bias. Only screen out regions if you think culture is going to be an issue.
- Testing with kids.
- It’s better to test with 2 kids in a room. Kids are not good at verbalizing what they do and do not know, and having 2 of them lets you watch them talk through their thoughts as they ask questions and help each other through the game.
- When testing a group of kids at once, assign the kids their stations and accessories. Letting them pick will end in a fight over who gets the pink controller.
- Younger kids aren’t granular, so allow for 2 clear options on surveys; a thumbs up and thumbs down works.
- Limit kids to one sugary drink or you’ll regret it.
Just like yesterday, the VR sessions were very popular. Even with the change to bigger rooms, lines for popular VR talks would start at least 20 minutes before the session started. The longest line I was in snaked up and down the hallway at least 4 times. The wait was well worth it though!
Today was packed. Many sessions overlapped one another. Wish I could have cloned 3 of myself 🙁
Throughout each session, I noticed points that have been repeated from yesterday’s daily roundup. There are definitely trends and general practices that the game industry has picked up on, especially in virtual reality. I’ll talk more about these trends later in this post.
PlayStation revealed the price of their new VR headset: $399! It’s said that PlayStation VR has over 230 developers on board and 160 diverse titles in development, 50 of which will be available this October. More info at the PS VR launch event tomorrow 🙂
There is a game called Rez Infinite developed for the PS VR. The line to try out the game was long! I wanted to take a picture of someone playing the game, but they asked kindly for no film or photography. Instead, here is a picture of the Day of the Dev banner!
Most popular VR demo so far
Aside from Eagle’s Flight, also built for PS VR, Everest VR lets you climb Mt. Everest from the comfort (and warmth) of your living room. I overheard that the chance to experience the climb with the HTC Vive controllers was booked out for the rest of the week!
Check out previews for both. Here’s Eagle’s Flight:
And Everest VR:
Immersive cinema with Lucasfilm. The entire session was a dream come true for fans of Star Wars and cinematic film, as well as audiophiles. Anyone who’s watched Season 4 of Arrested Development on Netflix is familiar with the ability to watch parallel storylines within the same episode. Lucasfilm let us experience that same interactive narrative with VR and Star Wars Episode 7!
They also let us in on their creative process for Star Wars: Trials on Tatooine, and reiterated the creative process espoused in many other game-making sessions: (a) Define the desired experience first, then test it. (b) Simplify the interaction. VR is still new; right now we are trying to get players to believe they are in another world. Slow the pacing at the beginning and let them explore the world. We don’t want complicated interactions to distract them from what’s happening around them; let them enjoy the immersion. (c) Apply positive fail-throughs. If the player does something wrong in-game, don’t let the game script make them feel bad by telling them they did something wrong.
What “affordance” really means. Since The Design of Everyday Things by Don Norman, the term “affordance” has been overused and misused. In the book’s updated edition, he clarified the terminology. Affordances are not signifiers. Affordances define what actions are possible, and what we think objects can do can be right or wrong. To make affordances clear, we use signifiers as clues that indicate what we can do. For example, a door with no doorknob or handle is an affordance: it can open or close. Placing a pull bar, a signifier, on the door clues us into the notion that we can pull it open.
Virtual World Fair. The team behind the first 3D theme park ride for Universal Studios talked about how brands and other consumer products can take advantage of VR. They introduced the Virtual World’s Fair, a theme park in VR that is eerily similar in concept to Disney World’s Epcot.
Brands, Countries and Organizations can own a pavilion in the world, like shops in a mall, where players can explore and shop the latest and greatest.
Film vs. Games vs. VR. Repeated in many sessions today was that the rules that guide films and games are not applicable in VR. We have to create our own language and build best practices specific to it. For example, close up shots in movies will not work. In VR, we would end up invading the player’s personal space. In VR, we are the camera.
Ambisonic vs. binaural audio. Use an ambisonic mic to capture sound, and use binaural audio for playback in VR. Ambisonics is a full-surround sound capture technique; it’s the audio equivalent of light fields, capturing sound pressure from all directions. Binaural audio is the equivalent of stereoscopic video. A common mistake is to equate binaural with spatialized audio; they are not the same. Binaural is for headphone playback, while ambisonics is for specialized speaker arrays. Binaural has issues with coloration and rotation; ambisonics has a flatter frequency response but works best if the player’s head is static.
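To make the capture-vs-playback distinction concrete, here is a toy sketch of “pointing” a first-order ambisonic (B-format W/X/Y) signal after the fact by decoding it to plain stereo with two virtual cardioid microphones. The function names and the ±30° mic angles are my own illustration; this is not binaural rendering, which additionally requires HRTF filtering.

```python
import math

def virtual_cardioid(w, x, y, azimuth_rad):
    # Cardioid pickup synthesized from B-format components
    # (FuMa-style convention, where W carries a 1/sqrt(2) gain).
    return 0.5 * (w * math.sqrt(2)
                  + x * math.cos(azimuth_rad)
                  + y * math.sin(azimuth_rad))

def decode_to_stereo(w, x, y):
    # Two virtual mics at +30 deg (left) and -30 deg (right).
    left = virtual_cardioid(w, x, y, math.radians(30))
    right = virtual_cardioid(w, x, y, math.radians(-30))
    return left, right

# A source directly ahead (all directional energy in X) should land
# equally in both channels.
s = 1.0
left, right = decode_to_stereo(s / math.sqrt(2), s, 0.0)
```

A source panned to the side (energy in Y) comes out louder in one channel, which is the “steer the capture after the fact” property that makes ambisonics a good fit for head-tracked VR.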
“Presence”. The biggest buzzword since the “cloud.” Presence is hard to pin down and hard to achieve. There was a study done on rats wearing VR, and they had trouble too! To achieve presence, we should think about how our world absorbs the player and what might distract them:
- Use diegetic cues to nudge their attention. If something is too interesting, the player has no reason to look away or try anything else.
- Design with the vestibular system in mind. Nausea sucks. We do not want dizziness to be associated with VR.
- Flow. What we’re doing should perfectly match with the skills required to do it.
- Immersion. Give the brain a reason to feel what it is feeling. We do not have to feel like we’re somewhere in order to be engaged.
Redirected walking. Redirected walking came up in 3 of my sessions again today! With the hype surrounding room-scale tracking, it is important that we implement illusions that keep users safe and nausea-free! Vision dominates vestibular sensation. 3 types of redirected walking were introduced:
- Rotational gains. The player’s rate of rotation can be greater or less than their physical rotation, e.g. turning 90 degrees in real life = turning 80 degrees in VR.
- Curvature gains. The virtual world rotates as the player walks in a straight line. With this technique, players can walk in a complete circle in the real world while perceiving themselves to walk a straight line in VR.
- Translation gains. The player can walk faster or slower in VR compared to real life, e.g. walking 9 meters in the real world can translate to 6 meters in the virtual world.
For anyone interested, this 2010 study discusses thresholds for each type of redirected walking. Because the study predates today’s VR devices, a follow-up study is needed, since the thresholds may have changed.
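The three gains above boil down to simple ratios between virtual and physical change. Here is a minimal sketch using the numbers from the talk (the function names are illustrative, not from any VR SDK):

```python
# Redirected-walking gains expressed as virtual change / physical change.

def rotational_gain(virtual_deg, physical_deg):
    # >1: the virtual world turns faster than the head; <1: slower.
    return virtual_deg / physical_deg

def translation_gain(virtual_m, physical_m):
    # >1: the player covers more virtual ground per real step.
    return virtual_m / physical_m

# Turning 90 degrees in real life while turning 80 degrees in VR:
print(round(rotational_gain(80, 90), 2))  # 0.89
# Walking 9 m in the real world while covering 6 m in VR:
print(round(translation_gain(6, 9), 2))   # 0.67
```

Curvature gain would be expressed similarly, as the amount of rotation injected per meter walked; keeping all three ratios inside perceptual thresholds is what makes the manipulation go unnoticed.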
Enabling hands in VR. Hands are the most important input for interaction. A large proportion of your sensory and motor cortex is devoted to the hands. The dominant hand is used for precise control, while the non-dominant hand can be used as a point of reference or for gross movement. Hands can be used synchronously to pull a heavy lever and asynchronously to climb a ladder. Currently, simple virtual hands are somewhat useful for selecting small targets, targets in cluttered regions and moving targets. Ray extenders (extensions of our hands in VR) are better for distant targets.
Hello everyone! I wrapped up the first day at the Game Developers Conference (GDC) in San Francisco! It’s the first Monday after daylight saving time, so a morning cup of joe in Moscone West was a welcome sight!
Wow! All of the VR sessions were very popular and crowded. In the morning, I was seated in the overflow room for the HTC Vive session. Attendees were lucky if they could make it to 2 VR sessions back-to-back. There were lines wrapping around the halls and running into other lines. By the afternoon, when foot traffic was at its highest, it was easy to get confused about which line belonged to which session. Luckily, the organizers took the popularity of the VR sessions into account and moved them to larger rooms for the next 4 days!
On the third floor, there was a board game area where everyone could play the latest board game releases like Pandemic Legacy and Mysterium as well as a VR play area where everyone could try out the Vive and other VR games.
Sessions & Takeaways
I sat in on 6 sessions:
- A Year in Roomscale: Design Lessons from the HTC Vive and Beyond.
- You are not building a game, but an experience. Players are actually doing something actively with their hands vs. a game controller.
- There are 3 questions that players ask when they are starting a VR experience that should be addressed:
- (a) Who am I?
- (b) What am I supposed to do?
- (c) How do I interact with the environment?
- Permissibility. New players always ask when they are allowed to interact with something, but there are safety issues once they get too comfortable. One developer told a story about a player who actually tried to dive headfirst into a pool while wearing a VR headset!
- Don’t have music automatically playing when they enter the game. It’s not natural in the real world. It’s better to have a boom box and have them turn on the music instead. In addition, audio is still hard to do perfectly. Players expect perfect audio by default. If they pick up a phone, they expect to hear it out of 1 ear, not both.
- Social Impact: Leveraging Community for Monetization, User Acquisition and Design.
- Social whales (SW) have high social value, and thus the highest connection to other players, and are key to a high ROI. SWs make it easy for other players to connect with one another.
- There are 3 standard profiles that players fall under:
- (a) The atypical social whales that always want the best things.
- (b) The trendsetter, the one who wants to unite and lead.
- (c) The trend spotter, the players who want to be a part of something.
- When a social whale leaves a game, ROI falls and other players leave, because that 2nd-degree connection is gone. To keep players from leaving, it’s important to have game mechanics that address the following player needs:
- (a) Players want to belong.
- (b) Players want recognition as a valuable member.
- (c) Players want their in-game group to be recognized as the best vs. other groups.
- Menus Suck.
- A very interesting talk on rethinking how players access key menu items in VR.
- Have a following object like a cat! Touching different parts of the object will allow you to select different things. It’s much easier than walking back and forth between a menu and what you have to do.
- Job Simulator uses retro cartridges for menu selection.
- Create menu shortcuts with the player’s body. Have the user pull things out of different parts of their head (below).
- Eating as an interaction. In Job Simulator you can eat a cake marked “Exit” to exit the game. The cake then changes to another dessert marked “Are you sure?” to confirm the exit.
- Improving Playtesting through Workshops Focusing on Exploring.
- For games, we are experience testing (playtesting) not performing a usability test.
- For games, especially for VR, comfort comes first. Right after that it’s ease of use.
- When exploring desired experiences for a game, create a composition box to ensure you get ideas from all views of your development team.
- When observing play, look for actions (e.g. vocalizations, gestures) as well as for changes in posture and focus.
- The Tower of Want.
- Learn critical questions our designs must answer to engage players over the long term.
- Follow the “I want to…” and “so I can…” framework to unearth players’ short-term and long-term goals. Instead of asking why 5 times like we do in user research, we ask them to complete the framework’s “so I can…” sentence (e.g. I want to get good grades so I can get into college…so I can get a good job…so I can make a lot of money…so I can buy a house).
- The framework creates a ladder of motivations that incentivizes a player to complete each game level in that ladder daily.
- Cognitive Psychology of Virtual Reality: Basics, Problems and Tips.
- Psychology is the physics of VR.
- Use redirected walking to keep players within the same space.
- Design for optical flow. Put shadows over areas users are not concentrating on. It’ll help with dizziness.
- Players underestimate depth by up to 50%.
- Add depth by adding transitional rooms (portals). This helps ease the players into their new environment.
- Players can see in 3D only up to a maximum of about 6 meters ahead of them.
- In their peripheral vision, they can only see in 2D.
- Keep in mind when designing that 20–30% of the population has problems with stereoscopic vision.