GDC 2016 – Part 2: The State of VR

March 23rd, 2016

VR is big and is going to be really big for the game industry, and you could feel it in the air at GDC 2016. For the first time, GDC added two days of VR development-focused events and sessions, and most of the VR sessions were packed: lines formed even 30 minutes before a session started, and many people were turned away. The venue for the VR sessions had to be changed to double the capacity for day 2.

There was lots of interest and enthusiasm among game designers, developers and business people, as VR represents a brand new direction, a new category, and a new genre for games!

We are still at the dawn of VR games, with hardware, software, content, and approaches just starting to come together. Based on what I learned during GDC, I’d like to summarize the state of various aspects of VR development.

1. VR Headset

This is the first thing that comes to mind when we talk about VR, right? After all, the immersive experience begins when we strap the VR headset over our eyes. There are a couple of VR headsets available on the market, and a slew of headsets debuting very soon.

VR Headset

From the $10 Google Cardboard, to the $100 Samsung Gear VR, to custom rigs costing over $1,000, VR headset prices span a wide spectrum, and so do capability and performance. Most people who want to get hold of VR will likely choose among the Samsung Gear VR, PlayStation VR, Oculus Rift, and HTC Vive. Here is a brief comparison so you have an idea of what you can get.

Samsung Gear VR

It uses specific Samsung phones to show VR content, so performance is limited by the phone hardware, usually 60fps. It has a built-in touchpad for input, but you may also use an optional gamepad. It has no wire connecting it to a PC, so you can spin around on a chair without worrying about tangling yourself. It has no position tracking.

If you own a Samsung S6/S7 or Edge, why not get the Gear VR to experience the magic? $99 is really inexpensive for any new gadget. Even if you have a non-Samsung phone, you can still slip it into the rig and use the Gear VR as an advanced version of the Cardboard viewer. Of course, you will not have the touchpad capability.

PlayStation VR

It uses the PS4 to run VR games, so it has real game-grade hardware to run VR content at 120fps, with consistently high performance. For input, it has a gamepad and tracked controllers, which feel like holding a wand with a glowing bulb. It has limited position tracking.

The unique part of PSVR is that it is meant to be played alongside regular gamers watching the TV screen, making it a party game in your living room. The person wearing the PSVR is immersed in the game, while the others at the TV can fight against or play along with that player’s in-game character. If you have a PS4 at home, shelling out another $399 seems reasonable for a decent VR game experience. But you’ll have to wait until October 2016 to buy one, right before the holiday season.

Oculus Rift

This is expected to be a high-end VR headset, with games running on a powerful Oculus-ready computer. It delivers very high performance, showing VR content at 90fps. It has a wire connected to the computer, which will keep you from spinning a full 360 degrees too freely. It has limited position tracking too. It does not come cheap at $599, but you can get it pretty much now, in March.

HTC Vive

It is considered to be of even higher spec than the Oculus Rift. It requires a muscular PC, with motion sensors and motion controllers attached, and it delivers very high performance for VR games. It has tracked hand controllers for input and provides room-scale position tracking, which puts it above everyone else. For designers and developers, this room-scale tracking capability may open another dimension for experimentation.

It costs $799, because it is high-end hardware bundled with a bunch of bells and whistles. You can expect to get one in April if you pre-order now.


HoloLens remains another interesting device for VR/AR. Rumor also has it that Google is building a VR headset, one that would be much more powerful than its Cardboard viewer.

2. Game Engine for VR

The recent trend is that game engine companies are making it easier (or free) for people to access their engines and develop games on them. Quite a number of sessions covered detailed topics on specific engines; based on my impressions, here is the list to try out.

Unity 5.3 by Unity Technologies – It has a free version (Personal Edition) with full features. I believe it is the most popular and most widely used game engine, with cross-platform deployment to a full range of mobile, VR, desktop, web, console and TV targets. Many of the alt.ctrl.GDC exhibits also used Unity to create the games their custom controllers interact with.

Unreal Engine 4 by Epic Games – It is a sophisticated game engine used to develop some AAA games. They also showcased two VR games, Bullet Train and Showdown. The graphics and visual effects look astonishing.

Lumberyard game engine by Amazon

Lumberyard by Amazon – It is a new entry in the engine game, but it is free with full source, meaning you can tweak the engine if necessary. It would be a good choice for developing an online game, with no need to worry about hosting a robust backend; I guess that’s where Amazon wants its share of the game. It does not support VR yet, but such support is coming very soon.

3. Capture Device

For many VR games, designers and developers will simply create the virtual game world using a game engine and other graphical software. But in order to show real-world events inside a VR world, you need a special video camera that can take 360-degree, or spherical, photos and videos.

Spherical Video Capture Device

Most of us, me included, have not seen or used this type of camera, so I don’t have opinions on them. I did use the native camera app on an Android device to capture spherical photos, but it was difficult to take many shots and stitch them together.

Stereoscopic Video Capture Device

A step further is stereoscopic video capture, which takes two photographs of the same object at slightly different angles to produce depth. These are high-end professional rigs, with many custom-built versions. The price can easily go above $10k.

This area is still quite fluid, and I’m not sure it will ever go mainstream. I hope a consumer version in a reasonable price range becomes available, so we can produce VR videos too.

4. Convention and Best Practice

With fewer than 100 real VR game titles in total, people in the VR field are still figuring things out, and no clear conventions have yet surfaced for designers, developers and players.

In some sessions, VR game designers and developers did share lessons learned while producing their first several VR games: interaction patterns, reality trade-offs (representational, experiential, and interaction fidelity), and the fidelity contract in terms of physics rules, affordances, and narrative expectations. Binaural audio and visual effects also help realize an immersive experience.

We shall see more and more best practices converge; with more research in VR psychology and UX, conventions will emerge that put designers and players on the same page.

5. Area of Use

By far, games are the most natural fit for the VR experience, and the entire game industry is driving toward it. Cinematic VR will be another great fit: as ILM X Lab demonstrated with “Star Wars,” a viewer may “attach” to different characters to experience various viewpoints in the movie.

People are also exploring VR as a new way of storytelling in journalism, a new way of exercising for sports (e.g. riding a stationary bike in the gym feels much like driving a Humvee in a war zone), and a new way of education, e.g. going inside a machine and looking at the inner mechanism of an engine.

VR brings another avenue of artistic expression as a new art medium, challenges us to advance technology to a new frontier, and at the same time provides us with great opportunities.

Things are just getting started!

VR Skeptic: Making VR Comfortable with Apple TV

March 22nd, 2016

We are still in the early days of virtual reality. Just as in the early days of manned flight, this is a time of experimentation.


Current VR experiments resemble early manned flight experiments

What do we wear on our heads? Helmets? Goggles? Contact lenses? Or do we simply walk into a cave or dome or tank? What do we wear or hold in our hands? Game controllers? Wands? Glowing microphones? Bracelets, armbands, and rings? Or do we just flap our arms in the breeze? Do we sit? Stand? Walk on a treadmill? Ride a bike? Or do we wander about bumping into furniture and each other?

As a person who prefers to go through life in a reclining position, most of these options seem like too much bother. I have a hard time imagining how VR could become ubiquitous in the enterprise if employees have to constantly pull on complicated headgear, or tether themselves to some contraption, or fight for access to an expensive VR cave. VR in the workplace must be ergonomic, safe, and easy to use even before you’ve had your morning coffee.

Lately I’ve been enjoying VR content, goggle-free, from the comfort of my lazyboy using an Apple TV app called Littlstar. Instead of craning my head back and forth, I just slide my thumb to and fro on the Apple remote. I can fly through the air and swim with the dolphins without working up a sweat or stepping on a cat.

Selection screen for VR content on the Littlstar Apple TV app

To be clear: watching VR content on TV is NOT real VR. It’s nowhere near as immersive. But the content is the same and the experience is surprisingly good. Navigation is actually better: because it is effortless I am more inclined to keep looking around.

The Apple remote strikes me as the perfect VR controller. It is light as a feather, easy to hold, lets you pan and drag and click and zoom, and you can operate it blindfolded.

Watching VR content on TV also makes it easier to share. Small groups of people can navigate a virtual space together in comfort. One drawback: it’s fun to be the person “driving,” but abrupt movements can make everyone else a tad queasy.

What works in the living room might also work well at a desk – or in a meeting room. TVs are already replacing whiteboards and projection screens in many workplaces. And the central innovation of the fourth-generation Apple TV, the TV app, creates a marketplace for evolving new forms of group interaction. I expect there will be a whole class of enterprise TV apps someday.

For all these reasons, I have been pushing to create Apple TV app counterparts to the VR apps we are starting to build in the AppsLab. TV counterparts could make it easier to show prototypes in design meetings and customer demos. I feel validated by Tawny’s (@iheartthannie) report from GDC that Sony has adopted a similar philosophy.


Screenshot from an early AppsLab Apple TV app

Thanks to one of our talented developers, Os (@vaini11a), we already have one such prototype. It doesn’t do much yet; we are just figuring out how to display desktop screens in a VR environment. With goggles on I can use the VR app to spin from screen to screen in my office chair and look down at my feet to change settings. With the Apple TV counterpart app, I can do exactly the same thing without moving anything other than my thumb.

It’s still too early to predict how ubiquitous VR might become in the workplace or how we will interact with it. But TV apps, or something like them, may become one way to view virtual worlds in comfort.

GDC 2016 – Part 1: Event and Impression

March 22nd, 2016

Tawny (@iheartthannie) and I attended the 30th edition of GDC – the Game Developers Conference. As shown in Tawny’s daily posts, there were so many fun events, engaging demos, and interesting sessions that we simply could not cover them all. With 10 to 30 sessions going on in any time slot, I wished for multiple “virtual mes” to attend some of them simultaneously. With only one “real me,” I still managed to attend a large number of sessions, mostly 30-minute ones, to cover more topics at a faster pace.

Game Developers Conference 2016

Unlike Tawny’s posts that give you in-depth looks into many of the sessions, I will try to summarize the information and take-aways in two posts: Part 1 – Event and Impression; Part 2 – The State of VR. This post will cover event overview and general impression.

1. Flash Backward

Flash Backward – 30 Years of Making Games

After two days of VR sessions, this flashback kicked off the game portion of GDC with a sense of nostalgia, flashing back to games like Pac-Man and Minesweeper, evolving into console games, massively multiplayer games, social games (FarmVille), mobile games (Angry Birds), and on to VR games.

GDC has been running for 30 years, and many of the attendees were not even born when it started. The Flash Backward started with Chris Crawford, the founder of GDC, and concluded with Palmer Luckey, the Oculus dude, who at 23 had not much to flash back on, only a look forward to the new generation of games in VR. He will be back in 20 years for the retrospective 🙂

2. Awards Ceremony

IGF and Game Developers Choice Awards Ceremony

On 3/16/2016, two awards ceremonies were hosted in recognition of creativity, artistry and technological genius: the Independent Games Festival Awards and the Game Developers Choice Awards. I believe they are the equivalent of the Oscars for the movie industry, and they ran in the exact same format as the Oscars.

Nominations and Winners

As you can see, the big winner of the night was “Her Story” (by Sam Barlow), which won 5 of its 6 nominations. It is an indie title, yet it took 3 awards competing against big producers, because it created a fresh way of storytelling within a game. And “The Witcher 3: Wild Hunt” took the honor of Game of the Year. Gamers: check out the list, and check out the games if you have not played them.

The ceremony also honored Todd Howard with the Lifetime Achievement Award. He is a designer, developer, director and producer for award-winning titles such as “Oblivion,” “Fallout 3,” and “Skyrim.” Markus “Notch” Persson, the programmer and developer of Minecraft, took home the Pioneer Award. Yeah!

3. alt.ctrl.GDC

alt.ctrl.GDC 2016

As a maker myself at the AppsLab, I found the alt.ctrl.GDC interactive exhibits extremely satisfying – just some insane ideas of how controllers can be made for games.

I tried most of the controllers: licking a popsicle to suck up planet resources in a game, mutating a controller to make an in-game object fly, swim or crawl, and cranking handles to drive a tank.

Keep Talking and Nobody Explodes

“Keep Talking and Nobody Explodes” must have been one of the favorites at alt.ctrl.GDC 2015, and the mechanical box still stood out! It has since turned into a real game, nominated in three categories and winner of “Excellence in Design” at the IGF Awards ceremony. It is a fun game; check it out!

Please Stand By

“Please Stand By” is my favorite of alt.ctrl.GDC 2016. What do you do when you find a vintage TV set in a junkyard? Well, it has all the controls, even though they do not work anymore. After some wizardry, it came back to life. I overheard the secrets, if you are ever intrigued to know how it was done.

Now it shows many channels of game TV; of course, you have to tune it carefully with all the knobs and rabbit ears. Oh, and there are some buttons on the back, too, that do some tricks. If it ever freezes on you, pound it or shake it, like you would an old TV set.

4. Game Making and Animation

This is too big of a topic for a section in one blog post, so I am not going into any details.

I just want to appreciate how much work and thought people put into making a game. For example, just look at this one slide from the UFC 2 session:

Animation Grappling

That is just one grappling position change, and it derives into many permutations depending on how the players exert control. Now imagine working on the animation for each of those permuted position changes. So in UFC 2, the technical geniuses found procedural ways to automate some areas of animation.

Of course, there are many other aspects of game making, as indicated by the many categories of awards. In addition to the creative side, there is also the technological side of running massive online games, or dealing with all forms of devices.

As much as technological advances drive game development, game making drives technological advances! People are pushing the edge of the envelope to make the next generation of games in VR. Speaking of VR, stay tuned for my next post on “The State of VR.”

New Content on Our Page

March 21st, 2016

Back in September, our little team got a big boost when we launched official content under the official banner.

I’ve been doing this job for various organizations at Oracle for nine years now, and we’ve always existed on the fringe. So, having our own home for content within the world is a major deal, further underlining Oracle’s increased investment in and emphasis on innovation.

Today, I’m excited to launch new content in that space, which, for the record, is here:

We have a friendly, short URL too:

The new content focuses on the methodologies we use for research, design and development. So you can read about why we investigate emerging technologies and the strategy we employ, and then find out how we go about executing that strategy, which can be difficult for emerging technologies.

Sometimes, there are no users yet, making standard research tactics a challenge. Equally challenging is designing an experience from scratch for those non-existent users. And finally, building something quickly requires agility, lots of iterations and practice.

All-in-all, I’m very happy with the content, and I hope you find it interesting.

Not randomly, here are pictures of Noel (@noelportugal) showing the Smart Office in Australia last month.


The IoT Smart Office just happens to be the first project we undertook as an expanded team in late 2014, and we’re all very pleased with the results of our blended research, design and development team.

I hope you agree.

Big thanks to the writers, Ben, John, Julia, Mark (@mvilrokx) and Thao (@thaobnguyen) and to Kathy (@klbmiedema) and Sarahi (@sarahimireles) for editing and posting the content.

In the coming months, we’ll be adding more content to that space so stay tuned.

GDC16 Day 5: The Good, The Bad, The Weird (Last Day)

March 18th, 2016

When I first came to GDC, I didn’t know what to expect. I was delightfully surprised to use my first gender-neutral restroom. The restroom had urinals and toilet seats. There was no fuss, other than others standing to take a picture of the sign above. It felt surreal using the restroom next to a stranger who was not the same gender as I. The idea is a positive new way of thinking and fits perfectly with one of the themes of the conference: diversity.

In my last games user research round table, one of the topics we spent a lot of time on was sexism and how we could do our part to include underrepresented groups in our testing. One researcher began with a story about a female contractor he worked with to perform a market test on a new game. One screener question surprised him the most:

What gender do you identify as?
Male [Next question]
Female [Thank her for her time. Dismiss]

O-M-G. The team went back and forth with the contractor for 4 iterations before she agreed to change that question in the screener. The reasoning for the change:

If your group of testers is randomly chosen but turns out to be all straight white males, is that a truly representative sample? To build a successful game, it is important to test with a diverse group of people. Make sure that most, if not all, groups in your audience are represented in the sample. This will yield more diverse and insightful findings. You may have to change the language of your recruitment email to target different types of users.

For example, another researcher wanted a diverse pool of gamers with little experience. His only screener was that they play games on a console for at least 6 hours a week; no genre of games was specified. He got a 60-year-old grandma who played Uno over Xbox Live with her grandkids for 6–8 hours every Saturday and Sunday. She took hours to get past level one, but because she was so meticulous and wanted to explore every aspect of the demo, she pointed out trouble spots in the game that most testers speeding through would miss!

Recently, on our own screeners at The AppsLab, we ask participants what gender they identify with instead of bucketing them as male or female. It’s a small change, but a big step in the right direction toward equality.


UX practitioners are like hedgehogs who just want to hug.


Other job roles on the team are like cuddly, don’t-touch-me kittens.

The presence of UX

The presence of UX and user research has grown since last year. Developers and publishers recognize the importance of testing iteratively, early and often. In the “Design of Everyday Games” talk with Christina Wodtke the other day, she told the packed room that there were just 8 people at the same talk the year before. From 8 to a packed room of hundreds is huge growth, and a win for the user and for the industry!

Epic Games spoke about product misconceptions that make it difficult to incorporate user experience into the pipeline. UX practitioners are like hedgehogs: we want to help by giving the product the extra hug it needs, but our quills aren’t perceived as soft enough. Our goal is to deliver the intended experience to the targeted audience, not to change the design intent.

To incorporate UX into the pipeline, address product misconceptions. Don’t be afraid of each other; just talk. Open communication is the key to creativity and collaboration. Start with small wins to show your value by working with those who show some interest in the process. Don’t be the UX police and jump on every UX issue to start a test pipeline. Work together and measure the process.

Overall, I loved the conference. The week flew by quickly, and I was able to get great insights from industry thought leaders. The GDC activity feed was bursting with notes from parallel talks. I fell in love with the community and am in awe that a conference of this size grew from a meeting in a basement 30 years ago. I sure hope there is a UX track next year! I decided to end my week with a scary VR experience, Paranormal Activity VR. The demo focused on music and sound to drive the suspenseful narrative. Needless to say, I screamed and fell to my knees. It beats paying to go to a haunted maze every Halloween.

GDC16 Day 4: Demos & Player Motivations

March 17th, 2016

Crowded early morning inside GDC Expo.

It’s official. All demos are booked for the week. Anyone not on the list is subjected to the standby line. I was lucky enough to score a 5:30pm demo of Bullet Train at the NVIDIA booth early this morning. When I walked by the line late in the evening, I found that a lady had been waiting at least an hour for her turn.

Raymond (@yuhuaxie), one of our developers, tried his luck at the “no reservations accepted” Oculus store-like booth 30 minutes before the expo opened, and still had to wait almost an hour before leaving the line for other session talks. Is it worth the hype? The wait? The fact that you’re crouching and screaming at something no one else can see?

Apparently so! One common sentiment I heard from those who finished the demo was that the experience was so amazing that they didn’t mind the friction for 10–15 minutes in virtual reality! For Bullet Train, there were several repeat visitors playing the fast-paced shooting game again and again!

Today, I had my chance to demo London Heist on the PS VR and Bullet Train on the Oculus Rift. Both are fast-paced shooting games. The head-mounted display (HMD) of the PS VR is much more forgiving for those who wear glasses. The HMD wears similarly to a bike helmet, but with no straps to mess with. To adjust, you simply slide the viewer forward and back, separate from the mounting. It’s much lighter than the other HMDs and breathes better. Here’s another gameplay video of the demo I went through.

London Heist has simple interactions for a shooting game. The game first eases you in as you ride as a passenger with your buddy through the streets of London. You can sit there and get a chance to orient yourself to your new surroundings. Instead of practicing how to grab guns, I gulped down a 7up 😡

Finally, a car chase ensues and bullets come flying at you. The controls were simple. Pull the trigger to grab the gun; once done, the gun is attached to you for the entirety of the game. Just keep pulling the trigger to shoot. When you run out of bullets, grab the magazine right next to you with your free hand to reload! Easy peasy!

Bullet Train’s controls have a slightly higher learning curve, but the experience is fulfilling. In the game you can teleport by creating a portal at the destination you want to reach, grab multiple guns to shoot, slow-mo the game (discoverability), and grab bullets flying toward you out of the air to throw them back at enemies.

There are so many things you can do that you forget how to do them all. I personally got so caught up near the end trying to grab bullets from the air and throw them that I forgot how to grab new guns! After the short demo, I felt myself begin to sweat. A change in mental model is needed, since typical shooter games allow you to press shortcut keys to perform those actions. In VR, you DO those actions. Luckily, it does not detract from the immersion at all. It was fun, and I heard that a few attendees came back to replay the demo with improved execution.

The change in mental model was mentioned in day 2 of the user research round table. We focused on mental models for game control patterns. All control schemes are inherently non-intuitive. The game industry has been lucky that developers used the same control patterns for first-person shooters, aka the Halo scheme.

When we look at control schemas for other game genres, it’s a bit of a mess. The same may hold for VR, since the scheme is based on the game’s mechanics. Generally, players prefer gaze-based direction. This means that the direction you are looking in is the direction you expect to turn toward in the game.

Typically, when you want to change direction in real life, you turn your torso. The preference for gaze-based direction is part of the Counter-Strike effect: those who are used to first-person shooters are too used to looking to turn rather than rotating their torso to turn.

It’s definitely a new mental model to learn. We have to remember what technologies and experiences users are coming from, and what platform and core experience we are developing for, then make judgment calls based on that.

Look at these players actually turning! It was easy and turning was quick! Worked up a bit of a sweat here too.

The above is why the onboarding experience for games is so important. Tutorials are necessary to ensure that players understand the core game mechanics. Players tend to overestimate themselves and skip tutorials when given the option to do so.

Rather than giving them the option to skip, the installed game should know whether it is your first time playing. First timers go through the tutorial. Everyone else who’s reinstalled the game on another device does not have to go through the tutorial again, but can still have the option to do so.

Space out tutorials evenly, or else players will suffer information overload. Leave room for discoverability: if players can discover a mechanic within 10 minutes of play after the core tutorial, it leads to greater satisfaction. Induce information-seeking behavior and bring up the tutorial when they need it. Avoid front-loading the player.

More on Motivation

To better understand the psychology behind gamers’ motivations, Quantic Foundry looked at 2,000 data points and found 12 unique motivations that fall into 6 themes.

At a high level, there are 3 motivational clusters.

Discovery is a bridge between Mastery — Achievement and Immersion — Creativity. Design is a bridge within Action — Social. These results were consistent across all geographic regions.

Not surprisingly, these game motivations mapped to personality traits. In psychological personality theory, there are the Big 5 personality traits.

When we drill down from the Big 5 to examine each trait, we find that it changes with context. For example, extraversion is typically associated with people who are social and energetic. Examining extraversion in the context of game motivations, we find that it is associated with people who are social, cheerful, thrill-seeking and assertive, and therefore likely to be motivated by games that fall into the Action — Social cluster.

Conscientiousness is associated with the Mastery — Achievement cluster. Openness is associated with the Immersion — Creativity cluster. Game motivations align with personality traits. Games are an identity management tool, and so people play games that align with their personality traits.

There are some gender differences. Females are motivated by Fantasy, Design and Completion, while males are motivated by Destruction, Competition and Fantasy. However, that difference is dwarfed by age differences. Rather than designing for men and women, we should think about how games should be designed for different age groups.

The Action — Social cluster is the most age-volatile group. As players grow older, Competition and Excitement drop. For females, Story also drops. For males, Challenge also drops.

Imagine a game that changes its mechanics with you as you grow. Imagine if we could drive the health and wellness of our teams by employing the proper motivational UX strategies intrinsic to them. That would be pretty cool!

GDC16 Day 3: Another Day of Fun & Data!

March 16th, 2016

Early morning view of the GDC16 Expo Hall.

The Expo opened today and will run through the end of Friday! There was a lot to see and do, and I managed to explore about 1/3 of the space. Walking in, we had the GDC Store to the left and the main floor below the stairs. Upon entering the main floor, Unity was smack dab in the center. It had an impressive setup, but not as impressive as the Oculus area or Clash of Kings.

Built to look like a store :O

Clash of Kings. The biggest booth of all booths. They brought the game to real life with hired actors!

There were a lot of demos you could play, with many different types of controllers. Everyone was definitely drinking the VR Kool-Aid. Because of their popularity, reservations for a play session were strongly encouraged. Most, if not all, of the sessions were already booked for the whole day by noon. I managed to reserve a PS VR play session for tomorrow afternoon by scanning a QR code into their scheduling app!

The main floor was broken up into pavilions with games from their respective countries. It was interesting to overhear others calling their friends to sync up, saying “I’m in Korea.” Haha.

I spent the rest of the time walking around the floor and observing others play.

I did get a chance to get in line for an arcade ride! My line buddy and I decided to get chased by a T-Rex! We started flying in the air as a Pterodactyl. The gleeful flight didn’t last long. The T-Rex was hungry and apparently really wanted us for dinner. It definitely felt like we were running quickly, trying to get away.

Another simulation others tried, and we didn’t, was a la-la land roller coaster. In that demo, players can actually see their hands on screen.

Sessions & Highlights

Playstation VR. Sony discussed development concepts, design innovations, and what the PS VR is and is not. I personally liked the direction they are going with collaboration.

The AppsLab team and I have considered this possibility of a combined VR-screen and TV-screen experience as well. It’s great that the idea is validated by one of the biggest console makers.

A year of user engagement data. A year’s worth of game industry data, patterns and trends was the theme of all the sessions I attended today.

Day 1 — User research round table. This was my first round table at GDC, and it was nice to be among those in the same profession. We covered user research for VR, preventing bias, and testing with kids! Experts shared their failures on these topics and offered suggestions.

GDC16 Day 2: Highlights & Trends

March 15th, 2016

Just like yesterday, the VR sessions were very popular. Even with the change to bigger rooms, lines for popular VR talks would start at least 20 minutes before the session started. The longest line I was in snaked up and down the hallway at least 4 times. The wait was well worth it though!

Today was packed. Many sessions overlapped one another. Wish I could have cloned 3 of myself 🙁

Throughout each session, I noticed points that have been repeated from yesterday’s daily roundup. There are definitely trends and general practices that the game industry has picked up on, especially in virtual reality. I’ll talk more about these trends later in this post.

Big announcement


The Playstation VR headset launching in October.

PlayStation revealed the price of its new VR headset: $399! It’s said that Playstation VR has over 230 developers on board and 160 diverse titles in development, 50 of which will be available this October. More info at the PS VR launch event tomorrow 🙂

There is a game called Rez Infinite developed for the PS VR. The line to try out the game was long! I wanted to take a picture of someone playing, but they kindly asked for no filming or photography. Instead, here is a picture of the Day of the Devs banner!

Most popular VR demo so far

Aside from Eagle Flight, also built for the PS VR, Everest VR lets you climb Mt. Everest from the comfort (and warmth) of your living room. I overheard that the chance to experience the climb with the HTC Vive controllers was booked out for the rest of the week!

Check out previews for both. Here’s Eagle Flight:

And Everest VR:

Session highlights

Immersive cinema with Lucasfilm. The entire session was a dream come true for fans of Star Wars and cinematic film, as well as audiophiles. Anyone who’s watched Season 4 of Arrested Development on Netflix is familiar with the ability to watch parallel storylines within the same episode. Lucasfilm let us experience that same interactive narrative with VR and Star Wars Episode 7!

They also let us in on their creative process for Star Wars: Trials on Tatooine. They reiterated the creative process espoused in many other game-making sessions: (a) define the desired experience first, then test it; (b) simplify the interaction. VR is still new; right now we are trying to get players to believe they are in another world. Slow the pacing at the beginning and allow them to explore the world. We don’t want complicated interactions to distract them from what’s happening around them. Let them enjoy the immersion. (c) Apply positive fail-throughs. If the player does something wrong in-game, don’t let the game script make them feel bad by telling them they did something wrong.

What “affordance” really means. Since The Design of Everyday Things by Don Norman, the term “affordance” has been overused and misused. He updated the book with some clarifications on the terminology. Affordances are not signifiers. Affordances define what actions are possible. What we think those objects can do can be right or wrong. To ensure that affordances are clear, we use signifiers as clues to indicate what we can do. For example, a door with no doorknob or handle has an affordance: it can open or close. Placing a pull bar, a signifier, on the door clues us into the notion that we can pull it open.

Virtual World Fair. The team behind the first 3D theme park ride for Universal Studios talked about how brands and other consumer products can take advantage of VR. They introduced the Virtual World’s Fair, a theme park in VR that is eerily similar in concept to Disney World’s Epcot.


Brands, Countries and Organizations can own a pavilion in the world, like shops in a mall, where players can explore and shop the latest and greatest.

Film vs. Games vs. VR. Repeated in many sessions today was the point that the rules that guide films and games are not applicable in VR. We have to create our own language and build best practices specific to the medium. For example, close-up shots in movies will not work; in VR, we would end up invading the player’s personal space. In VR, we are the camera.

Ambisonic vs. binaural audio. Use an ambisonic mic to capture sounds, and use binaural audio in VR. Ambisonics is a full-surround sound capture technique; it’s the equivalent of lightfields, capturing sound pressure from all directions. Binaural audio is the equivalent of stereoscopic video. A common mistake is thinking binaural is the same as spatialized audio; it is not. Binaural is for headphone playback. Ambisonics is for specialized speakers. Binaural has issues with coloration and rotation. Ambisonics has a flatter frequency response and works if the player’s head is static.



“Presence”. The biggest buzzword since “the cloud.” Presence is hard to achieve. There was a study done on rats wearing VR, and they had trouble too! To achieve presence, we should think about how our world absorbs the player and what might distract them.

Redirected walking. Redirected walking came up in 3 of the sessions I was in again today! With the hype surrounding room tracking, it is important that we implement illusions that keep users safe and nausea-free! Vision dominates vestibular sensation. 3 types of redirected walking were introduced.

For anyone interested, this 2010 study discusses thresholds for each type of redirected walking. Because the study was done before the VR devices we have today, another study is needed, since there could be new thresholds.

Enabling hands in VR. Hands are the most important input for interaction. A large proportion of your sensory and motor cortex is devoted to the hands. The dominant hand is used for precise control, while the non-dominant hand can be used as a point of reference or for gross movement. Hands can be used synchronously to pull a heavy lever and asynchronously to climb a ladder. Currently, simple virtual hands are somewhat useful for selecting small targets, targets in cluttered regions and moving targets. Ray extenders (extensions of our hands in VR) are better for distant targets.

GDC16 Day 1: Daily Round Up

March 14th, 2016

Hello everyone! I wrapped up the first day at the Game Developers Conference (GDC) in San Francisco! It’s the first Monday after daylight saving time, so a morning cup of joe in Moscone West was a welcome sight!


First Thoughts

Wow! All of the VR sessions were very popular and crowded. In the morning, I was seated in the overflow room for the HTC Vive session. Attendees were lucky to be able to go to 2 VR sessions back-to-back. There were lines wrapping around the halls and running into other lines. By the afternoon, when foot traffic was at its highest, it was easy to get confused as to which line belonged to which session. Luckily, the organizers took into account the popularity of the VR sessions and moved them to larger rooms for the next 4 days!

On the third floor, there was a board game area where everyone could play the latest board game releases like Pandemic Legacy and Mysterium as well as a VR play area where everyone could try out the Vive and other VR games.

Sessions & Take Aways

I sat in on 6 sessions.

See You at SXSW 2016

March 11th, 2016

If you happen to be in Austin this weekend for SXSWi, look for Osvaldo (@vaini11a), me (@noelportugal) and friend of the ‘Lab Rafa (@rafabelloni).

We will be closely following all things UX, IoT, VR and AI. Our schedules are filling up with some great sessions and workshops. Check back in a week or so to read some of our impressions!

The Anki Overdrive Car Project

March 7th, 2016

At the end of 2015, our team was wrapping up projects that would be shown at the main Oracle conference, Oracle OpenWorld.

As with every OOW, we like to come up with a fun project that shows attendees our spirit of innovation by building something cool within Oracle.

The team was thinking about building something cool with kids’ racetracks. We were all collectively in charge of looking for alternatives, so we visited a toy store to get ideas and see the products already out there.

We saw some pretty cool racetracks, but none of them suited our functional needs, and of course, we didn’t have enough time to invest in modifying one of them.

So, searching the internet, someone came up with the Anki OVERDRIVE cars; yes, the product announced back in 2013 during an Apple WWDC keynote. To sum up, Anki provides a racetrack kit: flexible plastic magnetic track pieces that can be chained together to allow any racetrack configuration, rechargeable cars with an optical sensor underneath to keep them on the track, a lot of fun features like all kinds of virtual weapons and car upgrades, a companion app for both Android and iOS to operate the cars, and a software development kit (SDK).


For us, it was exactly what we were looking for. But now we needed to find a way to control the cars without using the companion app because, you know, that was boring, and we wanted more action and to go one step further.

After discussing different approaches, I suggested controlling the cars with the Myo gesture control armband, which is basically a wireless, touch-free, wearable gesture and motion control device. We already had a Myo armband, but we hadn’t played with it much. Good thing the Myo band has an SDK too, so we had everything we needed to build a cool demo 🙂

So it was time to get my hands dirty and start coding! The general idea was to build an Android app that receives Myo gestures and motion data, translates and maps those values, and sends them to the Anki car to control its speed, while also receiving messages from the car to learn its status and take action. As a plus, we wanted to count laps to hold a real car racing contest with the attendees.


I started investigating how to control the Anki cars with the SDK, and I found that they provide just a C implementation of the message protocols and the data parsing routines necessary for communicating with Anki Drive vehicles; that the initial release has the minimal number of functions required to use the message protocol and parse information sent by the vehicles; and that this version provides only a limited subset of the messaging protocol. Knowing that, I was sure that tracking the number of laps would be a hard task, so I decided to table it for another day.

I dove into the SDK to understand the message protocols, how to translate them to the Android SDK to gain full control of a vehicle, and how to pair the cars with the Android device over Bluetooth. I have to admit it was difficult at the beginning, as the documentation is very limited. Also, I found that nothing you do will work until you set and send the SDK_MODE flag, so make a note of that if you want to do any Anki builds.


The Myo integration was more transparent, as they already provide an Android SDK with cool examples. So I just had to code the Bluetooth pairing and then map and translate gestures and motions into a valid speed for the Anki cars.

I set up two gestures and motions to control speed. The first was rotating the wrist to the right to increase speed or to the left to decrease it, and the second was moving the arm up to increase speed or down to decrease it.

I’m sure there are a lot of gestures or motions we could have used and implemented, but those were enough for our proof of concept and demo.
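For illustration, here’s a minimal sketch of that kind of gesture-to-speed mapping. It’s written in JavaScript for brevity; the actual app was an Android app built against the Myo and Anki SDKs, and the angle and speed ranges below are assumptions, not values from either SDK.

// Hypothetical sketch: map a wrist-roll angle (in degrees) to a car speed.
// MIN_SPEED/MAX_SPEED and the -90..+90 roll range are made up for
// illustration; the real app used the Myo SDK's orientation data and
// Anki's speed message format.
var MIN_SPEED = 0;     // assumed lower bound
var MAX_SPEED = 1000;  // assumed upper bound

function clamp(value, lo, hi) {
  return Math.min(Math.max(value, lo), hi);
}

// roll: -90 (wrist rotated fully left) to +90 (fully right)
function rollToSpeed(roll) {
  var t = (clamp(roll, -90, 90) + 90) / 180;  // normalize to 0..1
  return Math.round(MIN_SPEED + t * (MAX_SPEED - MIN_SPEED));
}

console.log(rollToSpeed(-90));  // 0: stop
console.log(rollToSpeed(45));   // 750: three-quarter speed

The same clamp-and-normalize idea works for the arm-up/arm-down motion, just with pitch instead of roll.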

Here you can see a development test run.


I’ve been to two conferences where this demo has been shown, and the attendees’ “wow” reaction is very gratifying. And that, my friends, is the whole idea of this demo and this team: the “wow” moment 🙂

Of course, it was shown at OOW 2015, and you can read more about it here.

Also, this demo has its own spot in the OAUX Gadget Lab, so come by and have some fun with it and the other demos that live in the lab.


Is the Mi Band the Harbinger of Affordable #fashtech?

March 1st, 2016

So, here’s a new thing I’ve noticed lately: customizable wearables, specifically the Xiaomi Mi Band (#MiBand), which is cheap and completely extensible.

This happens to be Ultan’s (@ultan) new fitness band of choice, and coincidentally, Christina’s (@ChrisKolOrcl) as well. Although both are members of Oracle Applications User Experience (@usableapps), neither knew the other was wearing the Mi Band until they read Ultan’s post.

Since then, they’ve shared pictures of their custom bands.


Ultan’s Hello Kitty Mi Band.


Christina’s charcoal+red Mi Band.

The Mi Band already comes in a wider array of color options than most fitness bands, and a quick search of Amazon yields many pages of wristbands and other non-Xiaomi accessories. So, there’s already a market for customizing the $20 device.

And why not, given it’s the price of a nice pedometer with more bells and whistles and a third the cost of the cheapest Fitbit, the Zip, leaving plenty of budget left over for making it yours.

Both Christina and Ultan have been tracking fitness for a long time as early adopters, so I’m ready to declare this a trend: super-cheap, completely customizable fitness bands.

Of course, as with anything related to fashion (#fashtech), I’m the last to know. Much like a broken clock, my wardrobe is fashionable every 20 years or so. However, Ultan has been beating the #fashtech drum for a while now, and it seems the time has come to throw off the chains of the dull, black band and embrace color again.

Or something like that. Anyway, find the comments and share your Mi Bands or opinions. Either, both, all good.

IoT Hackathon: Team Waterlytics’ Entry – Part 2/2

February 24th, 2016

Editor’s note: Our team participated in many of the Summer of Innovation events held by Laurie (@lsptahoe) and her team. Here’s Part 2 of Mark’s (@mvilrokx) project from the IoT Hackathon held at Oracle HQ, July 30-31. Enjoy.

After all the prep work for the IoT hackathon, where we agreed on our use case and settled on the hardware we would be using, it was time to start the actual fun part: designing and coding!

Diane provided all the designs for the application, Joe built all the hardware and provided the “smarts” of the application, and I was responsible for bringing it all together.

The hardware

Let’s start with the sketch of the whole hardware setup:


Basically, the piezo is connected to the single analog input on the ESP8266 (A0) and to ground. In parallel with the piezo we put a 1MΩ pull-down resistor to reduce possible signal noise, a 1µF capacitor to smooth out the piezo’s signal over time and improve readings, and a 5.1V Zener diode to prevent the piezo from frying the ESP8266 (piezos can spike at 20–40V). The piezo can then simply be attached to a pipe with some plumber’s putty, ready to start sensing vibrations.

For our test setup, we created a closed-circuit water flow with some copper pipes, a simple garden pump and a bucket of water, simulating an actual water system.


This turned out to work great for research and the actual demonstration during the hackathon.

The software

ESP8266 WiFi Client

The whole reason for using the ESP8266 was to be able to connect the piezo to the cloud. The ESP8266 can be flashed with a Lua-based firmware called NodeMCU, which makes this whole process remarkably simple. Just set the ESP8266 to WiFi STATION mode and then connect to an available WiFi network; that’s it, 2 lines of (Lua) code:

wifi.setmode(wifi.STATION)      -- put the chip in station (client) mode
wifi.sta.config(<ssid>, <pwd>)  -- join the local WiFi network

The board is now connected to the internet, and you can perform e.g. REST API calls from it. In practice, we implemented this slightly differently, because hard-coding credentials isn’t very user friendly, but that’s outside the scope of this post.

All we had to do now was read the data from the piezo and send it over the internet to be processed and stored somewhere, e.g. on the Oracle Cloud IoT platform. Unfortunately, we didn’t have access to that platform, so we had to build something ourselves; see next.

Initially the plan was to use REST API calls to send the data to our backend, but this turned out to be too heavy for the little ESP8266 board, and we could only perform a few calls per second before it would freeze and reboot.

Instead, we opted to use the MQTT protocol (quoting the protocol’s own description):

“MQTT is a machine-to-machine (M2M)/”Internet of Things” connectivity protocol. It was designed as an extremely lightweight publish/subscribe messaging transport. It is useful for connections with remote locations where a small code footprint is required and/or network bandwidth is at a premium…”

Sounds like a perfect match, and the Lua firmware has built-in support for MQTT!

We turned our ESP8266 into an MQTT client and used CloudMQTT as our broker. Once we switched to MQTT, we were able to send data at a much higher frequency, but still not fast enough compared to the number of samples we wanted to collect from the piezo (hundreds per second).

So instead of sending all the data we collected from the piezo to the backend, we decided to do some of the processing on the ESP8266 chip itself. We would collect 100 samples, calculate the mean, median and standard deviation of those, and send that to the backend as fast as we could. In the end, it turned out that calculating the standard deviation was too much for the board and started affecting our sampling frequency, so we dropped it altogether.
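To make the idea concrete, here is a minimal sketch of that windowing logic. The board itself ran Lua on NodeMCU; this version is in JavaScript for readability, and the function and field names are illustrative assumptions.

// Illustrative sketch of the on-chip aggregation: buffer a window of raw
// piezo readings, reduce the window to summary statistics, and publish
// one small message instead of 100 raw samples.
var WINDOW_SIZE = 100;
var samples = [];

function publish(stats) {
  // Stand-in for the MQTT publish call; see the Node.js client below.
  console.log(JSON.stringify(stats));
}

function onSample(reading) {
  samples.push(reading);
  if (samples.length === WINDOW_SIZE) {
    var mean = samples.reduce(function (sum, v) { return sum + v; }, 0) / samples.length;
    var sorted = samples.slice().sort(function (a, b) { return a - b; });
    var median = sorted[Math.floor(sorted.length / 2)];
    publish({ mean: mean, median: median });
    samples = [];  // start the next window
  }
}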

Piezo data was now being pushed from our little board to the broker; next, we needed to store that data in a database. For this, we needed another MQTT client, one that listens for the messages from the ESP8266 and stores them as they arrive. We used node.js and the MQTT.js package to create our client and hooked it up to MongoDB.
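Here is a minimal sketch of such a subscriber. The broker URL, topic convention and collection name are assumptions for illustration, not our actual configuration.

// Minimal Node.js subscriber sketch: listen for sensor messages on the
// broker and store each JSON payload as a document in MongoDB.
var mqtt = require('mqtt');
var MongoClient = require('mongodb').MongoClient;

MongoClient.connect('mongodb://localhost:27017/waterlytics', function (err, db) {
  if (err) throw err;
  var readings = db.collection('readings');

  var client = mqtt.connect('mqtt://broker.example.com');
  client.on('connect', function () {
    client.subscribe('sensors/+/data');  // assumed topic layout: one per sensor
  });
  client.on('message', function (topic, message) {
    var doc = JSON.parse(message.toString());  // e.g. {"mean": 12.3, "median": 11.9}
    doc.topic = topic;
    doc.receivedAt = new Date();
    readings.insertOne(doc);  // schemaless insert: no DDL to maintain
  });
});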

This gave us the flexibility to change the data that gets sent from the board (which was just some JSON) without having to deal with DDL. For example, we managed to cram extra attributes into the data sent from the ESP8266 containing information about the pipe’s material (“copper” or “PVC”) and the location of the piezo on the pipe (“close to bend”), all without changing anything other than the code on the ESP8266 that captures this information.

This information would be useful in our model, as different pipe materials or other characteristics could have an effect on it, although we didn’t get to use it for the hackathon.

The final piece of the puzzle was to display all this information in a useful manner to the end user.

For this we had a web application that would connect to MongoDB and process the information available there. Users could monitor usage per device, or aggregated in any number of ways: devices could be grouped by restroom, floor, building, campus or company-wide.

You could also aggregate by device type: shower, toilet, urinal, etc. A budget could be allocated, again at any of these levels and over any timeline, e.g. 100 gallons/day/toilet or 100,000 gallons/quarter/building. Notifications would get sent out when you went over a budget or when “unusual” activity was noticed, e.g. a device is nowhere near its budget BUT has used much more today than it normally does, which could indicate a leak.
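As a sketch of the kind of query behind those rollups, here is a hypothetical MongoDB aggregation. The field names (building, gallons, receivedAt) are assumptions; our real documents stored piezo statistics from which usage was derived.

// Hypothetical rollup: total estimated usage per building since a given date,
// reusing the `db` handle from the subscriber sketch above.
function usagePerBuilding(db, since, callback) {
  db.collection('readings').aggregate([
    { $match: { receivedAt: { $gte: since } } },
    { $group: { _id: '$building', totalGallons: { $sum: '$gallons' } } },
    { $sort: { totalGallons: -1 } }
  ]).toArray(callback);
}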

Those were all the software parts that made up our final solution, here’s an overview:

Notice how this architecture allows us to easily scale the individual parts without affecting one another: we can scale up the number of sensors, scale the MQTT broker, scale the node.js server and scale the MongoDB.

One final component that we did not really get to highlight in the presentation during the hackathon is that the MQTT client on the ESP8266 is configured both to send messages (the piezo data, as shown before) and to receive messages.

This allowed us to control the sensors remotely by sending them commands from the broker (through another MQTT client). As soon as you switched on an ESP8266 module, say “sensor1”, it would announce itself. The node.js server would listen for these messages and indicate in the database that “sensor1” was online and ready to accept commands.

From another MQTT client, controlled from an admin web application, we could then send a command to a specific ESP8266 telling it to either start sensing and sending data, or stop. This was done for practical purposes, because we were producing so much data that we were in danger of running out of free capacity on CloudMQTT 🙂 but it turned out to be a very useful feature that we plan to investigate further and implement in future versions.
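Publishing such a command is just a few lines from any MQTT client; here is a hypothetical admin-side example (the command topic convention is assumed):

// Hypothetical admin-side command sketch: tell "sensor1" to start sensing.
var mqtt = require('mqtt');
var admin = mqtt.connect('mqtt://broker.example.com');
admin.on('connect', function () {
  admin.publish('sensors/sensor1/cmd', 'start');  // or 'stop'
  admin.end();  // close the connection once the command is sent
});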

In the end, we didn’t win any prizes at the hackathon, but I did learn a lot about IoT and plan to use it in future projects here at the AppsLab. Stay tuned for more blog posts on those projects.



A Wonderful Week in Australia

February 23rd, 2016

2016 has been a whirlwind so far, and February kept up the pace. Here’s a quick rundown of what we’ve been doing.

As we did last year, OAUX made a trip to APAC again this year to meet partners, customers and Oracle people, show our Expo of goodies and talk simplicity-mobility-extensibility, Glance, Scan, Commit and our overall goal of increasing user participation.

This year, Noel (@noelportugal) and I made the trip to Australia and spent an awesome, warm week in beautiful Sydney. Noel showed the portable Smart Office demo, including the Smart Badge, that we debuted at OpenWorld in October, and I showed a couple visualizations and Glance on a handful of devices.


Don’t let the picture fool you. It was taken after the end of Jeremy’s (@jrwashley) talk and before the official start of the Expo, during lunch. Once people finished eating, the room filled up quickly with 80 or so partner attendees.

Want more? Read Ultan’s (@ultan) account of the partner extensibility day.

On the second day in the Sydney office, we got the chance to chat with local Oracle sales reps, consultants and architects, and I was lucky enough to meet Stuart Coggins (@ozcoggs) and Scott Newman (@lamdadsn) who read this blog.

It’s always invigorating to meet readers IRL. I’ve poured nine years into this blog, writing more than 2,000 posts, and sometimes the silence is deafening. I wonder who’s out there reading, so it’s always a treat.

Oh, and Stuart had a board with blinky lights, an IoT demo he’d shown a customer that day. So, I was immediately intrigued.

Turns out Stuart and Scott create demos similar to our own and share the same nerdy passions we do. To wit, check out the Anki car hack Stuart and some colleagues did for Pausefest 2016 in Melbourne earlier this month.

You’ll recall the Anki cars are one of our latest favorite shiny toys to hack.

Overall, it was a great week. Special thanks to John P for hosting us and making the trip all the more fun.

While we were away, Thao (@thaobnguyen) and the HQ team hosted a group of analysts in the Cloud and Gadget Labs, including Vinnie Mirchandani (@dealarchitect), who posted a nice writeup, including the money quote:

A walkthrough of their UX lab was like that of Q’s workshop in Bond movies. An Amazon Echo, smart watches, several gestural devices point to further changes in interfaces. Our expectations are being shaped by rapidly evolving UX in our cars and homes.

Somewhere Noel feels warm and fuzzy because the Q-meets-Tony Stark aesthetic is exactly what he was hoping to capture in his initial designs for our Gadget Lab.

Fun fact, back in 2007 when Justin (@kestelyn) and I collaborated on the original blogger program for OpenWorld, Vinnie was active in the discussion. Small world.

Anyway, given all the excitement lately, I’m only now getting a chance to encourage you to read a post by Sarah Smart over on VoX, “Wearables, IoT push Oracle’s emerging tech development.”

So yeah, a lot going on, and conference season is just beginning. Stay tuned for more.

Put Glance on It

February 5th, 2016 Leave a Comment

Because I live in Portland, I’m often asked if “Portlandia” is accurate.

It is, mostly, and so it seems appropriate to channel an early episode to talk about Glance, our wearables framework.

Actually, Glance has grown beyond wearables to support cars and other devices, the latest of which is Noel’s (@noelportugal) gadget du jour, the LaMetric Time (@smartatoms).

Insert mildly amusing video here.

And of course Noel had to push Glance notifications to LaMetric, because Noel. Pics, it happened.


The text is truncated, and I tried to take a video of the scrolling notification, but it goes a bit fast for the camera. Beyond just the concept, we’ll have to break up the notification to fit LaMetric’s model better, but this was only a few minutes of effort from Noel. I know, because he was sitting next to me while he coded it.
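If you’re wondering what a push like that involves, here’s a rough sketch against LaMetric’s local device API, as I understand it; the device IP, API key, icon ID and the frame-splitting approach are my assumptions, not Noel’s actual code.

```javascript
// Rough sketch: push a Glance-style notification to a LaMetric Time over
// its local HTTP API (Node 18+ for global fetch). DEVICE, APIKEY and the
// 15-character frame split are placeholders/assumptions.
const DEVICE = 'http://192.168.1.50:8080';
const APIKEY = 'your-device-api-key';

async function notify(text) {
  // Split a long message into short frames instead of letting it truncate.
  const frames = (text.match(/.{1,15}/g) || []).map((chunk) => ({
    icon: 'i95', // a built-in icon ID
    text: chunk,
  }));

  await fetch(`${DEVICE}/api/v2/device/notifications`, {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      Authorization: 'Basic ' + Buffer.from(`dev:${APIKEY}`).toString('base64'),
    },
    body: JSON.stringify({ model: { frames } }),
  });
}

notify('Expense report approved');
```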

In case you need a refresher, here’s Glance on a bunch of other things.


I didn’t have a separate camera so I couldn’t show the Android notification.


We haven’t updated the framework for them, but if you recall, Glance also worked on Google Glass and Pebble in its 1.0 version.



Come Visit the OAUX Gadget Lab

February 4th, 2016 Leave a Comment

In September 2014, Oracle Applications User Experience (@usableapps) opened a brand new lab that showcases Oracle’s Cloud Applications, specifically the many innovations that our organization has made to and around Cloud Applications in the past handful of years.

We call it the Cloud User Experience Lab, or affectionately, the Cloud Lab.

Our team has several projects featured in the Cloud Lab, and many of our team members have presented our work to customers, prospects, partners, analysts, internal groups, press, media and even schools and Girl and Boy Scout troops.

In 2015, the Cloud Lab hosted more than 200 such tours; actually quite a bit more, but I don’t have the exact number in front of me.

Beyond the numbers, Jeremy (@jrwashley), our group vice president, has been asked to replicate the Cloud Lab in other places, on the HQ campus and abroad at other Oracle campuses.

People really like it.

In October 2015, we opened an adjoining space to the Cloud Lab that extends the experience to include more hands-on projects. We call this new lab the Gadget Lab, and it features many more of our projects, including several you’ve seen here.

In the Gadget Lab, we’re hoping to get people excited about the possible and give them a glimpse of what our team does, because saying “we focus on emerging technologies” isn’t nearly as descriptive as showing our work.


So, the next time you’re at Oracle HQ, sign up for a tour of the Cloud and Gadget Labs and let us know what you think.

The MagicBand

February 3rd, 2016 Leave a Comment

Editor’s note: Here’s the first post from our new-ish researcher, Tawny. She joined us back in September, just in time for OpenWorld. After her trip to Disney World, she talked eagerly about the MagicBand experience, and if you read here, you know I’m a fan of Disney’s innovative spirit.


Planning a Disney World trip is no small feat. There are websites that display crowd calendars to help you find the best week to visit and the optimal parks to visit on each of those days so you can take advantage of those magic hours. Traveling with kids? Visiting during the high season? Not sure which FastPass+ to make reservations for?

There are annually updated guidebooks dedicated to providing you the most optimal attraction routes and FastPass+ reservations, based on thousands of data points for each park.

Beginning in 2013, Disney introduced the MagicBand, a waterproof bracelet that acts as your entry ticket, FastPass+ holder, hotel key and credit card holder. The bands are part of the MyMagic+ platform, consisting of four main components: MagicBands, FastPass+, My Disney Experience, and PhotoPass Memory Maker. The PassPorter Boards list everything you can do with a MagicBand.

I got my chance to experience the MagicBand early this January.


These are both open edition bands. This means that they do not have customized lights or sounds at FP+ touchpoints. We bought them at a kiosk at Epcot after enviously looking on at other guests who were conveniently accessing park attractions without having to take out their tickets! It was raining, and the idea of having to take anything out of our bags under our ponchos was not appealing.

The transaction was quick and the cashier kindly linked our shiny new bands to our tickets. Freedom!!!

The band made it easy for us to download photos and souvenirs across all park attractions without having to crowd around the photo kiosk at the end of the day. It was great being able to go straight back to our hotel room while looking through our Disney photo book in the mobile app!

Test Track at Epcot made the most use of the personalization aspect of these bands. Before the ride, guests could build their own cars with the goal of outperforming other cars in 4 key areas: power, turn handling, environmental efficiency and responsiveness.


After test driving our car on the ride, we found there were still many things we could do with it, such as joining a multiplayer team race…we lost 🙁

What was really interesting was watching guests who were fortunate enough to have personalized entry colors and sounds, a coveted status symbol amongst MagicBand collectors. The noise and colors were a mini attraction on their own! I wish our badge scanners said hi to us like this every morning…



When used in conjunction with the My Disney Experience app, there’s a lot of potential.

So what about MagicBands for the enterprise context?

Hospitals may benefit, but some argue that the MagicBand model works exclusively for Disney because of its unique ecosystem and the heavy cost of implementing it elsewhere. The concept of the wearable is no different from the badges workers have now.

Depending on the permissions given to the badgeholder, she can badge into any building around the world.

What if we extend our badge capabilities to allow new or current employees to easily find team members to collaborate with and ask questions?

What if the badge carried all of your desktop and environmental preferences from one flex office desk to the next, so you never have to set up or complain about the temperature ever again?

What if we could get a push notification that it’s our cafeteria cashier’s birthday as we’re paying and make their day with a “Happy Birthday?”

That’s something to think about.

M2M, the Other IoT

January 28th, 2016 1 Comment

Before IoT became ‘The’ buzzword, there was M2M (machine to machine). Some industries still refer to IoT as M2M, but overall the term Internet of Things has become the norm. I like the term M2M because it better describes what IoT is meant to do: machines talking to other machines.

This year our team once again participated in the AT&T Developer Summit 2016 hackathon. With M2M on our minds, we created a platform to allow machines and humans to report extraordinary events in their neighborhood. Whenever a new event was reported (by machine or human), devices and people (notified by an app) connected to the platform could react accordingly. We came up with two possible use cases to showcase our idea.


Virtual Gated Community

Gated communities are a great amenity for those wanting privacy and security. The problem is that these communities usually come with a high price tag. So we came up with a turnkey solution for a virtual gate using M2M. We created a device using the Qualcomm DragonBoard 410c, with wifi and bluetooth capabilities, and used a common motion sensor and a camera to detect cars and people not belonging to the neighborhood. Then, we used Bluetooth beacons that could be placed on residents’ keychains. When a resident drove (or walked) by the virtual gate, it would not trigger the automated picture and report to the system, but if someone without a Bluetooth beacon drove by, the system would log and report it.
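Here’s a sketch of that gate logic in Node.js. The BLE scanning uses the noble package, while RESIDENT_BEACONS, onMotion(), takePicture() and reportEvent() are hypothetical stand-ins for our actual sensor and camera wiring.

```javascript
// Sketch of the virtual-gate logic: only trigger the camera when motion is
// detected and no resident beacon has been seen recently. BLE scanning via
// the noble package; the rest is hypothetical wiring.
const noble = require('noble');

const RESIDENT_BEACONS = new Set(['beacon-id-1', 'beacon-id-2']); // keychain IDs
const GRACE_MS = 10 * 1000; // how recently a resident beacon must be seen
let lastResidentSeen = 0;

noble.on('stateChange', (state) => {
  if (state === 'poweredOn') noble.startScanning([], true); // allow duplicates
});

noble.on('discover', (peripheral) => {
  if (RESIDENT_BEACONS.has(peripheral.id)) {
    lastResidentSeen = Date.now();
  }
});

// Called by the motion sensor (hypothetical GPIO wiring).
function onMotion() {
  if (Date.now() - lastResidentSeen < GRACE_MS) return; // resident, stay quiet

  // Stranger: snap a photo and report the event to the platform.
  takePicture().then((photo) => reportEvent('unknown-visitor', photo));
}
```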

We also created an app, so residents could get notifications as well as report different events, which brings me to our second use case.

Devices reacting to events

We used AT&T Flow Designer and the M2X platform to create event workflows with notifications. A user or a device could subscribe to receive only the events they care about, such as a lost dog or cat, water leaks, etc. The really innovative idea here is that devices can also react to certain events. For example, a user could configure their porch lights to turn on automatically when someone nearby reports suspicious activity. If everyone on the street did the same, reporting such an event would effectively turn on all the porch lights on the street at once, which could be a pretty good crime deterrent.
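Our actual workflows lived in Flow Designer and M2X, but the subscription pattern boils down to something like this sketch; the broker URL, topic scheme and setPorchLight() are illustrative assumptions.

```javascript
// The subscription pattern: a device subscribes only to event types it
// cares about and reacts locally. Broker URL, topic scheme and
// setPorchLight() are illustrative assumptions (our real workflows ran on
// AT&T Flow Designer and M2X).
const mqtt = require('mqtt');

const client = mqtt.connect('mqtt://broker.example.com');

client.on('connect', () => {
  // This porch light only cares about nearby suspicious-activity reports.
  client.subscribe('neighborhood/elm-street/suspicious-activity');
});

client.on('message', (topic, message) => {
  const event = JSON.parse(message.toString());
  setPorchLight(true); // every subscribed light on the street reacts at once
  console.log(`Porch light on: ${event.description}`);
});
```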

We called our project “Neighborhood,” and we are still amazed at how much we were able to accomplish in merely 20+ hours.

SafeDrop – Part 2: Function and Benefits

January 25th, 2016 Leave a Comment

SafeDrop is a secure box for receiving a physical package delivery without the recipient needing to be present. If you recall, it was my team’s project at the AT&T Developer Summit Hackathon earlier this month.

SafeDrop is implemented with an Intel Edison board at its core, coordinating various peripheral devices to produce a secure receiving product, and it won second place for “Best Use of Intel Edison” at the hackathon.

SafeDrop box with scanner

Components built in the SafeDrop

While many companies have focused on the online security of eCommerce, last-mile package delivery is still very much insecure. ECommerce is ubiquitous, and people still need to receive the physical goods somehow.

The delivery company will tell you the order will be delivered on a particular day. You can wait at home all day to receive the package, or let the package sit in front of your house and risk someone stealing it.

Every year there are reports of package theft during the holiday season, but more importantly, the inconvenience of staying home to accept goods and the lack of peace of mind really annoy many people.

With SafeDrop, your package is delivered, securely!

How SafeDrop works:

1. When a recipient is notified of a package delivery with a tracking number, he puts that tracking number into a mobile app, which registers it with the SafeDrop box so the box knows it is expecting a package with that tracking number.

2. When the delivery person arrives, he just scans the tracking number barcode at the SafeDrop box; that barcode is the unique key to open the SafeDrop. Once the tracking number is verified, the SafeDrop opens.

Delivery person scans the package in front of SafeDrop

3. When the SafeDrop is opened, a video is recorded for the entire time the door is open, as a security measure. If the SafeDrop is not closed properly, a loud buzzer sounds until it is.

Inside of SafeDrop: Intel Edison board, sensor, buzzer, LED, servo, and webcam

4. Once the package is inside the SafeDrop, a notification is sent to the recipient’s mobile app, indicating the expected package has been delivered, along with a link to the recorded video.
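Here’s a condensed sketch of that flow. The hardware helpers (openDoor(), startRecording(), waitForDoorClosed()) and notifyRecipient() are hypothetical stand-ins for the actual Edison peripheral code.

```javascript
// Condensed sketch of the SafeDrop flow. openDoor(), startRecording(),
// waitForDoorClosed() and notifyRecipient() are hypothetical stand-ins
// for the Edison peripheral code (servo, webcam, sensor, buzzer).
let expectedTracking = null; // set when the recipient registers a package

// Step 1: the mobile app registers the expected tracking number.
function registerPackage(trackingNumber) {
  expectedTracking = trackingNumber;
}

// Step 2: the delivery person scans the barcode on the box.
async function onBarcodeScanned(code) {
  if (code !== expectedTracking) return; // wrong or unregistered package

  openDoor(); // servo unlocks the lid
  const video = startRecording(); // step 3: record while the door is open

  // Buzzer nags if the door is left open too long.
  await waitForDoorClosed({ buzzAfterMs: 30 * 1000 });

  const videoUrl = await video.stopAndUpload();
  expectedTracking = null;

  // Step 4: notify the recipient, with a link to the recorded video.
  notifyRecipient(`Your package has been delivered. Video: ${videoUrl}`);
}
```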

In the future, SafeDrop could be integrated with USPS, UPS and FedEx to verify the package tracking number for the SafeDrop automatically. When the delivery person scans the tracking number at the SafeDrop, it would automatically update the status to “delivered” in the tracking record in the delivery company’s database too. That way, the entire delivery process is automated in a secure fashion.

This SafeDrop design highlights three advantages:

1. Tracking number barcode as the key to the SafeDrop.
That barcode is tracked during the entire delivery and always stays with the package, so it is fitting to use it as the “key” to open its destination. We do not introduce anything “new” or “additional” to the process.

2. Accurate delivery, which eliminates human mistakes.
Human error sometimes causes deliveries to the wrong address. With SafeDrop integrated into the shipping system, the focus is on a package (with the tracking number as package ID) going to a target (a SafeDrop ID associated with an address).

In a sense, the package (package ID) has its intended target (SafeDrop ID). The package can be deposited into one and only one SafeDrop, which eliminates wrong deliveries.

3. Non-disputable delivery.
This dispute could happen: the delivery company says a package has been delivered, but the recipient says it has not arrived. The possible reasons: a) the delivery person didn’t really deliver it; b) the delivery person dropped it at a wrong address; c) a thief came by and took it; d) the recipient got it but is making a false claim.

SafeDrop makes things clear! If the package is really delivered to the SafeDrop, the delivery is recorded, and the delivery company has done its job. If it is in the SafeDrop, the recipient has it. There is really no room for dispute.

I will be showing SafeDrop at Modern Supply Chain Experience in San Jose, January 26 and 27 in the Maker Zone in the Solutions Showcase. If you’re attending the show, come by and check out SafeDrop.

My Joyful Consumption of Data

January 21st, 2016 4 Comments

I love data, always have.

To feed this love and to compile data sets for my quantified self research, I recently added the Netatmo Weather Station to the other nifty devices that monitor and quantify my everyday life, including the Fitbit Aria, Automatic and Nest.

I’ve been meaning to restart my fitness data collection too, after spending most of last year with the Nike+ Fuelband, the Basis Peak, the Jawbone UP24, the Fitbit Surge and the Garmin Vivosmart.

FWIW I agree with Ultan (@ultan) about the Basis Peak, now simply called Basis, as my favorite among those.

Having so many data sets and visualizations, I’ve observed my interest peak and wane over time. On Day 1, I’ll check the app several times, just to see how it’s working. Between Day 2 and Week 2, I’ll look once a day, and by Week 3, I’ve all but forgotten the device is collecting data.

This probably isn’t ideal, and I’ve noticed that I tend to ignore even features I expected would work for me, like notifications, e.g. the Netatmo app can send notifications about indoor carbon dioxide levels, outdoor temperature and rain accumulation, if you have the rain gauge.

These seem useful, but I tend to ignore them, a very typical smartphone behavior.

Unexpectedly, I’ve come to love the monthly emails many devices send me and find them much more valuable than shorter-interval updates.

Initially, I thought I’d grow tired of these and unsubscribe, but turns out, they’re a happy reminder about those hard-working devices that are tirelessly quantifying my life for me and adding a dash of data visualization, another of my favorite things for many years.

Here are some examples.


My Nest December Home Report

December Monthly Driving Insight from Automatic

Although it’s been a while, I did enjoy the weekly summary emails some of the fitness trackers would send. Seems weekly is better in some cases, at least for me.

Weekly Progress Report from Fitbit


Basis Weekly Sleep Report

A few years ago, Jetpack, the WordPress analytics plugin, began compiling a year-in-review report for this blog, which I also enjoy annually.

If I had to guess about my reasons, I’d suspect that I’m not interested enough to maintain a daily velocity, and a month (or a week for fitness trackers) is just about the right amount of data to form good and useful data visualizations.

Of course, my next step is dumping all this data into a thinking pot, stirring, and seeing if any useful patterns emerge. I also need to reinvigorate myself about wearing fitness trackers again.

In addition to Ultan’s new fave, the Xiaomi Mi Band, which seems very interesting, I have the Moov and Vector Watch waiting for me. Ultan talked about the Vector in his post on #fashtech.

Stay tuned.