Another year, another amazing time at the Maker Faire.
I’ve attended my fair share of Maker Faires over the years, so the pyrotechnic sculptures, 3D-printing masterpieces, and handmade artisan marketplaces came as no particular surprise. But somehow, every time I come around to the San Mateo fairgrounds, the Faire can’t help but be aggressively fresh, crazy, and novel. This year, a host of new and intriguing trends kept me on my toes as I ventured through the greatest show and tell on Earth.
Young makers came out in full force this year. Elementary school maker clubs showed off their circuit projects, middle schoolers explained how they built their little robots, and high school STEM programs presented their battle robots. It’s pleasing to see how Maker education has blossomed these past years, and how products and startups like littleBits and Adafruit have made major concepts in electronics and programming so simple and inexpensive that any kid can pick them up and start exploring. Also wonderful is seeing young teams travel out to the Bay Area from Texas, Oregon, and other states, a testament to the growth of the Maker movement beyond Silicon Valley.
Speaking of young makers’ participation, Arduino creator Massimo Banzi talked about Arduino as an education tool for kids to play and tinker with, even though he never planned to make kids’ toys in his early years. The maker movement has sparked curious minds of all ages to start playing with electronics, making robots, and learning a new language in programming.
While the maker movement has made things very accessible to individuals, the spirit of creation and innovation has also reached the large enterprise. On the “Maker Pro” stage, our GVP, Jeremy Ashley (@jrwashley), talked about new trends in large enterprise application design, and how the OAUX group is driving the change toward simpler, yet more effective and engaging, enterprise applications.
Drones were also a trending topic this year, with a massive drone-racing tent set up and events going on the whole weekend. Everything was being explored: new shapes for efficient and quick flight, new widgets and drone attachment modules, new methods of interaction with the drone. One team had developed a smart glove that responded to gyroscopic motion and gestures to control the flight of a quadcopter, and had the machine dance around the wearer – an interesting and novel marriage of wearable tech and flight.
Personally, I’ve got a soft spot for art and whimsy, and the Faire had whimsy by the gallon. The artistry of the creators from around the country and globe can’t be overstated.
Maker Faire never disappoints. We brought friends along who had never been to a Faire, and it’s always fun to watch them get blown off their feet, literally and figuratively, the first time a flamethrower blasts open from the monolithic Crucible. Or their grins of delight when they see a cupcake-shaped racecar zoom past them… and another… and another. Or the spark of amazement when they witness some demo that’s beyond any realm of imagination.
Many hands make light (emitting diodes) work. Oracle Applications User Experience (OAUX) gets down to designing fashion technology (#fashtech) solutions in a fun maker event with a serious research and learning intent. OAUX Senior Director and resident part-time fashion blogger, Ultan “Gucci Translated” O’Broin (@ultan), reports from the Redwood City runway.
Fashion and Technology: What’s New?
Wearable technology is not new. Elizabeth I of England was a regal early adopter. In wearing an “armlet” given to her by Robert Dudley, First Earl of Leicester in 1571, the Tudor Queen set in motion that fusion of wearable technology and style that remains evident in the Fitbits and Apple Watches of today.
Elizabeth I’s device was certainly fly, described as “in the closing thearof a clocke, and in the forepart of the same a faire lozengie djamond without a foyle, hanging thearat a rounde juell fully garnished with dyamondes and a perle pendaunt.”
Regardless of the time we live in, for wearable tech to be successful it has to look good. It’s got to appeal to our sense of fashion. Technologists are increasingly involving clothing experts in production and branding decisions. For example, at Google I/O 2016, Google and Levi’s announced an interactive jacket based on the Google Jacquard technology, which makes fabric interactive, applied to a Levi’s commuter jacket design.
Fashion Technology Maker Event: The Summer Collection
Misha Vaughan’s (@mishavaughan) OAUX Communications and Outreach team joined forces with Jake Kuramoto’s (@jkuramot) AppsLab (@theappslab) Emerging Tech folks recently in a joint maker event at Oracle HQ to design and build wearable tech solutions that brought the world of fashion and technology (#fashtech) together.
The occasion was a hive of activity, with sewing machines, soldering irons, hot-glue guns, Arduino technology, fiber-optic cables, LEDs, 3D printers, and the rest, all in evidence during the production process.
Fashtech events like this also offer opportunities for discovery, as the team found out how interactive synth drum gloves can not only create music, but also be used as input devices to write code. Why limit yourself to one kind of keyboard?
Wearable Tech in the Enterprise: Wi-Fi and Hi-Heels
What does all this fashioning of solutions mean for the enterprise? Wearable technology is part of the OAUX Glance, Scan, Commit design philosophy, key to the Mobility strategy reflecting our cloud-driven world of work. Smart watches are as much a part of the continuum of devices we use interchangeably throughout the day as smartphones, tablets, or laptops are, for example. To coin a phrase from OAUX Group Vice President Jeremy Ashley (@jrwashley) at the recent Maker Faire event, in choosing what works best for us, be it clothing or technology: one size does not fit all.
A distinction between what tech we use and what we wear in work and at home is no longer convenient. We’ve moved from BYOD to WYOD. Unless that wearable tech, a deeply personal device and style statement all in one, reflects our tastes and sense of fashion we won’t use it: unless we’re forced to. The #fashtech design heuristic is: make it beautiful or make it invisible. So, let’s avoid wearables becoming swearables and style that tech, darling!
Generally, I’m not in favor of consolidating important stuff, e.g. credit cards, onto my phone, because if I lose my phone, I’ll lose all that stuff too.
However, I’ve been waiting to try out a digital hotel key, i.e. using my phone to unlock my hotel room. Only a few hotels and hotel chains have this technology in place, and recently, I finally stayed at one that does, the Hilton San Jose.
Much to my surprise, and Noel’s (@noelportugal), the digital key doesn’t use NFC. We’d assumed it would, given NFC is fairly common in newer hotels.
Nope, it uses Bluetooth, and when you get close to your room or any door you have access to unlock, e.g. the fitness center, the key enables.
Then, touch to unlock, just like it says, and within a second or so, the door is unlocked. It’s not instantaneous, like using the key, which uses NFC, but still pretty cool.
Ironically, the spare physical key they gave me for “just in case” scenarios was the one that failed. I made the mistake of leaving my phone in the room to charge and taking only the spare key while I ran downstairs to get some food, and the key wouldn’t open the door.
Anyway, the feature worked as expected, which is always a win. Those plastic keys won’t disappear anytime soon, and if you lose your phone while you’re using the digital hotel key, you’re extra hosed.
Still, I liked it and will definitely use it again whenever it’s available because it made me feel like future man and stuff.
Find the comments.
Editor’s Note: In February while we were in Australia, I had the pleasure to meet Stuart Coggins (@ozcoggs) and Scott Newman (@lamdadsn). They told me about a sweet Anki Overdrive cars plus Oracle Cloud Services hack Stuart and some colleagues did for Pausefest 2016 in Melbourne.
Last week, Stuart sent over a more detailed video of the specifics of the build and a brief writeup of what was involved. Here they are.
Oracle IoT and Anki Overdrive
By Stuart Coggins
Some time ago, our Middleware team stumbled upon the Anki Overdrive and its innovative use of technology, including APIs to augment a video game with physical racing cars.
We first presented an IoT focused demonstration earlier this year at an Innovation event in Melbourne. It was very well received, and considered very “un-Oracle.”
Over the past few months, the demo scope has broadened. And so has collaboration across Oracle’s Lines of Business. We saw an opportunity to make use of some of our Cloud Services with a “Data in Action” theme.
We’ve taken the track to several events, spanning various subject areas, always sparking the question “What does this have to do with me?” and, in some cases, “Why is Oracle playing with racing cars?”
As if the cars weren’t drawcard enough at our events, the drone has been a winner. Again, it’s an opportunity to showcase how using a range of services can make things happen.
As you’ll see in the video, the flow is fairly straightforward: the game, running on a tablet, talks to the cars via Bluetooth. Using Bluetooth sniffers on a Raspberry Pi, we interrogate the communication between the devices. There are many game events as well as car activity events (speed of the left/right wheels, change lane, turn left, turn right, off track, etc.). We’re using Python scripts to forward the data to Oracle’s Internet of Things Cloud Service.
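To make the forwarding step concrete, here is a minimal sketch in Python. The event fields, device IDs, and message envelope are hypothetical (the real sniffed Anki protocol was reverse-engineered and isn’t public), and the actual POST to the IoT Cloud Service endpoint, with its authentication, is deliberately left out.

```python
import json
import time

# Hypothetical event shapes for illustration only; the real sniffed Anki
# events and the IoT Cloud Service message format differ.
def build_iot_message(device_id, raw_event):
    """Wrap one sniffed car event in a JSON envelope for the cloud."""
    return {
        "deviceId": device_id,
        "eventTime": raw_event.get("ts", int(time.time() * 1000)),
        "payload": {
            "event": raw_event["type"],   # e.g. "speed", "changeLane", "offTrack"
            "value": raw_event.get("value"),
        },
    }

def serialize_batch(device_id, raw_events):
    """Serialize a batch of events; a real script would POST this JSON
    to the IoT Cloud Service REST endpoint with proper credentials."""
    return json.dumps([build_iot_message(device_id, e) for e in raw_events])

# Two made-up events a sniffer loop might have captured:
sniffed = [
    {"type": "speed", "value": 742, "ts": 1},
    {"type": "offTrack", "ts": 2},
]
print(serialize_batch("car-01", sniffed))
```

In the actual setup, a loop reading from the Bluetooth sniffer would feed events into something like `serialize_batch` continuously rather than a fixed list.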
This is where things get interesting. The speed and laptime data is filtered out and forwarded to Oracle’s Database Cloud Service. The “speedo” dials are rendered using Oracle APEX (Application Express), which does a great job. An “off track” event is singled out and instantiates a process defined in Oracle Process Cloud Service. At this point, we’ll integrate with Oracle Service Cloud to create an event for later auditing and logging. Whilst airborne, the drone captures photos of the incident (the crashed cars) and sends them back to the process. The business process has created an incident folder on Oracle Document Cloud Service to record any details regarding the event, including the photos.
Because data is not much use if you’re not going to do something with it, we then hook up Oracle Business Intelligence Cloud Service to the data stored in Database Cloud Service. Post-race analysis is visualised to show the results, and with several sets of race data, it gives us insight into which car is consistently recording the fastest laptimes, i.e. the car that should be used when challenging colleagues to a race!
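The core of that analysis can be sketched in a few lines of Python. The lap times below are invented for illustration (the real numbers live in Database Cloud Service and are charted in BI Cloud Service); the car names are real Anki Overdrive models.

```python
from statistics import mean, stdev

# Made-up lap times in seconds, several races per car.
laps = {
    "Skull":       [7.9, 8.1, 8.0, 7.8],
    "Groundshock": [7.5, 7.6, 7.4, 7.7],
}

# Rank cars by average lap time; the standard deviation hints at consistency.
ranking = sorted(laps, key=lambda car: mean(laps[car]))
for car in ranking:
    print(f"{car}: avg {mean(laps[car]):.2f}s, sd {stdev(laps[car]):.2f}s")

# The car to pick when challenging colleagues to a race:
print("fastest:", ranking[0])
```

With more race data, the same ranking could weight consistency (low standard deviation) as well as raw average speed.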
As we’ve been running this for a few months now, and the use cases and applications of this technology grow, we’re getting more and more data. The adage that data creates data is certainly true here.
Ultimately, we’ll dump this data into Hadoop and perform some discovery, perhaps to understand how the track changes during the day (dust/dirt/use) etc. We’d like to get some temperature data from the Pi to understand if that has any effect on the car performances, and perhaps we’ll have enough data for us to be able to analyse the perfect lap, and replay it using the Anki SDK.
We’re planning a number of hackathons locally with this kit, and we’ll see what other innovations we can highlight.
A big shout out to the technical guy behind sniffing and “translating” the data. The data is not exposed by the SDK and was by no means trivial to map, but it has allowed us to get something meaningful and put it into action.
At first I was skeptical. I was perfectly happy with my iPad Air and the Pro seemed too big and too expensive. Six months later I wouldn’t dream of going back. The iPad Pro has become my primary computing device.
Does the Pro eliminate the need for a laptop or desktop? Almost, but for me not quite yet. I still need my Mac Air for NodeBox coding and a few other things; since they are both exactly the same size I now carry them together in a messenger bag.
The Pro is lighter than it looks and, with a little practice, balances easily on my lap. It fits perfectly on an airplane tray table.
Does the 12.9-inch screen really make that much of a difference? Yes! The effect is surprising; after all, it’s the same size as an ordinary laptop screen. But there is something addictive about holding large, high-resolution photos and videos in your hands. I *much* prefer photo editing on the iPad. 3D flyovers in Apple Maps are almost like being there.
The extra screen real estate also makes iOS 9’s split screen feature much more practical. Above is a screenshot of me editing a webpage using Coda. By splitting the screen with Safari, I can update code and instantly see the results as I go.
Enterprise users can see more numbers and charts at once. Bloomberg Professional uses the picture-in-picture feature to let you watch the news while perusing a large portfolio display. WunderStation makes dashboards big enough to get lost in.
For web conferences, a major part of my working life at Oracle, the iPad Pro both exceeds and falls short. The participant experience is superb. When others are presenting screenshots I can lean back in my chair and pinch-zoom to see details I would sometimes miss on my desktop. When videoconferencing I can easily adjust the camera or flip it to point at a whiteboard.
But my options for presenting content from the iPad are still limited. I can present images, but cannot easily pull content from inside other apps. (Zoom lets you share web pages and cloud content on Box, Dropbox or Google Drive, but we are supposed to keep sensitive data inside our firewall.) The one-app-at-a-time iOS model becomes a nuisance in situations like this. Until this limitation is overcome I don’t see desktops and laptops on the endangered species list.
The iPad Pro offers two accessories not available with a normal iPad: a “smart keyboard” that uses the new magnetic connector, and the deceptively simple Apple Pencil.
I tried the keyboard and threw it back. It was perfectly fine but I’m just not a keyboard guy. This may seem odd for someone who spends most of his time writing – I’m typing this blog on the iPad right now – but I have a theory about this that may explain who will adopt tablets in the workplace and how they will be used.
I think there are two types of workers: those who sit bolt upright at their desks and those who slump as close to horizontal as they can get; I am a slumper. And there are two kinds of typists: touch typists who type with their fingers and hunt-and-peckers who type with their eyes; I am a, uh, hunter. This places me squarely in the slumper-hunter quadrant.
Slumper-hunters like me love love love tablets and don’t need no stinking keyboards. The virtual keyboard offers a word tray that guesses my words before I do, lets me slide two fingers across the keyboard to precisely reposition the cursor, and has a dictate button that works surprisingly well.
Touch-slumpers are torn: they love tablets but can’t abide typing on glass; for them the smart keyboard – hard to use while slumping – is an imperfect compromise. Upright-hunters could go either way on the keyboard but may not see the point in using a tablet in the first place. Upright-touchers will insist on the smart keyboard and will not use a tablet without one.
If you are an artist, or even just an inveterate doodler, you must immediately hock your Wacom tablet, toss your other high-end styli, and buy the Apple Pencil (with the full-sized Pro as an accessory). It’s the first stylus that actually works. No more circles with dents and monkey-with-big-stick writing. Your doodles will look natural and your signature will be picture perfect.
The above drawing was done in under sixty seconds by my colleague Anna Budovsky. She had never used the iPad Pro before, had never used the app (Paper), and had never before picked up an Apple Pencil. For someone with talent, the Apple Pencil is a natural.
If you are not an artist you can probably skip the Pencil. It’s a bit of a nuisance to pack around and needs recharging once a week (fast and easy but still a nuisance). I carry one anyway just so I can pretend I’m an artist.
For now the iPad Pro is just a big iPad (and the new Pro isn’t even big). Most apps don’t treat it any differently yet and some older apps still don’t even fully support it. But I am seeing some early signs this may be starting to change.
The iPad Pro has one other advantage: processing power. Normal iPad apps don’t really need it (except to keep up with the hi-res screen). Some new apps, though, are being written specifically for the Pro and are taking things to a new level.
Zooming into infinitely complex fractals is not a business application, but it sure is a test of raw processing power. I’ve been exploring fractals since the eighties and have never seen anything remotely as smooth and deep and effortless as Frax HD. Pinch-zooming forever and changing color schemes with a swirl of your hand is a jaw-dropping experience.
The emerging class of mobile CAD apps, like Shapr3D, are more useful but no less stunning. You would think a CAD app would need not just a desktop machine but also a keyboard on steroids and a 3D mouse. Shapr3D uses the Apple Pencil in ingenious ways to replace all that.
Sketch curves and lines with ease and then press down (with a satisfying click) to make inflection points. Wiggle the pencil to change modes (sounds crazy but it works). Use the pencil for drawing and your fingers for stretching – Shapr3D keeps up without faltering. I made the strange but complicated contraption above in my first session with almost no instruction – and had fun doing it.
I hesitate to make any predictions about the transition to tablets in the workplace. But I would recommend keeping an eye on the iPad Pro – it may be a sleeping giant.
Hi there, remember me? Wow, April was a busy month for us, and looking ahead, it’s getting busy again.
Busy is good, and also good is the emergence of new voices here at the ‘Lab. They’ve done a great job holding down the fort. Since my last post in late March, you’ve heard from Raymond (@yuhuaxie), Os (@vaini11a), Tawny (@iheartthannie), Ben (@goldenmean1618) and Mark (@mvilrokx).
Because it’s been a while, here comes an update post on what we’ve been doing, what we’re going to be doing in the near future, and some nuggets you might have missed.
What we’ve been doing
Conference season, like tax season in the US, consumes the Spring. April kicked off for me at Oracle HCM World in Chicago, where Aylin (@aylinuysal) and I had a great session. We showed a couple of our cool voice demos, powered by Noel’s (@noelportugal) favorite gadget, the Amazon Echo, and the audience was visibly impressed.
— Gozel Aamoth (@gozelaamoth) April 7, 2016
I like that picture. Looks like I’m wearing the Echo as a tie.
Collaborate 16 was next, where Ben and Tawny collected VR research and ran a focus group on bots. VR is still very much a niche technology. Many Collaborate attendees hadn’t even heard of VR at all and were eager to take the Samsung Gear VR for a test drive.
During the bots focus group, Ben and Tawny tried out some new methods, like Business Origami, which fostered some really interesting ideas among the group.
— The AppsLab (@theappslab) April 12, 2016
Next, Ben headed out directly for the annual Oracle Benelux User Group (OBUG) conference in Arnhem to do more VR research. Our research needs to include international participants, and Ben found more of the same reactions we’ve seen Stateside. With something as new and different as VR, we cast a wide net to get as many perspectives and collect as much data as possible before moving forward with the project.
Oracle Modern Customer Experience was next for us, where we showed several of our demos to a group of students from the Lee Business School at UNLV (@lbsunlv), who then talked about those demos and a range of other topics in a panel session hosted by Rebecca Wettemann (@rebeccawettemann) of Nucleus Research.
— Geet (@geet_s) April 28, 2016
The feedback we got on our demos was very interesting. These students belong to a demographic we don’t typically get to hear from, and their commentary gave me some lightning bolts of insight that will be valuable to our work.
As with VR, some of the demos we showed were on devices they had not seen or used yet, and it’s always nice to see someone enjoy a device or demo that has become old hat to me.
Because we live and breathe emerging technologies, we tend to get jaded about new devices far too quickly. So, a reset is always welcome.
What we’re going to be doing in the near future
— AMIS, Oracle & Java (@AMISnl) May 9, 2016
Then, June 2-3, we’re returning to the Netherlands to attend and support AMIS 25. The event celebrates the 25th anniversary of AMIS (@AMISnl), and they’ve decided to throw an awesome conference at what sounds like a sweet venue, “Hangaar 2” at the former military airport Valkenburg in Katwijk outside Amsterdam.
Our GVP, Jeremy Ashley (@jrwashley) will be speaking, as will Mark. Noel will be showing the Smart Office, Mark will be showing his Developer Experience (DX) tools, and Tawny will be conducting some VR research, all in the Experience Zone.
I’ve really enjoyed collaborating with AMIS in the past, and I’m very excited for this conference/celebration.
After a brief stint at home, we’re on the road again in late June for Kscope16, which is both an awesome conference and happily, the last show of the conference year. OpenWorld doesn’t count.
We have big fun plans this year, as always, so stay tuned for details.
Stuff you might have missed
Finally, here are some interesting tidbits I collected in my absence from blogging.
- The bots are coming! We love bots, and in October, we’re partnering with the Apps UX Innovation team to run an internal bots-focused hackathon.
- New ways of input still on the verge of the enterprise. Over on VoX, you can read about our work in voice and gesture input and how these technologies are shaping future experiences.
- Smart user experiences: Machine learning and the future of enterprise applications. Check out what Bill has to say about how “smart” experiences are shaping our thinking.
Last week several of my colleagues and I had the privilege of attending the Samsung Developers Conference (SDC) in San Francisco. It was the 5th time Samsung has organized a developers conference in San Francisco but the first time I attended, although some in our party had been to previous editions, so I had some idea of what to expect. Here are some impressions and thoughts on the conference.
After an hour walking around, my first thought was: is there anything Samsung doesn’t have a hand in? I knew of course that they produce smartphones, tablets, smartwatches, and TVs, and I’ve seen a laptop here and there, but vacuum cleaners, air conditioning units, and ranges? Semiconductors (did you know there are Samsung chips inside the iPhone?), smart fridges and security cameras, and now VR gear and IoT. Pretty crazy. Interestingly enough, I think this smorgasbord of technology gives Samsung some distinct advantages over more focused companies (like, say, Apple); more on that later.
As with all of these events, Samsung’s motivation for organizing this conference is of course not entirely altruistic; as I mentioned in the intro, they have a huge hardware footprint and almost all of that needs software, which gets developed by … developers.
They need to attract outside developers to their platforms to make them interesting for potential buyers; I mean, what would the iPhone be without apps? There is nothing wrong with that, and it’s one of the reasons we have Oracle OpenWorld, but I thought the sessions on the “Innovation Track” were a bit light on technical details (at least the ones I attended).
In fact, I feel some of them wouldn’t have been out of place in a “Marketing Track.” To be fair, I didn’t get to attend any of the hands-on sessions on day zero; maybe they were more useful. But as a hard-core developer, I felt a bit… underwhelmed by the sessions.
That doesn’t mean though that the sessions were not interesting, probably none more so than “How to Put Magic in a Magical Product” by Moe Tanabian, Chief Design Officer at Samsung, which took us on a “design and technical journey to build an endearing home robot”, basically how they created this fella:
That is Otto, a personal assistant robot, similar to the Amazon Echo, except with a personality. Tanabian explained in the session how they went from idea and concept to production using a process remarkably similar to how we develop here at the AppsLab: fail fast, iterate quickly, get it in front of users as quickly as possible, measure, etc. I just wish we had the same hardware tooling available as they do (apparently they used what I can only imagine are very expensive 3D printers to produce the end result).
Samsung also seems to be making a big push in the IoT space, and for good reason. The IoTivity project is a joint open source connectivity framework, sponsored by the Open Interconnect Consortium (OIC), of which Samsung is a member, and one of the sessions I attended was about this project.
The whole Samsung ARTIK IoT platform supports this standard, which should make it easy and secure to discover and connect ARTIK modules to each other. The question, as always, is: will other vendors adopt this standard so you can do this cross-vendor, i.e. have my ESP8266s talk to an ARTIK module, which then talks to a Particle and my Philips Hue lights, etc.?
Without this, such a new standard is fairly useless and just adds to the confusion.
As mentioned in the intro though, because Samsung makes pretty much everything, they could start by enabling all their own “things” to talk to each other over the internet. Their smart fridge could then command their robotic vacuums to clean up the milk that just got spilled in the kitchen. The range could check what is in the fridge and suggest what’s for dinner. ARTIK modules can then be used as customizations and extensions for the few things that are not built by Samsung (like IoT Nerf guns :-), all tied together by Otto, which can relay information to and from the users.
This is an advantage they have over e.g. Google (with Brillo) or Apple (with HomeKit) who have to ask hardware vendors to implement their standard; Samsung has both hardware and the IoT platform, no need for an outside party, at least to get started.
Personally, I’m hoping that in the near future I get to experiment with some of the ARTIK modules; they look pretty cool!
And then of course there was VR: VR Gears, VR vendors, VR cameras, even a VR rollercoaster ride (which I tried and which of course made me sick, same as the Oculus Rift demo at UKOUG last year); maybe I’m just not cut out for VR. One of the giveaways was actually a Gear 360 camera, which lets you take 360-degree footage that you can then experience using the Gear VR, nicely tying up the whole Samsung VR experience.
All in all it was a great conference with cool technology showing off Samsung’s commitment to VR and IoT.
Oh, and I got to meet Flo Rida at an AMA session 🙂
VR was the big thing at the Samsung Developer Conference, and one of the points that got driven across, both in the keynotes and in other talks throughout the day, was that VR is a fundamentally new medium—something we haven’t seen since the motion picture.
Injong Rhee, the executive VP of R&D for Software and Services, laid out some of VR’s main application areas: Gaming, Sports, Travel, Education, Theme Parks, Animation, Music, and Real Estate. Nothing too new here, but it is a good summary of the major use cases, and they echo what we’ve heard in our own research.
He also mentioned some of the biggest areas needing innovation: weight, dizziness, image quality, insufficient computing power, restricted mobility, and limited input control. Anyone who’s tried the Gear VR and had to use the control pad on the side of the visor can agree it’s not ideal for long periods of time. And while some VR apps leave me and others with no nausea at all, other apps, where you’re moving around and stepping up and down, can certainly cause some discomfort. I’m curious to see how some of those problems of basic human physiology can be overcome.
A fascinating session after the keynote was with Brett Leonard, who many years ago directed Lawnmower Man, a cautionary tale about VR which, despite the bleak dystopian possibilities it portrayed, inspired many of today’s VR pioneers. Leonard appeared with his brother Greg, a composer, and Frank Serafine, the Oscar-winning sound designer who did the sound for Lawnmower Man.
Brett, Greg, and Frank made a solid case for VR as a new medium that has yet to be even partially explored, one that will surely develop a plethora of new conventions for storytellers to work with. We’ve become familiar with many aspects of the language of film, such as action that happens off screen but is implied. But with the 360-degree experience of VR, there’s no longer that same framing of shots, or anything happening off screen. The viewer chooses where to look.
Brett also listed his five laws of VR, which cover some of his concerns, given that it is a powerful medium that could have real consequences for people’s minds and physiology, particularly those of developing children. His laws, heavily paraphrased, are:
- Take it seriously.
- VR should promote interconnecting with humanity, not further reinforcing all the walls we already have, and that technology so far has helped to create.
- VR is its own reality.
- VR should be a safe space—there are a huge number of innovations possible, things we haven’t been able to consider before. This may be especially so for medical and psychological treatments.
- VR is the medium of the global human.
Another interesting part of the talk was about true 360-degree sound, which Serafine said hadn’t really been done well before, but with the upcoming Dolby Atmos theaters, finally has.
Good 360-degree sound, not just stereo like we’re used to, will be a big part of VR feeling increasingly real, and will pose a challenge for VR storytelling, because it means recording becomes more complex, and consequently editing and mixing.
Samsung also announced their effort for the connected car, with a device that looks a lot like the Automatic (previously blogged about here) or the Mojio. It will offer all the features of those other devices—driving feedback that can become a driver score (measuring hard braking, fast accelerating, hard turns, and the like), as well as an LTE connection that allows it to stay connected all the time and serve as a WiFi hotspot. But Samsung adds a little more interest to the game with vendor collaborations, like with Fiat, where you can unlock the car, or open the trunk from your app. This can’t currently be done with other devices.
It should come out later this year, and will also have a fleet offering, which should appeal to enterprise companies. If they add more of these exclusive offerings through Samsung’s relationships with various vendors, maybe it will do better than its competitors.
After a whirlwind day at Modern CX, I hurried my way back up to San Francisco for the last day of the Samsung Developers Conference 2016. The morning started out exciting with a giveaway gift of the Samsung Gear 360 Camera.
— Tawny (@iheartthannie) April 30, 2016
Raymond (@yuhuaxie) took a bunch of photos with it and found it very convenient to get a stitched 360 photo with one click of a button. Previously in the making of our virtual gadget lab, he had to use an Android phone camera to capture 20+ shots before stitching the photos together to produce one spherical photo.
The automatic stitching is seamless at a glance, but you can still tell where the stitching happened by looking more closely.
The quality of photos taken with the Gear 360 still leaves something to be desired. The door frames and structural beams of my house all appear curved, the depth of the photos looks very shallow, etc. Maybe it is the fish-eye lenses that lead to the lack of depth and the distortion outside the center of focus. This distortion can be avoided with subjects and objects that are a few meters from the camera, or by using more cameras in a high-end rig, such as Project Beyond.
A Secure & Connected Future
- Security – Knox, which delivers mobile enterprise security, and Samsung Pay. Samsung Pay uses MST and NFC to make mobile payment “simple, secure, virtually anywhere.” A group of panelists from MasterCard, Visa and American Express expressed that mobile payments need to be as easy as (or easier than) pulling out your credit card, and Samsung Pay’s MST and NFC enable a “frictionless” experience.
- The internet of things – Currently there is a fragmented ecosystem of connected devices and manufacturers that needs to be democratized. In a keynote, Curtis Sasaki pushed the idea of making connections, not silos. The ARTIK chip is one way to exchange open data amongst devices that originally were not designed to work together.
Sensors can be used to provide a variety of information and status updates, and to start actions. With ARTIK, we were able to meet Otto, Samsung’s adorable smart home robotic personal assistant, who was methodically turning off lights and taking pictures. Otto is not a consumer-ready product, but it functions much like Amazon’s Alexa while also hosting an HD camera and display. This offers an opportunity to test image and face recognition in home environments.
- The connected car is launching at the end of this year. A new dongle gives owners of old cars an LTE connection. Samsung Connect Auto uses real-time alerts to help consumers improve their driving behavior while offering a Wi-Fi hotspot to create a multimedia center for the car.
There is also a smart TV, and a connected fridge that allows you to identify missing ingredients, compose a grocery list and order groceries using the doors of the fridge.
- Smart healthcare – This is about empowerment, connectivity and health data security. There was local intelligence for health monitoring with the Samsung Simband, and a virtual reality relaxation pod provided by Cigna Healthcare. Simband is a cloud-based health analytics service that can collect health data from wearables and health monitors.
- Virtual reality – The 4D experience was highlighted by Escape Room VR, an amazing virtual experience where you can touch and move real objects in virtual reality!
- And of course, there was the obligatory roller coaster experience.
— Tawny (@iheartthannie) April 30, 2016
I managed to catch the tail-end of the inspiration keynote which featured a panel of 4 women change-makers from Baobab Studios, Cisco, Intel and Summit Public Schools chatting about how we can change the world and make history.
One of the common themes was that innovation comes from anywhere in the organization and from users, whether that be the lab or another person’s scratchpad.
Innovation is questioning the status quo. You have to reject “what is,” and THAT is hard. Your moral obligation to make a better quality world should be your guiding principle. You have to make IT happen by putting in the time. Challenge the way the world is now and bring people into the dialogue, even if they do not want to be there.
There will always be a constant steady heartbeat of new technology. To build meaningful tools, we need to ask new questions:
- What would people do with our technology, not what are we doing with it?
- What do people care about? What are they passionate about?
- What do people find frustrating?
- What role could technology play in their lives?
We need to bring sensibility to our future tech. Our future objects will not make sense if they don’t make us happy or more efficient. We need to focus on telling a future story that is not about how we are seduced by technology; instead, we tell a story about what it will mean for all of us if we do use it.
Overall, we had a lot of fun and learned a lot about what technologies are currently available, where the industry is headed and what emerging technologies are around the corner. Our ideas flourish being surrounded by fellow developers, designers, thinkers and makers.
Last week, we were back in Las Vegas again for the Oracle Modern Customer Experience Conference! Instead of talking to customers and partners, we had the honor of chatting with UNLV Lee graduate students (@lbsunlv) and getting their feedback on how we envision the future of work, customer experience, marketing and data security.
We started off with Noel (@noelportugal) showing the portable Smart Office demo, including the Smart Badge, that we debuted at OpenWorld in October, followed by a breakout session for the graduates to experience Glance and Virtual Reality at their own leisure.
The event was a hit! The two-hour session flew by quickly. The same group of graduates who came in for the demos at the start of our session left only at the very last minute, when we had to close down.
Experiencing these demos led into some exciting discussions the following day between the UNLV Lee Business School panelists and Rebecca Wettemann (@rebeccawettemann) from Nucleus Research (@NucleusResearch) on the future of work:
- How will sales, marketing, customer service, and commerce change for the next generation?
- What does the next generation expect from their employers?
- Are current employers truly modern and using the latest technology solutions?
— Erin Killian Evers (@keversca) April 28, 2016
— Gozel Aamoth (@gozelaamoth) April 28, 2016
While all of this was going on, a few of the developers and I were at the Samsung Developers Conference in SF discussing how we could build a more connected future. More on that in coming posts!
As part of our push to do more international research, I hopped over to Europe to show some customers VR and gather their impressions and thoughts on use cases. This time it was at OBUG, the Oracle Benelux User Group, which was held in Arnhem, a refreshing city along the Rhine.
Given that VR is one of the big technologies of 2016, and is poised to play a major role in the future of user experience, we want to know how our users would like to use VR to help them in their jobs. But first we just need to know what they think about VR after actually using it.
The week prior, Tawny and I showed some VR demos to customers and fellow Oracle employees at Collaborate in Las Vegas, taking them to the arctic to see whales and other denizens of the deep (link) and for the few with some extra time, defusing some bombs in the collaborative game “Keep Talking and Nobody Explodes” (game; Raymond’s blog post from GDC).
The reaction to the underwater scenes is now predictable: pretty much everyone loves it, just some more than others. There’s a sense of wonder, of amazement that the technology has progressed to this point, and that it’s all done with a smartphone. Several people have reached out to try to touch the sea creatures swimming by, only to realize they’ve been tricked.
Our European customers are no different than the ones we met at Collaborate, with similar ideas of how it could be used in their businesses.
It’s certainly a new technology, and we’ll continue to seek out use cases, while thinking up our own. In the meantime, VR is lots of fun.
Last week, Ben (@goldenmean1618) and I were in Las Vegas for COLLABORATE. We ran two studies focused on two trending topics in tech: bots and virtual reality!
Bot Focus Group
— The AppsLab (@theappslab) April 12, 2016
Our timing for the bot study was perfect! The morning we were to run our focus group on bots in the workplace, Facebook launched its bot platform for Messenger. They are not the only ones with a platform; Microsoft, Telegram and Slack have their own platforms too.
The goal of our focus group was to generate ideas on useful bots in the workplace. This can range from the concierge bot that Facebook has to workflow bots that Slack has. To generate as many ideas as we could, without groupthink, we had everyone silently write down their ideas using the “I WANT [PAT] TO…SO I CAN…” Tower of Want framework I stumbled upon at the GDC16 conference last March.
Not only do you distill the participants’ motivations, intents and needs, but you also acquire soft goals to guide the bot’s development. Algorithms are extremely literal. The Harvard Business Review notes how social media sites were once “quickly filled with superficial and offensive material.”
The algorithm was simple: find the articles with the most clicks and feed them to the users. Somewhere, the goal of QUALITY highly engaged articles was lost to highly engaged articles at the expense of QUALITY. Intention is everything.
“Algorithms don’t understand trade-offs; they pursue objectives single-mindedly.”
Soft goals are in place to steer a bot away from unintended actions.
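The clicks-only objective described above is easy to see in a toy example. The articles and the 0–1 quality field below are invented for illustration; the point is only how adding a quality signal to the objective changes what wins:

```javascript
// Invented example data: engagement (clicks) vs. an editorial quality score.
const articles = [
  { title: "Outrage bait", clicks: 9000, quality: 0.2 },
  { title: "In-depth report", clicks: 3000, quality: 0.9 },
  { title: "Listicle", clicks: 7000, quality: 0.3 },
];

// Single-minded objective: rank purely by clicks, the literal
// "most clicks" algorithm. Quality is invisible to it.
const byClicks = [...articles].sort((a, b) => b.clicks - a.clicks);

// Soft goal added: blend engagement with the quality signal so the
// objective reflects the intent (quality AND engagement).
const blended = [...articles].sort(
  (a, b) => b.clicks * b.quality - a.clicks * a.quality
);

console.log(byClicks[0].title); // "Outrage bait"
console.log(blended[0].title);  // "In-depth report"
```

The blending function here is deliberately naive; the takeaway is that the soft goal has to live inside the objective, or the algorithm will never pursue it.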
After the ideas were generated and shared, we had them place their bot tasks on a pain/frequency chart: How painful is this task for you to do? and How frequently do you need to do this task?
— The AppsLab (@theappslab) April 12, 2016
Then it was time for the business origami! Business Origami is similar to a task flow analysis that uses folded paper cutouts as memory nudges. We now have our bot tasks, but we do not know (a) what triggers the task, (b) what the bot needs to know to do its job and (c) what the desired output is. We modified the Business Origami activity with the inputs and outputs that a Resource Flow activity demands.
Before our customers created their own flows based on their best bot task idea, we did a group warm-up. The flow below illustrates the flow of scheduling and booking meeting rooms. Everyone was involved as they talked about the myriad ways that could trigger the act of scheduling a meeting, the mediums of communication used, what they would need to know in order to schedule it, and what feedback is needed when the task is done.
— The AppsLab (@theappslab) April 12, 2016
Virtual Reality Guerrilla Test
For 3 days, Ben and I ran a guerrilla study to get customers’ and partners’ thoughts on VR and where they might find it useful in their work/industry.
— The AppsLab (@theappslab) April 12, 2016
Our customers experienced virtual reality through the Samsung Gear VR. It relies on our Samsung Note 5 to deliver the immersive experience.
Because of the makeup of our audience at the demo pod, we had to ensure that our study took approximately 5 minutes. We had 2 experiences to show them: an underwater adventure with a blue whale in the Arctic Ocean (theBlu) and the heart-pounding task of defusing a bomb (Keep Talking and Nobody Explodes).
Everyone really wanted to reach out and touch the sea animals. Two reached out, accidentally touched Ben and me, and freaked out at how realistic the experience was! Another case for haptic gloves? 🙂
One of our participants had tears in her eyes after she experienced TheBlu Arctic while another participant wanted to play 3+ games of Keep Talking and Nobody Explodes!
Overall, no one felt nauseous. The game controls came easily to those who had experience playing Xbox and PlayStation, while others were able to learn through the gamepad tutorial. PlayStation VR makes learning even easier for newcomers since you can see a ghostlike view of your gamepad in VR.
Mostly, our participants confirmed use cases that we found in our first VR study at Modern Supply Chain back in January 2016. We ran 20 participants that month in an onsite guerrilla study. We ran all the participants through 2 applications in a 30-minute session: swimming with dolphins in Ocean Rift and exploring a car show in Relay Cars.
One of our participants had a fear of being underwater. Even though she felt a bit nauseous, she did not want to take the headset off!
The tutorial was a breeze to get through. Unlike Ocean Rift, where you need to navigate and swim by using the trackpad on the headset, Relay Cars used gaze control for selection. This means that looking at a navigation button for 2-3 seconds makes an automatic selection for you, without having to reach for the trackpad.
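The dwell-based gaze selection described above boils down to a small timer: keep looking at the same target long enough and it fires. This is a minimal sketch of that logic, not Relay Cars’ actual implementation; time is passed in explicitly so the logic is testable, where a real headset would feed it from the render loop:

```javascript
// Create a gaze selector with a dwell threshold (2.5 s here, matching
// the 2-3 second behavior described above).
function createGazeSelector(dwellMs = 2500) {
  let target = null; // what the user is currently looking at
  let since = 0;     // timestamp when gaze landed on that target

  // Call every frame with the currently gazed-at element (or null)
  // and the current time in ms. Returns the selected target once the
  // dwell threshold is crossed, otherwise null.
  return function update(gazedAt, now) {
    if (gazedAt !== target) {
      target = gazedAt; // gaze moved: restart the dwell timer
      since = now;
      return null;
    }
    if (target !== null && now - since >= dwellMs) {
      const selected = target;
      target = null;    // reset so we don't re-fire immediately
      return selected;
    }
    return null;
  };
}

// Example: stare at a hypothetical "menuButton" for 2.5 seconds.
const update = createGazeSelector(2500);
update("menuButton", 0);                 // timer starts
update("menuButton", 1000);              // still dwelling
console.log(update("menuButton", 2500)); // "menuButton"
```

Frameworks like A-Frame offer gaze cursors with this behavior built in, but the state machine underneath is roughly this simple.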
The goal of our initial guerrilla VR study was to find out if people would actually wear a headset at work (the majority said yes) and what VR could be useful for (many, many ideas). We have since shortlisted those ideas and are developing a demo to come.
Austin: a beautiful city with a river crossing downtown, a music scene, a young population, cycling, brisket, and the home of SXSW, a big multicultural conference for all tastes: Film, Interactive and Music.
This was my first time attending the conference, but Noel (@noelportugal) is a year-after-year attendee. It’s well known that this conference is not only a trampoline for small companies and startups to show the whole world what they are cooking up, but also big exposure for new services, products, trends, you name it. That’s why we are very interested in this kind of conference, which aligns closely with our team’s spirit.
I mean it.
Since Google I/O 2014, I’ve been following the steps toward VR and AR. At that time, they released Google Cardboard, inexpensive goggles for visualizing VR content, and Project Tango for AR. Yes, I know you can argue VR has been around for quite a long time, but I believe they exposed the right development tools and a cheap way to develop and consume that technology, so a lot of people got engaged. However, some others remained very skeptical about use cases.
But now, after two years, guess what? VR is on everyone’s lips, and SXSW wasn’t an exception.
I have to say, I’m very impressed at how many companies have adopted this technology so fast. Of course, we all saw this wave coming with announcements of products like Oculus Rift, HTC Vive, Noon VR, Microsoft HoloLens and so on. And of course, as an emerging technology team, we were already prepared to be hit by the wave.
I still can’t get used to seeing people with a headset over their eyes and headphones on, 100% isolated from reality. I tried most of the VR demos presented, and my brain/body is still not prepared for many VR experiences; I had a headache and felt weird after so many demos.
Also, I could see people with red marks all around their faces from wearing the headset all day. Even so, this helped me analyze and conclude that pretty much all the demos follow the same use case: advertising and promoting products.
It’s really interesting that retail and product companies are investing in this technology to get more buyers and to better convey how it feels to hold their product. This can be applied, for example, to automobiles, houses, travel agencies, etc. The funny thing is that this technology is sometimes combined with motion for a complete experience.
Note: don’t ever try a selfie while wearing a VR headset, almost impossible 🙂
I would like to bring up a story from one of the panelists talking about VR that I found very interesting: his best friend’s rock band was in town for a concert while he was experimenting with VR. He suggested that they record one of his favorite songs in a way that could be post-produced to be seen in VR.
The band accepted, and he set up the recording production in front of the stage, while he remained backstage monitoring it all as they played his favorite song. Everything went fine, and although he missed the opportunity to see and hear his favorite song live, he could watch the VR video several times over the following couple of weeks.
Then, when they met again and his friend asked him about the concert, he could almost say that he had been virtually in the front row, enjoying the concert like everyone else, singing and jumping.
Examples like this are very impressive. They make our brains believe things that we didn’t actually live. In the end, that’s what we are doing to our brains: cheating them.
This could be good or bad, depending on the point of view and circumstances. But just think about health and medicine: helping people with Alzheimer’s recover lost thoughts and memories with VR. That’s huge.
Another common use case is training people or showing how to perform procedures. Imagine medical students being trained in surgical techniques, or how to react under stressful circumstances. Or how to work in risky areas like mines, radioactive plants or even places as simple as a warehouse.
VR is also being used as a medium of human expression, like painting in 3D.
Companies are also taking advantage of VR to sell products on top of it or to complement it, like this microphone to record 3D audio.
Speaking of AR, I saw this awesome concept that combines virtual objects with physical objects. Interactions between them are very natural; it’s possible to turn a physical object into a virtual object, and interaction is hands-free. Check out the video.
Also, we witnessed the beta release of the Metal Glass device, a combination of VR and AR. They claim that next year monitors won’t be necessary anymore, as VR + AR will replace them. Check out the video.
VR is the thing and we, as a team, are working to come up with cool use cases.
Internet of Things.
This is a hot topic too. We saw a lot of startups and companies offering products and services for office and home automation, security, etc.
Noel and I attended a couple of IoT workshops from companies and startups that are making all this hardware revolution very affordable and simple.
We are convinced that there are still a lot of use cases out there, and we’re going to continue investigating.
Also, a big concern in IoT for big companies is data privacy, as small companies are not paying much attention to it. IoT generates big data, which can be analyzed and reduced to analytics, behaviors, forecasts and needs. Just imagine: where is all the data collected from your Internet-connected fridge going? And your thermostat data? And your lights data? This is where regulations may help to protect your data and privacy.
Humanoids, robots, AI and machine learning.
This is a hot topic too, not as big as VR, but, believe me, it’s the next big thing. Apps cannot be the same as they were ~4 years ago; people’s needs have changed, and this has raised the bar for app developers. One way to reach that bar is with AI.
Users want to accomplish tasks quickly, with almost zero effort and interaction. Our team has raised the bar too, so that’s why I decided to attend the Machine Learning sessions. They were very basic and not too technical, but it was great to see that we are looking in the right direction.
It was very interesting to see the combination of all these concepts together, and companies investing in research to make futuristic concepts more affordable.
Here’s Noel losing to a robot at rock-paper-scissors:
And here’s a video of Pepper, a humanoid robot.
We saw all kinds of wearables: health tracking, pillow sensors, light-based eye therapy to recover from jet lag or insomnia, pet wearables that read your pet’s feelings, ergonomic wearables, gloves used as a digital interface controller that can interact with laptops, iPads, etc.
Sony unveiled a wearable device, currently referred to as N, that has a hands-free user interface and allows users to receive audio without having to put on a headset. It has a virtual assistant that helps answer questions. Also, a camera is positioned on one end of the device, so we can ask it via voice to take a photo.
At SXSW, there are a lot of things to do including talks, sessions, lounge demos, expo and workshops.
When I was putting together my schedule, I noticed that there were a lot of JS workshops, so I decided to join them. Most of them were self-paced and very introductory, but it was good to see new programming paradigms like React and functional programming come into JS.
Time to wrap up and say: until next year.
As a plus, Pi Day was celebrated during SXSW, and for that we saw a cool implementation using Raspberry Pis as a distributed system for search queries.
And that’s it. I love this kind of conference where you can observe where technology and new concepts are going. And more importantly, it can help us to innovate and improve what we do in our team.
This is 2016, and it seems this is the year of VR. Of course, we at the AppsLab can’t miss the beat!
While Osvaldo (@vaini11a) started to look into Unity-based VR capabilities and prototypes, I wanted to take a look at the WebVR-based approach. The prospect of delivering a VR experience in a browser, and over the web, suddenly makes VR so much more accessible – WebVR can be designed to work with or without a VR headset. In a sense it is an extension of the responsive web, adjusting to different renderers/viewers gracefully.
The first thing that came to my mind was to VR-enable some of our visualization demos, and I picked John’s Transforming Table for the first try. After a series of hacks, I got a half-baked, non-functional result; if we had the Oculus Rift, we could get it working. A-Frame is at a very early stage, and there is still a lot to be desired.
I realized I needed a perspective change – instead of fitting the existing presentation and behavior into VR, WebVR/A-Frame is better suited to create a new presentation and behavior that blends with VR naturally.
Jake (@jkuramot) and Noel (@noelportugal) had just come back from a trip to Sydney, and told us a story about someone following our team and reading our posts from far, far away – the other side of the planet 🙂
And wouldn’t it be nice if they could see our Gadget Lab too?
So after a couple of days, here it is – our Gadget Lab in VR.
You may step into the Gadget Lab, and step out to the Oracle campus. While you are in the virtual Gadget Lab, you can see several stations I have labelled to show some human-machine interfaces we are investigating.
I don’t have a fancy stereoscopic 360 camera rig, so I just used an old Android phone to produce the scenes of the Oracle campus and the Gadget Lab interior. The phone is a little bit under-powered, so you may see some blanked-out areas, because the phone would reboot when I tried to capture the entire scene.
Let me know what you think in comments.
Editors’ update: To experience this virtual tour of our Gadget Lab, simply navigate to http://theappslab.com/vr/appslab in any browser. Even if you don’t have a VR headset, you can get a feel for the space and see a little bit about the demos.
In a desktop browser, you can use your mouse to look around the room. On a mobile browser, you can tilt your device to pan around the lab, and if you want the full VR experience, tap the glasses icon in the lower right-hand corner, and pop your phone into a compatible viewer, e.g. Google Cardboard.
The future is now.
Yesterday, I talked, in part, about how we can, if we choose, work all the time, from the moment we open our eyes, until we close them for sleep.
So today’s as good a Friday as any to remind you to balance all that work with some fun.
And here’s Anthony showing our Gadget Lab demos to some kids on Spring Break.
We make a conscious effort to take time out and show our fun stuff to kids. Why? Because fun is important.
Back in 2012, I pondered what I could show at Kscope12 that would spice up my presentation. I wanted to add something fun to my session, and after chatting with Noel, we settled on the Rock ’em Sock ’em robots controlled by text/phone as a fun way to keep people’s attention.
During the conference, I stumbled upon something. Turns out when you show people something fun, their creative juices flow, and you get lots of cool ideas. Like this:
I’m not an expert in every functional area of business. I may know some Financials from my years as an E-Business Suite product manager, but I don’t know much about sales or HR or supply chain.
So, when we have something we think will be important for users in the not-so-distant future, i.e. an emerging technology, we often build a fun demo to help those domain-specific use cases rise to the surface.
People play with the demo, they have some fun with it, and we talk about ways that particular technology could apply to their everyday work.
This works, believe it or not, and we’ve been repeating that formula with great results for several years. It’s part of our strategy.
For example, robot arms and race cars to investigate gesture as an input device, Internet of Things for when things happen and for connecting dumb things to the Internet, and using your mind to drive a robotic ball.
It’s become an annual team activity to come up with the year’s fun demo, and everyone loves the fun demos, building them, showing them, playing with them.
So have fun out there. We will.
Now, if only I could talk Noel into resurrecting those Rock ’em Sock ’em robots.
Earlier this month, our strategy and roadmap eBook was released. In it, you’ll find all the whys, wherefores, whats and hows that drive the Simplicity-Mobility-Extensibility design philosophy we follow for Oracle Cloud Applications.
The eBook is free, as in beer, and it’s a great resource if you find yourself wondering why we do what we do. Download it now.
In said (free) eBook, you’ll find this slide.
Guessing I’ve seen our fearless leader and GVP Jeremy Ashley (@jrwashley) present this slide 20-some times around the World, and each time he asks, “What’s the first thing you do in the morning?”
Inevitably, 90% of the audience says pick up my phone. He’ll then ask how many people in the audience have only one computing device, two, three or more? Overwhelmingly, audiences have three or more.
These are international audiences, so there’s no geographical bias.
I love this slide because it succinctly portrays the modern work experience, spent across devices, all day long. As Jeremy says, we have the ability to work from the moment we open our eyes to wake to the moment we close them for sleep.
You can debate whether that is a good thing or not, but the fact is our users are mobile and device-happy. They use whatever device fits their needs at any given time.
And devices keep changing. For instance, this slide had a head-mounted display glyph at one point to represent a Google Glass-like device, and the smartwatch looked like a Pebble, not an Apple Watch.
That’s where we (@theappslab) come in; we’re always reading the tea leaves, leaning into the future, trying to anticipate what users will want next so we can skate to where the puck will be.
Mixing metaphors is fun.
Anyway, download the free eBook and learn about the OAUX strategy and roadmap and keep reading here to see where we fit.
VR is big and is going to be really big for the game industry, and you could feel it in the air at GDC 2016. For the first time, GDC added two days of VR development-focused events and sessions, and most of the VR sessions were packed – the lines to the VR sessions were long even 30 minutes before the sessions, and many people were turned away. The venue for VR sessions had to be changed to double the capacity for day 2.
There was lots of interest and enthusiasm among game designers, developers and business guys, as VR represents a brand new direction, new category, and new genre for games!
It is still the dawn of VR games, with hardware, software, content, approaches, etc. starting to come together. Based on what I learned during GDC, I’d like to summarize the state of various aspects of VR development.
1. VR Headset
This is the first thing that comes to mind when we talk about VR, right? After all, the immersive experience begins when we cover our eyes with the VR headset. There are a couple of VR headsets available on the market, and a slew of VR headsets debuting very soon.
From the $10 Google Cardboard, to the $100 Samsung Gear VR, to a >$1000 custom rig, the price of a VR headset spans a wide spectrum, and so do capability and performance. Most people who want to get hold of VR will likely choose among the Samsung Gear VR, PlayStation VR, Oculus Rift, and HTC Vive. Here I will do a brief comparison so you have some idea of what you can get.
Samsung Gear VR
It uses specific Samsung phones to show VR content, so the performance is low as it is limited by the phone hardware, usually at 60fps. It has a built-in touchpad for input, but you may also use an optional gamepad. It has no wire to connect to PC, so you can spin around on a chair and not worry about tangling yourself. It has no position tracking.
If you own a Samsung S6/S7, or the Edge version, why not get the Gear VR to experience the magic? $99 seems really inexpensive for any new gadget. Even if you have a non-Samsung phone, you can still slip it into the rig and use the Gear VR as an advanced version of the Cardboard viewer. Of course, you will not have the control pad capability.
PlayStation VR
It uses the PS4 to run VR games, so it has real game-grade hardware to run VR content at 120fps, with consistently high performance. For inputs, it has a gamepad and tracked controllers, like holding a beacon with a light bulb. It has small-scale position tracking.
The unique part of PSVR is that it is supposed to be played alongside regular gamers on the TV screen, making it a party game in your living room. The person with the PSVR has the immersive feeling in the game, while others on the TV can fight with or play along with that person (a game character with a VR headset) in the game. If you have a PS4 at home, then shelling out another $399 seems reasonable for a decent experience of VR games. But you’ll have to wait until October 2016 to buy one, right before the holiday season.
Oculus Rift
This is expected to be a high-end VR headset, with games running on a powerful Oculus-ready computer. It will have very high performance, showing VR content at 120fps or higher. It has a wire connected to the computer, which limits how far you can spin around. It has small-scale position tracking too. It does not come cheap at $599, but you can get it pretty much now, in March.
HTC Vive
It is considered to be of even higher spec than the Oculus Rift. It requires a muscular PC, with a motion sensor and motion controllers attached, and it delivers very high performance for VR games. It has tracked hands for input, and provides room-scale position tracking, which is above everyone else. To designers and developers, this room-scale tracking capability may offer another dimension for experiments.
It costs $799, because it is high-end hardware and bundled with a bunch of bells and whistles. And you can expect to get it in April if you pre-order one now.
HoloLens is another interesting device for VR/AR. Also, rumor has it that Google is building a VR headset too – one that will be much more powerful than its Cardboard viewer.
2. Game Engine for VR
Recent trends indicate that game engine companies are making it easier (or free) for people to access game engine software and develop games on it. There were quite a number of sessions covering detailed topics on specific game engines, but based on my impressions, here is the list to try out.
Unity 5.3 by Unity Technologies – It has a free version (Personal Edition) with full features. I believe it is the most popular and widely used game engine, with cross-platform deployment to the full range of mobile, VR, desktop, Web, console and TV. Many of the alt.ctrl.GDC exhibits also utilized Unity to create games for custom controllers to interact with.
Unreal Engine 4 by Epic Games – It is a sophisticated game engine used to develop some AAA games. They also showcased two VR games, Bullet Train and Showdown. The graphics and visual effects look astonishing.
Lumberyard by Amazon – It is a new entry to the engine game, but it is free with full source, meaning you can tweak the engine if necessary. It would be a good choice for developing online games, with no need to worry about hosting a robust game backend. I guess that’s where Amazon wants to get its share of the game. It does not support VR yet, but will add such support very soon.
3. Capture Device
For many VR games, designers and developers create the virtual game world entirely with a game engine and other graphics software. But to show real-world events inside a VR world, you need a special video camera that can capture 360-degree, or spherical, photos and videos.
Most of us have probably never seen or used this type of camera, myself included, so I do not have strong opinions on them. I did use the native camera app on an Android device to capture spherical photos, but it was tedious to take the many shots and stitch them together.
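To see why stitching works at all, it helps to know the target format: most 360-degree photos are stored as equirectangular images, where yaw maps linearly to the x axis and pitch to the y axis. Here is a minimal Python sketch of that mapping (my own illustration; the function name and image size are made up, not from any camera vendor's API):

```python
def equirect_pixel(yaw_deg, pitch_deg, width, height):
    """Map a view direction to a pixel in an equirectangular panorama.

    yaw_deg runs -180..180 (longitude), pitch_deg runs -90..90 (latitude,
    +90 is straight up). Each maps linearly onto the image axes, which is
    exactly why a stitcher can paste each shot into its own rectangle.
    """
    x = (yaw_deg + 180.0) / 360.0 * (width - 1)
    y = (90.0 - pitch_deg) / 180.0 * (height - 1)
    return int(round(x)), int(round(y))

# Looking straight ahead lands at the center of a 4096x2048 panorama.
print(equirect_pixel(0, 0, 4096, 2048))    # -> (2048, 1024)
# Looking straight up lands on the top row.
print(equirect_pixel(0, 90, 4096, 2048))   # -> (2048, 0)
```

The stitcher's hard work is not this mapping but blending the overlapping shots seamlessly, which is why doing it by hand on a phone feels so laborious.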
A step further is stereoscopic video capture, which photographs the same scene from two slightly different angles to produce depth. These are high-end professional rigs, with many custom-built versions, and the price can easily go above $10k.
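The depth in stereoscopic footage falls out of a classic pinhole-camera relation: depth = focal length × baseline / disparity. A small Python sketch with hypothetical numbers (the function name and figures are mine, purely for illustration):

```python
def depth_from_disparity(focal_px, baseline_mm, disparity_px):
    """Classic stereo triangulation: the farther an object, the less it
    shifts (disparity) between the left and right images.

    focal_px:     focal length expressed in pixels
    baseline_mm:  distance between the two lenses, in millimeters
    disparity_px: horizontal shift of the same feature between images
    Returns depth in millimeters.
    """
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_mm / disparity_px

# A feature shifted 32 px, seen by lenses 64 mm apart (roughly human
# eye spacing) with a 1000 px focal length, sits 2 meters away.
print(depth_from_disparity(1000, 64, 32))  # -> 2000.0 (mm)
```

The same relation explains why the rigs are so exacting: tiny errors in lens spacing or alignment translate directly into depth errors.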
This area is still quite fluid, and it is not clear whether it will ever go mainstream. I hope consumer versions in a reasonable price range become available, so we can produce some VR videos too.
4. Convention and Best Practice
With fewer than 100 real VR game titles in total, people in the VR field are still figuring things out, and no clear conventions have yet surfaced for designers, developers and players.
In some sessions, VR game designers and developers shared the lessons they learned while producing their first several VR games: interaction patterns, reality trade-offs (representational, experiential, and interaction fidelity), and the fidelity contract in terms of physics rules, affordances, and narrative expectations. Audio (binaural audio in particular) and visual effects also help realize an immersive experience.
As more research accumulates in VR psychology and UX, we shall see these best practices converge, and conventions will emerge that put designers and players on the same page.
5. Area of Use
By far, games are the most natural fit for the VR experience, and the entire game industry is driving toward it. Cinematic VR will be another great fit: as ILM X Lab demonstrated with “Star Wars,” a viewer may “attach” to different characters to experience various viewpoints within the movie.
People are also exploring VR as a new way of storytelling in journalism, a new way to exercise (e.g. riding a stationary bike at the gym while feeling like you are driving a Humvee through a war zone), and a new way to teach, e.g. going inside a machine to look at the inner mechanism of an engine.
VR brings another avenue of artistic expression as a new art medium, challenges us to advance technology to a new frontier, and at the same time provides us with great opportunities.
Things are just getting started!
We are still in the early days of virtual reality. Just as in the early days of manned flight, this is a time of experimentation.
What do we wear on our heads? Helmets? Goggles? Contact lenses? Or do we simply walk into a cave or dome or tank? What do we wear or hold in our hands? Game controllers? Wands? Glowing microphones? Bracelets, armbands, and rings? Or do we just flap our arms in the breeze? Do we sit? Stand? Walk on a treadmill? Ride a bike? Or do we wander about bumping into furniture and each other?
As a person who prefers to go through life in a reclining position, most of these options seem like too much bother. I have a hard time imagining how VR could become ubiquitous in the enterprise if employees have to constantly pull on complicated headgear, or tether themselves to some contraption, or fight for access to an expensive VR cave. VR in the workplace must be ergonomic, safe, and easy to use even before you’ve had your morning coffee.
Lately I’ve been enjoying VR content, goggle-free, from the comfort of my La-Z-Boy, using an Apple TV app called Littlstar. Instead of craning my head back and forth, I just slide my thumb to and fro on the Apple remote. I can fly through the air and swim with the dolphins without working up a sweat or stepping on a cat.
To be clear: watching VR content on TV is NOT real VR. It’s nowhere near as immersive. But the content is the same and the experience is surprisingly good. Navigation is actually better: because it is effortless I am more inclined to keep looking around.
The Apple remote strikes me as the perfect VR controller. It is light as a feather, easy to hold, lets you pan and drag and click and zoom, and you can operate it blindfolded.
Watching VR content on TV also makes it easier to share. Small groups of people can navigate a virtual space together in comfort. One drawback: it’s fun to be the person “driving,” but abrupt movements can make everyone else a tad queasy.
What works in the living room might also work well at a desk – or in a meeting room. TVs are already replacing whiteboards and projection screens in many workplaces. And the central innovation of the fourth-generation Apple TV, the TV app, creates a marketplace in which new forms of group interaction can evolve. I expect there will be a whole class of enterprise TV apps someday.
For all these reasons, I have been pushing to create Apple TV app counterparts to the VR apps we are starting to build in the AppsLab. TV counterparts could make it easier to show prototypes in design meetings and customer demos. I feel validated by Tawny’s (@iheartthannie) report from GDC that Sony has adopted a similar philosophy.
Thanks to one of our talented developers, Os (@vaini11a), we already have one such prototype. It doesn’t do much yet; we are just figuring out how to display desktop screens in a VR environment. With goggles on I can use the VR app to spin from screen to screen in my office chair and look down at my feet to change settings. With the Apple TV counterpart app, I can do exactly the same thing without moving anything other than my thumb.
It’s still too early to predict how ubiquitous VR might become in the workplace or how we will interact with it. But TV apps, or something like them, may become one way to view virtual worlds in comfort.
Tawny (@iheartthannie) and I attended the 30th edition of GDC, the Game Developers Conference. As Tawny’s daily posts show, there were so many fun events, engaging demos, and interesting sessions that we simply could not cover them all. With 10 to 30 sessions running in any given time slot, I wished for multiple “virtual mes” to attend some of them simultaneously. With only one “real me,” I still managed to attend a large number of sessions, mostly 30-minute ones, to cover more topics at a faster pace.
Unlike Tawny’s posts, which give you in-depth looks into many of the sessions, I will summarize the information and takeaways in two posts: Part 1 – Event and Impression; Part 2 – The State of VR. This post covers the event overview and my general impressions.
1. Flash Backward
After two days of VR sessions, this flashback kicked off the game portion of GDC with a sense of nostalgia, flashing through games like Pac-Man and Minesweeper, evolving into console games, massively multiplayer games, social games (FarmVille), mobile games (Angry Birds), and on to VR games.
GDC has been running for 30 years, and many of the attendees were not even born when it started. The flashback began with Chris Crawford, the founder of GDC, and concluded with Palmer Luckey, the Oculus founder, who at 23 did not have much to flash back to, only the new generation of VR games to look forward to. He will be back in 20 years for the retrospective 🙂
2. Awards Ceremony
On March 16, 2016, two award ceremonies were held in recognition of creativity, artistry and technological genius: the Independent Games Festival Awards and the Game Developers Choice Awards. I believe they are the game industry’s equivalent of the Oscars, and they ran in the exact same format.
As you can see, the big winner of the night was “Her Story” (by Sam Barlow), which won 5 of its 6 nominations. It is an indie title, yet it took 3 of those awards while competing with big producers, because it created a fresh way of storytelling through a game. And “The Witcher 3: Wild Hunt” took the honor of Game of the Year. Gamers: check out the list, and check out the games if you have not played them.
The ceremony also honored Todd Howard with the Lifetime Achievement Award. He is a designer, developer, director and producer of award-winning titles such as “Oblivion,” “Fallout 3,” and “Skyrim.” Markus “Notch” Persson, the programmer behind Minecraft, took the Pioneer Award. Yeah!
3. alt.ctrl.GDC
As a maker myself at the AppsLab, I found the alt.ctrl.GDC interactive exhibits extremely satisfying – just some insane ideas of how controllers can be made for games.
I tried most of the controllers: licking popsicles to suck up planet resources in a game; mutating a controller to change the in-game object so it flies, swims or crawls; cranking handles to drive tanks.
“Keep Talking and Nobody Explodes” must have been one of the favorites at alt.ctrl.GDC 2015, and the mechanical box still stood out! It has since turned into a real game – nominated in three categories and winner of “Excellence in Design” at the IGF Awards ceremony. It is a fun game; check it out!
“Please Stand By” is my favorite at alt.ctrl.GDC 2016. What do you do when you find a vintage TV set in a junkyard? Well, it still has all its controls, even though they no longer work. After a bit of wizardry, it came back to life – I overheard the secrets, if you are ever intrigued to know how that was done.
Now it shows many channels of game TV; of course, you have to tune it carefully with all the knobs and rabbit ears. Oh, there are some buttons on the back too that do a few tricks. If it ever freezes on you, pound it or shake it, as you would an old TV set.
4. Game Making and Animation
This is too big a topic for one section of a blog post, so I will not go into details.
I just want to appreciate how much work and thought people put into making a game. As an example, just look at this one slide from the UFC 2 session:
That is just one grappling position change, and it branches into many permutations depending on how the players respond. Now imagine animating each of those permuted position changes. So in UFC 2, the technical team looked for procedural ways to automate some areas of the animation.
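The combinatorial blow-up is easy to see in code. With hypothetical numbers (the move lists below are invented for illustration, not taken from the UFC 2 talk), one position change where each fighter picks one of four responses already needs sixteen distinct animations:

```python
from itertools import product

# Invented example moves; every attacker/defender pairing is a distinct
# outcome that would traditionally need its own hand-made animation.
attacker_moves = ["advance", "posture_up", "strike", "submit"]
defender_moves = ["block", "sweep", "scramble", "stall"]

transitions = list(product(attacker_moves, defender_moves))
print(len(transitions))  # -> 16 variants for a single position change
```

Multiply that by hundreds of position changes and the appeal of procedural animation becomes obvious.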
Of course, there are many other aspects of game making, as indicated by the many award categories. In addition to the creative side, there is the technological side: running massive online games, or supporting all manner of devices.
As much as technological advances drive game development, game making drives technological advances! People are pushing the edge of the envelope to make the next generation of games in VR. Speaking of VR, stay tuned for my next post on “The State of VR.”
I’ve been doing this job for various different organizations at Oracle for nine years now, and we’ve always existed on the fringe. So, having our own home for content within the Oracle.com world is a major deal, further underlining Oracle’s increased investment in and emphasis on innovation.
Today, I’m excited to launch new content in that space, which, for the record is here:
We have a friendly, short URL too:
The new content focuses on the methodologies we use for research, design and development. You can read about why we investigate emerging technologies and the strategy we employ, and then find out how we go about executing that strategy, which can be difficult for emerging technologies.
Sometimes there are no users yet, making standard research tactics a challenge. Equally challenging is designing an experience from scratch for those non-existent users. And finally, building something quickly requires agility, lots of iterations, and practice.
All-in-all, I’m very happy with the content, and I hope you find it interesting.
The IoT Smart Office happens to be the first project we undertook as an expanded team in late 2014, and we’re all very pleased with the results of our blended research, design and development team.
I hope you agree.
In the coming months, we’ll be adding more content to that space so stay tuned.