The Twilio Signal Conference ended with an after-party called $Bash night. Twilio set up booths with geeky games like programming challenges, program debugging, computer building, etc. They also had a foosball table for 16 people. I think it is one of the nicest parties for geeks I have attended so far. It was a fun night with music, drinks, food, and games, all tuned for developers.
During that morning’s keynote, Jeff Lawson (Twilio founder) had a virtual meeting with Rony Abovitz (Magic Leap founder), and they announced that the winner of $Bash night would get access to Magic Leap. Magic Leap is so mysterious that I had a great urge to win $Bash night just to be able to play with it and build something.
It turned out that by competing with other developers during $Bash night you could win raffle tickets, and the person with the most raffle tickets by the end of the night would be the winner. So all night I went all out, playing and competing. The venue was too dark to take good-quality pictures, but you can find some info here.
There were two games I did quite well in and enjoyed: a program-debugging competition among six developers, and pairing up to move Jenga blocks with a robot arm. At the end of the night, although I tried my best, I came in second. At first I was quite disappointed; however, I was told there is still a very good chance of a second Magic Leap spot for me. I shall keep my hopes up and wait and see.
Let’s dive into the Twilio sessions.
The sessions were generally divided into the following four tracks:
See the latest progress in software and cloud communications, talk shop with the Twilio engineers who developed it, and get into the details of how to use the software.
Hear from industry experts shaping the future of tech with the latest software.
Get details on hurdles, tricks, and solutions from Twilio customers on building communications with software APIs.
Define business plans for modern communications with real-life ROI and before-and-after stories.
My interest was mostly in the Inspire track, and with AI and virtual assistants being the hot topic nowadays, those were the sessions I targeted at the conference.
This half year has been the “half year of virtual assistants,” with the announcements of the controversial Tay and Cortana from Microsoft, Messenger bots from Facebook, Allo from Google I/O, and Siri opening up at WWDC yesterday. Every giant wants to squeeze into the same space and get a share of it. There were a lot of sessions about bots at Signal, and I had the feeling that Twilio hand-picked the sessions carefully to suit the audience. IBM, Microsoft, and Slack all presented their views and technologies around bots, and I learned a lot from them. It is a bit odd that api.ai sponsored the lunch and had a booth at the conference, but did not present in any sessions (as far as I know).
In the schedule, there was a session called Terrible Ideas in Git by Corey Quinn. I love Git, and when I saw the topic, my immediate reaction was: how can anyone say Git is terrible?? I just had to go and take a look. To my surprise, it was a very funny talk; I had a good laugh and enjoyed it a lot. I am glad I did not miss that session.
This year I attended the Twilio Signal Conference. As in its first year, it was held at Pier 27, San Francisco. It was a two-day, action-packed conference with a keynote session in the morning and sessions afterward until 6 pm.
The developer experience provided by the conference was superb compared to a lot of other developer conferences nowadays. Chartered buses with wifi were provided for commuters using different transits. Snacks were served all day. There were six 30-minute sessions to choose from in every time slot, with no need to wait in line, so you could always attend the sessions you wanted (sorry, Google I/O). For developers, at least for me, the most important thing was the special coffee stall that opened every morning to serve freshly brewed coffee to wake you up and energize you for the rest of the day. With the CEO, among others, coding right in front of you in a keynote session to show you some demos, it is the kind of true developer conference you could hope for.
There were a lot of new products and features Twilio announced at Signal, and I will not spend time recapping them here; you can read more info here and here. The interesting thing to note is how Twilio got so huge. It started off with a text messaging service; it now also provides services for video, authentication, voice, and routing. It is the power engine under the hood for fast-growing companies like Lyft and Uber. It now offers the most complete messaging platform for developers to connect to their users. It now has capabilities to reroute your numbers and tap into phone conversations. It partners with T-Mobile to get into the IoT domain. Twilio’s ambition and vision are not small at all. The big question is: how did Twilio achieve all this? The answer can be controversial, but for me, it all boils down to simplicity: making things really easy, really good, and making them just work. The Twilio APIs are very easy to use and do exactly what they say, no more, no less. Their reliability is superb. That is what developers want and rely on.
But wait, there’s more. Check out my thoughts on the sessions at Signal and my $Bash night experience. I almost won a chance to play with the mysterious Magic Leap, and I might yet get access for finishing second. Stay tuned.
Editor’s note: We just returned from Holland last week where we attended AMIS 25, which was a wonderful show. One of the demos we showed was the Smart Office; Noel (@noelportugal) also gave a presentation on it.
We’ve been showing the Smart Office since OOW last year, and it remains one of our most popular demos because it uses off-the-shelf components that are available today, e.g. Amazon Echo, Leap Motion, Philips Hue lights, beacons, etc., making it an experience that anyone could replicate today with some development work.
In early 2015, the AppsLab team decided we were going to showcase the latest emerging technologies in an integrated demo. As part of the Oracle Applications User Experience group, our main goal as the emerging technologies team is to design products that will increase productivity and user participation in Oracle software.
We settled on the idea of the Smart Office, which is designed with the future of enterprise workplaces in mind. With the advent of the Internet of Things and more home automation in consumer products, users are expecting similar experiences in the workplace. We wanted to build an overall vision of how users will accomplish their tasks with the help of emerging technologies, no matter where they might be working.
Technologies such as voice control, gesture, and proximity have reached what we consider an acceptable maturity level for public consumption. Inexpensive products such as the Amazon Echo, Leap Motion and Bluetooth beacons are becoming more common in users’ daily lives. These examples of emerging technology have become cornerstones in our vision for the Smart Office.
Wearable technology also plays an important role in our idea of the Future of Work. Smart watches are becoming ubiquitous, and the price of wireless microprocessors continues to decrease. Dedicated mobile devices, our research shows, can increase productivity in the workplace when they are properly incorporated into the user experience as a whole.
Building for you, a Sales Cloud example
We first created what we call a user persona to assist us in building the Smart Office. This helps us develop very specific workflows, using very specific technology, that can be widely applied to a variety of software users. In this case, we started with a sales example, as salespeople are often mobile workers.
Sally Smith, our development example for the Smart Office, is a regional sales vice president who is traveling to her headquarters office. Traveling to another office often requires extra effort to find and book a working space. To help Sally with that task, we built a geo-fence-enabled mobile app as well as a Smart Badge. Here’s what these two components help her do:
- As Sally approaches the office building, her mobile device (using geo-fencing capabilities) alerts her via her smart watch and helps her find her way to an available office space, using micro-location with beacons. She uses her Smart Badge, which has access to data about her employee status, to go through the security doors at the office building.
- As Sally approaches the available office space, her Smart Badge proximity sensor (a Bluetooth beacon) connects with a Lighthouse, which is a small touch-screen device outside the office space that displays space availability and works as the “brain” to control IoT devices inside the space. The proximity with the Lighthouse triggers a second confirmation to her smart watch to unlock the office and reserve the space in the company’s calendar system. This authenticates her reservation in two ways.
- As Sally enters the office, her global preferences are loaded into the office “brain.” Settings such as light brightness and color (Hue Lights), and room temperature (Nest Thermostat) are set to her liking.
- The office screens then start to load Sally’s familiar pictures as well as useful data relative to her location, such as weather or local events, on two Infoscreens. An Infoscreen is a Wi-Fi-enabled digital frame or LCD screen hung on the wall.
Sally has already interacted with her Smart Office in several ways. But up to this point, all of the interactions have been triggered or captured by emerging technology built into mobile devices that she is carrying with her. Now, she is ready to interact more purposefully with the Smart Office.
- Sally uses the Amazon Echo voice control to talk to the office: “Alexa, start my day.” Since she has been authenticated by the system already, it knows that the Oracle Sales Cloud is the application she is most likely to need, and the welcome page is now loaded in the touchscreen at the desk. She can use voice navigation to check on her opportunities, leads, or any other section of the Sales Cloud.
- Sally was working on the plane with Oracle Sales Cloud, but she did not have a chance to save her work before landing. Session portability is built into the cloud user experience, which takes care of saving her work when she is offline. Now that she is sitting inside the Smart Office and back online, she just swipes her screen to transfer her incomplete work onto the desktop screen.
- The Smart Office also uses empty wall space to project data throughout the day. On this Ambient Screen, Sally could use her voice (Amazon Echo), or hand gestures (Leap Motion), to continue her work. Since Sally has a global sales team, she can use the Ambient Screen to project a basic overview of her team performance metrics, location, and notifications.
- If Sally needs to interact with any of the notifications or actions she sees on the Ambient Screen, she can use a grab-and-throw motion to bring the content to her desk screen. She can also use voice commands to call up a team map, for example, and ask questions about her team such as their general location.
- As Sally finishes her day and gets ready to close her session inside the Smart Office, she can use voice commands to turn everything off.
Find out more
The Smart Office was designed to use off-the-shelf components on purpose. We truly believe that the Future of Work no longer relies on a single device. Instead, a set of cloud-connected devices help us accomplish our work in the most efficient manner.
For more on how we decide which pieces of emerging technology to investigate and develop in a new way for use in the enterprise world, read “Influence of Emerging Technology,” on the Usable Apps website.
See this for yourself and get inspired by what the Oracle Applications Cloud looks like when it’s connected to the future. Request a lab tour.
Another year, another amazing time at the Maker Faire.
I’ve attended my fair share of Maker Faires these years, so the pyrotechnic sculptures, 3D printing masterpieces, and handmade artisan marketplaces were of no particular surprise. But somehow, every time I come around to the San Mateo fairgrounds, the Faire can’t help but be so aggressively fresh, crazy, and novel. This year, a host of new and intriguing trends kept me on my toes as I ventured through the greatest show and tell on Earth.
Young makers came out in full force this year. Elementary school maker clubs showed off their circuit projects, middle schoolers explained how they built their little robots, and high school STEM programs presented their battle robots. It’s pleasing to see how maker education has blossomed these past years, and how products and startups like LittleBits and Adafruit have made major concepts in electronics and programming so simple and inexpensive that any kid can pick them up and start exploring. Also wonderful is seeing young teams traveling out to the Bay Area from Texas, Oregon, and other states, a testament to the growth of the Maker movement beyond Silicon Valley.
Speaking of young makers’ participation, Arduino creator Massimo Banzi talked about Arduino as an education tool for kids to play and tinker with, even though he never planned to make kids’ toys in the early years. The maker movement has engaged curious minds of all ages, getting them to start playing with electronics, making robots, and learning programming as a new language.
While the maker movement has made things very accessible to individuals, the essence of creation and innovation has also impacted large enterprises. On the “Maker Pro” stage, our GVP, Jeremy Ashley (@jrwashley), talked about new trends in large-enterprise application design, and how the OAUX group is driving the change toward simpler, yet more effective and more engaging, enterprise applications.
Drones were also a trending topic this year, with a massive Drone Racing tent set up with events going on the whole weekend. Everything was being explored – new shapes for efficient and quick flight; new widgets and drone attachment modules; new methods of interaction with the drone. One team had developed a smart glove that responded to gyroscopic motion and gestures to control the flight of a quadcopter, and had the machine dance around him – an interesting and novel marriage of wearable tech and flight.
Personally, I’ve got a soft spot for art and whimsy, and the Faire had whimsy by the gallon. The artistry of creators from around the country and the globe can’t be overstated.
Maker Faire never disappoints. We brought friends along who had never been to a Faire, and it’s always fun to watch them get blown off their feet, literally and figuratively, the first time a flamethrower blasts open from the monolithic Crucible. Or their grins of delight when they see a cupcake-shaped racecar zoom past them… and another… and another. Or the spark of amazement when they witness some demo that’s out of any realm of imagination.
Many hands make light (emitting diodes) work. Oracle Applications User Experience (OAUX) gets down to designing fashion technology (#fashtech) solutions in a fun maker event with a serious research and learning intent. OAUX Senior Director and resident part-time fashion blogger, Ultan “Gucci Translated” O’Broin (@ultan), reports from the Redwood City runway.
Fashion and Technology: What’s New?
Wearable technology is not new. Elizabeth I of England was a regal early adopter. In wearing an “armlet” given to her by Robert Dudley, First Earl of Leicester in 1571, the Tudor Queen set in motion that fusion of wearable technology and style that remains evident in the Fitbits and Apple Watches of today.
Elizabeth I’s device was certainly fly, described as “in the closing thearof a clocke, and in the forepart of the same a faire lozengie djamond without a foyle, hanging thearat a rounde juell fully garnished with dyamondes and a perle pendaunt.”
Regardless of the time we live in, for wearable tech to be successful it has to look good. It’s got to appeal to our sense of fashion. Technologists remain cognizant of the need to involve clothing experts in production and branding decisions. For example, at Google I/O 2016, Google and Levi’s announced an interactive jacket based on the Google Jacquard technology, which makes fabric interactive, applied to a Levi’s commuter jacket design.
Fashion Technology Maker Event: The Summer Collection
Misha Vaughan’s (@mishavaughan) OAUX Communications and Outreach team joined forces with Jake Kuramoto’s (@jkuramot) AppsLab (@theappslab) Emerging Tech folks recently in a joint maker event at Oracle HQ to design and build wearable tech solutions that brought the world of fashion and technology (#fashtech) together.
The occasion was a hive of activity, with sewing machines, soldering irons, hot-glue guns, Arduino technology, fiber-optic cables, LEDs, 3D printers, and the rest, all in evidence during the production process.
Fashtech events like this also offer opportunities of discovery, as the team found out how interactive synth drum gloves can not only create music, but be used as input devices to write code too. Why limit yourself to one kind of keyboard?
Wearable Tech in the Enterprise: Wi-Fi and Hi-Heels
What does all this fashioning of solutions mean for the enterprise? Wearable technology is part of the OAUX Glance, Scan, Commit design philosophy, key to the Mobility strategy reflecting our cloud-driven world of work. Smart watches are as much a part of the continuum of devices we use interchangeably throughout the day as smart phones, tablets, or laptops are, for example. To coin a phrase from OAUX Group Vice President Jeremy Ashley (@jrwashley) at the recent Maker Faire event, in choosing what works best for us, be it clothing or technology: one size does not fit all.
A distinction between what tech we use and what we wear in work and at home is no longer convenient. We’ve moved from BYOD to WYOD. Unless that wearable tech, a deeply personal device and style statement all in one, reflects our tastes and sense of fashion we won’t use it: unless we’re forced to. The #fashtech design heuristic is: make it beautiful or make it invisible. So, let’s avoid wearables becoming swearables and style that tech, darling!
Generally, I’m not in favor of consolidating important stuff onto my phone, e.g. credit cards, because if I lose my phone, I lose all that stuff too.
However, I’ve been waiting to try out a digital hotel key, i.e. using my phone to unlock my hotel room. Only a few hotels and hotel chains have this technology in place, and recently, I finally stayed at one that does, the Hilton San Jose.
Much to my surprise, and Noel’s (@noelportugal), the digital key doesn’t use NFC. We’d assumed it would, given NFC is fairly common in newer hotels.
Nope, it uses Bluetooth, and when you get close to your room, or any door you have access to unlock, e.g. the fitness center, the key becomes active.
Then, touch to unlock, just like it says, and within a second or so, the door is unlocked. It’s not instantaneous, like using the key, which uses NFC, but still pretty cool.
Ironically, the spare, physical key they gave me for “just in case” scenarios failed to work. I made the mistake of leaving my phone in the room to charge, taking the spare key while I ran downstairs to get some food, and the physical key didn’t work.
Anyway, the feature worked as expected, which is always a win. Those plastic keys won’t disappear anytime soon, and if you lose your phone while you’re using the digital hotel key, you’re extra hosed.
Still, I liked it and will definitely use it again whenever it’s available because it made me feel like future man and stuff.
Find the comments.
Editor’s Note: In February while we were in Australia, I had the pleasure to meet Stuart Coggins (@ozcoggs) and Scott Newman (@lamdadsn). They told me about a sweet Anki Overdrive cars plus Oracle Cloud Services hack Stuart and some colleagues did for Pausefest 2016 in Melbourne.
Last week, Stuart sent over a more detailed video of the specifics of the build and a brief writeup of what was involved. Here they are.
Oracle IoT and Anki Overdrive
By Stuart Coggins
Some time ago, our Middleware team stumbled upon the Anki Overdrive and its innovative use of technology, including APIs to augment a video game with physical racing cars.
We first presented an IoT focused demonstration earlier this year at an Innovation event in Melbourne. It was very well received, and considered very “un-Oracle.”
Over the past few months, the demo scope has broadened. And so has collaboration across Oracle’s Lines of Business. We saw an opportunity to make use of some of our Cloud Services with a “Data in Action” theme.
We’ve taken the track to several events spanning various subject areas, always sparking the question “what does this have to do with me?” and, in some cases, “why is Oracle playing with racing cars?”
As if the cars are not draw card enough at our events, the drone has been a winner. Again, an opportunity to showcase how using a range of services can make things happen.
As you’ll see in the video, the flow is fairly straightforward: the game, running on a tablet, talks to the cars via Bluetooth. Using Bluetooth sniffers on a Raspberry Pi, we interrogate the communication between the devices. There are many game events as well as car activity events (speed of left/right wheels, lane changes, left turns, right turns, off-track, etc.). We’re using Python scripts to forward the data to Oracle’s Internet of Things Cloud Service.
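As a rough illustration of that forwarding step, here is a minimal sketch. The event field names and the message envelope are assumptions for illustration only (the real payloads had to be reverse-engineered, and the actual IoT Cloud Service endpoint details are omitted; the sender is mocked out):

```python
import json

def to_iot_message(raw_event):
    """Wrap a sniffed car event in the envelope we forward upstream.
    The field names here are illustrative, not the real protocol."""
    return {
        "deviceId": raw_event["car_id"],
        "eventType": raw_event["type"],
        # Everything else in the sniffed event becomes the payload.
        "payload": {k: v for k, v in raw_event.items()
                    if k not in ("car_id", "type")},
    }

def forward(events, post=lambda body: None):
    """Serialize and send each event; in practice `post` would be an
    HTTP POST to the IoT Cloud Service REST endpoint."""
    messages = [to_iot_message(e) for e in events]
    for m in messages:
        post(json.dumps(m))
    return messages
```

In the real demo this runs on the Raspberry Pi next to the Bluetooth sniffer, so each sniffed packet becomes one cloud message.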
This is where things get interesting. The speed and lap-time data is filtered out and forwarded to Oracle’s Database Cloud Service. The “speedo” dials are rendered using Oracle APEX (Application Express), which does a great job. An “off track” event is singled out and instantiates a process defined in Oracle Process Cloud Service. At this point, we’ll integrate with Oracle Service Cloud to create an event for later auditing and logging. Whilst airborne, the drone captures photos of the incident (the crashed cars) and sends them back to the process. The business process has created an incident folder on Oracle Document Cloud Service to record any details regarding the event, including the photos.
Because data is not much use if you’re not going to do something with it, we then hook up Oracle Business Intelligence Cloud Service to the data stored in Database Cloud Service. Post race analysis is visualised to show the results, and with several sets of race data, gives us insight as to which car is consistently recording the fastest laptimes. i.e. the car that should be used when challenging colleagues to a race!
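The lap-time comparison behind that analysis can be sketched in a few lines. Assume (purely for illustration) that the rows come back from the Database Cloud Service as simple (car, lap time) pairs:

```python
from collections import defaultdict
from statistics import mean

def fastest_car(lap_records):
    """Group lap times by car and return the car with the lowest
    average lap time, i.e. the one to pick when challenging colleagues."""
    times = defaultdict(list)
    for car, laptime in lap_records:
        times[car].append(laptime)
    return min(times, key=lambda car: mean(times[car]))
```

With several races' worth of rows, the same grouping also makes it easy to spot whether a car's lap times drift over the day.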
As we’ve been running this for a few months now, and the use cases and applications of this technology grow, we’re getting more and more data. The adage that data creates data is certainly true here.
Ultimately, we’ll dump this data into Hadoop and perform some discovery, perhaps to understand how the track changes during the day (dust/dirt/use) etc. We’d like to get some temperature data from the Pi to understand if that has any effect on the car performances, and perhaps we’ll have enough data for us to be able to analyse the perfect lap, and replay it using the Anki SDK.
We’re planning a number of hackathons locally with this kit, and we’ll see what other innovations we can highlight.
A big shout out to the technical guy behind sniffing and “translating” the data. The data is not exposed by the SDK and was by no means trivial to map, but it has allowed us to get something meaningful and put it into action.
At first I was skeptical. I was perfectly happy with my iPad Air and the Pro seemed too big and too expensive. Six months later I wouldn’t dream of going back. The iPad Pro has become my primary computing device.
Does the Pro eliminate the need for a laptop or desktop? Almost, but for me not quite yet. I still need my Mac Air for NodeBox coding and a few other things; since they are both exactly the same size I now carry them together in a messenger bag.
The Pro is lighter than it looks and, with a little practice, balances easily on my lap. It fits perfectly on an airplane tray table.
Does the 12.9-inch screen really make that much of a difference? Yes! The effect is surprising; after all, it’s the same size as an ordinary laptop screen. But there is something addictive about holding large, high-resolution photos and videos in your hands. I *much* prefer photo editing on the iPad. 3D flyovers in Apple Maps are almost like being there.
The extra screen real estate also makes iOS 9’s split screen feature much more practical. Above is a screenshot of me editing a webpage using Coda. By splitting the screen with Safari, I can update code and instantly see the results as I go.
Enterprise users can see more numbers and charts at once. Bloomberg Professional uses the picture-in-picture feature to let you watch the news while perusing a large portfolio display. WunderStation makes dashboards big enough to get lost in.
For web conferences, a major part of my working life at Oracle, the iPad Pro both exceeds and falls short. The participant experience is superb. When others are presenting screenshots I can lean back in my chair and pinch-zoom to see details I would sometimes miss on my desktop. When videoconferencing I can easily adjust the camera or flip it to point at a whiteboard.
But my options for presenting content from the iPad are still limited. I can present images, but cannot easily pull content from inside other apps. (Zoom lets you share web pages and cloud content on Box, Dropbox or Google Drive, but we are supposed to keep sensitive data inside our firewall.) The one-app-at-a-time iOS model becomes a nuisance in situations like this. Until this limitation is overcome I don’t see desktops and laptops on the endangered species list.
The iPad Pro offers two accessories not available with a normal iPad: a “smart keyboard” that uses the new magnetic connector, and the deceptively simple Apple Pencil.
I tried the keyboard and threw it back. It was perfectly fine but I’m just not a keyboard guy. This may seem odd for someone who spends most of his time writing – I’m typing this blog on the iPad right now – but I have a theory about this that may explain who will adopt tablets in the workplace and how they will be used.
I think there are two types of workers: those who sit bolt upright at their desks and those who slump as close to horizontal as they can get; I am a slumper. And there are two kinds of typists: touch typists who type with their fingers and hunt-and-peckers who type with their eyes; I am a, uh, hunter. This places me squarely in the slumper-hunter quadrant.
Slumper-hunters like me love love love tablets and don’t need no stinking keyboards. The virtual keyboard offers a word tray that guesses my words before I do, lets me slide two fingers across the keyboard to precisely reposition the cursor, and has a dictate button that works surprisingly well.
Touch-slumpers are torn: they love tablets but can’t abide typing on glass; for them the smart keyboard – hard to use while slumping – is an imperfect compromise. Upright-hunters could go either way on the keyboard but may not see the point in using a tablet in the first place. Upright-touchers will insist on the smart keyboard and will not use a tablet without one.
If you are an artist, or even just an inveterate doodler, you must immediately hock your Wacom tablet, toss your other high-end styli, and buy the Apple Pencil (with the full-sized Pro as an accessory). It’s the first stylus that actually works. No more circles with dents and monkey-with-big-stick writing. Your doodles will look natural and your signature will be picture perfect.
The above drawing was done in under sixty seconds by my colleague Anna Budovsky. She had never used the iPad Pro before, had never used the app (Paper), and had never before picked up an Apple Pencil. For someone with talent, the Apple Pencil is a natural.
If you are not an artist you can probably skip the Pencil. It’s a bit of a nuisance to pack around and needs recharging once a week (fast and easy but still a nuisance). I carry one anyway just so I can pretend I’m an artist.
For now the iPad Pro is just a big iPad (and the new Pro isn’t even big). Most apps don’t treat it any differently yet and some older apps still don’t even fully support it. But I am seeing some early signs this may be starting to change.
The iPad Pro has one other advantage: processing power. Normal iPad apps don’t really need it (except to keep up with the hi-res screen). Some new apps, though, are being written specifically for the Pro and are taking things to a new level.
Zooming into infinitely complex fractals is not a business application, but it sure is a test of raw processing power. I’ve been exploring fractals since the eighties and have never seen anything remotely as smooth and deep and effortless as Frax HD. Pinch-zooming forever and changing color schemes with a swirl of your hand is a jaw-dropping experience.
The emerging class of mobile CAD apps, like Shapr3D, are more useful but no less stunning. You would think a CAD app would need not just a desktop machine but also a keyboard on steroids and a 3D mouse. Shapr3D uses the Apple Pencil in ingenious ways to replace all that.
Sketch curves and lines with ease and then press down (with a satisfying click) to make inflection points. Wiggle the pencil to change modes (sounds crazy but it works). Use the pencil for drawing and your fingers for stretching – Shapr3D keeps up without faltering. I made the strange but complicated contraption above in my first session with almost no instruction – and had fun doing it.
I hesitate to make any predictions about the transition to tablets in the workplace. But I would recommend keeping an eye on the iPad Pro – it may be a sleeping giant.
Hi there, remember me? Wow, April was a busy month for us, and looking ahead, it’s getting busy again.
Busy is good, and also good, is the emergence of new voices here at the ‘Lab. They’ve done a great job holding down the fort. Since my last post in late March, you’ve heard from Raymond (@yuhuaxie), Os (@vaini11a), Tawny (@iheartthannie), Ben (@goldenmean1618) and Mark (@mvilrokx).
Because it’s been a while, here comes an update post on what we’ve been doing, what we’re going to be doing in the near future, and some nuggets you might have missed.
What we’ve been doing
Conference season, like tax season in the US, consumes the Spring. April kicked off for me at Oracle HCM World in Chicago, where Aylin (@aylinuysal) and I had a great session. We showed a couple of our cool voice demos, powered by Noel’s (@noelportugal) favorite gadget, the Amazon Echo, and the audience was visibly impressed.
— Gozel Aamoth (@gozelaamoth) April 7, 2016
I like that picture. Looks like I’m wearing the Echo as a tie.
Collaborate 16 was next, where Ben and Tawny collected VR research and ran a focus group on bots. VR is still very much a niche technology. Many Collaborate attendees hadn’t even heard of VR at all and were eager to take the Samsung Gear VR for a test drive.
During the bots focus group, Ben and Tawny tried out some new methods, like Business Origami, which fostered some really interesting ideas among the group.
— The AppsLab (@theappslab) April 12, 2016
Next, Ben headed out directly for the annual Oracle Benelux User Group (OBUG) conference in Arnhem to do more VR research. Our research needs to include international participants, and Ben found more of the same reactions we’ve seen Stateside. With something as new and different as VR, we cast a wide net to get as many perspectives and collect as much data as possible before moving forward with the project.
Oracle Modern Customer Experience was next for us, where we showed several of our demos to a group of students from the Lee Business School at UNLV (@lbsunlv), who then talked about those demos and a range of other topics in a panel session, hosted by Rebecca Wettemann (@rebeccawettemann) of Nucleus Research.
— Geet (@geet_s) April 28, 2016
The feedback we got on our demos was very interesting. These students belong to a demographic we don’t typically get to hear from, and their commentary gave me some lightning bolts of insight that will be valuable to our work.
As with VR, some of the demos we showed were on devices they had not seen or used yet, and it’s always nice to see someone enjoy a device or demo that has become old hat to me.
Because we live and breathe emerging technologies, we tend to get jaded about new devices far too quickly. So, a reset is always welcome.
What we’re going to be doing in the near future
— AMIS, Oracle & Java (@AMISnl) May 9, 2016
Then, June 2-3, we’re returning to the Netherlands to attend and support AMIS 25. The event celebrates the 25th anniversary of AMIS (@AMISnl), and they’ve decided to throw an awesome conference at what sounds like a sweet venue, “Hangaar 2” at the former military airport Valkenburg in Katwijk outside Amsterdam.
Our GVP, Jeremy Ashley (@jrwashley) will be speaking, as will Mark. Noel will be showing the Smart Office, Mark will be showing his Developer Experience (DX) tools, and Tawny will be conducting some VR research, all in the Experience Zone.
I’ve really enjoyed collaborating with AMIS in the past, and I’m very excited for this conference/celebration.
After a brief stint at home, we’re on the road again in late June for Kscope16, which is both an awesome conference and happily, the last show of the conference year. OpenWorld doesn’t count.
We have big fun plans this year, as always, so stay tuned for details.
Stuff you might have missed
Finally, here are some interesting tidbits I collected in my absence from blogging.
- The bots are coming! We love bots, and in October, we’re partnering with the Apps UX Innovation team to run an internal bots-focused hackathon.
- New ways of input still on the verge of the enterprise. Over on VoX, you can read about our work in voice and gesture input and how these technologies are shaping future experiences.
- Smart user experiences: Machine learning and the future of enterprise applications. Check out what Bill has to say about how “smart” experiences are shaping our thinking.
Last week several of my colleagues and I had the privilege of attending the Samsung Developers Conference (SDC) in San Francisco. It was the 5th time Samsung has organized a developers conference in San Francisco, but the first time I attended, although some in our party had been to previous editions, so I had some idea of what to expect. Here are some impressions and thoughts on the conference.
After an hour walking around, my first thought was: is there anything Samsung doesn’t have a hand in? I knew, of course, that they produce smartphones, tablets, smartwatches and TVs, and I’d seen a laptop here and there, but vacuum cleaners, air conditioning units and ranges? Semiconductors (did you know there are Samsung chips inside the iPhone?), smart fridges, security cameras, and now VR gear and IoT. Pretty crazy. Interestingly enough, I think this smorgasbord of technology gives Samsung some distinct advantages over more focused companies (like, say, Apple); more on that later.
As with all of these events, Samsung’s motivation for organizing this conference is of course not entirely altruistic; as I mentioned in the intro, they have a huge hardware footprint and almost all of that needs software, which gets developed by … developers.
They need to attract outside developers to their platforms to make them interesting for potential buyers; I mean, what would the iPhone be without apps? There is nothing wrong with that, it’s one of the reasons we have Oracle OpenWorld, but I thought the sessions on the “Innovation Track” were a bit light on technical details (at least the ones I attended).
In fact, I feel some of them wouldn’t have been out of place in a “Marketing Track.” To be fair, I didn’t get to attend any of the hands-on sessions on day zero; maybe they were more useful. But as a hardcore developer, I felt a bit … underwhelmed by the sessions.
That doesn’t mean though that the sessions were not interesting, probably none more so than “How to Put Magic in a Magical Product” by Moe Tanabian, Chief Design Officer at Samsung, which took us on a “design and technical journey to build an endearing home robot”, basically how they created this fella:
That is Otto, a personal assistant robot, similar to the Amazon Echo, except with a personality. Tanabian explained in the session how they got from idea and concept to production using a process remarkably similar to how we develop here at the AppsLab: fail fast, iterate quickly, get it in front of users as quickly as possible, measure, etc. I just wish we had the same hardware tooling available as they do (apparently they used what I can only imagine are very expensive 3D printers to produce the end result).
Samsung also seems to be making a big push in the IoT space, and for good reason. The IoTivity project is a joint open source connectivity framework, sponsored by the Open Interconnect Consortium (OIC), of which Samsung is a member, and one of the sessions I attended was about this project.
The whole Samsung ARTIK IoT platform supports this standard, which should make it easy and secure to discover and connect ARTIK modules to each other. The question, as always, is: will other vendors adopt this standard so that you can do this cross-vendor, i.e. have my ESP8266s talk to an ARTIK module, which then talks to a Particle and my Philips Hue lights, etc.?
Without this, such a new standard is fairly useless and just adds to the confusion.
As mentioned in the intro though, because Samsung makes pretty much everything, they could start by enabling all their own “things” to talk to each other over the internet. Their smart fridge could then command their robotic vacuums to clean up the milk that just got spilled in the kitchen. The range could check what is in the fridge and suggest what’s for dinner. ARTIK modules can then be used as customizations and extensions for the few things that are not built by Samsung (like IoT Nerf Guns :-), all tied together by Otto, which can relay information from and to the users.
This is an advantage they have over e.g. Google (with Brillo) or Apple (with HomeKit) who have to ask hardware vendors to implement their standard; Samsung has both hardware and the IoT platform, no need for an outside party, at least to get started.
Personally, I’m hoping that in the near future I get to experiment with some of the ARTIK modules, they look pretty cool!
And then of course there was VR; VR Gears, VR vendors, VR cameras, even a VR rollercoaster ride (which I tried and which, of course, made me sick, same as with the Oculus Rift demo at UKOUG last year); maybe I’m just not cut out for VR. One of the giveaways was actually a Gear 360 camera, which allows you to take 360-degree footage that you can then experience using the Gear VR, nicely tying up the whole Samsung VR experience.
All in all it was a great conference with cool technology showing off Samsung’s commitment to VR and IoT.
Oh, and I got to meet Flo Rida at an AMA session 🙂
VR was the big thing at the Samsung Developer Conference, and one of the points that got driven across, both in the keynotes and in other talks throughout the day, was that VR is a fundamentally new medium—something we haven’t seen since the motion picture.
Injong Rhee, the executive VP of R&D for Software and Services, laid out some of VR’s main application areas: Gaming, Sports, Travel, Education, Theme Parks, Animation, Music, and Real Estate. Nothing too new here, but it is a good summary of the major use cases, and they echo what we’ve heard in our own research.
He also mentioned some of their biggest areas for innovation: Weight, dizziness, image quality, insufficient computing power, restricted mobility, limited input control. For anyone who’s tried the Gear VR and had to use the control pad on the side of the visor, I think we can agree it’s not ideal for long periods of time. And while some VR apps leave me and others with no nausea at all, other apps, where you’re moving around and stepping up and down, can certainly cause some discomfort. I’m curious to see how some of those problems of basic human physiology can be overcome.
A fascinating session after the keynote was with Brett Leonard, who many years ago directed Lawnmower Man, a cautionary tale about VR, which despite the bleak dystopic possibilities it portrayed, inspired many of today’s VR pioneers. Leonard appeared with his brother Greg, a composer, and Frank Serafine, an Oscar-winning sound designer who did the sound for Lawnmower Man.
Brett, Greg, and Frank made a solid case for VR as a new medium that has yet to be even partially explored, and will surely have a plethora of new conventions that storytellers will need to work with. We’ve become familiar with many aspects of the language of film, such as things happening off screen but are implied to be happening. But with the 360-degree experience of VR, there’s no longer that same framing of shots, or things happening off the screen. The viewer chooses where to look.
Brett also listed his five laws of VR, which cover some of his concerns, given that it is a powerful medium that could have real consequences for people’s minds and physiology, particularly developing children. His laws, heavily paraphrased, are:
- Take it seriously.
- VR should promote interconnection with humanity, not further reinforce all the walls we already have, walls that technology so far has helped to create.
- VR is its own reality.
- VR should be a safe space—there are a huge amount of innovations possible, things that we haven’t been able to consider before. This may be especially so for medical and psychological treatments.
- VR is the medium of the global human.
Another interesting part of the talk was about true 360-degree sound, which Serafine said hadn’t really been done well before, but with the upcoming Dolby Atmos theaters, finally has.
Good 360-degree sound, not just stereo like we’re used to, will be a big part of VR feeling increasingly real, and will pose a challenge for VR storytelling, because it means recording becomes more complex, and consequently editing and mixing.
Samsung also announced their effort for the connected car, with a device that looks a lot like the Automatic (previously blogged about here) or the Mojio. It will offer all the features of those other devices—driving feedback that can become a driver score (measuring hard braking, fast accelerating, hard turns, and the like), as well as an LTE connection that allows it to stay connected all the time and serve as a WiFi hotspot. But Samsung adds a little more interest to the game with vendor collaborations, like with Fiat, where you can unlock the car, or open the trunk from your app. This can’t currently be done with other devices.
It should come out later this year, and will also have a fleet offering, which should appeal to enterprise companies. If they have more of these exclusive offerings because of Samsung’s relationships with various vendors, maybe it will do better than its competitors.
After a whirlwind day at Modern CX, I hurried my way back up to San Francisco for the last day of the Samsung Developers Conference 2016. The morning started out exciting with a giveaway gift of the Samsung Gear 360 Camera.
— Tawny (@iheartthannie) April 30, 2016
Raymond (@yuhuaxie) took a bunch of photos with it and found it very convenient to get a stitched 360 photo with one click of a button. Previously in the making of our virtual gadget lab, he had to use an Android phone camera to capture 20+ shots before stitching the photos together to produce one spherical photo.
The automatic stitching is seamless at a glance, but you can still tell where the stitching happened by looking more closely.
The quality of photos taken with the Gear 360 still leaves something to be desired. The door frame and structural beams of my house all appear curvy, the depth of the photo looks very shallow, etc. Maybe it is the fish-eye lenses that lead to a lack of depth and to distortion outside the focus center. This distortion can be avoided by keeping subjects and objects a few meters from the camera, or by using more cameras in a high-end rig, such as Project Beyond.
A Secure & Connected Future
- Security – Knox, which delivers mobile enterprise security, and Samsung Pay, which uses MST and NFC to make mobile payment “simple, secure, virtually anywhere.” A group of panelists from MasterCard, Visa and American Express expressed that mobile payments need to be as easy as (or easier than) pulling out your credit card, and Samsung Pay’s MST and NFC enable a “frictionless” experience.
- The internet of things – Currently there is a fragmented ecosystem of connected devices and manufacturers that needs to be democratized. In a keynote, Curtis Sasaki pushed the idea of making connections, not silos. The ARTIK chip is one way to exchange open data amongst devices that were not originally designed to work together.
Sensors can be used to provide a variety of information and status, and to start actions. With ARTIK, we were able to meet Otto, Samsung’s adorable smart home robotic personal assistant, who was methodically turning off lights and taking pictures. Otto is not a consumer-ready product, but it functions much like Amazon’s Alexa while also hosting an HD camera and display. This offers an opportunity to test image and face recognition in home environments.
- The connected car is launching at the end of this year. A new dongle gives owners of older cars an LTE connection. Samsung Connect Auto uses real-time alerts to help consumers improve their driving behavior while offering a Wi-Fi hotspot to create a multimedia center for the car.
There is also a smart TV, and a connected fridge that lets you identify missing ingredients, compose a grocery list and order groceries right from the doors of the fridge.
- Smart healthcare – This is about empowerment, connectivity and health data security. There was local intelligence for health monitoring with the Samsung Simband, and a virtual reality relaxation pod provided by Cigna Healthcare. Simband is paired with a cloud-based health analytics service that can collect health data from wearables and health monitors.
- Virtual reality – The 4D experience was highlighted by Escape Room VR, an amazing virtual experience where you can touch and move real objects in virtual reality!
- And of course, there was the obligatory roller coaster experience.
— Tawny (@iheartthannie) April 30, 2016
I managed to catch the tail-end of the inspiration keynote which featured a panel of 4 women change-makers from Baobab Studios, Cisco, Intel and Summit Public Schools chatting about how we can change the world and make history.
One of the common themes was that innovation comes from anywhere, in the organization and from users, whether that be the lab or another person’s scratchpad.
Innovation is questioning the status quo. You have to reject “what is,” and THAT is hard. Your moral obligation to make a better quality world should be your guiding principle. You have to make IT happen by putting in the time. Challenge the way the world is now and bring people into the dialogue, even if they do not want to be there.
There will always be a constant steady heartbeat of new technology. To build meaningful tools, we need to ask new questions:
- What would people do with our technology, rather than what are we doing with it?
- What do people care about? What are they passionate about?
- What do people find frustrating?
- What role could technology play in their lives?
We need to bring sensibility to our future tech. Our future objects will not make sense if they don’t make us happy or more efficient. We need to focus on telling a future story that is not about how we are seduced by technology; instead, we tell a story about what it will mean for all of us if we do use it.
Overall, we had a lot of fun and learned a lot about what technologies are currently available, where the industry is headed and what emerging technologies are around the corner. Our ideas flourish when we’re surrounded by fellow developers, designers, thinkers and makers.
Last week, we were back in Las Vegas again for the Oracle Modern Customer Experience Conference! Instead of talking to customers and partners, we had the honor of chatting with UNLV Lee graduate students (@lbsunlv) and getting their feedback on how we envision the future of work, customer experience, marketing and data security.
We started off with Noel (@noelportugal) showing the portable Smart Office demo, including the Smart Badge, that we debuted at OpenWorld in October, followed by a breakout session for the graduates to experience Glance and Virtual Reality at their own leisure.
The event was a hit! The 2-hour session flew by quickly. The same group of graduates who came in for the demos at the start of our session left only at the very last minute, when we had to close down.
Experiencing these demos led into some exciting discussions the following day between the UNLV Lee Business School panelists and Rebecca Wettemann (@rebeccawettemann) from Nucleus Research (@NucleusResearch) on the future of work:
- How will sales, marketing, customer service, and commerce change for the next generation?
- What does the next generation expect from their employers?
- Are current employers truly modern and using the latest technology solutions?
— Erin Killian Evers (@keversca) April 28, 2016
— Gozel Aamoth (@gozelaamoth) April 28, 2016
While all of this was going on, a few of the developers and I were at the Samsung Developers Conference in SF discussing how we could build a more connected future. More on that in coming posts!
As part of our push to do more international research, I hopped over to Europe to show some customers VR and gather their impressions and thoughts on use cases. This time it was at OBUG, the Oracle Benelux User Group, which was held in Arnhem, a refreshing city along the Rhine.
Given that VR is one of the big technologies of 2016, and is poised to play a major role in the future of user experience, we want to know how our users would like to use VR to help them in their jobs. But first we just need to know what they think about VR after actually using it.
The week prior, Tawny and I showed some VR demos to customers and fellow Oracle employees at Collaborate in Las Vegas, taking them to the arctic to see whales and other denizens of the deep (link) and for the few with some extra time, defusing some bombs in the collaborative game “Keep Talking and Nobody Explodes” (game; Raymond’s blog post from GDC).
The reaction to the underwater scenes are now predictable: pretty much everyone loves it, just some more than others. There’s a sense of wonder, of amazement that the technology has progressed to this point, and that it’s all done with a smartphone. Several people have reached out to try to touch the sea creatures that are swimming by their view, only to realize they’ve been tricked.
Our European customers are no different than the ones we met at Collaborate, with similar ideas of how it could be used in their businesses.
It’s certainly a new technology, and we’ll continue to seek out use cases, while thinking up our own. In the meantime, VR is lots of fun.
Last week, Ben (@goldenmean1618) and I were in Las Vegas for COLLABORATE. We ran two studies focused on two trending topics in tech: bots and virtual reality!
Bot Focus Group
— The AppsLab (@theappslab) April 12, 2016
Our timing for the bot study was perfect! The morning we were to run our focus group on bots in the workplace, Facebook launched its bot platform for Messenger. They are not the only ones with a platform; Microsoft, Telegram and Slack have their own platforms too.
The goal of our focus group was to generate ideas on useful bots in the workplace. This can range from the concierge bot that Facebook has to workflow bots that Slack has. To generate as many ideas as we could, without groupthink, we had everyone silently write down their ideas using the “I WANT [PAT] TO…SO I CAN…” Tower of Want framework I stumbled upon at the GDC16 conference last March.
Not only do you distill the participants’ motivations, intents and needs, but you also acquire soft goals to guide the bot’s development. Algorithms are extremely literal. The Harvard Business Review notes how social media sites were once “quickly filled with superficial and offensive material.”
The algorithm was simple: find the articles with the most clicks and feed them to the users. Somewhere along the way, the goal of QUALITY highly engaged articles was lost to highly engaged articles at the expense of QUALITY. Intention is everything.
“Algorithms don’t understand trade-offs; they pursue objectives single-mindedly.”
Soft goals are in place to steer a bot away from unintended actions.
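To make the trade-off concrete, here is a minimal sketch (all names, numbers and weights are invented for illustration) of a click-only ranker versus one that blends a quality score into the objective as a soft goal:

```javascript
// Naive engagement-only ranking: pursue clicks single-mindedly.
function rankByClicks(articles) {
  return [...articles].sort((a, b) => b.clicks - a.clicks);
}

// Ranking with a quality "soft goal" blended into the objective,
// so raw engagement can't crowd out quality entirely.
function rankWithQualityGoal(articles, qualityWeight = 1000) {
  const score = (a) => a.clicks + qualityWeight * a.quality;
  return [...articles].sort((a, b) => score(b) - score(a));
}

const articles = [
  { title: "Outrage bait", clicks: 900, quality: 0.1 },
  { title: "In-depth analysis", clicks: 400, quality: 0.9 },
];

console.log(rankByClicks(articles)[0].title);        // "Outrage bait"
console.log(rankWithQualityGoal(articles)[0].title); // "In-depth analysis"
```

The point isn’t the particular weight; it’s that the objective function itself has to encode the intention, because the algorithm won’t.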
After the ideas were generated and shared, we had them place their bot tasks on a pain/frequency chart: How painful is this task for you to do? and How frequently do you need to do this task?
— The AppsLab (@theappslab) April 12, 2016
Then it was time for the business origami! Business Origami is similar to a task flow analysis that uses folded paper cutouts as memory nudges. We now have our bot tasks, but we do not know (a) what triggers the task, (b) what the bot needs to know to do its job and (c) what the desired output is. We modified the Business Origami activity with the inputs and outputs that a Resource Flow activity demands.
Before our customers created their own flows based on their best bot task idea, we did a group warm-up. The flow below illustrates scheduling and booking meeting rooms. Everyone was involved as they talked about the myriad ways that could trigger the act of scheduling a meeting, the mediums of communication used, what they would need to know in order to schedule it, and what feedback is needed when the task is done.
— The AppsLab (@theappslab) April 12, 2016
Virtual Reality Guerrilla Test
For 3 days, Ben and I ran a guerrilla study to get customers’ and partners’ thoughts on VR and where they might find it useful in their work/industry.
— The AppsLab (@theappslab) April 12, 2016
Our customers experienced virtual reality through the Samsung Gear VR. It relies on our Samsung Note 5 to deliver the immersive experience.
Because of the makeup of our audience at the demo pod, we had to ensure that our study took approximately 5 minutes. We had 2 experiences to show them: an underwater adventure with the blue whale in the Arctic Ocean (theBlu) and the heart-pounding task of defusing a bomb (Keep Talking and Nobody Explodes).
Everyone really wanted to reach out and touch the sea animals. 2 reached out, accidentally touched Ben and me, and freaked out at how realistic the experience was! Another case for haptic gloves? 🙂
One of our participants had tears in her eyes after she experienced TheBlu Arctic while another participant wanted to play 3+ games of Keep Talking and Nobody Explodes!
Overall, no one felt nauseous. The game control came easy to those who had experience playing XBox and Playstation, while others were able to learn through the gamepad tutorial. Playstation VR makes learning even easier for newcomers since you can see a ghostlike view of your gamepad in VR.
Mostly, our participants confirmed use cases that we found in our first VR study at Modern Supply Chain back in January 2016. We ran 20 participants that month in an onsite guerrilla study, taking each through 2 applications in a 30-minute session: swimming with dolphins in Ocean Rift and exploring a car show in Relay Cars.
One of our participants had a fear of being underwater. Even though she felt a bit nauseous, she did not want to take the headset off!
The tutorial was a breeze to get through. Unlike Ocean Rift, where you need to navigate and swim by using the trackpad on the headset, Relay Cars used gaze control for selection. That means looking at a navigation button for 2-3 seconds makes the selection automatically, without you having to reach for the trackpad.
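Gaze selection like that is usually just a dwell timer; here is a hedged sketch of the idea (the names and the 2.5-second threshold are mine, not Relay Cars’ actual implementation):

```javascript
// Dwell-based gaze selection: an element is "selected" once gaze has
// rested on it continuously for a threshold (here 2.5 seconds).
const DWELL_MS = 2500;

class GazeSelector {
  constructor(dwellMs = DWELL_MS) {
    this.dwellMs = dwellMs;
    this.target = null;
    this.gazeStart = 0;
  }
  // Call on every frame with the currently gazed-at element id (or null)
  // and the current timestamp in ms. Returns the id once dwell completes.
  update(targetId, now) {
    if (targetId !== this.target) {
      // Gaze moved to a new target (or away): restart the dwell timer.
      this.target = targetId;
      this.gazeStart = now;
      return null;
    }
    if (targetId !== null && now - this.gazeStart >= this.dwellMs) {
      this.target = null; // reset so we don't re-fire immediately
      return targetId;
    }
    return null;
  }
}

// Simulated frames: gaze rests on "menuButton" for 3 seconds.
const gaze = new GazeSelector();
let selected = null;
for (let t = 0; t <= 3000; t += 100) {
  selected = gaze.update("menuButton", t) || selected;
}
console.log(selected); // "menuButton"
```

A real implementation would also render a progress ring on the cursor so the user can see the dwell filling up, which is what makes the interaction feel predictable rather than accidental.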
The goal of our initial guerrilla VR study was to find out whether people would actually wear a headset at work (the majority said yes) and what VR could be useful for (many, many ideas). We have since shortlisted those ideas and are developing a demo; more to come.
Austin: a beautiful city with a river crossing downtown, a music scene, a young population, cycling, brisket, and the home of SXSW, a big multicultural conference for all tastes: Film, Interactive and Music.
This was my first time attending the conference, but Noel (@noelportugal) is a year-after-year attendee. It’s well known that this conference is not only a trampoline for small companies and startups to show the world what they are cooking up, but also big exposure for new services, products, trends, you name it. That’s why we are very interested in this kind of conference, which is very aligned with our team’s spirit.
I mean it.
Since Google I/O 2014, I’ve been following the steps toward VR and AR. At that time, they released Google Cardboard, inexpensive goggles for visualizing VR content, and Project Tango for AR. Yes, I know you can argue VR has been around for quite a long time, but I believe they exposed the right development tools and a cheap way to develop and consume the technology, so a lot of people got engaged. However, some others remained very skeptical about use cases.
But now, after two years, guess what? VR is on everyone’s lips, and SXSW wasn’t an exception.
I have to say, I’m very impressed at how many companies have adopted this technology so fast. Of course, we all saw this wave coming with announcements of products like Oculus Rift, HTC Vive, Noon VR, Microsoft HoloLens and so on. And as an emerging technology team, we were already prepared to be hit by the wave.
I still can’t get used to seeing people with a headset over their eyes and headphones on, 100% isolated from reality. I tried most of the VR demos presented, and my brain/body is still not prepared for many VR experiences; I had a headache and felt weird after so many demos.
I also saw people with red marks all around their faces from wearing a headset all day. Even so, this helped me conclude that pretty much all the demos follow the same use case: advertising and promoting products.
It’s really interesting that retail and product companies are investing in this technology to attract more buyers and to better convey how it feels to use their products. This can be applied, for example, to automobiles, houses, travel agencies, etc. Interestingly, this technology is sometimes combined with motion for a complete experience.
Note: don’t ever try a selfie while wearing a VR headset, almost impossible 🙂
I would like to bring up a story from one of the panelists talking about VR that I found very interesting: his best friend’s rock band was in town for a concert while he was experimenting with VR. He suggested that they record one of his favorite songs in a way that could be post-produced to be seen in VR.
The band accepted, and he set up all the recording production in front of the stage, but remained backstage monitoring the production while they played his favorite song. All went fine, and although he missed the opportunity to see and hear his favorite song live, he got to watch the VR video several times over the following couple of weeks.
Then, when they met again, his friend asked him about the concert, and he could almost say that he had been virtually in the front row, enjoying the concert like everyone else, singing and jumping.
Examples like this are very impressive. They make our brains believe things we didn’t actually live. In the end, that’s what we are doing to our brains: cheating them.
This could be good or bad, depending on the point of view and circumstances. But just think about health and medicine: helping people with Alzheimer’s recover lost thoughts and memories with VR. That’s huge.
Another common use case is training people or showing how to perform procedures. Imagine medical students being trained in surgical techniques, or how to react under stressful circumstances. Or how to work in risky areas like mines, radioactive plants or even places as simple as a warehouse.
VR is also being used as a medium of human expression, like painting in 3D.
Companies are also taking advantage of VR to sell products on top of it or to complement it, like this microphone for recording 3D audio.
Speaking of AR, I saw an awesome concept that combines virtual objects with physical objects. Interactions between them are very natural: it is possible to turn a physical object into a virtual object, and interaction is hands-free. Check out the video.
We also witnessed the beta release of the Metal Glass device, a combination of VR and AR. They claim that next year monitors won’t be necessary anymore, as VR + AR will replace them. Check out the video.
VR is the thing and we, as a team, are working to come up with cool use cases.
Internet of Things.
This is a hot topic too. We saw a lot of startups and companies offering products and services for office and home automation, security, etc.
Noel and I attended a couple of IoT workshops from companies and startups that are making this whole hardware revolution very affordable and simple.
We are convinced there are still a lot of use cases out there, and we’re going to continue investigating.
Data privacy is also a big concern in IoT for big companies, as smaller companies are not paying much attention to it. IoT generates big data, which can be analyzed and reduced to analytics, behaviors, forecasts and needs. Just imagine: where is all the data collected from your Internet-connected fridge going? And your thermostat data? And your lights data? This is where regulation may help to protect your data and privacy.
Humanoids, robots, AI and machine learning.
This is a hot topic too, not as big as VR, but, believe me, it’s the next big thing.
Apps cannot be the same as they were ~4 years ago; people’s needs have changed, and this has raised the bar for app developers. One way to reach that bar is with AI.
Users want to accomplish tasks quickly, with almost zero effort and interaction. Our team has raised its own bar too, which is why I decided to attend the machine learning sessions. They were very basic and not too technical, but it was great to see that we are looking in the right direction.
It was very interesting to see all these concepts combined, and companies investing in research to make futuristic concepts more affordable.
Here’s Noel losing to a robot at rock-paper-scissors:
And here’s a video of Pepper, a humanoid robot.
We saw all kinds of wearables: health tracking, pillow sensors, light-based eye therapy to recover from jet lag or insomnia, pet wearables that read your pet’s feelings, ergonomic wearables, and gloves used as a digital interface controller that can interact with laptops, iPads, etc.
Sony unveiled a wearable device, currently referred to as N, that has a hands-free user interface and allows users to receive audio without having to put on a headset. It has a virtual assistant that helps answer questions. A camera is also positioned on one end of the device, so you can ask it by voice to take a photo.
At SXSW, there are a lot of things to do including talks, sessions, lounge demos, expo and workshops.
When I was putting together my schedule I noticed there were a lot of JS workshops, so I decided to join them. Most of them were self-paced and very introductory, but it was good to see new programming paradigms like React and functional programming in JS.
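As a small taste of that functional style: pure transformations with map/filter/reduce instead of mutating loops (the data here is made up for illustration):

```javascript
// A conference schedule, filtered and summed functionally:
// no mutation, just a pipeline of pure transformations.
const sessions = [
  { track: "JS", minutes: 60 },
  { track: "VR", minutes: 45 },
  { track: "JS", minutes: 30 },
];

const totalJsMinutes = sessions
  .filter((s) => s.track === "JS")   // keep only JS workshops
  .map((s) => s.minutes)             // project out the durations
  .reduce((sum, m) => sum + m, 0);   // sum them up

console.log(totalJsMinutes); // 90
```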
Wrap up and time to say until next year.
As a plus, Pi Day was celebrated during SXSW, and for that, we saw a cool implementation using Raspberry Pis as a distributed system for search queries.
And that’s it. I love this kind of conference, where you can observe where technology and new concepts are going. More importantly, it can help us innovate and improve what we do on our team.
It’s 2016, and it seems this is the year for VR. Of course, we at the AppsLab can’t miss the beat!
While Osvaldo (@vaini11a) started to look into Unity-based VR capability and prototypes, I wanted to take a look at the WebVR-based approach. The prospect of delivering a VR experience in a browser, and over the web, suddenly makes VR so much more accessible – WebVR can be designed to work with or without a VR headset. In a sense it is an extension of the responsive web, adjusting to different renderers/viewers gracefully.
The first thing that came to my mind was to VR-enable some of our visualization demos, and I picked John’s Transforming Table for the first try. After a series of hacks, I got a half-baked, non-functional result; if we had the Oculus Rift, we could get it working. A-Frame is at a very early stage, and there is still a lot to be desired.
I realized I needed a perspective change – instead of fitting the existing presentation and behavior into VR, WebVR/A-Frame is better suited to create a new presentation and behavior that blends with VR naturally.
Jake (@jkuramot) and Noel (@noelportugal) had just come back from a trip to Sydney, and told us a story about someone following our team and reading our posts from far, far away – the other side of the planet 🙂
And wouldn’t it be nice if they could see our Gadget Lab too?
So after a couple of days, here it is – our Gadget Lab in VR.
You can step into the Gadget Lab, and step out to the Oracle campus. While you are in the virtual Gadget Lab, you can see several stations I have labelled to show some of the human-machine interfaces we are investigating.
I don’t have a fancy stereoscopic 360° camera rig, so I just used an old Android phone to capture the scenes of the Oracle campus and the Gadget Lab interior. The phone is a little underpowered, so you may see some blanked-out areas, because the phone would reboot when I tried to capture the entire scene.
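For the curious, a scene built this way needs surprisingly little markup. Here is a minimal, hypothetical A-Frame sketch – the asset filename and library version are placeholders for illustration, not the actual code behind our tour:

```html
<html>
  <head>
    <!-- A-Frame library; pin whatever release you are targeting -->
    <script src="https://aframe.io/releases/0.2.0/aframe.min.js"></script>
  </head>
  <body>
    <a-scene>
      <!-- an equirectangular 360° photo (e.g. from a phone's photo-sphere
           mode) wrapped around the viewer as the sky -->
      <a-sky src="gadget-lab-360.jpg"></a-sky>
      <!-- the camera: mouse drag to look around on desktop,
           device tilt on mobile, head tracking with a headset -->
      <a-camera look-controls></a-camera>
    </a-scene>
  </body>
</html>
```

The same page degrades gracefully: without a headset it behaves like a pannable 360° photo viewer, which is what makes the WebVR approach feel like an extension of the responsive web.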
Let me know what you think in the comments.
Editors’ update: To experience this virtual tour of our Gadget Lab, simply navigate to http://theappslab.com/vr/appslab in any browser. Even if you don’t have a VR headset, you can get a feel for the space and see a little bit about the demos.
In a desktop browser, you can use your mouse to look around the room. On a mobile browser, you can tilt your device to pan around the lab, and if you want the full VR experience, tap the glasses icon in the lower right-hand corner, and pop your phone into a compatible viewer, e.g. Google Cardboard.
The future is now.
Yesterday, I talked, in part, about how we can, if we choose, work all the time, from the moment we open our eyes, until we close them for sleep.
So today’s as good a Friday as any to remind you to balance all that work with some fun.
And here’s Anthony showing our Gadget Lab demos to some kids on Spring Break.
We make a conscious effort to take time out and show our fun stuff to kids. Why? Because fun is important.
Back in 2012, I pondered what I could show at Kscope12 that would spice up my presentation. I wanted to add something fun to my session, and after chatting with Noel, we settled on the Rock ’em Sock ’em robots controlled by text/phone as a fun way to keep people’s attention.
During the conference, I stumbled upon something. Turns out when you show people something fun, their creative juices flow, and you get lots of cool ideas. Like this:
I’m not an expert in every functional area of business. I may know some Financials from my years as an E-Business Suite product manager, but I don’t know much about sales or HR or supply chain.
So, when we have something we think will be important for users in the not-so-distant future, i.e. an emerging technology, we often build a fun demo to help those domain-specific use cases rise to the surface.
People play with the demo, they have some fun with it, and we talk about ways that particular technology could apply to their everyday work.
This works, believe it or not, and we’ve been repeating that formula with great results for several years. It’s part of our strategy.
For example, robot arms and race cars to investigate gesture as an input device, Internet of Things for when things happen and for connecting dumb things to the Internet, and using your mind to drive a robotic ball.
It’s become an annual team activity to come up with the year’s fun demo, and everyone loves the fun demos, building them, showing them, playing with them.
So have fun out there. We will.
Now, if only I could talk Noel into resurrecting those Rock ’em Sock ’em robots.
Earlier this month, our strategy and roadmap eBook was released. In it, you’ll find all the whys, wherefores, whats and hows that drive the Simplicity-Mobility-Extensibility design philosophy we follow for Oracle Cloud Applications.
The eBook is free, as in beer, and it’s a great resource if you find yourself wondering why we do what we do. Download it now.
In said (free) eBook, you’ll find this slide.
Guessing I’ve seen our fearless leader and GVP Jeremy Ashley (@jrwashley) present this slide 20-some times around the World, and each time he asks, “What’s the first thing you do in the morning?”
Inevitably, 90% of the audience says pick up my phone. He’ll then ask how many people in the audience have only one computing device, two, three or more? Overwhelmingly, audiences have three or more.
These are international audiences, so there’s no geographical bias.
I love this slide because it succinctly portrays the modern work experience, spent across devices, all day long. As Jeremy says, we have the ability to work from the moment we open our eyes to wake to the moment we close them for sleep.
You can debate whether that is a good thing or not, but the fact is our users are mobile and device-happy. They use whatever device fits their needs at any given time.
And devices keep changing. For instance, this slide had a head-mounted display glyph at one point to represent a Google Glass-like device, and the smartwatch looked like a Pebble, not an Apple Watch.
That’s where we (@theappslab) come in; we’re always reading the tea leaves, leaning into the future, trying to anticipate what users will want next so we can skate to where the puck will be.
Mixing metaphors is fun.
Anyway, download the free eBook and learn about the OAUX strategy and roadmap and keep reading here to see where we fit.