The components are:
- AppsLab Slapbands from Amazing Wristbands (@AMZG_Wristbands)
- LightBlue Beans from Punch Through Design (@punchthrough)
- 3D-printed cases, designed by Friend of the ‘Lab Rob, printed by Sculpteo (@sculpteo)
- LED lights
- An NFC sticker
The Bean is an amazing little board: Arduino-compatible with a Bluetooth Low Energy module, plus an RGB LED and a 3-axis accelerometer.
I can’t tell you what we’re doing with this custom wearable, yet, but it will happen during OpenWorld. If you’ll be at the big show, OpenWorld or JavaOne, you’ll have a chance to see it in action and chat with the guys who built it.
Oh, and Noel will be writing up the details of the build, the story behind it and the journey, as well as all the nerdy bits. Stay tuned for that.
Tony went to a talk by Salim Ismail (@salimismail), the Founding Executive Director of Singularity University recently. He may/may not post his thoughts on the talk, which sounds fascinating, but this video is worth sharing either way, and not just because we have quadcopter fever.
I consider these types of posts to be filler, but I suppose you could look at it as curated content or something highbrow like that. Take your pick.
I scanned this post first, thought it would be interesting and left it to read later. Then I read it, and now, I’m terrified. Here’s the list, make sure to hit the link and read all about the sci-fi horrors that aren’t really sci-fi anymore.
- Weaponized Nanotechnology
- Conscious Machines
- Artificial Superintelligence
- Time Travel
- Mind Reading Devices
- Brain Hacking Devices
- Autonomous Robots Designed to Kill Humans
- Weaponized Pathogens
- Virtual Prisons and Punishment
- Hell Engineering
This is exactly how I feel about watches.
I only know who Phil Fish is because I watched Indie Game: The Movie. This short documentary by Ian Danskin is quite good and is newsworthy this week thanks to Markus Persson’s reference to it in his post about why he’s leaving Mojang, the makers of Minecraft, after Microsoft completes its acquisition of the company (h/t Laurie for sharing).
I have often wondered why so many people hate Nickelback, and now I have a much better understanding of why, thanks to Ian. Embedded here for your viewing pleasure.
To no one’s surprise, Apple announced the Apple Watch today.
Ultan, our wearables whisperer, has style and flair; if you’ve ever met him, you know this. His (and Sandra’s) point about wearable tech needing to be stylish is one that Apple has made, again, to precisely no one’s surprise. Appearance matters to people, and smartwatches and other wearables are accessories that should be stylish and functional.
The Apple Watch looks very sleek, and if nothing else, the array of custom bands alone differentiates it from smartwatches like the Samsung Gear Live and the LG G Watch, both of which are also glass rectangles, but with boring rubber wristbands.
I failed to act quickly enough to get a Moto 360 and settled instead on a Gear Live, which is just as well, given I really don’t like wearing watches. We’ve been building for the Pebble for a while now, and since the announcement of Android Wear earlier this year, we’ve been building for it as well, comparing the two watches and their SDKs.
Like Google Glass, the Gear Live will be a demo device, not a piece of personal tech. However, for Anthony, his Android Wear watch has replaced Glass as his smartphone accessory of choice. Stay tuned for the skinny on that one.
I haven’t read much about the Apple Watch yet, but I’m sure there will be coverage aplenty as people get excited for its release early in 2015. Now that Apple’s in the game, wearables are surely even more of a thing than they were yesterday.
And they’re much more stylish.
Find the comments.
As the parent of a toddler, I have no choice but to pay attention to Disney and its myriad of products and services.
Case in point: this summer we took our daughter to Disneyland for the first time, which was a whole thing. Pause to h/t Disneyland expert, Friend of the ‘Lab and colleague Kathy for all her park and travel protips.
Being who I am, I found myself wandering around Disneyland and California Adventure thinking about how many hardcore analytics geeks they must employ to come up with systems like FASTPASS.
For the unfamiliar, FASTPASS is a system that allows you to skip some, if not all, of the line-standing for the most popular attractions in the parks. Although it’s difficult to explain in words, the system is rather simple once you get your first pass.
Being in the park, you can feel all the thought and craft that has gone into the experience. Disney is a $45 billion company, and it’s no surprise their R&D is cutting edge. But what makes it so successful?
Attendees of Disney parks are in a position very similar to that of employees of an enterprise, in that they will gladly opt in to new technologies because the value they receive in return is clear and quantifiable.
To put that into examples: if Google Glass helps me do my job more effectively, I’ll wear it. If I receive discounted benefits for wearing a fitness tracker, I’ll do it.
If a MagicBand allows me to leave my wallet in my room, not worry about losing the room keycards, and use FastPass+, I’ll wear it, even though it will allow Disney World to track my location at a very fine-grained level. Who cares? FastPass+ is worth it, right?
Odd branding note, the official ways to write these two terms are indeed FASTPASS and FastPass+, according to Disney’s web site.
If you’re interested in reading more about the MagicBand, what’s inside it and how Disney uses it at Disney World, check out Welcome to Dataland. Imagine all the data science that goes into creating and iterating on these enormous data sets; this is embiggened Big Data when you consider that Disney parks occupied the top eight spots in the 2012 Theme Park Index, comprising well over 100 million visits.
It boggles my mind, although for someone like Bill, it would be Christmas every day.
The post also recounts Walt Disney’s futurist vision, which seems to drive their R&D today. It also encapsulates my point nicely:
Rather, because Disney’s theme parks don’t have the same relationship to reality that Google and Costco and the NSA do. They are hybrids of fantasy and reality.
I read Welcome to Dataland only because I’d just been to Disneyland myself. Then came news that Disney had filed several patents concerning the use of drones for its park shows: one for floating pixels, one for flying projection screens, one for transporting characters, h/t Business Insider.
That was posted in August 2012.
So beyond casual interest as the father of a daughter who loves Disney Princesses, suddenly it’s obvious that I need to watch Disney much more carefully to see how they’re adopting emerging technologies.
Oh and become a willing data point in their data set.
Find the comments.
Editor’s note: The recent release of the Oracle Applications Cloud Simplified User Interface Rapid Development Kit represents the culmination of a lot of hard work from a lot of people. The kit was built, in large part, by Friend of the ‘Lab, Rafa Belloni (@rafabelloni), and although I tried to get him to write up some firsthand commentary on the ADF-fu he did to build the kit, he politely declined.
We’re developers here, so I wanted to get that out there before cross-posting (read, copying) the detailed post on the kit from the Usable Apps (@usableapps) blog. I knew I couldn’t do better, so why try? Enjoy.
Simplified UI Rapid Development Kit Sends Oracle Partners Soaring in the Oracle Applications Cloud
A glimpse into the action at the Oracle HCM Cloud Building Simplified UIs workshop with Hitachi Consulting by Georgia Price (@writeprecise)
Building stylish, modern, and simplified UIs just got a whole lot easier. That’s thanks to a new kit developed by the Oracle Applications User Experience (OAUX) team that’s now available for all from the Usable Apps website.
The Oracle Applications Cloud Simplified User Interface Rapid Development Kit is a collection of code samples from the Oracle Platform Technology Solutions (PTS) Code Accelerator Kit, coded page templates and Oracle ADF components, wireframe stencils and examples, coding best practices, and user experience design patterns and guidance. It’s designed to help Oracle partners and developers quickly build—in a matter of hours—simplified UIs for their Oracle Applications Cloud use cases using Oracle ADF page types and components.
The kit was put to the test last week by a group of Hitachi Consulting Services team members at an inaugural workshop on building simplified UIs for the Oracle HCM Cloud that was hosted by the OAUX team in the Oracle headquarters usability labs.
The results: impressive.
During the workshop, a broad range of participants—Hitachi Consulting VPs, senior managers, developers, designers, and architects—learned about the simplified UI design basics of glance, scan, commit and how to identify use cases for their business. Then, they collaboratively designed and built—from wireframe to actual code—three lightweight, tablet-first, intuitive solutions that simplify common, everyday HCM tasks.
Sona Manzo (@sonajmanzo), Hitachi Consulting VP leading the company’s Oracle HCM Cloud practice, said, “This workshop was a fantastic opportunity for our team to come together and use the new Rapid Development Kit’s tools and techniques to build actual solutions that meet specific customer use cases. We were able to take what was conceptual to a whole different level.”
Workshop organizer and host Ultan O’Broin (@ultan), Director, OAUX, was pleased with the outcome as well: “That a key Oracle HCM Cloud solution partner came away with three wireframed or built simplified UIs and now understands what remains to be done to take that work to completion as a polished, deployed solution is a big win for all.”
Equally important, said Ultan, is what the OAUX team learned about “what such an Oracle partner needs to do or be able to do next to be successful.”
According to Misha Vaughan (@mishavaughan), Director of the OAUX Communications and Outreach team, folks are lining up to attend other building simplified UI workshops.
“The Oracle Applications Cloud partner community is catching wind of the new simplified UI rapid development kit. I’m delighted by the enthusiasm for the kit. If a partner is designing a cloud UI, they should be building with this kit,” said Misha.
Ultan isn’t surprised by the response. “The workshop and kit respond to a world that’s demanding easy ways to build superior, flexible, and yet simple enterprise user experiences using data in the cloud.”
The Oracle Applications Cloud Simplified User Interface Rapid Development Kit will now be featured at Oracle OpenWorld 2014 OAUX events and in OAUX communications and outreach worldwide.
Our location is relentlessly tracked by our mobile devices. Our online transactions – both business and social – are recorded and stored in the cloud. And reams of biometric data will soon be collected by wearables. Mining this contextual data offers a significant opportunity to enhance the state of human computer interaction. But this raises the question: what exactly is ‘context’?
Consider the following sentence:
“As Michael was walking, he observed a bat lying on the ground.”
Now take a moment and imagine this scene in your mind.
Got it? Good.
Now a few questions. First, does the nearby image influence your interpretation of this sentence? Suppose I told you that Michael was a biologist hiking through the Amazonian rain forest. Does this additional information confirm your assumptions?
Now, suppose I told you that the image has nothing to do with the sentence, but instead it’s just a photograph I took in my own backyard and inserted into this post because I have a thing for flying mammals. Furthermore, what if I told you that Michael actually works as a ball boy at Yankee Stadium? Do these additional facts alter your interpretation of the sentence? Finally, what if I confessed that I have been lying to you all along, that Michael is actually in Australia, his last name is Clarke, and that he was carrying a ball gauge? Has your idea of what I meant by ‘bat’ changed yet again? (Hint – Michael Clarke is a star cricket player.)
The point here is that contextual information – the who, what, where, and when of a situation – provides critical insights into how we interpret data. In pondering the sentence above, providing you with context – either as additional background statements or through presumed associations with nearby content – significantly altered how you interpreted that simple sentence.
At its essence, context allows us to resolve ambiguities. What do I mean by this? Think of the first name of someone you work with. Chances are good that there are many other people in the world (or at your company if your company is as big as Oracle) with that same first name. But if I know who you are (and ideally where you are) and what you are working on, and I have similar information about your colleagues, then I can make a reasonably accurate guess as to the identity of the person you are thinking of without you having to explicitly tell me anything other than their first name. Furthermore, if I am wrong, my error is understandable to you, precisely because my selection was the logical choice. Were you thinking of your colleague Madhuri in Mumbai that you worked with remotely on a project six months ago? But I guessed the Madhuri that has an office down the hall from you in Redwood City and with whom you are currently collaborating? Ok, I was wrong, but my error makes sense, doesn’t it? (In intelligent human computer interactions, the machine doesn’t always need to be right as long as any errors are understandable. In fact, Chris Welty of IBM’s Watson team has argued that intelligent machines will do very well to be right 80% of the time – which of course was more than enough to beat human Jeopardy champions.)
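The colleague-name example above can be sketched as a simple scoring function. This is a toy illustration, not any real system: the directory records, the signals, and the weights are all invented to show how contextual evidence can rank candidates and produce an "understandable" best guess.

```python
# Toy sketch of context-based disambiguation: given only a first name,
# rank candidate colleagues by contextual signals. All names, weights,
# and signals here are invented for illustration.

def score(candidate, context):
    """Score a candidate match against the user's current context."""
    s = 0.0
    if candidate["location"] == context["location"]:
        s += 2          # physical proximity is a strong signal
    if candidate["project"] in context["active_projects"]:
        s += 3          # current collaboration is an even stronger one
    # Penalize candidates the user hasn't interacted with recently.
    s -= context["months_since_contact"].get(candidate["name"], 0) * 0.1
    return s

def resolve(first_name, directory, context):
    """Pick the most likely colleague matching this first name."""
    candidates = [p for p in directory if p["name"].startswith(first_name)]
    return max(candidates, key=lambda c: score(c, context))

directory = [
    {"name": "Madhuri (Mumbai)", "location": "Mumbai", "project": "Archive"},
    {"name": "Madhuri (HQ)", "location": "Redwood City", "project": "Voice"},
]
context = {
    "location": "Redwood City",
    "active_projects": {"Voice"},
    "months_since_contact": {"Madhuri (Mumbai)": 6, "Madhuri (HQ)": 0},
}
print(resolve("Madhuri", directory, context)["name"])  # → Madhuri (HQ)
```

Note that even when the guess is wrong, the scoring makes the error legible: the system picked the nearby, currently collaborating Madhuri because those were the strongest signals available.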
So why is the ability to use context to resolve ambiguities important? Because – using our example – I can now take the information derived from context and provide you with a streamlined, personalized user experience that does not require you to explicitly specify the full name of your colleague – in fact, you might not need to enter any name at all if I have enough contextual background about you and what you are trying to do.
When it comes to UX, context is actually a two-way street. Traditionally, context has flowed from the machine to the user, where layout and workflow – the consequence of both visual and interaction design – has been used to inform the user as to what something means and what to do next. But as the availability of data and the complexity of systems have grown to the point of overwhelming the user, visualizations and interactions alone are not sufficient to stem the tide. Rather, context – this time emanating from the user to the machine – is the key for achieving a more simplified, personalized user experience.
Context allows us to ask the right questions and infer the correct intentions. But the retrieval of the actual answers – or the execution of the desired task – is not part of context per se. For example, using context based on user identity and past history (demographic category, movies watched in the past) can help a recommendation engine provide a more targeted search result. But context is simply used to identify the appropriate user persona – the retrieval of recommendations is done separately. Another way to express this is that context is used to decide which view to put on the data, but it is not the data itself.
Finally, how contextual information is mapped to appropriate system responses can be divided into two (not mutually exclusive) approaches, one empirical, the other deductive. First, access to Big Data allows the use of machine learning and predictive analytics to discern patterns of behavior across many people, mapping those patterns back to individual personas and transaction histories. For example, if you are browsing Amazon.com for a banana slicer and Amazon’s analytics show that people who spend a lot of time on the banana slicer page also tend to buy bread slicers, then you can be sure you will see images of bread slicers.
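The empirical, behavior-pattern approach can be sketched as a minimal co-occurrence recommender in the spirit of "people who viewed X also bought Y." The session data below is invented for illustration, and a real system would use far richer signals than raw pair counts.

```python
# Minimal co-occurrence recommender: count how often items appear in the
# same browsing session, then surface the most frequent companions.
# Session data is invented for illustration.
from collections import Counter
from itertools import permutations

sessions = [
    ["banana slicer", "bread slicer"],
    ["banana slicer", "bread slicer", "egg slicer"],
    ["banana slicer", "garlic press"],
]

co_counts = Counter()
for items in sessions:
    for a, b in permutations(set(items), 2):
        co_counts[(a, b)] += 1

def also_viewed(item, n=2):
    """Items most often co-occurring with `item`, best first."""
    related = Counter({b: c for (a, b), c in co_counts.items() if a == item})
    return [b for b, _ in related.most_common(n)]

print(also_viewed("banana slicer"))  # bread slicer ranks first
```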
But while Big Data can certainly be useful, it is not required for context to be effective. This is particularly true in enterprise, where reasonable assumptions can be made from a semantic understanding of the underlying business model, and where information-rich employee data can be mined directly by the company. Are you a salesperson in territory A with customers X, Y, and Z? Well then it is safe to assume that you are interested in the economic climate in A as well as news about X, Y, and Z without you ever having to explicitly say so.
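The deductive alternative needs no behavioral data at all, just rules derived from the business model. A minimal sketch of the salesperson example, with the record shape and rules invented for illustration:

```python
# Sketch of deductive (rule-based) context: likely information needs are
# derived from role and assignments, no Big Data required. The employee
# record and the rules are invented for illustration.

def infer_interests(employee):
    """Derive likely information needs from the business model."""
    interests = []
    if employee["role"] == "salesperson":
        # A rep in a territory presumably cares about its economy...
        interests.append(f"economic climate in {employee['territory']}")
        # ...and about news concerning each assigned customer.
        interests += [f"news about {c}" for c in employee["customers"]]
    return interests

rep = {"role": "salesperson", "territory": "A", "customers": ["X", "Y", "Z"]}
for item in infer_interests(rep):
    print(item)
```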
So in closing, the use of context is essential for creating simple yet powerful user experiences – and like the term ‘user experience’ itself, there is no one single implementation of context – rather, it is a concept that should pervade all aspects of human computer interaction in its myriad of forms.
I just got back to my hotel room after attending the first day of a two-day Cognitive Computing Forum, a conference running in parallel with the Semantic Technology (SemTech) Business Conference and the NoSQL Conference here in San Jose. Although the forum attracts fewer attendees and has only a single track, I cannot remember attending a symposium where so many stimulating ideas and projects were presented.
What is cognitive computing? It refers to computational systems that are modeled on the human brain – either literally by emulating brain structure or figuratively through using reasoning and semantic associations to analyze data. Research into cognitive computing has become increasingly important as organizations and individuals attempt to make sense of the massive amount of data that is now commonplace.
The first forum speaker was Chris Welty, who was an instrumental part of IBM’s Watson project (the computer that beat the top human contestants on the gameshow Jeopardy). Chris gave a great overview of how cognitive computing changes the traditional software development paradigm. Specifically, he argued that rather than focus on perfection, it is ok to be wrong as long as you succeed often enough to be useful (he pointed to search engine results as a good illustration of this principle). Development should focus on incremental improvement – using clearly defined metrics to measure whether new features have real benefit. Another important point he made was that there is no one best solution – rather, often the most productive strategy is to apply several different analytical approaches to the same problem, and then use a machine learning algorithm to mediate between (possibly) conflicting results.
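Welty's "no one best solution" strategy can be sketched as a weighted vote over conflicting answers. This is a hand-rolled illustration, not anything from the Watson stack: the candidate answers are invented, and the fixed weights stand in for what would, in practice, be a trained arbitration model.

```python
# Sketch of mediating between several imperfect analyzers: each proposes
# an answer with a confidence weight, and a simple weighted vote picks
# the winner. Answers and weights are invented for illustration.
from collections import defaultdict

def vote(weighted_answers):
    """Combine possibly conflicting (answer, weight) pairs by weighted vote."""
    totals = defaultdict(float)
    for answer, weight in weighted_answers:
        totals[answer] += weight
    return max(totals, key=totals.get)

# Three hypothetical analyzers disagree on the same question; two weaker
# votes for the same answer outweigh one stronger dissent.
candidates = [("Toronto", 0.6), ("Chicago", 0.5), ("Chicago", 0.4)]
print(vote(candidates))  # → Chicago
```

In a real system the weights would themselves be learned from the metrics Welty described, so the arbiter improves incrementally along with the analyzers it mediates.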
There were also several interesting – although admittedly esoteric – talks by Dave Sullivan of Ersatz Labs (@_DaveSullivan) on deep learning, Subutai Ahmad of Numenta on cortical computing (which attempts to emulate the architecture of the neocortex) and Paul Hofmann (@Paul_Hofmann) of Saffron Technology on associative memory and cognitive distance. Kristian Hammond (@KJ_Hammond) of Narrative Science described technology that can take structured data and use natural language generation (NLG) to automatically create textual narratives, which he argued are often much better than data visualizations and dashboards in promoting understanding and comprehension.
However, the highlight of this first day was the talk entitled ‘Expressive Machines’ by Mark Sagar from the Laboratory for Animate Technologies. After showing some examples of facial tracking CGI from the movies ‘King Kong’ and ‘Avatar’, Mark described a framework modeled on human physiology that emulates human emotion and learning. I’ve got to say that even though I have a solid appreciation and understanding for the underlying science and technology, Mark’s BabyX – who is now really more a virtual toddler than an infant – blew me away. It was amazing to see Mark elicit various emotions from BabyX. Check out this video about BabyX from TEDxAuckland 2013.
At the end of the day, the presentations helped crystallize some important lines of thought in my own carbon-based ‘computer’.
First, it is no surprise that human computer interactions are moving towards more natural user interfaces (NUIs), where a combination of artificial intelligence, fueled by semantics and machine learning and coupled with more natural ways of interacting with devices, result in more intuitive experiences.
Second, while the back end analysis is extremely important, what is particularly interesting to me is the human part of the human computer interaction. Specifically, while we often focus on how humans manipulate computers, an equally interesting question is how computers can be used to ‘manipulate’ humans in order to enhance our comprehension of information by leveraging how our brains are wired. After all, we do not view the world objectively, but through a lens that is the result of idiosyncrasies from our cultural and evolutionary history – a fact exploited by the advertising industry.
For example, our brains are prone to anthropomorphism, and will recognize faces even when faces aren’t there. Furthermore, we find symmetrical faces more attractive than unsymmetrical faces. We are also attracted to infantile features – a fact put to good use by Walt Disney animators who made Mickey Mouse appear more infant-like over the years to increase his popularity (as documented by paleontologist Stephen Jay Gould). In fact, we exhibit a plethora of cognitive biases (ever experience the Baader Meinhof phenomenon?), including the “uncanny valley”, which describes a rapid drop off in comfort level as computer agents become almost – but not quite perfectly – human-looking. And as Mark Sagar’s work demonstrates, emotional, non-verbal cues are extremely important (the most impressive part of Sagar’s demo was not the A.I. – after all, there is a reason why BabyX is a baby and not a fully conversant adult – but rather the emotional response it elicited in the audience).
The challenge in designing intelligent experiences is to build systems that are informative and predictive but not presumptuous, tending towards the helpful personal assistant rather than the creepy stalker. Getting it right will depend as much on understanding human psychology as it will on implementing the latest machine learning algorithms.
I’ve been traveling a lot lately, which is bad. I’ve been consuming a lot of in-flight wifi, which is good, because there really should be no place on Earth where I’m unable to work.
Plus, it’s internets at 35,000 feet. How cool is that?
Today, I found myself in the throes of a decidedly first world problem. Of the many devices I carry, I couldn’t decide which one to use for the airplane wifi, which is, naturally, charged per-device.
Normally, I’d go with the tablet, since it’s a nice mix of form factors. The laptop is my preference, but I end up doing in-seat yoga to use it, not a good look.
But, horror of horrors, the tablet’s battery was at only 21%. For an Android tablet, that wouldn’t be enough to make it to my destination. I do carry a portable battery, but for some odd reason it won’t charge the Nexus 7.
Recursive, first world problems.
I debated smartphone vs. laptop for a minute or two before I realized what an awful, self-replicating, first world problem this was. So, I made a call and immediately did what anyone would do, tweeted about it.
What has become of me.
Editor’s note: Hey a new author! Here’s the first one, of many I hope, from Bill Kraus, who joined us back in February. Enjoy.
One of the best aspects of working in the emerging technologies team here in Oracle’s UX Apps group is that we have the opportunity to ‘play’ with new technology. This isn’t just idle dawdling, but rather play with a purpose – a hands-on exercise exploring new technologies and brainstorming on how such technologies can be incorporated into future enterprise user experiences.
Some of this technology, such as beacons and wearables, has obvious applications. The relevance of other technologies, such as quadcopters and drones, is more obscure (notwithstanding their possible use as a package delivery mechanism for an unnamed online retail behemoth).
As an amateur wildlife and nature photographer, I’ve dabbled in everything from digiscoping to infrared imaging to light painting to underwater photography. I’ve also played with strapping lightweight keychain cameras to inexpensive quadcopters (yes, I know I could get a DJI Phantom and a GoPro, but at the moment I prefer to test my piloting skills on something that won’t make me shed tears – and incur the wrath of my spouse – if it crashes).
After telling my colleagues recently over lunch about my quadcopter adventures (I’ve already lost several in the trees and waters of the Puget Sound), Tony, Luis, and Osvaldo decided to purchase their own and we had a blast at our impromptu ‘flight school’ at Oracle. The guys did great, and Osvaldo’s copter even had a tête-à-tête with a hummingbird, who seemed a bit confused over just what was hovering before it.
This is all loads of fun, but what do flying quadcopters have to do with the Internet of Things? Well, just as a quadcopter allows a photographer to get a perspective previously thought impossible, mobile technology combined with embedded sensors and the cloud have allowed us to break the bonds of the desktop and view data in new ways. No longer do we interact with digital information at a single point in time and space, but rather we are now enveloped by it every waking (and non-waking) moment – and we have the ability to view this data from many different perspectives. How this massive flow of incoming data is converted into useful information will depend in large part on context (you knew I’d get that word in here somehow) – analogous to how the same subject can appear dramatically different depending on the photographer’s (quadcopter assisted) point-of-view.
In fact, the Internet of Things is as much about space as it is about things – about sensing, interacting with and controlling the environment around us using technology to extend what we can sense and manipulate. Quadcopters are simply a manifestation of this idea – oh, and they are also really fun to fly.
Noel (@noelportugal) and Raymond have been working on a secret project. Here’s the latest:
So now you know why Noel bought the slap bands, but what goes in the case?
If you’ve been watching, you might know already.
The Oracle Technology Network (OTN) is designed to help Oracle users with community-generated resources. Every year the OTN team organizes worldwide tours that allow local users to learn from subject matter experts in all things Oracle. For the past few years the UX team has been participating in the OTN Latin America Tour as well as tours in other regions. This year I was happy to accept their invitation to deliver the opening keynote for the Mexico City tour stop.
The keynote title was “Wearables in the Enterprise: From Internet of Things to Google Glass and Smart Watches.” Given the AppsLab’s charter and reputation for cutting-edge technologies and innovation, it was really easy to put together a presentation deck on our team’s findings on these topics. The presentation was a combination of the keynote given by our VP, Jeremy Ashley, during MakerCon 2014 at Oracle HQ this past May and our proofs-of-concept using wearable technologies.
I also had a joint session with my fellow UX team member Rafael Belloni titled “Designing Tablet UIs Using ADF.” Here we had the chance to share how users can leverage two great resources freely available from our team:
- Simplified User Experience Design Patterns for the Oracle Applications Cloud Service (register to download e-book here)
- A starter kit with templates used to build Simplified UI interfaces (download kit here)
*Look for “Rich UI with Data Visualization Components and JWT UserToken validation extending Oracle Sales Cloud – 1.0.1”
These two resources are the result of extensive research done by our whole UX organization, and we are happy to share them with the Oracle community. Overall, it was a great opportunity to reach out to the Latin American community, especially my fellow Mexican friends.
Here are some pictures of the event and of Mexico City. Enjoy!
Editor’s note: I meant to blog about this today, but looks like my colleagues over at VoX have beat me to it. So, rather than try to do a better job, read do any work at all, I’ll just repost it. Free content w00t!
Although I no longer carry an iOS device, I’ve seen Voice demoed many times in the past. Projects like Voice and Simplified UI are what drew me to Applications User Experience, and it’s great to see them leak out into the world.
Oracle Extends Investment in Cloud User Experiences with Oracle Voice for Sales Cloud
By Vinay Dwivedi, and Anna Wichansky, Oracle Applications User Experience
Oracle Voice for the Oracle Sales Cloud, officially called “Fusion Voice Cloud Service for the Oracle Sales Cloud,” is available now on the Apple App Store. This first release is intended for Oracle customers using the Oracle Sales Cloud, and is specifically designed for sales reps.
Unless people record new information they learn (e.g., write it down, repeat it aloud), they forget a high proportion of it in the first 20 minutes. The Oracle Applications User Experience team has learned through its research that when sales reps leave a customer meeting with insights that can move a deal forward, it’s critical to capture important details before they are forgotten. We designed Oracle Voice so that the app allows sales reps to quickly enter notes and activities on their smartphones right after meetings, no matter where they are.
Instead of relying on slow typing on a mobile device, sales reps can enter information three times faster (pdf) by speaking to the Oracle Sales Cloud through Voice. Voice takes a user through a dialog similar to a natural spoken conversation to accomplish this goal. Since key details are captured precisely and follow-ups are quicker, deals are closed faster and more efficiently.
Oracle Voice is also multi-modal, so sales reps can switch to touch-and-type interactions for situations where speech interaction is less than ideal.
Oracle sales reps tried it first, to see if we were getting it right.
We recruited a large group of sales reps in the Oracle North America organization to test an early version of Oracle Voice in 2012. All had iPhones and spoke American English; their predominant activity was field sales calls to customers. Users had minimal orientation to Oracle Voice and no training. We were able to observe their online conversion and usage patterns through automated testing and analytics at Oracle, through phone interviews, and through speech usage logs from Nuance, which is partnering with Oracle on Oracle Voice.
Users were interviewed after one week in the trial; over 80% said the product exceeded their expectations. Members of the Oracle User Experience team working on this project gained valuable insights into how and where sales reps were using Oracle Voice, which we used as requirements for features and functions.
For example, we learned that Oracle Voice needed to recognize product- and industry-specific vocabulary, such as “Exadata” and “Exalytics,” and we requested a vocabulary enhancement tool from Nuance that has significantly improved the speech recognition accuracy. We also learned that connectivity needed to persist as users traveled between public and private networks, and that users needed easy volume control and alternatives to speech in public environments.
We’ve held subsequent trials, with more features and functions enabled, to support the 10 workflows in the product today. Many sales reps in the trials have said they are anxious to get the full version and start using it every day.
“I was surprised to find that it can understand names like PNC and Alcoa,” said Marco Silva, Regional Manager, Oracle Infrastructure Sales, after participating in the September 2012 trial.
“It understands me better than Siri does,” said Andrew Dunleavy, Sales Representative, Oracle Fusion Middleware, who also participated in the same trial.
This demo shows Oracle Voice in action.
What can a sales rep do with Oracle Voice?
Oracle Voice allows sales reps to efficiently retrieve and capture sales information before and after meetings. With Oracle Voice, sales reps can:
Prepare for meetings
- View relevant notes to see what happened during previous meetings.
- See important activities by viewing previous tasks and appointments.
- Brush up on opportunities and check on revenue, close date and sales stage.
Wrap up meetings
- Capture notes and activities quickly so they don’t forget any key details.
- Create contacts easily so they can remember the important new people they meet.
- Update opportunities so they can make progress.
Our research showed that sales reps entered more sales information into the CRM system when they enjoyed using Oracle Voice, which makes Oracle Voice even more useful because more information is available to access when the same sales reps are on the go. With increased usage, the entire sales organization benefits from access to more current sales data, improved visibility on sales activities, and better sales decisions. Customers benefit too — from the faster response time sales reps can provide.
Oracle’s ongoing investment in User Experience
Oracle gets the idea that cloud applications must be easy to use. The Oracle Applications User Experience team has developed an approach to user experience that focuses on simplicity, mobility, and extensibility, and these themes drive our investment strategy. The result is key products that refine particular user experiences, like we’ve delivered with Oracle Voice.
Oracle Voice is one of the most recent products to embrace our developer design philosophy for the cloud of “Glance, Scan, & Commit.” Oracle Voice allows sales reps to complete many tasks at what we call glance and scan levels, which means keeping interactions lightweight, or small and quick.
Are you an Oracle Sales Cloud customer?
Oracle Voice is available now on the Apple App Store for Oracle customers using the Oracle Sales Cloud. It’s the smarter sales automation solution that helps you sell more, know more, and grow more.
Will you be at Oracle OpenWorld 2014? So will we! Stay tuned to the VoX blog for when and where you can find us. And don’t forget to drop by and check out Oracle Voice at the Smartphone and Nuance demo stations located at the CX@Sales Central demo area on the second floor of Moscone West.
As part of a secret project Noel (@noelportugal) and Raymond are cooking up, Noel ordered some AppsLab-branded slap bands.
Anyway, I’m sure we’ll have some left over after the double-secret project. So, if you want one, let us know.
Find the comments.
So, back in January, Noel (@noelportugal) took a team of developers to the AT&T Developer Summit Hackathon in Las Vegas.
Although they didn’t win, they built some very cool stuff, combining Google Glass, Philips Hue, Internet of Things, and possibly a kitchen sink in there somewhere, into what can only be described as a smart holster. You know, for guns.
You read that right. This project was way out of our usual wheelhouse, which is what made it so much fun, or so I’m told.
Friend of the ‘Lab Martin Taylor was kind enough to produce, direct and edit the following video, in which Noel describes and demonstrates the holster’s capabilities.
Did you catch the bit at 3:06? That’s Raymond behind the mask.
Editor’s Note: Hey, a new author! Colleague and Friend of the ‘Lab Joyce Ohgi, a principal usability researcher here at Oracle Applications User Experience, joined several of our guys, plus tall man, all-around good dude and Friend of the ‘Lab Rafa Belloni (@rafabelloni), to form a super-powered team last week.
This is her story, as told from the inside. Enjoy.
I earned $600 in a coding challenge without writing a single line of code.
Well, strictly speaking, $600/7 = $85.71, 7 being the number of members on our team. The challenge in question? The Oracle Applications User Experience Beacons Developer Challenge, a contest between internal Oracle teams to devise a creative solution using Estimote’s beacons and Oracle Facilities data provided by Oracle Spatial.
We were given: the beacons, some sample data, icons, and images, an example app, a pack of poster gum to stick the beacons on walls, and the freedom to do whatever we could: 1) dream up and 2) execute in 48 hours.
Fast forward: Anthony Lai (@anthonslai) and I are standing in front of a room of developers and five judges about to give a presentation on our app, whose back end I still did not fully grasp. How did I get there?
My journey started two days before the official challenge start date. I ate lunch with Tony, one of the developers, and he suggested I join the team because “Why not? It’ll be fun.”
I had heard of the challenge but thought it wasn’t for someone like me, as my now-rusty coding skills were last used for an Intro to C programming class in college; what could I contribute to a contest whose purpose is literally to generate code? But I like Tony, and he promised me it would be fun. So I decided, well, if the team will have me, I’d like to try it out. So I signed up.
One day before the challenge: the team decides to meet in order to: 1) learn each other’s names and 2) come up with a list of ideas, which would be narrowed down once the contest started.
After we all introduced ourselves, the brainstorming began immediately and organically. But, to my surprise, not a single dev was taking notes. How were we going to remember all the ideas and organize ourselves?
As a researcher, one of the basic rules of my job is to always observe and always take notes.
I could be useful! I whipped out my handy iPad with keyboard case and typed away. But some of the ideas didn’t make sense to me, and for the good of the team, I realized I should also be voicing my questions and opinions, not just acting as the scribe.
But the team listened to me. They even agreed with me. Okay, they also disagreed with me sometimes. But they treated me with the same respect they treated each other.
Day of the challenge – final code check-in: Honestly, the whole coding challenge experience is a blur. As a researcher, I’m trained not just to always take notes, but also to take photos whenever possible to retain key details that could be otherwise forgotten.
I got so wrapped up in our project that I didn’t take a single photo of our group. I did take several pictures of our competition though.
Luckily Kathy Miedema dropped by to wish us luck and also snapped a picture.
As for the experience itself, I can only attempt to describe it by painting a picture in words.
We are all seated in the AUX Team’s little Design Room. Although all the chairs are occupied, silence reigns, interrupted only by the soft clicking of keyboards, and the occasional low conversation.
Usually, the mental image of collaboration is of a group of people talking together in a group. But in this case, even though it looked like we were all doing our own separate thing, it was intensely collaborative.
Each of our parts would need to come together by the deadline, so we did constant, impromptu, little check-ins to make sure the pieces we were building would integrate quickly.
I checked in constantly as well, seeking confirmation that, of the many research methodologies I could use, the ones I chose gave the team the data they needed, i.e. user interviews to capture wants, needs and task flows of the current processes and feedback sessions with key stakeholders.
By the way, if you are interested in learning more about research methodology, you can find more info at UX Direct.
So, back to Anthony and me, standing in front of a crowd, about to launch into our demo.
It was crazy; we didn’t have time to do a run-through before; we had some weird display lags using the projector and the Samsung Gear Live smartwatch; the script was too long, and we ran out of time.
Believe me, I have a list of things that we can improve upon for the next challenge, but our idea was good.
Technically, it was solid because of the team’s deep expertise, which, aggregated, probably comes close to 100 years of total development experience, and it was grounded in real users’ needs because of my research.
Happily, we won 2nd place, and $600. Next year, we’ll be gunning for 1st and the cool $1000 prize, which would net $142.86 for each of us.
All kidding aside, it’s not about the prize money or the recognition. It’s about people using their unique skill sets to build something better than any of them could have built on their own.
I will close with a text exchange between Anthony and me, post-challenge:
Me: Thx for letting me participate. I enjoyed seeing “your world” aka development.
Anthony: Uh oh. We are a test species to you.
Me: Don’t worry. A good researcher observes to understand, not to pass judgment.
And later, when I was fretting that I cost our team the win by not contributing any code, Anthony wrote to me:
Contributing code does not mean contributing; contributing does not mean contributing code.
Editor again: Joyce thought the post needed a closing. Thanks to Joyce, Rafa and our guys, Anthony, Luis, Osvaldo, Raymond and Tony for all their hard work. Consider the post closed. Oh, and find the comments.
So, if you read Part 1, you’re all up to speed. If not, no worries. You might be a bit lost, but if you care, you can bounce over and come back for the thrilling conclusion.
I first showed the Taleo Interview Evaluation Glass app and Android app at a Taleo and HCM Cloud customer expo in late April, and as I showed it, my story evolved.
Demos are living organisms; the more you show them, the more you morph the story to fit the reactions you get. As I showed the Taleo Glass app, the demo became more about Glass and less about the story I was hoping to tell, which was about completing the interview evaluation more quickly to move along the hiring process.
So, I began telling that story in context of allowing any user, with any device, to complete these evaluations quickly, from the heads-up hotness of Google Glass, all the way down the technology coolness scale to a boring old dumbphone with just voice and text capabilities.
I used the latter example for two reasons. First, the juxtaposition of Google Glass and a dumbphone sending texts got a positive reaction and focused the demo around how we solved the problem vs. “is that Google Glass?”
And second, I was already designing an app to allow a user with a dumbphone to complete an interview evaluation.
Side note, Noel has long been a fan of Twilio’s, and happily, they are an Oracle Partner. Ultan (@ultan) is hard at work dreaming up cool stuff we can do with Twilio, so stay tuned.
Anyway, Twilio is the perfect service to power the app I had in mind. Shortly after the customer expo ended, I asked Raymond to build out this new piece, so I could have a full complement of demos to show that fit the full story.
In about a week, Raymond was done, and we now have a holistic story to tell.
The interface is dead simple. The user simply sends text messages to a specific number, using a small set of commands. First, sending “Taleo help” returns a list of the commands. Next, the user sends “Taleo eval requests” to retrieve a list of open interview evaluations.
The user then sends a command to start one of the numbered evaluations, e.g. “Start eval 4,” and each question is sent as a separate message.
When the final question has been answered, a summary of the user’s answers is sent, and the user can submit the evaluation by sending “Confirm submit.”
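The command flow above is simple enough to sketch as a small state machine. This is a hypothetical illustration, not the actual code Raymond wrote; in the real demo, the dispatcher would sit behind a Twilio SMS webhook and pull evaluations from Taleo, but the routing logic itself needs nothing beyond the standard library. The evaluation data and question wording here are made up for the example:

```python
import re

# Hypothetical open evaluations; the real app would fetch these from Taleo.
OPEN_EVALS = {
    1: ["Rate the candidate's technical skills (1-5).",
        "Would you recommend hiring? (yes/no)"],
}

class EvalSession:
    """Tracks one texting user's progress through an evaluation."""

    def __init__(self):
        self.eval_id = None
        self.answers = []

    def handle(self, text):
        """Map an inbound SMS body to the reply the user should receive."""
        text = text.strip()
        if text.lower() == "taleo help":
            return "Commands: Taleo eval requests, Start eval <n>, Confirm submit"
        if text.lower() == "taleo eval requests":
            return "Open evaluations: " + ", ".join(str(i) for i in OPEN_EVALS)
        m = re.match(r"start eval (\d+)$", text, re.IGNORECASE)
        if m and int(m.group(1)) in OPEN_EVALS:
            self.eval_id = int(m.group(1))
            self.answers = []
            return OPEN_EVALS[self.eval_id][0]  # send the first question
        if self.eval_id is not None and text.lower() == "confirm submit":
            return "Evaluation %d submitted. Thanks!" % self.eval_id
        if self.eval_id is not None:
            # Treat anything else as an answer; reply with the next
            # question, or the summary once all questions are answered.
            self.answers.append(text)
            questions = OPEN_EVALS[self.eval_id]
            if len(self.answers) < len(questions):
                return questions[len(self.answers)]
            return ("Summary: " + "; ".join(self.answers)
                    + ". Reply 'Confirm submit' to finish.")
        return "Unrecognized command. Text 'Taleo help' for options."
```

Wiring this up would just mean returning each reply string from the Twilio webhook; the session object keeps the back-and-forth feeling like a conversation rather than a form.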
And that’s it. Elegant and simple and accessible to any manager, e.g. field managers who spend their days traveling between job sites. Coupled with the Glass app and the Android app, we’ve covered all the bases not already covered by Taleo’s web app and mobile apps.
As always, the disclaimer applies. This is not product. It’s simply a concept demo, built to show people the type of R&D we, Oracle Applications User Experience and this team, do. Not product, only research.
Find the comments.
Back in April, I got my first exposure to Taleo during a sales call. I was there with the AUX contingent, talking about Oracle HCM Cloud Release 8, featuring Simplified UI, our overall design philosophies and approaches, i.e. simplicity-mobility-extensibility, glance-scan-commit, and our emerging technologies work and future cool stuff.
I left that meeting with an idea for a concept demo, streamlining the interview evaluation process with a Google Glass app.
The basic pain point here is that recruiters have trouble urging the hiring managers they support through the hiring process because these managers have other job responsibilities.
It’s the classic Catch-22 of hiring; you need more people to help do work, but you’re so busy doing the actual work, you don’t have time to do the hiring.
Anyway, Taleo Recruiting has the standard controls, approvals and gating tasks that any hiring process does. One of these gating tasks is completing the interview evaluation; after interviewing a candidate, the interviewer, typically the hiring manager and possibly others, completes an evaluation of the candidate that determines her/his future path in the process.
Good evaluation, the candidate moves on in the process. Poor evaluation, the candidate does not.
Both Taleo’s web app and mobile app provide the ability to complete these evaluations, and I thought it would be cool to build a Glass app just for interview evaluations.
Having a hands-free way to complete an evaluation would be useful for a hiring manager walking between meetings on a large corporate campus or driving to a meeting. The goal here is to bring the interview evaluation closer to the actual end of the interview, while the chat is still fresh in the manager’s mind.
Imagine you’re the hiring manager. Rather than delaying the evaluation until later in the day (or week), walk out of an interview, command Glass to start the evaluation, have the questions read directly into your ear, dictate your responses and submit.
Since the Glass GDK dropped last Winter, Anthony has been looking for a new Glass project, and I figured he and Raymond would run with a Taleo project. They did.
The resulting concept demo is a Glass app and an accompanying Android app that can also be used as a dedicated interview evaluation app. Raymond and Anthony created a clever way to transfer data using the Bluetooth connection between Glass and its parent device.
Here’s the flow, starting with the Glass app. The user can either say “OK Glass” and then say “Start Taleo Glass,” or tap the home card, swipe through the cards and choose the Start Taleo Glass card.
The Glass app will then wait for its companion Android app to send the evaluation details.
Next, the user opens the Android app to see all the evaluations s/he needs to complete, and then selects the appropriate one.
Tapping Talk to Google Glass sends the first question to the Glass over the Bluetooth connection. The user sees the question in a card, and Glass also dictates the question through its speaker.
Tapping Glass’ touchpad turns on the microphone so the user can dictate a response, either choosing an option for a multiple choice question or dictating an answer for an open-ended question. As each answer is received by the Android app, the evaluation updates, which is pretty cool to watch.
The Glass app goes through each question, and once the evaluation is complete, the user can review her/his answers on the Android app and submit the evaluation.
The guys built this for me to show at a Taleo and HCM Cloud customer expo, similar to the one AMIS hosted in March. After showing it there, I decided to expand the concept demo to tell a broader story. If you want to read about that, stay tuned for Part 2.
Itching to sound off on this post, find the comments.
Update: The standard disclaimer applies here. This is not product of any kind. It’s simply a concept demo, built to show people the type of R&D we, Oracle Applications User Experience and this team, do. Not product, only research.
For what seems like ages, the noise around wearable technology has been building, but until recently, I’ve been skeptical about widespread adoption.
Not anymore: wearables are a thing, even without an Apple device to lead the way.
Last week, Noel (@noelportugal) and I attended the annual conference of the Oracle HCM Users Group (@ohugupdates); the Saturday before the conference, we showed off some of our wearable demos to a small group of customers in a seminar hosted by Oracle Applications User Experience.
As usual, we saturated the Bluetooth spectrum with our various wearables.
The questions and observations of the seminar attendees showed a high level of familiarity with wearables of all types, not just fitness bands, but AR glasses and other, erm, wearable gadgets. A quick survey showed that several of them had their own wearables, too.
Later in the week, chatting up two other customers, I realized that one use case I’d thought was bogus is actually real: the employee-benefits-plus-fitness-band story.
In short, employers give out fitness bands to employees to promote healthy behaviors and sometimes competition; the value to the organization comes from an assumption that the overall benefit cost goes down for a healthier employee population. Oh, and healthy people are presumably happier, so there’s that too.
At a dinner, I sat between two people, who work for two different employers, in very different verticals; they both were wearing company-provided fitness trackers, one a Garmin device, the other a Fitbit. And they both said the devices motivated them.
So, not a made-up use case at all.
My final bit of anecdotal evidence from the week came during Jeremy’s (@jrwashley) session. The room was pretty packed, so I decided to do some Bluetooth wardriving using the very useful Bluetooth 4.0 Scanner app, which has proven to be much more than a tool for finding my lost Misfit Shine.
From a corner of the room, I figured my scan covered about a third of the room.
That’s at least six wearables, five that weren’t mine. I can’t tell what some of the devices are, e.g. One, and devices like Google Glass and the Pebble watch won’t be detected by this method. We had about 40 or so people in the room, so even without scanning the entire room, that’s a lot of people rocking wearables.
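A tally like that is easy to sketch in code. This is purely illustrative: it assumes the scanner hands back (device name, RSSI) pairs, as the Bluetooth 4.0 Scanner app roughly does, and the keyword list and sample device names are invented for the example:

```python
# Substrings that suggest a wearable in a BLE advertisement name.
# Purely illustrative; a real list would be longer and more careful.
WEARABLE_HINTS = ("shine", "fitbit", "garmin", "gear", "band")

def count_wearables(scan_results):
    """Count advertisements whose names hint at a wearable.

    scan_results: iterable of (name, rssi) pairs, as a BLE scanner
    app might report them during a room sweep.
    """
    return sum(
        1 for name, _rssi in scan_results
        if any(hint in name.lower() for hint in WEARABLE_HINTS)
    )
```

Name-matching like this misses anything advertising a generic name, and it can’t see non-advertising devices like Glass or the Pebble, which is why a scan like mine undercounts.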
If you’re not impressed by my observations, maybe some fuzzy app-related data will sway you. From a TechCrunch post:
A new report from Flurry Analytics shows that health and fitness apps are growing at a faster rate than the overall app market so far in 2014. The analytics firm looked at data from more than 6,800 apps in the category on the iPhone and iPad and found that usage (measured in sessions) is up 62% in the last six months compared to 33% growth for the entire market, an 87% faster pace.
This data comes just as Apple and Google aim to boost the ecosystem for fitness apps and wearables with HealthKit and Google Fit, both of which aim to make it easy for wearable device manufacturers to share their data and app developers to use that data to make even better apps.
Of course, if/when Apple and Google make their plays, wearables will only get more prevalent.
So, your thoughts, about wearables, your own and other people’s, corporate wellness initiatives, your own observations, belong in the comments.
Here comes more Maker content for your reading pleasure; this time, it’s an OTN piece on Java and the Internet of Things:
The piece features lots of Noel (@noelportugal) wisdom, on making, on IoT, on the Raspi and on Java, his own personal fourfecta. If you’re scanning (shame on you), look for the User Experience and the Internet of Things section.
Here’s a very Noel quote:
“Java powers the internet, our banks, and retail enterprises—it’s behind the scenes everywhere,” remarks Portugal. “So we can apply the same architectures, security, and communication protocols that we use on the enterprise to an embedded device. I have used Arduino, but it would be hard to start a web server with it. But with Raspberry Pi, I can run a server, or, for example, use general-purpose I/O where I can connect sensors. The point is that developers can translate their knowledge of Java from developing on the enterprise to embedded things.”