Want to learn more about the Internet of Things?
Are you attending Oracle OpenWorld 2016 or JavaOne 2016? Then you are in luck! Once again we have partnered with the Oracle Technology Network (OTN) team to give OOW16 and JavaOne attendees an IoT hands-on workshop.
We will provide a free* IoT Cloud Kit so you can get your feet wet with one of the hottest emerging technologies. You don’t have to be an experienced electronics engineer to participate. We will go through the basics and show you how to connect a Wi-Fi microcontroller to the Oracle Internet of Things Cloud.
Note: OK, so that Gluon JavaOne app, 1) isn’t new this year and 2) I posted the wrong links. This year’s app is called JavaOne16, so look carefully. You can find the IoT Workshop signups under OTN Experiences.
Or find us at the OTN Lounge on Sunday afternoon. Workshops run all day, Monday through Wednesday of both conferences. Space is limited, and we may not be able to accommodate walkups, so do sign up if you plan to attend.
Then come to the OTN Lounge in Moscone South or the Java Hub at Hilton Union Square with your laptop and a micro-USB cable.
*Free? Yes free, while supplies last. Please make sure you read the Terms & Conditions (pdf).
“Supercharged Perseid Meteor Shower Peaks This Month” – as the very first edition of Daily Minor Planet brought us the news on August 4th, 2016.
Daily Minor Planet is a digital newspaper about asteroids and planetary systems. Each day it features an asteroid that is flying by Earth, or one of particular significance to that date. It also includes a section of news from different sources on the topics of asteroids and planets. Most interestingly, it embeds a dynamic orbit diagram showing the real-time positions of the planets and the day’s asteroid in the sky; you can drag the diagram to view them from different angles.
You can read the live daily edition on the Minor Planet Center website. Better yet, subscribe with your email address and get your daily dose of asteroid news in your inbox.
Daily Minor Planet is the result of a collaboration between Oracle Volunteers and the Minor Planet Center. Since the Asteroid Hackathon in 2014, we have followed up with a Phase I project, Asteroid Explorer, in 2015, which focused on asteroid data processing and visualization. This is the Phase II project, which focuses on public awareness and engagement.
The Oracle Volunteers on this phase were Chan Kim, Raymond Xie (me!), Kristine Robison, DJ Ursal and Jeremy Ashley. We have been working with Michael Rudenko and J.L. Galache from the Minor Planet Center for the past several months, and created a newspaper sourcing – editing – publishing – archiving system, with user subscription and daily email delivery functionality. During the first week of August, the Oracle volunteer team was on site to prepare and launch the Daily Minor Planet.
Check out the video of the launch event, which was hosted in Phillips Auditorium at the Harvard-Smithsonian Center for Astrophysics and live streamed on YouTube. The volunteers’ speech starts around the 29:00 minute mark:
It was quite an intense week as we worked to get everything ready for launch. In the end, as a reward, we got a chance to tour the Great Refractor at the Harvard College Observatory, located in the next building over.
By the way, the Perseid meteor shower this year will peak on August 12, and it is in an outburst mode with potentially over 200 meteors per hour. So get yourself ready and catch some shooting stars!
By now, I have played Pokemon GO for 2 weeks and have reached the coveted level 22. Pokemon GO (POGO) is a massively popular mobile game that has been a viral hit. It became the most popular paid app in the iOS store within a few days, received more than 10 million downloads within its first week, surpassed Twitter in daily active users (DAU) and has already generated $14.04 million in sales, which is 47% of the total mobile gaming market. It’s so popular that players are deciding where they eat based on a restaurant’s proximity to a Pokestop. In light of this, Yelp added a Pokestop filter to their web and mobile app. POGO’s popularity has led to massive server issues, which are common in the initial launches of massively multiplayer online role-playing games (MMORPGs) like SimCity, Diablo 3 and Guild Wars 2. To be fair, POGO wasn’t expected to be this popular outside of its already established fan base, and it is technically still in beta 0.29.3. I’m happy to say that the servers have stabilized and I rarely encounter any server issues during peak times. Despite the lack of communication and transparency that should be a part of any customer experience, Niantic worked hard and proved with their launch in Japan that server issues will be a thing of the past.
It’s safe to say that POGO has transformed how we experience reality through our phones. It did it so smoothly in a matter of days! There are a lot of game play dynamics at play, but before I get into that, there are a few external dimensions that led to the success of POGO.
- Demographic Appeal – The Pokemon IP comes with a ready fan base of mid-20s to 30-somethings who may already have children at the perfect age for mobile games. The brand consciously tries to appeal to people of all ages, with marketing schemes that try to draw new fans in and keep the old ones around. Just a few weeks before the POGO launch, I was at a Build-A-Bear factory at the West Edmonton Mall, only to find that the Pikachu skins I saw that morning were sold out when I visited again that night.
- Dev UX – Pokevision and Pokemon Go for Slack are a few add-ons that spun off from Niantic’s open API. It is rare for a mobile game to let its users build custom tools to enhance game play. Developers are users too.
- Android and iOS launch – I don’t remember an instance when a mobile game app launched on both the Android and iOS stores at the same time. The combination of single sign-in and the summer timing of the launch helped boost conversion.
- New technology – The seamless camera integration for Augmented Reality (AR) is the differentiator when comparing POGO to other mobile games. Seeing the small creatures layered over reality makes the game play experience seem more “real.”
- Dedication & Vision – The game was 20 years in the making, with one single vision of having a game layer over the world. UX is not just about being simple or beautiful, nor about having a set of easy-to-use features. That vision laid the groundwork for every design decision made and helped identify strategic opportunities for a potentially disruptive market (more on product thinking and UX vision). In POGO’s case, Keyhole and Ingress laid the groundwork for the success of the game. Its turning point was the 2014 viral success of finding Pokemon in Google Maps.
My Experience and the UI
A few others have written about the easy on-boarding of the game, so I will not rehash that.
I found that the game incentivizes things we already do when on the go (Pokemon GO :]). We go to work, we go out to eat, we walk the dog, we go to school, and we hang out in groups at parks and meetups. Walking helps us hatch eggs for rare or high-IV Pokemon and find other Pokemon hiding in the tall grass, while also getting us from point A to point B. Walking also gets us to gyms and Pokestops for much-needed supplies. Best of all, walking takes us to new places and outside to meet new people who share the common interest of playing POGO.
As a game that’s meant to be played quickly and on the go, everything happens within minutes. The traditional Pokemon game play is replaced with a minimum viable feature list. We don’t spend 5+ minutes battling a Pokemon, agonizing over move sets, IVs (Individual Values) or natures anymore. I mean…unless you want to. Instead, we spend at most 90 seconds trying to take over a gym, move sets are randomly selected for us, and catching a Pokemon is done with a quick flick of a finger.
The third-person avatar view and map overlay is just real enough to be believed. Our brains are harsh judges of things that try to be realistic but fall short. This is a challenge with virtual reality. When something is represented to us as real, we try to find the smallest discrepancies, which prevents us from suspending our disbelief; yet when we are presented with something that is not “real,” our brains easily accept that representation and fill in the gaps cognitively. As I walk around, I find myself amazed at how accurately my avatar seems to follow me. Even as I am standing still and turning to face different directions, my avatar follows smoothly.
Strangely, I felt connected to the real world as I was walking around zombified with my peers. Just the other day, a fellow player yelled “Starmie!” and the location of that Pokemon. Unsurprisingly, we all slowly got up and walked in a huge group to the other end of the park. Though we are still staring at our phones, we are really watching our avatar walk in a map overlay of the real world. We are aware of the river to the left of us and the road that is coming up ahead as depicted on the map. The Pokestops are easter eggs of wall murals, sculptures and other places of interest that I never would have stopped to look at before.
The game is as addictive as Facebook and Instagram are. There is a constant need to check if there is a new Pokemon spawn around the corner. Feedback is constant and instantaneous. Catching a Pokemon or spinning a Pokestop is just as satisfying as clicking on the red notification bubble or pulling to refresh a feed. Despite server issues, there were intrinsic rewards tied to every game mechanic.
Since we have to walk around to play an augmented reality game, we are encouraged to interact with strangers. On the game map you can find high-activity hot spots. Many times, in and out of work, I have had no problem befriending others while finding an elusive Dratini at the Oracle lagoon, or when picking a random picnic spot between high-traffic lures at Guadalupe Park and chatting about our latest Pokemon catches, know-how and food. Last week, I walked across the street to have lunch at my friend’s workplace. In the many times I’d been there before, I had talked to only 2 or 3 of her co-workers. This time, we had lunch, then joined a group of my friend’s Pokemon-hunting colleagues and found a rare Hitmonlee together. They shared the location of an Electabuzz spawn and taught my friend and me how to hunt nearby creatures using the on-screen compass.
Interestingly enough, there is no need to share your activity with your virtual social network, nor is there a push for you to purchase in-app items. How many times have you been asked to invite your friends to play in order to progress further in a game? How many times in a game is an advertisement for an in-app purchase persistent through every screen? I’m never bombarded with such eyesores in POGO. Everything seems more organic. It’s refreshing compared to the usual Facebook game apps and Candy Crush-style games that gamify your network to level up further.
Life has definitely changed as illustrated by this meme (credit: http://www.dorkly.com/post/79726/life-before-pokemon-go-and-after).
Thoughts on the Future of Pokemon Go
Having gone this far in the game, it’s hard to stay motivated. Catching Pokemon and gaining experience gets harder unless you are willing to shell out money for more pokeballs, lucky eggs and incubators. Other than the group meetups and the need to “catch ’em all,” the rewards for getting to the next level aren’t incentive enough. For those who have caught ’em all or have reached the grinding stages of the game (lvl 22+), Pokemon Go’s announcement at San Diego Comic-Con 2016 gave current players a reason to keep playing.
- They are planning on releasing the 6 elusive legendary Pokemon to round out the original 151 Pokemon in the 1st generation series.
- The teams we have chosen will have a bigger role in the story of the game. I’m excited that there will be a more immersive story that will translate into the real world. When I currently play with players from other teams, we are simply taking turns imperializing gyms. There is no purpose other than to gain experience points and bragging rights. Storytelling inspires and persuades; it makes a game more real and engaging. The three teams in POGO already have personas behind them that frame each player’s alter ego in the game. Just as people can easily identify with and exhibit character traits from Gryffindor, Slytherin, Hufflepuff and Ravenclaw in Harry Potter, I’ve seen groups in the real world exhibit the characteristics of their chosen team as they play the game. I’m excited to see what narrative Niantic has planned for us.
Since the goal of the game is for players to go out and explore and/or be on the move, what if there were a Fitbit-like band that counts your steps so you could hatch eggs while running on a treadmill? Could Google Glass make a comeback? Instead of staring at the phone, I could hunt for Pokemon while keeping my eyes on my real surroundings. How about a wearable like an Apple Watch that vibrates when a Pokemon comes up? I could just tap to automatically throw a ball without looking if I am riding a bike.
The game will definitely spur more AR games, which may or may not have as huge an impact as POGO but will increase the adoption of AR as a social media and marketing platform. McDonald’s is the first to partner with the game, turning every location in Japan into a gym. Eateries, museums and police stations have all been inserting themselves into the game by purchasing lures to attract crowds of players to their establishments.
So far I’ve only seen glimmers of what the game hopes to be. It’s not polished, and it lacks the social features that its Nintendo counterparts have woven in seamlessly. Despite all these setbacks, it’s still made a huge impact on our culture and technology. I’m excited to see how Niantic plays this out, especially as mixed reality devices like the Microsoft HoloLens and Magic Leap come to market.
A couple weeks ago, Noel (@noelportugal) and I were invited by the Oracle Apps UX Innovation Labs team to collaborate as technical mentors and judges in the BIAC Connected Communities, Connected Lives hackathon.
What would be the best place to hold a hackathon? Cancún. Beautiful weather, beautiful beach, Mexican food, what else could I ask for?
This hackathon was organized around the 2016 Organisation for Economic Co-operation and Development’s (OECD) Ministerial Meeting on the Digital Economy, which discussed topics like innovation, growth and social prosperity.
It was a big event with a bunch of important personalities from the political and economic fields.
The idea of the hackathon was to turn ideas into apps, showcase talent and collaborate with peers and technical mentors from all over the world to develop innovative solutions to a local or global challenge.
There were categories like Cultural Heritage, Smart City, Social Inclusion and Entrepreneurship.
Oracle was one of the organizing partners, along with AT&T, Cisco, Disney, Intel, Microsoft and Google.
AT&T was offering a $2,000 cash prize and up to four Nanodegree scholarships from AT&T and Udacity for the best use of the M2X API. Our team already had experience using this API from past AT&T hackathons, so we were selected by the AT&T team to mentor and to be judges for the AT&T Excellence Award.
We were also mentoring and judging for the Oracle Excellence Award.
Oracle gave the winners an expense-paid trip to the Oracle Mexico Development Center in Guadalajara for up to four team members, including a tour of Oracle facilities and interviews with senior management.
There were participants from México, Canada and Colombia, from all kinds of schools, public and private. Up to 170 developers were competing for the $10K USD grand prize.
The hackathon had excellent vibes. We did a lot of mentoring and supported many teams, and the teams’ ideas were spectacular. I was also able to use my limited experience with Unity and VR to help a team that was building VR rehabilitation experiences for low-income kids. That team won two categories: Social Inclusion and the Disney Excellence Award.
Noel and I also helped with IoT stuff. As you may know (or not), Noel has been a big fan of the ESP8266 WiFi module for a long time, and the hackathon was a good place to advocate for that module. I was pleasantly surprised to see teams using the Photon WiFi module for prototyping IoT; our team already has hands-on experience with it, and it’s worth a complete post, so stay tuned.
This was my first time voting for a winner in a competition, and I have to say, it’s pretty hard. Noel and I had a hard time deciding the winner of the AT&T Excellence Award, but we think we chose correctly based on all the arguments.
It was such a great experience participating on the other side of the competition as a mentor and judge, but I couldn’t resist my desire to code 🙂
For more on the event, check out this video made by the Oracle Apps UX Innovation Labs team:
Probably the best way to get to know your users is to watch them work, in their typical environment. That, and getting to talk to them right after observing them. It’s from that perspective that you can really see what works, what doesn’t, and what people don’t like. And this is exactly what we want to learn about in our quest to improve our users’ experience using Oracle software.
With that in mind, we’ve been eager to get out and do some site visits, particularly to learn more about supply chain management (SCM). For one, SCM is an area most of us on the team haven’t spent much time working on. But two, at least for me–working mostly in the abstract, or at least the virtual–there’s something fascinating and satisfying about how physical products and materials move throughout the world, starting as one thing and being manufactured or assembled into something else.
We had a contact at Micros, so we started there. Also, they’re an Oracle company, so that made it much easier. You’ve probably encountered Micros products even if you haven’t noticed them: Micros makes point of sale (POS) systems for retail and hospitality, meaning lots of restaurants, stadiums, and hotels.
For this particular adventure, we teamed up with the SCM team within OAUX, and went to Hanover, Maryland, where Micros has its warehouse operations, and where all of its orders are put together and shipped out across the world.
We observed and talked to a variety of people there: the pickers, who grab all the pieces for an order; the shippers, who get the orders ready to ship out and load them on the trucks; receiving, who takes in all the new inventory; QA, who have to make sure incoming parts are OK, as well as items that are returned; and cycle counters, who count inventory on a nightly basis. We also spoke to various managers and people involved in the business end of things.
In addition to following along and interviewing different employees, the SCM team ran a focus group, and the AppsLab team ran something like a focus group, but which is called a User Journey Map. With this research method, you have users map out their tasks (using sticky notes, a UX researcher’s best friend), while also including associated thoughts and feelings corresponding to each step of each task. We don’t just want to know what users are doing or have to do, but how they feel about it, and the kinds of questions they may have.
In an age where we’re accustomed to pressing a button and having something we want delivered in two days (or less), it’s helpful on a personal level to see how this sort of thing actually happens, and all the people involved in the background. On a professional level, you see how software plays a role in all of it—keeping it all together, but also imposing limits on what can be done and what can be tracked.
This was my first site visit, though I hope there are plenty more in the future. There’s no substitute for this kind of direct observation, where you can also ask questions. You come back tired, but with lots of notes, and lots of new insights.
Hard to believe it’s been nearly three years since we debuted the Leap Motion-controlled robot arm. Since then, it’s been a mainstay demo for us, combining a bit of fun with the still-emergent interaction mechanism, gesture.
Anthony (@anthonyslai) remains the master of the robot arm, and since we lost access to the original video, Noel (@noelportugal) shot a new one in the Gadget Lab at HQ where the robot arm continues to entertain visitors.
Interesting note: Amazon showed a very similar demo when they debuted AWS IoT. We nerds love robots.
We continue to investigate gesture as an interaction; in addition to our work with the Leap Motion as a robot arm controller and as a feature in the Smart Office, we’ve also used the Myo armband to drive Anki race cars, a project Thalmic Labs featured on their developer blog.
Gesture remains a Wild West, with no standards and different implementations, but we think there’s something to it. And we’ll keep investigating and having some fun while we do.
I previously had a Nexus 5, but over time its Bluetooth stopped working, and that was a good excuse to try this phone.
I was also excited because at SXSW I had a long talk with the Nextbit (@nextbitsys) development team about all the technology behind this phone; more details below.
Nextbit is a new company that wants to revolutionize handheld storage, and this first attempt is really good.
They came up with the Robin: it’s rectangular with tight corners that look uncomfortable at first, but it has a soft-touch finish and a decent weight balance. People tend to ask me if this is Google’s modular phone (Project Ara) or a new Lego phone. Either way, the conclusion is that it has a pretty cool, minimalistic design, and people like it a lot.
Speaking of the design, the power button on the right-hand side is also a fingerprint reader, and there are tiny volume buttons on the left-hand side. That’s probably the worst part of the build: the buttons are small, round, and of course kinda hard to press.
The power button does not protrude at all, so it’s hard to press too. The fingerprint reader is actually really good, though; accuracy and speed are on point. The side placement actually makes a lot of sense: you can register your left index finger and right thumb to match the way you grip the phone, and unlock it as soon as you pick it up.
It has a USB Type-C port at the bottom left corner with quick charging, and dual front-facing stereo speakers that are loud and clear. Quick charging is awesome.
It runs the latest version of Android 6 with a custom Nextbit skin, but all the elements feel pretty stock.
The specifications are pretty good too: a Snapdragon 808, 3 GB of RAM and a 2,680 mAh battery, which make the phone pretty smooth. The 13 MP rear camera delivers decent colors and details, but its dynamic range is weak.
I noticed it was very slow to actually take photos, but they just released a software update that fixes the shutter lag.
But let’s focus on the main spec of this phone: storage. All the magic is in the Nextbit skin. Every Robin comes with 32 GB of on-board storage plus 100 GB of free cloud storage. Now, you may be asking: why would you want cloud storage instead of on-board storage?
What happens is that Robin is supposed to be smart enough to offload the oldest and least frequently used stuff from internal storage straight to the cloud. So when you start to run out of local storage, old apps and old photos that haven’t been opened in a while are moved to the cloud, seamlessly making room in your local storage almost without you ever noticing.
In the application drawer you will notice that some app icons are grayed out; these are the apps that have been offloaded, stored in the cloud and no longer on the device. If you want to use one of them, it takes a minute or so to download everything in the state you last left it in, and then it opens up right where you left off. So it’s a process of archiving and restoring.
You can also keep apps from being archived by swiping an app icon down to pin it; pinned apps will never go to the cloud. If you use certain apps all the time, you shouldn’t even need to pin them, as Robin will notice that you use them a lot.
To save battery and avoid using your carrier data, the backup process happens only when the phone is on WiFi and charging.
The problem is that restoring depends entirely on the internet, so if you’re out somewhere with no data and want to use an app that is archived in the cloud, you’re pretty much out of luck.
Digging deeper: machine learning algorithms and cloud storage are integrated into the Android OS, so on-board storage merges with the cloud seamlessly. The machine learning mechanism learns from your app and photo usage. It can also think ahead: months before you would run out of storage, Robin anticipates that you’ll need more space and continually synchronizes apps and photos. Pictures are downsampled to screen resolution, with the full-size versions remaining linked in the cloud.
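The kind of offloading policy described above is easy to picture in code. Here’s a minimal, hypothetical Python sketch of a least-frequently/least-recently-used eviction pass; this is not Nextbit’s actual algorithm, and the `Item` fields and thresholds are invented for illustration:

```python
import time
from dataclasses import dataclass

@dataclass
class Item:
    name: str
    size_mb: int
    last_used: float      # epoch seconds of last open
    uses_per_week: float  # rough usage frequency
    pinned: bool = False  # swiped down by the user, never archived

def pick_offload_candidates(items, free_mb, target_free_mb):
    """Choose items to archive to the cloud until the free-space target is met."""
    # Rank unpinned items: least frequently used first, oldest as tiebreaker.
    candidates = sorted(
        (i for i in items if not i.pinned),
        key=lambda i: (i.uses_per_week, i.last_used),
    )
    to_archive = []
    for item in candidates:
        if free_mb >= target_free_mb:
            break
        to_archive.append(item)
        free_mb += item.size_mb
    return to_archive
```

Pinned items are always skipped, mirroring the swipe-down gesture, and in the real system a pass like this would only run while the phone is on WiFi and charging.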
As for security concerns, all data stored in the cloud is encrypted with Android’s built-in encryption.
I like the idea behind the Robin system, but the cool thing is that you can also use it like a normal phone: you can use your launcher of choice, and even root it. The bootloader is actually unlocked out of the box, and the phone is still under warranty.
It’s a pretty good phone for the price even apart from the storage solution, but if you are looking for a phone with lots of local storage, I’d look for something with a microSD card slot. Otherwise it’s definitely worth considering; I would use it as my main phone.
It’s cool to see this type of cloud-based storage solution in action.
Architects design space. A building is just a way to create spaces. Information architects at Oracle design relationships with abstract concepts. So far the main way we have to create visible spaces for our users is by projecting pixels onto glass screens.
This may change someday. If the promise of virtual reality is ever achieved, we may be able to sculpt entirely new realities and change the very way that people experience space.
One sneak peek into this possible future is now on display at Pace Gallery in Menlo Park. Last week the AppsLab research and design team toured the Living Digital Space and Future Parks exhibit by the renowned Japanese art collective teamLab.
Still photographs do not do this exhibit justice. Each installation is a space which surrounds you with moving imagery. Some of these spaces felt like VR without the goggles – almost like being on a holodeck.
The artwork has a beautiful Japanese aesthetic. The teamLab artists are exploring a concept they call ultra subjective space. Their theory is that art shapes the way people of different cultures experience space.
Since the Renaissance, people in the West have been taught to construct their experience of spatial reality like perspective paintings, with themselves as a point observer. Premodern Japanese art, in contrast, might have taught people to experience a very different, flattened perspective which places them inside each space: subjective instead of objective.
To explore this idea, teamLab starts with three dimensional computer models and uses mathematical techniques to create flattened perspectives which then form the basis for various animated experiences. I can’t say that the result actually changed my perception of reality, but the experience was both sublime and thought-provoking.
Their final installation was kid-centric. In one area, visitors were given paper and crayons and were asked to draw spaceships, cars, and sea creatures. When you placed your drawing under a scanner it became animated and was immediately projected onto one of two giant murals. We made an AppsLab fish and an AppsLab flying saucer.
Another area lets you hop across virtual lily pads or build animated cities with highways, rivers, and train tracks by moving coded wooden blocks around a tabletop. I could imagine using such a tabletop to do supply chain management.
Ultra subjective space is a pretty high brow concept. It’s interesting to speculate that ancient Japanese people may have experienced space in a different way than we do now, though I don’t see any way of proving it. But the possibility of changing something that fundamental is certainly an exciting idea. If virtual reality ever lets us do this, the future may indeed be not just stranger than we imagine, but stranger than we can imagine.
Living Digital Space and Future Parks will be on display at the Pace Gallery in Menlo Park through December 18, 2016.
Numbers are a property of the universe. Once Earthians figured that out, there was no stopping them. They went as far as the Moon.
We use numbers in business and life. We measure, we look for oddities, we plan. We think of ourselves as rational.
I, for one, like to look at the thermometer before deciding if I shall go out in flip-flops or uggs. But I cannot convince my daughter to do the same. She tosses a coin.
More often than we like to think, business decisions are made the way my daughter decides on what to wear. I need an illustration here, so let me pick on workers’ compensation. If you have workers, you want to reward them for good work, and by doing that, encourage the behaviors you want to see more of – you want them to work harder, better, quicker, and be happy. You can measure productivity by amount of, say, shoes sold. You can measure quality by, say, number of customers who came back to buy more shoes. You can measure happiness, by, say . . . okay, let’s not measure happiness. How do you calculate what the worker compensation shall be based on these two measures?
50/50? 75/25? 25/75? Why? Why not? This is where most businesses toss a coin.
Here is an inventory of types of questions people tend to answer by tossing a coin:
- Should you monitor the dollar amount of sales, or the percentage of sale increase?
- Which of the two measures lets you better predict future performance?
- Why would it?
- How accurate are the predictions?
- How big shall the errors be until you feel the measure doesn’t make accurate predictions? Why?
- Which measures shall be combined and looked at together?
- In which way?
- Where would you set up thresholds between good, bad, and ugly?
- Why? Why not?
- If some numbers are way off, how do you know it is an exception and not part of some pattern that you don’t see?
If not by tossing a coin, it is common practice to answer these kinds of questions based on a gut feeling. To answer them based on evidence instead, there shall be a way to evaluate the gut feeling, together with a bunch of other hypotheses, in order to choose a hypothesis that is actually true and works. This is hard for humans, and not only because it requires a lot of modeling and computation.
Conceptually, as humans, we tend to look for reasons and explain things. It is hard for us to see a pattern if we don’t see why it works. “I wouldn’t have seen it if I hadn’t believed it,” as one wise person said. Admit it, we are biased. We won’t even consider evaluating a hypothesis that looks like complete nonsense.
Computers, on the other hand, don’t have such a problem. Machine learning can create and test thousands of crazy hypotheses for you and select the best one. That is, the best one at predicting, not explaining. It can also keep updating the hypotheses as conditions change.
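To make that concrete, here is a toy, entirely hypothetical Python sketch of brute-force hypothesis testing for the shoe-store bonus question from earlier. The data is randomly generated with a hidden “true” weighting baked in; a real system would use historical records:

```python
import random

# Toy, invented data: per-worker (sales_score, returning_score, improved_next_year).
random.seed(42)
history = []
for _ in range(200):
    sales, returning = random.random(), random.random()
    # Hidden "truth" baked into the toy data: returning customers matter 3x more.
    improved = (0.25 * sales + 0.75 * returning) > 0.5
    history.append((sales, returning, improved))

def accuracy(weight_sales):
    """How well a weight_sales / (1 - weight_sales) bonus split predicts improvement."""
    hits = 0
    for sales, returning, improved in history:
        score = weight_sales * sales + (1 - weight_sales) * returning
        hits += (score > 0.5) == improved
    return hits / len(history)

# Evaluate every weighting from 0/100 to 100/0 and keep the best predictor,
# instead of tossing a coin between 50/50 and 75/25.
best = max((w / 100 for w in range(101)), key=accuracy)
```

Note that the machine happily tests 25/75 alongside weightings no human would bother proposing, and picks whichever predicts best, with no explanation attached.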
That’s why I believe AI is a new BI. It is more thorough and less biased than we humans are. Therefore, it is often more rational.
I am fascinated to learn about ML algorithms and what they can do for us. Applying the little I have learned about Decision Trees to the workers’ compensation dilemma above, this is what I get. Let’s pretend the workers get a bonus at the end of the year. The maximum amount of the bonus is based on their salary, but the exact amount is a percentage of that maximum based on performance: partially on the amount of sales, partially on the number of returning customers. These are your predictors. Your goal in paying out the bonus is that next year your workers have an increased amount of sales AND an increased number of returning customers at the same time. That’s your outcome.
The Decision Tree algorithm will look at each possible combination of your predictors and measure which one best divides your outcomes into categories. (Formally, it picks the split that minimizes entropy and maximizes information gain.)
If we tried to do that “by hand,” it would take ages. But here we have the most effective bonus recipes figured out for us. Some of the recipes may look counter-intuitive; we may find out that the largest bonus is not the best encouragement, or some such. But, again, figuring out the “whys” is a different problem.
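To make that split measure concrete, here is a minimal sketch in Python. The data, feature names, and thresholds are invented for illustration (a real library such as scikit-learn searches all candidate splits for you); the point is only to show entropy and information gain at work:

```python
# Sketch of the split measure a decision tree uses: entropy and
# information gain. The data and thresholds are made up for illustration.
from math import log2

def entropy(labels):
    """Shannon entropy of a list of 0/1 outcomes, in bits."""
    if not labels:
        return 0.0
    p = sum(labels) / len(labels)
    if p in (0.0, 1.0):
        return 0.0
    return -p * log2(p) - (1 - p) * log2(1 - p)

def information_gain(rows, labels, feature, threshold):
    """Entropy drop when splitting rows on feature <= threshold."""
    left = [y for x, y in zip(rows, labels) if x[feature] <= threshold]
    right = [y for x, y in zip(rows, labels) if x[feature] > threshold]
    n = len(labels)
    weighted = (len(left) / n) * entropy(left) + (len(right) / n) * entropy(right)
    return entropy(labels) - weighted

# Predictors per worker: (bonus % of max, sales, returning customers)
rows = [(100, 50000, 10), (80, 62000, 14), (60, 45000, 9),
        (90, 70000, 15), (40, 43000, 8), (70, 66000, 13)]
# Outcome: 1 if next year both sales and returning customers grew.
labels = [0, 1, 0, 1, 0, 1]

# Splitting on sales at 56000 perfectly separates the outcomes:
print(information_gain(rows, labels, feature=1, threshold=56000))  # 1.0
# Splitting on bonus percentage at 75 separates them poorly:
print(information_gain(rows, labels, feature=0, threshold=75))
```

The algorithm simply tries every such (feature, threshold) pair and keeps the one with the highest gain – the “crazy hypothesis testing” that computers do without blinking.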
And here is my little classification of business intelligence tasks that I believe AI can take over and improve upon.
As a human and a designer who welcomes our machine learning overlords, I see their biggest challenge in overcoming our biggest bias, the one of our superior rationality.
Just like last year, a few members (@jkuramot, @, @, Tony and myself) of @theappslab attended Kscope16 to run a Scavenger Hunt, speak and enjoy one of the premier events for Oracle developers. It was held in Chicago this time around, and here are my impressions.
Since our Scavenger Hunt was quite a success the previous year, we were asked to run it again to spice up the conference a bit. This was the 4th time we ran the Scavenger Hunt (if you want to learn more about the game itself, check out Noel’s post on the mechanics), and by now it runs like a well-oiled machine. The competition was even fiercer than last year with a DJI Phantom at stake, but in the end @ prevailed; congratulations to Alan. @ was the runner-up and walked away with an Amazon Echo, and in 3rd place, @ got a Raspberry Pi for his efforts.
There were also consolation prizes for the next 12 places; they each got both a Google Chromecast and a Tile. All in all, it was another very successful run of the Scavenger Hunt, with over 170 participants and a lot of buzz surrounding the game. Here’s a quote from one of the players:
“I would not have known so many things, and tried them out, if there were not a Scavenger Hunt. It is great.”
Better than Cats. We haven’t decided yet if we are running the Scavenger Hunt again next year; if we do, it will probably be in a different format. Our brains are already racing.
Our team also had a few sessions: Noel talked broadly about OAUX, and I gave a presentation about Developer Experience, or DX. As is always the case at Kscope, the sessions are pretty much bi-directional, with the audience participating as you deliver your presentation. Some great questions were asked during my talk, and I was even able to record a few requirements for API Maker, a tool we are building for DX.
Judging by the participation of the attendees, there seems to be a lot of enthusiasm in the developer community for both API Maker and 1CSS, another tool we are creating for DX. As a result of the session, we have picked up a few contacts within Oracle which we will explore further to push these tools and get them out sooner rather than later.
In addition to all those activities, Raymond ran a preview of an IoT workshop we plan to replicate at OpenWorld and JavaOne this year. I won’t give away too much, but it involves a custom PCB.
Unfortunately, my schedule (Scavenger Hunt, presentation) didn’t really allow me to attend any sessions but other members of our team attended a few, so I will let them talk about that. I did, however, get a chance to play some video games.
And have some fun, as is customary at Kscope.
Are you attending Kscope16? If so, you are in luck, @theappslab team will be back this year (by popular demand) to do a Scavenger Hunt. This year there are even more chances to win, plus check out these prizes:
- First place: DJI Phantom Drone
- Second place: Amazon Echo
- Third place: Raspberry Pi
Our first scavenger hunt took place last year at Kscope15. Here’s a quick video detailing the whats, whys and wherefores of the game from our fearless leader and Group Vice President, Jeremy Ashley (@jrwashley) and me.
After that, we replicated the experience for an OTN Community Quest at OpenWorld and JavaOne 2015 and then for the UKOUG App15 and Tech15 Conference Explorer. We have had great fun seeing participants engaged. We are very proud of the game engine we built for the scavenger hunt, bringing together software and IoT. If you are interested to see how it all works check out our post “Game Mechanics of a Scavenger Hunt“.
Check the Kscope16 Scavenger Hunt site for more information on how to join and play during the annual ODTUG user group shindig. You can even sign up to play during your registration process.
We have some interesting twists in store, and we’re hoping for an even larger group of engaged players this year.
See you there!
Twilio Signal Conference ended with an after-party called the $Bash night. Twilio set up booths with geeky games like programming, program debugging, computer building, etc. They also had a foosball table for 16 people. I think it is one of the nicest parties for geeks I have attended so far: a fun night with music, drinks, food, and games, tuned for developers.
During that morning’s keynote, Jeff Lawson (Twilio Founder) had a virtual meeting with Rony Abovitz (Magic Leap Founder), and they announced that the winner of the $Bash night would get access to Magic Leap. Magic Leap is so mysterious that I had a great urge to win the $Bash night to be able to play and build something with it.
It turned out that by competing with other developers during the $Bash night, you could win raffle tickets, and the person with the most raffle tickets by the end of the night would be the winner. So all night I went all out playing and competing. The environment was too dark to take good-quality pictures, but you can find some info here.
There were two games I did quite well in and enjoyed: 1. a program-debugging competition among 6 developers, 2. pairing up to move Jenga blocks with a robot arm. At the end of the night, although I tried my best, I came in second. At first I was quite disappointed; however, I was told there is still a very good chance of a second Magic Leap spot being offered to me. I shall keep my hopes up and wait and see.
Let’s dive into the Twilio sessions.
The sessions were generally divided into the following 4 tracks:
See the latest progress in software and cloud communications, talk shop with Twilio engineers who developed them, and get in to the details on how to use the software.
Hear from industry experts shaping the future of tech with the latest software.
Get details on hurdles, tricks, and solutions from Twilio customers on building communications with software APIs.
Define business plans for modern communications with real-life ROI and before-and-after stories.
My interest was mainly in the Inspire track, and with AI and virtual assistants being the hot topics nowadays, those were the sessions I targeted at the conference.
This half year has truly been the “half year of virtual assistants,” with the announcements of the controversial Tay and Cortana from Microsoft, the Messenger bot from Facebook, Allo from Google I/O, and Siri from WWDC yesterday. Every giant wants to squeeze into the same space and get a share of it. There were a lot of sessions about bots at Signal, and I had a feeling that Twilio had carefully hand-picked the sessions to suit the audience. IBM, Microsoft, and Slack all presented their views and technologies around bots, and I learned a lot from them. It is a bit odd that api.ai sponsored the conference lunch and had a booth at the conference, but did not present in any sessions (afaik).
On the schedule there was a session called Terrible Ideas in Git by Corey Quinn. I love Git, and when I saw the topic, my immediate reaction was: how can anyone say Git is terrible (and be even remotely right)? I just had to go and take a look. To my surprise, it was a very funny talk; I had a good laugh and enjoyed it a lot. I am glad I did not miss that session.
This year I attended the Twilio Signal Conference. As in its first year, it was held at Pier 27, San Francisco. It was a 2-day, action-packed conference with a keynote session each morning and sessions afterward until 6 pm.
The developer experience provided by the conference was superb compared to a lot of other developer conferences nowadays. Chartered buses with wifi were provided for commuters using different transit lines. Snacks were served all day. There were six 30-minute sessions to choose from in every time slot, so there was no need to wait in line and you could always attend the sessions you wanted (sorry, Google I/O). For developers, at least for me, the most important thing was a special coffee stall that opened every morning to serve freshly brewed coffee to wake you up and energize you for the rest of the day. With the CEO, among others, coding right in front of you in a keynote session to show some demos, it is as true a developer conference as you could hope for.
Twilio announced a lot of new products and features at Signal, and I won’t spend time recapping them here; you can read more info here and here. The interesting thing to note is how Twilio got so huge. It started off with a text messaging service; it now also provides services for video, authentication, phone, and routing. It is the engine under the hood for fast-growing companies like Lyft and Uber. It now offers the most complete messaging platform for developers to connect to their users, has the capability to reroute your numbers and tap into phone conversations, and partners with T-Mobile to get into the IoT domain. Twilio’s ambition and vision are not small at all. The big question is: how does Twilio achieve all this? The answer can be controversial, but for me, it all boils down to simplicity: making things really easy, really good, and just working. The Twilio APIs are very easy to use and do exactly what they say, no more, no less. Their reliability is superb. That is what developers want and rely on.
But wait, there’s more. Check out my thoughts on the sessions at Signal and my $Bash night experience. I almost won a chance to play with the mysterious Magic Leap, and I might yet get access for finishing second. Stay tuned.
Editor’s note: We just returned from Holland last week where we attended AMIS 25, which was a wonderful show. One of the demos we showed was the Smart Office; Noel (@noelportugal) also gave a presentation on it.
We’ve been showing the Smart Office since OOW last year, and it remains one of our most popular demos because it uses off-the-shelf components that are available today, e.g. Amazon Echo, Leap Motion, Philips Hue lights, beacons, etc., making it an experience that anyone could replicate today with some development work.
In early 2015, the AppsLab team decided we were going to showcase the latest emerging technologies in an integrated demo. As part of the Oracle Applications User Experience group, our main goal as the emerging technologies team is to design products that will increase productivity and user participation in Oracle software.
We settled on the idea of the Smart Office, which is designed with the future of enterprise workplaces in mind. With the advent of the Internet of Things and more home automation in consumer products, users are expecting similar experiences in the workplace. We wanted to build an overall vision of how users will accomplish their tasks with the help of emerging technologies, no matter where they might be working.
Technologies such as voice control, gesture, and proximity have reached what we consider an acceptable maturity level for public consumption. Inexpensive products such as the Amazon Echo, Leap Motion and Bluetooth beacons are becoming more common in users’ daily lives. These examples of emerging technology have become cornerstones in our vision for the Smart Office.
Wearable technology also plays an important role in our idea of the Future of Work. Smart watches are becoming ubiquitous, and the price of wireless microprocessors continues to decrease. Dedicated mobile devices, our research shows, can increase productivity in the workplace when they are properly incorporated into the user experience as a whole.
Building for you, a Sales Cloud example
We first created what we call a user persona to assist us in building the Smart Office. This helps us develop very specific work flows using very specific technology that can be widely applied to a variety of software users. In this case, we started with a sales example as they are often mobile workers.
Sally Smith, our development example for the Smart Office, is a regional sales vice president who is traveling to her headquarter’s office. Traveling to another office often requires extra effort to find and book a working space. To help Sally with that task, we built a geo-fence-enabled mobile app as well as a Smart Badge. Here’s what these two components help her do:
- As Sally approaches the office building, her mobile device (using geo-fencing capabilities) alerts her via her smart watch and helps her find her way to her an available office space, using micro-location with beacons. She uses her Smart Badge, which has access to data about her employee status, to go through the security doors at the office building.
- As Sally approaches the available office space, her Smart Badge proximity sensor (a Bluetooth beacon) connects with a Lighthouse, which is a small touch-screen device outside the office space that displays space availability and works as the “brain” to control IoT devices inside the space. The proximity with the Lighthouse triggers a second confirmation to her smart watch to unlock the office and reserve the space in the company’s calendar system. This authenticates her reservation in two ways.
- As Sally enters the office, her global preferences are loaded into the office “brain.” Settings such as light brightness and color (Hue Lights), and room temperature (Nest Thermostat) are set to her liking.
- The office screens then start to load Sally’s familiar pictures as well as useful data relative to her location, such as weather or local events, on two Infoscreens. An Infoscreen is a Wi-Fi-enabled digital frame or LCD screen hung on the wall.
Sally has already interacted with her Smart Office in several ways. But up to this point, all of the interactions have been triggered or captured by emerging technology built into mobile devices that she is carrying with her. Now, she is ready to interact more purposefully with the Smart Office.
- Sally uses the Amazon Echo voice control to talk to the office: “Alexa, start my day.” Since she has been authenticated by the system already, it knows that the Oracle Sales Cloud is the application she is most likely to need, and the welcome page is now loaded in the touchscreen at the desk. She can use voice navigation to check on her opportunities, leads, or any other section of the Sales Cloud.
- Sally was working on the plane with Oracle Sales Cloud, but she did not have a chance to save her work before landing. Session portability is built into the cloud user experience, which takes care of saving her work when she is offline. Now that she is sitting inside the Smart Office and back online, she just swipes her screen to transfer her incomplete work onto the desktop screen.
- The Smart Office also uses empty wall space to project data throughout the day. On this Ambient Screen, Sally could use her voice (Amazon Echo), or hand gestures (Leap Motion), to continue her work. Since Sally has a global sales team, she can use the Ambient Screen to project a basic overview of her team performance metrics, location, and notifications.
- If Sally needs to interact with any of the notifications or actions she sees on the Ambient Screen, she can use a grab-and-throw motion to bring the content to her desk screen. She can also use voice commands to call up a team map, for example, and ask questions about her team such as their general location.
- As Sally finishes her day and gets ready to close her session inside the Smart Office, she can use voice commands to turn everything off.
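The proximity steps in Sally’s walkthrough can be sketched in a few lines of code. This is a hypothetical simplification – the RSSI threshold, badge ID, and handler shape are invented for illustration, not the actual Smart Office implementation – showing how a Lighthouse might range the badge’s beacon by signal strength and trigger the watch confirmation only when the badge is close and the space is free:

```python
# Hypothetical sketch of the Lighthouse proximity check. The RSSI
# threshold, badge ID, and returned action are invented for illustration.

NEAR_RSSI = -60  # dBm; stronger (less negative) signal means closer

def rssi_to_zone(rssi):
    """Map a beacon's signal strength to a coarse proximity zone."""
    if rssi >= NEAR_RSSI:
        return "near"
    elif rssi >= -85:
        return "far"
    return "out_of_range"

def on_beacon_seen(badge_id, rssi, space):
    """Called whenever the Lighthouse hears the badge's BLE advertisement."""
    if rssi_to_zone(rssi) != "near":
        return None  # badge not close enough yet
    if not space["available"]:
        return None  # space already reserved
    # Second-factor confirmation goes to the smart watch; on approval
    # the space is reserved in the calendar and the door unlocked.
    return {"action": "confirm_on_watch", "badge": badge_id,
            "space": space["name"]}

print(on_beacon_seen("badge-1234", -52, {"name": "HQ-2-105", "available": True}))
```

The two-way handshake (badge near the Lighthouse, plus confirmation on the watch) is what gives the reservation its second authentication factor.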
Find out more
The Smart Office was designed to use off-the-shelf components on purpose. We truly believe that the Future of Work no longer relies on a single device. Instead, a set of cloud-connected devices help us accomplish our work in the most efficient manner.
For more on how we decide which pieces of emerging technology to investigate and develop in a new way for use in the enterprise world, read “Influence of Emerging Technology,” on the Usable Apps website.
See this for yourself and get inspired by what the Oracle Applications Cloud looks like when it’s connected to the future. Request a lab tour.
Another year, another amazing time at the Maker Faire.
I’ve attended my fair share of Maker Faires these years, so the pyrotechnic sculptures, 3D printing masterpieces, and handmade artisan marketplaces were of no particular surprise. But somehow, every time I come around to the San Mateo fairgrounds, the Faire can’t help but be so aggressively fresh, crazy, and novel. This year, a host of new and intriguing trends kept me on my toes as I ventured through the greatest show and tell on Earth.
Young makers came out in full force this year. Elementary school maker clubs showed off their circuit projects, middle schoolers explained how they built their little robots, and high school STEM programs presented their battle robots. It’s pleasing to see how Maker education has blossomed these past years, and how products and startups like LittleBits and Adafruit have made major concepts in electronics and programming so simple and inexpensive that any kid can pick them up and start exploring. Also wonderful was seeing young teams traveling to the Bay Area from Texas, Oregon, and other states, a testament to the growth of the Maker movement beyond Silicon Valley.
Speaking of young makers’ participation, Arduino creator Massimo Banzi talked about Arduino as an education tool for kids to play and tinker with, even though he never planned to make kids’ toys in his early years. The maker movement has engaged curious minds of all ages to start playing with electronics, making robots, and learning a new language in programming.
While the maker movement has made things very accessible to individuals, the essence of creation and innovation has also impacted large enterprises. On the “Maker Pro” stage, our GVP, Jeremy Ashley (@jrwashley), talked about new trends in large enterprise application design, and how the OAUX group is driving the change toward simpler, yet more effective and more engaging, enterprise applications.
Drones were also a trending topic this year, with a massive Drone Racing tent set up with events going on the whole weekend. Everything was being explored – new shapes for efficient and quick flight; new widgets and drone attachment modules; new methods of interaction with the drone. One team had developed a smart glove that responded to gyroscopic motion and gestures to control the flight of a quadcopter, and had the machine dance around him – an interesting and novel marriage of wearable tech and flight.
Personally, I’ve got a soft spot for art and whimsy, and the Faire had whimsy by the gallon. The artistry of the creators from around the country and globe can’t be overstated.
Maker Faire never disappoints. We brought friends along who had never been to a Faire, and it’s always fun to watch them get blown off their feet, literally and figuratively, the first time a flamethrower blasts open from the monolithic Crucible. Or to see their grins of delight when a cupcake-shaped racecar zooms past them… and another… and another. Or the spark of amazement when they witness some demo that’s beyond any realm of imagination.
Many hands make light (emitting diodes) work. Oracle Applications User Experience (OAUX) gets down to designing fashion technology (#fashtech) solutions in a fun maker event with a serious research and learning intent. OAUX Senior Director and resident part-time fashion blogger, Ultan “Gucci Translated” O’Broin (@ultan), reports from the Redwood City runway.
Fashion and Technology: What’s New?
Wearable technology is not new. Elizabeth I of England was a regal early adopter. In wearing an “armlet” given to her by Robert Dudley, First Earl of Leicester in 1571, the Tudor Queen set in motion that fusion of wearable technology and style that remains evident in the Fitbits and Apple Watches of today.
Elizabeth I’s device was certainly fly, described as “in the closing thearof a clocke, and in the forepart of the same a faire lozengie djamond without a foyle, hanging thearat a rounde juell fully garnished with dyamondes and a perle pendaunt.”
Regardless of the time we live in, for wearable tech to be successful it has to look good. It’s got to appeal to our sense of fashion. Technologists remain cognizant of involving clothing experts in production and branding decisions. For example, at Google I/O 2016, Google and Levi’s announced an interactive jacket based on the Google Jacquard technology that makes fabric interactive, applied to a Levi’s commuter jacket design.
Fashion Technology Maker Event: The Summer Collection
Misha Vaughan’s (@mishavaughan) OAUX Communications and Outreach team joined forces with Jake Kuramoto’s (@jkuramot) AppsLab (@theappslab) Emerging Tech folks recently in a joint maker event at Oracle HQ to design and build wearable tech solutions that brought the world of fashion and technology (#fashtech) together.
The occasion was a hive of activity, with sewing machines, soldering irons, hot-glue guns, Arduino technology, fiber-optic cables, LEDs, 3D printers, and the rest, all in evidence during the production process.
Fashtech events like this also offer opportunities for discovery, as the team found out that interactive synth drum gloves can not only create music, but also be used as input devices to write code. Why limit yourself to one kind of keyboard?
Wearable Tech in the Enterprise: Wi-Fi and Hi-Heels
What does all this fashioning of solutions mean for the enterprise? Wearable technology is part of the OAUX Glance, Scan, Commit design philosophy, key to the Mobility strategy reflecting our cloud-driven world of work. Smart watches are as much part of the continuum of devices we use interchangeably throughout the day as smart phones, tablets, or laptops are, for example. To coin a phrase from OAUX Group Vice President Jeremy Ashley (@jrwashley) at the recent Maker Faire event, in choosing what best works for us, be it clothing or technology: one size does not fit all.
A distinction between what tech we use and what we wear in work and at home is no longer convenient. We’ve moved from BYOD to WYOD. Unless that wearable tech, a deeply personal device and style statement all in one, reflects our tastes and sense of fashion we won’t use it: unless we’re forced to. The #fashtech design heuristic is: make it beautiful or make it invisible. So, let’s avoid wearables becoming swearables and style that tech, darling!
Generally, I’m not in favor of consolidating important stuff onto my phone, e.g. credit cards, etc. because if I lose my phone, I’ll lose all that stuff too.
However, I’ve been waiting to try out a digital hotel key, i.e. using my phone to unlock my hotel room. Only a few hotels and hotel chains have this technology in place, and recently, I finally stayed at one that does, the Hilton San Jose.
Much to my surprise, and Noel’s (@noelportugal), the digital key doesn’t use NFC. We’d assumed it would, given NFC is fairly common in newer hotels.
Nope, it uses Bluetooth, and when you get close to your room or any door you have access to unlock, e.g. the fitness center, the key enables.
Then, touch to unlock, just like it says, and within a second or so, the door is unlocked. It’s not instantaneous, like using the key, which uses NFC, but still pretty cool.
Ironically, the spare, physical key they gave me for “just in case” scenarios failed to work. I made the mistake of leaving my phone in the room to charge, taking the spare key while I ran downstairs to get some food, and the physical key didn’t work.
Anyway, the feature worked as expected, which is always a win. Those plastic keys won’t disappear anytime soon, and if you lose your phone while you’re using the digital hotel key, you’re extra hosed.
Still, I liked it and will definitely use it again whenever it’s available because it made me feel like future man and stuff.
Find the comments.
Editor’s Note: In February while we were in Australia, I had the pleasure to meet Stuart Coggins (@ozcoggs) and Scott Newman (@lamdadsn). They told me about a sweet Anki Overdrive cars plus Oracle Cloud Services hack Stuart and some colleagues did for Pausefest 2016 in Melbourne.
Last week, Stuart sent over a more detailed video of the specifics of the build and a brief writeup of what was involved. Here they are.
Oracle IoT and Anki Overdrive
By Stuart Coggins
Some time ago, our Middleware team stumbled upon the Anki Overdrive and its innovative use of technology, including APIs to augment a video game with physical racing cars.
We first presented an IoT focused demonstration earlier this year at an Innovation event in Melbourne. It was very well received, and considered very “un-Oracle.”
Over the past few months, the demo scope has broadened. And so has collaboration across Oracle’s Lines of Business. We saw an opportunity to make use of some of our Cloud Services with a “Data in Action” theme.
We’ve taken the track to several events, spanning various subject areas, always sparking the question “What does this have to do with me?” and, in some cases, “Why is Oracle playing with racing cars?”
As if the cars were not drawcard enough at our events, the drone has been a winner. Again, it is an opportunity to showcase how using a range of services can make things happen.
As you’ll see in the video, the flow is fairly straightforward: the game, running on a tablet, talks to the cars via Bluetooth. Using Bluetooth sniffers on a Raspberry Pi, we interrogate the communication between the devices. There are many game events as well as car activity events (speed of left/right wheels, change lane, turn left, turn right, off track, etc.). We’re using Python scripts to forward the data to Oracle’s Internet of Things Cloud Service.
This is where things get interesting. The speed and laptime data is being filtered out, and forwarded to Oracle’s Database Cloud Service. The “speedo” dials are rendered using Oracle Apex (Application Express), which does a great job. An “off track” event is singled out and instantiates a process defined in Oracle Process Cloud Service. At this point, we’ll integrate to Oracle Service Cloud to create an event for later auditing and logging. Whilst airborne, the drone captures photos of the incident (the crashed cars), and sends them back to the process. The business process has created an incident folder on Oracle Document Cloud Service to record any details regarding the event, including the photos.
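The filtering step described above can be sketched like this. This is a simplification with invented event fields and in-memory handlers (the real integration goes through the IoT Cloud Service and its downstream services), but it shows the routing idea: speed and laptime events go to one sink, while an “off track” event kicks off the incident process:

```python
# Hypothetical sketch of the event router: keep speed/laptime events
# for the database dials, single out "off track" to start a process.
# Event field names and handler signatures are invented for illustration.

def route_event(event, store_lap_data, start_incident_process):
    """Dispatch one sniffed game/car event to the right cloud service."""
    kind = event.get("type")
    if kind in ("speed", "laptime"):
        # Forwarded to Database Cloud Service for the Apex speedo dials.
        store_lap_data(event["car_id"], kind, event["value"])
    elif kind == "off_track":
        # Instantiates the Process Cloud Service flow (drone photos,
        # incident folder in Document Cloud Service, and so on).
        start_incident_process(event["car_id"])

# Tiny demo with in-memory handlers standing in for the cloud services:
laps, incidents = [], []
route_event({"type": "laptime", "car_id": "skull", "value": 7.42},
            lambda car, kind, value: laps.append((car, kind, value)),
            incidents.append)
route_event({"type": "off_track", "car_id": "groundshock"},
            lambda car, kind, value: laps.append((car, kind, value)),
            incidents.append)
print(laps, incidents)
```

Keeping the router this dumb is deliberate: each downstream service (Apex dials, the incident process) decides for itself what to do with the event it receives.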
Because data is not much use if you’re not going to do something with it, we then hook up Oracle Business Intelligence Cloud Service to the data stored in Database Cloud Service. Post race analysis is visualised to show the results, and with several sets of race data, gives us insight as to which car is consistently recording the fastest laptimes. i.e. the car that should be used when challenging colleagues to a race!
As we’ve been running this for a few months now, and the use cases and applications of this technology grow, we’re getting more and more data. The adage that data creates data is certainly true here.
Ultimately, we’ll dump this data into Hadoop and perform some discovery, perhaps to understand how the track changes during the day (dust/dirt/use) etc. We’d like to get some temperature data from the Pi to understand if that has any effect on the car performances, and perhaps we’ll have enough data for us to be able to analyse the perfect lap, and replay it using the Anki SDK.
We’re planning a number of hackathons locally with this kit, and we’ll see what other innovations we can highlight.
A big shout out to the technical guy behind sniffing and “translating” the data. The data is not exposed by the SDK and was by no means trivial to map, but it has allowed us to get something meaningful and put it into action.
At first I was skeptical. I was perfectly happy with my iPad Air and the Pro seemed too big and too expensive. Six months later I wouldn’t dream of going back. The iPad Pro has become my primary computing device.
Does the Pro eliminate the need for a laptop or desktop? Almost, but for me not quite yet. I still need my Mac Air for NodeBox coding and a few other things; since they are both exactly the same size I now carry them together in a messenger bag.
The Pro is lighter than it looks and, with a little practice, balances easily on my lap. It fits perfectly on an airplane tray table.
Does the 12.9-inch screen really make that much of a difference? Yes! The effect is surprising; after all, it’s the same size as an ordinary laptop screen. But there is something addictive about holding large, high resolution photos and videos in your hands. I *much* prefer photo editing on the iPad. 3D flyovers in Apple Maps are almost like being there.
The extra screen real estate also makes iOS 9’s split screen feature much more practical. Above is a screenshot of me editing a webpage using Coda. By splitting the screen with Safari, I can update code and instantly see the results as I go.
Enterprise users can see more numbers and charts at once. Bloomberg Professional uses the picture-in-picture feature to let you watch the news while perusing a large portfolio display. WunderStation makes dashboards big enough to get lost in.
For web conferences, a major part of my working life at Oracle, the iPad Pro both exceeds and falls short. The participant experience is superb. When others are presenting screenshots I can lean back in my chair and pinch-zoom to see details I would sometimes miss on my desktop. When videoconferencing I can easily adjust the camera or flip it to point at a whiteboard.
But my options for presenting content from the iPad are still limited. I can present images, but cannot easily pull content from inside other apps. (Zoom lets you share web pages and cloud content on Box, Dropbox or Google Drive, but we are supposed to keep sensitive data inside our firewall.) The one-app-at-a-time iOS model becomes a nuisance in situations like this. Until this limitation is overcome I don’t see desktops and laptops on the endangered species list.
The iPad Pro offers two accessories not available with a normal iPad: a “smart keyboard” that uses the new magnetic connector, and the deceptively simple Apple Pencil.
I tried the keyboard and threw it back. It was perfectly fine but I’m just not a keyboard guy. This may seem odd for someone who spends most of his time writing – I’m typing this blog on the iPad right now – but I have a theory about this that may explain who will adopt tablets in the workplace and how they will be used.
I think there are two types of workers: those who sit bolt upright at their desks and those who slump as close to horizontal as they can get; I am a slumper. And there are two kinds of typists: touch typists who type with their fingers and hunt-and-peckers who type with their eyes; I am a, uh, hunter. This places me squarely in the slumper-hunter quadrant.
Slumper-hunters like me love love love tablets and don’t need no stinking keyboards. The virtual keyboard offers a word tray that guesses my words before I do, lets me slide two fingers across the keyboard to precisely reposition the cursor, and has a dictate button that works surprisingly well.
Touch-slumpers are torn: they love tablets but can’t abide typing on glass; for them the smart keyboard – hard to use while slumping – is an imperfect compromise. Upright-hunters could go either way on the keyboard but may not see the point in using a tablet in the first place. Upright-touchers will insist on the smart keyboard and will not use a tablet without one.
If you are an artist, or even just an inveterate doodler, you must immediately hock your Wacom tablet, toss your other high-end styli, and buy the Apple Pencil (with the full-sized Pro as an accessory). It’s the first stylus that actually works. No more circles with dents and monkey-with-big-stick writing. Your doodles will look natural and your signature will be picture perfect.
The above drawing was done in under sixty seconds by my colleague Anna Budovsky. She had never used the iPad Pro before, had never used the app (Paper), and had never before picked up an Apple Pencil. For someone with talent, the Apple Pencil is a natural.
If you are not an artist you can probably skip the Pencil. It’s a bit of a nuisance to pack around and needs recharging once a week (fast and easy but still a nuisance). I carry one anyway just so I can pretend I’m an artist.
For now the iPad Pro is just a big iPad (and the new Pro isn’t even big). Most apps don’t treat it any differently yet and some older apps still don’t even fully support it. But I am seeing some early signs this may be starting to change.
The iPad Pro has one other advantage: processing power. Normal iPad apps don’t really need it (except to keep up with the hi-res screen). Some new apps, though, are being written specifically for the Pro and are taking things to a new level.
Zooming into infinitely complex fractals is not a business application, but it sure is a test of raw processing power. I’ve been exploring fractals since the eighties and have never seen anything remotely as smooth and deep and effortless as Frax HD. Pinch-zooming forever and changing color schemes with a swirl of your hand is a jaw-dropping experience.
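To see why fractal zooming stresses a processor, consider the classic escape-time algorithm behind Mandelbrot-style images: every single pixel requires its own iteration loop, and deeper zooms demand more iterations per pixel. Here is a minimal sketch in Python (my own toy illustration, not how Frax HD actually works; its rendering is GPU-accelerated and far more sophisticated):

```python
def mandelbrot_iterations(c: complex, max_iter: int = 100) -> int:
    """Count iterations before z = z*z + c escapes |z| > 2.

    Points that never escape (return max_iter) are in the Mandelbrot set.
    """
    z = 0j
    for n in range(max_iter):
        z = z * z + c
        if abs(z) > 2:
            return n
    return max_iter

def render(width: int = 40, height: int = 20) -> str:
    """Render a tiny ASCII view of the set.

    Each character costs a full iteration loop, which is why smooth,
    deep fractal zooms are such a workout for raw processing power.
    """
    rows = []
    for j in range(height):
        row = ""
        for i in range(width):
            c = complex(-2.5 + 3.5 * i / width, -1.25 + 2.5 * j / height)
            row += "#" if mandelbrot_iterations(c) == 100 else "."
        rows.append(row)
    return "\n".join(rows)

print(render())
```

Even this crude 40x20 text rendering does tens of thousands of complex multiplications; a full-screen, real-time, pinch-zoomable version multiplies that workload by many orders of magnitude.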
The emerging class of mobile CAD apps, exemplified by Shapr3D, is more useful but no less stunning. You would think a CAD app would need not just a desktop machine but also a keyboard on steroids and a 3D mouse. Shapr3D uses the Apple Pencil in ingenious ways to replace all that.
Sketch curves and lines with ease and then press down (with a satisfying click) to make inflection points. Wiggle the pencil to change modes (sounds crazy but it works). Use the pencil for drawing and your fingers for stretching – Shapr3D keeps up without faltering. I made the strange but complicated contraption above in my first session with almost no instruction – and had fun doing it.
I hesitate to make any predictions about the transition to tablets in the workplace. But I would recommend keeping an eye on the iPad Pro – it may be a sleeping giant.