Fourfecta! Java, IoT, Making and Raspi

June 18th, 2014 Leave a Comment

Here comes more Maker content for your reading pleasure; this time it’s an OTN piece on Java and the Internet of Things:

A Perfect Match: Java and the Internet of Things

The piece features lots of Noel (@noelportugal) wisdom on making, on IoT, on the Raspi and on Java, his own personal fourfecta. If you’re scanning (shame on you), look for the User Experience and the Internet of Things section.

Here’s a very Noel quote:

“Java powers the internet, our banks, and retail enterprises—it’s behind the scenes everywhere,” remarks Portugal. “So we can apply the same architectures, security, and communication protocols that we use on the enterprise to an embedded device. I have used Arduino, but it would be hard to start a web server with it. But with Raspberry Pi, I can run a server, or, for example, use general-purpose I/O where I can connect sensors. The point is that developers can translate their knowledge of Java from developing on the enterprise to embedded things.”
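To make that concrete, here’s a minimal sketch of the kind of thing Noel means, using the JDK’s built-in com.sun.net.httpserver (standard Java since version 6) to serve a stubbed sensor reading over HTTP from a Pi. The endpoint and the fake GPIO read are my own illustration, not code from the article.

import com.sun.net.httpserver.HttpExchange;
import com.sun.net.httpserver.HttpHandler;
import com.sun.net.httpserver.HttpServer;
import java.io.IOException;
import java.io.OutputStream;
import java.net.InetSocketAddress;

// A minimal web server a Raspberry Pi can run on standard Java.
public class PiSensorServer {
    public static void main(String[] args) throws Exception {
        HttpServer server = HttpServer.create(new InetSocketAddress(8080), 0);
        server.createContext("/sensor", new HttpHandler() {
            public void handle(HttpExchange exchange) throws IOException {
                String body = "sensor reading: " + readSensor();
                exchange.sendResponseHeaders(200, body.length());
                OutputStream os = exchange.getResponseBody();
                os.write(body.getBytes());
                os.close();
            }
        });
        server.start(); // browse to http://<pi-address>:8080/sensor
    }

    // Stand-in for a real general-purpose I/O read, e.g. via the Pi4J library.
    private static double readSensor() {
        return 42.0;
    }
}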

For those attending Kscope14 next week, come find us for the in-person version and see IFTTPi, the fourfecta in action.

Find Us at Kscope14 Next Week in Seattle

June 17th, 2014 3 Comments

Kscope14 (#kscope14), the annual conference of the Oracle Development Tools User Group, affectionately ODTUG (@odtug), will happen next week in beautiful Seattle, June 22 to June 26.

The good people at ODTUG have graciously invited me back as a speaker for the 2014 vintage, and Anthony (@anthonyslai) will be my wingman for our session, Oracle Cloud and the New Frontier of User Experience.

Here are the particulars:

Title: Oracle Cloud and the New Frontier of User Experience
When: Monday, June 23, 2014, Session 1, 10:45 am – 11:45 am
Abstract
A wristband that can unlock and start your car based on a unique cardiac rhythm. Head-mounted displays that layer digital information over reality. Computers, robots, drones, and more controlled with a wave of the hand or a flick of the wrist. Everyday objects connected to the Internet that convey information in an ambient way. Fully functional computers on tiny sticks. Invisible fences that control the flow of data. Science fiction isn’t fiction anymore, and people aren’t tied to PCs and desks. Everything is a device, everything is connected, everything is smart, and everything is an experience. Come see the R&D work of Oracle’s Applications User Experience team and explore new devices, trends, and platforms.

Noel (@noelportugal) will be tagging along as well, and I think we’ll have a scaled-down, but still fun, version of the IFTTPi activity the guys showed at the Maker Faire last month.

So, if you want to hear about and see the emerging technologies R&D coming out of Oracle Applications User Experience (@usableapps), try out Google Glass, Leap Motion and various wearables, play with the Sphero, or just say hi, then come find us in Seattle.

 

Maker Movement Fuels the Internet of Things

June 16th, 2014 Leave a Comment

The Java Team recently released a short video compiling selected moments from last month’s MakerCon and Maker Faire. If you recall, we were lucky enough to be invited to participate in both events, which were tons of fun, enlightening and inspiring.

At 0:33 you’ll see some of the guys hamming it up for the camera, and Jeremy’s (@jrwashley) keynote at MakerCon is featured prominently as a voiceover.

Enjoy.

Google Glass and First World Problems

June 16th, 2014 5 Comments

If you have Google Glass, you’ve probably seen this card a few times.


After a while, you begin to expect the card when your right temple starts to get uncomfortably warm. Apparently, Anthony (@anthonyslai), our resident Glass expert and long-time Glass Explorer, has a protip for handling this problem: two cans of cold soda.


Ultan’s (@ultan) Glass cooling off

I guess Ultan (@ultan) first encountered this clever solution while Anthony was presenting at the EchoUser Wearables Design Jam a couple weeks ago.

Now I have an efficient way to solve this decidedly First World Problem.


The Qualcomm Toq

May 29th, 2014 15 Comments

Editor’s Note: Here’s a post from newish ‘Lab member, Tony. Enjoy, and maybe if you’re nice in comments, he’ll write more. Or not, we won’t know until we know.

The ideas flying, crawling, walking, and slithering around us in the sunny, windy San Francisco Bay setting made for an enjoyable, educational, and truly inspirational experience. O’Reilly’s Solid conference (Software/Hardware/Everywhere) was last week, and with it, the future finally materialized. Wearables, robots, new materials, new methods, and new software have arrived to change . . . everything.


This spirit was interrupted–no, augmented–for a few hours at the beginning of Solid by some hushed mumbles: O’Reilly was giving away 30 smartwatches at lunchtime!

I will spare the details of finding and analyzing the official rules, staking out and running reconnaissance around the giveaway area, listening in, photographing, and hunkering down. I created my first personal Twitter account and opened 10 identical tabs on my smartphone, ready to spawn the required golden tweet. I proudly whispered this strategy to a colleague, who responded: “OK, you’ve gone too far now.” I agreed and then quietly, though not completely unabashedly, created two more tabs.


The Toq smartwatch by Qualcomm features Qualcomm’s Mirasol display technology, which delivers a sizable, always-on color touchscreen without consuming much power. The screen is readable even in direct sunlight. In darkness, double-tap the secret spot on the upper band to toggle the happy backlight. The screen snappily responds to touch, and the battery lasted a full week in my test. Given that the display stays on for so long between charges, I find it difficult to criticize the often washed-out, blurred colors too harshly.

The watch face is so much bigger than the band that the screen overlaps my hand a bit. The watch often digs in when the wrist is bent, say when using an armrest to get up from a chair. Tightening the band to prevent the discomfort is not an option. The Toq band is cut to fit, and careful with those scissors: a battery and sensors in the band mean you cannot replace it. The design of the band does not permit an analog fit, as there are only a finite number of slots. If you are one of the lucky ones with a blessed wrist size, then you should be able to wear the Toq without frequent pain. Got pain? Regularly shove the watch up to where your arm is thicker, or sell it to someone with a wrist of equal or lesser circumference than your own.

The software, both on the Toq itself and on the required, Android-only companion device, is adequate. Devices stayed paired, and notifications were timely. Range was around 30 feet. What more do you want in a smartwatch? How about using your voice to dictate a text!? Pretty cool, Toq! An SDK is also available for building your own Android apps that communicate with the Toq. I tried downloading it, but they wanted me to create an account, so I didn’t. I was also discouraged by the quiet, small dev forum.

I seldom wear a watch, but I am never without my smartphone. So will I use a smartwatch regularly? I really like being able to casually look down and immediately read a new email/chat/text. Quick access to stocks, weather, calendar, and basic music controls comes in handy sometimes. Overall, though, the Toq leaves me wanting more: a true smartphone experience, always on, on my wrist. But then maybe the Toq has done its job. I think I have seen the light, the conversion has been made, and I am enthusiastically on board for next time.

Bottom line: the Qualcomm Toq is OK as a free gift, but I want more.

The Narrative Clip

May 28th, 2014 4 Comments

Editor’s note: Here’s another post from friend of the ‘Lab and colleague, John Cartan. When John reached out, offering a review of the Narrative Clip (née Memento), I jumped at the opportunity to read and publish his thoughts, and not just because I value his insights.

When Noel (@noelportugal) and I were in the Netherlands for the awesome event hosted by AMIS in March, we ran into Sten Vesterli (@stenvesterli), Ace Director and OAUX Advocate, who was sporting the very same Narrative Clip. We both quizzed Sten about it and were intrigued to explore future uses and cool demos for the little life-logging camera.

Anyway, John’s review reminded me, and now we have more anecdotal usage on which to draw if/when we get to building for the Narrative Clip.

Enjoy.

For several weeks now I’ve been wearing a small orange gadget clipped to my shirt – a “lifelogging” camera called the “Narrative Clip”. We thought we might be able to use it for ethnographic field studies (following users around to see how they do their job), or maybe for recording whiteboards during brainstorming meetings. But I was especially curious to see how other people would react to it.

From L-R: the Narrative Clip’s box, John, the Narrative Clip

Usage

The device itself is small (about the size of a Triscuit) and easy to use: just clip it onto your shirt or collar and forget it. It takes a photo once every 30 seconds without flashing lights or any visible indication. At the end of the day you hook it to a Mac or PC with a 3-inch USB cable to both upload the day’s photos and recharge the device.

The camera can be temporarily deactivated by putting it face down on a table or in a purse or pocket. In practice I found that my pocket wasn’t dark enough, so I made a small carrying case out of a box of mints.

Once the photos are transferred (which takes only a minute or two) you can either leave them on your hard disk, upload them to a cloud server, or both. The server upload and processing takes anywhere from ten minutes to six hours or more. Once uploaded, the images are straightened, cropped, sorted to remove blurry photos, organized into groups, and made available to a free iPhone or Android browser app.

The cloud storage is effortless and requires no local storage, but it sometimes over-crops (it once chopped the heads off all the people in a meeting I monitored) and provides only limited access to the photos (you have to email yourself reduced photos from the phone app one at a time).

So I think that for full control you have to enable the local storage option. This works fine, but creates more work. You can easily generate over a thousand photos a day, which all have to be sorted and rotated. The photos consume a gig or more each day, which may eventually overwhelm your local hard drive; for long-term usage I would recommend a dedicated external drive.
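Housekeeping like that is easy to script, though. Here’s a hypothetical Java sketch, with made-up paths, that files a day’s photo dump into per-date folders on an external drive; it’s my illustration of the chore, not anything Narrative ships.

import java.io.IOException;
import java.nio.file.DirectoryStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.text.SimpleDateFormat;
import java.util.Date;

// Files each downloaded photo into a folder named for the day it was taken.
public class PhotoFiler {
    public static void main(String[] args) throws IOException {
        Path inbox = Paths.get("/Volumes/Narrative/inbox");       // assumed layout
        Path archive = Paths.get("/Volumes/External/narrative");  // dedicated drive
        SimpleDateFormat day = new SimpleDateFormat("yyyy-MM-dd");

        try (DirectoryStream<Path> photos = Files.newDirectoryStream(inbox, "*.jpg")) {
            for (Path photo : photos) {
                // Bucket each photo by its last-modified timestamp.
                Date taken = new Date(Files.getLastModifiedTime(photo).toMillis());
                Path folder = archive.resolve(day.format(taken));
                Files.createDirectories(folder);
                Files.move(photo, folder.resolve(photo.getFileName()));
            }
        }
    }
}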

Photo Quality

Each raw photo is 2592 x 1944 (5 megapixels). The quality is acceptable in full light, grainy in low light (there is no flash). But because the photos are taken mindlessly while clipped to a shirt that may bounce or sag, the results are generally poor: mostly shots of the ceiling or someone’s elbow. There is no way to check the images as they are taken, so if the lens is blocked by a droopy collar you may not discover this until the end of the day (as happened to me once). And the camera generally won’t be pointed in the direction you are looking unless you glue it to your forehead or wear it on a hat. You can force a photo by double-tapping, but this doesn’t work well.

For all these reasons the Narrative Clip is not a replacement for a normal camera. But the random nature of the photo stream does have some redeeming qualities: it notices things you do not (a passing expression on someone’s face, an interesting artifact in an odd corner of someone’s cube, etc.) and it creates a record of small moments during the course of a day which would otherwise be quickly forgotten. Even if most of the photos are unusable, they do tend to jog your memory about the actual sequence of events. And because the photos are un-posed they can sometimes capture more authentic moments than a more obvious camera usually would.

Possible Applications

The key to designing a great user experience for enterprise software is to first understand your user: what her job is, how she does it, what challenges she has to overcome each day, etc. One way of doing this is an “ethnographic field study” – the researcher follows the user around and documents a typical day.

Our original idea was that the Narrative Clip could enhance ethnographic field studies. Either the researcher could wear it while following a user, or you could ask the user to wear it for a day and then meet later to review the photos.

I think both of these ideas are worth trying. The Narrative Clip would not replace a normal camera; its main value would be to jog the memory when writing up reports at the end of the day. Similarly, if the user wears the clip herself, the researcher should schedule time the next day to step through the photos together and answer questions (“What were you doing here? Who is that? It looks like you stepped briefly onto the shop floor after lunch – how often do you do that?”).

There are other applications as well. I set up the camera in a meeting room to take a photo of the whiteboard every 30 seconds. This could be a quick and easy way to capture drawings during the course of a brainstorming session. Placing the camera far enough back to capture the entire board meant the writing was hard to discern; it might work with good lighting and strong marking pens.


John conducting at the whiteboard

Setting the clip on a table during an interview allowed me to collect a collage of un-posed portraits which, in total, gave a more accurate reflection of the subject’s personality than any single posed photo could provide.


Logging an interview with one of our OAUX colleagues

Another possible application is using the camera to take photos from the dashboard of a moving car. For optimal results the camera needs to be placed near the windshield and high enough to avoid photographing the hood of the car. I achieved a stable mount by clipping the camera to a placard holder (from an office supply store) and placing that on a dashboard sticky pad (from an auto supply store).

Personal Reactions

As we enter the age of wearable sensors and the Internet of Things, we are starting to ask a new question during our design sessions: “is that creepy?” As technologists we are naturally excited by the new applications and the bounty of data made available. But as we think about the user experience of our customers, it is important to consider what it’s like being on the other end of the camera. Wearing the Narrative Clip was a great way to explore personal reactions to this brave new world.

I found that in general people didn’t notice the device (or were too polite to ask about it) unless I brought it up. But once they realized it was a camera, some people were uncomfortable (at first). Most people didn’t seem to mind too much once they understood how it worked, but some were definitely shy about having their photos taken. Some changed positions so as not to be in my normal field of vision. One person requested that I destroy any photos it might take of her. It helps to explain what you’re doing and ask permission first.

Here is what one acquaintance of mine confessed:

“What I think is that I value one-to-one time that is ephemeral. Not recorded. Felt in the heart. I feel threatened when recorded without permission. Sigh. I know. That sounds dumb. I mean, with cell phones everywhere, I don’t even have privacy in the gym locker room. Then the flip side of my brain starts blabbing: “What are you worrying about? Who would want to see your body or record your thoughts anyway?” Am I just prejudiced? I would not want to hire someone I interviewed if they wore one. I would leave the dinner table if a date wore one.”

I feel that it is very important to respect attitudes like this. If people are uncomfortable with a new technology, they will find ways to bypass or subvert it. Sensor-based enterprise applications will only succeed if we strike the right balance between convenience and privacy, are upfront about exactly what data we are collecting and how it will be used, and show respect by asking permission and letting people opt in as much as possible.

Conclusions

The Narrative Clip is a solid, easy to use device that could be helpful for tasks like ethnographic fieldwork, but culling through the flood of random images requires time and effort. Further experimentation is needed to determine if the trade-off would be worthwhile.

Recording entire days – and being recorded by others – was an illuminating experience. Sensor-based technologies can provide treasure troves of data, but it’s always worth asking what it would be like to be on the other end of the camera. A reasonable balance can be struck if we are transparent about what we are doing and show respect by asking permission.

The Misfit Shine

May 27th, 2014 16 Comments

Over the past 12 months, the chatter about wearables (glasses, watches, bands, clothing, material) has become too loud to ignore. It almost seems like manufacturers will force wearables on consumers, like it or not.

There are good uses for wearables, and one of the most common is the fitness tracker.

Although I haven’t worn one myself until recently, I’ve been around lots of people who have, e.g. my wife had an early FitBit, Noel (@noelportugal) was an early adopter of the Nike+ Fuelband and has a Jawbone UP, Ultan (@ultan) has at least a dozen different fitness trackers, etc.

I finally made the jump and bought the Misfit Wearables Shine, and after wearing it for a week, I’m impressed. I do wonder how long it will keep my attention, though.

Pros

Of all the fitness bands and smartwatches (and smartphone apps) that track activity, I chose the Shine because I love the small form factor and the flexible ways to wear it. The Shine is about the diameter of a quarter, and guessing here, about the thickness of two or three quarters stacked.

So, yeah, it’s small. It comes with a wristband and a magnetic clasp, and you can buy other, erm, Shine holders including necklaces, leather wristbands and even socks and t-shirts, specifically designed to hold the little guy.

Another plus for the Shine is that it takes a standard watch battery, no need to charge it or tether it for syncing, a common complaint about other fitness trackers.

The Shine uses Bluetooth 4.0 (a.k.a. Bluetooth Low Energy) to communicate with the phone. BLE uses less power than the older spec, but keeping the Bluetooth receiver on all the time still runs down the phone’s battery noticeably.

Even though its design is minimalist, the Shine can tell you the time, if you learn its indicators and ensure you know which side is 12 o’clock. Easier than a binary clock, but it requires some learning.

My experience so far has been pretty positive. I like the little guy, but I’m not sure how long I’ll stay engaged. This isn’t a Misfit problem though.

Cons

There are some noteworthy negatives.

Misfit only provides a mobile app for the Shine, no accompanying web app, which I actually don’t mind, yet. This does limit the metrics and analytics a bit, which I know other people like, especially as they accumulate data over time. So, this will eventually bug me.

I’m a fan of the quantified self, to a fault; I used to carry a workout journal with eight years of handwritten data in it.

I’m *that* guy.

Misfit has no publicly available developer options, no APIs, no SDK. They have been promising an API for a while now, so I assume it’s coming soon. An SDK would be nice too, e.g. to allow developers to access the Shine for glanceable notifications. Not sure if that’s in the cards or not.

Finally, one of the positives can be a negative. I like the different options for wearing the Shine, and I’ve tested out both the sports band and the magnetic clasp. The latter leads me to a con: it’s easy to lose the Shine.

Case in point, I was wearing the Shine attached to my shorts. I went about my day and suddenly realized it was missing. Looking at the last time I had synced, I retraced my steps to no avail, using the Bluetooth scanning feature as a BLE dowsing rod of sorts.

As a last resort, I pinged Noel, BLE master. He pointed me to an Android app called simply Bluetooth 4.0 Scanner and within minutes, I had found it.
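For the curious, here’s a rough sketch of what such a scanner does under the hood, using the era’s Android BLE API (startLeScan, API 18+); the class name and logging are mine, and a real app also needs the BLUETOOTH and BLUETOOTH_ADMIN permissions.

import android.bluetooth.BluetoothAdapter;
import android.bluetooth.BluetoothDevice;
import android.util.Log;

// A BLE "dowsing rod": list advertising devices and watch the signal
// strength (RSSI) climb toward zero as you get closer to the lost Shine.
public class ShineFinder {
    public void startScan() {
        BluetoothAdapter adapter = BluetoothAdapter.getDefaultAdapter();
        adapter.startLeScan(new BluetoothAdapter.LeScanCallback() {
            @Override
            public void onLeScan(BluetoothDevice device, int rssi, byte[] scanRecord) {
                // Higher (less negative) RSSI roughly means the device is nearer.
                Log.d("ShineFinder", device.getAddress() + " RSSI: " + rssi);
            }
        });
    }
}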


Huzzah for Noel! Huzzah for Bluetooth 4.0 Scanner! Reading the comments on that app shows that my use case is not unique. Perhaps the developer should rename it, Fitness Band Finder, or some such.

Anyway, that’s my week or so with the Misfit Shine.

Find the comments.

Saying Wearables in Spanish

May 23rd, 2014 11 Comments

Friend of the ‘Lab, Bob Rhubart (@otnarchbeat) recently recorded a segment with our own Noel (@noelportugal) and Sarahi Mireles (@sarahimireles), a UX developer from our Mexico Development Center.

The topic was wearables, but I only know this because they told me. Google Translate wasn’t much help, rendering the title “Manos libres y vista al frente: Con el futuro puesto” as “Handsfree and front view: With the future since.” A human translation runs closer to “Hands free and eyes forward: wearing the future.”

Anyway, enjoy.

Update: Noel pointed me to an English version on the same topic.

Whereable is the Killer Wearable Translation Use Case? Glass, Word Lens and UX

May 21st, 2014 2 Comments

Editor’s note: Here comes a guest post from our old pal and colleague, Ultan O’Broin (@ultan). You can read more of his thoughts and musing at User Experience Assistance and Not Lost in Translation. Enjoy.

Whereable is the Killer Wearable Translation Use Case? Glass, Word Lens and UX
By Ultan O’Broin

I just can’t escape the Word Lens-AppsLab vortex. I blogged about the Quest Visual Word Lens augmented reality (AR) mobile translation app for Jake (@jkuramot) a while ago. I’m involved in user experience (UX) outreach with Noel (@noelportugal) or Anthony (@anthonyslai) demoing the Google Glass version of Word Lens. Blogged about that, too.


Noel does his translation thang with Word Lens on Google Glass in the UK.

Now, Google have just announced an acquisition of Word Lens. That’s good news for the Glass version. The current version “works,” but hardly at the level UX aspires to. The AR translation is impacted by stuff like how much the Glass wearer’s head moves (or how often you drink in certain bars in San Francisco), the fonts used in the target image, and so on. Likely this acquisition will mean Google Translate’s overall UX improves, offering an upped experience in terms of optimized UIs on different devices, all pivoting around cloud-based, real-time translation of a wide range of language combinations.

Up to now these mobile translation apps (there’s a ton) have seemed like a hammer in search of a nail. Finding a consumer nail, let alone an enterprise one, often leaves me scratching at the bottom of the toolbox. Besides the translation quality, contextual factors are forgotten: stuff like the cost of operation or of the device, or the very environment you’ve got to work in.

Take Word Lens on Glass. Great for wearables ideation, the promise of an immersive UX, the AR potential, and people just love the awesome demos. But would you ever use it for real, right now?

Consider this: I’m a Glass Explorer and a runner. I recently did a 10 miler in Tel Aviv using Strava Run Glassware. Yeah, more of our global experiment to see if normal people would notice or care about Glass being in their faces (they didn’t).


Strava Dashboard with that Tel Aviv run using Google Glass

It was a great user experience, sure. But the cost of using my Glass tethered to my EU-registered Samsung Galaxy SIII for data on the move forced me back to reality: nearly 33 EUR (45 USD, today) in roaming charges. Over 3 euros a mile.

Of course, there’s also the cost of Glass itself. Effectively, with taxes and bits added, it’s 1700 USD (1250 EUR). Not cheap. So, consider adding another real-world problem: running around sweating on your 1700 bucks. Nothing that some handy tape and a plastic bag can’t deal with, in a sort of Nerd 2.0, duct-tape-eyeglasses-repair way. But, not a UX for the stylish.


Customizing Google Glass for runners on the go. Cut tape and plastic bag over the bone-conducting part of the device to exclude sweat.

I’ve no idea what Word Lens on Glass would cost to translate a foreign-language dinner menu, billboard or sign when away on vacation. But I’d bet if you’re going to try more serious translation tasks and stay connected during them, then it’ll likely be cheaper and a lot easier to just man up and ask someone local. Unless the app is usable offline … and works outside in the rain.

Time will tell where Google Glass and Word Lens end up. The message from all this is that in the enterprise, when it comes to gathering user requirements for wearable (and other) tech, it’s about more than just the end user, and about more than taking technology at face value. Context is king.

Oh, we’ve a course on that, too.

Raspi Shutdown Key

May 20th, 2014 1 Comment

Noel (@noelportugal) is a clever dude. He’s also passionate. If you’ve ever met him, you already know these things.

Although I haven’t yet jumped into Raspberry Pi, despite Noel’s unbridled passion about the little-computer-that-could, I have captured some metadata about it, just from being around him and his passion.

For example, I know the Raspi needs to be shut down a certain way, or bad things happen. I recall Noel being very specific about this at the Raspi hackday he ran in January.

Since he created a Maker Faire DIY activity based around the Raspi and Embedded Java, Noel needed an easy and standardized way to shut down his Raspis without the benefit of any standard peripherals: no keyboard, no mouse, no monitor.

So, he made a key because of course he made a key. Duh.


One of Noel’s Raspis and the shutdown key

I saw the guys use this key a few times during Maker Week, and I’m not entirely sure what’s on it, e.g. scripts or magic bits, but it certainly made shutting down more than a dozen Raspis a breeze.

I hope he’ll share the magic, fingers crossed.

Update: Noel shared the secret sauce in comments, and I’ll add it here for post-completeness.

So here is the secret sauce:

# Get the device ID and vendor ID from the USB wifi adaptor.
$ lsusb

Bus 001 Device 004: ID 0bda:8176 Realtek Semiconductor Corp. RTL8188CUS 802.11n WLAN Adapter

# Create a new udev rule.
$ sudo vi /etc/udev/rules.d/10-local.rules

ACTION=="remove", ENV{ID_VENDOR_ID}=="0bda", ENV{ID_MODEL_ID}=="8176", RUN+="/sbin/shutdown -h now"

# Reload the udev rules.
$ sudo udevadm control --reload-rules

If you want to make a shutdown key, make sure you use a USB drive and not a USB wifi/bluetooth dongle. Hot-plugging a powered device will cause the Raspberry Pi (Model B) to restart.

AppsLab at the Maker Faire

May 19th, 2014 Leave a Comment

Last week was Maker Week for us, and the culmination was the Maker Faire over the weekend.

This was my first Maker Faire, and wow was it an absolute blast. I highly recommend taking in one of their events if there’s one in your town. Perhaps the best attribute of the Maker Faire is that it includes not just technology, but also handicrafts, woodcraft, pretty much every kind of craft. Walking around, you can feel the passion of everyone there, and it’s infectious.

Anyway, Oracle and Java were present in a big way at the Faire, and our team’s role was to build the DIY activity for the Java area in the Expo Center. Noel (@noelportugal) created an activity he calls “If this, then Pi” which pays homage to one of our favorite services, IFTTT, and highlights the power of Raspberry Pi and Embedded Java.

IFTTPi takes an input, like a tweet with a certain hashtag or a reading from a sensor, and does something with it, like moving a robot arm or driving a Sphero:


Noel’s if-this-then-pi console

Noel and a few of the guys (Anthony, Raymond, Tony) put all this together in less than a month with some inspiration and guidance from Hinkmond Wong. Aside from showcasing Embedded Java on the Raspi, the overall goal was to show how actions can trigger IoT events with limited human interaction.
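Noel hasn’t published the IFTTPi source yet (more on that below), but the trigger/action pattern at its heart is easy to sketch in Java. Everything here, the interfaces and the polling loop, is hypothetical illustration, not the guys’ actual code.

// A hypothetical if-this-then-that loop in plain Java.
public class IfThisThenPi {

    interface Trigger {
        boolean fired() throws Exception; // e.g. poll Twitter for a hashtag
    }

    interface Action {
        void run() throws Exception; // e.g. raise a GPIO pin to wave a robot arm
    }

    // Check the chosen input every few seconds and fire the chosen output.
    public static void watch(Trigger trigger, Action action) throws Exception {
        while (true) {
            if (trigger.fired()) {
                action.run();
            }
            Thread.sleep(5000);
        }
    }
}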

Aside from some network hiccups, the DIY activity was a big hit. We also brought along Noel’s Raspi target to entertain kids of all ages.

Noel also gave a short talk on Saturday; check it out here or on YouTube:

As he promises in the video, Noel expects to share the code he and the guys wrote for IFTTPi soon, and I’m going to nag him for a technical post on the whole shebang, sooner rather than later.

A nostalgic note: both Paul (@ppedrazzi) and Rich (@rmanalan) were at Maker Faire on Saturday, and they happened to come visit us at the Java DIY area, creating an impromptu AppsLab reunion. Great to see those guys; they helped lay the groundwork for what we’re doing today.

Update: Almost forgot my favorite story from Saturday. We were in the Expo Hall, no AC, and it was hot and stuffy. Someone asked facilities if the doors could be opened to let some breeze into our area. Facilities said they had no more doorstops, and a guy said he could print some up on his 3D printer. I don’t know if he did, but the doors were eventually opened.

You wouldn’t get that kind of answer at any other event. Hilarious and very typical of the Maker Faire. No problems, only solutions.

Another update: We plan to take a scaled-down version of IFTTPi on the road with us to conferences, so look for it, possibly as soon as Kscope14 next month.

Enjoy some shots I took around the Faire and find the comments.


Randomly Humorous Correlations

May 16th, 2014 Leave a Comment

Love this so much: Tyler Vigen’s Spurious Correlations randomly compares data sets and finds correlations between them. Frequently, hilarity ensues.


Like The Onion, I’m sure its “findings” will be misrepresented as serious news soon.

h/t Flowing Data

 

Tweets from Jeremy’s MakerCon Keynote

May 15th, 2014 2 Comments

Yesterday, our fearless leader, Jeremy Ashley (@jrwashley), gave a keynote at MakerCon. Unfortunately, I had to miss it, but the guys reported a positive reception, and Noel (@noelportugal) pointed me to tweets for rapid reactions.

Here’s a sample:

(Three embedded tweets reacting to the keynote.)

Yeah, I cherry-picked, but you can read the reactions yourself if you don’t believe me.

The keynote was recorded, and if it becomes freely available, I’ll embed it here. We also have awesome pictures on the way.

So, stay tuned for more Maker Week content.

OK Google, Where’s My Car?

May 15th, 2014 3 Comments

Google Now recently added a Parking Location card to help you solve the classic dude-where’s-my-car problem. According to The Verge:

The company’s Google Now assistant will now recognize when you’ve left a moving vehicle and automatically keep track of your car’s last location. There’s no magic happening here: Google does all of this using your smartphone’s bevy of sensors. It’s essentially guesswork, and the company readily admits that it may sometimes guess wrong. “You may see parking location cards even if you didn’t park your car,” the company says on a help page addressing the new feature. “For example, these cards could show up after you exit a bus or a friend’s car.”
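Google hasn’t detailed the mechanism, but Android’s ActivityRecognition API (part of Google Play Services) hints at the kind of guesswork involved. This is pure speculation on my part: when the detected activity flips from in-vehicle to on-foot, remember where that happened.

import android.location.Location;
import com.google.android.gms.location.DetectedActivity;

// Speculative sketch, not Google's code: record the location at the moment
// the phone's detected activity transitions from IN_VEHICLE to ON_FOOT.
public class ParkingGuesser {
    private int lastActivity = DetectedActivity.UNKNOWN;
    private Location parkedAt;

    // Feed in each activity update along with the device's current location.
    public void onActivityUpdate(int activityType, Location current) {
        if (lastActivity == DetectedActivity.IN_VEHICLE
                && activityType == DetectedActivity.ON_FOOT) {
            parkedAt = current; // probably just parked (or stepped off a bus)
        }
        lastActivity = activityType;
    }

    public Location getParkingLocation() {
        return parkedAt;
    }
}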

A directionally challenged person like me will enjoy this feature. Of course, Google can get me to the parking structure, but I’ll probably still have to walk around clicking the alarm, playing Marco Polo with the car.

I accidentally used this feature today when I dropped off a rental car at San Jose International Airport.


Although they haven’t yet, I sincerely hope Google adds the ability to recall the Parking Location with the phrase “OK Google, where’s my car?”, which would make for any number of fantastic Easter eggs. OK, so maybe only Chet (@oraclenerd), Jeff (@thatjeffsmith) and I would enjoy them, but still, fantastic to us.

Google Now continues to impress me, and it’s becoming a necessary travel assistant.

Thoughts? Comments, find them.

A Week with the Amazon Fire TV

May 14th, 2014 Leave a Comment

Even though I’m more than content with the Chromecast, the Amazon Fire TV caught my eye for a very simple reason: Amazon Prime content.

I’ve been buying digital content from Amazon since they launched their MP3 Store, the first place to buy music without any DRM, back in 2007, and Amazon is the only place to get stuff like Dora and other Nickelodeon shows. Yeah, that’s a parenting problem.

Amazon doesn’t support the Chromecast and probably won’t anytime soon, and they don’t have an Instant Video app for Android, which is limiting for my household. So, I’ve been stuck using the Amazon app for my Smart TV, which isn’t ideal.


Aside from the fact that it’s slow to launch and laggy, I can guess how much development support Amazon gives its Smart TV apps compared to its Android, iOS and Fire OS apps.

So, when the Fire TV was announced, it immediately intrigued me. The capper came a couple of weeks later, when Amazon announced that much of HBO’s library would stream exclusively to Prime Instant Video. I’ve been putting off watching “The Wire” for years, and I can’t wait to rewatch “Oz.”

Another interesting aspect of the Fire TV is that Amazon mined comments on other, similar set-top boxes (e.g. Apple TV, Roku), using them as a large focus group of sorts. It’s rumored Amazon did this prior to launching the Fire HD as well.

This makes a ton of sense. Why wouldn’t Amazon look to compete in strategic areas where demand is strong and where they have a willing group of test subjects? This has to be in the fast-follower handbook.

Anyway, it’s been about a week since I got my Fire TV, and it’s about time to share some impressions.

Pros

Amazon claimed the Fire TV was fast, and the Fire TV is fast, screaming fast, like instantaneous fast. Compared to other ways I’ve used Instant Video, including the web app, it’s a joy to rummage through Amazon’s streaming collection, and videos play immediately. No snake chasing on a turntable, no spinning beach ball.

The Fire TV has Netflix and Hulu Plus apps, which means I can consolidate all my viewing on a single device. That’s nice, but the best part for me is that these apps also load significantly faster than the Chromecast or Smart TV versions.

Cons

I’ve read some poor reviews of the Fire TV; in fact, most of the reviews I’ve seen have been mixed at best. I suppose this is because the reviewers are comparing it to other set-top streaming boxes. I’ve only used the Chromecast, which I still love, and the Fire TV measures up well against it.

The two devices actually fit nicely into my household, with the Fire TV on the primary, living room TV, and Chromecasts on the smaller TVs in other spots.

My only con is Voice Search, which Amazon is pushing hard as an innovative, differentiating feature. I tried it for kicks, and it took a couple of tries for it to understand me. I may search for Gary Busey for giggles, but mostly, this just feels like a gimmick.

So, for me at least, the Fire TV is awesome. It fills a need and does so very nicely.

Thoughts? Find the comments.

The Nymi Nears Production

May 13th, 2014 7 Comments

Noel (@noelportugal) chatted with the Bionym guys at SXSW earlier this year, and I know I speak for all of us here in the ‘Lab when I say I’m very stoked to get my Nymi and start playing with it.

Check out Techcrunch’s writeup and demo video.

Since I can’t embed TC’s video for some annoying reason, here is ninja cat instead.

Just pretend really hard, or watch the video over at TC.

Aside from just being an end user (please, AgileBits, integrate Nymi with 1Password), I’m super excited about the cool demos we can do with the wristband and the SDK. Already noodling ideas for next year’s cool demo.

Miscellaneous Cool Stuff

May 12th, 2014 5 Comments

I’m finally harvesting my open browser tabs, sharing some of the cool stuff that has been patiently waiting to be shared with you, gentle reader.

Chrome Remote Desktop for Android

Chrome Remote Desktop has been quietly awesome for a few years, and Google recently extended it to Android. So now, I can troubleshoot my parents’ computer from the road. Yay.

Project Ara Gets Closer

Squee! I’m not really sure why, but I’m so geeked for Project Ara phones, i.e. Google’s upcoming modular smartphones. Design your phone from the sensors up to the screen size, or something like that.

Everything is DIY now, so why not?

PiPhone

Speaking of everything being DIY now, some clever bloke built a Raspberry Pi smartphone.

The UX Drum

Longtime friend of the ‘Lab, Floyd Teter (@fteter) wrote a post about the importance of UX. I concur.

Linksys WRT1900AC

And finally because everyone gets excited about networking gear, especially this time of year, I give you my latest bit of nerd pr0n, the Linksys WRT1900AC. Short version, it’s a really fast wifi router, something every telecommuter should covet. Want the long version? Techcrunch did a review.

Speed comes at a price, namely $250, but I’m asking myself: why pay for a big pipe when wifi has always been the choke point?

And Finally

Things Fitting Perfectly into Other Things

How about you? Care to share your open browser tab nuggets?

You know what to do.

Projecting Multiple Android Device Screens

May 9th, 2014 4 Comments

On this team, we all carry Android devices, lots of them, including phones. Even Noel (@noelportugal) has finally been converted.

Everyone on the team, minus me, is an Android developer, and as they build for new devices like Google Glass and the upcoming Android Wear watches, the ability to project screen images becomes more essential.


Case in point, at a recent sales event, I was showing a Glass app and companion tablet app that Anthony (@anthonyslai) and Raymond built as a concept demo for Taleo interview evaluations.

Using Glass for the first time requires training, so I typically use the screencast option of the MyGlass app to see what the wearer sees. In this case, I was also showing an app on the tablet, so I couldn’t keep the screencast running.

Similarly, when I’m showing Glass or any Android apps to a room of people, projecting the screen images is a bit of an adventure.

Necessity being the mother of invention, Anthony decided to address our collective need for better Android projecting by modifying Android Projector, an open source Java project, to support projecting from multiple Android devices.

You can find his code on GitHub.

Android Projector requires adb, part of the Android SDK. If you have adb, run:

adb devices

And copy the ID of the device you want to project. Then, from the directory where you downloaded Anthony’s version of Android Projector, run:

./android-projector <device ID>

Want to show two devices? Open another terminal session, copy the other device ID, rinse, repeat.

And voilà, you can see both devices’ screens. If you’re giving a demo, you can now project your laptop’s screen to show all of them at once.


Google Glass and Nexus 7 screencasts on the same machine.

Pretty cool, eh? Find the comments.

It’s Almost Maker Week

May 8th, 2014 5 Comments

Maker Faire Bay Area (@makerfaire) is coming up quickly, May 17 and 18, and we’re excited to be participating.

mf14ba_badge_150

Here’s the backstory on why. Ultan (@ultan) and Justin (@kestelyn) spoke at Maker Faire 2012 (video), and the Java (@java) team is a major sponsor of the Maker Faire.

Unfortunately for him, Ultan won’t be around to attend, so when the Java team came looking for ideas for this year’s Faire, he asked if we’d help. Noel (@noelportugal), a longtime maker as well as a past Maker Faire attendee, jumped at the chance to represent Java and Applications User Experience at this year’s installment.

But wait, there’s more. This year, on May 13 and 14, the week before the Maker Faire, there will be an aptly named conference: MakerCon. This two-day conference focuses on the business of making and will be hosted at the Oracle Conference Center.

Our fearless leader, Jeremy Ashley (@jrwashley), himself an avid maker and tinkerer, will be delivering a keynote on May 13 to kick off the event.

So, for us at least, next week is Maker Week.

Noel has been feverishly assembling a DIY activity for the Java Embedded Playground at the Maker Faire involving some Internet of Things things and a bunch of Raspis. He teased these pictures to give a taste.

Not to spoil the fun entirely, but what he’s building is a set of triggers and results (a la IFTTT), all automated. Visitors will choose an input (e.g. a sensor), a condition (e.g. keyword, hashtag) and an output (e.g. robot arm, Sphero) and watch the magic of IoT happen.

I’m excited to try this myself, especially the Sphero, which looks like outrageous fun, h/t to Tony for that one.

If you’ll be attending the Bay Area Maker Faire next week, stop by, say hi and try out Noel’s activity. Bonus: we’ll be hanging with Hinkmond Wong (@hinkmond), he of tweeting turkey fame.

Update: Worth noting that longtime friend and honorary member of the ‘Lab, David Haimes (@dhaimes) will be joining us in the Maker Faire tent to help over the weekend. Come by and see us in all our IRL glory.

Kicking the Remote Presence Device Tires with Beam

April 23rd, 2014 4 Comments


Remote Presence Devices, or RPDs, are finally becoming mainstream with products such as the Beam from Suitable Technologies. Today I kicked the tires of one (virtually, of course), thanks to friend of the ‘Lab Dan Kildahl, and toured the newly renovated marketing offices at Oracle HQ. My first impression was really good. Around family and friends, I am known as the clumsy game player; yeah, I’m the one who gets constantly stuck against walls during first-person video games. But with the Beam interface, I was able to easily navigate around the floor. I didn’t hit any walls, and that is good news.

I asked Dan how it was received around the office. He mentioned mixed opinions, which is completely understandable. All these new technologies are certainly changing social norms (see Google Glass). But as a technologist, I just can’t help but feel excited.

What are your thoughts?