Age of the Algorithm

Hot on the heels of last week’s TED video on scary algorithms comes this piece from ReadWriteWeb (@rww) on Google Places’ new recommendations algorithm.

Google’s New Traveler Recommendations Point Towards an Age of Algorithms

As Marshall (@marshallk) notes, recommending places you might like based on what people similar to you like sounds pretty simplistic, but the long-term implications are bigger because Google is, in fact, trying to figure out what you are like in order to recommend (ahem, advertise) things to you.
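The “people similar to you” idea Marshall describes is, at its core, collaborative filtering. Here’s a minimal sketch of the simplest version, user-based filtering with cosine similarity; the users, places, and ratings are invented, and Google’s actual algorithm is unpublished and far more sophisticated.

```python
# Minimal user-based collaborative filtering sketch (illustrative only;
# real systems mean-center ratings and scale to millions of users).
from math import sqrt

# Hypothetical place ratings: user -> {place: rating}
ratings = {
    "you":   {"Cafe A": 5, "Museum B": 4},
    "alice": {"Cafe A": 5, "Museum B": 5, "Park C": 4},
    "bob":   {"Cafe A": 1, "Museum B": 1, "Diner D": 5},
}

def similarity(a, b):
    """Cosine similarity over the places both users rated."""
    shared = set(a) & set(b)
    if not shared:
        return 0.0
    dot = sum(a[p] * b[p] for p in shared)
    norm = sqrt(sum(a[p] ** 2 for p in shared)) * sqrt(sum(b[p] ** 2 for p in shared))
    return dot / norm

def recommend(user):
    """Score places the user hasn't rated, weighted by user similarity."""
    scores = {}
    for other, theirs in ratings.items():
        if other == user:
            continue
        sim = similarity(ratings[user], theirs)
        for place, r in theirs.items():
            if place not in ratings[user]:
                scores[place] = scores.get(place, 0.0) + sim * r
    return sorted(scores, key=scores.get, reverse=True)

print(recommend("you"))  # candidate places, ranked by similarity-weighted score
```

The interesting part isn’t the math, which is decades old; it’s that the `ratings` dictionary is exactly the profile of “what you are like” that the post describes Google building.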

Obviously, Google knows about algorithms and how to write smart ones, and they have the computing power, some 900,000 servers, to make these algorithms hum.

So, the future undoubtedly holds more algorithms that use what you tell the service provider (Google, Facebook, Twitter, potentially a mixture or all of the above) to create better information (ahem, advertising) targeted at you.

Sounds a bit 1984, but in an age where we’re inundated with information, making decisions is becoming increasingly difficult because each additional input yields diminishing returns. A borderline creepy algorithm might be just what the doctor ordered to cut through all the inputs we have to consider now.

Coincidentally, I was just thinking about parenting today in this very context, wondering if parenting had reached its apex already. With so many different inputs to consider (doctors, research, books, magazines, media, relatives, etc.), parents face a diminishing-returns problem. We try to consume and analyze all the information available, just in case something new makes a difference, but as the number of factors to consider rises, it becomes increasingly difficult to make intelligent (and presumably better) decisions.

I submit the vaccinations-autism mess as evidence.

Could algorithms help? Definitely, although parents would want transparency on how the algorithm weighted and scored its inputs and the ability to tweak these factors to some extent.
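To make that transparency point concrete, here’s a sketch of what an inspectable, tweakable scoring algorithm might look like: every input source carries an explicit weight the parent can see and adjust. The sources, scores, and default weights below are invented purely for illustration.

```python
# Sketch of a transparent, user-tweakable scoring algorithm: every input
# source gets an explicit weight the user can inspect and adjust.
# Sources, scores, and default weights are invented for illustration.

DEFAULT_WEIGHTS = {
    "pediatrician": 0.4,
    "peer_reviewed_research": 0.3,
    "parenting_books": 0.2,
    "relatives": 0.1,
}

def score(inputs, weights=None):
    """Weighted average of per-source scores (each in 0..1).

    Returns the overall score plus a per-source breakdown, so the
    user can see exactly how much each input influenced the result.
    """
    weights = weights or DEFAULT_WEIGHTS
    total_weight = sum(weights[s] for s in inputs)
    breakdown = {s: weights[s] * v for s, v in inputs.items()}
    overall = sum(breakdown.values()) / total_weight
    return overall, breakdown

# A parent who trusts relatives less can simply turn that weight down.
inputs = {"pediatrician": 0.9, "peer_reviewed_research": 0.8, "relatives": 0.2}
overall, breakdown = score(inputs)
print(overall, breakdown)
```

The breakdown is the transparency the post asks for: instead of a single opaque recommendation, the parent sees which input drove the score and by how much, and can re-run it with different weights.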

Anyway, algorithms are the future, but how much should they be able to influence? Do you trust code to make decisions and recommendations for you? Maybe not if you write code 🙂

Find the comments.

About Jake

a.k.a.: jkuramot

6 comments

  1. Earlier this year I was helping to build a database to be used by researchers based on state medicaid records for the past 20 years. The gentlemen I was working for had, I believe, 2 autistic children.

    The researchers were a father/son team, here’s an article on the father. The son was a…well, he wasn’t a very nice man.

I didn’t buy into the vaccine thing, and I still don’t think there’s enough evidence. Given the complexity of the human system, I think it’s incredibly difficult, bordering on impossible, to say A caused B with absolute certainty. Ultimately, I’m neither here nor there on the issue; I definitely don’t believe it affected Kate (what might have affected her was the 6 weeks in a phenobarbital coma, but I digress). I don’t believe we’ll ever know why Kate is the way she is.

    Anyway, algorithms good. 

  2. Complex algorithms are the logical next step to harness the glut of information the intertubes has brought to our fingertips. The question now is which algorithms will get funding, the ones that do real good or the ones that serve ads?

    I can only hope that readily available sets of big data and cheap cloud-based computing (e.g. Amazon EC2) will allow the former to bubble up or that someone will found a VC fund to support the algorithms for good. Otherwise, we’re stuck with whatever Google decides to do on the philanthropic front and unfunded research.

  3. It will be both amazing and scary to see how decision-making algorithms develop over the next couple of decades. Presently, the algorithms are starting to get there, but have a long way to go – when Facebook serves up some weird ad to me, at least I understand WHY Facebook thought I would be interested in the ad.

    Regarding which algorithms will be funded, don’t forget that governments are also a source of algorithm funding. While governments will obviously fund items that are in THEIR best interests (as an employee of a company in the homeland security space, I’ll certainly argue that homeland security algorithms should be funded), governments have a lot of different “best interests” to pursue. I’m sure that some government agency will fund the development of algorithms to assist parents whose kids are growing up in rain-soaked areas. 🙂

    The Luddite in me worries that too much dependence on algorithms poses great risks to us. For example, let’s say that I’m letting my 2025 Ford drive itself to an oil change – what if the programmers included logic to steer the car away from J & J Auto Service and instead direct me to the local Ford dealer by default? Then we’ll have a clamor for “open source” cars that will make the phone wars seem tame by comparison.

  4. Good points. Obviously, algorithms are only as good as their inputs and code, so as with Google’s big search algorithm, constant iteration with A/B-type testing is a must.

    Governments would definitely be interested, but this is a dicey area for them, given the need for transparency and the private-sector talent required.

    Re. FB, I think they’re way behind the curve, and given how much data they collect, their algorithms should be much better. They can’t even do people search very well, which should be where they shine. I wrote a post on that several years ago, and they still fail.

  5. Glad you’re covering this type of topic and its far-reaching implications. Someone needs to think beyond the short-term stuff.
