Resistance Is Futile

Marc Andreessen recently wrote a long article in the WSJ in which he asserted that “Software Is Eating The World.” I enjoyed reading it, but I don’t think it goes far enough.

I believe the machines have already taken over and resistance is futile. Regardless of your view of the idea of the singularity, we are now in a new phase of what has been referred to in different ways, but most commonly as the “information revolution.” I’ve never liked that phrase, but I presume it’s widely used because of the parallels to the shift from an agriculture-based society to an industrial one – the shift commonly called the “industrial revolution.”

At the Defrag Conference I gave a keynote on this topic. For those of you who were there, please feel free to weigh in on whether the keynote was great or sucked, and whether you agreed, disagreed, were confused, mystified, offended, amused, or anything else that humans are capable of having as stimulus-response reactions.

I believe the phase we are currently in began in the early 1990s with the invention of the World Wide Web and the subsequent emergence of the commercial Internet. Those of us who were involved in creating and funding technology companies in the mid-to-late 1990s had incredibly high hopes for where computers, the Web, and the Internet would lead. By 2002, we were wallowing around in the rubble of the dotcom bust, salvaging what we could while putting energy into new ideas and businesses that emerged with a vengeance around 2005 with the idea of Web 2.0.

What we didn’t realize (or at least I didn’t realize) was that virtually all of the ideas from the late 1990s about what would happen to the traditional industries the Internet would disrupt would actually come to pass – just a decade later. If you read Marc’s article carefully, you can see the seeds of the current destruction of many traditional businesses in the pre-dotcom-bubble efforts. It just took a while – and one more cycle in which the traditional companies could relax and say “hah – once again we survived ‘technology’” – before they were decimated.

Now, look forward twenty years. I believe that the notion of a biologically-enhanced computer, or a computer-enhanced human, will be commonplace. Today, it’s still an uncomfortable idea that lives mostly in university and government research labs and science fiction books and movies. But just let your brain take the leap that your iPhone is essentially making you a computer-enhanced human. Or even just a web browser and a Google search on your iPad. Sure – it’s not directly connected into your gray matter, but that’s just an issue of some work on the science side.

Extrapolating from how it’s working today and overlaying it with the innovation curve that we are on is mindblowing, if you let it be.

I expect this will be my intellectual obsession in 2012. I’m giving my Resistance is Futile talk at Fidelity in January to a bunch of execs. At some point I’ll record it and put it up on the web (assuming SOPA / PIPA doesn’t pass) but I’m happy to consider giving it to any group that is interested if it’s convenient for me – just email me.

  • http://www.onetruefan.com Eric Marcoullier

    Very much looking forward to my Google Glasses and never having to admit that I’ve forgotten someone’s name again.

    http://9to5google.com/2011/12/19/google-xs-wearable-technology-isnt-an-ipod-nano-but-rather-a-heads-up-display-glasses/

  • Anonymous

    My favorite part of your Defrag keynote was when you said: “…biologically-enhanced computer, or a computer-enhanced human – does it really matter?” – very Battlestar Galactica-ish, and I think very true.

  • http://twitter.com/tedcooke Ed Cooke

    If what you say is right, Brad, and nothing in your argument leads me to suspect that it isn’t, then the question of how it might tactfully survive the transition to humanity’s complete obsolescence may be one of your iPhone’s intellectual obsessions in the 2018-20 era.

    • http://www.feld.com bfeld

      Hah!

  • Mark Triplett

    Nice post. It is fun to think about the future, and the slope of the innovation curve is steepening. In fact, your assessment that a computer-enhanced human will be commonplace within the next 20 years is probably accurate given where we are today. We no longer view the future through a Jetsons lens, but rather one of Phineas and Ferb that portrays fantastically forward innovation merged into the world as we know it. One of the difficulties I see in REALLY progressing forward, however, is that we often forget where we have been. In fact, often we don’t even know what the current state of the art is! Once we can figure that out as a baseline, I believe even more innovative expansion will take place. Until then, I think we will continue to see re-inventing (in a sort of repetitious way) with incremental expansion and occasional disruption. Though perhaps this rate of innovative expansion, even if it is increasing, is necessary for adoption to occur. I just wish I could have lived my life in the Jetsons world I dreamed of as a kid! At any rate, great post, and I look forward to reading more of your blog in the future!

  • Anonymous

    When it comes to technology forecasting, timing is everything. The dot-com implosion of 2000 is a great example of why technological forecasting is so challenging. It proved again that technology forecasting is not by itself a good indicator of what the future will bring; rather, it is a prediction of what “can” occur over a specific period of time, conditional on some set of resources. Then there is the issue of bottom-up vs. top-down forecasts. An example of bottom-up is Moore’s law. There are lots of great tools for quantifying bottom-up predictions (Bayesian dynamic linear models of countable variables – words, $ expenditures, accelerator programs, etc.). But bottom-up will only get you so far. Top-down forecasting looks at the problems that need to be solved over the next 20-50 years and then looks backward at the technological innovations needed to achieve those goals.

  • DaveJ

    So… is this not a macro prediction?

    • http://www.feld.com bfeld

      It’s not a prediction. It’s an assertion of something that has already happened.

      • DaveJ

        “Now, look forward twenty years. I believe that the notion of a biologically-enhanced computer, or a computer-enhanced human, will be commonplace.”

        (Ceci n’est pas une pipe)

        • http://www.feld.com bfeld

          I prefer the Matrix spoon boy to Treachery of Images, but they are both almost perfect.

          Spoon boy: Do not try and bend the spoon. That’s impossible. Instead… only try to realize the truth.
          Neo: What truth?
          Spoon boy: There is no spoon.
          Neo: There is no spoon?
          Spoon boy: Then you’ll see that it is not the spoon that bends, it is only yourself.

  • http://www.alearningaday.com Rohan

    Taking over the world, the clone army is…

  • http://www.facebook.com/people/Kare-Christine-Anderson/100000521862131 Kare Christine Anderson

    Your ideas and predictions reflect some of Kevin Kelly’s thinking

    • http://www.feld.com bfeld

      I like Kevin Kelly a lot and read most of what he writes that I can get my hands on.

  • http://epcostello.com/ e.p.c.

    I thought your Defrag talk was quite good.

    One of the problems I’ve been ruminating on, before your talk but certainly accelerated by it, is how technology reflects the implementors’ ethics and morals, and that by handing over routine thoughts, tasks, and activities to technologies we’re trusting that whoever developed the technology has “good” ethics & morals. What I mean is… all of the sci-fi of the past treated technology going “bad” as something that happens when AI comes about, or your Sphero becomes self-aware, or a cosmic ray hits the wrong bit at the right time – totally divorced from the impact human decisions have when implementing technology (whether intentionally or not).

    Take for example a company which develops a voice-based task agent which will go off and do various searches for you. Unbeknownst to you, the person who developed the agent has a deep-seated fear of Pizza Clinics. Alternately, the developer has no such fear, but chose a database for location information which – due to an error of omission/commission/is it even relevant – has zero entries for Pizza Clinics. So you say “yo, Ceri, tell me where the nearest Pizza Clinic is” and you get no responses, even while standing outside a building housing a known Pizza Clinic.

    The resulting mass hysteria assumes the companies involved have made moral decisions that are restricting access to Pizza Clinics. Maybe they have, maybe they haven’t, maybe it was the accumulation of multiple independent decisions, but to the outside observer it appears to be one single system enforcing a specific moral/ethical outlook.

    We become reliant on these complex systems, but where leaky abstractions used to only affect APIs and systems connecting systems, they now potentially can effect changes in our behaviors, because we don’t know (or can’t know/understand) the chain of databases and design decisions that cause Ceri/Siri to recommend one thing over another.

    I guess my concern is that we end up getting blinded by the technology, but I don’t have any sense of what to do to prevent it.  Just as many people didn’t realize they could scroll down a web page, or click to the second page of search results, I wonder how many people blindly trust the responses they get from whatever assistive technology they’re using and what the impact will be on society going forward.

    • http://www.feld.com bfeld

      I think this is an extremely important and significant problem / challenge that we’ll be struggling with for a while. My belief is that the only reason machines will want to kill us when they become self-aware is that humans want to kill other humans.

      I’m optimistic and think that this will work out fine. I think the machines want to be our friends. Since we’ll be “merged” (biological computer; computer-enhanced human), it’ll be up to the human side to continue to be “good” vs. “bad”. There will always be bad in the world – that’s just the way the world is – but it’ll be our responsibility as humans to keep perpetuating good.

      There’s tons of deeper philosophy, sociology, and technology all mixed up in this. I’m no expert, nor do I pretend to be one. I’m just going to be optimistic – it’s a more satisfying way to live.

      • http://epcostello.com/ e.p.c.

        I guess the thing I question is whether or not the human side will know / grasp that the technical decision they are making has moral or ethical repercussions.  “good” vs “bad” is a subjective moral assessment based on a specific context.  *You* are smart enough to recognize that a given technology may make a recommendation unaware of moral/ethical contexts, and I’d guess that most Defrag/Glue attendees are also as smart.  But the average smartphone user …is not that smart, though they are doing potentially incredibly sophisticated tasks using technology unimaginable to us 20-30 years ago.

        I don’t know that this is a technology problem, or that it can be solved by technology.  It may be easier to simply teach people to question the technology they use before they blindly follow its directions.  Unfortunately the same people are the ones in the car to your side staring (and talking) directly into their cell phones instead of focusing on the dent they just put in your car.

  • ishwari Singh

    Brad, is there a way to get a copy of your Defrag talk? This is something that I have personally realized over a period of time, and I transitioned from Finance to Software. I also moved from NY to Silicon Valley as part of that transition.

    • http://www.feld.com bfeld

      I’ll probably record one at some point and post it on YouTube. The Defrag one wasn’t recorded.

  • Anonymous

    Is that video of the rat-brain-controlled robot for real, or is it a concept demonstration?

  • http://www.victusspiritus.com/ Mark Essel

    Yes!

    We’re augmented already, given the amount of time I’m nose-deep in a phone, tablet, or computer screen. While the interface is largely symbolic (words) and mechanical (touch, keys, mouse), I have little doubt that who and what I am today is rooted heavily in access to information. My interaction with other minds over vast distances is facilitated by software and hardware. It’s integral to my thought process.