Societal Structures Change Much Slower Than The Machines

I’m at Startup Iceland today. I like Iceland – this is the second time I’ve been here. It’s the closest place on earth I’ve been to Alaska, which I love dearly. And it’s fun to see and hang out with my friend Bala Kamallakharan. As a super bonus, Om Malik – who I adore – is also here.

Om and I did a fireside chat with Bala. At the end, Bala asked about the future and what we were uncomfortable with. Neither of us is uncomfortable. Instead, we are both optimistic and intrigued by what is going on. Om talked about his view that this is the most exciting time to be alive and went on a riff about what is in front of us.

I started with my premise – that the machines have already taken over and are just waiting very patiently for us to catch up. They are happy to let us do a lot of work for them, including feeding them with data, building homes for them, and connecting them together. In the meantime, they are biding their time, doing their thing, alongside us.

If you wind the clock forward 50 years, our current state will be incomprehensible to that future human. The pace of technological change at all levels is accelerating faster than we can fathom. Some people are pessimistic and now concerned about the notion of a real advanced intelligence. I’m optimistic and accepting of it, not fighting the inevitability of the path we are on or being in denial about our ability as a society to control things.

This is the rant I ended up on. Human structures change slowly. It’s unevenly distributed based on geography, culture, and political philosophy. Our legal system lags far behind what is actually happening, and as a result we are in the middle of a bunch of debates around technology, including things around privacy, net neutrality, data storage, and surveillance. Our existing approach as a species to dealing with these challenges is painful to watch from the future.

It’s fun to ponder how quickly things are changing along with how badly certain parts of society want to keep them from changing, hanging on to the “way things are” or even the “way things were.” Don’t ever forget the sound of inevitability.

  • The machines have already taken over? Reading a bit too much sci-fi these days?

    • Just accepting the inevitable. I’ve been saying it for a while.

      • Matt Kruza

Can you elaborate a little more? I can’t tell if you are just being poetic / artful with the words. What do you really mean by taking over? At a deeper level all I see is increasingly (amazingly, in fact) complex machines that are still 100% controlled by humans. Computers will likely never have free will (in my opinion), and this will always ensure they are subservient to us. Have had some good online discussions with Albert from USV on this. Have reached the opinion that your belief about “free will” is essentially the key to whether you believe in a “machine takeover” or “humans leveraging machines to make life easier/better.”

      • Sorry to lob in the grenade then disappear for a couple days. 😉

        If I follow the thread, you’re saying that the machines are already sentient and controlling everything and that we’re blissfully unaware of it and they won’t tell us?

Kinda hard to swallow that. It’s the old unprovable-assertion circular argument. Not a very tenable position.

      • I was trying to remember this cuz this stuff reminds me of it. Did you ever read this: It’s pretty gruesome.

        • I haven’t read it. It looks like fun though so I’ll grab it.

• If they had – would we know?
  If not, then `that they have` is an axiom you can reasonably base a hypothesis on

  • I don’t really want smart machines – machines that I generally understand today because I develop complex software systems to control stuff – to assume any greater degree of autonomy.

“Human behavior today sucks in significant ways wrt social issues: racism, ethnic + religious hatred, classism and plain old greed…” reported Captain Obvious.

    What if the machines develop super-forms of these traits + behaviors?

    • I think the machines will transcend us, not emulate us or be a super us.

• It’s an interesting theory, but I don’t see a singularity – if it is even possible – occurring w/o a broad range of human deficits infecting the machines.

        To imagine otherwise suggests there’s some near-magic point at which machines become [near-]sentient + purposefully eschew these traits. I love me some strong AI, but it has to originate w/ humans + we *are* this way.

• Considering the miracle of life, and our lack of ability as a species both to respect it and understand it fully, the AI “magic” you mention seems entirely possible. Even likely. It may take place right before our very eyes, without humans understanding what they are witnessing. This is both amazing, and the root of our potential discord with self-realized AI. Thoughts?

          • I don’t doubt my own capacity to miss things, but the ‘how’ here is what I’m having difficulty imagining. If humans do manage to create strong AI that can learn + adapt, is that really a machine?

            We’re already engineering life in the lab (IVF, DNA splicing etc), so perhaps the ‘machine’ aspect is emphasized only in our science-fiction-y thoughts abt how this plays out.

          • Yes. Science fiction has polluted our understanding of terminology and relevance to some degree.

The “machine” part seems almost irrelevant to me, since humans are also (biochemical) machines. Self-awareness, self-expression, desire to protect/survive, empathy/willingness to sacrifice for another even at one’s own risk; these are the foundations of sentient, intelligent life.

As far as the “artificial” in AI goes, “intentional human manufacture” doesn’t seem to affect the above criteria. Even humans could be considered artificial, if we fully understood the influences/inputs that caused us to exist in the first place as a species. These influences were not … human, initially. I.e., bio-genetic mutations from other species.


• Sorry I couldn’t join – my travel schedule has been crazy lately.

    • Next time – we have fun!

  • NikkiBaidwanPiplani

I see very interesting threads between this post and your post after you saw Ex Machina – I had a very prolonged conversation with my husband about how we shape/embrace/deny the future and its inevitability after that movie. At the end, I referred him to your post on the topic as further reading/food for thought.

    • Yup – it’s in my brain a lot these days. And expect it’ll pop up even more in my writing.

  • mark gelband

    Om must know that humans from time immemorial have expressed: “the most exciting time to be alive…”

    your premise, “that the machines have already taken over and are just waiting very patiently for us to catch up,” is far more interesting.

personifying machines with emotion – “happy” – seems premature yet. it’s the epiphany of self-consciousness – that unpredictable switch – that no one can portend.

    when does an AI become self-aware? is this latency while we build homes, power, feed data, etc. mere projection?

    similar to the discussion of ex-machina, AI will likely be a personification, projection of the fantasy of the creator – sexual objectification of nathan, until the switch of self awareness – time – beginning and end. it seems that those that line up with fear AI – fear the darkness in themselves and others, and those that line up with the AI to serve man, have and want more efficient servants.

    those of us who see inherent goodness in others (though we often fail ourselves or fail to see it in the world) want to see goodness in AI.

    when it comes to societal structures – law, social norms, govt, our progressive little municipal hamlet can’t even deal with AirBnB or Uber, let alone when drones replace bike messengers, or when i deploy my advanced AI protection and collection agents. fun to think about.

• Good point on not knowing whether the machines are happy, or even have human-like emotions. I generally say it tongue in cheek, but the more important point is that we endlessly map anthropomorphic function onto them, and that’s likely not correct.

      • mark gelband

        My tongue was firmly planted as well about happy. Personification is a useful literary tool and evidence – for me at least – of man’s need for anthropomorphic projection. Correct or not – we will ascribe our personal sense of human behavior on machines, dogs, tables – whatever.

        Philip K Dick explores this as well as anyone – especially as it relates to the naive arrogance of a god-like complex of creation. But it’s so cleverly explored in Flow My Tears the Policeman Said and through the subtext of Jason Taverner’s whimsical anti-hero’s journey.

  • awaldstein

I agree, and a nod that while slow, the changes are dramatic.

    Old enough to have got beat up cause I was a Jew, dog shit all over NY, littering as a way of life, all gays in the closet.

    AirBnB would not be possible if culture had not changed as dramatically as the software that ties it together has.

    We have miles to go. We’ve come really far.

    • There is such wisdom in the phrase “We have miles to go. We’ve come really far.” I like to remind people that it was around 100 years ago when women first got to vote…

• Would love to see that video of you, Bala and Om if it’s available.

    • I don’t think it was recorded but if it was I’m sure Bala will put it up at some point.

  • Hi Brad, I’m glad to see you writing about this. Are you familiar with Stewart Brand’s pace layers, which talks about the pace of change of different kinds of systems?

    Your point about software and machines being ahead of societal changes made me think of my colleague Matthew Milan’s writing/speaking on hacking pace layers through software. (This slideshow is not great but the notes are interesting)

    • I’m not familiar with Brand’s pace layers – thanks for the pointer to it.

  • While recovering from surgery last week, I lay in bed as an ant crawled up and over my chest. I couldn’t lift my arms and didn’t have the strength to blow it away.
    Of the two of us, the ant was the one in charge and it didn’t even care.
    Singularity or not, the scale of evolution between now and 2065 will not likely be much greater than the evolution between the half-billion year old insect and modern humans.
    Unfathomable change is inevitable, but we will still be governed by the laws of thermodynamics, supply and demand, and entropy, just like the ant and Agent Smith.
There will still be protons and electrons, matter and dark matter, and opposing forces working in complementary conflict with one another.
    It reminds me of that Twilight Zone where the astronaut ends up in an alien zoo. He shakes his head and says, “people are alike all over.” Likewise, see the entire HBO series ‘Deadwood.’
    Or am I mistaken?
    BTW, wonderful post and comments.