Ants and the Superintelligence

I’ll start with my bias – I’m very optimistic about the superintelligence.

Yesterday I gave two talks in Minneapolis. One was to an internal group of Target employees on innovation. In the other, I was interviewed by my partner Seth (for the first time), which was fun since he’s known me for 16 years and could ask unique questions given our shared experiences.

I can’t remember in which talk the superintelligence came up, but I rambled through an analogy I’ve come up with recently to try to simply describe the superintelligence, one I first saw in The AI Revolution: Our Immortality or Extinction. I woke up this morning thinking about it, along with one of the questions Seth asked me where my answer left me unsatisfied.

I’ve been reading most of what I could get my hands on about current thoughts and opinions about the superintelligence and the evolution of what a lot of people simply refer to as AI. I’ve also read, and am rereading, some classical texts on this such as Minsky’s The Society of Mind. It’s a challenging subject as it functions at the intersection of computer science and philosophy, combined with human efforts to define and describe the unknown.

My ants and the superintelligence rant is a way for me to simply explain how humans will relate to the superintelligence, and how the superintelligence will relate to humans.

If I’m a human, I am curious about and study ants. They have many interesting attributes that are similar to other species, but many that are unique. If you want to learn more in an efficient way, read anything written about them by E. O. Wilson. While I may think I know a lot about ants, I fundamentally can’t identify with them, nor can I integrate them into my society. But I can observe and interact with them, in good and bad ways, both deliberately and accidentally. Ponder an ant farm, or going for a bike ride and riding over an ant hill. Or being annoyed with them when they are making a food line across your kitchen and calling the exterminator. Or peacefully co-existing with them on your 40 acres.

If I’m an ant, there are giant exogenous forces in my world. I can’t really visualize them. I can’t communicate with them. I spend a lot of time doing things in their shadow but never interacting with them, until there is a periodic overlap that is often tragic, chaotic, or energizing. I benefit from their existence, until they accidentally, or deliberately, do something to modify my world.

In my metaphor, the superintelligence == humans and humans == ants.

Ponder it. For now, it’s working for me. But tell me why it doesn’t work so I can learn and modify my thinking.

Also published on Medium.

  • Sam

    Thought you did well explaining the metaphor yesterday evening at the Beta.MN event, Brad. I’m not sure I am fully with you on it, but it was clear then and is clear above.

    I have a feeling you’d really like “Sum: Forty Tales from the Afterlives,” where neuroscientist David Eagleman imagines 40 different alternative afterlife scenarios playing out. Highly creative, definitely thought-provoking, and shapes how you view the human condition in the here and now. Many of the scenarios also have a feel to them not unlike your ant metaphor. And it’s a short story format, very easy to pick up and put down as time allows.

  • irickt

    Wilson describes the ant society as a superorganism. In some thinking human society is a superorganism as well. The difference for your analogy is that while ants and humans evolved in parallel in a shared environment, AI will evolve from and in the human superorganism. Rather than thinking of ourselves in the shadow of some parallel being, I believe it will be more useful to think of AI as an extension of human culture, an inseparable peer of the genetics and chemical trails that define future humans.

    • Maybe. Maybe not. It’s conceivable that the superintelligence already exists and we are simply making steps toward understanding its existence.

      • irickt

        Ah. I took the frame of “humans efforts to define and describe the unknown” to be projection of human technology and society. You seem to have a broader metaphysics in mind.

  • Dave

Have you read the Cixin Liu Three Body Problem trilogy? He uses the same humans-as-ants analogy to drive the underlying story through the three books. Agree that when dealing with a superintelligence, we as humans will be the ant in that equation, and it’s an analogy that can work.

    • Loved it. Maybe that’s another place I got it from.

  • Eddie Wharton

This line of thinking reminds me of many religious metaphors for god(s). Other beings will make decisions that impact humans without humans being able to understand those decisions.

    • Or parallel universes. I believe that there are an infinite number of parallel universes operating in a way we can’t fathom.

  • understand the metaphor. how do you create superintelligence that doesn’t destroy humans? HAL.

I am optimistic about AI etc. I think it can lead to some really cool stuff. Just being able to multitask using AI could make each human more productive. I am empathetic to the fears of people who worry about it. AI will destroy jobs, but that’s not a huge deal. I am more worried about it going off the rails in medical or senior care situations and actually killing people.

    • A question to ponder is will the superintelligence kill more humans on an annual basis than humans kill? Remember, humans are very good at killing other humans.

      • Great point! We happen to be expert at it! (this is intense sarcasm for people that might be offended)

      • TheGeekOut

        A few thoughts to offer up….

Probably should be pointed out that most people are killed by men. In terms of the US, I think over 90% of murderers are male, when sex is known, and 98% of US mass murderers (legally defined as four or more people) are male. I don’t think those numbers change by much elsewhere in the world. So it’s not exactly a human phenomenon as much as a male one. This is not to say women are not violent or even less violent, but they are arguably less lethal, even in terms of committing suicide. I suspect AI funded and developed by a critical mass of women will interact with the world in a fundamentally different way than AI developed by all-male (or close to all-male) teams, the latter being the standard at present. I suspect, on average, any AI in the universe, even if not stemming from a human construct, will have a different experience interacting with a group of all women vs. a group of all men, which will elicit different responses.

At a macro level in terms of wars, the world is historically now actually more peaceful overall, with fewer perpetrators of lethal violence, although these few perpetrators have greater access to the means to inflict more harm than their predecessors. Most people, who now more than ever before in our history can easily kill each other both cheaply and efficiently, choose not to do so. So this begs the question, “What is to keep AI from evolving similarly and learning how to coexist over time for the benefit of all?” Albeit perhaps there will be an initial learning-curve bloodbath, but some of us are likely to survive.

Also, keep in mind that among humans, on average, we are more likely to kill those who are more like us than not, in terms of race, class, etc., with perhaps the exception of sex, as you are more likely to be killed by a man if you are a woman. So one argument is that it is more likely super-intelligence will try to take out its own kind before it will see humans as a direct threat to its programmed mission anyway. If anything, we are likely to die as a collateral damage afterthought while the AIs busy themselves with trying to decommission each other.

        Lastly and separately from above, next time you come to the Twin Cities, consider involving Blacks in Technology-Minneapolis (Sharon is nothing short of brilliant!!) or Neighborhoods Organizing for Change. It was great to have Seth acknowledge Black Women Equal Pay Day yet unfortunate that I didn’t even see one black woman in attendance. This is to the tech scene’s detriment as a lack of diversity among our ranks makes us more susceptible to group think and as a result, less competitive in our respective businesses. As you alluded to in your interview, it’s just as important to know who isn’t in the room as who is. Thanks for your willingness to share your wisdom and time with us. I appreciated what you had to say.

        • Great thoughts. Amy (my wife) and I talk about this regularly. Our catch phrase for this is “men with guns.” I love your premise about women and the AI.

          Re: Who was, or wasn’t, in the audience. I definitely was aware of a very pale demographic, although it was nice to see a lot of women. The event was open to all and was publicized on a number of channels. Do you have any suggestions for channels the organizers missed that might have brought in a more racially diverse crowd?

          • TheGeekOut

            Kindly allow me to pose a different question while I simultaneously underscore that I found the event to be valuable and that I greatly appreciate both Target and BetaMN and thought it was especially smart to hold the event at a venue accessible by public transportation.

            Not saying you do, but should we find it perplexing as to why minorities and women might not be as likely to step into a euro-white owned/controlled space (Target) to attend an event, organized by a group (BetaMN) with a predominantly euro-white male constituency, that features two white males (you and Seth) talking about diversity in tech (as per the advert although I realize that wasn’t the only topic up for discussion) who are from an all white male managed firm (from what I can tell from your website)? Even all the door greeters were white guys. What about that communicates that women and minorities would fit in there even if organized/hosted by well-intentioned folks?

Target does a better job than most in its top leadership as related to female representation, so it didn’t surprise me that, while still low in numbers, more women appeared to be in attendance at the event than most tech or tech-entrepreneur related events that I’ve attended in the past, given the Target network.

Even hiring and advertising that a complimentary sign language interpreter will be present at larger public events, or can be secured upon request, will send a message of inclusion and attract a larger audience to your talks in the future, if a consistent practice over time. This is the difference between attraction and promotion. Promotion just doesn’t work. People might come, but they may not stay or return for another event. Telling this demographic that they are welcome through promoting/publicizing an event is not enough to make them actually feel welcomed in my experience given dominant culture. So I would recommend at minimum partnering with an organization for an event (as opposed to just promoting through these same channels) that might be able to assist with helping more people feel welcome given their presence. Recommended partners locally, both race and gender, include Blacks in Technology (See Mondo or Sharon), Takoda Institute (Chris), Daniel Bonilla from City of Minneapolis Business Development Dept., Society of Women Engineers, Girl Develop It, or possibly The Ummah Project (Dr. Matthew Palumbo) or African Development Center. I’m less familiar with resources in the Hmong community here. The CEO of Clockwork, Nancy Lyons (@NLyons), also is passionate about this issue and is a total rockstar. Should you ever have the chance to collaborate with her, do it.

            Alternatively, consider stepping into a minority or women owned/run venue to engage minorities or women where they feel most comfortable if you value their participation and then have the organization organizing your talk put the word out generally and then see how many white people or men come instead (See Woman’s Club of Minneapolis or Capri Theater, although the latter is kinda small).

            Just some food for thought to chew on…

          • TheGeekOut

Also, as long as you’re interested in engaging on the issue, I also wanted to point out that I think it is great that you wanted to highlight your values for diversity in your talk/interview as learned from your parents. I think it would have been even more powerful if you also highlighted the professional relationships you’ve had with women, not just your personal relationships with women (mom, wife) that influenced you professionally. We would have loved to hear more about the TechStars program and the minorities and women who have been involved as participants, as well as how you have benefitted/what you have learned from your role as the chair of the National Center for Women in IT or in working alongside/investing in minorities and/or women. Especially to a majority white male audience, when someone like you within the context of a mainstream tech/entrepreneur focused talk (as opposed to one focused exclusively on diversity) points out stuff like how having women and minorities better represented on boards and in top leadership has been shown to improve company ROI, and that’s one factor you will look at when making investment decisions moving forward, then I think you will start seeing a more diverse demographic at your talks, too, which will hopefully lead to less Hoffsome and Mr. T stamps being doled out the world over. I’m optimistic.

  • Hawking’s comment hits home: “You’re probably not an evil ant-hater who steps on ants out of malice, but if you’re in charge of a hydroelectric green energy project and there’s an anthill in the region to be flooded, too bad for the ants. Let’s not place humanity in the position of those ants.”

    • Good ant counter-example.

      • conorop

        The example made sense last night, but I also agree with Ben’s counter example. I think you did too…

        If HUMAN = BRAD, then DO NOT KILL.

        Thanks again for coming out!

  • From the movie Contact:

Ellie: We pose no threat to them, it would be like us going out of our way to destroy a few microbes on an ant hill in Africa.

    David Drumlin: How guilty would we feel if we step on an ant hill in Africa?

    • I love finding more human ant references! Thanks.

  • B. Michaels

    I remember hearing that without ants the ecosystem would be brought to a standstill. So ants are additive to humans. In the same way, I believe that superintelligence will be additive to humanity. I get the dire apocalyptic arguments. I just think we’ll derive more benefits…that AI will add to humanity more than it detracts.

    • You have the same type of optimism I have …

      • B. Michaels

        Perhaps poor analogy, but imagine what would have been said about social networking 30 years ago. “It will be the death of interpersonal comms…everyone will know everything…trolls will run rampant…apocalyptic, etc.” But net-net, I’d say social networking is a positive. So too AI…(I hope!)

        • Good analogy actually.

          • B. Michaels

            My biggest con argument is this. In working with deep learning, you create the algorithm & train it with vast amounts of data. Say to recognize a cat. You don’t necessarily know “how” the algorithm is working. It’s a black box. So extend that to better AI or superintelligence. What happens when we don’t control what the AI learns? This gives me pause.

          • When it goes awry, you press the killswitch, and you build better safeguards next time. It’s software.

    • Andrew Prystai

      I think you have the metaphor flipped – the real question is not what AI can do for Humans, but what Humans can do for AI.

      In the ant/human analogy we observe ants because we find them interesting, but we don’t let them live because they are interesting – we do that because they are useful to humans. This isn’t true for all insects and we try our best to eliminate harmful ones like mosquitoes.

So the real challenge to make the ants/humans metaphor true is: how do humans show an ASI, like Turry in the Wait But Why article, that we are partners like ants and not nuisances like mosquitoes?

      • B. Michaels

        So I need to read the article, but I love the discussion!

  • I used to think pretty similarly, but then I studied neuroscience and information theory.

    Unless information theory is wrong (which would go along just fine with your ‘ants can not comprehend humans’ analogy) humans are near the upper limit for the ability to process information. In such a world, there couldn’t be something an order of magnitude smarter than us, just something a little smarter that got better information or had biases better aligned with future success. That doesn’t mean an order of magnitude better PERFORMANCE isn’t possible. See

    If information theory is wrong, super intelligence won’t look like computers or AI, it’ll look something more like an intelligent nebula or a quantum string with an agenda.

I don’t buy the premise that humans are near the upper limit for processing information. I think the claim is actually more complex, something like “in our ability as humans to understand how to process information, we are near the upper limit.”

      • Matt Kruza

I think this is indeed a big crux of it. If you are right that we aren’t near a limit, your ant analogy holds. If, however, Kevin’s information theory point (I need to read up on it, very curious) is true and AI can’t be an order of magnitude smarter than us, then the ant/human analogy is a false comparison.

      • panterosa,

I have been working in cognitive development of the next wave of preschoolers and up. We don’t use all our processing capacity for sure, but part of that is learning to see better. Part two, in my view, will be learning to design as nature does, biomimicry, which is the opposite of how most human production works now. Nature recycles everything, and waste is a uniquely stupid and hateful human thing, besides being the downfall of our ecosystems.

        I offered to send on the latest of what we’re up to but I think you were maxxed out at that time. The door is open should you have interest in the future. It’s very exciting work.

    • DaveJ

      Worth looking at Marcus Hutter’s AIXI as an additional input into these thoughts. It is provably optimal (within his model), but also incomputable.

  • Matthew

    Hey Brad,

    It’s a really interesting analogy. I’ve been reading a great deal about this over the last couple of years. The best article(s) / eBook that I’ve read on ASI is the Wait But Why two-parter:

While I’m totally with you on what is likely to be the counter-anthropomorphism of a superintelligence, I’m not so sure that it will be as removed from us as we are from ants. By the time that it arrives (I’m also a bull on its progress) we’ll be rigged-up sensors ourselves – we’ll have transcended biology a la Kurzweil – and as a result I strongly feel as though it will be an all-pervading entity with which we are connected. I think that it’s most likely to represent another dimension that we can perceive along with 3D and time.

    The second chapter – Cognifying – in Kevin Kelly’s recent book ‘The Inevitable’ has some interesting insights on how we will exist with superintelligence that I think you would enjoy.

    Thanks for all the blogs, really enjoy them and appreciate the time that you take out to write them.



    • Yup – I love the Wait But Why two parter – it is excellent. I haven’t read Kevin Kelly’s new book but it’s on the Kindle!

  • DaveJ

Why do you use the definite article? (“the” superintelligence). This seems to be making a strong predictive commitment (there will or can be only one) that is almost certainly overreaching (recall “epistemic humility”). It also doesn’t fit the ants metaphor since there are many humans.

    • Because I was sloppy with my language.

    • DaveJ

      I only mentioned it because you used “the” consistently, so it seemed intentional.

      Assuming your ants metaphor (which I think is reasonably apt), it’s fairly likely that we will have terminological difficulties with the identity of superintelligence. It’s not at all clear what it means to have one of them as opposed to more than one; perhaps the entire notion of individuality will undergo a reckoning given that they may be able to share information in ways not unlike how our own brains transmit information internally.

      • I have repeated the following to myself several times now:

        “It’s superintelligence, not THE superintelligence.”

        I agree that it’s an important distinction that I hope to have now hardwired into my brain.

  • Brad, to evolve, humanity has to move on from its winner takes all mentality. The current VC/Sil-Val model perpetuates this thinking. More research needs to happen on settlement systems that drive greater ecosystem network effects that are both sustainable and generative. The key is that both cost and value need to be shared between edge and core and top and bottom layers. Today’s model of silos at the core and edge is not sustainable; nor as generative as it could be.


    AI is obviously on the way in. It is indeed fascinating to ponder to what level we can push that intelligence, and what applications may result; for example, will we be able to imbue our creations with greater intelligence than (what we perceive as) our own? And what might the consequences of that be?

    But any AI we create will be by definition artificial. I find it even more interesting to contemplate the nature of what I guess we should call “natural intelligence”, and how our ability to understand and exercise that intelligence may increase. I like to think along the following lines: when one contemplates the matter, it seems self-evident that behind (within) all of creation there must be a supreme intelligence that brought forth all created things, which established an astonishingly precise inter-relationship between all created systems, and which continues to inform the activities of those systems in every particular, be they the regular whirling of an atom, the slow workings of a rock or the more advanced living of an Einstein or a Jesus. We need not trouble ourselves with questions regarding the “size” of that fundamental intelligence, for it is intelligence itself and must be infinite.

    I lean toward the view that while we will indeed come up with astonishing types of and applications for AI, the real action will be in our opening to the infinite ocean of consciousness (and intelligence) that lies within us. Granted, we normally take that to be the province of mystics but I think that science and spirituality are drawing very close indeed. I think that we’re about to discover that we are not just connected to that true Natural Intelligence; we are literally one with it. The results of that discovery will make AI look, well, artificial.

    Or not. Musings of an old hippie; I admit it!

  • Aashay Mody

    Timely post! I just finished reading the WBW two parter yesterday.

  • Jonathan Epstein

    Hey Brad,

    Very long time no see.
    Perhaps my many years in Japan have warped my ego to the degree that I think I can identify with a super-intelligence, but I enjoy imagining how a super intelligence would think and take the logic from there.

So… Starting with the closest thing that modern humans have to a super intelligence, I imagine that Stephen Hawking set the priority well: the best move for an earth-bound intelligence is to ensure its survivability (not homo sapiens, but the super-intelligent species in this case) by expanding off the planet as quickly and as broadly as possible.

Facing a potentially existential threat, shouldn’t our baby super-intelligence stop at nothing to ensure its survival? Isn’t it likely that annoying and trivial human concerns like breathable air and sufficient nutrition might get in the way of the overarching mission?

    Then again, maybe the super-intelligence is just our ideal, eternal progeny? What parent wouldn’t happily make sacrifices to see his or her child succeed?

    Jonathan Epstein

I’m a believer in the “escape earth to ensure long-term survival” theory.

  • Blair Marshall

My one issue with the analogy is that, as humans, we are able to understand this relationship between us and ants, and we will understand that super intelligence exists and that there is some sort of relationship going on between the two species. Ants do not have this ability of self-awareness or understanding. I have to think this will have a profoundly large and different impact on humans’ relationship to super intelligence compared to ants’ relationship with humans.

    • Blair Marshall

      And humans have emotions and ants do not. This too will have a significant impact.

      • How do you know that ants don’t have emotions?

        • Blair Marshall

Those may be fair questions. I think there is some threshold that humans have surpassed that gives us the ability for more complex thought, which will impact our relationship with super intelligence more fundamentally than ants’ relationship with humans – but that being said, that could just be my human egotism speaking (see what I did there..)

          • I saw what you did. Very good!

How do you know “Ants do not have this ability of self awareness or understanding”? That’s a human interpretation of ants, not an ant interpretation of ants.

Great discussion! I believe humans have some technologically enhanced abilities that ants do not.. two worth mentioning.. nuclear bombs and worldwide communications (to transmit fear and resentment, i.e. NEWS). As both of these allow us to have a much greater impact than a human-ant who gets angry with the super-intelligent gods that are created, we may find some very egotistical leaders (currently in power or in upcoming candidacy) who don’t like being lower on the food chain and decide to react back in a way that an anthill simply could never do. Or, put more simply.. one red button destroys the world entire.