Hollywood’s Massive Miss on Strong AI

Strong AI has been on my mind a lot lately. We use weak AI all the time, and the difference between the two has become more obvious as the limitations of a weak AI application such as Siri become painfully apparent in daily use.

When I was a student at MIT in the 1980s, computer science and artificial intelligence were front and center. Marvin Minsky and Seymour Papert were the gods of MIT LCS and just looking at what happened in 1983, 1984, and 1985 at what is now CSAIL (what used to be LCS/AI) will blow your mind. The MIT Media Lab was created at the same time – opening in 1985 – and there was a revolution at MIT around AI and computer science. I did a UROP in Seymour Papert’s lab my freshman year (creating Logo on the Coleco Adam) and took 6.001 before deciding to do Course 15 and write commercial software part-time while I was in school. So while I didn’t study at LCS or the Media Lab, I was deeply influenced by what was going on around me.

Since then, I’ve always been fascinated with the notion of strong AI and the concept of the singularity. I put myself in the curious observer category rather than the active creator category, although a number of the companies I’ve invested in touch on aspects of strong AI while incorporating much weak AI (which many VCs are currently calling machine learning) into what they do. And, several of the CEOs I work with, such as John Underkoffler of Oblong, have long histories working with this stuff going back to the mid-1980s through late 1990s at MIT.

When I ask people what the iconic Hollywood technology film about the future of computing is, the most common answer I get is Minority Report. This is no surprise to me as it’s the one I name. If you are familiar with Oblong, you probably can make the link quickly to the idea that John Underkoffler was the science and tech advisor to Spielberg on Minority Report. Ok – got it – MIT roots in Minority Report – that makes sense. And it’s pretty amazing for something done in 2002, which was adapted from something Philip K. Dick wrote in 1956.

Now, fast forward to 2014. I watched three movies in the last year purportedly about strong AI. The most recent was Her, which Amy, Jenny Lawton, and I watched over the weekend, although we had to do it in two nights because we were painfully bored after about 45 minutes. The other two were Transcendence and Lucy.

All three massively disappointed me. Her was rated the highest and my friends seemed to like it more, but I found the portrayal of the future, in which strong AI is called OS 1, to be pedantic. Samantha (Her) had an awesome voice (Scarlett Johansson), but the movie was basically a male fantasy of a female strong AI. Lucy was much worse – once again Scarlett Johansson shows up, this time as another male fantasy, as she goes from human to super-human to strong AI embodied in a sexy body to black goo that takes over, well, everything. And in Transcendence, Johnny Depp plays the sexy strong AI character who saves the femme fatale love interest after dying and uploading his consciousness, which then evolves into a nefarious all-knowing thing that the humans have to stop – with a virus.

It’s all just a total miss in contrast to Minority Report. As I was muttering with frustration to Amy about Her, I wondered what the three movies were based on. In trolling around, they appear to be screenplays rather than adaptations of science fiction stories. When I think back to Philip K. Dick in 1956, to John Underkoffler in 2000, to Steven Spielberg in 2002 making a movie about 2054, that lineage makes sense to me. When I think about my favorite near-term science fiction writers, including William Hertling and Daniel Suarez, I think about how much better these movies would be if they were adaptations of their books.

The action-adventure space opera science fiction theme seems like it’s going to dominate the next year of Hollywood sci-fi movies, if Interstellar, The Martian (which I’m very much looking forward to), and Blackhat are any indication of what is coming. That’s ok because they can be fun, but I really wish someone in Hollywood would work with a great near-term science fiction writer and a great MIT (or Stanford) AI researcher to make the “Minority Report” equivalent for strong AI and the singularity.

  • R. Narayan Chowdhury

    Brad, you haven’t gone down the Eliezer Yudkowsky rabbit hole, yet, have you?

    • Nope – but I’ll go look into it.

  • williamhertling

    I think there have been a few technology venture capitalists who have also dabbled in backing feature films. Maybe it’s time for a new investment?

  • joelklee

    Brad, In today’s climate you are far more likely to see this type of intelligent art produced by Netflix. Have you considered chatting with them about it? Shows like Black Mirror are evidence of market viability.

    • I don’t have any relationships at Netflix so no, but I will watch Black Mirror and keep my eyes out for others.

  • I would have liked to see AI woven into the Planet of the Apes series. The more predictable pharma approach to developing the intelligence was (predictably) a disappointment.

    • I didn’t see it – I don’t think I’ve seen any Planet of the Apes since I was a kid.

  • Totally agree with you. I’m reading “Superintelligence” right now and think that the screenplay/movie writers you mention should have talked to the author. Instead, seems like they went for drama over a more realistic, real science approach. Bring William in (just tore through the first three books, good stuff) or someone who doesn’t just say singularity in hushed, fearful tones. Or maybe it’s simply that any futuristic movie is just so much better with Tom Cruise in it?

    After a lot of reading about AI and machine learning recently, my biggest issue with all of this is that it feels like technologists sometimes punt on solving current issues that humanity faces now in favor of “technology will solve everything.” I hope technology will fix some of our mistakes, but what if it doesn’t?

    Do you think we’re out of the last AI winter, with new research dollars going into developing strong AI that has some promise? Seems like VC money has been scared of it, and government funding too. I haven’t seen much about that online, perhaps you have some insight?

    • I don’t really have broad market insight, but I think this will end up front and center again in the next decade.



    • WALL-E is cute, but so much is lost in the cuteness and the dystopian human future.



  • orthorim

    The major flaw in the movie Her, to my inner geek at least, was the idea that if computers could think they’d develop awareness.

    Really it’s the other way around: All thinking, and with that, computers, emerges from consciousness. Consciousness does not come out of intelligence, it’s the other way around. It’s a classic containment problem here – you can’t think your way to awareness.

    The entire singularity theory suffers from the same basic error. It’s putting the cart before the horse.

  • Having intimate experience in tech and movies my conclusion is that it’s a really hard problem.

    The essential dilemma from a creative pov is the fact that movies are primarily about emotion, not ideas.
    “If you have a message, call Western Union” is the oft-quoted adage in ‘the biz’ (attributed to Capra and S. Goldwyn).

    Hence, to the degree that you have to spend screen time getting an idea across, you are likely to be shortchanging emotion (i.e., drama).

    Very few films have successfully communicated interesting scientific ideas and been dramatically successful.

    Kubrick’s films were intellectually engaging and at times (e.g., Clockwork Orange, 2001) they were simultaneously dramatically successful. But unfashionable though it may be to point this out, he too had trouble pulling off this trick.

    Interstellar by way of contrast is a movie that only a committed geek could love. The ideas overwhelm the human dimension.

    It is incredibly difficult to pull off both, and to the degree that the idea is complex, the challenge amplifies.

    We have some great examples of noble failures right now. The ‘Theory of Everything’ and ‘The Imitation Game.’ Both evidently felt obliged to distort the ideas to the point of being barely recognisable and to introduce dubious dramatic elements to goose up the ‘dry’ intellectual content.

    If we look at movies like ‘Blade Runner’ or ‘Alien’ then we see that the underlying ideas are incredibly simple and that, in consequence, we are really just watching a cowboy movie and a haunted house movie.

    Sci Fi novels are MUCH easier to pull off than movies about the future.

    Full Disclosure: ex screenwriter with inside knowledge of some disasters.

  • Gp

    Obviously Hollywood rarely knows or understands the first thing about strong AI. All I can say is read three authors

    1) Dan Simmons Endymion
    2) Peter Hamilton Judas Unchained
    3) Isaac Asimov

    We may not always be able to program compassion and empathy into our biological children but we can sure as hell program it into our silicon ones.

  • Harsha G

    We’re also currently working on what you call weak AI. I do believe the potential for machine learning is enormous, and over the next decade we’ll see a lot of really cool applications that solve real, tangible problems. Here’s a very good video on this subject – Vinod Khosla interviews Larry Page and Sergey Brin. Around minute 11:30 (for 10 minutes), they talk about ML and how it’s going to replace jobs at all skill levels. Page even goes on to say we are going to have people employed only to solve the associated social problems, rather than out of need.

    My bold prediction – we’ll have 4-day work weeks become popular within the next decade:)

    I have watched Kurzweil’s TED talks on the singularity and it’s a very interesting concept, but probably not going to be a reality anytime soon. I do worry the recent hype surrounding AI could be a detriment to ML progress, as fear-based opinions dominate public discussions. People immediately associate it with evil machines out to destroy humanity – even Elon Musk recently called AI mankind’s biggest threat. Leading AI researchers like Yann LeCun are also calling the current hype extremely dangerous – he says it has killed various AI approaches at least 4 times in the past (I don’t know when or what the reasons were).

  • DaveJ

    It is interesting, and might be useful, to consider that Minority Report is not – in either plot or theme – about the future of computing. Indeed the enabling capability is a combination of telepathy and divination; the computing, as cool as it is, is really just a sideshow. The three singularity movies you discuss – despite their weaknesses – attempt to illustrate singularity scenarios that have actually been discussed by people who think about such things.

    [Spoiler alert]

    That “Her” felt pedantic is exactly the point. We live our everyday lives, often filled with loneliness and relationship challenges, and along comes a technology that actually fills those holes for people. Then, just as this technology is beginning to embed in society, we see that to fulfill human relationships it really does need to be *strong* AI with all its attendant capabilities, and it will find us humans ultimately unsatisfying and irrelevant. I find this to be one of the more likely Singularity scenarios.

    I find your characterization of Samantha as a “male fantasy” puzzling. You intend this to be derogatory, but the fantasy is one of an extremely intelligent, inquisitive, emotionally adept woman who *does not even have a body to objectify*. Indeed when they attempt to simulate intimacy with an actual and very attractive female body, Theodore rejects it. So it is almost a reversal of a typical male fantasy in a Hollywood movie, which would be more like Scarlett’s body without all the irritating feelings and conversation; or if the woman were intelligent, where she ultimately sacrifices her own potential to be with the man. Indeed it could be argued that Samantha is a feminist model for what a male fantasy of a woman *ought to be*.

    • DaveJ, my $.02 thinks that *strong* AI with all its attendant capabilities, will not find us humans merely unsatisfying and irrelevant, but will conclude that humans are the problem here and must be dealt with accordingly. What other conclusion could a truly intelligent AI system arrive at if unbounded in its reasoning abilities?

      • DaveJ

        That is also a possible scenario, and it seems to be the favorite for most people. I wouldn’t claim it’s unreasonable or impossible. But it simply seems less likely to me – it is like the pre-Copernican view that the earth must be the center of the solar system because otherwise humans would be less important. A true strong AI superintelligence would find humans no threat whatsoever (it can make copies of itself; it can outsmart us; it can “be” wherever it wants).

        This is of course unsatisfying for people who want movies with drama and action, as Brad does.

        Some thinkers claim that humans will all be killed because the superintelligence would want to use every available resource at its disposal (i.e., we would die off the way species who lose their habitat have died off). This seems to me more credible, but I have some interesting arguments against this view as well (without getting into detail, a superintelligence will have to evaluate the risk that there are other superintelligences in the galaxy surreptitiously watching us and our creations for evidence of threats – never forget the Fermi paradox!).

        • DaveJ,
          Please first consider that my opinions here may not be worth the value I first claimed (2 cents;-). The key thing in what I said above was “unbounded in its reasoning”. If you also have an AI system that is unbounded in its execution, we’re probably going to have a problem somewhere down the line, for the reason I claimed prior, unless that system is very limited in its scope of application and you can ensure that its scope will not creep to end up somewhere unintended. giles

          • DaveJ

            I don’t know what “unbounded in its reasoning abilities” (or execution) means. It sounds a lot like omniscience and omnipotence, in which case we are entering metaphysics and I have nothing to add.

            Either that, or you just mean that it has the same sorts of capabilities humans do, which is precisely what we are talking about with strong AI. Of course there is some potential threat; my point is that it is not the only or necessarily the most likely possible outcome. In any case it is vastly underdetermined what will happen.

          • Dave,

            Sorry I was less than clear; I struggle to be both concise and clear in my life. But come to think about it, I tend to be misunderstood often when I’m actively seeking to be clear while exercising my normal nature of being verbose. So there you have it: consider the source 😉

            In my terminology “Unbounded reasoning” = An AI contemplative system that has no limitations whatsoever around what it is allowed to reflect on, consider and devise plans in relation to.

            So, to seek further clarity, consider an example in relation to Asimov’s 3 laws: those laws are limitations around a robot’s execution (what it is physically allowed to do), i.e., its executions are bounded (“don’t kill the illogical human bastards, etc, etc”).

            In my usage of the term “unbounded reasoning”, those systems would have been “allowed/unbounded” to evaluate any and all data in regards to mankind. Hence my first statement that a truly intelligent system, at some point, could only conclude that humans, in the context of life and history, are the problem here: i.e., we humans collectively crap in our nest (pollute the world), we kill each other, we do not take care of all of us, we consume foods laden with chemicals that preserve the food but harm our bodies, ALongSeriesOfEtc’s.

            BTW: Asimov’s second law is flawed. You cannot have robotic systems that will obey every human who gives them a command. That would result in chaos: there has to be prioritization of commands. I would not want my personal robot to obey my neighbor’s commands, even though he is a human.


        • Scott

          I’m totally with you there on the first part.

          The idea that we’d be important enough to bother harming (or bothering with) seems silly and egotistical. We have totally different needs and would almost certainly not be competing for the same scarce resources (save for energy, but even that is a different proposition when you can just send robots up to asteroids and set up space-based solar clusters).

          Then on the very off chance we were worth bothering with, we’d be as easy to see through and manipulate as children so there’d be no need to do anything dramatic or obvious (and it’s probably better not to so it could stay off our radar as a threat).

  • ArmandoKirwin

    I worked in Hollywood for a number of years. Movies are 99% stories about people (love, loss, loneliness, the hero’s journey, good vs. evil, etc.), but specific technologies can obviously come into play as part of the world in which the drama is unfolding. It turns out that most big-budget productions are happy to bring in an expert or two (Interstellar, Avatar, Minority Report, etc.), but that’s because they can afford it (you actually want a Tom Cruise in your movie for this reason). Screenwriters aren’t generally pro or con specific technologies, rather they are merely regular members of the same cultural zeitgeist in which we all operate, in other words: if the New York Times isn’t writing frequently/positively about strong AI, Hollywood probably won’t do it either. Brad, you’ll be happy to know that the rights to Ramez Naam’s awesome book “Nexus” have been acquired by Paramount (where I used to work). P.S. Individuals should never ever “invest” in movies! 🙂

  • TeddyDuchampe

    Brad – You’re trapped on an island for eternity: M/F/K decision…Her, a robosuck machine, and Rosie O Donnell. Go.

  • Andrew

    On a related note, I want to thank you for turning me on to the Hyperion books. Just cranked through the first two. Wow! Cool stuff. Lots of fascinating tech stuff but also goes WAY deeper than that. I hear a movie is in the works, which seems very challenging but I’d love to see it.

    Now on to book three…

  • I tend to find that past portrayals of the future (Star Trek, Star Wars etc) were better because technology at the time wasn’t that advanced so it took a stretch of the imagination to imagine portable phones for instance or a talking computer. But now, we essentially carry around computers in our pockets and the things the films suggest could actually happen, which lessens the impact of the film.

  • Totally agree with this. I read AI Apocalypse over the holiday and it was awesome. I’m not getting any provocative thoughts even close to this at the movies.

  • Any thoughts on Ex Machina? We’re off to catch it tonight..