The Future Will Look Different From The Present

I’ve been thinking about the future a lot lately. While I’ve always read a lot of science fiction, The Hyperion Cantos shook some stuff free in my brain. I’ve finished the first two books – Hyperion and The Fall of Hyperion – and expect I’ll finish the last two in the next month while I’m on sabbatical.

If you have read The Fall of Hyperion, you’ll recognize some of my thoughts as being informed by Ummon, who is one of my favorite characters. If you don’t know Hyperion, according to Wikipedia Ummon “is a leading figure in the TechnoCore’s Stable faction, which opposes the eradication of humanity. He was responsible for the creation of the Keats cybrids, and is mentioned as a major philosopher in the TechnoCore.” Basically, he’s one of the older, most powerful AIs, and he believes AIs and humans can co-exist.

Lately, some humans have expressed real concerns about AIs. David Brooks wrote a NYT OpEd titled Our Machine Masters, which I found weirdly naive, simplistic, and off-base. He hedges and offers up two futures, each of which I think misses badly.

Brooks’ Humanistic Future: “Machines liberate us from mental drudgery so we can focus on higher and happier things. In this future, differences in innate I.Q. are less important. Everybody has Google on their phones so having a great memory or the ability to calculate with big numbers doesn’t help as much. In this future, there is increasing emphasis on personal and moral faculties: being likable, industrious, trustworthy and affectionate. People are evaluated more on these traits, which supplement machine thinking, and not the rote ones that duplicate it.”

Brooks’ Cold, Utilitarian Future: “On the other hand, people become less idiosyncratic. If the choice architecture behind many decisions is based on big data from vast crowds, everybody follows the prompts and chooses to be like each other. The machine prompts us to consume what is popular, the things that are easy and mentally undemanding.”

Brooks seems stuck on “machines” rather than what an AI actually could evolve into. Ummon would let out a big “kwatz!” at this.

Elon Musk went after the same topic a few months ago in an interview where he suggested that building an AI was similar to summoning the demon.

Musk: “I think we should be very careful about artificial intelligence. If I had to guess at what our biggest existential threat is, it’s probably that. So we need to be very careful with artificial intelligence. I’m increasingly inclined to think that there should be some regulatory oversight, maybe at the national and international level, just to make sure that we don’t do something very foolish. With artificial intelligence we’re summoning the demon. You know those stories where there’s the guy with the pentagram, and the holy water, and he’s like — Yeah, he’s sure he can control the demon? Doesn’t work out.”

I need to send Elon a copy of the Hyperion Cantos so he sees how the notion of regulatory oversight of AI turns out.

I went to watch the actual interview, but there’s been a YouTube takedown by MIT, although I suspect, per a Tweet I got, that a bot actually did it, which would be deliciously ironic.

If you want to watch it, the comment is at 1:07:30 in the MIT AeroAstro Centennial Symposium video, which doesn’t seem to have an embed function.

My friend, and the best near-term science fiction writer I know, William Hertling, had a post over the weekend titled Elon Musk and the risks of AI. He took a balanced view of Elon’s comment and, as William always does, gave a thoughtful explanation of the short-term risks and dynamics that is well worth reading. William’s punch line:

“Because of these many potential benefits, we probably don’t want to stop work on AI. But since almost all research effort is going into creating AI and very little is going into reducing the risks of AI, we have an imbalance. When Elon Musk, who has a great deal of visibility and credibility, talks about the risks of AI, this is a very good thing, because it will help us address that imbalance and invest more in risk reduction.”

Amy and I were talking about this the other night after her Wellesley board meeting. We see a huge near-term schism coming on almost all fronts. Classical education vs. online education. How medicine and health care work. What transportation actually is. Where we get energy from.

One of my favorite lines in The Fall of Hyperion is the discussion about terraforming other planets and the quest for petroleum. One character asks why we still need petroleum in this era (the 2800s). Another responds that “200 billion humans use a lot of plastic.”

Kwatz!

  • http://swissarmybrain.com/sab Chris Roffe

    Brad, thank you for this post. It always warms my heart to see smart, forward-thinking people referencing unique science fiction. I look forward to reading William Hertling’s books; I’ve been casting about for a new science fiction author to read and enjoy.

    In the meantime I’ll leave this gem from within Hyperion right here: “Poets are the mad midwives to reality. They see not what is, nor what can be, but what must become.”

    • http://www.feld.com bfeld

      Martin Silenus is such an amazing character.

  • http://www.derekscruggs.com/ Derek

    Check out the Star Trek: The Next Generation episode “When the Bough Breaks” – http://en.memory-alpha.org/wiki/When_The_Bough_Breaks_(episode)

    It’s not about AI per se, but it’s about a planet where technology has sufficiently advanced that most people spend their time focusing on pursuits in the arts and humanities (which, when you think about it, is true now – only a small percentage of the population are engineers). Anyway, the most encouraging aspect was that through testing they can determine at a very young age what a person’s greatest strengths are, then channel them into education that reinforces them. And I don’t mean “he’s good at math, let’s send him to math camp.” More like, “she gets her greatest sense of fulfillment from sculpture, so let’s make it possible for her to flourish in that field.”

  • http://degoes.net/ John A. De Goes

    I recently blogged on why there will be no rise of the machines, so this post is quite timely and interesting.

    In short, I believe that fears of AI overtaking the planet arise from a sort of “black box fallacy”, in which the mind takes something that it does not understand (how brains or even natural selection work), and conjures up demons. Today it’s AI, but 50 years ago it was radiation, and 200 years ago it was electricity.

    There is no single dimension of “intelligence”. Our own “intelligence” is information processing capacity directed at replication of our genes, as it is for all other organisms evolved through the process of natural selection. That “gene replication” was the critical feedback loop which led to a design directed at a certain purpose.

    The feedback loops for “AI” will be as varied as the purposes for which we create machines. It’s certain these machines will be “intelligent”, but that intelligence will be with respect to the feedback loop: driving people around, playing chess, simulating protein folding, etc. Information-processing agents created with such feedback loops will exhibit completely alien “psychology”, and while I think it’s ultimately a mistake to anthropomorphize this psychology, if you must do so, then think of the machines as deriving pleasure and meaning from the purposes for which they were created.

    The only real threat relevant to the “rise of AI” meme is humans specifically trying to create a human-like AI, which would then exhibit life-like behavior, specifically “intelligence” defined on a measure of efficacy of self-replication.

    Personally, I’m not worried about that, because (a) the environment in which we live is unimaginably carbon-centric; the material we need to reproduce ourselves is all around us, it grows from the ground and walks upon it, it lives in the streams and in the oceans (contrast this with the supply chain manufacturing capabilities required for non-carbon based life forms), and (b) the life forms that exist right now are the product of an earth-sized (arguably universe-sized) quantum computer that has been running quintillions of simulations in parallel for 4 billion years.

    Think about the scale of that for a second and then contemplate your silicon-based computer which can’t even simulate the brain of a single ant for one moment in time.

    In the new world, humans won’t be the best at anything. Well, except one little thing: the sole purpose for which they evolved. Humans will continue to dominate machines at self-replication, being as we are the product of an incomprehensible, parallel exploration of the near infinite landscape of possibilities that has been running since the dawn of the universe itself.

    Now if you want stuff to worry about, I think the ease with which things can be destroyed is alarming, especially as technology advances and machines (including “AI”) can be leveraged to create even more destructive technologies (e.g. a virus that kills a certain person based on DNA). I also think it’s inevitable that humans will tinker with their DNA to create what will ultimately become new species, and while I don’t necessarily view that as a bad thing, one can easily see where it could lead to mass warfare and destruction.

    Those are real and valid concerns. Terminators? Not so much.

    • Rick

      “evolved through the process of natural selection”
      .
      Humans haven’t evolved through the process of natural selection. We destroy nature at will. If we were victims of natural selection then no one would spend money on food. Because it would be everywhere for the eating. But instead society has enslaved people by forcing them to have to buy food.

      • http://degoes.net/ John A. De Goes

        Natural selection does not evolve organisms for the purpose of “preserving and protecting their environment”, it evolves them for the purpose of “turning as much of their environment as possible into copies of themselves,” (i.e. self-replication, in our case at the level of the gene).

        And yeah, at this point, it’s pretty much a fact, not a theory (albeit one poorly understood by the public), supported by an insurmountable barrage of evidence from every corner (geologic, genomic, etc.).

        • Rick

          “preserving and protecting their environment”
          .
          I was meaning we seem to be at odds with our environment. In other words it appears to me that humans don’t play well with anything that you can label “natural”. So we would destroy natural selection instead of letting it act on us in any way.
          .
          “it’s pretty much a fact, not a theory”
          .
          Kinda’ like “The check’s in the mail”. <- Just kidding.

          • http://degoes.net/ John A. De Goes

            Humans are by definition “natural”, being the product of 4 billion years of evolution. We were not evolved to “preserve and protect” the environment. Like all organisms created by natural selection, we were evolved to leverage environmental resources to replicate our genes. We just happen to be extraordinarily successful at this task (by far the most successful modern-day species in our ecological niche).

          • Rick

            “We were not evolved to “preserve and protect” the environment.”
            .
            You used the word evolved in a way that sounds like it’s being guided. Are you saying that evolution is guided by an intelligent being?
            .
            Umm… We kill each other for money and we let others starve in the streets. I can’t really agree with your statement of us being “most successful” at much of anything except destruction of a system that “used to be” able to supply us with everything we needed for free.

          • http://degoes.net/ John A. De Goes

            I don’t have time to get into a creation versus evolution debate. There’s plenty of good literature on the topic, including The Greatest Show on Earth, by Richard Dawkins. Your spiritual and / or religious beliefs do not have to be in conflict with the facts of how life evolved on planet earth. Many religious and spiritual people choose to accept the reality of evolution without compromising their core beliefs.

            As for murder and theft and selfishness, these behaviors are perfectly consistent with natural selection. Natural selection operates at the level of genes, not at the level of a species. What’s good for helping me reproduce my genes is not necessarily good and may sometimes even be detrimental to the reproduction of your genes.

          • Rick

            I don’t want to talk religion. I wanted to talk evolution. It sounded like you were saying evolution is guided. That goes against what I thought evolutionists say.
            .
            OK. You’re labeling success differently than I was. This was fun. Thanks.
            .
            “Stop calling me Jerry”

    • williamhertling

      If you ask any roomful of people with likely exposure to Star Trek: The Next Generation who would like to have Data as a friend, about a third of the room will raise their hand. (I’ve done this many times.) That is the latent demand for general intelligence. Companionship. It won’t be met with a chess-playing AI or other specialized AI. It will be very human-like.

      Right now we’re investing a lot in specialized AI because that’s what our technology is capable of. But when our hardware and software have improved, we’ll do human-like general intelligence.

      The AI of the future is not going to be created by human programmers writing “if this, then that.” It’s going to be black box designs, neural networks and genetic programming, trained and evolved into existence. We won’t know exactly how it does what it does. We’ll be confident, within a certain probability, that it will function as desired, but never sure.
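
      A minimal sketch of what “trained and evolved into existence” can look like – my toy illustration, not William’s design, using only Python’s standard library. The XOR rule below is never written down as “if this, then that”; a tiny network is mutated at random and we simply keep whichever mutant scores better. The helper names (forward, error) are invented for this example, and a single run won’t always converge:

          import math
          import random

          # XOR is the "goal" only through this score table; the rule itself
          # is never programmed explicitly anywhere.
          XOR = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]

          def sigmoid(x):
              x = max(-60.0, min(60.0, x))  # clamp to avoid math range errors
              return 1.0 / (1.0 + math.exp(-x))

          def forward(w, x1, x2):
              # 2 inputs -> 2 hidden units -> 1 output; w is a flat list of 9 weights
              h1 = sigmoid(w[0] * x1 + w[1] * x2 + w[2])
              h2 = sigmoid(w[3] * x1 + w[4] * x2 + w[5])
              return sigmoid(w[6] * h1 + w[7] * h2 + w[8])

          def error(w):
              return sum((forward(w, *inp) - out) ** 2 for inp, out in XOR)

          best = [random.uniform(-1, 1) for _ in range(9)]
          for _ in range(20000):
              mutant = [wi + random.gauss(0, 0.3) for wi in best]
              if error(mutant) < error(best):  # keep a mutant only if it scores better
                  best = mutant  # (a single hill climb can stall in a local minimum)

          for inp, out in XOR:
              print(inp, "->", round(forward(best, *inp), 2), "target", out)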

      Does that mean Terminator scenario? Probably not… Although if you read The Killing Star (Pellegrino & Zebrowski) or Computer One (Warwick), you’ll see lots of examples of why preemptive aggression is a winning strategy. That concerns me.

      But there are plenty of other ways AI can be dangerous to us.

      • http://degoes.net/ John A. De Goes

        Current generation AI (with which I am intimately familiar, having written and used a variety of advanced techniques in machine learning) is already “black box”. The “how” is created by a machine, and does not resemble “if / then / else” programming.

        What you are missing is that in order to create current or next-generation AI, you need a fitness function / training data set / feedback loop (in the case of nature, this is “self-replication”).

        While I agree that, with current or next-generation AI, humans will not necessarily understand the “how”, they will still need to define the feedback loop that is ultimately responsible for which goals the information-processing agent tries to achieve. Those goals will be things like “driving cars”, “winning at chess”, and “providing comfort to humans”; they will not be “self-replication”. And even if some madman trying to destroy the world did make self-replication the goal, my arguments above about the virtual impossibility of creating a self-replicating machine that can compete in our environment with what nature itself has produced still apply.
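
        To make that concrete, here is a hedged toy sketch (mine, not John’s – the 1-D walker problem and names like fitness_go_right are invented): the same blind search procedure produces completely different behavior depending solely on which fitness function closes the feedback loop.

            import random

            MOVES = "LR"  # candidate "genomes": strings of moves for a 1-D walker

            def fitness_go_right(genome):
                # feedback loop A: reward ending up as far right as possible
                return genome.count("R") - genome.count("L")

            def fitness_oscillate(genome):
                # feedback loop B: reward changing direction as often as possible
                return sum(a != b for a, b in zip(genome, genome[1:]))

            def evolve(fitness, length=20, generations=500):
                best = "".join(random.choice(MOVES) for _ in range(length))
                for _ in range(generations):
                    i = random.randrange(length)
                    mutant = best[:i] + random.choice(MOVES) + best[i + 1:]
                    if fitness(mutant) >= fitness(best):
                        best = mutant
                return best

            print(evolve(fitness_go_right))   # drifts toward "RRRR..."
            print(evolve(fitness_oscillate))  # drifts toward "RLRL..."

        Neither walker is “smarter” than the other; each is exactly as intelligent as its feedback loop, which is the whole point.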

        • williamhertling

          Computer viruses self-replicate. Thirty percent of all computers have some sort of virus or malware. I can’t find any stats for the number of infections per computer, but I wouldn’t be surprised if it turns out that there are more infections than there are computers. Seems like there must be plenty of incentive already for replication.

          • http://degoes.net/ John A. De Goes

            Computer viruses were designed by humans, and while they “self-replicate”, they do so in very controlled ways that limit their potential for adaptation. The reason for these constraints is that random mutations in a binary executable will crash a computer in nearly every case, i.e. machines are so exquisitely sensitive to structure that natural selection is computationally infeasible. Moreover, inter-computer replication is very much dependent on esoteric flaws in software (security holes), which are difficult to exploit and even more difficult to discover.

            Even if we invented a computer more compatible with natural selection (i.e. one in which errors do not halt the computer), even if we invented a virus with a kernel of self-replication capable of unbounded variation, and even if we had some guaranteed method of replicating a virus across network boundaries, the end product of that, after a tremendous amount of evolution, is a digital program capable of copying itself in the environment in which it evolved.

            Which is a computer. And which couldn’t be further from the environment in which we evolved (planet earth).

          • Rick

            This is fun. I don’t have the experience that you guys have but your argument seems strange.
            .
            “machines are so exquisitely sensitive to structure that natural selection is computationally infeasible”
            .
            Since we are the only living creatures that we can find within our reaches in this world. It makes me think nature is also exquisitely sensitive. So how is natural selection fact?

          • http://degoes.net/ John A. De Goes

            The fact that your genome is different than every other human being on this planet and yet you are living and breathing is a testament to the robustness of the “machine” on which the “program” of natural selection unfolds.

            Your DNA can be mutated in numerous ways – whole amino acids swapped out for others, sequences inserted and deleted – and yet you could still survive. If you open up an EXE file and change a single byte, the program will probably crash whenever the computer reaches that byte.

            That’s why the substrate of modern-day CPUs makes natural selection computationally infeasible. The shape of the fitness landscape of possibilities is not continuous, but discrete. From one location that works, step to the left or right, even by the tiniest amount, and you’ll land in a location that doesn’t work. At all.

            That’s not true of organic life and hasn’t been since the first primitive RNA replicator evolved.
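
            You can see a rough version of this sensitivity for yourself. A hedged toy demo (my sketch, standard library only) that uses a compressed blob as a stand-in for a compiled binary, flips one bit at a time, and counts how often the result still decodes:

                import random
                import zlib

                # A structured binary artifact, standing in for an executable.
                payload = zlib.compress(b"the quick brown fox jumps over the lazy dog " * 100)

                trials, fatal = 2000, 0
                for _ in range(trials):
                    mutant = bytearray(payload)
                    # flip exactly one bit somewhere in the blob
                    mutant[random.randrange(len(mutant))] ^= 1 << random.randrange(8)
                    try:
                        zlib.decompress(bytes(mutant))
                    except zlib.error:
                        fatal += 1

                print(f"{fatal}/{trials} single-bit mutants failed to decode at all")

            By contrast, a one-character change in a long text “genome” scored by per-character similarity barely moves its fitness – a smooth landscape rather than a cliff at every step.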

          • Rick

            You’re talking about only one platform. The one which we use to recreate. Like you said “they do so in very controlled ways that limit their potential for adaptation”. We are recreating on one platform. It’s a very limited platform. Most living creatures are very similar to us at the lowest level.
            .
            While our world isn’t as small and limited as the one we created – computer operating systems. It’s still small compared to what is possible.
            .
            You’re making statements about long time past history as if you were there. You cannot do that because you don’t know if things have changed and left behind no clues of those changes. In other words the proof could have been erased and you could be basing your conclusions missing information.

          • Rick

            “you could be basing your conclusions missing information.”
            .
            That’s kinda’ like saying. “I know there was life on Mars because there is no life on Mars.”

      • Rick

        Is there a way to get in touch with you by phone?

        • williamhertling

          Sorry Rick, I wasn’t sure if you were talking to me or John. Yes, I’m available. Drop me an email at firstname dot lastname at gmail. Thanks!

          • Rick

            I went to your site. Did you get my message?

      • Rick

        From your silence. I’m guessing it’s a no. :-)

    • Nigel Sharp

      I feel you’re totally over-estimating human intelligence vs. the Singularity effect of AI. We’re not dealing with historical challenges where we “haven’t known better”; we’re in the age where we are creating something else that knows better than us and will know more than us for the remainder of time…

      • http://degoes.net/ John A. De Goes

        You are engaging in the classic fallacy of anthropomorphism, assuming there exists a single dimension of “intelligence” which places slugs at one end, and humans at the other.

        That’s not only misguidedly anthropomorphic, but very naive. We are information-processing agents whose neural circuits have been designed over eons solely for the purpose of replicating our genes, which is why, to various degrees, we resent subjugation, crave power and control of resources, and enslave and mistreat others (as well as cooperate with them, ironically enough).

        The concept of “intelligence” for an information processing agent is completely and utterly meaningless apart from both a meta-goal (such as gene replication) and an environment (such as earth). This means that there are as many types of “intelligence” as there are goals and environments — an infinite landscape teeming with endless possibilities, of which our specific combination is an insignificantly tiny point (i.e. throw a dart at that landscape and you’ll never hit human intelligence, no matter how many times you try).

        All the traits we associate with humans and fear in machines arose precisely because they helped our ancestors achieve their meta-goal in their environment. Unless we built machines with our self-same meta-goal (which means they would have to be capable of self-replication) and turned them loose in our environment, the intelligence that arises from machines will not resemble, and will in fact be completely foreign to, the intelligence of humans.

        Even if it were possible to build machines with our meta-goal that could operate in our environment (which is no trivial undertaking – it took the universe 4 billion years to produce us, and the universe is a massively parallel quantum computer “simulating” quadrillions of possibilities in parallel), it would serve zero commercial purpose.

        Rather, advanced AI of the future is likely to be directed at specific human needs, e.g. answering questions based on the entirety of human knowledge, diagnosing a patient given DNA analysis and sensor data, etc.

        The “circuits” that make up such AI will be built through feedback loops that encode our meta-goals for them (in the same way we build ML classifiers using positive and negative reinforcement). The intelligence these machines have cannot be compared to our own and will not look anything like human intelligence, despite the fact that these AIs will excel at every task imaginable, from playing chess to diagnosing patients to recognizing faces to answering questions (except the one we were designed for and excel at: self-replication in our natural environment).

        It’s time for humans to recognize that, like electricity and Mary Shelley’s Frankenstein in times past, AIs rising to take over the world makes for a good science fiction story, but it’s precisely that: pure fiction, with no plausibility in the world of actual science.
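
        As a concrete footnote to the parenthetical about positive and negative reinforcement, here is a minimal perceptron sketch (my toy; the three-feature data set is made up): the classifier is never given a rule for the label, only the error signal, and its “intelligence” is meaningless outside that feedback loop.

            import random

            # Hypothetical three-feature examples; label 1 = the positive class.
            DATA = [([1, 0, 1], 1), ([1, 1, 1], 1), ([0, 1, 0], 0), ([0, 0, 1], 0)]

            w = [random.uniform(-1, 1) for _ in range(3)]
            b = 0.0
            for _ in range(100):
                for x, label in DATA:
                    pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
                    err = label - pred  # the entire feedback signal: -1, 0, or +1
                    w = [wi + 0.1 * err * xi for wi, xi in zip(w, x)]
                    b += 0.1 * err

            for x, label in DATA:
                pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
                print(x, "->", pred, "target", label)

        Swap in a different labeling and the same loop yields a different classifier; nothing about it generalizes to goals it was never trained on.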

        • Nigel Sharp

          I actually agree – I had never questioned my linear approach to understanding the evolution of intelligence / artificial intelligence. It seems like a romantic fallacy to fall into, one that comes from a lifetime of being told that we are the superior intelligence in the known universe.

          One thing that allows us to excel in our natural environment is our biological programming and physiology. I don’t understand why you believe that AI can’t easily overcome that by integrating cybernetics into the biological world… or, even more fantastically, why the AI wouldn’t ignore the self-replication limits of the natural environment of planet Earth and instead almost instantly think on a galaxy-wide scale, taking a multi-dimensional approach to solving the problem of replication in virtually infinite space.

  • Rick

    “OpEd titled Our Machine Masters ”
    .
    That’s already here. Some people are already mastered by computers. Many people already have lost the ability to communicate with other people directly. I’ve been doing some research on this for my Time Folded book. I find many people can’t carry on a conversation without “getting approval” from their computer. I don’t mean they ask their computer if it’s OK. Instead I mean they can’t think on their own. They have to look it up and are unable to form their own opinion.
    .
    The problem with AI is not AI. The problem is of course people. If they program in the system the ability to do harm then well… Just look at how people discriminate against others for their religious beliefs or for having no religious beliefs. I know a person who was recently teased about their non-belief.

  • Doug Gibbs

    Fear of AI is like the man who fainted when he heard the sun would explode in 5 billion years. He woke up and said, “I thought the professor said 5 million.”
    I am more concerned about simple computers being given control of weapons. A drone with the simple, non-AI program of “shoot anything that moves” and sub-millisecond reaction times is much more disturbing.

    • http://www.feld.com bfeld

      We will be dealing with the consequences – which will be brutal and miserable – of not having a human in the loop very soon.

  • jkostecki

    Thanks for this Brad. It’s great to hear you’re talking about where we get our energy from. Most kids now know where we get our food from (as opposed to a supermarket), but most kids (and many adults) don’t know where the power and heat (not to mention plastic) we use daily comes from.

  • rick gregory

    Brad – have you seen Ramez Naam’s postings over at Charlie Stross’ site? http://www.antipope.org/charlie/blog-static/2014/02/why-ais-wont-ascend-in-blink-of-an-eye.html is a good look at the strong AI / fast takeoff scenario and why it’s not really all that likely. That doesn’t mean we shouldn’t consider it at all, but too often we overestimate its likelihood because, well, it’s cool.

    Stross himself commented on the entire Singularity scenario in 2011 too. http://www.antipope.org/charlie/blog-static/2011/06/reality-check-1.html

    • http://www.feld.com bfeld

      Naam is great. I love his writing. And yes…

  • Alex Pack

    Ok, but you didn’t actually explain why you think Brooks’ predictions are off-base. It’s not like he’s the only one to say these things. Various smart philosophers have been writing about both futures in terms of media, communication tech, and big data for a long time (Hannah Arendt, Althusser and the Frankfurt School, Habermas, etc.).

    For those of us that haven’t read Hyperion (yet), what’s the TL;DR?

  • http://www.startupmanagement.org/ William Mougayar

    “Classical education vs. online education.” That’s an important theme for entrepreneurship & startups.

    • Rick

      The internet is a great way to really improve people’s lives by delivering education. But the problem, of course, is finding funding. Delivering knowledge via the web doesn’t take any truly innovative technologies. That leads to difficulty with funding. Well maybe just for me. :-(
      .
      Anyway… I’ve already looked into that and it would be a fantastic thing to bring to the public!

  • james mawson

    Thoughtful post, Brad. At the Sinet event – link below – for cyber-security officials at the US and UK government agencies (DHS, GCHQ, etc.), there was a general sense that technology was moving too fast to try to regulate. I’ll check out the Hyperion series to see the correlation. http://globalgovernmentventuring.com/news/editorial-cyber-security-whistles-for-attention

  • Nigel Sharp

    Brad, thank you for this post. I was recently at a TEDx event in Yerevan listening to a naive speech about letting AI bots out over the internet with no regulation – these guys have raised money and are doing exactly that… I first felt fear, and then solidly believed we’re soon going to need to stand up and take action. I’m pro-technology development, but in the same way nano-bots could cause the ‘Grey Goo’ scenario, I am cautious about how we explore AI and what safeguards should be installed in the code.

  • DaveJ

    To address the risks, it’s crucial to separate the idea of narrow AI from that of general AI.

    Narrow AI consists of approaches that are not actually intelligent in the way humans are (in particular, via an integrated conceptual/perceptual system) but do things that have historically required intelligence to perform. Drones that can automatically and autonomously fly your tacos to your office or shoot up a town are examples of this; it’s fairly obvious that we are making solid progress toward this sort of thing, and enormous numbers of people are working on it. However, such systems are narrow in the sense that their “intelligence” is aimed at particular applications, even if they are contextually robust. Consequently, one task they will not be able to perform is designing future generations of AI, and while they will have advantages over humans in limited areas (like all machines, including the loom), they will also have distinct disadvantages.

    The existential risks of narrow AI are very similar to those of nuclear weapons. They could create big problems for the good guys if they were to fall into the wrong hands. But those hands would have to be human.

    General AI, in contrast, must have at least the same sort of intelligence that humans do. While they might also have special-purpose capabilities that they synthesize, the synthesis must be performed via the integrated conceptual/perceptual system mentioned above. Such AIs, once created, will have no particular disadvantages in comparison with humans – they can be embodied in whatever physical machines are desired, copied, backed up, sped up, and yet they will have all the same thinking capabilities that we do, including designing future generations (which may or may not have intellectual capabilities that are fundamentally different from or more powerful than ours – but that doesn’t matter much, since they will be faster and easier to connect to external narrow AI, storage, etc.). Very few people are working on this problem, and progress is slow. This is because it involves a great deal of deferred gratification, whereas narrow AI shows steady progress.

    The existential risks of general AI are deeply different too. They would be the next step in the evolution of intelligence. Though most people who think about it have their favorite scenarios, it is difficult to know how these AIs will behave, and in particular how they will act toward us. It is also difficult to know whether there is much we can do about it ahead of time. There is a major “cat is out of the bag” issue with them because once they exist, we will not be smart enough to know how to contain them.

    I suggest that we emphasize thinking about the more immediate and more familiar risks associated with narrow AI, if for no other reason than to give us at least the opportunity to worry, sometime in the future, about the risks of general AI.

    • williamhertling

      Great points, Dave.

  • Robert Harmon

    Hi Brad, check out (if you have not already) Physics of the Future: How Science Will Shape Human Destiny and Our Daily Lives by the Year 2100 by Michio Kaku.
    I think you will enjoy it!

  • Rick

    Where’s Brad been? Doesn’t he know that his blog post is like a first cup of coffee in the morning to entrepreneurs!

  • http://phiolo.blogspot.com/ Domenic Weber

    Brooks’s Humanistic Future doesn’t read as naive to me; rather, it’s just very close to the near future. When put in the context of 100+ years out, especially when we factor in Moore’s Law, yes, it could be missing something. It parallels Seth Godin’s “Stop Stealing Dreams” http://www.sethgodin.com/sg/docs/stopstealingdreamsscreen.pdf – the difference is that Seth is talking about right now, about what has actually already happened, rather than the future.

  • Steve Lincoln

    My concern about the continued development of AI is that it will rapidly erode the value of labor (but not capital, of course), so that we will experience a job market (and related standards of living) that are increasingly polarized between a need for highly skilled individuals and a need for low-paying jobs that aren’t yet replaced by AI-driven machines. I think that the hope was that AI would create such efficiencies in productivity that it would eventually allow for more leisure in our lifestyles. For those who own the capital that produces the AI-driven machines or the businesses that benefit from them, that may be true. For those who own only their own labor (at the low end of the scale), it may not.

  • http://www.earlyinvesting.com/ Adam Sharp

    Last two Hyperion books are excellent, enjoy!

  • RBC

    Having come back most of the previous 10 days for a daily dose of Feld funk/optimism/tech geekery – your home page of the future seems to look like the present! I hope all is well with your health and three cheers to having you back soon!!!

  • Rick

    Brad’s been gone for more than two weeks. Anyone checked to be sure he’s not slipping into depression?

    • RBC

      I was thinking the same thing… no answer to my note a week ago. I’m going to send an email, but if anyone in Boulder has more info, that would be reassuring.

  • http://www.museumplanet.com David Brown

    Come back Brad….

  • Rick

    This is turning into “Where in the world is Brad Sandiego?”

  • brgInRedSidis

    My all-time favorite series of books (right up there with “The Worthing Saga”). The author lives in Longmont and I make it a point to go to all his appearances at the Boulder Book Store.

  • Rick

    Well…
    .
    Anyone want to discuss idea stage investing?

    • Slim

      Where is Brad?

      • Rick

        I think he’s off discovering how to get ‘idea stage’ funding for us entrepreneurs. I imagine he’ll be back soon with great news for us all to take advantage of!
        .
        The real question is why is no one discussing topics here? Do we all *need* Brad to tell us what to think?!
        .
        It’s sad for a person that tries to build something great when the people don’t carry on his/her work.

        • Slim

          Why build when overpopulation is destroying the will to live? Perhaps controlling the lack of planning of would-be parents is a goal.
          I suspect something more sinister is at play.

          • Rick

            I do see examples of the problem you mentioned.
            .
            Do you have any examples of “more sinister”?

        • http://www.museumplanet.com David Brown

          Rather idiotic post Rick. Some of us actually care about Brad the person.

          • Rick

            I don’t see how that’s an idiotic post. I think that keeping the light on for him is a good idea.
            .
            Maybe I just think further ahead than you do.
            .
            If Brad is in harm’s way. We can’t do anything for him here. What did they say when you called about him?

    • http://www.museumplanet.com David Brown

      No. Most of us are concerned about Brad.

  • http://www.museumplanet.com David Brown

    No…Actually most are more interested in how Brad is doing.

  • Steve Lincoln

    Brad, Dude, we miss your writing and we miss you. You know you have a bunch of people out here who care about you. Please let us know that you are doing ok (or if you need something).

  • Slim

    A little investigation shows his last tweet was Nov 6, and his wife posted Nov 7 on the Alaska storm. Brad had posted on a story about burning out your life on work. Perhaps they are in Homer enjoying life without work or the web. Just a thought.

    • Rick

      I’ve also noticed that some of his very old archived posts have changed a bit. I think Brad may have been caught in a time warp and traveled forward to the past! He is now carrying out his plan to take over the world by changing history to make himself world dictator. The problem is he cannot reach his objective until he catches up with his future self in the ‘now’ of time.
      .
      We will just have to wait to see if he can find a way to travel back in time to the present. If he can he will be world dictator where everyone is forced to have a blog and use a smartphone!!!
      .
      Until next time…

  • http://www.feedthebeast.biz/blog Drew Williams

    There is another possibility. From Brad’s last post: “…in the next month while I’m on sabbatical”. Flash sabbatical?

    • Rick

      Sabbatical: Taking time away from work to listen to Black Sabbath albums and meditate.

  • Agnieszka Maryniaczyk

    A great futurologist from my country (Lem) said that we shouldn’t fear AI, because in the future no one will build thinking, human-like machines, although it would be possible. It would simply not be needed.

  • http://www.FoundersFloor.com/ Matt Day

    Hope all is well Brad. If anyone has an update on how long Brad will be gone or any further info, please post. I guess the single sentence reference to a sabbatical means just that, time off. I think we are all just addicted to the daily blog posts and Tweets, so when they abruptly stop we get concerned.

  • http://www.museumplanet.com David Brown

    Why do I think troubled waters run deep?

  • Anwen Garston

    Hope you’re enjoying time off Brad. Miss your writing though!

  • http://www.about.me./Mariah.Lichtenstern Mariah Lichtenstern

    The other character should have cited abundant biologically sustainable sources of plastic, like hemp cellulose ;-) Ford used it to make a car in the 20th century.

    http://sensiseeds.com/en/blog/hemp-plastics-made/

    http://hempwaterbottles.tripod.com/what-is-hemp-plastic.html

    http://www.collective-evolution.com/2014/01/28/new-plastic-zeoform-turns-hemp-into-almost-anything/

  • Rick

    New theory on Brad’s disappearance:
    Brad’s wife caught him having an affair with his computer operating system, like in the movie Her, and now he’s at a counseling facility where the rules prohibit his use of computers.

  • Stanisław Witczyk

    When it comes to AI and the question of whether or not we want to develop one, I always point to the Mass Effect universe, where there were human-created AIs and they were great companions. Personally, I don’t know whether or not we will be able to create AI, but if we can, then why not try?
