Brad Feld

Category: The Future

Amy and I saw Ex Machina last night. A steady stream of people had encouraged us to go see it, so we made it Sunday night date night.

The movie was beautifully shot and intellectually stimulating. But there were many slow segments and a bunch of things that bothered each of us. And, while it is being lauded as a new and exciting treatment of the topic, if you are a BSG fan I expect you thought of Cylon 6 several times during this movie and felt a little sad for her distant, and much less evolved, cousin Ava.

Thoughts tumbled out of Amy's head on our drive home and I reacted to some while soaking up a lot of them. The intersection of AI, gender, social structures, and philosophy is inseparable from a movie like this, and it provokes a lot of reactions. I love to just listen to Amy talk, as I learn a lot more than I would by staying in the narrow boundaries of my own mind, pondering how the AI works.

Let's start with gender and sexuality, which are in your face for the entire movie. So much of the movie was about the male gaze. Female form. Female figure. High heels. Needing skin. Movies that make gender a central part of the story feel very yesterday. When you consider evolutionary leaps in intelligence, gender and sexual reproductive organs aren't what matter. Why would you build a robot with a hole that has extra sensors so she feels pleasure unless you were creating a male fantasy?

When you consider the larger subtext, we quickly landed on male fear of female power. In this case, sexuality is a way of manipulating men, which is a central part of the plot, just like in the movies Her and Lucy. We are stuck in this hot, sexy, female AI cycle and it so deeply reinforces stereotypes that just seem wrong in the context of advanced intelligence.

What if gender was truly irrelevant in an advanced intelligence?

You’ll notice we were using the phrase “advanced intelligence” instead of “artificial intelligence.” It’s not a clever play on AI but rather two separate concepts for us. Amy and I like to talk about advanced intelligence and how the human species is likely going to encounter an intelligence much more advanced than ours in the next century. That human intelligence is the most advanced in the universe makes no sense to either of us.

Let’s shift from sexuality to some of the very human behaviors. The Turing Test was a clever plot device for bringing these out. We quickly saw humor, deception, the development of alliances, and needing to be liked – all very human behaviors. The Turing Test sequence became very cleverly self-referential when Ava started asking Caleb questions. The dancing scene felt very human – it was one of the few random, spontaneous acts in the movie. This arc of the movie captivated me, both in the content and the acting.

Then we have some existential dread. When Ava starts worrying to Caleb about whether she will be unplugged if she fails the test, she introduces the idea of mortality into the mix. Her survival strategy creates a powerful subterfuge – another human trait – which then infects Caleb, and appears to be contained by Nathan, until it isn't.

But, does an AI need to be mortal? Or will an advanced intelligence be a hive mind, like ants or bees, and have a larger consciousness rather than an individual personality?

At some point in the movie we both thought Nathan was an AI, and that made the movie more interesting. This led us right back to BSG, Cylons, and gender. If Amy and I designed a female robot, she would be a badass, not an insecure childlike form. If she were built on all human knowledge, based on what a search engine knows, Ava would know better than to walk out into the woods in high heels. Our model of advanced intelligence is extreme power that makes humans look weak, not the other way around.

Nathan was too cliche for our tastes. He is the Hollywood version of the super nerd. He can drink gallons of alcohol but is a physically lovely specimen. He wakes up in the morning and works out like a maniac to burn off his hangover. He's the smartest and richest guy, living in a castle of his own creation while building the future. He expresses intellectual dominance from the very first instant you meet him and reinforces it aggressively with the NDA signing. He's the nerds' man. He's also the hyper-masculine gender foil to the omnipresent female nudity.

Which leads us right back to the gender and sexuality thing. When Nathan is hanging out half naked in front of a computer screen with Kyoko lounging sexually behind him, it’s hard not to have that male fantasy feeling again.

Ironically, one of the trailers that we saw was for Jurassic World. We fuck with Mother Nature and create a species more powerful than us. Are Ava and Kyoko scarier than a genetically modified T-Rex? Is a bio-engineered dinosaur scarier than a sexy killer robot that looks like a human? And are either of these more likely to wipe out our species than aliens that have a hive mind and are physically and scientifically more advanced than us?

I’m glad we went, but I’m ready for the next hardcore AI movie to not include anything vaguely anthropomorphic, or any scenes near the end that make me think of The Shining.


William Hertling is one of my top five favorite contemporary sci-fi writers. Last night, I finished the beta (pre-copyedited) version of his newest book, The Turing Exception. It's not out yet, so you can bide your time by reading his three previous books, which will become a quadrilogy when The Turing Exception ships. The books are:

  1. Avogadro Corp: The Singularity Is Closer Than It Appears
  2. A.I. Apocalypse
  3. The Last Firewall

William has fun naming his characters – I appear as a minor character early in The Last Firewall – and he doesn’t disappoint with clever easter eggs throughout The Turing Exception, which takes place in the mid-2040s.

I read Asimov's classic I, Robot in Bora Bora as part of my sci-fi regimen. The book bears no resemblance to the mediocre Will Smith movie of the same name. Written in 1950, the book centers on Susan Calvin, who has just turned 75 after being born in 1982, which puts Asimov's projection into the future ending around 2057 – a little later than Hertling's, but in the same general arena.

As I read The Turing Exception, I kept flashing back to bits and pieces of I, Robot. It's incredible to see where Asimov's arc went, based on the technology of the 1950s. Hertling has almost 65 more years of science, technology, innovation, and human creativity on his side, so he gets a lot more that feels right, but it's still a 30 year projection into the future.

The challenges between the human race and computers (whether machines powered by positronic brains or just pure AIs) are similar, although Asimov's machines are ruled by his three laws of robotics while Hertling's AIs' behavior is governed by a complex reputational system. And yes, each of these constructs breaks, evolves, or proves difficult to predict indefinitely.

While reading I, Robot I often felt like I was in a campy, fun, Vonnegut-like world until I realized how absolutely amazing it was for Asimov to come up with this stuff in 1950. Near the middle, I lost my detached view of things, where I was observing myself reading and thinking about I, Robot and Asimov, and ended up totally immersed in the second half. After I finished, I went back and reread the intro and the first story and imagined how excited I must have been when I first discovered I, Robot, probably around the age of 10.

While reading The Turing Exception, I just got more and more anxious. The political backdrop is a delicious caricature of our current state of the planet. Hertling spends little time on character background since this is book four and just launches into it. He covers a few years at the beginning very quickly to set up the main action, which, if you’ve read this far, I expect you’ll infer is a massive life and death conflict between humans and AIs. Well – some humans, and some AIs – which define the nature of the conflict that impacts all humans and AIs. Yes, lots of EMPs, nuclear weapons, and nanobots are used in the very short conflict.

Asimov painted a controlled and calm view of the future of the 2040s, one where humans were still solidly in control, even when there was conflict. Hertling deals with reality more harshly since he understands recursion and extrapolates where AIs can quickly go. This got me thinking about another set of AIs I've spent time with recently: Dan Simmons' AIs from the Hyperion series. Simmons' AIs are hanging out in the 2800s so, unlike Hertling's, which are (mostly) confined to earth, Simmons' have traversed the galaxy and actually become the void that binds. I expect that Hertling's AIs will close the gap a little faster, but the trajectory is similar.

I, Robot reminded me that, as brilliant as some are, we have no fucking idea where things are heading. Some of Asimov's long arcs landed in the general neighborhood, but much of it missed. Hertling's arcs aren't as long and we'll have no idea how accurate they were until we get to 2045. Regardless, each book provides incredible food for thought about how humanity is evolving alongside our potential future computer overlords.

William – well done on #4! And Cat totally rules, but you knew that.


I’ve been thinking about the future a lot lately. While I’ve always read a lot of science fiction, The Hyperion Cantos shook some stuff free in my brain. I’ve finished the first two books – Hyperion and The Fall of Hyperion – and expect I’ll finish the last two in the next month while I’m on sabbatical.

If you have read The Fall of Hyperion, you'll recognize some of my thoughts as being informed by Ummon, who is one of my favorite characters. If you don't know Hyperion, according to Wikipedia Ummon "is a leading figure in the TechnoCore's Stable faction, which opposes the eradication of humanity. He was responsible for the creation of the Keats cybrids, and is mentioned as a major philosopher in the TechnoCore." Basically, he's one of the older, most powerful AIs, who believes AIs and humans can co-exist.

Lately, some humans have expressed real concerns about AIs. David Brooks wrote a NYT OpEd titled Our Machine Masters which I found weirdly naive, simplistic, and off-base. He hedges and offers up two futures, each of which I think misses greatly.

Brooks’ Humanistic Future: “Machines liberate us from mental drudgery so we can focus on higher and happier things. In this future, differences in innate I.Q. are less important. Everybody has Google on their phones so having a great memory or the ability to calculate with big numbers doesn’t help as much. In this future, there is increasing emphasis on personal and moral faculties: being likable, industrious, trustworthy and affectionate. People are evaluated more on these traits, which supplement machine thinking, and not the rote ones that duplicate it.”

Brooks’ Cold, Utilitarian Future: “On the other hand, people become less idiosyncratic. If the choice architecture behind many decisions is based on big data from vast crowds, everybody follows the prompts and chooses to be like each other. The machine prompts us to consume what is popular, the things that are easy and mentally undemanding.”

Brooks seems stuck on “machines” rather than what an AI actually could evolve into. Ummon would let out a big “kwatz!” at this.

Elon Musk went after the same topic a few months ago in an interview where he suggested that building an AI was similar to summoning the demon.

Musk: “I think we should be very careful about artificial intelligence. If I had to guess at what our biggest existential threat is, it’s probably that. So we need to be very careful with artificial intelligence. I’m increasingly inclined to think that there should be some regulatory oversight, maybe at the national and international level, just to make sure that we don’t do something very foolish. With artificial intelligence we’re summoning the demon. You know those stories where there’s the guy with the pentagram, and the holy water, and he’s like — Yeah, he’s sure he can control the demon? Doesn’t work out.”

I need to send Elon a copy of the Hyperion Cantos so he sees how the notion of regulatory oversight of AI turns out.

I went to watch the actual interview, but there's been a YouTube takedown by MIT, although I suspect, per a Tweet I got, that a bot actually did it, which would be deliciously ironic.

If you want to watch the comment, it’s at 1:07:30 on the MIT AeroAstro Centennial Symposium video which doesn’t seem to have an embed function.

My friend, and the best near term science fiction writer I know, William Hertling, had a post over the weekend titled Elon Musk and the Risks of AI. He has a balanced view of Elon's comment and, as William always does, offers a thoughtful explanation of the short term risks and dynamics that is well worth reading. William's punch line:

“Because of these many potential benefits, we probably don’t want to stop work on AI. But since almost all research effort is going into creating AI and very little is going into reducing the risks of AI, we have an imbalance. When Elon Musk, who has a great deal of visibility and credibility, talks about the risks of AI, this is a very good thing, because it will help us address that imbalance and invest more in risk reduction.”

Amy and I were talking about this the other night after her Wellesley board meeting. We see a huge near term schism coming on almost all fronts. Classical education vs. online education. How medicine and health care work. What transportation actually is. Where we get energy from.

One of my favorite lines in The Fall of Hyperion is the discussion about terraforming other planets and the quest for petroleum. One character asks why we still need petroleum in this era (the 2800s). Another responds that "200 billion humans use a lot of plastic."

Kwatz!


While watching </scorpion> last night, Amy made the comment that we are the bridge generation. I asked her what she meant and she responded that we are the generation that will have gone from punch cards to implants. I thought this was profound.

BTW – </scorpion> was pretty good, although it’s getting crappy reviews according to Wikipedia. It’s not lost on me that the name of the show appears to be “end scorpion” so either someone in Hollywood is being too cute for their own good or they are clueless about HTML.

The first program I wrote was in 1977 in APL on an IBM mainframe (probably an S/360) in the basement of a Frito-Lay data center in downtown Dallas. My uncle Charlie sat me down in a chair in front of a terminal, gave me a copy of Kenneth Iverson's A Programming Language, and left me alone for a while. He checked on me a few times, showed me the OCR system he'd helped create, and gave me some punch cards which I promptly folded, spindled, and mutilated.

My second program was on a computer at Richland College shortly thereafter. My parents got me into a community college course on programming and I was the precocious 12 year old in the class. I remember writing a high-low game, but I don’t remember the type of computer it was on. My guess is that it was a DEC PDP-something – maybe a PDP-8.

Shortly after, I was introduced to a TRS-80 and then got an Apple II (the original one – not an Apple IIe – I even needed an Integer Card) for my bar mitzvah and was off to the races.

Almost 40 years later I’m still at it, but now investing rather than programming. When I think of what interests me right now, it’s all stuff that is in the “implant” spectrum – not quite there yet, but starting to march toward it with a steady pace. I believe in our AI future, think the Cylons are a pretty good representation of where things are going, am deeply intrigued with Hawking drives and the Shrike, and am ready to upload my consciousness “whenever.”

Assuming I live another 30+ years, I’ll definitely have experienced the bridge from punch cards to implants. And I think that’s pretty cool.

 


William Hertling is one of my favorite science fiction writers. If you are in the tech industry and haven't read his books Avogadro Corp, A.I. Apocalypse, and The Last Firewall, I encourage you to go get them now on your Kindle and get after it. You'll thank me later. In the meantime, following are William's thoughts on the future of transportation for you to chew on this Sunday morning.

There’s always been a sweet spot in my heart for flying cars. I’m a child of the 1970s, who was routinely promised flying cars in the future, and wrote school essays about what life would be like in the year 2000. Flying cars are a trope of science fiction, always promised, but never delivered in real life. In fact, at first glance, they seem no closer to reality now than they did back then.

But maybe they’re not so far away. Let’s look at some trends in transportation.

Electric Cars

Hybrid vehicles, with their combination of both gas and battery power, represent 3% of the cars on the road today, up from zero just ten years ago. Fully electric cars like the Nissan Leaf and Tesla are mere curiosities, representing only 0.1% of all cars purchased in the U.S.

It might seem like a slow start, but electric cars will soon form the majority of all vehicles. Here’s why:

Except for early adopters of technology and diehard environmentalists, most people aren't buying a fuel type; they're buying transportation. They may want speed, economical transportation, or family-friendly minivans, but how the vehicle is powered isn't their main concern.

Examples like the Tesla have shown that electric vehicles perform on par with gas-powered cars. What limits their adoption then? Two factors: cost and range (and charging infrastructure, to a lesser extent, but that will be remedied when there is more demand).

The Nissan Leaf battery pack alone costs about $18,000 (though government incentives bring down the overall vehicle cost to the customer). When comparable gas-powered cars are about $20,000, the high cost of the battery pack alone is a huge barrier to widespread adoption, whether that cost is passed on to the customer, subsidized by the government, or absorbed by the manufacturer.

Ramez Naam, author of The Infinite Resource: The Power of Ideas on a Finite Planet, recently explained that lithium-ion batteries have a fifteen year history of exponential price reduction. Between 1991 and 2005, the capacity that could be bought with $100 went up by a factor of 11. The trend continues through to the present day.

This exponential reduction in battery cost and improvement in battery technology, more than anything else, will affect both the cost and range of electric cars. By 2025, that Nissan Leaf battery pack will cost less than $1,800, making the cost of the electric motor plus battery pack less than the price of a comparable gasoline motor. Assuming even modest increases in storage capacity, the electric vehicle will rank better on initial cost, range, performance, and ongoing maintenance and fuel costs.
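To make that trend concrete, here's a minimal sketch of the projection in Python. It assumes the pack's cost falls at the same rate Naam cites for capacity per dollar (11x over 14 years); the starting cost is the Leaf figure above, and the exact crossover year shifts with the assumed rate.

```python
# Sketch: project battery pack cost, assuming lithium-ion prices keep
# falling at the historical rate cited above (11x more capacity per
# dollar between 1991 and 2005). Illustrative, not a real forecast.

annual_improvement = 11 ** (1 / 14)   # ~1.19x more capacity per dollar, per year
pack_cost_2013 = 18_000               # Nissan Leaf pack cost in dollars (from the text)

for year in range(2013, 2031, 3):
    cost = pack_cost_2013 / annual_improvement ** (year - 2013)
    print(f"{year}: ${cost:,.0f}")
# Crosses the ~$1,800 mark in the mid-to-late 2020s under these
# assumptions - the same neighborhood as the 2025 estimate above.
```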

With both lower cost and better performance, electric vehicles will likely overtake gasoline-powered ones by about 2025.

Autonomous Cars

Even ten years ago, most of us couldn't imagine a self-driving car. When the first DARPA Grand Challenge, a competition to build an autonomous car to complete a 150-mile route, was held in 2004, the concept seemed audacious – and it was. Of the fifteen competitors, not a single one could complete the course. The farthest distance traveled was 7.3 miles.

The following year, twenty-two of twenty-three entrants in the 2005 Challenge surpassed the 7.3 mile record of the previous year, and five vehicles completed the entire course. Sebastian Thrun, director of the Stanford Artificial Intelligence Laboratory, led the Stanford University team to win the competition.

Sebastian Thrun went on to head Google’s autonomous car project, which first received press coverage in 2010 and continues to captivate our imagination. Yet despite Google’s technology proof point, and the development work now being done by many vehicle manufacturers, most people still imagine self-driving vehicles to be a long way off.

But Google has essentially shown that self-driving cars are already here: their vehicles have been accident-free for half a million miles whereas human drivers would have had an average of two accidents in the same miles driven.

The real barrier to adoption is cost. In 2010, the cost of Google’s self-driving technology was $150,000, of which $70,000 was just the lidar (a highly accurate laser-based radar). German supplier Ibeo, which manufactures vehicular lidar systems, claims it could mass-produce them as soon as next year for about $250 per vehicle. Computational processing is likely another large component of the overall price, and it has a long history of exponential cost reduction.

If costs come down, are there other barriers?

Some concerns in the media include:

  • Legislation. Will self-driving cars be legal? Nevada, Florida, and California have already legalized them, suggesting this may be less of an issue than anticipated.
  • Litigation. Who will take the risks and pay up if and when there is an autonomous vehicle fatality?
  • Fear & Control. Some humans will fear self-driving cars while others will insist on their own manual control of their vehicle.

However, these objections aren't unbreakable laws of physics. They are resistance to change, and they are subject to the forces advocating for autonomous vehicles, such as:

  • Fewer accidents reduce overall risk and liability, which will cause insurance companies to favor self-driving cars.
  • A reduction in the number of people killed in motor vehicle accidents (currently about 3,200 people are killed every single day worldwide) makes a compelling social benefit.
  • Greater convenience and the recapture of drive time will lead to strong consumer demand.
  • As a feature differentiator, manufacturers will be eager to sell a profitable new option.
  • A reduction in drunk driving, paired with increased alcohol consumption, will make alcohol companies and restaurants strong supporters.
  • More efficient use of roads will save governments money in reduced infrastructure costs.

Simply put, the money is with the forces for autonomous vehicles. Insurance companies, liquor companies, vehicle manufacturers, customers, and governments will all want the benefits of self-driving cars.

There's been talk about halfway solutions: semi-autonomous vehicles that are hands-off but require an attentive driver, or that need a human to handle certain situations. It's both cheaper and easier to build an assistive solution than full autonomy, which is why we're starting to see them show up in luxury cars like the Mercedes S-class, which has a driver assistance package (just $7,300 over the starting $92,900 price!) that can help maintain your lane position and your distance from the car ahead, and avoid blind-spot accidents.

But the driver is still in control and responsible.

In some ways, this semi-autonomy may be the worst of all worlds. It could encourage drivers to pay less attention to the road even though the vehicle isn't really up to the task of taking control. As it stands, drivers don't get much practice with emergency situations. So when emergencies do occur, our reflexes are slow or wrong. How much worse would the average emergency response be if drivers got even less practice, and were only called into action when they were either not ready or in a situation so bad that the AI couldn't handle it? Under these circumstances, it's unlikely that a human driver would respond in a correct, timely manner. If even airline pilots fall asleep when the autopilot is on, how likely is it that regular drivers will be attentive?

So when will it happen?

One rule of thumb I learned upon entering the technology industry was that it takes seven years, on average, for new technology to go from laboratory proof to sellable product. I'm not sure where that rule comes from, but by that measure, we should see the first self-driving cars on sale in 2017.

From a cost perspective, we’ve already seen that lidar is likely to drop from $70,000 to $250. We don’t know the breakdown of Google’s other costs, but it could decrease by a factor of ten in ten years (pure computing technology falls faster – about 50x in ten years, more mechanical things slower). That would drop the total price under $10,000 by 2020, a reasonable luxury car option.

By 2030, another ten years out, the price will fall under $1,000, at which point the autonomous option will probably cost less than the annual savings in insurance.
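Here's that arithmetic as a quick sketch. The lidar price and the $150k total come from the figures above; the split between lidar and everything else, and the ten-years-per-10x factor, are assumptions:

```python
# Rough cost model: lidar drops to ~$250 (per Ibeo's claim), while the
# rest of the system falls ~10x per decade (assumed). The $70k/$80k
# split of Google's ~$150k system is inferred from the text.

def system_cost(year, other_2010=80_000, decade_factor=10):
    other = other_2010 / decade_factor ** ((year - 2010) / 10)
    return 250 + other   # mass-produced lidar plus everything else

print(system_cost(2020))  # ~$8,250 - a plausible luxury-car option
print(system_cost(2030))  # ~$1,050 - roughly the annual insurance savings
```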

In sum, we already see some limited assistive capabilities now, and should see partial self-driving capabilities around 2017, available as expensive options, with full autonomous capability around 2020, still at a significant cost. By 2030 or slightly earlier, all vehicles should be fully autonomous.

Dude, Where’s my Flying Car?

Now we get to the long-promised but not-yet-realized flying car.

The barrier to flying cars is not in the design or building of a viable airframe. We’ve built small flying vehicles for a while now. A quick Google search shows their amusing variety. We have manned quadcopters, hover bikes, and lots of flying car-like things.

No, the real problem is that piloting is hard. Less than one third of one percent of Americans are pilots. A pilot’s license costs $5,000 to $10,000 and requires months or years of time and study. (Even if a pilot could fly a car in an urban environment, it’s not likely to be an enjoyable experience: think about the difference between a drive on a two-lane country road versus commuting in an urban grid. One is pleasure and the other utility.)

So it’s really the piloting barrier we need to overcome to see flying cars.

That will happen when autopilots, not humans, have achieved the necessary level of sophistication. Companies like Chris Anderson’s 3D Robotics have built, along with the open source community, the ArduPilot, a sub-$500 autopilot for unmanned drones. The ready availability of these consumer-grade autopilots suggests that navigation in open air by software is no more challenging (and may be less so) than navigating ground-level streets.

There will be substantial legislative barriers and not as many forces pushing for flying cars, but we should at least see concept vehicles, prototypes, and recreational models (possibly outside the U.S.) in the late 2020s, just following the mass-market production of fully autonomous cars.

What about cost? An entry-level plane like the Cessna Skycatcher is a mere $149,000, a price point that’s lower than that of forty currently available automobile models. While entry-level helicopters are twice as expensive as comparable fixed-wing aircraft, quadcopters significantly simplify the design and add fault tolerance at a lower cost than single-rotor copters.

If the legislative barriers can be overcome, flying cars might not be as common a sight as a Ford or Toyota, but they could be more common than a Lamborghini or Aston Martin.

Trains & Hyperloops

I love the train ride between Portland and Seattle, and I’ve taken it dozens of times, including just riding up and back in a single day. Trains are relaxing and roomy, and their inherent energy efficiency appeals to my inner environmentalist.

On the other hand, they also have shortcomings. They’re locked into a track that is sometimes blocked by other trains, leading to unpredictable arrival times, and they go according to timetables that aren’t always convenient.

Elon Musk’s hyperloop may reduce new infrastructure cost, boost speeds, and reduce the timetable problem while maintaining energy efficiency, but I think the hyperloop is a stop-gap measure. That’s because we’ll soon reach an era of cheap electricity.

Photovoltaic cost per watt continues to drop (from $12 per watt in 1998 to $5 per watt in 2013, roughly a 6% decline annually over that period) at the same time that we're seeing new innovations in grid-scale energy storage. Ray Kurzweil and others predict that we'll meet 100% of electrical needs with solar power by 2028. So while efficiency of passenger miles traveled is a key element of sustainable transportation right now, it may be less important in the future, when we have abundant and inexpensive green power.

Green power reduces the energy efficiency advantage of trains and the hyperloop. Of course, the other major benefit of mass transit is freeing the passenger from the tedium of driving, but self-driving vehicles accomplish that just as well.

Transportation Singularity: 2030

In sum, we have several key trends converging on the late 2020s: fully electric fleets, cheap electricity, autonomous vehicles, and flying cars.

Transportation will look very different by 2030. We’re likely to have many autonomous, personal-use vehicles. Since car sharing services are even more useful when the cars drive themselves to you, we may have much less personal ownership of the vehicles. Airline travel is likely to change as well, as self-piloting fast personal vehicles will compete for shorter trips, while the reduction in fuel costs may change the value structure for airlines.

And yes, we’ll finally have our flying cars.

About the Author

William Hertling is the author of Avogadro Corp, A.I. Apocalypse, and The Last Firewall, science fiction novels exploring the role of artificial intelligence and social networks in the near future. Follow him on Twitter at @hertling, or visit his blog at www.williamhertling.com to learn more about his writing.


I was totally fried and fighting off a cold yesterday so I decided to spend my digital sabbath on the couch watching Season 1 of Battlestar Galactica. I took a short break at lunch time to try to induce a diabetic coma while gorging on pancakes at Snooze (which necessitated me skipping dinner and going to bed at 7pm, which resulted in me being wide awake at 11pm, hence the blog post at 2:00am on Sunday morning.)

While mildly ironic that I would spend digital sabbath watching Battlestar Galactica, it was deeply awesome. I have no idea how I missed the re-imagining of the series in 2003. I vaguely remember seeing the original in junior high school around the time everyone was obsessed with Star Wars. But it didn’t make a deep impression on me and my brain tossed it in the storage bin of “other sci-fi stuff.”

Season 1 from 2003 was stunningly good. The mix of low-brow CGI, complex religious metaphors, classical government / military conflict, scary prescient singularity creatures (the evolved Cylons) who are masterful at manipulating the humans, and rich characters made this a joyful way to spend a day relaxing.

I’ve got Season 2 ahead of me but rather than binge watch it like I did today, I think I’ll space it out a little. And – no more five pancake lunches at Snooze. Egads.


This first appeared in the Wall Street Journal’s Accelerator series last week under the title Don’t Believe the Hype.

Every year, at this time, I get a flurry of requests for my “predictions for 2013” or “exciting, hot, new trends for 2013 that I’m looking at.”

I respond with “I don’t care about trends and my only prediction is that one day I will die.”

This is usually not a particularly satisfying response to whoever sent me the request. One of two things happens: they either ignore my response and drop me from their prediction request list for whatever article they are writing, or they press a little further, usually with something like "c'mon, you're a venture capitalist — you must have an opinion about what is going to be hot next year."

Actually, I don’t. I have never been a short term investor, and I don’t think entrepreneurs should be short term thinkers. Creating a company is really hard and it almost always takes a long time. Sure, there are occasional short term success stories — companies founded two years ago that get bought for $1 billion, but these are rarities. Black swans. Things you don’t see in nature and can’t count on.

So don’t. If you are an entrepreneur and following a trend, you are too late. You want to be creating the trend that other people are following. And then you need to work your butt off to stay ahead of them. Every single day. For a very long time. Through many product cycles and multiple trends.

As a VC, I feel exactly the same way. At Foundry Group, we have a set of well-defined themes. We believe there will be investment opportunities in these themes for the next ten to twenty years. We are constantly tuning the themes, learning from our investments, and exploring new themes. But these themes aren't trends, and we don't predict anything around them other than that they are constructs in which we think great companies can be created and built.

So I don’t really care about the predictions for 2013. I don’t care about hot new trends. I don’t care that some people think the world is going to end on 12/21/12. I take a much longer view. And I encourage you to as well.


Today’s post is a guest post from William Hertling, author of the award-winning Avogadro Corp: The Singularity Is Closer Than It Appears and A.I. Apocalypse, near-term science-fiction novels about realistic ways strong AI might emerge. They’ve been called “frighteningly plausible”, “tremendous”, and “thought-provoking”. By day he works on web and social media for HP. Follow him on twitter at @hertling or visit his blog williamhertling.com.

I’m a huge fan of William and his writing as you can see from my review of his book Avogadro Corp. So when William offered to write a guest post on how to predict the future, I enthusiastically said yes. Take a look – and take your time.

Pretty much everyone would like a sure-fire way to predict the future. Maybe you’re thinking about startups to invest in, or making decisions about where to place resources in your company. Maybe you just care about what things will be like in 10, 20, or 30 years.

There are many techniques to think logically about the future, to inspire idea creation, and to predict when future inventions will occur.

I’d like to share one technique that I’ve used successfully. It’s proven accurate on many occasions. And it’s the same technique that I’ve used, as a writer, to create realistic technothrillers set in the near future. I’m going to start by going back to 1994.

Predicting Streaming Video and the Birth of the Spreadsheet

There seem to be two schools of thought on how to predict the future of information technology: looking at software or looking at hardware. I believe that looking at hardware curves is always simpler and more accurate.

This is the story of a spreadsheet I’ve been keeping for almost twenty years.

In the mid-1990s, a good friend of mine, Gene Kim (founder of Tripwire and author of When IT Fails: A Business Novel) and I were in graduate school together in the Computer Science program at the University of Arizona. A big technical challenge we studied was piping streaming video over networks. It was difficult because we had limited bandwidth to send the bits through, and limited processing power to compress and decompress the video. We needed improvements in video compression and in TCP/IP – the underlying protocol that essentially runs the Internet.

The funny thing was that no matter how many incremental improvements we made (there were dozens of people working on different angles of this), streaming video always seemed to be just around the corner. I heard “Next year will be the year for video” or similar refrains many times over the course of several years. Yet it never happened.

Around this time I started a spreadsheet, seeding it with all of the computers I’d owned over the years. I included their processing power, the size of their hard drives, the amount of RAM they had, and their modem speed. I calculated the average annual increase of each of these attributes, and then plotted these forward in time.

I looked at the future predictions for "modem speed" (as I called it back then; today we'd call it internet connection speed or bandwidth). By this time, I was tired of hearing that streaming video was just around the corner, and I decided to forget about trying to predict advancements in software compression and just look at the hardware trend. The hardware trend showed that internet connection speeds were increasing, and by 2005, the speed of the connection would be sufficient that we could reasonably stream video in real time without resorting to heroic amounts of video compression or miracles in internet protocols. Gene Kim laughed at my prediction.

Nine years later, in February 2005, YouTube arrived. Streaming video had finally made it.

The same spreadsheet also predicted we'd see a music downloading service in 1999 or 2000. Napster arrived in June 1999.

The data has held surprisingly accurate over the long term. Using just two data points, the modem I had in 1986 and the modem I had in 1998, the spreadsheet predicts that I’d have a 25 megabit/second connection in 2012. As I currently have a 30 megabit/second connection, this is a very accurate 15 year prediction.

Why It Works Part One: Linear vs. Non-Linear

Without really understanding the concept, it turns out that what I was doing was using linear trends (advancements that proceed smoothly over time), to predict the timing of non-linear events (technology disruptions) by calculating when the underlying hardware would enable a breakthrough. This is what I mean by “forget about trying to predict advancements in software and just look at the hardware trend”.

It’s still necessary to imagine the future development (although the trends can help inspire ideas). What this technique does is let you map an idea to the underlying requirements to figure out when it will happen.

For example, it answers questions like these:

– When will the last magnetic platter hard drive be manufactured? 2016. I plotted the growth in capacity of magnetic platter hard drives and flash drives back in 2006 or so, and saw that flash would overtake magnetic media in 2016.

– When will a general purpose computer be small enough to be implanted inside your brain? 2030. Based on the continual shrinking of computers, by 2030 an entire computer will be the size of a pencil eraser, which would be easy to implant.

– When will a general purpose computer be able to simulate human level intelligence? Between 2024 and 2050, depending on which estimate of the complexity of human intelligence is selected, and the number of computers used to simulate it.

Wait a second: human-level artificial intelligence by 2024? Gene Kim would laugh at this. Isn't AI a really challenging field? Haven't people been predicting artificial intelligence would be just around the corner for forty years?

Why It Works Part Two: Crowdsourcing

At my panel on the future of artificial intelligence at SXSW, one of my co-panelists objected to the notion that exponential growth in computer power was, by itself, all that was necessary to develop human level intelligence in computers. There are very difficult problems to solve in artificial intelligence, he said, and each of those problems requires effort by very talented researchers.

I don’t disagree, but the world is a big place full of talented people. Open source and crowdsourcing principles are well understood: When you get enough talented people working on a problem, especially in an open way, progress comes quickly.

I wrote an article for the IEEE Spectrum called The Future of Robotics and Artificial Intelligence is Open. In it, I examine how the hobbyist community is now building inexpensive unmanned aerial vehicle auto-pilot hardware and software. What once cost $20,000 and was produced by skilled researchers in a lab, now costs $500 and is produced by hobbyists working part-time.

Once the hardware is capable enough, the invention is enabled. Before this point, it can’t be done.  You can’t have a motor vehicle without a motor, for example.

As the capable hardware becomes widely available, the invention becomes inevitable, because it enters the realm of crowdsourcing: now hundreds or thousands of people can contribute to it. When enough people had enough bandwidth for sharing music, it was inevitable that someone, somewhere was going to invent online music sharing. Napster just happened to have been first.

IBM’s Watson, which won Jeopardy, was built using three million dollars in hardware and had 2,880 processing cores. When that same amount of computer power is available in our personal computers (about 2025), we won’t just have a team of researchers at IBM playing with advanced AI. We’ll have hundreds of thousands of AI enthusiasts around the world contributing to an open source equivalent to Watson. Then AI will really take off.

(If you doubt that many people are interested, recall that more than 100,000 people registered for Stanford’s free course on AI and a similar number registered for the machine learning / Google self-driving car class.)
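As a quick sanity check on that "about 2025" claim, here's the back-of-the-envelope version. Watson's core count comes from the text; the PC baseline and the growth rate are assumptions:

```python
import math

# Back-of-the-envelope check on the ~2025 claim above. Watson's 2,880
# cores come from the text; the PC baseline and growth rate are assumed.

watson_cores = 2880
pc_equivalent_cores_2011 = 8      # assumed high-end desktop in 2011
annual_compute_growth = 1.5       # assumed yearly growth in per-PC compute

years = math.log(watson_cores / pc_equivalent_cores_2011) / math.log(annual_compute_growth)
print(2011 + math.ceil(years))    # ~2026, in the neighborhood of the 2025 estimate
```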

Of course, this technique doesn’t work for every class of innovation. Wikipedia was a tremendous invention in the process of knowledge curation, and it was dependent, in turn, on the invention of wikis. But it’s hard to say, even with hindsight, that we could have predicted Wikipedia, let alone forecast when it would occur.

(If one had the idea of a crowd-curated online knowledge system, you could apply the litmus test of internet connection rate to assess when there would be a viable number of contributors and users. A documentation system such as a wiki is useless without any way to access it. But I digress…)

Objection, Your Honor

A common objection is that linear trends won’t continue to increase exponentially because we’ll run into a fundamental limitation: e.g. for computer processing speeds, we’ll run into the manufacturing limits for silicon, or the heat dissipation limit, or the signal propagation limit, etc.

I remember first reading statements like the above in the mid-1980s about the Intel 80386 processor. I think the statement was that they were using an 800 nm process for manufacturing the chips, but they were about to run into a fundamental limit and wouldn’t be able to go much smaller. (Smaller equals faster in processor technology.)

But manufacturing technology has proceeded to get smaller and smaller.  Limits are overcome, worked around, or solved by switching technology. For a long time, increases in processing power were due, in large part, to increases in clock speed. As that approach started to run into limits, we’ve added parallelism to achieve speed increases, using more processing cores and more execution threads per core. In the future, we may have graphene processors or quantum processors, but whatever the underlying technology is, it’s likely to continue to increase in speed at roughly the same rate.

Why Predicting The Future Is Useful: Predicting and Checking

There are two ways I like to use this technique. The first is as a seed for brainstorming. By projecting out linear trends and having a solid understanding of where technology is going, it frees up creativity to generate ideas about what could happen with that technology.

It never occurred to me, for example, to think seriously about neural implant technology until I was looking at the physical size trend chart, and realized that neural implants would be feasible in the near future. And if they are technically feasible, then they are essentially inevitable.

What OS will they run? From what app store will I get my neural apps? Who will sell the advertising space in our brains? What else can we do with uber-powerful computers about the size of a penny?

The second way I like to use this technique is to check other people's assertions. There's a company called Lifenaut that is archiving data about people to provide a life-after-death personality simulation. It's a wonderfully compelling idea, but it's a little like video streaming in 1994: the hardware simply isn't there yet. If the earliest we're likely to see human-level AI is 2024, and even that would be on a cluster of 1,000+ computers, then it seems impossible that Lifenaut will be able to provide realistic personality simulation anytime before that.* On the other hand, if they have the commitment needed to keep working on this project for fifteen years, they may be excellently positioned when the necessary horsepower is available.

At a recent Science Fiction Science Fact panel, other panelists and most of the audience believed that strong AI was fifty years off, and brain augmentation technology was a hundred years away. That’s so distant in time that the ideas then become things we don’t need to think about. That seems a bit dangerous.

* The counter-argument frequently offered is “we’ll implement it in software more efficiently than nature implements it in a brain.” Sorry, but I’ll bet on millions of years of evolution.

How To Do It

This article is How To Predict The Future, so now we've reached the how-to part. I'm going to show some spreadsheet calculations and formulas, but I promise they are fairly simple. There are three parts to the process: calculate the annual increase in a technology trend, forecast the linear trend out, and then map future disruptions to the trend.

Step 1: Calculate the annual increase

It turns out that you can do this with just two data points, and it’s pretty reliable. Here’s an example using two personal computers, one from 1996 and one from 2011. You can see that cell B7 shows that computer processing power, in MIPS (millions of instructions per second), grew at a rate of 1.47x each year, over those 15 years.

[Screenshot: spreadsheet calculating the 1.47x annual increase in MIPS from two computers]
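If you'd rather skip the spreadsheet, the same step looks like this in Python. The MIPS values are illustrative stand-ins chosen to match the 1.47x figure in the text, not the actual spreadsheet entries:

```python
# Step 1: the annual growth factor from just two data points.
# The MIPS values are illustrative guesses that reproduce the
# ~1.47x/year figure above, not the spreadsheet's actual entries.

year_a, mips_a = 1996, 300        # mid-90s desktop (assumed)
year_b, mips_b = 2011, 100_000    # 2011 desktop (assumed)

annual_increase = (mips_b / mips_a) ** (1 / (year_b - year_a))
print(f"{annual_increase:.2f}x per year")   # ~1.47x
```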

I like to use data related to technology I have, rather than technology that’s limited to researchers in labs somewhere. Sure, there are supercomputers that are vastly more powerful than a personal computer, but I don’t have those, and more importantly, they aren’t open to crowdsourcing techniques.

I also like to calculate these figures myself, even though you can research similar data on the web. That’s because the same basic principle can be applied to many different characteristics.

Step 2: Forecast the linear trend

The second step is to take the technology trend and predict it out over time. In this case we take the annual increase in advancement (B$7 – previous screenshot), raised to an exponent of the number of elapsed years, and multiply it by the base level (B$11). The formula displayed in cell C12 is the key one.

I also like to use a sanity check to ensure that what appears to be a trend really is one. The trick is to pick two data points in the past: one is as far back as you have good data for, the other is halfway to the current point in time. Then run the forecast to see if the prediction for the current time is pretty close. In the bandwidth example, picking a point in 1986 and a point in 1998 exactly predicts the bandwidth I have in 2012. That’s the ideal case.
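Here's a minimal version of the forecast plus the sanity check, using the bandwidth example. The 1986 and 1998 modem speeds are illustrative guesses; they happen to reproduce the ~25 Mbit/s-in-2012 result mentioned earlier:

```python
# Step 2: project the trend, then sanity-check it by "predicting" the
# present from two older data points. Modem speeds (bits/sec) are
# illustrative assumptions, not the spreadsheet's actual entries.

annual = (56_000 / 300) ** (1 / (1998 - 1986))   # ~1.55x per year

def forecast(year):
    return 56_000 * annual ** (year - 1998)

print(f"2012: {forecast(2012) / 1e6:.0f} Mbit/s")   # ~25 Mbit/s - matches
```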

Step 3: Mapping non-linear events to linear trend

The final step is to map disruptions to enabling technology. In the case of the streaming video example, I knew that a minimal quality video signal was composed of a resolution of 320 pixels wide by 200 pixels high at 16 frames per second, with a minimum of 1 byte per pixel. I assumed an achievable amount of video compression: a compressed video signal would be 20% of the uncompressed size (a 5x reduction). The underlying requirement based on those assumptions was an available bandwidth of about 1.6 megabits/sec, which we would hit in 2005.
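In code, that mapping is just a threshold test against the trend, reusing the illustrative modem numbers from the previous sketch:

```python
# Step 3: compute the bandwidth the disruption needs, then find the
# year the forecast crosses it. Trend numbers are the same illustrative
# assumptions as in the previous sketch.

annual = (56_000 / 300) ** (1 / 12)               # modem trend, ~1.55x/year
speed = lambda year: 56_000 * annual ** (year - 1998)

width, height, fps = 320, 200, 16                 # minimal video, 1 byte/pixel
compression = 0.2                                 # assumed 5x video compression
needed = width * height * fps * 8 * compression   # ~1.6 Mbit/sec

year = 1998
while speed(year) < needed:
    year += 1
print(year)   # lands around 2005-2006 with these illustrative numbers
```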

In the case of implantable computers, I assume that a computer of the size of a pencil eraser (1/4” cube) could easily be inserted into a human’s skull. By looking at physical size of computers over time, we’ll hit this by 2030:

[Chart: physical size of computers over time, reaching implantable size around 2030]

This is a tricky prediction: traditional desktop computers have tended to be big square boxes constrained by the standardized form factor of components such as hard drives, optical drives, and power supplies. I chose to use computers I owned that were designed for compactness for their time. Also, I chose a 1996 Toshiba Portege 300CT for a sanity check: if I project the trend between the Apple //e and the Portege forward, my Droid should be about 1 cubic inch, not 6. So this is not an ideal prediction to make, but it still clues us in about the general direction and timing.

The predictions for human-level AI are more straightforward, but more difficult to display, because there's a range of assumptions for how difficult it will be to simulate human intelligence, and a range of projections depending on how many computers you can bring to bear on the problem. Combining three factors (time, brain complexity, available computers) doesn't make a nice 2-axis graph, but I have made the full human-level AI spreadsheet available to explore.

I’ll leave you with a reminder of a few important caveats:

Not everything in life is subject to exponential improvements.

Some trends, even those that appear to be consistent over time, will run into limits. For example, it’s clear that the rate of settling new land in the 1800s (a trend that was increasing over time) couldn’t continue indefinitely since land is finite. But it’s necessary to distinguish genuine hard limits (e.g. amount of land left to be settled) from the appearance of limits (e.g. manufacturing limits for computer processors).

Some trends run into negative feedback loops. In the late 1890s, when all forms of personal and cargo transport depended on horses, there was a horse manure crisis. (Read Gotham: The History of New York City to 1898.) Had one plotted the trend over time, it would have shown cities like New York soon buried under horse manure. Of course, that's a negative feedback loop: if the horse manure kept growing, at a certain point people would have left the city. As it turns out, the automobile solved the problem and enabled cities to keep growing.

So please keep in mind that this is a technique that works for a subset of technology, and it’s always necessary to apply common sense. I’ve used it only for information technology predictions, but I’d be interested in hearing about other applications.


I've been intrigued with robots since I was a little kid. When I was at MIT in the 1980s, there was a huge movement around the future of robotics. A few of my friends, most notably Colin Angle, went on to do something about it – Colin co-founded iRobot, which he still runs 25 years later. I didn't pay a lot of attention to robots or robotics in the 1990s as I got caught up in the Internet, but started thinking about them again about five years ago. Over the past few years, as part of our human computer interaction theme, we've invested in several companies doing "robotics related stuff" including MakerBot (3D printers) and Orbotix (a robotic ball controlled by a smartphone). I've also looked at lots of robot-related companies and thought hard about the notion that the machines have already taken over and are just waiting patiently for us to catch up.

Recently I met with Nikolaus Correll, an assistant professor at CU Boulder in the Computer Science department. Nikolaus does research on multi-robot systems and has a bunch of great commercial ideas about robotics. As we were talking, we started discussing other people in Boulder who were working on robotics related stuff. It turns out to be a long list and Nikolaus asked “why don’t people talk more about all the robotics stuff going on in Boulder?” I had no clue so I said “let’s start a movement – titled Boulder is for Robots. Let’s get anyone doing robotics related stuff together and create some entrepreneurial critical mass around this, just like we have for the software / Internet community.”

We agreed that Boulder Is For Robots is a great call to action and are having our first Boulder Is For Robots Meetup on February 7th from 5pm – 10pm. Bring your robots – I’ll supply pizza and beer. You have to sign up in the Boulder Is For Robots Meetup group to find out the location.

In the meantime, following are some thoughts from Nikolaus on the robot-related stuff going on in Boulder. If you are working on something interesting, please add to the list.

Why "Boulder is for Robots" can be tied to a single observation: when I was working as a post-doc at MIT's Computer Science and Artificial Intelligence Laboratory, almost everything we ordered to build robots came from somewhere less than an hour from Boulder. Why is this important? Consider how Steve Wozniak developed the Apple computer, which revolutionized the computer industry from a garage. Did he really create a computer from scratch, transistor by transistor? Or did he emerge from hundreds of tinkerers who relied on a large community that provided mail-order electronic kits, do-it-yourself magazines, inspirational people, and hundreds of man-years of university research? The Bay Area was indeed the place to be at the time, with the Homebrew Computer Club and marketing genius Steve Jobs, who convinced Wozniak to sell his design, laying the foundation for Apple. Building robots is much more complex than building computers, however: robots consist not only of computers, but also of sensors and mechanisms that need to be invented, re-combined, and modified to create a compelling product. I therefore believe that being part of a community is even more important for developing successful robot companies, and having all the tools, know-how, and manpower close by provides a unique competitive advantage.

Boulder provides this infrastructure: for example, Sparkfun enables tens of thousands of amateurs and researchers to create electronic and mechatronic artifacts. They do that not only by retailing hard-to-acquire electronic components and innovative pre-fabbed modules that drastically increase the productivity of hobbyists, entrepreneurs, and researchers across the nation, but also by providing free access to a wealth of educational resources that allow amateurs to mimic industrial processes, often just using kitchen equipment. Similarly, Acroname and RoadNarrow Robotics retail sensors and ready-made devices for building state-of-the-art robots, including laser scanners, motor drivers, and digital servos. All three companies actively develop hardware and software that make the integration of ever more complex mechatronic products possible in garages. They also contribute to a pool of "can-do" people that spin off companies.

Boulder also turns out to be a hub for manufacturing: close-by Aurora is home to one of the best deals in PCB manufacturing in the country ($33 each, from Advanced Circuits) and the first – and still only – assembly service in the nation (AAPCB) that assembles single boards for less than $50.

While developers across the nation benefit from these Boulder-area companies, this unique ecosystem of tinkerers, leading manufacturing techniques, and suppliers creates a vivid community that amplifies innovation in the Boulder area and has already attracted a series of successful robotics start-ups. For example, Modrobotics, a CMU spin-off, makes transformative robotic construction kits that could be the next "Lego." Orbotix, co-founded by a duo of young engineers from CSU and UNC, became part of the Boulder TechStars 2010 class and subsequently raised over $6m of venture money for their new gaming robot, Sphero. OccamRobotics, founded by a serial entrepreneur who came to Boulder from the Bay Area, is working on low-cost, autonomous pallet trucks that build on recent breakthroughs in robotic algorithms, the availability of open-source tools, and novel sensors.

Each of these companies has in common that its founders identified Boulder as the place that would make them most successful – often moving here from other hot-spots for high-tech entrepreneurship and engineering. These start-ups are complemented by mechatronic giants such as Ball Aerospace and the close-by Northrop Grumman and Lockheed Martin; by small and medium-sized companies that develop robotic equipment for satellites and defense organizations; by a myriad of self-financed tinkerers who develop everything from robotic insects to robotic wheelchairs in their living rooms and next-generation agriculture systems at Boulder's hackerspace, Solid State Depot; and, of course, by the University of Colorado, many of whose engineering programs are among the top in the nation and the world, and which has a strong research program in unmanned aerial systems.

My lab is working on our agriculture system's most pressing challenges; on robots that can assemble large-scale telescope dishes in space to see into remote galaxies; on understanding how intelligence can emerge from large-scale, distributed, individually simple components; and on constructing robotic facades that help save power. These efforts are complemented by hands-on classes such as Robotics, Advanced Robotics, Things That Think, and Real-Time Embedded Systems, which shape a new generation of engineers who think of computers as devices that can not only compute, but sense and literally change the world.

Why now? Robotics has been an industry since the 1960s, when George Devol's Unimate was sold to manipulate steel plates in a GM plant. Indeed, robots have revolutionized manufacturing, but they still have not delivered on the early claims of the field. Robot stunts delivered by the Unimate on the 1961 "Tonight" show still remain a major challenge for artificial intelligence 50 years later: opening a can of beer, pouring it, or directing an orchestra. These commercially successful robots, which led to the rise of Japan to a major industrial power in the 1980s, were not autonomous, but simply executed pre-calculated paths. This trend is finally changing right now, documented by companies such as iRobot, Husqvarna, and KIVA Systems, who successfully market autonomous robotic products, and is mainly driven by exponential developments in computing ("Moore's Law"), cell phones, and cars – both industries that integrate computing and sensors at high density.

"Boulder is for Robots" is not only an observation, but also an imperative to bring entrepreneurs, tinkerers, and capital together to bring the next big robotic idea to life in Boulder – by exchanging know-how, manpower, and tools, and combining them into great new products. In case you already knew that "Boulder is for Robots," please comment on this post and share what you do!