Brad Feld

Tag: future

I’ve been a remote worker for 24 years. While I have an office in Boulder, I’m physically in my office only a small fraction of the time.

For many years, this was a function of travel. My investments have always been geographically distributed across the US and I spent the majority of my time between Monday and Friday on the road.

I learned how to work in hotel rooms, in other people’s offices, in conference rooms, at coffee shops, and in houses (mine and friends’). In 1995, at the dawn of the age of the commercial Internet, this involved landlines, answering machines, pagers, and fax machines. Today, my bet is that most 25-year-olds have never used any of these things.

In the past few years, there have been several high-profile examples of scaled companies with completely distributed workforces. Automattic (WordPress) is my favorite, as it has been organized that way by design since inception. Zapier is another that has gotten a lot of press lately for its distributed workforce approach. In a moment of delicious self-reference, Zapier put up a blog post titled 25+ Fully Remote Companies That Let You Work From Anywhere.

Many companies in our portfolio have multiple locations and increasingly distributed workforces. There’s a profound difference between “two locations” and “distributed”, but they are part of a similar phenomenon where the constraint of the physical is lowered.

As I reflect on my own work patterns, they are less and less connected to any particular physical space. This doesn’t mean that physical spaces are eliminated from my life, but that my work isn’t actually dependent on any of them. As I type on my laptop, in a room at my house in Longmont, with Amy sitting next to me, it’s easy to see how my day is going to unfold: a shower, followed by a video conference, and then an in-person meeting with someone coming to spend some time with me.

When I look at my schedule next week, I’m in my office on Monday for my partner meeting, but there’s literally no other reason I need to be in my office. I have some in-person meetings, but if the weather is nice, they will be walks outside. Any of them could be video conferences instead of face-to-face meetings.

In the past five years, as I’ve limited my travel, I’ve gained back a lot of time not spent moving from point A to point B. When I’ve chosen to travel, as I did recently on a multi-day trip to Seattle, I’ve been able to be deliberate about where I was and who I spent time with, and none of it required having a physical space.

I continue to strongly believe that place matters for the development of sustainable startup communities. But this is different from physical office space. I’m going to explore this more over the next year as I continue to embrace the lack of constraints around physical space in my world.

If you have good or bad experiences with distributed work, I’d love to hear them. I know there is an increasing number of technologies in use for helping manage organizations that are distributed – I’m interested in real stories of what works, vs. marketing hype. And, given that humans are intensely social creatures, I’d love to hear stories about how you maintain the appropriate level of physical interaction in a distributed workforce.


At dinner last night with Amy and friends we ended up in a long conversation about what’s going on in the world right now. We went down a few different paths, including a set of provocative questions like “Should the US have gotten involved in World War II earlier?” (me: Yes) and “Should the US have gotten involved in World War I earlier?” (me: I don’t know – I never have really understood World War I.)

The subtext kept cycling around what, if anything, is different today. Sure – many specific things are different – but is the essence of anything human fundamentally different?

I kept coming back to the idea that we have instantaneous information about everything everywhere all the time. That has been enabled by technology, especially over the past twenty years, and is accelerating. Technology doesn’t address everything – for example, air travel still sucks.

And, more importantly, the instantaneous information we have isn’t necessarily the truth. In fact, much of it isn’t the truth, but rather a point of view that a subset of people would like to enforce on another subset of people. This is a fundamental tenet of human behavior that has been going on since, well, forever. If you are struggling with what I’m suggesting, just ponder religion (and the history of religion) for a little while.

As I mulled over our conversation this morning, I feel like we are in the middle of a profound struggle between the future and the past. Many people, companies, and organizations are trying to protect the past at any cost. We see this regularly in business as the incumbent vs. innovator fight, but I think it’s more profound than that. It’s literally a difference in point of view.

For those trying to protect the past, it is a way of retaining power, status, money, a way of life, predictability, comfort, control, and a bunch of other things like that. It is a struggle against the inevitability of change. The approach, as change becomes more certain, or accelerates, is to become more extreme in one’s behavior, in an effort to defend the past. The defenders of the past get uglier, nastier, more hostile, louder, and more irrational. Ultimately time passes, people die, as mortality is still a foundational characteristic of humans, and the future becomes the present on its way to the past.

Our dinner discussion reminded all of us that this cycle plays out over and over again in the history of humanity.

 


If you are a movie producer and you want to actually make an AI movie that helps people really understand one of the paths we could find ourselves going down in the next decade, read vN: The First Machine Dynasty by Madeline Ashby.

I’ve read a lot of sci-fi in the past few years that involves AI. William Hertling is my favorite writer in this domain right now (Ramez Naam is an extremely close second), although his newest book, Kill Process (which is about to be released), is a departure from AI for him. Even though it’s not about AI, it’s amazing, so you should read it also.

I can’t remember who recommended Madeline Ashby and vN to me but I’ve been enjoying it on Audible over the past month while I’ve been running. I finished it today and had the “yup – this was great” reaction.

It’s an extremely uncomfortable book. I’ve been pondering the massive challenge we are going to have as a mixed society (non-augmented humans, augmented humans, and machines) for a while and this is the first book that I’ve read that feels like it could take place today. Ashby wrote this book in 2012 before the phrase AI got trendy again and I love that she refers to the machines as vNs (named after Von Neumann, with a delicious twist on the idea of a version number.)

I found the human / vN (organic / synthetic) sex dynamic to be overwhelming at times, but it’s a critically important underpinning of one of the major threads of the book. The mixed human / vN relationships, including those that involve parenting vN children, had similar qualities to some of what I’ve read about racially mixed, religiously mixed, and same-sex parents.

I’ve hypothesized that the greatest human rights issue our species will face in the next 30 years is what it actually means to be human, and whether that means you should be treated differently, which traces back to Asimov’s three laws of robotics. Ashby’s concept of a Fail Safe, and the failure of the Fail Safe, is a key part of this, as it marks the moment when human control over the machines’ behavior fails. This happens through a variety of methods, including reprogramming, iterating (self-replication), and absorption of code through consuming other synthetic material (e.g. vN body parts, or even an entire vN.)

And then it starts to get complicated.

I’m going for a two-hour run this morning so I’ll definitely get into the sequel, iD: The Second Machine Dynasty.


I’ve decided to read a bunch of old science fiction as a way to form some more diverse views of the future.

I’ve been reading science fiction since I was a kid. I probably started around age ten and was a voracious reader of sci-fi and fantasy in high school. I’ve continued on as an adult, estimating that 25% of what I read is science fiction.

My early diet was Asimov, Heinlein, Harrison, Pournelle, Niven, Clarke, Sterling and Donaldson. When I was on sabbatical a few years ago in Bora Bora I read about 40 books including Asimov’s I Robot, which I hadn’t read since I was a teenager.

I’m almost done with Liu’s The Dark Forest, which is blowing my mind. Yesterday morning I came across a great interview from 1999 with Arthur C. Clarke. A bunch of dots connected in my mind and I decided to go backwards to think about the future.

I don’t think we can imagine what things will be like 50 years from now and I’m certain we have no clue what a century from now looks like. So, whatever we believe is just random shit we are making up. And there’s no better way to come across random shit that people are making up than by reading sci-fi, which, even if it’s terribly incorrect, often stimulates really wonderful and wide-ranging thoughts for me.

So I thought I’d go backwards 50+ years and read sci-fi written in the 1950s and 1960s. I, Robot, written in 1950, was Asimov’s second book, so I decided to start with Pebble In The Sky (his first book, also written in 1950). After landing on Amazon, I was inspired to buy the first ten books by Asimov, which follow.

Pebble In The Sky (1950)
I, Robot (1950)
The Stars, Like Dust (1951)
Foundation (1951)
David Starr, Space Ranger (1952)
Foundation and Empire (1952)
The Currents of Space (1952)
Biochemistry and Human Metabolism w/Williams & Wilkins (1952)
Second Foundation (1953)
Lucky Starr and the Pirates of the Asteroids (1953)

They are all sci-fi except Biochemistry and Human Metabolism, written with Williams & Wilkins in 1952. I bought it also, just for the hell of it.

I bought them all in paperback and am going to read them as though I was reading them in the 1950s (on paper, without any interruptions from my digital devices) and see what happens in my brain. I’ll report back when I’m finished (or maybe along the way).

If this list inspires you with any sci-fi books from the 1950s or 1960s, toss them in the comments and I’ll grab them.


I’m at Startup Iceland today. I like Iceland – this is the second time I’ve been here. It’s the closest place on earth I’ve been to Alaska, which I love dearly. And it’s fun to see and hang out with my friend Bala Kamallakharan. As a super bonus, Om Malik – who I adore – is also here.

Om and I did a fireside chat with Bala. At the end, Bala asked about the future and what we were uncomfortable with. Neither of us is uncomfortable. Instead, we are both optimistic and intrigued with what is going on. Om talked about his view that this is the most exciting time to be alive and went on a riff about what is in front of us.

I started with my premise – that the machines have already taken over and are just waiting very patiently for us to catch up. They are happy to let us do a lot of work for them, including feeding them with data, building homes for them, and connecting them together. In the meantime, they are biding their time, doing their thing, alongside us.

If you wind the clock forward 50 years, our current state will be incomprehensible to that future human. The pace of technological change at all levels is accelerating at a pace we can’t fathom. Some people are pessimistic and now concerned about the notion of a real advanced intelligence. I’m optimistic and accepting of it, not fighting the inevitability of the path we are on or being in denial about our ability as a society to control things.

This is the rant I ended up on. Human structures change slowly. Change is unevenly distributed based on geography, culture, and political philosophy. Our legal system lags far behind what is actually happening, and as a result we are in the middle of a bunch of debates around technology, including privacy, net neutrality, data storage, and surveillance. Our existing approach as a species to dealing with these challenges is painful to watch from the future.

It’s fun to ponder how quickly things are changing, along with how badly certain parts of society want to keep them from changing, hanging on to the “way things are” or even the “way things were.” Don’t ever forget the sound of inevitability.


I hate doing “reflections on the last year” type of stuff so I was delighted to read Fred Wilson’s post this morning titled What Just Happened? It’s his reflection on what happened in our tech world in 2014 and it’s a great summary. Go read it – this post will still be here when you return.

Since I don’t really celebrate Christmas, I end up playing around with software a lot over the holidays. This year my friends at FullContact and Mattermark got the brunt of me using their software, finding bugs, making suggestions, and playing around with competitive stuff. I hope they know that I wasn’t trying to ruin their holidays – I just couldn’t help myself.

I’ve been shifting to almost exclusively reading (a) science fiction and (b) biographies. It’s an interesting mix that, when combined with some of the investments I’m deep in, has started me thinking about the next 30 years of the innovation curve. Every day, when doing something on the computer, I think “this is way too fucking hard”, or “why isn’t the data immediately available”, or “why am I having to tell the software to do this”, or “man, it’s ridiculous how hard it is to make this work.”

But then I read William Hertling’s upcoming book The Turing Exception and remember that The Singularity (a term first used by John von Neumann in 1958, not coined more recently by Ray Kurzweil, who has made it a very popular idea) is going to happen in 30 years. The AIs that I’m friends with don’t even have names or identities yet, but I expect some of them will within the next few years.

We have a long list of fundamental software problems that haven’t been solved. Identity is completely fucked, as is reputation. Data doesn’t move nicely between things and what we refer to as “big data” is actually going to be viewed as “microscopic data”, or better yet “sub-atomic data” by the time we get to the singularity. My machines all have different interfaces and don’t know how to talk to each other very well. We still haven’t solved the “store all your digital photos and share them without replicating them” problem. Voice recognition and language translation? Privacy and security – don’t even get me started.

Two of our Foundry Group themes – Glue and Protocol – have companies that are working on a wide range of what I’d call fundamental software problems. When I toss in a few of our HCI-themed investments, I realize that there’s a theme that might be missing: companies that are solving the next wave of fundamental software problems. These aren’t the ones readily identified today, but the ones that we anticipate will appear alongside the real emergence of the AIs.

It’s pretty easy to get stuck in the now. I don’t make predictions and try not to have a one year view, so it’s useful to read what Fred thinks since I can use him as my proxy AI for the -1/+1 year window. I recognize that I’ve got to pay attention to the now, but my curiosity right now is all about a longer arc. I don’t know whether it’s five, ten, 20, 30, or more years, but I’m spending intellectual energy using these time apertures.

History is really helpful in understanding this time frame. Ben Franklin, John Adams, and George Washington in the late 1700s. Ada Lovelace and Charles Babbage in the mid 1800s. John Rockefeller in the early 1900s. The word software didn’t even exist.

We’ve got some doozies coming in the next 50 years. It’s going to be fun.


William Hertling is one of my top five favorite contemporary sci-fi writers. Last night, I finished the beta (pre-copyedited) version of his newest book, The Turing Exception. It’s not out yet, so you can bide your time by reading his three previous books, which will become a quadrilogy when The Turing Exception ships. The books are:

  1. Avogadro Corp: The Singularity Is Closer Than It Appears
  2. A.I. Apocalypse
  3. The Last Firewall

William has fun naming his characters – I appear as a minor character early in The Last Firewall – and he doesn’t disappoint with clever easter eggs throughout The Turing Exception, which takes place in the mid-2040s.

I read Asimov’s classic I, Robot in Bora Bora as part of my sci-fi regimen. The book bears no resemblance to the mediocre Will Smith movie of the same name. In the book, written in 1950, Asimov’s main character, Susan Calvin, has just turned 75 after being born in 1982, which puts his projection into the future ending around 2057, a little later than Hertling’s, but in the same general arena.

As I read The Turing Exception, I kept flashing back to bits and pieces of I, Robot. It’s incredible to see where Asimov’s arc went, based on the technology of the 1950s. Hertling has almost 65 more years of science, technology, innovation, and human creativity on his side, so he gets a lot more that feels right, but it’s still a 30-year projection into the future.

The challenges between the human race and computers (whether machines powered by positronic brains or just pure AIs) are similar, although Asimov’s machines are ruled by his three laws of robotics while Hertling’s AIs’ behaviors are governed by a complex reputational system. And yes, each of these constructs breaks, evolves, or becomes difficult to predict over time.

While reading I, Robot I often felt like I was in a campy, fun, Vonnegut-like world, until I realized how absolutely amazing it was for Asimov to come up with this stuff in 1950. Near the middle, I lost my detached view of things, where I was observing myself reading and thinking about I, Robot and Asimov, and ended up totally immersed in the second half. After I finished, I went back and reread the intro and the first story and imagined how excited I must have been when I first discovered I, Robot, probably around the age of 10.

While reading The Turing Exception, I just got more and more anxious. The political backdrop is a delicious caricature of our current state of the planet. Hertling spends little time on character background since this is book four and just launches into it. He covers a few years at the beginning very quickly to set up the main action, which, if you’ve read this far, I expect you’ll infer is a massive life and death conflict between humans and AIs. Well – some humans, and some AIs – which define the nature of the conflict that impacts all humans and AIs. Yes, lots of EMPs, nuclear weapons, and nanobots are used in the very short conflict.

Asimov painted a controlled and calm view of the future of the 2040s, one where humans were still solidly in control, even when there was conflict. Hertling deals with reality more harshly since he understands recursion and extrapolates where AIs can quickly go. This got me thinking about another set of AIs I’ve spent time with recently: Dan Simmons’ AIs from the Hyperion series. Simmons’ AIs are hanging out in the 2800s, so, unlike Hertling’s, which are (mostly) confined to earth, they have traversed the galaxy and actually become the void that binds. I expect that Hertling’s AIs will close the gap a little faster, but the trajectory is similar.

I, Robot reminded me that as brilliant as some are, we have no fucking idea where things are heading. Some of Asimov’s long arcs landed in the general neighborhood, but much of it missed. Hertling’s arcs aren’t as long and we’ll have no idea how accurate they were until we get to 2045. Regardless, each book provides incredible food for thought about how humanity is evolving alongside our potentially future computer overlords.

William – well done on #4! And Cat totally rules, but you knew that.


I’ve been thinking about the future a lot lately. While I’ve always read a lot of science fiction, The Hyperion Cantos shook some stuff free in my brain. I’ve finished the first two books – Hyperion and The Fall of Hyperion – and expect I’ll finish the last two in the next month while I’m on sabbatical.

If you have read The Fall of Hyperion, you’ll recognize some of my thoughts as being informed by Ummon, who is one of my favorite characters. If you don’t know Hyperion, according to Wikipedia Ummon “is a leading figure in the TechnoCore’s Stable faction, which opposes the eradication of humanity. He was responsible for the creation of the Keats cybrids, and is mentioned as a major philosopher in the TechnoCore.” Basically, he’s one of the oldest, most powerful AIs, and he believes AIs and humans can co-exist.

Lately, some humans have expressed real concerns about AIs. David Brooks wrote a NYT OpEd titled Our Machine Masters, which I found weirdly naive, simplistic, and off-base. He hedges and offers up two futures, each of which I think misses greatly.

Brooks’ Humanistic Future: “Machines liberate us from mental drudgery so we can focus on higher and happier things. In this future, differences in innate I.Q. are less important. Everybody has Google on their phones so having a great memory or the ability to calculate with big numbers doesn’t help as much. In this future, there is increasing emphasis on personal and moral faculties: being likable, industrious, trustworthy and affectionate. People are evaluated more on these traits, which supplement machine thinking, and not the rote ones that duplicate it.”

Brooks’ Cold, Utilitarian Future: “On the other hand, people become less idiosyncratic. If the choice architecture behind many decisions is based on big data from vast crowds, everybody follows the prompts and chooses to be like each other. The machine prompts us to consume what is popular, the things that are easy and mentally undemanding.”

Brooks seems stuck on “machines” rather than what an AI actually could evolve into. Ummon would let out a big “kwatz!” at this.

Elon Musk went after the same topic a few months ago in an interview where he suggested that building an AI was similar to summoning the demon.

Musk: “I think we should be very careful about artificial intelligence. If I had to guess at what our biggest existential threat is, it’s probably that. So we need to be very careful with artificial intelligence. I’m increasingly inclined to think that there should be some regulatory oversight, maybe at the national and international level, just to make sure that we don’t do something very foolish. With artificial intelligence we’re summoning the demon. You know those stories where there’s the guy with the pentagram, and the holy water, and he’s like — Yeah, he’s sure he can control the demon? Doesn’t work out.”

I need to send Elon a copy of the Hyperion Cantos so he sees how the notion of regulatory oversight of AI turns out.

I went to watch the actual interview, but there’s been a YouTube takedown by MIT, although I suspect, per a Tweet I got, that a bot actually did it, which would be deliciously ironic.

If you want to watch the comment, it’s at 1:07:30 in the MIT AeroAstro Centennial Symposium video, which doesn’t seem to have an embed function.

My friend, and the best near-term science fiction writer I know, William Hertling, had a post over the weekend titled Elon Musk and the risks of AI. He has a balanced view of Elon’s comment and, as William always does, a thoughtful explanation of the short-term risks and dynamics well worth reading. William’s punch line:

“Because of these many potential benefits, we probably don’t want to stop work on AI. But since almost all research effort is going into creating AI and very little is going into reducing the risks of AI, we have an imbalance. When Elon Musk, who has a great deal of visibility and credibility, talks about the risks of AI, this is a very good thing, because it will help us address that imbalance and invest more in risk reduction.”

Amy and I were talking about this the other night after her Wellesley board meeting. We see a huge near-term schism coming on almost all fronts. Classical education vs. online education. How medicine and health care work. What transportation actually is. Where we get energy from.

One of my favorite lines in The Fall of Hyperion is the discussion about terraforming other planets and the quest for petroleum. One character asks why we still need petroleum in this era (the 2800s). Another responds that “200 billion humans use a lot of plastic.”

Kwatz!


Today’s post is a guest post from William Hertling, author of the award-winning Avogadro Corp: The Singularity Is Closer Than It Appears and A.I. Apocalypse, near-term science-fiction novels about realistic ways strong AI might emerge. They’ve been called “frighteningly plausible”, “tremendous”, and “thought-provoking”. By day he works on web and social media for HP. Follow him on Twitter at @hertling or visit his blog williamhertling.com.

I’m a huge fan of William and his writing as you can see from my review of his book Avogadro Corp. So when William offered to write a guest post on how to predict the future, I enthusiastically said yes. Take a look – and take your time.

Pretty much everyone would like a sure-fire way to predict the future. Maybe you’re thinking about startups to invest in, or making decisions about where to place resources in your company. Maybe you just care about what things will be like in 10, 20, or 30 years.

There are many techniques to think logically about the future, to inspire idea creation, and to predict when future inventions will occur.

I’d like to share one technique that I’ve used successfully. It’s proven accurate on many occasions. And it’s the same technique that I’ve used, as a writer, to create realistic technothrillers set in the near future. I’m going to start by going back to 1994.

Predicting Streaming Video and the Birth of the Spreadsheet

There seem to be two schools of thought on how to predict the future of information technology: looking at software or looking at hardware. I believe that looking at hardware curves is always simpler and more accurate.

This is the story of a spreadsheet I’ve been keeping for almost twenty years.

In the mid-1990s, a good friend of mine, Gene Kim (founder of Tripwire and author of When IT Fails: A Business Novel) and I were in graduate school together in the Computer Science program at the University of Arizona. A big technical challenge we studied was piping streaming video over networks. It was difficult because we had limited bandwidth to send the bits through, and limited processing power to compress and decompress the video. We needed improvements in video compression and in TCP/IP – the underlying protocol that essentially runs the Internet.

The funny thing was that no matter how many incremental improvements we made (there were dozens of people working on different angles of this), streaming video always seemed to be just around the corner. I heard “Next year will be the year for video” or similar refrains many times over the course of several years. Yet it never happened.

Around this time I started a spreadsheet, seeding it with all of the computers I’d owned over the years. I included their processing power, the size of their hard drives, the amount of RAM they had, and their modem speed. I calculated the average annual increase of each of these attributes, and then plotted these forward in time.

I looked at the future predictions for “modem speed” (as I called it back then; today we’d call it internet connection speed or bandwidth). By this time, I was tired of hearing that streaming video was just around the corner, and I decided to forget about trying to predict advancements in software compression and just look at the hardware trend. The hardware trend showed that internet connection speeds were increasing, and by 2005 the speed of the connection would be sufficient to reasonably stream video in real time without resorting to heroic amounts of video compression or miracles in internet protocols. Gene Kim laughed at my prediction.

Nine years later, in February 2005, YouTube arrived. Streaming video had finally made it.

The same spreadsheet also predicted we’d see a music downloading service in 1999 or 2000. Napster arrived in June 1999.

The data has held surprisingly accurate over the long term. Using just two data points, the modem I had in 1986 and the modem I had in 1998, the spreadsheet predicts that I’d have a 25 megabit/second connection in 2012. As I currently have a 30 megabit/second connection, this is a very accurate 15-year prediction.
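
To make the mechanics concrete, here’s a minimal sketch of that two-data-point calculation in Python. The 1986 and 1998 speeds are illustrative placeholders (the original spreadsheet’s values aren’t shown here), so treat the output as the shape of the math rather than a reproduction of the 25 megabit prediction.

```python
# Sketch of the spreadsheet's core math: average annual growth derived
# from two data points, then compounded forward. The speeds below are
# assumed placeholders, not the actual values from the spreadsheet.

def annual_growth(v_start, v_end, years):
    """Average annual growth factor between two measurements."""
    return (v_end / v_start) ** (1 / years)

speed_1986 = 2_400      # bits/second (hypothetical 1986 modem)
speed_1998 = 128_000    # bits/second (hypothetical 1998 ISDN line)

growth = annual_growth(speed_1986, speed_1998, 1998 - 1986)
speed_2012 = speed_1998 * growth ** (2012 - 1998)
print(f"{growth:.2f}x per year -> {speed_2012 / 1e6:.0f} Mbit/s in 2012")
```

With these placeholder inputs the projection lands at about 13 megabits/second for 2012 – the same order of magnitude as the real prediction, which is the point of the technique.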

Why It Works Part One: Linear vs. Non-Linear

Without really understanding the concept, it turns out that what I was doing was using linear trends (advancements that proceed smoothly over time), to predict the timing of non-linear events (technology disruptions) by calculating when the underlying hardware would enable a breakthrough. This is what I mean by “forget about trying to predict advancements in software and just look at the hardware trend”.

It’s still necessary to imagine the future development (although the trends can help inspire ideas). What this technique does is let you map an idea to the underlying requirements to figure out when it will happen.

For example, it answers questions like these:

– When will the last magnetic platter hard drive be manufactured? 2016. I plotted the growth in capacity of magnetic platter hard drives and flash drives back in 2006 or so, and saw that flash would overtake magnetic media in 2016. (There’s a sketch of this crossover calculation after this list.)

– When will a general purpose computer be small enough to be implanted inside your brain? 2030. Based on the continual shrinking of computers, by 2030 an entire computer will be the size of a pencil eraser, which would be easy to implant.

– When will a general purpose computer be able to simulate human level intelligence? Between 2024 and 2050, depending on which estimate of the complexity of human intelligence is selected, and the number of computers used to simulate it.
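
As promised above, here’s a hedged sketch of the hard drive crossover calculation. The 2006 capacities and growth rates are illustrative assumptions, not the figures from the original spreadsheet; what matters is the form of the calculation.

```python
import math

# When does a faster-growing exponential trend (flash) overtake a
# slower one (magnetic platters)? Starting capacities and growth rates
# below are assumptions, not the original spreadsheet's data.

def crossover_year(base_year, slow_value, slow_growth, fast_value, fast_growth):
    """Solve slow_value * slow_growth**t == fast_value * fast_growth**t for t."""
    t = math.log(slow_value / fast_value) / math.log(fast_growth / slow_growth)
    return base_year + t

# Assumed 2006 state: 500 GB magnetic drives growing 1.3x/year,
# 8 GB flash drives growing 2x/year.
print(f"{crossover_year(2006, 500, 1.3, 8, 2.0):.0f}")  # ~2016
```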

Wait a second: human-level artificial intelligence by 2024? Gene Kim would laugh at this. Isn’t AI a really challenging field? Haven’t people been predicting artificial intelligence would be just around the corner for forty years?

Why It Works Part Two: Crowdsourcing

At my panel on the future of artificial intelligence at SXSW, one of my co-panelists objected to the notion that exponential growth in computer power was, by itself, all that was necessary to develop human level intelligence in computers. There are very difficult problems to solve in artificial intelligence, he said, and each of those problems requires effort by very talented researchers.

I don’t disagree, but the world is a big place full of talented people. Open source and crowdsourcing principles are well understood: When you get enough talented people working on a problem, especially in an open way, progress comes quickly.

I wrote an article for the IEEE Spectrum called The Future of Robotics and Artificial Intelligence is Open. In it, I examine how the hobbyist community is now building inexpensive unmanned aerial vehicle auto-pilot hardware and software. What once cost $20,000 and was produced by skilled researchers in a lab, now costs $500 and is produced by hobbyists working part-time.

Once the hardware is capable enough, the invention is enabled. Before this point, it can’t be done.  You can’t have a motor vehicle without a motor, for example.

As the capable hardware becomes widely available, the invention becomes inevitable, because it enters the realm of crowdsourcing: now hundreds or thousands of people can contribute to it. When enough people had enough bandwidth for sharing music, it was inevitable that someone, somewhere was going to invent online music sharing. Napster just happened to have been first.

IBM’s Watson, which won Jeopardy, was built using three million dollars in hardware and had 2,880 processing cores. When that same amount of computer power is available in our personal computers (about 2025), we won’t just have a team of researchers at IBM playing with advanced AI. We’ll have hundreds of thousands of AI enthusiasts around the world contributing to an open source equivalent to Watson. Then AI will really take off.

(If you doubt that many people are interested, recall that more than 100,000 people registered for Stanford’s free course on AI and a similar number registered for the machine learning / Google self-driving car class.)
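
As a rough check on that “about 2025” figure, here’s a back-of-the-envelope sketch. The 8-core baseline for a 2011 personal computer, and the reuse of the 1.47x annual growth rate calculated in Step 1 below, are assumptions for illustration:

```python
import math

# How long until a personal computer matches Watson's 2,880 cores?
# Assumptions: a 2011 PC offers roughly 8 cores' worth of compute, and
# per-machine compute grows ~1.47x/year (the MIPS rate from Step 1).

watson_cores = 2880
pc_cores_2011 = 8       # assumed
annual_growth = 1.47    # assumed, borrowed from the MIPS calculation

years = math.log(watson_cores / pc_cores_2011) / math.log(annual_growth)
print(f"parity around {2011 + years:.0f}")  # ~2026, near the 2025 estimate
```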

Of course, this technique doesn’t work for every class of innovation. Wikipedia was a tremendous invention in the process of knowledge curation, and it was dependent, in turn, on the invention of wikis. But it’s hard to say, even with hindsight, that we could have predicted Wikipedia, let alone forecast when it would occur.

(If one had the idea of a crowd-curated online knowledge system, you could apply the litmus test of internet connection rate to assess when there would be a viable number of contributors and users. A documentation system such as a wiki is useless without any way to access it. But I digress…)

Objection, Your Honor

A common objection is that linear trends won’t continue to increase exponentially because we’ll run into a fundamental limitation: e.g. for computer processing speeds, we’ll run into the manufacturing limits for silicon, or the heat dissipation limit, or the signal propagation limit, etc.

I remember first reading statements like the above in the mid-1980s about the Intel 80386 processor. I think the statement was that they were using an 800 nm process for manufacturing the chips, but they were about to run into a fundamental limit and wouldn’t be able to go much smaller. (Smaller equals faster in processor technology.)

But manufacturing technology has proceeded to get smaller and smaller.  Limits are overcome, worked around, or solved by switching technology. For a long time, increases in processing power were due, in large part, to increases in clock speed. As that approach started to run into limits, we’ve added parallelism to achieve speed increases, using more processing cores and more execution threads per core. In the future, we may have graphene processors or quantum processors, but whatever the underlying technology is, it’s likely to continue to increase in speed at roughly the same rate.

Why Predicting The Future Is Useful: Predicting and Checking

There are two ways I like to use this technique. The first is as a seed for brainstorming. By projecting out linear trends and having a solid understanding of where technology is going, it frees up creativity to generate ideas about what could happen with that technology.

It never occurred to me, for example, to think seriously about neural implant technology until I was looking at the physical size trend chart, and realized that neural implants would be feasible in the near future. And if they are technically feasible, then they are essentially inevitable.

What OS will they run? From what app store will I get my neural apps? Who will sell the advertising space in our brains? What else can we do with uber-powerful computers about the size of a penny?

The second way I like to use this technique is to check other people’s assertions. There’s a company called Lifenaut that is archiving data about people to provide a life-after-death personality simulation. It’s a wonderfully compelling idea, but it’s a little like video streaming in 1994: the hardware simply isn’t there yet. If the earliest we’re likely to see human-level AI is 2024, and even that would be on a cluster of 1,000+ computers, then it seems impossible that Lifenaut will be able to provide realistic personality simulation anytime before that.* On the other hand, if they have the commitment needed to keep working on this project for fifteen years, they may be excellently positioned when the necessary horsepower is available.

At a recent Science Fiction Science Fact panel, other panelists and most of the audience believed that strong AI was fifty years off, and brain augmentation technology was a hundred years away. That’s so distant in time that the ideas then become things we don’t need to think about. That seems a bit dangerous.

* The counter-argument frequently offered is “we’ll implement it in software more efficiently than nature implements it in a brain.” Sorry, but I’ll bet on millions of years of evolution.

How To Do It

This article is How To Predict The Future, so now we’ve reached the how-to part. I’m going to show some spreadsheet calculations and formulas, but I promise they are fairly simple. There are three parts to the process: calculate the annual increase in a technology trend, forecast the linear trend out, and then map future disruptions to the trend.

Step 1: Calculate the annual increase

It turns out that you can do this with just two data points, and it’s pretty reliable. Here’s an example using two personal computers, one from 1996 and one from 2011. You can see that cell B7 shows that computer processing power, in MIPS (millions of instructions per second), grew at a rate of 1.47x each year, over those 15 years.

[Screenshot: spreadsheet comparing a 1996 and a 2011 computer, with cell B7 showing the 1.47x annual increase in MIPS]
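
The same calculation in Python, for anyone who prefers code to cells. The MIPS figures are hypothetical stand-ins (the actual spreadsheet values aren’t reproduced here), chosen so the arithmetic matches the 1.47x in cell B7:

```python
# Step 1 as code instead of a spreadsheet cell. The MIPS values are
# hypothetical stand-ins chosen to reproduce the 1.47x in cell B7.

mips_1996 = 200       # assumed mid-1990s personal computer
mips_2011 = 66_000    # assumed 2011 personal computer

years = 2011 - 1996
annual_increase = (mips_2011 / mips_1996) ** (1 / years)
print(f"annual increase: {annual_increase:.2f}x")  # 1.47x
```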

I like to use data related to technology I have, rather than technology that’s limited to researchers in labs somewhere. Sure, there are supercomputers that are vastly more powerful than a personal computer, but I don’t have those, and more importantly, they aren’t open to crowdsourcing techniques.

I also like to calculate these figures myself, even though you can research similar data on the web. That’s because the same basic principle can be applied to many different characteristics.

Step 2: Forecast the linear trend

The second step is to take the technology trend and predict it out over time. In this case we take the annual increase in advancement (B$7 – previous screenshot), raise it to the power of the number of elapsed years, and multiply it by the base level (B$11). The formula displayed in cell C12 is the key one.

I also like to use a sanity check to ensure that what appears to be a trend really is one. The trick is to pick two data points in the past: one is as far back as you have good data for, the other is halfway to the current point in time. Then run the forecast to see if the prediction for the current time is pretty close. In the bandwidth example, picking a point in 1986 and a point in 1998 exactly predicts the bandwidth I have in 2012. That’s the ideal case.
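
In code, the C12-style formula and the sanity check look like this, continuing with the hypothetical MIPS trend from Step 1:

```python
# Step 2: the C12-style formula as a function, using the hypothetical
# 1.47x/year MIPS trend from Step 1.

def forecast(base_value, base_year, annual_increase, target_year):
    """base_value * annual_increase ** elapsed_years (cell C12's formula)."""
    return base_value * annual_increase ** (target_year - base_year)

print(f"{forecast(66_000, 2011, 1.47, 2026):,.0f} MIPS in 2026")  # ~21 million

# Sanity check: anchor the trend on the oldest data point and a halfway
# point, forecast to the present, and compare against what you observe.
```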

Step 3: Mapping non-linear events to linear trend

The final step is to map disruptions to enabling technology. In the case of the streaming video example, I knew that a minimal quality video signal was composed of a resolution of 320 pixels wide by 200 pixels high by 16 frames per second with a minimum of 1 byte per pixel. I assumed an achievable amount for video compression: a compressed video signal would be 20% of the uncompressed size (a 5x reduction). The underlying requirement based on those assumptions was an available bandwidth of about 1.6 megabits/second, which we would hit in 2005.
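
Here are those video numbers worked as code. The required-bandwidth arithmetic follows the assumptions above; the 1998 baseline and growth rate for the bandwidth trend are the same placeholders used in the earlier modem sketch, since the actual spreadsheet data isn’t shown:

```python
import math

# Step 3: compute the enabling threshold, then solve for the year the
# hardware trend crosses it. Video parameters come from the text; the
# 1998 bandwidth baseline and growth rate are assumed placeholders.

width, height, fps, bytes_per_pixel = 320, 200, 16, 1
compression = 0.20  # compressed stream is 20% of raw (a 5x reduction)

raw_bits_per_sec = width * height * fps * bytes_per_pixel * 8
required = raw_bits_per_sec * compression
print(f"required: {required / 1e6:.2f} Mbit/s")  # 1.64 Mbit/s

# Year the trend crosses the threshold: solve base * growth**t >= required.
base_year, base_bps, growth = 1998, 128_000, 1.39  # assumed trend
t = math.log(required / base_bps) / math.log(growth)
print(f"crossed around {base_year + t:.0f}")  # ~2006, within a year of YouTube
```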

In the case of implantable computers, I assume that a computer the size of a pencil eraser (a 1/4” cube) could easily be inserted into a human’s skull. By looking at the physical size of computers over time, we’ll hit this by 2030:

[Screenshot: physical size of computers over time, with the trend reaching a 1/4” cube around 2030]

This is a tricky prediction: traditional desktop computers have tended to be big square boxes constrained by the standardized form factor of components such as hard drives, optical drives, and power supplies. I chose to use computers I owned that were designed for compactness for their time. Also, I chose a 1996 Toshiba Portege 300CT for a sanity check: if I project the trend between the Apple //e and the Portege forward, my Droid should be about 1 cubic inch, not 6. So this is not an ideal prediction to make, but it still clues us in about the general direction and timing.

The predictions for human-level AI are more straightforward, but more difficult to display, because there’s a range of assumptions for how difficult it will be to simulate human intelligence, and a range of projections depending on how many computers you can bring to bear on the problem. Combining three factors (time, brain complexity, available computers) doesn’t make a nice 2-axis graph, but I have made the full human-level AI spreadsheet available to explore.

I’ll leave you with a reminder of a few important caveats:

Not everything in life is subject to exponential improvements.

Some trends, even those that appear to be consistent over time, will run into limits. For example, it’s clear that the rate of settling new land in the 1800s (a trend that was increasing over time) couldn’t continue indefinitely since land is finite. But it’s necessary to distinguish genuine hard limits (e.g. amount of land left to be settled) from the appearance of limits (e.g. manufacturing limits for computer processors).

Some trends run into negative feedback loops. In the late 1890s, when all forms of personal and cargo transport depended on horses, there was a horse manure crisis. (Read Gotham: The History of New York City to 1898.) Had one plotted the trend over time, cities like New York would soon have been buried under horse manure. Of course, that’s a negative feedback loop: if the horse manure kept growing, at a certain point people would have left the city. As it turns out, the automobile solved the problem and enabled cities to keep growing.

So please keep in mind that this is a technique that works for a subset of technology, and it’s always necessary to apply common sense. I’ve used it only for information technology predictions, but I’d be interested in hearing about other applications.