Brad Feld

Month: June 2012


While the fires in Colorado have calmed down and firefighters are in the process of getting them contained, there continues to be plenty of fire danger and the crews are still working incredibly hard. It’s going to be a tough summer for fire in Colorado and I’m proud of all the support this community has given out of the gate to people impacted.

As of this evening we’ve raised $43,000, including the $20,000 match from me, Amy, and NewsGator. A number of companies have signed up to match gifts and Crowdrise, who has helped us get this online fundraiser up and running in the last 24 hours, has been awesome to work with.

If you haven’t contributed and are willing, please donate now. All of your donations via Team Anchor Point Fund (the foundation Amy and I have) will go to the Denver Foundation – CO Fire Relief Fund 2012.

There are many other initiatives going on to fund firefighters and people impacted by the fire. One of my favorites is Wild Fire Tees. I bought one yesterday – all of the profits are being donated to Care and Share or the Colorado Red Cross.

Sometimes it’s hard to realize the impact of community support in situations like this. There are numerous people working incredibly hard to deal with a force of nature (fire) that creates huge anxiety and stress in a community. Even if you aren’t directly affected, every contribution, no matter how small, is helpful.

For anyone who writes a check, does something to help someone who is impacted by the fires, or even just expresses words of support, thank you. I know the Colorado community appreciates it greatly, especially those directly impacted by the fires.

 


My friends at NewsGator have started a fundraising campaign to help victims of the Colorado wildfires. In addition to getting the campaign up and running, NewsGator has committed to a matching gift of $10,000. Amy and I decided to match that gift from our foundation, so the matching gift is now $20,000.

I’d like to encourage everyone involved in a startup in Colorado (or anyone in the world) to help your neighbors in Colorado Springs, Fort Collins, and Boulder who are victims of the current fires that are raging. There are two ways to do this:

1. Give a direct gift via my page. Amy and I are matching the first $10,000 of gifts.

2. If you are part of a startup, start a campaign for your company. It’s easy and will take a few minutes. Then – rally your gang to contribute.

While the current Boulder fire is getting under control, many people in Colorado Springs and Fort Collins are still at risk. And many others have been impacted. Here’s a note I got from a friend in Fort Collins.

hi, brad. yes, sadly, our ranch burned to the ground 2 weeks ago. we got the all clear to go back on thursday. even though it’s a giant scorched hole in the earth, we need to see it. 

we’re fine. animals, horses, children all safe. we were on a motorcycle trip. so, literally have only the clothes on our backs (and some really cool motorcycle helmets). i’ve never had nothing and i’m learning a lot from it.

We are part of an amazing community. Be thankful for what you’ve got and send good karma out in the world. You never know when you’ll need it to come back to you.


Today one of our portfolio companies, LinkSmart, came out of stealth mode and unveiled its product, Total Link Management, along with $4.7M in financing led by Foundry Group. My partner Seth Levine sits on the board and wrote about his view of LinkSmart. The investment was an obvious one for us as it fits squarely in our Adhesive theme, given the deep relationships LinkSmart has with big media and web platforms as a provider of analytics and traffic management software.

The Internet was founded on our ability to link between things. We all distinctly remember the days of the primitive HTML pages we’d surf in our early browsers — gray page backgrounds and purple links. We clicked. And the Internet grew. This blog, your site, and how search engines like Google and Bing rank you and your relevance were all dependent upon links.

Even though the Internet and the web have evolved since the mid-1990s, the most prolific web properties all generally follow the same process: advertiser relationships are created with a publisher. The publisher then runs a never-ending fire drill with its sales, SEO, and yield management teams to fulfill those obligations to its advertisers. More ad units are inserted on pages and marginally better eCPMs are negotiated, yet profit margins continually come under pressure as publishers scramble to generate even more traffic to support the advertising expectations.

LinkSmart has built a system that fixes part of this challenge. The technology LinkSmart’s founder Pete Sheinbaum and his team have assembled is analogous to many of the solutions we’ve seen and invested in for search engine optimization (SEO), such as SEOMoz. However, while SEO directs visitors from the open web – via search engine results – to a publisher’s site, LinkSmart takes those same readers and gives the publisher the means to use hyperlinks to engage visitors more deeply, keep them on site longer, or send them to the properties the publisher wants them to visit.

If it costs a dollar to bring a reader to a site and they bail after the first page, that’s a dollar not optimized, or even wasted. If that same dollar is spent and LinkSmart moves the reader three pages deeper into the site, triggering more revenue or visits to an affiliate or partner site, the cost per acquisition drops dramatically and in many cases the RPM improves geometrically.
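To make that arithmetic concrete, here is a minimal sketch with made-up numbers: the $1 acquisition cost comes from the example above, while the RPM figure and page counts are purely hypothetical.

```python
# Illustrative only: made-up numbers showing why deeper engagement changes the math.
ACQUISITION_COST = 1.00    # dollars spent to bring one reader to the site (from the example above)
RPM = 5.00                 # hypothetical revenue per 1,000 pageviews

def visit_economics(pages_viewed):
    revenue = pages_viewed * RPM / 1000             # revenue from this single visit
    cost_per_pageview = ACQUISITION_COST / pages_viewed
    return revenue, cost_per_pageview

for pages in (1, 4):       # reader bails after one page vs. is drawn three pages deeper
    revenue, cost = visit_economics(pages)
    print(f"{pages} pageview(s): revenue ${revenue:.3f}, effective cost per pageview ${cost:.2f}")
```

With these placeholder numbers, the same acquisition dollar goes from $1.00 per pageview to $0.25, and the revenue per visit quadruples.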

Beyond the link optimization, there’s an enormous amount of valuable data generated by LinkSmart’s system. The data answers questions such as: What are readers clicking on? What is their path while on the site? What is the optimal number of links per 100 words of content to drive the highest click engagement? How many readers are leaving your site in a single click, and where are they going? LinkSmart helps a publisher look deeply at that data to surface many things, including the real-time intent of a reader while on-site.

We are psyched about what LinkSmart is launching today. If you are a publisher, check it out — it’s just the tip of the iceberg.


For starters, Amy and I are fine. The fires are not threatening our house (yet). But all we talked about at dinner last night at Ruby Tuesday was fire.

Colorado is having a horrible fire season. We are only at the end of June and have four massive wildfires going on in the state, including the High Park Fire in Fort Collins and the Waldo Canyon Fire in Colorado Springs. Yesterday afternoon as the Flagstaff Fire began in Boulder, I heard from a friend that she’d lost her house in the High Park Fire. Everything – totally burned to the ground. They were on a trip so they literally have nothing other than the clothes they had with them.

Amy and I are in Keystone so we are 90 miles away from Boulder. We shifted into obsessive monitoring of Twitter hashtag mode around 5pm and eventually wandered downstairs and watched the 5:30 news to see live video of the fire. During this time dozens of emails came in from friends around the country asking if we were ok and offering to help, since Eldorado Canyon (where our home is) was mentioned as one of the risk areas. As of this morning we still seem safe but today will tell the story.

My brother Daniel is in more danger. His house is in Table Mesa and one ridge separates his house and the fire. He lives down the road from NCAR which I expect the country will be hearing about a lot today as the fire is adjacent to the NCAR land and building. I have a hard time believing that the fire could consume NCAR and get over the ridge, but who knows. Either way, Daniel and his family decamped to our condo in Boulder last night so they are safe, but I’m thinking of them constantly.

Amy and I have had to evacuate for fires twice. The first was one that was started on our land in Eldorado Canyon. There is a trail that borders our land and often people come off the trail to sit on a giant rock on our land. That giant rock used to have a bunch of trees near it. One morning in 1999, when I was sitting in Cooley’s office with Mike Platt working on the deal that would become BodyShop.com, I got a frantic call from Amy that said simply “come home right now – our land is on fire.” She had woken up to smoke about a quarter of a mile away from our house. As the sun came up, the fire began to blaze. Ultimately 10 acres burned, we (and all of our neighbors) had to evacuate for the day (with 10 minutes’ notice), and we went through the mental process of “ok – if our house burns to the ground, all our stuff will be gone.” It was ultimately determined that a human – probably someone smoking a cigarette or a joint on the rock – had started the fire the night before. For two days, we had 50 amazing firepeople living around our house protecting it as the fire got within 200 yards. Amazing, amazing people.

The second fire was the Walker Ranch Fire of 2000. This was a much larger fire – ultimately consuming thousands of acres – and resulted in a three day evacuation for us. It stalled one ridge away from Eldorado Canyon – if it had gotten over the last ridge it would have been a disaster. This time it started the weekend of my brother’s wedding, so I spent his entire wedding completely distracted by the slurry bombers flying overhead. This one was much more stressful as it stretched out for days and days.

Last year the Four Mile Canyon fire threatened Boulder and was devastating to many people in the Boulder foothills. This was my partners’ first taste of real “scary fire shit” as both Ryan’s and Jason’s houses were on the edge of the evac zone. Since they both live in downtown Boulder, that’s terrifying to consider – if the fire had ended up in the residential areas of downtown Boulder, it would have been really bad.

Basically, fire completely scares the shit out of me. I’ve read about 30 books on it and find it fascinating: scary, intense, amazing, and complicated. The anxiety that it provokes in me, and many others, is incredible. I’ve now had several friends who have lost all the physical things they had in a fire – they all have similar stories of complete and total disbelief followed by a powerful rebuilding phase.

As the sun comes up in Keystone this morning, it’s another beautiful day in Colorado. You wouldn’t know that 90 miles away an entire city faces a very threatening fire. This is a big planet, and days like this remind me how fragile it all is.

Here’s hoping the awesome firefighters in Boulder get things under control today. I’m sending good karma to all of my Boulder friends. And for everyone who reached out, thank you – you guys are awesome.


Today’s guest post from Chris Moody, the COO of Gnip, follows on the heels of the amazing Big Boulder event that Gnip put on last Thursday and Friday. To get a feel for some of the speakers, take a look at the following blog posts summarizing talks from leaders of Tumblr, Disqus, Facebook, Klout, LinkedIn, StockTwits, GetGlue, Get Satisfaction, and Twitter.

  • Transition at a Massive Scale with Ken Little of Tumblr
  • From Monologue to Dialogue with Daniel Ha and Ro Gupta of Disqus
  • Measuring Engagement on Facebook with Sean Bruich
  • Measuring Influence Online with Joe Fernandez and Matt Thomson of Klout
  • Data Science at LinkedIn with Yael Garten
  • Industry-Focused Social Networks with Howard Lindzon of StockTwits
  • Distributed vs. Centralized Conversations with Jesse Burros of GetGlue 
  • Engaging with Customers Online with Wendy Lea of Get Satisfaction
  • Creating the Social Data Ecosystem with Ryan Sarver and Doug Williams of Twitter 

The event was fantastic, but Chris sent out a powerful email to everyone at Gnip on Saturday that basically said “awesome job on Big Boulder – our work is just beginning.” For a more detailed version, and some thoughts on why The Work Begins When The Milestone Ends, I now hand off the keyboard to Chris.

We’ve just finished up Big Boulder, the first ever conference dedicated to social data.   By all accounts, the attendees and the presenters had a great experience. The Gnip team is flying high from all the exciting conversations and the positive feedback.   After countless hours of planning, hard work, and sleepless nights, it is very tempting to kick back and relax. There is a strong natural pull to get back into a normal workflow. But, we can’t relax and we won’t.  Here’s why.

As a company it is important to recognize the difference between a milestone and a meaningful business result.  Although it took us almost nine months to plan the event, Big Boulder is really just a milestone.   In this particular case, it is actually an early milestone.    The real results will likely begin months from now.   All too often startups confuse milestones for results.   This mistake can be deadly.

Milestones Are Not Results

Milestones represent progress towards a business result.  Examples of milestones that are commonly mistaken for results include:

Getting Funded. Having someone make an early investment in your company is positive affirmation that at least one person (and perhaps many) believes in what you are trying to accomplish. But the results will come based upon how effectively you spend the money – building your team, your product, etc. Chris Sacca has tweeted a few times that he doesn’t understand why startups ever announce funding. Although I haven’t heard him explain his tweets, I assume he is making the point that funding isn’t a meaningful business result, so it doesn’t make sense to announce the news to the world.

Signing a partnership. Getting a strategic partnership deal signed can take lots of hard work and months or years to accomplish. Once a partnership deal is finally signed, a big announcement usually follows. The team may celebrate because all the hard work has finally paid off. But the obvious mistake is thinking the hard work has paid off. Getting the deal signed is a major milestone, but the results will likely be based upon the amount of effort your team puts into the partnership after the deal is signed. I’ve never experienced a successful partnership that just worked after the deal was signed. Partnerships typically take a tremendous amount of ongoing work in order to get meaningful results.

Releasing a new feature. Your team has worked many late nights getting a new killer feature into the product. You finally get the release out the door and a nice article runs in TechCrunch the next day. The resulting coverage leads to your highest site traffic in a year. But have you really accomplished any business results yet? Often the results will come after lots of customer education, usage analysis, or feature iterations. If no customers use the new feature, have you really accomplished anything?

Is it okay to celebrate milestones?  Absolutely! Blow off steam for a half-day or a long celebratory night.  Take the time to recognize the team’s efforts and to thank them for their hard work.   But, also use that moment to remind everyone that the true benefits will happen based upon what you do next.

Results Increase Value

Unlike milestones, results have a direct impact on the value of the company.  Results also vary dramatically based upon different business models.   Examples of common results include: increasing monthly recurring revenue, decreasing customer turnover, lowering cost of goods sold (increasing gross margin).

Announcing a new feature is a milestone because, by itself, it adds no value to the company. On the other hand, having customers actually adopt a new feature might increase customer retention, which could be a meaningful business result.

The Work Begins When X Ends

When I worked at Aquent, there was a point in time when we were doing lots of tradeshows. We noticed a pattern of team members taking months to prepare for an event and then returning from the tradeshow declaring the event a success.   They would put a stack of business cards on their desk and spend the next several weeks digging out from the backlog of normal work stuff.  The business cards would begin to collect dust and the hot leads from the show would eventually become too cold to be useful.

In order to avoid this phenomenon, someone coined the expression “the work begins when the tradeshow ends”. This simple statement had a big impact on the way I think about milestones versus results. Since that time, I’ve used the concept of this phrase hundreds of times to remind my team and myself that a particular milestone isn’t a result. You can substitute whatever milestone your team has recently achieved for the word “tradeshow” to help maintain focus.

The most recent example?  The work begins when Big Boulder ends.


Deal Co-op, the company that powers Brad Feld’s Amazing Deals, released their software last week as a self serve, SaaS product. This means anyone reading this post can go to dealcoop.com and create their own deal store like I did with Brad Feld’s Amazing Deals. This type of software works great for bloggers, entrepreneurs, publishers – basically anyone with an existing online presence looking to monetize their audience.

I was a lead mentor for Deal Co-op during TechStars Seattle 2010. Back then, the founders (and brothers) Nate and Mike Schmidt had already created great white label software that powers daily deal and group buying sites. Now that their software is self serve, you can hop on their site, sign up, and start designing your store with their Interactive Store Designer tool. It’s fun and you can build a new business in seconds. On average, I make $555 for each deal I post and blog about. Not bad!

Nate and Mike have spent a bunch of time thinking about the long term sustainability of daily deals and group buying. Brad Feld’s Amazing Deals is a good example of Deal Co-op’s vision for the industry. Relevant offers to a targeted audience are the key – and you don’t have to make offers every day to be successful. To explain this with real numbers, they  just put up an excellent blog post discussing the history of Brad Feld’s Amazing Deals.

But that’s not all. Don’t miss our deal this week, Learn How to Build iPhone Apps! Check it out – a $49 course that is normally $197. Yet another amazing deal brought to you by your favorite huckster.


Last night Amy and I watched the first episode of Aaron Sorkin’s new TV show The Newsroom. It started out strong but by about 30 minutes in I said to Amy “this isn’t going to last for us – this is Sports Night, but less interesting.” By the end I realized Sorkin was simply following “The Formula” which many people, both creatives and professionals, fall into. I’ll explain in a bit, but first some play by play analysis (to mix metaphors).

We loved Sports Night. I’m the sports widow in this family – I don’t really care about or watch much professional sports. But we watched Sports Night on DVD from beginning to end around 2002. I remember watching five or six episodes at night at some point. We literally couldn’t stop and just raced through it. We were already into The West Wing by then and felt like we’d discovered a special, magic window into Sorkin’s brain – a parallel universe to the brilliance that was the first few seasons of The West Wing. But even faster paced, punchier, rougher, less polished, and less serious.

Isaac, Dana, Casey, Dan, Natalie, and Jeremy became new friends. We loved them – flaws and all. The dramatic tension existed in every thirty minute (well – 22 minute to allow for commercials) show. We could watch three episodes in an hour. Six in two. Awesomeness.

Thirty minutes into The Newsroom I had already recognized Charley as Isaac, Will as Casey, Mackenzie as Dana, Jim as Natalie, and Maggie as Jeremy. Only Dan was missing. The supporting characters in the newsroom all looked familiar and as non-memorable as the ones in Sports Night. There were a few gender change-ups, but not many, and the obvious romantic / sexual relationship between Will and Mackenzie (Casey and Dana) and the pending Jim and Maggie (Natalie and Jeremy) were front and center.

I won’t bother watching episode two. I’ll let The Newsroom run its course for the first season and if it gets great reviews go back and watch it later. I’m bummed because I was hoping it would feel like another West Wing to me rather than Studio 60. We’ve been looking for a new TV series to watch since we burned out on Mad Men – I guess it won’t be this one.

Back to The Formula. I got an email from an entrepreneur on Saturday. He described his new business in the words of his last successful business, which exited in 2000. I have no idea what he’s done between 2000 and 2012 – he didn’t go into it, but he used his 1996 – 2000 experience to explain why his new business was going to be great. While the context was different, the business was different, the environment was different, and the technology was different, The Formula was the same.

Big companies love The Formula. They keep doing the same things over and over again until they don’t work anymore. Suddenly, when they don’t work, they either go through radical transformation, upheaval, or disruption. In some cases, like IBM in the early 1990s, they have a near death experience before re-emerging as something completely different. In other cases, like Novell, they just quietly disappear.

VCs use The Formula constantly. I’ve sat through thousands of board meetings where I hear the equivalent of “in 1985 we did blah blah blah and you should also.” Or, “sales works this way – you need to be getting $X per rep for direct – it has always worked this way.” I could give an endless list of examples of this. It’s one of the challenges with VCs, especially ones who had some success, drifted for a while, and then rediscovered “The Formula” as the path to being successful again. Sometimes it works, sometimes it doesn’t.

The Formula works for a while. Eventually it gets stale. If you go back 30 years, you’ll see The Formula hard at work in the sitcoms of my childhood. Happy Days. Laverne and Shirley. Three’s Company. Try to sit through two hours of these shows – you’ll pluck your eyeballs out with a tweezer. They are campy, fun and nostalgic for 15 minutes and then mindblowingly dull.

If you are an entrepreneur, recognize that The Formula is hard at work all around you. Many people – your investors, your partners, your competitors – are simply using a newer version of The Formula they used for the last 20 years. Don’t be afraid to completely blow it up – it worked in the past but people are attracted to new things, inspiring things, things that challenge the way they think. Inspire – don’t fall back on “it’s always worked this way.”

Don’t ignore The Formula. When it’s working, it’s awesome. But remember that it doesn’t work forever.

C’mon Sorkin – inspire us!


Holy cannoli! That’s what I shouted out loud (startling Amy and the dogs, who were lying peacefully next to me on the couch last night) about 100 pages into William Hertling’s second book A.I. Apocalypse. By this point I had figured out where things were going to go over the next 100 pages, although I had no idea how it was going to end. The computer virus hacked together by a teenager had become fully sentient, completely distributed, and had formed tribes that now had trading patterns, a society, and a will to live. All in a parallel universe to humans, who were now trying to figure out how to deal with them, with responses ranging from shutting them off to negotiating with them, all with the help of ELOPe, the first AI, who was accidentally created a dozen years earlier and was now working with his creator to suppress the creation of any other AI.

Never mind – just go read the book. But read Avogadro Corp: The Singularity Is Closer Than It Appears first as they are a series. And if you want more of a taste of Hertling, make sure you read his guest post from Friday titled How To Predict The Future.

When I was a teenager, I obsessively read everything I could get my hands on by Isaac Asimov, Ray Bradbury, and Robert Heinlein. In college, it was Bruce Sterling, William Gibson, and Neal Stephenson. Today it’s Daniel Suarez and William Hertling. Suarez and Hertling are geniuses at what I call “near-term science fiction” and required reading for any entrepreneur or innovator around computers, software, or the Internet. And everyone else, if you want to have a sense of what the future with our machines is going to be like.

I have a deeply held belief that the machines have already taken over and are just waiting for us to catch up with them. In my lifetime (assuming I live at least another 30 years) I expect we will face many societal crises around the intersection of man and machine. I’m fundamentally an optimist about this and how it evolves and resolves, but I believe the only way you can be prepared for it is to understand many different scenarios. In Avogadro Corp and A.I. Apocalypse, Hertling creates two amazingly important situations and foreshadows a new one in his upcoming third book.


Today’s post is a guest post from William Hertling, author of the award-winning Avogadro Corp: The Singularity Is Closer Than It Appears and A.I. Apocalypse, near-term science-fiction novels about realistic ways strong AI might emerge. They’ve been called “frighteningly plausible”, “tremendous”, and “thought-provoking”. By day he works on web and social media for HP. Follow him on twitter at @hertling or visit his blog williamhertling.com.

I’m a huge fan of William and his writing as you can see from my review of his book Avogadro Corp. So when William offered to write a guest post on how to predict the future, I enthusiastically said yes. Take a look – and take your time.

Pretty much everyone would like a sure-fire way to predict the future. Maybe you’re thinking about startups to invest in, or making decisions about where to place resources in your company. Maybe you just care about what things will be like in 10, 20, or 30 years.

There are many techniques to think logically about the future, to inspire idea creation, and to predict when future inventions will occur.

I’d like to share one technique that I’ve used successfully. It’s proven accurate on many occasions. And it’s the same technique that I’ve used, as a writer, to create realistic technothrillers set in the near future. I’m going to start by going back to 1994.

Predicting Streaming Video and the Birth of the Spreadsheet

There seem to be two schools of thought on how to predict the future of information technology: looking at software or looking at hardware. I believe that looking at hardware curves is always simpler and more accurate.

This is the story of a spreadsheet I’ve been keeping for almost twenty years.

In the mid-1990s, a good friend of mine, Gene Kim (founder of Tripwire and author of When IT Fails: A Business Novel) and I were in graduate school together in the Computer Science program at the University of Arizona. A big technical challenge we studied was piping streaming video over networks. It was difficult because we had limited bandwidth to send the bits through, and limited processing power to compress and decompress the video. We needed improvements in video compression and in TCP/IP – the underlying protocol that essentially runs the Internet.

The funny thing was that no matter how many incremental improvements we made (there were dozens of people working on different angles of this), streaming video always seemed to be just around the corner. I heard “Next year will be the year for video” or similar refrains many times over the course of several years. Yet it never happened.

Around this time I started a spreadsheet, seeding it with all of the computers I’d owned over the years. I included their processing power, the size of their hard drives, the amount of RAM they had, and their modem speed. I calculated the average annual increase of each of these attributes, and then plotted these forward in time.

I looked at the future predictions for “modem speed” (as I called it back then; today we’d call it internet connection speed or bandwidth). By this time, I was tired of hearing that streaming video was just around the corner, so I decided to forget about trying to predict advancements in software compression and just look at the hardware trend. The hardware trend showed that internet connection speeds were increasing, and by 2005 the speed of the connection would be sufficient to reasonably stream video in real time without resorting to heroic amounts of video compression or miracles in internet protocols. Gene Kim laughed at my prediction.

Nine years later, in February 2005, YouTube arrived. Streaming video had finally made it.

The same spreadsheet also predicted we’d see a music downloading service in 1999 or 2000. Napster arrived in June, 1999.

The data has held up surprisingly well over the long term. Using just two data points, the modem I had in 1986 and the modem I had in 1998, the spreadsheet predicted that I’d have a 25 megabit/second connection in 2012. As I currently have a 30 megabit/second connection, this is a very accurate 15-year prediction.

Why It Works Part One: Linear vs. Non-Linear

Without really understanding the concept at the time, what I was doing was using linear trends (advancements that proceed smoothly over time) to predict the timing of non-linear events (technology disruptions) by calculating when the underlying hardware would enable a breakthrough. This is what I mean by “forget about trying to predict advancements in software and just look at the hardware trend”.

It’s still necessary to imagine the future development (although the trends can help inspire ideas). What this technique does is let you map an idea to the underlying requirements to figure out when it will happen.

For example, it answers questions like these:

– When will the last magnetic platter hard drive be manufactured? 2016. I plotted the growth in capacity of magnetic platter hard drives and flash drives back in 2006 or so, and saw that flash would overtake magnetic media in 2016.

– When will a general purpose computer be small enough to be implanted inside your brain? 2030. Based on the continual shrinking of computers, by 2030 an entire computer will be the size of a pencil eraser, which would be easy to implant.

– When will a general purpose computer be able to simulate human level intelligence? Between 2024 and 2050, depending on which estimate of the complexity of human intelligence is selected, and the number of computers used to simulate it.

Wait a second: human-level artificial intelligence by 2024? Gene Kim would laugh at this. Isn’t AI a really challenging field? Haven’t people been predicting artificial intelligence would be just around the corner for forty years?

Why It Works Part Two: Crowdsourcing

At my panel on the future of artificial intelligence at SXSW, one of my co-panelists objected to the notion that exponential growth in computer power was, by itself, all that was necessary to develop human level intelligence in computers. There are very difficult problems to solve in artificial intelligence, he said, and each of those problems requires effort by very talented researchers.

I don’t disagree, but the world is a big place full of talented people. Open source and crowdsourcing principles are well understood: When you get enough talented people working on a problem, especially in an open way, progress comes quickly.

I wrote an article for the IEEE Spectrum called The Future of Robotics and Artificial Intelligence is Open. In it, I examine how the hobbyist community is now building inexpensive unmanned aerial vehicle auto-pilot hardware and software. What once cost $20,000 and was produced by skilled researchers in a lab, now costs $500 and is produced by hobbyists working part-time.

Once the hardware is capable enough, the invention is enabled. Before this point, it can’t be done.  You can’t have a motor vehicle without a motor, for example.

As the capable hardware becomes widely available, the invention becomes inevitable, because it enters the realm of crowdsourcing: now hundreds or thousands of people can contribute to it. When enough people had enough bandwidth for sharing music, it was inevitable that someone, somewhere was going to invent online music sharing. Napster just happened to have been first.

IBM’s Watson, which won Jeopardy, was built using three million dollars in hardware and had 2,880 processing cores. When that same amount of computer power is available in our personal computers (about 2025), we won’t just have a team of researchers at IBM playing with advanced AI. We’ll have hundreds of thousands of AI enthusiasts around the world contributing to an open source equivalent to Watson. Then AI will really take off.

(If you doubt that many people are interested, recall that more than 100,000 people registered for Stanford’s free course on AI and a similar number registered for the machine learning / Google self-driving car class.)
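As a rough back-of-the-envelope sketch of the timing above: the $1,500 personal-computer budget and the 18-month price-performance doubling period below are illustrative assumptions, not figures from the post.

```python
import math

# Back-of-the-envelope: when does Watson-class compute ($3M of 2011 hardware)
# fit a personal-computer budget? The PC budget and the price-performance
# doubling period are assumptions for illustration only.
WATSON_HARDWARE_COST = 3_000_000    # dollars, 2011 (from the post)
PC_BUDGET = 1_500                   # dollars, assumed typical personal computer
DOUBLING_YEARS = 1.5                # assumed price-performance doubling period

cost_ratio = WATSON_HARDWARE_COST / PC_BUDGET        # ~2,000x improvement needed
years_needed = math.log2(cost_ratio) * DOUBLING_YEARS
print(f"~{years_needed:.0f} years after 2011 -> around {2011 + years_needed:.0f}")
# Lands in the mid-to-late 2020s; the answer is sensitive to the assumed doubling period.
```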

Of course, this technique doesn’t work for every class of innovation. Wikipedia was a tremendous invention in the process of knowledge curation, and it was dependent, in turn, on the invention of wikis. But it’s hard to say, even with hindsight, that we could have predicted Wikipedia, let alone forecast when it would occur.

(If one had the idea of a crowd-curated online knowledge system, you could apply the litmus test of internet connection rate to assess when there would be a viable number of contributors and users. A documentation system such as a wiki is useless without any way to access it. But I digress…)

Objection, Your Honor

A common objection is that these trends won’t continue to increase exponentially because we’ll run into a fundamental limitation: e.g., for computer processing speeds, we’ll run into the manufacturing limits for silicon, or the heat dissipation limit, or the signal propagation limit, etc.

I remember first reading statements like the above in the mid-1980s about the Intel 80386 processor. I think the statement was that they were using an 800 nm process for manufacturing the chips, but they were about to run into a fundamental limit and wouldn’t be able to go much smaller. (Smaller equals faster in processor technology.)

But manufacturing technology has proceeded to get smaller and smaller.  Limits are overcome, worked around, or solved by switching technology. For a long time, increases in processing power were due, in large part, to increases in clock speed. As that approach started to run into limits, we’ve added parallelism to achieve speed increases, using more processing cores and more execution threads per core. In the future, we may have graphene processors or quantum processors, but whatever the underlying technology is, it’s likely to continue to increase in speed at roughly the same rate.

Why Predicting The Future Is Useful: Predicting and Checking

There are two ways I like to use this technique. The first is as a seed for brainstorming. By projecting out linear trends and having a solid understanding of where technology is going, it frees up creativity to generate ideas about what could happen with that technology.

It never occurred to me, for example, to think seriously about neural implant technology until I was looking at the physical size trend chart, and realized that neural implants would be feasible in the near future. And if they are technically feasible, then they are essentially inevitable.

What OS will they run? From what app store will I get my neural apps? Who will sell the advertising space in our brains? What else can we do with uber-powerful computers about the size of a penny?

The second way I like to use this technique is to check other people’s assertions. There’s a company called Lifenaut that is archiving data about people to provide a life-after-death personality simulation. It’s a wonderfully compelling idea, but it’s a little like video streaming in 1994: the hardware simply isn’t there yet. If the earliest we’re likely to see human-level AI is 2024, and even that would be on a cluster of 1,000+ computers, then it seems impossible that Lifenaut will be able to provide realistic personality simulation anytime before that.* On the other hand, if they have the commitment needed to keep working on this project for fifteen years, they may be excellently positioned when the necessary horsepower is available.

At a recent Science Fiction Science Fact panel, other panelists and most of the audience believed that strong AI was fifty years off, and brain augmentation technology was a hundred years away. That’s so distant in time that the ideas then become things we don’t need to think about. That seems a bit dangerous.

* The counter-argument frequently offered is “we’ll implement it in software more efficiently than nature implements it in a brain.” Sorry, but I’ll bet on millions of years of evolution.

How To Do It

This article is How To Predict The Future, so now we’ve reached the how-to part. I’m going to show some spreadsheet calculations and formulas, but I promise they are fairly simple. There are three parts to the process: calculate the annual increase in a technology trend, forecast the linear trend out, and then map future disruptions to the trend.

Step 1: Calculate the annual increase

It turns out that you can do this with just two data points, and it’s pretty reliable. Here’s an example using two personal computers, one from 1996 and one from 2011. You can see that cell B7 shows that computer processing power, in MIPS (millions of instructions per second), grew at a rate of 1.47x each year, over those 15 years.
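If you’d rather do this in code than in a spreadsheet, the Step 1 calculation is just a compound annual growth rate. Here’s a minimal sketch; the MIPS values are illustrative stand-ins chosen to roughly reproduce the 1.47x figure, not the data from the original spreadsheet.

```python
# Step 1: average annual increase from just two data points.
# The MIPS values below are illustrative stand-ins, not the original spreadsheet's data.
year_1, mips_1 = 1996, 300        # hypothetical mid-1990s desktop
year_2, mips_2 = 2011, 97_000     # hypothetical 2011 desktop

annual_increase = (mips_2 / mips_1) ** (1 / (year_2 - year_1))
print(f"Average annual increase: {annual_increase:.2f}x")   # ~1.47x per year
```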

 

I like to use data related to technology I have, rather than technology that’s limited to researchers in labs somewhere. Sure, there are supercomputers that are vastly more powerful than a personal computer, but I don’t have those, and more importantly, they aren’t open to crowdsourcing techniques.

I also like to calculate these figures myself, even though you can research similar data on the web. That’s because the same basic principle can be applied to many different characteristics.

Step 2: Forecast the linear trend

The second step is to take the technology trend and predict it out over time. In this case we take the annual increase in advancement (B$7 – previous screenshot), raised to an exponent of the number of elapsed years, and multiply it by the base level (B$11). The formula displayed in cell C12 is the key one.

I also like to use a sanity check to ensure that what appears to be a trend really is one. The trick is to pick two data points in the past: one is as far back as you have good data for, the other is halfway to the current point in time. Then run the forecast to see if the prediction for the current time is pretty close. In the bandwidth example, picking a point in 1986 and a point in 1998 exactly predicts the bandwidth I have in 2012. That’s the ideal case.
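In code rather than a spreadsheet, the forecast formula and the sanity check might look something like this, continuing the illustrative MIPS numbers from the Step 1 sketch.

```python
# Step 2: project the trend forward -- the code equivalent of the spreadsheet
# formula: base level * (annual increase ^ elapsed years).
def forecast(base_value, base_year, annual_increase, target_year):
    return base_value * annual_increase ** (target_year - base_year)

# Continuing the illustrative MIPS numbers from the Step 1 sketch:
print(f"{forecast(300, 1996, 1.47, 2020):,.0f} MIPS projected for 2020")

# Sanity check, as described above: fit the rate on the oldest point and a midpoint,
# then confirm the forecast for today is close to what you actually have.
def sanity_check(old, mid, today):
    (y_old, v_old), (y_mid, v_mid), (y_now, v_now) = old, mid, today
    rate = (v_mid / v_old) ** (1 / (y_mid - y_old))
    return forecast(v_old, y_old, rate, y_now), v_now   # (predicted, actual)
```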

Step 3: Mapping non-linear events to linear trend

The final step is to map disruptions to enabling technology. In the case of the streaming video example, I knew that a minimal quality video signal was composed of a resolution of 320 pixels wide by 200 pixels high by 16 frames per second with a minimum of 1 byte per pixel. I assumed an achievable amount for video compression: a compressed video signal would be 20% of the uncompressed size (a 5x reduction). The underlying requirement based on those assumptions was an available bandwidth of about 1.6mb/sec, which we would hit in 2005.
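Here’s a sketch of that mapping using the video parameters above; the crossing-year helper assumes you already have a bandwidth trend fitted in Steps 1 and 2.

```python
# Step 3: translate the disruption into a hardware requirement, then read the
# trend line to see when that requirement is met.
WIDTH, HEIGHT = 320, 200       # pixels, minimal-quality video (from the example above)
FPS = 16                       # frames per second
BYTES_PER_PIXEL = 1
COMPRESSION = 0.20             # compressed stream ~20% of raw (a 5x reduction)

required_bps = WIDTH * HEIGHT * FPS * BYTES_PER_PIXEL * 8 * COMPRESSION
print(f"Required bandwidth: ~{required_bps / 1e6:.1f} Mbit/s")   # ~1.6 Mbit/s

# Given a bandwidth trend from Steps 1 and 2, the predicted year is simply the first
# year whose forecast exceeds the requirement (2005 in the original spreadsheet).
def first_year_meeting(requirement, base_value, base_year, annual_increase):
    year, value = base_year, base_value
    while value < requirement:
        year += 1
        value *= annual_increase
    return year
```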

In the case of implantable computers, I assume that a computer the size of a pencil eraser (a 1/4” cube) could easily be inserted into a human’s skull. Looking at the physical size of computers over time, we’ll hit this by 2030:

 

This is a tricky prediction: traditional desktop computers have tended to be big square boxes constrained by the standardized form factor of components such as hard drives, optical drives, and power supplies. I chose to use computers I owned that were designed for compactness for their time. Also, I chose a 1996 Toshiba Portege 300CT for a sanity check: if I project the trend between the Apple //e and the Portege forward, my Droid should be about 1 cubic inch, not 6. So this is not an ideal prediction to make, but it still clues us in on the general direction and timing.

The predictions for human-level AI are more straightforward, but more difficult to display, because there’s a range of assumptions for how difficult it will be to simulate human intelligence, and a range of projections depending on how many computers you can bring to bear on the problem. Combining three factors (time, brain complexity, available computers) doesn’t make a nice 2-axis graph, but I have made the full human-level AI spreadsheet available to explore.

I’ll leave you with a reminder of a few important caveats:

Not everything in life is subject to exponential improvements.

Some trends, even those that appear to be consistent over time, will run into limits. For example, it’s clear that the rate of settling new land in the 1800s (a trend that was increasing over time) couldn’t continue indefinitely since land is finite. But it’s necessary to distinguish genuine hard limits (e.g. amount of land left to be settled) from the appearance of limits (e.g. manufacturing limits for computer processors).

Some trends run into negative feedback loops. In the late 1890s, when all forms of personal and cargo transport depended on horses, there was a horse manure crisis. (Read Gotham: The History of New York City to 1898.) Had one plotted the trend over time, it would have looked as if cities like New York were soon going to be buried under horse manure. Of course, that’s a negative feedback loop: if the horse manure had kept growing, at a certain point people would have left the city. As it turns out, the automobile solved the problem and enabled cities to keep growing.

So please keep in mind that this is a technique that works for a subset of technology, and it’s always necessary to apply common sense. I’ve used it only for information technology predictions, but I’d be interested in hearing about other applications.