Hi, I’m Brad Feld, a managing director at the Foundry Group who lives in Boulder, Colorado. I invest in software and Internet companies around the US, run marathons and read a lot.


I Want More Information, Not Less


I spent the last two days at the Defrag Conference.  It was awesome on so many levels including the content, the venue, seeing a bunch of great friends, and meeting a bunch of new people.

The conference originated out of an email exchange that Eric Norlin (the amazing guy who puts on the Defrag and Glue Conferences with his even more amazing wife Kim) and I had as a result of a series of blog posts that I wrote in 2006 starting with There Is A Major Software Innovation Wave Coming and Intelligence Amplification.

Over the past three years there has been an incredible amount of innovation around this theme (which we originally called the Implicit Web).  While lots of it is still messy, sloppy, or ineffective, that’s just part of the innovation cycle.  A consistent discussion point at Defrag was “how to deal with this overwhelming amount of information” – there is no debate about the (a) amount, (b) need to deal with it, or (c) value of dealing with it.  However, a lot of the subtext was that there was too much information and we needed better ways to deal with it.

I agree with the conclusion, but not the premise.  I don’t think there is too much information.  I want more.  More, more, more, more, more.  MORE.  I don’t want to stop until I have all the information.  MORE!  You can’t give me too much information!

I don’t believe the issue is too much information.  This is an independent variable that we can’t control.  For the foreseeable future, there will be a continuous and rapid increase of information as more of the world gets digitized, more individuals become content creators, more systems open up and provide access to their data, and more infrastructure for creating, storing, and transmitting information (and data) gets built.

Yeah – I know – that’s obvious.  But there are a few ways to approach it.  My preferred approach is to accept the thing you can’t control (more information) and drastically improve the methods for consuming it.  I spent the better part of two days having this thought over and over again.

By the end of the second day, I’d decided that my original premise was correct – there continues to be a huge innovation wave in software that addresses this.  And we are just starting to deal with it.  And while software is at the core of it, we’ve learned an enormous amount in the past few years about the power of people to help curate it, both directly (by doing things to it) and indirectly (by software interpreting the broad signals of what a large number of people are doing to it).

The user interfaces – and user interaction model – for all of this stuff still sucks rocks.  And I love things that suck, because that creates huge opportunities for innovation.

The Web Is Working Harder For Me


A few weeks ago I wrote a post titled The Maturing of the Implicit Web.  In it I talked about new releases from AdaptiveBlue and OneRiot.  As I sit here in my hotel room in Seattle waiting for TA McCann (CEO of Gist) to show up for our pre-board meeting run, I’m pondering how much work I’m starting to get the web to do for me.

For example, as part of my morning information routine, I go through my Gist dashboard.  This is a list of all the new information that Gist has found from a wide variety of data sources about people and companies in my social network.  It derives the social network from my email inbox, integrates it with my Facebook, LinkedIn, and Twitter social graphs, and then presents it to me in a way that is prioritized by what it thinks I find most interesting.  The level of relevance to me is amazing now that I’ve had it running for a few months.  While Gist is still in closed beta, if you want an invite just email me.
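Gist doesn’t publish how its ranking works, but the core idea – prioritize news by how much you actually interact with each contact – can be sketched in a few lines.  Everything here (the function names, the email-frequency heuristic, the example addresses) is a hypothetical illustration, not Gist’s actual algorithm:

```python
from collections import Counter

def rank_news(items, sent_emails):
    """Rank news items by how often the user emails each contact.

    items: list of (contact_email, headline) tuples.
    sent_emails: list of recipient addresses pulled from the user's outbox.
    """
    # More emails sent to a contact => that contact is more relevant.
    weight = Counter(sent_emails)
    return sorted(items, key=lambda item: weight[item[0]], reverse=True)

news = [("rarely@example.com", "changed jobs"),
        ("often@example.com", "was quoted in the press")]
outbox = ["often@example.com"] * 5 + ["rarely@example.com"]

for contact, headline in rank_news(news, outbox):
    print(contact, headline)
```

A real system would blend many more signals (shared meetings, reply latency, social graph overlap), but the shape is the same: derive weights implicitly from behavior, then sort.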

Gist is synthesizing the data for me from a variety of other web services.  At our board meeting today we have a long list of potential partners and data services that we prioritize based on (a) quality of data, (b) availability of data, and (c) ability to integrate the data.  Exactly one year ago I wrote a post titled No API?  You Suck!  I feel so strongly about this that if I wrote the post again today I would title it “No API?  You Really Suck!”

One of the data sources that has a strong API layer is our company Gnip.  They recently announced that they will be integrating data from the Facebook Platform into their data set.  This comes on the heels of their announcement about adding WordPress as a data publisher to their system.  Gnip now has over 30 data publishers actively flowing through their system and has found rapid adoption among a number of interested customers.  Oh – and Gnip’s API is well documented, public, and evolving rapidly.

So – it was with great pleasure when I saw Alex Iskold’s announcement that there is now a Glue API. There is a tremendous amount of interesting semantic data in AdaptiveBlue’s Glue system – the API liberates it for anyone that wants to put some energy into working with it.

But wait, there’s more.  OneRiot also just released their API, which – while not public yet – is available by request.  OneRiot has a fascinating set of real time data available via a search interface that gets better and more relevant every week.  They’ve also demonstrated that they can build to search scale, as they have some superb technical search folks on their team.

Gang – thanks for not sucking.  Y’all are helping set things up so the web does more of the work for me!

The Maturing of the Implicit Web


I’ve been fascinated with the notion of the Implicit Web since I determined that I was tired of my computer (and the Internet in general) being stupid.  I wanted it (my computer as well as the Internet) to pay attention to what I, and others, were doing.  Theoretically “my compute infrastructure” should learn, automate repeated tasks, figure out what information I actually want, and make sure I get it when I want it.

In 20 years, I expect we will snicker at the idea of having to go search for information by typing a few words into a text box on the screen.  It’s way better than 20 years ago, but when you step back and think about it, it’s pretty lame.  I mean, I’ve got this incredible computer on my desk, a gazillion servers in the cloud, this awesome social network, yet I find myself typing the same stuff into little boxes over and over again.  Ok – it’s all pretty incredible given that it wasn’t so long ago that people had to rub sticks together to get fire, but can’t it be amazing and lame at the same time?

Several companies I’ve got a personal investment in that play in and around the Implicit Web recently came out with new releases that I’m pretty excited about; each addresses different problems, but does so in elegant and clever ways.

The first – OneRiot – came out with a new twist on using Twitter for search.  OneRiot’s goal is to provide a search engine for the real time web.  To that end, they’ve historically gotten their data on what people are looking at from a collection of browser-based sensors (anonymous, opt-in only).  They’ve built a unique search infrastructure that takes into account a variety of factors, including the number of people on a specific URL in a particular time period, the freshness of the content, and typical content weighting algorithms.  A little while ago they realized that people were tweeting a huge number of URLs, mostly via URL shorteners (which are loathed by some very smart people).  Twitter search addresses keywords in the tweet, but it doesn’t do anything with the URLs, especially the shortened ones.  So, OneRiot built a pre-processor that grabs tweets from Twitter’s API that include a URL, tosses the shortened URL into OneRiot’s search corpus (which expands the URL and indexes the full page text), and then references it back to the original tweet.  It also correlates all tweets with the same URL (including re-tweets) across any URL shortening service.  Now, imagine incorporating any real time URL data source that has an API, such as Digg.  Aha!  It’s alpha, so forgive it if it breaks – but give it a try.
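The expand-and-correlate step of a pre-processor like that is easy to see in code.  Here’s a minimal sketch; the shortener hosts, the redirect table, and the function names are all made up for illustration, and a real implementation would follow the HTTP redirects live rather than consult a lookup table:

```python
from collections import defaultdict
from urllib.parse import urlparse

# Hypothetical table of known shortener redirects.  In practice the
# pre-processor would issue an HTTP request and follow the redirect chain.
REDIRECTS = {
    "http://tin.example/abc": "http://example.com/big-story",
    "http://sho.example/xyz": "http://example.com/big-story",
}

SHORTENER_HOSTS = {"tin.example", "sho.example"}

def expand(url):
    """Expand a shortened URL to its destination; pass other URLs through."""
    if urlparse(url).netloc in SHORTENER_HOSTS:
        return REDIRECTS.get(url, url)
    return url

def correlate(tweets):
    """Group tweet ids by expanded destination URL.

    tweets: list of (tweet_id, url) pairs.  Tweets that shortened the same
    page through different services end up under one key.
    """
    index = defaultdict(list)
    for tweet_id, url in tweets:
        index[expand(url)].append(tweet_id)
    return index
```

Running `correlate([(1, "http://tin.example/abc"), (2, "http://sho.example/xyz")])` groups both tweets under the same full URL – which is exactly the property that lets re-tweets through different shorteners count toward one page.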

The second – AdaptiveBlue – has released their newest version of Glue.  Glue is a contextual network that uses semantic technology to automatically connect people around everyday things such as books, music, movies, stars, artists, stocks, wine, and restaurants.  It uses a browser-based plugin to build this contextual network implicitly.  When you are on a site such as Amazon, Netflix, Yahoo! Finance, or Citysearch, the Glue bar automatically appears when it recognizes an appropriate object, categorizes it, and lets you take specific action on it if you want.  Glue has been evolving nicely and now includes the idea of connected conversations between friends (e.g. talk about whatever you like regardless of the site you are visiting), smart recommendations (e.g. implicit recommendations), and web wide top lists of the aggregated activity of all Glue users.

In addition, we’ve finally found a company that we think is attacking a wide swath of the problem of the Implicit Web the correct way, at least given today’s technology. We hope to close the investment and start talking publicly about it early next month. 

For now, I expect the applications around the Implicit Web to continue to fall into the early adopter / you need to see it to believe it category (where it’s harder to explain than just to show).  In the near term, if you are interested in this area, try out OneRiot and Glue – they are both evolving and maturing very nicely.

I Need A News Feed For My News Feeds


Josh Kopelman doesn’t blog that frequently, but almost all of his posts are worth reading carefully.  His latest post – Feed Frenzy – is great.  Josh is facing the "multiple news feed problem" as he joins more and more services that publish a news feed.  He takes on the notification side of the equation – the opposite of what FriendFeed and SocialThing do.

All of the social network sites continue to use email as a notification mechanism.  When something happens on the social network that pertains to you (including messages), you get an email.  Anyone that has a meaningful volume of social network activity quickly learns how to turn these notifications off.  This defeats part of the real time value of social networks – now I have to go check and see what’s going on to see if anything relevant to me has happened.

As the "too much email" meme continues to circulate, someone is going to realize that one of the drivers of it is the endless notification cycle and the least common denominator – namely email – that is the mechanism for the notifications.

The solution – as Josh points out – is analogous to SNMP and network operations.  Josh wants an SNMP enabled dashboard for all his news feeds.  Aggregate everything into one easy to monitor dashboard, take action automatically on critical things that I’ve told the dashboard it can take action on, and organize the rest of the notifications in a way that I can deal with.
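The SNMP analogy reduces to a simple triage loop: notifications arrive as events, rules fire automatic handlers for the critical ones, and everything else gets organized for the user to review later.  A hypothetical sketch – the rule keys, field names, and example services are my own invention, not any real product’s API:

```python
def triage(notifications, rules):
    """Route notifications: auto-act on critical ones, queue the rest.

    notifications: list of dicts with 'source' and 'kind' keys.
    rules: maps (source, kind) pairs to a handler the dashboard is
    allowed to run on the user's behalf, SNMP-trap style.
    """
    queued = []
    for note in notifications:
        handler = rules.get((note["source"], note["kind"]))
        if handler:
            handler(note)        # automatic action the user pre-approved
        else:
            queued.append(note)  # organized for later review
    return queued

acted = []
rules = {("twitter", "direct_message"): acted.append}
inbox = [{"source": "twitter", "kind": "direct_message"},
         {"source": "facebook", "kind": "friend_request"}]
leftover = triage(inbox, rules)
```

The interesting part, as with network operations dashboards, is deciding which (source, kind) pairs deserve automatic handlers – that’s where the “take action on critical things I’ve told it about” behavior lives.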

As an extra special bonus, this dashboard would help me connect all the atomic data (namely – my friend data) on the various social networks I’m getting news feed data from.  Fred Wilson would be "Fred Wilson" across Twitter, his blog, his Tumblr, Facebook, LinkedIn, MySpace, Disqus, Intense Debate, etc.  I’d be able to interact with "Fred Wilson", not each of the discrete Fred Wilsons.
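Connecting that atomic friend data comes down to an identity map from (service, handle) pairs to one canonical person.  A toy sketch, with made-up handles and function names:

```python
# Hypothetical identity map: per-service handles folded into one person.
HANDLES = {
    ("twitter", "fredwilson"): "Fred Wilson",
    ("tumblr", "fredwilson"): "Fred Wilson",
    ("disqus", "fwilson"): "Fred Wilson",
}

def person_for(service, handle):
    """Resolve a per-service handle to a canonical person, if known."""
    return HANDLES.get((service, handle.lower()))

def merge_feed(events):
    """Rewrite a mixed feed so each event is attributed to one person.

    events: list of (service, handle, activity) tuples.  Unknown handles
    fall back to the raw per-service name.
    """
    return [(person_for(svc, who) or who, what) for svc, who, what in events]
```

The hard part in practice isn’t the lookup – it’s building the map, since the same person rarely uses the same handle everywhere and the services don’t share identifiers.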

There was a moment in time where I thought RSS might be the solution for this.  But it’s not – there’s a second order problem (and opportunity) here that requires something additional, especially given that new APIs are appearing for handling specific services’ news feeds.

Stuff like FriendFeed and SocialThing address part of the problem, but not all of it (and – ironically – often create additional data, as anyone who has been notified by email that a new friend has signed up to follow them on FriendFeed has discovered).

I love recursive problems.

Defragging The Scary Place Called My Brain


My partner Ryan put up another post on the Foundry Group blog about our Implicit Web theme.  This is a theme that is particularly interesting and useful to me since I spend so much time in front of my computer trying to deal with and synthesize a never ending stream of data and information.

I want my computer to do more of the work for me.  I want the web to figure stuff out for me.  I want my computer to learn.  I want my friends’ behavior and interests to inform mine.  I want people I don’t know but am connected to through other people I trust to help me find things.  I want a pony.

A couple of years ago my path crossed with Eric Norlin.  Out of our interactions emerged a conference called Defrag.  The first version of Defrag happened last fall – as is typical of my schemes, I always have an evil plan.  In this case it was to get in the middle of a bunch of really smart people and hear what they think about the wide range of problems we addressed at Defrag (which was a proxy for the concept of the Implicit Web).  Eric talked about this in his post today titled On Community-driven tech conferences.

My evil plan worked great and I learned a ton.  We’re doing Defrag 2008 in – surprise – 2008 (11/3 and 11/4, to be exact).  I’m continuing to spend a lot of time looking at, thinking about, and investing in the Implicit Web between now and then.  And reading as much Alex Iskold as he will write.
