Last week SimpleGeo and their partner Stamen Design jointly released a project they have been working on together called Polymaps. It’s absolutely beautiful and a stunning example of what you can do with the SimpleGeo API. They’ve released the Polymaps source code on GitHub so any developer can quickly see how the API is used, play around with real production code, and modify the base examples for their own use.
When I first started programming, it was 1979. I started on an Apple II – I learned BASIC, Pascal, and 6502 Assembler. I studied every page and example in the Apple II Reference Manual (the “Red Book”). Whenever I got source code for any application at a user group meeting, I stared at it, played with it, and tried to understand what it was doing.
When I started programming on an IBM PC in 1983, I did exactly the same thing. I spent a lot of time with Btrieve and there were endless source code examples to build on. I had a few friends that were also using BASIC + the IBM BASIC Compiler + Btrieve so we shared code (by handing each other floppy disks). We built libraries that did specific things and as each of us improved them, we shared them back with each other.
In my first company, we were heavy users of Clarion. While Clarion was compiled, it still came with a solid library of example code, although we quickly built our own libraries that we used throughout the company as we grew. When I started investing in companies that were building Web apps in 1994, it was once again all HTML / source code and examples everywhere. My friends at NetGenesis (mostly Raj Bhargava and Eric Richard) wrote one of the first Web programming books – Build a Web Site: The Programmer’s Guide to Creating, Building and Maintaining a Web Presence – I vaguely remember NetGenesis getting paid something like $25,000 (which was a ton of money to them at the time) to write it.
In the last few months, the phrase “data as a service” has started to be popular. I’m not totally sure I understand what people mean by it and I’ve been involved in several larger discussions about it and even noticed an article today in the New York Times titled “Data on Demand Is an Opportunity.” I’ve invested in several companies that seem to fit within this categorization, including SimpleGeo, Gnip, and BigDoor, but we don’t really think about them as “data as a service” companies (SimpleGeo and Gnip are in our Glue theme; BigDoor is in our Distribution theme).
When I reflect on all of this, it seems painfully obvious to me (and maybe to you also) that the best way to popularize “data as a service” is to start with an API (which creates the revenue model dynamic) and build a bunch of open source examples on top of it. Your goal should be to make it as simple as possible for a developer to immediately start using your API in ways relevant to them. By open sourcing the starting point, you both save an enormous amount of time and give the developers a much more interactive way to learn rather than forcing them to start from scratch and figure out how the API works.
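The “API plus open source starting point” pattern described above can be sketched as a thin client library that a vendor might publish alongside its API docs. Everything in this sketch is hypothetical – the endpoint, the `GeoClient` class, and its parameters are illustrations of the pattern, not SimpleGeo’s actual API:

```python
from urllib.parse import urlencode

class GeoClient:
    """Minimal open source starting point for a hypothetical location
    API -- the kind of example a 'data as a service' vendor might ship
    so a developer can start using the API immediately."""

    BASE_URL = "https://api.example.com/v1"  # hypothetical endpoint

    def __init__(self, api_key):
        # The API key is where the revenue model dynamic lives:
        # the client code is open, access to the data is metered.
        self.api_key = api_key

    def nearby_url(self, lat, lon, radius_km=1.0):
        """Build the request URL for a 'points near here' query.
        Kept as URL construction only, so the example stays runnable
        without a live service."""
        query = urlencode({
            "key": self.api_key,
            "lat": f"{lat:.6f}",
            "lon": f"{lon:.6f}",
            "radius": radius_km,
        })
        return f"{self.BASE_URL}/nearby?{query}"

client = GeoClient("demo-key")
print(client.nearby_url(40.018, -105.275))  # Boulder, CO
```

Because the starter code is open, a developer can fork it, swap in their own key, and modify the base example for their own use – which is exactly the learning loop described above.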
I like how SimpleGeo has done this and realize that this can apply to a bunch of companies we are both investing in and looking at. I’m not sure that it has anything to do with the construct of “data as a service” (which I expect will quickly turn into DaaS) but it does follow from the long legacy of how people have learned from each other around the creation of software, especially around new platforms.
While we are on the subject of SFLAs (silly four letter acronyms – we’ve got PaaS and IaaS, along with our old friend SaaS), any ideas what ZaaS is going to stand for?
My long time friend Alan Shimel has been blogging up a storm on Network World (if you want to hear an amusing story, ask him about the first time he met me.) When Alan started writing his column for Network World he asked me for introductions to a bunch of our portfolio companies that were using open source. Alan is a tough critic and calls it like he sees it, so while I knew there was no guarantee that he’d go easy on the companies, I knew that Alan would do an even-handed job of highlighting their strengths and weaknesses. I also know that everyone I invest in values feedback – both good and bad – and works especially hard to delight their customers, so any kind of feedback will make them better.
Earlier today, Alan wrote an article on Standing Cloud titled Seeding the Cloud with Open Source, Standing Cloud Makes It Easy. On Monday, Standing Cloud released the first version of their product (called the Trial Edition), a free version that lets you install and work with around 30 open source products on five different cloud service providers. It’s the first step in a series of releases over the next two quarters that Standing Cloud has planned as they work to create an environment where it is trivial to deploy and manage open source applications in the cloud. Alan played around with Standing Cloud’s Trial Edition, totally understood what they are doing, and explained why the Trial Edition is interesting and where Standing Cloud is heading when they release their Community Edition at the end of April.
Alan’s also written several other articles about companies in our portfolio recently, including the open source work Gist has been doing with Twitter and a great review of the Pogoplug and how it uses open source.
I believe I’m one of the people that inspired Alan to start blogging a number of years ago. Through his personal blog Ashimmy, the blog he writes for Network World titled Open Source Fact and Fiction, and the blogging he does on security.exe (his company CISO Group’s blog), Alan is one of my must-read technology bloggers. And he’s often funny as hell, especially when he gets riled up. Keep it up Alan!
Kevin Kelleher’s article on GigaOm this morning titled 2009: Year of the Hacker made me think back to the rise of open source after the Internet crash of 2001. In the aftermath of the crash, many experienced software developers were out of work for a period of time ranging from weeks to years. Some of them threw themselves into open source projects and, in some cases, created their next job with the expertise they developed around a particular open source project.
We are still in a tense and ambiguous part of the current downturn where, while many developers are getting laid off, some of them are immediately being picked back up by other companies that are in desperate need of them. However, many other developers are not immediately finding work. If the downturn gets worse, the number of out of work developers will increase.
If they take a lesson from the 2001 – 2003 time frame, some subset of them will choose to get deeply involved in an open source related project. Given the range of established open source projects, the opportunity to do this today is much more extensive than it was seven years ago. In addition, most software companies – especially Internet-related ones – now have robust APIs and/or open source libraries that they actively encourage third parties to work with for free. The SaaS-based infrastructure that exists, along with maturing source code repositories, adds to the fun. The ability to hack something interesting together based on an established company’s infrastructure is omnipresent and is one of the best ways to “apply for a job” at an interesting company.
We are thinking hard about how to do this correctly at a number of our new investments, including companies like Oblong, Gnip, and a new cloud-computing related startup we are funding in January. Of course, many of our older investments such as NewsGator and Rally Software already have extensive API libraries and actively encourage developers to work with them. And of course, there are gold standards of open source projects like my friends at WordPress and masters of the API like Twitter.
If you are a developer and want help engaging with any of these folks, or have ideas about how this could work better, feel free to drop me an email.
A few weeks ago MIT refreshed its OpenCourseWare project. This project – in which MIT shares curriculum, lecture notes, exams, and other material from over 1,700 courses – is amazing.
The project was launched in 2002 by computer science professor Hal Abelson with 32 courses. I took 6.001: Structure and Interpretation of Computer Programs from Abelson in 1984. You can take it also – including working through the online version of the textbook!
While I don’t have a favorite MIT course (that would be an emotionally complex oxymoron), my most miserable was 18.700 Linear Algebra which I dropped about halfway through. Sloan (management) courses are well represented, including 15.351: Managing the Innovation Process which was my introduction to Eric von Hippel and my lifelong hatred of software patents.
Speaking of other amazing MIT feats, one of my fraternity brothers – Dan Tani – is currently in outer space.
I also have a photo of Dan from a party dressed up as a piece of nigiri sushi, but I’ll spare you that.
Alan Shimel has a good post on a recent release of a formerly “open source” product called Nessus. With the version 3.0 release, the authors abandoned the GPL license and effectively made it closed source. While one can debate the rationale of the parties all day long, the fundamental issue surrounding the migration of “successful” open source projects to “closed source” as part of a commercialization phase is one that I think both vendors and customers will be thrashing around with for a while.
Recently, I’ve been exploring some thoughts with my former doctoral advisor Eric von Hippel on the broader issues surrounding Free / Open Source software. There’s been a flurry of academic research in this arena that is covered nicely in Perspectives on Free and Open Source Software (co-edited by Karim Lakhani, one of Eric’s current students.) My simpleminded conclusion is that there is an enormous amount of complexity around this issue, especially when you incorporate our completely busted software patent system into the mix. While it’s easy to blow this off as something that will sort itself out, I don’t think it will and we’ll be living with the dynamics of the F/OSS ecosystem for a long time.