Sunday Morning Reading

I usually sleep in on Sundays, but I’ve got a 16-mile run and want to meet Amy at the end of it at 10:30 for brunch.  So – good morning 5am and my normal daily reading routine.  I ran across a lot of intriguing stuff this morning that I thought I’d share with you.  I encourage you to trade your TV-watching time for a handful of clicks.

Will Google’s Purity Pay Off?  If you have engaged in the “yes Twitter is cool, but how will it make money” conversation I encourage you to read this BusinessWeek article from Pearl Harbor Day in 2000.  And I quote: “LIMITED BUSINESS.  But how will Google ever make money? There’s the rub. The company’s adamant refusal to use banner or other graphical ads eliminates what is the most lucrative income stream for rival search engines. “

Are We Home Alone? Sometimes Thomas Friedman nails it and sometimes he doesn’t.  Today, he nails it.  I completely agree that Obama (who I voted for) completely blew it on the AIG bonus thing.  Per Friedman:

“President Obama missed a huge teaching opportunity with A.I.G. Those bonuses were an outrage. The public’s anger was justified. But rather than fanning those flames and letting Congress run riot, the president should have said: “I’ll handle this.”  He should have gone on national TV and had the fireside chat with the country that is long overdue. That’s a talk where he lays out exactly how deep the crisis we are in is, exactly how much sacrifice we’re all going to have to make to get out of it, and then calls on those A.I.G. brokers — and everyone else who, in our rush to heal our banking system, may have gotten bonuses they did not deserve — and tells them that their president is asking them to return their bonuses “for the sake of the country.”  Had Mr. Obama given A.I.G.’s American brokers a reputation to live up to, a great national mission to join, I’d bet anything we’d have gotten most of our money back voluntarily. Inspiring conduct has so much more of an impact than coercing it. And it would have elevated the president to where he belongs — above the angry gaggle in Congress.” 

Dov Seidman, the CEO of LRN (I’m an investor) summarizes it well: “Laws tell you what you can do. Values inspire in you what you should do. It’s a leader’s job to inspire in us those values.”

Tit for tat: TomTom sues Microsoft for patent infringement:  The Microsoft / TomTom patent suit battle is heating up.  This is an important one to watch for a variety of reasons, including that it’s one of the few offensive patent litigations from Microsoft to date.

SpringStage goes live: A year ago, David Cohen (TechStars co-founder and author of the ColoradoStartups blog) told me about the idea he had with Alex Muse for creating a national network of startup blogs. SpringStage now has over 30 startup blogs in its network.  Pretty cool – take a look.

Concur’s stock sinks after CEO admits he didn’t earn degree: I have never, ever understood why people lie about graduating from college.  The reputational effect (and general ease) of getting caught – especially today – far outweighs any benefits.  For the record, I have an S.B. (bachelor’s degree) and an S.M. (master’s degree) in management science from MIT and was in the Ph.D. program for three years before I got kicked out.  I do not have a Ph.D.  My dad is Dr. Feld, but I am not.

Investing in open source hardware: Eric von Hippel – my MIT advisor and the professor I worked with (before getting kicked out of the Ph.D. program) – has been researching user-driven innovation since the 1970s.  He invited me to come talk at his annual MIT Innovation Lab seminar last week about open source hardware from a VC perspective.  I wasn’t able to make it to Boston, but suggested Bijan Sabet from Spark Capital talk instead, as Bijan is an investor in BugLabs and Boxee and has a point of view about this stuff.

What Is A Good Venture Return: Fred Wilson digs a little deeper into what makes a good venture return on the heels of the PE Hub article asserting that the $590m acquisition of Pure Digital by Cisco was a decent return for a middle-of-the-road VC firm, but “for big name backers Benchmark Capital and Sequoia Capital that’s pretty much a dud.”  Fred decomposes this more and concludes “It’s an investment that worked out well for the investors and I am sure they are quite happy they made the investment and with the returns.”

Try, Try Again, or Maybe Not: I guess I have to go read this paper by Harvard professor Paul Gompers, Anna Kovner, Josh Lerner, and David Scharfstein.  In it, they claim to have determined that the answer to the question “Does failure breed new knowledge or experience that can be leveraged into performance the second time around?” is, in some cases, yes, but overall, “We found there is no benefit in terms of performance.”  Mark Pincus and Zynga (I’m an investor) are highlighted in the article.  I’ve been an investor in two of Mark’s successes and missed one of his failures; my experience is that the lessons he learned from his failure have been extremely well integrated into his brain.  My own anecdotal experience runs counter to the study – I love working with entrepreneurs that have had both success and failure.

Ok – I’ve stalled long enough.  Time to go run.

  • You got kicked out???


    Woz got kicked out from CU Boulder.

    Someday you'll have to blog the story of getting kicked out. Whoever did the kicking is probably kicking themselves now.

    • It's a good story. They did me a favor.

  • Doug

    The AIG thing reminds me of the story of the frog and the scorpion.
    You knew they were greedy blood-sucking leeches when you gave them billions. How can anyone be surprised that the money is going straight into executive pockets?
    Stop bailing out failing companies. It will never ever work!

    • This is one of my favorite parables and it is right on the money with this situation.

  • Point: Claims of the form "We found there is no benefit …" are nearly always suspect. This point is commonly missed, and usually I don't complain, but Brad and the readers of this blog are "able to handle the truth"!

    The main reason is that in such a test it is easy to get a result of "no benefit" just from a combination of sloppy data collection and weak techniques for processing the data. Such poor work is partly redeemed when the result is "we were able to show a solid effect that has less than 1 chance in 100 of being from pure chance". But when the result is "no benefit", we don't know whether the cause was (A) a benefit at most too small to measure or (B) low quality testing.
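    To make (B) concrete, here is a minimal simulation sketch (plain Python, with made-up effect and sample sizes, not numbers from the Gompers et al. paper): a real but modest benefit exists, yet a small-sample study usually reports "no benefit", while a larger sample detects it reliably.

```python
import random
import statistics

random.seed(0)

def significant_difference(a, b):
    """Crude two-sample z-style test: is the difference in means significant
    at roughly the 5% level?"""
    se = (statistics.variance(a) / len(a) + statistics.variance(b) / len(b)) ** 0.5
    z = (statistics.fmean(a) - statistics.fmean(b)) / se
    return abs(z) > 1.96  # two-sided, ~5% false alarm rate

def detection_rate(n, effect, trials=2000):
    """Fraction of simulated studies that detect a true benefit of size `effect`."""
    hits = 0
    for _ in range(trials):
        treated = [random.gauss(effect, 1.0) for _ in range(n)]  # real benefit
        control = [random.gauss(0.0, 1.0) for _ in range(n)]
        hits += significant_difference(treated, control)
    return hits / trials

# The same true benefit (0.3 standard deviations) in both cases:
small_sample_power = detection_rate(n=20, effect=0.3)   # mostly "no benefit"
large_sample_power = detection_rate(n=500, effect=0.3)  # reliably detected
print(small_sample_power, large_sample_power)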

    Here is a class of examples from, say, managing server farms and networks: Now via simple network management protocol (SNMP) and similar standards, it is reasonably easy to collect rivers of data from such equipment. We can get data on each of several variables at rates from once each several seconds up to hundreds a second. So, as a part of 'system monitoring', for each 'target' from which data is collected, we can continually ask "Is the target healthy or sick" (right, from all problems, hardware, software, security, etc.).

    If our testing technique is just reading tea leaves, and if we have adjusted our false alarm rate to something reasonably low, even when the system is sick we will usually say that the system is not sick but healthy. This situation is analogous to "no benefit" or, more generally, found nothing unusual or nothing different. The cause of our seeing "no benefit" was just reading tea leaves. Easy to do.

    So, what should we do? Well, typically it is fairly easy to adjust the rate of false alarms (saying that the target is sick when it is healthy, or that there is a benefit when there is not: Type I error). Then the question is: what technique gives us the lowest rate of missed detections (saying that the target is healthy when it is sick, or that there is no benefit when there is: Type II error)?

    For the answer, with reasonable assumptions the best possible technique is known: about 75 years ago J. Neyman, working with E. Pearson, proved the now classic Neyman-Pearson result. Intuitively the idea is roughly like investing: spend the available false alarm rate in the places with the highest detection rate of real problems, that is, the lowest rate of missed detections. The proof of a relatively general version follows from the classic Hahn decomposition in measure theory. Curiously, in the discrete case, the actual computations give a knapsack problem, that is, an NP-complete problem.

    So, to make conclusions such as "no benefit" less common, process the data according to the best possible Neyman-Pearson result.
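    As a toy illustration of the Neyman-Pearson idea (a sketch under simple made-up assumptions: two monitored variables that are standard Gaussian when the target is healthy, with both means shifted to 1 when it is sick), compare the optimal likelihood-ratio detector with a naive per-variable threshold detector, calibrated to the same false alarm rate:

```python
import random

random.seed(1)

def simulate(sick, n=20000):
    """Two monitored variables: N(0,1) when healthy, means shifted to 1 when sick."""
    mu = 1.0 if sick else 0.0
    return [(random.gauss(mu, 1), random.gauss(mu, 1)) for _ in range(n)]

healthy = simulate(sick=False)
sick = simulate(sick=True)

# Neyman-Pearson: for these equal-variance Gaussians the likelihood ratio is
# monotone in x + y, so the optimal test thresholds that sum.
def np_stat(p):
    return p[0] + p[1]

# Naive 'rectangle' detector: alarm if either variable alone crosses a threshold.
def rect_alarm(p, t):
    return p[0] > t or p[1] > t

def quantile(xs, q):
    s = sorted(xs)
    return s[int(q * (len(s) - 1))]

target_fa = 0.05  # allowed false alarm rate on healthy data

# Calibrate both detectors to the same false alarm rate on healthy data.
np_thresh = quantile([np_stat(p) for p in healthy], 1 - target_fa)
t_lo, t_hi = 0.0, 5.0
for _ in range(60):  # bisection on the per-variable rectangle threshold
    t = (t_lo + t_hi) / 2
    fa = sum(rect_alarm(p, t) for p in healthy) / len(healthy)
    if fa > target_fa:
        t_lo = t  # too many false alarms: raise the threshold
    else:
        t_hi = t
rect_thresh = (t_lo + t_hi) / 2

# Type II error (missed detections) of each detector on sick targets:
np_miss = sum(np_stat(p) <= np_thresh for p in sick) / len(sick)
rect_miss = sum(not rect_alarm(p, rect_thresh) for p in sick) / len(sick)
print(np_miss, rect_miss)
```

    At the same false alarm rate, the likelihood-ratio detector misses fewer sick targets than the rectangle; the rectangle's "healthy" region simply fits the real healthy/sick geometry worse.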

    In the case of monitoring for 'health and wellness', Neyman-Pearson asks for even more data than we can have in practice, especially on when the target is sick.

    A weak technique that is more realistic than reading tea leaves is doing all the detection with just the traditional 'thresholds' on one or a few of the available variables. Geometrically, the region where we say that the target is healthy is then an interval (a threshold on one variable) or, with several variables, a rectangle in several dimensions. Alas, a rectangle likely fits reality poorly, thus giving too much Type II error for each selected value of Type I error.

    There are some things that can be done in this situation to do nearly as well as Neyman-Pearson using only data that is readily available, that is, with less than half the data Neyman-Pearson assumes, but the techniques are — horrors! — mathematics and not just 'computer science'!

    Now I've "stalled long enough"! Back to writing software!

  • Good explanation from a parallel universe. As a failed social scientist (that was part of what got me kicked out of the Ph.D. program – I liked to do things a lot more than study them) there are so many disconnects between reality and "the studies" – especially retrospective ones. The social scientists always want statistically significant samples – we saw what that did for the economists that ignored the notion of a black swan!

  • The 'black swan' problem is different:

    Here is how one of the basic arguments in 'mathematical finance' went:

    We assume that there is no free money, no free lunch, that is, no 'arbitrage', because if there were, too quickly too many investors would exploit it, at which time the arbitrage would no longer exist. This assumption of no arbitrage is, maybe, an okay (for some purposes) first cut at how the market works in the large (even if individual traders get rich with individual stocks).

    So, what is left is that no one knows, or has any way of discovering, what the stock market will do tomorrow, the next day, etc. So, what the market does each day is (something like flipping a coin) probabilistically independent of the past. Next, lacking any information to the contrary (e.g., no 'Monday effect'), we assume that the distribution of the change from one day to the next is always the same.

    Soooooo, with these two assumptions, and a relatively mild assumption about the nature of the distribution (finite variance is enough; the Lindeberg-Feller result has a tricky weaker assumption), the change in the stock market over, say, 30 days is the sum of 30 changes, each independent and with the same distribution, so by the classic central limit theorem the change over the 30 days must start to be approximated by a Gaussian distribution.

    Now, with these assumptions, as the number of days, the 30, grows, the actual distribution will converge to a Gaussian distribution as closely as we please.

    In particular, if we look at where the market is, say, each 30 days, then what we get will look, just to the eye, something like Brownian motion.

    My view is that this argument (really just the assumptions, since the mathematics itself is rock solid) is neither completely wrong nor as accurate as it needs to be.

    Even if we really believe this argument (and many traders, e.g., naked short sellers, have reason to laugh), we still do not know how accurate the convergence to a Gaussian distribution is. In particular, even if we add up the daily incremental changes for a year, say, 250 trading days, we still can't rule out some rare events far out in the tail, that is, a 'black swan'.
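    A quick simulation sketch of that last point (plain Python, using a Student-t distribution as a standard stand-in for fat-tailed daily returns, not a calibrated market model): scale a Gaussian model and a fat-tailed model to the identical daily variance and count '5-sigma' days.

```python
import random
import math

random.seed(2)

def student_t3():
    """Student-t draw with 3 degrees of freedom: heavy tails, finite variance."""
    z = random.gauss(0, 1)
    chi2 = sum(random.gauss(0, 1) ** 2 for _ in range(3))  # chi-square, 3 df
    return z / math.sqrt(chi2 / 3)

N = 200_000  # simulated trading days

# Both return models scaled to the same (unit) daily variance.
gauss_days = [random.gauss(0, 1) for _ in range(N)]
fat_days = [student_t3() / math.sqrt(3) for _ in range(N)]  # Var(t with 3 df) = 3

# Count '5-sigma' daily moves under each model.
gauss_extremes = sum(abs(r) > 5 for r in gauss_days)
fat_extremes = sum(abs(r) > 5 for r in fat_days)
print(gauss_extremes, fat_extremes)
```

    The Gaussian model says a 5-sigma day should essentially never happen in 200,000 days, while the fat-tailed model with the very same variance produces hundreds of them. Matching the variance, which is all the central limit theorem argument uses, says almost nothing about the tails.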

  • Eric von Hippel

    Actually, as far as I remember, Brad didn't get kicked out of the PhD program – am I wrong, Brad? My recollection is that, while a student, he just kept starting new and interesting companies that kept on succeeding and growing. They took up more and more of his time, and made finishing a PhD less and less interesting to him.

    Who knows? One of these days he might decide to give up his day job and come back and finish up his dissertation. Could happen. 🙂

    • Yeah, well, who knows!  I recall formally taking an “indefinite leave of absence” but in the context of “get serious about this or take an indefinite leave of absence.”  By the time the moment of truth about getting serious about it came, I was way more interested in the companies I was involved in than in the actual PhD activity.  Regardless of whether I was “kicked out”, “left because I lost interest”, or “some other dynamic”, the experience was hugely important to me on many levels, as was your involvement and mentoring.  I clearly recall you giving me plenty of time and space to figure out what I wanted to do and being supportive regardless of which path I took.  If nothing else (and I learned loads of other things), learning the value of that was a biggie!

      • Eric von Hippel

        Well, that's what they mean by the "doctor father" role – if I do it right we both grow – and end up with a life-time bond!

        • Indeed.  And you did it great.

  • I wasn't suggesting they were the same problem. I was suggesting that two different categories of problems confound social science research in similar (yet different) ways.

  • Yes, "social science research" has some challenges. My wife and brother tried, but I stayed with math, physical science, engineering, and technology (for business).

    But the two problems we have mentioned, forgetting that (1) "We found there is no benefit …" is not the same as there being no benefit, and (2) the central limit theorem does not guarantee to kill black swans, are elementary mistakes, more an embarrassment for some social scientists than a challenge inherent in social science problems.

    Is it possible to make progress in the social sciences? Actually, yes: For (1), that is just a hypothesis test, and the social sciences have long commonly made good use of that subject, e.g., as in

    Sidney Siegel, 'Nonparametric Statistics for the Behavioral Sciences', McGraw-Hill, New York, 1956.

    Some people in the social sciences know this material well. The system monitoring communities in computer science and information technology would do well getting caught up on such material.

    Moreover, the recent rapid progress in 'social' applications of computing and the Internet shows real progress in understanding 'social' phenomena.

    Having the social sciences as precise as mathematical physics soon? No. Possible to make progress in the social sciences? Yes.
