Does Moore’s Law Suddenly Matter Less?

A post in the New York Times this morning asserted that Software Progress Beats Moore’s Law. It’s a short post, but the money quote is from Ed Lazowska at the University of Washington:

“The rate of change in hardware captured by Moore’s Law, experts agree, is an extraordinary achievement. ‘But the ingenuity that computer scientists have put into algorithms have yielded performance improvements that make even the exponential gains of Moore’s Law look trivial,’ said Edward Lazowska, a professor at the University of Washington.

The rapid pace of software progress, Mr. Lazowska added, is harder to measure in algorithms performing nonnumerical tasks. But he points to the progress of recent years in artificial intelligence fields like language understanding, speech recognition and computer vision as evidence that the story of the algorithm’s ascent holds true well beyond more easily quantified benchmark tests.”

If you agree with this, the implications are profound. Watching Watson kick Ken Jennings’s ass in Jeopardy a few weeks ago definitely felt like a win for software, but someone (I can’t remember who) had the fun line that “it still took a data center to beat Ken Jennings.”

That doesn’t really matter, though, because Moore’s Law will continue to apply to the data center. My hypothesis is that there’s a much faster rate of advancement at the software layer, and if this is true it has broad impacts for computing, and computing-enabled society, as a whole. It’s easy to forget about the software layer, but as an investor I live in it. As a result of several of our themes, namely HCI and Glue, we see firsthand the dramatic pace at which software can improve.

I’ve been through my share of 100x to 1000x performance improvements from a couple of lines of code or a change in database structure in my life as a programmer 20+ years ago. At the time the hardware infrastructure was still the ultimate constraint – you could get linear progress by throwing more hardware at the problem. The initial software gains happened quickly, but then you were stuck waiting on hardware improvements. If you don’t believe me, go buy a 286 PC and a 386 PC on eBay, load up dBase III on each, and reindex some large database files. Now do the same with FoxPro on each. The numbers will startle you.
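
To make that concrete, here’s a minimal, hypothetical sketch – Python standing in for dBase, with made-up data – of the kind of win a data structure change buys: the same lookups against an unindexed list versus a hash index.

    import random
    import time

    # Hypothetical data: 100,000 (id, name) records, no index.
    records = [(i, f"name-{i}") for i in range(100_000)]
    random.shuffle(records)
    targets = random.sample(range(100_000), 200)

    # Unindexed: every lookup is a linear scan.
    start = time.perf_counter()
    for t in targets:
        next(r for r in records if r[0] == t)
    scan_time = time.perf_counter() - start

    # "Reindex" once into a hash table, then look up in O(1).
    index = dict(records)
    start = time.perf_counter()
    for t in targets:
        index[t]
    indexed_time = time.perf_counter() - start

    print(f"scan: {scan_time:.3f}s indexed: {indexed_time:.6f}s "
          f"speedup: {scan_time / indexed_time:,.0f}x")

That’s the software side of the story; the 286-vs-386 test is the hardware side.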

It feels very different today. The hardware is rapidly becoming an abstraction in a lot of cases. The web services dynamic – where we access things through a browser – built a UI layer in front of the hardware infrastructure. Our friend the cloud is making this an even more dramatic separation as hardware resources become elastic, dynamic, and much easier for the software layer folks to deploy and use. And, as a result, there’s a different type of activity on the software layer.

I don’t have a good answer as to whether it’s core algorithms, distributed processing across commodity hardware (instead of dedicated Connection Machines), new structural approaches (e.g. NoSQL), or just the compounding of years of computer science and software engineering, but I think we are at the cusp of a profound shift in overall system performance, and this article pokes us nicely in the eye to make sure we are aware of it.

The robots are coming. And they will be really smart. And fast. Let’s hope they want to be our friends.

  • http://www.nektra.com Sebastian Wain

    I don’t think it matters less. What happens is that software, in general, doesn’t take full advantage of hardware, so there is a gap between them. For example, Microsoft Excel 2007 was the first version with multithreaded (read: multicore advantage) calculation, and other software is far behind. Think in terms of cost: YouTube was developed mainly in Python; if there were a quick way to do it in C/C++, the costs to operate that infrastructure would be significantly lower (and they are not marginal). Currently it’s not easy, in terms of development costs, to decide in favor of C/C++, but that is a limitation of the tools we have. Conclusion? It matters less in the sense that you can build a distributed system with standard PCs, but in a big distributed system (i.e. Google) the costs still matter.
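
    A minimal sketch of that multicore gap – Python’s multiprocessing standing in for Excel’s threaded recalculation; the “formula” and numbers are invented:

        import math
        import time
        from multiprocessing import Pool

        def recalc(cell_value):
            # Stand-in for an expensive spreadsheet formula.
            return sum(math.sqrt(cell_value + i) for i in range(20_000))

        if __name__ == "__main__":
            cells = list(range(1_000))

            start = time.perf_counter()
            serial = [recalc(c) for c in cells]      # one core
            serial_time = time.perf_counter() - start

            start = time.perf_counter()
            with Pool() as pool:                     # one worker per core
                parallel = pool.map(recalc, cells)
            parallel_time = time.perf_counter() - start

            assert serial == parallel
            print(f"serial: {serial_time:.2f}s parallel: {parallel_time:.2f}s")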

  • http://www.studentswithpatents.com Richard Weisberger

    For those interested, see the 2002 paper published in Operations Research, “Solving Real-World Linear Programs: A Decade and More of Progress,” which looks at this issue in the LP space.

    http://bit.ly/gZt1mI

    Stepping back… look to the work of Ray Kurzweil, who documents quite clearly that “all innovation” follows exponential growth trajectories (though we don’t always see it, living in the linear region) and that this type of growth seems to be a natural law. Given what we have not yet discovered about genetic algorithms, stay tuned for the unexpected.

    • http://www.feld.com bfeld

      Yup – great paper to point to – I hadn’t seen it so thanks.

  • http://www.lowpan.com Jon Smirl

    There are three areas: hardware, software, and networking. Most of what you perceive as gains in software are really gains enabled by networking. I do a lot of programming and from where I sit software doesn’t seem to have moved much in the last 20 years. Almost all of the algorithms in use today were known 20 years ago. But 20 years ago we didn’t have a planetary network to deploy the algorithms onto.

    I believe Watson owes more to the networking advances that let his data center be built than to software improvements. Sure, the AI software has improved, but the improvements in networking are far, far greater.

    If software patents continue on their present course we may even reverse the progress of software in the near future. There are already too few programmers in the world; litigation and defending against it waste an already scarce resource. If we keep roping off blocks of the software pyramid, pretty soon we won’t be able to build pyramids.

    • http://www.feld.com bfeld

      I completely agree with you on patents.

      I’m not sure I agree with you on networking being the driver, but I’m going to think more about that – I definitely agree that it’s in the mix, but I continue to think it is enabling of different software approaches that have advanced much more quickly in recent years.

      • http://www.lowpan.com Jon Smirl

        Networking is more than the Internet. Those 65,000-CPU clusters totally depend on internal networking to function. Things like InfiniBand and Fibre Channel are also networks.

        • http://www.feld.com bfeld

          Yup – totally understand that. You probably don’t know my background, but I wired up plenty of SynOptics backplanes when 10BASE-T first emerged. And I agree that’s part of the performance driver, but I’m not sure I agree that it’s the driver vs. the enabler.

          • http://www.lowpan.com Jon Smirl

            Networking lets you partition datasets and process the parts independently. But the algorithms being executed inside the partitions are pretty much the same ones we had in 1990. Partitioning is what has caused all of these perceived gains in software.

            Take databases. Partitioning (clusters) is the main advancement. Column indexes are another, but those are over 20 years old. Now we are getting NoSQL. But NoSQL is just hash tables partitioned over a network – we’ve had hash tables for fifty years. It’s networking that has made distributed hash tables useful as a database (see the sketch at the end of this comment). Google is a master at partitioning (but the concept of map/reduce is from the early 1980s).

            The complex algorithms in linear programming have advanced, but linear programming is a niche application. I doubt one in a million computers runs linear programming commercially. Advances like the ones in LP are uncommon.

            Most programming does not consist of complex algorithms like LP. Instead it is basic algorithms applied over and over. These basic algorithms aren’t getting any faster. Instead, programmers are being more careful about how they combine these algorithms so that they don’t block parallelism. When the parallelism isn’t blocked, the program can be partitioned over a large network, which allows it to scale up to build applications that were previously conceived of but couldn’t be built. Networks allowed us to build much larger applications using the previously known algorithms.
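
            A minimal sketch of that claim – a “NoSQL” store as nothing more than hash tables plus a partitioning function. The in-process dicts below stand in for networked nodes; all names and values are invented:

                import hashlib

                class PartitionedHashTable:
                    def __init__(self, num_nodes=4):
                        # Each dict stands in for a separate machine.
                        self.nodes = [dict() for _ in range(num_nodes)]

                    def _node_for(self, key):
                        # Hash the key to pick the owning "node".
                        digest = hashlib.md5(key.encode()).hexdigest()
                        return self.nodes[int(digest, 16) % len(self.nodes)]

                    def put(self, key, value):
                        self._node_for(key)[key] = value

                    def get(self, key):
                        return self._node_for(key).get(key)

                store = PartitionedHashTable()
                store.put("user:42", {"name": "Ken"})
                print(store.get("user:42"))

            The hash table itself is decades-old technology; the only new ingredient is the network hop hidden behind _node_for.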

          • http://www.feld.com bfeld

            How about image recognition algorithms, especially over very large image sets or requiring interaction in real time?

          • http://www.lowpan.com Jon Smirl

            A better example is codecs. There has been significant algorithmic advancement in video and audio codecs.

  • http://twitter.com/brianylim Brian Lim

    I believe the solution will be very complex with many layers and structures. The key is identifying the bottlenecks and finding elegant and robust solutions for them.

    Marvin Minsky in Society of Mind stated that the mind consists of a huge aggregation of mindless agents that have evolved to perform specific tasks. I am seeing software layers able to better communicate between mindless agents. I wonder how many communication layers have been identified in the human brain vs. modern computer hardware/software? Ray Kurzweil probably has a model for this…

  • http://twitter.com/eriksquared Erik Engstrom

    Morning folks,

    I’ve spent the last ten years working problems out of major enterprise and distributed systems in utterly complex and dramatic ways for Fortune customers. I want to remind the reader that the “bleeding edge” of tech investment and development is light years ahead of global business in nearly every aspect.

    I can attest that as further progress is made in the entrepreneurial space and academic computing sectors, businesses are left across an ever-widening chasm of non-adoption and may never be able to catch up without our help. Most organizations remain seized upon “what works” and what they can support with their resources. They are slow to adopt, slow to shift thinking, and need real innovation in each of their business domains.

    There are many billion-dollar businesses still running 15-year-old client software, still using flat-file FTP transfers to operate infrastructure and critical aspects of the global economy. In some cases floppy disks are still the intermediary between vendors. In other words, there is a great land of opportunity in actually applying new tech to existing business models and the inter-operational spaces between businesses and sectors.

    I don’t disagree that the rate of change within the social consumer landscape is proving Moore’s Law less relevant. My own Android smartphone and the surgical implant I had installed last week are proof of this. Simply put, in this forum, with this many entrepreneurial readers on hand, I can’t pass up the opportunity to remind people of the long-term growth value of building technology for the now (or the future) while focusing on the way other businesses can use, buy, and profit from it.

    I am so compelled to bridge these two dimensions, and there is so much opportunity, that I often find it hard to sleep at night. I wonder, too, if there are others who see the value in bridging into business. It isn’t the sexiest, hottest space compared to the hustle and draw around the consumer web, but the opportunity shouldn’t be overlooked. All you have to do to find profitable customers is look over your shoulder at the massive businesses and brands that you’re technically leaving behind in your dust.

  • http://blog.jason.pollock.ca/ JasonPollock

    90 servers in 10 racks is not a datacenter, that’s a small machine room. :)

    At the end of the day, a quicksort is a quicksort and a hash is a hash. We haven’t seen any improvements in those algorithms.

    That research talks about heuristic changes which improve the performance of algorithms for problems such as 3-SAT. They aren’t general-purpose, and they won’t speed up your web browser, word processor, or database.

    This is why, where I work, although the hardware is at least 10x faster, the system still performs exactly the same amount of work (calls/second) it did 10 years ago. We are doing more work (more complicated flows) and are more worried about security. However, the fundamental limits of the system are still exactly what they were 10 years ago. We still have portions of the system which are O(N) and others which are fundamentally O(N^2), all around things which haven’t sped up in the past 10 years (disk). Thankfully we don’t have any which are O(N^N), which is where the improvements Martin Grötschel describes come in (a back-of-the-envelope sketch follows below).

    So yes, some algorithms are faster, but they don’t apply generally.
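
    A back-of-the-envelope sketch of the disk-bound point above, assuming a roughly historical 10ms seek time that hasn’t improved: when the per-operation cost is fixed, the O(N^2) portions swamp everything as N grows, no matter how fast the CPUs get.

        # Assumed constant: a disk seek has sat near 10 ms for years.
        SEEK_SECONDS = 0.010

        def linear_portion(n):
            # O(N): one seek per call.
            return n * SEEK_SECONDS

        def quadratic_portion(n):
            # O(N^2): e.g. every call touches every record.
            return n * n * SEEK_SECONDS

        for n in (100, 1_000, 10_000):
            print(f"N={n:>6}: O(N)={linear_portion(n):9.1f}s "
                  f"O(N^2)={quadratic_portion(n):12.1f}s")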

  • Robin

    As a non-technical person, while I don’t understand the specifics of your comments, I do understand that things are changing much faster than I (and many other average humans) can comprehend. Your last sentence is what impacts me the most, as I am much more likely to approach an issue from a sociological perspective.

    While I don’t live in fear of a rogue robot (yet), I do see the processing and control of information as a new currency. Those few who understand the complexities and potential of information and communication management will be the “haves” and those who don’t will be the “have nots.” Those who are truly lost will surely be left behind and will struggle to find a valuable way to contribute to society – at least a way that is measurable. Our current public education system has no way to keep up.

    I don’t mean for this to be a downer comment. Rapid change is a challenge for most of us. It is inevitable and has the potential to produce great things for society. It also has the potential to put a huge amount of control in a few hands, just as it has the potential to put a great amount of information out to many people at once – I just hope and trust that most of that information will be honest and factual.

    I am optimistic that the great minds that are taking us forward in the information age are doing so with an understanding of the power they really have and are mindful of the potential misuse of their creations. Most of the “geeks” I know have strong social consciences, and while valuing the importance of their discoveries, balance them with what it means to be a complete human being. I am in awe of and grateful for what they do – even though I may not always “get it.”

    • http://www.feld.com bfeld

      Robin – this is a super important comment from my perspective – thanks for taking the time to put it out there.

      I had a great discussion with a taxi driver early the other morning in Tucson on the way to the airport at 4am. He and I got into an interesting conversation where I asked him to explain how “the Internet worked.” His answer was enlightening and reminded me how important it is to make sure those of us deep in creating this stuff pay attention to the knowledge, understanding, and concerns of everyone else in society.

      Technology can be put to very good uses and very bad uses. I love your comment about the geeks having strong social consciences – I like to think that I do and always come down on the side of using this stuff for good.

      • http://twitter.com/eriksquared Erik Engstrom

        Brad – a question for you, if you can answer, please.

        How would you advise someone to bring up (if at all) social and moral objectives for building a business? Does it even have a place in understanding fit?

        I have felt recently that you and a few other investor bloggers have been signaling what I consider to be a valuable moral tone. Perhaps you’ve reflected on this and can share some thoughts.

        • http://www.feld.com bfeld

          I’m not sure I’ve got a deep answer here as I don’t think I’ve been thinking hard about it. My quick response is that every entrepreneur should be clear about their goals and purpose for their business. It’s easy to point at “do something you are incredibly passionate about”, but it’s even more powerful to define it “in the context of doing something that does good.”

          The word “moral” is a tough one in this context because the definition has a range that is heavily influenced by one’s point of view, religion, and value system. I’ve never felt that it was a particularly precise word, especially given the 2am discussions on the couch in my fraternity about “morality” some 25 years ago!

          All that said, it’s an interesting observation from my writing, especially since I hadn’t realized there was a tone in it.

          • http://twitter.com/eriksquared Erik Engstrom

            Brad,

            I’m hardly a trained philosopher, but I’ve always considered ‘moral’ as an uncontrollable and unconscious display of ‘fiber’. ‘Mores’ are what I consider to be normal morality in a social group. “Doing good” is a moral statement in my opinion and resonates.

            Thanks.

          • http://www.feld.com bfeld

            Ooh – I like that. I’ll use that.

  • http://borasky-research.net/about-data-journalism-developer-studio-pricing-survey/ M. Edward (Ed) Borasky

    I disagree – Moore’s Law has made brute-force MapReduce over cubic miles of servers an economically viable alternative to cold calling. ;-) But seriously, the last time I heard the “algorithms have accounted for most of the progress” argument was during the glory days of high-performance computing – 1980–1990, plus or minus a year or two.

    The context was scientific computing, especially weather / climate modeling and other physics, chemistry and engineering computations. The core algorithms used today in search and natural language processing are for the most part iterative computational linear algebra over extremely sparse “term-document” matrices. They’re known quantities and were known quantities back in the 1990s.

    Maybe there are breakthroughs to be had in the *implementation* – I can point you at some research if you’re interested – but they’re still the same core multiply-add patterns and operation counts as a function of problem size as what we had back then (see the sketch at the end of this comment).

    Maybe there’s an “FFT of search” waiting to be found, but so far, there *hasn’t* been progress beyond brute force iterative linear algebra. We’ve stopped looking, IMHO precisely *because* of Moore’s Law.
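
    A minimal sketch of that multiply-add pattern – power iteration over a tiny, made-up sparse matrix stored as dict-of-dicts; real term-document matrices are this same loop at enormous scale:

        # Sparse symmetric matrix as {row: {col: value}} (values invented).
        A = {
            0: {0: 2.0, 1: 1.0},
            1: {0: 1.0, 1: 3.0, 2: 1.0},
            2: {1: 1.0, 2: 2.0},
        }

        def matvec(A, x):
            # The core sparse multiply-add pattern.
            y = [0.0] * len(x)
            for i, row in A.items():
                for j, v in row.items():
                    y[i] += v * x[j]
            return y

        def power_iteration(A, n, steps=50):
            # Repeatedly multiply and renormalize to estimate the
            # dominant eigenvalue and eigenvector.
            x = [1.0] * n
            estimate = 0.0
            for _ in range(steps):
                y = matvec(A, x)
                estimate = max(abs(v) for v in y)
                x = [v / estimate for v in y]
            return estimate, x

        value, vector = power_iteration(A, 3)
        print(value, vector)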

  • http://twitter.com/andyidsinga andyidsinga

    I think you’re right… and even hardware is ‘software’ if you consider Verilog, SystemC, opencores.org, FPGAs, and dirt-cheap CPLDs. Xilinx and Altera are your friends :)

  • http://analytikainc.com/blog/ John R. Sedivy

    I believe that the change will continue to accelerate due to an increasing awareness of cross-disciplinary performance. Rather than gauging individual hardware or software performance curves, more will be gained by an integral approach – tapping performance by optimizing interoperability. “We” as opposed to “hardware vs. software.”

    Your last line reminds me of an interview between Ken Wilber and Kevin Kelly about exploring The Technium. The ideas in that interview discuss a technological evolution.

  • http://somethingventured.me/ Conrad Wai

    Brad — totally agree. I actually wrote a post last year called Moore’s Law is Already Dead. I was arguing mostly about things like user experience mattering more, but as an ex-programmer like you I appreciate the software/algorithm angle as well.