Fundamental Software Problems That Haven’t Been Solved Yet

I hate doing “reflections on the last year” type of stuff so I was delighted to read Fred Wilson’s post this morning titled What Just Happened? It’s his reflection on what happened in our tech world in 2014 and it’s a great summary. Go read it – this post will still be here when you return.

Since I don’t really celebrate Christmas, I end up playing around with software a lot over the holidays. This year my friends at FullContact and Mattermark got the brunt of me using their software, finding bugs, making suggestions, and playing around with competitive stuff. I hope they know that I wasn’t trying to ruin their holidays – I just couldn’t help myself.

I’ve been shifting to almost exclusively reading (a) science fiction and (b) biographies. It’s an interesting mix that, when combined with some of the investments I’m deep in, has started me thinking about the next 30 years of the innovation curve. Every day, when doing something on the computer, I think “this is way too fucking hard” or “why isn’t the data immediately available”, or “why am I having to tell the software to do this”, or “man this is ridiculous how hard it is to make this work.”

But then I read William Hertling’s upcoming book The Turing Exception and remember that The Singularity (first coined in 1958 by John von Neumann, not more recently by Ray Kurzweil, who has made it a very popular idea) is going to happen in 30 years. The AIs that I’m friends with don’t even have names or identities yet, but I expect some of them will within the next few years.

We have a long list of fundamental software problems that haven’t been solved. Identity is completely fucked, as is reputation. Data doesn’t move nicely between things and what we refer to as “big data” is actually going to be viewed as “microscopic data”, or better yet “sub-atomic data” by the time we get to the singularity. My machines all have different interfaces and don’t know how to talk to each other very well. We still haven’t solved the “store all your digital photos and share them without replicating them” problem. Voice recognition and language translation? Privacy and security – don’t even get me started.

Two of our Foundry Group themes – Glue and Protocol – have companies that are working on a wide range of what I’d call fundamental software problems. When I toss in a few of our HCI-themed investments, I realize that there’s a theme that might be missing, which is companies that are solving the next wave of fundamental software problems. These aren’t the ones readily identified today, but the ones that we anticipate will appear alongside the real emergence of the AIs.

It’s pretty easy to get stuck in the now. I don’t make predictions and try not to have a one year view, so it’s useful to read what Fred thinks since I can use him as my proxy AI for the -1/+1 year window. I recognize that I’ve got to pay attention to the now, but my curiosity right now is all about a longer arc. I don’t know whether it’s five, ten, twenty, thirty, or more years, but I’m spending intellectual energy using these time apertures.

History is really helpful in understanding this time frame. Ben Franklin, John Adams, and George Washington in the late 1700s. Ada Lovelace and Charles Babbage in the mid 1800s. John Rockefeller in the early 1900s. The word software didn’t even exist.

We’ve got some doozies coming in the next 50 years. It’s going to be fun.

  • http://coursefork.org/ Elliott Hauser

    The overarching theme that stands out for me here is the production of certainty. Sharing photos without replicating them is a solution to the problem of the uncertainty of versioning. The Blockchain provides certainty along with anonymity. Better interfaces produce certainty in users about their ability to accomplish things with computers (“Get me directions to the nearest ___”).

    Computers stopped being the limiting factor in technology several years ago, but we’re still adapting to this reality. Human consciousness and thought are the new limiting factor, which means that the advances in ‘computing’ we see must increasingly be ergonomic. Instead of focusing on the performance of computations, we’ll shift to focusing on human access to that computation.

    AI is perhaps the most interesting example of this, as you note. There’s plenty of AI power out there now, but this power must be anthropomorphized through naturalistic interfaces (Siri, etc) before most people can reliably use it.

  • http://petegrif.tumblr.com/ Pete Griffiths

    I think that a top-three software problem is human wetware. Evolution has landed us with a body that is spaghetti code, with remarkably little modularity and hence many side effects. If we want to move medicine from the current paradigms of poisoning or plumbing/carpentry to the next paradigm, debugging wetware problems, then we have some very tricky software issues. We can’t just rewrite our code and make it more modular, so we have to learn to better understand the complexity. Fortunately advertising has brought us tools to manage big data and extract insights, so ironically, free services paid for by advertising may fuel the next wave of medicine.

  • Doug

    Thanks for the suggestions, I’ll get right on that.
    Lately I have been fascinated by graphs and graph theory in software. There are graph databases, and once you learn about them, you see graph applications everyplace. Some big problems will be solved using graphs: they are the structure of neural networks, of lots of big data problems that don’t fit in a relational database, of bio and genetics problems, and of AI (see the sketch below).
    I also believe there will be a new OS that becomes “standard” or popular. Everything is Windows or Linux/Android, and Apple’s OS is FreeBSD at heart. There will be something new that is modern and faster on multiple platforms, with less cruft. At least I would love to see that.
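
    A minimal sketch of the graph idea above, assuming nothing beyond Python’s standard library (the neuron names are purely illustrative, not from any particular graph database):

    ```python
    from collections import deque

    # A graph as an adjacency list: each node maps to its neighbors.
    # The same shape underlies neural networks, social data, and gene pathways.
    graph = {
        "neuron_a": ["neuron_b", "neuron_c"],
        "neuron_b": ["neuron_d"],
        "neuron_c": ["neuron_d"],
        "neuron_d": [],
    }

    def reachable(graph, start):
        """Breadth-first search: find everything connected to `start`."""
        seen, queue = {start}, deque([start])
        while queue:
            node = queue.popleft()
            for neighbor in graph[node]:
                if neighbor not in seen:
                    seen.add(neighbor)
                    queue.append(neighbor)
        return seen

    print(reachable(graph, "neuron_a"))  # all four nodes
    ```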

  • Tom Flaherty

    I admire that you get down to the essential problem of managing data as a core need. The friction this causes slows down all progress, so I will root for any firm that improves how we handle data.

    Also I tend to agree that the Singularity is a joke and Ray Kurzweil should know better. While we will see breakthroughs in machine cognition, I sense little progress on consciousness, which is at the heart of life’s intelligence.

    • http://www.feld.com bfeld

      I think we don’t actually understand how the singularity is going to work. I think the predictions about it are generally nonsensical, but the AI evolution that we are going to see over the next decade is likely going to startle us.

      • http://rocrastination.com/ Ro Gupta

        What did you think of the depiction in Her? I actually thought it was one of the most reasonable ones I’ve seen for most of the movie. As did Kurzweil I believe.

        • http://www.feld.com bfeld

          Bizarrely, we haven’t seen Her yet. It’s front and center on our AppleTV (we bought it a while ago). We’ll watch it soon.

          • http://rocrastination.com/ Ro Gupta

            get on that sh*t .. today!

    • http://www.derekscruggs.com/ Derek

      I agree with you about consciousness and the singularity, but I also suspect that consciousness will be something like relativity – a fundamental reorganization of our understanding of how things work, which in turn enables a round of innovation we can’t even conceive of. Who in 1900 could have predicted the atomic bomb less than 50 years later?

      You can’t predict an Einstein or Newton. Whether it will happen in the next 20 years or 200 or 2,000 or 20,000 is unknowable.

  • http://www.dudumimran.com/ Dudu Mimran

    A unified digital self is something not yet solved. You’ve got bits and bytes scattered across many products and services that don’t talk to each other, and together they become your digital representation. Lots of applications could be built on top of such an identity.

    • http://www.feld.com bfeld

      Yup. One of the reasons I’m so interested in / deep into FullContact is to address that problem.

      • http://www.dudumimran.com/ Dudu Mimran

        I just gave them a try and it looks nice; let’s see if they stay with me for the long run. Btw, I remember you had other investments in the area of contacts, no? Something from the past.

    • Rick

      “A unified digital self is something not solved.”
      .
      And it should stay unsolved. Remember the digital world is just a fake playground, not the real world. When a person loses track of that they enter the world of mental illness. There is no reason for people to have to be themselves online!

      • http://www.dudumimran.com/ Dudu Mimran

        From my point of view it is a matter of convenience and not a replacement for me. For example, such an identity could enable true content personalization for news, email, and books. Another aspect is finding interesting people based on different shared topics of interest.

  • Rebecca

    Brad — I’ve been thinking a lot about the 5-20 year time frame lately. The single best resource I’ve found for thinking about it is the book Generations by William Strauss & Neil Howe. While it doesn’t fit your two categories, I recommend you pick it up (if you haven’t already) ASAP. It’s the most important book I’ve read (out of about 100) this year.

    Also — I was in round one for Will’s new book. He’s a favorite.

    • http://www.feld.com bfeld

      Just grabbed Generations. No Kindle, so I get more dead trees to hang on to for posterity!

  • http://www.venturedeal.com/ Don Jones

    You read biographies? Me, too…love ‘em

    Currently reading Napoleon: A Life, by Andrew Roberts. Absolutely fantastic (and voluminous) reappraisal of the man that Winston Churchill called “The greatest man of action born in Europe since Julius Caesar.”

  • http://ryancasey.me/ Ryan Casey

    I’ll throw my 2 cents in from what we’ve seen on the enterprise front.
    I think one of the unspoken fundamental problems that needs more discussion lies within the high performance computing market. Nearly all of the major software and libraries are still written from a serial processing perspective. As we start to hit Moore’s law limits with heat, scale, and speed, there is a tectonic shift feebly happening toward heterogeneous computing with CPUs and GPUs. In fact, we’ve hit such a wall that many of the world’s fastest supercomputers (nearly all of them, actually) sit at less than a 10% utilization rate.

    Why is this important? Many of the bleeding edge discoveries owe a great deal to running complex formulas and simulations. With technical debt, the innovation cycle is being halved by shit code that isn’t compatible with the non-linear growth of computing. Machine learning, big data, aerospace, bioinformatics, physics – you name it and it uses massive compute power. So when a question such as “why doesn’t my computer just know how to do this” comes up, the reality is, it probably could. But since software has so much legacy and there aren’t enough CS grads coming out knowing how to do serial AND parallel software development, the answer is probably a limiting factor on the software side (see the sketch after this comment). I don’t think this has anything to do with a limitation of our consciousness, lack of ability to perform the task, or ability to think outside of the box. We’re asking the right questions.

    We definitely have some fun doozies coming up, and the next 10 years specifically will be incredible fun to watch, or be a part of, for the Big Data/Machine Learning world.

    And, as an end note, here is something I ask myself daily even though I’m pretty sure I know the answer – why the hell do we still run our OS, software, compute, etc. locally? I feel like we have the solutions to the questions above but it’s incredibly difficult to shift. Hopefully that was more thought provoking than a rant :)
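
    A minimal sketch of the serial-vs-parallel gap described above, using only Python’s standard library (the arithmetic loop is just a stand-in for any compute-bound kernel):

    ```python
    import time
    from multiprocessing import Pool

    def simulate(n):
        """Stand-in for a compute-bound kernel (physics step, ML batch, etc.)."""
        total = 0
        for i in range(n):
            total += i * i
        return total

    if __name__ == "__main__":
        jobs = [2_000_000] * 8

        t0 = time.perf_counter()
        serial = [simulate(n) for n in jobs]   # one core does all the work
        t1 = time.perf_counter()

        with Pool() as pool:                   # spread the jobs across all cores
            parallel = pool.map(simulate, jobs)
        t2 = time.perf_counter()

        assert serial == parallel
        print(f"serial: {t1 - t0:.2f}s, parallel: {t2 - t1:.2f}s")
    ```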

    • http://www.feld.com bfeld

      Good rant! Local vs. not-local is going to become increasingly interesting in the next decade. And some things want to be distributed. Others want to not be. How’s that for AI-like grammar?

    • Rick

      You have to take into consideration that your mind is spinning inside a computer. What I mean is you need to realize that we are in a computing growth phase right now. But!!! One day people will realize that they wasted much precious time playing with their computer instead of enjoying life. When that happens they may just walk away from computing. Your statement “there aren’t enough CS grads coming out” might be the first sign of a shift to a *real* instead of *virtual* lifestyle.
      .
      People can be very strange indeed and if they decide computing is boring or not fun or out of style they will drop it like a hot potato! That will mean you cannot give it away.
      .
      I might be one on the leading edge. I was recently at a store where the credit card terminal was taking a poll. The cashier said “It’s asking you a question” and pointed to the terminal. I said “I don’t care what it’s doing.” The cashier then took my money and handed me my items.

  • http://prometheefeu.wordpress.com/ PrometheeFeu

    I don’t think you can “solve” security (I count privacy as a subset of security) any more than you can solve “scalability” or “reliability”. Secure software systems are just software systems that don’t have bugs that allow people to do things you don’t want them to do. The “don’t want them to do” part is crucial because it forces you to realize that security will always be, at least in part, a custom solution. There are many aspects that can be checked automatically, but only you know whether user X is allowed to perform action Y.

    On a different note, do you have advice on an Ada Lovelace biography?

    • http://www.feld.com bfeld

      Agree on security and privacy. It’s sort of like “humans achieving peace on earth.” Yeah – that’s not going to happen.

      Ada Lovelace – I really liked Ada’s Algorithm: How Lord Byron’s Daughter Ada Lovelace Launched the Digital Age – https://www.goodreads.com/book/show/23396040-ada-s-algorithm

      • http://prometheefeu.wordpress.com/ PrometheeFeu

        Thanks.

    • williamhertling

      I disagree. We haven’t solved scalability or reliability, but my impression is that the tools and processes we’ve used to get to scalable or reliable systems are vastly improved over where they were 10, 20, or 30 years ago. A small team can put out very reliable, very scalable software relatively easily now.

      From a security and privacy perspective, it seems like we’re moving backwards. I was probably more secure 30 years ago, in terms of having control over who had access to my personal data. And partly that stems from security bugs, but it also comes from legislation, government and corporate attitude, etc.

      I think of this as the Cory Doctorow privacy argument: Yes, it’s possible for you to create your own drinkable water supply by building a rain catchment, cisterns, and a system for sterilizing the water, but it’s vastly more efficient when the government delivers drinkable water as a service.

      So long as the government is invested in having backdoors in our code, and corporations are putting malware in the form of DRM and other controls in our code, then we’re always opening up and maintaining potential vectors for abuse, rather than eliminating them.

      • http://prometheefeu.wordpress.com/ PrometheeFeu

        I didn’t mean to imply we can’t develop effective tools to help us build more secure systems. We obviously can, and that, combined with some changes in corporate and governmental policies, can substantially improve the security landscape. But fundamentally, a secure system is a system that doesn’t have security bugs. And while some security bugs can be solved via automated memory management, etc., many security bugs are application-specific. Who should be able to do what, and is that all they are able to do? Only you can know the answer to that question.
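
        To make that concrete, here is a hypothetical sketch (the roles, actions, and policy table are invented for illustration): a tool can supply the checking mechanism, but the policy itself is application-specific knowledge only the author has.

        ```python
        # The mechanism is generic; the interesting knowledge lives in POLICY,
        # which no tool or library can infer for you.
        POLICY = {
            ("editor", "publish_post"): True,
            ("editor", "delete_user"): False,  # only the app's author knows this rule
            ("admin", "delete_user"): True,
        }

        def is_allowed(role: str, action: str) -> bool:
            """Deny by default; allow only what the policy explicitly grants."""
            return POLICY.get((role, action), False)

        assert is_allowed("admin", "delete_user")
        assert not is_allowed("editor", "delete_user")
        ```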

  • Rick

    “Every day, when doing something on the computer…”
    .
    That comes from having so many years of experience using computers. I get the same way. But I try to take it up a notch and remember that computers are nothing more than tools. So I focus on my objectives and the best processes to reach them, not on how I can change my objectives or processes to fit the tool. This way I don’t get stuck with a certain tool. I instead use the tool(s) that help me the most at reaching my objectives. I’ve been noticing that many times paper and a pencil help the most. I don’t have to *learn* the tool with those. They are so flexible that I can *apply* them to most any task. I’m not *hampered* by tools that don’t do things the way I want. Also if I come up with a unique and effective way of doing things then I benefit from the advantage. One thing I must say is that since I can develop software I can automate *my* processes. So while paper and a pencil do better for me with certain tasks, I can sometimes increase that effectiveness by starting up an IDE and creating an electronic tool that mimics what I do with the paper and pen.
    .
    “…AIs that I’m friends with…”
    .
    You might want to be careful there, since computers can’t view you as a friend. You might be flirting with delusions.
    .
    “Identity is completely #$%&*+, as is reputation.”
    .
    The digital world (for example, the web) is not the real world. As such, people take on personas etc. to enjoy the games being played. There is no reason to change that in the digital world. Reputation doesn’t matter on a stage where people play the part of various characters from one day to the next.
    .
    “We still haven’t solved the “store all your digital photos and share them without replicating them” problem.”
    .
    Huh? What makes you say that?
    .
    “Privacy and security – don’t even get me started.”
    .
    Privacy and security in the digital world are nothing more than a dream. What we really need are laws stating that any entity that forces a person to put private information onto the web (public) is invading that person’s privacy. Every person should always have the right to keep private what they choose.

  • TeddyDuchampe

    How on earth do you find time to do all that reading? What is sacrificed to do it? If you were in Book It, how many pizzas do you think you would’ve earned by now?

    • http://www.feld.com bfeld

      I sacrifice nothing – I love to read!

  • http://markocalvocruz.com/ marko calvo-cruz

    “Data doesn’t move nicely between things and what we refer to as ‘big data’ is actually going to be viewed as ‘microscopic data’, or better yet ‘sub-atomic data’ by the time we get to the singularity.”

    As a citizen, does that not scare you Brad?

    I understand that we are inevitably heading toward technology that can surveil (track, trace, pick your verb) people at unprecedented magnitudes, fine.

    But as a VC who invests in these sorts of technological advances thinking how it may benefit society, does it not bother you knowing that these technologies will likewise inevitably fall into the hands of power-hungry people (governments), people with different intentions (safety of citizens at all costs!)?

    I’d be interested to know your position on the moral issues to consider surrounding this topic.

    I fear that, even in the face of the counterpoint that there may or may not be checks and balances to guarantee our personal information is not exploited, as newer generations are raised surrounded by increasingly invasive technology, our line of what’s personal and what’s not will fade until nothing’s personal or private anymore (who here had Facebook when they were kids?).

    Something to think about: Google CEO Eric Schmidt Dismisses the Importance of Privacy

    I don’t mean to sound like a paranoid nut, but I think it’s a concern worth raising.

    • http://www.feld.com bfeld

      It’s totally valid. And everything you are worried about is already happening. It’s going to be a continuous problem for the rest of our existence.

      • Rick

        I don’t understand what you mean. How is this stuff a problem for us?
        .
        Also what do you mean we can’t share pictures without replicating them?

        • http://www.feld.com bfeld

          Security and privacy is a massive problem already.

          Re: Pictures – when was the last time you emailed someone a photo?

          • Rick

            I don’t use email.
            .
            There are ways around that. I worked as a programmer on a system that doesn’t duplicate email content, but I know many do.
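
            A minimal sketch of that kind of deduplication, assuming a content-addressed store (the store and byte strings are illustrative): each photo lives once under the hash of its bytes, and “sharing” becomes passing a reference instead of a copy.

            ```python
            import hashlib

            STORE = {}  # content hash -> photo bytes, stored exactly once

            def put(photo: bytes) -> str:
                """Store a photo once; identical bytes always get the same key."""
                key = hashlib.sha256(photo).hexdigest()
                STORE.setdefault(key, photo)
                return key  # share this reference instead of re-sending the bytes

            ref1 = put(b"...jpeg bytes...")
            ref2 = put(b"...jpeg bytes...")  # sending it "again" adds nothing
            assert ref1 == ref2 and len(STORE) == 1
            ```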

          • http://markocalvocruz.com/ marko calvo-cruz

            Rick, are you familiar with the Edward Snowden/NSA scandal (depending on how you view the issue)?

            The technology that the NSA uses to spy on people was probably invented with consumer interests in mind.

          • Rick

            I’m just not understanding how security and privacy are problems in this context. The systems that we use are just tools. If people use them to do things they should not then the problem is with those people not the tools.

    • http://www.derekscruggs.com/ Derek

      The best counterargument I’ve heard against this (and granted it’s a bit weak) is that along with government surveillance there’s been a corresponding rise in citizen surveillance — think cellphone videos of cops breaking the law, the Edward Snowden revelations, and so on.

  • http://www.dudumimran.com/ Dudu Mimran

    Here is another one, and that is learning. It takes too long to learn new stuff. Maybe the solution is in the area of forgetting :)

  • williamhertling

    You mentioned identity, reputation, and data movement as three things that need to be solved.

    Ian Glazer gave a talk at Defrag called “No Person is an Island: How Relationships Make the IT World More Manageable” that’s not about the soft side of relationships, but about identity management for people, software, and hardware, and how identity and relationships can be combined to get some really effective approaches to managing data and permissions across all the many components of your digital life. I was pretty impressed by the theory of it, although the implementation doesn’t seem very far along.