Accelerating Technology Change and Continuous Learning

At dinner last week, my longtime friend Dave Jilk (we just celebrated our 30th friendship anniversary) tossed a hypothesis at me: as people age, they resist adopting new technologies. This was intended as a personal observation, not an ageist statement, and we devolved into a conversation about brain plasticity. Eventually we popped back up the stack to dealing with changing tech, and at some point I challenged Dave to write an essay on this.

The essay follows. I think he totally nails it. What do you think?

People working in information technology tend to take a producer perspective. Though the notion of a “lean startup” that uses both Agile and Customer Development approaches is ostensibly strongly customer-focused, the purpose of these methodologies is for the company to find and maximize its market, not specifically to optimize the user experience. The following is an observation more purely from the perspective of the consumer of information technology.

On average, as people age they resist adopting new technologies, only doing so slowly and where the benefits compellingly outweigh the time cost and inevitable frustrations. This resistance is not necessarily irrational – after a number of cycles where the new technology proves to be a fad, or premature, or less than useful, we learn that it may behoove us to wait and see. We want to accomplish things, not spend time learning tools that may or may not help us accomplish something.

Consequently, for many decades the pattern has been that technology adoption is skewed toward younger people, not only because they have not yet built up this resistance, but also because they are immersed in the particular new technologies as they grow up.

But something new is happening today, and it is evidence of accelerating rather than merely progressive technology change. Discrete technology advances are giving way to continuous technology advances. Instead of making a one-time investment in learning a new technology and then keeping up with the occasional updates, it is increasingly necessary to invest in learning on a constant, ongoing basis.

I will provide three examples. First, application features and user interfaces are increasingly in a state of continuous flux. From a user perspective, on any given day you may connect to Facebook or Gmail or even a business application like Salesforce.com, and find that there are new features, new layout or organization of screen elements, new keystroke patterns, even new semantics associated with privacy, security, or data entered and displayed. This is most prominent in online systems, but increasingly software updates are automatic and frequent on mobile devices and even full computer systems. On any given day, one may need to spend a significant amount of time re-learning how to use the software before being productive or experiencing the desired entertainment.

My mother is 86 years old. For perspective, when she was 20, television was a new consumer technology, and room-sized digital computers had just been invented. She uses the web, Yahoo mail, and Facebook, impressive feats in themselves for someone her age. But every time Yahoo changes their UI, she gets frustrated, because from her perspective it simply no longer works. The changes neither make things better for her nor add capabilities she cares about. She wants to send email, not learn a new UI; but worse, she doesn’t really know that learning a new UI is what she is expected to do.

Middle-aged people like me are better prepared to cope with these changes, because we’ve gotten used to them, but we still find them frustrating. Perhaps it is in part because we are busy and we have things we need to get done, but it is interesting to see how much people complain about changes to the Facebook interface or iOS updates or what have you. We can figure it out, but it seems more like a waste of time.

Young people gobble up these changes. They seem to derive value from the learning itself, and keeping up with the changes even has a peer pressure or social esteem component. Yes, this is in part because they also have fewer responsibilities, but that cannot be the entire explanation. They have grown up in a world where technology changes rapidly. They didn’t just “grow up with social media,” they grew up with “social media that constantly changes.” In fact, not only do they keep up with the changes on a particular social media service, they are always exploring the latest new services. Several times a year, I hear about a new service that is all the rage with teens and tweens.

A second example, more esoteric but perhaps a leading indicator, is the rise of continuous integration in software development – not just with one’s own development team but with third-party software and tools. No longer is it sufficient to learn a programming language, its idiosyncrasies, its libraries, and its associated development tools. Instead, all of these tools change frequently, and in some cases continuously. Every time you build your application, you are likely to encounter new bugs or incompatibilities related to a change in the language or the libraries (especially open source libraries). Thus, learning about the changes and fixing your code to accommodate them are simply part of the job.

This situation has become sufficiently common that some language and framework projects (Python and Ruby on Rails come to mind) have abandoned upward compatibility. That’s right, you can no longer assume that a new version of your programming language or framework will run your existing applications. This is because you are expected to keep up with all the changes all the time. Continuous integration, continuous learning. Older coders like me view this as a tax on software development time, but younger coders accept it as a given and seem to not only take it in stride but revel in their evolving expertise.
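To make this concrete, here is a minimal sketch using Python’s well-known 2-to-3 transition as one instance of the kind of break described above (the specific snippet is my illustration, not Dave’s):

```python
# Illustrative only: a few Python 2 idioms and their Python 3 equivalents.
# The commented-out lines are valid Python 2 but fail under Python 3;
# this is the "upward compatibility" break described above.

# Python 2:
#   print "hello"          # print was a statement
#   result = 7 / 2         # integer division, yields 3

# Python 3:
print("hello")                     # print is now a function
result = 7 / 2                     # true division, yields 3.5 (use 7 // 2 for 3)
data = "caf\xe9".encode("utf-8")   # text (str) and bytes are now distinct types
```

Multiply this by every library in a typical dependency list, each on its own release cadence, and the continuous-learning tax shows up in every build.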

My final example, a little different from the others, is the pace of client device change. From 1981, when the IBM PC was introduced, until about 2005, one could expect a personal computer system to have a lifespan of 3-5 years. You could get a new one sooner if you wanted, but it would have reasonable performance for three years and tolerable performance for five. By then, the faster speed of the new machine would be a treat, and make learning the latest version of DOS, and later Windows, almost tolerable. Today, five years is closer to the lifespan of a device category. Your recent smartphone purchase is as likely to be replaced in 2017 by a smart watch or smart eyewear as by another smartphone. You won’t just have to migrate your apps and data, and learn the new organization of the screen – you will have to learn a new way to physically interact with your device. Hand gestures, eye gestures, speaking – all of these are likely to be part of the interface. Another five years and it is highly likely that some element of the interface will take input from your brain signals, whether indirectly (skin or electromagnetic sensors) or directly (implants). When you say you are having trouble getting your mind around the new device, you will mean it literally.

The foregoing is primarily just an observation, but it will clearly have large effects on markets and on sociology. It suggests very large opportunities but also a great deal of disruption. And this transition from generational learning to continuous learning is not the last word. Technology will not just keep advancing, it will keep accelerating. As the youth of today, accustomed to continuous learning, reach their 40s and beyond, they will become laggards and slow to adopt in comparison with their children. Even continuous learning will no longer be sufficient. What will that look like?

  • James Mitchell

    A question about “ageist” comments – Let’s assume that 90 percent of 60-year-olds wear green shoes and 90 percent of 20-year-olds wear blue shoes. Is it ageist to say that “older people like to wear green shoes”? My experience with trying to change human behavior is that, everything else equal, the older someone is, the more resistant to change they are. There are some exceptions, but they are few.

    As for the device analogy, the key thing is that you are talking about mobile devices. For the foreseeable future, any advanced mobile device will have a much shorter lifespan than a non-mobile device.

    In terms of software development, in a lot of ways we have gone backwards. The fact that languages such as Python and frameworks such as Rails are not upward compatible is a huge reason to avoid them.

    • DaveJ

      I define abc-ism (where abc is some identifiable trait or category of people) as the strong assumption that an individual’s behavior will match that of the group. So I don’t think discussion of the group statistics constitutes abc-ism; it only does when one irrationally applies the statistics to an individual. But others may have different definitions.

      End user computing devices are increasingly mobile. So that’s just what we’re talking about. The non-mobile devices involved in one’s computing are necessarily opaque.

      You have to keep in mind the overall context of software development today. Most serious projects include significant open source code over which one has little control. So continuous integration and keeping up with the changes is necessary anyway, and the language is just a part of that.

  • http://timjaeger.com/ Timothy Jaeger

    “as people age, they resist adopting new technologies”

    This is funny, most of my older relatives are on Facebook, what are they resisting?

    Has anyone stopped to consider whether this is a general resistance or could this be rephrased as:

    “As people age they are more likely to gravitate to technologies that empower them and fulfill their needs”

    For example, a 50-year-old empty nester might not have a need to take duck-faced selfies and post to Instagram and Vine but might catch up with her daughter’s Facebook feed. It’s not resistance, it’s just that they have no use for the technology. S/he might not be working as a programmer at a startup, so won’t be testing new technologies and frameworks.

    In the same vein, older 60+ people might not be interested in partying in Mexico for Spring Break, so they would be more likely to pass up vacation specials offering certain deals. Not ‘resistance’, just a different demographic / market. Your friend’s hypothesis is highly anecdotal…I think there are plenty of cases to be made for rephrasing the central argument.

    • DaveJ

      All hypotheses are anecdotal, that’s why we test them.

      It might be better to say that on average, people become more selective about their technology adoption. But seriously, if you listen to people who don’t use Facebook or newer technologies about why they don’t, “resistance” seems applicable.

      • Lura

        And Facebook is already yesterday’s news for teens. They’re on to SnapChat, Instagram, and several others. While SnapChat is simply amusing to me, it’s a major way that my teen communicates with friends.

  • http://www.startupmanagement.org/ William Mougayar

    That’s a pretty heady essay, and all good.

    But re: “As the youth of today, accustomed to continuous learning, reach their 40s and beyond, they will become laggards and slow to adopt in comparison with their children,” wouldn’t their 20s prepare them for change well enough that they will continue sailing along the various seas of technologies?

    • eabrandon

      I like your question. Maybe as this generation graduates into adulthood, we will find out whether this is truly an age thing or a generational thing. It makes me think of the differences between myself and my 5yo daughter. I know way more about technology than she does. And everything she knows about the laptop and iOS devices is because I have exposed her to it. However, she interacts with them in a very different way than I do. Which makes me think that when she is an adult, she will be naturally synced up with her technology in a way that I would have to actively learn.

      What I’m getting at: it won’t be a matter of whether or not you are sailing the seas of technologies, it will be HOW you are sailing the seas of technologies.

    • DaveJ

      You are assuming that the current pace of change, at which the youth of today are adept, will be the pace of change in the future. But the whole point of the article is that the pace is *accelerating*. So they will inevitably be challenged as well. What forms this will take is difficult to predict.

      • http://www.startupmanagement.org/ William Mougayar

        To the contrary, I’m not assuming either that the current young generation will not be able to adapt. Change is not linear, and it’s not always based on past behavior.

        • DaveJ

          I agree that we should be careful about making assumptions here – it’s all prophecy rather than prediction, since we don’t know what the state of affairs will be.

          • http://www.startupmanagement.org/ William Mougayar

            #truth

  • williamhertling

    Interesting points, but I want to probe one part: “Today, five years is closer to the lifespan of a device category. Your recent smartphone purchase is as likely to be replaced in 2017 by a smart watch or smart eyewear as by another smartphone.”

    What I’ve noticed is that we’re developing new ways to interact with technology, but each new user interface solves a smaller percentage of use cases for a smaller percentage of users.

    In the beginning, there was the command line and text interfaces. That’s all there was. It solved 100% of problems for 100% of users.

    Along came GUIs. Very nice for 99% of the people, 99% of the time. But if you’re a programmer or hardcore geek, you still spend time in a console and with vi.

    Then along came touch interfaces like smartphones. We love them! But few people exclusively use a smartphone or tablet for their computing. We still mostly have computers with GUIs. Let’s say the touch interface solves 50% of computing for 80% of people. It’s a subset of GUI computing, which is itself a subset of all computing.

    Then come voice interfaces and gestural interfaces: good for gaming, texting, and other simple use cases. Google Glass solves a subset of smartphone interactions.

    Each successive generation of user interaction technology is solving an increasingly small subset of all computing interactions.

    Smart watches and smart glasses may augment our smartphones, but they won’t replace them, in the same way that smartphones haven’t replaced our computers.

    I do believe there will eventually be a reverse trend in which the foremost technology gobbles up the older technology.

    We’ve already seen this with laptops and desktops. Ten to fifteen years ago we had desktop computers and laptops. Now we just plug our laptops into monitors because they’ve grown powerful enough. In another five to ten years (circa 2020), we’ll start to see smartphones replace laptops — not in user interface — but as a portable computing brick, because they’ll have enough power. We’ll just plug them into monitors and keyboards.

    • DaveJ

      I generally agree, and I think the devices tend to divide up by use case more than by user (hardcore programmers also have tablets and smartphones, for example). So the adoption issue still arises in each new category. I do wonder whether this will change as the pace accelerates further, and people will find it necessary to stay within some category.

  • Leonard Welter

    I have been thinking that it is not the technology anymore – it’s the workflow. Technology does not matter (of course it does, but follow me on this) to most people. What people now care about is whether the ‘tool’ is useful and easy to use.

    This post has helped clarify my thoughts around this. Thanks.

    • http://www.feld.com bfeld

      Glad the post helped. I’m constantly striving for even more abstraction between the tool and the actual underlying technology – I use so many different things that fundamentally the thing I care about is the effectiveness of the “tool” as applied to the job. I’m comfortable with continuous change, but as I stare at the pile of hardware crap on my desk that I need to set up “someday” I’m reminded that often I don’t need the new incremental technology.

  • eabrandon

    This is an even bigger concern in the field of education, where professors are perhaps resistant to adapting to new technologies and certainly don’t have the time. However, students have become accustomed to learning through technologies – whether intentionally doing research on the internet or accidentally learning through gaming and communication. In fact, it’s becoming increasingly important for youth to intentionally work on their “unwired” skills, such as teamwork, communication, listening, etc., that will be important to have as an adult. Not to mention body language, anger management, business etiquette, and on and on.

    Incidentally, this ties in to my personal field of research: playing video games is a great way for older generations (and anyone else) to keep their minds sharp and adapt to technology on the fly. The skills acquired in a fun game will directly apply to approaching daily tech challenges like a UI update.

    If anyone needs game recommendations let me know :)

    • http://www.feld.com bfeld

      Great point on playing video games. And yes – this is a dramatic disconnect issue in education.

    • DaveJ

      Here is an interesting thought that derives from your comment. Video games have the notion of “leveling up” where each stage is more challenging and things are moved around and different. You could see the beginnings of this with games of the 80s like Missile Command and PacMan, but it has become a central part of how these games create engagement and Flow. Those who grew up with this will find the evolution of technology (e.g., today’s new Facebook feature) comfortable and even entertaining – it’s like getting to level 7 or whatever. Those who did not grow up with it find it infuriating. Interesting.

  • MorganHoward

    I disagree that “as people age they resist adopting new technologies”. Perhaps it’s the word “resist”. I propose that as people age, they know themselves better. When you’re young, you will date a wide range of people. When an older person dates, they are much more selective. It’s not because they are “resisting” change or new people, they just know themselves better.

    • http://www.feld.com bfeld

      Hmmm – I think that’s semantics – I think you are saying the same thing as Dave. I don’t know if “resist” or “being selective” is more important / better language here, but I think the dynamic is the same.

    • DaveJ

      Saying that people are more selective about technology adoption is a more neutral phrasing. But if you ask people why they did not adopt, it feels more like resistance.

  • Neil

    A real thought-provoking read. My opinion regarding the outcome differs slightly. Technology will adapt, so consumers can adopt. Continuous learning will no longer be required.

    • http://www.feld.com bfeld

      I agree – in 40 years none of this matters because the Cylons will be way more adaptable than us, and if they are nice they’ll treat us like we treat our pets.

      • Jeffrey Hartmann

        I really don’t think it will take that long, Brad, but I think for the foreseeable future things are going to be quite interesting, and the impact of change will definitely become less important to humans as capable artificial intelligence comes into general use. What I personally see is computers and their robot bodies being wildly better than us at some tasks, and humans being many times better at a certain class of tasks. Computers will dominate the mundane, the repetitive, and tasks that can be brute-forced computationally, and humans will be the creatives directing the engines of production. Computers will make poor physicists, but pair a computer with a creative problem solver and together they will accomplish things the world hasn’t even dreamed of. I don’t think we will be pets, but we will be peers who are each thousands of times better at certain classes of problems. I think the next ten and twenty years are going to be very interesting, and very transformative.

    • DaveJ

      See my comment about video games, below. Many people enjoy the continuous learning.

  • http://petegrif.tumblr.com/ Pete Griffiths

    One thing which is difficult to continuously toy with, even to make meaningful improvements, is APIs. An ecosystem built on a set of APIs is incredibly valuable and it’s a real trick to evolve APIs without alienating developers.

    • http://www.feld.com bfeld

      The meta API dynamic is really helpful here. We’ve invested in several companies that try to build stable APIs that go across many services. One we haven’t invested in, but is making nice progress, is Cloud Elements – http://cloud-elements.com/

      This particular aspect of the problem is going to get much worse before it gets better.

      • http://petegrif.tumblr.com/ Pete Griffiths

        You mean broken APIs by companies running to increase functionality?

        • http://www.feld.com bfeld

          Yup

          • http://petegrif.tumblr.com/ Pete Griffiths

            One of the most interesting aspects of Apple’s product strategy has been its willingness to abandon the old in favor of the new. They have been prepared to take heat to keep moving forward. It’s a very difficult course to steer.
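
    For readers who want a concrete picture of the point raised in this thread about evolving APIs without alienating developers, here is a hypothetical sketch (the function and field names are invented for illustration) of one common approach: keep the old contract callable as a thin adapter over the new one, so existing integrations keep working while new clients opt in.

    ```python
    # Hypothetical sketch: version the contract and keep the old response
    # shape alive as a thin adapter over the new implementation.

    def get_user_v2(user_id: str) -> dict:
        """New response shape: the name is split into separate fields."""
        return {"id": user_id, "given_name": "Ada", "family_name": "Lovelace"}

    def get_user_v1(user_id: str) -> dict:
        """Deprecated response shape, preserved as a wrapper over v2."""
        user = get_user_v2(user_id)
        return {"id": user["id"],
                "name": f"{user['given_name']} {user['family_name']}"}

    if __name__ == "__main__":
        print(get_user_v1("42"))  # existing integrations keep working unchanged
        print(get_user_v2("42"))  # new clients opt in to the richer shape
    ```

    The trade-off, of course, is that every preserved version is surface area the provider must keep testing and documenting, which is part of why the problem gets worse before it gets better.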

  • http://www.feld.com bfeld

    An email comment from a reader’s 73-year-old father:

    “Amen. The changes in the tax law (daily) led me to retire; the changes in communications (facebook, twitter, etc.) are unnecessary, from my point of view. Especially when communications are just a way of getting your principal work done, not the principal occupation. The activities of software engineers, for whom the work is a principal occupation, drives me nuts.”

  • http://www.EyeOnJewels.com/ Darius Vasefi

    Great article. The pace of change is truly maddening and, the majority of the time, unnecessary. I don’t know if it’s people trying to justify their jobs or what, but every time I go to Google (maps, adwords, etc.) something is different, every download of the twitter app has new things in it, and there is no way I’m going to read the release notes on any of these. And every time this happens, deep down my resentment grows for companies and apps I love… Another classic screw-up is the new Yahoo email, as everyone knows. Who in their right mind would change so many things on an app used by hundreds of millions of people – sorry, but this is a negative mark on MM from me.

    Re: age, I agree – we just have more on our minds and more to do as we grow up (till retirement), and so we need to be more selective about how we spend our time.

  • http://www.feld.com bfeld

    Another email comment.

    The adoption of technology in the aged is often part of ‘gerontechnology’ studies – or the study of technology and its use/application among aging populations. There are several papers on the topic. You can find the Gerontechnology Journal here (they have a current paper on technology adoption rates of seniors).

    Dave’s short essay on the topic hits it on the head. Thanks for posting!

  • http://www.semilshah.com/ Semil Shah

    Fascinating essay. I love the way it wraps up at the end — am curious, briefly, could you share your thoughts on SnapChat given this framework?

    • http://www.feld.com bfeld

      As a 47-year-old, I’ve tried SnapChat a few times and “don’t get it,” which just confirms some of the dynamics Dave talks about.

  • Tom

    I see this dynamic as rational, not any reflection on age-related ability. Young people have a much lower opportunity cost of time, and there is high value to building expertise. As we age, the opportunity cost of time goes up – we already have skills that are valuable and we have high demands on our time – and we leave the task of figuring out which novelties are relevant to people who are young, have low-value time, and are looking for expertise to build. As we age further, we realize that our existing expertise is becoming dated (if it is), but now the hill we have to climb to “come up to speed” on a new expertise is high, and the time we have ahead to utilize new skills is shortening, so again it doesn’t make sense. Eventually, ability does decline, but it’s pretty late in life. We stop being “cutting edge” adopters long before that.