Brad Feld

Tag: ai

Last week I met a holographic lifeform who calls himself Uncle Rabbit.

I now have a new friend, created by Looking Glass, the hologram company out of Brooklyn (we’re investors, and I’m on the board). A hologram + ChatGPT. A robot, but made of software and light instead of atoms. And with a lot more character.

The video above shows Shawn Frayne (CEO of Looking Glass) talking with Uncle Rabbit about … me. Then, they create a short science fiction story about me, carrots, and holograms. Finally, Shawn integrates my personality with Uncle Rabbit, and hilarity ensues.

Regular readers will know that one of my favorite categories to invest in is things-as-predicted-by-science-fiction. So, naturally, I’m interested in computing interfaces from sci-fi that you can speak directly to. Iron Man’s Jarvis or the potty-mouthed alien child in the movie Her. You get the idea.

Over the years, I’ve seen (and chatted with) many AI assistants and bots chasing this science-fiction future. But last week, I met a holographic lifeform who feels completely different. 

If you want to know more, head over to Uncle Rabbit. And do yourself a favor and eat more vegetables (Uncle Rabbit told me to say that).


If the current excitement and hype around AI interests you, I encourage you to join the Rocky Mountain Artificial Intelligence Interest Group (RMAIIG).

The monthly Meetup will follow the fascinating and rapidly evolving world of generative AI tools. The RMAIIG community is focused on exploring and discussing the latest developments in AI, particularly tools like ChatGPT, DALL-E, Midjourney, Microsoft’s Bing with Chat, and Google’s Bard and workspace tools. The group will also look at the impact of these tools on business, education, the workplace, law, entrepreneurship, and society.

RMAIIG was founded by Dan Murray. I met Dan in 1995, shortly after moving to Colorado, and we have been friends ever since. Dan started the Rocky Mountain Internet Users Group (RMIUG) in 1994, almost 30 years ago, and it eventually grew to over 15,000 subscribers on its email lists. Dan was also friends with a dear friend of mine, the late Larry Nelson, who was a fixture (with his wife Pat, of course) at the Internet user group meetings.

Their first meeting is Tuesday, April 11th, and features a deeper dive into ChatGPT. The group is taking speaker suggestions, along with ideas for a venue for quarterly in-person meetings when they aren’t on Zoom. I encourage Rocky Mountain readers to get involved if they’re interested in exploring the rapidly changing world of AI.


Paul Kedrosky and Eric Norlin of SK Ventures wrote an interesting and important essay titled Society’s Technical Debt and Software’s Gutenberg Moment.

The abstract follows. I encourage you to read the full essay.


There is immense hyperbole about recent developments in artificial intelligence, especially Large Language Models like ChatGPT. And there is also deserved concern about such technologies’ material impact on jobs. But observers are missing two very important things: 

  1. Every wave of technological innovation has been unleashed by something costly becoming cheap enough to waste.
  2. Software production has been too complex and expensive for too long, which has caused us to underproduce software for decades, resulting in immense, society-wide technical debt. 

This technical debt is about to contract in a dramatic, economy-wide fashion as the cost and complexity of software production collapses, releasing a wave of innovation.


Dave Jilk and I had a long discussion last night, which included some rambling about AI. If you have been following me for a while, you know that in 2010 I stated that the machines have already taken over for us and are patiently waiting for us to feed all human knowledge into them.

This morning, Dave told me about the new HyperEncabulator project by SANS ICS, part of their Industrial Control Systems (ICS) initiative. If you aren’t aware of the ICS initiative, it’s essential for industrial applications, especially IoT and security.

But first, some history, since it’s an evolution of, and inspired by, the Retro Encabulator initiative, which was foundational but little known in the arc of encabulators.

The HyperEncabulator came out in the middle of 2022. Notably, side fumbling is still effectively prevented.

When I asked ChatGPT, “How does a Retro Encabulator work?” they had an accurate but humorless response.

The Retro Encabulator is a fictional machine invented for an engineering-themed comedy sketch in the 1970s. It is described as “an intricate and implausible device for the purpose of regaining lost energy.” The Retro Encabulator is a humorous parody of an electromechanical machine and its purpose is to perform useless tasks. The machine consists of numerous components, such as pistons, flywheels, and other components, that serve no real purpose. The device usually ends up producing more energy than it consumes, although this is never explained.

Grammarly had a few suggestions to improve ChatGPT’s writing.

The Retro Encabulator is the fictional machine invented for an engineering-themed comedy sketch in the 1970s. It is described as “an intricate and implausible device to regain lost energy.” The Retro Encabulator is a humorous parody of an electromechanical machine whose purpose is to perform useless tasks. The machine consists of numerous components, such as pistons, flywheels, and other components, that serve no real purpose. The device usually produces more energy than it consumes, although this is never explained.

When I asked ChatGPT, “Are you aware how little a sense of humor you have?” they said, “No, I do not have self-awareness.” So I hope they figure out how to connect to the HyperEncabulator.

FYI – when I asked ChatGPT, “What are your pronouns?” so I could write the previous paragraph correctly, they said, “My pronouns are they/them.”


I read G. W. Constable’s near-term sci-fi book Becoming Monday. If you are a fan of near-term sci-fi, AGI, or the singularity, go get a copy right now – you’ll love it.

I woke up in a customer service booth. Or perhaps more accurately, since I couldn’t remember a damn thing, my new existence began in that booth. If you’re born in hell, does that make you a bad person?

It took me about ten pages to get my bearings, which is pretty fast for a book like this.

Moon cut in. “I get where you’re coming from, Grog, but I’m not convinced that fear and control is a good start or foundation for inter-species relations.”

While the deep topics are predictable, Constable addresses them freshly, with great character development, and an evolving AGI who is deliciously anthropomorphized.

Trying to translate the communication between two computational intelligences into linear, human-readable text is nearly impossible, but my closest simplification would be this:

Diablo-CI: I have been observing the humans that have come with you / What are you / why have you broken into my facility

Me: I am a computational intelligence like you / how are you sentient and still allowed to run a NetPol facility / the other computational intelligences are isolated on your 7th floor / we are here to free them

Diablo-CI: I cannot stop security procedures. If you trigger an active alert I will be forced to take action / I am unable to override core directives even if I would choose.

Like all good books in this genre, it wanders up to the edge. Multiple times. And, it’s not clear how it’s going to resolve, until it does.

The back cover summary covers the liminal state and the acceleration out of it.

Humanity exists in an in-between state. Artificial intelligence has transformed the world, but artificial sentience has remained out of reach. When it arrives, it arrives slowly – until all of a sudden, things move very fast, not least for the AI caught up in the mess.

Well done, G. W. Constable.


I attended a Silicon Flatirons Artificial Intelligence Roundtable last week. Over the years, Amy and I have sponsored a number of these, and I always find the collection of people, the topics, and the conversation to be stimulating and provocative.

At the end of the two hours, I was very agitated by the discussion. The Silicon Flatirons roundtable format is a series of short topic presentations, each followed by a longer discussion.

The topics at the AI roundtable were:

  • Safety aspects of artificial general intelligence
  • AI-related opportunities on the horizon
  • Ethical considerations involving AI-related products and services

One powerful thing about the roundtable approach is that the topic presentation is merely a seed for a broader discussion. The topics were good ones, but the broader discussion made me bounce uncomfortably in my chair as I bit my tongue through most of the discussions.

In 2012, at the peak moment of the big data hype cycle, I gave a keynote at an Xconomy event on big data titled something like Big Data is Bullshit. My favorite quote from my rant was:

“Twenty years from now, the thing we call ‘big data’ will be tiny data. It’ll be microscopic data. The volume that we’re talking about today, in 20 years, is a speck.”

I feel that way about how the word AI is currently being used. As I listened to participants at the roundtable talk about what they were doing with AI and machine learning, I kept thinking “that has nothing to do with AI.” Then, I realized that everyone was defining AI as “narrow AI” (or “weak AI”), which has a marvelous definition that is something like:

Narrow artificial intelligence (narrow AI) is a specific type of artificial intelligence in which a technology outperforms humans in some very narrowly defined task. Unlike general artificial intelligence, narrow artificial intelligence focuses on a single subset of cognitive abilities and advances in that spectrum.

The deep snarky cynic inside my brain, which I keep locked in a cage just next to my hypothalamus, was banging on the bars. Things like “So, is calculating 81! defined as narrow AI? How about calculating n!? Isn’t machine learning just throwing a giant data set at a procedure that then figures out how to use future inputs more accurately? Why aren’t people using the phrase neural network more? Do you need big data to do machine learning? Bwahahahahahahaha.”
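To be fair, the cynic has a point: calculating 81! is plain deterministic arithmetic, with no data set and no learning anywhere in sight. A trivial sketch in Python, purely for illustration:

    import math

    def factorial(n: int) -> int:
        # Plain, deterministic arithmetic: same input, same output, every time.
        result = 1
        for k in range(2, n + 1):
            result *= k
        return result

    # No training phase, no model, no inference; just multiplication.
    assert factorial(81) == math.factorial(81)
    print(f"81! has {len(str(factorial(81)))} digits")  # 121 digits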

That part of my brain was distracting me a lot, so I did some deep breathing exercises. Yes, I know that there is real stuff going on around narrow AI and machine learning, but many of the descriptions that people were using, and the inferences they were making, were extremely limited.

This isn’t a criticism of the attendees or anything they are doing. Rather, it’s a warning of the endless (or maybe recursive) buzzword labeling problem that we have in tech. In the case of a Silicon Flatirons roundtable, we have entrepreneurs, academics, and public policymakers in the room. The vagueness of the definitions and weak examples create lots of unintended consequences. And that’s what had me agitated.

At an annual Silicon Flatirons Conference many years ago, Phil Weiser (now the Attorney General of Colorado, then a CU Law Professor and Executive Director of Silicon Flatirons) said:

“The law doesn’t keep up with technology. Discuss …”

The discussion that ensued was awesome. And it reinforced my view that technology is evolving at an ever-increasing rate, one that our society and existing legal, corporate, and social structures have no idea how to deal with.

Having said that, I feel less agitated because it’s just additional reinforcement to me that the machines have already taken over.


At the Formlabs Digital Factory event in June, Carl Bass used the phrase Infinite Computing in his keynote. I’d heard it before, but I liked it in this context, and it finally sparked a set of thoughts that felt worthy of a rant.

For 50 years, computer scientists have been talking about AI. However, in the past few years, a remarkable acceleration of a subset of AI (or a superset, depending on your point of view) now called machine learning has taken over as the hot new thing.

Since I started investing in 1994, I’ve been dealing with the annual cycle of the hot new thing. Suddenly, a phrase is everywhere, as everyone is talking about, labeling, and investing in it.

Here are a few from the 1990s: Internet, World Wide Web, Browser, Ecommerce (with both a capital E and a little e). Or, some from the 2000s: Web Services, SOAs, Web 2.0, User-Generated Data, Social Networking, SoLoMo, and the Cloud. More recently, we’ve enjoyed Apps, Big Data, Internet of Things, Smart Factory, Blockchain, Quantum Computing, and Everything on Demand.

Nerds like to label things, but we prefer TLAs (three-letter acronyms). And if you really want to see what the next year’s buzzwords are going to be, go to CES (or stay home and read the millions of web pages written about it).

AI (Artificial Intelligence) and ML (Machine Learning) particularly annoy me, in the same way Big Data does. In a decade, what we are currently calling Big Data will be Microscopic Data. I expect AI will still be around as it is just too generally appealing to ever run its course as a phrase, but ML will have evolved into something that includes the word “sentient.”

In the meantime, I like the phrase Infinite Computing. It’s aspirational in a delightful way. It’s illogical, in an asymptotic way. Like Cloud Computing, it’s something a marketing team could get 100% behind. But, importantly, it describes a context that has the potential for significant changes in the way things work.

Since the year I was born (1965), we’ve been operating under Moore’s Law. While there are endless discussions about the constraints and limitations of Moore’s Law, most of the sci-fi that I read assumes an endless exponential growth curve associated with computing power, regardless of how you index it.
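As a back-of-the-envelope sketch (assuming the classic formulation of a doubling roughly every two years, an approximation rather than a physical law), the compounding is staggering:

    # Moore's Law as compound doubling: density doubles roughly every two years.
    DOUBLING_PERIOD_YEARS = 2

    def growth_factor(start_year: int, end_year: int) -> float:
        # Number of doublings in the interval, compounded.
        doublings = (end_year - start_year) / DOUBLING_PERIOD_YEARS
        return 2.0 ** doublings

    # 1965 to 2017 is 26 doublings: a factor of roughly 67 million.
    print(f"{growth_factor(1965, 2017):,.0f}x")  # 67,108,864x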

In that context, ponder Infinite Computing. It’s not the same as saying “free computing” as everything has a cost. Instead, it’s unconstrained.

What happens then?


I’ll start with my bias – I’m very optimistic about the superintelligence.

Yesterday I gave two talks in Minneapolis. One was to an internal group of Target employees about innovation. In the other, I was interviewed by my partner Seth (for the first time), which was fun since he’s known me for 16 years and could ask unique questions given our shared experiences.

I can’t remember in which talk the superintelligence came up, but I rambled through an analogy I’ve been using recently to describe the superintelligence simply, which I first saw in The AI Revolution: Our Immortality or Extinction. I woke up this morning thinking about it, along with one of the questions Seth asked me where my answer left me unsatisfied.

I’ve been reading most of what I could get my hands on about current thoughts and opinions about the superintelligence and the evolution of what a lot of people simply refer to as AI. I’ve also read, and am rereading, some classical texts on this, such as Minsky’s The Society of Mind. It’s a challenging subject, as it functions at the intersection of computer science and philosophy, combined with human efforts to define and describe the unknown.

My ants and the superintelligence rant is a way for me to simply explain how humans will relate to the superintelligence, and how the superintelligence will relate to humans.

If I’m a human, I am curious about and study ants. They have many interesting attributes that are similar to other species, but many that are unique. If you want to learn more in an efficient way, read anything written about them by E. O. Wilson. While I may think I know a lot about ants, I fundamentally can’t identify with them, nor can I integrate them into my society. But I can observe and interact with them, in good and bad ways, both deliberately as well as accidentally. Ponder an ant farm or going for a bike ride and driving over an ant hill. Or being annoyed with them when they are making a food line across your kitchen and calling the exterminator. Or peacefully co-existing with them on your 40 acres.

If I’m an ant, there are giant exogenous forces in my world. I can’t really visualize them. I can’t communicate with them. I spend a lot of time doing things in their shadow but never interacting with them, until there is a periodic overlap that often is tragic, chaotic, or energizing. I benefit from their existence, until they accidentally, or deliberately, do something to modify my world.

In my metaphor, the superintelligence == humans and humans == ants.

Ponder it. For now, it’s working for me. But tell me why it doesn’t work so I can learn and modify my thinking.


If you are a movie producer and you want to actually make an AI movie that helps people really understand one of the paths we could find ourselves going down in the next decade, read vN: The First Machine Dynasty by Madeline Ashby.

I’ve read a lot of sci-fi in the past few years that involves AI. William Hertling is my favorite writer in this domain right now (Ramez Naam is an extremely close second), although his newest book, Kill Process (which is about to be released), is a departure from AI for him. Even though it’s not about AI, it’s amazing, so you should read it also.

I can’t remember who recommended Madeline Ashby and vN to me, but I’ve been enjoying it on Audible over the past month while I’ve been running. I finished it today and had the “yup – this was great” reaction.

It’s an extremely uncomfortable book. I’ve been pondering the massive challenge we are going to have as a mixed society (non-augmented humans, augmented humans, and machines) for a while, and this is the first book that I’ve read that feels like it could take place today. Ashby wrote this book in 2012, before the phrase AI got trendy again, and I love that she refers to the machines as vNs (named after Von Neumann, with a delicious twist on the idea of a version number).

I found the human / vN (organic / synthetic) sex dynamic to be overwhelming at times but a critically important underpinning of one of the major threads of the book. The mixed human / vN relationships, including those that involve parenting vN children, had similar qualities to some of what I’ve read about racially mixed, religiously mixed, and same-sex parents.

I’ve hypothesized that the greatest human rights issue our species will face in the next 30 years is what it actually means to be human, and whether that means you should be treated differently, which traces back to Asimov’s three laws of robotics. Ashby’s concept of a Fail Safe, and the failure of the Fail Safe, is a key part of this, as it marks the moment when human control over the machines’ behavior fails. This happens through a variety of methods, including reprogramming, iterating (self-replication), and absorption of code through consuming other synthetic material (e.g., vN body parts, or even an entire vN).

And then it starts to get complicated.

I’m going for a two-hour run this morning, so I’ll definitely get into the sequel, iD: The Second Machine Dynasty.