Life After Intelligence

Food for Thought

AI is not a “machine.” It certainly runs on machines, but if we define “machine” the way the Industrial Revolution framed it, can we say that what AI does is the same as what a “machine” does? The most obvious difference is that machines, in the traditional sense, solve deterministic problems: given the same inputs, they produce exactly the same outputs. This is the image most people have of “machines.” AI is fundamentally different.

From this perspective, AI does not extend us. It goes beyond a “medium” as Marshall McLuhan defined it: an extension of ourselves. A boss may think of his employees as extensions of himself, as tools or media, but because each employee has some degree of agency, they resist that characterization. AI is similar in this respect; it does not simply extend us. It does something on its own that happens to align, more or less, with what we want.

The difference becomes clearer when we think about the kinds of people we work with. If you hire an undergraduate, you might be able to offload tasks you would otherwise do yourself. If you hire a PhD student, you are no longer simply offloading, because she may be capable of things you are not. If you collaborate with someone at your own level, the idea of offloading no longer applies at all. Likewise, we don’t simply offload our work to AI. It does something we cannot do. At that point, it is no longer a “tool” or a “medium,” but something closer to a collaborator, with the potential to exceed us.

AI will dominate domains where outcomes are measurable. If the goal is to make more money, the system that performs best will win, because wealth is measurable. If the goal is to cure cancer, AI will likely reach solutions before humans do. But in purely subjective domains, AI can only ever be as “good” as humans, because there is no way to determine whether it has outperformed us. In art, if you like something, that is enough. You do not need to justify what you find beautiful.

AI will likely force a shift toward subjectivity as it takes over domains where intelligence can be measured and optimized. We have already seen a version of this. The ability to perform complex arithmetic in one’s head once signaled intelligence. Today it is nearly meaningless, something like a freak show, because calculators have made it trivial. The ability itself did not disappear, but its value did. In the same way, intelligence that solves measurable problems may become a commodity, and therefore no longer something we value. When forced to define the meaning of life, many of us resort to problem-solving of some kind, like saving lives. AI will make problem-solving itself a cheap commodity.

Art is not a solution to a problem. Artists do what they do even when nobody asks for it and nobody needs it. They are not optimizing toward an outcome. They are compelled to produce. If AI continues in this direction, it may push more of us toward that kind of activity, where the point is not efficiency or correctness, but the act itself.