When you rub two seemingly unrelated concepts together, sometimes you get a new kind of fire.
For example, in the left hand of our contemporary muse:
Il vecchio mondo sta morendo. Quello nuovo tarda a comparire. E in questo chiaroscuro nascono i mostri
The old world is dying. The new one is late in emerging. And in this half-light monsters are born.
—Antonio Gramsci, from his prison notebooks
And in the right hand:
In fact, the overwhelming impression I get from generative AI tools is that they are created by people who do not understand how to think and would prefer not to. That the developers have not walled off ethical thought here tracks with the general thoughtlessness of the entire OpenAI project.
—Elizabeth Lopatto in the Verge article, The Questions ChatGPT Shouldn’t Answer, March 5, 2025
If we rub these particular concepts together, will anything interesting really be ignited? That depends, I think, on whether or not we believe that the old world really is dying, and whether or not we also share the consensus developing among our professional optimists that the technologies we’ve begun to call artificial intelligence represent a genuine hope of resurrecting it.
If we believe that Gramsci’s prescient vision is finally being fulfilled, what monsters, exactly, should we be expecting? The resurgence of fascism and fascists is certainly the one most talked about at the moment, but the new fascism seems to me less like a genuinely possible future, and more like the twitching of a partially reanimated corpse. Vladimir Putin can rain missiles down on Ukraine all day long, and we learn nothing. Donald Trump can rage and bluster and condemn, and still not conquer France in a month. Despite the deadly earnest of both these tyrants, Marx’s “second time as farce” has never seemed more apt.
The more interesting monsters are the tech bros of Silicon Valley, the idiot savants who modestly offer themselves up to serve as our once and future philosopher kings. They give us ChatGPT, and in their unguarded moments natter endlessly on about AGI and the singularity as though they’d just succeeded in turning lead into gold, or capturing phlogiston in a bell jar.
Never mind asking Sam Altman why, despite his boasting, we haven’t yet achieved AGI on current hardware. Ask a simpler question instead: Why don’t we yet have self-driving cars? More specifically, why don’t we have self-driving cars that can drive anywhere a human being can, in any conditions a human being can? Then ask yourself how you would train a self-driving car to handle all the varied versions of the trolley problem that are likely to occur during the millions of miles driven each day on the world’s roads.
We do know how human beings handle these situations, don’t we? If we train a car’s self-driving system in the same way human beings are trained, should we not expect equivalent results?
The answer to our first question is no, we can’t actually describe with any degree of certainty what mental processes govern human reactions in such situations. What we do know is that mind and body, under the pressure of the split-second timing often required, seem to react as an integrated unit, and rational consciousness seems to play little if any part in the decisions taken in the moment. Ethical judgments, if any are involved, appear purely instinctive in their application.
Memory doesn’t seem to record these events and our responses to them as rational progressions either. When queried at the time, what memory presents seems almost dreamlike—dominated by emotions, flashes of disjointed imagery, and often a peculiarly distorted sense of time. When queried again after a few days or weeks, however, memory will often respond with a more structured recollection. The rational mind apparently fashions a socially respectable narrative out of the mental impressions which were experienced initially as both fragmentary and chaotic. The question is whether this now recovered and reconstructed memory is data suitable for modeling future decision-making algorithms, or is instead a falsification which is more comfortable than accurate.
With such uncertainty about the answer to our first question, the answer to our second becomes considerably more complicated. If we don’t know whether or not the human reactions to an actual trolley problem are data-driven in any real sense, where should we source the data for use in training our self-driving car systems? Could we use synthetic data—a bit of vector analysis mixed with object avoidance, and a kind of abstract moral triage of possible casualties? After all, human reactions per se aren’t always anything we’d want emulated at scale, and even if our data on human factors resists rational interpretation, we already have, or can acquire, a considerable amount of synthetic data about the technical factors involved in particular outcomes.
Given enough Nvidia processors and enough electricity to power them, we could certainly construct a synthetic data approach, but could we ever really be sure that our synthetic data matched what happens in the real world in the way, and to the degree, that we expect? This might be the time to remind ourselves that neither we nor our machines are gods, nor are our machines any more likely than we are to become gods, not, at least, as long as they have to rely on us to teach them what any god worth worshipping would need to know.
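To see how reductive such a rubric would be in practice, here is a deliberately toy sketch, entirely hypothetical and nothing like a real driving stack, of what “a bit of vector analysis mixed with object avoidance” plus an abstract moral triage might look like once reduced to code; every name, weight, and number in it is invented for illustration.

```python
# A deliberately crude, purely hypothetical sketch of the "synthetic data"
# rubric described above: a little geometry for object avoidance plus an
# abstract moral triage term that counts possible casualties.
# Nothing here resembles a real autonomous-driving stack.

from dataclasses import dataclass

@dataclass
class Obstacle:
    y: float          # metres left (-) or right (+) of the lane centre
    casualties: int   # people at risk if this obstacle is struck

def trajectory_cost(steer_y: float, obstacles: list[Obstacle]) -> float:
    """Score one candidate lateral offset; lower is better."""
    cost = 0.1 * abs(steer_y)                # mild preference for staying in lane
    for ob in obstacles:
        gap = abs(ob.y - steer_y)
        if gap < 1.0:                        # on a collision course
            cost += 100.0 * ob.casualties    # the abstract "moral triage" term
        else:
            cost += 1.0 / gap                # simple proximity penalty
    return cost

scene = [Obstacle(y=0.0, casualties=3), Obstacle(y=3.0, casualties=1)]
candidates = [-3.0, 0.0, 3.0]
best = min(candidates, key=lambda y: trajectory_cost(y, scene))
print(f"chosen lateral offset: {best} m")    # -3.0: swerve left, away from both
```

The single casualty-count weight is the whole of the “ethics” in this sketch, and nothing in our data, human or synthetic, tells us how to set it.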
I think Elizabeth Lopatto has the right of it here. Generative AI knows, in almost unimaginable historical completeness, what word is likely to come after the words that have just appeared—on the page, over the loudspeaker, in the mind, whatever—but historical is the operative word here. Every progression it assembles is a progression that has already existed somewhere in the history of human discourse. It can recombine these known progressions in new ways—in that sense, it can be a near-perfect mimic. It can talk like any of us, and when we listen to the recording after the fact, not even we ourselves can tell that we didn’t actually say what we hear coming out of the speaker. It can paint us a Hockney that not even David Hockney himself could be entirely sure was a forgery.
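As a toy illustration of that historical point, and emphatically not a description of how any production model works (real systems condition on far longer context and unimaginably more text), consider a minimal next-word sampler built from bigram counts: every pair of adjacent words it can ever emit already occurs somewhere in its training text.

```python
# A toy "historical" text generator: it can only emit word-to-word
# transitions that already exist in its training corpus, recombined.

import random
from collections import defaultdict

corpus = "the old world is dying and the new world is late in emerging".split()

# Record which words have historically followed which.
follows: dict[str, list[str]] = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev].append(nxt)

def generate(start: str, length: int = 8, seed: int = 0) -> str:
    """Sample a continuation by repeatedly picking a word that has
    followed the current word somewhere in the corpus."""
    random.seed(seed)
    words = [start]
    for _ in range(length):
        options = follows.get(words[-1])
        if not options:        # dead end: nothing ever followed this word
            break
        words.append(random.choice(options))
    return " ".join(words)

print(generate("the"))
# A novel-looking sentence, perhaps, but every adjacent pair in it
# already appears in the training text.
```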
The problem—and in my opinion it’s a damning one—is that generative AI, at least as presently designed and implemented, can never present us with anything truly new. The claims made for successful inference algorithms at work in the latest generative AI models seem to me premature at best. While it may ultimately prove true that machines can become independent minds in the way human beings do, proving it would require an assessment of all sorts of factors related to the physical and environmental differences in the development of our two species, an assessment which so far remains beyond the state of the art.
Human beings still appear to me to be something quite different from so-called thinking machines—different in kind, not merely in degree. However we do it, however little we understand even at this late date how we do it, we actually can create new things—new sensations, new concepts, new experiences, new perspectives, new theories of cosmology, whatever. The jury is still out on whether our machines can learn to do the same.
If we’re worried about fascism, or indeed any form of totalitarianism that might conceivably subject our posterity to the kind of dystopian future already familiar to us from our speculative fiction, we’d do well to consider that a society based on machines that quote us back to ourselves in an endlessly generalized loop can hobble us in ways subtler than, but just as damaging and permanent as, the newspeak and doublethink portrayed in Orwell’s 1984. No matter how spellbinding ChatGPT and the other avatars of generative AI can be, we should never underestimate our own capabilities, nor exchange them out of convenience for the bloodless simulacrum offered us by the latest NASDAQ-certified edgelord.
The bros still seem confident that, by some as yet undefined magic, generative AI can transform itself within a single lifetime from a parrot into a wise and incorruptible interlocutor. If they’re right, I’ll be happy to consign this meditation to the dustbin of history myself, and to grant them and their creations the respect they’ll undoubtedly both demand. In the meantime, though, I think I’ll stick with another quote, this one by [email protected] in a Mastodon thread about ChatGPT:
I don’t believe this is necessarily intentional, but no machine that learns under capitalism can imagine another world.