A New Kind Of Fire

When you rub two seemingly unrelated concepts together, sometimes you get a new kind of fire.

For example, in the left hand of our contemporary muse:

Il vecchio mondo sta morendo. Quello nuovo tarda a comparire. E in questo chiaroscuro nascono i mostri

The old world is dying. The new one is late in emerging. And in this half-light monsters are born.

—Antonio Gramsci, from his prison notebooks

And in the right hand:

In fact, the overwhelming impression I get from generative AI tools is that they are created by people who do not understand how to think and would prefer not to. That the developers have not walled off ethical thought here tracks with the general thoughtlessness of the entire OpenAI project.

—Elizabeth Lopatto in The Verge article, The Questions ChatGPT Shouldn’t Answer, March 5, 2025

If we rub these particular concepts together, will anything interesting really be ignited? That depends, I think, on whether or not we believe that the old world really is dying, and whether or not we also share the consensus developing among our professional optimists that the technologies we’ve begun to call artificial intelligence represent a genuine hope of resurrecting it.

If we believe that Gramsci’s prescient vision is finally being fulfilled, what monsters, exactly, should we be expecting? The resurgence of fascism and fascists is certainly the one most talked about at the moment, but the new fascism seems to me less like a genuinely possible future, and more like the twitching of a partially reanimated corpse. Vladimir Putin can rain missiles down on Ukraine all day long, and we learn nothing. Donald Trump can rage and bluster and condemn, and still not conquer France in a month. Despite the deadly earnest of both these tyrants, Marx’s “second time as farce” has never seemed more apt.

The more interesting monsters are the tech bros of Silicon Valley, the idiot savants who modestly offer themselves up to serve as our once and future philosopher kings. They give us ChatGPT, and in their unguarded moments natter endlessly on about AGI and the singularity as though they’d just succeeded in turning lead into gold, or capturing phlogiston in a bell jar.

Never mind asking Sam Altman why, despite his boasting, we haven’t yet achieved AGI on current hardware. Ask a simpler question instead: Why don’t we yet have self-driving cars? More specifically, why don’t we have self-driving cars that can drive anywhere a human being can, in any conditions a human being can? Then ask yourself how you would train a self-driving car to handle all the varied versions of the trolley problem that are likely to occur during the millions of miles driven each day on the world’s roads.

We do know how human beings handle these situations, don’t we? If we train a car’s self-driving system in the same way human beings are trained, should we not expect equivalent results?

The answer to our first question is no: we can’t actually describe with any degree of certainty what mental processes govern human reactions in such situations. What we do know is that mind and body, under the pressure of the split-second timing often required, seem to react as an integrated unit, and rational consciousness seems to play little if any part in the decisions taken in the moment. Ethical judgments, if any are involved, appear purely instinctive in their application.

Memory doesn’t seem to record these events and our responses to them as rational progressions either. When queried at the time, what memory presents seems almost dreamlike—dominated by emotions, flashes of disjointed imagery, and often a peculiarly distorted sense of time. When queried again after a few days or weeks, however, memory will often respond with a more structured recollection. The rational mind apparently fashions a socially respectable narrative out of the mental impressions which were experienced initially as both fragmentary and chaotic. The question is whether this now recovered and reconstructed memory is data suitable for modeling future decision-making algorithms, or is instead a falsification which is more comfortable than accurate.

With such uncertainty about the answer to our first question, the answer to our second becomes considerably more complicated. If we don’t know whether or not the human reactions to an actual trolley problem are data-driven in any real sense, where should we source the data for use in training our self-driving car systems? Could we use synthetic data—a bit of vector analysis mixed with object avoidance, and a kind of abstract moral triage of possible casualties? After all, human reactions per se aren’t always anything we’d want emulated at scale, and even if our data on human factors resists rational interpretation, we already have, or can acquire, a considerable amount of synthetic data about the technical factors involved in particular outcomes.
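To make the question concrete, here is a purely hypothetical sketch of what such synthetic data might look like: invented scenario fields, an invented triage rule, Python names of my own choosing, and no resemblance claimed to how any actual self-driving system is trained.

# Hypothetical only: a generator of labeled scenarios combining simple
# physics (can we stop in time?), object avoidance (is the escape lane
# clear?), and an abstract moral triage (which choice risks fewer people?).
import random
from dataclasses import dataclass

@dataclass
class Scenario:
    speed_mps: float           # vehicle speed when the hazard appears
    braking_distance_m: float  # distance needed to stop at that speed
    hazard_distance_m: float   # distance to the people ahead
    swerve_clear: bool         # is the escape lane actually empty?
    people_ahead: int          # possible casualties if we stay the course
    people_in_escape: int      # possible casualties if we swerve

def label(s):
    """The action an abstract triage rule would prefer for this scenario."""
    if s.braking_distance_m <= s.hazard_distance_m:
        return "brake"         # physics alone resolves it, no dilemma
    if s.swerve_clear and s.people_in_escape < s.people_ahead:
        return "swerve"        # fewer expected casualties on the other path
    return "brake"             # default: stay predictable

def synthesize(n):
    """Generate n labeled scenarios for training or evaluation."""
    data = []
    for _ in range(n):
        s = Scenario(
            speed_mps=random.uniform(5, 35),
            braking_distance_m=random.uniform(5, 80),
            hazard_distance_m=random.uniform(5, 60),
            swerve_clear=random.random() < 0.5,
            people_ahead=random.randint(1, 4),
            people_in_escape=random.randint(0, 4),
        )
        data.append((s, label(s)))
    return data

print(synthesize(3))

Every number and every rule in that sketch is a designer's decree, which is exactly the point: the moral triage is only as trustworthy as the assumptions someone chose to hard-code.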

Given enough Nvidia processors and enough electricity to power them, we could certainly construct a synthetic data approach, but could we ever really be sure that our synthetic data matched what happens in the real world in the way, and to the degree, that we expect? This might be the time to remind ourselves that neither we nor our machines are gods, nor are our machines any more likely than we are to become gods, not, at least, as long as they have to rely on us to teach them what any god worth worshipping would need to know.

I think Elizabeth Lopatto has the right of it here. Generative AI knows, in almost unimaginable historical completeness, what word is likely to come after the word that has just appeared—on the page, over the loudspeaker, in the mind, whatever—but historical is the operative word here. Every progression it assembles is a progression that has already existed somewhere in the history of human discourse. It can recombine these known progressions in new ways—in that sense, it can be a near-perfect mimic. It can talk like any of us, and when we listen to the recording after the fact, not even we ourselves can tell that we didn’t actually say what we hear coming out of the speaker. It can paint us a Hockney that not even David Hockney himself could be entirely sure was a forgery.
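For anyone who wants to see that claim in miniature, here is a deliberately crude sketch: a toy bigram model in Python, my own invention and nothing like the neural networks behind ChatGPT, whose one virtue is that it makes the limitation visible. Every continuation it produces is a word-for-word succession already present somewhere in the text it was trained on.

# A toy illustration only: real generative models work over tokens with
# learned neural networks, not lookup tables of word pairs. The point this
# sketch makes is narrower: the model can only ever emit a next word that
# has already followed the current word somewhere in its training text.
import random
from collections import defaultdict

def train_bigrams(corpus):
    """Map each word to the words that have historically followed it."""
    words = corpus.split()
    followers = defaultdict(list)
    for current, nxt in zip(words, words[1:]):
        followers[current].append(nxt)
    return followers

def continue_text(followers, start, length=10):
    """Extend `start` by repeatedly sampling a historically attested next word."""
    out = [start]
    for _ in range(length):
        options = followers.get(out[-1])
        if not options:   # nothing in the recorded history follows this word
            break
        out.append(random.choice(options))
    return " ".join(out)

corpus = "the old world is dying and the new world is slow to appear"
print(continue_text(train_bigrams(corpus), "the"))

Scale that table up by a few hundred billion parameters and some very clever statistics and you get something incomparably more fluent, but, if the argument above is right, the recombination of the already-said remains the heart of the trick.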

The problem—and in my opinion it’s a damning one—is that generative AI, at least as presently designed and implemented, can never present us with anything truly new. The claims made for successful inference algorithms at work in the latest generative AI models seem to me premature at best. While it may ultimately prove true that machines can become independent minds in the way human beings do, proving it would require an assessment of the physical and environmental differences in how our two kinds of minds develop, and that assessment so far remains beyond the state of the art.

Human beings still appear to me to be something quite different from so-called thinking machines—different in kind, not merely in degree. However we do it, however little we understand even at this late date how we do it, we actually can create new things—new sensations, new concepts, new experiences, new perspectives, new theories of cosmology, whatever. The jury is still out on whether our machines can learn to do the same.

If we’re worried about fascism, or indeed any form of totalitarianism that might conceivably subject our posterity to the kind of dystopian future already familiar to us from our speculative fiction, we’d do well to consider that a society based on machines that quote us back to ourselves in an endlessly generalized loop can hobble us in ways subtler than, but just as damaging and permanent as, the newspeak and doublethink portrayed in Orwell’s 1984. No matter how spellbinding ChatGPT and the other avatars of generative AI can be, we should never underestimate our own capabilities, nor exchange them out of convenience for the bloodless simulacrum offered us by the latest NASDAQ-certified edgelord.

The bros still seem confident that generative AI can transform itself in a single lifetime by some as yet undefined magic from a parrot into a wise and incorruptible interlocutor. If they’re right, I’ll be happy to consign this meditation to the dustbin of history myself, and to grant them and their creations the respect they’ll undoubtedly both demand. In the meantime, though, I think I’ll stick with another quote, this one by @[email protected] in a Mastodon thread about ChatGPT:

I don’t believe this is necessarily intentional, but no machine that learns under capitalism can imagine another world.

Prolegomenon For a Future De-Nazification

Against the day we know will come, when genuine justice can once again be served, let us make sure we remember the names of every one of Musk’s script kiddies, every recipient of Trump’s January 6 pardons, every member of every Texas or Florida governor’s rat squad, every fake elector put up by Republicans for the 2020 election, every member of every right-wing militia or lawless county sheriff’s posse comitatus, and every member of the U.S. Supreme Court who voted with the majority in Relentless, Inc. v. Department of Commerce, Dobbs v. Jackson Women’s Health Organization, and Citizens United v. Federal Election Commission. If we are ever to restore what the Constitution once promised us—to secure the blessings of liberty to ourselves and our posterity—we will need them to answer publicly and completely for what they’ve attempted to do to our country.

The Irrelevance of Precedent

What do I think about TikTok? What do I think about X? What do I think about all our 21st century digital anxieties—China’s nefarious designs on democracy, Musk’s knee-jerk racism, Zuckerberg’s peculiar concept of masculinity, Thiel’s equally peculiar attitude toward his own mortality, and by extension our own?

What I think is that once the box is opened, Pandora can no longer help us—or, in more contemporary terms, scale matters. What does that mean? It means, to resort to the original Latin, Homo sum, humani nihil a me alienum puto (I am human, and nothing human is alien to me). Genuine freedom of speech reveals things to us about ourselves that we’d rather not know. Content moderation can’t help us with that. Neither can the clever pretense of algorithm patrolling, nor bans that, for obvious economic reasons, won’t ever actually be enforced except selectively. Not even some real version of the Butlerian Jihad can help us.

The singularity may never come to pass, but governmental interventions in the creations of the digital age, legislative, executive, or judicial, are, like the military career of Josef Švejk, tainted with all the accidental qualities an indifferent universe can conjure. The truth is, we can no longer afford our own immaturity. My advice is simple: don’t go with the tech bros if you want to live. They really have no idea what they’ve wrought.

Unbidden Bits—December 23, 2024

Historians of the Future:

Frank Herbert’s Dune seems to have been written by a man who’d read too much Gibbon. Max’s Dune: Prophecy, on the other hand, seems to have been created by people who’ve watched too much TikTok.

Viewed from a certain critical perspective, both are satirical masterpieces, and like all such masterpieces, feel eerily appropriate to their times.

The Rush To Surrender

Whenever I read about our new capitalist overlords gutting each other over who gets to profit from the rabbit-out-of-a-hat tricks of large language models, I have to laugh. Here are a handful of quotes that will give you some idea why:

————————————————————————

I don’t believe this is necessarily intentional, but no machine that learns under capitalism can imagine another world.

—@[email protected], from a Mastodon thread about ChatGPT

————————————————————————

Und so wie gesellschaftliche und technische Entwicklungen zuvor die Unantastbarkeit Gottes in Zweifel zogen, so stellen sie nun die „Sakralisierung” des Menschen zur Disposition.

And just as social and technical developments once cast doubt on the sanctity of God, so they now subject the sacralization of humanity to renegotiation.

—Roberto Simanowski, Todesalgorithmus: Das Dilemma der künstlichen Intelligenz (Passagen Thema)

————————————————————————

Der tiefere Sinn der Singularity-These ist die technische Überwindung kultureller Pluralität.

The deeper meaning of the singularity thesis is the technological overcoming of cultural plurality.

—Roberto Simanowski, Todesalgorithmus: Das Dilemma der künstlichen Intelligenz (Passagen Thema)

————————————————————————

Die Aufklärung ist der Ausgang des Menschen aus seiner selbstverschuldeten Unmündigkeit.

The Enlightenment is humankind’s emergence from its self-incurred immaturity.

—Immanuel Kant, Beantwortung der Frage: Was ist Aufklärung?

————————————————————————

Remember, imbeciles and wits,
sots and ascetics, fair and foul,
young girls with little tender tits,
that DEATH is written over all.

Worn hides that scarcely clothe the soul
they are so rotten, old and thin,
or firm and soft and warm and full—
fellmonger Death gets every skin.

All that is piteous, all that’s fair,
all that is fat and scant of breath,
Elisha’s baldness, Helen’s hair,
is Death’s collateral:

—Basil Bunting, Villon

————————————————————————

Say what you will, it’s clear to me that the Pax Americana, and more generally humanism itself, with all its honorable striving, are both well and truly done. Contemplating what passes for virtue and wisdom among those so obviously eager to feast on the leftovers would make even the gods laugh.

Yeah. Okay. Fine.

Kamala. She’s not Trump. I get it. More importantly, she’s overcome the obvious disadvantages, even in California, of her race and gender, and like President Obama before her, she’s visibly ambitious, with the talent, the intelligence, and the courage to realize those ambitions in a system designed to discriminate against people like her. Also like President Obama, she seems to have managed to steer her way through the myriad corruptions set out in our system to trap the ambitious without succumbing to any of them as thoroughly as many of her peers have.

Given the limitations of the Presidency, she’ll do. She’s got my vote. What would be nice, though, is if we’d all stop for a moment, look beyond the hagiography, and see that we’ve been beating a dead horse politically for decades now with no resolution in sight. Kamala won’t help us with that. She can’t. She owes things to people, and we aren’t those people. We’re the people who can’t survive the decadence, the corruption, the cluelessness about the future that both parties are obliged by their true allegiances to defend, the hostages they’ve all given to fortune to get where they are today. Politics is not a consumer good; it’s a slow-motion conflict over who gets to decide how we approach the future. We forget that at our peril.

Unbidden Bits—May 30, 2024

I’m in Arizona, in the checkout line at Walmart, clutching something I need today that was two days away by Amazon.

I look around at the patriarchal beards, the camouflage cargo pants, thinking idle thoughts about the carnival barkers on Fox News, Samuel Alito’s wife, how temporary the privilege of calling Trump a felon will probably turn out to be.

It comes to me then: A people camping out in the ruins of their own civilization. I pay for my indispensable, cross the parking lot, head back home.

I throw my car keys on the kitchen counter, hearing Hillary the imposter’s earnestness, her arrogance, back in 2016. It was way too late even then, and now….

“Going forward,” as the Wall Street pundits are so fond of saying, it’s not what we do with them that will matter. It’s what they’ll do with us.

Brecht in the 21st Century*

Nur wer im Wolfstand lebt, lebt angenehm.

Years ago, when I first fell in love with a scratchy early recording of die Dreigroschenoper, I misheard the famous punchline from die Ballade vom angenehmen Leben (The Ballad of the Comfortable Life), which actually goes Nur wer im Wohlstand lebt, lebt angenehm.

The original line, which, translated into English, means something like “Only he who is well-off can live a comfortable life,” came, in my misheard version, to mean something like “Only he who adopts the habits of a predator can live a comfortable life.”

When I discovered my mistake, my first take was, “God, how embarrassing,” and my second, which cheered me up a little, was “Hey, I just made my first pun in German.” (A friend of mine, who’d been partially deaf from birth, once confessed to me that he’d learned early on that when he misheard something in a social situation, being credited with a clever pun was much more to his advantage than being considered slow-witted. I now knew exactly what he’d meant.)

Brecht’s original line represented a very understandable attitude for anyone, let alone a Marxist, witnessing the horrors of the German 1920s, but I have to wonder if he might not also have approved of my corrupted version had he been confronted with the viciousness of 21st century neoliberalism in the United States, or the schwarze Null (“black zero”) fetishism of Wolfgang Schäuble and the CDU in the reunified Germany of today. With all due respect to the genius of the original, I’d like to think so….

*Apologies to any native German speakers who might be reading this. Der Wolfstand is not a genuine German word, as far as I know, and I have no idea what anyone born into the language would make of my accidental corruption of Brecht’s famous line. All I know is that it’s stuck with me all these years as somehow being even more Brechtian than the original. This is blasphemy, or at least lèse-majesté, I admit, but I mean well….