Gertrude!

Gertrude Stein was a steward of the English language as well as its first modern sorcerer. To this day, fifty years after I first read her Lectures in America, I’m amazed by how skillfully she managed to dissolve the accepted frameworks of literacy without simultaneously depriving literacy itself of either its traditional subtlety or its depth. In the twenty-first century, as we’re beginning to believe that the written word lacks the ease of use that terminal-stage capitalism and its media torrents demand, we look to computers to do the work of creating, disseminating, sorting, and interpreting the flood of content for us. That’s a mistake, possibly a catastrophic one. If you want to know why, read Gertrude Stein, the only effective antidote I know of to the Newspeak now being forced on us by the shiny barbarisms of our new century.

The Kirk Circus

Everybody has a take. Everybody is deploring, threatening, scribbling cringeworthy hagiographies, lowering flags to half-mast, offering up thoughts and prayers.

Charlie Kirk got what he deserved. He got what he’d already said he’d be willing to accept, if not endorse, as collateral damage in pursuit of what he considered a vigorous and necessary defense of the Second Amendment.

He never imagined that he’d be the one with a fatal bullet hole in him. That fate was reserved for Jews, immigrants, black and brown people, gay people, women who refused his benevolent instruction, empathetic people, people who’d read the wrong books, and above all, people who’d had a bellyful of his trumpeted triumphs of the will to come, the triumphs that he and his equally deluded buddies were peddling to anyone stupid enough to take them at face value.

Civil society is in abeyance in the US. This was never our fault, but restoring it is nevertheless our duty. We can start by not shedding any tears for this sad, sick puer aeternus, whose intelligence matured tragically ahead of his wisdom.

Unbidden Bits—April 16, 2025

Political posts on social media often seem little more than rehearsals for what we’d like to see engraved on the tombstones of our friends and allies, if not on our own. Fair enough. Whatever form we choose to give our resistance, la lutte continue (the struggle continues):

A New Kind Of Fire

When you rub two seemingly unrelated concepts together, sometimes you get a new kind of fire.

For example, in the left hand of our contemporary muse:

Il vecchio mondo sta morendo. Quello nuovo tarda a comparire. E in questo chiaroscuro nascono i mostri.

The old world is dying. The new one is late in emerging. And in this half-light monsters are born.

—Antonio Gramsci, from his Prison Notebooks

And in the right hand:

In fact, the overwhelming impression I get from generative AI tools is that they are created by people who do not understand how to think and would prefer not to. That the developers have not walled off ethical thought here tracks with the general thoughtlessness of the entire OpenAI project.

—Elizabeth Lopatto, in The Verge article “The Questions ChatGPT Shouldn’t Answer,” March 5, 2025

If we rub these particular concepts together, will anything interesting really be ignited? That depends, I think, on whether or not we believe that the old world really is dying, and whether or not we also share the consensus developing among our professional optimists that the technologies we’ve begun to call artificial intelligence represent a genuine hope of resurrecting it.

If we believe that Gramsci’s vision was truly prescient and is at last on the verge of being fulfilled, what monsters, exactly, should we be expecting? The resurgence of fascism and fascists is certainly the one most talked about at the moment, but the new fascism seems to me less like a genuinely possible future, and more like the twitching of a partially reanimated corpse. Vladimir Putin can rain missiles down on Ukraine all day long, and we learn nothing. Donald Trump can rage and bluster and condemn, and still not conquer France in a month. Despite the deadly earnestness of both these tyrants, Marx’s “second time as farce” has never seemed more apt.

The more interesting monsters are the tech bros of Silicon Valley, the idiot savants who modestly offer themselves up to serve as our once and future philosopher kings. They give us ChatGPT, and in their unguarded moments natter endlessly on about AGI and the singularity as though they’d just succeeded in turning lead into gold, or capturing phlogiston in a bell jar.

Never mind asking Sam Altman why, despite his boasting, we haven’t yet achieved AGI on current hardware. Ask a simpler question instead: Why don’t we yet have self-driving cars? More specifically, why don’t we have self-driving cars that can drive anywhere a human being can, in any conditions a human being can? Then ask yourself how you would train a self-driving car to handle all the varied versions of the trolley problem that are likely to occur during the millions of miles driven each day on the world’s roads.

We do know how human beings handle these situations, don’t we? If we train a car’s self-driving system in the same way human beings are trained, should we not expect equivalent results?

The answer to our first question is no: we can’t actually describe with any degree of certainty what mental processes govern human reactions in such situations. What we do know is that mind and body, under the pressure of the split-second timing often required, seem to react as an integrated unit, and rational consciousness seems to play little if any part in the decisions taken in the moment. Ethical judgments, if any are involved, appear purely instinctive in their application.

Memory doesn’t seem to record these events and our responses to them as rational progressions either. When queried at the time, what memory presents seems almost dreamlike—dominated by emotions, flashes of disjointed imagery, and often a peculiarly distorted sense of time. When queried again after a few days or weeks, however, memory will often respond with a more structured recollection. The rational mind apparently fashions a socially respectable narrative out of the mental impressions which were experienced initially as both fragmentary and chaotic. The question is whether this now recovered and reconstructed memory is data suitable for modeling future decision-making algorithms, or is instead a falsification which is more comfortable than accurate.

With such uncertainty about the answer to our first question, the answer to our second becomes considerably more complicated. If we don’t know whether or not the human reactions to an actual trolley problem are data-driven in any discoverable sense, where should we source the data for use in training our self-driving car systems? Could we use synthetic data—a bit of vector analysis mixed with object avoidance, and a kind of abstract moral triage of possible casualties? After all, human reactions per se aren’t always anything we’d want emulated at scale, and even if our data on human factors resists rational interpretation, we already have, or can acquire, a considerable amount of synthetic data about the purely technical factors involved in particular outcomes.
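
If you want to see how thin that sort of abstraction is, here is a deliberately crude sketch, in Python, of what an “abstract moral triage” reduces to once it’s written down. Everything in it is hypothetical, from the names to the numbers: score each candidate trajectory by collision probability times a casualty weight somebody assigned in advance, then pick the cheapest.

    # A toy "moral triage" over candidate trajectories.
    # Entirely hypothetical: the weights, the probabilities, and the notion
    # that such numbers could be known in the moment are all assumptions.
    from dataclasses import dataclass, field

    @dataclass
    class Hazard:
        kind: str               # e.g. "pedestrian", "vehicle", "barrier"
        collision_prob: float   # estimated probability of impact on this path
        casualty_weight: float  # cost someone chose to assign to harming it

    @dataclass
    class Trajectory:
        name: str
        hazards: list = field(default_factory=list)

    def triage_cost(traj: Trajectory) -> float:
        # Expected "moral cost": probability of collision times assigned
        # weight, summed over every hazard the trajectory crosses.
        return sum(h.collision_prob * h.casualty_weight for h in traj.hazards)

    def choose(trajectories: list) -> Trajectory:
        # Pick whichever trajectory carries the lowest expected cost.
        return min(trajectories, key=triage_cost)

    swerve = Trajectory("swerve left", [Hazard("barrier", 0.9, 1.0)])
    brake = Trajectory("brake only", [Hazard("pedestrian", 0.4, 10.0)])
    print(choose([swerve, brake]).name)  # prints "swerve left"

The arithmetic is trivial; the weights are the whole problem, and they have to be chosen long before the split second the preceding paragraphs describe.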

Given enough Nvidia processors and enough electricity to power them, we could certainly construct a synthetic data approach, but could we ever really be sure that it would respond to what happens in the real world in the way, and to the degree, that we expect? This might be the time to remind ourselves that neither we nor our machines are gods, nor are our machines any more likely than we are to become gods, not, at least, as long as they have to rely on us to teach them what any god worth worshipping would need to know.

I think Elizabeth Lopatto has the right of it here. Generative AI knows, in almost unimaginable historical completeness, what word is likely to come after the word that has just appeared—on the page, over the loudspeaker, in the mind, whatever—but historical is the operative word here. Every progression it assembles is a progression that has already existed somewhere in the history of human discourse. It can recombine these known progressions in new ways—in that sense, it can be a near-perfect mimic. It can talk like any of us, and when we listen to the recording after the fact, not even we ourselves can tell that we didn’t actually say what we hear coming out of the speaker. It can paint us a Hockney that not even David Hockney himself could be entirely sure was a forgery.
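
Strip away the neural machinery and the scale, and the principle can be made literal in a few lines. The following toy, a word-level Markov chain and nothing like a transformer under the hood, emits only transitions it has actually observed in whatever text it was fed; by construction, every progression it produces already existed somewhere in its history.

    # A toy next-word mimic: count which word has followed which,
    # then recombine those observed transitions into "new" sentences.
    # This is a Markov-chain parlor trick, not how modern models work,
    # but it makes the "historical" point literal.
    import random
    from collections import defaultdict

    def train(text: str) -> dict:
        successors = defaultdict(list)
        words = text.split()
        for current, following in zip(words, words[1:]):
            successors[current].append(following)
        return successors

    def generate(successors: dict, start: str, length: int = 10) -> str:
        word, output = start, [start]
        for _ in range(length):
            if word not in successors:
                break
            word = random.choice(successors[word])
            output.append(word)
        return " ".join(output)

    corpus = "the old world is dying and the new world is late in emerging"
    print(generate(train(corpus), "the"))

It will happily recombine those fragments into sentences nobody quite wrote, which is the mimicry; what it cannot do is produce a transition it never saw, which is the point.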

The problem—and in my opinion it’s a damning one—is that generative AI, at least as presently designed and implemented, can never present us with anything truly new. The claims made for successful inference algorithms at work in the latest generative AI models seem to me to be premature at best. While it may ultimately prove true that machines are capable of becoming independent minds in the same way that human beings do, proving it requires an assessment of all sorts of factors related to the physical and environmental differences in the development of our two species, an assessment which so far remains beyond the state of the art.

Human beings still appear to me to be something quite different from so-called thinking machines—different in kind, not merely in degree. However we do it, however little we understand even at this late date how we do it, we actually can create new things—new sensations, new concepts, new experiences, new perspectives, new theories of cosmology, whatever. The jury is still out on whether our machines can learn to do the same.

If we’re worried about fascism, or indeed any form of totalitarianism that might conceivably subject our posterity to the kind of dystopian future already familiar to us from our speculative fiction, we’d do well to consider that a society based on machines that quote us back to ourselves in an endlessly generalized loop can hobble us in ways subtler than, but just as damaging and permanent as, the newspeak and doublethink portrayed in Orwell’s 1984. No matter how spellbinding ChatGPT and the other avatars of generative AI can be, we should never underestimate our own capabilities, nor exchange them out of convenience for the bloodless simulacrum offered us by the latest NASDAQ-certified edgelord.

The bros still seem confident that generative AI can transform itself in a single lifetime by some as yet undefined magic from a parrot into a wise and incorruptible interlocutor. If they’re right, I’ll be happy to consign this meditation to the dustbin of history myself, and to grant them and their creations the respect they’ll undoubtedly both demand. In the meantime, though, I think I’ll stick with another quote, this one by kat@weatherishappening.network in a Mastodon thread about ChatGPT:

I don’t believe this is necessarily intentional, but no machine that learns under capitalism can imagine another world.

A Quasi-biblical Revelation

I’ve never been in any doubt about the depth of Donald Trump’s depravity, but I’m familiar enough with German history to understand why half the country voted for him, and why our titans of industry rushed to provide him with the means to fulfill his vile ambitions. I am surprised, though, at some of the people I’m belatedly finding in the miles-long line of fools waiting to kiss his ass.

It’s not just the hypocritical gasbags who’ve been lecturing us for decades about ethics, morality, courage, manliness, and the sanctity of free enterprise. Everyone knows that commoditized list of virtues by heart, and all of us know at least one person who’s made a career out of preaching from it. It’s not as shocking to me as it should be to see them now suddenly burning their own books, scrubbing their own mottoes off the walls, and looking down at their shoes when I ask them why.

No, the people who’ve surprised me are those I’d come to know as decent, compassionate human beings, who now refuse to defend the defenseless, who turn away now from those in the greatest need with a shrug, with platitudes, with lectures about choosing one’s fights, with supposedly sage advice that one must be patient, that this too shall pass. This won’t do, this won’t do at all. If we want to keep calling the United States the Land of the Free and the Home of the Brave without being consumed by shame, this temporizing, compromising, agreeing that black is white, that President Zelenskyy should wear a suit when invited to lick the tyrant’s boots—all this nonsense will have to go. We need to do better. A whole lot better.

Prolegomenon For a Future De-Nazification

Against the day we know will come, when genuine justice can once again be served, let us make sure we remember the names of every one of Musk’s script kiddies, every recipient of Trump’s January 6 pardons, every member of every Texas or Florida governor’s rat squad, every fake elector put up by Republicans for the 2020 election, every member of every right-wing militia or lawless county sheriff’s posse comitatus, and every member of the U.S. Supreme Court who voted with the majority in Relentless, Inc. v. Department of Commerce, Dobbs v. Jackson Women’s Health Organization, and Citizens United v. Federal Election Commission. If we are ever to restore what the Constitution once promised us—to secure the blessings of liberty to ourselves and our posterity—we will need them to answer publicly and completely for what they’ve attempted to do to our country.

Near Miss

In search of Lost Angeles, February 8, 2025

It was maybe an hour after dark one Friday evening early in 1976 as my wife and I headed east on the Santa Ana Freeway, bound for a weekend visit to my in-laws’ house in Corona. I was in front, behind the wheel of our brand new, rallye-yellow VW Dasher. My wife was in the back, tending to our nearly year-old daughter in her bucket-shaped car seat.

Traffic was surprisingly moderate for the beginning of a weekend. I was cruising in the left lane at just over seventy miles an hour as we approached the Los Angeles River on our way to the San Bernardino freeway junction. Suddenly an enormous Oldsmobile station wagon with its lights out appeared crosswise in the lane ahead of us, with four or five young people seemingly trying with little success to push it out of the way of oncoming traffic.

Terrified, I instinctively hauled the steering wheel as far to the right as I could, feeling the car flex and go up on three wheels as I did. Once safely past the moment of imminent collision, and fearful of what might be approaching from behind us in the lane I’d just blindly swerved into, I hauled the steering wheel back sharply to the left and felt the uplifted rear wheel thump back down on the pavement behind me as we swept past the iconic sheds and storage tanks of the defunct Brew 102 brewery to our right. A little more than an hour later we’d arrived safely at our destination, and my daughter’s grandparents got to fawn over their granddaughter again without even the slightest inkling of how near the angel of death had come to visiting all of us that evening.

Years later I read in some auto magazine that the intentionally flexible unibody construction of the VW Dasher and its Audi 80 stablemate allowed them to recover surprisingly reliably from abrupt steering inputs like those I’d been forced to make that evening. German engineering may no longer be what it once was, and the Brew 102 brewery complex has long since been demolished, but as Los Angeles memories go, it would be hard to come up with one more emblematic of the ambiguities of life in the post-war Southern California capital of la dolce vita. The living was certainly easy enough—jobs were plentiful, the sun reliable, the beaches close by. The fact that sudden death in a river of steel was also only a couple of miles or so from the door of every suburban garage seemed comically irrelevant, at least until you experienced your first genuinely near miss.

The Irrelevance of Precedent

What do I think about TikTok? What do I think about X? What do I think about all our 21st century digital anxieties—China’s nefarious designs on democracy, Musk’s knee-jerk racism, Zuckerberg’s peculiar concept of masculinity, Thiel’s equally peculiar attitude toward his own mortality, and by extension our own?

What I think is that once the box is opened, Pandora can no longer help us—or, in more contemporary terms, scale matters. What does that mean? It means, to resort to the original Latin, Homo sum, humani nihil a me alienum puto: I am human, and nothing human is alien to me. Genuine freedom of speech reveals things to us about ourselves that we’d rather not know. Content moderation can’t help us with that. Neither can the clever pretense of algorithm patrolling, nor bans that, for obvious economic reasons, won’t ever actually be enforced except selectively. Not even some real version of the Butlerian Jihad can help us.

The singularity may never come to pass, but governmental interventions in the creations of the digital age, whether legislative, executive, or judicial, are, like the military career of Josef Švejk, tainted with all the accidental qualities an indifferent universe can conjure. The truth is, we can no longer afford our own immaturity. My advice is simple: don’t go with the tech bros if you want to live. They really have no idea what they’ve wrought.