The Trump Patrimony

An abused child speaks:

“I wouldn’t want to be the last country that tries to negotiate a trade deal with @realDonaldTrump,” posted Eric Trump. “The first to negotiate will win—the last will absolutely lose. I have seen this movie my entire life.”

—Eric Trump, as quoted in “China Called Trump’s Bluff,” an Atlantic article by Jonathan Chait published online via Apple News, May 12, 2025

We know this movie. It’s the one where the sons submit unconditionally to the cruelty of their father. It appears to be as popular in the Trump family today as it was two generations ago. Elsewhere it gets decidedly mixed reviews. Check out the Bible, or the Taviani brothers’ film Padre Padrone. (Like the Bible, it’s available in a dubbed version for you Trumps, who still steadfastly refuse to acknowledge that anything of interest exists in the world except America-first assholes and their medieval prejudices.)

Yes, Eric, I know you’d rather travel to exclusive game preserves in Africa to shoot large animals than read a book, so it might surprise you to learn that history is made by the sons who defy their fathers, not by those who submit to licking papa’s boots in the hope that someday they might inherit papa’s money and papa’s puissance. (That’s a French word, Eric. Look it up.)

Let me do you a favor, kid. Let me recommend another Taviani brothers’ film to you, La Notte di San Lorenzo. Pay special attention to what happens at the end to young Marmugi, the son of the local Fascist party chief who’d assumed throughout the film that following in his father’s footsteps was his key to a bright future of domination over everyone in his village. Above all, consider how easily his actual fate could be yours.

Unbidden Bits—April 16, 2025

Political posts on social media often seem little more than rehearsals for what we’d like to see engraved on the tombstones of our friends and allies, if not on our own. Fair enough. No matter what form we choose to embody our resistance, la lutte continue:

Rat Leaves Sinking Ship…

From the online edition of The Atlantic, May 2025:

I SHOULD HAVE SEEN THIS COMING

When I joined the conservative movement in the 1980s, there were two types of people: those who cared earnestly about ideas, and those who wanted only to shock the left. The reactionary fringe has won.

By David Brooks

Everybody you’ve been sneering at for the last 40 years saw this coming. Everybody who could tell the difference between Edmund Burke and William F. Buckley Jr. saw this coming. I gotta say, you’re as stupid and full of yourself now as you were 40 years ago. You’ve always been a joke as a bully; you’re even more of a joke now as a victim.

A New Kind Of Fire

Rub two seemingly unrelated concepts together and sometimes you get a new kind of fire.

For example, in the left hand of our contemporary muse:

Il vecchio mondo sta morendo. Quello nuovo tarda a comparire. E in questo chiaroscuro nascono i mostri.

The old world is dying. The new one is late in emerging. And in this half-light monsters are born.

—Antonio Gramsci, from his prison notebooks

And in the right hand:

In fact, the overwhelming impression I get from generative AI tools is that they are created by people who do not understand how to think and would prefer not to. That the developers have not walled off ethical thought here tracks with the general thoughtlessness of the entire OpenAI project.

—Elizabeth Lopatto, in the Verge article “The Questions ChatGPT Shouldn’t Answer,” March 5, 2025

If we rub these particular concepts together, will anything interesting really be ignited? That depends, I think, on whether or not we believe that the old world really is dying, and whether or not we also share the consensus developing among our professional optimists that the technologies we’ve begun to call artificial intelligence represent a genuine hope of resurrecting it.

If we believe that Gramsci’s vision was truly prescient and is at last on the verge of being fulfilled, what monsters, exactly, should we be expecting? The resurgence of fascism and fascists is certainly the one most talked about at the moment, but the new fascism seems to me less like a genuinely possible future, and more like the twitching of a partially reanimated corpse. Vladimir Putin can rain missiles down on Ukraine all day long, and we learn nothing. Donald Trump can rage and bluster and condemn, and still not conquer France in a month. Despite the deadly earnestness of both these tyrants, Marx’s “second time as farce” has never seemed more apt.

The more interesting monsters are the tech bros of Silicon Valley, the idiot savants who modestly offer themselves up to serve as our once and future philosopher kings. They give us ChatGPT, and in their unguarded moments natter endlessly on about AGI and the singularity as though they’d just succeeded in turning lead into gold, or capturing phlogiston in a bell jar.

Never mind asking Sam Altman why, despite his boasting, we haven’t yet achieved AGI on current hardware. Ask a simpler question instead: Why don’t we yet have self-driving cars? More specifically, why don’t we have self-driving cars that can drive anywhere a human being can, in any conditions a human being can? Then ask yourself how you would train a self-driving car to handle all the varied versions of the trolley problem that are likely to occur during the millions of miles driven each day on the world’s roads.

We do know how human beings handle these situations, don’t we? If we train a car’s self-driving system in the same way human beings are trained, should we not expect equivalent results?

The answer to our first question is no: we can’t actually describe with any degree of certainty what mental processes govern human reactions in such situations. What we do know is that mind and body, under the pressure of the split-second timing often required, seem to react as an integrated unit, and rational consciousness seems to play little if any part in the decisions taken in the moment. Ethical judgments, if any are involved, appear purely instinctive in their application.

Memory doesn’t seem to record these events and our responses to them as rational progressions either. When queried at the time, what memory presents seems almost dreamlike—dominated by emotions, flashes of disjointed imagery, and often a peculiarly distorted sense of time. When queried again after a few days or weeks, however, memory will often respond with a more structured recollection. The rational mind apparently fashions a socially respectable narrative out of the mental impressions which were experienced initially as both fragmentary and chaotic. The question is whether this now recovered and reconstructed memory is data suitable for modeling future decision-making algorithms, or is instead a falsification which is more comfortable than accurate.

With such uncertainty about the answer to our first question, the answer to our second becomes considerably more complicated. If we don’t know whether or not the human reactions to an actual trolley problem are data-driven in any discoverable sense, where should we source the data for use in training our self-driving car systems? Could we use synthetic data—a bit of vector analysis mixed with object avoidance, and a kind of abstract moral triage of possible casualties? After all, human reactions per se aren’t always anything we’d want emulated at scale, and even if our data on human factors resists rational interpretation, we already have, or can acquire, a considerable amount of synthetic data about the purely technical factors involved in particular outcomes.

Given enough Nvidia processors and enough electricity to power them, we could certainly construct a synthetic data approach, but could we ever really be sure that it would respond to what happens in the real world in the way, and to the degree, that we expect? This might be the time to remind ourselves that neither we nor our machines are gods, nor are our machines any more likely than we are to become gods, not, at least, as long as they have to rely on us to teach them what any god worth worshipping would need to know.

I think Elizabeth Lopatto has the right of it here. Generative AI knows, in almost unimaginable historical completeness, what word is likely to come after the word that has just appeared—on the page, over the loudspeaker, in the mind, whatever—but historical is the operative word here. Every progression it assembles is a progression that has already existed somewhere in the history of human discourse. It can recombine these known progressions in new ways—in that sense, it can be a near-perfect mimic. It can talk like any of us, and when we listen to the recording after the fact, not even we ourselves can tell that we didn’t actually say what we hear coming out of the speaker. It can paint us a Hockney that not even David Hockney himself could be entirely sure was a forgery.
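To make that point concrete, here is a deliberately tiny sketch of my own (not anything drawn from Lopatto’s article or from any actual OpenAI system): a toy bigram model that can only ever emit word-to-word transitions it has already observed in its training text.

```python
# Toy illustration only: a bigram "language model" built from a single
# sentence. Real generative AI is incomparably larger and subtler, but the
# basic move is the same: predict the next word from the statistics of
# text that already exists.
import random
from collections import defaultdict

corpus = (
    "the old world is dying the new one is late in emerging "
    "and in this half-light monsters are born"
).split()

# Record every observed transition: word -> list of words seen after it.
transitions = defaultdict(list)
for current, following in zip(corpus, corpus[1:]):
    transitions[current].append(following)

def generate(start, length=8):
    """Walk the chain of observed transitions, sampling one next word at a time."""
    words = [start]
    for _ in range(length):
        options = transitions.get(words[-1])
        if not options:  # no continuation was ever observed for this word
            break
        words.append(random.choice(options))
    return " ".join(words)

print(generate("the"))  # e.g. "the new one is dying the old world is" (output varies)
```

Every sequence the toy produces is stitched together from pairs that already existed in its training text; scale the table up by a few trillion parameters and the statistics become vastly subtler, but the raw material is still the record of what has already been said.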

The problem—and in my opinion it’s a damning one—is that generative AI, at least as presently designed and implemented, can never present us with anything truly new. The claims made for successful inference algorithms at work in the latest generative AI models seem to me premature at best. While it may ultimately prove true that machines can become independent minds in the way human beings do, proving it would require an assessment of all sorts of factors related to the physical and environmental differences in the development of our two species, an assessment that so far remains beyond the state of the art.

Human beings still appear to me to be something quite different from so-called thinking machines—different in kind, not merely in degree. However we do it, however little we understand even at this late date how we do it, we actually can create new things—new sensations, new concepts, new experiences, new perspectives, new theories of cosmology, whatever. The jury is still out on whether our machines can learn to do the same.

If we’re worried about fascism, or indeed any form of totalitarianism that might conceivably subject our posterity to the kind of dystopian future already familiar to us from our speculative fiction, we’d do well to consider that a society based on machines that quote us back to ourselves in an endlessly generalized loop can hobble us in ways subtler than, but just as damaging and permanent as, the newspeak and doublethink portrayed in Orwell’s 1984. No matter how spellbinding ChatGPT and the other avatars of generative AI can be, we should never underestimate our own capabilities, nor exchange them out of convenience for the bloodless simulacrum offered us by the latest NASDAQ-certified edgelord.

The bros still seem confident that generative AI can transform itself in a single lifetime by some as yet undefined magic from a parrot into a wise and incorruptible interlocutor. If they’re right, I’ll be happy to consign this meditation to the dustbin of history myself, and to grant them and their creations the respect they’ll undoubtedly both demand. In the meantime, though, I think I’ll stick with another quote, this one by kat@weatherishappening.network in a Mastodon thread about ChatGPT:

I don’t believe this is necessarily intentional, but no machine that learns under capitalism can imagine another world.

The American Degeneracy

If there ever was any doubt, there’s none now. There’ll be no justice, no mercy, and no place to hide so long as Trump, Vance, Musk, and their coterie of bootlickers, wannabes, and volunteer thugs are running things. Act accordingly.

More Historical Rhyming

Trump and Musk are about to do to Social Security what Jimmy Hoffa and Frank Fitzsimmons did to the Teamsters’ pension fund. But not to worry, the Republicans will wring their hands for you, if only on those rare occasions when they’re not busy licking Trump’s boots or praising Musk’s moral clarity. And the Democrats? Well, I hear they’ll be glad to help you look under the couch cushions, but only after you guarantee them 50% of what you find.

A Quasi-biblical Revelation

I’ve never been in any doubt about the depth of Donald Trump’s depravity, but I’m familiar enough with German history to understand why half the country voted for him, and why our titans of industry rushed to provide him with the means to fulfill his vile ambitions. I am surprised, though, at some of the people I’m belatedly finding in the miles-long line of fools waiting to kiss his ass.

It’s not just the hypocritical gasbags who’ve been lecturing us for decades about ethics, morality, courage, manliness, and the sanctity of free enterprise. Everyone knows that commoditized list of virtues by heart, and all of us know at least one person who’s made a career out of preaching from it. It’s not as shocking to me as it should be to see them now suddenly burning their own books, scrubbing their own mottoes off the walls, and looking down at their shoes when I ask them why.

No, the people who’ve surprised me are those I’d come to know as decent, compassionate human beings, who now refuse to defend the defenseless, who turn away from those in the greatest need with a shrug, with platitudes, with lectures about choosing one’s fights, with supposedly sage advice that one must be patient, that this too shall pass. This won’t do, this won’t do at all. If we want to keep calling the United States the Land of the Free and the Home of the Brave without being consumed by shame, this temporizing, compromising, agreeing that black is white, that President Zelenskyy should wear a suit when invited to lick the tyrant’s boots—all this nonsense will have to go. We need to do better. A whole lot better.

How It’s Going

Politics as usual hasn’t been as usual lately as it used to be. Were he still among the living, even our 20th-century Nostradamus, Mr. Orwell, might be surprised to learn that Oceania has finally and definitively lost its war with Eurasia, and is presently hiding the rump of itself under the skirts of a demented real estate developer with delusions of grandeur. Definitely not the Big Brother Mr. Orwell promised us, this one, although the red-hat rubes hardly seem to have noticed. Eastasia, meanwhile, is licking its chops, oblivious to its own vulnerabilities, doing its unctuous best to look as inscrutable as Western racists expect.

A couple of degrees more of global warming, a nuclear exchange or two between idiot regimes, and even Elon Musk and his sycophant armies might find themselves roasting rats-on-a-stick over burning rubber tires somewhere angels fear to tread. That, my fellow deeply concerned citizens of the First World, is actually how it’s going, and you didn’t even have to sit through a commercial to hear about it.