Unbidden Bits—April 16, 2025

Political posts on social media often seem little more than rehearsals for what we’d like to see engraved on the tombstones of our friends and allies, if not on our own. Fair enough. No matter what form we choose to embody our resistance, la lutte continue:

From 1995: Ziggurats

Post-modern architecture comes to the campus—from a previous incarnation on the Web

Anywhere you look in the Nineties, you’ll find the whimsies of Post-Modernism grinning back at you. Every mall seems to evoke the Forum Romanum, every apartment block the Baths of Caracalla.

It’s a clever sort of classicism, but not a rich one. With little money available in modern times for marble, let alone for craftsmen willing to spend their lives chipping away at acanthus leaves, the glory of imperial Rome is only hinted at.

Which, I gather, is exactly the intent. Post-Modern architects claim no allegiance to a particular style; their stated passion is to reintroduce the decorative element into architectural design, to abandon the idea of the city as a “machine for living” in favor of something that won’t give us all nightmares.

Ironic quotations from the past would nevertheless seem to be an essential element of their designs; without them the architect would be vulnerable to the charge of bad decoration, or worse still, of dishonesty. (Stone is stone. Prestressed concrete isn’t. “Form follows function,” etc.) By impudently placing a column where no column could possibly be, Philip Johnson can justifiably claim to be as candid as Mies van der Rohe about the distinction between the structural and the “merely” decorative.

In any event, the products of more than ten years of Post-Modern construction are now all around us, and the surprising thing is that many of them actually seem to work pretty well.

On the University of California campus where I earn my living, most of the recent buildings are Post-Modern. With their porticos and exterior staircases, their friezes of semi-engaged columns or sunken windows set into beveled architraves, they resemble — at least from a distance — the modest public buildings of a state capital in the Midwest.

On closer inspection, the classical illusion is tempered by the realization that the columns are shells over steel beams, the architraves stucco over styrofoam; that the rooftops above the tiled eaves are burdened with roaring machinery and impossibly large exhaust funnels.

Nevertheless, with their exterior walls painted in shades of pink, sienna, and pale gray-green to match the eucalyptus trees which surround them, their staircases faced in polychrome Mediterranean tile, these pseudo-Roman exercises seem much more restful, more human, than the angular modernist monstrosities from the Sixties which stand beside them.

We’re told that imperial Rome was also painted, that brick and tile were as much a feature of its public facades as marble. Crossing the grass quadrangle between “Physical Sciences North” and “Physical Sciences South,” I’d like to think so. It would help explain why I can imagine men in togas standing under these porticos, or coming down these staircases, something which I could never imagine on the steps of the grand white palaces of Washington.

The illusion of less complicated times lingers for a moment, then I realize that if this were truly Rome, there’d be a long row of monuments to Republican senators along the edge of the quadrangle, or perhaps an equally long row of crucified Christians. That, I suspect, would constitute more irony than the architect intended, or the public relations office on our campus would be willing to endure.

The Fascist International

Unthinkable thoughts? An oxymoron of a concept surely, at least it appears that way to anyone who takes the idea of personal liberty seriously. Any attempt to explain how it became the cornerstone of moral education in the West would be too complex to include in this meditation, but one critical aspect of that potential explanation is simple enough: How a child reacts the first time he catches an adult in a self-serving lie, or more properly, how the child perceives the social significance of that lie, can be far more important than most people think in determining what kind of adult that child will grow up to be.

For reasons that should be obvious to anyone who’s more than an occasional visitor to Dogtown, I’ve long considered unthinkable thoughts to be a false category, one established by tyrants for the sole purpose of controlling the allegiances of their subjects. Given that I’m a more or less direct intellectual descendant of the Enlightenment, my response to them is to quote Immanuel Kant:

Sapere aude! Habe Mut, dich deines eigenen Verstandes zu bedienen! ist also der Wahlspruch der Aufklärung.

Dare to know! Have the courage to avail yourself of your own understanding! is therefore the motto of the Enlightenment.

Unworthy thoughts, on the other hand—those that take the path of least emotional resistance, and in doing so escape into the world before being considered in the full light of all our mental faculties—are real enough. Despite what our pious god botherers demand, they are also common enough and harmless enough in a comparative sense not to be judged as sins by some chimerical Father in Heaven, or some equally chimerical Freudian superego. In fact, to the extent that such thoughts prioritize honesty over our all too common tendency to create a falsely competent persona, they can actually be a blessing.

Which is not to say that they can’t also be embarrassing. Yesterday I deleted my most recent post here—not because I found it indefensible, but because I found it irrelevant. Angry screeds against the enshittification of our public discourse, the arrogance of our billionaire know-it-alls, the ignorant viciousness of our sociopathic president and his followers, and the sorry state of our geopolitics in general are everywhere one looks these days. Adding to them can be tempting, but succumbing to that temptation can all too easily turn into one of those disabling addictions that prove nearly impossible to overcome.

Relying on purely rhetorical, social media-style carping as our sole defense against the lunatics responsible for our current political, economic, and social agonies is in some fundamental sense a fool’s errand. As far as I can see, it isn’t actually helping anyone. By most accounts the crisis we currently find ourselves in as a society is overdetermined to an unprecedented degree. How we think about it depends on which aspects of its driving forces we believe to be most vulnerable to intervention, and what kinds of interventions we believe are within our power to organize and carry out.

The sad fact is that the current worldwide rise of fascism is itself as much the effect of a crisis as it is the cause of one. Fear is arguably at the root of what’s driving it. The pace of technologically driven social, political, and economic change, and the effect on our collective consciousness of an always-awake Internet—along with the equivalence of fact and fantasy, truth and lies that it engenders—are more than many people can bear without constructing a comforting narrative they hope will somehow sustain their sense of self. As far as these unfortunates are concerned, the fact that their narrative bears little if any resemblance to the truth is a feature, not a bug. The truth can be painful. An end to that pain is what they’re after.

This is fertile ground for sociopathic influencers, and we’re as up to our eyeballs in them now as we were in the 1930s. Tucker Carlson tells us it’s manly to tan one’s bollocks. Elon Musk, the latest incarnation of Oswald Spengler, declares empathy to be the true cause of the Decline of the West. Donald Trump announces a list of thoughts you may not think if you want a paycheck or any financial help from the federal treasury. Steve Bannon gets out of jail, dusts off his persona, and embarks on a tour of the world’s dictators, checking to see if they fancy him as the Johnny Appleseed of a new fascist international. (tl;dr, they don’t. Elon Musk is prettier, and hands out more money.)

Despite the sheer weirdness of all this nonsense, laughing at it seems uncomfortably like laughing at Auschwitz. What we’re facing seems to me to be something metaphorically akin to the exothermic chemical reactions high school chemistry teachers used to demonstrate by dropping a pencil eraser-sized nub of metallic sodium into a beaker of distilled water. Once such a reaction gets going, the energy it produces makes it self-sustaining. Stopping it before the reagents are completely consumed can only be accomplished by removing energy from the reaction faster than it’s being produced. Depending on the scale of the reaction in question, this can be virtually impossible to accomplish.

Metaphors admittedly have their limits, but if the history of our previous century is anything to go by, calling the rise of a 21st century fascist international an exothermic political reaction seems to fit what I see developing. The more vulnerable bourgeois democracies and their ruling economic classes in the 1930s were so terrified of a socialist international which demanded a more equitable distribution of the wealth their economies produced that they backed a fascist international instead. The irony is that despite how disastrously that turned out, they now look as though they’re preparing to do it again. I’m no Nostradamus, but if I had to assess current geopolitical probabilities, I’d say that it’s very unlikely that their choices this time are going to let us off any more easily than they did at the end of the 1930s. YMMV.

Rat Leaves Sinking Ship…

From the online edition of The Atlantic, May 2025:

I SHOULD HAVE SEEN THIS COMING

When I joined the conservative movement in the 1980s, there were two types of people: those who cared earnestly about ideas, and those who wanted only to shock the left. The reactionary fringe has won.

By David Brooks

Everybody you’ve been sneering at for the last 40 years saw this coming. Everybody who could tell the difference between Edmund Burke and William F. Buckley Jr. saw this coming. I gotta say, you’re as stupid and full of yourself now as you were 40 years ago. You’ve always been a joke as a bully, you’re even more of a joke now as a victim.

The Dumbfuck(ery) Marches On

Whatever I could think of to say, there’s always someone out there who can say it better. For example:

From Jay Kuo’s Newsletter The Status Kuo, April 3, 2025—

“Economist Justin Wolfers pulled no punches in his assessment: ‘Monstrously destructive, incoherent, ill-informed tariffs based on fabrications, imagined wrongs, discredited theories and ignorance of decades of evidence. And the real tragedy is that they will hurt working Americans more than anyone else.’”

Unbidden Bits—April 1, 2025

If you aspire to rule as a latter-day Caligula, you should probably pay a lot more attention to your latter-day Praetorian Guard. Did you see the video of that very large bodyguard watching Elon do his drunken frat-boy fork and spoon trick at a recent Trumpfest? If the country finally tires of our ruling monsters, it won’t matter how many of us leftie riff-raff they’ve deported or disappeared. The sound of gladii being sharpened in the White House basement must be deafening these days—if, of course, you have the ears to hear it.

A New Kind Of Fire

When rubbing two seemingly unrelated concepts together, sometimes you get a new kind of fire.

For example, in the left hand of our contemporary muse:

Il vecchio mondo sta morendo. Quello nuovo tarda a comparire. E in questo chiaroscuro nascono i mostri.

The old world is dying. The new one is late in emerging. And in this half-light monsters are born.

—Antonio Gramsci, from his prison notebooks

And in the right hand:

In fact, the overwhelming impression I get from generative AI tools is that they are created by people who do not understand how to think and would prefer not to. That the developers have not walled off ethical thought here tracks with the general thoughtlessness of the entire OpenAI project.

—Elizabeth Lopatto in the Verge article, The Questions ChatGPT Shouldn’t Answer, March 5, 2025

If we rub these particular concepts together, will anything interesting really be ignited? That depends, I think, on whether or not we believe that the old world really is dying, and whether or not we also share the consensus developing among our professional optimists that the technologies we’ve begun to call artificial intelligence represent a genuine hope of resurrecting it.

If we believe that Gramsci’s vision was truly prescient and is at last on the verge of being fulfilled, what monsters, exactly, should we be expecting? The resurgence of fascism and fascists is certainly the one most talked about at the moment, but the new fascism seems to me less like a genuinely possible future, and more like the twitching of a partially reanimated corpse. Vladimir Putin can rain missiles down on Ukraine all day long, and we learn nothing. Donald Trump can rage and bluster and condemn, and still not conquer France in a month. Despite the deadly earnest of both these tyrants, Marx’s “second time as farce” has never seemed more apt.

The more interesting monsters are the tech bros of Silicon Valley, the idiot savants who modestly offer themselves up to serve as our once and future philosopher kings. They give us ChatGPT, and in their unguarded moments natter endlessly on about AGI and the singularity as though they’d just succeeded in turning lead into gold, or capturing phlogiston in a bell jar.

Never mind asking Sam Altman why, despite his boasting, we haven’t yet achieved AGI on current hardware. Ask a simpler question instead: Why don’t we yet have self-driving cars? More specifically, why don’t we have self-driving cars that can drive anywhere a human being can, in any conditions a human being can? Then ask yourself how you would train a self-driving car to handle all the varied versions of the trolley problem that are likely to occur during the millions of miles driven each day on the world’s roads.

We do know how human beings handle these situations, don’t we? If we train a car’s self-driving system in the same way human beings are trained, should we not expect equivalent results?

The answer to our first question is no, we can’t actually describe with any degree of certainty what mental processes govern human reactions in such situations. What we do know is that mind and body, under the pressure of the split-second timing often required, seem to react as an integrated unit, and rational consciousness seems to play little if any part in the decisions taken in the moment. Ethical judgments, if any are involved, appear purely instinctive in their application.

Memory doesn’t seem to record these events and our responses to them as rational progressions either. When queried at the time, what memory presents seems almost dreamlike—dominated by emotions, flashes of disjointed imagery, and often a peculiarly distorted sense of time. When queried again after a few days or weeks, however, memory will often respond with a more structured recollection. The rational mind apparently fashions a socially respectable narrative out of the mental impressions which were experienced initially as both fragmentary and chaotic. The question is whether this now recovered and reconstructed memory is data suitable for modeling future decision-making algorithms, or is instead a falsification which is more comfortable than accurate.

With such uncertainty about the answer to our first question, the answer to our second becomes considerably more complicated. If we don’t know whether or not the human reactions to an actual trolley problem are data-driven in any discoverable sense, where should we source the data for use in training our self-driving car systems? Could we use synthetic data—a bit of vector analysis mixed with object avoidance, and a kind of abstract moral triage of possible casualties? After all, human reactions per se aren’t always anything we’d want emulated at scale, and even if our data on human factors resists rational interpretation, we already have, or can acquire, a considerable amount of synthetic data about the purely technical factors involved in particular outcomes.

Given enough Nvidia processors and enough electricity to power them, we could certainly construct a synthetic data approach, but could we ever really be sure that it would respond to what happens in the real world in the way, and to the degree, that we expect? This might be the time to remind ourselves that neither we nor our machines are gods, nor are our machines any more likely than we are to become gods, not, at least, as long as they have to rely on us to teach them what any god worth worshipping would need to know.

I think Elizabeth Lopatto has the right of it here. Generative AI knows, in almost unimaginable historical completeness, what word is likely to come after the word that has just appeared—on the page, over the loudspeaker, in the mind, whatever—but historical is the operative word here. Every progression it assembles is a progression that has already existed somewhere in the history of human discourse. It can recombine these known progressions in new ways—in that sense, it can be a near-perfect mimic. It can talk like any of us, and when we listen to the recording after the fact, not even we ourselves can tell that we didn’t actually say what we hear coming out of the speaker. It can paint us a Hockney that not even David Hockney himself could be entirely sure was a forgery.
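The point about historical progressions can be made concrete with a toy sketch. This is nothing like a real transformer—just a bigram table, the crudest possible word predictor—but it illustrates the same structural limit: a model trained only on observed word-to-word transitions can never emit a transition that doesn’t already exist somewhere in its training text. (The corpus and function names below are my own invention, for illustration only.)

```python
import random
from collections import defaultdict

def train_bigrams(text):
    """Record, for each word, every word that has followed it in the text."""
    words = text.split()
    table = defaultdict(list)
    for a, b in zip(words, words[1:]):
        table[a].append(b)
    return table

def generate(table, start, length, seed=0):
    """Walk the table: each next word is sampled from transitions
    actually seen in training. Nothing new can ever appear."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length - 1):
        choices = table.get(out[-1])
        if not choices:
            break  # dead end: the training text never continued this word
        out.append(rng.choice(choices))
    return " ".join(out)

corpus = "the old world is dying and the new world is late"
table = train_bigrams(corpus)
print(generate(table, "the", 8))
# By construction, every adjacent word pair in the output
# already occurs somewhere in the corpus.
```

The output can recombine the corpus in orders the corpus never used, which is the sense in which such a system is a “near-perfect mimic”—novel-seeming arrangements, but every individual step is a quotation.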

The problem—and in my opinion it’s a damning one—is that generative AI, at least as presently designed and implemented, can never present us with anything truly new. The claims made for successful inference algorithms at work in the latest generative AI models seem to me to be premature at best. While it may ultimately prove to be true that machines are capable of becoming independent minds in the same way that human beings do, proving it requires an assessment of all sorts of factors related to the physical and environmental differences in the development of our two species which so far remains beyond the state of the art.

Human beings still appear to me to be something quite different from so-called thinking machines—different in kind, not merely in degree. However we do it, however little we understand even at this late date how we do it, we actually can create new things—new sensations, new concepts, new experiences, new perspectives, new theories of cosmology, whatever. The jury is still out on whether our machines can learn to do the same.

If we’re worried about fascism, or indeed any form of totalitarianism that might conceivably subject our posterity to the kind of dystopian future already familiar to us from our speculative fiction, we’d do well to consider that a society based on machines that quote us back to ourselves in an endlessly generalized loop can hobble us in more subtle but just as damaging and permanent ways as the newspeak and doublethink portrayed in Orwell’s 1984. No matter how spellbinding ChatGPT and the other avatars of generative AI can be, we should never underestimate our own capabilities, nor exchange them out of convenience for the bloodless simulacrum offered us by the latest NASDAQ-certified edgelord.

The bros still seem confident that generative AI can transform itself in a single lifetime by some as yet undefined magic from a parrot into a wise and incorruptible interlocutor. If they’re right, I’ll be happy to consign this meditation to the dustbin of history myself, and to grant them and their creations the respect they’ll undoubtedly both demand. In the meantime, though, I think I’ll stick with another quote, this one by [email protected] in a Mastodon thread about ChatGPT:

I don’t believe this is necessarily intentional, but no machine that learns under capitalism can imagine another world.

The American Degeneracy

If there ever was any doubt, there’s none now. There’ll be no justice, no mercy, and no place to hide so long as Trump, Vance, Musk, and their coterie of bootlickers, wannabes, and volunteer thugs are running things. Act accordingly.

Electrons and Promises

Money universalized the power of wealth, endowed it with the freedom to extend itself beyond the organizational capacity of soi-disant divine monarchies and pious religious latifundia. For better or worse, it drove the ascendancy of the human species to total dominance over all creation. The principle of liquidity which money embodied was a revelation that even Saul of Tarsus would have found transformational, had he not been paying more attention to the delusions of religious fervor than to the reality under his donkey’s hooves. Money was a store of value that was easily transferable and infinitely divisible, one that permitted superb transactional granularity—a reliable accounting of who had what, and even more importantly, who owed what to whom. This, more than anything, is what gave rise to the Anthropocene.

In the beginning, money was gold and promises, then it was paper and promises. Now it’s capital accounting, which is to say electrons and promises. The bitcoin boo-yah boys understand everything about electrons, but nothing about promises, or the need to keep them if you want the world of human beings to remain stable.

God, or somebody, help us all.