Siri is from Apple and is here to help us. We are assured that it doesn’t spy on us like its relatives from Amazon and Alphabet do, so why do we hate it? Maybe being talked to like we were five years old by a machine the size of a grapefruit has something to do with it. Maybe being given answers that are either irrelevant or insane when we ask it a question does also. Artificial intelligence sounds like a fine idea. Being given artificial stupidity instead tends to confirm the contempt that we suspect the management of large corporations have for us. The tech bros fear the singularity. What they should fear is the Butlerian Jihad.
The Calliope of Dread
The Fascist International
Unthinkable thoughts? An oxymoron of a concept surely, at least it appears that way to anyone who takes the idea of personal liberty seriously. Any attempt to explain how it became the cornerstone of moral education in the West would be too complex to include in this meditation, but one critical aspect of that potential explanation is simple enough: How a child reacts the first time he catches an adult in a self-serving lie, or more properly, how the child perceives the social significance of that lie, can be far more important than most people think in determining what kind of adult that child will grow up to be.
For reasons that should be obvious to anyone who’s more than an occasional visitor to Dogtown, I’ve long considered unthinkable thoughts to be a false category, one established by tyrants for the sole purpose of controlling the allegiances of their subjects. Given that I’m a more or less direct intellectual descendant of the Enlightenment, my response to them is to quote Immanuel Kant:
Sapere aude! Habe Mut, dich deines eigenen Verstandes zu bedienen! ist also der Wahlspruch der Aufklärung.
Dare to know! Have the courage to avail yourself of your own understanding! is therefore the motto of the Enlightenment.
Unworthy thoughts, on the other hand—those that take the path of least emotional resistance, and in doing so escape into the world before being considered in the full light of all our mental faculties—are real enough. Despite what our pious god botherers demand, they are also common enough and harmless enough in a comparative sense not to be judged as sins by some chimerical Father in Heaven, or some equally chimerical Freudian superego. In fact, to the extent that such thoughts prioritize honesty over our all too common tendency to create a falsely competent persona, they can actually be a blessing.
Which is not to say that they can’t also be embarrassing. Yesterday I deleted my most recent post here—not because I found it indefensible, but because I found it irrelevant. Angry screeds against the enshittification of our public discourse, the arrogance of our billionaire know-it-alls, the ignorant viciousness of our sociopathic president and his followers, and the sorry state of our geopolitics in general are everywhere one looks these days. Adding to them can be tempting, but succumbing to that temptation can all too easily turn into one of those disabling addictions that prove nearly impossible to overcome.
Relying on purely rhetorical, social media-style carping as our sole defense against the lunatics responsible for our current political, economic, and social agonies is in some fundamental sense a fool’s errand. As far as I can see, it isn’t actually helping anyone. By most accounts the crisis we currently find ourselves in as a society is overdetermined to an unprecedented degree. How we think about it is dependent on which aspects of its driving force we believe to be most vulnerable to intervention, and what kinds of interventions we believe are within our power to organize and carry out.
The sad fact is that the current worldwide rise of fascism is itself as much the effect of a crisis as it is the cause of one. Fear is arguably at the root of what’s driving it. The pace of technologically driven social, political, and economic change, the effect on our collective consciousness of an always awake Internet—along with the equivalence of fact and fantasy, truth and lies that it engenders—are more than many people can bear without constructing a comforting narrative they hope will somehow sustain their sense of self. As far as these unfortunates are concerned, the fact that their narrative bears little if any resemblance to the truth is a feature, not a bug. The truth can be painful. An end to that pain is what they’re after.
This is fertile ground for sociopathic influencers, and we’re as up to our eyeballs in them now as we were in the 1930s. Tucker Carlson tells us it’s manly to tan one’s bollocks. Elon Musk, the latest incarnation of Oswald Spengler, declares empathy to be the true cause of the Decline of the West. Donald Trump announces a list of thoughts you may not think if you want a paycheck or any financial help from the federal treasury. Steve Bannon gets out of jail, dusts off his persona, and embarks on a tour of the world’s dictators, checking to see if they fancy him as the Johnny Appleseed of a new fascist international. (tl;dr, they don’t. Elon Musk is prettier, and hands out more money.)
Despite the sheer weirdness of all this nonsense, laughing at it seems uncomfortably like laughing at Auschwitz. What we’re facing seems to me to be something metaphorically akin to the exothermic chemical reactions high school chemistry teachers used to demonstrate by dropping a pencil-eraser-sized nub of metallic sodium into a beaker of distilled water. Once such a reaction gets going, the energy it produces makes it self-sustaining. Stopping it before the reagents are completely consumed can only be accomplished by removing energy from the reaction faster than it’s being produced. Depending on the scale of the reaction in question, this can be virtually impossible to accomplish.
Metaphors admittedly have their limits, but if the history of our previous century is anything to go by, calling the rise of a 21st-century fascist international an exothermic political reaction seems to fit what I see developing. The more vulnerable bourgeois democracies and their ruling economic classes in the 1930s were so terrified of a socialist international which demanded a more equitable distribution of the wealth their economies produced that they backed a fascist international instead. The irony is that despite how disastrously that turned out, they now look as though they’re preparing to do it again. I’m no Nostradamus, but if I had to assess current geopolitical probabilities, I’d say that it’s very unlikely that their choices this time are going to let us off any more easily than they did at the end of the 1930s. YMMV.
The Dumbfuck(ery) Marches On

Whatever I could think of to say, there’s always someone out there who can say it better. For example:
From Jay Kuo’s Newsletter The Status Kuo, April 3, 2025—
“Economist Justin Wolfers pulled no punches in his assessment: ‘Monstrously destructive, incoherent, ill-informed tariffs based on fabrications, imagined wrongs, discredited theories and ignorance of decades of evidence. And the real tragedy is that they will hurt working Americans more than anyone else.’”
A New Kind Of Fire
When rubbing two seemingly unrelated concepts together, sometimes you get a new kind of fire.
For example, in the left hand of our contemporary muse:
Il vecchio mondo sta morendo. Quello nuovo tarda a comparire. E in questo chiaroscuro nascono i mostri.
The old world is dying. The new one is late in emerging. And in this half-light monsters are born.
—Antonio Gramsci, from his prison notebooks
And in the right hand:
In fact, the overwhelming impression I get from generative AI tools is that they are created by people who do not understand how to think and would prefer not to. That the developers have not walled off ethical thought here tracks with the general thoughtlessness of the entire OpenAI project.
—Elizabeth Lopatto in the Verge article, The Questions ChatGPT Shouldn’t Answer, March 5, 2025
If we rub these particular concepts together, will anything interesting really be ignited? That depends, I think, on whether or not we believe that the old world really is dying, and whether or not we also share the consensus developing among our professional optimists that the technologies we’ve begun to call artificial intelligence represent a genuine hope of resurrecting it.
If we believe that Gramsci’s vision was truly prescient and is at last on the verge of being fulfilled, what monsters, exactly, should we be expecting? The resurgence of fascism and fascists is certainly the one most talked about at the moment, but the new fascism seems to me less like a genuinely possible future, and more like the twitching of a partially reanimated corpse. Vladimir Putin can rain missiles down on Ukraine all day long, and we learn nothing. Donald Trump can rage and bluster and condemn, and still not conquer France in a month. Despite the deadly earnest of both these tyrants, Marx’s “second time as farce” has never seemed more apt.
The more interesting monsters are the tech bros of Silicon Valley, the idiot savants who modestly offer themselves up to serve as our once and future philosopher kings. They give us ChatGPT, and in their unguarded moments natter endlessly on about AGI and the singularity as though they’d just succeeded in turning lead into gold, or capturing phlogiston in a bell jar.
Never mind asking Sam Altman why, despite his boasting, we haven’t yet achieved AGI on current hardware. Ask a simpler question instead: Why don’t we yet have self-driving cars? More specifically, why don’t we have self-driving cars that can drive anywhere a human being can, in any conditions a human being can? Then ask yourself how you would train a self-driving car to handle all the varied versions of the trolley problem that are likely to occur during the millions of miles driven each day on the world’s roads.
We do know how human beings handle these situations, don’t we? If we train a car’s self-driving system in the same way human beings are trained, should we not expect equivalent results?
The answer to our first question is no, we can’t actually describe with any degree of certainty what mental processes govern human reactions in such situations. What we do know is that mind and body, under the pressure of the split-second timing often required, seem to react as an integrated unit, and rational consciousness seems to play little if any part in the decisions taken in the moment. Ethical judgments, if any are involved, appear purely instinctive in their application.
Memory doesn’t seem to record these events and our responses to them as rational progressions either. When queried at the time, what memory presents seems almost dreamlike—dominated by emotions, flashes of disjointed imagery, and often a peculiarly distorted sense of time. When queried again after a few days or weeks, however, memory will often respond with a more structured recollection. The rational mind apparently fashions a socially respectable narrative out of the mental impressions which were experienced initially as both fragmentary and chaotic. The question is whether this now recovered and reconstructed memory is data suitable for modeling future decision-making algorithms, or is instead a falsification which is more comfortable than accurate.
With such uncertainty about the answer to our first question, the answer to our second becomes considerably more complicated. If we don’t know whether or not the human reactions to an actual trolley problem are data-driven in any discoverable sense, where should we source the data for use in training our self-driving car systems? Could we use synthetic data—a bit of vector analysis mixed with object avoidance, and a kind of abstract moral triage of possible casualties? After all, human reactions per se aren’t always anything we’d want emulated at scale, and even if our data on human factors resists rational interpretation, we already have, or can acquire, a considerable amount of synthetic data about the purely technical factors involved in particular outcomes.
Given enough Nvidia processors and enough electricity to power them, we could certainly construct a synthetic data approach, but could we ever really be sure that it would respond to what happens in the real world in the way, and to the degree, that we expect? This might be the time to remind ourselves that neither we nor our machines are gods, nor are our machines any more likely than we are to become gods, not, at least, as long as they have to rely on us to teach them what any god worth worshipping would need to know.
I think Elizabeth Lopatto has the right of it here. Generative AI knows, in almost unimaginable historical completeness, what word is likely to come after the word that has just appeared—on the page, over the loudspeaker, in the mind, whatever—but historical is the operative word here. Every progression it assembles is a progression that has already existed somewhere in the history of human discourse. It can recombine these known progressions in new ways—in that sense, it can be a near-perfect mimic. It can talk like any of us, and when we listen to the recording after the fact, not even we ourselves can tell that we didn’t actually say what we hear coming out of the speaker. It can paint us a Hockney that not even David Hockney himself could be entirely sure was a forgery.
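The mechanism described above can be caricatured in a few lines of code. What follows is a deliberately toy sketch, not how any production model actually works: a bigram table that, for each word, simply remembers which word has most often followed it in its training text. Real generative AI uses neural networks over vast corpora, but the spirit the passage points at, prediction of the next word from historical frequency, is the same. The corpus and function names here are my own illustrative inventions.

```python
from collections import Counter, defaultdict

def train_bigrams(text):
    """Count, for each word, which words have followed it in the corpus."""
    words = text.lower().split()
    follows = defaultdict(Counter)
    for prev, nxt in zip(words, words[1:]):
        follows[prev][nxt] += 1
    return follows

def most_likely_next(follows, word):
    """Return the historically most frequent successor of `word`, or None."""
    counts = follows.get(word.lower())
    if not counts:
        return None
    return counts.most_common(1)[0][0]

# A tiny made-up corpus, echoing the Gramsci quote above.
corpus = (
    "the old world is dying the new world is late "
    "in this half light monsters are born"
)
model = train_bigrams(corpus)
print(most_likely_next(model, "world"))  # prints "is"
```

Note what such a model can never do: produce a successor word that has not already appeared after that word somewhere in its history. Everything it emits is a recombination of progressions it has already seen, which is precisely the limitation the paragraph above describes.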
The problem—and in my opinion it’s a damning one—is that generative AI, at least as presently designed and implemented, can never present us with anything truly new. The claims made for successful inference algorithms at work in the latest generative AI models seem to me premature at best. It may ultimately prove true that machines can become independent minds in the way that human beings do, but proving it would require assessing all sorts of factors related to the physical and environmental differences in the development of our two species, an assessment which so far remains beyond the state of the art.
Human beings still appear to me to be something quite different from so-called thinking machines—different in kind, not merely in degree. However we do it, however little we understand even at this late date how we do it, we actually can create new things—new sensations, new concepts, new experiences, new perspectives, new theories of cosmology, whatever. The jury is still out on whether our machines can learn to do the same.
If we’re worried about fascism, or indeed any form of totalitarianism that might conceivably subject our posterity to the kind of dystopian future already familiar to us from our speculative fiction, we’d do well to consider that a society based on machines that quote us back to ourselves in an endlessly generalized loop can hobble us in ways more subtle than, but just as damaging and permanent as, the newspeak and doublethink portrayed in Orwell’s 1984. No matter how spellbinding ChatGPT and the other avatars of generative AI can be, we should never underestimate our own capabilities, nor exchange them out of convenience for the bloodless simulacrum offered us by the latest NASDAQ-certified edgelord.
The bros still seem confident that generative AI can transform itself in a single lifetime by some as yet undefined magic from a parrot into a wise and incorruptible interlocutor. If they’re right, I’ll be happy to consign this meditation to the dustbin of history myself, and to grant them and their creations the respect they’ll undoubtedly both demand. In the meantime, though, I think I’ll stick with another quote, this one by [email protected] in a Mastodon thread about ChatGPT:
I don’t believe this is necessarily intentional, but no machine that learns under capitalism can imagine another world.
The American Degeneracy
If there ever was any doubt, there’s none now. There’ll be no justice, no mercy, and no place to hide so long as Trump, Vance, Musk, and their coterie of bootlickers, wannabes, and volunteer thugs are running things. Act accordingly.
Electrons and Promises
Money universalized the power of wealth, endowed it with the freedom to extend itself beyond the organizational capacity of soi-disant divine monarchies and pious religious latifundia. For better or worse, it drove the ascendancy of the human species to total dominance over all creation. The principle of liquidity which money embodied was a revelation that even Saul of Tarsus would have found transformational, had he not been paying more attention to the delusions of religious fervor than to the reality under his donkey’s hooves. Money was a store of value that was easily transferable and infinitely divisible, and that permitted superb transactional granularity—a reliable accounting of who had what, and even more importantly, of who owed what to whom. That, more than anything else, is what gave rise to the Anthropocene.
In the beginning, money was gold and promises, then it was paper and promises. Now it’s capital accounting, which is to say electrons and promises. The bitcoin boo-yah boys understand everything about electrons, but nothing about promises, or the need to keep them if you want the world of human beings to remain stable.
God, or somebody, help us all.
Unbidden Bits—February 28, 2025
So now there can be no doubt whatsoever that we’ve finally found a more wretched hive of scum and villainy than Mos Eisley spaceport—1600 Pennsylvania Avenue. Trump and Vance in full cry—it’s impossible to imagine a more vile, dishonorable display than they put on today in attacking a man whose mere presence as a supplicant rather than an honored guest is itself enough to shame them. This is a day which will live even longer in infamy than December 7, 1941.
How It’s Going
Politics as usual hasn’t been as usual lately as it used to be. Were he still among the living, even our 20th century Nostradamus, Mr. Orwell, might be surprised to learn that Oceania has finally and definitively lost its war with Eurasia, and is presently hiding the rump of itself under the skirts of a demented real estate developer with delusions of grandeur. Definitely not the Big Brother Mr. Orwell promised us, this one, although the red hat rubes hardly seem to have noticed. Eastasia, meanwhile, is licking its chops, oblivious to its own vulnerabilities, doing its unctuous best to look as inscrutable as western racists expect.
A couple of degrees more of global warming, a nuclear exchange or two between idiot regimes, and even Elon Musk and his sycophant armies might find themselves roasting rats-on-a-stick over burning rubber tires somewhere where angels fear to tread. That, my fellow deeply concerned citizens of the First World, is actually how it’s going, and you didn’t even have to sit through a commercial to hear about it.
All Your Base Are Now Belong To Us
Die Fahne hoch! Die Reihen fest geschlossen.
Der Musk marschiert mit mutig festem Schritt
Kam’raden, die Wokist’n vernichtet haben
marschier’n im Geist in unsern Reihen mit.
(For those of my readers who don’t know German, this is a contemporary parody of the 90-year-old Nazi party anthem, das Horst-Wessel-Lied.) In English, it goes more or less like this:
The flag held high, ranks firmly closed together!
Musk marches on with bravely stiffened stride
Comrades who’ve done away with Wokeists
march with us in our ranks in spirit now.
GERMAN CITIZENS PLEASE NOTE: Reproducing these lyrics is, with few exceptions, currently illegal in the Federal Republic of Germany. In the US our laws are more permissive. My apologies to the citizens of the Federal Republic, but given the current constitutional crisis in my own country, I felt the need to fiddle with them here.
Im Westen Nichts Neues
Judging by the chaos engendered by our Orange Furor’s decree cutting off all aid to everybody everywhere, it’s a good thing our cut-rate Nazis weren’t trying to invade France. And will somebody please tell Stephen Miller that even if he shaves his head, squints malevolently, and shouts a lot, people will still know he’s a Jew, and will still find a Jewish Goebbels unseemly.