God Damn AI

‘The Thinker,’ a bronze sculpture by Auguste Rodin, at the Cleveland Museum of Art, April 26, 2015. Photo credit: Erik Drost / Wikimedia (CC BY 2.0)

Jonathan D. Simon  02/21/26 (whowhatwhy.org)

By its very nature, nothing of our own creation should be inevitable.

Q: How does a country like America that so values freedom deal with the purported inevitability of artificial intelligence?

A: The United States manages the inevitability of artificial intelligence by fostering a “pro-innovation, pro-freedom” approach, emphasizing American leadership in AI development while implementing, where necessary, targeted, light-touch safeguards. The strategy focuses on balancing rapid technological advancement with democratic values, aiming for “democratic AI” rather than authoritarian surveillance, utilizing this 2025 AI Action Plan to maintain technological dominance. [Per Google AI]

“Inevitable” — say it with me — may just be the ugliest word in the English language. 

Inevitable means “You have no choice.” Inevitable means “Fuck you. Deal with it.” A massive asteroid headed for Earth is inevitable. A boxcar headed for Auschwitz is inevitable. Stage 4 metastatic cancer, after every experimental protocol has been exhausted, is inevitable.

And AI, everyone now says, is inevitable.

But AI is not an asteroid, or even a boxcar or a tumor. It is what a small group of heedless, reckless, greedy entrepreneurs are inflicting on the rest of humanity, with an all-in fascist US government hell-bent on making sure it gallops forward without any regulation, oversight, or impediment — ostensibly so China doesn’t pull ahead (“Mr. President,” shouts Gen. Buck Turgidson, “we must not allow a mineshaft gap!”).

If I sound exercised, overwrought — which I am — it’s because I know how this will go. 

I know humanity too well to believe for a nanosecond that the awesome power and promise of AI will be channeled pro bono publico, that it will be managed with care and conscience.

I’ve seen what’s become of social media, but it is not merely the prospect of replicating that race to the bottom that alarms me. Nor the danger that Artificial General Intelligence (AGI), or sentient AI, may one day decide to do away with the pathetic — in its view, anyway — species that created it. Depending on how “wise” AGI turns out to be, that might actually be something of a boon for the rest of the life that shares our planet, much of which we — having found it in our way — have treated more or less as the doomers fear AGI will one day treat us.

No, my distress and anger right now lie closer, in space and time, to home. I learned just yesterday from a friend that she was in the first wave of AI-replaceable workers when her full-time job in publishing was eliminated by AI. For a mid-career single mom, employment is not — as it is for me, a semi-retiree — optional. But she has already found that the majority of available jobs in her field are temporary positions training LLMs — so that the new, highly efficient “workers” she helps “educate” can take over other people’s jobs. How nice.

Although predictions by the “experts” have been wildly divergent, something of a consensus has begun to form that millions and millions of humans are in line to share my friend’s fate, that the first wave in which she found herself is but a gentle ripple, a mere tickler of the tsunami that will come crashing on our human shores. 

Inevitable.

Former presidential candidate Andrew Yang now predicts the loss, within just a few years, of up to 50 percent of all white-collar jobs — with all kinds of dire economic aftershocks and knock-on effects. And he is not alone in making such forecasts — which are based, after all, on a solid understanding of how capital and labor markets work when left to their own devices.

A Cognoscente’s Advice

At the same time I was attempting to come to grips with my friend’s misfortune and the mind-boggling scenarios it prompted me to start reading about and taking seriously, an even more disturbing paean to inevitability came to my attention, in the form of what might best be described as an “advice” column from blogger Matt Shumer, a multi-hatted denizen of the tech world.

Shumer’s column, titled “Something Big Is Happening: A personal note for non-tech friends and family on what AI is starting to change,” first basically corroborates, in considerable detail, the Yangian view that AI is spectacularly powerful, ferociously metastatic, and, yes, inevitable. That initially shocking depiction had, more or less overnight, become familiar to me.

What really spun my poor human head was what came next: Shumer’s advice to his “non-tech friends and family” about how to survive the “something big.” 

He first clears his throat thus: “I’m going to be direct with you because I think you deserve honesty more than comfort.” And direct he certainly is. No spoonful of sugar to help this medicine go down.

The gist of it is this: 

This is different from every previous wave of automation, and I need you to understand why. AI isn’t replacing one specific skill. It’s a general substitute for cognitive work. It gets better at everything simultaneously. When factories automated, a displaced worker could retrain as an office worker. When the internet disrupted retail, workers moved into logistics or services. But AI doesn’t leave a convenient gap to move into. Whatever you retrain for, it’s improving at that too. [Emphasis added.]

And:

The most recent AI models make decisions that feel like judgment. They show something that looked like taste: an intuitive sense of what the right call was, not just the technically correct one. A year ago that would have been unthinkable. My rule of thumb at this point is: if a model shows even a hint of a capability today, the next generation will be genuinely good at it. These things improve exponentially, not linearly. …

Nothing that can be done on a computer is safe in the medium term. If your job happens on a screen (if the core of what you do is reading, writing, analyzing, deciding, communicating through a keyboard) then AI is coming for significant parts of it. The timeline isn’t “someday.” It’s already started.

Eventually, robots will handle physical work too.

So… the race is on, and:

The single biggest advantage you can have right now is simply being early. Early to understand it. Early to use it. Early to adapt.

Start using AI seriously, not just as a search engine. Sign up for the paid version of Claude or ChatGPT. It’s $20 a month. … Make sure you’re using the best model available.

And here’s the thing to remember: if it even kind of works today, you can be almost certain that in six months it’ll do it near perfectly. The trajectory only goes one direction. …

[T]here is a brief window where most people at most companies are still ignoring this. The person who walks into a meeting and says “I used AI to do this analysis in an hour instead of three days” is going to be the most valuable person in the room. Not eventually. Right now. Learn these tools. Get proficient. Demonstrate what’s possible. If you’re early enough, this is how you move up: by being the person who understands what’s coming and can show others how to navigate it. That window won’t stay open long. Once everyone figures it out, the advantage disappears.

A few final tips:

Get your financial house in order. I’m not a financial advisor, and I’m not trying to scare you into anything drastic. But if you believe, even partially, that the next few years could bring real disruption to your industry, then basic financial resilience matters more than it did a year ago. Build up savings if you can. Be cautious about taking on new debt that assumes your current income is guaranteed. Think about whether your fixed expenses give you flexibility or lock you in. Give yourself options if things move faster than you expect.

Think about where you stand, and lean into what’s hardest to replace. Some things will take longer for AI to displace. Relationships and trust built over years. Work that requires physical presence. Roles with licensed accountability: roles where someone still has to sign off, take legal responsibility, stand in a courtroom. Industries with heavy regulatory hurdles, where adoption will be slowed by compliance, liability, and institutional inertia. None of these are permanent shields. But they buy time. And time, right now, is the most valuable thing you can have, as long as you use it to adapt, not to pretend this isn’t happening. [Emphasis added.]

Rethink what you’re telling your kids. The standard playbook: get good grades, go to a good college, land a stable professional job. It points directly at the roles that are most exposed.

Build the habit of adapting. This is maybe the most important one. The specific tools don’t matter as much as the muscle of learning new ones quickly. AI is going to keep changing, and fast. The models that exist today will be obsolete in a year. The workflows people build now will need to be rebuilt. The people who come out of this well won’t be the ones who mastered one tool. They’ll be the ones who got comfortable with the pace of change itself…

I know the next two to five years are going to be disorienting in ways most people aren’t prepared for. This is already happening in my world. It’s coming to yours.

I know the people who will come out of this best are the ones who start engaging now — not with fear, but with curiosity and a sense of urgency.

Like a Hole in the Head

So there it is. I think Matt Shumer is highly knowledgeable, well-meaning, and sincere — and somehow that makes his exhortations all the harder to stomach. Because I have little doubt that he is right. 

Shumer is essentially depicting and forecasting a social Darwinistic workplace fight to the death, with spectacular success being the palm for a handful of winners and displacement, obsolescence, impoverishment, and purposelessness as the booby prize for most everyone else.

And I also have little doubt that what he is describing is the kind of upheaval brought on by, say, the Industrial Revolution, only compressed from a couple of centuries to a few years, or less. 

Disorientation “in ways most people aren’t prepared for” is not generally a recipe for peaceful, contented coexistence. And, with a mental health crisis already stalking the young and suicide rates spiking, isn’t a ratcheted-up “sense of urgency” just what we all need? 

Whatever we may tell you, I strongly suspect that most people my age, in our “golden years,” quietly envy the young and, all else being equal, wish we were closer to the starting than the finish line of our lives. No more.

Now this is entirely apart from that whole set of other delights that AI is thought to hold in store: its tweaking to advance the dominionist agendas of the likes of Elon Musk and Peter Thiel; its outsized resource demands and ecological impacts; its already measured tendency to atrophy human cognitive processes, especially in the young; its capacity to generate deepfakes good enough to be taken unquestioningly for fact and truth in politics, the marketplace, and pretty much every other realm of human interaction; its off-the-charts addictive powers, especially when it comes to a species of porn that one clinician called “crack or meth” in comparison to the “cocaine” of ordinary porn; and, ultimately, a potential reversal of the master-servant relationship with humanity.

Microsoft Azure datacenters in Pangborn, Wenatchee, WA, August 22, 2025. Photo credit: Tedder / Wikimedia (CC BY 4.0)

And, before you point to all the medical advances and other upsides, yes I am all too aware of the promise, the seductions. In the abstract. What AI as a wisely used tool ideally could bring us. 

But does the current development process look “ideal” to you? Or does it look like a mad scramble — part gold rush and part Cold War, the charge led by some of the least ethically concerned and constrained humans on the planet?


A Pivotal Moment

So I suggest we put aside the abstract and look hard at the reality of it. I’ve seen just enough, so far, to pray for a collapse, a burst bubble — which seems less and less likely. 

I’ve seen jobs already lost; I’ve seen MechaHitler (and how easy it was to tweak it in and out of existence); I’ve heard from my 28-year-old daughter that her younger colleagues in her doctoral program can barely answer a basic question without running to ChatGPT or Claude.

Shumer is right that this is a pivotal moment. But he regards it as pivotal in the sense of personal adaptability: Will you get on board or get run over? 

I regard it — quixotically, I know — as pivotal in a different way: a moment to choose, collectively, who we are and who we want to become.

One cannot but be awed by the brilliance and inventiveness that some of our fellow beings have shown in bringing us to this spot. At the same time, should we not at least — before we start adapting — have the opportunity to ask whether AI’s inventors make its best stewards and whether what amounts to an arms race, and one rife with personal agendas if not outright corruption, is an acceptable incubator for this most powerful and potentially destructive of weapons? 

To be perfectly blunt, do you trust Donald Trump and our dick-waving broligarchs to drive this turbocharged bus that, in Shumer’s words, “only goes one direction”?

No Choice? No Voice?

I typed my first screed — “Why Is There War?” (yes, you may laugh) — on a Smith Corona manual with two stuck keys, a packet of Ko-Rec-Type, and a sheet of carbon paper. What I’m feeling now recalls the impotent rage I felt then as a 10-year-old witnessing the idiotic escalation of the Vietnam War. 

I went to the library with my index cards to research the Domino Theory and the Gulf of Tonkin. I made a call to my congressman, on my rotary phone. I took a break for lunch, heating up some tomato soup in a pot on the stove. That was nearly 60 years ago. 

I took a computer course in 1974, my freshman year in college; I wrote a (very clunky) program that took a melody line and composed from it a piece in classical four-part harmony. I stayed up three nights straight debugging it. What fun it was! 

And little did I know… Laptops were inevitable; microwaves were inevitable; smartphones were inevitable (full disclosure: I have no plans to “upgrade” my flip phone). 

Everything, in retrospect, is inevitable.

But let no one call me or my generation Luddites. A quick review of the torrent of changes and “improvements” we have incorporated into our brief lives — sometimes grateful, sometimes grumbling, and sometimes both — should make clear just how flexible and adaptive we have been.

AI, as Shumer maintains, is different in both scope and scale. It is not popular, as polls have consistently shown. It is the spawn of greed out of the womb of curiosity. It is being shoved down our throats. Hard. Fast.

Shumer advises me to adapt, tells me resistance is useless, self-defeating. I don’t think he’s wrong. 

But I do think that if there was ever a moment for rage against the machine, this is it.

I’ve lost a lot of friends who became MAGAs of the Left. I’m wondering how many more I will lose who tell me AI is inevitable and I should just get myself the deluxe model.

By its very nature, nothing of our own creation should be inevitable.

If AI is inevitable, it may well be because humanity is too collectively weak for hard and patient thought, too weak to weigh consequences, too weak to protect those who will be swept into obsolescence, too weak to protect ourselves and our future, too weak to protect our children. 

Purposeless, impoverished, lawless, marauding bands of once-productive workers? Inevitable.

Just deal with it. Build a moat around your house. Ask ChatGPT — the deluxe model — for instructions. 
