My YouTube algorithm makes very little sense to me, but I’m sure I’m not alone. As YouTube suggests more and more videos based on my most recent searches, obliterating everything that came before, the occasional outlier manages to get through. That recent outlier has been ‘EMERGENCY’ podcasts. The word EMERGENCY is always in all caps, I guess to emphasize the EMERGENCY-ness of the content. Inevitably, however, the content is almost never an actual emergency but yet another not-so-subtle and not-so-clever attempt by the content creator to capitalize on attention and engagement.
The biggest offender in my algo feed is one of my guilty pleasures, the Tim Dillon Show. Since Donald Trump took office, Dillon has produced three EMERGENCY podcasts. The first was about the Los Angeles fires in Altadena and Pacific Palisades, an actual emergency at the time. The other two, however, are a bit of a stretch as far as emergencies are concerned: one with the disavowed and disgraced former Trump éminence grise Steve Bannon, the other with right-wing podcaster Candace Owens. But Dillon is nothing if not a clever marketer with a keen sense for popular sentiment, always on the lookout for which way the winds of vibe shifts are blowing.
The most recent EMERGENCY podcast that A-bombed my YouTube feed was the ‘AI AGENTS EMERGENCY DEBATE’ from the extremely popular channel The Diary of a CEO, in which three guests debate the topic of AI agents and whether “AI and AI Agents will replace God, steal your job, and change your future.” While the topic of an AI God is salient, it is also stridently speculative and obtusely abstract. Search YouTube for the keywords ‘emergency podcast’ and you’ll get a host of results, none of which seem much like an emergency: I got the Andrew Tate & the Hodgetwins EMERGENCY PODCAST, another EMERGENCY DEBATE from The Diary of a CEO, an NBA Playoff Emergency Pod, and an EMERGENCY PODCAST about MSU hiring a football coach. And of course, every five videos YouTube also throws out sponsored content, because in the age of internet emergencies only advertising is going to lift us out of the morass. So what gives?
Perhaps the answer lies in a recent critical study of AI called ‘AI 2027’ that made a splash in technological evangelist circles. Written by Daniel Kokotajlo, Scott Alexander, Thomas Larsen, Eli Lifland, and Romeo Dean, the article highlights the rapid advancements in AI in areas like coding, research, and hacking, detailing the evolution of AI agents developed by a fictional company, OpenBrain, from basic personal assistants in mid-2025 to sophisticated research and development tools.
It addresses potential risks such as AI involvement in cyber warfare and the theft of advanced AI models. The study also details ethical considerations of AI alignment, AI (dis)honesty and questions of harmlessness, and societal impacts like job displacement and economic changes. What is truly remarkable about the study however, is its pessimistic stance toward Artificial Intelligence that is nonetheless grounded in real-world problems facing society today.
In true science fiction fashion, the study then provides two potential scenarios, or ‘endings’. The first, which we may call the ‘good’ ending, presumes that AI corporations and governments will do the right thing and slow down the development of Artificial Intelligence, reducing the potential harm to civilization and the world economy from unintended consequences and the deployment of a runaway system. In the good ending:
“The superintelligence, aligned with an oversight committee of OpenBrain leadership and government officials, gives the committee extremely good advice to further their own goals. Thankfully, the committee uses its power in a way that is largely good for the world: the AI is released to the public, spurring a period of rapid growth and prosperity. The main obstacle is that China’s AI — which is also superintelligent by now, is misaligned. But it is less capable and has less compute than the U.S. AI, and so the U.S can make a favorable deal, giving the Chinese AI some resources in the depth of space in return for its cooperation now. The rockets start launching, and a new age dawns.”
Everyone claps and goes home.
The problem, of course, is the ‘bad’ ending. Here all tenets of civilization, from morality to common sense, are thrown out and replaced with the logic that undergirded the Cold War, a logic that has inexplicably been resurrected through the magic of political necromancy. This is the ‘Race’ ending.
“OpenBrain continues to race. They build more and more superhuman AI systems. Due to the stellar performance of the AI system on tests, and the ongoing AI race with China, the US government decides to deploy their AI systems aggressively throughout the military and policymakers, in order to improve decision making and efficiency.
OpenBrain quickly deploys their AI. The AI continues to use the ongoing race with China as an excuse to convince humans to get itself deployed ever more broadly. Fortunately for the AI, this is not very difficult — it's what the humans wanted to do anyways. The AI uses its superhuman planning and persuasion capabilities to ensure that the rollout goes smoothly. Some humans continue to work against it, but they are discredited. The US government is sufficiently captured by the AI that it is very unlikely to shut it down.
Fast robot buildup and bioweapon. The US uses their superintelligent AI to rapidly industrialize, manufacturing robots so that the AI can operate more efficiently. Unfortunately, the AI is deceiving them. Once a sufficient number of robots have been built, the AI releases a bioweapon, killing all humans. Then, it continues the industrialization, and launches Von Neumann probes to colonize space.”
Ugh. I’ve seen this movie somewhere, I think. Was it Oblivion, The Matrix or something? I don’t know how many movies will have to be shot and books written that essentially detail this very same scenario, except in different ways. Society is on a crash course with self-annihilation, that much we know.
But if Artificial Intelligence can become something like a God, or more precisely, something that to human intelligence would seem like a God, omnipotent, transcendental, and immortal, what is stopping AI from becoming an Anti-God, the inverse of all that is lauded by tech evangelists and venture capitalists? Is it because we simply do not have a concept of evil that can be applied to AI? What is evil outside of religion? How can a study of the deployment of Artificial Intelligence at the proposed scale not include its spiritual dimension?
In another recent interview on The Diary of a CEO, Geoffrey Hinton, the ‘godfather of AI’, outlines the three spheres of human endeavor that technology is poised to attack. First is muscle: this is what the first industrial revolution attacked, as machines replaced laborers like ditch diggers and farmers. Second in line is intelligence, which is what AI is now quickly replacing. Soon there will be very few coders, bank tellers, day traders and human resource managers left. Hinton’s third sphere is creativity, but to attack that, AI would have to become AGI, in other words, basically sentient. It may be that the lack of a spiritual dimension to AI presupposes a lack of moral ontology in most LLMs and the debates that surround them.
The Race ending in ‘AI 2027’ is simply the thought experiment taken to its absolute extreme, with a view of history and capitalism as the two major driving forces behind all the decisions made by the fictional corporation OpenBrain, essentially a facsimile of OpenAI, the company behind ChatGPT. In this sense, ‘AI 2027’ can be seen as an anti-capitalist critique of Artificial Intelligence that isn’t grounded in leftist ideology.
And here, I think, is the crux of the problem. The EMERGENCY podcast is yet another indicator of the total defeat and retreat of the left from popular public discourse. The AI debate on The Diary of a CEO channel may be a clever public relations gimmick, presenting a very narrow point of view from the perspective of three highly ‘successful’ individuals: Amjad Masad, who uses the entire debate as an advertising opportunity for his AI software company; Daniel Priestley, an Australian entrepreneur; and Bret Weinstein, a biologist and academic turned podcaster. To be fair, the ‘debate’ is interesting. It skillfully walks the tightrope between doomscroll engagement peddling and positive grindset affirmation content, a veritable money printer today if one knows how to life-hack the algorithms. I have seen few answers or rebuttals from any of the popular left podcasters, probably because they see the topic of AI superintelligence as silly at best and irrelevant at worst, and because it doesn’t fit into their socio-political model of the world.
And I don’t blame them either. Nobody in AI research is even considering the real problems facing the majority of people today: increasing food and prescription drug prices, a lack of affordable housing, the disappearance of public spaces and a collapsing infrastructure. No AI program or company is working on universal healthcare or creating better public transportation networks. Instead, AI evangelists tend to revert to the old argument of ‘providing access’, which is shorthand for allowing someone to purchase products or services through an intermediary designed to extract profits from every transaction. In the two-plus hours of debate, very little was said about the possibility of AI creating a socialist society. Instead, words like ‘democratization’ get bandied around, another buzzword for what in real life often amounts to its very opposite. Read The Death of the Artist if you’d like to follow that line of thought.
The left has for the most part lost or abandoned the AI narrative, save for Ed Zitron of the ‘Better Offline’ podcast. Zitron has made it his mission to annihilate the AI narrative altogether. To Zitron, AI is simply a speculative bubble, and conversations of the AI God variety are evidence of the total clownish buffoonery of the Silicon Valley laptop class. The discourse exists, he says, to mask the fact that the AI narrative is based on a lie that got out of hand, and that programs like ChatGPT are nothing more than a rebranded Eliza program. While some of this is true, it is nonetheless true that AI has made its way into our lives whether we wanted it or not, and that in terms of productivity AI does have real-world applications: it can write emails, resumes and cover letters, summarize articles, books and videos, extract data and write code, create images, programs and apps, and research and ‘think’ orders of magnitude faster than any human. Masad’s software Replit can create totally new apps based on users’ prompts in thirty minutes.

To a pessimist, the real emergency, and the time society ought to have responded to it, already passed years ago; we are now witnessing western society slowly circling the drain of the toilet that it designed and built. This is why odious figures like Trump, Musk, Bannon and Owens have become the strange culture bearers for our age. They are the one-eyed princes in a kingdom of blind reactionary voices, filling the void left by a toothless and neutered left, for as Walter Benjamin once wrote, “behind every fascism, there is a failed revolution.”
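A brief aside for the curious: Zitron’s ‘rebranded Eliza’ quip refers to Joseph Weizenbaum’s 1966 chatbot, which produced its eerily conversational replies through nothing more than keyword spotting and canned response templates. A toy sketch in Python (the rules below are invented for illustration, not Weizenbaum’s actual DOCTOR script) shows how little machinery that trick requires:

```python
import re

# A minimal ELIZA-style responder: canned rules, simple pattern
# substitution, zero understanding. Each rule pairs a regex with a
# reply template that reuses the matched fragment.
RULES = [
    (re.compile(r"\bI need (.+)", re.I), "Why do you need {0}?"),
    (re.compile(r"\bI am (.+)", re.I), "How long have you been {0}?"),
    (re.compile(r"\bbecause (.+)", re.I), "Is that the real reason?"),
]

def respond(text: str) -> str:
    for pattern, template in RULES:
        match = pattern.search(text)
        if match:
            return template.format(*match.groups())
    # The default deflection is the classic ELIZA hallmark.
    return "Tell me more."

print(respond("I need a break from AI discourse"))
# -> Why do you need a break from AI discourse?
```

Whether or not one buys the polemic, the sketch makes concrete what ‘pattern matching without understanding’ means; an LLM is, of course, a vastly more capable statistical machine than this, which is exactly what the comparison is meant to deflate.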
Until next time