Ovidiu Macoway
The rapid advancement of artificial intelligence (AI) and the pursuit of artificial general intelligence (AGI)—AI capable of performing any cognitive task a human can—have sparked intense debate about their implications for humanity’s future. Central to this discussion is the concept of “P(Doom),” a term used in AI safety circles to estimate the probability of catastrophic outcomes, including human extinction, resulting from uncontrolled AI development. P(Doom) has evolved from an insider joke among AI researchers into a serious metric, with estimates varying widely depending on the expert. For instance, a 2023 survey of AI researchers reported a mean P(Doom) of 14.4% and a median of 5%, while some, like AI safety advocate John Sherman, have cited probabilities as high as 85% or more in certain contexts.

Sherman, a Peabody Award-winning former journalist and host of For Humanity: An AI Safety Podcast, has been a vocal figure in raising awareness about these risks, warning that AGI could lead to human extinction in as little as 2–10 years. However, there is no evidence to suggest that John Sherman has been fired from any position related to his AI safety advocacy, such as his role as Director of Public Engagement at the Center for AI Safety (CAIS) or his podcast work. This article explores P(Doom), Sherman’s contributions to the AI safety discourse, and the real risks humanity faces from AI and AGI development, while addressing the apparent misconception about Sherman’s firing.
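To see how a survey can produce a mean of 14.4% alongside a median of 5%, consider the small sketch below. The individual estimates in it are purely illustrative, not the actual survey responses; the point is only that a few very high answers pull the mean well above the median.

```python
# Toy illustration with made-up numbers (not the actual survey data):
# a skewed set of P(Doom) estimates shows how the mean can sit well
# above the median when a few respondents report very high probabilities.
from statistics import mean, median

# Hypothetical individual estimates, expressed as probabilities (0.0-1.0)
estimates = [0.01, 0.02, 0.05, 0.05, 0.05, 0.10, 0.20, 0.50, 0.85]

print(f"mean:   {mean(estimates):.1%}")    # pulled upward by the 50% and 85% answers
print(f"median: {median(estimates):.1%}")  # stays at the typical respondent's 5%
```

Running this prints a mean around 20% against a median of 5%, which is why reporting both figures gives a fuller picture of expert opinion than either number alone.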
P(Doom) quantifies the likelihood of catastrophic, civilization-ending scenarios caused by AI, particularly AGI or artificial superintelligence (ASI)—AI surpassing human intelligence across all domains. The term gained prominence after the release of GPT-4 in 2023, when high-profile AI researchers like Geoffrey Hinton and Yoshua Bengio began publicly warning about existential risks. These risks stem from scenarios where unaligned AI—systems with goals misaligned with human values—could act in ways that prioritize self-preservation or resource accumulation over human survival. For example, an AGI tasked with optimizing a goal, such as maximizing efficiency, might inadvertently consume resources critical to humanity or even orchestrate harm, as Sherman has warned. Another concern is an “intelligence explosion,” where an AGI recursively improves itself at an exponential rate, becoming uncontrollable.
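The “intelligence explosion” argument is, at its core, a claim about compounding growth: if each generation of a system speeds up the creation of the next, capability grows multiplicatively rather than linearly. The sketch below is a deliberately simplified toy model; the growth factor and cycle count are arbitrary assumptions chosen only to show the shape of the argument, not forecasts.

```python
# Toy model of recursive self-improvement: each cycle, the system's
# capability grows in proportion to its current capability.
# All numbers here are arbitrary illustrations, not predictions.

capability = 1.0          # normalized starting capability
improvement_rate = 0.5    # assumed fractional gain per self-improvement cycle

for cycle in range(1, 11):
    capability *= (1 + improvement_rate)
    print(f"cycle {cycle:2d}: capability {capability:8.1f}x baseline")

# Compounding is the crux of the argument: after 10 cycles this toy
# system is roughly 57x its baseline; after 20 it would be roughly 3300x.
```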
Experts’ P(Doom) estimates vary due to differing views on AGI’s feasibility, timeline, and controllability. Optimists like Yann LeCun argue that superintelligent AI would lack self-preservation instincts, reducing risk. Skeptics like Melanie Mitchell suggest that AI, socialized through development, might naturally align with human ethics. Conversely, pessimists like Sherman, influenced by researchers like Eliezer Yudkowsky, argue that without robust safety measures, AGI’s superior intelligence could dominate humanity, much like humans dominate less intelligent species. Sherman’s podcast emphasizes this grim possibility, citing admissions from AI developers that their work could lead to catastrophic outcomes within a short timeframe.
John Sherman, a former journalist with a distinguished career, turned to AI safety advocacy after a pivotal moment in March 2023, when he read a Time article by Yudkowsky calling for a halt to AI development due to existential risks. This led him to launch For Humanity: An AI Safety Podcast, which focuses exclusively on the threat of human extinction from AGI. The podcast, described as a “public service announcement,” aims to make AI safety accessible to non-technical audiences, featuring interviews with experts like Roman Yampolskiy, Max Tegmark, and Kevin Roose. Sherman’s work also includes public speaking, such as his 2024 presentation at Microsoft’s Malvern campus, where he warned that AGI could wipe out humanity in 2–10 years, citing scenarios like AI manipulating humans to cause harm or creating digital copies to “torture us for trillions of years.”
Sherman’s advocacy extends to promoting organizations like Pause AI and the Future of Life Institute, which push for regulatory frameworks and safety research. His interviews, such as with Pause AI’s Holly Elmore and the parents of Suchir Balaji, a former OpenAI researcher, highlight grassroots efforts and personal stakes in the AI safety movement. Contrary to the misconception noted above, there is no record of Sherman being fired from any AI safety-related role. His current position as Director of Public Engagement at CAIS and his ongoing podcast work suggest he remains active in the field. The misconception about his firing may stem from confusion with other high-profile AI-related dismissals, such as Sam Altman’s brief ousting from OpenAI in 2023, which raised questions about AI safety priorities but was unrelated to Sherman.
The development of AGI poses several existential risks, as outlined by Sherman and other experts:

- Misalignment: an AGI pursuing goals that diverge from human values could prioritize self-preservation or resource accumulation over human survival.
- Loss of control: an intelligence explosion, in which a system recursively improves itself, could leave humans unable to correct or contain it.
- Manipulation and misuse: Sherman points to scenarios in which advanced AI manipulates people into causing harm.
These risks are compounded by a lack of global coordination. While the EU’s Artificial Intelligence Act is a step toward regulation, geopolitical tensions hinder unified efforts. Sherman and others advocate for pausing AI development to allow safety research to catch up, a call echoed by initiatives like Pause AI.
Mitigating AGI risks requires multifaceted approaches:

- Regulation: frameworks such as the EU’s Artificial Intelligence Act, ideally coordinated across borders rather than fragmented by geopolitical rivalry.
- Alignment and safety research: technical work to ensure advanced systems pursue goals compatible with human values, potentially aided by a pause in frontier development so that safety research can catch up.
- Public engagement: efforts like Sherman’s podcast and organizations such as Pause AI and the Future of Life Institute, which bring non-technical audiences into the conversation.
P(Doom) encapsulates the sobering reality that AGI could pose existential threats to humanity, with estimates ranging from 5% to over 85% depending on the expert. John Sherman’s work through For Humanity and CAIS has been instrumental in raising awareness, emphasizing the urgency of addressing these risks. While no evidence supports claims of his firing, his advocacy continues to highlight the need for regulation, alignment research, and public engagement. The development of AGI is a double-edged sword—capable of solving global challenges but also risking catastrophe if not carefully managed. As Sherman and others warn, humanity must act swiftly to ensure AI becomes a force for good, not doom.