TRUTH, JUSTICE, & LIBERTY FOR ALL

The Los Angelos Times

FOUNDED IN 2023

Responsabilitas Singularis Abundantiam Societati Affert

APRIL 5, 2026

CEO Sam Altman: 'Trust Us on AI Doomsday, We Have It Under Control... Mostly'

March 29, 2026
Tech/Science

Sam Altman, the CEO of OpenAI, has long positioned his company as the vanguard against AI apocalypse, promising that they're building superintelligent systems with safety baked in from the ground up. In a recent announcement, OpenAI touted a supposed breakthrough in AGI safety measures, claiming they've developed new protocols to keep artificial general intelligence from turning us all into paperclips. But then, like a plot twist in a bad sci-fi thriller, leaked internal memos surfaced, revealing a company scrambling to catch up on the very risks they're publicly downplaying.

The memos, which spread like wildfire on Reddit's r/technology subreddit, paint a picture of internal skepticism and corner-cutting that contrasts sharply with OpenAI's polished press releases. While Altman preaches caution and alignment, the documents suggest that safety efforts are often deprioritized in favor of rapid development and investor-pleasing demos. It's the kind of revelation that makes you wonder if "AGI safety" is just corporate jargon for "let's not blow the funding round."

Safety First? Or Profit Second?

OpenAI's announcement came amid growing hype around their latest models, with Altman himself tweeting about the need for "responsible scaling" of AI capabilities. The breakthrough, they said, involves advanced techniques to ensure AI systems remain controllable even as they approach human-level intelligence. Yet the leaked memos, dated from earlier this year, show engineers expressing doubts about whether these safeguards can scale with the tech's explosive growth.

One memo, reportedly from a senior researcher, warns that current safety testing is "inadequate for the pace of deployment," highlighting rushed experiments that prioritize flashy results over thorough risk assessment. This isn't some fringe conspiracy; it's a peek behind the curtain at how the race for AI dominance often trumps the rhetoric of restraint. As one Reddit commenter put it in the viral thread, "They're like firefighters building the arsonist's dream house."

The backlash has been swift, with AI ethicists and former OpenAI employees piling on. Timnit Gebru, the prominent AI researcher who was ousted from Google over her safety advocacy, didn't hold back in a recent interview. "This is what happens when profit motives eclipse ethical guardrails," she said. "Companies like OpenAI talk a big game on safety, but leaks like this show it's all smoke and mirrors until the boardroom demands results."

Doomers and Boosters: United in Hype

The irony here is thicker than a neural network. On one side are the doomers: figures like Eliezer Yudkowsky, who has warned that unchecked AGI could end humanity faster than you can say "existential risk." Yudkowsky, in a 2023 essay, grimly predicted that without ironclad safety, AI development is "summoning the demon." On the other side are the boosters like Altman, who in a TED Talk last year optimistically declared, "We can build AI that benefits all of humanity if we do it right."

But both camps thrive on the same fuel: fear and excitement that drive funding and headlines. The leaks underscore how "safety" often serves as a convenient delay tactic: slow your competitors down with regulations while pushing your own envelope. OpenAI's own charter commits to benefiting humanity, yet the memos reveal internal debates over whether to pause projects amid safety concerns, only to proceed anyway for competitive edge. As Geoffrey Hinton, the "Godfather of AI" who quit Google in 2023 over risks, told CBS News, "I console myself with the normal excuse: If I hadn't done it, somebody else would have." It's a collective shrug from an industry where doomsday prevention is preached but rarely practiced at the expense of progress.

Critics from across the spectrum have weighed in, pointing to the hypocrisy. Even Elon Musk, OpenAI's co-founder turned rival, tweeted in response to the leaks: "Told you so. OpenAI was supposed to be open and safe, now it's neither." Musk's own xAI ventures face similar accusations of hype over substance, but the point stands: in the AI arms race, everyone's cutting corners while claiming the moral high ground.

So here we are, with OpenAI vowing to investigate the leaks and double down on transparency, even as their safety breakthrough announcement rings a tad hollow. It's a reminder that in tech's grand narrative of salvation or doom, the real winners are the ones selling the story. And isn't that just the perfect plot twist: humanity's fate hinging on executives who can't even keep their memos under wraps?