TLDR: OpenAI CEO Sam Altman has warned that Artificial Intelligence could lead to ‘really bad stuff’ and ‘really strange or scary moments.’ Speaking on the a16z podcast, Altman highlighted concerns over the rapid advancement of deepfake technology, exemplified by OpenAI’s Sora 2, and the broader societal implications of AI’s integration, while advocating careful safety testing over extensive government regulation.
OpenAI CEO Sam Altman has delivered a stark warning about the future trajectory of Artificial Intelligence, cautioning that the technology could usher in ‘some really bad stuff’ and ‘really strange or scary moments.’ His remarks, made during an interview on the a16z podcast from venture capital firm Andreessen Horowitz on October 8, 2025, underscore growing concerns within the AI community about the rapid evolution and societal impact of advanced AI systems.
Altman specifically pointed to the proliferation of deepfake technology as a significant immediate threat. The recent popularity of OpenAI’s new video application, Sora 2, which quickly climbed to the top of Apple’s App Store, has demonstrated how fast such sophisticated technology can become mainstream. Shortly after Sora 2’s release, deepfake videos of renowned individuals, including Dr. Martin Luther King Jr., and of various copyrighted characters surfaced on social media. This prompted OpenAI to issue an apology and take corrective measures, such as pausing the generation of videos depicting Dr. King and exploring ‘opt-in’ mechanisms for character usage and revenue sharing with rights-holders. Altman acknowledged that ‘very soon the world is going to have to contend with incredible video models that can deepfake anyone or kind of show anything you want.’
Beyond deepfakes, Altman articulated broader anxieties about AI’s long-term societal effects. He described a scenario where ‘billions of people talking to the same brain’ could trigger unexpected and rapid chain reactions, leading to ‘weird, societal-scale things.’ He also cautioned against a future where individuals increasingly delegate programming and decision-making to computers, emphasizing that the absence of a catastrophic AI-related event to date ‘doesn’t mean it never will.’ Altman stressed that the more insidious dangers might not be ‘killer robots’ but rather ‘very subtle societal misalignments’ that could cause things to ‘go horribly wrong’ even without any ill intent.
Despite these grave warnings, Altman expressed skepticism about extensive government regulation of AI, stating that ‘Most regulation probably has a lot of downside.’ Instead, he advocated for ‘very careful safety testing’ specifically for what he termed ‘extremely superhuman’ AI models. He believes that society and AI must ‘co-evolve’ and that ‘you can’t just drop the thing at the end,’ concluding with a belief in a societal adaptation process where ‘we’ll develop some guardrails around it as a society.’


