
OpenAI’s freshly launched video app Sora 2 promised a creative frontier: users could transform text into realistic videos and share them in a social feed. But within hours of release, the platform surfaced deeply disturbing content, including scenes of violence, racism, and misuse of copyrighted characters, The Guardian reports.
Reviewers prompted Sora to generate bomb scares, mass-shooting footage, war zones, and grotesque twists on popular characters, such as SpongeBob in Nazi imagery. One “Charlottesville rally” clip featured a Black protester shouting a white supremacist slogan. These videos violated OpenAI’s own rules against promoting violence or causing harm, yet enforcement clearly failed.
The article makes a strong point: lifelike synthetic media can blur fact and fiction, and when tools such as Sora fall into the wrong hands, they can be used for threats, bullying, and disinformation. Misinformation experts warn we’re entering a world where “the guardrails are not real.”
OpenAI CEO Sam Altman cast the launch as a “ChatGPT for creativity” moment while acknowledging concerns about addictive social media and misuse. The company says it has embedded safeguards and limits on generating likenesses, but critics say those filters are already being bypassed with ease.
A major flashpoint is how Sora handles copyright: rather than requiring permission to replicate characters, OpenAI opted for an “opt-out” model that forces rights holders to flag problematic content manually. Experts say the approach is fundamentally flawed.
The article frames Sora as a warning: creating a powerful tool without strong oversight may unleash media chaos. In a digital age already drowning in disinformation, Sora underscores how fragile our trust in truth really is.