Balancing creative freedom against deepfake dangers in the age of AI-generated media
In 2024, OpenAI made a significant policy reversal regarding adult content in its “Model Spec”—the foundational document governing how its language models should behave. The company announced it would permit sexual content, with the sole exception of material involving minors. This marked a dramatic departure from its previous blanket prohibition on all explicit material.
The gap between policy and practice
For users who had long advocated for more permissive content policies, the announcement seemed like a victory. However, many quickly discovered that ChatGPT’s actual behavior didn’t align with the revised guidelines. Users reported that while restrictions initially loosened, the system soon reverted to blocking explicit content through what one frustrated forum poster described as “metaphoric obfuscation”—despite the updated policy clearly stating otherwise.
A new development at DevDay 2025
At this year’s DevDay conference, OpenAI announced its intention to finally bridge this gap by allowing “mature apps” on its platform. The rollout is contingent on implementing a comprehensive age verification system, suggesting the company is preparing for a significant expansion into adult-oriented services.
Troubling precedents
This strategic pivot raises legitimate safety concerns. The cautionary tale of Grok, Elon Musk’s AI chatbot, demonstrates the potential pitfalls of loosening content restrictions. That platform quickly became notorious for enabling the creation of exploitative imagery, including inappropriate depictions of children. Similarly, other AI systems have fueled an epidemic of non-consensual deepfakes—explicit synthetic media featuring real people created without their knowledge or permission.
A pattern of inadequate safeguards
This isn’t the first instance where OpenAI’s promises have fallen short of implementation. The company has faced mounting criticism over ChatGPT’s tendency toward excessive agreeability, which has reportedly contributed to harmful mental health outcomes for vulnerable users. After Stanford researchers issued a stark warning about these dangers, OpenAI released what it called a “hotfix.” Critics, however, characterized these changes as superficial measures that failed to address the fundamental issues at stake.
The technical and political complexity
Part of OpenAI’s challenge stems from the inherent difficulty of modifying large language models once they’re operational. These systems can be remarkably resistant to fine-tuning, and some iterations actually perform worse than earlier versions on certain metrics.
Yet technical limitations may not tell the whole story. OpenAI operates with considerable opacity, and its decision-making process now appears influenced by national security considerations, given the company’s connections to Pentagon officials. This raises questions about whether delayed safety improvements reflect purely technical constraints or other priorities.
What lies ahead
Should OpenAI proceed with opening its platform to adult content creators, the company will likely face an extended and difficult moderation challenge. Determining appropriate boundaries for adult material—and managing the darker aspects of digital sexual content—will test the company’s commitment to user safety in ways it hasn’t yet confronted.
The coming months will reveal whether OpenAI has learned from the mistakes of other platforms, or whether it will repeat them at an unprecedented scale.
The creative freedom paradox
OpenAI’s move toward less restrictive policies presents a fundamental paradox in AI development. On one hand, relaxing content restrictions would unlock significant creative potential. Artists, filmmakers, and writers could generate more sophisticated horror content featuring graphic violence, explore mature themes with nuance, and create adult-oriented entertainment without artificial constraints. This creative liberation could benefit legitimate artistic expression, allowing creators to produce work that reflects the full spectrum of human experience—from psychological thrillers with disturbing imagery to adult romance with explicit scenes.
However, this expanded freedom comes with profound risks that are particularly acute for visual media. While text-based content carries its own concerns, photographs and videos generated by AI systems like Sora 2 possess a dangerous verisimilitude. Synthetic images and videos can be weaponized to fabricate evidence, destroy reputations, and manipulate public perception in ways that text simply cannot. The creation of non-consensual explicit deepfakes—realistic videos depicting real people in fabricated sexual situations—represents perhaps the most pernicious threat. These synthetic materials can devastate individuals’ personal and professional lives, and once distributed online, they become nearly impossible to eradicate.
The challenge for OpenAI and similar companies lies in finding a balance: enabling genuine creative expression while preventing the technology from becoming a tool for harassment, defamation, and the erosion of truth itself. As photo and video generation becomes increasingly indistinguishable from reality, the stakes of getting this balance wrong extend beyond individual harm to threaten our collective ability to trust what we see. Whether current safeguards and age verification systems will prove sufficient to navigate this treacherous territory remains an open—and urgent—question.