This week, OpenAI made headlines by disclosing that it is exploring the “responsible” generation of explicit content with its AI tools. The disclosure came in a document intended to provide transparency and solicit feedback on the company’s model guidelines. OpenAI’s new NSFW policy aims to open a dialogue about whether explicit images and text belong in its AI offerings.
“We aim to maximize user control provided it doesn’t breach any laws or infringe on the rights of others,” said Joanne Jang, a member of OpenAI’s product team, in a conversation with NPR. “There are instances where content involving sexuality or nudity is crucial for our users.”
This is not OpenAI’s first foray into contentious territory. Earlier this year, CTO Mira Murati told The Wall Street Journal she was uncertain whether the company would eventually allow its video generation tool, Sora, to create adult content.
What should we make of this?
There is a conceivable future where OpenAI permits AI-generated pornography, and everything functions smoothly. Jang is not wrong to argue that adult artistic expression is a legitimate use case that could benefit from AI-powered tools.
However, there’s a lingering doubt about whether OpenAI—or any generative AI vendor—can execute this responsibly.
Consider the issue of creators’ rights. OpenAI’s models were trained on vast amounts of public web content, some of it undoubtedly pornographic. But OpenAI hasn’t secured licenses for all of that material, and it only recently began allowing creators to opt out of training, and then only in limited scenarios.
The adult content industry is already challenging for creators trying to make a living. Should OpenAI bring AI-generated adult content into the mainstream, it would introduce fierce competition, potentially built on the very work of existing creators.
Another major concern is the inadequacies of current safeguards. While OpenAI and its competitors have been refining their filtering and moderation tools for years, users continually find ways to circumvent these protections and misuse AI models, applications, and platforms.
In January, Microsoft had to modify its Designer image generation tool, which utilizes OpenAI models, after users created nude images of Taylor Swift. On the text generation side, it’s relatively easy to find chatbots based on supposedly “safe” models, like Anthropic’s Claude 3, that quickly produce erotica.
AI has already become a tool for a new kind of sexual abuse. Incidents are surfacing where young students use AI apps to create “stripped” photos of their classmates without consent. A 2021 poll in the U.K., New Zealand, and Australia reported that 14% of respondents aged 16 to 64 had been targeted with deepfake imagery.
New legislation in the U.S. and other countries aims to address this, but it remains an open question whether a justice system that already struggles to prosecute many sex crimes can keep pace with a sector as fast-moving as AI.
It’s hard to envision a risk-free way for OpenAI to approach AI-generated porn. OpenAI may need to reevaluate its stance, or, against the odds, it may devise a better solution. Either way, it seems we’ll have answers sooner rather than later.