OpenAI’s New Initiatives to Enhance Transparency in AI-Generated Content

Ryan Daws is a senior editor at TechForge Media with over a decade of experience in tech journalism. His expertise lies in identifying the latest technological trends, dissecting complex topics, and weaving compelling narratives around the most cutting-edge developments. His articles and interviews with leading industry figures have earned him recognition as a key influencer from organisations such as Onalytica. Publications under his stewardship have since gained recognition from leading analyst houses like Forrester for their performance. Find him on X (@gadget_ry) or Mastodon (@gadgetry@techhub.social).

OpenAI is joining the Coalition for Content Provenance and Authenticity (C2PA) steering committee and will integrate the open standard’s metadata into its generative AI models to increase transparency around generated content.

The C2PA standard allows digital content to be certified with metadata proving its origins, whether it was created entirely by AI, edited using AI tools, or captured traditionally. OpenAI has already started adding C2PA metadata to images generated by its latest DALL-E 3 model in ChatGPT and the OpenAI API. The metadata will also be integrated into Sora, OpenAI's upcoming video generation model, when it launches more broadly.
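
For readers curious what this looks like in practice, the sketch below is a minimal, illustrative check for the presence of C2PA provenance data in a JPEG file. C2PA manifests are typically carried in JUMBF boxes inside APP11 marker segments; the snippet only detects whether such a segment is present, it does not verify signatures or parse the manifest, and the file name is a hypothetical placeholder.

```python
import struct

def has_c2pa_segment(path: str) -> bool:
    """Rough check for embedded C2PA provenance data in a JPEG.

    C2PA manifests are typically carried in JUMBF boxes inside APP11
    (0xFFEB) marker segments. This only detects their presence; it does
    not verify signatures or parse the manifest itself.
    """
    with open(path, "rb") as f:
        data = f.read()

    if not data.startswith(b"\xff\xd8"):           # missing SOI marker: not a JPEG
        return False

    offset = 2
    while offset + 4 <= len(data):
        if data[offset] != 0xFF:                   # lost sync with the marker structure
            break
        marker = data[offset + 1]
        if marker == 0xDA:                         # start of scan: no more metadata segments
            break
        (length,) = struct.unpack(">H", data[offset + 2:offset + 4])
        segment = data[offset + 4:offset + 2 + length]
        if marker == 0xEB and b"jumb" in segment:  # APP11 segment containing a JUMBF box
            return True
        offset += 2 + length                       # segment length includes its own two bytes

    return False


if __name__ == "__main__":
    # "dalle3_output.jpg" is a placeholder path, not an actual OpenAI artefact.
    print(has_c2pa_segment("dalle3_output.jpg"))
```

In practice, the C2PA project's own open-source tooling, such as c2patool, is the more reliable way to read and verify these manifests.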

“People can still create deceptive content without this information (or can remove it), but they cannot easily fake or alter this information, making it an important resource to build trust,” OpenAI explained.

The move comes amid growing concerns that AI-generated content could be misused to deceive voters ahead of impending major elections across the US, UK, and other parts of the world. Establishing the authenticity of AI-created media could help counter deepfakes and other manipulated content designed to spread disinformation.

OpenAI acknowledges that technical measures only go so far: making content authenticity work in practice requires platforms, creators, and content handlers to work together to preserve metadata all the way to the consumers at the end of the chain.

Alongside integrating C2PA, OpenAI is also developing new provenance methods. These include a tamper-resistant watermarking system for audio, as well as image detection classifiers designed to identify visuals generated by AI.

OpenAI has begun accepting applications for access to its DALL-E 3 image detection classifier via its Researcher Access Program. The tool predicts the probability that an image was produced by one of OpenAI's models.
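
OpenAI has not published the classifier's interface, so the snippet below is purely a hypothetical sketch of the kind of output the company describes: a single probability that an image originated from one of its models, which a downstream tool might then turn into a label. The names and the threshold are assumptions made for illustration only.

```python
from dataclasses import dataclass

@dataclass
class DetectionResult:
    """Hypothetical result shape: one probability score per image."""
    probability_openai_generated: float  # 0.0 to 1.0

def label_image(result: DetectionResult, threshold: float = 0.5) -> str:
    """Convert the probability into a human-readable label (threshold is arbitrary)."""
    if result.probability_openai_generated >= threshold:
        return f"Likely DALL-E 3 output ({result.probability_openai_generated:.0%})"
    return f"Likely not generated by an OpenAI model ({result.probability_openai_generated:.0%})"

print(label_image(DetectionResult(probability_openai_generated=0.97)))
```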

“Our goal is to enable independent research that assesses the classifier’s effectiveness, analyses its real-world application, surfaces relevant considerations for such use, and explores the characteristics of AI-generated content,” the company said.

Internal testing shows high accuracy in distinguishing non-AI images from DALL-E 3 visuals, with around 98% of DALL-E 3 images correctly identified and fewer than 0.5% of non-AI images incorrectly flagged. However, the classifier is less reliable at distinguishing images produced by DALL-E 3 from those created with other generative AI models.
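
Those two rates only tell part of the story: how useful the classifier is in the wild also depends on how common DALL-E 3 images are in the pool being scanned. The short calculation below, which assumes a few hypothetical prevalence figures (they are not from OpenAI), shows how the reported rates translate into the share of flagged images that really are DALL-E 3 output.

```python
# True-positive and false-positive rates reported from OpenAI's internal testing.
tpr = 0.98    # ~98% of DALL-E 3 images correctly identified
fpr = 0.005   # <0.5% of non-AI images incorrectly flagged

# Assumed shares of DALL-E 3 images in the scanned pool (illustrative only).
for prevalence in (0.50, 0.10, 0.01):
    true_flags = tpr * prevalence
    false_flags = fpr * (1 - prevalence)
    precision = true_flags / (true_flags + false_flags)
    print(f"prevalence {prevalence:.0%}: {precision:.1%} of flagged images are DALL-E 3")
```

In other words, even a very low false-positive rate matters once genuine photographs vastly outnumber AI-generated ones in the pool being checked.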

OpenAI has also incorporated watermarking into its Voice Engine custom voice model, currently in limited preview.

The company believes increased adoption of provenance standards will lead to metadata accompanying content through its full lifecycle to fill “a crucial gap in digital content authenticity practices.”

OpenAI is also partnering with Microsoft to launch a $2 million societal resilience fund dedicated to supporting AI education and understanding, including through organisations such as AARP, International IDEA, and the Partnership on AI.

“While technical solutions like those above give us active tools for our defences, effectively enabling content authenticity in practice will require collective action,” OpenAI said.

“Our efforts around provenance are just one part of a broader industry effort; many of our peer research labs and generative AI companies are also advancing research in this area. We commend these efforts, and the industry must collaborate and share insights to enhance our understanding and continue to promote transparency online.”

Credit: Marc Sendra Martorell

See also: Chuck Ros, SoftServe: Delivering transformative AI solutions responsibly

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

Tags: ai, artificial intelligence, c2pa, ethics, genai, generative ai, openai, Society
