OpenAI Introduces DALL-E 3 AI Detector Tool
OpenAI is introducing new AI detector tools in an effort to combat the spread of misinformation and the misuse of generative AI.
- OpenAI has joined the Coalition for Content Provenance and Authenticity (C2PA) Steering Committee.
- The AI detector tool can identify images created by the DALL-E 3 system.
- The company is also implementing tamper-resistant watermarking.
OpenAI is introducing a new tool that detects images generated by its AI image generator, DALL-E 3, a move that might deter some users from the platform.
With the rise of AI, people have become entranced by what it can produce, be it text, video, or images. At the same time, there is a global worry about AI being used to spread misinformation or to misrepresent people and events. As companies kept growing their various AI systems, efforts to rein in their misuse grew alongside them.
In a blog post, the AI startup shared the new measures it will be taking to help people understand the origins of what they see online, whether it is human-made or AI-generated. To that end, OpenAI has joined the Steering Committee of the Coalition for Content Provenance and Authenticity (C2PA). This coalition creates technical standards for tracing the origin and history of digital content to fight misinformation. C2PA specifies a system that allows creators to claim ownership of their content and consumers to verify its authenticity.
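To make the provenance idea concrete, here is a minimal toy sketch of the sign-and-verify pattern that C2PA standardizes: a creator binds origin metadata to a hash of the content and signs it, and a consumer later checks both. Real C2PA manifests are embedded in the file and signed with X.509 certificate chains, not the shared key used below; every name in this sketch is illustrative, not an actual C2PA API.

```python
import hashlib
import hmac

# Toy stand-in for C2PA-style provenance. A creator binds origin
# metadata to a hash of the content and signs the result; a consumer
# recomputes the hash and checks the signature. Real C2PA uses
# embedded manifests and X.509 certificates, not a shared HMAC key.

SIGNING_KEY = b"creator-signing-key"  # illustrative only

def sign_manifest(content: bytes, origin: str) -> dict:
    """Creator side: produce a signed claim about the content's origin."""
    digest = hashlib.sha256(content).hexdigest()
    manifest = f"origin={origin};sha256={digest}"
    signature = hmac.new(SIGNING_KEY, manifest.encode(), hashlib.sha256).hexdigest()
    return {"manifest": manifest, "signature": signature}

def verify_manifest(content: bytes, claim: dict) -> bool:
    """Consumer side: confirm the content still matches the signed claim."""
    digest = hashlib.sha256(content).hexdigest()
    if f"sha256={digest}" not in claim["manifest"]:
        return False  # content was altered after signing
    expected = hmac.new(SIGNING_KEY, claim["manifest"].encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, claim["signature"])

image_bytes = b"...raw image data..."
claim = sign_manifest(image_bytes, "DALL-E 3")
print(verify_manifest(image_bytes, claim))         # True: intact
print(verify_manifest(image_bytes + b"x", claim))  # False: tampered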
For the last year, OpenAI has been adding C2PA metadata to all images created by DALL-E 3. Now, the company has gone a step further, implementing tamper-resistant watermarking. For example, it will add an invisible signal to AI-generated audio that is hard to remove.
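OpenAI has not disclosed how its watermark works. As a rough illustration of the general idea behind hard-to-remove audio watermarks, the toy sketch below embeds a faint pseudorandom signal keyed by a secret seed and detects it by correlation, a classic spread-spectrum approach; the function names and parameters are invented for the example.

```python
import numpy as np

# Toy spread-spectrum watermark: add a faint pseudorandom signal keyed
# by a secret seed, then detect it by correlating the audio against the
# same signal. Purely illustrative; not OpenAI's actual scheme.

def embed_watermark(audio: np.ndarray, seed: int, strength: float = 0.05) -> np.ndarray:
    mark = np.random.default_rng(seed).standard_normal(audio.shape)
    return audio + strength * mark  # faint relative to the audio itself

def detect_watermark(audio: np.ndarray, seed: int, threshold: float = 0.02) -> bool:
    mark = np.random.default_rng(seed).standard_normal(audio.shape)
    # Correlation is near zero for unmarked audio, ~strength for marked.
    score = float(np.dot(audio, mark)) / audio.size
    return score > threshold

clean = np.random.default_rng(0).standard_normal(48_000)  # 1 s of noise at 48 kHz
marked = embed_watermark(clean, seed=42)
print(detect_watermark(marked, seed=42))  # True
print(detect_watermark(clean, seed=42))   # False
```

Because the mark is spread across the entire signal at low amplitude, stripping it out without the secret seed tends to require damaging the audio itself, which is what makes this family of watermarks hard to remove.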
Beyond that, the AI startup has also developed detection classifiers that use AI to assess the likelihood that content is AI-generated. OpenAI claims that its image detection classifier can predict whether an image was generated by DALL-E 3 with an impressive 98 percent accuracy rate. The team wrote, “Internal testing on an early version of our classifier has shown high accuracy for distinguishing between non-AI-generated images and those created by DALL·E 3 products.” Compressing, cropping, and changing the saturation have minimal effects on the AI detector tool’s performance. However, other types of modifications, like hue alterations, can reduce it.
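OpenAI's classifier is not a public API, so there is nothing to call directly. Assuming a scoring function that returns the probability an image is AI-generated, a robustness check along the lines the team describes might look like the hypothetical harness below, which re-scores an image after compression, cropping, saturation, and hue edits; `classify_image` is a placeholder, not a real OpenAI endpoint.

```python
from io import BytesIO

import numpy as np
from PIL import Image, ImageEnhance

# Hypothetical robustness harness: re-score an image after the kinds of
# edits the article mentions. `classify_image` is a placeholder, since
# OpenAI's detector is not a public API; plug in a real scorer there.

def classify_image(img: Image.Image) -> float:
    """Return P(image is AI-generated). Dummy constant for the demo."""
    return 0.98  # stand-in for a real detector call

def jpeg_compress(img: Image.Image, quality: int = 30) -> Image.Image:
    buf = BytesIO()
    img.convert("RGB").save(buf, format="JPEG", quality=quality)
    buf.seek(0)
    return Image.open(buf)

def hue_shift(img: Image.Image, degrees: int = 60) -> Image.Image:
    hsv = np.array(img.convert("HSV"))
    hsv[..., 0] = (hsv[..., 0].astype(int) + degrees * 255 // 360) % 256
    return Image.fromarray(hsv, "HSV").convert("RGB")

img = Image.new("RGB", (512, 512), (70, 130, 180))  # stand-in for a DALL-E 3 image
w, h = img.size
variants = {
    "original": img,
    "jpeg_q30": jpeg_compress(img),
    "center_crop": img.crop((w // 4, h // 4, 3 * w // 4, 3 * h // 4)),
    "saturation_x1.5": ImageEnhance.Color(img).enhance(1.5),
    "hue_shift_60deg": hue_shift(img),
}
for name, variant in variants.items():
    print(f"{name:16} P(ai-generated) = {classify_image(variant):.2f}")
```

Per the article's description, the first few edits should barely move a real detector's score, while the hue shift is the kind of change that can degrade it.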
As admirable as OpenAI's efforts are, its classifier cannot detect images generated by other companies' AI systems. To remedy this, the AI startup seeks to refine its technology, welcoming external feedback and collaboration.
But what if every generative AI platform made a tool that could detect its own AI system's creations? Makes sense, doesn't it? Every company gets to keep its 'secrets' while ensuring that people know when they are looking at, reading, watching, or listening to something AI-generated.
It would be a win-win situation, except for the fact that many generative AI users pass off these creations as their own. Some, for example, pose as graphic designers online, even though their graphic design experience extends only to knowing how to prompt an AI image generator. So, if generative AI platforms produce detector tools for their own output, clients or future employers could determine who exactly created the work, and they could then act accordingly.
Would this ruin the allure of generative AI platforms for these millions of users?