Meta Platforms has unveiled plans to detect and label images produced by artificial intelligence (AI) services from various companies.
The initiative, announced on Tuesday by Nick Clegg, Meta's president of global affairs, marks a proactive step by the social media giant to address the proliferation of digitally manipulated content across its platforms.
In a blog post, Clegg outlined that Meta will use invisible markers embedded within image files to identify and label content generated by external AI services. These labels will be applied to posts on Facebook, Instagram, and Threads, alerting users to the digital origins of such images, even when they closely resemble real photographs. Clegg emphasized the company's commitment to giving users more context about the nature of the content they encounter online.
The implementation of this labeling system extends beyond content generated by Meta’s own AI tools. It will encompass images produced using AI services operated by tech companies including OpenAI, Microsoft, Adobe, Midjourney, Shutterstock, and Alphabet’s Google. This collaborative effort underscores the growing recognition within the tech industry of the need to address the potential risks associated with generative AI technologies, which can produce convincing yet fabricated content based on minimal input.
Clegg likened the initiative to established protocols for coordinating the removal of prohibited content, such as depictions of violence and child exploitation, across online platforms. While expressing confidence in Meta's ability to reliably label AI-generated images, he acknowledged that audio and video content pose similar challenges for which labeling technologies remain under development.
In the absence of mature technologies for labeling audio and video, Meta plans to require users to self-label altered content and may impose penalties for non-compliance, although specific details were not provided. Clegg also noted that no viable mechanism yet exists for labeling AI-generated text, underscoring the difficulty of addressing that form of synthetic content.
Responding to inquiries, a Meta spokesperson declined to confirm whether similar labeling measures would be applied to generative AI content circulated through the encrypted messaging service WhatsApp.
This announcement follows recent scrutiny from Meta's independent oversight board, which criticized the company's policies on misleadingly altered videos and advocated labeling rather than removing such content, according to a Reuters report.
Clegg affirmed his agreement with the oversight board's assessment, acknowledging the need for more robust policies to address the evolving landscape of synthetic and hybrid content. He cited Meta's labeling partnership as evidence of the company's proactive response to these concerns, signaling a broader shift toward transparency and accountability in combating misinformation across its platforms.