Meta may apply ‘penalties’ to users who fail to disclose use of generative AI for images

The company says “AI-generated content is also eligible to be fact-checked.”

Meta will roll out new standards for AI-generated content on Facebook, Instagram and Threads over the coming months, according to a Feb. 6 company blog post.

Content identified as AI-generated, whether through metadata or intentional invisible watermarking, will be given a visible label. Users on Meta platforms will also be able to flag unlabeled content they suspect was generated by artificial intelligence.

Crowd-sourcing

If any of this sounds familiar, it’s because it mirrors Meta’s early content moderation practices. Before the era of AI-generated content, the company (then Facebook) developed a user-facing system for reporting content that violated the platform’s terms of service.

Fast-forward to 2024, and Meta is once again equipping users across its social networks with content-flagging tools, tapping into what may be the world’s largest consumer crowd-sourcing force.

This also means that creators on the company’s platforms will have to label their own work as AI-generated whenever applicable or face potential penalties.

According to the blog post:

“We’ll require people to use this disclosure and label tool when they post organic content with a photorealistic video or realistic-sounding audio that was digitally created or altered, and we may apply penalties if they fail to do so.”

Detecting AI-generated content

Meta says whenever its built-in tools are used to create AI-generated content, that content receives a watermark and label clearly indicating its origin. However, not all generative AI systems have these guardrails embedded.
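As a rough illustration of how a metadata-based check might work, consider the IPTC standard that Meta and its partners rely on: compliant image generators can embed the digital source type "trainedAlgorithmicMedia" in a file’s XMP metadata, and a platform could scan uploads for that marker. The sketch below is hypothetical (the file name and function are illustrative, and this is not Meta’s actual detection pipeline), but the IPTC URI it searches for is the real standard value for AI-generated media.

```python
# Hypothetical sketch: flag an image whose embedded XMP metadata declares
# the IPTC digital source type "trainedAlgorithmicMedia", the standard
# marker for AI-generated media. Not Meta's actual detection pipeline.

# Real IPTC NewsCodes URI that compliant generators embed in XMP metadata.
AI_SOURCE_URI = b"http://cv.iptc.org/newscodes/digitalsourcetype/trainedAlgorithmicMedia"


def declares_ai_origin(path: str) -> bool:
    """Return True if the file's metadata self-reports an AI origin.

    XMP metadata is stored as plain XML inside the image file, so a
    simple byte search suffices for a quick check; a production system
    would parse the XMP packet properly and also verify any invisible
    watermark, since metadata can be stripped or edited.
    """
    with open(path, "rb") as f:
        return AI_SOURCE_URI in f.read()


if __name__ == "__main__":
    print(declares_ai_origin("example.jpg"))  # hypothetical file name
```

Note the caveat in the comment: because metadata can be stripped, this kind of check only catches content that self-discloses, which is why the invisible watermarks discussed next matter.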

The company says it’s working with other companies via consortium partnerships — including Google, OpenAI, Microsoft, Adobe, Midjourney and Shutterstock — and will continue to develop methods for detecting invisible watermarks at scale.

Unfortunately, these methods may only apply to AI-generated images. “While companies are starting to include signals in their image generators,” reads the blog post, “they haven’t started including them in AI tools that generate audio and video at the same scale.”

Per the post, this means Meta cannot currently detect AI-generated audio and video, including deepfakes, at scale.

Related: Meta unveils Artemis chip to boost AI, cut Nvidia ties — Report
