Meta to combat generative-AI abuse ahead of EU parliamentary elections
Meta outlines its strategy to combat the misuse of generative AI in content on its platforms ahead of the European Union parliamentary elections in June 2024.
Meta, the parent company of Facebook and Instagram, has outlined its strategy to combat the misuse of generative artificial intelligence (AI) and ensure the integrity of the electoral process on its platforms ahead of the June 2024 European parliamentary elections.
In a blog post on Feb. 25, the head of EU Affairs at Meta, Marco Pancini, reinforced that the principles behind the platform’s “Community Standards” and “Ad Standards” will also apply to content generated by AI.
“AI-generated content is also eligible to be reviewed and rated by our independent fact-checking partners,” he wrote, with one of the ratings to show if the content is “altered,” meaning “faked, manipulated or transformed audio, video, or photos.”
Already, the platform’s policies require photorealistic images created using Meta’s own AI tools to be labeled as such.
This latest announcement revealed that Meta is also building new features to label AI-generated content created by other tools such as Google, OpenAI, Microsoft, Adobe, Midjourney and Shutterstock that users post to any of its platforms.
Additionally, Meta said it plans to add a feature for users to disclose when they have shared an AI-generated video or audio in order for it to be flagged and labeled, with potential penalties for failing to do so.
Related: Texas firm faces criminal probe for misleading US voters with Joe Biden AI
Advertisers running political, social or election-related ads that were altered or created using AI must also disclose the use of AI. The blog post said that between July and December 2023, Meta removed 430,000 ads across the EU for failing to carry a disclaimer.
This topic has become increasingly relevant with elections set to take place around the globe in 2024. Prior to this most recent update, both Meta and Google had spoken out about rules regarding AI-generated political advertising on their platforms.
On Dec. 19, Google said that it would limit answers to election queries on its AI chatbot Gemini – then called Bard – and its generative search feature in the lead-up to the 2024 presidential election in the United States.
OpenAI, the developer of the popular AI chatbot ChatGPT, has also tried to dispel fears regarding AI interference in global elections by creating internal standards to monitor activity on its platforms.
On Feb. 17, 20 companies, including Microsoft, Google, Anthropic, Meta, OpenAI, Stability AI and X, all signed a pledge to curb AI election interference, acknowledging the potential danger of the situation if not controlled.
Governments around the world have also taken action to combat AI misuse ahead of local elections. The European Commission initiated a public consultation on proposed election security guidelines to reduce democratic threats posed by generative AI and deepfakes.
In the U.S., the use of AI-generated voices in automated phone calls was made illegal after scam robocalls featuring a deepfake of President Joe Biden’s voice began circulating and misleading the public.