EU Commission queries Big Tech on AI risks to electoral integrity
The Commission is empowered to levy fines for inaccurate, incomplete, or misleading information provided in response to information requests.
The European Commission has issued formal requests for information (RFI) to Bing, Facebook, Google Search, Instagram, Snapchat, TikTok, YouTube and X regarding their management of risks associated with the use of generative artificial intelligence (AI) that can potentially mislead voters.
In a press release on March 14, the Commission announced that it is requesting additional details from these platforms concerning their measures for mitigating risks tied to generative AI. These include addressing “hallucinations,” the viral dissemination of deepfakes, and the automated manipulation of services that can mislead voters.
These requests are being issued in accordance with the Digital Services Act (DSA), the EU’s updated e-commerce and online governance rules.
The eight platforms are classified as very large online platforms (VLOPs) under the regulation, which mandates that they evaluate and mitigate systemic risks, alongside adhering to other provisions outlined in the rulebook.
Emphasizing that the questions relate to both the dissemination and the creation of generative AI content, the Commission said:
“The Commission is also requesting information and internal documents on the risk assessments and mitigation measures linked to the impact of generative AI on electoral processes, dissemination of illegal content, protection of fundamental rights, gender-based violence, protection of minors and mental well-being.”
The EU — responsible for overseeing the compliance of VLOPs with the specific DSA regulations pertaining to Big Tech — has identified election security as a primary focus for enforcement.
Recently, it has been soliciting input on election security regulations for VLOPs while simultaneously developing formal guidance in this area.
According to the Commission, the requests are intended to contribute to the development of that guidance. While the platforms have until April 3 to furnish information regarding election protection — a request categorized as “urgent” — the EU aims to finalize the election security guidelines by March 27.
Related: First EU country adopts quantum-resistant technology
The Commission highlighted that the cost of generating synthetic content is decreasing significantly, intensifying the threat of deceptive deepfakes being circulated during elections. Consequently, it is increasing its focus on major platforms capable of widely disseminating political deepfakes.
Under Article 74(2) of the DSA, the Commission is empowered to levy fines for inaccurate, incomplete, or misleading information provided in response to information requests. Failure by VLOPs and very large online search engines (VLOSEs) to respond could also result in the imposition of periodic penalty payments.
The European Commission’s request for information comes despite a technology industry accord on combating deceptive AI use during elections, reached at the Munich Security Conference in February and backed by several of the platforms now receiving RFIs from the Commission.