Biden administration takes action to safeguard public from AI risks
By Dec. 1, agencies must establish specific safeguards for AI applications that could affect the rights or safety of Americans, as outlined in a White House fact sheet.
The White House has unveiled its first comprehensive policy for managing the risks associated with artificial intelligence (AI), mandating that agencies step up reporting on their AI use and address the potential risks the technology poses.
According to a March 28 White House memorandum, federal agencies must, within 60 days, appoint a chief AI officer, disclose AI usage and integrate protective measures.
This directive aligns with United States President Joe Biden’s executive order on AI from October 2023. In a teleconference with reporters, Vice President Kamala Harris said:
“I believe that all leaders from governments, civil society and the private sector have a moral, ethical and societal duty to make sure that artificial intelligence is adopted and advanced in a way that protects the public from potential harm while ensuring everyone can enjoy its full benefits.”
The latest regulation, an initiative by the Office of Management and Budget (OMB), aims to guide the entire federal government in safely and efficiently utilizing artificial intelligence amid its rapid expansion.
While the government seeks to harness AI’s potential, the Biden administration remains cautious of its evolving risks.
As stated in the memo, certain AI use cases, particularly those within the Department of Defense, will not be mandated for disclosure in the inventory, as their sharing would contradict existing laws and government-wide policies.
By Dec. 1, agencies must establish specific safeguards for AI applications that could affect the rights or safety of Americans. For instance, travelers should have the option to opt out of facial recognition technology used by the Transportation Security Administration at airports.
Agencies unable to implement these safeguards must stop using the AI system in question, unless agency leadership can justify that discontinuing it would heighten risks to safety or rights or hinder critical agency operations.
The OMB’s recent AI directives align with the Biden administration’s blueprint for an “AI Bill of Rights” from October 2022 and the National Institute of Standards and Technology’s AI Risk Management Framework from January 2023. These initiatives emphasize the importance of creating reliable AI systems.
The OMB also seeks input on enforcing compliance and best practices among government contractors supplying technology. It intends to ensure alignment between agencies’ AI contracts and its policy later in 2024.
The administration also unveiled its intention to recruit 100 AI professionals into the government by the summer, as outlined in the October executive order’s “talent surge.”