US AI safety consortium revealed with the biggest names in tech

The U.S. released an all-star list of tech companies that will participate in its AI safety consortium, including Microsoft, Meta, Nvidia and Google, calling the initiative the “first of its kind.”

The United States Department of Commerce announced on Feb. 8 the creation of its AI Safety Institute Consortium (AISIC), filled with a hefty list of participants from every corner of the tech industry. 

Gina Raimondo, the Secretary of Commerce, said the consortium aims to “unite AI creators and users, academics, government and industry researchers, and civil society organizations” to foster an environment that creates safe and trustworthy artificial intelligence (AI).

“The U.S. government has a significant role to play in setting the standards and developing the tools we need to mitigate the risks and harness the immense potential of AI. President Biden directed us to pull every lever to accomplish two key goals: set safety standards and protect our innovation ecosystem.”

AISIC will be responsible for developing guidelines on red-teaming, AI capability evaluations, risk management, safety and security, and the watermarking of synthetic content, the last of which is a key issue.

The consortium is made up of more than 200 members, including the biggest names in the industry: Microsoft, Google, Meta, Apple, OpenAI, Anthropic, Adobe, Nvidia, GitHub, the Frontier Model Forum, Hewlett Packard Enterprise, IBM and many more.

According to the official announcement, the consortium represents “the largest collection of test and evaluation teams established to date.” State and local governments and nonprofits will also participate, working with organizations from “like-minded nations” on standards for the industry.

Related: Microsoft Azure lays foundation for India-focused voice-based generative AI apps

This development follows the creation of the U.S. AI Safety Institute (USAISI), which was established as a result of President Biden’s executive order on AI safety in late October 2023.

Raimondo said Biden’s executive order will “ensure” that the U.S. remains at the “head of the pack” in the development and deployment of safe and responsible AI.

Bruce Reed, the White House Deputy Chief of Staff, said keeping up with AI means:

“…we have to move fast and make sure everyone – from the government to the private sector to academia – is rowing in the same direction.”

Reed convened the White House AI Council on Jan. 30 to hear reports from federal agencies on how they have implemented actions from the executive order. This resulted in an updated fact sheet stating that the U.S. “met or exceeded the many requirements that were slated for the first three months in the executive order.”
