Your right to bear AI could soon be infringed upon

The more powerful artificial intelligence becomes, the more challenging it will be to regulate it without restricting civil liberties.

The only way to combat the malicious use of artificial intelligence (AI) may be to continuously develop more powerful AI and put it in government hands.

That seems to be the conclusion a team of researchers came to in a recently published paper titled “Computing Power and the Governance of Artificial Intelligence.”

Scientists from OpenAI, Cambridge, Oxford, and a dozen other universities and institutes conducted the research to investigate the current and future challenges of governing the development and use of AI.

Centralization

The paper’s main argument is that the only realistic way to control who has access to the most powerful AI systems in the future is to control access to the hardware needed to train and run those models.

As the researchers put it:

“More precisely, policymakers could use compute to facilitate regulatory visibility of AI, allocate resources to promote beneficial outcomes, and enforce restrictions against irresponsible or malicious AI development and usage.”

In this context, “compute” refers to the foundational hardware required to develop AI, such as GPUs and CPUs.

Essentially, the researchers are suggesting that the best way to prevent people from using AI to cause harm is to cut them off at the source. That would require governments to develop systems for monitoring the development, sale, and operation of any hardware considered necessary for building advanced AI.
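To make the idea concrete, the sketch below shows what one small piece of such a monitoring regime might look like in code. It is purely illustrative: the paper does not prescribe an implementation, and every name, field, and threshold here is a hypothetical assumption.

```python
# Hypothetical sketch only: the paper describes monitoring the sale and operation
# of AI-relevant hardware but does not prescribe a design. All names, fields, and
# thresholds below are illustrative assumptions, not a real system.
from dataclasses import dataclass


@dataclass
class ChipRecord:
    serial: str
    model: str
    owner: str
    peak_flops: float  # peak throughput of the chip, in FLOP/s


class ComputeRegistry:
    """Toy registry: tracks who holds regulated accelerators and flags owners
    whose aggregate compute crosses a reporting threshold."""

    REPORTING_THRESHOLD_FLOPS = 1e20  # illustrative, not a real regulatory figure

    def __init__(self):
        self._records: dict[str, ChipRecord] = {}

    def register_sale(self, record: ChipRecord) -> None:
        # Manufacturers and resellers would report each transfer of a regulated chip.
        self._records[record.serial] = record

    def aggregate_compute(self, owner: str) -> float:
        return sum(r.peak_flops for r in self._records.values() if r.owner == owner)

    def requires_report(self, owner: str) -> bool:
        # Crossing the threshold would trigger extra regulatory visibility
        # (audits, usage reporting, and so on).
        return self.aggregate_compute(owner) >= self.REPORTING_THRESHOLD_FLOPS


registry = ComputeRegistry()
registry.register_sale(ChipRecord("SN-001", "H100", "LabA", 1e15))
print(registry.requires_report("LabA"))  # False at this small scale
```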

Artificial intelligence governance

In some ways, governments around the world are already exercising “compute governance.” The U.S., for example, restricts the sale of certain GPU models typically used to train AI systems to countries such as China.

Related: US officials extend export curbs on Nvidia AI chip to ‘some Middle Eastern countries’

But, according to the research, truly limiting the ability of malicious actors to do harm with AI would require manufacturers to build “kill switches” into hardware. This could give governments the ability to conduct “remote enforcement” efforts, such as shutting down illegal AI training centers.
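The paper floats the kill-switch idea as a concept rather than a design. The toy sketch below shows one way such “remote enforcement” is often imagined: hardware that refuses to start a large training run unless a regulator-signed, time-limited operating license checks out. Every name and field here is hypothetical, and real designs would rely on hardware attestation and asymmetric cryptography rather than a shared secret.

```python
# Illustrative sketch only: a firmware-side check that assumes a signed,
# time-limited "operating license" a regulator could decline to renew.
# All names and fields are hypothetical.
import hashlib
import hmac
import json
import time

REGULATOR_KEY = b"shared-secret-for-demo-only"  # real designs would use asymmetric keys


def sign_license(payload: dict) -> str:
    msg = json.dumps(payload, sort_keys=True).encode()
    return hmac.new(REGULATOR_KEY, msg, hashlib.sha256).hexdigest()


def license_is_valid(payload: dict, signature: str) -> bool:
    expected = sign_license(payload)
    return hmac.compare_digest(expected, signature) and payload["expires"] > time.time()


def start_training_job(license_payload: dict, signature: str) -> None:
    # "Remote enforcement": if the license is expired or revoked, the hardware
    # simply refuses to begin a large training run.
    if not license_is_valid(license_payload, signature):
        raise PermissionError("No valid operating license; training disabled.")
    print("License verified -- training permitted.")


lic = {"device": "SN-001", "expires": time.time() + 3600}
start_training_job(lic, sign_license(lic))
```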

However, as the researchers note, “naïve or poorly scoped approaches to compute governance carry significant risks in areas like privacy, economic impacts, and centralization of power.”

Monitoring hardware use in the U.S., for example, could fall afoul of the White House’s recent “Blueprint for an AI Bill of Rights,” which says citizens have a right to protect their data.

Kill switches could be DOA

On top of those concerns, the researchers also point out that recent advances in “communications-efficient” training could lead to the use of decentralized compute to train, build, and run models.

This could make it increasingly difficult for governments to locate, monitor, and shut down hardware associated with illegal training efforts.
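One well-known communication-efficient technique is local SGD, the averaging step behind federated learning, in which each machine trains on its own data for many steps and only occasionally exchanges parameters. The toy example below illustrates the principle on a synthetic linear-regression task; it is not drawn from the paper, and the setup is purely illustrative.

```python
# Toy sketch of local SGD / federated averaging: workers train locally for many
# steps and sync only a few times, so little centralized coordination -- and
# little co-located hardware -- is needed. Synthetic data, illustrative only.
import numpy as np

rng = np.random.default_rng(0)
true_w = np.array([2.0, -3.0])

# Four "workers", each with its own private dataset.
workers = []
for _ in range(4):
    X = rng.normal(size=(200, 2))
    y = X @ true_w + 0.1 * rng.normal(size=200)
    workers.append((X, y))

w_global = np.zeros(2)
LOCAL_STEPS, SYNC_ROUNDS, LR = 50, 5, 0.01  # many local steps, few sync rounds

for _ in range(SYNC_ROUNDS):
    local_models = []
    for X, y in workers:
        w = w_global.copy()
        for _ in range(LOCAL_STEPS):  # train locally, with no communication
            grad = 2 * X.T @ (X @ w - y) / len(y)
            w -= LR * grad
        local_models.append(w)
    w_global = np.mean(local_models, axis=0)  # one parameter exchange per round

print("recovered weights:", np.round(w_global, 2))  # close to [2.0, -3.0]
```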

According to the researchers, this could leave governments with no choice but to take an arms-race stance against the illicit use of AI. “Society will have to use more powerful, governable compute timely and wisely to develop defenses against emerging risks posed by ungovernable compute.”
