Former OpenAI, Anthropic employees call for 'right to warn' on AI risks

Former OpenAI, Anthropic, and DeepMind employees urge AI companies to expand whistleblower protections to publicly address AI risks amid growing concerns over the “deprioritization” of safety.

Former employees of leading artificial intelligence (AI) developers are urging these pioneering companies to strengthen their whistleblower protections so that employees can voice “risk-related concerns” to the public about the development of advanced AI systems.

On June 4, 13 former and current employees of OpenAI (ChatGPT), Anthropic (Claude), and DeepMind (Google), along with “Godfathers of AI” Yoshua Bengio and Geoffrey Hinton and renowned AI scientist Stuart Russell, launched the “Right to Warn AI” petition.

The statement aims to establish a commitment from frontier AI companies to allow employees to raise risk-related concerns about AI internally and with the public.

William Saunders, a former OpenAI employee and supporter of the movement, commented that, when dealing with potentially dangerous new technologies, there should be ways to share information about risks with independent experts, governments, and the public.

“Today, the people with the most knowledge about how frontier AI systems work and the risks related to their deployment are not fully free to speak because of possible retaliation and overly broad confidentiality agreements.”

Right to warn principles

The proposal makes four primary asks of AI developers. The first is to eliminate non-disparagement clauses concerning risk, so that companies cannot silence employees with agreements that prevent them from raising concerns about AI risks, or punish them for doing so.

The petition also calls for establishing anonymous reporting channels through which individuals can raise concerns about AI risks, and for cultivating a culture of open criticism around those risks.

Lastly, the petition asks for whistleblower protections, under which companies would not retaliate against employees who disclose information to expose serious AI risks.

Saunders said the proposed principles are a “proactive way” of engaging with AI companies to achieve the safe and beneficial AI that is needed.

Growing AI safety concerns

The petition comes as concerns mount that AI labs are “deprioritizing” the safety of their latest models, especially in the race toward artificial general intelligence (AGI), the effort to create software with humanlike intelligence and the ability to self-teach.

Former OpenAI employee Daniel Kokotajlo said he decided to leave the company because he “lost hope that they would act responsibly,” specifically regarding AGI creation.

“They and others have bought into the ‘move fast and break things’ approach and that is the opposite of what is needed for technology this powerful and this poorly understood.”

On May 28, Helen Toner, a former OpenAI board member, said on the TED AI Show podcast that CEO Sam Altman had reportedly been dismissed from the company for allegedly withholding information from the board.
