Eleven current and former employees of OpenAI, alongside two from Google DeepMind, have signed an open letter, called "A Right to Warn about Advanced Artificial Intelligence," which voices their concerns about the lack of safety governance and oversight at big tech companies in the industry and asks for better protection for whistleblowers who want to speak out about these concerns.
The letter states that AI companies hold "substantial non-public information" about the capabilities, limitations, and risks of their AI models, including the "loss of control of autonomous AI systems potentially resulting in human extinction," but "have weak obligations to share this information with governments and society" and "strong financial incentives" to avoid effective oversight measures.
It also states that there are insufficient protections for whistleblowers, who are among the few people positioned to hold these big tech companies accountable.
"Ordinary whistleblower protections are insufficient because they focus on illegal activity, whereas many of the risks we are concerned about are not yet regulated."
It asks AI companies to commit to four principles that would:
- Stop them from forcing employees to sign non-disclosure agreements that prevent them from criticizing their employers over risk-related issues.
- Make them create an anonymous process for employees to raise risk-related concerns to board members and regulators.
- Create a "culture of open criticism."
- Prevent them from disciplining or retaliating against current and former employees who have shared "risk-related confidential information after other processes have failed."
The letter comes after OpenAI was recently criticized for forcing departing employees to sign a non-disclosure agreement or risk losing the equity they had earned at the company, although CEO Sam Altman has since apologized and promised to change its off-boarding protocol.
It also follows OpenAI's disbandment of its "Superalignment" safety team after two key members quit, citing safety concerns and a lack of safety prioritization.
OpenAI has defended its safety practices, saying it is "proud of our track record providing the most capable and safest AI systems" and will "continue to engage with governments, civil society and other communities around the world." Google has yet to comment.