
Enhancing Whistleblower Protections in AI Companies


by Giorgi Kostiuk

2 years ago


Former employees of leading artificial intelligence (AI) developers are urging these pioneering companies to strengthen their whistleblower protections, so that staff can warn the public about risks posed by increasingly sophisticated AI systems.

A group of 13 former and current employees from OpenAI (ChatGPT), Anthropic (Claude), and Google DeepMind, joined by prominent AI researchers Yoshua Bengio, Geoffrey Hinton, and Stuart Russell, launched the 'Right to Warn AI' petition on June 4. The petition is a collective effort to secure a commitment from leading AI companies to let employees raise risk-related concerns about AI both internally and externally.

One of the advocates for this cause, William Saunders, a former OpenAI employee, emphasized the necessity for mechanisms to share information about potential risks associated with emerging technologies with independent experts, governmental bodies, and the general public.

According to Saunders, the individuals with the deepest insights into the workings and risks of cutting-edge AI systems often face constraints in sharing their knowledge due to fears of repercussions and overly broad confidentiality agreements.

Right to Warn Principles

The 'Right to Warn AI' petition puts four key propositions to AI developers. Firstly, it calls for the elimination of non-disparagement clauses concerning risk, so that employees are not silenced by agreements that prevent them from expressing concerns about AI risks or that expose them to punitive action for doing so.

Secondly, the petition advocates establishing anonymous reporting channels through which individuals can raise concerns about AI risks. Thirdly, it asks companies to support a culture of open criticism in which such risks can be discussed freely.

Finally, the fourth proposition demands safeguards for whistleblowers: assurances that companies will not retaliate against employees who disclose information exposing serious AI risks.

Saunders described these proposed principles as a proactive approach to engage with AI companies in fostering the development of safe and beneficial AI technologies.

Escalating AI Safety Concerns

The petition emerges amid mounting concern that AI labs are neglecting the safety of their latest models, particularly in the pursuit of artificial general intelligence (AGI): software with humanlike intelligence and the capacity to learn on its own.

Daniel Kokotajlo, a former OpenAI employee, cited his loss of faith in the company's responsible actions, particularly concerning AGI development, as a reason for his departure.

Kokotajlo criticized the 'move fast and break things' approach adopted by some entities in the AI sphere, emphasizing its unsuitability for a technology as potent and poorly understood as AGI.

Recent disclosures have further fueled concerns about transparency and accountability within AI organizations, such as remarks by Helen Toner, a former OpenAI board member, on the TED AI Show podcast about the circumstances of Sam Altman's brief dismissal from OpenAI.


