A legislative proposal in California requiring artificial intelligence developers to adopt safety protocols to prevent 'critical harms' to humanity has sparked debate within Silicon Valley's tech community.
Key Features of SB 1047
The 'Safe and Secure Innovation for Frontier Artificial Intelligence Models Act' (SB 1047) requires AI developers to implement safety protocols to prevent events such as mass casualties or major cyberattacks. The bill also requires an 'emergency stop' button for AI models, annual third-party audits of AI safety practices, the creation of a new Frontier Model Division (FMD) to oversee compliance, and heavy penalties for violations.
Opposition from Congress and Silicon Valley
The bill has drawn opposition from members of Congress. US Congressman Ro Khanna expressed concern that 'the bill as currently written would be ineffective, punishing of individual entrepreneurs and small businesses, and hurt California’s spirit of innovation.' Khanna nonetheless acknowledged the need for AI legislation 'to protect workers and address potential risks including misinformation, deepfakes, and an increase in wealth disparity.'
Concerns about AI Ecosystem
The bill has also been met with opposition from Silicon Valley venture capital firms such as Andreessen Horowitz. On Aug. 2, a16z chief legal officer Jaikumar Ramaswamy sent a letter to Senator Scott Wiener, one of the bill’s authors, claiming it would 'burden startups because of its arbitrary and shifting thresholds.' On Aug. 6, computer scientist Fei-Fei Li told Fortune: 'If passed into law, SB-1047 will harm our budding AI ecosystem, especially the parts of it that are already at a disadvantage to today’s tech giants: the public sector, academia, and ‘little tech.’'
The bill passed the Senate with bipartisan support in May and now heads to the Assembly, where it must pass by Aug. 31.