Elon Musk’s company xAI has announced its intention to sign the safety and security chapter of the European Union’s new code of practice for artificial intelligence. The decision adds a new wrinkle to the debate over AI regulation in Europe.
Support for Safety Chapter
xAI confirmed that it endorses the portion of the code covering safety and security. In a post on X, the company stated, "xAI supports AI safety and will be signing the EU AI Act’s Code of Practice Chapter on Safety and Security."
Reactions from Other Companies
Other tech companies have responded differently. Google, part of the Alphabet group, said it would sign the entire code despite concerns about certain provisions. Meta, Facebook's parent company, refused to sign, citing legal uncertainty and requirements it considers excessive. Microsoft and OpenAI have not yet said whether they will sign.
Context and Importance of the Code
The EU's code of practice serves as a transitional tool to help companies align with rules that take effect for high-impact models on August 2. The AI Act imposes strict requirements on developers of models deemed to pose systemic risk. Although the code is not legally binding, it outlines general expectations around documentation, content sourcing, and responses to copyright claims.
xAI's decision to endorse only part of the code may reflect a growing divide among tech companies over AI regulation in the EU. It could represent a middle ground between the push for safety and concerns about overregulation.