MIT researchers, together with collaborators at other organizations, have produced the AI Risk Repository, a broad database of documented risks posed by AI systems.
AI Risk Documentation
The repository aims to help decision-makers in government, research, business, and industry assess the emerging risks associated with AI alongside its transformative capabilities. It draws on 43 existing taxonomies compiled from peer-reviewed articles, preprints, conference papers, and reports. A two-pronged classification system helps users understand the situations and mechanisms through which AI risks can emerge.
Seven Categories of AI Risks
Risks are classified by their causes, taking into account the responsible entity (human or AI), the intent (intentional or unintentional), and the timing (pre-deployment or post-deployment). In addition, risks are divided into seven domains, including misinformation, malicious actors and misuse, discrimination and toxicity, and privacy and security.
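The two-axis scheme described above can be sketched as a simple data model. The class, field, and function names below are hypothetical illustrations, not the repository's actual schema (the repository itself is distributed as a database with its own column names):

```python
from dataclasses import dataclass
from enum import Enum

# Hypothetical encoding of the causal taxonomy's three axes.
class Entity(Enum):
    HUMAN = "human"
    AI = "ai"

class Intent(Enum):
    INTENTIONAL = "intentional"
    UNINTENTIONAL = "unintentional"

class Timing(Enum):
    PRE_DEPLOYMENT = "pre-deployment"
    POST_DEPLOYMENT = "post-deployment"

@dataclass
class Risk:
    description: str
    entity: Entity        # causal axis 1: who/what caused the risk
    intent: Intent        # causal axis 2: deliberate or accidental
    timing: Timing        # causal axis 3: before or after deployment
    domain: str           # one of the seven risk domains

def filter_risks(risks, domain=None, timing=None):
    """Return risks matching the given domain and/or timing criteria."""
    return [
        r for r in risks
        if (domain is None or r.domain == domain)
        and (timing is None or r.timing == timing)
    ]

# Two illustrative (invented) entries.
risks = [
    Risk("Biased hiring recommendations", Entity.AI,
         Intent.UNINTENTIONAL, Timing.POST_DEPLOYMENT,
         "discrimination and toxicity"),
    Risk("Training data scraped without consent", Entity.HUMAN,
         Intent.INTENTIONAL, Timing.PRE_DEPLOYMENT,
         "privacy and security"),
]

matches = filter_risks(risks, domain="discrimination and toxicity")
print(len(matches))  # 1
```

An organization assessing a specific system could filter along either axis in this way, for example pulling every post-deployment risk in the domains most relevant to its product.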
Utilizing the Database
The repository is publicly available and can be used by institutions of all kinds for risk assessment and mitigation. For example, an organization developing an AI-powered hiring system can use the repository to identify potential risks related to discrimination and bias, while a firm using AI for content moderation can consult the 'misinformation' domain to understand risks associated with AI-generated content. According to the MIT team, the repository also helps reveal gaps or imbalances in how organizations address risks.
The MIT researchers plan to update the database regularly with new risks, the latest findings, and evolving trends to keep it relevant and useful. The tool is intended as an essential resource for businesses and research institutions that recognize the importance of managing AI risks.