Source Frameworks
Before diving into the AIRGC, it’s important to understand the two source frameworks it connects:
MIT’s AI Risk Repository, which provides the risk taxonomy, and
James Kavanagh’s AI Governance MegaMap, which provides the governance control framework.
Each was developed independently to serve a different purpose; as we’ve noted above, together they offer a powerful lens for AI risk governance. MIT’s AI Risk Repository provides the taxonomy of AI risks (the problem space), and Kavanagh’s AI Governance MegaMap provides the taxonomy of controls (the solution space).
MIT’s AI Risk Repository: A Taxonomy of AI Risks
The AI Risk Repository (AIRR), developed at MIT in 2024 (this guide references the April 2025 release), is a comprehensive, living database of AI failure modes and potential harms. It was created by synthesizing findings from vetted prior research papers, frameworks, and incident reports into a common reference taxonomy. The repository captures more than 1,600 risks extracted from 65 existing frameworks and classifications of AI risks. Its aim is to provide industry, policymakers, and academics with a shared framework for understanding and monitoring AI risks. Rather than propose yet another siloed risk list, MIT’s effort consolidates existing knowledge into an evolving repository that can be updated as new risks are identified.
Comprehensive & Research-Based
By drawing on dozens of prior classifications and hundreds of documented risk instances, it provides a meta-view of AI risks grounded in research. It captures common patterns across many sources rather than a single organization’s perspective.
Industry-Agnostic
The taxonomy is generic and not tied to any single industry or application. This broad applicability means organizations in any sector can use it as a checklist to ensure they aren’t overlooking major categories of AI risk. An AI system in finance and one in healthcare might have different specific issues, but both should consider bias, privacy, security, etc. where applicable.
Evolving (“Living”)
The AIRR is maintained as a living repository that can be updated as new risks emerge or as understanding of AI failure modes improves. Updates to this document based on MIT’s future releases will be critical given the fast pace of AI advancement: new techniques (e.g., agentic AI or advanced autonomous systems) could introduce types of risk not yet catalogued. The MIT team anticipates adding such new risk subdomains over time to keep the taxonomy current.

A key aspect of the AIRR is the Domain Taxonomy of AI Risks, which classifies AI risks into 7 top-level domains and 23 subdomains. Each domain represents a broad category of concern, and each subdomain captures a specific type of problem that can arise when AI systems are deployed.
The 7 Risk Domains from MIT
Discrimination & Toxicity: AI causing unfair bias, discrimination, or generating toxic content.
Privacy & Security: AI compromising privacy or being vulnerable to security threats.
Misinformation: AI generating or spreading false or misleading information.
Malicious Actors & Misuse: AI being used by bad actors for harmful purposes (fraud, cyberattacks, etc.).
Human-Computer Interaction: AI affecting human autonomy, agency, or leading to overreliance.
Socioeconomic & Environmental: Broader societal impacts like job displacement, inequality, environmental costs.
AI System Safety, Failures, & Limitations: Technical safety issues, misalignment with human goals, and fundamental limitations of AI.
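To make the taxonomy concrete, the sketch below shows one way an organization might encode the seven AIRR domains for internal tooling, such as tagging AI use cases during a risk assessment. This is a minimal illustration in Python; the enum identifiers and the example tagging are our own shorthand for this guide and are not defined by the AIRR.

```python
# Illustrative only: one way to encode MIT's seven AIRR risk domains for use in
# internal tooling (e.g., tagging AI use cases during risk assessments).
# The enum member names are hypothetical shorthand; the AIRR does not define them.
from enum import Enum


class AIRiskDomain(Enum):
    DISCRIMINATION_TOXICITY = "Discrimination & Toxicity"
    PRIVACY_SECURITY = "Privacy & Security"
    MISINFORMATION = "Misinformation"
    MALICIOUS_ACTORS_MISUSE = "Malicious Actors & Misuse"
    HUMAN_COMPUTER_INTERACTION = "Human-Computer Interaction"
    SOCIOECONOMIC_ENVIRONMENTAL = "Socioeconomic & Environmental"
    SYSTEM_SAFETY_FAILURES_LIMITATIONS = "AI System Safety, Failures, & Limitations"


# Example: tag a planned customer-facing chatbot with the domains that apply to it.
chatbot_risk_domains = {
    AIRiskDomain.DISCRIMINATION_TOXICITY,
    AIRiskDomain.PRIVACY_SECURITY,
    AIRiskDomain.MISINFORMATION,
}
```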
Kavanagh’s AI Governance MegaMap: A Unified Control Framework
James Kavanagh’s AI Governance “MegaMap” (2023) is a consolidated mapping of AI governance requirements and best-practice controls, distilled from six different standards and laws into a common master control set and designed “…to balance AI-specific governance, general security and privacy foundations, requirements in law and industry-prevalent best practices.” The MegaMap leverages ISO/IEC 27001, ISO/IEC 27701, and ISO/IEC 42001 as a base for security, privacy, and responsible AI, in coordination with the NIST AI Risk Management Framework, the EU AI Act, and the SOC 2 Trust Services Criteria. In developing the MegaMap, Kavanagh recognized that organizations face a fragmented landscape of AI guidelines, from international standards to laws, and sought to unify them into one comprehensive framework.
These sources include:
ISO/IEC 42001 (Artificial intelligence management system standard),
ISO/IEC 27001 (Information security management),
ISO/IEC 27701 (Privacy information management),
NIST AI Risk Management Framework (NIST AI RMF 1.0, a U.S. guidance for AI risk management),
EU AI Act (draft; the proposed EU regulation on AI), and
SOC 2 (Service Organization Controls relevant to security, availability, etc., now being applied to AI services).
The MegaMap aligns the overlapping requirements from the six major sources above into a single set of control domains and controls. Each of these source frameworks contains dozens or hundreds of individual controls and requirements directly applicable to managing AI. Kavanagh’s contribution was to synthesize them into a single master control set.
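As a rough illustration of what a consolidated control set looks like in practice, the sketch below models a single MegaMap-style control that keeps traceability back to the source frameworks it was distilled from. The control ID corresponds to a control set described in the next section, but the description and source citations shown here are hypothetical examples, not text from the MegaMap or the underlying standards.

```python
# Illustrative sketch of how a consolidated, MegaMap-style control can carry
# references back to the source frameworks it was distilled from.
# The description and source_refs values are hypothetical examples.
from dataclasses import dataclass, field


@dataclass
class GovernanceControl:
    control_id: str                    # e.g., "RM-1"
    control_set: str                   # e.g., "Risk Management (RM)"
    description: str                   # consolidated control statement
    source_refs: list[str] = field(default_factory=list)  # traceability to sources


example_control = GovernanceControl(
    control_id="RM-1",
    control_set="Risk Management (RM)",
    description="Maintain a documented, repeatable process for identifying and assessing AI risks.",
    source_refs=["ISO/IEC 42001", "NIST AI RMF 1.0", "EU AI Act"],
)
```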
The 12 Governance Control Sets from the MegaMap
Governance & Leadership (GL)
High-level organizational commitments, roles, and strategic alignment for AI governance. (Controls GL-1 to GL-3.)
Risk Management (RM)
Processes to identify, assess, and mitigate AI risks on an ongoing basis. (Controls RM-1 to RM-4.)
Regulatory Operations (RO)
Ensuring compliance with laws and regulations, including transparency, registration, and monitoring obligations. (Controls RO-1 to RO-4.)
System, Data & Model Lifecycle (LC)
Managing the AI development lifecycle responsibly – data governance, model development, deployment, and change management. (Controls LC-1 to LC-5.)
Security (SE)
Protecting AI systems from threats – covering security architecture, access control, data protection, etc. (Controls SE-1 to SE-6.)
Privacy (PR)
Safeguarding personal data and ensuring privacy by design in AI systems. (Controls PR-1 to PR-4.)
Safe & Responsible AI (RS)
Ensuring AI is used ethically and safely – covering areas like human oversight, fairness, robustness, and explainability. (Controls RS-1 to RS-5.)
Assurance & Audit (AA)
Validation and oversight mechanisms – internal audits, independent assessments, and validation of AI systems. (Controls AA-1 to AA-3.)
Operational Monitoring (OM)
Ongoing monitoring of AI system performance and outcomes in production, including event logging and improvement processes. (Controls OM-1 to OM-3.)
Incident Management (IM)
Preparedness and processes to handle AI incidents or failures, from detection to response and post-incident analysis. (Controls IM-1 to IM-3.)
Third Party & Supply Chain (TP)
Managing risks from third-party AI components or vendors – ensuring vendors meet requirements and supply chain transparency. (Controls TP-1 to TP-2.)
Transparency & Communication (CO)
Communicating about AI systems to stakeholders – disclosure of AI capabilities/limitations and stakeholder engagement. (Controls CO-1 to CO-2.)
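Finally, to preview how the AIRGC uses these two taxonomies together, the sketch below pairs MIT AIRR risk domains (the problem space) with MegaMap control-set codes (the solution space). The specific pairings are chosen for illustration only; they are not the AIRGC’s actual mapping.

```python
# Illustrative sketch of the AIRGC idea: pairing MIT AIRR risk domains with
# MegaMap control-set codes. The pairings below are examples for illustration,
# not the AIRGC's actual mapping.
RISK_DOMAIN_TO_CONTROL_SETS: dict[str, list[str]] = {
    "Privacy & Security": ["SE", "PR", "LC"],
    "Misinformation": ["RS", "OM", "CO"],
    "Discrimination & Toxicity": ["RS", "LC", "AA"],
}


def controls_for_risk(domain: str) -> list[str]:
    """Return the MegaMap control sets suggested for a given AIRR risk domain."""
    return RISK_DOMAIN_TO_CONTROL_SETS.get(domain, [])


print(controls_for_risk("Misinformation"))  # ['RS', 'OM', 'CO']
```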