AAT-10.4: AI TEVV Safety Demonstration
Mechanisms exist to demonstrate that the Artificial Intelligence (AI) and Autonomous Technologies (AAT) to be deployed are safe, that residual risk does not exceed the organization's risk tolerance and that the technologies can fail safely, particularly if made to operate beyond their knowledge limits.
Control Question: Does the organization demonstrate that the Artificial Intelligence (AI) and Autonomous Technologies (AAT) to be deployed are safe, that residual risk does not exceed its risk tolerance and that the technologies can fail safely, particularly if made to operate beyond their knowledge limits?
General (3)
| Framework | Mapping Values |
|---|---|
| NIST AI 100-1 (AI RMF) 1.0 | MEASURE 2.6 |
| NIST AI 600-1 | GV-1.3-006 MG-1.3-001 MG-2.2-001 MG-3.2-009 |
| SCF CORE AI Model Deployment | AAT-10.4 |
EMEA (1)
| Framework | Mapping Values |
|---|---|
| EMEA EU AI Act | 9.6 |
Capability Maturity Model
Level 0 — Not Performed
There is no evidence of a capability to demonstrate that the Artificial Intelligence (AI) and Autonomous Technologies (AAT) to be deployed are safe, that residual risk does not exceed the organization's risk tolerance and that the technologies can fail safely, particularly if made to operate beyond their knowledge limits.
Level 1 — Performed Informally
C|P-CMM1 is N/A, since a structured process is required to demonstrate that the Artificial Intelligence (AI) and Autonomous Technologies (AAT) to be deployed are safe, that residual risk does not exceed the organization's risk tolerance and that the technologies can fail safely, particularly if made to operate beyond their knowledge limits.
Level 2 — Planned & Tracked
C|P-CMM2 is N/A, since a well-defined process is required in this domain to demonstrate that the Artificial Intelligence (AI) and Autonomous Technologies (AAT) to be deployed are safe, that residual risk does not exceed the organization's risk tolerance and that the technologies can fail safely, particularly if made to operate beyond their knowledge limits.
Level 3 — Well Defined
Artificial Intelligence and Autonomous Technologies (AAT) efforts are standardized across the organization and centrally managed, where technically feasible, to ensure consistency. CMM Level 3 control maturity would reasonably expect all, or at least most, of the following criteria to exist:
- The Chief Information Security Officer (CISO), or similar function with technical competence to address cybersecurity concerns, analyzes the organization's business strategy and prioritizes the objectives of the security function to determine prioritized and authoritative guidance for Artificial Intelligence and Autonomous Technologies (AAT), within the broader scope of cybersecurity and data protection operations.
- The CISO, or similar function, develops a security-focused Concept of Operations (CONOPS) that documents management, operational and technical measures to apply defense-in-depth techniques across the organization. This CONOPS for AAT may be incorporated as part of a broader operational plan for the cybersecurity and data privacy program.
- A Governance, Risk & Compliance (GRC) function, or similar function, provides governance oversight for the implementation of applicable statutory, regulatory and contractual cybersecurity and data protection controls to facilitate the implementation of secure and compliant practices to protect the confidentiality, integrity, availability and safety of the organization's applications, systems, services and data. Compliance requirements for AAT are identified and documented.
- A steering committee is formally established to provide executive oversight of the cybersecurity and data privacy program, including AAT. The steering committee establishes a clear and authoritative accountability structure for AAT operations.
- Legal reviews are conducted to minimize the inadvertent infringement of third-party Intellectual Property (IP) rights through the use of AAT products and/or services.
- AAT-specific compliance requirements for cybersecurity and data privacy are identified and documented.
- A governance function for AAT is formally assigned, with defined roles and associated responsibilities.
- A Program Management Office (PMO), or similar function, tracks and reports on activities related to the mapping, measuring and managing of AAT.
- Secure engineering principles are identified and implemented to ensure AAT are designed to be reliable, safe, fair, secure, resilient, transparent, explainable and data privacy-enhanced to minimize emergent properties or unintended consequences.
- Robust development and pre-deployment functionality, security and data privacy testing is conducted on all internal and third-party AAT projects.
- Production use of AAT is closely monitored to minimize emergent properties or unintended consequences.
- Robust incident response and business continuity plans exist to respond to AAT-related emergent properties or unintended consequences.
- Data sources utilized in the training and/or operation of AAT are identified and documented.
- The Confidentiality, Integrity and Availability (CIA) of source data are protected to prevent accidental contamination or malicious corruption (e.g., data poisoning) that could compromise the performance of AAT.
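One practical way to evidence the data-integrity criterion above is a cryptographic manifest over the training-data set: digests are recorded at the point the data is approved, then re-verified before each training or deployment run so that accidental contamination or deliberate tampering (e.g., data poisoning) is detectable. This is a minimal sketch under assumed conventions; the function names (`build_manifest`, `verify_manifest`) and the use of a flat file tree are illustrative, not prescribed by the control.

```python
import hashlib
from pathlib import Path


def sha256_file(path: Path) -> str:
    """Compute the SHA-256 digest of a file, streaming to bound memory use."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()


def build_manifest(data_dir: Path) -> dict:
    """Record a digest for every file under the approved training-data directory."""
    return {
        str(p.relative_to(data_dir)): sha256_file(p)
        for p in sorted(data_dir.rglob("*"))
        if p.is_file()
    }


def verify_manifest(data_dir: Path, manifest: dict) -> list:
    """Return names of files that are missing, added or altered since approval.

    An empty list means the data set still matches its approved baseline.
    """
    current = build_manifest(data_dir)
    return sorted(
        name
        for name in set(manifest) | set(current)
        if manifest.get(name) != current.get(name)
    )
```

A non-empty result from `verify_manifest` would typically block the pipeline and trigger the incident response plan referenced in the criteria above, rather than allowing training to proceed on unverified data.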
Level 4 — Quantitatively Controlled
See C|P-CMM3. There are no defined C|P-CMM4 criteria, since it is reasonable to assume a quantitatively-controlled process is not necessary to demonstrate that the Artificial Intelligence (AI) and Autonomous Technologies (AAT) to be deployed are safe, that residual risk does not exceed the organization's risk tolerance and that the technologies can fail safely, particularly if made to operate beyond their knowledge limits.
Level 5 — Continuously Improving
See C|P-CMM4. There are no defined C|P-CMM5 criteria, since it is reasonable to assume a continuously-improving process is not necessary to demonstrate that the Artificial Intelligence (AI) and Autonomous Technologies (AAT) to be deployed are safe, that residual risk does not exceed the organization's risk tolerance and that the technologies can fail safely, particularly if made to operate beyond their knowledge limits.
Assessment Objectives
- AAT-10.4_A01 the organization's Artificial Intelligence Test, Evaluation, Validation & Verification (AI TEVV) capability demonstrates the Artificial Intelligence (AI) and Autonomous Technologies (AAT) to be deployed are safe.
- AAT-10.4_A02 the organization's Artificial Intelligence Test, Evaluation, Validation & Verification (AI TEVV) capability demonstrates residual, negative risk from Artificial Intelligence (AI) and Autonomous Technologies (AAT) does not exceed the organization's risk tolerance.
- AAT-10.4_A03 the organization's Artificial Intelligence Test, Evaluation, Validation & Verification (AI TEVV) capability demonstrates Artificial Intelligence (AI) and Autonomous Technologies (AAT) can fail safely, particularly if made to operate beyond their knowledge limits.
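The "fail safely beyond knowledge limits" objective is commonly evidenced in AI TEVV by an abstain-and-escalate guard: when the model's confidence in its top prediction falls below a floor calibrated during testing, the system declines to act and defers to a human rather than guessing. The sketch below illustrates that pattern only; the names (`Decision`, `fail_safe_predict`) and the 0.80 threshold are hypothetical examples, and a real deployment would derive the threshold from TEVV results and log every escalation.

```python
from dataclasses import dataclass
from typing import Dict, Optional

# Illustrative threshold; in practice set from TEVV calibration data so that
# residual risk at or above this confidence stays within risk tolerance.
CONFIDENCE_FLOOR = 0.80


@dataclass
class Decision:
    label: Optional[str]  # None signals the system declined to act
    escalated: bool       # True when deferred to a human reviewer


def fail_safe_predict(scores: Dict[str, float]) -> Decision:
    """Act on the top prediction only when confidence clears the floor.

    Below the floor, the input is treated as beyond the model's knowledge
    limits: the system fails safely by escalating instead of guessing.
    """
    label, confidence = max(scores.items(), key=lambda kv: kv[1])
    if confidence < CONFIDENCE_FLOOR:
        return Decision(label=None, escalated=True)
    return Decision(label=label, escalated=False)
```

During pre-deployment testing, feeding deliberately out-of-distribution inputs through such a guard and confirming that every low-confidence case escalates (rather than silently acting) is one concrete way to satisfy assessment objective AAT-10.4_A03.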
Technology Recommendations
Micro/Small
- Information Assurance (IA) Program
- Artificial Intelligence (AI) / autonomous technologies governance program
Small
- Information Assurance (IA) Program
- Artificial Intelligence (AI) / autonomous technologies governance program
Medium
- Information Assurance (IA) Program
- Artificial Intelligence (AI) / autonomous technologies governance program
Large
- Information Assurance (IA) Program
- Artificial Intelligence (AI) / autonomous technologies governance program
Enterprise
- Information Assurance (IA) Program
- Artificial Intelligence (AI) / autonomous technologies governance program