
AAT-10.1: AI TEVV Trustworthiness Assessment

AAT 10 — Critical Detect

Mechanisms exist to evaluate Artificial Intelligence (AI) and Autonomous Technologies (AAT) for trustworthy behavior and operation, including security, anonymization and disaggregation of captured and stored data for approved purposes.

Control Question: Does the organization evaluate Artificial Intelligence (AI) and Autonomous Technologies (AAT) for trustworthy behavior and operation, including security, anonymization and disaggregation of captured and stored data for approved purposes?
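
The anonymization aspect of this control can be illustrated with a minimal sketch of field-level pseudonymization before storage. This is not SCF-prescribed tooling; the field names, the key handling and the choice of HMAC-SHA256 are illustrative assumptions:

```python
import hashlib
import hmac

# Illustrative only: in practice the key would come from a secrets manager
# and be rotated under the organization's key-management policy.
PSEUDONYMIZATION_KEY = b"example-key-not-for-production"

def pseudonymize(value: str) -> str:
    """Return a stable, non-reversible token for a direct identifier."""
    return hmac.new(PSEUDONYMIZATION_KEY, value.encode(), hashlib.sha256).hexdigest()

def anonymize_record(record: dict, identifier_fields: set) -> dict:
    """Replace direct identifiers; keep approved analytic fields intact."""
    return {
        key: pseudonymize(value) if key in identifier_fields else value
        for key, value in record.items()
    }

captured = {"user_id": "alice@example.com", "latency_ms": 124}
stored = anonymize_record(captured, {"user_id"})
```

Because the token is keyed and deterministic, the same identifier can still be disaggregated consistently across stored records without exposing the underlying value.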

Framework Mappings (5)

  • ISO 42001 (2023): 9.2.1, 9.2.1(a), 9.2.1(a)(1), 9.2.1(a)(2), 9.2.1(b), A.6.2.4
  • NIST AI 100-1 (AI RMF) (1.0): MEASURE 2.0
  • NIST AI 600-1: MANAGE 4.1, MAP 3.4, MEASURE 4.2, MG-3.1-003
  • SCF CORE Mergers, Acquisitions & Divestitures (MA&D): AAT-10.1
  • SCF CORE AI Model Deployment: AAT-10.1

Capability Maturity Model

Level 0 — Not Performed

There is no evidence of a capability to evaluate Artificial Intelligence (AI) and Autonomous Technologies (AAT) for trustworthy behavior and operation, including security, anonymization and disaggregation of captured and stored data for approved purposes.

Level 1 — Performed Informally

C|P-CMM1 is N/A, since a structured process is required to evaluate Artificial Intelligence (AI) and Autonomous Technologies (AAT) for trustworthy behavior and operation, including security, anonymization and disaggregation of captured and stored data for approved purposes.

Level 2 — Planned & Tracked

C|P-CMM2 is N/A, since a well-defined process is required in this domain to evaluate Artificial Intelligence (AI) and Autonomous Technologies (AAT) for trustworthy behavior and operation, including security, anonymization and disaggregation of captured and stored data for approved purposes.

Level 3 — Well Defined

Artificial Intelligence and Autonomous Technology (AAT) efforts are standardized across the organization and centrally managed, where technically feasible, to ensure consistency. CMM Level 3 control maturity would reasonably expect all, or at least most, of the following criteria to exist:

  • The Chief Information Security Officer (CISO), or similar function with technical competence to address cybersecurity concerns, analyzes the organization's business strategy and prioritizes the objectives of the security function to determine prioritized and authoritative guidance for Artificial Intelligence and Autonomous Technologies (AAT), within the broader scope of cybersecurity and data protection operations.
  • The CISO, or similar function, develops a security-focused Concept of Operations (CONOPS) that documents management, operational and technical measures to apply defense-in-depth techniques across the organization. This CONOPS for AAT may be incorporated as part of a broader operational plan for the cybersecurity and data privacy program.
  • A Governance, Risk & Compliance (GRC) function, or similar function, provides governance oversight for the implementation of applicable statutory, regulatory and contractual cybersecurity and data protection controls to facilitate the implementation of secure and compliant practices to protect the confidentiality, integrity, availability and safety of the organization's applications, systems, services and data. Compliance requirements for AAT are identified and documented.
  • A steering committee is formally established to provide executive oversight of the cybersecurity and data privacy program, including AAT. The steering committee establishes a clear and authoritative accountability structure for AAT operations.
  • Legal reviews are conducted to minimize the inadvertent infringement of third-party Intellectual Property (IP) rights through the use of AAT products and/or services.
  • AAT-specific compliance requirements for cybersecurity and data privacy are identified and documented.
  • Governance function for AAT is formally assigned with defined roles and associated responsibilities.
  • A Program Management Office (PMO), or similar function, tracks and reports on activities related to the mapping, measuring and managing of AAT.
  • Secure engineering principles are identified and implemented to ensure AAT are designed to be reliable, safe, fair, secure, resilient, transparent, explainable and data privacy-enhanced to minimize emergent properties or unintended consequences.
  • Robust development and pre-deployment functionality, security and data privacy testing is conducted on all internal and third-party AAT projects.
  • Production use of AAT is closely monitored to minimize emergent properties or unintended consequences.
  • Robust incident response and business continuity plans exist to respond to AAT-related emergent properties or unintended consequences.
  • Data sources utilized in the training and/or operation of AAT are identified and documented.
  • The Confidentiality, Integrity and Availability (CIA) of source data is protected to prevent accidental contamination or malicious corruption (e.g., data poisoning) that could compromise the performance of AAT.

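
The source-data integrity criterion above can be approximated with a baseline hash manifest that is re-verified before each training run. The manifest layout and function names here are illustrative assumptions, not part of the control:

```python
import hashlib
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Hash a file in chunks so large datasets are not loaded into memory."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_manifest(manifest: dict, root: Path) -> list:
    """Return names of files whose current hash differs from the recorded
    baseline, flagging possible contamination or poisoning."""
    return [
        name for name, expected in manifest.items()
        if sha256_of(root / name) != expected
    ]
```

Any non-empty result would block the training run pending investigation, tying this check into the incident response plans listed above.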
Level 4 — Quantitatively Controlled

See C|P-CMM3. There are no defined C|P-CMM4 criteria, since it is reasonable to assume a quantitatively-controlled process is not necessary to evaluate Artificial Intelligence (AI) and Autonomous Technologies (AAT) for trustworthy behavior and operation, including security, anonymization and disaggregation of captured and stored data for approved purposes.

Level 5 — Continuously Improving

See C|P-CMM4. There are no defined C|P-CMM5 criteria, since it is reasonable to assume a continuously-improving process is not necessary to evaluate Artificial Intelligence (AI) and Autonomous Technologies (AAT) for trustworthy behavior and operation, including security, anonymization and disaggregation of captured and stored data for approved purposes.

Assessment Objectives

  1. AAT-10.1_A01: The organization's Artificial Intelligence Test, Evaluation, Validation & Verification (AI TEVV) capability evaluates Artificial Intelligence (AI) and Autonomous Technologies (AAT) for trustworthy characteristics.
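
In practice, a TEVV evaluation of this kind often reduces to comparing measured trustworthiness metrics against documented acceptance thresholds before a release is approved. The sketch below illustrates such a gate; the metric names and threshold values are assumptions for illustration, not SCF-defined criteria:

```python
# Illustrative TEVV gate: measured metrics vs. documented thresholds.
# "accuracy" is a floor; the other metrics are ceilings.
THRESHOLDS = {
    "accuracy": 0.90,
    "false_positive_rate": 0.05,
    "demographic_parity_gap": 0.10,
}

def tevv_gate(measured: dict) -> tuple:
    """Return (approved, failures) for a set of measured metrics.
    An unmeasured metric counts as a failure, never a pass."""
    failures = []
    for metric, threshold in THRESHOLDS.items():
        value = measured.get(metric)
        if value is None:
            failures.append(f"{metric}: not measured")
        elif metric == "accuracy" and value < threshold:
            failures.append(f"{metric}: {value} below floor {threshold}")
        elif metric != "accuracy" and value > threshold:
            failures.append(f"{metric}: {value} above ceiling {threshold}")
    return (not failures, failures)
```

A failing gate would feed the governance and incident-response processes described in the maturity criteria above rather than silently blocking a release.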

Technology Recommendations

Micro/Small

  • Controls Validation Testing (CVT)
  • Artificial Intelligence (AI) / autonomous technologies governance program

Small

  • Controls Validation Testing (CVT)
  • Artificial Intelligence (AI) / autonomous technologies governance program

Medium

  • Controls Validation Testing (CVT)
  • Artificial Intelligence (AI) / autonomous technologies governance program

Large

  • Controls Validation Testing (CVT)
  • Artificial Intelligence (AI) / autonomous technologies governance program

Enterprise

  • Controls Validation Testing (CVT)
  • Artificial Intelligence (AI) / autonomous technologies governance program

The Secure Controls Framework (SCF) is maintained by SCF Council. Use of SCF content is subject to the SCF Terms & Conditions.
