Auditors would verify risk management claims of AI companies and compliance with AI guardrails
WASHINGTON – Today, U.S. Senator John Hickenlooper, Chair of the Senate Subcommittee on Consumer Protection, Product Safety, and Data Security, announced he will introduce the Validation and Evaluation for Trustworthy (VET) AI Act. The bill directs the National Institute of Standards and Technology (NIST) to work with federal agencies and stakeholders across industry, academia, and civil society to develop detailed specifications, guidelines, and recommendations for certifying third-party evaluators, who would work with AI companies to provide robust, independent external assurance and verification of their systems. Hickenlooper will formally introduce the bill when the Senate returns from recess.
“AI is moving faster than any of us thought it would two years ago,” said Hickenlooper. “But we have to move just as fast to get sensible guardrails in place to develop AI responsibly before it’s too late. Otherwise, AI could bring more harm than good to our lives.”
Currently, AI companies make claims about how they train their models, conduct safety red-team exercises, and carry out risk management, all without any external verification. Hickenlooper’s bill would create a pathway for independent evaluators, serving a function similar to those in the financial industry and other sectors, to work with companies as neutral third parties to verify that their development, testing, and use of AI complies with established guardrails. As Congress moves to establish AI regulations, benchmarks to independently validate AI companies’ claims will only become more essential.
In a February speech at Silicon Flatirons, Hickenlooper proposed his “Trust, but Verify” framework, which would establish auditing standards for AI to increase transparency and adoption of the technology and protect consumers. In the same speech, Hickenlooper also called for federal privacy legislation to create a national standard for protecting Americans’ privacy and data.
Specifically, the bill would:
- Direct NIST, in coordination with the Department of Energy and the National Science Foundation, to develop voluntary specifications and guidelines for developers and deployers of AI systems to conduct internal assurance and to work with third parties on external assurance regarding the verification and red-teaming of AI systems.
- Require these specifications to address data privacy protections, mitigations against potential harms to individuals from an AI system, dataset quality, and the governance and communications processes of a developer or deployer throughout an AI system’s development lifecycle.
- Establish a collaborative Advisory Committee to review and recommend criteria for individuals or organizations seeking to obtain certification of their ability to conduct internal or external assurance for AI systems.
- Require NIST to conduct a study examining various aspects of the ecosystem of AI assurance, including the current capabilities and methodologies used, facilities or resources needed, and overall market demand for internal and external AI assurance.
###