
Hickenlooper, Capito Introduce Bipartisan Bill to Create Guidelines for Third-Party Audits of AI

Jul 25, 2024

Auditors would verify risk management claims of AI companies and compliance with AI guardrails

WASHINGTON – Yesterday, U.S. Senators John Hickenlooper and Shelley Moore Capito introduced the bipartisan Validation and Evaluation for Trustworthy Artificial Intelligence (VET AI) Act, which directs the National Institute of Standards and Technology (NIST) to work with federal agencies and stakeholders across industry, academia, and civil society to develop detailed specifications, guidelines, and recommendations for third-party evaluators. These evaluators would work with AI companies to provide robust, independent external assurance and verification of how their AI systems are developed and tested.

“AI is moving faster than any of us thought it would two years ago,” said Hickenlooper, Chair of the Senate Subcommittee on Consumer Protection, Product Safety, and Data Security. “But we have to move just as fast to get sensible guardrails in place to develop AI responsibly before it’s too late. Otherwise, AI could bring more harm than good to our lives.”

“This commonsense bill will allow for a voluntary set of guidelines for AI, which will only help the development of systems that choose to adopt them. I look forward to getting this bill and our AI Research Innovation and Accountability Act passed out of the Commerce Committee soon,” said Capito.

Currently, AI companies make claims about how they train their AI models, conduct safety red-team exercises, and carry out risk management, without any external verification. The VET AI Act would create a pathway for independent evaluators, serving a function similar to those in the financial industry and other sectors, to work with companies as neutral third parties to verify that their development, testing, and use of AI comply with established guardrails. As Congress moves to establish AI regulations, evidence-based benchmarks to independently validate AI companies’ claims about safety testing will only become more essential.

Earlier this month, Hickenlooper announced his intent to introduce the VET AI Act. For full bill text, click HERE.

In a February speech at Silicon Flatirons, Hickenlooper proposed his “Trust, but Verify Framework”, which included a call to establish auditing standards for Artificial Intelligence (AI) systems in order to increase transparency and adoption of AI while protecting consumers. In the same speech, Hickenlooper also called for federal data privacy legislation to create a national standard for protecting Americans’ personal and sensitive data.

Specifically, the VET AI Act would:

  • Direct NIST, in coordination with the Department of Energy and National Science Foundation, to develop voluntary specifications and guidelines for developers and deployers of AI systems to conduct internal assurance and work with third parties on external assurance regarding the verification and red-teaming of AI systems.
    • Such specifications would require consideration of data privacy protections, mitigations against potential harms to individuals from an AI system, dataset quality, and the governance and communications processes of a developer or deployer throughout an AI system’s development lifecycle.
  • Establish a collaborative Advisory Committee to review and recommend criteria for individuals or organizations seeking to obtain certification of their ability to conduct internal or external assurance for AI systems.
  • Require NIST to conduct a study examining various aspects of the ecosystem of AI assurance, including the current capabilities and methodologies used, facilities or resources needed, and overall market demand for internal and external AI assurance.

“Bipartisan Policy Center (BPC) Action applauds Sens. John Hickenlooper (D-CO) and Shelley Moore Capito (R-WV) for their leadership on the VET Artificial Intelligence Act to both spur AI development and deployment, while also protecting Americans from privacy harms. Its commonsense public-private sector approach will help grow consumer trust in AI, expanding the number of Americans and businesses adopting this important set of technologies. At a time when AI is being used by vast numbers of Americans in nearly every sector of our economy, we’re excited to see Sens. Hickenlooper and Moore Capito work together to craft this thoughtful bill,” said Michele Stockwell, President, BPC Action.

“The Validation and Evaluation for Trustworthy (VET) AI Act would bring much-needed certainty to AI developers, deployers, and third parties on external assurances on what processes such as verification, red teaming, and compliance should look like while we, as a country, figure out how we will engage with AI governance and regulation,” said Dan Correa, Chief Executive Officer of the Federation of American Scientists. “We commend Senator Hickenlooper and Senator Capito for working together on this global policy issue that will showcase America’s leadership in setting standards for AI Safety and Governance. This bill also highlights the importance of NIST in paving the way for innovative and equitable uses of AI, underscoring that this critical agency should have enough resources to lead the way in AI standards-setting.”

“The Center for AI Policy (CAIP) strongly endorses the Validation and Evaluation for Trustworthy (VET) AI Act, spearheaded by Senator Hickenlooper and Senator Capito. This bipartisan legislation represents a crucial step towards ensuring AI safety through collaborative efforts across industry, academia, and government. Although many AI companies now make at least some effort to evaluate their models, there is no way for outsiders to tell whether the evaluation is complete, accurate, and performed with integrity. The VET AI Act would solve this problem by developing reliable third-party standards for evaluating whether an AI model can be trusted to behave as intended. CAIP is encouraged to see legislators making important progress toward this worthy goal,” said Jason Green-Lowe, Executive Director for the Center for AI Policy.

“Developing voluntary best practices for rigorously assessing the safety and reliability of AI systems will help ensure their responsible deployment. The VET AI Act would accelerate efforts to create a standardized methodology for evaluating AI systems and equip U.S. companies with new tools to ensure their AI systems are among the most trusted in the global market,” said Daniel Castro, Vice President of the Information Technology and Innovation Foundation.

“The public is increasingly at a loss as to who or what to trust when it comes to AI systems, and the Validation and Evaluation for Trustworthy (VET) AI Act is a necessary and significant step towards addressing that gap between public apprehension and trust. IEEE-USA is thankful that Senator Hickenlooper has introduced legislation to ensure that AI systems are demonstrably trustworthy, fair, and equitable,” said David Mindham, Vice President of IEEE-USA.

“Developing responsible AI requires flexible regulation. The VET AI Act takes important steps towards ensuring a multistakeholder process to reinforce privacy and reduce harms through voluntary consensus standards that will assist those who continue to innovate in AI deployment,” said Josh Landau, Senior Counsel for Innovation Policy at CCIA.

“OTI welcomes the introduction of the VET AI Act. This piece of legislation is a necessary step in building out a meaningful regulatory environment, as it sets a rigorous bar for both internal and third-party specifications and auditing. It’s important to center concerns around data privacy and protection, mitigating harms that can stem from AI use, and the existence of processes for governance and communication within the development and deployment of AI. The bill does just that. The VET AI Act’s focus on aligning US practices with international standards on assurance is an important component that ensures the United States harmonizes its approach to AI governance, which will make it easier for AI companies to be globally compliant and competitive,” said Prem Trivedi, Policy Director at OTI.

“Booz Allen is proud to support the ‘Validation and Evaluation for Trustworthy (VET) Artificial Intelligence Act’. We believe that the Act’s emphasis on testing, evaluation, validation, and verification of AI systems aligns with our goals for building public trust and accountability in AI applications. The bill’s emphasis on ‘consensus-driven and evidence-based guidelines’ resonates with our belief that a one-size-fits-all approach is impractical for AI systems. The focus on ‘use case’ and ‘relative risk’ is commendable, as it aligns with our support for context-driven, risk-based AI governance, and it mirrors our principles for delivering trusted AI, promising a future of responsible AI technology growth and governance,” said John Larson, Executive Vice President, Head of Booz Allen’s AI Business.

“The VET AI Act represents an important step in promoting effective and consistent assurance safeguards over AI technologies. As the association representing professionals responsible for providing independent internal assurance – especially as it relates to AI – The IIA supports the legislation’s emphasis on establishing essential governance and risk management processes. The IIA was grateful for the opportunity to work with Senator Hickenlooper’s staff and provide legislative language that was included in the VET AI Act. We believe these changes will enhance the already strong foundation of the VET AI Act. We look forward to a continued dialogue with Senator Hickenlooper and other congressional leaders as this critical legislation advances,” said Anthony J. Pugliese, CIA, CPA, CGMA, CITP, President and Chief Executive Officer of The Institute of Internal Auditors.

“The Software and Information Industry Association (SIIA) is pleased to express support for the Validation and Evaluation for Trustworthy (VET) AI Act co-sponsored by U.S. Senators John Hickenlooper (D-CO) and Shelley Moore Capito (R-WV). Introduction of this legislation is an important step towards advancing the responsible development and deployment of AI technologies and supporting U.S. innovation and leadership in AI. As the pace of AI advancement continues to accelerate, pragmatic guardrails are important to maximize the benefits of AI while minimizing potential risks. The VET AI Act contributes to this through its directive for the National Institute of Standards and Technology (NIST) to collaborate with federal agencies, industry stakeholders, academia, and civil society to develop detailed specifications, guidelines, and recommendations for third-party evaluators. These standards would factor in privacy protections, harm mitigation, dataset quality, and governance throughout the AI development lifecycle, which is essential for fostering a trustworthy AI ecosystem. By establishing a collaborative Advisory Committee and mandating a comprehensive study on the AI assurance ecosystem, the VET AI Act ensures a well-rounded and informed approach to AI regulation and certification. SIIA looks forward to working with Congress towards passage of this critical bill,” said Paul Lekas, Head of Global Public Policy & Government Affairs and Senior Vice President of SIIA.

“The introduction of the VET AI Act represents a major step forward in establishing a framework for responsible AI development and deployment, and we applaud Senators Hickenlooper and Capito for their leadership on this critical issue. The proposed legislation addresses one of the most pressing challenges in AI governance: the need for independent verification of AI companies’ claims about their systems’ safety, reliability, and compliance with ethical standards. The VET AI Act’s focus on developing specifications and guidelines for third-party evaluators is a thoughtful approach to increasing transparency and accountability in the AI industry. By involving NIST, federal agencies, and stakeholders from industry, academia, and civil society, the bill ensures a comprehensive and collaborative process in creating these standards. At Americans for Responsible Innovation, we believe that as AI capabilities continue to advance, it is essential that we develop robust mechanisms to verify companies’ claims about their AI models’ safety and risk management practices. Public trust and verifiable safety are essential to the widespread adoption of beneficial AI. We are particularly encouraged by the bill’s emphasis on data privacy protections, mitigations against potential harms, dataset quality, and governance processes throughout AI systems’ development lifecycles. These considerations are fundamental to ensuring that AI technologies are developed and deployed in a responsible manner,” said Eric Gastfriend, Executive Director of Americans for Responsible Innovation.

###
