WASHINGTON – Today, U.S. Senators John Hickenlooper and Marsha Blackburn, Chair and Ranking Member of the Senate Commerce Committee’s Subcommittee on Consumer Protection, Product Safety, and Data Security, sent a letter to leading technology associations asking how they will implement a recent framework from the National Institute of Standards and Technology (NIST) that helps mitigate the potential risks Artificial Intelligence (AI) may pose to consumers and encourages its responsible development and use.
Advancements in AI are the result of American innovation and will keep our country competitive as we look toward the future. However, recent developments in AI technology have raised a range of concerns, underscoring the need for new policies that ensure current and future AI technologies are developed responsibly and trusted by consumers.
“As AI applications or services grow in their sophistication, we also must recognize the areas where appropriate guardrails or evidence-based best practices can improve the performance and accuracy of AI,” wrote Hickenlooper and Blackburn. “The U.S. government has continuously supported policies encouraging growth in the field of AI. It is with this vision for growth in mind that we want to highlight work needed to ensure AI is developed transparently and is used responsibly.”
Full text of the letter is available below:
We write to you in a moment of accelerating developments in the field of Artificial Intelligence (AI). AI has the potential to transform how consumers use traditional products and services, increase our economy’s productivity and competitiveness, and catapult our society further into the future. Today, AI can help consumers and businesses understand complex problems and simplify manual tasks, assist students in advancing their education, and enable manufacturers to monitor and manage critical supply chain logistics. These advancements are made possible thanks to continued public and private investments in research & development and gains in computing power to process and learn from massive datasets.
The United States’ leadership in emerging technologies like AI is built by our skilled workforce, rooted in our premier research institutions, and strengthened by our open pathways to commercialize new discoveries. Our nation has always been the global leader in science and technology, and we should continue to ensure new innovations in AI are introduced to consumers in a deliberate and responsible manner. In doing so, we can increase AI’s trustworthiness and reliability while fostering continued innovation and creativity.
As AI applications or services grow in their sophistication, we also must recognize the areas where appropriate guardrails or evidence-based best practices can improve the performance and accuracy of AI. For example, generative AI is now capable of writing detailed academic papers, producing high-quality pieces of artistic, musical, or creative content, and even conversing with humans at unprecedented levels of contextual understanding and nuance. The U.S. government has continuously supported policies encouraging growth in the field of AI. It is with this vision for growth in mind that we want to highlight work needed to ensure AI is developed transparently and is used responsibly.
Through the National Artificial Intelligence Initiative (NAII), we have seen a whole-of-government effort, in partnership with the private sector, to identify, understand, and develop responses to the range of issues in the field of AI, such as generated bias or limited transparency. By seeking solutions through collaboration among industry, academia, and government, we can ensure that AI continues to develop and that its applications are trusted and adopted by consumers. The NAII and the work of its Advisory Committee, on which your member companies are represented, will be a fundamental part of shaping AI’s future.
To further guide the efforts of the NAII and the private sector, the National Institute of Standards and Technology (NIST) recently released the first version of the AI Risk Management Framework (AI RMF). As we understand it, the AI RMF is designed to be a voluntary, flexible, and evidence-based framework to help organizations of all sizes identify and mitigate risks throughout an application’s development lifecycle.
As we continue to monitor and understand developments in the field of AI, we would like to learn more about how your member companies plan to incorporate best practices from the AI RMF to improve the services and products they offer to consumers.
- How do you plan to build and deploy safe and transparent AI systems for consumers?
- How does the AI RMF align with and support your AI development and deployment practices?
- How could NIST continue to support you in your efforts to deploy AI applications in the near term?
- What plans do you have to contribute, independently or jointly, to the case studies, AI profiles, or tutorials outlined in the AI RMF Roadmap?
- What are the most effective ways to provide resources directly to consumers in consumer-facing products to help them understand and trust AI systems?
We stand willing and ready to work with you to ensure AI innovations continue to thrive and risk-based protections deliver a positive impact for consumers.
Sincerely,
###