Calls for national standards grow as US AI action plan takes shape

On February 6, the National Science Foundation's (NSF) Networking and Information Technology Research and Development National Coordination Office (NCO) issued a Request for Information on the Development of an Artificial Intelligence (AI) Action Plan. The public comment period ended March 15, with more than 8,000 comments submitted to NCO. While NCO has not made the comments public, many respondents have published their own submissions, and together they represent a cross section of business, public policy, and private interests.

While each of the respondents approached AI governance from different positions, their shared concerns reflect a broader industry-wide push for policies that provide clarity, ensure access to data, encourage voluntary standards, and prevent regulatory fragmentation. No matter the context, the prevailing sentiment is that the government should play a supportive role in fostering AI innovation rather than imposing restrictive mandates that could stifle progress.

Taken together, the responses illustrate a complex and evolving policy landscape for AI development. The overarching themes include the need for regulatory clarity, balanced data governance policies, sector-specific oversight, and international cooperation to ensure the security and ethical use of AI-driven systems.

Ensuring legal clarity, strengthening privacy protections, fostering innovation, and addressing bias concerns will be central to shaping the future of AI development and regulation. The responses underscored the importance of a balanced regulatory approach that enables technological progress while safeguarding fundamental rights.

Indeed, as the AI Action Plan takes shape, policymakers will have to balance fostering innovation, protecting privacy, ensuring security, and maintaining global competitiveness. How these regulatory discussions unfold will influence everything from compliance requirements to opportunities for technological advancement in digital identity verification and authentication systems.

A significant concern that has emerged is regulatory clarity and the need for a national standard. Many respondents stressed that a fragmented approach to AI regulation, particularly at the state level, could create compliance challenges across the board. Calls for a unified federal framework are particularly relevant to biometric authentication and privacy, issues that often require interoperability across different jurisdictions and industries.

The National Conference of State Legislatures, for example, said it “strongly believes intergovernmental collaboration is essential to developing a strong AI action plan to ensure this new technology is being used to enhance America’s leadership and support and advance our citizens, businesses and communities.”

The Business Roundtable said the Trump administration must “work collaboratively with industry on standards for AI development and deployment that are voluntary, harmonized, flexible and build on existing widely adopted standards developed in multistakeholder venues. AI standards developed through close collaboration between government, industry, civil society, academia and international partners are among the most effective.”

It encouraged the use of the National Institute of Standards and Technology (NIST) AI Risk Management Framework, saying it “is an example of strong existing risk management guidance developed through robust public-private partnership.”

Continuing, the Business Roundtable said, “Where regulatory guardrails are deemed necessary, whether in new or existing rules covering AI systems, policymakers should provide clear guidance to businesses, foster U.S. innovation, and adopt a risk-based approach that carefully considers and recognizes the nuances of different use cases, including those that are low-risk and routine. Reporting requirements should be carefully crafted to avoid unnecessary information collection and onerous compliance burdens that slow innovation. Moreover, AI governance and regulation should evolve as the AI products, use cases and markets themselves evolve.”

The American National Standards Institute (ANSI) said that in order “to balance opportunities and risks in AI deployment, we recommend that the new AI Action Plan prioritize private-sector-led standardization efforts that involve the participation of all affected stakeholders, including government. These efforts include developing and deploying technical and safety standards and related conformity assessment solutions that address key areas: AI risk hierarchies, acceptable risks, and tradeoffs; data quality, provenance, and governance; security; and benchmarks, testing, monitoring, and risk management.”

“Where appropriate, public-private partnerships can speed deployment of new AI applications and systems,” ANSI said.

OpenAI similarly promoted a “regulatory strategy that ensures the freedom to innovate,” saying that “for innovation to truly create new freedoms, America’s builders, developers, and entrepreneurs – our nation’s greatest competitive advantage – must first have the freedom to innovate in the national interest. We propose a holistic approach that enables voluntary partnership between the federal government and the private sector.”

Stanford University's Institute for Human-Centered Artificial Intelligence and Center for Research on Foundation Models also advocated for open innovation, saying "open innovation has been a cornerstone of the U.S. AI ecosystem, fueling collaboration, competition, and rapid experimentation. To maintain this advantage, the United States must protect, preserve, and promote access to open models rather than impose broad restrictions that could slow innovation and isolate domestic developers … this approach allows a diverse set of researchers—from startups to technology firms and academic institutions—to build on shared advancements, driving faster breakthroughs and greater technological resilience."

TechNet, a bipartisan network of technology CEOs and senior executives that promotes the growth of the innovation economy by advocating a targeted policy agenda at the federal and state level, said “AI systems should be developed, deployed, and used responsibly, enabling the United States to maintain its competitive edge and innovation leadership.”

The group said “any AI regulations should focus on mitigating known or reasonably foreseeable risks and designating responsibility and liability appropriately across the AI value chain.”

TechNet further said technical standards should be based on industry frameworks and best practices. “To promote innovation and adapt to technological changes, we encourage the use of industry frameworks and evidence-based regulatory tools like safe harbors, which allow the industry to test and share best practices,” it said in its response, adding it “believes the administration should promote technical standards and codes of conduct developed through industry-led processes that can help AI stakeholders signal to users that the platform utilizes trustworthy AI systems.”

The Computer & Communications Industry Association (CCIA) noted that state legislatures across the country are currently developing a wide array of AI-related legislation, and that “while there may be unique local concerns in some areas, the vast majority of AI policy would be best served by a unified national policy that provides clear determination of the responsibilities of the various actors in the AI value chain.”

CCIA argued that, "generally, federal legislation should preempt state laws concerning the development and deployment of frontier AI models" and that "this preemption should fully preempt state law in these areas, ensuring a unified federal approach." In CCIA's view, "such an approach ensures consistency and facilitates compliance for businesses operating across multiple states, preventing a fragmented regulatory landscape that could harm consumers, hinder innovation, and create unnecessary obstacles for industry growth."

"While full preemption is generally preferable, states will still have a role to play," CCIA said. "In particular, state legislatures are ideally suited to address specific gaps or unique local concerns that may not be adequately covered by federal regulations. By starting from a presumption of preemption and permitting states to operate to fill gaps where necessary, this approach will ensure national uniformity in managing frontier AI while accommodating state-specific needs."

“Federal agencies should clarify the application of existing law to AI,” CCIA added, noting that “only where there is a risk or harm that is unique to AI systems and not present in the human or non-AI software equivalent is legislation required.”

The Model Evaluation and Threat Research (METR) group discussed at length the catastrophic risks to public safety and national security that AI poses, saying that “once AI systems are highly capable across a broad range of tasks, it will be important to have a highly-secure option available for AI workloads in the U.S. even if the vast majority of work does not require it.”

METR added that "advanced AI systems are valuable targets for theft and misuse by external adversaries, by spies, or by rogue employees. Moreover, there may be AI systems that cannot be safely tested or deployed without substantial security and internal controls. For example, it would be important to securely test specialized AI models that are fine-tuned for nuclear weapons-related tasks, either for actual use or to assess the level of risk the base model poses."
