White paper lays out Google position on government involvement in AI

Google has laid out its position on government regulation of artificial intelligence in a white paper, suggesting a limited approach to legislation, and calling for government involvement in five specific areas.

Perspectives on Issues in AI Governance (PDF) is a 34-page document that calls for governments to work with civil society and industry stakeholders on explainability standards, fairness appraisal, safety considerations, human-AI collaboration, and liability frameworks. The white paper also notes some of the trade-offs inherent to the discussion, such as the tension between explainability and safeguarding security.

In the section on safety considerations, Google suggests that, just as electrical products in Europe are required to demonstrate their safety through CE certification, a similar certification scheme could be applied to AI.

“For example, biometric recognition technology in smart lock systems could be tested against a representative, randomized dataset to ensure they exceed pre-set accuracy standards, before being certified safe for use.”
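The testing regime described in that passage amounts to a pass/fail accuracy check against a held-out, representative dataset. The sketch below is purely illustrative of that idea; the certify_accuracy function, the Sample structure, and the 99 percent threshold are assumptions made for this example, not anything specified in the white paper.

```python
# Illustrative sketch of a pre-certification accuracy check for a biometric
# matcher, as in the smart-lock example. Names and the 0.99 threshold are
# assumptions for illustration only.
from dataclasses import dataclass
from typing import Callable, Sequence


@dataclass
class Sample:
    features: bytes      # stand-in for a face or fingerprint template
    is_authorized: bool  # ground truth: should this sample unlock the door?


def certify_accuracy(
    matcher: Callable[[bytes], bool],   # the recognition model under test
    test_set: Sequence[Sample],         # representative, randomized evaluation data
    required_accuracy: float = 0.99,    # hypothetical pre-set certification standard
) -> bool:
    """Return True only if the matcher meets the pre-set accuracy standard."""
    correct = sum(matcher(s.features) == s.is_authorized for s in test_set)
    return correct / len(test_set) >= required_accuracy
```

A real certification scheme would likely track false accept and false reject rates separately rather than a single accuracy figure, but the structure of the test, a fixed dataset and a pre-set threshold, would be similar.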

Google Global Policy Lead for Emerging Technologies Charina Chou, who co-authored the report, tells Wired that the white paper is intended to answer governments’ questions about what practical steps they can take to ensure AI is used responsibly. She cautions that further dialogue is needed before drafting new laws.

“At this time, it’s not necessarily super obvious what things should be regulated and what shouldn’t,” Chou says. “The aim of this paper is to really think about: What are the types of questions that policymakers need to answer and [decisions] we as a society have to make?”

Oxford Internet Institute researcher Sandra Wachter told Wired that high-level, abstract claims, such as that AI should be fair, are of limited value at this point.

“I think it’s a good initial list. Where I’d say there is still a gap is how to govern those things,” she says.

Google’s suggestions are more abstract but also much broader than Microsoft’s recent proposals for facial recognition regulation, and would apply to other biometric modalities as well. Lawmakers, meanwhile, continue to move ahead with proposed restrictions on biometrics and other AI technologies at various levels of government.
