US, UK and ASEAN debating how to regulate AI

Governments, civil society and businesses are all attempting to steer the development of artificial intelligence. The Ada Lovelace Institute wants to revise current UK AI laws and has specific recommendations for how to do so, Singapore may be ASEAN’s regulatory role model, while U.S. tech giants have pledged to support the White House in mitigating AI risks.

UK research group says UK has gaps in regulating AI

Research and advocacy organization the Ada Lovelace Institute says that the UK’s AI regulatory network faces “significant gaps.” In a report published last week, it issued 18 recommendations that it believes could jump-start the island nation’s bid to become a global AI governance pioneer.

One of the recommendations is a call to review the UK General Data Protection Regulation (GDPR) and the Equality Act 2010 to include new rights and protections for those affected by AI. Other recommendations include establishing a dedicated AI ombudsman and AI deployment pilots, and introducing mandatory reporting requirements for developers of foundation models. The Institute also suggests developing a biometrics governance system.

The UK does not have a holistic body of law governing AI in the same way that the European Union is trying to achieve with its upcoming AI Act. The country should work on establishing both “horizontal” frameworks, covering human rights, equality and data protection, and “vertical” or domain-specific regulation, such as regulating medical devices, according to the Institute.

“The UK has an opportunity to position itself as a leader in global AI governance, pioneering a context-based, institutionally focused model for regulating AI that could serve as a template for other global jurisdictions,” the report concludes.

Southeast Asia needs an AI governance framework and Singapore may have the answer

Southeast Asian leaders should consider adopting Singapore’s Model AI Governance Framework at the regional level, ensuring member countries have a common legal basis to govern the use of AI, similar to the EU’s AI Act. This would allow the region to strengthen both its competitiveness and digital rights, a new opinion piece published by East Asia Forum argues.

Singapore’s Model AI Governance Framework, launched in 2020 as part of its National AI Strategy, is one of several AI-focused initiatives in the region. Indonesia, Thailand, Malaysia and Vietnam have also published national strategies and roadmaps for developing the technology, while the 10-member Association of Southeast Asian Nations (ASEAN) is currently drawing up the ASEAN Guide on AI Governance and Ethics.

The main advantage of Singapore’s approach is its focus on AI risks. However, the document still lacks many details surrounding categories and levels of risk, including facial recognition, according to Albert J. Rapha, a postgraduate student in Public Sector Innovation and E-governance at Katholieke Universiteit Leuven. In January, Singapore introduced AI Verify, an AI governance testing framework and toolkit.

A 2020 survey by consulting firm Kearney found that although the vast majority (80 percent) of respondents in the region consider AI adoption to be at a nascent stage, Southeast Asia could see a 10 to 18 percent GDP uplift by 2030 thanks to the technology, equivalent to nearly US$1 trillion.

US tech companies pledge to work with White House on AI risks

Leading U.S. tech companies, including Amazon, Google, Meta, and Microsoft, have committed to collaborating with the Biden administration to address potential risks associated with AI. Their commitments include security testing of AI systems before release, sharing risk information with various organizations, watermarking AI-generated content, and addressing harmful bias and public trust issues.

OpenAI, the developer of ChatGPT, as well as its competitor Anthropic, which was founded by former OpenAI staff, and Inflection, the startup behind the chatbot Pi and led by DeepMind co-founder Mustafa Suleyman, have also made voluntary commitments.

In his remarks to reporters before meeting with AI company leaders, President Joe Biden commended these efforts, emphasizing the importance of safety, security, and trust in AI development. He stated, “The group here will be critical in shepherding that innovation with responsibility and safety-by-design to earn the trust of Americans. We must be clear-eyed and vigilant about the threats from emerging technologies that can pose — don’t have to — but can pose to our democracy and our values.”

Despite these commitments, some observers argue that voluntary pledges from just seven AI companies are not enough. They contend that the wider industry will develop and deploy AI tools under proper oversight only if the government enacts binding regulations.

The White House has promised to work with allies to establish a global framework for governing AI. It is developing an executive order and bipartisan legislation to address AI-related issues such as algorithmic bias and transparency. Senator Todd Young, R-Ind., has stated that the Senate is working to adapt laws and regulations to address the impact of AI on society. New AI legislation is anticipated to be released within the next six months.
