World Economic Forum, EU push for human-centered AI regulatory framework to build trust
Given current concerns over the ethics, privacy and accessibility of facial biometrics, the World Economic Forum believes that defining responsible use, establishing systems to support product teams, and conducting self-assessments and third-party audits to ensure compliance are among the principles that could help build trust in the technology.
While privacy and human rights activists, warning of misuse, are calling for a complete ban, some tech companies such as IBM, Microsoft and Amazon have already suspended development of biometric facial recognition technology or stopped selling it to law enforcement.
The WEF writes that “human-centered” ethical and responsible design standards could build trust by preventing bias and misuse. The organization’s AI team has developed a responsible development framework for engineers and policymakers.
The growing interest in touchless and passwordless technology, especially amid COVID-19, could encourage companies to develop safe and secure ways to verify identity by following “Principles for Action” focused on privacy, risk assessment, proportional use of the technology, accountability, end-user consent, accessibility for people with impairments, and adaptability.
To make systems responsible by design, product teams not only have to work with the platform provider and the technology’s user, but also consider the justification for using facial recognition, a data plan suited to users’ characteristics, bias risk mitigation and ways to keep users informed.
To ensure the Principles for Action are followed, self-assessment will help identify weak spots and show whether systems meet the standards by comparing results. Companies can build transparency and prevent mistrust by assigning an independent third party to conduct a compliance audit.
The World Economic Forum points out that lessons learned in the accounting industry can readily be applied to building trust in facial recognition. Governments will also have to play their part, adopting bills that encourage “sustainable regulation” aligned with international standards.
EU wants regulatory framework on “high-risk” AI
According to Science Business, 42 percent of respondents to a recent European Commission consultation say a regulatory framework for “high-risk” AI should be introduced in Europe. High-risk AI applications range from autonomous vehicles to AI in healthcare and biometric facial recognition.
The consultation drew a total of 1,215 responses, with mixed views on how the technology should be approached, specifically whether new legislation is needed or a revision of existing rules would suffice. Only 3 percent feel existing legislation is enough.
“It is interesting to note that respondents from industry and business were more likely to agree with limiting new compulsory requirements to high-risk applications [by] a percentage of 54.6 per cent,” the commission noted.
Earlier this year, the EU published a white paper on artificial intelligence to establish global standards for technological advancement and AI technology. During the consultation that followed, a large number of companies raised concerns that AI may be “breaching fundamental rights” and leading to discrimination.
“Ninety percent and 87 percent of respondents [respectively] find these concerns important or very important,” the commission said.
The companies that participated are from the EU, India, China, Japan, Syria, Iraq, Brazil, Mexico, Canada, the US and the UK. A more detailed report will be released.