Guidelines, frameworks and predictions for AI and facial recognition published
The European Parliamentary Research Service (EPRS) has released a set of guidelines for ethical use of artificial intelligence, based on a ‘human-centric’ approach in alignment with European values and principles, according to an announcement.
The “EU guidelines on ethics in artificial intelligence: Context and implementation” is a 13-page document which mentions biometric technology explicitly four times, mostly in connection with facial recognition in the UK. The paper is intended to raise awareness of recommended ethical rules for the design, development, deployment, implementation, and use of AI products and services in the region. It identifies implementation challenges and sets out possible future EU actions, including “soft law guidance,” standardization, and legislation. It also discusses the need to clarify the guidelines, how to encourage the adoption of ethical standards, the case for legally binding rules on transparency and common requirements for human rights impact assessments, and how to deal with facial recognition, for which some member states are already discussing national-level regulation. The paper concludes with a review of prominent ethical frameworks for AI under development elsewhere, such as in the U.S. and China.
A document on AI ethics guidelines was also produced by the European Commission’s High-Level Expert Group on AI in April, and Commission President-elect Ursula von der Leyen is planning to propose legislation to coordinate the European approach to the topic.
Analysts see change coming
Algorithmic bias will be tackled by data auditors to reduce the impact of “pale, male, and stale” AI development between now and 2021, according to global analyst firm CCS Insight, Mobile Europe reports. The firm has published 15 predictions, several relating to facial recognition and other biometrics.
A Premier League soccer club will launch a facial recognition ticketing system by 2021, CCS Insight predicts. Manchester City has denied that it is planning to use facial recognition for entry into its stadium, but the club does have a relationship with Blink Identity.
CCS Insight also predicts that psychometric testing of software developers will become common by 2023, that Apple will launch a privacy brand to communicate its stance on protecting user information next year, and that Samsung will launch Galaxy Glasses by 2022. A wearable device maker will pay users for data due to a lack of diversity in available data sets by 2023, and technology for detecting deepfakes will emerge by 2021, the firm says. Other predictions relate to the advance of 5G, and the longer-term displacement of business travel by virtual reality due to environmental concerns.
WEF publishes strategy framework
The World Economic Forum is encouraging the development of national AI strategies, and has published a framework to guide that process.
The 20-page whitepaper, titled “A Framework for Developing a National Artificial Intelligence Strategy” and published by the Forum’s Centre for the Fourth Industrial Revolution, provides a way to build a minimum viable strategy for a national approach to AI, according to the announcement.
The whitepaper considers why national strategies on AI are needed and how to design one, beginning with setting objectives, then considering the strategy’s key dimensions and discussing implementation plans.
U.S. Government AI plans criticized
The White House Office of Science and Technology Policy has released a document presenting a “Summary of the 2019 White House Summit on Artificial Intelligence in Government.”
The summit was chaired by U.S. Chief Technology Officer Michael Kratsios and included 175 leaders from government, industry, and academia, Forbes reports. The White House has launched a national AI initiative, including a website to centralize efforts. However, Ron Schmelzer, senior analyst at AI advisory firm Cognilytica and author of the Forbes article, argues that the $973 million in AI funding included in the federal government’s 2020 budget proposal is “actually a drop in the bucket when gauged against worldwide investment.” The U.S. government was reported to be spending more than $2 billion a year on AI research and development prior to an announcement of increased funding by DARPA last year.
Schmelzer also says the skills gap is widening, and argues that the adoption of best practices should be fostered within the industry. He suggests creating an AI Center of Excellence as part of a push for greater engagement with the private sector.