UK report sees no sure biometric success with COVID unless citizens are consulted

A British AI think tank’s report carries a message that is both necessary and obvious: the usual hand-waving, hype and funding scorecards used by tech developers will not warm the UK public to biometrics in the fight against COVID-19.

Development of tools intended to stifle the coronavirus must begin in, not end with, consultation with the public, according to the report, published by the Ada Lovelace Institute.

Legitimate, difficult-to-address concerns over, say, facial recognition in contact tracing or digital health credentials cannot be dismissed as irrelevant or uninformed. Nor can they be papered over with shiny, non-essential features and design.

“It’s complicated,” reads a top-level summary for the report. But is it really?

That sentiment reads more like a genuflection before Big Tech (and Big Funding). Virtually all of the pieces required are well-worn.

Interoperability between the world’s many, many databases is probably the biggest technical hurdle, and solving it will require no fundamental innovation at all. In fact, interoperability work has been going on since the Arpanet.

Contact tracing will require a combination of manual work and surveillance through machine vision and mobile device tracking via an app. AI programming must make it clear when systems need to look, when to look away and when to forget.
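
The report stops short of prescribing an implementation, but the “forget” requirement maps naturally onto enforced data retention. As a minimal sketch, assume a hypothetical contact tracing app that logs proximity events under rotating anonymous identifiers; the ContactLog class, its method names and the 21-day window below are illustrative assumptions, not anything taken from the report:

```python
from datetime import datetime, timedelta

# Hypothetical policy value for illustration only; the report does not
# specify a retention window or endorse this design.
RETENTION_DAYS = 21


class ContactLog:
    """A contact-event store that 'forgets' by design: every query first
    purges records older than the retention window."""

    def __init__(self, retention_days=RETENTION_DAYS):
        self.retention = timedelta(days=retention_days)
        self._events = []  # list of (timestamp, anonymous_id) tuples

    def record(self, anonymous_id, when=None):
        """Log a proximity event under a rotating anonymous identifier."""
        self._events.append((when or datetime.utcnow(), anonymous_id))

    def purge_expired(self, now=None):
        """Forget: drop every event older than the retention window."""
        cutoff = (now or datetime.utcnow()) - self.retention
        self._events = [(t, i) for (t, i) in self._events if t >= cutoff]

    def recent_contacts(self, now=None):
        """Return only identifiers still inside the retention window."""
        self.purge_expired(now)
        return [i for (_, i) in self._events]
```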

“Each app will require people to adopt it, to use it and to adhere to it. If it’s not deemed legitimate, proportionate, safe and fair it will fail,” according to the report.

Adoption of TraceTogether, Singapore’s relatively well-known contact tracing app, was described as “not enough” by the country’s digital office.

The accuracy of AI systems is not perfect and never will be; chasing it is like grinding a perfect sphere, an ideal that can be approached but never reached. The organizing principle behind AI and machine learning development therefore has to be that algorithms must prove themselves correct. People should not have to prove systems wrong.

That point is the key.

The Lovelace Institute’s report makes clear that the distrust among those who are skeptical of life under the constant gaze of autonomous eyes and ears runs deep. Those people are not paranoid rubes.

They do not trust, and should not blindly trust, that collected data is accurate and secure, or that businesses and governments use it to benefit citizens.

As the report states, it would go a useful distance toward broad acceptance if the public actually understood what systems are deployed, what they do and why. It is just as important for people to know they are not under AI’s thumb.

The stakes are high in terms of preventing more contagion, and that cannot be minimized. But the report points out that the forces allying to beat back the coronavirus will not get many chances here.

Not only because of the health consequences, but also because of the privacy concerns.

“Failure won’t just be dismissed as one poor product.” Governments could change hands. Businesses could see their share prices and reputations sheared. And the coronavirus could get a firmer foothold in daily life.

Worse, there is the possibility that a big mistake “undermines faith in future tech tools that could prove lifesaving.”

The surest way forward, according to the institute, is to involve the informed public now in a discussion that helps all parties better understand what is needed, what is possible and what is acceptable.
