Explainable AI: A question of evolution?

By Professor Fraser Sampson, former UK Biometrics & Surveillance Camera Commissioner

Since the dawn of technology, we have been on speaking terms with our tools. So says George Dyson in Analogia, where he maps our circular evolution from flint chippers to micro chippers. In the leap from adze to Azure, kinship might be one reason behind our early code-dweller instinct that Artificial Intelligence (AI) must be ‘explainable’. But that would assume all other technologies in common use are immediately and fully understood by everyone using them. We haven’t been acquainted with many of our tools for a while, and you don’t need to go back to the dawn of technology to see that – somewhere around mid-morning should do. When people unpacked their first Macintosh home computer, how many understood the workings of its graphical user interface? My parents didn’t. Computer pointers evolved from CAT to mouse; with the arrival of the touchpad there was no need to grasp either the device or its theory, just to accept the ingenuity that lets us work remotely. Why then should AI-derived technology be different?

One thing about AI that is no different from previous eras is the atavistic fear it stirs. Some debates about AI’s very existence could just as easily be about witches. There’s nothing new about neophobia, even when our opposable thumbs are tapping in all caps. Ancient superstitions aside, there are, however, several empirically sound reasons why AI might need specific explanation, the first being evolution – not ours but our tools’. Whether or not we’ve enjoyed a deep understanding with our industrial machinery so far, this time around the tools will be able to evolve without (or even despite) us. This may not (yet) be evolution in the biological sense, but an aptitude for self-perpetuation and imitation deserves explanation.

The tools of the next era will be able to teach themselves and be on speaking terms with each other, beyond the parameters of their initial programming. Autodidactic, idiolectal tools are more than a 2.0 version of anything that went before. They’re a new species, and grappling with their implications calls for more than digital dexterity, particularly when they are used by the state. Artificial Intelligence should probably explain itself because it can.

Another reason why AI tools need to be explainable is automated decision-making. Inexplicable black boxes lead back to the bewitchment of the Sorting Hat; with real-life tools we need to know how their decisions are made. As for the human-in-the-loop on whom we are pinning so much, if they are to step in and override AI decisions, the humans had better be on more than just speaking terms with their tools. Explanation is their job description.

And it’s where the tools are used by the state to make decisions about us, our lives, liberty and livelihoods, that the need for explanation is greatest. Take a policing example. Whether or not drivers understand them, we’ve been rubbing along with speed cameras for decades. What will AI-enabled road safety tools look and sound and think like? If they’re on speaking terms with our in-car telematics, they’ll know what we’ve been up to behind the wheel for the last year, not just the last mile. Will they be on speaking terms with juries, courts and public inquiries, reconstructing events that took place before they were even invented, together with all the attendant sounds, smells and sensations rather than just pics and stats?

Much depends on the type of AI involved, but even Narrow AI has given the police new reach, such as remote biometrics. Matching historical data for elimination and implication is one thing, but it’s AI’s prospective decision-making power that’s the real payoff. Artificial Intelligence will accelerate actuarial policing like law enforcement’s own Large Hadron Collider (minus the 10,000 scientists and ten-year project cycle). If we can predict accidents, prevent incidents and pre-empt criminality, saving time, money and lives using AI, wouldn’t it be morally delinquent not to? That’s going to need hypervigilant governance accompanied by unequivocal explanation. Why? Recent research at CENTRIC reported that nearly three in four participants (74.3%) across 30 countries agreed that the police should use AI to predict crimes before they happen. It’s a very short – and perhaps irresistible – evolutionary step for the tools to try to identify the future offender. That way lies a vast galaxy of dark matter.

Lastly, AI upsets our ancestral dominance. Dyson concluded that we were heading for “a world where humans coexist with technologies they no longer control.” Perhaps AI’s greatest difference lies here, with its introduction of the first power sharing arrangement with our tools, replacing our absolute authority over them and bringing the prospect that we may have to explain ourselves to them.

If we have always enjoyed an intimate relationship with our tools in the past, we’re probably the last generation where that holds good. As our tools have proliferated, we have regressed and are now barely on nodding terms with devices that sustain our everyday lives.

Being ‘on speaking terms’ describes a reciprocal preparedness to engage. On one view, ours is in fact the first generation to be on speaking terms with its tools – and the first since the dawn of technology where our tools may not be on speaking terms with us. That deserves explanation.

About the author

Fraser Sampson, former UK Biometrics & Surveillance Camera Commissioner, is Professor of Governance and National Security at CENTRIC (Centre for Excellence in Terrorism, Resilience, Intelligence & Organised Crime Research) and a non-executive director at Facewatch.
