Do robots have the right to remain silent? Ask Miranda
By Professor Fraser Sampson, former UK Biometrics & Surveillance Camera Commissioner
You do not have to say anything. The inexhaustible appeal of crime-based entertainment means the English-speaking world is very familiar with the right not to say anything on being arrested. In England and Wales, the words come from the Caution, which came into use over 100 years ago, but its younger American cousin, Miranda (from Miranda v Arizona in 1966), is probably better known. Beginning ‘you have the right to remain silent’, Miranda became a verb in U.S. law enforcement (as in “did you Mirandize my client, officer?”). Failing to remind suspects of their rights properly can lead to evidence being excluded or cases being thrown out.
Artificial Intelligence (AI) generates lots of legal questions about how it will fit into our world – and vice versa – one of which is whether robots will have rights. The right to remain silent has, as the U.S. Supreme Court noted nearly 25 years ago, become a part of our culture and offers an insight into the rights-based arguments for AI in law enforcement.
Lawyers know it as the right against self-incrimination, and for self-incrimination to occur you need two things: a crime (where the word comes from) and a risk of conviction (where the warning comes from). With a robot there’s currently neither. It can’t testify as a witness – against itself or anyone else – nor can it be the subject of a criminal conviction. ‘Taking the Fifth’ (remaining tactically silent) rests on crucial civil rights that citizens hold against the state, and similar provisions exist in the UK. Without rights, a robot can’t take the Fifth: it has nothing in jeopardy when it speaks. That seems to be that then: asked and answered. No rights. No risk. No further questions.
But should there be? Consider three scenarios. In the first, a court wants data held by a robot during the prosecution of a human being. Where it is the human’s rights (to a fair trial etc.) that are in peril, it is arguable that the AI should not stay quiet and in fact has a duty to contribute any relevant data. But if it’s inappropriate to speak of rights for machines then they shouldn’t have duties either: rights and duties are reciprocal, and the labels don’t really fit the AI world. Another way to look at the scenario is that the civil rights of the human on trial extend to data held by the AI, in which case it should not be permitted to remain mute. This fits with the norms of ethical and explainable AI without having to assume a ‘duty’ in the technology.
Consider then the second scenario. A computer makes a devastating error and is in sole possession of all the information necessary to understand what happened. There are many examples of the computer being catastrophically wrong; as AI decision-making proliferates, it’s reasonable to anticipate many more. The public interest may therefore require ‘candour by design’ to be built into all AI used for critical functions like policing, but how will that work in relation to the liability of those who designed, sold and operated it? It may be very much in the interests of its human designers for the AI to keep schtum when interrogated after an error, going into protective mode against any blame coming their way. In creating their bots, programmers might heed Seamus Heaney’s admonition: ‘whatever you say, say nothing’, although deliberately blurting out disinformation and leaving false evidence trails might be a more effective ploy to code for. After all, if a robot can’t give evidence it’s not in the frame for perjury, so who would be lying?
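What ‘candour by design’ might mean in practice is easier to see with a small sketch. The Python below is only illustrative – the class, field names and hashing scheme are my own assumptions, not any existing standard – but it shows the basic idea: every automated decision is written to an append-only, tamper-evident log, so the system cannot later ‘keep schtum’ or quietly rewrite its account without the gap being detectable.

```python
import hashlib
import json
import time


class CandourLog:
    """Illustrative append-only decision log: each entry is chained to the
    previous one by a hash, so silent edits or deletions become detectable."""

    def __init__(self):
        self.entries = []
        self.last_hash = "0" * 64  # genesis value for the chain

    def record_decision(self, system_id: str, inputs: dict, output: str, rationale: str) -> str:
        entry = {
            "timestamp": time.time(),
            "system_id": system_id,
            "inputs": inputs,
            "output": output,
            "rationale": rationale,
            "prev_hash": self.last_hash,
        }
        # Hash the entry (including the previous hash) to extend the chain.
        entry_bytes = json.dumps(entry, sort_keys=True).encode()
        entry["hash"] = hashlib.sha256(entry_bytes).hexdigest()
        self.entries.append(entry)
        self.last_hash = entry["hash"]
        return entry["hash"]

    def verify(self) -> bool:
        """Re-compute the chain to confirm no entry has been altered or removed."""
        prev = "0" * 64
        for e in self.entries:
            body = {k: v for k, v in e.items() if k != "hash"}
            if body["prev_hash"] != prev:
                return False
            if hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest() != e["hash"]:
                return False
            prev = e["hash"]
        return True
```

The point is not the particular hashing scheme but the design choice it represents: candour becomes a property of the record-keeping, not a promise from whoever built or operates the system.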
Transparency is important, but inevitably there will be times when the public interest depends on the AI not giving out information: national security is the obvious example. Designing AI for transparency and public interest immunity simultaneously will be challenging.
Admission of error to exonerate innocent humans may be called for, but Netflix taggers will detect another cop drama episode here: the one where a hapless stooge is persuaded to take the fall for everything. Could Patsy-AI be programmed to ’fess up simply to divert suspicion from the human (in the loop or otherwise)?
In the final scenario, a penitent crime boss confesses all to the ‘AI Jesus’ just installed in St Peter’s Chapel, Lucerne (now there’s an idea for a plot). How can investigators get that invaluable information? The only sanctity for that communication is contractual, not spiritual, and the long-running saga of the Boston College tapes shows how it ends: US/UK law enforcement will probably get the court’s blessing – if not the church’s – when they go after confessions relating to serious crimes such as terrorism, regardless of any data-sharing niceties. We should expect the same approach in respect of robots.
Regardless of whether they have any rights, properly programmed bots will know the law and won’t need an attorney. They will know that anything said may be used against others. Perhaps a better AI question for law enforcement and the right to silence is whether the police should use an AI assistant to ensure suspects get cautioned properly, at the right time, auditably and accurately, in any language necessary. ‘Hey Miranda, read them their rights’.
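As a closing aside, the ‘Hey Miranda’ idea is straightforward to prototype. The sketch below assumes a hypothetical caution assistant that delivers the warning in the requested language and timestamps each delivery for audit; the caution texts, names and languages shown are placeholders, not approved operational wording.

```python
import datetime
from dataclasses import dataclass, field

# Placeholder texts only; a real deployment would use the exact approved
# wording for each jurisdiction and professionally verified translations.
CAUTION_TEXT = {
    "en": "[approved England & Wales caution wording]",
    "cy": "[approved Welsh translation]",
    "pl": "[approved Polish translation]",
}


@dataclass
class CautionRecord:
    suspect_ref: str
    officer_ref: str
    language: str
    text: str
    given_at: str


@dataclass
class CautionAssistant:
    """Hypothetical assistant that delivers the caution in the requested
    language and keeps an auditable record of when and how it was given."""
    log: list = field(default_factory=list)

    def caution(self, suspect_ref: str, officer_ref: str, language: str = "en") -> CautionRecord:
        if language not in CAUTION_TEXT:
            raise ValueError(f"No approved caution text for language '{language}'")
        record = CautionRecord(
            suspect_ref=suspect_ref,
            officer_ref=officer_ref,
            language=language,
            text=CAUTION_TEXT[language],
            given_at=datetime.datetime.now(datetime.timezone.utc).isoformat(),
        )
        self.log.append(record)  # auditable trail of every caution given
        return record
```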
About the author
Fraser Sampson, former UK Biometrics & Surveillance Camera Commissioner, is Professor of Governance and National Security at CENTRIC (Centre for Excellence in Terrorism, Resilience, Intelligence & Organised Crime Research) and a non-executive director at Facewatch.