India’s Telangana minister talks public facial recognition trust on WEF panel
The development, deployment and use of artificial intelligence-based technologies such as facial recognition are spreading like wildfire across the world, but this growth must be matched by intentional efforts to ensure that people gain deep trust in these systems.
This is the summation of views expressed by the Information Technology Minister of India’s Telangana state, KT Rama Rao, during a panel discussion at the recent World Economic Forum (WEF) gathering in Davos.
Titled ‘AI on the Street: Managing Trust in the Public Square,’ the panel featured NEC Corporation CEO Takayuki Morita, Edge Technologies CEO Coen van Oostrom, and Angela Oduor Lungati, executive director of Ushahidi, a Kenya-based open-source software non-profit.
In the discussion, moderated by the experienced city planner Professor Carlo Ratti, the panelists first agreed on the importance of AI-based technologies to society before examining some of the challenges and risks they come with.
While trust in facial recognition technology is important, Rao argued, impact and relevance must also be weighed before any deployment. “One of the things we believe in as a government is that no matter how cool and fancy a technology is, it is futile if it doesn’t have any positive societal impact. We leverage facial recognition and AI in our government,” he said, explaining that the government uses facial recognition for purposes such as identity verification to obtain certain services, renewal of driver’s licenses, and e-voting (which is still at a pilot stage).
Rao also noted the importance of a legal foundation for the use of face biometrics, and of transparency in implementations.
“It’s important to use a consensual method which ensures that citizens are given a choice of consent in the use of such technologies, and also to ensure that officials in the government are educated and have limited access to the data. There have been criticisms and concerns, but by and large, I think ours has been a successful model, and it’s helping the society at large,” he added.
Rao further said that because these AI solutions are emerging technologies, efforts to improve their development and deployment have to be constant in order to build the performance guarantees required for their smooth functioning.
Meanwhile, speaking about trust in AI, NEC’s Morita said: “The main issue in deploying surveillance technologies such as facial recognition is trust. Trust should be built based on transparency. Questions should be asked about how the data will be handled. There is need to establish principles and regulations.”
Edge CEO van Oostrom, for his part, argued that although many tracking technologies are important in different applications, they also have a downside, and often, they allow companies or institutions to have much more personal data than they really should have, thus raising concerns about trust.
“The people seem not to trust governments any more. The issue is how to use such technologies to the benefit of people without getting into a situation where trust becomes so much of a negative force. People will definitely like to hop in or embrace such technologies if they benefit their daily lives,” said the Edge boss.
Chiming in, Angela Oduor Lungati emphasized the intrusive nature of technologies like facial recognition and the risks they can expose people to.
“Many people are oblivious about what risks technologies like facial recognition create or might expose them to. Some of these AI tools are also exacerbating existing inequalities and propagating certain biases that already exist. This is largely a function of who is building these tools and the kind of data that are fed into these systems,” she stated.
Her point brought the panelists to the question of how facial recognition algorithms are trained, as some are known, for instance, to perform relatively poorly at recognizing people with particular skin colours and tones.
Another proposal the panelists made regarding the development, deployment and use of such AI technologies is the need for third-party verification of how data is collected and used, and for the active involvement of people from the communities those systems are intended to serve.
“From the point of building these systems, we have to make sure that those communities we are looking to serve are actually represented. But I also have to recognise the progress that has been made in coming up with policies and regulations around this, despite the difficulties. We also need to start looking at how to empower the ordinary people to begin to take charge of the implementation of some of these regulations,” said Oduor Lungati, adding that educating people about their rights and about what their data will be used for is important to enable them to decide whether to opt in or out. She said it is equally important for the issues that breed mistrust in AI systems to be clearly identified before any efforts are made to resolve them.