Six key considerations for ethical AI in financial services

By Matt Peake, Global Director of Public Policy at Onfido

Biometrics has quickly become embedded in our everyday lives, particularly facial biometric technology, which is commonly used to unlock smartphones and access online applications. With 8 out of 10 people finding biometrics both secure and convenient, the technology is seeing widespread adoption across financial services.

Biometric verification is powered by artificial intelligence (AI) systems that rely on models trained on data, enabling them to recognize, categorize and classify facial images quickly and accurately. With 68 percent of large companies in the UK having adopted at least one AI application, the technology has real consequences for real people and must therefore be built properly.
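
As a rough illustration of what such a system does, face verification models typically reduce each image to a numerical representation (an embedding) and compare the two. The sketch below is a generic, simplified example with made-up vectors and a hypothetical threshold; it is not a description of any particular vendor's model.

```python
# Generic illustration of embedding-based face verification. The vectors and
# the 0.8 threshold are made up; real systems derive embeddings from a trained
# neural network and tune the decision threshold on evaluation data.
import math

def cosine_similarity(a, b):
    """Similarity between two embedding vectors, in [-1, 1]."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Hypothetical embeddings produced by a face-recognition model for
# an ID-document photo and a live selfie of the same person.
id_photo_embedding = [0.12, 0.87, 0.33, 0.45]
selfie_embedding = [0.10, 0.90, 0.30, 0.50]

MATCH_THRESHOLD = 0.8  # hypothetical decision threshold

score = cosine_similarity(id_photo_embedding, selfie_embedding)
print(f"similarity={score:.2f}, match={score >= MATCH_THRESHOLD}")
```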

For this reason, AI has to be subject to ethical parameters as it is developed and implemented. Within financial services this is particularly important, given that banks and payment service providers are the gateway to financial inclusion and to services built on trust.

There are six key considerations typically associated with ethical AI: fairness and bias, trust and transparency, privacy, accountability, security, and social benefit. If just one of these fails, it can have serious consequences for individuals and businesses, including financial exclusion, delayed innovation and growth, and a lack of regulatory compliance.

Delaying or ignoring the issue, or passing responsibility to engineering, compliance or legal teams, is no longer an option. Leaders within organizations, no matter the department, must take an active role in addressing the flaws in their applications and be accountable for the performance of the AI they deploy.

Why is ethical AI so important?

AI is used across multiple functions of finance, from fraud detection and risk management to credit ratings, and so plays an essential part in the processes that underpin everyday life. If AI is not ethical, it damages trust in the system and erodes the value of financial services.

Currently, when issues with AI automation arise, human intervention is often the solution. But a manual fallback isn’t always the best answer, as humans themselves are prone to systemic bias. It is commonly understood that bias exists in systems that seek to distinguish the faces of people from ethnically diverse backgrounds. Such bias can lead to the development of sub-optimal products, increased difficulty expanding to global markets, and an inability to comply with regulatory standards.

Where discrimination occurs, the consequences can be severe and include alienation from essential services. This is why Onfido takes a proactive stance on reducing bias, having published guidance on defining, measuring, and mitigating biometric bias, and having participated in the UK Information Commissioner’s Office sandbox to pioneer research into the data protection concerns associated with AI bias, the findings of which the ICO published in its report.
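
To make the idea of measuring bias concrete, here is a minimal sketch of one common approach: comparing false rejection rates across demographic groups. The data, group labels and threshold below are entirely hypothetical and are not drawn from Onfido’s guidance or any real system; they simply illustrate how such a gap can be quantified.

```python
# Hypothetical sketch of measuring bias as a gap in error rates between groups.
# The records, group labels and 0.5 threshold are invented for illustration.
from collections import defaultdict

# Each record: (demographic_group, match_score) for a genuine user's
# verification attempt (selfie vs. ID-document photo).
genuine_attempts = [
    ("group_a", 0.92), ("group_a", 0.47), ("group_a", 0.88),
    ("group_b", 0.81), ("group_b", 0.42), ("group_b", 0.39),
]

THRESHOLD = 0.5  # scores below this are rejected

attempts = defaultdict(int)
rejections = defaultdict(int)
for group, score in genuine_attempts:
    attempts[group] += 1
    if score < THRESHOLD:
        rejections[group] += 1  # a genuine user wrongly rejected

for group in sorted(attempts):
    frr = rejections[group] / attempts[group]
    print(f"{group}: false rejection rate = {frr:.0%}")

# A persistent gap between groups' false rejection rates is one simple,
# measurable signal of bias that teams can track and work to close.
```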

Elsewhere, ethical AI is at the heart of regulation. The UK’s AI governance proposals and the EU’s AI Act place trust at the center of how businesses develop and use AI. Not only will financial services firms be required to follow the considerations of ethical AI, but doing so will be central to future growth. There is also an ongoing requirement for compliance with Anti-Money Laundering (AML) and Know Your Customer (KYC) regulations, which hold financial institutions accountable for how they verify customers’ identities. By investing in ethical AI, financial services firms can improve the accuracy and reliability of their KYC processes and reduce both false acceptance and false rejection rates across the board.
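
To illustrate the trade-off behind those error rates, the short sketch below shows how false acceptance and false rejection rates both move as the match threshold changes. The scores and thresholds are hypothetical and do not describe any particular verification system.

```python
# Hypothetical sketch: scores and thresholds are invented for illustration and
# do not represent any real verification system's performance.

# Similarity scores from genuine users (should be accepted) and
# impostors (should be rejected).
genuine_scores = [0.91, 0.85, 0.78, 0.66, 0.58, 0.49]
impostor_scores = [0.62, 0.41, 0.35, 0.28, 0.22, 0.15]

def rates_at(threshold):
    """Return (false_acceptance_rate, false_rejection_rate) at a threshold."""
    far = sum(s >= threshold for s in impostor_scores) / len(impostor_scores)
    frr = sum(s < threshold for s in genuine_scores) / len(genuine_scores)
    return far, frr

for threshold in (0.3, 0.5, 0.7):
    far, frr = rates_at(threshold)
    print(f"threshold={threshold:.1f}  FAR={far:.0%}  FRR={frr:.0%}")

# Raising the threshold cuts false acceptances but raises false rejections,
# which is why improving the underlying model, rather than simply moving the
# threshold, is what genuinely reduces both error rates across the board.
```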

Implementing ethical AI

There’s no doubt that ethical AI is an evolving challenge that requires financial services to stay on top of their applications as new use cases emerge and deployment grows.

Developing and deploying ethical AI should be a company-wide initiative. It requires a top-down commitment to ensure ethical practices are embedded into every stage of application development and implementation. Without such an approach, it can be all too easy to fall behind on the challenges of developing and maintaining ethical AI and encounter issues that could otherwise have been prevented. To achieve optimal outcomes, businesses must bring teams together to identify problems, define and formulate solutions, implement them, and then track and monitor their progress.

Executive teams must understand the risks of developing AI that is not ethical and the long-term financial and reputational repercussions it could have. But they must also recognize that ethical AI is the gateway to innovation, driving accurate and efficient financial services that can lead to positive social outcomes – for the benefit of all customers, no matter who or where they are.

The impact of ethical AI

By following the six considerations of ethical AI, financial services firms can help meet their regulatory obligations, build fair, transparent and secure systems, and demonstrate their ongoing commitment to protecting their customers.

Failure to address ethical considerations, however, risks long-term issues. It can lead to products and services that exclude customers, and may ultimately result in non-compliance with regulations. Embedding ethical considerations into AI development and implementation will ensure that customers are treated fairly, while allowing financial services firms to protect and improve their brand reputation and build trust with their customers. When creating and using AI technologies, we must ensure that they operate fairly for all individuals and that privacy is respected and upheld.

About the author

Matt Peake is the Global Director of Public Policy at Onfido. He has nearly 20 years of experience in public policy roles in telecoms and technology. Prior to Onfido, he spent over 10 years as Head of Policy for UK and Ireland at Verizon, the US telecoms giant, overseeing policy across a range of areas including digital competition, cyber security and privacy. Matt holds a law degree (UEA), an MBA (Henley Business School), a post-graduate diploma in Competition Law (King’s College) and a diploma in business international relations and the political economy (London School of Economics).

DISCLAIMER: Biometric Update’s Industry Insights are submitted content. The views expressed in this post are those of the author, and don’t necessarily reflect the views of Biometric Update.
