
Six key considerations for ethical AI in financial services

By Matt Peake, Global Director of Public Policy at Onfido

Biometrics have quickly become embedded in our everyday lives, particularly facial biometric technology, which is commonly used to unlock smartphones and access online applications. With 8 out of 10 people finding biometrics both secure and convenient, the technology is seeing widespread adoption across financial services.

Biometric verification is powered by artificial intelligence (AI) systems which rely on models trained with data. This enables them to recognize, categorize and classify facial images very quickly and accurately. With 68 percent of large companies in the UK having adopted at least one AI application, the technology has real consequences for real people and therefore must be built properly.

For this reason, it has to be subject to ethical parameters as it is developed and implemented. Within financial services, this is particularly important given banks and payment service providers are the gateway to financial inclusion and services based on trust.

There are six key considerations typically associated with ethical AI: fairness and bias, trust and transparency, privacy, accountability, security, and social benefit. If just one of these fails, it can have serious consequences for individuals and businesses. This includes financial exclusion, delayed innovation and growth, and lack of regulatory compliance.

Delaying or ignoring the issue, or passing the responsibility to engineering, compliance or legal teams, is no longer an option. Leaders within organizations, no matter the department, must take an active role in addressing the flaws in their applications and be accountable for the performance of the AI they deploy.

Why is ethical AI so important?

AI is used across multiple functions of finance from fraud detection and risk management to credit ratings, and so plays an essential part in the processes that underpin everyday life. If AI is not ethical, it damages trust in the system and erodes the value of financial services.

Currently, when issues with AI automation arise, human intervention is often the solution. But a manual fallback isn’t always the best answer, as humans are prone to systemic bias. It is well documented that facial recognition systems can perform less accurately for people from ethnically diverse backgrounds. This can lead to the development of sub-optimal products, increased difficulty expanding to global markets, and an inability to comply with regulatory standards.

Where discrimination occurs, the consequences can be severe and include alienation from essential services. This is why Onfido takes a proactive stance to reduce bias, having published guidance on defining, measuring, and mitigating biometric bias, and having participated in the UK Information Commissioner’s Office sandbox, which published a report pioneering research into data protection concerns associated with AI bias.

Elsewhere, ethical AI is at the heart of regulation. The UK’s AI governance framework and the EU’s AI Act outline how trust should be at the center of how businesses develop and use AI. Not only will following the considerations of ethical AI be a requirement for financial services, but it will be central to future growth. There is also an ongoing requirement for compliance with Anti-Money Laundering (AML) and Know Your Customer (KYC) regulations, holding financial institutions accountable for how they verify customers’ identities. By investing in ethical AI, financial services firms can improve the accuracy and reliability of their KYC processes and reduce false acceptance and false rejection rates across the board.
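As a minimal sketch of what "reducing false acceptance and rejection rates" means in practice, the snippet below computes the false acceptance rate (FAR) and false rejection rate (FRR) of a verification system per demographic group, which is one common way to quantify the kind of bias discussed above. The data, group labels, and function names here are illustrative assumptions, not Onfido's actual methodology.

```python
# Illustrative sketch: per-group false acceptance rate (FAR) and
# false rejection rate (FRR) for a face-verification system.
# All data and labels below are hypothetical.

from collections import defaultdict

def error_rates(results):
    """results: iterable of (group, is_genuine, accepted) tuples.
    Returns {group: (FAR, FRR)} where
      FAR = impostor attempts wrongly accepted / all impostor attempts,
      FRR = genuine attempts wrongly rejected / all genuine attempts."""
    # Per group: [impostor_total, impostor_accepted, genuine_total, genuine_rejected]
    counts = defaultdict(lambda: [0, 0, 0, 0])
    for group, is_genuine, accepted in results:
        c = counts[group]
        if is_genuine:
            c[2] += 1
            if not accepted:
                c[3] += 1  # genuine user wrongly rejected
        else:
            c[0] += 1
            if accepted:
                c[1] += 1  # impostor wrongly accepted
    return {g: (c[1] / c[0] if c[0] else 0.0,
                c[3] / c[2] if c[2] else 0.0)
            for g, c in counts.items()}

# Hypothetical verification attempts: (group, genuine user?, accepted?)
attempts = [
    ("A", True, True), ("A", True, True), ("A", True, False), ("A", False, False),
    ("B", True, True), ("B", True, False), ("B", False, True), ("B", False, False),
]
rates = error_rates(attempts)
# Group A: FAR 0/1, FRR 1/3; Group B: FAR 1/2, FRR 1/2
```

A large gap between groups' error rates (as between A and B here) is exactly the kind of disparity that bias measurement and mitigation work aims to surface and close.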

Implementing ethical AI

There’s no doubt that ethical AI is an evolving challenge that requires financial services to stay on top of their applications as new use-cases emerge and deployment grows.

Developing and deploying ethical AI should be a company-wide initiative. It requires a top-down commitment to ensure ethical practices are embedded into every stage of application development and implementation. Without such an approach, it can be all too easy to fall behind on the challenges of developing and maintaining ethical AI and encounter issues that could otherwise have been prevented. To achieve optimal outcomes, businesses must bring teams together to identify problems, define and formulate solutions, implement them, and then track and monitor their progress.

Executive teams must understand the risks of developing AI that is not ethical and the long-term financial and reputational repercussions it could have. But they must also recognize that ethical AI is the gateway to innovation, driving accurate and efficient financial services that can lead to positive social outcomes – for the benefit of all customers, no matter who or where they are.

The impact of ethical AI

By following the six considerations of ethics, financial services firms can help meet their regulatory obligations, build fair, transparent and secure systems, and demonstrate their ongoing commitment to protecting their customers.

Failure to address ethical considerations, however, risks causing long-term issues. It can lead to products and services that exclude customers, and may ultimately result in non-compliance with regulations. Embedding ethical considerations into AI development and implementation will ensure that customers are treated fairly, while financial services firms protect and improve their brand reputation and build trust with their customers. When creating and using AI technologies, we must ensure that they operate fairly for all individuals and that privacy is respected and upheld.

About the author

Matt Peake is the Global Director of Public Policy at Onfido. He has nearly 20 years of experience in public policy roles in telecoms and technology. Prior to Onfido, he spent over 10 years as Head of Policy for UK and Ireland at Verizon, the US tech giant, overseeing policy across a range of areas including digital competition, cyber security and privacy. Matt holds a law degree (UEA), MBA (Henley Business School), post-graduate diploma in Competition Law (Kings College) and diploma in business international relations and the political economy (London School of Economics).

DISCLAIMER: Biometric Update’s Industry Insights are submitted content. The views expressed in this post are that of the author, and don’t necessarily reflect the views of Biometric Update.
