The impact of AI on ID verification jobs: Who should be concerned?
By Ihar Kliashchou, Chief Technology Officer at Regula
In recent news, tech leaders and experts in artificial intelligence (AI) have called for a six-month pause in developing systems more powerful than OpenAI’s language model, GPT-4. This call has been made not because they fear a dystopian future where robots take over humanity, but because of the potential risks these advanced systems pose to society. Among those risks are job displacement, the perpetuation of existing biases and harmful societal structures, the lack of transparency and interpretability of advanced AI systems, and their potential use for malicious purposes.
Job displacement is indeed a legitimate concern. As advanced AI and machine learning systems become more capable, they are increasingly able to perform tasks previously handled by humans, potentially leading to job losses and mass unemployment in certain industries. For example, advances in robotics and automation have already displaced jobs in manufacturing and logistics. Similarly, the development of natural language processing (NLP) systems and chatbots could lead to job losses in customer service and support roles.
Is this also the case for tasks related to identity verification (IDV)? What we see now is that AI-powered identity verification systems can be more efficient and accurate than human verification, which puts workers in this field at risk of displacement.
Some common IDV tasks that AI can perform more quickly and accurately than humans include:
- Document verification
- Facial recognition
- Biometric verification (fingerprints, iris scans, or voice)
- Behavior analysis (an individual’s behavior patterns, such as the way they interact with their device or type on a keyboard)
- Risk assessment (analyzing factors such as the user’s location, device, and behavior; a scoring sketch follows this list)
- Fraud detection (analyzing patterns and anomalies in data)
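To make the risk assessment item concrete, here is a minimal sketch of how a rule-based system might combine such signals into a single score. The signal names, weights, and thresholds are illustrative assumptions, not Regula’s implementation or any real product’s logic.

```python
# Minimal illustrative sketch of rule-based risk scoring for an IDV check.
# All signal names, weights, and thresholds are hypothetical.

from dataclasses import dataclass

@dataclass
class VerificationSignals:
    document_score: float    # 0..1 from document authenticity checks
    face_match_score: float  # 0..1 similarity between selfie and ID photo
    device_trusted: bool     # device previously seen for this user
    geo_mismatch: bool       # IP location far from the declared address

def risk_score(s: VerificationSignals) -> float:
    """Combine independent signals into a single 0..1 risk score."""
    risk = 0.0
    risk += 0.4 * (1.0 - s.document_score)    # weak document checks raise risk
    risk += 0.4 * (1.0 - s.face_match_score)  # weak face match raises risk
    risk += 0.1 * (0.0 if s.device_trusted else 1.0)
    risk += 0.1 * (1.0 if s.geo_mismatch else 0.0)
    return risk

signals = VerificationSignals(0.97, 0.91, device_trusted=True, geo_mismatch=False)
print(f"risk: {risk_score(signals):.2f}")  # low risk -> candidate for auto-approval
```

In production systems, such hand-tuned weights are typically replaced by a trained model, but the idea of fusing several independent signals into one decision is the same.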
While AI and machine learning systems can automate many aspects of identity verification, some tasks still require human involvement. These systems learn from the data they are trained on, which can be incomplete, biased, or not representative of the real world. They may also encounter new or unexpected situations they have not been trained on, leading to errors or inaccuracies in their predictions or decisions. In the context of identity verification, a machine learning system may make an incorrect decision, such as failing to identify a fraudulent ID or incorrectly flagging a legitimate ID as suspicious. In these cases, human oversight and intervention help correct errors and keep the verification process accurate and reliable.
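One common pattern for such oversight is to let the system act automatically only when the model is confident, and to escalate everything ambiguous to a human reviewer. The sketch below assumes a model that outputs a fraud probability; the thresholds and routing labels are hypothetical, not a real product’s API.

```python
# Hypothetical human-in-the-loop routing: automated decisions are accepted
# only when the model is confident; borderline cases go to a human reviewer.
# Thresholds and labels are illustrative assumptions.

def route_decision(fraud_probability: float,
                   approve_below: float = 0.05,
                   reject_above: float = 0.95) -> str:
    """Return the next step for a verification case based on model output."""
    if fraud_probability < approve_below:
        return "auto_approve"   # model is confident the ID is genuine
    if fraud_probability > reject_above:
        return "auto_reject"    # model is confident the ID is fraudulent
    return "human_review"       # ambiguous case: escalate to an operator

for p in (0.01, 0.50, 0.99):
    print(p, "->", route_decision(p))
```

Tightening or loosening the thresholds directly trades automation rate against the share of cases a human must review, which is exactly where these jobs shift from performing checks to supervising them.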
Additionally, regulations and standards must be in place to ensure that AI is used ethically and transparently. This includes protecting the privacy and security of personal data, and ensuring that AI is not used to discriminate against individuals or groups. For example, if an AI system used for identity verification is trained to identify suspicious behavior based on historical data, it may unfairly flag certain individuals or groups based on factors such as race, ethnicity, or gender, even if they pose no actual threat.
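A simple way to surface this kind of unfairness is to audit decision logs and compare flag rates across groups, a check related to the demographic parity criterion. The sketch below uses made-up data and group labels purely for illustration.

```python
# Illustrative fairness audit: compare the rate at which an IDV model flags
# users as "suspicious" across demographic groups (demographic parity check).
# The decision log and group labels are fabricated for this example.

from collections import defaultdict

decisions = [  # (group, was_flagged) pairs from a hypothetical audit log
    ("group_a", True), ("group_a", False), ("group_a", False), ("group_a", False),
    ("group_b", True), ("group_b", True), ("group_b", False), ("group_b", False),
]

flags = defaultdict(list)
for group, flagged in decisions:
    flags[group].append(flagged)

rates = {g: sum(v) / len(v) for g, v in flags.items()}
print(rates)  # {'group_a': 0.25, 'group_b': 0.5}

# A large gap in flag rates between groups warrants investigating the
# training data and features before the model is trusted in production.
disparity = max(rates.values()) - min(rates.values())
print(f"flag-rate disparity: {disparity:.2f}")
```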
Europe is among the first movers toward regulating AI based on its potential to cause harm. The European Parliament is set to finalize its position on the AI Act, aimed at regulating the use of AI, by the end of April, after which it will enter into negotiations with the EU Council and Commission. Meanwhile, the Australian Human Rights Commission has also proposed a new national human rights law to reduce potential discrimination caused by AI. Both Europe and Australia are thus taking proactive steps to ensure that AI is used ethically and transparently.
In light of these concerns, the recent call for a pause in developing systems more powerful than GPT-4 is a welcome and understandable development. By carefully considering the potential risks and benefits of AI-powered systems, we can work to maximize their benefits for society while minimizing the potential harm to workers across industries and to society in general.
About the author
Ihar Kliashchou is the Chief Technology Officer at Regula.
DISCLAIMER: Biometric Update’s Industry Insights are submitted content. The views expressed in this post are those of the author, and don’t necessarily reflect the views of Biometric Update.