Facial recognition: The tool of despair and hope
By Neil Sahota (萨冠军), an IBM Master Inventor, UN AI advisor, and Professor at UC Irvine
Every year, about 250,000 Americans find that their medical identity has been stolen. Medical identity theft occurs when personal information like your name, Social Security number, or Medicare number is stolen and used to submit fraudulent claims on your behalf (according to the Federal Trade Commission). Sadly, the average victim in the U.S. lost over $13,500 to fraudulent healthcare bills. Even worse, our medical information is enough for a thief to expand into full-scale identity theft, which can compromise your personal information and banking and even lead to complete replication of your identity. As a result, many healthcare providers are turning to biometric solutions that validate an individual’s identity based on physical characteristics, such as facial recognition, to reduce fraud, protect patient data, and safeguard their operations.
The healthcare biometrics market is estimated to reach $79 billion by 2030. These rapid shifts in market dynamics, along with the diagnostic and operational opportunities the technology opens up, are driving the emphasis on facial recognition. Consider that Apple benchmarks Face ID as twenty times more secure than Touch ID. Our faces are far more distinctive and difficult to replicate than a fingerprint, in large part because they offer thousands more data points for machines to analyze. Ultimately, this can help reduce medical identity theft and, more importantly, improve the quality of care.
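To make the idea of facial “data points” concrete, here is a minimal, illustrative sketch of one common approach: a model converts each face into a numeric embedding, and two embeddings are compared against a similarity threshold. This is not Apple’s actual Face ID pipeline; the function names, vector size, and threshold are assumptions for demonstration only.

```python
# Minimal sketch of 1:1 face verification via embedding comparison.
# Assumes a face-embedding model has already converted each face image into a
# fixed-length vector; the 512-dimension size and 0.6 threshold are illustrative.
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Score how alike two face embeddings are (1.0 = identical direction)."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def verify_patient(enrolled_embedding: np.ndarray,
                   live_embedding: np.ndarray,
                   threshold: float = 0.6) -> bool:
    """Return True if the live capture likely matches the enrolled patient."""
    return cosine_similarity(enrolled_embedding, live_embedding) >= threshold

# Toy usage with random vectors standing in for real model output.
rng = np.random.default_rng(0)
enrolled = rng.normal(size=512)
live = enrolled + rng.normal(scale=0.1, size=512)  # same person, slight variation
print(verify_patient(enrolled, live))  # True for a close match
```

The appeal over a stolen password or Medicare number is that the “credential” here is derived from the patient’s face at the point of care, which is much harder for a fraudster to present.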
Furthermore, using visual recognition to assist healthcare practitioners gives them a set of “machine eyeballs” that can spot even minuscule signs early. For example, dentists leverage the technology to determine whether that tiny white spot on your tongue is an accidental bite or the potential development of cancerous cells. Looking at the face as a whole, there are many subtle tell-tale signs of other illnesses that machine learning systems can detect and flag for human review. There’s a big caveat here, though: these tools are only as good as the training we provide. A 2019 study found that a healthcare risk-prediction algorithm used on over 200 million people in the U.S. demonstrated racial bias because it relied on a faulty metric for determining need. If detection fails, the result can be long-term health risks and, depending on the disease, even death.
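As a rough illustration of the “detect and flag for human review” workflow, here is a hedged sketch of how a system might route an imaging model’s risk score: low scores go to routine monitoring, mid-range scores are flagged for a clinician, and high scores are escalated. The thresholds, field names, and routing rules are hypothetical placeholders, not clinical guidance.

```python
# Illustrative human-in-the-loop triage: an imaging model scores a finding,
# and anything above a review threshold is routed to a clinician rather than
# auto-diagnosed. Scores and thresholds here are made up for demonstration.
from dataclasses import dataclass

@dataclass
class Finding:
    image_id: str
    cancer_risk_score: float  # 0.0 - 1.0, produced by an imaging model

def triage(finding: Finding,
           review_threshold: float = 0.10,
           urgent_threshold: float = 0.50) -> str:
    """Route a finding: routine monitoring, human review, or escalation."""
    if finding.cancer_risk_score >= urgent_threshold:
        return "escalate to specialist"
    if finding.cancer_risk_score >= review_threshold:
        return "flag for clinician review"
    return "routine monitoring"

print(triage(Finding("tongue_spot_001", 0.23)))  # "flag for clinician review"
```

The point of the design is that the machine never replaces the practitioner; it only decides which cases deserve a closer human look.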
However, this problem does have a solution: diversity. Already, developers of these AI systems are incorporating diversity in data, perspective, and thought. While there is still a lot of bias-correction work to be done, initial progress has led to improved diagnostics in healthcare, including recognizing how underserved communities manifest illness, both physically and mentally. Moreover, the same approach is being used to train facial recognition systems, turning potential threats into opportunities.
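One concrete way developers check for this kind of bias is to measure error rates separately for each demographic group in a labeled evaluation set. The short sketch below computes a per-group false non-match rate; the groups, sample data, and single metric are simplified assumptions, and real audits rely on curated benchmarks and many more measures.

```python
# Sketch of one routine bias check: measure a face matcher's false non-match
# rate (FNMR) separately for each demographic group in a labeled test set.
from collections import defaultdict

# Each record: (demographic_group, same_person_pair?, matcher_said_match?)
results = [
    ("group_a", True, True), ("group_a", True, False), ("group_a", True, True),
    ("group_b", True, True), ("group_b", True, True), ("group_b", True, True),
]

errors, totals = defaultdict(int), defaultdict(int)
for group, is_same_person, predicted_match in results:
    if is_same_person:            # FNMR only considers genuine (same-person) pairs
        totals[group] += 1
        if not predicted_match:   # matcher wrongly rejected the person
            errors[group] += 1

for group in totals:
    print(group, "FNMR:", errors[group] / totals[group])
# A large gap between groups is the kind of disparity that diverse training
# and evaluation data are meant to reduce.
```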
While many people fear being monitored by retailers, the flip side is that there is great value in looking for specific people. In particular, facial recognition has become a powerful ally in locating missing persons and known criminals. These systems can also scan for variant faces, such as a criminal in disguise or an age-progressed likeness of a child who has been missing for an extended period. For example, think back to when immigration and customs lines could take several hours to pass through. Thanks to visual recognition, many countries have shortened this to a handful of minutes while also improving early detection of bad actors and threats to travelers. For general security, people are adopting facial recognition in lieu of passwords and PINs to use an ATM or unlock their front door.
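Searching for a specific person in a crowd is a one-to-many problem rather than the one-to-one verification sketched earlier. The following illustrative snippet compares a probe face embedding against a small gallery, such as a hypothetical missing-persons watchlist, and returns the closest candidates above a similarity floor; the names, sizes, and threshold are assumptions.

```python
# Illustrative 1:N search over a small gallery of face embeddings.
import numpy as np

def top_matches(probe: np.ndarray, gallery: dict[str, np.ndarray],
                floor: float = 0.5, k: int = 3) -> list[tuple[str, float]]:
    """Return up to k gallery identities most similar to the probe face."""
    scores = []
    for person_id, emb in gallery.items():
        sim = float(np.dot(probe, emb) /
                    (np.linalg.norm(probe) * np.linalg.norm(emb)))
        if sim >= floor:
            scores.append((person_id, sim))
    return sorted(scores, key=lambda s: s[1], reverse=True)[:k]

# Toy usage: random vectors stand in for embeddings from a real model.
rng = np.random.default_rng(1)
gallery = {f"case_{i}": rng.normal(size=128) for i in range(5)}
probe = gallery["case_2"] + rng.normal(scale=0.05, size=128)
print(top_matches(probe, gallery))  # "case_2" should rank first
```

In practice, candidate lists like this are reviewed by investigators rather than acted on automatically, for the bias reasons discussed above.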
Additionally, some organizations are using visual recognition as a force for good, such as identifying victims of child trafficking, domestic violence, or other forms of coercion. Because AI can read facial expressions, body language, and environmental context, these systems can detect such situations more rapidly, even in crowded areas. Moreover, while the platforms used to promote human trafficking excel at misinformation, the images they post are still based on real photographs, which has allowed law enforcement agencies to identify and rescue victims. That’s hope for the nearly 25 million people trafficked worldwide (according to the U.S. State Department).
Facial recognition is by no means perfect, and we still have great strides to make to reduce implicit bias against certain genders and ethnicities. However, there is also a great opportunity to use this technology to help these communities.
For example, consider the mortgage lending industry. There is hard evidence of material discrimination against people of color, both in qualifying for a loan and in the rates they are offered. In an FSIC white paper, the authors outline the challenge but also show how AI underwriting algorithms have helped people of color by assessing them without this implicit bias. The results show people of color qualifying at a higher percentage and for lower rates. Likewise, we are just starting to see similar results in providing better individual medical care and, to a degree, improved personal safety.
Encouragingly, this is just the beginning. Biometric technology like visual recognition holds great promise, but only if we focus our efforts on helping the human race as a whole, which means including everyone in every community. We must build diverse teams that span the socio-economic spectrum so that the greatest number of people can benefit from this technology. Organizations have taken the first steps, and we must continue to build and support this mindset to reap the true benefits.
About the author
Neil Sahota (萨冠军) is an IBM Master Inventor, United Nations (UN) Artificial Intelligence (AI) Advisor, Faculty at UC Irvine, and author of Own the A.I. Revolution. With 20+ years of business experience, he works with organizations to create next generation products/solutions powered by emerging technology. His work experience spans multiple industries including legal services, healthcare, life sciences, retail, travel and transportation, energy and utilities, automotive, telecommunications, media/communication, and government. Moreover, Neil is one of the few people selected for IBM’s Corporate Service Corps leadership program that pairs leaders with NGOs to perform community-driven economic development projects. For his assignment, Neil lived and worked in Ningbo, China where he partnered with Chinese corporate CEOs to create a leadership development program.
In addition, Neil partners with entrepreneurs to define their products, establish their target markets, and structure their companies. He is a member of several investor groups like the Tech Coast Angels, advises venture capital funds like Miramar, and assists startups with investor funding. Neil also serves as a judge in various startup competitions and mentor in several incubator/accelerator programs. He actively pursues social good and volunteers with nonprofits. He is currently helping the Zero Abuse Project prevent child sexual abuse as well as Planet Home to engage youth culture in sustainability initiatives.
DISCLAIMER: Biometric Update’s Industry Insights are submitted content. The views expressed in this post are those of the author and don’t necessarily reflect the views of Biometric Update.