AI’s role in combating the other AI: Artificial identities

By Arun Shrestha, CEO and co-founder of BeyondID
Earlier this year, an employee in the finance department of a multinational company based in Hong Kong was scammed by bad actors who digitally recreated the company's chief financial officer. The deepfake CFO, built from publicly available video footage, ordered money transfers during a Zoom call with multiple participants. It turns out the only real person on the call was the employee, yet the fake was so believable that they followed the "CFO's" instructions and complied with the request. The company lost HK$200 million (US$25.6 million) across 15 separate wire transfers. Whether they're called deepfakes, shallow or cheap fakes, or synthetically generated media, these attacks all have one element in common.
Artificial identities.
Thanks to advances in AI, criminals have become ever more creative in using identity manipulation to facilitate scams.
At the core of these threats lie artificial identities, created and manipulated using sophisticated algorithms. As our digital footprints expand, so does the risk that our identities will be stolen and used to mount cyberattacks. Traditional methods of authentication, such as passwords and security questions, are no longer sufficient to protect individuals and organizations from the ever-growing sophistication of bad actors.
But the same technology that facilitates these threats also holds the key to minimizing their impact. AI, with its capacity for learning patterns, analyzing vast datasets, and detecting anomalies, emerges as a potent ally in the fight against deepfakes and security breaches. Future developments in AI are poised to revolutionize the cybersecurity landscape by creating proactive, adaptive identity defense mechanisms.
One promising development is the application of AI in digital forensics. Advanced algorithms can scrutinize multimedia content to identify inconsistencies, artifacts, or other telltale signs of manipulation. By analyzing these subtle nuances in facial expressions, voice patterns, and contextual cues, AI can distinguish between real and fake content, providing a powerful tool to verify the authenticity of digital media.
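To make the idea concrete, here is a minimal sketch of one such forensic signal: temporal consistency between video frames. It assumes per-frame face embeddings from some upstream analysis model are already available; the function names and threshold are illustrative assumptions, not any production detector.

```python
# Minimal sketch of one forensic signal: temporal consistency between
# video frames. Real deepfake detectors combine many such signals
# (facial landmarks, blink rate, audio-visual sync); this is illustrative only.
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def temporal_consistency_score(frame_embeddings: list[np.ndarray]) -> float:
    """Average similarity between consecutive frame embeddings.

    Genuine footage tends to change smoothly from frame to frame; synthetic
    face swaps often show abrupt jumps the generator cannot fully hide.
    """
    if len(frame_embeddings) < 2:
        return 1.0  # too little data to judge; treat as consistent
    sims = [
        cosine_similarity(frame_embeddings[i], frame_embeddings[i + 1])
        for i in range(len(frame_embeddings) - 1)
    ]
    return float(np.mean(sims))

def looks_manipulated(frame_embeddings: list[np.ndarray],
                      threshold: float = 0.9) -> bool:
    # The threshold is a placeholder; in practice it would be tuned on
    # labeled real and fake footage.
    return temporal_consistency_score(frame_embeddings) < threshold
```

In practice a detector would fuse several signals like this one, since any single cue can be defeated by a sufficiently careful forgery.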
While we wait for some of these technologies to be fully developed, fighting back against AI-driven, identity-focused attacks means strengthening our identity solutions with AI while advocating for employee education.
Having a strong identity-centric security strategy in place can greatly hinder bad actors. If your door is locked tight, attackers are more likely to try someone else’s. Lock down your organization with AI using:
- Identity-First Zero Trust. This approach to identity authentication is made better, faster and smarter by AI. Every service request is verified and authenticated, and AI watches for indications that a user may not be legitimate (a simplified sketch of this kind of risk scoring follows this list).
- Proactive Policies. Tools like multifactor authentication (MFA) make logging in with stolen credentials much more difficult. If users are who they say they are, logging in with MFA is a breeze; someone posing as them likely won't have access to the range of proof MFA asks for.
- Language Models. Attackers use language models to map out an organization’s security perimeter and identify weak points. We can use that same process to identify those weaknesses first and make sure hackers aren’t able to exploit them.
- Passwordless Authentication. Passwords are guessable with AI, but passwordless authentication requires proof only you can provide. This makes passwordless log-in incredibly difficult to bypass.
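To illustrate how AI-assisted, identity-first zero trust might evaluate a request, here is a minimal sketch of risk-based step-up authentication. The signals, weights, and thresholds are hypothetical assumptions chosen for illustration; a real deployment would derive them from login telemetry and its policy engine.

```python
# Illustrative sketch of risk-based step-up authentication in a
# zero-trust flow: every request is scored, and anomalous requests
# are challenged with MFA or denied outright. Signals and weights
# are hypothetical, not any vendor's API.
from dataclasses import dataclass

@dataclass
class RequestContext:
    known_device: bool
    usual_location: bool
    usual_hours: bool
    impossible_travel: bool  # e.g., logins from distant locations minutes apart

def risk_score(ctx: RequestContext) -> float:
    score = 0.0
    if not ctx.known_device:
        score += 0.3
    if not ctx.usual_location:
        score += 0.2
    if not ctx.usual_hours:
        score += 0.1
    if ctx.impossible_travel:
        score += 0.6
    return score

def decide(ctx: RequestContext) -> str:
    score = risk_score(ctx)
    if score >= 0.6:
        return "deny"          # too risky: block and alert
    if score >= 0.2:
        return "step_up_mfa"   # challenge with a second factor
    return "allow"             # low risk: proceed

# Example: a request from an unrecognized device outside business hours
print(decide(RequestContext(known_device=False, usual_location=True,
                            usual_hours=False, impossible_travel=False)))
# -> "step_up_mfa"
```

The point of the sketch is the decision shape, not the numbers: legitimate users sail through most of the time, while stolen credentials used from an unfamiliar context hit a second factor they cannot satisfy.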
Researchers at Stanford University attribute 88% of organizational breaches to human error. Since humans have long been considered the security perimeter's weakest link, this is no surprise. The only way to mitigate the risk of targeted phishing attacks against your employees is through education. Consider:
- Implementing a robust cybersecurity policy.
- Making cybersecurity a part of your company culture and an ongoing conversation.
- Investing in a cybersecurity training program and holding periodic training sessions for employees. During these lessons, you can train your staff to:
  - Spot suspicious activity
  - Respond appropriately to suspicious activities and requests
  - Practice device safety
  - Maintain a high level of confidentiality
Remember, education is key to ensuring employees don’t click that link, transfer those funds or buy those gift cards. Never trust, always verify. And always be wary of urgent requests.
About the author
Arun Shrestha has 20+ years of experience building and leading enterprise software and services companies and is committed to building a world-class identity services organization. Prior to co-founding BeyondID, Arun held executive positions at Oracle, Sun Microsystems, SeeBeyond and, most recently, Okta, where he was responsible for building a world-class services and customer success organization.
DISCLAIMER: Biometric Update's Industry Insights are submitted content. The views expressed in this post are those of the author, and don't necessarily reflect the views of Biometric Update.