ChatGPT: You’re not ready for the new wave of cyberattacks
By Philipp Pointner, Chief of Digital Identity at Jumio
Although it launched only a few months ago, the AI chatbot phenomenon ChatGPT has completely taken over the internet. Its ability to quickly answer queries, assist with tasks and generate content has made it a useful resource for students, employees and countless other users. The fact that anyone can open a free account in seconds, combined with its ease of use, has made it an even more attractive tool.
Unfortunately, those obvious benefits aside, ChatGPT is also serving as a powerful weapon for creating malicious content at scale, taking cybercrime to a whole new level. Researchers have already found cases of cybercriminals working around ChatGPT’s anti-abuse restrictions to generate or refine malicious code. The tool can also produce highly convincing dialogue for deepfake videos within seconds, allowing bad actors to spread disinformation faster than ever.
As cybercriminals continue finding new ways to use ChatGPT maliciously, organizations and consumers must be better equipped to defend themselves.
Prepare for chatbot cyberattacks
Perhaps one of generative AI’s biggest threats is the newfound ability to spread disinformation rapidly and at scale. In January 2023, researchers predicted that generative AI would make disinformation cheaper and easier to produce for an even wider pool of propagandists and conspiracy theorists. That reality is far closer than we might think; in fact, it’s already here. There have been multiple reported cases of nation-state actors circulating AI-generated videos of political figures to spread disinformation, and it’s becoming increasingly difficult to distinguish what’s real from what isn’t. With ChatGPT and other synthetic media generators freely available at our fingertips, anyone with internet access can now launch cyberattacks and spread disinformation.
Furthermore, generative technology is evolving quickly and may soon be able to connect directly to the internet. That would give chatbots access to live information in real time, further improving their accuracy and making them even more sophisticated than they already are. In the wrong hands, there’s no telling what kind of propaganda could be spread across media channels, from news outlets to social media platforms.
Many of us have already been on the receiving end of phishing text messages in which a “boss” or “friend” asks us to wire money or buy gift cards, except it isn’t really someone we know on the other end. Until now, these phishing texts have largely been written by real humans pretending to be someone they’re not. With generative technology, however, a single person can send such texts at mass scale and recruit bots to keep each conversation going.
Ditch CAPTCHA. New bot detection tools are needed
With research showing that ChatGPT-class models can trick humans into solving CAPTCHAs on their behalf, defeating the very tool meant to keep bots out, it’s clear that online organizations must evolve beyond these outdated methods.
Enterprises have a responsibility to keep their business and their customers safe and secure, but the bot detection methods of the past are no match for the new wave of bots and fraud. What’s needed are sophisticated identity verification tools that can confirm that everyone creating an account, and subsequently signing into that account, is a real person behind the screen.
These types of solutions might include liveness detection, where a secure algorithm determines whether a biometric sample comes from a live human being or from a fake representation such as a photo, replayed video or mask. Another identity tool that organizations can deploy is document-centric identity proofing, where a government-issued ID is compared against a real-time selfie to verify that a user is who they say they are. These modern digital identity technologies can give businesses confidence that their users are real humans who are indeed who they claim to be, while helping shield consumers from disinformation and fraud.
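To make the document-centric approach concrete, here is a minimal sketch of the core face-matching step, built on the open-source face_recognition library. The file paths and the match threshold are illustrative assumptions, not any vendor’s implementation; a production system would layer liveness detection, document authenticity checks and anti-spoofing on top before trusting a match.

```python
# Minimal sketch of document-centric identity proofing: compare the portrait
# on a government-issued ID against a freshly captured selfie.
# Assumptions: the face_recognition library is installed, and the two
# hypothetical image files below exist on disk.
import face_recognition

ID_PHOTO = "id_document.jpg"   # hypothetical path: portrait cropped from the ID
SELFIE = "selfie.jpg"          # hypothetical path: real-time selfie capture
MATCH_THRESHOLD = 0.6          # library's default tolerance; tune per risk appetite


def verify_identity(id_photo_path: str, selfie_path: str) -> bool:
    """Return True if the selfie plausibly matches the ID portrait."""
    id_image = face_recognition.load_image_file(id_photo_path)
    selfie_image = face_recognition.load_image_file(selfie_path)

    # Each call returns a list of 128-dimensional face encodings,
    # one per face detected in the image.
    id_faces = face_recognition.face_encodings(id_image)
    selfie_faces = face_recognition.face_encodings(selfie_image)

    # Zero or multiple faces in either image is a red flag; fail closed.
    if len(id_faces) != 1 or len(selfie_faces) != 1:
        return False

    # Smaller distance means more similar faces.
    distance = face_recognition.face_distance([id_faces[0]], selfie_faces[0])[0]
    return distance <= MATCH_THRESHOLD


if __name__ == "__main__":
    print("match" if verify_identity(ID_PHOTO, SELFIE) else "no match")
```

The threshold illustrates the central design trade-off in any such system: loosening it reduces friction for legitimate users but admits more impostors, while tightening it does the reverse, which is why commercial solutions calibrate it against the fraud risk of each use case.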
Assume bot until proven human
As consumers, we also have an obligation to be extra cautious with any content we come across online, even when it appears to be from a reputable source. We shouldn’t believe every piece of content we see, and we should dig further into any news that seems even remotely suspicious. On social media, it would be wise to assume an account is a bot until it is proven to be human-operated, especially when it posts questionable content.
While generative AI has evident advantages, it also poses significant societal risks. But if organizations deploy modern verification tools and consumers err on the side of caution with the content they encounter online, chatbot-powered cyberattacks won’t stand a chance.
About the author
As Chief of Digital Identity, Philipp leads Jumio’s digital identity strategy and the initiative to enable multiple digital identity providers in its ecosystem. Prior to Jumio, Philipp was responsible for paysafecard, Europe’s most popular prepaid solution for online purchases.
DISCLAIMER: Biometric Update’s Industry Insights are submitted content. The views expressed in this post are those of the author, and don’t necessarily reflect the views of Biometric Update.