
RAND warns of hostile use of AI deepfakes, risks to privacy, democracy

Of the many risks explored in a new RAND Europe report, one of the most pressing is the use and manipulation of AI deepfakes by rogue states and other bad actors. The report also warns of the inherent difficulties Western democracies face in addressing these and other emerging forms of AI-generated and AI-enhanced information manipulation, and their impact on privacy and free speech.

Commissioned by the UK Ministry of Defence and the Foreign, Commonwealth and Development Office, the 144-page report, Strategic competition in the age of AI: Emerging Risks and Opportunities from Military Use of Artificial Intelligence, says the potential for AI to be used for information manipulation (e.g. highly sophisticated deepfakes) should be viewed as posing a “high-level impact” on society and the economy with “consequences for everything from political warfare, subversion, electoral interference, crime, and public trust.”

The report notes, however, that any regulation of AI must also balance free speech, data, and privacy concerns.

The RAND Europe researchers said their work uncovered “prominent risks” which “include AI enabling an unprecedented spread of disinformation, causing social upheaval and atomization, and undermining trust in facts, institutions, and democratic politics … AI-enabled deepfakes and disinformation campaigns fuel truth decay and governance crisis.”

In addition, the RAND Europe researchers warned that “competition for advantage in AI leads to a race to the bottom on regulatory standards on issues such as data protections, algorithmic bias, harm prevention, and privacy,” and that the weaponization of AI is driving “new forms of economic warfare,” such as the use of deepfakes – or memetic engineering – to disrupt financial markets and attack the models used for algorithmic trading.

The RAND study cautioned that democracies may be “more exposed to information manipulation, electoral interference, and other acts of political subversion using AI,” noting that “concerns about privacy, civil liberties, and algorithmic bias may also make it less palatable to utilize certain datasets for training of AI systems, and obviously influence policy, legal, and ethical restrictions on lethal autonomous weapons systems.”

Acknowledging that a “fierce debate” exists within the AI community between those focused on existential risk and those focused on nearer-term risks (e.g. concerns around bias, privacy, and inequality), the RAND Europe researchers said this “poses a false dichotomy to policy makers,” and that it is therefore “imperative to address both types of risk,” which “should be feasible with the collective resources and political bandwidth of major governments and tech firms. This means iteratively developing solutions to the immediate practical challenges posed by AI adoption (e.g. developing governance arrangements to mitigate concerns around safety and bias and accentuate the technology’s benefits) while also being mindful about any longer-term trends and path dependencies that could lead to global catastrophic risks.”

The RAND Europe researchers also warned that the application of AI to existing systems of repression by state militaries and armed non-state groups around the world could “open the door to ever-more repressive regimes that are able to use the speed, automation, and pattern recognition of AI to further enhance their control of the information space and crack down on dissenters.”

Among the significant concerns AI experts have, the report says, is “the extent to which AI could tip the balance in favor of repressive and authoritarian modes of governance in many parts of the world, while simultaneously threatening to subvert democratic politics, pollute the information environment, and undermine societies’ will-to-fight.”

The report found that there are “specific advantages that AI presents to authoritarian leaders. AI tools may assist such regimes in reinforcing systems of mass repression and surveillance, with Russia investing in AI systems that exploit massive volumes of data on their populations, from video surveillance and Internet traffic to facial or gait recognition and even DNA databases.”

“China, for example, has been accused of developing AI tools that specifically help it with monitoring its Uighur minority, as well as extending the influence of its digital ‘social credit’ system … Harnessing AI to existing systems of repression could open the door to ever-more repressive regimes that are able to use the speed, automation, and pattern recognition of AI to further enhance their control of the information space and crack down on dissenters.”

The RAND Europe researchers added that China is spreading “authoritarian norms” by exporting non-AI surveillance platforms to cities in over 80 countries, and that “the addition of AI-enabled systems is a concerning elaboration on this trend.”

Furthermore, the researchers warned, “even within democratic countries, commercial and dual-use AI technologies exported from suppliers in authoritarian countries (e.g. drones or CCTV cameras with built-in edge AI) could create new vulnerabilities and potential backdoors within critical national infrastructure. AI tools could also support online monitoring, intimidation, or extortion of diaspora communities or foreign dissidents as well as assisting with social engineering, honeytraps, and use of deepfakes to exert influence over elected politicians.” Such threats pose significant challenges as well as direct threats to the AI sector.

The report says that the “adoption of AI by state militaries and armed non-state groups is ushering in significant changes to the character of competition and conflict,” emphasizing that the “development, integration, and use of AI for military purposes could have profound implications for the future of warfare and for international peace and security more generally.”

And “this presents both opportunities and risks, whether at the tactical, operational, or strategic level,” the report says.

At a high level, the RAND report says that “AI can be further differentiated into Narrow, Broad and Strong AI.”

Narrow AI, sometimes known as Weak AI, “refers to AI systems that are designed to perform a narrow task (e.g. facial recognition or Internet searches) and can only operate under a limited predefined range,” the report says. These “are specialized systems that excel in their specific tasks but lack the ability to understand or apply knowledge beyond their programming.”

RAND explained that “Broad AI refers to an approach to AI that focuses on creating systems capable of generalizing knowledge and skills across multiple tasks and domains. These systems would be able to adapt to tasks, but not at the level of sentience or comparable to human performance.”

Finally, there’s Strong AI, which RAND said includes artificial general intelligence (AGI) and refers to those systems that are “able to understand, learn, adapt, and implement knowledge across a broad range of tasks at a level equal to or beyond human capabilities. AGI, or the related concept of artificial superintelligence, is a long-term goal of many research programs, but largely theoretical at this point.”

The RAND Europe report cautions that the “understanding of the applications and implications of these technologies is improving, but from a low base. Despite a lot of hype around AI, there are significant gaps in both our theoretical understanding and our empirical data on the potential benefits, drawbacks, and risks of different use cases for AI, including in a military setting. This has prompted intense and at times highly ideological debates among global AI experts, and left policy makers grappling with high levels of uncertainty around the likely pace and direction of future advances.”
