DHS fears digital injection attacks, seeks solution to secure live video streams
The U.S. Department of Homeland Security (DHS) has identified the need for innovative software solutions to counter the growing threat of digital injection attacks and to ensure the integrity of video communications in critical applications. These attacks, which allow malicious actors to alter live video streams, pose significant risks to trust and security in remote interactions, including identity verification.
To this end, DHS is seeking a software solution through its Small Business Innovation Research (SBIR) Program, which has issued a pre-solicitation for technology that can secure multiparty video interactions by establishing and maintaining the integrity of live video streams. The goal is to develop a solution that prevents digital injection attacks while integrating seamlessly with existing hardware and video conferencing applications. The technology must give users confidence in the authenticity of their video interactions and notify them of any changes in security status during a session.
The development of secure video communication software has significant implications for both national security and private sector operations. DHS said it “is increasingly doing business online including immigration interviews, remote identity proofing and agency meetings online. The ability to transact digitally with trust is key to multiple DHS missions. Widely used video platforms such as Microsoft Teams, Zoom, Webex, etc., allow substitution of virtual cameras with no notice or awareness to the participants in a video interaction. This deficiency can allow for video injection attacks.”
DHS said the technology it seeks will also play a key role in combating fraud and deception enabled by deepfake technologies. As deepfakes become more sophisticated, the need for robust defense mechanisms is more urgent than ever. The proposed solution will help bridge the gap between detection and prevention, providing proactive protection against digital injection attacks.
Digital injection attacks manipulate live video feeds to deceive participants in virtual interactions. A common example is the substitution of a real video feed with a deepfake or other digitally altered content. This vulnerability is particularly concerning for platforms that are widely used for personal and professional communications. And existing video platforms often allow the use of virtual cameras without notifying users, creating opportunities for attackers to exploit the system.
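To see how little information a platform typically has about the camera it is handed, consider the minimal sketch below. It assumes a Linux host and reads the device names that Video4Linux publishes under /sys/class/video4linux, flagging any whose advertised name matches common virtual-camera drivers. The list of name hints and the function names are illustrative assumptions, and a superficial name check of this kind is precisely the sort of signal the solicited solution is meant to go beyond.

```python
# Minimal sketch, assuming a Linux host with Video4Linux device entries.
# Flags devices whose advertised names look like known virtual cameras.
# The hint list below is illustrative, not exhaustive.
from pathlib import Path

VIRTUAL_NAME_HINTS = ("v4l2loopback", "obs virtual", "virtual camera", "droidcam")

def list_video_devices():
    """Return (device path, human-readable name) pairs for all V4L2 devices."""
    devices = []
    for dev in sorted(Path("/sys/class/video4linux").glob("video*")):
        name = (dev / "name").read_text().strip()
        devices.append((f"/dev/{dev.name}", name))
    return devices

def flag_virtual_cameras(devices):
    """Flag devices whose advertised name matches a known virtual-camera hint."""
    return [(path, name) for path, name in devices
            if any(hint in name.lower() for hint in VIRTUAL_NAME_HINTS)]

if __name__ == "__main__":
    devices = list_video_devices()
    for path, name in devices:
        print(f"{path}: {name}")
    suspicious = flag_virtual_cameras(devices)
    if suspicious:
        print("Possible virtual cameras:", suspicious)
```

Because a virtual camera is free to advertise any name it likes, a check like this offers no real assurance of authenticity, which is why the solicitation asks for integrity guarantees rather than device filtering.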
The threat of live deepfakes underscores the urgency of this issue, and the pre-solicitation document refers to a Biometric Update guest post by ID R&D Co-founder and Chief Science Officer Konstantin Simonchik, which explains why. Deepfake technologies are advancing rapidly, making it increasingly difficult to distinguish between authentic and manipulated content in real time. However, DHS said that although there has been progress in deterrent solutions such as biometric presentation attack detection (PAD), “not all approaches can be integrated for real-time streaming video and few detectors are robust over various methods of generating live deepfakes on all commodity personal computers.”
DHS said “the dynamically changing and rapidly improving methods for generating live deepfakes may defeat some detectors, resulting in an ongoing challenge between generators and detectors. This topic is seeking an innovative solution to mitigate and prevent digital injection attacks where a bad actor could modify live video content to deceive, commit fraud, or perpetrate scams.”
The department said current detection methods also have limitations in robustness and compatibility with real-time streaming video across commodity devices like laptops and mobile phones.
The proposed solution DHS is looking for must go beyond traditional detection methods to offer a novel and distinct capability. It should be interoperable, adhere to open standards, and demonstrate immunity from digital injection attacks as defined by international security standard CEN/TC 224. By achieving these objectives, the software will help DHS and other stakeholders conduct secure, trustworthy video communications.
To meet DHS’s needs, the software must satisfy several technical and operational criteria. The solution must establish and maintain the authenticity of the video stream between participants using hardware such as laptops, desktops, and mobile devices, and must ensure that the video content originates from a real camera and has not been modified.
It must also provide real-time security indicators: users should receive a clear indication that the video interaction is secure and be notified immediately if the security status changes during the session.
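These two requirements, stream authenticity and a live security indicator, can be pictured with a short sketch. The code below is a minimal illustration under heavy simplifications: a shared session key stands in for whatever device-bound attestation or key provisioning a real solution would use, frames are plain byte strings, and the class and function names are hypothetical. It tags each frame at the capture side and has the receiver flip a user-visible security status the moment a tag fails to verify.

```python
# Minimal sketch, assuming a pre-shared session key in place of real
# device-bound attestation. All names here are illustrative.
import hmac, hashlib, os
from dataclasses import dataclass

@dataclass
class TaggedFrame:
    payload: bytes   # raw video frame bytes
    tag: bytes       # integrity tag computed at the trusted capture point

class CaptureSide:
    """Tags each frame as close to the camera as possible."""
    def __init__(self, session_key: bytes):
        self._key = session_key

    def tag(self, frame: bytes) -> TaggedFrame:
        return TaggedFrame(frame, hmac.new(self._key, frame, hashlib.sha256).digest())

class ReceiverSide:
    """Verifies tags and surfaces a real-time security indicator."""
    def __init__(self, session_key: bytes, on_status_change):
        self._key = session_key
        self._secure = True
        self._notify = on_status_change

    def verify(self, tagged: TaggedFrame) -> bool:
        expected = hmac.new(self._key, tagged.payload, hashlib.sha256).digest()
        ok = hmac.compare_digest(expected, tagged.tag)
        if ok != self._secure:                 # security status changed mid-session
            self._secure = ok
            self._notify("secure" if ok else "INTEGRITY LOST")
        return ok

if __name__ == "__main__":
    key = os.urandom(32)
    sender = CaptureSide(key)
    receiver = ReceiverSide(key, on_status_change=lambda s: print("Security status:", s))

    genuine = sender.tag(b"frame-0001")
    print("genuine frame verified:", receiver.verify(genuine))

    injected = TaggedFrame(b"deepfake-frame", genuine.tag)  # injected content, stale tag
    print("injected frame verified:", receiver.verify(injected))
```

A real deployment would have to bind the key to the camera hardware or a trusted driver layer rather than sharing it in application software, which is the part of the problem the pre-solicitation leaves to proposers.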
The solution also must work with existing operating systems, device drivers, hardware, and video applications to ensure compatibility with widely used platforms without requiring extensive modifications.
The proposed software must differ from existing solutions like PAD and liveness detection and should offer advanced protection against live deepfake generation and other sophisticated attacks.
The project will be developed in three phases, each addressing key technical and operational aspects of the solution.
The first phase focuses on evaluating the viability of proposed security approaches to prevent digital injection attacks. DHS said respondents “must explain how the proposed solution integrates with existing operating systems, drivers, hardware, and video applications, and how the proposed solution is novel and distinct from existing commercial presentation attack detection solutions.”
Developers must identify the necessary software security layers to maintain digital trust in video streams and determine how the solution will integrate with existing systems. This phase involves detailed modeling to demonstrate the distinctiveness of the proposed solution compared to commercial alternatives.
Phase I also requires developers to outline the extended functionalities needed for video applications, such as verifying the authenticity of video feeds and alerting participants to security breaches. By addressing these foundational requirements, Phase I sets the stage for prototype development.
In Phase II, developers will create a working prototype of the software, incorporating the security layers modeled in Phase I. The prototype will include utilities to verify and assert a secure channel between video applications and cameras, ensuring that video content is authentic and untampered.
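As a rough picture of what such a utility might do, the sketch below performs a challenge-response check at session start: the video application accepts the camera channel only if the capture-side component can prove possession of a key assumed to be provisioned to the trusted camera path. The provisioning mechanism, the class names, and the error handling are illustrative assumptions, not details from the pre-solicitation.

```python
# Minimal sketch of a session-start channel assertion, assuming a key has
# somehow been provisioned to the trusted camera path. Names are illustrative.
import hmac, hashlib, os

class CameraChannelEndpoint:
    """Stands in for the component sitting directly on the camera driver."""
    def __init__(self, provisioned_key: bytes):
        self._key = provisioned_key

    def respond(self, challenge: bytes) -> bytes:
        return hmac.new(self._key, challenge, hashlib.sha256).digest()

class VideoAppEndpoint:
    """Stands in for the video conferencing application."""
    def __init__(self, provisioned_key: bytes):
        self._key = provisioned_key

    def assert_secure_channel(self, camera: CameraChannelEndpoint) -> None:
        challenge = os.urandom(32)
        expected = hmac.new(self._key, challenge, hashlib.sha256).digest()
        if not hmac.compare_digest(expected, camera.respond(challenge)):
            raise RuntimeError("camera channel could not be verified; refusing stream")

if __name__ == "__main__":
    key = os.urandom(32)
    app = VideoAppEndpoint(key)
    app.assert_secure_channel(CameraChannelEndpoint(key))                # passes silently
    try:
        app.assert_secure_channel(CameraChannelEndpoint(os.urandom(32)))  # wrong key
    except RuntimeError as err:
        print("Rejected:", err)
```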
The final phase focuses on deploying the solution in real-world settings and refining it based on user feedback. Phase III has cross-cutting applications, spanning government operations and commercial use cases. For DHS, the technology will support critical activities such as virtual immigration interviews and remote identity proofing transactions for disaster assistance.
Through its phased approach, the project will address technical challenges, integrate with existing systems, and deliver a robust solution that meets the evolving needs of modern communication. In doing so, DHS said, it will set a new standard for trusted video interactions, safeguarding the integrity of virtual engagements in an increasingly interconnected world.