Server misconfiguration leaks source code for Clearview AI biometrics apps

Facial recognition company Clearview AI is making headlines again after cybersecurity firm SpiderSilk discovered a server misconfiguration that briefly exposed Clearview’s internal files, apps and source code online, TechCrunch reports.

According to Mossab Hussein, chief security officer at SpiderSilk, although the repository was password-protected, anyone could exploit the misconfiguration to register as a new user and access the source code, secret keys, and credentials for the company’s cloud storage buckets. The buckets held copies of both finished and pre-release apps used for testing. The misconfiguration also exposed Slack tokens, which enabled access to private corporate communications.
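To illustrate why leaked storage credentials are so damaging, the sketch below shows how little is needed to enumerate and download a bucket’s contents once static keys are exposed. It is a hypothetical example only: the report does not name the cloud provider, so AWS S3, the bucket name, and the key values are all assumptions made for illustration.

```python
# Hypothetical illustration: what exposed static cloud-storage credentials allow.
# Provider (AWS S3), bucket name, and key values are placeholders, not real data.
import boto3

s3 = boto3.client(
    "s3",
    aws_access_key_id="AKIAEXAMPLEKEYID",            # leaked access key (placeholder)
    aws_secret_access_key="wJalrEXAMPLESECRETKEY",   # leaked secret key (placeholder)
)

# List every object in the bucket, e.g. finished and pre-release app builds.
paginator = s3.get_paginator("list_objects_v2")
for page in paginator.paginate(Bucket="example-app-builds"):
    for obj in page.get("Contents", []):
        print(obj["Key"], obj["Size"])

# Any listed object can then be downloaded directly.
s3.download_file("example-app-builds", "builds/app-prerelease.apk", "app-prerelease.apk")
```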

Widely criticized by privacy advocates, Clearview maintains that its technology is used only by law enforcement, yet some reports tell a different story, with the company reportedly approaching businesses such as Walmart and even the NBA about partnerships.

Contacted by TechCrunch, Clearview founder Hoan Ton-That said the company has been targeted by cyberattacks and is now investing in its security strategy.

“We have set up a bug bounty program with HackerOne whereby computer security researchers can be rewarded for finding flaws in Clearview AI’s systems,” Ton-That told TechCrunch. “SpiderSilk, a firm that was not a part of our bug bounty program, found a flaw in Clearview AI and reached out to us. This flaw did not expose any personally identifiable information, search history or biometric identifiers.”

Ton-That accused SpiderSilk of extortion, but, according to TechCrunch, the accusation is not supported by the email exchange between the two companies. Hussein says he declined the bounty because accepting it would have barred him from disclosing the security issue publicly.

Clearview went through “a full forensic audit of the host to confirm no other unauthorized access occurred,” Ton-That told TechCrunch, also claiming the secret keys have been changed.
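Rotating leaked keys generally means issuing new credentials and revoking the compromised ones. The sketch below is a minimal, hypothetical example of what that looks like for AWS IAM access keys; the actual provider, account names, and procedure Clearview used are not described in the report.

```python
# Hypothetical illustration of rotating a compromised IAM access key.
# User name and key ID are placeholders for illustration only.
import boto3

iam = boto3.client("iam")

# Issue a fresh access key for the affected service account...
new_key = iam.create_access_key(UserName="affected-service-user")["AccessKey"]
print("New key id:", new_key["AccessKeyId"])

# ...deactivate the leaked key immediately so it can no longer be used...
iam.update_access_key(
    UserName="affected-service-user",
    AccessKeyId="AKIAOLDLEAKEDKEYID",
    Status="Inactive",
)

# ...and delete it once the new key has been rolled out everywhere.
iam.delete_access_key(UserName="affected-service-user", AccessKeyId="AKIAOLDLEAKEDKEYID")
```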

Hussein’s research also revealed that Clearview AI had used a “prototype” camera to record some 70,000 videos of residents in the lobby of a residential building. The footage was stored in the exposed cloud storage buckets.

“As part of prototyping a security camera product we collected some raw video strictly for debugging purposes, with the permission of the building management,” Ton-That told TechCrunch.

In an interview with CBS This Morning, Ton-That claimed that the First Amendment protects the scraping of public biometric data and that the technology is not used as “a 24/7 surveillance system.” An Illinois resident suing the company for violating BIPA has asked a federal judge for a pre-trial order requiring Clearview to destroy the data of state residents. In February, Clearview fell victim to a data breach in which its customer list was stolen. California residents have also filed a class action over Clearview AI’s biometric data collection, citing the CCPA.
