AI concerns prompt deepfake challenge, civil society criticism, and new DoD ethicist position
Issues in algorithmic trust and artificial intelligence ethics are motivating action by a number of institutions and businesses around the world.
As concern about the threat of deepfakes to social discourse and online trust continues to grow, a group of stakeholders including Facebook, the Partnership on AI, Microsoft, and academics from the U.S. and UK are holding a Deepfake Detection Challenge (DFDC), presenting a new dataset for researchers and technology developers to use to produce new technology for identifying videos altered with artificial intelligence.
In a blog post, Facebook CTO Mike Schroepfer explains that the company is commissioning a dataset of deepfake videos, featuring paid actors who have consented to the use of their images, that will be freely available for community use. Facebook will also contribute more than $10 million to fund collaborations and prizes for the challenge winners.
The challenge parameters and dataset will be tested in a technical working session at the upcoming International Conference on Computer Vision (ICCV) in October. Following this initial test, the DFDC launch and release of the dataset will be held at the Conference on Neural Information Processing Systems (NeurIPS) in December. Facebook will participate in the challenge, but not compete for prize money.
“In order to move from the information age to the knowledge age, we must do better in distinguishing the real from the fake, reward trusted content over untrusted content, and educate the next generation to be better digital citizens,” comments Hany Farid, Professor in the Department of Electrical Engineering & Computer Science and the School of Information at UC Berkeley. “This will require investments across the board, including in industry/university/NGO research efforts to develop and operationalize technology that can quickly and accurately determine which content is authentic.”
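Challenge entries of this kind typically score individual frames with a trained classifier and then aggregate those scores into a video-level decision. The sketch below illustrates that aggregation step only; the frame classifier, field names, and threshold are hypothetical stand-ins, not details of the DFDC itself.

```python
# Hypothetical sketch: video-level deepfake scoring by averaging
# per-frame classifier probabilities. frame_fake_probability is a
# placeholder for whatever model an entrant would train on the
# challenge dataset; the dict-based "frame" records are dummy data.

from statistics import mean

def frame_fake_probability(frame) -> float:
    # Placeholder: a real entry would run a trained detector on the
    # decoded frame here.
    return frame.get("fake_score", 0.0)

def video_fake_score(frames, threshold=0.5):
    """Average per-frame probabilities and flag the video as likely
    manipulated if the mean crosses the threshold."""
    scores = [frame_fake_probability(f) for f in frames]
    avg = mean(scores)
    return avg, avg >= threshold

# Usage with dummy frame records (real input would be decoded video frames)
frames = [{"fake_score": 0.2}, {"fake_score": 0.9}, {"fake_score": 0.8}]
score, is_fake = video_fake_score(frames)
```

Simple averaging is only one possible aggregation rule; entrants might instead use the maximum frame score or a temporal model, depending on how the challenge metric is defined.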
Australian groups slam facial recognition proposals
A pair of human rights groups described by Which-50 as the leading groups in the country have called for Australia’s parliament to reject planned facial recognition laws, arguing they are worse than a similar system in the UK, which has been widely criticized.
A pair of bills to create a national facial recognition database are currently under review by the Parliamentary Joint Committee on Intelligence and Security (PJCIS), and the Human Rights Law Centre (HRLC) and the Australian Human Rights Commission have both made submissions to the PJCIS slamming the proposals.
HRLC calls the proposals “more draconian” than the UK system, and says they do not provide a sound legal basis for the use of identity matching services.
“The Bill can be characterised as providing authorities with extraordinarily broad capabilities to use facial recognition technology without any apparent regard for the civil liberties of all of us who will be affected,” according to the submission. The group’s Legal Director Emily Howe says the proposed laws are “something you’d expect in an authoritarian state.”
The Australian Human Rights Commission expresses concerns about the technology’s accuracy “in ‘real world’ applications,” and warns of the potential for algorithmic bias to affect law enforcement and service delivery.
The government’s earlier attempt to pass the Identity-matching Services Bill 2018 did not clear parliament before the end of the session.
DoD hiring ethicist for AI center
The U.S. Department of Defense’s (DoD’s) Joint Artificial Intelligence Center (JAIC) is planning to hire an ethicist to guide the department’s efforts in the field, defense.gov reports.
The JAIC was launched a year ago with a skeleton staff, but has since grown to around 60 employees, with a headquarters and a budget request of $268 million.
“One of the positions we are going to fill will be somebody who is not just looking at technical standards, but who is an ethicist,” said JAIC Director Air Force Lt. Gen. Jack Shanahan. “We are going to bring in someone who will have a deep background in ethics, and then the lawyers within the department will be looking at how we actually bake this into the Department of Defense.”
Shanahan wants the JAIC to not only bring AI technologies to the field, but also function as a “center of excellence.” He also notes that while some other countries like Russia and China may have an advantage in fast access to data due to fewer restrictions based on privacy and civil liberties, this does not necessarily translate to an advantage in the field. Shanahan does, however, want to strengthen ties between the U.S. government, industry, and academia.
India urged to pass algorithmic bias law
In an editorial for the Observer Research Foundation (ORF), University of Illinois at Urbana-Champaign Associate Professor Rakesh Kumar argues that with computer technology now used for decision-making across numerous areas of Indian society, the government needs to introduce a bill establishing safeguards against algorithmic bias.
Kumar spells out historical problems and plausible examples of potential algorithmic bias in India and elsewhere, and reviews the current legal landscape in India. He finds that while current and proposed rules could enhance data protection and access rights, they do not address algorithmic bias.
“Europe now prohibits solely automated decisions in cases where there could be significant or legal impact on the individual, and a right to human-in-the-loop and a non-binding right to explanation exists in all other cases,” Kumar explains. “Policy makers, industry, and civil society must debate if an equivalent framework is appropriate for India. At the least, there should be required minimum human involvement in the design and evaluation of a computer model.”
Existing laws dealing with discrimination could also be updated to apply to digital interactions. Noting that law enforcement agencies in Maharashtra and Delhi are already using predictive policing practices, and that biometric facial recognition is used in Rajasthan, Punjab, and Uttarakhand, Kumar suggests that the Indian government needs to act to regulate computer learning and AI models so that they do not deepen social divisions.