AI skeptics to lawmakers: Resist the urge to solve AI bias with more tech

European privacy advocates think it is a mistake for governments to believe more technology can make AI fair to all of the humans it serves. But if policymakers insist on that viewpoint, an alliance of AI skeptics has some recommendations.
The group, called European Digital Rights, or EDRi, has published a report based on the premise that debiasing AI algorithms and datasets (such as those in facial recognition systems) only further consolidates governance in the hands of technology companies.
Government leaders, according to the report, “must tackle the root causes of the power imbalances caused by the pervasive use of AI systems.”
Debiasing is an overly simplistic proposal, according to EDRi’s report, called “Beyond Debiasing: Regulating AI and its Inequalities.”
It means using technical design to solve complex problems that are deeply entwined with societal dynamics. And it is the “primary means of addressing discrimination in AI” in recent European Union policies.
Also troubling, debiasing is typically carried out by the same people and processes that created the biased programs to begin with.
An increasing amount of thought is going into how AI can be both effective and ethical, and not all of it involves debiasing.
The U.S. Government Accountability Office has created four principles of its own, for instance, and the U.S. National Institute of Standards and Technology has also opined on managing AI bias. That is not to say that either set of ideas has gained much traction.
The report’s authors make the case that more and better-informed public policy needs to take a greater share of control over AI. EDRi is a two-decade-old collection of subject-matter experts, non-governmental organizations and policy advocates.
Several recommendations are offered to lawmakers, who likely will continue to pursue debiasing solutions for biometric and other systems.
Most of them require leaders to become well-informed enough to hold the technology industry responsible for removing bias, and to stay informed indefinitely.
For example, EDRi says governments have to clearly define societal problems related to AI, issue solution criteria, create guidance on known technology limitations and fund ongoing interdisciplinary research.
The guidelines also essentially call for insulating legislators and regulators from the powerful lobbying campaigns of companies whose cash reserves exceed anything seen before in the history of commerce.