What, me worry? Businesses missing the importance of ethics in AI development
Almost everyone involved in AI research at least pays lip service to the critical need for ethical and responsible programming.
Hardly a victory yell, but it is encouraging. That statement could not have been made with a straight face almost anywhere three years ago. The same, however, cannot generally be said of businesses investing in AI for products or operations.
Credit scorer FICO has issued a report, compiled with market analysis firm Corinium, that finds a gaping blind spot among executives when it comes to responsible AI. Judging by a survey within the report, the executives at a large majority of respondents' companies cannot be bothered with it.
If the conclusions can be extrapolated both globally and over time, the industry could be sabotaging itself — as well as economies and the lives of individual people — the way many industries have done by not taking the threat of cybercrime and -terrorism seriously enough.
Revenue worldwide for AI software, hardware and services is predicted to grow 16.4 percent year over year in 2021, to $327.5 billion, according to industry analyst International Data Corp., figures that are spotlighted in FICO’s report.
There are those, especially in the research community, who are moving deeper into the issue of trust. It is unclear from FICO’s report, however, when or if private-sector momentum will take over. (The document can be downloaded here.)
Seventy-three percent of those responding to the survey reported having “struggled” to get executives to prioritize ethical and responsible AI practices.
Research and early experience are showing developers and researchers that consumers will resist AI-involved products and services if they do not trust the algorithms, the vendors or government regulators. In the case of face biometrics, that resistance even extends to legislators.
Part of creating trust is being able to explain how algorithms work, but 65 percent of respondents’ companies are incapable of saying how specific model decisions and predictions are made.
And in one of the more damning findings, FICO’s second annual survey indicates that only one in five companies “actively monitor their models in production for fairness and ethics.”
That is not to say the idea of credible efforts is alien to business leaders. Eighty percent of “AI-focused executives say they are struggling to establish processes that ensure responsible AI use.”
But 43 percent of respondents “say they have no responsibilities beyond regulatory compliance” when it comes to operating AI systems that make decisions capable of “indirectly” impacting livelihoods.
There is an undercurrent of thought in the AI industry that, until this report, offered hope for proponents of AI’s unfettered deployment.
The idea is that worries about all varieties of bias in algorithms, for example, are overstated because businesses are as motivated as any participant in society to see scrupulously fair and accurate software created.
FICO’s report seems to illustrate a private sector clicking boxes instead of taking responsibility for the actions of software that have unprecedented and growing power over economic activity and government operations.
Perhaps looking for an optimistic note amid the buck-passing and -chasing, the report’s authors found that 63 percent of survey takers “believe that AI ethics and responsible AI will become a core element of their organization’s strategy within two years.”
That seems to assume something fortunate will happen in the immediate future, such as the arrival of AI that writes AI ethics into other AI systems.