AI researcher argues machine learning discoveries require checking
Scientists should question the accuracy and reproducibility of scientific discoveries made with machine-learning techniques until systems can be developed that effectively critique themselves, according to a researcher at Rice University.
EurekAlert! reports that Genevera Allen, an associate professor of statistics, computer science, and electrical and computer engineering at Rice and of pediatrics-neurology at Baylor College of Medicine, recently addressed the topic at the 2019 Annual Meeting of the American Association for the Advancement of Science (AAAS).
Allen says that discoveries currently being made by applying machine learning to large data sets probably cannot be trusted without confirmation, “but work is underway on next-generation machine-learning systems that will assess the uncertainty and reproducibility of their predictions.”
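To make the idea concrete, here is a minimal sketch of one generic way such a reproducibility check might work: re-running a clustering algorithm on bootstrap resamples of the data and measuring how much the “discovered” clusters change. The data set, algorithm, and scoring choices below are illustrative assumptions, not Allen’s method or any specific next-generation system.

```python
# Illustrative sketch only: probe the stability of a clustering "discovery"
# by re-running the algorithm on bootstrap resamples and measuring agreement.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs
from sklearn.metrics import adjusted_rand_score

X, _ = make_blobs(n_samples=300, centers=3, random_state=0)

# Reference clustering on the full data set.
reference = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)

rng = np.random.default_rng(0)
scores = []
for _ in range(20):
    # Perturb the data by resampling with replacement, then re-cluster.
    idx = rng.choice(len(X), size=len(X), replace=True)
    labels = KMeans(n_clusters=3, n_init=10).fit_predict(X[idx])
    # Compare with the reference labels on the same points; the adjusted
    # Rand index is 1.0 for identical partitions and near 0 for random ones.
    scores.append(adjusted_rand_score(reference[idx], labels))

print(f"mean stability (ARI) over 20 resamples: {np.mean(scores):.2f}")
# A low score suggests the "discovered" clusters may not be reproducible.
```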
Developing predictive models has been a major focus of the machine-learning field, according to Allen.
“A lot of these techniques are designed to always make a prediction,” she notes. “They never come back with ‘I don’t know,’ or ‘I didn’t discover anything,’ because they aren’t made to.”
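As a concrete illustration of the alternative Allen describes, here is a minimal sketch of a “reject option” wrapper that lets an off-the-shelf classifier answer “I don’t know” when its confidence is low, instead of always forcing a prediction. The model, data, and threshold are assumptions chosen for the example, not a technique the article itself specifies.

```python
# Illustrative sketch only: a classifier that can abstain ("I don't know")
# when its top predicted probability falls below a chosen threshold.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=500, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

def predict_or_abstain(model, X, threshold=0.8):
    """Return the predicted class, or None ("I don't know") when the
    model's top class probability is below the threshold (an assumption)."""
    proba = model.predict_proba(X)
    preds = proba.argmax(axis=1)
    return [int(p) if conf >= threshold else None
            for p, conf in zip(preds, proba.max(axis=1))]

answers = predict_or_abstain(model, X_test)
abstained = sum(a is None for a in answers)
print(f"abstained on {abstained} of {len(answers)} test points")
```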
Machine-learning technologies are leading an explosion of new AI patent applications, according to research from the World Intellectual Property Organization, and the Association for Computing Machinery recently added a Conference on Fairness, Accountability and Transparency to its 2019 lineup to address growing interest in how AI systems arrive at their conclusions.