3 ‘illusions’ and 3 ‘visions’ that AI-using researchers need to recognize and avoid

Scientists are people too. As such, when engaged in research projects using AI, they must resist the very human impulse to over-delegate tasks to algorithms.

Two accomplished scholars of science have written a peer-reviewed paper warning their peers to fight that proclivity mightily.

Science and tech anthropologist Lisa Messeri, PhD, of Yale and cognitive scientist Molly Crockett, PhD, of Princeton had their commentary published in Nature this month.

“The proliferation of AI tools in science risks introducing a phase of scientific enquiry in which we produce more but understand less,” they explain. “[P]roposed AI solutions can also exploit our cognitive limitations, making us vulnerable to illusions of understanding in which we believe we understand more about the world than we actually do.”

Before setting pen to proverbial paper, Messeri and Crockett combed through 100 or so published papers, scientific manuscripts, conference proceedings and books. From this exercise they identified a number of “illusions” that can lead researchers to trust AI research aids a little too much.

Three of the most common, as synopsized in a Nature editorial:

1. The illusion of explanatory depth.

People relying on another person—or, in this case, an algorithm—for knowledge have a tendency to mistake that knowledge for their own and think their understanding is deeper than it actually is.

2. The illusion of exploratory breadth.

Research becomes skewed toward studying the kinds of things that AI systems can test. In social science, for example, AI could encourage experiments on human behaviors that can be simulated by an AI and discourage those on behaviors that cannot, such as anything requiring physical embodiment.

3. The illusion of objectivity.

Researchers see AI systems as representing all possible viewpoints or not having a viewpoint. In fact, these tools reflect only the viewpoints found in the data they have been trained on, and are known to adopt the biases found in those data.

Meanwhile, Messeri and Crockett point out, these illusions don’t come from nowhere. Typically they arise from scientists’ “visions” of AI as something it’s not, or perhaps something they wish it were. Three examples:

  • AI as oracle. Researchers may come to see AI tools as capable of not only tirelessly reading scientific papers but also competently assimilating the material therein. When this happens, researchers may trust the tools to “survey the scientific literature more exhaustively than people can,” Messeri and Crockett write.
     
  • AI as arbiter. Scientists can perceive automated systems as more objective than people and thus better able to settle disagreements from a detached, dispassionate perspective. The error here is assuming AI tools are less likely than humans to “cherry-pick the literature to support a desired hypothesis or to show favoritism in peer review.”
     
  • AI as quant (quantitative analyst). In this problematic “vision,” AI tools “seem to surpass the limits of the human mind in analyzing vast and complex data sets,” the authors state. In a similarly sneaky vision—AI as surrogate—AI tools simulate data that are too difficult or complex to obtain.

In the synopsis of the Messeri and Crockett paper, Nature’s editors remark:

All members of the scientific community must view AI use not as inevitable for any particular task, nor as a panacea, but rather as a choice with risks and benefits that must be carefully weighed.

Messeri/Crockett paper here (behind paywall), accompanying editorial here.


Dave Pearson

Dave P. has worked in journalism, marketing and public relations for more than 30 years, frequently concentrating on hospitals, healthcare technology and Catholic communications. He has also specialized in fundraising communications, ghostwriting for CEOs of local, national and global charities, nonprofits and foundations.
