Scientists are people too. As such, when engaged in research projects using AI, they must resist the very human impulse to over-delegate tasks to algorithms. Two accomplished scholars of science have written a peer-reviewed paper warning their peers to fight that proclivity mightily.

Science and tech anthropologist Lisa Messeri, PhD, of Yale and cognitive scientist Molly Crockett, PhD, of Princeton had their commentary published in Nature this month.

“The proliferation of AI tools in science risks introducing a phase of scientific enquiry in which we produce more but understand less,” they explain. “[P]roposed AI solutions can also exploit our cognitive limitations, making us vulnerable to illusions of understanding in which we believe we understand more about the world than we actually do.”

Before setting pen to proverbial paper, Messeri and Crockett combed through 100 or so published papers, scientific manuscripts, conference proceedings and books. From this exercise they identified a number of “illusions” that can lead researchers to trust AI research aids a little too much. Three of the most common, as synopsized in a Nature editorial:

1. The illusion of explanatory depth. People relying on another person—or, in this case, an algorithm—for knowledge have a tendency to mistake that knowledge for their own and think their understanding is deeper than it actually is.
2. The illusion of exploratory breadth. Research becomes skewed toward studying the kinds of thing that AI systems can test. For example, in social science, AI could encourage experiments involving human behaviors that can be simulated by an AI—and discourage those on behaviors that cannot, such as anything that requires being embodied physically.
3. The illusion of objectivity. Researchers see AI systems as representing all possible viewpoints or not having a viewpoint. In fact, these tools reflect only the viewpoints found in the data they have been trained on, and are known to adopt the biases found in those data.
Meanwhile, Messeri and Crockett point out, these illusions don’t come from nowhere. Typically they arise from scientists’ “visions” of AI as something it’s not—or something they sometimes wish it were. Three examples:
- AI as oracle. Researchers may come to see AI tools as capable of not only tirelessly reading scientific papers but also competently assimilating the material therein. When this happens, researchers may trust the tools to “survey the scientific literature more exhaustively than people can,” Messeri and Crockett write.
- AI as arbiter. Scientists can perceive automated systems as more objective than people and thus better able to settle disagreements from a detached, dispassionate perspective. The error here is assuming AI tools are less likely than humans to “cherry-pick the literature to support a desired hypothesis or to show favoritism in peer review.”
- AI as quant (quantitative analyst). In this problematic “vision,” AI tools “seem to surpass the limits of the human mind in analyzing vast and complex data sets,” the authors state. In a similarly sneaky vision—AI as surrogate—AI tools simulate data that are too difficult or complex to obtain.
In the synopsis of the Messeri and Crockett paper, Nature’s editors remark: “All members of the scientific community must view AI use not as inevitable for any particular task, nor as a panacea, but rather as a choice with risks and benefits that must be carefully weighed.”
Messeri/Crockett paper here (behind paywall), accompanying editorial here.
Buzzworthy developments of the past few days.
- President Biden is seeking $20B to fund all sorts of AI research in 2025. As always around this time of year, the White House positions the budget request as equal parts plea to Congress for spending money and statement of policy priorities to America. Of interest to healthcare AI stakeholders, the Administration would like $10 million for AI in VA clinical trials and $2 million for AI in administering benefits for veterans. The elite D.C.-based law firm Akin nicely breaks it down here.
- The VA’s ears must be ringing AI tones. The Biden Administration isn’t the only one with the agency on its mind vis-à-vis AI. This week the Federal News Network takes a look at what the VA is doing with AI for veterans and for the VA health workers who serve them.
- ‘Europe is now a global standard-setter in AI.’ So brags Thierry Breton, European Commissioner for Internal Market, on the platform formerly known as Twitter. The source of Breton’s pride is the passage of the EU’s AI Act by a wide margin (523 yeas, 46 nays and a few dozen abstentions). The law reaches far and wide. In fact, some are calling it “the world’s most comprehensive regulatory framework for AI.” Information Week concurs with that descriptor, noting the new rules will give the EU power to fine noncompliant businesses up to 7% of global revenues or $38 million—“whichever is greater.” The outlet’s analysis is aimed at CIOs but accessible to all.
- Make way for hypothesis-driven AI in healthcare. As developed by Mayo Clinic researchers, this new kid on the medical AI block integrates scientific questions—aka hypotheses—so as to uncover insights likely to be missed by conventional AI. Mayo pharmacologist Hu Li, PhD, one of the lead researchers involved in the work, says hypothesis-driven AI can overcome AI’s “rubbish in, rubbish out” shortcomings. The new way, he suggests, will “yield significant insights that can help form testable hypotheses and move medicine forward.” Mayo News Network coverage here, scientific paper by Hu Li and colleagues here.
- Here’s more AI confidence emanating from Rochester, Minnesota. “Not far in the future, we will see living databases that undergo nightly self-updates from EMR data streams and allow continuous retraining of models that combine clinical features from radiology, pathology and more into true multimodal predictive machines,” Jacob Shreve, MD, a Mayo senior oncology fellow, states in OncLive. Along the march from here to there, Shreve adds, AI developers will come up with models that “not only outperform their counterparts but also demonstrate the maturity, rigor and reproducibility that is expected in medicine.”
- Emerging Covid variants with deadly potential can be sniffed out with AI a lot faster than without. Proof-of-concept research has borne this out in the U.K. The underlying idea is to formulate variant-specific vaccines and possibly treatments before any one variant multiplies itself into a population-level outbreak. The work also may help avoid wasting time on Covid strains unlikely to gain traction. Learn more here.
- Now where were we on whether or not AI will annihilate humankind? Oh yes. We were briefly hung up on an alarming report from Gladstone AI commissioned by the U.S. State Department. This week’s entry in the genre is a report from the Forecasting Research Institute. More precisely, it’s coverage of that report by a fellow traveler on the AI beat. “Is it extraordinary to believe that AI will kill all of humanity when humanity has been around for hundreds of thousands of years?” asks Vox writer Dylan Matthews, only half-rhetorically, I think. “[O]r is it extraordinary to believe that humanity would continue to survive alongside smarter-than-human AI?” Either way, we’re all in this together. Read the FRI report here and Dylan Matthews’s piece on it here.
- A few upcoming events of note:
- From AIin.Healthcare’s news partners: