Healthcare consumers tend to feel excitement over AI’s potential for improving their care while holding reservations about the technology’s safety and oversight.
The latter concern seems tied to patients’ worries that someone other than a physician will be in charge of applying healthcare AI to real-world clinical care.
So found Mayo Clinic researchers who conducted 15 focus groups with 44 men and 43 women recruited from Mayo primary care patient rolls in Minnesota and Wisconsin. The team’s findings were published in NPJ Digital Medicine.
Biomedical ethicist Richard Sharp, PhD, and colleagues found that most of the patients’ input on healthcare AI fell under one of six distinct themes.
Here’s a summary with sample patient quotes:
1. Participants are excited about healthcare AI but want assurances about safety.
- “[W]hen this intelligence is built we have to test it, right? We have to test it to make sure that it’s helping correctly, and that to me represents a big challenge and one we don’t wanna jump into and see what happens. We’ve gotta be very careful there.”
2. Patients expect their clinicians to ensure AI safety.
- “I believe the doctor always has the responsibility to be checking for you, and you’re his responsibility, you know? The AI is not responsible; that’s just a tool.”
3. Patients want their choice and autonomy preserved.
- “I’d rather know what they’re observing and, if [the AI is] wrong, I would [want to] be able to correct it rather than have them just collect data and make assumptions.”
4. Patients are concerned about healthcare costs and insurance coverage.
- “[I]t sounds expensive, and healthcare is already fairly expensive. … [A] lot of times you can get something that works just as well for a lot less or you could get something super fancy. That makes you think, hey I got this big fancy thing, but it really doesn’t do any better than the original, cheaper version.”
5. Patients want assurance of data integrity.
- “There’s a lot of discrepancies in the medical record I must say, especially now that you can see your portal. … I’ve had a lot of different things in my medical chart that are inaccurate, very inaccurate, so if they’re training an artificial intelligence that this is facts, it’s like, well no.”
6. Patients recognize the risks of technology-dependent systems.
- “I have some background in electronics, and one thing you can guarantee with electronics is they will fail. Might not be now, might never happen in 10, 20 years. … [But] electronics fail. They just do.”
Sharp and co-authors acknowledge several limitations in their study design, including a lack of racial and ethnic diversity (93.1% white) due to recruitment methods, as well as above-average levels of education and health insurance coverage among participants.
Still, they state, their results give voice to underlying concerns many U.S. healthcare consumers likely have about the use of AI in their healthcare.
Perhaps foremost among these is the need for assurance that the use of AI in clinical care will always be overseen by physicians.

“If this expectation is not met, it is possible that we could see a third ‘AI Winter’ in which fears of patient harm lead to widespread rejection of healthcare AI by patients and their providers,” the authors write. “To avoid that possibility, it is critical that AI developers engage the public in dialogue about both the potential benefits and harms of applications of AI in healthcare.”
The study is available in full for free.