As AI continues augmenting the expertise of healthcare professionals, look for it to go further and do nothing less than “enhance humanity.” The note of optimism might have struck listeners as hyperbolic had it not been sounded by a distinguished technology expert addressing an audience of almost 4,000 tech experts and aficionados. The speaker was computer scientist Fei-Fei Li, PhD, of the Stanford School of Engineering. The occasion was the May 14 inaugural symposium of RAISE Health, a joint initiative of Stanford Medicine and the Stanford Institute for Human-Centered Artificial Intelligence (HAI). Li is co-director of HAI and of RAISE Health, the latter standing for Responsible AI for Safe and Equitable Health. Here are some other key quotes from the event as covered by Stanford Medicine science writer Hanae Armitage.
1. ‘It was quite remarkable.’—Lloyd Minor, MD, dean of the Stanford School of Medicine and co-director of RAISE Health, commenting on generative AI’s speed and accuracy when he asked it to describe a rare condition of the inner ear
2. ‘I encourage people to push [AI] for the unknown. I think everyone here knows someone who is suffering from a health condition that needs something beyond what we can offer today.’—Jessica Mega, MD, MPH, Stanford cardiologist and a co-founder of Alphabet’s Verily
3. ‘All of us are better than any one of us, and we’re recognizing … that we don’t have a prayer of reaching the potential of [AI] unless we understand how to interact with each other.’—Laura Adams, RN, senior advisor at the National Academy of Medicine, on the need for partnerships between academia, the private sector and the public sector
4. ‘It’s [about] putting patients at the center of everything we do. We need to be thinking about their needs and priorities.’—Lisa Lehmann, MD, PhD, director of bioethics at Brigham and Women’s Hospital and associate professor of medicine and global health and social medicine at Harvard Medical School
5. ‘Does it work? Does it work in my institution? Who pays for it? Who is liable?’—Jesse Ehrenfeld, MD, president of the American Medical Association, discussing four drivers of adoption for any digital health tool, including those powered by AI
6. ‘If we are looking for improving health [and] decreasing disparities, we’re going to have to make sure that we are collecting high-quality data on human behaviors, as well as the social and physical environment.’—Michelle Williams, ScD, professor of epidemiology at Harvard University and a visiting professor of epidemiology and population health at Stanford Medicine
7. ‘We’re physicians. We take this oath to do no harm. That needs to be the first way that we’re assessing any of these tools.’—Nina Vasan, MD, MBA, clinical assistant professor of psychiatry at Stanford, where she is founder and executive director of Brainstorm: The Stanford Lab for Mental Health Innovation
8. ‘AI is a mirror that reflects the society that we’re in. I’m hopeful that every time we get an opportunity to shine a light on a problem—hold up that mirror to ourselves—it will be a spur for things to get better.’—David Magnus, PhD, Stanford professor of pediatrics and of medicine
9. ‘Doing the science right for one model takes about 10 years. If every one of [our] 123 fellowship and residency programs wanted to test and deploy one model at that level of rigor … it would [cost] $138 billion. We can’t afford that. …’—Nigam Shah, MBBS, PhD, professor of medicine at Stanford University and chief data scientist for Stanford Health Care
Full article here. Symposium videos here.
Buzzworthy developments of the past few days.
- Surprise! More blowback on Schumer & friends’ $32B ask for AI funding. We haven’t heard the last of the gnashing of teeth over last week’s “bipartisan” unveiling (by two Democrats and two Republicans) of a roadmap for AI spending. Noting Sen. Schumer’s framing of the proposal as a must for keeping the U.S. from falling behind China on AI, editorial leaders at The Wall Street Journal lay out a punchy rebuttal. “Now’s not a time for more pork-barrel spending,” they write. “The Navy could buy a lot of ships to help deter China with an additional $32 billion a year.” Get their brief argument here.
- Our neighbors to the North are tussling over how much to spend on AI too. The Globe and Mail just published an opinion piece authored by two thought leaders with expertise in economics, computer science and governmental innovation. They point out that France is riding high on commitments for investments in AI infrastructure and applications totaling $16.2 billion from Amazon, Microsoft and Morgan Stanley. By comparison, they state, the $2.4 billion Canada has so far invested in AI is “already mostly obsolete.” As Canada reviews its AI investment focus, “we should not only emulate the breadth of investments being made in France but also leverage private partnerships to amplify the impact and reach of our efforts,” the two experts assert. “Canada’s AI competitiveness depends on it.” Read the rest.
- Here’s another AI thought leader getting his thoughts out into the public square. Soroush Saghafian, PhD, founder and director of the Public Impact Analytics Science Lab at Harvard, does so by letting his brain get gently picked by a contributing writer for Fast Company. “Imagine a healthcare system where preventive measures are as accessible as receiving a notification on your phone,” Saghafian says to the contributor, Guadalupe Hayes-Mota, MBA, an MIT senior lecturer in business and engineering. Article here.
- Reddit is emerging as a social media platform ripe for the data plucking. Scientific American makes note of the development. Its writer is interested in recent research that used Reddit posts to let large language models identify misinformation likely to drive certain behaviors—such as, for example, refusing to get vaccinated during a pandemic. Of course, misinformation flowed in both directions—pro- and anti-vaccine—during the COVID-19 crisis. In any case, the AI technique at hand taps the preexisting “fuzzy-trace” theory, which holds that people “pay more attention to the implications of a piece of information than to its literal meaning.” Read the SciAm piece here.
- Tomorrow’s healthcare workers may learn from AI-powered dummies. Employed as practice patients, such mannequins are already in use at Darlington College in the U.K. The unfailingly polite stand-ins have pulses, respond to spoken words, react realistically to a number of medical situations and, when appropriate, appreciate the occasional defibrillator jolt. More here.
- Can you distinguish between the three main methodologies used to train machine learning in healthcare? By name, they’re supervised learning, unsupervised learning and reinforcement learning. If you’d like to brush up, the American Medical Association has you covered. (For a quick hands-on refresher, there’s a short code sketch at the end of this section.)
- And while you’re at it, here’s a fresh overall primer on AI in healthcare. This one is from Healthnews, which, interestingly enough, is headquartered in Lithuania. (That was enough to get me to read it.)
- You didn’t just buy a new laptop, did you? Starting June 18, the new Copilot+ models will be available for Windows lovers (and potential Mac deserters) starting at $999. The big breakthrough: They’re designed with AI in mind. As Microsoft pitches the products in a May 20 blog post, “We have completely reimagined the entirety of the PC—from silicon to the operating system, the application layer to the cloud—with AI at the center, marking the most significant change to the Windows platform in decades.” The post is here, and there are already practically libraries’ worth of coverage of the product drop everywhere else online. (The New York Times covers it by posing an oddly compelling question: “Can AI make the PC cool again?”)
- Recent research roundup:
- From AIin.Healthcare’s news partners:
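Speaking of brushing up: for anyone who’d rather see those three training paradigms in code than in prose, here’s a minimal, self-contained sketch. It leans on scikit-learn and NumPy, every number in it is synthetic and invented purely for illustration, and it’s a toy contrast rather than anything healthcare-grade.

```python
# A toy sketch contrasting the three ML training paradigms named above.
# Requires scikit-learn and NumPy; all "patient" data below is synthetic.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Supervised learning: every example carries a known label (e.g., a diagnosis).
X = rng.normal(size=(200, 3))            # 200 synthetic "patients," 3 features each
y = (X[:, 0] + X[:, 1] > 0).astype(int)  # a known outcome attached to each patient
clf = LogisticRegression().fit(X, y)     # learn the mapping from features to labels
print("supervised accuracy:", clf.score(X, y))

# Unsupervised learning: same features, no labels; the model finds structure itself.
clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
print("unsupervised cluster sizes:", np.bincount(clusters))

# Reinforcement learning: no labels at all; an agent learns from reward feedback.
# Toy two-armed bandit: arm 1 secretly pays off more often; epsilon-greedy choice.
values, counts = np.zeros(2), np.zeros(2)
for _ in range(500):
    arm = int(rng.integers(2)) if rng.random() < 0.1 else int(np.argmax(values))
    reward = float(rng.random() < (0.3 if arm == 0 else 0.7))  # stochastic payoff
    counts[arm] += 1
    values[arm] += (reward - values[arm]) / counts[arm]        # running mean per arm
print("reinforcement: estimated arm values:", values.round(2))
```

The distinction in one line: supervised learning needs labeled outcomes, unsupervised learning finds structure without them, and reinforcement learning learns from trial-and-error reward signals rather than from a dataset of right answers.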