Half a year after President Biden officially directed federal agencies in the executive branch’s bailiwick to “seize the promise and manage the risks” of AI, the White House has posted a status report. The update on last fall’s executive order, or “EO,” suggests agencies and departments “all across government” have not only complied with the boss’s instructions but also made noteworthy strides of their own volition. The report groups the achievements into four main categories. Here are examples of AI actions that federal bodies have completed under each of the four.
Managing risks to safety and security
- Developed the first AI safety and security guidelines for critical infrastructure owners and operators. These guidelines are informed by the completed work of nine agencies to assess AI risks across all 16 critical infrastructure sectors.
- Piloted new AI tools for identifying vulnerabilities in vital government software systems. The Department of Defense (DoD) made progress on a pilot for AI that can find and address vulnerabilities in software used for national security and military purposes. Complementary to DoD’s efforts, the Department of Homeland Security (DHS) piloted different tools to identify and close vulnerabilities in other critical government software systems that Americans rely on every hour of every day.
Standing up for workers, consumers and civil rights
- Announced a final rule clarifying that nondiscrimination requirements in health programs and activities continue to apply to the use of AI, clinical algorithms, predictive analytics and other tools. Specifically, the rule applies the nondiscrimination principles under Section 1557 of the Affordable Care Act to the use of patient care decision support tools in clinical care, and it requires those covered by the rule to take steps to identify and mitigate discrimination when they use AI and other forms of decision support tools for care.
- Developed a strategy for ensuring the safety and effectiveness of AI deployed in the healthcare sector. The strategy outlines rigorous frameworks for AI testing and evaluation, along with future actions the Department of Health and Human Services (HHS) will take to promote responsible AI development and deployment.
Harnessing AI for good
- Announced DOE funding opportunities to support the application of AI for science, including energy-efficient AI algorithms and hardware.
- Released a report, written by the President’s Council of Advisors on Science and Technology, on AI’s role in advancing scientific research to help tackle major societal challenges.
Bringing AI talent into government
- The General Services Administration has onboarded a new cohort of Presidential Innovation Fellows (PIF) and also announced its first-ever PIF AI cohort starting this summer.
- DHS has launched the DHS AI Corps, which will hire 50 AI professionals to build safe, responsible, and trustworthy AI to improve service delivery and homeland security.
- The Office of Personnel Management has issued guidance on skills-based hiring to increase access to federal AI roles for individuals with non-traditional academic backgrounds.
The White House points out that the six-month goals followed numerous successes at the 90-, 120- and 150-day marks, and that agencies “also progressed on other work tasked by the EO over longer timeframes.” Go deeper into the Biden Administration’s key activities around AI:
Buzzworthy developments of the past few days.
- How can AI help solve the problem of the global physician shortage? That’s just one of many excellent questions taken up by a distinguished panel of AI-experienced professors who took part in a roundtable discussion hosted by Harvard’s T.H. Chan School of Public Health. Responding to that particular question, Lucila Ohno-Machado, MD, PhD, of Yale said AI can certainly step in when all that’s needed is a solid clinical opinion on a simple medical problem. “But I must say,” she added, “[human] expertise is not dead.” In fact, she believes AI will only make physicians’ clinical know-how “more valued than ever.” Milind Tambe, PhD, of Harvard concurred and pointed out that AI can do things like help increase vaccination rates. “Where, exactly, might [human] intervention be the most useful?” Tambe asked. Who should get vouchers for traveling to vaccination sites, who should get a ride and who should get just a reminder? “Machine learning tools,” Tambe said, “can be precise at figuring out where each of these interventions would be most effective.” Watch the full discussion on YouTube.
- Ohno-Machado’s above argument gets support from new research. After systematically putting ChatGPT-4 through its paces, clinical investigators at Mass General Brigham concluded the tool can boost efficiency and contribute to patient education—but it surely should not be turned loose absent a doctor in the loop. And even that won’t always be enough. “As providers rely more on large language models, we could miss errors that could lead to patient harm,” explains the study’s corresponding author, Danielle Bitterman, MD. “This study demonstrates the need for systems to monitor the quality of LLMs, training for clinicians to appropriately supervise LLM output, more AI literacy for both patients and clinicians, and—on a fundamental level—a better understanding of how to address the errors that LLMs make.” Mass General Brigham news item here, journal study here.
- “We have definitely seen a trend toward decreasing ‘pajama time.’” That’s what some doctors call the sleepy nighttime period in which they find themselves finishing up the day’s documentation duties and administrative tasks. The quote is from Andrew Narcelles, MD, a family medicine practitioner at OhioHealth. The healthy trend to which he refers has been made possible by clinical notetaking bots from Nuance. Narcelles spoke with Axios reporter Ned Oliver, who reports that drafts created by the AI-enabled software “aren’t always perfect, but the early reviews are overwhelmingly positive.”
- If you thought social psychology’s replication crisis was bad, wait till you consider how bad AI’s reproducibility crisis-in-the-making could get. The principle is the same. One scholarly study arrives at a set of firm conclusions only to have them overturned when a follow-up study tries to replicate or reproduce the science. Fortunately, AI researchers can learn from past mistakes in other fields. And some are focused on doing precisely that. One of them, Princeton computer scientist Arvind Narayanan, PhD, tells his institution’s news operation that the scientific literature, “especially in applied machine learning research, is full of avoidable errors. And we want to help people.” Who’s we? And how are they aiming to prune this problem before it blooms? Get the basics here, explore the complexities here.
- Taken one at a time, emerging technologies transforming healthcare are only so impressive. But string together a handful and you’ve got yourself a genuine gee-whiz moment. Medscape delivers one of those in a zippy little roundup.
- Meta is overdosing on AI. And its users are feeling trapped in its bad trip. That’s the sense you get from tech, business and media journalist Scott Nover. One of the examples he gives to back up his take is the switcheroo Instagram seems to have pulled with one of its most basic functions. The platform’s search bar, “once a place to look up a friend’s account, now exists seemingly to usher users into conversation with a chatbot,” Nover reports in Fast Company. When it urges him to “Ask Meta AI anything,” he mentally shoots back: “Um, no. I just want to look up my dog’s daycare to see if they posted any pictures of her.” Read and relate.
- Bill Gates keeps trying to leave Microsoft. GenAI keeps pulling him back. Business Insider has the goods on him. “In early 2023, when Microsoft debuted a version of its search engine Bing turbocharged by the same technology as ChatGPT, throwing down the gauntlet against competitors like Google, Gates, executives said, was pivotal in setting the plan in motion,” reports chief tech correspondent Ashley Stewart. “While [Microsoft CEO Satya] Nadella might be the public face of the company's AI success—the Oz who built the yellow-brick road to a $3 trillion juggernaut—Gates has been the man behind the curtain.” Read it all.
- Recent research roundup:
- Funding news of note:
- From AIin.Healthcare’s news partners: