In the Biden administration’s latest push to encourage beneficial AI innovation while discouraging hazardous AI risk-taking, the president on Monday announced an executive order fleshing out that twofold agenda. The EO is aimed primarily at federal agencies, regulators and legislators. By extension, it speaks to private businesses developing the technology in the U.S. as well as to governments and companies of good will abroad.

A fact sheet synopsizing the EO outlines eight key deliverables for the target audiences to pursue. Three of the items diverge from the usual check boxes: provisions for students as well as consumers, employment supports for workers, and measures for responsible AI use by the U.S. government.

Reactions to the EO have been mostly supportive. This is unsurprising, since tech leaders themselves have been asking to be regulated on AI for the better part of a year now. Detractors have been quiet, albeit not silent. Here’s a sampling.

- We face a genuine inflection point in history, one of those moments where the decisions we make in the very near term are going to set the course for the next decades.—President Joe Biden speaking at the executive order signing ceremony
- Behind the White House’s rosy PR push about setting a new course for AI lurk the scary but very real monsters of congressional dysfunction and international rivals. Without overcoming both, Biden’s AI vision could struggle to take root as his administration hopes it will.—Journalist Matt Laslo writing in Wired
- Congress will need to adequately fund our federal science agencies to be able to do the important research and standards development described in this executive order.—Rep. Zoe Lofgren (D-CA) in comments to the Washington Post
- [T]he order issued by Mr. Biden, the result of more than a year of work by several government departments, is limited in its scope. While Mr. Biden has broad powers to regulate how the federal government uses artificial intelligence, he is less able to reach into the private sector.—New York Times reporters Cecilia Kang and David E. Sanger
- Today’s executive order is a vital step to begin the long process of regulating rapidly advancing AI technology—but it’s only a first step.—Robert Weissman, president of DC-based consumer group Public Citizen, in comments to ABC News
- I think the White House has … used an interesting combination of techniques to put something together that I’m personally optimistic will move the dial in the right direction.—Lee Tiedrich, distinguished faculty fellow at Duke University’s Initiative for Science & Society, in comments to IEEE Spectrum
Buzzworthy developments of the past few days.

- Watch for Vice President Kamala Harris to announce investments in AI of $200 million or more. She might drop the happy news while attending the global AI Safety Summit hosted by the U.K. Nov. 1 and 2. Bloomberg is reporting the cash will come from private philanthropic foundations interested in promoting AI advancements in consumer and worker protections.
- For its own part, the U.K. will be spending a fresh $122 million or so (£100 million) on healthcare AI. A special area of focus will be diagnostics and therapeutics for mental healthcare. And within that subfield, risks and preventatives for dementia will draw much attention. Computerworld has the story.
- On the one hand: Despite a glaring lack of clinical trials, chatbots built on large language models are being used in patient-facing situations. “That’s really bad.” Such is the view of an Ivy League computer scientist who worries over the pace at which the technology is slipping into doctors’ offices. And much if not most of it is entering care pathways unchecked by the FDA. A Politico healthcare fellow speaks with the scholar and unpacks the problem.
- On the other hand: “For the first time in my lifetime, and largely because of AI, I think it’s possible to imagine democratizing health for everybody on the planet.” And that’s regardless of their address, insurance type or skin color. The expression of excitement is from Google’s chief health officer, Karen DeSalvo, who gave a guest lecture at Harvard Medical School this week. Harvard Crimson coverage here.
- Forget everything you know about AI. Start with the very term “artificial intelligence.” Why? Because there’s nothing really intelligent, much less potentially sentient, about algorithms, large language models and all the rest of it. The latest hype slayer to point this out is Isaac Schick, a policy analyst at the nonprofit American Consumer Institute. “The first step to protecting the progress AI could help deliver is to re-characterize how we view AI,” Schick writes in a short opinion piece published by National Review, “from a machine that ‘thinks for itself’ to yet another tool for humanity to use.”
- A college whose undergrad student body is 40% minority has launched a curriculum for grad students seeking Master of Science degrees in health informatics. Hood College in Frederick, Md., announced the new program this week. Also on offer will be a post-baccalaureate health informatics certificate. Applications must be filed by Dec. 1. Classes will begin in January with in-person, online and hybrid options. Details here.
- The American Telemedicine Association is out with a set of six principles it sees as essential to developing and using AI in healthcare. The group bases its suggestions on the grounds that AI use “should maximize potential benefits as a meaningful tool for patients and providers and keep them at the center of healthcare decision-making.” News release here, 2-page summary here.
- The Department of Veterans Affairs is dangling $1 million in front of software developers. In a competition called AI Tech Sprint, the agency will divide the pot among winning entrants who innovate AI tools for relieving VA clinicians of administrative duties. Military Times has the story.