Industry Watcher’s Digest
Buzzworthy developments of the past few days.
- When a hacker breaches a business, the usual point of entry is a hole opened by a phishing scam. AI can help employees practice not taking the bait. “[T]hink fire drills for your business continuity and disaster readiness programs,” explains subject matter expert Marc Haskelson in a piece posted by HIPAA Journal. “These ‘cyber fire drills’ also help management identify strengths and weaknesses in security processes, allowing them to allocate resources appropriately.” The principles apply to hospitals and health systems as well as commercial concerns. Read the whole thing.
- Governmental oversight of AI is happening at the state level. This year, Utah, Florida and Colorado turned nagging concerns into official actions. The Foley & Lardner law firm points to the resulting regulations as models stakeholders elsewhere can use to prepare for federal oversight to come. “As such, companies [and healthcare organizations] are left with the opportunity to model standards for ethical and safe uses of AI and early adopters can act now to help influence AI policy,” the post’s authors write. They’re summarizing discussions from a summit held in July. Read the rest.
- The Ant Group is looking to muscle up its healthcare AI. Ant is a fintech affiliate of Alibaba, which was co-founded by the Chinese multibillionaire Jack Ma. It’s seeking to acquire Haodf.com, a Chinese online healthcare platform that, among other things, provides virtual visits with real doctors. Benzinga has a few more details.
- The cost to develop a drug, get it approved and bring it to market often lands well north of $2 billion. As drug development is one of the most anticipated use cases for AI in healthcare, hopes are high that the technology will accelerate the process and lower the costs. This week InformationWeek looks at what’s involved. “As the drugs themselves progress toward and through the trial stage, the information gathered along the way can be organized by AI as well, identifying patterns that humans might not notice and potentially reducing redundancies and procedures that may lead to dead ends,” reports freelance writer Richard Pallardy. “This frees researchers from laborious analysis and gives them time to engage in real-world lab work that can then itself be fed back into the models.”
- An astronomer has invented an AI-powered medical device. Joseph Carson, PhD, came up with the tool, which helps screen for cervical cancer, by applying principles used in space exploration. “Once inserted into patients, Carson’s colposcope captures dozens of snapshots of a cervix,” the Post and Courier of Charleston, S.C., reports. “Artificial intelligence technology from NASA telescopes helps transform those images into what Carson calls a ‘topographical map.’ That 3D rendering helps doctors identify and treat cervical cancer.” Photos of the device and the rest of the story are here.
- How could AI go wrong? Let us count the ways. The number starts at 700 and is likely to go up. All are itemized in the AI Risk Repository, which was compiled by a group at MIT’s Computer Science and Artificial Intelligence Laboratory (aka CSAIL). Describing the resource, MIT Technology Review says the most common risks center around AI system safety and robustness (76%), unfair bias and discrimination (63%) and compromised privacy (61%). “Less common risks tended to be more esoteric, such as the risk of creating AI with the ability to feel pain or to experience something akin to ‘death.’” Get the rest.
- The G7 is to hold a 10-day conference on AI in healthcare. Representatives from the group’s seven member countries—Canada, France, Germany, Italy, Japan, the U.K. and the U.S.—will discuss various clinical use cases. They’ll also air out differing opinions on the benefits, challenges and implications of the use of AI in healthcare. It’ll all happen in the seaport city of Ancona. Learn more here.
- ‘We’re going to be a combination of our natural intelligence and our cybernetic intelligence, and it’s all going to be rolled into one.’ These are the words of the futurist and AI scientist Ray Kurzweil, whose latest book dropped this summer. He’s talking about the singularity, of course. He didn’t invent the concept, but his 2005 book The Singularity Is Near did a lot to popularize it. His new follow-up, The Singularity Is Nearer, takes his thinking a step further. Making the singularity possible, he tells the Guardian in a recent interview, “will be brain-computer interfaces which ultimately will be nanobots—robots the size of molecules—that will go noninvasively into our brains through the capillaries. We are going to expand intelligence a millionfold by 2045 and it is going to deepen our awareness and consciousness.”
- Recent research in the news:
- Columbia University: Artificial Intelligence cannot yet reliably read and extract information from clinical notes in medical records
- Tel Aviv University: A wearable sensor supported by machine learning models can monitor and quantify freezing-of-gait (FOG) episodes in people with Parkinson’s disease
- Harvard Medical School: New AI tool captures how proteins behave in context
- Funding news of note:
- From AIin.Healthcare’s news partners:
- Health Imaging: AI tool predicts metabolic disease using 3D body scans