The Department of Defense’s National Security Agency (NSA) has launched a new organization to take charge of AI security. The move is primarily geared to protect information systems crucial to national defense and security. But it’s likely to affect hospitals and health systems as well. How could it not? Healthcare is not only the country’s largest single industry but also an avid adopter of AI. Plus it’s a hot target for malicious hackers at home and abroad.

Simply called the AI Security Center, AISC for short, the new federal enterprise has as its inaugural leader Army Gen. Paul Nakasone, who will step down as director of the NSA and commander of the U.S. Cyber Command. Announcing the new center’s birth Sept. 28, Nakasone said the center will work closely with academia and private industry—both in the U.S. and in allied countries—and other AI-intensive domains to “address threats and retain our nation’s advantage in AI.”

Here’s more from Nakasone and others, pro and con, about the need that drove the creation of a centralized AI security operation in our nation.

- PRO: ‘Our adversaries, who have for decades used theft and exploitation of our intellectual property to advance their interests, will seek to co-opt our advances in AI and corrupt our application of it. AI security is about protecting AI systems from learning, doing and revealing the wrong thing.’—Gen. Nakasone via DoD News
- PRO: ‘For the private sector, this is an overwhelming win. It means that small to medium businesses, hospitals and other private-sector organizations will be the receiver of some of the most impactful threat intelligence to date.’—Symmetry Systems CTO Landen Brown via Infosecurity magazine
- CON: ‘Nobody is clamoring for more data mining and invasion of privacy from three-letter agencies. Congress should be looking to limit the scope of these domestic spying operations, not giving them a de facto green light.’—American Principles Project policy director Jon Schweppe via Fox News Digital
- PRO: ‘Prior to the [COVID-19] pandemic, hospitals had already struggled to defend themselves against an onslaught of ransomware and data breaches. Hospitals, medical researchers and other health institutions need the expertise and resources your agencies have developed defending against … sophisticated [cyber] threats.’—Bipartisan group of U.S. senators in a 2020 letter to Gen. Nakasone and Christopher Krebs, director of the Cybersecurity and Infrastructure Security Agency
Buzzworthy developments of the past few days.

- Three years from now if not sooner, almost half the world’s physicians will be allowing generative AI to help with clinical decisions. Doctors in China may be setting the pace: 53% there currently say they’re optimistic about adoption within that timeframe, compared with 42% of physicians in the U.S. and 34% in the U.K. The figures are from a report released Oct. 4 by Pymnts.com with AI-ID. The report projects the global generative AI market will balloon from around $1 billion last year to nearly $22 billion by 2032. The report is teased here and offered in exchange for contact info here.
- A billionaire techie is warning AI resisters they’re doomed to live out their lives like goldfish in a bowl. Not sure what he meant by that. In any case, Masayoshi Son, CEO of Tokyo-based SoftBank, offered the alert in (and presumably to) Japan, where he further predicted artificial general intelligence, or “AGI,” will be humbling smartypants humans within 10 years. The speech has drawn blanket coverage.
- Count Meta’s president of global affairs as an AGI skeptic. Or maybe an AGI resister. “If we ever approach, as a world, that dystopian vision of what’s called ‘artificial general intelligence,’ where these models develop an autonomy and an agency of their own, then of course you’re in a completely different ballgame,” says the exec, Nick Clegg. “And then, I think, the debate completely changes.” New York Times interview transcript here.
- And then there’s the AI that built a working droid. All humans did was ask the algorithm to design a robot that can walk across a flat surface. With only that to go on, the AI came up with a thingamabob that looks like a boxy rhinoceros. After a few failed tries, the inanimate creature walked across a flat surface. Sort of, anyway. See for yourself.
- Leave it to the Windy City to tax end-users of ChatGPT. Not all of them, but enough of them to make an outside observer go “Hmmm.” Get the gist at IllinoisPolicy.org.
- Researchers at Wake Forest University in North Carolina have bioprinted “full thickness” skin. The breakthrough promises to aid wound recovery across all three layers of human skin tissue—epidermis, dermis and hypodermis—and to minimize scarring. Details plus link to journal study here.
- Videra Health (Orem, Utah) and Baltimore Health Analytics (Baltimore) have jointly launched an AI-equipped means of continuously improving the patient experience. The mediator is an app that lets discharged patients express their likes and dislikes in their own words. Announcement here.
- Health Data Analytics Institute (Boston) has lined up $31 million in Series C funding. The company says it will use the backing to expand the reach of its HealthVision product, which uses AI to guide decision-making for both patient care and population health. Announcement.
- From AIin.Healthcare’s news partners: