Several hundred AI experts, stakeholders and commentators are alerting the world to the technology’s potential for widespread harm. To maximize chances of being heard far and wide, the collective has boiled down the gist of its fears to one succinct sentence: “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”

Drafted by the Center for AI Safety, a San Francisco nonprofit whose name tips its mission, the statement went up on the CAIS website Tuesday. It’s been making headlines and spurring conversation ever since.

Here’s a roundup of things said or written in response to the 22-word message so far. Note that some are avidly supportive, others seriously skeptical.

- “There’s a very common misconception, even in the AI community, that there are only a handful of doomers. But, in fact, many people privately would express concerns about these [risks].” —Dan Hendrycks, executive director, Center for AI Safety (Source: New York Times)
- “How exactly is this end-of-days scenario supposed to go down? ... [I]t seems that humanity’s biggest threat is not our own inventions but rather our boundless talent for hyperbole.”—Jose Antonio Lanz, Esq., Decrypt
- “[B]oth AI risk advocates and skeptics agree that, even without improvements in their [current] capabilities, AI systems present a number of threats in the present day—from their use enabling mass surveillance to powering faulty ‘predictive policing’ algorithms to easing the creation of misinformation and disinformation.”—James Vincent, senior reporter, The Verge
- “Notably absent from the [CAIS statement] are Google CEO Sundar Pichai and Microsoft CEO Satya Nadella, the field’s two most powerful corporate leaders.”—Washington Post reporters Aaron Gregg, et al.
- “Advancements in AI will magnify the scale of automated decision-making that is biased, discriminatory, exclusionary or otherwise unfair while also being inscrutable and incontestable.”—Elizabeth Renieris, senior research associate, Oxford University’s Institute for Ethics in AI (Source: BBC)
- “This [CAIS statement] is inherently ridiculous, sorry. No one is making Google and OpenAI develop AI that puts humanity at ‘risk of extinction.’ If they honestly thought it was such a dire threat they could stop building it *today*. They do not, so they won’t.”—Brian Merchant, tech columnist, Los Angeles Times (via Twitter)
- “In today’s world, asking us as a society to act responsibly and to use personal discipline regarding technology is like having a Weight Watcher’s meeting at Wendy’s. I am not optimistic.”—Anonymous New York Times reader commenting on the newspaper’s coverage of the CAIS statement (and garnering more recommendations than any other reader/commenter)
The CAIS statement comes a little less than two months after the Future of Life Institute published an open letter calling for a pause of at least six months on the training of AI systems more powerful than GPT-4. That letter was signed by Elon Musk and more than 1,100 other technology experts.
Buzzworthy developments of the past few days.

- DiagnaMed (Toronto) has launched a generative AI “pal” designed to help people improve their brain health. The product, PalGPT.ai, gets to know its users, then sends friendly text messages offering brain-supportive tips, advice, support and more, the company explains. In addition, the virtual confidant offers “a private space for sharing thoughts, feelings, beliefs, experiences, memories and dreams.”
- Inovaare of Milpitas, Calif., is debuting a digital compliance assistant undergirded by generative AI. Called Usher, the conversational Q&A tool can serve providers, payers and any other healthcare orgs that could use a little help keeping up with regulatory requirements.
- SameSky Health in North Hollywood, Calif., has injected natural language processing and machine learning into the platform it provides for members of its health-plan clients. SameSky says the refresh will let users tailor the software’s interactivity to their cultural and other preferences.
- GE HealthCare has been cleared by the FDA to market deep learning software that boosts the quality of images acquired with a PET/CT machine made by GE. Jan Makela, president & CEO of the company’s imaging division: “One of the main advantages of moving fully into the future of AI and deep learning is making state-of-the-art imaging accessible to more practices, across more care areas than ever before.”
- Researchers at the University of Technology Sydney in Australia have developed a 3D-printed model that replicates a disc of the lumbar spine. Calling the invention “disc-on-a-chip,” the inventors say the high-tech contraption is aimed initially at clinical researchers: it can stand in for actual low-back discs, simulating injuries, degenerative conditions or, as need dictates, healthy tissue.
- HiDO Health (El Dorado Hills, Calif.) has introduced an AI platform for homecare providers to place in patients’ homes. The technology centers on watching for proper intake of prescription meds but can also help with remote monitoring in general. The company says such proactive care can cut hospitalizations by 80% and health costs by 67%.
- Two tech companies are working together to bring virtual reality to the elderly. Waya Health (Boone, N.C.) and Viva Vita (Washington, D.C.) say their partnership will open access to advanced VR experiences that will “revolutionize” daily healthcare regimens for seniors living at home and in community settings.