Close to half the world’s cybersecurity professionals, 45%, have zero involvement in projects within their respective enterprises to develop, implement and govern AI solutions. The exclusion came to light in a 2024 survey of more than 1,800 of these specialists conducted by the Information Systems Audit and Control Association (better known as ISACA). The largest single respondent block, 45%, hailed from North America. Publishing the full findings in its latest State of Cybersecurity report, ISACA called the paltry inclusion of cybersecurity professionals “disheartening.”

In commentary on the report published Jan. 6, Goh Ser Yoong, MBA, head of compliance at Advance.AI and a member of the ISACA Emerging Trends Working Group, builds on the report’s conclusions. “The exclusion of cybersecurity teams from AI development and implementation poses significant risks to organizational security, which includes oversight in addressing AI adversarial attacks, data poisoning and breaches, as well as model vulnerabilities,” he writes before adding: “To mitigate these risks and ensure the secure and effective integration of AI, it is imperative to increase awareness, bridge the gap and foster collaboration between cybersecurity teams and other departments, such as AI development teams, products, compliance and even legal.”
With that, Ser Yoong recommends some to-dos for organizations moving ahead with AI.

1. Involve your cybersecurity team early. Early involvement will ensure that security considerations are “embedded into the design, integration and development of AI solutions, minimizing the risk of vulnerabilities and security breaches,” Ser Yoong points out. “There is anticipation from studies by Gartner that agentic AI deployments will be increasing in 2025. Such deployments could be through third-party agents to be readily integrated, though they could be developed in-house too.” More: “Regardless of the approach, a proper third-party risk management process would be required, as well as mature development through methodologies such as development, security and operations (DevSecOps), and these would need to be driven by the cybersecurity team.”
2. Cultivate a cross-functional, collaborative culture. Organizations do well to foster a culture of collaboration and communication, Ser Yoong notes. “This can be achieved through regular meetings, joint workshops and shared training programs,” he adds. “Cross-functional collaboration will ensure that all teams understand each other’s perspectives, share knowledge and work together to achieve common security goals.” “Development methodologies such as DevSecOps also [need] to evolve, with machine learning operations (MLOps) being the next potential new culture and practice that unifies machine learning application development with systems deployment and operations.”
3. Encourage upskilling and training. Cybersecurity professionals “should be provided with the necessary training and upskilling opportunities to stay abreast of the latest AI developments and security challenges,” Ser Yoong advises. “This will enable them to contribute effectively to AI development and implementation, ensuring that security considerations are addressed throughout the AI lifecycle.”
4. Choose and use AI training data carefully. “Organizations should prioritize the proper selection and usage of training data for AI models, including both properly sourced real-world data and the generation of synthetic data,” Ser Yoong states. “This will ensure the development of robust and effective AI solutions while addressing potential biases and privacy concerns.” (A minimal data-curation sketch follows this list.)
5. See the big picture. The involvement of cybersecurity teams in AI development “is essential for organizations to harness the full potential of AI while mitigating the associated security risks that it would pose throughout its lifecycle,” Ser Yoong concludes. “In the long run, the goal should still be to maintain the digital trust that organizations have built with their customers.”
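To make the data-curation point concrete, here is a minimal sketch, not drawn from the ISACA report or Ser Yoong’s commentary, of one way to blend real-world and synthetic tabular records while checking the combined set for an obvious demographic imbalance. The dataset, column names and policy threshold are all hypothetical.

# Minimal sketch: combine real and synthetic training records, then
# re-check group representation against an explicit policy threshold.
# All data, columns and thresholds here are hypothetical.
import numpy as np
import pandas as pd

rng = np.random.default_rng(seed=42)

# Hypothetical "real-world" records with a skewed protected attribute.
real = pd.DataFrame({
    "feature": rng.normal(0.0, 1.0, 500),
    "group": rng.choice(["A", "B"], size=500, p=[0.9, 0.1]),
    "label": rng.integers(0, 2, 500),
})
real["is_synthetic"] = False

# Hypothetical synthetic records generated to shore up the
# under-represented group.
synthetic = pd.DataFrame({
    "feature": rng.normal(0.0, 1.0, 400),
    "group": ["B"] * 400,
    "label": rng.integers(0, 2, 400),
})
synthetic["is_synthetic"] = True

train = pd.concat([real, synthetic], ignore_index=True)

# Simple bias check: flag any group that falls below a floor share.
MIN_SHARE = 0.25  # hypothetical policy threshold
shares = train["group"].value_counts(normalize=True)
for group, share in shares.items():
    status = "OK" if share >= MIN_SHARE else "UNDER-REPRESENTED"
    print(f"group {group}: {share:.1%} of training data [{status}]")

The point of the sketch is simply that synthetic records can rebalance an under-represented group, but the merged dataset should be re-verified against a stated threshold rather than assumed to be balanced.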
Download ISACA’s 2024 State of Cybersecurity report here. Read Goh Ser Yoong’s commentary here.
Access the 2024 Executive Handbook: Ten Transformative Trends in Healthcare - What was top of mind for healthcare executives this year? What trends will shape 2025? Nabla’s Chief Medical Officer, Ed Lee, MD, MPH, was recently interviewed for the 2024 Executive Handbook: Ten Transformative Trends in Healthcare, offering his perspective on how AI is enhancing clinical workflows and setting the stage for the future of patient care. From shifting federal healthcare policies to the emergence of disruptors beyond traditional health systems and pressing cybersecurity challenges, discover the key insights shaping the industry. Download the full handbook here.

Assistant or Associate Dean, Health AI Innovation & Strategy - UCLA Health seeks a visionary academic leader to serve as its Assistant or Associate Dean for Health AI Innovation and Strategy and Director for the UCLA Center for AI and SMART Health. This unique position offers the opportunity to shape and drive AI vision and strategy for the David Geffen School of Medicine (DGSOM) and ensure the translation of innovation into our renowned health system. This collaborative leader will work with academic leadership, faculty, staff and trainees to harness the power of AI to transform biomedical research, decision and implementation science, and precision health. Learn more and apply at:
https://recruit.apo.ucla.edu/JPF09997 (tenured track)
https://recruit.apo.ucla.edu/JPF10032 (non-tenured track)
Buzzworthy developments of the past few days.

- Just because you have the right to do something with an emerging technology doesn’t mean it’s the right thing to do. In the 1990s, the birth and survival of Dolly the cloned sheep raised serious moral and ethical concerns about cloning humans. The next decade, the experimental insertion of human DNA into a rabbit embryo set off similar worries over human-nonhuman chimeras. These are just two relatively recent chapters in the ever-unfinished book of exciting yet unsettling technological advances. And now we’re midway through the 2020s. What will be the real-world AI use case that launches society into a heated moment of reckoning over guiding principles? For some, the question has bolted from hypothetical to urgent. Their spur: the second coming of a Trump-led executive branch. This one, of course, will be backed by same-party majorities in both chambers of Congress—and guided by tech visionary/Trump whisperer Elon Musk. Predicting a laissez-faire attitude toward AI will displace the prevailing appetite for AI guardrails, Axios technology editor Scott Rosenberg warns that those “hoping to build AI with strong ethical safeguards, bias protections or safety limits should expect an uphill battle. The odds are great that if something can be done with AI, it will be done.” Hear him out.
- AI roles claim three of LinkedIn’s top 25 hottest jobs in the U.S. In fact, AI engineer and AI consultant come in at the tippity-top, Nos. 1 and 2, respectively. And AI researcher lands at a very respectable No. 12. LinkedIn analysts came by the rankings after examining millions of jobs started by LI members between the start of 2022 and about midway through 2024. In the reader comments section, a talent recruiter notes the prevalence of AI know-how across the board. “[C]ompanies aren't just hiring for technical skills anymore but [are looking] for people who can bridge the gap between AI and human insight,” the reader remarks. “It’s not about AI replacing jobs, it’s about professionals who can leverage AI to enhance human capabilities.” Read the rest.
- Healthcare delivery is growing in complexity as the healthcare workforce shrinks in manpower. AI can help manage the imbalance two ways. One, AI can help patients appropriately self-manage their care. And two, the technology can help the healthcare system evolve from a 1:1 doctor/patient paradigm to a 1:many model—and without sacrificing care quality or scrimping on patient experience. That’s the stated vision of Daniel Yang, MD, vice president of AI and emerging technologies for Kaiser Permanente. “Our belief is that AI should never replace the judgment or expertise of our doctors and clinicians,” Yang tells HIMSS Media’s Healthcare IT News. “To succeed in this, we must assess any AI tool before deploying it to ensure we understand how to safely and effectively use it.”
- All AI politics is local. Some health systems are taking charge of change rather than waiting for change to descend upon them from Washington—or from their own statehouses. In North Carolina, UNC Health’s CMIO, David McSwain, MD, MPH, feels state-level AI legislation is a bad idea. “But the reality of it is, that is what’s going to happen,” he says. “What we want to do is establish state-level [AI] legislation that minimizes the burden on health systems, minimizes the burden on providers, minimizes the potential negative health equity impacts.” Coverage by NC Health News here.
- What’s law got to do with it? If by “it” you mean AI, the answer is plenty. In a new report from the Sheppard Mullin law firm, Carolyn Metnick, JD, and colleagues look back on 2024 and ahead to 2025. As one would expect, they keep an eye trained on the legal considerations of governmental AI oversight—and the absence, to date, thereof. “In light of a lack of cohesive federal framework for AI regulation, industry groups such as the Coalition for Health AI and others have stepped in to fill the gap, providing frameworks for responsible AI use,” Metnick and co-authors write. “[T]he focus on AI utilization in light of the California Act, as well as other state laws and federal actions, is likely just the tip of the spear in terms of AI-related regulation that will develop in the healthcare space.” The report is available in full for free.
- A word to wise healthcare IT leaders: Think hard about how you’ll integrate AI solutions into clinician workflows such that the technology makes friends rather than resisters. After all, any selected solution may be wonderful, but if it’s implemented poorly, the adopting organization “might as well have done nothing at all.” That’s from two healthcare strategists at CDW. “Most healthcare organizations have limited budgets; therefore, some AI tools will make the cut while others won’t,” they write in HealthTech magazine. “Tools that don’t solve an existing problem or provide some form of return on the money being spent will be a lower priority for an organization, which may choose to do what it’s always done instead.”
- Is AI adoption a marathon or a sprint? It’s both at once. So say IBM researchers after surveying 1,500 executives. Big Blue finds responding organizations speeding into the AI age on both conventional and generative fronts—while slowly but surely letting the technology permeate all functions in the enterprise to some degree. “For example, 88% use AI to a moderate or significant extent in demand forecasting, 87% for HR help desks, 84% in creating and managing trade promotions, and 81% in inventory and order management,” the authors write. “But over the next 12 months, companies are keen on expanding to more sophisticated uses that require more complex system integrations and collaboration.”
- Recent research in the news:
- Funding news of note:
- From AIin.Healthcare’s news partners: