Most hospitals and health systems are still evaluating whether AI is right for their organizations, while many others are looking to augment existing implementations of AI and machine learning.
Healthcare provider organizations that fall into either of these buckets are the target of an educational session at the upcoming HIMSS Forum, a session offering strategic guidance for adopting AI in healthcare.
Tom Hallisey is the head of digital health strategy and a board member at Columbia Memorial Health. He will speak about AI in healthcare at the 2023 HIMSS AI in Healthcare Forum, December 14-15 in San Diego, where he will moderate a panel session titled "A Strategic Guide to Incorporating AI into Your Healthcare Roadmap."
Panelists include Albert Marinez, chief analyst at the Cleveland Clinic; Tatiana Fedotova, director, global data, platforms and partnerships, at Johnson & Johnson; and Christopher Larkin, chief technology officer at Concord Technologies.
Panelists will reveal the most critical questions to ask and decisions to make at every phase of the AI journey, from build vs. buy and tool selection to ensuring AI investments are focused on maximum impact and much more.
We sat down with Hallisey to get a preview of the session and a better idea of what it takes to make the transition to AI in healthcare.
Q. What is an example of a key question to ask during the initial phase of the healthcare AI journey? And why is it important?
A. As is often the case, the most important question to ask ourselves is what problem we are trying to solve. Generative AI tools are capable of so many new and potentially valuable outcomes, but we need to have a specific measurable goal in mind to show value and scale the work.
Once we have a pilot idea, then comes the AI-specific question of whether to build, buy, or modify an existing large language model tool with internal data and rules. This decision will be based on internal capabilities, privacy and security considerations, scalability, and data and bias concerns.
The projects most likely to fail are those that seek an application for really cool new tools, and right now no tools are cooler than AI. A successful implementation takes a careful, measured approach to the type of value we seek, the types of tools we can trust, and the resources we can provide.
Q. What is one way to ensure that AI investments are targeted for maximum impact?
A. To ensure the best impact from AI investments, establish a committee to gather and prioritize ideas, guide resource selection, review pilot results, and support scaling. Be careful to include a diverse group on the committee.
Business units and end users know best where problems and inefficiencies lie and can guide planning for best impact; their involvement will be essential to success. If a project is considered too risky for a given area because this technology is still very new and not well understood, it is unlikely to succeed. It’s better to start elsewhere as you educate staff about the capabilities and potential issues with AI tools.
However, it is also important to have senior leaders in the selection process to ensure that decisions are based on current leading organizational strategies and most important issues. There are many use cases for AI tools that can add some value, but could take away important resources to attack the most pressing problems of the day.
We also need to choose designs and tools that are mature enough for proper integration into existing or updated workflows. One-off or niche projects will not bring big results. Look at ChatGPT: even its web usage is down from its peak. Tools must be integrated into operations to transform the workflow and deliver real value.
Q. What is one tip for ensuring long-term success with a healthcare AI investment?
A. AI tools are often so new that long-term success can be difficult to achieve. As LLMs or clinical algorithms continue to be used, data is updated, demographics change, and results may vary.
A recent study even pointed out how algorithms can build on their own obsolescence, as interventions change the underlying data on which they are built and therefore their ability to predict.
Plans should be put in place to continuously measure the results of each AI tool and intervention. What works in one site or population may not work in another, and as I noted, what works today may not work next year. The new AI regulations from the White House executive order seek to address these concerns, as do ONC's recently proposed rules on algorithms integrated into EHR clinical decision support.