Will 2024 be the year of AI governance?

Federal regulators are trying to create guidelines for the ethical use of AI in a number of industries. Will healthcare cooperate or stake its own claim to governance?

As we enter the new year, the hot topic on every healthcare executive’s mind is AI. And one of the biggest questions surrounding the technology centers on who will regulate it.

The Biden administration set the tone last October with an executive order that shifted much of the federal regulatory burden to the Department of Health and Human Services and the Office of the National Coordinator for Health IT (ONC), led by Micky Tripathi. HHS then set an aggressive schedule, finalizing a rule in December that calls for more transparency in AI tools used in clinical settings by the end of 2024.

While much of the action so far has focused on technology vendors designing AI tools, health system leaders are closely watching how the federal government will affect their use of the technology. Many health systems develop and use their own tools and platforms and pledge to uphold ethical standards in all clinical applications.

“We have a culture of accountability that goes hand in hand with agile innovation,” said Ashley Beecy, MD, FACC, medical director of AI operations at NewYork-Presbyterian Hospital and assistant professor of medicine at Weill Cornell Medical College, in a HealthLeaders interview last year, before Biden’s executive order. “Health systems have a unique opportunity” to establish their own standards for the proper use of AI.

Tarun Kapoor, MD, MBA, senior vice president and chief digital transformation officer at New Jersey-based Virtua Health, says healthcare organizations have the clinical training needed to develop effective and sustainable AI governance. They know how the technology will be used in healthcare and can focus on nuances that federal regulators might miss.

“We have to get a lot better at [regulating AI] because we are the ones using it,” he says.

Like many (if not all) health systems using AI these days, Virtua Health has a policy that every AI service has a human in the loop, meaning no action is taken on AI-generated content until it has been reviewed by at least one flesh-and-blood supervisor. At this stage, when most projects are focused on back-office tasks, it’s a safe bet; but as the technology makes its way into clinical decision-making, that extra step may prove critical.

“Always put doctors in front of these decisions,” says Siva Namasivayam, CEO of Cohere Health, a Boston-based company that focuses on using AI to improve the prior authorization process. He says the technology should enhance the physician’s role rather than replace it, an approach he calls “getting to the yes factor.”

“We never use AI to say no,” he adds.

But who gets to make these decisions? The Biden administration wants to be part of that chain of command and is pushing for a collaborative environment, having secured voluntary commitments from more than three dozen health systems, payer organizations and technology vendors to use AI responsibly. The agreement centers on a new catchphrase for ethical use: FAVES, which stands for Fair, Appropriate, Valid, Effective and Safe.

The healthcare industry, still smarting from having electronic medical records forced upon it before it was ready to adopt them, is holding up well so far. But in many hospitals, the C-suite is facing pressure to take ownership of AI and make it a priority.

“You govern yourself at a level above the law,” says Kapoor.

He notes that health systems like Virtua Health are being very careful in how they use the technology, not just greenlighting every potential use.

“Just because you can say anything and create your own [projects] doesn’t mean I’m going to let you say whatever you want and do it,” he points out.

Kapoor says healthcare providers understand the drawbacks of AI technology, and the risks it poses, better than anyone outside the industry. Health systems like Virtua Health address these challenges with steering committees that include not only clinical leaders but also representatives from the financial, IT, legal and operational areas of the organization.

[Read also: Are Health Systems Mature Enough to Use AI Properly?]

Arlen Meyers, president and CEO of the Society of Physician Entrepreneurs and professor emeritus at the University of Colorado School of Medicine and the Colorado School of Public Health, says the industry needs to step up and show leadership at a time when AI governance is still very much in flux. He notes that hundreds of healthcare organizations have established dedicated centers of excellence for AI, and some have pledged to develop ethics and standards for its use. Consumers could also get in on the action by helping to shape an “AI Bill of Rights” for patients.

“Right now, nobody trusts the government or the industry to regulate this,” he says. “When you look at who has to regulate what … the industry has to put the safeguards in place.”

The next year will be key to establishing AI governance as more and more health systems adopt the technology and push it beyond administrative uses and into clinical applications. As the Biden administration moves to speed up regulation through HHS and the ONC, many wonder whether the healthcare industry will wait that long and allow a federal agency to propose the first rules.

Others wonder what it will take to create regulations that will actually work. A look at the current debate over interoperability and data blocking standards makes it clear that just because rules are made doesn’t mean they will be readily adopted.

“At the end of the day, you follow the money,” says Meyers, who expects that healthcare and government will have to come to some kind of agreement to create something lasting. “This is how [rules] will be done.”

Eric Wicklund is associate content manager and senior editor for innovation, technology, telehealth, supply chain and pharma for HealthLeaders.
