AI tools developed for healthcare invariably require significant customization and are not off-the-shelf products, according to Zafar Chaudry, chief digital officer and director of AI and information at Seattle Children's. For this reason, contracts between hospitals and AI developers should include risk sharing, so that hospitals are not solely responsible for paying for tools that fail to deliver results, according to MedCity News.
Chaudry pointed to the global dialogue around the governance and regulation of AI in healthcare over the past year. "It's not just about technology. People and processes are a big problem in all of this," he warned.
Seattle Children's chief information and AI officer has weighed in on some of the industry's most important talking points: AI ethics, safety, and transparency. Organizations such as the Coalition for Health AI (CHAI) and VALID AI have dedicated themselves to developing AI governance frameworks.
AI tools are not working as quick, off-the-shelf solutions, Chaudry says: "I haven't seen a lot of real, tangible use cases that [AI companies] are actually solving. There are a few, but many are very much of the 'If we tell you what the problem is, we'll build the solution for you' type."
That's why Chaudry advocates that hospitals sign contracts with AI developers that include risk sharing by both parties. That won't be an easy goal to accomplish, however, as most AI developers want to be compensated on a time-and-materials basis.
"The problem with a time-and-materials contract is that if I have a great idea and you agree to build it for me, why wouldn't you want to build it over a long period of time, since the bill is always going to increase? It's very difficult to control the costs of this," Chaudry explained. He concluded that the ideal arrangement would be a contract in which the hospital pays for targets met, and only if the product performs well.
Currently, hospitals' contracts with technology companies do not guarantee that the product they are investing in will deliver the intended result. "If you want a blue car and they give you a red car, would you just take the red car? You would say, 'Hey, there's something wrong here.' But with AI in healthcare, it seems to be, 'Well, we'll give you more or less what you want and that's it,'" Chaudry said.