Having spent the last couple of years looking at the clinical use of AI, I’ve become increasingly convinced — as have, no doubt, many of you — that AI will eventually play an important role in clinical decision-making.
However, it’s wise to question whether we have the appropriate guardrails in place before we deploy new technologies, especially ones poised to have a major impact on patient care. And according to a recent research article, we don’t yet have widely accepted standards for evaluating AI tools.
The article, which was published in the journal Science, sets forth a list of standards the authors feel should be implemented whenever AI applications are used in patient treatment:
The benefits of an AI tool should be clearly identifiable and verifiable by the FDA, in the same manner as drugs and other medical devices and technologies
The industry should create AI benchmarks that allow a given tool to be assessed appropriately for the clinical area in which it is used
Specifications for a tool’s input variables should be clear enough that multiple institutions can apply them consistently when testing new AI-based applications (a sketch of what such a specification might look like follows this list)
AI-based medical devices should be audited regularly, much as new drugs are
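To make that third point concrete, here’s a minimal Python sketch of what a published input specification might look like. Everything in it is a hypothetical illustration: the sepsis-risk model, the field names, and the allowed ranges are my own assumptions, not anything drawn from the article or from an actual vendor’s spec.

```python
# A minimal sketch of a published "input specification" for a clinical AI
# tool. The model, fields, and valid ranges below are hypothetical
# illustrations, not any vendor's actual specification.

from dataclasses import dataclass


@dataclass
class SepsisRiskModelInput:
    """Hypothetical input record for a sepsis-risk prediction tool."""
    age_years: int          # patient age in whole years
    heart_rate_bpm: float   # most recent heart rate, beats per minute
    temperature_c: float    # body temperature, degrees Celsius
    wbc_k_per_ul: float     # white blood cell count, thousands per microliter

    def validate(self) -> None:
        """Reject records outside the ranges this (hypothetical) spec allows,
        so every institution testing the tool screens its data the same way."""
        if not 0 <= self.age_years <= 120:
            raise ValueError(f"age_years out of range: {self.age_years}")
        if not 20 <= self.heart_rate_bpm <= 300:
            raise ValueError(f"heart_rate_bpm out of range: {self.heart_rate_bpm}")
        if not 25.0 <= self.temperature_c <= 45.0:
            raise ValueError(f"temperature_c out of range: {self.temperature_c}")
        if not 0.1 <= self.wbc_k_per_ul <= 500.0:
            raise ValueError(f"wbc_k_per_ul out of range: {self.wbc_k_per_ul}")


# Any institution testing the tool can validate its records identically:
record = SepsisRiskModelInput(age_years=67, heart_rate_bpm=112.0,
                              temperature_c=38.4, wbc_k_per_ul=14.2)
record.validate()  # raises ValueError if the record violates the spec
```

The point of publishing something this explicit is that two hospitals testing the same tool would format and screen their data identically, making their results comparable.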
The authors of this article are focused primarily on how healthcare AI tools should be regulated by agencies like the FDA. This is a good thing, as it never hurts to keep regulators on their toes when it comes to emerging areas of medical technology.
It would be even better to see clinicians develop their own standards for validating and deploying AI within their practices. With Amazon having invested $2 million in a research partnership with Beth Israel Deaconess Medical Center, it seems clear that forward-looking technology companies will get involved in such standardization, if for no other reason than that vendors who establish standards get a head start in the market.
But that’s not enough.
I’d like to see the AMA and specialty societies get involved in actively creating a healthcare AI evaluation process that explicitly calls for clinical input at every stage of conceptualizing, testing and rolling out AI within healthcare organizations. The AMA did publish an interesting set of articles on the future of AI in its AMA Journal of Ethics (such as this piece looking at AI’s role in detecting cancer cells), but this is just scratching the surface.
Let’s not repeat the mistakes of the initial era of EHR rollouts. While AI isn’t necessarily going to be quite as disruptive a force as EHRs once were, it will generate major changes in care delivery over time. Doctors, get your voices heard before it’s too late!