Researchers from the Regenstrief Institute found that artificial intelligence (AI)-driven healthcare needs to be thoroughly tested and continuously monitored in order to prevent harmful errors due to bias. Healthcare algorithms have the potential to positively transform medical decision-making and treatment, and also have the potential to cause major harm.
The team highlighted the importance of algorithmovigilance in addressing the biases inherent in healthcare algorithms and their deployment. Algorithmovigilance encompasses the scientific methods and activities relating to the evaluation, monitoring, understanding, and prevention of adverse effects of algorithms in healthcare.
It is important that healthcare officials treat algorithms the same way they would a new pharmaceutical or medical device. An algorithm's efficacy and safety need to be established before it is used in a clinical setting.
Algorithms often carry additional complexities and variations that affect whether their influence is positive or negative. These include how a given algorithm is deployed, who interacts with it, and the clinical workflows in which those interactions occur.
An algorithm's performance changes as it is deployed with different data, settings, and human-computer interactions. This is why it is important for medical professionals to continuously monitor algorithms after deployment.
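To make the idea of ongoing monitoring concrete, here is a minimal sketch of one kind of algorithmovigilance check: comparing a deployed model's accuracy across patient subgroups and flagging the deployment when the gap exceeds a tolerance. All function names, data, and thresholds are hypothetical illustrations, not methods from the study.

```python
# Hypothetical sketch of a subgroup-performance audit for a deployed
# clinical algorithm. Records are (subgroup, prediction, true_label) triples.

def subgroup_accuracy(records):
    """Compute per-subgroup accuracy from (group, prediction, label) records."""
    totals, correct = {}, {}
    for group, pred, label in records:
        totals[group] = totals.get(group, 0) + 1
        correct[group] = correct.get(group, 0) + (pred == label)
    return {g: correct[g] / totals[g] for g in totals}

def flag_disparity(records, max_gap=0.10):
    """Flag the deployment if subgroup accuracies differ by more than max_gap."""
    acc = subgroup_accuracy(records)
    gap = max(acc.values()) - min(acc.values())
    return {"accuracies": acc, "gap": gap, "flagged": gap > max_gap}

# Illustrative data: the model performs worse for group "B" than group "A".
records = [
    ("A", 1, 1), ("A", 0, 0), ("A", 1, 1), ("A", 0, 0),  # group A: 4/4 correct
    ("B", 1, 0), ("B", 0, 0), ("B", 1, 1), ("B", 0, 1),  # group B: 2/4 correct
]
report = flag_disparity(records)
```

In practice such a check would run on a schedule against fresh production data, since the disparities of interest often emerge only after deployment, as case mix and workflows shift.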
A paper on this research was published in JAMA Network Open.