Evaluating Medical AI Models: New Barco Research at SPIE 2023


Barco, a global leader in medical visualization technology, has presented new research in the field of artificial intelligence. The paper was presented by the Barco team at the SPIE Medical Imaging Congress in San Diego (California).

AI models can be biased, as they are trained on specific sets of data or people. This means that characteristics specific to minority groups (for example, age, gender, or skin type) can be overlooked or misinterpreted by AI algorithms, which could negatively impact diagnosis and treatment for members of these groups. The study presented by the Barco team is titled “Open-source tool for model performance analysis for subpopulations”. It focuses on the research question: “How to guarantee AI model safety and effectiveness for all subgroups of the target population?”

The paper presents an open-source, customizable tool that can measure how well an AI algorithm performs. For example, it can highlight the subpopulations that are most at risk of being misinterpreted by the algorithm.
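The article does not describe the tool's API, but the core idea of subpopulation performance analysis can be illustrated with a minimal sketch: compute a metric (here, accuracy) separately for each subgroup and flag the worst-performing one. The function name and toy data below are hypothetical, not taken from the Barco tool.

```python
from collections import defaultdict

def subgroup_performance(y_true, y_pred, groups):
    """Compute accuracy per subgroup and flag the worst-performing one.

    y_true, y_pred: lists of labels; groups: a list of subgroup tags
    (e.g. age band or skin type) aligned with the predictions.
    """
    correct = defaultdict(int)
    total = defaultdict(int)
    for truth, pred, grp in zip(y_true, y_pred, groups):
        total[grp] += 1
        correct[grp] += int(truth == pred)
    accuracy = {grp: correct[grp] / total[grp] for grp in total}
    worst = min(accuracy, key=accuracy.get)
    return accuracy, worst

# Toy example: the model performs worse on the "60+" age band.
acc, worst = subgroup_performance(
    y_true=[1, 0, 1, 1, 0, 1],
    y_pred=[1, 0, 1, 0, 1, 1],
    groups=["<60", "<60", "<60", "60+", "60+", "60+"],
)
print(acc)    # {'<60': 1.0, '60+': 0.333...}
print(worst)  # '60+'
```

A real analysis would use clinically relevant metrics (sensitivity, specificity) and confidence intervals per subgroup, but the pattern of slicing one evaluation by subpopulation is the same.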

Stijn Vandewiele, Research Engineer at Barco, comments: “Artificial intelligence will certainly lead to massive advances in healthcare. However, it’s important to remember that it’s not an infallible technology. Healthcare professionals need to be able to rely on correctly trained algorithms and have checks in place for errors. We believe that this paper can contribute to ensuring the safe use of AI in the medical world.”

This research was funded through the Vivaldy project, PENTA 19021, and financially supported by the Flemish Government (Vlaio grant HBC.2019.274).
