Keynote Lectures

Dra. María del Pilar Gómez Gil

Improving anatomical plausibility and auditing fairness in deep segmentation networks

Enzo Ferrante


In recent months we have seen an impressive increase in the capabilities offered by AI-based software applications, most of them powered by models known as "generative AI". This boom has raised high expectations among consumers in virtually every area of activity, for example the economy, education, communication, and entertainment. However, enthusiasm for its use has been accompanied by examples of failed implementations and by challenges faced during design, which forces us to reflect on its current real capacity to satisfy basic software design principles such as reliability, explainability of decision-making, robustness, persistence, and sustainability. In this talk we will discuss some basic ideas about how this technology works, its scope and limitations, the progress that has been made in regulating its use, and advice on how to venture into implementing it without compromising basic design principles.


The evolution of deep segmentation networks has made it possible to enrich large medical imaging datasets with automatically generated anatomical segmentation masks. In this talk we will discuss recent methods we proposed to improve anatomical plausibility in deep segmentation networks. By improving anatomical plausibility we mean ensuring that the segmentation masks produced by a network are constrained to the actual shape and appearance of organs. We will briefly discuss some of our studies [1,2,3] which use autoencoders to learn low-dimensional embeddings of anatomical structures, and which propose different ways in which these embeddings can be incorporated into deep learning models for segmentation and registration. The problem is further complicated by recent studies indicating potential biases in AI-based medical imaging models related to gender, age, and ethnicity [4,5]. Here we will share insights from our journey in developing CheXmask, a large-scale database of x-ray anatomical segmentations [6]. We will delve into the strategies we implemented for automatic quality control and the methods we formulated for unsupervised bias discovery in the absence of ground-truth annotations [7].

[1] Learning deformable registration of medical images with anatomical constraints. Mansilla L, Milone D, Ferrante E. Neural Networks (2020).
[2] Post-DAE: Anatomically plausible segmentation via post-processing with denoising autoencoders. Larrazabal A, Martinez C, Glocker B, Ferrante E. IEEE Transactions on Medical Imaging (2020); conference version in MICCAI 2019.
[3] HybridGNet – Improving anatomical plausibility in image segmentation via hybrid graph neural networks: applications to chest x-ray image analysis. Gaggion N, Mansilla L, Mosquera C, Milone D, Ferrante E. IEEE Transactions on Medical Imaging (2022); conference version in MICCAI 2021.
[4] Gender imbalance in medical imaging datasets produces biased classifiers for computer-aided diagnosis. Larrazabal AJ, Nieto N, Peterson V, Milone DH, Ferrante E. Proceedings of the National Academy of Sciences (2020).
[5] Addressing fairness in artificial intelligence for medical imaging. Ricci Lara MA, Echeveste R, Ferrante E. Nature Communications (2022).
[6] CheXmask: a large-scale dataset of anatomical segmentation masks for multi-center chest x-ray images. Gaggion N, Mosquera C, Mansilla L, Aineseder M, Milone DH, Ferrante E. arXiv preprint (2023).
[7] Unsupervised bias discovery in medical image segmentation. Gaggion N, Echeveste R, Mansilla L, Milone DH, Ferrante E. MICCAI FAIMI Workshop (2023).
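As a toy illustration of the autoencoder idea behind these works (this sketch is ours, not the authors' code), one can project an implausible segmentation mask onto a low-dimensional shape manifold learned from plausible examples. Here a linear autoencoder (PCA via SVD) over synthetic disk masks stands in for the learned anatomical prior, in the spirit of the Post-DAE post-processing step:

```python
import numpy as np

rng = np.random.default_rng(0)

def disk_mask(radius, size=32):
    """Binary disk mask: a toy stand-in for an anatomical structure."""
    yy, xx = np.mgrid[:size, :size]
    return ((xx - size / 2) ** 2 + (yy - size / 2) ** 2 <= radius ** 2).astype(float)

# "Training set" of plausible shapes (disks of varying radius), flattened.
train = np.stack([disk_mask(r).ravel() for r in np.linspace(6, 12, 20)])
mean = train.mean(axis=0)

# Linear autoencoder via SVD: encoder and decoder share the top-k components.
_, _, Vt = np.linalg.svd(train - mean, full_matrices=False)
components = Vt[:4]  # k = 4 latent dimensions

def project(mask):
    """Encode then decode: snap a mask onto the learned shape manifold."""
    z = components @ (mask.ravel() - mean)             # encode
    return (components.T @ z + mean).reshape(32, 32)   # decode

# Corrupt a plausible mask with salt-and-pepper noise ("implausible" output).
clean = disk_mask(9)
noisy = np.clip(clean + (rng.random(clean.shape) < 0.25), 0, 1)

repaired = project(noisy)
err_noisy = np.abs(noisy - clean).mean()
err_repaired = np.abs(repaired - clean).mean()
print(f"noisy err={err_noisy:.3f}, repaired err={err_repaired:.3f}")
```

Because the off-manifold noise is largely orthogonal to the learned components, the encode-decode round trip moves the corrupted mask back toward a plausible shape; the papers above apply this idea with deep (denoising and graph) autoencoders on real anatomy.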


Sponsors