Medical Imaging / Polyp Detection and Segmentation


Updated 2022-12-01

Assessing generalisability of deep learning-based polyp detection and segmentation methods through a computer vision challenge

Authors: Sharib Ali, Noha Ghatwary, Debesh Jha, Ece Isik-Polat, Gorkem Polat, Chen Yang, Wuyang Li, Adrian Galdran, Miguel-Ángel González Ballester, Vajira Thambawita, Steven Hicks, Sahadev Poudel, Sang-Woong Lee, Ziyi Jin, Tianyuan Gan, ChengHui Yu, JiangPeng Yan, Doyeob Yeo, Hyunseok Lee, Nikhil Kumar Tomar, Mahmood Haithmi, Amr Ahmed, Michael A. Riegler, Christian Daul, Pål Halvorsen, Jens Rittscher, Osama E. Salem, Dominique Lamarque, Renato Cannizzaro, Stefano Realdon, Thomas de Lange, James E. East

Polyps are well-known cancer precursors identified by colonoscopy. However, variability in their size, location, and surface largely affects their identification, localisation, and characterisation. Moreover, colonoscopic surveillance and removal of polyps (referred to as polypectomy) are highly operator-dependent procedures. There is a high missed-detection rate and incomplete removal of colonic polyps due to their variable nature, the difficulty of delineating the abnormality, high recurrence rates, and the anatomical topography of the colon. There have been several developments in realising automated methods for both detection and segmentation of these polyps using machine learning. However, the major drawback of most of these methods is their limited ability to generalise to out-of-sample, unseen datasets from different centres, modalities, and acquisition systems. To test this hypothesis rigorously, we curated a multi-centre and multi-population dataset acquired from multiple colonoscopy systems and challenged teams of machine learning experts to develop robust automated detection and segmentation methods as part of our crowd-sourced Endoscopic computer vision challenge (EndoCV) 2021. In this paper, we analyse the detection results of the four top teams (among seven) and the segmentation results of the five top teams (among 16). Our analyses demonstrate that the top-ranking teams concentrated on accuracy (i.e., > 80% overall Dice score on different validation sets) over the real-time performance required for clinical applicability. We further dissect the methods and provide an experiment-based hypothesis that reveals the need for improved generalisability to tackle the diversity present in multi-centre datasets.
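The challenge ranks segmentation quality by the overall Dice score mentioned above. As a minimal sketch (not from the paper; NumPy-based, with an `eps` smoothing term as a common implementation choice), the Dice coefficient on binary masks can be computed like this:

```python
import numpy as np

def dice_score(pred: np.ndarray, target: np.ndarray, eps: float = 1e-7) -> float:
    """Dice similarity coefficient between two binary masks."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

# Toy example: two partially overlapping square "polyp" masks.
a = np.zeros((8, 8)); a[2:6, 2:6] = 1
b = np.zeros((8, 8)); b[3:7, 3:7] = 1
print(dice_score(a, b))
```

Here each mask covers 16 pixels and their overlap is 9 pixels, so the Dice score is 2·9/(16+16) = 0.5625.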
PDF 26 pages

Click here to view paper screenshots

LAPFormer: A Light and Accurate Polyp Segmentation Transformer

Authors: Mai Nguyen, Tung Thanh Bui, Quan Van Nguyen, Thanh Tung Nguyen, Toan Van Pham

Polyp segmentation remains a difficult problem due to the large variety of polyp shapes and of scanning and labeling modalities, which prevents deep learning models from generalizing well on unseen data. However, Transformer-based approaches have recently achieved remarkable performance, extracting global context better than CNN-based architectures and thus generalizing better. To leverage this strength of Transformers, we propose a new encoder-decoder model named LAPFormer, which uses a hierarchical Transformer encoder to better extract global features, combined with our novel CNN (Convolutional Neural Network) decoder for capturing the local appearance of polyps. Our proposed decoder contains a progressive feature fusion module designed to fuse features from upper and lower scales and make multi-scale features more correlative. Besides, we also use a feature refinement module and a feature selection module for processing features. We test our model on five popular benchmark datasets for polyp segmentation: Kvasir, CVC-ClinicDB, CVC-ColonDB, CVC-T, and ETIS-Larib.
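A common way to realise the kind of upper-to-lower-scale fusion the decoder description suggests is to upsample the coarse feature map and project the concatenation with a 1×1 convolution. The sketch below is a generic, hypothetical PyTorch version (module name and layer choices are assumptions, not LAPFormer's actual design):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ProgressiveFusion(nn.Module):
    """Hypothetical sketch: fuse a coarse (upper-scale) feature map with a
    finer (lower-scale) one by bilinear upsampling and 1x1-conv projection."""
    def __init__(self, coarse_ch: int, fine_ch: int, out_ch: int):
        super().__init__()
        self.proj = nn.Conv2d(coarse_ch + fine_ch, out_ch, kernel_size=1)

    def forward(self, coarse: torch.Tensor, fine: torch.Tensor) -> torch.Tensor:
        # Upsample coarse features to the finer spatial resolution.
        coarse = F.interpolate(coarse, size=fine.shape[2:], mode="bilinear",
                               align_corners=False)
        # Concatenate along channels and project back down.
        return self.proj(torch.cat([coarse, fine], dim=1))

fuse = ProgressiveFusion(coarse_ch=64, fine_ch=32, out_ch=32)
out = fuse(torch.randn(1, 64, 8, 8), torch.randn(1, 32, 16, 16))
print(out.shape)
```

The fused output keeps the finer map's resolution (here 16×16), so successive applications progressively propagate global context from deep encoder stages toward the full-resolution prediction.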
PDF 7 pages, 7 figures, under review at ACL 2023

Click here to view paper screenshots

Augmenting Knowledge Distillation With Peer-To-Peer Mutual Learning For Model Compression

Authors: Usma Niyaz, Deepti R. Bathula

Knowledge distillation (KD) is an effective model compression technique in which a compact student network is taught to mimic the behavior of a complex and highly trained teacher network. In contrast, Mutual Learning (ML) provides an alternative strategy where multiple simple student networks benefit from sharing knowledge, even in the absence of a powerful but static teacher network. Motivated by these findings, we propose a single-teacher, multi-student framework that leverages both KD and ML to achieve better performance. Furthermore, an online distillation strategy is utilized to train the teacher and students simultaneously. To evaluate the performance of the proposed approach, extensive experiments were conducted using three different versions of teacher-student networks on benchmark biomedical classification (MSI vs. MSS) and object detection (polyp detection) tasks. The ensemble of student networks trained in the proposed manner achieved better results than ensembles of students trained using KD or ML individually, establishing the benefit of augmenting knowledge transfer from teacher to students with peer-to-peer learning between students.
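Combining KD and ML typically means each student minimises a cross-entropy term plus temperature-scaled KL terms toward both the teacher and a peer student. The following is a generic sketch of such a combined objective (the function name, weights `alpha`/`beta`, and temperature `T` are illustrative assumptions, not the paper's exact formulation):

```python
import torch
import torch.nn.functional as F

def kd_ml_loss(student_logits, peer_logits, teacher_logits, labels,
               T: float = 4.0, alpha: float = 0.5, beta: float = 0.5):
    """Hypothetical combined objective: supervised cross-entropy
    + KD from the teacher + mutual learning from a peer student."""
    ce = F.cross_entropy(student_logits, labels)
    # Temperature-scaled KL toward the (detached) teacher distribution.
    kd = F.kl_div(F.log_softmax(student_logits / T, dim=1),
                  F.softmax(teacher_logits.detach() / T, dim=1),
                  reduction="batchmean") * T * T
    # Same form, but toward a peer student's distribution (mutual learning).
    ml = F.kl_div(F.log_softmax(student_logits / T, dim=1),
                  F.softmax(peer_logits.detach() / T, dim=1),
                  reduction="batchmean") * T * T
    return ce + alpha * kd + beta * ml

s, p, t = torch.randn(4, 2), torch.randn(4, 2), torch.randn(4, 2)
y = torch.randint(0, 2, (4,))
loss = kd_ml_loss(s, p, t, y)
print(float(loss))
```

In an online-distillation setting, the teacher and all students would each compute such a loss in the same training step rather than distilling from a frozen, pre-trained teacher.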
PDF Comment: changed the format of the paper

Click here to view paper screenshots

Stack of discriminative autoencoders for multiclass anomaly detection in endoscopy images

Authors: Mohammad Reza Mohebbian, Khan A. Wahid, Paul Babyn

Wireless Capsule Endoscopy (WCE) helps physicians examine the gastrointestinal (GI) tract noninvasively. Few studies address the pathological assessment of endoscopy images in a multiclass setting; most are based on binary anomaly detection or aim to detect a specific type of anomaly. Multiclass anomaly detection is challenging, especially when the dataset is poorly sampled or imbalanced. Many available datasets in the endoscopy field, such as KID2, suffer from class imbalance, which makes it difficult to train a high-performance model. Additionally, increasing the number of classes makes classification more difficult. We propose a multiclass classification algorithm that is extensible to any number of classes and can handle class imbalance. The proposed method uses multiple autoencoders, each trained on one class to extract the features that best discriminate it from the other classes. The loss function of the autoencoders is based on reconstruction, compactness, distance from other classes, and Kullback-Leibler (KL) divergence. The extracted features are clustered and then classified using an ensemble of support vector data descriptors. A total of 1,778 normal, 227 inflammation, 303 vascular, and 44 polyp images from the KID2 dataset are used for evaluation. The entire algorithm was run 5 times and achieved F1-scores of 96.3 ± 0.2% and 85.0 ± 0.4% on the test set for binary and multiclass anomaly detection, respectively. The impact of each step of the algorithm was investigated through various ablation studies, and the results were compared with published works. The suggested approach is a competitive option for detecting multiclass anomalies in the GI field.
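The four loss terms the abstract names (reconstruction, compactness, distance from other classes, KL divergence) can be sketched as a single weighted objective. The version below is a hypothetical reconstruction, not the paper's exact formulation: the margin, weights, and the VAE-style KL toward a standard normal latent are all assumptions.

```python
import torch
import torch.nn.functional as F

def discriminative_ae_loss(x, x_hat, z, other_centers, mu, logvar,
                           margin: float = 5.0, w=(1.0, 0.1, 0.1, 0.01)):
    """Hypothetical four-term objective in the spirit described above."""
    recon = F.mse_loss(x_hat, x)                      # reconstruction error
    center = z.mean(dim=0, keepdim=True)
    compact = ((z - center) ** 2).sum(dim=1).mean()   # intra-class compactness
    # Push latent codes at least `margin` away from other classes' centers.
    separation = F.relu(margin - torch.cdist(z, other_centers)).mean()
    # KL regulariser toward a standard normal latent (VAE-style assumption).
    kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
    return w[0] * recon + w[1] * compact + w[2] * separation + w[3] * kl

x, x_hat = torch.rand(8, 64), torch.rand(8, 64)
z, mu, logvar = torch.randn(8, 16), torch.randn(8, 16), torch.zeros(8, 16)
other_centers = torch.randn(3, 16)   # latent centers of the other classes
loss = discriminative_ae_loss(x, x_hat, z, other_centers, mu, logvar)
print(float(loss))
```

Training one such autoencoder per class yields class-specific latent codes that are compact for their own class yet pushed away from the others, which is what makes the downstream one-class descriptors separable.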
PDF

Click here to view paper screenshots

Subclass Knowledge Distillation with Known Subclass Labels

Authors: Ahmad Sajedi, Yuri A. Lawryshyn, Konstantinos N. Plataniotis

This work introduces a novel knowledge distillation framework for classification tasks where information on existing subclasses is available and taken into consideration. In classification tasks with a small number of classes or binary detection, the amount of information transferred from the teacher to the student is restricted, thus limiting the utility of knowledge distillation. Performance can be improved by leveraging information on possible subclasses within the classes. To that end, we propose the so-called Subclass Knowledge Distillation (SKD), a process of transferring the knowledge of predicted subclasses from a teacher to a smaller student. Meaningful information that is not in the teacher’s class logits but exists in subclass logits (e.g., similarities within classes) is conveyed to the student through SKD, which then boosts the student’s performance. Analytically, we measure how much extra information the teacher can provide the student via SKD to demonstrate the efficacy of our work. The framework developed is evaluated in a clinical application, namely colorectal polyp binary classification. It is a practical problem with two classes and a number of subclasses per class. In this application, clinician-provided annotations are used to define subclasses based on the annotation label’s variability in a curriculum style of learning. A lightweight, low-complexity student trained with the SKD framework achieves an F1-score of 85.05%, a 1.47% improvement and a 2.10% gain over the student trained with and without conventional knowledge distillation, respectively. The 2.10% F1-score gap between students trained with and without SKD can be explained by the extra subclass knowledge, i.e., the extra 0.4656 label bits per sample that the teacher can transfer in our experiment.
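The "extra label bits per sample" figure can be understood as the gap between the entropy of the subclass labels and the entropy of the class labels. A toy sketch of that accounting (the split below is a made-up illustration, not the paper's data; the paper's empirical figure is 0.4656 bits):

```python
import math
from collections import Counter

def entropy_bits(labels):
    """Shannon entropy (in bits) of an empirical label distribution."""
    n = len(labels)
    return -sum(c / n * math.log2(c / n) for c in Counter(labels).values())

# Hypothetical toy split: 2 classes, each divided into 2 equally likely subclasses.
classes    = [0, 0, 0, 0, 1, 1, 1, 1]
subclasses = [0, 0, 1, 1, 2, 2, 3, 3]
extra_bits = entropy_bits(subclasses) - entropy_bits(classes)
print(extra_bits)  # 1.0 extra label bit per sample in this toy split
```

With unevenly sized subclasses, as in real clinical annotations, the gap shrinks below one bit, which is consistent with a fractional value like 0.4656.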
PDF Published in IVMSP22 Conference. arXiv admin note: substantial text overlap with arXiv:2109.05587

Click here to view paper screenshots

On the Efficiency of Subclass Knowledge Distillation in Classification Tasks

Authors: Ahmad Sajedi, Konstantinos N. Plataniotis

This work introduces a novel knowledge distillation framework for classification tasks where information on existing subclasses is available and taken into consideration. In classification tasks with a small number of classes or binary detection (two classes), the amount of information transferred from the teacher to the student network is restricted, thus limiting the utility of knowledge distillation. Performance can be improved by leveraging information about possible subclasses within the available classes in the classification task. To that end, we propose the so-called Subclass Knowledge Distillation (SKD) framework, which is the process of transferring the subclasses’ prediction knowledge from a large teacher model into a smaller student one. Through SKD, additional meaningful information which is not in the teacher’s class logits but exists in subclasses (e.g., similarities inside classes) is conveyed to the student and boosts its performance. Mathematically, we measure how many extra information bits the teacher can provide for the student via the SKD framework. The framework developed is evaluated in a clinical application, namely colorectal polyp binary classification. In this application, clinician-provided annotations are used to define subclasses based on the annotation label’s variability in a curriculum style of learning. A lightweight, low-complexity student trained with the proposed framework achieves an F1-score of 85.05%, an improvement of 2.14% and a 1.49% gain over the student trained without and with conventional knowledge distillation, respectively. These results show that the extra subclass knowledge (i.e., 0.4656 label bits per training sample in our experiment) can provide more information about the teacher’s generalization, and therefore SKD can use this additional information to increase the student’s performance.
PDF Comment: changing the material of the paper; it will be resubmitted after correction

Click here to view paper screenshots

Author: 木子已
Copyright notice: Unless otherwise stated, all articles on this blog are licensed under CC BY 4.0. Please credit 木子已 when reposting!