Updated 2022-03-29
Meta Ordinal Regression Forest for Medical Image Classification with Ordinal Labels
Authors: Yiming Lei, Haiping Zhu, Junping Zhang, Hongming Shan
The performance of medical image classification has been enhanced by deep convolutional neural networks (CNNs), which are typically trained with cross-entropy (CE) loss. However, when labels exhibit an intrinsic ordinal property, e.g., the progression from benign to malignant tumor, CE loss cannot take such ordinal information into account to allow for better generalization. To improve model generalization with ordinal information, we propose a novel meta ordinal regression forest (MORF) method for medical image classification with ordinal labels, which learns the ordinal relationship through the combination of a convolutional neural network and a differentiable forest in a meta-learning framework. The merits of the proposed MORF come from the following two components: a tree-wise weighting net (TWW-Net) and a grouped feature selection (GFS) module. First, the TWW-Net assigns each tree in the forest a specific weight that is mapped from the classification loss of the corresponding tree. Hence, all the trees possess varying weights, which helps alleviate tree-wise prediction variance. Second, the GFS module enables a dynamic forest rather than the fixed one used previously, allowing for random feature perturbation. During training, we alternately optimize the parameters of the CNN backbone and the TWW-Net in the meta-learning framework by calculating the Hessian matrix. Experimental results on two medical image classification datasets with ordinal labels, i.e., LIDC-IDRI and the Breast Ultrasound Dataset, demonstrate the superior performance of our MORF method over existing state-of-the-art methods.
PDF
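The tree-wise weighting idea can be illustrated with a toy sketch. This is not the paper's TWW-Net (which learns the loss-to-weight mapping with a network trained via meta-learning); here the mapping is simply a softmax over negative per-tree losses, so trees with lower classification loss contribute more to the forest prediction. All names and values are illustrative.

```python
import numpy as np

def tree_weighted_prediction(tree_probs, tree_losses, temperature=1.0):
    """Combine per-tree class probabilities with loss-derived weights.

    Stand-in for the paper's learned TWW-Net: the weight of each tree is
    a softmax over its negative classification loss, so low-loss trees
    dominate the ensemble prediction.
    """
    tree_probs = np.asarray(tree_probs, dtype=float)    # (n_trees, n_classes)
    tree_losses = np.asarray(tree_losses, dtype=float)  # (n_trees,)
    logits = -tree_losses / temperature
    logits -= logits.max()                              # numerical stability
    weights = np.exp(logits) / np.exp(logits).sum()     # sums to 1
    return weights @ tree_probs                         # (n_classes,)

probs = [[0.7, 0.3], [0.4, 0.6], [0.9, 0.1]]  # 3 trees, 2 classes
losses = [0.2, 1.5, 0.1]                      # tree 2 is least reliable
pred = tree_weighted_prediction(probs, losses)
```

Because the high-loss middle tree is down-weighted, the forest prediction leans toward the class favored by the two low-loss trees.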
Convolutional neural network based on transfer learning for breast cancer screening
Authors: Hussin Ragb, Redha Ali, Elforjani Jera, Nagi Buaossa
Breast cancer is the most common cancer in the world and the most prevalent cause of death among women worldwide. Nevertheless, it is also one of the most treatable malignancies if detected early. In this paper, a deep convolutional neural network-based algorithm is proposed to aid in accurately identifying breast cancer from ultrasonic images. In this algorithm, several neural networks are fused in a parallel architecture to perform the classification process, and voting criteria are applied in the final classification decision between the candidate object classes, where the output of each neural network represents a single vote. Several experiments were conducted on the breast ultrasound dataset consisting of 537 benign, 360 malignant, and 133 normal images. These experiments show promising results and demonstrate the capability of the proposed model to outperform many state-of-the-art algorithms on several measures. Using k-fold cross-validation and a bagging classifier ensemble, we achieved an accuracy of 99.5% and a sensitivity of 99.6%.
PDF 9 pages, 7 figures. arXiv admin note: text overlap with arXiv:2009.08831
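The voting scheme described above, where each parallel network casts one vote per image, can be sketched as plain majority voting. This is a minimal illustration, not the authors' implementation; the class names are the dataset's three categories, but the model outputs are made up.

```python
from collections import Counter

def majority_vote(predictions):
    """Majority voting over parallel classifiers.

    Each inner list holds one model's predicted label per image; the
    ensemble label for an image is the class receiving the most votes
    (ties broken by first-seen vote, per Counter.most_common ordering).
    """
    per_image = zip(*predictions)  # transpose: collect votes per image
    return [Counter(votes).most_common(1)[0][0] for votes in per_image]

votes = [
    ["benign", "malignant", "normal"],    # model A
    ["benign", "malignant", "benign"],    # model B
    ["malignant", "malignant", "normal"], # model C
]
labels = majority_vote(votes)  # one ensemble label per image
```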
BUSIS: A Benchmark for Breast Ultrasound Image Segmentation
Authors: Min Xian, Yingtao Zhang, H. D. Cheng, Fei Xu, Kuan Huang, Boyu Zhang, Jianrui Ding, Chunping Ning, Ying Wang
Breast ultrasound (BUS) image segmentation is challenging and critical for BUS Computer-Aided Diagnosis (CAD) systems. Many BUS segmentation approaches have been studied in the last two decades, but the performance of most approaches has been assessed using relatively small private datasets with different quantitative metrics, which results in discrepancies in performance comparison. Therefore, there is a pressing need to build a benchmark that compares existing methods objectively on a public dataset, determines the performance of the best breast tumor segmentation algorithms available today, and investigates which segmentation strategies are valuable in clinical practice and theoretical study. In this work, a benchmark for B-mode breast ultrasound image segmentation is presented. In the benchmark, 1) we collected 562 breast ultrasound images, prepared a software tool, and involved four radiologists in obtaining accurate annotations through standardized procedures; 2) we extensively compared the performance of sixteen state-of-the-art segmentation methods and discussed their advantages and disadvantages; 3) we proposed a set of valuable quantitative metrics to evaluate both semi-automatic and fully automatic segmentation approaches; and 4) the successful segmentation strategies and possible future improvements are discussed in detail.
PDF 27 pages, 4 figures, 3 tables
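Quantitative comparison of segmentation methods, as in point 3) above, typically rests on region-overlap metrics. The sketch below computes the standard Dice and Jaccard coefficients from binary masks; these are common choices, not necessarily the exact metric set the paper proposes.

```python
import numpy as np

def dice_and_jaccard(pred, gt):
    """Standard region-overlap metrics for binary segmentation masks.

    pred and gt must have the same shape; nonzero entries are foreground.
    Dice = 2|A∩B| / (|A|+|B|), Jaccard = |A∩B| / |A∪B|.
    """
    pred = np.asarray(pred, dtype=bool)
    gt = np.asarray(gt, dtype=bool)
    inter = np.logical_and(pred, gt).sum()
    dice = 2.0 * inter / (pred.sum() + gt.sum())
    jaccard = inter / np.logical_or(pred, gt).sum()
    return dice, jaccard

# Tiny 2x3 example: 2 pixels agree, 1 extra in each mask.
pred = [[1, 1, 0], [0, 1, 0]]
gt   = [[1, 0, 0], [0, 1, 1]]
d, j = dice_and_jaccard(pred, gt)
```

Note that Dice is always at least as large as Jaccard on the same pair of masks, which is one reason benchmarks report both.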
Vision Transformer for Classification of Breast Ultrasound Images
Authors: Behnaz Gheflati, Hassan Rivaz
Medical ultrasound (US) imaging has become a prominent modality for breast cancer imaging due to its ease of use, low cost, and safety. In the past decade, convolutional neural networks (CNNs) have emerged as the method of choice in vision applications and have shown excellent potential in automatic classification of US images. Despite their success, their restricted local receptive field limits their ability to learn global context information. Recently, Vision Transformer (ViT) designs based on self-attention between image patches have shown great potential as an alternative to CNNs. In this study, for the first time, we utilize ViT to classify breast US images using different augmentation strategies. The results are reported as classification accuracy and Area Under the Curve (AUC) metrics, and the performance is compared with state-of-the-art CNNs. The results indicate that the ViT models perform comparably to, or even better than, CNNs in the classification of breast US images.
PDF 5 pages, 2 figures, Under review in EMBC
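The "self-attention between image patches" that distinguishes ViT from CNNs starts by splitting the image into non-overlapping patches that become the token sequence. A minimal sketch of that first step (a real ViT would follow it with a learned linear projection, a class token, and position embeddings):

```python
import numpy as np

def image_to_patches(image, patch_size):
    """Split a 2D grayscale image into flattened non-overlapping patches.

    Returns an array of shape (num_patches, patch_size * patch_size),
    the raw token sequence a ViT's self-attention layers operate on.
    """
    img = np.asarray(image, dtype=float)
    h, w = img.shape
    p = patch_size
    assert h % p == 0 and w % p == 0, "image must tile evenly into patches"
    # Reshape to (rows, p, cols, p), then bring the two patch axes together.
    patches = img.reshape(h // p, p, w // p, p).swapaxes(1, 2)
    return patches.reshape(-1, p * p)

# A 4x4 "image" split into four 2x2 patches.
tokens = image_to_patches(np.arange(16).reshape(4, 4), 2)
```

For a 224x224 image with 16x16 patches, the same function yields 196 tokens of dimension 256, matching the sequence length typically quoted for ViT-style models at that resolution.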