A COVID‐19 Visual Diagnosis Model Based on Deep Learning and GradCAM.

  • Additional Information
    • Subject Terms:
    • Abstract:
      Recently, the whole world was hit by the COVID‐19 pandemic, which led to health emergencies everywhere. During the peak of the early waves of the pandemic, medical and healthcare departments were overwhelmed by a number of COVID‐19 cases that exceeded their capacity. Therefore, new rules and techniques are urgently required to help in receiving, filtering, and diagnosing patients. One of the decisive steps in the fight against COVID‐19 is the ability to detect patients early enough and selectively put them under special care. Symptoms of this disease can be observed in chest X‐rays. However, it is sometimes difficult and tricky to differentiate "only" pneumonia patients from COVID‐19 patients. Machine learning can be very helpful in carrying out this task. In this paper, we tackle the problem of COVID‐19 diagnostics following a data‐centric approach. For this purpose, we construct a diversified dataset of chest X‐ray images from publicly available datasets and by applying data augmentation techniques. Then, we employ a transfer learning approach based on a pre‐trained convolutional neural network (DenseNet‐169) to detect COVID‐19 in chest X‐ray images. In addition, we employ Gradient‐weighted Class Activation Mapping (GradCAM) to provide visual inspection and explanation of the predictions made by our deep learning model. The results were evaluated against various metrics such as sensitivity, specificity, Positive Predictive Value (PPV), Negative Predictive Value (NPV), and the confusion matrix. The resulting model achieved an average detection accuracy close to 98.82%. © 2022 Institute of Electrical Engineers of Japan. Published by Wiley Periodicals LLC. [ABSTRACT FROM AUTHOR]
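
      The abstract names two reproducible pieces of machinery: the GradCAM heatmap (each feature map weighted by the spatial mean of its class-score gradients, summed, then passed through a ReLU) and the evaluation metrics derived from a confusion matrix. The sketch below illustrates both on synthetic arrays; it is a minimal illustration of the general formulas, not the paper's actual DenseNet‐169 pipeline, and the array shapes and function names are assumptions for the example.

      ```python
      import numpy as np

      def grad_cam(activations, gradients):
          """GradCAM heatmap from one convolutional layer.

          activations, gradients: arrays of shape (K, H, W), where K is the
          number of feature maps. In practice these come from a forward and
          backward pass of the network; here they are synthetic inputs.
          """
          # alpha_k: global-average-pool the gradients over the spatial dims
          weights = gradients.mean(axis=(1, 2))
          # weighted sum of feature maps: sum_k alpha_k * A_k
          cam = np.tensordot(weights, activations, axes=1)
          # ReLU keeps only features with a positive influence on the class
          cam = np.maximum(cam, 0)
          if cam.max() > 0:
              cam = cam / cam.max()  # normalize to [0, 1] for visualization
          return cam

      def binary_metrics(tp, fp, fn, tn):
          """Sensitivity, specificity, PPV, and NPV from confusion-matrix counts."""
          return {
              "sensitivity": tp / (tp + fn),  # true positive rate
              "specificity": tn / (tn + fp),  # true negative rate
              "ppv": tp / (tp + fp),          # positive predictive value
              "npv": tn / (tn + fn),          # negative predictive value
          }
      ```

      In a real pipeline the heatmap would be upsampled to the input X‐ray's resolution and overlaid on it, which is what makes GradCAM useful for visually explaining which lung regions drove a COVID‐19 prediction.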
    • Copyright:
      Copyright of IEEJ Transactions on Electrical & Electronic Engineering is the property of Wiley-Blackwell and its content may not be copied or emailed to multiple sites or posted to a listserv without the copyright holder's express written permission. However, users may print, download, or email articles for individual use. This abstract may be abridged. No warranty is given about the accuracy of the copy. Users should refer to the original published version of the material for the full abstract. (Copyright applies to all Abstracts.)