A Deep Learning-Based System to Assist Radiologists in Detecting COVID-19 Disease from Chest Computed Tomography Images

The COVID-19 pandemic has had a significant negative impact on the world in various ways. In an effort to mitigate the negative effects of the pandemic, this study proposes a deep learning approach for the automatic detection of COVID-19 from chest computed tomography (CT) images. This would enable healthcare professionals to more efficiently identify the presence of the virus and provide appropriate care and support to infected individuals. The proposed deep learning approach is based on binary classification and utilizes members of the pre-trained EfficientNet model family. These models were trained on a dataset of real patient images, called the EFSCH-19 dataset, to classify chest CT images as positive or negative for COVID-19. The results of the predictions made on the test images showed that all models achieved accuracy values of over 98%. Among these models, the EfficientNet-B2 model performed the best, with an accuracy of 99.75%, sensitivity of 99.50%, specificity of 100%, and an F1 score of 99.75%. In addition to the high accuracy achieved in the classification of chest CT images using the proposed pre-trained deep learning models, the gradient-weighted class activation mapping (Grad-CAM) method was also applied to further understand and interpret the model's predictions.


Introduction
Throughout history, our world has encountered various pandemics, including the Black Death, cholera, and the Spanish flu. Currently, the global community is grappling with the COVID-19 pandemic, which originated in Wuhan, Hubei province of China (Tekin, 2021). On December 31, 2019, the World Health Organization (WHO) was informed of a cluster of pneumonia cases of unknown cause.
Samples were collected from the affected patients and, upon analyzing the genome sequences of these samples, the WHO announced on January 12, 2020 that a novel coronavirus, subsequently named SARS-CoV-2, was responsible for the outbreak and the associated disease, COVID-19 (Issever et al., 2020). SARS-CoV-2 is believed to have initially been transmitted from animals to humans due to its zoonotic characteristics (Ahmad et al., 2020). The virus spread rapidly across the globe following its emergence in Wuhan, leading to the declaration of a pandemic on March 11, 2020 (Hua and Shaw, 2020). As of July 2022, over 580 million people had been infected worldwide, with more than 6 million deaths attributed to COVID-19 (URL-1). The most prominent symptoms of COVID-19 include fever, dry cough, respiratory issues, severe sore throat, and diarrhea. Other commonly reported symptoms include loss of taste and smell, joint pain, and nasal congestion. In severe cases, which comprise 2%-3% of all COVID-19 cases, multiorgan dysfunction can lead to death (Madabhavi et al., 2020). Although vaccines have proven to be beneficial, they are not a definitive solution to preventing COVID-19; there have been instances of vaccinated individuals contracting the disease (Wadman, 2021). Therefore, measures such as isolating infected individuals from healthy ones, raising awareness, and promoting the use of masks remain important in the effort to curb virus transmission. Currently, COVID-19 detection and the differentiation between healthy and infected individuals rely on molecular detection methods and radiological imaging techniques (Giri et al., 2021). Molecular detection methods, such as real-time reverse transcription polymerase chain reaction (RT-PCR), are used to identify the virus. RT-PCR, which is also utilized in the detection of the Middle East Respiratory Syndrome Coronavirus (MERS-CoV), detects the presence of the virus and identifies it by analyzing the ribonucleic acid (RNA) component of the virus (Corman et al., 2020).
To perform the test, a cotton swab is used to collect throat and nasal samples from the individual. The time required to obtain results from RT-PCR tests can range from one hour to several days, depending on the specific PCR version. The reliability of the RT-PCR result depends on various factors, including the sample collection method, the types of primers and probes used, the inclusion of appropriate controls, and the temperature (Wu et al., 2020). However, the RT-PCR test may be less efficient and reliable due to limitations such as variations in the viral load thresholds used by different kit manufacturers, shortages in kit supply, and the limited availability of skilled health professionals to administer the test (Zali, 2021). Therefore, it may be useful to utilize alternative methods in the early stages of the disease in addition to, or instead of, the RT-PCR test.
To address the limitations of the RT-PCR test, it is advisable to examine radiological images prior to the test (Kommos et al., 2020). The lungs are often significantly affected in COVID-19, and SARS-CoV-2 infection can cause severe, potentially permanent damage to lung tissue (Ghaderzadeh et al., 2021). Therefore, lung tissue is often used to detect or monitor COVID-19. Chest radiography and chest computed tomography (CT) scanning are the methods commonly employed for radiologically diagnosing COVID-19 (Ng et al., 2020). Lung imaging is often the initial approach recommended by medical and healthcare protocols due to its expediency and simplicity (Fields et al., 2021). CT devices, which offer detailed anatomical resolution, have been used frequently since the onset of the pandemic to monitor the effects of COVID-19 (Gundel et al., 2021). Radiologists examine the chest CT scans obtained with these devices and create a report with recommendations for the patient's diagnosis. This reporting process is crucial and is therefore carried out with great care. During the current pandemic, radiologists have been faced with interpreting a number of radiological images for COVID-19 diagnosis that far exceeds standard limits (Chartrand et al., 2017). As a result, the time required to produce the patient's report and initiate appropriate treatment increases significantly. In addition, the reporting process becomes prone to errors due to human factors such as fatigue from extended work hours. Computers, which offer high processing capacity and are unaffected by human factors, can assist radiologists in this process (Arsalan et al., 2020).
Artificial intelligence (AI)-based systems have been widely utilized in the detection of various diseases and in monitoring treatment progress (Owais et al., 2019). Research involving the use and development of deep learning models for the classification of medical images by disease is now common (Mainak et al., 2019). AI-based systems assist physicians in tasks that can greatly impact human life, such as the detection of stroke, the classification of patients with kidney stone issues, and the identification of cancer cells (Chin et al., 2017; Yildirim et al., 2021; Khan et al., 2019). During the COVID-19 pandemic, numerous AI-based COVID-19 detection studies have likewise been proposed to minimize social and economic damage and to support physicians. The main aim of such classification studies is to determine, based on the available data, whether a case is COVID-19 positive. Bogu and Snyder (2021) collected heart rate data from real individuals using a smartwatch device, with durations ranging from 15 seconds to 1 minute. They divided this data into two classes based on COVID-19 disease status: infected and non-infected. Using deep learning methods, they classified individuals with COVID-19 based on their heart rate with 92% accuracy. Pahar et al. (2021) classified cough recordings of individuals with positive and negative COVID-19 diagnoses, collected using smartphones. The dataset used in their study included both natural and forced cough samples. The researchers found that cough recordings from individuals with a positive COVID-19 diagnosis were 15-20% shorter than the other recordings. When input into seven different machine learning classifiers, the cough recordings were classified with the highest success rate, 95.33% accuracy, by the ResNet-50 classifier. In addition to wearable sensors and smartphone technology, radiological images are frequently used in studies on the detection of COVID-19. Wang et al.
(2020) proposed a model called COVID-Net, a deep convolutional neural network (CNN) specifically designed for the detection of COVID-19 cases from chest radiographs. The model was trained using 13,975 samples collected from publicly available datasets, which were subjected to pre-processing steps such as cropping, rotating, and zooming. COVID-Net was trained for 22 epochs using ImageNet weights and Adam optimization with a batch size of 64. In the testing stage, COVID-Net was compared to the VGG-19 and ResNet-50 models trained on the same dataset and was found to be the best-performing model, with an accuracy rate of 93.3%. Luz et al. (2022) classified X-ray images containing COVID-19 findings using models from the EfficientNet family. The models were trained using publicly available datasets collected from various sources and evaluated with cross-dataset methods. The dataset, which includes samples from three classes (COVID-19 positive, pneumonia, and normal), consists of samples with different resolutions. Among the models trained for 20 epochs, EfficientNet-B3 was the best performing, with an accuracy rate of 93.9%. Song et al.
(2021) developed a system that can assist clinicians in detecting COVID-19-infected patients from CT images. Their model, called DRE-Net, was trained on CT images from two hospitals in China. When the model's performance was evaluated after training, the researchers reported that DRE-Net achieved 86% accuracy. The gradient-weighted class activation mapping (Grad-CAM) method was used to show which areas of the image the model focuses on during classification. Bozkurt (2022) presents a framework called HANDEFU for the automatic early diagnosis of COVID-19 using chest X-ray images. HANDEFU is an interactive software tool that allows the user to select from a library of feature extraction techniques and classification methods to build their own diagnostic model. The authors evaluated the performance of 27 different models built with HANDEFU on an open-access dataset for COVID-19 diagnosis. The best-performing model, local binary pattern (LBP) + support vector machine (SVM), achieved an accuracy of 99.36%. This study is noteworthy for developing a single framework that incorporates image preprocessing and diverse feature extraction and classification techniques. Marques et al. (2020) developed a CNN based on the EfficientNet architecture for COVID-19 diagnosis. The CNN was evaluated using stratified 10-fold cross-validation with images of COVID-19 patients, pneumonia patients, and healthy individuals. The results of binary classification (COVID-19 versus normal) showed an average accuracy of 99.62%, recall of 99.63%, precision of 99.64%, and F1 score of 99.62%. The results of multi-class classification (COVID-19, pneumonia, and normal) showed an average accuracy of 96.70%, recall of 96.69%, precision of 97.54%, and F1 score of 97.11%. The authors suggest that this CNN model could serve as a medical decision support system to assist healthcare professionals in COVID-19 diagnosis. Wang et al.
(2021) presented a method for detecting COVID-19 from chest CT images using a deep learning algorithm. Chest CT images from three different hospitals were labeled by two radiologists as 325 COVID-19 positives and 740 COVID-19 negatives, resulting in a dataset of 1,065 samples sized at 299×299 px. With the learning rate set to 0.01, the Minception model was trained for 15,000 epochs. Upon completing the performance evaluation, the researchers reported that the model achieved 89.5% accuracy.
These classification studies aim to interpret the relevant radiological images quickly and reliably, independent of human factors. In the literature, deep learning models are typically trained using publicly available datasets. As a result, issues such as mislabeled images and unbalanced class distributions in these datasets are often overlooked. To avoid the negative effects of public datasets, a dedicated dataset containing real patient scans was created for this study. This dataset, called EFSCH-19, includes chest CT samples with both positive and negative COVID-19 findings and does not contain any personal patient information.
The primary objective of this study is to use a deep learning model to automatically classify COVID-19 and normal findings on chest CT images, in order to facilitate early detection of the disease and provide physicians with the necessary information. The main contributions of this study are as follows:
• Evaluation of the performance of pre-trained EfficientNet models on a novel COVID-19 dataset.
• Demonstration of high classification performance of chest CT images on a uniformly distributed dataset without data augmentation.
• Use of the Grad-CAM algorithm to visualize predictions and increase confidence in the clinical use of deep learning models.
• Utilization of deep learning modules to perform tasks currently carried out manually by experts, resulting in reduced time and margin of error.
The remainder of this paper is organized as follows: Section 2 presents the proposed method, including the dataset used, the pre-trained deep learning models, the training phases, pre-processing, the classification performance measures, and the Grad-CAM algorithm. Section 3 presents the parameters and environments used in the training phase, the numerical values recorded during training, and the test-phase predictions and performance values. The discussion and conclusion are presented in Section 4 and Section 5, respectively.

Materials and Methods
A classifier system was designed that uses a deep learning model to predict whether chest CT images are positive or negative for COVID-19 via binary classification. Figure 1 illustrates the block representation of the proposed approach.


EFSCH-19 Dataset
This dataset was created with the approval of the ethics committee of Fırat University, Turkey.
It is a binary classification dataset comprising two classes, labeled COVID-19 positive and COVID-19 negative. The dataset includes a total of 4,000 chest CT images, with 2,000 samples in each class. All images have a resolution of 768×768 px and are in the Digital Imaging and Communications in Medicine (DICOM) data format. To protect patient privacy, the layers containing personal information were removed from the images before they were provided to us. In radiological images, COVID-19 findings can be classified into two main categories: typical and atypical (Comert and Kiral, 2020). Typical findings include ground-glass opacity, consolidation, and air bubbles. The EFSCH-19 dataset includes chest CT images of real patients with typical findings labeled as positive.
These samples and the findings they contain have been verified by a radiologist.Figure 2 shows a random selection of samples from the EFSCH-19 dataset.

EfficientNet Models
CNNs are typically designed under a fixed resource budget, and the number of layers is increased as additional resources become available in order to enhance accuracy. Increasing the depth or width of the network, or using a higher input resolution for training and evaluation, can also enhance accuracy. However, applying these techniques individually may not yield adequate performance gains. The EfficientNet model family consists of several models derived using a compound scaling method. In addition to the base model, these models differ in the size of their input layers and the number of parameters they possess. The scaling methods responsible for these differences are depicted in Figure 5. The EfficientNet models are derived by jointly scaling the depth, width, and resolution of the network, adjusting the input size and number of parameters so that efficiency remains constant. The base model of the EfficientNet family, along with the variations derived from it, is outlined in Table 2. The ImageNet dataset, comprising 1.2 million training images and 50,000 validation images from 1,000 classes, has consistently yielded top results for EfficientNet models in the annual ImageNet Large Scale Visual Recognition Challenge (ILSVRC) (Russakovsky et al., 2015). Overall, EfficientNet models have exhibited high accuracy on the ImageNet dataset while being comparatively efficient in terms of the number of parameters and computational resources required. For these reasons, EfficientNet models were employed as the classifier deep learning models in this study.
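As a rough illustration of the compound-scaling idea behind the family, the sketch below uses the base coefficients reported in the original EfficientNet paper (α = 1.2 for depth, β = 1.1 for width, γ = 1.15 for resolution); the `scale_factors` helper and the φ value are illustrative, not taken from this study.

```python
# Compound-scaling sketch based on the coefficients from the EfficientNet
# paper (Tan & Le, 2019): alpha scales depth, beta scales width, gamma
# scales resolution, jointly constrained so that increasing the compound
# coefficient phi by one roughly doubles FLOPS.
ALPHA, BETA, GAMMA = 1.2, 1.1, 1.15

def scale_factors(phi: float) -> dict:
    """Return depth/width/resolution multipliers for a given phi."""
    return {
        "depth": ALPHA ** phi,
        "width": BETA ** phi,
        "resolution": GAMMA ** phi,
    }

# The constraint alpha * beta^2 * gamma^2 ~= 2 keeps FLOPS growth near 2^phi.
flops_growth = ALPHA * BETA**2 * GAMMA**2
print(round(flops_growth, 3))  # 1.92, close to the target of 2
print(scale_factors(1))
```

Larger family members (B1 through B7) correspond to larger φ values, which is why their input resolutions and parameter counts grow together rather than along a single axis.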

Training Phases
The chest CT images in the EFSCH-19 dataset cannot be used directly in the training phase due to their resolution and format. Therefore, pre-processing steps must be applied to the samples. Following pre-processing, all models from EfficientNet-B0 to EfficientNet-B7 were trained with the parameters held constant in order to determine the most suitable model for our study. Once the training phases were completed, various performance metrics were examined and the most appropriate EfficientNet model was selected. The block diagram of the method employed for training the models is provided in Figure 6.

Pre-processing
All images in the EFSCH-19 dataset, which are in the DICOM data format, were converted to the JPEG format using the ImageJ application (Collins, 2007). This conversion was carried out in a nearly lossless manner. Figure 7 illustrates the result before and after the conversion, at four times zoom. After conversion to JPEG, resizing was performed: the 768×768 px images were resized according to the input size of the model to be trained. The resizing process is illustrated in Figure 8. The numerical distribution of the samples after the data split is given in Table 3; this information is independent of the name of the trained model and applies to all eight models.
• True Positive (TP) is when an image with the COVID-19 positive label is predicted as COVID-19 positive.
• False Positive (FP) is when an image with the COVID-19 negative label is predicted as COVID-19 positive.
• False Negative (FN) is when an image with the COVID-19 positive label is predicted as COVID-19 negative.
• True Negative (TN) is when an image with the COVID-19 negative label is predicted as COVID-19 negative.
The confusion matrix can be visually expressed by placing the values of TP, FP, FN, and TN in a 2×2 matrix.
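From these four counts, the evaluation metrics used in this study can be computed as in the following sketch; the counts shown are hypothetical values chosen only to be consistent with the EfficientNet-B2 results quoted in the abstract, not actual experiment logs.

```python
# Hedged sketch: confusion-matrix metrics for a binary COVID-19 classifier.
def binary_metrics(tp: int, fp: int, fn: int, tn: int) -> dict:
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    sensitivity = tp / (tp + fn)      # recall on the positive class
    specificity = tn / (tn + fp)      # recall on the negative class
    precision = tp / (tp + fp)
    f1 = 2 * precision * sensitivity / (precision + sensitivity)
    return {"accuracy": accuracy, "sensitivity": sensitivity,
            "specificity": specificity, "precision": precision, "f1": f1}

# Hypothetical example: 200 positive and 200 negative test images with a
# single false negative, which reproduces the reported B2-level figures.
m = binary_metrics(tp=199, fp=0, fn=1, tn=200)
print({k: round(v, 4) for k, v in m.items()})
```

Note that with a perfectly balanced test set, a single misclassified image already costs 0.25 percentage points of accuracy, which is why the reported values cluster just below 100%.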
In addition to standardized statistical evaluation metrics, it is also important to compare the areas of focus for the deep learning model in its predictions.

Gradient-weighted Class Activation Mapping
The Grad-CAM algorithm visualizes, using a heat-map technique (Selvaraju et al., 2017), the areas of the image that the classifier deep learning model considers when predicting a class for the input image. In this way, it can be determined whether the areas on which the model focuses in correctly predicted images are, in fact, the areas that play an active role in determining the class. The proposed approach to the Grad-CAM algorithm is shown in Figure 10. The Grad-CAM algorithm operates on the feature maps produced by the last convolution layer of a CNN; therefore, the last convolution layer of the classifier model must be referenced by the algorithm. After referencing the last convolution layer, a heat map is created using gradients to highlight the points of the image important for the class label.
The yellow and red areas in the generated heat maps indicate the pixel areas that the classifier deep learning model focuses on when making predictions.
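A minimal sketch of the heat-map step described above, assuming the last-layer feature maps and the class-score gradients have already been extracted from the network (random arrays stand in for real model outputs here):

```python
import numpy as np

def grad_cam(feature_maps: np.ndarray, gradients: np.ndarray) -> np.ndarray:
    """feature_maps, gradients: (H, W, C) arrays -> (H, W) heat map in [0, 1]."""
    # Global-average-pool the gradients to get one importance weight per channel.
    weights = gradients.mean(axis=(0, 1))                         # (C,)
    # Weighted sum of feature maps, then ReLU to keep class-positive evidence.
    cam = np.maximum((feature_maps * weights).sum(axis=-1), 0.0)  # (H, W)
    # Normalize to [0, 1] so the map can be rendered as a heat-map overlay.
    if cam.max() > 0:
        cam /= cam.max()
    return cam

# Random stand-ins for the last convolution layer's outputs and gradients.
rng = np.random.default_rng(0)
fmap = rng.random((7, 7, 32))
grad = rng.random((7, 7, 32))
heat = grad_cam(fmap, grad)
print(heat.shape)  # (7, 7)
```

In practice the (H, W) map is upsampled to the input resolution and overlaid on the CT slice, which produces the colored regions discussed above.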

Experimental Results
This section presents the results of the EfficientNet models trained for the detection of COVID-19 from chest CT images. The following subsections provide the evaluation metric values for the experimental findings.

Experimental Setups

Results
The use of an early stopping function aims to prevent overfitting in machine learning models.
Overfitting occurs when a model is too complex and performs exceptionally well on the training data but fails to generalize to new, unseen data. Upon examining the accuracy graphs of the models, it was observed that they performed well on both the training data and unseen data. This indicates that the models learned patterns in the training data that generalize well, and are therefore able to accurately predict the output for new inputs; this is generally a good indication of a well-performing model. However, to fully assess the performance of the models, each model was given only the images reserved for the testing phase as input, and each model made predictions for the corresponding test images.
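The patience-based stopping rule described here can be sketched in a few lines; the validation-accuracy sequence and the `train_with_early_stopping` helper are illustrative, not the study's actual training history:

```python
# Pure-Python sketch of patience-based early stopping: stop when the
# monitored validation accuracy fails to improve on its best value for
# `patience` consecutive epochs.
def train_with_early_stopping(val_accuracies, patience=5):
    best, best_epoch, wait = float("-inf"), -1, 0
    for epoch, acc in enumerate(val_accuracies):
        if acc > best:
            best, best_epoch, wait = acc, epoch, 0
        else:
            wait += 1
            if wait >= patience:
                break  # patience exhausted; best weights are those of best_epoch
    return best, best_epoch, epoch

# Made-up per-epoch validation accuracies for demonstration.
history = [0.90, 0.94, 0.96, 0.95, 0.96, 0.95, 0.94, 0.96, 0.95, 0.93]
best, best_epoch, stopped_at = train_with_early_stopping(history, patience=5)
print(best, best_epoch, stopped_at)  # 0.96 2 7
```

With patience 5, training halts five epochs after the last improvement, matching the monitoring rule described in the training setup.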

The confusion matrices resulting from these predictions are shown in Figure 13, and the corresponding performance values are presented in Table 4. Performance metrics such as accuracy, precision, and recall are useful for evaluating the overall performance of a CNN, but they do not provide detailed information on how the CNN arrives at its predictions. By visualizing the regions of the input that are most pertinent to a particular prediction, the Grad-CAM algorithm enables us to comprehend the specific features the model relies on. This can be beneficial for debugging and improving the performance of the CNN, as well as for understanding its internal functioning. The Grad-CAM algorithm was applied to randomly selected samples from the test images during the prediction process, and the results are depicted in Figure 14.

Discussion
Radiological images, such as CT scans and X-rays, can be used to detect COVID-19. However, detecting COVID-19 from radiological images can be challenging because the disease can present differently in different individuals and may not always produce visible abnormalities.
Furthermore, the manual detection process is very time- and resource-intensive. It is therefore important to be able to detect COVID-19 automatically, without human intervention. Consequently, automatic detection of COVID-19 is a highly popular research topic, and there have been numerous deep learning-based studies on the subject using radiological images. Upon examination of the results in Table 6, it is observed that all models achieve a success rate of over 90%. However, with an accuracy rate of 96.51%, the EfficientNet-B3 model outperforms the other models in this setting. While the EfficientNet-B2 architecture was employed in our study, it may not be optimal for all tasks and datasets; alternative model architectures may yield better performance in certain cases.
The limitations of this study are as follows:
• The dataset comprises 4,000 images collected from a single CT device, which may not be sufficient to adequately capture the variability and complexity of the findings analyzed.
• Identifying slices with signs of COVID-19 from chest CT scans is time-consuming.
• Our study only tested the performance of eight pre-trained models on the task of classifying CT images.
• The robustness of the models in the face of changes in lighting, background, or other factors that may affect the appearance of COVID-19 findings in images has not been evaluated.
• The model was only evaluated on a single split of the data, rather than using k-fold cross-validation to assess its performance more thoroughly.
The results of this study demonstrate the effectiveness of the EfficientNet-B2 model for the automated detection of COVID-19 from chest CT images, but several directions of future work could further improve the performance and generalizability of the model. One potential direction is to expand the dataset used for training and evaluation. This study used a novel dataset of 4,000 images, but further increasing the dataset size could improve the robustness and generalizability of the model. This could involve collecting additional images, as well as increasing the diversity of the images in terms of factors such as patient demographics, clinical characteristics, and imaging modalities. Another avenue for future research is to evaluate the performance of alternative model architectures. While the EfficientNet-B2 model performed well in this study, it may not be optimal for all tasks and datasets. Future work could test other model architectures, such as those with a greater number of layers and parameters, to determine whether they offer any benefits over EfficientNet-B2. This could involve evaluating the models on the same dataset used in this study, as well as on other datasets, to assess the generalizability of the results. Additionally, it would be beneficial to investigate methods for addressing any potential biases or weaknesses identified through explainability techniques. In this study, Grad-CAM was utilized to provide insight into the model's decision-making process, but further investigation may reveal other biases or weaknesses that could be mitigated. Finally, it may be interesting to explore the use of the model in other contexts or applications. While the present study demonstrated the effectiveness of the EfficientNet-B2 model for the automated detection of COVID-19 from chest CT images, it may be valuable to test the model on other imaging modalities, such as chest X-rays or MRI, or to use it to detect other diseases or conditions.

Conclusion
COVID-19 has negatively impacted our society in various ways. The classical laboratory methods used for detecting the disease are insufficient due to their disadvantages.
By using radiological imaging techniques instead of classical laboratory methods, COVID-19 can be detected at an early stage. However, specialists are needed to detect the disease from radiological images. Because of the speed and ease of transmission, millions of people have been infected with COVID-19, and far more patients than normal are assigned to the specialists who examine the radiological images. Given the limits on working hours and other human factors, this workload significantly increases the time needed to diagnose each patient whose radiological image is taken. While their disease goes undetected, potentially infected individuals risk transmitting it to healthy people. Computers, which can process images quickly and are not affected by human factors, can assist experts in the examination of radiological images. In this study, a decision support system is proposed that can automatically detect COVID-19 from chest CT images. The Grad-CAM algorithm was used to explain the detections by showing which areas of the chest CT images the model focuses on. Eight different EfficientNet models trained on a uniformly distributed dataset called EFSCH-19 can automatically classify chest CT images; among them, the EfficientNet-B2 model showed the highest performance on the test images. The margin of error is expected to be close to zero in a system designed to support decisions that can directly affect human life. Therefore, improvements such as expanding the scope of the dataset used for training and increasing the number of samples it contains are planned for future studies. With the use of such systems in the healthcare sector, diseases can be detected early and erroneous diagnoses caused by inexperienced personnel can be prevented.

Figure 1 .
Figure 1. Block representation of the proposed approach.

Figure 6 .
Figure 6. The proposed training method.
Confusion matrix-based performance measures are utilized to evaluate the performance of the deep learning model in classification studies. The confusion matrix provides information about the relationship between the class label predicted by the deep learning model for the input image and its known, actual class label. Deep learning models designed for binary classification can predict only two different classes at their output, so up to four different situations can arise in class prediction. These situations are as follows:

Figure 9 .
Figure 9. Structure of the confusion matrix.

Figure 10 .
Figure 10. The proposed approach to the Grad-CAM algorithm.
In this study, members of the EfficientNet model family were trained using the Keras library implemented in Python. The models were initialized with ImageNet weights rather than random weights, and their final layers were modified to produce output in the range of 0 to 1 via the softmax activation function (a summary of the model is available at: https://github.com/oguzhankatar/EfficientNet-B2.git). The Adam optimization algorithm was used for training, with the early stopping function active, a maximum of 50 epochs, a constant batch size of 16, a learning rate of 0.001, and cross-entropy loss as the value to be minimized. The validation accuracy was tracked during training, and the early stopping function was triggered if the validation accuracy did not surpass its highest value for five consecutive epochs. These training processes were conducted in the Google Colab environment, with the aim of achieving the highest possible classification accuracy.
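The softmax output described here can be illustrated with a small sketch; the two logits (one per class) are made-up values for demonstration:

```python
import math

# Sketch of the modified output layer's behaviour: softmax maps the two
# class logits (COVID-19 positive / negative) to probabilities in [0, 1]
# that sum to 1.
def softmax(logits):
    m = max(logits)                       # subtract max for numerical stability
    exps = [math.exp(x - m) for x in logits]
    s = sum(exps)
    return [e / s for e in exps]

probs = softmax([2.0, -1.0])              # hypothetical [positive, negative] logits
print([round(p, 4) for p in probs])
```

The predicted class is simply the index of the larger probability, and thresholding at 0.5 reproduces the binary decision used for the confusion matrices.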
The early stopping function monitors the performance of the model on a validation set during training and stops the training process if the model's performance on the validation set begins to decrease while its performance on the training set continues to improve. This indicates that the model is starting to overfit, and the early stopping function ensures that the final model is the one that performed best on the validation set. In this study, the training of each model was set to a maximum of 50 epochs, but the early stopping function terminated training at earlier epochs. The weights were saved after the training phases, and the loss graphs for each model are provided in Figure 11. Analyzing loss graphs helps in understanding the efficacy of a model's learning process and in identifying potential issues affecting its performance. As the model improves its accuracy in predicting outputs, the loss value is expected to decrease during training; conversely, an absence of decrease, or an increase, in the loss value may indicate ineffective learning and warrant further investigation.

Figure 11 .Figure 12 .
Figure 11. The loss graphs of each model.
Figure 12. The accuracy graphs of each model.

Figure 13 .
Figure 13. The confusion matrices of each model.

Figure 14 .
Figure 14. The Grad-CAM outputs of each model.

Table 3 .
Numerical distribution after the data splitting.

Table 4 .
The performance values of EfficientNet models.

Table 5
CNNs, and end-to-end training of a new CNN model. The deep features were classified using the SVM classifier with various kernel functions, including linear, quadratic, cubic, and Gaussian. A dataset comprising 180 COVID-19 and 200 normal chest X-ray images was utilized for experimentation. The results demonstrated the potential of deep learning in detecting COVID-19 from chest X-ray images, with the highest accuracy score of 92.6% achieved by the fine-tuned ResNet50 model. Ozturk et al. (2020) proposed a new model for automatic COVID-19 detection using raw chest X-ray images for both binary (COVID vs. No-Findings) and multi-class classification. All 349 COVID-19 and 397 non-COVID CT images were used as input to the EfficientNet models trained with the EFSCH-19 dataset samples. The performance values resulting from the classification of these 746 images by each model are presented in Table 6, and a graphical representation of these values is shown in Figure 15. Zhou et al. (2021) employed a newly designed lightweight deep learning model; the authors used a repeated 10-fold holdout validation method for training, validation, and testing, and achieved their highest classification accuracy of 98.91% with a transfer-learned DarkNet19.

Table 6 .
Performance values of the EfficientNet models tested with the COVID-CT-Dataset.
Figure 15. Bar chart of the performance of the EfficientNet models.