An EfficientNetB0 [17], pre-trained on ImageNet [18], was used as the starting model in our experiments. We fine-tuned the model by removing the final layer and adding a layer with three output classes (Normal, Benign, Malignant); all weights were left trainable during fine-tuning. Categorical cross-entropy was used as the loss function with the Adam optimizer [19], as shown in Eq. 1, where CE(b) is the cross-entropy loss for batch b, C is the number of classes, N is the number of images in the batch, y_{i,c} is the ground truth for image i and class c, and \hat{y}_{i,c} is the corresponding prediction:

$$\mathrm{CE}(b) = -\frac{1}{N}\sum_{i=1}^{N}\sum_{c=1}^{C} y_{i,c}\,\log \hat{y}_{i,c} \qquad (1)$$

A batch size of 16 was used with a decaying learning rate starting at 1e-3, and a dropout layer [20] with a drop probability of 0.8 was applied to the final visual features before the classifier.
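The sketch below illustrates this setup in TensorFlow/Keras. The framework choice, the input resolution (224×224), and the exact decay schedule are assumptions; the text specifies only the backbone, the three-class head, the dropout rate, the optimizer, and the initial learning rate.

```python
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers

# Backbone pre-trained on ImageNet, original classification head removed;
# global average pooling yields the final visual feature vector.
backbone = keras.applications.EfficientNetB0(
    include_top=False, weights="imagenet", pooling="avg"
)
backbone.trainable = True  # all weights left trainable for fine-tuning

# New head: dropout (p = 0.8) on the pooled features, then a
# 3-class softmax (Normal / Benign / Malignant).
inputs = keras.Input(shape=(224, 224, 3))  # input size is an assumption
x = backbone(inputs)
x = layers.Dropout(0.8)(x)
outputs = layers.Dense(3, activation="softmax")(x)
model = keras.Model(inputs, outputs)

# Decaying learning rate starting at 1e-3; the exponential schedule
# and its parameters are assumptions -- the text only says "decaying".
lr_schedule = keras.optimizers.schedules.ExponentialDecay(
    initial_learning_rate=1e-3, decay_steps=1000, decay_rate=0.96
)
model.compile(
    optimizer=keras.optimizers.Adam(learning_rate=lr_schedule),
    loss="categorical_crossentropy",  # Eq. 1, averaged over the batch
    metrics=["accuracy"],
)

# Training would then proceed with the stated batch size, e.g.:
# model.fit(train_ds, validation_data=val_ds, batch_size=16, epochs=...)
```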