Model & training

Rana Khaled, Maha Helal, Omar Alfarghaly, Omnia Mokhtar, Abeer Elkorany, Hebatalla El Kassas, Aly Fahmy

An EfficientNetB0 [17], pre-trained on ImageNet [18], was used as the starting model in our experiments. We fine-tuned the model by removing the final layer and adding a layer with three output classes (Normal, Benign, Malignant); all weights were left trainable during fine-tuning. Categorical cross-entropy was used as the loss function with the Adam optimizer [19], as shown in Eq. (1), where CE(b) is the cross-entropy loss for batch b, C is the number of classes, N is the number of images in the batch, y is the ground truth, and ŷ is the prediction:

$$CE(b) = -\frac{1}{N}\sum_{i=1}^{N}\sum_{c=1}^{C} y_{i,c}\,\log \hat{y}_{i,c} \qquad (1)$$

Training used a batch size of 16, a decaying learning rate starting at 1e-3, and a dropout layer [20] with a drop probability of 0.8 applied to the final visual features before the classifier.
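For concreteness, the following is a minimal TensorFlow/Keras sketch of this setup. The input resolution (224×224, EfficientNetB0's default), the use of global average pooling to obtain the final visual features, and the exact decay schedule (ExponentialDecay with illustrative decay_steps and decay_rate) are assumptions not specified in the text above.

```python
import tensorflow as tf

# EfficientNetB0 backbone pre-trained on ImageNet; the original classification
# head is removed and global average pooling yields the final visual features.
# NOTE: input size and pooling choice are assumptions, not stated in the paper.
base = tf.keras.applications.EfficientNetB0(
    include_top=False, weights="imagenet",
    input_shape=(224, 224, 3), pooling="avg")
base.trainable = True  # all weights are left trainable during fine-tuning

model = tf.keras.Sequential([
    base,
    tf.keras.layers.Dropout(0.8),                    # drop prob. 0.8 on final features
    tf.keras.layers.Dense(3, activation="softmax"),  # Normal / Benign / Malignant
])

# Decaying learning rate starting at 1e-3; the specific schedule below
# (exponential decay, decay_steps=1000, decay_rate=0.9) is hypothetical.
lr = tf.keras.optimizers.schedules.ExponentialDecay(
    initial_learning_rate=1e-3, decay_steps=1000, decay_rate=0.9)

model.compile(
    optimizer=tf.keras.optimizers.Adam(learning_rate=lr),
    loss="categorical_crossentropy",  # Eq. (1), averaged over the batch
    metrics=["accuracy"])

# Example training call with one-hot labels and the stated batch size:
# model.fit(x_train, y_train, batch_size=16, epochs=...)
```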
