In our work, the cross-entropy loss function, Eq. (19), is employed for all deep models. The cross-entropy loss is computed as

$$\mathcal{L}_{CE} = -\sum_{i}\sum_{c} y_{i,c}\,\log\left(p_{i,c}\right) \qquad (19)$$
where $y_{i,c}$ is the one-hot-encoded ground-truth label and $p_{i,c}$ is the predicted probability of pixel $i$ belonging to class $c$; the indices $c$ and $i$ iterate over all classes and pixels, respectively. Cross-entropy loss minimizes pixel-wise error, so in class-imbalanced cases the dominant class can be over-represented in the loss, leaving minority-class samples poorly represented and contributing only weakly to training.
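For concreteness, a minimal NumPy sketch of Eq. (19) is given below; the function name, the `eps` clipping constant, and the toy tensors are illustrative assumptions, not taken from the original work.

```python
import numpy as np

def cross_entropy_loss(y_true, p_pred, eps=1e-12):
    """Pixel-wise cross-entropy in the spirit of Eq. (19).

    y_true : (N, C) array of one-hot ground-truth labels, one row per pixel.
    p_pred : (N, C) array of predicted class probabilities per pixel.
    """
    p_pred = np.clip(p_pred, eps, 1.0)  # avoid log(0)
    # Sum over classes c for each pixel, then average over pixels i.
    return -np.mean(np.sum(y_true * np.log(p_pred), axis=1))

# Toy example: 3 pixels, 2 classes.
y = np.array([[1, 0], [0, 1], [1, 0]], dtype=float)
p = np.array([[0.9, 0.1], [0.2, 0.8], [0.6, 0.4]], dtype=float)
print(cross_entropy_loss(y, p))  # ~0.28
```

Note how the toy example hints at the imbalance issue discussed above: if most pixels belong to one class, that class dominates the averaged sum and minority-class errors barely move the loss.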