Weighted Cross Entropy Loss
Weighted Cross Entropy applies a scaling parameter alpha to Binary Cross Entropy, allowing us to penalise either false positives or false negatives more harshly. If you want false positives to be penalised more than false negatives, alpha must be greater than 1; if you want false negatives to be penalised more, it must be less than 1.
The equations for Binary and Weighted Cross Entropy Loss are the following:
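One standard formulation consistent with the description above is sketched here, with y ∈ {0, 1} the true label and p the predicted probability; note that placing alpha on the negative-class term is an assumption made to match the prose, and other conventions scale the positive-class term instead:

$$\mathrm{BCE}(y, p) = -\bigl(y \log p + (1 - y)\log(1 - p)\bigr)$$

$$\mathrm{WCE}_{\alpha}(y, p) = -\bigl(y \log p + \alpha\,(1 - y)\log(1 - p)\bigr)$$

With alpha > 1, the loss on negative examples that receive a high predicted probability (false positives) is amplified, matching the description above.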
We calculate the Gradient:
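A sketch of the gradient, assuming the formulation above and that derivatives are taken with respect to the raw score z, with p = σ(z), as required for a gradient-boosting custom objective:

$$\frac{\partial\, \mathrm{WCE}_{\alpha}}{\partial z} = -\left(\frac{y}{p} - \frac{\alpha(1 - y)}{1 - p}\right) p\,(1 - p) = p\bigl(\alpha(1 - y) + y\bigr) - y$$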
We also need to calculate the Hessian:
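Differentiating the gradient above once more with respect to the raw score z (under the same assumptions) gives:

$$\frac{\partial^{2}\, \mathrm{WCE}_{\alpha}}{\partial z^{2}} = \bigl(\alpha(1 - y) + y\bigr)\, p\,(1 - p)$$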
By setting alpha = 1 we obtain the Gradient and Hessian for Binary Cross Entropy Loss, as expected.
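A minimal NumPy sketch of these expressions, assuming the formulation above; the function name and signature are illustrative rather than taken from any particular library, and the returned arrays could be adapted into a custom objective for a gradient-boosting library such as XGBoost or LightGBM:

```python
import numpy as np
from scipy.special import expit  # numerically stable sigmoid


def weighted_cross_entropy_grad_hess(z, y, alpha=1.0):
    """Gradient and Hessian of Weighted Cross Entropy w.r.t. the raw score z.

    p = sigmoid(z); alpha scales the negative-class term, so alpha > 1
    penalises false positives more harshly. With alpha = 1 this reduces to
    the Binary Cross Entropy gradient (p - y) and Hessian p * (1 - p).
    NOTE: putting alpha on the negative-class term is an assumption made to
    match the prose above; other conventions weight the positive class.
    """
    p = expit(z)
    w = alpha * (1.0 - y) + y        # per-example weight: alpha where y = 0, 1 where y = 1
    grad = w * p - y                 # dL/dz
    hess = w * p * (1.0 - p)         # d^2 L / dz^2
    return grad, hess


# Example: alpha = 2 doubles the gradient and Hessian on negative examples.
z = np.array([-2.0, 0.0, 1.5])
y = np.array([0.0, 1.0, 1.0])
print(weighted_cross_entropy_grad_hess(z, y, alpha=2.0))
```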