Categorical Cross-Entropy Loss

For one-hot encodings, the cross-entropy loss simplifies to Categorical Cross-Entropy, aka negative log-likelihood loss:

$$\mathcal{L}_n(\theta) = -\sum_{k} y_{n,k} \log \hat{y}_{n,k}$$

where:

- $n$ … sample
- $c$ … index of the correct class
- $y_n$ … true label (one-hot, all probability mass is on the true class)
- $\hat{y}_n = p(y \mid x_n; \theta)$ … predicted probability distribution (from softmax, for example)
- $\theta$ … model parameters
- $x_n$ … input features

Since $y_n$ is one-hot, only the term of the correct class $c$ survives, and the loss is the negative log-probability assigned to that class, see surprise:

$$\mathcal{L}_n(\theta) = -\log \hat{y}_{n,c}$$
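
As a quick check of the reduced form, here is a minimal NumPy sketch (the names `softmax` and `categorical_cross_entropy` are illustrative, not from any particular library): it builds $\hat{y}$ by applying softmax to raw logits and averages $-\log \hat{y}_{n,c}$ over a batch.

```python
import numpy as np

def softmax(logits):
    """Numerically stable softmax over the last axis."""
    z = logits - logits.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def categorical_cross_entropy(logits, class_indices):
    """Mean negative log-likelihood of the correct classes.

    logits:        (n_samples, n_classes) raw model outputs
    class_indices: (n_samples,) index of the correct class per sample
    """
    probs = softmax(logits)                  # predicted distribution y_hat
    n = logits.shape[0]
    # One-hot y zeroes out every term except the correct class,
    # so we only need y_hat[n, c] for each sample.
    correct_class_probs = probs[np.arange(n), class_indices]
    return -np.log(correct_class_probs).mean()

# Example: 2 samples, 3 classes
logits = np.array([[2.0, 0.5, -1.0],
                   [0.1, 0.2, 3.0]])
targets = np.array([0, 2])
print(categorical_cross_entropy(logits, targets))
```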

References

Understanding softmax and the negative log-likelihood

cross-entropy
classification