• Train a neural network with dropout layers.
  • At test/inference time, keep dropout active (normally dropout is disabled once training ends).
  • Pass the same input through the network multiple times. Each pass will have a different set of neurons “dropped out,” effectively creating slightly different sub-networks.
  • You’ll get a distribution of predictions. The mean serves as the prediction, and the variance of these predictions serves as an estimate of the epistemic (model) uncertainty.
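The steps above can be sketched in NumPy with a toy one-hidden-layer network. The weights, dropout rate, and number of passes here are arbitrary illustrative choices, not values from the original text; the key points are that the dropout mask is resampled on every forward pass at inference time, and that the spread across passes is read as the uncertainty estimate.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy network: 1 input -> 32 hidden units -> 1 output.
W1 = rng.normal(size=(1, 32))
W2 = rng.normal(size=(32, 1))
p_drop = 0.5  # dropout probability (assumed value)

def forward(x, rng):
    h = np.maximum(0.0, x @ W1)           # ReLU hidden layer
    mask = rng.random(h.shape) > p_drop   # dropout stays ACTIVE at test time
    h = h * mask / (1.0 - p_drop)         # inverted-dropout scaling
    return h @ W2

x = np.array([[0.5]])
T = 200  # number of stochastic forward passes

# Each pass samples a different sub-network via a fresh dropout mask.
preds = np.array([forward(x, rng).item() for _ in range(T)])

mean_pred = preds.mean()  # predictive mean
epistemic = preds.var()   # variance across passes ~ epistemic uncertainty
```

With dropout disabled (mask always on), every pass would return the same value and the variance would collapse to zero, which is why keeping dropout active is essential to this procedure.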