Frequentist probability, or frequentism, is an interpretation of probability that defines an event’s probability as the limit of its relative frequency over many repeated trials (the long-run frequency). Probabilities can in principle be found by a repeatable, objective process, and are thus ideally devoid of opinion, although the continued use of frequentist methods in scientific inference has been called into question. To a frequentist, “50% probability” means that in infinitely many repetitions, the event occurs half the time.
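To make the long-run reading concrete, here is a minimal Python sketch (the coin bias, trial counts, and seed are illustrative choices, not anything from the text): simulate repeated fair-coin flips and watch the relative frequency of heads settle toward 0.5.

```python
# A minimal sketch of the long-run reading of "50% probability": simulate
# repeated fair-coin flips and watch the relative frequency of heads
# approach 0.5. The trial counts and seed are illustrative choices.
import random

random.seed(42)

for n_trials in (10, 100, 10_000, 1_000_000):
    heads = sum(random.random() < 0.5 for _ in range(n_trials))
    print(f"{n_trials:>9} trials: relative frequency = {heads / n_trials:.4f}")
```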
Key principle: Parameters are fixed but unknown constants. You cannot assign probabilities to them; a parameter either has a particular value or it does not. This leads to the specific interpretation of a confidence interval: “If we repeated this experiment many times, 95% of the intervals constructed this way would contain the true parameter.”
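A short simulation makes the coverage interpretation concrete. This is a sketch under assumed values (the true mean, noise level, sample size, and number of experiments are all made up): it builds a textbook 95% interval for a normal mean in many replicated experiments and counts how often the fixed true parameter falls inside.

```python
# A sketch of the coverage interpretation: construct a textbook 95% interval
# for a normal mean in many simulated experiments and count how often the
# fixed true parameter lands inside. TRUE_MEAN, NOISE_SD, N, and EXPERIMENTS
# are made-up illustrative values.
import random
import statistics

random.seed(0)
TRUE_MEAN, NOISE_SD = 10.0, 2.0
N, EXPERIMENTS, Z95 = 50, 10_000, 1.96

covered = 0
for _ in range(EXPERIMENTS):
    sample = [random.gauss(TRUE_MEAN, NOISE_SD) for _ in range(N)]
    mean = statistics.fmean(sample)
    half_width = Z95 * statistics.stdev(sample) / N ** 0.5
    if mean - half_width <= TRUE_MEAN <= mean + half_width:
        covered += 1

# Expect roughly 0.95 (a touch lower, since we use z rather than t).
print(f"Coverage over {EXPERIMENTS} experiments: {covered / EXPERIMENTS:.3f}")
```

Note the direction of the statement: the parameter never moves; it is the interval that varies from experiment to experiment.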
Practical advantages:
- Often computationally simpler (many problems have closed-form solutions; see the sketch after this list)
- No need to specify priors, which avoids (or arguably just hides) subjective choices
- Standard in regulatory settings (clinical trials, quality control)
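As an example of the closed-form point, here is a sketch of maximum likelihood estimation for i.i.d. normal data (the data values are made up for illustration): the MLEs are simply the sample mean and the 1/n sample variance, with no prior and no iterative optimizer.

```python
# Illustrating the closed-form point: for i.i.d. normal data, the maximum
# likelihood estimates of the mean and variance are just the sample mean
# and the 1/n sample variance. The data values here are made up.
data = [4.8, 5.1, 5.0, 4.7, 5.4, 5.2, 4.9]

n = len(data)
mu_hat = sum(data) / n                              # MLE of the mean
var_hat = sum((x - mu_hat) ** 2 for x in data) / n  # MLE of the variance

print(f"mu_hat = {mu_hat:.3f}, var_hat = {var_hat:.3f}")
```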
Common applications: Most neural network training (maximum likelihood estimation), A/B testing, industrial quality control, and classical hypothesis testing with p-values.
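As one illustration of the A/B-testing workflow, here is a hedged sketch of a two-proportion z-test on hypothetical conversion counts (the counts and the 0.05 threshold are assumptions made for the example; it also assumes scipy is installed):

```python
# A sketch of a frequentist A/B test: a two-proportion z-test on
# hypothetical conversion counts. The counts and the 0.05 threshold are
# assumptions made for the example.
from math import sqrt

from scipy.stats import norm

conv_a, n_a = 120, 1000   # hypothetical control arm: 12% conversion
conv_b, n_b = 160, 1000   # hypothetical variant arm: 16% conversion

p_a, p_b = conv_a / n_a, conv_b / n_b
p_pool = (conv_a + conv_b) / (n_a + n_b)  # pooled rate under H0: equal rates
se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
z = (p_b - p_a) / se
p_value = 2 * norm.sf(abs(z))             # two-sided p-value

print(f"z = {z:.3f}, p-value = {p_value:.4f}")  # p < 0.05: reject H0 here
```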