EBL using contrastive learning (showing negative pairs, i.e. unrelated inputs, for which the energy should go up)
Curse of dimensionality: in a high-dimensional space it is infeasible to collect enough negative examples to push the energy up everywhere outside the data.
You get a jagged energy landscape, with occasional troughs around the datapoints.
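A minimal sketch of the idea, assuming PyTorch and a margin-based hinge loss (the architecture, input sizes, and the specific loss are illustrative assumptions, not taken from these notes):

```python
import torch
import torch.nn as nn

# Hypothetical energy network: maps an (x, y) pair to a scalar energy.
# The architecture and 16-dim inputs are illustrative assumptions.
energy_net = nn.Sequential(
    nn.Linear(2 * 16, 64),
    nn.ReLU(),
    nn.Linear(64, 1),
)

def contrastive_loss(x, y_pos, y_neg, margin=1.0):
    """Push energy down on compatible (x, y_pos) pairs and up on
    unrelated (x, y_neg) pairs, until the margin is satisfied."""
    e_pos = energy_net(torch.cat([x, y_pos], dim=-1))
    e_neg = energy_net(torch.cat([x, y_neg], dim=-1))
    # Hinge: positive energy should sit at least `margin` below negative energy.
    return torch.relu(margin + e_pos - e_neg).mean()

# Toy batch: negatives are just a shuffled (unrelated) pairing of the same ys.
x = torch.randn(32, 16)
y_pos = torch.randn(32, 16)
y_neg = y_pos[torch.randperm(32)]
loss = contrastive_loss(x, y_pos, y_neg)
loss.backward()
```

Note that each negative pair only raises the energy locally around that sample, which is exactly why the approach struggles as dimensionality grows.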

As we scale the dimensionality of the representation space, we run into the curse of dimensionality: the volume of the space grows exponentially with the number of dimensions, while the number of samples we can collect grows only linearly.
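A quick back-of-the-envelope illustration of that mismatch (the 10-bins-per-axis resolution is an arbitrary choice for illustration):

```python
# Cells needed to tile [0, 1]^d at a fixed resolution of 10 bins per axis.
# The cell count grows as 10^d, while a dataset only grows linearly in size.
for d in (1, 2, 3, 10, 100):
    print(f"d = {d:3d}: {10.0 ** d:.0e} cells")
# d =   1: 1e+01 cells  ...  d = 100: 1e+100 cells
```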

example methods: