year: 2017
paper: https://arxiv.org/pdf/1703.06868.pdf
website: https://paperswithcode.com/method/adaptive-instance-normalization
code:
status: skimmed
connections: generative AI, style-transfer, instance normalization


Takeaways

Adaptive Instance Normalization

Instance normalization (IN) normalizes the input to a single style specified by the affine parameters (γ, β). AdaIN was introduced to adapt to arbitrarily given styles (in generative AI) by using adaptive affine transformations. AdaIN receives a content input x and a style input y and aligns the channel-wise mean and variance of x to match those of y. There are no learnable affine parameters (unlike in BN or conditional IN). Instead, it “adaptively computes the affine parameters from the style input”:

AdaIN(x, y) = σ(y) · ((x − μ(x)) / σ(x)) + μ(y)

TL;DR: Style is encoded through scaling and shifting of the normalized values (γ and β can be learnt through a linear layer from the style input).

Like in IN, the statistics are computed across the spatial dimensions. Intuitively, consider a feature channel that detects brushstrokes of a certain style. A style image with this kind of strokes will produce a high average activation for this feature. The output produced by AdaIN will have the same high average activation for this feature, while preserving the spatial structure of the content image. In the original paper, γ and β are replaced by the std and mean of the style input, normalizing the content to the given style, which is a bit less flexible than learning them.
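The channel-wise alignment described above can be sketched in NumPy; the (C, H, W) array layout and the eps value are assumptions, not from the paper:

```python
import numpy as np

def adain(content, style, eps=1e-5):
    """AdaIN sketch: align the channel-wise mean/std of the content
    features to those of the style features (arrays shaped (C, H, W))."""
    # Channel-wise statistics over the spatial dims (H, W), as in instance norm.
    c_mean = content.mean(axis=(1, 2), keepdims=True)
    c_std = content.std(axis=(1, 2), keepdims=True) + eps  # eps avoids div by zero
    s_mean = style.mean(axis=(1, 2), keepdims=True)
    s_std = style.std(axis=(1, 2), keepdims=True)
    # Normalize the content, then scale/shift it with the style statistics.
    return s_std * (content - c_mean) / c_std + s_mean

rng = np.random.default_rng(0)
content = rng.normal(size=(3, 8, 8))
style = rng.normal(loc=2.0, scale=3.0, size=(3, 8, 8))
out = adain(content, style)
```

The output keeps the content's spatial layout but its per-channel mean and std now match the style input.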

