Why Does Gatys et al.'s Neural Style Transfer Work Best With Old VGG CNN Features?
Does it really?
Let's avoid a discussion of what 'works best' even means, let alone 'style'. For now.
I grabbed this archived discussion from reddit and copied it below in case the original vanishes for some reason. It's a very interesting read, and it highlights something we have pointed out at HTC in many previous posts: there is something about the VGG architecture that seems to work well across a number of different neural net image transformation tasks.
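For context before the discussion itself: Gatys et al. define 'style' as the set of Gram matrices of CNN feature maps at several layers, and the style loss matches those Gram matrices between the generated image and the style image. Here is a minimal NumPy sketch of that loss; the random arrays are stand-ins for actual VGG activations, and the normalization constant is one common convention, not the only one.

```python
import numpy as np

def gram_matrix(features):
    # features: (channels, height, width) activation map from one CNN layer.
    # The Gram matrix captures which channels co-activate, discarding
    # spatial layout -- this is what Gatys et al. use as a 'style' statistic.
    c, h, w = features.shape
    f = features.reshape(c, h * w)
    return f @ f.T / (c * h * w)  # normalization convention varies by implementation

def style_loss(gen_feats, style_feats):
    # Mean squared difference of Gram matrices, summed over layers.
    return sum(np.mean((gram_matrix(g) - gram_matrix(s)) ** 2)
               for g, s in zip(gen_feats, style_feats))

# Toy example: random arrays standing in for VGG feature maps at two layers.
rng = np.random.default_rng(0)
gen = [rng.normal(size=(64, 16, 16)), rng.normal(size=(128, 8, 8))]
style = [rng.normal(size=(64, 16, 16)), rng.normal(size=(128, 8, 8))]
print(style_loss(gen, style))    # positive when styles differ
print(style_loss(style, style))  # 0.0 when styles match exactly
```

In the full method this loss is combined with a content loss on deeper-layer activations, and the generated image's pixels are optimized directly by gradient descent. The question the thread below wrestles with is why this recipe degrades when the VGG features are swapped for those of more modern networks.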