

Wagstaff, Fuchs, Engelcke, Posner and Osborne (2019) On the Limitations of Representing Functions on Sets

Over to Fabian, Ed and Martin for the rest of the post.

Most successful deep learning approaches make use of the structure in their inputs: CNNs work well for images, RNNs and temporal convolutions for sequences, etc. The success of convolutional networks boils down to exploiting a key invariance property: translation invariance. Exploiting this invariance allows us to:

- drastically reduce the number of parameters needed to model high-dimensional data,
- decouple the number of parameters from the number of input dimensions, and
- ultimately, become more data efficient and generalize better.

But images are far from the only data we want to build neural networks for. Often our inputs are sets: sequences of items where the ordering of items carries no information for the task at hand. In such a situation, the invariance property we can exploit is permutation invariance. To give a short, intuitive explanation, this is what a permutation-invariant function with three inputs would look like: $f(a, b, c) = f(a, c, b) = f(b, a, c) = \dots$. Some practical examples where we want to treat data or different pieces of higher-order information as sets (i.e. where we want permutation invariance) are:

- working with sets of objects in a scene (think AIR or SQAIR).
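One simple way to build a function with this invariance property is to embed each element independently and pool with a symmetric operation such as a sum. The sketch below illustrates the idea only; the embedding `phi` and readout `rho` are hypothetical stand-ins (fixed nonlinearities rather than the learned networks a real model would use):

```python
import math

def phi(x):
    # Per-element embedding; a fixed nonlinearity stands in
    # for a learned network in this illustrative sketch.
    return math.tanh(x)

def rho(z):
    # Readout applied to the pooled representation;
    # again a stand-in for a learned network.
    return z * z

def f(xs):
    # Summing the per-element embeddings discards ordering,
    # so f(a, b, c) == f(a, c, b) == f(b, a, c) == ...
    return rho(sum(phi(x) for x in xs))

print(f([1.0, 2.0, 3.0]))
print(f([3.0, 1.0, 2.0]))  # same value up to floating-point rounding
```

Because the sum is the only place the elements interact, any reordering of the input leaves the output unchanged (up to floating-point associativity).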


February, DeepSets: Modeling Permutation Invariance. Guest post by Fabian Fuchs, Ed Wagstaff, and Martin Engelcke.
