3DSNet: Unsupervised Shape-to-Shape 3D Style Transfer

Transferring the style from one image onto another is a popular and widely studied task in computer vision. Yet, learning-based style transfer in the 3D setting remains a largely unexplored problem. To our knowledge, ours is the first learning-based approach for style transfer between 3D objects that provides disentangled content and style representations. Our method combines the content and style of a source and a target 3D model to generate a novel shape that resembles the target in style while retaining the content of the source.
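
As a rough illustration of the disentanglement idea, the sketch below encodes a content code and a style code separately and decodes the source content together with the target style. All module and tensor names are hypothetical, and the architecture is a minimal stand-in for the one described in the paper, not its actual implementation.

    import torch
    import torch.nn as nn

    class StyleTransfer3D(nn.Module):
        # Hypothetical sketch: point clouds of shape (B, N, 3) are mapped
        # to separate content and style codes; decoding the source content
        # with the target style yields the stylized output shape.
        def __init__(self, content_dim=256, style_dim=64, n_points=2048):
            super().__init__()
            self.content_enc = nn.Sequential(
                nn.Linear(3, 128), nn.ReLU(), nn.Linear(128, content_dim))
            self.style_enc = nn.Sequential(
                nn.Linear(3, 128), nn.ReLU(), nn.Linear(128, style_dim))
            self.decoder = nn.Sequential(
                nn.Linear(content_dim + style_dim, 512), nn.ReLU(),
                nn.Linear(512, n_points * 3))
            self.n_points = n_points

        def encode(self, pts):
            # Per-point features are max-pooled into global shape codes.
            c = self.content_enc(pts).max(dim=1).values
            s = self.style_enc(pts).max(dim=1).values
            return c, s

        def forward(self, source, target):
            c_src, _ = self.encode(source)  # content of the source
            _, s_tgt = self.encode(target)  # style of the target
            out = self.decoder(torch.cat([c_src, s_tgt], dim=-1))
            return out.view(-1, self.n_points, 3)

Swapping the two style codes instead would stylize the target with the source, so one forward pass per direction covers both transfers.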

Batch Normalization Embeddings for Deep Domain Generalization

Domain generalization aims at training machine learning models to perform robustly across different, unseen domains. Several recent methods use multiple datasets to train models to extract domain-invariant features, hoping to generalize to unseen domains. Instead, we first explicitly learn domain-dependent representations and use them to map each domain into a shared latent space, where membership to a domain can be measured with a distance function. We then propose to infer the properties of an unseen domain as a linear combination of the known ones.
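
One way to make the latent mapping concrete, following the title, is to use the running statistics accumulated by the batch-normalization layers as the domain embedding. The helper names below are illustrative rather than the paper's API, and the softmax weighting is one plausible choice of linear combination.

    import torch

    def bn_embedding(model):
        # Concatenate the running statistics of every BatchNorm layer
        # into a single vector characterizing the domain the model saw.
        parts = []
        for m in model.modules():
            if isinstance(m, torch.nn.modules.batchnorm._BatchNorm):
                parts.append(m.running_mean.flatten())
                parts.append(m.running_var.flatten())
        return torch.cat(parts)

    def domain_weights(unseen_emb, known_embs):
        # Soft membership of an unseen domain to each known domain,
        # derived from distances in the shared BN-statistics space.
        d = torch.stack([torch.norm(unseen_emb - e) for e in known_embs])
        return torch.softmax(-d, dim=0)  # closer domains weigh more

Predictions from the domain-specific branches can then be blended with these weights, realizing the unseen domain as a linear combination of the known ones.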

Depth-Aware Action Recognition: Pose-Motion Encoding through Temporal Heatmaps

Most state-of-the-art methods for action recognition rely only on 2D spatial features encoding appearance, motion, or pose. However, 2D data lacks the depth information that is crucial for recognizing fine-grained actions. In this paper, we propose a depth-aware volumetric descriptor that encodes pose and motion information in a unified representation for action classification in the wild.
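
A minimal NumPy sketch of one possible such encoding follows; the function name, grid resolution, and time-stamping scheme are assumptions, not the paper's exact descriptor. 3D joint positions are accumulated into a voxel grid whose values record when each voxel was visited, so the volume jointly captures pose (where) and motion (when).

    import numpy as np

    def temporal_heatmap(joints, grid=32):
        # joints: (T, J, 3) array of 3D joint positions (x, y, depth)
        # over T frames. Returns a (grid, grid, grid) volume in which
        # each visited voxel stores the normalized time of its last
        # visit, later frames overwriting earlier ones.
        T = joints.shape[0]
        mins = joints.reshape(-1, 3).min(axis=0)
        maxs = joints.reshape(-1, 3).max(axis=0)
        norm = (joints - mins) / (maxs - mins + 1e-8)  # scale to [0, 1]
        idx = np.clip((norm * (grid - 1)).astype(int), 0, grid - 1)
        vol = np.zeros((grid, grid, grid), dtype=np.float32)
        for t in range(T):
            for x, y, z in idx[t]:
                vol[x, y, z] = (t + 1) / T
        return vol

    # Example: a 30-frame clip with 17 joints yields a 32^3 volume
    # that can be fed to a standard 3D convolutional classifier.
    vol = temporal_heatmap(np.random.rand(30, 17, 3))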