I am a first-year Ph.D. student at the Max Planck ETH Center for Learning Systems, jointly supervised by Prof. Fisher Yu and Prof. Bernt Schiele. Currently, I am working in the Visual Intelligence and Systems group at the Computer Vision Lab (ETH Zurich). My research interests lie in computer vision and the robustness of deep learning methods.
I received my Master’s degree in Robotics, Systems and Control from ETH Zurich in 2021, and my Bachelor’s degree in Automation Engineering from the Politecnico di Milano in 2018. In 2020, I worked for one year as a Student Researcher at Google.
MSc in Robotics, Systems and Control, 2021
ETH Zurich
BSc in Automation Engineering, 2018
Politecnico di Milano
Transferring the style from one image onto another is a popular and widely studied task in computer vision. Yet, learning-based style transfer in the 3D setting remains a largely unexplored problem. To our knowledge, we propose the first learning-based approach for style transfer between 3D objects that provides disentangled content and style representations. Our method combines the content and style of a source and a target 3D model to generate a novel shape that resembles the target in style while retaining the content of the source.
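The core idea of disentangled style transfer can be sketched as follows: encode each shape into separate content and style codes, then decode the source's content code together with the target's style code. This is only an illustrative toy in NumPy; the encoders, decoder, dimensions, and the latent-vector stand-ins for 3D models are all hypothetical, not the paper's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy encoders/decoder: in the real method these are learned
# networks over 3D shapes; here they are fixed random linear maps.
D_IN, D_CONTENT, D_STYLE = 16, 4, 4
W_c = rng.standard_normal((D_CONTENT, D_IN))            # content encoder
W_s = rng.standard_normal((D_STYLE, D_IN))              # style encoder
W_d = rng.standard_normal((D_IN, D_CONTENT + D_STYLE))  # decoder

def encode(x):
    """Split a shape representation into a (content, style) code pair."""
    return W_c @ x, W_s @ x

def decode(content, style):
    """Reconstruct a shape representation from a (content, style) pair."""
    return W_d @ np.concatenate([content, style])

source = rng.standard_normal(D_IN)  # stand-in for the source 3D model
target = rng.standard_normal(D_IN)  # stand-in for the target 3D model

c_src, _ = encode(source)   # keep the source's content
_, s_tgt = encode(target)   # take the target's style

# Style transfer = decode source content with target style.
stylized = decode(c_src, s_tgt)
print(stylized.shape)  # (16,)
```

The swap in the last step is what disentanglement buys: because content and style live in separate codes, recombining them across shapes is a single decode.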
Domain generalization aims at training machine learning models to perform robustly across different and unseen domains. Several recent methods use multiple datasets to train models to extract domain-invariant features, hoping to generalize to unseen domains. Instead, we first explicitly learn domain-dependent representations and use them to map domains into a shared latent space, where membership in a domain can be measured with a distance function. We then propose to infer the properties of an unseen domain as a linear combination of the known ones.
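The last two steps can be sketched concretely: place each known domain at a point in a shared latent space, measure an unseen domain's distance to each of them, and turn those distances into convex-combination weights. This is a minimal NumPy sketch with made-up embeddings and a softmax-over-negative-distances weighting; the paper's learned embeddings and exact distance function may differ.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: 3 known training domains, each with an embedding in
# a shared 8-dim latent space and a 5-dim vector of domain properties.
known_domains = rng.standard_normal((3, 8))
domain_params = rng.standard_normal((3, 5))

def infer_unseen(z, temperature=1.0):
    """Express an unseen domain as a convex combination of known domains,
    weighting each one by its distance to z in the shared latent space."""
    dists = np.linalg.norm(known_domains - z, axis=1)  # membership distances
    w = np.exp(-dists / temperature)                   # closer -> larger weight
    w /= w.sum()                                       # normalize to sum to 1
    return w, w @ domain_params                        # interpolated properties

z_unseen = rng.standard_normal(8)
weights, params = infer_unseen(z_unseen)
print(weights.sum())  # weights form a convex combination -> 1.0
```

The design point is that an unseen domain never needs its own trained parameters: its properties are interpolated from the known domains it is closest to.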
Neural network predictions are unreliable when the input sample is outside the training distribution or corrupted by noise. Detecting such failures automatically is essential for integrating deep learning algorithms into robotics. We propose a novel framework for uncertainty estimation. Based on Bayesian belief networks and Monte-Carlo sampling, our framework not only fully models the different sources of prediction uncertainty but also incorporates prior data information, e.g., sensor noise. We show theoretically that this allows us to capture uncertainty better than existing methods.
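One ingredient of this kind of framework, propagating known sensor noise through a model via Monte-Carlo sampling, can be sketched in a few lines: perturb the input with the sensor's noise model, run the network on each sample, and read the predictive uncertainty off the spread of the outputs. The model, noise level, and sample count below are all illustrative assumptions, not the paper's actual pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)

def model(x):
    """Stand-in for a trained network (a fixed nonlinear map here)."""
    return np.tanh(x).sum()

def mc_uncertainty(x, sensor_sigma, n_samples=1000):
    """Monte-Carlo estimate of how known input (sensor) noise propagates
    to the prediction: sample noisy inputs, run the model on each, and
    report the mean and variance of the resulting outputs."""
    noise = rng.normal(0.0, sensor_sigma, size=(n_samples,) + x.shape)
    preds = np.array([model(x + eps) for eps in noise])
    return preds.mean(), preds.var()

x = np.array([0.2, -0.5, 1.0])
mean_low, var_low = mc_uncertainty(x, sensor_sigma=0.01)
mean_high, var_high = mc_uncertainty(x, sensor_sigma=0.5)
# Larger sensor noise should yield larger predictive variance.
print(var_low < var_high)  # True
```

This captures only the input-noise contribution; the full framework additionally models the other sources of prediction uncertainty (e.g., model uncertainty) within the Bayesian belief network.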