My research interests lie at the intersection of computer vision, computer graphics, and machine learning. I am particularly interested in analyzing and modeling people from all sorts of sensor data: IMUs, 3D/4D scans, RGB-D, images, and video. Current computer vision algorithms can detect people in images or estimate 2D pose with remarkable accuracy. However, people are far more complex than that. Humans perceive a great deal of information about one another: we sense other people's emotional state from their facial expressions and body movements, and we make guesses about their preferences from the clothing they wear. For machines to interact with humans, they must both perceive this kind of information from sensory data and "appear" human to us. My research revolves around perceiving people and building generative digital models of them from real-world data.