We aim to enable touching of digital media, such as images and virtual objects. Imagine being able to touch the sofa your avatar is about to sit on, or letting blind people "see" images through their fingers. We are developing a complete haptic system for tactile sensing and already have a device that simulates the sensation of surface geometry and texture. The next challenge is to figure out how to transform images into surfaces that people can touch and understand.
In many computer vision problems, collecting data for training and testing is hard or even impossible. For example, it is notoriously difficult to annotate videos, and as a result autonomous driving platforms rely on synthetically generated videos. In our research we address multiple facets of this data scarcity, including differentiable data augmentation, completing missing annotations, generating synthetic images, and more.
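To make the idea of differentiable data augmentation concrete, here is a minimal sketch (not our specific method) assuming PyTorch: the augmentation's brightness and contrast strengths are learnable parameters, so gradients from the task loss flow back into the augmentation itself. The class name, initial values, and toy model are illustrative assumptions.

```python
# Minimal sketch of differentiable data augmentation (assumes PyTorch).
# Augmentation strengths are nn.Parameters, so they are optimized jointly
# with the model instead of being hand-tuned hyperparameters.
import torch
import torch.nn as nn

class DifferentiableColorJitter(nn.Module):
    """Applies brightness/contrast perturbations with learnable magnitudes."""
    def __init__(self):
        super().__init__()
        # Hypothetical initial augmentation strengths.
        self.brightness = nn.Parameter(torch.tensor(0.1))
        self.contrast = nn.Parameter(torch.tensor(0.1))

    def forward(self, images: torch.Tensor) -> torch.Tensor:
        # Per-image random perturbations; every op is differentiable,
        # so d(loss)/d(brightness) and d(loss)/d(contrast) are defined.
        b = images.shape[0]
        noise_b = torch.randn(b, 1, 1, 1, device=images.device)
        noise_c = torch.randn(b, 1, 1, 1, device=images.device)
        mean = images.mean(dim=(2, 3), keepdim=True)
        out = images + self.brightness * noise_b
        out = (out - mean) * (1.0 + self.contrast * noise_c) + mean
        return out.clamp(0.0, 1.0)

# Usage: model and augmentation share one optimizer (toy example).
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
augment = DifferentiableColorJitter()
opt = torch.optim.Adam(list(model.parameters()) + list(augment.parameters()), lr=1e-3)

images = torch.rand(8, 3, 32, 32)          # dummy batch
labels = torch.randint(0, 10, (8,))
loss = nn.functional.cross_entropy(model(augment(images)), labels)
loss.backward()
opt.step()
```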