Topic: Using single-view depth prediction from CNNs for pose estimation under rolling shutter in 3D computer vision
Announce as: Master's thesis, semester project
Description: Estimating the relative pose between two images, i.e., the relative translation and rotation between them, or the absolute pose of an image with respect to a scene, i.e., the position and orientation from which it was taken, are fundamental problems in many 3D computer vision pipelines such as Structure-from-Motion, SLAM, and visual localization. Solving them is a prerequisite for applications such as self-driving cars, robotics, and Augmented / Mixed Reality systems (such as Microsoft HoloLens). Efficient solutions to these problems exist if the images were taken with global shutter cameras, i.e., if each image is captured at a single point in time. However, many modern cameras use a rolling shutter, i.e., images are captured line by line. Movement during the capture process leads to distortions in the images. These artifacts in turn significantly complicate relative and absolute pose estimation.
Modern deep neural networks are able to predict depth maps from a single image, i.e., they are able to predict the distance to the scene for each pixel in an image. While these depth predictions can be rather noisy, they should provide useful information for camera pose estimation under rolling shutter: the strength of the rolling shutter effect depends on the depth of each pixel, since for a given camera motion nearby points shift more in the image than distant ones. The goal of this thesis is thus to use single-view depth predictions made by neural networks to design efficient camera pose estimation procedures for rolling shutter cameras.
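To illustrate why per-pixel depth matters for rolling shutter, the following is a minimal sketch (not from the referenced papers) of a constant-velocity rolling-shutter projection model: each image row v is exposed at time t = v * t_row, during which the camera rotates and translates. The function names, the fixed-point iteration, and the specific parameter values are illustrative assumptions, not a prescribed method.

```python
import numpy as np

def skew(w):
    """Cross-product (skew-symmetric) matrix of a 3-vector."""
    return np.array([[0, -w[2], w[1]],
                     [w[2], 0, -w[0]],
                     [-w[0], w[1], 0]])

def project_global_shutter(X, K):
    """Pinhole projection of a 3D point X (camera frame) with intrinsics K."""
    x = K @ X
    return x[:2] / x[2]

def project_rolling_shutter(X, K, omega, velocity, t_row, iters=5):
    """Project X under a constant-velocity rolling shutter model.

    Row v is read out at time t = v * t_row while the camera rotates with
    angular velocity `omega` and translates with `velocity`. Since the row
    (hence the capture time) depends on the projection itself, we solve for
    it with a few fixed-point iterations, starting from the global-shutter
    projection. The rotation is linearized (small-angle assumption).
    """
    u, v = project_global_shutter(X, K)
    for _ in range(iters):
        t = v * t_row
        R_t = np.eye(3) + skew(omega) * t      # linearized rotation over [0, t]
        X_t = R_t @ X + velocity * t           # point in the row-v camera frame
        u, v = project_global_shutter(X_t, K)
    return np.array([u, v])
```

With a purely translating camera, two points on the same viewing ray project to the same pixel under a global shutter, but the nearer point is displaced more under rolling shutter; its displacement scales inversely with depth. This is exactly the information a single-view depth network could supply to a rolling-shutter pose solver.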
Bibliography:
Albl et al., Rolling Shutter Camera Absolute Pose, PAMI 2019
Godard et al., Digging into Self-Supervised Monocular Depth Prediction, ICCV 2019