|Topic:||Dynamic scene analysis for autonomous driving|
|Department:||Visual Recognition Group|
|Supervisor:||Mgr. Jan Šochman, Ph.D.|
|Announce as:||Master's thesis, Bachelor's thesis, Semester project|
|Description:||We are part of the Toyota research lab, a research project at CTU sponsored by Toyota. Our goal is to understand dynamic scenes from a video recording: to know what moves and what is static, and where the moving objects are heading. Our main application is a self-driving car equipped with a camera, but many other applications exist in robotics, human-computer interaction, movie editing, ...
Recently we have developed a method for discovering and segmenting independently moving objects in a video taken by a moving camera (https://github.com/michalneoral/Raptor). There are many interesting extensions of this work. For instance, we may be interested in the distance to the objects or in their trajectories in relation to the ego-vehicle motion, or we may experiment with the temporal coherence of the algorithm's output.
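As a rough illustration of the ego-motion-related extension, a detected object's pixel together with an estimated metric depth can be back-projected into 3D and expressed relative to the ego-vehicle pose. This is only a minimal sketch under a pinhole camera model; the intrinsics, pixel, depth, and pose values below are hypothetical and not part of Raptor:

```python
# Hypothetical pinhole intrinsics (focal lengths and principal point, in pixels).
FX, FY = 700.0, 700.0
CX, CY = 640.0, 360.0

def backproject(u, v, depth):
    """Lift pixel (u, v) with metric depth to a 3D point in the camera frame."""
    x = (u - CX) * depth / FX
    y = (v - CY) * depth / FY
    return (x, y, depth)

def to_reference_frame(point, ego_translation):
    """Shift a camera-frame point by the ego-vehicle translation
    (rotation omitted for brevity)."""
    return tuple(p + t for p, t in zip(point, ego_translation))

# Object center observed at pixel (800, 400) with an estimated depth of 12 m;
# the ego vehicle has moved 3 m forward (+z) since the reference frame.
p_cam = backproject(800.0, 400.0, 12.0)
p_ref = to_reference_frame(p_cam, (0.0, 0.0, 3.0))
```

Tracking such points over consecutive frames would yield object trajectories in a common frame, which is one possible starting point for the extensions above.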
The exact problem formulation will be specified according to the current needs of the project and the student's experience (Bc./Ing.).
The student must be able to code in Python; some knowledge of deep learning frameworks such as PyTorch or TensorFlow is advantageous but not necessary.
|Bibliography:|| Monocular Arbitrary Moving Object Discovery and Segmentation
 Towards Robust Monocular Depth Estimation: Mixing Datasets for Zero-Shot Cross-Dataset Transfer
 Learning to Recover 3D Scene Shape from a Single Image