Student project details

Topic: Dynamic scene analysis for autonomous driving
Department: Visual Recognition Group
Supervisor: Mgr. Jan Šochman, Ph.D.
Offered as: Master's thesis, Bachelor's thesis, Semester project
Description: We are part of the Toyota research lab, a research project at CTU sponsored by Toyota. Our goal is to understand dynamic scenes from a video recording: to know what moves and what is static, and where the moving objects are heading. Our main application is a self-driving car equipped with a camera, but many other applications exist in robotics, human-computer interfaces, movie editing, ...

Recently we have developed a method for discovering and segmenting independently moving objects in a video taken by a moving camera (https://github.com/michalneoral/Raptor). There are many interesting extensions of this work. For instance, we may be interested in the distance to the objects or in their trajectories in relation to the ego-vehicle motion (a minimal sketch of the distance estimation idea is given at the end of this description). Or, we may experiment with the temporal coherence of the algorithm's output.

The exact problem formulation will be specified depending on the current needs of the project and the student's experience (Bc./Ing.).

The student must be able to code in Python; some knowledge of deep learning frameworks such as PyTorch or TensorFlow is advantageous but not necessary.
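As a minimal sketch of the distance-to-object extension mentioned above, the snippet below combines a moving-object mask with the relative inverse depth predicted by the MiDaS model from [2], loaded via torch.hub. The file names "frame.png" and "object_mask.png" are hypothetical placeholders for one video frame and one mask produced by the Raptor segmentation; MiDaS predicts inverse depth only up to an unknown scale and shift, so turning the result into a metric distance would require additional calibration.

import cv2
import numpy as np
import torch

# Load the MiDaS monocular depth model [2] and its input transform via torch.hub.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
midas = torch.hub.load("intel-isl/MiDaS", "DPT_Large").to(device).eval()
transform = torch.hub.load("intel-isl/MiDaS", "transforms").dpt_transform

# One video frame and a binary mask of one moving object.
# Both file names are hypothetical stand-ins for the actual pipeline output.
frame = cv2.cvtColor(cv2.imread("frame.png"), cv2.COLOR_BGR2RGB)
mask = cv2.imread("object_mask.png", cv2.IMREAD_GRAYSCALE) > 0

with torch.no_grad():
    prediction = midas(transform(frame).to(device))
    # Resize the prediction back to the original frame resolution.
    prediction = torch.nn.functional.interpolate(
        prediction.unsqueeze(1),
        size=frame.shape[:2],
        mode="bicubic",
        align_corners=False,
    ).squeeze()

inv_depth = prediction.cpu().numpy()

# Median relative inverse depth inside the object mask: larger values mean
# the object is closer to the camera (a relative, not metric, distance).
print("median relative inverse depth:", np.median(inv_depth[mask]))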
Literature: [1] Monocular Arbitrary Moving Object Discovery and Segmentation:
https://www.bmvc2021-virtualconference.com/assets/papers/1500.pdf

[2] Towards Robust Monocular Depth Estimation: Mixing Datasets for Zero-Shot Cross-Dataset Transfer:
https://arxiv.org/pdf/1907.01341v3.pdf

[3] Learning to Recover 3D Scene Shape from a Single Image:
https://openaccess.thecvf.com/content/CVPR2021/papers/Yin_Learning_To_Recover_3D_Scene_Shape_From_a_Single_Image_CVPR_2021_paper.pdf
Responsible for content: Petr Pošík