|Topic:||Generating training data for 2D segmentation from 3D models|
|Supervisor:||Ing. Michal Polic|
|Announce as:||Diploma thesis, Semester project|
|Description:||Accurate dense reconstruction of a physical environment plays an essential role in many Computer Vision fields. Such data are crucial for generating training data for camera localization, multi-view stereo, or 2D segmentation. Most current approaches to 2D segmentation require a rich collection of annotated images for training, which is hard to obtain in general and even harder for complex scenes such as medical, factory, or construction environments. Generating segmentation masks and the related images requires high-quality 3D models, which are difficult to obtain for highly structured scenes with small details. The student will therefore focus on techniques for optimizing dense reconstruction quality. The work consists of capturing a complex scene in a lab environment with an RGB-D scanner and running/retraining the SPSG (CVPR 2021) method to improve the quality of the reconstruction and fill in its missing parts. The final models will be segmented to generate 2D images and object masks in AI Habitat (ICCV 2019).
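Once a segmented model is rendered in AI Habitat, each frame from a semantic sensor is a per-pixel map of object IDs, and the per-object binary masks fall out directly. The helper below is a minimal sketch of that last step, assuming a rendered ID image as input; the function name and the background-ID convention are illustrative, not part of the Habitat API.

```python
import numpy as np

def masks_from_semantic_image(semantic, background_id=0):
    """Split a per-pixel object-ID render (e.g. the output of a semantic
    sensor) into one binary mask per object, keyed by object ID."""
    masks = {}
    for obj_id in np.unique(semantic):
        if obj_id == background_id:
            continue  # skip background / unlabeled pixels
        masks[int(obj_id)] = semantic == obj_id
    return masks

# Toy 4x4 semantic render with two objects (IDs 1 and 2).
sem = np.array([[0, 1, 1, 0],
                [0, 1, 1, 0],
                [2, 2, 0, 0],
                [2, 2, 0, 0]])
masks = masks_from_semantic_image(sem)  # {1: 4-pixel mask, 2: 4-pixel mask}
```

Saving each mask alongside the corresponding RGB render yields the (image, mask) pairs needed for supervised 2D segmentation training.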
Record the laboratory environment with an RGB-D camera. Run COLMAP to obtain the camera poses. Compose a dense point cloud from the depth maps. Run SPSG (CVPR 2021) to improve the quality of the reconstruction and fill in the missing parts. Run AI Habitat (ICCV 2019) and write the code to generate images with segmentation masks.
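The point-cloud composition step amounts to back-projecting each depth map through the pinhole intrinsics and transforming the points by the COLMAP camera pose. A minimal sketch, assuming a metric depth map, a 3x3 intrinsics matrix K, and a 4x4 camera-to-world pose (the function name and example values are illustrative):

```python
import numpy as np

def backproject_depth(depth, K, cam_to_world):
    """Back-project a depth map (H x W, meters) into world-space 3D points
    using pinhole intrinsics K and a 4x4 camera-to-world pose."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    valid = depth > 0                      # drop pixels with no depth reading
    z = depth[valid]
    x = (u[valid] - K[0, 2]) * z / K[0, 0]  # x = (u - cx) * z / fx
    y = (v[valid] - K[1, 2]) * z / K[1, 1]  # y = (v - cy) * z / fy
    pts_cam = np.stack([x, y, z, np.ones_like(z)], axis=1)  # N x 4 homogeneous
    return (cam_to_world @ pts_cam.T).T[:, :3]              # N x 3 world points

# Example: a flat wall 2 m in front of a camera at the world origin.
K = np.array([[500.0, 0.0, 320.0],
              [0.0, 500.0, 240.0],
              [0.0, 0.0, 1.0]])
depth = np.full((480, 640), 2.0)
cloud = backproject_depth(depth, K, np.eye(4))
```

Merging the per-frame clouds from all registered views gives the dense point cloud that SPSG then refines and completes.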
|Bibliography:||* Savva, Manolis, et al. "Habitat: A platform for embodied AI research." Proceedings of the IEEE/CVF International Conference on Computer Vision. 2019.
* Dai, Angela, et al. "SPSG: Self-supervised photometric scene generation from RGB-D scans." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2021.
* Schönberger, Johannes L., and Jan-Michael Frahm. "Structure-from-motion revisited." Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2016.