Topic: Generative models for neural radiance fields
Supervisor: Mgr. Jonáš Kulhánek
Announce as: Master's thesis (Diplomová práce), Bachelor's thesis (Bakalářská práce), Semester project (Semestrální projekt)
Description: Novel view synthesis is a long-standing computer vision problem with significant impact on other fields such as computer graphics and robotics. In this thesis, the student will work with the newest state-of-the-art neural rendering methods such as [1,2]. Current Neural Radiance Field (NeRF) methods use neural networks to represent scenes and a differentiable rendering algorithm to optimize the representation. However, the representation is learned from scratch for each individual scene. Incorporating prior knowledge about common 3D scenes could reduce the number of images and the time needed to adapt the representation to a novel scene. Some approaches, e.g., pixelNeRF [3], achieve this adaptation by reprojecting convolutional features into 3D and using the result as a neural radiance field; however, the quality of this representation is limited.
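The differentiable rendering step at the core of NeRF methods is the standard volume-rendering quadrature: densities and colors predicted along a ray are alpha-composited into a pixel color, and gradients flow back through this sum into the scene representation. A minimal NumPy sketch of that compositing step (an illustration of the standard formula, not any particular paper's implementation; the function name is hypothetical):

```python
import numpy as np

def composite_ray(densities, colors, deltas):
    """Alpha-composite samples along one ray (standard NeRF quadrature).

    densities: (N,) non-negative volume densities sigma_i at each sample
    colors:    (N, 3) RGB color predicted at each sample
    deltas:    (N,) distances between consecutive samples
    """
    alphas = 1.0 - np.exp(-densities * deltas)       # opacity of each ray segment
    trans = np.cumprod(1.0 - alphas + 1e-10)         # transmittance past sample i
    trans = np.concatenate([[1.0], trans[:-1]])      # T_1 = 1: nothing occludes the first sample
    weights = trans * alphas                         # contribution of each sample to the pixel
    return (weights[:, None] * colors).sum(axis=0)   # final pixel color

# A fully opaque first sample should dominate the pixel color:
rgb = composite_ray(
    densities=np.array([1e9, 1.0]),
    colors=np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]]),
    deltas=np.array([1.0, 1.0]),
)
```

Because every operation here is differentiable, optimizing the representation amounts to backpropagating a photometric loss on `rgb` through this compositing into the network that produced `densities` and `colors`.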
This thesis instead aims to use a novel NeRF rendering pipeline [1] which uses a hash-encoded embedding and a small neural network as the scene representation. Instead of training the representation from scratch, a generative model will be used to generate the initial representation and to constrain the optimization process.
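To give a sense of what the hash-encoded embedding looks like, the sketch below indexes per-level feature tables with a spatial hash of grid coordinates, in the style of multiresolution hash encodings. This is a simplified NumPy illustration only (the real method trilinearly interpolates the eight surrounding grid corners per level and trains the tables jointly with the network; all names and default sizes here are hypothetical):

```python
import numpy as np

def hash_encode(xyz, num_levels=4, table_size=2**14, features=2, base_res=16, growth=2.0):
    """Look up multiresolution hash-grid features for 3D points.

    xyz: (B, 3) points in [0, 1]^3.  Returns (B, num_levels * features).
    """
    rng = np.random.default_rng(0)
    # One (trainable, here randomly initialized) feature table per resolution level.
    tables = [rng.standard_normal((table_size, features)) * 1e-2 for _ in range(num_levels)]
    primes = np.array([1, 2654435761, 805459861], dtype=np.uint64)  # spatial-hash primes
    out = []
    for level, table in enumerate(tables):
        res = int(base_res * growth ** level)
        grid = np.floor(xyz * res).astype(np.uint64)  # nearest grid corner (simplified:
        # the full method would interpolate 8 corners)
        idx = np.bitwise_xor.reduce(grid * primes, axis=1) % table_size
        out.append(table[idx.astype(np.int64)])
    return np.concatenate(out, axis=1)

# Encode a batch of 5 random points; output is 4 levels x 2 features = 8 dims each.
feats = hash_encode(np.random.default_rng(1).random((5, 3)))
```

The concatenated per-level features are what the small neural network consumes; since the tables hold most of the capacity, the network itself can stay tiny, which is what makes per-scene optimization fast.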
Although this thesis can be challenging, if it goes as expected, the results can be presented or published at a prestigious conference or in a journal (e.g., CVPR, ICCV), which can greatly benefit the student's career.
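The idea of using a generative model to constrain the optimization, mentioned in the thesis aim above, can be illustrated with a toy quadratic problem: fit parameters to observations while penalizing distance from a prior-generated initialization. This is purely a conceptual stand-in (the real objective would be a photometric rendering loss over the hash-table parameters, and the constraint would come from an actual generative model; every name here is hypothetical):

```python
import numpy as np

def fit_with_prior(target, theta0, prior_mean, lam=0.1, lr=0.4, steps=200):
    """Gradient descent on a prior-regularized toy loss:
        L(theta) = ||theta - target||^2 + lam * ||theta - prior_mean||^2
    where `target` stands in for observed images and `prior_mean` for the
    generative model's proposed representation."""
    theta = theta0.copy()
    for _ in range(steps):
        grad = 2.0 * (theta - target) + 2.0 * lam * (theta - prior_mean)
        theta -= lr * grad
    return theta

target = np.array([1.0, 2.0])   # stand-in for the photometric objective
prior = np.array([0.0, 0.0])    # stand-in for the generative model's output
theta = fit_with_prior(target, np.zeros(2), prior)
```

The minimizer is the weighted average `(target + lam * prior) / (1 + lam)`: a larger `lam` pulls the solution toward the prior, which is exactly the trade-off a generative constraint introduces when few images are available.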
Bibliography:
[1] Müller, T., Evans, A., Schied, C., & Keller, A. (2022). Instant Neural Graphics Primitives with a Multiresolution Hash Encoding. arXiv:2201.05989.
[2] Barron, J. T., et al. (2021). Mip-NeRF: A Multiscale Representation for Anti-Aliasing Neural Radiance Fields. arXiv:2103.13415.
[3] Yu, A., Ye, V., Tancik, M., & Kanazawa, A. (2021). pixelNeRF: Neural Radiance Fields from One or Few Images. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (pp. 4578-4587).