|Topic:||Interpretability of deep neural network models for image segmentation|
|Department:||Vision for Robots and Autonomous Systems|
|Supervisor:||Ing. Michal Reinštein, Ph.D.|
|Announce as:||Master's thesis, Bachelor's thesis, Semestral project|
|Description:||The motivation is to design and implement a Deep Neural Network (DNN) [1, 2, 3] based solution for the SpaceNet satellite imagery dataset and to compare it with already published state-of-the-art results. Due to the very complex nature of the SpaceNet dataset, understanding the DNN training process is an essential prerequisite for successful model design. Therefore, various methods of DNN model interpretability [5, 6] should be explored, implemented, and evaluated experimentally. Comparison with related state-of-the-art work, especially with the results of the original competition, is an integral part of the project and should be presented in the final report. Recommendation: the implementation should be done in Python using the Keras and TensorFlow frameworks; Google Colab is recommended for creating a user interface to present the results.|
|Bibliography:|| He, Kaiming, et al. "Mask R-CNN." arXiv preprint arXiv:1703.06870 (2017).
 Szegedy, Christian, et al. "Inception-v4, Inception-ResNet and the Impact of Residual Connections on Learning." AAAI. 2017.
 Goodfellow, Ian, et al. "Deep Learning." MIT Press, 2016.
 Lan, Janice, et al. "LCA: Loss Change Allocation for Neural Network Training." arXiv preprint arXiv:1909.01440 (2019).
 Abadi, Martín, et al. "TensorFlow: Large-scale machine learning on heterogeneous systems, 2015." Software available from tensorflow.org.|
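One of the interpretability methods cited above, Loss Change Allocation (Lan et al.), decomposes each training step's change in loss into per-parameter contributions via the dot product of the gradient with the parameter update. A minimal NumPy sketch on a toy linear-regression problem (all data and names here are illustrative, not part of the assignment; the midpoint-gradient allocation is exact for a quadratic loss such as MSE, while the real method applies it approximately to DNN training):

```python
import numpy as np

# Toy training problem: linear regression with MSE loss (illustrative only).
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
w_true = np.array([1.0, -2.0, 0.5])
y = X @ w_true + 0.1 * rng.normal(size=100)

def loss(w):
    r = X @ w - y
    return 0.5 * float(np.mean(r ** 2))

def grad(w):
    return X.T @ (X @ w - y) / len(y)

w = np.zeros(3)
lr, steps = 0.1, 50
lca = np.zeros_like(w)          # per-parameter loss change allocation
loss_start = loss(w)
for _ in range(steps):
    w_next = w - lr * grad(w)   # one gradient-descent step
    # Allocate this step's loss change across parameters using the
    # gradient at the midpoint of the step (exact for quadratic loss).
    lca += grad((w + w_next) / 2) * (w_next - w)
    w = w_next

total_change = loss(w) - loss_start
print(lca)                       # negative entries = parameters that reduced the loss
print(lca.sum(), total_change)   # the allocations sum to the total loss change
```

Summing `lca` over parameters recovers the total loss change of the run, which is the sanity check the method rests on; for a DNN the same bookkeeping is done per weight tensor during training.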