Student project details

Topic: Interpretability of deep neural network models for image segmentation
Department: Vision for Robotics and Autonomous Systems
Supervisor: Ing. Michal Reinštein, Ph.D.
Offered as: Master's thesis, Bachelor's thesis, Semester project
Description: The motivation is to design and implement a Deep Neural Network (DNN) [1, 2, 3] based solution for the SPACENET satellite imagery dataset [4] and to compare it with already published state-of-the-art results. Due to the very complex nature of the SPACENET dataset, understanding the process of DNN model training is an essential prerequisite for successful model design. Therefore, various methods of DNN model interpretability [5, 6] should be explored, implemented, and evaluated experimentally. A comparison with related state-of-the-art work, especially with the results of the original competition, is an integral part of the project and should be presented in the final report. Recommendation: the implementation should be done in Python using the Keras [7] and TensorFlow [8] frameworks; Google Colab is recommended for creating a user interface to present the results.
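To illustrate the kind of training interpretability method the project asks for, the sketch below implements the core idea of Loss Change Allocation (LCA) [5] on a toy linear regression model in NumPy: the change in training loss after one gradient step is decomposed into one additive contribution per parameter, lca_i ≈ grad_i · Δθ_i, evaluated here with a midpoint approximation of the path integral. The toy model, data, and all variable names are illustrative assumptions, not part of the assignment; a real solution would apply the same decomposition to a Keras/TensorFlow segmentation model.

```python
import numpy as np

# Toy linear regression problem: y = X @ w_true (illustrative, not from the project).
rng = np.random.default_rng(0)
X = rng.normal(size=(32, 4))
w_true = np.array([1.0, -2.0, 0.5, 3.0])
y = X @ w_true

def loss(w):
    """Mean squared error of the linear model."""
    r = X @ w - y
    return 0.5 * np.mean(r ** 2)

def grad(w):
    """Gradient of the loss w.r.t. the weights."""
    return X.T @ (X @ w - y) / len(y)

# One plain gradient-descent step.
w = rng.normal(size=4)
lr = 1e-3
w_new = w - lr * grad(w)

# LCA: allocate the loss change of this step to individual parameters.
# Midpoint rule approximates the path integral of the gradient along the step
# (exact here, since the toy loss is quadratic and its gradient is linear).
lca = grad((w + w_new) / 2) * (w_new - w)
total_change = loss(w_new) - loss(w)

print("per-parameter allocation:", lca)
print("total loss change:", total_change)  # equals lca.sum() for this quadratic loss
```

Parameters with negative allocation are "helping" (reducing the loss) on this step; summing allocations over many steps gives the per-parameter training picture that the LCA paper analyzes.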
References: [1] He, Kaiming, et al. "Mask R-CNN." arXiv preprint arXiv:1703.06870 (2017).
[2] Szegedy, Christian, et al. "Inception-v4, Inception-ResNet and the Impact of Residual Connections on Learning." AAAI. 2017.
[3] Goodfellow, Ian, et al. "Deep Learning." MIT Press, 2016.
[4] https://spacenetchallenge.github.io/datasets/datasetHomePage.html
[5] Lan, Janice, et al. "LCA: Loss Change Allocation for Neural Network Training." arXiv preprint arXiv:1909.01440 (2019).
[6] https://github.com/jphall663/awesome-machine-learning-interpretability
[7] https://keras.io/
[8] Abadi, Martín, et al. "TensorFlow: Large-scale machine learning on heterogeneous systems, 2015." Software available from tensorflow.org.
[9] http://yann.lecun.com/exdb/mnist/
[10] https://www.cs.toronto.edu/~kriz/cifar.html
Responsible for content: Petr Pošík