Student project details

Topic: Improved software package for quantized neural networks
Department: Department of Cybernetics
Supervisor: Mgr. Oleksandr Shekhovtsov, Ph.D.
Offered as: Master's thesis, Semester project
Description: To improve the speed and energy efficiency of neural networks deployed in mobile, robotics, surveillance, and similar applications, the network weights and activations can be quantized. In the 'Quant' library we implement state-of-the-art and new, theoretically principled methods for quantized training and adaptation.
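To make the core idea concrete, here is a minimal sketch of 1-bit weight/activation binarization trained with a straight-through gradient estimator (cf. [2]). This is an illustration only, not the Quant library's actual API; all names are ours:

```python
import torch

class BinarizeSTE(torch.autograd.Function):
    """Binarize to {-1, +1} in the forward pass; pass the gradient
    straight through in the backward pass, clipped to the linear region."""

    @staticmethod
    def forward(ctx, x):
        ctx.save_for_backward(x)
        return torch.sign(x)

    @staticmethod
    def backward(ctx, grad_out):
        (x,) = ctx.saved_tensors
        # Straight-through estimator: identity gradient inside [-1, 1], zero outside.
        return grad_out * (x.abs() <= 1).to(grad_out.dtype)

x = torch.tensor([-2.0, -0.3, 0.5, 1.7], requires_grad=True)
y = BinarizeSTE.apply(x)   # forward: [-1, -1, 1, 1]
y.sum().backward()         # backward: gradient is 1 where |x| <= 1, else 0
```

The non-differentiable `sign` is replaced by an identity in the backward pass, which is what lets such networks be trained with ordinary SGD.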

Tasks:
Contribute to the development of the 'quant' library: a PyTorch-based library for training neural networks with quantized weights and activations at low bit resolution (down to 1 bit). While quantized neural networks have the potential for fast execution on various edge devices, the library must above all provide sufficiently fast training on GPUs. We want to achieve fast training and the best performance at any specified quantization level. In the project:
- Analyze variants of how to implement propagation through a network consisting of elementary layers with the quantization method determined at runtime.
- Profile the training loop to identify performance bottlenecks.
- Improve the inference speed by obtaining the test-time equivalent quantized model. Each model should have an .inference() method.
- Improve code efficiency by using computation streams and C++ extensions implementing essential computation blocks using the A10 library, and possibly a C++ CUDA extension for the most critical operations, if such are identified.
- Extend the library to support additional methods.
- Test the library on a realistic large classification problem, e.g. surveillance.
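For the first task, one natural design is a registry that resolves the quantization method by name at runtime, so the same layer code serves any quantization level. A hedged sketch; the registry, class, and method names below are illustrative assumptions, not the library's API:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Hypothetical registry mapping a method name to a weight quantizer.
QUANTIZERS = {
    "none": lambda w: w,                                   # full precision
    "sign": lambda w: torch.sign(w),                       # 1-bit weights
    "int2": lambda w: torch.clamp(torch.round(w * 2), -2, 1) / 2,  # 2-bit grid
}

class QuantLinear(nn.Linear):
    """Linear layer whose weight quantizer is chosen by name at runtime."""

    def __init__(self, in_features, out_features, method="sign"):
        super().__init__(in_features, out_features, bias=False)
        self.method = method

    def forward(self, x):
        w_q = QUANTIZERS[self.method](self.weight)
        return F.linear(x, w_q)

layer = QuantLinear(4, 3, method="sign")
out = layer(torch.randn(2, 4))
```

Switching `layer.method` swaps the quantizer without touching the layer's code path, which is the property the task asks to analyze (against alternatives such as subclass-per-method or module swapping).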
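For the profiling task, PyTorch's built-in profiler is an obvious starting point. A minimal sketch of profiling a toy training loop on CPU (on GPU one would add `ProfilerActivity.CUDA`):

```python
import torch
from torch.profiler import profile, ProfilerActivity

model = torch.nn.Sequential(torch.nn.Linear(64, 64), torch.nn.ReLU())
x = torch.randn(32, 64)

# Record a few forward/backward passes.
with profile(activities=[ProfilerActivity.CPU]) as prof:
    for _ in range(5):
        loss = model(x).sum()
        loss.backward()

# Aggregate per-operator statistics to spot bottlenecks.
report = prof.key_averages().table(sort_by="cpu_time_total", row_limit=10)
print(report)
```

The aggregated table makes it immediately visible which elementary operations dominate the training step and are therefore candidates for the C++/CUDA extensions mentioned above.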
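The test-time conversion task could look like the following sketch: during training the layer quantizes its latent real-valued weights on every forward pass, while `.inference()` (the method name comes from the task description; everything else here is an assumption) returns an equivalent frozen model with the weights quantized once, so no quantization cost is paid at test time:

```python
import torch
import torch.nn as nn

class BinaryLinear(nn.Module):
    """Trains with real-valued latent weights, binarizing them on every forward."""

    def __init__(self, in_features, out_features):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(out_features, in_features))

    def forward(self, x):
        return x @ torch.sign(self.weight).t()

    def inference(self):
        """Return a test-time equivalent model: weights quantized once and frozen."""
        out_f, in_f = self.weight.shape
        frozen = nn.Linear(in_f, out_f, bias=False)
        with torch.no_grad():
            frozen.weight.copy_(torch.sign(self.weight))
        for p in frozen.parameters():
            p.requires_grad_(False)
        return frozen
```

The frozen model is a plain `nn.Linear`, so it is also a convenient starting point for export to edge-device runtimes.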
Literature:
[0] Binary Neural Networks as a general-propose compute paradigm for on-device computer vision
https://arxiv.org/abs/2202.03716
[1] Path Sample-Analytic Gradient Estimators for Stochastic Binary Networks
https://arxiv.org/abs/2006.03143
[2] Reintroducing Straight-Through Estimators as Principled Methods for Stochastic Binary Networks
https://arxiv.org/abs/2006.06880
[3] Straight-Through Top-to-Bottom: A General Formalization for Binary, Quantized and Categorical Variables (draft)
[4] Relaxed Quantization for Discretized Neural Networks
https://arxiv.org/abs/1810.01875
[5] A Survey of Quantization Methods for Efficient Neural Network Inference
https://arxiv.org/abs/2103.13630
Responsible for content: Petr Pošík