Machine Learning group
Our research directions
Probabilistic deep networks
Deep networks with stochastic neurons, as well as probabilistic generative models such as Variational Autoencoders, are becoming increasingly important in deep learning due to the growing appreciation of versatile and expressive learned (latent) representations. Our research in this area focuses on advanced learning algorithms as well as on conceptually novel approaches to semi-supervised learning for such models.
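For orientation, a VAE with encoder $q_\phi(z\mid x)$ and decoder $p_\theta(x\mid z)$ is trained by maximising the evidence lower bound (ELBO); the notation here is the standard one and not tied to any particular paper of the group:

$$
\log p_\theta(x) \;\ge\; \mathbb{E}_{q_\phi(z\mid x)}\bigl[\log p_\theta(x\mid z)\bigr] \;-\; \mathrm{KL}\bigl(q_\phi(z\mid x)\,\big\|\,p(z)\bigr).
$$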
Two-dimensional automata and grammars
The theory of two-dimensional (2D) languages studies automata and grammars for processing 2D arrays of symbols. Our research examines 2D models with applications in structural pattern recognition, for domains such as mathematical formulas (paper), flowcharts (paper) or document layouts (paper). We also study the complexity of pattern matching against 2D languages accepted by various models of 2D automata (paper).
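To make the pattern matching problem concrete, the following minimal Python sketch (purely illustrative, not code from our projects) finds all occurrences of a single fixed rectangular pattern in a picture; in the problems studied above, the single pattern is replaced by a whole 2D language accepted by a 2D automaton, which makes the matching problem substantially harder.

```python
def match_2d(picture, pattern):
    """Return the top-left positions of all occurrences of `pattern` in `picture`.

    Both arguments are rectangular lists of strings (rows of symbols).
    This is the naive O(m*n*k*l) baseline for a single k x l pattern.
    """
    m, n = len(picture), len(picture[0])
    k, l = len(pattern), len(pattern[0])
    hits = []
    for i in range(m - k + 1):
        for j in range(n - l + 1):
            # Compare the pattern row by row against the k x l window at (i, j).
            if all(picture[i + r][j:j + l] == pattern[r] for r in range(k)):
                hits.append((i, j))
    return hits

print(match_2d(["abab", "baba", "abab"], ["ab", "ba"]))  # [(0, 0), (0, 2), (1, 1)]
```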
The Many Facets of Orthomodularity
Czech Science Foundation grant 20-09869L, 2020-22
Quantum physics requires a non-standard model of probability, admitting the description of events which are observable separately, but not simultaneously. Thus the observable events do not form a sigma-algebra, but a more general structure, typically an orthomodular lattice or poset. This makes the development of probability theory more difficult; still, many results have been proved under these weaker assumptions. Among them is the proof (by Bell, Kochen, Specker, and others) that quantum theory cannot be described in classical terms using so-called “hidden variables”, whose existence Albert Einstein wrongly expected. We have contributed improvements of this result. Further research clarifies the algebraic properties of quantum logics.
Voráček, V., Navara, M.: Generalised Kochen–Specker Theorem in three dimensions. Foundations of Physics 51 (2021), 67. DOI 10.1007/s10701-021-00476-3
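As a brief illustration of this non-classical probability model, a state on an orthomodular lattice (or poset) $L$ is a map $s\colon L\to[0,1]$ that is normalised and additive on orthogonal events,

$$
s(1)=1, \qquad s(a\vee b)=s(a)+s(b) \quad \text{whenever } a\le b^{\perp};
$$

when $L$ is a Boolean algebra this reduces to an ordinary finitely additive probability measure.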
Machine learning applications and theory
Our work in this area includes structured output learning ([Franc&Savchynskyy 2008]), learning from weak annotations ([Franc&Cech 2018]), support vector machines ([Franc et al. 2011]), cutting plane algorithms ([Franc&Sonnenburg 2009]), reject option classification ([Franc&Prusa 2019]), face recognition ([Yermakov&Franc 2021]), and computer security ([Bartos et al. 2016]).
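As a toy illustration of one of these topics, reject option classification in its simplest form (Chow's rule) abstains whenever the largest class posterior falls below a threshold; this generic sketch is not the method of the cited papers.

```python
import numpy as np

def predict_with_reject(posteriors, threshold=0.8):
    """Chow's rule: return the arg-max class, or None (reject) if the
    top posterior probability is below `threshold`.

    posteriors: array of shape (n_samples, n_classes), rows summing to 1.
    """
    posteriors = np.asarray(posteriors)
    top = posteriors.max(axis=1)
    labels = posteriors.argmax(axis=1)
    return [int(c) if p >= threshold else None for c, p in zip(labels, top)]

print(predict_with_reject([[0.9, 0.1], [0.55, 0.45]]))  # [0, None]
```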
People
Carlos Bejines López | (postdoc) |
Tomáš Dlask | (postdoc) |
Boris Flach | (assoc. prof.) |
Vojtěch Franc | (assist. prof.) |
Antonín Hruška | (PhD student) |
Mirko Navara | (prof.) |
Jakub Paplhám | (PhD student) |
Daniel Průša | (assoc. prof.) |
Tomáš Werner | (assoc. prof.) |
Courses
winter semester
- Statistical Machine Learning (Open Informatics, master program)
- Optimisation (Open Informatics, bachelor program)
- Graphical Markov Models (PhD course)
- Algorithms (Open Informatics, bachelor program)
- Advanced Algorithms (Open Informatics, master program)
- Numerical Analysis (Open Informatics, bachelor program)
summer semester
- Fuzzy Logic (PhD course)
- Deep Learning (Open Informatics, bachelor and master program)
Available bachelor/master thesis topics
Type | Topic | Supervisor |
---|---|---|
DP,SP | Algebraic and analytic properties of quantum and fuzzy logics | Navara Mirko |
DP,BP,SP | Approximation of values of fuzzy conjunctions | Navara Mirko |
BP,SP | Automatic data extraction from results lists | Průša Daniel |
DP,SP | Deep spatio-temporal models for satellite-based Earth observation | Flach Boris |
DP,SP | Deep stochastic predictors | Flach Boris |
DP,BP,SP | A library of fuzzy conjunctions | Navara Mirko |
DP,SP | Non-existence of hidden variables in quantum physics | Navara Mirko |
DP,BP,SP | Paradoxes in probability theory | Navara Mirko |
DP,SP | Probability on quantum structures | Navara Mirko |
DP,SP | Principles of fuzzy control | Navara Mirko |
DP,BP,SP | Sudoku solver learned from examples | Franc Vojtěch |
DP,SP | Symmetric learning for Variational Autoencoders (VAE) | Flach Boris |