Machine Learning

Machine Learning group


Our research directions


Probabilistic deep networks

Deep networks with stochastic neurons, as well as probabilistic generative models like Variational Autoencoders, are becoming increasingly important in deep learning due to the growing understanding of the importance of versatile and expressive learned (latent) representations. Our research in this area focuses on advanced learning algorithms as well as on conceptually novel approaches to semi-supervised learning for such models.
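For readers unfamiliar with these models, the following is a minimal sketch of a Variational Autoencoder in PyTorch. The layer sizes, variable names and training objective shown here are illustrative assumptions, not the group's actual models or code.

import torch
import torch.nn as nn
import torch.nn.functional as F

class VAE(nn.Module):
    # Minimal Gaussian-latent VAE: the encoder outputs a mean and log-variance,
    # a latent sample is drawn via the reparameterisation trick, and the
    # decoder reconstructs the input.
    def __init__(self, x_dim=784, h_dim=400, z_dim=20):
        super().__init__()
        self.enc = nn.Linear(x_dim, h_dim)
        self.mu = nn.Linear(h_dim, z_dim)
        self.logvar = nn.Linear(h_dim, z_dim)
        self.dec1 = nn.Linear(z_dim, h_dim)
        self.dec2 = nn.Linear(h_dim, x_dim)

    def forward(self, x):
        h = F.relu(self.enc(x))
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)  # reparameterisation
        x_rec = torch.sigmoid(self.dec2(F.relu(self.dec1(z))))
        return x_rec, mu, logvar

def neg_elbo(x, x_rec, mu, logvar):
    # Negative evidence lower bound: reconstruction term plus the KL divergence
    # between the approximate posterior and the standard normal prior.
    rec = F.binary_cross_entropy(x_rec, x, reduction='sum')
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return rec + kl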


Two-dimensional automata and grammars

The theory of two-dimensional (2D) languages studies automata and grammars for processing 2D arrays of symbols. Our research examines 2D models with applications in structural pattern recognition, for domains such as mathematical formulas (paper), flowcharts (paper) or document layouts (paper). We also study the complexity of pattern matching against 2D languages accepted by various models of 2D automata (paper).
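As a toy illustration of the 2D setting (much simpler than matching against a 2D language, which requires running a 2D automaton or grammar), the sketch below finds all occurrences of a fixed rectangular pattern in a 2D array of symbols. The function name and the brute-force approach are illustrative assumptions, not the group's algorithms.

def match_2d(picture, pattern):
    # Brute-force search for a rectangular pattern inside a 2D array of symbols.
    # Returns the list of top-left positions where the pattern occurs.
    R, C = len(picture), len(picture[0])
    r, c = len(pattern), len(pattern[0])
    hits = []
    for i in range(R - r + 1):
        for j in range(C - c + 1):
            if all(picture[i + k][j:j + c] == pattern[k] for k in range(r)):
                hits.append((i, j))
    return hits

picture = [".xx..",
           ".xx..",
           "....."]
pattern = ["xx",
           "xx"]
print(match_2d(picture, pattern))  # [(0, 1)]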


The Many Facets of Orthomodularity

Czech Science Foundation grant 20-09869L, 2020-22

Quantum physics requires a non-standard model of probability, admitting the description of events which are observable separately, but not simultaneously. Thus the set of all observable events does not form a sigma-algebra, but a more general structure, typically an orthomodular lattice or poset. This makes the development of probability theory more difficult; still, many results have been proved under these weaker assumptions. Among them is the proof (by Bell, Kochen, Specker, and others) that quantum theory cannot be described in classical terms using so-called “hidden variables”, whose existence was wrongly conjectured by Albert Einstein. We have contributed to improvements of this result. Our further research clarifies the algebraic properties of quantum logics.
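For orientation, the orthomodular law that gives these structures their name is the following weakening of the distributive law (stated from the standard definition, not quoted from the grant text):

a \le b \;\Longrightarrow\; b = a \vee (a^{\perp} \wedge b),

where a^{\perp} denotes the orthocomplement of a. Every Boolean algebra (hence every sigma-algebra of events) satisfies it, while a general orthomodular lattice or poset need not be distributive.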

Voráček, V., Navara, M.: Generalised Kochen–Specker Theorem in three dimensions. Foundations of Physics 51 (2021), 67. DOI 10.1007/s10701-021-00476-3


Machine learning applications and theory

Structured output learning ([Franc & Savchynskyy 2008]), learning from weak annotations ([Franc & Cech 2018]), support vector machines ([Franc et al. 2011]), cutting plane algorithms ([Franc & Sonnenburg 2009]), reject option classification ([Franc & Prusa 2019]), face recognition ([Yermakov & Franc 2021]), computer security ([Bartos et al. 2016]).
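As a small, self-contained illustration of one of these topics, the sketch below shows a simple plug-in rule for reject option classification: predict the most probable class, or abstain when the estimated posterior confidence falls below a threshold. The threshold value, the toy data and the use of scikit-learn's LogisticRegression are assumptions for illustration only, not the group's methods.

import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

# Toy data and a probabilistic classifier (illustrative choices only).
X, y = make_classification(n_samples=500, n_features=10, random_state=0)
clf = LogisticRegression().fit(X, y)

def predict_with_reject(model, X, threshold=0.8):
    # Plug-in reject rule: output the most probable class if its estimated
    # posterior exceeds the threshold, otherwise abstain (label -1).
    proba = model.predict_proba(X)
    labels = proba.argmax(axis=1)
    labels[proba.max(axis=1) < threshold] = -1
    return labels

pred = predict_with_reject(clf, X)
print("rejected fraction:", np.mean(pred == -1))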

Tomáš Dlask (postdoc)
Boris Flach (assoc. prof.)
Vojtěch Franc (assist. prof.)
Antonín Hruška (PhD student)
Mirko Navara (prof.)
Jakub Paplhám (PhD student)
Daniel Průša (assoc. prof.)
Tomáš Werner (assoc. prof.)

Courses

winter semester
summer semester


Available bachelor/master thesis topics

Responsible person: ML Group Editor