Description: Autonomous driving perception needs to detect objects with varying states and visual representations. The usual approach is to train an object detector on a large amount of annotated data. This project instead leverages the motion sequences of points observed by a LiDAR sensor: we aim to learn object classes from interpretable dynamic motion properties rather than from their visual appearance. We will minimize the need for manual annotations by exploiting unlabelled data together with the physical principles of motion and LiDAR scanning.
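To make the idea concrete, a minimal sketch of using point motion as a label-free cue: estimate per-point displacement between two consecutive LiDAR scans and separate dynamic from static points by motion magnitude. This is only an illustration under simplifying assumptions (brute-force nearest-neighbor association as a crude stand-in for learned scene flow, a hand-picked speed threshold), not the project's actual method.

```python
import numpy as np

def nearest_neighbor_flow(scan_t, scan_t1):
    """Crude per-point motion estimate: displacement to the nearest
    neighbor in the next scan (O(N*M) brute force, toy scale only)."""
    d2 = ((scan_t[:, None, :] - scan_t1[None, :, :]) ** 2).sum(-1)
    nn = d2.argmin(axis=1)
    return scan_t1[nn] - scan_t

def dynamic_mask(scan_t, scan_t1, thresh=0.2):
    """Boolean mask of points whose estimated motion exceeds `thresh` meters."""
    flow = nearest_neighbor_flow(scan_t, scan_t1)
    return np.linalg.norm(flow, axis=1) > thresh

# Toy scene: 50 static points plus a small cluster that moves 1 m in x.
rng = np.random.default_rng(0)
static = rng.uniform(-5.0, 5.0, size=(50, 3))
mover = rng.normal([20.0, 0.0, 0.0], 0.05, size=(10, 3))  # well away from static points
scan_t = np.vstack([static, mover])
scan_t1 = np.vstack([static, mover + [1.0, 0.0, 0.0]])

mask = dynamic_mask(scan_t, scan_t1)
# Static points match themselves exactly (zero flow); the mover cluster is flagged.
```

Such motion masks could then seed clustering or pseudo-labels for training a detector without manual annotation, which is the spirit of the proposed approach.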