Description: Unmanned Aerial Vehicles (UAVs) are expanding from open outdoor environments into more constrained indoor locations thanks to recent improvements in the accuracy and precision of localization, navigation, and control algorithms. New possibilities for deploying autonomous UAV swarms in indoor environments emerge, leading to the development of high-level mission-oriented algorithms. Our team is particularly interested in the mapping and documentation of historic buildings to assess the condition of the ceiling, murals, statues, stained glass, etc. When a UAV operates indoors, it is vital to prevent collisions between the UAV and obstacles of arbitrary shape. The UAV can be equipped with multiple sensors estimating the distance from the UAV to an obstacle. The sensors work on different principles (LIDARs and passive cameras) to generate a computer representation of the UAV's surroundings. While LIDARs provide accurate and precise distances to obstacles, they measure only in the sensor plane. A monocular camera, on the other hand, can detect obstacles in the whole area in front of the UAV, but cannot provide depth information.
The goal of this project is to develop a collision avoidance system. The technique will segment the camera image to obtain position estimates of objects in the axes of the camera image plane. The distance from the UAV to each object will be estimated by finding correspondences between objects in the image and laser-scan points. The trajectory of the UAV will then be modified to avoid collision with the obstacle.
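One way the image/laser-scan correspondence step could be sketched: project the planar LIDAR returns into the camera image using the camera intrinsics and the LIDAR-to-camera extrinsic transform, then assign each segmented object the depth of the scan points that fall inside its image region. The function names, the bounding-box representation of a segmented object, and the minimum-depth heuristic below are all illustrative assumptions, not the project's prescribed method:

```python
import numpy as np

def project_scan_to_image(scan_ranges, scan_angles, K, T_cam_lidar):
    """Project planar LIDAR returns into the camera image.

    scan_ranges, scan_angles: 1-D arrays describing the 2-D scan
        (LIDAR frame assumed: x forward, y left, points in the z = 0 plane).
    K: 3x3 camera intrinsic matrix.
    T_cam_lidar: 4x4 homogeneous transform from LIDAR frame to camera frame.
    Returns pixel coordinates (2 x M) and depths (M,) of points in front
    of the camera.
    """
    pts = np.stack([scan_ranges * np.cos(scan_angles),
                    scan_ranges * np.sin(scan_angles),
                    np.zeros_like(scan_ranges),
                    np.ones_like(scan_ranges)], axis=0)   # 4 x N homogeneous
    cam_pts = (T_cam_lidar @ pts)[:3]                     # 3 x N in camera frame
    in_front = cam_pts[2] > 0.1                           # drop points behind camera
    cam_pts = cam_pts[:, in_front]
    uv = K @ cam_pts
    uv = uv[:2] / uv[2]                                   # perspective division -> pixels
    return uv, cam_pts[2]

def object_distance(bbox, uv, depths):
    """Estimate obstacle distance as the minimum depth of scan points
    falling inside the segmented object's bounding box (u0, v0, u1, v1).
    Returns None when no scan point hits the object (e.g. an obstacle
    above or below the LIDAR's scanning plane)."""
    u0, v0, u1, v1 = bbox
    inside = (uv[0] >= u0) & (uv[0] <= u1) & (uv[1] >= v0) & (uv[1] <= v1)
    return float(depths[inside].min()) if inside.any() else None

# Illustrative setup: LIDAR (x fwd, y left, z up) -> camera (x right, y down, z fwd)
R = np.array([[0.0, -1.0, 0.0],
              [0.0,  0.0, -1.0],
              [1.0,  0.0,  0.0]])
T_cam_lidar = np.eye(4)
T_cam_lidar[:3, :3] = R
K = np.array([[500.0, 0.0, 320.0],
              [0.0, 500.0, 240.0],
              [0.0,   0.0,   1.0]])

ranges = np.array([2.0, 3.0])      # one return 2 m straight ahead, one off to the side
angles = np.array([0.0, 0.3])
uv, depths = project_scan_to_image(ranges, angles, K, T_cam_lidar)
dist = object_distance((300, 220, 340, 260), uv, depths)   # box around image centre
```

The `None` return value highlights the limitation stated above: a single scan plane cannot range obstacles outside that plane, which is exactly why the camera segmentation is needed to flag them.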