Pavel Petracek presents RMS: Redundancy-Minimizing Point Cloud Sampling for Real-Time Pose Estimation

On 2024-06-04 at 11:00 in E112, Karlovo náměstí 13, Praha 2
In the talk, we will introduce our novel method for uninformed and
deterministic sampling of structured 3D point clouds. We will explain how the
method minimizes point redundancy within a point cloud and show that, although
it yields the highest compression rate among state-of-the-art works, it still
improves the accuracy and lowers the computational delay of real-time
LiDAR-based pose estimation pipelines.

Abstract:

The typical point cloud sampling methods used in state estimation for mobile
robots preserve a high level of point redundancy. This redundancy unnecessarily
slows down the estimation pipeline and may cause drift under real-time
constraints. Such undue latency becomes a bottleneck for resource-constrained
robots (especially UAVs), which require minimal delay for agile and accurate
operation. We propose a novel, deterministic, uninformed, and single-parameter
point cloud sampling method named RMS that minimizes redundancy within a 3D
point cloud. In contrast to the state of the art, RMS balances
translation-space observability by exploiting the fact that linear and planar
surfaces inherently exhibit high redundancy, which propagates into iterative
estimation pipelines. We define the concept of gradient flow, which quantifies
the local surface geometry underlying a point, and show that maximizing the
entropy of the gradient flow minimizes point redundancy for robot ego-motion
estimation. We integrate
RMS into the point-based KISS-ICP and feature-based LOAM odometry pipelines and
evaluate them experimentally on KITTI, Hilti-Oxford, and custom datasets from
multirotor UAVs. The experiments demonstrate that RMS outperforms
state-of-the-art methods in speed, compression, and accuracy in
well-conditioned as well as geometrically degenerate settings.
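To illustrate the entropy-maximization idea described above, here is a minimal
sketch, not the authors' implementation from the repository: it assumes a
per-point gradient-flow scalar has already been computed upstream, bins those
values into a histogram, and keeps points evenly across bins so that the
histogram of the retained subset is as flat (high-entropy) as possible. The
function and parameter names (entropy_maximizing_sample, n_bins) are
illustrative only.

import numpy as np

def entropy_maximizing_sample(points, gradient_flow, n_keep, n_bins=16):
    """Keep n_keep points so that the histogram of their gradient-flow
    values is as flat (high-entropy) as possible.

    points        : (N, 3) array of 3D points
    gradient_flow : (N,) per-point scalar, assumed computed upstream
    n_keep        : number of points to retain
    """
    # Bin the scalar into n_bins equal-width bins.
    edges = np.linspace(gradient_flow.min(), gradient_flow.max(), n_bins + 1)
    bin_ids = np.clip(np.digitize(gradient_flow, edges[1:-1]), 0, n_bins - 1)
    per_bin = [np.flatnonzero(bin_ids == b) for b in range(n_bins)]

    # Round-robin over bins: drawing points evenly from every bin flattens
    # the histogram of the kept subset, which maximizes its entropy.
    selected, cursor = [], [0] * n_bins
    while len(selected) < n_keep:
        progressed = False
        for b in range(n_bins):
            if cursor[b] < len(per_bin[b]) and len(selected) < n_keep:
                selected.append(per_bin[b][cursor[b]])
                cursor[b] += 1
                progressed = True
        if not progressed:  # every bin exhausted
            break
    return points[np.array(selected, dtype=int)]

In the actual RMS pipeline the gradient flow is derived from the local surface
geometry of the point cloud; here it is taken as given to keep the sketch
self-contained and deterministic.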

Paper, code and video: https://github.com/ctu-mrs/RMS

Standard seminar length: 30-40 min talk, 20 min discussion
Responsible for content: Petr Pošík