Jan Bayer presents Deep Hough Voting for 3D Object Detection in Point Clouds

On 2020-06-12 11:00:00 at G205, Karlovo náměstí 13, Praha 2
Online reading group on the work "Deep Hough Voting for 3D Object Detection in
Point Clouds" (ICCV 2019) by Charles Qi, Or Litany, Kaiming He, Leonidas J.
Guibas presented by Jan Bayer.

Video conference link: meet.google.com/ecs-waer-gxp
Instructions: http://cmp.felk.cvut.cz/~toliageo/rg/index2.html

Paper abstract: Current 3D object detection methods are heavily influenced by
2D detectors. In order to leverage architectures in 2D detectors, they often
convert 3D point clouds to regular grids (i.e., to voxel grids or to bird’s
eye view images), or rely on detection in 2D images to propose 3D boxes. Few
works have attempted to directly detect objects in point clouds. In this work,
we return to first principles to construct a 3D detection pipeline for point
cloud data that is as generic as possible. However, due to the sparse nature of the
data – samples from 2D manifolds in 3D space – we face a major challenge
when directly predicting bounding box parameters from scene points: a 3D object
centroid can be far from any surface point and is thus hard to regress accurately in
one step. To address the challenge, we propose VoteNet, an end-to-end 3D object
detection network based on a synergy of deep point set networks and Hough
voting. Our model achieves state-of-the-art 3D detection on two large datasets
of real 3D scans, ScanNet and SUN RGB-D, with a simple design, compact model
size and high efficiency. Remarkably, VoteNet outperforms previous methods by
using purely geometric information without relying on color images.
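The voting idea at the heart of the paper can be sketched in a few lines of NumPy. This is a toy illustration only, not the authors' VoteNet implementation: the sphere geometry, noise level, and mean aggregation are invented for demonstration (VoteNet learns per-point offsets with a deep network and clusters votes).

```python
import numpy as np

rng = np.random.default_rng(0)

# Sample points on the surface of a unit sphere (a 2D manifold in 3D
# space) around a hidden centroid -- no surface point lies at the center,
# which is exactly why direct center regression from one point is hard.
center = np.array([1.0, 2.0, 0.5])
dirs = rng.normal(size=(256, 3))
dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
surface = center + dirs

# Each surface point casts a vote: its position plus a predicted offset
# toward the centroid. Here the "predicted" offset is the true offset
# corrupted by Gaussian noise, standing in for a learned regressor.
offsets = (center - surface) + rng.normal(scale=0.05, size=surface.shape)
votes = surface + offsets

# Aggregating the votes (here a simple mean) recovers the centroid far
# more accurately than any single noisy per-point prediction.
estimate = votes.mean(axis=0)
print(np.linalg.norm(estimate - center))
```

Because the individual vote errors are independent, averaging 256 votes shrinks the residual by roughly a factor of 16 relative to a single vote, which is the intuition behind voting rather than one-step regression.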

Paper URL:

Instructions for participants: The reading group studies the literature in the
field of pattern recognition and computer vision. At each meeting one or more
papers are prepared for presentation by a single person, the presenter. The
meetings are open to anyone, regardless of their background. It is assumed that
everyone attending the reading group has, at least briefly, read the paper –
not necessarily understanding everything. Attendants should preferably send
questions about the unclear parts to the speaker at least one day in advance.
During the presentation we aim to have a fruitful discussion, a critical
analysis of the paper, as well as brainstorming for creative extensions.

See the reading group page.
Responsible person: Petr Pošík