Tammy R. Raviv presents Diving Deep into Cell Segmentation in Microscopy Videos

On 2019-05-31 11:00:00 at G205, Karlovo náměstí 13, Praha 2
The analysis of cluttered objects in a video sequence is a challenging task,
particularly in the presence of complex spatial structures
and complicated temporal changes. We present a Deep Neural Network framework
that addresses two aspects of object segmentation within video sequences,
namely, the inherent dependencies between video frames and the
evaluation of segmentation results. We propose the integration of the U-Net
architecture (Ronneberger et al.) with Convolutional Long Short-Term Memory
(C-LSTM). The segmentation network's unique architecture enables it to capture
a compact, multi-scale, spatio-temporal encoding of the objects in the
C-LSTM's memory units. The
proposed network exploits temporal cues which facilitate the individual
segmentation of touching or partially occluded objects. The method was applied
to live cell microscopy data and tested on the common cell segmentation
benchmark, the Cell Tracking Challenge (www.celltrackingchallenge.net), and
ranked 1st and 2nd on two challenging datasets.

We further present a novel
method for the quality assurance (QA) of segmentation methods, the QANet. The
network, based on our novel RibCage architecture, estimates the Intersection
over Union (IoU) of a proposed segmentation without the need for ground-truth
annotations. The RibCage network is suited for this task because it is
designed such that multi-level features of both the image and the segmentation
map are compared at multiple scales, allowing for the extraction of complex
joint representations.
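For context, the Intersection over Union that QANet learns to estimate is straightforward to compute when ground truth is available; QANet's contribution is predicting it from the image and the proposed segmentation alone. A minimal sketch of the metric itself (the function name and toy masks below are illustrative, not from the talk):

```python
import numpy as np

def intersection_over_union(pred, gt):
    """IoU between two binary segmentation masks.

    This is the quantity QANet is trained to estimate directly from the
    image and a proposed segmentation, without access to `gt`.
    """
    pred = np.asarray(pred, dtype=bool)
    gt = np.asarray(gt, dtype=bool)
    intersection = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    if union == 0:            # both masks empty: define IoU as 1.0
        return 1.0
    return intersection / union

# Toy example: two overlapping square "cells" on a 6x6 grid.
gt = np.zeros((6, 6), dtype=bool)
gt[1:4, 1:4] = True           # ground-truth mask, 9 pixels
pred = np.zeros((6, 6), dtype=bool)
pred[2:5, 2:5] = True         # proposed mask, shifted by one pixel
print(intersection_over_union(pred, gt))  # 4 / 14, roughly 0.286
```

The 4-pixel overlap and 14-pixel union give an IoU of about 0.286, the kind of per-object quality score QANet outputs without ever seeing the annotation.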
Responsible person: Petr Pošík