Giulia D'Angelo presents "Event-driven figure-ground organisation model for the humanoid robot iCub"
On 2025-05-15 at 11:00 in room G102A, Karlovo náměstí 13, Praha 2
Citation: D’Angelo, G., Voto, S., Iacono, M. et al. Event-driven figure-ground
organisation model for the humanoid robot iCub. Nat Commun 16, 1874 (2025).
https://doi.org/10.1038/s41467-025-56904-9
Abstract:
Figure-ground organisation is a perceptual grouping mechanism for detecting
objects and boundaries, essential for an agent interacting with the environment.
Current figure-ground segmentation methods rely on classical computer vision or
deep learning, requiring extensive computational resources, especially during
training. Inspired by the primate visual system, we developed a bio-inspired
perception system for the neuromorphic robot iCub. The model uses a
hierarchical, biologically plausible architecture and event-driven vision to
distinguish foreground objects from the background. Unlike classical approaches,
event-driven cameras reduce data redundancy and computation. The system has been
qualitatively and quantitatively assessed in simulations and with event-driven
cameras on iCub in various scenarios. It successfully segments items in diverse
real-world settings, showing comparable results to its frame-based version on
simple stimuli and the Berkeley Segmentation dataset. This model enhances hybrid
systems, complementing conventional deep learning models by processing only
relevant data in Regions of Interest (ROI), enabling low-latency autonomous
robotic applications.
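The paper's hierarchical model is not reproduced here, but the core event-driven idea the abstract describes can be illustrated with a minimal Python sketch: bin sparse (x, y, t, polarity) events from an event camera into a per-pixel count map, then label the densest region as "figure" and the rest as "ground". The resolution, threshold, function names, and synthetic event stream below are all illustrative assumptions; the actual model uses a biologically plausible architecture rather than a simple threshold.

# Minimal sketch (not the authors' model): accumulate DVS-style events
# and extract a crude figure-ground mask. All parameters are assumptions.
import numpy as np

H, W = 64, 64  # assumed sensor resolution

def accumulate(events, shape=(H, W)):
    """Bin (x, y, t, polarity) events into a per-pixel count map."""
    counts = np.zeros(shape, dtype=np.int32)
    for x, y, _t, _p in events:
        counts[y, x] += 1
    return counts

def figure_ground(counts, thresh=2):
    """Label pixels with enough events as figure, the rest as ground.
    The paper uses a hierarchical, bio-inspired architecture; this
    fixed threshold is only a stand-in for illustration."""
    return counts >= thresh

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Synthetic stream: a moving object fires dense events in one patch,
    # while sensor noise fires sparse events everywhere else.
    obj = [(x, y, 0.0, 1) for x in range(20, 30)
           for y in range(20, 30) for _ in range(3)]
    noise = [(int(rng.integers(W)), int(rng.integers(H)), 0.0, 1)
             for _ in range(200)]
    mask = figure_ground(accumulate(obj + noise))
    print("figure pixels:", int(mask.sum()))

Because only pixels that generate events are processed, such a pipeline touches far less data than a frame-based segmenter, which is the redundancy reduction the abstract highlights.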
External www: https://www.nature.com/articles/s41467-025-56904-9