John Collomosse (Adobe Research) and Michael Guthe (Bayreuth University) present: Content Provenance: To Authenticity and Beyond! / On-the-fly mesh reconstruction

On 2023-06-29 15:00:00 at KN:E-112 (Vyčichlova knihovna), Karlovo nam. 13, Praha 2
John Collomosse: Content Provenance: To Authenticity and Beyond!

Abstract: Technologies for determining content provenance (‘where did this
image come from?’, ‘what was done to it, and by whom?’) are critical to
establishing attribution and trust in media. Provenance can help society fight
fake news and misinformation by enabling users to make better trust decisions on
content they encounter online. Yet provenance may also be a foundational step
toward new models for our future creative economy. In a future metaverse where
interoperating platforms generate value through the creation and exchange of digital
assets, provenance can help creatives gain recognition for their work and open
the door to decentralized markets for creative content. Tracing the provenance
of synthetic media, too, enables the apportionment of credit to those who contribute
their work for training generative AI. In this talk I will outline Adobe’s
role in work toward an open standard to support content provenance, and research
addressing open challenges around provenance via content fingerprinting and
hashing, generative / synthetic media attribution, as well as distributed trust
models to underwrite the provenance of assets, e.g. via distributed ledger
technology (DLT) and blockchain. I will describe how these technologies can
help fight abuse of digital tools and help recognize and reward creatives for
their contributions to generative AI.
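
As a purely illustrative sketch of the general idea behind content fingerprinting (not the method used by Adobe, the CAI, or the C2PA standard), a simple perceptual "difference hash" in Python shows how a compact, re-encoding-tolerant fingerprint can be derived from an image; the function names and parameters below are hypothetical:

    # Toy difference-hash: a compact fingerprint that survives mild resizing or
    # recompression. Illustrative only; not Adobe's / CAI's fingerprinting method.
    import numpy as np
    from PIL import Image

    def dhash_bits(path, hash_size=8):
        # Downscale to a (hash_size+1) x hash_size grayscale thumbnail.
        img = Image.open(path).convert("L").resize((hash_size + 1, hash_size))
        pixels = np.asarray(img, dtype=np.int16)
        # Each bit records whether a pixel is brighter than its right neighbour.
        return (pixels[:, 1:] > pixels[:, :-1]).flatten()

    def hamming_distance(bits_a, bits_b):
        # Near-duplicate images yield a small Hamming distance, which is the
        # property a provenance system needs to re-associate an asset with its
        # stored manifest even after metadata has been stripped.
        return int(np.count_nonzero(bits_a != bits_b))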

Bio: Prof. John Collomosse is a Principal Scientist at Adobe Research where he
leads the cross-modal representation learning (XRL) group. John’s research
focuses on representation learning for creative visual search and for media
provenance, e.g. robust content fingerprinting and synthetic media attribution.
He is a full professor at the Centre for Vision, Speech and Signal Processing,
University of Surrey (UK) where he founded and co-directs the DECaDE
multi-disciplinary research centre exploring the intersection of AI and
Distributed Ledger Technology to create decentralized platforms for the future
creative economy. John is part of the Adobe-led Content Authenticity Initiative
(CAI) and a contributor to the technical working group of the C2PA open standard for
digital provenance. He is on the ICT and Digital Economy advisory boards of the
UK research council EPSRC.
http://collomosse.com/

-----

Michael Guthe: On-the-fly mesh reconstruction

Abstract: In the last two decades various 3D scanning devices have become
available on the consumer market. Despite the variety of the underlying scanning
technologies, all of them are able to generate 3D point clouds of the environment
at interactive rates. While this is also used for 3D scanning of large
objects, most systems offer only low-quality interactive feedback of the
scanning process. Interactive high-quality feedback, however, allows not only
missing parts that have not yet been scanned to be detected, but also regions where
the accuracy is not as high as desired.
In this talk I will give an overview of the recent work on on-the-fly mesh
generation and processing that has been conducted in Bayreuth in recent
years. Starting from the input point cloud, I will cover reconstruction and
processing algorithms, from calculating and correcting the normals of point clouds
to on-the-fly surface reconstruction. The goal of this pipeline is to offer direct
feedback on the quality of the final mesh during the point cloud capturing
process.
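
The normal-estimation step in such a pipeline is commonly implemented by fitting a plane to each point's local neighbourhood and orienting the result toward the scanner. A minimal sketch follows (the function estimate_normals and its parameters are assumptions for illustration, not necessarily the algorithm developed in Bayreuth):

    # PCA-based point cloud normal estimation: the normal of each point is the
    # direction of least variance among its k nearest neighbours, flipped to
    # face the sensor. Illustrative sketch, not the talk's specific method.
    import numpy as np
    from scipy.spatial import cKDTree

    def estimate_normals(points, sensor_pos, k=16):
        """points: (N, 3) array; sensor_pos: (3,) scanner position; returns (N, 3) unit normals."""
        tree = cKDTree(points)
        _, idx = tree.query(points, k=k)               # k nearest neighbours of every point
        normals = np.empty_like(points, dtype=float)
        for i, nbrs in enumerate(idx):
            nbhd = points[nbrs] - points[nbrs].mean(axis=0)
            _, _, vt = np.linalg.svd(nbhd, full_matrices=False)
            n = vt[-1]                                 # direction of smallest variance
            if np.dot(n, sensor_pos - points[i]) < 0:  # orient toward the scanner
                n = -n
            normals[i] = n
        return normals

In an interactive capture setting, such a computation would typically be restricted to newly acquired points so that the quality feedback stays at interactive rates.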

Bio: Prof. Dr. Michael Guthe received his PhD in Computer Science from the
Rheinische Friedrich-Wilhelms-University Bonn in 2005. In 2007 he became
Assistant Professor for Computer Graphics at Philipps-University Marburg. Since
2012 he has been Professor for Visual Computing at the University of Bayreuth. His research
interests include geometric modeling, global illumination, medical image
processing, visualization, and parallel programming with a focus on GPGPU
computing.
Responsible for content: Petr Pošík