The 3rd SHApe Recovery from Partial Textured 3D Scans (SHARP) Workshop and Challenge

SHARP 2022

Organizers

A total of 8,000 € in cash prizes, sponsored by Artec3D, will be awarded to the winners.

This workshop will host a competition focusing on the reconstruction of full high-resolution 3D meshes from partial or noisy 3D scans. It includes 2 challenges and 3 datasets: 

  • The first challenge consists of the recovery of textured 3D scans from partial acquisition. It involves 2 tracks: 
    • Track 1: Recovering textured human body scans from partial acquisitions. The dataset used in this scope is 3DBodyTex.v2, containing 2500 textured 3D scans. It is an extended version of the original 3DBodyTex.v1 dataset, first published at the 2018 International Conference on 3D Vision (3DV 2018).
    • Track 2: Recovering textured object scans from partial acquisitions. It involves the recovery of generic object scans from the 3DObjTex.v1 dataset, a subset of the ViewShape online repository of 3D scans. This dataset contains over 2000 generic objects with varying levels of complexity in texture and geometry.
  • The second challenge focuses on the recovery of fine object details, in the form of sharp edges, from noisy sparse scans with smooth edges. The CC3D-PSE dataset, a new version of the CC3D dataset introduced at the 2020 IEEE International Conference on Image Processing (ICIP), will be used for this purpose. It contains over 50k pairs of CAD models and their corresponding 3D scans, with each pair annotated with parametric sharp edges. Given a 3D scan with smooth edges, the goal is to reconstruct the corresponding CAD model as a triangular mesh whose sharp edges approximate the ground-truth sharp edges. The second challenge involves 2 tracks: 
    • Track 1: Recovering linear sharp edges. This track uses the subset of the CC3D-PSE dataset that contains only linear sharp edges.
    • Track 2: Recovering sharp edges as linear, circular, and spline segments. The whole CC3D-PSE dataset is used in this track.

This is the third edition of SHARP, after two successful editions in conjunction with CVPR 2021 and ECCV 2020.


Call for Participation (Challenges)

Challenge
Textured Partial Scan Completion

The task of this challenge is to accurately reconstruct a full 3D textured mesh from a partial 3D scan. It involves 2 tracks:

Track 1: Body shapes
Track 2: Object Scans

Challenge
Sharp Edge Recovery

Given a 3D object scan with smooth edges, the goal of this challenge is to reconstruct the corresponding CAD model as a triangular mesh with sharp edges approximating the ground-truth sharp edges.

Track 1: Sharp lines
Track 2: Sharp edges (circles, splines, lines)

 

Timeline


Textured Partial Scan Completion

Challenge 1

Track 1

Recovering textured human body scans from partial acquisitions. The dataset used in this scope is 3DBodyTex.v2, containing 2500 textured 3D scans. It is an extended version of the original 3DBodyTex.v1 dataset, first published at the 2018 International Conference on 3D Vision (3DV 2018).


Track 2

Recovering textured object scans from partial acquisitions. It involves the recovery of generic object scans from the 3DObjTex.v1 dataset, a subset of the ViewShape online repository of 3D scans. This dataset contains over 2000 generic objects with varying levels of complexity in texture and geometry.

  • Any custom procedure should be reported, with its description and implementation included among the deliverables.
  • A quality check is performed to keep defects at a reasonable level.
  • The partial scans are generated synthetically.
  • For privacy reasons, all meshes are anonymized by blurring the shape and texture of the faces, as in the 3DBodyTex data.
  • During evaluation, the face and hands are ignored because their shape in raw scans is less reliable.

New: In addition to the routines for partial data generation provided in the previous editions (SHARP 2020 and SHARP 2021), more realistic partial data generation routines have been put in place for this edition. Samples of partial scans from Track 1 and Track 2 can be found above.
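The official partial-data generation routines are distributed by the organizers with the challenge data. Purely for illustration, one naive strategy for synthesizing a partial scan is to keep only the points nearest a virtual camera along a view direction; the function below is an assumption for illustration, not the official routine:

```python
import numpy as np

def synthetic_partial_scan(vertices, view_dir, keep_ratio=0.5):
    """Crop a point set to simulate a partial scan seen from one side.

    Keeps the fraction `keep_ratio` of points closest to a virtual camera
    looking along `view_dir` (a crude stand-in for visibility culling).
    """
    vertices = np.asarray(vertices, dtype=float)
    view_dir = np.asarray(view_dir, dtype=float)
    view_dir /= np.linalg.norm(view_dir)
    depth = vertices @ view_dir                 # signed depth along the view axis
    cutoff = np.quantile(depth, keep_ratio)     # keep the nearest keep_ratio points
    return vertices[depth <= cutoff]

# Toy example: points on a unit sphere, "scanned" from the -z side.
rng = np.random.default_rng(0)
pts = rng.normal(size=(1000, 3))
pts /= np.linalg.norm(pts, axis=1, keepdims=True)
partial = synthetic_partial_scan(pts, view_dir=[0, 0, 1], keep_ratio=0.5)
```

Real scanner simulation would also account for self-occlusion and sensor noise, which a simple depth cut ignores.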

Sharp Edge Recovery

Challenge 2

New: This challenge introduces the CC3D-PSE dataset, a new version of the CC3D dataset used in SHARP 2021. CC3D-PSE consists of:

  • 50k+ pairs of scans and CAD models as triangular meshes
  • Sharp edge annotations provided as parametric curves including linear, circular, and spline segments
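To illustrate how such parametric annotations can be consumed (for example, to discretize predicted and ground-truth edges before comparing them), the sketch below samples points along linear and circular segments. The function names and parametrization are assumptions for illustration, not the official CC3D-PSE format:

```python
import numpy as np

def sample_line(p0, p1, n=50):
    """Sample n points uniformly on a linear edge segment from p0 to p1."""
    t = np.linspace(0.0, 1.0, n)[:, None]
    return (1 - t) * np.asarray(p0, float) + t * np.asarray(p1, float)

def sample_arc(center, radius, u, v, theta0, theta1, n=50):
    """Sample a circular edge segment lying in the plane spanned by the
    orthonormal vectors u, v, centred at `center`, between angles theta0
    and theta1 (radians)."""
    theta = np.linspace(theta0, theta1, n)[:, None]
    u, v = np.asarray(u, float), np.asarray(v, float)
    return np.asarray(center, float) + radius * (np.cos(theta) * u + np.sin(theta) * v)

# Example: a unit segment along x, and a quarter circle in the xy-plane.
line_pts = sample_line([0, 0, 0], [1, 0, 0])
arc_pts = sample_arc([0, 0, 0], 1.0, [1, 0, 0], [0, 1, 0], 0.0, np.pi / 2)
```

Spline segments would be handled analogously by evaluating the spline basis at sampled parameter values.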

Track 1

Recovering linear sharp edges. This track uses the subset of the CC3D-PSE dataset that contains only linear sharp edges.


Track 2

Recovering sharp edges as linear, circular, and spline segments. The whole CC3D-PSE dataset is used in this track.


Leaderboard

The competition is also hosted on CodaLab, where a development phase with evaluation samples and metrics is running. During this phase, participants can track their ranking relative to other participants by submitting predictions on the evaluation samples provided on CodaLab. More details can be found in the following links:
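The official evaluation metrics are specified on the competition pages. As a generic illustration of the kind of surface-distance measure commonly used for such shape-recovery tasks (not the official SHARP metric), a brute-force symmetric Chamfer distance between sampled surface points can be sketched as:

```python
import numpy as np

def chamfer_distance(a, b):
    """Symmetric Chamfer distance between point sets a (N, 3) and b (M, 3).

    Brute-force nearest neighbours; fine for small evaluation samples,
    but a k-d tree would be needed for dense scans.
    """
    a, b = np.asarray(a, float), np.asarray(b, float)
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)  # (N, M) pairwise distances
    return d.min(axis=1).mean() + d.min(axis=0).mean()

# Identical point sets are at distance zero.
pts = np.random.default_rng(0).normal(size=(100, 3))
print(chamfer_distance(pts, pts))  # → 0.0
```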

Programme

SHARP will be held on 19 June 2022.
The workshop will follow a hybrid format.

13:30 – 13:35  Opening
13:35 – 13:50  Presentation of SHARP Challenges
13:50 – 14:40  Plenary Talk – Prof. Angela Dai
14:40 – 14:55  Coffee Break
14:55 – 15:15  Finalists 1: Points2ISTF – Implicit Shape and Texture Field from Partial Point Clouds – Jianchuan Chen
15:15 – 15:35  Finalists 2: 3D Textured Shape Recovery with Learned Geometric Priors – Lei Li
15:35 – 15:55  Finalists 3: Parametric Sharp Edges from 3D Scans – Anis Kacem
15:55 – 16:45  Plenary Talk – Prof. Tolga Birdal
16:45 – 16:55  Announcement of Results
16:55 – 17:10  Analysis of Results
17:10 – 17:30  Panel Discussion
17:30 – 17:35  Closing Remarks
Invited Speakers

Prof. Angela Dai

Technical University of Munich


Prof. Tolga Birdal

Imperial College London

Bio: Angela Dai is an Assistant Professor at the Technical University of Munich, where she leads the 3D AI group. Prof. Dai’s research focuses on understanding how the 3D world around us can be modeled and semantically understood. Previously, she received her PhD in computer science from Stanford in 2018 and her BSE in computer science from Princeton in 2013. Her research has been recognized through a Eurographics Young Researcher Award, a ZDB Junior Research Group Award, an ACM SIGGRAPH Outstanding Doctoral Dissertation Honorable Mention, and a Stanford Graduate Fellowship.

Bio: Tolga Birdal is an Assistant Professor in the Department of Computing at Imperial College London. Previously, he was a senior postdoctoral research fellow at Stanford University within the Geometric Computing Group of Prof. Leonidas Guibas. Tolga defended his master’s and Ph.D. theses at the Computer Vision Group of the Chair for Computer Aided Medical Procedures, Technical University of Munich, led by Prof. Nassir Navab. He was also a Doktorand at Siemens AG under the supervision of Dr. Slobodan Ilic, working on “Geometric Methods for 3D Reconstruction from Large Point Clouds”. His current interests involve geometric machine learning and 3D computer vision. His more theoretical work investigates the limits of geometric computing and non-Euclidean inference, as well as principles of deep learning. Tolga has several publications at well-respected venues such as NeurIPS, CVPR, ICCV, ECCV, T-PAMI, ICRA, IROS, ICASSP and 3DV. Aside from his academic life, Tolga has co-founded multiple companies, including Befunky, a widely used web-based image-editing platform.

Plenary Talks

Towards Commodity 3D Content Creation

With the increasing availability of high quality imaging and even depth imaging now available as commodity sensors, comes the potential to democratize 3D content creation. State-of-the-art reconstruction results from commodity RGB and RGB-D sensors have achieved impressive tracking, but reconstructions remain far from usable in practical applications such as mixed reality or content creation, since they do not match the high quality of artist-modeled 3D graphics content: models remain incomplete, unsegmented, and with low-quality texturing. In this talk, we will address these challenges: I will present a self-supervised approach to learn effective geometric priors from limited real-world 3D data, then discuss object-level understanding from a single image, followed by realistic 3D texturing from real-world image observations. This will help to enable a closer step towards commodity 3D content creation.

Rigid & Non-Rigid Multi-Way Point Cloud Matching via Late Fusion

Correspondences fuel a variety of applications from texture-transfer to structure from motion. However, simultaneous registration or alignment of multiple, rigid, articulated or non-rigid partial point clouds is a notoriously difficult challenge in 3D computer vision. With the advances in 3D sensing, solving this problem becomes even more crucial than ever as the observations for visual perception hardly ever come as a single image or scan. In this talk, I will present an unfinished quest in pursuit of generalizable, robust, scalable and flexible methods, designed to solve this problem. The talk is composed of two sections diving into (i) MultiBodySync, specialized in multi-body & articulated generalizable 3D motion segmentation as well as estimation, and (ii) SyNoRim, aiming at jointly matching multiple non-rigid shapes by relating learned functions defined on the point clouds. Both of these methods utilize a family of recently matured graph optimization techniques called synchronization as differentiable modules to ensure multi-scan / multi-view consistency in the late stages of deep architectures. Our methods can work on a diverse set of datasets and are general in the sense that they can solve a larger class of problems than the existing methods.