CVPR 2024 - Seattle, United States

C3DV: 2nd Workshop on Compositional 3D Vision

Second workshop on compositional 3D vision, with the VSIC and 3DCoMPaT++ dataset challenges, hosted at #CVPR2024.

3DCoMPaT Challenge

3DCoMPaT++ dataset

🔍 Challenge overview

Grounded CoMPaT Recognition (GCR) is a compositional 3D vision task that aims to collectively recognize and ground compositions of materials on the parts of 3D objects. The task is based on 3DCoMPaT++, a large-scale dataset of stylized 3D objects and associated 2D renderings.
We propose two variants of this task, GCR-Coarse and GCR-Fine, based on coarse-grained and fine-grained 3D segmentations of the 3DCoMPaT models.
We highly encourage participants to submit to both tracks of the challenge.
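To make the task concrete, here is a minimal sketch (not the official evaluation code) of a GCR-style exact-match criterion, under which a prediction counts as correct only if both the object category and every part-material assignment match the ground truth. The function name and dictionary fields below are illustrative assumptions, not the challenge API:

```python
def gcr_exact_match(pred, target):
    """Hypothetical GCR exact-match check: a prediction is correct only
    if the object category AND all (part -> material) assignments match."""
    if pred["category"] != target["category"]:
        return False
    return pred["part_materials"] == target["part_materials"]

# Illustrative ground truth and predictions for a stylized chair.
target = {"category": "chair",
          "part_materials": {"seat": "leather", "leg": "metal"}}
good = {"category": "chair",
        "part_materials": {"seat": "leather", "leg": "metal"}}
bad = {"category": "chair",
       "part_materials": {"seat": "wood", "leg": "metal"}}

print(gcr_exact_match(good, target))  # True
print(gcr_exact_match(bad, target))   # False
```

The coarse and fine tracks differ only in the granularity of the part segmentation over which such assignments are evaluated.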

📊 Dataset

The 3DCoMPaT++ dataset for both challenge tracks is available through our download page.

📨 Submission

Submissions will be made through the challenge platform.

📜 Rules

Here are the rules for the challenge:

  • Submission Limit: Each participant is allowed to submit their solution a maximum of three times per day.
  • Data Usage: Participants are not permitted to use any data other than the 3DCoMPaT data for training their models.
  • Technical Report: To be eligible for any prizes or rewards, each participant must submit a technical report detailing their methods; these reports will be made public.

🏆 Awards

Total prize pool: TBD. Teams are encouraged to participate in both challenge tracks.
Fine track:
  • 1st: TBD
  • 2nd: TBD
Coarse track:
  • 1st: TBD
  • 2nd: TBD

These prizes are designed to reward exceptional performance and to encourage a high level of participation, helping to drive innovation in the field of 3D computer vision. Note that eligibility is contingent on adherence to the challenge rules: participants must submit their solutions in accordance with the rules and provide a technical report detailing their methods to be considered for any prize or reward.

💬 Q&A

If you encounter any technical issue related to the challenge, or if you're missing critical information, please open a ticket on our GitHub repository.

🎉 2023 Winning Solution

Below, we share the previous year's winner and her winning solution repository:

Cattalya's repository


VSIC Challenge

The Visual Shape Inference Challenge (VSIC) focuses on inferring structured shape representations, expressed in language-like programs, from object-centered datasets. Inspired by recent advances in program synthesis for visual data (Ganeshan et al., 2023; Jones et al., 2023), this challenge aims to explore how modern machine learning techniques can improve the inference of shape programs from visual information. Building on the principles of compactness and structure, participants will be provided with object-centered datasets and tasked with generating refined and parsimonious shape programs, drawing inspiration from techniques like Sparse Intermittent Rewrite Injection (SIRI). Please stay tuned for more information.
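As a rough illustration of what a structured shape representation can look like, the toy sketch below interprets a list-of-instructions "shape program" into cuboid primitives. The instruction format and interpreter are invented for illustration only and are unrelated to the actual challenge format:

```python
def interpret(program):
    """Expand a toy shape program into axis-aligned cuboid primitives.
    Each instruction is ("cuboid", w, h, d, x, y, z); this format is
    an invented, illustrative stand-in for a real shape-program DSL."""
    shapes = []
    for op, *args in program:
        if op != "cuboid":
            raise ValueError(f"unknown op: {op}")
        w, h, d, x, y, z = args
        shapes.append({"size": (w, h, d), "pos": (x, y, z)})
    return shapes

# A two-primitive "table": a top slab and one leg.
table = [("cuboid", 1.0, 0.1, 1.0, 0.0, 0.5, 0.0),
         ("cuboid", 0.1, 0.5, 0.1, -0.4, 0.0, -0.4)]
print(len(interpret(table)))  # 2
```

Program length (here, two instructions) is one crude proxy for the parsimony that the challenge emphasizes.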

Paper submission


🦜 Topics

Besides the CoMPaT and VSIC challenges, the C3DV workshop also accepts papers related to compositional 3D vision. The workshop will include a poster session and an oral session for related works. Topics of this workshop include but are not limited to:

  • Deep learning methods for compositional 3D vision
  • Self-supervised learning for compositional 3D vision
  • Visual relationship detection in 3D scenes
  • Zero-shot recognition/detection of compositional 3D visual concepts
  • Novel problems in 3D vision and compositionality
  • Text/composition to 3D generation
  • Text/composition-based editing of 3D scenes/objects
  • Language-guided 3D visual understanding (objects, relationships, ...)
  • Transfer learning for compositional 3D Vision
  • Multimodal pre-training for 3D understanding
  • Composition-based 3D object/scene search/retrieval
  • Compositional 3D vision aiding language problems
  • ...

Submitted 4-page abstracts, in CVPR format, will be peer-reviewed. Accepted abstracts will be presented in the workshop poster session, and a portion will also be presented orally.

📨 Submission

Paper submissions will be handled with CMT through the following link:

Microsoft CMT: C3DVCVPR2024

Please select the appropriate track (archival or non-archival) and check for the relevant timelines in the dates section.


Invited Speakers

Angela Dai

Assistant Professor, Technical University of Munich

Katerina Fragkiadaki

Assistant Professor, Carnegie Mellon University

Srinath Sridhar

Assistant Professor, Brown University

Minhyuk Sung

Assistant Professor, KAIST

Andrea Vedaldi

Professor, University of Oxford

Xiaojuan Qi

Assistant Professor, University of Hong Kong

Jiajun Wu

Assistant Professor, Stanford University


Workshop Organizers

Habib Slim

Ph.D. Student, KAUST

Wolfgang Heidrich

Professor, KAUST

Peter Vajda

Researcher and Engineering Manager, Meta AI

Natalia Neverova

Research Lead, Meta AI

Mohamed Elhoseiny

Assistant Professor, KAUST

Challenge Organizers

Aditya Ganeshan

Ph.D. Student, Brown University

Kenny Jones

Ph.D. Student, Brown University

Habib Slim

Ph.D. Student, KAUST

Mahmoud Ahmed

Research Student, KAUST

Xiang Li

Postdoctoral Researcher, KAUST

Daniel Ritchie

Assistant Professor, Brown University

Peter Wonka

Professor, KAUST

Mohamed Elhoseiny

Assistant Professor, KAUST



📆 Important Dates

Archival track (will appear in the CVPR proceedings):

  • Paper submission deadline: March 24th
  • Notification to authors: April 1st
  • Camera-ready deadline: April 7th
  • Workshop date: June 18th

Non-archival track:

  • Paper submission deadline: April 12th
  • Notification to authors: May 1st
  • Camera-ready deadline: May 31st
  • Workshop date: June 18th

3DCoMPaT Challenge:

  • Release of training/validation data: Feb. 17th
  • Validation server online: Feb. 20th
  • Test server online: Feb. 20th
  • Submission deadline: April 1st
  • Fact sheets/source code submission deadline: April 31st
  • Winners announcement: May 15th


Workshop Program

More details about the program will be available soon.

For any questions or support, please reach out to @Habib.S.