3DCoMPaT: Composition of Materials on Parts of 3D Things

KAUST, Poly9, Qure.AI

Abstract

We present 3DCoMPaT, a richly annotated large-scale dataset of more than 7.19 million rendered compositions of Materials on Parts of 7262 unique 3D Models; 990 compositions per model on average. 3DCoMPaT covers 43 shape categories, 235 unique part names, and 167 unique material classes that can be applied to parts of 3D objects.
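
To make the notion of a part-material composition concrete, the short sketch below shows one way such a record could be represented in Python. The Composition class and its field names are illustrative assumptions for this README, not the dataset's actual schema or loader API.

from dataclasses import dataclass
from typing import Dict

@dataclass
class Composition:
    """Hypothetical record for one styled shape (names are assumptions)."""
    model_id: str                    # one of the 7262 unique 3D models
    shape_category: str              # e.g. "chair"
    part_materials: Dict[str, str]   # maps a part name to the material class applied to it

# Example: a chair styled with three part-material assignments.
example = Composition(
    model_id="chair_0042",
    shape_category="chair",
    part_materials={"seat": "leather", "leg": "metal", "back": "fabric"},
)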

Each object with the applied part-material compositions is rendered from four equally spaced views as well as four randomized views, leading to a total of 58 million renderings (7.19 million compositions × 8 views). This dataset primarily focuses on stylizing 3D shapes at the part level with compatible materials.
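
As a rough illustration of the view setup described above, the following sketch draws four equally spaced azimuth angles plus four randomized ones and checks the overall render count. The function name, angle convention, and sampling range are assumptions made for illustration; they are not taken from the dataset's actual rendering pipeline.

import random

def sample_view_azimuths(num_fixed=4, num_random=4, seed=0):
    """Return azimuth angles (in degrees) for the fixed and randomized views.
    Illustrative assumption only; not the dataset's actual camera setup."""
    fixed = [i * 360.0 / num_fixed for i in range(num_fixed)]         # 0, 90, 180, 270
    rng = random.Random(seed)
    randomized = [rng.uniform(0.0, 360.0) for _ in range(num_random)]
    return fixed + randomized

views_per_composition = len(sample_view_azimuths())                   # 4 fixed + 4 random = 8 views
total_renderings = 7_190_000 * views_per_composition                  # 57,520,000, i.e. ~58 million
print(views_per_composition, total_renderings)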

We introduce a new task, called Grounded CoMPaT Recognition (GCR), to collectively recognize and ground compositions of materials on parts of 3D objects. We present two variations of this task and adapt state-of-the-art 2D/3D deep learning methods to solve the problem as baselines for future research. We hope our work will help ease future research on compositional 3D vision.
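
To make the GCR task concrete, the toy sketch below shows the kind of output a GCR model produces, a shape category together with grounded (part, material) pairs, scored here with a simple exact-match check. The function and data layout are illustrative assumptions; the paper defines the actual task variants and evaluation metrics.

def exact_match_gcr(prediction, ground_truth):
    """Toy exact-match check: correct only if the shape category and every
    grounded (part, material) pair are predicted correctly. This is an
    illustrative assumption, not the paper's official metric."""
    pred_category, pred_pairs = prediction
    gt_category, gt_pairs = ground_truth
    return pred_category == gt_category and set(pred_pairs) == set(gt_pairs)

# Hypothetical prediction and ground truth for one rendered composition.
ground_truth = ("chair", [("seat", "leather"), ("leg", "metal")])
prediction = ("chair", [("leg", "metal"), ("seat", "leather")])
print(exact_match_gcr(prediction, ground_truth))  # True: pair order does not matter here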

▶️ Video

📖 Citations

If our work is useful to your research, please consider citing the following papers for the 3DCoMPaT dataset:

3DCoMPaT v1:

@inproceedings{li20223d_compat,
    title={{3DCoMPaT}: Composition of Materials on Parts of 3D Things},
    author={Li, Yuchen and Upadhyay, Ujjwal and Slim, Habib and
            Abdelreheem, Ahmed and Prajapati, Arpit and
            Pothigara, Suhail and Wonka, Peter and Elhoseiny, Mohamed},
    booktitle={17th European Conference on Computer Vision (ECCV)},
    year={2022}
}