Articulate3D

Holistic Understanding of 3D Scenes as Universal Scene Description

¹INSAIT, Sofia University   ²ETH Zurich
*equal contribution

Articulate3D provides an expertly curated dataset in the Universal Scene Description (USD) format, featuring high-quality manual annotations for instance segmentation and articulation on indoor scenes.

Articulate3D includes 8 types of annotations: object and part segmentations, motion types, movable and interactable parts, motion parameters, connectivity, and object mass annotations.
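Since the annotations ship as USD, they can be inspected with standard OpenUSD tooling. The snippet below is a minimal sketch of enumerating per-prim labels with the `pxr` Python bindings (installable via `pip install usd-core`); the file name scene_0001.usda and the semantic:label attribute are hypothetical placeholders, not the dataset's documented layout.

    # Minimal sketch: list labeled prims in a scene (assumptions noted above).
    from pxr import Usd

    stage = Usd.Stage.Open("scene_0001.usda")  # hypothetical file name
    for prim in stage.Traverse():
        # "semantic:label" is a hypothetical attribute standing in for the
        # dataset's actual object/part label encoding.
        label = prim.GetAttribute("semantic:label")
        if label and label.HasValue():
            print(prim.GetPath(), "->", label.Get())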

Instance Segmentation (Object- and Part-Level)

[Interactive comparison: Scan · Articulate3D · Articulate3D & ScanNet++ combined annotations]

Articulations

Models

Coming soon!

Abstract

3D scene understanding is a long-standing challenge in computer vision and a key component in enabling mixed reality, wearable computing, and embodied AI. Serving these applications requires a multifaceted approach covering scene-centric, object-centric, and interaction-centric capabilities. While numerous datasets address the first two problems, the task of understanding interactable and articulated objects remains underrepresented and only partly covered by current work. We address this shortcoming and introduce (1) an expertly curated dataset in the Universal Scene Description (USD) format, featuring high-quality manual annotations for instance segmentation and articulation on 280 indoor scenes; (2) a learning-based model together with a novel baseline capable of predicting part segmentation along with a full specification of motion attributes, including motion type, articulated and interactable parts, and motion parameters; (3) a benchmark serving to compare upcoming methods for the task at hand. Overall, our dataset provides 8 types of annotations: object and part segmentations, motion types, movable and interactable parts, motion parameters, connectivity, and object mass annotations. With its breadth and high quality, the data provides the basis for holistic 3D scene understanding models. All data is provided in the USD format, allowing interoperability and easy integration with downstream tasks. Our dataset, benchmark, and the method's source code will be made publicly available.
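Because the scenes are standard USD, OpenUSD tooling can read them directly. The sketch below shows how articulation and mass annotations could be queried if they are encoded with the stock UsdPhysics schemas; that encoding is an assumption made for illustration, not a statement of the dataset's actual schema, and the file name is hypothetical.

    # Minimal sketch, assuming articulations use UsdPhysics joint schemas
    # (an assumption; check the released data for the actual encoding).
    from pxr import Usd, UsdPhysics

    stage = Usd.Stage.Open("scene_0001.usda")  # hypothetical file name
    for prim in stage.Traverse():
        if prim.IsA(UsdPhysics.RevoluteJoint):     # rotational articulation
            j = UsdPhysics.RevoluteJoint(prim)
            print(prim.GetPath(), "rotates about", j.GetAxisAttr().Get(),
                  "limits:", j.GetLowerLimitAttr().Get(),
                  j.GetUpperLimitAttr().Get())
        elif prim.IsA(UsdPhysics.PrismaticJoint):  # translational articulation
            j = UsdPhysics.PrismaticJoint(prim)
            print(prim.GetPath(), "slides along", j.GetAxisAttr().Get())
        if prim.HasAPI(UsdPhysics.MassAPI):        # object mass annotation
            print(prim.GetPath(), "mass:",
                  UsdPhysics.MassAPI(prim).GetMassAttr().Get())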

BibTeX


    @article{halacheva2024articulate3d,
      title={Holistic Understanding of 3D Scenes as Universal Scene Description},
      author={Anna-Maria Halacheva and Yang Miao and Jan-Nico Zaech and Xi Wang and Luc Van Gool and Danda Pani Paudel},
      year={2024},
      journal={arXiv preprint arXiv:2412.01398},
    }