📤 Submission:
Participants can submit their predictions on the evaluation server.
The challenge platform is based on the open-source EvalAI framework
and is hosted on our own infrastructure.
For questions regarding the baseline or submission platform, contact
yangmiaogz@gmail.com.
ℹ️ Note: The OpenSUN3D Workshop also has a paper track in addition to the challenge track.
We welcome 8-page papers for the proceedings track as well as 4-page extended abstracts.
Submission Opens: Jan 17, 2025
Deadline, Proceedings Track: March 20, 2026
Decision Notification, Proceedings: March 25, 2026
Deadline, Extended Abstracts: April 20, 2026
Decision Notification, Extended Abstracts: April 25, 2026
Workshop: June 3, 2026
Working with Articulate3D
📦 Articulate3D Annotations
Articulate3D provides rich per-scene annotations. For this challenge, the
relevant annotations include:
🧩 Part Segmentation
Segmentation masks for fixed, movable, and interactable (graspable) parts.
🔗 Part Connectivity
Connectivity graphs (e.g., which handle belongs to which door).
🔄 Articulation
Motion specifications: origin, axis, range, and motion type.
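As a hedged illustration only (the key names below are assumptions for this sketch, not the official Articulate3D schema), a single articulation specification might look like:

```python
# Hypothetical articulation entry; actual Articulate3D field names may differ.
articulation = {
    "origin": [0.12, 0.80, 1.05],  # a point the motion axis passes through (metres)
    "axis": [0.0, 0.0, 1.0],       # unit direction of the motion axis
    "range": [0.0, 90.0],          # motion limits (degrees here, for a rotation)
    "type": "rotation",            # motion type, e.g. "rotation" or "translation"
}
```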
We provide a Python-based scene iterator that returns:
- A dictionary of movable parts, each with predicted motion and its associated interactable parts
- A face-level scene mask marking all movable and interactable segments
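The loop below sketches how these two return values could be consumed. The iterator here is a stand-in stub, and all field names are assumptions rather than the official API; consult the provided iterator's documentation for the real interface.

```python
# Sketch of consuming the scene iterator's return values.
# fake_scene_iterator() stands in for the provided Python scene iterator;
# every key name below is a hypothetical placeholder.
def fake_scene_iterator():
    yield {
        "movable_parts": {
            "door_3": {
                "motion": {"origin": [0.1, 0.8, 1.0], "axis": [0, 0, 1],
                           "range": [0.0, 90.0], "type": "rotation"},
                "interactable_parts": ["handle_7"],
            },
        },
        # Face-level labels: 0 = fixed background, >0 = part ids
        "face_mask": [0, 0, 1, 1, 2],
    }

found = []
for scene in fake_scene_iterator():
    for part_id, part in scene["movable_parts"].items():
        found.append((part_id, part["motion"]["type"], part["interactable_parts"]))
```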
⚠️ Scans:
Articulate3D annotations are based on ScanNet++ scenes.
You must obtain the ScanNet++ scans separately; Articulate3D provides annotations only.
📤 Submission Instructions
Example submissions: download the example submission files here.
1️⃣ What to Submit
The required submission format depends on the challenge phase.
Please ensure that all detected movable and interactable parts are included for each scan.
🧩 Movable Part Segmentation & Articulation
- Submit a .zip file containing predictions for each scan
- Predictions follow the baseline output format (.pickle)
- Each scan must include all detected movable and interactable instances
{
    "scene_id_1": {
        "pred_masks": numpy.Array,
        "pred_scores": numpy.Array,
        "pred_classes": numpy.Array  # articulation type of parent movable part
    },
    ...
}
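A minimal sketch of producing a prediction file in this format is shown below. The scene id, array shapes, and class ids are illustrative only; the example submission files linked above remain the authoritative reference.

```python
import pickle
import numpy as np

# Illustrative baseline-style predictions: one scene, two detected instances.
# pred_masks is assumed here to be a boolean (num_instances, num_faces) array.
predictions = {
    "scene_id_1": {
        "pred_masks": np.zeros((2, 1000), dtype=bool),  # per-instance face masks
        "pred_scores": np.array([0.91, 0.65]),          # confidence per instance
        "pred_classes": np.array([0, 1]),  # articulation type of parent movable part
    },
}

with open("scene_id_1.pickle", "wb") as f:
    pickle.dump(predictions, f)
```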
⚠️ Storage Notice:
Pickle files can be large and may overwhelm the submission server.
Therefore, convert your predictions to .txt format and compress them
into a single .zip file before submitting.
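The conversion step could be sketched as follows. The per-line .txt layout used here (class, score, then a binary face mask) is an assumption for illustration; the example submission files define the format the server actually expects.

```python
import zipfile
import numpy as np

# Hedged sketch of the pickle -> .txt -> .zip conversion described above.
# The exact per-line .txt layout is an assumption, not the official spec.
def write_submission(predictions, zip_path="submission.zip"):
    with zipfile.ZipFile(zip_path, "w", zipfile.ZIP_DEFLATED) as zf:
        for scene_id, pred in predictions.items():
            lines = []
            for mask, score, cls in zip(pred["pred_masks"],
                                        pred["pred_scores"],
                                        pred["pred_classes"]):
                mask_str = "".join(str(int(v)) for v in mask)  # binary face mask
                lines.append(f"{int(cls)} {float(score):.4f} {mask_str}")
            zf.writestr(f"{scene_id}.txt", "\n".join(lines))

preds = {
    "scene_id_1": {
        "pred_masks": np.array([[1, 0, 1], [0, 1, 1]], dtype=bool),
        "pred_scores": np.array([0.9, 0.7]),
        "pred_classes": np.array([0, 1]),
    },
}
write_submission(preds)
```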
Leaderboard: submissions scoring at or above the official baseline.
Rank  Team / Method                          MAP_AXIS_ORIGIN ↑
🥇 1   pico-mr (Winners, Challenge @ ICCV25)  0.41
2     astar                                  0.29
3     Uni Stuttgart                          0.27
4     ww                                     0.26
–     USDNet (Official Baseline)             0.26

Metrics computed on the hidden test set. Higher is better (↑).
BibTeX
@InProceedings{halacheva2025articulate3d,
author = {Halacheva, Anna-Maria and Miao, Yang and Zaech, Jan-Nico and Wang, Xi and Van Gool, Luc and Paudel, Danda Pani},
title = {Articulate3D: Holistic Understanding of 3D Scenes as Universal Scene Description},
booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
year = {2025},
}