Leaderboard Challenge 2022

A total of 42 academic and industrial groups submitted their results to the 2022 Hilti SLAM Challenge. The challenge results were announced as part of the Future of Construction workshop at the IEEE International Conference on Robotics and Automation in June 2022.

We congratulate the winners CSIRO, V&R and MARS LAB for their outstanding performance. Additionally, we congratulate TUM for winning the vision-only award.

Rank Team Score
1 CSIRO-Wildcat 563.8
2 V&R 443.8
3 Fly @ MARS LAB (7000 USD prize) 400.4
4 Urban Robotics Lab. @ KAIST (3000 USD prize) 317.5
5 BUAAer (2000 USD prize) 311.6
6 SpaceR-Junlin 303.8
7 NPM3D 272.8
8 SMRT@AIST 260.5
9 FL2SAM-HKUST RAMLAB-Gatech Borglab 257.6
10 VIRAL SLAM 251.9
11 Autonomous Systems Lab @ ETHZ 207.0
13 FL2BIPS 196.5
16 Ekumen 109.9
17 ANYbotics - Pharos SLAM 93.3
18 XXY 87.7
19 RAL SLAM 74.7
20 PR 73.0

Camera + LiDAR + IMU leaderboard, as of 18.05.2022. Entries with a score of 0 or without a report are hidden.

Rank Team Score
1 Smart Robotics Lab 32.5
2 Ifp@UniStuttgart & CAMP@TUM (1000 USD prize) 22.2
3 HITCRC-GAS-SLAM 7.4
4 YH_SLAM 6.8
5 OpenVINS 2.4

Vision-only (Camera + IMU) leaderboard, as of 18.05.2022. Entries with a score of 0 or without a report are hidden.

Results

The highest-scoring team was CSIRO with a score of 563.8. Their Wildcat SLAM algorithm uses a continuous-time trajectory representation for lidar-inertial odometry, combining sliding-window optimization with online pose-graph optimization. The result is further refined by an offline global optimization module that takes advantage of non-causal information.

Of the top 25 teams, all used lidar and IMU data, while only 10 also used camera data. The highest-scoring vision-only submission was Smart Robotics Lab with a score of 32.5. SRL's OKVIS2.0 produced complete trajectories for all of the sequences; however, typical errors were in the 10–20 cm range, which resulted in a low score. This highlights the performance gap between lidar- and camera-based SLAM systems, and the susceptibility of vision-based systems to subtle scaling and calibration errors.

Read the corresponding academic publication

Score Computation

After transforming the estimates from the IMU frame to the pole tip, we aligned each trajectory with the sparse ground-truth points using a rigid transformation. The absolute trajectory error (ATE) at each point is then computed (we rely on the evo package). Depending on the error, each ground-truth point adds a certain number of points to the score:

  • < 1 cm → 10 points
  • < 3 cm → 6 points
  • < 6 cm → 3 points
  • < 10 cm → 1 point
  • ≥ 10 cm → 0 points
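The rigid alignment and per-point error computation can be sketched as follows. This is a minimal NumPy illustration using the closed-form Kabsch solution for a rotation-plus-translation fit; it is not the challenge's actual evo-based evaluation code, and the function names are our own.

```python
import numpy as np

def rigid_align(est, gt):
    """Kabsch: find R, t minimizing sum ||R @ est_i + t - gt_i||^2 (no scale)."""
    mu_e, mu_g = est.mean(axis=0), gt.mean(axis=0)
    # Cross-covariance of the centred point sets.
    H = (est - mu_e).T @ (gt - mu_g)
    U, _, Vt = np.linalg.svd(H)
    # Reflection guard: force det(R) = +1.
    S = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ S @ U.T
    t = mu_g - R @ mu_e
    return R, t

def ate_per_point(est, gt):
    """Per-point translational error (metres) after rigid alignment."""
    R, t = rigid_align(est, gt)
    aligned = est @ R.T + t
    return np.linalg.norm(aligned - gt, axis=1)
```

Because the ground truth is sparse, the alignment uses only the surveyed control points rather than a dense trajectory.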

To give each sequence the same weight, a normalization factor is introduced: each sequence can score at most 100 points (achieved when all ground-truth points have an error below 1 cm). This leads to a maximum total score of 800 points.
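The threshold table and per-sequence normalization above amount to the following sketch (our own illustrative code, not the official scoring script; since a perfect point is worth 10, the normalization factor is 100 / (10 × number of ground-truth points)):

```python
def point_score(err_m):
    """Map one ground-truth point's ATE (metres) to challenge points."""
    if err_m < 0.01:
        return 10
    if err_m < 0.03:
        return 6
    if err_m < 0.06:
        return 3
    if err_m < 0.10:
        return 1
    return 0  # >= 10 cm

def sequence_score(errors_m):
    """Normalize so a perfect sequence (all errors < 1 cm) scores 100."""
    raw = sum(point_score(e) for e in errors_m)
    return 100.0 * raw / (10 * len(errors_m))
```

Summing `sequence_score` over all eight sequences then yields the leaderboard total, capped at 800.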