Leaderboard Challenge 2023

The Hilti SLAM Challenge 2023 saw an increase in participation over previous iterations, with 69 unique teams competing, compared to 42 in 2022 and 27 in 2021. This reflects the growing trust the worldwide SLAM community places in the dataset and benchmark. Results were announced at the 2nd Workshop on the Future of Construction at IEEE ICRA 2023.

We congratulate the winners: Urban Robotics Lab. @ KAIST for the LiDAR single-session leaderboard and Autonomous Systems Lab @ ETHZ for the LiDAR multi-session leaderboard. Additionally, we congratulate XR Penguin for winning both the vision-only single-session and multi-session awards.

LiDAR Single-Session

Rank  Team                           Prize     Score
1     Urban Robotics Lab. @ KAIST    3000 USD  1,177.6
2     HKU-MaRS-HBA                             934.4
3     Strelka                                  840.3
4     RPM Robotics LAB                         789.5
6     GSW                                      727.3
7     ANYbotics - Pharos SLAM                  657.6
9     Promising SLAM kid                       629.3
10    NTU IOT                                  597.7
11    dummy2                                   581.7
12    FlyH                                     579.0
13    SpaceR-Junlin                            562.7
14    HKU-FAST-LIVO2                           506.4
15    X-SLAM                                   502.9
18    ifp@UniStuttgart                         444.3
19    Be2RLab                                  433.1

Camera, LiDAR, IMU. Leaderboard as of 22.05.2023. Teams that chose to remain undisclosed are hidden.

Vision Single-Session

Rank  Team                           Prize     Score
1     XR Penguin--MAVIS SLAM                   452.2
2     Urban Robotics Lab. @ KAIST    1000 USD  200.8
4     Autonomous Systems Lab @ ETHZ            121.2
5     RPM Robotics LAB                         60.0
9     Ifp@UniStuttgart & CAMP@TUM              20.0

Camera, LiDAR, IMU. Leaderboard as of 22.05.2023. Teams that chose to remain undisclosed are hidden.

LiDAR Multi-Session

Rank  Team                           Prize     Score
1     Autonomous Systems Lab @ ETHZ  4000 USD  36.7
2     Strelka                                  1.7

Camera, LiDAR, IMU, Automated. Leaderboard as of 22.05.2023. Teams that chose to remain undisclosed are hidden.

Vision Multi-Session

Rank  Team                           Prize     Score
1     XR Penguin--MAVIS SLAM                   27.0
2     Autonomous Systems Lab @ ETHZ  2000 USD  15.3

Camera, LiDAR, IMU, Automated. Leaderboard as of 22.05.2023. Teams that chose to remain undisclosed are hidden.

Results

The winner of the LiDAR-driven multi-session category leveraged Maplab with LiDAR, vision, and IMU sensor modalities. Hierarchical Bundle Adjustment (HBA) is a new high-performing system backend created by runner-up HKU MaRS Lab. The top teams also include high-performing open-loop, odometry-only systems such as FT-LVIO and ANYbotics, which secured the 5th and 6th spots on the leaderboard. Very few camera-augmented LiDAR SLAM systems appear in the top 10, with FT-LVIO being the only entry of that kind.

The vision single-session category is led by big tech, with Tencent Games’ XR Lab securing first place with their MAVIS system. All top SLAM frontends in this category are optimizer-based. Comparing LiDAR-based and vision-based SLAM, the RMSE ATE accuracy gap appears to be closing relative to the 2022 Challenge leaderboard.

The multi-session SLAM tracks for LiDAR- and vision-based systems saw relatively limited participation, with two teams per track, which was anticipated given the small number of multi-session systems available. Both vision- and LiDAR-driven multi-session systems achieved similar scores, with accuracies in the 30 cm range of RMSE ATE.

Read the corresponding academic publication

Score Computation

After transforming the estimates from the IMU frame to the pole tip, we align the trajectory with the sparse ground-truth points using a rigid transformation. The ATE is then computed for each point (we rely on the evo script). Depending on the error, each ground-truth point adds a certain number of points to the score:

  • < 0.5cm → 20 points
  • < 1cm → 10 points
  • < 3cm → 6 points
  • < 6cm → 5 points
  • < 10cm → 3 points
  • < 40cm → 1 point
  • ≥ 40cm → 0 points
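The threshold mapping above can be sketched as a small function (the function name and the example errors are illustrative, not part of the official evaluation code):

```python
def points_for_error(ate_m: float) -> int:
    """Map a per-point absolute trajectory error (in metres) to score
    points, following the brackets listed above."""
    # (threshold in metres, points), checked from tightest to loosest
    brackets = [
        (0.005, 20),  # < 0.5 cm
        (0.01, 10),   # < 1 cm
        (0.03, 6),    # < 3 cm
        (0.06, 5),    # < 6 cm
        (0.10, 3),    # < 10 cm
        (0.40, 1),    # < 40 cm
    ]
    for threshold, pts in brackets:
        if ate_m < threshold:
            return pts
    return 0  # ≥ 40 cm

# Three ground-truth points with errors of 4 mm, 2 cm, and 50 cm:
raw_score = sum(points_for_error(e) for e in [0.004, 0.02, 0.5])
# 20 + 6 + 0 = 26 points
```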

To give each sequence the same weight, a normalization factor is introduced. Sequences for sites 1 and 2 can score up to 100 points, while sequences for site 3 can score up to 200 points. This leads to a maximum total of 1,600 points.
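The alignment and normalization steps can be sketched as follows. This is a minimal stand-in for the evo-based evaluation, not the official code: it assumes a Kabsch (SVD) solution for the rigid alignment, and that the normalization scales a sequence so a run with every ground-truth point in the best (20-point) bracket earns exactly the sequence cap.

```python
import numpy as np

def rigid_align(est: np.ndarray, gt: np.ndarray) -> np.ndarray:
    """Rigidly align Nx3 estimated positions to ground-truth positions
    (rotation + translation, Kabsch); returns the aligned estimates."""
    mu_e, mu_g = est.mean(axis=0), gt.mean(axis=0)
    H = (est - mu_e).T @ (gt - mu_g)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))  # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = mu_g - R @ mu_e
    return est @ R.T + t

def normalize(raw_score: float, n_gt_points: int, cap: float) -> float:
    """Scale a sequence's raw score so a perfect run earns exactly the
    cap (100 for sites 1 and 2, 200 for site 3)."""
    return raw_score / (n_gt_points * 20) * cap

# Per-point ATE after alignment: a purely shifted estimate aligns exactly.
gt = np.array([[0.0, 0, 0], [1, 0, 0], [0, 1, 0], [1, 1, 1]])
est = gt + np.array([2.0, -1.0, 0.5])
ate = np.linalg.norm(rigid_align(est, gt) - gt, axis=1)  # ~0 everywhere

# A site-3 sequence with 50 ground-truth points, all in the best bracket:
normalize(50 * 20, 50, 200)  # -> 200.0
```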