LuSNAR: A Lunar Segmentation, Navigation and Reconstruction Dataset

Jiayi Liu1, Xubo Luo1, Qianyu Zhang1, Xue Wan2, Chenming Ye3, Shengyang Zhang1, Yaolin Tian1, Haodong Han1, Yutao Zhao1, Baichuan Liu1, Zeyuan Zhao1, Zhizhong Kang3
1University of Chinese Academy of Sciences    2Chinese Academy of Sciences    3China University of Geosciences (Beijing)
Under Review at IEEE Transactions on Geoscience and Remote Sensing (TGRS)

Abstract

LuSNAR addresses the critical gap in autonomous lunar exploration by providing a multi-task, multi-scene, and multi-label benchmark dataset. Unlike existing datasets that focus on single tasks, LuSNAR enables comprehensive evaluation of perception and navigation systems essential for next-generation lunar rovers. The dataset includes 108 GB of synchronized multi-sensor data comprising high-resolution stereo camera images (1024×1024 at 10 Hz), dense depth maps, pixel-perfect semantic labels across five categories (regolith, craters, rocks, mountains, sky), 128-beam LiDAR point clouds with semantic annotations, and 100 Hz IMU measurements with ground-truth trajectories. Nine diverse simulation scenes represent varied lunar terrain characteristics categorized by topographic relief and object density. Built on Unreal Engine with physically accurate sensor simulation, LuSNAR provides photo-realistic rendering and supports multiple research tasks, including 2D/3D semantic segmentation, visual/LiDAR SLAM, stereo matching, and 3D reconstruction.

News

  • [2024.09] 🎉 Paper submitted to TGRS and currently under review.
  • [2024.09] 📝 arXiv v3 released with updated experimental results.
  • [2024.07] 🚀 LuSNAR dataset and code publicly released.
  • [2024.07] 📄 Initial preprint available on arXiv.

Highlights

  • 🌕 First Multi-Task Lunar Benchmark: Comprehensive dataset supporting semantic segmentation, SLAM, and 3D reconstruction simultaneously.
  • 📊 108 GB High-Quality Data: Multi-sensor synchronized data including stereo cameras, LiDAR, and IMU across 9 diverse lunar scenes.
  • 🎯 Pixel-Perfect Ground Truth: High-precision semantic labels, depth maps, and 3D point clouds with category annotations.
  • 🏔️ Diverse Terrain Coverage: Scenes categorized by topographic relief and object density for robust algorithm evaluation.
  • 🔧 Unreal Engine Based: Photo-realistic rendering with physically accurate sensor simulation.

Supported Tasks

πŸ–ΌοΈ 2D Semantic Segmentation
Pixel-wise scene understanding
🧊 3D Semantic Segmentation
Point cloud classification
πŸ—ΊοΈ Visual SLAM
Camera-based localization
πŸ“‘ LiDAR SLAM
3D mapping and odometry
πŸ” Stereo Matching
Depth estimation from stereo
πŸ—οΈ 3D Reconstruction
Dense surface reconstruction

Dataset Statistics

| Component | Size | Details |
| --- | --- | --- |
| Stereo Images | 42 GB | 1024×1024, 80°×80° FOV, 10 Hz |
| Depth Maps | 50 GB | Dense per-pixel depth |
| Semantic Labels | 356 MB | 2D masks + 3D point annotations |
| LiDAR Point Clouds | 14 GB | Up to 20M points/sec, semantic labels |
| IMU Data | – | 100 Hz, 6-DOF measurements |
| Ground Truth Poses | – | Sub-millimeter accuracy |
| Total | 108 GB | 9 scenes, multiple trajectories |

Dataset Structure

LuSNAR/
β”œβ”€β”€ image1/                    # Left camera
β”‚   β”œβ”€β”€ RGB/                   # Color images
β”‚   β”œβ”€β”€ Depth/                 # Depth maps
β”‚   └── Label/                 # Semantic labels
β”œβ”€β”€ image2/                    # Right camera (same structure)
β”œβ”€β”€ LiDAR/                     # Point cloud data
β”‚   β”œβ”€β”€ timestamp1.txt         # Format: x y z category
β”‚   └── ...
β”œβ”€β”€ Rover_pose.txt             # Ground truth trajectory
└── IMU.txt                    # IMU measurements
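
Each per-frame LiDAR file follows the `x y z category` layout shown in the tree above, so a frame can be parsed in a few lines of Python. This is a minimal sketch: the function name is illustrative, and whitespace-delimited ASCII is an assumption beyond what the layout comment guarantees.

```python
from pathlib import Path

def load_lidar_frame(path):
    """Parse one LuSNAR LiDAR frame (one 'x y z category' line per point).

    Returns a list of (x, y, z) float tuples and a parallel list of
    integer category labels. Assumes whitespace-delimited ASCII text
    (illustrative; check your download if parsing fails).
    """
    points, labels = [], []
    for line in Path(path).read_text().splitlines():
        if not line.strip():
            continue  # skip blank lines defensively
        x, y, z, cat = line.split()
        points.append((float(x), float(y), float(z)))
        labels.append(int(float(cat)))
    return points, labels
```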

Semantic Categories

2D Semantic Labels

| Category | Hex Code |
| --- | --- |
| Lunar Regolith | #BB469C |
| Impact Crater | #7800C8 |
| Rock | #E8FA50 |
| Mountain | #AD451F |
| Sky | #22C9F8 |
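
Because each 2D class is encoded as a fixed color, label images can be decoded to integer class IDs with a small lookup table. A minimal sketch: the colors come from the table above, but the 0-4 ID ordering is an assumption for illustration, not part of the dataset specification.

```python
# Colors from the 2D label table; integer IDs are an assumed ordering.
PALETTE = {
    "#BB469C": 0,  # Lunar Regolith
    "#7800C8": 1,  # Impact Crater
    "#E8FA50": 2,  # Rock
    "#AD451F": 3,  # Mountain
    "#22C9F8": 4,  # Sky
}

def hex_to_rgb(code):
    """'#RRGGBB' -> (R, G, B) as integers."""
    code = code.lstrip("#")
    return tuple(int(code[i:i + 2], 16) for i in (0, 2, 4))

RGB_TO_ID = {hex_to_rgb(c): i for c, i in PALETTE.items()}

def decode_label(pixels):
    """Map an iterable of (R, G, B) tuples to class IDs."""
    return [RGB_TO_ID[p] for p in pixels]
```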

3D Point Cloud Labels

| Category ID | Category |
| --- | --- |
| -1 | Lunar Regolith |
| 0 | Impact Crater |
| 1 | Rock |

Sensor Specifications

📷 Stereo Camera

  • Resolution: 1024 × 1024 pixels
  • Frame Rate: 10 Hz
  • Field of View: 80° × 80°
  • Baseline: 310 mm
  • Focal Length: 610.17784 pixels
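
Given the baseline and focal length above, metric depth follows from the standard pinhole-stereo relation Z = f·B/d. A sketch under those parameters (constant names are illustrative):

```python
FOCAL_PX = 610.17784  # focal length in pixels (from the spec above)
BASELINE_M = 0.310    # stereo baseline in meters (310 mm, spec above)

def depth_from_disparity(disparity_px):
    """Metric depth from stereo disparity: Z = f * B / d (meters)."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return FOCAL_PX * BASELINE_M / disparity_px
```

For instance, a 10-pixel disparity corresponds to roughly 18.9 m of depth, so the LiDAR's 30 m range cap falls around a 6.3-pixel disparity.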

📡 LiDAR

  • Type: 128-beam spinning LiDAR
  • Frequency: 10 Hz
  • Horizontal FOV: 360°
  • Vertical FOV: -25° to +27°
  • Range: ≀30 m
  • Point Rate: Up to 20M points/second

🧭 IMU

  • Frequency: 100 Hz
  • Axes: 3-axis accelerometer & gyroscope
  • Accelerometer Random Walk: 0.002353596 m/s³/√Hz
  • Gyroscope Random Walk: 8.7266462e-5 rad/s/√Hz
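
The random-walk terms above describe how the simulated IMU biases drift over time; a standard discrete-time model steps each bias by σ·√dt·n with n ~ N(0, 1) at the 100 Hz sample rate. A sketch of that textbook model (function name illustrative; not necessarily the dataset's exact noise generator):

```python
import random

IMU_RATE_HZ = 100.0
ACC_RW = 0.002353596     # accelerometer random walk (spec above)
GYRO_RW = 8.7266462e-5   # gyroscope random walk (spec above)

def step_bias(bias, random_walk, dt=1.0 / IMU_RATE_HZ, rng=random):
    """One discrete step of a bias random walk:
    b[k+1] = b[k] + sigma * sqrt(dt) * n, with n ~ N(0, 1)."""
    return bias + random_walk * (dt ** 0.5) * rng.gauss(0.0, 1.0)
```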

Download

Main Dataset (108 GB): CSTCloud Link (Password: fjZt)

BEV Data (Optional): CSTCloud Link (Password: jtk0)

BibTeX

@article{liu2024lusnar,
  title={LuSNAR: A Lunar Segmentation, Navigation and Reconstruction Dataset based on Muti-sensor for Autonomous Exploration},
  author={Liu, Jiayi and Zhang, Qianyu and Wan, Xue and Zhang, Shengyang and Tian, Yaolin and Han, Haodong and Zhao, Yutao and Liu, Baichuan and Zhao, Zeyuan and Luo, Xubo},
  journal={arXiv preprint arXiv:2407.06512},
  year={2024}
}