3D Sensor Fusion

High-quality and Scalable Multi-Sensor Fusion Data Annotation for Autonomous Mobility

LiDAR box labeling for autonomous vehicles to identify objects in 3D point cloud data.

3D Sensor Fusion, 3D Point Cloud & LiDAR

  • Multi-sensor Fusion: At iMerit, we excel at multi-sensor annotation across camera, LiDAR, radar, and audio data for enhanced scene perception, localization, mapping, and trajectory optimization.
  • Ground-Truth Accuracy: Our teams enrich 3D data points with RGB or intensity values when analyzing imagery within the frame, ensuring annotations achieve the highest ground-truth accuracy.
  • Merged Point Cloud: Merging point clouds unifies all coordinates into a single frame and eliminates manual frame traversal, offering a holistic view of object sequences.
  • APIs and Automation: Our tooling ecosystem supports automation in annotation and validation, workflow customization, API integration, easy visualization, and cost-effective labeling.
  • Annotation Types: Multi-sensor annotations include 2D/3D linking, 2D/3D bounding boxes, and 3D point cloud segmentation.
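The merged point cloud idea above can be sketched in a few lines: each per-frame cloud is transformed by its sensor-to-world ego pose and the results are concatenated into one coordinate frame. This is a minimal illustration assuming poses are available as 4×4 homogeneous transforms; `merge_point_clouds` is a hypothetical helper, not iMerit's actual tooling.

```python
import numpy as np

def merge_point_clouds(clouds, poses):
    """Transform each per-frame cloud into a shared world frame and concatenate.

    clouds: list of (N_i, 3) arrays of XYZ points in each sensor frame.
    poses:  list of (4, 4) sensor-to-world homogeneous transforms, one per frame.
    Returns a single (sum(N_i), 3) array in the unified world frame.
    """
    merged = []
    for points, pose in zip(clouds, poses):
        # Append a homogeneous coordinate so the 4x4 pose applies in one multiply.
        homogeneous = np.hstack([points, np.ones((points.shape[0], 1))])
        world = (pose @ homogeneous.T).T[:, :3]
        merged.append(world)
    return np.vstack(merged)

# Example: two single-point frames, the second captured after the ego
# vehicle moved 5 m forward along x (translation-only pose for simplicity).
frame_a = np.array([[1.0, 0.0, 0.0]])
frame_b = np.array([[0.0, 1.0, 0.0]])
pose_a = np.eye(4)
pose_b = np.eye(4)
pose_b[0, 3] = 5.0
unified = merge_point_clouds([frame_a, frame_b], [pose_a, pose_b])
```

In a unified frame like `unified`, an annotator sees the whole object sequence at once instead of stepping through frames one by one.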

3D Point Cloud tool

Case Study

Leading Autonomous Mobility Company partners with iMerit for LiDAR Annotation to build a 3D Perception System

We partnered with this company to support their data annotation across 2D images and 3D point clouds. 3D perception systems depend heavily on data quality for improved performance, and the company needed target identification in LiDAR frames, including lane markings, road boundaries, traffic lights, and other objects.
With our human-in-the-loop workflows, data labeling of poles, pedestrians, signs, cars, and barriers in 3D LiDAR frames was achieved seamlessly and accurately.

"iMerit's incredibly knowledgeable data labelers are part of our team and are the right people for our enrichment efforts."

Director of Data Products, Research Institute of Autonomous Driving

Combining LiDAR and camera imagery to capture multiple angles and sensor modalities for autonomous navigation

CONTACT US