A leading Autonomous Vehicle company set out to build a highly precise and reliable 3D perception system to support its self-driving technology. This required accurately interpreting information from multiple sensors, such as navigation systems, vision modules, LiDAR, and radar, to establish ground truth.
iMerit worked with this company to support its data annotation needs across 2D image space and 3D point clouds, helping it deploy its 3D perception stack successfully.
Problem
The company needed help meeting its growing training data requirements for sensor fusion annotation, covering use cases such as target identification, lane markings, road boundaries, traffic lights, and other objects in LiDAR frames.
Solution
iMerit reviewed and refined the Autonomous Mobility company's annotation guidelines. Its experienced annotators then delivered labeling and attribution on 3D LiDAR scenes for cars, raised boundaries, pedestrians, crosswalks, road surfaces, poles, signs, and barriers.
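Annotations of this kind are commonly delivered as one 3D cuboid per object in each LiDAR frame. The sketch below illustrates what such a label record might look like; the field names and schema are illustrative assumptions, not the company's actual deliverable format, though the object classes come from the list above.

```python
from dataclasses import dataclass, field

# Object classes taken from the categories listed above.
CLASSES = {
    "car", "raised_boundary", "pedestrian", "crosswalk",
    "road_surface", "pole", "sign", "barrier",
}

@dataclass
class CuboidLabel:
    """One annotated object in a LiDAR frame (illustrative schema)."""
    frame_id: int    # index of the LiDAR sweep
    label: str       # one of CLASSES
    center: tuple    # (x, y, z) in metres, sensor frame
    size: tuple      # (length, width, height) in metres
    yaw: float       # heading around the vertical axis, in radians
    attributes: dict = field(default_factory=dict)  # e.g. {"occluded": True}

    def __post_init__(self):
        # Guard against typos in class names during annotation QA.
        if self.label not in CLASSES:
            raise ValueError(f"unknown class: {self.label}")
```

In practice, a record like this would be serialized to JSON alongside each point-cloud frame, so the perception team can pair every cuboid with the raw sweep it annotates.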
Results
By combining LiDAR, navigation satellite systems, and 3D HD maps in a modular pipeline, the self-driving technology could reconstruct a consistent representation of its surrounding environment.
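At the core of that fusion step, a pose from the navigation system places each LiDAR sweep into the map frame so that successive sweeps and HD-map features line up. A minimal sketch of the idea, simplified to a planar yaw-plus-translation pose rather than the full 3D transform actually used in such pipelines:

```python
import math

def lidar_to_map(points, pose):
    """Transform sensor-frame (x, y) points into the map frame.

    points: list of (x, y) tuples in the LiDAR's own frame, in metres.
    pose:   (tx, ty, yaw) of the vehicle in the map frame, yaw in radians,
            as reported by a GNSS/INS navigation system.
    This is a planar simplification of the full SE(3) transform.
    """
    tx, ty, yaw = pose
    c, s = math.cos(yaw), math.sin(yaw)
    # Rotate each point by the vehicle heading, then translate to its position.
    return [(c * x - s * y + tx, s * x + c * y + ty) for x, y in points]
```

For example, a point 1 m ahead of a vehicle that faces the map's +y direction (yaw = π/2) and sits at map position (10, 20) lands at (10, 21). Applying this consistently across sweeps is what lets the pipeline accumulate a coherent environment model.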