Multi-sensor Fusion
By combining LiDAR point clouds with images captured from multiple angles by different sensors, iMerit’s teams help reduce uncertainty in navigation.
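One common fusion step is projecting LiDAR points into a camera image so both sensors describe the same scene. The sketch below assumes a toy pinhole camera and an identity LiDAR-to-camera transform; real pipelines use per-sensor calibration from the vehicle rig.

```python
import numpy as np

def project_lidar_to_image(points_xyz, intrinsics, extrinsics):
    """Project (N, 3) LiDAR points into pixel coordinates.

    intrinsics: (3, 3) camera matrix K; extrinsics: (4, 4) LiDAR-to-camera transform.
    Returns (N, 2) pixel coordinates and a mask of points in front of the camera.
    """
    n = points_xyz.shape[0]
    homo = np.hstack([points_xyz, np.ones((n, 1))])   # homogeneous coordinates
    cam = (extrinsics @ homo.T).T[:, :3]              # points in the camera frame
    in_front = cam[:, 2] > 0                          # keep points ahead of the lens
    uvw = (intrinsics @ cam.T).T
    pixels = uvw[:, :2] / uvw[:, 2:3]                 # perspective divide
    return pixels, in_front

# Assumed calibration values for the demo, not from any real rig.
K = np.array([[500.0, 0.0, 320.0],
              [0.0, 500.0, 240.0],
              [0.0, 0.0, 1.0]])
T = np.eye(4)  # LiDAR and camera frames coincide in this toy setup
pts = np.array([[0.0, 0.0, 10.0]])  # a point 10 m straight ahead
px, mask = project_lidar_to_image(pts, K, T)
# The point lands at the principal point (320, 240).
```

Annotators can then check, point by point, that LiDAR returns line up with the objects they cover in the image.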
Panoptic Segmentation
Coupling instance and semantic segmentation, iMerit enrichment teams label each pixel in an image with both the class it belongs to and the specific instance of that class.
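A panoptic label can be stored as a single per-pixel id that encodes both class and instance. The sketch below uses the common `class_id * OFFSET + instance_id` scheme; the offset constant and class ids are illustrative assumptions, not a fixed standard.

```python
import numpy as np

OFFSET = 1000  # assumed encoding constant for this demo

def encode_panoptic(semantic, instance):
    """Combine per-pixel class ids and instance ids into one panoptic map."""
    return semantic * OFFSET + instance

def decode_panoptic(panoptic):
    """Recover (class_id, instance_id) maps from a panoptic map."""
    return panoptic // OFFSET, panoptic % OFFSET

# Two pixels of class 3 ("car", say) belonging to two different car instances.
semantic = np.array([[3, 3]])
instance = np.array([[1, 2]])
pan = encode_panoptic(semantic, instance)
classes, instances = decode_panoptic(pan)
```

Decoding round-trips the labels, so a single array carries what would otherwise be two separate masks.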
LiDAR
iMerit teams label images and videos with 360-degree visibility, captured by multi-sensor camera rigs, to build accurate, high-quality ground-truth datasets that power autonomous driving algorithms.
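A typical LiDAR label is a 3D cuboid drawn around an object in the point cloud. The sketch below uses an axis-aligned box for simplicity (production labels are usually oriented) and checks which points fall inside it.

```python
import numpy as np

def points_in_box(points, center, size):
    """Return a boolean mask of points inside an axis-aligned cuboid.

    points: (N, 3) LiDAR points; center, size: (3,) box center and extents.
    """
    half = np.asarray(size, dtype=float) / 2.0
    lo = np.asarray(center, dtype=float) - half
    hi = np.asarray(center, dtype=float) + half
    return np.all((points >= lo) & (points <= hi), axis=1)

cloud = np.array([[0.0, 0.0, 0.0],   # inside the box
                  [5.0, 0.0, 0.0],   # far outside
                  [0.4, 0.1, 0.2]])  # inside, near a corner
mask = points_in_box(cloud, center=[0, 0, 0], size=[1, 1, 1])
# Two of the three points lie inside the 1 m cube at the origin.
```

Point-in-box checks like this are also handy for QA, e.g. flagging labeled cuboids that contain too few LiDAR returns.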
Bounding Boxes
iMerit Computer Vision experts draw rectangular boxes around objects to create training data, enabling algorithms to identify and localize objects during machine learning.
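A bounding-box label is just four coordinates, and predicted boxes are usually compared against labeled ones with intersection over union (IoU). The `(x_min, y_min, x_max, y_max)` format below is an assumption for the demo.

```python
def iou(box_a, box_b):
    """Intersection over union of two (x_min, y_min, x_max, y_max) boxes."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    # Intersection rectangle (empty if the boxes do not overlap).
    ix1, iy1 = max(ax1, bx1), max(ay1, by1)
    ix2, iy2 = min(ax2, bx2), min(ay2, by2)
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (ax2 - ax1) * (ay2 - ay1)
    area_b = (bx2 - bx1) * (by2 - by1)
    return inter / (area_a + area_b - inter)

label = (0, 0, 10, 10)    # annotator's ground-truth box
pred = (5, 0, 15, 10)     # model's predicted box
score = iou(label, pred)  # overlap 50, union 150 -> 1/3
```

IoU thresholds (often 0.5) are how annotation quality and model accuracy are scored against ground truth.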
Polygon Annotation
Expert annotators plot a point on each vertex of the target object, so polygon annotation captures all of the object’s exact edges, regardless of shape.
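A polygon label is an ordered list of vertices. The sketch below computes the enclosed area with the shoelace formula, the kind of check a QA step might use to reject degenerate (near-zero-area) polygons.

```python
def polygon_area(vertices):
    """Area of a simple polygon given as [(x, y), ...] via the shoelace formula."""
    n = len(vertices)
    s = 0.0
    for i in range(n):
        x1, y1 = vertices[i]
        x2, y2 = vertices[(i + 1) % n]  # wrap around to close the polygon
        s += x1 * y2 - x2 * y1
    return abs(s) / 2.0

# A 10x10 square traced counter-clockwise.
square = [(0, 0), (10, 0), (10, 10), (0, 10)]
area = polygon_area(square)  # 100.0
```

Because only the vertices are stored, the same representation handles irregular outlines that a rectangular box would over-cover.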
Semantic Segmentation
The iMerit team segments images into their component parts and then annotates them; iMerit Computer Vision experts detect desired objects within images at the pixel level.
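A semantic segmentation label is a mask in which every pixel holds a class id. The sketch below summarizes label coverage per class; the class names and ids are assumptions for the demo.

```python
import numpy as np

ROAD, CAR, SKY = 0, 1, 2  # assumed class ids for this demo

# A tiny 3x3 mask: one class id per pixel.
mask = np.array([[SKY, SKY, SKY],
                 [CAR, CAR, ROAD],
                 [ROAD, ROAD, ROAD]])

def class_coverage(mask, num_classes):
    """Return {class_id: pixel_count} for every class in the mask."""
    return {c: int(np.sum(mask == c)) for c in range(num_classes)}

coverage = class_coverage(mask, num_classes=3)  # {0: 4, 1: 2, 2: 3}
```

Coverage statistics like this help spot class imbalance or missing labels before a dataset ships.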
Object Tracking
iMerit teams detect instances of semantic objects of a given class in digital images and videos and track them across frames; use cases include face detection and pedestrian detection.
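A simple way to turn per-frame detections into tracks is to greedily match each new box to the existing track it overlaps most. The IoU threshold and `(x_min, y_min, x_max, y_max)` box format below are illustrative assumptions.

```python
def iou(a, b):
    """Intersection over union of two (x_min, y_min, x_max, y_max) boxes."""
    ix = max(0, min(a[2], b[2]) - max(a[0], b[0]))
    iy = max(0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = ix * iy
    union = (a[2] - a[0]) * (a[3] - a[1]) + (b[2] - b[0]) * (b[3] - b[1]) - inter
    return inter / union if union else 0.0

def update_tracks(tracks, detections, threshold=0.3):
    """Assign each detection to its best-overlapping track, or start a new one.

    tracks: {track_id: last_box}; detections: list of boxes from one frame.
    """
    next_id = max(tracks, default=-1) + 1
    for det in detections:
        best_id, best_iou = None, threshold
        for tid, box in tracks.items():
            score = iou(det, box)
            if score > best_iou:
                best_id, best_iou = tid, score
        if best_id is None:           # no overlap above threshold: new track
            best_id, next_id = next_id, next_id + 1
        tracks[best_id] = det
    return tracks

tracks = update_tracks({}, [(0, 0, 10, 10)])      # frame 1: one pedestrian
tracks = update_tracks(tracks, [(1, 0, 11, 10)])  # frame 2: same pedestrian, shifted
# The shifted box matches track 0 instead of spawning a new track.
```

Production trackers add motion models and handle occlusion, but the same detect-then-associate loop underlies them.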