Edge Cases Are the Key to Better AI. Here’s Why.

February 04, 2022

At the iMerit ML Data Ops Summit, Chris Barker, founder & CEO of CBC Transportation Consulting, interviewed two leading figures in autonomous driving: Kai Wang, Director of Prediction at Zoox, and Jack Xiaojiang Guo, Head of Autonomy Platform at Nuro.

The conversation focused on the importance of identifying and managing edge cases in autonomous driving in order to improve the performance and safety of self-driving algorithms.

Edge Case Frequency

By definition, edge cases are rare occurrences in which a machine learning algorithm encounters a scenario it has not seen before, and they are typically only identified in real-world operation.

In the case of autonomous driving, an edge case can be an unexpected element, such as a stop sign leaning against a traffic cone. A human driver can quickly assess whether the sign was placed there intentionally and continue as required. A self-driving algorithm may not have enough information, or may not be able to gather enough from the context, to make the same judgment.

“If your goal is to drive hundreds of thousands of miles, edge cases might be something that happen every hundred miles, and that accumulates and shows up fairly often.”

– Kai Wang

While these sorts of scenarios are individually rare, over thousands of miles of driving they occur often enough that they need to be addressed in the design stages of a self-driving algorithm. Ad-hoc roadworks, obstacles in the road, unexpected behaviors, uncommon vehicles: all of these edge cases must be captured and generalized in the machine learning algorithm to ensure continuous safety and performance.

Identifying Edge Cases

Edge cases are best identified in a real-world scenario, either during the data gathering process, namely driving to collect real-world information, or during testing phases. Upon encountering an edge case, the driver or safety driver handles the situation safely. Once such a situation has been identified, it must be corrected: the development team conducts a root cause analysis to determine what went wrong, whether the failure stems from perception, the self-driving model, behavior prediction, or something else.

A common perception issue is identifying rubble or debris on the road. Debris can be benign, such as a paper cup or bag, or dangerous, such as a stone or brick. In this context, the autonomous vehicle must determine whether or not it is safe to drive over.

“Not all three-point turns are created equal.”

– Chris Barker

For behavior prediction, the self-driving algorithm must recognize maneuvers in all their variations. These could include multipoint turns in dense traffic, tight road configurations, U-turns at junctions, and others. Because so many variables are in play whenever a driver, cyclist, or pedestrian executes a maneuver, the self-driving system needs to pick up on cues and subtleties to infer intent.

Multiple Modalities Failover

Edge cases can be difficult to predict even when we know they will happen. Halloween, for instance, is a time when we know to expect the unexpected. But despite knowing the date, we cannot predict all the scenarios that may take place during Halloween. A pedestrian wearing an oversized costume may no longer be identified as a pedestrian by a computer vision algorithm, and as a result the autonomous vehicle cannot respond and react appropriately.

In this type of scenario, the vehicle can rely on other sensors to extract enough information to perform safely. Typically, in addition to a camera, an AV is equipped with Lidar and Radar sensors. These have lower resolution, but their main job is obstacle detection rather than object identification. A pedestrian wearing an oversized costume can still be correctly detected as an obstacle and treated as such.

“A human in a costume is an edge case for the camera, but not for Lidar.”

– Jack Guo

Each sensor modality has strengths and weaknesses. Lidar is good for object detection in 3D space, helping estimate the size, shape, and distance of an object. Radar, despite having lower resolution than Lidar, can see through environmental conditions such as rain, fog, or darkness. Vision has the highest resolution and can gather information about the whole environment, but only in good driving conditions.

“You can benefit from all these kinds of modalities and fuse them together.”

– Kai Wang

A common practice is to use the highest-resolution modality for all-purpose activities and the lower-resolution but more reliable modalities for failover. In the context of autonomous vehicles, vision is the go-to modality for all-purpose driving, while Lidar and Radar act as failovers when the vision algorithm fails to identify an object or during bad driving conditions.
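As a rough illustration, the failover logic can be thought of as a simple decision rule: trust the camera classification when it is confident, and fall back to geometric obstacle detection otherwise. The sketch below is a minimal, hypothetical Python example; the data structures, confidence threshold, and labels are assumptions made for illustration, not the actual stack used at Zoox or Nuro.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class CameraDetection:
    label: Optional[str]   # e.g. "pedestrian"; None if the classifier fails
    confidence: float      # classifier confidence in [0, 1]

@dataclass
class LidarDetection:
    size_m: float          # approximate obstacle size in metres
    distance_m: float      # distance to the obstacle in metres

def classify_obstacle(camera: CameraDetection,
                      lidar: Optional[LidarDetection],
                      min_confidence: float = 0.8) -> str:
    """Prefer the high-resolution camera; fall back to Lidar geometry."""
    # Primary path: trust vision when it recognizes the object confidently.
    if camera.label is not None and camera.confidence >= min_confidence:
        return camera.label
    # Failover path: vision failed (e.g. a pedestrian in an oversized costume),
    # but Lidar still reports a physical obstacle that must be treated as one.
    if lidar is not None:
        return "unidentified_obstacle"
    # No modality reports anything in the vehicle's path.
    return "clear"

# Halloween-style edge case: the camera cannot classify the costumed
# pedestrian, but Lidar still sees a ~1.9 m obstacle 12 m ahead.
print(classify_obstacle(CameraDetection(label=None, confidence=0.0),
                        LidarDetection(size_m=1.9, distance_m=12.0)))
```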

Creating a Generalized Solution

Having failover mechanisms is one component of a generalized solution. Especially when dealing with edge cases, it is important to create an all-purpose solution rather than putting a band-aid over a single case. For example, encountering a plastic bag on the road, an object the vehicle can safely drive over, should be handled by the same process no matter what color the bag is.

A great way to create a generalized solution for a known edge case is to use simulation. In a simulated environment, many variations of the same edge case can be generated, providing the ML algorithm with enough data to generalize the problem.
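As a hypothetical example of how such variations might be enumerated, the snippet below randomizes a few parameters of the plastic-bag scenario mentioned above (color, size, lane position, wind, lighting). The parameter names and ranges are illustrative assumptions, not a description of any particular simulator.

```python
import random

# Hypothetical parameter ranges for a "plastic bag on the road" scenario.
BAG_COLORS = ["white", "black", "blue", "red", "transparent"]

def sample_bag_scenario(seed: int) -> dict:
    """Sample one randomized variation of the edge case to run in simulation."""
    rng = random.Random(seed)
    return {
        "color": rng.choice(BAG_COLORS),
        "size_m": round(rng.uniform(0.1, 0.6), 2),          # bag diameter
        "lane_offset_m": round(rng.uniform(-1.5, 1.5), 2),   # position within the lane
        "wind_speed_mps": round(rng.uniform(0.0, 8.0), 1),   # affects how the bag moves
        "time_of_day": rng.choice(["day", "dusk", "night"]),
    }

# Generate a batch of variations to replay in the simulator and label for training.
scenarios = [sample_bag_scenario(seed) for seed in range(1000)]
print(scenarios[0])
```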

“Making a cycle to iterate on edge cases quickly is crucial to improve the performance of a self-driving car.”

– Jack Guo

For each edge case, ML developers must create a structured process for generalization. This can include generating a simulation of the scenario, pre-determining the amount of data required for generalization, and defining a validation process, all to help iterate on edge cases quickly.
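One way to picture such a structured process is as a loop that simulates the scenario, generates a pre-set budget of data, retrains, and validates until the edge case is handled. The sketch below is purely illustrative; the stub functions (build_simulation, generate_data, retrain, validate) are hypothetical placeholders for a team's real simulator, training jobs, and validation suites.

```python
# Stand-in stub components; a real pipeline would plug in its own tooling.
def build_simulation(edge_case):
    return {"scenario": edge_case}

def generate_data(simulation, n):
    return [dict(simulation, seed=i) for i in range(n)]

def retrain(model, data):
    return {"trained_on": model.get("trained_on", 0) + len(data)}

def validate(model, simulation):
    # Stand-in metric: pretend the pass rate improves with more training data.
    return min(1.0, model["trained_on"] / 10000)

def iterate_on_edge_case(edge_case, model, target_pass_rate=0.99,
                         samples_per_round=5000, max_rounds=10):
    """Simulate, retrain, and validate until the edge case is generalized."""
    simulation = build_simulation(edge_case)                  # 1. simulate the scenario
    for round_idx in range(1, max_rounds + 1):
        data = generate_data(simulation, samples_per_round)   # 2. pre-set data budget
        model = retrain(model, data)                          # 3. update the model
        if validate(model, simulation) >= target_pass_rate:   # 4. validation gate
            return model, round_idx
    raise RuntimeError("edge case not generalized within the round budget")

print(iterate_on_edge_case("plastic bag on the road", model={}))
```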

In Summary

A safety-first and generalized approach to solving edge cases will determine the overall performance of an autonomous vehicle. To handle edge cases, our experts suggest the best way forward is to establish a process for identifying and defining edge cases, to build a robust algorithm with the help of failovers, and to devise a method for iterating on edge cases quickly.