Top Data Annotation Challenges and How to Solve Them

October 05, 2023

Data-related tasks typically account for nearly 80% of the time spent on AI projects, making them a critical component of the machine learning pipeline. Among these tasks, data labeling alone can consume up to a quarter of a project’s duration. Like the later stages of model development and hyperparameter tuning, data labeling comes with its own challenges and, if not handled appropriately, becomes one of the most difficult, time-consuming, and expensive parts of the pipeline.

Many organizations working to build an AI/ML pipeline tackle data labeling haphazardly and underestimate its complexity, and this pitfall leads to inadequate results. As reported by Harvard Business Review, only 8% of firms engage in core practices that support the widespread adoption of AI/ML solutions; most have run only ad hoc pilots or applied AI/ML in just a single business process.

This blog delves into the intricacies of data labeling and examines various facets that contribute to its complexity. Whether you are an experienced professional or have a data annotation project in progress, consider signing up for iMerit’s Ango Hub to streamline your image labeling tasks.

Subject Matter Expertise

Subject matter expertise is the depth of domain knowledge a labeler has about the data they are labeling. Fundamentally, data labeling is a task that applies human knowledge to prepare data for a model to train on later.

Often, this data cannot be labeled accurately without expertise in its characteristics and complexity, which is the primary reason subject matter experts are needed for many labeling tasks.

For instance, when labeling images of tumors in MRI scans, someone with no medical or radiological knowledge would struggle to comprehend and annotate the data. This type of data is best annotated by an expert radiologist or a doctor.

Consider another case where an organization wants to distinguish faulty architectural blueprints from robust ones. A qualified architect would do the best job of identifying such blueprints, while an unqualified labeler would make many mistakes in this complex decision-making process.

The availability and inclusion of subject matter experts thus become the primary challenge of such labeling tasks. These experts can be expensive and very hard to access for many organizations, since the experts’ domains of operation rarely overlap with the organization’s own.

Subjectivity and Human Bias

Many machine learning tasks require data that is inherently subjective. Sometimes there is no right or wrong answer, which makes the task fuzzy and leaves it up to the labeler’s judgment. This introduces human bias into the labels, as labelers have to go with whatever seems like the best (or most logical) answer to them.

More technically, this is known as cognitive bias, which can manifest in various ways. Some of these are:

  • Confirmation Bias: The tendency of a labeler to label data according to information that confirms their existing beliefs. For instance, when labeling data related to COVID-19 vaccine effectiveness, a labeler may apply preconceived notions about the vaccine’s effectiveness while labeling.
  • Anchoring Bias: The tendency of a labeler to give greater weight and importance to the data they encountered early on, or to the first piece of information relayed to them. For example, the initial samples shown during labeling strongly shape how labelers define each label, and they tend to follow those early patterns.
  • Functional Fixedness: The tendency of a labeler to consider only one side, function, or direction of an object. For instance, when asked to label an object that can push down nails in an image, they may annotate a hammer but ignore a wrench, even though the wrench could fulfill the same function.

Consistency

Consistency in data labeling is the level of agreement on a label among the different individuals (or machines) that labeled a specific item (or row) of data. It applies when multiple labelers label the same piece of data. In general, high consistency is required for quality labeled data, but maintaining it can be challenging, partly because of the subjectivity and bias discussed above.

Beyond subjectivity and bias, mistakes in tasks requiring judgment, discretion, or logic are bound to happen. Different labels for the same data item are therefore inevitable, which lowers consistency and demands attention before the data is delivered.
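
Before consistency can be improved, it helps to measure it. Inter-annotator agreement is commonly quantified with a chance-corrected statistic such as Cohen’s kappa; below is a minimal Python sketch of that idea (the function and sample labels are our own illustrations, not part of any particular labeling platform):

```python
from collections import Counter

def cohens_kappa(labels_a, labels_b):
    """Chance-corrected agreement between two annotators
    who labeled the same items with categorical labels."""
    assert len(labels_a) == len(labels_b)
    n = len(labels_a)

    # Observed agreement: fraction of items where both annotators agree.
    observed = sum(a == b for a, b in zip(labels_a, labels_b)) / n

    # Expected agreement: probability both pick the same label by chance,
    # estimated from each annotator's label frequencies.
    freq_a = Counter(labels_a)
    freq_b = Counter(labels_b)
    expected = sum(
        (freq_a[label] / n) * (freq_b[label] / n)
        for label in set(labels_a) | set(labels_b)
    )

    return (observed - expected) / (1 - expected)

# Two annotators labeling the same ten images.
annotator_1 = ["cat", "dog", "cat", "cat", "dog", "cat", "dog", "dog", "cat", "dog"]
annotator_2 = ["cat", "dog", "dog", "cat", "dog", "cat", "cat", "dog", "cat", "dog"]

print(f"Cohen's kappa: {cohens_kappa(annotator_1, annotator_2):.2f}")
```

In this toy example the annotators agree on 8 of 10 items, but after correcting for chance the kappa is only 0.60. Thresholds vary by task, but persistently low kappa values usually signal that the labeling guidelines need clarification.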

There are multiple ways to enhance consistency, but some of the most effective ones are the following:

  • Review system: Any platform used for labeling should have an integrated, robust, and effective review system that lets reviewers check labels and catch those that are erroneous or significantly inconsistent (a minimal sketch of such routing follows this list). This approach generally produces more consistent data.
  • Communication: The requirements for how the data must be labeled should be communicated to labelers succinctly and effectively. Traditional methods, such as workshops, meetings, memos, or emails, work well. Ideally, though, the labeling platform should integrate communication features for both the data owners and the labelers, so that everyone stays on the same page throughout the labeling process. Open communication also improves labeling consistency, as annotators tend to follow guidelines that have been clearly laid out.
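
As a rough illustration of how a review system can route work, the hypothetical helper below auto-accepts items where annotators agree and queues disagreements for a reviewer. It is a sketch of the general idea under our own assumptions, not the review workflow of any specific platform:

```python
from collections import Counter

def route_for_review(item_labels, min_agreement=1.0):
    """Split labeled items into auto-accepted and needs-review buckets
    based on the fraction of annotators agreeing on the top label."""
    accepted, needs_review = {}, {}
    for item_id, labels in item_labels.items():
        top_label, top_count = Counter(labels).most_common(1)[0]
        if top_count / len(labels) >= min_agreement:
            accepted[item_id] = top_label
        else:
            needs_review[item_id] = labels
    return accepted, needs_review

# Three annotators per image; img_2 shows disagreement.
labels = {
    "img_1": ["tumor", "tumor", "tumor"],
    "img_2": ["tumor", "healthy", "tumor"],
    "img_3": ["healthy", "healthy", "healthy"],
}
accepted, needs_review = route_for_review(labels)
print("Auto-accepted:", accepted)          # img_1, img_3
print("Sent to reviewer:", needs_review)   # img_2
```

Lowering min_agreement (for example, to a two-thirds majority) trades review effort for throughput; each project has to set that threshold deliberately.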

Privacy

With the growing adoption of outsourcing or crowdsourcing for data labeling, it is critical to ensure safety, privacy, and confidentiality. Unauthorized access, deletion, and storage of data in unauthorized locations are concerns the labeling entity must address.

Organizations often choose on-premise labeling to tackle this problem and ensure that no third party can access the data. It is the most effective way to ensure privacy, but it comes with its own managerial and administrative overhead, as running labeling on-premise is an extensive process.

A more practical way to tackle this challenge is to ensure that the firm labeling the data complies closely with privacy regulations and processes the data lawfully, fairly, and transparently. This removes the complex workforce- and project-management layers while still letting experts label the finalized data. Some of the things to look out for when ensuring data privacy are:

  • Confidentiality of data
  • Processing of data strictly according to instructions
  • Anonymization of personal/sensitive data (see the sketch after this list)
  • Deletion or return of data after the processing (labeling) period
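
For instance, direct identifiers can be pseudonymized before the data ever reaches the labeling workforce. The sketch below shows one common approach, salted hashing of sensitive fields, so that records stay linkable internally without exposing personal data; the field names are illustrative assumptions, not drawn from any regulation or platform:

```python
import hashlib

# Hypothetical set of fields considered sensitive for this project.
SENSITIVE_FIELDS = {"patient_name", "email", "phone"}

def pseudonymize(record, salt):
    """Replace sensitive fields with salted SHA-256 tokens so the
    labeling workforce never sees raw personal data."""
    clean = {}
    for field, value in record.items():
        if field in SENSITIVE_FIELDS:
            digest = hashlib.sha256((salt + str(value)).encode()).hexdigest()
            clean[field] = digest[:12]  # short token, still linkable internally
        else:
            clean[field] = value
    return clean

record = {"patient_name": "Jane Doe", "email": "jane@example.com", "scan": "mri_0042.png"}
print(pseudonymize(record, salt="project-secret-salt"))
```

Because the salt stays with the data owner, the tokens cannot be reversed by the labeling vendor, yet the owner can re-join labeled results to the original records after delivery.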

Conclusion

Data labeling, especially at large scale, is required for many use cases today and can be challenging, with many facets that need attention. Left unaddressed, these challenges lead either to low-quality data (the pitfalls of which are covered in our Quality Assurance Techniques in Data Annotation blog) or to extra layers of complexity and financial overhead. Often, it is best to outsource this task to a firm you can trust, one that delivers quality and speed and tackles all of these challenges professionally.

At iMerit, using a combination of the techniques mentioned above, we provide our customers with only the highest-quality, consistent, and unbiased data, labeled by a handpicked and highly talented team of experts and subjected to multiple cycles of review.

Throughout the process, we ensure transparent and effective communication by providing initial samples of well-labeled data, clear instructions, and the ability for any labeler to report issues with the data or the labeling process.

Are you looking for data annotation to advance your project? Contact us today.