3 Ways Gaming Companies Can Use AI/ML for Safety & Inclusivity

February 14, 2023

Artificial intelligence and machine learning are revolutionizing the video gaming industry and improving the overall gaming experience. Because developers spend less time on tedious tasks, they can focus on new game sequences and storylines that make games engaging and rewarding. During the pandemic especially, online games served as a lifeline for many players and supported their mental health. The downside of online gaming emerges when players get overly competitive and start threatening, abusing, bullying, or doxing others. Worse, players hiding behind anonymous usernames feel empowered to say anything, creating an unsafe and toxic space.

The result? The offended player may leave the platform and never return, the toxicity may harm their mental state, and the gaming platform can receive negative reviews that damage its reputation and even invite legal repercussions.

How Common Is Toxicity on Gaming Platforms?

  • Statista's Global Consumer Survey (GCS) found that nearly one in five US gamers say toxic behavior and sexism are problems within the gaming community today.
  • A study by Unity, maker of the cross-platform game engine, found that 68% of players have experienced some form of toxicity, including, but not limited to, sexual harassment, hate speech, threats of violence, and doxing. The company recently partnered with UK police to develop a unique police alert system for tackling extreme cases of in-game toxicity.
  • Another study found that 59% of adult gamers believe laws are needed to increase transparency around how game companies address hate, harassment, and extremism.

How AI & Machine Learning Can Curb In-Game Toxicity

Toxicity on gaming platforms is a rising concern. In this blog, we will look at three ways gaming companies can build safe and inclusive communities by leveraging AI and machine learning.

Moderating In-game Content like Chats, Audio, and Video

The model is trained on a large dataset of examples labeled as acceptable or objectionable, and uses those examples to learn what is permissible within a gaming community. The trained model can then monitor chats and identify bullying, profanity, hate speech, sexual harassment, and other offensive content. If the model recognizes a comment as harmful, it can take it down before it even reaches the recipient. When context is missing, the model can flag the content for review, allowing human moderators to take appropriate action.
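The sketch below shows a minimal version of this flow: a toy classifier trained on a handful of labeled chat lines, plus a moderation function that blocks clearly toxic messages and flags borderline ones for human review. The training examples, threshold values, and the `moderate` helper are illustrative assumptions; a production system would train on far larger labeled datasets.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy training set: real systems use millions of labeled chat lines.
messages = [
    "great match, well played everyone",
    "can someone trade me a health potion?",
    "you are trash, uninstall the game",
    "shut up or I will find where you live",
]
labels = [0, 0, 1, 1]  # 0 = acceptable, 1 = objectionable

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(messages, labels)

def moderate(chat_line: str, block_threshold: float = 0.9,
             review_threshold: float = 0.5) -> str:
    """Block clearly toxic lines; queue borderline ones for human review."""
    p_toxic = model.predict_proba([chat_line])[0][1]
    if p_toxic >= block_threshold:
        return "block"            # taken down before it reaches the recipient
    if p_toxic >= review_threshold:
        return "flag_for_review"  # missing context -> human moderator decides
    return "allow"

print(moderate("gg, nice teamwork"))
```

The two thresholds encode the trade-off described above: high-confidence toxic lines never reach the recipient, while ambiguous ones are routed to moderators.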

Context-Aware Content Moderation

Context-aware moderation relies on a more advanced model trained to recognize the context and intent behind words. Instead of matching isolated keywords and phrases, the system evaluates complete sentences, making it better at comprehending semantic meaning, context, and implications.

For instance, suppose the words “kill” and “boar” appear together in an in-game chat. In a non-toxic context, such as “I need to kill ten boars to complete this quest,” the usage is neutral or even positive because it describes a common objective in many games. In a toxic context, however, such as “I am going to kill you too as you look like a boar,” the same words are offensive. Building such a system begins with data collection and labeling, followed by sentiment analysis, to train models that recognize in-game toxicity.
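As a rough illustration, the sketch below contrasts a naive keyword filter with a sentence-level classifier, using the open-source unitary/toxic-bert checkpoint from Hugging Face as a stand-in for a purpose-built model. The blocklist and example sentences are assumptions for demonstration, and how cleanly any given checkpoint separates the two lines depends on its training data.

```python
from transformers import pipeline

# unitary/toxic-bert is an open-source toxicity classifier used here as a
# stand-in; a production system would fine-tune on labeled in-game chat.
classifier = pipeline("text-classification", model="unitary/toxic-bert")

# A naive keyword filter flags both sentences because both contain "kill".
BLOCKLIST = {"kill"}

examples = [
    "I need to kill ten boars to complete this quest",     # routine game objective
    "I am going to kill you too as you look like a boar",  # targeted threat
]

for text in examples:
    keyword_hit = any(word in BLOCKLIST for word in text.lower().split())
    top = classifier(text)[0]  # highest-scoring label and its probability
    print(f"{text!r}")
    print(f"  keyword filter: {'flag' if keyword_hit else 'pass'}")
    print(f"  sentence-level model: {top['label']} ({top['score']:.2f})")
```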

Content Moderation in Multiple Languages

When frustrated or angry, many people switch to their native language, and aggressive players may likewise use native slang to vent their frustration. If the model is not trained in that language, it will fail to detect the harmful behavior, so training it to understand multiple languages becomes crucial. Achieving this requires speech from people across demographic backgrounds to train automatic speech recognition (ASR) models.
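One common pattern is to detect the language of each message and route it to a classifier trained for that language, falling back to human review when no model exists. The sketch below assumes the langdetect package for language identification; the per-language scoring functions are placeholders standing in for real trained models, and since language identification on short chat lines is noisy, real systems often combine it with signals such as the player's locale.

```python
from langdetect import detect  # pip install langdetect

# Placeholder per-language scorers; in practice each entry would be a
# toxicity model trained on labeled chat (and ASR transcripts) in that language.
CLASSIFIERS = {
    "en": lambda text: "trash" in text.lower(),
    "es": lambda text: "basura" in text.lower(),
    "de": lambda text: "müll" in text.lower(),
}

def moderate(chat_line: str) -> str:
    lang = detect(chat_line)           # e.g. "en", "es", "de"
    classifier = CLASSIFIERS.get(lang)
    if classifier is None:
        # No model for this language yet: exactly the blind spot described above.
        return "flag_for_review"
    return "block" if classifier(chat_line) else "allow"

print(moderate("Eres pura basura, desinstala el juego"))  # routed to the Spanish scorer
```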

High-Quality Data Annotation is Key

Once the data is collected from multiple relevant sources, the next step is to annotate it. The quality of data annotation directly impacts the quality of your automatic speech recognition (ASR) models, which ultimately help you remove toxicity from the gaming community. iMerit has experience working with AI/ML teams across the gaming industry to customize automatic speech recognition and player behavior classification models tuned to the standards of your player communities.
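To make the annotation step concrete, a labeled voice-chat utterance might look like the record sketched below. The schema is purely illustrative, not iMerit's actual format; field names such as `severity` and `annotator_agreement` are assumptions showing the kinds of signals that make downstream ASR and behavior models reliable.

```python
# Hypothetical annotation record for one voice-chat utterance.
annotation = {
    "clip_id": "match_4812_utt_0093",
    "language": "en",
    "transcript": "you are trash, uninstall the game",  # verbatim ASR ground truth
    "speaker_demographics": {"age_band": "18-24", "accent": "en-GB"},
    "labels": {
        "toxic": True,
        "categories": ["insult"],  # e.g. insult, hate_speech, threat, doxing
        "severity": 2,             # 0 = benign ... 3 = escalate to trust & safety
    },
    "annotator_agreement": 0.92,   # inter-annotator agreement, for QA
}
```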

Are you looking for gaming behavior moderation to make your platform safer? Here’s how iMerit can help.