
Riot Games and Ubisoft team up in AI research project — using player chat logs as training data

Results of this research are set to be published next year.

Ubisoft and Riot are collaborating on an anti-toxicity research project focused on collecting in-game chat logs as training data for AI algorithms. Both companies are set to publish their findings next summer, at which point future steps will be decided.

Speaking to me via a Zoom call, Wesley Kerr, director of tech research at Riot, and Yves Jacquier, executive director of Ubisoft La Forge, shared their goals and long-term hopes for the project. Hailing it as the first open research collaboration of its kind in the field of AI between two gaming companies, they hope the findings published next year will be a first step towards the industry's effective use of AI as a tool for reducing toxicity.

For some reason, Udyr players were always hella toxic when I played League a lot. I wonder if that's still true after his rework?

According to Jacquier, the project has three main goals. First, to create a shared network of datasets filled with fully anonymised player data. Second, to create an AI algorithm that can work off this data. Finally, for the partnership to act as a “prototype” for future industry initiatives against toxicity, encouraging competition and further strides in the field.

It makes sense that Riot and Ubisoft would be two companies invested in solving this problem, considering their popular multiplayer titles. Rainbow Six: Siege gets real dirty real quick as soon as teamwide cooperation takes a hit, and Riot’s troubled twins League of Legends and Valorant are drenched in toxic ooze.

Both Kerr and Jacquier emphasised throughout the interview that player anonymity, and adherence to regional laws and GDPR, were among their top priorities. When asked if player data was shared between companies, Kerr stressed that your League of Legends account info wouldn’t be sent to other companies without a player’s consent. Rather, the chat logs would be stripped of identifying info before any algorithms could poke and prod at them.
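The article doesn't detail how that stripping works, but the general idea can be sketched in a few lines. Everything below is my own illustration, not Riot's or Ubisoft's pipeline: real anonymisation also has to handle player-ID mappings, regional rules, and GDPR erasure requests.

```python
import re

def anonymise(line: str, known_usernames: set[str]) -> str:
    """Crude sketch: redact identifying info from one chat line
    before it enters a shared dataset. Illustrative only."""
    # Replace any known username with a placeholder token
    for name in known_usernames:
        line = re.sub(re.escape(name), "<PLAYER>", line, flags=re.IGNORECASE)
    # Redact email-like strings as a rough catch-all
    line = re.sub(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b", "<EMAIL>", line)
    return line

print(anonymise("gg Faker2000 you int, mail me at x@y.com", {"Faker2000"}))
# → "gg <PLAYER> you int, mail me at <EMAIL>"
```

The key property is that the placeholder tokens preserve the sentence shape, so a downstream model can still learn from the message without ever seeing who sent it.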

The most immediate problem that comes to mind when you hear about AI curbing toxicity is the perseverance of players determined to let you know just how trash you are. The trash-talk lexicon is constantly shifting within online communities, with new words invented all the time. How could an AI respond to that? The trick, according to Jacquier, is not to rely on dictionaries and static data sources. Hence the value of using current player chat logs, which reflect the current toxicity meta.

Then there’s the other concern of misfires, especially in a medium where friendly banter between friends, random teammates, and even enemy players can be part of the experience. If I’m playing top lane in League of Legends and I write “nice CS bud” to my 0/3 lane opponent, that’s just a bit of banter, right? If they do the same to me, that’s invigorating. It makes me want to win more, and enhances the experience. How can an AI determine the difference between genuinely harmful toxicity and banter?

“It’s very difficult,” states Jacquier. “Understanding the context of a discussion is one of the hardest parts. For example, if a player threatens another player. In Rainbow Six, if a player says ‘hey I’m gonna take you out’, that might be part of the fantasy. Whereas in other contexts it might have a very different meaning.” Kerr followed this up with some of the benefits video games have in this regard, thanks to other factors.

According to him, taking into account who you queue up with is an example of a factor that could assist AI in determining real toxicity from fun ribbing. In theory, you wouldn’t be hit by strays if you call your lifelong best friend dogshit in a League lobby.
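Kerr's "who you queued with" signal could be folded into a scorer very simply. This sketch is purely hypothetical: the down-weighting factor and threshold are invented for illustration, and a real system would use many more context features than one boolean.

```python
def flag_message(text_score: float, premade_together: bool,
                 threshold: float = 0.8) -> bool:
    """Hypothetical sketch: down-weight a message's toxicity score
    when sender and target queued together as a premade, since harsh
    words between friends are often banter."""
    adjusted = text_score * (0.4 if premade_together else 1.0)
    return adjusted >= threshold

# Same harsh message, two social contexts:
flag_message(0.9, premade_together=False)  # flagged against a stranger
flag_message(0.9, premade_together=True)   # tolerated between premades
```

The design point is that the text model and the context signal stay separate: the language score says *what* was said, and the social features decide how much it matters.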

As for the future, all eyes are on next year’s published results. For now, the project is focused only on chat logs, but with Riot Games looking into monitoring voice comms in Valorant, Kerr refused to take that off the table as a future area of research if the collaboration continues past 2023. For now, it’s a blueprint. A first step in a long journey both companies appear dedicated to travelling. While both Kerr and Jacquier are hopeful that the research project will produce important findings, and inspire other companies to follow suit, they don’t believe AI is the be-all and end-all of toxicity moderation.

“AI is a tool, but it’s not a silver bullet. There are many ways to ensure player safety, so the idea is to better understand how this tool can best be used to tackle harmful content.”

Ultimately, this research is but one component in a wider effort, but in the minds of both Jacquier and Kerr one that will hopefully prove critical in the future. Only time will tell if they’re right, whether they can keep to their promise that the privacy of players will be upheld, and whether AI really is the next frontier in the battle against toxicity.
