Esports platform FaceIt has launched a new AI to manage its community.
According to a new blog post, the new tech - called Minerva - is trained through machine learning with the aim of tackling toxicity. So far, the AI has reduced toxic messages by 20 per cent and banned 20,000 users for vile behaviour.
To begin with, Minerva was focused on Counter-Strike: Global Offensive matches; within seconds of a match ending, the AI is able to decide whether to issue warnings or bans.
Minerva issued a further 90,000 warnings on top of the bans. More than 200 million messages were analysed, with seven million flagged as toxic. The number of unique users sending vile messages has decreased by eight per cent.
The more often a user is flagged as toxic, the harsher the punishments they receive. Minerva has been fully automated since August and no longer needs manual intervention to issue bans.
“In-game chat detection is only the first and most simplistic of the applications of Minerva and more of a case study that serves as a first step toward our vision for this AI,” said the company in a statement.
“We’re really excited about this foundation as it represents a strong base that will allow us to improve Minerva until we finally detect and address all kinds of abusive behaviours in real-time.
“In the coming weeks, we will announce new systems that will support Minerva in her training.”