Machine learning algorithm detects toxic emails and chats in real time

Toxic work environment (Photo: Kim Britten)

People managers and HR leaders have a new tool for detecting and stopping workplace harassment: a machine learning algorithm that flags toxic communications. CommSafe AI can document incidents of racism and sexism and track patterns of behavior over time. This makes it easier to spot bad actors and intervene when a problem starts rather than waiting until a case is filed.

The CommSafe algorithm monitors email and chat services, including platforms such as Microsoft Teams, and uses machine learning to measure the emotion and tone of written communication in real time. According to Ty Smith, founder and CEO of CommSafe AI, the algorithm understands nuance and context, such as when “breast” refers to a lunch order from a chicken restaurant and when it refers to human anatomy. Smith said the algorithm avoids the false positives generated by monitoring solutions that rely on rules or keyword searches to detect problematic comments.
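The article does not describe CommSafe’s implementation, but the difference between keyword rules and context-aware classification can be illustrated with a minimal sketch. The example below assumes an off-the-shelf toxicity model from the Hugging Face Hub (unitary/toxic-bert is one publicly available option) and made-up sample messages; it is not CommSafe’s code.

```python
# Minimal sketch contrasting keyword rules with a context-aware classifier.
# Not CommSafe's implementation; "unitary/toxic-bert" is a publicly available
# toxicity model used here purely for illustration.
from transformers import pipeline

KEYWORDS = {"breast"}  # a naive rule-based filter flags on words alone


def keyword_flag(message: str) -> bool:
    """Flags any message containing a listed keyword, regardless of context."""
    return any(word in message.lower() for word in KEYWORDS)


# A transformer-based classifier scores the whole sentence, so context matters.
classifier = pipeline("text-classification", model="unitary/toxic-bert")


def ml_flag(message: str, threshold: float = 0.5) -> bool:
    """Flags a message only if the model's toxicity score exceeds the threshold."""
    result = classifier(message)[0]  # e.g. {"label": "toxic", "score": 0.97}
    return result["label"] == "toxic" and result["score"] >= threshold


messages = [
    "Can you grab me a grilled chicken breast sandwich for lunch?",  # benign
    "Nice breasts, you should wear that more often.",                # harassment
]

for msg in messages:
    print(f"keyword={keyword_flag(msg)} ml={ml_flag(msg)} :: {msg}")
```

The point of the contrast is that the keyword rule flags the lunch order while the sentence-level classifier does not; any production system would still need thresholds tuned to the organization.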

Smith said the goal is to change behavior when the problem first appears. He also recommends that employers always tell employees that the tracking service is in place.

“Instead of sending these messages over Slack, they will keep it to themselves or only say these things when they see the person face to face,” he said. “Either way, it results in a change in individual behavior and reduced risk for the company.”

Bern Elliot, a research vice president at Gartner who specializes in artificial intelligence and natural language processing, said sentiment analysis has improved over the past few years thanks to increased computing power for analyzing larger datasets and the ability to handle many types of content.

“Algorithms can now span a wider time frame and range of content and do it at scale,” he said.

CommSafe customers can also review archived communications as part of a harassment investigation.

“If a woman goes to HR and says she was harassed by a coworker on Slack six months ago, the software could surface any instances of toxic communication to support whether that happened or not,” he said.

SEE: The COVID-19 gender gap: What will it take to get Black women back to work?

Smith said diversity officers can use this tracking and tracing over time to identify bad actors within the company.

“If a company is going to hire a DEI officer and make them responsible for this work, that person needs to know where to start,” he said.

Elliot said the key is to anonymize this information in general and give access to only a few people who can address the problem behavior privately.

“Someone should be able to de-anonymize the information,” he said.
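Neither Elliot nor CommSafe describes how that anonymization would be built, but the general pattern of keyed pseudonyms that only designated staff can reverse looks roughly like the sketch below. The key handling, function names and incident record are all hypothetical.

```python
# Minimal pseudonymization sketch, assuming incidents are stored under an HMAC
# pseudonym instead of a name. Only whoever holds the key and the lookup table
# can map a pseudonym back to an employee; this is an illustration, not
# CommSafe's design.
import hashlib
import hmac
import os

SECRET_KEY = os.urandom(32)                   # held only by designated reviewers
_pseudonym_to_identity: dict[str, str] = {}   # restricted reverse-lookup table


def pseudonymize(employee_id: str) -> str:
    """Returns a stable pseudonym for an employee and records the reverse mapping."""
    token = hmac.new(SECRET_KEY, employee_id.encode(), hashlib.sha256).hexdigest()[:16]
    _pseudonym_to_identity[token] = employee_id
    return token


def deanonymize(token: str) -> str:
    """Intended to be callable only by authorized HR or DEI staff."""
    return _pseudonym_to_identity[token]


incident = {"author": pseudonymize("jdoe@example.com"), "toxicity_score": 0.93}
print(incident)                          # reports show the pseudonym only
print(deanonymize(incident["author"]))   # restricted lookup reveals the identity
```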

Measuring trust and safety

The challenge is to scale this corrective feedback across companies with 10,000 employees or online games with 100,000 users, and to monitor multiple channels for multiple problems, including IP violations, spam, fraud, phishing, false information and illegal content such as non-consensual sexually explicit imagery.

“We haven’t developed the right tools to manage this,” Elliot said. “Really big companies have big groups of people working on this, and they still have a long way to go.”

HR leaders don’t necessarily see or monitor recurring patterns of behavior, and individuals on the receiving end of harassment don’t always want to raise the problem themselves, Elliot said.

“If you dig, these things don’t start out of nowhere,” Elliot said. “These are patterns of behavior, and you can see whether there are indications of behavior that someone is trying to keep hidden from HR.”

SEE: Why a secure metaverse is needed and how to build welcoming virtual worlds

Elliot also suggested that companies use this tracking software to measure the safety of groups, not just individuals.

“A pattern of behavior can be within a group or with an individual, and it can interact with other factors,” he said. “People don’t break those rules all the time; there are some triggers that make it OK.”

Elliot suggested that companies consider implementing this type of emotional analysis beyond employee communications.

“Toxic content is actually a pretty narrow problem — it’s looking at content generated by third parties that you have some responsibility for that’s the bigger issue,” he said.

The bigger challenge is tracking trust and safety in conversations and other interactions that include text, voice, photos and even emojis.

Developing a model to identify hate speech

Smith started a technology-enabled risk assessment company in 2015 with an initial focus on workplace violence.

“Once I stood up the company and started working with large customers, I realized that active shooter situations were a very small part of the problem,” he said.

In early 2020, he shifted the company’s focus to toxic communication with the idea of anticipating the problem rather than responding to bad things that had already happened.

He conducted a brainstorming session with military and law enforcement officers experienced in dealing with violent individuals. The group identified toxic communication as a precursor to workplace bullying, racial discrimination and other violent behavior.

Smith said the CommSafe team used publicly available datasets to build the algorithm, including text taken from Stormfront, a white supremacist forum, and the Enron email dataset, which includes email from 150 senior managers at the failed energy company.
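CommSafe’s actual model and training pipeline are not public, but the general idea of training a classifier on labeled text from such corpora can be shown in a minimal sketch. The tiny inline dataset below is hypothetical and only stands in for labeled forum posts and ordinary business email.

```python
# Minimal sketch of training a text classifier from labeled examples, in the
# spirit of combining toxic forum posts with ordinary business email.
# The dataset here is a made-up stand-in; CommSafe's real pipeline is unknown.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = [
    "Please send the Q3 forecast before Friday's meeting.",  # benign, email-like
    "Let's sync on the vendor contract after lunch.",         # benign, email-like
    "People like you don't belong in this office.",            # toxic
    "Go back to where you came from.",                          # toxic
]
labels = [0, 0, 1, 1]  # 0 = benign, 1 = toxic

# TF-IDF features plus logistic regression: a deliberately simple baseline.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(texts, labels)

# Probability that a new message is toxic.
print(model.predict_proba(["You people are all the same, get out."])[:, 1])
```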

After selling a beta version of the software, CommSafe engineers integrated customer data to train the algorithm on more industry-specific language.

SEE: Glassdoor: 3 out of 5 have witnessed or experienced employment discrimination

“It usually takes between three and eight weeks for the AI to learn about a particular organization and set a baseline of what the culture looks like,” he said.
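Smith does not say how that baseline is computed. One plausible interpretation, sketched below purely as an illustration, is to score messages over a trailing window and flag only those that deviate sharply from the organization’s norm; the class, window size and threshold are all assumptions.

```python
# Hypothetical sketch of "baselining": score each message, accumulate an
# organization-level rolling distribution, and flag only messages that deviate
# sharply from that norm. Not a description of CommSafe's actual method.
from collections import deque
from statistics import mean, stdev


class ToxicityBaseline:
    def __init__(self, window: int = 5000, z_threshold: float = 3.0):
        self.scores = deque(maxlen=window)  # recent toxicity scores
        self.z_threshold = z_threshold

    def observe(self, score: float) -> bool:
        """Returns True if the score is an outlier against the current baseline."""
        flagged = False
        if len(self.scores) >= 30:          # require some history before flagging
            mu, sigma = mean(self.scores), stdev(self.scores)
            flagged = sigma > 0 and (score - mu) / sigma > self.z_threshold
        self.scores.append(score)
        return flagged


baseline = ToxicityBaseline()
for s in [0.02, 0.05, 0.01] * 20 + [0.95]:  # mostly benign, one toxic spike
    if baseline.observe(s):
        print(f"outlier score {s:.2f} exceeds the organization's baseline")
```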

The company plans to release a new version of the software by the end of May. In February, the company received certification from ServiceNow, which means the CommSafe software is now available in the ServiceNow Store.

The algorithm does not recommend a particular course of action in response to a particular email or Slack message. Smith said HR customers have identified real-time tracking as the most important feature. Companies can also launch tracking software as part of a ServiceNow integration.

“CommSafe AI customers can build workflows on the side of ServiceNow to allow them to solve the problem in real time,” he said.

CommSafe is also working on a phase of a Department of Defense contract to test the algorithm’s ability to detect warning signs of suicide and self-harm.

“We are working with the DOD today to see if we can expand this use case to a wider audience,” he said.
