As Tech Turns to A.I. to Fight Hate, Experts Warn Human Input Is Critical

Yfat Barak-Cheney, speaking at the Eradicate Hate Global Summit, is helping to combat extremism in her role as director of technology and human rights at the World Jewish Congress. (Joe Porrello / The CC Pulse)

Editor’s note: The Eradicate Hate Global Summit began in 2021 in response to the mass shooting at the Tree of Life synagogue. Last year, The CC Pulse was there for the first time in service of our Stop the Hate coverage. This is one of many stories we will publish that are about or inspired by the summit.

By Joe Porrello

PITTSBURGH — With hate speech proliferating online, social media and other tech companies face growing scrutiny over their role in its spread and mounting pressure to do something about it.

Representatives from TikTok, Meta and Microsoft spoke with artificial intelligence experts and nonprofit leaders at the Eradicate Hate Global Summit about how they’re trying to battle bigotry.

Specifically, they suggested that A.I. could help reduce the online communication that contributes to hate-fueled violence. But panelists cautioned against relying on the technology alone, stressing the importance of keeping people involved in moderating and removing potentially problematic or harmful content. Widespread tech layoffs, however, have eliminated many of the jobs with such responsibilities.

According to a 2023 Anti-Defamation League survey, online hate rose sharply for both adults and teens over the preceding 12 months. Reports of each type of online hate covered in the survey increased by nearly every measure and within almost every demographic group on social media.

Yfat Barak-Cheney, director of technology and human rights at the World Jewish Congress, said that despite both persistent and new challenges in battling hate online, popular social media companies are making a lot of progress.

“It’s an everlasting game of chasing after our next steps,” she said, adding that smaller online sites need incentives to manage hate on their platforms.

Barak-Cheney works with Meta and TikTok to redirect people who search for conspiracy theories to proper resources elsewhere on the internet.
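As a rough illustration, a redirect like that can be as simple as a lookup table that maps flagged search terms to authoritative resources. The sketch below is a minimal, hypothetical version; the terms, URLs and function names are illustrative assumptions, not the actual systems Barak-Cheney works on.

```python
# Hypothetical sketch of a search-redirect layer: queries matching flagged
# conspiracy-theory terms get pointed to authoritative resources instead of
# ordinary results. All terms and URLs below are illustrative assumptions.

REDIRECTS = {
    "holocaust denial": "https://aboutholocaust.org",       # example resource
    "great replacement": "https://example.org/get-the-facts",
}

def resolve_search(query: str) -> str | None:
    """Return a redirect URL if the query contains a flagged term, else None."""
    normalized = query.lower().strip()
    for term, url in REDIRECTS.items():
        if term in normalized:
            return url
    return None

# Usage: a matching search is answered with the counter-resource page.
print(resolve_search("Is the Great Replacement real?"))  # prints the URL
```

A production system would also have to handle misspellings and coded language, which is part of why panelists said human review remains necessary.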

Through her work with victims, Barak-Cheney said she realized one of the only avenues for someone facing hate is to report it, a frustrating process for many people. She and others on the panel are trying to create new ways for targets of hate to heal and move forward.

>>>Want to report a hate crime or hate incident without involving law enforcement? You can do so anonymously by visiting CAvsHate.org or calling 833-8-NO-HATE.

Valiant Richey, the global head of outreach and partnerships for trust and safety at TikTok, said that words, images, videos and symbols could cultivate connection just as much as division.

Richey said that in the first quarter of 2023, TikTok removed 90% of hateful content — 75% of which was never seen — and that its #swipeouthate campaign videos have amassed over 250 million views in about three years.

However, as the New York Post reported, USC researchers in 2023 said the addictive nature of social media fuels hate speech, misinformation and desensitization.

>>>Read: My Generation Is Addicted to Social Media

United Nations experts said last year there is an “urgent need” to hold social media giants accountable for curbing hate.

“We have seen across the world, and time, how social media has become a major vehicle in spreading hate speech at an unprecedented speed,” said Alice Wairimu Nderitu, special adviser to the U.N. Secretary-General on the Prevention of Genocide. “We saw how the perpetrators in the incidents of identity-based violence used online hate to target, dehumanize and attack others, many of whom are already the most marginalized in society…”

Dina Hussein, global head of policy development and expert partnerships for counterterrorism and dangerous organizations policy at Meta, said the company is tackling extremism by addressing not just hateful content but also the behaviors behind it.

She said thousands of trust and safety workers at Meta remove any content praising, supporting or representing those on its Dangerous Organizations and Individuals list.

However, companies like Meta, Amazon, Alphabet — which owns Google — and Twitter (now X) have all greatly reduced their teams focused on trust and safety as well as ethics, CNBC reported in 2023. Meta announced it would cut 21,000 jobs, which would have “an outsized effect on the company’s trust and safety work.”

Hussein said companies like hers can most effectively address online bigotry by looking at hate in large online spaces and sharing data with other social media companies and nonprofit organizations, as well as using A.I. software.

New technology and partnerships enable Meta to track terrorist propaganda and prevent it from being reposted.
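One widely used technique for that kind of tracking is hash matching: a flagged video or image is reduced to a digital fingerprint, and new uploads are checked against a shared database of fingerprints, such as the one maintained by the Global Internet Forum to Counter Terrorism. The sketch below is a simplified, assumed version; it uses exact hashes, while real systems rely on perceptual hashes that survive re-encoding and cropping.

```python
import hashlib

# Simplified sketch of upload-time hash matching. Plain SHA-256 here only
# catches byte-identical reuploads; production systems use perceptual hashes
# that tolerate re-encoding, cropping and other edits.

known_hashes: set[str] = set()  # stands in for a shared fingerprint database

def fingerprint(data: bytes) -> str:
    """Reduce a piece of media to a fixed-length digital fingerprint."""
    return hashlib.sha256(data).hexdigest()

def register_flagged_media(data: bytes) -> None:
    """Add a flagged video or image to the shared database."""
    known_hashes.add(fingerprint(data))

def should_block_upload(data: bytes) -> bool:
    """Block an upload if it matches a known fingerprint."""
    return fingerprint(data) in known_hashes

# Usage: once a clip is flagged, identical reuploads are stopped on arrival.
register_flagged_media(b"...bytes of a flagged video...")
print(should_block_upload(b"...bytes of a flagged video..."))  # True
```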

“While we are evolving our tactics, the adversary is also mutating; our evolution needs to meet with that mutation in an equilibrium and hopefully advance beyond it,” Hussein said.

Hugh Handeyside, a senior policy manager at Microsoft, said the company will use A.I. to address new problems surrounding hate on a global scale while remaining committed to fundamental rights like freedom of speech.

“We see in A.I. an opportunity for a leap forward in our ability to predict and mitigate risk,” he said.

Handeyside believes that, even given the understandable concerns about A.I. and its impact on society, the technology can promote fairness and trustworthiness on online platforms.

Last March, Microsoft laid off its entire A.I. ethics and society team, which employees said “played a critical role in ensuring that the company’s responsible AI principles are actually reflected in the design of the products that ship,” the Verge reported.

Mike Pappas, the founder and CEO of Modulate — which helps online platforms defend against toxicity — said moderating content requires a careful balance between users’ expectation of privacy and the detection of hateful acts, with moderators digging no deeper into anyone’s content or profile than needed.

One key to getting the most out of A.I., according to Handeyside, is building human oversight and review into multiple points in the deployment of systems that detect hateful posts.

Human input is particularly necessary when posts use hateful language to educate or raise awareness, ensuring that A.I. does not remove them.
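In practice, that kind of oversight is often structured as a pipeline in which the A.I. classifier settles only the clear-cut cases and routes everything else, including posts that may quote hateful language for educational purposes, to human reviewers. The thresholds and labels in the sketch below are hypothetical assumptions, not any company’s published settings.

```python
# Hypothetical human-in-the-loop routing. A classifier score settles only the
# clear cases; ambiguous posts, and posts that may quote hateful language for
# education or awareness, go to a person. Thresholds are assumed values.

REMOVE_THRESHOLD = 0.95  # assumed: auto-remove above this score
ALLOW_THRESHOLD = 0.10   # assumed: auto-allow below this score

def route_post(hate_score: float, may_be_educational: bool) -> str:
    """Return 'remove', 'allow' or 'human_review' for a scored post."""
    if may_be_educational:
        return "human_review"      # never auto-remove possible counter-speech
    if hate_score >= REMOVE_THRESHOLD:
        return "remove"
    if hate_score <= ALLOW_THRESHOLD:
        return "allow"
    return "human_review"          # uncertain middle ground needs a person

# Usage: a news clip quoting a slur is escalated to a reviewer, not removed.
print(route_post(0.97, may_be_educational=True))   # human_review
print(route_post(0.97, may_be_educational=False))  # remove
```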

“I’m a founder of an A.I. tech startup, and let me tell you very emphatically — please don’t just trust A.I. to moderate your content; it’s a bad idea,” said Pappas. “We think about A.I. as a tool to augment moderation and trust and safety teams, which frankly are all too often under-resourced.”

Even with A.I. and human oversight, perpetrators can find their way into online spaces. So looking for ways the system can be abused or bypassed is crucial, according to Handeyside.

Pappas noted that in the physical world, places like the library have a code of conduct known by all, but many online spheres have less clear-cut rules because of their relative newness, which he said often leads young people to develop toxic behavior.

“They don’t understand that it’s bad. They just understand it’s a new thing they can do to push the envelope,” Pappas said. “So they mess around and don’t really realize the harm they’re doing.”

>>>Q&A: How Youth Become Radicalized and What to Do

Working with Take This, a nonprofit community mental well-being resource, Modulate found that children’s typical progression toward online hate begins with swear words, followed by sexual vulgarity, then violent speech and hateful language.

“The good news is, most kids don’t graduate to the next level,” Pappas told the Pulse, referring to committing acts of violence.

Because codes of conduct tend to be overly general and wordy, Pappas said, they are often hard even for adults to understand.

“How do they actually figure out what the code of conduct is? They say, ‘What can I get away with?’ ” he said.

Richey agreed.

“It’s not enough to tell people ‘don’t hate,’ ” he said. “We have a responsibility to help our community understand a little more what that means.”

Richey said TikTok updated its community guidelines earlier this year to define different types of hate, including misgendering and slurs, more clearly.

As social media and A.I. continue to evolve, those combating hate online will have to keep looking for fresh solutions.

“In this sphere, success is not a finish line; it’s a constant improvement,” said Handeyside.

Hussein noted that policy must go beyond the web to be effective.

“Just removing it from your platform doesn’t remove it from existence,” she said.

This resource is supported in whole or in part by funding provided by the State of California, administered by the California State Library in partnership with the California Department of Social Services and the California Commission on Asian and Pacific Islander American Affairs as part of the Stop the Hate program. To report a hate incident or hate crime and get support, go to CA vs Hate.
