Artificial intelligence (AI) is often in the spotlight for its role in detecting and combating hate speech. Now, Vaibhav Garg, a Virginia Tech collegiate assistant professor of computer science at the Innovation Campus in Alexandria, is pushing the conversation further.

His latest research focuses on an often-overlooked category: inciting language. This work not only distinguishes incitement from hate speech, but also paves the way for AI-driven solutions that promote safer online spaces.

“The whole world is talking about hate speech, but there’s already extensive work in that area,” Garg said. “What we found as researchers is that another category – inciting speech – is largely unaddressed.”

Inciting speech, unlike hate speech, is often subtle, according to Garg’s recently published article in IEEE Transactions on Computational Social Systems, “Understanding Inciting Speech as New Malice.” It doesn’t always rely on overtly abusive language, which makes it difficult for traditional AI models to detect. Further, it involves multiple parties: an instigator, an intermediary, and a target. Garg and his team set out to bridge this gap by focusing their research on how AI can detect this nuanced form of harmful speech.

Through a study of three social media platforms – X (formerly Twitter), Gab, and WhatsApp – Garg and his collaborators identified three primary types of inciting speech:

  • Identity-based incitement: Targeting a person’s identity, beliefs, or affiliations to provoke action against them
  • Imputed misdeeds: Highlighting alleged past misdeeds of a group or individual to justify discrimination or retaliation
  • Exhortation: Direct calls for action, such as boycotts, violence, or discrimination against a group or individual
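The study's own detection model is not reproduced here, but as a rough illustration of how posts might be routed to these three categories, an off-the-shelf zero-shot classifier can be pressed into service. Everything in the sketch below is an assumption for demonstration purposes – the model checkpoint, the wording of the labels, and the confidence threshold are not details from the paper.

```python
# Illustrative sketch only: this is NOT the model from Garg's study.
# It assigns a post to one of the three inciting-speech categories above
# using an off-the-shelf zero-shot classifier; the checkpoint name and
# confidence threshold are placeholder assumptions.
from transformers import pipeline

CATEGORIES = [
    "identity-based incitement",
    "imputed misdeeds",
    "exhortation to action",
    "none of the above",
]

classifier = pipeline("zero-shot-classification",
                      model="facebook/bart-large-mnli")

def label_post(text: str, threshold: float = 0.5) -> str:
    """Return the highest-scoring category, or fall back to 'none of the above'."""
    result = classifier(text, candidate_labels=CATEGORIES)
    top_label, top_score = result["labels"][0], result["scores"][0]
    return top_label if top_score >= threshold else "none of the above"

if __name__ == "__main__":
    # Harmless placeholder text; real inputs would be social media posts.
    print(label_post("Everyone should stop doing business with that group."))
```

A purpose-built detector like the one described in the study would typically be trained on annotated examples of each category rather than relying on a generic model, which is what makes it possible to pick up the subtle, non-abusive phrasing that generic classifiers tend to miss.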

These forms of inciting speech contribute to pervasive cancel culture and exclusion built on controversial narratives. The speed at which such messages spread on social media underscores the need for AI-driven solutions that intercept them before they lead to real-world harm.

Religious communities were a primary focus of the study, but Garg said inciting speech also affects other marginalized groups, including LGBTQ+ individuals and people of color.

“My whole research focus is about using AI for social good,” Garg said. “We want to create safer online environments where people aren’t bullied or harassed because of their identity.”

The study found that current AI models fall short at detecting inciting speech; Garg's model, by contrast, achieved an 86 percent efficacy rate on WhatsApp data. The goal now is to refine and deploy these models in real-world settings to help platforms like Meta and Google moderate harmful content more effectively.

Garg also emphasized the importance of digital literacy in combating inciting speech. “We should be critical of the content we consume daily,” he said. “Many people, especially the elderly, forward messages without questioning their intent or bias. Education and awareness are key.”

In the future, Garg hopes to expand his research to include different types of incitement across various platforms and adapt AI models to detect nuanced language variations. His work aligns closely with the Virginia Tech Innovation Campus’ mission of fostering inclusive and socially responsible technological advancements.

This research was a collaborative effort with Georgia Tech and North Carolina State University. Garg is also eager to involve students in this work. “Students are deeply immersed in social media, making them well-positioned to understand and address these challenges. It’s a triple win – for students, educators, and the university.”

Original study DOI: 10.1109/TCSS.2024.3504357
