With national elections looming in the United States, concerns about misinformation are sharper than ever, and advances in artificial intelligence (AI) have made distinguishing genuine news sites from fake ones even more challenging. AI programs, especially Large Language Models (LLMs), which are trained on vast data sets to produce fluent text, have automated many aspects of fake news generation. The new instant video generator Sora, which produces highly detailed, Hollywood-quality clips, further raises concerns about the easy spread of fake footage.

Virginia Tech experts explore three different facets of the AI-fueled spread of fake news sites and the efforts to combat them.   

Walid Saad on how technology helps generate, and identify, fake news

“The ability to create websites that host fake news or fake information has been around since the inception of the Internet, and they pre-date the AI revolution,” said Walid Saad, engineering and machine learning expert at Virginia Tech. “With the advent of AI, it became easier to sift through large amounts of information and create ‘believable’ stories and articles. Specifically, LLMs made it more accessible for bad actors to generate what appears to be accurate information. This AI-assisted refinement of how the information is presented makes such fake sites more dangerous.   

“The websites keep operating as long as people keep engaging with them. If misinformation is being widely shared on social networks, the individuals behind the fake sites will be motivated to continue spreading it.

“Addressing this challenge requires collaboration between human users and technology. While LLMs have contributed to the proliferation of fake news, the technology also offers potential tools to detect and weed out misinformation. Human input — be it from readers, administrators, or other users — is indispensable. Users and news agencies bear the responsibility not to amplify or share false information, and users who report potential misinformation will help refine AI-based detection tools, speeding the identification process.

“Crucially, while these measures aim to assist users in discerning between authentic and fake news, they must align with the principles of the First Amendment and refrain from censoring free speech,” Saad said.  

Cayce Myers on what legal measures can and can’t do

“Regulating disinformation in political campaigns presents a multitude of practical and legal issues,” said Cayce Myers, communications policy expert at Virginia Tech. “Despite these challenges, there is global recognition that something needs to be done. This is vitally important given that the U.S., U.K., India, and the E.U. all hold major elections in 2024, which will likely see a host of disinformation posted across social media.

“Easy access to AI means that disinformation, specifically deepfakes, is easier to create and disseminate, and the law will have a tough time catching up. Legal accountability for deepfake content presents certain logistical problems, as many of the individuals creating the content may never be identified or caught. Some of these creators live outside the country where their content is posted, which makes it harder to hold them accountable.

“Technological developments such as Sora show why so many people are concerned about the connection between AI and disinformation. While Sora has not yet been released to the public, it demonstrates that users will increasingly face few barriers to creating high-quality AI-generated content. Generative AI video and images are so good that they cannot be distinguished from actual photographs of real events. Even watermarking and disclosures may not be enough, because they can be altered or removed. As a result, politicians, campaigns, and voters are entering a new political reality in which disinformation will be higher quality and more prolific.

“In the U.S., under the Communications Decency Act, Section 230, social media sites that host political disinformation, including deepfakes, are legally immune from responsibility. Combatting election disinformation largely falls to platforms’ self-imposed terms of use, which have drawn criticisms and allegations of unfair bias. 

“There have been calls to hold AI platforms legally responsible for disinformation, an approach that may result in internal guardrails against creating disinformation. However, AI platforms are still developing and proliferating, so a foolproof structure that prevents AI from creating disinformation is not in place and likely would be impossible to create,” Myers said.

Julia Feerrar on how you can guard against disinformation

“AI-generated and other false or misleading online content can look very much like quality content,” said Julia Feerrar, librarian and digital literacy educator at Virginia Tech. “As AI continues to evolve and improve, we need strategies for detecting fake articles, videos, and images that don’t rely solely on how the content looks.

“One of the most powerful things you can do to identify misinformation, whether AI-generated or not, is to look at where it’s coming from. Is it from a reputable, professional news organization or from a website or account you don’t recognize? If you’re even a little unsure, open a new browser tab and do a quick Google search for the name of the website. The goal is to find a description that isn’t from the original source itself — for example, many organizations will have a Wikipedia article that describes them.

“Experts refer to this process as lateral reading: searching beyond the content itself to find out more about what you’re looking at. Another way to read laterally is to see if other trusted news outlets are reporting on the same headline you’re seeing,” said Feerrar.  

More tips from Feerrar for evaluating news articles:

  • Fake news content is often designed to appeal to our emotions — it’s important to take a pause when something online sparks a big emotional reaction. 
  • Verify headlines and image content by adding ‘fact-check’ to your Google search.
  • Very generic website titles can be a red flag for AI-generated news. 
  • Some generated articles have contained error text along the lines of being ‘unable to fulfill this request’ because creating the article violated the AI tool’s usage policy. Sites with little human oversight may fail to delete these messages. 
  • Current red flags for AI-generated images include an overall hyper-real, strange appearance and unreal-looking hands and feet. 

About Saad
Walid Saad is a professor in the Bradley Department of Electrical and Computer Engineering at Virginia Tech. He is internationally recognized for his contributions to research on wireless communications (including 5G and 6G), artificial intelligence, game theory, machine learning, and cybersecurity. Saad is the Next G Wireless lead at the Virginia Tech Innovation Campus. Read more here.

About Myers
Cayce Myers is a professor of public relations and director of graduate studies at the School of Communication at Virginia Tech. His work focuses on media history, political communication, and laws that affect public relations practice. He is the author of "Public Relations History: Theory, Practice, and Profession" and "Money in Politics: Campaign Fundraising in the 2020 Presidential Election." Read more here.  

About Feerrar 
Julia Feerrar is a librarian and digital literacy educator. She is an associate professor at the University Libraries at Virginia Tech and head of the Digital Literacy Initiatives. Her interests include digital well-being, combatting mis/disinformation, and digital citizenship. Read more here.  

Schedule an interview    
To schedule interviews with these experts, contact Mike Allen in the media relations office at mike.allen@vt.edu or 540-400-1700. 

 
