Thousands of well-meaning social media users have been sharing photos that supposedly depict the aftermath of Hurricane Helene’s destruction but have turned out to be fake images generated by artificial intelligence (AI). These fake images can erode trust in legitimate sources of news during a crisis or even serve as a vector for cyberattacks.

Communication media expert Cayce Myers explained the problems these fake images pose, while digital literacy expert Julia Feerrar shared steps for determining whether a compelling image is AI-generated or taken out of context from another source.

Cayce Myers on challenges of AI-generated fakes

“The Hurricane Helene photos demonstrate the current challenges with disinformation and social media. AI technology is providing greater ability to create realistic images that are deceptive,” Myers said. “The hurricane images have certainly had an impact on the public, and their spread and believability demonstrate how we now live in a new technological and communication reality in the age of artificial intelligence.

“The problem is these fake images influence people’s perception of reality, and social media fuels the spread of this disinformation. The net effect can be harmful to society, especially when dealing with important issues like democracy and public health,” Myers said.

Julia Feerrar on detecting fake or out-of-context images

“Here are some steps for vetting the images you see in your social media feeds,” Feerrar said.

  • “Take a moment to pause when you see an image or other media that sparks a big response for you. The emotional aspect of information-sharing during and after a disaster like Hurricane Helene is so challenging, and with AI-generated content in the mix it can be especially hard to sift through information to help us understand the situation and take action to help.” 
  • “Open up your search engine of choice to find more context. Describing the image and adding the phrase ‘fact check’ to your search is often the fastest way to get more information and debunk misleading content.” 
  • “Use reverse image search tools like TinEye or Google Lens to see where else a certain image has been shared online. This strategy can help you catch AI-generated images, as well as older images from other events that may have been reshared out of context.” 
  • “Look out for images with strange lighting, hyper-real or overly smooth surfaces, or other details that feel ‘off.’ Inconsistencies in hands and feet, in particular, are a red flag for AI-generated content, though we can’t count on these cues alone, as AI tools continue to improve.” 
  • “Vet social media posts that ask you to take action, such as donating through a link. It can often be safer to go directly to a given organization’s donation page. Multiple national and local organizations, as well as universities, have curated lists of places to donate and help further.”
  • Check out further resources on fact-checking and evaluating information in this toolkit from the University Libraries at Virginia Tech: https://guides.lib.vt.edu/dltoolkit/evaluation

About Myers
Cayce Myers is a professor of public relations and director of graduate studies at the School of Communication at Virginia Tech. His work focuses on media history, political communication, and laws that affect public relations practice. He is the author of “Public Relations History: Theory, Practice, and Profession” and “Money in Politics: Campaign Fundraising in the 2020 Presidential Election.”

About Feerrar 
Julia Feerrar is a librarian and digital literacy educator. She is an associate professor at the University Libraries at Virginia Tech and head of the Digital Literacy Initiatives. Her interests include digital well-being, combating mis/disinformation, and digital citizenship.

Schedule an interview    
To schedule interviews with these experts, contact Mike Allen in the media relations office at mike.allen@vt.edu or 540-400-1700.  
