Research with artificial intelligence (AI) has exploded over the last five years. In fiscal year 2025 alone, the National Science Foundation dedicated $2 billion to research and development for AI-related projects in an effort to reinforce U.S. leadership in this space.

While AI research has become a strategic national asset, critical to innovation, global competitiveness, and security, it also comes with increased vulnerability to espionage, misuse, and ethical misconduct. 

Researchers at Virginia Tech have been awarded $300,000 by the National Science Foundation to tackle these concerns head on by building a more resilient, responsible, and secure AI research ecosystem. 

A need for more secure research 

Historically, concern for secure research centered on military technologies or commercially sensitive innovations.

“There’s now a concern with the entire research life cycle, especially with emerging technologies like AI and biotechnologies, in a way that there just wasn’t before,” said Rockwell Clancy, research scientist in the Department of Engineering Education. “The risks of stolen intellectual property can happen during data collection, while co-developing models with international collaborations, throughout evaluation and publication, or even in routine conversations about project progress.”

The need for additional systems and protection for research is driven by several factors:

  • Other countries can move research discoveries to applied technology rapidly, causing concern that the U.S. may be falling behind.
  • There's been an uptick in cases involving intellectual property diversion or illicit technology transfer.
  • The sensitive data, models, and methods used in AI research can be exploited long before the research is completed, giving other countries access to proprietary information.
  • Federal agencies acknowledge that existing standards don’t adequately address vulnerabilities.
Robots are just one of the many examples of how we use artificial intelligence in research. The team's tools will provide guidelines for researchers looking to protect data, intellectual property, and other information throughout the research life cycle. Photo by Chelsea Seeber for Virginia Tech.

Creating tools to protect

While many universities are talking about research security, few are producing evidence-based tools that faculty can use as part of their daily work.

Additionally, most national efforts are still conceptual: policy papers, high-level guidance, and broad discussions about foreign influence or data protections. Federal agencies are asking for discipline-specific, actionable training that helps researchers understand what threats look like in their own fields.

“Our team here at Virginia Tech is one of the few groups developing evidence-based scenario tools to help researchers understand and determine what threats across the AI research life cycle look like,” said Qin Zhu, associate professor of engineering education and principal investigator.

To create these tools, the team will interview and survey various stakeholders in the community of research security, including professionals doing AI research, to learn more about the security threats they’ve witnessed firsthand. From those data, the team will build fictional but realistic scenarios that mimic breaches or misconduct throughout various stages of the research life cycle. Once refined, the team plans to package these tools into an accessible digital tool kit to help universities, funding agencies, and industry partners better recognize and respond to risks.

“Our ultimate goal is to show our industry partners and funding agencies that we are knowledgeable and care deeply about secure research,” said John Talerico, assistant vice president for research security. “We want to be able to say, ‘Come sponsor your research here at Virginia Tech. Your work is safe with us.’”

Meet the research team

  • Qin Zhu, principal investigator, associate professor, Department of Engineering Education
  • Rockwell Clancy, research scientist, Department of Engineering Education
  • Lisa M. Lee, senior associate vice president for the Office of Research and Innovation and director of the Division of Scholarly Integrity and Research Compliance
  • John Talerico, assistant vice president for research security and chief research officer
(From left) John Talerico, Lisa Lee, Qin Zhu, and Rocky Clancy are partnering together to improve research security at Virginia Tech and beyond. Photo by Chelsea Seeber for Virginia Tech.