Rigorous mathematical approaches could help protect military personnel

Virginia Tech researchers Doug Bowman and Brendan David-John are part of a multi-institution Defense Advanced Research Projects Agency (DARPA) program to improve the security of the U.S. military against “cognitive attacks” in mixed-reality environments.
“Cognitive attacks mean that an adversary might do something in the real world to make the system misbehave or to confuse the operator,” said Bowman, director of the Center for Human-Computer Interaction, part of Virginia Tech’s Institute for Creativity, Arts, and Technology. “The experiments will tell us how these cognitive attacks can affect mixed reality users. Once we know that, we can design the system to mitigate those effects.”
Their work is a collaboration on Intrinsic Cognitive Security (ICS), a three-year, multi-million-dollar DARPA program that includes more than four teams across the country. The team is led by SRI International, a California-based independent nonprofit research organization, in collaboration with Virginia Tech, New York University, and the University of California, Santa Barbara. The team’s total funding is $8.1 million, with more than $1 million dedicated to Bowman and David-John’s work developing cognitive models and mathematical guarantees to protect military personnel.
The military and mixed reality
Mixed reality integrates real-world and virtual elements, allowing users to interact with objects in both environments in real time. According to the researchers, branches of the military have expanded their use of the technology in recent years to enhance situational and tactical awareness as well as target acquisition. Soldiers can use the technology to see around corners, look through walls, or communicate with other teams in real time.
“Military personnel in the U.S. armed forces will increasingly rely on immersive technologies,” said Bowman, who is also a professor of computer science. “While handheld devices like computers or tablets can be used, they have the drawback of reduced situational awareness. Seamlessly overlaying information onto the real world, however, can significantly improve an operator’s ability to make decisions.”
Bowman researches augmented and virtual reality as well as collaboration across time, space, and reality. David-John’s research specializes in gaze-based interactions in virtual reality as well as emerging privacy and security concerns in extended reality.
“Extended reality is a broad term that includes virtual, augmented, and mixed reality technologies,” said David-John, assistant professor of computer science. “Virginia Tech has a strong reputation for its work in extended reality, which is why SRI International initially approached us.”

Research team
Bowman and David-John are supervising a team of graduate and undergraduate students who contribute to various aspects of the project, from cognitive modeling to designing augmented reality and virtual reality environments. Team members include:
- Anish Narkar, Ph.D. student studying computer science and applications
- Jan Michalak, graduate student studying computer science and applications
- Ayush Roy, Ph.D. student studying computer science and applications
- Matthew Gallagher, senior studying electrical and computer engineering
SRI’s project explores cognitive attacks across five categories: physiology, perception, attention, status, and confidence. Bowman and David-John’s research addresses two of the five: status and attention.
Status
Status attacks involve an adversary with real-time access to the operator’s data who seeks to determine whether the operator is in a vulnerable state.
The researchers are investigating how bad actors could exploit user data, including eye-tracking information, position, and other biometric data.
“Eye tracking can reveal when a user is distracted or about to select an object,” David-John said. “Along with revealing this valuable information, an attack could induce motion sickness by increasing latency or causing lag within the headset.”
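As a rough sketch of the inference at the heart of a status attack, the example below flags windows of gaze data whose scatter suggests a distracted user. It is illustrative only, not code from the project: the GazeSample fields, the dispersion measure, and the 0.15 threshold are all assumptions made for this example.

```python
import math
from dataclasses import dataclass

@dataclass
class GazeSample:
    t: float  # timestamp in seconds (hypothetical field)
    x: float  # normalized horizontal gaze position, 0.0-1.0
    y: float  # normalized vertical gaze position, 0.0-1.0

def gaze_dispersion(samples):
    """Root-mean-square distance of gaze points from their centroid.

    Tightly clustered fixations give a low value; scattered,
    searching gaze gives a high one.
    """
    cx = sum(s.x for s in samples) / len(samples)
    cy = sum(s.y for s in samples) / len(samples)
    return math.sqrt(
        sum((s.x - cx) ** 2 + (s.y - cy) ** 2 for s in samples) / len(samples)
    )

def looks_distracted(window, threshold=0.15):
    """Flag a gaze window whose dispersion exceeds an assumed threshold,
    a crude stand-in for the 'vulnerable state' an attacker might infer."""
    return gaze_dispersion(window) > threshold

# Example: a scattered sweep across the display reads as distraction.
window = [GazeSample(t=i * 0.02, x=(0.1 * i) % 1.0, y=0.5) for i in range(10)]
print(looks_distracted(window))  # True for this scattered pattern
```

An attacker with this kind of real-time signal could, as David-John notes, time an induced lag or other disruption to the moment the operator is least able to cope with it.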

Attention
An attention attack involves adversaries trying to hinder a user’s mission by introducing confusing or distracting information. Examples include unnecessary alarms or highlighting information irrelevant to the mission.
The researchers are examining this by simulating an environment where participants identify and match virtual targets while varying the intensity of visual and auditory distractions. They hope to better understand the impacts of factors such as the density and size of visual distractions, the frequency of visual flashes, and auditory elements like sound volume and pitch.
“The hypothesis is that increased visual and auditory distractions will likely extend the time it takes to complete the target-matching task,” Bowman said.
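To give a sense of how such a study’s conditions might be enumerated, the sketch below builds a full factorial set of distraction settings. The factor names and levels are invented for illustration and do not reflect the team’s actual experimental design.

```python
from itertools import product

# Illustrative factor levels only; assumed values, not the study's real design.
DISTRACTOR_DENSITY = [0, 5, 10]       # virtual distractors per scene
FLASH_FREQUENCY_HZ = [0.0, 1.0, 4.0]  # visual flash rate
SOUND_VOLUME_DB = [0, 40, 70]         # loudness of auditory distractions

def build_conditions():
    """Enumerate the full factorial set of distraction conditions."""
    return [
        {"density": d, "flash_hz": f, "volume_db": v}
        for d, f, v in product(DISTRACTOR_DENSITY,
                               FLASH_FREQUENCY_HZ,
                               SOUND_VOLUME_DB)
    ]

# Each trial would record how long a participant takes to match a target
# under one condition; longer times under heavier distraction would
# support the stated hypothesis.
print(len(build_conditions()))  # 27 conditions in this example
```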

Next steps
Less than a year into the project, the researchers say they are already making progress on this newly explored topic.
“This big challenge of layering formal understanding of human behavior is something that’s hardly ever been done before because it’s extremely difficult,” David-John said.
As their work continues, they believe they will better understand the interplay of these complex systems and ultimately help make them safer to use.
“The main goal of our work is to model human performance in these environments and to understand how different mixed reality and augmented reality displays impact that performance,” David-John said.