Two Ph.D. students in the Virginia Tech Department of Computer Science have been named Amazon Fellows supported by the Amazon-Virginia Tech Initiative for Efficient and Robust Machine Learning for the 2025-26 academic year.

The announcement was made during a recent Amazon-Virginia Tech artificial intelligence (AI) workshop held at Academic Building One in Alexandria.

Launched in 2022 to advance research and innovation in AI and machine learning (ML), the initiative is funded by Amazon, housed in the College of Engineering, and directed by researchers at the Sanghani Center for Artificial Intelligence and Data Analytics.

Amazon Fellowships are awarded to Virginia Tech doctoral students pursuing educational and research experiences in AI-related fields who are recognized for their scholarly achievements and potential for future accomplishments. In addition to receiving funding for their work, fellows have an opportunity to interview for an Amazon internship intended to provide them with a greater understanding of industry and use-inspired research.

Amazon Fellows

The new fellows are Yusuf Dalva and Haohui Wang.

Dalva's research aims to make visual generative models more interpretable and controllable, granting users greater flexibility in creating and composing visual content. His work has spanned a variety of challenges, including representation learning in diffusion models, image editing, and multi-concept personalization. Currently, his research focuses on integrating multi-modal large language models (LLMs) with image and video generation models to leverage the reasoning capabilities of language models in visual creation. Advised by Pinar Yanardag, Dalva is particularly fascinated by the emergent capabilities of these models and their potential to push the boundaries of generative creativity.

Wang's research focuses on machine learning with applications in AI for materials science. She is particularly interested in the challenges of long-tail intelligence in open-world environments, including developing theories and frameworks to better understand long-tail phenomena and to improve model generalization, especially under dynamic, complex, and heterogeneous scenarios. Advised by Dawei Zhou, Wang is enthusiastic about building reliable AI systems that can generalize to long-tail settings in real-world environments.

The initiative also supports faculty awards for projects that work toward revolutionizing the way the world uses and understands AI and machine learning.

"Our faculty and students continue to push the boundaries of what’s possible in AI and ML deployment. This year, they are exploring cutting-edge ideas such as fine-tuning for transfer across domains, privacy-preserving collaborative reasoning in multi-agent systems, and new paradigms for reinforcement learning with carefully designed reward signals,” said Naren Ramakrishnan, the Thomas L. Phillips Professor of Engineering, director of the Sanghani Center, and director of the initiative. 

“Our awardees are also advancing foundational understanding of how code is comprehended by LLMs. These projects are collaborative with multiple departments and focus on making AI more transparent, trustworthy, and applicable to real-world challenges," said Ramakrishnan, who is also a core faculty member at the Virginia Tech Institute for Advanced Computing in Alexandria.  

All awardees are selected by an advisory committee of Virginia Tech faculty and Amazon researchers.

Faculty receiving project support from the Amazon-Virginia Tech initiative are (from left) Muhammad Ali Gulzar, Chang-Tien Lu, Ming Jin, and Tu Vu. Photo by Craig Newcomb for Virginia Tech.

Faculty awards

Additionally, four faculty members received funding through the initiative for their projects. 

Muhammad Ali Gulzar, assistant professor in the Department of Computer Science, received funding for “Foundations on the Code Comprehensibility of Large Language Models.” LLMs have demonstrated strong performance in code generation. With the rise of agentic LLMs, their use is rapidly expanding into post-development tasks that require a deeper semantic understanding of code, one not strictly rooted in lexical and syntactic code features. While popular LLM benchmarks measure the accuracy of LLMs’ code generation, the extent to which LLMs truly understand code remains largely unevaluated. This project seeks to design a scalable, quantitative, and automated method for assessing how well an LLM understands code and the impact of this understanding on post-development tasks. The goal is to encourage more mindful use of LLMs in coding tasks and, in the long run, provide an actionable basis for prioritizing training data for LLM fine-tuning.

Ming Jin, assistant professor in the Bradley Department of Electrical and Computer Engineering, received funding for “Enhancing Foundation Model Reasoning through Reinforcement Learning with Novel Reward Design.” Current efforts to enhance foundation model reasoning face limitations such as high compute costs; reward hacking and stability issues with learned reward models; difficulty balancing reasoning quality and efficiency; and challenges in multimodal contexts. Improving the complex reasoning of foundation models reliably and efficiently is critical for Amazon's AI ecosystem. Previous research has shown that producing both critiques and actionable hints yields a richer signal that improves optimization efficiency and effectiveness. This proposal builds on that foundation by designing novel reward signals that guide a model's reasoning process, transforming it into a more autonomous agent capable of tackling complex, multi-step problems.

Chang-Tien Lu, professor in the Department of Computer Science and associate director of the Sanghani Center, received funding for “Privacy-Preserving Collaborative Reasoning in Multi-Agent Systems.” Multi-agent systems enhance performance by combining a weaker but locally accessible model with a more powerful yet proprietary black-box remote model. This combination exposes local data to a remote agent, raising concerns about information leakage, especially in sensitive domains such as health care information, financial records, and e-commerce activity. For virtual assistants like Amazon Alexa and smart home systems, which frequently process sensitive user data, robust local data protection is also crucial for preserving user privacy and trust. The goal of this research is to design a collaborative reasoning mechanism that thoroughly protects sensitive local data before it is exposed to black-box model inference.

Tu Vu, assistant professor in the Department of Computer Science, received funding for “Efficient Model Development through Fine-tuning Transfer.” Large Language Models are continually evolving, with newer versions released to improve pretraining quality, architecture, or alignment. Yet each new version of the base model typically demands repeated and computationally expensive alignment procedures. This inefficiency extends to domain- or language-specific models, where fine-tuning must be redone from scratch with every base model upgrade. Transferring fine-tuning updates (i.e., weight differences or “diff vectors”) across model versions offers a compelling alternative: enabling model updates without full retraining. This proposed approach promises to significantly reduce training costs while maintaining competitive performance, making it a viable strategy for sustainable LLM development.
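The diff-vector idea described above can be illustrated with a minimal sketch. This is a hypothetical toy example, not code from the project: tiny NumPy arrays stand in for full model weight tensors, and the values are invented for illustration.

```python
import numpy as np

# Toy parameter vectors standing in for full model weights
# (hypothetical values, for illustration only).
old_base = np.array([0.10, -0.20, 0.30])       # previous pretrained base model
old_finetuned = np.array([0.15, -0.10, 0.25])  # previous model after fine-tuning
new_base = np.array([0.12, -0.18, 0.33])       # upgraded pretrained base model

# The "diff vector" captures what fine-tuning changed in the old model.
diff_vector = old_finetuned - old_base

# Transfer: apply the same update to the new base model, approximating
# the fine-tuned new model without a full fine-tuning run.
new_finetuned_approx = new_base + diff_vector

print(new_finetuned_approx)
```

In practice the same elementwise addition would be applied per weight tensor across the model's parameters; the research question is when and how well this approximation preserves fine-tuned performance.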

About the workshop

The invitation-only AI workshop was held in October at Academic Building One in Alexandria and included remarks by Lance Collins, vice president of the greater Washington, D.C., area; Ramakrishnan; and Anand Rathi, center liaison and director, software development, artificial general intelligence, at Amazon.

"We are pleased to welcome our Amazon collaborators to Virginia Tech’s new Academic Building One in Alexandria for our annual gathering,” Ramakrishnan said. “It is a great opportunity to connect Virginia Tech faculty in the space of AI with Amazon researchers and foster future collaborations.”  

“Our collaboration with Virginia Tech represents a strategic investment in developing the next generation of AI talent and innovation,” said Rathi. “The research emerging from this partnership continues to advance our understanding of responsible and efficient AI systems while preparing students for the complex challenges of tomorrow."

Additionally, Chalapathi Choppa, senior manager, security engineer, Amazon, discussed Amazon Artificial General Intelligence and the importance of responsible AI, and two Virginia Tech faculty members with Amazon-sponsored research projects gave lightning talks. They were:

  • Ruoxi Jia, assistant professor, Bradley Department of Electrical and Computer Engineering, “A Compositional Framework for Proactive AI Safety”
  • Hongliang Xin, assistant professor, Department of Chemical Engineering, “Next-Generation Catalysts for Fischer–Tropsch Synthesis”

Previous events related to the initiative have been held at the Virginia Tech Research Center — Arlington and on the university's Blacksburg campus.

Hongliang Xin (at left) and Ruoxi Jia gave lightning talks. Photo by Craig Newcomb for Virginia Tech.