Two student Amazon Fellows and five faculty-led projects supported by the Amazon-Virginia Tech Initiative for Efficient and Robust Machine Learning for the 2024-25 academic year were announced at a retreat held on the Blacksburg campus.

The initiative, launched in 2022 to advance research and innovation in artificial intelligence (AI) and machine learning, is funded by Amazon, housed in the College of Engineering, and directed by researchers at the Sanghani Center for Artificial Intelligence and Data Analytics on Virginia Tech’s Blacksburg campus and at the Virginia Tech Innovation Campus in Alexandria. 

Fellowships are awarded to Virginia Tech doctoral students recognized for their scholarly achievements and potential for future accomplishments. Fellows must be enrolled in their second, third, or fourth year and actively pursuing educational and research experiences in AI-focused fields. In addition to funding their work, the fellowship includes an opportunity to interview for an Amazon internship intended to give them a greater understanding of industry and use-inspired research.

The 2024-25 Amazon Fellows are:

  • Md Hasan Shahriar, Ph.D. student in the Department of Computer Science. His research interests lie in machine learning and cybersecurity and address the emerging security challenges in cyber-physical systems, particularly connected and autonomous vehicles. He is passionate about designing advanced intrusion detection systems and developing robust multimodal fusion-based perception systems. He is also enthusiastic about studying machine learning vulnerabilities and creating resilient solutions to enhance the safety and reliability of autonomous systems. Shahriar is advised by Wenjing Lou.
  • Bilgehan Sel, Ph.D. student in the Bradley Department of Electrical and Computer Engineering. His research focuses on decision-making with foundation models, reinforcement learning, and recommender systems. Currently, he is exploring the application of multimodal large language models (LLMs) in planning and recommendation systems. He is particularly fascinated by the development of models that exhibit more human-like reasoning capabilities. Sel, also a student at the Sanghani Center, is advised by Ming Jin.

The initiative's faculty awards support sponsored research in machine learning that works toward revolutionizing the way the world uses and understands this field of modern technology.

Citing the projects selected for funding this year, Naren Ramakrishnan, the Thomas L. Phillips Professor of Engineering, director of the Sanghani Center, and director of the Amazon-Virginia Tech initiative, said, “Not surprisingly, many of the awards pertain to large language models. Our awardees this year are studying how to make LLM applications more efficient, safe, and expressive. One of our projects also involves a collaboration between the Department of Electrical and Computer Engineering and the Department of Mathematics.”

The five faculty members and their projects are:

  • Muhammad Ali Gulzar, assistant professor in the Department of Computer Science, “Privacy-Preserving Semantic Cache Towards Efficient LLM-based Services”: LLMs such as ChatGPT and Llama-3 have revolutionized natural language processing and search engine dynamics. However, they incur exceptionally high computational costs, with inference demanding billions of floating-point operations. This project proposes a data-driven research plan that investigates the querying patterns that emerge when users interact with LLMs and develops a multi-tiered, privacy-preserving caching technique that can dramatically reduce the cost of LLM inference. The objective of the project is twofold: to identify the characteristics of similar queries, habitual query patterns, and the true user intent behind similar queries, and to design a multi-tiered cache for LLM services that caters to different types of similar queries. The final goal is to respond to a user’s semantically similar query from a local cache rather than re-querying the LLM, which addresses privacy concerns, supports scalability, and reduces costs, service-provider load, and environmental impact. (A minimal sketch of the core caching idea appears after this list.)
  • Ruoxi Jia, assistant professor in the Bradley Department of Electrical and Computer Engineering and core faculty at the Sanghani Center for Artificial Intelligence and Data Analytics, “A Framework for Durable Safety Assurance in Foundation Models”: Foundation models are designed to provide a versatile base for a myriad of applications, enabling their adaptation to specific tasks with minimal additional data. The wide adoption of these models presents a significant challenge: ensuring safety for an infinite variety of models that build upon a foundational base. This project aims to address this issue by integrating end-to-end safety measures into the model's lifecycle: modifying pre-trained models to eliminate harmful content inadvertently absorbed during the pre-training phase, encoding safety constraints into the fine-tuning process, and implementing continuous safety monitoring during the model's ultimate deployment.
  • Xuan Wang, assistant professor in the Department of Computer Science and core faculty at the Sanghani Center for Artificial Intelligence and Data Analytics, “Reasoning Over Long Context with Large Language Models”: Tasks for reasoning over long context include multi-hop question answering, multi-document summarization, and document-level or cross-document information extraction. LLM systems face significant challenges in this area due to the complexities of bridge vs. comparison questions and of sequential vs. parallel reasoning, which call for novel, fine-grained prompting methods to enhance the performance of multi-hop question answering. While it is difficult for an LLM to retrieve and focus on the correct evidence among massive documents in external knowledge bases such as Wikipedia, new LLM architectures with longer input windows provide opportunities to develop reasoning models for long-context input data. This project will undertake a systematic exploration of reasoning over long context with LLMs from two perspectives: enhancing complex reasoning prompting and evidence understanding in closed-source LLMs such as GPT, and fine-tuning an end-to-end reasoning framework with open-source long-context LLMs such as Mamba.
  • Wenjie Xiong, assistant professor, Bradley Department of Electrical and Computer Engineering, “Efficient Distributed Training with Coding,” with co-principal investigators Gretchen Matthews, professor in the Department of Mathematics and director of the Commonwealth Cyber Initiative in Southwest Virginia, and Pedro Soto, postdoctoral associate in the Department of Mathematics: Recent machine learning models have grown dramatically, with the latest open-source language models having up to 180 billion parameters. On large data sets, training is distributed across a number of computing nodes instead of a single node, allowing models to be trained on smaller, cheaper nodes at a lower cost. This project considers a distributed learning framework in which a parameter server (master) coordinates a distributed learning algorithm by communicating local parameter updates across a network of machines (workers), each holding a local partition of the data set. Such training brings new challenges, including how to deal with stragglers and unreliable nodes and how to provide efficient communication between nodes to improve the accuracy of the model during distributed training. If these systems are not properly designed, communication can become the performance bottleneck, creating a need to compress the messages between compute nodes. This project will use erasure coding to solve a ubiquitous problem in distributed computing, namely mitigating stragglers and heterogeneous performance in a distributed network while maintaining low encoding, decoding, and communication complexity. The goal is to deliver high-performance-computing-style distributed and parallel algorithms as well as to create open-source frameworks and software that can perform these tasks. (A toy gradient-coding example appears after this list.)
  • Dawei Zhou, assistant professor, Department of Computer Science, and core faculty at the Sanghani Center for Artificial Intelligence and Data Analytics, “Revolutionizing Recommender Systems with Large Language Models: A Dual Approach of Re-ranking and Black-box Prompt Optimization,” with co-principal investigator Bo Ji, associate professor, Department of Computer Science and College of Engineering Faculty Fellow: LLMs have demonstrated exceptional proficiency in understanding and processing text, showcasing impressive reasoning abilities. However, incorporating these powerful models into retrieval and recommendation systems presents significant challenges, particularly when the system needs to match user queries with specific items or products rather than simply analyzing text descriptions. This complexity arises from the need to bridge the gap between the LLMs' text-based capabilities and the structured nature of item catalogs or product databases, requiring innovative approaches to leverage the full potential of LLMs in practical search and recommendation applications. This project aims to investigate how to overcome these challenges and harness the power of LLMs for large-scale retrieval and recommendation tasks. To achieve this goal, the researchers will design a novel re-rank aggregation framework that uses LLMs in the final re-ranking stage (a generic re-ranking sketch appears after this list). Furthermore, the project will address enabling the LLM-powered recommender system to provide transparent and reliable responses in online environments.
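
For readers who want a concrete picture of the caching idea behind Gulzar’s project, the Python sketch below shows a bare-bones semantic cache: a new query is answered from the cache when its embedding is close enough to one seen before, and the LLM is called only on a miss. This is a minimal illustration, not the project’s multi-tiered, privacy-preserving design; the `embed` placeholder, the 0.9 similarity threshold, and the `call_llm` hook are all assumptions made for the example.

```python
# Minimal semantic-cache sketch (illustrative only; not the project's design).
import numpy as np

def embed(text: str) -> np.ndarray:
    """Placeholder embedding; a real cache would use a sentence encoder."""
    rng = np.random.default_rng(abs(hash(text)) % (2**32))  # stable within a run
    v = rng.standard_normal(64)
    return v / np.linalg.norm(v)  # unit-normalize for cosine similarity

class SemanticCache:
    def __init__(self, threshold: float = 0.9):  # 0.9 is an assumed threshold
        self.threshold = threshold
        self.entries: list[tuple[np.ndarray, str]] = []  # (embedding, answer)

    def lookup(self, query: str) -> str | None:
        q = embed(query)
        for vec, answer in self.entries:
            if float(q @ vec) >= self.threshold:  # cosine sim of unit vectors
                return answer  # a semantically similar query was seen before
        return None

    def store(self, query: str, answer: str) -> None:
        self.entries.append((embed(query), answer))

def answer(query: str, cache: SemanticCache, call_llm) -> str:
    cached = cache.lookup(query)
    if cached is not None:
        return cached          # served locally: no LLM cost, query stays local
    result = call_llm(query)   # fall back to the (expensive) LLM on a miss
    cache.store(query, result)
    return result
```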
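The erasure-coding idea in the Xiong, Matthews, and Soto project can likewise be illustrated with a toy gradient-coding scheme, shown below using a classic three-worker construction: each worker sends one coded combination of partition gradients, and the master recovers the full gradient from any two workers, so a single straggler can be ignored. The specific codes the project will develop are its own contribution; this sketch is only a well-known textbook example.

```python
# Toy gradient-coding sketch: tolerate one straggler among three workers.
import numpy as np

rng = np.random.default_rng(0)
g1, g2, g3 = (rng.standard_normal(4) for _ in range(3))  # per-partition gradients
full = g1 + g2 + g3                                      # what the master needs

# Each worker sends a single coded combination of two partitions' gradients.
w1 = 0.5 * g1 + g2
w2 = g2 - g3
w3 = 0.5 * g1 + g3

# The master recovers the full gradient from ANY two of the three workers,
# so the slowest worker's message can simply be dropped.
assert np.allclose(w1 + w3, full)       # worker 2 straggles
assert np.allclose(2 * w1 - w2, full)   # worker 3 straggles
assert np.allclose(w2 + 2 * w3, full)   # worker 1 straggles
```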
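Finally, the re-ranking stage that Zhou and Ji’s project targets follows a generic pattern that can be sketched in a few lines: a first-stage retriever supplies candidate items, and a language model scores and re-orders them. The `llm_score` callback here is a hypothetical stand-in for an actual LLM call, not the project’s re-rank aggregation framework.

```python
# Generic LLM re-ranking sketch (a common pattern, not the project's framework).
from typing import Callable

def rerank(query: str,
           candidates: list[str],
           llm_score: Callable[[str, str], float],
           top_k: int = 3) -> list[str]:
    """First-stage retrieval supplies `candidates`; the LLM re-orders them."""
    scored = [(llm_score(query, item), item) for item in candidates]
    scored.sort(reverse=True)  # highest relevance score first
    return [item for _, item in scored[:top_k]]

# Usage with a trivial stand-in scorer (a real system would prompt an LLM
# to judge how well each item matches the user's query):
demo_score = lambda q, item: sum(w in item.lower() for w in q.lower().split())
print(rerank("wireless noise cancelling headphones",
             ["USB-C cable", "Wireless headphones with noise cancelling",
              "Bluetooth speaker", "Noise cancelling earbuds"],
             demo_score, top_k=2))
```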
Amazon partnership awardees
(From left) Anand Rathi, center liaison and director of software development, Artificial General Intelligence, Amazon; Md Hasan Shahriar, Ph.D. student and Amazon Fellow, Department of Computer Science, Virginia Tech; Bilgehan Sel, Ph.D. student and Amazon Fellow, Department of Electrical and Computer Engineering, Virginia Tech; and Christine Julien, professor and head of the Department of Computer Science. Virginia Tech photo

Applications for a fellowship or a faculty-funded project are submitted through an open call across Virginia Tech. This year’s application process also included a phase in which Amazon scientists provided feedback on submitted abstracts, allowing faculty to refine their project ideas before submitting final proposals.

Awardees are selected by an advisory committee of Virginia Tech faculty and Amazon researchers.

The retreat

Attendees were welcomed to the day-long retreat by Ramakrishnan and Anand Rathi, center liaison, director, software development, Artificial General Intelligence, Amazon.

“This year’s fall research symposium on Virginia Tech’s Blacksburg campus was truly energizing," Rathi said. "The new faculty and Ph.D. student awardees are pursuing novel and challenging research directions, advancing the state-of-the-art in responsible and efficient generative AI.”

Also participating in the event from Amazon were Chao Wang, director, applied science, Artificial General Intelligence, who gave the keynote address; Rajiv Dhawan and Kathleen Allen, principals, Academic Partnerships; Ameya Agaskar, senior research scientist, Artificial General Intelligence; Alix Delgato, enterprise account manager, higher education, Amazon Web Services; and Ankit Battawar, solutions architect, Amazon Web Services. 

Previous Amazon–Virginia Tech events have been held in northern Virginia at the Virginia Tech Research Center — Arlington, "but because this is a cross-campus initiative, we wanted to hold this year’s event in Blacksburg, giving more students from diverse academic departments within the university an opportunity to attend,” Ramakrishnan said.

In addition to research presentations from the two new Fellows and five faculty awardees, the retreat included a panel discussion on how to write a successful Amazon proposal; a panel exploring graduate student opportunities for research and internships; and a student poster session and networking hour where 18 Virginia Tech graduate students had the opportunity to highlight their work and talk to Virginia Tech faculty, Amazon representatives, and each other.

An extended agenda for the Amazon visitors included demos at the Drone Park, a tour of the new Data Visualization Lab, and a visit to studios at the Institute for Creativity, Arts, and Technology; the Cube; and the Moss Arts Center.

“We are very pleased to host this annual retreat for what we view as Virginia Tech’s flagship program with Amazon in Blacksburg and at the VT Innovation Campus, which will officially open in January 2025,” said Ramakrishnan. “We are looking forward to a continuing and growing partnership with Amazon that develops interesting project-based research experiences for our students and faculty at both locations.”
