Advanced Research Computing offers free access to generative AI models
The new service eliminates barriers to entry for all who are interested in exploring how artificial intelligence can help move research forward.

Generative artificial intelligence (AI) is playing a greater role in research and education at Virginia Tech. From using generative AI as a tool for urban planning, to deepening our understanding of how generative AI can enrich the learning process, the Virginia Tech community is exploring the potential benefits of this powerful tool.
Large language models (LLMs) are the backbone of generative AI and can be useful for anything from generating code to discovering statistical trends in large data sets to creating visual representations of data.
However, using LLMs in higher education, particularly for research, can come with some challenges and risks — especially for those who are new to using generative AI, or who are working with particularly complex or sensitive data, said Alberto Cano, associate vice president for research computing.
First, it’s important to find the right LLM for the data used and goals of the effort. For example, the model that works best to predict how different molecules interact with one another won’t be the same one that’s ideal for assessing sentiment in written language, Cano said.
Next, there are the costs. Commercially available LLMs such as ChatGPT can charge several hundred dollars per month for access to premium-level services, he said.
Finally, there is the complicated relationship that web-based AI services have with user data. Even if the proprietor has the best intentions to provide a secure platform, using data with any externally managed system exposes it to risk of compromise, he said.
Part of the deal when using external, web-based services is that they will use your data to train their models. For researchers who need to keep their data under wraps until it is published, this could be a concern — and if a data set includes personally identifiable information or any other legally protected data, sharing it with a cloud-based LLM risks violating university policy and the law, he said.
“While AI has the potential to significantly accelerate research, enhance teaching and learning, or even streamline business processes, users often face real barriers when trying to integrate it into their work, especially those who aren't deeply familiar with the inner workings of large language models,” Cano said.
“These challenges can include uncertainty about where to start, concerns about accuracy or reproducibility, and the steep learning curve associated with new tools. As a result, some may miss out on opportunities where AI could truly enhance their productivity and innovation.”
New program helps researchers use AI tools
Advanced Research Computing (ARC), a unit within the Division of Information Technology, is helping researchers overcome these challenges through its new AI as a Service program.
Through AI as a Service, users can connect to an on-site LLM to leverage AI in the way that best meets their needs. These LLMs are containerized, meaning that all of the code, software, and other necessary components are bundled together so the LLM can run on Advanced Research Computing’s infrastructure. This allows data to remain completely isolated on Virginia Tech’s systems, ensuring that sensitive or protected information remains secure.
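Many self-hosted LLM servers expose an OpenAI-compatible HTTP interface, so a researcher's script can talk to a model running on the cluster without any data leaving it. As a rough sketch only (the endpoint URL, port, and model name below are illustrative assumptions, not ARC's actual configuration; see arc.vt.edu for the real access details), a request to a containerized local model might be assembled like this:

```python
import json

# Hypothetical local endpoint and model name -- placeholders, not ARC's
# actual service configuration. Because the server runs on university
# infrastructure, the prompt and any data in it stay on-site.
LOCAL_ENDPOINT = "http://localhost:8000/v1/chat/completions"
MODEL_NAME = "llama-3-70b-instruct"

def build_chat_request(prompt: str, model: str = MODEL_NAME) -> dict:
    """Assemble an OpenAI-compatible chat payload for a locally hosted LLM."""
    return {
        "model": model,
        "messages": [
            {"role": "system", "content": "You are a research assistant."},
            {"role": "user", "content": prompt},
        ],
        "temperature": 0.2,  # low temperature for more reproducible answers
    }

payload = build_chat_request("Summarize the trends in this data set.")
body = json.dumps(payload)  # this JSON is what would be POSTed to LOCAL_ENDPOINT
```

The same payload shape works with most OpenAI-compatible local servers, which is one reason containerized deployments can swap in different models without changing user-side code.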
“A common concern among those using large language models is data security, especially when working with sensitive or proprietary information. ARC’s containerized LLM solutions address this by providing a secure, isolated environment where no data is shared externally or used to further train the model. Once your session ends, your data is completely deleted, giving researchers peace of mind and full control over their information,” Cano said.
The program offers several LLMs to choose from, and computational scientists can provide individualized consulting to help users select the most suitable model based on their data and research requirements, as well as to configure the LLM to execute the calculations they need to perform.
Users can then manage their work through Open OnDemand, an easy-to-use web interface that allows researchers to connect to the university’s computing clusters from anywhere.
Because Advanced Research Computing already has the infrastructure to run these LLMs, the AI as a Service program is free to Virginia Tech faculty, staff, and students working with a faculty member via the existing research allocations on the clusters, as well as to qualified co-investigators from other institutions who are working with Virginia Tech researchers on a project.
“This service helps researchers save valuable time and reduce expenses, enabling them to focus more funding and effort on advancing their scientific goals. In today’s increasingly competitive funding landscape, maximizing efficiency can make a critical difference in moving research forward,” Cano said.
Advanced Research Computing will install 56 new NVIDIA H200 GPUs in late May to increase the availability of resources to run LLMs and other AI software, he said.
For more information on how to leverage this new service, visit arc.vt.edu or request a consultation with Advanced Research Computing’s computational science team.