Editor’s note: The image for this story was generated with artificial intelligence to illustrate how the technology can be used.

Before they change the world, revolutionary new technologies often cause a bit of panic.

The telegraph. The calculator. The internet. All engendered some societal upheaval before they were fully embraced.

Now generative artificial intelligence (AI) has joined that list, sending shockwaves through higher education since the November 2022 release of ChatGPT. 

If anything drove initial concern about generative AI chatbots among faculty, it was the idea that they were a nearly perfect cheating machine, able to write an original essay, ace a quiz, even pass a Ph.D. qualifying exam without leaving any clear signs that “AI was here.” Faculty fretted that AI would turn traditional learning at Virginia Tech upside down. 

As the university enters its first full academic year with generative AI, however, alarm may be softening into cautious optimism about the technology’s promise as a tool for teaching and research.

“The national conversation has been, ‘What is this? Is this going to replace my job?’” said Cayce Myers, professor of public relations in the School of Communication. “If we can get past that, then we can get into some more nuanced conversations about, ‘How can this enhance my work and my students’ educational experience?’”

Rethinking learning

In his spring advertising ethics course, Myers asked ChatGPT to create 10 taglines for a made-up potato chip company. “It did a really good job,” he said.

That prompted an important discussion among his students: Where do these tools fit into their discipline? What use of AI tools is OK? Students agreed that disclosure was necessary, but would it always be? “That was kind of the point of the exercise,” said Myers. “In early adoption of a technology, how do you navigate those contours?”

So far, navigating the contours of AI has mostly meant figuring out how to ward off its misuse. Faculty have rallied around a few practical ideas, such as developing a syllabus statement with clear AI guidelines, teaching students to document when and how they use AI, adding extra security to Canvas quizzes, or pointing to the Undergraduate Honor Code’s statements on plagiarism.

Yet a laser focus on catching cheaters likely misses the point. Dale Pike, associate vice provost of technology-enhanced learning, acknowledged that AI detection software has proved unreliable so far, but added that universally barring use of AI will leave students unprepared for the workforce. “Ultimately, I think what has to happen is we have to rethink student learning assessment, wholesale,” he said.

Intelligent tutoring

That rethinking, for some faculty, involves replacing gameable assignments based on memorizing and summarizing with assignments involving problem-solving, in-class creation, critical thinking, and collaboration.

Beyond that, faculty are considering how AI models such as ChatGPT can customize learning by producing dynamic case studies or offering instant feedback or follow-up questions. “It could be emergent and responsive in a way that one human never could,” said Jacob Grohs, associate professor of engineering education in the College of Engineering. “It really ups the ante in terms of what we need to be doing as teachers.” 

In a first-year engineering course Andrew Katz taught last semester, the assistant professor of engineering education had ChatGPT explain foundational engineering concepts with different audiences in mind — a first-grader, a high schooler, an undergraduate. Then, his students identified baseline pieces of information amid the varying layers of complexity. “I’ll continue to encourage students to use these tools this fall,” he said. “So then the biggest question is, How do you help students use them thoughtfully?”

One use he’s particularly hopeful about is AI’s potential as an intelligent tutoring system that can individualize education by using students’ interests to teach new information — for instance, offering soccer metaphors to teach a new concept to a soccer-playing student. “If you can take even a step in that direction, that’s a big improvement,” said Katz.

For now, many faculty are making AI the subject of assignments. They’re asking students to analyze and identify weaknesses in arguments produced by ChatGPT, for instance, or to edit an AI-produced essay with “track changes” on.

That kind of critical thinking about generative AI is vital, said Ismini Lourentzou, assistant professor of computer science in the College of Engineering. “It’s our responsibility as educators to teach students how to use these tools responsibly, and then understand the limitations of these tools.”

Potential AI pitfalls

AI’s limitations are, admittedly, worrisome.

Lourentzou, who has long worked at the intersection of machine learning, artificial intelligence, and data science, recently collaborated on a commentary published in the biomedical journal eBioMedicine pointing out how AI models amplify pre-existing health care inequities for the already marginalized. 

Junghwan Kim, assistant professor of geography in the College of Natural Resources and Environment, published a research paper in the journal Findings about potential geographic biases in a generative AI chatbot’s presentation of problems and solutions related to transportation in the United States and Canada.

For students to develop digital literacy around AI, they must understand its flaws, including bias, hallucinations, privacy concerns, and issues of intellectual property. Such problems aren’t necessarily dealbreakers, as long as students learn about them. “I’m a little concerned,” Kim said. “But my argument is, let’s be aware of the capabilities and limitations and then use it wisely.”

The path ahead for AI

As faculty navigate generative AI, working groups have cropped up around the university, including ones sponsored by Technology-enhanced Learning and Online Strategies (TLOS) and the College of Liberal Arts and Human Sciences. These groups discuss challenges, opportunities, and discipline-specific norms, which may vary widely among engineers and artists, writers and scientists.

Other university resources and responses include the following:

  • TLOS is offering a series of online and in-person workshops this fall to introduce faculty and staff to generative AI and prompt design. It’s also exploring ways to support faculty who want to rethink assignments and learning assessment in their courses.
  • The Office of Research and Innovation is launching a podcast featuring episodes that explore AI with experts around campus, including Myers and Katz.
  • Graduate School Dean Aimée Surprenant and Dale Pike released a statement to guide instructors in the Graduate School.
  • The Faculty Senate is considering organizing faculty discussions around generative AI, but it advocates allowing individual faculty members to choose how to use it. “It is a tool, and it’s obviously going to be up to each faculty member to decide, ‘Do I want to use it in my class? Do I want to let students use it?’” said Faculty Senate President Joe Merola. “It’s an evolving story.”

The conversation, nationally and at Virginia Tech, will continue for the foreseeable future. In the meantime, Pike urges faculty members to spend time experimenting with the technology. “I think the most important thing right now is that everyone — faculty, students, staff, administration — should be dabbling in this and getting their head wrapped around what’s different about this and how it might be helpful.”

At the very least, everyone needs to be paying attention, said Pike. “I don’t think this is hype. There are a lot of very smart people who are saying, ‘This is revolutionary.’”
