How semantic communication could reshape the way we think about wireless
Advances in artificial intelligence could fundamentally change the way we transmit information.
Consider, for a moment, the photograph. The light-sensitive film inside a camera reacts to wavelengths of light when exposed, recording an image of the world outside its lens. This all happens on a physical, atomic level — absorbed photons free electrons, which reduce silver ions within the film to form the image.
Now, think about what a digital image is. At root, it isn’t the same thing, but rather an interpretation of those wavelengths, quantized into data. Put another way, it’s a reconstruction of the original image through a great many tiny bits of information. We can easily share these images with one another — we do it all the time, by email, or text message, or posting to social media — but the larger they are, the more we may need to compress them to do so. This, essentially, is how we transmit all our data in the modern world.
Understanding that, imagine a process in which we don’t have to send the photo at all, but only a few details of it, which the receiver on the other side of the transmission can use to fill in the gaps and faithfully recreate the image in full. Once dismissed as fantasy, this idea — semantic communication — has been made much more of a potential reality in the wireless revolution by advances in artificial intelligence. It could significantly reduce the data and energy strain on wireless systems, especially as commercial 6G services are expected to arrive in 2030.
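A back-of-envelope sketch helps make the idea concrete. The numbers and the description string below are illustrative assumptions, not figures from the research: they simply compare the size of one raw, uncompressed HD frame with the size of a short text description that a capable receiver-side model might use to reconstruct it.

```python
# Toy comparison (illustrative only, not the researchers' system):
# a full uncompressed image versus a short semantic description of it.

width, height, bytes_per_pixel = 1920, 1080, 3  # a typical HD frame, 24-bit color
raw_bytes = width * height * bytes_per_pixel    # every pixel sent as data

# A hypothetical semantic payload: just enough detail for a
# receiver-side model to fill in the rest.
description = "golden retriever on a beach at sunset, waves breaking at left"
semantic_bytes = len(description.encode("utf-8"))

savings = 1 - semantic_bytes / raw_bytes
print(f"raw: {raw_bytes:,} bytes, semantic: {semantic_bytes} bytes")
print(f"data reduction: {savings:.4%}")
```

Even this crude comparison shows why the approach is attractive: the semantic payload is several orders of magnitude smaller, with the reconstruction burden shifted from the channel to computation at the receiver.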
“At the time, people kind of brushed it aside because it was a bit theoretical,” said Walid Saad, professor of electrical and computer engineering and the Next-G Wireless Lead at the Virginia Tech Institute for Advanced Computing. “In fact, all of the evolution of wireless we see today is because of that idea of ‘ignore everything and send bits.’”
Saad knows that when you think about the word “semantics,” you may well not be thinking about anything to do with technology. But just as semantics in social sciences is the study of language that helps us derive meaning from words, so can it be applied to the world of data transmission. Saad is hopeful that, when used in conjunction with recent advances in artificial intelligence, semantic communication could offer a breakthrough in the way we wirelessly send information across space.
Consider what happens when satellites temporarily lose contact with their targets, whether because of weather, obstructions, or any other reason. What happens to a stream of data in that interim? Under the current data transmission structure, the satellite keeps trying to communicate all of the data, but not all of it gets captured at the receiving end. Using semantic communication, satellites could continue to send instructions whenever a signal is available, then rely on onboard computing power to fill in the gaps during the downtime.
Working in the Network IntElligence, Wireless, and Security Laboratory at Virginia Tech, Jean-Luc DeRieux M.S. ’25 joined with Saad to develop the Semantic Context-aware Framework for Adaptive Multimodal Reasoning, or SCE-FOAM. By training the system on a video — in this case, the Netflix documentary “Our Planet” — they were able to send 98% less information than a traditional wireless transmission, while still recreating a mostly identical video.
How were they able to do it? Much of today’s communications infrastructure is built on a more than 75-year-old theory from mathematician Claude Shannon, who first introduced the idea of the bit. The bit has been the central unit of data transmission ever since, and although the idea of semantic communication was proposed around the same time, the technology did not yet exist to make it a practical path to pursue.
Enter AI. By using neural networks to build a world model on the receiving end of a transmission, one can ideally train a system to learn enough to fill in these gaps on its own. Practically, that might mean building a world model of a door, so the system knows how to open any door. Or teaching it the laws of physics, so it can accurately predict the movement of an object under specific conditions.
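As a loose analogy (not the team’s actual system), the sketch below transmits only every 10th sample of a smooth signal and lets the receiver fill in the rest. Plain linear interpolation stands in for a trained world model, and all of the numbers are made up for illustration.

```python
import math

# Toy illustration of receiver-side gap filling: the sender transmits
# only every 10th sample of a smooth signal; the receiver's "model"
# (here, simple linear interpolation standing in for a trained world
# model) reconstructs the missing samples.

signal = [math.sin(2 * math.pi * t / 100) for t in range(101)]  # ground truth
step = 10
sent = signal[::step]  # only 11 of 101 samples actually transmitted

# Receiver reconstructs the full signal from the sparse samples.
reconstructed = []
for i in range(len(sent) - 1):
    a, b = sent[i], sent[i + 1]
    for k in range(step):
        reconstructed.append(a + (b - a) * k / step)
reconstructed.append(sent[-1])

max_error = max(abs(x - y) for x, y in zip(signal, reconstructed))
print(f"sent {len(sent)} of {len(signal)} samples, max error {max_error:.3f}")
```

The point of the analogy: the more structure the receiver’s model already understands, the less the sender has to transmit. A learned world model plays the role of the interpolator, but for far richer data than a sine wave.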
The AI revolution has drastically increased the amount of computing power readily available across many different systems. That shift has changed the assumptions and priorities built on Shannon’s work. Instead of bit transmission being the be-all and end-all, transmission systems might now lean on a combination of classical communication and computing resources to accomplish the same tasks.
“We started with semantics — semantics is really learning the representations. Then we got to world models, where we can think of plausible futures,” said Saad, whose current work builds on the foundations laid in a 2022 paper that he co-authored. “That means the AI can now generalize, because it’s getting data from the network. It’s able to take different actions and, if you want, imagine things offline and then use that imagination to actually do better.”
The neural networks Saad and his team are implementing are built to avoid some of the pitfalls of large language models. While not perfect yet, they represent “the difference between a fluent speaker and an intelligent speaker,” according to Saad, and can avoid mistakes because they actually understand the first principles of the tasks they are given.
Saad uses a couple of classroom examples to help explain semantic communication, since it functions much like the way he teaches and his students make sense of the information he shares. He doesn’t teach straight from a textbook; instead, he chooses the important points and lets his students, who are already trained on much of the material, fill in the gaps.
And just as a student going back to school after a summer off doesn’t need to fully relearn every year of prior schooling before entering the next one, these systems can build upon past information with fairly small levels of learning loss, around 10%. This could have dramatic implications for deep space satellites equipped with such a system, which could continue to learn between long-range transmissions.
While Saad and DeRieux haven’t yet tested their framework on virtual or augmented reality systems, Saad believes VR is “actually the best candidate” for it, for a couple of reasons. Unlike deep space communication, VR doesn’t pose the constant threat of complete signal loss, but it generates a continuous, enormous stream of data that strains the communication network. Semantic communication and learning could significantly reduce that data load and fill in the gaps on the fly, making the system far less data and energy intensive.
Saad will be presenting his work at Tech on Tap on Nov. 6 at 6 p.m. at Academic Building One in Alexandria.