Turning emotion into architecture
“Mood-vironment” is an interactive installation that uses artificial intelligence to translate visitors’ emotions into shifting light, sound, and shared space.
What if architecture could sense how you feel — and respond in real time?
“Mood-vironment,” an interactive public art installation co-designed by Ramtin Haghnazar, assistant professor in the School of Architecture, transformed emotion into a shared spatial experience when it was on display last summer and fall in downtown New Bedford, Massachusetts. The walk-through structure invited passersby to step inside and experience an ever-changing atmosphere shaped by light, sound, and human presence.
Using artificial intelligence (AI), the installation translated visitors’ facial expressions and vocal cues into an evolving environment that reflected the emotional pulse of the community. Presented as part of the Design Art and Technology Institute biennial, “Mood-vironment” blurred the boundaries between architecture, technology, and public art while giving Virginia Tech students hands-on experience at the forefront of design innovation.
Haghnazar shared more about the inspiration behind “Mood-vironment,” how it worked, and the students and collaborators who helped bring it to life.
What sparked the original idea or inspired you to create “Mood-vironment”?
The idea for “Mood-vironment” came from a simple question: Where do emotions exist in our cities? We are constantly surrounded by information, buildings, and technology, yet there are very few public spaces that acknowledge how people actually feel. Many emotions, especially those of marginalized or overlooked groups, remain invisible. I wanted to explore how public art could make those emotions visible and shared, using technology not as a spectacle, but as a tool for empathy and reflection.
For someone encountering “Mood-vironment” for the first time, how would you describe the experience?
“Mood-vironment” is an immersive space that reacts to you. When visitors step inside, the atmosphere, light, and sound change in response to their emotional presence. You don’t need instructions or prior knowledge; you simply stand, look, or speak, and the installation responds. The experience is both personal and collective, because while each person leaves a distinct trace, the space ultimately reflects the emotions of everyone inside.
How does it work? How does the installation “read” emotions and translate them into light and sound in real time?
In simple terms, the installation uses cameras and microphones to observe facial expressions and vocal tone. Artificial intelligence software analyzes these signals to identify broad emotional states, such as happiness, stress, or calm. Once an emotion is detected, the system instantly translates it into color and music. No personal data is stored or identified; the system only responds to the emotional qualities it senses in the moment.
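The pipeline Haghnazar describes — sense, classify, then map the label to atmosphere — can be sketched in a few lines. This is an illustrative sketch, not the project’s actual code: the AI classifier is stubbed out, and the emotion labels, colors, and tempos are hypothetical stand-ins for whatever mappings the installation used.

```python
# Illustrative sketch of an emotion-to-atmosphere mapping (assumed, not
# the installation's real implementation). In the actual piece, an AI
# model would infer the emotion label from camera and microphone input;
# here that step is represented only by a string label.

from dataclasses import dataclass


@dataclass
class SceneState:
    color_rgb: tuple    # lighting color as (R, G, B)
    tempo_bpm: int      # music tempo
    brightness: float   # 0.0 (dark) to 1.0 (full)


# Hypothetical mapping from broad emotional states to light and sound.
EMOTION_TO_SCENE = {
    "happiness": SceneState((255, 200, 60), 120, 0.9),
    "calm":      SceneState((80, 140, 255), 70, 0.5),
    "stress":    SceneState((200, 60, 60), 140, 0.7),
}


def update_environment(detected_emotion: str) -> SceneState:
    """Translate a momentary emotion label into scene settings.

    Only the in-the-moment label is used -- no frames, audio, or
    identities are stored, mirroring the privacy approach described
    in the article.
    """
    return EMOTION_TO_SCENE.get(detected_emotion, EMOTION_TO_SCENE["calm"])


state = update_environment("happiness")
print(state.tempo_bpm)  # 120
```

In a live loop, `update_environment` would be called on each new classification and its output pushed to the lighting and audio controllers; unknown or ambiguous readings fall back to a neutral “calm” scene.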
How were Virginia Tech students involved and what aspects of the work did they contribute to?
Virginia Tech students were deeply involved in several stages of the project. They contributed to physical fabrication, sensor integration, lighting system development, presentation, and testing. Students worked hands-on with materials, electronics, and software, helping translate an abstract idea into a full-scale, public installation. Their involvement was not peripheral; they were essential collaborators in making the project possible.
What skills or perspectives did students gain from working on a real-world, public-facing installation like this?
Students gained experience working across disciplines, integrating design, technology, and social considerations. They learned to collaborate in teams, adapt designs to real-world constraints, and consider the public impact of their work. Perhaps most importantly, they experienced how architecture and design can engage emotional, social, and ethical dimensions, not just technical ones.
How does this project connect to your teaching and research at Virginia Tech, and what’s next for this area of your work?
“Mood-vironment” directly reflects my teaching and research focus on computational design, digital fabrication, and emerging technologies in architecture. In my courses, students explore how data, AI, and interactive systems can shape space in meaningful ways. This project extends that research into the public realm. Moving forward, I plan to further explore emotionally responsive environments, public installations, and the role of technology in fostering empathy and collective awareness in cities.
Who were the other key collaborators on the project?
“Mood-vironment” was developed in collaboration with Mona Ghandi, associate professor at Washington State University, and Mohammad Taba, doctoral student at the University of Washington. Sida Dai, assistant professor of architecture at Virginia Tech, contributed expertise in AI, programming, and machine learning as part of the interactive systems team. The following graduate students also participated: Behrang Chaichi, Foad Beheshti, Aleia Gerhardt, Yasaman Ashjazadeh, Marcus Ryan Wagner, Amirreza Taghvaie, Alireza Karami, Michael (Mike) Herrboldt, Airii Massey, Pranshu Agrawal, Yusna Ayer, Noah Freedman, Elena Ahwee-Marrah, SeyedAli Derazgisou, Ghazaleh Shams, Nyle Sheriff, Gabby Brooking, and Yacine Berrada. We are especially grateful to the Design Art and Technology Institute for the opportunity and institutional support that made this project possible.