Virginia Tech researchers to investigate impact of judgmental AI on vulnerable groups
Pamplin College of Business professors Tabitha L. James and Viswanath Venkatesh will examine the role judgmental AI plays in phenomena like toxic beauty standards and explore ways to limit its potentially harmful effects.
Two Virginia Tech researchers from the Pamplin College of Business will study the impact of judgmental artificial intelligence on vulnerable populations through a new inclusive cybersecurity program funded by the Southwest Virginia node of the Commonwealth Cyber Initiative (CCI).
According to their research proposal, “‘Don’t You Dare Judge Me’: Judgments by AI Tools and their Impact on Minorities,” R.B. Pamplin Professor of Business Information Technology Tabitha L. James and Verizon Professor of Business Information Technology Viswanath Venkatesh will examine the influence that judgmental artificial intelligence (AI) has on minorities, the underprivileged, and other vulnerable populations. Understanding these effects can have design, regulatory, and legal ramifications for the appropriate use of such tools in business.
Judgmental AI, or judgy AI, a term coined by the researchers, describes AI systems that evaluate a personal aspect of an individual, such as their looks, for business or personnel purposes.
For example, a judgy AI system might evaluate the wrinkles of a person’s skin as input for recommending which skin care product that person should purchase. Such judgments from sophisticated AI tools, according to the researchers, could exacerbate a troubling phenomenon of young people falling prey to “toxic beauty standards” and purchasing “preventive treatments.”
In their research, James and Venkatesh will examine both cognitive and emotional trust, as well as how different types of algorithmic transparency can influence trust in AI. They will do so by developing a judgy AI system that provides feedback on the severity of a person’s facial wrinkles, an application of AI that cosmetic companies are currently using to sell skin care products. They will then examine emotional and cognitive responses to both the algorithm and its feedback and study how such a judgy AI intervention can drive purchases among people with different traits.
The goal of the research is twofold. First, the researchers hope to understand the effectiveness of judgy AI as a sales tool and whether it can lead to undesirable spending by making people feel poorly about themselves, especially among those with low self-esteem and minorities. Second, they hope to discover what types of interventions might help companies that use judgy AI limit the tool’s potential for harm while preserving the benefits of customized service that such tools might provide.
The proposed research is one of 11 projects funded as part of the 2024 Addressing Inclusion and Accessibility in Cybersecurity Program by CCI. The project was one of only two funded by the CCI Southwest Virginia node.
“The CCI Southwest Virginia node invests in this type of research to ensure that cybersecurity efforts reach some of the most vulnerable communities,” said Director Gretchen Matthews. “This project will further the understanding of trust in AI systems and the role of algorithmic transparency in AI, especially in a consumer context.”