Engineering's Devi Parikh seeks to lessen failures in computerized visual recognition programs
Computer programs that use facial or image recognition, whether in security cameras or in applications that search databases for everything from photographs of wanted criminals to images of bears, are like any other technological marvel. They may be fast and versatile, but they frequently fail, and they are limited to one-way communication, taking orders from the user.
Devi Parikh, an assistant professor in the Virginia Tech Bradley Department of Electrical and Computer Engineering, wants to change that by creating a two-way communication path between users and computer vision systems and their algorithms. The two-way system won't directly prevent failures and faults, but it will help users diagnose problems, correct errors, and prevent future occurrences.
“Models that characterize the failures of a system can then also be used to predict oncoming failure,” said Parikh, whose research project is at the center of a $150,000 U.S. Army Research Office Young Investigator Award and could well have future applications in a wide variety of artificial intelligence systems. “Such a warning signal can be valuable to a downstream application that uses the output of the machine perception system as input. These techniques are broadly applicable to many research and development efforts on intelligent and autonomous systems.”
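In code, the core idea can be sketched roughly as follows. The snippet below is a minimal illustration, not Parikh's actual method: it assumes a hypothetical secondary classifier, trained on simple made-up image cues (brightness, contrast, and the recognizer's own confidence), that estimates whether the primary vision system is likely to be wrong on a given input and raises the kind of warning signal a downstream application could consume.

    # Minimal sketch of a "failure predictor" (illustrative only): a secondary
    # model trained to flag inputs on which the primary vision system is likely
    # to be wrong, so a downstream application can be warned in advance.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)

    # Hypothetical training data: per-image cues (mean brightness, contrast,
    # recognizer confidence) paired with whether the recognizer was correct.
    cues = rng.uniform(size=(500, 3))
    was_correct = (0.5 * cues[:, 0] + 0.5 * cues[:, 2]
                   + rng.normal(0, 0.1, 500)) > 0.5

    failure_predictor = LogisticRegression().fit(cues, was_correct.astype(int))

    def warn_downstream(image_cues):
        """Return True if the primary system's output should be distrusted."""
        p_correct = failure_predictor.predict_proba([image_cues])[0, 1]
        return p_correct < 0.5  # warning signal for the downstream consumer

    # A dark, low-confidence image trips the warning.
    print(warn_downstream([0.1, 0.4, 0.2]))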
Using a computer vision program, or almost any computer system, that proves faulty or makes errors is much like caring for a small child who may be ill: the adult can tell from the child's behavior that something is wrong, but the child does not have the vocabulary to explain why he or she feels sick. The parent must guess or seek help with a diagnosis, or the child remains ill.
Computers act much the same way when a system or program crashes or fails. When a facial recognition system fails to recognize or track a person's face, it may not be able to tell the user, likely law enforcement, why it is failing, or even that it is failing. The user must guess whether the program is struggling because of, say, low or harsh light, or because the subject's face is turned at an odd angle to the lens.
Parikh wants to remove the guesswork, allowing the system or application to directly tell the user the cause of failure.
Once the user is aware of the fault, they can take action to correct it: switch to a different camera to capture the person's face from another angle, or stop down the lens aperture to admit less light and avoid excessive glare, and so obtain a better, usable image.
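What that kind of report might look like can be suggested with a toy example. The function below is purely illustrative, with made-up measurements and thresholds that are not from Parikh's work: it maps a few simple properties of a failed image to a human-readable cause and a suggested correction, rather than failing silently.

    # Hypothetical sketch of semantic failure reporting: when recognition fails,
    # map simple measurements of the input to a plain-language cause and fix.
    def diagnose_failure(mean_brightness, glare_fraction, face_yaw_degrees):
        """Return a (cause, suggested_action) pair for a failed recognition."""
        if mean_brightness < 0.2:
            return ("low light", "switch to a better-lit camera or raise exposure")
        if glare_fraction > 0.3:
            return ("excessive glare", "stop down the lens aperture to admit less light")
        if abs(face_yaw_degrees) > 45:
            return ("face turned away from the lens", "capture the face from another angle")
        return ("unknown cause", "log this failure for later analysis")

    print(diagnose_failure(mean_brightness=0.8, glare_fraction=0.5, face_yaw_degrees=10))
    # -> ('excessive glare', 'stop down the lens aperture to admit less light')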
In much the same way, if a computer is programmed to sort through thousands of images for photographs of bears, but its initial model is based only on images of a grizzly standing near a lake, the system may well treat the body of water as part of what makes a bear, and miss images of polar bears because it was shown only one type of the species. Parikh wants to create systems smart enough to ask the user questions that would avoid such errors and shortcomings, saving the user time and, likely, money.
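A toy sketch can illustrate that questioning step. The code below, again an assumption-laden illustration rather than Parikh's system, checks whether a background cue (water, in the bear example) is just as strongly correlated with the label in the training images as the object itself, and, if so, asks the user whether that cue really matters before relying on it.

    # Toy sketch of the "ask the user" idea: detect a background cue that looks
    # as predictive as the object itself in the training set, and confirm with
    # the user before treating it as essential.
    training_images = [
        {"label": "bear", "has_bear": True, "has_water": True},   # grizzly by a lake
        {"label": "bear", "has_bear": True, "has_water": True},
        {"label": "other", "has_bear": False, "has_water": False},
    ]

    def cue_rate_in_bear_images(cue):
        bears = [img for img in training_images if img["label"] == "bear"]
        return sum(img[cue] for img in bears) / len(bears)

    # Simulated user answers; a real system would pose these questions interactively.
    user_says_required = {"has_bear": True, "has_water": False}

    required_cues = []
    for cue in ("has_bear", "has_water"):
        if cue_rate_in_bear_images(cue) == 1.0 and user_says_required[cue]:
            required_cues.append(cue)

    print(required_cues)  # -> ['has_bear']; water is dropped, so polar bears aren't missed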
“A semantic characterization of the failure modes of a system can thus allow us to design better systems in the future, as well as to make today’s computer vision systems more usable even with their existing imperfections,” Parikh wrote in her proposal.
Parikh leads Virginia Tech’s Computer Vision Lab and is a core faculty member of the university’s Discovery Analytics Center and the Virginia Center for Autonomous Systems. Before joining the Virginia Tech faculty, she was a research assistant professor at the Toyota Technological Institute at Chicago, on the University of Chicago’s campus. Among her previous grants is a Google Faculty Research Award for relative attribute-based feedback in image Web searches.
She earned her bachelor's degree from Rowan University, and her master’s degree and Ph.D. from Carnegie Mellon University.