Researchers are starting to see the same problem with AI. It began when University of Virginia computer science professor Vicente Ordóñez noticed a disturbing pattern in his image recognition software: if it saw a picture of a kitchen, it was far more likely to associate it with women. Concerned, he assembled a team to investigate two sets of AI training photos backed by Facebook and Microsoft. Both showed gender bias, with activities like shopping and cooking linked to women, and sports to men. Worse, the software didn’t just reflect the bias in the data; it amplified it. It’s an important drawback to be aware of, and Microsoft researchers have studied it themselves. In an earlier study with Boston University, the software giant trained an AI on Google News content and found that when asked “Man is to computer programmer as woman is to X?,” it replied, “homemaker.”
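Analogies like “man is to computer programmer as woman is to X” come from simple vector arithmetic over word embeddings: subtract one word’s vector from another and add a third, then find the nearest remaining word. The sketch below illustrates the mechanism with made-up 3-dimensional toy vectors (real embeddings such as the Google News word2vec model have hundreds of dimensions, and the values here are invented purely to mimic the biased result the study reported).

```python
import numpy as np

# Hypothetical toy vectors, hand-picked so the biased analogy emerges.
# These are NOT real embedding values.
vectors = {
    "man":        np.array([1.0, 0.2, 0.1]),
    "woman":      np.array([1.0, 0.9, 0.1]),
    "programmer": np.array([0.2, 0.2, 0.9]),
    "homemaker":  np.array([0.2, 0.9, 0.9]),
    "kitchen":    np.array([0.1, 0.8, 0.3]),
}

def cosine(a, b):
    # Cosine similarity: 1.0 means the vectors point the same way.
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def analogy(a, b, c):
    """Solve 'a is to b as c is to ?' via the offset b - a + c."""
    target = vectors[b] - vectors[a] + vectors[c]
    # Exclude the three query words from the candidate answers.
    candidates = {w: v for w, v in vectors.items() if w not in (a, b, c)}
    return max(candidates, key=lambda w: cosine(candidates[w], target))

print(analogy("man", "programmer", "woman"))  # → homemaker
```

With these toy values, the gendered offset between “man” and “woman” drags the answer toward “homemaker” rather than a neutral term, which is the mechanism the Boston University study probed at scale.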
Fighting the Bias
Researchers are now looking for ways to minimize such biases, but a bias must be identified before it can be counteracted. To that end, Eric Horvitz, director of Microsoft Research, is pushing others to follow suit: his team has an internal ethics committee that focuses on finding and addressing biases. “I and Microsoft as a whole celebrate efforts identifying and addressing bias and gaps in datasets and systems created out of them,” he said. He is also considering the introduction of more idealized data sets. As with educational content for children, it may be worth presenting AI with an equal world. “It’s a really important question: when should we change reality to make our systems perform in an aspirational way?” he added. Princeton researcher Aylin Caliskan says no. “We risk losing essential information,” she argues. “The datasets need to reflect the real statistics in the world.”