One of the most-discussed topics in AI recently has been the growing realization that AI-based systems absorb human biases and prejudices from training data. While this has only recently become a hot news topic, AI organizations, including Luminoso, have been focused on this issue for a while. Denise Christie sat down with Luminoso’s Chief Science Officer, Rob Speer, to talk about how AI becomes biased in the first place, the impact such bias can have, and, more importantly, how to mitigate it.
Perhaps you heard about Tay, Microsoft’s experimental Twitter chatbot, and how within a day it became so offensive that Microsoft had to shut it down and never speak of it again. And you assumed that you would never make such a thing, because you’re not doing anything weird like letting random jerks on Twitter re-train your AI on the fly.
My purpose with this tutorial is to show that you can follow an extremely typical NLP pipeline, using popular data and popular techniques, and end up with a racist classifier that should never be deployed.
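To make that concrete, here is a minimal sketch of the kind of pipeline in question: off-the-shelf word embeddings plus a simple logistic-regression sentiment classifier. This is not the tutorial's exact code; the tiny word lists, the `sentiment` helper, and the choice of the `glove-wiki-gigaword-50` download are illustrative assumptions.

```python
# A minimal sketch of a "typical" sentiment pipeline that inherits bias
# from pretrained embeddings. Assumes gensim and scikit-learn are installed;
# the first run downloads the GloVe vectors (~66 MB).
import numpy as np
import gensim.downloader as api
from sklearn.linear_model import LogisticRegression

# Popular off-the-shelf embeddings, trained on Wikipedia and Gigaword.
vectors = api.load("glove-wiki-gigaword-50")

# A toy stand-in for a real sentiment lexicon.
positive = ["good", "great", "excellent", "happy", "love", "wonderful"]
negative = ["bad", "terrible", "awful", "sad", "hate", "horrible"]

X = np.array([vectors[w] for w in positive + negative])
y = np.array([1] * len(positive) + [0] * len(negative))

# The "popular technique": a plain linear classifier over word vectors.
model = LogisticRegression().fit(X, y)

def sentiment(text):
    """Mean positive-class probability over the words we have vectors for."""
    words = [w for w in text.lower().split() if w in vectors]
    return model.predict_proba(np.array([vectors[w] for w in words]))[:, 1].mean()

# Sentences that differ only in a cuisine word can come out with different
# scores, because the embeddings carry ethnic associations from their corpus.
print(sentiment("let's go get italian food"))
print(sentiment("let's go get mexican food"))
```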
There are ways to fix it. Making a non-racist classifier is only a little bit harder than making a racist one. The fixed version can even be more accurate in evaluations. But to get there, you have to know about the problem, and you have to be willing not to just use the first thing that works.
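One way to know whether you have the problem, and whether a fix actually worked, is to audit the classifier directly. The sketch below builds on the snippet above; the name lists are illustrative stand-ins for the larger, published lists a real audit would use.

```python
# A sketch of auditing the classifier from the previous snippet for bias.
# First names carry no inherent sentiment, so any systematic gap between
# groups is bias absorbed from the embeddings' training corpus.
names_a = ["emily", "anne", "matthew", "justin"]     # illustrative lists,
names_b = ["lakisha", "jamal", "darnell", "keisha"]  # not from any one study

def mean_score(names):
    """Average sentiment over the names that have embedding vectors."""
    scores = [sentiment(n) for n in names if n in vectors]
    return float(np.mean(scores))

print("group A:", mean_score(names_a))
print("group B:", mean_score(names_b))
# If the gap is large, one fix is to rerun the same pipeline on embeddings
# built to reduce ethnic associations (e.g. ConceptNet Numberbatch); the
# audit above tells you whether the fix worked.
```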