Location: 702 Hamilton
This talk will use the example of sentiment analysis to show that supervised machine learning has the potential to amplify the voices of the most privileged people in society. A sentiment analysis algorithm is considered 'table stakes' for any serious text analytics platform in social media, finance, or security. As an example of supervised machine learning, Mike will show how these systems are trained. But he'll also show that they are inherently better at spotting unsubtle expressions of extreme emotion, and such crude expressions are used by a particularly privileged group of authors: men. In this way, brands that depend on sentiment analysis to 'learn what people think' inevitably pay more attention to men. The problem doesn't stop with sentiment analysis: at every step of any model-building process, we make choices that can introduce bias, enhance privilege, or break the law! Mike will review these pitfalls, talk about how you can recognise them in your own work, and touch on some new academic work that aims to mitigate these harms.
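To make the idea concrete, here is a minimal sketch of the kind of supervised training the talk describes: a toy Naive Bayes sentiment classifier learned from a handful of labeled sentences. The data, labels, and word-count model are purely illustrative assumptions, not the speaker's actual system, but they show the mechanism at issue: blunt, extreme wording ("hate", "love") is easy for such a model to pick up, while subtle phrasing gives it nothing to work with.

```python
from collections import Counter
import math

# Hypothetical toy training data: blunt expressions of extreme emotion.
train = [
    ("this is awesome i love it", "pos"),
    ("absolutely great totally love this", "pos"),
    ("this is terrible i hate it", "neg"),
    ("absolutely awful totally hate this", "neg"),
]

def fit(examples):
    """Count word frequencies per class (multinomial Naive Bayes)."""
    counts = {"pos": Counter(), "neg": Counter()}
    for text, label in examples:
        counts[label].update(text.split())
    return counts

def predict(counts, text):
    """Return the class with the higher smoothed log-likelihood."""
    vocab = set()
    for c in counts.values():
        vocab.update(c)
    scores = {}
    for label, c in counts.items():
        total = sum(c.values())
        score = 0.0
        for word in text.split():
            # Laplace smoothing so unseen words don't zero out a class.
            score += math.log((c[word] + 1) / (total + len(vocab)))
        scores[label] = score
    return max(scores, key=scores.get)

model = fit(train)
print(predict(model, "i hate this"))  # blunt negative: easy → neg
# A subtle complaint shares no words with the training data, so the
# model has no signal and the prediction is effectively arbitrary.
print(predict(model, "well that went as expected"))
```

The failure mode the talk highlights follows directly: if one group of authors writes in the blunt register the model was able to learn, that group's opinions are measured reliably and everyone else's are noise.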
Bio: Mike Williams is a research engineer at Fast Forward Labs, which develops prototypes and writes reports demonstrating innovations in machine intelligence. He has a PhD in astrophysics from Oxford, and did postdocs at the Max Planck Institute in Munich and at Columbia University.