
GENIUS INTERVIEW: Lauren Klein

DEVIL’S IN THE DATA: The book Data Feminism explores the intersection of feminist thinking and data science.

scale network. You say to yourself: “It’s showing me everything.” But it’s a trick. There are two parts: one is the godlike perspective; the other is the trick part. Because, as anyone who’s produced one of these data visualisations knows, you made some choices. It’s always produced by a person. Again, that doesn’t delegitimise the knowledge. It just makes you ask what the contexts are under which this visualisation or data set or analysis is being produced. And the ultimate goal is, again, not that we can’t trust it, but that we know more precisely what it is telling us – like asking what’s included in the data. What data were missing from your data set? Can our model be productively applied, and in what contexts? You might say it definitely applies in these contexts, but doesn’t apply in those ones. And that makes better science.

FEED: What are your thoughts about how data is used in media, particularly with content recommendations?

LAUREN KLEIN: Obviously, machine learning and predictive and generative models are used all over the media industry. Your YouTube playlists, Spotify playlists, Google search results – they’re all driven by these predictive algorithms, and it’s really important to understand the maths behind them. The off-the-shelf ones, and the ones that most people adapt for their use, are set up so that they amplify existing content or existing preferences. This optimises the model, but it doesn’t optimise society. There are a ton of examples. A simple one is Spotify recommendations. They had to work really hard to stop the recommendations becoming narrower and narrower the more you listened, because that’s the model: it’s set up to converge on a clearer answer.
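To see how that convergence happens mechanically, here is a minimal Python sketch of a preference-reinforcing recommender. The genres, starting weights and update rule are all illustrative assumptions, not any real service’s algorithm:

```python
import random

# Toy recommender: each genre starts with equal weight, and every play
# reinforces the genre that was just recommended (illustrative only).
weights = {"jazz": 1.0, "pop": 1.0, "rock": 1.0, "folk": 1.0}
REINFORCEMENT = 0.5  # assumed strength of the feedback per play

def recommend():
    """Sample a genre in proportion to its current weight."""
    genres = list(weights)
    return random.choices(genres, weights=[weights[g] for g in genres])[0]

random.seed(1)
for _ in range(100):
    played = recommend()
    # The feedback step: whatever was recommended becomes more likely
    # to be recommended again, so the playlist narrows over time.
    weights[played] += REINFORCEMENT

total = sum(weights.values())
print({g: round(w / total, 2) for g, w in weights.items()})
```

Run it a few times with different seeds and one genre almost always ends up dominating – a rich-get-richer loop that engineers have to actively counteract, which is exactly the narrowing Klein describes.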

Now, it’s one thing if it’s music preferences, but I had a student who looked at YouTube recommendations and discovered there’s real racial bias in who it suggests you watch next. And you can zoom back a little more and look at someone like Safiya Noble. She’s written a book called Algorithms of Oppression, which is about Google and these recommendation engines. They are ultimately driven by people’s preferences, and because people are racist and sexist, and because there are imbalances in the types of people using these engines, they perpetuate oppression. One example that she leads with in the book is the difference in Google search results between when you search for ‘black girls’ and ‘white girls’. When you type ‘black girls’ into Google, you get porn sites. And when you type ‘white girls’, you get wholesome stock photography. White male users of Google, when they type ‘black girls’ into the search box, tend not to be looking for wholesome photography of black kids; they’re looking for porn. And once they click on that link, it moves higher and higher up, because Google is optimising for those users. The algorithm starts to learn, in the computational sense, that it should upvote the porn sites. So then a little black girl types that search query into Google, and what does she see? She doesn’t see what she wants; she sees what the majority of white male users want. This is an example of a feedback loop – and feedback in the engineering sense: not just that it goes around and around, but that it actually gets amplified.
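The same mechanism produces the ranking harm she describes. Below is a hypothetical Python sketch – the result names, audience split and click model are invented for illustration, and real search ranking is far more complex – showing how a ranker that simply promotes whatever gets clicked lets a majority’s clicks crowd out everyone else’s results:

```python
import random

# Hypothetical click-feedback ranker (all names and numbers are
# illustrative assumptions). Results are ordered by accumulated clicks.
clicks = {"result_a": 1, "result_b": 1, "result_c": 1}

# Assumed audience: 80% of queries come from users who want result_a,
# 20% from users who want result_c.
audience = ["result_a"] * 8 + ["result_c"] * 2

def ranking():
    """Most-clicked first: the 'optimise for the users' rule."""
    return sorted(clicks, key=clicks.get, reverse=True)

random.seed(0)
for _ in range(500):
    wanted = random.choice(audience)
    top = ranking()[0]
    # Position bias: even a user who wanted something else clicks the
    # top result 30% of the time – and every click pushes it higher.
    clicked = top if (wanted == top or random.random() < 0.3) else wanted
    clicks[clicked] += 1

print(ranking())
```

After a few hundred queries, the majority’s preference locks into the top slot and, thanks to position bias, keeps harvesting clicks even from the minority: the loop doesn’t just repeat, it amplifies.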

FEED: What are some of the solutions?

LAUREN KLEIN: The point we make in our book is to ask whether you are truly designing for everyone. Or can you hack an existing system in order to make it show more choices to more people? And there are some people, whom we quote in the book, who think you should actively optimise for the margins – that you should be designing for the people at the extremes, because those people tell you more about the widest range of human experience than someone in the centre. In the book, we lay out seven principles of data feminism (see page 41) that take different approaches, and we hope they offer seven different entry points into thinking about how you can do better. Not all of them apply in all contexts, but they’re all intended not just as provocations, but as practical advice. We want people to think: ‘I work in industry, or for one of the Big Five, or I want to start a non-profit tech company – whatever the entry point may be – how do I do something good instead of something harmful with the technical abilities I have?’

