coded misogyny

AI’s gender problem: two leading researchers discuss threats to women’s rights in tech
Current affairs | 29 December 2020
Intro Finn Blythe
Illustration Alex Beausire.

Susan Leavy and Dr Rachel Adams are two voices at the forefront of AI research, questioning its male-dominated foundations and heavily gendered narratives. Leavy, a postdoctoral research fellow at University College Dublin, works with the Insight Centre for Data Analytics to critically assess how gender bias is learned by AI systems. Dr Rachel Adams is a research specialist at the Human Sciences Research Council, Africa’s largest policy think tank, based in South Africa.

Together, they help safeguard against the effects of a technology that promises so much yet finds itself in desperate need of restructuring. When American professor Donna Haraway wrote A Cyborg Manifesto in 1985, she imagined human and machine as one, subverting oppressive powers and embracing global unity. Since then, this hybridisation has delivered anything but. Instead of overcoming divisions, AI systems have, in many areas, ensured their perpetuation.

With the rise of Siri, Alexa and other virtual assistants built with passive, compliant female personas, the future threatens to rely increasingly upon a technology designed, built, tested and marketed almost exclusively by men. Not just maintaining the boundaries Haraway sought to remove, but enforcing them.

Rachel Adams: There is discussion about the way in which AI is thought about and imagined, and how that translates into policy and development. However, to me, the current literature around these narratives is very flat from a gendered perspective; it talks about the hopes and fears, and how much of the thinking around AI is very dichotomous. Either we think it’s wonderful and it’s going to fix all our problems, or it’s really worrisome and it’s going to replace us as humans.

Neither of these broad narratives considers any gendered dimension. I’ve done quite a lot of work around why virtual personal assistants, the most popular form of AI, are gendered female and why it’s not been called out as much. There was a study by UNESCO earlier this year that said by 2021 we’re going to be talking to our virtual personal assistants more than our spouses. So they’re a ubiquitous technology, but Siri, Alexa, Cortana, Echo all appropriate really stereotyped female gendering within their design. For me, this is a sub-narrative within AI that is really latent.

“There was a study by UNESCO earlier this year that said by 2021 we’re going to be talking to our virtual personal assistants more than our spouses.”

Susan Leavy: One thing about gendering in regard to personal assistants is how they’re trained and what data they’re trained on. So at OpenAI [US organisation conducting research in AI, co-founded by Elon Musk] they’ve developed GPT-2, a text-generating language model, one of the best in the world. They didn’t release the full code at the time because they considered it too dangerous: they thought it might be abused to generate fake news articles or spam emails.
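[To give a concrete sense of what a text-generating language model does, the sketch below samples continuations from the small, publicly released GPT-2 checkpoint using the open-source Hugging Face transformers library. The prompt and settings are illustrative only and are not drawn from the interview; the point is simply that the model continues whatever it is given with statistically likely text, reflecting the patterns, and the biases, of its training data.]

```python
# Illustrative only: sampling text from the publicly released small GPT-2 model
# with the Hugging Face transformers library (pip install transformers torch).
from transformers import pipeline, set_seed

set_seed(42)  # make the sampled continuations repeatable
generator = pipeline("text-generation", model="gpt2")

# An arbitrary prompt; GPT-2 continues it with statistically likely text,
# reflecting whatever patterns (and biases) were present in its training data.
prompt = "The assistant replied softly,"
for i, out in enumerate(
    generator(prompt, max_length=40, num_return_sequences=3, do_sample=True), 1
):
    print(f"Sample {i}: {out['generated_text']}")
```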

It’s like deepfakes for language but trained on Reddit data – and we know the gender split among Reddit users and the problems with misogyny there. So you could have this ludicrous situation where AIs with the persona of pliant females are learning to speak based on misogynistic Reddit troll posts. So as well as the persona, how these bots are educated or trained to speak and interact has to be way more transparent. Do you know what data Siri and Alexa are trained on?

RA: I haven’t really looked into it, but on the general point around data sets being biased, they are that way because AI is trained on historical data. It can’t be any other way unless you’re using synthetic data. Our world is biased and these data sets are only going to reflect what is happening in the world. A really good and often quoted example is the recruiting algorithm that Amazon developed. It looked at a whole load of CVs and decided – based on Amazon’s historical hiring practices – which resumes to take forward. It downgraded any CV with the word ‘woman’, ‘women’ or ‘gender’, as well as anybody who had been to a women’s college in the US.

They found there was nothing they could do to correct this algorithm; it just had to be scrapped because the historical data that existed was so biased. They couldn’t get around that unless they generated a whole load of synthetic data sets, but any data scientist is going to embed their own biases. That’s why the question around representation, design and development teams for AI is so important. The more diverse your design team, the more issues you will be able to foresee and the more world-views you will be able to embed into your technology. At the moment we have really dire representation rates within Silicon Valley and more broadly across tech industries worldwide. I think around 80% of Silicon Valley is men.
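[As a rough sketch of the mechanism Adams describes – not Amazon’s actual system, whose details were never made public – the snippet below trains a simple text classifier on a tiny, invented set of historical hiring decisions. Because the positive outcomes in this toy data skew away from CVs mentioning women’s organisations, the model learns a negative weight for the word itself: the bias sits in the historical labels, not in any single line of code.]

```python
# Toy illustration of how a model trained on biased historical hiring data
# learns to penalise gendered terms. All data here is invented for this sketch;
# it is not Amazon's system, whose details were never made public.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

# Tiny "historical" dataset: 1 = candidate was hired, 0 = rejected.
# The bias lives in the outcomes, not in any individual CV.
cvs = [
    "chess club captain, computer science degree",
    "women's chess club captain, computer science degree",
    "software engineering internship, maths degree",
    "women's coding society lead, maths degree",
]
hired = [1, 0, 1, 0]  # historically skewed hiring decisions

vectoriser = CountVectorizer()
X = vectoriser.fit_transform(cvs)
model = LogisticRegression().fit(X, hired)

# The learned weight for the token "women" comes out negative:
# the model has, in effect, learned to downgrade CVs containing it.
idx = vectoriser.vocabulary_["women"]
print("weight for 'women':", model.coef_[0][idx])
```

[Scaled up to hundreds of thousands of real CVs, this is the dynamic that reportedly could not be corrected and forced the tool to be scrapped.]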

“AIs with the persona of pliant females”: Ava in Alex Garland’s Ex Machina (2015)

“A really good and often quoted example is the recruiting algorithm that Amazon developed…it downgraded any CV with the word ‘woman’, ‘women’ or ‘gender’, as well as anybody who had been to a women’s college in the US.”

SL: Do you know the NeurIPS conference in Vancouver? It’s one of the leading machine learning conferences. So a lot of the underlying technology for AI systems will come from those kinds of conferences, and this year only thirteen percent of the papers are by women – which is not enough.

RA: And what effort is being made to promote women and discussion around gender? There are many steps you can take to get women at the table.

SL: I’ve seen workshops go up alongside the main conferences, and they changed the name recently. It used to be called NIPS [Neural Information Processing Systems], but there were so many problems they changed it to NeurIPS.

RA: Oh yeah, I heard that [laughs].

SL: So you have these prominent women in machine learning who are very active in conferences, but the pipeline is still quite male. Computer science departments have a big role to play in getting women in at undergraduate level, because even submissions to conferences – they’re from PhD students, master’s students… So at the root of this is universities and computer science departments maybe not doing enough, not taking it seriously enough or addressing the gender imbalance, which is getting worse, not better.

RA: About 30 years ago, around 35 percent of computer science graduates were women. The rate is now less than 20 percent. There’s been an absolute drop-off in the number of women going to universities to study these types of subjects. They’re becoming increasingly stereotyped as male. And then there’s a broader history around the development of computing and coding. Following the Second World War, women were originally in these roles – they were the first computer coders.

There’s a fantastic book by Mar Hicks called Programmed Inequality: How Britain Discarded Women Technologists and Lost Its Edge in Computing, that discusses this history specifically in relation to Britain. When it became obvious that artificial intelligence and computing was something of strategic value, there was a push to put men in these positions and siphon out women to more secretarial roles. It was about re-ascribing the narrative around these fields of knowledge as very much a male thing.

“When it became obvious that artificial intelligence and computing was something of strategic value, there was a push to put men in these positions and siphon out women to more secretarial roles.”

SL: Yes, totally. In society there are changes in discourse, in what’s acceptable to say and not say. That changes dramatically with events and the introduction of new legislation. You may introduce legislation around hiring people, but if you’re training AI assistants on pre-legislative data then you’re training them to ignore that new legislation. Like you said, you’re looking at historical data, so you’re bypassing and undoing a lot of the social progress that happened rapidly.

The stereotyping is such a difficult one to fix. I know from personal experience that it’s a very male culture and feels like an alien world for women coming into some universities. They’re in the minority, and even the kind of assignments that are set for them are very much focussed on what the blokes in class are assumed to enjoy – gaming or predicting soccer matches. The roots of AI are multi-disciplinary. When I first started working in the field we had backgrounds in music, psychology, philosophy, maths, English literature and we were all in a room, looking at these problems together.

When I left, it took a turn and became very much engineer-focussed. To get in you had to have a computer science or maths background. Competitions emerged, like who can solve this problem fastest? There was no critical thinking, no exercise in impact assessments, just do that thing faster, better, you know? I think to become smarter it has to go back to its multidisciplinary roots, and that includes people from diverse academic backgrounds. Computer scientists alone cannot recreate human intelligence, it’s not possible. It involves a load of people.

RA: I mean smart speakers – I think of Alexa and Siri – there’s a whole host of issues for me there. The way they’re designed as female, the way designing them as female intentionally tries to obscure their surveillance and recording abilities, the way it draws on female stereotypes to suggest they’re not something you need to worry about, that they’re there to assist and pacify rather than direct you.

The idea that they’re not going to have much of an impact on you because they’re only women, and that a female voice can be disembodied. You can literally put them in their place, in the home, and they will only speak when spoken to. And then there are deepfakes and the issues around revenge porn – it’s not something I’ve looked into, but from what I understand deepfakes can be used in awful ways against women.

“They’re there to assist and pacify rather than direct you.”: Joi in Denis Villeneuve’s Blade Runner 2049 (2017)

“Computer scientists alone cannot recreate human intelligence, it’s not possible. It involves a load of people.”

SL: I mean the term was, funnily enough, coined by a Reddit user who had generated porn and whose username was ‘deepfakes’. So it’s a huge issue for revenge porn but also in terms of what’s ahead, things like non-consensual pornography. There’s a potential for porn to be generated with someone’s identity, and then you’re into the future of tech and 3D and avatars and all of that.

So it’s a grim, murky world for women and, whether it’s virtual or not, it’s seriously damaging. Imagine a schoolgirl in a class and this kind of thing going around the classroom about her. Even if it’s not true, it’s incredibly abusive and dangerous. Society judges women much more harshly so they’re that much more vulnerable to fake images being created of them.

RA: There’s such a big debate about how law and policy can’t keep up with technological innovation, which I think in some ways is fair. I was at a conference recently here [South Africa] where we discussed South Africa’s policy responses to AI. People were saying we need agile laws that we can change overnight, and I replied that the law-making process here, as elsewhere, is about public participation.

It’s deliberately slow so we can involve as many people as possible and not make mistakes. I’ve actually just been elected as chair of the expert group on AI here in South Africa. Our task is to develop a national strategy around AI, which is wonderfully exciting and also really terrifying. Not just copying and pasting what’s going on elsewhere – which has been so much of the practice previously – but developing something that’s responding to the specific issues we have.

“Society judges women much more harshly so they’re that much more vulnerable to fake images being created of them.”

SL: It’s interesting what you say about agile law. It’s a shocking concept. We’re bad enough at creating good laws slowly, let alone agile ones. Agile methods in computer science have caused a lot of issues. Projects get something out there as quickly as possible, beat the competitors and worry about the problems later. Back in the old days you built your software and thoroughly tested it, because if you sent it out with an error you were stuffed. But now you can put things out and change things on the fly, so things have been rushed out without proper societal impact assessments.

I think governments are balancing the desire for tech jobs against regulation. It’s like competing on tax law: you lower your taxes, you get more jobs. Now it’s data. If you have less data protection, are tech companies more likely to move there? So I think it takes bravery and – because tech companies are transnational – it has to be a global endeavour. You can’t have an isolated, protected bunch who are developing software just by abusing the data rights of other countries.

It’s OK to say slow down, too. We don’t rush drugs to market that aren’t completely and properly tested. They don’t say, “We’ll try that on the population and if it kills half we’ll fix it.” But you have software that has huge effects on society and on how people think, and it’s been rushed out there without any testing. I would totally resist calls for agile law and just slow down technology. Test your systems, completely and thoroughly, before you use people as guinea pigs.

Feature originally published in HEROINE 13.

