A Word on Gendered Innovations – Artificial Intelligence and Machine Learning


Joy Buolamwini did not exist. At least not for the machines that were designed to recognize her face through artificial intelligence. While the machines did reasonably well with women of lighter skin tones, they failed to recognize the face of Buolamwini, a Ghanaian-American computer science researcher.



In her 2018 study “Gender Shades”, Joy Buolamwini tested how commercial facial-analysis algorithms from IBM, Microsoft, and Face++ classified pictures of a wide range of people – men and women, young and old, from different ethnic backgrounds. The results showed that all programs had higher success rates for people with lighter skin. The programs also worked better on male faces than on female ones – the most failures were counted for women with darker skin.
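
To make the method concrete, here is a minimal, purely illustrative Python sketch of such a disaggregated audit: instead of reporting one overall number, accuracy is computed separately for each intersectional subgroup. The data, labels, and scores below are invented placeholders, not the study's actual code or results:

```python
# Illustrative sketch of a disaggregated audit in the spirit of "Gender
# Shades": report accuracy per intersectional subgroup (skin tone x gender)
# rather than one overall score. All data below is made up.
from collections import defaultdict

# Each record: (image_id, skin_tone, gender, true_label, predicted_label)
results = [
    ("img01", "lighter", "male",   "male",   "male"),
    ("img02", "lighter", "female", "female", "female"),
    ("img03", "darker",  "male",   "male",   "male"),
    ("img04", "darker",  "female", "female", "male"),  # misclassified
    ("img05", "darker",  "female", "female", "male"),  # misclassified
]

correct = defaultdict(int)
total = defaultdict(int)
for _, skin_tone, gender, true_label, predicted in results:
    group = (skin_tone, gender)
    total[group] += 1
    correct[group] += int(predicted == true_label)

for group in sorted(total):
    accuracy = correct[group] / total[group]
    print(f"{group[0]:>7} {group[1]:>6}: {accuracy:.0%} of {total[group]} correct")
```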


Other examples: automated soap dispensers in public restrooms that only recognize white* hands, virtual assistants (e.g. Alexa) that understand the voices of younger people better than those of older people, or an algorithm that scores your creditworthiness for bank loans and systematically favors men. There are plenty of examples out there.


When we look at the relationship between new technologies, machine learning, and gender stereotypes, artificial intelligence is a particularly interesting focus, because these are systems that (may) encode discriminatory structures and reproduce or reinforce them automatically. It would be a mistake to think intelligent machines are “neutral” in their decisions, as Buolamwini puts it: “We risk losing all that we have gained through the civil rights and women's rights movements if we believe that machines could really be neutral or detached from our social norms.”


But why are machines and artificial intelligence biased with respect to gender, ethnicity, or age? An article in Wired argues that it has to do with the way these algorithms and products are designed. Did you know that, by one count, only about 12% of leading machine-learning researchers are women? When “smart technologies” are developed, the producers who design them (and the imagined consumers they design for) most likely do not represent a diverse and/or equal society. Often, the lack of diversity in big tech companies – and the lack of will to improve it – contributes to the discriminatory design of artificial intelligence. The result: products made by young white* men for young white* men.


This has a lot to do with unconscious gender bias: put simply, producers tend to imagine consumers as being similar to themselves. During the design process, the producer's identity (and therefore also their gender) shapes the product. Gender is thus inscribed into the final product, because it was designed for a specific imagined consumer – one pictured as similar to the producer. New technology (in this sense: the product) can never be neutral, because the design process behind it is influenced by human bias – products are made, designed, and sold in relation to gender norms.


If we consider the massive impact that smart technologies, AI, and automated algorithms will have on our future society, it is clear that the field can no longer neglect the unconscious biases that reproduce discriminatory social structures. For future justice, it is important to understand that intelligent machines do not merely reproduce normative decisions; they also hold the power to create new social realities. Technological innovation should not happen by a few for a few when it actually impacts everyone.


There are some practical ideas for a more diversified approach to the design process, as suggested by the Gendered Innovations project. For example:


(1) If we are able to design intelligent algorithms that make decisions by themselves, we can also create algorithms that are especially sensitive to discriminatory patterns. Integrating the latter into final products may help to identify and eliminate structures of reproduced discrimination (a minimal sketch of what such a check could look like follows this list).


(2) More diverse datasets help to avoid designing only for homogeneous groups of imagined consumers (the second sketch below shows a simple dataset audit). Part of this strategy is also to diversify research and design teams, bringing together different academic disciplines, ethnic and cultural backgrounds, and genders.


(3) Campaigning to increase understanding of and familiarity with “intersectional discrimination” contributes to a better, more diverse work environment. When power relations, discriminatory structures, sexism, and other forms of injustice and prejudice are publicly discussed, and standards for workplaces and production processes are introduced, we can expect a decrease in the reproduction of discriminatory patterns.
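
On point (1): one common way to make a system “sensitive to discriminatory patterns” is to compute a fairness metric over its decisions – for example, the gap in positive-decision rates between groups – and raise a warning when it gets too large. The following is a minimal sketch with invented numbers and a threshold chosen purely for illustration, not a complete fairness solution:

```python
# Minimal sketch of an automated bias check (point 1): compare a model's
# positive-decision rates (e.g. loan approvals) across groups and flag
# large gaps. All decisions below are invented illustration data.

def approval_rate(decisions):
    """Fraction of positive (approved) decisions."""
    return sum(decisions) / len(decisions)

# 1 = approved, 0 = rejected, grouped by a protected attribute
decisions_by_group = {
    "men":   [1, 1, 0, 1, 1, 1, 0, 1],
    "women": [1, 0, 0, 1, 0, 0, 1, 0],
}

rates = {group: approval_rate(d) for group, d in decisions_by_group.items()}
gap = max(rates.values()) - min(rates.values())

for group, rate in rates.items():
    print(f"{group}: {rate:.0%} approved")
print(f"gap between groups: {gap:.0%}")

if gap > 0.10:  # the threshold is a policy choice, set here for illustration
    print("WARNING: approval rates differ substantially between groups.")
```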
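
And on point (2): a dataset can be audited for diversity before any model is trained on it, simply by counting how the subgroups are represented. Again a purely hypothetical sketch:

```python
# Sketch of a simple dataset audit (point 2): count how often each
# demographic subgroup appears in the training data. Labels are invented.
from collections import Counter

# One demographic label per training sample (in practice: from annotations)
samples = (["lighter_male"] * 700 + ["lighter_female"] * 200
           + ["darker_male"] * 70 + ["darker_female"] * 30)

counts = Counter(samples)
n = len(samples)
for group, count in counts.most_common():
    print(f"{group:>15}: {count:4d} samples ({count / n:.0%})")

# A distribution as skewed as this one is a warning sign: the resulting
# model will most likely work best for the over-represented group.
```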


The ideal solution is very likely a combination of the steps mentioned above, and more. But it is clear that we have to act if we want to work towards a fair and just society in which everyone can benefit from new innovations.


Do you have any other ideas you’d add to the list above?




Further information and sources:




Credits:

  • drawing by pigwire
