
July 17, 2020

AI bias

Newsletter July 2020

The Black Lives Matter movement, revived by the tragic death of George Floyd, has spread all over the world. This tragedy has cast a harsh light on the discrimination that still undermines our societies. Technology, and artificial intelligence in particular, is not exempt from the self-examination we must carry out collectively. Designed by humans, AI tends to reproduce their prejudices. This raises many questions about how AI is designed, by whom, and how we can make sure it behaves ethically.

In 2018, Joy Buolamwini, a black researcher at MIT, was surprised to find that facial recognition software she was working with recognized her friends' faces but not her own. When she put on a white mask, the algorithm identified her immediately. The experiment prompted her to devote her thesis to the subject.

Video: https://www.youtube.com/embed/TWWsW1w-BVo

After testing facial recognition systems from IBM, Microsoft and Face++, the results were clear: all of them recognize men's faces better than women's and are more accurate on lighter skin than on darker skin. There is no shortage of examples of the discrimination this type of problem causes:

  • the Apple Card algorithm gave men higher credit limits than women with equivalent incomes;
  • software used by the U.S. justice system to assess recidivism risk estimates that risk to be much higher for African-Americans;
  • in 2016, Tay, a Microsoft AI launched on Twitter to learn from its exchanges with users, had to be hastily withdrawn after it started posting racist and negationist tweets.

Human, all too human

There is too often a tendency to think of AI as an entity that arose ex nihilo. Yet it only learns from the images and information it is given; in other words, it learns in the image of those who design it. It is therefore not surprising that it leads to racist and sexist decisions when the algorithms behind it are overwhelmingly developed by white men. "Algorithms are not endowed with consciousness, but built by people," says Mouhamadou Moustapha Cissé, head of Africa's first artificial intelligence research centre. "It is, in part, the biases of these people that affect the way the algorithms work."

Intrinsic defects

In an op-ed, Nelly Chatue-Diop and Soumia Malinbaum, respectively Chief Data Officer at Betclic Group and Vice President of Business Development at Keyrus, point out three more technical weaknesses in the way these algorithms are designed:

  1. The algorithms themselves: most application developers do not use learning algorithms they have built themselves from scratch. These generic, freely available algorithms were mostly developed by scientists whose priority is to validate the accuracy of their mathematical model and to avoid overfitting, not to ensure that the models created with them generalize fairly. So not only was none of these algorithms designed with the explicit objective of non-discrimination, but they were developed by a singularly homogeneous population.
  2. The learning data: since the model is built from data, the datasets used to train and validate it must be not only consistent but also representative of the diversity of the situations and cases it will have to handle.
  3. The performance criteria: a model is considered successful when, after the learning phase, it correctly handles a high percentage of new cases. Pre-production tests are generally carried out on global samples that do not reveal whether the model discriminates on gender, origin or other criteria; they focus on assessing the model's overall accuracy and on judging whether the error rate is acceptable, and no other criteria are taken into account (see the sketch after this list).
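To make the third point concrete, here is a minimal sketch in Python, using entirely hypothetical labels and group names, of what a disaggregated evaluation looks like: the same predictions are scored once globally and once per group, and the global accuracy hides a large gap between the two groups.

```python
# Minimal sketch: aggregate accuracy vs. per-group accuracy.
# The data, labels and group names are hypothetical; the point is that a
# single global error rate can hide large gaps between demographic groups.
from collections import defaultdict

# (true_label, predicted_label, group) triples -- hypothetical evaluation results
predictions = [
    (1, 1, "group_a"), (0, 0, "group_a"), (1, 1, "group_a"), (0, 0, "group_a"),
    (1, 0, "group_b"), (0, 0, "group_b"), (1, 1, "group_b"), (1, 0, "group_b"),
]

def accuracy(rows):
    """Fraction of rows where the prediction matches the true label."""
    return sum(1 for y, y_hat, _ in rows if y == y_hat) / len(rows)

# Aggregate accuracy looks acceptable on its own...
print(f"overall accuracy: {accuracy(predictions):.2f}")   # 0.75

# ...but breaking it down by group reveals the disparity.
by_group = defaultdict(list)
for row in predictions:
    by_group[row[2]].append(row)

for group, rows in sorted(by_group.items()):
    print(f"{group}: accuracy {accuracy(rows):.2f} on {len(rows)} samples")
    # group_a: 1.00, group_b: 0.50
```

The same grouping logic can be applied upstream to the training data itself (point 2), to check whether each group is represented at all before the model is ever trained.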

Awareness

In recent years, there has been a growing awareness of these issues:

  • In a report published in December 2017, the CNIL recommended training all the actors in the algorithmic chain in ethics, making algorithmic systems transparent and understandable, and setting up a national ethics platform.
  • The ethical aspect also featured prominently in Cédric Villani's "AI for humanity" report, which was submitted to Emmanuel Macron in March 2018.
  • At the end of May, the Défenseur des droits (France's rights ombudsman) and the CNIL called for collective mobilisation on the subject.
  • In June, the Equality Lab unveiled around thirty very concrete solutions to combat the discriminatory abuses of AI, particularly with regard to women.

Yes we can

Change for the better is possible. "Having noticed stigmatising differences in Google Translate's output for some languages, Google made sure to generate gendered translations for all sentences where the original model offered only a masculine version. Indicators were put in place to measure progress, and this bias has been reduced by more than 90% in translations from Hungarian, Finnish, Persian and Turkish into English. The new model is all the more relevant and now understands that in these languages 'doctor' and 'engineer' can be feminine!" note a delighted Nelly Chatue-Diop and Soumia Malinbaum.

This is more than ever a major issue, as AI is making its way into every corner of our daily lives. It is up to us to make it in the image of the society we want to build.

isahit: ethics at its heart

Working almost exclusively with young black women from developing countries, isahit contributes fully to the development of a more ethical AI, one made in the image of society as a whole and respectful of all its diversity. By annotating data and supervising machine learning algorithms for our clients, these women work every day to avoid the biases those algorithms could generate. The challenge is not only ethical: diversity is also one of the essential keys to the accuracy and efficiency of AI. Very concretely:

  • in the case of facial recognition, a person of a certain ethnicity will find it easier to recognize another person of the same ethnicity,
  • in the food sector, an African will be able to identify more finely the dishes or ingredients specific to African cuisine,
  • the same applies to waste analysis.

It is through the diversity of its community that isahit can guarantee its clients accurate and valid, specific and contextualized data.

"At isahit, we also put a lot of emphasis on the formation of our community. Our goal is to help these women achieve their professional goals while giving them the keys to take full part in the ongoing digital revolution.

Isabelle Mashola, CEO of isahit


