Delphine Pouponneau

Published on 20 April 2020

1st international charter for inclusive AI: Orange is committed

The international charter for inclusive AI commits signatory companies to combating bias and stereotypes by promoting diversity and responsible practices in the development of artificial intelligence. Led by Arborus, the charter counts Orange as its first signatory.

Artificial intelligence is gradually entering more domains, and we've made it a development priority in our Engage 2025 strategic plan. We have also just signed the first international charter for inclusive AI through the Arborus Endowment Fund, of which we're a founding member. What are the charter's key objectives? Delphine Pouponneau, Head of Diversity and Inclusion at Orange, tells us more.



What do you mean by “inclusive artificial intelligence”?

Today, artificial intelligence algorithms power a host of online services: from video or book recommendations to job searches and even dating sites. They're also used in particularly sensitive sectors such as health, security, education and insurance. However, algorithms can behave like us: depending on the data we feed them, they can reproduce societal biases and stereotypes. The risk is that the power of artificial intelligence may magnify the inequalities that already exist. These biases stem from human intervention during the design phase, while collecting data, or when writing code. If the algorithms, as well as the data used to feed them, are designed, developed and managed only by Caucasian men, machines will adopt and compound a particular, and therefore biased, vision of society. That's why we need to be so careful when it comes to identifying and mitigating these biases. Allowing a larger share of the population to understand and participate in the design of algorithms is therefore key.
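To make this concrete, bias of the kind described above can be measured directly from a system's decisions. The sketch below is a minimal illustration, not part of the charter or of Orange's tooling: it computes the gap in selection rates between demographic groups (often called the demographic parity gap) for a hypothetical screening model, using made-up data.

```python
from collections import defaultdict

def selection_rates(decisions):
    """Per-group selection rates from (group, selected) pairs."""
    totals = defaultdict(int)
    chosen = defaultdict(int)
    for group, selected in decisions:
        totals[group] += 1
        if selected:
            chosen[group] += 1
    return {g: chosen[g] / totals[g] for g in totals}

def demographic_parity_gap(decisions):
    """Largest difference in selection rate between any two groups."""
    rates = selection_rates(decisions)
    return max(rates.values()) - min(rates.values())

# Hypothetical screening outcomes: 8 of 10 men approved vs 4 of 10 women.
decisions = ([("M", 1)] * 8 + [("M", 0)] * 2 +
             [("F", 1)] * 4 + [("F", 0)] * 6)
print(demographic_parity_gap(decisions))  # a gap of 0.4, worth investigating
```

A gap near zero does not prove a system is fair, but a large gap like this one is exactly the kind of signal a diverse review team should be equipped to detect and question.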


What is the purpose of the charter for inclusive AI?

By signing this charter for inclusive AI, companies are committing to diversity, especially within teams specialising in AI. Signatories also have to ensure that all of their stakeholders act responsibly in identifying and controlling discriminatory biases. Let’s not forget that artificial intelligence is a great lever for development. It also carries with it a real opportunity to reduce inequalities. For individuals, it can open up a whole range of career opportunities, and no group, especially women, should be excluded.


Can you explain the motivating factor for Orange in this project?

We’re involved in the project for several reasons. First, we’ve always been one of the leading French players in artificial intelligence: we’re gradually incorporating it into our customer relations and finance functions, and researching its potential for network supervision. Soon, we’ll be able to carry out predictive maintenance to avoid incidents. Artificial intelligence is therefore one of the career paths we want to develop as a priority at Orange, and we’re planning to recruit heavily in this area in the coming years. We’ll make sure we promote in-house training, retraining employees who wish to move into these roles, and ensure that these positions offer equal opportunities to women; in fact, we’re targeting 30% women in technical roles by 2025. Our experts also work within the European Commission’s High-Level Expert Group on Artificial Intelligence (AI HLEG) to recommend future policies and respond to ethical, legal and societal issues related to AI.


How were the 7 commitments within the charter defined (see the box below)? 

The commitments made within the charter respond to three major challenges. First, to promote the place of women in AI roles: today, only 12% of AI researchers worldwide are women, according to a study by the Canadian company Element AI. Next, to ensure that every link in the chain has the means to detect and signal potential biases, through a training programme that reaches beyond technical professions to include product managers, HR, marketing managers and others. We can only combat bias effectively if we raise awareness among all stakeholders and, beyond them, society as a whole. Finally, to set up a continuous improvement process for the quality of the data used, in order to assess and react to all forms of discrimination.
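A continuous improvement process for data quality can include automated checks that anyone in the chain can run. As one hedged example, not prescribed by the charter, the four-fifths rule of thumb used in US employment-discrimination analysis flags cases where one group's selection rate falls below 80% of another's:

```python
def disparate_impact_ratio(rate_protected, rate_reference):
    """Ratio of the protected group's selection rate to the reference group's."""
    return rate_protected / rate_reference

def flag_for_review(rate_protected, rate_reference, threshold=0.8):
    """Flag results whose ratio falls below the common four-fifths rule of thumb."""
    return disparate_impact_ratio(rate_protected, rate_reference) < threshold

# Hypothetical audit: women selected at 35%, men at 70% -> ratio 0.5, flagged.
print(flag_for_review(0.35, 0.70))  # True
```

Such a check is deliberately simple: its value is less in the arithmetic than in making bias review a routine, repeatable step rather than a one-off audit.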


Why is it important for Orange to establish a trusted relationship between AI (and more generally technology) and individuals?

At the end of 2019 we defined our new purpose: to be “a trusted partner who gives everyone the keys to a responsible digital world”. Among all digital technologies, AI raises the most questions. According to an Ifop study, 80% of French people acknowledge that algorithms play a huge role in their lives, but more than half admit they don’t know exactly what algorithms are. New technologies have great potential for enabling progress, but we must provide guarantees as to their use and collectively establish rules of conduct. We also have to train people and raise awareness on a massive scale; that’s how these technologies will gain acceptance in society. These actions are part of our global and proactive strategy to promote diversity and ensure a fair and balanced representation of society. I’ll take this opportunity to highlight an initiative that Orange supports, launched by the Institut Montaigne, the OpenClassrooms platform and the Abeona Foundation: Objective AI, which aims to train at least 1% of French people in algorithmic biases.


How will this charter influence the design of AI systems without a European or international regulatory framework?

A legislative framework already exists in Europe for personal data protection: the GDPR. Do we need further legislation? We should be careful not to stifle the sector’s momentum, especially in the face of Chinese and American players. So I believe it’s better to provide incentives and educate people rather than coerce them. Promoting certifications would enable continuous improvement initiatives to be implemented and any progress to be verified by an independent external body. Certifications or labels would also provide new guarantees and so strengthen people’s trust in these new technologies. We’re also working with Arborus to design a GEEIS (Gender Equality European & International Standard) AI label, based on the existing model that measures gender equality in the workplace.



7 commitments from companies signing the international charter for inclusive AI 

Signatories undertake to:

  1. Promote diversity in their teams working on AI. 
  2. Find ways to assess and respond to all forms of discrimination that could result from biased or stereotypical data.
  3. Ensure data quality to guarantee the fairest possible systems: data that is unified, consistent, verified, traceable and exploitable.
  4. Design training to raise awareness among designers, developers and all actors involved in AI, about the stereotypes and biases that can generate discrimination.
  5. Raise awareness among anyone using AI solutions (HR, finance, customer relations, marketing, etc.) about the risks of bias and stereotypes that can generate discrimination, and include checkpoints and iterative evaluation within specifications. 
  6. Ensure suppliers are chosen and evaluated on an ongoing basis so that the entire AI value chain is non-discriminatory. 
  7. Introduce controls for AI solutions and continually adapt processes. 


Under the direction of Cédric O, French Secretary of State for Digital Affairs attached to the Minister of the Economy and Finance and Minister of Public Action and Accounts, with the support of Delphine O, Ambassador and General Secretary for the UN World Conference on Women and Nicole Ameline, Chair of the UN CEDAW Committee.